Disclaimer / TL;DR: This got out of hand. Too much time on my hands that I didn't have, today. Anyway.
The gist is: yes, there's crap in "self-help", but it's the current name for "the practical, applied branch of philosophy", i.e. methods and principles to live well, to cope, to grow, to grieve, to become. A rather ancient human tradition... There's no other name for that, as we speak. I don't think it's helpful or relevant to blanket-judge an entire domain in such strong terms.
____
[Long version]
Wait, what?
What an impeccable way to grossly reduce an entire aspect of life —becoming, getting 'better', knowing oneself— to just about the shallowest, most commercial tip of the iceberg.
But then, what shall we call the "better methods"? See, there's a tension in vocabulary here that I'm not sure one perceives when criticizing "self-help" (been there myself, before I learned better).
Philosophy at the turn of the 20th century became a purely abstract object of academic study (doing away almost completely with the millennial tradition of "philosophy as life recipes, simple practices and principles to live well and better cope with things"). The real-world / "applied" branch of philosophy has since been excised from the confines of universities and professors, and has been termed "self-help". Most people no longer know (it was forgotten only recently, a century ago) that philosophy had forever been practical first and foremost, theoretical maybe as a distant secondary / academic concern; also that it was actually taught and practiced by everyday people (life was harsher, and admittedly required a little bit more psycho-maintenance given the brutality of both nature and men). Montaigne, things like that. But we somehow took offense at the apparent "simplicity" or "narrow-mindedness" of simple, "common sense" maxims and principles — the 20th century was to be positively analytical to a fault, or it wouldn't be.
Self-help, what little actual widespread practice remains of ancestral philosophy today, is just a word. Just like putting spiritual or sci-fi terms on the same concepts doesn't in any way change their value (or lack thereof).
So self-help literally designates "the oldest, practical branch of philosophy" (as opposed to the theoretical studies taught for obtaining degrees: see the rift between a random student working to get some 3-year degree and get on to journalism or politics or whatever, versus someone— you, me —facing trials in life, searching for the deeper answers inside themselves...) Theory for the student seeking good grades, but a much more "physical" experience for all of us eventually.
Self-help as we find it today is quite literally the remnants of a battle-tested accumulation of thousands of years of learning to "deal with it" (in the very words of e.g. Ancient Stoics). If you read texts from 2,500 years ago or today's good flavor of the month, the similarities are striking — people remain people and that doesn't change at all in less than 10 or 100,000 years.
So if you mean that the good parts of "methods" should be called philosophy, I agree, but again the term has now long been confiscated by academia (and philosophy is not a science; it shouldn't be gated as one). Thus the term has become a turn-off for most people (like they perceive e.g. math: too abstract, analytical, boring, and absolutely not "educating" or "self-elevating" in any useful sense of the term unless it's for your job).
2,500 years since Pythagoras and Aristotle and here we are, by all accounts not much better at educating children and adults alike (just many more, that is a victory in economic terms). But I digress.
So we're left with "self-help". It's an umbrella word, an alley name for stores, who cares that there's poop in-between diamonds in there — the former's existence doesn't make the latter any less valuable. Actually, diamonds grow in poop at the end of the day — maybe some books are great precisely because the author was appalled like you today and me yesterday, and perhaps what stands between you and me today is just the reading of one such 'great' book, profound enough to change you like great philosophy does¹.
I mean, not all programming books and courses are great either, and yet... we doubled the developer population every N months for 70 years quite steadily... 'Perfect' can sometimes be the enemy of 'good', especially on hard problems like the general becoming of human beings.
The problem we face is that any 'general' account of 'how to live well' must go through so many fields (some sciences, some not really, some cultural...) that it's virtually impossible to find a good name without emphasizing one too much over the rest — psycho-something, philo-stuff, evolutionary biology (i.e. social theory of information aka genes and behaviors), etc.
I see your problem, but I don't see a solution — change the name and the iceberg will follow, like the xkcd on standards. Gate it behind a "scientific" framework and suddenly half your objects are AWOL, N/A, no can do. Great to publish as a scholar but you just lost 99% of the effectiveness generally — as Joseph Campbell showed us so eloquently, culture matters to the making of mature beings.
Besides, there's this truth: the only one who will ever really "see" you is you. "Looking inside" is an exercise that only ever has one subject-object in life, your own self, and no one else, not in nor out. "Self-help", or "self-whatever", is a rather straightforward way to convey the idea: only you can help yourself.
In many ways, the word is much closer to its object than ‘philosophy’ ever was as an ontology².
Note that I personally opted to say "self-growth" for myself, partly inspired by this very book, and to differentiate my general synthesis from the trash you decry; but you should know also that I chose a different term precisely to avoid having to defend the value of my "principles" (by having to explain association by name with otherwise trashy content). Do you see the conundrum here? What good I found is hard to share because of the stigma perpetuated by such views/comments as yours, because the source is somehow lesser. But the blanket judgment is no more valid than saying "all Americans are..." or "all women are..."
The real trick is to brucelee through life: “take the best, leave the rest”. If one only intends to learn from Shakespeare-Plato level of execution, a lot will be missed. Most notably everything that science will not or cannot consider as an object for good reasons, that might yet "work" for you. The dirty (I think wonderful) secret in philosophy as in medicine is that a good two thirds of positive results are placebo effects. The art is about becoming a master writer of such effects for oneself —which requires intricate knowledge of the subject. In that sense, even bad books teach you about yourself — that's when anything external ceases to be an excuse but becomes a welcome obstacle, a worthy trial, XP to gain if you will.
As for blue-pill / red-pill, I don't have the faintest idea how it's related to this topic. I haven't looked for years at what these people are saying, but the core take-away³ applied to a rather limited subset of philosophy; so sure, learning about RP/BP dynamics is part of self-growth⁴, but it's like hard drives vs computers, different levels of objects.
Sorry for a long, but hopefully informative, rambling / post.
____
[1]: If I had to pick one, I personally recommend Stephen Covey's famous "7 habits" — as one of the best philosophy manuscripts I've ever read, it's just as good as the best Hellenistic/Roman stuff.
[2]: Ontology = what linguists call a "namespace". (not the other meaning, related to metaphysical philo-stuff blabla). Notice that "philo sophia" (the love or pursuit of wisdom) is the general goal / process, whereas "self help" ("help yourself and the sky will help you") is already embedding a practical lesson in its very name: knowing the name is already enough to spread this one idea. It's very powerful, I think, sociologically.
[3]: That we should teach and learn "purple pill" or some higher-third way, not that you would hear it much but really it's the synthesis of this whole 'movement' IMHO.
[4]: My advice: skip RP/BP and move directly to evolutionary biology. The Moral Animal by R. Wright is a fantastic book.
Cool. You know a lot about this topic. I didn't connect self-help with philosophy turning super analytical in the early 1900s, that's interesting.
My opinion is more from just my experience with self-help, and others who were also really into it. Self-help, and people engaged in it, have an undercurrent of anxiety. But that's just my experiential knowledge of it (I would say the same for psychology and philosophy majors).
Also I wouldn't place Marcus Aurelius or Aristotle or some of my favorite philosophers under the umbrella of 'self-help'. If you really wanted to take it out there, then self-help is really everything. What is the purpose of anything but to improve the set standard with which you measure the experience of a human being, a society, or a civilization? You could extend the umbrella forever, and include psychology, etc., with diminishing definitiveness.
The main gripe I have with growth mindset is its rigidity. It is hard to think a positive thought while you are 'negative'. But also, it is hard to think a negative thought while you are 'positive'. To the mind, negative and positive sentiments do not matter because they are simply the responses that were generated by one's beliefs.
Now growth mindset proposes that you could magically make negative beliefs positive. This is poor psychological advice. There is a reason why those beliefs are negative. While direct examination of those beliefs (an interrogation of sorts) might not be the best way to re-form those beliefs, just plain forcing positivity on a negative belief is just as harmful as forcing negativity on a positive belief.
Forced hope is just as violent as forced hopelessness. It's one thing to encourage someone, and another to say that 'the reason for your failure is that you are not positive enough'. Or to say 'if you just change your mindset, you can succeed'. It really a) cheapens how difficult it is to change yourself and b) takes the person away from actually understanding themselves on a deeper level.
I really started changing when I stopped trying to change myself so much. Self-help (and basically zero Western philosophy) doesn't understand that.
You could take the route from modern clinical psychology (memory reconsolidation, etc.) and get there, or find a good martial arts master and learn Taoism.
I totally agree with everything you said. What you are rightfully criticizing goes under different names — "positive thought", "creative thought", "the secret". It's all the same woowoo indeed, you described it perfectly.
For some reason you haven't grasped or recognized the 'correct' mechanism or technique in that book (or I guess many others in the domain), but you got it otherwise so it's likely just a matter of cultural / personal fit — language that speaks to you. Martial arts certainly is one way, not fit for everyone either, though.
The actual 'magic' IME is the discovery of that "third-eye" skill i.e. 'controlled' introspection (to look into oneself). That's why I recommend Covey because he's got some of the best paragraphs on the topic, and this one thing is literally a game-changer for people who never explored for themselves the immensely vast space (and time) that exists, when trained, between "input" (things outside, things inside too) and "output" (a response, both external and internal). This quote: (emphasis mine)
> “Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom.”
It really is a superpower from a cognitive standpoint (it's actually trained in cognitive therapy too, immensely so, but sadly for now psychology is mostly focused, like medicine, on curing the ill, not improving the 'normal').
The Stoics saw that space —explicitly so, all of them. In the Tao too, I hear, though I have yet to read the Tao Te Ching myself.
It's a skill-with-no-name (it probably had/has a Greek word for it) that pervades cultures to this day, obviously. But it is the actual key to what you referred to as "magically make negative beliefs positive", except it's not magic, not by a long shot: it's work, it's hard on yourself, it requires effort and time, training, lots of failure. But there are only benefits and zero side effects so...
It's a trained skill. At first, you fail miserably, like anything. After about a year, it's become second nature (still de-trainable though, one must remain self-aware on occasion). At first it's conscious effort, spent willpower, it's tiring and you want to just stop, let it go. But it gets better. Effortless eventually, like riding a bike. It's certainly closer to Zen than any form of first-degree 'blind' emotional 'management' (I mean, all therapies and such help, but they merely pave the way to a deeper understanding and meaning that only one can create for oneself).
> I really started changing when I stopped trying to change myself so much.
This, to my ears, is when you began actually doing the work for yourself, by yourself, stopped believing in some shortcut-magic trick in any one book however classic or popular. Each 'real' skill is actually just the opener to a whole other level, bigger problem space. You became the creator of your own solutions, and that, if I may, is the Grail. Like literally I think it's what a lot of 'magical' metaphors (enlightenment, elevation, etc) quite physically or biologically refer to.
There is a fuckton of self-power to be released when you get serious about that path.
I'll tell you that in my anecdotal experience, most people run the other way (back to outside gratification / validation) upon discovering that "the enemy within" generally involves getting what feels like mentally naked (vulnerable, open, honest-to-Self) down to your deepest, oldest layers, and the battle is about healing the child in you left alone for so long, and welcoming him/her back into your life, in its right place.
I profoundly think that you heed those words, wherever they come from and in whatever shape or form however imperfect and partial, when you are ready to hear them, i.e. when you need them.
It's a survival thing; I don't know of any way to go so deep otherwise (it's not like "deep inside" has an "up" or "left"; you need a pulling force like proverbial 'gravity', gravitas, i.e. emotions that run deep, to make sense of that inner space). You eventually find your way through the maze, if it's on your path, I suppose.
But honestly, taking education seriously in that regard (I argue starting with children, as important as managing physical health; and to boot with most adults thinking of this as a health matter, like exercising or nutrition) would do a huge service to society. We can and should train people massively. IMH opinion, experience, research.
So yeah, thinking back on it, I agree self-help sucks. A century later it's nowhere near realizing the social benefits it could claim because it's been too busy giving itself a bad name.
‘ This, to my ears, is when you began actually doing the work for yourself, by yourself, stopped believing in some shortcut-magic trick in any one book however classic or popular.’
I would say that’s a wrong reading of what I said. But take what you will. It’s simple, I actually just stopped trying so damn hard. Nothing fancy or crazy.
The way you are talking about self-change makes it sound really toiling and gruesome, almost too serious. That is not what I really mean.
Just you know, enjoying daily life having good meals and such. Not taking myself too seriously. Reading less. Just going about. Indulging in laziness and entertainment.
So I don’t believe in some other version of self-help (like you are presuming). I just do whatever I feel like and say whatever feels right. I don’t have high goals, I am just living day-to-day with some aspirations.
I presumed wrong, sorry, it seems I was guessing or rather projecting.
I think I see. You come from a different place than me, surely. Our initial make-up I mean, whether innate or acquired.
The thing is, I speak of a "struggle" to really put down the "get-X-quick" approach versus actual compounded effort (tiny bits but long term).
People want to eat some psychedelic or read magical incantations and get-woke-quick, but the reality of becoming a well-rounded individual is closer to cooking a nice meal every day (you just need to find and learn recipes that work for you, I guess that's what you found eventually? This emotional clarity, alignment, simplicity even? That's super-zen, you should know!)
I was also speaking of another bigger and clearly 'darker' thing (as in "opaque", non-conscious, that can't be seen but rather felt). I hate the term but you'll read "quantum change" in the mainstream, the idea of a "core" or "essential" change of personality / behavior (same thing here). It happens to some "survivors" notably (of any kind, it's what the person experienced that matters). There's a before and an after — the meaning of life, what bothers you (or not), what (now) inspires you, etc. It's all so much clearer on the other side of pain.
This surely isn't zen and roses, although for me it took going down that dark path and back to really smell the roses (for what they are, and not what I wanted them to be). The terseness of comments and my will to pack too much probably blurred the line between these two experiences — daily routines versus one-off life-changing internal event and its aftermath.
Wonderful post. I have a number of related books I'd add to your list, but this one in particular got me started on a path to the kind of practical philosophy you and I both seek and extoll:
Well, that's a +1 for my list! Thank you for the recommendation. Love the title and back cover already — it seems that Mr Miller gets it in a way that would speak to me. I've seen the same objects multiple times now, they seem close to 'invariants' to me — compassion, attention, gratitude.
In particular this counter-intuitive idea that you should strive not to "do what you love" but rather adamantly to "love what you do". Understanding that relieved me of so, so much emotional burden, like dead weight I was carrying for who-knows-what reasons. It changes people in ways that make others say "maturity" or "wisdom" about it.
I think the point is to have the viewer at least try to read the long version. If they can't or won't, they can quickly scroll down.
TL;DRs at the top are just like attention grabbing headlines and opening paragraphs that often don't tell you the whole picture, either on purpose or simply because it's impossible to do so in a sentence/paragraph.
I can plead guilty to seeking attention (I figure 'commenting' is the first such step), but I thought the very length of that post is already by far its most salient aspect. I don't think a foreword changes anything about the big picture. ;-)
Also, on HN of all places, I resist doing that (long pieces); I must delete 4 out of 5 such write-ups before posting — self-restraint to keep the place neat. Once done, it must meet high enough standards for me to actually post, i.e. "would I learn something from this?". So there's structure, and an 'abstract' naturally emerges from that.
Now the "TL;DR" up top is really practical, I care not for sensationalism, not the slightest. I just want to inform people so they can quickly decide whether to dive or skip. I like that myself as a reader (call it anti-click-bait, honest-to-god synthesis).
Oh I didn't mean you specifically, just in general if TL;DRs were on top, more people would not understand or misunderstand what an article/post actually says.
I agree. But that's the author's goal. They spend a bunch of effort on the long form and they want it to be read.
From a reader perspective, especially in a mobile UI setting, it would be nice to know up front that the content is lengthy and there is a TL;DR at the bottom. It's not necessary to start with the TL;DR. I just find a lot of content I bail on because my initial interest level did not align with the time investment. The New Yorker style of journalism.
Executive Summary slides are my analogy. They always lead, raise a ton of questions, and the answers are forthcoming if you want to sit through the presentation. But don’t be that guy who starts drilling in with granular questions during this stage of a presentation
You're welcome and I'm glad my intention quite perfectly matches your expectations.
I'm pro 'reader's choice' indeed! An informed 'skip' button up top is really just good hospitality while someone reads my post, me thinks. Like those "get started"-quick pages when hesitating to RTFM.
It's actually much more than that, it's failing 99 times to get that one break; rinse and repeat 1,000 times to build something meaningful in 1~10 years.
Failure is the mother of all learning.
The more you fail, the more experienced you are (it's almost an equivalence relationship that grows in time with effort, attempts), the more you become likely to 'succeed' at that thing. I don't know of any other way.
There is one word missing from this post that contains a whole lot of the argument on the environment side: chaos.
Those who learn to thrive in chaos are most dangerous out there, for they have this contrarian impulse to rise when there's blood in the streets. If you recall a history lesson or two, that's how the most egregious powers are made —in wealth or might or legacy.
Right now, some of us are down —the situation is draining, energy-wise— whereas others feel invigorated, a drive to take action, make a move.
How we respond to chaos thus creates a big divide among us in times of major perturbation. There's this shift of potentials in the system, and kinetics go crazy, and some flee/freeze (seek security, refuge, maintaining the status quo, conservatively preserve what's left, etc) while others feel compelled to fight (to defend, protect, help; but also attack, kick in the disruptive nuts, solve problem, seek victory). For those, it could be the perfect storm to attempt a moonshot — I find there's a really unusual proportion of such stories among famous successful figures in virtually all fields, but I wonder if it's not survivorship bias + myth building + my own filters.
When in doubt it's almost always cognitive biases at play.
I think a more reasonable assessment would indicate that people who are well resourced (either through financial, social, or skill capital) are more likely to feel invigorated and therefore will build on the resources they already have by taking advantage of the chaos.
> "To fully compromise EPID, hackers would need to extract the hardware key used to encrypt the Chipset Key, which resides in Secure Key Storage (SKS)," explained Positive's Mark Ermolov.
> "However, this key is not platform-specific. A single key is used for an entire generation of Intel chipsets. And since the ROM vulnerability allows seizing control of code execution before the hardware key generation mechanism in the SKS is locked, and the ROM vulnerability cannot be fixed, we believe that extracting this key is only a matter of time.
> "When this happens, utter chaos will reign. Hardware IDs will be forged, digital content will be extracted, and data from encrypted hard disks will be decrypted."
And this formidable response as usual:
> Intel says folks should install the firmware-level mitigations, "maintain physical possession of their platform," and "adopt best security practices by installing updates as soon as they become available and being continually vigilant to detect and prevent intrusions and exploitations."
When will it stop? How deep run the flaws in Intel's platform? Is AMD equally exposed?
>Seems like a victory for privacy. Who wants to be tracked via hardware IDs?
Those are probably not the hardware ids you're thinking about. They're the hardware ids used in trusted computing (eg. remote attestation, TPM sealing), not the ones used for fingerprinting.
>People actually rely on proprietary hardware encryption? They should have learned the lesson when built-in SSD encryption turned out to be worthless.
This is a very naive take on what's at stake. With disk encryption, there's the risk of an evil maid attack (where the attacker replaces the bootloader with a malicious one and intercepts your key next time it boots). One way of preventing this is by using trusted computing to ensure that the encryption keys are only released when the system is at a known good state (ie. bootloader hasn't been tampered with). This applies to both proprietary solutions (bitlocker) and free ones (tpm-luks).
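To make the sealing idea concrete, here is a toy Python sketch of the mechanism (a simplified model, not a real TPM API; in practice tools like tpm2-tools or tpm-luks do this work inside the chip): each boot stage is "measured" into a PCR via a hash chain, and the disk key is sealed so it only unseals when the PCR matches the known-good value. A tampered bootloader produces a different PCR, so the key never comes out.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new_pcr = H(old_pcr || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def seal(secret: bytes, pcr: bytes) -> bytes:
    """Toy 'sealing': XOR the secret with a pad derived from the PCR state."""
    pad = hashlib.sha256(b"seal|" + pcr).digest()
    return bytes(a ^ b for a, b in zip(secret, pad))

unseal = seal  # XOR with the same pad is its own inverse

# Hypothetical boot stages for illustration.
GOOD_BOOTLOADER = b"bootloader v2.04 (signed)"
EVIL_BOOTLOADER = b"bootloader v2.04 (backdoored)"

# Measured boot: start from a zeroed PCR, extend with each stage's code.
pcr_good = extend(b"\x00" * 32, GOOD_BOOTLOADER)
pcr_evil = extend(b"\x00" * 32, EVIL_BOOTLOADER)

disk_key = hashlib.sha256(b"hypothetical LUKS master key").digest()
sealed_blob = seal(disk_key, pcr_good)  # sealed against the known-good state

print(unseal(sealed_blob, pcr_good) == disk_key)  # True: normal boot releases the key
print(unseal(sealed_blob, pcr_evil) == disk_key)  # False: tampered boot gets garbage
```

A real TPM seals against a selected set of PCRs and refuses to release the secret unless the live PCR values match the policy; the XOR construction above only mimics the "wrong state, wrong key" property, but that property is exactly what defeats the evil maid.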
Anybody who doesn't want their data copied will be shedding tears. Including anybody with private files.
You are more than welcome to decline to use DRM if you don't like it. Just don't expect people to give you copies of data they don't want shared by you.
> Anybody who doesn't want their data copied will be shedding tears.
"Their" data? What a ludicrous concept. It's analogous to saying people own numbers.
> Just don't expect people to give you copies of data they don't want shared by you.
I fully expect people to distribute "their" data far and wide to anybody who asks for it. That's what copyright is all about: giving people the illusion they're in control of what happens to that data.
The truth is only one copy of the data is needed. Once it's out there, there are no limits to what can be done with it.
I haven't posted them. The fact that data is private means I'm currently the only one in possession of it. It doesn't mean it's mine. Should they leak, the solution is to invalidate those credentials and get new ones, not to invoke copyright and try to get all copies off the internet.
I think the average company should and does trust the physical security of Amazon's datacenters more than their own. If I had a nickel for every unvetted janitor allowed to clean an office alone near an easily pickable hardware closet...
This is more about the chipset on the motherboard.
To backdoor this you need to saddle a chip or a connector onto the PCH chip and win the race to take over the bus.
Or, if you're Intel, you send a firmware update to modify the ME behaviour/state.
It would be fairly suspect in most cases, but if this was done at the factory, it would be hard to tell for most people.
What really matters is just how much of a target you might be for someone to take the effort to engage in what really amounts to industrial/corporate espionage.
How long does it take for a machine to be opened and booted up, and what sort of charade would be required to create the opportunity?
If someone flat-out stole your laptop, how long would it take for you to notice it's been replaced by a stand-in? Would someone have the opportunity to swap your real laptop back to you unnoticed?
And seriously, it doesn't need to be a nation state that does this, as all you need to be capable of physically is injecting digital pulses into the bus.
Crafting an exploit is where the skill comes in.
Some people are motivated just by the opportunity to stir a pot.
"Cloud" is merely the modern spin on "terminal in the office, mainframe at the HQ". We moved from terminals to local mini/microcomputers back then, and we will move from "cloud" to edge computing again. Notably, serverless and "installable web apps" are already a growing thing.
And no, Sun, "the network is the computer" will not come to pass during this cycle.
AWS makes up a massive fraction of the whole internet. That ship has absolutely not sailed. If your company doesn't own the mainframe, it doesn't control the hardware.
> And no, Sun, the network is the computer will not come to pass during this cycle.
... we are arguing about this via web browser. O365, Google docs, Dropbox, iCloud and company are common ways to work with documents, SaaS has been a wild success in business, and major players (no pun intended) are pushing game streaming. The network isn't the only computer, but for a lot of people it's the main one.
The historical mainframes were usually rented from IBM and the like. Less sunk investment, fewer reasons to stick with it.
>we are arguing about this via web browser
Which works equally well for remote AND local resources. Electron is popular for a reason.
All the centralized services - online Docs, Dropbox, Github etc., - are more subject to disruption and replacement than they would want you to believe.
SaaS has been a success in the same way "bring your own device" was a success - an end-run around the ossified, slow-moving and bureaucratic ICT department. It was nimble, fast and elastic; allowed for quick iteration and experimentation. Now that the SaaS is a big game, it's subject to the very same kind of disruption.
Take a look around, you'll see people using local Git repositories, and locally hosted web-based services to get shit done. Just to avoid the hassle of procurement & upkeep of big-name SaaS. Containers let you move the data & code to unmanaged iron where it's close to the user, instead of one big managed datacenter. SaaS and datacenter computing is not nimble anymore; local is nimble, and Google Stadia delivered the eulogy.
> When will it stop? How deep run the flaws in Intel's platform? Is AMD equally exposed?
We're seeing the tide turn from x86 to ARM pretty quick in both the datacenter and laptop markets. AMD should come through relatively unscathed as they're pretty diversified, but Intel is fucked. Graviton2 (Amazon's proprietary ARM stack) absolutely crushes x86 from a $/performance perspective, and there are plenty of other companies building 80+ core ARM chips.
Combined with the persistent rumors that Apple is shifting the Mac to ARM, and with Microsoft reviving ARM Windows, it's a pretty strong signal as to where the laptop / desktop market is headed too. x86 (and by extension Intel's platform) is definitely headed towards a more niche role in the computing landscape.
ARM is shit compared to x86 for single-threaded computing; you're right about that. But ARM is great at the types of hypervisor-driven cloud workloads that most applications fit into. Most cloud workloads are limited by network latency far more than single-threaded performance.
x86 will still exist for high-performance workloads, and companies will happily pay a premium where they need it like they already do with GPU instances. But I do think we'll see the vast majority of cloud usage shift to ARM over the next 5 years. RISC-V may come in and replace it some time after that, but not without major cost advantages over both ARM and x86.
I, for one, as a human being whose root ancestors¹ infected planet Earth² which eventually produced me³, am willing to use a definition that qualifies viruses as very much alive. ;-)
____
Tangentially related, I sympathize with Earth parasite Musk who wishes for us to infect another cell— err, I mean, planet. Shoosh, Earth, back to sleep, nothing to see here...
____
[1]: I wonder who's the chicken, here. Certainly, the virus is the egg, right? It's all a plan to engineer the actual masters of this universe?
[2]: A long, long time ago...
[3]: Well, the 10% of my weight with my DNA. I feel more like an ecosystem, I am a legion of species, a myriad DNAs, we are millions in those guts. "I" is not even the same "I" when travelling to a different biome: you are what you eat, quite literally, however you want to spin that idea.
The saddest truth is perhaps that functionally, we have the leaders we deserve.
It's a simple projection from voter space to elected space. As a former pol.sci. student/researcher, I've grown disillusioned by the fact that representative democracy, in our day and age, quite evidently no longer works as intended — the projection is flawed with regards to the conduct of affairs.
I don't know why:
- might be that voting is skewed by marketing (permanent campaign) money, government is too corrupt and establishment too entrenched, the "conspiracy hidden in plain sight / normalcy" idea;
- could be that democracy simply doesn't work as a system, or maybe not anymore, because the "average" is somehow not good enough — that was certainly not my belief, not for a decade, but I had to come to terms: it's a question worth asking, I assure you. Just considering the quality of information and the noise/signal ratio throws a huge wrench in the mechanics of any sane decision-making process...
- ... (many more hypotheses)
Regardless, this may help us craft the next-new-better political system, which might or might not be a regime of increased freedom compared to now (my guess: much more in some domains, much less in others, and not everyone will like the distribution).
I mean, look where we are. We had 15 years to get ready for such an event since the coronavirus outbreaks of the 2000s — tons of reports and words of good will. “Guys, we're not ready for a massive outbreak!” — everybody and their Bill Gates said so! But alas, the political fantasy kept going and it's all been just words. It wasn't that hard to stockpile masks and various supplies for when the day comes...
It's a sad day in history when we must face the consequences of our choices, but it's like a cycle... We get too complacent and something slams us, again and again and again (give it 3-4 generations apart to "forget", it's as if written language doesn't exist).
I'm jaded at the irony of our collective behavior, and sad for all those who will suffer and leave. Hopefully this will be a painful but learning experience that will last. I'm sad to report that it almost exclusively takes real pain, real hurt for societies to learn anything meaningfully in the face of history, of evolution. So, here we are, destiny.
> The saddest truth is perhaps that functionally, we have the leaders we deserve.
The public at large are terrible bosses. We're disengaged and bad at measuring what our employees (leaders) do, so style trumps substance and it's hard for us to tell if any of them have a measurable impact.
We have little ability to get to the truth of what's going on and are deliberately dissuaded from this with huge omnibus bills and whatnot. News has become about narratives instead of substance, with hardly anyone caring to look at the underlying evidence of anything, letting liars get away with so much.
We constantly raise standards with no way of enforcing them. Naturally, cheating is the only way to appear perfect, driving out very good but imperfect people in favor of massive liars who cheat. We're also not good at holding anyone in politics to account. Everyone knows that Jeffrey Epstein did not kill himself, but only some low level guards will be held to account for it most likely. The general public has no idea what to do and the leaders are too happy to sweep it under the rug.
So you're probably right, we have the leaders we deserve in some sense. The general public makes for terrible bosses, which is probably why too many people work under bad ones.
That's true and I'm certainly not advocating for dictatorship. There is a vast space of possible regimes and political systems in between total freedom and total slavery, though... ;-)
Look, I won't pretend to have "the" solution, because it doesn't exist until we actually implement something, then iterate our way into a more-or-less final form that works. That's how we went from absolute monarchy to representative democracy, but if you trace the roots of that process, you've got a solid century of experimentation and much, much thinking (pretty much all the great names you know from the 15th-16th-17th centuries were on some side of that equation, and informed both the discussion and the ultimate "object", the regime, that appeared).
So what will it be? Well I could write some anticipation sci-fi to suggest possibilities that are non-dystopian (well, not so much that it's unbearable) and come with a ton of benefits, and obviously some drawbacks / limits / undesirable side effects to be managed, adjusted.
The question is what the political will of the People actually wants— it's a question we each have to answer for ourselves —and what is possible, without breaking the machine, what will be desired, refused or simply ignored by 'elites' who, like it or not, have the fuel to make-or-break such 'evolution'. Now add an 'r' to the word and you increase the range of possibilities, but risk as well.
Here is my personal feeling: I think such events as COVID-19 will spark the kind of seed that eventually grows into fundamental political change; I however think we're still half a generation away from that — give time for Millennials (1982-1999) to rise to power, as they have the right mix of "values" (philosophy, circumstantial world experience / view, etc) to move beyond the systems that govern us today — and have for 70 ±10 years already!
I had a hunch post 9/11 that this century would see regimes evolve either towards more political freedom (towards more direct democracy) or towards a more authoritarian form of society (wherein social peace is obtained at the cost of some freedoms). It turns out that I was wrong, it wasn't either/or but both combined in a weird way: political freedom was gained but used to promote clowns to power (#all.over.the.world.the.2010s.are.insane.historically) instead of e.g. tackling massive, pressing or idealistic projects (people be lazy, rite). Whereas actual sovereignty (the one you learn in constitutional law, the real kind of political power) definitely shifted towards authoritarianism under various disguises — populism is one form, China's and Russia's "restoration" of authoritarian federal powers is another, and some self-proclaimed SJWs may not be far either, in their own "inclusive" way (kinda like dictatorships are all "Democratic Republics").
What this tells me (I've long studied that evolution but again it's really just my personal view) is that societies are simply not mature enough for the new "space of possibilities" (think positive variables, opportunities e.g. via tech; think also negative variables, constraints of climate and viruses for instance). It's just too much power, too early, so we basically just F it up like kids with a problem a tad too big for them.
But that's history. We do things and then we figure out how to deal with them, there's no reversing that causality. Hopefully we won't ravage ourselves and this Earth in the 20-60 years it'll take to adapt to this rather sudden paradigm change — think 1980-today, that's just 40 years ffs, and think of the mindset then and now... And you kinda need "natives" of an era to meaningfully internalize its reality, process the whole damn thing in "system 1" like human beings do, and eventually sometime between 20 and 80 figure out ways to make life better. Rinse and repeat with their newborn grand-children to solve the new thing that's arrived by then.
As for what that will actually look like, your guess is as good as mine. I'm partial to freedom personally, but I think it's worth choosing your battles, too. That's when it becomes political, and thus where we stop: a good regime doesn't favor "opinions" but rather the expression thereof, in a healthy and productive manner (able to reach decisions; ideally promoting some degree of consensus, kept in check by some other legitimated power, etc).
Just know that, when we want to, it's relatively doable to write a rather good constitution (and we have, many times throughout the last century). The real tricky issue is not to technically implement the ideas, the hard part is what ideas, what system you actually want to create. Again, many possibilities, some of which were abandoned by history but are now possible because modern tech (notably complex voting systems that yield much better representation of opinions and hierarchies in complex, multi-dimensional systems, e.g. to distribute might and resources between cities, or agencies, or any such module. There's about 1 century of great science that we're barely using for the benefit of the public, troves of innovation in the political realm. Just so much apathy for change in those who man these offices).
(It's honestly a fascinating topic of inquiry if you like systems, puzzles, cybernetics (different name for systems theory), all things mechanical in nature that must work with a "real-world" chaotic human environment.)
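To make the "complex voting systems" aside concrete: instant-runoff voting is one of the simplest examples of a method that captures more of voters' preference hierarchies than plain plurality. A toy sketch (deliberately simplified — no tie-breaking rules, which real electoral law spends pages on):

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff voting, toy version: repeatedly eliminate the
    candidate with the fewest first-choice votes until one candidate
    holds a strict majority of remaining first choices."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        winner, votes = tally.most_common(1)[0]
        if votes * 2 > total:           # strict majority reached
            return winner
        loser = min(tally, key=tally.get)
        # Strike the loser from every ballot; next preferences move up.
        ballots = [[c for c in b if c != loser] for b in ballots]

# Plurality would elect A (4 first-choice votes out of 9),
# but a majority actually prefers B once C is eliminated:
ballots = [["A", "B"]] * 4 + [["B", "A"]] * 3 + [["C", "B"]] * 2
print(instant_runoff(ballots))  # → B
```

The point isn't this particular method (it has well-known pathologies of its own); it's that a few lines already express strictly more voter information than first-past-the-post, and the literature goes far deeper still.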
Agreed, it’s amazing the range of guests Joe Rogan has on his show. Admittedly some are more witch doctors than actual doctors but his recent conversation with Michael Osterholm was the real deal.
> Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. [...]
> In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening.
So, there you have it: AI not "either/or" humans, but both, in conjunction, as a composition of the best of both worlds.
At the very least, that's how civilization will massively and intimately introduce true assistant AI.
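That human+AI conjunction — essentially the UK double-reading protocol the paper simulates, with the AI as one of the readers — can be sketched in a few lines. The reader functions and thresholds below are hypothetical, purely for illustration, not the study's actual protocol:

```python
# Toy sketch of double reading with arbitration: two independent reads,
# and a third opinion only when they disagree.
def double_read(first_reader, second_reader, arbiter, case):
    a = first_reader(case)
    b = second_reader(case)
    if a == b:
        return a           # readers agree: done, no arbitration needed
    return arbiter(case)   # disagreement: arbiter decides

# Hypothetical readers flagging a case as suspicious above a score threshold.
human  = lambda c: c["score_h"] > 0.5
ai     = lambda c: c["score_ai"] > 0.5
senior = lambda c: c["score_h"] > 0.4   # arbiter, slightly more conservative

print(double_read(human, ai, senior, {"score_h": 0.6, "score_ai": 0.7}))   # True (agreement)
print(double_read(human, ai, senior, {"score_h": 0.45, "score_ai": 0.8}))  # True (via arbitration)
```

The workload reduction the paper reports falls out of the structure itself: the second human reader is only needed on the (small) fraction of cases where the first read and the AI disagree.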
It's also somewhat counter-intuitive that the most specialized tasks are the low-hanging fruit; i.e. that what is "difficult" to us, the culmination of years of training and experience for humans (e.g. how to read a medical scan), may be, given its natural advantages (like speed and parallelism), "easy" to the machine.
That space (where machine expertise is cheaper than human) roughly maps to the immense value attributed to the rise of industrial-age narrow AI; therein lies not a way to replace humans — we never did that in history, merely destroyed jobs to create ever more — but rather to augment ourselves once more to whole new levels of performance.
Anything more than this is AGI-level, science-fiction so far — and there's not even a shred of evidence that it's theoretically a sure thing, possible in the first place. Which is not to say that AI safety research isn't extremely important even for the narrow kind (manipulation comes to mind), but we shouldn't go as far as to bet future economic growth on its existence. Like fusion or interstellar travel, we just don't know. Yet, and for the foreseeable future, because scale.
Exactly this. This is where I see AI possibly going: to be a complementary tool or second pair of eyes that speeds up the work for professionals rather than replacing them. I also see this research as a very positive step forward for using AI for good, and especially for bringing highly accurate results that can be used as an aid by health professionals.
However, given that this research used a deep learning (DL) based AI system in the medical industry, there are still open questions around such a system explaining its internal decision process for the sake of transparency — something that will almost certainly be ignored by other news reporting sites, which will focus only on the accuracy. DL-based AI systems will remain a concern for both patients and clinicians, and I would expect explainability to be a focus point in the future, despite the welcome (and still very interesting) results.
Other than the transparency issues behind the AI system, I'd say this is a great start into the new decade for AI.
Agreed. The ability of someone (or an AI) to explain their decision-making process is critical in determining whether such a decision has been adequately thought out or not. If a PhD must go through a viva, surely it is also incumbent on anybody pushing "AI" to be able to "survive" such a viva. Otherwise, we might as well just go back to the days of reading entrails, flipping coins, etc.
[edit: typo on viva]
Note that the system does produce localization. "In addition to producing a classification decision for the entire case, the AI system was designed to highlight specific areas of suspicion for malignancy."
How many years did centaurs reign supreme over pure AI in chess? 5-10 maybe? This "both" stuff is just a temporary stop on the way to meat obsolescence.
Agreed. At some point, doctors will be a completely redundant step in analyzing these scans. Even before then, the AI will reduce the amount of labor needed and partially commoditize some medical professionals.
The only issue is that humans don't seem to do well at jobs in which another agent is at least plausibly reliable. The Tesla autopilot is an example of that, we tend to disconnect pretty quickly.
Another thing I find interesting is that Google was able to train a neural network on retinas that can reliably distinguish sex based on retinal image alone... something ophthalmologists basically can't do. So not only are these systems approaching human capability in tasks we can do, they can do things we can't. As medical data becomes more freely flowing (presumably) over the next couple of decades, I think we'll find that 'AI' can become even more reliable.
I think the machine's advantage can be summarized as "good at aggregating weak signals". Humans excel at analyzing complex signals, but basically can't use signals weaker than a certain point. Machines have no trouble with weak signals.
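This intuition has a classic formalization (the Condorcet jury theorem): many independent, barely-better-than-chance signals, combined by simple majority vote, can beat a single fairly strong one. A toy computation, assuming independence (which real-world features rarely have — correlation erodes the effect quickly):

```python
from math import comb

def majority_accuracy(n, p):
    """P(majority of n independent signals is correct),
    each signal being right with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# 101 weak signals at 55% each, aggregated, beat one 75%-accurate signal:
print(f"ensemble of weak: {majority_accuracy(101, 0.55):.3f}  vs  single strong: 0.750")
```

Under the independence assumption the ensemble lands around 0.84 — and extracting value from hundreds of individually-near-useless features is precisely what ML models do all day, and humans can't.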
Here we have a strong move towards what is possibly the next phase in the seemingly eternal tension between concentration/core convergence of resources versus distribution/meshing.
The last swing was strongly in favor of concentration, with a new word: "datacenters" became "clouds", though the distinction seems more marketing than ontology — at a stretch, more CS (abstraction) than actual specs.
Now we're seeing a move back towards "on-prem", "self-hosted", "distributed"; towards the "edge" as we call it now, away from whatever passes for a "core" (the mean is always somewhere in the middle, but as with any pendulum, we probably cross it too fast to stop there given the momentum; such is the way of cyclic phenomena driven and bound by natural objects such as human beings, or computers).
This ping-pong goes back to the first Turing machine and what came immediately thereafter; we saw big incarnations of each extreme which became popular names, iconic hallmarks of their era — the "mainframe", the "micro computer", the "super computer", the "handheld". Cue IoT (the extreme form of distribution, as far as we can conceive it), or holistic notions of a "noosphere" (an information "biome" so to speak: the whole of computers seen as an ecosystem, which you may describe as a "species" like we would gases, biological beings or... technological things. It might seem crazy, but then again the internet displays some very biological properties).
The unambiguous conclusion one should draw, after observing this cycle first-hand for at least 1.5-ish periods (when you realize it's a cycle), is that we need both to make a world, and probably many variations thereof; and that we will "refresh" all of it, probably upon each half-cycle (these are, what, 30 years approx? So every 15 years give or take — about what it took from "the web" and cell phones to reach the smartphone era; or from then to now, the next few years, 2023-ish, as we cross into the next stage of the pendulum story?)
Lots of things are this way in technology, cycles from one extreme to the next, ultimately impossible to fix in a "happy medium" because there's no such thing; rather there are "happy trends" which fit their time, and as many such "trends" as there are "times" — how we moved from mainframes+terminals to the almost equally capable server-client paradigm will be a vastly different story than the move from clouds to whatever's next.
I think HPE is poised to win huge rewards by grabbing this market first, and hopefully their execution is good enough.
The gist is: yes, there's crap in "self-help", but it's the current name for "the practical, applied branch of philosophy", i.e. methods and principles to live well, to cope, to grow, to grieve, to become. A rather ancient human tradition... There's no other name for that, as we speak. I don't think it's helpful nor relevant to blanket-judge an entire domain in such strong terms.
____
[Long version]
Wait, what?
What an impeccable way to grossly reduce an entire aspect of life —becoming, getting 'better', knowing oneself— to just about the shallowest, most commercial tip of the iceberg.
But then, how shall we call the "better methods"? See, there's a tension in vocabulary here that I'm not sure one perceives when criticizing "self-help" (been there myself, before I learned better).
Philosophy at the turn of the 20th century became a purely abstract object of academic study (doing away almost completely with the millennial tradition of "philosophy as life recipes, simple practices and principles to live well and better cope with things"). The real-world / "applied" branch of philosophy has now been excised from the confines of universities and professors, and has been termed "self-help". Most people no longer know (it was forgotten only recently, about a century ago) that philosophy had forever been practical first and foremost, theoretical maybe as a distant secondary / academic concern; also that it was actually taught and practiced by everyday people (life was harsher, and admittedly required a little bit more psycho-maintenance given the brutality of both nature and men). Montaigne, things like that. But we somehow took offense at the apparent "simplicity" or "narrow-mindedness" of simple, "common sense" maxims and principles — the 20th century was to be positively analytical to a fault, or it wouldn't be.
Self-help, what little actual widespread practice remains of ancestral philosophy today, is just a word. Just like putting spiritual or sci-fi terms on the same concepts doesn't in any way change their value (or lack thereof).
So self-help literally designates "the oldest, practical branch of philosophy" (as opposed to the theoretical studies taught for obtaining degrees: see the rift between a random student working to get some 3-year degree and move on to journalism or politics or whatever, versus someone — you, me — facing trials in life, searching for the deeper answers inside themselves...) Theory for the student seeking good grades, but a much more "physical" experience for all of us eventually.
Self-help as we find it today is quite literally the remnants of a battle-tested accumulation of thousands of years of learning to "deal with it" (in the very words of e.g. Ancient Stoics). If you read texts from 2,500 years ago or today's good flavor of the month, the similarities are striking — people remain people and that doesn't change at all in less than 10 or 100,000 years.
So if you mean that the good parts of "methods" should be called philosophy, I agree — but again, the term has long since been confiscated by academia (and given that philosophy is not science, it shouldn't be gated as such). Thus the term has become a turn-off for most people (much like they perceive e.g. math: too abstract, analytical, boring, and absolutely not "educating" or "self-elevating" in any useful sense of the term unless it's for your job).
2,500 years since Pythagoras and Aristotle and here we are, by all accounts not much better at educating children and adults alike (just many more, that is a victory in economic terms). But I digress.
So we're left with "self-help". It's an umbrella word, an alley name for stores, who cares that there's poop in-between diamonds in there — the former's existence doesn't make the latter any less valuable. Actually, diamonds grow in poop at the end of the day — maybe some books are great precisely because the author was appalled like you today and me yesterday, and perhaps what stands between you and me today is just the read of one such 'great' book, profound enough to change you like great philosophy does¹.
I mean, not all programming books and courses are great either, and yet... we doubled the developer population every N months for 70 years quite steadily... 'Perfect' can sometimes be the enemy of 'good', especially on hard problems like the general becoming of human beings.
The problem we face is that any 'general' account of 'how to live well' must go through so many fields (some sciences, some not really, some cultural...) that it's virtually impossible to find a good name without emphasizing one too much over the rest — psycho-something, philo-stuff, evolutionary biology (i.e. social theory of information aka genes and behaviors), etc.
I see your problem, but I don't see a solution — change the name and the iceberg will follow, like the xkcd on standards. Gate it behind a "scientific" framework and suddenly half your objects are AWOL, N/A, no can do. Great to publish as a scholar but you just lost 99% of the effectiveness generally — as Joseph Campbell showed us so eloquently, culture matters to the making of mature beings.
Besides, there's this truth: the only one who will ever really "see" you is you. "Looking inside" is an exercise that only ever has one subject-object in life, your own self, and no one else, not in nor out. "Self-help", or "self-whatever", is a rather straightforward way to convey the idea: only you can help yourself.
In many ways, the word is much closer to its object than ‘philosophy’ ever was as an ontology².
Note that I personally opted to say "self-growth" for myself, partly inspired by this very book, and to differentiate my general synthesis from the trash you decry; but you should know also that I chose a different term precisely to avoid having to defend the value of my "principles" (by having to explain association by name with otherwise trashy content). Do you see the conundrum here? What good I found is hard to share because of the stigma perpetuated by such views/comments as yours, because the source is somehow lesser. But the blanket judgment is no more valid than saying "all Americans are..." or "all women are..."
The real trick is to brucelee through life: “take the best, leave the rest”. If one only intends to learn from Shakespeare-Plato level of execution, a lot will be missed. Most notably everything that science will not or cannot consider as an object for good reasons, that might yet "work" for you. The dirty (I think wonderful) secret in philosophy as in medicine is that a good two thirds of positive results are placebo effects. The art is about becoming a master writer of such effects for oneself —which requires intricate knowledge of the subject. In that sense, even bad books teach you about yourself — that's when anything external ceases to be an excuse but becomes a welcome obstacle, a worthy trial, XP to gain if you will.
As for blue-pill / red-pill, I don't have the faintest idea how it's related to this topic. I haven't looked for years at what these people are saying, but the core take-away³ applied to a rather limited subset of philosophy; so sure, learning about RP/BP dynamics is part of self-growth⁴, but it's like hard drives vs computers — different levels of objects.
Sorry for the long, but hopefully informative, rambling post.
____
[1]: If I had to pick one, I personally recommend Stephen Covey's famous "7 habits" — as one of the best philosophy manuscripts I've ever read, it's just as good as the best Hellenist/Roman stuff.
[2]: Ontology = what linguists call a "namespace" (not the other meaning, related to metaphysical philo-stuff blabla). Notice that "philo sophia" (the love or pursuit of wisdom) names the general goal / process, whereas "self help" ("help yourself and the sky will help you") already embeds a practical lesson in its very name: knowing the name is already enough to spread this one idea. It's very powerful, I think, sociologically.
[3]: That we should teach and learn "purple pill" or some higher-third way, not that you would hear it much but really it's the synthesis of this whole 'movement' IMHO.
[4]: My advice: skip RP/BP and move directly to evolutionary biology. The Moral Animal by R. Wright is a fantastic book.