All of Microsoft's products seem to be trying to be everything for everyone. Instead, people want more focus. The entire experience of Windows is a mess. How many terminals does an OS need? Or settings apps and control panels? Or all the legacy stuff? Just start from scratch and think simple, secure, and fast. Otherwise you get trapped by endless justifications for keeping everything, lose focus, and end up with Windows 11.
That would be great, but starting from scratch would take away Windows' core strength: backwards compatibility. There's a lot of legacy out there that depends on all that old cruft being there. There's still critical infrastructure running on Windows XP.
MS could probably virtualize or containerize the old parts, but people made a lot of dumb choices back then: Windows software got its tendrils into every part of the OS, so weird things can break when you try to containerize it.
This new model is way too sensitive to the point of being insulting. The ‘guard rails’ on this thing are off the rails.
I gave it a thought-experiment test, and it deemed a single point empirically false and simply unacceptable. It was so set against such an innocent idea that it came across as condescending and insulting. The responses were laughable.
It also went overboard editing something because it perceived what I wrote to be culturally insensitive ... it wasn’t and just happened to be negative in tone.
I gave the same test to Grok, which did a decent job, and to Gemini, which was actually the best of the three. Gemini engaged charitably and asked relevant and very interesting questions.
I'm ready to move on from OpenAI. I'm definitely not interested in paying for a heap of GPUs to insult me and judge me.
An ideal listen for anyone looking to sharpen their critical thinking. The reasoning moves are subtle, and it’s easy to miss the small leaps and omissions that reveal how persuasive but unsound arguments work.
If you want to test your logic radar, keep a Reasoning-Error Bingo Card handy — here are some of the most common moves to watch for:
- Anecdotal Evidence as Proof – moving personal testimonies presented as sufficient evidence.
- Cherry-picking – highlighting the few “hits” or successful moments and ignoring null or failed sessions.
- Facilitator/Ideomotor Bias – unacknowledged influence of helpers who already know the answers.
- Lack of Experimental Control – demonstrations without blinding or verification procedures.
- Equivocation on “Spelling” and “Communication” – shifting definitions of what counts as independent expression.
- Over-extension/Universal Claim – extrapolating from a handful of cases to “all nonspeakers.”
- Appeal to Emotion and Narrative Framing – using distressing or inspiring stories to disarm skepticism.
- Appeal to Authority – invoking credentials, research funding, or famous supporters in place of data.
- Confirmation Bias/Omission of Counter-Evidence – excluding decades of research debunking similar methods.
- Shifting the Burden of Proof – implying critics must disprove telepathy rather than producers proving it.
- Quantum-Language Hijack – invoking “quantum entanglement” or “energy fields” as pseudo-explanations.
- False Dichotomy (“open-minded vs. materialist”) – framing skepticism as moral or emotional failure.
- Paradigm-Appeal Fallacy – claiming we’re witnessing a scientific “revolution” instead of providing data.
- Ambiguous Success Criteria – redefining what counts as a correct answer or “connection.”
- Halo Effect through Compassion – the moral halo of helping disabled children transferred to the truth of the claim.
Ironically, in trying to transcend “materialism,” the series repeats Descartes’ old mistake — treating mind and matter as mutually exclusive instead of as aspects of a single natural order. That move saddles them with the same impossible burden Descartes faced: explaining how an immaterial mind could causally interact with the physical world on top of everything else they need to prove.
I only realized at uni, and only on my own, that logical biases and fallacies form well-known patterns.
I then had a much worse realization: most people still don't know about them, and they don't care.
You can't make people care; that only comes when they're the ones getting hurt by others being manipulated. But you can give them the tools to know what they should care about when that happens.
I get the reaction, but it's worth remembering that Japanese copyright and trademark law strongly protects creative works and registered marks. Japan doesn't have a broad "fair use" exemption like the U.S.; it only has narrow, specific exceptions (such as limited quotation or educational use). Under the Trademark Act, companies also have to actively enforce their marks or risk dilution or cancellation. So while Nintendo's actions can seem heavy-handed, they're also legally required to defend their intellectual property to preserve its value and exclusivity.
Nintendo of America, like any U.S. company, must actively protect its trademarks or risk weakening or losing them. While the company generally seems tolerant of fan use of its copyrighted material, it’s entirely reasonable that Nintendo would want to safeguard its intellectual property, its core asset and main source of revenue, both to preserve its legal rights and maintain the integrity of its brand.
It seems that you appreciate the existence of Japanese content, but want to use it on your own terms, not theirs. Is that a fair understanding of your point of view? I think that might be the point of contention for a lot of people.
I think you might be missing what the Chinese Room thought experiment is about.
The argument isn’t about whether machines can think, but about whether computation alone can generate understanding.
It shows that syntax (in this case, the formal manipulation of symbols) is insufficient for semantics, or genuine meaning. That means that whether you're a machine or a human being, I can teach you every grammatical and syntactic rule of a language, but that alone is not enough for you to understand what is being said or for meaning to arise, just as in his thought experiment. From the outside it looks like you understand, but the agent in the room has no clue what meaning is being imparted. You cannot derive semantics from syntax.
Searle is highlighting a limitation of computationalism and the idea of 'Strong AI'. No matter how sophisticated you make your machine, it will never achieve genuine understanding, intentionality, or consciousness, because it operates purely through syntactic processes.
This has implications beyond the thought experiment, for example, this idea has impacted Philosophy of Language, Linguistics, AI and ML, Epistemology, and Cognitive Science. To boil it down, one major implication is that we lack a rock-solid understanding or theory of how semantics arises, whether in machines or humans.
Slight tangent, but you seem well informed so I'll ask you (I skimmed the Stanford site and didn't see an obvious answer):
Is the assumption that there is internal state and the rulebook is flexible enough that it can produce the correct output even for things that require learning and internal state?
For example, the input describes some rules to a game and then initiates the game with some input and expects the Chinese room to produce the correct output?
It seems that without learning+state the system would fail to produce the correct output so it couldn't possibly be said to understand.
With learning and state, at least it can get the right answer, but that still leaves the question of whether that represents understanding or not.
We don't have continuously learning machines yet, so understanding new things, or being able to link ideas further, isn't quite there. I've always taken understanding to mean taking an unrefined idea, or incomplete information, applying experimentation/doing, and coming out with a more complete model of how to do said action.
Like understanding how to bake a cake. I can have a simplistic model, for example taking a box cake and making it, or a more complex model, using the raw ingredients in the right proportions. Both of these involve some level of understanding of what's necessary to bake a cake.
And I think AI models have this too. When they have some base knowledge of a topic, and you ask a question that requires a tool without asking for a tool directly, they can suggest a tool to use. Which, at least to me, makes it appear that the system as a whole has understanding.
You're anticipating the modern AI angle and that's a good move.
In Searle's Chinese Room, we're asked to imagine a system that appears intelligent but lacks intentionality, the capacity of mental states to be about or directed toward something. In Searle's setup there is no learning and no internal state; instead we have a static rulebook that manipulates symbols purely according to syntactic rules.
What you're suggesting is that if the rulebook, or maybe the agent, could learn and remember, then it could adapt, would be closer to an intelligent system, and in turn would have understanding. That is something Searle anticipated.
Searle covered this idea in the original paper and in a series of replies: Minds, Brains, and Programs (1980, anticipation p. 419 + peer replies), Minds, Brains, and Science (1984), Is the Brain’s Mind a Computer Program? (1990), The Rediscovery of the Mind (1992), and many more clarifications from lectures and interviews. He was responded to in papers from Dennett, the Churchlands, Hofstadter, Boden, Clark, and Chalmers (which you may be interested in if you're looking to go deeper).
To try and summarize Searle: adding learning or state only complicates the syntax; it's still a purely rule-governed symbol-manipulation system; there is no semantic content in the symbols; and the learning or internal changes remain formal operations (not experiences or intentions).
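To make the "still just syntax" point concrete, here's a toy sketch of my own (not from Searle; the class, tokens, and behavior are invented purely for illustration): a room with internal state and a "learning" step, where everything it does is still formal manipulation of opaque tokens.

```python
from collections import defaultdict

# A stateful, "learning" Chinese Room, reduced to its formal skeleton.
# The tokens are opaque strings; nothing in the program refers to anything.
class StatefulRoom:
    def __init__(self):
        self.state = ()                    # internal state: the recent tokens seen
        self.rules = defaultdict(list)     # learned rulebook: (state, token) -> replies

    def learn(self, state, token, reply):
        # "Learning" is just adding another syntactic rule to the table.
        self.rules[(state, token)].append(reply)

    def respond(self, token):
        # Look up a purely formal rule for the current (state, token) pair.
        replies = self.rules.get((self.state, token), ["0x00"])
        # Updating the internal state is just shuffling more symbols.
        self.state = (self.state + (token,))[-3:]
        return replies[0]

room = StatefulRoom()
room.learn((), "0x4f60", "0x597d")   # opaque stand-ins for Chinese characters
print(room.respond("0x4f60"))        # correct-looking output, zero understanding
```

The sketch's only point is that adding state and a learning step changes nothing philosophically: the whole thing remains a lookup over meaningless tokens, which is exactly Searle's claim.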
So zooming out, even with learning and state added, we're still dealing with syntax, and no amount of syntactic complexity will get us to understanding. Of course, this draws objections from functionalists like Putnam, Fodor, and Lewis. That's close to what you're pointing at: they would say that if a system with internal state and learning can interpret new information, reason about it, and act coherently, then it functionally understands. And I think this is roughly where people are landing with modern AI.
Searle’s deeper claim, however, is that the mind is non-computational. Computation manipulates symbols; the mind means. And the best evidence for that, I think, lies not in metaphysics but in philosophy of language, where we can observe how meaning continually outruns syntax.
Phenomena such as deixis, speech acts, irony and metaphor, reference and anaphora, presupposition and implicature, and reflexivity all reveal a cognitive and contextual dimension to language that no formal grammar explains.
Searle’s view parallels Frege’s insight that meaning involves both sense (how something is presented) and reference (what it designates), and it also echoes Kaplan’s account of indexicals in Demonstratives (1977), where expressions such as I, here, now, today, and that take their content entirely from the context of utterance: who is speaking, when, and where. Both Frege and Kaplan, in different ways, reveal the same limit that Searle emphasizes: understanding depends on an intentional, contextual relation to the world, not on syntactic form alone.
Before this becomes a rambling essay, we're left with Frege's tension of coextensivity (A = A and A = B), where logic treats them as equivalent but understanding does not. If the functionalists are right, then perhaps that difference, between meaning and mechanism, is only apparent, and we’re making distinctions without real differences.
> Frege's tension of coextensivity (A = A and A = B)
I googled it and am now reading up on this one. I really enjoy how things that seem basic on the surface can generate so much thoughtful analysis without a clear and obvious solution.
I understand the assertion perfectly. I understand why people might feel it intuitively makes sense. I don't understand why anyone purports to believe that saying "Chinese characters" rather than bit sequences serves any purpose other than to confuse.
I opened ChatGPT on my Mac this morning and there was an update.
I updated ChatGPT and a little window popped up asking me to download Atlas. I declined as I already have it downloaded.
There was another window, similar to the update-available window, in my chat asking me to download Atlas again... I went to hit the 'X' to close it and somehow triggered it instead: it opened my browser to the Atlas page and started a download of Atlas.
This was not cool and has further shaken my already low confidence in OpenAI.
> I don't think I've ever encountered a technology pushed quite as hard on unwilling users as AI.
That's why they have to push it so hard. They spent a lot of money on it and they NEED us to buy it so they don't take a loss, so their tactic is to try to force it on us.
It happens to many companies: you start as a disruptor and get huge because you did what the market wanted and your competitors didn't. Then, later, you don't want the market to go in a certain direction, so you try to stack the market, never once realizing that another disruptor will come along to upset your apple cart.
You can control some of the market all of the time, and all of the market some of the time, but you can't control all of the market all of the time.
All the other similar cases I can think of are either something universally vilified, like aggressive telemarketers, or outright scams. AI firms are in good company, it seems.
The only confidence I have in OpenAI at this point is that they will be using scummy tricks like that all the time. What have they ever done to earn confidence in the other direction?
They can't even lie and blame it on a programming fuckup, because they'd have to say AI-driven code is buggy.
I don't see how OpenAI isn't doomed. They are riding on brand recognition. They are in such a deep hole with investors and despite their meteoric revenue they are only getting deeper. And there is no moat between them and Google or Anthropic or xAI or China for that matter. If Atlas and the video slop machine are their attempts at revenue, they are worse than doomed.
It is an oversimplification, but Rousseau does paint this picture of humanity's natural goodness corrupted by society, or what the author calls circumstance. This idea is a cornerstone of the Discourse on Inequality and Émile.
Discourse on the Origin and Foundations of Inequality among Men (1755)
- “Nothing is more gentle than man in his primitive state… he is restrained by natural pity from doing harm to others.”
Émile, or On Education (1762)
- “Everything is good as it leaves the hands of the Author of things; everything degenerates in the hands of man.”
Confessions (1782–89)
- “I have displayed myself as I was, vile and despicable when I was so, good, generous, sublime when I was so; I have unveiled my interior being.”
For Rousseau, humans possess innate moral sentiment, society corrupts through things like comparison, and the good life is maintained by being true to one's natural self.
I also think the focus of this little essay is on contrasting two modern identities, the expressive self and the performative, productive self, and isn't steeped in moral psychology. Bringing Aristotle into this is wholly anachronistic and misses the point.
There was a time when the algorithm was truly amazing and the recommendations were smart, spot-on, and mostly high quality. I don't know what happened, but now you have to search for decent content.
The recommendations part of YouTube just seems to give me old content or things I've already watched. Despite it feeling almost user-hostile, I still use YouTube.
YT has a bad tendency to over-recommend. I watch one video about the stock market and my next ten suggestions are stock-picking videos from random guys. I also immediately get advertisements for crypto platforms!
The only time I appreciate over-recommendations is when I search for old standup videos of Dave Attell, George Carlin, and others and it suggests more of these gems.
I recommend turning off the watch history. It means the only recommendations you get are related to the video you're currently watching. It's not perfect but it's a lot better.
I really wish there was a setting to tune feed "volatility". This also drives me crazy.
Sometimes I'll be marking something as "not interested", but the time it spent auto-playing is enough for my feed to turn into the stuff I just said I wasn't interested in.
They seem to give totally different suggestions based on platform. On the TV app I get great, high-quality content every time and I love it, while on mobile it just pushes junk and Shorts.
The algorithm was slightly different back then, and it was working on a different kind of data (mostly regular videos). Now it also has to rank Shorts and deal with the rise of automated videos, videos with misleading titles, and other hacks by malicious actors.