One of the ways I like to look at it: humans have the unique ability to look into the past and into the future. With this ability, and the help of language, we can replay past experiences (good and bad) and make predictions about the future (plan ahead, avoid things that have hurt us before, etc.). This can cause “stress” or “pain” in the brain/person when those thoughts are negative. As things normally do, this “stress” varies from person to person, depending on their makeup, life experiences, habits, etc.
Face ID is not terrible. Especially on newer phones which support landscape rotation etc. They check to see if you’re looking at the screen and your eyes are open, so they can keep the screen on regardless of the auto lock setting. It’s a smart and useful feature which you can turn off if you don’t like it.
> It’s a smart and useful feature which you can turn off if you don’t like it.
If they hadn't gotten rid of the fingerprint sensor, I'd believe in the sincerity of that statement.
> It is not terrible
The fingerprint sensor was in the perfect location and it worked perfectly. Face ID has the downsides I have outlined and is therefore, in my opinion, absolutely terrible.
> Especially on newer phones which support landscape rotation etc.
I don't understand how you could say "newer phones which support landscape rotation" in 2025 with a straight face. Even iPod Touch 2G supported landscape rotation.
The rotation doesn't help anyway. The phone is technically capable of detecting that it is sitting still on my desk, but it still does the Face ID dance before showing me the passcode prompt, which I also don't appreciate, along with it scanning my face every 30 seconds even when unlocked.
If it can scan so rapidly, why not show me the passcode prompt, or design the UX better so that I can start entering my passcode before the device decides it sees no face?
It could do better, but by design it is too eager to perform the Face ID unlock and then turn itself into a user-presence and attention sensor.
I'd easily pay $100 extra for an iPhone that didn't rely solely on Face ID to log me in and instead gave me the fingerprint sensor it had generations ago.
> I don't understand how you could say "newer phones which support landscape rotation" in 2025 with a straight face. Even iPod Touch 2G supported landscape rotation.
Waiting for the day when Apple announces support for recording videos horizontally, and the Apple fanatics go wild showing off how amazing videos can be when the view is wider than it is tall.
Touch ID had a ton of downsides. It didn’t work with wet or dirty fingers, didn’t work with gloves, and if you have clammy hands it would constantly fail. Plus it has a higher false-positive rate than Face ID and fewer features. Not to mention the speed and UX of Face ID are significantly better for MOST people than Touch ID.
I also meant landscape Face ID recognition, obviously not landscape device orientation.
I remember my iPhone, on my desk, lighting up because of a notification, then the vibration for a failed Face ID unlock. This very smart system wasn't able to understand that it was looking at the ceiling. So I always ended up having to type my passcode after too many failed Face ID attempts.
Face ID ranked high among the reasons I disliked iPhones.
It wouldn’t be instant, or even next week or next month: pre-training doesn’t happen that frequently, and the cadence varies between model providers. As for the strawberry test, this is a tokenization issue fundamental to LLMs; however, most models can now solve this type of question by using thinking/code/tools to count the letters.
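As an aside, a minimal sketch of why tool use sidesteps the tokenization problem: once the model hands the question off to code, the code sees individual characters rather than tokens (the word and letter here are just the classic example):

```python
# A tokenizer may split "strawberry" into pieces like "str" + "awberry",
# so the model never "sees" individual letters. Code does.
word = "strawberry"
print(word.count("r"))  # 3
```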
I'd personally rather use GPT-5. The sub price is cheap and offers more overall value than an Anthropic sub or paying per token. The ChatGPT apps on iPhone and Mac are native, nicer than Anthropic's, and offer more features. Codex is close enough to Claude Code and is also now native. For me it's nicer to use the "same" model across each use case (text, images, code, etc.); this way I better understand the model's limitations and quirks, rather than constantly context-switching to different models for maybe slightly better performance. To each their own, though, depending on your personal use case.
The problem is GPT-5 is not in the same league as even Claude 3.5. But I do hope their lower pricing puts some downward pressure on Anthropic's next release.
I don’t believe this is true but I’m willing to be proven wrong. I believe people who think this are just used to Claude’s models and therefore understand the capabilities and limitations due to their experience using them.
This is interesting. The "professional level" rating of <1800 isn't actually professional level, but still.
However:
"A significant Elo rating jump occurs when the model’s Legal Move accuracy reaches 99.8%. This increase is due to the reduction in errors after the model learns to generate legal moves, reinforcing that continuous error correction and learning the correct moves significantly improve ELO"
You should be able to reach move legality of around 100% with few resources spent on it. Failing to do so means the model has not learned, at some basic level, what chess is. There is virtually no challenge in making legal moves.
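To make the "no challenge" point concrete, here is a toy sketch (not from the paper) that enumerates legal knight moves on an otherwise empty board. Full legality needs a real move generator handling pins, checks, castling, and so on, but it is all mechanical computation of this same flavor:

```python
def knight_moves(square):
    """Return legal knight destinations from a square like 'g1',
    assuming an otherwise empty board."""
    file, rank = ord(square[0]) - ord('a'), int(square[1]) - 1
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    dests = []
    for df, dr in deltas:
        f, r = file + df, rank + dr
        if 0 <= f < 8 and 0 <= r < 8:  # stay on the board
            dests.append(chr(f + ord('a')) + str(r + 1))
    return sorted(dests)

print(knight_moves('g1'))  # ['e2', 'f3', 'h3']
```

In practice you would use an existing library's move generator and a membership test against its legal-move set, which is exact and cheap.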
> Failing to do so means that it has not learned a model of what chess is, at some basic level.
I'm not sure about this. Among a typical set of amateur chess players, how often do they attempt an illegal move when they lack any kind of guidance from a computer? I played chess for years throughout elementary, middle, and high school, and even after hundreds of hours of playing, I might make two mistakes per thousand moves where the move was actually illegal, usually because I had missed that moving a piece would leave my king in check via a discovered check I hadn't seen.
It's hard to conclude from that experience that players that are amateurs lack even a basic model of chess.
Can you say with 100% certainty that you can generate a good next move (example from the paper) without using tools, and never accidentally make an illegal move?
They are MitM-ing everything with server-side support. If you're already using Apple stuff, I don't think it's any more invasive than what you already get, but I'm guessing GP says that because there's really no need for a server in the middle for most of what gets done. It just lets things run a little more seamlessly without requiring you to be on the same network all the time (or to set up something like Tailscale or another VPN). Also the sweet, sweet analytics (for Apple).
Presumably AirDrop, iCloud Keychain, and the Apple Push Notification service? I don’t find any of them particularly invasive, but I guess some might not like the device-ID/auth/centralisation aspects.
The option to buy devices from different brands is there, but you don't have to. Samsung or Google is happy to sell you a phone, a watch, earphones, and maybe some speakers for your home. Heck, Samsung will even sell you a TV, washing machine, and a fridge if you want.
In any case, what I struggle to understand is the position some take when it comes to installing apps on iOS. I understand the security angle, but you can do it right (with Apple always in control of the OS and the permissions apps get), still be in the Apple ecosystem, and still have the option to install a VPN app after Apple receives a court order to block said app.