K2L8M11N2's comments

As a premium subscriber I currently have 4x available on Android and they recently (in the last month) added it to web too


As a premium subscriber I no longer have 4x available on iOS and I never had it on web. I also no longer see it as an experiment on https://www.youtube.com/new

I hope they make up their mind on it soon instead of this endless A/B testing.


In 24h mode it would show as 02:21.


It appears every time you get into an app by clicking a notification while you're in another app, and it's 100% consistent in that context.


It is consistent when the apps are consistent.

But it will not appear if the app opens the webpage as its own View (instead of opening Safari). In that case, there is no button and the user has to hunt for how to go back. And the user has no way of knowing whether an app will open Safari or will open the webpage itself.

iOS requires users to build muscle memory in learning how to use each app. Android requires users to maintain a back stack in their head to remember what came before. Switching between the two is very jarring.


If it's a full-screen in-app webview, there's a Done button on the top-left; alternatively, you can swipe back from the first page in the webview and it will close the webview and leave you where you were.

If it's a card webview, just swipe it down from the top edge.

> Switching between the two is very jarring.

I think that's the crux of the problem. Someone who's used to one style and has built muscle memory and some kind of hierarchy and ontology of the interaction with the system will have trouble with a different paradigm. That doesn't necessarily mean there's something wrong with either - they're just different, and some people will prefer one over the other.

Switching between GNOME and KDE, or vice-versa, has a similar effect.


To be fair, captive browsers are also a plague in Android apps. I despise them, especially when there's no way to turn them off in a particular app.


The fact that you had to explain how it works indicates that a bit more consistency might be beneficial.


> It appears every time

Would be consistent.

> It appears every time [long list of qualifications]

Is inconsistent.


It is literally just one qualification, and it doesn't even apply: why would the "back to previous app" button show when you're still in the same app? Meanwhile, you haven't mentioned a single inconsistency yourself. If you like discussions with made-up arguments, may I suggest you check out reddit?


It is consistent. If the current app was opened by another app (e.g. by clicking a link that opened Safari, or if you're in Safari and click a link that opens in another app, etc), the button will be there.

OTOH if the current app was opened manually by the user, it won't be there.


GPT-4 still recognizes the pattern and solves it:

https://chat.openai.com/share/c6b37b7a-609e-4593-a78f-421f80...

> Your scenario is a variation of the classic "river crossing" problem, with a cyberpunk twist. The goal is to move all three viruses (a_virus.exe, b0Rk.worm, and cy83r_h4x.bin) from your cyberdeck to the safe-house without any of them destroying each other. Here's a step-by-step solution:


Exactly. You have to come up with scenarios whose meta-level solution is unique. That becomes hard as soon as a challenge pattern is publicly defined, because an LLM can then be trained on it.


Hey, neat!

Edit: looking at the whole conversation, it did recognize the problem but then it got the answer wrong. Interesting.


why is that interesting?

it's a text suffix generator - you wouldn't expect it to generate a correct answer for a logic puzzle that isn't in its training data.


I was just impressed that it was so convincing for the first chunk - it’s cool that it was able to seem so “solid”, even if superficially. I’ve been out of the loop for a while and stuff’s been moving fast!


this is the point of the thread: people expect it to do so because they don't understand how it works or what it is


it's the point of basically every discussion on HN about this. I am constantly shocked at how deliberately misinformed so many users on this site remain.


It's very impressive that it can still catch the similarities, but fundamentally it's still just performing the same type of pattern recognition. The point of this new breakthrough is that it's actually using its own deductive logic.


You can change the volume of those sounds under Settings > AirPods > Accessibility.


I think this essay is relevant here: https://ansuz.sooke.bc.ca/entry/23

> Suppose you publish an article that happens to contain a sentence identical to one from this article, like "The law sees Colour." That's just four words, all of them common, and it might well occur by random chance. Maybe you were thinking about similar ideas to mine and happened to put the words together in a similar way. If so, fine. But maybe you wrote "your" article by cutting and pasting from "mine" - in that case, the words have the Colour that obligates you to follow quotation procedures and worry about "derivative work" status under copyright law and so on. Exactly the same words - represented on a computer by the same bits - can vary in Colour and have differing consequences. When you use those words without quotation marks, either you're an author or a plagiarist depending on where you got them, even though they are the same words. It matters where the bits came from.


this is a basic misunderstanding of copyright


Is the GP misunderstanding copyright, or is GP describing a common basic misunderstanding?

If the former, could you please elaborate? I also frequently cite this article in discussions.


But it does, that's what ::ffff:0:0/96 is for


I can't `ping ::ffff:192.168.0.1` and have it ping my router. There is a range reserved for representing IPv4 addresses, but the stack doesn't translate.


You can if you have NAT64:

    $ ping 64:ff9b::1.1.1.1
    PING 64:ff9b::1.1.1.1(one.one.one.one (64:ff9b::101:101)) 56 data bytes
    64 bytes from one.one.one.one (64:ff9b::101:101): icmp_seq=1 ttl=54 time=10.4 ms
    64 bytes from one.one.one.one (64:ff9b::101:101): icmp_seq=2 ttl=54 time=10.0 ms
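
(If you're wondering where 64:ff9b::101:101 comes from: NAT64 just embeds the 32-bit IPv4 address in the low bits of the 64:ff9b::/96 well-known prefix. A rough sketch of that arithmetic, using 1.1.1.1 purely as an example address:)

    import ipaddress

    # Embed an IPv4 address in the NAT64 well-known prefix 64:ff9b::/96
    # (1.1.1.1 is just an example address)
    prefix = int(ipaddress.IPv6Address("64:ff9b::"))
    v4 = int(ipaddress.IPv4Address("1.1.1.1"))
    print(ipaddress.IPv6Address(prefix | v4))  # -> 64:ff9b::101:101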


The NAT64 prefix (64:ff9b::/96) is not the one GP cited (::ffff:0:0/96).

Also I don't have NAT64... The fact that ISPs don't provide NAT64 by default is kind of my point.


And then we are back to NAT...


Yes. What were you expecting? There's no way for a v4-only device to reply to a packet from a v6 source address otherwise. The source address has to be mapped to an address the v4-only device understands, and then mapped back again for the reply packets.

How else could this work?


It does translate, but it doesn't work for ping because ping bypasses most of the stack by sending raw packets. Try something like `telnet ::ffff:192.168.0.1 80`.


That does work. Interesting, the OS translates this at the socket level.
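
To make the socket-level behaviour concrete, here's a minimal sketch (the 192.168.0.1 address and port 80 are assumptions; point it at any IPv4 host that's listening, on a system where dual-stack sockets are enabled, which is the Linux default):

    import socket

    # Connect to an IPv4-only host through its IPv4-mapped IPv6 address.
    # The kernel unmaps ::ffff:192.168.0.1 back to plain IPv4 on the wire,
    # so this works with no IPv6 connectivity at all.
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.connect(("::ffff:192.168.0.1", 80))  # assumed router address/port
    print(s.getpeername())
    s.close()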


> I can't `ping ::ffff:192.168.0.1` and have it ping my router.

How would that even work in theory?

How would a ('legacy'?) host that only understands the 32-bit data structure of IPv4 addresses talk to a host addressed by a larger, 128-bit IPv6 address?


You need a translator, i.e. a middle host with a dual IPv4/IPv6 stack that can convert an IPv4 packet into an IPv6 packet and vice versa. By the way, it's not just theoretical: it exists and has been standardised, see https://nicmx.github.io/Jool/en/intro-xlat.html#ipv4ipv6-tra...


If it truly encapsulated IPv4, then there wouldn't be two stacks. It would be one stack, and legacy devices could snip the extra bits (or have it done for them via a router).


I'm skeptical. How would the legacy v4 device understand the "extra bits"? How would this work on the same subnet (no router)?


If it can't do that natively (short of getting a new networking stack), then a router would have to rewrite the packet.

Endpoint devices should not be peering directly (for security reasons). Always go through either a passthrough inspection device or a router.


Endpoint devices peering directly is how things work on most small networks. What you describe would cause more problems than it solves.


> (or have it done for them via a router)

And then we are back to NAT...


But you can "ping $address" regardless of which IP version it's using. Please elaborate on what you're trying to solve.


I didn't say I couldn't type that in... my point was clear to everybody else who responded.


That part of IPv6 is mostly deprecated. The more modern version is NAT64 which uses 64:ff9b::/96 by default.


Those aren't publicly routable though... that's the problem.


Couldn't we just make it so?


If we could get the RFCs changed, sure!


Being able to work sitting in a deck chair in your back yard without having to worry about screen glare seems like a compelling use case.


None of your grandparents would have made it to the US under the current system either (probably)


> There will be no viable robotics applications that harness the serious power of GPTs in any meaningful way.

That's a weird prediction to make, considering that PaLM-E does exactly that: https://palm-e.github.io/

