> What will distinguish these structures from slums in 10 or 20 years?
The neighbourhood. When I moved to New York in my twenties, I had roommates. Everyone had roommates. That meant sharing a bathroom and kitchen. Not only did this breed camaraderie and teach me to not be a dick, it also freed up cash so I could enjoy the city and save.
Earlier today I had an agent refuse to integrate a proprietary library until I reassured it there was a signed license agreement with the authors. Otherwise it threatened to "escalate this session to the legal department".
First time I've seen prompt injection by header copyright warnings.
I don't think alignment hiccups are the same thing as refusing to do non-coding tasks. Both are refusals, but with different reasoning behind them. I have yet to see Claude Code complain about non-coding tasks, and I'm willing to bet the majority of complaints here are because of openclaw.
Earlier today I had Claude manage a TV show downloading system, and then it hacked an NES ROM for me, setting up a website so my kid could just go to a page to play it. So I guess it's not uniform.
The larger point of the article is that these new devices are dependent on your continued labor to keep them running usefully. Moreover, this is a choice in how they're designed.
The article isn't saying they don't do other things, it's just not relevant.
Why wouldn't students be able to learn how to use LLMs afterwards? How does learning to use them via the completely unstructured process of getting output past an overworked teacher out of their depth develop critical skills?
> How does learning to use them via the completely unstructured process of getting output past an overworked teacher out of their depth develop critical skills?
Nobody said it did. The point isn't to get it past a teacher. The point is to develop a curriculum that encourages growth with the technology instead of demonizing it.
Every single actually good student learned information in school and various skills outside of it. The tech is changing so much right now that it would be a waste of time for a teacher to try to plan a year-long course around them.
I'm not suggesting planning a course around using ChatGPT. I just think we're watching the idea of 'essays and paragraph-based replies to generic questions' be defeated in real time. There has to be a better way to get quantifiable results than what we currently have.
I don't think this is a good argument. Let's apply it to something outside the corporate world and see if it holds up.
When a writer produces a draft, there's no expectation of determinism. Give the same prompt to the same writer at two different times and you'll end up with two different pieces. So, if the most important factor in LLM satisfaction is eliminating the expectation of determinism, writers should be highly satisfied with them.
And yet they don't. Writers almost universally find LLM output mediocre (at best).
If I can volunteer a slightly warm take, the critical difference is that writers are better at distinguishing quality. People are much more likely to accept a confidently written corporate document that passes the sniff test, even when the details are nonsense. This is half of what you get from consultants anyway, so it's not exactly a shocking revelation if true.
Note that the logarithmic distribution of float density is also key to certain kinds of efficient hardware float implementations, because it means you can use the fixed mantissa bits alone as table indices. Unums have proven difficult to build efficient HW implementations for.
IEEE floats have a few warts like any other 1980s standard, but they're a fantastic design.
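To make the table-index trick concrete: because floats are spaced logarithmically, the mantissa bits alone locate a value within its binade, so they can be used directly as a lookup-table index. Here's a minimal sketch in Python for a table-based log2 approximation (the 8-bit table size and the helper name are illustrative, not taken from any particular hardware design):

```python
import math
import struct

K = 8  # use the top K mantissa bits as the table index (illustrative size)
TABLE = [math.log2(1.0 + i / (1 << K)) for i in range(1 << K)]

def log2_approx(x: float) -> float:
    """Approximate log2(x) for a positive, normal float.

    Reinterprets the float's bits: log2(x) = exponent + log2(1 + frac),
    and the top mantissa bits alone index the table for the second term.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # float32 bit pattern
    exponent = ((bits >> 23) & 0xFF) - 127               # unbiased exponent
    index = (bits >> (23 - K)) & ((1 << K) - 1)          # top K mantissa bits
    return exponent + TABLE[index]
```

The maximum error here is bounded by the table resolution (about 1/(2^K * ln 2)), and no multiply or divide is needed to form the index, which is exactly why the fixed-width mantissa field is so hardware-friendly.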
> Unums have proven difficult to build efficient HW implementations for
Valid point, but not quite true anymore. It basically comes down to the latency of count_leading_ones/zeros for decoding the regime, on which everything else depends. But work has been done in the past ~2 years, and we can now have posit units with lower latency than FP units of the same width! https://arxiv.org/abs/2603.01615
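For context, the regime is the variable-length run of identical bits right after the sign bit; its length must be counted before the exponent and fraction fields can even be located, which is why that leading-bit count sits on the decoder's critical path. A toy decoder in Python (the bit-string representation and function name are mine, chosen for clarity; real hardware uses a priority encoder):

```python
def decode_regime(bits: str) -> int:
    """Decode the posit regime value k from the bits after the sign bit.

    The regime is a run of m identical bits, terminated by the opposite
    bit or the end of the word: k = m - 1 for a run of ones, k = -m for
    a run of zeros. (Toy model, not a full posit decoder.)
    """
    lead = bits[0]
    m = len(bits) - len(bits.lstrip(lead))  # length of the identical-bit run
    return m - 1 if lead == "1" else -m
```

The point is that `m` varies per value, so every downstream field shifts with it, unlike IEEE's fixed exponent/mantissa split.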
> IEEE floats have a few warts like any other 1980s standard, but they're a fantastic design.
Hmm, I don't know if I would call it a fantastic design x) The "standard" is less a standard than a rough formalisation of a specific FPU design from back in the 1980s, and that design was in turn not the product of a forward-thinking visionary but something built to fit the technical and business constraints of that specific piece of hardware.
It has more than a few warts, and we can probably do much better nowadays. That's not really a diss on IEEE floats or their designers; it's just a matter of fact (which honestly applies to very many things that are 40 years old, let alone those designed under the constraints IEEE 754 was).
I'm sure you're much more knowledgeable about this than I am, but that's kind of my point: a month-old preprint is the first thing to compare favourably with implementations of a mildly evolved, warty old standard from 40 years ago. I consider that fantastic.
Thanks for the paper though. Looking forward to reading it more closely when I have time.
Absolutely! IEEE floats are ubiquitous in software and hardware. I'd bet good money they'll still be around 40 years from now :) and any alternative has to fight that ubiquity.
Better alternatives have been proposed for a long time though. Posits are very nice, and even they are almost 10 years old now :p
Markdown largely serves the same role as vocal emphasis and intonation in speech. Compare "I *never* said that" with "I never said *that*": the words are identical, but the emphasis shifts which part of the sentence carries the contrast, the same way stress does when speaking.
Ugh, troff. It's mind-boggling that the closest you can get to an actual (hyper)link in a manpage is your pager's "search for text" function. No, "JuSt usE gNu iNfO it'S beTtER" is not an answer either.
groff can produce output with links in it, and does by default in HTML mode! The GNU version of the Unix man page macro set has .UR and .UE for “URI start” and “URI end”. (I don't know whether these were present earlier in the lineage or whether they were introduced by GNU.) Also, the lowdown Markdown converter when in man output mode will output those for links. For fun, try:
echo 'Hi there! Have a [hyperlink](https://www.gnu.org/software/groff/manual/groff.html).' | lowdown -st man | groff -man -Thtml
There's also support for OSC 8 terminal hyperlink sequences throughout most of the groff toolchain: grotty(1) supports outputting them, and less(1) supports passing them through, including ^O^P and ^O^N gestures for moving between them. But man(7) says they're not enabled for grotty output by default. “Options” describes the rationale in its item for “-rU1”: “grohtml enables them by default; grotty does not, pending more widespread pager support for OSC 8 escape sequences.”
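For reference, an OSC 8 hyperlink is just the link text sandwiched between two escape sequences (ESC ] 8 ; params ; URI, terminated by ST, with an empty URI to close the link). A quick sketch in Python (the helper name is mine):

```python
OSC8_START = "\x1b]8;;{url}\x1b\\"  # OSC 8 ; params ; URI, then ST (ESC \)
OSC8_END = "\x1b]8;;\x1b\\"         # an empty URI terminates the hyperlink

def hyperlink(url: str, text: str) -> str:
    """Wrap `text` in OSC 8 escape sequences.

    Supporting terminals (and pagers like less that pass the sequences
    through) render it as a clickable link.
    """
    return OSC8_START.format(url=url) + text + OSC8_END

print(hyperlink("https://www.gnu.org/software/groff/", "groff manual"))
```

Piping that output through a non-supporting tool is harmless, since the escape sequences are simply ignored or stripped, which is part of why the scheme has caught on.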
So if I set MANROFFOPT=-rU1 in the environment, I can get clickable links in man… if the man page author included them that way in the first place. I'm not sure how common that is in the wild, but grepping the ones on my system, I find firejail(1) contains a link to a GitHub issue embedded in that way, and it does indeed work when I hit ^O^N to seek to it and then C-mouse1.

That said, the mouse gesture I have Alacritty using for links doesn't seem to work through tmux (there might be a way involving twiddling the tmux terminal-features setting, but I ran out of steam before trying it), and I didn't see a great way to get either grotty or Alacritty to style the links on display instead of letting them blend into the surrounding text, so it's still kind of scuffed in practice. (Though I bet the cool kids have moved on from Alacritty by now?)

less displays the link target in the status line when you use the navigation commands, so it's not inaccessible. But to open selected links directly from less with the ^O^O gesture, rather than leaning on terminal support, it looks like you need to explicitly set the LESS_OSC8_ANY and/or LESS_OSC8_‹scheme› environment variable to a shell command that outputs a shell command pattern to substitute the link into; if I set LESS_OSC8_ANY='echo xdg-open %o' then it passes links to my Firefox. I wonder if they'd take a patch (or if any existing distribution patches it) to make that the default?
That was a fun little rabbit hole to go down, thank you.
I mostly care about links inside the man page and links to other man pages, not ordinary Internet URLs being clickable (those are trivially copy-pastable into a browser). Look at man bash: there are tons of internal references like "described below under CONDITIONAL EXPRESSIONS" or "section SHELL BUILTIN COMMANDS below", or operators underlined so they look like hyperlinks, and you can't easily interact with any of them to jump to where they refer. You have to fall back on full-text search, which also turns up the references themselves, and good luck searching for the place where e.g. the command "." is described.
Ah! Yeah, that makes more sense—I misinterpreted you at first since I don't normally think of "internal link" as the default exemplar of "hyperlink". And yeah, I don't see good target markup for that. Stuff like starting your search regexp with "^ +" helps but is still janky. I'd tend to categorize this mostly as "man pages aren't meant to be long treatments of more complex software", of course?

Some large packages do something which I kind-of like, but which I'm not sure would work well if everyone did it (mainly due to the ergonomics of disambiguation): they split a bunch of their documentation into shorter pages with the origin as part of the section when making it accessible through man: pcap_init(3pcap) from libpcap, Bigarray(3o) from OCaml.

Shells, as you notice, get hit by this really hard. Zsh tries to do some splitting in the man version of its docs, but it's really not enough; I'd want to see fc(1zsh) (and then fc(1bash), etc.), but instead it's all in zshbuiltins. (Eventually I facepalmed when I realized an Info version was available and switched to that. The way I found this out was actually by eyeing the Zsh source tree and noticing that the documentation files were written in Yodl, which I'd never heard of, and then noticing that the schema they were using looked a lot like Info…)
… wow, hang on, I just checked for Bash, and it has an Info file but it says it's just an intro and the manual page is the definitive version‽ That's… hah. There must be some timeline jank around that; ambiguous NIH vibes around Info aside, I wouldn't have expected it from an actual GNU program! Did Bash show up before Info existed?
A man page is simply a formatted text file. The display of it is performed by the program defined in the MANPAGER or PAGER environment variable, or by a default program, usually less(1). That is the program that would be responsible for navigating any hyperlinks, and the format of references to other pages is already pretty easy to pick out so developing a man page reader that could follow references to other man pages would not be particularly difficult. Many web-based and GUI man page viewers do this. Example: https://github.com/zigalenarcic/mangl
> The display of [a man page] is performed by the program defined in the MANPAGER or PAGER environment variable, or by a default program, usually less(1).
A man page source isn't a binary format, so your statement that it's "plain text" is technically correct. (The same is true of TeX and LaTeX files, and even PostScript if you want to stretch the definition of "plain text" until it snaps.) But the renderer is groff or (legacy) troff with the `an` macro set; less(1) (or, originally, more(1)) is just the pager that consumes the renderer's output (when the output format is ASCII, which is only one of many) and handles paging on the terminal for the convenience of the user.
In my old Sun workstation (and even early Linux desktop) days, I rarely used man(1) in the terminal because (1) terminals were usually too small and weren't usefully resizable like they are today, and (2) unadorned monospaced fonts don't look nearly as nice as properly typeset pages do. (Color terminals were just appearing on the horizon, and text could only be emboldened, not italicized.) Instead, I typically used xman whenever I could. The best way I can describe xman is: as if you rendered man pages into PDFs and viewed them in Preview on the Mac. Man pages were much more comfortable to read that way.