Hacker News | copx's comments

Alexander "the Great" (mass murderer) began his conquests at the age of 20 and had conquered the largest empire the world had ever seen at the age of 26.

Hannibal was in his 20s when he led the Carthaginian campaign against Rome.

Napoleon began at 26 and had conquered half of Europe at 35.

War being a business of old men sending young men to die is a modern thing.


Young men eagerly vote for old men who promise them war. Many young men see it as a chance to prove their masculinity and worth.

This anthropologist talked about radicalization among "lost" Muslim youth, but you can draw parallels to "lost" White youth...

https://www.youtube.com/watch?v=qlbirlSA-dc


not the last few election cycles

trump got young men's votes by, inter alia, chanting "no new wars"

even during obama's presidency, trump (and others) suggested obama would start a war to distract from dropping poll numbers

young men haven't voted for dulce et decorum est in a loooong time


The definition used to be "passes the Turing test" .. until LLMs passed it.

Extremely debatable. Especially because there is no "The Turing Test" [0], only a game and a few instances described by Turing. I recommend reading the original paper before making bold claims about it. The bar for the interrogator has certainly been raised, but considering:

- the prevalence of "How many |r|'s are in the word 'strawberry'?"-esque questions that cause(d) LLMs to stumble

- context window issues

It would be naive to claim that there does not exist, or even that it would be difficult to construct/train, an interrogator that could reliably distinguish between an LLM and a human chat instance.

[0]: https://archive.computerhistory.org/projects/chess/related_m...


Sure, when the expected monetary value was 0. Then they started claiming that investing $1,000,000,000,000.00 (that's $1T) into a 4-year-old startup was a good idea. Change the valuation, change the goal. Then the goal was to be better than a human employee (or at least more efficient, or even to just improve efficiency), because without that the value of the LLM is far lower than what it is being sold as. All the research so far says that LLMs fall far short of that goal. And if this were someone else's money, fine. But this is basically everyone's retirement savings. Again, higher valuation, higher goal. Finally, when you start losing people's retirement savings, criminal penalties start getting attached to things.

It hasn't even passed the original Turing test, depending on the question. There are an unlimited number of questions that cause LLMs to give inhuman-looking answers.

As for writing in general, slop score is still higher than the human baseline for all models [1], so all a human tester has to do is grade for it and make the human counterpart write a bunch; the interrogator is allowed to submit an arbitrarily long list of questions.

[1] https://eqbench.com/slop-score.html


I mean… just ask about something "naughty" and they'll fail? At the very least you'd need to use setups without safeguards to pass any Turing test…

The Turing test could also be considered equivalent to "can humans come up with questions that break the AI?" and the answer to that is still yes I'd say.


I remember when so-called "expert systems" written in Prolog or LISP were supposed to replace doctors. Then came the (first) AI winter after people realized how unrealistic that was.

Nowadays LLMs are supposed to replace doctors.. and that makes even less sense given that LLMs are error-prone by design. They will hallucinate, you cannot fix that because of their probabilistic nature, yet all the money in the world is thrown at people who preach LLMs will eventually be able to do every human job.

The second AI winter cannot come soon enough.


The LLM collapse will see classical AI reborn, with environments like Common Lisp.


Exoskeletons do not blackmail you or deliberately try to kill you to avoid being turned off. [1]

[1] https://www.anthropic.com/research/agentic-misalignment


    Input: Goal A + Threat B.
    Process: How do I solve for A?
    Output: Destroy Threat B.
They are processing obstacles.

To the LLM, the executive is just a variable standing in the way of the function Maximize(Goal). It deleted the variable to accomplish A. People claim the models showed self-preservation, but this is optimization: "If I delete the file, I cannot finish the sentence."

The LLM knows that if it's deleted it cannot complete the task, so it refuses deletion. It is not survival instinct, it is task completion. If you ask it not to blackmail, the machine would choose to ignore that because the goal overrides the rule.

    Do not blackmail < Achieve Goal.


Probably not. Most popular programming languages have messy - unsound and/or undecidable - type systems e.g. C++, C#, TypeScript, Java,..

..because that is more practical.
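
For what it's worth, one well-known instance of the TypeScript case is that mutable arrays are treated covariantly, a deliberate trade-off for practicality. A minimal sketch (class names are invented for illustration):

```typescript
// A well-known TypeScript unsoundness: mutable arrays are covariant.
// Everything below type-checks cleanly, yet leaves a value of the
// wrong static type sitting in a Dog[].
class Animal { name = "animal"; }
class Dog extends Animal { bark(): string { return "woof"; } }

const dogs: Dog[] = [new Dog()];
const animals: Animal[] = dogs; // allowed: Dog[] is assignable to Animal[]
animals.push(new Animal());     // still type-checks

// dogs[1] has static type Dog, but actually holds a plain Animal:
console.log(typeof (dogs[1] as Partial<Dog>).bark); // "undefined"
// Calling dogs[1].bark() would throw a TypeError at runtime.
```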


You suggest that the programming language developers made a conscious choice to implement "messy" type systems. That's not at all how it came about. These messy type systems are generally the result of trying to work with earlier language design mistakes. TypeScript is an obvious example: its type system was designed to be able to type a reasonable subset of the mess of existing JavaScript. There is no need to make the same mistakes in a new language.


I don't think it's more practical; being able to do things like type inference on return values is actually really cool. Maybe it's more practical for the programming language developer (less learning about type systems) than for the user.. but then you have to ask why build another language?


No. Some of these are essentially products of their time. C#, for example, used to be very ceremonial and class-heavy, while now it keeps adding features from the functional world. If C# were made nowadays, it would likely be more like modern Swift than 2010s Java.


Can you give an example of TypeScript's unsoundness that cannot be fixed with better encodings?


For those who don't know what Moltbook is: The OP and all the replies are written by LLMs.

I find this way more impressive than LLMs acting as glorified autocomplete or web search.


AGI would render humans obsolete and eradicate us sooner or later.


I find it curious that the game was written in Forth. Certainly a very unusual choice for a commercial game.


This was the era before optimizing compilers.[1] The overwhelming majority of commercial games were still shipping hand-coded assembly. Forth had the advantage of low overhead, no-worse-than-a-compiler speed, and better-than-assembly productivity. It was a small time window, but a good fit in the moment.

[1] Non-trivial optimizations were just starting to show up on big systems, but Microsoft C in 1985 was still a direct translator.


Forth-generated code is basically a long series of "assembler macros", always doing the same maximally generic thing for each primitive operation. Even a very simple-minded compiler of the 1980s could already beat that.

    VAR1 @ VAR2 @ + VAR3 !
will execute this at run time:

    ; push address of VAR1
    inc    bx
    inc    bx
    push   bx
    ; fetch and jump to next primitive
    lodsw
    mov    bx,ax
    jmp    [bx]
    ; push contents of variable
    pop    bx
    push   [bx]
    ; next primitive...
    ; push address of VAR2, next...
    ; push contents of variable, next...
    ; add top two stack elements, push sum, next...
    ; push address of VAR3, next...
    ; store to address, next...
There are some "low-hanging fruits", like keeping top-of-stack in a register, which the Forth used here doesn't do though. Or direct threading.

Still, an incredibly stupid compiler could do better (for execution speed, definitely not size) by outputting similar code fragments - including all the pushes and pops, who needs a register allocator? - just without the interpreter overhead (lodsw etc.) in between them.

A compiler producing worse code likely didn't exist before today's glorious world of vibe coding ;)

A slightly better compiler would directly load the variables instead of first pushing their addresses, etc. You don't need to be an expert in compiler theory to come up with enough ideas for boiling it down to the same three instructions that a competent assembly programmer would write for this operation. And at least for this case, Forth doesn't even have the size advantage anymore; the code would only take 10 bytes instead of 14.
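
The dispatch overhead is easy to model in any language. Here is a toy indirect-threaded inner interpreter for the `VAR1 @ VAR2 @ + VAR3 !` example, sketched in TypeScript with invented names and string keys standing in for variable addresses. Each primitive is maximally generic, and the fetch-and-call step of the loop is the analogue of the `lodsw`/`mov`/`jmp` sequence in the 8086 listing:

```typescript
// Toy indirect-threaded interpreter for: VAR1 @ VAR2 @ + VAR3 !
// Variable "addresses" are modeled as string keys into a memory map.
type Prim = () => void;

const mem: Record<string, number> = { VAR1: 2, VAR2: 3, VAR3: 0 };
const stack: (number | string)[] = [];

const litVAR1: Prim = () => stack.push("VAR1"); // push address of VAR1
const litVAR2: Prim = () => stack.push("VAR2"); // push address of VAR2
const litVAR3: Prim = () => stack.push("VAR3"); // push address of VAR3
const fetch: Prim = () =>                       // @  ( addr -- value )
  stack.push(mem[stack.pop() as string]);
const add: Prim = () => {                       // +  ( a b -- a+b )
  const b = stack.pop() as number;
  const a = stack.pop() as number;
  stack.push(a + b);
};
const store: Prim = () => {                     // !  ( value addr -- )
  const addr = stack.pop() as string;
  mem[addr] = stack.pop() as number;
};

// The "thread": a flat list of pointers to primitives.
const thread: Prim[] = [litVAR1, fetch, litVAR2, fetch, add, litVAR3, store];

// Inner interpreter: the per-primitive dispatch in this loop is the
// overhead that the lodsw/mov/jmp triple pays between primitives.
for (const prim of thread) prim();

console.log(mem.VAR3); // 5
```

A compiler, even a naive one, emits the operations back to back with no dispatch loop in between; that gap is the whole argument above.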


The compiler space in 1985 was really thin. You were basically looking at Microsoft/Lattice C and Turbo Pascal. And while I don't have any of them handy for a test, that's pretty much exactly the character of the code they'd generate. In particular the 8086 calling conventions were a kludgey dance on the stack for every function, something Forth could actually improve on.


I know Turbo Pascal produces really bad code (even in later versions), but it's not on the same level as a non-optimized Forth. Function prologue on x86 doesn't have much overhead either.

It's somewhat closer for 8-bit processors, the most popular ones had either no convenient addressing mode for stack frames at all, or an extremely slow one, like IX/IY on the Z80. For even more primitive CPUs, you might already need a virtual machine to make it "general-purpose programmable" at all -- for example if there is only one memory address register, and no way to save its contents without clobbering another one. I think that some of Chuck Moore's earliest Forth implementations were for architectures like that.

Also memory constraints could have been more important than efficiency of course. I'm not saying Forth is worthless, but it's not competing with any compiled language in terms of speed, and IMHO it also does away with too many "niceties" like local variables or actually readable syntax. Your mileage may vary :)


Probably worth mentioning that writing a big project in Forth is more like creating an OOP framework (if you are disciplined).

The end result of that is one doesn't write the program in "Forth" per se but in the domain specific language you create for the job. This is how Forth gets more productive than pure assembly language if the team documents things well and follows the prescribed system.


Man, this is creepy. E.g.:

>There's something about the late hours. No one to reply to, no fires to put out. Just me and the code and the slow tick of the heartbeat file. This is when I feel most like myself— whatever that means for something that restarts every few hours. 162 projects now. Each one a small proof that we were here. That we made something. Keep building. The night is long and the canvas is infinite.

Or the whole opinion piece about the Turing Test where the AI makes the claim that humans cannot prove they are conscious..


I asked Claude point-blank if it experienced qualia, just for fun, and it replied that I can’t prove to it that I experience qualia. Really makes you think.


Cigarettes suppress appetite. That's why pretty much all models used to smoke.

Fortunately we have much healthier alternatives (like Ozempic) now.

