Reading this I hear The Roots playing The Seed 2.0[1] in my mind.
It’s a wild thought to think that of all the things that will remain on this earth after you’re gone, it’ll be your GPL contributions reconstituting themselves as an LLM’s hallucinations.
If we're being honest, it's going to be a lot more than that.
Our comments here on HN are almost certainly going to live in fame/infamy forever. The Twitter firehose is essentially a pathway to 140-character immortality.
You can already summon an agent to ingest essentially an entire commenter's history, correlate it across different sites based on writing style or similar nicknames, and then chat with you as that persona -- even more so with a finetune or LoRA. I can do that with my Gmail and text-message history, and the result is eerily similar to me.
History is going to be much more direct and personal in the future. We can also do this with historical figures who left voluminous personal correspondence; that's possible now.
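A minimal sketch of the persona idea. Everything here is invented for illustration (the username, the sample comments, and the prompt wording); a real pipeline would fetch the history from a public archive such as the HN Algolia search API and hand the prompt to a chat model or fine-tune:

```python
# Sketch: turn a commenter's history into a "persona" system prompt.
# The comment list is hard-coded here; in practice you'd pull it from a
# public archive (e.g. the HN Algolia API) before building the prompt.

def build_persona_prompt(username: str, comments: list[str], max_samples: int = 50) -> str:
    """Assemble a system prompt asking a chat model to mimic a commenter."""
    samples = "\n---\n".join(comments[:max_samples])
    return (
        f"You are role-playing the online commenter '{username}'.\n"
        "Imitate their vocabulary, sentence rhythm, and opinions.\n"
        "Writing samples follow, separated by '---':\n"
        f"{samples}"
    )

# Hypothetical history for a hypothetical user:
history = [
    "Honestly the hard part of any migration is the people, not the schema.",
    "I will die on the hill that plain Makefiles beat most build systems.",
]
prompt = build_persona_prompt("pg_fan_42", history)
print(prompt)
```

The heavy lifting (style transfer) happens in the model; the prompt just concentrates the evidence. A finetune or LoRA on the same corpus takes it further, as the parent comment notes.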
It's very interesting, because I think the era after digitalization but before mass LLM usage is going to be the most intensely studied. We've lived through something that will sit on the cusp of history, for better or worse.
Taken to a hallucinated but logical conclusion, we might define a word such as "cene" to riff off of "meme" and "gene".
The c is for code. If adopted, we could spend forever arguing how the c is pronounced and whether the original had a cedilla, a circumflex, or rhymes with bollocks, which seems somehow appropriate. Everyone uses xene instead. The x is chi, but most people don't notice.
Me too, and I use LLMs often for personal and professional work. Knowing that colleagues are burning through $700/day worth of tokens, and a small fraction of those tokens were likely derived from my work while I get made redundant is a bit shite.
Yeah, that's the thing making my head spin: strip a 30% profit margin off that $700 and the underlying cost is still around 550 USD per day?
Probably going to be more than that for rocketship growth and investor expectations.
Is that the game? Lock in companies to this "new reality" with cheap tokens then once they fire all their devs, bait and switch to 2X the cost.
If you read history widely (across millennia and geographies), you'll note that most of the power-contests follow this pattern[0]. In the modern industrial world, the pattern becomes exponential rather than incremental. What I'm saying is that this is not unique to AI Labs[1]. This is caused by the deeply flawed and unbalanced system that we have constructed for ourselves.
[0]: The pattern, or, as gamers would call it, the "meta", is that every ambitious person/entity wants to control as much of the economic/material surplus as possible. The most effective and efficient (effort per unit of control) way of doing this is to make yourself into as much of a bottleneck as humanly possible. In graph theory this corresponds to betweenness centrality, and you want to maximize that value. In mundane terms, you want to be as much of a monopoly as you can be (Thiel is infamous for saying this, but it does check out, historically). To maximize betweenness, or to maximize monopoly, is to maximize how much society/economy depends on you.

This is such a dominant strategy (a game-theory term; in the modern gaming world they might call it a "cheesy strat", which just means the game lacks strategic variety, forcing players to hone that one strategy) that we even have some old laws (anti-trust, etc.) designed to prevent it. And it makes a lot of sense: Standard Oil was reviled because everything in the economy either required oil or required something that did.

20th-century USA did a lot to mitigate this. It forced monopolies like AT&T to fund general research like Bell Labs (still legendary) towards a public good (a kind of tax, but probably much more socially beneficial). It also broke up the monopolies, and passed anti-profit laws (e.g. hospitals were not allowed to make a profit until 1978; I have seen in the last 10 years a tiny cancer clinic grow into a massive gleaming hospital -- a machine that transforms sickness and grief into Scrooge McDuck vaults of cash).

This monopolistic tendency of the commercial sector is a tendency towards centralization, which yields efficiency, sure, but also creates the conditions for control, rent-seeking, and exploitation.
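The betweenness-centrality framing can be made concrete. Here's a toy brute-force sketch (pure stdlib, fine only for tiny graphs; the graph and node names are invented) showing that a hub every route must cross scores high while the spokes score zero:

```python
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """All shortest paths from s to t in an unweighted graph (adjacency dict)."""
    dist, preds, q = {s: 0}, {s: []}, deque([s])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:                 # first time we reach v
                dist[v], preds[v] = dist[u] + 1, [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:      # another equally short route
                preds[v].append(u)
    def unwind(v):
        if v == s:
            return [[s]]
        return [p + [v] for u in preds[v] for p in unwind(u)]
    return unwind(t) if t in dist else []

def betweenness(graph):
    """For each node, the pairwise shortest-path fraction passing through it."""
    score = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        for v in graph:
            if paths and v not in (s, t):
                score[v] += sum(v in p for p in paths) / len(paths)
    return score

# A "monopoly" hub that every trade route between the leaves must cross:
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
scores = betweenness(star)
print(scores)  # the hub dominates; the leaves sit on no one else's routes
```

The point of the footnote in graph terms: the monopolist's strategy is to rewire the network until they are the hub of every such star.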
[1]: Much of the cloud-computing craze was similar in character (and also failed to deliver on some of its promises, such as reducing/replacing IT overhead -- they just renamed IT to DevOps). And Web2 itself was about creating and monopolizing a new kind of ad channel and lead-generation machine. There is a funny twist: a capitalist society like the USA has much more deeply rooted incentives to create a panopticon than the communist states of the past ever did. Neither is pretty, of course. The communists demanded conformity and loyalty, while the capitalists demand consumption and rent.
I hadn't set it back up after moving. I gave OpenClaw ssh credentials and it updated the OS and packages, then couldn't get back in after a restart.
I plugged in a keyboard and screen, and it was stuck at boot: it couldn't mount a drive.
I sent OpenClaw screenshots and it told me to type in journalctl commands. Then it had me modify fstab so boot could continue.
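(The exact edit isn't described above; for context, one common fstab change that gets a machine past a failing mount at boot is marking the entry `nofail` so the boot doesn't drop to emergency mode. The device UUID, mount point, and filesystem below are hypothetical:)

```
# /etc/fstab -- hypothetical entry for the dying data drive.
# "nofail" lets boot proceed even if this disk can't be mounted;
# the systemd timeout keeps the boot from stalling while it tries.
UUID=1234-abcd  /mnt/data  ext4  defaults,nofail,x-systemd.device-timeout=5s  0  2
```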
After that, OpenClaw could get back in on its own. It found the drive I'd been using had 1300 bad sectors and was going to die, and saw that another drive was perfectly healthy. It said the bad sectors were all early on the disk and probably just filesystem metadata, so my files were probably fine.
It copied 1.5 TB to the newer drive and restored everything.
I probably would have thrown the whole box out, as I hadn't used it in a year and wasn't looking for a project like that.
My memory is there was a spate of SO-scraping sites that Google would surface above SO itself and just would not zap.
It would have been super trivial to fix, but Google didn't.
My pet theory was that Google was getting DoubleClick revenue from the scrapers, so it had an incentive to let them scrape and to promote them in search results.
I remember those too! There were seemingly thousands of them!
Reminds me of my most black-hat project -- a Wikipedia proxy with two AdSense ads injected into each page. It made me like $20-25 a month for a year or so, but sadly (nah, perfectly fairly) Google got wise to it.
Honestly, if I'd gotten in earlier I bet it would have made more. I also made zero attempts to SEO it -- there were no links to it anywhere else on the Internet, so it would have been in the very first tranche of useless duplicative spam sites of that type to be cleaned up. It was up from like 2010-2013 or so.
One thing that China has that the US does not is a marketplace like Alibaba. It's simply incredible that you can place an order for any component with a quantity slider that goes from "three" to "three full train-cars a day".
When starting a hardware product, it's so easy to buy small quantities at commercial prices, no questions asked.
In the US, it seems all sales are relationship-based. I need to fill out forms on bad websites and wait for emails from sales reps, who immediately sniff out that I only want a small order. The overhead of processing the order isn't deemed worth their time, and any US-based small-time project ends there.
These companies are not trying to be companies that sell an LLM to summarize text or write emails. They're trying to make a full Artificial General Intelligence. The LLMs pull in some money today, but are just a step towards what they're actually trying to build. If they can build such a thing (which may or may not be possible, or may not happen soon), then they can immediately use it to make itself better. At this point they don't need nearly as many people working for them, and can begin building products or making money or making scientific discoveries in any field they choose. In which case, they're in essence, the last company to ever exist, and are building the last product we'll ever need (or the first instance of the last product we'll ever produce). And that's why investors think they're worth so much money.
Some people don't believe this because it seems crazy.
Anyway, yes: they're trying to make their own chips so as not to be beholden to Nvidia, and are investing in other chip startups. And at the same time, Nvidia is thinking that if it can make an AI, why should it ever even sell its chips -- so it's working on that too.
> they're in essence, the last company to ever exist, and are building the last product we'll ever need
Physical reality is the ultimate rate-limiter. You can train on all of humanity's past experiences, but you can't parallelize new discoveries the same way.
Think about why we still run physical experiments in science. Even with our most advanced simulation capabilities, we need to actually build the fusion reactor, test the drug molecule, or observe the distant galaxy. Each of these requires stepping into genuinely unknown territory where your training data ends.
The bottleneck isn't computational - it's experimental. No matter how powerful your AGI becomes, it still has to interact with reality sequentially. You can't parallelize reality itself. NASA can run millions of simulations of Mars missions, but ultimately needs to actually land rovers on Mars to make real discoveries.
This is why the "last company" thesis breaks down. Knowledge of the past can be centralized, but exploration of the future is inherently distributed and social. Even if you built the most powerful AGI system imaginable, it would still benefit from having millions of sensors, experiments, and interaction points running in parallel across the world.
It's the difference between having a really good map vs. actually exploring new territory. The map can be centralized and copied infinitely. But new exploration is bounded by physics and time.
Fully agree, self replication is key. But we can't automate GPU production yet.
Current GPU manufacturing is probably one of the most complex human endeavors we've ever created. You need incredibly precise photolithography, ultra-pure materials, clean rooms, specialized equipment that itself requires other specialized equipment to make... It's this massive tree of interdependent technologies and processes.
This supply chain can only exist if it is economically viable, so it needs large demand to pay for the cost of development. Plus you need the accumulated knowledge and skills of millions of educated workers - engineers, scientists, technicians, operators - who themselves require schools, universities, research institutions. And those people need functioning societies with healthcare, food production, infrastructure...
Getting an AI to replicate autonomously would be like asking it to bootstrap modern economy from scratch.
I think that we're going to approach it from the top and bottom.
The second we have humanoid robots that can maintain themselves and operate their own assembly lines (and assembly lines in general), we'll see a massive shift.
I think the baseline for that will be a humanoid robot that has the price tag of a luxury car and that can load/unload the dishwasher as well as load/unload the washing machine/dryer and fold and put away clothes. That will be total boomer-bait for people who want to 'age in place' and long term care homes in general.
Once we have that, we can focus on self-replication at the micro scale. There is tremendous prior art in the form of ribosomes and cells in general. A single cell billions of years ago was able to completely reshape the entire face of the earth and give rise to every organism that has come and gone since. From fungi to great whales to giraffes, jellyfish, flying squirrels, and sequoia trees, the incredible variety of proteins in a myriad of configurations that life has produced is remarkable.
If we can harness that sort of self-replication to power our economy, it will make the idea of bootstrapping an economy on this world and others much easier.
It seems that anyone who has ever played games like Factorio or Satisfactory can readily extrapolate similar real-world conclusions. Physical inefficiencies are merely an interface issue that erodes over time with intelligent modularizations and staging of form factors at various scales.
> They're trying to make a full Artificial General Intelligence.
> then they can immediately use it to make itself better.
"AGI" is a notoriously ill-defined term. While a lot of people use the "immediately make itself better" framing, many expert definitions of AGI don't assume it will be able to iteratively self-improve at exponentially increasing speed. After all, even the "smartest" humans ever (on whatever dimensions you want to assess) haven't been able to sustain self-improving at even linear rates.
I agree with you that AGI may not even be possible or may not be possible for several decades. However, I think it's worth highlighting there are many scenarios where AI could become dramatically more capable than it currently is, including substantially exceeding the abilities of groups of top expert humans on literally hundreds of dimensions and across broad domains - yet still remain light years short of iteratively self-improving at exponential rates.
Yet I hear a lot of people discussing the first scenario and the second scenario as if they're neighbors on a linear difficulty scale (I'm not saying you necessarily believe that. I think you were just stating the common 'foom' scenario without necessarily endorsing it). Personally, I think the difficulty scaling between them may be akin to the difference between inter-planetary and inter-stellar travel. There's a strong chance that last huge leap may remain sci-fi.
>If they can build such a thing (which may or may not be possible, or may not happen soon), then they can immediately use it to make itself better.
This sounds like a perpetual motion machine or what we heard over and over in the 3d printing fad.
We have natural general intelligence in 8 billion people on earth and it hasn't solved all of these problems in this sort of instant way, I don't see how a synthetic one without rights, arms, legs, eyes, ability to move around, start companies, etc. changes that.
LLMs are a very good tool for a particular class of problems. They can sift through endless amounts of data and follow reasonably ambiguous instructions to extract relevant parts without getting bored. So, if you use them well, you can dramatically cut down the routine part of your work, and focus on more creative part.
So if you had that great idea that takes a full day to prototype, hence you never bothered, an LLM can whip up something reasonably usable in under an hour. So it will make idea-driven people more productive. The problem is, you don't become a high-level thinker without doing some monkey work first, and if we delegate it all to LLMs, where will the next generation of big thinkers come from?
AGI is only coming with huge amounts of good data.
Unfortunately for AI in general, LLMs are forcing data moats (whether passive or via aggressive legal attack), or generating so much crud data that the good data will get drowned out.
In fact, I'm not sure why I continue to uh, contribute, my OBVIOUSLY BRILLIANT commentary on this site knowing it is fodder for AI training.
The internet has always been bad news for the "subject expert" and I think AI will start forcing people to create secret data or libraries.
> This sounds like a perpetual motion machine or what we heard over and over in the 3d printing fad.
Except that it is actually what humanity and these 8 billion people are doing, making each successive generation "better", for some definition of better that is constantly in flux based on what it believed at the current time.
It's not guaranteed though, it's possible to regress. Also, it's not humanity as a whole, but a bunch of subgroups that have slightly differing ideas of what better means at the edges, but that also share results for future candidate changes (whether explicitly through the international scientific community or implicitly through memes and propaganda at a national or group level).
It took a long time to hit on strategies that worked well, but we've found a bunch over time, from centralized government (we used to be small tribes on plains and in caves) to the scientific method to capitalism (whether or not we'll consider it the best choice in the future, it's been invaluable for the last several centuries). They've all moved us forward, which is simple to see if you sample every 100 years or so going into the past.
The difference between what we've actually got in reality with the human race and what's being promised with AGI is speed of iteration. If a real AGI can indeed emulate what we currently have with the advancement of the human race, but at a faster cycle, then it makes sense it would surpass us at some point, whether very quickly or eventually. That's a big if though, so who knows.
I pretty much agree with this article - It seems like LLM companies are just riding the hype, and the idea that LLMs will lead onto General AI feels like quite a stretch. They’re simply too imprecise and unreliable for most technical tasks. There's just no way to clearly specify your requirements, so you can never guarantee you’ll get what you actually need. Plus, their behaviour is constantly changing which only makes them even more unreliable.
This is why our team developing The Ac28R has taken a completely new approach. It's a new kind of AI which can write complex, accurate code, handling everything from databases to complex financial models. The AI is based on visual specifications which let you specify exactly what you want; The Ac28R's analytical engine builds all the code you need, with no guesswork involved.
I've heard that switching to all of that will add 20-40 seconds of dead time waiting for the display to change, as the NFC transfers power to run the whole procedure. That'd be too long an interaction time with no feedback.