I would strongly recommend this podcast episode with Andrej Karpathy. I will poorly summarize it by saying his main point is that AI will spread like any other technology. It's not going to be a sudden flash after which everything is done by AI. It will be a slow rollout where each year it automates more and more manual work, until one day we realize it's everywhere and has become indispensable.
It sounds like what you are seeing lines up with his predictions. Each model generation is able to take on a little more of the responsibilities of a software engineer, but it’s not as if we suddenly don’t need the engineer anymore.
Though I think it's a very steep sigmoid, and we're still well down on the bottom half of it.
For math, it just did its first "almost independent" Erdős problem. In a couple of months it'll probably do another, then maybe one a month for a while, then one morning we'll wake up and find that, whoom, it solved 20 overnight and is spitting them out by the hour.
For software it's been "curiosity ... curiosity ... curiosity ... occasionally useful assistant ... slightly more capable assistant" up to now, and it'll probably continue like that for a while. The inflection point will be when OpenAI/Anthropic/Google releases an e2e platform meant to be driven primarily by the product team, with engineering just as co-drivers. It will probably start out buggy and need a lot of hand-holding (and grumbling) from engineering, but slowly but surely become more independently capable. Then at some point product will become more confident in that platform than in their own engineering team, and begin pushing out features based on it alone. Once that process starts (probably first at OpenAI/Anthropic/Google themselves, but spreading like wildfire across the industry), it's just a matter of time until leadership declares that all feature development goes through that platform and retains only as many engineers as are required to support the platform itself.
Hard to say. In business we'll still have to make hard decisions about unique situations, coordinate and align across teams and customers, and deal with real-world constraints and complex problems that aren't suitable to just feed to an LLM and let it decide. In particular, deciding whether or not to trust an LLM with a task will itself always be a human decision. I think there will always be a place for analytical thinking in business even if LLMs do most of the actual engineering. If nothing else, the speed at which they work will require an increase in human analytical effort to maximize their efficacy while maintaining safety and control.
In the academic world, and math in particular, I'm not sure. In a way, you could say it doesn't change anything, because proofs already "exist" long before we discover them, so AI just streamlines that discovery. Many mathematicians say that asking the right questions is more important than finding the answers. In which case, maybe math turns into something more akin to philosophy or even creative writing, and likewise follows the direction that we set for AI in those fields. Which is perhaps less than one would think: while AI can write a novel, and it could even be pretty good, part of the value of a novel is the implicit bond between the author and the audience. "Meaning" has less value coming from a machine. And so maybe math continues that way, with computers solving the problems but humans determining the meaning.
Or maybe it all turns to shit and the sheer ubiquity of "masterpieces" of STEM, art, everything renders all human endeavor pointless. Then the only thing left worth doing is for the greedy, the narcissists, and the power-hungry to take the world back to the middle ages, where knowledge and the search for meaning take a back seat to tribalism and warmongering, until the datacenters' power needs destroy the planet.
I'm hoping for something more like the former, but it's anybody's guess.
Does it? How exactly is the common Joe going to benefit from this world where the robots are doing the job he was doing before, as well as everyone else's job (aka, no more jobs for anyone)? Where exactly is the money going to come from to make sure Joe can still buy food? Why on earth would the people in power (aka the psychotic CxOs) care to expend any resources for Joe, once they control the robots that can do everything Joe could? What mechanisms exist for everyone here to prosper, rather than a select few who already own more wealth and power than the majority of the planet combined?
I think believing in this post-scarcity utopian fairy tale is a lot less imaginative and grounded than the opposite scenario, one where the common man gets crushed ruthlessly.
We don't even have to step into any kind of fantasy world to see that this is the path we're heading down: in our current timeline, as we speak, CEOs are foaming at the mouth to replace as many people as they can with AI. This entire massive AI/LLM bubble we find ourselves in is predicated on the idea that companies can finally get rid of their biggest cost centers: their human workers, with their pesky desires like breaks and vacations and workers' rights. And yet there are still somehow people out there who will readily lap up the bullshit notion that this tech is going to somehow be used as a force for good? That I find completely baffling.
Many people seem to have this ideal that UBI is inevitable and will solve a bunch of these sorts of problems.
But I don't see how UBI can avoid the same complexities as our tax systems, where it will be used to try to influence behaviors, growing cruft along the way just like taxes.
To me it's completely baffling how people imagine that, with human labor largely going obsolete, we will just stick with capitalism and all workers will go hungry in some dystopian fantasy.
Many cynics seem to believe rich people are demons with zero consideration for their fellow humans.
The rich and powerful are still people just like you, and they have an interest in keeping the general population happy. Not to mention that we have democratic mechanisms that give power to the masses.
We will obviously transition to a system where most of us can live a comfortable life without working a full time job, and it's going to be great.
> Many cynics seem to believe rich people are demons with zero consideration for their fellow humans.
Do they have consideration for their fellow humans? I certainly haven't observed that they give a shit about anyone or anything that isn't their bottom line. What exactly has Zuckerberg contributed to this world and to his fellow man, other than a mass data-harvesting operation that has enabled real-life genocides?
What has Bezos done for the average Amazon warehouse worker, other than stick them in grueling conditions where they even have their toilet breaks timed, just to squeeze every single inch of life out of his workers that he can? What have the people working for Big Oil done that is beneficial to humanity, other than suppressing climate change research and funding lobbying groups to hide the fact that they knew about climate change since the 70s? What have the tobacco execs done for humanity, other than bribing doctors to falsify medical research indicating that tobacco isn't harmful? I could go on and on about all the evils brought upon the world by psychotic executives and their sycophantic legions sucking the teat hoping for a handout, but we'd be here all day.
Sure, there are a few philanthropists out there doing some good things, bobbing around in the ocean of soulless psychopaths, but they're very much the exception.
> Not to mention that we have democratic mechanisms that give power to the masses.
Even (especially?) looking solely from a US POV, these democratic mechanisms are quickly and actively being eroded by these "considerate" billionaires like Thiel (who quite openly and proudly names his companies after literally evil things from Tolkien's works). They're talking about taking over Greenland to distract from them all being outed as pedophiles, for fuck's sake; what "democratic mechanisms"?
> We will obviously transition to a system where most of us can live a comfortable life without working a full time job, and it's going to be great.
I again don't see how this is "obvious", and you haven't outlined anything about how this utopia is supposed to work other than extremely vague statements. How is this utopian state more obvious than the one we are currently freefalling into, a dystopian police state where your every breath is being tracked in some database that is then shared with anyone with 3 pennies to pay to access the data?
Even in the utopia scenario, that experiment was taken to its natural conclusion on mice back in the 70s, and the results were... interesting, to say the least (google "Universe 25"). I feel like in many ways a devolution to feudalism and tribal warfare would be preferable.
The problem is that it can be subjective. Some people really like the "smooth motion" effect, especially if they never got used to watching 24 fps films back in the day. Others, like me, think seeing stuff at higher refresh rates just looks off. It may be a generational thing. The same goes for "vivid color" mode and those crazy high-contrast colors. People just like it more.
On the other hand, things that are objective, like color calibration, can be hard to "push down" to each TV because they might vary from set to set. Apple TV has a cool feature where you can calibrate the output using your phone's camera; it's really nifty. Lots of people comment on how good the picture on my TV looks, and it's just because it's calibrated. It makes a big difference.
Anyways, while I am on my soapbox: one reason I don't have a Netflix account anymore is that you need the highest tier to get 4K/HDR content. Other services like Apple TV and Prime give everyone 4K. I feel like that should be the standard now. It's funny to see this thread of suggestions for people to get a better picture, when many viewers probably can't even get 4K/HDR.
I feel like I’ve figured out a good workflow with AI coding tools now. I use it in “Planning mode” to describe the feature or whatever I am working on and break it down into phases. I iterate on the planning doc until it matches what I want to build.
Then, I ask it to execute each phase from the doc one at a time. I review all the code it writes or sometimes just write it myself. When it is done it updates the plan with what was accomplished and what needs to be done next.
This has worked for me because:
- it forces the planning part to happen before coding. A lot of Claude's "wtf" moments can be caught in this phase, before it writes a ton of gobbledygook code that I then have to clean up
- the code is written in small chunks, usually one or two functions at a time. It’s small enough that I can review all the code and understand before I click accept. There’s no blindly accepting junk code.
- the only context is the planning doc. Claude captures everything it needs there, and it's able to pick right up from a new chat and keep working (a rough sketch of what such a doc can look like is below)
- it helps my distraction-prone brain make plans and keep track of what I was doing. Even without Claude writing any code, this alone is a huge productivity boost for me. It's like having a magic notebook that keeps track of where I was in my projects so I can pick them up again easily.
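To make that concrete, here's a rough sketch of the kind of planning doc I mean (the feature, phases, and status markers are all made up purely for illustration):

    Plan: add CSV export to the reports page (hypothetical feature)

    Phase 1 - backend endpoint [done]
      - add GET /reports/<id>/export returning text/csv
      - unit tests for the empty-report and very-large-report cases

    Phase 2 - frontend button [in progress]
      - "Export CSV" button on the report toolbar
      - note: reuse the existing download helper

    Phase 3 - cleanup [todo]
      - remove the old XLSX export path

Each phase is small enough to review in one sitting, and the done/in progress/todo markers are what let a fresh chat pick up exactly where the last one left off.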
A nice practice that I try to follow is to always spell out any Three Letter Acronyms (TLAs) the first time they are used. Then, from that point onwards, the simple TLA can be used.
In this case, BPF (nowadays shorthand for eBPF) stands for extended Berkeley Packet Filter. It's a relatively new feature in the kernel that allows attaching small programs at certain "hook points" in the kernel (for example, when some syscall is called). These programs can pass information to userspace (like who is calling the syscall) and make decisions (whether to allow the call to proceed).
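Not from the article, just a hypothetical sketch to make that concrete: a minimal BPF program in C, following the usual libbpf conventions, that hooks the openat() syscall tracepoint and reports the calling process to userspace (the tracepoint and message are only illustrative):

    // Attaches at the openat() syscall entry tracepoint and reports the caller.
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("tracepoint/syscalls/sys_enter_openat")
    int report_openat(void *ctx)
    {
        char comm[16];

        /* name of the process making the syscall */
        bpf_get_current_comm(&comm, sizeof(comm));
        /* readable from userspace via the kernel trace pipe */
        bpf_printk("openat() called by %s", comm);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";

The "make decisions" part (actually denying a call) needs a different hook type, such as a BPF LSM program, but the overall shape is the same: a small, verified program attached at a kernel hook point.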
We do try to spell things out and/or link them in LWN articles to make the context available, but some things we just have to assume.
Additionally, spelling out "Berkeley Packet Filter" is not going to help any readers here; BPF is far removed from the days when its sole job was filtering packets, and that name will not tell readers anything about why BPF is important in the Linux kernel.
The same problem actually already exists for non-drone planes, because they must be able to operate in poor visibility conditions. The FAA issues NOTAMs for construction cranes if they pose a risk to nearby airports. One solution for drones would be to extend these NOTAMs to all cranes and other obstacles, and require drones to subscribe to them in order to operate in the airspace.
I would tend to agree; I think dev time is better spent supporting Proton. I have even seen benchmarks where Proton on Linux outperforms Windows. As a Steam Deck owner, I can say Proton is fantastic.
Yeah, I'm not seeing any evidence that this actually works. I would've liked to see some testing where they intentionally introduce a bug (ideally a tricky bug in a part of the code that isn't directly changed by the diff) and see if Claude catches it.
A good middle ground could be to allow the diff to land once the "AI quick check" passes, then keep the full test suite running in the background. If they ran them side by side for a while and the AI quick check caught every failing test, I'd be convinced.
I swear I read in a textbook once that Fourier discovered this while on a boat. He looked out at the waves on the ocean and saw how many different-sized waves combined to make up the larger waves. I could never find it again, but that visual helped me understand how the Fourier transform works.
https://www.dwarkesh.com/p/andrej-karpathy