Hacker News | dexterlagan's comments

The tech debt this title speaks of only applies if humans have to deal with it. The whole notion of tech debt assumes that humans are still doing the programming and that AI does not evolve, which is the opposite of reality.

Umm, no. Tech debt is a problem for AIs. You can argue current models have gotten smart enough to work despite it, but you still have the same downsides.

There is one thing everybody forgets when making such predictions: companies don't stand still. Nvidia and every other tech business is constantly exploring new options, taking over competitors, buying startups with novel technologies, and so on. Nvidia is no slouch in that regard, and their recent quasi-acquisition of Groq is just one example. So, when attempting to make predictions, we're looking at a moving target, not systems set in stone. If the people at the helm are smart (and they are), you can expect lots of action and ups and downs, especially in the AI sphere.

My personal opinion, having witnessed first hand nearly 40 years of tech evolution, is that this AI revolution is different. We're at the very beginning of a true paradigm shift: the commoditization of intelligence. If that's not enough to make people think twice before betting against it, I don't know what is. And it's not just computing that is going to change. Everything is about to change, for better or worse.


Execution is cheap? How about you try building a video game, and not three obvious and worthless automations I could have made as a quick fix at lunchtime.


I had the same idea. I think this is very useful. As it is, it does look like a proof of concept, and that's OK. I'd develop this as a book recommendation site and simply link to the books on Amazon or your preferred book source. Collect cash on referrals. Good stuff!



> Tools like SourceFinder must be paired with education — teaching people how to trace information themselves, to ask: Where did this come from? Who benefits if I believe it?

These are very important and relevant questions to ask yourself when reading about anything, but we should also keep in mind that even these questions can be misused and can drive you toward conspiracy theories.


We've been through many technological revolutions in computing alone over the past 50 years. The rate of progress of LLMs and AI in general over the past 2 years alone makes me think this worry may be unwarranted, akin to premature optimization. It also seems rooted in a slightly out-of-date, human-centric understanding of the tech/complexity debt problem. I don't really buy it. Yes, complexity will increase as a result of LLM use. Yes, eventually code will be hard to understand. That's a given, but there's no turning back. Let that sink in: AI will never be as limited as it is today. It can only get better. We will never go back to a pre-LLM world, unless we obliterate all technology in some catastrophe. Today we can already grok nearly any codebase of any complexity, get models to write fantastic documentation, and have them explain the finer points to nearly anybody. Next year we might not even need to generate any docs: the model built into the codebase will answer any question about it, and will semi-autonomously conduct feature upgrades or more.

Staying realistic, there are good reasons to believe that within the next 6-12 months alone, local, open-source models will match their bigger cloud cousins in coding ability, or get very close. Within the next year or two, we will quite probably see GPT-6 and Sonnet 5.0 come out, dwarfing all the models that came before. With that, there is a high probability that any comprehension or technical debt accumulated over the past year or more will be rendered completely irrelevant.

The benefits of any development done until then, even sloppy development, should more than make up for the downsides caused by tech debt or any kind of excessive complexity. Even if I'm dead wrong and we hit a ceiling on LLMs' ability to grok huge or complex codebases, it is unlikely to appear within the next few months. Additionally, behind closed doors the progress being made is nothing short of astounding. Recent research at Stanford might quite simply change all of these naysayers' minds.


Racket has a very nice built-in debugger in its DrRacket editor, with thread visuals and all. Too bad nobody uses DrRacket, or Racket anymore. Admittedly, even with the best debugger, finding the cause of runtime errors has always been a pain. Hence everybody's moving towards statically compiled, strongly typed languages.


I’ve had enough of misinformation. It’s killing our civilization. So I decided to do something about it.


This resonates with me, a lot. A few months ago I wrote up my initial thoughts here: https://www.cleverthinkingsoftware.com/programmers-will-be-r... Things have changed quite a bit since then, but I'm glad they changed for the better. Or so it seems.


Agreed, but in the case of the lie detector, it seems it's a matter of interpretation. In the case of LLMs, what is it? Is it a matter of saying "It's a next-word calculator that uses stats, matrices and vectors to predict output" instead of "Reasoning simulation made using a neural network"? Is there a better name? I'd say it's "A static neural network that outputs a stream of words after having consumed textual input, and that can be used to simulate, with a high level of accuracy, the internal monologue of a person who would be thinking about and reasoning on the input". Whatever it is, it's not reasoning, but it's not a parrot either.
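To make the "next-word calculator" framing concrete, here is a minimal, purely illustrative sketch (hypothetical Python with NumPy, not any actual model's implementation) of the single operation the network repeats at inference time: turn a score per vocabulary word into probabilities and sample the next token.

  import numpy as np

  def next_token(logits, temperature=1.0):
      # logits: one raw score per vocabulary word, produced by the network
      scaled = np.asarray(logits, dtype=float) / temperature
      scaled -= scaled.max()        # subtract the max for numerical stability
      probs = np.exp(scaled)
      probs /= probs.sum()          # softmax: scores -> probabilities
      return np.random.choice(len(probs), p=probs)

  # Generation is just this in a loop: feed the text in, get scores out,
  # sample one word, append it to the text, and repeat.

Whether that loop deserves the word "reasoning" is exactly the naming question above.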

