
The author assumes we're going to use AI more and more. I don't agree. I regularly outperform the AI pushers on my team, and I can talk about engineering in person too!


To me, the more interesting question is whether you without AI can outperform you using AI, not whether you can outperform someone else who is using AI.

I think AI has already gotten to the point where it can help skilled devs be more productive.


I have tested this. I have been coding for close to 20 years, in anything from web to embedded.

I got tired of hearing about vibe coding day in and day out, so I gave in, and I tried everything under the sun.

For the first few days, I started to see the hype. I was much faster at coding, I thought: I could just type a small prompt to the LLM and it would do what I wanted, much faster than I could have. It worked! I didn't bother looking at the code thoroughly; I was more vigilant the first day, and then less so. The code looked fine, so I'd just click "accept".

At first I was amazed by how fast I was going, truly, but then I realized that I didn't know my own code. I didn't know what was happening, when, or why. I could churn out lots of code quickly, but after the first prompt, the code was worth less than toilet paper.

I became unable to understand what I was doing, and as I read through the code very little of it made any sense to me. Sure, the individual lines were readable and the functions made some semblance of sense, but there was no logic.

Code was just splattered around without any sense of where anything should go, global state was used religiously, and slowly it became impossible for me to understand my own code. If there was a bug I didn't know where to look; I had to approach it as I would when joining a new company, except that in all my years, even the worst human code I have ever seen was not as bad as the AI code.


> I became unable to understand what I was doing, and as I read through the code very little of it made any sense to me. Sure, the individual lines were readable and the functions made some semblance of sense, but there was no logic

Yes, this happens, and it’s similar to when you first start working on a codebase you didn’t write

However, if instead of giving up, you keep going, eventually you do start understanding what the code does

You should also refactor the code regularly, and when you do, you get a better picture of where things are and how they interact with each other


No. It is different: when working with a codebase written by humans there is always some sanity. Even when looking at terrible codebases, there is some realm of reason that, once you understand it, makes navigating the code easy.

I believe you missed the part of my comment saying that I have been coding professionally for 20 years. I have seen horrible codebases, and I'm telling you I'd rather deal with the switch statement with 2000 cases (real story), many of which were hundreds of lines long, with C macros used religiously (same codebase). At a bare minimum, once you get over the humps with human-written code, you will always find some realm of reason. A human thought to do this; they had some kind of logic. I couldn't find that with the AI. I just found cargo-cult programming plus things shoved where they make no sense.


Have you tried requesting code written in the most human-readable fashion, with human-readable comments, which you could then choose to minify later?


I respectfully disagree. I have about the same years of experience as you, and now also 1-2 years of AI-assisted coding

If you stay on top of the code you are getting from the AI, you end up molding it and understanding it

The AI can only spit out so much nonsense until the code just doesn’t work. This varies by codebase complexity

Usually when starting from scratch, you can get pretty far with barely even looking at the code, but with bigger repos you'll have to be actively involved in the process and apply your own logic to what the AI is doing

If the code of what you are building doesn’t make sense, it’s essentially because you let it get there. And at the end of the day, it’s your responsibility as the developer to make it make sense. You are ultimately accountable for the delivery of that code. AI is not magic, it’s just a tool


It sounds like the parent commenter is giving it tasks and trying to make it come up with the logic. It's bad at this. Use it as a translator: I knock out the interface or write some logic in pseudocode and then get it to translate that into code, review it, generate tests, and bam, half an hour or more of coding has been done in a few minutes. All the logic is mine, but I don't have to remember if that function takes &foo or foo, or the right magic ioreader I need, or whatever…

Whenever I try to get it to do my work for me, it ends badly.

It can be my syntax gimp though, sure.
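
For example (a rough Python sketch purely for illustration; the function name and the CSV layout are made up), the "translation" input can be as small as a stub with the logic spelled out in pseudocode comments, and the model only has to fill in the mechanical parts:

    # Handed to the model as-is: signature plus logic in comments.
    from pathlib import Path

    def load_orders(path: Path) -> dict[str, float]:
        """Return the total order value per customer id."""
        # open the CSV at `path` and skip the header row
        # for each row: customer_id is column 0, amount is column 2
        # accumulate amount per customer_id, skipping rows that fail to parse
        # return the accumulated totals
        ...

The review step is then mostly checking that the generated body matches the comments, which is much faster than typing it out.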


This is a good approach that I have admittedly not thought of.

At that point however, is it really saving you that much time over good snippets and quick macros in your editor?

For me, writing the code is the easiest part of my job. I can write fast, and I have my vim configured in such a way that writing code is even faster.


>is it really saving you that much time over good snippets and quick macros in your editor?

I had someone say that to me about a month ago. I had mentioned online that one of the things I had the AI tooling do that morning was convert a bunch of "print" statements to logging statements. He said that was something he'd just have his editor's find/replace do. I asked him what sort of find/replace would, based on the content of the log message, appropriately select between "logging.debug", "info", "warning", and "error", because the LLM did a good job of that. It also didn't fall into issues like "pprint()" turning into "plogging.debug()" and the like.
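
To make that concrete (a hypothetical Python snippet, not the actual code from that conversation; the messages are invented), the point is that the logging level has to be inferred from what each former print() said, which a mechanical find/replace cannot do:

    import logging

    logger = logging.getLogger(__name__)

    def report(n: int, path: str, host: str, attempt: int, limit: int, err: str) -> None:
        # Each of these used to be a print() call. A blind find/replace would
        # map them all to one level; picking the level from the message content
        # is the part the LLM handled well.
        logger.info("loaded %d records from %s", n, path)                          # routine progress
        logger.warning("retrying connection to %s (%d/%d)", host, attempt, limit)  # recoverable
        logger.error("failed to parse %s: %s", path, err)                          # actual failure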


Clearly this affected you deeply enough to create a new HN crusade account. Genuinely curious questions: Did you share this experience with management and your peers? What was their response? What steps could realistically reverse the momentum of vibe coding?


> Did you share this experience with management and your peers? What was their response?

Among my peers, there seemed to be a correlation between programming experience and agreement with my experience. I showed a particular colleague a part of the code, and he genuinely asked me which one of the new hires had written it.

As for management, well, let's just say that it's going to be an uphill battle :)

> What steps could realistically reverse the momentum of vibe coding?

For me and my team it's simple: we just won't be using those tools, and we will keep being very strict with new hires about doing the same.

For the software industry as a whole, I don't know. I think it's at least partially a lost cause, just as the discussions about performance, and about caring for our craft in general, are.


I wish you luck, but at this point I think it is going down the same path as banning cell phones in the workplace/car/etc. That is, even with penalties, people will do it.


I can see sentiment on AI-generated code getting so bad that human-written code ends up strongly preferred, and even marketed as such.


I love the implicit and totally baseless questioning of motives in this reply.


AI writes code according to the instructions given. You can instruct it to architect and organize the code any way you want. You got the fruit of your inarticulation.


This is what I heard from many people online.

I have tried being specific; I have even gone as far as to feed it a full requirements document for a feature (1000+ words) in the prompt, and it did not seem to make any significant difference.


You need to use the Wittgenstein approach: "What can be shown cannot be said." LLMs need you to show them what you want; the model has seen nothing of your intent. Build up a library of examples. Humans also work better this way. It's just good practice.


I think that's why Cursor works well for me. I can say "write this thing that does this, in a similar way to other places in the codebase" and give it files for context.


You are experiencing the Dunning-Kruger effect of using AI. You used it enough to think you understand it, but not enough to really know how to use it well. That's okay, since even if you try to ignore and avoid it for now, eventually you'll have enough experience to understand how to use it well. Like any tool, the better you understand it and the better you understand the problems you're trying to solve, the better job you will do. Give an AI to a product manager and their code will be shit. Give it to a good programmer, and they're likely to ask the right questions and verify the code a little bit more so they get better results.


> Give an AI to a product manager and their code will be shit. Give it to a good programmer, and they're likely to ask the right questions and verify the code a little bit more so they get better results.

I'm finding the inverse correlation: programmers who are bullish on AI are actually just bad programmers. AI use is revealing their own lack of skill and taste.


You can literally stub out exactly the structure you want, describe exactly the algorithms you want, the coding style you want, etc, and get exactly what you asked for with modern frontier models like o3/gemini/claude4 (at least for well represented languages/libraries/algorithms). The fact that you haven't observed this is an indicator of the shallowness of your experience.
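
For what it's worth, "stubbing out the structure" can be as literal as handing the model a skeleton like this (a hypothetical Python sketch; the module and names are invented) and asking it to fill in the bodies without touching the interfaces:

    # ratelimit.py -- skeleton given to the model; only the bodies get filled in.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class TokenBucket:
        """Token-bucket rate limiter: `rate` tokens per second, bursts up to `capacity`."""
        rate: float
        capacity: float
        _tokens: float = field(default=0.0, init=False)
        _last: float = field(default_factory=time.monotonic, init=False)

        def allow(self, cost: float = 1.0) -> bool:
            """Refill from elapsed time, then spend `cost` tokens if enough are available."""
            ...

The docstrings carry the algorithm description; the type hints and defaults pin down the structure.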


> modern frontier models like o3/gemini/claude4 (at least for well represented languages/libraries/algorithms). The fact that you haven't observed this is an indicator of the shallowness of your experience.

I'm not chasing the AI train to be on the bleeding edge because I have better things to do with my time

Also I'm trying to build novel things, not replicate well represented libraries and algorithms

So... Maybe I'm just holding it wrong, or maybe it's not good at doing things that you can't copy and paste from github or stackoverflow


It always feels like the height of HN when some pseudo-genius attempts a snobbish reply but instead just confidently confirms their lack of literacy on the subject.

LLMs write code according to thousands of lines of hidden system prompts, their weights, their training data, and a multitude of other factors.

Additionally, they’re devoid of understanding.

I truly hope you do better in the future.


I have had great success at guiding LLMs to produce my desired output. Your baseless snark is a sign of your incompetence.


Or that your use case is well-documented across the internet and simple.

That would be more in line with how the technology works and less a sign of your inherent genius, though.


> AI writes code according to the instructions given

Nope. The factors are your prompt, the thousands of lines of system prompts, whatever bugs may exist inside the generation system, and the weights born from the examples in the training data, which can be truly atrocious.

The user is only partially in control (and only a minimal part at that). With a standard programming workflow, the output is deterministic, so you can reason about the system's behavior and truly be in control.


That makes sense. It’s also step 1 in your journey.

Maybe taking the AI code as input and refactoring it heavily will result in a better step 2 than your previous step 0 was.


It makes me feel like an absolute beginner, and not in a good way. It was such a terrible experience that I don't believe I will try this again for at least a year. If this is what programming will become in the future, then feel free to call me a luddite, because I will only write hand-crafted code.


This is a useless review without discussing the context and the carefully considered, well-scoped prompt you gave(?) it.


Have you tried adding more guidelines? Similar to the documentation you would provide to new members of the team.


Copy-pasted from a different comment in this thread:

> I have tried being specific; I have even gone as far as to feed it a full requirements document for a feature (1000+ words) in the prompt, and it did not seem to make any significant difference.


> I think AI has already gotten to the point where it can help skilled devs be more productive.

Not really. The nice thing about knowing how to do something is that you can just turn off your brain while typing the code, or think about architecture in the meanwhile. Then you just run the linter for syntax mistakes and you're golden. Little to no mental load.

And if you've been in the same job for years, you have mental landmarks all over the codebase, the internal documentation, and the documentation of the dependencies. Your brain runs much faster than your fingers, so it's faster to figure out where the bug is and write the few lines of code that fix it (or make the single-character replacement). The rest of the time is spent thinking about where similar issues may be and whether you've impacted something down the line (aka caring about quality).


Maybe that’s because the AI pushers are compensating for already not being as good.

What happens when other yous start using AI? I suspect they will obviously outperform you, just in sheer typing speed.


I don’t agree. There’s a “muscle” you train every time you think about problems and solve them, and I say muscle because it also atrophies.


But is the muscle the part where we copy and paste from Stack Overflow, and now ChatGPT, or is the muscle when there's a problem and things aren't working and you have to read the code and have a deep think? Mashing together random bits of code isn't a useful muscle; debugging problems is. If it's the LLM that mashes a pile of code together for me and I only jump in when there's a problem, isn't that an argument for LLM usage, not against?


I cannot talk about engineering in person because I do not know how to pronounce words, and I am on the spectrum. :/ I can write about it though!


Also, the fatigue of just "feeding the beast" comes on fairly quickly.



