They All Use It (thorstenball.com)
8 points by todsacerdoti on Nov 20, 2024 | hide | past | favorite | 8 comments


It's notable that the author decries the incuriosity of the AI skeptics but shows little curiosity about their reasons for being largely uninterested in this technology.

I know this is an imperfect analogy, but I would compare LLMs to autotune. Yeah, I might be able to produce a recording of a song that is in tune faster, but the song will be better if I actually sing it in tune (unless I am specifically using autotune for the effect). The more I rely on autotune, the less I will develop as a singer, and I will be reliant on a technology that is not under my control and that imposes itself on my creative process. In the short run, I might be better able to make inoffensive recordings with autotune, but I don't think I will make good music. I basically think LLMs will have a similar effect on programmers, and that they are more likely to make most programs worse and less comprehensible than existing technologies are.

So for now, I will read about others' experiences with LLMs (I'm not that incurious), but I have yet to see a use case that truly interests me.


> What I don’t get is how you can be a programmer in the year twenty twenty-four and not be the tiniest bit curious about a technology that’s said to be fundamentally changing how we’ll program in the future.

I'll grant you, it's entirely possible that the value proposition of LLM code assistants will eventually get good enough that I'll eat my words and adopt one. But for the moment, what I've seen doesn't really impress me: When you ask a code assistant for a big chunk of complicated code, it messes it up, and you have to play code reviewer. When you ask it to finish your line, you're paying a subscription fee so your text editor can make an API call to deliver marginally improved autocomplete. How much time does that really save me?

But people won't shut up about AI. And of course they won't—they're trying to sell something. Being able to advertise that you use AI is the main reason anyone's using it. Even if the functionality isn't that great yet, aren't you curious anyway? They're saying it'll change everything! It's marketing. I'm the opposite of curious—I'm irritated. I'm tired of people trying to sell me the "next big thing." I'll buy if and when the actual existing product genuinely appeals to me, and I'm not waiting for that to happen with bated breath.

I don't think I'm wrong to be unimpressed by the current capabilities of AI code assistants, but it's undeniably true that I've become negatively polarized against them by tech hype culture.


> what I've seen doesn't really impress me

I suspect that most people who express this view haven't tried cutting-edge LLMs and tools recently. I've met a lot of people who tried early versions of ChatGPT and Copilot a year or two ago, concluded that the tools aren't impressive or useful, and haven't revisited things since.

It's wild what Sonnet 3.5 can do with a tool like Aider; if you have a commit-sized piece of work in mind there's a very good chance it will get it right on the first try.
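For anyone who hasn't tried this combination, a minimal sketch of the Aider-plus-Sonnet workflow the commenter describes might look like the following (the file names and the example request are made up for illustration; check Aider's docs for current model shortcuts):

```shell
# Install aider and provide an Anthropic API key
pip install aider-chat
export ANTHROPIC_API_KEY=your-key-here

# Start aider against Claude Sonnet with the files you want it to edit
# (--sonnet is aider's shortcut for the latest Claude Sonnet model)
aider --sonnet src/app.py tests/test_app.py

# At the prompt, describe a commit-sized change, e.g.:
#   > add a --verbose flag that logs each request
# aider edits the files and commits the result to git
```

Aider auto-commits each accepted change to git, which is part of why a "commit-sized piece of work" maps naturally onto a single prompt.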


> What I don’t get is how you can be a programmer in the year twenty twenty-four and not be the tiniest bit curious about a technology that’s said to be fundamentally changing how we’ll program in the future.

I don't use it because I don't find it useful. Not sure why that's hard to understand.

> Absolutely, yes, that claim sounds ridiculous — but don’t you want to see for yourself?

Well, I did see for myself. If the situation changes in the future and genAI becomes useful to me, I'll find out in the natural course of things. Until/unless that day comes, I'm not using it. Lack of curiosity doesn't enter into it.

> Maybe that assumption was wrong

I don't think that assumption was wrong. I think the wrong assumption is that some devs don't want to use AI features due to a lack of curiosity or willingness to learn.


> Well, I did see for myself.

What, specifically, did you try? Like what LLM and what tools around that LLM?


I tried the ones that are most popular around here. But since these discussions have a strong tendency to devolve into "you just haven't used the right tools", I prefer to keep it vague in order to sidestep that.

I'm in no way saying these tools aren't useful to many. I'm just saying that I haven't found them useful to me, at least so far. To achieve the same quality of result, I have to spend more time with the tools than without them, and more of that time goes to the sort of task I would prefer to minimize.

And as I said, if/when the day comes that they are useful to me, I'll find out and start using them. That day just isn't today.


It's really interesting how some people seem to take skepticism, disinterest, disappointment, or apathy about AI almost personally. I would think that if the tools really are so game-changing, they could just let things be: the cold, hard reality will fall upon the skeptic, who will be left without a job unless he adopts these tools.

I think the real cold, hard reality is that these AI tools can't and won't be the revolution they're purported to be. There's too much messiness in everyday software engineering jobs: tribal knowledge, dismal documentation, company politics, a toxic co-worker, a manager all but seeking to make the project fail, lack of code consistency, apathy, etc.


> It's really interesting how some people seem to take skepticism, disinterest, disappointment, apathy about AI almost personally.

It's a bit like NFTs a few years ago. It's a cult, and they're all in.



