
It is overhyped for sure. This is among the biggest hype cycles we've seen yet. When it bursts, it'll be absolutely devastating. Make no mistake. Many companies will go out of business, and many people will be affected.

However, that doesn't mean AI will go away. AI is really useful. It can do a lot, actually. Adoption is slow because it's somehow not the most intuitive thing to use. I think that has a lot to do with tooling and human communication style - or the way we use it.

Once people learn how to use it, I think it'll just become ubiquitous. I don't see it taking anyone's job. The doomers who like to say that are people pushing their own agenda, trolling, or explaining away mass layoffs that were happening BEFORE AI. The layoffs are a result of losing a tax credit for R&D, overhiring, and the economy. Forgetting the tax thing for a moment, is anyone really surprised that companies overhired?? I mean come on. People BARELY do any work at all at large companies like Google, Apple, Amazon, etc. That's not quite fair, I know. Don't get me wrong, SOME people there work their tails off and do great things. That's not all of those companies' employees though. So what do you expect is going to happen? Eventually the company prunes. They go and mass hire again years later, see who works out, and they prune again. This strategy is why hiring is broken. It's a horrible grind.

Sorry, back to AI adoption. AI is now seen by some caught in this grind as the "enemy." So that's another reason for slow adoption. A big one.

It does work though. I can see how it'll help and I think it's great. If you know how everything gets put together, then you can provide the instructions for it to work well. If you don't, then you're not going to get great results. Sorry, but if you don't know how software is built, what good code looks like, AND how to "rub it the right way," you're out of luck. Or as people say, "prompt engineering."

I think for writing blog posts and getting info, it's easier. Though there are EXTREME dangers with it for other use cases. It can give incredibly dangerous medical advice. My wife is a psychiatrist and she's been keeping an eye on it, testing it, etc. To date, AI has done more to harm people than it has helped them in terms of mental health. It's also too inaccurate for that use. So that field isn't adopting it so quickly. BUT they are trying and experimenting. It's just going to take some time, and rightfully so. They don't want to rush into using something that hasn't been tested and validated. That's an understaffed field though, so I'm sure they'll welcome any productivity gain and help they can get.

All that said, I don't know what "slow" means for adoption. It feels like it's progressing quickly.


