
That's what everyone seems to suppose. I don't see why it's warranted, though.


Compare where we were one year ago to where we are now, then do the same for five years ago vs. now. The trajectory is pretty clear, is it not?


I agree with the general sentiment that things seem to be improving over time. But it won't always be like that: at some point we'll reach an AI winter again, and then it doesn't matter that the ecosystem moved incredibly fast for N years; you can't extrapolate that to future developments.


There is a lot of talk about an AI winter, and somehow I just kind of doubt it. Unlike in past decades, there is a ridiculous amount of money, energy, and time going into the space, and unlike 40 years ago (or even 10), there are things that can actually make money now and are not just academic research.

(See Stable Diffusion, LLaMA, ChatGPT.)

There will be businesses that actually make money on these technologies, and they will be research-heavy (even a 5% improvement is a big deal) while things are still getting figured out.

I could see the pace dropping back towards 2017-like rates, but I kind of doubt we will ever see a true AI winter like the one in the '90s and early '00s.

The field is just too young, with too many things as of yet untried, and I doubt funding will dry up any time soon. (There are too many interests at play, from Nvidia wanting to sell more chips, to Microsoft wanting to sell more productivity, to defence, to political concerns between the US and China.)

Yes, it won't go on forever, but this time does seem qualitatively different from past AI cycles. (Granted, I was not alive then.)


> The field is just too young with too many things as of yet untried

The field has been around since the 1950s with various summers and winters. Each summer has people saying it's now too big to fail, with ever-increasing resources and time being spent on it, only for it to eventually stagnate again for a while. If there is one field in computer science I wouldn't call "too young", it would be AI. The first "true" AI winter happened in the 1970s, and the second in the late 1980s. You seem to have missed some of them by a large margin.

It's the natural movement of heavily hyped ecosystems. They get hyped until there is no more air, then go back into "building foundations" mode until someone strikes gold and the cycle repeats all over again.


Not that the AI summer/winter cycle can never end, but people said exactly the same thing, that it's different this time, before previous winters too. We might see a plateau after transformers and realize we can't improve for some number of years.


I feel like that's a view from the outside looking in. There are always limits. This is a Moore's-law-type situation: it keeps advancing right up until it can't. That's not to say this is or isn't that case, but things only improve because magnificently smart people discover dozens of really clever little tricks. There is no guarantee that an undiscovered little trick exists in the place you hope it does.

I'm sure things will develop, but develop into a flawless Midjourney-but-for-video? Literally only time will tell; it's a fool's errand to extrapolate.



One (quite convincing) theory is that anything that can be achieved by a carbon-based neural network (e.g. the human brain) can also be achieved by a silicon-based neural network. The hardware may change, but the hardware's software expressiveness shouldn't be affected, unless there is a fundamental chemistry constraint.

Since human brains during dreams (lucid or otherwise) can generate coherent scenes and transform individual elements within a scene, diffusion-based models running on CPUs/GPUs should eventually be able to do the same.


> One (quite convincing) theory is that anything that can be achieved by a carbon-based neural network (e.g. the human brain) can also be achieved by a silicon-based neural network.

That the human brain is exactly equivalent in function to our current model of a neural network is a huge, unproven hypothesis.


It is not warranted, but it is the logical next step. You're not going to get Hollywood-quality generated video from one day to the next. See MJ results one year apart.

Indeed, technically that might not be possible due to the probabilistic nature of these models, and it may require a whole different technology. But one thing is for sure: enough labour and capital is going into it that the chances are not small.


Because that was the case with generative images and audio.



