Maybe in the future, when labs train more specifically on offensive work. Right now a lot of hand-holding is needed.
Even simple stuff, like training the models to recognize when they're stuck and should just go clone a repo or pull up the Javadocs instead of hallucinating their way through or falling back on simple internet searches.
Calling the greatest and last invention of man "AI thingies" is telling; it shows why our society will split into tech and non-tech communities in the future, like all the science fiction authors have predicted.
There are several inventions which are far greater than LLMs. To name two: computers and methods to generate electricity, things without which LLMs wouldn’t have been possible. But also harnessing fire, the wheel, agriculture, vaccines… The list goes on and on.
Calling LLMs “AI thingies” seems much more in tune with reality than calling them “the greatest invention of man” (and I’m steelmanning and assuming you meant “latest”, not “last”). You can’t eat LLMs or live in them, and they are extremely dependent on other inventions to function at all. They do not, in any way, deserve the title of “greatest invention”, and it’s worrying that we’re at that level of hyperbole. Though you’re certainly not the first one to make that claim.
I meant last. Calling it that is historical, and it shows you have never really deep-dived into the concept of AI if you don't know who used the term that way and why.
Ah, yes, the ad hominem attack with a dash of appeal to authority and just enough vagueness to attempt to skirt any criticism. Classic. Well, I for one am thoroughly convinced by that argumentative prowess, you sure showed me.
He did mean 'last', maybe don't steelman these arguments so dutifully?
Speak for yourself, friend. I don't believe you and think you're making a tragic mistake, but you're also my competition in a sense, so… you have fun with that.
Chips designed to run ResNet? I guess the Haskell compiler they built is impressive (it made it so 8 racks of chips designed to run ResNet can run Llama 70B with extremely low latency).
Edit: my information might be old; I don't know whether they successfully taped out their second-gen chip or not. Can anyone corroborate?