I think this is related to the concept of “aligned incentives” in a way.
The chain breaks when incentives aren’t aligned and there’s a cascade of crap that rolls downhill, producing what look like bad decisions. When, in fact, as the article points out, the decisions simply didn’t take knock-on effects into consideration.
I’ve learned that seemingly poor or even terrible decisions almost always make sense in the context of when and where the decision was made.
As LLM capabilities start to plateau, everyone with any sort of name recognition is scrambling to ride the hype to a big pay day before reality catches up with marketing.
Here now, you just need a few more ice-cold glasses of the Kool-Aid. Drink up!
LLMs are not on the path to AGI. They’re a really cool parlor trick and will be powerful tools for lots of tasks, but won’t be sci-fi cool.
Copilot is useful and has definitely sped up coding, but like you said, only in a boilerplate sort of way, and I need to clean up almost everything it writes.
I'm sure that impersonation of Taylor Swift really scared the representatives. A billionaire trying to copy the likeness of another billionaire for votes. I'm sure even some staunch conservatives realize how badly that can end.