A few months back I was pessimistic about AI; now I am the opposite. The shift happened when I realized that handing it an entire problem and expecting it to solve that is unrealistic. The real value comes from using AI at the right steps of your workflow or larger system.
I did a PhD in program synthesis (programming languages techniques), and one of the tricks there was to efficiently prune the space of candidate programs. With LLMs, you are much more likely to start with an almost correct guess, so the burden shifts to lighter verification methods.
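A minimal sketch of that shift, with a stand-in function playing the role of the LLM (all names here are hypothetical, not a real API): instead of pruning a vast program space, you take a few near-correct candidates and run a cheap verifier, here plain random testing, over them.

```python
import random

def propose_candidates():
    """Hypothetical stand-in for an LLM proposing implementations of abs().
    Most guesses are almost correct, which is the point."""
    return [
        lambda x: x if x >= 0 else x,   # wrong for negatives
        lambda x: -x,                   # wrong for positives
        lambda x: x if x > 0 else -x,   # correct
    ]

def verify(candidate, trials=100):
    """Lightweight verification: random testing against a reference,
    instead of exhaustively searching or pruning a program space."""
    for _ in range(trials):
        x = random.randint(-1000, 1000)
        if candidate(x) != abs(x):
            return False
    return True

def synthesize():
    """Return the first candidate that survives verification."""
    for cand in propose_candidates():
        if verify(cand):
            return cand
    return None

f = synthesize()
print(f(-5))  # the surviving candidate behaves like abs
```

The verifier does far less work than classic synthesis: it only has to reject a handful of wrong guesses rather than navigate the whole space of programs.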
I still do not believe the AGI hype. But I am genuinely excited. Computing has always been humans writing precise algorithms and getting correct answers. The current generation of LLMs is the opposite: you can be imprecise, but the answers can be wrong. We have to figure out what interesting systems we can build with that.