
You should really think twice before making statements like "AI will probably never do X well". Many formal linguists made very strong claims about the impossibility of AI learning some feature (insert your favorite here, such as pragmatic implicature), claims that are now being shown to be wrong.

For instance, Miles Cranmer's work on using GNNs for symbolic regression is a start towards useful new discoveries in physics. Transformers are just GNNs with a specific message passing function and position embeddings. It's not hard to see that, whether by a different architecture, by augmentation, or potentially even just by more of the same, we can get to new discoveries in physics with AI. The GNN symbolic regression work is evidence that, in a limited sense, it has already happened.
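To make the "transformers are GNNs" point concrete, here is a minimal sketch (toy dimensions, randomly initialized weights, all names my own) of single-head self-attention written as message passing on a fully connected graph: every token-node aggregates messages (value vectors) from all other nodes, weighted by softmax-normalized edge scores.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_as_message_passing(X, Wq, Wk, Wv):
    """X: (n_nodes, d) node features; returns updated node features.

    Self-attention viewed as one round of message passing on the
    complete graph over tokens: edge weights come from query-key
    scores, messages are the value vectors.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # (n, n) matrix of edge weights: each row sums to 1 over all neighbors.
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
    # Each node's update is the weighted sum of incoming messages.
    return A @ V

rng = np.random.default_rng(0)
n, d = 4, 8  # 4 token-nodes with 8-dim features
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention_as_message_passing(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated feature vector per node
```

Position embeddings (added to X before this step) and a learned message function are what specialize this generic GNN layer into a transformer block.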

As for grounding knowledge: judging only by the LLMs we have at exactly this moment is a rather short-sighted view. There is plenty of interest and work in the area, and I expect the problem will be addressed in a multitude of ways. Their ability with grounded physics knowledge is not perfect, but it is very good relative to the common knowledge of a person off the street. External sources alone make them much better, and that's just the exceedingly short-sighted analysis of what we have today.


