
his comparison of LLMs to search engines, in which an LLM is low resolution while a search engine returns the higher-resolution "actual" document, implies some perceptions of authorship or authority that I don't share.

I use LLMs to explore and contrast results that I can then test; the results exist as hypotheticals, not as authority about the state of anything. conceptually it's more of a lens than a lever. not to trap him in that contrast, but maybe these ideas are a forcing function that lets us see how separate our worldviews can be, instead of struggling to make one prevail.

it's as though the usefulness of an engine is measured by how much agency we can yield to it. with a search engine you can say "google or wiki told me," but an LLM does not confer that authority on us. these systems have no agency of their own, yet we can yield ours to them, the way we might to an institution. I don't feel this urge, so it's peculiar to see it described.

do we want our tech to become objects of deference, literally idols?

I love Chiang's work, and we need minds like his, and perhaps Ian McEwan and other literary thinkers with insight into human character (as opposed to plot- and object-driven sci-fi writers), to really apprehend the meaning and consequences of AI tech.
