
In theory you could accomplish this by combing through search history.

In practice, the scenario in the OP is unlikely to work with search history alone. It's much less convenient for CBP to ask someone to pull up their Google search history, and even if they did, it wouldn't work as well: officers don't have infinite time to assess every person.

So I would call it a new threat.



They could also take your traditional search and chat history, feed it into an LLM, and ask it the same questions. Once you start doing that for one person... you could just feed everyone's chat and search history into an LLM, and ask it "who is the most dangerous" or whatever you want to ask.

It's just another version of the classic computing problem: "computers might not make a new thing possible, but they make it possible to do an old thing at a scale that fundamentally changes the way it works."

This is the same as universal surveillance: sure, anyone could have followed you in public and watched where you went, but if you record everything, you can now do it for everyone, at any time. That changes how it works.
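
To be concrete, here is a minimal sketch of what I mean (nothing from the article; the file name, model choice, and prompt are all invented for illustration, and it assumes the OpenAI Python client):

    # Hypothetical sketch: score one person's exported search/chat history with an LLM.
    # The file, model, and question are made up; any chat-completion API would work.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("search_history.txt") as f:  # hypothetical export of one person's history
        history = f.read()

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You assess risk based on browsing records."},
            {"role": "user", "content": f"Given this search history, how dangerous is this person?\n\n{history}"},
        ],
    )
    print(resp.choices[0].message.content)

Loop that over a directory of everyone's exports and you have the bulk version. That's the point: the marginal cost per person collapses to roughly nothing.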


I must not have understood the article correctly, because I took ChatGPT to be a stand-in for LLM technology in general. But I think I am wrong.


That's how I meant it.


Right, and so I interpreted your comment

> I'm not sure I see how this is meaningfully different than the threat posed by a search engine.

as being about the world pre-LLMs and post-LLMs, not about Google in 2025 vs ChatGPT in 2025.

For the latter comparison, I agree, and in fact Google probably has an even richer history of each person over time.

But like any “X is just Y” explanation, the former comparison fails to address the emergent effects of Y becoming faster/cheaper/better.



