it will be up to governments to represent the people. A massive risk might be that GPT makes it trivial to simulate humans and thus simulate political demands to political leaders.
I think politicians and organisations might need to cut their digital feedback loops (if authentication proves too much of a challenge) and rely on canvassing IRL opinion to cut through the noise.
> I think politicians and organisations might need to cut their digital feedback loops (if authentication proves too much of a challenge) and rely on canvassing IRL opinion to cut through the noise.
They'll just get the results of "ChatGPT 17.0, write and produce an ad and astroturfing campaign to convince a cohort having the traits [list of demographic factors and opinions] that they should support position X and reject position Y" (repeat for hundreds of combos of demographic factors and opinions, deploy against entire populace) parroted back at them.
"Yeah but every position can do that, so it'll all even out" nah, the ones without a ton of money behind them won't be able to, or not anywhere near as effectively.
Basically, what we already have, but with messaging even more strongly shifted in favor of monied interests.
I feel like the governments that do this will/might be the ones whose supporting lobbies don't have AI tech companies or access to AI. But how long will that last? Take Monsanto, e.g. There is no govt that is not in its pockets. Now there are counters to it, as there are other industries (and subsequent lobbies) to balance Monsanto or act as alternative sources of funding. What would that counter be for AI, when AI is going to be in everything (including your toaster haha)?