Hacker News

Man, everyone is happy with these advancements, and they are impressive.

I’m here looking at users and wondering - the content pipelines are broader, but the exit points of attention and human brains are constant. How the heck are you supposed to know if your content is valid?

During a recent Apple event, someone on YT posted an AI-generated video of Tim Cook announcing a crypto collaboration; it had 100k viewers before it was taken down.

Right now, all the videos of rockets falling on Israel can be faked. Heck, the responses in the communities are already populated by swathes of bots.

It’s simply cheaper to create content and overwhelm society level filters we inherited from an era of more expensive content creation.

Before anyone throws the sink at me for being a Luddite or raining on the parade - I’m coming from the side where you deal with the humans who consume content, and then decide to target your user base.

Yes, the vast majority of this is going to be used to create lovely cat memes and other great stuff.

At the same time, it takes just 1 post to act as a lightning rod and blow up things.

Edit:

From where I sit, there are 3 levels of issues.

1) Day to day arguments - this is organic normal human stuff

2) Bad actors - this is spammers, hate groups, hackers.

3) REALLY Bad actors - this is nation states conducting information warfare. This is countries seeding African user bases with faked stories, then using that as a basis for global interventions.

This is fake videos of war crimes, which incense their base and overshadow the harder won evidence of actual war crimes.

This doesn’t seem real, but political forces are about perception, not science and evidence.



We are at a crossroads of technology, where we're still used to the idea that audio and video are decent proof that something happened, in a way in which we don't generally trust written descriptions of an event. Generative AI will be a significant problem for a while, but this assumption that audio/video is inherently trustable will relatively soon (in the grand scheme of things) go away, and we'll return to the historical medium.

We've basically been living in a privileged and brief time in human history for the last 100-200 years, where you could mostly trust your eyes and ears to learn about events that you didn't directly witness. This didn't exist before photography and phonographs: if you didn't witness an event personally, you could only rely on trust in other human beings who told you about it to know if it actually happened. The same will soon start to be true again, if it isn't already: a million videos from random anonymous strangers showing something happening will mean nothing, just like a million comments describing it mean nothing today.

This is not a brave new world of post-truth such as the world has never seen before. It is going back to basically the world we had before photo, video, and sound recordings.


That’s an interesting thought.

I think I would not like to live in a world in which democracy isn’t the predominant form of government. The ability of the typical person to understand and form their own opinions about the world is quite important to democracy, and journalism does help with that. But the modern version of image- and video-heavy journalism wasn’t the only thing we had the whole time; even as recently as the ’90s (I’m pretty sure; I was just a kid), newspapers were a major source. And somehow America was invented before photojournalism, but of course that form of democracy would be hard for us to recognize nowadays…

It is only when we got these portable video screens that stuff like YouTube and TikTok became really important news sources (for better or worse; worse I would say). And anyway, people already manage to take misleading or out of context videos, so it isn’t like the situation is very good.

Maybe AI video will be a blessing in disguise. At some point we’ll have to give up on believing something just because we saw it. I guess we’ll have to rely on people attesting to information, that sort of thing. With modern cryptography I guess we could do that fairly well.

Edit: Another way of looking at it: basically no modern journalist or politician has a reputation better than an inanimate object, a photo or video. That’s a really bizarre situation! We’re used to consulting people on hard decisions, right? Not figuring out everything by direct observation.


I'd argue it's a step or two more manipulative. Not only do bad actors have the ability to generate moving images which are default believed by many, they also have the ability to measure the response over large populations, which lets them tune for the effect they want. One step more is building response models for target groups so that each can receive tailored distraction/outrage materials targeted to them. Further, the ability to replicate speech patterns and voice for each of your trusted humans with fabricated material is already commonplace.

True endstage adtech will require attention modeling of individuals so that you can predict target response before presenting optimized material.

It's not just a step back, it's a step into black. Each person has to maintain an encrypted web of trust and hope nobody in their trust ring is compromised. Once they are, it's not clear even in-person conversations aren't contaminated.


> Further, the ability to replicate speech patterns and voice for each of your trusted humans with fabricated material is already commonplace.

Just like the ability to emulate the writing style of your trusted humans was (somewhat) commonplace in the time in which you'd only talk to distant friends over letters.

> Once they are, it's not clear even in-person conversations aren't contaminated.

How exactly could any current or even somewhat close technology alter my perception of what someone I'm talking to in-person is saying?

Otherwise, the points about targeting are fair - PR/propaganda has already advanced considerably compared to even 50 years ago, and more personalized propaganda will be a considerable problem, regardless of medium.


The difference between artisanal work and mass production is enough to make them separate products.

The rate of production is what makes them incomparable, no matter how similar the parallels may seem.


I feel as though I am honor-bound to say that this isn't new, and we haven't really been living in a place where we can trust in the way you claim. It's simply that every year it rapidly becomes more and more clear that there is no "original". You're not wrong; I just think it's important for people who care about such things to realize this is the result of a historical process which has been going on longer than we've all been alive. In fact, it likely started at the beginning of the 100-200 year period you're talking about, but its origins are much, much older than that.

Read Simulacra and Simulation: https://0ducks.wordpress.com/wp-content/uploads/2014/12/simu...

Or this essay from pre-war Germany: https://en.wikipedia.org/wiki/The_Work_of_Art_in_the_Age_of_...


Which was the era of insular beliefs, rank superstition and dramatically less use of human potential.

I feel it’s not appreciated that we are (were) part of an information ecosystem/market, and this looks like the dawn of industrial-scale information pollution. Like firms just dumping fertilizer into the waterways with no care for the downstream impacts, just a concern for the bottom line.


It's not all the way back as long as solid encryption exists: Tim Cook could digitally sign his announcements, and assuming we can establish his signature (we had signatures and stamps 200 years ago) video proof still works.

So we're not going all the way back, but the era of believing strangers because they have photographic or video proof is drawing to a close.


Cryptography is nice here, but the base idea remains the same: you need to trust the person publishing the video to believe the video. Cryptography doesn't help for most interesting cases here, though it can help with another level, that of impersonation.

Sure, Tim Cook can sign a video so I know he is the one who published it - though watching it on https://apple.com does more or less the same thing. But if the video is showing some rockets hitting an air base, the cryptography doesn't do anything to tell you if these were real rockets or its an AI-generated video. It's your trust in Tim Cook (or lack thereof) that determines if you believe the video or not.


All this talk of trust speaks to the larger issue here too - that we've lost so much trust in governments and other important institutions. I'm not saying it was undeserved, but it's still an issue we need to fix.


This is too much work for the human use case.

Practically speaking, no one is going to check provenance when scrolling through Reddit sitting on the pot.


Interesting thought. An alternative is a world where we can securely sign captured medium.


That only really matters if it's hard to feed generated data into a camera/microphone that does this signing. It's not that hard already (you can just film a screen showing the generated video for a very basic version of this), and if there was significant interest, I'm sure it would become commoditized very quickly. Not to mention that any signing scheme is quickly captured by powerful states.


Before photography was invented, mass communication was all just words on paper, right?

How would you know that the British burned down the White House during the War of 1812? Anyone could fake a paper document saying so. (Except many people were illiterate.)

As far as I can see you need institutions you can trust.


Everyone is focusing on photography.

1) it’s not the tech. It’s the rate of production. You had only 1 newspaper, no mass media, and boatloads of time in the 1800s

2) Before photography was created we lived in a world steeped in superstition, inequality and ignorance. A tiny economy compared to what we have today.

3) humanity changed with the advent of photography. It ushered in a new standard of proof that modern society depends on to this day.


If this were true, why haven’t we seen it with manipulated pictures?

Maybe I’m not well informed, but there seems to be no example of the issues you describe with photos.

I believe it’s actually worse than you think. People believe in narratives, in stories, in ideas. These spread.

It has been like this forever. Text, pictures, videos are merely ways to proliferate narratives. We dismiss even clear evidence if it doesn’t fit our beliefs and we actively look for proof for what we think is the truth.

If you want to "fight" back you need to start on the narrative level, not on the artifact level.


We have seen it with manipulation of pictures.

Hell - look at the fate of the rest of the world online. They’re basically thrown to the fucking wolves.

Minorities are lynched around the world after viral forwards. Autocrats are stronger than ever and authoritarian regimes have flourished like never before.

Trust and safety tools are vastly stronger for English than for any other language. See the language resource gap (“Lost in Translation,” Nicholas and Bhatia).

In America, the political divide has reached levels unimaginable. People live in entirely different realities.

Images from the Democratic side are dismissed as faked, and lies take so long to discredit that the issue has passed on, tiring fact-checkers and the public.

The original fake-news problem of Macedonian ad farms focused entirely on conservative citizens.


Also, cost. How many do you have to generate to get something you want? Does it take 1 or 100 attempts to get something reasonable, and what does each attempt cost? It might not affect Hollywood, but someone has to pay for this to be profitable for Meta. How many 5-gigawatt power stations (what OpenAI wants to build all over the country) will be required if lots of people use this?


Hopefully this becomes the limiting factor; however, generating more power isn’t that hard, and it doesn’t solve the rate-of-production issue.


The Information Bomb. There's a reason military types and spooks are joining the boards of OpenAI and friends.

https://www.goodreads.com/book/show/203092.The_Information_B...

> After the era of the atomic bomb, Virilio posits an era of genetic and information bombs which replace the apocalyptic bang of nuclear death with the whimper of a subliminally reinforced eugenics. We are entering the age of euthanasia.


There is some credence to the idea that the Third Reich was only possible due to mass media: radio, television, and movie theatres broadcasting and rebroadcasting information onto a populace that did not have experience with media overload and therefore had no resistance to it.

Not attempting to justify their actions or the outcomes, just that media itself is, and has long been known to be, a powerful weapon - like the fabled story of a city besieged by a greater army, whose defenders opened their gates to the invaders knowing that the invaders were led by a brilliant strategist.

The invader strategist, seeing the gates open, deduced that there must be a giant army laying in wait and that the gates being open were a trap, and so they turned and left.

Had they entered they would have won easily, but the medium of communication, an open gate before an advancing horde, was enough in and of itself to turn the tide of a pitched battle.

When we reach the point where we can never believe what we see or hear or think on our own, how will we ever fight?


It's just something to put ads next to. Selling ad spots is the business, and investors demand an increase even if they already have 3.5 billion pairs of eyeballs. https://www.404media.co/where-facebooks-ai-slop-comes-from/


This will just lead to people not taking videos as evidence anymore. Just as images of war crimes aren’t irrefutable evidence due to staging and Photoshop, videos will lose their worth as evidence. Which is actually a good thing in some instances: if someone blackmails you with nudes/explicit videos, you can just ignore it and claim it’s fake.


The solution to that is to make models both open weight and open source. That will level the playing field.


How will that help? How will uncle Joe be able to tell fake videos better with an open source model?


Uncle Joe will just stop assuming that just because there’s a video it is real. That hasn’t been the case for decades. About time uncle Joe caught on.


So what’s the plan to level the playing field in that case? Give everybody an equal amount of compute and ask them what sort of propaganda they’d like to have theirs contribute to?


I only care about being able to express myself more easily

Maybe get a job where interviewers are biased against my actual look and pedigree

Just ignore everyone else’s use of the tool


> Just ignore everyone else’s use of the tool

That's precisely the hard part!


Yeah... African users... oh poor infantile, gullible, creatures... so incapable of discerning truth from falsehood are the ones to be fooled by generative AI...I get the gist


A state actor could have already done that manipulation using CGI or something. The answer is not to trust the people and institutions one already sees as untrustworthy. As per your Israel example, I don’t personally trust them because I have low levels of trust in genocidal regimes, so even if IDF-asset Gal Gadot were to come to my door and tell me that I won a million dollars I would just slam the door shut in her face, never mind her and her ilk trying to convince me and people like me through videos posted on the internet of whatever it is they are trying to convince people of.

Again, plain common sense just works, most of the times.



