You are focusing too much on my specific problem instead of using it as a guide to understand your own situation.
Sure, we have mobs and you don't, but we are talking about AI here.
In fact, let's imagine a totally different culture to illustrate my point.
Imagine you are an Israeli, and people in your office have a habit of sending WhatsApp voice notes to confirm various things instead of calling, because that way you have a record but don't have to type every damn thing out. Totally innocent and routine behaviour; you are just doing what many other people do.
A colleague who is pissed at you for whatever damn stupid reason uses said voice notes to create a fake of your voice saying you support Hamas, with an online tool that doesn't cost much or require much skill... are you saying that just because you won't be lynched, there isn't a problem?
You are confused about why everyone is pissed at you and why your boss suddenly fired you, and by the time you find out the truth... the lie has spread to enough people in your social circle that there is no clearing your name.
Think of how little voice-sample data is required to generate an audio clip that sounds very realistic, and how much better it will get in a year. You don't need a fancy PC or technical knowledge for that; websites that do it cheaply already exist.
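To make the "low barrier" point concrete, here is a rough sketch of what this looks like with one open-source tool (I'm assuming Coqui's TTS package and its XTTS v2 voice-cloning model; the exact model name, filenames, and text are illustrative):

    # A minimal sketch, assuming the open-source Coqui TTS package (pip install TTS)
    # and its XTTS v2 model; model name and file paths here are placeholders.
    from TTS.api import TTS

    # Load a multilingual voice-cloning model (downloaded on first use).
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # "voice_note.wav" stands in for a few seconds of someone's voice notes.
    tts.tts_to_file(
        text="Whatever you want the victim to appear to have said.",
        speaker_wav="voice_note.wav",  # short reference sample of the target voice
        language="en",
        file_path="fake_clip.wav",
    )

That is roughly the whole thing; the paid websites wrap the same kind of capability behind a web form, which is why neither hardware nor skill is much of a barrier.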
Just because you weren't lynched is no solace.
People are the problem; AI is just providing quality tools that require minimal skill and cost, thus broadening the user base.
I rephrased my comment probably right before you posted this because I felt it was too confrontational.
The problem of people manufacturing evidence with synthesized voices should eventually cause audio recordings to lose importance from an evidentiary standpoint. In fact, the quicker its use spreads, the quicker it will get devalued. And that is good. Someday soon, someone who sounds like the CEO won't be able to drain the company bank account solely on the strength of his voice being trusted. And hopefully, along the same lines, a voice recording of Person X blaspheming will lose its evidential power, since it will be so easy to manufacture.