
But they should owe you for stealing your likeness without your awareness to promote their products. This isn't for satire purposes.


How much? This IP/copyright mentality is so 90s/2000s. Brings back memories of Napster. This Jeff Geerling, whom I have never heard of, will cry until some Spotify of AI appears and pays him some pennies. Maybe he wants to be the James Hetfield/Metallica of this generation. This guy doesn't have 1/10000 of the relevance Metallica had at the time, though.


> This Jeff Geerling which I have never heard of

> This guy doesn't have 1/10000 of the relevance Metallica had at the time

It's weird that you seem to think a person's level of fame is relevant to a discussion of their legal rights.


In terms of personality rights, it is relevant, although I would think Jeff is enough of a public figure to make a case here.


What legal rights? TIL voice imitators are criminals. /s


> TIL voice imitators are criminals. /s

First off, this reddit speak needs to go in a fire and die. Seriously, stop typing like this; nobody likes it.

Secondly, yes, voice imitators CAN be criminals, if they attempt or allude to being the real deal. You can't get on a phone interview with the news and say you're Barack Obama, even if your impression is really good. Yes, that's illegal, and it has been since before AI of this form existed.


> Secondly, yes voice imitators CAN be criminals.

Which is not the case under discussion; there is no impersonation.

> First off, this reddit speak needs to go in a fire and die.

Are you ok? I think you should calm down before posting.


I'm fine, thanks, and I would argue that yes, there is impersonation. You can argue why it's not impersonation, but you can't just say "nope, uh uh, no impersonation". That's not how arguments work; you can't begin from a position where you're already right.


> This IP/copyright mentality is so 90s/2000s. Brings back memories of Napster.

The trend has been going against copyright ever since the internet was invented. We used to go for passive consumption: radio, TV, books, print magazines. But that age has passed. We have changed. We prefer interactivity: games, social networks, searching the billions of pieces of content online, YouTube clips commented on and shared around. In this age, copying is like speaking.

Now comes AI, pushing this trend even deeper: more interactive, more remixing, more adapted to the user. We should take a hint: the idea of owning content doesn't make sense anymore. Any content can be recreated in seconds now, or similar content was already posted online years ago. Protecting expression is useless, and protecting styles would destroy creativity. Quite a catch-22.

We should instead look at the new models that popped up in the last few decades: open source, Creative Commons, Wikipedia, open scientific publication. Maybe we need to decouple attribution from payment, like scientific papers, which cite each other without monetary obligations. On social networks, comments respond to each other; it would not work if we had to pay to read. Even in real life, people don't pay to hear others speak, and they reuse ideas without issue.

I am aware this sounds bad for copyright, but I am trying to state the reality, not propose my own views. There are deep reasons we prefer interactivity to passive consumption. Copyright was fit for a different age.


How are you on Hacker News and have never heard of Jeff Geerling? He's the GOAT in the Ansible and Raspberry Pi world.


You would be surprised how niche those two things are. I don't care about either, and I'm sure six seconds of a tech guy's voice isn't the trademark of his content.


Funny, I first heard of Jeff Geerling over ten years ago in my dev circles.


Your inability to look even a few decades into the future to see the impact of this is depressing. You only seem to care about yourself and your current grift.


Welcome to Hacker News/YCombinator. It's what they do here. Ever since joining the community, it's been a speedrun of things explicitly covered as unethical during my CS Ethics course. I mainly stay here to keep a finger on how far things have slid toward "man-made horrors beyond comprehension".


The danger of this shit cannot be overstated. Four years ago there was already a video in which a deepfake of a president of the USA read a speech: In Event of Moon Disaster, https://www.youtube.com/watch?v=LWLadJFI8Pk. We of course know Nixon never gave this speech.

What happens when this "AI" is used to sway an election?

Last month a family got hospitalized after eating mushrooms they found and identified using an AI-generated book. They didn't know it was AI generated. What happens when (and this is not if, alas, but when) people die from this?

This shit is a danger to democracy and human lives. Napster was not.


> What happens when this "AI" is used to sway an election?

What do you mean "when"? It's already happening.

https://apnews.com/article/trump-taylor-swift-fake-endorseme...


> What happens when this "AI" is used to sway an election?

It already happened and the world went merrily on its way towards wherever the hell it goes.


This doesn't mean it's good or even fine. The world also had the Holocaust, and we moved on. But it would've been preferable not to have a holocaust.


You are right. Even though I think elections don't matter, there was one instance where it happened and the outcome actually mattered: the Brexit vote. But even there, I don't think it makes a huge difference whether politicians do the lying themselves or do it with the help of AI. As long as it's an open technology, the game is still fair.


> As long as it's an open technology the game is still fair.

I don't necessarily agree, because money has a lot to do with it. In theory, anyone can use AI for evil. In practice, only the richest can inflict damage. It's a matter of scale.

That's why me killing someone is pretty bad, but me building a nuclear warhead would be much worse. Luckily I don't have the power to build a nuclear warhead. Now anybody with enough money has the tech to just make one.


> In practice, only the richest can inflict damage. It's a matter of scale.

That's always been true for every technology ever, even the nuclear warheads you mentioned; they are not controlled by poor people. Even hacking, which was seen as a sort of equalizer, is far more dangerous in the hands of states that can hire whole armies of hackers.


Hi chx, long time no see! And I agree; deepfakes and voice cloning are already past the point of fooling relatives. Some are good enough that I have to spend time double-checking whether they're real. There are very real implications to all that, and being able to verify true from false is going to get more challenging in many circumstances.


> Last month a family got hospitalized after eating mushrooms they found and identified using an AI-generated book.

Yeah, mushrooms are very dangerous, but the point here is that the book could have been written by a human. So what's the best way forward? Ban A.I. books, and human books too?

A.I. is also, IMHO, the best way to identify mushrooms. Mushrooms differ drastically from one another when you magnify their spores using a microscope. That's not the case when looking at the fruiting body: mushrooms totally unrelated to one another may turn out to look very similar, depending on season, humidity, rain, elevation, tree hosts, temperature and soil.

However, when a human tries to examine the spores, they have to compare them against thousands of mushrooms to be sure. That's an amount of information that could only be tackled effectively by machine. A.I. may have a good chance of solving that problem.
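For illustration, a minimal sketch of what that mechanical comparison could look like, assuming spore micrographs have already been turned into embedding vectors by some image model. The species names, vector size, and reference data here are all hypothetical stand-ins:

    import numpy as np

    # Hypothetical reference set: one embedding per known species,
    # precomputed from validated spore micrographs by any image model.
    reference = {
        "Amanita phalloides": np.random.rand(128),
        "Agaricus campestris": np.random.rand(128),
        # ...thousands more entries in a real database
    }

    def nearest_species(query, refs):
        """Return the species whose embedding has the highest
        cosine similarity to the query embedding."""
        q = query / np.linalg.norm(query)
        scored = {name: float(q @ (v / np.linalg.norm(v)))
                  for name, v in refs.items()}
        best = max(scored, key=scored.get)
        return best, scored[best]

    species, similarity = nearest_species(np.random.rand(128), reference)
    print(f"closest match: {species} (similarity {similarity:.2f})")

The lookup itself is trivial; the hard part a real system would need is the curated, expert-validated reference set.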


That's kind of a different thing. Making the best AI possible for recognizing mushrooms, one that tries to produce the most factually correct answers, is different from letting AIs run rampant generating fiction.

Also, I'd want some kind of verification before using the mushroom app. Who made the app? Do they have people on their team familiar with mushrooms? Mycologists, or whatever you call them. Some random dude with free time and access to maybe 10 species of mushrooms in his backyard, even if his intent is good, is still dangerous.


LLMs cannot produce factual answers; that's not a property they have. What people call "hallucination" is the only thing they can do. Even if one occasionally happens to deliver a correct result, that's just the broken clock being right twice a day. The typical ones sound authoritative, and so people believe they are authoritative; this is well documented in Thinking, Fast and Slow. As Aza Raskin put it, they hit a zero-day in human cognition. But that doesn't change the fundamentals, only the perception of the output, which we need to consciously combat.

https://hachyderm.io/@inthehands/112006855076082650

> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

> Alas, that does not remotely resemble how people are pitching this technology.


I wasn't referring specifically to LLMs when talking about the mushrooms. You could do something more sophisticated. I'm not an expert on it, but you could probably compare the images against a known, validated dataset to give you a baseline and then use some fancy AI algorithm to home in on the best match, or do it the other way around: let the AI take a first guess and then validate that it's not hallucinating.

They're already trying to use AI for breast cancer scans and whatnot; I don't see why we can't do it for mushrooms.
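As a rough sketch of that second approach (guess first, then validate), assuming the AI's top guess and an embedding of the photo are available, plus a human-validated reference set to check against; every name and the threshold here are hypothetical:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(query_embedding, model_guess, validated, threshold=0.9):
        """Accept the model's guess only if the closest entry in the
        human-validated reference set independently agrees."""
        neighbor = max(validated, key=lambda n: cosine(query_embedding, validated[n]))
        if neighbor == model_guess and cosine(query_embedding, validated[neighbor]) >= threshold:
            return model_guess  # both methods agree with high similarity
        return None             # ambiguous: defer to a human expert

The safety property lives in the None branch: when the two methods disagree, the app refuses to answer rather than guessing.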


> the book could be written by a human.

I don't understand why so many people in these discussions are so keen to ignore the impact of scale and accessibility when it comes to new technology, and specifically this new technology. Yes, most dangerous things that can be done with AI can also be done by humans.

Is it not meaningful that these dangerous things can now be done far more cost effectively? It would've made no financial sense in the past to spend hours/days creating a fake mushroom identification book. You'd almost certainly never get enough sales to make it worth it, especially once people realized your book was nonsense and potentially dangerous (getting you delisted, as it seems like the seller did with this book). Now you can just ask an AI to "generate 100 book ideas, scripts, and images." Who cares if almost all of them make very little money when the time and dollar cost of creating them is near zero? (It looks like this book was physical, but this applies especially to digital books, videos, etc.)

Is it not meaningful that dangerous things can now be done (or may soon be able to be done) by more people with less skill? The time/money investment required to learn skills with the potential for destructive use is IMO a strong filter against the people who would do those destructive things in the first place.

OpenAI and I think Google have talked about making intelligence "too cheap to meter." I'm not sure that is a good thing. I could be convinced, but every poorly thought out dismissal of AI dangers only makes me increasingly sure that we aren't ready for it.

For those who are less concerned about AI dangers, consider maybe that premature, reckless deployment of AI also has serious potential for generating social backlash that might end up slowing or halting the adoption of AI for positive purposes. The average non-tech person I know thinks of AI as a useful improvement over Google, and maybe thinks AI-generated images are cool, but they aren't in love with it. It could disappear tomorrow and they'd shrug. In fact, public sentiment against AI (in the US) seems to be rapidly increasing, and is bipartisan [1]. If the promise of AGI is achieved, this will go into overdrive should (when?) mass unemployment arrive. Look how far right-wing politicians are able to go by drumming up fears about immigrants stealing jobs.

[1]: https://www.pewresearch.org/short-reads/2023/11/21/what-the-...


> Is it not meaningful that these dangerous things can now be done far more cost effectively?

It is meaningful. Humans can publish 10 books with 10 mistakes inside; A.I. can publish a trillion books with countless mistakes. Radically different scale.

Imagine this: after a writer finishes a book, he goes to a government office and electronically stamps the hash signature of the book as "Human Approved". Then anyone can look up the book's stamp in an internet database. Everything else is labeled as A.I.

Is it so difficult for a government to run software like that? How about open source software? (It doesn't exist, but it could be implemented by some people.)
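The mechanics are simple enough. A minimal sketch, assuming a hypothetical registry keyed by the SHA-256 digest of the finished manuscript (the registry and its functions are invented for illustration):

    import hashlib

    def book_fingerprint(path):
        """SHA-256 digest of the manuscript file: the value the
        author would register as "Human Approved"."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical public registry: digest -> registration record.
    registry = {}

    def stamp(path, author):
        registry[book_fingerprint(path)] = f"Human Approved: {author}"

    def check(path):
        return registry.get(book_fingerprint(path), "unregistered: label as A.I.")

Note that the hash only proves the registered file hasn't been modified since stamping; it says nothing about who (or what) actually wrote it, which is the gap the reply below points out.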

> In fact, public sentiment against AI (in the US) seems to be rapidly increasing, and is bipartisan

That cannot be true. A massive poll of at least 10,000 people could demolish that myth. The poll should be broken down by the age of participants rather than by political party. Candidate age brackets: 18-30 years old, 30-60, and 60+.

Ages 18-30 will be 80% in favor of A.I.


> After a writer finishes a book, he goes to a government office and electronically stamps the hash signature of the book as "Human Approved". Then anyone can look up the book's stamp in an internet database. Everything else is labeled as A.I.

Not sure what the point of this would be; the government worker will almost certainly have no ability to verify that the books people bring in weren't written with AI.

> That cannot be true. A massive poll of at least 10,000 people could demolish that myth.

The third link in the pew article I provided has some methodological details:

"Pew Research Center conducted this study to understand attitudes about artificial intelligence and its uses. For this analysis, we surveyed 11,201 U.S. adults from July 31 to Aug. 6, 2023."

> organized by the age of participants

I agree this would be interesting. I found a smaller survey (n=1500) that suggests that while young people are less anti-AI than older people, they are still heavily anti-AI [1]. Only 20% of Gen Z say they're "excited about AI," while 29% say they're skeptical and 18% say they don't trust it. This meshes with my experience; even colleagues who use and/or research AI are nervous about the impact it'll have on society.

> Ages 18-30 will be 80% in favor of A.I.

I'm curious why you would assume this. In scenarios where AI causes major job loss, young people have by far the most to lose, as they haven't had nearly as much time to build up resources and acquire property.

[1]: https://www.barna.com/research/generations-ai/


You think someone stealing the dude’s voice is the same as people downloading Metallica songs?

Are you trying not to be taken seriously?



