
I really wonder what harm would come to the company if they didn't talk about safety?

Would investors stop giving them money? Would users sue, claiming they now have PTSD from looking at all the 'unsafe' outputs? Would regulators step in and pass laws banning this 'unsafe' AI?

What is it specifically that company management is worried about?



All of the above! Additionally... I think AI companies are trying to steer the conversation about safety so that when regulations do come in (and they will), the legal culpability lies with the user of the model, not the trainer of it. The business model doesn't work if you're liable for harm caused by your training process - especially if the harm is already covered by existing laws.

One example would be if your model were being used to spot criminals in video footage and it turned out that the model's bias made it pick out one socioeconomic group over another. Most western nations have laws protecting the public against that kind of abuse (albeit not always fairly applied), and the fines are pretty steep.


They have already used "AI", with success, to give people loans, and it was biased. Nothing happened legally to that company.


They're attempting to guard themselves against incoming regulation. The big players, such as Microsoft, want to squash Stable Diffusion while protecting themselves, and they're going to do it by wielding the "safety is important and only we have the resources to implement it" hammer.


AI/ML/GPT/etc are looking increasingly like other media formats -- a source of mass market content.

The safety discussion is proceeding very much like it did for movies, music, and video games.


Safety is a very real concern, always has been in ML research. I'm tired of this trite "they want a moat" narrative.

I'm glad tech orgs are for once thinking about what they're building before putting out society-warping, democracy-corroding technology, instead of moving fast and breaking things.


It doesn't strike you as hypocritical that they all talk about safety while continuing to push out tech that's upending multiple industries as we speak? It's tough for me to see it as anything other than lip service.

I'd be on your side if any of them actually chose to keep their technology in the lab instead of tossing it out into the world and gobbling up investment dollars as fast as they could.


How are these two things related at all? When AI companies speak of safety, it's almost always about the "only including data a religious pastor would find safe, and filtering outputs" angle. How's the market and other industries relevant at all? Should AI companies be obligated to care about what happens to other companies? With that point of view, we should've criticized the iPhone for upending the PDA market, or Wacom for "upending" the traditional art market.


That would make sense if it were even slightly about avoiding "society-warping democracy-corroding technology", rather than about making sure no one ever sees a naked person, which is what would cause governments to come down on them like a ton of bricks.


This would be funny if we weren't living it.

Software that promotes the unchecked spread of propaganda, conspiracy theories, hostility, division, institutional mistrust and so on: A-OK.

Software that might show a boob: Totally irresponsible and deserving of harsh regulation.


Safety from what? Human anatomy?


See the recent Taylor Swift scandal. Safety from never-ending amounts of deepfake porn and gore, for example.


This isn't a valid concern in my opinion. Photo manipulation has been around for decades. People have been drawing other people for centuries.

Also, where do we draw the line? Should Photoshop stop you from manipulating the human body because it could be used for porn? Why stop there - should text editors stop you from writing about sex or describing the human body because it could be used for "abuse"? Should your comment be removed because it made me imagine Taylor Swift without clothes for a brief moment?


No, but AI has essentially zero learning curve and can be automated. I can't spit out 10 images of Tay per second in Photoshop. If I want to, and the API delivers, I can easily do that with AI. (Granted, coding this yourself involves a learning curve, but in principle, with the right interface - and they exist - I can churn out hundreds of images without actively putting in any work.)


I've never understood the argument about image generators being (relatively) fast. Does that mean that if you could Photoshop 10 images per second, we should've started clamping down on Photoshop? What exact speed is the cutoff here? Given that Photoshop is updated every year and includes more and more tools that accelerate your workflow (incl. AI-assisted ones), is there going to be a point when it gets too fast?

I don't know much about the initial scandal, but I was under the impression that there was only a small number of those images, yet that didn't change the situation. I just fail to see how quantity factors into anything here.


Yes, if you could Photoshop 10/sec it would be a problem.

Think of it this way: if one out of every ten phone calls you get is spam, you still have a pretty usable phone. Shift that by roughly three orders of magnitude, so that only 1 out of every 100 calls is real, and the system totally breaks down.

Generative AI makes producing realistic-looking fakes ~1000x easier; it's the one thing it's best at.
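
For what it's worth, a rough back-of-the-envelope sketch of that ratio shift, using only the illustrative numbers from the analogy above (nothing here is measured data):

    # Rough check of the "orders of magnitude" framing in the phone-spam analogy.
    # Numbers are the illustrative ones from the comment above, not real data.
    spam_per_real_before = 1 / 9   # 1 in 10 calls is spam  -> 1 spam call per 9 real calls
    spam_per_real_after = 99 / 1   # 1 in 100 calls is real -> 99 spam calls per 1 real call

    increase = spam_per_real_after / spam_per_real_before
    print(f"Spam per real call grows by ~{increase:.0f}x")  # ~891x, i.e. roughly three orders of magnitude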


>I just fail to see how quantity factors into anything here.

Because you can overload any online discussion / sphere with it. There were so many that X effectively banned searching for her at all, because if you did, you were overwhelmed by very extreme fake porn. Everybody can do it with a very low entry barrier, it looks very believable, and it can be generated in high quantities.

We shouldn't have clamped down on Photoshop, but realistically two things would be nice in your theoretical case: usage restrictions and public awareness building. There was no clear-cut point where Photoshop became so mighty that you couldn't trust any picture online. There were skills to be learned, people could identify the trickery, and it was all small-scale and gradual. And photo trickery has been around for ages - even Stalin did it.

But creating photorealistic fakes in an automated fashion is completely new.


But when we talk about specifically harming one person, does it really matter if it's a thousand different generations of the same thing or 10 generations that were copied thousands of times? It is a technology that lowers the bar for generating believable-looking things, but I don't know if it's the speed that is the main culprit here.

And in fairness to generative AI, even nowadays it feels like getting to a point of true photorealism takes some effort, especially if the goal is letting it just run nonstop with no further curation. And getting a local image generator to run at all on your computer (and having the hardware for it) is also a bar that plenty of people can't clear yet. Photoshop is kind of different in that making more believable things requires a lot more time, effort and knowledge - but the idea that any image online can be faked has already been ingrained in the public consciousness for a very long time.


That's fine. But the question was what are they referring to and that's the answer.


Doing it effortlessly and instantly makes a difference.

(This applies to all AI discussions)


> See the recent Taylor Swift scandal

but that's not dangerous. It's definitely worthy of unlocking the cages of the attack lawyers, but it's not dangerous. The word "safety" is being used by big tech to trigger and gaslight society.


I.e., controlling through fear


To the extent these models don't blindly regurgitate hate speech, I appreciate that. But what I do not appreciate is when they won't render a human nipple or other human anatomy. That's not safety, and calling it such is gaslighting.


People who cheer for their own disempowerment are fascinating.


The latter; there is already an executive order around AI safety. If you don't address it out loud you'll draw attention to yourself.

https://www.whitehouse.gov/briefing-room/presidential-action...


As the leader in open image models, it is incumbent upon us, as the models get to this level of quality, to take seriously how we can release models that are open and safe, taking legal, societal and other considerations into account.

Not engaging in this will indeed lead to bad laws, sanctions and more, as well as a failure to fulfil our societal obligation of ensuring this amazing technology is used for outcomes that are as positive as possible.

Stability AI was set up to build benchmark open models of all types in a proper way. This is why, for example, we are one of the only companies to offer dataset opt-outs (Stable Cascade and SD3 were trained with opt-outs applied), have given millions of supercompute hours in grants to safety-related research, and more.

Smaller players with less uptake and scrutiny don't need to worry so much about some of these complex issues. It is quite a lot to keep on top of; we're doing our best.


>it is incumbent upon us, as the models get to this level of quality, to take seriously how we can release models that are open and safe, taking legal, societal and other considerations into account

Can you define what you mean by "societal and other considerations"? If not, why not?


I could, but I won't - legal stuff :)


“We need to enforce our morality on you, for our beliefs are the true ones — and you’re unsafe for questioning them!”

You sound like many authoritarian regimes.


I mean open models yo


Likely public condemnation, followed by unreasonable regulations when populists see their campaign opportunities. We've historically seen this when new types of media (e.g. TV, computer games) debut, and there are already real, early signals of such actions here.

I don't think those companies being cautious is necessarily a bad thing, even for AI enthusiasts. Open-source models will quickly catch up without any censorship, while most of the public attacks concentrate on the high-profile companies, which have established some defenses. That would be a much cheaper price than living for decades with an unreasonable degree of regulation driven by populist politicians.


It's an election year.

They're probably more concerned about generated images of politicians in 'interesting' situations going viral than they are about porn/gore etc.


Stability is not an American company.


They risk reputational harm and, since there are so many alternatives, outright "brand cancellation". For example, vocal groups can lobby payment processors to deny service to any AI provider deemed unworthy. Ironic that tech enabled all of that behavior to begin with, and now they're worried about it turning on them.


What viable alternatives are there to Stable Diffusion? As far as I know, it's the only way to run good image generation locally, and that's probably a big consideration for any business dabbling in it.


It's not the only open image model. It is the best one, but it's not the only one.


Yeah, the word "good" is doing the heavy lifting here - while it's not the only one that can do it, it has a very comfortable lead over all alternatives.


> What is it specifically that company management is worried about?

As with all hype techs, even the most talented management are barely literate in the product. When talking about their new trillion $ product they must take their talking points from the established literature and "fake it till they make it".

If the other big players say "billions of parameters", you chuck in as many as you can. If the buzzwords are "tokens", you say "we have lots of tokens". If the buzzwords are "safety", you say "we are super safe". You say them all and hope against hope that nobody asks a simple question you are not equipped to answer, one that would show you don't actually know what you are talking about.


It's a bit rich when HN itself is chock-full of camp followers who pick the most mainstream opinion. Previously it was AI danger, then it became hallucinations, now it's that safety is too much.

The rest of the world is also like that. You can make a thing that hurts your existing business. Spinning off the brand is probably Google's best bet.



