
Every time I look at AI produced pictures, I feel discomfort and uneasy. These pictures are like a scene from a perfect nightmare.


I feel discomfort reading “I feel discomfort and uneasy” because discomfort is a noun and uneasy is an adjective.


You can both "feel discomfort" and "feel uneasy", so I think it works.


No. You see:

    void feel(x: Noun)
    void feel(x: Adj)
"discomfort and uneasy" is actually an instamce of the superclass of Noun and Adj, but there is no overload for that so it doesn't compile.


With better heuristics and fuzzy logic, our IDEs should detect and handle such usage without hassle.


I suspect that sort of autocorrection would be AI-complete.


I agree with this fellow human.


You must be a robot. If you'd written your post in capitals though, I'd totally have fallen for it.




Doesn't your mind expand that to "I feel discomfort and become uneasy", or is mine just broken?


My mind expanded it to "I feel discomfort and I feel uneasy," since both work. You can feel nouns and you can feel adjectives, just in a different sense. It sounds weird, because we like to construct our logic with parallelism [1], and in this case we're using different definitions of the word, "feel". This makes pulling "feel" out front in an associative manner less correct.

I feel[a] hunger, and I feel[b] sad. I feel[a or b, but not both] hunger and sad.

[a] transitive verb; a physical sensation, acting on the object (in this case, "hunger")

[b] intransitive verb [2]; an emotional state, described by the adjective (in this case, "sad")

[1] https://en.wikipedia.org/wiki/Parallelism_(grammar)

[2] https://en.wikipedia.org/wiki/Intransitive_verb

EDIT: Clarity. Also, IANAL (I am not a linguist), and generally you don't want to take linguistic advice from an engineer like me.


I semantically autocorrected to "I feel discomfort and unease".

Something about it being the same number of words with the smallest difference in letters; I can't rule out the possibility of a 'y'/'e' error.


That reminds me of a line from Green-Eyed Lady by Sugarloaf that I've always liked: "Setting suns and lonely lovers free"


It's the perfect way to convey that feeling, so I'm stealing it now.


Traditionally, with computer-generated stuff, you could clearly see the math in the algorithms (the sine waves and fractals and whatnot). With AI-generated stuff it looks... natural. Like someone actually drew this for an abstract art class. It's entirely unpredictable and unexplainable, except maybe for very vaguely applying the key words. It's a computer no longer letting you see how it thinks.


I think it actually shows that this kind of image-recognition AI doesn't really think, i.e. it hasn't developed high-level concepts of the things in the pictures it's trained on.

When these neural networks get an input image and spit out labels like "bird" or "car", they haven't actually recognized which parts of the image are a car, nor what pieces it's made of. Instead they have memorized some textures and simple shapes which go with the label. It provides the kind of knee-jerk reaction that allows your brain to make you jump when a large object approaches fast, or think there's a tiger hidden in the dirty laundry when you turn around in a dark room.

That's why, when you reverse the process, it doesn't create meaningful images, but clumps of relatively common textures found in the training set. It lacks the hierarchy of concepts that allows you to identify objects and distinguish them from the background, which a baby learns in their first two years.
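
For concreteness, here's a minimal sketch (an assumed setup, not anything from the article) of what "reversing the process" can look like: gradient ascent on the input pixels to maximize one class score of a classifier. With a real trained network this is roughly how the texture-clump images arise; the tiny untrained ConvNet and the class index below are stand-ins just to keep the sketch self-contained.

    # Sketch only: "classifier" stands in for a trained image-recognition net,
    # and class index 3 is a hypothetical "bird" label.
    import torch
    import torch.nn as nn

    classifier = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 10),
    )

    image = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([image], lr=0.05)

    for step in range(200):
        optimizer.zero_grad()
        score = classifier(image)[0, 3]   # how "bird"-like the net thinks the image is
        (-score).backward()               # gradient ascent on the pixels, not the weights
        optimizer.step()
        with torch.no_grad():
            image.clamp_(-3, 3)           # keep pixel values in a plausible range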


It's not so much that. Because their training data is limited, the NNs haven't learned the constraints that typify birds or cars. They can recognise the features and apply labels, but there has been no need for them to classify a car vs. the garbage you get out by generating images.

So all these garbage outputs would be classified as cars, because they fall in a part of the space the NN doesn't really have information about.
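
A toy illustration of that point (invented data, nothing to do with the actual nets in question): a classifier fit only on realistic examples will still hand out confident labels far outside the region it was trained on, because nothing forced it to learn what a "car" is not.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    cars = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))     # stand-in "car" features
    birds = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(200, 2))  # stand-in "bird" features
    X = np.vstack([cars, birds])
    y = np.array([1] * 200 + [0] * 200)

    clf = LogisticRegression().fit(X, y)

    garbage = rng.uniform(-50, 50, size=(5, 2))   # points nowhere near the training data
    print(clf.predict_proba(garbage))             # probabilities near 0 or 1: confident nonsense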


What always fascinated me is how those images look almost exactly like the hallucinations you get on some psychedelics (e.g. salvia divinorum, which was legal and sold publicly in stores for a while).

The AI must be pretty close if it can already match the output of a confused human brain.


Salvia is still legal in most states, and it is far from a typical psychedelic.


Well, there is some truth in that, because dreams create a world that's half reality, half abstraction, precisely on the boundary of the known and the unknown. These images reflect the same thing in essence.


Well, both the AI and our brain employ some hugely efficient compression, trained on lots of real-world data; and both these AI pictures and our dreams are sort of generated by "firing up" the compression machine from the other end with semi-random inputs.

So, yes, it's no wonder that they're alike in some ways, and disturbing, too.
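
A rough sketch of that "other end" idea, with PCA standing in for whatever compression a brain or a neural net actually learns (the dataset and the noise statistics are just illustrative choices): fit the compressor on real data, then decode semi-random codes instead of real ones.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    digits = load_digits().data                 # 8x8 digit images flattened to 64 values
    pca = PCA(n_components=16).fit(digits)      # the "compression machine"

    codes = pca.transform(digits)               # real images squeezed into 16 numbers each
    rng = np.random.default_rng(0)
    random_codes = rng.normal(loc=codes.mean(axis=0),
                              scale=codes.std(axis=0),
                              size=(4, 16))     # semi-random inputs with realistic statistics

    dreams = pca.inverse_transform(random_codes)  # run the machine "from the other end"
    print(dreams.shape)                           # (4, 64): vaguely digit-like blobs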


Knowing this, and that AI is so good at creating nightmarish pictures, would it be possible to attempt the reverse? Ask/train an AI to create the most frightening, most nightmarish rendering it can possibly produce?

Does such a picture even exist? One that, having seen it once, I wouldn't sleep for a week? Or, upon seeing it, would start crying without explanation?

Seeing something like this created by AI would be very impressive: "prepare to cry when you see this picture (guaranteed!)"



This is the best illustration of where AI is today. Yes, it can do great things, because we've tuned each system to do so; this is some of the tech that does it. But it only works with a lot of guidance, and outside those boundaries, anything goes.


So, kind of like children.


Like really dumb infants maybe.

Most kids are able to correctly identify things in their environment by age 1-ish, since they're using words to describe those things around that age. Specially trained AIs are about at that level. Based on the difficulty of captcha tasks, I think it's safe to say a 1-year-old could identify cars, signs, and storefronts about as well as Google's AI (if actually put in that situation; the 1-year-old probably couldn't conceptualize that some of the pictures in an on-screen grid represent cars or whatever). That's the level we're at, and that's with AI that's specially trained to recognize those things.


...and qualitatively different: where children have the deep concepts/relationships and poor rendering, machines fake output with rendering/mimicry ability. These attempts show where the weaknesses are. When the pictures get better, it will make more sense to allow more control (ignoring any singularity).

Having jokingly brought that up, it's the best reason for hybridizing implants. "We should all just learn to get along" still applies.


I typed "Eldritch abomination and cat" and I was not disappointed.


Agreed. Felt the same way listening to some AI-generated classical music, too.



