My mind expanded it to "I feel discomfort and I feel uneasy," since both work. You can feel nouns and you can feel adjectives, just in a different sense. It sounds weird because we like to construct our logic with parallelism [1], and in this case we're using two different definitions of the word "feel". That makes pulling "feel" out front in an associative manner less correct.
I feel[a] hunger, and I feel[b] sad. I feel[a or b, but not both] hunger and sad.
[a] transitive verb; a physical sensation, acting on the object (in this case, "hunger")
[b] intransitive verb [2]; an emotional state, described by the adjective (in this case, "sad")
Traditionally, with computer-generated stuff, you could clearly see the math in the algorithms (the sine waves and fractals and whatnot). With AI-generated stuff it looks... natural. Like someone actually drew this for an abstract art class. It's entirely unpredictable and unexplainable, except maybe for very vaguely reflecting the keywords. It's a computer no longer letting you see how it thinks.
I think it actually shows that this kind of image-recognition AI doesn't really think, i.e. it hasn't developed high-level concepts of the things in the pictures it's trained on.
When these neural networks get an input image and spit out labels like "bird" or "car", they haven't actually recognized which parts of the image are a car, or what parts it's made of. Instead they have memorized some textures and simple shapes that go with the label. This provides the kind of knee-jerk reaction that lets your brain make you jump when a large object approaches fast, or think there's a tiger hidden in the dirty laundry when you turn around in a dark room.
That's why, when you reverse the process, it doesn't create meaningful images, but clumps of relatively common textures found in the training set. It lacks the hierarchy of concepts that allows you to identify objects and distinguish them from the background, which a baby learns in their first two years.
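A rough idea of what "reversing the process" can look like in code: instead of asking a trained classifier for a label, you run gradient ascent on the pixels to find an image that excites a chosen label. This is only a sketch in PyTorch; the model, class index, and step count are placeholders, not the exact setup behind any particular set of generated images.

    import torch
    import torchvision.models as models

    # Pretrained ImageNet classifier; any convolutional classifier would do here.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    target_class = 817  # assumed here to be "sports car" in ImageNet; placeholder
    img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([img], lr=0.05)

    for step in range(200):
        optimizer.zero_grad()
        loss = -model(img)[0, target_class]  # push the chosen class score up
        loss.backward()
        optimizer.step()
        img.data.clamp_(0, 1)  # keep pixel values in a valid range

    # The result is rarely a recognizable car: mostly clumps of textures and
    # local patterns that happen to excite the "car" units, which is the
    # point being made above.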
It's not so much that. Because their training data is limited, the NNs haven't learned the constraints that typify birds or cars. They can recognise the features and apply labels, but there has been no need for them to distinguish a car from the garbage you get out of generating images.
So all these garbage outputs would be classified as cars, because they live in a part of the input space that the NN doesn't really have information about.
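That gap is easy to see with any trained classifier: feed it inputs that look nothing like the training set and it still hands out labels, because the decision regions cover the whole input space whether or not the network ever saw anything there. A quick sketch, reusing the same placeholder model as above, purely for illustration:

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    noise = torch.rand(8, 3, 224, 224)  # nothing like any training image
    with torch.no_grad():
        probs = torch.softmax(model(noise), dim=1)
    conf, label = probs.max(dim=1)

    # The network still picks a class for every noise image; whatever
    # confidence it reports means little, because this region of input
    # space was never constrained by training data.
    print(list(zip(label.tolist(), conf.tolist())))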
What always fascinated me is how those images look almost exactly like the hallucinations you get on some psychedelics (e.g. salvia divinorum, which was legal and sold publicly in stores for a while).
The AI must be pretty close if they can already match the output of a confused human brain.
Well, there is sort of a truth in that, because dreams create a world that's half reality, half abstraction, precisely on the boundary of the known and the unknown. These images reflect the same thing in essence.
Well, and both the AI and our brain employ some hugely efficient compression, trained on lots of real world data; and both these AI pictures and our dreams are sort of generated by "firing up" the compression machine from the other end with semi-random inputs.
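As a toy illustration of that "firing up the compression machine from the other end" idea: take the decoder half of an autoencoder (the decompression side) and feed it random latent codes instead of encodings of real images. The architecture and sizes below are made up; a real decoder would first be trained on lots of images.

    import torch
    import torch.nn as nn

    latent_dim = 32

    # Stand-in for the "decompression" half of a trained autoencoder.
    decoder = nn.Sequential(
        nn.Linear(latent_dim, 256),
        nn.ReLU(),
        nn.Linear(256, 28 * 28),
        nn.Sigmoid(),
    )

    # Semi-random inputs in, image-shaped outputs out. With a trained
    # decoder the outputs land somewhere between memory and noise, which
    # is roughly the dream-like quality described above.
    z = torch.randn(16, latent_dim)
    with torch.no_grad():
        samples = decoder(z).view(16, 1, 28, 28)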
So, yes, it's no wonder that they're alike in some ways, and disturbing, too.
Knowing this, and that AI is so good at creating nightmarish pictures, would it be possible to attempt the reverse? Ask/train an AI to create the most frightening, most nightmarish rendering it can possibly produce?
Does such a picture even exist? One that, having seen it once, I wouldn't sleep for a week? Or, upon seeing it, would start crying without explanation?
Seeing something like this created by AI would be very impressive: "prepare to cry when you see this picture (guaranteed!)"
This is the best illustration of where AI is today. Yes, it can do great things, because we've tuned each system to do so, and this is some of the tech that does it. But it only works with much guidance, and outside those boundaries, anything goes.
Most kids are able to correctly identify things in their environment by age 1-ish, since they're using words to describe those things around that age. Specially trained AIs are about at that level. Based on the difficulty of captcha tasks, I think it's safe to say a 1yo could identify cars, signs and store fronts about as well as Google's AI (though put in the actual situation, the 1yo probably couldn't conceptualize that some of the pictures in an on-screen grid represent cars or whatever). That's the level we're at, and that's with AI that's specially trained to recognize those things.
...and qualitatively different. Children have the deep concepts/relationships but poor rendering, while machines fake output with rendering/mimicry ability. These attempts show where the weaknesses are. When the pictures get better, it will make more sense to allow the machines more control (ignoring any singularity).
Having jokingly brought that up, it's the best reason for hybridizing implants. "We should all just learn to get along" still applies.