> What always fascinated me is how those images look almost exactly like the hallucinations you get on some psychedelics... The AI must be pretty close if they can already match the output of a confused human brain.
> Traditionally, with computer generated stuff, you could clearly see the math in the algorithms (the sine waves and fractals and whatnot). With AI generated stuff it looks... natural... It's a computer no longer letting you see how he thinks.
The paper "Deep Image Prior" by Dmitry Ulyanov et al. gives compelling evidence that the structure of convolutional neural networks already encodes strong knowledge about the appearance of natural images, independent of any specific parameters (learned weights). Independence from parameters means it's independent from what task the network was trained to accomplish, and of the training algorithm.
This helps explain (IMO) why a neural network with "wrong" weights (meaning the training process did not fully meet the goal of the project) still produces images that look like plausible activations of the human visual cortex, rather than harsh mathematical patterns: the convolutional network structure itself is biased towards natural-looking images.
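For concreteness, here is a minimal sketch of the paper's core experiment (denoising), assuming PyTorch. The real architecture is a much deeper hourglass network; the layer widths, step count, and learning rate here are placeholders, not the authors' settings:

    # Deep-image-prior idea: fit a randomly initialized CNN to a single
    # noisy image from a fixed random input, and stop early. The network
    # tends to fit the natural-image content before it fits the noise,
    # so the structure itself acts as the prior.
    import torch
    import torch.nn as nn

    def small_cnn(channels=32):
        # Tiny stand-in for the paper's hourglass architecture.
        return nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def denoise(noisy, steps=1500, lr=0.01):
        # noisy: (1, 3, H, W) tensor with values in [0, 1]
        z = torch.randn(1, 32, *noisy.shape[-2:])  # fixed random input
        net = small_cnn()
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(steps):  # early stopping is the only regularizer
            opt.zero_grad()
            loss = ((net(z) - noisy) ** 2).mean()
            loss.backward()
            opt.step()
        return net(z).detach()

No dataset, no pretrained weights: the only "knowledge" available is the convolutional structure, yet the output looks natural rather than like raw noise or fractals.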
The connections between neural networks and the human brain are superficial at best, and it's unwise to push the analogy beyond its limits. The authors make no such claims, and I don't think you have the standing to either.
paper: https://arxiv.org/abs/1711.10925
third-party blog post: http://mlexplained.com/2018/01/18/paper-dissected-deep-image...