
See also Mark V Shaney https://en.wikipedia.org/wiki/Mark_V._Shaney

This was a class assignment in college; we had a lot of fun with it, and one of my classmates, the brilliant Alyosha Efros, decided to apply the exact same technique to images instead of text. It turned into a paper that revitalized texture synthesis in the Siggraph community. The most interesting part (in my opinion) is that he ran it on images of text, and it produces images of readable nonsense text! With the right window size, perhaps there's a nonzero possibility of producing the same text either way. This always struck me as very meta, and it makes me wonder if there are ways to go in reverse with image processing (or other types of data) — whether there's a latent underlying representation for the information contained in image content that is much smaller and more efficient to work with. Neural networks might or might not be providing evidence.

https://people.eecs.berkeley.edu/~efros/research/EfrosLeung....
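For anyone unfamiliar with the text version of the technique, the Mark V. Shaney approach is just an order-n Markov chain over words: record which words follow each n-word prefix in a corpus, then random-walk those observed continuations. A minimal sketch (function names and the toy corpus are my own, not from the original assignment):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the list of words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Random-walk the chain, emitting only continuations seen in the corpus."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(list(chain.keys())))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: this prefix only appears at the corpus tail
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Efros–Leung texture synthesis does the same thing in 2D: the "prefix" is the already-filled neighborhood around a pixel, and the "continuation" is sampled from pixels in the example image whose neighborhoods match.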



That's amazing! It's a literal uncrop. I'll have to look into this later.



