
> "this is exactly the way the human brain works"

I'm always puzzled by such assertions. A cursory look at the technical details of the iterated attention-plus-perceptron transformations shows that it's just a convoluted and powerful way to query the training data, a "fancy" Markov chain. The only rationality it can exhibit is the rationality already embedded in the dataset. Trained on nonsensical data, it would generate nonsense; trained on a partially nonsensical dataset, it would generate an average of truth and nonsense that maximizes some abstract algorithmic objective.
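
To make the analogy concrete (a toy sketch only - a bigram Markov chain is vastly simpler than a transformer, and the corpus strings here are made up for illustration): a model like this can only ever emit transitions that were already present in whatever it was trained on, sensible or not.

    # Toy illustration of the "fancy Markov chain" point: the model can only
    # reproduce statistics that already exist in its training corpus.
    import random
    from collections import defaultdict

    def train_bigram(corpus):
        """Record, for each token, the tokens that followed it in training."""
        table = defaultdict(list)
        tokens = corpus.split()
        for cur, nxt in zip(tokens, tokens[1:]):
            table[cur].append(nxt)
        return table

    def generate(table, start, length=10):
        """Sample a sequence; every transition was seen verbatim in training."""
        out = [start]
        for _ in range(length):
            choices = table.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))
        return " ".join(out)

    sensible = "the cat sat on the mat and the cat slept"
    nonsense = "mat the slept on cat sat the and cat the"
    print(generate(train_bigram(sensible), "the"))  # echoes the sensible corpus
    print(generate(train_bigram(nonsense), "mat"))  # echoes the nonsense corpus

Feed it sense and it parrots sense; feed it nonsense and it parrots nonsense. There is no mechanism by which it could tell the two apart.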

There is no knowledge generation going on, no rational examination of the dataset through the lens of an internal model of reality that would allow invalid premises to be rejected. The intellectual food arrives already chewed and digested in the form of the training weights; the model just mechanically extracts the nutrients, as opposed to venturing into the outside world to hunt.

So if it works "just like the human brain", it does so only in a very remote sense, the same way a basic neural net works "just like the human brain", i.e. individual biological neurons can be said to be somewhat similar to artificial ones.



If a human spends the first 30 years of their life in a cult, they will also be speaking a lot of nonsense - from our point of view.

Sure, we have a nice inner loop: we do some pruning, picking and choosing, updating, weighting things based on emotions, goals, etc.

Who knows how complicated those things will prove to model/implement...



