
Well, first the caveat: there are indeed quite useful and interesting results that can come from this approach, including discovering unexpected connections and underlying patterns between things humanly/socially viewed as "separate domains"; and new ways of mixing/ensembling techniques are not particularly new, but can lead to real advances in both performance and insight. That said, I'm guessing what you mean by "super pessimistic" is being asked to "throw water on the notion that there is literally one model to learn them all".

At a high level, I will try to do so with three concepts and an analogy.

First concept: opportunity cost
Second concept: objective definition
Third concept: qualitative similarity/monism

The analogy I use is physical fitness. If someone tried to tell you there was "one fitness regime to rule them all", you would, hopefully, step back and say something like:

"Well hang on a sec...what even is fitness?...and even once we MIGHT agree that there is some "thing" we're both calling fitness, what if the "thing" we agree on is fundamentally composed of qualitatively different components or atomistic concepts...and if that is the case, is it really conceivable that there is no opportunity cost between maximizing all qualitatively different concepts? How would we even agree on how to compare them?"

Put into plain English: I think most relatively advanced thinkers understand this about fitness. There is no "one greatest athlete" and there is no "one ultimate training regime". There is no objective way to rank or compare a tennis player against a linebacker, a golfer, a sprinter, or a marathon runner. Additionally, the regimes and body types that make you good at some of these fundamentally make you worse at others. It might even be worse than that... there might not even be a way to compare or rank athletes transitively WITHIN domains! Ay caramba!
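
To make the intransitivity point concrete, here is a toy sketch (the athletes and win probabilities are entirely invented for illustration): three athletes in the SAME sport whose head-to-head results form a rock-paper-scissors cycle, so no "best athlete" ranking survives contact with the matchups.

    # Hypothetical head-to-head win probabilities (invented numbers)
    # among three athletes in the same sport:
    beats = {
        ("A", "B"): 0.7,  # A usually beats B
        ("B", "C"): 0.7,  # B usually beats C
        ("C", "A"): 0.7,  # C usually beats A
    }

    # Each of these candidate rankings is contradicted by one matchup:
    # the "lowest"-ranked athlete reliably beats the "top"-ranked one.
    for top, mid, low in (("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")):
        print((top, mid, low), "broken by", (low, top), "=", beats[(low, top)])

Cyclic preference structures like this are exactly the case where asking "who is objectively best?" has no answer, even inside one narrow domain.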

To bring it back to data science: what we're being asked to believe with things like "general AI" or "one model to rule them all" is that the problem domain has these kinds of properties:

1. Composed of things which do not have a fundamental opportunity cost between them. If they do, you cannot have one model to rule them all; you must choose trade-offs (see the sketch after this list).

2. Can be "objectively" agreed upon in some way. OK, you've come to an agreement on what your trade-offs will be, and you will maximise that instead, but was there anything objective in that decision? The linked example uses training on the concept of "banana", but maybe there really is no universal concept of "banana", because it is subjectively experienced by every conscious being. Is it right to link the concept of banana to yellow, sweet, sour, disgusting, desirable? Which really just leads into...

3. That the domain REALLY IS composed of a singular "same type of underlying thing". If the domain is fundamentally composed of qualitatively different things, conglomerating and comparing them can only be achieved through subjectivity and subjective agreement. There is literally no objective answer to be found. You might find practical similarities or averages or something like that, but there is no fundamental common ground that will make everyone happy and work out.
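
As a minimal sketch of properties 1 and 2 (entirely my own toy setup; the objectives and weights are invented, not anything from the article), consider two objectives over a single model parameter, where improving one necessarily degrades the other. A "single best model" only appears after you pick a subjective weighting between them:

    import numpy as np

    x = np.linspace(0.0, 1.0, 1001)  # candidate "models"

    # Two made-up objectives with directly opposing gradients:
    sprint = x           # maximised at x = 1
    marathon = 1.0 - x   # maximised at x = 0

    # Scalarising with a subjective weight w picks the "winner":
    for w in (0.1, 0.5, 0.9):
        scalarised = w * sprint + (1.0 - w) * marathon
        print(f"w = {w}: 'optimal' model at x = {x[np.argmax(scalarised)]:.2f}")

    # The argmax flips from 0.0 to 1.0 as w crosses 0.5: the "one best
    # model" is an artifact of the weight choice, not of the domain.

The trade-off is total here by construction, but the same structure appears whenever objectives genuinely compete: the Pareto front has many points, and choosing among them is a value judgment, not an optimisation result.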

Now, to be sure, in a practical sense you can usually limit your domain, limit your problems, and limit your social circle enough to get close to one "optimal model" that you all agree on in one limited context, but usually that is a consequence of the extreme finiteness of the problem scope and the extreme finiteness of the social and subjective context.

Once we expand to anything even remotely close to "all models" or even "most things humans care vaguely about", the whole thing breaks down.

Personally, not only do I think none of these things are composed of opportunity-cost-free, objectively defined, qualitatively similar domains, I think all the evidence points explicitly to the opposite.

This of course does not mean that generalising models are not valuable or practically interesting, but you no more have to worry about general AI or "one model to rule them all" than you have to worry about one fitness regime making you the best at every sport.

Of course... you MIGHT have to worry about the social context, that is to say, one idea of sport becoming so pre-eminent socially that it is what everyone thinks of when fitness and sport are mentioned. If you're not into tiddlywinks when it takes over the world, you might be in for a world of social pain if you're not involved too...


