Hybrid systems combining different ML techniques were generally in fashion back then. They should be revisited with today's processing power and dataset availability.
This is an additional abstract dimension we add to it. As long as it helps, I encourage going down this road. But on the other hand, I fear that NNs are getting harder and harder to understand, especially for someone just starting out with them.
That goes for any new field you are interested in. If this bothers you, I have a suggestion: pretend the present does not exist and go back to the beginnings of the field, then do a 'faster than realtime' replay so you can see the steps as they were made in the past, until you catch up with the present. For something complex this will typically take you anywhere from one to five years, depending on how fast you work, but you'll come away with a much deeper level of understanding than you'd get by just looking at what is being done today.
It will also help you avoid the blind alleys of history, because you are now aware of them.
I wonder if it can be applied at a third meta-level.
E.g.: an RNN would edit the weights of a second one, which in turn edits the weights of the main one.
And maybe add the L2L meta-optimizer on top of that?
Let's go deeper: let's create an RNN that changes the weights of this RNN!
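Roughly what one level of that stack could look like, sketched in PyTorch (the class name, sizes, and weight-generation scheme are all made up here, purely to illustrate the hypernetwork idea, not anyone's actual architecture): a small 'hyper' RNN whose state is decoded into the recurrent weight matrix of the main cell at every step.

    # Illustrative sketch only: a hyper-RNN generates the main cell's
    # recurrent weights at each time step.
    import torch
    import torch.nn as nn

    class HyperRNNCell(nn.Module):
        def __init__(self, input_size, hidden_size, hyper_size=32):
            super().__init__()
            self.hidden_size = hidden_size
            # Outer RNN that will generate the inner cell's recurrent weights.
            self.hyper_rnn = nn.RNNCell(input_size, hyper_size)
            # Projects the hyper state into a hidden_size x hidden_size matrix.
            self.weight_gen = nn.Linear(hyper_size, hidden_size * hidden_size)
            # Ordinary input-to-hidden weights for the main cell.
            self.input_proj = nn.Linear(input_size, hidden_size)

        def forward(self, x, state):
            h_main, h_hyper = state
            # 1. The hyper RNN observes the input and updates its own state.
            h_hyper = self.hyper_rnn(x, h_hyper)
            # 2. Its state is decoded into the main cell's recurrent weights.
            W_hh = self.weight_gen(h_hyper).view(-1, self.hidden_size, self.hidden_size)
            # 3. The main cell runs one step with those freshly generated weights.
            h_main = torch.tanh(self.input_proj(x) +
                                torch.bmm(W_hh, h_main.unsqueeze(-1)).squeeze(-1))
            return h_main, (h_main, h_hyper)

    # One step on a toy batch.
    cell = HyperRNNCell(input_size=8, hidden_size=16)
    x = torch.randn(4, 8)
    state = (torch.zeros(4, 16), torch.zeros(4, 32))
    out, state = cell(x, state)
    print(out.shape)  # torch.Size([4, 16])

The third meta-level would just repeat the trick on hyper_rnn's own weights, and an L2L-style meta-optimizer would sit on top of that, learning the update rule for whatever parameters remain at the outermost level.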
I fear someday we may accidentally discover (and disappoint ourselves) that consciousness (AKA the mind) is just a self-modifying binary system and not as magical as we like to assume... probably still centuries away even if that's the case.
I've also pondered this but I worry that there's a degree of magical thinking here.
Here's one thing too complex for us to understand (a conscious brain).
Here's another thing with superficial similarities (highly complex neural nets of one kind or another).
Maybe this complex thing is the same as this other complex thing!
However, I'm still stuck at the stage where I feel that p-zombies, qualia, the Chinese room, etc. are pointing at a genuine explanatory gap. I know the more rationally minded think this is hand-waving or closet mysticism, but considering nobody has satisfactorily defined consciousness in a way that doesn't simply hide the problem one layer deeper, I'm going to stick to my gut feeling that the other side of the debate is doing just as much hand-waving as me.
> I'm still stuck at the stage where I feel that p-zombies, qualia, the Chinese room, etc. are pointing at a genuine explanatory gap.
I can't shake the feeling that a lot of the arguments about qualia are begging the question. 'Qualia' is a name for something on our map of a human mind. It doesn't necessarily describe something in the actual territory.
I did try to acknowledge the question begging in my comment. My point is that both sides are begging the question.
So neither side (false dichotomy acknowledged) can use question begging as an argument to dismiss the other.
What's left? Who has more to explain? Surely at this point Descartes kicks in: consciousness is the one undeniable piece of knowledge we have, the one that everything else is built on (precariously).
Isn't the burden of proof on those that want to reduce consciousness to something else - when consciousness is all we can be certain of?
I'm not dogmatic on this. I am pulled strongly in both directions and I had a youthful infatuation with Hofstadter/Dennett et al.
On the other hand I know my desire to imbue the self with something transcendental is driven by romanticism.
One of my A.I.-related fears is that creating consciousness is really easy to do by accident (like Turing completeness), and really easy for humans to fail to recognise when it's in an easily-Othered body (far too many examples, sadly), and that as a result we won't find out until after billions of conscious A.I.s have been unintentionally mistreated.
I also worry that consciousness may be present in many more species than the ones I bother to make an effort not to kill, and not to allow to be killed by others to provide me with indirect benefits.
Edit: Cool, mentions it right out the gate.