Exactly. I think OP's argument was that we should devote more resources to these sorts of problems instead of bickering about low-level stuff so much (if they aren't directly helping that goal).
I don't see how that view implies ignorance of current technological limitations.
>Computing is in a sad state today, but not because of fucking callbacks. It's in a sad state today because we're still thinking about problems on this level at all.
He seems to imply that what's sad is that today is today and not 20 years from now. So what?
People have dreamed about replacing "programming" with "dialogue systems" since at least the 60s; like flying cars, they are always 10 or 20 years away. We might be closer now, but it's not like we haven't been dreaming about this since before I was born.
In the meantime, we have code to write, maybe it's worth doing something about this callback thingy problem just in case the singularity is delayed a bit.
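For what it's worth, here is a minimal TypeScript sketch of the kind of incremental fix being debated: the same two dependent async steps written with nested callbacks and then flattened with promises/async-await. fetchUser and fetchPosts are toy stand-ins I made up, not any real API.

```typescript
// Toy stand-ins for async I/O; not a real API.
function fetchUser(id: number, cb: (user: string) => void): void {
  setTimeout(() => cb(`user-${id}`), 10);
}
function fetchPosts(user: string, cb: (posts: string[]) => void): void {
  setTimeout(() => cb([`${user}: hello`]), 10);
}

// Callback style: every dependent step nests one level deeper.
fetchUser(1, (user) => {
  fetchPosts(user, (posts) => {
    console.log("callbacks:", posts);
  });
});

// The same flow flattened with promises and async/await.
const fetchUserP = (id: number) => new Promise<string>((resolve) => fetchUser(id, resolve));
const fetchPostsP = (user: string) => new Promise<string[]>((resolve) => fetchPosts(user, resolve));

async function main(): Promise<void> {
  const user = await fetchUserP(1);
  const posts = await fetchPostsP(user);
  console.log("async/await:", posts);
}
main();
```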
There are not that many researchers (or other people) working on 'less code'. And the response from the community is always the same: don't try it, it's been done, and it won't work (the responses to NoFlo's Kickstarter here on HN, for instance).
Instead I see new languages/frameworks with quite steep learning curves that replace the original implementation with the same amount of code, or even more.
As long as 99% of 'real coders' keep saying that (more) code is the way to go, we're not on the right track imho. I have no clue what exactly would happen if you threw Google's core AI team, IBM Watson's core AI team, and a few forward-thinking programming language researchers (Kay, Edwards, Katayama, ...) into a room for a few years, but I assume we would have something working.
Even if nothing working came out of it, the research needs to be done. In the current state we are stuck in a loop of rewriting things into other things that are maybe marginally better to the author of the framework/lib/lang and a handful of fans, resulting in millions of lines of not very reusable code. It's only 'reusable' if you add more glue code than there was original code in the first place, which only adds to the problem.
Nothing about NoFlo was new; it was just trying the same thing that failed in the past, without any new invention or innovation. The outcome will be the same.
Trust me, the PHBs would love to get rid of coders; we are hard to hire and retain. This was at the very heart of the "5th gen computing" movement in the 80s, and its massive failure in large part led to the second "AI winter" that followed.
> I have no clue what exactly would happen if you threw Google's core AI team, IBM Watson's core AI team, and a few forward-thinking programming language researchers (Kay, Edwards, Katayama, ...) into a room for a few years, but I assume we would have something working.
What do you think these teams work on? Toys? My colleague at MSRA is on the bleeding edge of using DNNs in speech recognition, and we discuss this all the time (if you want to put me in the forward-thinking PL bucket) over lunch...almost daily, in fact. There are many more steps between here and there, as with most research.
So you are unhappy with the current state of PL research; I am too. But going back and trying the same big leap the Japanese attempted in the 80s is not the answer. There are many cool PL fields we can develop before we get to fully intelligent dialogue systems and the singularity. If you think otherwise, go straight into the "deep learning" field and ignore the PL stuff; if your hunch is right, we won't be relevant anyway. But bring a jacket in case it gets cold again.
I agree with you on NoFlo, and criticism like yours is good when it's well founded. I just see a bit too much unfounded shouting that without tons of code we cannot and never will write anything but trivial software. The NoFlo example was more about the rage I feel coming from coders when you touch their precious code. Just screaming "it's impossible" doesn't cut it, so I'm happy NoFlo tried even if it was doomed to fail; it might get some people to at least consider different options.
> What do you think these teams work on? Toys? My colleague at MSRA is on the bleeding edge of using DNNs in speech recognition, and we discuss this all the time (if you want to put me in the forward-thinking PL bucket) over lunch...almost daily, in fact. There are many more steps between here and there, as with most research.
No, definitely not toys :) But I am/was not aware of them doing software development (language) research. Happy to know people are discussing it during lunch; I wish I had lunch companions like that. Also, I should have added MS; there is great PL research there (rise4fun.com).
I wasn't suggesting a big leap; I'm suggesting considerably more research should be put into it. Software, its code, its development, and its bugs are huge issues of our time, and I would think it important to put quite a bit more effort into them.
That said: any papers/authors of more cutting-edge work in this field?
My colleague and I talk about this at lunch with an eye on doing a research project given promising ideas, but I think I (the PL guy) am much more optimistic than he (the ML/SR guy) is. I should mention he used to work on dialogue systems full time and has a better grasp on the area than I do. I've basically decided to take the tool approach first: let's just get a Siri-like domain done for our IDE. We aren't writing code, but at least we can access secondary dev functions via a secondary interface (voice, conversation). The main problem getting started is that the tools for encoding domains for dialogue systems are very primitive (note that even Apple's Siri isn't open to new domains).
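To make the tool idea a bit more concrete, here's a rough TypeScript sketch, purely my own illustration: a tiny fixed domain of utterances mapped to secondary IDE functions, with recognize() standing in for an actual speech recognizer.

```typescript
// Tiny fixed domain: utterances mapped to secondary dev functions.
type IdeCommand = () => void;

const domain: Record<string, IdeCommand> = {
  "run the tests": () => console.log("ide: running the test suite"),
  "build the project": () => console.log("ide: starting a build"),
  "go to definition": () => console.log("ide: jumping to the definition"),
};

// Stand-in for a speech recognizer constrained to this small domain;
// a real system would return a ranked list of hypotheses.
function recognize(transcript: string): string {
  return transcript.trim().toLowerCase();
}

function dispatch(utterance: string): void {
  const command = domain[recognize(utterance)];
  if (command) {
    command();
  } else {
    console.log("ide: sorry, that's outside my domain");
  }
}

dispatch("Run the tests");       // -> ide: running the test suite
dispatch("Refactor everything"); // -> ide: sorry, that's outside my domain
```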
The last person to take a serious shot at this problem was Hugo Liu at MIT. Alexander Repenning has been looking at conversational programming as a way to improve visual programming experiences; this doesn't involve natural-language conversation, but the mechanisms are similar.
I would think that this is why PL research is very relevant here. Until we have an 'intelligence' advanced enough to distill an intended piece of software from our chaotic talking, I see a PL augmented with different AI techniques as the way to explain, in a formal structure, how a program should behave. The AI would allow for a much larger amount of fuzziness than we have now in coding: it could fix syntax/semantic errors based on what it can infer about your intent, after which you indicate, preferably at a higher level of the running software, whether that was correct.
With some instant feedback, a bit like [1], this at least feels feasible.
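As a toy illustration of that loop (my own sketch, not an existing tool): the 'AI' proposes a repair for an ill-formed program based on what it guesses you meant, and the user confirms or vetoes it before the change is accepted.

```typescript
// The "AI" here only knows one trick: add a missing trailing semicolon.
type Fix = { patched: string; explanation: string };

function inferFix(source: string): Fix | null {
  if (!source.trim().endsWith(";")) {
    return { patched: source + ";", explanation: "add a missing semicolon" };
  }
  return null; // nothing left to repair
}

// Stand-in for the higher-level confirmation step; a real system would
// surface the proposed change (and its inferred intent) in the IDE.
function userAccepts(explanation: string): boolean {
  console.log("proposed fix:", explanation);
  return true; // assume the user accepts, for this sketch
}

function repair(source: string): string {
  let current = source;
  let fix = inferFix(current);
  while (fix !== null && userAccepts(fix.explanation)) {
    current = fix.patched; // apply the accepted repair and re-check
    fix = inferFix(current);
  }
  return current;
}

console.log(repair("let x = 1")); // -> "let x = 1;"
```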
> We might be closer now, but it's not like we haven't been dreaming about this since before I was born.
It's not about dreaming. It's about action and attitude. Continuing down the current path of iterating on the existing SW/HW paradigm is necessary, but over those 20 years it's not going to lead to strong AI. Our narrow-minded focus on the von Neumann architecture permeates academia. When I was in college I had a strong background in biology. Even though my CS professor literally wrote a book on AI, he seemed to disdain any biologically inspired techniques.
Recently, I've seen a spark of hope with projects like Stanford's Brains in Silicon and IBM's TrueNorth. If I were back in school, this is where I'd want to be.
Thanks for the link. After 60 years of promising to make computers think, even the fastest machines on the planet with access to Google's entire database still have trouble telling a cat from a dog, so I have to agree with the article that, yes, it was "Ahead of its time...In the early 21st century, many flavors of parallel computing began to proliferate, including multi-core architectures at the low-end and massively parallel processing at the high end."
As the 5th generation project shows, small pockets of researchers haven't forgotten that evolution has given each of us a model to follow for making intelligent machines. I hope they continue down this road, because faster calculators aren't going to get us there in my lifetime. Do you feel differently?
As the article mentions, "CPU performance quickly pushed through the "obvious" barriers that experts perceived in the 1980s, and the value of parallel computing quickly dropped," whereas 30 years later, single-threaded CPU performance has gone from doubling every 2 years to improving 5-10% per year since ~2007. Combine that with the needs of big data, and the time is right to reconsider some of these "failures".
Parallel programming is the biggest challenge facing programmers this decade, which is why we get posts like this on callback hell. Isn't it possible that part of the problem lies in decisions made 50 years ago?
I'm not saying we need to start from scratch, but with the new problems we're facing, maybe it's time to reconsider some of our assumptions.
"It is the mark of an educated mind to be able to entertain an idea without accepting it." -Aristotle
The problem is that we've been down this route before and it failed spectacularly (we've gone through two AI winters already!). It does not mean that such research is doomed to fail, but we have to proceed a bit more cautiously, and in any case, we can't neglect improving what works today.
The schemes that have been successful since then, like MapReduce or GPU computing, have been very pragmatic. It wasn't until very recently that a connection was made between deep learning (Hinton's DNNs) and parallel computing.
Yes, I did say continuing down the current path was necessary. From the languages to the libraries, today's tools allow us to work miracles. As Go has shown with CSP, sometimes these initial failures are just ahead of their time. Neuromorphic computing and neuroscience have come a long way since the last AI winter.
The hierarchical model in Hinton's DNNs is a promising approach. My biggest concern is that all of the examples I've seen are built with perceptrons, whose simplicity makes them easy to model but which share almost nothing with their biological counterparts.
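To make the simplicity point concrete, a perceptron-style unit is literally just a weighted sum and a hard threshold (the weights below are arbitrary illustrative values):

```typescript
// A single perceptron-style unit: weighted sum plus a hard threshold.
function perceptron(inputs: number[], weights: number[], bias: number): number {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], bias);
  return sum > 0 ? 1 : 0;
}

// With these hand-picked weights it behaves like an AND gate.
console.log(perceptron([1, 1], [0.6, 0.6], -1)); // 1
console.log(perceptron([1, 0], [0.6, 0.6], -1)); // 0
```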
That's more of an orthogonal point than a counterpoint, though. Just because we haven't accomplished something yet doesn't mean we shouldn't invest any more effort into it.
I was responding to your insinuation that OP must not be knowledgeable about language implementation to hold the views he/she does. In this context, technological limitations are irrelevant because that was not the point; the point was an opinion about the relative effort/attention these problems receive.
I'm admittedly not really invested in this area myself, so I don't really care, but it's disingenuous to try to discredit OP's views like that. At least this response is more of a direct counter-opinion.