Ooh, this'll be interesting to see; as with AlphaGo, a lot of people retrospectively disputed the claim that experts believed it would take ~10 more years.
With SC2, no AI comes close to beating even a silver-level player, so a 5-year timeline seems really soon. Let's see if DeepMind can do it!
I think it is doable in under 5 years, but that critically depends on the resources invested by DM and other DL orgs. Deep RL is hugely demanding of computational resources just to iterate on your designs: for example, the first AlphaGo took something like 3 GPU-years to train once (2 or 3 months parallelized); with much more iteration, however, DM was able to get AlphaGo Master's from-scratch training down to under 1 month. Now an AlphaGo researcher can iterate rapidly with small-scale hobbyist or academic resources, but if they had had to do it all themselves from the start, Ke Jie would still be waiting for a worthy adversary...

When I look at all the recent deep RL research ( https://www.reddit.com/r/reinforcementlearning/ ), I definitely feel that we can't be far from an architecture which could solve SC2, but I don't know if anyone is going to invest the team+GPUs to do it within that timeframe. (It might not even be as complex as people think: some well-tuned mix of imitation learning on those 500k+ human games, self-play, residual RNNs for memory/POMDP-solving, and recent work on planning over high-level environment models might well be enough.)
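To make that last parenthetical a bit more concrete, here's a toy PyTorch sketch of the "imitation learning + residual RNN" part of such a mix. Everything here (the ResidualRNNPolicy class, the dimensions, the fake replay batch) is my own hypothetical illustration, not anything from DM:

    import torch
    import torch.nn as nn

    class ResidualRNNPolicy(nn.Module):
        """Recurrent policy with a residual skip connection around the LSTM
        core, so purely reactive features can bypass the memory cell.
        Hypothetical sketch, not any actual SC2 agent architecture."""
        def __init__(self, obs_dim, hidden_dim, num_actions):
            super().__init__()
            self.encoder = nn.Linear(obs_dim, hidden_dim)
            self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
            self.policy_head = nn.Linear(hidden_dim, num_actions)

        def forward(self, obs_seq, state=None):
            # obs_seq: (batch, time, obs_dim)
            x = torch.relu(self.encoder(obs_seq))
            h, state = self.lstm(x, state)
            h = h + x  # residual connection around the recurrent core
            return self.policy_head(h), state

    # Imitation-learning step: cross-entropy against human actions,
    # standing in for training on a corpus of replays like the 500k+
    # human games mentioned above. Data here is random placeholder.
    policy = ResidualRNNPolicy(obs_dim=64, hidden_dim=64, num_actions=10)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    obs = torch.randn(8, 20, 64)             # fake batch: (batch, time, obs)
    actions = torch.randint(0, 10, (8, 20))  # fake human action labels
    logits, _ = policy(obs)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 10), actions.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

Self-play would then take over from the imitation-learned starting point, the same way the original AlphaGo bootstrapped from human games before improving on itself.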
I can't decide if I would be bummed or excited if that turns out to be the case. On the one hand, we'd be that much closer to AGI. On the other, we'd be continuing down the path of brute-forcing intelligence, rather than depending on those elegant, serendipitous breakthroughs that much of human progress has been built on.