Actually, there are at least two decades-old branches of computer science/mathematics that have formulated precise definitions of AI and proved many theoretical results that led to plenty of practical applications. These branches are called "Reinforcement Learning" and "Universal AI".
While Gwern has already mentioned Reinforcement Learning, UAI is a less known (but even more rigorous and well-received) mathematical theory of general AI that arose from Marcus Hutter's work [1].
My point is: how can one say that there is no definition of AI when there are several precise mathematical definitions available, with many theorems proven about them?
You are confusing narrow AI with AGI. None of those results proves anything practical about what an actually achievable AGI would look like, as opposed to a theoretical construct that is provably incomputable.
Because AIXI_tl has known failure modes (it doesn't model itself as being embedded in its environment, so it can't reason about its own survival), which demonstrates that any approach that is just a weaker version of it will inherit those same problems.
> That said it is also not computable with finite time or resources, so it is unclear what relevance it has to practical applications.
You can define a space- or time-bounded variant; it is then computable in finite time, but still intractable.
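To make "finite but still intractable" concrete, here is a back-of-the-envelope count (my own illustration, not anything from the thread): a bounded brute-force search in the spirit of AIXI_tl evaluates every binary program up to length l for up to t steps each, so the total work grows like 2^(l+1) · t.

```python
# Rough cost model for a time/space-bounded brute-force search in the
# spirit of AIXI_tl: run every binary program of length 1..l for up to
# t steps each.  Illustrative only; the real algorithm is more subtle.

def bounded_search_cost(l: int, t: int) -> int:
    """Program-steps needed to run all programs of length 1..l
    (there are 2^(l+1) - 2 of them) for t steps each."""
    num_programs = 2 ** (l + 1) - 2
    return num_programs * t

# Even modest bounds are astronomical:
print(bounded_search_cost(40, 1000))   # ~2.2e15 program-steps
print(bounded_search_cost(100, 1000))  # far beyond any feasible compute
```

So the bounded variant is a well-defined finite computation, just not one anyone can run for interesting values of l and t.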
I agree with the first sentence, but I'd like to note that there are practical (though weak) approximations of AIXI that preserve some of its properties and, while not Turing-complete, perform better than other RL approaches on the Vetta benchmark. See [1].
Also, there is a Turing-complete implementation of OOPS, a search procedure related to AIXI that can solve toy problems, programmed by none other than Jürgen Schmidhuber 10 years ago [2].
Even more important: there is a breadth of RL theory built around MDPs and POMDPs. There are asymptotic, convergence, bounded-regret, and on-policy/off-policy results, among others. Modern practical deep RL agents (the ones DeepMind is researching) are developed on top of the same RL theory and inherit many of these results.
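As one concrete instance of that theory (a minimal sketch of my own, with a made-up toy MDP, not code from any of the systems mentioned): tabular Q-learning on a small MDP is proven to converge to the optimal action values when every state-action pair is visited infinitely often and the step sizes decay suitably.

```python
import random

# Minimal tabular Q-learning on a toy 2-state MDP (states, rewards and
# hyperparameters here are invented for illustration).  Watkins'
# convergence theorem is the kind of RL-theory result referred to above.

STATES, ACTIONS = [0, 1], [0, 1]
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

def step(s, a):
    """Deterministic toy dynamics: action 1 in state 0 pays off."""
    if s == 0 and a == 1:
        return 1, 1.0   # move to state 1, reward 1
    return 0, 0.0       # otherwise back to state 0, no reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
s = 0
random.seed(0)
for _ in range(5000):
    # epsilon-greedy action selection (off-policy exploration)
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    s2, r = step(s, a)
    # Q-learning update: bootstrap off the greedy value of the next state
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    s = s2

print(Q[(0, 1)] > Q[(0, 0)])  # the learned values prefer the rewarding action
```

The point is not the toy problem but that the update rule comes with theorems attached, which is exactly what "there is a theory of AI" means here.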
From my point of view, it is unfair to the researchers who produced these results over decades of work when the grandparent (and great-grandparent) comments claim that there is no definition of, or theory about, AI, and that AI is like alchemy.
1. http://www.hutter1.net/ai/uaibook.htm