Because AIXI_tl has known failure modes (it doesn't model itself as embedded in its environment, so it can't reason about its own survival), any approach that is just a weaker version of it will inherit those same problems.
> That said it is also not computable with finite time or resources, so it is unclear what relevance it has to practical applications.
You can define a space- or time-bounded variant, which is finite but still intractable.
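To make the "finite but still intractable" point concrete, here is a toy stand-in for length/time-bounded program search (the AIXI_tl idea, drastically simplified and not the actual algorithm): "programs" are strings over a two-instruction alphabet acting on an accumulator, and we exhaustively enumerate every program up to length `l`, each run under a step budget. The search always terminates, but the number of candidates grows as 2^l. All names here (`run`, `bounded_search`, the instruction set) are made up for illustration.

```python
from itertools import product

# Toy illustration of bounded program search. An instruction is either
# 'i' (add 1 to the accumulator) or 'd' (double it); every program
# starts from 0. This is NOT AIXI_tl, just a sketch of why bounding
# length and time makes the search finite yet exponentially expensive.

def run(prog, budget):
    """Execute a program, aborting if it exceeds the step budget."""
    acc = 0
    for steps, op in enumerate(prog):
        if steps >= budget:
            return None  # time bound exceeded
        acc = acc + 1 if op == 'i' else acc * 2
    return acc

def bounded_search(target, l, budget):
    """Enumerate all programs of length <= l; return the first that
    computes `target`, plus how many candidates were tried."""
    tried = 0
    for n in range(1, l + 1):
        for prog in product('id', repeat=n):  # 2^n programs at length n
            tried += 1
            if run(prog, budget) == target:
                return ''.join(prog), tried
    return None, tried

prog, tried = bounded_search(target=10, l=6, budget=6)
print(prog, tried)
```

The search space doubles with every extra instruction of allowed length, which is the whole problem: the bound buys you computability, not tractability.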
I agree with the first sentence, but I'd note that there are practical (though weak) approximations of AIXI that preserve some of its properties and, while not Turing-complete, perform better than other RL approaches on the Vetta benchmark. See [1].
Also, there is a Turing-complete implementation of OOPS, a search procedure related to AIXI that can solve toy problems, written by none other than Jürgen Schmidhuber ten years ago [2].
More importantly, there is a breadth of RL theory built around MDPs and POMDPs: asymptotic convergence results, bounded-regret guarantees, on-policy/off-policy analyses, etc. Modern practical deep RL agents (the ones DeepMind is researching) are built on the same RL theory and inherit many of these results.
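As a tiny concrete instance of that theory: tabular Q-learning provably converges to the optimal action values on a finite MDP under standard conditions. The sketch below (my own toy example, not from any of the cited work) runs Q-learning on a deterministic three-state chain and recovers the known optimal values Q*(1, right) = 1.0 and Q*(0, right) = γ = 0.9.

```python
import random

# Toy deterministic chain MDP: states 0, 1, 2; state 2 is terminal.
# Action 0 moves left (floored at 0), action 1 moves right.
# Reward 1.0 on entering the terminal state, 0 otherwise.
def step(s, a):
    s2 = min(s + 1, 2) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 2 else 0.0), s2 == 2

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(3)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(2)                      # explore
            else:
                a = max((0, 1), key=lambda x: Q[s][x])    # exploit
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# With gamma = 0.9 the optimal values are Q*(1, right) = 1.0
# and Q*(0, right) = 0.9; the learned table converges to them.
print(Q[0][1], Q[1][1])
```

The convergence guarantee is exactly the kind of result the MDP literature provides and that function-approximation methods then try to preserve or bound.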
From my POV it looks unfair to the researchers who produced these results over decades of work when the comment's grandparent (and great-grandparent) claim that there is no definition of or theory behind AI, and that AI is like alchemy.