Because, as some people fear, the AI might decide humanity is a threat and wipe us out, or consume so many resources that we die regardless, or bring about some other apocalyptic scenario. To guard against that, we may decide not to hand control of important things over to the AI, or at least to build in a safeguard so we can regain control if necessary.
If we believe pfisch is correct in their assertion that handing control of things to an AI means we end up living as if in a zoo, then we'll (presumably) decide that the benefits aren't too good to refuse after all, and we'll refuse them.
Whether or not that premise is correct is what's up for debate.