AI doesn't even attempt that. All our neural networks do is spit out the final result of whatever they're doing. If we want the "reasoning" behind it, we have to build separate methods to extract it. That's an active field of research, but progress there is much slower than progress on the AIs themselves.
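To make the point concrete, here's a toy sketch (made-up weights, not any real model): the network's forward pass yields only a number, and any "explanation" has to come from a separate, after-the-fact method. Finite-difference saliency is used here purely as one illustrative example of such a method.

```python
import numpy as np

# Toy "trained" network -- the weights are arbitrary, for illustration only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def forward(x):
    # The network only ever produces a final result -- no trace of "why".
    h = np.tanh(x @ W1)
    return float(h @ W2)

x = np.array([1.0, -0.5, 2.0])
score = forward(x)  # just a number

# A separate, bolted-on method (finite-difference saliency) that estimates
# how much each input nudged that number. This is an approximation of
# influence, not the model's actual reasoning.
eps = 1e-5
saliency = np.array([
    (forward(x + eps * np.eye(3)[i]) - score) / eps
    for i in range(3)
])
print(score)     # the only thing the network itself gives you
print(saliency)  # per-input influence, recovered by an external method
```

Note that the saliency step never touches the network's internals beyond re-running it; that's typical of post-hoc interpretability methods, and part of why the field lags behind raw capability work.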