The GPU didn't really have the muscle you'd need for GPGPU, since much of the rendering workload was handled by dedicated instructions on the CPU rather than by special-purpose or general-purpose GPU hardware like we have today. The PS2 inched closer, but the PS3 was probably the first Sony hardware that could do anything resembling GPGPU. I don't think you were missing anything obvious :-)
AFAIK some other console vendors' audio DSPs were used for compute by specific games, though - I recall reading about a console game for one of Sony's competitors using the console's DSP to decompress game data during loads instead of synthesize audio.
> I recall reading about a console game for one of Sony's competitors using the console's DSP to decompress game data during loads instead of synthesize audio.
Maybe Burning Rangers for the SEGA Saturn? That game renders its fire with alpha transparency instead of the usual (for the time) SEGA-style mosaic transparency, but it uses the sound processor to do it and has very sparse audio as a result: http://segabits.com/blog/2011/06/05/retro-review-burning-ran...
Eh, that "GPU microcode engine" was a fairly general MIPS core with a vector unit, much like the CPU of the PS1. The GPU itself was also just a rasterizer like the PS1's (albeit a more complex one, with an understanding of the Z dimension, antialiasing, and subpixel coordinates). I wouldn't really call it GPGPU.