Sadly this omits the metric I'm most interested in: speed. I've noticed that, with the same rip settings, some drives are a lot faster than others, even though both produce bit-perfect rips. The biggest difference I saw was between a slim, new consumer drive and an old, late-90s IDE drive -- the latter was significantly faster, and had far better error correction too!
In general, error correction adds a lot of time. My sense is that the ripper re-reads the damaged sectors several times and takes the "most common" reading. Then it compares that against external full-track hashes (the AccurateRip database, in EAC's case) to establish correctness. So it has to do a lot of seeking to read and re-read a damaged portion, which can take minutes for "secure" rips in EAC.
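As a rough sketch of that majority-vote strategy (the function names here are made up for illustration; real rippers work at the drive-command level, not in Python):

```python
from collections import Counter

def secure_read(read_sector, lba, max_retries=16, required_agreement=2):
    """Re-read one sector until the same data has been seen
    `required_agreement` times, then accept it (majority vote).

    `read_sector(lba)` is a hypothetical stand-in for a single raw
    read of the sector at logical block address `lba`.
    """
    seen = Counter()
    for _ in range(max_retries):
        data = read_sector(lba)
        seen[data] += 1
        if seen[data] >= required_agreement:
            return data
    # No reading repeated often enough: fall back to the most common one,
    # which is when a secure ripper would typically flag the sector suspect.
    return seen.most_common(1)[0][0]
```

Every retry on an already-spinning disc means waiting for the sector to come back around under the laser, which is why a badly damaged stretch multiplies the rip time so dramatically.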
Some drives (for example those using Pioneer PureRead) will actually vary parameters like laser power and angle between re-reads, so it's possible that a damaged sector can get read properly after some retries.