
This is technically correct, I'm sure, but people usually use it with f simply being the runtime of the function, in which case the common usage converges. I think the original comment may have a point here, as I'm not sure the article necessarily caveated those definitions.


I don't think this is entirely true. I'd bet if you asked people the complexity of quicksort, they'd be more likely to remember it as O(n log n) than as O(n²), even though the first one is the average-case complexity and the second is the worst case. Similarly, if you ask about hashtables or hash maps, you're more likely to hear O(1) than O(n), despite O(n) being the worst-case complexity (when all items happen to have the same hash).
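To make that hash map worst case concrete, here's a minimal Python sketch (BadKey is a made-up name for illustration): a key type whose __hash__ is constant forces every entry into the same bucket, so dict operations degrade from O(1) on average to O(n) per lookup, and building the dict becomes O(n²).

    import time

    class BadKey:
        """Hypothetical key type whose hash is constant, so every instance
        collides into the same dict bucket."""
        def __init__(self, value):
            self.value = value
        def __hash__(self):
            return 0  # every key hashes identically
        def __eq__(self, other):
            return isinstance(other, BadKey) and self.value == other.value

    N = 2000  # kept small: building the colliding dict is quadratic

    # Normal integer keys: inserts and lookups are O(1) on average.
    good = {i: None for i in range(N)}

    # Constant-hash keys: each insert has to probe past every earlier key,
    # so the build is O(n^2) and a single failed lookup is O(n).
    start = time.perf_counter()
    bad = {BadKey(i): None for i in range(N)}
    print(f"building {N} colliding keys: {time.perf_counter() - start:.3f}s")

    start = time.perf_counter()
    BadKey(-1) in bad  # miss: compared against every colliding key
    print(f"one failed lookup: {time.perf_counter() - start:.6f}s")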

Instead, what I think happens is that people tend to talk about the average-case complexity of well-known algorithms, except that for most algorithms that's the same as the worst case, so they tend to forget the distinction. And, if they're evaluating the complexity of something they wrote, they'll probably do the worst-case analysis, since that's easier than the average case.


I'm not sure about this - for quicksort my usual answer is O(n log n) average case and O(n²) worst, and for hash maps it's O(1) amortized complexity. Conversely, for merge sort it's simply O(n log n) flat.

These are well-known cases precisely because those runtime statements are caveated when they're taught. I'd expect any discussion of runtimes on another topic to extend the same basic courtesy if the worst case didn't align.
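For the quicksort side, a sketch under a naive assumption (first-element pivot, which real library sorts avoid) shows where the O(n²) caveat comes from: already-sorted input makes every partition maximally unbalanced, so the comparison count blows up from roughly n log n to roughly n²/2.

    import random

    def quicksort(xs, counter):
        """Return a sorted copy of xs, accumulating comparison counts in counter[0].
        Uses a naive first-element pivot purely to expose the worst case."""
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        counter[0] += len(rest)
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return quicksort(left, counter) + [pivot] + quicksort(right, counter)

    n = 500  # small enough to stay within Python's default recursion limit
    avg_count, worst_count = [0], [0]
    quicksort(random.sample(range(n), n), avg_count)  # random input: ~n log n
    quicksort(list(range(n)), worst_count)            # sorted input: ~n^2/2
    print(f"random input: {avg_count[0]} comparisons")
    print(f"sorted input: {worst_count[0]} comparisons")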




