A function f(x) is said to be O(g(x)) if f(x)/g(x) is bounded; that is, there is some C so that for every x, f(x)/g(x) < C.

In computer science f(x) is often some complexity function, such as the number of some specific operation performed when running an algorithm to completion.
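To make that concrete, here's a rough sketch (the helper bubble_sort_comparisons is made up for illustration, not from the thread): count the comparisons a plain bubble sort makes on an n-element list in the worst case, and check that the count divided by g(n) = n^2 stays under some constant C.

    # Sketch: f(n) = number of comparisons bubble sort makes on n elements.
    def bubble_sort_comparisons(xs):
        xs = list(xs)
        count = 0
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                count += 1                      # one comparison
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return count

    for n in (10, 100, 1000):
        f = bubble_sort_comparisons(range(n, 0, -1))  # worst case: reversed input
        print(n, f / n**2)                            # ratio stays below C = 1

Here f(n) = n(n-1)/2, so f(n)/n^2 is always below 1/2, which is exactly what "f(n) is O(n^2)" asks for.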



This needs to be refined: f(x) is O(g(x)) if there exists some X >= 0 such that f(x)/g(x) is bounded for all x > X.

Otherwise, we cannot say that 1 is O(x), for example.
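A quick worked check of that example (just a sketch of the algebra):

    % With f(x) = 1 and g(x) = x, the ratio f(x)/g(x) = 1/x blows up as x -> 0+,
    % so no single C bounds it for every x. But with a cutoff X = 1:
    \[
      \frac{f(x)}{g(x)} = \frac{1}{x} \le 1 \quad \text{for all } x > X = 1,
    \]
    % so 1 is O(x) under the refined definition (any C > 1 works).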


What an asshole way of explaining it. For practical purposes (don't @ me if this is technically wrong, I don't give a shit) it's a description of how the cost of an algorithm grows with the size of the input. For example if the cost doubles every time the input increments then it's O(n^2). If the cost increases in step with the input size then it's O(n). Simples.


I don't have a computer science education, but I had a math one. I found it a great explanation, but I understand why others would feel it's not.

It definitely is not an "asshole" way of explaining it though.


Okay, unless you're doing numerical analysis, where it's used for the growth rate of the error term, which has nothing to do with cost or algorithms.


If the cost doubles for each increment in the input size, it's O(2^n).
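Rough illustration (count_subsets is a made-up helper, not from the thread): enumerating all subsets of an n-element set is an algorithm whose operation count doubles every time n goes up by one, which is why it's O(2^n) rather than O(n^2).

    from itertools import chain, combinations

    # Sketch: f(n) = number of subsets visited when enumerating the power set.
    def count_subsets(n):
        items = range(n)
        return sum(1 for _ in chain.from_iterable(
            combinations(items, k) for k in range(n + 1)))

    prev = None
    for n in range(1, 8):
        c = count_subsets(n)
        ratio = None if prev is None else c / prev  # ~2 at every step
        print(n, c, ratio)
        prev = c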


Right you are, my bad.


This is not only “technically” wrong, but completely wrong. O-notation has absolutely nothing to do with algorithms.


Let's not overcorrect; of course O-notation has something to do with algorithms for the working programmer.



