In which case, as I said, just use a lookup table, which is constant-time under those same constraints, and has the bonus of automatically caching frequently used values for you.
And as the Fibonacci sequence grows exponentially, any lookup table will be relatively small. (The maximum number of elements in your lookup table is linear w.r.t. the number of digits of your (fixed-precision) integers. I mean, even a signed 128-bit return value only requires 185 elements in the table, for a total of 2960 bytes.)
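A minimal sketch of that lookup-table approach in Python (assuming, as the byte count above suggests, that results must fit in a signed 128-bit integer):

```python
# Precompute every Fibonacci number that fits in a signed 128-bit integer.
LIMIT = 2**127 - 1  # largest signed 128-bit value

FIB_TABLE = [0, 1]
while FIB_TABLE[-1] + FIB_TABLE[-2] <= LIMIT:
    FIB_TABLE.append(FIB_TABLE[-1] + FIB_TABLE[-2])

def fib(n: int) -> int:
    """O(1) Fibonacci for any n whose result fits in a signed 128-bit int."""
    return FIB_TABLE[n]  # raises IndexError for n >= len(FIB_TABLE)
```

Building the table once gives you the "automatic caching" for free: every representable value is already there, and `len(FIB_TABLE)` comes out to the 185 entries mentioned above.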
> A similar situation arises in, say, mergesort
This is why you have to specify exactly what quantity your big-O notation is counting. Do you mean comparisons? Time? Time assuming everything but the variables specified is constant? Etc.
>It's true that you can't have an O(log N) time algorithm, as arithmetic operations cannot be performed in constant time.
Yep. You can perform some operations in sublinear time w.r.t. the number of bits, however, by exploiting hardware-level parallelization. It would be interesting to see what the best (in terms of big-O) possible implementation of fib(n) would be, given constant-time operations for a fixed-length machine word.
> This is why you have to specify exactly what you mean by the big-O notation you specify.
He did specify it: O(log N) arithmetic operations.
Note: I don't mean "assuming arithmetic operations take constant time, we can compute F(n) in O(log N) time." Against that claim you could correctly object that, since arithmetic operations don't take constant time in arbitrary-precision arithmetic, we must be using fixed precision (or something essentially equivalent), in which case a lookup table takes O(1) time and O(precision) space. I mean "F(n) can be computed with O(log N) arithmetic operations." If I perform those arithmetic operations over arbitrary-precision integers with simple (schoolbook) multiplication, then since F(n) has Θ(n) bits, the later multiplications each cost up to O(n^2) time, and F(n) takes O(n^2) time overall (the final doubling steps dominate).
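For concreteness, the O(log N)-arithmetic-operations claim is usually realized with the fast-doubling identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)²; a sketch in Python, where integers are arbitrary precision:

```python
def fib_fast_doubling(n: int) -> int:
    """Compute F(n) using O(log n) arithmetic operations (fast doubling)."""
    def fib_pair(k: int) -> tuple[int, int]:
        # Returns (F(k), F(k+1)).
        if k == 0:
            return (0, 1)
        a, b = fib_pair(k // 2)   # a = F(k//2), b = F(k//2 + 1)
        c = a * (2 * b - a)       # F(2*(k//2))
        d = a * a + b * b         # F(2*(k//2) + 1)
        if k % 2 == 0:
            return (c, d)
        return (d, c + d)
    return fib_pair(n)[0]
```

Each halving of k costs a constant number of additions and multiplications, hence O(log n) operations total; the time bound then depends entirely on how expensive those operations are on Θ(n)-bit operands.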