In an async framework of execution, this doesn't apply. A lot of programming happens in that space, and there the network call is "free", but you're still clogging the thread(s) with actual CPU work. If execution is single-threaded the problem becomes very relevant, but it applies to multi-threaded async just the same (you might exhaust the worker pool).
Keeping this in mind isn't esoteric either, as it applies to JavaScript, Python, Rust, C#, and probably others.
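A minimal sketch of that clogging effect with Python's asyncio (the coroutine names and the 50 ms "network call" are made up for illustration): the awaited network call costs the event loop nothing, but the CPU-bound loop stalls every other coroutine until it finishes.

```python
import asyncio
import time

async def fake_network_call():
    # Stand-in for the "free" network call: ~50 ms of wall time,
    # essentially zero CPU on the event loop.
    await asyncio.sleep(0.05)

async def cpu_heavy_handler():
    # This loop runs on the event loop thread. While it spins,
    # no other coroutine (including ticker below) can make progress.
    total = 0
    for i in range(20_000_000):
        total += i
    await fake_network_call()
    return total

async def ticker():
    # Should print roughly every 100 ms, unless the loop is clogged by CPU work.
    for _ in range(5):
        print(f"tick at {time.perf_counter():.2f}s")
        await asyncio.sleep(0.1)

async def main():
    await asyncio.gather(ticker(), cpu_heavy_handler())

asyncio.run(main())
```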
> In an async framework of execution, this doesn't apply.
That's right. Async execution prevents IO from being the bottleneck by moving the wait off the calling thread.
There are 3 situations where this statement falls apart:
1. If the execution is single-threaded, as you rightly pointed out.
2. If the result of the async call matters to the final response of your service. In this case the primary thread may finish its own work, but it's still waiting for the IO to complete before it can respond. That's basically synchronous behaviour built with async primitives (see the sketch after this list).
3. The CPU utilization of iterating over 100k items in a list is negligible compared to the hundreds of daemons and services running on the host. Even a Docker container will use more CPU than iterating over 100k items.
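A minimal Python asyncio sketch of points 2 and 3 (fetch_user and its 20 ms delay are hypothetical): the handler has to await the IO result before it can build its response, so the request still pays the full IO latency despite the async primitives, while iterating over 100k items adds only a rounding error on top.

```python
import asyncio
import time

async def fetch_user(user_id: int) -> dict:
    # Hypothetical downstream call: ~20 ms of IO, async or not.
    await asyncio.sleep(0.02)
    return {"id": user_id, "name": "example"}

async def handle_request(user_id: int) -> dict:
    start = time.perf_counter()

    # Point 2: the response needs this result, so the handler awaits it.
    # The request still takes the full IO time despite the async primitives.
    user = await fetch_user(user_id)

    # Point 3: iterating over 100k items costs well under a millisecond of CPU,
    # a rounding error next to the 20 ms of IO above.
    items = list(range(100_000))
    loop_start = time.perf_counter()
    total = sum(items)
    loop_time = time.perf_counter() - loop_start

    elapsed = time.perf_counter() - start
    print(f"iteration: {loop_time * 1000:.3f} ms, whole request: {elapsed * 1000:.1f} ms")
    return {"user": user, "total": total}

asyncio.run(handle_request(42))
```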
The point is: over-indexing on iteration and time complexity in interviews is pointless, as real systems face challenges far beyond that.