That doesn't invalidate the point: another EC2 instance is a few dozen bucks a month to run full time; another developer, even overseas, is going to cost you more, and that's for just 40 hours a week.
But that said, it's a good point to make for the OP: Joe Armstrong himself tends to value programmer time over machine time, and micro-optimizations are rarely worth it (as opposed to yours, which sounds like macro-optimization: huge, glaring algorithmic issues to fix, complete replacements of algorithms rather than tightening up code, careful profiling and tweaking to get a 30% speedup in one critical section, etc.)
I guess your response brings out the fact that it's not an either-or proposition. It's a sliding scale between dev resources and computer resources, based on the costs in your situation.
Yes, the exact tradeoff has to be calculated per instance: exactly how much programmer time it would take to reduce how much machine time.
However, when comparing X hours of average programmer time against X hours of average machine time... machines are cheap. You should default to optimizing for programmer time, not machine time.
You missed the point. I was just pointing out that 1 unit of hardware, compared to 1 unit of developer, is cheap. That was what the OP was commenting on.
Obviously, 4950 units of hardware, billed monthly into perpetuity, compared with 30 units of developer time on a fixed-length project, is not cheaper.
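To make that concrete, here's a minimal break-even sketch in Python (all dollar figures are hypothetical, just to show the shape of the tradeoff between a one-off cost and a recurring one):

    # Break-even between a one-off developer cost and a recurring
    # monthly hardware saving. All numbers are made up for illustration.
    def breakeven_months(dev_cost, monthly_hw_saving):
        """Months until the one-off optimization effort pays for itself."""
        return dev_cost / monthly_hw_saving

    # Say 30 hours of developer time at $150/hr eliminates $4950/month
    # of hardware spend:
    dev_cost = 30 * 150        # $4,500, paid once
    monthly_saving = 4950      # saved every month thereafter
    print(breakeven_months(dev_cost, monthly_saving))  # ~0.91 months

Past the break-even point every additional month is pure saving, which is why the "hardware is cheap" default flips as soon as the hardware cost repeats and the developer cost doesn't.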
I just moved a customer off EC2. A small startup. I've billed them about $20k for the work. It will take them ~2.5 months to repay in saved hosting costs at current load levels. But if they're still at current levels in 3 months time, something is wrong. On top of that their ops costs have dropped as we have more control over the environment.
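The implied arithmetic, using only the figures above (rounded):

    fee = 20_000                # one-off consulting bill
    payback_months = 2.5        # stated payback at current load
    monthly_saving = fee / payback_months
    print(monthly_saving)       # ~$8,000/month less in hosting costs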
Basically, with rapidly growing hosting needs, getting a more cost-effective setup was a matter of survival: they'd be unlikely to close another round of funding in the next few months if they didn't get that cost under control.
I wonder how many startups fail because they don't understand how to get their hosting costs under control. It's far too common to see developers who seem to think servers are basically free, and managers who have no clue that they need to seriously question why developers are making the server choices they are making.