The trouble with this ubiquitous argument is that it is a cost-benefit argument that simply ignores one of the major costs: the cost of changing the design later. Yet that cost can easily be as high as or higher than the cost of building the original system.
Treating this sort of design change as an optimization problem (i.e., we'll measure and fix the bottlenecks later) is a category error. There are many OO systems that simply can't be refactored to solve the problems the OP is talking about.
Does this turn out to matter? Sometimes yes, sometimes no. Is there any way to measure it in advance? I doubt it. But that means there's no real cost-benefit argument here at all, only gut-feeling judgment and confirmation bias.
You should definitely weigh the cost of doing the extra work now versus doing it later. In profitable, stable ventures, time now and time later have similar costs. However, in new projects, time now is dramatically more expensive than time later.
Can you give an example of an OO system that can't be refactored to a data-driven system later? I ask because I've made very similar changes to Cal3D, converting overly-object-oriented code to memory-efficient data transformations and, thanks to unit tests, it wasn't hard at all.
> Can you give an example of an OO system that can't be refactored to a data-driven system later?
The systems I was thinking of are ones I've worked on or consulted on. Mainly, they were just big and hard to change. The OO aspects didn't help, mainly because of their tendency to object-graph spaghetti.
There was an inaccuracy in what I said (mainly for brevity). It's not true that you can't refactor such systems to solve their design problems. Technically, you can refactor anything into anything. What I mean by "can't be refactored" is "can't be refactored at a cost less than writing a whole new program". Even then, that's too strong, since you can't prove that. So strictly speaking I should have said "There are many OO systems where nobody who works on them can think of a way to refactor them to solve the problems the OP is talking about in a way that is easier than just rewriting the program." :)
I agree that test coverage makes this easier, although it also adds a maintenance burden.