Interestingly, the original article said that the datacenter was meant to stay underwater for five years. I wonder why they pulled it out ahead of time.
edit: just to be clear, my questioning isn't meant to be read in a denigratory way. just wondering. Also thanks to Kydlaw for pointing out that it actually said "up to five years".
Some executive probably said "I think it would be good to take stock of the experiment now to reduce time-risk and arrive at a decision point regarding future strategy".
It increasingly seems that public businesses are incapable of making long-term plans, even rather trivial ones, if the cost of ending the experiment results in immediate gains. The driving force is the single most important metric: this quarter's upcoming earnings report. Businesses that tend to place bets on multi-year investments are either privately held (e.g. SpaceX) or are participating in specific public/private partnerships.
While I do think we are seeing the limits of what a publicly-traded company can actually realize, there are of course counterexamples: Google always seems to have multiple irons in the fire, and there are many other examples in the comments on this discussion of corporate research labs: https://news.ycombinator.com/item?id=24200764
> the cost of ending the experiment results in immediate gains.
This is oft-parroted without any hard evidence, though. If it were even broadly true, most companies would be doing absolutely no R&D, because all R&D is pure cost in the near term.
Clearly companies can take on multi-year projects, like making a movie, and even take on risks when doing so. They also do a lot of things called R&D with similar expectations of direct profits.
However, it’s not clear that Microsoft’s next version of Windows, for example, is actual research rather than the software equivalent of making a movie. As such, I think what people are talking about is fundamental research, not the kind of R&D which happens to qualify for a tax break but is mostly just the cost of doing business.
IMO, the line for what still qualifies as research is basically the DARPA self-driving car challenge. Before the event it looked like basic research; afterward it looked like an engineering challenge to get there first. In 2004 nobody finished; in 2005 five teams finished (with several more getting close), and the race for commercial success was on.
The difference between 2 years and 5 is quite significant for server runtime above water. This argument of diminishing returns would hold better for pulling it at 1 year rather than 2, or 9 months rather than 12, etc.
If the concern is that you'd see significant failures after 5 years (which is likely) that could undercut your future plans, I could understand cutting the plan short.
From my anecdotal experience working in data centers early in my career, you really don't know for quite some time whether your failure rate for a specific type of drive/server is atypical. This information is critical for many businesses, as choosing which drive or which architecture model can have profound cost implications.
It's quite possible that failures at the 5-year mark won't stay proportional to failures at the 2-year mark when compared to a control group. Maybe drive "A" tends to have an unacceptably high failure rate only after "X" hours of up-time.
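A toy Weibull failure model illustrates the point: two hypothetical drive models can show nearly identical failure fractions at 2 years yet diverge wildly by year 5 if one has a wear-out-dominated hazard. All parameters here are invented for illustration, not real vendor data:

```python
from math import exp

def weibull_cdf(t: float, k: float, lam: float) -> float:
    """Fraction of units failed by time t (years) under a Weibull model.

    Shape k < 1 -> hazard dominated by early-life defects (infant mortality);
    shape k > 1 -> hazard dominated by wear-out late in life.
    """
    return 1.0 - exp(-((t / lam) ** k))

# Hypothetical drive models (k = shape, lam = scale in years):
# Drive A: early-life-defect profile; Drive B: wear-out profile.
drives = {"A": (0.7, 40.0), "B": (3.0, 4.0)}

for name, (k, lam) in drives.items():
    at2 = weibull_cdf(2.0, k, lam)
    at5 = weibull_cdf(5.0, k, lam)
    print(f"Drive {name}: {at2:.1%} failed by year 2, {at5:.1%} by year 5")
```

With these made-up parameters both drives have lost roughly 12% of units at the 2-year mark, but Drive B's wear-out curve pushes it past 80% by year 5 while Drive A stays near 20% — exactly the kind of divergence a 2-year experiment can't rule out.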
(source: https://www.bbc.com/news/technology-44368813 -- see video)