Space is a vacuum, i.e. the lack-of-a-thing that makes a thermos great at keeping your drink hot. A satellite is, if nothing else, a fantastic thermos. A data center in space would necessarily rely entirely on radiative cooling, unlike a terrestrial data center that can make use of convection and conduction. You can't just pipe heat out into the atmosphere or build a heat exchanger. You can't exchange heat with vacuum. You can only radiate heat into it.
Heat is going to limit the compute that can be done in a satellite data centre, and radiative cooling solutions are going to massively increase weight. It makes far more sense to build data centers in the Arctic.
Musk is up to something here. This could be another hyperloop (i.e. A distracting promise meant to sabotage competition). It could be a legal dodge. It could be a power grab. What it will not be is a useful source of computing power. Anyone who takes this venture seriously is probably going to be burned.
I'm confused about the level of conversation here. Can we actually run the math on heat dissipation and feasibility?
A Starlink satellite uses about 5 kW of solar power. It needs to dissipate roughly that amount (plus the solar heating on it) just to operate. There are around 10,000 Starlink satellites already in orbit, which means the Starlink constellation is already effectively equivalent to a 50 MW installation (in a rough, back-of-the-envelope feasibility way).
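The back-of-the-envelope figure above, made explicit (both inputs are the rough numbers cited in this comment, not official SpaceX figures):

```python
# Aggregate power of the Starlink constellation, using the comment's
# assumed figures: ~5 kW of solar per satellite, ~10,000 satellites.
per_sat_kw = 5
n_sats = 10_000
total_mw = per_sat_kw * n_sats / 1000  # kW -> MW
print(f"Constellation power: ~{total_mw:.0f} MW")
```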
Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?
Why is starlink possible and other computations are not? Starlink is also already financially viable. Wouldn't it also become significantly cheaper as we improve our orbital launch vehicles?
Simply put no, 50MW is not the typical hyperscaler cloud size. It's not even the typical single datacenter size.
A single AI rack consumes 60kW, and there is apparently a single DC that alone consumes 650MW.
When Microsoft puts in a DC, the machines come in units of a "stamp", i.e. a couple of racks together. These aren't scaled by dollars or square feet, but by the MW.
And on top of that... that's a bunch of satellites not even trying to crunch data at top speed. Nowhere near the right order of magnitude.
But the focus on building giant monolithic datacenters comes from the practicalities of ground based construction. There are huge overheads involved with obtaining permits, grid connections, leveling land, pouring concrete foundations, building roads and increasingly often now, building a power plant on site. So it makes sense to amortize these overheads by building massive facilities, which is why they get so big.
That doesn't mean you need a gigawatt of power before achieving anything useful. For training, maybe, but not for inference which scales horizontally.
With satellites you need an orbital slot and launch time, and I honestly don't know how hard those are to get, but space is pretty big and the only reason for denying them would be safety. Once those are obtained, you can make satellite inferencing cubes in a factory and just keep launching them on a cadence.
I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).
> I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).
You'd be wrong. There's a huge incentive to optimize radiator tech because of things like the International Space Station and Mir. Radiators are a huge part of those deployments because life has pretty narrow thermal bands. The cost of deploying that tech also incentivizes hyper-optimization.
Making bigger structures doesn't make that problem easier.
Fun fact, heat pipes were invented by NASA in the 60s to help address this very problem.
Good point - the comms satellites are not even "keeping" some of the energy, while a DC would. I _am_ now curious about the connection between bandwidth and wattage, but I'm willing to bet that less than 1% of the total energy dissipation on one of these DC satellites would be in the form of satellite-to-earth broadcast (keeping in mind that s2s broadcast would presumably be something of a wash).
Is the SpaceX thin-foil cooling based on graphene real? Can experts check this out?
"SmartIR’s graphene-based radiator launches on SpaceX Falcon 9" [1]. This could be the magic behind this bet on heat radiation through exotic materials. Lots of blog posts say impossible, expensive, stock pump, etc. Could this be the underlying technology breakthrough? Along with avoiding complex self-assembly in space through decentralization (1 million AI constellation, laser-grid comms).
It's like this. Everything about operating a datacenter in space is more difficult than it is to operate one on earth.
1. The capital costs are higher, you have to expend tons of energy to put it into orbit
2. The maintenance costs are higher because the lifetime of satellites is pretty low
3. Refurbishment is next to impossible
4. Networking is harder, either you are ok with a relatively small datacenter or you have to deal with radio or laser links between satellites
For Starlink this isn't as important. Starlink provides something that can't really be provided any other way, but even so, the US alone uses 176 terawatt-hours annually for data centers, so Starlink is 1/400th of that, assuming your estimate is accurate (and I'm not sure it is; does it account for the night cycle?)
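Sanity check on the 1/400th figure: converting 176 TWh/year to an average power draw and comparing it with the 50 MW estimate above.

```python
# 176 TWh/year of US data center consumption, expressed as average power.
us_dc_twh_per_year = 176          # figure cited in the comment above
hours_per_year = 8760
avg_power_gw = us_dc_twh_per_year * 1000 / hours_per_year  # TWh -> GWh, divide by hours
ratio = avg_power_gw * 1000 / 50                           # GW -> MW, vs 50 MW
print(f"US DC average draw: ~{avg_power_gw:.1f} GW, Starlink at 50 MW is ~1/{ratio:.0f}")
```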
What about sourcing and the cost of energy? Solar Panels more efficient, no bad weather, and 100% in sunlight (depending on orbit) in space. Not that it makes up for the items you listed, but it may not be true that everything is more difficult in space.
Let's say with no atmosphere and no night cycle, a space solar panel is 5x better. Deploying 5x as many solar panels on the ground is still going to come in way under the budget of the space equivalent.
And it's not the same at all. 5x the solar panels on the ground means 5x the power output in the day, and still 0 at night. So you'd need batteries. If you add in bad weather and winter, you may need battery capacity for days, weeks or even months, shifting the cost to batteries while still relying on nuclear or fossil backups in case your battery dies or some 3/4/5-sigma weather event outside what you designed for occurs.
Just take the cost of getting a kg into space and compare it to how much a solar panel will generate.
Current satellites get around 150 W/kg from solar panels. The cost of launching 1 kg to space is ~$2,000. So we're at about $13.33/W. We need to double it because the same amount needs to be dissipated, so let's round it to $27/W.
One NVIDIA GB200 rack is ~120 kW. Just to power it, you need to send $3,240,000 worth of payload into space. Then you need to send an additional $3,106,000 worth of servers (a rack of them is 1,553 kg). Plus some extra for piping.
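The arithmetic above, spelled out (all inputs are the commenter's assumptions: ~150 W/kg panels, ~$2,000/kg launch cost, a 1,553 kg GB200 rack):

```python
# Launch cost per watt of power, assuming radiator mass roughly equals
# panel mass (same amount of power must be rejected as heat).
w_per_kg = 150
launch_usd_per_kg = 2000
usd_per_watt = launch_usd_per_kg / w_per_kg      # ~= $13.33/W for generation
usd_per_watt_total = 2 * usd_per_watt            # doubled for heat rejection mass

rack_kw = 120                                    # NVIDIA GB200 NVL72 rack power
power_launch_cost = rack_kw * 1000 * usd_per_watt_total   # ~$3.2M ($3.24M if rounded to $27/W)
server_launch_cost = 1553 * launch_usd_per_kg             # rack mass cited above
print(f"${usd_per_watt_total:.2f}/W; power mass: ${power_launch_cost:,.0f}; servers: ${server_launch_cost:,.0f}")
```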
Over 10 years ago, the best satellites got 500 W/kg [2]. Modern solar panels designed to be light are at 200 g per m² [1]. That's 5 m² per kg. One m² generates ca. 500 W. So we're at 2.5 kW per kg. Some people claim 4.3 kW/kg is possible.
Starship launch costs have a $100/kg goal, so we'd be at $40/kW, or $4,800 for a 120 kW cluster.
120 kW is about 1 GWh annually, which costs you around $130k per year to buy in Europe. ROI: 14 days. Even if launch costs aren't that low in the beginning and there's a lot more stuff to send up, your ROI might be a year or so, which is still good.
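The optimistic-case math above, made explicit. Every input is the commenter's assumption (Starship's aspirational $100/kg, 2.5 kW/kg panels, European industrial power at roughly $0.125/kWh):

```python
# Launch cost vs. annual value of the electricity generated in orbit.
kw_per_kg = 2.5                       # 200 g/m^2 panels at ~500 W/m^2
launch_usd_per_kg = 100               # Starship cost goal
usd_per_kw = launch_usd_per_kg / kw_per_kg     # $40/kW launched

cluster_kw = 120
launch_cost = usd_per_kw * cluster_kw          # $4,800
annual_mwh = cluster_kw * 8760 / 1000          # ~1,051 MWh, i.e. ~1 GWh
annual_ground_cost = annual_mwh * 1000 * 0.125 # ~$131k/yr at $0.125/kWh
payback_days = launch_cost / annual_ground_cost * 365
print(f"launch: ${launch_cost:,.0f}; ground power: ${annual_ground_cost:,.0f}/yr; payback: ~{payback_days:.0f} days")
```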
Solar panels in space are more efficient, but on the ground we have dead dinosaurs we can burn. The efficiency gain is also more than offset by the fact that you can't replace a worn out panel. A few years into the life of your satellite its power production drops.
... if you completely ignore the difficulty of getting them up there. I'd be interested to see a comparison between the amount of energy required to get a solar panel into space, and the amount of energy it produces during its lifetime there. I wouldn't be surprised if it were a net negative; getting mass into orbit requires a tremendous amount of energy, and putting it there with a rocket is not an efficient process.
The cost might be the draw (if there is one). Big tech isn't afraid of throwing money at problems, but the AI folk and financiers are afraid of waiting and uncertainty. A satellite is crazy expensive but throwing more money at it gets you more satellites.
At the end of the day I don't really care either way. It ain't my money, and their money isn't going to get back into the economy by sitting in a brokerage portfolio. To get them to spend money this is as good a way as any other, I guess. At least it helps fund a little spaceflight and satellite R&D on the way.
>1. The capital costs are higher, you have to expend tons of energy to put it into orbit
putting 1 kW of solar on land: ~$2K; putting it into orbit on Starship (current ground-based heavy solar panels, 40 kg for 4 m² producing 1 kW in space): anywhere between $400 and $4K.
Add to that that the costs on Earth will only be growing, while costs in space will be falling.
Ultimately Starship's costs will come down to the bare cost of fuel + oxidizer: roughly 20 kg of propellant per 1 kg to LEO, i.e. less than $10/kg, if they manage streamlined operations and high reuse. Yet even at $100/kg, it is still better in space than on the ground.
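A rough version of that propellant-only floor. The propellant prices and the ~3.6:1 oxidizer-to-fuel ratio are assumptions for illustration, not quoted SpaceX figures:

```python
# Fuel-only cost floor: ~20 kg of methalox propellant per kg to LEO.
o_f_ratio = 3.6                 # assumed LOX:CH4 mass ratio
lox_usd_per_kg = 0.10           # assumed bulk LOX price
ch4_usd_per_kg = 0.60           # assumed liquid methane price
blended = (o_f_ratio * lox_usd_per_kg + ch4_usd_per_kg) / (o_f_ratio + 1)
prop_per_payload_kg = 20
floor_usd_per_kg = blended * prop_per_payload_kg
print(f"propellant-only floor: ~${floor_usd_per_kg:.2f} per kg to LEO")
```

Under these assumptions the floor comes out at a few dollars per kg, consistent with the "less than $10" claim.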
>That would make your solar panel (40kg) around $60K to put into space.
with the GPU costing the same, it would only double the capex.
>Even being generous and assuming you could get it to $100 per kg that's still $4000
Noise compared to the main cost: GPUs.
>There's a lot of land in the middle of nowhere that is going to be cheaper than sending shit to space.
Cheapness of the location of your major investment (the GPUs) may well turn out to be secondary to other considerations: stable availability of power/cooling capacity, jurisdiction, etc.
I can only speculate out of thin air: the B200 and the Ryzen 9950X are made on the same process and have an 11x difference in die size. 11 Ryzens would cost $6K, and with 200 GB RAM, $8K. Googling suggests the B200's production cost is $6,400. That matches the Ryzen-based estimate above (the Ryzen numbers are retail, yet it has a higher yield, so it balances out). So I'd guess that at Google's scale a TPU comparable to the B200 should be $6K-$10K.
It is SpaceX/Elon who bet billions on that yadda-yadda, not me. I wrote "if" for $10/kg. I'm sure, though, that they could easily get under $100/kg, which is $15M per flight. And even at $100/kg the datacenters in space still make sense, as comparable to ground-based ones while providing demand for the huge Starship launch capacity.
A datacenter costs ~$1,000/ft². How much equipment is there per square foot? Say 10 kg (1 ton per rack, spread over the rack plus hallway footprint). That's $1,000 to put into orbit on Starship at $100/kg. At sub-$50/kg, you can put all the equipment plus solar panels into orbit and it would still be cheaper than on the ground.
It looks like you’re comparing the cost of installing solar panels on the ground with the cost of just transporting them to orbit. You can’t just toss raw solar panels out of a cargo bay.
> putting 1KW of solar on land - $2K, putting it into orbit on Starship (current ground-based heavy solar panels, 40kg for 4m2 of 1KW in space) - anywhere between $400 and $4K.
What starship? The fantasy rocket Musk has been promising for 10 years or the real one that has thus far delivered only one banana worth of payload into orbit?
You are presented with a factual, verifiable statement: Starship has been promised for years, and all that's been delivered is something capable of sending a banana to LEO. Wayyyy overdue too.
You meet this with "well, once it works, it'll be amazing and you'll be queuing up"? How very very musky!
The bean counters at NVIDIA recently upped the expected lifecycle from 5 years to 6. On paper, you are now expected to get 6 years out of a GPU for datacenter use, not the 3-5 commonly assumed.
> The maintenance costs are higher because the lifetime of satellites is pretty low
Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean...
> Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean
Hell, you're going to lose some fraction of chips to entropy every year. What if you could process those into reaction mass?
I believe that a modern GPU will burn out immediately. Chips for space are using ancient process nodes with chunky sized components so that they are more resilient to radiation. Deploying a 3nm process into space seems unlikely to work unless you surround it with a foot of lead.
This brings a whole new dimension to that joke about how our software used to leak memory, then file descriptors, then ec2 instances, and soon we'll be leaking entire data centers. So essentially you're saying - let's convert this into a feature.
If anything, considering this + limited satellite lifetime, it almost looks like a ploy to deal with the current issue of warehouses full of GPUs and the questions about overbuild with just the currently actively installed GPUs (which is a fraction of the total that Nvidia has promised to deliver within a year or two).
Just shoot it into space where it's all inaccessible and will burn out within 5 years, forcing a continuous replacement scheme and steady contracts with Nvidia and the like to deliver the next generation at the exact same scale, forever
> Everything about operating a datacenter in space is more difficult than it is to operate one on earth
Minus one big one: permitting. Every datacentre I know of going up right now is spending 90% of its bullshit budget on battling state and local governments.
But since building a datacenter almost anywhere on the planet is more convenient than outer space, surely you can find some suitable location/government. Or put it on a boat, which is still 100 times more sensible than outer space.
> since building a datacenter almost anywhere on the planet is more convenient than outer space, surely you can find some suitable location/government
More convenient. But I'm balancing the cost equation. There are regimes where this balances. I don't think we're there yet. But it's irrational to reject it completely.
> Or put it on a boat, which is still 100 times more sensible than outer space
Surely, given Starlink's ~5-year deorbit plan, you could design a platform to hold up that long... and instead of burning the whole thing up, you could just refurbish it when you swap out the actual rack contents, considering those probably have an even shorter edge lifespan.
Starlinks are built to safely burn up on re-entry. A big reusable platform will have to work quite differently to never uncontrollably re-enter, or it might kill someone by high velocity debris on impact.
This adds weight and complexity and likely also forces a much higher orbit.
I can’t wait for all the heavy metals that are put into GPUs and other electronics showering down on us constantly. Wonder why the billionaires have their bunkers.
> If you think there is no papework necessary for launching satellites, you are very very wrong
I would be. And granted, I know a lot more about launching satellites than building anything. But it would take me longer to get a satellite in the air than the weeks it will take me to fix a broken shelf in my kitchen. And hyperscalers are connecting in months, not weeks.
> when he talks about subject outside of his domain
Hate to burst your bubble. But I have a background in aerospace engineering. I’ve financed stuff in this field, from launch vehicles to satellites. And I own stakes in a decent chunk of the plays in this field. Both for and against this hypothesis.
So yeah, I’ll hold my ground on having reasonable basis for being sceptical of blanket dismissals of this idea as much as I dismiss certainty in its success.
There are a lot of cheap shots around AI and aerospace. Some are coming from Musk. A lot are coming from one-liner pros. HN is pretty good at filtering those to get the good stuff, which is anyone doing real math.
That actually confirms what the other commenter said.
Your assertion was "Every datacentre I know going up right now is spending 90% of their bullshit budget on battling state and local governments" and you haven't demonstrated any expertise in building data centers.
You've given a very extraordinary claim about DC costs, with no evidence presented, nor expertise cited to sway our priors.
So what? Why is it important to have 24/7 solar, that you cannot have on the ground? On the ground level you have fossil fuels.
I wonder if you were thinking about muh emissions for a chemical rocket launched piece of machinery containing many toxic metals to be burnt up in the air in 3-5 years... It doesn't sound more environmentally friendly.
I mean, you don't have zoning in space, but you do have things like international agreements to avoid, you know, catastrophic human developments like Kessler syndrome.
All satellites launched into orbit these days are required to have de-orbiting capabilities to "clean up" after EOL.
I dunno, two years ago I would have said municipal zoning probably ain't as hard to ignore as international treaties, but who the hell knows these days.
Parent just means "a lot" and is using 90% to convey their opinion. The actual numbers are closer to 0.083%[1][2][3][4] and parent thinks they should be 0.01-0.1% of the total build cost.
1. Assuming 500,000 USD in permitting costs. See 2.
2. Permits and approvals: Building permits, environmental assessments, and utility connection fees add extra expenses. In some jurisdictions, the approval process alone costs hundreds of thousands of dollars. https://www.truelook.com/blog/data-center-construction-costs
3. Assuming a 60MW facility at $10M/MW. See 4.
4. As a general rule, it costs between $600 to $1,100 per gross square foot or $7 million to $12 million per megawatt of commissioned IT load to build a data center. Therefore, if a 700,000-square foot, 60-megawatt data center were to be built in Northern Virginia, the world’s largest data center market, it would cost between $420 million and $770 million to construct the facility, including its powered shell and equipping the building with the appropriate electrical systems and HVAC components. https://dgtlinfra.com/how-much-does-it-cost-to-build-a-data-...
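Computing the footnoted share from the figures above ($500k permitting against a 60 MW build at ~$10M/MW):

```python
# Permitting cost as a fraction of total build cost.
permitting_usd = 500_000            # footnote 2's "hundreds of thousands" assumption
build_usd = 60 * 10_000_000         # 60 MW facility at $10M per MW (footnote 4)
share_pct = permitting_usd / build_usd * 100
print(f"permitting share: {share_pct:.3f}% of build cost")
```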
He said bullshit budget, not budget. He's thinking about opportunity and attention costs, not saying that permits literally have a higher price tag than GPUs.
> Source? I can't immediately find anything like that
I’ve financed two data centers. Most of my time was spent over permitting. If I tracked it minute by minute, it may be 70 to 95%. But broadly speaking, if I had to be told about it before it was solved, it was (a) a real nuisance and (b) not technical.
That may have been the case before, but it is not anymore. I live in Northern VA, the capital of data centers, and permit-wise it is easier to build one than a tree house. Also see the provisions in OBBB.
This is a huge one. What Musk is looking for is freedom from land acquisition. Everything else is an engineering and physics problem that he will somehow solve. The land acquisition problem is out of his hands and he doesn't want to deal with politicians. He learned from building out the Memphis DC.
Where a random malicious president can't just hijack the government and giga-companies can't trivially lobby lawmakers for profits at the expense of citizens?
So why does he not build here in Europe then? Getting a permit for building a data center in Sweden is just normal industrial zoning that anyone can get for cheap, there is plenty of it. Only challenge is getting enough electricity.
I meant Europe as an example of how not to do regulation. The problem you just mentioned: if you get land easily, electricity won't be available, and vice versa.
Then maybe you should move here. We have in most cases well functioning regulations. Of course there are counter examples where it has been bad but data centers is not one of them. It is easy to get permits to build one.
Maybe, but I'm skeptical, because current DCs are not designed to minimize footprint. Has anyone even built a two-story DC? Obviously cooling is always an issue, but not, directly, land.
Now that I think of it, a big hydro dam would be perfect: power and cooling in one place.
Downtown Los Angeles: the One Wilshire building, the world's most connected building, has over twenty floors of data centers. I used Corporate Colo, which was a block or two away; that building had at least 10 floors of data centers.
I think Downtown Seattle has a bunch too (including near Amazon campus). I just looked up one random one and they have about half the total reported building square footage of a 10-story building used for a datacenter: https://www.datacenters.com/equinix-se3-seattle
Sure, we can run the math on heat dissipation. The Stefan-Boltzmann law is free and open source, and its application is high-school-level physics. You talk about 50 MW. You are going to need a lot of surface area to radiate that off at anywhere close to reasonable temperatures.
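A sketch of that math, under stated assumptions (300 K panel temperature, emissivity 0.9, two-sided panels radiating to deep space, solar loading and view-factor losses ignored):

```python
# Stefan-Boltzmann: P = sides * emissivity * sigma * A * T^4
SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
T = 300.0                   # radiator temperature, K (~27 C)
emissivity = 0.9
sides = 2                   # flat panel radiating from both faces

flux_w_per_m2 = sides * emissivity * SIGMA * T**4   # ~827 W per m^2 of panel
area_m2 = 50e6 / flux_w_per_m2                      # for 50 MW of waste heat
print(f"radiator area for 50 MW at 300 K: ~{area_m2:,.0f} m^2")
```

That's on the order of 60,000 m², i.e. roughly ten football fields of radiator, before accounting for sunlight falling on the panels.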
Amazon’s new campus in Indiana is expected to use 2.2 GW when complete. 50 MW is nothing, and that’s ignoring the fact that most of that power wouldn't actually be used for compute.
A Starlink satellite is mainly just receiving and sending data, the bare minimum of a data center-satellite's abilities; everything else comes on top and would be the real power drain.
This is the main point, I think. I am very much convinced that SpaceX is capable of putting a datacenter into space. I am not convinced they can do it cheaper than building a datacenter on earth.
> Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?
xAI’s first data center buildout was in the 300MW range and their second is in the Gigawatt range. There are planned buildouts from other companies even bigger than that.
So data center buildouts in the AI era need 1-2 orders of magnitude more power and cooling than your 50MW estimate.
Even a single NVL72 rack, just one rack, needs 120kW.
I would assume such a setup involves multiple stages of heat pumps from the GPU to a 1400 °C radiator. Obviously that's going to impact efficiency.
Also, I'm not seriously suggesting that 1400 °C radiators are a reasonable approach to cooling a space data centre. It's just intended to demonstrate how infeasible the idea is.
Because 10K satellites have a FAR greater combined surface area than a single space-borne DC would. Per the Stefan-Boltzmann law, radiated power scales linearly with surface area (and with the 4th power of temperature).
50MW is on the small side for an AI cluster - probably less than 50k gpus.
If the current satellite model dissipates 5 kW, you can't just add a GPU (+1 kW). Maybe removing most of the downlink hardware lets you put in 2 GPUs? So if you had 10k of these, you'd have a pretty high-latency cluster of 20k GPUs.
I'm not saying I'd turn down free access to it, but it's also very cracked. you know, sort of Howard Hughesy.
> A Starlink satellite uses about 5K Watts of solar power. It needs to dissipate around that amount (+ the sun power on it) just to operate.
This isn't quite true. It's very possible that the majority of that power is going into the antennas/lasers, which technically means the energy is being dissipated but it never became heat in the first place. Also, 5 kW of solar power likely only means ~3 kW of actual electrical consumption (you will over-provision a bit, both for when you're behind the earth and just for safety margin).
> Why is starlink possible and other computations are not?
Aside from the point others have made that 50 MW is small in the context of hyperscalers, if you want to do things like SOTA LLM training, you can't feasibly do it with large numbers of small devices.
Density is key because of latency - you need the nodes to be in close physical proximity to communicate with each other at very high speeds.
For training an LLM, you're ideally going to want individual satellites with power delivery on the order of at least about 20 MW, and that's just for training previous-generation SOTA models. That's 4,000 times more power than a single current Starlink satellite, and nearly 300 times that of the ISS.
You'd need radiator areas in the range of tens of thousands of square meters to handle that. Is it theoretically technically possible? Sure. But it's a long-term project, the kind of thing that Musk will say takes "5 years" that will actually take many decades. And making it economically viable is another story - the OP article points out other issues with that, such as handling hardware upgrades. Starlink's current model relies on many cheap satellites - the equation changes when each one is going to be very, very expensive, large, and difficult to deploy.
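Checking both ratios above, with the same rough radiator assumptions used elsewhere in this thread (300 K panels, emissivity 0.9, two-sided, solar loading ignored):

```python
# Power ratio vs. a current Starlink satellite, and radiator area
# for a single 20 MW training satellite.
SIGMA = 5.670e-8                           # W m^-2 K^-4
ratio = 20e6 / 5e3                         # 20 MW vs. ~5 kW per Starlink
flux = 2 * 0.9 * SIGMA * 300**4            # ~827 W per m^2 of panel
area_m2 = 20e6 / flux
print(f"{ratio:,.0f}x a Starlink satellite; ~{area_m2:,.0f} m^2 of radiator")
```

The area comes out in the mid tens of thousands of square meters, consistent with the comment's range.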
Grok is losing pretty spectacularly on the user / subscriber side of things.
They have no path to paying for their existence unless they drastically increase usage. There aren't going to be very many big winners in this segment and xAI's expenses are really really big.
I really wonder what will happen when the AI companies can no longer set fire to piles of investor money, and have to transition to profitability or at least revenue neutrality - as that would entail dramatically increasing prices.
Is the plan to have everyone so hopelessly dependent on their product that they grit their teeth and keep on paying?
Think about the stock return over a period - its composed of capital gains and dividends.
Now what happens when capital gains disappear and perhaps turn into capital losses? Dividends have to go higher.
What does this mean? Less retained earnings / cashflows that can be re-invested.
Apple is the only one that will come out of this OK. The others will be destroyed, for if they don't return cash, the cash balance will be discounted, leading to a further reduction in the value of equity. The same thing that happened to Zuckerberg and Meta with the Metaverse fiasco.
Firms in the private sphere will go bust/acquired.
> Now what happens capital gains disappears and perhaps turns into capital losses? Dividends have to go higher
This is not how corporate finance works. Capital gains and losses apply to assets. And only the most disciplined companies boost dividends in the face of decline—most double down and try to spend their way back to greatness.
It'll be a combination of advertising and subscription fees, and there will only be a few big winners.
Gemini is practically guaranteed. With the ad model already primed, their financial resources, their traffic to endlessly promote Gemini (ala Chrome), their R&D capabilities around AI, their own chips, crazy access to training data, and so on - they'd have to pull the ultimate goof to mess up here.
Microsoft is toast, short of a miracle. I'd bet against Office and Windows here. As Office goes down, it's going to take Windows with it; the great Office moat is about to end. The company struggles, the stock struggles, Azure gets spun off (to unlock value, under institutional pressure), Office + Windows get spun off, and the company splits into pieces. The LLMs are an inflection point for Office, and Microsoft is super at risk: backwards regarding AI, and slow. The OpenAI pursuit, as it was done, was a gigantic mistake for Microsoft, one of the dumbest strategies in the history of tech; it left them with their pants down. Altman may have killed a king by getting him to be complacent.
Grok is very unlikely to make it (as is). The merger with SpaceX guarantees its death as a competitor to GPT/Gemini/Claude, it's over. Maybe they'll turn Grok into something useful to SpaceX. More likely they'll slip behind and it'll die rapidly like Llama. The merger is because they see the writing on the wall, this is a bailout to the investors (not named Elon) of xAI, as the forced Twitter rollup was a bailout for the investors of Twitter.
Claude is in a weird spot. What they have is not worth $300-$500 billion. Can they figure out how to build a lot more value out of what they have today (and get their finances sustainable), before the clock runs out? Or do they get purchased by Meta, Microsoft, etc.
OpenAI has to rapidly roll out the advertising model and get the burn rate down to meaningless levels, so they're no longer dependent on capital markets for financing (that party is going to end suddenly).
Meta is permanently on the outside looking in. They will never field an in-house competitor to GPT or Gemini that can persistently keep up. Meta doesn't know what it is or why it should be trying to compete with GPT/Gemini/Claude. Their failure (at this) is already guaranteed. They should just acquire GPT 4o and let their aging userbase on FB endlessly talk itself into the grave for the next 30 years while clicking ads.
If Amazon knew what they were doing (they don't right now), they would: immediately split retail + ads and AWS. The ad business ensures that the retail business will continue to thrive and would be highly lucrative. Then have AWS purchase Anthropic when valuations drop, bolt it on to AWS everything. Far less of an anti-trust issue than if what is presently known as Amazon attempted it here and now. Anthropic needs to build a lot on to itself to sustain itself and justify its valuation, AWS already has the answer to that.
If valuations plunge, and OpenAI is not yet sustainable, Microsoft should split itself into pieces and have the Windows-Office division purchase OpenAI as their AI option. It'd be their only path to avoiding anti-trust blocking that acquisition. As is Microsoft would not be allowed to buy OpenAI. Alternatively Microsoft can take a shot at acquiring Anthropic at some point - this seems likely given the internal usage going on at Redmond, the primary question is anti-trust (but in this case, Anthropic is viewed as the #3, so Microsoft would argue it bolsters competition with GPT & Gemini).
"Gemini is practically guaranteed. With the ad model already primed, their financial resources, their traffic to endlessly promote Gemini (ala Chrome), their R&D capabilities around AI, their own chips, crazy access to training data, and so on - they'd have to pull the ultimate goof to mess up here"
I'm not convinced of this in the long run, TBH. Google is seemingly a pure-play technology firm that has to make products for the sake of it, else the technology is not accessible/usable. Does that mean they are at their core a product firm? Nah. That's always been Apple's core thing, alongside superior marketing.
One only has to compare Google's marketing of the Pixel phone to Apple - it does not come close. Nobody connects with Google's ads, the way they do with Apple. Google has a mountain to climb and has to compensate the user tremendously for switching.
Apple will watch the developments keenly and figure out where they can take advantage of the investments others have made. Hence the partnerships et al with Google.
Merging with SpaceX means they don't have to pay for their existence. Anyway they're probably positioned better than any other AI player except maybe Gemini.
I don’t follow why merging with SpaceX means they don’t have to pay for their existence. Someone does. Presumably now that is SpaceX. What is SpaceX’s revenue?
Maybe the idea is that SpaceX has access to effectively unlimited money through the US Government, either via ongoing lucrative contracts, or likely bailouts if needed. The US Govt wouldn't bail out xAI but they would bail out SpaceX if they are in financial trouble.
> It makes far more sense to build data centers in the arctic.
What (literally) on earth makes you say this? The arctic has excellent cooling and extremely poor sun exposure. Where would the energy come from?
A satellite in sun-synchronous orbit would have approximately 3-5X more energy generation than a terrestrial solar panel in the arctic. Additionally anything terrestrial needs maintenance for e.g. clearing dust and snow off of the panels (a major concern in deserts which would otherwise seem to be ideal locations).
There are so many more considerations that go into terrestrial generation. This is not to deny the criticism of orbital panels, but rather to encourage a real and apolitical engineering discussion.
> A satellite in sun-synchronous orbit would have approximately 3-5X more energy generation than a terrestrial solar panel in the arctic.
Building 3-5x more solar plants in the Arctic would still be cheaper than going to space. And that's ignoring that other, more efficient plants are possible. Even building a long power line around the globe to fetch energy from warmer regions would be cheaper.
It has been worked out. Just look at how big the ISS radiators are, and that they dissipate around 100 kW, then calculate the cost of sending all that to space. And by that I mean it would be even more expensive than some of the estimates flying around.
While I personally think it's another AI cash grab and he just wants to find some more customers for SpaceX, the other angle is "you can't copyright-infringe in space" - so it might be the perfect place to park those terabytes of stolen copyrighted training material, in case some country suddenly decides that corporations stealing copyrighted content is not okay any more.
A DGX H200 is 10.2 kW. So that's like 10 of them. Or only 80 H200s. Doesn't sound like a big data center. More like a server room.
ISS radiators are huge: 13.6 x 3.1 m. Each radiates 35 kW, so you need 3 of them to hit the 100 kW target. They are also filled with a pumped coolant, so not exactly a passive system, and as such can break down for a whole lot of reasons.
You also need to collect that power, so you need roughly the same amount coming in from solar panels. ISS solar array wings are 35 x 12 m and generate about 31 kW each, so we'll need at least 4 of them (3 x 31 kW falls just short). BTW, each weighs a ton - a literal metric ton.
It hardly seems feasible. Huge infrastructure costs for small AI server rooms in space.
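The arithmetic above can be sketched out quickly. A minimal back-of-envelope, using the per-unit figures quoted in this thread (assumptions for illustration, not verified specs):

```python
import math

# Sizing a ~100 kW orbital "server room" from the numbers quoted above.
target_kw = 100.0
dgx_h200_kw = 10.2    # one DGX H200 system
radiator_kw = 35.0    # one ISS-style radiator panel (13.6 x 3.1 m)
solar_wing_kw = 31.0  # one ISS-style solar array wing (35 x 12 m, ~1 t each)

systems = round(target_kw / dgx_h200_kw)        # ~10 DGX systems
radiators = math.ceil(target_kw / radiator_kw)  # 3 radiators
wings = math.ceil(target_kw / solar_wing_kw)    # 4 wings (3 x 31 kW falls short)

print(f"{systems} DGX systems, {radiators} radiators, {wings} solar wings")
```

So roughly ten racks' worth of compute needs three bus-sized radiators and four tonnes of solar wings before you've launched a single server.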
In Table 1, the cost of cooling of a terrestrial data centre is listed as $7M. The cost of cooling in space is assigned a value of $0 with the claim:
"More efficient cooling architecture taking advantage of higher ΔT in space"
My bold claim: The cost of cooling will not be $0. The cost of launching that cooling into space will also not be $0. The cost of maintaining that mechanically complex cooling in space will not be $0.
They then throw in enough unrealistic calculations later in the "paper" to show that they thought about the actual cost at least a little bit. Apparently just enough to conclude that it's so massive there's no way they're going to list it in the table. Table 1 is pure fantasy.
I will not re-read them, but what I recall from those threads is that the numbers don't make sense. Something like:
- radiators multiple square kilometers in size, in space;
- lifting the necessary payloads would take orders of magnitude more launch capacity than the whole world currently has;
- a maintenance nightmare: yeah, you can have redundancy, but there's no feasible way to maintain;
- compare how much effort/energy/maintenance is required to keep the ISS or Tiangong space stations running - these space datacenters sound ridiculous.
NB: I would be happy to be proven wrong. There are many things that become possible if we invest effort (and money) into them, akin to JFK's "We choose to go to the Moon" speech. It sounded incredible, but it was done from nearly zero to Moon landing in ~7 years. Though as far as I understand, the napkin math for space data centers at this scale calls for efforts orders of magnitude beyond the Apollo program, i.e. launching a Saturn V multiple times per day for years. Even with booster-reuse technology this seems literally incredible (not to mention fuel/material costs).
They do not at any point outline how cooling will be done; they simply say "it will be more efficient than chillers due to the larger delta T" - which misses the point, because radiative cooling depends on the absolute radiator temperature, not on a delta T against the surroundings.
(DTC) Datacentres take electricity and turn it into low-grade heat, e.g. 60 °C water. Put them anywhere you've either got excess (cheap) energy or can use the heat. Either is fine, both is great - but neither, which is current standard practice, is bad.
It's perfectly possible to put small data centres in city centres and pipe the heat around town, they take up very very little space and if you're consuming the heat, you don't need the noisy cooling towers (Ok maybe a little in summer).
Similarly if you stick your datacentre right next to a big nuclear power plant, nobody is even going to notice let alone care.
Resistive heating is a tremendously inefficient way to generate heat. Sometimes it's worth it if you get something useful in exchange (such as full spectrum light in the winter). But it's not all upsides.
Heat pumps are magic. They're something like 300% efficient. Each watt generates 3 watts of useful heat.
I share your enthusiasm about heat pumps, but I wonder what the efficiency of using waste heat is. Couldn't it be competitive with heat pumps? As it's a waste product, isn't it reasonable to also expect it to be more than 100% efficient?
The energy economics in space are also a bit more complicated than usually thought. I think Starlink has been using Si cells instead of III-V-based ones, but in addition to lower output they also tend to degrade faster under radiation. I guess that's ok if the GPU is going to be toast in a few years anyway so you might as well de-orbit the whole thing. But that same solar cell on Earth will happily be producing for 40+ years.
Also the same issue with radiative cooling pops up for space solar cells - they tend to run way hotter than on Earth and that lowers their efficiency relative to what you could get terrestrially.
Big tech businesses are convinced that there must be some profitable business model for AI, and are undeterred by the fact that none has yet been found. They want to be the first to get there, raking in that sweet sweet money (even though there's no evidence yet that there is money to be made here). It's industry-wide FOMO, nothing more.
Typically in capitalism, if there is any profit, the race is towards zero profit. The alternative is a race to bankrupt all competitors at enormous cost in order to jack up prices and recoup the losses as a monopoly (or duopoly, or some other stable arrangement). I assume the latter is the goal, but that means burning through like 50%+ of American GDP growth just to be undercut by China.
Imo I would be extremely angry if I owned any spacex equity. At least nvidia might be selling to china in the short term... what's the upside for spacex?
People keep saying this but it's simply untrue. AI inference is profitable. OpenAI and Anthropic have 40-60% gross margins. If they stopped training and building out future capacity they would already be raking in cash.
They're losing money now because they're making massive bets on future capacity needs. If those bets are wrong, they're going to be in very big trouble when demand levels off lower than expected. But that's not the same as demand being zero.
those gross profit margins aren't that useful since training at fixed capacity is continually getting cheaper, so there's a treadmill effect where staying in business requires training new models constantly to not fall behind. If the big companies stop training models, they only have a year before someone else catches up with way less debt and puts them out of business.
Only if training new models leads to better models. If the newly trained models are just a bit cheaper but not better most users wont switch. Then the entrenched labs can stop training so much and focus on profitable inference
A significant number of AI companies and investors are hoping to build a machine god. This is batshit insane, but I suppose it might be possible. Which wouldn't make it any more sane.
But when they say, "Win the AI race," they mean, "Build the machine god first." Make of this what you will.
There's a synergy effect here - Tesla sells you a solar roof and car bundle, the roof comes without a battery (making it cheaper) and the car now gets a free recharge whenever you're home (making it cheaper in the long term).
Of course that didn't work out with this specific acquisition, but overall it's at least a somewhat reasonable idea.
It's obviously a pretty weird thing for a car company to do, and is probably just a silly idea in general (it has little obvious benefit over normal solar panels, and is vastly more expensive and messy to install), but in principle it could at least work, FSOV work. The space datacenter thing is a nonsensical fantasy.
In comparison to datacenters in space yes. Solar roofs are already a profitable business, just not likely to be high growth. Datacenters in space are unlikely to ever make financial sense, and even if they did, they are very unlikely to show high growth due to continuing ongoing high capital expenses inherent in the model.
Off on a tangent here but I'd love for anyone to seriously explain how they believe the "AI race" is economically winnable in any meaningful way.
Like what is the believed inflection point that changes us from the current situation (where all of the state-of-the-art models are roughly equal if you squint, and the open models are only like one release cycle behind) to one where someone achieves a clear advantage that won't be reproduced by everyone else in the "race" virtually immediately.
I _think_ the idea is that the first one to hit self improving AGI will, in a short period of time, pull _so_ far ahead that competition will quickly die out, no longer having any chance to compete economically.
At the same time, it'd give the country controlling it so much economic, political and military power that it becomes impossible to challenge.
I find that all to be a bit of a stretch, but I think that's roughly what people talking about "the AI race" have in mind.
> Off on a tangent here but I'd love for anyone to seriously explain how they believe the "AI race" is economically winnable in any meaningful way.
Because the first company to have a full functioning AGI will most likely be the most valuable in the world. So it is worth all the effort to be the first.
They ultimately want to own everyone's business processes, is my guess. You can only jack up the subscription prices on coding models and chatbots by so much, as everyone has already noted... but if OpenAI runs your "smart" CRM and ERP flows, they can really tighten the screws.
If you have the greatest coding agent under your thumb, eventually you orient it toward eating everything else instead of letting everybody else use your agent to build software & make money. Go forward ten years, it's highly likely GPT, Gemini, maybe Claude - they'll have consumed a very large amount of the software ecosystem. Why should MS Office exist at all as a separate piece of software? The various pieces of Office will be trivial for the GPT (etc) of ten years out to fully recreate & maintain internally for OpenAI. There's no scenario where they don't do what the platforms always do: eat the ecosystem, anything they can. If a platform can consume a thing that touches it, it will.
Office? Dead. Box? Dead. DropBox? Dead. And so on. They'll move on anything that touches users (from productivity software to storage). You're not going to pay $20-$30 for GPT and then pay for DropBox too, OpenAI will just do an Amazon Prime maneuver and stack more onto what you get to try to kill everyone else.
Google of course has a huge lead on this move already with their various prominent apps.
That may be the plan, but this is also a great way for GDPR's maximum fine, based on global revenue, to bite on SpaceX's much higher revenue. And without any real room for argument.
Starlink and Falcon 9 have been an excellent pairing: Falcon 9's partially reusable rockets created a lot of launch capacity, and Starlink filled the demand. Starship, if it meets its goals, will increase fully reusable launch supply by orders of magnitude, but the demand for all that capacity doesn't exist. Starlink can absorb some of it, but probably not all, so they need to find a customer to fill it in order to build up the volume to eventually colonize Mars.
We can tell because it’s not being treated as a serious goal. 100% of the focus is on the big vroom vroom part that’s really exciting to kids who get particularly excited by things that go vroom, and approximately 0% of the focus is on developing all the less glamorous but equally essential components of a successful Mars mission, like making sure the crew stays healthy.
> It could be a legal dodge. It could be a power grab. What it will not be is a useful source of computing power
It's a way to get cheap capital to get cool tech. (Personal opinion.)
Like dark fibre in the 1990s, there will absolutely–someday–be a need for liquid-droplet radiators [1]. Nobody is funding it today. But if you stick a GPU on one end, maybe they will let you build a space station.
The only way I see this actually working given the resource requirement is delta-v style with in orbit resource extraction using robots. By transferring heat to asteroids in the shade of the solar panels at L1 or something.
This is mistaken. In space a radiator can radiate to cold (2.7K) deep space. A thermos on earth cannot. The temperature difference between the inner and outer walls of the thermos is much lower and it’s the temperature difference which determines the rate of cooling.
Basically you concentrate the heat into a high emissivity high temperature material that’s facing deep space and is shaded. Radiators get dramatically smaller as temperature goes up because radiation scales as T⁴ (Stefan–Boltzmann). There are many cases in space where you need to radiate heat - see Kerbal Space Program
"High emissivity, high temperature" sounds good on paper, but to create that temperature gradient within your spacecraft the way you want costs a lot of energy. What you actually do is add a shit load of surface area to your spacecraft, give that whole thing a coating that improves its emissivity, and try your hardest to minimize the thermal gradient from the heat source (the hot part) throughout the radiator. Emissivity isn't going past 1 in that equation, and you're going to have a very hard time getting your radiator to be hotter than your heat source.
Note that KSP is a game that fictionalizes a lot of things, and sizes of solar panels and radiators are one of those things.
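The T⁴ trade-off being argued about here is easy to sketch. Assuming an ideal one-sided radiator facing deep space and ignoring absorbed sunlight (a best case, not a real design):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k, emissivity=0.9):
    """Ideal one-sided radiator facing deep space; ignores absorbed sunlight."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Area needed to reject 100 kW at a few radiator temperatures:
for t in (300, 400, 600):
    print(f"{t} K: {radiator_area_m2(100_000, t):.1f} m^2")
# 300 K needs ~240 m^2; 600 K needs ~15 m^2. The catch, per the reply
# above: your electronics can't sit colder than the radiator without
# spending extra pump energy to create that gradient.
```

The fourth power really does shrink the radiator dramatically, but only if you can afford (thermodynamically and mechanically) to run it hot.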
AI sovereignty, not AI efficiency. Redesign AI chips with lower power density and higher thermal tolerances and you get more efficient radiation with some sacrifice in compute power. But you are outside the jurisdiction of every country.
Then you get people paying much more money to use less-tightly-moderated space-based AI rather than heavily moderated AI.
What about gamma rays? There is a reason why "space hardened" microcontrollers are MIPS chips from the 90s on massive dies with a huge wedge of metal on them. You can't just take a normal 4 nm die, yeet it into space, and have done with it.
Then there is the downlink. If you want low latency you need to be in low Earth orbit, which means you'll spend >40% of your time in darkness. So not only do you need a massive, space-rated heat exchanger and liquid cooling loop, you also need something like 20 MWh of battery (also cooled/heated, because swinging +/- 140 C every 90 minutes is not going to make batteries happy).
Then there is data consistency: is this inference only? Or are we expecting a mesh network that can do whole-"datacentre" cache coherence? Because I have bad news for you if you're going to try that.
This is widely believed (especially in the US, where, other than the Leaf, most early electric cars never launched), but honestly pretty dubious. The first real electric cars, with significant production:
2010 - Mitsubishi i-MiEV, Nissan Leaf
2011 - Smart electric, Volvo C30 electric, Ford Focus electric, BYD e6.
2012 - Renault Zoe (Renault launched a couple of other vehicles on the same platform ~2010, but they never saw significant production), Tesla Model S (Tesla had a prior car, the Roadster, but it never saw significant production).
2013 - VW eUP, eGolf (VW occasionally put out an electric Golf historically, going back to 1992, but again those were never produced in large quantities).
The big change ~2010 was around the economics of lithium ion batteries; they finally got cheap enough that everyone started pulling their concept designs and small-scale demonstration models into full production.
I think you underappreciate him a bit here. No, he's not a super genius. He's probably not even a good engineer. But he is a) a total a.hole and b) a tremendous bullshitter. There are circumstances in which you need such a person to succeed (see also Steve Jobs). He yelled at people for 10 years straight and was crucial in securing capital to build these very capital-intensive products. A regular smart person would absolutely not have succeeded, for these reasons.
My guess is it’s just another example of his habit of trying to use one of his companies to manufacture demand for another of his companies’ products.
Specifically: Starship makes no economic sense. There simply isn't any pre-existing demand for the kind of heavy-lift capacity and cadence Starship is designed to deliver. Nor is there anyone not currently launching heavy payloads to LEO whose only obstacle is cadence - some use case that demands a whole lot of heavy stuff in space on a tight weekly schedule, as an all-or-nothing proposition.
So nobody else has a reason to buy 50 Starship launches per year. And the planned Starlink satellites are already mostly in orbit. So what do you do? Just sell Starship to xAI, the same way he fixed Cybertruck’s demand problem by selling heaps of them to SpaceX.
If (as seems to be the case) nobody can identify a specific source of latent demand that is large enough to soak up the two order of magnitude increase in the supply of heavy lift launch capacity that Elon wants to deliver, then that strongly suggests that SpaceX does not actually have a business plan for Starship. Or at least, not a business plan that’s been thought through as clearly as a $5 billion (and counting) investment would warrant.
“Defense” is not nearly specific enough to count as an answer. What kind of defense application, specifically, do you have in mind, and why does it need specifically this kind of heavy lift capacity to be viable?
I think Musk is backed into a corner financially. Most of his companies don't have that much revenue and their worth is mostly based on hope.
They might be closer to collapsing than most people think. It's not unheard of for a billionaire's net worth to drop to zero overnight.
I think the merger was mostly for financial reasons; this space datacenter idea was born to justify merging SpaceX and xAI - to give investors hope, not to actually do it.
You're thinking of outer space. At any distance from Earth where space is thin enough that heat dissipation is impossible, light-speed latency will be prohibitive for any workloads going to/from space. There is plenty of altitude above the Kármán line where there is enough atmosphere to dissipate heat. Furthermore, I don't know if they've figured it out, but radiation can dissipate heat - that's how we get heat from the sun. Also, given enough input energy (the sun), active closed-loop cooling systems might be feasible.
But I really hope posts like this don't discourage whoever is investing in this. The problems are solvable, and someone is trying to solve them, that's all that matters. My only concern is the latency, but starlink seems to manage somehow.
Also, a matter of technicality (or so I've heard it said) is that the earth itself doesn't dissipate heat, it transforms or transfers entropy.
> At any distance away from earth where space is so thin that heat dissipation is impossible, then the speed of light will be prohibitive of any workloads to/from space.
Why would they need to get data back to earth for near real time workloads? What we should be thinking about is how these things will operate in space and communicate with each other and whoever else is in space. The Earth is just ancient history
Yes, but you need energy to pump heat, and that has an efficiency maximum (thx ~~Obama~~ Carnot). And radiative cooling scales with the 4th power of temperature, so the radiator has to be really hot - which means spending a lot of energy pumping heat out of the already relatively cool side and using it to heat the other side up to hundreds of degrees hotter.
All in all, the cooling system would likely consume more energy than the compute parts.
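That Carnot ceiling can be made concrete. A minimal sketch, assuming 350 K chips and a deliberately red-hot 800 K radiator (illustrative numbers, not anyone's actual design):

```python
def carnot_cop_cooling(t_cold_k, t_hot_k):
    # Best-case (Carnot) coefficient of performance for a refrigerator
    # moving heat from t_cold_k up to t_hot_k; real hardware does worse.
    return t_cold_k / (t_hot_k - t_cold_k)

chip_heat_w = 6_400            # illustrative 8-GPU server load
t_cold, t_hot = 350.0, 800.0   # ~77 C chips, red-hot radiator

cop = carnot_cop_cooling(t_cold, t_hot)   # ~0.78
pump_w = chip_heat_w / cop                # electrical input to the pump
reject_w = chip_heat_w + pump_w           # total the radiator must shed
print(f"pump: {pump_w:.0f} W, radiator load: {reject_w:.0f} W")
```

Even at the Carnot limit, pumping 6.4 kW of chip heat up to an 800 K radiator costs over 8 kW of electricity - the cooling system draws more power than the computers, exactly as the parent suggests.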
Yes, it is how sats currently handle this. It's actually very effective too, scaling with the fourth power of temperature: P = εσAT⁴ (the Stefan-Boltzmann law).
But it requires a lot of weight (cooling fluid), a lot of materials science (you don't want to burn out the radiator), and a lot of moving parts (sun shutters if your orbit ever faces the sun - a radiator absorbs as well as emits).
So that sounds all well and good (wow! fourth-power scaling!) but it's still insanely expensive, and if your radiator solution fucks up in any way (in that famously easy-to-service environment, space), your entire investment is toast.
now i havent run the math on cost or what elon thinks the cost is, but my extremely favorable back of hand math suggests he's full of it
Be careful with the math there. While a 4th power is awesome, you've got the Stefan-Boltzmann constant to consider, and that's on the order of 10^-8.
Radiative power is really efficient for hot things but not so great when you're trying to keep things down to normal levels. Efficient for shedding heat from a sun but not so much for keeping a cpu from overheating...
You can. This is how it is currently done, but it is not easy. It needs a large enough surface area to radiate the heat, and also has to be shielded from the sun (so as not to collect extra heat). For a data centre, think of a heat-exchange panel of at least 1000 m² (likely more to train a frontier model).
You definitely _can_ - the question is whether you can do it at scale for a reasonable amount of money. There are a few techniques for this, but at the end of the day you need to radiate the heat away, otherwise it will just keep accumulating. You cannot keep pumping energy into a satellite without dissipating the same amount back out.
You can, but the heat needs to go somewhere, and now you're back to square one, with "how do I get rid of all this heat". Earth refrigerators have a large heat exchanger on the back for this purpose. In fact now you need to get rid of both of the heat your compute generates and the energy your refrigerator pump uses - an example people often give is that a fridge with an open door actually heats the room, as it spends energy on moving heat around pointlessly.
He goes on about putting a mass driver on the moon for ultra-low-cost space launches.
His plan here clearly hinges around using robots to create a fully-automated GPU manufacturing and launch facility on the moon. Not launching any meaningful number from earth.
Raises some big questions about whether there are actually sufficient materials for GPU manufacture on the moon... But, whatever the case, the current pitch of earth-launches that the people involved with this "space datacenter" thing are making is a lie. I think it just sounds better than outright saying "we're going to build a self-replicating robot factory on the moon", and we are in the age of lying.
If any single country tried to create a whole production chain to single-handedly manufacture modern computer equipment it would be on the order of decades to see any result. Doing it on the moon is just not realistic this century, maybe the next one. Although i don't think the economics would ever work out.
However, TFA's purpose in assuming cooling (and other difficulties) have been worked out (even though they most definitely have not) was to talk about other things that make orbital datacenters in space economically dubious. As mentioned:
But even if we stipulate that radiation, cooling, latency, and launch costs are all solved, other fundamental issues still make orbital data centers, at least as SpaceX understands them, a complete fantasy. Three in particular come to mind:
The materialist take is that his plan is to eventually over-value and then trade on his company valuations, and also have another merger lined up for future personal financial bailouts.
For example: quite apart from anything else, how much rocket fuel is it going to take to haul all this shit up there at the kind of scale that would make these space data centres even remotely worthwhile?
I'm not against space travel or space exploration, or putting useful satellites in orbit, or the advancement of science or anything like that - quite the opposite in fact, I love all this stuff. But it has to be for something that matters.
Not for some deranged billionaire's boondoggle that makes no sense. I am so inexpressibly tired of all these guys and their stupid, arrogant, high-handed schemes.
Because rocket fuels are extremely toxic and the environmental impact of pointlessly burning a vast quantity of rocket fuel for something as nonsensical as data centres in space will be appalling.
Starship is fueled with methane (natural gas) and liquid oxygen which aren't toxic. It does produce a lot of CO2 which is a problem with lots of flights.
The equation has temperature to the 4th power. Raise the temperature of your radiator by ~50 degrees (starting from around freezing) and you double its emission capacity. This is well within the range of specialised phase-change compressors, aka fancy air-conditioning pumps.
Next up in the equation is surface emissivity, where we've got a lot of experience from the automotive sector.
And finally surface area, once again, getting quite good here with nanotechnology.
Yes he’s distracting, no it’s not as impossible as many people think.
Raise the temperature of your radiator by 50 degrees and you double its emission capacity. Or put your radiator in the atmosphere and multiply its heat exchange capacity by a factor of a thousand.
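Worth noting that the "+50 degrees doubles it" rule only holds near one particular temperature; solving (T+50)⁴ = 2T⁴ shows where:

```python
# At what radiator temperature does adding 50 K double the emitted power?
# (T + 50)^4 = 2 * T^4  =>  T = 50 / (2**0.25 - 1)
t = 50 / (2 ** 0.25 - 1)
print(f"{t:.1f} K")  # ~264 K, i.e. about -9 C

# At an already-warm 350 K radiator the same +50 K buys less:
gain = (400 / 350) ** 4
print(f"{gain:.2f}x")  # ~1.71x, not 2x
```

So the doubling claim is roughly right for a radiator near freezing, and progressively less generous as the radiator runs hotter.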
It's not physically impossible. Of course not. It's been done thousands of times already. But it doesn't make any economic sense. It's like putting a McDonald's at the top of Everest. Is it possible? Of course. Is it worth the enormous difficulty and expense to put one there? Not even a little.
For thousands of years we never even looked to Mount Everest, then some bloke on the fiver said he’d give it a shot. Nowadays anyone with the cash and commitment can get the job done.
Same with datacenters in space, not today, but in 1000 years definitely, 100 surely, 10?
As for the economics, it makes about as much sense as running jet engines at full tilt to power them.
> some bloke on the fiver said he’d give it a shot
Hillary (he features on the NZ Five Dollar note) was one of those guys who does things for no good reason. He also went to both poles. This only tells us that it is indeed possible, but not that it's desirable or will become routine.
Even if you create a material with surface emissivity of 1.0:
- let's say 8x 800W GPUs and neglect the CPU, that's 6400W
- let's further assume the PSU is 100% efficient
- let's also assume that you allow the server hardware to run at 77 degrees C, or 350K, which is already pretty hot for modern datacenter chips.
Your radiator would need to dissipate those 6400W, requiring it to be almost 8 square meters in size. That's a lot of launch mass. Adding 50 degrees will reduce your required area to only about 4.4 square meters with the consequence that chip temps will rise by 50 degrees also, putting them at 127 degrees C.
No CPU I'm aware of can run at those temps for very long, and most modern chips will start to self-throttle above about 100 °C.
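Those figures check out against the Stefan-Boltzmann law (assuming emissivity 1, a one-sided radiator, and no absorbed sunlight, as the comment stipulates):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def ideal_area_m2(power_w, temp_k):
    # Perfect black-body radiator; no incoming sunlight or Earthshine.
    return power_w / (SIGMA * temp_k ** 4)

print(f"{ideal_area_m2(6400, 350):.1f} m^2")  # ~7.5 m^2 at 77 C
print(f"{ideal_area_m2(6400, 400):.1f} m^2")  # ~4.4 m^2 at 127 C
```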
Yes, that’s what we’re talking about. Data centers in space.
You put the cold side of the phase-change loop on the internal cooling loop, step the external loop up to as high a temperature as you can, and then circulate that through the radiators. You might even do this step-up more than once.
Imagine the data center like a box, you want it to be cold inside, and there’s a compressor, you use to transfer heat from inside to outside, the outside gets hot, inside cold. You then put a radiator on the back of the box and radiate the heat to the darkness of space.
This is all very dependent on the biggest and cheapest rockets in the world but it’s a tradeoff of convenience and serviceability for unlimited free energy.
Sure and it all routes to dump the heat to...where again? A vacuum? Or to a radiator with a fan with some kind of cooler fluid/gas from the environment constantly flowing through it?
The other two methods of heat transfer apart from radiation are conduction (through “touch”, adjacent molecules, eg from the outside of a chicken on the BBQ to the inside) and convection (through movement, eg cold air or water flowing past).
Not going to read the article, because Data centers in space = DOA is common sense to me, however, did the article really claim cooling wasn't an issue? Do they not understand the laws of thermodynamics, physics, etc?
Sure, space is cold. Good luck cooling your gear with a vacuum.
Don't even get me started on radiation, or even lack of gravity when it comes to trying to run high powered compute in space. If you think you are just going to plop a 1-4U server up there designed for use on earth, you are going to have some very interesting problems pop up. Anything not hardened for space is going to have a very high error/failure rate, and that includes anything socketed...
> Not going to read the article, because Data centers in space = DOA is common sense to me, however, did the article really claim cooling wasn't an issue?
No. Nearly everyone that talks about data centers in space talks about cooling. The point of this article was to talk about other problems that would remain even if the most commonly talked about problems were solved.
It says:
> But even if we stipulate that radiation, cooling, latency, and launch costs are all solved, other fundamental issues still make orbital data centers, at least as SpaceX understands them, a complete fantasy.
Not disagreeing with you at all: that physics fact always comes up. My honest question is: if it's a perfect thermos, what does, for example, the ISS do with the heat generated by computers and humans burning calories? Is the ISS equipped with a mechanism to radiate excess heat into space? Or is the ISS slowly heating up, and it's just not a problem?
Massive radiators. In this photo[0], all of the light gray panels are thermal radiators. Note how they are nearly as large as the solar panels, which gives you an idea about the scale needed to radiate away 3-12 people's worth of heat (~1200 watts) + the heat generated by equipment.
The ISS has giant radiators[1]. They are necessary for just the modest heat generated on the ISS, and should give an idea of what a satellite full of GPUs might require...
I think it's actually the other way around, satellites need to be specifically designed to burn up fast in the atmosphere. See for example the warnings about space debris from Chinese satellites not designed with this in mind.
I want to nitpick you here but a thermos is specifically good at insulating because not only does it have a vacuum gap, it's also got two layers of metal (inner and outer) to absorb and reflect thermal radiation.
That specific aspect is NOT true in space because there's nothing stopping thermal radiation.
Now you're correct that you can't remove heat by conduction or convection in space, but it's not that hard to radiate away energy in space. In fact rocket engine nozzle extensions of rocket upper stages depend on thermal radiation to avoid melting. They glow cherry red and emit a lot of energy.
By the Stefan–Boltzmann law, thermal radiation goes up with temperature to the 4th power. If you use a coolant that lets your radiator glow, you can radiate heat away very efficiently. This is hard to do on Earth, both because a glowing radiator is dangerous and because at those temperatures the radiator would react with our corrosive oxygen atmosphere.
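That fourth-power scaling is easy to see numerically. A minimal sketch (the emissivity value is my own assumption, not a figure from the thread):

```python
# Radiated power per square meter at a few radiator temperatures,
# using the Stefan-Boltzmann law P/A = eps * sigma * T^4.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
EPSILON = 0.9           # assumed emissivity of a good radiator coating

def radiated_flux(temp_k: float) -> float:
    """Radiated power per unit area (W/m^2) of a gray body at temp_k."""
    return EPSILON * SIGMA * temp_k ** 4

for label, t in [("room temp (300 K)", 300.0),
                 ("hot radiator (600 K)", 600.0),
                 ("cherry red (1100 K)", 1100.0)]:
    print(f"{label}: {radiated_flux(t):,.0f} W/m^2")
```

Doubling the temperature buys a 16x increase in rejected power per unit area, which is why a glowing radiator is so attractive in a vacuum.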
Even without making them super hot, there's already significant power density on SpaceX's satellites. They're at around 75 kW of power generation that needs to be radiated away.
And on your final statement: Hyperloop was not used as a "distraction," since he never even funded it. He had been talking about it for years and years until fanboys on Twitter finally talked him into releasing that hastily put-together white paper. The various hyperloop companies out there never had any investment from him.
> a thermos is specifically good at insulating because not only does it have a vacuum gap, it's also got two layers of metal (inner and outer) to absorb and reflect thermal radiation.
It is well known that Musk's primary reason for pushing Hyperloop was that he didn't want California to build high-speed rail:
> Musk admitted to his biographer Ashlee Vance that Hyperloop was all about trying to get legislators to cancel plans for high-speed rail in California—even though he had no plans to build it.
I think people underestimate how quickly heat radiates to space. A rock in orbit around Earth will experience about 125C (257F) on the side facing the Sun, and -173C (-280F) on the other side. The ability to rotate an insulating shield toward the sun means you're always radiating.
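For what it's worth, that hot-side figure is close to what a Stefan–Boltzmann balance predicts for a surface facing the Sun at 1 AU. A sketch (the albedo is my own rough lunar-like assumption):

```python
# Equilibrium temperature of a sunlit surface at 1 AU, where absorbed
# sunlight balances emitted thermal IR: T = [S * (1 - a) / sigma]^(1/4).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_FLUX = 1361.0     # W/m^2 at 1 AU

def subsolar_temp_k(albedo: float) -> float:
    """Equilibrium temperature of a surface facing the Sun head-on."""
    return (SOLAR_FLUX * (1 - albedo) / SIGMA) ** 0.25

t = subsolar_temp_k(0.11)  # albedo ~0.11 is an assumed, roughly lunar value
print(f"{t:.0f} K = {t - 273.15:.0f} C")  # ~382 K, ~109 C
```

That's the same ballpark as the figure quoted above; real lunar noon runs a bit hotter since regolith emissivity is below 1.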
I think you may be overestimating how quickly this happens and underestimating how much surface area that rock has. Given no atmosphere, the fact that a rock with 1/4 the radius of Earth sustains a temperature differential of only ~300C between its hot and cold sides shows there's not a lot of radiation happening.
In deep space (no incident power) you need roughly 2000 sq meters of surface area per megawatt if you want to keep it at 40C. That would mean your 100 MW deep space datacenter (a small datacenter by AI standards) needs 200000 sq meters of surface area to dissipate your heat. That is a flat panel that has a side length of 300 meters (you radiate on both sides).
Unfortunately, you also need to get that power from the sun, and that will take a square with a 500 meter side length. That solar panel is only about 30% efficient, so it needs a heatsink for the 70% of incident power that becomes heat. That heatsink is another radiator. It turns out, we need to radiate a total of ~350 MW of heat to compute with 100 MW, giving a total heatsink side length of a bit under 600 meters.
All in, separate from the computers and assuming no losses from there, you need a 500x500 meter solar panel and a 600x600 meter radiator just for power and heat management on a relatively small compute cluster.
This sounds small compared to things built on Earth, but it's huge compared to anything that has been sent to space before. The ISS is about 100 meters across and about 30 meters wide for comparison.
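Those figures check out on the back of an envelope. A sketch using the same assumptions (two-sided flat radiator held at 40C, ~1361 W/m^2 solar flux at 1 AU, 30%-efficient panels; the emissivity value is my own assumption):

```python
# Sizing a square solar array and a square two-sided radiator for a
# 100 MW compute load in deep space, per the figures discussed above.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
EPSILON = 0.9            # assumed radiator emissivity
SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU
PANEL_EFF = 0.30         # solar panel efficiency

def radiator_side_m(heat_w: float, temp_k: float = 313.0) -> float:
    """Side length of a square radiator rejecting heat_w from both faces."""
    flux = EPSILON * SIGMA * temp_k ** 4   # W/m^2 per face
    return (heat_w / (2 * flux)) ** 0.5

def solar_side_m(electric_w: float) -> float:
    """Side length of a square solar array producing electric_w watts."""
    return (electric_w / (SOLAR_FLUX * PANEL_EFF)) ** 0.5

compute_w = 100e6
incident_w = compute_w / PANEL_EFF                       # sunlight collected
waste_heat_w = incident_w * (1 - PANEL_EFF) + compute_w  # panel losses + compute
print(f"solar array side: {solar_side_m(compute_w):.0f} m")     # ~495 m
print(f"radiator side: {radiator_side_m(waste_heat_w):.0f} m")  # ~583 m
```

Every watt of collected sunlight eventually becomes heat somewhere, so the radiator ends up sized for the full ~333 MW, matching the "a bit under 600 meters" above.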
What do you think about droplet radiators? E.g. using a ferrofluid with magnetic containment for capture and enough spare on board to last five years of loss due to occasional splashes?
Second, are you saying that we basically need to have a radiator as big (approximately) as the solar panels?
That is a lot, but it does sound manageable, in the sense that it approximately doubles what we require anyway for power.
So, not saying that it’s easy or feasible, but saying that cooling then seems “just” as difficult as power, not insurmountably more difficult. (Note that the article lists cooling, radiation, latency, and launch costs as known hard problems, but not power.)
Of course it's working. We've had computers operating in space for decades. There's no doubt it can be done.
The question isn't whether it's possible, the question is why you'd do it just for data centers. We put computers in space because they're needed to do things that can only be done from there. Data centers work just fine on the ground. What's so great about data centers in space that makes them worth the immense cost and difficulty?
I know a lot of prominent people are talking about this. I do not understand it. pg says "when you look at the tradeoffs" well what exactly is he looking at? Because when I look at the tradeoffs, the whole concept makes no damned sense. Sure, you can put a bunch of GPUs in space. But why would you do that when you can put them in a building for orders of magnitude less money?
I liked one comment someone made: if it's just about dodging regulation, then put the data centers on container ships. At any given time, there are thousands of them sailing in international waters, and I'm sure their operators would love to gain that business.
That being said, space would be a good place to move heat around with Peltier elements. A lot of the criticisms revolve around the substantial amount of coolant plumbing that will be needed, but that may not necessarily be what SpaceX has in mind.
There should be some temperature where incoming radiation (sunlight) balances outgoing radiation (thermal IR). As long as you're ok with whatever that temperature is at our distance from the sun, I'd think the only real issue would be making sure your satellite has enough thermal conductivity.
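That equilibrium is straightforward to estimate. A sketch for a fast-rotating sphere at 1 AU (the albedo is my own assumed value):

```python
# Equilibrium temperature where absorbed sunlight balances emitted IR:
# a sphere absorbs over its cross-section (pi r^2) but radiates over its
# whole surface (4 pi r^2), hence the factor of 4 in the denominator.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_FLUX = 1361.0     # W/m^2 at 1 AU
ALBEDO = 0.3            # assumed Earth-like reflectivity

def equilibrium_temp_k(solar_flux: float, albedo: float) -> float:
    """Blackbody equilibrium temperature of a rotating sphere."""
    return (solar_flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

t = equilibrium_temp_k(SOLAR_FLUX, ALBEDO)
print(f"equilibrium: {t:.0f} K ({t - 273.15:.0f} C)")  # ~255 K, about -19 C
```

A passive satellite settles near this; the whole problem is that a data center adds hundreds of megawatts of internal dissipation on top of the solar input, so it has to run much hotter or grow much larger radiating area.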