Hacker News | scoopdewoop's comments

That is immediate-mode graphics. Fine when you are already power-budgeted for 60 frames per second. UIs typically use retained-mode graphics, with persistent regions.
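The distinction can be sketched in a few lines. The drawing-context and item APIs below are entirely made up for illustration, not any real toolkit:

```python
# Illustrative sketch only; the drawing context and widget APIs are invented.

class Item:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def bounds(self):
        return (self.x, self.y, self.w, self.h)

# Immediate mode: redraw the whole scene every frame, whether or not
# anything changed -- simple, but you pay the full render cost at 60 fps.
def render_immediate(ctx, items):
    ctx.clear()
    for item in items:
        ctx.fill_rect(*item.bounds())

# Retained mode: keep the scene, track invalidated ("dirty") regions,
# and repaint only those. An idle UI costs essentially nothing.
class RetainedScene:
    def __init__(self, items):
        self.items = items
        self.dirty = set()

    def move(self, item, x, y):
        self.dirty.add(item.bounds())  # old position must be repainted
        item.x, item.y = x, y
        self.dirty.add(item.bounds())  # new position too

    def paint(self, ctx):
        for region in self.dirty:      # nothing dirty -> no work at all
            ctx.repaint(region)
        self.dirty.clear()
```

The point is the second class only does work proportional to what changed, which is why a retained-mode UI can idle at near-zero power.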

Dwarkesh was ready to whitewash Elon the day after his Epstein emails came out. None of them should be taken seriously.


More than that, the questions about space-based solar vs. land solar for data center calculations seemed hollow, as they are easily verifiable. He let Elon get away with "this admin does not like solar" as an answer instead of asking what he is doing to convince them otherwise.


Link please!


The day after the emails came out he posted a video where they had beers while Elon LARPed as a human


Link please!


Elon Musk – "In 36 months, the cheapest place to put AI will be space": https://www.youtube.com/watch?v=BYXbuik3dgA


Thanks for the link because I can't stomach 3 hours of this.

First phrase: "you're saving on energy by putting data centers in space". What?

2:08 "It's harder to scale on the ground than it is in space" what?


The argument is that permitting and weatherproofing are harder than lifting, at certain values of scale for each. We’re not there right now. But if Starship pans out we’re at least damn close, particularly if solar-panel fabrication can be done from out-of-well silicates.


You don't buy any of this right?

Didn't Starship explode like 10 times by now? But in 30 months they'll be launching 1 per hour? What?


> You don't buy any of this right?

I actually do. The math is more strained than anything at present. But a lot of people are rejecting it out of hand without doing any back-of-the-envelope work. Truth is, barring a seismic shift in how we permit data centers on the ground, it takes only a within-the-envelope decrease in launch costs to make space-based data centers profitable. Which is then just a cheat code for building a Dyson sphere.

> Didn't Starship explode like 10 times by now?

They all explode all the time. Starship has also been consistently improving its suborbital flight characteristics. I don’t see a good argument for a fundamental design fuckup in the data we have.

> But in 30 months they'll be launching 1 per hour?

This is nonsense. But within ten years? I think so. At least, we don’t have a good reason to reject that with current data. And that would make the cost equation flip to favoring space-based infrastructure. Which, honestly, is not the answer I expected. (I’ve done aerospace stuff for a while. Most of the back-of-the-envelope math fails. It failed for space-based solar power. It failed for asteroid mining. And it currently fails for space-based data centers. But let launch costs dip a bit, or permitting delays and risks rise a bit, and the equation balances sooner than one would think.)


Changing permits sounds to me a lot easier than building anything in space. What has ever been built in space? The ISS, that's it.

Alright, show me the back of the envelope maths.


> Changing permits sounds to me a lot easier than building anything in space

Having done a little bit of both, the latter around data centers, I’ll say they’re different kinds of hard.

> Alright, show me

Fair question, but no; I’m still refining my math and making bets on this. I’ll start working on an HN comment in a few weeks and try to remember to post it back to this thread.

My basic argument is to pin down current data center costs, pin down lifted costs, and then work out what cost/kg you need to balance the two. Hint: approval time and interest rates are meaningful variables.
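For the shape of it, here's a sketch of that break-even calculation. Every input number below is a placeholder, not a claim:

```python
# Back-of-the-envelope sketch: at what launch cost ($/kg) does a
# space-based data center break even with a ground build, once you
# charge the ground build for capital tied up during permitting?
# All inputs are placeholders for illustration, not real figures.

def breakeven_cost_per_kg(ground_capex_per_mw,  # $ to build 1 MW on the ground
                          permit_delay_years,   # approval time before revenue
                          interest_rate,        # cost of capital, annual
                          space_capex_per_mw,   # $ for the orbital hardware itself
                          mass_per_mw_kg):      # kg lifted per MW of capacity
    # Carrying cost of capital while the ground build waits on permits.
    ground_total = ground_capex_per_mw * (1 + interest_rate) ** permit_delay_years
    # Space wins when hardware + launch <= the loaded ground total,
    # so the remainder is the launch budget, spread over the lifted mass.
    launch_budget = ground_total - space_capex_per_mw
    return launch_budget / mass_per_mw_kg

# With these made-up inputs, the break-even is a few hundred $/kg --
# and it rises with permitting delay and interest rates, which is the point.
print(breakeven_cost_per_kg(10e6, 3, 0.08, 8e6, 20_000))
```

Note how sensitive the output is to `permit_delay_years` and `interest_rate`: lengthen the delay or raise rates and the launch budget grows without anything in space getting cheaper.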


> I’ll start working on an HN comment in a few weeks and try to remember to post it back to this thread

IIRC HN threads automatically close, due to inactivity and/or time since the original post. I wasn’t able to find a thread with the comments still open from 16 days ago, let alone a “few weeks”, but in good faith I’m assuming that you already know that, and aren’t using it as an out to avoid replying - not that anyone is “owed” a reply by you, or by anyone.

This is all to say: I appreciate the thread as a bystander, and would eagerly await your reply if and when it arrives before this post’s comment section closes.


You'll find anti-capitalists are anti-capitalists whether the number is red or green.


Prompting LLMs for code simply takes more than a couple of weeks to learn.

It takes time to get an intuition for the kinds of problems they've seen in pre-training, the environments they faced in RL, and the bizarre biases and blind spots they have. Learning to google was hard, learning to use other people's libraries was hard, and this is on par with those skills at least.

If there is a well-known design pattern you know, that's a great thing to call out. Knowing what to add to the context takes time and taste. If you are asking for pieces so large that you can't trust them, ask for smaller pieces and their composition. It's a force multiplier, and your taste for abstractions as a programmer is one of the factors.

In early Usenet/forum days, the XY problem described users asking for implementation details of their X solution to problem Y, rather than asking how to solve Y. In LLM prompting, people fall into the opposite trap: they have an X implementation they want to see, and rather than ask for it, they describe the Y problem and expect the LLM to arrive at the same X solution. Just ask for the implementation you want.

Asking bots to ask bots seems to be another skill as well.


Let me clarify: I've been using the latest models for the last two weeks, but I've been using AI for about a year now. I know how to prompt. I don't know why people think it's an amazing skill; it's not much different from writing a good ticket.


Writing a good ticket is not a common skill. IMO it seems deceptively easy but usually requires years of experience to understand what to include and express it in the most concise yet unambiguous terms possible for the intended audience.


Not quite. They have 128 GB of RAM that can be allocated in the BIOS, up to 96 GB to the GPU.


You don't have to statically allocate the VRAM in the BIOS. It can be dynamically allocated. Jeff Geerling found you can reliably use up to 108 GB [1].

[1]: https://www.jeffgeerling.com/blog/2025/increasing-vram-alloc...


Allocation is irrelevant. As an owner of one of these, you can absolutely use the full 128 GB (minus OS overhead) for inference workloads.
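As a rough sanity check of what "full 128 GB minus OS overhead" buys you, here's a tiny fit-check sketch; the 8 GB overhead figure is an assumption, not a measurement:

```python
# Will a model quant plus KV cache fit in unified memory?
# The 8 GB OS overhead below is an assumed placeholder, not measured.

def fits_in_memory(model_gb: float, kv_cache_gb: float,
                   total_ram_gb: float = 128.0,
                   os_overhead_gb: float = 8.0) -> bool:
    return model_gb + kv_cache_gb <= total_ram_gb - os_overhead_gb

# A ~44 GB quant plus a generous KV cache fits with headroom to spare;
# a ~110 GB quant plus cache would not.
print(fits_in_memory(43.7, 16.0))
print(fits_in_memory(110.0, 15.0))
```

The practical takeaway: on a unified-memory box the binding constraint is total RAM minus whatever the OS keeps, not any BIOS carve-out.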


Care to go into a bit more on machine specs? I am interested in picking up a rig to do some LLM stuff and not sure where to get started. I also just need a new machine, mine is 8y-o (with some gaming gpu upgrades) at this point and It's That Time Again. No biggie tho, just curious what a good modern machine might look like.


Those Ryzen AI Max+ 395 systems are all more or less the same. For inference you want the one with 128 GB of soldered RAM. There are ones from Framework, GMKtec, Minisforum, etc. GMKtec used to be the cheapest, but with the rising RAM prices it's Framework now, I think. You can't really upgrade/configure them. For benchmarks look into r/LocalLLaMA - there are plenty.


Minisforum and GMKtec also have Ryzen AI 9 HX 370 mini PCs with up to 128 GB (2x64 GB) of LPDDR5. It's dirt cheap; you can get one barebone for ~€750 on Amazon (the 395 similarly retails for ~€1k). It should be fully supported in Ubuntu 25.04 or 25.10 with ROCm for iGPU inference (the NPU isn't available ATM, AFAIK), which is what I'd use it for. But I just don't know how the HX 370 compares to e.g. the 395, iGPU-wise. I was thinking of getting one to run Lemonade and Qwen3-Coder-Next FP8, BTW... but I don't know how much RAM I should equip it with - shouldn't 96 GB be enough? Suggestions welcome!


I benchmarked unsloth/Qwen3-Coder-Next-GGUF using the MXFP4_MOE (43.7 GB) quantization on my Ryzen AI Max+ 395 and I got ~30 tps. According to [1] and [2], the AI Max+ 395 is 2.4x faster than the AI 9 HX 370 (laptop edition). Taking all that into account, the AI 9 HX 370 should get ~13 tps on this model. Make of that what you will.

[1]: https://community.frame.work/t/ai-9-hx-370-vs-ai-max-395/736...

[2]: https://community.frame.work/t/tracking-will-the-ai-max-395-...
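The estimate above is just a linear scaling from the one measured number, which is worth stating explicitly since it assumes throughput scales with the benchmark ratio:

```python
# Rough scaling estimate: the AI Max+ 395 measured ~30 tps on this quant,
# and the linked benchmarks put it at roughly 2.4x the AI 9 HX 370.
# Assumes tps scales linearly with that ratio, which is only approximate.

measured_tps_395 = 30.0
speedup_395_over_370 = 2.4

estimated_tps_370 = measured_tps_395 / speedup_395_over_370
print(round(estimated_tps_370, 1))  # ~12.5, i.e. the ~13 tps figure above
```

In practice the HX 370's narrower memory bus means the real number could land below even this estimate, since these workloads are bandwidth-bound.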


Thanks! I'm... unimpressed.


The Ryzen 370 lacks quad-channel RAM. Stay away.


The Ryzen AI HX 370 is not what you want; you need a Strix Halo APU with unified memory.


maxed out Framework Desktop


I had no idea games were already compelling players to use these features. The user-controlled client is rapidly going away.


You'll own nothing and be happy


Bluefin, Aurora, and Bazzite are taking over my home.

I've been using desktop linux since before ubuntu, and I have never had so much confidence in my linux rigs. They are dependable, which is refreshing after boot-breaking updates have ruined my setups before.


I was blown away when I realized I could stream mjpeg from a raspberry pi camera with lower latency and less ceremony than everything I tried with webrtc and similar approaches.
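The low ceremony comes down to MJPEG-over-HTTP being a single `multipart/x-mixed-replace` response that keeps pushing JPEG frames. A minimal sketch using only the Python standard library; `get_frame` is a stand-in for whatever camera capture you use (picamera2, OpenCV, ...):

```python
# Minimal MJPEG-over-HTTP sketch. The browser renders the stream as a
# plain <img> tag -- no WebRTC signaling, no codecs, no ceremony.

from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


def multipart_chunk(frame: bytes) -> bytes:
    """Wrap one JPEG frame for a multipart/x-mixed-replace stream."""
    return (b"--frame\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(frame)).encode() + b"\r\n\r\n"
            + frame + b"\r\n")


class MJPEGHandler(BaseHTTPRequestHandler):
    # Stand-in for the real camera capture; swap in picamera2/OpenCV here.
    get_frame = staticmethod(lambda: b"")

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        try:
            while True:  # push frames until the client goes away
                self.wfile.write(multipart_chunk(self.get_frame()))
        except (BrokenPipeError, ConnectionResetError):
            pass  # browser closed the stream


# Usage (hypothetical):
#   ThreadingHTTPServer(("", 8080), MJPEGHandler).serve_forever()
```

Latency stays low because each JPEG is pushed as soon as it's captured, with no encoder buffering or negotiation in between.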


> programming shifted from corporate mainframe work to the community builders

> which is good

but then:

> Our field deserves better than a zoo of random nouns masquerading as professional nomenclature

Okay? So is this professional nomenclature or the work of community builders?

I think: everyone should code, it should not be an elitist profession, we don't need to all accommodate busy professionals, and I'm fine with corporate users having to say my stupid package name at work.

> Your fun has externalities. Every person who encounters your “fun” name pays a small tax. Across the industry, these taxes compound into significant waste

Someone please get this guy a bong rip.


It’s all fun and games until you have to be like “blastoise deleted our database backups”


Well if they used coq, maybe it would not have happened!

