I did a lot of playing around with Stable Diffusion when it first dropped, with scripts generating batches overnight, some 12+ hour runs. My M1 Pro MacBook averages roughly 60-70 watts while doing SD inference, and the inference speed is about 0.9 iterations/s (1.1 s/it). Later I got SD running on a workstation of mine, an HP Z840 with dual Xeons and an RTX 3090. The 3090 eats 350 watts on its own, and the rest of the system draws another 250 watts or so, for a total of about 600 W. The generation speed (same Euler sampler) is about 9 iterations/s.
So: almost exactly 10x faster than my MacBook, while using 10x the power. Using larger batches to take advantage of the 3090's extra VRAM can push throughput even higher.
But I find it interesting that the two machines end up nearly identical in images per dollar of electricity.
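To make that back-of-the-envelope math concrete, here's a quick Python sketch using the wattage and iteration speeds above. The 50-step count and the $0.15/kWh electricity price are placeholder assumptions on my part, not measurements; plug in your own.

```python
# Rough images-per-kWh / images-per-dollar comparison.
# Wattage and it/s are the numbers from the post; step count and
# electricity price are assumptions.

STEPS_PER_IMAGE = 50     # assumed Euler step count per image
PRICE_PER_KWH = 0.15     # assumed electricity cost, USD per kWh

systems = {
    "M1 Pro MacBook":  {"watts": 65,  "it_per_s": 0.9},
    "Z840 + RTX 3090": {"watts": 600, "it_per_s": 9.0},
}

for name, s in systems.items():
    seconds_per_image = STEPS_PER_IMAGE / s["it_per_s"]
    wh_per_image = s["watts"] * seconds_per_image / 3600   # watt-hours per image
    images_per_kwh = 1000 / wh_per_image
    images_per_dollar = images_per_kwh / PRICE_PER_KWH
    print(f"{name}: {images_per_kwh:.0f} images/kWh, "
          f"{images_per_dollar:.0f} images/$ at ${PRICE_PER_KWH}/kWh")
```

With these assumptions both machines land around 1,000 images per kWh, which is the point: the 10x speed and 10x power cancel out almost exactly.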