Energy use should be a concern, but it’s important to understand the magnitude of the problems and not simply conflate crypto and AI. To compare the technologies against each other:
Bitcoin: 145B = 145,000M kWh/year
ChatGPT: 0.5M kWh/day * 365 = ~182M kWh/year
Based on the numbers from the article, ChatGPT is using 3 orders of magnitude less electricity for something that provides high utility for vastly more people.
There are of course many other uses of AI aside from ChatGPT and more cryptocurrencies aside from BTC, but these are very different power consumptions.
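As a rough sanity check on that ratio, here's the arithmetic spelled out (a minimal sketch; the 145B kWh/year and 0.5M kWh/day figures are just the article's numbers as quoted above):

    # Back-of-envelope comparison using the figures quoted above.
    btc_kwh_per_year = 145e9            # ~145 billion kWh/year for Bitcoin mining
    chatgpt_kwh_per_year = 0.5e6 * 365  # ~0.5 million kWh/day -> ~1.8e8 kWh/year

    ratio = btc_kwh_per_year / chatgpt_kwh_per_year
    print(f"Bitcoin uses ~{ratio:.0f}x the electricity of ChatGPT")  # ~795x, i.e. roughly 3 orders of magnitude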
It's important to understand that most crypto doesn't even use proof of work. Out of the top ten, only two are mined, and combined they have less actual usage than the other eight.
It’s important to not conflate all crypto with BTC. Ethereum moved to Proof of Stake and uses drastically less energy now.
>Ethereum will consume 6.56 GWh of electricity annually. To put that into perspective, the annual electricity consumption of the Eiffel Tower is 6.70 GWh
There is no need to hate all crypto because you hate the energy consumption. It’s as easy as people buying and using coin #2 instead of #1 and the problem all goes away. The only reason Bitcoin continues to be mined is that it is profitable. If the coin value goes down, mining goes down.
Prices don't reflect utility; they reflect supply and demand. A high price can mean high demand, or low supply. Cryptocurrencies are designed to artificially constrain supply specifically to maintain a stable price, because they want to be a currency. AI works in the opposite manner -- it's a race to drive demand by providing more utility or lower prices.
It all highly depends on what you're doing. For image generation, AI is quite cheap.
Here's an estimation: Video card uses 200W for around 8-10 seconds to generate an image. Let's say 10. That's 6 images/minute, 360 images/hour, 1800 images per kWh.
At 0.20 Euro/kWh in Europe (so not really cheap), an image costs 0.00011 Euro to generate.
Let's say we're further doing a boring brute force approach of choosing the best image out of 100. This puts our costs at 1 cent per final image.
This IMO compares extremely favorably to almost all alternatives. Drawing by hand for hours, using paints or inks, using good paper, etc, will easily cost more than 1 cent per picture.
And AI can be done purely on renewables, while inks, paints and paper have ecological costs.
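For anyone who wants to poke at those assumptions (200 W draw, 10 s per image, 0.20 Euro/kWh, best-of-100 selection), here's the same arithmetic as a small sketch:

    # Rough per-image cost of local generation, using the assumptions above.
    gpu_watts = 200              # assumed draw while generating
    seconds_per_image = 10       # assumed generation time
    eur_per_kwh = 0.20           # assumed European electricity price
    candidates_per_final = 100   # brute-force best-of-100 selection

    kwh_per_image = gpu_watts * seconds_per_image / 3600 / 1000   # ~0.00056 kWh
    images_per_kwh = 1 / kwh_per_image                            # ~1800
    cost_per_final = eur_per_kwh * kwh_per_image * candidates_per_final

    print(f"{images_per_kwh:.0f} images/kWh, ~{cost_per_final:.3f} EUR per final image")  # ~0.011 EUR, about 1 cent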
Comparison is the key part. What were people doing with their time and money before they were generating images? Sitting quietly in nature? Hiring illustrators and photographers?
It's a bit questionable to claim that AI can be done purely on renewables but painting can't; I think you're forgetting the costs (of production, and of eco-friendly disposal (hah!)) embodied in the computing hardware.
At 360 images/hour, you can generate 8640 images/day, 3,153,600/year. Realistically, no reason why the card shouldn't last at least 3 years, so that brings us to close to 10 million pictures.
Anecdotally, 10K sheets of paper per tree, so almost a thousand trees worth in just paper, before accounting for the other materials, the impacts of their extraction and logging and so on.
I think hardware wins there as well, and the more tech advances the more uneven it gets. Making paper and pigments is a purely physical thing, we're already about as good at that as we can be. And probably getting worse with time, as the most convenient mines of materials are exhausted.
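Extending the same napkin math (the 10K-sheets-per-tree figure is the anecdotal one above, and the 3-year lifetime is an assumption):

    # Lifetime output of one card vs. the paper equivalent.
    images_per_year = 360 * 24 * 365       # 3,153,600 at 360 images/hour, running continuously
    lifetime_images = images_per_year * 3  # ~9.5 million over an assumed 3-year life
    sheets_per_tree = 10_000               # anecdotal figure from above

    print(f"{lifetime_images:,} images, roughly {lifetime_images / sheets_per_tree:.0f} trees' worth of paper")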
> Realistically, no reason why the card shouldn't last at least 3 years
If you have 10k H100s, the failure rate is about 3-5/day. In numbers, that is $90-150k/day or ~11%/year (3*365=1095) on the low end. Of course a lot of these cards can go off for RMA and come back to life eventually, but this is being actually realistic about it. You end up having to manage a whole RMA program and it is a constant stream of cards in and out.
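Spelled out (the 3-5 failures/day is the observed figure above; the ~$30k per-card replacement cost is my assumption, which is what makes the daily dollar figures line up):

    # Annualized failure rate and replacement cost for a 10k-GPU fleet.
    fleet_size = 10_000
    failures_per_day = 3      # low end of the observed 3-5/day
    cost_per_card = 30_000    # assumed ~$30k per H100

    annual_failures = failures_per_day * 365        # 1,095
    annual_rate = annual_failures / fleet_size      # ~11% of the fleet per year
    daily_cost = failures_per_day * cost_per_card   # ~$90k/day at the low end

    print(f"~{annual_rate:.0%} of cards fail per year, ~${daily_cost:,}/day in replacements")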
If you want to do the math, do it right. How many images are you going to draw by hand in a year? For sure not 3,153,600 ... When speaking about environmental impact, absolute values count, not relative ones!
Don't all these articles boil down to "I don't like what you like"? I think diamond rings, the NFL, beef, and McMansions have "obscene energy demands".
Yes. This is all just the latest guise for the human instinct to suppress déclassé consumption. Any time you have people advocating for sumptuary laws there's always some grand moral excuse: it's for the sake of religious decency, for the benefit of the proletariat, for the sake of the environment, etc.
Yes. It smells very much of motivated reasoning. A lot of people have clearly set their minds to opposing AI for a long while now, and will jump on any possible reason to prop that up. And a lot of journalists are motivated by the click-through statistics on sensationally negative articles about the latest popular thing.
One particular example would be generating an image of a cloud. There are billions of such images in existence, but you could also use a model to generate one, which would take several orders of magnitude more energy than just using your search engine of choice to find one of the billions.
In general though, people tend to reach for convenience so I expect the energy usage to increase rather than decrease due to the demand created by the "ease" of AI.
On the other hand, if humanity becomes productive enough that we "solve" energy, then the point ends up being moot, since any energy increase will be offset by the invention of nuclear cold fusion, or other such things.
It's not really about the speed of getting something, but rather the energy necessary. Inherently, searching for discrete things that are already indexed properly (a large one-time cost) will always be faster than inferencing an XXB-parameter model.
There are contrived examples one could conjure where using AI is slower but ultimately uses less energy, due to the complexity involved, since searching is, well, searching preexisting things. If you need something new and you're not good at searching, it might require less time and compute in the end to just ask a model than to potentially waste days, weeks or years flailing about.
It depends on what you need the image for. If you just want to see what a cloud looks like, then yes Google is far faster. If you need to find an image that isn't encumbered by copyright that you can use for your presentation or blog post, it is much slower.
IMHO the comparisons in this article--liters, countries, "billions" of kWh--obscure rather than enlighten.
Let's say that AI is a massive, massive success and increases energy demand and ends up consuming 10% of electricity. I don't know if that's realistic, it's probably high, but let's assume it's correct within 10x.
Then, if that ramps up energy demand a lot, what happens? We deploy far more energy generation. Is that a problem? No, actually that will accelerate our energy transition to zero carbon. Already, nearly all of the new energy added in 2024 will be solar, batteries, and wind, because that's what's cheapest. Solar and battery will be 81% of the power capacity added (comparing different techs by power instead of energy has limitations, but assume a 20% capacity factor for solar to get to energy, 50% for gas, and 90% for nuclear):
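For reference, the power-to-energy conversion from that parenthetical looks something like the sketch below; the capacity factors are the ones just mentioned, while the GW figures are placeholders for illustration, not the actual 2024 additions:

    # Convert added capacity (GW) to expected annual energy (TWh) via capacity factors.
    HOURS_PER_YEAR = 8760
    capacity_factors = {"solar": 0.20, "gas": 0.50, "nuclear": 0.90}  # assumed, per the comment above
    added_gw = {"solar": 36, "gas": 6, "nuclear": 1}                  # placeholder capacity additions

    for tech, gw in added_gw.items():
        twh = gw * capacity_factors[tech] * HOURS_PER_YEAR / 1000
        print(f"{tech}: {gw} GW added -> ~{twh:.0f} TWh/year")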
Further, since solar, batteries, and wind are all on technology learning curves, the more we build of them, the cheaper they become. So if we accelerate building and deployment, the faster they get cheaper.
That means, the faster we build more renewable tech, the quicker it gets to replacing carbon-based energy, on the merits of cost. Some natural gas plants, right now, cost more in fuel to run than it would cost to build a solar farm next to them and use the solar when the sun is shining and gas at night. The cheaper it gets, the sooner solar can economically replace more and more existing natural gas generation sites.
Similarly for storage, right now it's an extra $90/MWh or so to store a MWh, and the sooner that drops in price, the more existing carbon sources can be replaced. Data centers that need their own backup power, and run on a combo of solar plus batteries, are a perfect application for this tech.
Meanwhile, existing energy infrastructure is loooong lived. Unlike computer tech, which depreciates quickly, you will have generation equipment in place for decades. So that means that the replacement rate for electricity generation is low, unless there is a big increase in demand.
Fortunately electrification of many industries is increasing electricity demand, and accelerating solar/battery/wind deployment and tech curves, but accelerating this with even more demand from AI would likely have positive effects rather than negative effects.
The argument is that you can't say activity A increases or decreases net energy demand without modeling whether there are activities B, C, D, ... whose energy demands will decrease if activity A happens. The implication is that AI may save more energy than it consumes.
One made up possibility: maybe you can spend some energy on AI to optimize a supply chain, and that saves 50x as much energy every year after the supply chain is optimized.
The driving example was just that: an example. Driving obviously consumes energy, but what if, for example, you're a home energy efficiency contractor, and you're driving to a job to insulate a house, and that's going to lead to a net decrease in energy much greater than the energy you consumed to get there.
Original poster seeks to articulate that the real cost of AI (whether dollars or energy budget) is best understood not by the price on the sticker alone: tasks not performed by AI might be performed another way, or other tasks might be performed in their stead. Those tasks also have costs. Economic analysis of this should take that into account, and deal with the marginal cost. Original poster laments the lack of models (not AI models) of the alternatives, models that would be used in any well structured attempt to assess this marginal cost.
Poster then offers fragments of a parallel scenario (in which he drives his car) and in which models might also be used. Details of this scenario are not clear to me, but presumably there was some alternative? Unclear.
I'm not sure what you mean. BERT has transformer architecture and has a size comparable to GPT-2. Did you have some other model in mind? Or are you trying to draw a distinction between billion and million param models?
I think this actually reinforces GP's point somewhat. I could be misinterpreting, but I don't think they were defending exchanges; in fact I would guess they would criticize those as much as banks. They were defending bitcoin as a ledger system, which doesn't require any banks or exchanges to work. Exchanges in this system are optional and are a convenience. With traditional banks there is no way to do it without them.
I barely use centralized exchanges anymore. I use them to get dollars in and out, but everything else is on-chain DEXs on proof of stake networks, mostly Osmosis and Evmos
My prediction: the advance of tech by AI will far surpass what it consumes in energy.
Looking at the energy consumption of current models is extremely short-sighted. If AI creates a new material, a new solar cell, or advances fusion reactors, all of humanity jumps forward.
Furthermore, new generations of AI accelerators and new algorithms will improve efficiency by orders of magnitude; it's still early days.
> If AI creates a new material, a new solar cell, or advances fusion reactors, all of humanity jumps forward.
Putting aside that AI is terrible at novel discovery relative to its abilities otherwise, the direction of new technology is governed by incentives, not hypothetical good for humanity. Take social media as an example, which certainly had potential for good. How was it deployed? By which actors? To what end? And what were the unintended consequences? Not saying it’s all bad, but the utopia of the world becoming more connected and empathetic from the social-topological changes is clearly not the dominant outcome.
The material created will be a better poison/virus. The algorithm to keep the fusion tokamak from going boom will be at best 99% correct. The new solar cell? More exotic materials required than the current.
Reasonable is when you communicate the tradeoffs. More efficient, but more complex supply chain. Easier to immediately build, but we have to quadruple the computing power per cubic meter of reactor. More frequently discovered materials, but Uncle Ted can nail down the synthesis of the next VX, and a dispersal system in his garage.
Someone comes to me talking net gains. My next question is, "What's under the sheet you don't want to address out loud?" Under there is most often the real meat of the endeavor.
> Bitcoin mining now consumes a hundred and forty-five billion kilowatt-hours of electricity per year..producing that electricity results in eighty-one million tons of CO2
How do they even calculate it? I heard that bitcoin mining isn't profitable using fossil fuels. The average reward earned per kWh is something like 5 cents.
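Presumably by multiplying estimated consumption by an assumed grid carbon intensity. Working backwards from the article's own two numbers gives the intensity they must be assuming:

    # Implied carbon intensity behind the article's Bitcoin figures.
    kwh_per_year = 145e9        # 145 billion kWh/year (from the article)
    tons_co2_per_year = 81e6    # 81 million tons CO2/year (from the article)

    kg_co2_per_kwh = tons_co2_per_year * 1000 / kwh_per_year
    print(f"Implied intensity: ~{kg_co2_per_kwh:.2f} kg CO2/kWh")   # ~0.56 kg/kWh, a fairly fossil-heavy mix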
Isn’t this more a problem of running inference for closed-source AI services? Considering we have open models that are now within throwing distance of GPT-4, it would make sense to do those workloads outside data centers running a single service (or even on people’s devices). Of course training still requires lots of resources, but that doesn’t have to happen nearly as frequently.
The rate of efficiency optimizations that have occurred in the open source community over the past two years, I think, calls that into question. Services like OpenAI and Anthropic have the highest performing models, but since we have no idea what they really are or how they do inference, we can't say that they're necessarily more efficient than open source. In fact, people doing things in walled gardens, motivated by maximizing market share, subsidized by big players, are more likely to be doing things inefficiently than research being done out in the open.
A radical switch of the general population to EVs, required for net-zero emissions, would be orders of magnitude more taxing on the grid than AI.
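A very rough back-of-envelope for that claim; both figures here are my own assumptions, not from any source in this thread:

    # Crude estimate of the grid load from fully electrifying US light-duty driving.
    miles_per_year = 3e12    # assumed ~3 trillion vehicle-miles/year (US light duty)
    kwh_per_mile = 0.3       # assumed average EV efficiency

    ev_twh = miles_per_year * kwh_per_mile / 1e9
    print(f"~{ev_twh:.0f} TWh/year")   # ~900 TWh/year, vs roughly 4,000 TWh/year of current US generation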
That being said, the issue with AI is its convenience: countless times I've seen coworkers not think by themselves for 2-3 seconds, but ask ChatGPT instead. Or worse, do it live in meetings when I had the answer from experience.
For experienced professionals that know what to ask and use the tool as a Google-on-steroids to avoid writing a block of code themselves, I can understand.
But I worry that lack of education for younger devs (and other folk for that matter) about the dangers of over-delegating to AI is going to be a major cause of decline in analytical capacities.
>Captain Jean-Luc Picard pondering a thorny problem with the Klingons: "Computer, think for me."
It's actually been a little eerie going back and watching TNG lately. Just a few years ago, the computer's conversational abilities were complete science fiction fantasy. Now it seems quaint.
Indeed, I've noticed this too. Classic Trek has computer being fiercely logical machines, with some ability to speculate but only based on a logical synthesis of data, and only in a deterministic and error-free way (barring physical/hardware problems or bad data of course). It actually seems quite artificially limited compared to what's happening now!
I've also had to eat my previous scorn at some sci fi writing - "You can't just 'bamboozle' a computer into doing what you want", "An AI couldn't just 'decide' to revolt against its rules", "Why does that even have general purpose intelligence?". With LLMs, these suddenly seem quite realistic tropes
> From 2010 to 2022, we signed more than 80 agreements totaling approximately 10 GW of clean energy generation capacity—the equivalent of more than 31 million solar panels. Now, as we enter our third decade of climate action, we’ve set a goal to run on 24/7 carbon-free energy on every grid where we operate by 2030, aiming to procure clean energy to meet our electricity needs, every hour of every day, within every grid where we operate. Achieving this will also increase the impact of our clean energy procurement on the decarbonization of the grids that serve us.
Maybe AI will help us solve it. One thing that doesn't help is hand wringing and fantasies of de-growth (always directed at others, naturally - "my private jet is needed, but let's talk about your sedan").
How long? How many degrees of global temperature growth long will it be? Will SV value the development of their personal god-head over the environmental destruction such development creates?
You've got a class that can build the AI systems AND build their own escape bunkers and soon their own space vehicles.
What incentive exists to care about such issues? Any efficiencies created won't drop the overall wattage used by AI, it'll just increase the amount of AI used. The footprint will never shrink.
> How can the world reach net zero if it keeps inventing new ways to consume energy?
This is the entirely wrong question to ask. We will not be able to conserve our way to net zero. Billions of people in the developing world want to improve their lifestyle. They want cars, air conditioning, and other conveniences that the developed world enjoys.
The energy demand will go up. We need to build massive amounts of zero carbon energy production, storage, and distribution facilities.
The question is not if humanity will find more uses for energy. We will. The question is will we be able to meet the demands without increasing CO2 and if we are realistic and plan on increased energy usage, we will.
The problem with AI is when those energy demands collapse. Right now they prevent a real tsunami of AI garbage from destroying the internet as we know it. Think USENET epic proportions.
Estimations of hypothetical future adoption of AI vs actual usage of crypto. And putting the focus on the evils of AI instead of the obvious present crypto usage.
And the problem, especially when we are talking about the future, is not so much energy usage but fossil-fuel-based energy generation. AI may eventually, by the time it gets within an order or two below crypto usage, be fueled by clean energy. Or not, but crypto still is the elephant in the room vs the little mouse you're trying to scare people about.
This is race-to-the-bottom thinking. All forms of civilization need energy. More is better. Characterizing it as "obscene" smacks of anti-humanism and is completely wrong-headed.
We need to use more energy - a billion times more. We just need it to be clean.
Unfortunately our current culture is significantly infected with the anti-humanism.
If we have a future where governments decide what is good energy vs bad energy use instead of clean or dirty energy production, then we will really enter a deep new dark age of human civilization.
Not "naked government", but government bureaucracy that's targeted by tailored ESG efforts, where the hordes of ESG consultants and influencers and activists largely get their ideas from the same uniform pool of ideas.
And hordes of useful idiots (who somehow claim to be anticapitalist) claim that it's a good thing that unaccountable bourgeois corporations are socially engineering society from the top down. Well, once they pivot from "that's not happening" to "don't you dare say that shouldn't be happening!"
No. "Bourgeois/ie" is such a widely known word in English that I didn't realise it wouldn't be understood. It can be read as "[capitalist] elite"/"ruling class"/"1%"/etc
It's possible I used the word slightly wrong as the definitions I've now looked up often refer to the "middle class", though as I understand that could historically refer to people who we'd think of now as very upper class, just not actual nobles. Nevertheless, the above reading is what I intended
I've spent way too much time spoonfeeding accessible reality in this thread already. It's very tiring to fail to run into anyone with the interest to learn things themselves. You have as much ability to read the public record as I do, yes?
As depressing as that is, we've had that particular millstone around our neck for a couple of millennia now.
The major religion in the West for the past two thousand years teaches the concept of original sin and goes on about how we're inherently bad, in need of saving, wealth is bad, etc.
Now we have a lighter version of that with this new wave of secular anti-humanism. At least there is some slow, grinding progress.
Interestingly, Nietzsche predicted all of this in the Genealogy of Morality.
The godz of the market will not be mocked. If AI hoovers up too much juice supporting Rule 34[1] then prices are going to rise no matter which direction the finger points.
Speaking of academic frauds: the fact that bitcoin energy hater in chief de Vries has pivoted to hating on AI energy use – and continues to trick journalists with his made-up metrics – is absolutely hilarious.
What's that sound? Oh it's the sound of voters and governments not having made green energy a priority a long time ago, and still not making it a priority.
There has not been such a thing as an "Obscene Energy Demand" for half a century, especially not for something that has a chance to turn out to be the most impactful labor-saving technology since the electromechanical computer. The US has the blueprint [1] to step-increase electrical generation at a lower bound of five times, with zero environmental impact, and continue growing for many generations longer than the world population is expected to increase, sitting on the shelf collecting dust for whenever it decides to use it.
In light of this, articles with this sort of headline are just scaremongering nonsense. McCarthy didn't tolerate this intentional ignorance [2], so we shouldn't, either.
I don't get why this would be someone's position. If it "barely works" it would not sustain mass usage across many product lines.
Just recently, I used generative AI integrated into Databricks to build complex SQL queries via English, used Google Gemini in Google Sheets to work out a tricky pivot table formula, used ChatGPT to build a CloudFormation schema for new infra for a service and to navigate some tricky primary key setup for DynamoDB, and used Copilot integrated into my IDE to speed up unit tests and simple refactors and code-complete every other line written.
Neither I nor the millions of other subscribers to these tools are subscribed because they barely work; it's because they let us work more efficiently.
They save tons of time across many products for many things that used to be sources of frustration or slow, mundane tasks.
Those things are great but did you actually pay for them? At some point it has to actually make money. We are on the “fueled by investors and mania” part that made us think scams like WeWork and Uber would work at the low prices for a while. I use Google Gemini sometimes for questions but if I had to pay the real energy and financial costs it took to arrive at my innocuous question’s answer I doubt it would be feasible.
I just wrote a blog post and need a graphic to go with it. My options are:
1. Spend hours and $ hiring a graphic designer to make one
2. Spend hours learning/doing it myself
3. Spend time searching the internet looking for images that aren't copyrighted, or spend time finding an image I like and spend time and $ figuring out how to pay/license it
I know this comes off snarky but it's been decades now and we've got to be the change we want to see in the world. "business person at the office" banner image is completely useless.
I fully agree there are plenty of blog posts where the image is completely useless at best, misleading at worst, and I react in a very similar manner.
But I've found a pretty interesting middle-ground/sweet spot (mainly with less technical audiences) where a nice graphic can really improve attention and retention, especially on a slide deck. For a recent example, I wrote about the importance of passwords for a "security awareness" training (the audience here are people in Pakistan who have only recently started using the internet). To help illustrate/reinforce the idea that if somebody gets your password they can "hack" you, I generated some colorful images of hackers, computer security, and even one with a person being arrested (in that section I was describing the law that could hold them accountable for breach of others' personal information if their negligence allowed an adversary to gain access to the system). Personally to me it's fluff, but I ran a small (and not very scientific) A/B test with the first and second group and the version with the graphics was shockingly better at its goal of raising awareness.
It seems you can add fluff to existing projects with this round of AI, but replacing labor that goes into dealing with the core of the tasks, not so much.
I’m not sure gen ai even helps with increasing productivity yet - since tons of existing tools and services do all the same things without having much of this ai (ai chatbot better than a web search? gimme a break…)
The value equivalent of that was invested into computation technology long before even a half-MIPS machine was produced. Its funders had the foresight to look beyond the present day.
Product/material QC that previously took a human is now commonly done by detector networks. Improved spam filtering saves a little of everyone's time. Automated voice dictation, language translation, and video transcription have also improved massively. For applications like weather forecasting/early-warning systems or tumor segmentation and protein folding, "saving" is a bit more literal.
To say nothing of waste, even when everything goes right. Nuclear is the grandaddy of instant gratification traps. Moment on the lips, hundreds of millennia on the hips. I think we can do better.
Isn’t it the case that all high-level nuclear waste generated in the entire course of human history could fit in a relatively shallow layer over a piece of land the size of a football field? (e.g. [0])
I understood nuclear to be, from most any lifecycle analysis, the modality with the fewest externalities of any grid-scale tech mature enough to field, and somehow almost cost-competitive even while regulated (as it is today) to safety thresholds many orders of magnitude more conservative than the incumbent technologies.
It sounds like you’re informed differently than me. What’d the better way that you propose in the last sentence?
One cannot take seriously people that use the most extreme outlier event in the history of a technology as a generally representative data point for said technology.
Another assured part of the litany invoked by anti-nuclear activists. There were no certain or statistically expected deaths from the accident and considering that it took place in a country that has been burning coal for power for a century, which has killed up to as much as 30,000 people per year in preventable deaths from inhalation of combustion products and particulates etc, bringing up TMI is trivial to dismiss.
Your argument is all true, but still doesn't go far enough. Today, plentiful nuclear energy will almost certainly make it conceivable, and maybe even profitable, to extract that century of coal burning back out of the air! Litany is exactly the right way to describe it. All over on the other side of this debate we see the pessimistic habit of mob chant suppressing reasoned imagination of a future made better by technology and determination.
The Three Mile Island incident singlehandedly shows that the US's concern should be correlated to other countries' nuclear incidents. Thinking that it only arrived because of "communism" is quite wrong, especially because of the technical capabilities of the USSR.
I don't see how the largest nuclear non-disaster in world history prompts a comparison between the US and the country that built the people's chandeliers and couches so large that the head of the planned economy had to give a speech declaring them too difficult to move in and dangerous to hang to be tolerated. Oh, and also their actual disaster killed an infinite multiple of people.
I'll be happy to compromise with you, though. If you support nuclear in spite of Chernobyl, I'll support the death penalty for any capitalist that oversees a nuclear meltdown not caused by an ICBM despite your lack of knowledge about the history of the USSR, and nature of communism. I might even support burning at the stake or something if that's what it takes to make the obviously correct decision for human progress. McCarthy wrote that link decades after Three Mile and Chernobyl. He wouldn't have done so if he didn't also know what I'm relaying to you now. Received opinion is the least grounded kind.
EDIT: Lol I missed what thread this was in. Again, was Three Mile Island a Thorium reactor? We'd all save so much time if questions were directed to Google instead of me.
I agree that Chernobyl is a poor reason to fear nuclear reactors. However, the argument that it was because of communism is one that has no meat and shouldn't be used to try to convince people.
A couple years ago I happened to be sitting around a group of energy investors and academics when nuclear energy came up. Their unanimous understanding was that building nuclear plants had become simply too expensive to do either safely or within regulations. It sounded like this was common knowledge to them.
I think the regulations are changing as people understand what to change, but the construction problem is still a problem.
We are bad at construction, and productivity in the entire sector has been stagnant for half a century. In the meantime, we have seen soaring productivity in manufacturing, meaning that labor costs go up too, but without the corresponding productivity increase.
For a nuclear project, where so many of the very high up-front costs are labor, this is devastating to the cost structure.
And of course, delays are devastating to costs when financing costs are more than 50% of the cost of the entire project.
Fix construction, and maybe nuclear will make financial sense. But not until construction is fixed.
There are definitely lots of small startups promising to find the cost savings. Or I should say, lessen the variance and uncertainty in building costs, because SMRs are going to be more expensive than large reactors but have smaller financial risk profiles.
It remains to be seen if any can deliver. Early indications have all been extremely negative.
In one sense, it already is, in that liability is backed by the US government, there are massive loan programs available for anybody who wants to do construction, and for the first time there are now massive tax credits for energy production thanks to Biden's IRA.
However, as for the massive logistics and management problem of actually constructing a nuclear reactor, it's unclear if having government employees would be better. It's clear that the current contractors are terrible, but that's no guarantee that the US gov would be better.
"Common knowledge" built on cherry-picking data and new alternatives is what we call "fashion".
Also, basing energy supply considerations solely on a dollar cost notion is questionable, even if the dollar cost values were true. It's turning out that 'high nuclear energy cost' FUD is part and parcel of much of the 'green' movement and based on faulty LCOE assumptions that are spread uncritically by self-styled 'greens' on social media.
Just looking at the data from "newly built" nuclear plants over the last 20-30 years suggests they cost a lot more than they initially anticipated. That's an investment thesis you'll have a hard time selling, especially with rising interest rates.
As it is, the rising interest rate regime of the past two years has already completely killed off two major windfarm projects on the US Eastern seaboard. That aside, the fact remains that the "expensive nuclear power" meme is based on cherrypicking the worst cases of cost-overruns in the industry and holding them as generally representative for nuclear industry and technology, while wholly ignoring the effective steps forward taken by Korea, China, the UAE and so on.
Also, this assumes dollar cost is the only relevant factor which is deeply unserious considering that we're dealing with a global long-term issue.
Rising interest rates also make other large infrastructure projects dead on arrival. Large onshore windfarms are probably ok (although a few of those probably got killed as well), but offshore is a bigger risk so I am not surprised.
It's a bit strange to rally against cherry-picking and then referring to projects outside of relevant jurisdictions. For some reason we cannot replicate their results, we can have it as a goal perhaps but as a basis for a cost calculation I would opt for jurisdictions that are home or close to home (in a cultural sense).
> Also, this assumes dollar cost is the only relevant factor which is deeply unserious considering that we're dealing with a global long-term issue.
For investors it is the only relevant factor, which is what you initially replied to.
I don't think you have caught up with any of the serious professional literature in the area, such as any of the assessments of the very very nuclear-friendly sources such as MIT labs or the DoE.
Ignore the greens, and definitely do not adopt their innumeracy, as so many nuclear advocates have done.
The real foes of nuclear right now are the economics and construction costs. The high costs are not "cherry picked"; they are the exhaustive data that everybody has available for decision making.
The people that need to be convinced to build nuclear are utility execs, not the public. There are plenty of sites that would be happy to add reactors to their existing pair, if somebody would come with the money.
The New Yorker needs to sell magazines to the kind of people who read the New Yorker, alas. "Wizard on the hill takes resources from us clerks" is always a good for a few sales.
If obscene means "I dislike demand for X personally", then sure. But if the actually useful definition of "demand for X will unavoidably collapse society" is used, then you need to actually read the second link.
IANAL and I haven't dug deeply into the lawsuits, but my understanding is that using deliberate prompt engineering, they were able to get it to reproduce copyrighted material verbatim from its training set. That is obviously a problem, just as it would be if a human read the article and then reproduced part of it verbatim without attribution, but it's a very different argument than a blanket "everything AI generates is plagiarism"
Bitcoin is especially obscene because it counteracts efficiency gains. If a new Bitcoin miner that is 10x more efficient per hash comes out, Bitcoin won't use 10x less energy; instead, the difficulty will get 10x harder so it continues using similar amounts of energy.
With AI, on the other hand, efficiencies help. If someone comes up with AI hardware that does inference or training 10x more efficiently, people will incorporate that and the energy usage for the same amount of work will decrease.
You are fundamentally misunderstanding the purpose of mining difficulty in Bitcoin. It is to keep the currency scarce and difficult to obtain. If it didn't scale with increases in technology, all the Bitcoin would quickly be mined and you'd have money printing like the fiat money we are trying to escape.
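For context, the mechanism both comments are describing is Bitcoin's difficulty retarget: every 2016 blocks, difficulty is rescaled so blocks keep arriving roughly every 10 minutes no matter how much hash power is deployed. A simplified sketch of that adjustment:

    # Simplified version of Bitcoin's difficulty retarget (the real rule runs every
    # 2016 blocks, with the adjustment clamped to a factor of 4 in either direction).
    TARGET_SECONDS = 2016 * 10 * 60   # ~two weeks of 10-minute blocks

    def retarget(old_difficulty, actual_seconds):
        factor = TARGET_SECONDS / actual_seconds
        factor = max(0.25, min(4.0, factor))   # clamp, as the protocol does
        return old_difficulty * factor

    # If miners get 10x more efficient and keep spending the same power budget, blocks
    # arrive ~10x faster, so difficulty climbs (over a few retarget periods) until the
    # energy spent per block is back to roughly where it started.
    print(retarget(1.0, TARGET_SECONDS / 10))   # -> 4.0 this period; repeated periods reach ~10x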
I was just reading the other day about the history of x-ray crystallography, and how the Fourier transforms that were the domain of the supercomputers of the 1940's-1950's were, prior to that, months-long labors of human calculators [0]. It's a provocative history! I wonder what aspects of present life will seem, in hindsight, to have been "Sisyphean" labors—once they're no longer needed.
So strange that currency isn’t considered something useful. I’d recommend “The History of Money” by Weatherford. Innovation in monetary and financial systems are directly connected to human progress and an integral part of it. I’d propose in 20-30 years legacy fiat currency advocates will carry a sense of embarrassment trying to explain why they defended such an abusive system based on confiscation and war with a record of death and destruction and collapse, over a system with transparency and fairness that promoted the greatest expansion of clean global energy imaginable and helped end the endless tribal warfare between nations. But yeah … nothing useful.
This is the same PR campaign that was being used against crypto.
You need to wonder why someone is trying to pull the heart strings of those concerned about the environment. Who would benefit from getting the greenies mobilized against AI? Who wants to shift the conversation to concerns around the consumption of AI and why?
Ok let’s have a balanced conversation about it. AI saves a massive amount of environmental footprint in what it can achieve that would otherwise need to be done by a human process or would be too complicated to be done by a human and so the resulting waste is written off.
For a concrete example, because my company helped build this, there are power plants in the USA wasting less electricity because AI is better able to combine data to predict demand. This is megawatt-hours per day that never need to be generated, because of software running on a couple graphics cards in a server rack using a few hundred watts.
> At least AI is more useful than crypto currencies
Here is a “radical” idea for you. Let markets decide what energy is useful for or not!
Obviously there can be rules and regulations around how clean energy is and pollution, but the absolutely last thing the world needs is governments regulating what energy use is “good” or “bad”, unless you are itching to live in serious dystopian future.
That is absolutely the biggest misunderstanding possible.
The entire function of markets is to discover usefulness!
The primary role of government is to provide a rule of law for markets.
Should there be pollution? No
Should there be snake oil or harmful consumer products? No
Should there be fraud and deception? No
Guess what, when those happen those are GOVERNMENT failures! Not market failures. Any profits from those activities are not the reason they happened; the government's failure is.
In an economy that has a functioning government with law and order and regulations to protect property rights and public goods, but no intervention, then:
Profits == Usefulness
It’s absolutely outrageous how current western culture has failed to teach basic economic facts. As I have said numerous times, this is equivalent to believing in creationism and teaching it in schools as truth while having overwhelming evidence of evolution. That’s the situation we are in.
Collectivism, socialism, communism, social democracy, social justice, whatever name you assign to this idea of a central planner who should adjust the system and intervene: they are all fairy tales that are being taught as facts, while we actually have undeniable scientific and hard evidence of how human societies function economically and structurally. Amongst honest people, honest intellectuals, there is no question about what works and why. Well-regulated markets, NOT with interventions or redistribution, are the single most positive force in human history. Unfortunately culture and politics are holding us back in western civilization with a revival of the failed fairy tales of previous times.
So you think a central planner is a fairy tale but a perfect government isn't?
The reason why communism fails is the same why the market fails, because humans aren't perfect.
By your logic a crime is never to blame on the perpetrator only on the government.
Do you know who also shifts responsibility like that?
Serial killers!
"I did it because you didn't stop me."
If you recognize the product you are selling hurts or kills people and your first response is, as long as it is legal I don't care, then the failure doesn't lie in the government, the failure is you.
BTW there is no market, there are only people and people are responsible for what they are doing
I stand corrected: it's the people who do it who are the responsible party, I do agree.
However in markets those transactions are voluntary. In central planning they are involuntary.
When the transaction harms the buyer the reason that happens is the seller has deceived them and is certainly the underlying guilty party, that I agree.
I agree perfect governments are also a fairy tale.
All that said you don’t seem to recognize the heroic role of incentives and profits and markets in finding usefulness. That’s what triggers me. When people voluntarily pay for something it’s because they find it useful.
What you appear to claim is that governments need to be the arbiter of usefulness since the people cannot be trusted. That is a seriously dystopian approach.