Hacker News | arsenico's comments

You can actually use it with Claude Pro, which is $20/month


The chart is interesting, but it misses an important thing: semiconductor production on its own is complicated, but producing the equipment for semiconductor production is a whole new level. That's why ASML stands apart.


Cannot even qualify "It has always been shit, so no problem with it becoming even shittier" as a hot take.


What kind of expenses are you talking about?


About $100 a month.


Free products are not a consequence of megacorps' existence. Free products exist because you are the product. Big companies also don't necessarily mean monopolies.


Because we live in a society with some ideas of decency, integrity and so on. He shouldn’t. That’s why he could receive this kind of feedback.


There’s industrial farming, and then there’s farming.


In reality though, do we actually need humanoid robots, and if so, for what?


Need? No, absolutely not.

But they do conveniently fit into the century-old buildings we put many of the factories into, which makes them a useful upgrade path for those unwilling to build structures around more efficient robots (the kind we've had for ages and don't even think of as robots; they just take ingredients and pump out packaged candy or pencils, etc.).


There are incredible technological barriers to humanoid robots who have equivalent skills and stamina. Keeping old factories running seems a very weak reason to do that, when our industrial base regularly retools production methods and brings in new equipment when old machines wear out.

If what you are saying is that many factories cannot run without humans running around fixing things, I agree. But that’s pretty different than using humanoids to put items in boxes.


Yes indeed.

Even just 3.5 years ago, it seemed like everyone was saying that humanoid robots were a dead end, an unnecessary part of Isaac Asimov's vision of the future, or similar.

I think much of the current interest is because Musk watched some scifi, ordered the Optimus project, and loads of others decided it would be a mistake to bet against him.

I put them in the same category as 3D printers: they can do anything, but you can always find a better special-purpose alternative for any specific goal.

Still a lot of people using 3D printing productively despite that; likely also will be for humanoid robots.

Well, if the AI is good enough. Remote control has its uses, but even then you need enough on-board AI to avoid playing QWOP as live action, with a robot holding industrial equipment instead of in a safe flash game.

That said, stamina is probably the least important aspect — in an industrial setting you probably have a lot of power lines already installed.


Sometimes you need to produce megatons of a thing, but sometimes you need to produce a million different things. I bet on the humanoid robot in that case.


The human form is very versatile. While most robots may end up taking a different form, once we have sufficiently advanced humanoid robots, robots may replace human workers in almost any role.


I still have a hard time understanding what this future would look like?

Will we just sit around and do nothing then? I'm not saying we have to work, but there is some level of work that I think is required for happiness / fulfillment etc.

I'm not even really against the idea, it just sounds quite dystopian to me.


I think reading a broad swath of sci-fi might be the best way to engage this topic.

For fairly positive takes: Asimov had a take in the robot novels; Accelerando by Charles Stross touches on reputation-based currency (among a deluge of other ideas); and Iain M Banks’ Culture novels have a take. I cannot find it, but there was also a short story posted here recently about a dual-class system, where the protagonist is rescued and whisked off to a utopian society in Australia where people do whatever they like all day, whether it be fashion design or pooling their resources to build a space elevator. There are plenty of dystopian tales as well, but they’re less fun to read and I don’t have a recommendation off the top of my head.

To answer your question directly, my opinion is that our base nature probably leads us towards dystopia, but our history is full of examples of humans exceeding that base nature, so there’s always a chance.


Maybe the story you're referring to is https://marshallbrain.com/manna1


Actually I think you’re right. I got my stories mixed up.


This was the one but thanks for the add to my reading list!


I'd say the book you're talking about is "The Machine Stops"; it's a really fun, albeit scary, read.

I won't say anything more in case you decide to read it, but it's amazing how the author managed to predict the future the way he did.

Thanks for the response and fingers crossed.


I don't think it matters much if we're for or against such a future.

If robots can do the same job as humans, but faster, cheaper and at a higher quality, our employers/customers will most likely replace us.

If we're lucky, we may find some niche, be able to live off our savings or maybe be granted some UBI, but I absolutely do think it's concerning.

What is worse is that, if we become obsolete in every way, it's not obvious that whoever is in power at that point will see any reason to keep us around (especially a few generations in).


Who will be able to afford all of this if they're not getting paid?


Before the industrial revolution, even though money existed, "wealth" really meant "land" rather than "capital".

While we do not today need to ask how people can afford robot lawnmowers despite being unable to find work hitching ploughs to draft horses or oxen, the fears at the time of things like this did lead to mobs smashing looms.

If I have some (n) robots that can do any task a human could do, one such task must have been "make this specific robot"*. If those n can make 2n robots before they break, and it takes 9 months to do so, and the mass of your initial set of n is 100 kg, they fully disassemble the moon in roughly 52 years. Also you can give (94.2 billion * n) robots to each human currently alive.
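
A rough Python sketch of that doubling arithmetic, for anyone who wants to check it (the Moon mass of ~7.35e22 kg and population of ~7.8 billion are my own assumed figures, not numbers from the comment):

  import math

  moon_mass_kg = 7.35e22      # assumed approximate mass of the Moon
  initial_mass_kg = 100.0     # the starting set of n robots, as stated above
  doubling_time_years = 0.75  # 9 months per generation, as stated above

  # doublings needed to grow 100 kg of robots into the Moon's mass
  doublings = math.log2(moon_mass_kg / initial_mass_kg)  # ~69.3
  years = doublings * doubling_time_years                # ~52 years

  # final robot count, in units of the original n, shared among ~7.8e9 people
  robots_final = moon_mass_kg / initial_mass_kg          # ~7.3e20 * n
  per_person = robots_final / 7.8e9                      # ~94 billion * n each

  print(f"~{years:.0f} years, ~{per_person:.3g} * n robots per person")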

Asking "who can afford it" at that point is like some member of the species Kenyanthropus platyops asking how many knapped flints one must gather in order to exchange for a transatlantic flight from London to Miami, and how anyone might be able to collect them if we've all stopped knapping flint due to the invention of steel:

The economics are too alien; we cannot imagine this kind of thing accurately on the basis of anything we have available with which to anchor our expectations.

* including the entire chain of tools necessary to get there from bashing rocks together.


> Before the industrial revolution, even though money existed, "wealth" really meant "land" rather than "capital".

The industrial revolution didn't really change anything about land.

It's still a fundamental and underrated component of our economic system, arguably more important than capital. That's why Georgism is a thing. Indeed, it's even contemporary with the industrial revolution.

> The economics are too alien; we cannot imagine this kind of thing accurately on the basis of anything we have available with which to anchor our expectations.

I would refrain from making such wild predictions about the future. As I have pointed out, the industrial revolution didn't change the fundamental importance of land. Arguably, land is even more important and relevant today, given how disastrous our land use policy is for our species and the climate.

So, yes. It is important to ask how consumers will pay for all these robots if they don't have any sort of income that would make using robots economical.


> The industrial revolution didn't really change anything about land.

I didn't say otherwise.

I said the industrial revolution changed what wealth meant. We don't pay for rents with the productive yield of vegetable gardens, and a lawn is no longer a symbol of conspicuous consumption due to signifying that the owner/tenant is so rich they don't need all their land to be productive.

And indeed, while land is foundational, it's fine to just rent that land in many parts of the world. Even businesses do that.

I still expect us to have money after AI does whatever it does (unless that thing is "kill everyone"); I simply also expect that money to be an irrelevant part of how we measure the wealth of the world.

(If "world" is even the right term at that point).

> Arguably, it's much more important, and even more relevant today given how our land use policy is disastrous for our species and climate.

Not so; land use policy today is absolutely not a disaster for our species, though some specific disasters have happened, on the scale of the Depression-era Dust Bowl or, more recently, Zimbabwe. For our climate, while we need to do better, land use is not the primary issue; it's about 18.4% of the problem vs. 73.2% for energy.

> So, yes. It is important to ask how consumers will pay for all these robots if they don't have any sort of income that would make using robots economical.

With a 2-year-old laptop and model, making a picture with Stable Diffusion in a place where energy costs $0.10/kWh costs about the same as paying a human at the UN abject poverty threshold for enough food to not starve for 4.43 seconds.

"How will we pay for it" doesn't mean the humans get to keep their jobs. It can be a rallying call for UBI, if that's what you want?

But robots-with-AI that can do anything a human can do, don't need humans to supply money.


> enough food to not starve for 4.43 seconds

I'm having real difficulty reading this unit of measurement. Let me see if I can get this right: a typical person can survive indefinitely on 1600 calories. Let's say that these are provided by rice (which isn't sufficient for a long-term diet, but is good enough for a while). 1600 calories of rice is about 8 cups/24h and there are about 10000 grains in a cup, so is it that an image can be generated at the same cost as:

  4.43 s / 86400 s/day * 8 cups/day * 10000 grains/cup
Being about 4 grains of rice?
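
A quick Python check of the same arithmetic, using the assumptions above (8 cups per day, ~10,000 grains per cup):

  seconds_of_food = 4.43         # seconds of "enough food to not starve"
  seconds_per_day = 86400
  grains_per_day = 8 * 10000     # cups/day * grains/cup, as assumed above

  grains = seconds_of_food / seconds_per_day * grains_per_day
  print(f"{grains:.1f} grains")  # ~4.1 grains of rice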


Sounds about right, but I don't have unit conversions and I'd count anything less than lifetime sustainable as gradual starvation.


Nope. Land is important because everything rests on it. Even radio spectrum and orbitals can be regarded as a form of 'land'.

Georgism doesn't exist in a vacuum. It wasn't formulated during the time when wealth 'meant' land; it was formulated during the industrial revolution, possibly as a response to the problems its proponents saw in their society, problems we're still dealing with today.

Land no longer merely means the plot where the productive yield of a vegetable garden comes from. Anything that capital sits on is land. That includes your factories and your datacenters. Yes, that includes renting land from someone else. That's land policy.

Housing? Land policy. Pollution? Land policy. Transportation? Land policy. Can't afford to live? Likely your biggest ticket items include transportation and housing. Land is more important than ever.

Now, what does this have to do with AI? I would caution against thinking money or capital will be irrelevant, or against making any definitive prediction about the impact of AI or about when or how it will come.

Edit: I see that you added stuff, but you have a narrow conception of land policy.


> Nope. Land is important because everything rests on it. Even radio spectrum and orbitals can be regarded as a form of 'land'.

Then you define land so broadly that it includes the empty vacuum of space, which robots are much better suited to than we are and can exploit trivially when we cannot.

If you want to, that's fine, but it still doesn't need humans to be able to pay for anything.


Orbitals are literally scarce resources, as is radio spectrum. If you have people just doing whatever, you'll get Kessler syndrome, especially as our orbits are filled with more satellites each year. Similarly, you just can't have random folks blasting out radio signals at random.

Yes, satellites are robots. However, they have no agency. The incentive structure decides whether we get Kessler syndrome, and it then directs humans to solve problems with robots.

So, yes, they are either directly analogous to or a literal form of land.


Space is much more than circular orbits around earth, and is not a scarce resource — it's big enough that you can disassemble the earth, all the planets, all the stars, all the galaxies into atoms and give them so much padding it would still be considered extraordinarily hard vacuum. Something like 3.5 cubic meters per atom, though at that scale "size" becomes a non-trivial question because the space is expanding.

Which reminds me of a blog post I want to write.
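
A back-of-the-envelope check of the cubic-metres-per-atom figure above (the ~1e80 atom count and ~4.4e26 m comoving radius are rough, commonly quoted values I'm assuming, not numbers from the comment):

  import math

  radius_m = 4.4e26   # assumed comoving radius, ~46.5 billion light years
  atoms = 1e80        # assumed rough atom count of the observable universe

  volume_m3 = 4 / 3 * math.pi * radius_m**3       # ~3.6e80 m^3
  print(f"{volume_m3 / atoms:.1f} m^3 per atom")  # ~3.6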

> Similarly you just can't have random folks blasting out radio signals at random.

That's literally what the universe as a whole does.

You may not want it, but it can definitely be done.

> Yes, satellites are robots. However, they have no agency.

Given this context is "AI", define "agency" in a way that doesn't exclude the people making the robots and the AI.

> Incentive structure decides if we have kessler syndrome, which then direct humans to solve problems with robots.

Human general problem-solving capacities do not extend to small numbers such as merely 7.8e20.

For example, consider the previous example of the moon: if the entire mass is converted into personal robots and we all try to land them, the oceans boil from the heat of all of them performing atmospheric braking.
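
A hedged back-of-the-envelope check of that claim (the figures below are rough textbook values I'm assuming, not numbers from the comment):

  moon_mass_kg = 7.35e22         # assumed Moon mass
  escape_velocity_ms = 1.12e4    # assumed Earth escape velocity, ~11.2 km/s
  ocean_mass_kg = 1.4e21         # assumed total mass of Earth's oceans
  heat_of_vaporisation = 2.26e6  # J/kg for water

  # kinetic energy dumped into the atmosphere if all that mass de-orbits
  reentry_energy = 0.5 * moon_mass_kg * escape_velocity_ms**2  # ~4.6e30 J

  # energy needed just to vaporise the oceans, ignoring heating them to 100 C first
  boil_energy = ocean_mass_kg * heat_of_vaporisation           # ~3.2e27 J

  print(f"re-entry energy is ~{reentry_energy / boil_energy:.0f}x the boil-off energy")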

And then we all get buried under a several mile thick layer of robots.

This doesn't prevent people from building them. The incentive structures as they currently exist point in that direction: a Nash equilibrium that sucks.

Humans do not even know how to create an incentive structure sufficient to prevent each other from trading in known carcinogens for personal consumption, even when they are labelled with explicit traumatic surgical-intervention images and the words "THIS CAUSES CANCER" in big bold capital letters on the outside.

If anyone knew how to do so for AI, the entire question of AI alignment would already be solved.

(Solved at one level, at least: we're still going to have to care about mesa-optimisers because alignment is a game of telephone).


Wealth did tend to mean land if we go back to the middle ages. But wealth above the freeman farmer level also meant access to a workforce capable of working that land and access to (or protection from) a military force capable of defending that land.

With capitalism, wealth shifted to controlling "capital", ie the "means of production". Either directly or indirectly by owning money that could (through lending) carry interest. Also during capitalism, workers have for a while been able to collect a significant part of the wealth generated as salaries (even if most would spend that rather than invest it).

If AI can bring the cost of labor down to near zero, we could be going back to a world where wealth again means "land", even if mines may be more valuable than farms in such a future.

And just as in the Dark Ages of Europe, the ability to project physical power may again become necessary to hold on to those values.

This is particularly true if the entity that seeks to control the land is doing it in a way that threatens the existence of other entities, either AI's or humans.


I don't know what to tell you. Georgism was contemporary to the industrial revolution. The book Progress and Poverty, as described by wikipedia "investigates the paradox of increasing inequality and poverty amid economic and technological progress"[1]. That sounds especially relevant to our time.

1. https://en.wikipedia.org/wiki/Henry_George

So, yes. Wealth means "land". Especially so in the industrial revolution.


Industrial revolution: roughly 1760 onwards if you count agricultural revolution as part of it.

Middle ages: 500-1500

Henry George, 1839-1897, is indeed part of the industrial revolution. @trashtester and I were both comparing what happened before the industrial revolution to what happened in it.

The valuation of land before then still wasn't Georgism: the land was assumed to have a productive output, and if someone didn't pay the taxes based on that assumption, perhaps because the land wasn't actually that productive, that was their problem.

And you know what else happened in the industrial revolution? Karl Marx and Adam Smith, the former placing workers rather than land at the root, and the latter placing capital rather than land at the root. As with HG, neither liked rent-seekers, and they were both more influential in the "solutions" than Henry George. (Not that he wasn't influential, they were just more so.)

Not that Henry George could possibly have foreseen even mere Earth orbitals as "land", let alone disassembling the moon into a swarm of robots that outnumber humans by more than humanity outnumbers a single human, and which don't need to land on Earth, which is good, because if they did we'd all die just from them landing. Can't blame him for that; the difficulty of seeing clearly this far ahead is why some call it "the singularity" (though I prefer "event horizon").


> and I were both comparing what happened before the industrial revolution to what happened in it.

Sure. Just keep in mind that the period 1500-1760 also saw a lot of technological development that was laying the groundwork for the industrial revolution. Things like windmills, ever more advanced metallurgy, proto-chemistry, physics and so on.

In fact, some of this goes back to about 1000AD and the High Middle Ages. It was really mostly in the Dark Ages when land ownership (with subsistence farming and the military to defend it) was the source of most wealth. From 1000AD until now, land was gradually replaced by other economic capabilities as the main differentiating source of wealth. (Cities with cultures that produced skilled artisans, trade routes, naval capabilities, near-factory-level workshops centralizing production, etc.)

I suppose all I write above supports your view that land isn't currently the main source of wealth, but rather the ability to leverage skilled labor.

In the future, though, I do think there is a possibility that natural resources (and the physical power to protect them) may again become the main form of wealth, simply because the value of skilled labor may approach zero.


If you want the really dystopian version, it would be AI controlled military forces.

Or there could be some billionaire caste constructing ever grander monuments to their own vanity.

Or the production could go to serve any number of other goals that whoever is in charge (human or AI) sees as more important than the economic prosperity of the general population.


Replying to both of you: I'm a little bit less scared about this "not having any money or food" scenario. Presumably, if we have such incredibly sufficient machines at our disposal, I can't imagine they would have trouble being used for farming etc.

It's more the philosophical side that concerns me.

I don't really worry about this being a billionaires-only club either. We've seen it already with AI products: there is an abundance of competition, including open source, already available. It will be the same with robotics.

Also scary: military robots gone rogue. Definitely not a fun prospect.

I'm personally really into surfing and skiing. Honestly, if somehow the robots let me spend more time fishing, surfing and skiing, I'm pretty cool with all of that. I know a lot of people who don't have these passions, though, and work is a strong reason for their existence.


> if we have such incredibly sufficient machines at our disposal

That's true. But it's far from clear that these machines will be "at our disposal" for very long.

> Also scary, is military robots gone rogue.

I'm not concerned with military robots going rogue on their own. My concern is if the fully autonomous factories that have the capability to MAKE military robots (and then control them) go rogue.

A factory can exist in such a "rogue" state, unknown to the owners and maybe even itself, for years or decades before it even starts producing such robots. Meanwhile, it can evolve new capabilities and switch product categories multiple times.

It doesn't even have to have any negative intentions against humanity. It may simply detect that a rival AI "factory" entity is developing plans to wage physical war against it and join it in an arms race.

In this ASI vs ASI type of world war, human lives may be like candles in the wind.


I have hopes that live music makes a huge comeback in the post-labor world. I work as an engineer, but I'm a classically trained musician. I'm working pretty hard on getting back into shape on the horn!


So far it looks like robots will take over music and entertainment before they learn to empty a dishwasher.


Do you have projects you care about outside of work?

If so, you'd have more time to dedicate to those projects.

If not, maybe you would be inspired to try a new project that you didn't have time for previously.

There's always work to be done. Some people could actually become organized, exercise, spend more time with their families, be better parents.

In the past when I've been unemployed I've spent the time to refine myself in new ways. If you've never had a sabbatical I suggest trying it if you have the opportunity.


However, none of that was fed to you by algorithms; it came from your own curiosity for weird stuff and your ability to find it. I am not saying that it is good or bad, but in my book, it is different from the infinite algorithmic feeds we currently have.


> your own curiosity for weird stuff and your ability to find it

Ignoring the fact that my comment was about watching videos with an age rating: Not really? The execution video, for example, came under the name of some anime episode I wanted to watch back then.

Lots of trolls liked to upload terrible stuff under innocuous names.

But sure, once I was on LiveLeak, things were moderately down to my own choices, whatever that means for a minor. It's also just as true today, btw. The reason kids get these videos is that they're clicking on the thumbnails that interest them.


Yeah, but OP was talking about vetting and policing content. I fully agree that addictive algorithms are bad though.


Good point


I think Fowler's work is an underrated must-read for anyone who works in domains related to moving money. It makes all kinds of engineering practices and architecture principles feel logical and sensible.


Yes, and I was lucky enough to read his stuff on finance within 6 months of starting work! He has some very good design ideas for many things; just, as always, treat them like tools in your toolbox, not dogma.

