
Personal belief, but robots coming for your jobs is not a valid argument against robots. If robots can do a job better and/or faster, they should be the ones doing the jobs. Specialization is how we got to the future.

So the problem isn't robots, it's the structure of how we humans rely on jobs for income. I don't necessarily feel like it's the AI company's problem to fix either.

This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.



> Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

-- Frank Herbert, Dune

The "government" is just the set of people who hold power over others.

Those who will own machines that can match humans will have unprecedented power over others. In effect they'll grow more and more to be the government.

Even now, companies hold more and more of the power over others and are more part of the government than ever before.

So it confuses me when you say it's what the government is for? Who would that be? If we pretend it would still be a democracy then I guess you're saying it's everyone's problem?

So here we are, let's discuss the solution and vote for it?


Voting for it has become really difficult in countries with First Past The Post voting systems, where the only parties that could win are comprehensively captured by the elite.


>The "government" is just the set of people who hold power over others.

Often, yes, but in a more functional society it would be the mechanism we collectively use to prevent a few people from amassing excessive wealth and power.


Where did this ever work out? It’s extremely common that policies are universally hated by the people, yet the people in power do it anyway.


The Nordic countries are pretty ok functioning democracies, with many smaller political parties. And pretty high taxes

(What do you refer to with "this"?)


Switzerland.

America isn't really a democracy; it's a plutocracy.


Just for example. When the elite drives all their opponents into poverty and sends them to rot in prison for the slightest outcry, that's anything but democracy.

https://www.nbcnews.com/nbc-out/out-news/swiss-lgbtq-groups-...


The way real democracies punish bad people is really upsetting to bad people.


"Soral was convicted repeatedly in France and sentenced to jail time in 2019 for denying the Holocaust, which is a crime in France." Couldn't have happened to a nicer guy.


That's the paradox of tolerance for you; we must be intolerant of those that are intolerant... but you probably already know that Karl Popper theorized that [0]. Just looking around globally, we seem to be at that inflection point where this isn't mere hypothesis, but theory or law:

"Popper posited that if intolerant ideologies are allowed unchecked expression, they could exploit open society values to erode or destroy tolerance itself through authoritarian or oppressive practices."

[0]: https://en.wikipedia.org/wiki/Paradox_of_tolerance


> That's the paradox of tolerance for you

That's not a paradox of tolerance, it is the anti-democratic practice of fascism.

> they could exploit open society values to erode or destroy tolerance itself through authoritarian or oppressive practices.

This is exactly my point: an emerging fascist government through authoritarian or oppressive practices destroys tolerance by silencing people for any words that go against their agenda. There is no paradox here.


That's just one particular set of preferences. Different people have different priorities.


Those two things don’t sound mutually exclusive to me.


a democracy would be neat; we have a representative democracy here, so I can only vote for one of two candidates with a plausible chance of being elected, neither of who(m, if you must) have a coherent policy on AI or general disbursement of product, and even if they did, would be unable to convince the existing power structures in legislature to do something bold. probably better for mental health to accept a lot of these things (progression of AI or regulation of it, healthcare, etc) as variables in the environment we just live with, and focus on local/personal things.

we do actually have real democracy in this state, where we have binding referendums, but legislature is able to act faster than we're allowed to, to work around and nullify the policy we vote for. -so voting is fine; nothing wrong with it; but I guess I just worry, oftentimes, people get too involved in it and attached to movements which can accomplish something one day only for it to be reversed by the end of the decade. feels like the two sides are getting angrier and angrier, spinning their wheels in dysfunctional politics, and we can't have a functional government in this environment; one side guts government to replace with loyalists, then the other guts it again in a few years to replace the partisans, to replace with their own partisans. meanwhile, national debt just keeps climbing as people swarm into gold.

my compost piles, though -- not directly, but I can eat that; I can feed people with that. you know, you want to solve hunger -- you can contribute directly to food pantries. it's more work than voting, but something actually happens. -and almost all the regulation government cares about relates to capitalism; they don't care about my carrots because my carrots don't engage in capitalism. -and for some people in some circumstances, it doesn't take too much engagement with capitalism to be able to get $100k or whatever you need for a plot of land with electricity in a rural area if you plan out for it.

Herbert, as an aside, expressed a kind of appreciation for Nixon. His son mentioned this in a foreword to some edition of a Dune book I read. He was glad the corruption was exposed and so blatant, because now, surely, voters would see how bad things had become and would not let it happen again. Optimistic guy.


Butlerian Jihad fucking when? I'm ready.


We need to develop mentats first. Not sure if FH discussed the timeline of the tribe of mentats, but it seems a Butlerian Jihad would require sufficient bio brain power to counter the side using cognitive mechanisms.


> robots coming for your jobs is not a valid argument against robots.

Taking work away from people is practically the definition of technology. We invent things to reduce the effort needed to do things. Eliminating work is a good thing, that’s why inventing things is so popular!

What ends up happening is the amount of work remains relatively constant, meaning we get more done for the same amount of effort performed by the same amount of people doing the same amount of jobs. That’s why standards of living have been rising for the past few millennia instead of everybody being out of work. We took work away from humans with technology, we then used that effort saved to get more done.


I agree with most everything you said. The problem has always been the short-term job loss, particularly today, when society as a whole has the resources for safety nets but hasn't implemented them.

Anger at companies who hold power in multiple places, and use it to worsen this situation for people, is valid anger.


There's another problem with who gets to capture all of the resulting wealth from the higher tech-assisted productivity.


> The problem has always been the short-term job loss

Does anyone have any idea of the new jobs that will be created to replace the ones that are being lost? If it's not possible to at least foresee it, then it's not likely to happen. In which case the job loss will be long-term not short-term.


Past performance is not indicative of future results.

There is zero indication that there will be new jobs, new work. Just because there was lots of low-hanging fruit historically does not mean we will stumble into some new job creators now. Waving away concerns with 'jobs have always magically appeared when needed' is nonsense and a non-response to their concerns.


> What ends up happening is the amount of work remains relatively constant

That's a pretty hard bet against AGI becoming general. If the promises of many technologists come to pass, humans remaining in charge of any work (including which work should be done) would be a waste of resources.

Hopefully the AGI will remember to leave food in the cat bowls.


> What ends up happening is the amount of work remains relatively constant,

The entire promise of AI is to negate that statement. So if AI is truly successful, then that will no longer be true.


This time actually is different, though.

If everything that a human can do, a robot can do better and cheaper, then humans are completely shut out of the production function. Humans have a minimum level of consumption that they need to stay alive whether or not they earn a wage; robots do not.

Since most humans live off wages which they get from work, they are then shut out of life. The only humans left alive are those who fund their consumption from capital rents.


If humans who don't own robots are not producing and therefore not earning, then they're also not buying. Who are the robot owners selling their products to?


Other robot owners?

See: "build me a bigger yacht", building moon bases, researching immortality medicine, and building higher walls and better killbots to manage any peasant attacks


Why would other robot owners need to buy anything? They can just get their AIs to figure out how to make the things they're buying so they can not buy them anymore.


I think it's likely that there'll still be specialization - some robots/AI will be better at some tasks than others. On top of that, physical/natural resources may be owned by different people. So maybe you need tungsten and I need someone to build a space elevator, so we trade.


Robots consume resources


Hi, let me be the first to welcome you into our millennium. Things have changed significantly since the '70s you seem to be used to: https://anticap.wordpress.com/wp-content/uploads/2010/11/fig...


[flagged]


We've replaced human jobs in the past not just made them more efficient. Horses (and all the jobs to do with them) were completely displaced by cars. Those jobs aren't more efficient, they're gone. Similar with many other jobs during the industrial revolution.

This is not a zero sum game. For an economy to exist we need consumers. For consumers to exist we need people to have jobs and be paid. So the equilibrium is that there will be some new jobs somewhere, not done by robots, that will pay people enough to consume the (better and cheaper) goods made by those robots. Or we'll just have a lot of leisure time and get paid by the government. Or (like some other discussion thread) we'll all be wiped out or slaves in the salt mines if the elites can just sustain/improve without us and are able to enforce it. Either way, it's not the scenario where we're out of jobs sitting at home.


> This is not a zero sum game. For an economy to exist we need consumers.

I think that's really unimaginative and not thinking about it right. If you control the basic resources and own the tools needed to convert those into whatever you want, why does the "economy" even matter? If you can get anything you want without needing anything from the other 95% of people, having "consumers" in the sense you're thinking doesn't matter any more.


There have always been some people powerful or rich enough to control the resources and tools needed for whatever they want yet the equilibrium has never been what you describe.


I think your percentage might be off by a decimal, make it 0.5% and broligarchs. Maybe that's why they've gone crazy with spending trillions in the AI bubble, trying to position themselves as far up the chain as possible. 99.5% of humans will be wasted resource sinks to them and seen as "unnecessary"


The problem is that AI and advanced robotics (and matter synthesis and all that future stuff) must come with a post scarcity mindset. Maybe that mindset needs to happen before.

Either way, without that social pattern, I'm afraid all this does is enshrine a type of futuristic serfdom that is completely insurmountable.


> post scarcity mindset

A total shift of human mentality. Humans have shown time and again there is only one way we ever get there. A long winding road paved with bodies.


I searched this at some point, and there is enough food to end world hunger once over, maybe twice (I know the "once" for a fact; I remember a Reddit article that said twice, but I'm not pulling up that source, you get it).

> Less than 5% of the US’ military budget could end world hunger for a year. [1]

My friend, we live in 1984, where the main character discovers that Eurasia and the other powers have enough food and resources to help everybody out, but they constantly fight and destroy the surplus just so that people are easy to manipulate in times of war.

I discovered this 5% statistic just now and it's fucking wild. The US itself could end world hunger 20 times over but spends it all on the military. I don't know if it's wrong or not, but this is fucking 1984. Big Brother has happened. We just don't know it yet because the propaganda works on us as well; people in 1984 didn't know they were living in 1984.

And neither do we.

Have a nice day :)

Sources [1]: https://www.globalcitizen.org/en/content/hunger-global-citiz...
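For what it's worth, the "less than 5%" claim and the "20 times over" claim are at least internally consistent with each other. A back-of-the-envelope check, using assumed round numbers (the $800B budget and $40B hunger-cost figures here are rough placeholders I picked for illustration, not the article's exact numbers):

```python
# Rough consistency check of the two claims, using assumed round numbers.
us_military_budget = 800e9    # approximate annual US military budget, USD (assumed)
end_hunger_one_year = 40e9    # rough annual cost to end world hunger, USD (assumed)

share = end_hunger_one_year / us_military_budget
print(f"{share:.0%}")                             # 5% -> "less than 5% of the budget"
print(us_military_budget / end_hunger_one_year)   # 20.0 -> "20 times over"
```

Any pair of figures with roughly a 20:1 ratio reproduces both claims, which is why the two statistics travel together.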


Yeah. Desire and distribution problems.

Despite the cynical, pessimistic tone of my comment, I don't think it's necessarily that humans are bad. Humans do bad things, but we still do better than any other lifeform we know of at helping each other at our own expense.

I don't think there is a good answer; the same things that kept us alive as a species are what are now holding us back from becoming something better. I think humans will get there, but like I said: mountains of unnecessary, preventable deaths first.


That's valid, but I can still hope for better and dread that result.

I genuinely believe we'll see technological world war 3 in my child's lifetime. And I'm not a super supporter of that.


I think it's going to come with an "eradicate the wastrel" mindset.


We're already in a post-scarcity world. The only reason there is still scarcity is entirely because it is created by humans to acquire more wealth.

You think that's going to change just because many more people find themselves without?


Oh, nah, we're definitely not post-scarcity. If the whole world's population used resources at the rate that the US population does, we'd need 4.9 Earths.

So: getting the whole world up to US standards of living is going to take a lot of changes to lifestyle or technological advances. Both of these are scarcity issues.
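The "Earths needed" figure is just per-capita footprint divided by per-capita biocapacity. A sketch with approximate, Global-Footprint-Network-style numbers (both inputs here are assumptions, chosen only to show the shape of the calculation):

```python
# Earths needed if everyone consumed like the US:
#   (US per-capita ecological footprint) / (Earth's per-capita biocapacity)
us_footprint = 8.0        # global hectares per person, approximate assumption
earth_biocapacity = 1.6   # global hectares per person, approximate assumption

earths_needed = us_footprint / earth_biocapacity
print(round(earths_needed, 1))  # roughly 5, in the ballpark of the 4.9 figure
```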


> If the whole world's population used resources at the rate that the US population does

Why would the US's wasteful use of resources be the benchmark for "sufficiency"?

There's enough food to feed the world, and enough raw materials to produce the things we need. The problem is not scarcity of resources, it's distribution of resources.


i think we are post scarcity in the sense that there is enough food, medical supplies, housing and smartphones for the whole world, but 90% of the resources go to the "west" or are burned in place


What you end up with is a dozen people owning all the wealth and everyone else owning nothing, resulting in the robots not doing anything because no one is buying anything, resulting in a complete collapse of the economic system the world uses to operate. Mass riots, hunger wars, political upheaval, world war 3. Nuke the opposition before they nuke you.


That’s one scenario, but there are others. There are lots of open-weight models. Why wouldn’t ownership of AI end up being widely distributed? Maybe it's more like solar panels than nuclear power plants?


In terms of quality of life, much/most of the value of intelligence is in how it lets you affect the physical world. For most knowledge workers, that takes the form of using intelligence to increase how productively some physical asset can be exploited. The owner of the asset gives some of the money/surplus earned to the knowledge worker, and they can use the money to effect change in the physical world by paying for food, labor, shelter, etc.

If the physical asset owner can replace me with a brain in a jar, it doesn't really help me that I have my own brain in a jar. It can't think food/shelter into existence for me.

If AI gets to the point where human knowledge is obsolete, and if politics don't shift to protect the former workers, I don't think widespread availability of AI is saving those who don't have control over substantial physical assets.


What are the barriers to entry? Seems like there are lots of AI startups out there?

There is a rush to build data centers, so it seems that hardware is a bottleneck, and maybe that will remain the trend, but another scenario is that it stops abruptly when capacity catches up? I'm wondering why this doesn't become a race to the bottom?


Is the suggestion that AGI (or even current AI) lowers the barrier of entry to making a company so much that regular people can just create a company in order to make money (and then buy food/shelter)? If so, I think there's a lot of problems with that:

1) It doesn't solve the problem of obtaining physical capital. So you're basically limited to just software companies.

2) If the barrier to entry to creating a software product that can form the basis of a company is so low that a single person can do it, why would other companies (the ones with money and physical capital) buy your product instead of just telling their GPT-N to create them the same software?

3) Every other newly-unemployed person is going to have the same idea. Having everyone be a solo-founder of a software company really doesn't seem viable, even if we grant that it could be viable for a single person in a world where everyone has a GPT-N that can easily replicate the company's software.

On a side note, one niche where I think a relatively small number of AI-enabled solo founders will do exceptionally well is in video games. How well a video game will do depends a lot on how fun it is to humans and the taste of the designer. I'm suspicious that AIs will have good taste in video game design and even if they do I think it would be tough for them to evaluate how fun mechanics would be for a person.


My potato RTX vs your supercomputer cluster and circular chipfab/ai-training economy. Challenge:

“Be competitive in the market place.”

Go.


Your potato RTX that uses a finite amount of power, is already paid for, and does things that you know are useful versus your supercomputer cluster and circular funding economy that uses infinite power, is 10x more overleveraged than the dot-com bubble, and does things that you think might be useful someday. Challenge:

“Don’t collapse the global economy.”

:)


Excellent. Completely fair.


Can't find it now, but I read an article about someone adding AI help to a large appliance. Can't assume WiFi is set up, so it has to work offline. Frontier model not required.

I don't think we will be building these things ourselves, but I think there will still be products you can just buy and then they're yours.

It would be the opposite of the "Internet of things" trend though.


What market? If this shocks employment numbers what can happen other than market collapse?


I don't think it's a short term thing, but in the short term, yes you're exactly right.

But what are the minimum inputs necessary to build a self-sustaining robotic workforce of machines that can (1) produce more power, (2) produce more robots, and (3) produce food? The specifics of what exactly is necessary--which inputs, which production goals--are debatable. But imagine some point where a billionaire like Elon has the minimum covered to keep a mini SpaceX running, or a mini Optimus factory running, or a mini SolarCity running.

At this point, it's perfectly acceptable to crash the economy, and leave them to their own devices. If they survive, fine. If they don't, also fine. The minimum kernel necessary to let the best of mankind march off and explore the solar system is secure.

Obviously, this is an extreme, and the whole trajectory is differential. But in general, if I were a billionaire, I'd be thinking "8 billion people is a whole lot of mouths to feed, and a whole lot of carbon footprint to worry about. Is 8 billion people (most of whom lack a solid education) a luxury liability?"

I really just don't believe that most people are going to make it to "the singularity" if there even is such a thing. Just more of the same of humanity: barbaric bullshit, one group of humans trying to control another group of humans.


This is the only outcome any economic model concludes. Complete market collapse. It will scream before it collapses (meaning it will shoot to the moon, then completely collapse). Way worse than the Great Depression because instead of 26% unemployment, it will be 80%.


the ai isnt the useful thing, the stuff you can do with it is.

if your job is replaced by ai, you having ai at home doesnt change whether you're making money or not.

the capital owner gets their capital to work more effectively, and you without capital don't get that benefit


No take backs when they alienate 1-3 generations of workers though


Because the open models are not going to make as much money?


If we're in fantasy land about AI, why do we keep thinking anyone will actually _own_ AI? Human hubris alone cannot contain a super intelligent AI.


What makes you assume the AI companies actually want to create a superintelligence they can't control? Altman has stated as much. Musk definitely wants to remain in power.


Life, uh, finds a way


Life in general, maybe

Species go extinct.


Computers are not alive


Not yet, I agree, but who is to say they couldn't?

Limiting life to cell based biology is a somewhat lousy definition by the only example we know. I prefer the generalized definition in "What is Life?" by Erwin Schrödinger which currently draws the same line (at cellular biology) but could accommodate other forms of life too.


-- Ian Malcolm


I sometimes wonder about what our governments would do if one of the businesses in their jurisdictions were to achieve AGI or other such destabilizing technology. If it were truly disruptive, why would these governments respect the ownership of such property and not seize it - toward whatever end authority desires. These businesses have little defense against that and simply trust that government will protect their operations. Their only defense is lobbying.


As we have almost seen, they will do absolutely nothing because they are afraid of losing to competing countries.


AGI is an endgame scenario. That is "winning". If a business wins it, the government may not remain subservient to that business, no matter what free-market conditions it had preserved beforehand, as long as it still has the power to act.


Economies of scale have been such a huge influence on the last ~300 years of industrial change. In 1725, people sat at home hand-crafting things to sell to neighbors. In 1825, capitalists were able to open factories that hired people. By 1925, those products were built in a massive factory that employed the entire town and were shipped all over the country on railroads. By 2025, factories might be partially automated while hiring tens of thousands of people, cost billions to build, and have their products distributed globally. The same trend applies to knowledge work as well, despite the rise of personal computing.

Companies are spending hundreds of millions of dollars on training AI models, why wouldn’t they expect to own the reward from that investment? These models are best run on $100k+ fleets of power hungry, interconnected GPUs, just like factory equipment vs a hand loom.

Open weight models are a political and marketing tool. They’re not being opened up out of kindness, or because “data wants to be free”. AI firms open models to try and destabilize American companies by “dumping”, and AI firms open models as a way to incentivize companies who don’t like closed-source models to buy their hosted services.


I think a lot of people will be okay with paying $20 a month if they're getting value out of it, but it seems like you could just buy an AI subscription from someone else if you're dissatisfied or it's a bit cheaper?

This is not like cell service or your home ISP; there are more choices. Not seeing where the lock-in comes from.


> What you end up with is a dozen people owning all the wealth and everyone else owning nothing,

That may be where the USA ends up. We Australians (and probably a few others, like the Swiss) have gone to some effort to ensure we don't end up there: https://www.abc.net.au/listen/programs/boyerlectures/1058675...


Robots will do stuff for the rich people's ecosystem.

As for the rest, you know what's going to happen.


If robots can do all industrial labor, including making duplicate robots, keeping robots the exclusive property of a few rich people is like trying to prevent poor people from copying Hollywood movies. Most of the world doesn't live under the laws of the Anglosphere. The BRICS won't care about American laws regarding robots and AI if it proves more advantageous to just clone the technology without regard for rights/payment.


I don't see how owning a robot helps me with obtaining the essentials of life in this scenario. There's no reason for a corporation to hire my robot if it has its own robots and can make/power them more efficiently with economy of scale. I can't rent it out to other people if they also have their own robots. If I already own a farm/house (and solar panels to recharge the robots) I guess it can keep me alive. But for most people a robot isn't going to be able to manufacture food and shelter for them out of nothing.


That is the system we have today, directionally. AI is an opportunity to accelerate it, but it is also an opportunity to do the opposite.


Then government comes in and takes over. In the end we will end up with communism. Communism couldn't compete with the free market but in a world of six companies it can.


> What you end up with is a dozen people owning all the wealth and everyone else owning nothing

Only if the socialists win. Capitalism operates on a completely different principle: people CREATE wealth and own everything they have created. Therefore, AI cannot reduce their wealth in any way, because AI does not impair people's ability to create wealth.


“Robots coming for your jobs” is a valid argument against robots even if they can do those jobs better and faster, under two assumptions: 1) humans benefit from having jobs and 2) human benefit is the end goal.

Both are fairly uncontroversial: many humans not only benefit from jobs but in fact often depend on jobs for their livelihoods, and (2) should be self-evident.

This can change if the socioeconomic system is restructured quickly and substantially enough to make humans not depend on being compensated for work that is now being done by robots (not only financially but also psychologically—feeling fulfilled—socially, etc.), but I don’t see that happening.


This is valid only insofar as "human benefit" is localized to the human doing the job. I'm a cancer researcher. Obviously, my job is of value to me because it pays my bills (and yes, I do get satisfaction from it in other ways). But if an AI can do cancer research better than me, then the human benefit (to every human except perhaps me) favors the AI over me.

But a lot of jobs aren't like that. I doubt many people who work in, say, public relations, really think their job has value other than paying their bills. They can't take solace in the fact that the AI can write press releases deflecting the blame for the massive oil spill that their former employer caused.


> I'm a cancer researcher. Obviously, my job is of value to me because it pays my bills (and yes, I do get satisfaction from it in other ways). But if an AI can do cancer research better than me, then the human benefit (to every human except perhaps me) favors the AI over me.

I’ll note that I didn’t mention “AI”, I was addressing robots vs. jobs, but sure.

Let me straw man myself and say up front that it would be simplistic to say that if something hurts one person today then it is bad, even if it benefits a million tomorrow. However, does that mean that if the death of N people could cause N+1 people to be saved at some indefinite point in the future, we should jump on the opportunity?

Utilitarian math is not the answer. We don’t have to go far for examples of tragic loss of life and atrocities that were caused by people following utilitarian objectives, so I am much more wary of this logic than its inverse. We probably should neither let everybody starve while we are fighting cancer nor stop studying cancer, even if we could feed millions of people in dire need with that money.

With that in mind: feeling fulfilled and needed is important, being able to pay bills is important, etc., no matter who you are. It is probably unacceptable to reduce human life to “be the most efficient cog in the system or GTFO”.

> I doubt many people who work in, say, public relations, really think their job has value other than paying their bills.

Does your cancer research institution have no public relations department? Should there be no public relations departments in any of them? Would it help cancer research?

--

Tangentially, the Wikipedia article on inflammation makes two claims: 1) inflammation can be caused by stress—I imagine via lack of sleep, bad habits, eating disorders, etc.—and 2) cancer is one of the non-immune diseases with causal origins in inflammatory processes. Now, there’s many things that could cause a person to stress, but I imagine losing your job and struggling to make ends meet, with a promise that it helps humanity at some point, is definitely high up there (especially if you have a family). I jest, but only in part.


> 1) humans benefit from having jobs and 2) human benefit is the end goal.

There's no law of nature saying that a human must work 40 hours per week or starve.

The current dependence on work is a consequence, not a goal.


Having to pay bills is part of current socioeconomic reality. Deriving satisfaction from being a useful member of society and social ties is part of human psychological nature.

Working on replacing human jobs with robots has two concrete outcomes, 1) impacting people’s ability to have jobs that pay bills and often create meaning in their lives and 2) concentrating wealth in technological elites (who run those robots), and a theoretical outcome of 3) helping humanity in some way.

If we are acting in good will, we should dedicate effort to addressing the concrete impact (1) at least as much as to working on (3). Most of us are or are adjacent to tech elites and benefit from (2), which means we are individually incentivized to not care about (1), so it requires reiterating every now and then. If we are purely thinking of (3), we are not much better than dictators and their henchmen that caused famines and other catastrophes justifying it with some sort of long-term utilitarian calculus.


> Having to pay bills is part of current socioeconomic reality.

This is a man-made reality though, and we have as much power to change it as we did to create it.

> Deriving satisfaction from being a useful member of society and social ties is part of human psychological nature.

I can't get behind this idea that "work" is the only way that a person can feel like a useful member of society. This is just the result of our (man-made) programming that makes it seem like the only way. We've essentially been brainwashed into accepting the backwards idea that we need work even if work doesn't need us.

> If we are acting in good will, we should dedicate effort to addressing the concrete impact

I agree, but i don't think the right answer is to stop the tech and keep digging holes and filling them in just to get a paycheck. The solution is to fix the humans. Unfortunately our government is trash so yeah we're probably screwed unless we first figure out how to get govt to actually represent the people. Andrew Yang is the only politician-adjacent I've seen take this conversation seriously.


> This is a man-made reality though, and we have as much power to change it as we did to create it.

Does it really need to be stated that “some technology would not be harmful if only the reality in which the technology was used was different”? The challenge is that reality is what it is, and even if we have a degree of control over some aspects of reality we are not at all trying to change it.

If we were the people in charge of job-subsuming robots, and we acted in good faith and common interest, we would be dedicating at least as much resources to changing that reality (in a peaceful, non-violent way) as to introducing a technology that harms a lot of people (even if we get paid for working on that technology).

> I can't get behind this idea that "work" is the only way that a person can feel like a useful member of society. This is just the result of our (man-made) programming that makes it seem like the only way. We've essentially been brainwashed into accepting the backwards idea that we need work even if work doesn't need us.

Even if what you said were true, this is the current reality, and for robots taking jobs to not be harmful it would have to stop being the reality. Are we dedicating resources to making that not a reality?

However, I don’t even believe this is true. Humans are inherently social. Self-awareness requires other people to exist (“self” cannot be defined without “other”); you can’t become a human without other humans because you need to be surrounded by others for something that we call “consciousness” to develop in you. We are much more ants in an anthill than solitary individuals occasionally in touch with others that we like to imagine ourselves as. For as long as humans existed, we depended on each other, and being in the void, unneeded, is subconsciously a death sentence.

There are the lucky few who find themselves needed by others without much effort, but work is a mechanism that makes the rest of us feel needed. Sure, some work isn’t the best for that, but a lot of work is, be it cancer research or opening doors for people entering a shopping mall.

> I agree, but i don't think the right answer is to stop the tech and keep digging holes and filling them in just to get a paycheck.

False dichotomy. The tech does not need to be stopped and has plenty of very useful applications outside of digging holes (e.g., cancer research mentioned by the other commenter). However, if you have XX% of population digging holes, firing them without any concern is an absolutely bad move, regardless of how good your hole-digging robots are. If you do that, all you are doing is a wealth transfer from people digging holes to people running hole-digging robots. (Remember, people digging holes also participated in the economy, paying their local butcher and baker who in turn could pay their bills, etc.)

> The solution is to fix the humans. Unfortunately our government is trash so yeah we're probably screwed unless we first figure out how to get govt to actually represent the people.

We can work on robots replacing everybody’s jobs: robotics is very challenging, but we found a way. However, to work on making a reality where everybody’s jobs are taken by robots a tenable reality? No way sir, it is way too challenging for our small brains.


> Does it really need to be stated that “some technology would not be harmful if only the reality in which the technology was used was different”?

If someone is arguing to limit that technology, then yes.

> but work is a mechanism that makes the rest of us feel needed

I'm guessing that you say that because you enjoy your job and it's part of your identity, which is great. But that's a luxury that not everyone has. Some people are ashamed of their jobs, or just simply hate them. But they're stuck because they can't find anything better.

I'm also sure there are tons of people in well-paying BS jobs who are glad to have a paycheck but absolutely do not feel their jobs are necessary or fulfilling. Upon realizing that their job is unnecessary, they are also left with the feeling that they are in fact wasting their time on this earth. So they're also stuck.

And it is surely possible to be social and to find fulfillment outside of work. I would even say that work is one of the lowest forms of socializing, one step up from sharing an elevator with a stranger and commenting on the weather. That's why most of the time (not all the time), when someone moves on to another job they lose contact with their old colleagues. Because they were put together by chance, and while they made the best of being forced to spend the majority of their waking lives together, they missed out on the opportunity to develop real lasting connections with people.

> However, to work on making a reality where everybody’s jobs are taken by robots a tenable reality? No way sir, it is way too challenging for our small brains.

It's happening, so we had better figure out how to wrap our small brains around it.


> If someone is arguing to limit that technology, then yes.

This was a rhetorical question to highlight the ridiculousness of using this statement as justification for anything. You can state it however many times you want, it does not change reality.

Just as well, we would not need legs if only the reality was that we live in the water. Turns out, the reality is that we live on land and we need legs. The right way is to adapt humans to living in water first, after which legs would have gone away through evolution. The wrong way is to cut off people’s legs[0].

> I'm guessing that you say that because you enjoy your job

Let’s stick to the point, not to what you imagine about me.

If somebody hates their job, taking away their source of income is extremely harmful (clearly they would have quit already if they didn’t absolutely need money). If somebody loves their job, taking away what gives their life meaning is extremely harmful.

> It's happening, so we had better figure out how to wrap our small brains around it.

Not at all. For it to start happening, those who are in charge of robot job replacement would have to stop plugging their ears and shouting whenever someone talks about the issues it causes.

I feel like we are talking past each other, I am done trying to rephrase my point.

[0] To make things obvious… Making sure jobs are not vital for human existence is evolving humans to live in water. Replacing jobs with robots is cutting off people’s legs.


> Not at all

What I'm saying is that the automation is happening, so we need to deal with that reality.

> stop plugging their ears and shouting whenever someone talks about the issues it causes

Exactly.


> What I'm saying is that the automation is happening, so we need to deal with that reality.

This is not a force of nature.

Perhaps what you mean to say is that certain people, heavily featured on this forum, are working on something that causes harm to many, which they are not concerned about because they get paid well, or at best because of some long-term utilitarian math—alongside those complicit in it by investing in the effort, trying to make it seem as if it’s “natural” and “inevitable” and “normal”, and so on.

If that’s what you mean—perhaps that’s true, and that’s exactly why this thread is happening.

It is man-made, and unlike the reality in which we’ve been living for thousands of years and to which we are well adapted, this change is being forced by a wealthy minority onto the rest of humanity in the span of decades. Luckily it is far from being “reality” yet and it can well be stopped.


> Perhaps what you mean to say is that certain people, heavily featured on this forum, are working on something that causes harm to many

No that's not at all what I mean to say.

I see technological advancement as inevitable and good. A robot that can do jobs so that humans don't have to is a good thing. It's progress. People working towards progress aren't evil. I assume that they, like me, believe that our society can and should evolve with the technology. If it can't, that's our fault, not the fault of the tech. If our government is so embarrassingly bad that it exploits the people that it represents rather than helping them (which I agree it is), well that's also our fault and not the tech's. The government is us. We better get our shit together.

The wealthy already own and control everything. So your status quo goal of a fair society where we all work all day and feel needed and appreciated and have a nice comfortable life is already dead. You're defending a dead body.


> I see technological advancement as inevitible and good.

It’s false, simply because it is a product of human effort and human choice to do this or the other.

> A robot that can do jobs so that humans don't have to is a good thing. It's progress

A “good thing” is what benefits humans. Robots replacing humans at what humans choose to do for their own benefit is decidedly not a good thing—aside from humans who profit from running the robots.

You may have noticed that I am repeating myself[0]. You are yet to show how this benefits humanity in a way that outweighs harm to humans who lose their jobs (especially considering many of them provided, without consent, the data instrumental for the robots to work in the first place). If you are among the people who work on robots, I think you ought to pause and reflect.

> The wealthy already own and control everything.

“They” don’t. A lot of “them” are here, by the way. The wealth gap is high, but to say it’s absolute (100% is owned by the rich and we are all just slaves for them) is simply wrong. We should work towards decreasing the gap, not increasing it.

[0] I am basically reiterating my original comment:

> “robots are coming for your jobs” is a valid argument against robots even if they can do those jobs better and faster, under two assumptions: 1) humans benefit from having jobs and 2) human benefit is the end goal.


> You are yet to show how this benefits humanity in a way that outweighs harm to humans who lose their jobs

I think it's pretty obvious if you think beyond trying to protect the status quo. The benefit is simply that machines do work so humans don't have to. It's no more complicated than that. It's what we humans have always strived to do: to make our lives easier. It's why an electric screwdriver exists.

The fact that making our lives easier has become a problem is the actual problem. We should address that problem instead of trying to protect it.


> The benefit is simply that machines do work so humans don't have to.

Why is it a benefit? Because not having to work is the ultimate ideal? Why would that be the case?

To me it seems like it’s only an ideal for those who wield the robots who do the work and profit from that, not to people who wouldn’t be able to do compensated work if they wanted to. (Ultimately, it’s a means of control: if there are no jobs, the people in power get to decide how to distribute sustenance to jobless population. Rest assured, that population will not dare to bite the hand that feeds them.)

The ideal is to be able to choose to do work you enjoy doing, feel pride in it, get fairly compensated for it. To not be able to do this seems like a strongly negative outcome.

> making our lives easier has become a problem

Is “easier life” the ultimate ideal? Why? There’s many other, more compelling things it could be (e.g., “fulfilled”, “happy”, “meaningful”, “satisfying”) and many of them are not exactly aligned with “easy”.

Even if easier life were your ideal, the precedent has been that automation does not lead to that—we are doing more work (and more challenging, a.k.a. the opposite of “easy”, work) instead[0]. As jobs go away, whoever is still lucky enough to have one gets paid less to do more work (that’s just market forces at work), while a small minority profits and benefits from more accumulated power. Is that what you want to happen? If yes, we don’t have anything to discuss further. If not, you have the power to be part of the change.

[0] https://news.ycombinator.com/item?id=45656916


> Is that what you want to happen? If yes, we don’t have anything to discuss further. If not, you have the power to be part of the change.

I think I've been pretty consistent that I think the change necessary is a social/political one, not a tech one. Whether or not we're capable of this change is another question.


I have not seen a description of what this “target reality” should even look like. It sounds like either you have not thought about what it would be, or you did but you would rather keep it to yourself.

It is not even a question that it would be strongly unethical (like evil addictive social media/crypto scams/online casino times a thousand level of unethical) to proceed on working on job-replacing robots without considering what a tenable no-job reality would look like, or after deciding it is probably not achievable.


I mentioned Andrew Yang way up thread. He proposed a "freedom dividend". Something like that is a start.

https://2020.yang2020.com/what-is-freedom-dividend-faq/


It’s worth (re)watching the 1985 movie Brazil, in particular the character of Harry Tuttle, heating engineer: https://youtu.be/VRfoIyx8KfU

Neither government nor corporations are going to “save us”, simply because of sheer short-termism and incompetence. But the same incompetence will make the coming dystopia ridiculous.


I do wonder if somewhere like China might be better off - they might not have muh freedumb, but their government seems keen to look after the majority and fund things that corporations wouldn't.


I believe that if the elites in China didn't need the populace, they would dispense with them.


You're probably right, human nature probably doesn't change with geography.


"China might be better off"

Uh...


Is it just income that’s the issue? I’d rather say it’s purpose. Even more: What will happen to democracy in a world where 100% of the population are 27/7 consumers?


“What will happen to democracy in a world where 100% of the population are 27/7 consumers?”

…we’ll add three hours to our day?

But seriously, I support what you are saying. This is why the entire consumer system needs to change, because in a world with no jobs it is by definition unsustainable.


Not if it's ends up a literal utopia where robots do all the work and all humans share equally in the benefits, and get to live their lives with a purpose other than toiling for N hours a week to earn M survival tokens, which is what we have today. Good luck coming up with the political will to actually implement that utopia, though.


Smaller cities, human size, humans closer to nature, robots bring stuff from factories by driving. Done


Seeing how the humans behave when an extreme minority is able to control the production and the army…


> What will happen to democracy in a world where 100% of the population are 27/7 consumers?

What does the one have to do with the other?

But even then, currently plenty of people find their fun in creating - when it's not their job. And they struggle with finding the time for that. Sometimes the materials and training and machines for that also. Meanwhile a majority of current jobs involve zero personal creativity or making or creating. Driving or staffing a retail outlet or even most cooking jobs can't really be what you are looking for in your argument?

Is the road to post-scarcity more likely with or without robots?


Gonna just play a little mad libs here with your argument…

Personal belief, but AI coming for your children is not a valid argument against AI. If AI can do a job better and/or faster, they should be the ones doing the parenting. Specialization is how we got to the future. So the problem isn't AI, it's the structure of how we humans rely on parenting for their children. I don't necessarily feel like it's the AI company's problem to fix either. This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

You’re right about one thing within reason… this is what a rational government should be for… if the government was by the people and for the people.

Addendum for emphasis: …and if that government followed the very laws it purports to protect and enforce…


The artist behind replacement.ai chose a very relevant first use case — everyone thinks of AI replacement in terms of labor, but the example in terms of parenting and child rearing, which is arguably the only true reason for humans to exist, is genius.

Procreation and progeny is our only true purpose — and one could make the argument AI would make better parents and teachers. Should we abdicate our sole purpose in the name of efficiency?


If it makes our lives better then why not?

We use tools all the time.


I also agree with this, but I think there is a need to slow the replacement, by a bit, to reduce the short term societal harm and allow society to catch up. Robots can’t do the jobs if society collapses due to unrest.

Progress is great obviously, but progress as fast as possible with no care about the consequences is more motivated by money, not the common good.


What you mean you don’t want to take a Great Leap Forward?


A Great Leap Forward is what you get when you give a few fanatical ideologues too much power. I don't see anything in my history book about robots or AI being connected with that. They seem like different topics altogether.


> give a few fanatical ideologues too much power.

Uh, have you seen the US lately? I think that ship has sailed.


Well, we're certainly taking a Great Leap, so that part checks out at least.


The problem is that the AI companies are most interested in displacing the existing labor force more so than they are interested in developing new uses of AI in areas that humans are inherently bad at. They are more interested in replacing good jobs with AI rather than bad jobs. It's not that machines are doing jobs better, they are just doing them for cheaper and by cutting more corners.

Best summarized in this comic: https://x.com/9mmballpoint/status/1658163045502267428


> This is what government is for

They're busy selling watches whilst people can still afford them thanks to having jobs.


Kind of hard for the government to “prepare society to move forward” when the AI companies and their financiers lobby for conditions that worsen the ability of society to do so.


My reading of history is that human society is able to adjust to small changes like this over long periods of time as young people choose alternate paths looking at what changes are likely on the horizon. Rapid changes frequently lead to destabilization of adults who are unwilling or unable to retrain which then also screws up their next generation who start off on an economic back foot and a worldview of despair and decrepitude.

Not the AI company’s fault per se, but generally the US government does a very poor job of creating a safety net, whether through intent, ineptitude, or indifference.

By the way, attacks were also leveled against Chinese and Japanese California workers who were viewed as stealing the jobs of other “Americans”. So this viewpoint and tradition of behavior is very long-standing in US history and capitalism.


> Personal belief, but robots coming for your jobs is not a valid argument against robots.

Replace the word robot with "automation" or "industrialization" and you have the last 200 years of human history covered.

The Luddites could have won and we would all have $1,500 shirts.

Do you know any lamp lighters? How about a town crier?

We could still all be farming.

Where are all the switch board operators? Where are all the draftsmen?

How many people had programming jobs in 1900? 1950?

We have an amazing ability to "make work for ourselves", and history indicates that we're going to keep doing that regardless of how automated we make society. We also keep traditional "arts" alive... Recording didn't replace live performances, TV/film didn't replace Broadway... Photography didn't replace painting...


The way I think about this is either the job is done in the most efficient way possible or I am asking everyone else to pay more for that product/service (sometimes a worse product/service) just so I can have a job.

E.g. if I was a truck driver and autonomous trucks came along that were 2/3rds the price and reduced truck related deaths by 99% obviously I couldn't, in good faith, argue that the rest of the population should pay more and have higher risk of death even to save my job and thousands of others. Though somehow this is a serious argument in many quarters (and accounts for lots of government spending).


This is what minimum wage deals with to some extent. Governments decide what jobs may exist based on how much a company can pay to do the job.


What about replacing a screenplay writer or actor? Or are only dirty jobs acceptable to automate?


Problem is billionaires have co-opted our government. Their interest is in channeling money from the working class into their hands through rentier capitalism. That is contrary to broadly distributing income.

Rent extraction hurts them in the long run. Because working class income gets absorbed by various forms of rent, they are more expensive to employ. Thus we fail to compete with, say, China, which socializes many costs and invests in productive industry. We are left with a top heavy society that as we can see is already starting to crumble.


The government and all social structures developed because your labour has value and division of labour/specialisation is so effective that it outperforms the alternatives. Cooperation beats violence, iterated prisoners dilemma, etc.

None of this holds if you don't have anything of value to offer, and automation is concentrating power and value, with AI at the extreme end of this. At some point the charade of democracy becomes too annoying to the ones at the top, and you get there faster by trying to rein them in.


It's also what organized labor is for. Workers can't wait on government to offer aid without leverage. We would not have weekends off or other protections we now take for granted if we waited on government to govern for us as if it was a caring parent.

So that would mean it is in fact the responsibility of the people at robot/AI companies (and across industries). It's not something we can just delegate to role-based authorities to sort out on our behalf.


> This is what government is for

In my home country, the people building the robots and job destroying AI have captured all three branches of government, and have been saying for over 40 years that they'd like to shrink government down to a size that they could drown it in a bathtub. The government can't be relied upon to do more than move its military into our cities to violently stifle dissent.


  > This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

Many will argue that the purpose of government is not to steer or prepare society, but rather to reflect the values of society. Traditionally, the body that has steered (prepared or failed to prepare) society for impending changes was religion.


Enter the Dominionists, gaining steam now. Not a regime I want to live under. Here's a forty year old article describing the inception of those religious figures close to the current USA administration ... https://banner.org.uk/res/kt1.html


Dominionists? Ready the Defiant.


Firstly, they are not coming for my job, they're coming for all jobs.

Secondly, you assume in the first place that we can somehow build a stable post-scarcity society in which people with no leverage can control the super-intelligent agents with all of the power. The idea that "government will just fix it" is totally ignorant of what the government is or how it emerges. In the long run, you cannot have a ruling class that is removed from the keys to power.

Lastly, who says we should all support this future? What if I disagree with the AI revolution and its consequences?

It is kind of amazing how your path of reasoning is so dangerously misoriented and wrong. This is what happens when people grow up watching Star Trek: they just assume that once we live in a post-scarcity future everything will be perfect, and that this is the natural endpoint for humanity.


>Firstly, they are not coming for my job, they're coming for all jobs.

They're not coming for all jobs. There are many jobs that exist today that could be replaced by automation but haven't been because people will pay a premium for them to be done by a human. There are a lot of artisan products out there which are technically inferior to manufactured goods but people still buy them. Separately, there are many jobs which are entirely about physical and social engagement with a flesh and blood human being, sex work being the most obvious, but live performances (how has Broadway survived in an era of mass adoption of film and television?), and personal care work like home health aides, nannies, and doulas are all at least partially about providing an emotional connection on top of their actual physical labor.

And there's also a question of things that can literally only be done by human beings, because by definition they can only be done by human beings. I imagine in the future, many people will be paid full time to be part of scientific studies that can't easily be done today, such as extended, large cohort diet and exercise studies of people in metabolic chambers.


>They're not coming for all jobs.

So we are all going to just do useless bullcrap like sell artisan clay pots to each other and pimp each other out? Wow, some future!

I just don't know how this goofball economy is going to work out when a handful of elites/AI overlords control everything you need to eat and survive and everyone else is weaving them artisan wicker baskets and busking (jobs which are totally automated and redundant, mind you, but the elites would keep us around for the sentimental value).

>I imagine in the future, many people will be paid full time to be part of scientific studies that can't easily be done today, such as extended, large cohort diet and exercise studies of people in metabolic chambers.

Yeah this is one plausible future, we could all be lab rats testing the next cancer medicine or donating organs to the elites. I can't imagine the conditions will be very humane, being that the unwashed masses will have basically no leverage to negotiate their pay.


These AI apologists are also totally ignorant about the actual usefulness of these models. The tech billionaires have been building bunkers while pumping up the bubble. Tell me, why would one need a bunker if AI brings us that utopia? Non rich humans are just consumers to them, they will exploit the planet until it's uninhabitable for their own "gain". We are a waste product for those who hold power. They are showing this to us openly and all the time... how can people be so blind?


I strongly disagree and I am having trouble understanding what kind of world you envision, what will it look like?

The problem as I see it is not robots coming for my job and taking away my ability to earn a salary. That can be solved by societal structures like you are saying, even though I am somewhat pessimistic of our ability to do so in our current political climate.

The problem I see is robots coming for my mind and taking away any stakes and my ability to do anything that matters. If the robot is an expert in all fields why would you bother to learn anything? The fact that it takes time and energy to learn new skills and knowledge is what makes the world interesting. And this is exactly what happened before when machines took over a lot of human labour, luckily there were still plenty of things they couldn't do and thus ways to keep the world interesting. But if the machines start to think for us, what then is left for us to do?


> If the robot is an expert in all fields why would you bother to learn anything?

Robots have been better at chess than humans for a quarter of a century. Yet, chess is still a delightful intellectual and social pursuit.


Where would governments find the money to expand safety nets by 2-3 orders of magnitude, while losing most income tax inflows?


> Where would governments find the money to expand safety nets by 2-3 orders of magnitude, while losing most income tax inflows?

Well, it would start by not tax-favoring the (capital) income that remains and would have to have grown massively relative to the overall economy for that to have occurred.

(In fact, it could start by doing that now, and the resulting tax burden shift would reduce the artificial tax incentive to shift from labor intensive to capital intensive production methods, which would, among other things, buy more time to deal with the broader transition if it is actually going to happen.)


You need income to:

- buy a house
- get food
- buy clothes
- get medical care
- buy nice things

if robots are so advanced that they can do most of the jobs, the cost of goods will be close to zero.

the government will produce and distribute most of the things above and you mostly won't need any money, but if you want extra to travel etc. there will always be a bunch of work to do, and not 8 hours per day


> if robots are that advanced that can do most of the jobs - the cost of goods will be close to zero.

No, the cost of goods will be the cost of the robots involved in production amortized over their production lifetime. Which, if robots are more productive than humans, will not be “near zero” from the point of view of any human without ownership of at least the number of robots needed to produce the goods that they wish to consume (whether that’s private ownership or their share of socially-owned robots). If there is essentially no demand for human labor, it will, instead, be near infinite from their perspective.
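To make the amortization point concrete, here is a tiny sketch with made-up numbers (the robot price, lifetime, and output figures are purely hypothetical assumptions, not data):

```python
# Hypothetical sketch: per-unit cost of a good produced entirely by robots.
# Every figure below is an invented assumption for illustration only.

robot_price = 100_000             # purchase cost of one robot, in dollars
lifetime_years = 5                # productive lifetime of the robot
units_per_year = 20_000           # goods one robot produces per year
marginal_cost_per_unit = 0.50     # energy and parts per unit, in dollars

# Capital cost spread across everything the robot makes in its lifetime
amortized_per_unit = robot_price / (lifetime_years * units_per_year)
cost_per_unit = amortized_per_unit + marginal_cost_per_unit

print(f"amortized capital per unit: ${amortized_per_unit:.2f}")
print(f"total cost per unit:        ${cost_per_unit:.2f}")
```

Even under these generous assumptions the per-unit cost is $1.50, not zero, and it scales with the price of the robot and its inputs.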


The cost of robots will be zero because robots will mine ore, process it and make other robots.


> The cost of robots will be zero because robots will mine ore, process it and make other robots.

This assumes, among other things that are unlikely to be true, that the only cost of extraction is (robot) labor, and that mining rights are free, rather than being driven by non-zero demand and finite supply.

(Another critical implicit assumption is that energy is free rather than being driven by non-zero demand and finite supply, as failing that will result in a non-zero price for robot labor even if the materials in the robot were free.)


Sure... in 200 years. The trick is getting through the cultural/economic shifts required to get to that point without blowing up the earth in WW3.


I think the trick is to start preparing like it will happen in 10 years and hope it will take 200, otherwise wars are imminent.


No. You become worthless to the government and will be treated accordingly


I'm pretty sure if robots are capable of doing most jobs, the only ones left will be related to prostitution or fluid/tissue/organ donation.


Rich ppl doing something to undermine their status as „the better ones”?

This is not going to happen.

We all know a post-apocalyptic world is what awaits us.

More or less Elysium is the future if ppl will still behave the same way they do now.

And I doubt ppl will change in a span of 100 years.


There are still capital costs in producing the robots and infrastructure. So the costs of goods will be nonzero.

>government will produce and distribute most of the things above and you mostly won't need any money

So basically what you are saying is that a government monopoly will control everything?


Government is people; it is not some corporation.

>> government monopoly

There is no monopoly if there is no market.


The market exists whether or not you choose to recognize it


Wealth tax.


Government no longer has the power or authority to constrain private enterprise; especially in highly technical sectors.


Of course it does. Do you think the elites actually WANT massive tariffs putting a brake on GDP growth? Why are tech companies suddenly reversing course on content moderation and/or DEI, after years of pushing in the opposite directions?

Private enterprise will always have some level of corrupting influence over government. And perhaps it sees current leadership as the lesser of two evils in the grand scheme. But make no mistake, government DOES ultimately have the power, when it chooses to assert itself and use it. It's just a matter of political will, which waxes and wanes.

Going back a century, did the British aristocracy WANT to be virtually taxed out of existence, and confined to the historical dustbin of "Downton Abbey"?


I think it's more productive to think in terms of 'owners of public enterprise', rather than elites

There's a theatrical push-pull negotiation narrative that's replayed to us, but do you honestly feel that government could push back strongly on _any issue_ it deemed necessary to?

Public enterprise is so firmly embedded in every corner of Government.

Everything in life involves compromise.

Authority requires the possibility of edict above compromise; which in my mind is no longer possible.


Of course government has the authority to represent the people; if not it, then who or what does?


I didn't say government doesn't have the power to represent the people, now did I?


You wrote:

> Government no longer has the power or authority to constrain private enterprise; especially in highly technical sectors.

Perhaps you meant only this: "Government no longer has the power ..."? It is clear government still has the authority based on the will of the people.


Conceptually, yes.

But I'd argue the legal framework necessary to carry this out doesn't exist.

The strength of private lobbyists keeps power in the hands of private enterprise.


Ok, to synthesize: there is a difference between the legislative branch having some authority (e.g. granted by the Constitution) versus the executive branch having some legal authority (e.g. based on legislation).


I'm in the UK, but do you feel either is likely to exercise authority to constrain private enterprise for the good of the people under the current US administration?


In the US, given the current trajectory, I see a frightening and bleak future.* Due to the Trump administration and the people who enable it, the United States is highly degraded and getting worse every day. I'll list what comes to mind...

[x] Authoritarianism. [x] Civil rights abuses. [x] Blatant defiance of the law. [x] Unrepentant selfishness and lack of character. [x] Weaponization of the courts. [x] Loyalty tests at government agencies and in the military. [x] Political prosecutions. [x] Politicization of the Department of Justice. [x] Rampant presidential overreach. [x] The Supreme Court's endorsement and flimsy justification of presidential overreach. [x] Self-destructive trade policy. [x] Ineffective economic policy. [x] Erosion of norms. [x] Concerning presidential cognitive decline. [x] Institutional hollowing-out. [x] Defunding of science. [x] Destruction of USAID. [x] Blatant corruption. [x] Nepotism. [x] Use of the military for domestic purposes. [x] Firing of qualified military leaders. [x] Blatantly self-serving presidential pardons. [x] Firing of qualified civil servants. [x] Deliberately trying to traumatize civil servants. [x] Unnecessary tax breaks for the wealthy. [x] Intimidation of universities. [x] Rollback of environmental protections. [x] Unconstitutional and economically damaging immigration policies. [x] Top-down gerrymandering. [x] Firing of ~19 Inspectors General. [x] Unqualified cabinet members. [x] Relentless lying. [x] Implicit endorsement of conspiracy theories. [x] Public health policies that will lead to unnecessary deaths. [x] A president who 'models' immorality. [x] Tolerance of illegal and immoral behavior of political allies. [x] Prioritization of appearance over substance. [x] Opulent and disgusting displays of wealth. [x] Trampling on independent journalist access. [x] A foreign policy that undermines key alliances. [x] Dismantling of the Department of Education. [x] Undoing key healthcare provisions from the ACA. [x] Negligent inaction regarding AI catastrophic risks. [x] Motive and capability to manipulate voting machines [x] And more.

Positives? It looks like some durable peace deals are in the works.

Overall, things are dark.

* All the more reason to organize and act. No single person (or even group) can solve any of these problems alone. No person or group can afford to wait for other people to act.

We simply cannot afford to let our shock, anger, or fear get the better of us. We have to build coalitions to turn things around -- including coalitions with people that may vote in ways we don't like or believe things that we think don't make sense. We have to find coalitions that work. We have to persuade and build a movement that can outcompete and outlast Trumpism and whatever comes after it.


If wealth inequality was greatly reduced, we wouldn't have to worry about a lot of these topics, nearly as much.


> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

What if the issue isn't government failing to prepare society to move forward, but instead, AI businesses moving us in a direction that more and more people don't consider to be forward?


> it's the structure of how we humans rely on jobs for income

1. We don’t need everyone in society to be involved in trade.

2. We made it so that if you do not take part in trade (trade labor for income), you cannot live.

3. Thus, people will fear losing their ability to trade in society.

The question is, when did we make this shift? It used to just be slavery, and you would be able to survive so long as you slaved.

The fear is coming from something odd: the reality that you won’t have to trade anymore to live. Our society has convinced us you won’t have any value otherwise.


>We made it so that if you do not take part in trade (trade labor for income), you cannot live.

We did not make it so; this has been the natural state for as long as humans have existed, and in fact, it’s been this way for every other life form on Earth.

Maybe with post-scarcity (if it ever happens) there could be other ways of living. We can dream. But let’s not pretend that “life requires effort” is some sort of temporary unnatural abomination made by capitalists. It’s really just a fact.


You think every other life form on Earth survives by trading with each other? Most of human history has been some contingent contract between owner and labor, where most humans lived under some form of servitude or just pure slavery.

Paradigm shift means, “I can live without being involved in a financial contract with another entity”. This is the milestone before us.


It certainly survives by working, one way or another.

My point is that until now, we have never been able to find a functioning system that frees us from work (trade is just one type of work, so is hunting for survival, or photosynthesis), and until something changes dramatically (like a machine that caters to our every need), I find it hard to believe this can change.


That’s fine. I understand. I have a ridiculous belief that the universe is finally here to free us from work. AI is absurd, and magical, and if it does what we think it can, then the paradigm will shift. We, as custodians of the transition, have to work diligently to make sure this paradigm shift is not perverted by forces that want to re-enable the prior way (in which whatever can reasonably be had to live gets wrapped in a scalping structure where one side can extract additional value).

One of the ways this shift will have momentum is that children today are going to be born into the light. They will live decades without the concept of having to make decisions around the scarcity of work and resources. They will not have the same values and viewpoints on society that, we, legacy components of the system, are currently engulfed by.

Our generation will be the last to join all other prior generations in the boat of economic slavery, and it will be allowed to drift and sail away from port for the final time. It was a long journey; generations and generations were involved. Lots of thieving and abuse. Good riddance.


> This is what government is for, and not to stifle innovation

We should compare how anti-government politicians talk versus how trained, educated neoclassical economists talk. The latter readily recognize that a valid function of government is to steer, shape, and yes, regulate markets to some degree. This is why we don’t have (for example) legal, open markets for murder.

Markets do not define human values; they are a coordination mechanism given a diverse set of values.


I have very little faith in the government to fix that problem.

And even if the government did institute something like universal basic income, if all jobs were replaced, that would almost certainly mean a lower standard of living for the middle class, and even less socioeconomic mobility than there is now.


> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

The AI will belong to the parasite class, who will capture all the profits - but you can't tax them on this, because they can afford to buy the government. So there isn't really a way to fund food and shelter for the population without taking something from the billionaires. Their plans for the future do not include us [0].

[0] https://www.theguardian.com/news/2022/sep/04/super-rich-prep...


On a few occasions as a teenager, I slipped into an unpleasant k-hole. The one thing that remained constant every time was that I felt like we were living in the apocalypse. Even though each of these experiences was precipitated by the dose taken, I somehow ended up with a 24-hour news channel on in the background, and while I sat there motionless watching images of air defense missiles launching on some grainy video from Kosovo, I just felt like the world was coming to an end and that all doom was beginning to take place. It's weird: because of the k, I didn't feel panicky about the doom; I was super calm and collected about it. But nevertheless, it was still an hour or two of impending doom that was very unpleasant. I guess my point is that we need to restrict ketamine use based on income. If someone has enough money, they have far more than enough resources to deal with whatever medical issue justifies their k prescription using a different treatment. Allowing these people to have access to ketamine is the root cause of this problem.


Agreed 100%, except that this is not a new development. This process has been ongoing, with automation taking over certain classes of jobs and doing them faster and better since the industrial revolution.

And capitalism has flourished during this time. There's no reason to believe even more automation is going to change that, on its own.

Sure, Musk and Altman can make noises and talk about the need for UBI "in the future" all they want, but their political actions clearly show which side they're actually on.


Robots might be the future, but humans are more important and should be prioritized first. Let's make sure humans' needs are met, and then you can play with your robot toys, ok?


> So the problem isn't robots, it's the structure of how we humans rely on jobs for income.

Humans have depended on their own labor for income since we stopped being hunters and gatherers or living in small tribes.

So it's not just a matter of "the gov will find a way", but it's basically destroying the way humanity as a whole has operated for the past 5000 years.

So yes, it's a huge problem. Everything done under the banner of "innovation" isn't necessarily a good thing. Slavery was pretty "innovative" as well, for those who were the slave owners.


In my opinion there is a problem when said robot relies on piracy to learn how to do stuff.

If you are going to use my work without permission to build such a robot, then said robot shouldn’t exist.

On the other hand a jack of all trades robot is very different from all the advancements we have had so far. If the robot can do anything, in the best case scenario we have billions of people with lots of free time. And that doesn’t seem like a great thing to me. Doubt that’s ever gonna happen, but still.


The problem with that is that having all the robots also makes it easy to control the government.


> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.

This requires faith that the government will actually step in to do something, which many people lack (at least in the US, can't speak for globally).

That's the sticking point for many of the people I've talked to about it. Some are diametrically opposed to AI, but most think there's a realistic chance AI takes jobs away and an unrealistic chance the government opposes the whims of capital causing people displaced from their jobs to dip into poverty.

I can't say I have a good counter-argument either. At least in the US, the government has largely sided with capital for my entire life. I wouldn't take a bet that government does the kind of wealth redistribution required if AI really takes off, and I would eat my hat if it happens in a timely manner that doesn't require an absolute crisis of ruined lives before something happens.

See the accumulation of wealth at the top income brackets while the middle and lower classes get left behind.

TLDR this is more of a crisis of faith in the government than opposition to AI taking over crap jobs that people don't want anyways.


But the only entity that could "fairly" operate such a government would be a rational AI with a limited set of rules meant to maximize human wellness. Humans will always fail at such a government, as we have seen in the past. Greed always overwhelms good intentions in the long run.


But we all know what will happen, it’s the human condition of greed. The wealthy will control everything, governments will be no help, and all the newly unemployed will be in dire situations.

So while you’ve identified the real problem we need to identify a realistic solution.


And if we know we can't fix it fast enough, is a delay acceptable?


You won't have that opinion when it's your job, life and freedom.


Technology should make our lives better. Whether it's social media, AI, or nuclear power, if we introduce technology that even with its incredible benefits ends up causing harm or making life worse for many people (as all of the above have in various ways), we should reconsider. This should be self-evident. That doesn't mean we get rid of the technology, but we refine. Chernobyl didn't mean the world got rid of nuclear power. It meant we became more responsible with it.

Anyway there is a name for your kind of take. It is anti-humanist.


Might want to read some Karl Polanyi.


"The government" needs time to fix this and until then, we need to not automate everyone out of a job. If that means we don't "get to the future" until then, fine. "Fault" or not, the AI companies are putting people in danger now and unless we can implement a more proper solution extremely quickly, they just have to put up with being slowed down.

But it's not like "the government" (as if there is just one) simply doesn't want to fix things. There are many people who want to fix the way we distribute resources, but there are others who are working to stop them. The various millionaires behind these AI companies are part of the reason why the problem you identified exists in the first place.


So your government should pump the brakes, while other governments rush towards ASI. And you believe this will benefit you, long term? Or do you believe in “global cooperation”?


This is exactly why I put "the government" in quotes. The parent post was saying "the government" should just fix the underlying problem instead of slowing AI progress. This has the same problem - no government can do that alone.

So it's either "we all science ourselves out of a job and die from uncontrolled capitalism" or "we try to do something about the uncontrolled capitalism and in the meantime, everyone else keeps sciencing everyone out of a job". The result is the same, but some of us at least tried to do something about it.


Not necessarily. You can also attempt to disconnect your economy from AI-driven economies. Yes, tariffs. :-)

That path is hard and risky (are AI countries eclipsing us in military power?), but probably more realistic than hoping for global cooperation.


So basically being Amish


If all of the economy were to be AI-driven except your lone village, then yes.


The problem is that we do not share the same values at all, and I do not see this truly benefiting people, nor is it something I envy or feel OK with. You can make the best fakes and do your best to remove activity from people, but ultimately this is going to lead to a deteriorated society, increased mental health issues, and plenty of sad stories, just so a few people can be happy about it.

Ngl, if someone nuked all of the USA's servers and wiped out all this bullshit, I'm not convinced the world would be in a worse state right now.

Let AI be used for scientific research, development, and helping people out. But if it's just to push your smelly ideas, you may even be right, but ultimately the form, intentions and result matter more than recklessly endangering everybody.

TBH I feel like the AI discourse around human replacement smells like hard-core psychopathic behavior, or that of a drunken dude who's driving a car, just happy.

You have zero data concerning the effect it would have on society, and I definitely prefer to live in a less technological world than a world that is full of people with psychosis.

So until we find out how we can solve this bottleneck, I have zero sympathy for this kind of discourse.


“This is what government is for” was the most terrifying thing I’ve read all month. The only thing more starkly dystopian than relying on robots and AI for survival would be adding “the government” to the list.

The government should keep its charge as the protector and upholder of justice. I don’t want it to be those things and then also become a fiat source for economic survival; that’s a terribly destructive combination, because the government doesn’t care for competition or viability, and survival is the last place you want to have all your eggs in one basket, especially when the eggs are guaranteed by force of law and the basket becomes a magnet for corruption.


Ah yes, our government where career politicians from both sides have bent the rules to create 9-10 figure fortunes.


It's not so black and white; it depends on the scale.

We as a society get to decide what is done in our society. If robots replace a few jobs but make goods cheaper for everyone that's a net positive for society.

If robots replace EVERYONE's job, where everyone has no income anymore that's clearly a huge negative for society and it should be prevented.


“Hey Association of Retarded Citizens, robots can do it better so just stay home I guess.”


It's wrong to assume the owners will share the productivity gains with everyone, especially when reliance on labour will be at its lowest, and the power structure of a data/AI economy is more concentrated than anything we've seen before. IMO, the assumption that some form of basic income or social welfare system will be funded voluntarily is as delusional as thinking communism would work.


It's one thing to be fired and completely replaced by a robot, never to work again. It's another to have your industry change and embrace automation, but to remain in it with higher productivity and a new role. You might not initially like the new industry or role, but...

The second is noble. The first is dystopian.


“Guns don’t kill people, etc…”


Replace robots with immigrants and it’s the same fear mongering as usual.


There is much more to concerns about AI than fear mongering. Reasonable people can disagree on predictions about probabilities of future events, but they should not discount reasonable arguments.


This is not a personal belief; this is a regurgitation of the most standard neoliberal orthodoxy.


We just keep moving forward, no one responsible, no one accountable, victims of our own progress.


Respectfully, that’s not a very functional belief. It’s sort of the equivalent to saying “communism is how mankind should operate”, while completely ignoring why communism doesn’t work: greedy, self-preserving genetic human instincts.

The workforce gives regular folks at least some marginal stake in civilization. Governments aren’t effective engines against AI. We failed to elect Andrew Yang in 2020, who was literally running on a platform of setting up a UBI tax on AI. Congress is completely corrupt and ineffectual. Trump is gutting the government.

You may be right about AI taking jobs eventually, if that’s what you’re saying, but you come off pretty coldly if you’re implying it’s what “should” happen because it’s Darwinian and inevitable, and just sorta “well, fuck poor people.”


Not only does Communism not work; Capitalism doesn't work without crises and wars either. :)


Communism was never realized; what didn't work was the transition stage between free-market capitalism and communism: state capitalism. In literally every instance of communism "not working", it was state capitalism that didn't work, because the people who benefited from state capitalism worked to keep it in place, as communism would have ended their greedy consolidation of power. This is not a flaw in communism; this is a flaw in capitalism: the inevitable hoarding of capital.


"Arguing against robots" is an oversimplification. They're arguing against misery and harm, and against the unprepared nature of our government and businesses to account for the impact of large jumps in automation, which, of course, also means arguing against robots. The distinction is pointless. And yes actually, it is in fact the responsibility of everybody to decline to inflict misery, whether a law has been made to control your behavior yet or not. It would be dishonest to imply there is a binary where your business can't ever use any automation based on this principle.

The over-application of objective phrases like "valid" vs "invalid" when talking about non-formal arguments is a sickness a lot of technical people tend to share. In this case, it's dismissive of harm to humans, which is the worst thing you can be dismissive about. "Please don't make me and my family miserable" is not an "invalid argument" - that's inhuman. That person isn't arguing their thesis.

"The problem". Another common oversimplifying phrase used by us thinkers, who believe there is "the answer", as if either of those two things exist as physical objects. "The problem" is that humans are harmed. Everything else just exists within that problem domain, not as "part of the problem" or "not part of the problem".

But most importantly:

Yes, you're absolutely correct (and I hate to use this word, but I'm angry): Obviously the ideal state is that robots do all the work we don't want to do and we do whatever we want and our society is structured in a way to support that. You've omitted the part where that level of social support is very hard to make physically feasible, very hard to convince people of depending on their politics, and, most importantly: It's usually only enough to spare people from death and homelessness, not from misery and unrest. Of course it would be ridiculous to outright ban for-profit use of automation, but even more ridiculous to write a bill that enforces it, e.g. by banning any form of regulation.

Short and medium term, automating technologies are good for the profit of businesses and bad for the affected humans. Long term, automating technologies are good for everybody, but only if society actually organizes that transition in a way that doesn't make those affected miserable/angry. It isn't, and I don't think it's pessimistic to say that it probably won't.

I'd love to live in Star Trek! We don't. We won't for hundreds of years if ever. Technology isn't the limiting factor, the immutable nature of human society and resources are the limiting factors. Nothing else is interesting to even talk about until we clear the bar of simply giving a shit about what actually, in concrete reality, happens to our countrymen.


> So the problem isn't robots, it's the structure of how we humans rely on jobs for income.

It's called capitalism


I agree with you


The problem is more existential.

Why are people even doing the jobs?

In a huge number of cases people have jobs that largely amount to nothing other than accumulation of wealth for people higher up.

I have a feeling that automation replacement will make this fact all the more apparent.

When people realise big truths, revolutions occur.


Specialization is for insects.

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly.



