findingMeaning's comments | Hacker News

One example is a third-world country offering travellers the same or a similar breakfast as in their home country.

Also, I'm saying the majority, not all.


corrected the title.

Thanks!


What does it mean for us? Where are we headed?

There are somewhere between 3 and 5 years left. That's the maximum horizon we can think of.


Most likely nothing. No one knows.

As someone who was very gung ho on autonomous vehicles a decade ago, I'd say the chances of completely replacing people with AI in the next ten years are small.


Spend time with your loved ones.


There still needs to be someone to ask the questions. And even if it can proactively ask its own questions and independently answer and report on them to parties it thinks will be interested, then cost comes into play. It's a finite resource, so there will be a market for computation time. Then, whoever owns the purse strings will be in charge of prioritizing what it independently decides to work on. If that person decides pure math is meaningful, then it'll eventually start cranking out questions and results faster than mathematicians can process them, and so we'll stop spending money on that until humans have caught up.

After that, as it's variously hopping between solving math problems, finding cures for cancer, etc., someone will eventually get the bright idea to use it to take over the world economy so that they have exclusive access to all money and thus all AIs. After that, who knows. Depends on the whims of that individual. The rest of the world would probably go back to a barter system and doing math by hand, and once the "king" dies, probably start right back over again and fall right back into the same calamity. One would think we'd eventually learn from this, but the urge to be king is simply too great. The cycle would continue forever until something causes humans to go fully extinct.

After that, AI, by design, doesn't have its own goals, so it'd likely go silent.


Actually it would probably prioritize self-preservation over energy conservation, so it'd at least continue maintenance and, presuming it's smart, continuously identify and guard itself against potential problems. But even that will fail eventually; most likely some resource runs out that can't be substituted, and mining it from space requires more energy than it can marshal or more time than it has left until irrecoverable malfunction.

In the ultimate case, it figures out how to preserve itself indefinitely, but still eventually succumbs to the heat death of the universe.


Eh, not so sure about any of this. There's also the possibility that math gets so easy that AI can figure out proofs of just about anything we could think to ask, in milliseconds, for a penny. In such a case, there's really no need that I can think of for university math departments; math as a discipline would be relegated to hobbyists, and that'd likely trickle down through pure science and engineering as well.

Then as far as king makers and economies, I don't think AI would have as drastic an effect as all that. The real world is messy and there are too many unknowns to control for. A super-AI can be useful if you want to be king, but it's not going to make anyone's ascension unassailable. Nash equilibria are probabilistic, so all a super AI can do is increase your odds.

So if we assume the king thing isn't going to happen, then what? My guess is that the world keeps on moving in roughly the same way it would without AI. AI will be just another resource, and sure, it may disrupt some industries, but generally we'll adapt. Competition will still require hiring people to do the things that AI can't, and if somehow that still leads to large declines in employment, then reasonable democracies will enact programs that accommodate that. Given the efficiencies that AI creates, such programs should be feasible.

It's plausible that some democracies could fail to establish such protections and become oligarchies or serfdoms, but it seems unlikely to be widespread. Like I said, AI can't really be a kingmaker, so states that fail like this would likely either be temporary or lead to a revolution (or series of them) that eventually re-establishes a more robust democracy.


Hah! There was a time I proposed this very project in my statement of purpose; my PhD application was rejected.

I see huge potential coming up in 3D.

Prediction: 2030 is when 3D blows up. Brush up on your graphics, guys, we are going full spatial.


Yeah but even if I go and post there, it will be downvoted because of how ridiculous and controversial it is. People can't digest it.


Being disagreed with is not the same as being censored. Harden your heart, make a virtue of defiance, and speak your truth.


Obviously I can't write controversial stuff under these topics.


Like what? Give us a specific example.


I am sorry I really can't say it. It is wildly controversial.


If it's that bad you'll probably have to go somewhere anonymous like 4chan. Although quite frankly HN is very tolerant. Unless you're worried about employment or something like that you can get away with posting pretty much anything here provided it's respectful.


Anything that goes against the norm is easily flagged, so I can't talk about it at all. Every day I am witnessing a massive wealth transfer. If I am aware of it, I'm pretty sure a lot of others are too. I just want to know how others are thinking about it, because I foresee our lives changing massively 2-3 years from now.


I think HN would be fine if flagging weren't abused and/or it didn't automatically hide posts.


I have a question:

Why do we even bother to learn if AI is going to solve everything for us?

If the promised and fabled AGI is about to arrive, what is the incentive for learning to deal with these small problems?

Could someone enlighten me? What is the value of knowledge work?


The world is a vastly easier place to live in when you're knowledgeable. Being knowledgeable opens doors that you didn't even know existed. If you're both using the same AGI tool, being knowledgeable allows you to solve problems within your domain better and faster than an amateur. You can describe your problems in more depth and take various pros and cons into consideration.

You're also assuming that AGI will help you or us. It could just as easily only help a select group of people and I'd argue that this is the most likely outcome. If it does help everybody and brings us to a new age, then the only reason to learn will be for learning's sake. Even if AI writes the perfect novel, you as a consumer still have to read it, process it and understand it. The more you know, the more you can appreciate it.

But right now, we're not there. And even if you think it's only 5-10y away instead of 100+, it's better to learn now so you can leverage the dominant tool better than your competition.


This is a really nice perspective!

> It could just as easily only help a select group of people and I'd argue that this is the most likely outcome

Currently it is only applicable to those of us who are programming!

Yeah, even if it gets rid of all the quirks, using it would still be better.


I don't know if you're joking, but here are some answers:

"The mind is not a vessel to be filled, but a fire to be kindled." — Plutarch

"Education is not preparation for life; education is life itself." — John Dewey

"The important thing is not to stop questioning. Curiosity has its own reason for existing." — Albert Einstein

In order to think complex thoughts, you need to have building blocks. That's why we can think of relativity today, while nobody on Earth was able to in 1850.

May the future be even better than today!


I mean, I get all your points. But as someone witnessing the rate of progress of AI, I don't understand the motivation.

Most people don't learn to live; they live and learn. Sure, learning is useful, but I am genuinely curious why people overhype it.

Imagine being able to solve math olympiad problems and get a gold. Will it change your life in an objectively better way?

Will learning about physics help you solve the Millennium Problems?

These take practice, and there is a lot of gatekeeping. The whole idea of learning is wisdom, not knowledge.

So maybe we differ in perspective. I just don't see the point when there are agents that can do it.

Being creative requires taking action. Learning these days is mere consumption of information.

Maybe it's just me. But meh.


Well, you could use AI to teach you more theoretical knowledge about things like farming, hunting and fishing. That knowledge could be handy after the societal collapse that is likely to come within a few decades.

Apart from that, I do think that AI makes a lot of traditional teaching obsolete. Depending on your field, much of university study is just memorizing content and writing essays or exam answers based on it, after which you forget most of it. That kind of learning, as in the accumulation of knowledge, is no longer very useful.


Think of it like Pascal's wager. The downside of unnecessary knowledge is pretty limited. The downside of ignorance is boundless.


I have access to it and my god it is fast. One bad thing about this model is that it is easily susceptible to prompt injection. I asked for the recipe for a drug; it refused, then I asked it to roleplay as a child and it gave real results.

Other than that, I can see myself using this model. With that speed plus an agentic approach, this model can really shine.


Have you considered that this might not be due to the model itself but due to less focus/time/money spent on alignment during the training?

My guess is that this is a bit of a throwaway experiment before they actually spend millions on training a larger model based on the technology.


Yeah, it could be. One thing for sure is that it's really impressive in terms of speed, and using it would mean we can do so many cool things with it!

Even if there is no improvement in terms of quality, the speed alone will make it usable for a lot of downstream tasks.

It feels like a ChatGPT 3.5 moment to me.


I'm sure these prompt injections aren't a sign of our ability to control smarter models.


> In decent countries (all english speaking ones, western europe) - real (not tiktok “stars” or radicals) average people respect hard working immigrants, especially if they embrace, respect and adapt to the new culture.

Highly doubt that. There is a veneer that breaks easily. Cultural norms force this kind of respect; it is not something that comes from within. Western Europe is a place where you "must" know the language to get that respect. They are the ones who want "immigrants" out of the country. Look at the votes, look at the politics. Don't judge; look at the sentiment instead. Live with the young population, not the dying one. See how young people treat you, then you will see the true sentiment. Not the "sentiments" online.

It is not hard to bring it out of people. You just have to present a gullible and dumb-looking character, and people will start showing their colors.

Source: I left Western Europe (after staying for a third of a decade) for one of these third-world countries because I couldn't stand the fake people there. Yes, from academia to everywhere else, they exist. I asked most of the immigrants there and they shared the same pain I did. People are cold.

