
The introduction to the article is not paywalled, but the actual 2027 AI story is paywalled.


Ah.


Can you provide some names of AI apps whose revenue > cost?


I mean, ChatGPT could easily be profitable today if they wanted to, but they're prioritizing growth.


[citation needed]


Please stop the BS and take a basic corporate finance class.

FCFF = EBIT(1-t) - Reinvestment.

If OAI stops the Reinvestment, they lose to competition. Got it? Simple.
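To make it concrete, here is a toy calculation with made-up numbers (nothing below reflects OAI's actual financials; it's only to show how the terms interact):

    # Hypothetical figures, purely illustrative.
    ebit = 10_000          # operating income, $M (assumed)
    tax_rate = 0.21        # assumed
    reinvestment = 12_000  # capex + working-capital growth, $M (assumed)

    fcff = ebit * (1 - tax_rate) - reinvestment
    print(round(fcff))  # -4100: free cash flow is negative while growth spending exceeds after-tax EBIT

    # Set reinvestment to zero and FCFF flips to roughly +7,900,
    # but the growth that spending was buying disappears with it.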


But leaving a light on for 2x the time will cost very close to 2x the price.

Asking “what day is today” vs. “create this API endpoint to adjust the inventory” will cost vastly different amounts. And honestly, I have no clue where to even start estimating the cost unless I run the query.
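A rough way to ballpark it would be to count tokens and multiply by the provider's per-token rates. Sketch below; the prices are placeholders I made up, not anyone's actual published rates:

    # Back-of-the-envelope cost estimate; prices are assumptions, not real rates.
    PRICE_PER_1M_INPUT = 3.00    # $ per 1M input tokens (assumed)
    PRICE_PER_1M_OUTPUT = 15.00  # $ per 1M output tokens (assumed)

    def estimate_cost(input_tokens, output_tokens):
        return (input_tokens / 1e6) * PRICE_PER_1M_INPUT + \
               (output_tokens / 1e6) * PRICE_PER_1M_OUTPUT

    # "What day is today": tiny prompt, tiny answer.
    print(estimate_cost(20, 15))         # ~$0.0003
    # "Create this API endpoint": big context plus a long generated diff.
    print(estimate_cost(30_000, 4_000))  # ~$0.15

The hard part is guessing the token counts (especially the context an agent pulls in), which is why it is so hard to know ahead of time.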


A likely catalyst would be a DeepSeek-like model that has the capability of a top model but in a much smaller footprint.

NVDA and cloud providers that are heavily staked in AI would likely take a hit if a 70B model could do what Sonnet 4 does.

But overall, AI is here to stay even if the market crashes, so it's not really an AI bubble pop, more like a GPU pop.

And even then, GPUs will still be in demand since training will still need large clusters.


I have not delved too deep into the code, but are there any functional differences it has over Java other than the size?

Presumably Java would also be pretty tiny if we wrote it in bytecode instead of higher-level Java.


The Java bytecode instruction set actually has a quite complicated specification: https://docs.oracle.com/javase/specs/jvms/se8/html/

Which means implementations also have to be correspondingly complicated. You have to handle quite a few different primitive data types (each with their own opcodes), class hierarchies, method resolution (including overloading), a "constant pool" per class, garbage collection, exception handling, ...

I would expect a minimal JVM that can actually run real code generated by a Java compiler to require at least 10x as much code as a minimal Bedrock VM, and probably closer to 100x.
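To give a sense of where the gap comes from, here is a toy stack-machine loop (Python, not a real JVM, just an illustration): each primitive type needs its own opcodes, so even the arithmetic part of an interpreter multiplies out before you ever get to classes, the constant pool, exceptions or GC:

    # Toy stack machine, not a real JVM; it only hints at why typed opcodes
    # (iadd/ladd/fadd/dadd, ...) plus classes, method resolution, exceptions
    # and GC make even a "minimal" JVM large.
    def run(code, stack):
        pc = 0
        while pc < len(code):
            op = code[pc]
            if op == "iconst":        # push an int constant
                pc += 1
                stack.append(int(code[pc]))
            elif op == "iadd":        # int add
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "fadd":        # float add needs its own opcode...
                b, a = stack.pop(), stack.pop()
                stack.append(float(a) + float(b))
            # ...and so on for long/double variants, loads/stores,
            # invokevirtual, new, checkcast, athrow, monitorenter, ...
            pc += 1
        return stack

    print(run(["iconst", "2", "iconst", "3", "iadd"], []))  # [5]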


Is this an indication of the peak of the AI bubble?

In a way this is saying that there are some GPUs just sitting around, so they would rather get 50% than nothing for their use.


Seems more like electricity pricing, which has peak and off-peak rates for most business customers.

To handle peak daily load, you need capacity that goes unused in off-peak hours.


Why do you think that this means "idle GPU" rather than a company recognizing a growing need and allocating resources toward it?

It's cheaper because it's a different market with different needs, one that can be served by systems optimizing for throughput instead of latency. Feels like you're looking for something that's not there.


I wouldn't be so dismissive. Research is just a loop: form a hypothesis, run experiments, collect data, form a new hypothesis. There is some creativity required for scientific breakthroughs, but 99.9% of scientists don't need that creativity. They just need grit and stamina.


I wouldn't be so dismissive of the objection.

That loop involves way more flexible, goal-oriented attention, more intrinsic/implicit understanding of plausible cause and effect based on context, and more novel idea creation than it seems.

You can only brute-force things with combinatorics and probabilities that have been well mapped via human attention; piggybacking off of lots of human-digested data is just a clever way of avoiding those issues. Research is by definition novel human attention directed at a given area, so it can't benefit from that strategy in the same way domains that have already had a lot of human attention can.


I think the whole idea of "original insight" is doing a lot of heavy lifting here.

Most innovation is derivative, either from observation or cross-application. People aren't sitting in isolation chambers their whole lives and coming up with things in the absence of input.

I don't know why people think a model would have to manifest a theory in the absence of input.


> I think the whole idea of "original insight" is doing a lot of heavy lifting here.

This is my biggest issue with AI conversations. Terms like "original insight" are just not rigorous enough to have a meaningful discussion about. Any example an LLM produces can be said to be not original enough, and conversely, you could imagine trivial types of originality that simple algorithms could simulate (e.g., speculating on which existing drugs could be used to treat known conditions). Given the number of drugs and conditions, you are bound to propose some original combination.

People usually end up just talking past each other.


And insight. Insight can be gleaned from a comprehensive knowledge of all previous trials and the pattern that emerges. But the big insights can also be simple random attempts people make because they don't know something is impossible. While AI _may_ be capable of the first type, it certainly won't be capable of the second.


I think this comment is significantly more dismissive of science and scientists than the original comment was of AI.


Awfully bold to claim that 99.9% of scientists lack the need for "creativity". Creativity in methodology creates gigantic leaps away from reliance on grit and stamina.


Why would you want ever-growing memory usage for your Python environment?

Since LLM context is limited, at some point the LLM will forget what was defined at the beginning, so you will need to reset or remind the LLM what's in memory.


You're right that LLM context is the limiting factor here, and we generally don't expect machines to be used across different LLM contexts (though there is nothing stopping you).

The utility here is mostly that you're not paying for compute/memory when you're not actively running a command. The "forever" aspect is a side effect of that architecture, but it also means you can freeze/resume a session later in time just as you can freeze/resume the LLM session that "owns" it.
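As a rough sketch of the freeze/resume idea (plain Python and pickle here, not necessarily how the product actually persists sessions), you could imagine snapshotting the session's namespace to disk and reloading it later:

    # Illustration only; the real persistence mechanism may differ,
    # and pickle can't serialize everything (open files, sockets, ...).
    import pickle

    def freeze(namespace, path):
        """Write the session's variables to disk."""
        snapshot = {k: v for k, v in namespace.items() if not k.startswith("__")}
        with open(path, "wb") as f:
            pickle.dump(snapshot, f)

    def resume(path):
        """Load the saved variables back into a fresh session."""
        with open(path, "rb") as f:
            return pickle.load(f)

    session = {"inventory": {"widgets": 42}, "notes": ["restock friday"]}
    freeze(session, "session.pkl")
    # ...weeks later, in a new process, nothing held in memory in between...
    print(resume("session.pkl")["inventory"])  # {'widgets': 42}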


Fun fact: this is very similar to how Smalltalk works. Instead of storing source code as text on disk, it only stores the compiled representation as a frozen VM. Using introspection, you can still find all of the live classes/methods/variables. Is this the best way to build applications? Almost assuredly not. But it does make for an interesting learning environment, which seems in line with what this project is, too.


> only stores the compiled representation

That seems to be a common misunderstanding.

Smalltalk implementations are usually 4 files:

-- the VM (like the JVM)

-- the image file (which you mention)

-- the sources file (consolidated source code for classes/methods/variables)

-- the changes file (actions since the source code was last consolidated)

The sources file and changes file are plain text.

https://github.com/Cuis-Smalltalk/Cuis7-0/tree/main/CuisImag...

So when someone says they corrupted the image file and lost all their work, it usually means they don't know that their work has been saved as re-playable actions.

https://cuis-smalltalk.github.io/TheCuisBook/The-Change-Log....

> Is this the best way to build applications? Almost assuredly not.

False premise.


It's the other way around: it swaps idle sessions to disk so that they don't consume memory. From what I read, "traditional" code interpreters apparently keep sessions in memory, and if a session is idle, it expires. This one writes it to disk instead, so that if the user comes back after a month, it's still there.


Before or after the bubble popped?


During


With how healthcare is handled in America … good luck to poor people getting access to anything like that.


Life extension isn't happening for a minimum of 30 years, if ever. Hopefully, maybe it won't be this bad by then???

