But leaving a light on for 2x the time will cost very close to 2x the price.
Asking “what day is today” vs “create this api endpoint to adjust the inventory” will cost vastly different amounts. And honestly I have no clue where to even start estimating the cost unless I run the query.
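For a back-of-the-envelope sense of why (prices and token counts below are made up for illustration), cost is roughly input tokens plus output tokens times their per-token rates, and the output count is exactly the part you can't know until you run the query:

```python
# Hypothetical prices and token counts, purely for illustration.
PRICE_IN = 3.00 / 1_000_000    # $ per input token
PRICE_OUT = 15.00 / 1_000_000  # $ per output token

def cost(tokens_in, tokens_out):
    return tokens_in * PRICE_IN + tokens_out * PRICE_OUT

print(f"${cost(10, 15):.6f}")         # "what day is today": ~$0.000255
print(f"${cost(20_000, 4_000):.2f}")  # endpoint + generated code: ~$0.12
```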
Which means implementations also have to be correspondingly complicated. You have to handle quite a few different primitive data types each with their own opcodes, class hierarchies, method resolution (including overloading), a "constant pool" per class, garbage collection, exception handling, ...
I would expect a minimal JVM that can actually run real code generated by a Java compiler to require at least 10x as much code as a minimal Bedrock VM, and probably closer to 100x.
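To get a feel for where the extra code goes, here's a toy Python sketch (opcode numbers are invented and the semantics simplified) of the typed dispatch a JVM-style interpreter loop needs before it even touches the constant pool, method resolution, or GC:

```python
# Toy stack-machine dispatch, NOT a real JVM; the point is that every
# arithmetic op exists once per primitive type, so the loop fans out.
IADD, LADD, FADD, DADD = range(4)   # int/long/float/double add (numbers made up)

def execute(code, stack):
    for op in code:
        if op == IADD:      # 32-bit int add (a real iadd also has to wrap on overflow)
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == LADD:    # 64-bit long add
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op in (FADD, DADD):   # float / double add
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        # ...and the same per-type fan-out repeats for sub, mul, div, loads,
        # stores, comparisons and returns, before constant-pool resolution,
        # method dispatch, exceptions or GC even enter the picture.
    return stack
```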
Why do you think that this means "idle GPU" rather than a company recognizing a growing need and allocating resources toward it?
It's cheaper because it's a different market with different needs, which can be served by systems optimizing for throughput instead of latency. Feels like you're looking for something that's not there.
I wouldn’t be so dismissive. Research is just a loop: form a hypothesis, run experiments, collect data, form a new hypothesis.
There’s some creativity required for scientific breakthroughs, but 99.9% of scientists don’t need this creativity. They just need grit and stamina.
That loop involves way more flexible, goal-oriented attention, more intrinsic/implicit understanding of plausible cause and effect based on context, and more novel idea creation than it seems.
You can only brute-force things with combinatorics and probabilities that have been well mapped via human attention; piggy-backing off lots of human-digested data is just a clever way of avoiding those issues. Research is, by definition, novel human attention directed at a given area, so it can't benefit from that strategy the way domains that have already had a lot of human attention can.
I think the whole idea of "original insight" is doing a lot of heavy lifting here.
Most innovation is derivative, either from observation or cross-application. People aren't sitting in isolation chambers their whole lives and coming up with things in the absence of input.
I don't know why people think a model would have to manifest a theory in the absence of input.
> I think the whole idea of "original insight" is doing a lot of heavy lifting here.
This is my biggest issue with AI conversations. Terms like "original insight" are just not rigorous enough to have a meaningful discussion about. Any example an LLM produces can be said to be not original enough, and conversely you could imagine trivial types of originality that simple algorithms could simulate (e.g. speculating on which existing drugs could be used to treat known conditions). Given the number of drugs and conditions, you are bound to propose some original combination.
People usually end up just talking past each other.
And insight. Insight can be gleaned from a comprehensive knowledge of all previous trials and the pattern that emerges. But the big insights can also be simple random attempts people make because they don't know something is impossible. While AI _may_ be capable of the first type, it certainly won't be capable of the second.
Awfully bold to claim that 99.9% of scientists lack the need for "creativity". Creativity in methodology creates gigantic leaps away from reliance on grit and stamina.
Why would you want to have an ever growing memory usage for your Python environment?
Since LLM context is limited, at some point the LLM will forget what was defined at the beginning, so you will need to reset/remind the LLM what's in memory.
You're right that LLM context is the limiting factor here, and we generally don't expect machines to be used across different LLM contexts (though there is nothing stopping you).
The utility here is mostly that you're not paying for compute/memory when you're not actively running a command. The "forever" aspect is a side effect of that architecture, but it also means you can freeze/resume a session later in time just as you can freeze/resume the LLM session that "owns" it.
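As a rough illustration of the freeze/resume idea (not this project's actual implementation, and assuming everything in the session is picklable), something like dill's session dump/load captures the shape of it:

```python
# Sketch only, not the project's code: dill.dump_session/load_session
# serialize the __main__ namespace, assuming everything in it pickles.
import dill  # pip install dill

inventory = {"widgets": 42}        # state built up during the LLM session

dill.dump_session("session.db")    # freeze: write globals to disk

# ...later, possibly in a brand-new process...
dill.load_session("session.db")    # resume: globals are restored
print(inventory["widgets"])        # -> 42
```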
Fun fact: this is very similar to how Smalltalk works. Instead of storing source code as text on disk, it only stores the compiled representation as a frozen VM. Using introspection, you can still find all of the live classes/methods/variables. Is this the best way to build applications? Almost assuredly not. But it does make for an interesting learning environment, which seems in line with what this project is, too.
So when someone says they corrupted the image file and lost all their work, it usually means they don't know that their work has been saved as re-playable actions.
It's the other way around: it swaps idle sessions to disk so that they don't consume memory. From what I read, apparently "traditional" code interpreters keep sessions in memory, and if a session is idle, it expires. This one will write it to disk instead, so that if the user comes back after a month, it's still there.
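Mechanically, the policy amounts to "serialize on idle, rehydrate on next use" rather than "expire on idle". A hypothetical sketch with invented names, not this project's code:

```python
import time

class SessionManager:
    """Swap idle sessions to disk instead of expiring them (illustrative only)."""

    def __init__(self, store, idle_limit=15 * 60):
        self.live = {}        # session_id -> in-RAM interpreter state
        self.last_used = {}   # session_id -> timestamp of last command
        self.store = store    # anything with save(id, state) / load(id)
        self.idle_limit = idle_limit

    def get(self, session_id):
        if session_id not in self.live:
            # Swapped out earlier: rehydrate the frozen state from disk.
            self.live[session_id] = self.store.load(session_id)
        self.last_used[session_id] = time.time()
        return self.live[session_id]

    def sweep(self):
        now = time.time()
        for sid in list(self.live):
            if now - self.last_used[sid] > self.idle_limit:
                # Idle: persist to disk and free the memory; nothing "expires".
                self.store.save(sid, self.live.pop(sid))
```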