Hacker News | past | comments | ask | show | jobs | submit | catasaurus's comments

I mean the maximum efficiency for intelligence, which is pretty abstract; part of the question is what intelligence even means as it supersedes humans and the metrics we care about.


Yeah, Darknet Diaries is very well done.


DnD drives me nuts: the host re-summarizes everything every guest says in a fluffy, emotional way that adds no new information. It's like a podcast where every sentence is said twice. Kinda hard to listen to.


Now that we've run out of truly interesting hacks, the later episodes aren't that interesting. My favorite episode is Knaves Out.


Yeah, I understand that it's something that can be done fairly easily, but I'm too lazy to do it at the moment (lol). By finicky I mean the usually buggy, hard-to-use web interfaces for setting up some weird Jupyter Notebook that they run, or SSH (hard-to-use as in something like AWS SageMaker). I understand the purpose of these things, but I just want to use my own development environment for everything and then send code over to run. Preferably the pricing would be serverless: paying only for the compute time actually used to run my programs, not for a reserved server sitting there.


What you're essentially asking for (especially with the TOML-like configuration) is SLURM + a GPU cluster.

SLURM does that wrapping for you: you essentially just point to the file you want to run, add some high-level GPU and CPU resource-allocation directives, and it schedules and runs the job for you.
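As a rough illustration (the job name, resource numbers, and script path here are all made up), a minimal SLURM batch script looks something like this:

```bash
#!/bin/bash
#SBATCH --job-name=train        # name shown in the queue
#SBATCH --gres=gpu:1            # request one GPU
#SBATCH --cpus-per-task=4       # request four CPU cores
#SBATCH --mem=16G               # request 16 GB of RAM
#SBATCH --time=02:00:00         # kill the job after two hours

# SLURM places this on a free node that satisfies the request and runs it there
python train.py --config config.toml
```

You submit it with `sbatch job.sh`, and SLURM queues it onto the cluster; no manual server wrangling needed.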

I have seen some people trying to run SLURM on GCP (lol), and I wouldn't be surprised if it's possible with AWS, Lambda, or any of the other cluster service providers (cluster-as-a-service, CLaaS?).

Just from one Google search, it looks like it's definitely possible with AWS: https://docs.aws.amazon.com/parallelcluster/latest/ug/slurm-...


It sounds like you might want something for GPU batch-job management. Some things to check out would be GPU orchestration tools, specifically Slurm, Run.ai, and SkyPilot.

Or maybe you want a serverless GPU cloud; check out RunPod, Modal, Baseten, and Replicate.

Links:

https://slurm.schedmd.com/documentation.html

https://www.run.ai/ml-workflow-management

https://github.com/skypilot-org/skypilot

https://www.runpod.io/serverless-gpu

https://modal.com/pricing

https://www.baseten.co/pricing/

https://replicate.com/pricing


SLURM was already mentioned by @momofuku.

Ray is another good candidate (and feels more modern, imho).

https://www.ray.io/


Ray looks interesting, will check it out.

EDIT: so the main thing is that technologies like Ray have a way to do these things, but I honestly just want an easy way to do it. Maybe that means I'll have to set something up with Ray and AWS myself and write a wrapper around it?


I haven't used Ray, but I've read a bit of its documentation, and from what I gather, you install a daemon on the box (Ray Core) and send it commands that it executes. Along the way, you can keep state, store data, and schedule things.

https://docs.ray.io/en/latest/ray-core/key-concepts.html#tas...

https://docs.ray.io/en/latest/ray-core/examples/gentle_walkt...
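Based only on a skim of those docs (so treat this as an untested sketch, not a recipe; the function and its return value are invented), the Ray tasks pattern looks roughly like this:

```python
import ray

# Connect to a running cluster, or start a local one if no address is given.
ray.init()

# Declaring resource needs (e.g. one GPU) lets the scheduler place the task.
@ray.remote(num_gpus=1)
def train(config_path):
    # ... load data, train, return metrics ...
    return {"config": config_path, "loss": 0.1}

# .remote() submits the task and immediately returns an ObjectRef;
# ray.get() blocks until the result is ready.
ref = train.remote("config.toml")
print(ray.get(ref))
```

That matches the "point at a function, let the cluster run it" workflow described upthread, just driven from Python instead of a batch script.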

That's what I would want something like this to do if I were building the tooling myself. Although I'd do it in Go instead of Python so that the dependency chain was simpler; a single small binary is nicer, imho.


Yeah, that's what I found in my research too, but since I'm desperate, idk.


Lol. I will check those articles out.


I can understand that, but what about highly competitive markets like large-cap NASDAQ and NYSE stocks?


The NSA declassified this a while back; it's a 395-page, in-depth Python course that teaches just about everything. Does anyone know of a similarly in-depth guide to another language (C/C++, JavaScript, etc.) publicly available on the internet?


The paid version probably includes better servers and hardware for your ChatGPT instance, but OpenAI intentionally creating demand errors? I'd think it's more that they're using cheap servers to let everyone use ChatGPT, so when too many people try to query this massive machine-learning model (175B parameters), the wait time climbs as the model has more and more inputs to respond to.

Also, most freemium platforms try to nudge their users toward paid plans:

Spotify: no downloading music without a paid plan, etc.

AWS: free tier, but then you have to pay

YouTube: ads, unless you pay for no ads

And on and on.


I agree. A problem arises, though, when moderation goes too far and a website/forum/online community turns into a Stack Overflow: a useful place, but way too over-moderated, with established users being very toxic to new ones.


