
I've done a version of this: https://news.ycombinator.com/item?id=36632397

Let me know what you'd want to see added!



That was great, thank you!

One thing I can't glean: which GPU/kit is preferred for which type of output?

Like chat vs imaging...

Do locally run models/agents have access to the internet?

What's the best internet-connected crawler version one can use?


1. I've updated the section now: https://gpus.llm-utils.org/cloud-gpu-guide/#so-which-gpus-sh... - that should answer it. Basically, 1x 3090 or 1x 4090 is an ideal setup for Stable Diffusion, and 1x A100 80GB is an ideal setup for Llama 2 70B GPTQ (you can use much smaller GPUs, or even CPUs, for the smaller Llama 2 models).

2. No, they don't have access to the internet unless you build something that gives them access

3. I'm not sure what you're asking
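To put rough numbers on point 1, here's a back-of-the-envelope VRAM estimate (my own sketch, not from the linked guide; the 1.2x overhead factor is an assumption to cover activations and KV cache, and real usage varies with context length):

```python
def estimate_vram_gb(n_params_billions, bits_per_weight, overhead_factor=1.2):
    """Rough VRAM estimate: weight memory plus a fudge factor for
    activations / KV cache. The overhead factor is a guess."""
    weight_gb = n_params_billions * bits_per_weight / 8  # GB for weights alone
    return weight_gb * overhead_factor

# Llama 2 70B at 4-bit (GPTQ): ~35 GB of weights, ~42 GB with overhead,
# which is why a single 80 GB A100 is comfortable.
print(round(estimate_vram_gb(70, 4), 1))  # 42.0

# Llama 2 7B at 4-bit: ~3.5 GB of weights, fits easily on a consumer GPU.
print(round(estimate_vram_gb(7, 4), 1))   # 4.2
```

The same arithmetic explains why 70B at fp16 (~140 GB of weights) needs multiple GPUs while the quantized version fits on one.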
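On point 2, the "something that gives them access" can be as simple as a fetch tool your agent loop calls on the model's behalf, then pastes the result into the next prompt. A minimal sketch (hypothetical helper names; any real agent framework has its own tool interface):

```python
import urllib.request

def to_text(raw_bytes, max_chars=2000):
    # Decode and truncate a fetched page before feeding it to the model,
    # so it fits in the context window.
    return raw_bytes.decode("utf-8", errors="replace")[:max_chars]

def fetch_page(url, max_chars=2000):
    """Naive 'browse' tool. The local model itself has no network
    access; your loop calls this when the model asks for a URL and
    returns the text to it as part of the next prompt."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return to_text(resp.read(), max_chars)
```

In practice you'd also strip HTML and handle errors, but the point stands: internet access is a property of the harness you build around the model, not of the model weights.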




