1. I've updated the section now: https://gpus.llm-utils.org/cloud-gpu-guide/#so-which-gpus-sh... - that should answer it. Basically, 1x 3090 or 1x 4090 is an ideal setup for Stable Diffusion, and 1x A100 80GB is an ideal setup for Llama 2 70B GPTQ (for the smaller Llama 2 models you can use much smaller GPUs, or even CPUs). There's a rough loading sketch after this list.
2. No, they don't have access to the internet unless you build something that gives them access - the second sketch below shows the general shape of that.
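
On 1: if you want to see roughly what running Llama 2 70B GPTQ on a single A100 80GB looks like in code, here's a minimal sketch. I'm assuming the TheBloke/Llama-2-70B-GPTQ weights from the Hugging Face Hub and the transformers GPTQ integration (needs optimum and auto-gptq installed); treat the details as illustrative, not as part of the guide:

```python
# Minimal sketch: load a GPTQ-quantized Llama 2 70B on one A100 80GB.
# The repo name below is an assumption (a community GPTQ upload), not from the guide.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-70B-GPTQ"  # assumed Hub repo for the 4-bit weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # places the quantized weights on the available GPU
)

prompt = "The most important factor when renting a cloud GPU is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 4-bit weights come to roughly half of the A100's 80 GB, which leaves headroom for the KV cache at longer contexts - that's why 1x A100 80GB is comfortable here.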
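
On 2: "building something that gives them access" usually just means your own code fetches the data and pastes it into the prompt - the model itself never touches the network. A bare-bones sketch (the URL and prompt wording are purely illustrative):

```python
# Minimal sketch: give a model "internet access" by fetching pages yourself
# and putting the text into the prompt. The model only ever sees the string.
import requests

def fetch_page_text(url: str, max_chars: int = 4000) -> str:
    """Download a page and return a truncated chunk of its raw text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text[:max_chars]

page = fetch_page_text("https://example.com")  # illustrative URL
prompt = f"Here is a web page:\n{page}\n\nSummarize it in two sentences."
# ...then send `prompt` to whichever model you're running (local or API).
```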
Let me know what you'd want to see added!