It has less to do with sloppiness and more to do with Windows being a nightmare to target for this sort of thing. I dual-boot now so I can play games and run ML workloads without compatibility headaches. Conda is not Docker, unfortunately, and doesn't do as much as you may think to guarantee Windows support for CUDA. Further, nvidia-docker seems to only work on Linux.
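For reference, the usual GPU smoke test on a Linux host looks something like this, assuming a recent Docker with the NVIDIA Container Toolkit installed (the image tag is just an example and changes over time):
```sh
# should print your GPU table if the toolkit is wired up correctly
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```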
Not to suggest your frustrations aren't valid, but you have some options at least.
Good point, and it's a big reason why I haven't dived into ML: the platform I'm on is very unfriendly.
Can't do much when you get stuck on step 23 and realize Windows 10 requires you to google for workarounds.
I'm thinking of doing a completely Linux-only build with a powerful GPU, but this is also where things get tricky: you don't know what you really need, and it's a hefty investment when you are not building a machine for fun but for experimentation with AI.
But I presume installing this on Linux also installs into global site-packages? I don't see anything in the setup that would be different in that regard.
Only if you choose to, same as on Windows, and it's _highly_ discouraged on both. I haven't read the README, but culturally, even if someone writes just `python3` in documentation, they probably still expect you to run it inside a virtual environment of some sort.
You can still use conda envs/venv/poetry on Linux, after all.
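For example, a minimal sketch using the builtin venv module (the environment name and path here are just placeholders):
```sh
# create an isolated environment (name and location are arbitrary)
python3 -m venv ~/.venvs/ml-env
# activate it; python and pip now resolve to the env's own copies
source ~/.venvs/ml-env/bin/activate
# installs land in the env's site-packages, not the global one
pip install --upgrade pip
```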
edit: re-reading your question, it sounds like you're in PATH hell. You should examine the contents of your PATH environment variable and make sure you don't have conflicting installations of python.
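On bash, a quick way to check (a sketch; `which -a` behavior can vary slightly between systems):
```sh
# print PATH entries one per line to spot conflicting directories
echo "$PATH" | tr ':' '\n'
# list every python/pip on PATH, in resolution order
which -a python python3 pip
```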
I'm not sure but I think you've misunderstood my point.
The README gives specific installation steps, including the creation of a conda environment.
It also uses pip directly, which results in stuff being installed into site-packages.
Now - I presume conda is capable of handling this, but for some reason it's not done that way. I'm familiar with virtualenv but not with conda, so I'm not sure how I'd go about doing this correctly, and I'm also not sure if the author had a good reason not to do it this way in the README.
So - I'm simply asking "Why is some stuff isolated and other parts not?"
My hunch is that the author doesn't care about virtualenvs/isolation and is just using conda as a package installer. When it came to pip, they simply ignored this aspect.
In general, activating a conda environment _should_ override your PATH to include the environment's local, contained copy of both python and pip. As such, running `pip install x` in an activated conda environment will install those dependencies using the environment's python/pip, not your global python/pip.
On bash, you would test this like:
```sh
$ conda activate env-name
$ which python
# should print a path inside the conda env, e.g. .../envs/env-name/bin/python
$ which pip
# should also point inside the conda env, not at the global pip
```
If it _is_ using your global pip, that means your PATH somehow isn't being set properly. This is common with conda on Windows, although I'm not certain why exactly. (The Windows equivalent of `which` is `where`, which you can run from an Anaconda Prompt to see the same resolution order.)
The reason they are using pip inside of the conda environment, rather than conda itself, may be that CUDA-related dependencies aren't available in conda's repositories, or it may simply be personal preference.
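A sketch of that mixed pattern (env and package names are placeholders): as long as the environment is active, the pip you invoke is the env's own, so everything still lands inside the env.
```sh
# conda provides the isolated environment and the interpreter
conda create -n ml-env python=3.10
conda activate ml-env
# packages available in conda's channels come from conda...
conda install numpy
# ...while pip fills in anything missing from those channels;
# inside the active env, pip installs into the env's site-packages
pip install some-package-not-on-conda
```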