The Python standard library is, overall, very solid. It has a few darker corners, though, and in the past HTTP handling was not as easy as it could be. In the age of APIs, the requests library filled that void with a compellingly simple solution and became one of the most popular libraries on PyPI, with millions of downloads every month [1].
Looking at the top few packages:
* simplejson: the upstream for the stdlib json module; people install it because it gets performance updates first
* requests
* six: the most common library used to bridge the Python 2/3 transition
* virtualenv: near-ubiquitous development tool (it allows you to maintain a separate Python environment for each project to avoid cross-talk, and is also used by popular testing tools like tox which run your tests under a variety of Python versions)
* distribute: a few years back, setuptools was forked for a major overhaul. That fork has since been merged back into setuptools, but many packages still reference it, particularly in older releases
* boto: AWS client library (the official awscli tool is built on the related botocore package)
* pip: the Python package installer, now bundled with Python, though upgrades and older versions of Python still get it from PyPI
requests is about as famous as libraries come in the Python community. It is the perfect example of a library done right; I've yet to meet a developer who has used it and disliked it.
It handles all of the low-level stuff for you, such as sessions, cookies, gzip, form-encoded POSTs, composing URLs with query arguments, and thread safety, without you even having to be aware of them, all through a terse, readable API. It does this without taking away your ability to control those details if you really want to, not that you're likely to need to anyway.
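To give a sense of how little ceremony that involves, here is a minimal sketch (the public httpbin.org test service stands in for a real API):

```python
import requests

session = requests.Session()  # reuses connections and carries cookies across calls

# Query-string arguments are composed and URL-encoded for you.
resp = session.get("https://httpbin.org/get", params={"q": "python", "page": 2})
resp.raise_for_status()       # turn 4xx/5xx responses into exceptions
print(resp.url)               # https://httpbin.org/get?q=python&page=2
print(resp.json())            # gzip decoding and JSON parsing handled for you

# A form-encoded POST is just a dict passed as `data`.
resp = session.post("https://httpbin.org/post",
                    data={"user": "alice", "token": "s3cret"})
print(resp.status_code)
```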
It is the de facto standard for anything that has to interact with an API, or just HTTP in general. The old way to do this sort of thing was the urllib2 module (part of the Python standard library). urllib2 has some design flaws and can be a pain to work with.
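For contrast, here is roughly what that same form POST looks like with urllib2 on Python 2 (again using httpbin.org purely for illustration); the manual encoding and exception juggling is exactly the boilerplate requests hides:

```python
import urllib
import urllib2

# Form data has to be URL-encoded by hand; supplying it turns the request into a POST.
data = urllib.urlencode({"user": "alice", "token": "s3cret"})
request = urllib2.Request("https://httpbin.org/post", data)

try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError as err:   # 4xx/5xx arrive as exceptions, not responses
    print("request failed: %d" % err.code)
else:
    try:
        body = response.read()     # raw bytes; decoding and JSON parsing are up to you
        print("%d: %d bytes" % (response.getcode(), len(body)))
    finally:
        response.close()
```

Cookie handling and gzip decoding each need extra handlers or manual work on top of this, which is the kind of friction requests was built to remove.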