That's funny, I personally prefer HTTP for its simplicity, human-readability, accessibility, lack of centralized control, backwards compatibility and lack of forced upgrades or locking out old clients, etc., not to mention speed.
Of course, I'm fortunate enough to live in a place where MITM attacks are virtually non-existent, aside from WiFi portals and maybe ISP banners (which I've never experienced).
> Of course, I'm fortunate enough to live in a place where MITM attacks are virtually non-existent, aside from WiFi portals and maybe ISP banners (which I've never experienced.)
I don’t know where you live, but I feel this is more common and insidious than you think. For instance, in the UK, Vodafone (or Three, I don't remember exactly) broke hundreds of our sites by injecting JS and tracking pixels into the markup.
Now, with behavioural targeting slowly dying, ad-tech businesses talking about fingerprinting as a valid alternative, and contextual targeting on the rise, I can guarantee you that the situation is going to get worse.
How insidious would you say that is compared to not being able to use an otherwise capable 7-year-old device to access most "secure" websites across the Web?
I only browse HTTPS sites. I have the `HTTPS Everywhere` addon installed with `EASE` (Encrypt All Sites Eligible) turned on so I don't accidentally browse an unencrypted website. Something like 85–90% of the web is encrypted now, and there's no excuse to be using outdated plaintext HTTP anymore. It's a privacy and security risk. There have only been a few instances where I had to view an HTTP site (I'm a freelancer and a client's webpage was still unencrypted, so I had to see it; a rare exception to the rule).
The privacy and security risk comes in large part from the code a site runs and the actions you perform on it.
In reality, as far as privacy goes, things are on average the opposite of your claim. Most sites that put your privacy at risk today are using HTTPS; I'm talking about the vast majority of the commercially operated web. I know my privacy is much better respected on a plain-text (no JavaScript) site using HTTP than on [insert a top 10k most popular site here] using HTTPS.
And as for security: if I'm not shopping or entering my billing details anywhere on the site, for example, I don't see how an HTTP site can compromise my security.
I actually prefer deploying HTTP sites for simple test projects where speed is imperative, because they're also faster: there's no TLS handshake needed to connect.
It's funny, because I get like 70% HTTP in my index, so the whole "90% of the web is encrypted" claim seems to depend on which sample you are looking at. Google doesn't index HTTP at all, so that's not a good place to go looking for what's most popular. That's in fact half the reason why I built this search engine in the first place: they demand things of websites that some websites simply can't or won't comply with.
A lot of servers still use HTTP, for various reasons. There are also some clients that can't use HTTPS.
I think there are absolute numbers, and then there are "the sites most people visit regularly", and those are probably 75% HTTPS. It's relative, like most things.
Absolute numbers are pretty hard to define, as is the size of the Internet.
If the same server has two domains associated with it, does it count twice? Now consider a load balancer that points to virtual servers on the same machine. How about subdomains?
It may be a privacy risk, but it's certainly not a security risk for plain old blogs and static sites whose data is completely open to anyone who visits.
HTTPS is still a privacy risk because the hostname is sent in plaintext. Perhaps you get some "URL privacy", but you get no improvement in terms of "hostname privacy". HTTP only leaks the hostname once; HTTPS leaks it twice.
This can be prevented by (a) using TLS1.3 to ensure the server certificate that is sent is encrypted and (b) taking steps to plug the SNI leak; popular browsers send SNI to every website, even when it is not required.
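The SNI leak is easy to see for yourself. Here's a minimal sketch using Python's stdlib `ssl` module with in-memory BIOs (the hostname `example.com` is just a placeholder): the ClientHello the client emits carries the server name in cleartext, readable by anyone on the path, even though the rest of the session will be encrypted.

```python
import ssl

# Build a client-side TLS object that writes its handshake bytes
# into an in-memory buffer instead of a real socket.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

incoming = ssl.MemoryBIO()   # bytes "from the server" (none here)
outgoing = ssl.MemoryBIO()   # bytes the client wants to send
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")

try:
    tls.do_handshake()       # emits the ClientHello, then waits for a reply
except ssl.SSLWantReadError:
    pass                     # expected: no server is answering

client_hello = outgoing.read()
# The SNI extension carries the hostname as plain ASCII:
print(b"example.com" in client_hello)  # True
```

No network connection is made; the ClientHello never leaves the process, which is enough to show that the hostname sits unencrypted in the very first packet a browser would send.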
You should be getting fewer .txt results in the new update; part of the problem was that keyword extraction for plain text wasn't working as intended, so they'd usually crop up as false positives toward the end of any search page. I'm hoping that will work better once the upgrade is finished.