This is really Google's fault. They make it impossible to turn off automatic updates for Chrome extensions from their store. That would be kind-of-ok if they actually had a rigorous approval process. But they don't. The Chrome Web Store has become one of the prime vectors for malware. The only way to be safe is to exclusively download releases from the extension's GitHub repo and to manually install them.


In general, taking control away from users sets up all kinds of bad incentives. For example, automatic updates with no way to downgrade save vendors from having to compete with their own older versions. This means regressions in functionality or design can be pushed out with little recourse for users other than complaining online. This is compounded by ecosystem lock-in and lack of data portability. The software industry as a whole is heading towards treating users more and more paternalistically.


Conversely, before automatic updates web developers were stuck supporting Internet Explorer for the best part of twenty years. Many of the people using it had neither reason nor knowledge to update it, and it became the reason my parents' computers got riddled with malware.

There's a sensible middle ground here. Take the paternalistic approach that (generally) protects people like my mum. Add settings that allow people like you and me to turn off updates or roll backwards. Push the people controlling the updates (like the Chrome store) to better protect their users.


Users need to be motivated to upgrade. If their current software works sufficiently on the sites they care about, then they have no need to upgrade. If the sites themselves are enabling this behavior by bending over backwards to work with old browsers, then they are part of the “problem”.

I don’t like automatic updates and generally keep them disabled. Software upgrades tend to reduce functionality and instead force unnecessary UX redesigns on users, so I’d rather avoid them. I wish developers had the [EDIT: incentive] to release security patches independently from functionality changes, but few do that anymore, sadly.


It's been an age since I've worked in an agency, but back in the IE era, at least once a month a dev would ask to use a 'modern feature'. Something to support a new piece of design from the design team, or save hours or days of dev, or remove the need for hacky 'fixes' that could be done cleanly with modern browser support.

So off to analytics they would go. "X thousand users are using IE8. We're converting at X%. Removing support for IE8 just means these people will shop elsewhere and we'll lose X thousand pounds a month. You need to support IE8."

Believe me, I wish it was as simple as saying developers are "part of the problem," because it would be an easy fix. But try selling that (without a huuuuge struggle!) to the person who holds the purse strings.

Sadly the new features usually only came on new sites. It's much easier to push it through when you're not cutting off an existing income stream.


>I wish developers had the competence to release security patches independently from functionality changes, but few do that anymore, sadly.

You do realize it's not competence developers are lacking, it's resources that are finite, don't you?


Despite automatic updates, web developers are still stuck with Safari, IE, old Android browsers, and old Edge. Automation doesn't help with bugs and functionality if there are just no updates to be installed that fix bugs and bring new functionality.


>Conversely, before automatic updates web developers were stuck supporting Internet Explorer for the best part of twenty years. Many of the people using it had neither reason nor knowledge to update it, and it became the reason my parents' computers got riddled with malware.

The failure is not that of Internet Explorer, but rather the OS in which it runs, which has a faulty security model. No operating system should trust executables with everything by default.


It wasn’t faulty at the time since people were more concerned about protecting computers from users than protecting users from applications.

We all seem to forget that computing has changed drastically in the last decade.


I would say that "protecting users from applications" (or at least, external attackers) has been commonplace for maybe even two decades now, ever since the major malware 'plagues' of the early 2000s (pre-SP2 Windows XP) like Blaster or Sasser.

That said, in that era it was often assumed (more so than now) that software the user installed himself is trusted.


Internet Explorer was only replaced by automatic updates after its usage fell enough that sites stopped supporting it.


The major problem with Internet Explorer was that it was impossible to update without updating Windows, which cost money, so most people and organizations didn't do it.


I don't mind automatic updates per se as long as they're thoroughly checked and vetted. I'm not convinced Android and the Chrome web store do ANY checking / vetting. I have more trust in Apple's stores.

Vetting could be better with a lot of companies as well; remember not so long ago when Windows Defender decided a critical system file was malware and broke a ton of systems?

Verification. Vetting. Gradual release. Automatically disable extensions if they change ownership, or if there's suspicious activity on the account of the owner (e.g. a new login from another country).

And they need to take a MUCH harder stance on malware. Right now they're not even acknowledging there's a problem, let alone acting on it.


For any extension that makes any money, the solution is a deposit scheme.

"Google will withhold $1 per user of your ad revenue forever. If your extension is found to contain malware, you forfeit all the $1's. Decisions on malware'y ness shall be made by XYZ malware researchers."

Allow a developer to get back their $1 when a user uninstalls the extension, or the developer stops making the extension. Also give the developer a certificate at any time showing how many $1's you hold of theirs (they could use that to get a loan from someone willing to trust them not to distribute malware).
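A minimal sketch of that deposit scheme, assuming a flat $1 withheld per active user; the DepositLedger class and its methods are hypothetical and don't correspond to any real Google API.

    # Hypothetical sketch: $1 withheld per active user, returned on
    # uninstall, forfeited entirely on a malware verdict.
    class DepositLedger:
        def __init__(self):
            self.held = {}  # developer -> dollars currently withheld

        def install(self, developer: str) -> None:
            # Withhold $1 of ad revenue for each new user.
            self.held[developer] = self.held.get(developer, 0) + 1

        def uninstall(self, developer: str) -> None:
            # Return $1 when a user uninstalls the extension.
            if self.held.get(developer, 0) > 0:
                self.held[developer] -= 1

        def flag_malware(self, developer: str) -> int:
            # Malware verdict: the developer forfeits the whole deposit.
            return self.held.pop(developer, 0)

    ledger = DepositLedger()
    for _ in range(100_000):
        ledger.install("example-dev")
    print(ledger.held["example-dev"])          # 100000 dollars withheld
    print(ledger.flag_malware("example-dev"))  # all 100000 forfeited

With 100,000 users that's $100,000 on the line, which is what would make the forfeiture bite.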


Not really a solution, just the minimum price a buyer would need to pay.


True. But even the most profitable malware won't want to forfeit hundreds of millions of dollars for a popular Chrome extension.


On the other hand users are generally pretty poor at managing software themselves, and as long as it works they'll happily and probably ignorantly run something that is already insecure and needs an update.


> users are generally pretty poor at managing software

This is an assertion which begs many questions.

Who are these users? What do you mean by "generally"? What do you mean by "poor"? What do you mean by "managing software"? Which software specifically? Why is "managing software" hard? What are specific cases where this might be true? Is this statement falsifiable?

For instance, how does age, social background, education level, language, culture,... factor into the experience of "managing software"? Surely the problem can't be software itself in its entirety?

See, statements like these tend to break down once you start digging into the murky nuances and specificities of reality.

Moreover, accepting them at face value tends to reinforce a belief which isn't based on fact: that the users of digital technology can't manage their devices, and therefore shouldn't be confronted with managing their devices.

... which is then translated and implemented in interfaces and systems that simply lack the functionality that gives users fine grained control over what is or isn't installed.

Over a longer term, this promotes a form of "lazy thinking" in which users simply don't question what happens under the hood of their devices. Sure, people are aware of the many issues concerning privacy, personal data, security and so on. But ask them how they could make a meaningful change, and the answers will be limited to what's possible within the limitations of what the device offers.

A great example of this would be people using a post-it to cover the camera in the laptop bezel.

People don't know what happens inside their machine, they don't trust what happens on their machine, and there's no meaningful possibility to look under the hood and come to a proper understanding... so they revert to the next sensible thing they have: taping a post-it over the lens.

The post-it doesn't solve the underlying issue - a lack of understanding which was cultivated - but it does solve a particular symptom: the inability to control what that camera does.


It really doesn't beg those questions - we have 25+ years of data backing it up. People across the board are bad about running updates. I'm guessing you missed the mid-late 90s when things like buffer overflows started to be exploited and firewalls became necessities because even the folks whose job it was to run updates of vulnerable systems with public IPs on the Internet... weren't. Then came the early 2000s and all the worms running amok because people still weren't running their updates. Then the collective web development industry screamed in pain because things like Windows XP and IE6 just would not die.

The collective Internet has been through this before and (mostly) learned its lesson. People don't run updates when it's not shoved down their throat. And it's not a small segment of people. And it hasn't changed. Look at how many hacks still happen because of servers and apps that aren't patched for known vulnerabilities. Or the prevalence of cryptojacking which is still largely based on known vulnerabilities that already have patches available - indicating it's successful enough that people keep doing it.

Most users don't question what happens under the hood of their devices because they don't care. They have other things to care about that actually mean something to them besides the nuances of the day to day maintenance of their devices. There does not exist an effective way of making people care about things like this, let alone educating the masses on how to appropriately choose which commit hash of their favorite browser extension they should really be on. How many security newsletters do you really expect the average person to be subscribed to in order to make informed decisions about these things?

Hell my "Update" notification on Chrome is red this morning and I'm at least in the top 10% of security-conscious folks in the world (it's really not a high bar).

I'm not saying automatic updates are without their problems - I'm in a thread on HN about that exact thing. But trying to claim it's somehow about sociodemographic issues and the answer is solving that and going back to selectively running updates is just ignoring the lessons of the past.


I, and everyone else I know, do not install updates to our software in a timely manner unless we actively need a feature.

Users are "I, and everyone else I know".

Generally is "unless we need a feature".

Poor is "do not install updates to our software".

Managing software is "install updates".

Software is any software we use that provides updates, which is all of it.

Managing software is hard because doing it manually would require checking the website of every piece of software you've ever downloaded at regular intervals, where "regular" could mean as often as every few minutes for security-critical tools.

If I ever downgrade my software and lock it to a specific version, I am now managing it manually, and all of the above applies.

I honestly don't think there are unquestioned assumptions here, because the task of keeping security-critical software up to date manually is nearly impossible for any user.


I honestly am not at all sure what you mean by much of that.

Demographics don't change the fact that if you don't automatically update software, many users simply won't. That's bad.


... in the usual pedantry of HN your use of "poor" was interpreted to mean socio-economic, rather than... "just bad at something"...


I don’t see how one could parse ”On the other hand users are generally pretty poor at managing software themselves” and assign that interpretation to “poor”.


I agree, but the user who responded to me seemed to talk about demographics as if I had meant "poor" as in not having much money.

The internet is global, sometimes I think things get lost in translation.


That's a reductionist reading of my comment.

I'm challenging your initial assertion that "people are poor at managing software". That's not enough of an explanation to support the second part of your claim:

> and as long as it works they'll happily and probably ignorantly run something that is already insecure and needs an update.

Are they poor at managing software because they are ignorantly running insecure software? Or are they ignorantly running insecure software because they are poor at managing software?

The replies so far take the entire context out of the picture and reframe the issue as "Users use their devices the 'wrong way'" and claim this can only be solved through technological advances.

I'm here questioning and challenging those assertions.


Oh I see. That's... weird, but thanks for letting me know.


That would cover users who are poor at managing software. Being able to turn them off would require someone to be good at managing software. Why remove control from those users?


I don't want to be saying that we should remove control, but I actually do think it's reasonable to. Even on a single-user device, security issues are not isolated. An infected machine will likely be used for things like spam and DDoS.

If you make something available for people to toggle that improves their experience, people are going to take advantage of that even if they don't really grasp or decide to ignore the consequences. In the case of updates the improved experience is not being nagged or forced to restart an application or the whole OS. And unfortunately the only way to really gatekeep that control to people who know what they're doing is giving it enterprise pricing.


I want to think that folks who would choose that option would be responsible, but the amount I hear from other developers who defer updates on Windows 10 to the maximum (1 year...) and still are upset when they have to reboot makes me think that even experienced users present a risk.


Users never upgrading their software certainly also leads to security problems though, it's not a solution, and it is reasonable to try to set things up so this doesn't happen.


Wouldn't an easy solution be to turn auto updates on by default, and warn users that turn it off that they are opening themselves up to potential security issues, and to do so wisely?


The issue comes when an auto update regresses something that the user relied upon. As long as the automatic update has a 'downgrade' option that's tenable but most of the solutions out there make downgrading difficult.

I prefer automatic updates that are presented to the user for action; sadly, feature update/release notes are often hidden or content-free (cf. Google's apps' updates on the Play Store), and the downgrade path varies heavily with OS (easy on Linux, impossible on iOS).


Good point, being able to roll back to a specific release would be very handy.


Sure, that'd be one solution. I wonder how many users would end up with auto-updates off, and how many of them would actually understand the risk.

Many users are going to change configuration because some tutorial on the internet somewhere tells them to do it, without totally understanding what they are doing, and are unlikely to revisit this configuration again ever. (Heck, I have done that with some configurations I don't totally understand, and don't even remember what I did and will never revisit to change back).

But it might be a fine way to do it.

But there is a shift in the analysis from "can we blame someone else [users who ignored our advice] if the ecosystem ends up very insecure?" to "how do we actually keep the ecosystem secure, not just have someone to blame when it isn't?" Doing the latter while also providing for user flexibility and autonomy can be a challenge for sure.


I don't think turning off automatic updates would be the right way to deal with this. See: Windows. If a piece of software becomes malware it needs to either be forked or retired completely; running unmaintained legacy versions of software is not sustainable.

I have plenty of things I want to complain about when it comes to Google's user-hostile behavior, but mandatory automatic updates is definitely not one of them.

If you're a technical user and really know (or really think that you know) what you're doing there are ways to effectively freeze a given version of an extension.
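One way to do that (a rough sketch, not an officially supported flow): fetch the extension's CRX once via the commonly documented Web Store endpoint, unpack the embedded ZIP, and load the result with "Load unpacked" on chrome://extensions so it never auto-updates. The extension ID, output directory, and the endpoint's query parameters below are placeholders/assumptions and may change on Google's side.

    # Rough sketch: fetch a CRX once and unpack it so it can be loaded as an
    # unpacked extension that will not auto-update. The download URL is the
    # commonly documented Web Store endpoint and may change; the extension ID
    # is a placeholder.
    import io
    import urllib.request
    import zipfile

    EXTENSION_ID = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"  # placeholder ID
    CRX_URL = (
        "https://clients2.google.com/service/update2/crx"
        "?response=redirect&prodversion=91.0&acceptformat=crx2,crx3"
        "&x=id%3D" + EXTENSION_ID + "%26uc"
    )

    def download_and_unpack(url: str, dest_dir: str) -> None:
        """Download a CRX and extract the ZIP archive embedded in it."""
        data = urllib.request.urlopen(url).read()
        # A CRX is a signed header followed by an ordinary ZIP; find the
        # ZIP local-file signature and extract from there.
        zip_start = data.find(b"PK\x03\x04")
        if zip_start < 0:
            raise ValueError("no ZIP payload found in CRX")
        with zipfile.ZipFile(io.BytesIO(data[zip_start:])) as archive:
            archive.extractall(dest_dir)

    download_and_unpack(CRX_URL, "frozen-extension")

The obvious trade-off is the one discussed elsewhere in this thread: you also stop receiving security fixes until you repeat the process.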


Or just add permissions and ask the user when the extension asks for new ones? E.g. permission to talk to the outside world, which something like TGS shouldn't need just to do its job.


It already does. If a new release of the extension requires new permissions, it gets disabled until the user gives consent.


I don't even apply automatic updates to my OS (e.g. macOS Big Sur). I'd rather not guinea-pig the latest updates, and they usually don't add all that much value for Chrome extension releases either, so a way to turn off automatic updates in Chrome is highly desirable for me.

Downloading and unpacking from GitHub is a pain; I'd need to do this on each of my computers separately.


This is a terrible security practice.

Switch to Chromium and use a package manager to stay up to date. Don't freeze updates, especially on your browser.


I work in software. I know the dangers of a 0-day exploit. I also know the dangers of an x.0 release of software.

Security is often in tension with convenience/usability (as in this case).

Concretely: I don't update to the latest MacOS on day of release. I do update after a few weeks of "no significant issues reported" (or I'll update manually faster if I learn of a serious exploit). I still haven't updated to Big Sur as some of the software that I rely on doesn't work on Big Sur yet, so I'm on the latest patch of Catalina.


I'm not going to update to a new MacOS "named" release until it's been out for a while and probably has a patch release or two, agreed.

But I install MacOS patch releases as soon as they are offered. It has never caused me a problem I am aware of, and I don't want to miss out on security patches, or even just bugfixes and perf improvements.

Heck, I actually just upgraded a MacBook that was still on 10.12, which was EOL'd. But I upgraded it because it was EOL'd, and wasn't getting patch releases for security fixes, and I want those patch releases as soon as they are released!


You should let clients and users know that you care more about convenience than security so that they can make an informed decision about whether to trust their data with you.

I don't know what x.0 software updates you're talking about (Chrome or Mac), but my comment never mentioned any. You don't seem to know that browser vendors don't really do those like OS vendors do. Either way, you can still avoid those while getting security updates.

In my memory, there hasn't been a breaking auto-update in Chrome in years, but there have been hundreds of 0-days. The numbers don't really work out for the tradeoff you claim to be making.


>The only way to be safe is to exclusively download releases from the extension's GitHub repo and to manually install them.

Or not use Chrome.


The fact that Google has not addressed this gaping security hole in Chrome is borderline criminal.


You can do better to voice your displeasure by not stretching credulity.


It's hyperbole. Welcome to the Internet.



