Let us be clear.
Running a local helper agent that accepts properly formatted requests (including authn/authz) to provide valid, expected functionality is a perfectly valid architectural choice on a full-fledged desktop computer, and we shouldn't throw out this capability.
The mistakes I see here are:
- UX Dark Patterns – making uninstall hard/duplicitous
- Helper process having security vulnerabilities - unauthenticated requests, providing unnecessary privileged operations like update/reinstall, etc.
- Giving the meeting host control over turning participants' video on/off
- Not acknowledging the mistakes quickly and fixing them fast. Being defensive and using the 'others do it too' excuse.
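On the unauthenticated-requests point: a minimal sketch (in Python, with hypothetical names and endpoints) of what an authenticated local helper could look like. Any web page can send a request to localhost, but only a caller that knows the shared token gets a response:

```python
import http.server
import secrets
import threading
import urllib.error
import urllib.request

# Hypothetical sketch of an authenticated local helper. In practice the
# shared token would be written to a file readable only by the current
# user, so the companion app can read it but a random web page cannot.
AUTH_TOKEN = secrets.token_hex(16)

class HelperHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Refuse any request that does not carry the expected token.
        if self.headers.get("X-Helper-Token") != AUTH_TOKEN:
            self.send_error(403)
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), HelperHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def call_helper(token=None):
    req = urllib.request.Request(f"http://127.0.0.1:{port}/status")
    if token is not None:
        req.add_header("X-Helper-Token", token)
    try:
        return urllib.request.urlopen(req).getcode()
    except urllib.error.HTTPError as exc:
        return exc.code

unauthenticated = call_helper()          # what any web page could do
authenticated = call_helper(AUTH_TOKEN)  # what the companion app does
server.shutdown()
print(unauthenticated, authenticated)  # 403 200
```

The point is not this exact scheme — it's that refusing unauthenticated callers is a few lines of code, so shipping a helper without it is a choice, not a limitation.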
Also, in an internal fight over resources/prioritization and plain philosophical alignment between fixing security vulnerabilities and UX funnel optimization (reducing the number of clicks), in a company like Zoom, I am not at all surprised that the UX side always won, the security side lost, and it took public pressure to shift the balance. Anyone here who has been in this situation knows what I'm talking about.
Unless the cost equation changes, it is hard to get business users to change their priority – from their perspective, they didn't understand what the heck their internal security guy was talking about.
It would have been one person/security team they normally don't interact with. So why would they listen to that guy over the UX product guy they interact with daily - the one they see as having built the hockey-stick growth in their customer NPS scores, and who wasn't happy about adding the extra click back?
So, the only workable answer I see is public outrage like this (still not very scalable or consistent) and, better yet, legal protections/regulations that make it extremely expensive for companies to ignore this stuff.
Running a local helper to take control from the browser is absolutely a bad architectural choice. The browser doesn't allow websites to open local programs without going through a user confirmation process for damned good reasons, and Zoom decided to put a lot of effort into circumventing that security measure to save users a click and so boost their conversion rates by a few percentage points.
This is inherently a security problem because websites can open URLs without user awareness or deceive users about the content of those URLs.
On Ubuntu, xdg-open phrases the checkbox as something like "always allow X program to handle foo:// URLs?", which is probably not comprehensible to the average user; more accurate phrasing would be "always allow websites to open X program?", which I think indicates why I'm so skeptical that this is a good option to give to users.
Unless installing an always-running service on my device is directly related to the intended functionality of your software, setting one up is unwelcome and deceptive, especially when it is done to work around existing security controls.
I have never been in the position to choose other than voicing my opinion; all video conferencing sucks for some reason or another, and it has never been anywhere near the most important thing.
I disagree with declaring all helper agents as dark patterns.
From a regular user point of view, it would be acceptable to have a helper agent as long as it follows:
- platform-provided background process methodology (for example, launchd could launch your process on demand when its socket is hit),
- and it is made clearly apparent that such a thing is installed on your system (say, via system preferences panel, via status bar icon menu, and via in-app preferences panel),
- and it does cleanly uninstall as part of a simple standard regular uninstall.
And from a technical/security point of view, it would be acceptable if it:
- has minimal necessary privileges and proper separation of concerns,
- and does only what it needs to provide user-expected functionality and doesn't do random egregious things,
- and has secure ways to allow only the expected/authorized callers to talk to it,
- and does not violate any platform guidelines or try to circumvent protections.
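On the "allow only expected/authorized callers" point, platforms give helpers real tools for this. A Linux-specific sketch (hypothetical socket path): a helper listening on a Unix domain socket can use SO_PEERCRED to learn the pid/uid/gid of whoever connected, and refuse anyone who isn't the expected user — something a localhost TCP port can't do.

```python
import os
import socket
import struct
import tempfile
import threading

# Linux-only: SO_PEERCRED returns the connecting peer's (pid, uid, gid).
SO_PEERCRED = getattr(socket, "SO_PEERCRED", 17)

sock_path = os.path.join(tempfile.mkdtemp(), "helper.sock")

listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(sock_path)
os.chmod(sock_path, 0o600)  # filesystem ACL: only the owner can connect
listener.listen(1)

def serve_one():
    conn, _ = listener.accept()
    creds = conn.getsockopt(socket.SOL_SOCKET, SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)
    # Only answer callers running as the same user as the helper itself.
    conn.sendall(b"authorized" if uid == os.getuid() else b"rejected")
    conn.close()

t = threading.Thread(target=serve_one)
t.start()

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
reply = client.recv(32)
client.close()
t.join()
print(reply)  # b'authorized' when run by the same user
```

macOS has an equivalent (getpeereid / launchd-managed sockets); the design choice of a plain HTTP server on localhost forgoes all of this.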
> It would be not, stop pretending acquiring consent from a statistical model counts as acquiring consent from the actual user.
I don't know what you are referring to here. Care to elaborate?
> Thing you wrote may make it acceptable for you, but certainly ain't sufficient for me.
This isn't about individual taste. Nothing I wrote above was about my personal taste. My point was about differentiating between the OS provided valid architectural mechanisms vs surreptitious dark patterns applied on top of it by an application developer.
I don't know what you are referring to here. Care to elaborate?
You make assumptions about individual user's consent from whatever bulk experiences you might have measured. Either that, or you didn't even measure anything and therefore you're just making things up about what's "acceptable."
> This isn't about individual taste.
Who said anything about taste? It's about individual boundaries.
> My point was about differentiating between the OS provided valid architectural mechanisms vs surreptitious dark patterns applied on top of it by an application developer.
First, that's a word salad. Second, after untangling it, I'm pretty sure you mean "if there's a mechanism in the OS that enables this then it's okay", which is even more absurd than the usual "if it's legal then it's okay." Look, even Zoom's "let's leave a tray icon there when you thought you quit the app, without putting up the honking huge notice a decent app usually does" is more about having a way to disavow ("see, we did leave a notification, lol") than about actually ethical design. That's the _essence_ of a dark pattern.
Seriously, though, you're being creepy and advocating pushing people's boundaries here.
LibreOffice has an agent that preloads Java binaries to make its startup time comparable to MS Office. There are valid uses for startup agents, please get over yourself.
That’s the meat of it: Zoom wanted an app feature macOS said was a no-no, so they coded up an insecure workaround. On iOS that would get your app pulled at the least.
I want an operating system with a permissions model which specifically forbids this kind of thing.
My Linux desktops are also always full of processes whose purpose I have to dig to figure out; unless I build my own distribution, it's hard to make anything feel satisfactorily under control.
Non-OS-provided applications are installed as packages and given package-level permissions which are easily audited and revocable (without forcing uninstall).
Apache has permission to start at boot, run in the background, and listen on 0.0.0.0:80,443. Photoshop has permission to write to files in $HOME and connect to network services while the application is running, optionally with explicit permission for each access. Adobe's update service can be disabled with a click.
> Unless the cost equation changes, it is hard to get business users to change their priority
With GDPR getting teeth (see recent fines of BA and Marriott) for security breaches, I think this is the beginning of that cost equation changing.
But also bear in mind this is a company who have someone with the title Chief Information Security Officer. If alarm bells didn’t start ringing for that person when this vulnerability was reported, then they likely aren’t the right person for the job. Especially as Zoom have customers in the EU so that person is also likely their nominated Data Protection Officer and should therefore be well aware of the privacy requirements imposed by GDPR and the penalties for a privacy breach (which someone secretly recording webcam footage would surely qualify as).
As for local helper agents accessible from the internet, you only need to browse Google Project Zero to see what a bad idea that is.