Face ID and Touch ID for the Web (webkit.org)
757 points by Austin_Conlon on Oct 19, 2020 | 355 comments


So happy Apple decided to go with an open standard here rather than something proprietary. This is good news for the FIDO2 ecosystem and I hope this leads to far greater support for FIDO2 authenticators of all types.

There is another world in which Apple just pushed 'Sign in with Apple' and created yet another federated identity provider rather than true, 'secure element'-based FIDO2 authentication.


"Sign in with Apple" requires a developer account with Apple.

Having seen Epic's developer account terminated by Apple, I would definitely stay away from any "Sign in with Apple".

(FWIW, the only 2FA with "Sign in with Apple", if you don't own any Apple hardware, is SMS.)


It's pretty clear Epic set out to intentionally get their Apple developer account terminated so they would have standing to sue, so I would not draw too much inference from that.

That said, it's generally true that any dependence on a platform is a form of risk. There are documented examples of Google kicking people out of their ecosystem unexpectedly too.

Federated sign-in schemes may be a good idea if they help your users create accounts and authenticate more easily, but it certainly seems smart to offer more than one, including your own email-based option.


> It's pretty clear Epic set out to intentionally get their Apple developer account terminated so they would have standing to sue, so I would not draw too much inference from that.

Actually, it's clear that they intentionally got their _app removed_. The termination of their entire Apple account was a step I wouldn't have expected Apple to take because it underlines the fragility of their authentication system. Now, players who signed up for Fortnite on their phones can't continue to play anywhere else, probably making them regret using Apple sign-in in the first place.

I generally consider any federated login that doesn't have an external email address attached to be fleeting, possibly disappearing out of the blue. I've lost some minor accounts when I deleted my Facebook account with services that didn't offer an email alternative. Developers, at least make adding an email/username and password optional once I've signed in with another service account, because that account might just disappear altogether one day!


> Actually, it's clear that they intentionally got their _app removed_. The termination of their entire Apple account was a step I wouldn't have expected Apple to take because it underlines the fragility of their authentication system. Now, players who signed up for Fortnite on their phones can't continue to play anywhere else, probably making them regret using Apple sign-in in the first place.

- Apple never indicated that they’d remove Epic’s SIWA support, and there have been reports that Apple went out of their way to make sure the support would survive the account being terminated [0]

- Apple’s developer agreement allows them to terminate the account of an offending developer after 30 days; Epic’s account was terminated 45+ days after breach.

[0] https://daringfireball.net/linked/2020/09/29/epic-games-unre... (with the caveat that John Gruber shills for Apple, but also has some good contacts within)


>John Gruber shills for Apple...

A shill is a person who pretends to give a neutral endorsement but has an interest in the deal.


Do you think, honestly, that anyone who isn't as critical as you'd like is a shill?


What if I'm running a site that is against Apple's beliefs? E.g., you run an adult site. Would you risk it?


Loads of companies - especially financial processors - don't want anything to do with adult sites, so if you're in that industry you're already in for a challenge.

I've worked on a project for the tobacco industry; similar story.


Apple doesn't allow adult content in their App Store, but they don't do anything to prevent adult sites from loading in Safari, even on iOS--never have. They seem to draw a distinction between their walled gardens and the open web. I'll admit I'm not an expert on Face ID and Touch ID for the web, but to me it looks like a feature of the Safari browser, not a walled garden.


On the contrary: although they don't restrict all uses within Safari, there are rules against the use of Sign In with Apple depending on what you are doing, including one against pornographic websites, as mentioned.

https://developer.apple.com/sign-in-with-apple/usage-guideli...


They use WebAuthn, which is an open web API: https://developer.mozilla.org/en-US/docs/Web/API/Web_Authent...

A client implemented with this also works on Android and on any computer that supports some kind of hardware authentication mechanism, like fingerprint or face recognition.
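
For anyone curious what that looks like in practice, here is a minimal registration sketch (the rp/user values are placeholders and the challenge is generated locally only for illustration; a real deployment issues the challenge from its server and verifies the returned attestation there):

    // Runs on a secure (https) page, typically from a "register" button handler.
    async function registerCredential() {
      const credential = await navigator.credentials.create({
        publicKey: {
          challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
          rp: { name: "Example Site" },
          user: {
            id: new TextEncoder().encode("user-123"),            // opaque, stable user handle
            name: "user@example.com",
            displayName: "Example User"
          },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
          authenticatorSelection: { userVerification: "required" }
        }
      });
      // credential.response.clientDataJSON and attestationObject go back to the
      // server, which stores the credential id and public key for later logins.
      return credential;
    }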


>It's pretty clear Epic set out to intentionally get their Apple developer account terminated so they would have standing to sue, so I would not draw too much inference from that.

Nevertheless, the suspension still clearly highlights the fealty you are expected to give Apple as a dev on their platform, or else.


And Epic is neither the first nor the last of those who want to pick this fight with Apple, even if the others can't.


Apple isn't perfect, and yeah, you might run into a situation where you have to sue. Them taking away the ability for half your users to sign in to your app while your dispute makes its way through the courts is seriously problematic.


I don’t think it’s ever been shown that Apple did this at all, let alone maliciously.

If anyone has a source to the contrary, we need it here please. And I mean a documented source, not hearsay.

Edit: there’s discussion below.


As a user I would prefer no account in most cases. As a distant second, I would prefer the convenience, security, and privacy of Sign in with Apple over Google, Facebook, or the headache of managing yet another web account.

As a developer, I use my preferences as a user to steer my choices, but recognize that the world doesn't revolve around Apple so would allow other options.

> Having seen Epic's developer account t...

Since I have zero need to deliberately violate Apple's App Store policy, I don't worry about this overmuch.


> Since I have zero need to deliberately violate Apple's App Store policy, I don't worry about this overmuch.

That may be true today, but their policies are a moving target. Who knows what they'll be like in a year's time?


> That may be true today, but their policies are a moving target. Who knows what they'll be like in a year's time?

Likewise Google, Facebook, or any site/API that a developer deals with on a daily basis.

I wouldn’t bet my company on any sign in with _____ service. I just feel the trade-offs with Apple's sign-in versus Google/Facebook are less bad. With any of the services, I might do something that causes me issues down the road. With Apple at least I'm not selling out my users immediately.


This is why (as a customer) I always prefer "yet another web account" vs. "sign-in with ____" - yes, it's a hassle to manage a different account for every different thing I use (although not much of one with a decent password manager), but I get the advantage of not risking losing access to _everything_ because <list of all the things that might go wrong with a sign-on provider>.

Single-sign-on means single point of failure.


Is that still true when those accounts are literally just backend-side caches of your shipping+billing address and credit card details?

I had to create an "account" to order a pizza. That account does nothing other than to make ordering pizzas slightly faster. I would be literally not inconvenienced at all if I had to input all that information for each order (because of AutoFill). I would also be literally not inconvenienced in the slightest if I lost access to that account and had to create another one.

To me, that's the type of "account" for which "Sign In With Apple" is a perfect solution. The type of account where it's only the provider insisting on having an account in the first place, and you would get on just fine without one if they'd let you.

(Or, for an even more annoying example: web forums that you have to "create an account" for to read certain posts, or download attachments on those posts. Thanks for making me take five minutes to verify my email just so I can click a link on a page!)


I suppose it depends entirely on what the account is securing.

My bank? Needs to be pretty independent of everything else.

Some random web store or something like Kickstarter? The only reason I care about those accounts is to track orders or for the convenience of not having to re-enter credit card info. The risk of losing my Apple account because ??? and dealing with recreating those accounts is negligible. In many cases I just use guest login for exactly that reason.

The convenience accounts are the ones I might use Sign in with Apple for. The bank? Not so much. But I won't trust Google or Facebook with even the convenience accounts.

I would also use something like this Touch ID on the web feature for 2FA, particularly if the only other options are SMS or email.


> I wouldn’t bet my company on any sign in with _____ service.

With Google Sign-In, you get an email address. If push comes to shove, you rip out the Google code and email everyone a traditional password.


You could say that about anything -- nothing is completely static -- but it's worth mentioning that Apple vs. Epic is a counterexample; Epic wants Apple to change their policies (a lower-than-30% cut) while Apple wants to maintain the same structure it has had since the inception of its App Store.


> You could say that about anything -- nothing is completely static

1. Contracts are a thing. You can draft a contract with your vendor that guarantees certain terms for a certain duration.

2. You should be wary of wandering into commitments (including de facto commitments e.g. "vendor lock-in")--there are plenty of good reasons to do so, but one should make sure to properly consider the cost.

The problem is that doing business with Apple can make or break a company--companies can scarcely afford not to do business with Apple. I'm not an economist, but this seems pretty monopolistic (which isn't to say Apple should be broken up, but perhaps more tightly regulated).


It is more of a monopsony[1] than a monopoly. If you are looking to buy a smartphone, there are still options. If you are looking to sell software for smartphones, Apple is by far the most lucrative platform, and they have a lock on publishing software for iOS. It is very difficult in the US to survive as a software developer for Android only.

The anti-trust frameworks in the US are based largely on monopolies, and there is little in the way of legal precedent for protecting sellers in a monopsony market. If Epic wins their legal battle, it will likely set a precedent for later cases.

Google and Facebook are also largely on weird legal ground. They have more or less exclusive access to large networks of users, which is hugely disruptive to the advertising market.

[1] A monopoly is a market where there is only one provider. A monopsony is a market where there is only one buyer.


Perhaps I should have said “anti-competitive”. As you note, it’s very difficult to survive as an app developer on other platforms, and that’s bad for users ultimately.

I would like to see the US crack down on anti-competitive behavior in general, but especially in cases where companies are deriving value by gate-keeping some large network. To this effect, I think Apple is a relatively minor problem compared to social media networks. Consumers have no meaningful choice (hopefully I don’t need to elaborate on why Facebook vs Twitter is a false choice) and it allows social media companies to get away with all kinds of awful behavior, but especially the ability to steer the course of democracy (by determining at scale who is exposed to which ideas and at what potency) and then selling that as a service to the highest bidder or even serving as an attack vector for other states to steer our democracy (or other democracies for that matter). A monopoly over the flow of speech is intolerable for a democracy, and at least in America where conservatives are concerned about censorship of conservative speech by Silicon Valley progressives and liberals are concerned about Russian manipulation, it seems like a naturally bipartisan concern.

Edit: genuinely wondering what downvoters are objecting to in particular? Do you not believe that social media companies have a monopoly over their own networks? Do you disagree that they can and do steer public opinion and thus public policy? Do you disagree that this is a bad outcome? Perhaps it’s a bad outcome but regulation is an ineffective solution (e.g., libertarianism)? Educate me.


Apple sits between users and developers. If it's a monopsony on one side, it's a monopoly on the other side.


Not remotely. Users can buy Android phones.

Having ~50% market share is not a monopoly.

Even suggesting Apple has a monopsony, as I did above, is a stretch and is only the case if you define the market based on paying users.


The monopoly/monopsony distinction is pedantry. The important point is that consumers and developers suffer because one company controls access to the lion’s share of a market. That point can be criticized and debated, but litigating semantics makes for boring reading and anyway it’s off topic.


> The monopoly/monopsony distinction is pedantry.

Expecting people to be in the general ballpark of the definition of a thing isn't pedantry. It's kind of hard to have any sort of meeting of the minds when people ignore even the basic premise of a term.


> Having ~50% market share is not a monopoly.

Most competition regulators disagree. 50% of a market is well above the threshold for both the US and EU to consider a company to be a monopoly. They usually treat the cut-off as around 20%.


How does this argument not apply to Facebook and Google?

Hint: It does, 100%. Epic could pick a fight with Google, tomorrow, that culminates in the same exact outcome.


It isn't productive to establish a defense against an arbitrary future that turns on you. Spend those brain cycles focusing on your users and building a great product. Choosing Sign in with Apple is great for Apple users.


It absolutely can be productive. So many people have done as you said and had the rug ripped out from under them by $ARBITRARY_PLATFORM_DECISION (cough cough YouTube) that if they had thought through the risk they were taking on by going with one platform, they might still be in business.


I heard a guy died getting struck by lightning once. I am _never_ leaving my house during a storm again.


Maybe so. But how is this different than saying, "I have nothing to hide, so I'm not worried about having my car searched?" It's a somewhat pragmatic point of view but I wonder if it is a good idea in the limit.


So what are you saying then? We shouldn't agree to a system that allows our car to be searched because under some limit it might be bad for us? What are we comparing this to in the Apple case? What is the equivalent of the "car search"?


"I have nothing to hide, so I'm not worried about having my car seized"


> It isn't productive to establish a defense against an arbitrary future that turns on you.

It isn't?

Isn't that like, the hallmark of intelligence?


Against an arbitrary future? No. Against a likely future? Yes.

It is not the hallmark of intelligence to come up with every possible thing that could go wrong and build defenses against it just in case someone somewhere does something counter to their wellbeing.

If you use any third-party login system, thinking about how you can migrate users between accounts, or validate that they are who they say they are, is perfectly normal. What if Twitter OAuth goes down? What if your Google dev account is suspended by some automated system for some reason? You should definitely have some kind of plan for that.

Spinning off contingencies in case Apple changes its developer policies out from under you and suddenly no one on iOS can log in? Too specific and arbitrary.


I think you watch too much Doomsday Preppers


> or the headache of managing yet another web account.

Honestly, I find this to be the distant second behind no account. I treat my password manager as my SSO provider in some sense.


I've had an issue with Sign in with Apple where using an autogenerated email ruins certain support interactions. One of them was cancelling a subscription, which was eventually resolved (they required me to email their customer support, which I couldn't figure out how to do using the autogenerated email address created by "Sign in with Apple").


While it’s frustrating and worth pointing out of course - what really happened there was a bug in the service provider’s support process!


It was actually a dark pattern, not a bug.


It would be helpful if you could let us know how you managed to send that e-mail.


You can see the email that Apple generates:

https://support.apple.com/en-us/HT210426


Yes, but the situation I asked about is different. It is not about not seeing the e-mail; it is about sending from the private e-mail. Example:

You subscribe with `private-XXXX@apple-id.com`, which relays to `foo@gmail.com`. Customer support asks you to chat with them from the same ID you signed up with. Your e-mail client sends email from `foo@gmail.com`. How can you quickly respond to customer support from `private-XXXX@apple-id.com`?


I didn't, I found an alternative that was specific to the service.


> convenience, security, and privacy of Sign in with Apple over Google, Facebook

Unless of course they block you for whatever reason. Then the process of getting back access has nothing to do with convenience, security, and privacy...


> I would definitely stay away from any "Sign in with Apple".

I would stay away from any "Sign in with.." service as a user and as a product owner. You're effectively giving away major control of your users to a third party.


I think "Sign in with Apple" is unique in that Apple allows users to hide their email address (https://support.apple.com/en-us/HT210425), which makes it extremely hard to migrate away from, unlike other federated login systems where you can at least get users' email addresses, allowing you to create a proper email+password login later on.


I think it is very private. The third-party app cannot sell the attached email address to others. But I agree that if you sign up with the third party using your Apple ID, they will not know you, which may end up creating a second account. If they don't have an account-linking feature, it will be messy.


Apple announced an account linking feature as a requirement at WWDC this year.


How is that? It's the developer's choice if they want to let me add a second login to the same account.


As a product owner, why wouldn't I want to piggyback on the millions of dollars of R&D + security that the big companies have put in?

And as a user, why would I trust my password to the website that rolled their own authentication over the big companies?


> why would I trust my password

Other people have mentioned this, but if you're not reusing passwords, this shouldn't be a concern for you. Don't reuse passwords!

On the security front, companies that are implementing 3rd-party sign-in can still get hacked and leak your personal information. If that information is supplied by Apple instead of you, it's all the same. You don't automatically get better security because you're using 3rd-party sign-in; you only get better security if you're being forced to stop doing something bad (reusing passwords), forced to start doing something good (enabling 2FA), or if Apple is providing less information than an account would ask for during signup.

To Apple's credit on that front, they do mask your email, which is a legitimate privacy improvement. But it would be better for that to be a generic service that allowed you to generate an anonymous email at any time for anything, rather than a perk that's hardwired into an anti-competitive scheme to make it harder for you to migrate devices or change services.


As a product owner, because what if that service provider decides to (mis-?)interpret something you did as against their TOS and revoke your access to their sdk, thereby making it more difficult for many of your users to log in?

And as a user, what if Facebook/Google/Twitter/Apple decides you've violated their ToS, blocks you from your account, and now you can no longer log into any of the sites you've linked with one of those providers?

I know this is a bit extreme, and for many people, the risk is totally worth the reward, but I think that is one of the chief concerns many people have with trusting a third party for all their sign-ins. It's a single point of failure.


> why would I trust my password...

Do you use the same password everywhere, by any chance? :)


I used to, but some websites stopped accepting "abc123"!

Thankfully, it's still fine for my bank login.


As a user, I recently deleted an account with a provider (call it X). I used "log in with X" on a few services and now I can't access them anymore.

This situation is solvable by implementing "forgot password" and storing the user's email, but many services don't, and with a system similar to Apple's, where the email or phone number is masked, you can't do anything.


This is a MASSIVE selling point, BUT when your users contact support trying to regain access to their account and can't describe what their email address might be... trust me, there is pain in your future.


Users can just tell the vendor what their real email address is when they want support, until Apple bans vendors who ask for contact info.


It's true that OAuth is very secure, but I wouldn't assume major sites are rolling their own authentication. The standard password-hashing methods and workflows are all well known and implemented.

Even if the local auth system were poor, an email/password combo is simpler and faster and doesn't leak data to social providers. There's no need to remember which provider you used, or log in to them first, or worry about loose permissions, especially with future changes. It also limits the blast radius in case your social account ever gets compromised, and it's useful for completely anonymous and disposable accounts.
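
To illustrate the "well known" part, here is a minimal hash-and-verify sketch using Node's built-in scrypt (function names are purely illustrative; in practice you'd more likely reach for a maintained library like bcrypt or argon2):

    const crypto = require("crypto");

    // Hash a password with a fresh per-user salt; store the result as "salt:hash".
    function hashPassword(password) {
      const salt = crypto.randomBytes(16);
      const hash = crypto.scryptSync(password, salt, 64);
      return salt.toString("hex") + ":" + hash.toString("hex");
    }

    // Recompute the hash with the stored salt and compare in constant time.
    function verifyPassword(password, stored) {
      const [saltHex, hashHex] = stored.split(":");
      const hash = crypto.scryptSync(password, Buffer.from(saltHex, "hex"), 64);
      return crypto.timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
    }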


Well one issue as a user is trying to remember which third-party auth was used when I first created the account (did I use Facebook? Google? Twitter?).


Well, you can get access to the email on all three, so it's not a huge problem for the vast majority, who only maintain a single email address.


Mainly to prevent the big companies from knowing every website you’ve ever visited.


This is always a business decision, imo.

It's similar to publishing on Medium (or HuffPost, from a few years ago) as opposed to your own blog. You'll get more reach in the former case, but have much less control. For that matter, it's similar to serverless vs. coding everything yourself and hosting it on a server in a rack somewhere.

Engineering is all about the tradeoffs.

So, how would I make that call? I'd think about how much it mattered to have control of the auth experience vs the easier onboarding of customers. I'd think about the risks of having the auth yanked out from under me (I'm not aware of any cases where this happened). I'd think about the value add of auth to my app; in most cases it's slim to none. I'd also look at what the auth provider allowed me to know about the user when they deliver an authenticated user to my application.

I think in general auth isn't a huge differentiator for most applications, and offloading it to a social provider, as long as a chunk or most of the target market has an account (GitHub for devs, LinkedIn for sales folks, Google for, well, people with an email address), is a good choice.

Don't forget, you can provide both; I've worked at companies with only social login, but don't think that's very common.

Disclosure, I now work for a company which provides auth software (link in my bio).


It's extremely cheap to syndicate to Medium plus other venues. Not so for Sign In With.


What is expensive about "sign in with"? There are lots of libraries that will make it pretty simple, though the configuration can be hairy (Google, looking at you).

It's more expensive than syndicating content, sure, but I think you want to compare the relative costs within each pair of options, not across the pairs:

Writing on Medium vs. writing on your own blog

Using social sign-on vs. building your own auth system (hopefully using a library)


As a user, I'm much happier giving my e-mail address to Apple, who already has it, than to 100 other websites.


I always leaned that way, but from the security and compliance side of things it's a heck of a lot easier for us when our staff can sign into 3rd party services with their company Google account that has strict security and 2 security keys in place vs whatever the luck of the draw may be with each service that we want to use. It's a nice alternative to the "SAML SSO only available with $10k / user enterprise account" routine.

But as long as that service has quality 2FA options with (ideally) WebAuthn, it's much less of a concern.


So, if you're a service that posts memes to Twitter or Facebook, you should always use email and SMS two-factor, and no 'sign in with Facebook/Twitter' links?


> You're effectively giving away major control of your users to a third party.

You may also get users that you wouldn't have otherwise. It's a trade-off. Lowering friction tends to increase conversions.


Even if you don’t like Apple’s actions in the whole Epic drama, one thing that’s clear from it is that Apple probably won’t terminate your developer account like this unless you beg them to as part of a major PR stunt.


But they can expand the prohibited uses clause in the future if they see fit. https://developer.apple.com/sign-in-with-apple/usage-guideli...


Contracts can be changed based on pre-agreed terms, film at 11

At some point, people need to do business. Worrying about hypotheticals leads to paralysis.


>> Apple probably won’t terminate your developer account

Or probably they will terminate it if they don't like your business


For what it's worth, it's been said (by anonymous sources, no one on the record) that Apple did not threaten to terminate Epic's "Sign in with Apple" accounts/features, and they spent extra effort to maintain their access to that after their account was terminated.

I do not know if other terminated accounts get this "luxury".


Epic did say it was going to be terminated, presumably as part of the overall account termination. They then updated that it would continue to work.

https://www.theverge.com/2020/9/10/21431396/epic-sign-in-wit...

Apple commented they weren’t doing anything to stop Sign In with Apple working, but I have to wonder if there’s a lie of omission in there. Like “We aren’t doing anything deliberate to stop it, but it’s going to stop as a side effect of terminating their developer account.”

Either that or Epic was lying outright.


Considering the court itself has said Epic has lied about other details, I’d lean towards that.


Unnamed sources have said that Epic was outright lying: https://daringfireball.net/linked/2020/09/29/epic-games-unre...

> multiple sources at Apple told me Epic’s claims were simply false. There was never a September 11 deadline for their SIWA support to stop working, and in fact, Apple’s SIWA team performed work to make sure SIWA continued working for Fortnite users despite the fact that Epic Games’s developer account had been revoked.



So the other question is: is this standard behavior, or a special circumstance because of Epic's high profile?

Or more to the point, if Apple terminates your developer account, do you think Apple's SIWA team will "perform work to make sure SIWA continues working" for your users?

If work is actually required, I'm not optimistic.


That is the problem with KOLs (key opinion leaders): they spread wrong information and it spreads as fact.

Gruber was also the one who spread the claim that Safari's JavaScript is so much faster than its competitors because Apple has a custom SoC and uses specialised ARM instructions to speed it up. And now even HN has a high percentage of people who believe that without even thinking about it.

And like I said in the previous thread on HN about misinformation: none of these KOLs, including most media / publications, care to fact-check or to correct their previous wrong reporting.


You were right. And this is not the first time Tim Cook's Apple has been caught outright lying, or lying by omission. And not in any good faith.

This happened in both the Qualcomm and the IMG cases.

Tim Cook's whole new PR and marketing approach is way worse than Steve's era in my book, along with their business strategy and behaviour.


It's weird to see you using this example, given that Apple specifically went out of their way to continue operating Sign in with Apple for all of Epic's users. Can you talk more about why you would stay away from it when developing for Apple platforms?


Note that Epic claimed that "Sign in with Apple" was going to get terminated, but Apple says that wasn't true:

https://www.theverge.com/2020/9/10/21431396/epic-sign-in-wit...

And IIRC it all still works anyway, so this sounds like Epic either being confused or lying to make Apple look (even more) like the bad guy.

And bear in mind that anyone using any third-party auth (like Twitter, Facebook or Google OAuth) could have their developer accounts terminated for violating the rules, same as Apple. I guess the key takeaway here is "if you're relying on someone else's services, don't flagrantly disregard the service's rules publicly in order to start a lawsuit and then try the case in the court of public opinion", but I think that's good advice for any service.



Apple did not terminate Epic's SIWA account, and several journalists have sources within Apple that say Apple never sent the message Epic claimed to have received saying access was going away.


https://developer.apple.com/forums/thread/123774

Not the only ones that randomly get their accounts terminated.

Based on how Apple has -insane- fragmentation and security for different aspects of the company, I doubt any employee that isn't directly tied into the store accounts would know the whole details. (Source: my GF worked for the department that did art/design for the Apple stores; no one had access to their room, and they had more secure rooms that only a few employees had access to.)

FWIW, Google has the same issues.

Don't trust a company with an account if you can't get a human on the phone to review shit.


I use Sign In with Apple everywhere I can (so many of my passwords are in haveibeenpwned datasets), and if Apple blacklisted a provider I use, I’d expect the service to email me to migrate to their own email/password identity provider (if I didn’t hide my email from them with SIWA), with a link to the migration process in the email.


Same except that, if Apple blacklisted a provider, I'd want to know why they got blacklisted. At this point, I trust Apple as a neutral third party more than I trust most other companies since Apple isn't incentivized to sell my info.


Agreed, with the caveat that we need some regulation through legislation to codify safeguards around identity providers, their responsibilities, etc.


> Not the only ones that randomly

Not really random now right...


That does not necessarily have anything to do with SIWA, however.


Tim Sweeney produced a letter saying that Apple would terminate their access: https://twitter.com/TimSweeneyEpic/status/131134525357683097...


That’s a pretty misleading interpretation of what Apple said or even what Tim Sweeney said in the tweet...


How so?


> Apple is entirely in its rights to terminate Epic Games’ developer account and all related functionality, but SIWA will continue to function for two weeks.

Ok, let’s translate it:

1) Apple believes it’s entirely in its rights to terminate Epic’s dev account and is doing so

2) Apple also believes it's in their right to kill all related account functionality, but they aren't doing that, specifically with SIWA, for at least two weeks.

3) they don’t specify what will happen after two weeks, as that likely depends on several factors. We later learn that they extended it indefinitely.

So there’s multiple possible scenarios:

A) Apple really is just threatening as you indicated, but not being blatantly direct about it. Possible, but not the only possibility, and not usually their style from what I’ve seen for something like this, but who knows. I feel if Apple really wanted to threaten, they’d make the threat explicit and say “after two weeks of non-compliance, we will terminate everything”.

B) Apple is stalling the decision over whether SIWA will continue to be available to Epic's users, to ensure the SIWA team can support it when the dev account is deactivated. From other rumors where we hear folks saying the SIWA team did have to make changes, I'm thinking this was the likely scenario. Don't make a commitment to keep something going if you can't deliver on it for sure yet, so "give them a two-week extension" while you learn whether you can actually keep SIWA up. If they can't, and Epic is still playing with fire, maybe you do terminate SIWA, but maybe not. Either way, Apple wins.

C) Apple is stalling the decision to scare Epic into compliance after the backlash from Epic's users over word that it might end. Possible, but I think Epic had well shown they were ready to play the Russian roulette game with Apple down to the last chamber, so I'm thinking this is less likely than A/B.

So if I was a betting man, I’d say it’s a mixed bag at best, but would likely go:

B > A > C

Apple doesn’t play checkers, they play chess. When they send a letter like that, they ensure there’s multiple positive outcomes for them for any potential scenario, from technical complications, to user perception, to legal proceedings, etc.


While there's always a risk anytime you build on top of someone else's platform, it's worth noting that it looks like Epic lied[0] about getting blocked from Sign in with Apple. There's no evidence they were going to lose access to it, even with a terminated developer account.

0: https://daringfireball.net/linked/2020/09/29/epic-games-unre...



Ahh thanks for sharing that. That’s good to know. Super disappointed Gruber didn’t update his post with that information, especially given his recent campaign against editorial integrity.


Sign in with Apple still works for Fortnite. [1]

According to Epic, Apple said they were going to turn it off and then changed their minds. Apple's position is they were never going to turn off Sign in with Apple for Epic.

[1]: https://twitter.com/FortniteStatus/status/130416143288864358...


Except the 2FA is kinda stupid.

I have devices logged into my iCloud account at a datacenter that end up getting the 2FA codes meant for my other devices.

I have an iPhone, an iPad, and a MacBook; do you think any device I actually use all the time gets the 2FA code?

Sometimes, the same computer I'm using to log in gets the code, which is kinda pointless.

I have to always use SMS to get my code because of this.


Every single device on your iCloud account gets the 2FA prompt. You then choose which device you want to confirm the request on. The prompt does not go to just a single device, but every single device on said iCloud account.


Note that Epic claims Apple was going to disable "sign in with Apple", but Apple did not do so and has claimed through unattributed press quotes that they were never going to do so, that Epic made that up.


Apple's claim is false, and can be shown to be false via documents in the court filing: https://twitter.com/TimSweeneyEpic/status/131134525357683097...


Yeah, same for me. I was super happy with Sign in with Apple as a concept, but I have exactly one account with them. After the Epic account-hijacking, no thank you. I will never ever use it again.


Why don't sites start supporting multiple federated identities? It shouldn't be too hard to keep the "sign in with X" links on the account settings page, right?


Epic asked them to terminate the account though.


You do know Epic broke a lot of clauses in their contract, knowing what the impact of breaking those clauses would be, and broke them anyway?


> Having seen Epic's developer account terminated by Apple, I would definitely stay away from any "Sign in with Apple".

If you don't renege on your agreements with Apple as part of a public pissing contest, and you aren't in the business of misleading customers and creating deceptive apps, it's unlikely they'll revoke your developer account.


There's an explicit list of websites that are not allowed to integrate with "Sign in with Apple". https://developer.apple.com/sign-in-with-apple/usage-guideli... Nothing stops Apple from adding more requirements in the future, even if you don't start a feud with Apple.


There's an explicit list of websites that are not allowed to integrate with "Sign in with Apple".

I don't think anyone is crying over Apple not wanting to handle authentication for web sites offering "Illegal drugs or non-legally prescribed controlled substances."

The rest of the list is similar.

And yes, with that list Apple has once again affirmed it's not interested in helping normalize pornography. That's its choice.


Sure, but at the end of the page:

> Apple reserves the right to disable Sign in with Apple on a website or app for any reason at any time.


Wow, you can’t even use sign in with Apple if you

“Show Apple or its products in a false or derogatory light.”

Who decides what’s false or derogatory?


A court of law.


No, there's no world where Apple blocks access to an account because they're showing derogatory content towards Apple products, and a company gets a judge to overturn that block because it's not technically libel. Apple has the right to block you from their sign-in for any reason. Short of pulling a move like Epic and suing them for antitrust, a court of law is never going to enter into the equation.

None of these are legal definitions, Apple gets to decide what they mean. And no court of law is going to rule that they don't have the right to block when their TOS end with:

> Apple reserves the right to disable Sign in with Apple on a website or app for any reason at any time.


> Apple gets to decide what they mean.

(IAAL, this is not legal advice.)

That's not how contract law works. There's a whole body of law around how to construe language in contracts, and it's subject to litigation and dispute if the definition isn't made clear in the contract itself.

> no court of law is going to rule that they don't have the right to block

Then you don't know courts very well. Such clauses are still subject to the law and public policy. For example, no competent court is going to allow anyone to use an escape clause to terminate a contract with someone because of their race, age, or gender.

But yeah, if you use someone's services and then publicly talk trash about them? Why should they be forced to continue to do business with you?


> But yeah, if you use someone's services and then publicly talk trash about them? Why should they be forced to continue to do business with you?

Do you really not see how saying, "businesses should be allowed to sever ties with people who say mean things" is different from saying, "the courts will decide whether or not you committed libel"?

> For example, no competent court is going to allow anyone to use an escape clause to terminate a contract with someone because of their race, age, or gender.

Do you think there's a difference between a court saying, "we're not going to allow you to sever a business relationship because of a protected characteristic", and "we're going to regulate what does and doesn't count as disparagement"?

Can you point me at an example of a business fighting a disparagement clause in a contract with language like this and winning, based on a court deciding that what they said didn't count as disparagement?

This is like the people who say they're going to sue Facebook for taking down posts because their definition of "misleading information" isn't specific enough. You can sue anyone for anything, but you're not going to win that case. Companies with contract language like this have broad leverage to wield their power in whatever way they see fit -- because for the most part courts have not ruled that escape clauses boiling down to "we can decide to ban you at any time for any reason" are illegal or unenforceable.


> Do you really not see how saying, "businesses should be allowed to sever ties with people who say mean things" is different from saying, "the courts will decide whether or not you committed libel"?

Of course I see that. Contract law and defamation law (tort) are separate domains. A court does not have to conclude that a party to a contract committed libel in order to determine that they are in breach for making disparaging remarks about the counterparty. The layperson might (understandably) believe that the analysis would be identical, but it is not. A finding of libel under tort law requires a multi-part test which I won't elaborate on here, and there are lots of defenses, too.

> Companies with contract language like this have broad leverage to wield their power in whatever way they see fit -- because for the most part courts have not ruled that escape clauses boiling down to "we can decide to ban you at any time for any reason" are illegal or unenforceable.

I think we're in violent agreement for the most part -- I'm just saying that your characterization is overbroad since obviously "any reason" is not quite "any reason." As engineers, we should strive to be as accurate as possible in our analyses and avoid hasty overgeneralizations.

At bottom, courts aren't generally inclined to force parties to do business with each other if one of them is no longer interested, there are no promises left to be fulfilled, and there's nothing binding them to an infinite term (which courts also don't like to enforce). Apple is not a common carrier or the government, and so they're treated just like anyone else for the purpose of contract law.


> But yeah, if you use someone's services and then publicly talk trash about them? Why should they be forced to continue to do business with you?

Nobody said they have to, just that it's very dangerous to depend on such a service. I say bad things about companies all the time!


Unilateral agreements that one side can change on a whim are not something one would call fair in the first place.


Unilateral agreements that one side can change on a whim are not something one would call fair in the first place.

Wait till you read the fine print in your cell phone contract.


Do you really think a company the size of Apple is going to separately negotiate terms of service with every developer that wants to publish an app?

Contracts of adhesion are just a part of life. We agree to them practically every day whenever we do business with a third party. And if you were in the other party's shoes, you'd do the same thing; otherwise you couldn't practically run a business.


Well, both companies do that: the various Epic EULAs let them behave just as Apple does, so they can't argue it's unfair unless they want all their existing EULAs to be invalidated as well.


But we can still avoid Apple services in support of those Apple doesn't like, and of open standards and free competition.


Having just devoted about 12 hours to helping my wife migrate to a password manager from an ad-hoc collection of access control approaches (you don't want to know), I emerged horrified by the range and domain of what I can only call incompetence and a total lack of common sense and UI intelligence.

I wrote about it here (long post):

https://news.ycombinator.com/item?id=24827031

I haven't quite processed this entirely yet. Part of me feels one or more adults on the world stage need to get behind what I will call a canonical approach to login, authentication, account recovery, password policies, etc.

In some ways I equate this mess to what happened back when fire hydrants were not compatible with every hose coupling fire departments used. In other words, it was a mess and people got hurt.

Standardization is good. Or can be good. After this weekend I can't help but think that this is another area where the web needs to seek standardization. I get the feeling that every n-th developer is rolling their own approach and the result is an absolute mess.

Not advocating for an Apple solution, just saying that I had a revelation this past weekend and what I learned does not speak kindly of how this important aspect of online life is being handled.


I think a big part of the problem here is that the vast majority of the time, nobody wants to pay for security. The end result is that you end up with 2 kinds of players in this market:

- "small" players like Mozilla who don't have a big enough marketing budget or leverage from existing products to drive adoption.

- large companies who are willing to throw large sums of money into this with the goal of monopolizing auth, and using that monopoly to manipulate related markets. In the non-Apple cases (Google, Facebook, Microsoft), it also involves collecting data on users for ad targeting.


I dunno. I mean, I do, it feels good. But it also is a very different kind of FIDO2 than what we've seen before, in a way that FIDO was designed for, that we hoped would happen. But for me it's still not entirely joy & mirth that we're here.

It feels a little like the first day we start to understand how "Big Tent" (in the OpenStack sense) the FIDO2 ecosystem is. You can do whatever, make anything, and call it FIDO2; it's all duck typing: looks like a duck, quacks like a duck, must be a duck. No implementation details are required, no transparency is needed, everything can be totally vendored way way up, and the standard will welcome you. Your platform is welcome here. This post is about how to use & prefer that platform over the more common means available.

For sure this is overwhelmingly a good thing. It's by design that we allow platform authenticators in WebAuthn. Apple is allowing their closed, proprietary security technologies to seamlessly work on the web without making webdevs jump through hoops. It's a good thing, and this will really help Web Authentication for sure.

Still I have some wistfulness. It's a good user experience, it's great. It's a win for devs, it's a win for users. Still there's some larger context I can't quite put my finger on, when I see "authenticatorSelection: { authenticatorAttachment: "platform" }". The web is letting more of the native platform shine through, and that's good, but it also forgoes some of the knowability & commonality that resounds on so much of the rest of the web, and while the immediate impact is extremely good, I still think there's some kind of hard to describe ultra-slow-motion civilization-scale loss that's also passenger to this successful commingling of web and platform.
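
For readers who haven't run into it, the snippet quoted above is a single field of the WebAuthn creation options; a sketch of the choice being discussed (values are illustrative):

    // "platform" requests a built-in authenticator (Touch ID / Face ID / Windows Hello);
    // "cross-platform" requests a roaming one, such as a USB/NFC security key;
    // omitting authenticatorAttachment lets the browser offer either kind.
    const authenticatorSelection = {
      authenticatorAttachment: "platform",   // or "cross-platform", or leave it out
      userVerification: "required"
    };
    // This object goes inside the publicKey options passed to
    // navigator.credentials.create(), alongside challenge, rp, user, etc.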


So what exactly is the concrete downside here, apart from the icky word "platform", which, in this context, means "a security chip that's not removable", i.e. as opposed to a YubiKey or such?


Websites will, at their option, be able to require Apple hardware to use them.


Last I checked, Windows won't let you use self-signed attestation keys for your homemade devices.


Attestation has allowed a website to lock to a specific vendor for 5+ years.


Perhaps it's a totem, a reminder that not all tech is interested in working together; that unlike the web itself, you are entering an Apple- or Atari- or Acorn-specific encampment, to pick some random examples.


Really excited about this too. When we were adding 2FA options at my company we pushed to use WebAuthn instead of just a QR code/OTP approach.

News like this where we can tell people "it already works because we made the right decision" is fun.


This doesn't really surprise me - Apple has a history of implementing, or moving to, standards for their platform features in Safari.

I think about Apple Pay for web - it started out as a proprietary API, and then the Payment Request API standard was developed and they added support for that.

It's in Apple's interest to help develop and support standards like this because they mean more adoption of their platform features.


I am still waiting for them to move to USB-C


> I am still waiting for them to move to USB-C

What do you mean? All Mac models introduced since 2016 support USB-C.

https://support.apple.com/en-us/HT201736


Not OC, but they probably mean iPhones. I wish iPhones would switch to USB-C too; my iPhone is now the only device I own that isn't USB-C.


In another generation or two, I would expect it to be port-less (wireless). This generation was the one to change to USB-C if it were on their roadmap... but I guess time will tell.


I hope they don't switch. I think Lightning is a better engineered connector -- it is very durable and attaches much more positively.

I am 100% on board with USB-C on computers -- but the use cases that apply to a laptop/desktop are very different than how people use phones.


I would actually pay an extra $10 for a portless iPhone or an iPhone without USB-C.


It's a shame they have a messy pile of API-specific hacks to propagate the "user gesture". Chrome solved this problem with a change to the spec (which they called "User activation v2" [1]). It's basically two flags and a short timeout, and it covers basically all cases. Safari's approach means you have very specific codepaths, and if you do something async outside of that, tough luck, you can't use the feature and will have to nag the user to touch the screen again. This already affects APIs like clipboard (want to copy something that takes async work to generate? tough luck), limits APIs like OffscreenCanvas (want to move your game engine to a worker? tough luck, you lose access to all user gestures), and this too. Hopefully Apple can consider aligning with Chrome on this.

[1] https://www.chromestatus.com/feature/5722065667620864
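
A sketch of the clipboard case being described (the element id and fetch URL are made up): a write inside the click handler's own call stack is fine everywhere, but once the text has to be generated asynchronously, Safari treats the gesture as already consumed, while Chrome's activation window generally still covers it.

    const button = document.querySelector("#copy-link");   // hypothetical button

    button.addEventListener("click", async () => {
      // A navigator.clipboard.writeText() call right here, synchronously inside the
      // click's call stack, is permitted everywhere.

      // But generate the text asynchronously first...
      const text = await (await fetch("/generate-link")).text();

      // ...and then write it: Chrome's user-activation model generally still counts
      // this as user-initiated (within its timeout), while Safari may reject it
      // because the original gesture is gone by the time the await resolves.
      await navigator.clipboard.writeText(text);
    });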


> It's a shame they have a messy pile of API-specific hacks to propagate the "user gesture". Chrome solved this problem with a change to the spec

If Safari had changed the spec that comment would probably have begun with "It's a shame they had to change the spec..."

> Safari's approach means you have very specific codepaths, and if you do something async...tough luck

I'm no expert on it but this sounds more secure, no?

> APIs like clipboard (want to copy something

On a side note, I really wish non-explicit copy/pasting/clipboard snooping would die.

iOS 14 has exposed a bunch of apps that read your clipboard without any explicit paste action. It's creepy and we can only hope that it's not malicious. A bunch of big names including Discord are guilty of this.


> If Safari had changed the spec that comment would probably have begun with "It's a shame they had to change the spec..."

I don't follow. They are breaking the spec anyways. A simple timeout would be equally spec-breaking and a lot easier to understand and use.

The weird async-callback-chaining doesn't actually limit any nefarious behaviour. It just requires the code author to carefully stay inside their arbitrary happy path. A simple timeout after a user gesture is just simpler. Furthermore, this is something that you can't test without buying a thousand-dollar device.


Programmatic paste is restricted in all browsers, including Safari (although Chrome has a special exception hardcoded to the Suite apps).


The UI probably needs to be more explicit about what's going on.

I would imagine most non-technical users aren't well-versed in how Apple's Secure Enclave (or other competing solutions) manage authentication, and so I wouldn't be surprised if "allow example.com to use TouchID" would give many the impression that the website is asking to access their biometric data.

Ideally, the prompt should reflect the actual model, though I'm not sure how one might pose that in an accessible and succinct way.


They could just have it say "example.com wants to use Touch ID to sign in. Biometric data is not shared." with a help link that goes on to explain in layman's terms how your iPhone basically sends a password-ish thing to the website after you use Touch ID (similar to how Apple Pay sends a one-time-use credit card number to a merchant).


Apple's UI designers would never tolerate such a helpful and wordy dialog box.


Maybe not on iOS but the Mac is replete with messages and dialogs as long as this.


I don't understand why they even ask via a popup. I mean, you already get "touch to log in" kind of text next to Touch ID on the Touch Bar; if you don't want to log in that way, simply don't touch it. Am I missing something extra about the purpose of this popup?


Because who is looking at their Touch Bar constantly just in case a site is trying to log in? It seems to me that a dialog pops up every time it is active.

Especially when I am using an external monitor there is no way I would notice just that.


> I wouldn't be surprised if "allow example.com to use TouchID"

This is already a very common pattern on iOS devices - every app that wants to allow Touch ID or Face ID based login uses such prompts, so users are used to it.


Yep, we tried to be consistent with the wording for the app prompt (even though that doesn't generate a private key in the same way).


This comes 3 days after a leak that alleged that iPhone 13 will bring back Touch ID via in-screen fingerprinting https://www.techradar.com/uk/news/move-over-iphone-12-apples...


Please please please be true. TouchID is objectively superior to FaceID, by a long shot. It is my soap box... but TouchID RARELY failed and could be activated BEFORE you had the phone in front of you. FaceID fails constantly and MUST be in view to start the unlock process.

TouchID has a single failure mode (and a half) that isn't that common. Wet / dampness. Solution, dry your finger, try again. Gloves are the 'half' mode as what can you expect, you can't access the finger.

FaceID has no less than 3 FREQUENT failures and a myriad of other smaller less frequent ones. Occlusion, distance and light.

Occlusion - Have your face resting on your palm... fail. Have a hat on, fail. Have a mask on...

Distance is an interesting one, but for those of us with terrible eyesight, it is constant. Phone call / text at night? Put the phone up to my face to see... FaceID fail (too close). I have to move it back away so I can't read it to unlock... hoping it did unlock before I move it forward again to read it.

Light - This really shows up when outdoors in the sun. The IR sensor gets washed out and can't work. It isn't that bad, but it does happen frequently enough to notice.

The other thing that is SO annoying about FaceID is that it has no idea WHEN you want to use it... see a face: 'I must SCAN IT!' Oh, that is someone else; now you are locked out and need your password. Pick up your phone: SCAN! Oh, that is your pocket; now you need your password. The latter can be turned off, the former, not so much.

I ALMOST bought an 8 instead of a Xs when I finally upgraded, but OLED pushed me over the edge. I do love the screen, but I scorn FaceID daily and was hoping for TouchId to return to the 12. Fingers crossed for the 13, lol.


I live in a cold climate, so I was happy to be done with TouchID. Then coronavirus happened and now it doesn't recognize me with a mask on.


You can try enrolling your masked face as a second face. I had some success with that; it would still miss often enough to be annoying, but I've heard it works well for some people.


Just curious: does it start recognizing you with the mask after some time?


In my anecdotal experience, it does not. Apple has made it easier to bring up the PIN prompt when it fails, though: https://www.theverge.com/2020/5/20/21265019/apple-ios-13-5-o...


Anecdotally, after some of the recent updates I've noticed like a 10% match instead of 0%. It was really nice when they pushed an update to recognize a mask and fail quickly to a pin since that's what happens most often when shopping.


Have you tried switching the "Require attention for Face ID" off? Usually it can get you before you are "looking at it".

I've been using Face ID since the xs, and so far my experience is, outside of mask wearing, much better than Touch ID. But I also try not to touch my phone when I'm out (and as such am wearing a mask). I even occasionally can't use Touch ID on my MacBook because of a damp finger, and that sees far less usage than the phone.

I think early on low light (or rather barely any light) was a problem, but I can unlock it in a dark room these days... not sure what that means for accuracy... but it works well.


I haven’t turned off the ‘require attention’ option... actually didn’t even know about it. Thanks!


TouchID has another interesting failure mode for me that's probably quite rare: after I go climbing, my skin is so worn that it doesn't recognize any of my fingerprints.


Similarly, when my mother had chemo, her prints changed enough that it would fail.

These are definitely failure modes, but I would put them in the niche bucket.


Interesting I didn’t realize prints changed significantly during chemo.


Hasn't this been leaked for like the past 3 years?


Yes, this has been a perennial rumor, probably propagated by how much people dislike Face ID. Face ID is fine when it works, and sucks when it doesn't. And mask wearing has made it much worse, and we'll be wearing masks as a standard piece of dress for another two years, I guess.

I do miss Touch ID, but that had its own problems (gloves, wet fingers). So one would hope there is a solution that has both. I'd personally be just peachy with a fingerprint sensor on the back just like the Pixel line used to have. I mean, how much is a fingerprint sensor manufacturing cost? $5?


Does anyone know why WebAuthn does not support a message associated with the authentication? Shouldn't the user be informed in a secure way what they are authenticating for? Currently only the site and user name can be shown to the user. For payments (PSD2 for instance) it is a requirement to also provide information about the transaction.


Last time I checked, WebAuthN does not support UAF transaction signing yet - only via extensions and thus we don't see it in the browser implementations. Hopefully it gets added to the spec in the future.


These all seem to be examples that use faceID/touchID as a password. That’s not what biometrics should be though, they should be the username. I hope that this is supported as a flow as well. Identify who you are with biometrics, and prove your access with a correlated password.


This is a good doctrine, and a bad dogma.

Under the hood, [bio]ID generates a one-time token with a signature that serves as a superior substitute to a password.

If your face or fingerprint doesn't happen to be around, you can always use the password for the same account to reset it. If you have legal concerns about being coerced into using a biometric system, you should disable it.

One of the problems "biometrics shouldn't be a password" is meant to expose, is that you can't reset a biometric if it leaks. By keeping the biometrics inside a device at all times, the T2 system substantially mitigates this risk. I'm fairly certain you can remove a device from your Apple account, and the ID system will no longer work, even if provided with your face or fingerprint.


First, the model in FIDO is that it's the piece of hardware (in this case, your laptop or iOS device) that is your authentication factor. The biometric is a local facility for unlocking that device and approving that specific action.

Second, the primary threat identity providers (and their users) deal with are remote, impersonal hijacking of accounts, where the password is either guessed (because it existed in a leak) or the user was phished.

That's the primary value of something like this. Attacks where someone can lift your fingerprints or has your device are real but much much less common.


That doesn't make sense. Username + Password is a cumbersome workaround because (so far) machines couldn't use biometrics to authenticate a user. Now they can, so we can let go of that very problematic and often insecure model.

Think of it like this: when you go to visit your grandmother and knock on her door, you don't have to provide a password. You don't have to provide anything, because the human brain is capable of determining your identity in less than a fraction of a second. This is why as soon as your grandmother opens the door she'll have a smile on her face and welcome you in. That IS biometrics. Now machines can do the same. It's the oldest and most secure form of authentication in human history.


Biometrics fails every test for a password.

1) A password is secret

2) You don't leave copies of it lying around everywhere

3) You can change it periodically

4) If discovered, it can't be traced back to you

No, biometrics can only be a username. It can never be an acceptable password.


You don't seem to be aware of what is under discussion here. You just raised a huge strawman.

Websites are not receiving your biometrics in this context, and your biometrics would be meaningless to the website if captured and somehow provided.

Your biometric signature is stored solely inside the Secure Enclave in the Apple device.

If and only if the Secure Enclave recognizes you via your biometrics will the Enclave use a non-transferrable key stored only in the Enclave to attest to the website that you are the user "JoeAltmaier".

- The key is secret.

- The key is kept in only one place.

- The key can easily be discarded and a new one made.

- The key is never given to anything outside of the Enclave, so... I'm not sure how it could be traced to you, besides the whole fact that it is being used to authenticate you, which is necessary.

Sounds pretty acceptable to me as a password. The usability issue here is that a website needs to be able to accept a different password from each device you own, since the password is non-transferrable, and you might accidentally drop your iPhone to the bottom of the ocean.

Good news: FIDO2 is an entire standard built around this concept, originally intended for use on YubiKeys and similar FIDO2-compliant USB sticks.

Apple is building an implementation of FIDO2 that uses the iPhone that's already in the person's hand.

If the Secure Enclave is compromised (which does happen sometimes), then Bad Things could happen... but that's also what happens when a password manager tool is compromised.
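
To make that concrete, here is a rough sketch of what the enrollment half looks like from the web page's side (the rp/user values and the idea that the challenge and user handle arrive from your own backend are illustrative assumptions, not anything Apple-specific):

    // Sketch of WebAuthn enrollment with the built-in (platform) authenticator.
    // challenge and userHandle are Uint8Arrays fetched from your backend.
    async function enroll(challenge, userHandle) {
      const credential = await navigator.credentials.create({
        publicKey: {
          rp: { name: "Example Site" },                        // hypothetical relying party
          user: { id: userHandle, name: "JoeAltmaier", displayName: "Joe Altmaier" },
          challenge,
          pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
          authenticatorSelection: {
            authenticatorAttachment: "platform",               // Touch ID / Face ID, not a USB key
            userVerification: "required",
          },
        },
      });
      // Only the credential id and the attestation (containing the public key)
      // go to the server; the private key never leaves the device.
      return {
        id: credential.id,
        attestationObject: new Uint8Array(credential.response.attestationObject),
        clientDataJSON: new Uint8Array(credential.response.clientDataJSON),
      };
    }

Note that each device you enroll produces its own key pair, which is exactly the "different password per device" property described above.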


All that's well and good until companies start implementing their own FaceID then forcing you to use it [0] on the back of trusting Apple, even CALLING it the same thing.

This app linked above (my bank) contains NONE of the security you've mentioned above.

And, incidentally, for me, biometrics STILL fail every test that matters to me: If I am dead, a bad actor can still gain access to my accounts. With a password, they cannot.

[0] https://apps.apple.com/us/app/bbva-m%C3%A9xico-bancomer-m%C3...


> This app linked above (my bank) contains NONE of the security you've mentioned above.

This misunderstanding is where you went wrong: your bank doesn't have a choice about this. If they use FaceID, the app can ask it to perform the public-key authentication operation, but there's no way for the developer to choose to weaken the security of the system.

Similarly, you should read up about how these systems incorporate liveness checks. A dead body will not pass those and, if you weren't aware, Apple's implementation requires a password after a reboot or a small number of failed tries. It's presumably possible for a well-resourced attacker to bypass those, but you'd have to think about how much more vulnerable you are if you use only a password, which is much easier for an attacker with that level of resources to capture. If you're worried about Tom Cruise casting a mask from your still-cooling body, think about how much easier it'd be to get a camera to record you entering your password - which you do a lot more in public if you don't use biometrics - and how trivially this could be done without your knowledge.


I think you misunderstood (but I appreciate your reply)

This bank is NOT using FaceID; they invented their own version and are calling it the same thing. Your picture goes to their servers. Who knows what happens after that. And they're piggybacking on Apple's trust where FaceID is concerned in order to do it.

They are not the only company I have seen do this.


Rather than downvote me for pointing out how companies abuse this, explain how this is an improvement.


Lol...


That's sophistry. The fact is, the laptop is 'secured' by biometrics, which can be spoofed. Having a key-to-the-key is not safe if the biometrics are not safe.


It is not sophistry, and sophistry is not a word I've ever seen used in a genuine conversation, so I immediately doubt your sincerity in this conversation.

If someone wants to fake biometrics on an iPhone, they have a very limited window of time to do so, and the user can lock out the biometrics in less than 3 seconds just by "squeezing" the phone. (power button + either volume key, 2 seconds later the biometrics are locked out.)

It's much harder to fake the biometrics in that very brief window of time (maximum 48 hours) than it is to shoulder surf a password.

Unless your threat model includes State Level Actors, biometric bypass is a very remote concern.

If your threat model includes State Level Actors, you're probably screwed either way, since they can easily afford to shoulder surf you.

For everyone else, the main concern is that someone not physically present will manage to acquire your password and log into your services. Passwords suck at this threat model. FIDO2 makes this scenario impossible without that remote person managing to execute a Secure Enclave 0day on your personal device... and even then, it's still way harder than acquiring your password. Not even the website you're authenticating against receives your FIDO2 key... websites always receive your password, which is awful for security.

Some related info here: https://news.ycombinator.com/item?id=24830642


Sure it was - sophistry is pretending an issue is simple by (deliberately) ignoring alternatives.

If biometrics are fallible, it matters zero how secure the digital system behind it is. That's obvious, and a comment belaboring the digital security is beside the point.

As for how hard to spoof, just google it. There are dozens of folks with techniques and hacks right now. It'll only get worse.


You are the one using sophistry, if anyone is using it. You are completely (and willfully!) ignoring how vulnerable passwords are in any threat model that invalidates a FIDO2 implementation of Face ID. No threat model invalidates the security of Face ID for the Web without similarly invalidating passwords. At least, you have chosen not to present such a threat model, which would help my (apparent) failure of imagination.

You cannot protect your passwords from someone who would physically take your iPhone before you can lock it and who would have a life-size reconstruction of your face ready and waiting. Such a person could shoulder surf your passwords with far less effort, or compromise one of the dozens of websites the average user re-uses their passwords on. At a certain point, the person in this threat model will just pull out a wrench and beat you with it until you help them get into your account. https://xkcd.com/538/

If you can show how some random person on the other side of the internet having your fingerprint helps them get into a website using your account... that would be interesting discussion. As it is, they must have physical access to your device. Your biometrics are useless without physical access and rapid action, since the biometrics quickly become useless as the device falls into a state that requires the user's passcode.

You're completely ignoring everything I actually said in my comments, so I'm done here.


Corroborating your point: Safari on iPhone will autofill passwords with just a biometric, so if you have an unlocked iPhone and a clone of the user's biometric, you can access websites and potentially even change their passwords. If it's timed out or the user deliberately disabled the biometric, then you will need the passcode to be able to access anything.


You're right; I apologize.


That's you not understanding the security model, not sophistry. The application is granting access based on a public-key exchange, with the key stored in hardware from which it cannot be retrieved even in the event of a system-level compromise[1]. The remote application does not see the biometric data or even know that it was involved in the process.

It's also important to note that this does not mean anyone who grabs a laptop gets access to everything. The device still uses a password to unlock — you're forced to enter the password on boot before you can use biometrics later — and someone who stole an unlocked laptop could, for most users, have auto-fill supply the passwords _except_ on devices with biometrics which usually require that check every time (as iOS users have been reminded in this year of mask wearing).

So let's walk through some common threats:

1. Password re-use: a major source of compromises, blocked by this system

2. Phishing: a major source of compromises, completely blocked by this system

3. Compromised email: also popular, blocked by this system except for the services which allow email-based MFA resets, in which case it'd be the same as a password.

4. Local system compromise (user or root-level): passwords are vulnerable, biometrics present a barrier when the attacker can't just do something like reuse the credentials stored in your browser's cookie store. Biometric data and FIDO keys cannot be extracted.

5. Stolen device, locked: both are probably secure as long as you don't have your password taped on the keyboard

6. Stolen device, unlocked: passwords are vulnerable if you have a password manager which doesn't require e.g. FaceID checks (which is what Safari does on supported devices), FIDO MFA is not usable if Touch ID or Face ID is configured. Biometric data and FIDO keys cannot be extracted.

1. https://support.apple.com/guide/security/secure-enclave-over...


You're missing the point of biometrics. Something you are is a form of authentication that only you can use. Your face, fingerprints, blood, retinas are all public but try as you might you can't make another living human with the same features.

If your view of fingerprint auth is "a picture of your face is the password" then of course it sounds stupid. It's actually "a face with the correct features attached to an alive human" which is much harder to fake.

The whole point of biometric auth and all the advancements in the industry is correctly identifying alive humans, and the strength of any system that uses biometrics is directly related to that. You can say current systems aren't good enough at this yet for your personal threat models, but it's real security, and it beats the hell out of a 4/5 digit lock screen passcode.


That's trivially refutable. Fingerprints are left everywhere, and can be lifted and reproduced with common household substances (tape, glue etc). A face can be photographed, printed and presented trivially.

And so far, biometrics falls far short of a 4/5 digit lock passcode. The entropy in most fingerprint sensors is a few bits. They are famously defeatable.

Nothing will change the fact that you cannot keep your face and fingerprints secret, cannot/won't change them, and they are always, always traceable to you. Using them to secure a 'better' key is not security at all.


You need to learn how these systems work before commenting further. First, you're wrong about the current biometric systems' sensor design — Touch ID and Face ID are not simple single frame cameras[1] so while spoofing is not impossible it's nowhere near as easy as you're claiming — and, more importantly, you're missing that the biometric is used to authenticate to the local device, not the remote service. If someone steals your phone, as soon as it's removed from your account the attacker has no access or way to gain access to your resources and they also do not have a copy of your biometric data. If I do get a full biometric from you, I cannot use that to add a new key-pair to your account without already having fully compromised it.

That means that you're left with really unusual situations like someone stalking you with drones with 3-D infrared scanners who can't figure out how to have the same drone record your password when you type it in many times per day.

1. https://support.apple.com/en-us/HT204587 https://support.apple.com/en-us/HT208108


It doesn't matter, since your fingerprint isn't secret. It's not enough to have a picture of my fingerprint; you have to produce a convincing enough fake of a real human with the right fingerprint.

Take this to meatspace for a second. If you had a security guard sitting at a desk inspecting your hands and taking fingerprints, you couldn't trick them with pictures. You can't hold up a picture of my face to a guard and expect that they'll suddenly think you're me. Biometric auth systems are trying to do the same thing but without the human.

> They are famously defeatable.

And most locks in wide-use today are also defeatable by amateur locksmiths, that's not really the point. There are sophisticated biometric auth systems that aren't fooled by pictures. FaceID is one example.


[flagged]


Let's suppose someone goes to all the effort to duplicate your biometrics in a real world scenario (e.g., going to the DMV). They put on their mask and their fake fingerprint and get a new driver license with your name and picture. Then they open bank accounts with it, get loans, and buy cars. What then? Suicide I guess.


Biometrics fail every test for a password, but, assuming sufficient accuracy, work pretty well for Authentication, which is the actual purpose of a password.

Biometrics are closer to a public key than a password.


It falls short, because biometrics <> passwords. As I said, biometrics == identity.

Compare biometrics with identity:

1) Your identity is not secret. Your mother knows you, your entire school knows you, your neighbour knows you, and when you go anywhere the police may ask for your ID at any time. Biometrics is the same.

2) You don't hide every day from the world. You don't cover your face (ok maybe before COVID) when entering a shop ;)

3) You cannot change who you are. You shouldn't have to and shouldn't want to. Same for biometrics.

4) If your identity is discovered then of course they know who you are. Same applies for biometrics.

So yes, biometrics isn't passwords, it's identity. Username + Password is a workaround to establish a person's identity and will never be as good as a biometric. The fact that you can have multiple usernames but only the same right index fingerprint is proof that biometrics is superior to username + password in establishing your identity.


> It falls short, because biometrics <> passwords. As I said, biometrics == identity.

This is just dogma, it's not based on the actual implementation details.

TouchID for the web requires: Something you are (biometric), and something you have (Your phone/ computer).

If someone "Discovers" my fingerprints, they are worthless without the phone/ computer which has the Secure Enclave I've matched them to. If my phone is stolen, I can invalidate the entire device as a method of authentication.


I see what you’re saying about real world identity but digital identities don’t [have to] share those constraints. Digital identities can be instantiated and discarded at will.


Not if they're 'secured' by biometrics. Then they are almost trivially subvertable.


I know I’m just repeating what a lot of other have said, but this is sort of missing how this works. To use SSH as an example:

What you’re describing is essentially using a fingerprint instead of a private key or password for logging into a server via ssh.

The way this works is more akin to using a private key to login via SSH, but that private key is protected by a fingerprint instead of a password.

Ultimately the security comes from not sharing your private key, rather than the mechanism used to protect that key.


Maybe (probably) I’m just ignorant of the actual state of the industry, but it seems to me that biometrics have always been about providing full user authentication. I’m not personally aware of any instance where an alphanumeric password is still required as a secondary authenticator to biometrics.


> I’m not personally aware of any instance where an alphanumeric password is still required as a secondary authenticator to biometrics.

It's straightforward to configure Active Directory / Group Policy to require biometric and password factors for interactive authentication.

The main reason not to allow _only_ biometrics is to disincentivize chopping fingers off (well, it's more because it's still surprisingly easy to fool biometric systems: https://www.theguardian.com/technology/2014/dec/30/hacker-fa...).


I mean you're referencing an article from 2014 about a relatively unsophisticated biometric auth system. It's like saying "TSA locks are stupidly easy to pick."

There are extremely high-security biometric auth systems that you would have a much harder time fooling. The high-res retina scanners that take a 3d map of your eye and detect blood flow would be a much better assessment of what's possible.


But I like to use touchID to authenticate with multiple usernames on the same site. Using it as the username would reduce it to only working in a single case.


I might be wrong but I don't think thats how its working. It sounds like your phone saves a password/key locally and then when you load a website, faceid is used by your phone to unlock the keyring and sends the normal password to the website.

I'm not sure what the UX is when moving devices though.


Authentication and authorization

If the machine can prove that you are indeed you through biometric authentication, why take another step for authorization?


This only makes sense if you have multiple personality disorder. If there are many alters that share one body, then after successful biometric authentication the machine might need to request a password for authorization, to prevent unsolicited access from another alter in the same body.


Any bullying or power abuse situation may easily exploit biometric authentication against the victim. Kids bullying other kids, domestic abuse, authorities etc.

Now, if biometric authentication becomes the norm, how will a wife suffering domestic abuse justify using a password?


I agree with you, but am not sure a password works too well in such situations either - the abuser would presumably just beat the victim until they enter their password?


Yes, I don’t see it helping in situations where the attacker has a long term relationship with victim.

https://xkcd.com/538/

However, I think touchID is susceptible to being robbed since if you have your bank account or Apple Pay enabled to transfer cash with touchID as authentication, then you can be robbed while you’re passed out drunk or roofied or whatever.


One of the benefits of Face ID is that it supports an attention check, which should be harder to coerce if you’re passed out/drugged.


Overall I'm hopeful that people will put in the (relatively little) work to actually deliver WebAuthn on more web sites, because it has both better security and a nicer UX flow. Win win. But, as to this particular article:

The code snippet elides two really important values: userIdBuffer and challengeBuffer. I get why - they're complicated and not very exciting to the intended audience. But they're also vitally important, so I think it would have been worth putting in non-zero effort.

userIdBuffer needs to be some sort of unchanging unique identifier for the user. Probably pick suitably large random values in an indexed table or something. It might be tempting to stick the row ID from your user table or similar in here, but that's actually a bad idea you'll likely regret.

challengeBuffer needs to be actual random bytes. Hey, the Javascript interpreter has... No. The random bytes need to be chosen fresh by the backend server where you're authenticating users. And they must be actual cryptographic quality randomness, not a Mersenne Twister from your utility library.
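
For anyone wondering what that looks like in practice, here's a rough backend sketch (Node.js assumed; the session field name is made up):

    // Sketch only: minting the two values the article's snippet elides.
    const crypto = require("crypto");

    // userIdBuffer: an opaque, unchanging handle minted once per user and stored
    // with the account -- deliberately not the user table's row id.
    function newUserHandle() {
      return crypto.randomBytes(32);
    }

    // challengeBuffer: fresh cryptographic randomness per registration/login
    // attempt, remembered server-side so the signed response can be verified.
    function newChallenge(session) {
      const challenge = crypto.randomBytes(32);
      session.pendingWebAuthnChallenge = challenge.toString("base64"); // hypothetical field
      return challenge;
    }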


Hope this will finally put some pressure on Mozilla to build TouchID support:

https://bugzilla.mozilla.org/show_bug.cgi?id=1536482


Provided there's someone still working there that's capable of implementing it.


I was just thinking: Could we replace CAPTCHA with Touch ID/Face ID when browsing a website on Apple devices?

That would be a pretty quick and painless way to verify that I am, indeed, a human, with no unpaid labor involved.


Potentially, yes. Authenticators can provide a public certificate, so servers can accept attestations created through whitelisted authenticators.

This would mean being forced to use proprietary hardware or closed-source authenticators, because that’s the only way to control authenticator certificates.


C'mon Apple, just add NFC to MacBooks already. FIDO cards are the most obvious solution to this.


This would also open the door to actually secure online card payments by just tapping the physical card (or Apple/Google Pay) instead of relying on static card numbers & hacks such as 3D Secure.


I don’t know what I’m talking about, but isn’t the Secure Enclave already a solution to this? Fido cards just sound like an external solution that all iPhones and new MacBooks already have integrated (T2)


Sure, that works when you have only one computer in your life (aka almost nobody). It's especially bad with Apple, with 3 different USB standards. Contactless is the future.


What's stopping you from linking more than just one physical FIDO-enabled device to a particular site/service? Most MFA implementations I've seen already allow this, and it seems especially important if you want to minimize the pain of losing/damaging that one sacred card.


AWS, for example, only permits one MFA device per IAM user account. I have four computers and four tokens. :/


I don't carry any keys, so those massive dongles would be a pain. Plus, most is not ALL, which breaks everything.


Mobile phones can be FIDO enabled. (see https://developers.google.com/identity/fido/android/native-a... )


I've come to appreciate DigitalOcean's approach on the matter.

Until you configure a 2FA device (TOTP in my case) they'll send you a temporary code to your email address.

So password alone is worthless, and mailbox ownership is verified on every login.

Also, I made a point never to use the "login with X" feature, no matter what X is.

I always sign up with my own email.


I really hate the "we're going to send a code to your email" approach because it so often breaks.

Case in point, I tried to log into Patreon earlier today, they insisted on sending a code, and it never arrived. I tried several times, no email. I have a record of every email sent to my address for the past several years regardless of spam status so I can be 100% positive they simply never sent it.

In the end I had to give up. I can't log into Patreon today, because of their insistence on sending me an email even though I'm logging in from a computer I've used many times before.


I'm curious -- do you host your own email, or use a lesser-known email provider?

I am increasingly seeing failures where sites seem to be blackholing outgoing email, I suspect based on the destination domain, and unaware that they are even doing this / extremely insistent that they are not. I've gotten login-email failures like you describe from a couple of sites, and seemingly similar failures from those "email your kid this story" sharing systems on two news sites my mother uses. In all these cases, the messages simply never arrive at the destination SMTP server.


Patreon has had many outages in the last few weeks, typically for 30mins to 2hours at a time.

The outages have been affecting their authentication (login and sending emails) as well as the API.

Their status page is also very slow to update. This shows a fraction of the outages that myself and others have experienced. https://status.patreon.com/


I have a custom domain that uses Gmail as its mail provider (for legacy reasons related to the other users of this domain), and my Gmail account is configured to never mark anything as spam and to forward everything to Fastmail. This means that no matter what I do with my email, I can log into the gmail account and see every email I've received for years sitting in my inbox.

I don't really recommend this setup, I'd really prefer to cut Google out of the path entirely, but I can't do that without abandoning my email address.


Gmail and most other providers will 5xx incoming mail they see as spam before it even gets to your spam settings. It's a "feature" on their end to outright block more blatant spam, but it often snags larger senders of legitimate email. Like Patreon.


If it’s catching Patreon then Patreon must be doing something really egregiously wrong, because there’s tons of spam I get to my address that’s exceedingly obvious. And I have no problem receiving other email from Patreon, so I think Patreon really just failed to send the login code email.


I've had this issue with some websites that use transactional e-mail services to send their e-mails. A temporary failure on your domain or email provider will flag the address as bad seemingly permanently on their e-mail provider's side so that it will instantly fail any future sending attempts to that address even after the original issue is resolved.


I host my own email but I'm not having any problem with digital ocean's mails.


I would agree except I always had issues where I would never receive their TOTP emails. I was using Fastmail and had my spam settings set very leniently.

Fastmail Support claimed they never received any mail (ie it hadn't bounced) but DigitalOcean Support would insist I check my spam folder for the 500th time.

Ultimately, I just swore to never use their services, primarily because I... could never log in and actually use their services.

I think it's fixed nowadays but I wasted so much time on that issue.


You can change a password but you can't change your fingerprint / palm / etc. Am I missing something? How is Face / Touch ID more secure than user + pass? What happens when biometric data is leaked?


you're right that using biometric data for authentication is bad, but that's not what's happening here. faceid/touchid stores a private key on your device, and uses your biometrics to unlock that private key. it's the same idea as using a yubikey or something where you have to press the button on the 2fa dongle to prove it's physically in your possession and unlock the private key stored within it, but it goes beyond just pressing a button by doing some biometric identification.

the concern isn't what if the biometric data is leaked, because the biometrics are only used to allow local access to your phone. the concern is what if the private keys stored in your phone are leaked. and the secure enclave makes that as close to impossible as any other existing solution.


I know very little about security. Could you please explain it in a simpler way?

So, does my iPhone create a private key (with my Face ID data) on the phone itself, and then the browser asks the phone to do something to authenticate the user? I am lost here.


You can think of a private key like a really long, complicated password. Like, thousands of characters. But you don't have to type it in every time, you just let your phone store it for you and fill it in for you in apps (and now, websites).

To log into a website, your iPhone checks to see if your face is your face, and if it is, it unlocks your private key and uses it to prove to the website that it's you. If it can't identify your face, then it won't sign you in.

Sending your actual facial data to a website would be bad because you can't change your face, so if you give your face to one site, then that site could use it to log in as you to other sites. But by just using it to unlock a private key, you (or Apple on your behalf) can still change or de-authorize your private key, and use a unique one for each site. Basically all the good practices you're supposed to follow when you use a password, and they aren't giving any site any special data that they could use anywhere other than their own site.


So does my phone have a different private key for every website?


Yes. In WebAuthn every single time you enroll on some web site with this system, a completely random new private key will be generated and the site will be given the corresponding public key and a fresh magic "cookie" identifier that serves no other purpose.

Your Apple device remembers the association between this particular web site, any user ID the site said is relevant (e.g. maybe the username mrwnmonm and friendly name "Shiny Steve") the cookie, and the private key.

On subsequent visits either of two things can happen:

1. You tap some sort of easy one-touch login button. The Apple device says "Hey, sign in here as mrwnmonm / Shiny Steve?" and you use your Touch ID to prove you are still you; this unlocks the private key, Safari uses it to create a proof that you still hold that key, and attaches the proof and the cookie. The site recognises you must be Shiny Steve and you're in.

2. You sign in "normally" (e.g. with a username and password) but then as a Second Factor the site shows the Apple Device the cookie it remembers, your device recognises this cookie and prompts you for a touch to prove you are still Shiny Steve, whereupon it uses the private key to sign a proof and send it back to the site.

Because the keys are different on every site, even if two web sites deliberately work together to try to figure out if a user on one site is the same person as a user on another site, WebAuthn doesn't help them do that at all.

Also unlike passwords or most other schemes, there's no risk from mass data loss because the web site is storing public information. If a "dump" of every Facebook WebAuthn public key was made, that's essentially useless to everybody except Facebook anyway, whereas obviously a password dump or a dump of all the TOTP secrets would be a huge security problem.
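
If it helps, case 1 above roughly corresponds to a navigator.credentials.get() call like the sketch below (names are illustrative; the challenge and the saved credential id both come from the site's backend):

    // Sketch of the return-visit sign-in described in point 1 above.
    async function signIn(challengeFromServer, savedCredentialId) {
      const assertion = await navigator.credentials.get({
        publicKey: {
          challenge: challengeFromServer,                                     // Uint8Array from the site
          allowCredentials: [{ type: "public-key", id: savedCredentialId }],  // the "cookie"
          userVerification: "required",                                       // forces the Touch ID / Face ID check
        },
      });
      // The site checks this signature against the public key it stored at enrollment.
      return {
        credentialId: assertion.id,
        signature: new Uint8Array(assertion.response.signature),
        authenticatorData: new Uint8Array(assertion.response.authenticatorData),
        clientDataJSON: new Uint8Array(assertion.response.clientDataJSON),
      };
    }

Case 2 (second factor) is the same call; the only difference is that the site already knows who you claim to be before it sends over the credential id.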


Oh man, couldn't they write it this way?

This comment summarizes it all. Thanks very much.


Yes, it's unique for each site, which makes WebAuthn extremely phishing resistant. Even on look-alike domains the origin doesn't match and your phone has nothing to send.


Your phone holds the passwords for the websites but the passwords never get presented to the user. FaceID is just used to unlock the password store. You would have to steal a phone and trick FaceID on the phone to gain access to someone's account.


Sorry, I still don't see the improvement here, doesn't that already happen using chrome for example, or Touch ID with 1password? (I think IOS has that too)


That's correct, but 1Password stores a static secret (the password) and this may be re-used by less security-minded folks, as you probably know. WebAuthn servers only store a public key, which is useless in case the server gets compromised. WebAuthn is also phishing proof by having browsers verify the domain the credentials are used for.

See more at https://webauthn.guide/


Thanks for the link <3


As far as Apple devices are concerned, biometric data never leaves the Secure Enclave, so risk of data leak is non-zero, but sufficiently low for everyday life. Compare that to the massive number of people who use password123 as their password for every account across the web, and yes, biometrics are far more secure. :) More secure than a random 30 character string? Probably not.

And if you’re targeted by individuals who are sufficiently motivated to steal your biometrics and the physical device, then neither the password nor the biometrics will be enough to protect you.


> How is Face / Touch ID more secure than user + pass

Because the password that the average user is using is shared among 100 sites and has been leaked 10 times over, and meanwhile there is nobody dusting their leftover coke can for prints.


Biometric data is never leaked because it never leaves the secure enclave in the device.


Biometric data is leaked whenever you touch a doorknob.


This. Biometrics are perfect for tracking but sh*t for security!


Your face can easily be mimicked. Biometric data is not stored in the phone, it is stored in you.

See: https://www.macrumors.com/2018/12/16/3d-printed-head-android...


> Your face can easily be mimicked.

Article: "I was ushered into a dome-like studio containing 50 cameras [...] The final model took a few days to generate at the cost of just over £300."

I don't see how that can be characterized as "easily." Possible, yes, but not easy. And it still didn't fool FaceID.


Maybe not easily in 2017, though there were other successful attempts to fool FaceID at the time [1]. There are also easier attempts today [0] that can successfully bypass it.

[0] https://www.technologyreview.com/2020/02/29/905599/how-coron...

[1] https://www.forbes.com/sites/daveywinder/2019/08/10/apples-i...


Torture can extract an alphanumeric password.


But is it the right password, or the duress one?


This thread is about Apple. From the link:

> 3D Printed Head Fools Android Face Recognition, iPhone X 'Impenetrable'


> The final model took a few days to generate at the cost of just over £300.

"easily"


£300 is chump change in most any organization's budget.


This is why real security starts with a threat model.

If you are worried that someone will kidnap you, take you to a 3-D imaging system, hit you with an amnesiac so you forget that happened, build a mask realistic enough to unlock your iPhone – which that article noted could NOT be done for £300 — and then use that to unlock your devices you have to start by asking why they wouldn’t simply unlock the device when they had enough control over you to run an invasive scan. That’s a movie-plot threat, not something anyone reading this needs to worry about and if they did they should be investing in bodyguards.

Similarly, in the real world you have to make trade-offs. In this case, the alternative is using a password. Those are not only much, much easier to observe with a camera but also open rich new areas for an attacker to try: passwords are generated by normal people so they’re often weak, notoriously reused across multiple sites, and people are convinced by phishers to enter their passwords on the wrong site. Trying to protect against the Hollywood threats makes you more vulnerable to the kinds of things which befall many people on a daily basis.


It's not.


> What happens when biometric data is leaked?

Like when someone takes your picture? Nothing.


Neither sync nor keychain is mentioned in this post or in the WWDC presentation, so I'd like to ask if the credentials are synced between devices at all, or if they're transferable. While it'd be more secure to run it through a device's T2, a syncing/keychain-backed key would save a lot of the re-enrollment hassle for users who end up transferring their data to a new phone or need to enroll each of their devices for all of the web services they use.


They are not synchronized between devices. This is generally considered to be a no-no with FIDO (although Apple has not publicly announced whether they intend to be FIDO certified).


Apple is part of the FIDO alliance, I would assume they intend to be FIDO certified.


For simplified login situations (e.g. not requiring two factors for authentication) you'll probably have some kind of password or password-less (email) login flow where you can login at first and then enroll the FIDO2 device.

Otherwise there is no way to login from any other device.

An alternative is that you can authenticate a new device from an existing device along with a re-auth on that device to verify it's them. Discord has a feature similar to this with its QR login system.


I'm curious about "Apple Anonymous Attestation". Is Apple taking on any liability by providing this service or is this all done on hardware through private APIs?


It is only for Apple devices, and all it does is allow them to easily disable attestation (and thus the "is this device used for auth secure" assurance) for a singular TouchID/FaceID device.

Instead of with Yubikey where if the attestation key is compromised, and it is blacklisted, they disable every single last device manufactured with said attestation key.


> easily disable attestation

all future attestations. U2F/webauthn doesn't actually provide a revocation mechanism.

> Instead of with Yubikey where if the attestation key is compromised, and it is blacklisted, they disable every single last device manufactured with said attestation key.

it is important to note that there is not a "the" attestation key. there are many. "disabling" one, to the extent that is even possible, disables only the group with that key.


Attestation is just a technical term in the WebAuthn API. Different authenticators can use different attestation data formats. Apple rolled their own, hence the name.


Apple didn't just roll their own, though; they improved upon it by allowing them to easily revoke attestation for a particular implementation without affecting all other devices out there.

So if an attacker tampers with the physical device, they can revoke the key for that particular device so that it is no longer trusted (the way I am reading it) vs yubikey where if an attacker has messed with one key, there is no good way to revoke attestation for that one device.


> So if an attacker tampers with the physical device, they can revoke the key for that particular device so that it is no longer trusted (the way I am reading it) vs yubikey where if an attacker has messed with one key, there is no good way to revoke attestation for that one device.

Attestation is a statement that the hardware and firmware are genuine, with the trust model being based on genuine hardware/software. You would not typically revoke anything - third party trust in that hardware/software combination would go away. That might be for example all iOS versions below 14.2, or all iPhones before the A14 chip.

If your service cares that much about key policy, you remember the attestations of each key so that you can change how they apply to security policy. That might be, for example, allowing someone to authenticate into a lower security level until they re-register the phone after upgrading their operating system.

In that sense, Yubico has a disadvantage in that their security policy does not allow firmware updates, so any firmware compromise will permanently tarnish those manufactured keys.


> allowing someone to authenticate into a lower security level until they re-register the phone after upgrading their operating system.

Instead of "This site requires IE6" we'll have "To log back into your account, please buy an iPhone 12 Pro Max and re-register".

I'm imagining an undesirable future where having access to different sites requires carrying around multiple different pieces of authentication hardware, and the user has to keep track of which device is needed for each site. Still, that might be better than a future where all sites mandate using an authenticator made by a specific monopoly company/government, assuming we can avoid that.


Firefox and Chrome both prompt before disclosing the attestation information of your key, adding a UX tax to asking for attestations.

The protocol also does not have a way to get the browser to 'filter' potential authentication methods to just say a google titan key or iPhone 12. So while sites might determine whether to show the authentication options by scraping user agents like they always have, they can't get the browser to handle the user tapping the wrong kind of key. Instead, the browser has to repeatedly inform the user of what went wrong and drive the user through registration/authentication.

So websites _can_ restrict things to a particular form of authentication, but in many cases it may lead to a sub-par user experience. They may also need to tune this repeatedly, for instance to allow other browsers once Apple makes this available to them, or say pushing the authentication experience from desktop to phone or watch.

Since Android and Windows Hello support the same API, platform restrictions in particular have been just asking for more rope to hang yourself with. Such restrictions have been required up to this point because the platform support has been spotty (with Android being the current third place)


Can this Attestation be safely issued from macOS running on Apple Silicon, conditional on whether the user has disabled various hardware/software security features on the Mac?


When this was pre-announced awhile back I highlighted the lack of anonymity, due to the Apple secure element (T2 on laptop/desktop; whatever the name is on mobile) lacking attestation capability. That comment was downvoted by folks that presumably don't understand the problem at hand.

In that pre-announce video Apple dared to insinuate that attestation by current devices is de-anonymizing, when that isn't universally the case. And that their flavor of such would be "actually" privacy preserving. At the time, I figured this (a remote service) would be how they would address it, as the secure hardware probably doesn't have the burned in capability. Largely due to Apple needing to do it their own way.

Now of course, like the "normal" U2F attestation, this is private between the client and the service. But in Apple's case, like DOH, it is not private between client and Apple. IOW, a decrease in privacy, but marketed as an increase! LOL, classic Apple!

They go even further in their blind denial by stating:

> this approach avoids the security pitfall of Basic Attestation that the compromising of a single device results in revoking certificates from all devices with the same attestation certificate.

without recognizing or acknowledging that there is still a single point of compromise, but now it's a cloud service which many engineers have access to and presumably isn't 100% impervious to outsider attack either. So yes, it may avoid the single device security pitfall (resulting in a largish group of devices requiring revocation), but it trades it for an even bigger pitfall of all devices everywhere needing to be revoked!

It's sad that the reality distortion field has to be extended to privacy and security. Apple could easily just present their take on it without confusing the issues for the sake of marketing.

Further, it's up to the site to decide whether to use an anonymous attestation or not. The user will not be informed either way. With "standard" attestation, it's up to the user in their decision of what token (security key) to buy, and is always in effect. That said, a non-attested enrollment, if done correctly on the client side, doesn't give up privacy; just security.

To answer your question, it depends what you mean by "taking on". Yes, Apple is assuming some risk. They are not taking on any liability in the legal sense.


> In that pre-announce video Apple dared to insinuate that attestation by current devices is de-anonymizing, when that isn't universally the case.

Most hardware devices use batch attestation, where some group of (say, 100 000) keys all get the same private key. This does still provide some data for correlating users based on make + model + batch of their authenticators.

ECDAA was meant to be an approach to solve this with pairing-friendly curves and crypto, but the industry stayed clear of this as an unproven algorithm.

Apple (and the rather similar SafetyNet approach by Google) use a service. The interaction between the device and this service is black-box and we do not yet know how much information is exchanged. By its nature, this service can change in the future and we will not know.

However, it _could_ be limited to an RSA or EC-based DAA attestation from the Secure Enclave, a curve point on P-256, and a SHA-256 hash value to embed. There's no reason Apple needs to know either the specific piece of hardware or the site a user is going to.

> Further, it's up to the site to decide whether to use an anonymous attestation or not. The user will not be informed either way.

Several browsers will give the user the option to not send attestation if a site asks for it, to the point where capturing attestations is a UX-impacting action. You could say the UX impact is intentional - not only do basic attestations leak some tracking information, but they limit user choice in how to authenticate.


> There's no reason Apple needs to know either the specific piece of hardware

In order to revoke a specific device without cooperation of the device itself, which is one of their advantageous claims, they do.

Disregarding that property, Apple needs to know that the "CSR", ie attestation request, is coming from the secure enclave. That means that the "CSR" needs to be signed by some key. If the CSR signing key is unique per device, Apple receives device-specific information. This can't be an ephemeral key, as there wouldn't be a chain back to the secure element. If OTOH the CSR signing key is shared, it is probably exposed, or at least accessible for malicious signing, via the now known T2 compromise. There is much to be gained by such an exploit, so this isn't merely a thought exercise.

This complexity is all avoided by on-device attestation. EPID or EPID-like scheme could have been used, vs batch attestation.

Honestly, Apple is derelict in not describing this in more detail. Had they not made claims, it would be ok. But they are directly claiming to be better. I think we can be 100% certain they are aware of this depth of detail internally. Apple is not amateur hour.


> Disregarding that property, Apple needs to know that the "CSR", ie attestation request, is coming from the secure enclave. That means that the "CSR" needs to be signed by some key. If the CSR signing key is unique per device, Apple receives device-specific information. This can't be an ephemeral key, as there wouldn't be a chain back to the secure element. If OTOH the CSR signing key is shared, it is probably exposed, or at least accessible for malicious signing, via the now known T2 compromise. There is much to be gained by such an exploit, so this isn't merely a thought exercise.

> This complexity is all avoided by on-device attestation. EPID or EPID-like scheme could have been used, vs batch attestation.

EPID is a DAA scheme. _If_ Apple is using a DAA scheme to their attestation service, then you would use that to protect the equivalent of a CSR (which really should only need the public point and hash of the creation request).

The intermediate service exists to hide the choice of DAA scheme (and needs to potentially require things like BN curve cryptography to handle it), and to do other value add logic. For example, the attestation format is extremely similar to DeviceCheck attestations, which assert hardware/os/application authenticity for a third-party remote service call.


> In order to revoke a specific device without cooperation of the device itself, which is one of their advantageous claims, they do.

My understanding is that there are methods to revoke a particular group key under DAA, which would prevent that device from being able to retrieve attestations in the future.

That said, revoking individual devices is somewhat nonsense from a security point of view. There's nothing (other than the difficulty of the hack itself) that prevents a compromise of one phone from being replicated across the entire production line, which impacts the security reputation of the entire line.

Also, it is hard to imagine the use case for identifying and revoking the attestation for a single user's devices that isn't troubling.


We've built and use https://thumbsignin.com (TSI for short) on our intranet apps for simple (i.e. one step and secure) authentication using our mobile phones. ThumbSignin is FIDO certified and we've worked to add more authentication means for enterprises as time passed (TSI is 2 years old now I think).

Disclosure: I work for Pramati Technologies which developed and owns ThumbSignin.


I toyed around with this on https://passwordless.dev/usernameless on iOS 14 and now I have two test users to choose from on the 'Do you want to sign in "www.passwordless.dev" using a saved account?' prompt. I have yet to find a way to remove those test accounts again.


Removing is covered in the article:

> Credentials can only be cleared for all via Safari > History > Clear History… on Mac Safari or Settings > Safari > Clear History and Website Data on iOS & iPadOS.


Thanks for the quick response. I thought I tried that and just made sure I did. The following did not work for me: Settings > Safari > Clear History and Website Data > Searching for "passwordless.dev" > Edit > Removing the "passwordless.dev" row. The two accounts are still there when reopening passwordless.dev and clicking on "Sign in". I'm probably missing something obvious.

Edit: Oh. I guess the "for all" in the article mean that this only works if I clear the complete history for all sites? I did not try that.

Edit2: Cleared all website data. The two test accounts are still there.


Also confirming that clearing all history & website data does not clear it on iOS.


Why exactly does it show a popup saying "website wants to use Touch ID"?

It could just say "Touch ID to log in" on the Touch Bar; wouldn't that be enough? If the user prefers otherwise, they simply won't touch it. Doesn't the popup create extra friction (unless I'm missing something)?


My understanding is that the permissions popup is used for the registration flow (generating & enrolling a key into the Secure Enclave), and after that if there are existing credentials, it's just 'Touch ID to login'.


Honest question: why do I feel I have to be an advanced programmer to understand all of that?

I see a lot of confusion in the comments, so it is not just me. Why the article couldn't be simpler?

Maybe I am just stupid, I don't know.


It seems like it's really written for potential implementers already in that field.

I love technical content that does not shy away from going in depth.


Is this only implemented on Apple devices, or is it cross-platform?


They are actually implementing the Credential Management API, which is a W3C working draft - the underlying implementation in Safari/WebKit will allow using Face ID / Touch ID as the authenticator. Chrome 67+ and Firefox have already had WebAuthn support for some time - https://developers.google.com/web/updates/2018/05/webauthn, https://itnext.io/biometrics-fingerprint-auth-in-your-web-ap... - and they use fingerprint, Windows Hello, USB/NFC keys as the platform allows - see for e.g. https://www.xda-developers.com/google-chrome-supports-window...


This appears to be Apple's implementation of WebAuthn which is a standard specification. So while this page is about the specifics of this in Safari, the WebAuthn spec in general is widely implemented. If your device has a TPM/secure enclave/etc (most modern devices do) then it works already in Chrome/FF/Safari/Edge on desktop and Chrome on Android and Safari on iOS.

If you want to see it in action you can try out a demo at https://webauthn.io
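
If you want to check what a given browser/device can do before offering the option, there's a standard feature-detection call; a minimal sketch:

    // Is WebAuthn available at all, and is a built-in ("platform") authenticator
    // (Touch ID / Face ID / Windows Hello / Android biometrics) usable?
    async function platformAuthenticatorAvailable() {
      if (!window.PublicKeyCredential) return false; // no WebAuthn support
      return PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable();
    }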


Looks like it's "just" another authn method for using the phone itself (or more precisely, the "secure element") as a WebAuthn/FIDO2 dongle. So the specific UX is Apple-specific (although it could of course be replicated by others), but the API is generic (and already exists).


As far as I know, both Face ID and Touch ID require the Secure Enclave, so that would be Apple only.



Yep, it can. I don't know what logic it uses to request TouchID vs. keychain password (because I occasionally see both), but, nonetheless, I definitely had the Chrome TouchID pop-up appear quite a few times when I tried to autofill my credit card info.


If passwords are the original sin, then Face ID and Touch ID are Sodom and Gomorrah.

Authentication is something you KNOW. Strong authentication is something you KNOW, and something you HAVE.

Something you ARE is great for identification, but terrible for authentication. Something you are cannot be changed like a password.


The “something you have” is the key stored in the secure enclave/yubikey/whatever. That’s the real authenticator in this scenario. That it’s (maybe) unlocked biometrically is nice, but secondary.


In this case Face/Touch ID represents something you have (the iOS device with a persisted credential in its secure enclave), and something you are (fingerprint/face).
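
(To make that concrete: the relying party can tell from each assertion whether both factors were actually exercised, because the authenticator data carries "user present" and "user verified" flag bits. A minimal server-side sketch, following the WebAuthn authenticator-data layout:)

    // authenticatorData layout: 32-byte rpIdHash, 1 flags byte, 4-byte
    // signature counter, then optional attested credential data/extensions.
    function wasUserVerified(authenticatorData: Uint8Array): boolean {
      const flags = authenticatorData[32];
      const UP = 0x01; // user present  ("something you have" was used)
      const UV = 0x04; // user verified ("something you are" or a PIN was checked)
      return (flags & UP) !== 0 && (flags & UV) !== 0;
    }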


At least, in the case of biometric authentication on device, the ability to use biometrics expires automatically when the device is turned off (and other times), so biometric credential theft has a limited window of vulnerability. This contrasts with many fixed biometric authentication systems that aren't used as often as a cell phone.


Maybe... but if I steal your fingerprint, I have it forever.

And the courts can't yet force you to reveal secrets from your mind. But you can be compelled to use your fingerprint to unlock things.


In the USA? I'm not even sure. In France there are many circumstances where you must reveal your phone password or any hard drive's encryption key if there is legitimate suspicion it has been used for criminal activity (there was a court case involving a cannabis dealer).


If I could use FIDO2 on my YubiKeys in more places, it would be fantastic.


You can already use NFC and Lightning YubiKeys on your iPhone. On iPad Pro there are still issues using USB-C, in my experience. Being able to use Face ID there now will be nice.


Right, I am hoping that this leads to more websites utilizing FIDO2. Kind of like how Android Pay was around for many years but didn't actually take off until Apple Pay made it happen.


There are three methods on iPhone:

1. NFC.

2. USB - this is actually how it works with the Lightning connector on iPhone in Safari; the Yubico model with a Lightning port acts as a Lightning-to-USB adapter in this case.

3. MFi (Made for iPhone) - some third-party browsers can use the External Accessory framework to talk to the Yubico key over the Lightning port. Safari does not support this, but luckily it can just use these keys as USB keys.

I believe there is a hardware limitation: the MFi protocol only works over the Lightning port and not the USB-C port, so these keys work in third-party browsers on iPhone but not on iPad Pro or the new iPad Air.
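
(As a small illustration of the website's side of this: the relying party can hint which transports a registered security key supports, and the browser/OS picks whatever is actually reachable, e.g. an NFC tap or the Lightning/USB path described above. credentialId here is a placeholder; the transport values come from the WebAuthn spec.)

    // Ask for an assertion from a roaming security key, hinting at the
    // transports it was registered with.
    async function signInWithSecurityKey(challenge: Uint8Array, credentialId: Uint8Array) {
      return navigator.credentials.get({
        publicKey: {
          challenge,
          allowCredentials: [
            { type: "public-key", id: credentialId, transports: ["usb", "nfc"] },
          ],
          userVerification: "preferred",
        },
      });
    }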


Last I checked, Apple only supported U2F and similar in Safari; the WebKit exposed for other iOS browsers to use can't access those features. Did that change in iOS 14?


According to the article

> Like Face ID and Touch ID for the web, security key support is available in Safari, SFSafariViewController and ASWebAuthenticationSession.


OK, looks like ASWebAuthenticationSession is the one Chrome for iOS uses; hopefully that means we'll gain U2F, Touch ID, and Face ID there then.


Oh cool, the new ARM MacBooks coming next month have Face ID.

Sweet.


PWAs will be a huge deal in the near future.


What will Chrome do?


This is a part of the Credentials Management API / WebAuthn API: https://developer.mozilla.org/en-US/docs/Web/API/Web_Authent...

They already support this for things like FIDO hardware keys and other biometric hardware. It's also supported by Firefox.


Other platforms have supported this for a long time. https://www.zdnet.com/google-amp/article/windows-10-says-hel...


Probably the same as they did with Apple Pay: (re)launch a comparable feature on Android?


> What will Chrome do?

Develop & give away a verifiable, open-source hardware & software solution for a silicon root of trust, with a focus on security through transparency[1]. Google will organize & make available the world's information on how to secure systems well.

Yes, this is intended for, among other uses, implementation of a FIDO2 universal 2-factor security key[2].

[1] https://opentitan.org/

[2] https://github.com/lowRISC/opentitan/pull/1125/files#diff-6d...


How does this work on laptop/desktop computers?


Like the laptops that Apple already ships with Touch ID built in?


Most (all?) new Apple laptops have Touch ID, but I agree this seems most useful on iPhone/iPad, which kind of makes sense, as the majority of 2FA keys (YubiKey, etc.) still plug in.

That would surely be more awkward than using the already built-in SEP and authentication system.


It's not supported by mobile or desktop Safari?


It looks like it's a feature of the next release? Maybe the developer versions already have it?


Should there be a non-anonymous option, for social media sites where you have to log in under your real name? Verified by face recognition and fingerprints?


I can't tell if you're being sarcastic lol.


How secure is this?


The s in "face ID" stands for security.


I may use this when I find a cheap surgeon in case the "mathematical representation of my face" is leaked.


I predict logs of Face ID attestations will feature in divorce lawsuits within a year of them becoming commonplace.


Why?


Dating sites?


Presumably because they think Face ID keeps a picture of people's faces.


No. Thank you very much.


Care to elaborate?


Why would anyone use this except Apple? It's bad for users, bad for websites, but good for Apple.


I would never use this for anything sensitive.

Bad actors can get your face and your fingerprint. Some of them already have it (governments, banks, Apple, Facebook, etc). And changing your face or fingerprint is practically impossible.


This wouldn't just require them to have your face and fingerprint, though. This would require them to:

1. Have access to your phone

2. Be able to spoof the phone's authentication mechanisms (whether that be fingerprint / face)

In this regard, it passes the 2FA test (something that you own, and something that you are).

While it's true that you can't really change your face or fingerprint, this facilitates moving away from the "password repeated across multiple accounts" landscape of insecurity.


Getting your face (and it has to be a high resolution 3D model of your face, not just a photograph) or your fingerprint is not enough. They also have to have physical access to one of your devices.


Apple has my face and fingerprint? I haven't heard this before and Google turns up nothing. Any source?


Your Apple device has a hash of your fingerprint and/or face at best. Apple itself has nothing of the sort.


You put your fingerprint into a black box and that black box phones home to Apple servers. From a security standpoint, we must assume that they have your fingerprint unless there's a way to prove that they don't. iOS is closed source so we can't prove that.

From an epistemological point of view: I don't know if they have it.

From a security point of view: they have your fingerprint.


Just because something's closed source doesn't mean we can't learn about it. The field of reverse engineering has been around for a long time, and the iPhone and Mac are two of the most studied devices around.

Ultimately you do have to trust your platform if you're going to use a platform authenticator to some degree.

But the alternative in the FIDO2/webauthn space is something like a Yubikey which has no biometrics; it just takes a simple tap. And can be easily removed from your computer. So in that comparison, the fingerprint is purely additive security. Even if it's adding nothing to a serious adversary, it's still dramatically reducing risk to a less-skilled local attacker.


My bank has my money. Should I feel robbed, from a security point of view?


What? Apple does not have your face or fingerprints.

FaceID and TouchID both record any data they need in parts of the enclave that can't be used for anything else, and cannot be extracted. The information that they do store is not a picture, and cannot be reversed even in the case where someone does manage to get that data.

They have repeatedly documented how these features work.

Similarly, Touch ID and Face ID aren't sending your biometric data to a third party (because, again, even if they wanted to, they can't); they're simply gating access to credential data on successful local authentication.


I am not comfortable with Apple getting any control over the web. Thankfully, so far the web is mostly free from Apple's clutches and they can run their fiefdom only in the App universe.

I shudder to think of a future where websites will have to implement some version of Apple's In-App Purchases program and give 30% of the revenue to Apple or else they won't be allowed to render on Apple devices.


Has anybody ever thought about how Face ID is pretty much a backdoor into your iPhone? Think back a few years to when Apple refused to open a felon's iPhone for the US government, because "they couldn't." If that happened now, they wouldn't even have to ask Apple, given the felon has Face ID enabled.

Edit: I'm kind of surprised by the downvotes, given I thought HN was pretty big on personal privacy. Just thought I'd stir up the discussion, that's all.

Edit 2: I personally think U2F is the way forward here, not Face ID or Touch ID or other biometrics.


The risks of forced unlocking with FaceID have been discussed, including in court: https://appleinsider.com/articles/19/01/14/face-id-touch-id-...

The main alternative to biometric ID is a PIN. Those are seldom changed and are relatively easily shoulder-surfed. I've seen children who can't even read yet learn their parents' iPad PINs. If you're afraid of the police, surveillance needs to be part of your threat model.

There are some ways to prevent being forced to use face unlock if you see it coming, e.g. by squeezing your phone: https://www.macworld.com/article/3236793/how-to-quickly-and-...


I didn't know you could disable face ID by squeezing your phone. Thanks for the info, and for the court case as well.

I think a strong password on my phone is important, rather than a simple PIN, given it's my second factor for nearly everything.


Just as an FYI for anyone reading this thread: Android has a similar feature called "lockdown", that can be triggered by holding down the power button for a second and then selecting the option.


Possibly, but they have to be quick about it:

>To use Face ID, you must set up a passcode on your device.

>You must enter your passcode for additional security validation when:

>The device has just been turned on or restarted.

>The device hasn’t been unlocked for more than 48 hours.

>The passcode hasn’t been used to unlock the device in the last six and a half days and Face ID hasn't unlocked the device in the last 4 hours.

>The device has received a remote lock command.

>After five unsuccessful attempts to match a face.

>After initiating power off/Emergency SOS by pressing and holding either volume button and the side button simultaneously for 2 seconds.


I wasn't aware of some of these. Thanks for sharing.


There are limitations: iOS requires a passcode after a certain amount of time, after too many bad attempts, or after a reboot. The window in which law enforcement could use your face to get in is pretty short.


The fingerprint sensor found on most phones doesn't always require intent, either.


Interestingly, Touch ID launched one year after Apple refused to unlock an iPhone for the FBI.


Did you mean FaceID? The iPhone 5c in the San Bernardino shooter case was released at the same time as the iPhone 5s, which introduced TouchID.



