There are at least a few others left; https://gitlab.com is one I regularly use.
Sadly the amount of money you need to spend on security & support for such a service does make offering such a service (particularly for free) not viable for smaller entities, there are some big economies of scale to make these things viable that work particularly well if you can also get big companies to pay for commercial offerings.
It's a double-edged sword. Actually creating a good standard that people want to use, through an open process that aims to be unbiased, takes a non-trivial amount of time and hence costs a not insubstantial amount of money. Different standards organisations have chosen different approaches to solving that problem, and although freely available standards are my preferred approach, it is also very clear that ISO standards are well respected and widely used despite the need to pay to view them in some cases.
Some kind of funding model where large corporations can pay to have a standard written would be ideal. Even then it seems a bit odd. Web APIs don't seem to have this problem. Just have the big corps donate some engineers instead. I don't know, I guess nothing is perfect.
My understanding is that it is quite often government/country contexts where (because ISO is recognised in various international treaties) it is easier to get approval to use an ISO standard than it is to use an OpenID Foundation standard. So getting OpenID Connect published with an ISO number just makes adoption easier for some projects.
OpenID Connect does of course remain free to view/use, but now people in the above situation have an easier option available to them.
Historically the IETF has been reluctant to get involved with Identity (and hence authentication) for various reasons. There are a few standards bodies in this area and they all have their strengths and weaknesses (the presentation by Heather Flanagan someone linked to elsewhere in the thread gives a good introduction).
Some RFCs are even basically available as ISO standards and vice versa; e.g. for time/date formats you almost never need to buy ISO 8601 and can just read RFC 3339 (which is technically a 'profile' of ISO 8601).
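As a concrete illustration of the overlap: an RFC 3339 timestamp is also a valid ISO 8601 one, and Python's standard library parses it directly (a minimal sketch, nothing standards-body-specific assumed):

```python
from datetime import datetime, timezone

# An RFC 3339 timestamp (also valid ISO 8601) with an explicit UTC offset.
ts = datetime.fromisoformat("2024-05-01T12:30:00+00:00")
print(ts.year, ts.tzinfo == timezone.utc)  # 2024 True
```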
I wasn't really following OAuth back in those days, but I have heard much of the history from those that were there at the time, and some of the early specs in this area failed partly for being too secure - and hence too hard to implement, so they never got adopted.
Was OAuth2 wrong to land exactly where it did on security back in 2012 or before? It seems really hard to say - it clearly didn't have great security, but it was easy to implement and where would we have ended up if it had better security but much poorer adoption?
Does the OAuth working group recognise those failures and has it worked hard to fix them over the years since? Yes, very much so.
Has OAuth2 been adopted in use cases that do require high levels of security? Yes, absolutely: OpenBanking and OpenHealth ecosystems around the world are built on top of OAuth2, in particular the FAPI profile of OAuth2, which gives a pretty much out-of-the-box recipe for easily complying with the OAuth security best current practices document: https://openid.net/specs/fapi-2_0-security-profile.html (Disclaimer: I'm one of the authors of FAPI2, or at least will be when the next revision is published.)
Is it still a struggle to get across to people in industries that need higher security (like banking and health) that they need to stop using client secrets, ban bearer access tokens, mandate PKCE, and so on? Yes. Yes it is. I have stories.
Back in 2012, TLS was not enabled everywhere yet. OAuth 1.0 was based on client signatures (just like JAR, DPoP etc., but far simpler to implement) and it was a good fit for its time. One of Eran Hammer's top gripes with the direction OAuth 2.0 was going in was removing cryptography and relying on TLS. I think this turned out to be a good decision in hindsight, since TLS did become the norm very quickly, and the state of cryptography at the IETF during that period (2010) was rather abysmal. If OAuth 2.0 had mandated signatures, we'd have ended up with yet another standard pushing RSA with PKCS#1 v1.5 padding (let's not pretend most systems are using anything else with JWT).
But that's all hindsight being 20/20, I guess. I think the criticism that has withstood the test of time better is that OAuth 2.0 was more of a "framework" than a real protocol. There are too many options and you can implement anything you want. Options like the implicit flow or password grant shouldn't have been in the standard in the first place, and the language regarding refresh tokens and access tokens should have been clearer.
Fast forward to 2024: I think we've started going back to cryptography again, but I don't think it's all good. The cryptographic standards that modern OAuth specs rely on are too complex, and that leads to a lot of opportunity for attacks. I've yet to see a single cryptographer or security researcher who is satisfied with the JOSE/JWT set of standards. While you can use them securely, you can't expect any random developer (including most library writers) to be able to do so.
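To make the "you can use them securely, but most developers won't" point concrete, here's a stdlib-only sketch of HS256 JWT verification that pins the algorithm instead of trusting the attacker-controlled header (the classic "alg": "none" / algorithm-confusion pitfall). The function names are my own, not from any real library:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_hs256(token: str, key: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Pin the algorithm: never dispatch on the token's own "alg" value.
    if header.get("alg") != "HS256":
        raise ValueError("unexpected alg")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    # Constant-time comparison to avoid signature-oracle timing leaks.
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))

key = b"shared-secret"
token = sign_hs256({"sub": "alice"}, key)
print(verify_hs256(token, key))  # {'sub': 'alice'}

# A forged unsigned token must be rejected, not silently accepted:
forged = b64url(b'{"alg": "none"}') + "." + b64url(b'{"sub": "mallory"}') + "."
try:
    verify_hs256(forged, key)
except ValueError as e:
    print("rejected:", e)  # rejected: unexpected alg
```

Several real-world JWT CVEs boil down to a library skipping exactly one of those two checks.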
"Though I'm not sure if requiring (instead of just allowing) PKCE is strictly OIDC compliant"
It's technically not compliant, but people definitely do so, and there are definite security advantages to requiring it.
Technically the 'nonce' in OpenID Connect provides the same protections, and hence the OAuth Security BCP says (in a lot more words) that you may omit PKCE when using OIDC. However, in practice, based on a period in the trenches that I've mostly repressed now, the way the mechanisms were designed means clients are far more likely to use PKCE correctly than to use nonce correctly.
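For reference, part of why clients get PKCE right is that the S256 method (RFC 7636) is mechanically simple; it fits in a few lines of standard-library Python (function names are illustrative):

```python
import base64, hashlib, secrets

def make_pkce_pair():
    # RFC 7636: code_verifier is 43-128 chars; 32 random bytes -> 43 chars.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # S256: code_challenge = BASE64URL(SHA256(ASCII(code_verifier))).
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_verifies(code_verifier, stored_challenge):
    # The AS recomputes the challenge from the verifier sent to the token endpoint.
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    recomputed = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return secrets.compare_digest(recomputed, stored_challenge)

verifier, challenge = make_pkce_pair()
print(server_verifies(verifier, challenge))  # True
```

The client only has to hash and compare; nonce, by contrast, requires the client to validate a claim inside a signed ID token, which is where implementations tend to go wrong.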
Yes, indeed. Both OAuth 2.1 & the BCP tighten things up a lot, although neither is technically final yet (the security BCP should be published as an RFC "any day now").
For people looking for an easy-to-follow interoperability/security profile for OAuth2 (assuming they're most interested in the authorization code flow, though it's not exclusive to that), FAPI2 is a good place to look; the most recent official version is here:
On the flip side, it is much more complex to implement than OAuth 2.1, since it mandates a lot of extra standards, some of them very new and with very little in the way of library support: DPoP, PAR, private_key_jwt, Dynamic Client Registration, Authorization Server Metadata, etc.
Except for PAR, these extra requirements are harder to implement than their alternatives, and I'm not sold that they increase security. For instance, DPoP with a mandatory "jti" is not operationally different from having a stateful refresh token. You've got to store the nonce somewhere, after all. Having a stateful refresh token is simpler, you remove the unnecessary reliance on asymmetric cryptography, and as a bonus you save some headaches down the road if quantum computers that can break EC cryptography become a thing.
In addition, the new requirements increase the reliance on JWT, which was always the weakest link for OIDC, by ditching client secrets (unless you're using mTLS, which nobody is going to use). Client secrets have their issues, but JWT is extremely hard to secure, and we've got so many CVEs for JWT libraries over the years that I've started treating it like I do SAML: Necessary evil, but I'd minimize contact with it as much as I can.
There are also some quirky requirements, like:
1. Mandating "OpenID Connect Core 1.0 incorporating errata set 1 [OIDC]" - "errata set 2" is the current version, but even if you update that, what happens if a new version of OIDC comes out? Are you forced to use the older version for compliance?
2. The TLS 1.2 ciphers are weird. DHE is pretty bad whichever way you look at it, and I get that some browsers do not support it, but why would you block ECDSA and ChaCha20-Poly1305? This would likely result in less secure ciphersuites being selected when the machine is capable of more.
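For what it's worth, you can see what a cipher restriction like this actually does to the negotiable set using Python's `ssl` module; a sketch assuming an OpenSSL-backed build (the cipher string is illustrative, not a recommendation from the profile):

```python
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# Restrict TLS 1.2 suites to ECDHE key exchange with AES-GCM or ChaCha20-Poly1305.
# (TLS 1.3 suites are configured separately and unaffected by set_ciphers.)
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
for cipher in ctx.get_ciphers():
    print(cipher["name"])
```

Dumping the result on the machines you actually deploy to is a quick way to check whether a mandated list forces clients down to something weaker than they support.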
In short, the standard seems to be much better than FAPI 1.0, but I wouldn't say it's in a more complete state than OAuth 2.1.
DPoP isn't mandated; mTLS sender-constrained access tokens are selected by a lot of people instead of DPoP. (And yes, I agree, mTLS has challenges in some cases.)
Stateful refresh tokens have other practical issues; we've seen several cases in OpenBanking ecosystems where stateful refresh tokens resulted in loss of access for large numbers of users when things went wrong.
The quirks you mention are sorted in the next revision. The cipher requirements come from the IETF TLS BCP [1] (which is clearer in the new version). If you think the IETF TLS WG got it wrong, please do tell them.
As other people said elsewhere, this isn't about completeness - OAuth 2.1 is a framework, while FAPI is something concrete you can, for the most part, just follow, and then use the FAPI conformance tests to confirm whether you implemented it correctly. If you design an authorization code flow following all the recommendations in OAuth 2.1, you'll end up implementing FAPI. Most people outside this space who implement OAuth will struggle to know how to avoid the traps once they stop following the recommendations, as "implementing OAuth securely" isn't usually their primary mission.
How common is it to use mTLS in the user-to-service use case (e.g. browsers with mTLS configured)? I mean, for (potentially external) service-to-service authentication it's way easier than for user (browser, app)-to-service.
Apple are absolutely 100% free to run the tool and fix any issues found, at no cost beyond their own engineers' time.
(There is a fee to pay for the 'OpenID Certified' logo and to be listed as certified, but in my opinion just running the free tool and fixing any interoperability issues is a major benefit by itself.)
The contributor (John Bradley) quoted as saying that OAuth 2.0 implementation mistakes are almost inevitable is one of the authors of the OpenID Connect spec, and if you follow the citation link ( https://mailarchive.ietf.org/arch/msg/oauth/WuT1tmFoxs8S_2v7... ) you'll see him mention that the flaw referred to is fixed in OpenID Connect.
The main requirement therein I would say is that you can't use OpenID to describe an implementation unless it implements the mandatory-to-implement parts of the relevant protocol specification (and all the specifications are 100% free; e.g. https://openid.net/specs/openid-connect-core-1_0.html ). That's more about interoperability than anything else.