Opaque handles everywhere (okay, that simplifies stuff going over the wire). Union types in protocol payloads - the spec calls these "polymorphic JSON", but the reality is that you will need to branch on the type of a given field. Worse, nothing prevents having two or more subtly different dictionaries in the same field, based on arbitrary/implicit conditions.
Subtle and surprising payload differences are pretty much guaranteed to introduce weird problems in the real world. And I'm not ruling out security problems either, because a bug in authorisation logic can easily generate tokens that are valid for the wrong scopes.
EDIT: There's this [1], but it only makes me ask more questions. The only rationale I can see from that document is “it would seem silly and wasteful to force all developers to push all of their keying information in every request”. Which makes me want to throw out oauth.xyz and never look at it again, because that looks like the authors have some absurd priorities in their protocol design.
[1] - https://medium.com/@justinsecurity/xyz-handles-passing-by-re...
OAuth transactions are "big" because they allow the use of RSA keys, which are large. The payloads would be smaller if the spec were simply opinionated and mandated a specific algorithm, such as Ed25519, which uses much smaller keys.
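To put rough numbers on the size difference, here's a minimal sketch comparing the base64url-encoded sizes of the key material involved. The byte lengths are the real ones (an Ed25519 public key is 32 bytes; an RSA-2048 modulus is 256 bytes), but random bytes stand in for actual keys:

```python
import base64
import os

# Illustrative size comparison only - random bytes stand in for real keys.
ed25519_pub = os.urandom(32)   # an Ed25519 public key is 32 bytes
rsa2048_n = os.urandom(256)    # an RSA-2048 modulus ("n" in a JWK) is 256 bytes

ed_b64 = base64.urlsafe_b64encode(ed25519_pub).decode()
rsa_b64 = base64.urlsafe_b64encode(rsa2048_n).decode()

print(len(ed_b64))   # 44 characters
print(len(rsa_b64))  # 344 characters
```

So inline RSA key material is roughly an order of magnitude larger than Ed25519, though either is small next to the rest of a typical request.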
Protocols like SAML, OpenID, and OAuth aren't. They're not protocols at all. They're protocol parts thrown into a bag that everyone can pull whatever they like out of. They support way too many cryptographic primitives, and have far too many pluggable features.
Just yesterday I had to deal with an outage because a large enterprise ISV's SAML implementation doesn't support signing key rollover! You can have exactly one key, which means after a key renewal all authentication breaks. You have to do big-bang coordinated rollouts.
That is typical of the kind of stuff I see in this space.
Everyone gets every part of these protocols wrong. Google does SAML wrong! Microsoft fixed one glaring security flaw in Azure AD, but neglected to back-port it to AD FS, because legacy on-prem security doesn't matter, right?
If Google and Microsoft can't get these things right, why are we working on yet more protocols that are even more complex!?
The web is full of middleboxes with crazy limitations. And OAuth2 is very good at triggering each one of them. They are also mostly unique, not under the control of either end, and often transient, so the failures usually aren't even understood; the problem is just assumed to be unsolvable. That alone is a big limitation that stops people from using OAuth2.
That said, I have never seen a case where crypto data was the cause of the bloat. Its size is so small compared to everything else that I'm not sure why anybody would even look at it. And indeed, the rationale I found on the site is about cryptographic agility... which is interesting, because you will find plenty of people claiming that this is an anti-feature that will harm security much more than help it.
As odd as it sounds, this one I can actually understand. I'm pretty sure the designers come loaded with painful experience on request header bloat. They may want v3 to support completely stateless requests, and would rather not transmit large public keys or possibly even client certificate chains on routine requests.
For those cases I can see the benefit of being able to say "look up my details from this $ObjectID". When everything related to session authorisation is behind a single lookup key, the data likely becomes more easily cacheable.
It's a perfectly valid tradeoff for environments where compute and bandwidth cost way more than engineering time. For the rest of us...
Would you please send your comments to the working group?
I came to understand OAuth2 much better when I realized that it exists to make the lives of big companies easier, and to make the lives of small developers possible. If BigCo only offers an OAuth2 API, then developers will figure it out because they have no choice. And from the point of view of big companies, what matters is that they implement something that meets their needs, which they can pretend is a standard.
Ambiguities give big companies the freedom to do the different things that they want to do while everyone claims, "We're following the standard!"
{ "key": { "KeyHandle": "myhandle" } }
and { "key": { "FullKey": "myfullkey" } }
They could just provide a schema and nobody would have to implement anything on the wire end. But I think the authors were focused on using the shortest possible serialized JSON, no matter the implementation difficulty and the inability to use existing schema/IDLs. Which in my opinion is terribad for what is effectively a critical security standard.
Yeah... let's not please.
Yes, many mainstream languages have near-zero support for Tagged/Discriminated Unions or Enums with Associated Data or Algebraic Data Types (pick your favorite name for the same concept). This is a limitation of those languages, which should not force a language-agnostic protocol to adopt the lowest common denominator of expressiveness.
Consider the problem they're avoiding of mutually exclusive keys in a struct/object. What do you do if you receive more than one? Is that behavior undefined? If it is defined, how sure are you that the implementation your package manager installed for you doesn't just pick one key arbitrarily in the name of "developer friendliness", leading to security bugs? This seems like a much more bug-ridden problem to solve than having to write verbose type-switching code in Golang/Java.
Implementing more verbose deserialization code in languages with no support for Tagged Unions seems like a small price to pay for making a protocol that leaves no room for undefined behavior.
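To make the concern concrete, here's a minimal sketch of the "verbose but strict" deserialization being argued for, using the hypothetical `KeyHandle`/`FullKey` payload shapes from the earlier comment. The point is that a payload carrying both variants is rejected outright instead of one being picked arbitrarily:

```python
import json

class AmbiguousKeyError(ValueError):
    """Raised when a payload carries more than one mutually exclusive variant."""

def parse_key(payload: str):
    """Parse a hypothetical {"key": {...}} payload into an explicit (tag, value)
    pair, refusing ambiguous input instead of silently picking a variant."""
    key = json.loads(payload)["key"]
    variants = [k for k in ("KeyHandle", "FullKey") if k in key]
    if len(variants) != 1:
        raise AmbiguousKeyError(
            f"expected exactly one of KeyHandle/FullKey, got {variants}")
    tag = variants[0]
    return (tag, key[tag])

print(parse_key('{"key": {"KeyHandle": "myhandle"}}'))  # ('KeyHandle', 'myhandle')
```

A generic JSON-to-struct mapper in most languages will happily accept the ambiguous case, which is exactly the undefined behavior the parent comment worries about.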
To be clear, _many_ statically typed languages have perfect support for this concept (Rust/Swift/Scala/Haskell, to name a few).
No they don't, at least in the way you're selling it. The "limitation" here is JSON, which doesn't attach type information. You're going to have to implement some typing protocol on top of the JSON anyway, which will face similar problems to the ones you raised (unless you do some trait-based inference, which could be ambiguous and dangerous).
If they were Enums/Unions over a serialization protocol like protobuf, maybe your case makes sense. Even then, I'm guessing a large % of the OAuth 3 requests will go through Java/Golang libraries, so on a practical level this is a bad idea too.
IMHO: a very bad choice. Complicated basic and higher-level elements of protocols are the death of them (remember SOAP). I follow the train of thought of not restricting yourself too much, but if (e.g.) Java or C++ cannot implement it easily, it's not a good idea.
It's an intentional decision made by those languages in order to focus on other things. If your intent is to be language-agnostic, then yeah, going with lowest common denominator concepts is exactly what you need to do. If you just want to write a Haskell auth implementation using your favorite pet language features, then write a Haskell auth implementation.
{ foo: { bar: "baz" }}
and other times you'll get { foo: "something else" }
Good luck!
This is horrible.
The only time I've had to deal with it in the wild was a terrible experience. As a consumer, you couldn't make any decisions with confidence without first making a careful study of the documentation. For every. single. decision.
If folks are interested in the nitty gritty, I wrote a blog post a few months back: https://fusionauth.io/blog/2020/04/15/whats-new-in-oauth-2-1...
And this is a great podcast with one of the authors, Aaron Parecki, talking in detail about the changes: https://identityunlocked.auth0.com/public/49/Identity%2C-Unl...
1. Using OAuth to sign-up often means disclosing private data you can (and would normally prefer to) keep secret if you go the bare e-mail sign-up way. E.g. contacts list, exact date of birth, etc. - This is why I (as a user) stopped using OAuth for new accounts.
Kind of the same used to apply to e.g. Android apps. I mean the "give an app all the permissions it wants or gtfo" anti-pattern, which ought to be abolished. The user should be allowed to continue after denying/revoking access to any data (except what is absolutely essential for the very function), either silently or by manually specifying whatever values they want.
2. It isn't always easy to decouple an OAuth-based account from the social network account, especially in case you lose access to the latter. - This is why I (as a user) migrated all OAuth-based accounts I had to the good old e-mail way.
OAuth is an authorization protocol. It can be used (for example) to give Facebook access to your Flickr photos without having to give out your Flickr username and password to Facebook or share API tokens, and have a standardized way to revoke access when you realise Facebook scraped all your private photos.
The app could easily ask you to check a checkbox next to each scope, and then write separate code for each combination of checkboxes. They decided not to do that because it's probably not worth your business if you don't want to give them full access. (Honestly, I click a lot of things on HN that ask for way too many scopes, and then I close the window and forget what it was. But the calculation was done -- they don't need me as a user or customer. I can live with that.)
I guess what people want is an IDP that will give applications fake data when you deny a scope. But no application developer wants to deal with that complexity, so they'd never integrate a provider that does that. (They probably moved away from email+password because of all the fake emails that people provide.)
On the other hand, it's mandatory for iOS apps to use Apple's sign-in which auto-generates a fake email address for you. So I suppose some progress is being made. (I have an iPhone but I've never seen this supposedly mandatory OAuth provider. I only know about it from reading HN. So maybe it doesn't actually exist? I have no idea really.)
I once tried to sign up with Google and it asked me to allow (with no option to deny but continue) sharing my specific personal details. I cancelled and have never used this technology since. I didn't have to specify the same details (which Google was going to share) when signing up with an e-mail address.
The spec should discourage sharing details beyond necessary, prevent any details from being shared silently and ensure user can always deny and continue.
Ignoring LDAP and Active Directory for now.
Not surprised, but still disappointed.
OAuth is about getting access to something, and usually part of that is proving to some authorization server that you are you (ie what OpenID is about), no?
Do you mean you'd like OAuth to tackle the "you are you" part as well?
As a website developer I would definitely appreciate something like OpenID but actually usable/popular. Having to implement a ton of "log in with"s sucks, as does implementing email based login.
authZ = authoriZation (access / "what are you allowed to do")
How could the specification support letting the end-user pick their authorization provider? Should the RC suggest the AS instead of the RS doing so?
As a student I've previously sat on a couple of mailing lists for the academic benefit of learning from some really smart/dedicated people. Joining and participating is open and just requires you to sign up. The signup link is on the announcement page above ^
https://www.ietf.org/archive/id/draft-ietf-gnap-core-protoco...
and are pretty much the only open technology that is sufficiently technology-agnostic and interoperable.
what should be the alternative, facebook comment threads?
When can we just have client side certificates? That would be a great way to deal with most of the problems that emailing a "magic login link" (or just normal email based accounts) doesn't solve.
Not only is the spec itself challenging, it leaves enough ambiguity and rough edges that most providers end up extending it in some way that makes it hard to standardize. Most commonly: how to get refresh tokens (`offline_access` scope? `access_type=offline` parameter?), and how/when they expire (as soon as you get a new one? as soon as you've received 10 new ones? on a set schedule?)
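The refresh-token divergence is easy to see in code. Here's a minimal sketch of the kind of per-provider branching a client library ends up carrying: Google signals refresh tokens via a non-standard `access_type=offline` query parameter, while Microsoft uses the `offline_access` scope. The base URL and client id are placeholders:

```python
from urllib.parse import urlencode

def authorize_url(base: str, client_id: str, provider: str) -> str:
    """Build an authorization URL, papering over provider-specific ways of
    asking for a refresh token. Quirks shown are illustrative, not exhaustive."""
    params = {"response_type": "code", "client_id": client_id}
    if provider == "google":
        # Google: non-standard query parameter.
        params["access_type"] = "offline"
        params["scope"] = "openid email"
    elif provider == "microsoft":
        # Microsoft identity platform: offline_access scope.
        params["scope"] = "openid email offline_access"
    return f"{base}?{urlencode(params)}"

print(authorize_url("https://example.test/authorize", "my-client", "google"))
```

Multiply this by token expiry rules, error formats, and org-wide access models, and "standard OAuth2 client" stops being a meaningful phrase.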
And that's not to mention how OAuth gets extended to handle organization-wide access. Anyone that's dealt with GSuite/Workspace Service Accounts or Microsoft Graph Application permissions knows what a pain that is.
This is exactly why I built [Xkit](https://xkit.co), which abstracts away the complexity of dealing with all the different providers and gives you a single API to retrieve access tokens (or API keys) that are always valid. Not everyone should have to be an OAuth expert to build a 3rd party integration.
From a brief search, it looks like Let's Encrypt doesn't have great support for them ( https://community.letsencrypt.org/t/can-i-create-client-cert... ) so you are stuck setting up a private CA?
Have you set up client side certs? I'd love to hear your experience if so.
BTW, I'd defer implementing OAuth to a library or specialized piece of software (full disclosure: I work for a company providing this). There are a number of options, paid and open source out there.
Entire Estonia and a few other countries use them daily. For logging into banks, Craigslist-equivalents, online stores, service providers etc. etc.
The biggest problem is around revocation. You need to have some central revocation list and make sure that all of the users of your PKI are keeping that list up-to-date in production, which can be difficult if you do not plan for that from the start.
From a security standpoint, it's pretty great. But the reality of generating keys and signing and distributing certificates was horrible, and our users were confused and hated it.
How would you solve key generation even now - assuming the client generates the key, is it locked to that browser on that machine? How do you generate a CSR (certificate signing request)? How do you send the signed certificate to the user? How does the user install the certificate? Again, does that mean the user can only access your app from the machine they installed the certificate to?
PKI is hard, mainly because of the distribution problem.
Part of the reason was that the user interfaces for installing certificates were terrible, and websites needed to have guides on how to use it in each browser.
At $work we have several systems in which the server only accepts requests, or only accepts certain kinds of requests, from clients with client certificates with specific restrictions. Depending on the application and its authN/authZ needs, any of the solutions I'm about to mention might be combined with some combination of a username/password, a time-based token, a JWT, IP range restrictions, an API key, or whatever else in addition to the client cert requirement - or sometimes the cert is sufficient by itself.
Some just trust anything that was issued by the right CA and is in its proper date range. Sometimes we also verify that the certificate matches an assigned hostname of the client. Sometimes we trust certs by the right CA to connect, but parse the hostname out of the cert and check whether that client's hostname or the subdomain it's in has authorization to do certain things.
Semantic hostnames might look long and confusing at first, but they can be used very easily for things like that. Semantic hostnames and naming schemes could be their own article.
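The "parse the hostname out of the cert and authorize its subdomain" idea above can be sketched in a few lines. The naming scheme and permission table here are made up for illustration; in a real deployment the CN would come from a cert the TLS layer has already verified against your CA:

```python
# Hypothetical permission table keyed by the subdomain portion of a
# semantic hostname (everything after the first label).
AUTHORIZED_SUBDOMAINS = {
    "batch.internal.example.com": {"read", "write"},
    "reporting.internal.example.com": {"read"},
}

def permissions_for(cert_subject_cn: str) -> set:
    """Map a client cert CN, e.g. 'worker03.batch.internal.example.com',
    to the permissions granted to its subdomain (empty set if unknown)."""
    _, _, subdomain = cert_subject_cn.partition(".")
    return AUTHORIZED_SUBDOMAINS.get(subdomain, set())

print(sorted(permissions_for("worker03.batch.internal.example.com")))  # ['read', 'write']
```

Because the hostname is semantic, adding a new batch worker requires no authorization change at all - it inherits its subdomain's grants the moment its cert is issued.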
This isn't a general use case for the general public because of deployment headaches. Which CAs do we trust? Is it the same as those issuing server certs? Will services be set up especially to issue client certs? Who's supporting the users to get the certs installed, many of whom enter the wrong email when signing up for services? We can do this in a corporate network pretty easily. We have automation systems for servers. We have another, different automation system for administration of Laptop, desktop, and mobile client devices. We just put what cert we want everywhere we want it.
A big problem I see with client certs and the general public is multi-device users. If you're logging into Google from your home desktop, your work desktop, a couple of laptops, a tablet, and a phone, that's one email address but half a dozen different certificates. Some applications, especially cross-platform ones, insist on their own certificate store even if the OS provides a shared one. So for mail, a couple of browsers, and two other native apps, congratulations, that's maybe two dozen.
One can export and import client certs, but there's no simple way to get less technical end users to do that. So do we make it easier to configure multiple client devices and all their applications with the same certificate and key? Are end users going to remember to update them all when one is lost in a breach or it expires?
Or do we expect all the service providers to trust multiple certificates signed by multiple different CAs for each user, then have the user upload the public (but not the private!) part of each cert/key pair to all of those services to say they should be trusted? Or does every service require its own CA to sign your cert for its own service, so you need an Apple cert, a Google cert, an Amazon cert... ad nauseam?
Tools like Bitbucket or Gitlab let you upload your public SSH key in the web UI to provide auth for the git repos. You can also have (hopefully with separate keys) automated applications that interact with git auth against a repository or all the repos in a project. That's the sort of interface one might expect a web application to offer for TLS certificates. *
* A certificate is basically the public key portion of a public/private key pair that's been signed by some CA. Preferably that CA is a broadly trusted one, except in very particular circumstances.
... Great.
"OAuth" is such a terrible name. It sounds like a silly problem, until you've been through a number of calls where you had to explain to someone that this is what can be used for integration. A fair percentage of such calls end with no understanding of what is being talked about.
Sure people might not know about it, but there are tons of tech things people don't know about. That's a separate issue.
In either case, I am excited about it, I do hope it will be easier to use as well.
The "Grant Negotiation and Authorization Protocol"
And to be clear, I don’t actually care if the new work is called OAuth 3.0 or TxAuth or some other name, but I do think that it’s a fitting change set for a major revision of a powerful and successful protocol. We need something new, while we patch what we have, and now’s the time to build it. Come join the conversation in TxAuth, play with XYZ, and let’s start building the future.
--------------------
[1] https://medium.com/@justinsecurity/the-case-for-oauth-3-0-5c...
If anyone from the workgroup is reading this: please clarify in the first paragraph that XYZ is like a 'working title' for OAuth 3.
[1] https://medium.com/@justinsecurity/the-case-for-oauth-3-0-5c...
https://web.archive.org/web/20130116102852/http://hueniverse...
Although this fact alone might even tell me enough about OAuth 3.0.
I've been doing some research on this for an upcoming presentation and it seems this was a union of the design ideas of two draft documents TxAuth and OAuth.xyz, which means there's a few issues that need to be resolved. I'm sure they'd welcome respectful feedback.
From the WG's charter[2], they are looking for feedback and comments and expect last call for the core protocol in July 2021.
It's still very much a work in progress. I counted the TBDs and "Editor's notes" and found an average of one of these "TODO" markers per page of the draft.
I'm excited about the more modern developer ergonomics (using JSON is a step up from using form params), the ability for an RC to request user info at the same time (folding in some of OIDC), and the fact they've explicitly built interaction extensions into the model. OAuth2 often assumes a browser with redirect capabilities, and there are some inelegant solutions that arise from that[3]. Still a lot of things to iron out, for sure, though.
That said, I think OAuth2 will still be common 3 years from now, and if OAuth2 satisfies your needs, you aren't forced to move on to this new, explicitly not backward compatible[4], auth protocol.
[0]: https://www.ietf.org/archive/id/draft-ietf-gnap-core-protoco...
[1]: https://mailarchive.ietf.org/arch/msg/txauth/UkvrBXkMk9YMl7m...
[2]: https://datatracker.ietf.org/wg/gnap/about/
[3]: https://fusionauth.io/blog/2020/08/19/securing-react-native-... shows that you have to have a redirect with a custom scheme for a mobile app. Seems weird to me.
[4]: "Although the artifacts for this work are not intended or expected to be backwards-compatible with OAuth 2.0 or OpenID Connect, the group will attempt to simplify migrating from OAuth 2.0 and OpenID Connect to the new protocol where possible." - https://datatracker.ietf.org/wg/gnap/about/
Why? It seems to me that I'm either writing Json.Serialize(loginParams) or HttpForms.Serialize(loginParams). Both are human readable and weakly typed. From a developer perspective, these seem almost exactly equivalent, just different.
Here's a grant request from the draft:
{
"resources": [
{
"type": "photo-api",
"actions": [
"read",
"write",
"dolphin"
],
"locations": [
"https://server.example.net/",
"https://resource.local/other"
],
"datatypes": [
"metadata",
"images"
]
},
"dolphin-metadata"
],
"client": {
"display": {
"name": "My Client Display Name",
"uri": "https://example.net/client"
},
"key": {
"proof": "jwsd",
"jwk": {
"kty": "RSA",
"e": "AQAB",
"kid": "xyz-1",
"alg": "RS256",
"n": "kOB5rR4Jv0GMeL...."
}
}
},
"interact": {
"redirect": true,
"callback": {
"method": "redirect",
"uri": "https://client.example.net/return/123455",
"nonce": "LKLTI25DK82FX4T4QFZC"
}
},
"capabilities": ["ext1", "ext2"],
"subject": {
"sub_ids": ["iss-sub", "email"],
"assertions": ["id_token"]
}
}
(Not all of the object keys are required, FYI). The ability to have resources be a rich object (as opposed to a string) and to support multiple resources in one grant request seems to me to be a good thing(tm). See the diagram in the RFC[1] and section 1.3 just below it. Sure OAuth usually involves authentication, but OAuth doesn't really care how it's done.
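Note that the `resources` array in the draft example above mixes rich objects with a bare string reference (`"dolphin-metadata"`), so a consumer has to branch on type - which ties back to the polymorphism complaints elsewhere in the thread. A minimal sketch of handling both shapes (`resource_summaries` is a made-up helper name):

```python
import json

def resource_summaries(grant_request: dict):
    """Walk a GNAP-style 'resources' array, where each entry is either a
    rich object or a bare string reference (as in the draft example).
    Returns (type, actions) pairs; string references carry no actions."""
    out = []
    for r in grant_request["resources"]:
        if isinstance(r, str):
            out.append((r, []))  # reference by name; details live server-side
        else:
            out.append((r["type"], r.get("actions", [])))
    return out

req = json.loads('''{"resources": [
    {"type": "photo-api", "actions": ["read", "write", "dolphin"]},
    "dolphin-metadata"
]}''')
print(resource_summaries(req))  # [('photo-api', ['read', 'write', 'dolphin']), ('dolphin-metadata', [])]
```

Richer resource descriptions are genuinely useful; the cost is that every consumer now carries an `isinstance` branch like this one.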
Then again, not my field of expertise so I might be wrong.
I remember something that previous versions were for webapps only. Never used it though.
Edit: What's wrong with you people? You're downvoting questions now? I remember that OAuth forced the user to include client secret in app's binary. When extracted, everyone could impersonate the app. If you don't understand the problem then don't downvote.
> I remember something that previous versions were for webapps only.
> I remember that OAuth forced the user to include client secret in app's binary.
This is not actually a problem with RFC 7636.
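RFC 7636 is PKCE, which replaces the embedded client secret for public clients: the app generates a fresh random `code_verifier` per authorization request and sends only its SHA-256 hash up front, so an extracted binary reveals nothing reusable. A minimal sketch of the S256 derivation, checked against the RFC's own appendix B test vector:

```python
import base64
import hashlib
import secrets

def make_verifier() -> str:
    # 32 random bytes -> 43-char base64url string (RFC 7636 section 4.1).
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

def make_challenge(verifier: str) -> str:
    # S256 method: BASE64URL(SHA256(ASCII(code_verifier))) (section 4.2).
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Test vector from RFC 7636 appendix B:
v = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
print(make_challenge(v))  # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

The verifier itself is only sent later, on the token request over TLS, so an attacker who intercepts the authorization code can't redeem it.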
OAuth 2 was a design nightmare.
But by now it has kinda consolidated into usable best practices for how to do it. But gathering them from the core RFC and all the extensions is a pain.
So what would be nice would be an updated RFC including all best practices and deprecating all the things which turned out badly (or had security vulnerabilities).
OAuth 2.1 somewhat goes into that direction.
But IMHO OAuth 3 looks like starting the whole OAuth 2 madness from scratch, not learning from all the problems OAuth 2 had when it was new...
This isn't correct. Native apps aren't capable of holding a secret. There are two patterns here. Some providers omit the secret for native apps. Other providers define the concept of a "public secret," a value that is "not a secret," but is put in the client_secret field - rotating this value disables old clients. Either model is fine and secure.
The problems you refer to were mostly just developer error. Developers registered their native apps as having confidential secrets, even though this was not the case, and then shipped those secrets in the app source code.
See section 4.1.1 of the OAuth 2.1 spec ( https://tools.ietf.org/html/draft-ietf-oauth-v2-1-00 ), which was, I believe, also included in the security best practices.