https://news.ycombinator.com/item?id=16006394
The short answer is: don't overthink it. Do the simplest thing that will work: use 16+ byte random keys read from /dev/urandom and stored in a database. The cool-kid name for this is "bearer tokens".
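A minimal sketch of what that looks like in Python, assuming the `secrets` module (which reads from the OS CSPRNG, i.e. /dev/urandom on Linux) — the function name is just illustrative:

```python
import secrets

def new_bearer_token() -> str:
    # 32 random bytes = 256 bits, comfortably past the 16-byte minimum;
    # hex encoding keeps it header- and database-friendly.
    return secrets.token_hex(32)

token = new_bearer_token()
# store `token` in your database alongside the user id; done.
```

That's the entire scheme: generate, store, compare on each request.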
You do not need Amazon-style "access keys" to go with your secrets. You do not need OAuth (OAuth is for delegated authentication, to allow 3rd parties to take actions on behalf of users), you do not need special UUID formats, you do not need to mess around with localStorage, you do not need TLS client certificates, you do not need cryptographic signatures, or any kind of public key crypto, or really any cryptography at all. You almost certainly do not and will not need "stateless" authentication; to get it, you will sacrifice security and some usability, and in a typical application that depends on a database to get anything done anyways, you'll make those sacrifices for nothing.
Do not use JWTs, which are an increasingly (and somewhat inexplicably) popular cryptographic token that every working cryptography engineer I've spoken to hates passionately. JWTs are easy for developers to use, because libraries are a thing, but impossible for security engineers to reason about. There's a race to replace JWTs with something better (PAST is an example) and while I don't think the underlying idea behind JWTs is any good either, even if you disagree, you should still wait for one of the better formats to gain acceptance.
Also remember to time out the tokens: it is almost never a good idea to permit infinitely long login sessions (it's surprising how often I see this not done). And remember to invalidate the token when the user changes their password.
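Both points above (expiry and invalidation on password change) fall out naturally from storing tokens in the database. A hedged sketch, using an in-memory SQLite table and made-up function names:

```python
import secrets
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE tokens (token TEXT PRIMARY KEY, user_id INTEGER, expires_at REAL)"
)

TTL = 30 * 24 * 3600  # e.g. 30 days; pick what suits your app

def issue(user_id):
    token = secrets.token_hex(32)
    db.execute("INSERT INTO tokens VALUES (?, ?, ?)",
               (token, user_id, time.time() + TTL))
    return token

def check(token):
    row = db.execute(
        "SELECT user_id, expires_at FROM tokens WHERE token = ?", (token,)
    ).fetchone()
    if row is None or row[1] < time.time():
        return None  # unknown or expired token
    return row[0]

def on_password_change(user_id):
    # Kill every outstanding session for this user.
    db.execute("DELETE FROM tokens WHERE user_id = ?", (user_id,))
```

Revocation is a one-line DELETE — exactly the property you give up with stateless tokens.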
I agree that OAuth is not necessary on its own, but it can be appropriate if you are also supporting delegated authentication with various 3rd parties: make your own native auth just another OAuth provider.
Does anyone have any recommendations for services that do offer this? (or where those options are in the named services)
Can we get more intel behind why JWT is bad? I've always been told that as long as you explicitly check both the signature and that the signature algorithm type is what you expect, it's fine. Libraries tend to make the mistake of not doing that second part for you, so implementations have to be aware of that.
The one concern I've always had is that even though they are stateless, most implementations end up making a db request to get info about the user anyway (i.e. their role and permissions), so the stateless advantage of JWT isn't as big as it is touted. You can embed that into the JWT, but then the token inevitably gets huge.
I do this to create bearer tokens without JWT.
Anyway, you can find a lot of his comments about JWT by searching 'tptacek JWT site:news.ycombinator.com'
Sorry if the answer is obvious...this is not my area of expertise.
If you're storing the key on the client (cookie or w/e) and in the database and solely using it to authenticate, aren't you going to run into timing attacks if you're using it for retrieval?
What I typically do is also store a unique identifier like email for the lookup and then use a random key for comparison / validation.
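A sketch of that lookup-then-constant-time-compare pattern, assuming Python's `hmac.compare_digest` and a hypothetical in-memory key store:

```python
import hmac

# Hypothetical store: email -> API key (in reality, a DB lookup).
KEYS = {"alice@example.com": "9944b09199c62bcf9418ad846dd0e4bb"}

def authenticate(email: str, presented_key: str) -> bool:
    stored = KEYS.get(email)
    if stored is None:
        # Compare against a dummy value so the "unknown user" path
        # takes roughly the same time as the "known user" path.
        hmac.compare_digest(presented_key, "0" * 32)
        return False
    # compare_digest runs in time independent of where the strings differ.
    return hmac.compare_digest(presented_key, stored)
```

The index lookup happens on the non-secret identifier; the secret itself is only ever compared in constant time.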
Aside from PAST, I recently have come across this[0] (called branca) as I too was looking for an alternative to JWT. This seems to be based on Fernet, but according to the maintainer of this spec it isn't maintained anymore.
You seem to know what you're talking about, but I'm a bit confused. With JWT I just store the token in localStorage and then add an Authorization Bearer header with the token. What's the recommended approach? To send username + token as part of the form data?
HTTPS is mandatory of course, and caching successful authorizations helps performance.
Can you confirm re: your recommendation for random "bearer token" auth, are you talking about just short-lived tokens that are the response to a regular user auth flow (ie login in one request, get a token, use that for subsequent requests in this "session" ala a session cookie) or do you (also) intend it for long-lived tokens used eg for machine accounts?
API authentication doesn't have the memorability problem, because the password has to be stored somewhere the client program can reach it. But, as you can see, it does have the storage problem, which means you need to account for the fact that it might be compromised in storage.
So you need a separate credential for API access. Since it's only going to be used by computers, there's no purpose served by making it anything other than a computer-usable secure credential; hence: a 128 bit token from /dev/urandom.
Once you have a 128 bit token, there's also little purpose served by forcing clients to relay usernames, since any 128 bit random token will uniquely identify an account anyways.
Unless you get Bill Murray to run into people on the street or crash their all-hands meetings and tell them this, no one will believe you. Or at least, it's worth a try since nothing else seems to work, as seen in this thread.
> Do the simplest thing that will work:
For many, a long randomized bearer token will do. Depending on the type of data you expose via the API (e.g. financial data, PII) this may not be sufficient for your security team or auditors.
With the exception of the military, which I on principle won't work with, there's probably no regulatory or audit regime I haven't had experience with.
I say all this as lead-up to a simple assertion: I have never once seen an auditor push back on bearer-token API access. It's the global standard for this problem. If you knew how, for instance, clearing and settlement worked for major exchanges, you'd laugh at the idea that 128 bit random tokens would trip an audit flag.
tl;dr: No.
(The actual answer is: I have no idea what Google does with JWTs.)
* The implementation of cryptographic "signing" (really, virtually never signing but rather message authentication) is susceptible to implementation errors.
* The concept of signing is susceptible to an entire class of implementation errors falling under the category of "quoting and canonicalization". See: basically every well-known implementation of "signed URLs" for examples.
* Signed URLs have to make their own special arrangements for expiration and replay protection in ways that stateful authentication doesn't, either because the stateful implementation is trivial or because stateful auth "can't" do the things stateless auth can (like explicitly handing off a URL to a third party for delegated execution).
Stateless authentication is almost never a better idea than stateful authentication. In the real world, most important APIs are statefully authenticated. This is true even in a lot of cases where those APIs (inexplicably) use JWTs, as you discover when you get contracted to look at the source code.
Delegated authentication is useful situationally. But really, there aren't all that many situations where it's needed. It's categorically not useful as a default; it's a terrible, dangerous default.
Large swaths of the internet love to hate on JWT, but it's a major feature in OAuth2 and is in use all over the place as decentralized APIs have become more commonplace.
Signed requests have burned a bunch of applications, more than have been burned by OAuth.
We've been collectively brainwashed to reach for AWS for almost any dev-related work.
Also refreshing that a $5 DO/Linode box can do everything AWS can without the learning curve.
Please don't reinvent the wheel: use a GUID.
A GUID is a random number generated to avoid collisions.
The API client should send this token in every request that requires authentication, often in a header such as `Authorization: Bearer 123TheToken456`.
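On the server side, extracting the token from that header is a few lines. A hedged sketch (the function name and the plain-dict headers are illustrative; a real framework hands you its own request object):

```python
from typing import Optional

def bearer_token(headers: dict) -> Optional[str]:
    """Extract the token from an 'Authorization: Bearer <token>' header."""
    value = headers.get("Authorization", "")
    scheme, _, token = value.partition(" ")
    if scheme.lower() != "bearer" or not token:
        return None  # missing header or wrong auth scheme
    return token.strip()
```

Whatever comes back then gets looked up in your token store.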
JWT: If DB performance becomes a problem (or you want to expose signed session metadata), consider using JWT so that session validation travels with the request itself. The downsides of JWT are that it's often used to hold secret values (don't do this), that tokens can grow to a few kilobytes and slow every request, and that sloppy mistakes in signing and session validation can make it very insecure, e.g. allowing any request to simply assert its own permissions.
[1]: https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-ba...
Does it keep a copy somewhere and check against it on every request?
I don't even store role information in them, since authorization checks are performed on the server anyway.
If the client needs to know what a user is allowed to do with a resource (so it knows not to display certain buttons, etc.) I have the client do an OPTIONS call (with the token) to see what methods are allowed.
Lately, I've been thinking about replacing the whole JWT scheme with simple bearer tokens stored in the database, mostly because it would make revocation simple and I can't think of anything I would lose by giving up JWTs (a little storage space in the database?), and I don't think switching the type of bearer token I'm working with will actually be very painful implementation-wise. You know what, I'm adding a task to my backlog...
If you want to be completely stateless, you need to send the authorization info on each request, probably as basic auth headers.
This is actually simpler than implementing the Bearer token, for both client and server, but requires that the client retain the non-expiring username and password, rather than retaining the expiring, session-specific Bearer token.
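Constructing that header is a one-liner; a sketch using Python's stdlib (per RFC 7617, the value is `base64("user:pass")`):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    # RFC 7617: base64("user:pass"), sent on every request -- stateless,
    # but the client must hold the long-lived password.
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {creds}"}
```

This is also exactly why it's riskier: every request carries the real credential, not an expiring stand-in.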
Also, mixing credentials into URL does not feel like a good separation of concerns, e.g. URLs are often logged and analyzed in separate logging/monitoring/analytic tools, so there is a bigger risk to have credentials leaked over some side-channel.
[1] https://pdos.csail.mit.edu/papers/webauth:sec10.pdf [2] https://github.com/expressjs/express/blob/master/examples/au... [3] https://expressjs.com/2x/screencasts.html
[1] https://tylermcginnis.com/react-router-protected-routes-auth...
Then, there are articles like the Hackernoon post explaining why most Passport blog posts are wrong in one way or another.[0] This article explains that there are no "copy/paste" authentication solutions for Javascript, as there are for other languages - and Passport is probably the best out there.
As there's no "copy/paste" auth solution for Javascript, it becomes essential to understand how auth works with your site. It has to be added to every Express render call, to work with the Session. And rolling your own is educational - you can learn some of the common pitfalls and why rolling your own is a bad idea.
I do plan to go back to Passport sometime this year. The number of Oauth providers is nearly overwhelming - too much to ignore. But also daunting for the first-time student.
[0] https://hackernoon.com/your-node-js-authentication-tutorial-...
Using well-developed hashing schemes like pbkdf2 (which the auth snippet uses) or bcrypt (another good and common option) and storing the output is not rolling your own crypto. Writing your own hash function would be rolling your own crypto.
People talk about not rolling your own crypto because crypto is very, very sensitive to even very small errors, in ways that other code is not. Writing your own authentication using a well-known strong hashing scheme (which handles salting and verifying passwords in a timing attack resistant way) is much less likely to have vulnerabilities that aren't obvious.
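For reference, using such a well-known scheme looks like this — a sketch with Python's stdlib `hashlib.pbkdf2_hmac` (the iteration count here is an assumption; tune it for your hardware):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow; adjust to your latency budget

def hash_password(password: str):
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; neither is secret on its own

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

All the sensitive parts (the KDF, the constant-time compare) come from the library; you only write the glue.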
To me, passportjs might be useful if you need to plug into 3rd party auth APIs, but I don't really see the point. Authentication is a core part of your application and you should always know exactly how it works.
If you can't store an authn secret with confidence, how can you do anything with confidence?
I would consider more complicated solutions only if you first come to the conclusion that these simple things are not fit for the purpose.
It's true that some fancy token-based solution may reduce database load, but if the API is doing something useful then that one primary-key lookup and potentially the password hashing won't be a show stopper. The drawback with tokens and skipping the DB check is that you can't simply kill a client behaving badly. With an API key you can just change the key and requests stop immediately (with an MVP this might be an issue, since maybe you have decided to add rate limits etc. later).
You can always introduce other forms of authentication later. I have a slight preference for basic/digest auth as the secret isn't part of the URL, and therefore not cached/logged by any network equipment.
edit: typo
Does basic auth work if the client is a browser and you want to have a nice login screen?
You can solve this with a token/user blacklist. There are desirable (and undesirable) characteristics of using a blacklist instead of a whitelist, but you don't lose this capability.
* Public data API (no notion of user account, e.g. weather API, newspaper API, GitHub public events, Reddit /r/popular, etc.): use an API key. The developers must create an account on your website to create their API key; each API call is accompanied by the API key in the query string (?key=...)
* Self-used API for AJAX (notion of user account, e.g. all the AJAX calls behind the scenes when you use Gmail): use the same user cookie as for the rest of the website. The API is like any other resource on the website except it returns JSON/XML/whatever instead of HTML.
* Internal API for micro-services: API key shared by both ends of the service. There can be a notion of user accounts, but it's a business notion and the user is an argument like any other. If possible, your micro-services shouldn't actually be accessible on the public Internet.
* API with the notion of user accounts intended for third-parties (e.g. Reddit mobile app where the users can authorize the app to use their Reddit account on their behalf): OAuth2
A username and a password is so much simpler.
For APIs, it should be more manageable but many places stumble with key management and a lot of developers were resistant to learning enough about the tooling to do things like manage test instances.
For federated use cases, they are less popular, likely due to the added complexity being a blocker for adoption.
It would be really cool if there was a service like letsyncrypt that would provision "client" certs for this sort of use but revocation lists and things are a little annoying for large scale use.
EDIT: Another reason why this can be a hassle is that while Safari/IE/Chrome use the system to evaluate trusted certs on all OSs i've tested, firefox uses its own implementation so you have to add all the certs yourself. This is ... frustrating from a management perspective because you have to keep two sets of docs explaining what to do for new hires and etc.
EDIT 2: I've always been curious if its a net security benefit that firefox does this. On one hand, they are less vulnerable to OS-specific attacks and can automatically un-trust root certs that are compromised for whatever reason, but you're then trusting Mozillas implementation of something that is admittedly very complex.
The Firefox Enterprise mailing list is the place to go to for deeper level help on these things. [2]
SSL client authentication is widely used by the US military on smartcards requiring additional hardware readers: https://en.wikipedia.org/wiki/Common_Access_Card. AFAIK using a smartcard doesn't work reliably on Mac without installing 3rd-party software. The difficulties of usage in practice have spawned a cottage industry of commercial software and support, like http://militarycac.com.
I assume companies selling end user hardware tokens like YubiKey would love to see client certificate authentication become more usable, but initiatives like FIDO U2F seem to be gaining momentum instead.
SSL client certificates also don't work in HTTP/2 (because it multiplexes multiple requests over one connection).
Benefits include storing the private key in a hardware token; most OSes support them out of the box. You can just plug your token into a USB port, visit a site that requests a client certificate, enter your PIN and be done (e.g. Yubico PIV applet).
HTML also has/had a <keygen> element that would generate a private key in the browser and send the public key to the webpage to be signed, essentially creating public/private key credentials, but it is being removed from browsers.
For inter-service communication I'd definitely consider using SSL client certificates pinning private keys e.g. to TPM but regular users can't be bothered with it.
If you're interested check out Token Binding that makes tokens (cookies, etc.) bound to TLS connections essentially providing security of client certificates for tokens.
Using it to authenticate a person regardless of device doesn't work so well from a usability point of view.
I would appreciate pointers to any open source libraries demonstrating best practices and/or promoting this approach, specifically protecting against replay attacks and race conditions that come up as the cert is renewed (much more often - thanks Let's Encrypt!).
It might be better to have accounts per device rather than per person in that case.
In my observation, people who try to go this route in an uncontrolled environment mess it up by sending the certificate (including the private key) in unencrypted email (which is the default in most cases) or using other insecure mechanisms. The only ones who'd even attempt this are those who may not go through a security check or audit.
[If there are easy ways to handle this in an uncontrolled environment, I'd like to know more.]
I mean with support: getting the certs in there.
Once they are in your keychain, clientcerts are really nice.
https://github.com/paragonie/past
Basically JWT but without the pitfalls as far as I can see.
Show HN: PAST, a secure alternative to JWT | https://news.ycombinator.com/item?id=16070394 (Jan 2018: 361 points, 137 comments)
If your API just serves public non-user-specific data, a simple API key might be okay. The obvious downside of this method is that a user leaking their client API key is a big problem, especially if your users are likely to distribute code that makes requests (e.g. a mobile app that makes requests to your API).
The state of the art is probably still OAuth, where clients regularly request session keys. This means a leaked key probably won’t cause problems for very long. The obvious downside of this is complexity, but that can be mitigated by releasing client libraries that smooth over the process.
There's an RFC that goes into some of the security considerations of OAuth 2.0, that should be required reading if you implement it (even from a pre-built library): https://tools.ietf.org/html/rfc6819
It's crucial that clients are able to respond to their refresh tokens being revoked.
The good thing is that it is a standard workflow, unlike an API key being revoked, which is generally not handled (most people hard-code the API key in their client).
For most SaaS products, basic auth or an API key is going to be fine. In fact, a ton of SaaS vendors do exactly that. It's also totally fine for, say, an enterprise API used by a partner or clients.
OAuth is a cluster-fuck of terribleness, a nightmare for you to work with and a nightmare for your consumers to use. If you do it, you will need excellent support docs and examples, or you'll have to hand-hold external devs to get it working. The only time I might start considering OAuth is if you want other apps to be granted permissions to use the API on behalf of the user, with some granularity over which parts they can access.
I'm not saying OAuth doesn't have a use, but its awful, overcomplicated implementation means it's a huge time-sink compared to basic auth over HTTPS, and I certainly wouldn't recommend it without a very good reason.
Key benefits:
* Secret not included in request
* Verifies integrity of the message (since its contents are signed)
* Protection against replay attacks
It's probably overkill in a lot of situations, but I've always liked how even if TLS were compromised, all the attacker would gain is the ability to see the requests--not modify them or forge new ones.
I haven't used JWT before, but reading one of the links below, it looks like it covers a lot of the same stuff (although you'd have to implement your own replay protection if you want that).
curl -X GET https://127.0.0.1:8000/api/example/ -H 'Authorization: Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b'
Note: HTTPS is required for all of this to be secure. This is what comes out of the box with Django Rest Framework.
FWIW I have ended up using OAuth2 for this situation a few times, and it always feels more complicated than I'd like.
But we are not alone in this regard. Bridge building is centuries more mature than software engineering and their shapes, materials and construction methods still change regularly. I would expect to see this trend remain for a long time. Engineering is an inherently creative practice, often staffed by appropriately creative people. The continued evolution, including the trial and error approach, are likely to continue for decades.
Also, here is a good recent video for ASP.NET Core 2 that includes extra things like HSTS, etc. Even if you're not in ASP, the concepts will be relevant: https://www.youtube.com/watch?v=z2iCddrJRY8
The first three can easily be implemented in-house and are well suited when the number of consumers is small. It's better to use some third-party SaaS service or OAuth server for the fourth one. I work professionally on these things and these implementations can be time-consuming. More often than not, people don't take their requirements into account when choosing a solution.
If you don't mind running a PHP application, or it being built in Laravel, (I don't, but some do) it's actually a really good implementation of a solid OAuth package[2] (Disclaimer: I am a maintainer on oauth2-server).
You can set this up in a couple of days, and it'll be ready to roll for the majority of use-cases. With a few simple tweaks and additions, you can have it doing extra stuff pretty easily.
In one build I'm working on, this acts as just the authentication layer. The actual API that relies on the tokens this generates sits elsewhere and could be written in any other language (it's not in this case).
[1]: https://github.com/laravel/passport [2]: https://github.com/thephpleague/oauth2-server
Web API that a device needs to authenticate to. Can't store a password on the device (it's a device we don't control). No user, so authentication has to be fully automated.
i.e. we need to run software on a clients machine, and it has to authenticate to our web api to send us data.
We obviously don't want to hard code the credentials in the software as that can be trivially extracted.
Am I missing something, or have you painted a contradiction?
* You want the device to hold some secret
* You want the device to be able to prove that it holds the secret
* You don't trust the device to hold a secret
If I'm understanding this correctly, then you've left the realm of cryptography and entered the realm of obfuscation.
Edit
This isn't necessarily a losing battle, but it changes the way we need to think about the problem.
Games consoles and DRM'ed video media (Blu-Ray and HDCP) do something similar in not trusting the end-user: they want to hold the key to the kingdom whilst ensuring the user never sees it. They've done this with varying levels of success.
Can you expand on this? Because storing a device-specific password (or api key, which is essentially the same) would be my first suggestion.
If it's because you can't configure the device, then my suggestion would be to create a process that embeds the device key into the software before deploying to each particular device.
We could have a password entered by our systems guys who deploy to a new machine for the first time, the service encrypts and stores that on disc, then each time it wants to talk to us it can decrypt its password.
I'm not sure if that would be a good solution, or is it just as insecure as having password in the code.
That way, the breach of a single device doesn't immediately give the attacker unlimited access to the API.
You should also monitor for unusual activity, and blacklist API keys and devices with such activity.
2. Generate pregenerated-key upon first login (maybe based on email or tel no?). Just like, e.g., Signal does it
3. On your servers, check if pregenerated-key and/or email is used more than once at the same time, if so invalidate it and direct user to 2.
We monitor for the same login being used twice at the same time and disconnect both and delete the account.
If you happen to be using Azure, I found it very useful for everything you'd want to do with your API, one of them being able to tie down security as much or as little as you need. It even builds a front end for anyone who has access to use for reference. But that's just one of the cool features.
TIA.
eta: I don't work for them, but really no need to roll your own.
We have implemented it for authentication with an ASP.NET Core webservice, with a REST-based API. Authorization is also possible, either by working with the JWT token scopes or using the Auth0 app_metadata.
If you want more, then use username + pass. Encrypt both or generate something from both of them, e.g. encrypt(username):encrypt(pass)
If you want more, use private & public keys, which receive a session token the first time ( when authenticating).
...
I think the end result would be a self hosted oauth server with permission management.
Some APIs also use self-invalidating auth tokens based on an expiry date. For more secure data, I'd prefer that.
http://page.rest does a good job.
Create a token, put your userId in it, set an expiry date.
If a request comes with a token, check that the token is valid, then check the userId & expiry date; otherwise throw an error.
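The flow described above can be sketched as an HMAC-signed payload carrying the userId and expiry. This is purely illustrative (hand-rolling token formats is exactly what the top comment warns against); the key and field names are made up:

```python
import base64
import hashlib
import hmac
import json
import time

KEY = b"server-side-secret"  # hypothetical signing key, never sent to clients

def make_token(user_id: int, ttl: int = 3600) -> str:
    # Payload carries the user id and an absolute expiry timestamp.
    payload = base64.urlsafe_b64encode(
        json.dumps({"uid": user_id, "exp": int(time.time()) + ttl}).encode()
    ).decode()
    sig = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def check_token(token: str):
    try:
        payload, sig = token.rsplit(".", 1)
    except ValueError:
        return None  # malformed token
    expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # tampered or forged
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return None  # expired
    return claims["uid"]
```

Note that, like any stateless token, this cannot be revoked before its expiry without adding server-side state back in.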
I believe the meta is that JWT is solid in itself but allows doing things "wrong". The guardrails, so to speak, are insufficient if not outright lacking.
I'd say just go with a plain text token for a web app. I don't like the idea of trusting the client, because I don't understand how JWT works.
I am interested to know whether HN thinks this is considered secure?
For users, I'd add an OAuth layer to the application layer and still have this application using an HMAC like above. You want to try to keep things 'stateless' when it comes to your APIs.
For users you'd need some way for the users to "fetch the secret", which is effectively what logging in is. At that point you should just use JWT or OAuth.
Bonus: If you stay close enough to the standard you can plugin a real OAuth 2.0 provider if/when you decide you need it.