For customer OAuth tokens, I believe you should NEVER store the access token in the database. Only store the refresh token. When you need API access, exchange the refresh token (plus your client secret) for a new access token.
This prevents incidents like the above as the attacker would also need your client secret, which is ideally not in your database.
> In general, GitHub Apps are preferred over OAuth apps.
[1] https://docs.github.com/en/apps/oauth-apps/building-oauth-ap...
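The refresh-only pattern above can be sketched roughly like this. The endpoint and form fields follow GitHub's documented OAuth token flow; the helper name and all credential values are made up for illustration, and the key point is in the comments: the client secret comes from secret storage, not the database, and the returned access token is used immediately rather than persisted.

```python
# Sketch of the "store only the refresh token" pattern, assuming a GitHub
# OAuth app with token expiration enabled.
import urllib.parse
import urllib.request

TOKEN_URL = "https://github.com/login/oauth/access_token"

def build_refresh_request(refresh_token: str, client_id: str,
                          client_secret: str) -> urllib.request.Request:
    """Build the POST that trades a stored refresh token for a short-lived
    access token. The client secret lives in config/secret storage, NOT in
    the database next to the refresh token."""
    body = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,   # the only credential in the DB
        "client_id": client_id,
        "client_secret": client_secret,   # from secret storage
    }).encode()
    return urllib.request.Request(
        TOKEN_URL, data=body,
        headers={"Accept": "application/json"}, method="POST",
    )

# The short-lived access token returned by this call is used for the API
# request at hand and never written back to the database.
```

With this split, a database dump alone yields refresh tokens that are useless without the separately stored client secret.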
They are "in the process" and "looking" to do that. As of now, if I am not mistaken, the main issue still persists, and any other new security vulnerability could give full access to tokens again.
Correct me if I am wrong please but I don't see anything in the comment or Blog post saying otherwise.
> and are also in the process of completely deprecating the admin tokens for a more secure internal authentication procedure. Not to mention, we're also looking to fully deprecate the need of the GitHub OAuth tokens entirely in the coming weeks.
https://www.reddit.com/r/cscareerquestions/comments/1bh22bq/...
I think the best thing I can say about it is this: Mintlify helps us write better docs. Before Mintlify, we had a humongous readme plus a few one-off articles in the wiki. After switching to Mintlify, we started organizing things better, beginning with a less overwhelming readme.
I found out about this hack when they sent me an email yesterday. I looked through the git history (edit: and github activity), but nothing seems awry.
It sucks, but they're a pretty small business, and I think they're handling it reasonably well.
Git history or GitHub event history? You can easily push a malicious commit forged to have the same author and short hash.
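To illustrate how cheap the author half of that forgery is: git author identity is client-supplied metadata, so anyone can commit as anyone (names below are made up; run in a throwaway repo). Matching a victim commit's short hash additionally takes a brute-force search over timestamps or whitespace, but a 7-hex-digit prefix is only ~268M possibilities.

```shell
# Demo: commit as "Alice" while the real committer is someone else entirely.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=Attacker -c user.email=attacker@example.com \
    commit --allow-empty \
    --author="Alice Example <alice@example.com>" \
    -m "innocuous-looking change"
git log -1 --format='%an <%ae>'   # shows Alice, not the attacker
```

Only GitHub's server-side event/audit log records which authenticated credential actually pushed the commit, which is why checking it matters more than reading `git log`.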
In May 2023, I reported issues similar to what's described here.
It's been a while! I hope you're doing well!
The vulnerability you reported, if I recall correctly, related to server-forwarding attacks via our API playground. We addressed that issue around 8 months ago, and unfortunately, it was not the root of this incident.
Please correct me if I am wrong though!
As an aside, GitHub’s security model for apps/integrations is extremely puzzling to reason about and enables a lot of foot guns. Add to that the fact that it’s very obtuse to audit integrations (especially within an organization), and they become pretty scary to use sometimes.
To protect our customers from the publicity we were expecting from the announcement, we decided to leave the technical details of the breach out of the blog post.
The source of this security incident was an uncaught error response in one of our APIs that didn't properly format the response before sending it back to the client. The response contained our internal admin tokens, which could then be used to access internal endpoints that unveiled sensitive user information.
Our initial patch upon discovering the incident fixed the response of the vulnerable endpoint. We have since implemented a sweeping number of additional security provisions and are in the process of completely deprecating both the admin tokens and the GitHub OAuth tokens entirely, to prevent an incident like this from ever happening again.
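The bug class described above (an unhandled error path serializing internal context straight into the HTTP response) can be sketched like this. This is a hypothetical reconstruction, not Mintlify's actual code; the token value, context dict, and function names are all invented for illustration.

```python
# Hypothetical sketch of the bug class: a catch-all error path that dumps
# whatever was in scope (including an admin token) into the response body,
# versus a handler that returns a fixed, generic schema.
import json

INTERNAL_CTX = {"admin_token": "adm_example_secret", "user": {"email": "a@b.example"}}

def leaky_error_response(ctx: dict) -> str:
    # BUG: serializes internal state straight to the client.
    return json.dumps({"error": "internal error", "context": ctx})

def safe_error_response(err_id: str) -> str:
    # Fixed schema: a generic message plus an opaque correlation id that
    # operators can look up in server-side logs; nothing else escapes.
    return json.dumps({"error": "internal error", "id": err_id})
```

The general defense is an allowlist: error responses are built from an explicit, fixed set of fields, never by dumping an exception, request context, or config object.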
>"Our dedication to transparency, security, and the trust you place in us remains unwavering."
You are contradicting yourself here.
>"The source of this security incident was due to an uncaught error response in one of our APIs that didn't properly format the response before sending it back to the client. The response contained our internal admin tokens, which can then be used to access internal endpoints, which unveiled sensitive user information."
Why would you leave that out? Seems like it is vital information.
This doesn't exactly inspire confidence that your service is now secure.
[first red flag] No internal monitoring to check for unauthorized access
Poor engineer(s) tasked with the issue on a Friday at the end of the work day manually rake logs in their app. After ~1 hr of searching, they discover an unauthorized device using leaked credentials.
[second red flag] any device can hook into their critical infrastructure and access APIs that have the potential to expose PII and possibly allow lateral movement within the org or to customers.
Then the poor engineers are given the painstaking task of rotating all of the tokens throughout the night. Bye bye family time and any planned events (game tickets?). I hate this company already and I have no clue wtf they do.
[third red flag] why the fuck are they storing user tokens in a database. This is apparently a “SOC 2” certified application/company. Shows how much that is worth here.
[edit: fourth red flag] no indication of how long the unauthorized credentials were in use or which customers were impacted. Very, very piss-poor logging, or purposely omitted due to potential litigation issues.
Probably missed a few but this is bad.
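On the third red flag, one standard mitigation (sketched below, names invented): any token your service only needs to *verify*, such as an inbound API key, should be stored as a hash, so a database dump yields nothing replayable. Outbound OAuth tokens that must be presented to a third party can't be hashed and instead need reversible encryption with the key held outside the database (or, as suggested elsewhere in this thread, not stored at all).

```python
# Sketch: store only a digest of an issued API key; compare on use.
import hashlib
import hmac
import secrets

def store_api_key() -> tuple[str, str]:
    """Issue a key to the caller; persist only its SHA-256 digest."""
    key = secrets.token_urlsafe(32)
    digest = hashlib.sha256(key.encode()).hexdigest()
    return key, digest  # key -> shown to the user once, digest -> database

def verify_api_key(presented: str, stored_digest: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, stored_digest)
```

With this layout, leaking the table leaks digests, not credentials.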
The “S” in Mintlify clearly stands for security.
I was contracting for them last year and tried, among other things, to build an actual engineering culture that prevents and fixes the issues that accumulate into catastrophic incidents like this.
They generally prefer to "ship fast".
I informed them very thoroughly again on January 13th (3+ months after they terminated me for "cultural differences"), because I was worried about exactly this nightmare scenario happening very soon.
The reason was that they open-sourced a package that lets an attacker easily practice and test the attack locally in about a minute.
MDX easily exposes you to cross-site scripting. I assume this is the "fixed vulnerability" they are talking about, just to be transparent.
I saw this pop up based on this Reddit thread and on Twitter as well:
https://www.reddit.com/r/ExperiencedDevs/comments/1bf7eqa/ni...
This seems serious? Is this really serious?
Why would they need to save these tokens in the first place?
Everyone should audit their GitHub Apps periodically and avoid using them if at all possible, IMO. Most of these integrations are just a convenience for adding webhooks, which you can set up yourself without compromising security. Always prefer "outbound" integrations.
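A do-it-yourself inbound webhook along those lines is small: GitHub signs each delivery with an HMAC-SHA256 of the raw request body using your webhook secret and sends it in the `X-Hub-Signature-256` header as `sha256=<hex>`, per GitHub's webhook documentation. The function name and secret below are illustrative.

```python
# Verify a GitHub webhook delivery without installing any GitHub App.
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes,
                            signature_header: str) -> bool:
    """Recompute the HMAC of the raw body and compare, in constant time,
    against the X-Hub-Signature-256 header value ("sha256=<hex>")."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

A receiver that drops unsigned or mis-signed deliveries gets the same push/PR events as most integrations, with no third party ever holding a token for your repos.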
SOC 2 is useful for setting a baseline for how a business and IT org should be run, assuming it's followed...
It's surprisingly common for places to be SOC 2 compliant, yet their latest report has half a dozen or more gaps/findings.