The usual line is that if there are no logs saying something bad actually happened, then there's certainly nothing to say that it did, even though some terribly guessable credentials were used for ages on something publicly exposed. I know, they know, but I was told in no uncertain terms to drop it.
Nothing to see here, move along. Work to be done, money to be made.
Right now my ChatGPT-4 history is full of chats I didn't create, on subjects ranging from corporate governance to Roblox scripting to somebody's math homework. It's only a matter of time before this bug leaks sensitive personal data. I spent 10 minutes looking for a way to report it, but they have successfully insulated themselves from any contact with their (paying) customers.
Pretty annoying, and not something you expect from a supposedly security-savvy company... although that expectation is certainly changing.
It sounds like the bug affecting your account is uncommon, which makes your account special. As an AI security researcher I can help investigate the extent of the issue, and I have contacts who can help call attention to it. Thank you for discovering this and trying to escalate it.
They can't even do basic auth properly so ...
Not at all. OpenAI follows the accepted basic standards for security reporting. This is like complaining that you can't tell which directories a website doesn't want crawled because you don't know that robots.txt exists.
Specifically, OpenAI has a security.txt [0], which is:
> an accepted standard for website security information that allows security researchers to report security vulnerabilities easily [1]
Whenever you're trying to find where to report a security issue, the easiest first step is always to check whether the website has a security.txt file.
[0] https://openai.com/security.txt
[1] https://en.wikipedia.org/wiki/Security.txt
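To make the "just check security.txt" advice concrete, here is a minimal sketch of parsing the Contact fields out of a security.txt file (per RFC 9116, these files live at /.well-known/security.txt). The sample text is a trimmed stand-in, not OpenAI's full file; in practice you would fetch the real URL first.

```python
# Minimal sketch: extract reporting contacts from security.txt (RFC 9116).
# The sample below is a hypothetical trimmed file; in practice you would
# fetch https://example.com/.well-known/security.txt and parse its body.

def parse_contacts(text: str) -> list[str]:
    """Return the values of all Contact: fields, ignoring comments."""
    contacts = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#") or ":" not in line:
            continue  # skip comment lines and anything that isn't a field
        field, _, value = line.partition(":")
        if field.strip().lower() == "contact":
            contacts.append(value.strip())
    return contacts

sample = """\
# Example security.txt
Contact: https://bugcrowd.com/openai
Contact: mailto:disclosure@openai.com
Policy: https://openai.com/policies/coordinated-vulnerability-disclosure-policy
"""

print(parse_contacts(sample))
# ['https://bugcrowd.com/openai', 'mailto:disclosure@openai.com']
```

RFC 9116 requires at least one Contact field, so a report destination should always fall out of this parse if the file exists at all.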
Here's their security.txt:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
#
# [ASCII-art OpenAI logo, mangled in extraction; omitted]
#
Contact: https://bugcrowd.com/openai
Acknowledgments: https://bugcrowd.com/openai/hall-of-fame
Policy: https://openai.com/policies/coordinated-vulnerability-disclosure-policy
Hiring: https://openai.com/careers/search?c=security
Canonical: https://openai.com/.well-known/security.txt
Encryption: https://cdn.openai.com/security/disclosure.asc.pub
# You may also email us directly.
Contact: mailto:disclosure@openai.com
-----BEGIN PGP SIGNATURE-----
iHUEARYKAB0WIQQ5fYPd6Hi19rZDZ+kKj1HZ7OnINQUCZbiKWgAKCRAKj1HZ7OnI
NS9+AQCTx4vlrCp+Urd1fa/lAQ3dcV8VNHOxA4JnxD0TH2nxwQEAuqoxenxPZWeD
+IsSikn4em/LEheOeAakGDzZedcu1QE=
=rMRk
-----END PGP SIGNATURE-----

Hello and thank you for reaching out to OpenAI. Our vulnerability disclosure program has migrated to OpenAI's bug bounty program, and this mailbox is no longer monitored. Please use the "submit report" functionality available through our bug bounty platform to inform us of security concerns, or reach out to support@openai.com for any non-security-related inquiries.

Thank you for your help in securing OpenAI!

Bug Bounty Program: https://bugcrowd.com/openai

... that was a joke, right? So only people who have heard of the security.txt convention are expected to find this information easily when they need to report a bug?
Under "Reporting security issues" it points you to a bug bounty page: https://bugcrowd.com/openai with a bunch of explanations.
I'm guessing that if you just send an email to security@openai.com it'll reach someone. Using Bugcrowd just seems like a convenient way to also run a bug bounty as part of their normal intake.
https://cookbook.openai.com/examples/how_to_call_functions_w...
https://news.ycombinator.com/item?id=40474451#40474452
The bug bounty report was closed with a message saying:
> Upon reviewing your report and consulting with the OpenAI team, we have determined that this feature is operating as intended. This means it does not constitute a valid sandbox escape. The Code Interpreter environment is securely sandboxed to support code writing and execution, including shell operations. Any code execution within this environment falls outside the scope of our program ... As you have not demonstrated a valid sandbox escape or RCE, we're closing this submission as Not Applicable.
This shows a fundamental misunderstanding of basic coding, as the eval() I pointed them to is completely unrelated to the Code Interpreter environment. So the report was incorrectly closed as "Not Applicable", with no real further way to get them to fix it. I tried contacting the Cookbook authors directly, but never heard back.

I can't be arsed to create an account on a third-party 'bug bounty' site, or to waste time guessing email addresses, or to download a security.txt file I've never heard of. Sorry. Their loss, not mine. If they make it hard for me to help them, they can't be too surprised when I give up trying.
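For readers unfamiliar with the class of bug being described: dispatching a model-chosen function name through eval() is dangerous regardless of any sandbox, because eval() will run arbitrary Python expressions, not just the tool names you intended. The sketch below uses hypothetical function names (it is not the Cookbook's actual code) to contrast the risky pattern with an explicit allow-list dispatch.

```python
# Hedged illustration of the bug class described above. The names here
# (get_weather, TOOLS, call_tool_*) are hypothetical, not from the Cookbook.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}  # explicit allow-list of callables

def call_tool_unsafe(name: str, arg: str) -> str:
    # Anti-pattern: eval() evaluates arbitrary expressions, so a crafted
    # "function name" like "__import__('os').system" becomes code execution
    # in the caller's process, not the model's sandbox.
    return eval(name)(arg)

def call_tool_safe(name: str, arg: str) -> str:
    # Safer: only names registered in TOOLS can ever run.
    fn = TOOLS.get(name)
    if fn is None:
        raise ValueError(f"unknown tool: {name}")
    return fn(arg)

print(call_tool_safe("get_weather", "Oslo"))  # Sunny in Oslo
```

The point the commenter is making is that this eval() runs in whoever copies the example's own environment, which is why "the Code Interpreter sandbox is secure" is a non sequitur as a rejection reason.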
The ship has sailed; OpenAI wants you to put everything into their system. It makes them more valuable. They know there are no repercussions, because their base will blindly advocate for them regardless, under the guise of “the best LLM”.
Actual article: https://www.nytimes.com/2024/07/04/technology/openai-hack.ht...
More discussion: https://news.ycombinator.com/item?id=40887619
A poorly written article regurgitating the NYT story with uninformed, alarmist, podcast-tier ‘analysis’.
Jog on.
A bit more complicated than that for public companies. But OpenAI is private, so yeah, they most likely don't have to. It's still an interesting scoop for a journalist, though.
If the internal culture is to keep problems under wraps to maintain appearances, this seems like it might backfire at some point.
The article just rambles about some unnamed, uninformed AI-phobes being concerned about US national security in relation to China, because of some unknown OpenAI internal information that might have leaked.