How do I run something as SYSTEM? I thought I always ran as "me" or Administrator. Is this only likely to happen for deployment automation tools?
> Avoid running the uninstaller until after upgrading
Don't leave us with this cliff-hanger... Does the upgrade installer run the uninstaller first? (The original report doesn't have this bullet point.)
Seemingly everything needs admin permission to run. If a program can't get admin permission from you, it can often just sidestep that and do things like install to %APPDATA% instead. Chrome does this, for example, apparently to work around workplace administrators who didn't want their users installing programs with administrative permission. Chrome or anyone else can "fix" that restriction by using what amounts to a Windows-sanctioned loophole.
You can never really be sure what's happening when you install a mystery .exe until you actually install it. Sometimes you can poke around inside with analysis tools to take a peek, I guess, but in my experience that doesn't work very well most of the time.
This horrible permission management system is why a lot of games only work on Windows. If they want to make sure players aren't cheating, the way they do that is by installing rootkit-level malware. We call that anti-cheat. It's amazing that this is a standard procedure that can just work on a computer. It also has a history of entitled devs screwing it up big time (Sony), or implementing it so poorly that it causes huge performance drops, as we see in many recent games.
Modern Warfare 2019 crashed and corrupted my OS, which was on an encrypted drive. This led to 100% data loss for me. I'm 90% sure the root cause was the rootkit anti-cheat bugging out, on top of the awful coding, performance, and general bugginess we see everywhere in that game.
I don't understand how Windows still works like receiving packages at your door, where the only way to verify a package's safety is to guess that its timing syncs up with what you ordered, check that the packaging looks right, and hope that the place that packed it wasn't malicious. While nearly all programs do include bombs (unwanted tracking / malware / bad design), usually they're just fairly harmless firecrackers. And the only way to open your package is to put it on the flammable gas pipe in the middle of the house (a leftover design from earlier versions of the house), which has a very heavy flaming chainsaw attached by a 1-inch chain.
To oversimplify:
- there's the kernel, which can do anything
- there's SYSTEM, which is kind of like "root" in unix, but only used by services (you can't log in as SYSTEM)
- there are user accounts with Administrator rights, which can do "anything" (can be slightly limited by the above two)
- there are user accounts without Admin rights, kind of like normal user accounts in unix.
There are also more specialized Admin privileges, rights for everything are actually controlled by Access Control Lists and tokens, and your accounts and their rights might exist locally or domain-wide (centrally managed on some server), but unless you're a company you don't care about that.
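To the question upthread about how you'd ever run something as SYSTEM: you normally don't, but from an elevated prompt the Sysinternals PsExec tool can do it. A sketch (PsExec is a separate download, not built into Windows):

```shell
# From an elevated (Administrator) command prompt, PsExec's -s flag
# runs the target as SYSTEM, and -i makes it interactive:
psexec -s -i cmd.exe

# Inside that new shell, confirm the identity:
whoami
# nt authority\system
```

Deployment and management agents (SCCM, scheduled tasks, services) run this way all the time, which is why installers are expected to behave sanely under SYSTEM.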
The problems you describe are mostly due to three things:
- Home users got into the habit of only using one user account with admin credentials. It's convenient, but effectively running everything as root. That's why we now have UAC (that prompt when you try to do admin stuff with an Admin account), and the transition to UAC was pretty rough.
- Some software actually wants some protection from the user (antivirus has to protect itself from processes started by the user, anti-cheat has to protect itself from the user, etc.) Because home users use Admin accounts for everything, the only escape upwards in the hierarchy is into the SYSTEM account or into the kernel, with the latter being much more secure, but much worse if you do it wrong
- In large corporations obviously only IT has Admin accounts. That's how the system is supposed to work. People still want to install software without calling IT, so some software installs in the user folder (in %APPDATA%). That's no different from installing in ~/bin in unix.
The last two points are equally bad on linux; you just don't come across them as often because nobody does anti-cheat or runtime anti-virus on linux, and most linux systems are ultimately used and administered by the same person.
This is exactly what the "Signed by Windows" MSI feature is for. Of course, it's also a cash-in, with a prohibitively lengthy and expensive process for publishers (IMHO).
If corporate systems were stand-alone/isolated, we probably would not have this problem to the extent that we do.
It’s actually a _really_ common vector to exploit poorly written installers by dropping your own file (like a malicious dll or exe) into that directory as a low rights user in the hope that the high rights installer process will then load that code. That’s presumably what’s happening in this case.
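The planting attack described above relies on a privileged process loading code from a directory an unprivileged user can write into. A rough way to spot candidate directories on a Unix-like system (just a sketch I wrote for illustration; the Windows equivalent would inspect ACLs rather than mode bits):

```shell
# List immediate subdirectories of a prefix that are world-writable.
# Anything a privileged process later loads from such a directory
# could have been planted there by any local user.
plantable_dirs() {
  find "$1" -maxdepth 1 -mindepth 1 -type d -perm -0002 2>/dev/null
}
```

The real-world fix is for the installer to only load from directories whose ACLs restrict writes to administrators, and to verify signatures on anything it does load.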
How do you define unprivileged?
I do not quite understand what the 'correct' behaviour is, considering the parent-directory thing. The concrete problem seems rather to be that there is no -safe switch to use in prompts, etc.
I'm curious about this -- what's the attack vector here?
> If set, the value of this variable is used as a command which will identify all files that may have changed since the requested date/time.
I feel like this change is far bigger than anticipated, and it's going to break some workflows, such as mine, where I have a git repo shared between multiple "users" that I run applications as. I'm glad I haven't gotten too far into this project. My next step is to just do a pull/push cycle on every commit, but that's a bit more of a pain to set up.
Breaks the CI system for perl, for example.
But ok, let's not take it as an excuse. How about fixing git, then? I mean, actually fixing: making it possible to disable hooks & core.fsmonitor & whatever else they fucked up? No, right, let's just disable git instead.
And if I'm reading this correctly, I'm not even allowed to say "I don't care" — I must explicitly mark every shared directory as trusted (I mean, safe.directory = '/' won't work unless / is actually a git directory, right?).
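For what it's worth, the per-directory marking looks like this in practice. safe.directory is multi-valued, so each shared repo gets its own entry (paths here are made-up examples):

```ini
# ~/.gitconfig
[safe]
	directory = /var/www/shared-repo
	directory = /srv/shared/another-repo
```

Later releases also accept `directory = *` as a blanket opt-out, but no such escape hatch shipped in 2.35.2 itself.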
I guess I just shouldn't update git until this "fix" is fixed. Or until git is forked.
not everyone is running on a multi-user system either (realistically speaking, most personal computers are single user). That doesn't mean microsoft/apple/linux doesn't care about escalation of privilege exploits.
>Truth is, developers are doing something that can fuck up their system daily. Let's forget about wget | bash and copying completely untrusted git repositories (and it's pretty much guaranteed that everybody using git-enabled PS1 won't shy away from that).
So because devs are doing dumb shit on a daily basis, they shouldn't fix security vulnerabilities? What if I'm not doing dumb shit? Should I get hacked because I entered a malicious directory on a multi-user system?
>I mean, actually fixing: making it possible to disable hooks & core.fsmonitor & whatever else they fucked up? No, right, let's just disable git instead.
but then what if you need hooks? then you'll have to somehow manually enable it on a repo-by-repo basis, which also doesn't seem very convenient. At least with the ownership check it's transparent to most users. For people that use shared directories and/or network drive mounts, they can always whitelist the path.
> but then what if you need hooks?
Now that's just genius! So making it possible to disable the functionality that specifically allows execution of arbitrary code (which is questionable on its own, to say the least: it's pretty much the definition of the aforementioned "dumb stuff") is bad because having to re-enable it would be "inconvenient", while disabling the whole multi-purpose tool that git is (with hundreds of use cases that don't require executing arbitrary commands) is good? That's a rhetorical question, of course; just think about what you are saying. Worth noting that making re-enabling inconvenient is a strawman of yours: my point is precisely that even what they did would be OK if I were simply allowed to disable their "fix". And that's exactly the problem: there's no convenient option to do so.
What's wrong with that? Git hooks are inherently dangerous (i.e. running arbitrary code) and should be something you opt into manually.
Because they're authoritarian control-freaks who want to take away even the concept of ownership eventually, having it all to themselves. They want to be able to force users into doing whatever they want.
If you're wondering "Linux too?" --- I'm not saying Linus himself is an enemy, nor a lot of the neutral developers who have contributed good things to it, but all the corporate interests (like Android --- via Google) have shoved plenty of "trusted" computing shit into the kernel, and "secure" boot for Linux distros is still ultimately controlled by a Microsoft key.
We are starting to wake up to this "security" bullshit.
Symlinks, the poisonous gift that keeps on giving.
That's why symlinks MUST die.
You can sort of think of a symlink as having 2 owners: the user that owns the symlink itself, and the user who owns the file pointed to by the symlink. One of those owners might be an attacker, so every time you interact with a file, you have to think "this file might be half-owned by an attacker, and half-owned by a victim".
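That two-owner view is visible directly with stat: without -L you see the link object itself, with -L you see whatever it currently resolves to, and the two can have different owners, permissions, everything. A small demo (both ends happen to be owned by the same user here; the point is that you get two distinct answers for one path):

```shell
t=$(mktemp -d)
touch "$t/target"
ln -s "$t/target" "$t/link"

stat -c '%F' "$t/link"      # the link object itself: "symbolic link"
stat -L -c '%F' "$t/link"   # what it points at: "regular empty file"
```

The TOCTOU danger is that the target can be swapped between your check and your use, which is why careful code uses O_NOFOLLOW / lstat-style interfaces.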
It's not just a whine, I'm also going to make some suggestions for fixing it :-).
It's also not entirely clear to me what this does to site-wide shared remotes, though I suppose if they can be listed in system config, it's at least not a per-user hassle.
Lacking CI, I still tend to do builds in a container as a separate local user with read-only permissions. I wish they'd add an option to allow it when the user is in the owner group; that could be a decent compromise.
[0]: https://git-scm.com/docs/git-config/2.35.2#Documentation/git...
Or else, if you're not root, you're on a messed-up system. Whoever is root should go read some 40-year-old book on Unix about how it's supposed to be laid out.
This is not a genuine security vulnerability; though of course, it's good to fix it.
Here is how I would fix it. Forget about permissions and ownership entirely. There is a weaker, more powerful condition we can check. Ready?
Git should terminate if it is executed from a subdirectory of a git repo that contains no tracked files according to the first .git/ directory that it finds while ascending the file system.
If you're in a directory that contains no files tracked by the closest .git/ that can be found by walking up the tree, then that directory has no relationship to that repo. Git should diagnose that and bail out. (It could allow files in that directory to be added to the index, but only with the -f option to force it.)
If git finds a .git/ dir, and that repo's index shows that at least one item in, or below, your working directory is in that repo's index, it should go ahead and work with it, regardless of ownership.
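The first half of that rule (find the enclosing .git/ by walking up) is easy to sketch in shell; the second half would then consult that repo's index, e.g. with `git ls-files -- .`, which I've left out here:

```shell
# Walk upward from a directory to the first enclosing .git/ (sketch).
find_enclosing_git() {
  dir=$1
  while [ "$dir" != "/" ]; do
    if [ -d "$dir/.git" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  return 1   # hit the filesystem root without finding a repo
}
```

The tracked-files check is what distinguishes this from what git already does: a planted /tmp/.git would be found by the walk, but would track nothing under your working directory, so the command would bail.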
Your suggestion may protect against accidents, but doesn't seem to me to do anything for deliberately malicious behavior.
Easy fix: on boot, have the "rc" script create a root-owned /tmp/.git dummy file with r-------- permissions.
Someone can also create a /tmp/foo/.git; but to be susceptible to that, you have to be under /tmp/foo. That's another user's directory. What are you doing in there? Serves you right.
If /tmp/foo is your own, and someone planted a .git into it, that's your problem too: you're creating material in /tmp that is accessible to others, which is a security no-no.
Probably, this should be fixed in the kernel: the kernel should not allow a regular user to create a hidden directory (i.e. name starting with ".") in /tmp. Or probably any hidden object.
Such a fix is more general: it fixes the issue for any git-like program that walks up the tree looking for a special dot directory or file, including all such programs not yet written.
The rule could be general, such that creating a hidden object in a directory is only allowed to the directory's owner, not just to anyone who has write permissions to the directory.
In other words, if multiple users have write access to a directory, whether /tmp or any other directory of that kind, then they are not allowed to perpetrate hidden objects on each other (both because those things don't show up under "ls" without "-a" and because programs find them and react to them).
In fact, I would go one step further and enforce the kernel rule that writing to an existing dot file is denied to anyone other than the owner of that file, regardless of its write permissions.
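In the absence of such a kernel rule, you can approximate the policy in userspace; here's a sketch of a wrapper that refuses to create a hidden entry in a directory owned by someone else (pure illustration of the rule, not real enforcement, since nothing stops a caller from using plain touch):

```shell
# Refuse to create a dot-named entry in a directory we don't own.
safe_touch() {
  dir=$(dirname "$1")
  base=$(basename "$1")
  case $base in
    .*)
      if [ "$(stat -c '%u' "$dir")" != "$(id -u)" ]; then
        echo "refusing to plant hidden entry '$base' in foreign dir '$dir'" >&2
        return 1
      fi ;;
  esac
  touch "$1"
}
```

A kernel version of this would sit in the same place as the existing protected_symlinks / protected_hardlinks sysctls, which apply a similar "sticky shared directory" heuristic.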
(A better fix would be to allow the command if there's no hooks, which does seem feasible, and only failing if it's actually asked to do something dangerous.)
No criticism intended.
$ grep GIT_CEILING_DIRECTORIES ~/.bashrc
export GIT_CEILING_DIRECTORIES=$HOME:/var/www
But malicious Git repos could still affect your user profile. You can harden that by putting all git repos in a sandbox, e.g.:

export GIT_CEILING_DIRECTORIES=$HOME/sandbox

It seems they forgot to provide an exception for the root user, or a way to disable this "feature" globally instead of per-directory.
If a malicious actor has access to the filesystem, isn't it a bigger problem? I remember Raymond Chen recounted in his blog that Microsoft usually dismisses vulnerability reports that start with "to use the exploit, you must have access to the machine". As he likes to say, "the gates are already open". If you already have access to the machine and can create files outside of your home directory, what stops you from causing even greater havoc?
Generally, you still want these additional protections even if you don't expect others to have access to a machine. Can't say if one or the other is a bigger problem. I think they are all components of having a secure system.
These systems don't let you put files in other people's directories. You can only create things in a specific spot, and if that thing is a directory then you and only you can put files inside it. Sometimes the only thing you can make in that spot is a directory.
(Other users can access those files if you explicitly add them to the permissions, of course.)
ubuntu@vpn1:$ git --version
git version 2.25.1
ubuntu@vpn1:$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

However, you can use bullseye-backports to get 2.34.1 if you want: see https://packages.debian.org/git
Edit: None of the debian versions have the patch yet: https://security-tracker.debian.org/tracker/CVE-2022-24765
I think this is a big mistake. Build environments use separate users for security purposes. It's insane to decrease security for everyone by requiring a single user to do everything because some of your users want to have fancy terminal prompts.
At the very least, let users configure this at a per-user level.
Besides, now that this security issue is patched, the git devs should seek a proper solution that doesn't break git and decrease security for everyone else.
That said, if it is intended, I'm surprised there isn't a comment mentioning that because it certainly looks like a bug.
git_config_pathname(&interpolated, key, value)

It's fascinating to me that we have people out there casually making these kinds of decisions, with enormous cost implications, with barely any thought to the downstream consequences. Meanwhile, we need approval in our org to claim a $30 taxi voucher as an expense.
I think the same thing is going on with non-security reviewed software. The additional cost of thoroughly reviewing everything is so high and software is generally so good, that we can rely on word of mouth and past project behavior to make guesses that make economical sense.
So git has proven a good enough steward in the past, and society swallows the cost of this update. Organizations trusting humans with $30 at scale has proven not to work out, so red tape grows like mushrooms in that region. And log4j2 might be an example of all of us stopping for a moment and evaluating if the cost is still worth it.
Staying sane as an open source maintainer means ignoring such thanklessness as best you can.
This is relatively low risk so I would expect the mitigation to consist of "let your existing automation update it".
Microsoft's own package manager Winget only has v2.34.1 right now.
Chocolatey https://community.chocolatey.org/packages/git#versionhistory
$ winget show git.git
Found Git [Git.Git]
Version: 2.35.2
Publisher: The Git Development Community
Publisher Url: https://gitforwindows.org
Publisher Support Url: https://github.com/git-for-windows/git/issues
Author: Johannes Schindelin
Moniker: git
Description: Git for Windows focuses on offering a lightweight, native set of tools that bring the full feature set of the Git SCM to Windows while providing appropriate user interfaces for experienced Git users and novices alike.
Homepage: https://gitforwindows.org
License: GNU General Public License version 2
License Url: https://raw.githubusercontent.com/git-for-windows/git/main/COPYING
Copyright: Copyright (C) 1989, 1991 Free Software Foundation, Inc.
Copyright Url: https://raw.githubusercontent.com/git-for-windows/git/main/COPYING
Installer:
Type: inno
Download Url: https://github.com/git-for-windows/git/releases/download/v2.35.2.windows.1/Git-2.35.2-64-bit.exe
SHA256: 8d33512f097e79adf7910d917653e630b3a4446b25fe258f6c3a21bdbde410ca

For future reference, the GitHub manifest page seems to be the better choice:
https://github.com/microsoft/winget-pkgs/tree/master/manifes...
On the other hand, having a git-aware PS1 would also immediately alert you to the fact that a user had created a top-level .git folder, thereby allowing you to prevent the first CVE here.
And to the fact that someone other than me had write access to my disk, in which case git is probably the least of my worries.
The Unix "plug-in API" is pipes and exec and "everything is a file (descriptor)". A "good plug-in API" that doesn't support anything written outside the "huge convoluted script language" is not a plug-in API, it's an internal API of the "convoluted script language".
"Do one thing and do it well doesn’t really appeal to me to begin with" means that you don't like the Unix model in general.
Absolutely correct. While it does have benefits in some situations, writing cross platform command line tools isn’t a place where it shines.
Yes? Some people don’t like it.
Jr Engineer: "Hey, I know we've always managed our little dotnet application via email and shared-network-drive, but I've been reading about a thing called "git" that we should probably use."
Sr Engineer: "Change is scary and bad, also we are not a software company. We're not going to learn some newfangled whatsit. Just email me the .vba files when you want me to review the changes with the one copy of visual studio 2008 that our team has access to."
Jr Engineer: "C'mon, give it a chance! We can leave everything the way it's always been, but have better tracking of changes. Remember that time Bruno hard-coded the tool to point to the C: drive? Git would let us just undo that, instead of having to search our emails for the last-most-recent version."
Sr Engineer: "Ok fine, I've got 10 minutes, show me."
Jr Engineer: "Ahh! Well I just got it installed, so let me go to the network drive... and then I think I have to git init our project folder... huh? Let me just... Maybe if I..."
Sr Engineer: "Times up! Looks like this "git" thing isn't compatible with our setup after all. Those modern dev types never make anything that works in a real enterprise environment."
The only difference is that my git pitch went really well and they promised they would start using it. They never started using it.
CVE-12345: insecure use of consumer grade operating system in multi-user role when expecting any form of real isolation
CVE-12346: faulty system administration techniques, including running anything as SYSTEM, can cause things to run with elevated privileges
CVE-12347: failure to secure root (C:) and important system directories can allow malicious actors to access them. This can be exploited to trick other parts of the system into doing ... things.
I don't mind patching git for windows to workaround these things, but sheesh, the root cause of both of these is people using Windows incorrectly/insecurely.
Let me fix that for you:
> people using Windows.
Cloning will not copy .git/hooks/ or .git/config, which is the main danger here, I guess. But I'd sure want to hear about other risks.
Maybe an env variable to disable hooks execution and .git/config parsing would be nice to have for safer use of git repositories you didn't clone yourself as part of shell prompt customizations.
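Something close already exists: since git 2.31 you can inject configuration through the environment, so a prompt integration could neutralize hooks and fsmonitor per invocation. A sketch (the config keys and env vars are real; combining them this way is just my suggestion):

```shell
# One-shot git invocation with hooks and fsmonitor neutralized:
GIT_CONFIG_COUNT=2 \
GIT_CONFIG_KEY_0=core.hooksPath GIT_CONFIG_VALUE_0=/dev/null \
GIT_CONFIG_KEY_1=core.fsmonitor GIT_CONFIG_VALUE_1=false \
git status
```

It doesn't cover every code-execution path .git/config opens up (core.pager, aliases, etc.), which is presumably why upstream went for the blunter ownership check.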
And of course the unspoken: almost nobody uses git on multi user systems, and when they do, most of the time every single user already has sudo.
We operate some number of repositories and the majority of them use https://github.com/actions-ecosystem/action-get-latest-tag - or more specifically, a fork of that repo which more or less works the same way.
Midday today our CI/CD started failing. We must have hit this so soon because the `apk add git` in that Dockerfile grabbed the new git version. Evidently the SID that ultimately executed the git command inside the included action's Dockerfile was not the same as the one that owned `/github/workspace` on the runner.
We were able to patch around using the new `safe.directory` option, but I'm curious to see if there's more fallout since CI/CD environments in particular create this sort of shared repository.
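For anyone hitting the same thing, the workaround is one extra config line in the runner image, run before any other git commands (the workspace path is the one from our setup; yours may differ):

```shell
# Mark the checkout as safe regardless of which UID runs git in the container.
git config --global --add safe.directory /github/workspace
```

Because `--add` appends rather than overwrites, you can repeat it for every mount point your actions touch.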