Python dependency management and environments have been a pain for 15 years. Poetry was nice but slow and sometimes difficult.
Uv is lightning fast and damn easy to use. It’s so functional and simple.
I switched to using uv just 2 weeks ago. Previously I had been dealing with maintaining a ton of batch jobs that used: global packages (yes, sudo pip install), manually managed virtualenvs, and docker containers.
uv beats all of them easily. Automatically handling the virtualenv means running a project that uses uv feels as easy as invoking the system Python.
It's probably worth mentioning that Astral (the team behind uv etc.) is filled with people with a history of making very good CLI tooling. They probably have a very good sense for what matters in this stuff, and are thus avoiding a lot of pain.
Motivation is not enough; there's also a skill factor. And having multiple people working on it "full time"-ish means you can get so much done, especially before the backwards-compat issues really start piling up.
It sounds like uv is a drop-in replacement for pip, pipx, and poetry with all of their benefits and none of the downsides, so I don't see why I wouldn't migrate to it overnight.
Then I gave it a try and it just worked! It’s so much better that I immediately moved all my Python projects to it.
Poetry, which I think is the closest analogue, still requires a [tool.poetry.dependencies] section afaik.
Or is the superior replacement actually up to the job this time?
I just found out they’re still making pipenv. Yes, if you’re using pipenv, I’m confident that uv will be a better experience in every way, except maybe “I like using pipenv so I can take long coffee breaks every time I run it”.
I’ve tried uv in a couple of places where it’s been forced on me, and it didn’t work for whatever reason. I know that’s anecdotal and I’m sure it mostly works, but it obviously was off-putting. For better or worse I know how to use conda, and despite having no special attachment to it, "slightly faster with a whole different set of rough edges" is not at all compelling.
I have a feeling this is some kind of Rust fan thing and that’s where the push comes from, to try and insinuate it into more people’s workflows.
I’d like to hear a real reason I would ever migrate to it, and honestly if there isn’t one, am super annoyed about having it forced on me.
uv uses some very neat tricks involving hard links: if you start a new uv-managed virtual environment and install packages into it that you've used previously, the packages are hard-linked in from a shared cache. This means the new environment becomes usable almost instantly and you don't end up wasting filesystem space on a bunch of duplicate files.
This means it's no longer expensive to have dozens, hundreds or even thousands of environments on a machine. This is fantastic for people like myself who work on a lot of different projects at once.
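The hard-link mechanism is easy to see with the stdlib. This is a toy demo of the underlying filesystem trick, not uv's actual code:

```python
# Toy demo of why hard links make duplicate installs cheap: two directory
# entries, one set of bytes on disk. (Not uv's code, just the mechanism.)
import os
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    cached = Path(tmp) / "cache" / "module.py"
    cached.parent.mkdir()
    cached.write_text("print('hello')\n")

    in_venv = Path(tmp) / "venv" / "module.py"
    in_venv.parent.mkdir()
    os.link(cached, in_venv)   # a hard link, not a copy

    same_file = os.path.samefile(cached, in_venv)  # same inode
    link_count = cached.stat().st_nlink            # two names, one file

assert same_file
assert link_count == 2
```

A thousand environments that hard-link the same wheel contents cost roughly one environment's worth of disk.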
Then you can use "uv run" to run Python code in a brand new temporary environment that gets created on demand within milliseconds of you launching it.
I wrote a Bash script the other day that lets me do this in any Python project directory that includes a setup.py or pyproject.toml file:
uv-test -p 3.11
That will run pytest with Python 3.11 (or 3.12/3.13/3.14/whatever version you like) against the current project, in a fresh isolated environment, without any risk of conflicting with anything else. And it's fast: the overhead of that environment setup is negligible. Which means I can test any code I like against different Python versions without any extra steps.
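The script itself isn't shown, but here is a sketch of how such a helper could be built. The function name and the specific flag choices (`--isolated`, `--with pytest`) are my guesses at one way to do it, not the author's actual script:

```shell
# uv-test: a sketch of a helper that runs pytest against the current project
# under a chosen Python version, in a throwaway uv-managed environment.
uv_test() {
    local version="3.11"            # default Python version; override with -p
    local opt OPTIND=1
    while getopts "p:" opt; do
        case "$opt" in
            p) version="$OPTARG" ;;
            *) echo "usage: uv-test [-p X.Y] [pytest args...]" >&2; return 1 ;;
        esac
    done
    shift $((OPTIND - 1))
    # --isolated ignores any existing .venv; --with pytest layers pytest on
    # top of the project's own dependencies for the duration of the run.
    uv run --python "$version" --isolated --with pytest pytest "$@"
}
```

Saved as `uv-test` on your PATH (with a `uv_test "$@"` call at the bottom), this gives the `uv-test -p 3.11` invocation described above.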
1. If you edit any dependency, you resolve the environment from scratch. There is no way to update just one dependency.
2. Conda "lock" files are just the hashes of all the packages you happened to get, and that means they're non-portable. If you move from x86 to ARM, or Mac to Linux, or CPU to GPU, you have to throw everything out and resolve again.
Point (2) has an additional hidden cost: unless you go massively out of your way, all your platforms can end up on different versions. That's because solving every environment is a manual process and it's unlikely you're taking the time to run through 6+ different options all at once. So if different users solve the environments on different days from the same human-readable environment file, there's no reason to expect them to be in sync. They'll slowly diverge over time and you'll start to see breakage.
P.S. if you do want a "uv for Conda packages", see Pixi [1], which has a lot of the benefits of uv (e.g., lock files) but works out of the box with Conda's package ecosystem.
When I first started using uv, I did not know what language it was written in; it was a good tool which worked far better than its predecessors (and I used pdm/pipenv/pyenv/etc. pretty heavily and in non-basic ways). I still don’t particularly care if it’s written in Rust or Brainfuck, it works well. Rust is just a way to get to “don’t bootstrap Python environments in Python or shell”.
> I’ve tried uv a couple places where it’s been forced on me, and it didn’t work for whatever reason.
I’m curious what issues you encountered. Were these bugs/failures of uv, issues using it in a specific environment, or workflow patterns that it didn’t support? Or something else entirely?
The speed usually doesn't matter, but one time I did have to use it to auto-figure out compatible deps in a preexisting project, because the pip equivalent with backtracking was taking forever with the CPU pegged at 100%.
> could care less
I think “couldn’t care less” works better.
Both of them manage venvs, but where the venv goes (by default) makes a difference, imo. Conda defaults to a user-level directory, e.g. ~/.conda/envs/my-venv. uv prefers a .venv dir in the project's folder. It's small, but it means per-project venvs are slightly more ergonomic with uv.

Whereas with conda, because the venvs are shared under the homedir, it's easy to get lazy once you have a working venv and reuse that good working venv across multiple programs, and then it breaks when one program needs its dependencies updated and now it's broken for all of them. Naturally that would never happen to a skilled conda operator, so I'll just say per-project uv venv creation and recreation flows that tiny bit smoother, because I can just run "rm -rf .venv" and not worry about breaking other things.

One annoyance I have with uv is that it really wants to use the latest version of Python it knows about, and sometimes that version is too new for a program or one of its dependencies, and the program won't run. Running "uv venv --python 3.12" instead of "uv venv" isn't onerous, but it's annoying enough to mention. (pyproject.toml lets projects specify version requirements, but they're not always right.) Arguably that's a Python issue and not uv's, but as users, we just want things to work, dammit. That's always the first thing I look for when things don't work.
As mentioned, with uv the project venv lives in .venv inside the project's directory which lets "uv run program.py" cheat. Who amongst us hasn't forgotten to "source .venv/bin/activate" and been confused when things "suddenly" stopped working. So if you're in the project directory, "uv run" will automatically use the project's .venv dir.
As for it being pushed to promote Rust: I'm sure there's a non-zero number of people for whom that's true, but personally, since Rust makes it harder for me to contribute to uv, the language choice is actually a point against it. Sometimes I wonder how fast it would be if it were written in Python using the same algorithms, but run under PyPy.
Anyway, I wouldn't say any of that's revolutionary. Programs exist to translate between the different project file types (requirements.txt/environment.yml/pyproject.toml) so if you're already comfortable with conda and don't want to use uv, and you're not administering any shared system(s), I'd just stick the command to generate environment.yml from pyproject.toml on a cheat sheet somewhere.
---
One bug I ran into with one of the condas (I forget which) is that it called out to pip under the hood in interactive mode, pip got stuck waiting for user input, and that conda just sat there waiting for input that would never come. Forums were filled with reports by users talking about letting it run for hours or even days. I fixed that, but it soured me on *conda, unfortunately.
Absolute no-brainer.
I've been using Python since version 1.6, mainly for OS scripting, because I'd rather use something with JIT/AOT in the box for application software.
Still, having a little setup script to change environment variables for PYTHONPATH, PATH and a few other things, always did the trick.
Never got to spend hours tracking down problems caused by the multiple solutions that are supposed to solve Python's problems.
Last I tried it, it insisted on downloading a dynamically linked Python and installing that. This obviously doesn't work: you can't distribute dynamically linked binaries for Linux and expect them to work on any distribution (I keep seeing this pattern and I guess it's because this typically works on macOS?).
Moreover my distribution already has a package manager which can install Python. I get that some absolute niche cases might need this functionality, but that should most definitely be a separate tool. The problem isn't just that the functionality is in the same binary, but also that it can get triggered when you're using another of its functionalities.
I wish this had been made into actual separate tools, where the useful ones can be adopted and the others ignored. And, most important, where the ecosystem can iterate on a single tool. Having "one tool that does 5 things" makes it really hard to iterate on a new tool that does only one of those things in a better way.
It's pretty disappointing to see the Python ecosystem move in this direction.
What caused Python to go through these issues? Is there some fundamental design flaw?
More recent ecosystems like Node.js, Rust, and Go all got to create their packaging tooling learning from the experiences of Perl and Python before them.
There is one part of Python that I consider a design flaw when it comes to packaging: the sys.modules global dictionary means it's not at all easy in Python to install two versions of the same package at the same time. This makes it really tricky if you have dependency A and dependency B both of which themselves require different versions of dependency C.
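The sys.modules point is visible in a few lines. This is a toy illustration using a synthetic module, not a real installed package:

```python
# sys.modules is a process-global cache keyed by module *name*: once a module
# object is registered under a name, every later import of that name returns
# the same object. That's why two versions of "dependency C" can't coexist.
import sys
import types

dep_c = types.ModuleType("dep_c")
dep_c.__version__ = "1.0"
sys.modules["dep_c"] = dep_c      # roughly what the import machinery does

import dep_c as seen_by_b         # a later import: answered from the cache,
                                  # no second copy is ever loaded

assert seen_by_b is dep_c
assert seen_by_b.__version__ == "1.0"
```

If dependency A registered C 1.0 first, dependency B asking for C 2.0 by the same name gets the 1.0 object back; there is no per-importer namespace to hide a second copy in.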
All the languages of today gain all their improvements from:
1. Nothing should be global, but if it is it's only a cache (and caches are safe to delete since they're only used as a performance optimization)
2. You have to have extremely explicit artifact versioning, which means everything needs checksums, which means mostly reproducible builds
3. The "blessed way" is to distribute the source (or a mostly-source dist) and compile things in; the happy path is not distributing pre-computed binaries
Now, everything I just said above is also wrong in many aspects or there's support for breaking any and all of the rules I just outlined, but in general, everything's built to adhere to those 3 rules nowadays. And what's crazy is that for many decades, those three rules above were considered absolutely impossible, or anti-patterns, or annoying, or a waste, etc (not without reason, but still we couldn't do it). That's what made package managers and package management so awful. That's why it was even possible to break things with `sudo pip install` vs `apt install`.
Now that we've abandoned the old ways in e.g. JS/Rust/Go and adopted the three rules, all kinds of delightful side effects fall out. Tools now which re-build a full dependency tree on-disk in the project directory are the norm (it's done automatically! No annoying bits! No special flags! No manual venv!). Getting serious about checksums for artifacts means we can do proper versioning, which means we can do aggressive caching of dependencies across different projects safely, which means we don't have to _actually_ have 20 copies of every dependency, one for each repo. It all comes from the slow distributed Gentoo/FreeBSD-ification of everything and it's great!
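Rule 2's payoff (safe cross-project caching) fits in a few lines. The cache layout here is made up for illustration; only the idea of content-addressing is real:

```python
# A toy content-addressed cache: artifacts are keyed by their checksum, so
# identical dependencies are stored once and shared safely across projects.
import hashlib

cache: dict[str, bytes] = {}

def store(artifact: bytes) -> str:
    """Store an artifact under its digest; identical bytes dedupe for free."""
    digest = hashlib.sha256(artifact).hexdigest()
    cache[digest] = artifact
    return digest

# Two projects "download" the same wheel: only one copy lands in the cache,
# and the digest doubles as a lockfile-grade identity check.
key_a = store(b"fake wheel bytes, project A")
key_b = store(b"fake wheel bytes, project A")
assert key_a == key_b
assert len(cache) == 1
```

Because the key is derived from the bytes themselves, a cache hit is provably the same artifact, which is what makes aggressive sharing safe.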
But it solves the problem that if A and B both depend on C, the user can pass an object from A to B that was created by C without worrying about it breaking.
In less abstract terms, let's say numpy one day changed its internal representation of an array, so if one version of numpy read an array created by a different version it would crash, or worse, read it but misinterpret it. Now if I have one data science library that produces numpy arrays and another visualization library that takes numpy arrays, I can be confident that only one version of numpy is installed and the visualization library isn't going to misinterpret the data from the data science library because it is using a different version of numpy.
This stability of installed versions has allowed entire ecosystems to build around core dependencies in a way that would be tricky otherwise. I would therefore not consider it a design flaw.
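A toy version of that failure mode, with stand-in classes rather than real numpy:

```python
# Two module objects stand in for two installed copies of "numpy". An object
# produced by one copy is a foreign object to the other: same attribute name,
# different class, so isinstance checks (and internal layout) don't match.
import types

class _ArrayV1:  # stand-in for version 1's array type
    pass

class _ArrayV2:  # stand-in for version 2's array type
    pass

numpy_v1 = types.ModuleType("numpy_v1")
numpy_v1.ndarray = _ArrayV1
numpy_v2 = types.ModuleType("numpy_v2")
numpy_v2.ndarray = _ArrayV2

data = numpy_v1.ndarray()                      # made by the data science lib
assert isinstance(data, numpy_v1.ndarray)      # fine within its own version
assert not isinstance(data, numpy_v2.ndarray)  # the other copy rejects it
```

With one shared install, both libraries see the same `ndarray` class and this mismatch can't happen.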
All of this happened alongside the rise of GitHub and free CI builders, it becoming trivial to depend on lots of other packages of unknown provenance, and stdlib modules being completely sidelined by stuff like requests.
It’s really only in the last ten years or so that there’s been the clarity of what is a build backend vs frontend, what a lock file is and how workspace management fits into the whole picture. Distutils and setuptools are in there too.
Basically, Python’s packaging has been a mess for a long time, but uv getting almost everything right all of a sudden isn’t an accident; it’s an abrupt gelling of ideas that have been in progress for two decades.
Please don't use this. You need to be careful about how you place any secondary installation of Python on Ubuntu. Meanwhile, it's easy to build from source on Ubuntu and you can easily control its destination this way (by setting a prefix when you ./configure, and using make altinstall) and keep it out of Apt's way.
> and venvs, plus the ongoing awkwardness about whether pip should be writing stuff into /usr/local or ~/.local or something else.
There is not really anything like this. You just use venvs now, which should have already been the rule since 3.3. If you need to put the package in the system environment, use an Apt package for that. If there isn't an Apt package for what you want, it shouldn't live in the system environment and also shouldn't live in your "user" site-packages — because that can still cause problems for system tools written in Python, including Apt.
You only need to think about venvs as the destination, and venvs are easy to understand (and are also fundamental to how uv works). Start with https://chriswarrick.com/blog/2018/09/04/python-virtual-envi... .
> It’s really only in the last ten years or so that there’s been the clarity of what is a build backend vs frontend
Well no; it's in that time that the idea of separating a backend and frontend emerged. Before that, it was assumed that Setuptools could just do everything. But it really couldn't, and it also led to people distributing source packages for pure-Python projects, resulting in installation doing a ton of ultimately useless work. And now that Setuptools is supposed to be focused on providing a build backend, it's mostly dead code in that workflow, but they still can't get rid of it for backwards compatibility reasons.
(Incidentally, uv's provided backend only supports pure Python — they're currently recommending heavyweight tools like maturin and scikit-build-core if you need to compile something. Although in principle you can use Setuptools if you want.)
2. There is tons of code in the Python ecosystem not written in Python. One of the most popular packages, NumPy, depends on dozens of megabytes of statically compiled C and Fortran code.
3. Age again; things were designed in an era before the modern conception of a "software ecosystem", so there was nobody imagining that one day you'd be automatically fetching all the transitive dependencies and trying to build them locally, perhaps using build systems that you'd also fetch automatically.
4. GvR didn't seem to appreciate the problem fully in the early 2010s, which is where Conda came from.
5. Age again. Old designs overlooked some security issues and bootstrapping issues (this ties into all the previous points); in particular, it was (and still is) accepted that because you can include code in any language and all sorts of weird build processes, the "build the package locally" machinery needs to run arbitrary code. But that same system was then considered acceptable for pure-Python packages for many years, and the arbitrary code was even used to define metadata. And in that code, you were expected to be able to use some functionality provided by a build system written in Python, e.g. in order to locate and operate a compiler. Which then caused bootstrapping problems, because you couldn't assume that your users had a compatible version of the main build system (Setuptools) installed, and it had to be installed in the same environment as the target for package installation. So you also didn't get build isolation, etc. It was a giant mess.
5a. So they invented a system (using pyproject.toml) that would address all those problems, and also allow for competition from other build back-ends. But the other build back-end authors mostly wanted to make all-in-one tools (like Poetry, and now, er, uv); and meanwhile it was important to keep compatibility, so a bunch of defaults were chosen that enabled legacy behaviour — and ended up giving old packages little to no reason to fix anything. Oh, and also they released the specification for the "choose the build back-end system" and "here's how installers and build back-ends communicate" years before the specification for "human-friendly input for the package metadata system".
Funny thing is that decision was for modularity, but uv didn't even reuse pip.
To be fair, that's justified by pip's overall lack of good design. Which in turn is justified by its long, organic development (I'm not trying to slight the maintainers here).
But I'm making modular pieces that I hope will showcase the original idea properly. Starting with an installer, PAPER, and build backend, bbbb. These work together with `build` and `twine` (already provided by PyPA) to do the important core tasks of packaging and distribution. I'm not trying to make a "project manager", but I do plan to support PEP 751 lockfiles.
Given that, plus the breadth and complexity of its ecosystem, it makes sense that its tooling is also complex.
easy_install never even made it to 1.0
Still, not bad for a bunch of mostly unpaid volunteers.
uv is a step in the right direction, but legacy projects without a Dockerfile can be tricky to get started.
The ability to get a random GitHub project working without messing with the system is finally making Python not scary to use.
mise use -g go@1.24
mise use -g java@latest
mise use -g github:BurntSushi/ripgrep
It gives you cross-platform binary packages, quickly (also written in Rust).
I rarely need a different version of Python; when I do, I either let the IDE take care of it or just use pyenv.
I know there's the argument of being fast with uv, but most of the time, the actual downloading is the slowest part.
I'm not sure how big a project would have to be before pip feels slow to me.
Currently, I have a project with around 50 direct dependencies and everything is installed in less than a minute with a fresh venv and without the pip cache.
Also, if I ever, ever needed lock file stuff, I use pipx. Never needed the hashes of the packages the way it's done in package-lock.json.
Maybe, I'm just not the target audience of uv.
Even if you only change your commands to 'uv venv ...' and 'uv pip install ...' and keep the rest of your workflow, you'll get
1. Much faster installs.
2. The option to specify the python version in the venv creation instead of having to manage multiple Python versions in some other way.
No pyproject.toml, no new commands to learn. It still seems like a win to me.
1. Write code that crosses a certain complexity threshold. Let's say you also need compiled wheels for a performance-critical section written in Rust, plus some non-public dependencies on a company-internal git server.
2. Try deploying said code on a fleet of servers whose exact operating system versions (and Python versions!) are totally out of your control. Bonus points for when your users need to install it themselves.
3. Wait for the people to contact you
4. Now do monthly updates on their servers while updating dependencies for your python program
If that was never your situation, congrats on your luck, but it just means you weren't in a situation where the strengths of uv would play out. I had to wrestle with this for years.
This is where uv shines. Install uv, run with uv. Everything else just works: getting the correct Python binary, downloading the correct wheel, pulling dependencies from the non-public git repo (provided access has been given), ensuring the updates go fine, etc.
Client side, we don't get the privilege of deploying code: we need to build installers, which means again we have complete control over the environment because we package Python and all associated dependencies.
I'm sure there are marginal benefits to uv even with the above scenarios (better dependency management for example), but it seems that there's a middle ground here which I have largely avoided which is where uv really shines.
Where things get annoying is when I push to GitHub and Tox runs through GitHub Actions. I've set up parallel runs for each Python version, but the "Prepare Tox" step (which is where Python packages are downloaded & installed) can take up to 3 minutes, where the "Run Tox" step (which is where pytest runs) takes 1½ minutes.
GitHub Actions has a much better network connection than me, but the free worker VMs are much slower. That is where I would look at making a change, continuing to use pip locally but using uv in GitHub Actions.
If your project requires creating an env and switching to it and then running, it’s a bad program and you should feel bad.
Quite frankly the fact that Python requires explaining and understanding a virtual environment is an embarrassing failure.
uv run foo.py
I never ever want running any python program to ever require more than that. And it better work first time 100%. No missing dependencies errors are ever permitted.
Also, Conda can fucking die in a fire. I will never ever ever install conda or mini-conda onto my system ever again. Keep those abominations away.
I noticed the comment from andy99 got several downvotes (became grey) and mine here also immediately got some.
I had decided to do something via a one-off Python script. I wanted to use some Python packages for the script (like `progressbar2`). I decided to use Inline Script Metadata[0], so I could include the package dependencies at the top of the script.
I'm not using pipenv or poetry right now, and decided to give uv a try for this. So I did a `sudo port install uv`, followed by a `uv run myscript.py --arguments`. It worked fine, making & managing a venv somewhere. As I developed the script, adding & changing dependencies, the `uv run …` installed things as needed.
After everything was done, cleanup was via `uv cache clean`.
Will I immediately go and replace everything with uv? No. As I mentioned in another post, I'll probably next look at using uv in my CI runs. But I don't feel any need to rush.
[0]: https://packaging.python.org/en/latest/specifications/inline...
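For anyone who hasn't seen it, the metadata block from [0] sits in comments at the top of the script. The dependency name is from the comment above; the `requires-python` bound here is purely illustrative:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "progressbar2",
# ]
# ///
```

`uv run myscript.py` parses this block and provisions a matching environment before running the script, which is what makes single-file scripts with dependencies practical.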
With uv it just works, and in a fraction of the time. Where before, updates meant mentally preparing for a thing that should take 5 seconds in the best case and 15 minutes in the worst to occupy my whole day, it has now become very predictable.
I don't care what it is written in. It works. If you think people love it because it was written in some language it just means you never had a job where what uv brings was really needed and thus you can't really judge its usefulness.
I couldn’t care less that it’s written in rust. It could be conjured from malbolge for all I care. It works as advertised, whatever it’s written in.
While I understand that some have acclimated well to the prior situation and see no need to change their methods, is there really no self-awareness that perhaps having one fast tool instead of many tools may be objectively better?
`uv install` = `uv sync`
`uv install rich` = `uv add rich`
Also, on a new machine, I could never remember how to install the latest version of Python without fiddling for a while. uv solves the problem of both installation and distribution. So executing `uv run script.py` is kind of delightful now.
Before that, I wouldn't want to be too dependent on it.
The biggest wins are speed and a dependable lock file. Dependencies get installed ~10x faster than with pip, at least on my machine.
Both of my Docker Compose starter app examples for https://github.com/nickjj/docker-flask-example and https://github.com/nickjj/docker-django-example use uv.
I also wrote about making the switch here: https://nickjanetakis.com/blog/switching-pip-to-uv-in-a-dock...
A virtual environment, minimally, is a folder hierarchy and a pyvenv.cfg file with a few lines of plain text. (Generally they also contain a few dozen kilobytes of activation scripts that aren't really necessary here.) If you're willing to incur the overhead of using a container image in the first place, plus the ~35 megabyte compiled uv executable, what does a venv matter?
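That minimalism is easy to verify with the stdlib `venv` module, which builds the same skeleton uv's .venv directories have:

```python
# A venv really is just a folder layout plus a small pyvenv.cfg text file
# pointing back at the base interpreter.
import tempfile
import venv
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "demo-venv"
    venv.create(target, with_pip=False)   # skip pip: not needed for the demo
    cfg_text = (target / "pyvenv.cfg").read_text()

# A few lines of plain text, including a "home" key locating the base Python.
assert "home" in cfg_text
```

Everything else in the directory is convenience (bin/ symlinks, activation scripts); the interpreter only needs the layout and that config file.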
In my Dockerfiles I use `uv sync` to install deps vs `pip install -r requirements.txt`.
And then set my command to `uv run my_command.py` vs calling Python directly.
Could you elaborate?
Source? That's an option, but it's not even explicitly mentioned in the related documentation [1].
And lack of non-local venv support [2].
What's the problem with that?
You just make your script's entry point be something like this:
uv venv --clear
uv sync
uv run main.py

ENV UV_SYSTEM_PYTHON=1