Ever try conda, though? I've had moderate success with pipenv, but tbh I don't love it: it hides too much of what's going on when a package install fails.
(And by the time that happens, you might have a real mess on your hands: the dev container's Dockerfile has quite possibly grown into an undocumented spaghetti tangle of band-aids, with every dev on the team tweaking things in whatever way seemed to make the most sense at the time, without much regard for the end-to-end cohesiveness of the whole.)
The standard advice in the Python community is "never trust the system Python", but tools like pyenv that we have for protecting ourselves from the operating system aren't always straightforward to get working sensibly inside a container. It seems like it should be easy, but I've seen people get it wrong far more often than I've seen them get it right.
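For what it's worth, inside a container you often don't need pyenv at all: the base image tag pins the interpreter, and a venv keeps your deps away from whatever the image's "system" Python has. A minimal sketch (the image tag, paths, and requirements file are just examples, not a prescription):

```dockerfile
# Pin the interpreter via the base image instead of pyenv
FROM python:3.12-slim

# Isolate dependencies in a venv, then put it first on PATH so that
# "python" and "pip" always mean the venv's copies
ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Install deps before copying the source so this layer caches well
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . /app
WORKDIR /app
```

The venv may look redundant inside a container, but it means the advice "never trust the system Python" holds even when an apt package later drags in its own Python bits.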
A big part of the problem is that the Python community has developed an extremely severe case of TMTOWTDI when it comes to dependency management, packaging and deployment. It's led to a situation where, if you're just googling around for problem solutions in an ad-hoc manner, you're likely to end up with a horrible chimera of different philosophies of how to do Python devops, and they won't necessarily mesh well together.
Do you have a suggested solution? I'm a solo dev for now but will be adding more folks in the foreseeable future. Stuck with Python for some things (notebooks, model development).
My usual advice is to bite the bullet and invest the time it takes to understand how Python package management and resolution really work under the hood, and how the various devops approaches built on top of them interact, so that you can make informed decisions and truly own your own stack.
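A good first step in that direction is learning to see exactly which interpreter and site-packages directory you're actually using at any moment, independent of whichever tool set them up. None of this is tool-specific; it's all in the stdlib:

```python
import sys
import sysconfig

# Where packages actually get installed for this interpreter
site_packages = sysconfig.get_paths()["purelib"]

# Inside a venv, sys.prefix points at the venv while sys.base_prefix
# still points at the interpreter the venv was created from; outside
# a venv the two are equal.
in_venv = sys.prefix != sys.base_prefix

print("interpreter:   ", sys.executable)
print("site-packages: ", site_packages)
print("inside a venv: ", in_venv)
```

When a mystery dependency shows up, running this in the failing context usually tells you in seconds which of your several Pythons is actually in play.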
Docker and Docker Compose do make it incredibly easy to start everything that's required for local development and testing. Your service A needs B and C? Grab the images for B and C and run them all on your machine. The only limitation is the amount of RAM you have available locally.
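The A-needs-B-and-C case is a few lines of Compose; here's a sketch (the service and image names are placeholders):

```yaml
# docker-compose.yml: run service A from local source alongside its
# two dependencies, pulled as prebuilt images
services:
  a:
    build: .
    ports: ["8000:8000"]
    depends_on: [b, c]
  b:
    image: example/service-b:latest   # placeholder image name
  c:
    image: example/service-c:latest   # placeholder image name
```

One `docker compose up` and the whole trio is running locally.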
And if you put a bit of thought into your Dockerfiles (i.e., order the layers to take advantage of caching; use icecc+ccache mounts for C++ projects to distribute compilation and cache results; mount the apt or other package-manager cache so downloaded packages get reused), local image rebuilds can be quite fast. Those are the little tricks that make your life with Docker less miserable.
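A rough sketch of those cache tricks for a Python image (BuildKit `RUN --mount` syntax; the base image and paths are just examples, and the icecc/ccache setup for C++ works the same way with its own cache directory):

```dockerfile
FROM python:3.12-slim

# Official Debian-based images delete downloaded .debs by default;
# removing this hook lets the apt cache mount below actually persist
RUN rm -f /etc/apt/apt.conf.d/docker-clean

# apt's download cache survives across builds via the cache mount
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends build-essential

# This layer only rebuilds when requirements.txt changes, and pip's
# wheel cache persists even when it does
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

# Source changes land here, without invalidating the expensive layers above
COPY . /app
WORKDIR /app
```

The ordering is the real trick: least-frequently-changing layers first, so day-to-day edits only rebuild the final `COPY`.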