The language itself is (more than) complex enough already - I hope this focus on implementation quality continues.
" Python now uses a new interactive shell by default, based on code from the PyPy project. When the user starts the REPL from an interactive terminal, the following new features are now supported:
Multiline editing with history preservation.
Direct support for REPL-specific commands like help, exit, and quit, without the need to call them as functions.
Prompts and tracebacks with color enabled by default.
Interactive help browsing using F1 with a separate command history.
History browsing using F2 that skips output as well as the >>> and ... prompts.
“Paste mode” with F3 that makes pasting larger blocks of code easier (press F3 again to return to the regular prompt). "
Sounds cool. Definitely need the history feature, for the few times I can't run IPython.
https://www.bitecode.dev/p/happiness-is-a-good-pythonstartup or search for a gist
> The new REPL will not be implementing inputrc support, and consequently there won't be a vi editing mode.
https://github.com/python/cpython/issues/118840#issuecomment...
# bisect.py
...
# main.py
import random
fails with:
Traceback (most recent call last):
File ".../foo.py", line 1, in <module>
import random
File "/usr/lib/python3.12/random.py", line 62, in <module>
from bisect import bisect as _bisect
ImportError: cannot import name 'bisect' from 'bisect'
This is very frustrating because the Python stdlib is still very large, so many meaningful names are effectively reserved. People are aware of names like "sys" or "json", but did you know that "wave", "cmd", and "grp" are also standard modules?

Worse yet, these errors are not consistent. You might be inadvertently reusing a stdlib module name without even realizing it, just because none of the stdlib (or third-party) modules that you import have it in their import graphs. Then you move to a new version of Python or of some of your dependencies, and suddenly it breaks because they have added an import somewhere.
But even if you are careful about checking every single module name against the list of standard modules, a new Python version can still break you by introducing a new stdlib module that happens to clash with one of yours. For example, Python 3.9 added "graphlib", which is a fairly generic name.
It was ultimately rejected due to issues with how it would need to change the dict object.
IMO all the rejection reasons could be overcome with a more focused approach and implementation, but I don't know if there is anyone wishing to give it another go.
There may be a person in the world panicking that they need to be on Python 3.13 and also need to parse Amiga IFF files, but it seems unlikely.
Looking forward to JIT maturing from now onwards.
As do I.
I suppose only time will tell if that effort succeeds. But the intent is promising.
This is a win for DX, but it is not yet widely used. For example, "TypeGuard[" appears in only 8k Python files on GitHub.[2]
[0] -- https://docs.python.org/3.13/library/typing.html#typing.Type...
[1] -- https://docs.python.org/3.13/library/typing.html#typing.Type...
[2] -- https://github.com/search?q=%22TypeGuard%5B%22+path%3A*.py&t...
It's day and night compared to typeguard.
Also the dev is... Completely out of this world
Pylance with pyright[0] while developing (with strict mode) and mypy[1] with pre-commit and CI.
Previously, I had to rely on pyright in pre-commit and CI for a while because mypy didn’t support PEP 695 until its 1.11 release in July.
https://github.com/python/cpython/issues/84904
(Don't let the association with asyncio throw you: that was merely the code in which it was first found; later code transcends it.)
Would be nice to see performance improvements for libraries like FastAPI, NetworkX etc in future.
And will Python 3.14 be named pi-thon? I will see myself out.
Specifically, pendulum hasn't released a wheel yet for 3.13 so it tried to build from source but it uses Rust and the Python docker image obviously doesn't have Rust installed.
I really like using Python, but I can’t keep using it when they just keep breaking things like this. Most people don’t read all the release notes.
While it’s not perfect, I know a few other people who do “set up lots of data structures, including in libraries, then make use of the fact multiprocessing uses fork to duplicate them”. While fork always has sharp edges, it’s also long been clearly documented that that’s the behavior on Linux.
both fork() and spawn() are just wrappers around clone() in most libcs anyway.
spawn() was introduced to POSIX in the last century to address some of the problems with fork(), especially related to multi-threading, so I am curious how your code is so dependent on fork, yet multi-threaded.
I use fork in Python multiprocessing because many objects can't be "pickled" (the standard way of copying data structures between processes), so my code looks like:
* Set up big complicated data-structures.
* Use fork to make a bunch of copies of my running program, and all my datastructures
* Use multiprocessing to make all those python programs talk to each other and share work, thereby using all my CPU cores.
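The steps above can be sketched roughly like this (a toy stand-in: the dict and worker function are hypothetical, and it is POSIX-only since it forces the "fork" start method):

```python
import multiprocessing as mp

# Stand-in for the "big complicated data structures": built once in the
# parent, then inherited by every forked worker with no pickling at all.
BIG = {i: i * i for i in range(100_000)}

def worker(key: int) -> int:
    # A forked child sees BIG exactly as it was at fork time.
    return BIG[key]

# Request "fork" explicitly, so the code keeps working when the
# default start method changes. (With "spawn" this script would also
# need an `if __name__ == "__main__":` guard and a picklable BIG.)
ctx = mp.get_context("fork")
with ctx.Pool(processes=4) as pool:
    results = pool.map(worker, [10, 20, 30])
print(results)  # [100, 400, 900]
```

With "fork", only the arguments and return values cross process boundaries; the inherited structures never have to be picklable, which is the whole point of the pattern described above.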
Still, this one doesn’t seem too bad. Add method=FORK now and forget about it.
So much perl clutching. Just curious, since I guess you've made up your mind, what's your plan to migrate away? Or are you hoping maintainers see your comment and reconsider the road-map?
Rust isn't perfect (no language is), but they do seem to try much harder not to break backwards compatibility.
> Python 3.13 was released on October 7, 2024
I.e. we currently run 3.11 and will now schedule work to upgrade to 3.12, which is expected to be more or less trivial for most services.
The rationale is that some of the (direct and transitive) dependencies will take a while to be compatible with the latest release. And waiting roughly a year is both fast enough to not get too much behind, and slow enough to expect that most dependencies have caught up with the latest release.
Python library support: https://pyreadiness.org/3.13/
Which is mostly latest_major - 1, adjusted to production constraints, obviously. And play with latest for fun.
I stopped using latest even for non serious projects, the ecosystem really needs time to catch up.
No, because it varies widely depending on your use case and your motivations.
>Is it usually best to wait for the first patch version before using in production?
This makes it sound like you're primarily worried about a situation where you host an application and you're worried about Python itself breaking. On the one hand, historically Python has been pretty good about this sort of thing. The bugfixes in patches are usually quite minor throughout the life cycle of a minor version (despite how many of them there are these days - a lot of that is just because of how big the standard library is). 3.13 has already been through alpha, beta and multiple RCs - they know what they're doing by now.

The much greater concern is your dependencies - they aren't likely to have tested on pre-release versions of 3.13, and if they have any non-Python components then either you or they will have to rebuild everything and pray for no major hiccups. And, of course, that applies transitively.
On the other hand, unless you're on 3.8 (dropping out of support), you might not have any good reason to update at all yet. The new no-GIL stuff seems a lot more exciting for new development (since anyone for whom the GIL caused a bottleneck before will have already developed an acceptable workaround), and I haven't heard a lot about other performance improvements - certainly they haven't been talked up as much as they were for 3.11 and 3.12. There are a lot of quality-of-implementation improvements this time around, but (at least from what I've paid attention to so far) they seem more oriented towards onboarding newer programmers.
And again, it will be completely different if that isn't your situation. Hobbyists writing new code will have a completely different set of considerations; so will people who primarily maintain mature libraries (for whom "using in production" is someone else's problem); etc.
Old Systems Admins like me have been following this simple rule for decades. It's the easiest way at scale.
Otherwise, there's always the excellent `pyenv` to use, including this person's docker-pyenv project [1]
[0] https://hub.docker.com/layers/library/python/3.13.0rc3-slim-... [1] https://github.com/tzenderman/docker-pyenv?tab=readme-ov-fil...
What I meant is: While I am already inside a container running Debian, can I ...
1: ./myscript.py
2: some_magic_command
3: ./myscript.py
So 1 runs it under 3.11 (which came with Debian) and 2 runs it under 3.13. I don't need to preserve 3.11. some_magic_command can wreak havoc in the container as much as it wants. As soon as I exit it, it will be gone anyhow.
So, in a sense, the question is not related to Docker at all. I just mentioned that I would do it inside a container to emphasize that I don't need to preserve anything.
Am I seeing a cached version while you see 3.13? 'Cause I can't see it on the homepage download link either.