http://www.python.org/dev/peps/pep-0414/
In a nutshell, the Python 2.x syntax for declaring a unicode string (the u'' prefix) is once again valid in Python 3.3, although redundant there. From the PEP:
In many cases, Python 2 offered two ways of doing things for historical reasons. For example, inequality could be tested with both != and <> and integer literals could be specified with an optional L suffix. Such redundancies have been eliminated in Python 3, which reduces the overall size of the language and improves consistency across developers.
In the original Python 3 design (up to and including Python 3.2), the explicit prefix syntax for unicode literals was deemed to fall into this category, as it is completely unnecessary in Python 3. However, the difference between those other cases and unicode literals is that the unicode literal prefix is not redundant in Python 2 code: it is a programmatically significant distinction that needs to be preserved in some fashion to avoid losing information.
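Concretely, PEP 414 means source like this parses under both Python 2.x and Python 3.3+, which removes a big mechanical hurdle for single-codebase porting (the asserts below describe Python 3 semantics, where the prefix is a no-op):

```python
# PEP 414: the u prefix parses again in Python 3.3+.
# In Python 3 it is a no-op: u'...' and '...' are the same str type.
s = u"naïve"
assert s == "naïve"
assert type(s) is str
```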
This version of Python should see more uptake by 2.x developers, as it is now easier to port code to it.
Many of the features detailed in the release list are more helpful in general. 'yield from' is actually really good if you are using generator-based coroutines, the wide/narrow build thing addresses a long-time pain point, it will be great if namespace packages are actually fixed by now, and adoption of virtualenv into the core is a big deal!
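A minimal sketch of what `yield from` buys you (the generator names here are just illustrative):

```python
def inner():
    yield 1
    yield 2
    return "done"  # the return value travels up to the delegating generator

def outer():
    # Python 3.3+: delegates iteration to inner() and captures its return value,
    # instead of looping manually and catching StopIteration yourself.
    result = yield from inner()
    yield result

print(list(outer()))  # [1, 2, 'done']
```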
The #1 Python web framework, Django, is a bit ahead of the curve but still hasn't released a major version that supports 3. Django's huge ecosystem of add-ons is not going to catch up for some time after that. And users are not going to develop green-field Django apps on Python 3 until some time after that. So the timeline for serious use of Python 3 with Django should still be counted in years, which means that the majority of Python web development will not be done with Python 3 for a while. (If Flask and Werkzeug had been out ahead of this issue, they could have eaten up even more of Django's market share, but it looks like they will actually be behind in adopting Python 3.) I don't think any of the Zope frameworks or ZODB, etc. support 3. And I think between those three you probably have most of Python web development accounted for.
Pyramid, Bottle, Tornado and CherryPy DO support Python 3 but aren't nearly as widely used. App Engine needs a new runtime and that could be years. I don't think that Web2py or web.py have released versions which support Python 3, though both probably have something in the works.
Since web development is so heavily framework-driven, the majority of web development on Python is not going to move on to 3 until Django and Flask ecosystems move over, or everyone switches to Pyramid.
And due to the requirement for backward compatibility, Python 3 features will not be widely used for a while: if you maintain a widely used extension, you are not going to lock out people on 2.7, which means sticking to the common denominator of features and supplying Python 2 implementations of anything you use from new Python 3 libraries.
I would contrast this with the very large number of general Python libraries which already support Python 3. So I think the Python web development world is well behind the rest of the Python world.
It's a mess but I think that Python 3's improvements are worth not straitjacketing the future of the language into the decisions made in 2.
You forget the animation group. And the hardware/embedded group (Raspberry-Pi, anyone?). And the C-wrapper-writing group. And the sysadmin group. And and and...
The Python ecosystem is very large and diverse. That's part of its strength, and part of its weakness as well (it's very difficult to "herd" all these people, as this 3k migration has shown), but don't make the error of reducing it to the most vocal sectors -- they're not necessarily the most significant ones.
Last-Modified: 2011-01-16 09:57:25 +0000 (Sun, 16 Jan 2011) ... Status: Final ... Created: 26-Sep-2010
Until this was made final (January 2011), web frameworks that use the WSGI spec had no Python 3 path short of ditching the WSGI spec.
Now that it is final, there is a path to Python 3 for web dev. CherryPy got there first, I think. Pyramid/WebOb got there, and others as well, and I'm sure Django will get there soon enough.
I'd say web-dev adoption of python 3 has been pretty swift.
Ah .. NO! I can assure you that we are still stuck with Python 2.x (2.7 to be more precise). I am not too sure the others would have moved either.
I gave Django a couple of truly honest tries, but I just couldn't bring myself to tolerate it. Pyramid is by far the best.
Assuming you do want to wait on Python 3 adoption, your timing should depend on the framework you want to use, because effectively each one has its own community and ecosystem, and their adoption is at completely different rates.
If Pyramid looks good to you, for example, it already is on board with Python 3. Bottle is on board with Python 3. If you want to use Django, which is what most people will want to do, you should just wait on Django to release a Python 3 version, and Django should be usable on Python 3 within the year. If you want to use Flask (considered the closest analogue of Ruby's Sinatra) then it could take a while.
* The feature I like most is definitely generator delegation. It significantly improves more extensive uses of generators and iterators, and makes generators-as-coroutines a much more interesting proposition (before delegation, calling another "coroutine" was rather painful; now you essentially just tack a `yield from` in front, as you'd tack an `await` in C# 5.0)
* The flexible string representation finally fixes narrow builds' issues with astral planes, which is becoming rather important as the astral planes include e.g. emoji, and it significantly reduces the possibility of bugs when working with them (as there's no more behavioral difference between "narrow" and "wide" builds)
* We'll have to see how they're used, but namespace packages could significantly clean up... well, namespaces (multiple separate libraries living in the same namespace, without having to resort to PYTHONPATH hacks or setuptools tomfoolery)
* A built-in, clean implementation of contexts/scopes (collections.ChainMap) I can already see plenty of use for. Same for signatures, there's high hijinks potential in that one.
* The rest really is about a better experience all around: reduced memory, "unicode literals" (for Python 2 compat), ElementTree fixups, ...
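A couple of these are easy to demonstrate in a few lines (the dictionaries here are just made-up examples):

```python
from collections import ChainMap

# ChainMap: layered lookup without copying -- handy for nested scopes/configs.
defaults = {"color": "blue", "debug": False}
overrides = {"debug": True}
settings = ChainMap(overrides, defaults)
assert settings["debug"] is True    # found in the first map
assert settings["color"] == "blue"  # falls through to defaults

# PEP 393 flexible string representation: an astral-plane character is a
# single code point on every build, with no narrow/wide split.
assert len("\U0001F600") == 1
```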
Having to re-do any part of your code from one release of a language to another became a real deal breaker for me.
For an interpreted language that problem is even worse because you don't know you have a problem until that bit of code gets hit.
At some point you need to burn bridges in order to move forward. Doing this frequently destroys the community. Never doing it also destroys the community, it just takes longer.
Personally, I think Python is managing pretty well. Yes, it's occasionally painful, but it's less painful than stagnation.
That's a fair point, and it applies to a lot more than just programming languages. Operating systems and browsers also come to mind, for example.
On the other hand, if you insist on maintaining strict backward compatibility indefinitely, you have increasing drag on every useful new feature you want to introduce. You also can't remove edge cases that should ideally never have been there, even if they make it easy to introduce a bug.
In programming languages, this is the C++ effect. Building on the familiar foundation of C was a good decision by Stroustrup in the early days, and I'm sure it contributed greatly to C++'s success. On the other hand, today I believe that C++ is holding back large parts of the programming community, by being good enough in its niche that huge numbers of projects stick with it, yet lacking the expressive power, broad standard library functionality, and clean syntax/semantics that we take for granted in numerous modern programming languages.
In general, the goals of stability and progress are always going to be in conflict for any platform-like software. Such software essentially defines a standard for others to program against, and the entire point of standards is to create stability and common ground, but sometimes old standards don't adapt well to incorporate new ideas.
I suspect the best we can ever do is restrict major changes, which in practice means those where the old code cannot be automatically converted to get the same behaviour on the new system, to major releases. Minor changes that can be automatically converted are much less of a problem, as long as the "breaking" version of the platform comes with a simple conversion tool.
To be fair to the Python developers, this is essentially what they've done with the jump from Python 2 to Python 3. There is a tool to deal with converting the trivial changes, and most of the breaking changes were in the initial jump and acknowledged as such.
There probably is a case for making fewer, if any, breaking changes in minor releases. On the other hand, if you're looking at an estimated period of five years to migrate the bulk of the community from one major version to the next, there is probably a fair case for allowing a few smaller but incompatible changes in minor versions as well, as long as their effects are clear and only within a tightly controlled scope so they don't unduly disrupt everyone they aren't there to help.
Other than the language itself being incompatible with previous releases, there is the added burden of having to maintain a whole ecosystem, not unlike many frameworks and their plug-ins.
Many people will write some module or other and will make it available for others to build on, and then an upgrade will break the module. The module creators have since moved on and are no longer supporting their brainchildren.
Fortunately with open source you actually can fix these problems - most of the time - but there is not always time or opportunity to do so.
Backwards compatibility is what made Microsoft a dominant market force; I believe you mess with it at your peril.
edit: oh my, that's a lot of downvotes for answering a question.
The easiest way around this problem is to have tests and run them before you upgrade your production boxes. It's not that hard to do.
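For example, even a tiny suite like this (the function is hypothetical) can be run under both interpreters before flipping the switch, e.g. `python2.7 test_app.py && python3.3 test_app.py`:

```python
# Hypothetical sanity check that runs unchanged on both Python 2 and 3.
def parse_port(value):
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError("port out of range: %r" % value)
    return port

assert parse_port("8080") == 8080
try:
    parse_port("99999")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```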
What are the specific APIs or things they keep continuously breaking? Did you have issues with the transition from 1 to 2, and now with 2 to 3?
Another Python feature that makes upgrades hard is monkeypatching, where one piece of code can inject arbitrary code into another piece of code. In many dynamic languages, this is often used to patch the behavior of core classes like String. I don't know how common this is in Python, but I've definitely seen it before. By erasing the distinction between interface and implementation, this makes it difficult to ever change the implementation of anything.
People say that writing more and more unit tests will solve these problems. But guess what? The more unit tests you have of an API, the more unit tests you have to change when that API changes. Unit tests are good and should be written in any language, but they are hardly a substitute for static typing.
The end result of all of this is that dynamically typed languages usually follow a trajectory where there's an initial burst of enthusiasm about some cool syntax, followed by a lot of code being written, and then a gradual descent into a compatibility tarpit, where nothing can be changed because of fear of breaking working code. Only additions can be made, and the language gradually grows uglier and uglier. Dynamically typed languages approximate Bourne shell more and more as time goes on-- a dozen slightly incompatible implementations, ancient quirks that bite hard on newcomers, and a resignation that this is the "best it gets."
Sometimes there's a burst of irrational hope towards the end of a language's lifetime. Perl 6 and Python 3 are good examples of this. Developers go into their happy place and forget about the big bad compatibility bear that's been chasing them. But it's just a fantasy-- dynamic languages can't escape from the tarpit in the end, and nobody adopts the new thing.
The claim that a language is doomed to be ugly or hard to evolve just because it doesn't use static types doesn't follow.
Besides, Perl 6 is not Perl 5 continued, and never pretended to be. Perl 5 continues to evolve (e.g. Moose) and is not getting uglier. You could argue that it was already ugly when it was created, but it is not growing uglier.
It's really not clear to me how your claims can be backed up. Lack of static typing is a fundamental feature of Python, it's part of the design philosophy. The majority of the language functionality is built in the standard library, where modules and interfaces can be swapped out and deprecated easily. Modules in pypi can have different codebases for different interpreter versions, and programs/projects can declare their dependencies using virtualenv; multiple Python interpreter versions can coexist on the same system. Your claim boils down to saying that static typing allows languages to develop faster and cut down maintenance costs, but I don't think you can conclusively demonstrate either point.
You can't alter core classes (like str) in Python, and patching classes from other modules is considered a really weird, rare, bad practice.
Although possible, it's my understanding that monkeypatching is greatly frowned upon in the Python community. I've been programming Python on and off for a decade and can only think of one time when I've monkeypatched something, and I felt really bad about it. The only library I know of that uses monkeypatching is gevent, and it doesn't do it by default but only if you've explicitly told it to.
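For what it's worth, CPython actively prevents the String-patching style common in Ruby; built-in types refuse new attributes:

```python
# Unlike Ruby's open classes, CPython's built-in types reject new attributes:
try:
    str.shout = lambda self: self.upper() + "!"
except TypeError as exc:
    print("refused:", exc)  # TypeError: built-in/extension types are immutable
```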
Visual Studio is the vendor-suggested way of building C++ and it's free besides; there's not really a good reason not to use it.
More widespread, easier to handle, more accepted among windows developers.
(Windows contributor who did the VS2010 work)
(I did the 2010 changes)
As someone who hated the .ini configuration for logging in python 2.5, this smells a bit.
That in itself should make a few Python gamers happy. There are also some serious motivations for users of older versions, not all of them but more and more. And many other interesting developments that others have highlighted already.
Precision: 9 decimal digits
  float:    result: 3.1415926535897927     time:  0.113188s
  cdecimal: result: 3.14159265             time:  0.158313s
  decimal:  result: 3.14159265             time: 18.671457s

Precision: 19 decimal digits
  float:    result: 3.1415926535897927     time:  0.112874s
  cdecimal: result: 3.141592653589793236   time:  0.348100s
  decimal:  result: 3.141592653589793236   time: 43.241220s

Pi is 3.14159 26535 89793 238....
So I do wonder what rounding they are using; even truncating as I have (the next digit is 4, so it's a good place to do that), you can see the last digit should be at least 8, worst case 7, not 6! Then again, this may be a convention or a result of the method used to calculate Pi.
As for floats, well, for accuracy I'd go with cdecimal right there, though is it really more accurate? I suspect it is the formula used that induces the minute error in the results.
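You can check the rounding question with the decimal module itself: rounding a 21-digit Pi literal to 19 significant digits (default ROUND_HALF_EVEN) gives a last digit of 8, so the ...6 above must come from the series the benchmark used, not from decimal's rounding:

```python
from decimal import Decimal, getcontext

getcontext().prec = 19  # 19 significant digits, as in the benchmark
# Unary plus applies the context, i.e. rounds to the working precision.
print(+Decimal("3.14159265358979323846"))  # 3.141592653589793238
```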
http://en.wikipedia.org/wiki/Pi #21 reference
But even without it I am going to try Python3 in my projects (hello Django 1.5, Tornado, Wheezy.web, jinja2 and a lot of others).
I can't make head or tail of that claim, what's the "bloat since 3.0"?