Release 2.7.12
A. M. Kuchling
November 03, 2016
Python Software Foundation
Email: docs@python.org
and other bug fixes) until at least 2020 (10 years after its initial release, compared to the more typical support
period of 18-24 months).
As the Python 2.7 standard library ages, making effective use of the Python Package Index (either directly or via
a redistributor) becomes more important for Python 2 users. In addition to a wide variety of third party packages
for various tasks, the available packages include backports of new modules and features from the Python 3
standard library that are compatible with Python 2, as well as various tools and libraries that can make it easier
to migrate to Python 3. The Python Packaging User Guide provides guidance on downloading and installing
software from the Python Package Index.
While the preferred approach to enhancing Python 2 is now the publication of new packages on the Python
Package Index, this approach doesn't necessarily work in all cases, especially those related to network security.
In exceptional cases that cannot be handled adequately by publishing new or updated packages on PyPI, the
Python Enhancement Proposal process may be used to make the case for adding new features directly to the
Python 2 standard library. Any such additions, and the maintenance releases where they were added, will be
noted in the New Features Added to Python 2.7 Maintenance Releases section below.
For projects wishing to migrate from Python 2 to Python 3, or for library and framework developers wishing to support
users on both Python 2 and Python 3, there are a variety of tools and guides available to help decide on a suitable
approach and manage some of the technical details involved. The recommended starting point is the Porting Python 2
Code to Python 3 HOWTO guide.
The ordered-dictionary type described in PEP 372: Adding an Ordered Dictionary to collections.
The new "," format specifier described in PEP 378: Format Specifier for Thousands Separator.
The memoryview object.
A small subset of the importlib module, described below.
The repr() of a float x is shorter in many cases: it's now based on the shortest decimal string that's guaranteed
to round back to x. As in previous versions of Python, it's guaranteed that float(repr(x)) recovers x.
Float-to-string and string-to-float conversions are correctly rounded. The round() function is also now correctly rounded.
The PyCapsule type, used to provide a C API for extension modules.
The PyLong_AsLongAndOverflow() C API function.
Other new Python3-mode warnings include:
operator.isCallable() and operator.sequenceIncludes(), which are not supported in 3.x,
now trigger warnings.
The -3 switch now automatically enables the -Qwarn switch that causes warnings about using classic division
with integers and long integers.
>>> od.popitem()
(18, 0)
>>> od.popitem(last=False)
(0, 0)
>>> od.popitem(last=False)
(1, 0)
Comparing two ordered dictionaries checks both the keys and values, and requires that the insertion order was the
same:
>>> od1 = OrderedDict([('first', 1),
...                    ('second', 2),
...                    ('third', 3)])
>>> od2 = OrderedDict([('third', 3),
...                    ('first', 1),
...                    ('second', 2)])
>>> od1 == od2
False
>>> # Move 'third' key to the end
>>> del od2['third']; od2['third'] = 3
>>> od1 == od2
True
Comparing an OrderedDict with a regular dictionary ignores the insertion order and just compares the keys and
values.
How does the OrderedDict work? It maintains a doubly-linked list of keys, appending new keys to the list as they're
inserted. A secondary dictionary maps keys to their corresponding list node, so deletion doesn't have to traverse the
entire linked list and therefore remains O(1).
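This technique can be sketched in pure Python. The sketch below is a simplified illustration, not the actual implementation; MiniOrderedDict and its method names are made up for this example:

```python
class MiniOrderedDict(dict):
    """Toy ordered dict: a dict plus a doubly-linked list of keys."""

    def __init__(self):
        dict.__init__(self)
        # Sentinel node; the list is circular: root <-> first <-> ... <-> last
        self._root = root = [None, None, None]   # [prev, next, key]
        root[0] = root[1] = root
        self._map = {}                           # key -> its list node

    def __setitem__(self, key, value):
        if key not in self:
            # Append a new node just before the sentinel (i.e. at the end).
            root = self._root
            last = root[0]
            node = [last, root, key]
            last[1] = root[0] = self._map[key] = node
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        # Unlink the node in O(1); no traversal of the list is needed.
        prev, nxt, _ = self._map.pop(key)
        prev[1] = nxt
        nxt[0] = prev

    def keys_in_order(self):
        node = self._root[1]
        while node is not self._root:
            yield node[2]
            node = node[1]
```

The secondary _map dictionary is what keeps deletion O(1): without it, removing a key would require scanning the linked list to find its node.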
The standard library now supports use of ordered dictionaries in several modules.
The ConfigParser module uses them by default, meaning that configuration files can now be read, modified,
and then written back in their original order.
The _asdict() method for collections.namedtuple() now returns an ordered dictionary with the
values appearing in the same order as the underlying tuple indices.
The json module's JSONDecoder class constructor was extended with an object_pairs_hook parameter to
allow OrderedDict instances to be built by the decoder. Support was also added for third-party tools like
PyYAML.
See also:
PEP 372 - Adding an ordered dictionary to collections PEP written by Armin Ronacher and Raymond Hettinger;
implemented by Raymond Hettinger.
>>> '{:20,.2f}'.format(18446744073709551616.0)
'18,446,744,073,709,551,616.00'
When formatting an integer, include the comma after the width:
>>> '{:20,d}'.format(18446744073709551616)
'18,446,744,073,709,551,616'
This mechanism is not adaptable at all; commas are always used as the separator and the grouping is always into
three-digit groups. The comma-formatting mechanism isn't as general as the locale module, but it's easier to use.
See also:
PEP 378 - Format Specifier for Thousands Separator PEP written by Raymond Hettinger; implemented by Eric
Smith.
Command-line example.

positional arguments:
  inputs      input filenames (default is stdin)

optional arguments:
  -h, --help  show this help message and exit
  -v          produce verbose output
  -o FILE     direct output to FILE instead of stdout
  -C NUM      display NUM lines of added context
As with optparse, the command-line switches and arguments are returned as an object with attributes named by the
dest parameters:
-> ./python.exe argparse-example.py -v
{'output': None,
'is_verbose': True,
'context': 0,
'inputs': []}
-> ./python.exe argparse-example.py -v -o /tmp/output -C 4 file1 file2
{'output': '/tmp/output',
'is_verbose': True,
'context': 4,
'inputs': ['file1', 'file2']}
argparse has much fancier validation than optparse; you can specify an exact number of arguments as an integer,
0 or more arguments by passing '*', 1 or more by passing '+', or an optional argument with '?'. A top-level
parser can contain sub-parsers to define subcommands that have different sets of switches, as in svn commit, svn
checkout, etc. You can specify an argument's type as FileType, which will automatically open files for you and
understands that '-' means standard input or output.
See also:
argparse documentation The documentation page of the argparse module.
Upgrading optparse code Part of the Python documentation, describing how to convert code that uses optparse.
PEP 389 - argparse - New Command Line Parsing Module PEP written and implemented by Steven Bethard.
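The help text and result dictionaries shown above could come from a parser along these lines. This is a reconstruction for illustration; the original argparse-example.py script is not reproduced here, so the attribute names (is_verbose, output, context, inputs) are taken from the example output:

```python
import argparse

parser = argparse.ArgumentParser(description='Command-line example.')
# Zero or more positional arguments, collected into a list.
parser.add_argument('inputs', nargs='*',
                    help='input filenames (default is stdin)')
parser.add_argument('-v', action='store_true', dest='is_verbose',
                    help='produce verbose output')
parser.add_argument('-o', metavar='FILE', dest='output',
                    help='direct output to FILE instead of stdout')
parser.add_argument('-C', metavar='NUM', dest='context',
                    type=int, default=0,
                    help='display NUM lines of added context')

# Parsing an explicit argument list instead of sys.argv:
args = parser.parse_args(['-v', '-o', '/tmp/output', '-C', '4',
                          'file1', 'file2'])
print(vars(args))
```

Note that type=int converts the -C value automatically, so args.context is already an integer.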
written to a network.log file that will be rotated once the log reaches 1MB.
import logging
import logging.config
configdict = {
    'version': 1,    # Configuration schema in use; must be 1 for now
    'formatters': {
        'standard': {
            'format': ('%(asctime)s %(name)-15s '
                       '%(levelname)-8s %(message)s')}},
    'handlers': {'netlog': {'backupCount': 10,
                            'class': 'logging.handlers.RotatingFileHandler',
                            'filename': '/logs/network.log',
                            'formatter': 'standard',
                            'level': 'INFO',
                            'maxBytes': 1000000},
                 'syslog': {'class': 'logging.handlers.SysLogHandler',
                            'formatter': 'standard',
                            'level': 'ERROR'}},
    # Specify all the subordinate loggers
    'loggers': {
        'network': {
            'handlers': ['netlog']
        }
    },
    # Specify properties of the root logger
    'root': {
        'handlers': ['syslog']
    },
}
# Set up configuration
logging.config.dictConfig(configdict)
# As an example, log two error messages
logger = logging.getLogger('/')
logger.error('Database not found')
netlogger = logging.getLogger('network')
netlogger.error('Connection failed')
Three smaller enhancements to the logging module, all implemented by Vinay Sajip, are:
The SysLogHandler class now supports syslogging over TCP. The constructor has a socktype parameter
giving the type of socket to use, either socket.SOCK_DGRAM for UDP or socket.SOCK_STREAM for
TCP. The default protocol remains UDP.
Logger instances gained a getChild() method that retrieves a descendant logger using a relative path. For example, once you retrieve a logger by doing log = getLogger('app'), calling
log.getChild('network.listen') is equivalent to getLogger('app.network.listen').
The LoggerAdapter class gained an isEnabledFor() method that takes a level and returns whether the
underlying logger would process a message of that level of importance.
See also:
PEP 391 - Dictionary-Based Configuration For Logging PEP written and implemented by Vinay Sajip.
set([])
>>> {}    # empty dict
{}
>>> n = 295147905179352891391
>>> float(n)
2.9514790517935289e+20
>>> n - long(float(n))
-1L
(Implemented by Mark Dickinson; issue 3166.)
Integer division is also more accurate in its rounding behaviours. (Also implemented by Mark Dickinson; issue
1811.)
Implicit coercion for complex numbers has been removed; the interpreter will no longer ever attempt to call a
__coerce__() method on complex objects. (Removed by Meador Inge and Mark Dickinson; issue 5211.)
The str.format() method now supports automatic numbering of the replacement fields. This makes using
str.format() more closely resemble using %s formatting:
>>> '{}:{}:{}'.format(2009, 04, 'Sunday')
'2009:4:Sunday'
>>> '{}:{}:{day}'.format(2009, 4, day='Sunday')
'2009:4:Sunday'
The auto-numbering takes the fields from left to right, so the first {...} specifier will use the first argument to
str.format(), the next specifier will use the next argument, and so on. You can't mix auto-numbering and
explicit numbering: either number all of your specifier fields or none of them. But you can mix auto-numbering
and named fields, as in the second example above. (Contributed by Eric Smith; issue 5237.)
Complex numbers now correctly support usage with format(), and default to being right-aligned. Specifying
a precision or comma-separation applies to both the real and imaginary parts of the number, but a specified field
width and alignment is applied to the whole of the resulting 1.5+3j output. (Contributed by Eric Smith; issue
1588 and issue 7988.)
The F format code now always formats its output using uppercase characters, so it will now produce INF and
NAN. (Contributed by Eric Smith; issue 3382.)
A low-level change: the object.__format__() method now triggers a
PendingDeprecationWarning if it's passed a format string, because the __format__() method for
object converts the object to a string representation and formats that. Previously the method silently applied
the format string to the string representation, but that could hide mistakes in Python code. If you're supplying
formatting information such as an alignment or precision, presumably you're expecting the formatting to be
applied in some object-specific way. (Fixed by Eric Smith; issue 7994.)
The int() and long() types gained a bit_length method that returns the number of bits necessary to
represent its argument in binary:
>>> n = 37
>>> bin(n)
'0b100101'
>>> n.bit_length()
6
>>> n = 2**123-1
>>> n.bit_length()
123
>>> (n+1).bit_length()
124
(Contributed by Fredrik Johansson and Victor Stinner; issue 3439.)
The import statement will no longer try an absolute import if a relative import (e.g. from .os import
sep) fails. This fixes a bug, but could possibly break certain import statements that were only working by
accident.
10.2 Optimizations
Several performance enhancements have been added:
A new opcode was added to perform the initial setup for with statements, looking up the __enter__() and
__exit__() methods. (Contributed by Benjamin Peterson.)
The garbage collector now performs better for one common usage pattern: when many objects are being allocated without deallocating any of them. This would previously take quadratic time for garbage collection, but
now the number of full garbage collections is reduced as the number of objects on the heap grows. The new
logic only performs a full garbage collection pass when the middle generation has been collected 10 times and
when the number of survivor objects from the middle generation exceeds 10% of the number of objects in the
oldest generation. (Suggested by Martin von Löwis and implemented by Antoine Pitrou; issue 4074.)
The garbage collector tries to avoid tracking simple containers which can't be part of a cycle. In Python 2.7,
this is now true for tuples and dicts containing atomic types (such as ints, strings, etc.). Transitively, a dict
containing tuples of atomic types won't be tracked either. This helps reduce the cost of each garbage collection
by decreasing the number of objects to be considered and traversed by the collector. (Contributed by Antoine
Pitrou; issue 4688.)
Long integers are now stored internally either in base 2**15 or in base 2**30, the base being determined at
build time. Previously, they were always stored in base 2**15. Using base 2**30 gives significant performance
improvements on 64-bit machines, but benchmark results on 32-bit machines have been mixed. Therefore, the
default is to use base 2**30 on 64-bit machines and base 2**15 on 32-bit machines; on Unix, there's a new
configure option --enable-big-digits that can be used to override this default.
Apart from the performance improvements this change should be invisible to end users, with one exception: for
testing and debugging purposes there's a new structseq sys.long_info that provides information about the
internal format, giving the number of bits per digit and the size in bytes of the C type used to store each digit:
>>> import sys
>>> sys.long_info
sys.long_info(bits_per_digit=30, sizeof_digit=4)
(Contributed by Mark Dickinson; issue 4258.)
Another set of changes made long objects a few bytes smaller: 2 bytes smaller on 32-bit systems and 6 bytes on
64-bit. (Contributed by Mark Dickinson; issue 5260.)
The division algorithm for long integers has been made faster by tightening the inner loop, doing shifts instead
of multiplications, and fixing an unnecessary extra iteration. Various benchmarks show speedups of between
50% and 150% for long integer divisions and modulo operations. (Contributed by Mark Dickinson; issue 5512.)
Bitwise operations are also significantly faster (initial patch by Gregory Smith; issue 1087418).
The implementation of % checks for the left-side operand being a Python string and special-cases it; this results
in a 1-3% performance increase for applications that frequently use % with strings, such as templating libraries.
(Implemented by Collin Winter; issue 5176.)
List comprehensions with an if condition are compiled into faster bytecode. (Patch by Antoine Pitrou, backported to 2.7 by Jeffrey Yasskin; issue 4715.)
Converting an integer or long integer to a decimal string was made faster by special-casing base 10 instead of
using a generalized conversion function that supports arbitrary bases. (Patch by Gawain Bolton; issue 6713.)
The split(), replace(), rindex(), rpartition(), and rsplit() methods of string-like types
(strings, Unicode strings, and bytearray objects) now use a fast reverse-search algorithm instead of a
character-by-character scan. This is sometimes faster by a factor of 10. (Added by Florent Xicluna; issue
7462 and issue 7622.)
The pickle and cPickle modules now automatically intern the strings used for attribute names, reducing
memory usage of the objects resulting from unpickling. (Contributed by Jake McGuire; issue 5084.)
The cPickle module now special-cases dictionaries, nearly halving the time required to pickle them. (Contributed by Collin Winter; issue 5670.)
for a more complete list of changes, or look through the Subversion logs for all the details.
The bdb module's base debugging class Bdb gained a feature for skipping modules. The constructor now
takes an iterable containing glob-style patterns such as django.*; the debugger will not step into stack frames
from a module that matches one of these patterns. (Contributed by Maru Newby after a suggestion by Senthil
Kumaran; issue 5142.)
The binascii module now supports the buffer API, so it can be used with memoryview instances and other
similar buffer objects. (Backported from 3.x by Florent Xicluna; issue 7703.)
Updated module: the bsddb module has been updated from 4.7.2devel9 to version 4.8.4 of the pybsddb package. The new version features better Python 3.x compatibility, various bug fixes, and adds several new BerkeleyDB flags and methods. (Updated by Jesús Cea Avión; issue 8156. The pybsddb changelog can be read at
http://hg.jcea.es/pybsddb/file/tip/ChangeLog.)
The bz2 module's BZ2File now supports the context management protocol, so you can write with
bz2.BZ2File(...) as f:. (Contributed by Hagen Fürstenau; issue 3860.)
New class: the Counter class in the collections module is useful for tallying data. Counter instances
behave mostly like dictionaries but return zero for missing keys instead of raising a KeyError:
>>> from collections import Counter
>>> c = Counter()
>>> for letter in 'here is a sample of english text':
...     c[letter] += 1
...
>>> c
Counter({' ': 6, 'e': 5, 's': 3, 'a': 2, 'i': 2, 'h': 2,
'l': 2, 't': 2, 'g': 1, 'f': 1, 'm': 1, 'o': 1, 'n': 1,
'p': 1, 'r': 1, 'x': 1})
>>> c['e']
5
>>> c['z']
0
There are three additional Counter methods. most_common() returns the N most common elements and
their counts. elements() returns an iterator over the contained elements, repeating each element as many
times as its count. subtract() takes an iterable and subtracts one for each element instead of adding; if the
argument is a dictionary or another Counter, the counts are subtracted.
>>> c.most_common(5)
[(' ', 6), ('e', 5), ('s', 3), ('a', 2), ('i', 2)]
>>> c.elements() ->
'a', 'a', ' ', ' ', ' ', ' ', ' ', ' ',
'e', 'e', 'e', 'e', 'e', 'g', 'f', 'i', 'i',
'h', 'h', 'm', 'l', 'l', 'o', 'n', 'p', 's',
's', 's', 'r', 't', 't', 'x'
>>> c['e']
5
>>> c.subtract('very heavy on the letter e')
>>> c['e']    # Count is now lower
-1
(Contributed by Raymond Hettinger; issue 1696199.)
New class: OrderedDict is described in the earlier section PEP 372: Adding an Ordered Dictionary to
collections.
New method: The deque data type now has a count() method that returns the number of contained elements
equal to the supplied argument x, and a reverse() method that reverses the elements of the deque in-place.
deque also exposes its maximum length as the read-only maxlen attribute. (Both features added by Raymond
Hettinger.)
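A quick illustration of these three additions:

```python
from collections import deque

d = deque('abbccc', maxlen=10)
print(d.count('c'))   # -> 3
d.reverse()           # reverses the deque in-place
print(list(d))        # -> ['c', 'c', 'c', 'b', 'b', 'a']
print(d.maxlen)       # -> 10 (read-only attribute)
```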
The namedtuple class now has an optional rename parameter. If rename is true, field names that are invalid
because they've been repeated or aren't legal Python identifiers will be renamed to legal names that are derived
from the field's position within the list of fields:
>>> from collections import namedtuple
>>> T = namedtuple('T', ['field1', '$illegal', 'for', 'field2'], rename=True)
>>> T._fields
('field1', '_1', '_2', 'field2')
(Added by Raymond Hettinger; issue 1818.)
Finally, the Mapping abstract base class now returns NotImplemented if a mapping is compared to another
type that isn't a Mapping. (Fixed by Daniel Stutzbach; issue 8729.)
Constructors for the parsing classes in the ConfigParser module now take an allow_no_value parameter,
defaulting to false; if true, options without values will be allowed. For example:
>>> import ConfigParser, StringIO
>>> sample_config = """
... [mysqld]
... user = mysql
... pid-file = /var/run/mysqld/mysqld.pid
... skip-bdb
... """
>>> config = ConfigParser.RawConfigParser(allow_no_value=True)
>>> config.readfp(StringIO.StringIO(sample_config))
>>> config.get('mysqld', 'user')
'mysql'
>>> print config.get('mysqld', 'skip-bdb')
None
>>> print config.get('mysqld', 'unknown')
Traceback (most recent call last):
...
NoOptionError: No option 'unknown' in section: 'mysqld'
(Contributed by Mats Kindahl; issue 7005.)
Deprecated function: contextlib.nested(), which allows handling more than one context manager with
a single with statement, has been deprecated, because the with statement now supports multiple context
managers.
The cookielib module now ignores cookies that have an invalid version field, one that doesn't contain an
integer value. (Fixed by John J. Lee; issue 3924.)
The copy module's deepcopy() function will now correctly copy bound instance methods. (Implemented
by Robert Collins; issue 1515.)
The ctypes module now always converts None to a C NULL pointer for arguments declared as pointers.
(Changed by Thomas Heller; issue 4606.) The underlying libffi library has been updated to version 3.0.9,
containing various fixes for different platforms. (Updated by Matthias Klose; issue 8142.)
New method: the datetime module's timedelta class gained a total_seconds() method that returns
the number of seconds in the duration. (Contributed by Brian Quinlan; issue 5788.)
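For example:

```python
from datetime import timedelta

delta = timedelta(days=1, minutes=2, seconds=3)
# One day (86400 s) plus 123 s, returned as a single float.
print(delta.total_seconds())   # -> 86523.0
```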
New method: the Decimal class gained a from_float() class method that performs an exact
conversion of a floating-point number to a Decimal. This exact conversion strives for the closest
decimal approximation to the floating-point representation's value; the resulting decimal value will
therefore still include the inaccuracy, if any. For example, Decimal.from_float(0.1) returns
Decimal('0.1000000000000000055511151231257827021181583404541015625'). (Implemented by Raymond Hettinger; issue 4796.)
Comparing instances of Decimal with floating-point numbers now produces sensible results based on the
numeric values of the operands. Previously such comparisons would fall back to Python's default rules for
comparing objects, which produced arbitrary results based on their type. Note that you still cannot combine
Decimal and floating-point in other operations such as addition, since you should be explicitly choosing how
to convert between float and Decimal. (Fixed by Mark Dickinson; issue 2531.)
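For example, mixed comparisons work while mixed arithmetic does not:

```python
from decimal import Decimal

print(Decimal('0.5') == 0.5)    # True: 0.5 is exactly representable in binary
print(Decimal('1.5') > 1.25)    # True: compared by numeric value

try:
    Decimal('1.5') + 0.5        # mixed arithmetic is still an error
except TypeError:
    print('mixed addition raises TypeError')
```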
The constructor for Decimal now accepts floating-point numbers (added by Raymond Hettinger; issue 8257)
and non-European Unicode characters such as Arabic-Indic digits (contributed by Mark Dickinson; issue 6595).
Most of the methods of the Context class now accept integers as well as Decimal instances; the only
exceptions are the canonical() and is_canonical() methods. (Patch by Juan José Conti; issue 7633.)
When using Decimal instances with a string's format() method, the default alignment was previously left-alignment. This has been changed to right-alignment, which is more sensible for numeric types. (Changed by
Mark Dickinson; issue 6857.)
Comparisons involving a signaling NaN value (or sNAN) now signal InvalidOperation instead of silently
returning a true or false value depending on the comparison operator. Quiet NaN values (or NaN) are now
hashable. (Fixed by Mark Dickinson; issue 7279.)
The difflib module now produces output that is more compatible with modern diff/patch tools through
one small change, using a tab character instead of spaces as a separator in the header giving the filename. (Fixed
by Anatoly Techtonik; issue 7585.)
The Distutils sdist command now always regenerates the MANIFEST file, since even if the MANIFEST.in
or setup.py files haven't been modified, the user might have created some new files that should be included.
(Fixed by Tarek Ziadé; issue 8688.)
The doctest module's IGNORE_EXCEPTION_DETAIL flag will now ignore the name of the module containing the exception being tested. (Patch by Lennart Regebro; issue 7490.)
The email module's Message class will now accept a Unicode-valued payload, automatically converting the
payload to the encoding specified by output_charset. (Added by R. David Murray; issue 1368247.)
The Fraction class now accepts a single float or Decimal instance, or two rational numbers, as arguments
to its constructor. (Implemented by Mark Dickinson; rationals added in issue 5812, and float/decimal in issue
8294.)
Ordering comparisons (<, <=, >, >=) between fractions and complex numbers now raise a TypeError. This
fixes an oversight, making the Fraction match the other numeric types.
New class: FTP_TLS in the ftplib module provides secure FTP connections using TLS encapsulation of
authentication as well as subsequent control and data transfers. (Contributed by Giampaolo Rodolà; issue 2054.)
The storbinary() method for binary uploads can now restart uploads thanks to an added rest parameter
(patch by Pablo Mouzo; issue 6845.)
New class decorator: total_ordering() in the functools module takes a class that defines an
__eq__() method and one of __lt__(), __le__(), __gt__(), or __ge__(), and generates the missing comparison methods. Since the __cmp__() method is being deprecated in Python 3.x, this decorator
makes it easier to define ordered classes. (Added by Raymond Hettinger; issue 5479.)
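For example, defining just __eq__() and __lt__() is enough to get all six comparison operators. Version is a made-up class for illustration:

```python
from functools import total_ordering

@total_ordering
class Version(object):
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)
    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)
    # __le__, __gt__, and __ge__ are generated by the decorator

print(Version(2, 7) <= Version(2, 7))   # -> True
print(Version(2, 7) > Version(2, 6))    # -> True
```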
New function: cmp_to_key() will take an old-style comparison function that expects two arguments and
return a new callable that can be used as the key parameter to functions such as sorted(), min() and max(),
etc. The primary intended use is to help with making code compatible with Python 3.x. (Added by Raymond
Hettinger.)
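For example, wrapping a hypothetical old-style comparison function so it can be used with sorted():

```python
from functools import cmp_to_key

# An old-style comparison function: returns a negative, zero,
# or positive result, like the cmp argument sorted() took in 2.x.
def compare_lengths(a, b):
    return len(a) - len(b)

words = ['kiwi', 'fig', 'banana', 'apple']
print(sorted(words, key=cmp_to_key(compare_lengths)))
# -> ['fig', 'kiwi', 'apple', 'banana']
```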
New function: the gc module's is_tracked() returns true if a given instance is tracked by the garbage
collector, false otherwise. (Contributed by Antoine Pitrou; issue 4688.)
The gzip module's GzipFile now supports the context management protocol, so you can write with
gzip.GzipFile(...) as f: (contributed by Hagen Fürstenau; issue 3860), and it now implements the
io.BufferedIOBase ABC, so you can wrap it with io.BufferedReader for faster processing (contributed by Nir Aides; issue 7471). It's also now possible to override the modification time recorded in a gzipped
file by providing an optional timestamp to the constructor. (Contributed by Jacques Frechet; issue 4272.)
Files in gzip format can be padded with trailing zero bytes; the gzip module will now consume these trailing
bytes. (Fixed by Tadek Pietraszek and Brian Curtin; issue 2846.)
New attribute: the hashlib module now has an algorithms attribute containing a tuple naming the supported algorithms. In Python 2.7, hashlib.algorithms contains ('md5', 'sha1', 'sha224',
'sha256', 'sha384', 'sha512'). (Contributed by Carl Chenet; issue 7418.)
The default HTTPResponse class used by the httplib module now supports buffering, resulting in much
faster reading of HTTP responses. (Contributed by Kristján Valur Jónsson; issue 4879.)
The HTTPConnection and HTTPSConnection classes now support a source_address parameter, a
(host, port) 2-tuple giving the source address that will be used for the connection. (Contributed by Eldon
Ziegler; issue 3972.)
The ihooks module now supports relative imports. Note that ihooks is an older module for customizing
imports, superseded by the imputil module added in Python 2.0. (Relative import support added by Neil
Schemenauer.)
The imaplib module now supports IPv6 addresses. (Contributed by Derek Morr; issue 1655.)
New function: the inspect module's getcallargs() takes a callable and its positional and keyword
arguments, and figures out which of the callable's parameters will receive each argument, returning a dictionary
mapping argument names to their values. For example:
>>> from inspect import getcallargs
>>> def f(a, b=1, *pos, **named):
...     pass
>>> getcallargs(f, 1, 2, 3)
{'a': 1, 'b': 2, 'pos': (3,), 'named': {}}
>>> getcallargs(f, a=2, x=4)
{'a': 2, 'b': 1, 'pos': (), 'named': {'x': 4}}
>>> getcallargs(f)
Traceback (most recent call last):
...
TypeError: f() takes at least 1 argument (0 given)
(Contributed by George Sakkis; issue 3135.)
Updated module: The io library has been upgraded to the version shipped with Python 3.1. For 3.1, the I/O
library was entirely rewritten in C and is 2 to 20 times faster depending on the task being performed. The
original Python version was renamed to the _pyio module.
One minor resulting change: the io.TextIOBase class now has an errors attribute giving the error setting
used for encoding and decoding errors (one of 'strict', 'replace', 'ignore').
The io.FileIO class now raises an OSError when passed an invalid file descriptor. (Implemented by
Benjamin Peterson; issue 4991.) The truncate() method now preserves the file position; previously it
would change the file position to the end of the new file. (Fixed by Pascal Chambon; issue 6939.)
New function: itertools.compress(data, selectors) takes two iterators. Elements of data are
returned if the corresponding value in selectors is true:
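A short illustration:

```python
from itertools import compress

data = ['a', 'b', 'c', 'd', 'e']
selectors = [1, 0, 1, 0, 1]
# Keeps only the elements whose selector is true.
print(list(compress(data, selectors)))   # -> ['a', 'c', 'e']
```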
The pydoc module now has help for the various symbols that Python uses. You can now do help('<<') or
help('@'), for example. (Contributed by David Laban; issue 4739.)
The re modules split(), sub(), and subn() now accept an optional flags argument, for consistency with
the other functions in the module. (Added by Gregory P. Smith.)
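For example, flags can now be passed directly instead of being compiled into a pattern object first:

```python
import re

# Case-insensitive split and substitution via the new flags argument.
print(re.split('l', 'HELLO world', flags=re.IGNORECASE))
# -> ['HE', '', 'O wor', 'd']
print(re.sub('l+', '*', 'HELLO world', flags=re.IGNORECASE))
# -> 'HE*O wor*d'
```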
New function: run_path() in the runpy module will execute the code at a provided path argument. path
can be the path of a Python source file (example.py), a compiled bytecode file (example.pyc), a directory
(./package/), or a zip archive (example.zip). If a directory or zip path is provided, it will be added to
the front of sys.path and the module __main__ will be imported. It's expected that the directory or zip
contains a __main__.py; if it doesn't, some other __main__.py might be imported from a location later
in sys.path. This makes more of the machinery of runpy available to scripts that want to mimic the way
Python's command line processes an explicit path name. (Added by Nick Coghlan; issue 6816.)
New function: in the shutil module, make_archive() takes a filename, archive type (zip or tar-format),
and a directory path, and creates an archive containing the directory's contents. (Added by Tarek Ziadé.)
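A sketch of how this might be used; the directory and file names here are made-up temporary locations:

```python
import os
import shutil
import tempfile

# Build a throwaway directory tree to archive.
workdir = tempfile.mkdtemp()
source = os.path.join(workdir, 'project')
os.mkdir(source)
with open(os.path.join(source, 'readme.txt'), 'w') as f:
    f.write('hello\n')

# Creates <workdir>/project.zip containing the directory's contents
# and returns the full path of the new archive.
archive = shutil.make_archive(os.path.join(workdir, 'project'),
                              'zip', root_dir=source)
print(os.path.basename(archive))   # -> 'project.zip'
```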
shutil's copyfile() and copytree() functions now raise a SpecialFileError exception when
asked to copy a named pipe. Previously the code would treat named pipes like a regular file by opening them
for reading, and this would block indefinitely. (Fixed by Antoine Pitrou; issue 3002.)
The signal module no longer re-installs the signal handler unless this is truly necessary, which fixes a bug that
could make it impossible to catch the EINTR signal robustly. (Fixed by Charles-François Natali; issue 8354.)
New functions: in the site module, three new functions return various site- and user-specific
paths. getsitepackages() returns a list containing all global site-packages directories,
getusersitepackages() returns the path of the user's site-packages directory, and getuserbase()
returns the value of the USER_BASE environment variable, giving the path to a directory that can be used to
store data. (Contributed by Tarek Ziadé; issue 6693.)
The site module now reports exceptions occurring when the sitecustomize module is imported, and will
no longer catch and swallow the KeyboardInterrupt exception. (Fixed by Victor Stinner; issue 3137.)
The create_connection() function gained a source_address parameter, a (host, port) 2-tuple giving the source address that will be used for the connection. (Contributed by Eldon Ziegler; issue 3972.)
The recv_into() and recvfrom_into() methods will now write into objects that support the buffer
API, most usefully the bytearray and memoryview objects. (Implemented by Antoine Pitrou; issue 8104.)
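A minimal sketch using a local pair of connected sockets (socket.socketpair() is available on Unix-like platforms):

```python
import socket

a, b = socket.socketpair()
a.sendall(b'hello world')

buf = bytearray(11)
nbytes = b.recv_into(buf)   # writes the received data directly into buf
print(bytes(buf[:nbytes]))  # the eleven bytes that were sent

a.close()
b.close()
```

Writing into a preallocated bytearray avoids the extra copy that recv() would make when returning a new string.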
The SocketServer modules TCPServer class now supports socket timeouts and disabling the Nagle algorithm. The disable_nagle_algorithm class attribute defaults to False; if overridden to be true,
new request connections will have the TCP_NODELAY option set to prevent buffering many small sends
into a single TCP packet. The timeout class attribute can hold a timeout in seconds that will be applied
to the request socket; if no request is received within that time, handle_timeout() will be called and
handle_request() will return. (Contributed by Kristján Valur Jónsson; issue 6192 and issue 6267.)
Updated module: the sqlite3 module has been updated to version 2.6.0 of the pysqlite package. Version
2.6.0 includes a number of bugfixes, and adds the ability to load SQLite extensions from shared libraries. Call
the enable_load_extension(True) method to enable extensions, and then call load_extension()
to load a particular shared library. (Updated by Gerhard Häring.)
The ssl module's SSLSocket objects now support the buffer API, which fixed a test suite failure (fix by
Antoine Pitrou; issue 7133) and automatically set OpenSSLs SSL_MODE_AUTO_RETRY, which will prevent
an error code being returned from recv() operations that trigger an SSL renegotiation (fix by Antoine Pitrou;
issue 8222).
The ssl.wrap_socket() constructor function now takes a ciphers argument that's a string listing the encryption algorithms to be allowed; the format of the string is described in the OpenSSL documentation. (Added
by Antoine Pitrou; issue 8322.)
Another change makes the extension load all of OpenSSL's ciphers and digest algorithms so that they're all
available. Previously, some SSL certificates couldn't be verified, and were reported with an "unknown algorithm" error. (Reported by
Beda Kosata, and fixed by Antoine Pitrou; issue 8484.)
The version of OpenSSL being used is now available as the module attributes ssl.OPENSSL_VERSION (a
string), ssl.OPENSSL_VERSION_INFO (a 5-tuple), and ssl.OPENSSL_VERSION_NUMBER (an integer).
(Added by Antoine Pitrou; issue 8321.)
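All three attributes describe the same underlying OpenSSL build, in different forms:

```python
import ssl

print(ssl.OPENSSL_VERSION)              # human-readable version string
print(ssl.OPENSSL_VERSION_INFO)         # 5-tuple of integers
print(hex(ssl.OPENSSL_VERSION_NUMBER))  # the raw version number
```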
The struct module will no longer silently ignore overflow errors when a value is too large for a particular
integer format code (one of bBhHiIlLqQ); it now always raises a struct.error exception. (Changed by
Mark Dickinson; issue 1523.) The pack() function will also attempt to use __index__() to convert and
pack non-integers before trying the __int__() method or reporting an error. (Changed by Mark Dickinson;
issue 8300.)
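Both changes can be seen in a short sketch; the PageNumber class is an invented example type:

```python
import struct

# 'B' is an unsigned byte, so 300 is out of range and now raises
# struct.error instead of being silently truncated.
try:
    struct.pack('B', 300)
except struct.error as exc:
    print("rejected:", exc)

# An object that defines __index__() can be packed as an integer.
class PageNumber(object):
    def __index__(self):
        return 42

print(struct.pack('B', PageNumber()))  # the single byte 42, i.e. b'*'
```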
New function: the subprocess module's check_output() runs a command with a specified set of arguments and returns the command's output as a string when the command runs without error, or raises a
CalledProcessError exception otherwise.
>>> subprocess.check_output(['df', '-h', '.'])
'Filesystem    Size   Used  Avail Capacity  Mounted on\n/dev/disk0s2   52G    49G   3.0G    94%       /\n'
>>> subprocess.check_output(['df', '-h', '/bogus'])
Traceback (most recent call last):
  ...
subprocess.CalledProcessError: Command '['df', '-h', '/bogus']' returned non-zero exit status 1
(Contributed by Gregory P. Smith.)
The subprocess module will now retry its internal system calls on receiving an EINTR signal. (Reported by
several people; final patch by Gregory P. Smith in issue 1068268.)
New function: is_declared_global() in the symtable module returns true for variables that are explicitly declared to be global, false for ones that are implicitly global. (Contributed by Jeremy Hylton.)
The syslog module will now use the value of sys.argv[0] as the identifier instead of the previous default
value of python. (Changed by Sean Reifschneider; issue 8451.)
The sys.version_info value is now a named tuple, with attributes named major, minor, micro,
releaselevel, and serial. (Contributed by Ross Light; issue 4285.)
sys.getwindowsversion() also returns a named tuple, with attributes named major, minor, build,
platform, service_pack, service_pack_major, service_pack_minor, suite_mask, and
product_type. (Contributed by Brian Curtin; issue 7766.)
The tarfile modules default error handling has changed, to no longer suppress fatal errors. The default error
level was previously 0, which meant that errors would only result in a message being written to the debug log,
but because the debug log is not activated by default, these errors go unnoticed. The default error level is now
1, which raises an exception if there's an error. (Changed by Lars Gustäbel; issue 7357.)
tarfile now supports filtering the TarInfo objects being added to a tar file. When you call add(), you
may supply an optional filter argument that's a callable. The filter callable will be passed the TarInfo for
every file being added, and can modify and return it. If the callable returns None, the file will be excluded
from the resulting archive. This is more powerful than the existing exclude argument, which has therefore
been deprecated. (Added by Lars Gustäbel; issue 6856.) The TarFile class also now supports the context
management protocol. (Added by Lars Gustäbel; issue 7232.)
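A sketch of both features together; the scrub() filter and the file names are invented for illustration:

```python
import os
import tarfile
import tempfile

def scrub(tarinfo):
    """Drop editor backup files; normalize ownership on everything else."""
    if tarinfo.name.endswith('~'):
        return None                  # returning None excludes the file
    tarinfo.uid = tarinfo.gid = 0
    return tarinfo

workdir = tempfile.mkdtemp()
for name in ('keep.txt', 'junk.txt~'):
    open(os.path.join(workdir, name), 'w').close()

archive = os.path.join(workdir, 'out.tar')
with tarfile.open(archive, 'w') as tf:   # TarFile as a context manager
    for name in ('keep.txt', 'junk.txt~'):
        tf.add(os.path.join(workdir, name), arcname=name, filter=scrub)

with tarfile.open(archive) as tf:
    names = tf.getnames()
print(names)                             # only 'keep.txt' survives
```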
The wait() method of the threading.Event class now returns the internal flag on exit. This means the
method will usually return true because wait() is supposed to block until the internal flag becomes true. The
return value will only be false if a timeout was provided and the operation timed out. (Contributed by Tim
Lesher; issue 1674032.)
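The difference only shows up when a timeout is given:

```python
import threading

event = threading.Event()

# Not set yet: wait() times out and returns the internal flag, still False.
print(event.wait(0.01))

event.set()
# Flag is set: wait() returns True immediately.
print(event.wait(0.01))
```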
The Unicode database provided by the unicodedata module is now used internally to determine which
characters are numeric, whitespace, or represent line breaks. The database also includes information from the
Unihan.txt data file (patch by Anders Chrigström and Amaury Forgeot d'Arc; issue 1571184) and has been
updated to version 5.2.0 (updated by Florent Xicluna; issue 8024).
The urlparse modules urlsplit() now handles unknown URL schemes in a fashion compliant with
RFC 3986: if the URL is of the form "<something>://...", the text before the :// is treated as the
scheme, even if it's a made-up scheme that the module doesn't know about. This change may break code that
worked around the old behaviour. For example, Python 2.6.4 or 2.5 will return the following:
>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', '', '//host/filename?query', '', '')
Python 2.7 (and Python 2.6.5) will return:
>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', 'host', '/filename?query', '', '')
(Python 2.7 actually produces slightly different output, since it returns a named tuple instead of a standard tuple.)
The urlparse module also supports IPv6 literal addresses as defined by RFC 2732 (contributed by Senthil
Kumaran; issue 2987).
>>> urlparse.urlparse('http://[1080::8:800:200C:417A]/foo')
ParseResult(scheme='http', netloc='[1080::8:800:200C:417A]',
path='/foo', params='', query='', fragment='')
New class: the WeakSet class in the weakref module is a set that only holds weak references to its elements;
elements will be removed once there are no references pointing to them. (Originally implemented in Python 3.x
by Raymond Hettinger, and backported to 2.7 by Michael Foord.)
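A quick sketch of the reference behaviour; Node is a placeholder class whose instances support weak references:

```python
import gc
import weakref

class Node(object):
    """Placeholder class; plain instances can be weakly referenced."""

n = Node()
live = weakref.WeakSet([n])
print(len(live))    # 1: n is still strongly referenced

del n               # drop the last strong reference
gc.collect()        # immediate on CPython anyway; explicit for portability
print(len(live))    # 0: the set forgot the element
```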
The ElementTree library, xml.etree, no longer escapes ampersands and angle brackets when outputting an
XML processing instruction (which looks like <?xml-stylesheet href="#style1"?>) or comment
(which looks like <!-- comment -->). (Patch by Neil Muller; issue 2746.)
The XML-RPC client and server, provided by the xmlrpclib and SimpleXMLRPCServer modules, have
improved performance by supporting HTTP/1.1 keep-alive and by optionally using gzip encoding to compress
the XML being exchanged. The gzip compression is controlled by the encode_threshold attribute of
SimpleXMLRPCRequestHandler, which contains a size in bytes; responses larger than this will be compressed. (Contributed by Kristján Valur Jónsson; issue 6267.)
The zipfile module's ZipFile now supports the context management protocol, so you can write with
zipfile.ZipFile(...) as f:. (Contributed by Brian Curtin; issue 5511.)
zipfile now also supports archiving empty directories and extracts them correctly. (Fixed by Kuba Wieczorek; issue 4710.) Reading files out of an archive is faster, and interleaving read() and readline() now
works correctly. (Contributed by Nir Aides; issue 7610.)
The is_zipfile() function now accepts a file object, in addition to the path names accepted in earlier
versions. (Contributed by Gabriel Genellina; issue 4756.)
The writestr() method now has an optional compress_type parameter that lets you override the default
compression method specified in the ZipFile constructor. (Contributed by Ronald Oussoren; issue 6003.)
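A sketch combining the context-manager support and the per-member override; the file names are invented:

```python
import os
import tempfile
import zipfile

path = os.path.join(tempfile.mkdtemp(), "example.zip")

# The archive default is DEFLATED, but writestr() can override it per member.
with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("big.txt", "x" * 1000)
    zf.writestr("tiny.txt", "ok", compress_type=zipfile.ZIP_STORED)

with zipfile.ZipFile(path) as zf:
    members = dict((info.filename, info.compress_type)
                   for info in zf.infolist())
print(members)
```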
that can participate in the import process. Python 2.7 doesn't contain the complete importlib package, but instead
has a tiny subset that contains a single function, import_module().
import_module(name, package=None) imports a module. name is a string containing the module or package's name. It's possible to do relative imports by providing a string that begins with a . character, such as
..utils.errors. For relative imports, the package argument must be provided and is the name of the package that will be used as the anchor for the relative import. import_module() both inserts the imported module
into sys.modules and returns the module object.
Here are some examples:
>>> from importlib import import_module
>>> anydbm = import_module('anydbm') # Standard absolute import
>>> anydbm
<module 'anydbm' from '/p/python/Lib/anydbm.py'>
>>> # Relative import
>>> file_util = import_module('..file_util', 'distutils.command')
>>> file_util
<module 'distutils.file_util' from '/python/Lib/distutils/file_util.pyc'>
importlib was implemented by Brett Cannon and introduced in Python 3.1.
assertIsNone() and assertIsNotNone() take one expression and verify that the result is or is not
None.
assertIs() and assertIsNot() take two values and check whether the two values evaluate to the same
object or not. (Added by Michael Foord; issue 2578.)
assertIsInstance() and assertNotIsInstance() check whether the resulting object is an instance
of a particular class, or of one of a tuple of classes. (Added by Georg Brandl; issue 7031.)
assertGreater(), assertGreaterEqual(), assertLess(), and assertLessEqual() compare two quantities.
assertMultiLineEqual() compares two strings, and if they're not equal, displays a helpful comparison
that highlights the differences in the two strings. This comparison is now used by default when Unicode strings
are compared with assertEqual().
assertRegexpMatches() and assertNotRegexpMatches() check whether the first argument is a
string matching or not matching the regular expression provided as the second argument (issue 8038).
assertRaisesRegexp() checks whether a particular exception is raised, and then also checks that the
string representation of the exception matches the provided regular expression.
assertIn() and assertNotIn() test whether first is or is not in second.
assertItemsEqual() tests whether two provided sequences contain the same elements.
assertSetEqual() compares whether two sets are equal, and only reports the differences between the sets
in case of error.
Similarly, assertListEqual() and assertTupleEqual() compare the specified types and explain
any differences without necessarily printing their full values; these methods are now used by default when
comparing lists and tuples using assertEqual(). More generally, assertSequenceEqual() compares
two sequences and can optionally check whether both sequences are of a particular type.
assertDictEqual() compares two dictionaries and reports the differences; it's now used by default
when you compare two dictionaries using assertEqual(). assertDictContainsSubset() checks
whether all of the key/value pairs in first are found in second.
assertAlmostEqual() and assertNotAlmostEqual() test whether first and second are approximately equal. This method can either round their difference to an optionally-specified number of places (the
default is 7) and compare it to zero, or require the difference to be smaller than a supplied delta value.
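A few of the new assertions in use; the test case below is purely illustrative:

```python
import unittest

class ComparisonExamples(unittest.TestCase):
    def test_new_assertions(self):
        self.assertIn("spam", "spam and eggs")
        self.assertGreater(3, 2)
        self.assertLessEqual(2, 2)
        self.assertIsInstance(3.14, float)
        # delta-based comparison instead of rounding to 7 places:
        self.assertAlmostEqual(1.0, 1.05, delta=0.1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ComparisonExamples)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```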
loadTestsFromName() properly honors the suiteClass attribute of the TestLoader. (Fixed by Mark
Roddy; issue 6866.)
A new hook lets you extend the assertEqual() method to handle new data types. The
addTypeEqualityFunc() method takes a type object and a function. The function will be used when
both of the objects being compared are of the specified type. This function should compare the two objects and
raise an exception if they don't match; it's a good idea for the function to provide additional information about
why the two objects aren't matching, much as the new sequence comparison methods do.
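A sketch of the hook; Point and its comparison function are invented for illustration:

```python
import unittest

class Point(object):
    """Invented example type."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointTests(unittest.TestCase):
    def assertPointsEqual(self, first, second, msg=None):
        # Raise failureException with a descriptive message on mismatch.
        if (first.x, first.y) != (second.x, second.y):
            raise self.failureException(
                msg or "points differ: (%r, %r) != (%r, %r)"
                % (first.x, first.y, second.x, second.y))

    def setUp(self):
        # assertEqual() will now dispatch to the function for two Points.
        self.addTypeEqualityFunc(Point, self.assertPointsEqual)

    def test_points(self):
        self.assertEqual(Point(1, 2), Point(1, 2))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PointTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```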
unittest.main() now takes an optional exit argument. If false, main() doesn't call sys.exit(), allowing
main() to be used from the interactive interpreter. (Contributed by J. Pablo Fernández; issue 3379.)
TestResult has new startTestRun() and stopTestRun() methods that are called immediately before and
after a test run. (Contributed by Robert Collins; issue 5728.)
With all these changes, unittest.py was becoming awkwardly large, so the module was turned into a package
and the code split into several files (by Benjamin Peterson). This doesn't affect how the module is imported or used.
See also:
http://www.voidspace.org.uk/python/articles/unittest2.shtml Describes the new features, how to use them, and the
rationale for various design decisions. (By Michael Foord.)
New Element method: extend() appends the items from a sequence to the element's children:
t = ET.XML("""<list>
<item>1</item> <item>2</item> <item>3</item>
</list>""")
new = ET.XML('<root/>')
new.extend(t)
# Outputs <root><item>1</item>...</root>
print ET.tostring(new)
New Element method: iter() yields the children of the element as a generator. It's also possible to write
for child in elem: to loop over an element's children. The existing method getiterator() is now
deprecated, as is getchildren() which constructs and returns a list of children.
New Element method: itertext() yields all chunks of text that are descendants of the element. For
example:
t = ET.XML("""<list>
<item>1</item> <item>2</item> <item>3</item>
</list>""")
Deprecated: using an element as a Boolean (i.e., if elem:) would return true if the element had any children,
or false if there were no children. This behaviour is confusing (None is false, but so is a childless element?),
so it will now trigger a FutureWarning. In your code, you should be explicit: write len(elem) != 0 if
you're interested in the number of children, or elem is not None.
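For example:

```python
import xml.etree.ElementTree as ET

parent = ET.XML('<parent><child/></parent>')
child = parent.find('child')

# Explicit tests instead of the ambiguous "if child:":
print(child is not None)   # True: the element was found
print(len(child) != 0)     # False: it has no children of its own
```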
Fredrik Lundh develops ElementTree and produced the 1.3 version; you can read his article describing 1.3 at
http://effbot.org/zone/elementtree-13-intro.htm. Florent Xicluna updated the version included with Python, after discussions on python-dev and in issue 6472.
This meant that, if you ran an application embedding Python in a directory controlled by someone else, attackers could put a Trojan-horse module in the directory (say, a file named os.py) that your application would then
import and run.
If you maintain a C/C++ application that embeds Python, check whether you're calling PySys_SetArgv()
and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to
false.
Security issue reported as CVE-2008-5983; discussed in issue 5753, and fixed by Antoine Pitrou.
New macros: the Python header files now define the following macros: Py_ISALNUM, Py_ISALPHA,
Py_ISDIGIT, Py_ISLOWER, Py_ISSPACE, Py_ISUPPER, Py_ISXDIGIT, Py_TOLOWER, and
Py_TOUPPER. All of these functions are analogous to the C standard macros for classifying characters, but
ignore the current locale setting, because in several places Python needs to analyze characters in a locale-independent way. (Added by Eric Smith; issue 5793.)
Removed function: PyEval_CallObject is now only available as a macro. A function version was being
kept around to preserve ABI linking compatibility, but that was in 1997; it can certainly be deleted by now.
(Removed by Antoine Pitrou; issue 8276.)
New format codes: the PyString_FromFormat(), PyString_FromFormatV(), and
PyErr_Format() functions now accept %lld and %llu format codes for displaying C's long long
types. (Contributed by Mark Dickinson; issue 7228.)
The complicated interaction between threads and process forking has been changed. Previously, the child process created by os.fork() might fail because the child is created with only a single thread running, the thread
performing the os.fork(). If other threads were holding a lock, such as Pythons import lock, when the fork
was performed, the lock would still be marked as held in the new process. But in the child process nothing
would ever release the lock, since the other threads weren't replicated, and the child process would no longer be
able to perform imports.
Python 2.7 acquires the import lock before performing an os.fork(), and will also clean up any locks created
using the threading module. C extension modules that have internal locks, or that call fork() themselves,
will not benefit from this clean-up.
(Fixed by Thomas Wouters; issue 1590864.)
The Py_Finalize() function now calls the internal threading._shutdown() function; this prevents
some exceptions from being raised when an interpreter shuts down. (Patch by Adam Olsen; issue 1722344.)
When using the PyMemberDef structure to define attributes of a type, Python will no longer let you try to
delete or set a T_STRING_INPLACE attribute.
Global symbols defined by the ctypes module are now prefixed with Py, or with _ctypes. (Implemented
by Thomas Heller; issue 3102.)
New configure option: the --with-system-expat switch allows building the pyexpat module to use the
system Expat library. (Contributed by Arfrever Frehtes Taifersar Arahesis; issue 7609.)
New configure option: the --with-valgrind option will now disable the pymalloc allocator, which is difficult for the Valgrind memory-error detector to analyze correctly. Valgrind will therefore be better at detecting
memory leaks and overruns. (Contributed by James Henstridge; issue 2422.)
New configure option: you can now supply an empty string to --with-dbmliborder= in order to disable
all of the various DBM modules. (Added by Arfrever Frehtes Taifersar Arahesis; issue 6491.)
The configure script now checks for floating-point rounding bugs on certain 32-bit Intel chips and defines a
X87_DOUBLE_ROUNDING preprocessor definition. No code currently uses this definition, but it's available if
anyone wishes to use it. (Added by Mark Dickinson; issue 2937.)
configure also now sets a LDCXXSHARED Makefile variable for supporting C++ linking. (Contributed by
Arfrever Frehtes Taifersar Arahesis; issue 1222585.)
The build process now creates the necessary files for pkg-config support. (Contributed by Clinton Roy; issue
3585.)
The build process now supports Subversion 1.7. (Contributed by Arfrever Frehtes Taifersar Arahesis; issue
6094.)
12.1 Capsules
Python 3.1 adds a new C datatype, PyCapsule, for providing a C API to an extension module. A capsule is essentially the holder of a C void * pointer, and is made available as a module attribute; for example, the socket
module's API is exposed as socket.CAPI, and unicodedata exposes ucnhash_CAPI. Other extensions can
import the module, access its dictionary to get the capsule object, and then get the void * pointer, which will usually
point to an array of pointers to the modules various API functions.
There is an existing data type already used for this, PyCObject, but it doesn't provide type safety. Evil code written
in pure Python could cause a segmentation fault by taking a PyCObject from module A and somehow substituting
it for the PyCObject in module B. Capsules know their own name, and getting the pointer requires providing the
name:
void *vtable;
if (!PyCapsule_IsValid(capsule, "mymodule.CAPI")) {
PyErr_SetString(PyExc_ValueError, "argument type invalid");
return NULL;
}
vtable = PyCapsule_GetPointer(capsule, "mymodule.CAPI");
You are assured that vtable points to whatever you're expecting. If a different capsule was passed in,
PyCapsule_IsValid() would detect the mismatched name and return false. Refer to using-capsules for more
information on using these objects.
Python 2.7 now uses capsules internally to provide various extension-module APIs, but the
PyCObject_AsVoidPtr() was modified to handle capsules, preserving compile-time compatibility with
the CObject interface. Use of PyCObject_AsVoidPtr() will signal a PendingDeprecationWarning,
which is silent by default.
Implemented in Python 3.1 and backported to 2.7 by Larry Hastings; discussed in issue 5630.
13 Port-Specific Changes: Windows
The new _beginthreadex() API is used to start threads, and the native thread-local storage functions are
now used. (Contributed by Kristján Valur Jónsson; issue 3582.)
The os.kill() function now works on Windows. The signal value can be the constants CTRL_C_EVENT,
CTRL_BREAK_EVENT, or any integer. The first two constants will send Control-C and Control-Break
keystroke events to subprocesses; any other value will use the TerminateProcess() API. (Contributed by
Miki Tebeka; issue 1220212.)
The os.listdir() function now correctly fails for an empty path. (Fixed by Hirokazu Yamamoto; issue
5913.)
The mimetypes module will now read the MIME database from the Windows registry when initializing. (Patch
by Gabriel Genellina; issue 4969.)
The range() function processes its arguments more consistently; it will now call __int__() on non-float,
non-integer arguments that are supplied to it. (Fixed by Alexander Belopolsky; issue 1533.)
The string format() method changed the default precision used for floating-point and complex numbers from
6 decimal places to 12, which matches the precision used by str(). (Changed by Eric Smith; issue 5920.)
Because of an optimization for the with statement, the special methods __enter__() and __exit__()
must belong to the objects type, and cannot be directly attached to the objects instance. This affects new-style
classes (derived from object) and C extension types. (issue 6101.)
Due to a bug in Python 2.6, the exc_value parameter to __exit__() methods was often the string representation of the exception, not an instance. This was fixed in 2.7, so exc_value will be an instance as expected. (Fixed
by Florent Xicluna; issue 7853.)
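A quick way to observe the fixed behaviour; the Witness class is invented for illustration:

```python
seen = {}

class Witness(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        # Under the 2.6 bug exc_value could be str(exc);
        # with the fix it is the exception instance itself.
        seen['value_is_instance'] = isinstance(exc_value, ValueError)
        return True   # suppress the exception

with Witness():
    raise ValueError("boom")

print(seen['value_is_instance'])
```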
When a restricted set of attributes were set using __slots__, deleting an unset attribute would not raise
AttributeError as you would expect. (Fixed by Benjamin Peterson; issue 7604.)
In the standard library:
Operations with datetime instances that resulted in a year falling outside the supported range didn't always
raise OverflowError. Such errors are now checked more carefully and will now raise the exception. (Reported by Mark Leander, patch by Anand B. Pillai and Alexander Belopolsky; issue 7150.)
When using Decimal instances with a strings format() method, the default alignment was previously
left-alignment. This has been changed to right-alignment, which might change the output of your programs.
(Changed by Mark Dickinson; issue 6857.)
Comparisons involving a signaling NaN value (or sNAN) now signal InvalidOperation instead of silently
returning a true or false value depending on the comparison operator. Quiet NaN values (or NaN) are now
hashable. (Fixed by Mark Dickinson; issue 7279.)
The ElementTree library, xml.etree, no longer escapes ampersands and angle brackets when outputting an
XML processing instruction (which looks like <?xml-stylesheet href="#style1"?>) or comment (which looks
like <!-- comment -->). (Patch by Neil Muller; issue 2746.)
The readline() method of StringIO objects now does nothing when a negative length is requested, as
other file-like objects do. (issue 7348).
The syslog module will now use the value of sys.argv[0] as the identifier instead of the previous default
value of python. (Changed by Sean Reifschneider; issue 8451.)
The tarfile modules default error handling has changed, to no longer suppress fatal errors. The default error
level was previously 0, which meant that errors would only result in a message being written to the debug log,
but because the debug log is not activated by default, these errors go unnoticed. The default error level is now
1, which raises an exception if theres an error. (Changed by Lars Gustbel; issue 7357.)
The urlparse modules urlsplit() now handles unknown URL schemes in a fashion compliant with
RFC 3986: if the URL is of the form "<something>://...", the text before the :// is treated as the
scheme, even if it's a made-up scheme that the module doesn't know about. This change may break code that
worked around the old behaviour. For example, Python 2.6.4 or 2.5 will return the following:
>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', '', '//host/filename?query', '', '')
Python 2.7 (and Python 2.6.5) will return:
>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', 'host', '/filename?query', '', '')
(Python 2.7 actually produces slightly different output, since it returns a named tuple instead of a standard tuple.)
For C extensions:
C extensions that use integer format codes with the PyArg_Parse* family of functions will now raise a
TypeError exception instead of triggering a DeprecationWarning (issue 5080).
Use the new PyOS_string_to_double() function instead of the old PyOS_ascii_strtod() and
PyOS_ascii_atof() functions, which are now deprecated.
For applications that embed Python:
The PySys_SetArgvEx() function was added, letting applications close a security hole when the existing
PySys_SetArgv() function was used. Check whether you're calling PySys_SetArgv() and carefully
consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.
Most of Python 3.4s ssl module was backported. This means ssl now supports Server Name Indication,
TLS1.x settings, access to the platform certificate store, the SSLContext class, and other features. (Contributed by Alex Gaynor and David Reid; issue 21308.)
Refer to the Version added: 2.7.9 notes in the module documentation for specific details.
os.urandom() was changed to cache a file descriptor to /dev/urandom instead of reopening
/dev/urandom on every call. (Contributed by Alex Gaynor; issue 21305.)
hashlib.algorithms_guaranteed and hashlib.algorithms_available were backported
from Python 3 to make it easier for Python 2 applications to select the strongest available hash algorithm.
(Contributed by Alex Gaynor in issue 21307)
15.4 PEP 476: Enabling certificate verification by default for stdlib http clients
PEP 476 updated httplib and modules which use it, such as urllib2 and xmlrpclib, to now verify that the
server presents a certificate which is signed by a Certificate Authority in the platform trust store and whose hostname
matches the hostname being requested by default, significantly improving security for many applications. This change
was made in the Python 2.7.9 release.
Applications which require the previous behavior can pass an alternate context:
import urllib2
import ssl
# This disables all verification
context = ssl._create_unverified_context()
# This allows using a specific certificate for the host, which doesn't need
# to be in the trust store
context = ssl.create_default_context(cafile="/path/to/file.crt")
urllib2.urlopen("https://invalid-cert", context=context)
15.5 PEP 493: HTTPS verification migration tools for Python 2.7
PEP 493 provides additional migration tools to support a more incremental infrastructure upgrade process for environments containing applications and services relying on the historically permissive processing of server certificates
when establishing client HTTPS connections. These additions were made in the Python 2.7.12 release.
These tools are intended for use in cases where affected applications and services can't be modified to explicitly pass
a more permissive SSL context when establishing the connection.
For applications and services which can't be modified at all, the new PYTHONHTTPSVERIFY environment variable
may be set to 0 to revert an entire Python process back to the default permissive behaviour of Python 2.7.8 and earlier.
For cases where the connection establishment code can't be modified, but the overall application can be, the new
ssl._https_verify_certificates() function can be used to adjust the default behaviour at runtime.
16 Acknowledgements
The author would like to thank the following people for offering suggestions, corrections and assistance with various
drafts of this article: Nick Coghlan, Philip Jenvey, Ryan Lovett, R. David Murray, Hugh Secker-Walker.