The "assert" statement
**********************

Assert statements are a convenient way to insert debugging assertions
into a program:

   assert_stmt ::= "assert" expression ["," expression]

The simple form, "assert expression", is equivalent to

   if __debug__:
       if not expression: raise AssertionError

The extended form, "assert expression1, expression2", is equivalent to

   if __debug__:
       if not expression1: raise AssertionError(expression2)

These equivalences assume that "__debug__" and "AssertionError" refer
to the built-in variables with those names.  In the current
implementation, the built-in variable "__debug__" is "True" under
normal circumstances, "False" when optimization is requested (command
line option -O).  The current code generator emits no code for an
assert statement when optimization is requested at compile time.  Note
that it is unnecessary to include the source code for the expression
that failed in the error message; it will be displayed as part of the
stack trace.

Assignments to "__debug__" are illegal.  The value for the built-in
variable is determined when the interpreter starts.
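
For illustration, a minimal sketch of both forms (the names and the
message are arbitrary):

   total = 0
   assert total >= 0                        # simple form
   assert total == 0, "unexpected total"    # extended form; the second
                                            # expression is passed to
                                            # AssertionError
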
Assignment statements
*********************

Assignment statements are used to (re)bind names to values and to
modify attributes or items of mutable objects:

   assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)
   target_list     ::= target ("," target)* [","]
   target          ::= identifier
              | "(" target_list ")"
              | "[" [target_list] "]"
              | attributeref
              | subscription
              | slicing

(See section Primaries for the syntax definitions for the last three
symbols.)

An assignment statement evaluates the expression list (remember that
this can be a single expression or a comma-separated list, the latter
yielding a tuple) and assigns the single resulting object to each of
the target lists, from left to right.

Assignment is defined recursively depending on the form of the target
(list). When a target is part of a mutable object (an attribute
reference, subscription or slicing), the mutable object must
ultimately perform the assignment and decide about its validity, and
may raise an exception if the assignment is unacceptable.  The rules
observed by various types and the exceptions raised are given with the
definition of the object types (see section The standard type
hierarchy).

Assignment of an object to a target list is recursively defined as
follows.

* If the target list is a single target: The object is assigned to
  that target.

* If the target list is a comma-separated list of targets: The
  object must be an iterable with the same number of items as there
  are targets in the target list, and the items are assigned, from
  left to right, to the corresponding targets.

Assignment of an object to a single target is recursively defined as
follows.

* If the target is an identifier (name):

  * If the name does not occur in a "global" statement in the
    current code block: the name is bound to the object in the current
    local namespace.

  * Otherwise: the name is bound to the object in the current global
    namespace.

  The name is rebound if it was already bound.  This may cause the
  reference count for the object previously bound to the name to reach
  zero, causing the object to be deallocated and its destructor (if it
  has one) to be called.

* If the target is a target list enclosed in parentheses or in
  square brackets: The object must be an iterable with the same number
  of items as there are targets in the target list, and its items are
  assigned, from left to right, to the corresponding targets.

* If the target is an attribute reference: The primary expression in
  the reference is evaluated.  It should yield an object with
  assignable attributes; if this is not the case, "TypeError" is
  raised.  That object is then asked to assign the assigned object to
  the given attribute; if it cannot perform the assignment, it raises
  an exception (usually but not necessarily "AttributeError").

  Note: If the object is a class instance and the attribute reference
  occurs on both sides of the assignment operator, the RHS expression,
  "a.x" can access either an instance attribute or (if no instance
  attribute exists) a class attribute.  The LHS target "a.x" is always
  set as an instance attribute, creating it if necessary.  Thus, the
  two occurrences of "a.x" do not necessarily refer to the same
  attribute: if the RHS expression refers to a class attribute, the
  LHS creates a new instance attribute as the target of the
  assignment:

     class Cls:
         x = 3             # class variable
     inst = Cls()
     inst.x = inst.x + 1   # writes inst.x as 4 leaving Cls.x as 3

  This description does not necessarily apply to descriptor
  attributes, such as properties created with "property()".

* If the target is a subscription: The primary expression in the
  reference is evaluated.  It should yield either a mutable sequence
  object (such as a list) or a mapping object (such as a dictionary).
  Next, the subscript expression is evaluated.

  If the primary is a mutable sequence object (such as a list), the
  subscript must yield a plain integer.  If it is negative, the
  sequence's length is added to it. The resulting value must be a
  nonnegative integer less than the sequence's length, and the
  sequence is asked to assign the assigned object to its item with
  that index.  If the index is out of range, "IndexError" is raised
  (assignment to a subscripted sequence cannot add new items to a
  list).

  If the primary is a mapping object (such as a dictionary), the
  subscript must have a type compatible with the mapping's key type,
  and the mapping is then asked to create a key/datum pair which maps
  the subscript to the assigned object.  This can either replace an
  existing key/value pair with the same key value, or insert a new
  key/value pair (if no key with the same value existed).

* If the target is a slicing: The primary expression in the
  reference is evaluated.  It should yield a mutable sequence object
  (such as a list).  The assigned object should be a sequence object
  of the same type.  Next, the lower and upper bound expressions are
  evaluated, insofar as they are present; defaults are zero and the
  sequence's length.  The bounds should evaluate to (small) integers.
  If either bound is negative, the sequence's length is added to it.
  The resulting bounds are clipped to lie between zero and the
  sequence's length, inclusive.  Finally, the sequence object is asked
  to replace the slice with the items of the assigned sequence.  The
  length of the slice may be different from the length of the assigned
  sequence, thus changing the length of the target sequence, if the
  object allows it.
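
The subscription and slicing cases above can be illustrated with a
short sketch (the values are arbitrary):

   items = [0, 1, 2, 3]
   items[1] = 'a'           # subscription: items is now [0, 'a', 2, 3]
   items[1:3] = [9, 9, 9]   # slicing: items is now [0, 9, 9, 9, 3]

   table = {}
   table['key'] = 'value'   # mapping subscription inserts a new pair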

**CPython implementation detail:** In the current implementation, the
syntax for targets is taken to be the same as for expressions, and
invalid syntax is rejected during the code generation phase, causing
less detailed error messages.

WARNING: Although the definition of assignment implies that overlaps
between the left-hand side and the right-hand side are 'safe' (for
example "a, b = b, a" swaps two variables), overlaps *within* the
collection of assigned-to variables are not safe!  For instance, the
following program prints "[0, 2]":

   x = [0, 1]
   i = 0
   i, x[i] = 1, 2
   print x


Augmented assignment statements
===============================

Augmented assignment is the combination, in a single statement, of a
binary operation and an assignment statement:

   augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)
   augtarget                 ::= identifier | attributeref | subscription | slicing
   augop                     ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="
             | ">>=" | "<<=" | "&=" | "^=" | "|="

(See section Primaries for the syntax definitions for the last three
symbols.)

An augmented assignment evaluates the target (which, unlike normal
assignment statements, cannot be an unpacking) and the expression
list, performs the binary operation specific to the type of assignment
on the two operands, and assigns the result to the original target.
The target is only evaluated once.

An augmented assignment expression like "x += 1" can be rewritten as
"x = x + 1" to achieve a similar, but not exactly equal effect. In the
augmented version, "x" is only evaluated once. Also, when possible,
the actual operation is performed *in-place*, meaning that rather than
creating a new object and assigning that to the target, the old object
is modified instead.

With the exception of assigning to tuples and multiple targets in a
single statement, the assignment done by augmented assignment
statements is handled the same way as normal assignments. Similarly,
with the exception of the possible *in-place* behavior, the binary
operation performed by augmented assignment is the same as the normal
binary operations.

For targets which are attribute references, the same caveat about
class and instance attributes applies as for regular assignments.
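
A small sketch of the in-place behavior for a mutable target (the
names are arbitrary):

   a = [1, 2]
   b = a
   a += [3]        # in-place: extends the existing list object, so
                   # b is now [1, 2, 3] as well
   a = a + [4]     # creates a new list and rebinds a; b is unchanged
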
Identifiers (Names)
*******************

An identifier occurring as an atom is a name.  See section Identifiers
and keywords for lexical definition and section Naming and binding for
documentation of naming and binding.

When the name is bound to an object, evaluation of the atom yields
that object. When a name is not bound, an attempt to evaluate it
raises a "NameError" exception.

**Private name mangling:** When an identifier that textually occurs in
a class definition begins with two or more underscore characters and
does not end in two or more underscores, it is considered a *private
name* of that class. Private names are transformed to a longer form
before code is generated for them.  The transformation inserts the
class name, with leading underscores removed and a single underscore
inserted, in front of the name.  For example, the identifier "__spam"
occurring in a class named "Ham" will be transformed to "_Ham__spam".
This transformation is independent of the syntactical context in which
the identifier is used.  If the transformed name is extremely long
(longer than 255 characters), implementation defined truncation may
happen. If the class name consists only of underscores, no
transformation is done.
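
A brief sketch of the transformation (the class and attribute names
are arbitrary):

   class Ham:
       def __init__(self):
           self.__spam = 1          # stored as self._Ham__spam

   h = Ham()
   # h.__spam raises AttributeError outside the class body, because no
   # mangling is applied there; h._Ham__spam is 1
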
Literals
********

Python supports string literals and various numeric literals:

   literal ::= stringliteral | integer | longinteger
               | floatnumber | imagnumber

Evaluation of a literal yields an object of the given type (string,
integer, long integer, floating point number, complex number) with the
given value.  The value may be approximated in the case of floating
point and imaginary (complex) literals.  See section Literals for
details.

All literals correspond to immutable data types, and hence the
object's identity is less important than its value.  Multiple
evaluations of literals with the same value (either the same
occurrence in the program text or a different occurrence) may obtain
the same object or a different object with the same value.
Customizing attribute access
****************************

The following methods can be defined to customize the meaning of
attribute access (use of, assignment to, or deletion of "x.name") for
class instances.

object.__getattr__(self, name)

   Called when an attribute lookup has not found the attribute in the
   usual places (i.e. it is not an instance attribute nor is it found
   in the class tree for "self").  "name" is the attribute name. This
   method should return the (computed) attribute value or raise an
   "AttributeError" exception.

   Note that if the attribute is found through the normal mechanism,
   "__getattr__()" is not called.  (This is an intentional asymmetry
   between "__getattr__()" and "__setattr__()".) This is done both for
   efficiency reasons and because otherwise "__getattr__()" would have
   no way to access other attributes of the instance.  Note that at
   least for instance variables, you can fake total control by not
   inserting any values in the instance attribute dictionary (but
   instead inserting them in another object).  See the
   "__getattribute__()" method below for a way to actually get total
   control in new-style classes.

object.__setattr__(self, name, value)

   Called when an attribute assignment is attempted.  This is called
   instead of the normal mechanism (i.e. store the value in the
   instance dictionary).  *name* is the attribute name, *value* is the
   value to be assigned to it.

   If "__setattr__()" wants to assign to an instance attribute, it
   should not simply execute "self.name = value" --- this would cause
   a recursive call to itself.  Instead, it should insert the value in
   the dictionary of instance attributes, e.g., "self.__dict__[name] =
   value".  For new-style classes, rather than accessing the instance
   dictionary, it should call the base class method with the same
   name, for example, "object.__setattr__(self, name, value)".

object.__delattr__(self, name)

   Like "__setattr__()" but for attribute deletion instead of
   assignment.  This should only be implemented if "del obj.name" is
   meaningful for the object.
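
As an illustrative sketch only, a new-style class combining the two
methods in the way recommended above (the class name and the
placeholder string are arbitrary):

   class Traced(object):
       def __setattr__(self, name, value):
           # delegate to the base class instead of "self.name = value",
           # which would call __setattr__ again and recurse
           object.__setattr__(self, name, value)

       def __getattr__(self, name):
           # invoked only when normal lookup fails
           return "<missing %s>" % name

   t = Traced()
   t.x = 1     # goes through __setattr__, then stored normally
   # t.x == 1                 # found normally; __getattr__ not called
   # t.y == "<missing y>"     # lookup failed, so __getattr__ was called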


More attribute access for new-style classes
===========================================

The following methods only apply to new-style classes.

object.__getattribute__(self, name)

   Called unconditionally to implement attribute accesses for
   instances of the class. If the class also defines "__getattr__()",
   the latter will not be called unless "__getattribute__()" either
   calls it explicitly or raises an "AttributeError". This method
   should return the (computed) attribute value or raise an
   "AttributeError" exception. In order to avoid infinite recursion in
   this method, its implementation should always call the base class
   method with the same name to access any attributes it needs, for
   example, "object.__getattribute__(self, name)".

   Note: This method may still be bypassed when looking up special
     methods as the result of implicit invocation via language syntax
     or built-in functions. See Special method lookup for new-style
     classes.


Implementing Descriptors
========================

The following methods only apply when an instance of the class
containing the method (a so-called *descriptor* class) appears in an
*owner* class (the descriptor must be in either the owner's class
dictionary or in the class dictionary for one of its parents).  In the
examples below, "the attribute" refers to the attribute whose name is
the key of the property in the owner class' "__dict__".

object.__get__(self, instance, owner)

   Called to get the attribute of the owner class (class attribute
   access) or of an instance of that class (instance attribute
   access). *owner* is always the owner class, while *instance* is the
   instance that the attribute was accessed through, or "None" when
   the attribute is accessed through the *owner*.  This method should
   return the (computed) attribute value or raise an "AttributeError"
   exception.

object.__set__(self, instance, value)

   Called to set the attribute on an instance *instance* of the owner
   class to a new value, *value*.

object.__delete__(self, instance)

   Called to delete the attribute on an instance *instance* of the
   owner class.
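
As an illustrative sketch, a simple data descriptor that validates
assignments (the class names are arbitrary; a new-style owner class is
assumed):

   class NonNegative(object):
       def __init__(self, name):
           self.name = name

       def __get__(self, instance, owner):
           if instance is None:
               return self          # accessed through the owner class
           return instance.__dict__.get(self.name, 0)

       def __set__(self, instance, value):
           if value < 0:
               raise ValueError("must be non-negative")
           instance.__dict__[self.name] = value

   class Account(object):
       balance = NonNegative('balance')

   acct = Account()
   acct.balance = 10      # calls NonNegative.__set__
   # acct.balance == 10   # calls NonNegative.__get__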


Invoking Descriptors
====================

In general, a descriptor is an object attribute with "binding
behavior", one whose attribute access has been overridden by methods
in the descriptor protocol:  "__get__()", "__set__()", and
"__delete__()". If any of those methods are defined for an object, it
is said to be a descriptor.

The default behavior for attribute access is to get, set, or delete
the attribute from an object's dictionary. For instance, "a.x" has a
lookup chain starting with "a.__dict__['x']", then
"type(a).__dict__['x']", and continuing through the base classes of
"type(a)" excluding metaclasses.

However, if the looked-up value is an object defining one of the
descriptor methods, then Python may override the default behavior and
invoke the descriptor method instead.  Where this occurs in the
precedence chain depends on which descriptor methods were defined and
how they were called.  Note that descriptors are only invoked for new
style objects or classes (ones that subclass "object()" or "type()").

The starting point for descriptor invocation is a binding, "a.x". How
the arguments are assembled depends on "a":

Direct Call
   The simplest and least common call is when user code directly
   invokes a descriptor method: "x.__get__(a)".

Instance Binding
   If binding to a new-style object instance, "a.x" is transformed
   into the call: "type(a).__dict__['x'].__get__(a, type(a))".

Class Binding
   If binding to a new-style class, "A.x" is transformed into the
   call: "A.__dict__['x'].__get__(None, A)".

Super Binding
   If "a" is an instance of "super", then the binding "super(B,
   obj).m()" searches "obj.__class__.__mro__" for the base class "A"
   immediately preceding "B" and then invokes the descriptor with the
   call: "A.__dict__['m'].__get__(obj, obj.__class__)".

For instance bindings, the precedence of descriptor invocation depends
on which descriptor methods are defined.  A descriptor can define
any combination of "__get__()", "__set__()" and "__delete__()".  If it
does not define "__get__()", then accessing the attribute will return
the descriptor object itself unless there is a value in the object's
instance dictionary.  If the descriptor defines "__set__()" and/or
"__delete__()", it is a data descriptor; if it defines neither, it is
a non-data descriptor.  Normally, data descriptors define both
"__get__()" and "__set__()", while non-data descriptors have just the
"__get__()" method.  Data descriptors with "__set__()" and "__get__()"
defined always override a redefinition in an instance dictionary.  In
contrast, non-data descriptors can be overridden by instances.

Python methods (including "staticmethod()" and "classmethod()") are
implemented as non-data descriptors.  Accordingly, instances can
redefine and override methods.  This allows individual instances to
acquire behaviors that differ from other instances of the same class.

The "property()" function is implemented as a data descriptor.
Accordingly, instances cannot override the behavior of a property.


__slots__
=========

By default, instances of both old and new-style classes have a
dictionary for attribute storage.  This wastes space for objects
having very few instance variables.  The space consumption can become
acute when creating large numbers of instances.

The default can be overridden by defining *__slots__* in a new-style
class definition.  The *__slots__* declaration takes a sequence of
instance variables and reserves just enough space in each instance to
hold a value for each variable.  Space is saved because *__dict__* is
not created for each instance.

__slots__

   This class variable can be assigned a string, iterable, or sequence
   of strings with variable names used by instances.  If defined in a
   new-style class, *__slots__* reserves space for the declared
   variables and prevents the automatic creation of *__dict__* and
   *__weakref__* for each instance.

   New in version 2.2.
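
A minimal sketch (the class and attribute names are arbitrary):

   class Point(object):
       __slots__ = ('x', 'y')

   p = Point()
   p.x = 1
   p.y = 2
   # p.z = 3 would raise AttributeError: there is no __dict__ to hold
   # names that are not listed in __slots__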

Notes on using *__slots__*

* When inheriting from a class without *__slots__*, the *__dict__*
  attribute of that class will always be accessible, so a *__slots__*
  definition in the subclass is meaningless.

* Without a *__dict__* variable, instances cannot be assigned new
  variables not listed in the *__slots__* definition.  Attempts to
  assign to an unlisted variable name raise "AttributeError". If
  dynamic assignment of new variables is desired, then add
  "'__dict__'" to the sequence of strings in the *__slots__*
  declaration.

  Changed in version 2.3: Previously, adding "'__dict__'" to the
  *__slots__* declaration would not enable the assignment of new
  attributes not specifically listed in the sequence of instance
  variable names.

* Without a *__weakref__* variable for each instance, classes
  defining *__slots__* do not support weak references to their
  instances. If weak reference support is needed, then add
  "'__weakref__'" to the sequence of strings in the *__slots__*
  declaration.

  Changed in version 2.3: Previously, adding "'__weakref__'" to the
  *__slots__* declaration would not enable support for weak
  references.

* *__slots__* are implemented at the class level by creating
  descriptors (Implementing Descriptors) for each variable name.  As a
  result, class attributes cannot be used to set default values for
  instance variables defined by *__slots__*; otherwise, the class
  attribute would overwrite the descriptor assignment.

* The action of a *__slots__* declaration is limited to the class
  where it is defined.  As a result, subclasses will have a *__dict__*
  unless they also define *__slots__* (which must only contain names
  of any *additional* slots).

* If a class defines a slot also defined in a base class, the
  instance variable defined by the base class slot is inaccessible
  (except by retrieving its descriptor directly from the base class).
  This renders the meaning of the program undefined.  In the future, a
  check may be added to prevent this.

* Nonempty *__slots__* does not work for classes derived from
  "variable-length" built-in types such as "long", "str" and "tuple".

* Any non-string iterable may be assigned to *__slots__*. Mappings
  may also be used; however, in the future, special meaning may be
  assigned to the values corresponding to each key.

* *__class__* assignment works only if both classes have the same
  *__slots__*.

  Changed in version 2.6: Previously, *__class__* assignment raised an
  error if either new or old class had *__slots__*.
Attribute references
********************

An attribute reference is a primary followed by a period and a name:

   attributeref ::= primary "." identifier

The primary must evaluate to an object of a type that supports
attribute references, e.g., a module, list, or an instance.  This
object is then asked to produce the attribute whose name is the
identifier.  If this attribute is not available, the exception
"AttributeError" is raised. Otherwise, the type and value of the
object produced is determined by the object.  Multiple evaluations of
the same attribute reference may yield different objects.
Augmented assignment statements
*******************************

Augmented assignment is the combination, in a single statement, of a
binary operation and an assignment statement:

   augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)
   augtarget                 ::= identifier | attributeref | subscription | slicing
   augop                     ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="
             | ">>=" | "<<=" | "&=" | "^=" | "|="

(See section Primaries for the syntax definitions for the last three
symbols.)

An augmented assignment evaluates the target (which, unlike normal
assignment statements, cannot be an unpacking) and the expression
list, performs the binary operation specific to the type of assignment
on the two operands, and assigns the result to the original target.
The target is only evaluated once.

An augmented assignment expression like "x += 1" can be rewritten as
"x = x + 1" to achieve a similar, but not exactly equal effect. In the
augmented version, "x" is only evaluated once. Also, when possible,
the actual operation is performed *in-place*, meaning that rather than
creating a new object and assigning that to the target, the old object
is modified instead.

With the exception of assigning to tuples and multiple targets in a
single statement, the assignment done by augmented assignment
statements is handled the same way as normal assignments. Similarly,
with the exception of the possible *in-place* behavior, the binary
operation performed by augmented assignment is the same as the normal
binary operations.

For targets which are attribute references, the same caveat about
class and instance attributes applies as for regular assignments.
Binary arithmetic operations
****************************

The binary arithmetic operations have the conventional priority
levels.  Note that some of these operations also apply to certain non-
numeric types.  Apart from the power operator, there are only two
levels, one for multiplicative operators and one for additive
operators:

   m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr
              | m_expr "%" u_expr
   a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr

The "*" (multiplication) operator yields the product of its arguments.
The arguments must either both be numbers, or one argument must be an
integer (plain or long) and the other must be a sequence. In the
former case, the numbers are converted to a common type and then
multiplied together.  In the latter case, sequence repetition is
performed; a negative repetition factor yields an empty sequence.

The "/" (division) and "//" (floor division) operators yield the
quotient of their arguments.  The numeric arguments are first
converted to a common type. Plain or long integer division yields an
integer of the same type; the result is that of mathematical division
with the 'floor' function applied to the result. Division by zero
raises the "ZeroDivisionError" exception.

The "%" (modulo) operator yields the remainder from the division of
the first argument by the second.  The numeric arguments are first
converted to a common type.  A zero right argument raises the
"ZeroDivisionError" exception.  The arguments may be floating point
numbers, e.g., "3.14%0.7" equals "0.34" (since "3.14" equals "4*0.7 +
0.34".)  The modulo operator always yields a result with the same sign
as its second operand (or zero); the absolute value of the result is
strictly smaller than the absolute value of the second operand [2].

The integer division and modulo operators are connected by the
following identity: "x == (x/y)*y + (x%y)".  Integer division and
modulo are also connected with the built-in function "divmod()":
"divmod(x, y) == (x/y, x%y)".  These identities don't hold for
floating point numbers; there similar identities hold approximately
where "x/y" is replaced by "floor(x/y)" or "floor(x/y) - 1" [3].
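
For example, with negative operands (the values are arbitrary):

   7 // 2          # 3
   -7 // 2         # -4 (floored, not truncated toward zero)
   -7 % 2          # 1  (same sign as the second operand)
   divmod(-7, 2)   # (-4, 1); (-4)*2 + 1 == -7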

In addition to performing the modulo operation on numbers, the "%"
operator is also overloaded by string and unicode objects to perform
string formatting (also known as interpolation). The syntax for string
formatting is described in the Python Library Reference, section
String Formatting Operations.

Deprecated since version 2.3: The floor division operator, the modulo
operator, and the "divmod()" function are no longer defined for
complex numbers.  Instead, convert to a floating point number using
the "abs()" function if appropriate.

The "+" (addition) operator yields the sum of its arguments. The
arguments must either both be numbers or both sequences of the same
type.  In the former case, the numbers are converted to a common type
and then added together.  In the latter case, the sequences are
concatenated.

The "-" (subtraction) operator yields the difference of its arguments.
The numeric arguments are first converted to a common type.
Binary bitwise operations
*************************

Each of the three bitwise operations has a different priority level:

   and_expr ::= shift_expr | and_expr "&" shift_expr
   xor_expr ::= and_expr | xor_expr "^" and_expr
   or_expr  ::= xor_expr | or_expr "|" xor_expr

The "&" operator yields the bitwise AND of its arguments, which must
be plain or long integers.  The arguments are converted to a common
type.

The "^" operator yields the bitwise XOR (exclusive OR) of its
arguments, which must be plain or long integers.  The arguments are
converted to a common type.

The "|" operator yields the bitwise (inclusive) OR of its arguments,
which must be plain or long integers.  The arguments are converted to
a common type.
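
For illustration (the operands are arbitrary):

   0b1100 & 0b1010   # 0b1000 == 8
   0b1100 ^ 0b1010   # 0b0110 == 6
   0b1100 | 0b1010   # 0b1110 == 14
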
Code Objects
************

Code objects are used by the implementation to represent "pseudo-
compiled" executable Python code such as a function body. They differ
from function objects because they don't contain a reference to their
global execution environment.  Code objects are returned by the built-
in "compile()" function and can be extracted from function objects
through their "func_code" attribute. See also the "code" module.

A code object can be executed or evaluated by passing it (instead of a
source string) to the "exec" statement or the built-in "eval()"
function.

See The standard type hierarchy for more information.
The Ellipsis Object
*******************

This object is used by extended slice notation (see Slicings).  It
supports no special operations.  There is exactly one ellipsis object,
named "Ellipsis" (a built-in name).

It is written as "Ellipsis".  When in a subscript, it can also be
written as "...", for example "seq[...]".
File Objects
************

File objects are implemented using C's "stdio" package and can be
created with the built-in "open()" function.  File objects are also
returned by some other built-in functions and methods, such as
"os.popen()" and "os.fdopen()" and the "makefile()" method of socket
objects. Temporary files can be created using the "tempfile" module,
and high-level file operations such as copying, moving, and deleting
files and directories can be achieved with the "shutil" module.

When a file operation fails for an I/O-related reason, the exception
"IOError" is raised.  This includes situations where the operation is
not defined for some reason, like "seek()" on a tty device or writing
a file opened for reading.

Files have the following methods:

file.close()

   Close the file.  A closed file cannot be read or written any more.
   Any operation which requires that the file be open will raise a
   "ValueError" after the file has been closed.  Calling "close()"
   more than once is allowed.

   As of Python 2.5, you can avoid having to call this method
   explicitly if you use the "with" statement.  For example, the
   following code will automatically close *f* when the "with" block
   is exited:

      from __future__ import with_statement # This isn't required in Python 2.6

      with open("hello.txt") as f:
          for line in f:
              print line,

   In older versions of Python, you would have needed to do this to
   get the same effect:

      f = open("hello.txt")
      try:
          for line in f:
              print line,
      finally:
          f.close()

   Note: Not all "file-like" types in Python support use as a
     context manager for the "with" statement.  If your code is
     intended to work with any file-like object, you can use the
     function "contextlib.closing()" instead of using the object
     directly.

file.flush()

   Flush the internal buffer, like "stdio"'s "fflush()".  This may be
   a no-op on some file-like objects.

   Note: "flush()" does not necessarily write the file's data to
     disk. Use "flush()" followed by "os.fsync()" to ensure this
     behavior.
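
   For example, a short sketch of that pattern (the file name is
   arbitrary):

      import os

      f = open("data.log", "w")
      f.write("important record\n")
      f.flush()                # push the buffered data to the OS
      os.fsync(f.fileno())     # ask the OS to write its buffers to disk
      f.close()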

file.fileno()

   Return the integer "file descriptor" that is used by the underlying
   implementation to request I/O operations from the operating system.
   This can be useful for other, lower level interfaces that use file
   descriptors, such as the "fcntl" module or "os.read()" and friends.

   Note: File-like objects which do not have a real file descriptor
     should *not* provide this method!

file.isatty()

   Return "True" if the file is connected to a tty(-like) device, else
   "False".

   Note: If a file-like object is not associated with a real file,
     this method should *not* be implemented.

file.next()

   A file object is its own iterator, for example "iter(f)" returns
   *f* (unless *f* is closed).  When a file is used as an iterator,
   typically in a "for" loop (for example, "for line in f: print
   line.strip()"), the "next()" method is called repeatedly.  This
   method returns the next input line, or raises "StopIteration" when
   EOF is hit when the file is open for reading (behavior is undefined
   when the file is open for writing).  In order to make a "for" loop
   the most efficient way of looping over the lines of a file (a very
   common operation), the "next()" method uses a hidden read-ahead
   buffer.  As a consequence of using a read-ahead buffer, combining
   "next()" with other file methods (like "readline()") does not work
   right.  However, using "seek()" to reposition the file to an
   absolute position will flush the read-ahead buffer.

   New in version 2.3.

file.read([size])

   Read at most *size* bytes from the file (less if the read hits EOF
   before obtaining *size* bytes).  If the *size* argument is negative
   or omitted, read all data until EOF is reached.  The bytes are
   returned as a string object.  An empty string is returned when EOF
   is encountered immediately.  (For certain files, like ttys, it
   makes sense to continue reading after an EOF is hit.)  Note that
   this method may call the underlying C function "fread()" more than
   once in an effort to acquire as close to *size* bytes as possible.
   Also note that when in non-blocking mode, less data than was
   requested may be returned, even if no *size* parameter was given.

   Note: This function is simply a wrapper for the underlying
     "fread()" C function, and will behave the same in corner cases,
     such as whether the EOF value is cached.

file.readline([size])

   Read one entire line from the file.  A trailing newline character
   is kept in the string (but may be absent when a file ends with an
   incomplete line). [6] If the *size* argument is present and non-
   negative, it is a maximum byte count (including the trailing
   newline) and an incomplete line may be returned. When *size* is not
   0, an empty string is returned *only* when EOF is encountered
   immediately.

   Note: Unlike "stdio"'s "fgets()", the returned string contains
     null characters ("'\0'") if they occurred in the input.

file.readlines([sizehint])

   Read until EOF using "readline()" and return a list containing the
   lines thus read.  If the optional *sizehint* argument is present,
   instead of reading up to EOF, whole lines totalling approximately
   *sizehint* bytes (possibly after rounding up to an internal buffer
   size) are read.  Objects implementing a file-like interface may
   choose to ignore *sizehint* if it cannot be implemented, or cannot
   be implemented efficiently.

file.xreadlines()

   This method returns the same thing as "iter(f)".

   New in version 2.1.

   Deprecated since version 2.3: Use "for line in file" instead.

file.seek(offset[, whence])

   Set the file's current position, like "stdio"'s "fseek()". The
   *whence* argument is optional and defaults to  "os.SEEK_SET" or "0"
   (absolute file positioning); other values are "os.SEEK_CUR" or "1"
   (seek relative to the current position) and "os.SEEK_END" or "2"
   (seek relative to the file's end).  There is no return value.

   For example, "f.seek(2, os.SEEK_CUR)" advances the position by two
   and "f.seek(-3, os.SEEK_END)" sets the position to the third to
   last.

   Note that if the file is opened for appending (mode "'a'" or
   "'a+'"), any "seek()" operations will be undone at the next write.
   If the file is only opened for writing in append mode (mode "'a'"),
   this method is essentially a no-op, but it remains useful for files
   opened in append mode with reading enabled (mode "'a+'").  If the
   file is opened in text mode (without "'b'"), only offsets returned
   by "tell()" are legal.  Use of other offsets causes undefined
   behavior.

   Note that not all file objects are seekable.

   Changed in version 2.6: Passing float values as offset has been
   deprecated.

file.tell()

   Return the file's current position, like "stdio"'s "ftell()".

   Note: On Windows, "tell()" can return illegal values (after an
     "fgets()") when reading files with Unix-style line-endings. Use
     binary mode ("'rb'") to circumvent this problem.

file.truncate([size])

   Truncate the file's size.  If the optional *size* argument is
   present, the file is truncated to (at most) that size.  The size
   defaults to the current position. The current file position is not
   changed.  Note that if a specified size exceeds the file's current
   size, the result is platform-dependent:  possibilities include that
   the file may remain unchanged, increase to the specified size as if
   zero-filled, or increase to the specified size with undefined new
   content. Availability:  Windows, many Unix variants.

file.write(str)

   Write a string to the file.  There is no return value.  Due to
   buffering, the string may not actually show up in the file until
   the "flush()" or "close()" method is called.

file.writelines(sequence)

   Write a sequence of strings to the file.  The sequence can be any
   iterable object producing strings, typically a list of strings.
   There is no return value. (The name is intended to match
   "readlines()"; "writelines()" does not add line separators.)

Files support the iterator protocol.  Each iteration returns the same
result as "readline()", and iteration ends when the "readline()"
method returns an empty string.

File objects also offer a number of other interesting attributes.
These are not required for file-like objects, but should be
implemented if they make sense for the particular object.

file.closed

   bool indicating the current state of the file object.  This is a
   read-only attribute; the "close()" method changes the value. It may
   not be available on all file-like objects.

file.encoding

   The encoding that this file uses. When Unicode strings are written
   to a file, they will be converted to byte strings using this
   encoding. In addition, when the file is connected to a terminal,
   the attribute gives the encoding that the terminal is likely to use
   (that  information might be incorrect if the user has misconfigured
   the  terminal). The attribute is read-only and may not be present
   on all file-like objects. It may also be "None", in which case the
   file uses the system default encoding for converting Unicode
   strings.

   New in version 2.3.

file.errors

   The Unicode error handler used along with the encoding.

   New in version 2.6.

file.mode

   The I/O mode for the file.  If the file was created using the
   "open()" built-in function, this will be the value of the *mode*
   parameter.  This is a read-only attribute and may not be present on
   all file-like objects.

file.name

   If the file object was created using "open()", the name of the
   file. Otherwise, some string that indicates the source of the file
   object, of the form "<...>".  This is a read-only attribute and may
   not be present on all file-like objects.

file.newlines

   If Python was built with *universal newlines* enabled (the default)
   this read-only attribute exists, and for files opened in universal
   newline read mode it keeps track of the types of newlines
   encountered while reading the file. The values it can take are
   "'\r'", "'\n'", "'\r\n'", "None" (unknown, no newlines read yet) or
   a tuple containing all the newline types seen, to indicate that
   multiple newline conventions were encountered. For files not opened
   in universal newlines read mode the value of this attribute will be
   "None".

file.softspace

   Boolean that indicates whether a space character needs to be
   printed before another value when using the "print" statement.
   Classes that are trying to simulate a file object should also have
   a writable "softspace" attribute, which should be initialized to
   zero.  This will be automatic for most classes implemented in
   Python (care may be needed for objects that override attribute
   access); types implemented in C will have to provide a writable
   "softspace" attribute.

   Note: This attribute is not used to control the "print"
     statement, but to allow the implementation of "print" to keep
     track of its internal state.
The Null Object
***************

This object is returned by functions that don't explicitly return a
value.  It supports no special operations.  There is exactly one null
object, named "None" (a built-in name).

It is written as "None".
Type Objects
************

Type objects represent the various object types.  An object's type is
accessed by the built-in function "type()".  There are no special
operations on types.  The standard module "types" defines names for
all standard built-in types.

Types are written like this: "<type 'int'>".
Boolean operations
******************

   or_test  ::= and_test | or_test "or" and_test
   and_test ::= not_test | and_test "and" not_test
   not_test ::= comparison | "not" not_test

In the context of Boolean operations, and also when expressions are
used by control flow statements, the following values are interpreted
as false: "False", "None", numeric zero of all types, and empty
strings and containers (including strings, tuples, lists,
dictionaries, sets and frozensets).  All other values are interpreted
as true.  (See the "__nonzero__()" special method for a way to change
this.)

The operator "not" yields "True" if its argument is false, "False"
otherwise.

The expression "x and y" first evaluates *x*; if *x* is false, its
value is returned; otherwise, *y* is evaluated and the resulting value
is returned.

The expression "x or y" first evaluates *x*; if *x* is true, its value
is returned; otherwise, *y* is evaluated and the resulting value is
returned.

(Note that neither "and" nor "or" restrict the value and type they
return to "False" and "True", but rather return the last evaluated
argument. This is sometimes useful, e.g., if "s" is a string that
should be replaced by a default value if it is empty, the expression
"s or 'foo'" yields the desired value.  Because "not" has to invent a
value anyway, it does not bother to return a value of the same type as
its argument, so e.g., "not 'foo'" yields "False", not "''".)
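
For example (the operands are arbitrary):

   '' or 'foo'       # 'foo'  (first operand is false, second returned)
   'bar' or 'foo'    # 'bar'  (first operand is true and returned)
   'bar' and 'foo'   # 'foo'
   0 and 'foo'       # 0
   not 'foo'         # False  (always a bool)
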
The "break" statement
*********************

   break_stmt ::= "break"

"break" may only occur syntactically nested in a "for" or "while"
loop, but not nested in a function or class definition within that
loop.

It terminates the nearest enclosing loop, skipping the optional "else"
clause if the loop has one.

If a "for" loop is terminated by "break", the loop control target
keeps its current value.

When "break" passes control out of a "try" statement with a "finally"
clause, that "finally" clause is executed before really leaving the
loop.
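
A small sketch showing "break" skipping the loop's "else" clause (the
values are arbitrary):

   for n in [2, 3, 4, 5]:
       if n % 2 == 0 and n > 2:
           break            # terminates the loop; n keeps the value 4
   else:
       print "no break"     # skipped, because the loop ended via break
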
Emulating callable objects
**************************

object.__call__(self[, args...])

   Called when the instance is "called" as a function; if this method
   is defined, "x(arg1, arg2, ...)" is a shorthand for
   "x.__call__(arg1, arg2, ...)".
Calls
*****

A call calls a callable object (e.g., a *function*) with a possibly
empty series of *arguments*:

   call                 ::= primary "(" [argument_list [","]
            | expression genexpr_for] ")"
   argument_list        ::= positional_arguments ["," keyword_arguments]
                       ["," "*" expression] ["," keyword_arguments]
                       ["," "**" expression]
                     | keyword_arguments ["," "*" expression]
                       ["," "**" expression]
                     | "*" expression ["," keyword_arguments] ["," "**" expression]
                     | "**" expression
   positional_arguments ::= expression ("," expression)*
   keyword_arguments    ::= keyword_item ("," keyword_item)*
   keyword_item         ::= identifier "=" expression

A trailing comma may be present after the positional and keyword
arguments but does not affect the semantics.

The primary must evaluate to a callable object (user-defined
functions, built-in functions, methods of built-in objects, class
objects, methods of class instances, and certain class instances
themselves are callable; extensions may define additional callable
object types).  All argument expressions are evaluated before the call
is attempted.  Please refer to section Function definitions for the
syntax of formal *parameter* lists.

If keyword arguments are present, they are first converted to
positional arguments, as follows.  First, a list of unfilled slots is
created for the formal parameters.  If there are N positional
arguments, they are placed in the first N slots.  Next, for each
keyword argument, the identifier is used to determine the
corresponding slot (if the identifier is the same as the first formal
parameter name, the first slot is used, and so on).  If the slot is
already filled, a "TypeError" exception is raised. Otherwise, the
value of the argument is placed in the slot, filling it (even if the
expression is "None", it fills the slot).  When all arguments have
been processed, the slots that are still unfilled are filled with the
corresponding default value from the function definition.  (Default
values are calculated, once, when the function is defined; thus, a
mutable object such as a list or dictionary used as default value will
be shared by all calls that don't specify an argument value for the
corresponding slot; this should usually be avoided.)  If there are any
unfilled slots for which no default value is specified, a "TypeError"
exception is raised.  Otherwise, the list of filled slots is used as
the argument list for the call.
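
The parenthetical note about default values being calculated once can
be illustrated as follows (the function name is arbitrary):

   def append_to(item, seq=[]):     # the list is created once, at
       seq.append(item)             # function definition time
       return seq

   append_to(1)     # [1]
   append_to(2)     # [1, 2]; the same list object is reused

   def append_to(item, seq=None):   # a common way to avoid the sharing
       if seq is None:
           seq = []
       seq.append(item)
       return seq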

**CPython implementation detail:** An implementation may provide
built-in functions whose positional parameters do not have names, even
if they are 'named' for the purpose of documentation, and which
therefore cannot be supplied by keyword.  In CPython, this is the case
for functions implemented in C that use "PyArg_ParseTuple()" to parse
their arguments.

If there are more positional arguments than there are formal parameter
slots, a "TypeError" exception is raised, unless a formal parameter
using the syntax "*identifier" is present; in this case, that formal
parameter receives a tuple containing the excess positional arguments
(or an empty tuple if there were no excess positional arguments).

If any keyword argument does not correspond to a formal parameter
name, a "TypeError" exception is raised, unless a formal parameter
using the syntax "**identifier" is present; in this case, that formal
parameter receives a dictionary containing the excess keyword
arguments (using the keywords as keys and the argument values as
corresponding values), or a (new) empty dictionary if there were no
excess keyword arguments.

If the syntax "*expression" appears in the function call, "expression"
must evaluate to an iterable.  Elements from this iterable are treated
as if they were additional positional arguments; if there are
positional arguments *x1*, ..., *xN*, and "expression" evaluates to a
sequence *y1*, ..., *yM*, this is equivalent to a call with M+N
positional arguments *x1*, ..., *xN*, *y1*, ..., *yM*.

A consequence of this is that although the "*expression" syntax may
appear *after* some keyword arguments, it is processed *before* the
keyword arguments (and the "**expression" argument, if any -- see
below).  So:

   >>> def f(a, b):
   ...     print a, b
   ...
   >>> f(b=1, *(2,))
   2 1
   >>> f(a=1, *(2,))
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   TypeError: f() got multiple values for keyword argument 'a'
   >>> f(1, *(2,))
   1 2

It is unusual for both keyword arguments and the "*expression" syntax
to be used in the same call, so in practice this confusion does not
arise.

If the syntax "**expression" appears in the function call,
"expression" must evaluate to a mapping, the contents of which are
treated as additional keyword arguments.  In the case of a keyword
appearing in both "expression" and as an explicit keyword argument, a
"TypeError" exception is raised.

Formal parameters using the syntax "*identifier" or "**identifier"
cannot be used as positional argument slots or as keyword argument
names.  Formal parameters using the syntax "(sublist)" cannot be used
as keyword argument names; the outermost sublist corresponds to a
single unnamed argument slot, and the argument value is assigned to
the sublist using the usual tuple assignment rules after all other
parameter processing is done.

A call always returns some value, possibly "None", unless it raises an
exception.  How this value is computed depends on the type of the
callable object.

If it is---

a user-defined function:
   The code block for the function is executed, passing it the
   argument list.  The first thing the code block will do is bind the
   formal parameters to the arguments; this is described in section
   Function definitions.  When the code block executes a "return"
   statement, this specifies the return value of the function call.

a built-in function or method:
   The result is up to the interpreter; see Built-in Functions for the
   descriptions of built-in functions and methods.

a class object:
   A new instance of that class is returned.

a class instance method:
   The corresponding user-defined function is called, with an argument
   list that is one longer than the argument list of the call: the
   instance becomes the first argument.

a class instance:
   The class must define a "__call__()" method; the effect is then the
   same as if that method was called.

Class definitions
*****************

A class definition defines a class object (see section The standard
type hierarchy):

   classdef    ::= "class" classname [inheritance] ":" suite
   inheritance ::= "(" [expression_list] ")"
   classname   ::= identifier

A class definition is an executable statement.  It first evaluates the
inheritance list, if present.  Each item in the inheritance list
should evaluate to a class object or class type which allows
subclassing.  The class's suite is then executed in a new execution
frame (see section Naming and binding), using a newly created local
namespace and the original global namespace. (Usually, the suite
contains only function definitions.)  When the class's suite finishes
execution, its execution frame is discarded but its local namespace is
saved. [4] A class object is then created using the inheritance list
for the base classes and the saved local namespace for the attribute
dictionary.  The class name is bound to this class object in the
original local namespace.

**Programmer's note:** Variables defined in the class definition are
class variables; they are shared by all instances.  To create instance
variables, they can be set in a method with "self.name = value".  Both
class and instance variables are accessible through the notation
""self.name"", and an instance variable hides a class variable with
the same name when accessed in this way. Class variables can be used
as defaults for instance variables, but using mutable values there can
lead to unexpected results.  For *new-style class*es, descriptors can
be used to create instance variables with different implementation
details.
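
A short sketch of the pitfall with mutable class variables (the names
are arbitrary):

   class Tricky(object):
       items = []                       # shared by all instances

       def add(self, value):
           self.items.append(value)     # mutates the class variable

   a = Tricky()
   b = Tricky()
   a.add(1)
   # b.items is also [1]; assigning "self.items = []" in __init__ gives
   # each instance its own list instead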

Class definitions, like function definitions, may be wrapped by one or
more *decorator* expressions.  The evaluation rules for the decorator
expressions are the same as for functions.  The result must be a class
object, which is then bound to the class name.

-[ Footnotes ]-

[1] The exception is propagated to the invocation stack unless
    there is a "finally" clause which happens to raise another
    exception. That new exception causes the old one to be lost.

[2] Currently, control "flows off the end" except in the case of
    an exception or the execution of a "return", "continue", or
    "break" statement.

[3] A string literal appearing as the first statement in the
    function body is transformed into the function's "__doc__"
    attribute and therefore the function's *docstring*.

[4] A string literal appearing as the first statement in the class
    body is transformed into the namespace's "__doc__" item and
    therefore the class's *docstring*.
Comparisons
***********

Unlike C, all comparison operations in Python have the same priority,
which is lower than that of any arithmetic, shifting or bitwise
operation.  Also unlike C, expressions like "a < b < c" have the
interpretation that is conventional in mathematics:

   comparison    ::= or_expr ( comp_operator or_expr )*
   comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="
                     | "is" ["not"] | ["not"] "in"

Comparisons yield boolean values: "True" or "False".

Comparisons can be chained arbitrarily, e.g., "x < y <= z" is
equivalent to "x < y and y <= z", except that "y" is evaluated only
once (but in both cases "z" is not evaluated at all when "x < y" is
found to be false).

Formally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,
*op2*, ..., *opN* are comparison operators, then "a op1 b op2 c ... y
opN z" is equivalent to "a op1 b and b op2 c and ... y opN z", except
that each expression is evaluated at most once.

Note that "a op1 b op2 c" doesn't imply any kind of comparison between
*a* and *c*, so that, e.g., "x < y > z" is perfectly legal (though
perhaps not pretty).
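
For example (the values are arbitrary):

   x, y, z = 1, 2, 0
   x < y <= z    # False: equivalent to (x < y) and (y <= z)
   x < y > z     # True:  legal, though y is compared with both sides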

The forms "<>" and "!=" are equivalent; for consistency with C, "!="
is preferred; where "!=" is mentioned below "<>" is also accepted.
The "<>" spelling is considered obsolescent.


Value comparisons
=================

The operators "<", ">", "==", ">=", "<=", and "!=" compare the values
of two objects.  The objects do not need to have the same type.

Chapter Objects, values and types states that objects have a value (in
addition to type and identity).  The value of an object is a rather
abstract notion in Python: For example, there is no canonical access
method for an object's value.  Also, there is no requirement that the
value of an object should be constructed in a particular way, e.g.
comprised of all its data attributes. Comparison operators implement a
particular notion of what the value of an object is.  One can think of
them as defining the value of an object indirectly, by means of their
comparison implementation.

Types can customize their comparison behavior by implementing a
"__cmp__()" method or *rich comparison methods* like "__lt__()",
described in Basic customization.

The default behavior for equality comparison ("==" and "!=") is based
on the identity of the objects.  Hence, equality comparison of
instances with the same identity results in equality, and equality
comparison of instances with different identities results in
inequality.  A motivation for this default behavior is the desire that
all objects should be reflexive (i.e. "x is y" implies "x == y").

The default order comparison ("<", ">", "<=", and ">=") gives a
consistent but arbitrary order.

(This unusual definition of comparison was used to simplify the
definition of operations like sorting and the "in" and "not in"
operators. In the future, the comparison rules for objects of
different types are likely to change.)

The behavior of the default equality comparison, that instances with
different identities are always unequal, may be in contrast to what
types will need that have a sensible definition of object value and
value-based equality.  Such types will need to customize their
comparison behavior, and in fact, a number of built-in types have done
that.

The following list describes the comparison behavior of the most
important built-in types.

* Numbers of built-in numeric types (Numeric Types --- int, float,
  long, complex) and of the standard library types
  "fractions.Fraction" and "decimal.Decimal" can be compared within
  and across their types, with the restriction that complex numbers do
  not support order comparison.  Within the limits of the types
  involved, they compare mathematically (algorithmically) correct
  without loss of precision.

* Strings (instances of "str" or "unicode") compare
  lexicographically using the numeric equivalents (the result of the
  built-in function "ord()") of their characters. [4] When comparing
  an 8-bit string and a Unicode string, the 8-bit string is converted
  to Unicode.  If the conversion fails, the strings are considered
  unequal.

* Instances of "tuple" or "list" can be compared only within each of
  their types.  Equality comparison across these types results in
  inequality, and ordering comparison across these types gives an
  arbitrary order.

  These sequences compare lexicographically using comparison of
  corresponding elements, whereby reflexivity of the elements is
  enforced.

  In enforcing reflexivity of elements, the comparison of collections
  assumes that for a collection element "x", "x == x" is always true.
  Based on that assumption, element identity is compared first, and
  element comparison is performed only for distinct elements.  This
  approach yields the same result as a strict element comparison
  would, if the compared elements are reflexive.  For non-reflexive
  elements, the result is different than for strict element
  comparison.

  Lexicographical comparison between built-in collections works as
  follows:

  * For two collections to compare equal, they must be of the same
    type, have the same length, and each pair of corresponding
    elements must compare equal (for example, "[1,2] == (1,2)" is
    false because the type is not the same).

  * Collections are ordered the same as their first unequal elements
    (for example, "cmp([1,2,x], [1,2,y])" returns the same as
    "cmp(x,y)").  If a corresponding element does not exist, the
    shorter collection is ordered first (for example, "[1,2] <
    [1,2,3]" is true).

* Mappings (instances of "dict") compare equal if and only if they
  have equal *(key, value)* pairs. Equality comparison of the keys and
  values enforces reflexivity.

  Outcomes other than equality are resolved consistently, but are not
  otherwise defined. [5]

* Most other objects of built-in types compare unequal unless they
  are the same object; the choice whether one object is considered
  smaller or larger than another one is made arbitrarily but
  consistently within one execution of a program.
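
The following interactive sketch illustrates several of the behaviors
described above:

   >>> 1 == 1.0                      # numeric types compare by value
   True
   >>> [1, 2] == (1, 2)              # a list never equals a tuple
   False
   >>> [1, 2] < [1, 2, 3]            # the shorter sequence orders first
   True
   >>> cmp([1, 2, 0], [1, 2, 5]) == cmp(0, 5)
   True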

User-defined classes that customize their comparison behavior should
follow some consistency rules, if possible:

* Equality comparison should be reflexive. In other words, identical
  objects should compare equal:

     "x is y" implies "x == y"

* Comparison should be symmetric. In other words, the following
  expressions should have the same result:

     "x == y" and "y == x"

     "x != y" and "y != x"

     "x < y" and "y > x"

     "x <= y" and "y >= x"

* Comparison should be transitive. The following (non-exhaustive)
  examples illustrate that:

     "x > y and y > z" implies "x > z"

     "x < y and y <= z" implies "x < z"

* Inverse comparison should result in the boolean negation. In other
  words, the following expressions should have the same result:

     "x == y" and "not x != y"

     "x < y" and "not x >= y" (for total ordering)

     "x > y" and "not x <= y" (for total ordering)

  The last two expressions apply to totally ordered collections (e.g.
  to sequences, but not to sets or mappings). See also the
  "total_ordering()" decorator.

* The "hash()" result should be consistent with equality. Objects
  that are equal should either have the same hash value, or be marked
  as unhashable.

Python does not enforce these consistency rules.
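
As an illustrative (non-normative) sketch, a class with value-based
equality that follows these rules might look like this (the class and
attribute names are arbitrary):

   class Point(object):

       def __init__(self, x, y):
           self.x = x
           self.y = y

       def __eq__(self, other):
           if not isinstance(other, Point):
               return NotImplemented
           return (self.x, self.y) == (other.x, other.y)

       def __ne__(self, other):
           result = self.__eq__(other)
           if result is NotImplemented:
               return result
           return not result

       def __hash__(self):
           # consistent with equality: equal points hash equal
           return hash((self.x, self.y))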


Membership test operations
==========================

The operators "in" and "not in" test for membership.  "x in s"
evaluates to "True" if *x* is a member of *s*, and "False" otherwise.
"x not in s" returns the negation of "x in s".  All built-in sequences
and set types support this as well as dictionary, for which "in" tests
whether the dictionary has a given key. For container types such as
list, tuple, set, frozenset, dict, or collections.deque, the
expression "x in y" is equivalent to "any(x is e or x == e for e in
y)".

For the string and bytes types, "x in y" is "True" if and only if *x*
is a substring of *y*.  An equivalent test is "y.find(x) != -1".
Empty strings are always considered to be a substring of any other
string, so the test "" in "abc" will return "True".
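
For example (an illustrative session):

   >>> 3 in [1, 2, 3]
   True
   >>> 'key' in {'key': 'value'}        # tests for a key, not a value
   True
   >>> 'bc' in 'abc'                    # substring test
   True
   >>> '' in 'abc'
   True
   >>> 4 not in (1, 2, 3)
   True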

For user-defined classes which define the "__contains__()" method, "x
in y" returns "True" if "y.__contains__(x)" returns a true value, and
"False" otherwise.

For user-defined classes which do not define "__contains__()" but do
define "__iter__()", "x in y" is "True" if some value "z" with "x ==
z" is produced while iterating over "y".  If an exception is raised
during the iteration, it is as if "in" raised that exception.

Lastly, the old-style iteration protocol is tried: if a class defines
"__getitem__()", "x in y" is "True" if and only if there is a non-
negative integer index *i* such that "x == y[i]", and all lower
integer indices do not raise "IndexError" exception. (If any other
exception is raised, it is as if "in" raised that exception).

The operator "not in" is defined to have the inverse true value of
"in".


Identity comparisons
====================

The operators "is" and "is not" test for object identity: "x is y" is
true if and only if *x* and *y* are the same object.  "x is not y"
yields the inverse truth value. [6]
tcomparisonsspP
Compound statements
*******************

Compound statements contain (groups of) other statements; they affect
or control the execution of those other statements in some way.  In
general, compound statements span multiple lines, although in simple
incarnations a whole compound statement may be contained in one line.

The "if", "while" and "for" statements implement traditional control
flow constructs.  "try" specifies exception handlers and/or cleanup
code for a group of statements.  Function and class definitions are
also syntactically compound statements.

Compound statements consist of one or more 'clauses.'  A clause
consists of a header and a 'suite.'  The clause headers of a
particular compound statement are all at the same indentation level.
Each clause header begins with a uniquely identifying keyword and ends
with a colon.  A suite is a group of statements controlled by a
clause.  A suite can be one or more semicolon-separated simple
statements on the same line as the header, following the header's
colon, or it can be one or more indented statements on subsequent
lines.  Only the latter form of suite can contain nested compound
statements; the following is illegal, mostly because it wouldn't be
clear to which "if" clause a following "else" clause would belong:

   if test1: if test2: print x

Also note that the semicolon binds tighter than the colon in this
context, so that in the following example, either all or none of the
"print" statements are executed:

   if x < y < z: print x; print y; print z

Summarizing:

   compound_stmt ::= if_stmt
                     | while_stmt
                     | for_stmt
                     | try_stmt
                     | with_stmt
                     | funcdef
                     | classdef
                     | decorated
   suite         ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT
   statement     ::= stmt_list NEWLINE | compound_stmt
   stmt_list     ::= simple_stmt (";" simple_stmt)* [";"]

Note that statements always end in a "NEWLINE" possibly followed by a
"DEDENT". Also note that optional continuation clauses always begin
with a keyword that cannot start a statement, thus there are no
ambiguities (the 'dangling "else"' problem is solved in Python by
requiring nested "if" statements to be indented).

The formatting of the grammar rules in the following sections places
each clause on a separate line for clarity.


The "if" statement
==================

The "if" statement is used for conditional execution:

   if_stmt ::= "if" expression ":" suite
               ( "elif" expression ":" suite )*
               ["else" ":" suite]

It selects exactly one of the suites by evaluating the expressions one
by one until one is found to be true (see section Boolean operations
for the definition of true and false); then that suite is executed
(and no other part of the "if" statement is executed or evaluated).
If all expressions are false, the suite of the "else" clause, if
present, is executed.


The "while" statement
=====================

The "while" statement is used for repeated execution as long as an
expression is true:

   while_stmt ::= "while" expression ":" suite
                  ["else" ":" suite]

This repeatedly tests the expression and, if it is true, executes the
first suite; if the expression is false (which may be the first time
it is tested) the suite of the "else" clause, if present, is executed
and the loop terminates.

A "break" statement executed in the first suite terminates the loop
without executing the "else" clause's suite.  A "continue" statement
executed in the first suite skips the rest of the suite and goes back
to testing the expression.


The "for" statement
===================

The "for" statement is used to iterate over the elements of a sequence
(such as a string, tuple or list) or other iterable object:

   for_stmt ::= "for" target_list "in" expression_list ":" suite
                ["else" ":" suite]

The expression list is evaluated once; it should yield an iterable
object.  An iterator is created for the result of the
"expression_list".  The suite is then executed once for each item
provided by the iterator, in the order of ascending indices.  Each
item in turn is assigned to the target list using the standard rules
for assignments, and then the suite is executed.  When the items are
exhausted (which is immediately when the sequence is empty), the suite
in the "else" clause, if present, is executed, and the loop
terminates.

A "break" statement executed in the first suite terminates the loop
without executing the "else" clause's suite.  A "continue" statement
executed in the first suite skips the rest of the suite and continues
with the next item, or with the "else" clause if there was no next
item.
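
For example, a "break" skips the "else" clause (an illustrative
session):

   >>> for n in range(2, 8):
   ...     if n % 5 == 0:
   ...         print n, 'is a multiple of 5'
   ...         break
   ... else:
   ...     print 'no multiple of 5 found'
   ...
   5 is a multiple of 5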

The suite may assign to the variable(s) in the target list; this does
not affect the next item assigned to it.

The target list is not deleted when the loop is finished, but if the
sequence is empty, it will not have been assigned to at all by the
loop.  Hint: the built-in function "range()" returns a sequence of
integers suitable to emulate the effect of Pascal's "for i := a to b
do"; e.g., "range(3)" returns the list "[0, 1, 2]".

Note: There is a subtlety when the sequence is being modified by the
  loop (this can only occur for mutable sequences, i.e. lists). An
  internal counter is used to keep track of which item is used next,
  and this is incremented on each iteration.  When this counter has
  reached the length of the sequence the loop terminates.  This means
  that if the suite deletes the current (or a previous) item from the
  sequence, the next item will be skipped (since it gets the index of
  the current item which has already been treated).  Likewise, if the
  suite inserts an item in the sequence before the current item, the
  current item will be treated again the next time through the loop.
  This can lead to nasty bugs that can be avoided by making a
  temporary copy using a slice of the whole sequence, e.g.,

     for x in a[:]:
         if x < 0: a.remove(x)


The "try" statement
===================

The "try" statement specifies exception handlers and/or cleanup code
for a group of statements:

   try_stmt  ::= try1_stmt | try2_stmt
   try1_stmt ::= "try" ":" suite
                 ("except" [expression [("as" | ",") identifier]] ":" suite)+
                 ["else" ":" suite]
                 ["finally" ":" suite]
   try2_stmt ::= "try" ":" suite
                 "finally" ":" suite

Changed in version 2.5: In previous versions of Python,
"try"..."except"..."finally" did not work. "try"..."except" had to be
nested in "try"..."finally".

The "except" clause(s) specify one or more exception handlers. When no
exception occurs in the "try" clause, no exception handler is
executed. When an exception occurs in the "try" suite, a search for an
exception handler is started.  This search inspects the except clauses
in turn until one is found that matches the exception.  An expression-
less except clause, if present, must be last; it matches any
exception.  For an except clause with an expression, that expression
is evaluated, and the clause matches the exception if the resulting
object is "compatible" with the exception.  An object is compatible
with an exception if it is the class or a base class of the exception
object, or a tuple containing an item compatible with the exception.

If no except clause matches the exception, the search for an exception
handler continues in the surrounding code and on the invocation stack.
[1]

If the evaluation of an expression in the header of an except clause
raises an exception, the original search for a handler is canceled and
a search starts for the new exception in the surrounding code and on
the call stack (it is treated as if the entire "try" statement raised
the exception).

When a matching except clause is found, the exception is assigned to
the target specified in that except clause, if present, and the except
clause's suite is executed.  All except clauses must have an
executable block.  When the end of this block is reached, execution
continues normally after the entire try statement.  (This means that
if two nested handlers exist for the same exception, and the exception
occurs in the try clause of the inner handler, the outer handler will
not handle the exception.)

Before an except clause's suite is executed, details about the
exception are assigned to three variables in the "sys" module:
"sys.exc_type" receives the object identifying the exception;
"sys.exc_value" receives the exception's parameter;
"sys.exc_traceback" receives a traceback object (see section The
standard type hierarchy) identifying the point in the program where
the exception occurred. These details are also available through the
"sys.exc_info()" function, which returns a tuple "(exc_type,
exc_value, exc_traceback)".  Use of the corresponding variables is
deprecated in favor of this function, since their use is unsafe in a
threaded program.  As of Python 1.5, the variables are restored to
their previous values (before the call) when returning from a function
that handled an exception.

The optional "else" clause is executed if and when control flows off
the end of the "try" clause. [2] Exceptions in the "else" clause are
not handled by the preceding "except" clauses.
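
For example, an illustrative session combining "except", "else" and
"finally" clauses:

   >>> def divide(x, y):
   ...     try:
   ...         result = x / y
   ...     except ZeroDivisionError:
   ...         print 'division by zero!'
   ...     else:
   ...         print 'result is', result
   ...     finally:
   ...         print 'executing finally clause'
   ...
   >>> divide(2, 1)
   result is 2
   executing finally clause
   >>> divide(2, 0)
   division by zero!
   executing finally clause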

If "finally" is present, it specifies a 'cleanup' handler.  The "try"
clause is executed, including any "except" and "else" clauses.  If an
exception occurs in any of the clauses and is not handled, the
exception is temporarily saved. The "finally" clause is executed.  If
there is a saved exception, it is re-raised at the end of the
"finally" clause. If the "finally" clause raises another exception or
executes a "return" or "break" statement, the saved exception is
discarded:

   >>> def f():
   ...     try:
   ...         1/0
   ...     finally:
   ...         return 42
   ...
   >>> f()
   42

The exception information is not available to the program during
execution of the "finally" clause.

When a "return", "break" or "continue" statement is executed in the
"try" suite of a "try"..."finally" statement, the "finally" clause is
also executed 'on the way out.' A "continue" statement is illegal in
the "finally" clause. (The reason is a problem with the current
implementation --- this restriction may be lifted in the future).

The return value of a function is determined by the last "return"
statement executed.  Since the "finally" clause always executes, a
"return" statement executed in the "finally" clause will always be the
last one executed:

   >>> def foo():
   ...     try:
   ...         return 'try'
   ...     finally:
   ...         return 'finally'
   ...
   >>> foo()
   'finally'

Additional information on exceptions can be found in section
Exceptions, and information on using the "raise" statement to generate
exceptions may be found in section The raise statement.


The "with" statement
====================

New in version 2.5.

The "with" statement is used to wrap the execution of a block with
methods defined by a context manager (see section With Statement
Context Managers). This allows common "try"..."except"..."finally"
usage patterns to be encapsulated for convenient reuse.

   with_stmt ::= "with" with_item ("," with_item)* ":" suite
   with_item ::= expression ["as" target]

The execution of the "with" statement with one "item" proceeds as
follows:

1. The context expression (the expression given in the "with_item")
   is evaluated to obtain a context manager.

2. The context manager's "__exit__()" is loaded for later use.

3. The context manager's "__enter__()" method is invoked.

4. If a target was included in the "with" statement, the return
   value from "__enter__()" is assigned to it.

   Note: The "with" statement guarantees that if the "__enter__()"
     method returns without an error, then "__exit__()" will always be
     called. Thus, if an error occurs during the assignment to the
     target list, it will be treated the same as an error occurring
     within the suite would be. See step 6 below.

5. The suite is executed.

6. The context manager's "__exit__()" method is invoked. If an
   exception caused the suite to be exited, its type, value, and
   traceback are passed as arguments to "__exit__()". Otherwise, three
   "None" arguments are supplied.

   If the suite was exited due to an exception, and the return value
   from the "__exit__()" method was false, the exception is reraised.
   If the return value was true, the exception is suppressed, and
   execution continues with the statement following the "with"
   statement.

   If the suite was exited for any reason other than an exception, the
   return value from "__exit__()" is ignored, and execution proceeds
   at the normal location for the kind of exit that was taken.

With more than one item, the context managers are processed as if
multiple "with" statements were nested:

   with A() as a, B() as b:
       suite

is equivalent to

   with A() as a:
       with B() as b:
           suite

Note: In Python 2.5, the "with" statement is only allowed when the
  "with_statement" feature has been enabled.  It is always enabled in
  Python 2.6.

Changed in version 2.7: Support for multiple context expressions.

See also:

  **PEP 343** - The "with" statement
     The specification, background, and examples for the Python "with"
     statement.


Function definitions
====================

A function definition defines a user-defined function object (see
section The standard type hierarchy):

   decorated      ::= decorators (classdef | funcdef)
   decorators     ::= decorator+
   decorator      ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE
   funcdef        ::= "def" funcname "(" [parameter_list] ")" ":" suite
   dotted_name    ::= identifier ("." identifier)*
   parameter_list ::= (defparameter ",")*
                      (  "*" identifier ["," "**" identifier]
                      | "**" identifier
                      | defparameter [","] )
   defparameter   ::= parameter ["=" expression]
   sublist        ::= parameter ("," parameter)* [","]
   parameter      ::= identifier | "(" sublist ")"
   funcname       ::= identifier

A function definition is an executable statement.  Its execution binds
the function name in the current local namespace to a function object
(a wrapper around the executable code for the function).  This
function object contains a reference to the current global namespace
as the global namespace to be used when the function is called.

The function definition does not execute the function body; this gets
executed only when the function is called. [3]

A function definition may be wrapped by one or more *decorator*
expressions. Decorator expressions are evaluated when the function is
defined, in the scope that contains the function definition.  The
result must be a callable, which is invoked with the function object
as the only argument. The returned value is bound to the function name
instead of the function object.  Multiple decorators are applied in
nested fashion. For example, the following code:

   @f1(arg)
   @f2
   def func(): pass

is equivalent to:

   def func(): pass
   func = f1(arg)(f2(func))

When one or more top-level *parameters* have the form *parameter* "="
*expression*, the function is said to have "default parameter values."
For a parameter with a default value, the corresponding *argument* may
be omitted from a call, in which case the parameter's default value is
substituted.  If a parameter has a default value, all following
parameters must also have a default value --- this is a syntactic
restriction that is not expressed by the grammar.

**Default parameter values are evaluated when the function definition
is executed.**  This means that the expression is evaluated once, when
the function is defined, and that the same "pre-computed" value is
used for each call.  This is especially important to understand when a
default parameter is a mutable object, such as a list or a dictionary:
if the function modifies the object (e.g. by appending an item to a
list), the default value is in effect modified. This is generally not
what was intended.  A way around this is to use "None" as the
default, and explicitly test for it in the body of the function, e.g.:

   def whats_on_the_telly(penguin=None):
       if penguin is None:
           penguin = []
       penguin.append("property of the zoo")
       return penguin

Function call semantics are described in more detail in section Calls.
A function call always assigns values to all parameters mentioned in
the parameter list, either from positional arguments, from keyword
arguments, or from default values.  If the form ""*identifier"" is
present, it is initialized to a tuple receiving any excess positional
parameters, defaulting to the empty tuple.  If the form
""**identifier"" is present, it is initialized to a new dictionary
receiving any excess keyword arguments, defaulting to a new empty
dictionary.
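
For example (the function and parameter names are arbitrary):

   def report(first, *args, **kwargs):
       # 'args' receives excess positional arguments as a tuple,
       # 'kwargs' receives excess keyword arguments as a dictionary
       print first, args, kwargs

   report(1, 2, 3, flag=True)
   # prints: 1 (2, 3) {'flag': True}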

It is also possible to create anonymous functions (functions not bound
to a name), for immediate use in expressions.  This uses lambda
expressions, described in section Lambdas.  Note that the lambda
expression is merely a shorthand for a simplified function definition;
a function defined in a ""def"" statement can be passed around or
assigned to another name just like a function defined by a lambda
expression.  The ""def"" form is actually more powerful since it
allows the execution of multiple statements.

**Programmer's note:** Functions are first-class objects.  A ""def""
form executed inside a function definition defines a local function
that can be returned or passed around.  Free variables used in the
nested function can access the local variables of the function
containing the def.  See section Naming and binding for details.


Class definitions
=================

A class definition defines a class object (see section The standard
type hierarchy):

   classdef    ::= "class" classname [inheritance] ":" suite
   inheritance ::= "(" [expression_list] ")"
   classname   ::= identifier

A class definition is an executable statement.  It first evaluates the
inheritance list, if present.  Each item in the inheritance list
should evaluate to a class object or class type which allows
subclassing.  The class's suite is then executed in a new execution
frame (see section Naming and binding), using a newly created local
namespace and the original global namespace. (Usually, the suite
contains only function definitions.)  When the class's suite finishes
execution, its execution frame is discarded but its local namespace is
saved. [4] A class object is then created using the inheritance list
for the base classes and the saved local namespace for the attribute
dictionary.  The class name is bound to this class object in the
original local namespace.

**Programmer's note:** Variables defined in the class definition are
class variables; they are shared by all instances.  To create instance
variables, they can be set in a method with "self.name = value".  Both
class and instance variables are accessible through the notation
""self.name"", and an instance variable hides a class variable with
the same name when accessed in this way. Class variables can be used
as defaults for instance variables, but using mutable values there can
lead to unexpected results.  For *new-style class*es, descriptors can
be used to create instance variables with different implementation
details.
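
For example, an illustrative sketch of the mutable class-variable
pitfall mentioned above (the class and attribute names are arbitrary):

   class Dog(object):
       tricks = []                      # class variable, shared by all instances

       def __init__(self, name):
           self.name = name             # instance variable

       def add_trick(self, trick):
           self.tricks.append(trick)    # modifies the shared class variable!

   # fido = Dog('Fido'); rex = Dog('Rex')
   # fido.add_trick('roll over')
   # rex.tricks  ->  ['roll over']      (unexpectedly shared)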

Class definitions, like function definitions, may be wrapped by one or
more *decorator* expressions.  The evaluation rules for the decorator
expressions are the same as for functions.  The result must be a class
object, which is then bound to the class name.

-[ Footnotes ]-

[1] The exception is propagated to the invocation stack unless
    there is a "finally" clause which happens to raise another
    exception. That new exception causes the old one to be lost.

[2] Currently, control "flows off the end" except in the case of
    an exception or the execution of a "return", "continue", or
    "break" statement.

[3] A string literal appearing as the first statement in the
    function body is transformed into the function's "__doc__"
    attribute and therefore the function's *docstring*.

[4] A string literal appearing as the first statement in the class
    body is transformed into the namespace's "__doc__" item and
    therefore the class's *docstring*.
tcompounds�
With Statement Context Managers
*******************************

New in version 2.5.

A *context manager* is an object that defines the runtime context to
be established when executing a "with" statement. The context manager
handles the entry into, and the exit from, the desired runtime context
for the execution of the block of code.  Context managers are normally
invoked using the "with" statement (described in section The with
statement), but can also be used by directly invoking their methods.

Typical uses of context managers include saving and restoring various
kinds of global state, locking and unlocking resources, closing opened
files, etc.

For more information on context managers, see Context Manager Types.

object.__enter__(self)

   Enter the runtime context related to this object. The "with"
   statement will bind this method's return value to the target(s)
   specified in the "as" clause of the statement, if any.

object.__exit__(self, exc_type, exc_value, traceback)

   Exit the runtime context related to this object. The parameters
   describe the exception that caused the context to be exited. If the
   context was exited without an exception, all three arguments will
   be "None".

   If an exception is supplied, and the method wishes to suppress the
   exception (i.e., prevent it from being propagated), it should
   return a true value. Otherwise, the exception will be processed
   normally upon exit from this method.

   Note that "__exit__()" methods should not reraise the passed-in
   exception; this is the caller's responsibility.
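
As an illustrative sketch (the class and attribute names are
arbitrary), a minimal context manager might be written as:

   import time

   class Timer(object):

       def __enter__(self):
           self.start = time.time()
           return self                  # bound to the "as" target, if any

       def __exit__(self, exc_type, exc_value, traceback):
           self.elapsed = time.time() - self.start
           return False                 # do not suppress exceptions

   # with Timer() as t:
   #     do_something()
   # print t.elapsed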

See also:

  **PEP 343** - The "with" statement
     The specification, background, and examples for the Python "with"
     statement.
scontext-managerss�
The "continue" statement
************************

   continue_stmt ::= "continue"

"continue" may only occur syntactically nested in a "for" or "while"
loop, but not nested in a function or class definition or "finally"
clause within that loop.  It continues with the next cycle of the
nearest enclosing loop.

When "continue" passes control out of a "try" statement with a
"finally" clause, that "finally" clause is executed before really
starting the next loop cycle.
tcontinuesB
Arithmetic conversions
**********************

When a description of an arithmetic operator below uses the phrase
"the numeric arguments are converted to a common type," the arguments
are coerced using the coercion rules listed at  Coercion rules.  If
both arguments are standard numeric types, the following coercions are
applied:

* If either argument is a complex number, the other is converted to
  complex;

* otherwise, if either argument is a floating point number, the
  other is converted to floating point;

* otherwise, if either argument is a long integer, the other is
  converted to long integer;

* otherwise, both must be plain integers and no conversion is
  necessary.

Some additional rules apply for certain operators (e.g., a string left
argument to the '%' operator). Extensions can define their own
coercions.
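
For example (an illustrative session):

   >>> 1 + 2.0        # plain integer converted to float
   3.0
   >>> 1 + 2L         # plain integer converted to long integer
   3L
   >>> 1 + 2j         # integer converted to complex
   (1+2j)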
tconversionss�/
Basic customization
*******************

object.__new__(cls[, ...])

   Called to create a new instance of class *cls*.  "__new__()" is a
   static method (special-cased so you need not declare it as such)
   that takes the class of which an instance was requested as its
   first argument.  The remaining arguments are those passed to the
   object constructor expression (the call to the class).  The return
   value of "__new__()" should be the new object instance (usually an
   instance of *cls*).

   Typical implementations create a new instance of the class by
   invoking the superclass's "__new__()" method using
   "super(currentclass, cls).__new__(cls[, ...])" with appropriate
   arguments and then modifying the newly-created instance as
   necessary before returning it.

   If "__new__()" returns an instance of *cls*, then the new
   instance's "__init__()" method will be invoked like
   "__init__(self[, ...])", where *self* is the new instance and the
   remaining arguments are the same as were passed to "__new__()".

   If "__new__()" does not return an instance of *cls*, then the new
   instance's "__init__()" method will not be invoked.

   "__new__()" is intended mainly to allow subclasses of immutable
   types (like int, str, or tuple) to customize instance creation.  It
   is also commonly overridden in custom metaclasses in order to
   customize class creation.
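
   For example, a minimal sketch of customizing creation of an
   immutable subclass (the class and parameter names are arbitrary):

      class Millimeters(float):

          def __new__(cls, inches):
              # the immutable value must be chosen here, not in __init__()
              return super(Millimeters, cls).__new__(cls, inches * 25.4)

      # Millimeters(2)  ->  50.8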

object.__init__(self[, ...])

   Called after the instance has been created (by "__new__()"), but
   before it is returned to the caller.  The arguments are those
   passed to the class constructor expression.  If a base class has an
   "__init__()" method, the derived class's "__init__()" method, if
   any, must explicitly call it to ensure proper initialization of the
   base class part of the instance; for example:
   "BaseClass.__init__(self, [args...])".

   Because "__new__()" and "__init__()" work together in constructing
   objects ("__new__()" to create it, and "__init__()" to customise
   it), no non-"None" value may be returned by "__init__()"; doing so
   will cause a "TypeError" to be raised at runtime.

object.__del__(self)

   Called when the instance is about to be destroyed.  This is also
   called a destructor.  If a base class has a "__del__()" method, the
   derived class's "__del__()" method, if any, must explicitly call it
   to ensure proper deletion of the base class part of the instance.
   Note that it is possible (though not recommended!) for the
   "__del__()" method to postpone destruction of the instance by
   creating a new reference to it.  It may then be called at a later
   time when this new reference is deleted.  It is not guaranteed that
   "__del__()" methods are called for objects that still exist when
   the interpreter exits.

   Note: "del x" doesn't directly call "x.__del__()" --- the former
     decrements the reference count for "x" by one, and the latter is
     only called when "x"'s reference count reaches zero.  Some common
     situations that may prevent the reference count of an object from
     going to zero include: circular references between objects (e.g.,
     a doubly-linked list or a tree data structure with parent and
     child pointers); a reference to the object on the stack frame of
     a function that caught an exception (the traceback stored in
     "sys.exc_traceback" keeps the stack frame alive); or a reference
     to the object on the stack frame that raised an unhandled
     exception in interactive mode (the traceback stored in
     "sys.last_traceback" keeps the stack frame alive).  The first
     situation can only be remedied by explicitly breaking the cycles;
     the latter two situations can be resolved by storing "None" in
     "sys.exc_traceback" or "sys.last_traceback".  Circular references
      which are garbage are detected when the optional cycle detector is
     enabled (it's on by default), but can only be cleaned up if there
     are no Python-level "__del__()" methods involved. Refer to the
     documentation for the "gc" module for more information about how
     "__del__()" methods are handled by the cycle detector,
     particularly the description of the "garbage" value.

   Warning: Due to the precarious circumstances under which
     "__del__()" methods are invoked, exceptions that occur during
     their execution are ignored, and a warning is printed to
     "sys.stderr" instead. Also, when "__del__()" is invoked in
     response to a module being deleted (e.g., when execution of the
     program is done), other globals referenced by the "__del__()"
      method may already have been deleted or be in the process of being
     torn down (e.g. the import machinery shutting down).  For this
     reason, "__del__()" methods should do the absolute minimum needed
     to maintain external invariants.  Starting with version 1.5,
     Python guarantees that globals whose name begins with a single
     underscore are deleted from their module before other globals are
     deleted; if no other references to such globals exist, this may
     help in assuring that imported modules are still available at the
     time when the "__del__()" method is called.

   See also the "-R" command-line option.

object.__repr__(self)

   Called by the "repr()" built-in function and by string conversions
   (reverse quotes) to compute the "official" string representation of
   an object.  If at all possible, this should look like a valid
   Python expression that could be used to recreate an object with the
   same value (given an appropriate environment).  If this is not
   possible, a string of the form "<...some useful description...>"
   should be returned.  The return value must be a string object. If a
   class defines "__repr__()" but not "__str__()", then "__repr__()"
   is also used when an "informal" string representation of instances
   of that class is required.

   This is typically used for debugging, so it is important that the
   representation is information-rich and unambiguous.

object.__str__(self)

   Called by the "str()" built-in function and by the "print"
   statement to compute the "informal" string representation of an
   object.  This differs from "__repr__()" in that it does not have to
   be a valid Python expression: a more convenient or concise
   representation may be used instead. The return value must be a
   string object.
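
   For example, an illustrative sketch of the two representations (the
   class name is arbitrary):

      class Vector(object):

          def __init__(self, x, y):
              self.x, self.y = x, y

          def __repr__(self):
              # aims to look like a valid constructor expression
              return 'Vector(%r, %r)' % (self.x, self.y)

          def __str__(self):
              # a more readable, informal representation
              return '(%s, %s)' % (self.x, self.y)

      # repr(Vector(1, 2))  ->  'Vector(1, 2)'
      # print Vector(1, 2)  prints  (1, 2)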

object.__lt__(self, other)
object.__le__(self, other)
object.__eq__(self, other)
object.__ne__(self, other)
object.__gt__(self, other)
object.__ge__(self, other)

   New in version 2.1.

   These are the so-called "rich comparison" methods, and are called
   for comparison operators in preference to "__cmp__()" below. The
   correspondence between operator symbols and method names is as
   follows: "x<y" calls "x.__lt__(y)", "x<=y" calls "x.__le__(y)",
   "x==y" calls "x.__eq__(y)", "x!=y" and "x<>y" call "x.__ne__(y)",
   "x>y" calls "x.__gt__(y)", and "x>=y" calls "x.__ge__(y)".

   A rich comparison method may return the singleton "NotImplemented"
   if it does not implement the operation for a given pair of
   arguments. By convention, "False" and "True" are returned for a
   successful comparison. However, these methods can return any value,
   so if the comparison operator is used in a Boolean context (e.g.,
   in the condition of an "if" statement), Python will call "bool()"
   on the value to determine if the result is true or false.

   There are no implied relationships among the comparison operators.
   The truth of "x==y" does not imply that "x!=y" is false.
   Accordingly, when defining "__eq__()", one should also define
   "__ne__()" so that the operators will behave as expected.  See the
   paragraph on "__hash__()" for some important notes on creating
   *hashable* objects which support custom comparison operations and
   are usable as dictionary keys.

   There are no swapped-argument versions of these methods (to be used
   when the left argument does not support the operation but the right
   argument does); rather, "__lt__()" and "__gt__()" are each other's
   reflection, "__le__()" and "__ge__()" are each other's reflection,
   and "__eq__()" and "__ne__()" are their own reflection.

   Arguments to rich comparison methods are never coerced.

   To automatically generate ordering operations from a single root
   operation, see "functools.total_ordering()".
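
   For example, an illustrative sketch using
   "functools.total_ordering()" (the class and attribute names are
   arbitrary):

      import functools

      @functools.total_ordering
      class Version(object):

          def __init__(self, number):
              self.number = number

          def __eq__(self, other):
              if not isinstance(other, Version):
                  return NotImplemented
              return self.number == other.number

          def __lt__(self, other):
              if not isinstance(other, Version):
                  return NotImplemented
              return self.number < other.number

      # Version(1) < Version(2)   ->  True
      # Version(2) >= Version(1)  ->  True  (supplied by total_ordering)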

object.__cmp__(self, other)

   Called by comparison operations if rich comparison (see above) is
   not defined.  Should return a negative integer if "self < other",
   zero if "self == other", a positive integer if "self > other".  If
   no "__cmp__()", "__eq__()" or "__ne__()" operation is defined,
   class instances are compared by object identity ("address").  See
   also the description of "__hash__()" for some important notes on
   creating *hashable* objects which support custom comparison
   operations and are usable as dictionary keys. (Note: the
   restriction that exceptions are not propagated by "__cmp__()" has
   been removed since Python 1.5.)

object.__rcmp__(self, other)

   Changed in version 2.1: No longer supported.

object.__hash__(self)

   Called by built-in function "hash()" and for operations on members
   of hashed collections including "set", "frozenset", and "dict".
   "__hash__()" should return an integer.  The only required property
   is that objects which compare equal have the same hash value; it is
   advised to mix together the hash values of the components of the
   object that also play a part in comparison of objects by packing
   them into a tuple and hashing the tuple. Example:

      def __hash__(self):
          return hash((self.name, self.nick, self.color))

   If a class does not define a "__cmp__()" or "__eq__()" method it
   should not define a "__hash__()" operation either; if it defines
   "__cmp__()" or "__eq__()" but not "__hash__()", its instances will
   not be usable in hashed collections.  If a class defines mutable
   objects and implements a "__cmp__()" or "__eq__()" method, it
   should not implement "__hash__()", since hashable collection
   implementations require that an object's hash value is immutable
   (if the object's hash value changes, it will be in the wrong hash
   bucket).

   User-defined classes have "__cmp__()" and "__hash__()" methods by
   default; with them, all objects compare unequal (except with
   themselves) and "x.__hash__()" returns a result derived from
   "id(x)".

   Classes which inherit a "__hash__()" method from a parent class but
   change the meaning of "__cmp__()" or "__eq__()" such that the hash
   value returned is no longer appropriate (e.g. by switching to a
   value-based concept of equality instead of the default identity
   based equality) can explicitly flag themselves as being unhashable
   by setting "__hash__ = None" in the class definition. Doing so
   means that not only will instances of the class raise an
   appropriate "TypeError" when a program attempts to retrieve their
   hash value, but they will also be correctly identified as
   unhashable when checking "isinstance(obj, collections.Hashable)"
   (unlike classes which define their own "__hash__()" to explicitly
   raise "TypeError").

   Changed in version 2.5: "__hash__()" may now also return a long
   integer object; the 32-bit integer is then derived from the hash of
   that object.

   Changed in version 2.6: "__hash__" may now be set to "None" to
   explicitly flag instances of a class as unhashable.

object.__nonzero__(self)

   Called to implement truth value testing and the built-in operation
   "bool()"; should return "False" or "True", or their integer
   equivalents "0" or "1".  When this method is not defined,
   "__len__()" is called, if it is defined, and the object is
   considered true if its result is nonzero. If a class defines
   neither "__len__()" nor "__nonzero__()", all its instances are
   considered true.
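
   For example, an illustrative sketch (the class and attribute names
   are arbitrary):

      class Inventory(object):

          def __init__(self, items=None):
              self.items = list(items or [])

          def __nonzero__(self):
              # true only when the inventory holds at least one item
              return len(self.items) > 0

      # bool(Inventory())          ->  False
      # bool(Inventory(['spam']))  ->  True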

object.__unicode__(self)

   Called to implement "unicode()" built-in; should return a Unicode
   object. When this method is not defined, string conversion is
   attempted, and the result of string conversion is converted to
   Unicode using the system default encoding.
t
customizations�
"pdb" --- The Python Debugger
*****************************

**Source code:** Lib/pdb.py

======================================================================

The module "pdb" defines an interactive source code debugger for
Python programs.  It supports setting (conditional) breakpoints and
single stepping at the source line level, inspection of stack frames,
source code listing, and evaluation of arbitrary Python code in the
context of any stack frame.  It also supports post-mortem debugging
and can be called under program control.

The debugger is extensible --- it is actually defined as the class
"Pdb". This is currently undocumented but easily understood by reading
the source.  The extension interface uses the modules "bdb" and "cmd".

The debugger's prompt is "(Pdb)". Typical usage to run a program under
control of the debugger is:

   >>> import pdb
   >>> import mymodule
   >>> pdb.run('mymodule.test()')
   > <string>(0)?()
   (Pdb) continue
   > <string>(1)?()
   (Pdb) continue
   NameError: 'spam'
   > <string>(1)?()
   (Pdb)

"pdb.py" can also be invoked as a script to debug other scripts.  For
example:

   python -m pdb myscript.py

When invoked as a script, pdb will automatically enter post-mortem
debugging if the program being debugged exits abnormally. After post-
mortem debugging (or after normal exit of the program), pdb will
restart the program. Automatic restarting preserves pdb's state (such
as breakpoints) and in most cases is more useful than quitting the
debugger upon program's exit.

New in version 2.4: Restarting post-mortem behavior added.

The typical usage to break into the debugger from a running program is
to insert

   import pdb; pdb.set_trace()

at the location you want to break into the debugger.  You can then
step through the code following this statement, and continue running
without the debugger using the "c" command.

The typical usage to inspect a crashed program is:

   >>> import pdb
   >>> import mymodule
   >>> mymodule.test()
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
     File "./mymodule.py", line 4, in test
       test2()
     File "./mymodule.py", line 3, in test2
       print spam
   NameError: spam
   >>> pdb.pm()
   > ./mymodule.py(3)test2()
   -> print spam
   (Pdb)

The module defines the following functions; each enters the debugger
in a slightly different way:

pdb.run(statement[, globals[, locals]])

   Execute the *statement* (given as a string) under debugger control.
   The debugger prompt appears before any code is executed; you can
   set breakpoints and type "continue", or you can step through the
   statement using "step" or "next" (all these commands are explained
   below).  The optional *globals* and *locals* arguments specify the
   environment in which the code is executed; by default the
   dictionary of the module "__main__" is used.  (See the explanation
   of the "exec" statement or the "eval()" built-in function.)

pdb.runeval(expression[, globals[, locals]])

   Evaluate the *expression* (given as a string) under debugger
   control.  When "runeval()" returns, it returns the value of the
   expression.  Otherwise this function is similar to "run()".

pdb.runcall(function[, argument, ...])

   Call the *function* (a function or method object, not a string)
   with the given arguments.  When "runcall()" returns, it returns
   whatever the function call returned.  The debugger prompt appears
   as soon as the function is entered.

pdb.set_trace()

   Enter the debugger at the calling stack frame.  This is useful to
   hard-code a breakpoint at a given point in a program, even if the
   code is not otherwise being debugged (e.g. when an assertion
   fails).

pdb.post_mortem([traceback])

   Enter post-mortem debugging of the given *traceback* object.  If no
   *traceback* is given, it uses the one of the exception that is
   currently being handled (an exception must be being handled if the
   default is to be used).

pdb.pm()

   Enter post-mortem debugging of the traceback found in
   "sys.last_traceback".

The "run*" functions and "set_trace()" are aliases for instantiating
the "Pdb" class and calling the method of the same name.  If you want
to access further features, you have to do this yourself:

class pdb.Pdb(completekey='tab', stdin=None, stdout=None, skip=None)

   "Pdb" is the debugger class.

   The *completekey*, *stdin* and *stdout* arguments are passed to the
   underlying "cmd.Cmd" class; see the description there.

   The *skip* argument, if given, must be an iterable of glob-style
   module name patterns.  The debugger will not step into frames that
   originate in a module that matches one of these patterns. [1]

   Example call to enable tracing with *skip*:

      import pdb; pdb.Pdb(skip=['django.*']).set_trace()

   New in version 2.7: The *skip* argument.

   run(statement[, globals[, locals]])
   runeval(expression[, globals[, locals]])
   runcall(function[, argument, ...])
   set_trace()

      See the documentation for the functions explained above.
tdebuggers�
The "del" statement
*******************

   del_stmt ::= "del" target_list

Deletion is recursively defined very similar to the way assignment is
defined. Rather than spelling it out in full details, here are some
hints.

Deletion of a target list recursively deletes each target, from left
to right.

Deletion of a name removes the binding of that name from the local or
global namespace, depending on whether the name occurs in a "global"
statement in the same code block.  If the name is unbound, a
"NameError" exception will be raised.

It is illegal to delete a name from the local namespace if it occurs
as a free variable in a nested block.

Deletion of attribute references, subscriptions and slicings is passed
to the primary object involved; deletion of a slicing is in general
equivalent to assignment of an empty slice of the right type (but even
this is determined by the sliced object).
tdels�
Dictionary displays
*******************

A dictionary display is a possibly empty series of key/datum pairs
enclosed in curly braces:

   dict_display       ::= "{" [key_datum_list | dict_comprehension] "}"
   key_datum_list     ::= key_datum ("," key_datum)* [","]
   key_datum          ::= expression ":" expression
   dict_comprehension ::= expression ":" expression comp_for

A dictionary display yields a new dictionary object.

If a comma-separated sequence of key/datum pairs is given, they are
evaluated from left to right to define the entries of the dictionary:
each key object is used as a key into the dictionary to store the
corresponding datum.  This means that you can specify the same key
multiple times in the key/datum list, and the final dictionary's value
for that key will be the last one given.

A dict comprehension, in contrast to list and set comprehensions,
needs two expressions separated with a colon followed by the usual
"for" and "if" clauses. When the comprehension is run, the resulting
key and value elements are inserted in the new dictionary in the order
they are produced.

Restrictions on the types of the key values are listed earlier in
section The standard type hierarchy.  (To summarize, the key type
should be *hashable*, which excludes all mutable objects.)  Clashes
between duplicate keys are not detected; the last datum (textually
rightmost in the display) stored for a given key value prevails.
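
For example (an illustrative session):

   >>> {'jack': 4098, 'jack': 4127}     # the last datum for a duplicate key wins
   {'jack': 4127}
   >>> {x: x ** 2 for x in range(4)}    # dict comprehension
   {0: 0, 1: 1, 2: 4, 3: 9}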
tdicts+
Interaction with dynamic features
*********************************

There are several cases where Python statements are illegal when used
in conjunction with nested scopes that contain free variables.

If a variable is referenced in an enclosing scope, it is illegal to
delete the name.  An error will be reported at compile time.

If the wild card form of import --- "import *" --- is used in a
function and the function contains or is a nested block with free
variables, the compiler will raise a "SyntaxError".

If "exec" is used in a function and the function contains or is a
nested block with free variables, the compiler will raise a
"SyntaxError" unless the exec explicitly specifies the local namespace
for the "exec".  (In other words, "exec obj" would be illegal, but
"exec obj in ns" would be legal.)

The "eval()", "execfile()", and "input()" functions and the "exec"
statement do not have access to the full environment for resolving
names.  Names may be resolved in the local and global namespaces of
the caller.  Free variables are not resolved in the nearest enclosing
namespace, but in the global namespace. [1] The "exec" statement and
the "eval()" and "execfile()" functions have optional arguments to
override the global and local namespace.  If only one namespace is
specified, it is used for both.
sdynamic-featuressE
The "if" statement
******************

The "if" statement is used for conditional execution:

   if_stmt ::= "if" expression ":" suite
               ( "elif" expression ":" suite )*
               ["else" ":" suite]

It selects exactly one of the suites by evaluating the expressions one
by one until one is found to be true (see section Boolean operations
for the definition of true and false); then that suite is executed
(and no other part of the "if" statement is executed or evaluated).
If all expressions are false, the suite of the "else" clause, if
present, is executed.
telsesh	
Exceptions
**********

Exceptions are a means of breaking out of the normal flow of control
of a code block in order to handle errors or other exceptional
conditions.  An exception is *raised* at the point where the error is
detected; it may be *handled* by the surrounding code block or by any
code block that directly or indirectly invoked the code block where
the error occurred.

The Python interpreter raises an exception when it detects a run-time
error (such as division by zero).  A Python program can also
explicitly raise an exception with the "raise" statement. Exception
handlers are specified with the "try" ... "except" statement.  The
"finally" clause of such a statement can be used to specify cleanup
code which does not handle the exception, but is executed whether an
exception occurred or not in the preceding code.

Python uses the "termination" model of error handling: an exception
handler can find out what happened and continue execution at an outer
level, but it cannot repair the cause of the error and retry the
failing operation (except by re-entering the offending piece of code
from the top).

When an exception is not handled at all, the interpreter terminates
execution of the program, or returns to its interactive main loop.  In
either case, it prints a stack backtrace, except when the exception is
"SystemExit".

Exceptions are identified by class instances.  The "except" clause is
selected depending on the class of the instance: it must reference the
class of the instance or a base class thereof.  The instance can be
received by the handler and can carry additional information about the
exceptional condition.

Exceptions can also be identified by strings, in which case the
"except" clause is selected by object identity.  An arbitrary value
can be raised along with the identifying string which can be passed to
the handler.

Note: Messages to exceptions are not part of the Python API.  Their
  contents may change from one version of Python to the next without
  warning and should not be relied on by code which will run under
  multiple versions of the interpreter.

See also the description of the "try" statement in section The try
statement and "raise" statement in section The raise statement.

-[ Footnotes ]-

[1] This limitation occurs because the code that is executed by
    these operations is not available at the time the module is
    compiled.
t
exceptionss�

The "exec" statement
********************

   exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]

This statement supports dynamic execution of Python code.  The first
expression should evaluate to either a Unicode string, a *Latin-1*
encoded string, an open file object, a code object, or a tuple.  If it
is a string, the string is parsed as a suite of Python statements
which is then executed (unless a syntax error occurs). [1] If it is an
open file, the file is parsed until EOF and executed. If it is a code
object, it is simply executed.  For the interpretation of a tuple, see
below.  In all cases, the code that's executed is expected to be valid
as file input (see section File input).  Be aware that the "return"
and "yield" statements may not be used outside of function definitions
even within the context of code passed to the "exec" statement.

In all cases, if the optional parts are omitted, the code is executed
in the current scope.  If only the first expression after "in" is
specified, it should be a dictionary, which will be used for both the
global and the local variables.  If two expressions are given, they
are used for the global and local variables, respectively. If
provided, *locals* can be any mapping object. Remember that at module
level, globals and locals are the same dictionary. If two separate
objects are given as *globals* and *locals*, the code will be executed
as if it were embedded in a class definition.

The first expression may also be a tuple of length 2 or 3.  In this
case, the optional parts must be omitted.  The form "exec(expr,
globals)" is equivalent to "exec expr in globals", while the form
"exec(expr, globals, locals)" is equivalent to "exec expr in globals,
locals".  The tuple form of "exec" provides compatibility with Python
3, where "exec" is a function rather than a statement.

Changed in version 2.4: Formerly, *locals* was required to be a
dictionary.

As a side effect, an implementation may insert additional keys into
the dictionaries given besides those corresponding to variable names
set by the executed code.  For example, the current implementation may
add a reference to the dictionary of the built-in module "__builtin__"
under the key "__builtins__" (!).

**Programmer's hints:** dynamic evaluation of expressions is supported
by the built-in function "eval()".  The built-in functions "globals()"
and "locals()" return the current global and local dictionary,
respectively, which may be useful to pass around for use by "exec".
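
For example, an illustrative sketch of executing code in a separate
namespace (the variable names are arbitrary):

   >>> ns = {}
   >>> exec "x = 6 * 7" in ns
   >>> ns['x']
   42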

-[ Footnotes ]-

[1] Note that the parser only accepts the Unix-style end of line
    convention. If you are reading the code from a file, make sure to
    use *universal newlines* mode to convert Windows or Mac-style
    newlines.
texecs&
Execution model
***************


Naming and binding
==================

*Names* refer to objects.  Names are introduced by name binding
operations. Each occurrence of a name in the program text refers to
the *binding* of that name established in the innermost function block
containing the use.

A *block* is a piece of Python program text that is executed as a
unit. The following are blocks: a module, a function body, and a class
definition. Each command typed interactively is a block.  A script
file (a file given as standard input to the interpreter or specified
on the interpreter command line as the first argument) is a code block.
A script command (a command specified on the interpreter command line
with the '**-c**' option) is a code block.  The file read by the
built-in function "execfile()" is a code block.  The string argument
passed to the built-in function "eval()" and to the "exec" statement
is a code block. The expression read and evaluated by the built-in
function "input()" is a code block.

A code block is executed in an *execution frame*.  A frame contains
some administrative information (used for debugging) and determines
where and how execution continues after the code block's execution has
completed.

A *scope* defines the visibility of a name within a block.  If a local
variable is defined in a block, its scope includes that block.  If the
definition occurs in a function block, the scope extends to any blocks
contained within the defining one, unless a contained block introduces
a different binding for the name.  The scope of names defined in a
class block is limited to the class block; it does not extend to the
code blocks of methods -- this includes generator expressions since
they are implemented using a function scope.  This means that the
following will fail:

   class A:
       a = 42
       b = list(a + i for i in range(10))
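
By contrast, a plain list comprehension does work in this position in
Python 2, because list comprehensions (unlike generator expressions)
do not introduce a new function scope; a sketch for comparison:

   class A:
       a = 42
       b = [a + i for i in range(10)]   # no new scope, so "a" is visible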

When a name is used in a code block, it is resolved using the nearest
enclosing scope.  The set of all such scopes visible to a code block
is called the block's *environment*.

If a name is bound in a block, it is a local variable of that block.
If a name is bound at the module level, it is a global variable.  (The
variables of the module code block are local and global.)  If a
variable is used in a code block but not defined there, it is a *free
variable*.

When a name is not found at all, a "NameError" exception is raised.
If the name refers to a local variable that has not been bound, a
"UnboundLocalError" exception is raised.  "UnboundLocalError" is a
subclass of "NameError".

The following constructs bind names: formal parameters to functions,
"import" statements, class and function definitions (these bind the
class or function name in the defining block), and targets that are
identifiers if occurring in an assignment, "for" loop header, in the
second position of an "except" clause header or after "as" in a "with"
statement.  The "import" statement of the form "from ... import *"
binds all names defined in the imported module, except those beginning
with an underscore.  This form may only be used at the module level.

A target occurring in a "del" statement is also considered bound for
this purpose (though the actual semantics are to unbind the name).  It
is illegal to unbind a name that is referenced by an enclosing scope;
the compiler will report a "SyntaxError".

Each assignment or import statement occurs within a block defined by a
class or function definition or at the module level (the top-level
code block).

If a name binding operation occurs anywhere within a code block, all
uses of the name within the block are treated as references to the
binding in the current block.  This can lead to errors when a name is
used within a block before it is bound.  This rule is subtle.  Python
lacks declarations and allows name binding operations to occur
anywhere within a code block.  The local variables of a code block can
be determined by scanning the entire text of the block for name
binding operations.
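
A short sketch of the resulting "UnboundLocalError" (the names are
arbitrary):

   >>> def f():
   ...     print x       # "x" is local because of the assignment below
   ...     x = 1
   ...
   >>> f()
   Traceback (most recent call last):
     ...
   UnboundLocalError: local variable 'x' referenced before assignment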

If the global statement occurs within a block, all uses of the name
specified in the statement refer to the binding of that name in the
top-level namespace. Names are resolved in the top-level namespace by
searching the global namespace, i.e. the namespace of the module
containing the code block, and the builtins namespace, the namespace
of the module "__builtin__".  The global namespace is searched first.
If the name is not found there, the builtins namespace is searched.
The global statement must precede all uses of the name.

The builtins namespace associated with the execution of a code block
is actually found by looking up the name "__builtins__" in its global
namespace; this should be a dictionary or a module (in the latter case
the module's dictionary is used).  By default, when in the "__main__"
module, "__builtins__" is the built-in module "__builtin__" (note: no
's'); when in any other module, "__builtins__" is an alias for the
dictionary of the "__builtin__" module itself.  "__builtins__" can be
set to a user-created dictionary to create a weak form of restricted
execution.

**CPython implementation detail:** Users should not touch
"__builtins__"; it is strictly an implementation detail.  Users
wanting to override values in the builtins namespace should "import"
the "__builtin__" (no 's') module and modify its attributes
appropriately.

The namespace for a module is automatically created the first time a
module is imported.  The main module for a script is always called
"__main__".

The "global" statement has the same scope as a name binding operation
in the same block.  If the nearest enclosing scope for a free variable
contains a global statement, the free variable is treated as a global.

A class definition is an executable statement that may use and define
names. These references follow the normal rules for name resolution.
The namespace of the class definition becomes the attribute dictionary
of the class.  Names defined at the class scope are not visible in
methods.


Interaction with dynamic features
---------------------------------

There are several cases where Python statements are illegal when used
in conjunction with nested scopes that contain free variables.

If a variable is referenced in an enclosing scope, it is illegal to
delete the name.  An error will be reported at compile time.

If the wild card form of import --- "import *" --- is used in a
function and the function contains or is a nested block with free
variables, the compiler will raise a "SyntaxError".

If "exec" is used in a function and the function contains or is a
nested block with free variables, the compiler will raise a
"SyntaxError" unless the exec explicitly specifies the local namespace
for the "exec".  (In other words, "exec obj" would be illegal, but
"exec obj in ns" would be legal.)

The "eval()", "execfile()", and "input()" functions and the "exec"
statement do not have access to the full environment for resolving
names.  Names may be resolved in the local and global namespaces of
the caller.  Free variables are not resolved in the nearest enclosing
namespace, but in the global namespace. [1] The "exec" statement and
the "eval()" and "execfile()" functions have optional arguments to
override the global and local namespace.  If only one namespace is
specified, it is used for both.


Exceptions
==========

Exceptions are a means of breaking out of the normal flow of control
of a code block in order to handle errors or other exceptional
conditions.  An exception is *raised* at the point where the error is
detected; it may be *handled* by the surrounding code block or by any
code block that directly or indirectly invoked the code block where
the error occurred.

The Python interpreter raises an exception when it detects a run-time
error (such as division by zero).  A Python program can also
explicitly raise an exception with the "raise" statement. Exception
handlers are specified with the "try" ... "except" statement.  The
"finally" clause of such a statement can be used to specify cleanup
code which does not handle the exception, but is executed whether an
exception occurred or not in the preceding code.
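
A brief sketch of a handler together with a "finally" clause:

   >>> try:
   ...     1 / 0
   ... except ZeroDivisionError as exc:
   ...     print 'handled:', exc
   ... finally:
   ...     print 'cleanup runs either way'
   ...
   handled: integer division or modulo by zero
   cleanup runs either way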

Python uses the "termination" model of error handling: an exception
handler can find out what happened and continue execution at an outer
level, but it cannot repair the cause of the error and retry the
failing operation (except by re-entering the offending piece of code
from the top).

When an exception is not handled at all, the interpreter terminates
execution of the program, or returns to its interactive main loop.  In
either case, it prints a stack backtrace, except when the exception is
"SystemExit".

Exceptions are identified by class instances.  The "except" clause is
selected depending on the class of the instance: it must reference the
class of the instance or a base class thereof.  The instance can be
received by the handler and can carry additional information about the
exceptional condition.
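
For example, a handler naming a base class also catches instances of
its subclasses:

   >>> try:
   ...     raise ValueError("bad value")
   ... except Exception as exc:     # a base class of ValueError
   ...     print 'caught:', exc
   ...
   caught: bad value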

Exceptions can also be identified by strings, in which case the
"except" clause is selected by object identity.  An arbitrary value
can be raised along with the identifying string which can be passed to
the handler.

Note: Messages to exceptions are not part of the Python API.  Their
  contents may change from one version of Python to the next without
  warning and should not be relied on by code which will run under
  multiple versions of the interpreter.

See also the description of the "try" statement in section The try
statement and "raise" statement in section The raise statement.

-[ Footnotes ]-

[1] This limitation occurs because the code that is executed by
    these operations is not available at the time the module is
    compiled.
Expression lists
****************

   expression_list ::= expression ( "," expression )* [","]

An expression list containing at least one comma yields a tuple.  The
length of the tuple is the number of expressions in the list.  The
expressions are evaluated from left to right.

The trailing comma is required only to create a single tuple (a.k.a. a
*singleton*); it is optional in all other cases.  A single expression
without a trailing comma doesn't create a tuple, but rather yields the
value of that expression. (To create an empty tuple, use an empty pair
of parentheses: "()".)
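
A few interactive illustrations:

   >>> 1, 2, 3
   (1, 2, 3)
   >>> 1,          # the trailing comma makes a one-element tuple
   (1,)
   >>> ()          # an empty pair of parentheses makes an empty tuple
   ()
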
Floating point literals
***********************

Floating point literals are described by the following lexical
definitions:

   floatnumber   ::= pointfloat | exponentfloat
   pointfloat    ::= [intpart] fraction | intpart "."
   exponentfloat ::= (intpart | pointfloat) exponent
   intpart       ::= digit+
   fraction      ::= "." digit+
   exponent      ::= ("e" | "E") ["+" | "-"] digit+

Note that the integer and exponent parts of floating point numbers can
look like octal integers, but are interpreted using radix 10.  For
example, "077e010" is legal, and denotes the same number as "77e10".
The allowed range of floating point literals is implementation-
dependent. Some examples of floating point literals:

   3.14    10.    .001    1e100    3.14e-10    0e0

Note that numeric literals do not include a sign; a phrase like "-1"
is actually an expression composed of the unary operator "-" and the
literal "1".
The "for" statement
*******************

The "for" statement is used to iterate over the elements of a sequence
(such as a string, tuple or list) or other iterable object:

   for_stmt ::= "for" target_list "in" expression_list ":" suite
                ["else" ":" suite]

The expression list is evaluated once; it should yield an iterable
object.  An iterator is created for the result of the
"expression_list".  The suite is then executed once for each item
provided by the iterator, in the order of ascending indices.  Each
item in turn is assigned to the target list using the standard rules
for assignments, and then the suite is executed.  When the items are
exhausted (which is immediately when the sequence is empty), the suite
in the "else" clause, if present, is executed, and the loop
terminates.

A "break" statement executed in the first suite terminates the loop
without executing the "else" clause's suite.  A "continue" statement
executed in the first suite skips the rest of the suite and continues
with the next item, or with the "else" clause if there was no next
item.
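
A small sketch of the "else" clause together with "break" (the values
are arbitrary):

   >>> for n in [2, 3, 4, 9]:
   ...     if n % 7 == 0:
   ...         print 'found a multiple of 7:', n
   ...         break
   ... else:
   ...     print 'no multiple of 7 found'
   ...
   no multiple of 7 found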

The suite may assign to the variable(s) in the target list; this does
not affect the next item assigned to it.

The target list is not deleted when the loop is finished, but if the
sequence is empty, it will not have been assigned to at all by the
loop.  Hint: the built-in function "range()" returns a sequence of
integers suitable to emulate the effect of Pascal's "for i := a to b
do"; e.g., "range(3)" returns the list "[0, 1, 2]".

Note: There is a subtlety when the sequence is being modified by the
  loop (this can only occur for mutable sequences, i.e. lists). An
  internal counter is used to keep track of which item is used next,
  and this is incremented on each iteration.  When this counter has
  reached the length of the sequence the loop terminates.  This means
  that if the suite deletes the current (or a previous) item from the
  sequence, the next item will be skipped (since it gets the index of
  the current item which has already been treated).  Likewise, if the
  suite inserts an item in the sequence before the current item, the
  current item will be treated again the next time through the loop.
  This can lead to nasty bugs that can be avoided by making a
  temporary copy using a slice of the whole sequence, e.g.,

     for x in a[:]:
         if x < 0: a.remove(x)
Format String Syntax
********************

The "str.format()" method and the "Formatter" class share the same
syntax for format strings (although in the case of "Formatter",
subclasses can define their own format string syntax).

Format strings contain "replacement fields" surrounded by curly braces
"{}". Anything that is not contained in braces is considered literal
text, which is copied unchanged to the output.  If you need to include
a brace character in the literal text, it can be escaped by doubling:
"{{" and "}}".

The grammar for a replacement field is as follows:

      replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"
      field_name        ::= arg_name ("." attribute_name | "[" element_index "]")*
      arg_name          ::= [identifier | integer]
      attribute_name    ::= identifier
      element_index     ::= integer | index_string
      index_string      ::= <any source character except "]"> +
      conversion        ::= "r" | "s"
      format_spec       ::= <described in the next section>

In less formal terms, the replacement field can start with a
*field_name* that specifies the object whose value is to be formatted
and inserted into the output instead of the replacement field. The
*field_name* is optionally followed by a  *conversion* field, which is
preceded by an exclamation point "'!'", and a *format_spec*, which is
preceded by a colon "':'".  These specify a non-default format for the
replacement value.

See also the Format Specification Mini-Language section.

The *field_name* itself begins with an *arg_name* that is either a
number or a keyword.  If it's a number, it refers to a positional
argument, and if it's a keyword, it refers to a named keyword
argument.  If the numerical arg_names in a format string are 0, 1, 2,
... in sequence, they can all be omitted (not just some) and the
numbers 0, 1, 2, ... will be automatically inserted in that order.
Because *arg_name* is not quote-delimited, it is not possible to
specify arbitrary dictionary keys (e.g., the strings "'10'" or
"':-]'") within a format string. The *arg_name* can be followed by any
number of index or attribute expressions. An expression of the form
"'.name'" selects the named attribute using "getattr()", while an
expression of the form "'[index]'" does an index lookup using
"__getitem__()".

Changed in version 2.7: The positional argument specifiers can be
omitted, so "'{} {}'" is equivalent to "'{0} {1}'".

Some simple format string examples:

   "First, thou shalt count to {0}"  # References first positional argument
   "Bring me a {}"                   # Implicitly references the first positional argument
   "From {} to {}"                   # Same as "From {0} to {1}"
   "My quest is {name}"              # References keyword argument 'name'
   "Weight in tons {0.weight}"       # 'weight' attribute of first positional arg
   "Units destroyed: {players[0]}"   # First element of keyword argument 'players'.

The *conversion* field causes a type coercion before formatting.
Normally, the job of formatting a value is done by the "__format__()"
method of the value itself.  However, in some cases it is desirable to
force a type to be formatted as a string, overriding its own
definition of formatting.  By converting the value to a string before
calling "__format__()", the normal formatting logic is bypassed.

Two conversion flags are currently supported: "'!s'" which calls
"str()" on the value, and "'!r'" which calls "repr()".

Some examples:

   "Harold's a clever {0!s}"        # Calls str() on the argument first
   "Bring out the holy {name!r}"    # Calls repr() on the argument first

The *format_spec* field contains a specification of how the value
should be presented, including such details as field width, alignment,
padding, decimal precision and so on.  Each value type can define its
own "formatting mini-language" or interpretation of the *format_spec*.

Most built-in types support a common formatting mini-language, which
is described in the next section.

A *format_spec* field can also include nested replacement fields
within it. These nested replacement fields may contain a field name,
conversion flag and format specification, but deeper nesting is not
allowed.  The replacement fields within the format_spec are
substituted before the *format_spec* string is interpreted. This
allows the formatting of a value to be dynamically specified.

See the Format examples section for some examples.


Format Specification Mini-Language
==================================

"Format specifications" are used within replacement fields contained
within a format string to define how individual values are presented
(see Format String Syntax).  They can also be passed directly to the
built-in "format()" function.  Each formattable type may define how
the format specification is to be interpreted.

Most built-in types implement the following options for format
specifications, although some of the formatting options are only
supported by the numeric types.

A general convention is that an empty format string ("""") produces
the same result as if you had called "str()" on the value. A non-empty
format string typically modifies the result.

The general form of a *standard format specifier* is:

   format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]
   fill        ::= <any character>
   align       ::= "<" | ">" | "=" | "^"
   sign        ::= "+" | "-" | " "
   width       ::= integer
   precision   ::= integer
   type        ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"

If a valid *align* value is specified, it can be preceded by a *fill*
character that can be any character and defaults to a space if
omitted. It is not possible to use a literal curly brace (""{"" or
""}"") as the *fill* character when using the "str.format()" method.
However, it is possible to insert a curly brace with a nested
replacement field.  This limitation doesn't affect the "format()"
function.

The meaning of the various alignment options is as follows:

   +-----------+------------------------------------------------------------+
   | Option    | Meaning                                                    |
   +===========+============================================================+
   | "'<'"     | Forces the field to be left-aligned within the available   |
   |           | space (this is the default for most objects).              |
   +-----------+------------------------------------------------------------+
   | "'>'"     | Forces the field to be right-aligned within the available  |
   |           | space (this is the default for numbers).                   |
   +-----------+------------------------------------------------------------+
   | "'='"     | Forces the padding to be placed after the sign (if any)    |
   |           | but before the digits.  This is used for printing fields   |
   |           | in the form '+000000120'. This alignment option is only    |
   |           | valid for numeric types.  It becomes the default when '0'  |
   |           | immediately precedes the field width.                      |
   +-----------+------------------------------------------------------------+
   | "'^'"     | Forces the field to be centered within the available       |
   |           | space.                                                     |
   +-----------+------------------------------------------------------------+

Note that unless a minimum field width is defined, the field width
will always be the same size as the data to fill it, so that the
alignment option has no meaning in this case.

The *sign* option is only valid for number types, and can be one of
the following:

   +-----------+------------------------------------------------------------+
   | Option    | Meaning                                                    |
   +===========+============================================================+
   | "'+'"     | indicates that a sign should be used for both positive as  |
   |           | well as negative numbers.                                  |
   +-----------+------------------------------------------------------------+
   | "'-'"     | indicates that a sign should be used only for negative     |
   |           | numbers (this is the default behavior).                    |
   +-----------+------------------------------------------------------------+
   | space     | indicates that a leading space should be used on positive  |
   |           | numbers, and a minus sign on negative numbers.             |
   +-----------+------------------------------------------------------------+

The "'#'" option is only valid for integers, and only for binary,
octal, or hexadecimal output.  If present, it specifies that the
output will be prefixed by "'0b'", "'0o'", or "'0x'", respectively.

The "','" option signals the use of a comma for a thousands separator.
For a locale-aware separator, use the "'n'" integer presentation type
instead.

Changed in version 2.7: Added the "','" option (see also **PEP 378**).

*width* is a decimal integer defining the minimum field width.  If not
specified, then the field width will be determined by the content.

When no explicit alignment is given, preceding the *width* field by a
zero ("'0'") character enables sign-aware zero-padding for numeric
types.  This is equivalent to a *fill* character of "'0'" with an
*alignment* type of "'='".
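
For instance, sign-aware zero-padding in action:

   >>> '{:08.2f}'.format(-3.5)
   '-0003.50'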

The *precision* is a decimal number indicating how many digits should
be displayed after the decimal point for a floating point value
formatted with "'f'" and "'F'", or before and after the decimal point
for a floating point value formatted with "'g'" or "'G'".  For non-
number types the field indicates the maximum field size - in other
words, how many characters will be used from the field content. The
*precision* is not allowed for integer values.
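
A couple of quick illustrations of *precision*:

   >>> '{:.3f}'.format(3.14159)
   '3.142'
   >>> '{:.5}'.format('abcdefgh')    # maximum field size for a string
   'abcde'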

Finally, the *type* determines how the data should be presented.

The available string presentation types are:

   +-----------+------------------------------------------------------------+
   | Type      | Meaning                                                    |
   +===========+============================================================+
   | "'s'"     | String format. This is the default type for strings and    |
   |           | may be omitted.                                            |
   +-----------+------------------------------------------------------------+
   | None      | The same as "'s'".                                         |
   +-----------+------------------------------------------------------------+

The available integer presentation types are:

   +-----------+------------------------------------------------------------+
   | Type      | Meaning                                                    |
   +===========+============================================================+
   | "'b'"     | Binary format. Outputs the number in base 2.               |
   +-----------+------------------------------------------------------------+
   | "'c'"     | Character. Converts the integer to the corresponding       |
   |           | unicode character before printing.                         |
   +-----------+------------------------------------------------------------+
   | "'d'"     | Decimal Integer. Outputs the number in base 10.            |
   +-----------+------------------------------------------------------------+
   | "'o'"     | Octal format. Outputs the number in base 8.                |
   +-----------+------------------------------------------------------------+
   | "'x'"     | Hex format. Outputs the number in base 16, using lower-    |
   |           | case letters for the digits above 9.                       |
   +-----------+------------------------------------------------------------+
   | "'X'"     | Hex format. Outputs the number in base 16, using upper-    |
   |           | case letters for the digits above 9.                       |
   +-----------+------------------------------------------------------------+
   | "'n'"     | Number. This is the same as "'d'", except that it uses the |
   |           | current locale setting to insert the appropriate number    |
   |           | separator characters.                                      |
   +-----------+------------------------------------------------------------+
   | None      | The same as "'d'".                                         |
   +-----------+------------------------------------------------------------+

In addition to the above presentation types, integers can be formatted
with the floating point presentation types listed below (except "'n'"
and "None"). When doing so, "float()" is used to convert the integer
to a floating point number before formatting.

The available presentation types for floating point and decimal values
are:

   +-----------+------------------------------------------------------------+
   | Type      | Meaning                                                    |
   +===========+============================================================+
   | "'e'"     | Exponent notation. Prints the number in scientific         |
   |           | notation using the letter 'e' to indicate the exponent.    |
   |           | The default precision is "6".                              |
   +-----------+------------------------------------------------------------+
   | "'E'"     | Exponent notation. Same as "'e'" except it uses an upper   |
   |           | case 'E' as the separator character.                       |
   +-----------+------------------------------------------------------------+
   | "'f'"     | Fixed point. Displays the number as a fixed-point number.  |
   |           | The default precision is "6".                              |
   +-----------+------------------------------------------------------------+
   | "'F'"     | Fixed point. Same as "'f'".                                |
   +-----------+------------------------------------------------------------+
   | "'g'"     | General format.  For a given precision "p >= 1", this      |
   |           | rounds the number to "p" significant digits and then       |
   |           | formats the result in either fixed-point format or in      |
   |           | scientific notation, depending on its magnitude.  The      |
   |           | precise rules are as follows: suppose that the result      |
   |           | formatted with presentation type "'e'" and precision "p-1" |
   |           | would have exponent "exp".  Then if "-4 <= exp < p", the   |
   |           | number is formatted with presentation type "'f'" and       |
   |           | precision "p-1-exp".  Otherwise, the number is formatted   |
   |           | with presentation type "'e'" and precision "p-1". In both  |
   |           | cases insignificant trailing zeros are removed from the    |
   |           | significand, and the decimal point is also removed if      |
   |           | there are no remaining digits following it.  Positive and  |
   |           | negative infinity, positive and negative zero, and nans,   |
   |           | are formatted as "inf", "-inf", "0", "-0" and "nan"        |
   |           | respectively, regardless of the precision.  A precision of |
   |           | "0" is treated as equivalent to a precision of "1". The    |
   |           | default precision is "6".                                  |
   +-----------+------------------------------------------------------------+
   | "'G'"     | General format. Same as "'g'" except switches to "'E'" if  |
   |           | the number gets too large. The representations of infinity |
   |           | and NaN are uppercased, too.                               |
   +-----------+------------------------------------------------------------+
   | "'n'"     | Number. This is the same as "'g'", except that it uses the |
   |           | current locale setting to insert the appropriate number    |
   |           | separator characters.                                      |
   +-----------+------------------------------------------------------------+
   | "'%'"     | Percentage. Multiplies the number by 100 and displays in   |
   |           | fixed ("'f'") format, followed by a percent sign.          |
   +-----------+------------------------------------------------------------+
   | None      | The same as "'g'".                                         |
   +-----------+------------------------------------------------------------+


Format examples
===============

This section contains examples of the "str.format()" syntax and
comparison with the old "%"-formatting.

In most of the cases the syntax is similar to the old "%"-formatting,
with the addition of the "{}" and with ":" used instead of "%". For
example, "'%03.2f'" can be translated to "'{:03.2f}'".

The new format syntax also supports new and different options, shown
in the following examples.

Accessing arguments by position:

   >>> '{0}, {1}, {2}'.format('a', 'b', 'c')
   'a, b, c'
   >>> '{}, {}, {}'.format('a', 'b', 'c')  # 2.7+ only
   'a, b, c'
   >>> '{2}, {1}, {0}'.format('a', 'b', 'c')
   'c, b, a'
   >>> '{2}, {1}, {0}'.format(*'abc')      # unpacking argument sequence
   'c, b, a'
   >>> '{0}{1}{0}'.format('abra', 'cad')   # arguments' indices can be repeated
   'abracadabra'

Accessing arguments by name:

   >>> 'Coordinates: {latitude}, {longitude}'.format(latitude='37.24N', longitude='-115.81W')
   'Coordinates: 37.24N, -115.81W'
   >>> coord = {'latitude': '37.24N', 'longitude': '-115.81W'}
   >>> 'Coordinates: {latitude}, {longitude}'.format(**coord)
   'Coordinates: 37.24N, -115.81W'

Accessing arguments' attributes:

   >>> c = 3-5j
   >>> ('The complex number {0} is formed from the real part {0.real} '
   ...  'and the imaginary part {0.imag}.').format(c)
   'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.'
   >>> class Point(object):
   ...     def __init__(self, x, y):
   ...         self.x, self.y = x, y
   ...     def __str__(self):
   ...         return 'Point({self.x}, {self.y})'.format(self=self)
   ...
   >>> str(Point(4, 2))
   'Point(4, 2)'

Accessing arguments' items:

   >>> coord = (3, 5)
   >>> 'X: {0[0]};  Y: {0[1]}'.format(coord)
   'X: 3;  Y: 5'

Replacing "%s" and "%r":

   >>> "repr() shows quotes: {!r}; str() doesn't: {!s}".format('test1', 'test2')
   "repr() shows quotes: 'test1'; str() doesn't: test2"

Aligning the text and specifying a width:

   >>> '{:<30}'.format('left aligned')
   'left aligned                  '
   >>> '{:>30}'.format('right aligned')
   '                 right aligned'
   >>> '{:^30}'.format('centered')
   '           centered           '
   >>> '{:*^30}'.format('centered')  # use '*' as a fill char
   '***********centered***********'

Replacing "%+f", "%-f", and "% f" and specifying a sign:

   >>> '{:+f}; {:+f}'.format(3.14, -3.14)  # show it always
   '+3.140000; -3.140000'
   >>> '{: f}; {: f}'.format(3.14, -3.14)  # show a space for positive numbers
   ' 3.140000; -3.140000'
   >>> '{:-f}; {:-f}'.format(3.14, -3.14)  # show only the minus -- same as '{:f}; {:f}'
   '3.140000; -3.140000'

Replacing "%x" and "%o" and converting the value to different bases:

   >>> # format also supports binary numbers
   >>> "int: {0:d};  hex: {0:x};  oct: {0:o};  bin: {0:b}".format(42)
   'int: 42;  hex: 2a;  oct: 52;  bin: 101010'
   >>> # with 0x, 0o, or 0b as prefix:
   >>> "int: {0:d};  hex: {0:#x};  oct: {0:#o};  bin: {0:#b}".format(42)
   'int: 42;  hex: 0x2a;  oct: 0o52;  bin: 0b101010'

Using the comma as a thousands separator:

   >>> '{:,}'.format(1234567890)
   '1,234,567,890'

Expressing a percentage:

   >>> points = 19.5
   >>> total = 22
   >>> 'Correct answers: {:.2%}'.format(points/total)
   'Correct answers: 88.64%'

Using type-specific formatting:

   >>> import datetime
   >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)
   >>> '{:%Y-%m-%d %H:%M:%S}'.format(d)
   '2010-07-04 12:15:58'

Nesting arguments and more complex examples:

   >>> for align, text in zip('<^>', ['left', 'center', 'right']):
   ...     '{0:{fill}{align}16}'.format(text, fill=align, align=align)
   ...
   'left<<<<<<<<<<<<'
   '^^^^^center^^^^^'
   '>>>>>>>>>>>right'
   >>>
   >>> octets = [192, 168, 0, 1]
   >>> '{:02X}{:02X}{:02X}{:02X}'.format(*octets)
   'C0A80001'
   >>> int(_, 16)
   3232235521
   >>>
   >>> width = 5
   >>> for num in range(5,12):
   ...     for base in 'dXob':
   ...         print '{0:{width}{base}}'.format(num, base=base, width=width),
   ...     print
   ...
       5     5     5   101
       6     6     6   110
       7     7     7   111
       8     8    10  1000
       9     9    11  1001
      10     A    12  1010
      11     B    13  1011
Function definitions
********************

A function definition defines a user-defined function object (see
section The standard type hierarchy):

   decorated      ::= decorators (classdef | funcdef)
   decorators     ::= decorator+
   decorator      ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE
   funcdef        ::= "def" funcname "(" [parameter_list] ")" ":" suite
   dotted_name    ::= identifier ("." identifier)*
   parameter_list ::= (defparameter ",")*
                      (  "*" identifier ["," "**" identifier]
                      | "**" identifier
                      | defparameter [","] )
   defparameter   ::= parameter ["=" expression]
   sublist        ::= parameter ("," parameter)* [","]
   parameter      ::= identifier | "(" sublist ")"
   funcname       ::= identifier

A function definition is an executable statement.  Its execution binds
the function name in the current local namespace to a function object
(a wrapper around the executable code for the function).  This
function object contains a reference to the current global namespace
as the global namespace to be used when the function is called.

The function definition does not execute the function body; this gets
executed only when the function is called. [3]

A function definition may be wrapped by one or more *decorator*
expressions. Decorator expressions are evaluated when the function is
defined, in the scope that contains the function definition.  The
result must be a callable, which is invoked with the function object
as the only argument. The returned value is bound to the function name
instead of the function object.  Multiple decorators are applied in
nested fashion. For example, the following code:

   @f1(arg)
   @f2
   def func(): pass

is equivalent to:

   def func(): pass
   func = f1(arg)(f2(func))

When one or more top-level *parameters* have the form *parameter* "="
*expression*, the function is said to have "default parameter values."
For a parameter with a default value, the corresponding *argument* may
be omitted from a call, in which case the parameter's default value is
substituted.  If a parameter has a default value, all following
parameters must also have a default value --- this is a syntactic
restriction that is not expressed by the grammar.

**Default parameter values are evaluated when the function definition
is executed.**  This means that the expression is evaluated once, when
the function is defined, and that the same "pre-computed" value is
used for each call.  This is especially important to understand when a
default parameter is a mutable object, such as a list or a dictionary:
if the function modifies the object (e.g. by appending an item to a
list), the default value is in effect modified. This is generally not
what was intended.  A way around this is to use "None" as the
default, and explicitly test for it in the body of the function, e.g.:

   def whats_on_the_telly(penguin=None):
       if penguin is None:
           penguin = []
       penguin.append("property of the zoo")
       return penguin
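
For contrast, a minimal sketch of the pitfall itself (the function
name is illustrative):

   >>> def register(item, seen=[]):    # one list is shared across calls
   ...     seen.append(item)
   ...     return seen
   ...
   >>> register('a')
   ['a']
   >>> register('b')                   # the default still holds 'a'
   ['a', 'b']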

Function call semantics are described in more detail in section Calls.
A function call always assigns values to all parameters mentioned in
the parameter list, either from positional arguments, from keyword
arguments, or from default values.  If the form ""*identifier"" is
present, it is initialized to a tuple receiving any excess positional
parameters, defaulting to the empty tuple.  If the form
""**identifier"" is present, it is initialized to a new dictionary
receiving any excess keyword arguments, defaulting to a new empty
dictionary.
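
A short sketch of both forms (the names are arbitrary):

   >>> def f(a, *args, **kwargs):
   ...     return a, args, kwargs
   ...
   >>> f(1, 2, 3, x=4)
   (1, (2, 3), {'x': 4})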

It is also possible to create anonymous functions (functions not bound
to a name), for immediate use in expressions.  This uses lambda
expressions, described in section Lambdas.  Note that the lambda
expression is merely a shorthand for a simplified function definition;
a function defined in a ""def"" statement can be passed around or
assigned to another name just like a function defined by a lambda
expression.  The ""def"" form is actually more powerful since it
allows the execution of multiple statements.

**Programmer's note:** Functions are first-class objects.  A ""def""
form executed inside a function definition defines a local function
that can be returned or passed around.  Free variables used in the
nested function can access the local variables of the function
containing the def.  See section Naming and binding for details.
The "global" statement
**********************

   global_stmt ::= "global" identifier ("," identifier)*

The "global" statement is a declaration which holds for the entire
current code block.  It means that the listed identifiers are to be
interpreted as globals.  It would be impossible to assign to a global
variable without "global", although free variables may refer to
globals without being declared global.
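
A minimal sketch (the names "counter" and "bump" are arbitrary):

   >>> counter = 0
   >>> def bump():
   ...     global counter
   ...     counter += 1
   ...
   >>> bump()
   >>> counter
   1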

Names listed in a "global" statement must not be used in the same code
block textually preceding that "global" statement.

Names listed in a "global" statement must not be defined as formal
parameters or in a "for" loop control target, "class" definition,
function definition, or "import" statement.

**CPython implementation detail:** The current implementation does not
enforce the latter two restrictions, but programs should not abuse
this freedom, as future implementations may enforce them or silently
change the meaning of the program.

**Programmer's note:** "global" is a directive to the parser.  It
applies only to code parsed at the same time as the "global"
statement. In particular, a "global" statement contained in an "exec"
statement does not affect the code block *containing* the "exec"
statement, and code contained in an "exec" statement is unaffected by
"global" statements in the code containing the "exec" statement.  The
same applies to the "eval()", "execfile()" and "compile()" functions.
Reserved classes of identifiers
*******************************

Certain classes of identifiers (besides keywords) have special
meanings.  These classes are identified by the patterns of leading and
trailing underscore characters:

"_*"
   Not imported by "from module import *".  The special identifier "_"
   is used in the interactive interpreter to store the result of the
   last evaluation; it is stored in the "__builtin__" module.  When
   not in interactive mode, "_" has no special meaning and is not
   defined. See section The import statement.

   Note: The name "_" is often used in conjunction with
     internationalization; refer to the documentation for the
     "gettext" module for more information on this convention.

"__*__"
   System-defined names. These names are defined by the interpreter
   and its implementation (including the standard library).  Current
   system names are discussed in the Special method names section and
   elsewhere.  More will likely be defined in future versions of
   Python.  *Any* use of "__*__" names, in any context, that does not
   follow explicitly documented use, is subject to breakage without
   warning.

"__*"
   Class-private names.  Names in this category, when used within the
   context of a class definition, are re-written to use a mangled form
   to help avoid name clashes between "private" attributes of base and
   derived classes. See section Identifiers (Names).
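
A brief sketch of the mangling described above (the class and
attribute names are arbitrary):

   >>> class Widget(object):
   ...     def __init__(self):
   ...         self.__secret = 1    # stored as "_Widget__secret"
   ...
   >>> w = Widget()
   >>> w._Widget__secret
   1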

Identifiers and keywords
************************

Identifiers (also referred to as *names*) are described by the
following lexical definitions:

   identifier ::= (letter|"_") (letter | digit | "_")*
   letter     ::= lowercase | uppercase
   lowercase  ::= "a"..."z"
   uppercase  ::= "A"..."Z"
   digit      ::= "0"..."9"

Identifiers are unlimited in length.  Case is significant.


Keywords
========

The following identifiers are used as reserved words, or *keywords* of
the language, and cannot be used as ordinary identifiers.  They must
be spelled exactly as written here:

   and       del       from      not       while
   as        elif      global    or        with
   assert    else      if        pass      yield
   break     except    import    print
   class     exec      in        raise
   continue  finally   is        return
   def       for       lambda    try

Changed in version 2.4: "None" became a constant and is now recognized
by the compiler as a name for the built-in object "None".  Although it
is not a keyword, you cannot assign a different object to it.

Changed in version 2.5: Using "as" and "with" as identifiers triggers
a warning.  To use them as keywords, enable the "with_statement"
future feature.

Changed in version 2.6: "as" and "with" are full keywords.


Reserved classes of identifiers
===============================

Certain classes of identifiers (besides keywords) have special
meanings.  These classes are identified by the patterns of leading and
trailing underscore characters:

"_*"
   Not imported by "from module import *".  The special identifier "_"
   is used in the interactive interpreter to store the result of the
   last evaluation; it is stored in the "__builtin__" module.  When
   not in interactive mode, "_" has no special meaning and is not
   defined. See section The import statement.

   Note: The name "_" is often used in conjunction with
     internationalization; refer to the documentation for the
     "gettext" module for more information on this convention.

"__*__"
   System-defined names. These names are defined by the interpreter
   and its implementation (including the standard library).  Current
   system names are discussed in the Special method names section and
   elsewhere.  More will likely be defined in future versions of
   Python.  *Any* use of "__*__" names, in any context, that does not
   follow explicitly documented use, is subject to breakage without
   warning.

"__*"
   Class-private names.  Names in this category, when used within the
   context of a class definition, are re-written to use a mangled form
   to help avoid name clashes between "private" attributes of base and
   derived classes. See section Identifiers (Names).
Imaginary literals
******************

Imaginary literals are described by the following lexical definitions:

   imagnumber ::= (floatnumber | intpart) ("j" | "J")

An imaginary literal yields a complex number with a real part of 0.0.
Complex numbers are represented as a pair of floating point numbers
and have the same restrictions on their range.  To create a complex
number with a nonzero real part, add a floating point number to it,
e.g., "(3+4j)".  Some examples of imaginary literals:

   3.14j   10.j    10j     .001j   1e100j  3.14e-10j
The "import" statement
**********************

   import_stmt     ::= "import" module ["as" name] ( "," module ["as" name] )*
                   | "from" relative_module "import" identifier ["as" name]
                   ( "," identifier ["as" name] )*
                   | "from" relative_module "import" "(" identifier ["as" name]
                   ( "," identifier ["as" name] )* [","] ")"
                   | "from" module "import" "*"
   module          ::= (identifier ".")* identifier
   relative_module ::= "."* module | "."+
   name            ::= identifier

Import statements are executed in two steps: (1) find a module, and
initialize it if necessary; (2) define a name or names in the local
namespace (of the scope where the "import" statement occurs). The
statement comes in two forms differing on whether it uses the "from"
keyword. The first form (without "from") repeats these steps for each
identifier in the list. The form with "from" performs step (1) once,
and then performs step (2) repeatedly.

To understand how step (1) occurs, one must first understand how
Python handles hierarchical naming of modules. To help organize
modules and provide a hierarchy in naming, Python has a concept of
packages. A package can contain other packages and modules while
modules cannot contain other modules or packages. From a file system
perspective, packages are directories and modules are files.

Once the name of the module is known (unless otherwise specified, the
term "module" will refer to both packages and modules), searching for
the module or package can begin. The first place checked is
"sys.modules", the cache of all modules that have been imported
previously. If the module is found there then it is used in step (2)
of import.

If the module is not found in the cache, then "sys.meta_path" is
searched (the specification for "sys.meta_path" can be found in **PEP
302**). The object is a list of *finder* objects which are queried in
order as to whether they know how to load the module by calling their
"find_module()" method with the name of the module. If the module
happens to be contained within a package (as denoted by the existence
of a dot in the name), then a second argument to "find_module()" is
given as the value of the "__path__" attribute from the parent package
(everything up to the last dot in the name of the module being
imported). If a finder can find the module it returns a *loader*
(discussed later) or returns "None".

If none of the finders on "sys.meta_path" are able to find the module
then some implicitly defined finders are queried. Implementations of
Python vary in what implicit meta path finders are defined. The one
they all do define, though, is one that handles "sys.path_hooks",
"sys.path_importer_cache", and "sys.path".

The implicit finder searches for the requested module in the "paths"
specified in one of two places ("paths" do not have to be file system
paths). If the module being imported is supposed to be contained
within a package then the second argument passed to "find_module()",
"__path__" on the parent package, is used as the source of paths. If
the module is not contained in a package then "sys.path" is used as
the source of paths.

Once the source of paths is chosen it is iterated over to find a
finder that can handle that path. The dict at
"sys.path_importer_cache" caches finders for paths and is checked for
a finder. If the path does not have a finder cached then
"sys.path_hooks" is searched by calling each object in the list with a
single argument of the path, returning a finder or raises
"ImportError". If a finder is returned then it is cached in
"sys.path_importer_cache" and then used for that path entry. If no
finder can be found but the path exists then a value of "None" is
stored in "sys.path_importer_cache" to signify that an implicit, file-
based finder that handles modules stored as individual files should be
used for that path. If the path does not exist then a finder which
always returns "None" is placed in the cache for the path.

If no finder can find the module then "ImportError" is raised.
Otherwise some finder returned a loader whose "load_module()" method
is called with the name of the module to load (see **PEP 302** for the
original definition of loaders). A loader has several responsibilities
to perform on a module it loads. First, if the module already exists
in "sys.modules" (a possibility if the loader is called outside of the
import machinery) then it is to use that module for initialization and
not a new module. But if the module does not exist in "sys.modules"
then it is to be added to that dict before initialization begins. If
an error occurs during loading of the module and it was added to
"sys.modules" it is to be removed from the dict. If an error occurs
but the module was already in "sys.modules" it is left in the dict.

The loader must set several attributes on the module. "__name__" is to
be set to the name of the module. "__file__" is to be the "path" to
the file unless the module is built-in (and thus listed in
"sys.builtin_module_names") in which case the attribute is not set. If
what is being imported is a package then "__path__" is to be set to a
list of paths to be searched when looking for modules and packages
contained within the package being imported. "__package__" is optional
but should be set to the name of the package that contains the module
or package (the empty string is used for a module not contained in a
package). "__loader__" is also optional but should be set to the
loader object that is loading the module.

If an error occurs during loading then the loader raises "ImportError"
if some other exception is not already being propagated. Otherwise the
loader returns the module that was loaded and initialized.

When step (1) finishes without raising an exception, step (2) can
begin.

The first form of "import" statement binds the module name in the
local namespace to the module object, and then goes on to import the
next identifier, if any.  If the module name is followed by "as", the
name following "as" is used as the local name for the module.

The "from" form does not bind the module name: it goes through the
list of identifiers, looks each one of them up in the module found in
step (1), and binds the name in the local namespace to the object thus
found.  As with the first form of "import", an alternate local name
can be supplied by specifying ""as" localname".  If a name is not
found, "ImportError" is raised.  If the list of identifiers is
replaced by a star ("'*'"), all public names defined in the module are
bound in the local namespace of the "import" statement.

The *public names* defined by a module are determined by checking the
module's namespace for a variable named "__all__"; if defined, it must
be a sequence of strings which are names defined or imported by that
module.  The names given in "__all__" are all considered public and
are required to exist.  If "__all__" is not defined, the set of public
names includes all names found in the module's namespace which do not
begin with an underscore character ("'_'"). "__all__" should contain
the entire public API. It is intended to avoid accidentally exporting
items that are not part of the API (such as library modules which were
imported and used within the module).
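
A minimal sketch of "__all__" at work (the module name "helpers" and
its contents are hypothetical):

   # helpers.py
   __all__ = ['useful']

   def useful():
       return 42

   def _internal():    # not public: leading underscore, not in __all__
       return 'private'

A client that does "from helpers import *" then sees "useful" but not
"_internal".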

The "from" form with "*" may only occur in a module scope.  If the
wild card form of import --- "import *" --- is used in a function and
the function contains or is a nested block with free variables, the
compiler will raise a "SyntaxError".

When specifying what module to import you do not have to specify the
absolute name of the module. When a module or package is contained
within another package it is possible to make a relative import within
the same top package without having to mention the package name. By
using leading dots in the specified module or package after "from" you
can specify how high to traverse up the current package hierarchy
without specifying exact names. One leading dot means the current
package where the module making the import exists. Two dots means up
one package level. Three dots is up two levels, etc. So if you execute
"from . import mod" from a module in the "pkg" package then you will
end up importing "pkg.mod". If you execute "from ..subpkg2 import mod"
from within "pkg.subpkg1" you will import "pkg.subpkg2.mod". The
specification for relative imports is contained within **PEP 328**.

"importlib.import_module()" is provided to support applications that
determine which modules need to be loaded dynamically.
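
A short illustration using the standard "json" module:

   >>> import importlib
   >>> json = importlib.import_module('json')
   >>> json.loads('[1, 2, 3]')
   [1, 2, 3]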


Future statements
=================

A *future statement* is a directive to the compiler that a particular
module should be compiled using syntax or semantics that will be
available in a specified future release of Python.  The future
statement is intended to ease migration to future versions of Python
that introduce incompatible changes to the language.  It allows use of
the new features on a per-module basis before the release in which the
feature becomes standard.

   future_statement ::= "from" "__future__" "import" feature ["as" name]
                        ("," feature ["as" name])*
                        | "from" "__future__" "import" "(" feature ["as" name]
                        ("," feature ["as" name])* [","] ")"
   feature          ::= identifier
   name             ::= identifier

A future statement must appear near the top of the module.  The only
lines that can appear before a future statement are:

* the module docstring (if any),

* comments,

* blank lines, and

* other future statements.

The features recognized by Python 2.6 are "unicode_literals",
"print_function", "absolute_import", "division", "generators",
"nested_scopes" and "with_statement".  "generators", "with_statement",
"nested_scopes" are redundant in Python version 2.6 and above because
they are always enabled.
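
For example, a module that wants the Python 3 style "print" function
can begin with:

   from __future__ import print_function

   print('spam', 'eggs', sep=', ')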

A future statement is recognized and treated specially at compile
time: Changes to the semantics of core constructs are often
implemented by generating different code.  It may even be the case
that a new feature introduces new incompatible syntax (such as a new
reserved word), in which case the compiler may need to parse the
module differently.  Such decisions cannot be pushed off until
runtime.

For any given release, the compiler knows which feature names have
been defined, and raises a compile-time error if a future statement
contains a feature not known to it.

The direct runtime semantics are the same as for any import statement:
there is a standard module "__future__", described later, and it will
be imported in the usual way at the time the future statement is
executed.

The interesting runtime semantics depend on the specific feature
enabled by the future statement.

Note that there is nothing special about the statement:

   import __future__ [as name]

That is not a future statement; it's an ordinary import statement with
no special semantics or syntax restrictions.

Code compiled by an "exec" statement or calls to the built-in
functions "compile()" and "execfile()" that occur in a module "M"
containing a future statement will, by default, use the new  syntax or
semantics associated with the future statement.  This can, starting
with Python 2.2, be controlled by optional arguments to "compile()" ---
see the documentation of that function for details.

A future statement typed at an interactive interpreter prompt will
take effect for the rest of the interpreter session.  If an
interpreter is started with the "-i" option, is passed a script name
to execute, and the script includes a future statement, it will be in
effect in the interactive session started after the script is
executed.

See also:

  **PEP 236** - Back to the __future__
     The original proposal for the __future__ mechanism.
Membership test operations
**************************

The operators "in" and "not in" test for membership.  "x in s"
evaluates to "True" if *x* is a member of *s*, and "False" otherwise.
"x not in s" returns the negation of "x in s".  All built-in sequences
and set types support this as well as dictionaries, for which "in" tests
whether the dictionary has a given key. For container types such as
list, tuple, set, frozenset, dict, or collections.deque, the
expression "x in y" is equivalent to "any(x is e or x == e for e in
y)".

For the string and bytes types, "x in y" is "True" if and only if *x*
is a substring of *y*.  An equivalent test is "y.find(x) != -1".
Empty strings are always considered to be a substring of any other
string, so """ in "abc"" will return "True".

For user-defined classes which define the "__contains__()" method, "x
in y" returns "True" if "y.__contains__(x)" returns a true value, and
"False" otherwise.

For user-defined classes which do not define "__contains__()" but do
define "__iter__()", "x in y" is "True" if some value "z" with "x ==
z" is produced while iterating over "y".  If an exception is raised
during the iteration, it is as if "in" raised that exception.

Lastly, the old-style iteration protocol is tried: if a class defines
"__getitem__()", "x in y" is "True" if and only if there is a non-
negative integer index *i* such that "x == y[i]", and all lower
integer indices do not raise "IndexError" exception. (If any other
exception is raised, it is as if "in" raised that exception).

The operator "not in" is defined to have the inverse true value of
"in".
tinso
Integer and long integer literals
*********************************

Integer and long integer literals are described by the following
lexical definitions:

   longinteger    ::= integer ("l" | "L")
   integer        ::= decimalinteger | octinteger | hexinteger | bininteger
   decimalinteger ::= nonzerodigit digit* | "0"
   octinteger     ::= "0" ("o" | "O") octdigit+ | "0" octdigit+
   hexinteger     ::= "0" ("x" | "X") hexdigit+
   bininteger     ::= "0" ("b" | "B") bindigit+
   nonzerodigit   ::= "1"..."9"
   octdigit       ::= "0"..."7"
   bindigit       ::= "0" | "1"
   hexdigit       ::= digit | "a"..."f" | "A"..."F"

Although both lower case "'l'" and upper case "'L'" are allowed as
suffix for long integers, it is strongly recommended to always use
"'L'", since the letter "'l'" looks too much like the digit "'1'".

Plain integer literals that are above the largest representable plain
integer (e.g., 2147483647 when using 32-bit arithmetic) are accepted
as if they were long integers instead. [1]  There is no limit for long
integer literals apart from what can be stored in available memory.

Some examples of plain integer literals (first row) and long integer
literals (second and third rows):

   7     2147483647                        0177
   3L    79228162514264337593543950336L    0377L   0x100000000L
         79228162514264337593543950336             0xdeadbeef
tintegerssx
Lambdas
*******

   lambda_expr     ::= "lambda" [parameter_list]: expression
   old_lambda_expr ::= "lambda" [parameter_list]: old_expression

Lambda expressions (sometimes called lambda forms) have the same
syntactic position as expressions.  They are a shorthand to create
anonymous functions; the expression "lambda arguments: expression"
yields a function object.  The unnamed object behaves like a function
object defined with

   def name(arguments):
       return expression

See section Function definitions for the syntax of parameter lists.
Note that functions created with lambda expressions cannot contain
statements.
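
As a short illustration (the name is arbitrary), the two definitions
below yield functions with the same behaviour; only the lambda form is
anonymous:

   incr = lambda x: x + 1

   def incr(x):
       return x + 1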
tlambdas�
List displays
*************

A list display is a possibly empty series of expressions enclosed in
square brackets:

   list_display        ::= "[" [expression_list | list_comprehension] "]"
   list_comprehension  ::= expression list_for
   list_for            ::= "for" target_list "in" old_expression_list [list_iter]
   old_expression_list ::= old_expression [("," old_expression)+ [","]]
   old_expression      ::= or_test | old_lambda_expr
   list_iter           ::= list_for | list_if
   list_if             ::= "if" old_expression [list_iter]

A list display yields a new list object.  Its contents are specified
by providing either a list of expressions or a list comprehension.
When a comma-separated list of expressions is supplied, its elements
are evaluated from left to right and placed into the list object in
that order.  When a list comprehension is supplied, it consists of a
single expression followed by at least one "for" clause and zero or
more "for" or "if" clauses.  In this case, the elements of the new
list are those that would be produced by considering each of the "for"
or "if" clauses a block, nesting from left to right, and evaluating
the expression to produce a list element each time the innermost block
is reached [1].
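
For instance (arbitrary example values), the comprehension below is
equivalent to the nested loop that follows it:

   pairs = [(x, y) for x in range(3) if x for y in range(2)]

   pairs = []
   for x in range(3):
       if x:
           for y in range(2):
               pairs.append((x, y))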
tlistss�
Naming and binding
******************

*Names* refer to objects.  Names are introduced by name binding
operations. Each occurrence of a name in the program text refers to
the *binding* of that name established in the innermost function block
containing the use.

A *block* is a piece of Python program text that is executed as a
unit. The following are blocks: a module, a function body, and a class
definition. Each command typed interactively is a block.  A script
file (a file given as standard input to the interpreter or specified
on the interpreter command line as the first argument) is a code block.
A script command (a command specified on the interpreter command line
with the '**-c**' option) is a code block.  The file read by the
built-in function "execfile()" is a code block.  The string argument
passed to the built-in function "eval()" and to the "exec" statement
is a code block. The expression read and evaluated by the built-in
function "input()" is a code block.

A code block is executed in an *execution frame*.  A frame contains
some administrative information (used for debugging) and determines
where and how execution continues after the code block's execution has
completed.

A *scope* defines the visibility of a name within a block.  If a local
variable is defined in a block, its scope includes that block.  If the
definition occurs in a function block, the scope extends to any blocks
contained within the defining one, unless a contained block introduces
a different binding for the name.  The scope of names defined in a
class block is limited to the class block; it does not extend to the
code blocks of methods -- this includes generator expressions since
they are implemented using a function scope.  This means that the
following will fail:

   class A:
       a = 42
       b = list(a + i for i in range(10))

When a name is used in a code block, it is resolved using the nearest
enclosing scope.  The set of all such scopes visible to a code block
is called the block's *environment*.

If a name is bound in a block, it is a local variable of that block.
If a name is bound at the module level, it is a global variable.  (The
variables of the module code block are local and global.)  If a
variable is used in a code block but not defined there, it is a *free
variable*.

When a name is not found at all, a "NameError" exception is raised.
If the name refers to a local variable that has not been bound, a
"UnboundLocalError" exception is raised.  "UnboundLocalError" is a
subclass of "NameError".

The following constructs bind names: formal parameters to functions,
"import" statements, class and function definitions (these bind the
class or function name in the defining block), and targets that are
identifiers if occurring in an assignment, "for" loop header, in the
second position of an "except" clause header or after "as" in a "with"
statement.  The "import" statement of the form "from ... import *"
binds all names defined in the imported module, except those beginning
with an underscore.  This form may only be used at the module level.

A target occurring in a "del" statement is also considered bound for
this purpose (though the actual semantics are to unbind the name).  It
is illegal to unbind a name that is referenced by an enclosing scope;
the compiler will report a "SyntaxError".

Each assignment or import statement occurs within a block defined by a
class or function definition or at the module level (the top-level
code block).

If a name binding operation occurs anywhere within a code block, all
uses of the name within the block are treated as references to the
current block.  This can lead to errors when a name is used within a
block before it is bound. This rule is subtle.  Python lacks
declarations and allows name binding operations to occur anywhere
within a code block.  The local variables of a code block can be
determined by scanning the entire text of the block for name binding
operations.
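
For example (hypothetical names), the assignment inside "f()" makes "x"
local to the whole block, so the earlier reference fails even though a
global "x" exists:

   x = 10

   def f():
       print x    # raises UnboundLocalError when f() is called
       x = 20     # this assignment makes x local throughout the block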

If the global statement occurs within a block, all uses of the name
specified in the statement refer to the binding of that name in the
top-level namespace. Names are resolved in the top-level namespace by
searching the global namespace, i.e. the namespace of the module
containing the code block, and the builtins namespace, the namespace
of the module "__builtin__".  The global namespace is searched first.
If the name is not found there, the builtins namespace is searched.
The global statement must precede all uses of the name.
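
A minimal sketch (the names are arbitrary):

   counter = 0

   def bump():
       global counter    # "counter" now refers to the module-level binding
       counter += 1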

The builtins namespace associated with the execution of a code block
is actually found by looking up the name "__builtins__" in its global
namespace; this should be a dictionary or a module (in the latter case
the module's dictionary is used).  By default, when in the "__main__"
module, "__builtins__" is the built-in module "__builtin__" (note: no
's'); when in any other module, "__builtins__" is an alias for the
dictionary of the "__builtin__" module itself.  "__builtins__" can be
set to a user-created dictionary to create a weak form of restricted
execution.

**CPython implementation detail:** Users should not touch
"__builtins__"; it is strictly an implementation detail.  Users
wanting to override values in the builtins namespace should "import"
the "__builtin__" (no 's') module and modify its attributes
appropriately.

The namespace for a module is automatically created the first time a
module is imported.  The main module for a script is always called
"__main__".

The "global" statement has the same scope as a name binding operation
in the same block.  If the nearest enclosing scope for a free variable
contains a global statement, the free variable is treated as a global.

A class definition is an executable statement that may use and define
names. These references follow the normal rules for name resolution.
The namespace of the class definition becomes the attribute dictionary
of the class.  Names defined at the class scope are not visible in
methods.


Interaction with dynamic features
=================================

There are several cases where Python statements are illegal when used
in conjunction with nested scopes that contain free variables.

If a variable is referenced in an enclosing scope, it is illegal to
delete the name.  An error will be reported at compile time.

If the wild card form of import --- "import *" --- is used in a
function and the function contains or is a nested block with free
variables, the compiler will raise a "SyntaxError".

If "exec" is used in a function and the function contains or is a
nested block with free variables, the compiler will raise a
"SyntaxError" unless the exec explicitly specifies the local namespace
for the "exec".  (In other words, "exec obj" would be illegal, but
"exec obj in ns" would be legal.)

The "eval()", "execfile()", and "input()" functions and the "exec"
statement do not have access to the full environment for resolving
names.  Names may be resolved in the local and global namespaces of
the caller.  Free variables are not resolved in the nearest enclosing
namespace, but in the global namespace. [1] The "exec" statement and
the "eval()" and "execfile()" functions have optional arguments to
override the global and local namespace.  If only one namespace is
specified, it is used for both.
tnamings�
Numeric literals
****************

There are four types of numeric literals: plain integers, long
integers, floating point numbers, and imaginary numbers.  There are no
complex literals (complex numbers can be formed by adding a real
number and an imaginary number).

Note that numeric literals do not include a sign; a phrase like "-1"
is actually an expression composed of the unary operator '"-"' and the
literal "1".
tnumberssy
Emulating numeric types
***********************

The following methods can be defined to emulate numeric objects.
Methods corresponding to operations that are not supported by the
particular kind of number implemented (e.g., bitwise operations for
non-integral numbers) should be left undefined.

object.__add__(self, other)
object.__sub__(self, other)
object.__mul__(self, other)
object.__floordiv__(self, other)
object.__mod__(self, other)
object.__divmod__(self, other)
object.__pow__(self, other[, modulo])
object.__lshift__(self, other)
object.__rshift__(self, other)
object.__and__(self, other)
object.__xor__(self, other)
object.__or__(self, other)

   These methods are called to implement the binary arithmetic
   operations ("+", "-", "*", "//", "%", "divmod()", "pow()", "**",
   "<<", ">>", "&", "^", "|").  For instance, to evaluate the
   expression "x + y", where *x* is an instance of a class that has an
   "__add__()" method, "x.__add__(y)" is called.  The "__divmod__()"
   method should be the equivalent to using "__floordiv__()" and
   "__mod__()"; it should not be related to "__truediv__()" (described
   below).  Note that "__pow__()" should be defined to accept an
   optional third argument if the ternary version of the built-in
   "pow()" function is to be supported.

   If one of those methods does not support the operation with the
   supplied arguments, it should return "NotImplemented".
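
   For instance (a hypothetical two-dimensional vector class, shown as
   a sketch only):

      class Vector(object):
          def __init__(self, x, y):
              self.x, self.y = x, y

          def __add__(self, other):
              if not isinstance(other, Vector):
                  return NotImplemented    # let the other operand try
              return Vector(self.x + other.x, self.y + other.y)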

object.__div__(self, other)
object.__truediv__(self, other)

   The division operator ("/") is implemented by these methods.  The
   "__truediv__()" method is used when "__future__.division" is in
   effect, otherwise "__div__()" is used.  If only one of these two
   methods is defined, the object will not support division in the
   alternate context; "TypeError" will be raised instead.

object.__radd__(self, other)
object.__rsub__(self, other)
object.__rmul__(self, other)
object.__rdiv__(self, other)
object.__rtruediv__(self, other)
object.__rfloordiv__(self, other)
object.__rmod__(self, other)
object.__rdivmod__(self, other)
object.__rpow__(self, other)
object.__rlshift__(self, other)
object.__rrshift__(self, other)
object.__rand__(self, other)
object.__rxor__(self, other)
object.__ror__(self, other)

   These methods are called to implement the binary arithmetic
   operations ("+", "-", "*", "/", "%", "divmod()", "pow()", "**",
   "<<", ">>", "&", "^", "|") with reflected (swapped) operands.
   These functions are only called if the left operand does not
   support the corresponding operation and the operands are of
   different types. [2] For instance, to evaluate the expression "x -
   y", where *y* is an instance of a class that has an "__rsub__()"
   method, "y.__rsub__(x)" is called if "x.__sub__(y)" returns
   *NotImplemented*.

   Note that ternary "pow()" will not try calling "__rpow__()" (the
   coercion rules would become too complicated).

   Note: If the right operand's type is a subclass of the left
     operand's type and that subclass provides the reflected method
     for the operation, this method will be called before the left
     operand's non-reflected method.  This behavior allows subclasses
     to override their ancestors' operations.

object.__iadd__(self, other)
object.__isub__(self, other)
object.__imul__(self, other)
object.__idiv__(self, other)
object.__itruediv__(self, other)
object.__ifloordiv__(self, other)
object.__imod__(self, other)
object.__ipow__(self, other[, modulo])
object.__ilshift__(self, other)
object.__irshift__(self, other)
object.__iand__(self, other)
object.__ixor__(self, other)
object.__ior__(self, other)

   These methods are called to implement the augmented arithmetic
   assignments ("+=", "-=", "*=", "/=", "//=", "%=", "**=", "<<=",
   ">>=", "&=", "^=", "|=").  These methods should attempt to do the
   operation in-place (modifying *self*) and return the result (which
   could be, but does not have to be, *self*).  If a specific method
   is not defined, the augmented assignment falls back to the normal
   methods.  For instance, to execute the statement "x += y", where
   *x* is an instance of a class that has an "__iadd__()" method,
   "x.__iadd__(y)" is called.  If *x* is an instance of a class that
   does not define a "__iadd__()" method, "x.__add__(y)" and
   "y.__radd__(x)" are considered, as with the evaluation of "x + y".

object.__neg__(self)
object.__pos__(self)
object.__abs__(self)
object.__invert__(self)

   Called to implement the unary arithmetic operations ("-", "+",
   "abs()" and "~").

object.__complex__(self)
object.__int__(self)
object.__long__(self)
object.__float__(self)

   Called to implement the built-in functions "complex()", "int()",
   "long()", and "float()".  Should return a value of the appropriate
   type.

object.__oct__(self)
object.__hex__(self)

   Called to implement the built-in functions "oct()" and "hex()".
   Should return a string value.

object.__index__(self)

   Called to implement "operator.index()".  Also called whenever
   Python needs an integer object (such as in slicing).  Must return
   an integer (int or long).

   New in version 2.5.

object.__coerce__(self, other)

   Called to implement "mixed-mode" numeric arithmetic.  Should either
   return a 2-tuple containing *self* and *other* converted to a
   common numeric type, or "None" if conversion is impossible.  When
   the common type would be the type of "other", it is sufficient to
   return "None", since the interpreter will also ask the other object
   to attempt a coercion (but sometimes, if the implementation of the
   other type cannot be changed, it is useful to do the conversion to
   the other type here).  A return value of "NotImplemented" is
   equivalent to returning "None".
s
numeric-typessZ
Objects, values and types
*************************

*Objects* are Python's abstraction for data.  All data in a Python
program is represented by objects or by relations between objects. (In
a sense, and in conformance to Von Neumann's model of a "stored
program computer," code is also represented by objects.)

Every object has an identity, a type and a value.  An object's
*identity* never changes once it has been created; you may think of it
as the object's address in memory.  The '"is"' operator compares the
identity of two objects; the "id()" function returns an integer
representing its identity (currently implemented as its address). An
object's *type* is also unchangeable. [1] An object's type determines
the operations that the object supports (e.g., "does it have a
length?") and also defines the possible values for objects of that
type.  The "type()" function returns an object's type (which is an
object itself).  The *value* of some objects can change.  Objects
whose value can change are said to be *mutable*; objects whose value
is unchangeable once they are created are called *immutable*. (The
value of an immutable container object that contains a reference to a
mutable object can change when the latter's value is changed; however
the container is still considered immutable, because the collection of
objects it contains cannot be changed.  So, immutability is not
strictly the same as having an unchangeable value, it is more subtle.)
An object's mutability is determined by its type; for instance,
numbers, strings and tuples are immutable, while dictionaries and
lists are mutable.

Objects are never explicitly destroyed; however, when they become
unreachable they may be garbage-collected.  An implementation is
allowed to postpone garbage collection or omit it altogether --- it is
a matter of implementation quality how garbage collection is
implemented, as long as no objects are collected that are still
reachable.

**CPython implementation detail:** CPython currently uses a reference-
counting scheme with (optional) delayed detection of cyclically linked
garbage, which collects most objects as soon as they become
unreachable, but is not guaranteed to collect garbage containing
circular references.  See the documentation of the "gc" module for
information on controlling the collection of cyclic garbage. Other
implementations act differently and CPython may change. Do not depend
on immediate finalization of objects when they become unreachable (ex:
always close files).

Note that the use of the implementation's tracing or debugging
facilities may keep objects alive that would normally be collectable.
Also note that catching an exception with a '"try"..."except"'
statement may keep objects alive.

Some objects contain references to "external" resources such as open
files or windows.  It is understood that these resources are freed
when the object is garbage-collected, but since garbage collection is
not guaranteed to happen, such objects also provide an explicit way to
release the external resource, usually a "close()" method. Programs
are strongly recommended to explicitly close such objects.  The
'"try"..."finally"' statement provides a convenient way to do this.

Some objects contain references to other objects; these are called
*containers*. Examples of containers are tuples, lists and
dictionaries.  The references are part of a container's value.  In
most cases, when we talk about the value of a container, we imply the
values, not the identities of the contained objects; however, when we
talk about the mutability of a container, only the identities of the
immediately contained objects are implied.  So, if an immutable
container (like a tuple) contains a reference to a mutable object, its
value changes if that mutable object is changed.
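
For example:

   t = ([1, 2], 'abc')    # an immutable tuple holding a mutable list
   t[0].append(3)         # allowed: the list changes, so the tuple's value changes
   t[0] = []              # TypeError: the tuple's items cannot be rebound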

Types affect almost all aspects of object behavior.  Even the
importance of object identity is affected in some sense: for immutable
types, operations that compute new values may actually return a
reference to any existing object with the same type and value, while
for mutable objects this is not allowed.  E.g., after "a = 1; b = 1",
"a" and "b" may or may not refer to the same object with the value
one, depending on the implementation, but after "c = []; d = []", "c"
and "d" are guaranteed to refer to two different, unique, newly
created empty lists. (Note that "c = d = []" assigns the same object
to both "c" and "d".)
tobjectss
Operator precedence
*******************

The following table summarizes the operator precedences in Python,
from lowest precedence (least binding) to highest precedence (most
binding). Operators in the same box have the same precedence.  Unless
the syntax is explicitly given, operators are binary.  Operators in
the same box group left to right (except for comparisons, including
tests, which all have the same precedence and chain from left to right
--- see section Comparisons --- and exponentiation, which groups from
right to left).

+-------------------------------------------------+---------------------------------------+
| Operator                                        | Description                           |
+=================================================+=======================================+
| "lambda"                                        | Lambda expression                     |
+-------------------------------------------------+---------------------------------------+
| "if" -- "else"                                  | Conditional expression                |
+-------------------------------------------------+---------------------------------------+
| "or"                                            | Boolean OR                            |
+-------------------------------------------------+---------------------------------------+
| "and"                                           | Boolean AND                           |
+-------------------------------------------------+---------------------------------------+
| "not" "x"                                       | Boolean NOT                           |
+-------------------------------------------------+---------------------------------------+
| "in", "not in", "is", "is not", "<", "<=", ">", | Comparisons, including membership     |
| ">=", "<>", "!=", "=="                          | tests and identity tests              |
+-------------------------------------------------+---------------------------------------+
| "|"                                             | Bitwise OR                            |
+-------------------------------------------------+---------------------------------------+
| "^"                                             | Bitwise XOR                           |
+-------------------------------------------------+---------------------------------------+
| "&"                                             | Bitwise AND                           |
+-------------------------------------------------+---------------------------------------+
| "<<", ">>"                                      | Shifts                                |
+-------------------------------------------------+---------------------------------------+
| "+", "-"                                        | Addition and subtraction              |
+-------------------------------------------------+---------------------------------------+
| "*", "/", "//", "%"                             | Multiplication, division, remainder   |
|                                                 | [7]                                   |
+-------------------------------------------------+---------------------------------------+
| "+x", "-x", "~x"                                | Positive, negative, bitwise NOT       |
+-------------------------------------------------+---------------------------------------+
| "**"                                            | Exponentiation [8]                    |
+-------------------------------------------------+---------------------------------------+
| "x[index]", "x[index:index]",                   | Subscription, slicing, call,          |
| "x(arguments...)", "x.attribute"                | attribute reference                   |
+-------------------------------------------------+---------------------------------------+
| "(expressions...)", "[expressions...]", "{key:  | Binding or tuple display, list        |
| value...}", "`expressions...`"                  | display, dictionary display, string   |
|                                                 | conversion                            |
+-------------------------------------------------+---------------------------------------+

-[ Footnotes ]-

[1] In Python 2.3 and later releases, a list comprehension "leaks"
    the control variables of each "for" it contains into the
    containing scope.  However, this behavior is deprecated, and
    relying on it will not work in Python 3.

[2] While "abs(x%y) < abs(y)" is true mathematically, for floats
    it may not be true numerically due to roundoff.  For example, and
    assuming a platform on which a Python float is an IEEE 754 double-
    precision number, in order that "-1e-100 % 1e100" have the same
    sign as "1e100", the computed result is "-1e-100 + 1e100", which
    is numerically exactly equal to "1e100".  The function
    "math.fmod()" returns a result whose sign matches the sign of the
    first argument instead, and so returns "-1e-100" in this case.
    Which approach is more appropriate depends on the application.

[3] If x is very close to an exact integer multiple of y, it's
    possible for "floor(x/y)" to be one larger than "(x-x%y)/y" due to
    rounding.  In such cases, Python returns the latter result, in
    order to preserve that "divmod(x,y)[0] * y + x % y" be very close
    to "x".

[4] The Unicode standard distinguishes between *code points* (e.g.
    U+0041) and *abstract characters* (e.g. "LATIN CAPITAL LETTER A").
    While most abstract characters in Unicode are only represented
    using one code point, there are a number of abstract characters
    that can in addition be represented using a sequence of more than
    one code point.  For example, the abstract character "LATIN
    CAPITAL LETTER C WITH CEDILLA" can be represented as a single
    *precomposed character* at code position U+00C7, or as a sequence
    of a *base character* at code position U+0043 (LATIN CAPITAL
    LETTER C), followed by a *combining character* at code position
    U+0327 (COMBINING CEDILLA).

    The comparison operators on unicode strings compare at the level
    of Unicode code points. This may be counter-intuitive to humans.
    For example, "u"\u00C7" == u"\u0043\u0327"" is "False", even
    though both strings represent the same abstract character "LATIN
    CAPITAL LETTER C WITH CEDILLA".

    To compare strings at the level of abstract characters (that is,
    in a way intuitive to humans), use "unicodedata.normalize()".

[5] Earlier versions of Python used lexicographic comparison of
    the sorted (key, value) lists, but this was very expensive for the
    common case of comparing for equality.  An even earlier version of
    Python compared dictionaries by identity only, but this caused
    surprises because people expected to be able to test a dictionary
    for emptiness by comparing it to "{}".

[6] Due to automatic garbage-collection, free lists, and the
    dynamic nature of descriptors, you may notice seemingly unusual
    behaviour in certain uses of the "is" operator, like those
    involving comparisons between instance methods, or constants.
    Check their documentation for more info.

[7] The "%" operator is also used for string formatting; the same
    precedence applies.

[8] The power operator "**" binds less tightly than an arithmetic
    or bitwise unary operator on its right, that is, "2**-1" is "0.5".
soperator-summarysx
The "pass" statement
********************

   pass_stmt ::= "pass"

"pass" is a null operation --- when it is executed, nothing happens.
It is useful as a placeholder when a statement is required
syntactically, but no code needs to be executed, for example:

   def f(arg): pass    # a function that does nothing (yet)

   class C: pass       # a class with no methods (yet)
tpasss�
The power operator
******************

The power operator binds more tightly than unary operators on its
left; it binds less tightly than unary operators on its right.  The
syntax is:

   power ::= primary ["**" u_expr]

Thus, in an unparenthesized sequence of power and unary operators, the
operators are evaluated from right to left (this does not constrain
the evaluation order for the operands): "-1**2" results in "-1".
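
For example:

   -1**2       # -(1**2), i.e. -1
   (-1)**2     # 1
   2**-1       # 2**(-1), i.e. 0.5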

The power operator has the same semantics as the built-in "pow()"
function, when called with two arguments: it yields its left argument
raised to the power of its right argument.  The numeric arguments are
first converted to a common type.  The result type is that of the
arguments after coercion.

With mixed operand types, the coercion rules for binary arithmetic
operators apply. For int and long int operands, the result has the
same type as the operands (after coercion) unless the second argument
is negative; in that case, all arguments are converted to float and a
float result is delivered. For example, "10**2" returns "100", but
"10**-2" returns "0.01". (This last feature was added in Python 2.2.
In Python 2.1 and before, if both arguments were of integer types and
the second argument was negative, an exception was raised).

Raising "0.0" to a negative power results in a "ZeroDivisionError".
Raising a negative number to a fractional power results in a
"ValueError".
tpowers�
The "print" statement
*********************

   print_stmt ::= "print" ([expression ("," expression)* [","]]
                  | ">>" expression [("," expression)+ [","]])

"print" evaluates each expression in turn and writes the resulting
object to standard output (see below).  If an object is not a string,
it is first converted to a string using the rules for string
conversions.  The (resulting or original) string is then written.  A
space is written before each object is (converted and) written, unless
the output system believes it is positioned at the beginning of a
line.  This is the case (1) when no characters have yet been written
to standard output, (2) when the last character written to standard
output is a whitespace character except "' '", or (3) when the last
write operation on standard output was not a "print" statement. (In
some cases it may be functional to write an empty string to standard
output for this reason.)

Note: Objects which act like file objects but which are not the
  built-in file objects often do not properly emulate this aspect of
  the file object's behavior, so it is best not to rely on this.

A "'\n'" character is written at the end, unless the "print" statement
ends with a comma.  This is the only action if the statement contains
just the keyword "print".

Standard output is defined as the file object named "stdout" in the
built-in module "sys".  If no such object exists, or if it does not
have a "write()" method, a "RuntimeError" exception is raised.

"print" also has an extended form, defined by the second portion of
the syntax described above. This form is sometimes referred to as
""print" chevron." In this form, the first expression after the ">>"
must evaluate to a "file-like" object, specifically an object that has
a "write()" method as described above.  With this extended form, the
subsequent expressions are printed to this file object.  If the first
expression evaluates to "None", then "sys.stdout" is used as the file
for output.
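
For example, the extended form is commonly used to write to standard
error:

   import sys

   print >> sys.stderr, "warning: something looks wrong"
   print >> None, "this line goes to sys.stdout instead"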
tprints�
The "raise" statement
*********************

   raise_stmt ::= "raise" [expression ["," expression ["," expression]]]

If no expressions are present, "raise" re-raises the last exception
that was active in the current scope.  If no exception is active in
the current scope, a "TypeError" exception is raised indicating that
this is an error (if running under IDLE, a "Queue.Empty" exception is
raised instead).

Otherwise, "raise" evaluates the expressions to get three objects,
using "None" as the value of omitted expressions.  The first two
objects are used to determine the *type* and *value* of the exception.

If the first object is an instance, the type of the exception is the
class of the instance, the instance itself is the value, and the
second object must be "None".

If the first object is a class, it becomes the type of the exception.
The second object is used to determine the exception value: If it is
an instance of the class, the instance becomes the exception value. If
the second object is a tuple, it is used as the argument list for the
class constructor; if it is "None", an empty argument list is used,
and any other object is treated as a single argument to the
constructor.  The instance so created by calling the constructor is
used as the exception value.

If a third object is present and not "None", it must be a traceback
object (see section The standard type hierarchy), and it is
substituted instead of the current location as the place where the
exception occurred.  If the third object is present and not a
traceback object or "None", a "TypeError" exception is raised.  The
three-expression form of "raise" is useful to re-raise an exception
transparently in an except clause, but "raise" with no expressions
should be preferred if the exception to be re-raised was the most
recently active exception in the current scope.
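
For example (sketch only; "risky_operation()" is a hypothetical
function), an except clause can re-raise the active exception with its
original traceback:

   import sys

   try:
       risky_operation()
   except Exception:
       exc_type, exc_value, exc_tb = sys.exc_info()
       raise exc_type, exc_value, exc_tb    # equivalent here to a bare "raise"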

Additional information on exceptions can be found in section
Exceptions, and information about handling exceptions is in section
The try statement.
traises�
The "return" statement
**********************

   return_stmt ::= "return" [expression_list]

"return" may only occur syntactically nested in a function definition,
not within a nested class definition.

If an expression list is present, it is evaluated, else "None" is
substituted.

"return" leaves the current function call with the expression list (or
"None") as return value.

When "return" passes control out of a "try" statement with a "finally"
clause, that "finally" clause is executed before really leaving the
function.

In a generator function, the "return" statement is not allowed to
include an "expression_list".  In that context, a bare "return"
indicates that the generator is done and will cause "StopIteration" to
be raised.
treturns�
Emulating container types
*************************

The following methods can be defined to implement container objects.
Containers usually are sequences (such as lists or tuples) or mappings
(like dictionaries), but can represent other containers as well.  The
first set of methods is used either to emulate a sequence or to
emulate a mapping; the difference is that for a sequence, the
allowable keys should be the integers *k* for which "0 <= k < N" where
*N* is the length of the sequence, or slice objects, which define a
range of items. (For backwards compatibility, the method
"__getslice__()" (see below) can also be defined to handle simple, but
not extended slices.) It is also recommended that mappings provide the
methods "keys()", "values()", "items()", "has_key()", "get()",
"clear()", "setdefault()", "iterkeys()", "itervalues()",
"iteritems()", "pop()", "popitem()", "copy()", and "update()" behaving
similar to those for Python's standard dictionary objects.  The
"UserDict" module provides a "DictMixin" class to help create those
methods from a base set of "__getitem__()", "__setitem__()",
"__delitem__()", and "keys()". Mutable sequences should provide
methods "append()", "count()", "index()", "extend()", "insert()",
"pop()", "remove()", "reverse()" and "sort()", like Python standard
list objects.  Finally, sequence types should implement addition
(meaning concatenation) and multiplication (meaning repetition) by
defining the methods "__add__()", "__radd__()", "__iadd__()",
"__mul__()", "__rmul__()" and "__imul__()" described below; they
should not define "__coerce__()" or other numerical operators.  It is
recommended that both mappings and sequences implement the
"__contains__()" method to allow efficient use of the "in" operator;
for mappings, "in" should be equivalent of "has_key()"; for sequences,
it should search through the values.  It is further recommended that
both mappings and sequences implement the "__iter__()" method to allow
efficient iteration through the container; for mappings, "__iter__()"
should be the same as "iterkeys()"; for sequences, it should iterate
through the values.

object.__len__(self)

   Called to implement the built-in function "len()".  Should return
   the length of the object, an integer ">=" 0.  Also, an object that
   doesn't define a "__nonzero__()" method and whose "__len__()"
   method returns zero is considered to be false in a Boolean context.

   **CPython implementation detail:** In CPython, the length is
   required to be at most "sys.maxsize". If the length is larger than
   "sys.maxsize" some features (such as "len()") may raise
   "OverflowError".  To prevent raising "OverflowError" by truth value
   testing, an object must define a "__nonzero__()" method.

object.__getitem__(self, key)

   Called to implement evaluation of "self[key]". For sequence types,
   the accepted keys should be integers and slice objects.  Note that
   the special interpretation of negative indexes (if the class wishes
   to emulate a sequence type) is up to the "__getitem__()" method. If
   *key* is of an inappropriate type, "TypeError" may be raised; if of
   a value outside the set of indexes for the sequence (after any
   special interpretation of negative values), "IndexError" should be
   raised. For mapping types, if *key* is missing (not in the
   container), "KeyError" should be raised.

   Note: "for" loops expect that an "IndexError" will be raised for
     illegal indexes to allow proper detection of the end of the
     sequence.
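
   As an illustration (a hypothetical class, shown as a sketch), a
   minimal sequence needs little more than "__len__()" and
   "__getitem__()"; raising "IndexError" for out-of-range indexes lets
   "for" loops and "in" tests terminate correctly:

      class Squares(object):
          def __init__(self, n):
              self.n = n

          def __len__(self):
              return self.n

          def __getitem__(self, index):
              if not 0 <= index < self.n:
                  raise IndexError(index)
              return index * index

      list(Squares(4))    # [0, 1, 4, 9]
      9 in Squares(4)     # True, via the old-style iteration protocol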

object.__missing__(self, key)

   Called by "dict"."__getitem__()" to implement "self[key]" for dict
   subclasses when key is not in the dictionary.
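
   A sketch of a dict subclass that supplies a default for missing keys
   (similar in spirit to "collections.defaultdict"):

      class ZeroDict(dict):
          def __missing__(self, key):
              return 0    # consulted only when key is absent

      d = ZeroDict(a=1)
      d['a']    # 1
      d['b']    # 0, via __missing__(); 'b' is not inserted into the dict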

object.__setitem__(self, key, value)

   Called to implement assignment to "self[key]".  Same note as for
   "__getitem__()".  This should only be implemented for mappings if
   the objects support changes to the values for keys, or if new keys
   can be added, or for sequences if elements can be replaced.  The
   same exceptions should be raised for improper *key* values as for
   the "__getitem__()" method.

object.__delitem__(self, key)

   Called to implement deletion of "self[key]".  Same note as for
   "__getitem__()".  This should only be implemented for mappings if
   the objects support removal of keys, or for sequences if elements
   can be removed from the sequence.  The same exceptions should be
   raised for improper *key* values as for the "__getitem__()" method.

object.__iter__(self)

   This method is called when an iterator is required for a container.
   This method should return a new iterator object that can iterate
   over all the objects in the container.  For mappings, it should
   iterate over the keys of the container, and should also be made
   available as the method "iterkeys()".

   Iterator objects also need to implement this method; they are
   required to return themselves.  For more information on iterator
   objects, see Iterator Types.

object.__reversed__(self)

   Called (if present) by the "reversed()" built-in to implement
   reverse iteration.  It should return a new iterator object that
   iterates over all the objects in the container in reverse order.

   If the "__reversed__()" method is not provided, the "reversed()"
   built-in will fall back to using the sequence protocol ("__len__()"
   and "__getitem__()").  Objects that support the sequence protocol
   should only provide "__reversed__()" if they can provide an
   implementation that is more efficient than the one provided by
   "reversed()".

   New in version 2.6.

The membership test operators ("in" and "not in") are normally
implemented as an iteration through a sequence.  However, container
objects can supply the following special method with a more efficient
implementation, which also does not require the object to be a sequence.

object.__contains__(self, item)

   Called to implement membership test operators.  Should return true
   if *item* is in *self*, false otherwise.  For mapping objects, this
   should consider the keys of the mapping rather than the values or
   the key-item pairs.

   For objects that don't define "__contains__()", the membership test
   first tries iteration via "__iter__()", then the old sequence
   iteration protocol via "__getitem__()", see this section in the
   language reference.
ssequence-typess
Shifting operations
*******************

The shifting operations have lower priority than the arithmetic
operations:

   shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr

These operators accept plain or long integers as arguments.  The
arguments are converted to a common type.  They shift the first
argument to the left or right by the number of bits given by the
second argument.

A right shift by *n* bits is defined as division by "pow(2, n)".  A
left shift by *n* bits is defined as multiplication with "pow(2, n)".
Negative shift counts raise a "ValueError" exception.

Note: In the current implementation, the right-hand operand is
  required to be at most "sys.maxsize".  If the right-hand operand is
  larger than "sys.maxsize" an "OverflowError" exception is raised.
tshiftings�

Slicings
********

A slicing selects a range of items in a sequence object (e.g., a
string, tuple or list).  Slicings may be used as expressions or as
targets in assignment or "del" statements.  The syntax for a slicing:

   slicing          ::= simple_slicing | extended_slicing
   simple_slicing   ::= primary "[" short_slice "]"
   extended_slicing ::= primary "[" slice_list "]"
   slice_list       ::= slice_item ("," slice_item)* [","]
   slice_item       ::= expression | proper_slice | ellipsis
   proper_slice     ::= short_slice | long_slice
   short_slice      ::= [lower_bound] ":" [upper_bound]
   long_slice       ::= short_slice ":" [stride]
   lower_bound      ::= expression
   upper_bound      ::= expression
   stride           ::= expression
   ellipsis         ::= "..."

There is ambiguity in the formal syntax here: anything that looks like
an expression list also looks like a slice list, so any subscription
can be interpreted as a slicing.  Rather than further complicating the
syntax, this is disambiguated by defining that in this case the
interpretation as a subscription takes priority over the
interpretation as a slicing (this is the case if the slice list
contains no proper slice nor ellipses).  Similarly, when the slice
list has exactly one short slice and no trailing comma, the
interpretation as a simple slicing takes priority over that as an
extended slicing.

The semantics for a simple slicing are as follows.  The primary must
evaluate to a sequence object.  The lower and upper bound expressions,
if present, must evaluate to plain integers; defaults are zero and the
"sys.maxint", respectively.  If either bound is negative, the
sequence's length is added to it.  The slicing now selects all items
with index *k* such that "i <= k < j" where *i* and *j* are the
specified lower and upper bounds.  This may be an empty sequence.  It
is not an error if *i* or *j* lie outside the range of valid indexes
(such items don't exist so they aren't selected).

The semantics for an extended slicing are as follows.  The primary
must evaluate to a mapping object, and it is indexed with a key that
is constructed from the slice list, as follows.  If the slice list
contains at least one comma, the key is a tuple containing the
conversion of the slice items; otherwise, the conversion of the lone
slice item is the key.  The conversion of a slice item that is an
expression is that expression.  The conversion of an ellipsis slice
item is the built-in "Ellipsis" object.  The conversion of a proper
slice is a slice object (see section The standard type hierarchy)
whose "start", "stop" and "step" attributes are the values of the
expressions given as lower bound, upper bound and stride,
respectively, substituting "None" for missing expressions.
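
For example (a hypothetical class whose "__getitem__()" simply returns
the key it receives, to show the conversions described above):

   class ShowKey(object):
       def __getitem__(self, key):
           return key

   x = ShowKey()
   x[1:2]        # slice(1, 2, None)
   x[1:2, 3]     # (slice(1, 2, None), 3)
   x[..., 0]     # (Ellipsis, 0)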
tslicingss�	
Special Attributes
******************

The implementation adds a few special read-only attributes to several
object types, where they are relevant.  Some of these are not reported
by the "dir()" built-in function.

object.__dict__

   A dictionary or other mapping object used to store an object's
   (writable) attributes.

object.__methods__

   Deprecated since version 2.2: Use the built-in function "dir()" to
   get a list of an object's attributes. This attribute is no longer
   available.

object.__members__

   Deprecated since version 2.2: Use the built-in function "dir()" to
   get a list of an object's attributes. This attribute is no longer
   available.

instance.__class__

   The class to which a class instance belongs.

class.__bases__

   The tuple of base classes of a class object.

definition.__name__

   The name of the class, type, function, method, descriptor, or
   generator instance.

The following attributes are only supported by *new-style class*es.

class.__mro__

   This attribute is a tuple of classes that are considered when
   looking for base classes during method resolution.

class.mro()

   This method can be overridden by a metaclass to customize the
   method resolution order for its instances.  It is called at class
   instantiation, and its result is stored in "__mro__".

class.__subclasses__()

   Each new-style class keeps a list of weak references to its
   immediate subclasses.  This method returns a list of all those
   references still alive. Example:

      >>> int.__subclasses__()
      [<type 'bool'>]

-[ Footnotes ]-

[1] Additional information on these special methods may be found
    in the Python Reference Manual (Basic customization).

[2] As a consequence, the list "[1, 2]" is considered equal to
    "[1.0, 2.0]", and similarly for tuples.

[3] They must have since the parser can't tell the type of the
    operands.

[4] Cased characters are those with general category property
    being one of "Lu" (Letter, uppercase), "Ll" (Letter, lowercase),
    or "Lt" (Letter, titlecase).

[5] To format only a tuple you should therefore provide a
    singleton tuple whose only element is the tuple to be formatted.

[6] The advantage of leaving the newline on is that returning an
    empty string is then an unambiguous EOF indication.  It is also
    possible (in cases where it might matter, for example, if you want
    to make an exact copy of a file while scanning its lines) to tell
    whether the last line of a file ended in a newline or not (yes
    this happens!).
tspecialattrssa�
Special method names
********************

A class can implement certain operations that are invoked by special
syntax (such as arithmetic operations or subscripting and slicing) by
defining methods with special names. This is Python's approach to
*operator overloading*, allowing classes to define their own behavior
with respect to language operators.  For instance, if a class defines
a method named "__getitem__()", and "x" is an instance of this class,
then "x[i]" is roughly equivalent to "x.__getitem__(i)" for old-style
classes and "type(x).__getitem__(x, i)" for new-style classes.  Except
where mentioned, attempts to execute an operation raise an exception
when no appropriate method is defined (typically "AttributeError" or
"TypeError").

When implementing a class that emulates any built-in type, it is
important that the emulation only be implemented to the degree that it
makes sense for the object being modelled.  For example, some
sequences may work well with retrieval of individual elements, but
extracting a slice may not make sense.  (One example of this is the
"NodeList" interface in the W3C's Document Object Model.)


Basic customization
===================

object.__new__(cls[, ...])

   Called to create a new instance of class *cls*.  "__new__()" is a
   static method (special-cased so you need not declare it as such)
   that takes the class of which an instance was requested as its
   first argument.  The remaining arguments are those passed to the
   object constructor expression (the call to the class).  The return
   value of "__new__()" should be the new object instance (usually an
   instance of *cls*).

   Typical implementations create a new instance of the class by
   invoking the superclass's "__new__()" method using
   "super(currentclass, cls).__new__(cls[, ...])" with appropriate
   arguments and then modifying the newly-created instance as
   necessary before returning it.

   If "__new__()" returns an instance of *cls*, then the new
   instance's "__init__()" method will be invoked like
   "__init__(self[, ...])", where *self* is the new instance and the
   remaining arguments are the same as were passed to "__new__()".

   If "__new__()" does not return an instance of *cls*, then the new
   instance's "__init__()" method will not be invoked.

   "__new__()" is intended mainly to allow subclasses of immutable
   types (like int, str, or tuple) to customize instance creation.  It
   is also commonly overridden in custom metaclasses in order to
   customize class creation.
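
   A sketch of the usual immutable-subclass pattern (a hypothetical
   class):

      class UpperStr(str):
          def __new__(cls, value):
              # str is immutable, so the value must be fixed in __new__(),
              # not in __init__()
              return super(UpperStr, cls).__new__(cls, value.upper())

      UpperStr("hello")    # 'HELLO'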

object.__init__(self[, ...])

   Called after the instance has been created (by "__new__()"), but
   before it is returned to the caller.  The arguments are those
   passed to the class constructor expression.  If a base class has an
   "__init__()" method, the derived class's "__init__()" method, if
   any, must explicitly call it to ensure proper initialization of the
   base class part of the instance; for example:
   "BaseClass.__init__(self, [args...])".

   Because "__new__()" and "__init__()" work together in constructing
   objects ("__new__()" to create it, and "__init__()" to customise
   it), no non-"None" value may be returned by "__init__()"; doing so
   will cause a "TypeError" to be raised at runtime.

object.__del__(self)

   Called when the instance is about to be destroyed.  This is also
   called a destructor.  If a base class has a "__del__()" method, the
   derived class's "__del__()" method, if any, must explicitly call it
   to ensure proper deletion of the base class part of the instance.
   Note that it is possible (though not recommended!) for the
   "__del__()" method to postpone destruction of the instance by
   creating a new reference to it.  It may then be called at a later
   time when this new reference is deleted.  It is not guaranteed that
   "__del__()" methods are called for objects that still exist when
   the interpreter exits.

   Note: "del x" doesn't directly call "x.__del__()" --- the former
     decrements the reference count for "x" by one, and the latter is
     only called when "x"'s reference count reaches zero.  Some common
     situations that may prevent the reference count of an object from
     going to zero include: circular references between objects (e.g.,
     a doubly-linked list or a tree data structure with parent and
     child pointers); a reference to the object on the stack frame of
     a function that caught an exception (the traceback stored in
     "sys.exc_traceback" keeps the stack frame alive); or a reference
     to the object on the stack frame that raised an unhandled
     exception in interactive mode (the traceback stored in
     "sys.last_traceback" keeps the stack frame alive).  The first
     situation can only be remedied by explicitly breaking the cycles;
     the latter two situations can be resolved by storing "None" in
     "sys.exc_traceback" or "sys.last_traceback".  Circular references
     which are garbage are detected when the option cycle detector is
     enabled (it's on by default), but can only be cleaned up if there
     are no Python-level "__del__()" methods involved. Refer to the
     documentation for the "gc" module for more information about how
     "__del__()" methods are handled by the cycle detector,
     particularly the description of the "garbage" value.

   Warning: Due to the precarious circumstances under which
     "__del__()" methods are invoked, exceptions that occur during
     their execution are ignored, and a warning is printed to
     "sys.stderr" instead. Also, when "__del__()" is invoked in
     response to a module being deleted (e.g., when execution of the
     program is done), other globals referenced by the "__del__()"
      method may already have been deleted or be in the process of being
     torn down (e.g. the import machinery shutting down).  For this
     reason, "__del__()" methods should do the absolute minimum needed
     to maintain external invariants.  Starting with version 1.5,
     Python guarantees that globals whose name begins with a single
     underscore are deleted from their module before other globals are
     deleted; if no other references to such globals exist, this may
     help in assuring that imported modules are still available at the
     time when the "__del__()" method is called.

   See also the "-R" command-line option.

object.__repr__(self)

   Called by the "repr()" built-in function and by string conversions
   (reverse quotes) to compute the "official" string representation of
   an object.  If at all possible, this should look like a valid
   Python expression that could be used to recreate an object with the
   same value (given an appropriate environment).  If this is not
   possible, a string of the form "<...some useful description...>"
   should be returned.  The return value must be a string object. If a
   class defines "__repr__()" but not "__str__()", then "__repr__()"
   is also used when an "informal" string representation of instances
   of that class is required.

   This is typically used for debugging, so it is important that the
   representation is information-rich and unambiguous.

object.__str__(self)

   Called by the "str()" built-in function and by the "print"
   statement to compute the "informal" string representation of an
   object.  This differs from "__repr__()" in that it does not have to
   be a valid Python expression: a more convenient or concise
   representation may be used instead. The return value must be a
   string object.
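
   For instance (a hypothetical class):

      class Point(object):
          def __init__(self, x, y):
              self.x, self.y = x, y

          def __repr__(self):
              return "Point(%r, %r)" % (self.x, self.y)

          def __str__(self):
              return "(%s, %s)" % (self.x, self.y)

      p = Point(1, 2)
      repr(p)    # 'Point(1, 2)'
      print p    # prints (1, 2) -- the "print" statement uses __str__()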

object.__lt__(self, other)
object.__le__(self, other)
object.__eq__(self, other)
object.__ne__(self, other)
object.__gt__(self, other)
object.__ge__(self, other)

   New in version 2.1.

   These are the so-called "rich comparison" methods, and are called
   for comparison operators in preference to "__cmp__()" below. The
   correspondence between operator symbols and method names is as
   follows: "x<y" calls "x.__lt__(y)", "x<=y" calls "x.__le__(y)",
   "x==y" calls "x.__eq__(y)", "x!=y" and "x<>y" call "x.__ne__(y)",
   "x>y" calls "x.__gt__(y)", and "x>=y" calls "x.__ge__(y)".

   A rich comparison method may return the singleton "NotImplemented"
   if it does not implement the operation for a given pair of
   arguments. By convention, "False" and "True" are returned for a
   successful comparison. However, these methods can return any value,
   so if the comparison operator is used in a Boolean context (e.g.,
   in the condition of an "if" statement), Python will call "bool()"
   on the value to determine if the result is true or false.

   There are no implied relationships among the comparison operators.
   The truth of "x==y" does not imply that "x!=y" is false.
   Accordingly, when defining "__eq__()", one should also define
   "__ne__()" so that the operators will behave as expected.  See the
   paragraph on "__hash__()" for some important notes on creating
   *hashable* objects which support custom comparison operations and
   are usable as dictionary keys.

   There are no swapped-argument versions of these methods (to be used
   when the left argument does not support the operation but the right
   argument does); rather, "__lt__()" and "__gt__()" are each other's
   reflection, "__le__()" and "__ge__()" are each other's reflection,
   and "__eq__()" and "__ne__()" are their own reflection.

   Arguments to rich comparison methods are never coerced.

   To automatically generate ordering operations from a single root
   operation, see "functools.total_ordering()".
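
   As a sketch (hypothetical "Version" class), each method can return
   "NotImplemented" for unsupported operands, and
   "functools.total_ordering()" can supply the remaining ordering
   methods from "__eq__()" and "__lt__()":

       import functools

       @functools.total_ordering
       class Version(object):
           def __init__(self, major, minor):
               self.major, self.minor = major, minor

           def __eq__(self, other):
               if not isinstance(other, Version):
                   return NotImplemented
               return (self.major, self.minor) == (other.major, other.minor)

           def __ne__(self, other):
               # Define __ne__() explicitly, as advised above
               result = self.__eq__(other)
               return result if result is NotImplemented else not result

           def __lt__(self, other):
               if not isinstance(other, Version):
                   return NotImplemented
               return (self.major, self.minor) < (other.major, other.minor)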

object.__cmp__(self, other)

   Called by comparison operations if rich comparison (see above) is
   not defined.  Should return a negative integer if "self < other",
   zero if "self == other", a positive integer if "self > other".  If
   no "__cmp__()", "__eq__()" or "__ne__()" operation is defined,
   class instances are compared by object identity ("address").  See
   also the description of "__hash__()" for some important notes on
   creating *hashable* objects which support custom comparison
   operations and are usable as dictionary keys. (Note: the
   restriction that exceptions are not propagated by "__cmp__()" has
   been removed since Python 1.5.)

object.__rcmp__(self, other)

   Changed in version 2.1: No longer supported.

object.__hash__(self)

   Called by built-in function "hash()" and for operations on members
   of hashed collections including "set", "frozenset", and "dict".
   "__hash__()" should return an integer.  The only required property
   is that objects which compare equal have the same hash value; it is
   advised to mix together the hash values of the components of the
   object that also play a part in comparison of objects by packing
   them into a tuple and hashing the tuple. Example:

      def __hash__(self):
          return hash((self.name, self.nick, self.color))

   If a class does not define a "__cmp__()" or "__eq__()" method it
   should not define a "__hash__()" operation either; if it defines
   "__cmp__()" or "__eq__()" but not "__hash__()", its instances will
   not be usable in hashed collections.  If a class defines mutable
   objects and implements a "__cmp__()" or "__eq__()" method, it
   should not implement "__hash__()", since hashable collection
   implementations require that an object's hash value is immutable
   (if the object's hash value changes, it will be in the wrong hash
   bucket).

   User-defined classes have "__cmp__()" and "__hash__()" methods by
   default; with them, all objects compare unequal (except with
   themselves) and "x.__hash__()" returns a result derived from
   "id(x)".

   Classes which inherit a "__hash__()" method from a parent class but
   change the meaning of "__cmp__()" or "__eq__()" such that the hash
   value returned is no longer appropriate (e.g. by switching to a
   value-based concept of equality instead of the default identity
   based equality) can explicitly flag themselves as being unhashable
   by setting "__hash__ = None" in the class definition. Doing so
   means that not only will instances of the class raise an
   appropriate "TypeError" when a program attempts to retrieve their
   hash value, but they will also be correctly identified as
   unhashable when checking "isinstance(obj, collections.Hashable)"
   (unlike classes which define their own "__hash__()" to explicitly
   raise "TypeError").

   Changed in version 2.5: "__hash__()" may now also return a long
   integer object; the 32-bit integer is then derived from the hash of
   that object.

   Changed in version 2.6: "__hash__" may now be set to "None" to
   explicitly flag instances of a class as unhashable.
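
   Putting these rules together, a hypothetical sketch of a hashable
   value class and a mutable, unhashable subclass might look like this:

       class Color(object):
           def __init__(self, name, code):
               self.name, self.code = name, code

           def __eq__(self, other):
               # A complete implementation would also define __ne__()
               if not isinstance(other, Color):
                   return NotImplemented
               return (self.name, self.code) == (other.name, other.code)

           def __hash__(self):
               # Equal objects must hash equal: hash the tuple used by __eq__()
               return hash((self.name, self.code))

       class MutableColor(Color):
           # Instances are meant to be mutated, so flag them as unhashable
           __hash__ = None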

object.__nonzero__(self)

   Called to implement truth value testing and the built-in operation
   "bool()"; should return "False" or "True", or their integer
   equivalents "0" or "1".  When this method is not defined,
   "__len__()" is called, if it is defined, and the object is
   considered true if its result is nonzero. If a class defines
   neither "__len__()" nor "__nonzero__()", all its instances are
   considered true.
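
   For instance, a hypothetical container can rely on the "__len__()"
   fallback for truth testing:

       class Bag(object):
           def __init__(self, items=()):
               self.items = list(items)

           def __len__(self):
               # With no __nonzero__(), bool(Bag()) is False since len() is 0
               return len(self.items)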

object.__unicode__(self)

   Called to implement "unicode()" built-in; should return a Unicode
   object. When this method is not defined, string conversion is
   attempted, and the result of string conversion is converted to
   Unicode using the system default encoding.


Customizing attribute access
============================

The following methods can be defined to customize the meaning of
attribute access (use of, assignment to, or deletion of "x.name") for
class instances.

object.__getattr__(self, name)

   Called when an attribute lookup has not found the attribute in the
   usual places (i.e. it is not an instance attribute nor is it found
   in the class tree for "self").  "name" is the attribute name. This
   method should return the (computed) attribute value or raise an
   "AttributeError" exception.

   Note that if the attribute is found through the normal mechanism,
   "__getattr__()" is not called.  (This is an intentional asymmetry
   between "__getattr__()" and "__setattr__()".) This is done both for
   efficiency reasons and because otherwise "__getattr__()" would have
   no way to access other attributes of the instance.  Note that at
   least for instance variables, you can fake total control by not
   inserting any values in the instance attribute dictionary (but
   instead inserting them in another object).  See the
   "__getattribute__()" method below for a way to actually get total
   control in new-style classes.

object.__setattr__(self, name, value)

   Called when an attribute assignment is attempted.  This is called
   instead of the normal mechanism (i.e. store the value in the
   instance dictionary).  *name* is the attribute name, *value* is the
   value to be assigned to it.

   If "__setattr__()" wants to assign to an instance attribute, it
   should not simply execute "self.name = value" --- this would cause
   a recursive call to itself.  Instead, it should insert the value in
   the dictionary of instance attributes, e.g., "self.__dict__[name] =
   value".  For new-style classes, rather than accessing the instance
   dictionary, it should call the base class method with the same
   name, for example, "object.__setattr__(self, name, value)".
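
   A minimal sketch (hypothetical new-style class) combining the
   "__getattr__()" and "__setattr__()" hooks described above:

       class Tracked(object):
           def __getattr__(self, name):
               # Only reached when normal lookup fails; compute a default
               if name.startswith('cached_'):
                   return None
               raise AttributeError(name)

           def __setattr__(self, name, value):
               print "setting", name
               # New-style class: delegate instead of writing self.name = value
               object.__setattr__(self, name, value)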

object.__delattr__(self, name)

   Like "__setattr__()" but for attribute deletion instead of
   assignment.  This should only be implemented if "del obj.name" is
   meaningful for the object.


More attribute access for new-style classes
-------------------------------------------

The following methods only apply to new-style classes.

object.__getattribute__(self, name)

   Called unconditionally to implement attribute accesses for
   instances of the class. If the class also defines "__getattr__()",
   the latter will not be called unless "__getattribute__()" either
   calls it explicitly or raises an "AttributeError". This method
   should return the (computed) attribute value or raise an
   "AttributeError" exception. In order to avoid infinite recursion in
   this method, its implementation should always call the base class
   method with the same name to access any attributes it needs, for
   example, "object.__getattribute__(self, name)".

   Note: This method may still be bypassed when looking up special
     methods as the result of implicit invocation via language syntax
     or built-in functions. See Special method lookup for new-style
     classes.
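
   For example, a hypothetical class that traces every attribute access
   while delegating the real work to the base class method:

       class Logged(object):
           def __getattribute__(self, name):
               print "accessing", name
               # Delegating to object avoids infinite recursion
               return object.__getattribute__(self, name)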


Implementing Descriptors
------------------------

The following methods only apply when an instance of the class
containing the method (a so-called *descriptor* class) appears in an
*owner* class (the descriptor must be in either the owner's class
dictionary or in the class dictionary for one of its parents).  In the
examples below, "the attribute" refers to the attribute whose name is
the key of the property in the owner class' "__dict__".

object.__get__(self, instance, owner)

   Called to get the attribute of the owner class (class attribute
   access) or of an instance of that class (instance attribute
   access). *owner* is always the owner class, while *instance* is the
   instance that the attribute was accessed through, or "None" when
   the attribute is accessed through the *owner*.  This method should
   return the (computed) attribute value or raise an "AttributeError"
   exception.

object.__set__(self, instance, value)

   Called to set the attribute on an instance *instance* of the owner
   class to a new value, *value*.

object.__delete__(self, instance)

   Called to delete the attribute on an instance *instance* of the
   owner class.
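
A minimal sketch of a data descriptor (hypothetical names; see Invoking
Descriptors below for how and when it is looked up):

   class Typed(object):
       # A data descriptor enforcing the type of an attribute
       def __init__(self, name, kind):
           self.name, self.kind = name, kind

       def __get__(self, instance, owner):
           if instance is None:
               return self                     # accessed on the class itself
           try:
               return instance.__dict__[self.name]
           except KeyError:
               raise AttributeError(self.name)

       def __set__(self, instance, value):
           if not isinstance(value, self.kind):
               raise TypeError('%s must be %s' % (self.name,
                                                  self.kind.__name__))
           instance.__dict__[self.name] = value

   class Account(object):
       balance = Typed('balance', int)         # descriptor in the owner class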


Invoking Descriptors
--------------------

In general, a descriptor is an object attribute with "binding
behavior", one whose attribute access has been overridden by methods
in the descriptor protocol:  "__get__()", "__set__()", and
"__delete__()". If any of those methods are defined for an object, it
is said to be a descriptor.

The default behavior for attribute access is to get, set, or delete
the attribute from an object's dictionary. For instance, "a.x" has a
lookup chain starting with "a.__dict__['x']", then
"type(a).__dict__['x']", and continuing through the base classes of
"type(a)" excluding metaclasses.

However, if the looked-up value is an object defining one of the
descriptor methods, then Python may override the default behavior and
invoke the descriptor method instead.  Where this occurs in the
precedence chain depends on which descriptor methods were defined and
how they were called.  Note that descriptors are only invoked for new
style objects or classes (ones that subclass "object()" or "type()").

The starting point for descriptor invocation is a binding, "a.x". How
the arguments are assembled depends on "a":

Direct Call
   The simplest and least common call is when user code directly
   invokes a descriptor method: "x.__get__(a)".

Instance Binding
   If binding to a new-style object instance, "a.x" is transformed
   into the call: "type(a).__dict__['x'].__get__(a, type(a))".

Class Binding
   If binding to a new-style class, "A.x" is transformed into the
   call: "A.__dict__['x'].__get__(None, A)".

Super Binding
   If "a" is an instance of "super", then the binding "super(B,
   obj).m()" searches "obj.__class__.__mro__" for the base class "A"
   immediately preceding "B" and then invokes the descriptor with the
   call: "A.__dict__['m'].__get__(obj, obj.__class__)".

For instance bindings, the precedence of descriptor invocation depends
on which descriptor methods are defined.  A descriptor can define
any combination of "__get__()", "__set__()" and "__delete__()".  If it
does not define "__get__()", then accessing the attribute will return
the descriptor object itself unless there is a value in the object's
instance dictionary.  If the descriptor defines "__set__()" and/or
"__delete__()", it is a data descriptor; if it defines neither, it is
a non-data descriptor.  Normally, data descriptors define both
"__get__()" and "__set__()", while non-data descriptors have just the
"__get__()" method.  Data descriptors with "__set__()" and "__get__()"
defined always override a redefinition in an instance dictionary.  In
contrast, non-data descriptors can be overridden by instances.

Python methods (including "staticmethod()" and "classmethod()") are
implemented as non-data descriptors.  Accordingly, instances can
redefine and override methods.  This allows individual instances to
acquire behaviors that differ from other instances of the same class.

The "property()" function is implemented as a data descriptor.
Accordingly, instances cannot override the behavior of a property.


__slots__
---------

By default, instances of both old and new-style classes have a
dictionary for attribute storage.  This wastes space for objects
having very few instance variables.  The space consumption can become
acute when creating large numbers of instances.

The default can be overridden by defining *__slots__* in a new-style
class definition.  The *__slots__* declaration takes a sequence of
instance variables and reserves just enough space in each instance to
hold a value for each variable.  Space is saved because *__dict__* is
not created for each instance.

__slots__

   This class variable can be assigned a string, iterable, or sequence
   of strings with variable names used by instances.  If defined in a
   new-style class, *__slots__* reserves space for the declared
   variables and prevents the automatic creation of *__dict__* and
   *__weakref__* for each instance.

   New in version 2.2.
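
For example, a hypothetical class using *__slots__*; assigning any name
not listed raises "AttributeError":

   class Pixel(object):
       __slots__ = ('x', 'y')          # no per-instance __dict__ is created

       def __init__(self, x, y):
           self.x = x
           self.y = y
           # self.z = 0 would raise AttributeError: 'Pixel' object has
           # no attribute 'z'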

Notes on using *__slots__*

* When inheriting from a class without *__slots__*, the *__dict__*
  attribute of that class will always be accessible, so a *__slots__*
  definition in the subclass is meaningless.

* Without a *__dict__* variable, instances cannot be assigned new
  variables not listed in the *__slots__* definition.  Attempts to
  assign to an unlisted variable name raises "AttributeError". If
  dynamic assignment of new variables is desired, then add
  "'__dict__'" to the sequence of strings in the *__slots__*
  declaration.

  Changed in version 2.3: Previously, adding "'__dict__'" to the
  *__slots__* declaration would not enable the assignment of new
  attributes not specifically listed in the sequence of instance
  variable names.

* Without a *__weakref__* variable for each instance, classes
  defining *__slots__* do not support weak references to their
  instances. If weak reference support is needed, then add
  "'__weakref__'" to the sequence of strings in the *__slots__*
  declaration.

  Changed in version 2.3: Previously, adding "'__weakref__'" to the
  *__slots__* declaration would not enable support for weak
  references.

* *__slots__* are implemented at the class level by creating
  descriptors (Implementing Descriptors) for each variable name.  As a
  result, class attributes cannot be used to set default values for
  instance variables defined by *__slots__*; otherwise, the class
  attribute would overwrite the descriptor assignment.

* The action of a *__slots__* declaration is limited to the class
  where it is defined.  As a result, subclasses will have a *__dict__*
  unless they also define *__slots__* (which must only contain names
  of any *additional* slots).

* If a class defines a slot also defined in a base class, the
  instance variable defined by the base class slot is inaccessible
  (except by retrieving its descriptor directly from the base class).
  This renders the meaning of the program undefined.  In the future, a
  check may be added to prevent this.

* Nonempty *__slots__* does not work for classes derived from
  "variable-length" built-in types such as "long", "str" and "tuple".

* Any non-string iterable may be assigned to *__slots__*. Mappings
  may also be used; however, in the future, special meaning may be
  assigned to the values corresponding to each key.

* *__class__* assignment works only if both classes have the same
  *__slots__*.

  Changed in version 2.6: Previously, *__class__* assignment raised an
  error if either new or old class had *__slots__*.


Customizing class creation
==========================

By default, new-style classes are constructed using "type()". A class
definition is read into a separate namespace and the value of the
class name is bound to the result of "type(name, bases, dict)".

When the class definition is read, if *__metaclass__* is defined then
the callable assigned to it will be called instead of "type()". This
allows classes or functions to be written which monitor or alter the
class creation process:

* Modifying the class dictionary prior to the class being created.

* Returning an instance of another class -- essentially performing
  the role of a factory function.

These steps will have to be performed in the metaclass's "__new__()"
method -- "type.__new__()" can then be called from this method to
create a class with different properties.  This example adds a new
element to the class dictionary before creating the class:

   class metacls(type):
       def __new__(mcs, name, bases, dict):
           dict['foo'] = 'metacls was here'
           return type.__new__(mcs, name, bases, dict)

You can of course also override other class methods (or add new
methods); for example defining a custom "__call__()" method in the
metaclass allows custom behavior when the class is called, e.g. not
always creating a new instance.
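
Continuing the sketch above, the metaclass can be selected for a
particular class through the *__metaclass__* attribute described next:

   class Configured(object):
       __metaclass__ = metacls

   print Configured.foo        # prints: metacls was here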

__metaclass__

   This variable can be any callable accepting arguments for "name",
   "bases", and "dict".  Upon class creation, the callable is used
   instead of the built-in "type()".

   New in version 2.2.

The appropriate metaclass is determined by the following precedence
rules:

* If "dict['__metaclass__']" exists, it is used.

* Otherwise, if there is at least one base class, its metaclass is
  used (this looks for a *__class__* attribute first and if not found,
  uses its type).

* Otherwise, if a global variable named __metaclass__ exists, it is
  used.

* Otherwise, the old-style, classic metaclass (types.ClassType) is
  used.

The potential uses for metaclasses are boundless. Some ideas that have
been explored include logging, interface checking, automatic
delegation, automatic property creation, proxies, frameworks, and
automatic resource locking/synchronization.


Customizing instance and subclass checks
========================================

New in version 2.6.

The following methods are used to override the default behavior of the
"isinstance()" and "issubclass()" built-in functions.

In particular, the metaclass "abc.ABCMeta" implements these methods in
order to allow the addition of Abstract Base Classes (ABCs) as
"virtual base classes" to any class or type (including built-in
types), including other ABCs.

class.__instancecheck__(self, instance)

   Return true if *instance* should be considered a (direct or
   indirect) instance of *class*. If defined, called to implement
   "isinstance(instance, class)".

class.__subclasscheck__(self, subclass)

   Return true if *subclass* should be considered a (direct or
   indirect) subclass of *class*.  If defined, called to implement
   "issubclass(subclass, class)".

Note that these methods are looked up on the type (metaclass) of a
class.  They cannot be defined as class methods in the actual class.
This is consistent with the lookup of special methods that are called
on instances, only in this case the instance is itself a class.
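
An illustrative sketch (hypothetical names) that defines the check on a
metaclass, as required:

   from StringIO import StringIO

   class DuckMeta(type):
       def __instancecheck__(cls, instance):
           # Duck-typed check: anything with a read() method qualifies
           return hasattr(instance, 'read')

   class Readable(object):
       __metaclass__ = DuckMeta

   print isinstance(StringIO(), Readable)      # True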

See also:

  **PEP 3119** - Introducing Abstract Base Classes
     Includes the specification for customizing "isinstance()" and
     "issubclass()" behavior through "__instancecheck__()" and
     "__subclasscheck__()", with motivation for this functionality in
     the context of adding Abstract Base Classes (see the "abc"
     module) to the language.


Emulating callable objects
==========================

object.__call__(self[, args...])

   Called when the instance is "called" as a function; if this method
   is defined, "x(arg1, arg2, ...)" is a shorthand for
   "x.__call__(arg1, arg2, ...)".

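   For example, instances of a hypothetical "Adder" class behave like
   functions:

       class Adder(object):
           def __init__(self, n):
               self.n = n

           def __call__(self, x):
               return self.n + x

       add3 = Adder(3)
       print add3(4)            # prints 7, via Adder.__call__(add3, 4)
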

Emulating container types
=========================

The following methods can be defined to implement container objects.
Containers usually are sequences (such as lists or tuples) or mappings
(like dictionaries), but can represent other containers as well.  The
first set of methods is used either to emulate a sequence or to
emulate a mapping; the difference is that for a sequence, the
allowable keys should be the integers *k* for which "0 <= k < N" where
*N* is the length of the sequence, or slice objects, which define a
range of items. (For backwards compatibility, the method
"__getslice__()" (see below) can also be defined to handle simple, but
not extended slices.) It is also recommended that mappings provide the
methods "keys()", "values()", "items()", "has_key()", "get()",
"clear()", "setdefault()", "iterkeys()", "itervalues()",
"iteritems()", "pop()", "popitem()", "copy()", and "update()" behaving
similar to those for Python's standard dictionary objects.  The
"UserDict" module provides a "DictMixin" class to help create those
methods from a base set of "__getitem__()", "__setitem__()",
"__delitem__()", and "keys()". Mutable sequences should provide
methods "append()", "count()", "index()", "extend()", "insert()",
"pop()", "remove()", "reverse()" and "sort()", like Python standard
list objects.  Finally, sequence types should implement addition
(meaning concatenation) and multiplication (meaning repetition) by
defining the methods "__add__()", "__radd__()", "__iadd__()",
"__mul__()", "__rmul__()" and "__imul__()" described below; they
should not define "__coerce__()" or other numerical operators.  It is
recommended that both mappings and sequences implement the
"__contains__()" method to allow efficient use of the "in" operator;
for mappings, "in" should be equivalent to "has_key()"; for sequences,
it should search through the values.  It is further recommended that
both mappings and sequences implement the "__iter__()" method to allow
efficient iteration through the container; for mappings, "__iter__()"
should be the same as "iterkeys()"; for sequences, it should iterate
through the values.
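
For instance, a minimal mapping sketch (hypothetical class) can derive
the full mapping interface from "UserDict.DictMixin" and the four base
methods mentioned above:

   from UserDict import DictMixin

   class LowerDict(DictMixin):
       # A mapping that lowercases its string keys
       def __init__(self):
           self._data = {}

       def __getitem__(self, key):
           return self._data[key.lower()]

       def __setitem__(self, key, value):
           self._data[key.lower()] = value

       def __delitem__(self, key):
           del self._data[key.lower()]

       def keys(self):
           return self._data.keys()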

object.__len__(self)

   Called to implement the built-in function "len()".  Should return
   the length of the object, an integer ">=" 0.  Also, an object that
   doesn't define a "__nonzero__()" method and whose "__len__()"
   method returns zero is considered to be false in a Boolean context.

   **CPython implementation detail:** In CPython, the length is
   required to be at most "sys.maxsize". If the length is larger than
   "sys.maxsize" some features (such as "len()") may raise
   "OverflowError".  To prevent raising "OverflowError" by truth value
   testing, an object must define a "__nonzero__()" method.

object.__getitem__(self, key)

   Called to implement evaluation of "self[key]". For sequence types,
   the accepted keys should be integers and slice objects.  Note that
   the special interpretation of negative indexes (if the class wishes
   to emulate a sequence type) is up to the "__getitem__()" method. If
   *key* is of an inappropriate type, "TypeError" may be raised; if of
   a value outside the set of indexes for the sequence (after any
   special interpretation of negative values), "IndexError" should be
   raised. For mapping types, if *key* is missing (not in the
   container), "KeyError" should be raised.

   Note: "for" loops expect that an "IndexError" will be raised for
     illegal indexes to allow proper detection of the end of the
     sequence.

object.__missing__(self, key)

   Called by "dict.__getitem__()" to implement "self[key]" for dict
   subclasses when key is not in the dictionary.

object.__setitem__(self, key, value)

   Called to implement assignment to "self[key]".  Same note as for
   "__getitem__()".  This should only be implemented for mappings if
   the objects support changes to the values for keys, or if new keys
   can be added, or for sequences if elements can be replaced.  The
   same exceptions should be raised for improper *key* values as for
   the "__getitem__()" method.

object.__delitem__(self, key)

   Called to implement deletion of "self[key]".  Same note as for
   "__getitem__()".  This should only be implemented for mappings if
   the objects support removal of keys, or for sequences if elements
   can be removed from the sequence.  The same exceptions should be
   raised for improper *key* values as for the "__getitem__()" method.

object.__iter__(self)

   This method is called when an iterator is required for a container.
   This method should return a new iterator object that can iterate
   over all the objects in the container.  For mappings, it should
   iterate over the keys of the container, and should also be made
   available as the method "iterkeys()".

   Iterator objects also need to implement this method; they are
   required to return themselves.  For more information on iterator
   objects, see Iterator Types.

object.__reversed__(self)

   Called (if present) by the "reversed()" built-in to implement
   reverse iteration.  It should return a new iterator object that
   iterates over all the objects in the container in reverse order.

   If the "__reversed__()" method is not provided, the "reversed()"
   built-in will fall back to using the sequence protocol ("__len__()"
   and "__getitem__()").  Objects that support the sequence protocol
   should only provide "__reversed__()" if they can provide an
   implementation that is more efficient than the one provided by
   "reversed()".

   New in version 2.6.

The membership test operators ("in" and "not in") are normally
implemented as an iteration through a sequence.  However, container
objects can supply the following special method with a more efficient
implementation, which also does not require the object be a sequence.

object.__contains__(self, item)

   Called to implement membership test operators.  Should return true
   if *item* is in *self*, false otherwise.  For mapping objects, this
   should consider the keys of the mapping rather than the values or
   the key-item pairs.

   For objects that don't define "__contains__()", the membership test
   first tries iteration via "__iter__()", then the old sequence
   iteration protocol via "__getitem__()", see this section in the
   language reference.


Additional methods for emulation of sequence types
==================================================

The following optional methods can be defined to further emulate
sequence objects.  Immutable sequence types should at most only
define "__getslice__()"; mutable sequences might define all three
methods.

object.__getslice__(self, i, j)

   Deprecated since version 2.0: Support slice objects as parameters
   to the "__getitem__()" method. (However, built-in types in CPython
   currently still implement "__getslice__()".  Therefore, you have to
   override it in derived classes when implementing slicing.)

   Called to implement evaluation of "self[i:j]". The returned object
   should be of the same type as *self*.  Note that missing *i* or *j*
   in the slice expression are replaced by zero or "sys.maxsize",
   respectively.  If negative indexes are used in the slice, the
   length of the sequence is added to that index. If the instance does
   not implement the "__len__()" method, an "AttributeError" is
   raised. No guarantee is made that indexes adjusted this way are not
   still negative.  Indexes which are greater than the length of the
   sequence are not modified. If no "__getslice__()" is found, a slice
   object is created and passed to "__getitem__()" instead.

object.__setslice__(self, i, j, sequence)

   Called to implement assignment to "self[i:j]". Same notes for *i*
   and *j* as for "__getslice__()".

   This method is deprecated. If no "__setslice__()" is found, or for
   extended slicing of the form "self[i:j:k]", a slice object is
   created, and passed to "__setitem__()", instead of "__setslice__()"
   being called.

object.__delslice__(self, i, j)

   Called to implement deletion of "self[i:j]". Same notes for *i* and
   *j* as for "__getslice__()". This method is deprecated. If no
   "__delslice__()" is found, or for extended slicing of the form
   "self[i:j:k]", a slice object is created, and passed to
   "__delitem__()", instead of "__delslice__()" being called.

Notice that these methods are only invoked when a single slice with a
single colon is used, and the slice method is available.  For slice
operations involving extended slice notation, or in absence of the
slice methods, "__getitem__()", "__setitem__()" or "__delitem__()" is
called with a slice object as argument.

The following example demonstrates how to make your program or module
compatible with earlier versions of Python (assuming that methods
"__getitem__()", "__setitem__()" and "__delitem__()" support slice
objects as arguments):

   class MyClass:
       ...
       def __getitem__(self, index):
           ...
       def __setitem__(self, index, value):
           ...
       def __delitem__(self, index):
           ...

       if sys.version_info < (2, 0):
           # They won't be defined if version is at least 2.0 final

           def __getslice__(self, i, j):
               return self[max(0, i):max(0, j):]
           def __setslice__(self, i, j, seq):
               self[max(0, i):max(0, j):] = seq
           def __delslice__(self, i, j):
               del self[max(0, i):max(0, j):]
       ...

Note the calls to "max()"; these are necessary because of the handling
of negative indices before the "__*slice__()" methods are called.
When negative indexes are used, the "__*item__()" methods receive them
as provided, but the "__*slice__()" methods get a "cooked" form of the
index values.  For each negative index value, the length of the
sequence is added to the index before calling the method (which may
still result in a negative index); this is the customary handling of
negative indexes by the built-in sequence types, and the "__*item__()"
methods are expected to do this as well.  However, since they should
already be doing that, negative indexes cannot be passed in; they must
be constrained to the bounds of the sequence before being passed to
the "__*item__()" methods. Calling "max(0, i)" conveniently returns
the proper value.


Emulating numeric types
=======================

The following methods can be defined to emulate numeric objects.
Methods corresponding to operations that are not supported by the
particular kind of number implemented (e.g., bitwise operations for
non-integral numbers) should be left undefined.

object.__add__(self, other)
object.__sub__(self, other)
object.__mul__(self, other)
object.__floordiv__(self, other)
object.__mod__(self, other)
object.__divmod__(self, other)
object.__pow__(self, other[, modulo])
object.__lshift__(self, other)
object.__rshift__(self, other)
object.__and__(self, other)
object.__xor__(self, other)
object.__or__(self, other)

   These methods are called to implement the binary arithmetic
   operations ("+", "-", "*", "//", "%", "divmod()", "pow()", "**",
   "<<", ">>", "&", "^", "|").  For instance, to evaluate the
   expression "x + y", where *x* is an instance of a class that has an
   "__add__()" method, "x.__add__(y)" is called.  The "__divmod__()"
   method should be the equivalent to using "__floordiv__()" and
   "__mod__()"; it should not be related to "__truediv__()" (described
   below).  Note that "__pow__()" should be defined to accept an
   optional third argument if the ternary version of the built-in
   "pow()" function is to be supported.

   If one of those methods does not support the operation with the
   supplied arguments, it should return "NotImplemented".
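
   A sketch of this convention (hypothetical class): unsupported
   operand types return "NotImplemented" rather than raising, so that
   Python can go on to try the reflected method described below:

       class Metres(object):
           def __init__(self, value):
               self.value = value

           def __add__(self, other):
               if not isinstance(other, Metres):
                   return NotImplemented       # let Python try other.__radd__()
               return Metres(self.value + other.value)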

object.__div__(self, other)
object.__truediv__(self, other)

   The division operator ("/") is implemented by these methods.  The
   "__truediv__()" method is used when "__future__.division" is in
   effect, otherwise "__div__()" is used.  If only one of these two
   methods is defined, the object will not support division in the
   alternate context; "TypeError" will be raised instead.

object.__radd__(self, other)
object.__rsub__(self, other)
object.__rmul__(self, other)
object.__rdiv__(self, other)
object.__rtruediv__(self, other)
object.__rfloordiv__(self, other)
object.__rmod__(self, other)
object.__rdivmod__(self, other)
object.__rpow__(self, other)
object.__rlshift__(self, other)
object.__rrshift__(self, other)
object.__rand__(self, other)
object.__rxor__(self, other)
object.__ror__(self, other)

   These methods are called to implement the binary arithmetic
   operations ("+", "-", "*", "/", "%", "divmod()", "pow()", "**",
   "<<", ">>", "&", "^", "|") with reflected (swapped) operands.
   These functions are only called if the left operand does not
   support the corresponding operation and the operands are of
   different types. [2] For instance, to evaluate the expression "x -
   y", where *y* is an instance of a class that has an "__rsub__()"
   method, "y.__rsub__(x)" is called if "x.__sub__(y)" returns
   *NotImplemented*.

   Note that ternary "pow()" will not try calling "__rpow__()" (the
   coercion rules would become too complicated).

   Note: If the right operand's type is a subclass of the left
     operand's type and that subclass provides the reflected method
     for the operation, this method will be called before the left
     operand's non-reflected method.  This behavior allows subclasses
     to override their ancestors' operations.

object.__iadd__(self, other)
object.__isub__(self, other)
object.__imul__(self, other)
object.__idiv__(self, other)
object.__itruediv__(self, other)
object.__ifloordiv__(self, other)
object.__imod__(self, other)
object.__ipow__(self, other[, modulo])
object.__ilshift__(self, other)
object.__irshift__(self, other)
object.__iand__(self, other)
object.__ixor__(self, other)
object.__ior__(self, other)

   These methods are called to implement the augmented arithmetic
   assignments ("+=", "-=", "*=", "/=", "//=", "%=", "**=", "<<=",
   ">>=", "&=", "^=", "|=").  These methods should attempt to do the
   operation in-place (modifying *self*) and return the result (which
   could be, but does not have to be, *self*).  If a specific method
   is not defined, the augmented assignment falls back to the normal
   methods.  For instance, to execute the statement "x += y", where
   *x* is an instance of a class that has an "__iadd__()" method,
   "x.__iadd__(y)" is called.  If *x* is an instance of a class that
   does not define a "__iadd__()" method, "x.__add__(y)" and
   "y.__radd__(x)" are considered, as with the evaluation of "x + y".

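   For instance, a hypothetical in-place implementation mutates *self*
   and returns it:

       class Accumulator(object):
           def __init__(self):
               self.values = []

           def __iadd__(self, other):
               self.values.append(other)   # modify the object in place ...
               return self                 # ... and return the result
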
object.__neg__(self)
object.__pos__(self)
object.__abs__(self)
object.__invert__(self)

   Called to implement the unary arithmetic operations ("-", "+",
   "abs()" and "~").

object.__complex__(self)
object.__int__(self)
object.__long__(self)
object.__float__(self)

   Called to implement the built-in functions "complex()", "int()",
   "long()", and "float()".  Should return a value of the appropriate
   type.

object.__oct__(self)
object.__hex__(self)

   Called to implement the built-in functions "oct()" and "hex()".
   Should return a string value.

object.__index__(self)

   Called to implement "operator.index()".  Also called whenever
   Python needs an integer object (such as in slicing).  Must return
   an integer (int or long).

   New in version 2.5.

object.__coerce__(self, other)

   Called to implement "mixed-mode" numeric arithmetic.  Should either
   return a 2-tuple containing *self* and *other* converted to a
   common numeric type, or "None" if conversion is impossible.  When
   the common type would be the type of "other", it is sufficient to
   return "None", since the interpreter will also ask the other object
   to attempt a coercion (but sometimes, if the implementation of the
   other type cannot be changed, it is useful to do the conversion to
   the other type here).  A return value of "NotImplemented" is
   equivalent to returning "None".


Coercion rules
==============

This section used to document the rules for coercion.  As the language
has evolved, the coercion rules have become hard to document
precisely; documenting what one version of one particular
implementation does is undesirable.  Instead, here are some informal
guidelines regarding coercion.  In Python 3, coercion will not be
supported.

* If the left operand of a % operator is a string or Unicode object,
  no coercion takes place and the string formatting operation is
  invoked instead.

* It is no longer recommended to define a coercion operation. Mixed-
  mode operations on types that don't define coercion pass the
  original arguments to the operation.

* New-style classes (those derived from "object") never invoke the
  "__coerce__()" method in response to a binary operator; the only
  time "__coerce__()" is invoked is when the built-in function
  "coerce()" is called.

* For most intents and purposes, an operator that returns
  "NotImplemented" is treated the same as one that is not implemented
  at all.

* Below, "__op__()" and "__rop__()" are used to signify the generic
  method names corresponding to an operator; "__iop__()" is used for
  the corresponding in-place operator.  For example, for the operator
  '"+"', "__add__()" and "__radd__()" are used for the left and right
  variant of the binary operator, and "__iadd__()" for the in-place
  variant.

* For objects *x* and *y*, first "x.__op__(y)" is tried.  If this is
  not implemented or returns "NotImplemented", "y.__rop__(x)" is
  tried.  If this is also not implemented or returns "NotImplemented",
  a "TypeError" exception is raised.  But see the following exception:

* Exception to the previous item: if the left operand is an instance
  of a built-in type or a new-style class, and the right operand is an
  instance of a proper subclass of that type or class and overrides
  the base's "__rop__()" method, the right operand's "__rop__()"
  method is tried *before* the left operand's "__op__()" method.

  This is done so that a subclass can completely override binary
  operators. Otherwise, the left operand's "__op__()" method would
  always accept the right operand: when an instance of a given class
  is expected, an instance of a subclass of that class is always
  acceptable.

* When either operand type defines a coercion, this coercion is
  called before that type's "__op__()" or "__rop__()" method is
  called, but no sooner.  If the coercion returns an object of a
  different type for the operand whose coercion is invoked, part of
  the process is redone using the new object.

* When an in-place operator (like '"+="') is used, if the left
  operand implements "__iop__()", it is invoked without any coercion.
  When the operation falls back to "__op__()" and/or "__rop__()", the
  normal coercion rules apply.

* In "x + y", if *x* is a sequence that implements sequence
  concatenation, sequence concatenation is invoked.

* In "x * y", if one operand is a sequence that implements sequence
  repetition, and the other is an integer ("int" or "long"), sequence
  repetition is invoked.

* Rich comparisons (implemented by methods "__eq__()" and so on)
  never use coercion.  Three-way comparison (implemented by
  "__cmp__()") does use coercion under the same conditions as other
  binary operations use it.

* In the current implementation, the built-in numeric types "int",
  "long", "float", and "complex" do not use coercion. All these types
  implement a "__coerce__()" method, for use by the built-in
  "coerce()" function.

  Changed in version 2.7: The complex type no longer makes implicit
  calls to the "__coerce__()" method for mixed-type binary arithmetic
  operations.


With Statement Context Managers
===============================

New in version 2.5.

A *context manager* is an object that defines the runtime context to
be established when executing a "with" statement. The context manager
handles the entry into, and the exit from, the desired runtime context
for the execution of the block of code.  Context managers are normally
invoked using the "with" statement (described in section The with
statement), but can also be used by directly invoking their methods.

Typical uses of context managers include saving and restoring various
kinds of global state, locking and unlocking resources, closing opened
files, etc.

For more information on context managers, see Context Manager Types.

object.__enter__(self)

   Enter the runtime context related to this object. The "with"
   statement will bind this method's return value to the target(s)
   specified in the "as" clause of the statement, if any.

object.__exit__(self, exc_type, exc_value, traceback)

   Exit the runtime context related to this object. The parameters
   describe the exception that caused the context to be exited. If the
   context was exited without an exception, all three arguments will
   be "None".

   If an exception is supplied, and the method wishes to suppress the
   exception (i.e., prevent it from being propagated), it should
   return a true value. Otherwise, the exception will be processed
   normally upon exit from this method.

   Note that "__exit__()" methods should not reraise the passed-in
   exception; this is the caller's responsibility.
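
A minimal sketch of a context manager class (a hypothetical "Timer"),
usable directly with the "with" statement:

   import time

   class Timer(object):
       def __enter__(self):
           self.start = time.time()
           return self                     # bound to the "as" target, if any

       def __exit__(self, exc_type, exc_value, traceback):
           self.elapsed = time.time() - self.start
           return False                    # do not suppress exceptions

   with Timer() as t:
       sum(range(1000000))
   print t.elapsed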

See also:

  **PEP 343** - The "with" statement
     The specification, background, and examples for the Python "with"
     statement.


Special method lookup for old-style classes
===========================================

For old-style classes, special methods are always looked up in exactly
the same way as any other method or attribute. This is the case
regardless of whether the method is being looked up explicitly as in
"x.__getitem__(i)" or implicitly as in "x[i]".

This behaviour means that special methods may exhibit different
behaviour for different instances of a single old-style class if the
appropriate special attributes are set differently:

   >>> class C:
   ...     pass
   ...
   >>> c1 = C()
   >>> c2 = C()
   >>> c1.__len__ = lambda: 5
   >>> c2.__len__ = lambda: 9
   >>> len(c1)
   5
   >>> len(c2)
   9


Special method lookup for new-style classes
===========================================

For new-style classes, implicit invocations of special methods are
only guaranteed to work correctly if defined on an object's type, not
in the object's instance dictionary.  That behaviour is the reason why
the following code raises an exception (unlike the equivalent example
with old-style classes):

   >>> class C(object):
   ...     pass
   ...
   >>> c = C()
   >>> c.__len__ = lambda: 5
   >>> len(c)
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   TypeError: object of type 'C' has no len()

The rationale behind this behaviour lies with a number of special
methods such as "__hash__()" and "__repr__()" that are implemented by
all objects, including type objects. If the implicit lookup of these
methods used the conventional lookup process, they would fail when
invoked on the type object itself:

   >>> 1 .__hash__() == hash(1)
   True
   >>> int.__hash__() == hash(int)
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   TypeError: descriptor '__hash__' of 'int' object needs an argument

Incorrectly attempting to invoke an unbound method of a class in this
way is sometimes referred to as 'metaclass confusion', and is avoided
by bypassing the instance when looking up special methods:

   >>> type(1).__hash__(1) == hash(1)
   True
   >>> type(int).__hash__(int) == hash(int)
   True

In addition to bypassing any instance attributes in the interest of
correctness, implicit special method lookup generally also bypasses
the "__getattribute__()" method even of the object's metaclass:

   >>> class Meta(type):
   ...    def __getattribute__(*args):
   ...       print "Metaclass getattribute invoked"
   ...       return type.__getattribute__(*args)
   ...
   >>> class C(object):
   ...     __metaclass__ = Meta
   ...     def __len__(self):
   ...         return 10
   ...     def __getattribute__(*args):
   ...         print "Class getattribute invoked"
   ...         return object.__getattribute__(*args)
   ...
   >>> c = C()
   >>> c.__len__()                 # Explicit lookup via instance
   Class getattribute invoked
   10
   >>> type(c).__len__(c)          # Explicit lookup via type
   Metaclass getattribute invoked
   10
   >>> len(c)                      # Implicit lookup
   10

Bypassing the "__getattribute__()" machinery in this fashion provides
significant scope for speed optimisations within the interpreter, at
the cost of some flexibility in the handling of special methods (the
special method *must* be set on the class object itself in order to be
consistently invoked by the interpreter).

-[ Footnotes ]-

[1] It *is* possible in some cases to change an object's type,
    under certain controlled conditions. It generally isn't a good
    idea though, since it can lead to some very strange behaviour if
    it is handled incorrectly.

[2] For operands of the same type, it is assumed that if the non-
    reflected method (such as "__add__()") fails the operation is not
    supported, which is why the reflected method is not called.
tspecialnamess�K
String Methods
**************

Below are listed the string methods which both 8-bit strings and
Unicode objects support.  Some of them are also available on
"bytearray" objects.

In addition, Python's strings support the sequence type methods
described in the Sequence Types --- str, unicode, list, tuple,
bytearray, buffer, xrange section. To output formatted strings use
template strings or the "%" operator described in the String
Formatting Operations section. Also, see the "re" module for string
functions based on regular expressions.

str.capitalize()

   Return a copy of the string with its first character capitalized
   and the rest lowercased.

   For 8-bit strings, this method is locale-dependent.

str.center(width[, fillchar])

   Return the string centered in a string of length *width*.  Padding
   is done using the specified *fillchar* (default is a space).

   Changed in version 2.4: Support for the *fillchar* argument.

str.count(sub[, start[, end]])

   Return the number of non-overlapping occurrences of substring *sub*
   in the range [*start*, *end*].  Optional arguments *start* and
   *end* are interpreted as in slice notation.

str.decode([encoding[, errors]])

   Decodes the string using the codec registered for *encoding*.
   *encoding* defaults to the default string encoding.  *errors* may
   be given to set a different error handling scheme.  The default is
   "'strict'", meaning that encoding errors raise "UnicodeError".
   Other possible values are "'ignore'", "'replace'" and any other
   name registered via "codecs.register_error()", see section Codec
   Base Classes.

   New in version 2.2.

   Changed in version 2.3: Support for other error handling schemes
   added.

   Changed in version 2.7: Support for keyword arguments added.

str.encode([encoding[, errors]])

   Return an encoded version of the string.  Default encoding is the
   current default string encoding.  *errors* may be given to set a
   different error handling scheme.  The default for *errors* is
   "'strict'", meaning that encoding errors raise a "UnicodeError".
   Other possible values are "'ignore'", "'replace'",
   "'xmlcharrefreplace'", "'backslashreplace'" and any other name
   registered via "codecs.register_error()", see section Codec Base
   Classes. For a list of possible encodings, see section Standard
   Encodings.

   New in version 2.0.

   Changed in version 2.3: Support for "'xmlcharrefreplace'" and
   "'backslashreplace'" and other error handling schemes added.

   Changed in version 2.7: Support for keyword arguments added.

str.endswith(suffix[, start[, end]])

   Return "True" if the string ends with the specified *suffix*,
   otherwise return "False".  *suffix* can also be a tuple of suffixes
   to look for.  With optional *start*, test beginning at that
   position.  With optional *end*, stop comparing at that position.

   Changed in version 2.5: Accept tuples as *suffix*.

str.expandtabs([tabsize])

   Return a copy of the string where all tab characters are replaced
   by one or more spaces, depending on the current column and the
   given tab size.  Tab positions occur every *tabsize* characters
   (default is 8, giving tab positions at columns 0, 8, 16 and so on).
   To expand the string, the current column is set to zero and the
   string is examined character by character.  If the character is a
   tab ("\t"), one or more space characters are inserted in the result
   until the current column is equal to the next tab position. (The
   tab character itself is not copied.)  If the character is a newline
   ("\n") or return ("\r"), it is copied and the current column is
   reset to zero.  Any other character is copied unchanged and the
   current column is incremented by one regardless of how the
   character is represented when printed.

   >>> '01\t012\t0123\t01234'.expandtabs()
   '01      012     0123    01234'
   >>> '01\t012\t0123\t01234'.expandtabs(4)
   '01  012 0123    01234'

str.find(sub[, start[, end]])

   Return the lowest index in the string where substring *sub* is
   found within the slice "s[start:end]".  Optional arguments *start*
   and *end* are interpreted as in slice notation.  Return "-1" if
   *sub* is not found.

   Note: The "find()" method should be used only if you need to know
     the position of *sub*.  To check if *sub* is a substring or not,
     use the "in" operator:

        >>> 'Py' in 'Python'
        True

str.format(*args, **kwargs)

   Perform a string formatting operation.  The string on which this
   method is called can contain literal text or replacement fields
   delimited by braces "{}".  Each replacement field contains either
   the numeric index of a positional argument, or the name of a
   keyword argument.  Returns a copy of the string where each
   replacement field is replaced with the string value of the
   corresponding argument.

   >>> "The sum of 1 + 2 is {0}".format(1+2)
   'The sum of 1 + 2 is 3'

   See Format String Syntax for a description of the various
   formatting options that can be specified in format strings.

   This method of string formatting is the new standard in Python 3,
   and should be preferred to the "%" formatting described in String
   Formatting Operations in new code.

   New in version 2.6.

str.index(sub[, start[, end]])

   Like "find()", but raise "ValueError" when the substring is not
   found.

str.isalnum()

   Return true if all characters in the string are alphanumeric and
   there is at least one character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.isalpha()

   Return true if all characters in the string are alphabetic and
   there is at least one character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.isdigit()

   Return true if all characters in the string are digits and there is
   at least one character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.islower()

   Return true if all cased characters [4] in the string are lowercase
   and there is at least one cased character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.isspace()

   Return true if there are only whitespace characters in the string
   and there is at least one character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.istitle()

   Return true if the string is a titlecased string and there is at
   least one character, for example uppercase characters may only
   follow uncased characters and lowercase characters only cased ones.
   Return false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.isupper()

   Return true if all cased characters [4] in the string are uppercase
   and there is at least one cased character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.join(iterable)

   Return a string which is the concatenation of the strings in
   *iterable*. A "TypeError" will be raised if there are any non-
   string values in *iterable*, including "bytes" objects.  The
   separator between elements is the string providing this method.

str.ljust(width[, fillchar])

   Return the string left justified in a string of length *width*.
   Padding is done using the specified *fillchar* (default is a
   space).  The original string is returned if *width* is less than or
   equal to "len(s)".

   Changed in version 2.4: Support for the *fillchar* argument.

str.lower()

   Return a copy of the string with all the cased characters [4]
   converted to lowercase.

   For 8-bit strings, this method is locale-dependent.

str.lstrip([chars])

   Return a copy of the string with leading characters removed.  The
   *chars* argument is a string specifying the set of characters to be
   removed.  If omitted or "None", the *chars* argument defaults to
   removing whitespace.  The *chars* argument is not a prefix; rather,
   all combinations of its values are stripped:

   >>> '   spacious   '.lstrip()
   'spacious   '
   >>> 'www.example.com'.lstrip('cmowz.')
   'example.com'

   Changed in version 2.2.2: Support for the *chars* argument.

str.partition(sep)

   Split the string at the first occurrence of *sep*, and return a
   3-tuple containing the part before the separator, the separator
   itself, and the part after the separator.  If the separator is not
   found, return a 3-tuple containing the string itself, followed by
   two empty strings.
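
   For example:

      >>> 'www.python.org'.partition('.')
      ('www', '.', 'python.org')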

   New in version 2.5.

str.replace(old, new[, count])

   Return a copy of the string with all occurrences of substring *old*
   replaced by *new*.  If the optional argument *count* is given, only
   the first *count* occurrences are replaced.

str.rfind(sub[, start[, end]])

   Return the highest index in the string where substring *sub* is
   found, such that *sub* is contained within "s[start:end]".
   Optional arguments *start* and *end* are interpreted as in slice
   notation.  Return "-1" on failure.

str.rindex(sub[, start[, end]])

   Like "rfind()" but raises "ValueError" when the substring *sub* is
   not found.

str.rjust(width[, fillchar])

   Return the string right justified in a string of length *width*.
   Padding is done using the specified *fillchar* (default is a
   space). The original string is returned if *width* is less than or
   equal to "len(s)".

   Changed in version 2.4: Support for the *fillchar* argument.

str.rpartition(sep)

   Split the string at the last occurrence of *sep*, and return a
   3-tuple containing the part before the separator, the separator
   itself, and the part after the separator.  If the separator is not
   found, return a 3-tuple containing two empty strings, followed by
   the string itself.

   New in version 2.5.

str.rsplit([sep[, maxsplit]])

   Return a list of the words in the string, using *sep* as the
   delimiter string. If *maxsplit* is given, at most *maxsplit* splits
   are done, the *rightmost* ones.  If *sep* is not specified or
   "None", any whitespace string is a separator.  Except for splitting
   from the right, "rsplit()" behaves like "split()" which is
   described in detail below.

   New in version 2.4.

str.rstrip([chars])

   Return a copy of the string with trailing characters removed.  The
   *chars* argument is a string specifying the set of characters to be
   removed.  If omitted or "None", the *chars* argument defaults to
   removing whitespace.  The *chars* argument is not a suffix; rather,
   all combinations of its values are stripped:

   >>> '   spacious   '.rstrip()
   '   spacious'
   >>> 'mississippi'.rstrip('ipz')
   'mississ'

   Changed in version 2.2.2: Support for the *chars* argument.

str.split([sep[, maxsplit]])

   Return a list of the words in the string, using *sep* as the
   delimiter string.  If *maxsplit* is given, at most *maxsplit*
   splits are done (thus, the list will have at most "maxsplit+1"
   elements).  If *maxsplit* is not specified or "-1", then there is
   no limit on the number of splits (all possible splits are made).

   If *sep* is given, consecutive delimiters are not grouped together
   and are deemed to delimit empty strings (for example,
   "'1,,2'.split(',')" returns "['1', '', '2']").  The *sep* argument
   may consist of multiple characters (for example,
   "'1<>2<>3'.split('<>')" returns "['1', '2', '3']"). Splitting an
   empty string with a specified separator returns "['']".

   If *sep* is not specified or is "None", a different splitting
   algorithm is applied: runs of consecutive whitespace are regarded
   as a single separator, and the result will contain no empty strings
   at the start or end if the string has leading or trailing
   whitespace.  Consequently, splitting an empty string or a string
   consisting of just whitespace with a "None" separator returns "[]".

   For example, "' 1  2   3  '.split()" returns "['1', '2', '3']", and
   "'  1  2   3  '.split(None, 1)" returns "['1', '2   3  ']".

str.splitlines([keepends])

   Return a list of the lines in the string, breaking at line
   boundaries. This method uses the *universal newlines* approach to
   splitting lines. Line breaks are not included in the resulting list
   unless *keepends* is given and true.

   Python recognizes ""\r"", ""\n"", and ""\r\n"" as line boundaries
   for 8-bit strings.

   For example:

      >>> 'ab c\n\nde fg\rkl\r\n'.splitlines()
      ['ab c', '', 'de fg', 'kl']
      >>> 'ab c\n\nde fg\rkl\r\n'.splitlines(True)
      ['ab c\n', '\n', 'de fg\r', 'kl\r\n']

   Unlike "split()" when a delimiter string *sep* is given, this
   method returns an empty list for the empty string, and a terminal
   line break does not result in an extra line:

      >>> "".splitlines()
      []
      >>> "One line\n".splitlines()
      ['One line']

   For comparison, "split('\n')" gives:

      >>> ''.split('\n')
      ['']
      >>> 'Two lines\n'.split('\n')
      ['Two lines', '']

unicode.splitlines([keepends])

   Return a list of the lines in the string, like "str.splitlines()".
   However, the Unicode method splits on the following line
   boundaries, which are a superset of the *universal newlines*
   recognized for 8-bit strings.

   +-------------------------+-------------------------------+
   | Representation          | Description                   |
   +=========================+===============================+
   | "\n"                    | Line Feed                     |
   +-------------------------+-------------------------------+
   | "\r"                    | Carriage Return               |
   +-------------------------+-------------------------------+
   | "\r\n"                  | Carriage Return + Line Feed   |
   +-------------------------+-------------------------------+
   | "\v" or "\x0b"          | Line Tabulation               |
   +-------------------------+-------------------------------+
   | "\f" or "\x0c"          | Form Feed                     |
   +-------------------------+-------------------------------+
   | "\x1c"                  | File Separator                |
   +-------------------------+-------------------------------+
   | "\x1d"                  | Group Separator               |
   +-------------------------+-------------------------------+
   | "\x1e"                  | Record Separator              |
   +-------------------------+-------------------------------+
   | "\x85"                  | Next Line (C1 Control Code)   |
   +-------------------------+-------------------------------+
   | "\u2028"                | Line Separator                |
   +-------------------------+-------------------------------+
   | "\u2029"                | Paragraph Separator           |
   +-------------------------+-------------------------------+

   Changed in version 2.7: "\v" and "\f" added to list of line
   boundaries.

str.startswith(prefix[, start[, end]])

   Return "True" if string starts with the *prefix*, otherwise return
   "False". *prefix* can also be a tuple of prefixes to look for.
   With optional *start*, test string beginning at that position.
   With optional *end*, stop comparing string at that position.
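
   For example:

      >>> 'hello.py'.startswith('hello')
      True
      >>> 'hello.py'.startswith(('spam', 'hello'))
      True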

   Changed in version 2.5: Accept tuples as *prefix*.

str.strip([chars])

   Return a copy of the string with the leading and trailing
   characters removed. The *chars* argument is a string specifying the
   set of characters to be removed. If omitted or "None", the *chars*
   argument defaults to removing whitespace. The *chars* argument is
   not a prefix or suffix; rather, all combinations of its values are
   stripped:

   >>> '   spacious   '.strip()
   'spacious'
   >>> 'www.example.com'.strip('cmowz.')
   'example'

   Changed in version 2.2.2: Support for the *chars* argument.

str.swapcase()

   Return a copy of the string with uppercase characters converted to
   lowercase and vice versa.
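
   For example:

      >>> 'Hello World'.swapcase()
      'hELLO wORLD'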

   For 8-bit strings, this method is locale-dependent.

str.title()

   Return a titlecased version of the string where words start with an
   uppercase character and the remaining characters are lowercase.

   The algorithm uses a simple language-independent definition of a
   word as groups of consecutive letters.  The definition works in
   many contexts but it means that apostrophes in contractions and
   possessives form word boundaries, which may not be the desired
   result:

      >>> "they're bill's friends from the UK".title()
      "They'Re Bill'S Friends From The Uk"

   A workaround for apostrophes can be constructed using regular
   expressions:

      >>> import re
      >>> def titlecase(s):
      ...     return re.sub(r"[A-Za-z]+('[A-Za-z]+)?",
      ...                   lambda mo: mo.group(0)[0].upper() +
      ...                              mo.group(0)[1:].lower(),
      ...                   s)
      ...
      >>> titlecase("they're bill's friends.")
      "They're Bill's Friends."

   For 8-bit strings, this method is locale-dependent.

str.translate(table[, deletechars])

   Return a copy of the string where all characters occurring in the
   optional argument *deletechars* are removed, and the remaining
   characters have been mapped through the given translation table,
   which must be a string of length 256.

   You can use the "maketrans()" helper function in the "string"
   module to create a translation table. For string objects, set the
   *table* argument to "None" for translations that only delete
   characters:

   >>> 'read this short text'.translate(None, 'aeiou')
   'rd ths shrt txt'

   New in version 2.6: Support for a "None" *table* argument.
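
   For example, a translation table built with "string.maketrans()" can
   be used to replace characters:

      >>> from string import maketrans
      >>> 'read this short text'.translate(maketrans('aeiou', 'AEIOU'))
      'rEAd thIs shOrt tExt'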

   For Unicode objects, the "translate()" method does not accept the
   optional *deletechars* argument.  Instead, it returns a copy of the
   *s* where all characters have been mapped through the given
   translation table which must be a mapping of Unicode ordinals to
   Unicode ordinals, Unicode strings or "None". Unmapped characters
   are left untouched. Characters mapped to "None" are deleted.  Note,
   a more flexible approach is to create a custom character mapping
   codec using the "codecs" module (see "encodings.cp1251" for an
   example).

str.upper()

   Return a copy of the string with all the cased characters [4]
   converted to uppercase.  Note that "str.upper().isupper()" might be
   "False" if "s" contains uncased characters or if the Unicode
   category of the resulting character(s) is not "Lu" (Letter,
   uppercase), but e.g. "Lt" (Letter, titlecase).
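
   For example:

      >>> 'Hello, World!'.upper()
      'HELLO, WORLD!'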

   For 8-bit strings, this method is locale-dependent.

str.zfill(width)

   Return the numeric string left filled with zeros in a string of
   length *width*.  A sign prefix is handled correctly.  The original
   string is returned if *width* is less than or equal to "len(s)".
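
   For example:

      >>> '42'.zfill(5)
      '00042'
      >>> '-42'.zfill(5)
      '-0042'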

   New in version 2.2.2.

The following methods are present only on unicode objects:

unicode.isnumeric()

   Return "True" if there are only numeric characters in S, "False"
   otherwise. Numeric characters include digit characters, and all
   characters that have the Unicode numeric value property, e.g.
   U+2155, VULGAR FRACTION ONE FIFTH.
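
   For example:

      >>> u'42'.isnumeric()
      True
      >>> u'\u2155'.isnumeric()
      True
      >>> u'abc'.isnumeric()
      False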

unicode.isdecimal()

   Return "True" if there are only decimal characters in S, "False"
   otherwise. Decimal characters include digit characters, and all
   characters that can be used to form decimal-radix numbers, e.g.
   U+0660, ARABIC-INDIC DIGIT ZERO.
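
   For example:

      >>> u'\u0660'.isdecimal()
      True
      >>> u'\u2155'.isdecimal()
      False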
sstring-methodssF
String literals
***************

String literals are described by the following lexical definitions:

   stringliteral   ::= [stringprefix](shortstring | longstring)
   stringprefix    ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"
                    | "b" | "B" | "br" | "Br" | "bR" | "BR"
   shortstring     ::= "'" shortstringitem* "'" | '"' shortstringitem* '"'
   longstring      ::= "'''" longstringitem* "'''"
                  | '"""' longstringitem* '"""'
   shortstringitem ::= shortstringchar | escapeseq
   longstringitem  ::= longstringchar | escapeseq
   shortstringchar ::= <any source character except "\" or newline or the quote>
   longstringchar  ::= <any source character except "\">
   escapeseq       ::= "\" <any ASCII character>

One syntactic restriction not indicated by these productions is that
whitespace is not allowed between the "stringprefix" and the rest of
the string literal. The source character set is defined by the
encoding declaration; it is ASCII if no encoding declaration is given
in the source file; see section Encoding declarations.

In plain English: String literals can be enclosed in matching single
quotes ("'") or double quotes (""").  They can also be enclosed in
matching groups of three single or double quotes (these are generally
referred to as *triple-quoted strings*).  The backslash ("\")
character is used to escape characters that otherwise have a special
meaning, such as newline, backslash itself, or the quote character.
String literals may optionally be prefixed with a letter "'r'" or
"'R'"; such strings are called *raw strings* and use different rules
for interpreting backslash escape sequences.  A prefix of "'u'" or
"'U'" makes the string a Unicode string.  Unicode strings use the
Unicode character set as defined by the Unicode Consortium and ISO
10646.  Some additional escape sequences, described below, are
available in Unicode strings. A prefix of "'b'" or "'B'" is ignored in
Python 2; it indicates that the literal should become a bytes literal
in Python 3 (e.g. when code is automatically converted with 2to3).  A
"'u'" or "'b'" prefix may be followed by an "'r'" prefix.

In triple-quoted strings, unescaped newlines and quotes are allowed
(and are retained), except that three unescaped quotes in a row
terminate the string.  (A "quote" is the character used to open the
string, i.e. either "'" or """.)

Unless an "'r'" or "'R'" prefix is present, escape sequences in
strings are interpreted according to rules similar to those used by
Standard C.  The recognized escape sequences are:

+-------------------+-----------------------------------+---------+
| Escape Sequence   | Meaning                           | Notes   |
+===================+===================================+=========+
| "\newline"        | Ignored                           |         |
+-------------------+-----------------------------------+---------+
| "\\"              | Backslash ("\")                   |         |
+-------------------+-----------------------------------+---------+
| "\'"              | Single quote ("'")                |         |
+-------------------+-----------------------------------+---------+
| "\""              | Double quote (""")                |         |
+-------------------+-----------------------------------+---------+
| "\a"              | ASCII Bell (BEL)                  |         |
+-------------------+-----------------------------------+---------+
| "\b"              | ASCII Backspace (BS)              |         |
+-------------------+-----------------------------------+---------+
| "\f"              | ASCII Formfeed (FF)               |         |
+-------------------+-----------------------------------+---------+
| "\n"              | ASCII Linefeed (LF)               |         |
+-------------------+-----------------------------------+---------+
| "\N{name}"        | Character named *name* in the     |         |
|                   | Unicode database (Unicode only)   |         |
+-------------------+-----------------------------------+---------+
| "\r"              | ASCII Carriage Return (CR)        |         |
+-------------------+-----------------------------------+---------+
| "\t"              | ASCII Horizontal Tab (TAB)        |         |
+-------------------+-----------------------------------+---------+
| "\uxxxx"          | Character with 16-bit hex value   | (1)     |
|                   | *xxxx* (Unicode only)             |         |
+-------------------+-----------------------------------+---------+
| "\Uxxxxxxxx"      | Character with 32-bit hex value   | (2)     |
|                   | *xxxxxxxx* (Unicode only)         |         |
+-------------------+-----------------------------------+---------+
| "\v"              | ASCII Vertical Tab (VT)           |         |
+-------------------+-----------------------------------+---------+
| "\ooo"            | Character with octal value *ooo*  | (3,5)   |
+-------------------+-----------------------------------+---------+
| "\xhh"            | Character with hex value *hh*     | (4,5)   |
+-------------------+-----------------------------------+---------+

Notes:

1. Individual code units which form parts of a surrogate pair can
   be encoded using this escape sequence.

2. Any Unicode character can be encoded this way, but characters
   outside the Basic Multilingual Plane (BMP) will be encoded using a
   surrogate pair if Python is compiled to use 16-bit code units (the
   default).

3. As in Standard C, up to three octal digits are accepted.

4. Unlike in Standard C, exactly two hex digits are required.

5. In a string literal, hexadecimal and octal escapes denote the
   byte with the given value; it is not necessary that the byte
   encodes a character in the source character set. In a Unicode
   literal, these escapes denote a Unicode character with the given
   value.

Unlike Standard C, all unrecognized escape sequences are left in the
string unchanged, i.e., *the backslash is left in the string*.  (This
behavior is useful when debugging: if an escape sequence is mistyped,
the resulting output is more easily recognized as broken.)  It is also
important to note that the escape sequences marked as "(Unicode only)"
in the table above fall into the category of unrecognized escapes for
non-Unicode string literals.
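
For example, "'\d'" is not a recognized escape sequence, so the
backslash remains in the string:

   >>> '\d'
   '\\d'
   >>> len('\d')
   2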

When an "'r'" or "'R'" prefix is present, a character following a
backslash is included in the string without change, and *all
backslashes are left in the string*.  For example, the string literal
"r"\n"" consists of two characters: a backslash and a lowercase "'n'".
String quotes can be escaped with a backslash, but the backslash
remains in the string; for example, "r"\""" is a valid string literal
consisting of two characters: a backslash and a double quote; "r"\""
is not a valid string literal (even a raw string cannot end in an odd
number of backslashes).  Specifically, *a raw string cannot end in a
single backslash* (since the backslash would escape the following
quote character).  Note also that a single backslash followed by a
newline is interpreted as those two characters as part of the string,
*not* as a line continuation.
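
For example:

   >>> len(r"\n")
   2
   >>> list(r"\n")
   ['\\', 'n']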

When an "'r'" or "'R'" prefix is used in conjunction with a "'u'" or
"'U'" prefix, then the "\uXXXX" and "\UXXXXXXXX" escape sequences are
processed while  *all other backslashes are left in the string*. For
example, the string literal "ur"\u0062\n"" consists of three Unicode
characters: 'LATIN SMALL LETTER B', 'REVERSE SOLIDUS', and 'LATIN
SMALL LETTER N'. Backslashes can be escaped with a preceding
backslash; however, both remain in the string.  As a result, "\uXXXX"
escape sequences are only recognized when there are an odd number of
backslashes.
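
For example:

   >>> ur"\u0062\n"
   u'b\\n'
   >>> len(ur"\u0062\n")
   3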
tstringss
Subscriptions
*************

A subscription selects an item of a sequence (string, tuple or list)
or mapping (dictionary) object:

   subscription ::= primary "[" expression_list "]"

The primary must evaluate to an object of a sequence or mapping type.

If the primary is a mapping, the expression list must evaluate to an
object whose value is one of the keys of the mapping, and the
subscription selects the value in the mapping that corresponds to that
key.  (The expression list is a tuple except if it has exactly one
item.)

If the primary is a sequence, the expression (list) must evaluate to a
plain integer.  If this value is negative, the length of the sequence
is added to it (so that, e.g., "x[-1]" selects the last item of "x".)
The resulting value must be a nonnegative integer less than the number
of items in the sequence, and the subscription selects the item whose
index is that value (counting from zero).

A string's items are characters.  A character is not a separate data
type but a string of exactly one character.
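
For example:

   >>> {'one': 1, 'two': 2}['two']
   2
   >>> 'abc'[-1]
   'c'
   >>> (10, 20, 30)[0]
   10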
t
subscriptionss
Truth Value Testing
*******************

Any object can be tested for truth value, for use in an "if" or
"while" condition or as operand of the Boolean operations below. The
following values are considered false:

* "None"

* "False"

* zero of any numeric type, for example, "0", "0L", "0.0", "0j".

* any empty sequence, for example, "''", "()", "[]".

* any empty mapping, for example, "{}".

* instances of user-defined classes, if the class defines a
  "__nonzero__()" or "__len__()" method, when that method returns the
  integer zero or "bool" value "False". [1]

All other values are considered true --- so objects of many types are
always true.

Operations and built-in functions that have a Boolean result always
return "0" or "False" for false and "1" or "True" for true, unless
otherwise stated. (Important exception: the Boolean operations "or"
and "and" always return one of their operands.)
ttruths
The "try" statement
*******************

The "try" statement specifies exception handlers and/or cleanup code
for a group of statements:

   try_stmt  ::= try1_stmt | try2_stmt
   try1_stmt ::= "try" ":" suite
                 ("except" [expression [("as" | ",") identifier]] ":" suite)+
                 ["else" ":" suite]
                 ["finally" ":" suite]
   try2_stmt ::= "try" ":" suite
                 "finally" ":" suite

Changed in version 2.5: In previous versions of Python,
"try"..."except"..."finally" did not work. "try"..."except" had to be
nested in "try"..."finally".

The "except" clause(s) specify one or more exception handlers. When no
exception occurs in the "try" clause, no exception handler is
executed. When an exception occurs in the "try" suite, a search for an
exception handler is started.  This search inspects the except clauses
in turn until one is found that matches the exception.  An expression-
less except clause, if present, must be last; it matches any
exception.  For an except clause with an expression, that expression
is evaluated, and the clause matches the exception if the resulting
object is "compatible" with the exception.  An object is compatible
with an exception if it is the class or a base class of the exception
object, or a tuple containing an item compatible with the exception.
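
For example, an except clause whose expression is a tuple matches any
of the listed exceptions:

   >>> try:
   ...     1 / 0
   ... except (TypeError, ZeroDivisionError):
   ...     print 'caught a ZeroDivisionError'
   ...
   caught a ZeroDivisionError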

If no except clause matches the exception, the search for an exception
handler continues in the surrounding code and on the invocation stack.
[1]

If the evaluation of an expression in the header of an except clause
raises an exception, the original search for a handler is canceled and
a search starts for the new exception in the surrounding code and on
the call stack (it is treated as if the entire "try" statement raised
the exception).

When a matching except clause is found, the exception is assigned to
the target specified in that except clause, if present, and the except
clause's suite is executed.  All except clauses must have an
executable block.  When the end of this block is reached, execution
continues normally after the entire try statement.  (This means that
if two nested handlers exist for the same exception, and the exception
occurs in the try clause of the inner handler, the outer handler will
not handle the exception.)

Before an except clause's suite is executed, details about the
exception are assigned to three variables in the "sys" module:
"sys.exc_type" receives the object identifying the exception;
"sys.exc_value" receives the exception's parameter;
"sys.exc_traceback" receives a traceback object (see section The
standard type hierarchy) identifying the point in the program where
the exception occurred. These details are also available through the
"sys.exc_info()" function, which returns a tuple "(exc_type,
exc_value, exc_traceback)".  Use of the corresponding variables is
deprecated in favor of this function, since their use is unsafe in a
threaded program.  As of Python 1.5, the variables are restored to
their previous values (before the call) when returning from a function
that handled an exception.
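
For example, using "sys.exc_info()" inside a handler:

   >>> import sys
   >>> try:
   ...     raise ValueError('bad value')
   ... except ValueError:
   ...     print sys.exc_info()[0].__name__, sys.exc_info()[1]
   ...
   ValueError bad value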

The optional "else" clause is executed if and when control flows off
the end of the "try" clause. [2] Exceptions in the "else" clause are
not handled by the preceding "except" clauses.

If "finally" is present, it specifies a 'cleanup' handler.  The "try"
clause is executed, including any "except" and "else" clauses.  If an
exception occurs in any of the clauses and is not handled, the
exception is temporarily saved. The "finally" clause is executed.  If
there is a saved exception, it is re-raised at the end of the
"finally" clause. If the "finally" clause raises another exception or
executes a "return" or "break" statement, the saved exception is
discarded:

   >>> def f():
   ...     try:
   ...         1/0
   ...     finally:
   ...         return 42
   ...
   >>> f()
   42

The exception information is not available to the program during
execution of the "finally" clause.

When a "return", "break" or "continue" statement is executed in the
"try" suite of a "try"..."finally" statement, the "finally" clause is
also executed 'on the way out.' A "continue" statement is illegal in
the "finally" clause. (The reason is a problem with the current
implementation --- this restriction may be lifted in the future).

The return value of a function is determined by the last "return"
statement executed.  Since the "finally" clause always executes, a
"return" statement executed in the "finally" clause will always be the
last one executed:

   >>> def foo():
   ...     try:
   ...         return 'try'
   ...     finally:
   ...         return 'finally'
   ...
   >>> foo()
   'finally'

Additional information on exceptions can be found in section
Exceptions, and information on using the "raise" statement to generate
exceptions may be found in section The raise statement.
ttrys
The standard type hierarchy
***************************

Below is a list of the types that are built into Python.  Extension
modules (written in C, Java, or other languages, depending on the
implementation) can define additional types.  Future versions of
Python may add types to the type hierarchy (e.g., rational numbers,
efficiently stored arrays of integers, etc.).

Some of the type descriptions below contain a paragraph listing
'special attributes.'  These are attributes that provide access to the
implementation and are not intended for general use.  Their definition
may change in the future.

None
   This type has a single value.  There is a single object with this
   value. This object is accessed through the built-in name "None". It
   is used to signify the absence of a value in many situations, e.g.,
   it is returned from functions that don't explicitly return
   anything. Its truth value is false.

NotImplemented
   This type has a single value.  There is a single object with this
   value. This object is accessed through the built-in name
   "NotImplemented". Numeric methods and rich comparison methods may
   return this value if they do not implement the operation for the
   operands provided.  (The interpreter will then try the reflected
   operation, or some other fallback, depending on the operator.)  Its
   truth value is true.

Ellipsis
   This type has a single value.  There is a single object with this
   value. This object is accessed through the built-in name
   "Ellipsis". It is used to indicate the presence of the "..." syntax
   in a slice.  Its truth value is true.

"numbers.Number"
   These are created by numeric literals and returned as results by
   arithmetic operators and arithmetic built-in functions.  Numeric
   objects are immutable; once created their value never changes.
   Python numbers are of course strongly related to mathematical
   numbers, but subject to the limitations of numerical representation
   in computers.

   Python distinguishes between integers, floating point numbers, and
   complex numbers:

   "numbers.Integral"
      These represent elements from the mathematical set of integers
      (positive and negative).

      There are three types of integers:

      Plain integers
         These represent numbers in the range -2147483648 through
         2147483647. (The range may be larger on machines with a
         larger natural word size, but not smaller.)  When the result
         of an operation would fall outside this range, the result is
         normally returned as a long integer (in some cases, the
         exception "OverflowError" is raised instead).  For the
         purpose of shift and mask operations, integers are assumed to
         have a binary, 2's complement notation using 32 or more bits,
         and hiding no bits from the user (i.e., all 4294967296
         different bit patterns correspond to different values).

      Long integers
         These represent numbers in an unlimited range, subject to
         available (virtual) memory only.  For the purpose of shift
         and mask operations, a binary representation is assumed, and
         negative numbers are represented in a variant of 2's
         complement which gives the illusion of an infinite string of
         sign bits extending to the left.

      Booleans
         These represent the truth values False and True.  The two
         objects representing the values "False" and "True" are the
         only Boolean objects. The Boolean type is a subtype of plain
         integers, and Boolean values behave like the values 0 and 1,
         respectively, in almost all contexts, the exception being
         that when converted to a string, the strings ""False"" or
         ""True"" are returned, respectively.

      The rules for integer representation are intended to give the
      most meaningful interpretation of shift and mask operations
      involving negative integers and the least surprises when
      switching between the plain and long integer domains.  Any
      operation, if it yields a result in the plain integer domain,
      will yield the same result in the long integer domain or when
      using mixed operands.  The switch between domains is transparent
      to the programmer.
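
      For example, an operation whose result falls outside the plain
      integer range transparently yields a long integer:

         >>> import sys
         >>> isinstance(sys.maxint, int)
         True
         >>> isinstance(sys.maxint + 1, long)
         True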

   "numbers.Real" ("float")
      These represent machine-level double precision floating point
      numbers. You are at the mercy of the underlying machine
      architecture (and C or Java implementation) for the accepted
      range and handling of overflow. Python does not support single-
      precision floating point numbers; the savings in processor and
      memory usage that are usually the reason for using these are
      dwarfed by the overhead of using objects in Python, so there is
      no reason to complicate the language with two kinds of floating
      point numbers.

   "numbers.Complex"
      These represent complex numbers as a pair of machine-level
      double precision floating point numbers.  The same caveats apply
      as for floating point numbers. The real and imaginary parts of a
      complex number "z" can be retrieved through the read-only
      attributes "z.real" and "z.imag".

Sequences
   These represent finite ordered sets indexed by non-negative
   numbers. The built-in function "len()" returns the number of items
   of a sequence. When the length of a sequence is *n*, the index set
   contains the numbers 0, 1, ..., *n*-1.  Item *i* of sequence *a* is
   selected by "a[i]".

   Sequences also support slicing: "a[i:j]" selects all items with
   index *k* such that *i* "<=" *k* "<" *j*.  When used as an
   expression, a slice is a sequence of the same type.  This implies
   that the index set is renumbered so that it starts at 0.

   Some sequences also support "extended slicing" with a third "step"
   parameter: "a[i:j:k]" selects all items of *a* with index *x* where
   "x = i + n*k", *n* ">=" "0" and *i* "<=" *x* "<" *j*.

   Sequences are distinguished according to their mutability:

   Immutable sequences
      An object of an immutable sequence type cannot change once it is
      created.  (If the object contains references to other objects,
      these other objects may be mutable and may be changed; however,
      the collection of objects directly referenced by an immutable
      object cannot change.)

      The following types are immutable sequences:

      Strings
         The items of a string are characters.  There is no separate
         character type; a character is represented by a string of one
         item. Characters represent (at least) 8-bit bytes.  The
         built-in functions "chr()" and "ord()" convert between
         characters and nonnegative integers representing the byte
         values.  Bytes with the values 0--127 usually represent the
         corresponding ASCII values, but the interpretation of values
         is up to the program.  The string data type is also used to
         represent arrays of bytes, e.g., to hold data read from a
         file.

         (On systems whose native character set is not ASCII, strings
         may use EBCDIC in their internal representation, provided the
         functions "chr()" and "ord()" implement a mapping between
         ASCII and EBCDIC, and string comparison preserves the ASCII
         order. Or perhaps someone can propose a better rule?)

      Unicode
         The items of a Unicode object are Unicode code units.  A
         Unicode code unit is represented by a Unicode object of one
         item and can hold either a 16-bit or 32-bit value
         representing a Unicode ordinal (the maximum value for the
         ordinal is given in "sys.maxunicode", and depends on how
         Python is configured at compile time).  Surrogate pairs may
         be present in the Unicode object, and will be reported as two
         separate items.  The built-in functions "unichr()" and
         "ord()" convert between code units and nonnegative integers
         representing the Unicode ordinals as defined in the Unicode
         Standard 3.0. Conversion from and to other encodings are
         possible through the Unicode method "encode()" and the built-
         in function "unicode()".

      Tuples
         The items of a tuple are arbitrary Python objects. Tuples of
         two or more items are formed by comma-separated lists of
         expressions.  A tuple of one item (a 'singleton') can be
         formed by affixing a comma to an expression (an expression by
         itself does not create a tuple, since parentheses must be
         usable for grouping of expressions).  An empty tuple can be
         formed by an empty pair of parentheses.

   Mutable sequences
      Mutable sequences can be changed after they are created.  The
      subscription and slicing notations can be used as the target of
      assignment and "del" (delete) statements.

      There are currently two intrinsic mutable sequence types:

      Lists
         The items of a list are arbitrary Python objects.  Lists are
         formed by placing a comma-separated list of expressions in
         square brackets. (Note that there are no special cases needed
         to form lists of length 0 or 1.)

      Byte Arrays
         A bytearray object is a mutable array. They are created by
         the built-in "bytearray()" constructor.  Aside from being
         mutable (and hence unhashable), byte arrays otherwise provide
         the same interface and functionality as immutable bytes
         objects.
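
         For example:

            >>> b = bytearray(b'spam')
            >>> b[0] = ord('S')
            >>> b
            bytearray(b'Spam')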

      The extension module "array" provides an additional example of a
      mutable sequence type.

Set types
   These represent unordered, finite sets of unique, immutable
   objects. As such, they cannot be indexed by any subscript. However,
   they can be iterated over, and the built-in function "len()"
   returns the number of items in a set. Common uses for sets are fast
   membership testing, removing duplicates from a sequence, and
   computing mathematical operations such as intersection, union,
   difference, and symmetric difference.

   For set elements, the same immutability rules apply as for
   dictionary keys. Note that numeric types obey the normal rules for
   numeric comparison: if two numbers compare equal (e.g., "1" and
   "1.0"), only one of them can be contained in a set.

   There are currently two intrinsic set types:

   Sets
      These represent a mutable set. They are created by the built-in
      "set()" constructor and can be modified afterwards by several
      methods, such as "add()".

   Frozen sets
      These represent an immutable set.  They are created by the
      built-in "frozenset()" constructor.  As a frozenset is immutable
      and *hashable*, it can be used again as an element of another
      set, or as a dictionary key.
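
   For example:

      >>> s = set([1, 2, 2, 3])
      >>> s.add(4)
      >>> sorted(s)
      [1, 2, 3, 4]
      >>> {frozenset([1, 2]): 'ok'}[frozenset([2, 1])]
      'ok'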

Mappings
   These represent finite sets of objects indexed by arbitrary index
   sets. The subscript notation "a[k]" selects the item indexed by "k"
   from the mapping "a"; this can be used in expressions and as the
   target of assignments or "del" statements. The built-in function
   "len()" returns the number of items in a mapping.

   There is currently a single intrinsic mapping type:

   Dictionaries
      These represent finite sets of objects indexed by nearly
      arbitrary values.  The only types of values not acceptable as
      keys are values containing lists or dictionaries or other
      mutable types that are compared by value rather than by object
      identity, the reason being that the efficient implementation of
      dictionaries requires a key's hash value to remain constant.
      Numeric types used for keys obey the normal rules for numeric
      comparison: if two numbers compare equal (e.g., "1" and "1.0")
      then they can be used interchangeably to index the same
      dictionary entry.

      Dictionaries are mutable; they can be created by the "{...}"
      notation (see section Dictionary displays).
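
      For example:

         >>> d = {1: 'one'}
         >>> d[1.0]
         'one'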

      The extension modules "dbm", "gdbm", and "bsddb" provide
      additional examples of mapping types.

Callable types
   These are the types to which the function call operation (see
   section Calls) can be applied:

   User-defined functions
      A user-defined function object is created by a function
      definition (see section Function definitions).  It should be
      called with an argument list containing the same number of items
      as the function's formal parameter list.

      Special attributes:

      +-------------------------+---------------------------------+-------------+
      | Attribute               | Meaning                         |             |
      +=========================+=================================+=============+
      | "__doc__" "func_doc"    | The function's documentation    | Writable    |
      |                         | string, or "None" if            |             |
      |                         | unavailable.                    |             |
      +-------------------------+---------------------------------+-------------+
      | "__name__" "func_name"  | The function's name             | Writable    |
      +-------------------------+---------------------------------+-------------+
      | "__module__"            | The name of the module the      | Writable    |
      |                         | function was defined in, or     |             |
      |                         | "None" if unavailable.          |             |
      +-------------------------+---------------------------------+-------------+
      | "__defaults__"          | A tuple containing default      | Writable    |
      | "func_defaults"         | argument values for those       |             |
      |                         | arguments that have defaults,   |             |
      |                         | or "None" if no arguments have  |             |
      |                         | a default value.                |             |
      +-------------------------+---------------------------------+-------------+
      | "__code__" "func_code"  | The code object representing    | Writable    |
      |                         | the compiled function body.     |             |
      +-------------------------+---------------------------------+-------------+
      | "__globals__"           | A reference to the dictionary   | Read-only   |
      | "func_globals"          | that holds the function's       |             |
      |                         | global variables --- the global |             |
      |                         | namespace of the module in      |             |
      |                         | which the function was defined. |             |
      +-------------------------+---------------------------------+-------------+
      | "__dict__" "func_dict"  | The namespace supporting        | Writable    |
      |                         | arbitrary function attributes.  |             |
      +-------------------------+---------------------------------+-------------+
      | "__closure__"           | "None" or a tuple of cells that | Read-only   |
      | "func_closure"          | contain bindings for the        |             |
      |                         | function's free variables.      |             |
      +-------------------------+---------------------------------+-------------+

      Most of the attributes labelled "Writable" check the type of the
      assigned value.

      Changed in version 2.4: "func_name" is now writable.

      Changed in version 2.6: The double-underscore attributes
      "__closure__", "__code__", "__defaults__", and "__globals__"
      were introduced as aliases for the corresponding "func_*"
      attributes for forwards compatibility with Python 3.

      Function objects also support getting and setting arbitrary
      attributes, which can be used, for example, to attach metadata
      to functions.  Regular attribute dot-notation is used to get and
      set such attributes. *Note that the current implementation only
      supports function attributes on user-defined functions. Function
      attributes on built-in functions may be supported in the
      future.*
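
      For example (the function "f" and the attribute name used here
      are purely illustrative):

         >>> def f():
         ...     pass          # an arbitrary user-defined function
         ...
         >>> f.version = '1.0'
         >>> f.version
         '1.0'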

      Additional information about a function's definition can be
      retrieved from its code object; see the description of internal
      types below.

   User-defined methods
      A user-defined method object combines a class, a class instance
      (or "None") and any callable object (normally a user-defined
      function).

      Special read-only attributes: "im_self" is the class instance
      object, "im_func" is the function object; "im_class" is the
      class of "im_self" for bound methods or the class that asked for
      the method for unbound methods; "__doc__" is the method's
      documentation (same as "im_func.__doc__"); "__name__" is the
      method name (same as "im_func.__name__"); "__module__" is the
      name of the module the method was defined in, or "None" if
      unavailable.

      Changed in version 2.2: "im_self" used to refer to the class
      that defined the method.

      Changed in version 2.6: For Python 3 forward-compatibility,
      "im_func" is also available as "__func__", and "im_self" as
      "__self__".

      Methods also support accessing (but not setting) the arbitrary
      function attributes on the underlying function object.

      User-defined method objects may be created when getting an
      attribute of a class (perhaps via an instance of that class), if
      that attribute is a user-defined function object, an unbound
      user-defined method object, or a class method object. When the
      attribute is a user-defined method object, a new method object
      is only created if the class from which it is being retrieved is
      the same as, or a derived class of, the class stored in the
      original method object; otherwise, the original method object is
      used as it is.

      When a user-defined method object is created by retrieving a
      user-defined function object from a class, its "im_self"
      attribute is "None" and the method object is said to be unbound.
      When one is created by retrieving a user-defined function object
      from a class via one of its instances, its "im_self" attribute
      is the instance, and the method object is said to be bound. In
      either case, the new method's "im_class" attribute is the class
      from which the retrieval takes place, and its "im_func"
      attribute is the original function object.

      When a user-defined method object is created by retrieving
      another method object from a class or instance, the behaviour is
      the same as for a function object, except that the "im_func"
      attribute of the new instance is not the original method object
      but its "im_func" attribute.

      When a user-defined method object is created by retrieving a
      class method object from a class or instance, its "im_self"
      attribute is the class itself, and its "im_func" attribute is
      the function object underlying the class method.

      When an unbound user-defined method object is called, the
      underlying function ("im_func") is called, with the restriction
      that the first argument must be an instance of the proper class
      ("im_class") or of a derived class thereof.

      When a bound user-defined method object is called, the
      underlying function ("im_func") is called, inserting the class
      instance ("im_self") in front of the argument list.  For
      instance, when "C" is a class which contains a definition for a
      function "f()", and "x" is an instance of "C", calling "x.f(1)"
      is equivalent to calling "C.f(x, 1)".
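
      For example, with a class "C", a method "f()" and an instance
      "x" as in the preceding paragraph:

         >>> class C(object):
         ...     def f(self, n):
         ...         return n + 1
         ...
         >>> x = C()
         >>> x.f(1)
         2
         >>> C.f(x, 1)
         2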

      When a user-defined method object is derived from a class method
      object, the "class instance" stored in "im_self" will actually
      be the class itself, so that calling either "x.f(1)" or "C.f(1)"
      is equivalent to calling "f(C, 1)" where "f" is the underlying
      function.

      Note that the transformation from function object to (unbound or
      bound) method object happens each time the attribute is
      retrieved from the class or instance. In some cases, a fruitful
      optimization is to assign the attribute to a local variable and
      call that local variable. Also notice that this transformation
      only happens for user-defined functions; other callable objects
      (and all non-callable objects) are retrieved without
      transformation.  It is also important to note that user-defined
      functions which are attributes of a class instance are not
      converted to bound methods; this *only* happens when the
      function is an attribute of the class.

   Generator functions
      A function or method which uses the "yield" statement (see
      section The yield statement) is called a *generator function*.
      Such a function, when called, always returns an iterator object
      which can be used to execute the body of the function:  calling
      the iterator's "next()" method will cause the function to
      execute until it provides a value using the "yield" statement.
      When the function executes a "return" statement or falls off the
      end, a "StopIteration" exception is raised and the iterator will
      have reached the end of the set of values to be returned.
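
      For example (the generator function "gen" is illustrative only):

         >>> def gen():         # a generator function
         ...     yield 1
         ...     yield 2
         ...
         >>> it = gen()         # calling it returns an iterator
         >>> it.next()
         1
         >>> it.next()
         2
         >>> it.next()
         Traceback (most recent call last):
           ...
         StopIteration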

   Built-in functions
      A built-in function object is a wrapper around a C function.
      Examples of built-in functions are "len()" and "math.sin()"
      ("math" is a standard built-in module). The number and type of
      the arguments are determined by the C function. Special read-
      only attributes: "__doc__" is the function's documentation
      string, or "None" if unavailable; "__name__" is the function's
      name; "__self__" is set to "None" (but see the next item);
      "__module__" is the name of the module the function was defined
      in or "None" if unavailable.

   Built-in methods
      This is really a different disguise of a built-in function, this
      time containing an object passed to the C function as an
      implicit extra argument.  An example of a built-in method is
      "alist.append()", assuming *alist* is a list object. In this
      case, the special read-only attribute "__self__" is set to the
      object denoted by *alist*.

   Class Types
      Class types, or "new-style classes," are callable.  These
      objects normally act as factories for new instances of
      themselves, but variations are possible for class types that
      override "__new__()".  The arguments of the call are passed to
      "__new__()" and, in the typical case, to "__init__()" to
      initialize the new instance.

   Classic Classes
      Class objects are described below.  When a class object is
      called, a new class instance (also described below) is created
      and returned.  This implies a call to the class's "__init__()"
      method if it has one.  Any arguments are passed on to the
      "__init__()" method.  If there is no "__init__()" method, the
      class must be called without arguments.

   Class instances
      Class instances are described below.  Class instances are
      callable only when the class has a "__call__()" method;
      "x(arguments)" is a shorthand for "x.__call__(arguments)".

Modules
   Modules are imported by the "import" statement (see section The
   import statement). A module object has a namespace implemented by a
   dictionary object (this is the dictionary referenced by the
   func_globals attribute of functions defined in the module).
   Attribute references are translated to lookups in this dictionary,
   e.g., "m.x" is equivalent to "m.__dict__["x"]". A module object
   does not contain the code object used to initialize the module
   (since it isn't needed once the initialization is done).

   Attribute assignment updates the module's namespace dictionary,
   e.g., "m.x = 1" is equivalent to "m.__dict__["x"] = 1".

   Special read-only attribute: "__dict__" is the module's namespace
   as a dictionary object.

   **CPython implementation detail:** Because of the way CPython
   clears module dictionaries, the module dictionary will be cleared
   when the module falls out of scope even if the dictionary still has
   live references.  To avoid this, copy the dictionary or keep the
   module around while using its dictionary directly.

   Predefined (writable) attributes: "__name__" is the module's name;
   "__doc__" is the module's documentation string, or "None" if
   unavailable; "__file__" is the pathname of the file from which the
   module was loaded, if it was loaded from a file. The "__file__"
   attribute is not present for C modules that are statically linked
   into the interpreter; for extension modules loaded dynamically from
   a shared library, it is the pathname of the shared library file.

Classes
   Both class types (new-style classes) and class objects (old-
   style/classic classes) are typically created by class definitions
   (see section Class definitions).  A class has a namespace
   implemented by a dictionary object. Class attribute references are
   translated to lookups in this dictionary, e.g., "C.x" is translated
   to "C.__dict__["x"]" (although for new-style classes in particular
   there are a number of hooks which allow for other means of locating
   attributes). When the attribute name is not found there, the
   attribute search continues in the base classes.  For old-style
   classes, the search is depth-first, left-to-right in the order of
   occurrence in the base class list. New-style classes use the more
   complex C3 method resolution order which behaves correctly even in
   the presence of 'diamond' inheritance structures where there are
   multiple inheritance paths leading back to a common ancestor.
   Additional details on the C3 MRO used by new-style classes can be
   found in the documentation accompanying the 2.3 release at
   https://www.python.org/download/releases/2.3/mro/.

   When a class attribute reference (for class "C", say) would yield a
   user-defined function object or an unbound user-defined method
   object whose associated class is either "C" or one of its base
   classes, it is transformed into an unbound user-defined method
   object whose "im_class" attribute is "C". When it would yield a
   class method object, it is transformed into a bound user-defined
   method object whose "im_self" attribute is "C".  When it would
   yield a static method object, it is transformed into the object
   wrapped by the static method object. See section Implementing
   Descriptors for another way in which attributes retrieved from a
   class may differ from those actually contained in its "__dict__"
   (note that only new-style classes support descriptors).

   Class attribute assignments update the class's dictionary, never
   the dictionary of a base class.

   A class object can be called (see above) to yield a class instance
   (see below).

   Special attributes: "__name__" is the class name; "__module__" is
   the module name in which the class was defined; "__dict__" is the
   dictionary containing the class's namespace; "__bases__" is a tuple
   (possibly empty or a singleton) containing the base classes, in the
   order of their occurrence in the base class list; "__doc__" is the
   class's documentation string, or "None" if undefined.

Class instances
   A class instance is created by calling a class object (see above).
   A class instance has a namespace implemented as a dictionary which
   is the first place in which attribute references are searched.
   When an attribute is not found there, and the instance's class has
   an attribute by that name, the search continues with the class
   attributes.  If a class attribute is found that is a user-defined
   function object or an unbound user-defined method object whose
   associated class is the class (call it "C") of the instance for
   which the attribute reference was initiated or one of its bases, it
   is transformed into a bound user-defined method object whose
   "im_class" attribute is "C" and whose "im_self" attribute is the
   instance. Static method and class method objects are also
   transformed, as if they had been retrieved from class "C"; see
   above under "Classes". See section Implementing Descriptors for
   another way in which attributes of a class retrieved via its
   instances may differ from the objects actually stored in the
   class's "__dict__". If no class attribute is found, and the
   object's class has a "__getattr__()" method, that is called to
   satisfy the lookup.

   Attribute assignments and deletions update the instance's
   dictionary, never a class's dictionary.  If the class has a
   "__setattr__()" or "__delattr__()" method, this is called instead
   of updating the instance dictionary directly.

   Class instances can pretend to be numbers, sequences, or mappings
   if they have methods with certain special names.  See section
   Special method names.

   Special attributes: "__dict__" is the attribute dictionary;
   "__class__" is the instance's class.

Files
   A file object represents an open file.  File objects are created by
   the "open()" built-in function, and also by "os.popen()",
   "os.fdopen()", and the "makefile()" method of socket objects (and
   perhaps by other functions or methods provided by extension
   modules).  The objects "sys.stdin", "sys.stdout" and "sys.stderr"
   are initialized to file objects corresponding to the interpreter's
   standard input, output and error streams.  See File Objects for
   complete documentation of file objects.

Internal types
   A few types used internally by the interpreter are exposed to the
   user. Their definitions may change with future versions of the
   interpreter, but they are mentioned here for completeness.

   Code objects
      Code objects represent *byte-compiled* executable Python code,
      or *bytecode*. The difference between a code object and a
      function object is that the function object contains an explicit
      reference to the function's globals (the module in which it was
      defined), while a code object contains no context; also the
      default argument values are stored in the function object, not
      in the code object (because they represent values calculated at
      run-time).  Unlike function objects, code objects are immutable
      and contain no references (directly or indirectly) to mutable
      objects.

      Special read-only attributes: "co_name" gives the function name;
      "co_argcount" is the number of positional arguments (including
      arguments with default values); "co_nlocals" is the number of
      local variables used by the function (including arguments);
      "co_varnames" is a tuple containing the names of the local
      variables (starting with the argument names); "co_cellvars" is a
      tuple containing the names of local variables that are
      referenced by nested functions; "co_freevars" is a tuple
      containing the names of free variables; "co_code" is a string
      representing the sequence of bytecode instructions; "co_consts"
      is a tuple containing the literals used by the bytecode;
      "co_names" is a tuple containing the names used by the bytecode;
      "co_filename" is the filename from which the code was compiled;
      "co_firstlineno" is the first line number of the function;
      "co_lnotab" is a string encoding the mapping from bytecode
      offsets to line numbers (for details see the source code of the
      interpreter); "co_stacksize" is the required stack size
      (including local variables); "co_flags" is an integer encoding a
      number of flags for the interpreter.

      The following flag bits are defined for "co_flags": bit "0x04"
      is set if the function uses the "*arguments" syntax to accept an
      arbitrary number of positional arguments; bit "0x08" is set if
      the function uses the "**keywords" syntax to accept arbitrary
      keyword arguments; bit "0x20" is set if the function is a
      generator.

      Future feature declarations ("from __future__ import division")
      also use bits in "co_flags" to indicate whether a code object
      was compiled with a particular feature enabled: bit "0x2000" is
      set if the function was compiled with future division enabled;
      bits "0x10" and "0x1000" were used in earlier versions of
      Python.

      Other bits in "co_flags" are reserved for internal use.

      If a code object represents a function, the first item in
      "co_consts" is the documentation string of the function, or
      "None" if undefined.

   Frame objects
      Frame objects represent execution frames.  They may occur in
      traceback objects (see below).

      Special read-only attributes: "f_back" points to the previous stack
      frame (towards the caller), or "None" if this is the bottom
      stack frame; "f_code" is the code object being executed in this
      frame; "f_locals" is the dictionary used to look up local
      variables; "f_globals" is used for global variables;
      "f_builtins" is used for built-in (intrinsic) names;
      "f_restricted" is a flag indicating whether the function is
      executing in restricted execution mode; "f_lasti" gives the
      precise instruction (this is an index into the bytecode string
      of the code object).

      Special writable attributes: "f_trace", if not "None", is a
      function called at the start of each source code line (this is
      used by the debugger); "f_exc_type", "f_exc_value",
      "f_exc_traceback" represent the last exception raised in the
      parent frame provided another exception was ever raised in the
      current frame (in all other cases they are "None"); "f_lineno"
      is the current line number of the frame --- writing to this from
      within a trace function jumps to the given line (only for the
      bottom-most frame).  A debugger can implement a Jump command
      (aka Set Next Statement) by writing to "f_lineno".
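
      As a small illustration (the CPython-specific "sys._getframe()"
      helper is used here to obtain a frame; the function names are
      arbitrary):

         >>> import sys
         >>> def callee():
         ...     frame = sys._getframe()    # the frame of this call
         ...     return (frame.f_code.co_name,
         ...             frame.f_back.f_code.co_name)
         ...
         >>> def caller():
         ...     return callee()
         ...
         >>> caller()
         ('callee', 'caller')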

   Traceback objects
      Traceback objects represent a stack trace of an exception.  A
      traceback object is created when an exception occurs.  When the
      search for an exception handler unwinds the execution stack, at
      each unwound level a traceback object is inserted in front of
      the current traceback.  When an exception handler is entered,
      the stack trace is made available to the program. (See section
      The try statement.) It is accessible as "sys.exc_traceback", and
      also as the third item of the tuple returned by
      "sys.exc_info()".  The latter is the preferred interface, since
      it works correctly when the program is using multiple threads.
      When the program contains no suitable handler, the stack trace
      is written (nicely formatted) to the standard error stream; if
      the interpreter is interactive, it is also made available to the
      user as "sys.last_traceback".

      Special read-only attributes: "tb_next" is the next level in the
      stack trace (towards the frame where the exception occurred), or
      "None" if there is no next level; "tb_frame" points to the
      execution frame of the current level; "tb_lineno" gives the line
      number where the exception occurred; "tb_lasti" indicates the
      precise instruction.  The line number and last instruction in
      the traceback may differ from the line number of its frame
      object if the exception occurred in a "try" statement with no
      matching except clause or with a finally clause.
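
      For example, a traceback caught at the interactive prompt can be
      examined like this (a minimal sketch):

         >>> import sys
         >>> try:
         ...     1 / 0
         ... except ZeroDivisionError:
         ...     tb = sys.exc_info()[2]     # the traceback object
         ...
         >>> tb.tb_lineno                   # line where the exception occurred
         2
         >>> tb.tb_next is None             # no deeper level in this trace
         True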

   Slice objects
      Slice objects are used to represent slices when *extended slice
      syntax* is used. This is a slice using two colons, or multiple
      slices or ellipses separated by commas, e.g., "a[i:j:step]",
      "a[i:j, k:l]", or "a[..., i:j]".  They are also created by the
      built-in "slice()" function.

      Special read-only attributes: "start" is the lower bound; "stop"
      is the upper bound; "step" is the step value; each is "None" if
      omitted.  These attributes can have any type.

      Slice objects support one method:

      slice.indices(self, length)

         This method takes a single integer argument *length* and
         computes information about the extended slice that the slice
         object would describe if applied to a sequence of *length*
         items.  It returns a tuple of three integers; respectively
         these are the *start* and *stop* indices and the *step* or
         stride length of the slice. Missing or out-of-bounds indices
         are handled in a manner consistent with regular slices.

         New in version 2.3.
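
         For example (the clipping and defaults follow the rules for
         regular slices):

            >>> s = slice(None, None, -1)   # the slice written as "::-1"
            >>> s.start, s.stop, s.step
            (None, None, -1)
            >>> s.indices(5)                # applied to a sequence of length 5
            (4, -1, -1)
            >>> slice(2, 10).indices(5)     # an out-of-bounds stop is clipped
            (2, 5, 1)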

   Static method objects
      Static method objects provide a way of defeating the
      transformation of function objects to method objects described
      above. A static method object is a wrapper around any other
      object, usually a user-defined method object. When a static
      method object is retrieved from a class or a class instance, the
      object actually returned is the wrapped object, which is not
      subject to any further transformation. Static method objects are
      not themselves callable, although the objects they wrap usually
      are. Static method objects are created by the built-in
      "staticmethod()" constructor.

   Class method objects
      A class method object, like a static method object, is a wrapper
      around another object that alters the way in which that object
      is retrieved from classes and class instances. The behaviour of
      class method objects upon such retrieval is described above,
      under "User-defined methods". Class method objects are created
      by the built-in "classmethod()" constructor.
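
      A minimal sketch of both wrappers at the interactive prompt (the
      class "C" and its methods are arbitrary example names):

         >>> class C(object):
         ...     @staticmethod
         ...     def s(x):
         ...         return x
         ...     @classmethod
         ...     def c(cls, x):
         ...         return cls, x
         ...
         >>> C.s(10)          # the wrapped function, not further transformed
         10
         >>> C().s(10)        # the same when retrieved from an instance
         10
         >>> C.c(10)          # the class is passed implicitly
         (<class '__main__.C'>, 10)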
types
Functions
*********

Function objects are created by function definitions.  The only
operation on a function object is to call it: "func(argument-list)".

There are really two flavors of function objects: built-in functions
and user-defined functions.  Both support the same operation (to call
the function), but the implementation is different, hence the
different object types.

See Function definitions for more information.
typesfunctions
Mapping Types --- "dict"
************************

A *mapping* object maps *hashable* values to arbitrary objects.
Mappings are mutable objects.  There is currently only one standard
mapping type, the *dictionary*.  (For other containers see the built
in "list", "set", and "tuple" classes, and the "collections" module.)

A dictionary's keys are *almost* arbitrary values.  Values that are
not *hashable*, that is, values containing lists, dictionaries or
other mutable types (that are compared by value rather than by object
identity) may not be used as keys.  Numeric types used for keys obey
the normal rules for numeric comparison: if two numbers compare equal
(such as "1" and "1.0") then they can be used interchangeably to index
the same dictionary entry.  (Note however, that since computers store
floating-point numbers as approximations it is usually unwise to use
them as dictionary keys.)
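
For example, the numeric-key rule means that equal numbers of different
types select the same entry:

   >>> d = {1: 'integer key'}
   >>> d[1.0]                    # 1 == 1.0, so both index the same entry
   'integer key'
   >>> d[True]                   # True == 1 as well
   'integer key'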

Dictionaries can be created by placing a comma-separated list of "key:
value" pairs within braces, for example: "{'jack': 4098, 'sjoerd':
4127}" or "{4098: 'jack', 4127: 'sjoerd'}", or by the "dict"
constructor.

class dict(**kwarg)
class dict(mapping, **kwarg)
class dict(iterable, **kwarg)

   Return a new dictionary initialized from an optional positional
   argument and a possibly empty set of keyword arguments.

   If no positional argument is given, an empty dictionary is created.
   If a positional argument is given and it is a mapping object, a
   dictionary is created with the same key-value pairs as the mapping
   object.  Otherwise, the positional argument must be an *iterable*
   object.  Each item in the iterable must itself be an iterable with
   exactly two objects.  The first object of each item becomes a key
   in the new dictionary, and the second object the corresponding
   value.  If a key occurs more than once, the last value for that key
   becomes the corresponding value in the new dictionary.

   If keyword arguments are given, the keyword arguments and their
   values are added to the dictionary created from the positional
   argument.  If a key being added is already present, the value from
   the keyword argument replaces the value from the positional
   argument.

   To illustrate, the following examples all return a dictionary equal
   to "{"one": 1, "two": 2, "three": 3}":

      >>> a = dict(one=1, two=2, three=3)
      >>> b = {'one': 1, 'two': 2, 'three': 3}
      >>> c = dict(zip(['one', 'two', 'three'], [1, 2, 3]))
      >>> d = dict([('two', 2), ('one', 1), ('three', 3)])
      >>> e = dict({'three': 3, 'one': 1, 'two': 2})
      >>> a == b == c == d == e
      True

   Providing keyword arguments as in the first example only works for
   keys that are valid Python identifiers.  Otherwise, any valid keys
   can be used.

   New in version 2.2.

   Changed in version 2.3: Support for building a dictionary from
   keyword arguments added.

   These are the operations that dictionaries support (and therefore,
   custom mapping types should support too):

   len(d)

      Return the number of items in the dictionary *d*.

   d[key]

      Return the item of *d* with key *key*.  Raises a "KeyError" if
      *key* is not in the map.

      If a subclass of dict defines a method "__missing__()" and *key*
      is not present, the "d[key]" operation calls that method with
      the key *key* as argument.  The "d[key]" operation then returns
      or raises whatever is returned or raised by the
      "__missing__(key)" call. No other operations or methods invoke
      "__missing__()". If "__missing__()" is not defined, "KeyError"
      is raised. "__missing__()" must be a method; it cannot be an
      instance variable:

         >>> class Counter(dict):
         ...     def __missing__(self, key):
         ...         return 0
         >>> c = Counter()
         >>> c['red']
         0
         >>> c['red'] += 1
         >>> c['red']
         1

      The example above shows part of the implementation of
      "collections.Counter".  A different "__missing__" method is used
      by "collections.defaultdict".

      New in version 2.5: Recognition of __missing__ methods of dict
      subclasses.

   d[key] = value

      Set "d[key]" to *value*.

   del d[key]

      Remove "d[key]" from *d*.  Raises a "KeyError" if *key* is not
      in the map.

   key in d

      Return "True" if *d* has a key *key*, else "False".

      New in version 2.2.

   key not in d

      Equivalent to "not key in d".

      New in version 2.2.

   iter(d)

      Return an iterator over the keys of the dictionary.  This is a
      shortcut for "iterkeys()".

   clear()

      Remove all items from the dictionary.

   copy()

      Return a shallow copy of the dictionary.

   fromkeys(seq[, value])

      Create a new dictionary with keys from *seq* and values set to
      *value*.

      "fromkeys()" is a class method that returns a new dictionary.
      *value* defaults to "None".

      New in version 2.3.

   get(key[, default])

      Return the value for *key* if *key* is in the dictionary, else
      *default*. If *default* is not given, it defaults to "None", so
      that this method never raises a "KeyError".

   has_key(key)

      Test for the presence of *key* in the dictionary.  "has_key()"
      is deprecated in favor of "key in d".

   items()

      Return a copy of the dictionary's list of "(key, value)" pairs.

      **CPython implementation detail:** Keys and values are listed in
      an arbitrary order which is non-random, varies across Python
      implementations, and depends on the dictionary's history of
      insertions and deletions.

      If "items()", "keys()", "values()", "iteritems()", "iterkeys()",
      and "itervalues()" are called with no intervening modifications
      to the dictionary, the lists will directly correspond.  This
      allows the creation of "(value, key)" pairs using "zip()":
      "pairs = zip(d.values(), d.keys())".  The same relationship
      holds for the "iterkeys()" and "itervalues()" methods: "pairs =
      zip(d.itervalues(), d.iterkeys())" provides the same value for
      "pairs". Another way to create the same list is "pairs = [(v, k)
      for (k, v) in d.iteritems()]".

   iteritems()

      Return an iterator over the dictionary's "(key, value)" pairs.
      See the note for "dict.items()".

      Using "iteritems()" while adding or deleting entries in the
      dictionary may raise a "RuntimeError" or fail to iterate over
      all entries.

      New in version 2.2.

   iterkeys()

      Return an iterator over the dictionary's keys.  See the note for
      "dict.items()".

      Using "iterkeys()" while adding or deleting entries in the
      dictionary may raise a "RuntimeError" or fail to iterate over
      all entries.

      New in version 2.2.

   itervalues()

      Return an iterator over the dictionary's values.  See the note
      for "dict.items()".

      Using "itervalues()" while adding or deleting entries in the
      dictionary may raise a "RuntimeError" or fail to iterate over
      all entries.

      New in version 2.2.

   keys()

      Return a copy of the dictionary's list of keys.  See the note
      for "dict.items()".

   pop(key[, default])

      If *key* is in the dictionary, remove it and return its value,
      else return *default*.  If *default* is not given and *key* is
      not in the dictionary, a "KeyError" is raised.

      New in version 2.3.

   popitem()

      Remove and return an arbitrary "(key, value)" pair from the
      dictionary.

      "popitem()" is useful to destructively iterate over a
      dictionary, as often used in set algorithms.  If the dictionary
      is empty, calling "popitem()" raises a "KeyError".
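
      For example (a one-item dictionary keeps the output predictable):

         >>> d = {'spam': 1}
         >>> d.popitem()
         ('spam', 1)
         >>> d.popitem()
         Traceback (most recent call last):
           File "<stdin>", line 1, in <module>
         KeyError: 'popitem(): dictionary is empty'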

   setdefault(key[, default])

      If *key* is in the dictionary, return its value.  If not, insert
      *key* with a value of *default* and return *default*.  *default*
      defaults to "None".
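
      For example, "setdefault()" differs from "get()" only in that the
      missing key is inserted ("sorted()" is used below just to make
      the output order deterministic):

         >>> d = {'a': 1}
         >>> d.get('b')            # missing key: returns None, d unchanged
         >>> d.get('b', 0)
         0
         >>> d.setdefault('b', 0)  # missing key: inserts it and returns 0
         0
         >>> sorted(d.items())
         [('a', 1), ('b', 0)]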

   update([other])

      Update the dictionary with the key/value pairs from *other*,
      overwriting existing keys.  Return "None".

      "update()" accepts either another dictionary object or an
      iterable of key/value pairs (as tuples or other iterables of
      length two).  If keyword arguments are specified, the dictionary
      is then updated with those key/value pairs: "d.update(red=1,
      blue=2)".

      Changed in version 2.4: Allowed the argument to be an iterable
      of key/value pairs and allowed keyword arguments.

   values()

      Return a copy of the dictionary's list of values.  See the note
      for "dict.items()".

   viewitems()

      Return a new view of the dictionary's items ("(key, value)"
      pairs).  See below for documentation of view objects.

      New in version 2.7.

   viewkeys()

      Return a new view of the dictionary's keys.  See below for
      documentation of view objects.

      New in version 2.7.

   viewvalues()

      Return a new view of the dictionary's values.  See below for
      documentation of view objects.

      New in version 2.7.

   Dictionaries compare equal if and only if they have the same "(key,
   value)" pairs.


Dictionary view objects
=======================

The objects returned by "dict.viewkeys()", "dict.viewvalues()" and
"dict.viewitems()" are *view objects*.  They provide a dynamic view on
the dictionary's entries, which means that when the dictionary
changes, the view reflects these changes.

Dictionary views can be iterated over to yield their respective data,
and support membership tests:

len(dictview)

   Return the number of entries in the dictionary.

iter(dictview)

   Return an iterator over the keys, values or items (represented as
   tuples of "(key, value)") in the dictionary.

   Keys and values are iterated over in an arbitrary order which is
   non-random, varies across Python implementations, and depends on
   the dictionary's history of insertions and deletions. If keys,
   values and items views are iterated over with no intervening
   modifications to the dictionary, the order of items will directly
   correspond.  This allows the creation of "(value, key)" pairs using
   "zip()": "pairs = zip(d.values(), d.keys())".  Another way to
   create the same list is "pairs = [(v, k) for (k, v) in d.items()]".

   Iterating views while adding or deleting entries in the dictionary
   may raise a "RuntimeError" or fail to iterate over all entries.

x in dictview

   Return "True" if *x* is in the underlying dictionary's keys, values
   or items (in the latter case, *x* should be a "(key, value)"
   tuple).

Keys views are set-like since their entries are unique and hashable.
If all values are hashable, so that (key, value) pairs are unique and
hashable, then the items view is also set-like.  (Values views are not
treated as set-like since the entries are generally not unique.)  Then
these set operations are available ("other" refers either to another
view or a set):

dictview & other

   Return the intersection of the dictview and the other object as a
   new set.

dictview | other

   Return the union of the dictview and the other object as a new set.

dictview - other

   Return the difference between the dictview and the other object
   (all elements in *dictview* that aren't in *other*) as a new set.

dictview ^ other

   Return the symmetric difference (all elements either in *dictview*
   or *other*, but not in both) of the dictview and the other object
   as a new set.

An example of dictionary view usage:

   >>> dishes = {'eggs': 2, 'sausage': 1, 'bacon': 1, 'spam': 500}
   >>> keys = dishes.viewkeys()
   >>> values = dishes.viewvalues()

   >>> # iteration
   >>> n = 0
   >>> for val in values:
   ...     n += val
   ...
   >>> print(n)
   504

   >>> # keys and values are iterated over in the same order
   >>> list(keys)
   ['eggs', 'bacon', 'sausage', 'spam']
   >>> list(values)
   [2, 1, 1, 500]

   >>> # view objects are dynamic and reflect dict changes
   >>> del dishes['eggs']
   >>> del dishes['sausage']
   >>> list(keys)
   ['spam', 'bacon']

   >>> # set operations
   >>> keys & {'eggs', 'bacon', 'salad'}
   {'bacon'}
ttypesmappingsz
Methods
*******

Methods are functions that are called using the attribute notation.
There are two flavors: built-in methods (such as "append()" on lists)
and class instance methods.  Built-in methods are described with the
types that support them.

The implementation adds two special read-only attributes to class
instance methods: "m.im_self" is the object on which the method
operates, and "m.im_func" is the function implementing the method.
Calling "m(arg-1, arg-2, ..., arg-n)" is completely equivalent to
calling "m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)".

Class instance methods are either *bound* or *unbound*, referring to
whether the method was accessed through an instance or a class,
respectively.  When a method is unbound, its "im_self" attribute will
be "None" and if called, an explicit "self" object must be passed as
the first argument.  In this case, "self" must be an instance of the
unbound method's class (or a subclass of that class), otherwise a
"TypeError" is raised.

Like function objects, method objects support getting arbitrary
attributes. However, since method attributes are actually stored on
the underlying function object ("meth.im_func"), setting method
attributes on either bound or unbound methods is disallowed.
Attempting to set an attribute on a method results in an
"AttributeError" being raised.  In order to set a method attribute,
you need to explicitly set it on the underlying function object:

   >>> class C:
   ...     def method(self):
   ...         pass
   ...
   >>> c = C()
   >>> c.method.whoami = 'my name is method'  # can't set on the method
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   AttributeError: 'instancemethod' object has no attribute 'whoami'
   >>> c.method.im_func.whoami = 'my name is method'
   >>> c.method.whoami
   'my name is method'

See The standard type hierarchy for more information.
ttypesmethodss
Modules
*******

The only special operation on a module is attribute access: "m.name",
where *m* is a module and *name* accesses a name defined in *m*'s
symbol table. Module attributes can be assigned to.  (Note that the
"import" statement is not, strictly speaking, an operation on a module
object; "import foo" does not require a module object named *foo* to
exist, rather it requires an (external) *definition* for a module
named *foo* somewhere.)

A special attribute of every module is "__dict__". This is the
dictionary containing the module's symbol table. Modifying this
dictionary will actually change the module's symbol table, but direct
assignment to the "__dict__" attribute is not possible (you can write
"m.__dict__['a'] = 1", which defines "m.a" to be "1", but you can't
write "m.__dict__ = {}").  Modifying "__dict__" directly is not
recommended.
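
For example (the attribute name "answer" is just an illustration):

   >>> import os
   >>> os.__dict__['answer'] = 42      # same effect as "os.answer = 42"
   >>> os.answer
   42
   >>> os.__dict__ is vars(os)         # vars() returns the same dictionary
   True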

Modules built into the interpreter are written like this: "<module
'sys' (built-in)>".  If loaded from a file, they are written as
"<module 'os' from '/usr/local/lib/pythonX.Y/os.pyc'>".
ttypesmodulessy�
Sequence Types --- "str", "unicode", "list", "tuple", "bytearray", "buffer", "xrange"
*************************************************************************************

There are seven sequence types: strings, Unicode strings, lists,
tuples, bytearrays, buffers, and xrange objects.

For other containers see the built in "dict" and "set" classes, and
the "collections" module.

String literals are written in single or double quotes: "'xyzzy'",
""frobozz"".  See String literals for more about string literals.
Unicode strings are much like strings, but are specified in the syntax
using a preceding "'u'" character: "u'abc'", "u"def"". In addition to
the functionality described here, there are also string-specific
methods described in the String Methods section. Lists are constructed
with square brackets, separating items with commas: "[a, b, c]".
Tuples are constructed by the comma operator (not within square
brackets), with or without enclosing parentheses, but an empty tuple
must have the enclosing parentheses, such as "a, b, c" or "()".  A
single item tuple must have a trailing comma, such as "(d,)".

Bytearray objects are created with the built-in function
"bytearray()".

Buffer objects are not directly supported by Python syntax, but can be
created by calling the built-in function "buffer()".  They don't
support concatenation or repetition.

Objects of type xrange are similar to buffers in that there is no
specific syntax to create them, but they are created using the
"xrange()" function.  They don't support slicing, concatenation or
repetition, and using "in", "not in", "min()" or "max()" on them is
inefficient.

Most sequence types support the following operations.  The "in" and
"not in" operations have the same priorities as the comparison
operations.  The "+" and "*" operations have the same priority as the
corresponding numeric operations. [3] Additional methods are provided
for Mutable Sequence Types.

This table lists the sequence operations sorted in ascending priority.
In the table, *s* and *t* are sequences of the same type; *n*, *i* and
*j* are integers:

+--------------------+----------------------------------+------------+
| Operation          | Result                           | Notes      |
+====================+==================================+============+
| "x in s"           | "True" if an item of *s* is      | (1)        |
|                    | equal to *x*, else "False"       |            |
+--------------------+----------------------------------+------------+
| "x not in s"       | "False" if an item of *s* is     | (1)        |
|                    | equal to *x*, else "True"        |            |
+--------------------+----------------------------------+------------+
| "s + t"            | the concatenation of *s* and *t* | (6)        |
+--------------------+----------------------------------+------------+
| "s * n, n * s"     | equivalent to adding *s* to      | (2)        |
|                    | itself *n* times                 |            |
+--------------------+----------------------------------+------------+
| "s[i]"             | *i*th item of *s*, origin 0      | (3)        |
+--------------------+----------------------------------+------------+
| "s[i:j]"           | slice of *s* from *i* to *j*     | (3)(4)     |
+--------------------+----------------------------------+------------+
| "s[i:j:k]"         | slice of *s* from *i* to *j*     | (3)(5)     |
|                    | with step *k*                    |            |
+--------------------+----------------------------------+------------+
| "len(s)"           | length of *s*                    |            |
+--------------------+----------------------------------+------------+
| "min(s)"           | smallest item of *s*             |            |
+--------------------+----------------------------------+------------+
| "max(s)"           | largest item of *s*              |            |
+--------------------+----------------------------------+------------+
| "s.index(x)"       | index of the first occurrence of |            |
|                    | *x* in *s*                       |            |
+--------------------+----------------------------------+------------+
| "s.count(x)"       | total number of occurrences of   |            |
|                    | *x* in *s*                       |            |
+--------------------+----------------------------------+------------+

Sequence types also support comparisons. In particular, tuples and
lists are compared lexicographically by comparing corresponding
elements. This means that to compare equal, every element must compare
equal and the two sequences must be of the same type and have the same
length. (For full details see Comparisons in the language reference.)
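
For example:

   >>> (1, 2, 3) < (1, 2, 4)      # compared element by element
   True
   >>> [1, 2, 3] == [1.0, 2.0, 3.0]
   True
   >>> [1, 2] == (1, 2)           # different sequence types never compare equal
   False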

Notes:

1. When *s* is a string or Unicode string object the "in" and "not
   in" operations act like a substring test.  In Python versions
   before 2.3, *x* had to be a string of length 1. In Python 2.3 and
   beyond, *x* may be a string of any length.

2. Values of *n* less than "0" are treated as "0" (which yields an
   empty sequence of the same type as *s*).  Note that items in the
   sequence *s* are not copied; they are referenced multiple times.
   This often haunts new Python programmers; consider:

   >>> lists = [[]] * 3
   >>> lists
   [[], [], []]
   >>> lists[0].append(3)
   >>> lists
   [[3], [3], [3]]

   What has happened is that "[[]]" is a one-element list containing
   an empty list, so all three elements of "[[]] * 3" are references
   to this single empty list.  Modifying any of the elements of
   "lists" modifies this single list. You can create a list of
   different lists this way:

   >>> lists = [[] for i in range(3)]
   >>> lists[0].append(3)
   >>> lists[1].append(5)
   >>> lists[2].append(7)
   >>> lists
   [[3], [5], [7]]

   Further explanation is available in the FAQ entry How do I create a
   multidimensional list?.

3. If *i* or *j* is negative, the index is relative to the end of
   sequence *s*: "len(s) + i" or "len(s) + j" is substituted.  But
   note that "-0" is still "0".

4. The slice of *s* from *i* to *j* is defined as the sequence of
   items with index *k* such that "i <= k < j".  If *i* or *j* is
   greater than "len(s)", use "len(s)".  If *i* is omitted or "None",
   use "0".  If *j* is omitted or "None", use "len(s)".  If *i* is
   greater than or equal to *j*, the slice is empty.

5. The slice of *s* from *i* to *j* with step *k* is defined as the
   sequence of items with index  "x = i + n*k" such that "0 <= n <
   (j-i)/k".  In other words, the indices are "i", "i+k", "i+2*k",
   "i+3*k" and so on, stopping when *j* is reached (but never
   including *j*).  When *k* is positive, *i* and *j* are reduced to
   "len(s)" if they are greater. When *k* is negative, *i* and *j* are
   reduced to "len(s) - 1" if they are greater.  If *i* or *j* are
   omitted or "None", they become "end" values (which end depends on
   the sign of *k*).  Note, *k* cannot be zero. If *k* is "None", it
   is treated like "1".

6. **CPython implementation detail:** If *s* and *t* are both
   strings, some Python implementations such as CPython can usually
   perform an in-place optimization for assignments of the form "s = s
   + t" or "s += t".  When applicable, this optimization makes
   quadratic run-time much less likely.  This optimization is both
   version and implementation dependent.  For performance sensitive
   code, it is preferable to use the "str.join()" method which assures
   consistent linear concatenation performance across versions and
   implementations.

   Changed in version 2.4: Formerly, string concatenation never
   occurred in-place.
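
A few interactive examples of notes (4)-(6) above (a minimal sketch):

   >>> s = 'abcdefg'
   >>> s[2:100]                   # an out-of-bounds j is clipped to len(s)
   'cdefg'
   >>> s[::2]                     # i and j omitted, step k = 2
   'aceg'
   >>> s[::-1]                    # a negative step reverses the sequence
   'gfedcba'
   >>> ''.join(['ab', 'cd', 'ef'])    # preferred over repeated "s += t"
   'abcdef'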


String Methods
==============

Below are listed the string methods which both 8-bit strings and
Unicode objects support.  Some of them are also available on
"bytearray" objects.

In addition, Python's strings support the sequence type methods
described in the Sequence Types --- str, unicode, list, tuple,
bytearray, buffer, xrange section. To output formatted strings use
template strings or the "%" operator described in the String
Formatting Operations section. Also, see the "re" module for string
functions based on regular expressions.

str.capitalize()

   Return a copy of the string with its first character capitalized
   and the rest lowercased.

   For 8-bit strings, this method is locale-dependent.

str.center(width[, fillchar])

   Return the string centered in a string of length *width*. Padding is done
   using the specified *fillchar* (default is a space).

   Changed in version 2.4: Support for the *fillchar* argument.
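
   For example:

   >>> 'python'.center(10)
   '  python  '
   >>> 'py'.center(6, '*')
   '**py**'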

str.count(sub[, start[, end]])

   Return the number of non-overlapping occurrences of substring *sub*
   in the range [*start*, *end*].  Optional arguments *start* and
   *end* are interpreted as in slice notation.

str.decode([encoding[, errors]])

   Decodes the string using the codec registered for *encoding*.
   *encoding* defaults to the default string encoding.  *errors* may
   be given to set a different error handling scheme.  The default is
   "'strict'", meaning that encoding errors raise "UnicodeError".
   Other possible values are "'ignore'", "'replace'" and any other
   name registered via "codecs.register_error()", see section Codec
   Base Classes.

   New in version 2.2.

   Changed in version 2.3: Support for other error handling schemes
   added.

   Changed in version 2.7: Support for keyword arguments added.

str.encode([encoding[, errors]])

   Return an encoded version of the string.  Default encoding is the
   current default string encoding.  *errors* may be given to set a
   different error handling scheme.  The default for *errors* is
   "'strict'", meaning that encoding errors raise a "UnicodeError".
   Other possible values are "'ignore'", "'replace'",
   "'xmlcharrefreplace'", "'backslashreplace'" and any other name
   registered via "codecs.register_error()", see section Codec Base
   Classes. For a list of possible encodings, see section Standard
   Encodings.

   New in version 2.0.

   Changed in version 2.3: Support for "'xmlcharrefreplace'" and
   "'backslashreplace'" and other error handling schemes added.

   Changed in version 2.7: Support for keyword arguments added.
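
   For example, round-tripping a Unicode string through UTF-8 (a
   minimal sketch):

   >>> u'caf\xe9'.encode('utf-8')
   'caf\xc3\xa9'
   >>> 'caf\xc3\xa9'.decode('utf-8')
   u'caf\xe9'
   >>> u'caf\xe9'.encode('ascii', 'replace')
   'caf?'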

str.endswith(suffix[, start[, end]])

   Return "True" if the string ends with the specified *suffix*,
   otherwise return "False".  *suffix* can also be a tuple of suffixes
   to look for.  With optional *start*, test beginning at that
   position.  With optional *end*, stop comparing at that position.

   Changed in version 2.5: Accept tuples as *suffix*.

str.expandtabs([tabsize])

   Return a copy of the string where all tab characters are replaced
   by one or more spaces, depending on the current column and the
   given tab size.  Tab positions occur every *tabsize* characters
   (default is 8, giving tab positions at columns 0, 8, 16 and so on).
   To expand the string, the current column is set to zero and the
   string is examined character by character.  If the character is a
   tab ("\t"), one or more space characters are inserted in the result
   until the current column is equal to the next tab position. (The
   tab character itself is not copied.)  If the character is a newline
   ("\n") or return ("\r"), it is copied and the current column is
   reset to zero.  Any other character is copied unchanged and the
   current column is incremented by one regardless of how the
   character is represented when printed.

   >>> '01\t012\t0123\t01234'.expandtabs()
   '01      012     0123    01234'
   >>> '01\t012\t0123\t01234'.expandtabs(4)
   '01  012 0123    01234'

str.find(sub[, start[, end]])

   Return the lowest index in the string where substring *sub* is
   found within the slice "s[start:end]".  Optional arguments *start*
   and *end* are interpreted as in slice notation.  Return "-1" if
   *sub* is not found.

   Note: The "find()" method should be used only if you need to know
     the position of *sub*.  To check if *sub* is a substring or not,
     use the "in" operator:

        >>> 'Py' in 'Python'
        True

str.format(*args, **kwargs)

   Perform a string formatting operation.  The string on which this
   method is called can contain literal text or replacement fields
   delimited by braces "{}".  Each replacement field contains either
   the numeric index of a positional argument, or the name of a
   keyword argument.  Returns a copy of the string where each
   replacement field is replaced with the string value of the
   corresponding argument.

   >>> "The sum of 1 + 2 is {0}".format(1+2)
   'The sum of 1 + 2 is 3'

   See Format String Syntax for a description of the various
   formatting options that can be specified in format strings.

   This method of string formatting is the new standard in Python 3,
   and should be preferred to the "%" formatting described in String
   Formatting Operations in new code.

   New in version 2.6.

str.index(sub[, start[, end]])

   Like "find()", but raise "ValueError" when the substring is not
   found.

str.isalnum()

   Return true if all characters in the string are alphanumeric and
   there is at least one character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.isalpha()

   Return true if all characters in the string are alphabetic and
   there is at least one character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.isdigit()

   Return true if all characters in the string are digits and there is
   at least one character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.islower()

   Return true if all cased characters [4] in the string are lowercase
   and there is at least one cased character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.isspace()

   Return true if there are only whitespace characters in the string
   and there is at least one character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.istitle()

   Return true if the string is a titlecased string and there is at
   least one character, for example uppercase characters may only
   follow uncased characters and lowercase characters only cased ones.
   Return false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.isupper()

   Return true if all cased characters [4] in the string are uppercase
   and there is at least one cased character, false otherwise.

   For 8-bit strings, this method is locale-dependent.

str.join(iterable)

   Return a string which is the concatenation of the strings in
   *iterable*. A "TypeError" will be raised if there are any non-
   string values in *iterable*, including "bytes" objects.  The
   separator between elements is the string providing this method.

str.ljust(width[, fillchar])

   Return the string left justified in a string of length *width*.
   Padding is done using the specified *fillchar* (default is a
   space).  The original string is returned if *width* is less than or
   equal to "len(s)".

   Changed in version 2.4: Support for the *fillchar* argument.

str.lower()

   Return a copy of the string with all the cased characters [4]
   converted to lowercase.

   For 8-bit strings, this method is locale-dependent.

str.lstrip([chars])

   Return a copy of the string with leading characters removed.  The
   *chars* argument is a string specifying the set of characters to be
   removed.  If omitted or "None", the *chars* argument defaults to
   removing whitespace.  The *chars* argument is not a prefix; rather,
   all combinations of its values are stripped:

   >>> '   spacious   '.lstrip()
   'spacious   '
   >>> 'www.example.com'.lstrip('cmowz.')
   'example.com'

   Changed in version 2.2.2: Support for the *chars* argument.

str.partition(sep)

   Split the string at the first occurrence of *sep*, and return a
   3-tuple containing the part before the separator, the separator
   itself, and the part after the separator.  If the separator is not
   found, return a 3-tuple containing the string itself, followed by
   two empty strings.

   New in version 2.5.
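
   For example:

   >>> 'user@example.com'.partition('@')
   ('user', '@', 'example.com')
   >>> 'no-separator'.partition('@')
   ('no-separator', '', '')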

str.replace(old, new[, count])

   Return a copy of the string with all occurrences of substring *old*
   replaced by *new*.  If the optional argument *count* is given, only
   the first *count* occurrences are replaced.

str.rfind(sub[, start[, end]])

   Return the highest index in the string where substring *sub* is
   found, such that *sub* is contained within "s[start:end]".
   Optional arguments *start* and *end* are interpreted as in slice
   notation.  Return "-1" on failure.

str.rindex(sub[, start[, end]])

   Like "rfind()" but raises "ValueError" when the substring *sub* is
   not found.

str.rjust(width[, fillchar])

   Return the string right justified in a string of length *width*.
   Padding is done using the specified *fillchar* (default is a
   space). The original string is returned if *width* is less than or
   equal to "len(s)".

   Changed in version 2.4: Support for the *fillchar* argument.

str.rpartition(sep)

   Split the string at the last occurrence of *sep*, and return a
   3-tuple containing the part before the separator, the separator
   itself, and the part after the separator.  If the separator is not
   found, return a 3-tuple containing two empty strings, followed by
   the string itself.

   New in version 2.5.

str.rsplit([sep[, maxsplit]])

   Return a list of the words in the string, using *sep* as the
   delimiter string. If *maxsplit* is given, at most *maxsplit* splits
   are done, the *rightmost* ones.  If *sep* is not specified or
   "None", any whitespace string is a separator.  Except for splitting
   from the right, "rsplit()" behaves like "split()" which is
   described in detail below.

   New in version 2.4.

str.rstrip([chars])

   Return a copy of the string with trailing characters removed.  The
   *chars* argument is a string specifying the set of characters to be
   removed.  If omitted or "None", the *chars* argument defaults to
   removing whitespace.  The *chars* argument is not a suffix; rather,
   all combinations of its values are stripped:

   >>> '   spacious   '.rstrip()
   '   spacious'
   >>> 'mississippi'.rstrip('ipz')
   'mississ'

   Changed in version 2.2.2: Support for the *chars* argument.

str.split([sep[, maxsplit]])

   Return a list of the words in the string, using *sep* as the
   delimiter string.  If *maxsplit* is given, at most *maxsplit*
   splits are done (thus, the list will have at most "maxsplit+1"
   elements).  If *maxsplit* is not specified or "-1", then there is
   no limit on the number of splits (all possible splits are made).

   If *sep* is given, consecutive delimiters are not grouped together
   and are deemed to delimit empty strings (for example,
   "'1,,2'.split(',')" returns "['1', '', '2']").  The *sep* argument
   may consist of multiple characters (for example,
   "'1<>2<>3'.split('<>')" returns "['1', '2', '3']"). Splitting an
   empty string with a specified separator returns "['']".

   If *sep* is not specified or is "None", a different splitting
   algorithm is applied: runs of consecutive whitespace are regarded
   as a single separator, and the result will contain no empty strings
   at the start or end if the string has leading or trailing
   whitespace.  Consequently, splitting an empty string or a string
   consisting of just whitespace with a "None" separator returns "[]".

   For example, "' 1  2   3  '.split()" returns "['1', '2', '3']", and
   "'  1  2   3  '.split(None, 1)" returns "['1', '2   3  ']".

str.splitlines([keepends])

   Return a list of the lines in the string, breaking at line
   boundaries. This method uses the *universal newlines* approach to
   splitting lines. Line breaks are not included in the resulting list
   unless *keepends* is given and true.

   Python recognizes ""\r"", ""\n"", and ""\r\n"" as line boundaries
   for 8-bit strings.

   For example:

      >>> 'ab c\n\nde fg\rkl\r\n'.splitlines()
      ['ab c', '', 'de fg', 'kl']
      >>> 'ab c\n\nde fg\rkl\r\n'.splitlines(True)
      ['ab c\n', '\n', 'de fg\r', 'kl\r\n']

   Unlike "split()" when a delimiter string *sep* is given, this
   method returns an empty list for the empty string, and a terminal
   line break does not result in an extra line:

      >>> "".splitlines()
      []
      >>> "One line\n".splitlines()
      ['One line']

   For comparison, "split('\n')" gives:

      >>> ''.split('\n')
      ['']
      >>> 'Two lines\n'.split('\n')
      ['Two lines', '']

unicode.splitlines([keepends])

   Return a list of the lines in the string, like "str.splitlines()".
   However, the Unicode method splits on the following line
   boundaries, which are a superset of the *universal newlines*
   recognized for 8-bit strings.

   +-------------------------+-------------------------------+
   | Representation          | Description                   |
   +=========================+===============================+
   | "\n"                    | Line Feed                     |
   +-------------------------+-------------------------------+
   | "\r"                    | Carriage Return               |
   +-------------------------+-------------------------------+
   | "\r\n"                  | Carriage Return + Line Feed   |
   +-------------------------+-------------------------------+
   | "\v" or "\x0b"          | Line Tabulation               |
   +-------------------------+-------------------------------+
   | "\f" or "\x0c"          | Form Feed                     |
   +-------------------------+-------------------------------+
   | "\x1c"                  | File Separator                |
   +-------------------------+-------------------------------+
   | "\x1d"                  | Group Separator               |
   +-------------------------+-------------------------------+
   | "\x1e"                  | Record Separator              |
   +-------------------------+-------------------------------+
   | "\x85"                  | Next Line (C1 Control Code)   |
   +-------------------------+-------------------------------+
   | "\u2028"                | Line Separator                |
   +-------------------------+-------------------------------+
   | "\u2029"                | Paragraph Separator           |
   +-------------------------+-------------------------------+

   Changed in version 2.7: "\v" and "\f" added to list of line
   boundaries.

str.startswith(prefix[, start[, end]])

   Return "True" if string starts with the *prefix*, otherwise return
   "False". *prefix* can also be a tuple of prefixes to look for.
   With optional *start*, test string beginning at that position.
   With optional *end*, stop comparing string at that position.

   Changed in version 2.5: Accept tuples as *prefix*.

str.strip([chars])

   Return a copy of the string with the leading and trailing
   characters removed. The *chars* argument is a string specifying the
   set of characters to be removed. If omitted or "None", the *chars*
   argument defaults to removing whitespace. The *chars* argument is
   not a prefix or suffix; rather, all combinations of its values are
   stripped:

   >>> '   spacious   '.strip()
   'spacious'
   >>> 'www.example.com'.strip('cmowz.')
   'example'

   Changed in version 2.2.2: Support for the *chars* argument.

str.swapcase()

   Return a copy of the string with uppercase characters converted to
   lowercase and vice versa.

   For 8-bit strings, this method is locale-dependent.

str.title()

   Return a titlecased version of the string where words start with an
   uppercase character and the remaining characters are lowercase.

   The algorithm uses a simple language-independent definition of a
   word as groups of consecutive letters.  The definition works in
   many contexts but it means that apostrophes in contractions and
   possessives form word boundaries, which may not be the desired
   result:

      >>> "they're bill's friends from the UK".title()
      "They'Re Bill'S Friends From The Uk"

   A workaround for apostrophes can be constructed using regular
   expressions:

      >>> import re
      >>> def titlecase(s):
      ...     return re.sub(r"[A-Za-z]+('[A-Za-z]+)?",
      ...                   lambda mo: mo.group(0)[0].upper() +
      ...                              mo.group(0)[1:].lower(),
      ...                   s)
      ...
      >>> titlecase("they're bill's friends.")
      "They're Bill's Friends."

   For 8-bit strings, this method is locale-dependent.

str.translate(table[, deletechars])

   Return a copy of the string where all characters occurring in the
   optional argument *deletechars* are removed, and the remaining
   characters have been mapped through the given translation table,
   which must be a string of length 256.

   You can use the "maketrans()" helper function in the "string"
   module to create a translation table. For string objects, set the
   *table* argument to "None" for translations that only delete
   characters:

   >>> 'read this short text'.translate(None, 'aeiou')
   'rd ths shrt txt'

   New in version 2.6: Support for a "None" *table* argument.

   For Unicode objects, the "translate()" method does not accept the
   optional *deletechars* argument.  Instead, it returns a copy of
   *s* where all characters have been mapped through the given
   translation table which must be a mapping of Unicode ordinals to
   Unicode ordinals, Unicode strings or "None". Unmapped characters
   are left untouched. Characters mapped to "None" are deleted.  Note,
   a more flexible approach is to create a custom character mapping
   codec using the "codecs" module (see "encodings.cp1251" for an
   example).
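
   For example, a mapping that deletes "u'a'" and replaces "u'e'":

   >>> table = {ord(u'a'): None, ord(u'e'): u'3'}
   >>> u'read this short text'.translate(table)
   u'r3d this short t3xt'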

str.upper()

   Return a copy of the string with all the cased characters [4]
   converted to uppercase.  Note that "s.upper().isupper()" might be
   "False" if "s" contains uncased characters or if the Unicode
   category of the resulting character(s) is not "Lu" (Letter,
   uppercase), but e.g. "Lt" (Letter, titlecase).

   For 8-bit strings, this method is locale-dependent.

str.zfill(width)

   Return the numeric string left filled with zeros in a string of
   length *width*.  A sign prefix is handled correctly.  The original
   string is returned if *width* is less than or equal to "len(s)".

   New in version 2.2.2.
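
   For example:

   >>> '42'.zfill(5)
   '00042'
   >>> '-42'.zfill(5)
   '-0042'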

The following methods are present only on unicode objects:

unicode.isnumeric()

   Return "True" if there are only numeric characters in S, "False"
   otherwise. Numeric characters include digit characters, and all
   characters that have the Unicode numeric value property, e.g.
   U+2155, VULGAR FRACTION ONE FIFTH.

unicode.isdecimal()

   Return "True" if there are only decimal characters in S, "False"
   otherwise. Decimal characters include digit characters, and all
   characters that can be used to form decimal-radix numbers, e.g.
   U+0660, ARABIC-INDIC DIGIT ZERO.


String Formatting Operations
============================

String and Unicode objects have one unique built-in operation: the "%"
operator (modulo).  This is also known as the string *formatting* or
*interpolation* operator.  Given "format % values" (where *format* is
a string or Unicode object), "%" conversion specifications in *format*
are replaced with zero or more elements of *values*.  The effect is
similar to using "sprintf()" in the C language.  If *format* is a
Unicode object, or if any of the objects being converted using the
"%s" conversion are Unicode objects, the result will also be a Unicode
object.

If *format* requires a single argument, *values* may be a single non-
tuple object. [5]  Otherwise, *values* must be a tuple with exactly
the number of items specified by the format string, or a single
mapping object (for example, a dictionary).

A conversion specifier contains two or more characters and has the
following components, which must occur in this order:

1. The "'%'" character, which marks the start of the specifier.

2. Mapping key (optional), consisting of a parenthesised sequence
   of characters (for example, "(somename)").

3. Conversion flags (optional), which affect the result of some
   conversion types.

4. Minimum field width (optional).  If specified as an "'*'"
   (asterisk), the actual width is read from the next element of the
   tuple in *values*, and the object to convert comes after the
   minimum field width and optional precision.

5. Precision (optional), given as a "'.'" (dot) followed by the
   precision.  If specified as "'*'" (an asterisk), the actual
   precision is read from the next element of the tuple in *values*,
   and the value to convert comes after the precision (see the example
   after this list).

6. Length modifier (optional).

7. Conversion type.
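
For example, the width and the precision can be given literally or read
from *values* with "'*'" (a minimal sketch):

   >>> '%10.3f' % 3.14159            # width 10, precision 3
   '     3.142'
   >>> '%-*.*f' % (10, 3, 3.14159)   # both read from the tuple, left adjusted
   '3.142     '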

When the right argument is a dictionary (or other mapping type), then
the formats in the string *must* include a parenthesised mapping key
into that dictionary inserted immediately after the "'%'" character.
The mapping key selects the value to be formatted from the mapping.
For example:

>>> print '%(language)s has %(number)03d quote types.' % \
...       {"language": "Python", "number": 2}
Python has 002 quote types.

In this case no "*" specifiers may occur in a format (since they
require a sequential parameter list).

The conversion flag characters are:

+-----------+-----------------------------------------------------------------------+
| Flag      | Meaning                                                               |
+===========+=======================================================================+
| "'#'"     | The value conversion will use the "alternate form" (where defined     |
|           | below).                                                               |
+-----------+-----------------------------------------------------------------------+
| "'0'"     | The conversion will be zero padded for numeric values.                |
+-----------+-----------------------------------------------------------------------+
| "'-'"     | The converted value is left adjusted (overrides the "'0'" conversion  |
|           | if both are given).                                                   |
+-----------+-----------------------------------------------------------------------+
| "' '"     | (a space) A blank should be left before a positive number (or empty   |
|           | string) produced by a signed conversion.                              |
+-----------+-----------------------------------------------------------------------+
| "'+'"     | A sign character ("'+'" or "'-'") will precede the conversion         |
|           | (overrides a "space" flag).                                           |
+-----------+-----------------------------------------------------------------------+

A length modifier ("h", "l", or "L") may be present, but is ignored as
it is not necessary for Python -- so e.g. "%ld" is identical to "%d".

The conversion types are:

+--------------+-------------------------------------------------------+---------+
| Conversion   | Meaning                                               | Notes   |
+==============+=======================================================+=========+
| "'d'"        | Signed integer decimal.                               |         |
+--------------+-------------------------------------------------------+---------+
| "'i'"        | Signed integer decimal.                               |         |
+--------------+-------------------------------------------------------+---------+
| "'o'"        | Signed octal value.                                   | (1)     |
+--------------+-------------------------------------------------------+---------+
| "'u'"        | Obsolete type -- it is identical to "'d'".            | (7)     |
+--------------+-------------------------------------------------------+---------+
| "'x'"        | Signed hexadecimal (lowercase).                       | (2)     |
+--------------+-------------------------------------------------------+---------+
| "'X'"        | Signed hexadecimal (uppercase).                       | (2)     |
+--------------+-------------------------------------------------------+---------+
| "'e'"        | Floating point exponential format (lowercase).        | (3)     |
+--------------+-------------------------------------------------------+---------+
| "'E'"        | Floating point exponential format (uppercase).        | (3)     |
+--------------+-------------------------------------------------------+---------+
| "'f'"        | Floating point decimal format.                        | (3)     |
+--------------+-------------------------------------------------------+---------+
| "'F'"        | Floating point decimal format.                        | (3)     |
+--------------+-------------------------------------------------------+---------+
| "'g'"        | Floating point format. Uses lowercase exponential     | (4)     |
|              | format if exponent is less than -4 or not less than   |         |
|              | precision, decimal format otherwise.                  |         |
+--------------+-------------------------------------------------------+---------+
| "'G'"        | Floating point format. Uses uppercase exponential     | (4)     |
|              | format if exponent is less than -4 or not less than   |         |
|              | precision, decimal format otherwise.                  |         |
+--------------+-------------------------------------------------------+---------+
| "'c'"        | Single character (accepts integer or single character |         |
|              | string).                                              |         |
+--------------+-------------------------------------------------------+---------+
| "'r'"        | String (converts any Python object using repr()).     | (5)     |
+--------------+-------------------------------------------------------+---------+
| "'s'"        | String (converts any Python object using "str()").    | (6)     |
+--------------+-------------------------------------------------------+---------+
| "'%'"        | No argument is converted, results in a "'%'"          |         |
|              | character in the result.                              |         |
+--------------+-------------------------------------------------------+---------+

Notes:

1. The alternate form causes a leading zero ("'0'") to be inserted
   between left-hand padding and the formatting of the number if the
   leading character of the result is not already a zero.

2. The alternate form causes a leading "'0x'" or "'0X'" (depending
   on whether the "'x'" or "'X'" format was used) to be inserted
   before the first digit.

3. The alternate form causes the result to always contain a decimal
   point, even if no digits follow it.

   The precision determines the number of digits after the decimal
   point and defaults to 6.

4. The alternate form causes the result to always contain a decimal
   point, and trailing zeroes are not removed as they would otherwise
   be.

   The precision determines the number of significant digits before
   and after the decimal point and defaults to 6.

5. The "%r" conversion was added in Python 2.0.

   The precision determines the maximal number of characters used.

6. If the object or format provided is a "unicode" string, the
   resulting string will also be "unicode".

   The precision determines the maximal number of characters used.

7. See **PEP 237**.
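
A few interactive examples of the flags and alternate forms described
above:

   >>> '%#o' % 8                  # alternate form: leading zero for octal
   '010'
   >>> '%#x' % 255                # alternate form: leading "0x" for hex
   '0xff'
   >>> '%+05d' % 42               # sign flag combined with zero padding
   '+0042'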

Since Python strings have an explicit length, "%s" conversions do not
assume that "'\0'" is the end of the string.

Changed in version 2.7: "%f" conversions for numbers whose absolute
value is over 1e50 are no longer replaced by "%g" conversions.

Additional string operations are defined in standard modules "string"
and "re".


XRange Type
===========

The "xrange" type is an immutable sequence which is commonly used for
looping.  The advantage of the "xrange" type is that an "xrange"
object will always take the same amount of memory, no matter the size
of the range it represents.  There are no consistent performance
advantages.

XRange objects have very little behavior: they only support indexing,
iteration, and the "len()" function.
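
For example:

   >>> r = xrange(1000000)        # constant memory, regardless of the range
   >>> len(r)
   1000000
   >>> r[10]
   10
   >>> list(xrange(4))            # expand to a list when needed
   [0, 1, 2, 3]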


Mutable Sequence Types
======================

List and "bytearray" objects support additional operations that allow
in-place modification of the object. Other mutable sequence types
(when added to the language) should also support these operations.
Strings and tuples are immutable sequence types: such objects cannot
be modified once created. The following operations are defined on
mutable sequence types (where *x* is an arbitrary object):

+--------------------------------+----------------------------------+-----------------------+
| Operation                      | Result                           | Notes                 |
+================================+==================================+=======================+
| "s[i] = x"                     | item *i* of *s* is replaced by   |                       |
|                                | *x*                              |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s[i:j] = t"                   | slice of *s* from *i* to *j* is  |                       |
|                                | replaced by the contents of the  |                       |
|                                | iterable *t*                     |                       |
+--------------------------------+----------------------------------+-----------------------+
| "del s[i:j]"                   | same as "s[i:j] = []"            |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s[i:j:k] = t"                 | the elements of "s[i:j:k]" are   | (1)                   |
|                                | replaced by those of *t*         |                       |
+--------------------------------+----------------------------------+-----------------------+
| "del s[i:j:k]"                 | removes the elements of          |                       |
|                                | "s[i:j:k]" from the list         |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s.append(x)"                  | same as "s[len(s):len(s)] = [x]" | (2)                   |
+--------------------------------+----------------------------------+-----------------------+
| "s.extend(t)" or "s += t"      | for the most part the same as    | (3)                   |
|                                | "s[len(s):len(s)] = t"           |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s *= n"                       | updates *s* with its contents    | (11)                  |
|                                | repeated *n* times               |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s.count(x)"                   | return number of *i*'s for which |                       |
|                                | "s[i] == x"                      |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s.index(x[, i[, j]])"         | return smallest *k* such that    | (4)                   |
|                                | "s[k] == x" and "i <= k < j"     |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s.insert(i, x)"               | same as "s[i:i] = [x]"           | (5)                   |
+--------------------------------+----------------------------------+-----------------------+
| "s.pop([i])"                   | same as "x = s[i]; del s[i];     | (6)                   |
|                                | return x"                        |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s.remove(x)"                  | same as "del s[s.index(x)]"      | (4)                   |
+--------------------------------+----------------------------------+-----------------------+
| "s.reverse()"                  | reverses the items of *s* in     | (7)                   |
|                                | place                            |                       |
+--------------------------------+----------------------------------+-----------------------+
| "s.sort([cmp[, key[,           | sort the items of *s* in place   | (7)(8)(9)(10)         |
| reverse]]])"                   |                                  |                       |
+--------------------------------+----------------------------------+-----------------------+

Notes:

1. *t* must have the same length as the slice it is replacing.

2. The C implementation of Python has historically accepted
   multiple parameters and implicitly joined them into a tuple; this
   no longer works as of Python 2.0.  Use of this misfeature had been
   deprecated since Python 1.4.

3. *t* can be any iterable object.

4. Raises "ValueError" when *x* is not found in *s*. When a
   negative index is passed as the second or third parameter to the
   "index()" method, the list length is added, as for slice indices.
   If it is still negative, it is truncated to zero, as for slice
   indices.

   Changed in version 2.3: Previously, "index()" didn't have arguments
   for specifying start and stop positions.

5. When a negative index is passed as the first parameter to the
   "insert()" method, the list length is added, as for slice indices.
   If it is still negative, it is truncated to zero, as for slice
   indices.

   Changed in version 2.3: Previously, all negative indices were
   truncated to zero.

6. The "pop()" method's optional argument *i* defaults to "-1", so
   that by default the last item is removed and returned.

7. The "sort()" and "reverse()" methods modify the list in place
   for economy of space when sorting or reversing a large list.  To
   remind you that they operate by side effect, they don't return the
   sorted or reversed list.

8. The "sort()" method takes optional arguments for controlling the
   comparisons.

   *cmp* specifies a custom comparison function of two arguments (list
   items) which should return a negative, zero or positive number
   depending on whether the first argument is considered smaller than,
   equal to, or larger than the second argument: "cmp=lambda x,y:
   cmp(x.lower(), y.lower())".  The default value is "None".

   *key* specifies a function of one argument that is used to extract
   a comparison key from each list element: "key=str.lower".  The
   default value is "None".

   *reverse* is a boolean value.  If set to "True", then the list
   elements are sorted as if each comparison were reversed.

   In general, the *key* and *reverse* conversion processes are much
   faster than specifying an equivalent *cmp* function.  This is
   because *cmp* is called multiple times for each list element while
   *key* and *reverse* touch each element only once.  Use
   "functools.cmp_to_key()" to convert an old-style *cmp* function to
   a *key* function.

   Changed in version 2.3: Support for "None" as an equivalent to
   omitting *cmp* was added.

   Changed in version 2.4: Support for *key* and *reverse* was added.

9. Starting with Python 2.3, the "sort()" method is guaranteed to
   be stable.  A sort is stable if it guarantees not to change the
   relative order of elements that compare equal --- this is helpful
   for sorting in multiple passes (for example, sort by department,
   then by salary grade).

10. **CPython implementation detail:** While a list is being
    sorted, the effect of attempting to mutate, or even inspect, the
    list is undefined.  The C implementation of Python 2.3 and newer
    makes the list appear empty for the duration, and raises
    "ValueError" if it can detect that the list has been mutated
    during a sort.

11. The value *n* is an integer, or an object implementing
    "__index__()".  Zero and negative values of *n* clear the
    sequence.  Items in the sequence are not copied; they are
    referenced multiple times, as explained for "s * n" under Sequence
    Types --- str, unicode, list, tuple, bytearray, buffer, xrange.
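
As an informal illustration of several of the operations and notes
above (slice assignment, "append()", "pop()", extended-slice
deletion, sorting with *key* and *reverse*, and
"functools.cmp_to_key()"):

   >>> s = [3, 1, 2]
   >>> s[1:2] = [10, 11]              # slice assignment from an iterable
   >>> s
   [3, 10, 11, 2]
   >>> s.append(4)
   >>> s.pop()                        # default index -1: last item (note 6)
   4
   >>> s.sort(key=abs, reverse=True)  # key/reverse arguments (note 8)
   >>> s
   [11, 10, 3, 2]
   >>> del s[::2]                     # extended-slice deletion
   >>> s
   [10, 2]
   >>> import functools
   >>> s.sort(key=functools.cmp_to_key(lambda x, y: cmp(x, y)))
   >>> s                              # an old-style cmp function, converted (note 8)
   [2, 10]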


Unary arithmetic and bitwise operations
***************************************

All unary arithmetic and bitwise operations have the same priority:

   u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr

The unary "-" (minus) operator yields the negation of its numeric
argument.

The unary "+" (plus) operator yields its numeric argument unchanged.

The unary "~" (invert) operator yields the bitwise inversion of its
plain or long integer argument.  The bitwise inversion of "x" is
defined as "-(x+1)".  It only applies to integral numbers.

In all three cases, if the argument does not have the proper type, a
"TypeError" exception is raised.


The "while" statement
*********************

The "while" statement is used for repeated execution as long as an
expression is true:

   while_stmt ::= "while" expression ":" suite
                  ["else" ":" suite]

This repeatedly tests the expression and, if it is true, executes the
first suite; if the expression is false (which may be the first time
it is tested) the suite of the "else" clause, if present, is executed
and the loop terminates.

A "break" statement executed in the first suite terminates the loop
without executing the "else" clause's suite.  A "continue" statement
executed in the first suite skips the rest of the suite and goes back
to testing the expression.
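
For example, the following sketch (illustrative only; the names are
arbitrary) uses "break" to leave the loop early and relies on the
"else" clause running only when the loop ends because the expression
became false:

   n = 97
   i = 2
   while i * i <= n:
       if n % i == 0:
           print n, 'has a factor', i
           break                  # skips the "else" clause below
       i = i + 1
   else:
       print n, 'is prime'        # runs only if the loop was not broken out of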


The "with" statement
********************

New in version 2.5.

The "with" statement is used to wrap the execution of a block with
methods defined by a context manager (see section With Statement
Context Managers). This allows common "try"..."except"..."finally"
usage patterns to be encapsulated for convenient reuse.

   with_stmt ::= "with" with_item ("," with_item)* ":" suite
   with_item ::= expression ["as" target]

The execution of the "with" statement with one "item" proceeds as
follows:

1. The context expression (the expression given in the "with_item")
   is evaluated to obtain a context manager.

2. The context manager's "__exit__()" is loaded for later use.

3. The context manager's "__enter__()" method is invoked.

4. If a target was included in the "with" statement, the return
   value from "__enter__()" is assigned to it.

   Note: The "with" statement guarantees that if the "__enter__()"
     method returns without an error, then "__exit__()" will always be
     called. Thus, if an error occurs during the assignment to the
     target list, it will be treated the same as an error occurring
     within the suite would be. See step 6 below.

5. The suite is executed.

6. The context manager's "__exit__()" method is invoked. If an
   exception caused the suite to be exited, its type, value, and
   traceback are passed as arguments to "__exit__()". Otherwise, three
   "None" arguments are supplied.

   If the suite was exited due to an exception, and the return value
   from the "__exit__()" method was false, the exception is reraised.
   If the return value was true, the exception is suppressed, and
   execution continues with the statement following the "with"
   statement.

   If the suite was exited for any reason other than an exception, the
   return value from "__exit__()" is ignored, and execution proceeds
   at the normal location for the kind of exit that was taken.

With more than one item, the context managers are processed as if
multiple "with" statements were nested:

   with A() as a, B() as b:
       suite

is equivalent to

   with A() as a:
       with B() as b:
           suite

Note: In Python 2.5, the "with" statement is only allowed when the
  "with_statement" feature has been enabled.  It is always enabled in
  Python 2.6.

Changed in version 2.7: Support for multiple context expressions.
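
The steps above can be observed with a minimal hand-written context
manager (an illustrative sketch; the class "tag" is invented for this
example and is not part of the standard library):

   class tag(object):
       def __init__(self, name):
           self.name = name
       def __enter__(self):
           print 'enter', self.name
           return self.name          # bound to the "as" target (step 4)
       def __exit__(self, exc_type, exc_value, traceback):
           print 'exit', self.name   # step 6, even if the suite raised
           return False              # a false value: do not suppress exceptions

   with tag('A') as a, tag('B') as b:
       print 'suite', a, b

This prints "enter A", "enter B", "suite A B", "exit B", "exit A":
because the two items are treated as nested "with" statements, the
second context manager's "__exit__()" method runs first.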

See also:

  **PEP 343** - The "with" statement
     The specification, background, and examples for the Python "with"
     statement.


The "yield" statement
*********************

   yield_stmt ::= yield_expression

The "yield" statement is only used when defining a generator function,
and is only used in the body of the generator function. Using a
"yield" statement in a function definition is sufficient to cause that
definition to create a generator function instead of a normal
function.

When a generator function is called, it returns an iterator known as a
generator iterator, or more commonly, a generator.  The body of the
generator function is executed by calling the generator's "next()"
method repeatedly until it raises an exception.

When a "yield" statement is executed, the state of the generator is
frozen and the value of "expression_list" is returned to "next()"'s
caller.  By "frozen" we mean that all local state is retained,
including the current bindings of local variables, the instruction
pointer, and the internal evaluation stack: enough information is
saved so that the next time "next()" is invoked, the function can
proceed exactly as if the "yield" statement were just another external
call.
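
For example (an informal sketch), each call to "next()" runs the body
up to the next "yield" and then freezes it there:

   >>> def counter(n):
   ...     i = 0
   ...     while i < n:
   ...         yield i            # execution is frozen here between next() calls
   ...         i += 1
   ...
   >>> g = counter(3)
   >>> g.next()
   0
   >>> g.next()
   1
   >>> list(g)                    # one more value is produced, then StopIteration
   [2]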

As of Python 2.5, the "yield" statement is allowed in the "try"
clause of a "try" ... "finally" construct.  If the generator is
not resumed before it is finalized (by reaching a zero reference count
or by being garbage collected), the generator-iterator's "close()"
method will be called, allowing any pending "finally" clauses to
execute.
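
A small illustration of that behaviour (not part of the reference
text):

   >>> def reader():
   ...     try:
   ...         yield 'data'
   ...     finally:
   ...         print 'cleanup'    # pending "finally" clause
   ...
   >>> g = reader()
   >>> g.next()
   'data'
   >>> g.close()                  # finalizes the generator; "finally" runs now
   cleanup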

For full details of "yield" semantics, refer to the Yield expressions
section.

Note: In Python 2.2, the "yield" statement was only allowed when the
  "generators" feature had been enabled.  This "__future__" import
  statement was used to enable the feature:

     from __future__ import generators

See also:

  **PEP 255** - Simple Generators
     The proposal for adding generators and the "yield" statement to
     Python.

  **PEP 342** - Coroutines via Enhanced Generators
     The proposal that, among other generator enhancements, proposed
     allowing "yield" to appear inside a "try" ... "finally" block.