psycopg2.extensions – Extensions to the DB API

The module contains a few objects and functions extending the minimum set of functionalities defined by the DB API 2.0.

Class definitions

Instances of these classes are usually returned by factory functions or attributes. Their definitions are exposed here to allow subclassing, introspection etc.

class psycopg2.extensions.connection(dsn, async=False)

The class usually returned by the connect() function. It is exposed by the extensions module in order to allow subclassing to extend its behaviour: the subclass should be passed to the connect() function using the connection_factory parameter. See also Connection and cursor factories.

For a complete description of the class, see connection.

Changed in version 2.7: async_ can be used as alias for async.

class psycopg2.extensions.cursor(conn, name=None)

The class usually returned by the connection.cursor() method. It is exposed by the extensions module in order to allow subclassing to extend its behaviour: the subclass should be passed to the cursor() method using the cursor_factory parameter. See also Connection and cursor factories.

For a complete description of the class, see cursor.

class psycopg2.extensions.lobject(conn[, oid[, mode[, new_oid[, new_file]]]])

Wrapper for a PostgreSQL large object. See Access to PostgreSQL large objects for an overview.

The class can be subclassed: see connection.lobject() to learn how to specify a lobject subclass.

Added in version 2.0.8.

oid

Database OID of the object.

mode

The mode in which the large object was opened. See connection.lobject() for a description of the available modes.

read(bytes=-1)

Read a chunk of data from the current file position. If bytes is -1 (the default), read all the remaining data.

The result is a Unicode string (decoded according to connection.encoding) if the file was opened in t mode, a bytes string for b mode.

Changed in version 2.4: added Unicode support.

write(str)

Write a string to the large object. Return the number of bytes written. Unicode strings are encoded using connection.encoding before writing.

Changed in version 2.4: added Unicode support.

export(file_name)

Export the large object content to the file system.

The method uses the efficient lo_export() libpq function.
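Putting the methods above together, here is a hedged sketch of a large object round trip; it assumes an open connection to a server supporting large objects, and the export file name is a made-up example:

```python
def lobject_roundtrip(conn):
    """Write, re-read and export a large object (sketch: requires a live
    connection; the export file name is a hypothetical example)."""
    lobj = conn.lobject(0, "wb")        # oid=0 creates a new large object
    oid = lobj.oid
    lobj.write(b"some binary data")
    lobj.close()
    conn.commit()                       # large objects are transactional

    lobj = conn.lobject(oid, "rb")
    data = lobj.read()                  # all remaining data, as bytes
    lobj.export("/tmp/lobj.dump")       # efficient client-side lo_export()
    lobj.close()
    return data
```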

seek(offset, whence=0)

Set the lobject current position.

Changed in version 2.6: added support for offset > 2GB.

tell()

Return the lobject current position.

Added in version 2.2.

Changed in version 2.6: added support for return value > 2GB.

truncate(len=0)

Truncate the lobject to the given size.

The method will only be available if Psycopg has been built against libpq from PostgreSQL 8.3 or later and can only be used with PostgreSQL servers running these versions. It uses the lo_truncate() libpq function.

Added in version 2.2.

Changed in version 2.6: added support for len > 2GB.

Warning

If Psycopg is built with lo_truncate() support or with the 64-bit API support (resp. from PostgreSQL versions 8.3 and 9.3) but at runtime an older version of the dynamic library is found, the psycopg2 module will fail to import. See the lo_truncate FAQ about the problem.

close()

Close the object.

closed

Boolean attribute specifying if the object is closed.

unlink()

Close the object and remove it from the database.

class psycopg2.extensions.Column(*args, **kwargs)

Description of one result column, exposed as items of the cursor.description sequence.

Added in version 2.8: in previous versions the description attribute was a sequence of simple tuples or namedtuples.

name

The name of the column returned.

type_code

The PostgreSQL OID of the column. You can use the pg_type system table to get more information about the type. This is the value used by Psycopg to decide which Python type to use to represent the value. See also Type casting of SQL types into Python objects.

display_size

Supposed to be the actual length of the column in bytes. Obtaining this value is computationally intensive, so it is always None.

Changed in version 2.8: it was previously possible to obtain this value using a compiler flag at build time.

internal_size

The size in bytes of the column's type on the server. Set to a negative value for variable-size types. See also PQfsize.

precision

Total number of significant digits in columns of type NUMERIC. None for other types.

scale

Count of decimal digits in the fractional part in columns of type NUMERIC. None for other types.

null_ok

Always None as this value is not easy to retrieve from the libpq.

table_oid

The oid of the table from which the column was fetched (matching pg_class.oid). None if the column is not a simple reference to a table column. See also PQftable.

Added in version 2.8.

table_column

The number of the column (within its table) making up the result (matching pg_attribute.attnum, so it will start from 1). None if the column is not a simple reference to a table column. See also PQftablecol.

Added in version 2.8.

SQL adaptation protocol objects

Psycopg provides a flexible system to adapt Python objects to the SQL syntax (inspired by PEP 246), allowing serialization in PostgreSQL. See Adapting new Python types to SQL syntax for a detailed description. The following objects deal with Python objects adaptation:

psycopg2.extensions.adapt(obj)

Return the SQL representation of obj as an ISQLQuote. Raise a ProgrammingError if it is not known how to adapt the object. In order to allow new objects to be adapted, register a new adapter for them using the register_adapter() function.

The function is the entry point of the adaptation mechanism: it can be used to write adapters for complex objects by recursively calling adapt() on its components.

psycopg2.extensions.register_adapter(class, adapter)

Register a new adapter for the objects of class class.

adapter should be a function taking a single argument (the object to adapt) and returning an object conforming to the ISQLQuote protocol (e.g. exposing a getquoted() method). The AsIs wrapper is often useful for this task.

Once an object is registered, it can be safely used in SQL queries and by the adapt() function.

class psycopg2.extensions.ISQLQuote(wrapped_object)

Represents the SQL adaptation protocol. Objects conforming to this protocol should implement a getquoted() and optionally a prepare() method.

Adapters may subclass ISQLQuote, but it is not necessary: it is enough to expose a getquoted() method to be conforming.

_wrapped

The wrapped object passed to the constructor.

getquoted()

Subclasses or other conforming objects should return a valid SQL string representing the wrapped object. In Python 3 the SQL must be returned in a bytes object. The ISQLQuote implementation does nothing.

prepare(conn)

Prepare the adapter for a connection. The method is optional: if implemented, it will be invoked before getquoted() with the connection to adapt for as argument.

A conforming object can implement this method if its SQL representation depends on any server parameter, such as the server version or the standard_conforming_strings setting. Container objects may store the connection and use it to recursively prepare contained objects: see the implementation for psycopg2.extensions.SQL_IN for a simple example.

class psycopg2.extensions.AsIs(object)

Adapter conforming to the ISQLQuote protocol, useful for objects whose string representation is already valid as SQL representation.

getquoted()

Return the str() conversion of the wrapped object.

>>> AsIs(42).getquoted()
'42'

class psycopg2.extensions.QuotedString(str)

Adapter conforming to the ISQLQuote protocol for string-like objects.

getquoted()

Return the string enclosed in single quotes. Any single quote appearing in the string is escaped by doubling it according to SQL string constants syntax. Backslashes are escaped too.

>>> QuotedString(r"O'Reilly").getquoted()
"'O''Reilly'"

class psycopg2.extensions.Binary(str)

Adapter conforming to the ISQLQuote protocol for binary objects.

getquoted()

Return the string enclosed in single quotes. It performs the same escaping as the QuotedString adapter, and it also knows how to escape non-printable chars.

>>> Binary("\x00\x08\x0F").getquoted()
"'\\\\000\\\\010\\\\017'"

Changed in version 2.0.14: previously the adapter was not exposed by the extensions module. In older versions it can be imported from the implementation module psycopg2._psycopg.

class psycopg2.extensions.Boolean
class psycopg2.extensions.Float
class psycopg2.extensions.SQL_IN

Specialized adapters for builtin objects.

class psycopg2.extensions.DateFromPy
class psycopg2.extensions.TimeFromPy
class psycopg2.extensions.TimestampFromPy
class psycopg2.extensions.IntervalFromPy

Specialized adapters for Python datetime objects.

psycopg2.extensions.adapters

Dictionary of the currently registered object adapters. Use register_adapter() to add an adapter for a new type.

Database types casting functions

These functions are used to manipulate type casters to convert from PostgreSQL types to Python objects. See Type casting of SQL types into Python objects for details.

psycopg2.extensions.new_type(oids, name, adapter)

Create a new type caster to convert from a PostgreSQL type to a Python object. The object created must be registered using register_type() to be used.

Parameters:
  • oids – tuple of OIDs of the PostgreSQL type to convert.

  • name – the name of the new type adapter.

  • adapter – the adaptation function.

The object OID can be read from the cursor.description attribute or by querying the PostgreSQL catalog.

adapter should have signature fun(value, cur) where value is the string representation returned by PostgreSQL and cur is the cursor from which data are read. In case of NULL, value will be None. The adapter should return the converted object.

See Type casting of SQL types into Python objects for a usage example.

psycopg2.extensions.new_array_type(oids, name, base_caster)

Create a new type caster to convert from a PostgreSQL array type to a list of Python objects. The object created must be registered using register_type() to be used.

Parameters:
  • oids – tuple of OIDs of the PostgreSQL type to convert. It should probably contain the oid of the array type (e.g. the typarray field in the pg_type table).

  • name – the name of the new type adapter.

  • base_caster – a Psycopg typecaster, e.g. created using the new_type() function. The caster should be able to parse a single item of the desired type.

Added in version 2.4.3.

Note

The function can be used to create a generic array typecaster, returning a list of strings: just use psycopg2.STRING as the base typecaster. For instance, if you want to receive an array of macaddr from the database, each address represented as a string, you can use:

# select typarray from pg_type where typname = 'macaddr' -> 1040
psycopg2.extensions.register_type(
    psycopg2.extensions.new_array_type(
        (1040,), 'MACADDR[]', psycopg2.STRING))

psycopg2.extensions.register_type(obj[, scope])

Register a type caster created using new_type().

If scope is specified, it should be a connection or a cursor: the type caster will be effective only for the specified object. Otherwise it will be registered globally.

psycopg2.extensions.string_types

The global register of type casters.

psycopg2.extensions.encodings

Mapping from PostgreSQL encoding to Python encoding names. Used by Psycopg when adapting or casting unicode strings. See Unicode handling.

Additional exceptions

The module exports a few exceptions in addition to the standard ones defined by the DB API 2.0.

Note

From psycopg 2.8 these error classes are also exposed by the psycopg2.errors module.

exception psycopg2.extensions.QueryCanceledError

(subclasses OperationalError)

Error related to SQL query cancellation. It can be trapped specifically to detect a timeout.

Added in version 2.0.7.

exception psycopg2.extensions.TransactionRollbackError

(subclasses OperationalError)

Error causing transaction rollback (deadlocks, serialization failures, etc). It can be trapped specifically to detect a deadlock.

Added in version 2.0.7.

Coroutines support functions

These functions are used to set and retrieve the callback function for cooperation with coroutine libraries.

Added in version 2.2.

Other functions

psycopg2.extensions.libpq_version()

Return the version number of the loaded libpq dynamic library as an integer, in the same format as server_version.

Raise NotSupportedError if the psycopg2 module was compiled with a libpq version older than 9.1 (which can be detected by the __libpq_version__ constant).

Added in version 2.7.

See also

libpq docs for PQlibVersion().

psycopg2.extensions.make_dsn(dsn=None, **kwargs)

Create a valid connection string from arguments.

Put together the arguments in kwargs into a connection string. If dsn is specified too, merge the arguments from both sources. If the same argument name is specified in both sources, the kwargs value overrides the dsn value.

The input arguments are validated: the output should always be a valid connection string (as far as parse_dsn() is concerned); otherwise a ProgrammingError is raised.

Example:

>>> from psycopg2.extensions import make_dsn
>>> make_dsn('dbname=foo host=example.com', password="s3cr3t")
'host=example.com password=s3cr3t dbname=foo'

Added in version 2.7.

psycopg2.extensions.parse_dsn(dsn)

Parse connection string into a dictionary of keywords and values.

Parsing is delegated to the libpq: different versions of the client library may support different formats or parameters (for example, connection URIs are only supported from libpq 9.2). Raise ProgrammingError if the dsn is not valid.

Example:

>>> from psycopg2.extensions import parse_dsn
>>> parse_dsn('dbname=test user=postgres password=secret')
{'password': 'secret', 'user': 'postgres', 'dbname': 'test'}
>>> parse_dsn("postgresql://someone@example.com/somedb?connect_timeout=10")
{'host': 'example.com', 'user': 'someone', 'dbname': 'somedb', 'connect_timeout': '10'}

Added in version 2.7.

See also

libpq docs for PQconninfoParse().

psycopg2.extensions.quote_ident(str, scope)

Return quoted identifier according to PostgreSQL quoting rules.

The scope must be a connection or a cursor; the underlying connection encoding is used for any necessary character conversion.

Added in version 2.7.

See also

libpq docs for PQescapeIdentifier()

psycopg2.extensions.encrypt_password(password, user, scope=None, algorithm=None)

Return the encrypted form of a PostgreSQL password.

Parameters:
  • password – the cleartext password to encrypt

  • user – the name of the user to use the password for

  • scope (connection or cursor) – the scope to encrypt the password into; if algorithm is md5 it can be None

  • algorithm – the password encryption algorithm to use

The algorithm md5 is always supported. Other algorithms are only supported if the client libpq version is at least 10 and may require a compatible server version: check the PostgreSQL encryption documentation to know the algorithms supported by your server.

Using None as algorithm will result in querying the server to know the current server password encryption setting, which is a blocking operation: query the server separately and specify a value for algorithm if you want to maintain a non-blocking behaviour.

Added in version 2.8.

See also

PostgreSQL docs for the password_encryption setting, libpq PQencryptPasswordConn(), PQencryptPassword() functions.

Isolation level constants

Psycopg2 connection objects hold information about the PostgreSQL transaction isolation level. By default Psycopg doesn’t change the default configuration of the server (ISOLATION_LEVEL_DEFAULT); the default for PostgreSQL servers is typically READ COMMITTED, but this may be changed in the server configuration files. A different isolation level can be set through the set_isolation_level() or set_session() methods. The level can be set to one of the following constants:

psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT

No transaction is started when commands are executed and no commit() or rollback() is required. Some PostgreSQL commands such as CREATE DATABASE or VACUUM can’t run inside a transaction: to run such a command use:

>>> conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)

See also Transactions control.

psycopg2.extensions.ISOLATION_LEVEL_READ_UNCOMMITTED

The READ UNCOMMITTED isolation level is defined in the SQL standard but not available in the MVCC model of PostgreSQL: it is replaced by the stricter READ COMMITTED.

psycopg2.extensions.ISOLATION_LEVEL_READ_COMMITTED

This is usually the default PostgreSQL value, but a different default may be set in the database configuration.

A new transaction is started at the first execute() command on a cursor and at each new execute() after a commit() or a rollback(). The transaction runs in the PostgreSQL READ COMMITTED isolation level: a SELECT query sees only data committed before the query began; it never sees either uncommitted data or changes committed during query execution by concurrent transactions.

See also

Read Committed Isolation Level in PostgreSQL documentation.

psycopg2.extensions.ISOLATION_LEVEL_REPEATABLE_READ

As in ISOLATION_LEVEL_READ_COMMITTED, a new transaction is started at the first execute() command. Transactions run at the REPEATABLE READ isolation level: all the queries in a transaction see a snapshot as of the start of the transaction, not as of the start of the current query within the transaction. However, applications using this level must be prepared to retry transactions due to serialization failures.

While this level provides a guarantee that each transaction sees a completely stable view of the database, this view will not necessarily always be consistent with some serial (one at a time) execution of concurrent transactions of the same level.

Changed in version 2.4.2: the value was previously an alias for ISOLATION_LEVEL_SERIALIZABLE. The two levels are distinct since PostgreSQL 9.1.

See also

Repeatable Read Isolation Level in PostgreSQL documentation.

psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE

As in ISOLATION_LEVEL_READ_COMMITTED, a new transaction is started at the first execute() command. Transactions run at the SERIALIZABLE isolation level. This is the strictest transaction isolation level, equivalent to having the transactions executed serially rather than concurrently. However, applications using this level must be prepared to retry transactions due to serialization failures.

Starting from PostgreSQL 9.1, this mode monitors for conditions which could make execution of a concurrent set of serializable transactions behave in a manner inconsistent with all possible serial (one at a time) executions of those transactions. In previous versions the behaviour was the same as the REPEATABLE READ isolation level.

See also

Serializable Isolation Level in PostgreSQL documentation.

psycopg2.extensions.ISOLATION_LEVEL_DEFAULT

A new transaction is started at the first execute() command, but the isolation level is not explicitly selected by Psycopg: the server will use whatever level is defined in its configuration or by statements executed within the session outside Psycopg’s control. If you want to know what the value is, you can use a query such as SHOW transaction_isolation.

Added in version 2.7.

Transaction status constants

These values represent the possible status of a transaction: the current value can be read using the connection.info.transaction_status property.

psycopg2.extensions.TRANSACTION_STATUS_IDLE

The session is idle and there is no current transaction.

psycopg2.extensions.TRANSACTION_STATUS_ACTIVE

A command is currently in progress.

psycopg2.extensions.TRANSACTION_STATUS_INTRANS

The session is idle in a valid transaction block.

psycopg2.extensions.TRANSACTION_STATUS_INERROR

The session is idle in a failed transaction block.

psycopg2.extensions.TRANSACTION_STATUS_UNKNOWN

Reported if the connection with the server is bad.

Connection status constants

These values represent the possible status of a connection: the current value can be read from the status attribute.

It is possible to find the connection in states other than the ones shown below, but these are the only states in which a working connection is expected to be found during the execution of regular Python client code: other states are for internal usage and Python code should not rely on them.

psycopg2.extensions.STATUS_READY

Connection established. No transaction in progress.

psycopg2.extensions.STATUS_BEGIN

Connection established. A transaction is currently in progress.

psycopg2.extensions.STATUS_IN_TRANSACTION

An alias for STATUS_BEGIN.

psycopg2.extensions.STATUS_PREPARED

The connection has been prepared for the second phase in a two-phase commit transaction. The connection can’t be used to send commands to the database until the transaction is finished with tpc_commit() or tpc_rollback().

Added in version 2.3.

Poll constants

Added in version 2.2.

These values can be returned by connection.poll() during asynchronous connection and communication. They match the values in the libpq enum PostgresPollingStatusType. See Asynchronous support and Support for coroutine libraries.

psycopg2.extensions.POLL_OK

The data being read is available, or the file descriptor is ready for writing: reading or writing will not block.

psycopg2.extensions.POLL_READ

Some data is being read from the backend, but it is not available yet on the client and reading would block. Upon receiving this value, the client should wait for the connection file descriptor to be ready for reading. For example:

select.select([conn.fileno()], [], [])

psycopg2.extensions.POLL_WRITE

Some data is being sent to the backend but the connection file descriptor can’t currently accept new data. Upon receiving this value, the client should wait for the connection file descriptor to be ready for writing. For example:

select.select([], [conn.fileno()], [])

psycopg2.extensions.POLL_ERROR

There was a problem during connection polling. This value should actually never be returned: in case of poll error usually an exception containing the relevant details is raised.

Additional database types

The extensions module includes typecasters for many standard PostgreSQL types. These objects allow the conversion of returned data into Python objects. All the typecasters are automatically registered, except UNICODE and UNICODEARRAY: you can register them using register_type() in order to receive Unicode objects instead of strings from the database. See Unicode handling for details.

psycopg2.extensions.BOOLEAN
psycopg2.extensions.BYTES
psycopg2.extensions.DATE
psycopg2.extensions.DECIMAL
psycopg2.extensions.FLOAT
psycopg2.extensions.INTEGER
psycopg2.extensions.INTERVAL
psycopg2.extensions.LONGINTEGER
psycopg2.extensions.TIME
psycopg2.extensions.UNICODE

Typecasters for basic types. Note that a few other ones (BINARY, DATETIME, NUMBER, ROWID, STRING) are exposed by the psycopg2 module for DB API 2.0 compliance.

psycopg2.extensions.BINARYARRAY
psycopg2.extensions.BOOLEANARRAY
psycopg2.extensions.BYTESARRAY
psycopg2.extensions.DATEARRAY
psycopg2.extensions.DATETIMEARRAY
psycopg2.extensions.DECIMALARRAY
psycopg2.extensions.FLOATARRAY
psycopg2.extensions.INTEGERARRAY
psycopg2.extensions.INTERVALARRAY
psycopg2.extensions.LONGINTEGERARRAY
psycopg2.extensions.ROWIDARRAY
psycopg2.extensions.STRINGARRAY
psycopg2.extensions.TIMEARRAY
psycopg2.extensions.UNICODEARRAY

Typecasters to convert arrays of SQL types into Python lists.

psycopg2.extensions.PYDATE
psycopg2.extensions.PYDATETIME
psycopg2.extensions.PYDATETIMETZ
psycopg2.extensions.PYINTERVAL
psycopg2.extensions.PYTIME
psycopg2.extensions.PYDATEARRAY
psycopg2.extensions.PYDATETIMEARRAY
psycopg2.extensions.PYDATETIMETZARRAY
psycopg2.extensions.PYINTERVALARRAY
psycopg2.extensions.PYTIMEARRAY

Typecasters to convert time-related data types to Python datetime objects.

Changed in version 2.2: previously the DECIMAL typecaster and the specific time-related typecasters (PY* and MX*) were not exposed by the extensions module. In older versions they can be imported from the implementation module psycopg2._psycopg.

Added in version 2.7.2: the *DATETIMETZ* objects.

Added in version 2.8: the BYTES and BYTESARRAY objects.