Welcome to pyDAL’s API documentation!

Contents:

Indices and tables

Subpackages

pydal.adapters package

Submodules

pydal.adapters.base module

class pydal.adapters.base.AdapterMeta[source]

Bases: type

Metaclass to support manipulation of adapter classes.

At the moment it is used to intercept the entity_quoting argument passed to DAL.
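For example, identifier quoting can be requested when the connection is created (a minimal sketch; the URI and table are placeholders):

from pydal import DAL, Field

# entity_quoting=True asks the adapter to quote table and field identifiers
# in the generated SQL; AdapterMeta intercepts this argument.
db = DAL('sqlite://storage.sqlite', entity_quoting=True)
db.define_table('person', Field('name'))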

class pydal.adapters.base.BaseAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.connection.ConnectionPool

ADD(first, second)[source]
AGGREGATE(first, what)[source]
ALLOW_NULL()[source]
AND(first, second)[source]
AS(first, second)[source]
BELONGS(first, second)[source]
CASE(query, t, f)[source]
CAST(first, second)[source]
COALESCE(first, second)[source]
COALESCE_ZERO(first)[source]
COMMA(first, second)[source]
CONCAT(*items)[source]
CONTAINS(first, second, case_sensitive=True)[source]
COUNT(first, distinct=None)[source]
DIV(first, second)[source]
ENDSWITH(first, second)[source]
EPOCH(first)[source]
EQ(first, second=None)[source]
EXTRACT(first, what)[source]
FALSE = 'F'
FALSE_exp = '0'
GE(first, second=None)[source]
GT(first, second=None)[source]
ILIKE(first, second)[source]

Case-insensitive LIKE operator

INVERT(first)[source]
JOIN()[source]
LE(first, second=None)[source]
LEFT_JOIN()[source]
LENGTH(first)[source]
LIKE(first, second)[source]

Case-sensitive LIKE operator
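At the query level these usually surface through the like field method (a minimal sketch, assuming a person table with a name field):

# case-sensitive match, rendered with LIKE
rows = db(db.person.name.like('J%')).select()

# case-insensitive match, rendered with ILIKE (or an equivalent emulation)
rows = db(db.person.name.like('j%', case_sensitive=False)).select()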

LOWER(first)[source]
LT(first, second=None)[source]
MOD(first, second)[source]
MUL(first, second)[source]
NE(first, second=None)[source]
NOT(first)[source]
NOT_NULL(default, field_type)[source]
ON(first, second)[source]
OR(first, second)[source]
PRIMARY_KEY(key)[source]
QUOTE_TEMPLATE = '"%s"'
RANDOM()[source]
RAW(first)[source]
REGEXP(first, second)[source]

Regular expression operator

REPLACE(first, (second, third))[source]
STARTSWITH(first, second)[source]
SUB(first, second)[source]
SUBSTRING(field, parameters)[source]
TRUE = 'T'
TRUE_exp = '1'
T_SEP = ' '
UPPER(first)[source]
adapt(obj)[source]
alias(table, alias)[source]

Given a table object, returns a new table object with the given alias name.
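Application code normally reaches this through Table.with_alias, for example in self-joins (a minimal sketch; the person table and its boss_id field are assumptions):

# the same table under two names
boss = db.person.with_alias('boss')
rows = db(db.person.boss_id == boss.id).select(db.person.name, boss.name)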

build_parsemap()[source]
bulk_insert(table, items)[source]
can_select_for_update = True
close_connection()[source]
commit()[source]
commit_on_alter_table = False
commit_prepared(key)[source]
common_filter(query, tablenames)[source]
concat_add(tablename)[source]
connection = None
connector(*args, **kwargs)
constraint_name(table, fieldname)[source]
count(query, distinct=None)[source]
create_sequence_and_triggers(query, table, **args)[source]
create_table(table, migrate=True, fake_migrate=False, polymodel=None)[source]
dbpath = None
delete(tablename, query)[source]
distributed_transaction_begin(key)[source]
driver = None
driver_auto_json = []
driver_name = None
drivers = ()
drop(table, mode='')[source]
execute(*a, **b)[source]
expand(expression, field_type=None, colnames=False)[source]
expand_all(fields, tablenames)[source]
file_close(fileobj)[source]
file_delete(filename)[source]
file_exists(filename)[source]
file_open(filename, mode='rb', lock=True)[source]
find_driver(adapter_args, uri=None)[source]
folder = None
get_table(query)[source]
id_query(table)[source]
insert(table, fields)[source]
isOperationalError(exception)[source]
isProgrammingError(exception)[source]
is_numerical_type(ftype)[source]
lastrowid(table)[source]
log(message, table=None)[source]

Logs migrations

Changes are not logged if no logfile is specified. The logfile defaults to sql.log.

log_execute(*a, **b)[source]
migrate_table(table, sql_fields, sql_fields_old, sql_fields_aux, logfile, fake_migrate=False)[source]
parse(rows, fields, colnames, blob_decode=True, cacheable=False)[source]
parse_blob(value, field_type)[source]
parse_boolean(value, field_type)[source]
parse_date(value, field_type)[source]
parse_datetime(value, field_type)[source]
parse_decimal(value, field_type)[source]
parse_double(value, field_type)[source]
parse_id(value, field_type)[source]
parse_integer(value, field_type)[source]
parse_json(value, field_type)[source]
parse_list_integers(value, field_type)[source]
parse_list_references(value, field_type)[source]
parse_list_strings(value, field_type)[source]
parse_reference(value, field_type)[source]
parse_time(value, field_type)[source]
parse_value(value, field_type, blob_decode=True)[source]
prepare(key)[source]
represent(obj, fieldtype)[source]
represent_exceptions(obj, fieldtype)[source]
rollback()[source]
rollback_prepared(key)[source]
rowslice(rows, minimum=0, maximum=None)[source]

By default this function does nothing; override it when the database does not support slicing.

save_dbt(table, sql_fields_current)[source]
select(query, fields, attributes)[source]

Always returns a Rows object, possibly empty.

select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
sequence_name(tablename)[source]
smart_adapt(obj)[source]
sqlsafe_field(fieldname)[source]
sqlsafe_table(tablename, ot=None)[source]
support_distributed_transaction = False
table_alias(tbl)[source]
tables(*queries)[source]
trigger_name(tablename)[source]
truncate(table, mode=' ')[source]
types = {'reference': 'INTEGER REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'TEXT', 'float': 'DOUBLE', 'datetime': 'TIMESTAMP', 'bigint': 'INTEGER', 'id': 'INTEGER PRIMARY KEY AUTOINCREMENT', 'reference FK': ', CONSTRAINT "FK_%(constraint_name)s" FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'json': 'TEXT', 'big-id': 'INTEGER PRIMARY KEY AUTOINCREMENT', 'blob': 'BLOB', 'big-reference': 'INTEGER REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'CHAR(%(length)s)', 'list:string': 'TEXT', 'date': 'DATE', 'integer': 'INTEGER', 'password': 'CHAR(%(length)s)', 'list:integer': 'TEXT', 'double': 'DOUBLE', 'decimal': 'DOUBLE', 'upload': 'CHAR(%(length)s)', 'list:reference': 'TEXT', 'boolean': 'CHAR(1)', 'time': 'TIME'}
update(tablename, query, fields)[source]
uploads_in_blob = False
varquote(name)[source]
class pydal.adapters.base.NoSQLAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

ADD(first, second)[source]
AGGREGATE(first, what)[source]
AND(first, second)[source]
AS(first, second)[source]
DIV(first, second)[source]
ENDSWITH(first, second=None)[source]
EXTRACT(first, what)[source]
ILIKE(first, second)[source]
LEFT_JOIN()[source]
LENGTH(first)[source]
LOWER(first)[source]
MUL(first, second)[source]
ON(first, second)[source]
OR(first, second)[source]
PRIMARY_KEY(key)[source]
QUOTE_TEMPLATE = '%s'
RANDOM()[source]
STARTSWITH(first, second=None)[source]
SUB(first, second)[source]
SUBSTRING(field, parameters)[source]
UPPER(first)[source]
alias(table, alias)[source]
can_select_for_update = False
close_connection()[source]

Remember: many NoSQL databases do not support transactions.

commit()[source]

Remember: many NoSQL databases do not support transactions.

commit_prepared(key)[source]
concat_add(table)[source]
constraint_name(table, fieldname)[source]
create_sequence_and_triggers(query, table, **args)[source]
distributed_transaction_begin(key)[source]
drop(table, mode)[source]
execute(*a, **b)[source]
id_query(table)[source]
lastrowid(table)[source]
log_execute(*a, **b)[source]
migrate_table(*a, **b)[source]
prepare(key)[source]
represent(obj, fieldtype)[source]
represent_exceptions(obj, fieldtype)[source]
rollback()[source]

Remember: many NoSQL databases do not support transactions.

rollback_prepared(key)[source]
rowslice(rows, minimum=0, maximum=None)[source]
static to_unicode(obj)[source]

pydal.adapters.couchdb module

class pydal.adapters.couchdb.CouchDBAdapter(db, uri='couchdb://127.0.0.1:5984', pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.NoSQLAdapter

AND(first, second)[source]
COMMA(first, second)[source]
EQ(first, second)[source]
NE(first, second)[source]
OR(first, second)[source]
count(query, distinct=None)[source]
create_table(table, migrate=True, fake_migrate=False, polymodel=None)[source]
delete(tablename, query)[source]
drivers = ('couchdb',)
expand(expression, field_type=None)[source]
file_close(fileobj)[source]
file_exists(filename)[source]
file_open(filename, mode='rb', lock=True)[source]
insert(table, fields)[source]
represent(obj, fieldtype)[source]
select(query, fields, attributes)[source]
types = {'string': <type 'str'>, 'reference': <type 'long'>, 'text': <type 'str'>, 'id': <type 'long'>, 'float': <type 'float'>, 'bigint': <type 'long'>, 'upload': <type 'str'>, 'datetime': <type 'datetime.datetime'>, 'json': <type 'str'>, 'boolean': <type 'bool'>, 'blob': <type 'str'>, 'list:string': <type 'list'>, 'double': <type 'float'>, 'date': <type 'datetime.date'>, 'integer': <type 'long'>, 'password': <type 'str'>, 'list:integer': <type 'list'>, 'time': <type 'datetime.time'>, 'list:reference': <type 'list'>}
update(tablename, query, fields)[source]
uploads_in_blob = True

pydal.adapters.cubrid module

class pydal.adapters.cubrid.CubridAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.mysql.MySQLAdapter

REGEX_URI = <_sre.SRE_Pattern object at 0x1a7c890>
after_connection()[source]
drivers = ('cubriddb',)

pydal.adapters.db2 module

class pydal.adapters.db2.DB2Adapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

LEFT_JOIN()[source]
RANDOM()[source]
drivers = ('ibm_db_dbi', 'pyodbc')
execute(command, placeholders=None)[source]
lastrowid(table)[source]
represent_exceptions(obj, fieldtype)[source]
rowslice(rows, minimum=0, maximum=None)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
types = {'reference': 'INT, FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'CLOB', 'float': 'REAL', 'datetime': 'TIMESTAMP', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY NOT NULL', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'json': 'CLOB', 'big-id': 'BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY NOT NULL', 'blob': 'BLOB', 'big-reference': 'BIGINT, FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'CLOB', 'date': 'DATE', 'integer': 'INT', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'CLOB', 'double': 'DOUBLE', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'CLOB', 'boolean': 'CHAR(1)', 'time': 'TIME'}

pydal.adapters.firebird module

class pydal.adapters.firebird.FireBirdAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

CONTAINS(first, second, case_sensitive=False)[source]
EPOCH(first)[source]
LENGTH(first)[source]
NOT_NULL(default, field_type)[source]
RANDOM()[source]
REGEX_URI = <_sre.SRE_Pattern object at 0x1a2f2b0>
SUBSTRING(field, parameters)[source]
commit_on_alter_table = False
create_sequence_and_triggers(query, table, **args)[source]
drivers = ('kinterbasdb', 'firebirdsql', 'fdb', 'pyodbc')
lastrowid(table)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
sequence_name(tablename)[source]
support_distributed_transaction = True
trigger_name(tablename)[source]
types = {'reference': 'INTEGER REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'BLOB SUB_TYPE 1', 'float': 'FLOAT', 'datetime': 'TIMESTAMP', 'bigint': 'BIGINT', 'id': 'INTEGER PRIMARY KEY', 'json': 'BLOB SUB_TYPE 1', 'big-id': 'BIGINT PRIMARY KEY', 'blob': 'BLOB SUB_TYPE 0', 'big-reference': 'BIGINT REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'BLOB SUB_TYPE 1', 'date': 'DATE', 'integer': 'INTEGER', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'BLOB SUB_TYPE 1', 'double': 'DOUBLE PRECISION', 'decimal': 'DECIMAL(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'BLOB SUB_TYPE 1', 'boolean': 'CHAR(1)', 'time': 'TIME'}
class pydal.adapters.firebird.FireBirdEmbeddedAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.firebird.FireBirdAdapter

REGEX_URI = <_sre.SRE_Pattern object at 0x1a8cfa0>
drivers = ('kinterbasdb', 'firebirdsql', 'fdb', 'pyodbc')

pydal.adapters.google module

pydal.adapters.imap module

class pydal.adapters.imap.IMAPAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.NoSQLAdapter

IMAP server adapter

This class is intended as an interface to email IMAP servers, allowing simple queries in the web2py DAL query syntax, so that email reading, searching and other related IMAP mail services (such as those implemented by providers like Google(r) and Yahoo!(r)) can be managed from web2py applications.

The code uses examples by Yuji Tomita from this post: http://yuji.wordpress.com/2011/06/22/python-imaplib-imap-example-with-gmail/#comment-1137 and is based on the docs for Python imaplib, Python email and the email IETF RFCs (i.e. RFC 2060 and RFC 3501).

This adapter was tested with a small set of operations against Gmail(r). Requests to other services could raise command syntax and response data issues.

It creates its table and field names “statically”, meaning that the developer should leave the table and field definitions to the DAL instance by calling the adapter’s .define_tables() method. The tables are defined with the IMAP server mailbox list information.

.define_tables() returns a dictionary mapping dal tablenames to the server mailbox names with the following structure:

{<tablename>: str <server mailbox name>}

Here is a list of supported fields:

Field        Type         Description
uid          string
answered     boolean      Flag
created      date
content      list:string  A list of dict text or html parts
to           string
cc           string
bcc          string
size         integer      The number of octets of the message (*)
deleted      boolean      Flag
draft        boolean      Flag
flagged      boolean      Flag
sender       string
recent       boolean      Flag
seen         boolean      Flag
subject      string
mime         string       The mime header declaration
email        string       The complete RFC822 message (*)
attachments  list         Each non-text part as a dict
encoding     string       The main detected encoding

(*) At the application side it is measured as the length of the RFC822 message string

WARNING: As row ids are mapped to email sequence numbers, make sure your IMAP client web2py app does not delete messages during select or update actions, to prevent updating or deleting the wrong messages. Sequence numbers change whenever the mailbox is updated. To avoid these sequence number issues, it is recommended to use uid fields in query references (although the rule of doing updates and deletes in separate actions still applies).

# This is the code recommended to start imap support
# at the app's model:

imapdb = DAL("imap://user:password@server:port", pool_size=1) # port 993 for ssl
imapdb.define_tables()

Here is an (incomplete) list of possible imap commands:

# Count today's unseen messages
# smaller than 6000 octets from the
# inbox mailbox

q = imapdb.INBOX.seen == False
q &= imapdb.INBOX.created == datetime.date.today()
q &= imapdb.INBOX.size < 6000
unread = imapdb(q).count()

# Fetch last query messages
rows = imapdb(q).select()

# it is also possible to filter query select results with limitby and
# sequences of mailbox fields

set.select(<fields sequence>, limitby=(<int>, <int>))

# Mark last query messages as seen
messages = [row.uid for row in rows]
seen = imapdb(imapdb.INBOX.uid.belongs(messages)).update(seen=True)

# Delete messages in the imap database that have mails from mr. Gumby

deleted = 0
for mailbox in imapdb.tables:
    deleted += imapdb(imapdb[mailbox].sender.contains("gumby")).delete()

# It is also possible to mark messages for deletion instead of erasing them
# directly, with set.update(deleted=True)


# This object gives access
# to the adapter's auto mailbox
# mapped names (which native
# mailbox has which table name)

imapdb.mailboxes <dict> # tablename, server native name pairs

# To retrieve a table native mailbox name use:
imapdb.<table>.mailbox

### New features v2.4.1:

# Declare mailboxes statically with tablename, name pairs
# This avoids the extra server names retrieval

imapdb.define_tables({"inbox": "INBOX"})

# Selects without content/attachments/email columns will only
# fetch header and flags

imapdb(q).select(imapdb.INBOX.sender, imapdb.INBOX.subject)
AND(first, second)[source]
BELONGS(first, second)[source]
CONTAINS(first, second, case_sensitive=False)[source]
EQ(first, second)[source]
GE(first, second)[source]
GT(first, second)[source]
LE(first, second)[source]
LT(first, second)[source]
NE(first, second=None)[source]
NOT(first)[source]
OR(first, second)[source]
REGEX_URI = <_sre.SRE_Pattern object at 0x7f42375743a0>
convert_date(date, add=None, imf=False)[source]
count(query, distinct=None)[source]
create_table(*args, **kwargs)[source]
dbengine = 'imap'
define_tables(mailbox_names=None)[source]

Auto create common IMAP fields

This function creates field definitions “statically”, meaning that custom fields as in other adapters are not supported and definitions are handled on a service/mode basis (local syntax for Gmail(r), Ymail(r), etc.).

Returns a dictionary with tablename, server native mailbox name pairs.

delete(tablename, query)[source]
drivers = ('imaplib',)
encode_text(text, charset, errors='replace')[source]

Converts mail text to unicode

get_charset(message)[source]
get_last_message(tablename)[source]
get_mailboxes()[source]

Query the mail database for mailbox names

get_query_mailbox(query)[source]
get_uid_bounds(tablename)[source]
static header_represent(f, r)[source]
insert(table, fields)[source]
is_flag(flag)[source]
reconnect(f=None, cursor=True)[source]

IMAP4 Pool connection method

The IMAP connection lacks a cursor command, so a custom command should be provided as a replacement for connection pooling, to prevent uncaught remote session closing

select(query, fields, attributes)[source]

Searches and fetches records and returns web2py rows

types = {'boolean': <type 'bool'>, 'string': <type 'str'>, 'list:string': <type 'str'>, 'integer': <type 'int'>, 'date': <type 'datetime.date'>, 'text': <type 'str'>, 'blob': <type 'str'>, 'bigint': <type 'long'>, 'id': <type 'long'>, 'datetime': <type 'datetime.datetime'>}
update(tablename, query, fields)[source]
uri = None

MESSAGE is an identifier for sequence number

pydal.adapters.informix module

class pydal.adapters.informix.InformixAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

NOT_NULL(default, field_type)[source]
RANDOM()[source]
REGEX_URI = <_sre.SRE_Pattern object at 0x7f4237d06830>
drivers = ('informixdb',)
execute(command)[source]
lastrowid(table)[source]
represent_exceptions(obj, fieldtype)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
types = {'reference': 'INTEGER REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'BLOB SUB_TYPE 1', 'float': 'FLOAT', 'datetime': 'DATETIME', 'bigint': 'BIGINT', 'reference TFK': 'FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s CONSTRAINT TFK_%(table_name)s_%(field_name)s', 'id': 'SERIAL', 'reference FK': 'REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s CONSTRAINT FK_%(table_name)s_%(field_name)s', 'json': 'BLOB SUB_TYPE 1', 'big-id': 'BIGSERIAL', 'blob': 'BLOB SUB_TYPE 0', 'big-reference': 'BIGINT REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'BLOB SUB_TYPE 1', 'date': 'DATE', 'integer': 'INTEGER', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'BLOB SUB_TYPE 1', 'double': 'DOUBLE PRECISION', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'BLOB SUB_TYPE 1', 'boolean': 'CHAR(1)', 'time': 'CHAR(8)'}
class pydal.adapters.informix.InformixSEAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.informix.InformixAdapter

work in progress

rowslice(rows, minimum=0, maximum=None)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]

pydal.adapters.ingres module

class pydal.adapters.ingres.IngresAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

LEFT_JOIN()[source]
RANDOM()[source]
create_sequence_and_triggers(query, table, **args)[source]
drivers = ('pyodbc',)
lastrowid(table)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
types = {'reference': 'INT, FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'CLOB', 'float': 'FLOAT', 'datetime': 'TIMESTAMP WITHOUT TIME ZONE', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'int not null unique with default next value for ii***lineitemsequence', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'json': 'CLOB', 'big-id': 'bigint not null unique with default next value for ii***lineitemsequence', 'blob': 'BLOB', 'big-reference': 'BIGINT, FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'CLOB', 'date': 'ANSIDATE', 'integer': 'INTEGER4', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'CLOB', 'double': 'FLOAT8', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'CLOB', 'boolean': 'CHAR(1)', 'time': 'TIME WITHOUT TIME ZONE'}
class pydal.adapters.ingres.IngresUnicodeAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.ingres.IngresAdapter

drivers = ('pyodbc',)
types = {'reference': 'INTEGER4, FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'NCLOB', 'float': 'FLOAT', 'datetime': 'TIMESTAMP WITHOUT TIME ZONE', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'INTEGER4 not null unique with default next value for ii***lineitemsequence', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'json': 'NCLOB', 'big-id': 'BIGINT not null unique with default next value for ii***lineitemsequence', 'blob': 'BLOB', 'big-reference': 'BIGINT, FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'NVARCHAR(%(length)s)', 'list:string': 'NCLOB', 'date': 'ANSIDATE', 'integer': 'INTEGER4', 'password': 'NVARCHAR(%(length)s)', 'list:integer': 'NCLOB', 'double': 'FLOAT8', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'NCLOB', 'boolean': 'CHAR(1)', 'time': 'TIME WITHOUT TIME ZONE'}

pydal.adapters.mongo module

class pydal.adapters.mongo.MongoDBAdapter(db, uri='mongodb://127.0.0.1:5984/db', pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.NoSQLAdapter

ADD(first, second)[source]
AND(first, second)[source]
AS(first, second)[source]
BELONGS(first, second)[source]
COMMA(first, second)[source]
CONTAINS(first, second, case_sensitive=False)[source]
DIV(first, second)[source]
ENDSWITH(first, second)[source]
EQ(first, second=None)[source]
GE(first, second=None)[source]
GT(first, second)[source]
ILIKE(first, second)[source]
INVERT(first)[source]
LE(first, second=None)[source]
LIKE(first, second)[source]
LT(first, second=None)[source]
MOD(first, second)[source]
MUL(first, second)[source]
NE(first, second=None)[source]
NOT(first)[source]
ON(first, second)[source]
OR(first, second)[source]
STARTSWITH(first, second)[source]
SUB(first, second)[source]
bulk_insert(table, items)[source]
count(query, distinct=None, snapshot=True)[source]
create_table(table, migrate=True, fake_migrate=False, polymodel=None, isCapped=False)[source]
delete(tablename, query, safe=None)[source]
driver_auto_json = ['loads', 'dumps']
drivers = ('pymongo',)
drop(table, mode='')[source]
error_messages = {'javascript_needed': 'This must yet be replaced with javascript in order to work.'}
expand(expression, field_type=None)[source]
insert(table, fields, safe=None)[source]

The safe argument determines whether an asynchronous request or a synchronous action is performed. For safety, synchronous requests are used by default.

object_id(arg=None)[source]

Converts input to a valid MongoDB ObjectId instance

self.object_id("<random>") -> ObjectId (not unique) instance
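A minimal sketch of how this could be reached from application code (db, row and the use of the private _adapter attribute are assumptions):

adapter = db._adapter              # the MongoDBAdapter behind the connection
oid = adapter.object_id(row.id)    # map a pyDAL row id back to a bson ObjectId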

parse_id(value, field_type)[source]
parse_reference(value, field_type)[source]
represent(obj, fieldtype)[source]
select(query, fields, attributes, count=False, snapshot=False)[source]
truncate(table, mode, safe=None)[source]
types = {'string': <type 'str'>, 'reference': <type 'long'>, 'text': <type 'str'>, 'id': <type 'long'>, 'float': <type 'float'>, 'bigint': <type 'long'>, 'upload': <type 'str'>, 'datetime': <type 'datetime.datetime'>, 'json': <type 'str'>, 'boolean': <type 'bool'>, 'blob': <type 'str'>, 'list:string': <type 'list'>, 'double': <type 'float'>, 'date': <type 'datetime.date'>, 'integer': <type 'long'>, 'password': <type 'str'>, 'list:integer': <type 'list'>, 'time': <type 'datetime.time'>, 'list:reference': <type 'list'>}
update(tablename, query, fields, safe=None)[source]
uploads_in_blob = False

pydal.adapters.mssql module

class pydal.adapters.mssql.MSSQL2Adapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.mssql.MSSQLAdapter

drivers = ('pyodbc',)
execute(a)[source]
represent(obj, fieldtype)[source]
types = {'reference': 'INT, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'NTEXT', 'float': 'FLOAT', 'datetime': 'DATETIME', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'INT IDENTITY PRIMARY KEY', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'json': 'NTEXT', 'big-id': 'BIGINT IDENTITY PRIMARY KEY', 'blob': 'IMAGE', 'big-reference': 'BIGINT, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'NVARCHAR(%(length)s)', 'list:string': 'NTEXT', 'date': 'DATETIME', 'integer': 'INT', 'password': 'NVARCHAR(%(length)s)', 'list:integer': 'NTEXT', 'double': 'FLOAT', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'NVARCHAR(%(length)s)', 'list:reference': 'NTEXT', 'boolean': 'CHAR(1)', 'time': 'CHAR(8)'}
class pydal.adapters.mssql.MSSQL3Adapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.mssql.MSSQLAdapter

Experimental support for pagination in MSSQL

Requires MSSQL >= 2005, uses ROW_NUMBER()
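At the application level, pagination is requested with limitby; this adapter translates it into the ROW_NUMBER() form (a minimal sketch; the URI and the person table are placeholders, and the mssql4 adapter accepts the same query but emits OFFSET ... FETCH NEXT instead):

from pydal import DAL, Field

db = DAL('mssql3://web2py:password@server/web2py_test')
db.define_table('person', Field('name'))

# rows 20..29 of the ordered result set
rows = db(db.person).select(orderby=db.person.name, limitby=(20, 30))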

rowslice(rows, minimum=0, maximum=None)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
types = {'reference': 'INT NULL, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'VARCHAR(MAX)', 'float': 'FLOAT', 'datetime': 'DATETIME', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'INT IDENTITY PRIMARY KEY', 'geography': 'geography', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'big-id': 'BIGINT IDENTITY PRIMARY KEY', 'json': 'VARCHAR(MAX)', 'blob': 'IMAGE', 'big-reference': 'BIGINT NULL, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'VARCHAR(MAX)', 'date': 'DATETIME', 'integer': 'INT', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'VARCHAR(MAX)', 'geometry': 'geometry', 'double': 'FLOAT', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'VARCHAR(MAX)', 'boolean': 'BIT', 'time': 'TIME(7)'}
class pydal.adapters.mssql.MSSQL4Adapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.mssql.MSSQLAdapter

Support for “native” pagination

Requires MSSQL >= 2012, uses OFFSET ... ROWS ... FETCH NEXT ... ROWS ONLY

rowslice(rows, minimum=0, maximum=None)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
types = {'reference': 'INT NULL, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'VARCHAR(MAX)', 'float': 'FLOAT', 'datetime': 'DATETIME', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'INT IDENTITY PRIMARY KEY', 'geography': 'geography', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'big-id': 'BIGINT IDENTITY PRIMARY KEY', 'json': 'VARCHAR(MAX)', 'blob': 'IMAGE', 'big-reference': 'BIGINT NULL, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'VARCHAR(MAX)', 'date': 'DATETIME', 'integer': 'INT', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'VARCHAR(MAX)', 'geometry': 'geometry', 'double': 'FLOAT', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'VARCHAR(MAX)', 'boolean': 'BIT', 'time': 'TIME(7)'}
class pydal.adapters.mssql.MSSQLAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

AGGREGATE(first, what)[source]
ALLOW_NULL()[source]
CAST(first, second)[source]
CONCAT(*items)[source]
EPOCH(first)[source]
EXTRACT(field, what)[source]
FALSE = 0
LEFT_JOIN()[source]
PRIMARY_KEY(key)[source]
QUOTE_TEMPLATE = '"%s"'
RANDOM()[source]
REGEX_ARGPATTERN = <_sre.SRE_Pattern object at 0x7f4237af1510>
REGEX_DSN = <_sre.SRE_Pattern object at 0x7f42375c9570>
REGEX_URI = <_sre.SRE_Pattern object at 0x1a6ade0>
ST_ASTEXT(first)[source]
ST_CONTAINS(first, second)[source]
ST_DISTANCE(first, second)[source]
ST_EQUALS(first, second)[source]
ST_INTERSECTS(first, second)[source]
ST_OVERLAPS(first, second)[source]
ST_TOUCHES(first, second)[source]
ST_WITHIN(first, second)[source]
SUBSTRING(field, parameters)[source]
TRUE = 1
T_SEP = 'T'
concat_add(tablename)[source]
drivers = ('pyodbc',)
lastrowid(table)[source]
represent(obj, fieldtype)[source]
rowslice(rows, minimum=0, maximum=None)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
types = {'reference': 'INT NULL, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'TEXT', 'float': 'FLOAT', 'datetime': 'DATETIME', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'INT IDENTITY PRIMARY KEY', 'geography': 'geography', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'big-id': 'BIGINT IDENTITY PRIMARY KEY', 'json': 'TEXT', 'blob': 'IMAGE', 'big-reference': 'BIGINT NULL, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'TEXT', 'date': 'DATETIME', 'integer': 'INT', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'TEXT', 'geometry': 'geometry', 'double': 'FLOAT', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'TEXT', 'boolean': 'BIT', 'time': 'CHAR(8)'}
varquote(name)[source]
class pydal.adapters.mssql.SybaseAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.mssql.MSSQLAdapter

drivers = 'Sybase'
types = {'reference': 'INT NULL, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'TEXT', 'float': 'FLOAT', 'datetime': 'DATETIME', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'INT IDENTITY PRIMARY KEY', 'geography': 'geography', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'big-id': 'BIGINT IDENTITY PRIMARY KEY', 'json': 'TEXT', 'blob': 'IMAGE', 'big-reference': 'BIGINT NULL, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'CHAR VARYING(%(length)s)', 'list:string': 'TEXT', 'date': 'DATETIME', 'integer': 'INT', 'password': 'CHAR VARYING(%(length)s)', 'list:integer': 'TEXT', 'geometry': 'geometry', 'double': 'FLOAT', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'CHAR VARYING(%(length)s)', 'list:reference': 'TEXT', 'boolean': 'BIT', 'time': 'CHAR(8)'}
class pydal.adapters.mssql.VerticaAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.mssql.MSSQLAdapter

EXTRACT(first, what)[source]
T_SEP = ' '
drivers = ('pyodbc',)
execute(a)[source]
lastrowid(table)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
types = {'big-reference': 'BIGINT REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'reference': 'INT REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'BYTEA', 'decimal': 'DECIMAL(%(precision)s,%(scale)s)', 'float': 'FLOAT', 'bigint': 'BIGINT', 'upload': 'VARCHAR(%(length)s)', 'datetime': 'DATETIME', 'json': 'VARCHAR(%(length)s)', 'boolean': 'BOOLEAN', 'id': 'IDENTITY', 'blob': 'BYTEA', 'list:string': 'BYTEA', 'double': 'DOUBLE PRECISION', 'date': 'DATE', 'integer': 'INT', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'BYTEA', 'time': 'TIME', 'list:reference': 'BYTEA'}

pydal.adapters.mysql module

class pydal.adapters.mysql.MySQLAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

CAST(first, second)[source]
CONCAT(*items)[source]
EPOCH(first)[source]
QUOTE_TEMPLATE = '`%s`'
RANDOM()[source]
REGEXP(first, second)[source]
REGEX_URI = <_sre.SRE_Pattern object at 0x1a7c890>
SUBSTRING(field, parameters)[source]
after_connection()[source]
commit_on_alter_table = True
commit_prepared(key)[source]
distributed_transaction_begin(key)[source]
drivers = ('MySQLdb', 'pymysql', 'mysqlconnector')
lastrowid(table)[source]
prepare(key)[source]
rollback_prepared(key)[source]
support_distributed_transaction = True
types = {'reference': 'INT, INDEX %(index_name)s (%(field_name)s), FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'LONGTEXT', 'float': 'FLOAT', 'datetime': 'DATETIME', 'bigint': 'BIGINT', 'id': 'INT AUTO_INCREMENT NOT NULL', 'reference FK': ', CONSTRAINT `FK_%(constraint_name)s` FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'json': 'LONGTEXT', 'big-id': 'BIGINT AUTO_INCREMENT NOT NULL', 'blob': 'LONGBLOB', 'big-reference': 'BIGINT, INDEX %(index_name)s (%(field_name)s), FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'LONGTEXT', 'date': 'DATE', 'integer': 'INT', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'LONGTEXT', 'double': 'DOUBLE', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'LONGTEXT', 'boolean': 'CHAR(1)', 'time': 'TIME'}
varquote(name)[source]

pydal.adapters.oracle module

class pydal.adapters.oracle.OracleAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

LEFT_JOIN()[source]
NOT_NULL(default, field_type)[source]
RANDOM()[source]
REGEXP(first, second)[source]
after_connection()[source]
commit_on_alter_table = False
constraint_name(tablename, fieldname)[source]
create_sequence_and_triggers(query, table, **args)[source]
drivers = ('cx_Oracle',)
execute(command, args=None)[source]
lastrowid(table)[source]
oracle_fix = <_sre.SRE_Pattern object at 0x7f42375c4030>
represent_exceptions(obj, fieldtype)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
sqlsafe_table(tablename, ot=None)[source]
trigger_name(tablename)[source]
types = {'reference': 'NUMBER, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'CLOB', 'float': 'FLOAT', 'datetime': 'DATE', 'bigint': 'NUMBER', 'reference TFK': ' CONSTRAINT FK_%(foreign_table)s_PK FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'NUMBER PRIMARY KEY', 'reference FK': ', CONSTRAINT FK_%(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'json': 'CLOB', 'big-id': 'NUMBER PRIMARY KEY', 'blob': 'CLOB', 'big-reference': 'NUMBER, CONSTRAINT %(constraint_name)s FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR2(%(length)s)', 'list:string': 'CLOB', 'date': 'DATE', 'integer': 'INT', 'password': 'VARCHAR2(%(length)s)', 'list:integer': 'CLOB', 'double': 'BINARY_DOUBLE', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR2(%(length)s)', 'list:reference': 'CLOB', 'boolean': 'CHAR(1)', 'time': 'CHAR(8)'}

pydal.adapters.postgres module

class pydal.adapters.postgres.JDBCPostgreSQLAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.postgres.PostgreSQLAdapter

REGEX_URI = <_sre.SRE_Pattern object at 0x7f4237d06830>
after_connection()[source]
drivers = ('zxJDBC',)
class pydal.adapters.postgres.NewPostgreSQLAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.postgres.PostgreSQLAdapter

ANY(first)[source]
CONTAINS(first, second, case_sensitive=True)[source]
EQ(first, second=None)[source]
ILIKE(first, second)[source]
drivers = ('psycopg2', 'pg8000')
parse_list_integers(value, field_type)[source]
parse_list_references(value, field_type)[source]
parse_list_strings(value, field_type)[source]
represent(obj, fieldtype)[source]
types = {'reference': 'INTEGER REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'TEXT', 'float': 'FLOAT', 'datetime': 'TIMESTAMP', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT "FK_%(foreign_table)s_PK" FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'SERIAL PRIMARY KEY', 'geography': 'GEOGRAPHY', 'reference FK': ', CONSTRAINT "FK_%(constraint_name)s" FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'big-id': 'BIGSERIAL PRIMARY KEY', 'json': 'TEXT', 'blob': 'BYTEA', 'big-reference': 'BIGINT REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'TEXT[]', 'date': 'DATE', 'integer': 'INTEGER', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'BIGINT[]', 'geometry': 'GEOMETRY', 'double': 'FLOAT8', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'BIGINT[]', 'boolean': 'CHAR(1)', 'time': 'TIME'}
class pydal.adapters.postgres.PostgreSQLAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

ADD(first, second)[source]
ILIKE(first, second)[source]
LIKE(first, second)[source]
QUOTE_TEMPLATE = '"%s"'
RANDOM()[source]
REGEXP(first, second)[source]
REGEX_URI = <_sre.SRE_Pattern object at 0x1a936c0>
ST_ASGEOJSON(first, second)[source]

http://postgis.org/docs/ST_AsGeoJSON.html

ST_ASTEXT(first)[source]

http://postgis.org/docs/ST_AsText.html

ST_CONTAINS(first, second)[source]

http://postgis.org/docs/ST_Contains.html

ST_DISTANCE(first, second)[source]

http://postgis.org/docs/ST_Distance.html

ST_DWITHIN(first, (second, third))[source]

http://postgis.org/docs/ST_DWithin.html

ST_EQUALS(first, second)[source]

http://postgis.org/docs/ST_Equals.html

ST_INTERSECTS(first, second)[source]

http://postgis.org/docs/ST_Intersects.html

ST_OVERLAPS(first, second)[source]

http://postgis.org/docs/ST_Overlaps.html

ST_SIMPLIFY(first, second)[source]

http://postgis.org/docs/ST_Simplify.html

ST_SIMPLIFYPRESERVETOPOLOGY(first, second)[source]

http://postgis.org/docs/ST_SimplifyPreserveTopology.html

ST_TOUCHES(first, second)[source]

http://postgis.org/docs/ST_Touches.html

ST_WITHIN(first, second)[source]

http://postgis.org/docs/ST_Within.html

ST_X(first)[source]

http://postgis.org/docs/ST_X.html

ST_Y(first)[source]

http://postgis.org/docs/ST_Y.html
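These are reached through the spatial field methods and the geo* helpers in pydal.helpers.methods (a minimal sketch, assuming a PostGIS-enabled database; the places table and connection URI are placeholders):

from pydal import DAL, Field
from pydal.helpers.methods import geoPoint, geoPolygon

db = DAL('postgres://user:password@localhost/gis_test')
db.define_table('places', Field('name'), Field('loc', 'geometry()'))

db.places.insert(name='home', loc=geoPoint(10, 20))

# places whose geometry falls within a polygon (ST_Within)
area = geoPolygon((0, 0), (0, 50), (50, 50), (50, 0), (0, 0))
rows = db(db.places.loc.st_within(area)).select(db.places.name)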

adapt(obj)[source]
after_connection()[source]
commit_prepared(key)[source]
create_sequence_and_triggers(query, table, **args)[source]
distributed_transaction_begin(key)[source]
drivers = ('psycopg2', 'pg8000')
lastrowid(table=None)[source]
prepare(key)[source]
represent(obj, fieldtype)[source]
rollback_prepared(key)[source]
sequence_name(table)[source]
support_distributed_transaction = True
try_json()[source]
types = {'reference': 'INTEGER REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'TEXT', 'float': 'FLOAT', 'datetime': 'TIMESTAMP', 'bigint': 'BIGINT', 'reference TFK': ' CONSTRAINT "FK_%(foreign_table)s_PK" FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s) ON DELETE %(on_delete_action)s', 'id': 'SERIAL PRIMARY KEY', 'geography': 'GEOGRAPHY', 'reference FK': ', CONSTRAINT "FK_%(constraint_name)s" FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'big-id': 'BIGSERIAL PRIMARY KEY', 'json': 'TEXT', 'blob': 'BYTEA', 'big-reference': 'BIGINT REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'TEXT', 'date': 'DATE', 'integer': 'INTEGER', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'TEXT', 'geometry': 'GEOMETRY', 'double': 'FLOAT8', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'TEXT', 'boolean': 'CHAR(1)', 'time': 'TIME'}
varquote(name)[source]

pydal.adapters.sapdb module

class pydal.adapters.sapdb.SAPDBAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

REGEX_URI = <_sre.SRE_Pattern object at 0x1a936c0>
create_sequence_and_triggers(query, table, **args)[source]
drivers = ('sapdb',)
lastrowid(table)[source]
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
sequence_name(table)[source]
support_distributed_transaction = False
types = {'reference': 'INT, FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'LONG', 'float': 'FLOAT', 'datetime': 'TIMESTAMP', 'bigint': 'BIGINT', 'id': 'INT PRIMARY KEY', 'json': 'LONG', 'big-id': 'BIGINT PRIMARY KEY', 'blob': 'LONG', 'big-reference': 'BIGINT, FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'string': 'VARCHAR(%(length)s)', 'list:string': 'LONG', 'date': 'DATE', 'integer': 'INT', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'LONG', 'double': 'DOUBLE PRECISION', 'decimal': 'FIXED(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'LONG', 'boolean': 'CHAR(1)', 'time': 'TIME'}

pydal.adapters.sqlite module

class pydal.adapters.sqlite.JDBCSQLiteAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.sqlite.SQLiteAdapter

after_connection()[source]
drivers = ('zxJDBC_sqlite',)
execute(a)[source]
class pydal.adapters.sqlite.SQLiteAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

EXTRACT(field, what)[source]
REGEXP(first, second)[source]
after_connection()[source]
can_select_for_update = None
delete(tablename, query)[source]
drivers = ('sqlite2', 'sqlite3')
lastrowid(table)[source]
select(query, fields, attributes)[source]

Simulate SELECT ... FOR UPDATE with BEGIN IMMEDIATE TRANSACTION. Note that the entire database, rather than one record, is locked (it will be locked eventually anyway by the following UPDATE).
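The simulation is triggered by the for_update argument of an ordinary select (a minimal sketch; the account table is a placeholder):

from pydal import DAL, Field

db = DAL('sqlite://storage.sqlite')
db.define_table('account', Field('balance', 'integer'))

# on SQLite this issues BEGIN IMMEDIATE TRANSACTION and locks the whole database
row = db(db.account.id == 1).select(for_update=True).first()
if row:
    row.update_record(balance=row.balance + 10)
db.commit()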

static web2py_extract(lookup, s)[source]
static web2py_regexp(expression, item)[source]
class pydal.adapters.sqlite.SpatiaLiteAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, srid=4326, after_connection=None)[source]

Bases: pydal.adapters.sqlite.SQLiteAdapter

ST_ASGEOJSON(first, second)[source]
ST_ASTEXT(first)[source]
ST_CONTAINS(first, second)[source]
ST_DISTANCE(first, second)[source]
ST_EQUALS(first, second)[source]
ST_INTERSECTS(first, second)[source]
ST_OVERLAPS(first, second)[source]
ST_SIMPLIFY(first, second)[source]
ST_TOUCHES(first, second)[source]
ST_WITHIN(first, second)[source]
after_connection()[source]
drivers = ('sqlite3', 'sqlite2')
represent(obj, fieldtype)[source]
types = {'string': 'CHAR(%(length)s)', 'reference': 'INTEGER REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'text': 'TEXT', 'float': 'DOUBLE', 'datetime': 'TIMESTAMP', 'bigint': 'INTEGER', 'list:string': 'TEXT', 'date': 'DATE', 'integer': 'INTEGER', 'password': 'CHAR(%(length)s)', 'list:integer': 'TEXT', 'id': 'INTEGER PRIMARY KEY AUTOINCREMENT', 'reference FK': ', CONSTRAINT "FK_%(constraint_name)s" FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s', 'geometry': 'GEOMETRY', 'double': 'DOUBLE', 'decimal': 'DOUBLE', 'big-id': 'INTEGER PRIMARY KEY AUTOINCREMENT', 'list:reference': 'TEXT', 'json': 'TEXT', 'boolean': 'CHAR(1)', 'upload': 'CHAR(%(length)s)', 'blob': 'BLOB', 'time': 'TIME', 'big-reference': 'INTEGER REFERENCES %(foreign_key)s ON DELETE %(on_delete_action)s'}

pydal.adapters.teradata module

class pydal.adapters.teradata.TeradataAdapter(db, uri, pool_size=0, folder=None, db_codec='UTF-8', credential_decoder=<function IDENTITY at 0x7f4237d47320>, driver_args={}, adapter_args={}, do_connect=True, after_connection=None)[source]

Bases: pydal.adapters.base.BaseAdapter

LEFT_JOIN()[source]
close(action='commit', really=True)[source]
drivers = ('pyodbc',)
select_limitby(sql_s, sql_f, sql_t, sql_w, sql_o, limitby)[source]
types = {'reference': 'INT', 'text': 'VARCHAR(2000)', 'float': 'REAL', 'datetime': 'TIMESTAMP', 'bigint': 'BIGINT', 'reference TFK': ' FOREIGN KEY (%(field_name)s) REFERENCES %(foreign_table)s (%(foreign_key)s)', 'id': 'INT GENERATED ALWAYS AS IDENTITY', 'reference FK': ' REFERENCES %(foreign_key)s', 'json': 'VARCHAR(4000)', 'big-id': 'BIGINT GENERATED ALWAYS AS IDENTITY', 'blob': 'BLOB', 'big-reference': 'BIGINT', 'string': 'VARCHAR(%(length)s)', 'list:string': 'VARCHAR(4000)', 'date': 'DATE', 'integer': 'INT', 'password': 'VARCHAR(%(length)s)', 'list:integer': 'VARCHAR(4000)', 'geometry': 'ST_GEOMETRY', 'double': 'DOUBLE', 'decimal': 'NUMERIC(%(precision)s,%(scale)s)', 'upload': 'VARCHAR(%(length)s)', 'list:reference': 'VARCHAR(4000)', 'boolean': 'CHAR(1)', 'time': 'TIME'}

Module contents

pydal.helpers package

Submodules

pydal.helpers.classes module

class pydal.helpers.classes.DatabaseStoredFile(db, filename, mode)[source]
close()[source]
close_connection()[source]
escape(obj)[source]
static exists(db, filename)[source]
read(bytes)[source]
readline()[source]
static try_create_web2py_filesystem(db)[source]
web2py_filesystems = set([])
write(data)[source]
class pydal.helpers.classes.MethodAdder(table)[source]

Bases: object

register(method_name=None)[source]
class pydal.helpers.classes.RecordDeleter(table, id)[source]

Bases: object

class pydal.helpers.classes.RecordUpdater(colset, table, id)[source]

Bases: object

class pydal.helpers.classes.Reference[source]

Bases: long

get(key, default=None)[source]
pydal.helpers.classes.Reference_pickler(data)[source]
pydal.helpers.classes.Reference_unpickler(data)[source]
class pydal.helpers.classes.SQLALL(table)[source]

Bases: object

Helper class providing a comma-separated string having all the field names (prefixed by table name and ‘.’)

normally only called from within gluon.dal

class pydal.helpers.classes.SQLCallableList[source]

Bases: list

class pydal.helpers.classes.SQLCustomType(type='string', native=None, encoder=None, decoder=None, validator=None, _class=None)[source]

Bases: object

Allows defining of custom SQL types

Parameters:
  • type – the web2py type (default = ‘string’)
  • native – the backend type
  • encoder – how to encode the value to store it in the backend
  • decoder – how to decode the value retrieved from the backend
  • validator – what validators to use (default=None, which uses the default validator for the type)
Example::

Define as:

decimal = SQLCustomType(
    type='double',
    native='integer',
    encoder=(lambda x: int(float(x) * 100)),
    decoder=(lambda x: Decimal("0.00") + Decimal(str(float(x) / 100)))
)

db.define_table(
    'example',
    Field('value', type=decimal)
)
endswith(text=None)[source]
startswith(text=None)[source]
class pydal.helpers.classes.UseDatabaseStoredFile[source]
file_close(fileobj)[source]
file_delete(filename)[source]
file_exists(filename)[source]
file_open(filename, mode='rb', lock=True)[source]

pydal.helpers.methods module

pydal.helpers.methods.archive_record(qset, fs, archive_table, current_record)[source]
pydal.helpers.methods.auto_validators(field)[source]
pydal.helpers.methods.bar_decode_integer(value)[source]
pydal.helpers.methods.bar_decode_string(value)[source]
pydal.helpers.methods.bar_encode(items)[source]
pydal.helpers.methods.bar_escape(item)[source]
pydal.helpers.methods.cleanup(text)[source]

Validates that the given text is clean: only contains [0-9a-zA-Z_]

pydal.helpers.methods.geoLine(*line)[source]
pydal.helpers.methods.geoPoint(x, y)[source]
pydal.helpers.methods.geoPolygon(*line)[source]
pydal.helpers.methods.hide_password(uri)[source]
pydal.helpers.methods.int2uuid(n)[source]
pydal.helpers.methods.list_represent(x, r=None)[source]
pydal.helpers.methods.pluralize(singular, rules=[(<_sre.SRE_Pattern object at 0x7f423af77c60>, <_sre.SRE_Pattern object at 0x7f423af77c60>, 'children'), (<_sre.SRE_Pattern object at 0x7f4237d63030>, <_sre.SRE_Pattern object at 0x7f4237d63030>, 'eet'), (<_sre.SRE_Pattern object at 0x7f423af3f270>, <_sre.SRE_Pattern object at 0x7f423af3f270>, 'eeth'), (<_sre.SRE_Pattern object at 0x7f423af3f330>, <_sre.SRE_Pattern object at 0x7f423af77d30>, 'l\\1aves'), (<_sre.SRE_Pattern object at 0x7f4237d630e0>, <_sre.SRE_Pattern object at 0x7f4237d630e0>, 'ses'), (<_sre.SRE_Pattern object at 0x7f4237d63190>, <_sre.SRE_Pattern object at 0x7f4237d63190>, 'men'), (<_sre.SRE_Pattern object at 0x7f4237d63240>, <_sre.SRE_Pattern object at 0x7f4237d63240>, 'ives'), (<_sre.SRE_Pattern object at 0x7f4237d632f0>, <_sre.SRE_Pattern object at 0x7f4237d632f0>, 'eaux'), (<_sre.SRE_Pattern object at 0x7f4237d64030>, <_sre.SRE_Pattern object at 0x7f4237d64030>, 'lves'), (<_sre.SRE_Pattern object at 0x7f423af77e00>, <_sre.SRE_Pattern object at 0x7f423841a920>, 'es'), (<_sre.SRE_Pattern object at 0x7f423af8b9d0>, <_sre.SRE_Pattern object at 0x7f423841a920>, 'es'), (<_sre.SRE_Pattern object at 0x7f423af31bd0>, <_sre.SRE_Pattern object at 0x7f4237d4b270>, 'ies'), (<_sre.SRE_Pattern object at 0x7f423841a920>, <_sre.SRE_Pattern object at 0x7f423841a920>, 's')])[source]
pydal.helpers.methods.smart_query(fields, text)[source]
pydal.helpers.methods.use_common_filters(query)[source]
pydal.helpers.methods.uuid2int(uuidv)[source]
pydal.helpers.methods.varquote_aux(name, quotestr='%s')[source]
pydal.helpers.methods.xorify(orderby)[source]

pydal.helpers.regex module

Module contents

Submodules

pydal.base module

This file is part of the web2py Web Framework
Copyrighted by Massimo Di Pierro <mdipierro@cs.depaul.edu>

This file contains the DAL support for many relational databases, including:

  • SQLite & SpatiaLite
  • MySQL
  • Postgres
  • Firebird
  • Oracle
  • MS SQL
  • DB2
  • Interbase
  • Ingres
  • Informix (9+ and SE)
  • SapDB (experimental)
  • Cubrid (experimental)
  • CouchDB (experimental)
  • MongoDB (in progress)
  • Google:nosql
  • Google:sql
  • Teradata
  • IMAP (experimental)

Example of usage:

>>> # from dal import DAL, Field

### create DAL connection (and create DB if it doesn't exist)
>>> db = DAL(('sqlite://storage.sqlite','mysql://a:b@localhost/x'),
... folder=None)

### define a table 'person' (create/alter as necessary)
>>> person = db.define_table('person',Field('name','string'))

### insert a record
>>> id = person.insert(name='James')

### retrieve it by id
>>> james = person(id)

### retrieve it by name
>>> james = person(name='James')

### retrieve it by arbitrary query
>>> query = (person.name=='James') & (person.name.startswith('J'))
>>> james = db(query).select(person.ALL)[0]

### update one record
>>> james.update_record(name='Jim')
<Row {'id': 1, 'name': 'Jim'}>

### update multiple records by query
>>> db(person.name.like('J%')).update(name='James')
1

### delete records by query
>>> db(person.name.lower() == 'jim').delete()
0

### retrieve multiple records (rows)
>>> people = db(person).select(orderby=person.name,
... groupby=person.name, limitby=(0,100))

### further filter them
>>> james = people.find(lambda row: row.name == 'James').first()
>>> print james.id, james.name
1 James

### check aggregates
>>> counter = person.id.count()
>>> print db(person).select(counter).first()(counter)
1

### delete one record
>>> james.delete_record()
1

### delete (drop) entire database table
>>> person.drop()

Supported DAL URI strings:

'sqlite://test.db'
'spatialite://test.db'
'sqlite:memory'
'spatialite:memory'
'jdbc:sqlite://test.db'
'mysql://root:none@localhost/test'
'postgres://mdipierro:password@localhost/test'
'postgres:psycopg2://mdipierro:password@localhost/test'
'postgres:pg8000://mdipierro:password@localhost/test'
'jdbc:postgres://mdipierro:none@localhost/test'
'mssql://web2py:none@A64X2/web2py_test'
'mssql2://web2py:none@A64X2/web2py_test' # alternate mappings
'mssql3://web2py:none@A64X2/web2py_test' # better pagination (requires >= 2005)
'mssql4://web2py:none@A64X2/web2py_test' # best pagination (requires >= 2012)
'oracle://username:password@database'
'firebird://user:password@server:3050/database'
'db2:ibm_db_dbi://DSN=dsn;UID=user;PWD=pass'
'db2:pyodbc://driver=DB2;hostname=host;database=database;uid=user;pwd=password;port=port'
'firebird://username:password@hostname/database'
'firebird_embedded://username:password@c://path'
'informix://user:password@server:3050/database'
'informixu://user:password@server:3050/database' # unicode informix
'ingres://database'  # or use an ODBC connection string, e.g. 'ingres://dsn=dsn_name'
'google:datastore' # for google app engine datastore
'google:datastore+ndb' # for google app engine datastore + ndb
'google:sql' # for google app engine with sql (mysql compatible)
'teradata://DSN=dsn;UID=user;PWD=pass; DATABASE=database' # experimental
'imap://user:password@server:port' # experimental
'mongodb://user:password@server:port/database' # experimental

For more info:

help(DAL)
help(Field)
class pydal.base.DAL(uri='sqlite://dummy.db', pool_size=0, folder=None, db_codec='UTF-8', check_reserved=None, migrate=True, fake_migrate=False, migrate_enabled=True, fake_migrate_all=False, decode_credentials=False, driver_args=None, adapter_args=None, attempts=5, auto_import=False, bigint_id=False, debug=False, lazy_tables=False, db_uid=None, do_connect=True, after_connection=None, tables=None, ignore_field_case=True, entity_quoting=False, table_hash=None)[source]

Bases: object

An instance of this class represents a database connection

Parameters:
  • uri (str) –

    contains information for connecting to a database. Defaults to ‘sqlite://dummy.db’

    Note

    experimental: you can specify a dictionary as the uri parameter, i.e. with:

    db = DAL({"uri": "sqlite://storage.sqlite",
              "tables": {...}, ...})
    

    for an example of dict input you can check the output of the scaffolding db model with

    db.as_dict()

    Note that for compatibility with Python older than version 2.6.5 you should cast your dict input keys to str due to a syntax limitation on kwarg names. For proper DAL dictionary input you can use one of:

    obj = serializers.cast_keys(dict, [encoding="utf-8"])
    #or else (for parsing json input)
    obj = serializers.loads_json(data, unicode_keys=False)
    
  • pool_size – How many open connections to make to the database object.
  • folder – where .table files will be created. Automatically set within web2py. Use an explicit path when using DAL outside web2py
  • db_codec – string encoding of the database (default: ‘UTF-8’)
  • table_hash – database identifier with .tables. If your connection hash changes you can still use the old .tables if they have db_hash as a prefix
  • check_reserved

    list of adapters to check tablenames and column names against sql/nosql reserved keywords. Defaults to None

    • ‘common’ List of sql keywords that are common to all database types such as “SELECT, INSERT”. (recommended)
    • ‘all’ Checks against all known SQL keywords
    • ‘<adaptername>’ Checks against the specific adapter’s list of keywords
    • ‘<adaptername>_nonreserved’ Checks against the specific adapter’s list of nonreserved keywords (if available)
  • migrate – sets default migrate behavior for all tables
  • fake_migrate – sets default fake_migrate behavior for all tables
  • migrate_enabled – If set to False disables ALL migrations
  • fake_migrate_all – If set to True fake migrates ALL tables
  • attempts – Number of times to attempt connecting
  • auto_import – If set to True, tries to import table definitions automatically from the databases folder (works only for simple models)
  • bigint_id – If set, turn on bigint instead of int for id and reference fields
  • lazy_tables – delays table definition until the table is accessed
  • after_connection – can be a callable that will be executed after the connection

Example

Use as:

db = DAL('sqlite://test.db')

or:

db = DAL(**{"uri": ..., "tables": [...]...}) # experimental

db.define_table('tablename', Field('fieldname1'),
                             Field('fieldname2'))
class Table(db, tablename, *fields, **args)

Bases: object

Represents a database table

Example:

You can create a table as:

db = DAL(...)
db.define_table('users', Field('name'))

And then:

db.users.insert(name='me') # print db.users._insert(...) to see SQL
db.users.drop()
as_dict(flat=False, sanitize=True)
as_json(sanitize=True)
as_xml(sanitize=True)
as_yaml(sanitize=True)
bulk_insert(items)

here items is a list of dictionaries

drop(mode='')
fields
has_key(key)
import_from_csv_file(csvfile, id_map=None, null='<NULL>', unique='uuid', id_offset=None, *args, **kwargs)

Import records from a csv file. Column headers must have the same names as the table fields. Field ‘id’ is ignored. If column names read ‘table.file’ the ‘table.’ prefix is ignored.

  • ‘unique’ argument is a field which must be unique (typically a uuid field)
  • ‘restore’ argument is default False; if set True will remove old values in table first.
  • ‘id_map’ if set to None will not map ids

The import will keep the id numbers in the restored table. This assumes that there is a field of type id that is an integer and in incrementing order.
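A minimal sketch, assuming a 'person' table with a 'name' field and a hypothetical people.csv file whose header row matches the field names:

# people.csv (hypothetical):
#   person.id,person.name
#   1,James
#   2,Max
with open('people.csv', 'r') as csvfile:
    db.person.import_from_csv_file(csvfile)
db.commit()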

insert(**fields)
items()
iteritems()
on(query)
sqlsafe
sqlsafe_alias
truncate(mode=None)
update(*args, **kwargs)
update_or_insert(_key=<function <lambda> at 0x7f4237d472a8>, **values)
validate_and_insert(**fields)
validate_and_update(_key=<function <lambda> at 0x7f4237d472a8>, **fields)
validate_and_update_or_insert(_key=<function <lambda> at 0x7f4237d472a8>, **fields)
with_alias(alias)
DAL.as_dict(flat=False, sanitize=True)[source]
DAL.as_json(sanitize=True)[source]
DAL.as_xml(sanitize=True)[source]
DAL.as_yaml(sanitize=True)[source]
DAL.check_reserved_keyword(name)[source]

Validates name against SQL keywords. Uses self.check_reserve, which is a list of operators to use.

DAL.close()[source]
DAL.commit()[source]
DAL.define_table(tablename, *fields, **args)[source]
static DAL.distributed_transaction_begin(*instances)[source]
static DAL.distributed_transaction_commit(*instances)[source]
DAL.executesql(query, placeholders=None, as_dict=False, fields=None, colnames=None, as_ordered_dict=False)[source]

Executes an arbitrary query

Parameters:
  • query (str) – the query to submit to the backend
  • placeholders – is optional and will always be None. If using raw SQL with placeholders, placeholders may be a sequence of values to be substituted in or, (if supported by the DB driver), a dictionary with keys matching named placeholders in your SQL.
  • as_dict – will always be None when using DAL. If using raw SQL can be set to True and the results cursor returned by the DB driver will be converted to a sequence of dictionaries keyed with the db field names. Results returned with as_dict=True are the same as those returned when applying .to_list() to a DAL query. If “as_ordered_dict”=True the behaviour is the same as when “as_dict”=True with the keys (field names) guaranteed to be in the same order as returned by the select name executed on the database.
  • fields

    list of DAL Fields that match the fields returned from the DB. The Field objects should be part of one or more Table objects defined on the DAL object. The “fields” list can include one or more DAL Table objects in addition to or instead of including Field objects, or it can be just a single table (not in a list). In that case, the Field objects will be extracted from the table(s).

    Note

    if either fields or colnames is provided, the results will be converted to a DAL Rows object using the db._adapter.parse() method

  • colnames – list of field names in tablename.fieldname format

Note

It is also possible to specify both “fields” and the associated “colnames”. In that case, “fields” can also include DAL Expression objects in addition to Field objects. For Field objects in “fields”, the associated “colnames” must still be in tablename.fieldname format. For Expression objects in “fields”, the associated “colnames” can be any arbitrary labels.

DAL Table objects referred to by “fields” or “colnames” can be dummy tables and do not have to represent any real tables in the database. Also, note that the “fields” and “colnames” must be in the same order as the fields in the results cursor returned from the DB.
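A minimal sketch, assuming a 'person' table defined on db and an SQLite backend (the '?' placeholder style is driver-dependent; other drivers may use %s or named placeholders):

# raw SQL with positional placeholders; each result row becomes a dict
results = db.executesql(
    'SELECT id, name FROM person WHERE name = ?;',
    placeholders=('James',),
    as_dict=True,
)

# with fields given, the cursor output is parsed into a DAL Rows object
rows = db.executesql(
    'SELECT person.id, person.name FROM person;',
    fields=[db.person.id, db.person.name],
)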

DAL.export_to_csv_file(ofile, *args, **kwargs)[source]
DAL.get(key, default=None)[source]
static DAL.get_instances()[source]

Returns a dictionary with uri as key with timings and defined tables:

{'sqlite://storage.sqlite': {
    'dbstats': [(select auth_user.email from auth_user, 0.02009)],
    'dbtables': {
        'defined': ['auth_cas', 'auth_event', 'auth_group',
            'auth_membership', 'auth_permission', 'auth_user'],
        'lazy': '[]'
        }
    }
}
DAL.has_key(tablename)
DAL.has_representer(name)[source]
DAL.has_serializer(name)[source]
DAL.import_from_csv_file(ifile, id_map=None, null='<NULL>', unique='uuid', map_tablenames=None, ignore_missing_tables=False, *args, **kwargs)[source]
DAL.import_table_definitions(path, migrate=False, fake_migrate=False, tables=None)[source]
DAL.lazy_define_table(tablename, *fields, **args)[source]
DAL.logger = <logging.Logger object at 0x7f4237d24910>
DAL.parse_as_rest(patterns, args, vars, queries=None, nested_select=True)[source]

Example

Use as:

db.define_table('person',Field('name'),Field('info'))
db.define_table('pet',
    Field('ownedby',db.person),
    Field('name'),Field('info')
)

@request.restful()
def index():
    def GET(*args,**vars):
        patterns = [
            "/friends[person]",
            "/{person.name}/:field",
            "/{person.name}/pets[pet.ownedby]",
            "/{person.name}/pets[pet.ownedby]/{pet.name}",
            "/{person.name}/pets[pet.ownedby]/{pet.name}/:field",
            ("/dogs[pet]", db.pet.info=='dog'),
            ("/dogs[pet]/{pet.name.startswith}", db.pet.info=='dog'),
            ]
        parser = db.parse_as_rest(patterns,args,vars)
        if parser.status == 200:
            return dict(content=parser.response)
        else:
            raise HTTP(parser.status,parser.error)

    def POST(table_name,**vars):
        if table_name == 'person':
            return db.person.validate_and_insert(**vars)
        elif table_name == 'pet':
            return db.pet.validate_and_insert(**vars)
        else:
            raise HTTP(400)
    return locals()
DAL.represent(name, *args, **kwargs)[source]
DAL.representers = {}
DAL.rollback()[source]
DAL.serialize(name, *args, **kwargs)[source]
DAL.serializers = None
static DAL.set_folder(folder)[source]
DAL.smart_query(fields, text)[source]
DAL.tables[source]
DAL.uuid(x)
DAL.validators = None
DAL.validators_method = None
pydal.base.DAL_pickler(db)[source]
pydal.base.DAL_unpickler(db_uid)[source]
class pydal.base.MetaDAL[source]

Bases: type

pydal.base.ogetattr

x.__getattribute__('name') <==> x.name

pydal.base.osetattr

x.__setattr__('name', value) <==> x.name = value

pydal.connection module

class pydal.connection.ConnectionPool[source]

Bases: object

POOLS = {}
after_connection()[source]
after_connection_hook()[source]

Hook for the after_connection parameter

check_active_connection = True
close(action='commit', really=False)[source]
static close_all_instances(action)[source]

to cleanly close databases in a multithreaded environment

find_or_make_work_folder()[source]
reconnect(f=None, cursor=True)[source]

Defines self.connection and self.cursor (if cursor is True). If self.pool_size > 0 it will try to pull the connection from the pool; if the connection is not active (closed by the db server) it will loop. If self.pool_size is not set, or there are no active connections in the pool, it makes a new one.

static set_folder(folder)[source]

pydal.objects module

class pydal.objects.Expression(db, op, first=None, second=None, type=None, **optional_args)[source]

Bases: object

abs()[source]
avg()[source]
belongs(*value, **kwattr)[source]

Accepts the following inputs:

field.belongs(1,2)
field.belongs((1,2))
field.belongs(query)

Does NOT accept:

field.belongs(1)

If the set you want back includes None values, you can do:

field.belongs((1,None), null=True)
coalesce(*others)[source]
coalesce_zero()[source]
contains(value, all=False, case_sensitive=False)[source]

For MongoDB and GAE contains is always case sensitive
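A minimal sketch, assuming a hypothetical 'post' table with a string 'body' field and a list:string 'tags' field:

rows = db(db.post.body.contains('pydal', case_sensitive=False)).select()

# with all=True every listed value must be present
rows = db(db.post.tags.contains(['python', 'sql'], all=True)).select()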

day()[source]
endswith(value)[source]
epoch()[source]
hour()[source]
ilike(value)[source]
len()[source]
like(value, case_sensitive=True)[source]
lower()[source]
max()[source]
min()[source]
minutes()[source]
month()[source]
regexp(value)[source]
replace(a, b)[source]
seconds()[source]
st_asgeojson(precision=15, options=0, version=1)[source]
st_astext()[source]
st_contains(value)[source]
st_distance(other)[source]
st_dwithin(value, distance)[source]
st_equals(value)[source]
st_intersects(value)[source]
st_overlaps(value)[source]
st_simplify(value)[source]
st_simplifypreservetopology(value)[source]
st_touches(value)[source]
st_within(value)[source]
st_x()[source]
st_y()[source]
startswith(value)[source]
sum()[source]
upper()[source]
with_alias(alias)[source]
year()[source]
class pydal.objects.Field(fieldname, type='string', length=None, default=<function <lambda> at 0x7f4237d472a8>, required=False, requires=<function <lambda> at 0x7f4237d472a8>, ondelete='CASCADE', notnull=False, unique=False, uploadfield=True, widget=None, label=None, comment=None, writable=True, readable=True, update=None, authorize=None, autodelete=False, represent=None, uploadfolder=None, uploadseparate=False, uploadfs=None, compute=None, custom_store=None, custom_retrieve=None, custom_retrieve_file_properties=None, custom_delete=None, filter_in=None, filter_out=None, custom_qualifier=None, map_none=None, rname=None)[source]

Bases: pydal.objects.Expression

Lazy

Represents a database field

Example

Usage:

a = Field(name, 'string', length=32, default=None, required=False,
    requires=IS_NOT_EMPTY(), ondelete='CASCADE',
    notnull=False, unique=False,
    uploadfield=True, # True means store on disk,
                      # 'a_field_name' means store in this field in db
                      # False means file content will be discarded.
    widget=None, label=None, comment=None,
    writable=True, readable=True, update=None, authorize=None,
    autodelete=False, represent=None, uploadfolder=None,
    uploadseparate=False, # upload to separate directories by uuid_keys
                          # first 2 characters and tablename.fieldname
                          # False - old behavior
                          # True - put uploaded file in
                          #   <uploaddir>/<tablename>.<fieldname>/uuid_key[:2]
                          #   directory
    uploadfs=None         # a pyfilesystem where to store uploads
    )

to be used as argument of DAL.define_table

alias of FieldMethod

Method

alias of FieldMethod

Virtual

alias of FieldVirtual

as_dict(flat=False, sanitize=True)[source]
as_json(sanitize=True)[source]
as_xml(sanitize=True)[source]
as_yaml(sanitize=True)[source]
clone(point_self_references_to=False, **args)[source]
count(distinct=None)[source]
formatter(value)[source]
retrieve(name, path=None, nameonly=False)[source]

If nameonly==True return (filename, fullfilename) instead of (filename, stream)

retrieve_file_properties(name, path=None)[source]
set_attributes(*args, **attributes)[source]
sqlsafe[source]
sqlsafe_name[source]
store(file, filename=None, path=None)[source]
validate(value)[source]
class pydal.objects.FieldMethod(name, f=None, handler=None)[source]

Bases: object

class pydal.objects.FieldVirtual(name, f=None, ftype='string', label=None, table_name=None)[source]

Bases: object

class pydal.objects.LazyReferenceGetter(table, id)[source]

Bases: object

class pydal.objects.LazySet(field, id)[source]

Bases: object

count(distinct=None, cache=None)[source]
delete()[source]
delete_uploaded_files(upload_fields=None)[source]
isempty()[source]
nested_select(*fields, **attributes)[source]
select(*fields, **attributes)[source]
update(**update_fields)[source]
update_naive(**update_fields)[source]
validate_and_update(**update_fields)[source]
class pydal.objects.Query(db, op, first=None, second=None, ignore_common_filters=False, **optional_args)[source]

Bases: object

Necessary to define a set. It can be stored or can be passed to DAL.__call__() to obtain a Set

Example

Use as:

query = db.users.name=='Max'
set = db(query)
records = set.select()
as_dict(flat=False, sanitize=True)[source]

Experimental stuff

This allows returning a plain dictionary with the basic query representation. It can be used with json/xml services for client-side db I/O

Example

Usage:

q = db.auth_user.id != 0
q.as_dict(flat=True)
{
"op": "NE",
"first":{
    "tablename": "auth_user",
    "fieldname": "id"
    },
"second":0
}
as_json(sanitize=True)[source]
as_xml(sanitize=True)[source]
case(t=1, f=0)[source]
class pydal.objects.Row(*args, **kwargs)[source]

Bases: object

A dictionary that lets you do d[‘a’] as well as d.a. This is only used to store a Row

as_dict(datetime_to_str=False, custom_types=None)[source]
as_json(mode='object', default=None, colnames=None, serialize=True, **kwargs)[source]

Serializes the row to a JSON object. kwargs are passed to the .as_dict method; only “object” mode is supported

serialize = False used by Rows.as_json

TODO: return array mode with query column order

mode and colnames are not implemented

as_xml(row_name='row', colnames=None, indent=' ')[source]
get(key, default=None)[source]
has_key(key)
items()
iteritems()
keys()
update(*args, **kwargs)
values()
class pydal.objects.Rows(db=None, records=[], colnames=[], compact=True, rawrows=None)[source]

Bases: object

A wrapper for the return value of a select. It basically represents a table. It has an iterator and each row is represented as a Row dictionary.

__iter__()[source]

Iterator over records

__str__()[source]

Serializes the table into a csv file

as_csv()

Serializes the table into a csv file

as_dict(key='id', compact=True, storage_to_dict=True, datetime_to_str=False, custom_types=None)[source]

Returns the data as a dictionary of dictionaries (storage_to_dict=True) or records (False)

Parameters:
  • key – the name of the field to be used as dict key, normally the id
  • compact – ? (default True)
  • storage_to_dict – when True returns a dict, otherwise a list(default True)
  • datetime_to_str – convert datetime fields as strings (default False)
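A minimal sketch, assuming a 'person' table on db:

rows = db(db.person).select()
by_id = rows.as_dict(key='id')   # e.g. {1: {'id': 1, 'name': 'James'}, ...}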
as_json(mode='object', default=None)[source]

Serializes the rows to a JSON list, or to an object with objects; mode=’object’ is not implemented (it should return a nested object structure)

as_list(compact=True, storage_to_dict=True, datetime_to_str=False, custom_types=None)[source]

Returns the data as a list or dictionary.

Parameters:
  • storage_to_dict – when True returns a dict, otherwise a list
  • datetime_to_str – convert datetime fields as strings
as_trees(parent_name='parent_id', children_name='children', render=False)[source]

Returns the data as a list of trees.

Parameters:
  • parent_name – the name of the field holding the reference to the parent (default parent_id).
  • children_name – the name where the children of each row will be stored as a list (default children).
  • render – whether we will render the fields using their represent (default False) can be a list of fields to render or True to render all.
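A minimal sketch, assuming a hypothetical 'category' table with a self-referencing 'parent_id' field:

db.define_table('category',
    Field('name'),
    Field('parent_id', 'reference category'))
rows = db(db.category).select()
trees = rows.as_trees(parent_name='parent_id', children_name='children')
# each top-level row now carries its child rows in a 'children' list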
as_xml(row_name='row', rows_name='rows')[source]
column(column=None)[source]
exclude(f)[source]

Removes elements from the calling Rows object, filtered by the function f, and returns a new Rows object containing the removed elements

export_to_csv_file(ofile, null='<NULL>', *args, **kwargs)[source]

Exports data to csv, the first line contains the column names

Parameters:
  • ofile – where the csv must be exported to
  • null – how null values must be represented (default ‘<NULL>’)
  • delimiter – delimiter to separate values (default ‘,’)
  • quotechar – character to use to quote string values (default ‘”’)
  • quoting – quote system, use csv.QUOTE_*** (default csv.QUOTE_MINIMAL)
  • represent – use the fields .represent value (default False)
  • colnames – list of column names to use (default self.colnames)

This will only work when exporting rows objects!!!! DO NOT use this with db.export_to_csv()
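A minimal sketch, assuming a 'person' table on db; people.csv is a hypothetical output path:

import csv

rows = db(db.person).select()
with open('people.csv', 'w') as ofile:
    rows.export_to_csv_file(ofile, null='<NULL>', quoting=csv.QUOTE_MINIMAL)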

find(f, limitby=None)[source]

Returns a new Rows object, a subset of the original object, filtered by the function f
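A minimal sketch, assuming a 'person' table on db:

rows = db(db.person).select()
j_names = rows.find(lambda row: row.name.startswith('J'), limitby=(0, 10))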

first()[source]
group_by_value(*fields, **args)[source]

Regroups the rows by one of the fields

json(mode='object', default=None)

Serializes the rows to a JSON list, or to an object with objects; mode=’object’ is not implemented (it should return a nested object structure)

last()[source]
render(i=None, fields=None)[source]

Takes an index and returns a copy of the indexed row with values transformed via the “represent” attributes of the associated fields.

Parameters:
  • i – index. If not specified, a generator is returned for iteration over all the rows.
  • fields – a list of fields to transform (if None, all fields with “represent” attributes will be transformed)
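A minimal sketch, assuming db.person.name was defined with a represent attribute (e.g. represent=lambda value, row: value.upper()):

rows = db(db.person).select()
first_rendered = rows.render(0)      # copy of row 0 with represented values
for rendered in rows.render():       # no index: a generator over all rows
    print(rendered.name)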
setvirtualfields(**keyed_virtualfields)[source]

For reference:

db.define_table('x',Field('number','integer'))
if db(db.x).isempty(): [db.x.insert(number=i) for i in range(10)]

from gluon.dal import lazy_virtualfield

class MyVirtualFields(object):
    # normal virtual field (backward compatible, discouraged)
    def normal_shift(self): return self.x.number+1
    # lazy virtual field (because of @staticmethod)
    @lazy_virtualfield
    def lazy_shift(instance,row,delta=4): return row.x.number+delta
db.x.virtualfields.append(MyVirtualFields())

for row in db(db.x).select():
    print row.number, row.normal_shift, row.lazy_shift(delta=7)
sort(f, reverse=False)[source]

Returns a list of sorted elements (not sorted in place)

xml(strict=False, row_name='row', rows_name='rows')[source]

Serializes the table using sqlhtml.SQLTABLE (if present)

class pydal.objects.Set(db, query, ignore_common_filters=None)[source]

Bases: object

Represents a set of records in the database. Records are identified by the query=Query(...) object. Normally the Set is generated by DAL.__call__(Query(...))

Given a set, for example:

myset = db(db.users.name=='Max')

you can:

myset.update(name='Massimo')
myset.delete() # all elements in the set
myset.select(orderby=db.users.id, groupby=db.users.name, limitby=(0,10))

and take subsets:

subset = myset(db.users.id<5)
as_dict(flat=False, sanitize=True)[source]
as_json(sanitize=True)[source]
as_xml(sanitize=True)[source]
build(d)[source]

Experimental: see .parse()

count(distinct=None, cache=None)[source]
delete()[source]
delete_uploaded_files(upload_fields=None)[source]
isempty()[source]
nested_select(*fields, **attributes)[source]
parse(dquery)[source]

Experimental: Turn a dictionary into a Query object

select(*fields, **attributes)[source]
update(**update_fields)[source]
update_naive(**update_fields)[source]

Same as update but does not call table._before_update and _after_update

validate_and_update(**update_fields)[source]
class pydal.objects.Table(db, tablename, *fields, **args)[source]

Bases: object

Represents a database table

Example:

You can create a table as:

db = DAL(...)
db.define_table('users', Field('name'))

And then:

db.users.insert(name='me') # print db.users._insert(...) to see SQL
db.users.drop()
as_dict(flat=False, sanitize=True)[source]
as_json(sanitize=True)[source]
as_xml(sanitize=True)[source]
as_yaml(sanitize=True)[source]
bulk_insert(items)[source]

here items is a list of dictionaries
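A minimal sketch, assuming a 'person' table with a 'name' field:

new_ids = db.person.bulk_insert([
    {'name': 'Alex'},
    {'name': 'Max'},
    {'name': 'Tim'},
])  # returns the ids of the inserted records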

drop(mode='')[source]
fields[source]
has_key(key)
import_from_csv_file(csvfile, id_map=None, null='<NULL>', unique='uuid', id_offset=None, *args, **kwargs)[source]

Import records from a csv file. Column headers must have the same names as the table fields. Field ‘id’ is ignored. If column names read ‘table.file’ the ‘table.’ prefix is ignored.

  • ‘unique’ argument is a field which must be unique (typically a uuid field)
  • ‘restore’ argument is default False; if set True will remove old values in table first.
  • ‘id_map’ if set to None will not map ids

The import will keep the id numbers in the restored table. This assumes that there is a field of type id that is an integer and in incrementing order.

insert(**fields)[source]
items()[source]
iteritems()[source]
on(query)[source]
sqlsafe[source]
sqlsafe_alias[source]
truncate(mode=None)[source]
update(*args, **kwargs)[source]
update_or_insert(_key=<function <lambda> at 0x7f4237d472a8>, **values)[source]
validate_and_insert(**fields)[source]
validate_and_update(_key=<function <lambda> at 0x7f4237d472a8>, **fields)[source]
validate_and_update_or_insert(_key=<function <lambda> at 0x7f4237d472a8>, **fields)[source]
with_alias(alias)[source]
class pydal.objects.VirtualCommand(method, row)[source]

Bases: object

pydal.objects.ogetattr

x.__getattribute__('name') <==> x.name

pydal.objects.osetattr

x.__setattr__('name', value) <==> x.name = value

Module contents