Copyright © 1996–2024 The PostgreSQL Global Development Group
Legal Notice
PostgreSQL is Copyright © 1996–2024 by the PostgreSQL Global Development Group.
Postgres95 is Copyright © 1994–5 by the Regents of the University of California.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN “AS-IS” BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
This book is the official documentation of PostgreSQL. It has been written by the PostgreSQL developers and other volunteers in parallel to the development of the PostgreSQL software. It describes all the functionality that the current version of PostgreSQL officially supports.
To make the large amount of information about PostgreSQL manageable, this book has been organized in several parts. Each part is targeted at a different class of users, or at users in different stages of their PostgreSQL experience:
Part I is an informal introduction for new users.
Part II documents the SQL query language environment, including data types and functions, as well as user-level performance tuning. Every PostgreSQL user should read this.
Part III describes the installation and administration of the server. Everyone who runs a PostgreSQL server, be it for private use or for others, should read this part.
Part IV describes the programming interfaces for PostgreSQL client programs.
Part V contains information for advanced users about the extensibility capabilities of the server. Topics include user-defined data types and functions.
Part VI contains reference information about SQL commands, client and server programs. This part supports the other parts with structured information sorted by command or program.
Part VII contains assorted information that might be of use to PostgreSQL developers.
PostgreSQL is an object-relational database management system (ORDBMS) based on POSTGRES, Version 4.2, developed at the University of California at Berkeley Computer Science Department. POSTGRES pioneered many concepts that only became available in some commercial database systems much later.
PostgreSQL is an open-source descendant of this original Berkeley code. It supports a large part of the SQL standard and offers many modern features, including complex queries, foreign keys, triggers, updatable views, transactional integrity, and multiversion concurrency control.
Also, PostgreSQL can be extended by the user in many ways, for example by adding new data types, functions, operators, aggregate functions, index methods, and procedural languages.
And because of the liberal license, PostgreSQL can be used, modified, and distributed by anyone free of charge for any purpose, be it private, commercial, or academic.
The object-relational database management system now known as PostgreSQL is derived from the POSTGRES package written at the University of California at Berkeley. With decades of development behind it, PostgreSQL is now the most advanced open-source database available anywhere.
The POSTGRES project, led by Professor Michael Stonebraker, was sponsored by the Defense Advanced Research Projects Agency (DARPA), the Army Research Office (ARO), the National Science Foundation (NSF), and ESL, Inc. The implementation of POSTGRES began in 1986. The initial concepts for the system were presented in [ston86], and the definition of the initial data model appeared in [rowe87]. The design of the rule system at that time was described in [ston87a]. The rationale and architecture of the storage manager were detailed in [ston87b].
POSTGRES has undergone several major releases since then. The first “demoware” system became operational in 1987 and was shown at the 1988 ACM-SIGMOD Conference. Version 1, described in [ston90a], was released to a few external users in June 1989. In response to a critique of the first rule system ([ston89]), the rule system was redesigned ([ston90b]), and Version 2 was released in June 1990 with the new rule system. Version 3 appeared in 1991 and added support for multiple storage managers, an improved query executor, and a rewritten rule system. For the most part, subsequent releases until Postgres95 (see below) focused on portability and reliability.
POSTGRES has been used to implement many different research and production applications. These include: a financial data analysis system, a jet engine performance monitoring package, an asteroid tracking database, a medical information database, and several geographic information systems. POSTGRES has also been used as an educational tool at several universities. Finally, Illustra Information Technologies (later merged into Informix, which is now owned by IBM) picked up the code and commercialized it. In late 1992, POSTGRES became the primary data manager for the Sequoia 2000 scientific computing project.
The size of the external user community nearly doubled during 1993. It became increasingly obvious that maintenance of the prototype code and support was taking up large amounts of time that should have been devoted to database research. In an effort to reduce this support burden, the Berkeley POSTGRES project officially ended with Version 4.2.
In 1994, Andrew Yu and Jolly Chen added an SQL language interpreter to POSTGRES. Under a new name, Postgres95 was subsequently released to the web to find its own way in the world as an open-source descendant of the original POSTGRES Berkeley code.
Postgres95 code was completely ANSI C and trimmed in size by 25%. Many internal changes improved performance and maintainability. Postgres95 release 1.0.x ran about 30–50% faster on the Wisconsin Benchmark compared to POSTGRES, Version 4.2. Apart from bug fixes, the following were the major enhancements:
The query language PostQUEL was replaced with SQL (implemented in the server). (Interface library libpq was named after PostQUEL.) Subqueries were not supported until PostgreSQL (see below), but they could be imitated in Postgres95 with user-defined SQL functions. Aggregate functions were re-implemented. Support for the GROUP BY query clause was also added.
A new program (psql) was provided for interactive SQL queries, which used GNU Readline. This largely superseded the old monitor program.
A new front-end library, libpgtcl, supported Tcl-based clients. A sample shell, pgtclsh, provided new Tcl commands to interface Tcl programs with the Postgres95 server.
The large-object interface was overhauled. The inversion large objects were the only mechanism for storing large objects. (The inversion file system was removed.)
The instance-level rule system was removed. Rules were still available as rewrite rules.
A short tutorial introducing regular SQL features as well as those of Postgres95 was distributed with the source code.
GNU make (instead of BSD make) was used for the build. Also, Postgres95 could be compiled with an unpatched GCC (data alignment of doubles was fixed).
By 1996, it became clear that the name “Postgres95” would not stand the test of time. We chose a new name, PostgreSQL, to reflect the relationship between the original POSTGRES and the more recent versions with SQL capability. At the same time, we set the version numbering to start at 6.0, putting the numbers back into the sequence originally begun by the Berkeley POSTGRES project.
Many people continue to refer to PostgreSQL as “Postgres” (now rarely in all capital letters) because of tradition or because it is easier to pronounce. This usage is widely accepted as a nickname or alias.
The emphasis during development of Postgres95 was on identifying and understanding existing problems in the server code. With PostgreSQL, the emphasis has shifted to augmenting features and capabilities, although work continues in all areas.
Details about what has happened in PostgreSQL since then can be found in Appendix E.
The following conventions are used in the synopsis of a command:
brackets ([ and ]) indicate optional parts. Braces ({ and }) and vertical lines (|) indicate that you must choose one alternative. Dots (...) mean that the preceding element can be repeated. All other symbols, including parentheses, should be taken literally.
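For example, a hypothetical synopsis following these conventions (the command name and its options are invented purely for illustration) could look like:
somecommand [ -v ] { on | off } filename ...
Here -v may be omitted, exactly one of on or off must be chosen, and one or more filename arguments may be given.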
Where it enhances the clarity, SQL commands are preceded by the prompt =>, and shell commands are preceded by the prompt $. Normally, prompts are not shown, though.
An administrator is generally a person who is in charge of installing and running the server. A user could be anyone who is using, or wants to use, any part of the PostgreSQL system. These terms should not be interpreted too narrowly; this book does not have fixed presumptions about system administration procedures.
Besides the documentation, that is, this book, there are other resources about PostgreSQL:
The PostgreSQL wiki contains the project's FAQ (Frequently Asked Questions) list, TODO list, and detailed information about many more topics.
The PostgreSQL web site carries details on the latest release and other information to make your work or play with PostgreSQL more productive.
The mailing lists are a good place to have your questions answered, to share experiences with other users, and to contact the developers. Consult the PostgreSQL web site for details.
PostgreSQL is an open-source project. As such, it depends on the user community for ongoing support. As you begin to use PostgreSQL, you will rely on others for help, either through the documentation or through the mailing lists. Consider contributing your knowledge back. Read the mailing lists and answer questions. If you learn something which is not in the documentation, write it up and contribute it. If you add features to the code, contribute them.
When you find a bug in PostgreSQL we want to hear about it. Your bug reports play an important part in making PostgreSQL more reliable because even the utmost care cannot guarantee that every part of PostgreSQL will work on every platform under every circumstance.
The following suggestions are intended to assist you in forming bug reports that can be handled in an effective fashion. No one is required to follow them but doing so tends to be to everyone's advantage.
We cannot promise to fix every bug right away. If the bug is obvious, critical, or affects a lot of users, chances are good that someone will look into it. It could also happen that we tell you to update to a newer version to see if the bug happens there. Or we might decide that the bug cannot be fixed before some major rewrite we might be planning is done. Or perhaps it is simply too hard and there are more important things on the agenda. If you need help immediately, consider obtaining a commercial support contract.
Before you report a bug, please read and re-read the documentation to verify that you can really do whatever it is you are trying. If it is not clear from the documentation whether you can do something or not, please report that too; it is a bug in the documentation. If it turns out that a program does something different from what the documentation says, that is a bug. That might include, but is not limited to, the following circumstances:
A program terminates with a fatal signal or an operating system error message that would point to a problem in the program. (A counterexample might be a “disk full” message, since you have to fix that yourself.)
A program produces the wrong output for any given input.
A program refuses to accept valid input (as defined in the documentation).
A program accepts invalid input without a notice or error message. But keep in mind that your idea of invalid input might be our idea of an extension or compatibility with traditional practice.
PostgreSQL fails to compile, build, or install according to the instructions on supported platforms.
Here “program” refers to any executable, not only the backend process.
Being slow or resource-hogging is not necessarily a bug. Read the documentation or ask on one of the mailing lists for help in tuning your applications. Failing to comply with the SQL standard is not necessarily a bug either, unless compliance for the specific feature is explicitly claimed.
Before you continue, check on the TODO list and in the FAQ to see if your bug is already known. If you cannot decode the information on the TODO list, report your problem. The least we can do is make the TODO list clearer.
The most important thing to remember about bug reporting is to state all the facts and only facts. Do not speculate what you think went wrong, what “it seemed to do”, or which part of the program has a fault. If you are not familiar with the implementation you would probably guess wrong and not help us a bit. And even if you are, educated explanations are a great supplement to but no substitute for facts. If we are going to fix the bug we still have to see it happen for ourselves first. Reporting the bare facts is relatively straightforward (you can probably copy and paste them from the screen) but all too often important details are left out because someone thought it does not matter or the report would be understood anyway.
The following items should be contained in every bug report:
The exact sequence of steps from program start-up necessary to reproduce the problem. This should be self-contained; it is not enough to send in a bare SELECT statement without the preceding CREATE TABLE and INSERT statements, if the output should depend on the data in the tables. We do not have the time to reverse-engineer your database schema, and if we are supposed to make up our own data we would probably miss the problem.
The best format for a test case for SQL-related problems is a file that can be run through the psql frontend that shows the problem. (Be sure to not have anything in your ~/.psqlrc start-up file.) An easy way to create this file is to use pg_dump to dump out the table declarations and data needed to set the scene, then add the problem query. You are encouraged to minimize the size of your example, but this is not absolutely necessary. If the bug is reproducible, we will find it either way.
If your application uses some other client interface, such as PHP, then please try to isolate the offending queries. We will probably not set up a web server to reproduce your problem. In any case remember to provide the exact input files; do not guess that the problem happens for “large files” or “midsize databases”, etc. since this information is too inexact to be of use.
The output you got. Please do not say that it “didn't work” or “crashed”. If there is an error message, show it, even if you do not understand it. If the program terminates with an operating system error, say which. If nothing at all happens, say so. Even if the result of your test case is a program crash or otherwise obvious it might not happen on our platform. The easiest thing is to copy the output from the terminal, if possible.
If you are reporting an error message, please obtain the most verbose form of the message. In psql, say \set VERBOSITY verbose beforehand. If you are extracting the message from the server log, set the run-time parameter log_error_verbosity to verbose so that all details are logged.
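As a minimal sketch of both settings (nothing here is specific to your particular problem): in psql, issue
\set VERBOSITY verbose
before reproducing the error; and in postgresql.conf on the server, the corresponding line would be
log_error_verbosity = verbose   # log full error detail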
In case of fatal errors, the error message reported by the client might not contain all the information available. Please also look at the log output of the database server. If you do not keep your server's log output, this would be a good time to start doing so.
The output you expected is very important to state. If you just write “This command gives me that output.” or “This is not what I expected.”, we might run it ourselves, scan the output, and think it looks OK and is exactly what we expected. We should not have to spend the time to decode the exact semantics behind your commands. Especially refrain from merely saying that “This is not what SQL says/Oracle does.” Digging out the correct behavior from SQL is not a fun undertaking, nor do we all know how all the other relational databases out there behave. (If your problem is a program crash, you can obviously omit this item.)
Any command line options and other start-up options, including any relevant environment variables or configuration files that you changed from the default. Again, please provide exact information. If you are using a prepackaged distribution that starts the database server at boot time, you should try to find out how that is done.
Anything you did at all differently from the installation instructions.
The PostgreSQL version. You can run the command SELECT version(); to find out the version of the server you are connected to. Most executable programs also support a --version option; at least postgres --version and psql --version should work. If the function or the options do not exist then your version is more than old enough to warrant an upgrade.
If you run a prepackaged version, such as RPMs, say so, including any subversion the package might have. If you are talking about a Git snapshot, mention that, including the commit hash.
If your version is older than 14.13 we will almost certainly tell you to upgrade. There are many bug fixes and improvements in each new release, so it is quite possible that a bug you have encountered in an older release of PostgreSQL has already been fixed. We can only provide limited support for sites using older releases of PostgreSQL; if you require more than we can provide, consider acquiring a commercial support contract.
Platform information. This includes the kernel name and version, C library, processor, memory information, and so on. In most cases it is sufficient to report the vendor and version, but do not assume everyone knows what exactly “Debian” contains or that everyone runs on x86_64. If you have installation problems then information about the toolchain on your machine (compiler, make, and so on) is also necessary.
Do not be afraid if your bug report becomes rather lengthy. That is a fact of life. It is better to report everything the first time than us having to squeeze the facts out of you. On the other hand, if your input files are huge, it is fair to ask first whether somebody is interested in looking into it. Here is an article that outlines some more tips on reporting bugs.
Do not spend all your time figuring out which changes in the input make the problem go away. This will probably not help solve it. If it turns out that the bug cannot be fixed right away, you will still have time to find and share your work-around. Also, once again, do not waste your time guessing why the bug exists. We will find that out soon enough.
When writing a bug report, please avoid confusing terminology. The software package in total is called “PostgreSQL”, sometimes “Postgres” for short. If you are specifically talking about the backend process, mention that; do not just say “PostgreSQL crashes”. A crash of a single backend process is quite different from a crash of the parent “postgres” process; please don't say “the server crashed” when you mean a single backend process went down, nor vice versa. Also, client programs such as the interactive frontend “psql” are completely separate from the backend. Please try to be specific about whether the problem is on the client or server side.
In general, send bug reports to the bug report mailing list at <pgsql-bugs@lists.postgresql.org>. You are requested to use a descriptive subject for your email message, perhaps parts of the error message.
Another method is to fill in the bug report web-form available at the project's web site. Entering a bug report this way causes it to be mailed to the <pgsql-bugs@lists.postgresql.org> mailing list.
If your bug report has security implications and you'd prefer that it not become immediately visible in public archives, don't send it to pgsql-bugs. Security issues can be reported privately to <security@postgresql.org>.
Do not send bug reports to any of the user mailing lists, such as <pgsql-sql@lists.postgresql.org> or <pgsql-general@lists.postgresql.org>. These mailing lists are for answering user questions, and their subscribers normally do not wish to receive bug reports. More importantly, they are unlikely to fix them.
Also, please do not send reports to the developers' mailing list <pgsql-hackers@lists.postgresql.org>. This list is for discussing the development of PostgreSQL, and it would be nice if we could keep the bug reports separate. We might choose to take up a discussion about your bug report on pgsql-hackers, if the problem needs more review.
If you have a problem with the documentation, the best place to report it is the documentation mailing list <pgsql-docs@lists.postgresql.org>. Please be specific about what part of the documentation you are unhappy with.
If your bug is a portability problem on a non-supported platform, send mail to <pgsql-hackers@lists.postgresql.org>, so we (and you) can work on porting PostgreSQL to your platform.
Due to the unfortunate amount of spam going around, all of the above lists will be moderated unless you are subscribed. That means there will be some delay before the email is delivered. If you wish to subscribe to the lists, please visit https://lists.postgresql.org/ for instructions.
Welcome to the PostgreSQL Tutorial. The following few chapters are intended to give a simple introduction to PostgreSQL, relational database concepts, and the SQL language to those who are new to any one of these aspects. We only assume some general knowledge about how to use computers. No particular Unix or programming experience is required. This part is mainly intended to give you some hands-on experience with important aspects of the PostgreSQL system. It makes no attempt to be a complete or thorough treatment of the topics it covers.
After you have worked through this tutorial you might want to move on to reading Part II to gain a more formal knowledge of the SQL language, or Part IV for information about developing applications for PostgreSQL. Those who set up and manage their own server should also read Part III.
Before you can use PostgreSQL you need to install it, of course. It is possible that PostgreSQL is already installed at your site, either because it was included in your operating system distribution or because the system administrator already installed it. If that is the case, you should obtain information from the operating system documentation or your system administrator about how to access PostgreSQL.
If you are not sure whether PostgreSQL is already available or whether you can use it for your experimentation then you can install it yourself. Doing so is not hard and it can be a good exercise. PostgreSQL can be installed by any unprivileged user; no superuser (root) access is required.
If you are installing PostgreSQL yourself, then refer to Chapter 17 for instructions on installation, and return to this guide when the installation is complete. Be sure to follow closely the section about setting up the appropriate environment variables.
If your site administrator has not set things up in the default way, you might have some more work to do. For example, if the database server machine is a remote machine, you will need to set the PGHOST environment variable to the name of the database server machine. The environment variable PGPORT might also have to be set. The bottom line is this: if you try to start an application program and it complains that it cannot connect to the database, you should consult your site administrator or, if that is you, the documentation to make sure that your environment is properly set up. If you did not understand the preceding paragraph then read the next section.
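For instance, with a Bourne-style shell you could set these variables like this (the host name is only a placeholder; substitute your own server's name and port):
$ export PGHOST=db.example.com
$ export PGPORT=5432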
Before we proceed, you should understand the basic PostgreSQL system architecture. Understanding how the parts of PostgreSQL interact will make this chapter somewhat clearer.
In database jargon, PostgreSQL uses a client/server model. A PostgreSQL session consists of the following cooperating processes (programs):
A server process, which manages the database files, accepts connections to the database from client applications, and performs database actions on behalf of the clients. The database server program is called postgres.
The user's client (frontend) application that wants to perform database operations. Client applications can be very diverse in nature: a client could be a text-oriented tool, a graphical application, a web server that accesses the database to display web pages, or a specialized database maintenance tool. Some client applications are supplied with the PostgreSQL distribution; most are developed by users.
As is typical of client/server applications, the client and the server can be on different hosts. In that case they communicate over a TCP/IP network connection. You should keep this in mind, because the files that can be accessed on a client machine might not be accessible (or might only be accessible using a different file name) on the database server machine.
The PostgreSQL server can handle multiple concurrent connections from clients. To achieve this it starts (“forks”) a new process for each connection. From that point on, the client and the new server process communicate without intervention by the original postgres process. Thus, the supervisor server process is always running, waiting for client connections, whereas client and associated server processes come and go. (All of this is of course invisible to the user. We only mention it here for completeness.)
The first test to see whether you can access the database server is to try to create a database. A running PostgreSQL server can manage many databases. Typically, a separate database is used for each project or for each user.
Possibly, your site administrator has already created a database for your use. In that case you can omit this step and skip ahead to the next section.
To create a new database, in this example named mydb, you use the following command:
$ createdb mydb
If this produces no response then this step was successful and you can skip over the remainder of this section.
If you see a message similar to:
createdb: command not found
then PostgreSQL was not installed properly. Either it was not installed at all or your shell's search path was not set to include it. Try calling the command with an absolute path instead:
$ /usr/local/pgsql/bin/createdb mydb
The path at your site might be different. Contact your site administrator or check the installation instructions to correct the situation.
Another response could be this:
createdb: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory
        Is the server running locally and accepting connections on that socket?
This means that the server was not started, or it is not listening where createdb expects to contact it. Again, check the installation instructions or consult the administrator.
Another response could be this:
createdb: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: role "joe" does not exist
where your own login name is mentioned. This will happen if the administrator has not created a PostgreSQL user account for you. (PostgreSQL user accounts are distinct from operating system user accounts.) If you are the administrator, see Chapter 22 for help creating accounts. You will need to become the operating system user under which PostgreSQL was installed (usually postgres) to create the first user account. It could also be that you were assigned a PostgreSQL user name that is different from your operating system user name; in that case you need to use the -U switch or set the PGUSER environment variable to specify your PostgreSQL user name.
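For example, assuming the PostgreSQL user name you were given is myusername (a placeholder), either of these would work:
$ createdb -U myusername mydb
$ PGUSER=myusername createdb mydb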
If you have a user account but it does not have the privileges required to create a database, you will see the following:
createdb: error: database creation failed: ERROR: permission denied to create database
Not every user has authorization to create new databases. If PostgreSQL refuses to create databases for you then the site administrator needs to grant you permission to create databases. Consult your site administrator if this occurs. If you installed PostgreSQL yourself then you should log in for the purposes of this tutorial under the user account that you started the server as. [1]
You can also create databases with other names. PostgreSQL allows you to create any number of databases at a given site. Database names must have an alphabetic first character and are limited to 63 bytes in length. A convenient choice is to create a database with the same name as your current user name. Many tools assume that database name as the default, so it can save you some typing. To create that database, simply type:
$ createdb
If you do not want to use your database anymore you can remove it. For example, if you are the owner (creator) of the database mydb, you can destroy it using the following command:
$ dropdb mydb
(For this command, the database name does not default to the user account name. You always need to specify it.) This action physically removes all files associated with the database and cannot be undone, so this should only be done with a great deal of forethought.
More about createdb and dropdb can be found in createdb and dropdb respectively.
Once you have created a database, you can access it by:
Running the PostgreSQL interactive terminal program, called psql, which allows you to interactively enter, edit, and execute SQL commands.
Using an existing graphical frontend tool like pgAdmin or an office suite with ODBC or JDBC support to create and manipulate a database. These possibilities are not covered in this tutorial.
Writing a custom application, using one of the several available language bindings. These possibilities are discussed further in Part IV.
You probably want to start up psql to try the examples in this tutorial. It can be activated for the mydb database by typing the command:
$ psql mydb
If you do not supply the database name then it will default to your user account name. You already discovered this scheme in the previous section using createdb.
In psql, you will be greeted with the following message:
psql (14.13)
Type "help" for help.

mydb=>
The last line could also be:
mydb=#
That would mean you are a database superuser, which is most likely the case if you installed the PostgreSQL instance yourself. Being a superuser means that you are not subject to access controls. For the purposes of this tutorial that is not important.
If you encounter problems starting psql then go back to the previous section. The diagnostics of createdb and psql are similar, and if the former worked the latter should work as well.
The last line printed out by psql is the prompt, and it indicates that psql is listening to you and that you can type SQL queries into a work space maintained by psql. Try out these commands:
mydb=> SELECT version();
                                         version
------------------------------------------------------------------------------------------
 PostgreSQL 14.13 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit
(1 row)

mydb=> SELECT current_date;
    date
------------
 2016-01-07
(1 row)

mydb=> SELECT 2 + 2;
 ?column?
----------
        4
(1 row)
The psql program has a number of internal commands that are not SQL commands. They begin with the backslash character, “\”. For example, you can get help on the syntax of various PostgreSQL SQL commands by typing:
mydb=> \h
To get out of psql, type:
mydb=> \q
and psql will quit and return you to your command shell. (For more internal commands, type \? at the psql prompt.) The full capabilities of psql are documented in psql. In this tutorial we will not use these features explicitly, but you can use them yourself when it is helpful.
[1] As an explanation for why this works: PostgreSQL user names are separate from operating system user accounts. When you connect to a database, you can choose what PostgreSQL user name to connect as; if you don't, it will default to the same name as your current operating system account. As it happens, there will always be a PostgreSQL user account that has the same name as the operating system user that started the server, and it also happens that that user always has permission to create databases. Instead of logging in as that user you can also specify the -U option everywhere to select a PostgreSQL user name to connect as.
This chapter provides an overview of how to use SQL to perform simple operations. This tutorial is only intended to give you an introduction and is in no way a complete tutorial on SQL. Numerous books have been written on SQL, including [melt93] and [date97]. You should be aware that some PostgreSQL language features are extensions to the standard.
In the examples that follow, we assume that you have created a database named mydb, as described in the previous chapter, and have been able to start psql.
Examples in this manual can also be found in the PostgreSQL source distribution in the directory src/tutorial/. (Binary distributions of PostgreSQL might not provide those files.) To use those files, first change to that directory and run make:
$ cd .../src/tutorial
$ make
This creates the scripts and compiles the C files containing user-defined functions and types. Then, to start the tutorial, do the following:
$ psql -s mydb
...
mydb=> \i basics.sql
The \i command reads in commands from the specified file. psql's -s option puts you in single step mode which pauses before sending each statement to the server. The commands used in this section are in the file basics.sql.
PostgreSQL is a relational database management system (RDBMS). That means it is a system for managing data stored in relations. Relation is essentially a mathematical term for table. The notion of storing data in tables is so commonplace today that it might seem inherently obvious, but there are a number of other ways of organizing databases. Files and directories on Unix-like operating systems form an example of a hierarchical database. A more modern development is the object-oriented database.
Each table is a named collection of rows. Each row of a given table has the same set of named columns, and each column is of a specific data type. Whereas columns have a fixed order in each row, it is important to remember that SQL does not guarantee the order of the rows within the table in any way (although they can be explicitly sorted for display).
Tables are grouped into databases, and a collection of databases managed by a single PostgreSQL server instance constitutes a database cluster.
You can create a new table by specifying the table name, along with all column names and their types:
CREATE TABLE weather (
    city      varchar(80),
    temp_lo   int,        -- low temperature
    temp_hi   int,        -- high temperature
    prcp      real,       -- precipitation
    date      date
);
You can enter this into psql with the line breaks. psql will recognize that the command is not terminated until the semicolon.
White space (i.e., spaces, tabs, and newlines) can be used freely in SQL commands. That means you can type the command aligned differently than above, or even all on one line. Two dashes (“--”) introduce comments. Whatever follows them is ignored up to the end of the line. SQL is case insensitive about key words and identifiers, except when identifiers are double-quoted to preserve the case (not done above).
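For instance, the following sketch is the same command as above, just laid out differently; because unquoted identifiers fold to lowercase, WEATHER and weather name the same table (so you would not actually run both commands):
create table WEATHER (city varchar(80), temp_lo int,
    temp_hi int, prcp real, date date);  -- one statement, free-form layout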
varchar(80) specifies a data type that can store arbitrary character strings up to 80 characters in length. int is the normal integer type. real is a type for storing single precision floating-point numbers. date should be self-explanatory. (Yes, the column of type date is also named date. This might be convenient or confusing; you choose.)
PostgreSQL supports the standard SQL types int, smallint, real, double precision, char(N), varchar(N), date, time, timestamp, and interval, as well as other types of general utility and a rich set of geometric types. PostgreSQL can be customized with an arbitrary number of user-defined data types. Consequently, type names are not key words in the syntax, except where required to support special cases in the SQL standard.
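Purely as an illustration of several of these types (this table is not part of the tutorial's data set, and its name and columns are made up):
CREATE TABLE readings (
    taken_at  timestamp,        -- date and time of an observation
    span      interval,         -- an elapsed time, e.g. '2 hours'
    station   char(4),          -- fixed-length, blank-padded string
    value     double precision  -- double precision floating point
);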
The second example will store cities and their associated geographical location:
CREATE TABLE cities (
    name       varchar(80),
    location   point
);
The point type is an example of a PostgreSQL-specific data type.
Finally, it should be mentioned that if you don't need a table any longer or want to recreate it differently you can remove it using the following command:
DROP TABLE tablename;
The INSERT statement is used to populate a table with rows:
INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27');
Note that all data types use rather obvious input formats. Constants that are not simple numeric values usually must be surrounded by single quotes ('), as in the example. The date type is actually quite flexible in what it accepts, but for this tutorial we will stick to the unambiguous format shown here.
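As a small illustration of that flexibility (the exact set of accepted spellings depends on the DateStyle setting, so treat this as a sketch), the following literals normally all denote the same date:
SELECT DATE '1994-11-27', DATE 'November 27, 1994', DATE '27-Nov-1994';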
The point type requires a coordinate pair as input, as shown here:
INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
The syntax used so far requires you to remember the order of the columns. An alternative syntax allows you to list the columns explicitly:
INSERT INTO weather (city, temp_lo, temp_hi, prcp, date) VALUES ('San Francisco', 43, 57, 0.0, '1994-11-29');
You can list the columns in a different order if you wish or even omit some columns, e.g., if the precipitation is unknown:
INSERT INTO weather (date, city, temp_hi, temp_lo) VALUES ('1994-11-29', 'Hayward', 54, 37);
Many developers consider explicitly listing the columns better style than relying on the order implicitly.
Please enter all the commands shown above so you have some data to work with in the following sections.
You could also have used COPY to load large amounts of data from flat-text files. This is usually faster because the COPY command is optimized for this application while allowing less flexibility than INSERT. An example would be:
COPY weather FROM '/home/user/weather.txt';
where the file name for the source file must be available on the machine running the backend process, not the client, since the backend process reads the file directly. You can read more about the COPY command in COPY.
To retrieve data from a table, the table is queried. An SQL SELECT statement is used to do this. The statement is divided into a select list (the part that lists the columns to be returned), a table list (the part that lists the tables from which to retrieve the data), and an optional qualification (the part that specifies any restrictions). For example, to retrieve all the rows of table weather, type:
SELECT * FROM weather;
Here * is a shorthand for “all columns”. [2] So the same result would be had with:
SELECT city, temp_lo, temp_hi, prcp, date FROM weather;
The output should be:
     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
 San Francisco |      43 |      57 |    0 | 1994-11-29
 Hayward       |      37 |      54 |      | 1994-11-29
(3 rows)
You can write expressions, not just simple column references, in the select list. For example, you can do:
SELECT city, (temp_hi+temp_lo)/2 AS temp_avg, date FROM weather;
This should give:
     city      | temp_avg |    date
---------------+----------+------------
 San Francisco |       48 | 1994-11-27
 San Francisco |       50 | 1994-11-29
 Hayward       |       45 | 1994-11-29
(3 rows)
Notice how the AS clause is used to relabel the output column. (The AS clause is optional.)
A query can be “qualified” by adding a WHERE clause that specifies which rows are wanted. The WHERE clause contains a Boolean (truth value) expression, and only rows for which the Boolean expression is true are returned. The usual Boolean operators (AND, OR, and NOT) are allowed in the qualification. For example, the following retrieves the weather of San Francisco on rainy days:
SELECT * FROM weather WHERE city = 'San Francisco' AND prcp > 0.0;
Result:
     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
(1 row)
You can request that the results of a query be returned in sorted order:
SELECT * FROM weather ORDER BY city;
     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 Hayward       |      37 |      54 |      | 1994-11-29
 San Francisco |      43 |      57 |    0 | 1994-11-29
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
In this example, the sort order isn't fully specified, and so you might get the San Francisco rows in either order. But you'd always get the results shown above if you do:
SELECT * FROM weather ORDER BY city, temp_lo;
You can request that duplicate rows be removed from the result of a query:
SELECT DISTINCT city FROM weather;
     city
---------------
 Hayward
 San Francisco
(2 rows)
Here again, the result row ordering might vary.
You can ensure consistent results by using DISTINCT and ORDER BY together: [3]
SELECT DISTINCT city FROM weather ORDER BY city;
Thus far, our queries have only accessed one table at a time. Queries can access multiple tables at once, or access the same table in such a way that multiple rows of the table are being processed at the same time. Queries that access multiple tables (or multiple instances of the same table) at one time are called join queries. They combine rows from one table with rows from a second table, with an expression specifying which rows are to be paired. For example, to return all the weather records together with the location of the associated city, the database needs to compare the city column of each row of the weather table with the name column of all rows in the cities table, and select the pairs of rows where these values match. [4] This would be accomplished by the following query:
SELECT * FROM weather JOIN cities ON city = name;
     city      | temp_lo | temp_hi | prcp |    date    |     name      | location
---------------+---------+---------+------+------------+---------------+-----------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27 | San Francisco | (-194,53)
 San Francisco |      43 |      57 |    0 | 1994-11-29 | San Francisco | (-194,53)
(2 rows)
Observe two things about the result set:
There is no result row for the city of Hayward. This is because there is no matching entry in the cities table for Hayward, so the join ignores the unmatched rows in the weather table. We will see shortly how this can be fixed.
There are two columns containing the city name. This is correct because the lists of columns from the weather and cities tables are concatenated. In practice this is undesirable, though, so you will probably want to list the output columns explicitly rather than using *:
SELECT city, temp_lo, temp_hi, prcp, date, location FROM weather JOIN cities ON city = name;
Since the columns all had different names, the parser automatically found which table they belong to. If there were duplicate column names in the two tables you'd need to qualify the column names to show which one you meant, as in:
SELECT weather.city, weather.temp_lo, weather.temp_hi,
       weather.prcp, weather.date, cities.location
    FROM weather JOIN cities ON weather.city = cities.name;
It is widely considered good style to qualify all column names in a join query, so that the query won't fail if a duplicate column name is later added to one of the tables.
Join queries of the kind seen thus far can also be written in this form:
SELECT * FROM weather, cities WHERE city = name;
This syntax pre-dates the JOIN/ON syntax, which was introduced in SQL-92. The tables are simply listed in the FROM clause, and the comparison expression is added to the WHERE clause. The results from this older implicit syntax and the newer explicit JOIN/ON syntax are identical. But for a reader of the query, the explicit syntax makes its meaning easier to understand: The join condition is introduced by its own key word whereas previously the condition was mixed into the WHERE clause together with other conditions.
Now we will figure out how we can get the Hayward records back in. What we want the query to do is to scan the weather table and for each row to find the matching cities row(s). If no matching row is found we want some “empty values” to be substituted for the cities table's columns. This kind of query is called an outer join. (The joins we have seen so far are inner joins.) The command looks like this:
SELECT * FROM weather LEFT OUTER JOIN cities ON weather.city = cities.name;
     city      | temp_lo | temp_hi | prcp |    date    |     name      | location
---------------+---------+---------+------+------------+---------------+-----------
 Hayward       |      37 |      54 |      | 1994-11-29 |               |
 San Francisco |      46 |      50 | 0.25 | 1994-11-27 | San Francisco | (-194,53)
 San Francisco |      43 |      57 |    0 | 1994-11-29 | San Francisco | (-194,53)
(3 rows)
This query is called a left outer join because the table mentioned on the left of the join operator will have each of its rows in the output at least once, whereas the table on the right will only have those rows output that match some row of the left table. When outputting a left-table row for which there is no right-table match, empty (null) values are substituted for the right-table columns.
Exercise: There are also right outer joins and full outer joins. Try to find out what those do.
We can also join a table against itself. This is called a self join. As an example, suppose we wish to find all the weather records that are in the temperature range of other weather records. So we need to compare the temp_lo and temp_hi columns of each weather row to the temp_lo and temp_hi columns of all other weather rows. We can do this with the following query:
SELECT w1.city, w1.temp_lo AS low, w1.temp_hi AS high,
       w2.city, w2.temp_lo AS low, w2.temp_hi AS high
    FROM weather w1 JOIN weather w2
        ON w1.temp_lo < w2.temp_lo AND w1.temp_hi > w2.temp_hi;
     city      | low | high |     city      | low | high
---------------+-----+------+---------------+-----+------
 San Francisco |  43 |   57 | San Francisco |  46 |   50
 Hayward       |  37 |   54 | San Francisco |  46 |   50
(2 rows)
Here we have relabeled the weather table as w1 and w2 to be able to distinguish the left and right side of the join. You can also use these kinds of aliases in other queries to save some typing, e.g.:
SELECT * FROM weather w JOIN cities c ON w.city = c.name;
You will encounter this style of abbreviating quite frequently.
Like most other relational database products, PostgreSQL supports aggregate functions. An aggregate function computes a single result from multiple input rows. For example, there are aggregates to compute the count, sum, avg (average), max (maximum) and min (minimum) over a set of rows.
As an example, we can find the highest low-temperature reading anywhere with:
SELECT max(temp_lo) FROM weather;
 max
-----
  46
(1 row)
If we wanted to know what city (or cities) that reading occurred in, we might try:
SELECT city FROM weather WHERE temp_lo = max(temp_lo);     -- WRONG
but this will not work since the aggregate max cannot be used in the WHERE clause. (This restriction exists because the WHERE clause determines which rows will be included in the aggregate calculation; so obviously it has to be evaluated before aggregate functions are computed.)
However, as is often the case the query can be restated to accomplish the desired result, here by using a subquery:
SELECT city FROM weather WHERE temp_lo = (SELECT max(temp_lo) FROM weather);
     city
---------------
 San Francisco
(1 row)
This is OK because the subquery is an independent computation that computes its own aggregate separately from what is happening in the outer query.
Aggregates are also very useful in combination with GROUP BY clauses. For example, we can get the number of readings and the maximum low temperature observed in each city with:
SELECT city, count(*), max(temp_lo) FROM weather GROUP BY city;
     city      | count | max
---------------+-------+-----
 Hayward       |     1 |  37
 San Francisco |     2 |  46
(2 rows)
which gives us one output row per city. Each aggregate result is computed over the table rows matching that city. We can filter these grouped rows using HAVING:
SELECT city, count(*), max(temp_lo) FROM weather GROUP BY city HAVING max(temp_lo) < 40;
  city   | count | max
---------+-------+-----
 Hayward |     1 |  37
(1 row)
which gives us the same results for only the cities that have all temp_lo values below 40. Finally, if we only care about cities whose names begin with “S”, we might do:
SELECT city, count(*), max(temp_lo) FROM weather WHERE city LIKE 'S%' -- (1) GROUP BY city;
     city      | count | max
---------------+-------+-----
 San Francisco |     2 |  46
(1 row)
(1) The LIKE operator does pattern matching and is explained in Section 9.7.
It is important to understand the interaction between aggregates and SQL's WHERE and HAVING clauses. The fundamental difference between WHERE and HAVING is this: WHERE selects input rows before groups and aggregates are computed (thus, it controls which rows go into the aggregate computation), whereas HAVING selects group rows after groups and aggregates are computed. Thus, the WHERE clause must not contain aggregate functions; it makes no sense to try to use an aggregate to determine which rows will be inputs to the aggregates. On the other hand, the HAVING clause always contains aggregate functions. (Strictly speaking, you are allowed to write a HAVING clause that doesn't use aggregates, but it's seldom useful. The same condition could be used more efficiently at the WHERE stage.)
In the previous example, we can apply the city name restriction in WHERE, since it needs no aggregate. This is more efficient than adding the restriction to HAVING, because we avoid doing the grouping and aggregate calculations for all rows that fail the WHERE check.
Another way to select the rows that go into an aggregate computation is to use FILTER, which is a per-aggregate option:
SELECT city, count(*) FILTER (WHERE temp_lo < 45), max(temp_lo) FROM weather GROUP BY city;
     city      | count | max
---------------+-------+-----
 Hayward       |     1 |  37
 San Francisco |     1 |  46
(2 rows)
FILTER is much like WHERE, except that it removes rows only from the input of the particular aggregate function that it is attached to. Here, the count aggregate counts only rows with temp_lo below 45; but the max aggregate is still applied to all rows, so it still finds the reading of 46.
You can update existing rows using the UPDATE command. Suppose you discover the temperature readings are all off by 2 degrees after November 28. You can correct the data as follows:
UPDATE weather
    SET temp_hi = temp_hi - 2,  temp_lo = temp_lo - 2
    WHERE date > '1994-11-28';
Look at the new state of the data:
SELECT * FROM weather;

     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
 San Francisco |      41 |      55 |    0 | 1994-11-29
 Hayward       |      35 |      52 |      | 1994-11-29
(3 rows)
Rows can be removed from a table using the DELETE command. Suppose you are no longer interested in the weather of Hayward. Then you can do the following to delete those rows from the table:
DELETE FROM weather WHERE city = 'Hayward';
All weather records belonging to Hayward are removed.
SELECT * FROM weather;
     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
 San Francisco |      41 |      55 |    0 | 1994-11-29
(2 rows)
One should be wary of statements of the form
DELETE FROM tablename;
Without a qualification, DELETE will remove all rows from the given table, leaving it empty. The system will not request confirmation before doing this!
[2] While SELECT * is useful for off-the-cuff queries, it is widely considered bad style in production code, since adding a column to the table would change the results.
[3] In some database systems, including older versions of PostgreSQL, the implementation of DISTINCT automatically orders the rows and so ORDER BY is unnecessary. But this is not required by the SQL standard, and current PostgreSQL does not guarantee that DISTINCT causes the rows to be ordered.
[4] This is only a conceptual model. The join is usually performed in a more efficient manner than actually comparing each possible pair of rows, but this is invisible to the user.
In the previous chapter we have covered the basics of using SQL to store and access your data in PostgreSQL. We will now discuss some more advanced features of SQL that simplify management and prevent loss or corruption of your data. Finally, we will look at some PostgreSQL extensions.
This chapter will on occasion refer to examples found in Chapter 2 to change or improve them, so it will be useful to have read that chapter. Some examples from this chapter can also be found in advanced.sql in the tutorial directory. This file also contains some sample data to load, which is not repeated here. (Refer to Section 2.1 for how to use the file.)
Refer back to the queries in Section 2.6. Suppose the combined listing of weather records and city location is of particular interest to your application, but you do not want to type the query each time you need it. You can create a view over the query, which gives a name to the query that you can refer to like an ordinary table:
CREATE VIEW myview AS
    SELECT name, temp_lo, temp_hi, prcp, date, location
        FROM weather, cities
        WHERE city = name;

SELECT * FROM myview;
Making liberal use of views is a key aspect of good SQL database design. Views allow you to encapsulate the details of the structure of your tables, which might change as your application evolves, behind consistent interfaces.
Views can be used in almost any place a real table can be used. Building views upon other views is not uncommon.
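For instance, a further view could be defined on top of myview; the name rainy_days here is just an illustrative choice:
CREATE VIEW rainy_days AS
    SELECT name, date, prcp
        FROM myview
        WHERE prcp > 0.0;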
Recall the weather and cities tables from Chapter 2. Consider the following problem: You want to make sure that no one can insert rows in the weather table that do not have a matching entry in the cities table. This is called maintaining the referential integrity of your data. In simplistic database systems this would be implemented (if at all) by first looking at the cities table to check if a matching record exists, and then inserting or rejecting the new weather records. This approach has a number of problems and is very inconvenient, so PostgreSQL can do this for you.
The new declaration of the tables would look like this:
CREATE TABLE cities (
    name      varchar(80) primary key,
    location  point
);

CREATE TABLE weather (
    city      varchar(80) references cities(name),
    temp_lo   int,
    temp_hi   int,
    prcp      real,
    date      date
);
Now try inserting an invalid record:
INSERT INTO weather VALUES ('Berkeley', 45, 53, 0.0, '1994-11-28');
ERROR:  insert or update on table "weather" violates foreign key constraint "weather_city_fkey"
DETAIL:  Key (city)=(Berkeley) is not present in table "cities".
The behavior of foreign keys can be finely tuned to your application. We will not go beyond this simple example in this tutorial, but just refer you to Chapter 5 for more information. Making correct use of foreign keys will definitely improve the quality of your database applications, so you are strongly encouraged to learn about them.
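As one small, hedged example of such tuning, the foreign key declaration can specify what should happen when a referenced city is deleted; with ON DELETE CASCADE the dependent weather rows would be removed automatically:
CREATE TABLE weather (
    city      varchar(80) references cities(name) ON DELETE CASCADE,
    temp_lo   int,
    temp_hi   int,
    prcp      real,
    date      date
);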
Transactions are a fundamental concept of all database systems. The essential point of a transaction is that it bundles multiple steps into a single, all-or-nothing operation. The intermediate states between the steps are not visible to other concurrent transactions, and if some failure occurs that prevents the transaction from completing, then none of the steps affect the database at all.
For example, consider a bank database that contains balances for various customer accounts, as well as total deposit balances for branches. Suppose that we want to record a payment of $100.00 from Alice's account to Bob's account. Simplifying outrageously, the SQL commands for this might look like:
UPDATE accounts SET balance = balance - 100.00
    WHERE name = 'Alice';
UPDATE branches SET balance = balance - 100.00
    WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Alice');
UPDATE accounts SET balance = balance + 100.00
    WHERE name = 'Bob';
UPDATE branches SET balance = balance + 100.00
    WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Bob');
The details of these commands are not important here; the important point is that there are several separate updates involved to accomplish this rather simple operation. Our bank's officers will want to be assured that either all these updates happen, or none of them happen. It would certainly not do for a system failure to result in Bob receiving $100.00 that was not debited from Alice. Nor would Alice long remain a happy customer if she was debited without Bob being credited. We need a guarantee that if something goes wrong partway through the operation, none of the steps executed so far will take effect. Grouping the updates into a transaction gives us this guarantee. A transaction is said to be atomic: from the point of view of other transactions, it either happens completely or not at all.
We also want a guarantee that once a transaction is completed and acknowledged by the database system, it has indeed been permanently recorded and won't be lost even if a crash ensues shortly thereafter. For example, if we are recording a cash withdrawal by Bob, we do not want any chance that the debit to his account will disappear in a crash just after he walks out the bank door. A transactional database guarantees that all the updates made by a transaction are logged in permanent storage (i.e., on disk) before the transaction is reported complete.
Another important property of transactional databases is closely related to the notion of atomic updates: when multiple transactions are running concurrently, each one should not be able to see the incomplete changes made by others. For example, if one transaction is busy totalling all the branch balances, it would not do for it to include the debit from Alice's branch but not the credit to Bob's branch, nor vice versa. So transactions must be all-or-nothing not only in terms of their permanent effect on the database, but also in terms of their visibility as they happen. The updates made so far by an open transaction are invisible to other transactions until the transaction completes, whereupon all the updates become visible simultaneously.
In PostgreSQL, a transaction is set up by surrounding the SQL commands of the transaction with BEGIN and COMMIT commands. So our banking transaction would actually look like:
BEGIN;
UPDATE accounts SET balance = balance - 100.00
    WHERE name = 'Alice';
-- etc etc
COMMIT;
If, partway through the transaction, we decide we do not want to commit (perhaps we just noticed that Alice's balance went negative), we can issue the command ROLLBACK instead of COMMIT, and all our updates so far will be canceled.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a transaction block.
Some client libraries issue BEGIN and COMMIT commands automatically, so that you might get the effect of transaction blocks without asking. Check the documentation for the interface you are using.
It's possible to control the statements in a transaction in a more granular fashion through the use of savepoints. Savepoints allow you to selectively discard parts of the transaction, while committing the rest. After defining a savepoint with SAVEPOINT, you can if needed roll back to the savepoint with ROLLBACK TO. All the transaction's database changes between defining the savepoint and rolling back to it are discarded, but changes earlier than the savepoint are kept.
After rolling back to a savepoint, it continues to be defined, so you can roll back to it several times. Conversely, if you are sure you won't need to roll back to a particular savepoint again, it can be released, so the system can free some resources. Keep in mind that either releasing or rolling back to a savepoint will automatically release all savepoints that were defined after it.
All this is happening within the transaction block, so none of it is visible to other database sessions. When and if you commit the transaction block, the committed actions become visible as a unit to other sessions, while the rolled-back actions never become visible at all.
Remembering the bank database, suppose we debit $100.00 from Alice's account, and credit Bob's account, only to find later that we should have credited Wally's account. We could do it using savepoints like this:
BEGIN;
UPDATE accounts SET balance = balance - 100.00
    WHERE name = 'Alice';
SAVEPOINT my_savepoint;
UPDATE accounts SET balance = balance + 100.00
    WHERE name = 'Bob';
-- oops ... forget that and use Wally's account
ROLLBACK TO my_savepoint;
UPDATE accounts SET balance = balance + 100.00
    WHERE name = 'Wally';
COMMIT;
This example is, of course, oversimplified, but there's a lot of control possible in a transaction block through the use of savepoints. Moreover, ROLLBACK TO is the only way to regain control of a transaction block that was put in aborted state by the system due to an error, short of rolling it back completely and starting again.
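To illustrate releasing a savepoint that is no longer needed (a minimal sketch reusing the bank example above):
BEGIN;
UPDATE accounts SET balance = balance - 100.00
    WHERE name = 'Alice';
SAVEPOINT my_savepoint;
UPDATE accounts SET balance = balance + 100.00
    WHERE name = 'Bob';
-- Bob is the right recipient after all, so the savepoint can be released
RELEASE SAVEPOINT my_savepoint;
COMMIT;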
A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. However, window functions do not cause rows to become grouped into a single output row like non-window aggregate calls would. Instead, the rows retain their separate identities. Behind the scenes, the window function is able to access more than just the current row of the query result.
Here is an example that shows how to compare each employee's salary with the average salary in his or her department:
SELECT depname, empno, salary, avg(salary) OVER (PARTITION BY depname) FROM empsalary;

  depname  | empno | salary |          avg
-----------+-------+--------+-----------------------
 develop   |    11 |   5200 | 5020.0000000000000000
 develop   |     7 |   4200 | 5020.0000000000000000
 develop   |     9 |   4500 | 5020.0000000000000000
 develop   |     8 |   6000 | 5020.0000000000000000
 develop   |    10 |   5200 | 5020.0000000000000000
 personnel |     5 |   3500 | 3700.0000000000000000
 personnel |     2 |   3900 | 3700.0000000000000000
 sales     |     3 |   4800 | 4866.6666666666666667
 sales     |     1 |   5000 | 4866.6666666666666667
 sales     |     4 |   4800 | 4866.6666666666666667
(10 rows)

The first three output columns come directly from the table empsalary, and there is one output row for each row in the table. The fourth column represents an average taken across all the table rows that have the same depname value as the current row. (This actually is the same function as the non-window avg aggregate, but the OVER clause causes it to be treated as a window function and computed across the window frame.)
A window function call always contains an OVER clause directly following the window function's name and argument(s). This is what syntactically distinguishes it from a normal function or non-window aggregate. The OVER clause determines exactly how the rows of the query are split up for processing by the window function.
The PARTITION BY clause within OVER divides the rows into groups, or partitions, that share the same values of the PARTITION BY expression(s). For each row, the window function is computed across the rows that fall into the same partition as the current row.
You can also control the order in which rows are processed by window functions using ORDER BY within OVER. (The window ORDER BY does not even have to match the order in which the rows are output.) Here is an example:
SELECT depname, empno, salary,
       rank() OVER (PARTITION BY depname ORDER BY salary DESC)
FROM empsalary;

  depname  | empno | salary | rank
-----------+-------+--------+------
 develop   |     8 |   6000 |    1
 develop   |    10 |   5200 |    2
 develop   |    11 |   5200 |    2
 develop   |     9 |   4500 |    4
 develop   |     7 |   4200 |    5
 personnel |     2 |   3900 |    1
 personnel |     5 |   3500 |    2
 sales     |     1 |   5000 |    1
 sales     |     4 |   4800 |    2
 sales     |     3 |   4800 |    2
(10 rows)
As shown here, the rank function produces a numerical rank for each distinct ORDER BY value in the current row's partition, using the order defined by the ORDER BY clause. rank needs no explicit parameter, because its behavior is entirely determined by the OVER clause.
The rows considered by a window function are those of the “virtual table” produced by the query's FROM clause as filtered by its WHERE, GROUP BY, and HAVING clauses if any. For example, a row removed because it does not meet the WHERE condition is not seen by any window function. A query can contain multiple window functions that slice up the data in different ways using different OVER clauses, but they all act on the same collection of rows defined by this virtual table.
We already saw that ORDER BY can be omitted if the ordering of rows is not important. It is also possible to omit PARTITION BY, in which case there is a single partition containing all rows.
There is another important concept associated with window functions: for each row, there is a set of rows within its partition called its window frame. Some window functions act only on the rows of the window frame, rather than of the whole partition. By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the ORDER BY clause. When ORDER BY is omitted the default frame consists of all rows in the partition. [5]
Here is an example using sum:
SELECT salary, sum(salary) OVER () FROM empsalary;
 salary |  sum
--------+-------
   5200 | 47100
   5000 | 47100
   3500 | 47100
   4800 | 47100
   3900 | 47100
   4200 | 47100
   4500 | 47100
   4800 | 47100
   6000 | 47100
   5200 | 47100
(10 rows)
Above, since there is no ORDER BY in the OVER clause, the window frame is the same as the partition, which for lack of PARTITION BY is the whole table; in other words each sum is taken over the whole table and so we get the same result for each output row. But if we add an ORDER BY clause, we get very different results:
SELECT salary, sum(salary) OVER (ORDER BY salary) FROM empsalary;
 salary |  sum
--------+-------
   3500 |  3500
   3900 |  7400
   4200 | 11600
   4500 | 16100
   4800 | 25700
   4800 | 25700
   5000 | 30700
   5200 | 41100
   5200 | 41100
   6000 | 47100
(10 rows)
Here the sum is taken from the first (lowest) salary up through the current one, including any duplicates of the current one (notice the results for the duplicated salaries).
Window functions are permitted only in the SELECT list and the ORDER BY clause of the query. They are forbidden elsewhere, such as in GROUP BY, HAVING and WHERE clauses. This is because they logically execute after the processing of those clauses. Also, window functions execute after non-window aggregate functions. This means it is valid to include an aggregate function call in the arguments of a window function, but not vice versa.
If there is a need to filter or group rows after the window calculations are performed, you can use a sub-select. For example:
SELECT depname, empno, salary, enroll_date
FROM
  (SELECT depname, empno, salary, enroll_date,
          rank() OVER (PARTITION BY depname ORDER BY salary DESC, empno) AS pos
     FROM empsalary
  ) AS ss
WHERE pos < 3;
The above query only shows the rows from the inner query having rank less than 3.
When a query involves multiple window functions, it is possible to write out each one with a separate OVER clause, but this is duplicative and error-prone if the same windowing behavior is wanted for several functions. Instead, each windowing behavior can be named in a WINDOW clause and then referenced in OVER. For example:
SELECT sum(salary) OVER w, avg(salary) OVER w
  FROM empsalary
  WINDOW w AS (PARTITION BY depname ORDER BY salary DESC);
More details about window functions can be found in Section 4.2.8, Section 9.22, Section 7.2.5, and the SELECT reference page.
Inheritance is a concept from object-oriented databases. It opens up interesting new possibilities of database design.
Let's create two tables: A table cities and a table capitals. Naturally, capitals are also cities, so you want some way to show the capitals implicitly when you list all cities. If you're really clever you might invent some scheme like this:
CREATE TABLE capitals (
  name       text,
  population real,
  elevation  int,    -- (in ft)
  state      char(2)
);

CREATE TABLE non_capitals (
  name       text,
  population real,
  elevation  int     -- (in ft)
);

CREATE VIEW cities AS
  SELECT name, population, elevation FROM capitals
    UNION
  SELECT name, population, elevation FROM non_capitals;
This works OK as far as querying goes, but it gets ugly when you need to update several rows, for one thing.
A better solution is this:
CREATE TABLE cities (
  name       text,
  population real,
  elevation  int     -- (in ft)
);

CREATE TABLE capitals (
  state      char(2) UNIQUE NOT NULL
) INHERITS (cities);
In this case, a row of capitals inherits all columns (name, population, and elevation) from its parent, cities. The type of the column name is text, a native PostgreSQL type for variable length character strings. The capitals table has an additional column, state, which shows its state abbreviation. In PostgreSQL, a table can inherit from zero or more other tables.
For example, the following query finds the names of all cities, including state capitals, that are located at an elevation over 500 feet:
SELECT name, elevation FROM cities WHERE elevation > 500;
which returns:
   name    | elevation
-----------+-----------
 Las Vegas |      2174
 Mariposa  |      1953
 Madison   |       845
(3 rows)

On the other hand, the following query finds all the cities that are not state capitals and are situated at an elevation over 500 feet:
SELECT name, elevation
    FROM ONLY cities
    WHERE elevation > 500;

   name    | elevation
-----------+-----------
 Las Vegas |      2174
 Mariposa  |      1953
(2 rows)

Here the ONLY before cities indicates that the query should be run over only the cities table, and not tables below cities in the inheritance hierarchy. Many of the commands that we have already discussed — SELECT, UPDATE, and DELETE — support this ONLY notation.
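For instance (a minimal sketch reusing the tables above), the same notation restricts an update or delete to the parent table alone:
UPDATE ONLY cities SET elevation = elevation + 1 WHERE name = 'Mariposa';
DELETE FROM ONLY cities WHERE name = 'Mariposa';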
Although inheritance is frequently useful, it has not been integrated with unique constraints or foreign keys, which limits its usefulness. See Section 5.10 for more detail.
PostgreSQL has many features not touched upon in this tutorial introduction, which has been oriented toward newer users of SQL. These features are discussed in more detail in the remainder of this book.
If you feel you need more introductory material, please visit the PostgreSQL web site for links to more resources.
[5] There are options to define the window frame in other ways, but this tutorial does not cover them. See Section 4.2.8 for details.
This part describes the use of the SQL language in PostgreSQL. We start with describing the general syntax of SQL, then explain how to create the structures to hold data, how to populate the database, and how to query it. The middle part lists the available data types and functions for use in SQL commands. The rest treats several aspects that are important for tuning a database for optimal performance.
The information in this part is arranged so that a novice user can follow it start to end to gain a full understanding of the topics without having to refer forward too many times. The chapters are intended to be self-contained, so that advanced users can read the chapters individually as they choose. The information in this part is presented in a narrative fashion in topical units. Readers looking for a complete description of a particular command should see Part VI.
Readers of this part should know how to connect to a PostgreSQL database and issue SQL commands. Readers that are unfamiliar with these issues are encouraged to read Part I first. SQL commands are typically entered using the PostgreSQL interactive terminal psql, but other programs that have similar functionality can be used as well.
This chapter describes the syntax of SQL. It forms the foundation for understanding the following chapters which will go into detail about how SQL commands are applied to define and modify data.
We also advise users who are already familiar with SQL to read this chapter carefully because it contains several rules and concepts that are implemented inconsistently among SQL databases or that are specific to PostgreSQL.
SQL input consists of a sequence of commands. A command is composed of a sequence of tokens, terminated by a semicolon (“;”). The end of the input stream also terminates a command. Which tokens are valid depends on the syntax of the particular command.
A token can be a key word, an identifier, a quoted identifier, a literal (or constant), or a special character symbol. Tokens are normally separated by whitespace (space, tab, newline), but need not be if there is no ambiguity (which is generally only the case if a special character is adjacent to some other token type).
For example, the following is (syntactically) valid SQL input:
SELECT * FROM MY_TABLE;
UPDATE MY_TABLE SET A = 5;
INSERT INTO MY_TABLE VALUES (3, 'hi there');
This is a sequence of three commands, one per line (although this is not required; more than one command can be on a line, and commands can usefully be split across lines).
Additionally, comments can occur in SQL input. They are not tokens, they are effectively equivalent to whitespace.
The SQL syntax is not very consistent regarding what tokens identify commands and which are operands or parameters. The first few tokens are generally the command name, so in the above example we would usually speak of a “SELECT”, an “UPDATE”, and an “INSERT” command. But for instance the UPDATE command always requires a SET token to appear in a certain position, and this particular variation of INSERT also requires a VALUES in order to be complete. The precise syntax rules for each command are described in Part VI.
Tokens such as SELECT, UPDATE, or VALUES in the example above are examples of key words, that is, words that have a fixed meaning in the SQL language. The tokens MY_TABLE and A are examples of identifiers. They identify names of tables, columns, or other database objects, depending on the command they are used in. Therefore they are sometimes simply called “names”. Key words and identifiers have the same lexical structure, meaning that one cannot know whether a token is an identifier or a key word without knowing the language. A complete list of key words can be found in Appendix C.
SQL identifiers and key words must begin with a letter (a-z, but also letters with diacritical marks and non-Latin letters) or an underscore (_). Subsequent characters in an identifier or key word can be letters, underscores, digits (0-9), or dollar signs ($). Note that dollar signs are not allowed in identifiers according to the letter of the SQL standard, so their use might render applications less portable.
The SQL standard will not define a key word that contains
digits or starts or ends with an underscore, so identifiers of this
form are safe against possible conflict with future extensions of the
standard.
The system uses no more than NAMEDATALEN-1 bytes of an identifier; longer names can be written in commands, but they will be truncated. By default, NAMEDATALEN is 64 so the maximum identifier length is 63 bytes. If this limit is problematic, it can be raised by changing the NAMEDATALEN constant in src/include/pg_config_manual.h.
Key words and unquoted identifiers are case insensitive. Therefore:
UPDATE MY_TABLE SET A = 5;
can equivalently be written as:
uPDaTE my_TabLE SeT a = 5;
A convention often used is to write key words in upper case and names in lower case, e.g.:
UPDATE my_table SET a = 5;
There is a second kind of identifier: the delimited identifier or quoted identifier. It is formed by enclosing an arbitrary sequence of characters in double-quotes ("). A delimited identifier is always an identifier, never a key word. So "select" could be used to refer to a column or table named “select”, whereas an unquoted select would be taken as a key word and would therefore provoke a parse error when used where a table or column name is expected. The example can be written with quoted identifiers like this:
UPDATE "my_table" SET "a" = 5;
Quoted identifiers can contain any character, except the character with code zero. (To include a double quote, write two double quotes.) This allows constructing table or column names that would otherwise not be possible, such as ones containing spaces or ampersands. The length limitation still applies.
Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case. For example, the identifiers FOO, foo, and "foo" are considered the same by PostgreSQL, but "Foo" and "FOO" are different from these three and each other. (The folding of unquoted names to lower case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be folded to upper case. Thus, foo should be equivalent to "FOO" not "foo" according to the standard. If you want to write portable applications you are advised to always quote a particular name or never quote it.)
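A small sketch of the folding rules (using a hypothetical table name):
CREATE TABLE foo (a int);
SELECT * FROM FOO;      -- folded to foo: same table
SELECT * FROM "foo";    -- also the same table
SELECT * FROM "FOO";    -- a different name: error, relation "FOO" does not exist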
A variant of quoted identifiers allows including escaped Unicode characters identified by their code points. This variant starts with U& (upper or lower case U followed by ampersand) immediately before the opening double quote, without any spaces in between, for example U&"foo". (Note that this creates an ambiguity with the operator &. Use spaces around the operator to avoid this problem.) Inside the quotes, Unicode characters can be specified in escaped form by writing a backslash followed by the four-digit hexadecimal code point number or alternatively a backslash followed by a plus sign followed by a six-digit hexadecimal code point number. For example, the identifier "data" could be written as
U&"d\0061t\+000061"
The following less trivial example writes the Russian word “slon” (elephant) in Cyrillic letters:
U&"\0441\043B\043E\043D"
If a different escape character than backslash is desired, it can be specified using the UESCAPE clause after the string, for example:
U&"d!0061t!+000061" UESCAPE '!'
The escape character can be any single character other than a hexadecimal digit, the plus sign, a single quote, a double quote, or a whitespace character. Note that the escape character is written in single quotes, not double quotes, after UESCAPE.
To include the escape character in the identifier literally, write it twice.
Either the 4-digit or the 6-digit escape form can be used to specify UTF-16 surrogate pairs to compose characters with code points larger than U+FFFF, although the availability of the 6-digit form technically makes this unnecessary. (Surrogate pairs are not stored directly, but are combined into a single code point.)
If the server encoding is not UTF-8, the Unicode code point identified by one of these escape sequences is converted to the actual server encoding; an error is reported if that's not possible.
There are three kinds of implicitly-typed constants in PostgreSQL: strings, bit strings, and numbers. Constants can also be specified with explicit types, which can enable more accurate representation and more efficient handling by the system. These alternatives are discussed in the following subsections.
A string constant in SQL is an arbitrary sequence of characters bounded by single quotes ('), for example 'This is a string'. To include a single-quote character within a string constant, write two adjacent single quotes, e.g., 'Dianne''s horse'. Note that this is not the same as a double-quote character (").
Two string constants that are only separated by whitespace with at least one newline are concatenated and effectively treated as if the string had been written as one constant. For example:
SELECT 'foo'
'bar';
is equivalent to:
SELECT 'foobar';
but:
SELECT 'foo' 'bar';
is not valid syntax. (This slightly bizarre behavior is specified by SQL; PostgreSQL is following the standard.)
PostgreSQL also accepts “escape” string constants, which are an extension to the SQL standard. An escape string constant is specified by writing the letter E (upper or lower case) just before the opening single quote, e.g., E'foo'. (When continuing an escape string constant across lines, write E only before the first opening quote.) Within an escape string, a backslash character (\) begins a C-like backslash escape sequence, in which the combination of backslash and following character(s) represent a special byte value, as shown in Table 4.1.
Table 4.1. Backslash Escape Sequences
Backslash Escape Sequence | Interpretation |
---|---|
\b | backspace |
\f | form feed |
\n | newline |
\r | carriage return |
\t | tab |
\o, \oo, \ooo (o = 0–7) | octal byte value |
\xh, \xhh (h = 0–9, A–F) | hexadecimal byte value |
\uxxxx, \Uxxxxxxxx (x = 0–9, A–F) | 16 or 32-bit hexadecimal Unicode character value |
Any other character following a backslash is taken literally. Thus, to include a backslash character, write two backslashes (\\). Also, a single quote can be included in an escape string by writing \', in addition to the normal way of ''.
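A small sketch of these escapes in use:
SELECT E'line one\nline two';   -- \n becomes a newline
SELECT E'Dianne\'s horse';      -- same value as 'Dianne''s horse'
SELECT E'a backslash: \\';      -- doubled backslash yields a single backslash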
It is your responsibility that the byte sequences you create, especially when using the octal or hexadecimal escapes, compose valid characters in the server character set encoding. A useful alternative is to use Unicode escapes or the alternative Unicode escape syntax, explained in Section 4.1.2.3; then the server will check that the character conversion is possible.
If the configuration parameter standard_conforming_strings is off, then PostgreSQL recognizes backslash escapes in both regular and escape string constants. However, as of PostgreSQL 9.1, the default is on, meaning that backslash escapes are recognized only in escape string constants. This behavior is more standards-compliant, but might break applications which rely on the historical behavior, where backslash escapes were always recognized. As a workaround, you can set this parameter to off, but it is better to migrate away from using backslash escapes. If you need to use a backslash escape to represent a special character, write the string constant with an E.
In addition to standard_conforming_strings, the configuration parameters escape_string_warning and backslash_quote govern treatment of backslashes in string constants.
The character with the code zero cannot be in a string constant.
PostgreSQL also supports another type of escape syntax for strings that allows specifying arbitrary Unicode characters by code point. A Unicode escape string constant starts with U& (upper or lower case letter U followed by ampersand) immediately before the opening quote, without any spaces in between, for example U&'foo'. (Note that this creates an ambiguity with the operator &. Use spaces around the operator to avoid this problem.) Inside the quotes, Unicode characters can be specified in escaped form by writing a backslash followed by the four-digit hexadecimal code point number or alternatively a backslash followed by a plus sign followed by a six-digit hexadecimal code point number. For example, the string 'data' could be written as
U&'d\0061t\+000061'
The following less trivial example writes the Russian word “slon” (elephant) in Cyrillic letters:
U&'\0441\043B\043E\043D'
If a different escape character than backslash is desired, it can be specified using the UESCAPE clause after the string, for example:
U&'d!0061t!+000061' UESCAPE '!'
The escape character can be any single character other than a hexadecimal digit, the plus sign, a single quote, a double quote, or a whitespace character.
To include the escape character in the string literally, write it twice.
Either the 4-digit or the 6-digit escape form can be used to specify UTF-16 surrogate pairs to compose characters with code points larger than U+FFFF, although the availability of the 6-digit form technically makes this unnecessary. (Surrogate pairs are not stored directly, but are combined into a single code point.)
If the server encoding is not UTF-8, the Unicode code point identified by one of these escape sequences is converted to the actual server encoding; an error is reported if that's not possible.
Also, the Unicode escape syntax for string constants only works when the configuration parameter standard_conforming_strings is turned on. This is because otherwise this syntax could confuse clients that parse the SQL statements to the point that it could lead to SQL injections and similar security issues. If the parameter is set to off, this syntax will be rejected with an error message.
While the standard syntax for specifying string constants is usually convenient, it can be difficult to understand when the desired string contains many single quotes, since each of those must be doubled. To allow more readable queries in such situations, PostgreSQL provides another way, called “dollar quoting”, to write string constants. A dollar-quoted string constant consists of a dollar sign ($), an optional “tag” of zero or more characters, another dollar sign, an arbitrary sequence of characters that makes up the string content, a dollar sign, the same tag that began this dollar quote, and a dollar sign. For example, here are two different ways to specify the string “Dianne's horse” using dollar quoting:
$$Dianne's horse$$
$SomeTag$Dianne's horse$SomeTag$
Notice that inside the dollar-quoted string, single quotes can be used without needing to be escaped. Indeed, no characters inside a dollar-quoted string are ever escaped: the string content is always written literally. Backslashes are not special, and neither are dollar signs, unless they are part of a sequence matching the opening tag.
It is possible to nest dollar-quoted string constants by choosing different tags at each nesting level. This is most commonly used in writing function definitions. For example:
$function$
BEGIN
    RETURN ($1 ~ $q$[\t\r\n\v\\]$q$);
END;
$function$
Here, the sequence $q$[\t\r\n\v\\]$q$ represents a dollar-quoted literal string [\t\r\n\v\\], which will be recognized when the function body is executed by PostgreSQL. But since the sequence does not match the outer dollar quoting delimiter $function$, it is just some more characters within the constant so far as the outer string is concerned.
The tag, if any, of a dollar-quoted string follows the same rules as an unquoted identifier, except that it cannot contain a dollar sign. Tags are case sensitive, so $tag$String content$tag$ is correct, but $TAG$String content$tag$ is not.
A dollar-quoted string that follows a keyword or identifier must be separated from it by whitespace; otherwise the dollar quoting delimiter would be taken as part of the preceding identifier.
Dollar quoting is not part of the SQL standard, but it is often a more convenient way to write complicated string literals than the standard-compliant single quote syntax. It is particularly useful when representing string constants inside other constants, as is often needed in procedural function definitions. With single-quote syntax, each backslash in the above example would have to be written as four backslashes, which would be reduced to two backslashes in parsing the original string constant, and then to one when the inner string constant is re-parsed during function execution.
Bit-string constants look like regular string constants with a B (upper or lower case) immediately before the opening quote (no intervening whitespace), e.g., B'1001'. The only characters allowed within bit-string constants are 0 and 1.
Alternatively, bit-string constants can be specified in hexadecimal notation, using a leading X (upper or lower case), e.g., X'1FF'. This notation is equivalent to a bit-string constant with four binary digits for each hexadecimal digit.
Both forms of bit-string constant can be continued across lines in the same way as regular string constants. Dollar quoting cannot be used in a bit-string constant.
Numeric constants are accepted in these general forms:
digits
digits.[digits][e[+-]digits]
[digits].digits[e[+-]digits]
digitse[+-]digits

where digits is one or more decimal digits (0 through 9). At least one digit must be before or after the decimal point, if one is used. At least one digit must follow the exponent marker (e), if one is present. There cannot be any spaces or other characters embedded in the constant. Note that any leading plus or minus sign is not actually considered part of the constant; it is an operator applied to the constant.
These are some examples of valid numeric constants:
42
3.5
4.
.001
5e2
1.925e-3
A numeric constant that contains neither a decimal point nor an exponent is initially presumed to be type integer if its value fits in type integer (32 bits); otherwise it is presumed to be type bigint if its value fits in type bigint (64 bits); otherwise it is taken to be type numeric. Constants that contain decimal points and/or exponents are always initially presumed to be type numeric.
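To illustrate (a small sketch; pg_typeof reports the type resolved for its argument):
SELECT pg_typeof(42);            -- integer
SELECT pg_typeof(4000000000);    -- bigint (does not fit in 32 bits)
SELECT pg_typeof(3.5);           -- numeric
SELECT pg_typeof(5e2);           -- numeric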
The initially assigned data type of a numeric constant is just a starting point for the type resolution algorithms. In most cases the constant will be automatically coerced to the most appropriate type depending on context. When necessary, you can force a numeric value to be interpreted as a specific data type by casting it. For example, you can force a numeric value to be treated as type real (float4) by writing:
REAL '1.23'  -- string style
1.23::REAL   -- PostgreSQL (historical) style
These are actually just special cases of the general casting notations discussed next.
A constant of an arbitrary type can be entered using any one of the following notations:
type 'string'
'string'::type
CAST ( 'string' AS type )

The string constant's text is passed to the input conversion routine for the type called type. The result is a constant of the indicated type. The explicit type cast can be omitted if there is no ambiguity as to the type the constant must be (for example, when it is assigned directly to a table column), in which case it is automatically coerced.
The string constant can be written using either regular SQL notation or dollar-quoting.
It is also possible to specify a type coercion using a function-like syntax:
typename ( 'string' )
but not all type names can be used in this way; see Section 4.2.9 for details.
The ::, CAST(), and function-call syntaxes can also be used to specify run-time type conversions of arbitrary expressions, as discussed in Section 4.2.9. To avoid syntactic ambiguity, the type 'string' syntax can only be used to specify the type of a simple literal constant. Another restriction on the type 'string' syntax is that it does not work for array types; use :: or CAST() to specify the type of an array constant.
The CAST() syntax conforms to SQL. The type 'string' syntax is a generalization of the standard: SQL specifies this syntax only for a few data types, but PostgreSQL allows it for all types. The syntax with :: is historical PostgreSQL usage, as is the function-call syntax.
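For example, the following are equivalent ways of entering the same date constant:
SELECT date '2024-01-01';
SELECT '2024-01-01'::date;
SELECT CAST ( '2024-01-01' AS date );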
An operator name is a sequence of up to NAMEDATALEN-1 (63 by default) characters from the following list:
+ - * / < > = ~ ! @ # % ^ & | ` ?
There are a few restrictions on operator names, however:
-- and /* cannot appear anywhere in an operator name, since they will be taken as the start of a comment.
A multiple-character operator name cannot end in + or -, unless the name also contains at least one of these characters:
~ ! @ # % ^ & | ` ?
For example, @- is an allowed operator name, but *- is not. This restriction allows PostgreSQL to parse SQL-compliant queries without requiring spaces between tokens.
When working with non-SQL-standard operator names, you will usually need to separate adjacent operators with spaces to avoid ambiguity. For example, if you have defined a prefix operator named @, you cannot write X*@Y; you must write X* @Y to ensure that PostgreSQL reads it as two operator names not one.
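A minimal sketch of that situation (the function and operator defined here are purely illustrative):
CREATE FUNCTION negate_int(int) RETURNS int AS 'SELECT -$1' LANGUAGE SQL;
CREATE OPERATOR @ (RIGHTARG = int, FUNCTION = negate_int);  -- a prefix @ operator

SELECT 3 * @ 4;    -- two operators: binary * and prefix @, result -12
-- SELECT 3 *@ 4;  -- would be parsed as a single operator named *@ and fail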
Some characters that are not alphanumeric have a special meaning that is different from being an operator. Details on the usage can be found at the location where the respective syntax element is described. This section only exists to advise the existence and summarize the purposes of these characters.
A dollar sign ($) followed by digits is used to represent a positional parameter in the body of a function definition or a prepared statement. In other contexts the dollar sign can be part of an identifier or a dollar-quoted string constant.
Parentheses (()) have their usual meaning to group expressions and enforce precedence. In some cases parentheses are required as part of the fixed syntax of a particular SQL command.
Brackets ([]) are used to select the elements of an array. See Section 8.15 for more information on arrays.
Commas (,) are used in some syntactical constructs to separate the elements of a list.
The semicolon (;) terminates an SQL command. It cannot appear anywhere within a command, except within a string constant or quoted identifier.
The colon (:) is used to select “slices” from arrays. (See Section 8.15.) In certain SQL dialects (such as Embedded SQL), the colon is used to prefix variable names.
The asterisk (*) is used in some contexts to denote all the fields of a table row or composite value. It also has a special meaning when used as the argument of an aggregate function, namely that the aggregate does not require any explicit parameter.
The period (.) is used in numeric constants, and to separate schema, table, and column names.
A comment is a sequence of characters beginning with double dashes and extending to the end of the line, e.g.:
-- This is a standard SQL comment
Alternatively, C-style block comments can be used:
/* multiline comment
 * with nesting: /* nested block comment */
 */
where the comment begins with /* and extends to the matching occurrence of */. These block comments nest, as specified in the SQL standard but unlike C, so that one can comment out larger blocks of code that might contain existing block comments.
A comment is removed from the input stream before further syntax analysis and is effectively replaced by whitespace.
Table 4.2 shows the precedence and associativity of the operators in PostgreSQL. Most operators have the same precedence and are left-associative. The precedence and associativity of the operators is hard-wired into the parser. Add parentheses if you want an expression with multiple operators to be parsed in some other way than what the precedence rules imply.
Table 4.2. Operator Precedence (highest to lowest)
Operator/Element | Associativity | Description |
---|---|---|
. | left | table/column name separator |
:: | left | PostgreSQL-style typecast |
[ ] | left | array element selection |
+ - | right | unary plus, unary minus |
COLLATE | left | collation selection |
AT | left | AT TIME ZONE |
^ | left | exponentiation |
* / % | left | multiplication, division, modulo |
+ - | left | addition, subtraction |
(any other operator) | left | all other native and user-defined operators |
BETWEEN IN LIKE ILIKE SIMILAR |  | range containment, set membership, string matching |
< > = <= >= <> |  | comparison operators |
IS ISNULL NOTNULL |  | IS TRUE, IS FALSE, IS NULL, IS DISTINCT FROM, etc. |
NOT | right | logical negation |
AND | left | logical conjunction |
OR | left | logical disjunction |
Note that the operator precedence rules also apply to user-defined operators that have the same names as the built-in operators mentioned above. For example, if you define a “+” operator for some custom data type it will have the same precedence as the built-in “+” operator, no matter what yours does.
When a schema-qualified operator name is used in the OPERATOR syntax, as for example in:
SELECT 3 OPERATOR(pg_catalog.+) 4;
the OPERATOR construct is taken to have the default precedence shown in Table 4.2 for “any other operator”. This is true no matter which specific operator appears inside OPERATOR().
PostgreSQL versions before 9.5 used slightly different operator precedence rules. In particular, <= >= and <> used to be treated as generic operators; IS tests used to have higher priority; and NOT BETWEEN and related constructs acted inconsistently, being taken in some cases as having the precedence of NOT rather than BETWEEN. These rules were changed for better compliance with the SQL standard and to reduce confusion from inconsistent treatment of logically equivalent constructs. In most cases, these changes will result in no behavioral change, or perhaps in “no such operator” failures which can be resolved by adding parentheses. However there are corner cases in which a query might change behavior without any parsing error being reported.
Value expressions are used in a variety of contexts, such as in the target list of the SELECT command, as new column values in INSERT or UPDATE, or in search conditions in a number of commands. The result of a value expression is sometimes called a scalar, to distinguish it from the result of a table expression (which is a table). Value expressions are therefore also called scalar expressions (or even simply expressions). The expression syntax allows the calculation of values from primitive parts using arithmetic, logical, set, and other operations.
A value expression is one of the following:
A constant or literal value
A column reference
A positional parameter reference, in the body of a function definition or prepared statement
A subscripted expression
A field selection expression
An operator invocation
A function call
An aggregate expression
A window function call
A type cast
A collation expression
A scalar subquery
An array constructor
A row constructor
Another value expression in parentheses (used to group subexpressions and override precedence)
In addition to this list, there are a number of constructs that can be classified as an expression but do not follow any general syntax rules. These generally have the semantics of a function or operator and are explained in the appropriate location in Chapter 9. An example is the IS NULL clause.
We have already discussed constants in Section 4.1.2. The following sections discuss the remaining options.
A column can be referenced in the form:
correlation.columnname

correlation is the name of a table (possibly qualified with a schema name), or an alias for a table defined by means of a FROM clause. The correlation name and separating dot can be omitted if the column name is unique across all the tables being used in the current query. (See also Chapter 7.)
A positional parameter reference is used to indicate a value that is supplied externally to an SQL statement. Parameters are used in SQL function definitions and in prepared queries. Some client libraries also support specifying data values separately from the SQL command string, in which case parameters are used to refer to the out-of-line data values. The form of a parameter reference is:
$number
For example, consider the definition of a function, dept, as:
CREATE FUNCTION dept(text) RETURNS dept
    AS $$ SELECT * FROM dept WHERE name = $1 $$
    LANGUAGE SQL;
Here the $1 references the value of the first function argument whenever the function is invoked.
If an expression yields a value of an array type, then a specific element of the array value can be extracted by writing
expression[subscript]

or multiple adjacent elements (an “array slice”) can be extracted by writing

expression[lower_subscript:upper_subscript]

(Here, the brackets [ ] are meant to appear literally.) Each subscript is itself an expression, which will be rounded to the nearest integer value.
In general the array expression must be parenthesized, but the parentheses can be omitted when the expression to be subscripted is just a column reference or positional parameter. Also, multiple subscripts can be concatenated when the original array is multidimensional. For example:
mytable.arraycolumn[4]
mytable.two_d_column[17][34]
$1[10:42]
(arrayfunction(a,b))[42]
The parentheses in the last example are required. See Section 8.15 for more about arrays.
If an expression yields a value of a composite type (row type), then a specific field of the row can be extracted by writing
expression.fieldname

In general the row expression must be parenthesized, but the parentheses can be omitted when the expression to be selected from is just a table reference or positional parameter. For example:
mytable.mycolumn
$1.somecolumn
(rowfunction(a,b)).col3
(Thus, a qualified column reference is actually just a special case of the field selection syntax.) An important special case is extracting a field from a table column that is of a composite type:
(compositecol).somefield
(mytable.compositecol).somefield
The parentheses are required here to show that compositecol is a column name not a table name, or that mytable is a table name not a schema name in the second case.
You can ask for all fields of a composite value by writing .*:
(compositecol).*
This notation behaves differently depending on context; see Section 8.16.5 for details.
There are two possible syntaxes for an operator invocation:
expression operator expression (binary infix operator)
operator expression (unary prefix operator)

where the operator token follows the syntax rules of Section 4.1.3, or is one of the key words AND, OR, and NOT, or is a qualified operator name in the form:

OPERATOR(schema.operatorname)
Which particular operators exist and whether they are unary or binary depends on what operators have been defined by the system or the user. Chapter 9 describes the built-in operators.
The syntax for a function call is the name of a function (possibly qualified with a schema name), followed by its argument list enclosed in parentheses:
function_name ([expression [, expression ... ]])
For example, the following computes the square root of 2:
sqrt(2)
The list of built-in functions is in Chapter 9. Other functions can be added by the user.
When issuing queries in a database where some users mistrust other users, observe security precautions from Section 10.3 when writing function calls.
The arguments can optionally have names attached. See Section 4.3 for details.
A function that takes a single argument of composite type can optionally be called using field-selection syntax, and conversely field selection can be written in functional style. That is, the notations col(table) and table.col are interchangeable. This behavior is not SQL-standard but is provided in PostgreSQL because it allows use of functions to emulate “computed fields”. For more information see Section 8.16.5.
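A minimal sketch of the interchangeable notations (the table and function here are hypothetical):
CREATE TABLE emp (name text, salary numeric);
CREATE FUNCTION double_salary(emp) RETURNS numeric
    AS 'SELECT $1.salary * 2' LANGUAGE SQL;

SELECT name, double_salary(emp) FROM emp;   -- function-call notation
SELECT name, emp.double_salary FROM emp;    -- field-selection notation, same result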
An aggregate expression represents the application of an aggregate function across the rows selected by a query. An aggregate function reduces multiple inputs to a single output value, such as the sum or average of the inputs. The syntax of an aggregate expression is one of the following:
aggregate_name (expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name (ALL expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name (DISTINCT expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name ( * ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name ( [ expression [ , ... ] ] ) WITHIN GROUP ( order_by_clause ) [ FILTER ( WHERE filter_clause ) ]
where aggregate_name is a previously defined aggregate (possibly qualified with a schema name) and expression is any value expression that does not itself contain an aggregate expression or a window function call. The optional order_by_clause and filter_clause are described below.
The first form of aggregate expression invokes the aggregate once for each input row. The second form is the same as the first, since ALL is the default. The third form invokes the aggregate once for each distinct value of the expression (or distinct set of values, for multiple expressions) found in the input rows. The fourth form invokes the aggregate once for each input row; since no particular input value is specified, it is generally only useful for the count(*) aggregate function. The last form is used with ordered-set aggregate functions, which are described below.
Most aggregate functions ignore null inputs, so that rows in which one or more of the expression(s) yield null are discarded. This can be assumed to be true, unless otherwise specified, for all built-in aggregates.
For example, count(*) yields the total number of input rows; count(f1) yields the number of input rows in which f1 is non-null, since count ignores nulls; and count(distinct f1) yields the number of distinct non-null values of f1.
Ordinarily, the input rows are fed to the aggregate function in an unspecified order. In many cases this does not matter; for example, min produces the same result no matter what order it receives the inputs in. However, some aggregate functions (such as array_agg and string_agg) produce results that depend on the ordering of the input rows. When using such an aggregate, the optional order_by_clause can be used to specify the desired ordering. The order_by_clause has the same syntax as for a query-level ORDER BY clause, as described in Section 7.5, except that its expressions are always just expressions and cannot be output-column names or numbers. For example:
SELECT array_agg(a ORDER BY b DESC) FROM table;
When dealing with multiple-argument aggregate functions, note that the ORDER BY clause goes after all the aggregate arguments. For example, write this:
SELECT string_agg(a, ',' ORDER BY a) FROM table;
not this:
SELECT string_agg(a ORDER BY a, ',') FROM table;  -- incorrect
The latter is syntactically valid, but it represents a call of a single-argument aggregate function with two ORDER BY keys (the second one being rather useless since it's a constant).
If DISTINCT is specified in addition to an order_by_clause, then all the ORDER BY expressions must match regular arguments of the aggregate; that is, you cannot sort on an expression that is not included in the DISTINCT list.
The ability to specify both DISTINCT and ORDER BY in an aggregate function is a PostgreSQL extension.
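For example (a small sketch reusing the empsalary table from the window-function examples), collecting each distinct value once, in sorted order:
SELECT array_agg(DISTINCT depname ORDER BY depname) FROM empsalary;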
Placing ORDER BY within the aggregate's regular argument list, as described so far, is used when ordering the input rows for general-purpose and statistical aggregates, for which ordering is optional. There is a subclass of aggregate functions called ordered-set aggregates for which an order_by_clause is required, usually because the aggregate's computation is only sensible in terms of a specific ordering of its input rows. Typical examples of ordered-set aggregates include rank and percentile calculations. For an ordered-set aggregate, the order_by_clause is written inside WITHIN GROUP (...), as shown in the final syntax alternative above. The expressions in the order_by_clause are evaluated once per input row just like regular aggregate arguments, sorted as per the order_by_clause's requirements, and fed to the aggregate function as input arguments. (This is unlike the case for a non-WITHIN GROUP order_by_clause, which is not treated as argument(s) to the aggregate function.) The argument expressions preceding WITHIN GROUP, if any, are called direct arguments to distinguish them from the aggregated arguments listed in the order_by_clause. Unlike regular aggregate arguments, direct arguments are evaluated only once per aggregate call, not once per input row. This means that they can contain variables only if those variables are grouped by GROUP BY; this restriction is the same as if the direct arguments were not inside an aggregate expression at all. Direct arguments are typically used for things like percentile fractions, which only make sense as a single value per aggregation calculation. The direct argument list can be empty; in this case, write just () not (*). (PostgreSQL will actually accept either spelling, but only the first way conforms to the SQL standard.)
An example of an ordered-set aggregate call is:
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households;
 percentile_cont
-----------------
           50489
which obtains the 50th percentile, or median, value of the income column from table households. Here, 0.5 is a direct argument; it would make no sense for the percentile fraction to be a value varying across rows.
If FILTER is specified, then only the input rows for which the filter_clause evaluates to true are fed to the aggregate function; other rows are discarded. For example:
SELECT count(*) AS unfiltered,
       count(*) FILTER (WHERE i < 5) AS filtered
FROM generate_series(1,10) AS s(i);
 unfiltered | filtered
------------+----------
         10 |        4
(1 row)
The predefined aggregate functions are described in Section 9.21. Other aggregate functions can be added by the user.
An aggregate expression can only appear in the result list or HAVING clause of a SELECT command. It is forbidden in other clauses, such as WHERE, because those clauses are logically evaluated before the results of aggregates are formed.
When an aggregate expression appears in a subquery (see Section 4.2.11 and Section 9.23), the aggregate is normally evaluated over the rows of the subquery. But an exception occurs if the aggregate's arguments (and filter_clause if any) contain only outer-level variables: the aggregate then belongs to the nearest such outer level, and is evaluated over the rows of that query. The aggregate expression as a whole is then an outer reference for the subquery it appears in, and acts as a constant over any one evaluation of that subquery. The restriction about appearing only in the result list or HAVING clause applies with respect to the query level that the aggregate belongs to.
A window function call represents the application of an aggregate-like function over some portion of the rows selected by a query. Unlike non-window aggregate calls, this is not tied to grouping of the selected rows into a single output row — each row remains separate in the query output. However the window function has access to all the rows that would be part of the current row's group according to the grouping specification (PARTITION BY list) of the window function call.
The syntax of a window function call is one of the following:
function_name ([expression [, expression ... ]]) [ FILTER ( WHERE filter_clause ) ] OVER window_name
function_name ([expression [, expression ... ]]) [ FILTER ( WHERE filter_clause ) ] OVER ( window_definition )
function_name ( * ) [ FILTER ( WHERE filter_clause ) ] OVER window_name
function_name ( * ) [ FILTER ( WHERE filter_clause ) ] OVER ( window_definition )

where window_definition has the syntax

[ existing_window_name ]
[ PARTITION BY expression [, ...] ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ]
[ frame_clause ]

The optional frame_clause can be one of

{ RANGE | ROWS | GROUPS } frame_start [ frame_exclusion ]
{ RANGE | ROWS | GROUPS } BETWEEN frame_start AND frame_end [ frame_exclusion ]

where frame_start and frame_end can be one of

UNBOUNDED PRECEDING
offset PRECEDING
CURRENT ROW
offset FOLLOWING
UNBOUNDED FOLLOWING

and frame_exclusion can be one of

EXCLUDE CURRENT ROW
EXCLUDE GROUP
EXCLUDE TIES
EXCLUDE NO OTHERS
Here, expression represents any value expression that does not itself contain window function calls. window_name is a reference to a named window specification defined in the query's WINDOW clause. Alternatively, a full window_definition can be given within parentheses, using the same syntax as for defining a named window in the WINDOW clause; see the SELECT reference page for details. It's worth pointing out that OVER wname is not exactly equivalent to OVER (wname ...); the latter implies copying and modifying the window definition, and will be rejected if the referenced window specification includes a frame clause.
The PARTITION BY clause groups the rows of the query into partitions, which are processed separately by the window function. PARTITION BY works similarly to a query-level GROUP BY clause, except that its expressions are always just expressions and cannot be output-column names or numbers. Without PARTITION BY, all rows produced by the query are treated as a single partition.
The ORDER BY clause determines the order in which the rows of a partition are processed by the window function. It works similarly to a query-level ORDER BY clause, but likewise cannot use output-column names or numbers. Without ORDER BY, rows are processed in an unspecified order.
The frame_clause specifies the set of rows constituting the window frame, which is a subset of the current partition, for those window functions that act on the frame instead of the whole partition. The set of rows in the frame can vary depending on which row is the current row. The frame can be specified in RANGE, ROWS or GROUPS mode; in each case, it runs from the frame_start to the frame_end. If frame_end is omitted, the end defaults to CURRENT ROW.
A frame_start of UNBOUNDED PRECEDING means that the frame starts with the first row of the partition, and similarly a frame_end of UNBOUNDED FOLLOWING means that the frame ends with the last row of the partition.
In RANGE or GROUPS mode, a frame_start of CURRENT ROW means the frame starts with the current row's first peer row (a row that the window's ORDER BY clause sorts as equivalent to the current row), while a frame_end of CURRENT ROW means the frame ends with the current row's last peer row. In ROWS mode, CURRENT ROW simply means the current row.
In the offset PRECEDING and offset FOLLOWING frame options, the offset must be an expression not containing any variables, aggregate functions, or window functions. The meaning of the offset depends on the frame mode:
In ROWS mode, the offset must yield a non-null, non-negative integer, and the option means that the frame starts or ends the specified number of rows before or after the current row.
In GROUPS mode, the offset again must yield a non-null, non-negative integer, and the option means that the frame starts or ends the specified number of peer groups before or after the current row's peer group, where a peer group is a set of rows that are equivalent in the ORDER BY ordering. (There must be an ORDER BY clause in the window definition to use GROUPS mode.)
In RANGE
mode, these options require that
the ORDER BY
clause specify exactly one column.
The offset
specifies the maximum
difference between the value of that column in the current row and
its value in preceding or following rows of the frame. The data type
of the offset
expression varies depending
on the data type of the ordering column. For numeric ordering
columns it is typically of the same type as the ordering column,
but for datetime ordering columns it is an interval
.
For example, if the ordering column is of type date
or timestamp
, one could write RANGE BETWEEN
'1 day' PRECEDING AND '10 days' FOLLOWING
.
The offset
is still required to be
non-null and non-negative, though the meaning
of “non-negative” depends on its data type.
In any case, the distance to the end of the frame is limited by the distance to the end of the partition, so that for rows near the partition ends the frame might contain fewer rows than elsewhere.
Notice that in both ROWS
and GROUPS
mode, 0 PRECEDING
and 0 FOLLOWING
are equivalent to CURRENT ROW
. This normally holds
in RANGE
mode as well, for an appropriate
data-type-specific meaning of “zero”.
The frame_exclusion
option allows rows around
the current row to be excluded from the frame, even if they would be
included according to the frame start and frame end options.
EXCLUDE CURRENT ROW
excludes the current row from the
frame.
EXCLUDE GROUP
excludes the current row and its
ordering peers from the frame.
EXCLUDE TIES
excludes any peers of the current
row from the frame, but not the current row itself.
EXCLUDE NO OTHERS
simply specifies explicitly the
default behavior of not excluding the current row or its peers.
The default framing option is RANGE UNBOUNDED PRECEDING
,
which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND
CURRENT ROW
. With ORDER BY
, this sets the frame to be
all rows from the partition start up through the current row's last
ORDER BY
peer. Without ORDER BY
,
this means all rows of the partition are included in the window frame,
since all rows become peers of the current row.
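As an illustration of frame clauses (a sketch using hypothetical table and column names), the first window below relies on the default frame to compute a running total, while the second uses an explicit ROWS frame to compute a moving sum over the current row and the two preceding rows:
SELECT day, amount,
       sum(amount) OVER (ORDER BY day) AS running_total,
       sum(amount) OVER (ORDER BY day
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_sum
FROM sales;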
Restrictions are that
frame_start
cannot be UNBOUNDED FOLLOWING
,
frame_end
cannot be UNBOUNDED PRECEDING
,
and the frame_end
choice cannot appear earlier in the
above list of frame_start
and frame_end
options than
the frame_start
choice does — for example
RANGE BETWEEN CURRENT ROW AND offset PRECEDING
is not allowed.
But, for example, ROWS BETWEEN 7 PRECEDING AND 8
PRECEDING
is allowed, even though it would never select any
rows.
If FILTER
is specified, then only the input
rows for which the filter_clause
evaluates to true are fed to the window function; other rows
are discarded. Only window functions that are aggregates accept
a FILTER
clause.
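For example, an aggregate used as a window function might count only a subset of each partition's rows (a sketch; the table and column names are hypothetical):
SELECT dept, salary,
       count(*) FILTER (WHERE salary > 50000) OVER (PARTITION BY dept) AS high_paid
FROM employees;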
The built-in window functions are described in Table 9.62. Other window functions can be added by the user. Also, any built-in or user-defined general-purpose or statistical aggregate can be used as a window function. (Ordered-set and hypothetical-set aggregates cannot presently be used as window functions.)
The syntaxes using *
are used for calling parameter-less
aggregate functions as window functions, for example
count(*) OVER (PARTITION BY x ORDER BY y)
.
The asterisk (*
) is customarily not used for
window-specific functions. Window-specific functions do not
allow DISTINCT
or ORDER BY
to be used within the
function argument list.
Window function calls are permitted only in the SELECT
list and the ORDER BY
clause of the query.
More information about window functions can be found in Section 3.5, Section 9.22, and Section 7.2.5.
A type cast specifies a conversion from one data type to another. PostgreSQL accepts two equivalent syntaxes for type casts:
CAST ( expression AS type )
expression::type
The CAST
syntax conforms to SQL; the syntax with
::
is historical PostgreSQL
usage.
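For example, both of the following convert an integer value to text:
SELECT CAST(42 AS text);
SELECT 42::text;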
When a cast is applied to a value expression of a known type, it represents a run-time type conversion. The cast will succeed only if a suitable type conversion operation has been defined. Notice that this is subtly different from the use of casts with constants, as shown in Section 4.1.2.7. A cast applied to an unadorned string literal represents the initial assignment of a type to a literal constant value, and so it will succeed for any type (if the contents of the string literal are acceptable input syntax for the data type).
An explicit type cast can usually be omitted if there is no ambiguity as to the type that a value expression must produce (for example, when it is assigned to a table column); the system will automatically apply a type cast in such cases. However, automatic casting is only done for casts that are marked “OK to apply implicitly” in the system catalogs. Other casts must be invoked with explicit casting syntax. This restriction is intended to prevent surprising conversions from being applied silently.
It is also possible to specify a type cast using a function-like syntax:
typename ( expression )
However, this only works for types whose names are also valid as
function names. For example, double precision
cannot be used this way, but the equivalent float8
can. Also, the names interval
, time
, and
timestamp
can only be used in this fashion if they are
double-quoted, because of syntactic conflicts. Therefore, the use of
the function-like cast syntax leads to inconsistencies and should
probably be avoided.
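For illustration (a minimal sketch), the single-word type name can be written as a function call, while the two-word equivalent cannot:
SELECT float8(42);                      -- accepted: float8 is a valid function name
-- SELECT double precision(42);        -- rejected: "double precision" is not a valid function name
SELECT CAST(42 AS double precision);    -- use the standard cast syntax instead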
The function-like syntax is in fact just a function call. When one of the two standard cast syntaxes is used to do a run-time conversion, it will internally invoke a registered function to perform the conversion. By convention, these conversion functions have the same name as their output type, and thus the “function-like syntax” is nothing more than a direct invocation of the underlying conversion function. Obviously, this is not something that a portable application should rely on. For further details see CREATE CAST.
The COLLATE
clause overrides the collation of
an expression. It is appended to the expression it applies to:
expr COLLATE collation
where collation
is a possibly
schema-qualified identifier. The COLLATE
clause binds tighter than operators; parentheses can be used when
necessary.
If no collation is explicitly specified, the database system either derives a collation from the columns involved in the expression, or it defaults to the default collation of the database if no column is involved in the expression.
The two common uses of the COLLATE
clause are
overriding the sort order in an ORDER BY
clause, for
example:
SELECT a, b, c FROM tbl WHERE ... ORDER BY a COLLATE "C";
and overriding the collation of a function or operator call that has locale-sensitive results, for example:
SELECT * FROM tbl WHERE a > 'foo' COLLATE "C";
Note that in the latter case the COLLATE
clause is
attached to an input argument of the operator we wish to affect.
It doesn't matter which argument of the operator or function call the
COLLATE
clause is attached to, because the collation that is
applied by the operator or function is derived by considering all
arguments, and an explicit COLLATE
clause will override the
collations of all other arguments. (Attaching non-matching
COLLATE
clauses to more than one argument, however, is an
error. For more details see Section 24.2.)
Thus, this gives the same result as the previous example:
SELECT * FROM tbl WHERE a COLLATE "C" > 'foo';
But this is an error:
SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C";
because it attempts to apply a collation to the result of the
>
operator, which is of the non-collatable data type
boolean
.
A scalar subquery is an ordinary
SELECT
query in parentheses that returns exactly one
row with one column. (See Chapter 7 for information about writing queries.)
The SELECT
query is executed
and the single returned value is used in the surrounding value expression.
It is an error to use a query that
returns more than one row or more than one column as a scalar subquery.
(But if, during a particular execution, the subquery returns no rows,
there is no error; the scalar result is taken to be null.)
The subquery can refer to variables from the surrounding query,
which will act as constants during any one evaluation of the subquery.
See also Section 9.23 for other expressions involving subqueries.
For example, the following finds the largest city population in each state:
SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) FROM states;
An array constructor is an expression that builds an
array value using values for its member elements. A simple array
constructor
consists of the key word ARRAY
, a left square bracket
[
, a list of expressions (separated by commas) for the
array element values, and finally a right square bracket ]
.
For example:
SELECT ARRAY[1,2,3+4];
  array
---------
 {1,2,7}
(1 row)
By default,
the array element type is the common type of the member expressions,
determined using the same rules as for UNION
or
CASE
constructs (see Section 10.5).
You can override this by explicitly casting the array constructor to the
desired type, for example:
SELECT ARRAY[1,2,22.7]::integer[];
  array
----------
 {1,2,23}
(1 row)
This has the same effect as casting each expression to the array element type individually. For more on casting, see Section 4.2.9.
Multidimensional array values can be built by nesting array
constructors.
In the inner constructors, the key word ARRAY
can
be omitted. For example, these produce the same result:
SELECT ARRAY[ARRAY[1,2], ARRAY[3,4]];
     array
---------------
 {{1,2},{3,4}}
(1 row)

SELECT ARRAY[[1,2],[3,4]];
     array
---------------
 {{1,2},{3,4}}
(1 row)
Since multidimensional arrays must be rectangular, inner constructors
at the same level must produce sub-arrays of identical dimensions.
Any cast applied to the outer ARRAY
constructor propagates
automatically to all the inner constructors.
Multidimensional array constructor elements can be anything yielding
an array of the proper kind, not only a sub-ARRAY
construct.
For example:
CREATE TABLE arr(f1 int[], f2 int[]);

INSERT INTO arr VALUES (ARRAY[[1,2],[3,4]], ARRAY[[5,6],[7,8]]);

SELECT ARRAY[f1, f2, '{{9,10},{11,12}}'::int[]] FROM arr;
                     array
------------------------------------------------
 {{{1,2},{3,4}},{{5,6},{7,8}},{{9,10},{11,12}}}
(1 row)
You can construct an empty array, but since it's impossible to have an array with no type, you must explicitly cast your empty array to the desired type. For example:
SELECT ARRAY[]::integer[];
 array
-------
 {}
(1 row)
It is also possible to construct an array from the results of a
subquery. In this form, the array constructor is written with the
key word ARRAY
followed by a parenthesized (not
bracketed) subquery. For example:
SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
                              array
------------------------------------------------------------------
 {2011,1954,1948,1952,1951,1244,1950,2005,1949,1953,2006,31,2412}
(1 row)

SELECT ARRAY(SELECT ARRAY[i, i*2] FROM generate_series(1,5) AS a(i));
              array
----------------------------------
 {{1,2},{2,4},{3,6},{4,8},{5,10}}
(1 row)
The subquery must return a single column. If the subquery's output column is of a non-array type, the resulting one-dimensional array will have an element for each row in the subquery result, with an element type matching that of the subquery's output column. If the subquery's output column is of an array type, the result will be an array of the same type but one higher dimension; in this case all the subquery rows must yield arrays of identical dimensionality, else the result would not be rectangular.
The subscripts of an array value built with ARRAY
always begin with one. For more information about arrays, see
Section 8.15.
A row constructor is an expression that builds a row value (also
called a composite value) using values
for its member fields. A row constructor consists of the key word
ROW
, a left parenthesis, zero or more
expressions (separated by commas) for the row field values, and finally
a right parenthesis. For example:
SELECT ROW(1,2.5,'this is a test');
The key word ROW
is optional when there is more than one
expression in the list.
A row constructor can include the syntax
rowvalue
.*
,
which will be expanded to a list of the elements of the row value,
just as occurs when the .*
syntax is used at the top level
of a SELECT
list (see Section 8.16.5).
For example, if table t
has
columns f1
and f2
, these are the same:
SELECT ROW(t.*, 42) FROM t;
SELECT ROW(t.f1, t.f2, 42) FROM t;
Before PostgreSQL 8.2, the
.*
syntax was not expanded in row constructors, so
that writing ROW(t.*, 42)
created a two-field row whose first
field was another row value. The new behavior is usually more useful.
If you need the old behavior of nested row values, write the inner
row value without .*
, for instance
ROW(t, 42)
.
By default, the value created by a ROW
expression is of
an anonymous record type. If necessary, it can be cast to a named
composite type — either the row type of a table, or a composite type
created with CREATE TYPE AS
. An explicit cast might be needed
to avoid ambiguity. For example:
CREATE TABLE mytable(f1 int, f2 float, f3 text);

CREATE FUNCTION getf1(mytable) RETURNS int AS 'SELECT $1.f1' LANGUAGE SQL;

-- No cast needed since only one getf1() exists
SELECT getf1(ROW(1,2.5,'this is a test'));
 getf1
-------
     1
(1 row)

CREATE TYPE myrowtype AS (f1 int, f2 text, f3 numeric);

CREATE FUNCTION getf1(myrowtype) RETURNS int AS 'SELECT $1.f1' LANGUAGE SQL;

-- Now we need a cast to indicate which function to call:
SELECT getf1(ROW(1,2.5,'this is a test'));
ERROR:  function getf1(record) is not unique

SELECT getf1(ROW(1,2.5,'this is a test')::mytable);
 getf1
-------
     1
(1 row)

SELECT getf1(CAST(ROW(11,'this is a test',2.5) AS myrowtype));
 getf1
-------
    11
(1 row)
Row constructors can be used to build composite values to be stored
in a composite-type table column, or to be passed to a function that
accepts a composite parameter. Also,
it is possible to compare two row values or test a row with
IS NULL
or IS NOT NULL
, for example:
SELECT ROW(1,2.5,'this is a test') = ROW(1, 3, 'not the same');
SELECT ROW(table.*) IS NULL FROM table;  -- detect all-null rows
For more detail see Section 9.24. Row constructors can also be used in connection with subqueries, as discussed in Section 9.23.
The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order.
Furthermore, if the result of an expression can be determined by evaluating only some parts of it, then other subexpressions might not be evaluated at all. For instance, if one wrote:
SELECT true OR somefunc();
then somefunc()
would (probably) not be called
at all. The same would be the case if one wrote:
SELECT somefunc() OR true;
Note that this is not the same as the left-to-right “short-circuiting” of Boolean operators that is found in some programming languages.
As a consequence, it is unwise to use functions with side effects
as part of complex expressions. It is particularly dangerous to
rely on side effects or evaluation order in WHERE
and HAVING
clauses,
since those clauses are extensively reprocessed as part of
developing an execution plan. Boolean
expressions (AND
/OR
/NOT
combinations) in those clauses can be reorganized
in any manner allowed by the laws of Boolean algebra.
When it is essential to force evaluation order, a CASE
construct (see Section 9.18) can be
used. For example, this is an untrustworthy way of trying to
avoid division by zero in a WHERE
clause:
SELECT ... WHERE x > 0 AND y/x > 1.5;
But this is safe:
SELECT ... WHERE CASE WHEN x > 0 THEN y/x > 1.5 ELSE false END;
A CASE
construct used in this fashion will defeat optimization
attempts, so it should only be done when necessary. (In this particular
example, it would be better to sidestep the problem by writing
y > 1.5*x
instead.)
CASE
is not a cure-all for such issues, however.
One limitation of the technique illustrated above is that it does not
prevent early evaluation of constant subexpressions.
As described in Section 38.7, functions and
operators marked IMMUTABLE
can be evaluated when
the query is planned rather than when it is executed. Thus for example
SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END FROM tab;
is likely to result in a division-by-zero failure due to the planner
trying to simplify the constant subexpression,
even if every row in the table has x > 0
so that the
ELSE
arm would never be entered at run time.
While that particular example might seem silly, related cases that don't
obviously involve constants can occur in queries executed within
functions, since the values of function arguments and local variables
can be inserted into queries as constants for planning purposes.
Within PL/pgSQL functions, for example, using an
IF
-THEN
-ELSE
statement to protect
a risky computation is much safer than just nesting it in a
CASE
expression.
Another limitation of the same kind is that a CASE
cannot
prevent evaluation of an aggregate expression contained within it,
because aggregate expressions are computed before other
expressions in a SELECT
list or HAVING
clause
are considered. For example, the following query can cause a
division-by-zero error despite seemingly having protected against it:
SELECT CASE WHEN min(employees) > 0 THEN avg(expenses / employees) END FROM departments;
The min()
and avg()
aggregates are computed
concurrently over all the input rows, so if any row
has employees
equal to zero, the division-by-zero error
will occur before there is any opportunity to test the result of
min()
. Instead, use a WHERE
or FILTER
clause to prevent problematic input rows from
reaching an aggregate function in the first place.
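For example, either of the following avoids feeding rows with zero employees to the division (a sketch based on the query above):
SELECT avg(expenses / employees) FROM departments WHERE employees > 0;

SELECT avg(expenses / employees) FILTER (WHERE employees > 0) FROM departments;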
PostgreSQL allows functions that have named parameters to be called using either positional or named notation. Named notation is especially useful for functions that have a large number of parameters, since it makes the associations between parameters and actual arguments more explicit and reliable. In positional notation, a function call is written with its argument values in the same order as they are defined in the function declaration. In named notation, the arguments are matched to the function parameters by name and can be written in any order. For each notation, also consider the effect of function argument types, documented in Section 10.3.
In either notation, parameters that have default values given in the function declaration need not be written in the call at all. But this is particularly useful in named notation, since any combination of parameters can be omitted; while in positional notation parameters can only be omitted from right to left.
PostgreSQL also supports mixed notation, which combines positional and named notation. In this case, positional parameters are written first and named parameters appear after them.
The following examples will illustrate the usage of all three notations, using the following function definition:
CREATE FUNCTION concat_lower_or_upper(a text, b text, uppercase boolean DEFAULT false)
RETURNS text
AS
$$
 SELECT CASE
        WHEN $3 THEN UPPER($1 || ' ' || $2)
        ELSE LOWER($1 || ' ' || $2)
        END;
$$
LANGUAGE SQL IMMUTABLE STRICT;
Function concat_lower_or_upper
has two mandatory
parameters, a
and b
. Additionally
there is one optional parameter uppercase
which defaults
to false
. The a
and
b
inputs will be concatenated, and forced to either
upper or lower case depending on the uppercase
parameter. The remaining details of this function
definition are not important here (see Chapter 38 for
more information).
Positional notation is the traditional mechanism for passing arguments to functions in PostgreSQL. An example is:
SELECT concat_lower_or_upper('Hello', 'World', true);
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)
All arguments are specified in order. The result is upper case since
uppercase
is specified as true
.
Another example is:
SELECT concat_lower_or_upper('Hello', 'World');
 concat_lower_or_upper
-----------------------
 hello world
(1 row)
Here, the uppercase
parameter is omitted, so it
receives its default value of false
, resulting in
lower case output. In positional notation, arguments can be omitted
from right to left so long as they have defaults.
In named notation, each argument's name is specified using
=>
to separate it from the argument expression.
For example:
SELECT concat_lower_or_upper(a => 'Hello', b => 'World');
 concat_lower_or_upper
-----------------------
 hello world
(1 row)
Again, the argument uppercase
was omitted
so it is set to false
implicitly. One advantage of
using named notation is that the arguments may be specified in any
order, for example:
SELECT concat_lower_or_upper(a => 'Hello', b => 'World', uppercase => true);
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)

SELECT concat_lower_or_upper(a => 'Hello', uppercase => true, b => 'World');
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)
An older syntax based on ":=" is supported for backward compatibility:
SELECT concat_lower_or_upper(a := 'Hello', uppercase := true, b := 'World');
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)
The mixed notation combines positional and named notation. However, as already mentioned, named arguments cannot precede positional arguments. For example:
SELECT concat_lower_or_upper('Hello', 'World', uppercase => true);
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)
In the above query, the arguments a
and
b
are specified positionally, while
uppercase
is specified by name. In this example,
that adds little except documentation. With a more complex function
having numerous parameters that have default values, named or mixed
notation can save a great deal of writing and reduce chances for error.
Named and mixed call notations currently cannot be used when calling an aggregate function (but they do work when an aggregate function is used as a window function).
Table of Contents
This chapter covers how one creates the database structures that will hold one's data. In a relational database, the raw data is stored in tables, so the majority of this chapter is devoted to explaining how tables are created and modified and what features are available to control what data is stored in the tables. Subsequently, we discuss how tables can be organized into schemas, and how privileges can be assigned to tables. Finally, we will briefly look at other features that affect the data storage, such as inheritance, table partitioning, views, functions, and triggers.
A table in a relational database is much like a table on paper: It consists of rows and columns. The number and order of the columns is fixed, and each column has a name. The number of rows is variable — it reflects how much data is stored at a given moment. SQL does not make any guarantees about the order of the rows in a table. When a table is read, the rows will appear in an unspecified order, unless sorting is explicitly requested. This is covered in Chapter 7. Furthermore, SQL does not assign unique identifiers to rows, so it is possible to have several completely identical rows in a table. This is a consequence of the mathematical model that underlies SQL but is usually not desirable. Later in this chapter we will see how to deal with this issue.
Each column has a data type. The data type constrains the set of possible values that can be assigned to a column and assigns semantics to the data stored in the column so that it can be used for computations. For instance, a column declared to be of a numerical type will not accept arbitrary text strings, and the data stored in such a column can be used for mathematical computations. By contrast, a column declared to be of a character string type will accept almost any kind of data but it does not lend itself to mathematical calculations, although other operations such as string concatenation are available.
PostgreSQL includes a sizable set of
built-in data types that fit many applications. Users can also
define their own data types. Most built-in data types have obvious
names and semantics, so we defer a detailed explanation to Chapter 8. Some of the frequently used data types are
integer
for whole numbers, numeric
for
possibly fractional numbers, text
for character
strings, date
for dates, time
for
time-of-day values, and timestamp
for values
containing both date and time.
To create a table, you use the aptly named CREATE TABLE command. In this command you specify at least a name for the new table, the names of the columns and the data type of each column. For example:
CREATE TABLE my_first_table (
    first_column text,
    second_column integer
);
This creates a table named my_first_table
with
two columns. The first column is named
first_column
and has a data type of
text
; the second column has the name
second_column
and the type integer
.
The table and column names follow the identifier syntax explained
in Section 4.1.1. The type names are
usually also identifiers, but there are some exceptions. Note that the
column list is comma-separated and surrounded by parentheses.
Of course, the previous example was heavily contrived. Normally, you would give names to your tables and columns that convey what kind of data they store. So let's look at a more realistic example:
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric
);
(The numeric
type can store fractional components, as
would be typical of monetary amounts.)
When you create many interrelated tables it is wise to choose a consistent naming pattern for the tables and columns. For instance, there is a choice of using singular or plural nouns for table names, both of which are favored by some theorist or other.
There is a limit on how many columns a table can contain. Depending on the column types, it is between 250 and 1600. However, defining a table with anywhere near this many columns is highly unusual and often a questionable design.
If you no longer need a table, you can remove it using the DROP TABLE command. For example:
DROP TABLE my_first_table; DROP TABLE products;
Attempting to drop a table that does not exist is an error.
Nevertheless, it is common in SQL script files to unconditionally
try to drop each table before creating it, ignoring any error
messages, so that the script works whether or not the table exists.
(If you like, you can use the DROP TABLE IF EXISTS
variant
to avoid the error messages, but this is not standard SQL.)
If you need to modify a table that already exists, see Section 5.6 later in this chapter.
With the tools discussed so far you can create fully functional tables. The remainder of this chapter is concerned with adding features to the table definition to ensure data integrity, security, or convenience. If you are eager to fill your tables with data now you can skip ahead to Chapter 6 and read the rest of this chapter later.
A column can be assigned a default value. When a new row is created and no values are specified for some of the columns, those columns will be filled with their respective default values. A data manipulation command can also request explicitly that a column be set to its default value, without having to know what that value is. (Details about data manipulation commands are in Chapter 6.)
If no default value is declared explicitly, the default value is the null value. This usually makes sense because a null value can be considered to represent unknown data.
In a table definition, default values are listed after the column data type. For example:
CREATE TABLE products (
product_no integer,
name text,
price numeric DEFAULT 9.99
);
The default value can be an expression, which will be
evaluated whenever the default value is inserted
(not when the table is created). A common example
is for a timestamp
column to have a default of CURRENT_TIMESTAMP
,
so that it gets set to the time of row insertion. Another common
example is generating a “serial number” for each row.
In PostgreSQL this is typically done by
something like:
CREATE TABLE products (
product_no integer DEFAULT nextval('products_product_no_seq'),
...
);
where the nextval()
function supplies successive values
from a sequence object (see Section 9.17). This arrangement is sufficiently common
that there's a special shorthand for it:
CREATE TABLE products (
product_no SERIAL,
...
);
The SERIAL
shorthand is discussed further in Section 8.1.4.
A generated column is a special column that is always computed from other columns. Thus, it is for columns what a view is for tables. There are two kinds of generated columns: stored and virtual. A stored generated column is computed when it is written (inserted or updated) and occupies storage as if it were a normal column. A virtual generated column occupies no storage and is computed when it is read. Thus, a virtual generated column is similar to a view and a stored generated column is similar to a materialized view (except that it is always updated automatically). PostgreSQL currently implements only stored generated columns.
To create a generated column, use the GENERATED ALWAYS
AS
clause in CREATE TABLE
, for example:
CREATE TABLE people (
...,
height_cm numeric,
height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
The keyword STORED
must be specified to choose the
stored kind of generated column. See CREATE TABLE for
more details.
A generated column cannot be written to directly. In
INSERT
or UPDATE
commands, a value
cannot be specified for a generated column, but the keyword
DEFAULT
may be specified.
Consider the differences between a column with a default and a generated
column. The column default is evaluated once when the row is first
inserted if no other value was provided; a generated column is updated
whenever the row changes and cannot be overridden. A column default may
not refer to other columns of the table; a generation expression would
normally do so. A column default can use volatile functions, for example
random()
or functions referring to the current time;
this is not allowed for generated columns.
Several restrictions apply to the definition of generated columns and tables involving generated columns:
The generation expression can only use immutable functions and cannot use subqueries or reference anything other than the current row in any way.
A generation expression cannot reference another generated column.
A generation expression cannot reference a system column, except
tableoid
.
A generated column cannot have a column default or an identity definition.
A generated column cannot be part of a partition key.
Foreign tables can have generated columns. See CREATE FOREIGN TABLE for details.
For inheritance:
If a parent column is a generated column, a child column must also be
a generated column using the same expression. In the definition of
the child column, leave off the GENERATED
clause,
as it will be copied from the parent.
In case of multiple inheritance, if one parent column is a generated column, then all parent columns must be generated columns and with the same expression.
If a parent column is not a generated column, a child column may be defined to be a generated column or not.
Additional considerations apply to the use of generated columns.
Generated columns maintain access privileges separately from their underlying base columns. So, it is possible to arrange it so that a particular role can read from a generated column but not from the underlying base columns.
Generated columns are, conceptually, updated after
BEFORE
triggers have run. Therefore, changes made to
base columns in a BEFORE
trigger will be reflected in
generated columns. But conversely, it is not allowed to access
generated columns in BEFORE
triggers.
Generated columns are skipped for logical replication.
Data types are a way to limit the kind of data that can be stored in a table. For many applications, however, the constraint they provide is too coarse. For example, a column containing a product price should probably only accept positive values. But there is no standard data type that accepts only positive numbers. Another issue is that you might want to constrain column data with respect to other columns or rows. For example, in a table containing product information, there should be only one row for each product number.
To that end, SQL allows you to define constraints on columns and tables. Constraints give you as much control over the data in your tables as you wish. If a user attempts to store data in a column that would violate a constraint, an error is raised. This applies even if the value came from the default value definition.
A check constraint is the most generic constraint type. It allows you to specify that the value in a certain column must satisfy a Boolean (truth-value) expression. For instance, to require positive product prices, you could use:
CREATE TABLE products (
product_no integer,
name text,
price numeric CHECK (price > 0)
);
As you see, the constraint definition comes after the data type,
just like default value definitions. Default values and
constraints can be listed in any order. A check constraint
consists of the key word CHECK
followed by an
expression in parentheses. The check constraint expression should
involve the column thus constrained, otherwise the constraint
would not make too much sense.
You can also give the constraint a separate name. This clarifies error messages and allows you to refer to the constraint when you need to change it. The syntax is:
CREATE TABLE products (
product_no integer,
name text,
price numeric CONSTRAINT positive_price CHECK (price > 0)
);
So, to specify a named constraint, use the key word
CONSTRAINT
followed by an identifier followed
by the constraint definition. (If you don't specify a constraint
name in this way, the system chooses a name for you.)
A check constraint can also refer to several columns. Say you store a regular price and a discounted price, and you want to ensure that the discounted price is lower than the regular price:
CREATE TABLE products (
product_no integer,
name text,
price numeric CHECK (price > 0),
discounted_price numeric CHECK (discounted_price > 0),
CHECK (price > discounted_price)
);
The first two constraints should look familiar. The third one uses a new syntax. It is not attached to a particular column, instead it appears as a separate item in the comma-separated column list. Column definitions and these constraint definitions can be listed in mixed order.
We say that the first two constraints are column constraints, whereas the third one is a table constraint because it is written separately from any one column definition. Column constraints can also be written as table constraints, while the reverse is not necessarily possible, since a column constraint is supposed to refer to only the column it is attached to. (PostgreSQL doesn't enforce that rule, but you should follow it if you want your table definitions to work with other database systems.) The above example could also be written as:
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric,
    CHECK (price > 0),
    discounted_price numeric,
    CHECK (discounted_price > 0),
    CHECK (price > discounted_price)
);
or even:
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CHECK (price > 0),
    discounted_price numeric,
    CHECK (discounted_price > 0 AND price > discounted_price)
);
It's a matter of taste.
Names can be assigned to table constraints in the same way as column constraints:
CREATE TABLE products (
product_no integer,
name text,
price numeric,
CHECK (price > 0),
discounted_price numeric,
CHECK (discounted_price > 0),
CONSTRAINT valid_discount CHECK (price > discounted_price)
);
It should be noted that a check constraint is satisfied if the check expression evaluates to true or the null value. Since most expressions will evaluate to the null value if any operand is null, they will not prevent null values in the constrained columns. To ensure that a column does not contain null values, the not-null constraint described in the next section can be used.
PostgreSQL does not support
CHECK
constraints that reference table data other than
the new or updated row being checked. While a CHECK
constraint that violates this rule may appear to work in simple
tests, it cannot guarantee that the database will not reach a state
in which the constraint condition is false (due to subsequent changes
of the other row(s) involved). This would cause a database dump and
restore to fail. The restore could fail even when the complete
database state is consistent with the constraint, due to rows not
being loaded in an order that will satisfy the constraint. If
possible, use UNIQUE
, EXCLUDE
,
or FOREIGN KEY
constraints to express
cross-row and cross-table restrictions.
If what you desire is a one-time check against other rows at row insertion, rather than a continuously-maintained consistency guarantee, a custom trigger can be used to implement that. (This approach avoids the dump/restore problem because pg_dump does not reinstall triggers until after restoring data, so that the check will not be enforced during a dump/restore.)
PostgreSQL assumes that
CHECK
constraints' conditions are immutable, that
is, they will always give the same result for the same input row.
This assumption is what justifies examining CHECK
constraints only when rows are inserted or updated, and not at other
times. (The warning above about not referencing other table data is
really a special case of this restriction.)
An example of a common way to break this assumption is to reference a
user-defined function in a CHECK
expression, and
then change the behavior of that
function. PostgreSQL does not disallow
that, but it will not notice if there are rows in the table that now
violate the CHECK
constraint. That would cause a
subsequent database dump and restore to fail.
The recommended way to handle such a change is to drop the constraint
(using ALTER TABLE
), adjust the function definition,
and re-add the constraint, thereby rechecking it against all table rows.
A not-null constraint simply specifies that a column must not assume the null value. A syntax example:
CREATE TABLE products (
    product_no integer NOT NULL,
    name text NOT NULL,
    price numeric
);
A not-null constraint is always written as a column constraint. A
not-null constraint is functionally equivalent to creating a check
constraint CHECK (column_name IS NOT NULL), but in
PostgreSQL creating an explicit
not-null constraint is more efficient. The drawback is that you
cannot give explicit names to not-null constraints created this
way.
Of course, a column can have more than one constraint. Just write the constraints one after another:
CREATE TABLE products (
    product_no integer NOT NULL,
    name text NOT NULL,
    price numeric NOT NULL CHECK (price > 0)
);
The order doesn't matter. It does not necessarily determine in which order the constraints are checked.
The NOT NULL
constraint has an inverse: the
NULL
constraint. This does not mean that the
column must be null, which would surely be useless. Instead, this
simply selects the default behavior that the column might be null.
The NULL
constraint is not present in the SQL
standard and should not be used in portable applications. (It was
only added to PostgreSQL to be
compatible with some other database systems.) Some users, however,
like it because it makes it easy to toggle the constraint in a
script file. For example, you could start with:
CREATE TABLE products (
    product_no integer NULL,
    name text NULL,
    price numeric NULL
);
and then insert the NOT
key word where desired.
In most database designs the majority of columns should be marked not null.
Unique constraints ensure that the data contained in a column, or a group of columns, is unique among all the rows in the table. The syntax is:
CREATE TABLE products (
product_no integer UNIQUE,
name text,
price numeric
);
when written as a column constraint, and:
CREATE TABLE products (
product_no integer,
name text,
price numeric,
UNIQUE (product_no)
);
when written as a table constraint.
To define a unique constraint for a group of columns, write it as a table constraint with the column names separated by commas:
CREATE TABLE example (
a integer,
b integer,
c integer,
UNIQUE (a, c)
);
This specifies that the combination of values in the indicated columns is unique across the whole table, though any one of the columns need not be (and ordinarily isn't) unique.
You can assign your own name for a unique constraint, in the usual way:
CREATE TABLE products (
product_no integer CONSTRAINT must_be_different UNIQUE,
name text,
price numeric
);
Adding a unique constraint will automatically create a unique B-tree index on the column or group of columns listed in the constraint. A uniqueness restriction covering only some rows cannot be written as a unique constraint, but it is possible to enforce such a restriction by creating a unique partial index.
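For example, a sketch of enforcing uniqueness only among rows that satisfy a condition (the boolean column active and the index name are hypothetical):
CREATE UNIQUE INDEX products_active_product_no_idx ON products (product_no) WHERE active;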
In general, a unique constraint is violated if there is more than one row in the table where the values of all of the columns included in the constraint are equal. However, two null values are never considered equal in this comparison. That means even in the presence of a unique constraint it is possible to store duplicate rows that contain a null value in at least one of the constrained columns. This behavior conforms to the SQL standard, but we have heard that other SQL databases might not follow this rule. So be careful when developing applications that are intended to be portable.
A primary key constraint indicates that a column, or group of columns, can be used as a unique identifier for rows in the table. This requires that the values be both unique and not null. So, the following two table definitions accept the same data:
CREATE TABLE products (
    product_no integer UNIQUE NOT NULL,
    name text,
    price numeric
);
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
Primary keys can span more than one column; the syntax is similar to unique constraints:
CREATE TABLE example (
a integer,
b integer,
c integer,
PRIMARY KEY (a, c)
);
Adding a primary key will automatically create a unique B-tree index
on the column or group of columns listed in the primary key, and will
force the column(s) to be marked NOT NULL
.
A table can have at most one primary key. (There can be any number of unique and not-null constraints, which are functionally almost the same thing, but only one can be identified as the primary key.) Relational database theory dictates that every table must have a primary key. This rule is not enforced by PostgreSQL, but it is usually best to follow it.
Primary keys are useful both for documentation purposes and for client applications. For example, a GUI application that allows modifying row values probably needs to know the primary key of a table to be able to identify rows uniquely. There are also various ways in which the database system makes use of a primary key if one has been declared; for example, the primary key defines the default target column(s) for foreign keys referencing its table.
A foreign key constraint specifies that the values in a column (or a group of columns) must match the values appearing in some row of another table. We say this maintains the referential integrity between two related tables.
Say you have the product table that we have used several times already:
CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);
Let's also assume you have a table storing orders of those products. We want to ensure that the orders table only contains orders of products that actually exist. So we define a foreign key constraint in the orders table that references the products table:
CREATE TABLE orders (
order_id integer PRIMARY KEY,
product_no integer REFERENCES products (product_no),
quantity integer
);
Now it is impossible to create orders with non-NULL
product_no
entries that do not appear in the
products table.
We say that in this situation the orders table is the referencing table and the products table is the referenced table. Similarly, there are referencing and referenced columns.
You can also shorten the above command to:
CREATE TABLE orders (
order_id integer PRIMARY KEY,
product_no integer REFERENCES products,
quantity integer
);
because in absence of a column list the primary key of the referenced table is used as the referenced column(s).
You can assign your own name for a foreign key constraint, in the usual way.
A foreign key can also constrain and reference a group of columns. As usual, it then needs to be written in table constraint form. Here is a contrived syntax example:
CREATE TABLE t1 (
a integer PRIMARY KEY,
b integer,
c integer,
FOREIGN KEY (b, c) REFERENCES other_table (c1, c2)
);
Of course, the number and type of the constrained columns need to match the number and type of the referenced columns.
Sometimes it is useful for the “other table” of a foreign key constraint to be the same table; this is called a self-referential foreign key. For example, if you want rows of a table to represent nodes of a tree structure, you could write
CREATE TABLE tree (
    node_id integer PRIMARY KEY,
    parent_id integer REFERENCES tree,
    name text,
    ...
);
A top-level node would have NULL parent_id
,
while non-NULL parent_id
entries would be
constrained to reference valid rows of the table.
A table can have more than one foreign key constraint. This is used to implement many-to-many relationships between tables. Say you have tables about products and orders, but now you want to allow one order to contain possibly many products (which the structure above did not allow). You could use this table structure:
CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    shipping_address text,
    ...
);

CREATE TABLE order_items (
    product_no integer REFERENCES products,
    order_id integer REFERENCES orders,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);
Notice that the primary key overlaps with the foreign keys in the last table.
We know that the foreign keys disallow creation of orders that do not relate to any products. But what if a product is removed after an order is created that references it? SQL allows you to handle that as well. Intuitively, we have a few options:
Disallow deleting a referenced product
Delete the orders as well
Something else?
To illustrate this, let's implement the following policy on the
many-to-many relationship example above: when someone wants to
remove a product that is still referenced by an order (via
order_items
), we disallow it. If someone
removes an order, the order items are removed as well:
CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    shipping_address text,
    ...
);

CREATE TABLE order_items (
    product_no integer REFERENCES products ON DELETE RESTRICT,
    order_id integer REFERENCES orders ON DELETE CASCADE,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);
Restricting and cascading deletes are the two most common options.
RESTRICT
prevents deletion of a
referenced row. NO ACTION
means that if any
referencing rows still exist when the constraint is checked, an error
is raised; this is the default behavior if you do not specify anything.
(The essential difference between these two choices is that
NO ACTION
allows the check to be deferred until
later in the transaction, whereas RESTRICT
does not.)
CASCADE
specifies that when a referenced row is deleted,
row(s) referencing it should be automatically deleted as well.
There are two other options:
SET NULL
and SET DEFAULT
.
These cause the referencing column(s) in the referencing row(s)
to be set to nulls or their default
values, respectively, when the referenced row is deleted.
Note that these do not excuse you from observing any constraints.
For example, if an action specifies SET DEFAULT
but the default value would not satisfy the foreign key constraint, the
operation will fail.
Analogous to ON DELETE
there is also
ON UPDATE
which is invoked when a referenced
column is changed (updated). The possible actions are the same.
In this case, CASCADE
means that the updated values of the
referenced column(s) should be copied into the referencing row(s).
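For example, a sketch building on the order_items table above, combining ON UPDATE and ON DELETE actions:
CREATE TABLE order_items (
    product_no integer REFERENCES products ON UPDATE CASCADE ON DELETE RESTRICT,
    order_id integer REFERENCES orders ON UPDATE CASCADE ON DELETE CASCADE,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);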
Normally, a referencing row need not satisfy the foreign key constraint
if any of its referencing columns are null. If MATCH FULL
is added to the foreign key declaration, a referencing row escapes
satisfying the constraint only if all its referencing columns are null
(so a mix of null and non-null values is guaranteed to fail a
MATCH FULL
constraint). If you don't want referencing rows
to be able to avoid satisfying the foreign key constraint, declare the
referencing column(s) as NOT NULL
.
A foreign key must reference columns that either are a primary key or
form a unique constraint, or are columns from a non-partial unique index.
This means that the referenced columns always have an index to allow
efficient lookups on whether a referencing row has a match. Since a
DELETE
of a row from the referenced table or an
UPDATE
of a referenced column will require a scan of
the referencing table for rows matching the old value, it is often a good
idea to index the referencing columns too. Because this is not always
needed, and there are many choices available on how to index, the
declaration of a foreign key constraint does not automatically create an
index on the referencing columns.
More information about updating and deleting data is in Chapter 6. Also see the description of foreign key constraint syntax in the reference documentation for CREATE TABLE.
Exclusion constraints ensure that if any two rows are compared on the specified columns or expressions using the specified operators, at least one of these operator comparisons will return false or null. The syntax is:
CREATE TABLE circles (
    c circle,
    EXCLUDE USING gist (c WITH &&)
);
See also CREATE
TABLE ... CONSTRAINT ... EXCLUDE
for details.
Adding an exclusion constraint will automatically create an index of the type specified in the constraint declaration.
Every table has several system columns that are implicitly defined by the system. Therefore, these names cannot be used as names of user-defined columns. (Note that these restrictions are separate from whether the name is a key word or not; quoting a name will not allow you to escape these restrictions.) You do not really need to be concerned about these columns; just know they exist.
tableoid
The OID of the table containing this row. This column is
particularly handy for queries that select from partitioned
tables (see Section 5.11) or inheritance
hierarchies (see Section 5.10), since without it,
it's difficult to tell which individual table a row came from. The
tableoid
can be joined against the
oid
column of
pg_class
to obtain the table name.
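For example (a sketch; measurement is a hypothetical partitioned table):
SELECT c.relname, m.*
FROM measurement m
JOIN pg_class c ON c.oid = m.tableoid;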
xmin
The identity (transaction ID) of the inserting transaction for this row version. (A row version is an individual state of a row; each update of a row creates a new row version for the same logical row.)
cmin
The command identifier (starting at zero) within the inserting transaction.
xmax
The identity (transaction ID) of the deleting transaction, or zero for an undeleted row version. It is possible for this column to be nonzero in a visible row version. That usually indicates that the deleting transaction hasn't committed yet, or that an attempted deletion was rolled back.
cmax
The command identifier within the deleting transaction, or zero.
ctid
The physical location of the row version within its table. Note that
although the ctid
can be used to
locate the row version very quickly, a row's
ctid
will change if it is
updated or moved by VACUUM FULL
. Therefore
ctid
is useless as a long-term row
identifier. A primary key should be used to identify logical rows.
Transaction identifiers are also 32-bit quantities. In a long-lived database it is possible for transaction IDs to wrap around. This is not a fatal problem given appropriate maintenance procedures; see Chapter 25 for details. It is unwise, however, to depend on the uniqueness of transaction IDs over the long term (more than one billion transactions).
Command identifiers are also 32-bit quantities. This creates a hard limit of 2^32 (4 billion) SQL commands within a single transaction. In practice this limit is not a problem — note that the limit is on the number of SQL commands, not the number of rows processed. Also, only commands that actually modify the database contents will consume a command identifier.
When you create a table and you realize that you made a mistake, or the requirements of the application change, you can drop the table and create it again. But this is not a convenient option if the table is already filled with data, or if the table is referenced by other database objects (for instance a foreign key constraint). Therefore PostgreSQL provides a family of commands to make modifications to existing tables. Note that this is conceptually distinct from altering the data contained in the table: here we are interested in altering the definition, or structure, of the table.
You can:
Add columns
Remove columns
Add constraints
Remove constraints
Change default values
Change column data types
Rename columns
Rename tables
All these actions are performed using the ALTER TABLE command, whose reference page contains details beyond those given here.
To add a column, use a command like:
ALTER TABLE products ADD COLUMN description text;
The new column is initially filled with whatever default
value is given (null if you don't specify a DEFAULT
clause).
From PostgreSQL 11, adding a column with
a constant default value no longer means that each row of the table
needs to be updated when the ALTER TABLE
statement
is executed. Instead, the default value will be returned the next time
the row is accessed, and applied when the table is rewritten, making
the ALTER TABLE
very fast even on large tables.
However, if the default value is volatile (e.g.,
clock_timestamp()
)
each row will need to be updated with the value calculated at the time
ALTER TABLE
is executed. To avoid a potentially
lengthy update operation, particularly if you intend to fill the column
with mostly nondefault values anyway, it may be preferable to add the
column with no default, insert the correct values using
UPDATE
, and then add any desired default as described
below.
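A sketch of that approach, using a hypothetical column:
ALTER TABLE products ADD COLUMN last_checked timestamp with time zone;
UPDATE products SET last_checked = clock_timestamp();
ALTER TABLE products ALTER COLUMN last_checked SET DEFAULT clock_timestamp();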
You can also define constraints on the column at the same time, using the usual syntax:
ALTER TABLE products ADD COLUMN description text CHECK (description <> '');
In fact all the options that can be applied to a column description
in CREATE TABLE
can be used here. Keep in mind however
that the default value must satisfy the given constraints, or the
ADD
will fail. Alternatively, you can add
constraints later (see below) after you've filled in the new column
correctly.
To remove a column, use a command like:
ALTER TABLE products DROP COLUMN description;
Whatever data was in the column disappears. Table constraints involving
the column are dropped, too. However, if the column is referenced by a
foreign key constraint of another table,
PostgreSQL will not silently drop that
constraint. You can authorize dropping everything that depends on
the column by adding CASCADE
:
ALTER TABLE products DROP COLUMN description CASCADE;
See Section 5.14 for a description of the general mechanism behind this.
To add a constraint, the table constraint syntax is used. For example:
ALTER TABLE products ADD CHECK (name <> '');
ALTER TABLE products ADD CONSTRAINT some_name UNIQUE (product_no);
ALTER TABLE products ADD FOREIGN KEY (product_group_id) REFERENCES product_groups;
To add a not-null constraint, which cannot be written as a table constraint, use this syntax:
ALTER TABLE products ALTER COLUMN product_no SET NOT NULL;
The constraint will be checked immediately, so the table data must satisfy the constraint before it can be added.
To remove a constraint you need to know its name. If you gave it
a name then that's easy. Otherwise the system assigned a
generated name, which you need to find out. The
psql command \d tablename
can be helpful
here; other interfaces might also provide a way to inspect table
details. Then the command is:
ALTER TABLE products DROP CONSTRAINT some_name;
(If you are dealing with a generated constraint name like $2
,
don't forget that you'll need to double-quote it to make it a valid
identifier.)
As with dropping a column, you need to add CASCADE
if you
want to drop a constraint that something else depends on. An example
is that a foreign key constraint depends on a unique or primary key
constraint on the referenced column(s).
This works the same for all constraint types except not-null constraints. To drop a not-null constraint, use:
ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL;
(Recall that not-null constraints do not have names.)
To set a new default for a column, use a command like:
ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;
Note that this doesn't affect any existing rows in the table, it
just changes the default for future INSERT
commands.
To remove any default value, use:
ALTER TABLE products ALTER COLUMN price DROP DEFAULT;
This is effectively the same as setting the default to null. As a consequence, it is not an error to drop a default where one hadn't been defined, because the default is implicitly the null value.
To convert a column to a different data type, use a command like:
ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2);
This will succeed only if each existing entry in the column can be
converted to the new type by an implicit cast. If a more complex
conversion is needed, you can add a USING
clause that
specifies how to compute the new values from the old.
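For instance, a hypothetical USING clause converting a numeric dollar price into an integer number of cents might look like:
ALTER TABLE products ALTER COLUMN price TYPE integer USING (price * 100)::integer;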
PostgreSQL will attempt to convert the column's default value (if any) to the new type, as well as any constraints that involve the column. But these conversions might fail, or might produce surprising results. It's often best to drop any constraints on the column before altering its type, and then add back suitably modified constraints afterwards.
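The remaining actions from the list above, renaming columns and tables, follow the same ALTER TABLE pattern; for example:
ALTER TABLE products RENAME COLUMN product_no TO product_number;
ALTER TABLE products RENAME TO items;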
When an object is created, it is assigned an owner. The owner is normally the role that executed the creation statement. For most kinds of objects, the initial state is that only the owner (or a superuser) can do anything with the object. To allow other roles to use it, privileges must be granted.
There are different kinds of privileges: SELECT
,
INSERT
, UPDATE
, DELETE
,
TRUNCATE
, REFERENCES
, TRIGGER
,
CREATE
, CONNECT
, TEMPORARY
,
EXECUTE
, and USAGE
.
The privileges applicable to a particular
object vary depending on the object's type (table, function, etc).
More detail about the meanings of these privileges appears below.
The following sections and chapters will also show you how
these privileges are used.
The right to modify or destroy an object is inherent in being the object's owner, and cannot be granted or revoked in itself. (However, like all privileges, that right can be inherited by members of the owning role; see Section 22.3.)
An object can be assigned to a new owner with an ALTER
command of the appropriate kind for the object, for example
ALTER TABLE table_name OWNER TO new_owner;
Superusers can always do this; ordinary roles can only do it if they are both the current owner of the object (or a member of the owning role) and a member of the new owning role.
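For example (assuming a table mytable and an existing role joe):
ALTER TABLE mytable OWNER TO joe;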
To assign privileges, the GRANT command is
used. For example, if joe
is an existing role, and
accounts
is an existing table, the privilege to
update the table can be granted with:
GRANT UPDATE ON accounts TO joe;
Writing ALL
in place of a specific privilege grants all
privileges that are relevant for the object type.
The special “role” name PUBLIC
can
be used to grant a privilege to every role on the system. Also,
“group” roles can be set up to help manage privileges when
there are many users of a database — for details see
Chapter 22.
To revoke a previously-granted privilege, use the fittingly named REVOKE command:
REVOKE ALL ON accounts FROM PUBLIC;
Ordinarily, only the object's owner (or a superuser) can grant or revoke privileges on an object. However, it is possible to grant a privilege “with grant option”, which gives the recipient the right to grant it in turn to others. If the grant option is subsequently revoked then all who received the privilege from that recipient (directly or through a chain of grants) will lose the privilege. For details see the GRANT and REVOKE reference pages.
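A brief sketch of the grant option mechanism, reusing the accounts table and the role joe from above:
GRANT UPDATE ON accounts TO joe WITH GRANT OPTION;            -- joe may now grant UPDATE on accounts to others
REVOKE GRANT OPTION FOR UPDATE ON accounts FROM joe CASCADE;  -- removes the option and any grants joe made with it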
An object's owner can choose to revoke their own ordinary privileges, for example to make a table read-only for themselves as well as others. But owners are always treated as holding all grant options, so they can always re-grant their own privileges.
The available privileges are:
SELECT
Allows SELECT
from
any column, or specific column(s), of a table, view, materialized
view, or other table-like object.
Also allows use of COPY TO
.
This privilege is also needed to reference existing column values in
UPDATE
or DELETE
.
For sequences, this privilege also allows use of the
currval
function.
For large objects, this privilege allows the object to be read.
INSERT
Allows INSERT
of a new row into a table, view,
etc. Can be granted on specific column(s), in which case
only those columns may be assigned to in the INSERT
command (other columns will therefore receive default values).
Also allows use of COPY FROM
.
UPDATE
Allows UPDATE
of any
column, or specific column(s), of a table, view, etc.
(In practice, any nontrivial UPDATE
command will
require SELECT
privilege as well, since it must
reference table columns to determine which rows to update, and/or to
compute new values for columns.)
SELECT ... FOR UPDATE
and SELECT ... FOR SHARE
also require this privilege on at least one column, in addition to the
SELECT
privilege. For sequences, this
privilege allows use of the nextval
and
setval
functions.
For large objects, this privilege allows writing or truncating the
object.
DELETE
Allows DELETE
of a row from a table, view, etc.
(In practice, any nontrivial DELETE
command will
require SELECT
privilege as well, since it must
reference table columns to determine which rows to delete.)
TRUNCATE
Allows TRUNCATE
on a table.
REFERENCES
Allows creation of a foreign key constraint referencing a table, or specific column(s) of a table.
TRIGGER
Allows creation of a trigger on a table, view, etc.
CREATE
For databases, allows new schemas and publications to be created within the database, and allows trusted extensions to be installed within the database.
For schemas, allows new objects to be created within the schema. To rename an existing object, you must own the object and have this privilege for the containing schema.
For tablespaces, allows tables, indexes, and temporary files to be created within the tablespace, and allows databases to be created that have the tablespace as their default tablespace.
Note that revoking this privilege will not alter the existence or location of existing objects.
CONNECT
Allows the grantee to connect to the database. This
privilege is checked at connection startup (in addition to checking
any restrictions imposed by pg_hba.conf
).
TEMPORARY
Allows temporary tables to be created while using the database.
EXECUTE
Allows calling a function or procedure, including use of any operators that are implemented on top of the function. This is the only type of privilege that is applicable to functions and procedures.
USAGE
For procedural languages, allows use of the language for the creation of functions in that language. This is the only type of privilege that is applicable to procedural languages.
For schemas, allows access to objects contained in the schema (assuming that the objects' own privilege requirements are also met). Essentially this allows the grantee to “look up” objects within the schema. Without this permission, it is still possible to see the object names, e.g., by querying system catalogs. Also, after revoking this permission, existing sessions might have statements that have previously performed this lookup, so this is not a completely secure way to prevent object access.
For sequences, allows use of the
currval
and nextval
functions.
For types and domains, allows use of the type or domain in the creation of tables, functions, and other schema objects. (Note that this privilege does not control all “usage” of the type, such as values of the type appearing in queries. It only prevents objects from being created that depend on the type. The main purpose of this privilege is controlling which users can create dependencies on a type, which could prevent the owner from changing the type later.)
For foreign-data wrappers, allows creation of new servers using the foreign-data wrapper.
For foreign servers, allows creation of foreign tables using the server. Grantees may also create, alter, or drop their own user mappings associated with that server.
The privileges required by other commands are listed on the reference page of the respective command.
PostgreSQL grants privileges on some types of objects to
PUBLIC
by default when the objects are created.
No privileges are granted to PUBLIC
by default on
tables,
table columns,
sequences,
foreign data wrappers,
foreign servers,
large objects,
schemas,
or tablespaces.
For other types of objects, the default privileges
granted to PUBLIC
are as follows:
CONNECT
and TEMPORARY
(create
temporary tables) privileges for databases;
EXECUTE
privilege for functions and procedures; and
USAGE
privilege for languages and data types
(including domains).
The object owner can, of course, REVOKE
both default and expressly granted privileges. (For maximum
security, issue the REVOKE
in the same transaction that
creates the object; then there is no window in which another user
can use the object.)
Also, these default privilege settings can be overridden using the
ALTER DEFAULT PRIVILEGES command.
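A hypothetical sketch of both techniques (the reporting schema and the function are assumptions, not objects from this chapter):
BEGIN;
CREATE FUNCTION reporting.bonus(employee text) RETURNS numeric
    LANGUAGE sql AS 'SELECT 0::numeric';
REVOKE ALL ON FUNCTION reporting.bonus(text) FROM PUBLIC;  -- close the window before anyone can call it
COMMIT;
ALTER DEFAULT PRIVILEGES IN SCHEMA reporting
    REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;               -- future functions in reporting start without PUBLIC EXECUTE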
Table 5.1 shows the one-letter abbreviations that are used for these privilege types in ACL (Access Control List) values. You will see these letters in the output of the psql commands listed below, or when looking at ACL columns of system catalogs.
Table 5.1. ACL Privilege Abbreviations
Privilege | Abbreviation | Applicable Object Types |
---|---|---|
SELECT | r (“read”) | LARGE OBJECT, SEQUENCE, TABLE (and table-like objects), table column |
INSERT | a (“append”) | TABLE, table column |
UPDATE | w (“write”) | LARGE OBJECT, SEQUENCE, TABLE, table column |
DELETE | d | TABLE |
TRUNCATE | D | TABLE |
REFERENCES | x | TABLE, table column |
TRIGGER | t | TABLE |
CREATE | C | DATABASE, SCHEMA, TABLESPACE |
CONNECT | c | DATABASE |
TEMPORARY | T | DATABASE |
EXECUTE | X | FUNCTION, PROCEDURE |
USAGE | U | DOMAIN, FOREIGN DATA WRAPPER, FOREIGN SERVER, LANGUAGE, SCHEMA, SEQUENCE, TYPE |
Table 5.2 summarizes the privileges available for each type of SQL object, using the abbreviations shown above. It also shows the psql command that can be used to examine privilege settings for each object type.
Table 5.2. Summary of Access Privileges
Object Type | All Privileges | Default PUBLIC Privileges | psql Command |
---|---|---|---|
DATABASE | CTc | Tc | \l |
DOMAIN | U | U | \dD+ |
FUNCTION or PROCEDURE | X | X | \df+ |
FOREIGN DATA WRAPPER | U | none | \dew+ |
FOREIGN SERVER | U | none | \des+ |
LANGUAGE | U | U | \dL+ |
LARGE OBJECT | rw | none | |
SCHEMA | UC | none | \dn+ |
SEQUENCE | rwU | none | \dp |
TABLE (and table-like objects) | arwdDxt | none | \dp |
Table column | arwx | none | \dp |
TABLESPACE | C | none | \db+ |
TYPE | U | U | \dT+ |
The privileges that have been granted for a particular object are
displayed as a list of aclitem
entries, each having the
format:
grantee=privilege-abbreviation[*].../grantor
Each aclitem
lists all the permissions of one grantee that
have been granted by a particular grantor. Specific privileges are
represented by one-letter abbreviations from
Table 5.1, with *
appended if the privilege was granted with grant option. For example,
calvin=r*w/hobbes
specifies that the role
calvin
has the privilege
SELECT
(r
) with grant option
(*
) as well as the non-grantable
privilege UPDATE
(w
), both granted
by the role hobbes
. If calvin
also has some privileges on the same object granted by a different
grantor, those would appear as a separate aclitem
entry.
An empty grantee field in an aclitem
stands
for PUBLIC
.
As an example, suppose that user miriam
creates
table mytable
and does:
GRANT SELECT ON mytable TO PUBLIC; GRANT SELECT, UPDATE, INSERT ON mytable TO admin; GRANT SELECT (col1), UPDATE (col1) ON mytable TO miriam_rw;
Then psql's \dp
command
would show:
=> \dp mytable Access privileges Schema | Name | Type | Access privileges | Column privileges | Policies --------+---------+-------+-----------------------+-----------------------+---------- public | mytable | table | miriam=arwdDxt/miriam+| col1: +| | | | =r/miriam +| miriam_rw=rw/miriam | | | | admin=arw/miriam | | (1 row)
If the “Access privileges” column is empty for a given
object, it means the object has default privileges (that is, its
privileges entry in the relevant system catalog is null). Default
privileges always include all privileges for the owner, and can include
some privileges for PUBLIC
depending on the object
type, as explained above. The first GRANT
or REVOKE
on an object will instantiate the default
privileges (producing, for
example, miriam=arwdDxt/miriam
) and then modify them
per the specified request. Similarly, entries are shown in “Column
privileges” only for columns with nondefault privileges.
(Note: for this purpose, “default privileges” always means
the built-in default privileges for the object's type. An object whose
privileges have been affected by an ALTER DEFAULT
PRIVILEGES
command will always be shown with an explicit
privilege entry that includes the effects of
the ALTER
.)
Notice that the owner's implicit grant options are not marked in the
access privileges display. A *
will appear only when
grant options have been explicitly granted to someone.
In addition to the SQL-standard privilege system available through GRANT, tables can have row security policies that restrict, on a per-user basis, which rows can be returned by normal queries or inserted, updated, or deleted by data modification commands. This feature is also known as Row-Level Security. By default, tables do not have any policies, so that if a user has access privileges to a table according to the SQL privilege system, all rows within it are equally available for querying or updating.
When row security is enabled on a table (with
ALTER TABLE ... ENABLE ROW LEVEL
SECURITY), all normal access to the table for selecting rows or
modifying rows must be allowed by a row security policy. (However, the
table's owner is typically not subject to row security policies.) If no
policy exists for the table, a default-deny policy is used, meaning that
no rows are visible or can be modified. Operations that apply to the
whole table, such as TRUNCATE
and REFERENCES
,
are not subject to row security.
Row security policies can be specific to commands, or to roles, or to
both. A policy can be specified to apply to ALL
commands, or to SELECT
, INSERT
, UPDATE
,
or DELETE
. Multiple roles can be assigned to a given
policy, and normal role membership and inheritance rules apply.
To specify which rows are visible or modifiable according to a policy,
an expression is required that returns a Boolean result. This
expression will be evaluated for each row prior to any conditions or
functions coming from the user's query. (The only exceptions to this
rule are leakproof
functions, which are guaranteed to
not leak information; the optimizer may choose to apply such functions
ahead of the row-security check.) Rows for which the expression does
not return true
will not be processed. Separate expressions
may be specified to provide independent control over the rows which are
visible and the rows which are allowed to be modified. Policy
expressions are run as part of the query and with the privileges of the
user running the query, although security-definer functions can be used
to access data not available to the calling user.
Superusers and roles with the BYPASSRLS
attribute always
bypass the row security system when accessing a table. Table owners
normally bypass row security as well, though a table owner can choose to
be subject to row security with ALTER
TABLE ... FORCE ROW LEVEL SECURITY.
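A brief sketch of both attributes (the backup_operator role is an assumption; accounts is the table used in the example below):
ALTER ROLE backup_operator BYPASSRLS;           -- this role now bypasses all row security policies
ALTER TABLE accounts FORCE ROW LEVEL SECURITY;  -- the owner of accounts becomes subject to its policies too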
Enabling and disabling row security, as well as adding policies to a table, is always the privilege of the table owner only.
Policies are created using the CREATE POLICY command, altered using the ALTER POLICY command, and dropped using the DROP POLICY command. To enable and disable row security for a given table, use the ALTER TABLE command.
Each policy has a name and multiple policies can be defined for a table. As policies are table-specific, each policy for a table must have a unique name. Different tables may have policies with the same name.
When multiple policies apply to a given query, they are combined using
either OR
(for permissive policies, which are the
default) or using AND
(for restrictive policies).
This is similar to the rule that a given role has the privileges
of all roles that they are a member of. Permissive vs. restrictive
policies are discussed further below.
As a simple example, here is how to create a policy on
the account
relation to allow only members of
the managers
role to access rows, and only rows of their
accounts:
CREATE TABLE accounts (manager text, company text, contact_email text); ALTER TABLE accounts ENABLE ROW LEVEL SECURITY; CREATE POLICY account_managers ON accounts TO managers USING (manager = current_user);
The policy above implicitly provides a WITH CHECK
clause identical to its USING
clause, so that the
constraint applies both to rows selected by a command (so a manager
cannot SELECT
, UPDATE
,
or DELETE
existing rows belonging to a different
manager) and to rows modified by a command (so rows belonging to a
different manager cannot be created via INSERT
or UPDATE
).
If no role is specified, or the special user name
PUBLIC
is used, then the policy applies to all
users on the system. To allow all users to access only their own row in
a users
table, a simple policy can be used:
CREATE POLICY user_policy ON users USING (user_name = current_user);
This works similarly to the previous example.
To use a different policy for rows that are being added to the table
compared to those rows that are visible, multiple policies can be
combined. This pair of policies would allow all users to view all rows
in the users
table, but only modify their own:
CREATE POLICY user_sel_policy ON users FOR SELECT USING (true); CREATE POLICY user_mod_policy ON users USING (user_name = current_user);
In a SELECT
command, these two policies are combined
using OR
, with the net effect being that all rows
can be selected. In other command types, only the second policy applies,
so that the effects are the same as before.
Row security can also be disabled with the ALTER TABLE
command. Disabling row security does not remove any policies that are
defined on the table; they are simply ignored. Then all rows in the
table are visible and modifiable, subject to the standard SQL privileges
system.
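For example, a minimal sketch (the policies defined on users remain in place but are ignored):
ALTER TABLE users DISABLE ROW LEVEL SECURITY;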
Below is a larger example of how this feature can be used in production
environments. The table passwd
emulates a Unix password
file:
-- Simple passwd-file based example CREATE TABLE passwd ( user_name text UNIQUE NOT NULL, pwhash text, uid int PRIMARY KEY, gid int NOT NULL, real_name text NOT NULL, home_phone text, extra_info text, home_dir text NOT NULL, shell text NOT NULL ); CREATE ROLE admin; -- Administrator CREATE ROLE bob; -- Normal user CREATE ROLE alice; -- Normal user -- Populate the table INSERT INTO passwd VALUES ('admin','xxx',0,0,'Admin','111-222-3333',null,'/root','/bin/dash'); INSERT INTO passwd VALUES ('bob','xxx',1,1,'Bob','123-456-7890',null,'/home/bob','/bin/zsh'); INSERT INTO passwd VALUES ('alice','xxx',2,1,'Alice','098-765-4321',null,'/home/alice','/bin/zsh'); -- Be sure to enable row-level security on the table ALTER TABLE passwd ENABLE ROW LEVEL SECURITY; -- Create policies -- Administrator can see all rows and add any rows CREATE POLICY admin_all ON passwd TO admin USING (true) WITH CHECK (true); -- Normal users can view all rows CREATE POLICY all_view ON passwd FOR SELECT USING (true); -- Normal users can update their own records, but -- limit which shells a normal user is allowed to set CREATE POLICY user_mod ON passwd FOR UPDATE USING (current_user = user_name) WITH CHECK ( current_user = user_name AND shell IN ('/bin/bash','/bin/sh','/bin/dash','/bin/zsh','/bin/tcsh') ); -- Allow admin all normal rights GRANT SELECT, INSERT, UPDATE, DELETE ON passwd TO admin; -- Users only get select access on public columns GRANT SELECT (user_name, uid, gid, real_name, home_phone, extra_info, home_dir, shell) ON passwd TO public; -- Allow users to update certain columns GRANT UPDATE (pwhash, real_name, home_phone, extra_info, shell) ON passwd TO public;
As with any security settings, it's important to test and ensure that the system is behaving as expected. Using the example above, the following session demonstrates that the permission system is working properly.
-- admin can view all rows and fields postgres=> set role admin; SET postgres=> table passwd; user_name | pwhash | uid | gid | real_name | home_phone | extra_info | home_dir | shell -----------+--------+-----+-----+-----------+--------------+------------+-------------+----------- admin | xxx | 0 | 0 | Admin | 111-222-3333 | | /root | /bin/dash bob | xxx | 1 | 1 | Bob | 123-456-7890 | | /home/bob | /bin/zsh alice | xxx | 2 | 1 | Alice | 098-765-4321 | | /home/alice | /bin/zsh (3 rows) -- Test what Alice is able to do postgres=> set role alice; SET postgres=> table passwd; ERROR: permission denied for table passwd postgres=> select user_name,real_name,home_phone,extra_info,home_dir,shell from passwd; user_name | real_name | home_phone | extra_info | home_dir | shell -----------+-----------+--------------+------------+-------------+----------- admin | Admin | 111-222-3333 | | /root | /bin/dash bob | Bob | 123-456-7890 | | /home/bob | /bin/zsh alice | Alice | 098-765-4321 | | /home/alice | /bin/zsh (3 rows) postgres=> update passwd set user_name = 'joe'; ERROR: permission denied for table passwd -- Alice is allowed to change her own real_name, but no others postgres=> update passwd set real_name = 'Alice Doe'; UPDATE 1 postgres=> update passwd set real_name = 'John Doe' where user_name = 'admin'; UPDATE 0 postgres=> update passwd set shell = '/bin/xx'; ERROR: new row violates WITH CHECK OPTION for "passwd" postgres=> delete from passwd; ERROR: permission denied for table passwd postgres=> insert into passwd (user_name) values ('xxx'); ERROR: permission denied for table passwd -- Alice can change her own password; RLS silently prevents updating other rows postgres=> update passwd set pwhash = 'abc'; UPDATE 1
All of the policies constructed thus far have been permissive policies,
meaning that when multiple policies are applied they are combined using
the “OR” Boolean operator. While permissive policies can be constructed
to only allow access to rows in the intended cases, it can be simpler to
combine permissive policies with restrictive policies (which the records
must pass and which are combined using the “AND” Boolean operator).
Building on the example above, we add a restrictive policy to require
the administrator to be connected over a local Unix socket to access the
records of the passwd
table:
CREATE POLICY admin_local_only ON passwd AS RESTRICTIVE TO admin USING (pg_catalog.inet_client_addr() IS NULL);
We can then see that an administrator connecting over a network will not see any records, due to the restrictive policy:
=> SELECT current_user; current_user -------------- admin (1 row) => select inet_client_addr(); inet_client_addr ------------------ 127.0.0.1 (1 row) => TABLE passwd; user_name | pwhash | uid | gid | real_name | home_phone | extra_info | home_dir | shell -----------+--------+-----+-----+-----------+------------+------------+----------+------- (0 rows) => UPDATE passwd set pwhash = NULL; UPDATE 0
Referential integrity checks, such as unique or primary key constraints and foreign key references, always bypass row security to ensure that data integrity is maintained. Care must be taken when developing schemas and row level policies to avoid “covert channel” leaks of information through such referential integrity checks.
In some contexts it is important to be sure that row security is
not being applied. For example, when taking a backup, it could be
disastrous if row security silently caused some rows to be omitted
from the backup. In such a situation, you can set the
row_security configuration parameter
to off
. This does not in itself bypass row security;
what it does is throw an error if any query's results would get filtered
by a policy. The reason for the error can then be investigated and
fixed.
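A minimal sketch, using the passwd table from the example above:
SET row_security = off;
SELECT * FROM passwd;  -- raises an error if the current user would be subject to a policy on passwd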
In the examples above, the policy expressions consider only the current
values in the row to be accessed or updated. This is the simplest and
best-performing case; when possible, it's best to design row security
applications to work this way. If it is necessary to consult other rows
or other tables to make a policy decision, that can be accomplished using
sub-SELECT
s, or functions that contain SELECT
s,
in the policy expressions. Be aware however that such accesses can
create race conditions that could allow information leakage if care is
not taken. As an example, consider the following table design:
-- definition of privilege groups CREATE TABLE groups (group_id int PRIMARY KEY, group_name text NOT NULL); INSERT INTO groups VALUES (1, 'low'), (2, 'medium'), (5, 'high'); GRANT ALL ON groups TO alice; -- alice is the administrator GRANT SELECT ON groups TO public; -- definition of users' privilege levels CREATE TABLE users (user_name text PRIMARY KEY, group_id int NOT NULL REFERENCES groups); INSERT INTO users VALUES ('alice', 5), ('bob', 2), ('mallory', 2); GRANT ALL ON users TO alice; GRANT SELECT ON users TO public; -- table holding the information to be protected CREATE TABLE information (info text, group_id int NOT NULL REFERENCES groups); INSERT INTO information VALUES ('barely secret', 1), ('slightly secret', 2), ('very secret', 5); ALTER TABLE information ENABLE ROW LEVEL SECURITY; -- a row should be visible to/updatable by users whose security group_id is -- greater than or equal to the row's group_id CREATE POLICY fp_s ON information FOR SELECT USING (group_id <= (SELECT group_id FROM users WHERE user_name = current_user)); CREATE POLICY fp_u ON information FOR UPDATE USING (group_id <= (SELECT group_id FROM users WHERE user_name = current_user)); -- we rely only on RLS to protect the information table GRANT ALL ON information TO public;
Now suppose that alice
wishes to change the “slightly
secret” information, but decides that mallory
should not
be trusted with the new content of that row, so she does:
BEGIN; UPDATE users SET group_id = 1 WHERE user_name = 'mallory'; UPDATE information SET info = 'secret from mallory' WHERE group_id = 2; COMMIT;
That looks safe; there is no window wherein mallory
should be
able to see the “secret from mallory” string. However, there is
a race condition here. If mallory
is concurrently doing,
say,
SELECT * FROM information WHERE group_id = 2 FOR UPDATE;
and her transaction is in READ COMMITTED
mode, it is possible
for her to see “secret from mallory”. That happens if her
transaction reaches the information
row just
after alice
's does. It blocks waiting
for alice
's transaction to commit, then fetches the updated
row contents thanks to the FOR UPDATE
clause. However, it
does not fetch an updated row for the
implicit SELECT
from users
, because that
sub-SELECT
did not have FOR UPDATE
; instead
the users
row is read with the snapshot taken at the start
of the query. Therefore, the policy expression tests the old value
of mallory
's privilege level and allows her to see the
updated row.
There are several ways around this problem. One simple answer is to use
SELECT ... FOR SHARE
in sub-SELECT
s in row
security policies. However, that requires granting UPDATE
privilege on the referenced table (here users
) to the
affected users, which might be undesirable. (But another row security
policy could be applied to prevent them from actually exercising that
privilege; or the sub-SELECT
could be embedded into a security
definer function.) Also, heavy concurrent use of row share locks on the
referenced table could pose a performance problem, especially if updates
of it are frequent. Another solution, practical if updates of the
referenced table are infrequent, is to take an
ACCESS EXCLUSIVE
lock on the
referenced table when updating it, so that no concurrent transactions
could be examining old row values. Or one could just wait for all
concurrent transactions to end after committing an update of the
referenced table and before making changes that rely on the new security
situation.
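A sketch of the first workaround, rewriting the fp_s policy from the example above so that the sub-SELECT locks the users row it consults (as noted, this requires granting UPDATE on users to the affected users):
DROP POLICY fp_s ON information;
CREATE POLICY fp_s ON information FOR SELECT
  USING (group_id <= (SELECT group_id FROM users WHERE user_name = current_user FOR SHARE));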
For additional details see CREATE POLICY and ALTER TABLE.
A PostgreSQL database cluster contains one or more named databases. Roles and a few other object types are shared across the entire cluster. A client connection to the server can only access data in a single database, the one specified in the connection request.
Users of a cluster do not necessarily have the privilege to access every
database in the cluster. Sharing of role names means that there
cannot be different roles named, say, joe
in two databases
in the same cluster; but the system can be configured to allow
joe
access to only some of the databases.
A database contains one or more named schemas, which
in turn contain tables. Schemas also contain other kinds of named
objects, including data types, functions, and operators. The same
object name can be used in different schemas without conflict; for
example, both schema1
and myschema
can
contain tables named mytable
. Unlike databases,
schemas are not rigidly separated: a user can access objects in any
of the schemas in the database they are connected to, if they have
privileges to do so.
There are several reasons why one might want to use schemas:
To allow many users to use one database without interfering with each other.
To organize database objects into logical groups to make them more manageable.
Third-party applications can be put into separate schemas so they do not collide with the names of other objects.
Schemas are analogous to directories at the operating system level, except that schemas cannot be nested.
To create a schema, use the CREATE SCHEMA command. Give the schema a name of your choice. For example:
CREATE SCHEMA myschema;
To create or access objects in a schema, write a qualified name consisting of the schema name and table name separated by a dot:
schema.table
This works anywhere a table name is expected, including the table modification commands and the data access commands discussed in the following chapters. (For brevity we will speak of tables only, but the same ideas apply to other kinds of named objects, such as types and functions.)
Actually, the even more general syntax
database.schema.table
can be used too, but at present this is just for pro forma compliance with the SQL standard. If you write a database name, it must be the same as the database you are connected to.
So to create a table in the new schema, use:
CREATE TABLE myschema.mytable ( ... );
To drop a schema if it's empty (all objects in it have been dropped), use:
DROP SCHEMA myschema;
To drop a schema including all contained objects, use:
DROP SCHEMA myschema CASCADE;
See Section 5.14 for a description of the general mechanism behind this.
Often you will want to create a schema owned by someone else (since this is one of the ways to restrict the activities of your users to well-defined namespaces). The syntax for that is:
CREATE SCHEMA schema_name AUTHORIZATION user_name;
You can even omit the schema name, in which case the schema name will be the same as the user name. See Section 5.9.6 for how this can be useful.
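For example, assuming a role named joe exists, the following creates a schema that is also named joe and owned by that role:
CREATE SCHEMA AUTHORIZATION joe;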
Schema names beginning with pg_
are reserved for
system purposes and cannot be created by users.
In the previous sections we created tables without specifying any schema names. By default such tables (and other objects) are automatically put into a schema named “public”. Every new database contains such a schema. Thus, the following are equivalent:
CREATE TABLE products ( ... );
and:
CREATE TABLE public.products ( ... );
Qualified names are tedious to write, and it's often best not to wire a particular schema name into applications anyway. Therefore tables are often referred to by unqualified names, which consist of just the table name. The system determines which table is meant by following a search path, which is a list of schemas to look in. The first matching table in the search path is taken to be the one wanted. If there is no match in the search path, an error is reported, even if matching table names exist in other schemas in the database.
The ability to create like-named objects in different schemas complicates
writing a query that references precisely the same objects every time. It
also opens up the potential for users to change the behavior of other
users' queries, maliciously or accidentally. Due to the prevalence of
unqualified names in queries and their use
in PostgreSQL internals, adding a schema
to search_path
effectively trusts all users having
CREATE
privilege on that schema. When you run an
ordinary query, a malicious user able to create objects in a schema of
your search path can take control and execute arbitrary SQL functions as
though you executed them.
The first schema named in the search path is called the current schema.
Aside from being the first schema searched, it is also the schema in
which new tables will be created if the CREATE TABLE
command does not specify a schema name.
To show the current search path, use the following command:
SHOW search_path;
In the default setup this returns:
search_path -------------- "$user", public
The first element specifies that a schema with the same name as the current user is to be searched. If no such schema exists, the entry is ignored. The second element refers to the public schema that we have seen already.
The first schema in the search path that exists is the default location for creating new objects. That is the reason that by default objects are created in the public schema. When objects are referenced in any other context without schema qualification (table modification, data modification, or query commands) the search path is traversed until a matching object is found. Therefore, in the default configuration, any unqualified access again can only refer to the public schema.
To put our new schema in the path, we use:
SET search_path TO myschema,public;
(We omit the $user
here because we have no
immediate need for it.) And then we can access the table without
schema qualification:
DROP TABLE mytable;
Also, since myschema
is the first element in
the path, new objects would by default be created in it.
We could also have written:
SET search_path TO myschema;
Then we no longer have access to the public schema without explicit qualification. There is nothing special about the public schema except that it exists by default. It can be dropped, too.
See also Section 9.26 for other ways to manipulate the schema search path.
The search path works in the same way for data type names, function names, and operator names as it does for table names. Data type and function names can be qualified in exactly the same way as table names. If you need to write a qualified operator name in an expression, there is a special provision: you must write
OPERATOR(schema.operator)
This is needed to avoid syntactic ambiguity. An example is:
SELECT 3 OPERATOR(pg_catalog.+) 4;
In practice one usually relies on the search path for operators, so as not to have to write anything so ugly as that.
By default, users cannot access any objects in schemas they do not
own. To allow that, the owner of the schema must grant the
USAGE
privilege on the schema. To allow users
to make use of the objects in the schema, additional privileges
might need to be granted, as appropriate for the object.
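For instance, a minimal sketch that lets an assumed role joe read the existing tables in myschema:
GRANT USAGE ON SCHEMA myschema TO joe;
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO joe;  -- object-level privileges are still needed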
A user can also be allowed to create objects in someone else's
schema. To allow that, the CREATE
privilege on
the schema needs to be granted. Note that by default, everyone
has CREATE
and USAGE
privileges on
the schema
public
. This allows all users that are able to
connect to a given database to create objects in its
public
schema.
Some usage patterns call for
revoking that privilege:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
(The first “public” is the schema, the second “public” means “every user”. In the first sense it is an identifier, in the second sense it is a key word, hence the different capitalization; recall the guidelines from Section 4.1.1.)
In addition to public
and user-created schemas, each
database contains a pg_catalog
schema, which contains
the system tables and all the built-in data types, functions, and
operators. pg_catalog
is always effectively part of
the search path. If it is not named explicitly in the path then
it is implicitly searched before searching the path's
schemas. This ensures that built-in names will always be
findable. However, you can explicitly place
pg_catalog
at the end of your search path if you
prefer to have user-defined names override built-in names.
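For example, a sketch that lets names in myschema and public override built-in names:
SET search_path TO myschema, public, pg_catalog;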
Since system table names begin with pg_
, it is best to
avoid such names to ensure that you won't suffer a conflict if some
future version defines a system table named the same as your
table. (With the default search path, an unqualified reference to
your table name would then be resolved as the system table instead.)
System tables will continue to follow the convention of having
names beginning with pg_
, so that they will not
conflict with unqualified user-table names so long as users avoid
the pg_
prefix.
Schemas can be used to organize your data in many ways.
A secure schema usage pattern prevents untrusted
users from changing the behavior of other users' queries. When a database
does not use a secure schema usage pattern, users wishing to securely
query that database would take protective action at the beginning of each
session. Specifically, they would begin each session by
setting search_path
to the empty string or otherwise
removing non-superuser-writable schemas
from search_path
. There are a few usage patterns
easily supported by the default configuration:
Constrain ordinary users to user-private schemas. To implement this,
issue REVOKE CREATE ON SCHEMA public FROM PUBLIC
,
and create a schema for each user with the same name as that user.
Recall that the default search path starts
with $user
, which resolves to the user name.
Therefore, if each user has a separate schema, they access their own
schemas by default. After adopting this pattern in a database where
untrusted users had already logged in, consider auditing the public
schema for objects named like objects in
schema pg_catalog
. This pattern is a secure schema
usage pattern unless an untrusted user is the database owner or holds
the CREATEROLE
privilege, in which case no secure
schema usage pattern exists.
Remove the public schema from the default search path, by modifying
postgresql.conf
or by issuing ALTER ROLE ALL SET search_path =
"$user"
. Everyone retains the ability to create objects in
the public schema, but only qualified names will choose those objects.
While qualified table references are fine, calls to functions in the
public schema will be unsafe or
unreliable. If you create functions or extensions in the public
schema, use the first pattern instead. Otherwise, like the first
pattern, this is secure unless an untrusted user is the database owner
or holds the CREATEROLE
privilege.
Keep the default. All users access the public schema implicitly. This simulates the situation where schemas are not available at all, giving a smooth transition from the non-schema-aware world. However, this is never a secure pattern. It is acceptable only when the database has a single user or a few mutually-trusting users.
For any pattern, to install shared applications (tables to be used by everyone, additional functions provided by third parties, etc.), put them into separate schemas. Remember to grant appropriate privileges to allow the other users to access them. Users can then refer to these additional objects by qualifying the names with a schema name, or they can put the additional schemas into their search path, as they choose.
In the SQL standard, the notion of objects in the same schema
being owned by different users does not exist. Moreover, some
implementations do not allow you to create schemas that have a
different name than their owner. In fact, the concepts of schema
and user are nearly equivalent in a database system that
implements only the basic schema support specified in the
standard. Therefore, many users consider qualified names to
really consist of user_name.table_name.
This is how PostgreSQL will effectively
behave if you create a per-user schema for every user.
Also, there is no concept of a public
schema in the
SQL standard. For maximum conformance to the standard, you should
not use the public
schema.
Of course, some SQL database systems might not implement schemas at all, or provide namespace support by allowing (possibly limited) cross-database access. If you need to work with those systems, then maximum portability would be achieved by not using schemas at all.
PostgreSQL implements table inheritance, which can be a useful tool for database designers. (SQL:1999 and later define a type inheritance feature, which differs in many respects from the features described here.)
Let's start with an example: suppose we are trying to build a data
model for cities. Each state has many cities, but only one
capital. We want to be able to quickly retrieve the capital city
for any particular state. This can be done by creating two tables,
one for state capitals and one for cities that are not
capitals. However, what happens when we want to ask for data about
a city, regardless of whether it is a capital or not? The
inheritance feature can help to resolve this problem. We define the
capitals
table so that it inherits from
cities
:
CREATE TABLE cities ( name text, population float, elevation int -- in feet ); CREATE TABLE capitals ( state char(2) ) INHERITS (cities);
In this case, the capitals
table inherits
all the columns of its parent table, cities
. State
capitals also have an extra column, state
, that shows
their state.
In PostgreSQL, a table can inherit from zero or more other tables, and a query can reference either all rows of a table or all rows of a table plus all of its descendant tables. The latter behavior is the default. For example, the following query finds the names of all cities, including state capitals, that are located at an elevation over 500 feet:
SELECT name, elevation FROM cities WHERE elevation > 500;
Given the sample data from the PostgreSQL tutorial (see Section 2.1), this returns:
name | elevation -----------+----------- Las Vegas | 2174 Mariposa | 1953 Madison | 845
On the other hand, the following query finds all the cities that are not state capitals and are situated at an elevation over 500 feet:
SELECT name, elevation FROM ONLY cities WHERE elevation > 500; name | elevation -----------+----------- Las Vegas | 2174 Mariposa | 1953
Here the ONLY
keyword indicates that the query
should apply only to cities
, and not any tables
below cities
in the inheritance hierarchy. Many
of the commands that we have already discussed —
SELECT
, UPDATE
and
DELETE
— support the
ONLY
keyword.
You can also write the table name with a trailing *
to explicitly specify that descendant tables are included:
SELECT name, elevation FROM cities* WHERE elevation > 500;
Writing *
is not necessary, since this behavior is always
the default. However, this syntax is still supported for
compatibility with older releases where the default could be changed.
In some cases you might wish to know which table a particular row
originated from. There is a system column called
tableoid
in each table which can tell you the
originating table:
SELECT c.tableoid, c.name, c.elevation FROM cities c WHERE c.elevation > 500;
which returns:
tableoid | name | elevation ----------+-----------+----------- 139793 | Las Vegas | 2174 139793 | Mariposa | 1953 139798 | Madison | 845
(If you try to reproduce this example, you will probably get
different numeric OIDs.) By doing a join with
pg_class
you can see the actual table names:
SELECT p.relname, c.name, c.elevation FROM cities c, pg_class p WHERE c.elevation > 500 AND c.tableoid = p.oid;
which returns:
relname | name | elevation ----------+-----------+----------- cities | Las Vegas | 2174 cities | Mariposa | 1953 capitals | Madison | 845
Another way to get the same effect is to use the regclass
alias type, which will print the table OID symbolically:
SELECT c.tableoid::regclass, c.name, c.elevation FROM cities c WHERE c.elevation > 500;
Inheritance does not automatically propagate data from
INSERT
or COPY
commands to
other tables in the inheritance hierarchy. In our example, the
following INSERT
statement will fail:
INSERT INTO cities (name, population, elevation, state) VALUES ('Albany', NULL, NULL, 'NY');
We might hope that the data would somehow be routed to the
capitals
table, but this does not happen:
INSERT
always inserts into exactly the table
specified. In some cases it is possible to redirect the insertion
using a rule (see Chapter 41). However that does not
help for the above case because the cities
table
does not contain the column state
, and so the
command will be rejected before the rule can be applied.
All check constraints and not-null constraints on a parent table are
automatically inherited by its children, unless explicitly specified
otherwise with NO INHERIT
clauses. Other types of constraints
(unique, primary key, and foreign key constraints) are not inherited.
A table can inherit from more than one parent table, in which case it has the union of the columns defined by the parent tables. Any columns declared in the child table's definition are added to these. If the same column name appears in multiple parent tables, or in both a parent table and the child's definition, then these columns are “merged” so that there is only one such column in the child table. To be merged, columns must have the same data types, else an error is raised. Inheritable check constraints and not-null constraints are merged in a similar fashion. Thus, for example, a merged column will be marked not-null if any one of the column definitions it came from is marked not-null. Check constraints are merged if they have the same name, and the merge will fail if their conditions are different.
Table inheritance is typically established when the child table is
created, using the INHERITS
clause of the
CREATE TABLE
statement.
Alternatively, a table which is already defined in a compatible way can
have a new parent relationship added, using the INHERIT
variant of ALTER TABLE
.
To do this the new child table must already include columns with
the same names and types as the columns of the parent. It must also include
check constraints with the same names and check expressions as those of the
parent. Similarly an inheritance link can be removed from a child using the
NO INHERIT
variant of ALTER TABLE
.
Dynamically adding and removing inheritance links like this can be useful
when the inheritance relationship is being used for table
partitioning (see Section 5.11).
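For example, a sketch using the cities and capitals tables from above:
ALTER TABLE capitals NO INHERIT cities;  -- detach capitals from the hierarchy
ALTER TABLE capitals INHERIT cities;     -- re-attach it; columns and check constraints must still match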
One convenient way to create a compatible table that will later be made
a new child is to use the LIKE
clause in CREATE
TABLE
. This creates a new table with the same columns as
the source table. If there are any CHECK
constraints defined on the source table, the INCLUDING
CONSTRAINTS
option to LIKE
should be
specified, as the new child must have constraints matching the parent
to be considered compatible.
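A minimal sketch of that approach (the new table name is hypothetical):
CREATE TABLE cities_copy (LIKE cities INCLUDING CONSTRAINTS);
ALTER TABLE cities_copy INHERIT cities;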
A parent table cannot be dropped while any of its children remain. Neither
can columns or check constraints of child tables be dropped or altered
if they are inherited
from any parent tables. If you wish to remove a table and all of its
descendants, one easy way is to drop the parent table with the
CASCADE
option (see Section 5.14).
ALTER TABLE
will
propagate any changes in column data definitions and check
constraints down the inheritance hierarchy. Again, dropping
columns that are depended on by other tables is only possible when using
the CASCADE
option. ALTER
TABLE
follows the same rules for duplicate column merging
and rejection that apply during CREATE TABLE
.
Inherited queries perform access permission checks on the parent table
only. Thus, for example, granting UPDATE
permission on
the cities
table implies permission to update rows in
the capitals
table as well, when they are
accessed through cities
. This preserves the appearance
that the data is (also) in the parent table. But
the capitals
table could not be updated directly
without an additional grant. In a similar way, the parent table's row
security policies (see Section 5.8) are applied to
rows coming from child tables during an inherited query. A child table's
policies, if any, are applied only when it is the table explicitly named
in the query; and in that case, any policies attached to its parent(s) are
ignored.
Foreign tables (see Section 5.12) can also be part of inheritance hierarchies, either as parent or child tables, just as regular tables can be. If a foreign table is part of an inheritance hierarchy then any operations not supported by the foreign table are not supported on the whole hierarchy either.
Note that not all SQL commands are able to work on
inheritance hierarchies. Commands that are used for data querying,
data modification, or schema modification
(e.g., SELECT
, UPDATE
, DELETE
,
most variants of ALTER TABLE
, but
not INSERT
or ALTER TABLE ...
RENAME
) typically default to including child tables and
support the ONLY
notation to exclude them.
Commands that do database maintenance and tuning
(e.g., REINDEX
, VACUUM
)
typically only work on individual, physical tables and do not
support recursing over inheritance hierarchies. The respective
behavior of each individual command is documented in its reference
page (SQL Commands).
A serious limitation of the inheritance feature is that indexes (including unique constraints) and foreign key constraints only apply to single tables, not to their inheritance children. This is true on both the referencing and referenced sides of a foreign key constraint. Thus, in the terms of the above example:
If we declared cities
.name
to be
UNIQUE
or a PRIMARY KEY
, this would not stop the
capitals
table from having rows with names duplicating
rows in cities
. And those duplicate rows would by
default show up in queries from cities
. In fact, by
default capitals
would have no unique constraint at all,
and so could contain multiple rows with the same name.
You could add a unique constraint to capitals
, but this
would not prevent duplication compared to cities
.
Similarly, if we were to specify that
cities
.name
REFERENCES
some
other table, this constraint would not automatically propagate to
capitals
. In this case you could work around it by
manually adding the same REFERENCES
constraint to
capitals
.
Specifying that another table's column REFERENCES
cities(name)
would allow the other table to contain city names, but
not capital names. There is no good workaround for this case.
Some functionality not implemented for inheritance hierarchies is implemented for declarative partitioning. Considerable care is needed in deciding whether partitioning with legacy inheritance is useful for your application.
PostgreSQL supports basic table partitioning. This section describes why and how to implement partitioning as part of your database design.
Partitioning refers to splitting what is logically one large table into smaller physical pieces. Partitioning can provide several benefits:
Query performance can be improved dramatically in certain situations, particularly when most of the heavily accessed rows of the table are in a single partition or a small number of partitions. Partitioning effectively substitutes for the upper tree levels of indexes, making it more likely that the heavily-used parts of the indexes fit in memory.
When queries or updates access a large percentage of a single partition, performance can be improved by using a sequential scan of that partition instead of using an index, which would require random-access reads scattered across the whole table.
Bulk loads and deletes can be accomplished by adding or removing
partitions, if the usage pattern is accounted for in the
partitioning design. Dropping an individual partition
using DROP TABLE
, or doing ALTER TABLE
DETACH PARTITION
, is far faster than a bulk
operation. These commands also entirely avoid the
VACUUM
overhead caused by a bulk DELETE
.
Seldom-used data can be migrated to cheaper and slower storage media.
These benefits will normally be worthwhile only when a table would otherwise be very large. The exact point at which a table will benefit from partitioning depends on the application, although a rule of thumb is that the size of the table should exceed the physical memory of the database server.
PostgreSQL offers built-in support for the following forms of partitioning:
Range Partitioning
The table is partitioned into “ranges” defined
by a key column or set of columns, with no overlap between
the ranges of values assigned to different partitions. For
example, one might partition by date ranges, or by ranges of
identifiers for particular business objects.
Each range's bounds are understood as being inclusive at the
lower end and exclusive at the upper end. For example, if one
partition's range is from 1
to 10
, and the next one's range is
from 10
to 20
, then
value 10
belongs to the second partition not
the first.
List Partitioning
The table is partitioned by explicitly listing which key value(s) appear in each partition.
Hash Partitioning
The table is partitioned by specifying a modulus and a remainder for each partition. Each partition will hold the rows for which the hash value of the partition key divided by the specified modulus produces the specified remainder (see the sketch below).
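As an illustration of the list and hash forms, here is a hypothetical sketch (the tables and columns are assumptions):
-- List partitioning by region code
CREATE TABLE sales (region text, amount numeric) PARTITION BY LIST (region);
CREATE TABLE sales_emea PARTITION OF sales FOR VALUES IN ('EMEA');
CREATE TABLE sales_apac PARTITION OF sales FOR VALUES IN ('APAC');
-- Hash partitioning into four pieces by customer id
CREATE TABLE orders (order_id bigint, customer_id bigint) PARTITION BY HASH (customer_id);
CREATE TABLE orders_p0 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE orders_p1 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE orders_p2 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE orders_p3 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 3);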
If your application needs to use other forms of partitioning not listed
above, alternative methods such as inheritance and
UNION ALL
views can be used instead. Such methods
offer flexibility but do not have some of the performance benefits
of built-in declarative partitioning.
PostgreSQL allows you to declare that a table is divided into partitions. The table that is divided is referred to as a partitioned table. The declaration includes the partitioning method as described above, plus a list of columns or expressions to be used as the partition key.
The partitioned table itself is a “virtual” table having no storage of its own. Instead, the storage belongs to partitions, which are otherwise-ordinary tables associated with the partitioned table. Each partition stores a subset of the data as defined by its partition bounds. All rows inserted into a partitioned table will be routed to the appropriate one of the partitions based on the values of the partition key column(s). Updating the partition key of a row will cause it to be moved into a different partition if it no longer satisfies the partition bounds of its original partition.
Partitions may themselves be defined as partitioned tables, resulting in sub-partitioning. Although all partitions must have the same columns as their partitioned parent, partitions may have their own indexes, constraints and default values, distinct from those of other partitions. See CREATE TABLE for more details on creating partitioned tables and partitions.
It is not possible to turn a regular table into a partitioned table or
vice versa. However, it is possible to add an existing regular or
partitioned table as a partition of a partitioned table, or remove a
partition from a partitioned table turning it into a standalone table;
this can simplify and speed up many maintenance processes.
See ALTER TABLE to learn more about the
ATTACH PARTITION
and DETACH PARTITION
sub-commands.
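A minimal sketch of both sub-commands (the table and partition names anticipate the measurement example below):
ALTER TABLE measurement DETACH PARTITION measurement_y2006m02;   -- measurement_y2006m02 becomes a standalone table
ALTER TABLE measurement ATTACH PARTITION measurement_y2006m02
    FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');            -- re-attach it with explicit bounds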
Partitions can also be foreign tables, although considerable care is needed because it is then the user's responsibility that the contents of the foreign table satisfy the partitioning rule. There are some other restrictions as well. See CREATE FOREIGN TABLE for more information.
Suppose we are constructing a database for a large ice cream company. The company measures peak temperatures every day as well as ice cream sales in each region. Conceptually, we want a table like:
CREATE TABLE measurement ( city_id int not null, logdate date not null, peaktemp int, unitsales int );
We know that most queries will access just the last week's, month's or quarter's data, since the main use of this table will be to prepare online reports for management. To reduce the amount of old data that needs to be stored, we decide to keep only the most recent 3 years worth of data. At the beginning of each month we will remove the oldest month's data. In this situation we can use partitioning to help us meet all of our different requirements for the measurements table.
To use declarative partitioning in this case, use the following steps:
Create the measurement table as a partitioned table by specifying the PARTITION BY clause, which includes the partitioning method (RANGE in this case) and the list of column(s) to use as the partition key.
CREATE TABLE measurement ( city_id int not null, logdate date not null, peaktemp int, unitsales int ) PARTITION BY RANGE (logdate);
Create partitions. Each partition's definition must specify bounds that correspond to the partitioning method and partition key of the parent. Note that specifying bounds such that the new partition's values would overlap with those in one or more existing partitions will cause an error.
Partitions thus created are in every way normal PostgreSQL tables (or, possibly, foreign tables). It is possible to specify a tablespace and storage parameters for each partition separately.
For our example, each partition should hold one month's worth of data, to match the requirement of deleting one month's data at a time. So the commands might look like:
CREATE TABLE measurement_y2006m02 PARTITION OF measurement
    FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');

CREATE TABLE measurement_y2006m03 PARTITION OF measurement
    FOR VALUES FROM ('2006-03-01') TO ('2006-04-01');

...
CREATE TABLE measurement_y2007m11 PARTITION OF measurement
    FOR VALUES FROM ('2007-11-01') TO ('2007-12-01');

CREATE TABLE measurement_y2007m12 PARTITION OF measurement
    FOR VALUES FROM ('2007-12-01') TO ('2008-01-01')
    TABLESPACE fasttablespace;

CREATE TABLE measurement_y2008m01 PARTITION OF measurement
    FOR VALUES FROM ('2008-01-01') TO ('2008-02-01')
    WITH (parallel_workers = 4)
    TABLESPACE fasttablespace;
(Recall that adjacent partitions can share a bound value, since range upper bounds are treated as exclusive bounds.)
If you wish to implement sub-partitioning, again specify the PARTITION BY clause in the commands used to create individual partitions, for example:
CREATE TABLE measurement_y2006m02 PARTITION OF measurement FOR VALUES FROM ('2006-02-01') TO ('2006-03-01') PARTITION BY RANGE (peaktemp);
After creating partitions of measurement_y2006m02, any data inserted into measurement that is mapped to measurement_y2006m02 (or data that is directly inserted into measurement_y2006m02, which is allowed provided its partition constraint is satisfied) will be further redirected to one of its partitions based on the peaktemp column. The partition key specified may overlap with the parent's partition key, although care should be taken when specifying the bounds of a sub-partition such that the set of data it accepts constitutes a subset of what the partition's own bounds allow; the system does not try to check whether that's really the case.
Inserting data into the parent table that does not map to one of the existing partitions will cause an error; an appropriate partition must be added manually.
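Alternatively, a DEFAULT partition can be declared to catch any rows that do not match the bounds of another partition; a minimal sketch (the partition name is arbitrary):
CREATE TABLE measurement_default PARTITION OF measurement DEFAULT;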
It is not necessary to manually create table constraints describing the partition boundary conditions for partitions. Such constraints will be created automatically.
Create an index on the key column(s), as well as any other indexes you might want, on the partitioned table. (The key index is not strictly necessary, but in most scenarios it is helpful.) This automatically creates a matching index on each partition, and any partitions you create or attach later will also have such an index. An index or unique constraint declared on a partitioned table is “virtual” in the same way that the partitioned table is: the actual data is in child indexes on the individual partition tables.
CREATE INDEX ON measurement (logdate);
Ensure that the enable_partition_pruning configuration parameter is not disabled in postgresql.conf. If it is, queries will not be optimized as desired.
In the above example we would be creating a new partition each month, so it might be wise to write a script that generates the required DDL automatically.
Normally the set of partitions established when initially defining the table is not intended to remain static. It is common to want to remove partitions holding old data and periodically add new partitions for new data. One of the most important advantages of partitioning is precisely that it allows this otherwise painful task to be executed nearly instantaneously by manipulating the partition structure, rather than physically moving large amounts of data around.
The simplest option for removing old data is to drop the partition that is no longer necessary:
DROP TABLE measurement_y2006m02;
This can very quickly delete millions of records because it doesn't have to individually delete every record. Note however that the above command requires taking an ACCESS EXCLUSIVE lock on the parent table.
Another option that is often preferable is to remove the partition from the partitioned table but retain access to it as a table in its own right. This has two forms:
ALTER TABLE measurement DETACH PARTITION measurement_y2006m02;

ALTER TABLE measurement DETACH PARTITION measurement_y2006m02 CONCURRENTLY;
These allow further operations to be performed on the data before it is dropped. For example, this is often a useful time to back up the data using COPY, pg_dump, or similar tools. It might also be a useful time to aggregate data into smaller formats, perform other data manipulations, or run reports. The first form of the command requires an ACCESS EXCLUSIVE lock on the parent table. Adding the CONCURRENTLY qualifier as in the second form allows the detach operation to require only SHARE UPDATE EXCLUSIVE lock on the parent table, but see ALTER TABLE ... DETACH PARTITION for details on the restrictions.
Similarly we can add a new partition to handle new data. We can create an empty partition in the partitioned table just as the original partitions were created above:
CREATE TABLE measurement_y2008m02 PARTITION OF measurement FOR VALUES FROM ('2008-02-01') TO ('2008-03-01') TABLESPACE fasttablespace;
As an alternative, it is sometimes more convenient to create the new table outside the partition structure, and attach it as a partition later. This allows new data to be loaded, checked, and transformed prior to it appearing in the partitioned table. Moreover, the ATTACH PARTITION operation requires only SHARE UPDATE EXCLUSIVE lock on the partitioned table, as opposed to the ACCESS EXCLUSIVE lock that is required by CREATE TABLE ... PARTITION OF, so it is more friendly to concurrent operations on the partitioned table.
The CREATE TABLE ... LIKE option is helpful to avoid tediously repeating the parent table's definition:
CREATE TABLE measurement_y2008m02
  (LIKE measurement INCLUDING DEFAULTS INCLUDING CONSTRAINTS)
  TABLESPACE fasttablespace;

ALTER TABLE measurement_y2008m02 ADD CONSTRAINT y2008m02
   CHECK ( logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01' );

\copy measurement_y2008m02 from 'measurement_y2008m02'
-- possibly some other data preparation work

ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02
    FOR VALUES FROM ('2008-02-01') TO ('2008-03-01');
Before running the ATTACH PARTITION command, it is recommended to create a CHECK constraint on the table to be attached that matches the expected partition constraint, as illustrated above. That way, the system will be able to skip the scan which is otherwise needed to validate the implicit partition constraint. Without the CHECK constraint, the table will be scanned to validate the partition constraint while holding an ACCESS EXCLUSIVE lock on that partition.
It is recommended to drop the now-redundant CHECK constraint after the ATTACH PARTITION is complete. If the table being attached is itself a partitioned table, then each of its sub-partitions will be recursively locked and scanned until either a suitable CHECK constraint is encountered or the leaf partitions are reached.
Similarly, if the partitioned table has a DEFAULT partition, it is recommended to create a CHECK constraint which excludes the to-be-attached partition's constraint. If this is not done then the DEFAULT partition will be scanned to verify that it contains no records which should be located in the partition being attached. This operation will be performed whilst holding an ACCESS EXCLUSIVE lock on the DEFAULT partition. If the DEFAULT partition is itself a partitioned table, then each of its partitions will be recursively checked in the same way as the table being attached, as mentioned above.
As explained above, it is possible to create indexes on partitioned tables so that they are applied automatically to the entire hierarchy. This is very convenient, as not only will the existing partitions become indexed, but also any partitions that are created in the future will. One limitation is that it's not possible to use the CONCURRENTLY qualifier when creating such a partitioned index. To avoid long lock times, it is possible to use CREATE INDEX ON ONLY the partitioned table; such an index is marked invalid, and the partitions do not get the index applied automatically. The indexes on partitions can be created individually using CONCURRENTLY, and then attached to the index on the parent using ALTER INDEX ... ATTACH PARTITION. Once indexes for all partitions are attached to the parent index, the parent index is marked valid automatically. Example:
CREATE INDEX measurement_usls_idx ON ONLY measurement (unitsales);

CREATE INDEX CONCURRENTLY measurement_usls_200602_idx
    ON measurement_y2006m02 (unitsales);
ALTER INDEX measurement_usls_idx
    ATTACH PARTITION measurement_usls_200602_idx;
...
This technique can be used with UNIQUE and PRIMARY KEY constraints too; the indexes are created implicitly when the constraint is created. Example:
ALTER TABLE ONLY measurement ADD UNIQUE (city_id, logdate);

ALTER TABLE measurement_y2006m02 ADD UNIQUE (city_id, logdate);
ALTER INDEX measurement_city_id_logdate_key
    ATTACH PARTITION measurement_y2006m02_city_id_logdate_key;
...
The following limitations apply to partitioned tables:
To create a unique or primary key constraint on a partitioned table, the partition keys must not include any expressions or function calls and the constraint's columns must include all of the partition key columns. This limitation exists because the individual indexes making up the constraint can only directly enforce uniqueness within their own partitions; therefore, the partition structure itself must guarantee that there are not duplicates in different partitions.
There is no way to create an exclusion constraint spanning the whole partitioned table. It is only possible to put such a constraint on each leaf partition individually. Again, this limitation stems from not being able to enforce cross-partition restrictions.
BEFORE ROW triggers on INSERT cannot change which partition is the final destination for a new row.
Mixing temporary and permanent relations in the same partition tree is not allowed. Hence, if the partitioned table is permanent, so must be its partitions and likewise if the partitioned table is temporary. When using temporary relations, all members of the partition tree have to be from the same session.
Individual partitions are linked to their partitioned table using inheritance behind-the-scenes. However, it is not possible to use all of the generic features of inheritance with declaratively partitioned tables or their partitions, as discussed below. Notably, a partition cannot have any parents other than the partitioned table it is a partition of, nor can a table inherit from both a partitioned table and a regular table. That means partitioned tables and their partitions never share an inheritance hierarchy with regular tables.
Since a partition hierarchy consisting of the partitioned table and its partitions is still an inheritance hierarchy, tableoid and all the normal rules of inheritance apply as described in Section 5.10 (see the example query after this list), with a few exceptions:
Partitions cannot have columns that are not present in the parent. It is not possible to specify columns when creating partitions with CREATE TABLE, nor is it possible to add columns to partitions after-the-fact using ALTER TABLE.
Tables may be added as a partition with ALTER TABLE ... ATTACH PARTITION only if their columns exactly match the parent.
Both CHECK and NOT NULL constraints of a partitioned table are always inherited by all its partitions. CHECK constraints that are marked NO INHERIT are not allowed to be created on partitioned tables.
You cannot drop a NOT NULL constraint on a partition's column if the same constraint is present in the parent table.
Using ONLY to add or drop a constraint on only the partitioned table is supported as long as there are no partitions. Once partitions exist, using ONLY will result in an error for any constraints other than UNIQUE and PRIMARY KEY. Instead, constraints on the partitions themselves can be added and (if they are not present in the parent table) dropped.
As a partitioned table does not have any data itself, attempts to use TRUNCATE ONLY on a partitioned table will always return an error.
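As an example of the normal inheritance rules in action, the system column tableoid can be used to see which partition each row of the measurement table was routed to; a minimal sketch:
SELECT tableoid::regclass AS partition, count(*)
FROM measurement
GROUP BY tableoid
ORDER BY partition;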
While the built-in declarative partitioning is suitable for most common use cases, there are some circumstances where a more flexible approach may be useful. Partitioning can be implemented using table inheritance, which allows for several features not supported by declarative partitioning, such as:
For declarative partitioning, partitions must have exactly the same set of columns as the partitioned table, whereas with table inheritance, child tables may have extra columns not present in the parent.
Table inheritance allows for multiple inheritance.
Declarative partitioning only supports range, list and hash partitioning, whereas table inheritance allows data to be divided in a manner of the user's choosing. (Note, however, that if constraint exclusion is unable to prune child tables effectively, query performance might be poor.)
This example builds a partitioning structure equivalent to the declarative partitioning example above. Use the following steps:
Create the “root” table, from which all of the “child” tables will inherit. This table will contain no data. Do not define any check constraints on this table, unless you intend them to be applied equally to all child tables. There is no point in defining any indexes or unique constraints on it, either. For our example, the root table is the measurement table as originally defined:
CREATE TABLE measurement ( city_id int not null, logdate date not null, peaktemp int, unitsales int );
Create several “child” tables that each inherit from the root table. Normally, these tables will not add any columns to the set inherited from the root. Just as with declarative partitioning, these tables are in every way normal PostgreSQL tables (or foreign tables).
CREATE TABLE measurement_y2006m02 () INHERITS (measurement);
CREATE TABLE measurement_y2006m03 () INHERITS (measurement);
...
CREATE TABLE measurement_y2007m11 () INHERITS (measurement);
CREATE TABLE measurement_y2007m12 () INHERITS (measurement);
CREATE TABLE measurement_y2008m01 () INHERITS (measurement);
Add non-overlapping table constraints to the child tables to define the allowed key values in each.
Typical examples would be:
CHECK ( x = 1 )
CHECK ( county IN ( 'Oxfordshire', 'Buckinghamshire', 'Warwickshire' ))
CHECK ( outletID >= 100 AND outletID < 200 )
Ensure that the constraints guarantee that there is no overlap between the key values permitted in different child tables. A common mistake is to set up range constraints like:
CHECK ( outletID BETWEEN 100 AND 200 )
CHECK ( outletID BETWEEN 200 AND 300 )
This is wrong since it is not clear which child table the key value 200 belongs in. Instead, ranges should be defined in this style:
CREATE TABLE measurement_y2006m02 (
    CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )
) INHERITS (measurement);

CREATE TABLE measurement_y2006m03 (
    CHECK ( logdate >= DATE '2006-03-01' AND logdate < DATE '2006-04-01' )
) INHERITS (measurement);

...
CREATE TABLE measurement_y2007m11 (
    CHECK ( logdate >= DATE '2007-11-01' AND logdate < DATE '2007-12-01' )
) INHERITS (measurement);

CREATE TABLE measurement_y2007m12 (
    CHECK ( logdate >= DATE '2007-12-01' AND logdate < DATE '2008-01-01' )
) INHERITS (measurement);

CREATE TABLE measurement_y2008m01 (
    CHECK ( logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01' )
) INHERITS (measurement);
For each child table, create an index on the key column(s), as well as any other indexes you might want.
CREATE INDEX measurement_y2006m02_logdate ON measurement_y2006m02 (logdate);
CREATE INDEX measurement_y2006m03_logdate ON measurement_y2006m03 (logdate);
CREATE INDEX measurement_y2007m11_logdate ON measurement_y2007m11 (logdate);
CREATE INDEX measurement_y2007m12_logdate ON measurement_y2007m12 (logdate);
CREATE INDEX measurement_y2008m01_logdate ON measurement_y2008m01 (logdate);
We want our application to be able to say INSERT INTO measurement ... and have the data be redirected into the appropriate child table. We can arrange that by attaching a suitable trigger function to the root table. If data will be added only to the latest child, we can use a very simple trigger function:
CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO measurement_y2008m01 VALUES (NEW.*);
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;
After creating the function, we create a trigger which calls the trigger function:
CREATE TRIGGER insert_measurement_trigger BEFORE INSERT ON measurement FOR EACH ROW EXECUTE FUNCTION measurement_insert_trigger();
We must redefine the trigger function each month so that it always inserts into the current child table. The trigger definition does not need to be updated, however.
We might want to insert data and have the server automatically locate the child table into which the row should be added. We could do this with a more complex trigger function, for example:
CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF ( NEW.logdate >= DATE '2006-02-01' AND
         NEW.logdate < DATE '2006-03-01' ) THEN
        INSERT INTO measurement_y2006m02 VALUES (NEW.*);
    ELSIF ( NEW.logdate >= DATE '2006-03-01' AND
            NEW.logdate < DATE '2006-04-01' ) THEN
        INSERT INTO measurement_y2006m03 VALUES (NEW.*);
    ...
    ELSIF ( NEW.logdate >= DATE '2008-01-01' AND
            NEW.logdate < DATE '2008-02-01' ) THEN
        INSERT INTO measurement_y2008m01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Date out of range. Fix the measurement_insert_trigger() function!';
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;
The trigger definition is the same as before.
Note that each IF test must exactly match the CHECK constraint for its child table.
While this function is more complex than the single-month case, it doesn't need to be updated as often, since branches can be added in advance of being needed.
In practice, it might be best to check the newest child first, if most inserts go into that child. For simplicity, we have shown the trigger's tests in the same order as in other parts of this example.
A different approach to redirecting inserts into the appropriate child table is to set up rules, instead of a trigger, on the root table. For example:
CREATE RULE measurement_insert_y2006m02 AS
ON INSERT TO measurement WHERE
    ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )
DO INSTEAD
    INSERT INTO measurement_y2006m02 VALUES (NEW.*);
...
CREATE RULE measurement_insert_y2008m01 AS
ON INSERT TO measurement WHERE
    ( logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01' )
DO INSTEAD
    INSERT INTO measurement_y2008m01 VALUES (NEW.*);
A rule has significantly more overhead than a trigger, but the overhead is paid once per query rather than once per row, so this method might be advantageous for bulk-insert situations. In most cases, however, the trigger method will offer better performance.
Be aware that COPY ignores rules. If you want to use COPY to insert data, you'll need to copy into the correct child table rather than directly into the root. COPY does fire triggers, so you can use it normally if you use the trigger approach.
Another disadvantage of the rule approach is that there is no simple way to force an error if the set of rules doesn't cover the insertion date; the data will silently go into the root table instead.
Ensure that the constraint_exclusion configuration parameter is not disabled in postgresql.conf; otherwise child tables may be accessed unnecessarily.
As we can see, a complex table hierarchy could require a substantial amount of DDL. In the above example we would be creating a new child table each month, so it might be wise to write a script that generates the required DDL automatically.
To remove old data quickly, simply drop the child table that is no longer necessary:
DROP TABLE measurement_y2006m02;
To remove the child table from the inheritance hierarchy table but retain access to it as a table in its own right:
ALTER TABLE measurement_y2006m02 NO INHERIT measurement;
To add a new child table to handle new data, create an empty child table just as the original children were created above:
CREATE TABLE measurement_y2008m02 ( CHECK ( logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01' ) ) INHERITS (measurement);
Alternatively, one may want to create and populate the new child table before adding it to the table hierarchy. This could allow data to be loaded, checked, and transformed before being made visible to queries on the parent table.
CREATE TABLE measurement_y2008m02
  (LIKE measurement INCLUDING DEFAULTS INCLUDING CONSTRAINTS);

ALTER TABLE measurement_y2008m02 ADD CONSTRAINT y2008m02
   CHECK ( logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01' );

\copy measurement_y2008m02 from 'measurement_y2008m02'
-- possibly some other data preparation work

ALTER TABLE measurement_y2008m02 INHERIT measurement;
The following caveats apply to partitioning implemented using inheritance:
There is no automatic way to verify that all of the CHECK constraints are mutually exclusive. It is safer to create code that generates child tables and creates and/or modifies associated objects than to write each by hand.
Indexes and foreign key constraints apply to single tables and not to their inheritance children, hence they have some caveats to be aware of.
The schemes shown here assume that the values of a row's key column(s) never change, or at least do not change enough to require it to move to another partition. An UPDATE that attempts to do that will fail because of the CHECK constraints. If you need to handle such cases, you can put suitable update triggers on the child tables, but it makes management of the structure much more complicated.
If you are using manual VACUUM or ANALYZE commands, don't forget that you need to run them on each child table individually. A command like:
ANALYZE measurement;
will only process the root table.
INSERT statements with ON CONFLICT clauses are unlikely to work as expected, as the ON CONFLICT action is only taken in case of unique violations on the specified target relation, not its child relations.
Triggers or rules will be needed to route rows to the desired child table, unless the application is explicitly aware of the partitioning scheme. Triggers may be complicated to write, and will be much slower than the tuple routing performed internally by declarative partitioning.
Partition pruning is a query optimization technique that improves performance for declaratively partitioned tables. As an example:
SET enable_partition_pruning = on;                 -- the default
SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
Without partition pruning, the above query would scan each of the partitions of the measurement table. With partition pruning enabled, the planner will examine the definition of each partition and prove that the partition need not be scanned because it could not contain any rows meeting the query's WHERE clause. When the planner can prove this, it excludes (prunes) the partition from the query plan.
By using the EXPLAIN command and the enable_partition_pruning configuration parameter, it's possible to show the difference between a plan for which partitions have been pruned and one for which they have not. A typical unoptimized plan for this type of table setup is:
SET enable_partition_pruning = off;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Aggregate  (cost=188.76..188.77 rows=1 width=8)
   ->  Append  (cost=0.00..181.05 rows=3085 width=0)
         ->  Seq Scan on measurement_y2006m02  (cost=0.00..33.12 rows=617 width=0)
               Filter: (logdate >= '2008-01-01'::date)
         ->  Seq Scan on measurement_y2006m03  (cost=0.00..33.12 rows=617 width=0)
               Filter: (logdate >= '2008-01-01'::date)
...
         ->  Seq Scan on measurement_y2007m11  (cost=0.00..33.12 rows=617 width=0)
               Filter: (logdate >= '2008-01-01'::date)
         ->  Seq Scan on measurement_y2007m12  (cost=0.00..33.12 rows=617 width=0)
               Filter: (logdate >= '2008-01-01'::date)
         ->  Seq Scan on measurement_y2008m01  (cost=0.00..33.12 rows=617 width=0)
               Filter: (logdate >= '2008-01-01'::date)
Some or all of the partitions might use index scans instead of full-table sequential scans, but the point here is that there is no need to scan the older partitions at all to answer this query. When we enable partition pruning, we get a significantly cheaper plan that will deliver the same answer:
SET enable_partition_pruning = on;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Aggregate  (cost=37.75..37.76 rows=1 width=8)
   ->  Seq Scan on measurement_y2008m01  (cost=0.00..33.12 rows=617 width=0)
         Filter: (logdate >= '2008-01-01'::date)
Note that partition pruning is driven only by the constraints defined implicitly by the partition keys, not by the presence of indexes. Therefore it isn't necessary to define indexes on the key columns. Whether an index needs to be created for a given partition depends on whether you expect that queries that scan the partition will generally scan a large part of the partition or just a small part. An index will be helpful in the latter case but not the former.
Partition pruning can be performed not only during the planning of a given query, but also during its execution. This is useful as it can allow more partitions to be pruned when clauses contain expressions whose values are not known at query planning time, for example, parameters defined in a PREPARE statement, using a value obtained from a subquery, or using a parameterized value on the inner side of a nested loop join. Partition pruning during execution can be performed at any of the following times (an example follows this list):
During initialization of the query plan. Partition pruning can be performed here for parameter values which are known during the initialization phase of execution. Partitions which are pruned during this stage will not show up in the query's EXPLAIN or EXPLAIN ANALYZE. It is possible to determine the number of partitions which were removed during this phase by observing the “Subplans Removed” property in the EXPLAIN output.
During actual execution of the query plan. Partition pruning may also be performed here to remove partitions using values which are only known during actual query execution. This includes values from subqueries and values from execution-time parameters such as those from parameterized nested loop joins. Since the value of these parameters may change many times during the execution of the query, partition pruning is performed whenever one of the execution parameters being used by partition pruning changes. Determining if partitions were pruned during this phase requires careful inspection of the loops property in the EXPLAIN ANALYZE output. Subplans corresponding to different partitions may have different values for it depending on how many times each of them was pruned during execution. Some may be shown as (never executed) if they were pruned every time.
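As an illustration of execution-time pruning, consider a prepared statement whose parameter value is not known when the statement is planned; a minimal sketch against the measurement table from the running example:
PREPARE recent_measurements (date) AS
    SELECT count(*) FROM measurement WHERE logdate >= $1;

-- Depending on whether a generic or custom plan is chosen, partitions that
-- cannot match the parameter are pruned either at plan time or during
-- executor startup; with a generic plan, startup pruning is reported as
-- "Subplans Removed" in the EXPLAIN output.
EXPLAIN (ANALYZE) EXECUTE recent_measurements('2008-01-01');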
Partition pruning can be disabled using the enable_partition_pruning setting.
Constraint exclusion is a query optimization technique similar to partition pruning. While it is primarily used for partitioning implemented using the legacy inheritance method, it can be used for other purposes, including with declarative partitioning.
Constraint exclusion works in a very similar way to partition pruning, except that it uses each table's CHECK constraints — which gives it its name — whereas partition pruning uses the table's partition bounds, which exist only in the case of declarative partitioning. Another difference is that constraint exclusion is only applied at plan time; there is no attempt to remove partitions at execution time.
The fact that constraint exclusion uses CHECK constraints, which makes it slow compared to partition pruning, can sometimes be used as an advantage: because constraints can be defined even on declaratively-partitioned tables, in addition to their internal partition bounds, constraint exclusion may be able to elide additional partitions from the query plan.
The default (and recommended) setting of constraint_exclusion is neither on nor off, but an intermediate setting called partition, which causes the technique to be applied only to queries that are likely to be working on inheritance partitioned tables. The on setting causes the planner to examine CHECK constraints in all queries, even simple ones that are unlikely to benefit.
The following caveats apply to constraint exclusion:
Constraint exclusion is only applied during query planning, unlike partition pruning, which can also be applied during query execution.
Constraint exclusion only works when the query's WHERE clause contains constants (or externally supplied parameters). For example, a comparison against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized, since the planner cannot know which child table the function's value might fall into at run time.
Keep the partitioning constraints simple, else the planner may not be able to prove that child tables might not need to be visited. Use simple equality conditions for list partitioning, or simple range tests for range partitioning, as illustrated in the preceding examples. A good rule of thumb is that partitioning constraints should contain only comparisons of the partitioning column(s) to constants using B-tree-indexable operators, because only B-tree-indexable column(s) are allowed in the partition key.
All constraints on all children of the parent table are examined during constraint exclusion, so large numbers of children are likely to increase query planning time considerably. So the legacy inheritance based partitioning will work well with up to perhaps a hundred child tables; don't try to use many thousands of children.
The choice of how to partition a table should be made carefully, as the performance of query planning and execution can be negatively affected by poor design.
One of the most critical design decisions will be the column or columns by which you partition your data. Often the best choice will be to partition by the column or set of columns which most commonly appear in WHERE clauses of queries being executed on the partitioned table. WHERE clauses that are compatible with the partition bound constraints can be used to prune unneeded partitions. However, you may be forced into making other decisions by requirements for the PRIMARY KEY or a UNIQUE constraint. Removal of unwanted data is also a factor to consider when planning your partitioning strategy. An entire partition can be detached fairly quickly, so it may be beneficial to design the partition strategy in such a way that all data to be removed at once is located in a single partition.
Choosing the target number of partitions that the table should be divided
into is also a critical decision to make. Not having enough partitions
may mean that indexes remain too large and that data locality remains poor
which could result in low cache hit ratios. However, dividing the table
into too many partitions can also cause issues. Too many partitions can
mean longer query planning times and higher memory consumption during both
query planning and execution, as further described below.
When choosing how to partition your table,
it's also important to consider what changes may occur in the future. For
example, if you choose to have one partition per customer and you
currently have a small number of large customers, consider the
implications if in several years you instead find yourself with a large
number of small customers. In this case, it may be better to choose to
partition by HASH
and choose a reasonable number of
partitions rather than trying to partition by LIST
and
hoping that the number of customers does not increase beyond what it is
practical to partition the data by.
Sub-partitioning can be useful to further divide partitions that are expected to become larger than other partitions. Another option is to use range partitioning with multiple columns in the partition key. Either of these can easily lead to excessive numbers of partitions, so restraint is advisable.
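As a sketch of the multi-column option, a table could be range partitioned on both a region identifier and a date; the table, column, and partition names here are hypothetical:
CREATE TABLE sensor_readings (
    region_id int  NOT NULL,
    logdate   date NOT NULL,
    reading   numeric
) PARTITION BY RANGE (region_id, logdate);

-- Each partition covers one region for one year.
CREATE TABLE sensor_readings_r1_y2008 PARTITION OF sensor_readings
    FOR VALUES FROM (1, '2008-01-01') TO (1, '2009-01-01');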
It is important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few thousand partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher when more partitions remain after the planner performs partition pruning. Another reason to be concerned about having a large number of partitions is that the server's memory consumption may grow significantly over time, especially if many sessions touch large numbers of partitions. That's because each partition requires its metadata to be loaded into the local memory of each session that touches it.
With data warehouse type workloads, it can make sense to use a larger number of partitions than with an OLTP type workload. Generally, in data warehouses, query planning time is less of a concern as the majority of processing time is spent during query execution. With either of these two types of workload, it is important to make the right decisions early, as re-partitioning large quantities of data can be painfully slow. Simulations of the intended workload are often beneficial for optimizing the partitioning strategy. Never just assume that more partitions are better than fewer partitions, nor vice-versa.
PostgreSQL implements portions of the SQL/MED specification, allowing you to access data that resides outside PostgreSQL using regular SQL queries. Such data is referred to as foreign data. (Note that this usage is not to be confused with foreign keys, which are a type of constraint within the database.)
Foreign data is accessed with help from a
foreign data wrapper. A foreign data wrapper is a
library that can communicate with an external data source, hiding the
details of connecting to the data source and obtaining data from it.
There are some foreign data wrappers available as contrib
modules; see Appendix F. Other kinds of foreign data
wrappers might be found as third party products. If none of the existing
foreign data wrappers suit your needs, you can write your own; see Chapter 57.
To access foreign data, you need to create a foreign server object, which defines how to connect to a particular external data source according to the set of options used by its supporting foreign data wrapper. Then you need to create one or more foreign tables, which define the structure of the remote data. A foreign table can be used in queries just like a normal table, but a foreign table has no storage in the PostgreSQL server. Whenever it is used, PostgreSQL asks the foreign data wrapper to fetch data from the external source, or transmit data to the external source in the case of update commands.
Accessing remote data may require authenticating to the external data source. This information can be provided by a user mapping, which can provide additional data such as user names and passwords based on the current PostgreSQL role.
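To make this concrete, the following minimal sketch uses the postgres_fdw contrib module to reach a table in another PostgreSQL database; the server name, connection options, credentials, and remote table are all hypothetical:
CREATE EXTENSION postgres_fdw;

CREATE SERVER remote_sales
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'salesdb.example.com', dbname 'sales');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_sales
    OPTIONS (user 'report_reader', password 'secret');

CREATE FOREIGN TABLE remote_orders (
    order_id bigint,
    total    numeric
) SERVER remote_sales
  OPTIONS (schema_name 'public', table_name 'orders');

-- The foreign table can now be queried like a local table.
SELECT count(*) FROM remote_orders;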
For additional information, see CREATE FOREIGN DATA WRAPPER, CREATE SERVER, CREATE USER MAPPING, CREATE FOREIGN TABLE, and IMPORT FOREIGN SCHEMA.
Tables are the central objects in a relational database structure, because they hold your data. But they are not the only objects that exist in a database. Many other kinds of objects can be created to make the use and management of the data more efficient or convenient. They are not discussed in this chapter, but we give you a list here so that you are aware of what is possible:
Views
Functions, procedures, and operators
Data types and domains
Triggers and rewrite rules
Detailed information on these topics appears in Part V.
When you create complex database structures involving many tables with foreign key constraints, views, triggers, functions, etc. you implicitly create a net of dependencies between the objects. For instance, a table with a foreign key constraint depends on the table it references.
To ensure the integrity of the entire database structure, PostgreSQL makes sure that you cannot drop objects that other objects still depend on. For example, attempting to drop the products table we considered in Section 5.4.5, with the orders table depending on it, would result in an error message like this:
DROP TABLE products;

ERROR:  cannot drop table products because other objects depend on it
DETAIL:  constraint orders_product_no_fkey on table orders depends on table products
HINT:  Use DROP ... CASCADE to drop the dependent objects too.
The error message contains a useful hint: if you do not want to bother deleting all the dependent objects individually, you can run:
DROP TABLE products CASCADE;
and all the dependent objects will be removed, as will any objects that depend on them, recursively. In this case, it doesn't remove the orders table, it only removes the foreign key constraint. It stops there because nothing depends on the foreign key constraint. (If you want to check what DROP ... CASCADE will do, run DROP without CASCADE and read the DETAIL output.)
Almost all DROP commands in PostgreSQL support specifying CASCADE. Of course, the nature of the possible dependencies varies with the type of the object. You can also write RESTRICT instead of CASCADE to get the default behavior, which is to prevent dropping objects that any other objects depend on.
According to the SQL standard, specifying either RESTRICT or CASCADE is required in a DROP command. No database system actually enforces that rule, but whether the default behavior is RESTRICT or CASCADE varies across systems.
If a DROP command lists multiple objects, CASCADE is only required when there are dependencies outside the specified group. For example, when saying DROP TABLE tab1, tab2 the existence of a foreign key referencing tab1 from tab2 would not mean that CASCADE is needed to succeed.
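A minimal sketch of that situation:
CREATE TABLE tab1 (id int PRIMARY KEY);
CREATE TABLE tab2 (id int REFERENCES tab1);

-- The foreign key from tab2 to tab1 lies entirely within the group being
-- dropped, so CASCADE is not required.
DROP TABLE tab1, tab2;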
For a user-defined function or procedure whose body is defined as a string literal, PostgreSQL tracks dependencies associated with the function's externally-visible properties, such as its argument and result types, but not dependencies that could only be known by examining the function body. As an example, consider this situation:
CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow',
                             'green', 'blue', 'purple');

CREATE TABLE my_colors (color rainbow, note text);

CREATE FUNCTION get_color_note (rainbow) RETURNS text AS
  'SELECT note FROM my_colors WHERE color = $1'
  LANGUAGE SQL;
(See Section 38.5 for an explanation of SQL-language functions.) PostgreSQL will be aware that the get_color_note function depends on the rainbow type: dropping the type would force dropping the function, because its argument type would no longer be defined. But PostgreSQL will not consider get_color_note to depend on the my_colors table, and so will not drop the function if the table is dropped. While there are disadvantages to this approach, there are also benefits. The function is still valid in some sense if the table is missing, though executing it would cause an error; creating a new table of the same name would allow the function to work again.
On the other hand, for a SQL-language function or procedure whose body is written in SQL-standard style, the body is parsed at function definition time and all dependencies recognized by the parser are stored. Thus, if we write the function above as
CREATE FUNCTION get_color_note (rainbow) RETURNS text
BEGIN ATOMIC
  SELECT note FROM my_colors WHERE color = $1;
END;
then the function's dependency on the my_colors table will be known and enforced by DROP.
Table of Contents
The previous chapter discussed how to create tables and other structures to hold your data. Now it is time to fill the tables with data. This chapter covers how to insert, update, and delete table data. The chapter after this will finally explain how to extract your long-lost data from the database.
When a table is created, it contains no data. The first thing to do before a database can be of much use is to insert data. Data is inserted one row at a time. You can also insert more than one row in a single command, but it is not possible to insert something that is not a complete row. Even if you know only some column values, a complete row must be created.
To create a new row, use the INSERT command. The command requires the table name and column values. For example, consider the products table from Chapter 5:
CREATE TABLE products ( product_no integer, name text, price numeric );
An example command to insert a row would be:
INSERT INTO products VALUES (1, 'Cheese', 9.99);
The data values are listed in the order in which the columns appear in the table, separated by commas. Usually, the data values will be literals (constants), but scalar expressions are also allowed.
The above syntax has the drawback that you need to know the order of the columns in the table. To avoid this you can also list the columns explicitly. For example, both of the following commands have the same effect as the one above:
INSERT INTO products (product_no, name, price) VALUES (1, 'Cheese', 9.99);
INSERT INTO products (name, price, product_no) VALUES ('Cheese', 9.99, 1);
Many users consider it good practice to always list the column names.
If you don't have values for all the columns, you can omit some of them. In that case, the columns will be filled with their default values. For example:
INSERT INTO products (product_no, name) VALUES (1, 'Cheese');
INSERT INTO products VALUES (1, 'Cheese');
The second form is a PostgreSQL extension. It fills the columns from the left with as many values as are given, and the rest will be defaulted.
For clarity, you can also request default values explicitly, for individual columns or for the entire row:
INSERT INTO products (product_no, name, price) VALUES (1, 'Cheese', DEFAULT);
INSERT INTO products DEFAULT VALUES;
You can insert multiple rows in a single command:
INSERT INTO products (product_no, name, price) VALUES
    (1, 'Cheese', 9.99),
    (2, 'Bread', 1.99),
    (3, 'Milk', 2.99);
It is also possible to insert the result of a query (which might be no rows, one row, or many rows):
INSERT INTO products (product_no, name, price) SELECT product_no, name, price FROM new_products WHERE release_date = 'today';
This provides the full power of the SQL query mechanism (Chapter 7) for computing the rows to be inserted.
When inserting a lot of data at the same time, consider using the COPY command. It is not as flexible as the INSERT command, but is more efficient. Refer to Section 14.4 for more information on improving bulk loading performance.
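For example, a bulk load with COPY might look like this; the file path is hypothetical and must be readable by the server:
COPY products (product_no, name, price)
    FROM '/tmp/new_products.csv'
    WITH (FORMAT csv, HEADER true);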
The modification of data that is already in the database is referred to as updating. You can update individual rows, all the rows in a table, or a subset of all rows. Each column can be updated separately; the other columns are not affected.
To update existing rows, use the UPDATE command. This requires three pieces of information:
The name of the table and column to update
The new value of the column
Which row(s) to update
Recall from Chapter 5 that SQL does not, in general, provide a unique identifier for rows. Therefore it is not always possible to directly specify which row to update. Instead, you specify which conditions a row must meet in order to be updated. Only if you have a primary key in the table (independent of whether you declared it or not) can you reliably address individual rows by choosing a condition that matches the primary key. Graphical database access tools rely on this fact to allow you to update rows individually.
For example, this command updates all products that have a price of 5 to have a price of 10:
UPDATE products SET price = 10 WHERE price = 5;
This might cause zero, one, or many rows to be updated. It is not an error to attempt an update that does not match any rows.
Let's look at that command in detail. First is the key word UPDATE followed by the table name. As usual, the table name can be schema-qualified, otherwise it is looked up in the path. Next is the key word SET followed by the column name, an equal sign, and the new column value. The new column value can be any scalar expression, not just a constant. For example, if you want to raise the price of all products by 10% you could use:
UPDATE products SET price = price * 1.10;
As you see, the expression for the new value can refer to the existing value(s) in the row. We also left out the WHERE clause. If it is omitted, it means that all rows in the table are updated. If it is present, only those rows that match the WHERE condition are updated. Note that the equals sign in the SET clause is an assignment while the one in the WHERE clause is a comparison, but this does not create any ambiguity. Of course, the WHERE condition does not have to be an equality test. Many other operators are available (see Chapter 9). But the expression needs to evaluate to a Boolean result.
You can update more than one column in an UPDATE command by listing more than one assignment in the SET clause. For example:
UPDATE mytable SET a = 5, b = 3, c = 1 WHERE a > 0;
So far we have explained how to add data to tables and how to change data. What remains is to discuss how to remove data that is no longer needed. Just as adding data is only possible in whole rows, you can only remove entire rows from a table. In the previous section we explained that SQL does not provide a way to directly address individual rows. Therefore, removing rows can only be done by specifying conditions that the rows to be removed have to match. If you have a primary key in the table then you can specify the exact row. But you can also remove groups of rows matching a condition, or you can remove all rows in the table at once.
You use the DELETE command to remove rows; the syntax is very similar to the UPDATE command. For instance, to remove all rows from the products table that have a price of 10, use:
DELETE FROM products WHERE price = 10;
If you simply write:
DELETE FROM products;
then all rows in the table will be deleted! Caveat programmer.
Sometimes it is useful to obtain data from modified rows while they are being manipulated. The INSERT, UPDATE, and DELETE commands all have an optional RETURNING clause that supports this. Use of RETURNING avoids performing an extra database query to collect the data, and is especially valuable when it would otherwise be difficult to identify the modified rows reliably.
The allowed contents of a RETURNING clause are the same as a SELECT command's output list (see Section 7.3). It can contain column names of the command's target table, or value expressions using those columns. A common shorthand is RETURNING *, which selects all columns of the target table in order.
In an INSERT, the data available to RETURNING is the row as it was inserted. This is not so useful in trivial inserts, since it would just repeat the data provided by the client. But it can be very handy when relying on computed default values. For example, when using a serial column to provide unique identifiers, RETURNING can return the ID assigned to a new row:
CREATE TABLE users (firstname text, lastname text, id serial primary key);

INSERT INTO users (firstname, lastname) VALUES ('Joe', 'Cool') RETURNING id;
The RETURNING clause is also very useful with INSERT ... SELECT.
In an UPDATE, the data available to RETURNING is the new content of the modified row. For example:
UPDATE products SET price = price * 1.10 WHERE price <= 99.99 RETURNING name, price AS new_price;
In a DELETE, the data available to RETURNING is the content of the deleted row. For example:
DELETE FROM products WHERE obsoletion_date = 'today' RETURNING *;
If there are triggers (Chapter 39) on the target table, the data available to RETURNING is the row as modified by the triggers. Thus, inspecting columns computed by triggers is another common use-case for RETURNING.
Table of Contents
The previous chapters explained how to create tables, how to fill them with data, and how to manipulate that data. Now we finally discuss how to retrieve the data from the database.
The process of retrieving or the command to retrieve data from a database is called a query. In SQL the SELECT command is used to specify queries. The general syntax of the SELECT command is

[WITH with_queries] SELECT select_list FROM table_expression [sort_specification]
The following sections describe the details of the select list, the table expression, and the sort specification. WITH queries are treated last since they are an advanced feature.
A simple kind of query has the form:
SELECT * FROM table1;
Assuming that there is a table called table1, this command would retrieve all rows and all user-defined columns from table1. (The method of retrieval depends on the client application. For example, the psql program will display an ASCII-art table on the screen, while client libraries will offer functions to extract individual values from the query result.) The select list specification * means all columns that the table expression happens to provide. A select list can also select a subset of the available columns or make calculations using the columns. For example, if table1 has columns named a, b, and c (and perhaps others) you can make the following query:
SELECT a, b + c FROM table1;
(assuming that b and c are of a numerical data type).
See Section 7.3 for more details.
FROM table1 is a simple kind of table expression: it reads just one table. In general, table expressions can be complex constructs of base tables, joins, and subqueries. But you can also omit the table expression entirely and use the SELECT command as a calculator:
SELECT 3 * 4;
This is more useful if the expressions in the select list return varying results. For example, you could call a function this way:
SELECT random();
A table expression computes a table. The
table expression contains a FROM
clause that is
optionally followed by WHERE
, GROUP BY
, and
HAVING
clauses. Trivial table expressions simply refer
to a table on disk, a so-called base table, but more complex
expressions can be used to modify or combine base tables in various
ways.
The optional WHERE
, GROUP BY
, and
HAVING
clauses in the table expression specify a
pipeline of successive transformations performed on the table
derived in the FROM
clause. All these transformations
produce a virtual table that provides the rows that are passed to
the select list to compute the output rows of the query.
FROM Clause
The FROM clause derives a table from one or more other tables given in a comma-separated table reference list.

FROM table_reference [, table_reference [, ...]]
A table reference can be a table name (possibly schema-qualified),
or a derived table such as a subquery, a JOIN
construct, or
complex combinations of these. If more than one table reference is
listed in the FROM
clause, the tables are cross-joined
(that is, the Cartesian product of their rows is formed; see below).
The result of the FROM
list is an intermediate virtual
table that can then be subject to
transformations by the WHERE
, GROUP BY
,
and HAVING
clauses and is finally the result of the
overall table expression.
When a table reference names a table that is the parent of a
table inheritance hierarchy, the table reference produces rows of
not only that table but all of its descendant tables, unless the
key word ONLY
precedes the table name. However, the
reference produces only the columns that appear in the named table
— any columns added in subtables are ignored.
Instead of writing ONLY before the table name, you can write * after the table name to explicitly specify that descendant tables are included. There is no real reason to use this syntax any more, because searching descendant tables is now always the default behavior. However, it is supported for compatibility with older releases.
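For example, using the measurement table from the partitioning examples, the following minimal sketch contrasts the two spellings:
SELECT count(*) FROM ONLY measurement;   -- rows stored in the parent table alone
SELECT count(*) FROM measurement*;       -- parent plus all descendant tables (the default)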
A joined table is a table derived from two other (real or derived) tables according to the rules of the particular join type. Inner, outer, and cross-joins are available. The general syntax of a joined table is
T1 join_type T2 [ join_condition ]
Joins of all types can be chained together, or nested: either or
both T1
and
T2
can be joined tables. Parentheses
can be used around JOIN
clauses to control the join
order. In the absence of parentheses, JOIN
clauses
nest left-to-right.
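As a minimal sketch of controlling join order with parentheses, assume for illustration a third table t3 with a num column, alongside the t1 and t2 tables used in the examples below:
SELECT *
FROM (t1 LEFT JOIN t2 ON t1.num = t2.num)
     LEFT JOIN t3 ON t2.num = t3.num;
Without the parentheses the same nesting would result anyway, since JOIN clauses nest left-to-right; the parentheses simply make the grouping explicit.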
Join Types
T1 CROSS JOIN T2

For every possible combination of rows from T1 and T2 (i.e., a Cartesian product), the joined table will contain a row consisting of all columns in T1 followed by all columns in T2. If the tables have N and M rows respectively, the joined table will have N * M rows.

FROM T1 CROSS JOIN T2 is equivalent to FROM T1 INNER JOIN T2 ON TRUE (see below). It is also equivalent to FROM T1, T2.

This latter equivalence does not hold exactly when more than two tables appear, because JOIN binds more tightly than comma. For example FROM T1 CROSS JOIN T2 INNER JOIN T3 ON condition is not the same as FROM T1, T2 INNER JOIN T3 ON condition because the condition can reference T1 in the first case but not the second.
T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 ON boolean_expression
T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 USING ( join column list )
T1 NATURAL { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2
The words INNER and OUTER are optional in all forms. INNER is the default; LEFT, RIGHT, and FULL imply an outer join.
The join condition is specified in the
ON
or USING
clause, or implicitly by
the word NATURAL
. The join condition determines
which rows from the two source tables are considered to
“match”, as explained in detail below.
The possible types of qualified join are:
INNER JOIN
For each row R1 of T1, the joined table has a row for each row in T2 that satisfies the join condition with R1.
LEFT OUTER JOIN
First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition with any row in T2, a joined row is added with null values in columns of T2. Thus, the joined table always has at least one row for each row in T1.
RIGHT OUTER JOIN
First, an inner join is performed. Then, for each row in T2 that does not satisfy the join condition with any row in T1, a joined row is added with null values in columns of T1. This is the converse of a left join: the result table will always have a row for each row in T2.
FULL OUTER JOIN
First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition with any row in T2, a joined row is added with null values in columns of T2. Also, for each row of T2 that does not satisfy the join condition with any row in T1, a joined row with null values in the columns of T1 is added.
The ON
clause is the most general kind of join
condition: it takes a Boolean value expression of the same
kind as is used in a WHERE
clause. A pair of rows
from T1
and T2
match if the
ON
expression evaluates to true.
The USING clause is a shorthand that allows you to take advantage of the specific situation where both sides of the join use the same name for the joining column(s). It takes a comma-separated list of the shared column names and forms a join condition that includes an equality comparison for each one. For example, joining T1 and T2 with USING (a, b) produces the join condition ON T1.a = T2.a AND T1.b = T2.b.
Furthermore, the output of JOIN USING
suppresses
redundant columns: there is no need to print both of the matched
columns, since they must have equal values. While JOIN
ON
produces all columns from T1
followed by all
columns from T2
, JOIN USING
produces one
output column for each of the listed column pairs (in the listed
order), followed by any remaining columns from T1
,
followed by any remaining columns from T2
.
Finally, NATURAL
is a shorthand form of
USING
: it forms a USING
list
consisting of all column names that appear in both
input tables. As with USING
, these columns appear
only once in the output table. If there are no common
column names, NATURAL JOIN
behaves like
JOIN ... ON TRUE
, producing a cross-product join.
USING
is reasonably safe from column changes
in the joined relations since only the listed columns
are combined. NATURAL
is considerably more risky since
any schema changes to either relation that cause a new matching
column name to be present will cause the join to combine that new
column as well.
To put this together, assume we have tables t1
:
 num | name
-----+------
   1 | a
   2 | b
   3 | c
and t2
:
 num | value
-----+-------
   1 | xxx
   3 | yyy
   5 | zzz
then we get the following results for the various joins:
=> SELECT * FROM t1 CROSS JOIN t2;
 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   1 | a    |   3 | yyy
   1 | a    |   5 | zzz
   2 | b    |   1 | xxx
   2 | b    |   3 | yyy
   2 | b    |   5 | zzz
   3 | c    |   1 | xxx
   3 | c    |   3 | yyy
   3 | c    |   5 | zzz
(9 rows)

=> SELECT * FROM t1 INNER JOIN t2 ON t1.num = t2.num;
 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   3 | c    |   3 | yyy
(2 rows)

=> SELECT * FROM t1 INNER JOIN t2 USING (num);
 num | name | value
-----+------+-------
   1 | a    | xxx
   3 | c    | yyy
(2 rows)

=> SELECT * FROM t1 NATURAL INNER JOIN t2;
 num | name | value
-----+------+-------
   1 | a    | xxx
   3 | c    | yyy
(2 rows)

=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num;
 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   2 | b    |     |
   3 | c    |   3 | yyy
(3 rows)

=> SELECT * FROM t1 LEFT JOIN t2 USING (num);
 num | name | value
-----+------+-------
   1 | a    | xxx
   2 | b    |
   3 | c    | yyy
(3 rows)

=> SELECT * FROM t1 RIGHT JOIN t2 ON t1.num = t2.num;
 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   3 | c    |   3 | yyy
     |      |   5 | zzz
(3 rows)

=> SELECT * FROM t1 FULL JOIN t2 ON t1.num = t2.num;
 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   2 | b    |     |
   3 | c    |   3 | yyy
     |      |   5 | zzz
(4 rows)
The join condition specified with ON
can also contain
conditions that do not relate directly to the join. This can
prove useful for some queries but needs to be thought out
carefully. For example:
=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num AND t2.value = 'xxx';
 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   2 | b    |     |
   3 | c    |     |
(3 rows)
Notice that placing the restriction in the WHERE
clause
produces a different result:
=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num WHERE t2.value = 'xxx';
 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
(1 row)
This is because a restriction placed in the ON
clause is processed before the join, while
a restriction placed in the WHERE
clause is processed
after the join.
That does not matter with inner joins, but it matters a lot with outer
joins.
A temporary name can be given to tables and complex table references to be used for references to the derived table in the rest of the query. This is called a table alias.
To create a table alias, write
FROM table_reference AS alias
or
FROM table_reference alias
The AS key word is optional noise. alias can be any identifier.
A typical application of table aliases is to assign short identifiers to long table names to keep the join clauses readable. For example:
SELECT * FROM some_very_long_table_name s JOIN another_fairly_long_name a ON s.id = a.num;
The alias becomes the new name of the table reference so far as the current query is concerned — it is not allowed to refer to the table by the original name elsewhere in the query. Thus, this is not valid:
SELECT * FROM my_table AS m WHERE my_table.a > 5; -- wrong
Table aliases are mainly for notational convenience, but it is necessary to use them when joining a table to itself, e.g.:
SELECT * FROM people AS mother JOIN people AS child ON mother.id = child.mother_id;
Additionally, an alias is required if the table reference is a subquery (see Section 7.2.1.3).
Parentheses are used to resolve ambiguities. In the following example,
the first statement assigns the alias b
to the second
instance of my_table
, but the second statement assigns the
alias to the result of the join:
SELECT * FROM my_table AS a CROSS JOIN my_table AS b ...
SELECT * FROM (my_table AS a CROSS JOIN my_table) AS b ...
Another form of table aliasing gives temporary names to the columns of the table, as well as the table itself:
FROM table_reference [AS] alias ( column1 [, column2 [, ...]] )
If fewer column aliases are specified than the actual table has columns, the remaining columns are not renamed. This syntax is especially useful for self-joins or subqueries.
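For illustration, a sketch assuming my_table has at least three columns; the alias clause renames both the table and its first three columns:
-- t.x refers to my_table's first column, t.y to its second, and so on
SELECT t.x, t.y
  FROM my_table AS t (x, y, z)
 WHERE t.x > 5;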
When an alias is applied to the output of a JOIN
clause, the alias hides the original
name(s) within the JOIN
. For example:
SELECT a.* FROM my_table AS a JOIN your_table AS b ON ...
is valid SQL, but:
SELECT a.* FROM (my_table AS a JOIN your_table AS b ON ...) AS c
is not valid; the table alias a
is not visible
outside the alias c
.
Subqueries specifying a derived table must be enclosed in parentheses and must be assigned a table alias name (as in Section 7.2.1.2). For example:
FROM (SELECT * FROM table1) AS alias_name
This example is equivalent to FROM table1 AS
alias_name
. More interesting cases, which cannot be
reduced to a plain join, arise when the subquery involves
grouping or aggregation.
A subquery can also be a VALUES
list:
FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow')) AS names(first, last)
Again, a table alias is required. Assigning alias names to the columns
of the VALUES
list is optional, but is good practice.
For more information see Section 7.7.
Table functions are functions that produce a set of rows, made up
of either base data types (scalar types) or composite data types
(table rows). They are used like a table, view, or subquery in
the FROM
clause of a query. Columns returned by table
functions can be included in SELECT
,
JOIN
, or WHERE
clauses in the same manner
as columns of a table, view, or subquery.
Table functions may also be combined using the ROWS FROM
syntax, with the results returned in parallel columns; the number of
result rows in this case is that of the largest function result, with
smaller results padded with null values to match.
function_call [WITH ORDINALITY] [[AS] table_alias [(column_alias [, ... ])]]
ROWS FROM( function_call [, ... ] ) [WITH ORDINALITY] [[AS] table_alias [(column_alias [, ... ])]]
If the WITH ORDINALITY
clause is specified, an
additional column of type bigint
will be added to the
function result columns. This column numbers the rows of the function
result set, starting from 1. (This is a generalization of the
SQL-standard syntax for UNNEST ... WITH ORDINALITY
.)
By default, the ordinal column is called ordinality
, but
a different column name can be assigned to it using
an AS
clause.
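A small sketch of WITH ORDINALITY; the alias names are arbitrary, and the column alias list renames the ordinal column to idx:
SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS t(letter, idx);
--  letter | idx
-- --------+-----
--  a      |   1
--  b      |   2
--  c      |   3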
The special table function UNNEST
may be called with
any number of array parameters, and it returns a corresponding number of
columns, as if UNNEST
(Section 9.19) had been called on each parameter
separately and combined using the ROWS FROM
construct.
UNNEST( array_expression [, ... ] ) [WITH ORDINALITY] [[AS] table_alias [(column_alias [, ... ])]]
If no table_alias
is specified, the function
name is used as the table name; in the case of a ROWS FROM()
construct, the first function's name is used.
If column aliases are not supplied, then for a function returning a base data type, the column name is also the same as the function name. For a function returning a composite type, the result columns get the names of the individual attributes of the type.
Some examples:
CREATE TABLE foo (fooid int, foosubid int, fooname text);

CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$
    SELECT * FROM foo WHERE fooid = $1;
$$ LANGUAGE SQL;

SELECT * FROM getfoo(1) AS t1;

SELECT * FROM foo
    WHERE foosubid IN (
        SELECT foosubid
        FROM getfoo(foo.fooid) z
        WHERE z.fooid = foo.fooid
    );

CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);

SELECT * FROM vw_getfoo;
In some cases it is useful to define table functions that can
return different column sets depending on how they are invoked.
To support this, the table function can be declared as returning
the pseudo-type record
with no OUT
parameters. When such a function is used in
a query, the expected row structure must be specified in the
query itself, so that the system can know how to parse and plan
the query. This syntax looks like:
function_call [AS] alias (column_definition [, ... ])
function_call AS [alias] (column_definition [, ... ])
ROWS FROM( ... function_call AS (column_definition [, ... ]) [, ... ] )
When not using the ROWS FROM()
syntax,
the column_definition
list replaces the column
alias list that could otherwise be attached to the FROM
item; the names in the column definitions serve as column aliases.
When using the ROWS FROM()
syntax,
a column_definition
list can be attached to
each member function separately; or if there is only one member function
and no WITH ORDINALITY
clause,
a column_definition
list can be written in
place of a column alias list following ROWS FROM()
.
Consider this example:
SELECT *
    FROM dblink('dbname=mydb', 'SELECT proname, prosrc FROM pg_proc')
      AS t1(proname name, prosrc text)
    WHERE proname LIKE 'bytea%';
The dblink function
(part of the dblink module) executes
a remote query. It is declared to return
record
since it might be used for any kind of query.
The actual column set must be specified in the calling query so
that the parser knows, for example, what *
should
expand to.
This example uses ROWS FROM
:
SELECT *
FROM ROWS FROM
    (
        json_to_recordset('[{"a":40,"b":"foo"},{"a":"100","b":"bar"}]')
            AS (a INTEGER, b TEXT),
        generate_series(1, 3)
    ) AS x (p, q, s)
ORDER BY p;

  p  |  q  | s
-----+-----+---
  40 | foo | 1
 100 | bar | 2
     |     | 3
It joins two functions into a single FROM
target. json_to_recordset()
is instructed
to return two columns, the first integer
and the second text
. The result of
generate_series()
is used directly.
The ORDER BY
clause sorts the column values
as integers.
LATERAL Subqueries
Subqueries appearing in FROM
can be
preceded by the key word LATERAL
. This allows them to
reference columns provided by preceding FROM
items.
(Without LATERAL
, each subquery is
evaluated independently and so cannot cross-reference any other
FROM
item.)
Table functions appearing in FROM
can also be
preceded by the key word LATERAL
, but for functions the
key word is optional; the function's arguments can contain references
to columns provided by preceding FROM
items in any case.
A LATERAL
item can appear at the top level in the
FROM
list, or within a JOIN
tree. In the latter
case it can also refer to any items that are on the left-hand side of a
JOIN
that it is on the right-hand side of.
When a FROM
item contains LATERAL
cross-references, evaluation proceeds as follows: for each row of the
FROM
item providing the cross-referenced column(s), or
set of rows of multiple FROM
items providing the
columns, the LATERAL
item is evaluated using that
row or row set's values of the columns. The resulting row(s) are
joined as usual with the rows they were computed from. This is
repeated for each row or set of rows from the column source table(s).
A trivial example of LATERAL
is
SELECT * FROM foo, LATERAL (SELECT * FROM bar WHERE bar.id = foo.bar_id) ss;
This is not especially useful since it has exactly the same result as the more conventional
SELECT * FROM foo, bar WHERE bar.id = foo.bar_id;
LATERAL
is primarily useful when the cross-referenced
column is necessary for computing the row(s) to be joined. A common
application is providing an argument value for a set-returning function.
For example, supposing that vertices(polygon)
returns the
set of vertices of a polygon, we could identify close-together vertices
of polygons stored in a table with:
SELECT p1.id, p2.id, v1, v2
FROM polygons p1, polygons p2,
     LATERAL vertices(p1.poly) v1,
     LATERAL vertices(p2.poly) v2
WHERE (v1 <-> v2) < 10 AND p1.id != p2.id;
This query could also be written
SELECT p1.id, p2.id, v1, v2
FROM polygons p1 CROSS JOIN LATERAL vertices(p1.poly) v1,
     polygons p2 CROSS JOIN LATERAL vertices(p2.poly) v2
WHERE (v1 <-> v2) < 10 AND p1.id != p2.id;
or in several other equivalent formulations. (As already mentioned,
the LATERAL
key word is unnecessary in this example, but
we use it for clarity.)
It is often particularly handy to LEFT JOIN
to a
LATERAL
subquery, so that source rows will appear in
the result even if the LATERAL
subquery produces no
rows for them. For example, if get_product_names()
returns
the names of products made by a manufacturer, but some manufacturers in
our table currently produce no products, we could find out which ones
those are like this:
SELECT m.name FROM manufacturers m LEFT JOIN LATERAL get_product_names(m.id) pname ON true WHERE pname IS NULL;
WHERE Clause
The syntax of the WHERE
clause is
WHERE search_condition
where search_condition
is any value
expression (see Section 4.2) that
returns a value of type boolean
.
After the processing of the FROM
clause is done, each
row of the derived virtual table is checked against the search
condition. If the result of the condition is true, the row is
kept in the output table, otherwise (i.e., if the result is
false or null) it is discarded. The search condition typically
references at least one column of the table generated in the
FROM
clause; this is not required, but otherwise the
WHERE
clause will be fairly useless.
The join condition of an inner join can be written either in
the WHERE
clause or in the JOIN
clause.
For example, these table expressions are equivalent:
FROM a, b WHERE a.id = b.id AND b.val > 5
and:
FROM a INNER JOIN b ON (a.id = b.id) WHERE b.val > 5
or perhaps even:
FROM a NATURAL JOIN b WHERE b.val > 5
Which one of these you use is mainly a matter of style. The
JOIN
syntax in the FROM
clause is
probably not as portable to other SQL database management systems,
even though it is in the SQL standard. For
outer joins there is no choice: they must be done in
the FROM
clause. The ON
or USING
clause of an outer join is not equivalent to a
WHERE
condition, because it results in the addition
of rows (for unmatched input rows) as well as the removal of rows
in the final result.
Here are some examples of WHERE
clauses:
SELECT ... FROM fdt WHERE c1 > 5

SELECT ... FROM fdt WHERE c1 IN (1, 2, 3)

SELECT ... FROM fdt WHERE c1 IN (SELECT c1 FROM t2)

SELECT ... FROM fdt WHERE c1 IN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10)

SELECT ... FROM fdt WHERE c1 BETWEEN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10) AND 100

SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1)
fdt
is the table derived in the
FROM
clause. Rows that do not meet the search
condition of the WHERE
clause are eliminated from
fdt
. Notice the use of scalar subqueries as
value expressions. Just like any other query, the subqueries can
employ complex table expressions. Notice also how
fdt
is referenced in the subqueries.
Qualifying c1
as fdt.c1
is only necessary
if c1
is also the name of a column in the derived
input table of the subquery. But qualifying the column name adds
clarity even when it is not needed. This example shows how the column
naming scope of an outer query extends into its inner queries.
GROUP BY and HAVING Clauses
After passing the WHERE
filter, the derived input
table might be subject to grouping, using the GROUP BY
clause, and elimination of group rows using the HAVING
clause.
SELECT select_list
    FROM ...
    [WHERE ...]
    GROUP BY grouping_column_reference [, grouping_column_reference]...
The GROUP BY
clause is
used to group together those rows in a table that have the same
values in all the columns listed. The order in which the columns
are listed does not matter. The effect is to combine each set
of rows having common values into one group row that
represents all rows in the group. This is done to
eliminate redundancy in the output and/or compute aggregates that
apply to these groups. For instance:
=> SELECT * FROM test1;
 x | y
---+---
 a | 3
 c | 2
 b | 5
 a | 1
(4 rows)

=> SELECT x FROM test1 GROUP BY x;
 x
---
 a
 b
 c
(3 rows)
In the second query, we could not have written SELECT *
FROM test1 GROUP BY x
, because there is no single value
for the column y
that could be associated with each
group. The grouped-by columns can be referenced in the select list since
they have a single value in each group.
In general, if a table is grouped, columns that are not
listed in GROUP BY
cannot be referenced except in aggregate
expressions. An example with aggregate expressions is:
=> SELECT x, sum(y) FROM test1 GROUP BY x;
 x | sum
---+-----
 a |   4
 b |   5
 c |   2
(3 rows)
Here sum
is an aggregate function that
computes a single value over the entire group. More information
about the available aggregate functions can be found in Section 9.21.
Grouping without aggregate expressions effectively calculates the
set of distinct values in a column. This can also be achieved
using the DISTINCT
clause (see Section 7.3.3).
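For example, using the test1 table from above, these two queries return the same set of values:
SELECT x FROM test1 GROUP BY x;
SELECT DISTINCT x FROM test1;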
Here is another example: it calculates the total sales for each product (rather than the total sales of all products):
SELECT product_id, p.name, (sum(s.units) * p.price) AS sales
    FROM products p LEFT JOIN sales s USING (product_id)
    GROUP BY product_id, p.name, p.price;
In this example, the columns product_id
,
p.name
, and p.price
must be
in the GROUP BY
clause since they are referenced in
the query select list (but see below). The column
s.units
does not have to be in the GROUP
BY
list since it is only used in an aggregate expression
(sum(...)
), which represents the sales
of a product. For each product, the query returns a summary row about
all sales of the product.
If the products table is set up so that, say,
product_id
is the primary key, then it would be
enough to group by product_id
in the above example,
since name and price would be functionally
dependent on the product ID, and so there would be no
ambiguity about which name and price value to return for each product
ID group.
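As a sketch, assuming product_id is declared as the primary key of products, the query could then be written with the shorter grouping list; name and price may be referenced without being listed in GROUP BY because they are functionally dependent on the key:
SELECT product_id, p.name, (sum(s.units) * p.price) AS sales
    FROM products p LEFT JOIN sales s USING (product_id)
    GROUP BY product_id;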
In strict SQL, GROUP BY
can only group by columns of
the source table but PostgreSQL extends
this to also allow GROUP BY
to group by columns in the
select list. Grouping by value expressions instead of simple
column names is also allowed.
If a table has been grouped using GROUP BY
,
but only certain groups are of interest, the
HAVING
clause can be used, much like a
WHERE
clause, to eliminate groups from the result.
The syntax is:
SELECT select_list FROM ... [WHERE ...] GROUP BY ... HAVING boolean_expression
Expressions in the HAVING
clause can refer both to
grouped expressions and to ungrouped expressions (which necessarily
involve an aggregate function).
Example:
=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING sum(y) > 3;
 x | sum
---+-----
 a |   4
 b |   5
(2 rows)

=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING x < 'c';
 x | sum
---+-----
 a |   4
 b |   5
(2 rows)
Again, a more realistic example:
SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    FROM products p LEFT JOIN sales s USING (product_id)
    WHERE s.date > CURRENT_DATE - INTERVAL '4 weeks'
    GROUP BY product_id, p.name, p.price, p.cost
    HAVING sum(p.price * s.units) > 5000;
In the example above, the WHERE
clause is selecting
rows by a column that is not grouped (the expression is only true for
sales during the last four weeks), while the HAVING
clause restricts the output to groups with total gross sales over
5000. Note that the aggregate expressions do not necessarily need
to be the same in all parts of the query.
If a query contains aggregate function calls, but no GROUP BY
clause, grouping still occurs: the result is a single group row (or
perhaps no rows at all, if the single row is then eliminated by
HAVING
).
The same is true if it contains a HAVING
clause, even
without any aggregate function calls or GROUP BY
clause.
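For illustration, using the test1 table from above, both of these queries produce exactly one group row:
SELECT sum(y) FROM test1;                    -- aggregate with no GROUP BY: one group
SELECT sum(y) FROM test1 HAVING sum(y) > 0;  -- HAVING alone also implies a single group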
GROUPING SETS, CUBE, and ROLLUP
More complex grouping operations than those described above are possible
using the concept of grouping sets. The data selected by
the FROM
and WHERE
clauses is grouped separately
by each specified grouping set, aggregates computed for each group just as
for simple GROUP BY
clauses, and then the results returned.
For example:
=> SELECT * FROM items_sold;
 brand | size | sales
-------+------+-------
 Foo   | L    |    10
 Foo   | M    |    20
 Bar   | M    |    15
 Bar   | L    |     5
(4 rows)

=> SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());
 brand | size | sum
-------+------+-----
 Foo   |      |  30
 Bar   |      |  20
       | L    |  15
       | M    |  35
       |      |  50
(5 rows)
Each sublist of GROUPING SETS
may specify zero or more columns
or expressions and is interpreted the same way as though it were directly
in the GROUP BY
clause. An empty grouping set means that all
rows are aggregated down to a single group (which is output even if no
input rows were present), as described above for the case of aggregate
functions with no GROUP BY
clause.
References to the grouping columns or expressions are replaced by null values in result rows for grouping sets in which those columns do not appear. To distinguish which grouping a particular output row resulted from, see Table 9.61.
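As a sketch using the items_sold table from above, the GROUPING function (one of the functions in Table 9.61) reports, as a bit mask, which of the listed columns are nulled out because they are not part of the current grouping set:
SELECT brand, size, sum(sales), GROUPING(brand, size) AS grouping_mask
    FROM items_sold
    GROUP BY GROUPING SETS ((brand), (size), ());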
A shorthand notation is provided for specifying two common types of grouping set. A clause of the form
ROLLUP ( e1, e2, e3, ... )
represents the given list of expressions and all prefixes of the list including the empty list; thus it is equivalent to
GROUPING SETS (
    ( e1, e2, e3, ... ),
    ...
    ( e1, e2 ),
    ( e1 ),
    ( )
)
This is commonly used for analysis over hierarchical data; e.g., total salary by department, division, and company-wide total.
A clause of the form
CUBE ( e1, e2, ... )
represents the given list and all of its possible subsets (i.e., the power set). Thus
CUBE ( a, b, c )
is equivalent to
GROUPING SETS ( ( a, b, c ), ( a, b ), ( a, c ), ( a ), ( b, c ), ( b ), ( c ), ( ) )
The individual elements of a CUBE
or ROLLUP
clause may be either individual expressions, or sublists of elements in
parentheses. In the latter case, the sublists are treated as single
units for the purposes of generating the individual grouping sets.
For example:
CUBE ( (a, b), (c, d) )
is equivalent to
GROUPING SETS ( ( a, b, c, d ), ( a, b ), ( c, d ), ( ) )
and
ROLLUP ( a, (b, c), d )
is equivalent to
GROUPING SETS ( ( a, b, c, d ), ( a, b, c ), ( a ), ( ) )
The CUBE
and ROLLUP
constructs can be used either
directly in the GROUP BY
clause, or nested inside a
GROUPING SETS
clause. If one GROUPING SETS
clause
is nested inside another, the effect is the same as if all the elements of
the inner clause had been written directly in the outer clause.
If multiple grouping items are specified in a single GROUP BY
clause, then the final list of grouping sets is the cross product of the
individual items. For example:
GROUP BY a, CUBE (b, c), GROUPING SETS ((d), (e))
is equivalent to
GROUP BY GROUPING SETS ( (a, b, c, d), (a, b, c, e), (a, b, d), (a, b, e), (a, c, d), (a, c, e), (a, d), (a, e) )
When specifying multiple grouping items together, the final set of grouping sets might contain duplicates. For example:
GROUP BY ROLLUP (a, b), ROLLUP (a, c)
is equivalent to
GROUP BY GROUPING SETS ( (a, b, c), (a, b), (a, b), (a, c), (a), (a), (a, c), (a), () )
If these duplicates are undesirable, they can be removed using the
DISTINCT
clause directly on the GROUP BY
.
Therefore:
GROUP BY DISTINCT ROLLUP (a, b), ROLLUP (a, c)
is equivalent to
GROUP BY GROUPING SETS ( (a, b, c), (a, b), (a, c), (a), () )
This is not the same as using SELECT DISTINCT
because the output
rows may still contain duplicates. If any of the ungrouped columns contains NULL,
it will be indistinguishable from the NULL used when that same column is grouped.
The construct (a, b)
is normally recognized in expressions as
a row constructor.
Within the GROUP BY
clause, this does not apply at the top
levels of expressions, and (a, b)
is parsed as a list of
expressions as described above. If for some reason you need
a row constructor in a grouping expression, use ROW(a, b)
.
If the query contains any window functions (see
Section 3.5,
Section 9.22 and
Section 4.2.8), these functions are evaluated
after any grouping, aggregation, and HAVING
filtering is
performed. That is, if the query uses any aggregates, GROUP
BY
, or HAVING
, then the rows seen by the window functions
are the group rows instead of the original table rows from
FROM
/WHERE
.
When multiple window functions are used, all the window functions having
syntactically equivalent PARTITION BY
and ORDER BY
clauses in their window definitions are guaranteed to be evaluated in a
single pass over the data. Therefore they will see the same sort ordering,
even if the ORDER BY
does not uniquely determine an ordering.
However, no guarantees are made about the evaluation of functions having
different PARTITION BY
or ORDER BY
specifications.
(In such cases a sort step is typically required between the passes of
window function evaluations, and the sort is not guaranteed to preserve
ordering of rows that its ORDER BY
sees as equivalent.)
Currently, window functions always require presorted data, and so the
query output will be ordered according to one or another of the window
functions' PARTITION BY
/ORDER BY
clauses.
It is not recommended to rely on this, however. Use an explicit
top-level ORDER BY
clause if you want to be sure the
results are sorted in a particular way.
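For illustration only, assuming a hypothetical empsalary(depname, empno, salary) table: both window functions below share one window definition, so they are evaluated in a single pass and see the same ordering, while the final ORDER BY makes the output order explicit rather than relying on that implementation behavior:
SELECT depname, empno, salary,
       rank()      OVER w,
       sum(salary) OVER w
    FROM empsalary
    WINDOW w AS (PARTITION BY depname ORDER BY salary DESC)
    ORDER BY depname, salary DESC;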
As shown in the previous section,
the table expression in the SELECT
command
constructs an intermediate virtual table by possibly combining
tables, views, eliminating rows, grouping, etc. This table is
finally passed on to processing by the select list. The select
list determines which columns of the
intermediate table are actually output.
The simplest kind of select list is *
which
emits all columns that the table expression produces. Otherwise,
a select list is a comma-separated list of value expressions (as
defined in Section 4.2). For instance, it
could be a list of column names:
SELECT a, b, c FROM ...
The column names a, b, and c
are either the actual names of the columns of tables referenced
in the FROM
clause, or the aliases given to them as
explained in Section 7.2.1.2. The name
space available in the select list is the same as in the
WHERE
clause, unless grouping is used, in which case
it is the same as in the HAVING
clause.
If more than one table has a column of the same name, the table name must also be given, as in:
SELECT tbl1.a, tbl2.a, tbl1.b FROM ...
When working with multiple tables, it can also be useful to ask for all the columns of a particular table:
SELECT tbl1.*, tbl2.a FROM ...
See Section 8.16.5 for more about
the table_name
.*
notation.
If an arbitrary value expression is used in the select list, it
conceptually adds a new virtual column to the returned table. The
value expression is evaluated once for each result row, with
the row's values substituted for any column references. But the
expressions in the select list do not have to reference any
columns in the table expression of the FROM
clause;
they can be constant arithmetic expressions, for instance.
The entries in the select list can be assigned names for subsequent
processing, such as for use in an ORDER BY
clause
or for display by the client application. For example:
SELECT a AS value, b + c AS sum FROM ...
If no output column name is specified using AS
,
the system assigns a default column name. For simple column references,
this is the name of the referenced column. For function
calls, this is the name of the function. For complex expressions,
the system will generate a generic name.
The AS
key word is usually optional, but in some
cases where the desired column name matches a
PostgreSQL key word, you must write
AS
or double-quote the column name in order to
avoid ambiguity.
(Appendix C shows which key words
require AS
to be used as a column label.)
For example, FROM
is one such key word, so this
does not work:
SELECT a from, b + c AS sum FROM ...
but either of these do:
SELECT a AS from, b + c AS sum FROM ... SELECT a "from", b + c AS sum FROM ...
For greatest safety against possible
future key word additions, it is recommended that you always either
write AS
or double-quote the output column name.
The naming of output columns here is different from that done in
the FROM
clause (see Section 7.2.1.2). It is possible
to rename the same column twice, but the name assigned in
the select list is the one that will be passed on.
DISTINCT
After the select list has been processed, the result table can
optionally be subject to the elimination of duplicate rows. The
DISTINCT
key word is written directly after
SELECT
to specify this:
SELECT DISTINCT select_list ...
(Instead of DISTINCT
the key word ALL
can be used to specify the default behavior of retaining all rows.)
Obviously, two rows are considered distinct if they differ in at least one column value. Null values are considered equal in this comparison.
Alternatively, an arbitrary expression can determine what rows are to be considered distinct:
SELECT DISTINCT ON (expression [, expression ...]) select_list ...
Here expression
is an arbitrary value
expression that is evaluated for all rows. A set of rows for
which all the expressions are equal are considered duplicates, and
only the first row of the set is kept in the output. Note that
the “first row” of a set is unpredictable unless the
query is sorted on enough columns to guarantee a unique ordering
of the rows arriving at the DISTINCT
filter.
(DISTINCT ON
processing occurs after ORDER
BY
sorting.)
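As a sketch, assuming a hypothetical weather_reports(location, time, report) table, this keeps only the most recent report for each location; the ORDER BY makes the choice of "first row" per group deterministic:
SELECT DISTINCT ON (location) location, time, report
    FROM weather_reports
    ORDER BY location, time DESC;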
The DISTINCT ON
clause is not part of the SQL standard
and is sometimes considered bad style because of the potentially
indeterminate nature of its results. With judicious use of
GROUP BY
and subqueries in FROM
, this
construct can be avoided, but it is often the most convenient
alternative.
Combining Queries (UNION, INTERSECT, EXCEPT)
The results of two queries can be combined using the set operations union, intersection, and difference. The syntax is
query1 UNION [ALL] query2
query1 INTERSECT [ALL] query2
query1 EXCEPT [ALL] query2
where query1
and
query2
are queries that can use any of
the features discussed up to this point.
UNION
effectively appends the result of
query2
to the result of
query1
(although there is no guarantee
that this is the order in which the rows are actually returned).
Furthermore, it eliminates duplicate rows from its result, in the same
way as DISTINCT
, unless UNION ALL
is used.
INTERSECT
returns all rows that are both in the result
of query1
and in the result of
query2
. Duplicate rows are eliminated
unless INTERSECT ALL
is used.
EXCEPT
returns all rows that are in the result of
query1
but not in the result of
query2
. (This is sometimes called the
difference between two queries.) Again, duplicates
are eliminated unless EXCEPT ALL
is used.
In order to calculate the union, intersection, or difference of two queries, the two queries must be “union compatible”, which means that they return the same number of columns and the corresponding columns have compatible data types, as described in Section 10.5.
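For illustration, using the t1 and t2 tables from the join examples above (row order is not guaranteed):
SELECT num FROM t1 UNION SELECT num FROM t2;      -- 1, 2, 3, 5
SELECT num FROM t1 INTERSECT SELECT num FROM t2;  -- 1, 3
SELECT num FROM t1 EXCEPT SELECT num FROM t2;     -- 2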
Set operations can be combined, for example
query1 UNION query2 EXCEPT query3
which is equivalent to
(query1 UNION query2) EXCEPT query3
As shown here, you can use parentheses to control the order of evaluation. Without parentheses, UNION and EXCEPT associate left-to-right, but INTERSECT binds more tightly than those two operators. Thus
query1 UNION query2 INTERSECT query3
means
query1 UNION (query2 INTERSECT query3)
You can also surround an individual query
with parentheses. This is important if
the query
needs to use any of the clauses
discussed in following sections, such as LIMIT
.
Without parentheses, you'll get a syntax error, or else the clause will
be understood as applying to the output of the set operation rather
than one of its inputs. For example,
SELECT a FROM b UNION SELECT x FROM y LIMIT 10
is accepted, but it means
(SELECT a FROM b UNION SELECT x FROM y) LIMIT 10
not
SELECT a FROM b UNION (SELECT x FROM y LIMIT 10)
Sorting Rows (ORDER BY)
After a query has produced an output table (after the select list has been processed) it can optionally be sorted. If sorting is not chosen, the rows will be returned in an unspecified order. The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on. A particular output ordering can only be guaranteed if the sort step is explicitly chosen.
)After a query has produced an output table (after the select list has been processed) it can optionally be sorted. If sorting is not chosen, the rows will be returned in an unspecified order. The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on. A particular output ordering can only be guaranteed if the sort step is explicitly chosen.
The ORDER BY
clause specifies the sort order:
SELECT select_list
    FROM table_expression
    ORDER BY sort_expression1 [ASC | DESC] [NULLS { FIRST | LAST }]
             [, sort_expression2 [ASC | DESC] [NULLS { FIRST | LAST }] ...]
The sort expression(s) can be any expression that would be valid in the query's select list. An example is:
SELECT a, b FROM table1 ORDER BY a + b, c;
When more than one expression is specified,
the later values are used to sort rows that are equal according to the
earlier values. Each expression can be followed by an optional
ASC
or DESC
keyword to set the sort direction to
ascending or descending. ASC
order is the default.
Ascending order puts smaller values first, where
“smaller” is defined in terms of the
<
operator. Similarly, descending order is
determined with the >
operator.
[6]
The NULLS FIRST
and NULLS LAST
options can be
used to determine whether nulls appear before or after non-null values
in the sort ordering. By default, null values sort as if larger than any
non-null value; that is, NULLS FIRST
is the default for
DESC
order, and NULLS LAST
otherwise.
Note that the ordering options are considered independently for each
sort column. For example ORDER BY x, y DESC
means
ORDER BY x ASC, y DESC
, which is not the same as
ORDER BY x DESC, y DESC
.
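For illustration, using the table1 example from above, the direction and null placement are chosen per sort column:
SELECT a, b FROM table1 ORDER BY a DESC NULLS LAST, b ASC;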
A sort_expression
can also be the column label or number
of an output column, as in:
SELECT a + b AS sum, c FROM table1 ORDER BY sum;
SELECT a, max(b) FROM table1 GROUP BY a ORDER BY 1;
both of which sort by the first output column. Note that an output column name has to stand alone, that is, it cannot be used in an expression — for example, this is not correct:
SELECT a + b AS sum, c FROM table1 ORDER BY sum + c; -- wrong
This restriction is made to reduce ambiguity. There is still
ambiguity if an ORDER BY
item is a simple name that
could match either an output column name or a column from the table
expression. The output column is used in such cases. This would
only cause confusion if you use AS
to rename an output
column to match some other table column's name.
ORDER BY
can be applied to the result of a
UNION
, INTERSECT
, or EXCEPT
combination, but in this case it is only permitted to sort by
output column names or numbers, not by expressions.
LIMIT and OFFSET
LIMIT
and OFFSET
allow you to retrieve just
a portion of the rows that are generated by the rest of the query:
SELECT select_list
    FROM table_expression
    [ ORDER BY ... ]
    [ LIMIT { number | ALL } ]
    [ OFFSET number ]
If a limit count is given, no more than that many rows will be
returned (but possibly fewer, if the query itself yields fewer rows).
LIMIT ALL
is the same as omitting the LIMIT
clause, as is LIMIT
with a NULL argument.
OFFSET
says to skip that many rows before beginning to
return rows. OFFSET 0
is the same as omitting the
OFFSET
clause, as is OFFSET
with a NULL argument.
If both OFFSET
and LIMIT
appear, then OFFSET
rows are
skipped before starting to count the LIMIT
rows that
are returned.
When using LIMIT
, it is important to use an
ORDER BY
clause that constrains the result rows into a
unique order. Otherwise you will get an unpredictable subset of
the query's rows. You might be asking for the tenth through
twentieth rows, but tenth through twentieth in what ordering? The
ordering is unknown, unless you specified ORDER BY
.
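As a sketch (the table and its unique id column are assumed for the example), this returns rows 11 through 20 of one deterministic ordering:
SELECT * FROM big_table ORDER BY id LIMIT 10 OFFSET 10;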
The query optimizer takes LIMIT
into account when
generating query plans, so you are very likely to get different
plans (yielding different row orders) depending on what you give
for LIMIT
and OFFSET
. Thus, using
different LIMIT
/OFFSET
values to select
different subsets of a query result will give
inconsistent results unless you enforce a predictable
result ordering with ORDER BY
. This is not a bug; it
is an inherent consequence of the fact that SQL does not promise to
deliver the results of a query in any particular order unless
ORDER BY
is used to constrain the order.
The rows skipped by an OFFSET
clause still have to be
computed inside the server; therefore a large OFFSET
might be inefficient.
VALUES Lists
VALUES
provides a way to generate a “constant table”
that can be used in a query without having to actually create and populate
a table on-disk. The syntax is
VALUES ( expression [, ...] ) [, ...]
Each parenthesized list of expressions generates a row in the table.
The lists must all have the same number of elements (i.e., the number
of columns in the table), and corresponding entries in each list must
have compatible data types. The actual data type assigned to each column
of the result is determined using the same rules as for UNION
(see Section 10.5).
As an example:
VALUES (1, 'one'), (2, 'two'), (3, 'three');
will return a table of two columns and three rows. It's effectively equivalent to:
SELECT 1 AS column1, 'one' AS column2
UNION ALL
SELECT 2, 'two'
UNION ALL
SELECT 3, 'three';
By default, PostgreSQL assigns the names
column1
, column2
, etc. to the columns of a
VALUES
table. The column names are not specified by the
SQL standard and different database systems do it differently, so
it's usually better to override the default names with a table alias
list, like this:
=> SELECT * FROM (VALUES (1, 'one'), (2, 'two'), (3, 'three')) AS t (num, letter);
 num | letter
-----+--------
   1 | one
   2 | two
   3 | three
(3 rows)
Syntactically, VALUES
followed by expression lists is
treated as equivalent to:
SELECT select_list FROM table_expression
and can appear anywhere a SELECT
can. For example, you can
use it as part of a UNION
, or attach a
sort_specification
(ORDER BY
,
LIMIT
, and/or OFFSET
) to it. VALUES
is most commonly used as the data source in an INSERT
command,
and next most commonly as a subquery.
For more information see VALUES.
WITH Queries (Common Table Expressions)
WITH
provides a way to write auxiliary statements for use in a
larger query. These statements, which are often referred to as Common
Table Expressions or CTEs, can be thought of as defining
temporary tables that exist just for one query. Each auxiliary statement
in a WITH
clause can be a SELECT
,
INSERT
, UPDATE
, or DELETE
; and the
WITH
clause itself is attached to a primary statement that can
also be a SELECT
, INSERT
, UPDATE
, or
DELETE
.
SELECT in WITH
The basic value of SELECT
in WITH
is to
break down complicated queries into simpler parts. An example is:
WITH regional_sales AS (
    SELECT region, SUM(amount) AS total_sales
    FROM orders
    GROUP BY region
), top_regions AS (
    SELECT region
    FROM regional_sales
    WHERE total_sales > (SELECT SUM(total_sales)/10 FROM regional_sales)
)
SELECT region,
       product,
       SUM(quantity) AS product_units,
       SUM(amount) AS product_sales
FROM orders
WHERE region IN (SELECT region FROM top_regions)
GROUP BY region, product;
which displays per-product sales totals in only the top sales regions.
The WITH
clause defines two auxiliary statements named
regional_sales
and top_regions
,
where the output of regional_sales
is used in
top_regions
and the output of top_regions
is used in the primary SELECT
query.
This example could have been written without WITH
,
but we'd have needed two levels of nested sub-SELECT
s. It's a bit
easier to follow this way.
The optional RECURSIVE
modifier changes WITH
from a mere syntactic convenience into a feature that accomplishes
things not otherwise possible in standard SQL. Using
RECURSIVE
, a WITH
query can refer to its own
output. A very simple example is this query to sum the integers from 1
through 100:
WITH RECURSIVE t(n) AS (
    VALUES (1)
  UNION ALL
    SELECT n+1 FROM t WHERE n < 100
)
SELECT sum(n) FROM t;
The general form of a recursive WITH
query is always a
non-recursive term, then UNION
(or
UNION ALL
), then a
recursive term, where only the recursive term can contain
a reference to the query's own output. Such a query is executed as
follows:
Recursive Query Evaluation
Evaluate the non-recursive term. For UNION
(but not
UNION ALL
), discard duplicate rows. Include all remaining
rows in the result of the recursive query, and also place them in a
temporary working table.
So long as the working table is not empty, repeat these steps:
Evaluate the recursive term, substituting the current contents of
the working table for the recursive self-reference.
For UNION
(but not UNION ALL
), discard
duplicate rows and rows that duplicate any previous result row.
Include all remaining rows in the result of the recursive query, and
also place them in a temporary intermediate table.
Replace the contents of the working table with the contents of the intermediate table, then empty the intermediate table.
While RECURSIVE
allows queries to be specified
recursively, internally such queries are evaluated iteratively.
In the example above, the working table has just a single row in each step,
and it takes on the values from 1 through 100 in successive steps. In
the 100th step, there is no output because of the WHERE
clause, and so the query terminates.
Recursive queries are typically used to deal with hierarchical or tree-structured data. A useful example is this query to find all the direct and indirect sub-parts of a product, given only a table that shows immediate inclusions:
WITH RECURSIVE included_parts(sub_part, part, quantity) AS (
    SELECT sub_part, part, quantity FROM parts WHERE part = 'our_product'
  UNION ALL
    SELECT p.sub_part, p.part, p.quantity * pr.quantity
    FROM included_parts pr, parts p
    WHERE p.part = pr.sub_part
)
SELECT sub_part, SUM(quantity) as total_quantity
FROM included_parts
GROUP BY sub_part
When computing a tree traversal using a recursive query, you might want to order the results in either depth-first or breadth-first order. This can be done by computing an ordering column alongside the other data columns and using that to sort the results at the end. Note that this does not actually control in which order the query evaluation visits the rows; that is as always in SQL implementation-dependent. This approach merely provides a convenient way to order the results afterwards.
To create a depth-first order, we compute for each result row an array of
rows that we have visited so far. For example, consider the following
query that searches a table tree
using a
link
field:
WITH RECURSIVE search_tree(id, link, data) AS (
    SELECT t.id, t.link, t.data
    FROM tree t
  UNION ALL
    SELECT t.id, t.link, t.data
    FROM tree t, search_tree st
    WHERE t.id = st.link
)
SELECT * FROM search_tree;
To add depth-first ordering information, you can write this:
WITH RECURSIVE search_tree(id, link, data, path) AS (
    SELECT t.id, t.link, t.data, ARRAY[t.id]
    FROM tree t
  UNION ALL
    SELECT t.id, t.link, t.data, path || t.id
    FROM tree t, search_tree st
    WHERE t.id = st.link
)
SELECT * FROM search_tree ORDER BY path;
In the general case where more than one field needs to be used to identify
a row, use an array of rows. For example, if we needed to track fields
f1
and f2
:
WITH RECURSIVE search_tree(id, link, data, path) AS (
    SELECT t.id, t.link, t.data, ARRAY[ROW(t.f1, t.f2)]
    FROM tree t
  UNION ALL
    SELECT t.id, t.link, t.data, path || ROW(t.f1, t.f2)
    FROM tree t, search_tree st
    WHERE t.id = st.link
)
SELECT * FROM search_tree ORDER BY path;
Omit the ROW()
syntax in the common case where only one
field needs to be tracked. This allows a simple array rather than a
composite-type array to be used, gaining efficiency.
To create a breadth-first order, you can add a column that tracks the depth of the search, for example:
WITH RECURSIVE search_tree(id, link, data, depth) AS (
    SELECT t.id, t.link, t.data, 0
    FROM tree t
  UNION ALL
    SELECT t.id, t.link, t.data, depth + 1
    FROM tree t, search_tree st
    WHERE t.id = st.link
)
SELECT * FROM search_tree ORDER BY depth;
To get a stable sort, add data columns as secondary sorting columns.
The recursive query evaluation algorithm produces its output in breadth-first search order. However, this is an implementation detail and it is perhaps unsound to rely on it. The order of the rows within each level is certainly undefined, so some explicit ordering might be desired in any case.
There is built-in syntax to compute a depth- or breadth-first sort column. For example:
WITH RECURSIVE search_tree(id, link, data) AS (
    SELECT t.id, t.link, t.data
    FROM tree t
  UNION ALL
    SELECT t.id, t.link, t.data
    FROM tree t, search_tree st
    WHERE t.id = st.link
) SEARCH DEPTH FIRST BY id SET ordercol
SELECT * FROM search_tree ORDER BY ordercol;

WITH RECURSIVE search_tree(id, link, data) AS (
    SELECT t.id, t.link, t.data
    FROM tree t
  UNION ALL
    SELECT t.id, t.link, t.data
    FROM tree t, search_tree st
    WHERE t.id = st.link
) SEARCH BREADTH FIRST BY id SET ordercol
SELECT * FROM search_tree ORDER BY ordercol;
This syntax is internally expanded to something similar to the above
hand-written forms. The SEARCH
clause specifies whether
depth- or breadth-first search is wanted, the list of columns to track for
sorting, and a column name that will contain the result data that can be
used for sorting. That column will implicitly be added to the output rows
of the CTE.
When working with recursive queries it is important to be sure that
the recursive part of the query will eventually return no tuples,
or else the query will loop indefinitely. Sometimes, using
UNION
instead of UNION ALL
can accomplish this
by discarding rows that duplicate previous output rows. However, often a
cycle does not involve output rows that are completely duplicate: it may be
necessary to check just one or a few fields to see if the same point has
been reached before. The standard method for handling such situations is
to compute an array of the already-visited values. For example, consider again
the following query that searches a table graph
using a
link
field:
WITH RECURSIVE search_graph(id, link, data, depth) AS (
    SELECT g.id, g.link, g.data, 0
    FROM graph g
  UNION ALL
    SELECT g.id, g.link, g.data, sg.depth + 1
    FROM graph g, search_graph sg
    WHERE g.id = sg.link
)
SELECT * FROM search_graph;
This query will loop if the link
relationships contain
cycles. Because we require a “depth” output, just changing
UNION ALL
to UNION
would not eliminate the looping.
Instead we need to recognize whether we have reached the same row again
while following a particular path of links. We add two columns
is_cycle
and path
to the loop-prone query:
WITH RECURSIVE search_graph(id, link, data, depth, is_cycle, path) AS (
    SELECT g.id, g.link, g.data, 0, false, ARRAY[g.id]
    FROM graph g
  UNION ALL
    SELECT g.id, g.link, g.data, sg.depth + 1,
           g.id = ANY(path),
           path || g.id
    FROM graph g, search_graph sg
    WHERE g.id = sg.link AND NOT is_cycle
)
SELECT * FROM search_graph;
Aside from preventing cycles, the array value is often useful in its own right as representing the “path” taken to reach any particular row.
In the general case where more than one field needs to be checked to
recognize a cycle, use an array of rows. For example, if we needed to
compare fields f1
and f2
:
WITH RECURSIVE search_graph(id, link, data, depth, is_cycle, path) AS (
    SELECT g.id, g.link, g.data, 0, false, ARRAY[ROW(g.f1, g.f2)]
    FROM graph g
  UNION ALL
    SELECT g.id, g.link, g.data, sg.depth + 1,
           ROW(g.f1, g.f2) = ANY(path),
           path || ROW(g.f1, g.f2)
    FROM graph g, search_graph sg
    WHERE g.id = sg.link AND NOT is_cycle
)
SELECT * FROM search_graph;
Omit the ROW()
syntax in the common case where only one field
needs to be checked to recognize a cycle. This allows a simple array
rather than a composite-type array to be used, gaining efficiency.
There is built-in syntax to simplify cycle detection. The above query can also be written like this:
WITH RECURSIVE search_graph(id, link, data, depth) AS (
SELECT g.id, g.link, g.data, 1
FROM graph g
UNION ALL
SELECT g.id, g.link, g.data, sg.depth + 1
FROM graph g, search_graph sg
WHERE g.id = sg.link
) CYCLE id SET is_cycle USING path
SELECT * FROM search_graph;
and it will be internally rewritten to the above form. The
CYCLE
clause specifies first the list of columns to
track for cycle detection, then a column name that will show whether a
cycle has been detected, and finally the name of another column that will track the
path. The cycle and path columns will implicitly be added to the output
rows of the CTE.
The cycle path column is computed in the same way as the depth-first
ordering column shown in the previous section. A query can have both a
SEARCH
and a CYCLE
clause, but a
depth-first search specification and a cycle detection specification would
create redundant computations, so it's more efficient to just use the
CYCLE
clause and order by the path column. If
breadth-first ordering is wanted, then specifying both
SEARCH
and CYCLE
can be useful.
A helpful trick for testing queries
when you are not certain if they might loop is to place a LIMIT
in the parent query. For example, this query would loop forever without
the LIMIT
:
WITH RECURSIVE t(n) AS (
SELECT 1
UNION ALL
SELECT n+1 FROM t
)
SELECT n FROM t LIMIT 100;
This works because PostgreSQL's implementation
evaluates only as many rows of a WITH
query as are actually
fetched by the parent query. Using this trick in production is not
recommended, because other systems might work differently. Also, it
usually won't work if you make the outer query sort the recursive query's
results or join them to some other table, because in such cases the
outer query will usually try to fetch all of the WITH
query's
output anyway.
A useful property of WITH
queries is that they are
normally evaluated only once per execution of the parent query, even if
they are referred to more than once by the parent query or
sibling WITH
queries.
Thus, expensive calculations that are needed in multiple places can be
placed within a WITH
query to avoid redundant work. Another
possible application is to prevent unwanted multiple evaluations of
functions with side-effects.
However, the other side of this coin is that the optimizer is not able to
push restrictions from the parent query down into a multiply-referenced
WITH
query, since that might affect all uses of the
WITH
query's output when it should affect only one.
The multiply-referenced WITH
query will be
evaluated as written, without suppression of rows that the parent query
might discard afterwards. (But, as mentioned above, evaluation might stop
early if the reference(s) to the query demand only a limited number of
rows.)
However, if a WITH
query is non-recursive and
side-effect-free (that is, it is a SELECT
containing
no volatile functions) then it can be folded into the parent query,
allowing joint optimization of the two query levels. By default, this
happens if the parent query references the WITH
query
just once, but not if it references the WITH
query
more than once. You can override that decision by
specifying MATERIALIZED
to force separate calculation
of the WITH
query, or by specifying NOT
MATERIALIZED
to force it to be merged into the parent query.
The latter choice risks duplicate computation of
the WITH
query, but it can still give a net savings if
each usage of the WITH
query needs only a small part
of the WITH
query's full output.
A simple example of these rules is
WITH w AS ( SELECT * FROM big_table ) SELECT * FROM w WHERE key = 123;
This WITH
query will be folded, producing the same
execution plan as
SELECT * FROM big_table WHERE key = 123;
In particular, if there's an index on key
,
it will probably be used to fetch just the rows having key =
123
. On the other hand, in
WITH w AS (
    SELECT * FROM big_table
)
SELECT * FROM w AS w1 JOIN w AS w2 ON w1.key = w2.ref
WHERE w2.key = 123;
the WITH
query will be materialized, producing a
temporary copy of big_table
that is then
joined with itself — without benefit of any index. This query
will be executed much more efficiently if written as
WITH w AS NOT MATERIALIZED (
    SELECT * FROM big_table
)
SELECT * FROM w AS w1 JOIN w AS w2 ON w1.key = w2.ref
WHERE w2.key = 123;
so that the parent query's restrictions can be applied directly
to scans of big_table
.
An example where NOT MATERIALIZED
could be
undesirable is
WITH w AS (
    SELECT key, very_expensive_function(val) as f FROM some_table
)
SELECT * FROM w AS w1 JOIN w AS w2 ON w1.f = w2.f;
Here, materialization of the WITH
query ensures
that very_expensive_function
is evaluated only
once per table row, not twice.
The examples above only show WITH
being used with
SELECT
, but it can be attached in the same way to
INSERT
, UPDATE
, or DELETE
.
In each case it effectively provides temporary table(s) that can
be referred to in the main command.
Data-Modifying Statements in WITH
You can use data-modifying statements (INSERT
,
UPDATE
, or DELETE
) in WITH
. This
allows you to perform several different operations in the same query.
An example is:
WITH moved_rows AS (
    DELETE FROM products
    WHERE
        "date" >= '2010-10-01' AND
        "date" < '2010-11-01'
    RETURNING *
)
INSERT INTO products_log
SELECT * FROM moved_rows;
This query effectively moves rows from products
to
products_log
. The DELETE
in WITH
deletes the specified rows from products
, returning their
contents by means of its RETURNING
clause; and then the
primary query reads that output and inserts it into
products_log
.
A fine point of the above example is that the WITH
clause is
attached to the INSERT
, not the sub-SELECT
within
the INSERT
. This is necessary because data-modifying
statements are only allowed in WITH
clauses that are attached
to the top-level statement. However, normal WITH
visibility
rules apply, so it is possible to refer to the WITH
statement's output from the sub-SELECT
.
Data-modifying statements in WITH
usually have
RETURNING
clauses (see Section 6.4),
as shown in the example above.
It is the output of the RETURNING
clause, not the
target table of the data-modifying statement, that forms the temporary
table that can be referred to by the rest of the query. If a
data-modifying statement in WITH
lacks a RETURNING
clause, then it forms no temporary table and cannot be referred to in
the rest of the query. Such a statement will be executed nonetheless.
A not-particularly-useful example is:
WITH t AS ( DELETE FROM foo ) DELETE FROM bar;
This example would remove all rows from tables foo
and
bar
. The number of affected rows reported to the client
would only include rows removed from bar
.
Recursive self-references in data-modifying statements are not
allowed. In some cases it is possible to work around this limitation by
referring to the output of a recursive WITH
, for example:
WITH RECURSIVE included_parts(sub_part, part) AS (
    SELECT sub_part, part FROM parts WHERE part = 'our_product'
  UNION ALL
    SELECT p.sub_part, p.part
    FROM included_parts pr, parts p
    WHERE p.part = pr.sub_part
)
DELETE FROM parts
  WHERE part IN (SELECT part FROM included_parts);
This query would remove all direct and indirect subparts of a product.
Data-modifying statements in WITH
are executed exactly once,
and always to completion, independently of whether the primary query
reads all (or indeed any) of their output. Notice that this is different
from the rule for SELECT
in WITH
: as stated in the
previous section, execution of a SELECT
is carried only as far
as the primary query demands its output.
The sub-statements in WITH
are executed concurrently with
each other and with the main query. Therefore, when using data-modifying
statements in WITH
, the order in which the specified updates
actually happen is unpredictable. All the statements are executed with
the same snapshot (see Chapter 13), so they
cannot “see” one another's effects on the target tables. This
alleviates the effects of the unpredictability of the actual order of row
updates, and means that RETURNING
data is the only way to
communicate changes between different WITH
sub-statements and
the main query. An example of this is that in
WITH t AS (
    UPDATE products SET price = price * 1.05
    RETURNING *
)
SELECT * FROM products;
the outer SELECT
would return the original prices before the
action of the UPDATE
, while in
WITH t AS (
    UPDATE products SET price = price * 1.05
    RETURNING *
)
SELECT * FROM t;
the outer SELECT
would return the updated data.
Trying to update the same row twice in a single statement is not
supported. Only one of the modifications takes place, but it is not easy
(and sometimes not possible) to reliably predict which one. This also
applies to deleting a row that was already updated in the same statement:
only the update is performed. Therefore you should generally avoid trying
to modify a single row twice in a single statement. In particular avoid
writing WITH
sub-statements that could affect the same rows
changed by the main statement or a sibling sub-statement. The effects
of such a statement will not be predictable.
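For instance, a statement of the following shape (a sketch using the products table from the earlier examples) applies only one of the two price changes to any given row, and which one is not predictable:
WITH t AS (
    UPDATE products SET price = price * 1.05
    RETURNING *
)
UPDATE products SET price = price * 1.10;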
At present, any table used as the target of a data-modifying statement in
WITH
must not have a conditional rule, nor an ALSO
rule, nor an INSTEAD
rule that expands to multiple statements.
[6]
Actually, PostgreSQL uses the default B-tree
operator class for the expression's data type to determine the sort
ordering for ASC
and DESC
. Conventionally,
data types will be set up so that the <
and
>
operators correspond to this sort ordering,
but a user-defined data type's designer could choose to do something
different.
PostgreSQL has a rich set of native data types available to users. Users can add new types to PostgreSQL using the CREATE TYPE command.
Table 8.1 shows all the built-in general-purpose data types. Most of the alternative names listed in the “Aliases” column are the names used internally by PostgreSQL for historical reasons. In addition, some internally used or deprecated types are available, but are not listed here.
Table 8.1. Data Types
Name | Aliases | Description |
---|---|---|
bigint | int8 | signed eight-byte integer |
bigserial | serial8 | autoincrementing eight-byte integer |
bit [ (n) ] | | fixed-length bit string |
bit varying [ (n) ] | varbit [ (n) ] | variable-length bit string |
boolean | bool | logical Boolean (true/false) |
box | | rectangular box on a plane |
bytea | | binary data (“byte array”) |
character [ (n) ] | char [ (n) ] | fixed-length character string |
character varying [ (n) ] | varchar [ (n) ] | variable-length character string |
cidr | | IPv4 or IPv6 network address |
circle | | circle on a plane |
date | | calendar date (year, month, day) |
double precision | float8 | double precision floating-point number (8 bytes) |
inet | | IPv4 or IPv6 host address |
integer | int, int4 | signed four-byte integer |
interval [ fields ] [ (p) ] | | time span |
json | | textual JSON data |
jsonb | | binary JSON data, decomposed |
line | | infinite line on a plane |
lseg | | line segment on a plane |
macaddr | | MAC (Media Access Control) address |
macaddr8 | | MAC (Media Access Control) address (EUI-64 format) |
money | | currency amount |
numeric [ (p, s) ] | decimal [ (p, s) ] | exact numeric of selectable precision |
path | | geometric path on a plane |
pg_lsn | | PostgreSQL Log Sequence Number |
pg_snapshot | | user-level transaction ID snapshot |
point | | geometric point on a plane |
polygon | | closed geometric path on a plane |
real | float4 | single precision floating-point number (4 bytes) |
smallint | int2 | signed two-byte integer |
smallserial | serial2 | autoincrementing two-byte integer |
serial | serial4 | autoincrementing four-byte integer |
text | | variable-length character string |
time [ (p) ] [ without time zone ] | | time of day (no time zone) |
time [ (p) ] with time zone | timetz | time of day, including time zone |
timestamp [ (p) ] [ without time zone ] | | date and time (no time zone) |
timestamp [ (p) ] with time zone | timestamptz | date and time, including time zone |
tsquery | | text search query |
tsvector | | text search document |
txid_snapshot | | user-level transaction ID snapshot (deprecated; see pg_snapshot) |
uuid | | universally unique identifier |
xml | | XML data |
The following types (or spellings thereof) are specified by
SQL: bigint
, bit
, bit
varying
, boolean
, char
,
character varying
, character
,
varchar
, date
, double
precision
, integer
, interval
,
numeric
, decimal
, real
,
smallint
, time
(with or without time zone),
timestamp
(with or without time zone),
xml
.
Each data type has an external representation determined by its input and output functions. Many of the built-in types have obvious external formats. However, several types are either unique to PostgreSQL, such as geometric paths, or have several possible formats, such as the date and time types. Some of the input and output functions are not invertible, i.e., the result of an output function might lose accuracy when compared to the original input.
Numeric types consist of two-, four-, and eight-byte integers, four- and eight-byte floating-point numbers, and selectable-precision decimals. Table 8.2 lists the available types.
Table 8.2. Numeric Types
Name | Storage Size | Description | Range |
---|---|---|---|
smallint | 2 bytes | small-range integer | -32768 to +32767 |
integer | 4 bytes | typical choice for integer | -2147483648 to +2147483647 |
bigint | 8 bytes | large-range integer | -9223372036854775808 to +9223372036854775807 |
decimal | variable | user-specified precision, exact | up to 131072 digits before the decimal point; up to 16383 digits after the decimal point |
numeric | variable | user-specified precision, exact | up to 131072 digits before the decimal point; up to 16383 digits after the decimal point |
real | 4 bytes | variable-precision, inexact | 6 decimal digits precision |
double precision | 8 bytes | variable-precision, inexact | 15 decimal digits precision |
smallserial | 2 bytes | small autoincrementing integer | 1 to 32767 |
serial | 4 bytes | autoincrementing integer | 1 to 2147483647 |
bigserial | 8 bytes | large autoincrementing integer | 1 to 9223372036854775807 |
The syntax of constants for the numeric types is described in Section 4.1.2. The numeric types have a full set of corresponding arithmetic operators and functions. Refer to Chapter 9 for more information. The following sections describe the types in detail.
The types smallint
, integer
, and
bigint
store whole numbers, that is, numbers without
fractional components, of various ranges. Attempts to store
values outside of the allowed range will result in an error.
The type integer
is the common choice, as it offers
the best balance between range, storage size, and performance.
The smallint
type is generally only used if disk
space is at a premium. The bigint
type is designed to be
used when the range of the integer
type is insufficient.
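For example, with a hypothetical table, an out-of-range value is rejected with an error rather than silently wrapped or truncated:
CREATE TABLE items (qty smallint);
INSERT INTO items VALUES (32767);   -- accepted: within the smallint range
INSERT INTO items VALUES (32768);   -- fails: out of range for type smallint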
SQL only specifies the integer types
integer
(or int
),
smallint
, and bigint
. The
type names int2
, int4
, and
int8
are extensions, which are also used by some
other SQL database systems.
The type numeric
can store numbers with a
very large number of digits. It is especially recommended for
storing monetary amounts and other quantities where exactness is
required. Calculations with numeric
values yield exact
results where possible, e.g., addition, subtraction, multiplication.
However, calculations on numeric
values are very slow
compared to the integer types, or to the floating-point types
described in the next section.
We use the following terms below: The
precision of a numeric
is the total count of significant digits in the whole number,
that is, the number of digits to both sides of the decimal point.
The scale of a numeric
is the
count of decimal digits in the fractional part, to the right of the
decimal point. So the number 23.5141 has a precision of 6 and a
scale of 4. Integers can be considered to have a scale of zero.
Both the maximum precision and the maximum scale of a
numeric
column can be
configured. To declare a column of type numeric
use
the syntax:
NUMERIC(precision, scale)
The precision must be positive, the scale zero or positive. Alternatively:
NUMERIC(precision)
selects a scale of 0. Specifying:
NUMERIC
without any precision or scale creates an “unconstrained
numeric” column in which numeric values of any length can be
stored, up to the implementation limits. A column of this kind will
not coerce input values to any particular scale, whereas
numeric
columns with a declared scale will coerce
input values to that scale. (The SQL standard
requires a default scale of 0, i.e., coercion to integer
precision. We find this a bit useless. If you're concerned
about portability, always specify the precision and scale
explicitly.)
The maximum precision that can be explicitly specified in
a NUMERIC
type declaration is 1000. An
unconstrained NUMERIC
column is subject to the limits
described in Table 8.2.
If the scale of a value to be stored is greater than the declared scale of the column, the system will round the value to the specified number of fractional digits. Then, if the number of digits to the left of the decimal point exceeds the declared precision minus the declared scale, an error is raised.
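A minimal sketch of both behaviors, using a hypothetical column declared as numeric(5,2) (at most three digits before and two after the decimal point):
CREATE TABLE prices (amount numeric(5,2));
INSERT INTO prices VALUES (123.456);   -- stored as 123.46 (rounded to the declared scale)
INSERT INTO prices VALUES (1234.5);    -- fails: four digits before the decimal point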
Numeric values are physically stored without any extra leading or
trailing zeroes. Thus, the declared precision and scale of a column
are maximums, not fixed allocations. (In this sense the numeric
type is more akin to varchar(n)
than to char(n).) The actual storage
requirement is two bytes for each group of four decimal digits,
plus three to eight bytes overhead.
In addition to ordinary numeric values, the numeric
type
has several special values:
Infinity
-Infinity
NaN
These are adapted from the IEEE 754 standard, and represent
“infinity”, “negative infinity”, and
“not-a-number”, respectively. When writing these values
as constants in an SQL command, you must put quotes around them,
for example UPDATE table SET x = '-Infinity'
.
On input, these strings are recognized in a case-insensitive manner.
The infinity values can alternatively be spelled inf
and -inf
.
The infinity values behave as per mathematical expectations. For
example, Infinity
plus any finite value equals
Infinity
, as does Infinity
plus Infinity
; but Infinity
minus Infinity
yields NaN
(not a
number), because it has no well-defined interpretation. Note that an
infinity can only be stored in an unconstrained numeric
column, because it notionally exceeds any finite precision limit.
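For example (a small sketch with a made-up table; the results follow the rules just described):
SELECT 'Infinity'::numeric + 100;              -- Infinity
SELECT 'Infinity'::numeric - 'inf'::numeric;   -- NaN
CREATE TABLE t (x numeric, y numeric(10,2));
INSERT INTO t (x) VALUES ('Infinity');         -- allowed: unconstrained column
INSERT INTO t (y) VALUES ('Infinity');         -- fails: column has a declared precision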
The NaN
(not a number) value is used to represent
undefined calculational results. In general, any operation with
a NaN
input yields another NaN
.
The only exception is when the operation's other inputs are such that
the same output would be obtained if the NaN
were to
be replaced by any finite or infinite numeric value; then, that output
value is used for NaN
too. (An example of this
principle is that NaN
raised to the zero power
yields one.)
In most implementations of the “not-a-number” concept,
NaN
is not considered equal to any other numeric
value (including NaN
). In order to allow
numeric
values to be sorted and used in tree-based
indexes, PostgreSQL treats NaN
values as equal, and greater than all non-NaN
values.
The types decimal
and numeric
are
equivalent. Both types are part of the SQL
standard.
When rounding values, the numeric
type rounds ties away
from zero, while (on most machines) the real
and double precision
types round ties to the nearest even
number. For example:
SELECT x,
  round(x::numeric) AS num_round,
  round(x::double precision) AS dbl_round
FROM generate_series(-3.5, 3.5, 1) as x;
  x   | num_round | dbl_round
------+-----------+-----------
 -3.5 |        -4 |        -4
 -2.5 |        -3 |        -2
 -1.5 |        -2 |        -2
 -0.5 |        -1 |        -0
  0.5 |         1 |         0
  1.5 |         2 |         2
  2.5 |         3 |         2
  3.5 |         4 |         4
(8 rows)
The data types real
and double precision
are
inexact, variable-precision numeric types. On all currently supported
platforms, these types are implementations of IEEE
Standard 754 for Binary Floating-Point Arithmetic (single and double
precision, respectively), to the extent that the underlying processor,
operating system, and compiler support it.
Inexact means that some values cannot be converted exactly to the internal format and are stored as approximations, so that storing and retrieving a value might show slight discrepancies. Managing these errors and how they propagate through calculations is the subject of an entire branch of mathematics and computer science and will not be discussed here, except for the following points:
If you require exact storage and calculations (such as for
monetary amounts), use the numeric
type instead.
If you want to do complicated calculations with these types for anything important, especially if you rely on certain behavior in boundary cases (infinity, underflow), you should evaluate the implementation carefully.
Comparing two floating-point values for equality might not always work as expected.
On all currently supported platforms, the real
type has a
range of around 1E-37 to 1E+37 with a precision of at least 6 decimal
digits. The double precision
type has a range of around
1E-307 to 1E+308 with a precision of at least 15 digits. Values that are
too large or too small will cause an error. Rounding might take place if
the precision of an input number is too high. Numbers too close to zero
that are not representable as distinct from zero will cause an underflow
error.
By default, floating point values are output in text form in their
shortest precise decimal representation; the decimal value produced is
closer to the true stored binary value than to any other value
representable in the same binary precision. (However, the output value is
currently never exactly midway between two
representable values, in order to avoid a widespread bug where input
routines do not properly respect the round-to-nearest-even rule.) This value will
use at most 17 significant decimal digits for float8
values, and at most 9 digits for float4
values.
This shortest-precise output format is much faster to generate than the historical rounded format.
For compatibility with output generated by older versions
of PostgreSQL, and to allow the output
precision to be reduced, the extra_float_digits
parameter can be used to select rounded decimal output instead. Setting a
value of 0 restores the previous default of rounding the value to 6
(for float4
) or 15 (for float8
)
significant decimal digits. Setting a negative value reduces the number
of digits further; for example -2 would round output to 4 or 13 digits
respectively.
Any value of extra_float_digits greater than 0 selects the shortest-precise format.
Applications that wanted precise values have historically had to set extra_float_digits to 3 to obtain them. For maximum compatibility between versions, they should continue to do so.
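A hedged sketch of adjusting the parameter within a session (no particular output is shown, since the displayed digits depend on the value and the setting):
SET extra_float_digits = 0;    -- historical rounded output: 6 (float4) or 15 (float8) digits
SET extra_float_digits = -2;   -- round further, to 4 or 13 digits respectively
SET extra_float_digits = 3;    -- any value > 0 selects the shortest-precise format
SELECT 0.1::float8, 0.1::float4;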
In addition to ordinary numeric values, the floating-point types have several special values:
Infinity
-Infinity
NaN
These represent the IEEE 754 special values
“infinity”, “negative infinity”, and
“not-a-number”, respectively. When writing these values
as constants in an SQL command, you must put quotes around them,
for example UPDATE table SET x = '-Infinity'
. On input,
these strings are recognized in a case-insensitive manner.
The infinity values can alternatively be spelled inf
and -inf
.
IEEE 754 specifies that NaN
should not compare equal
to any other floating-point value (including NaN
).
In order to allow floating-point values to be sorted and used
in tree-based indexes, PostgreSQL treats
NaN
values as equal, and greater than all
non-NaN
values.
PostgreSQL also supports the SQL-standard
notations float and float(p)
for specifying inexact numeric types. Here, p specifies
the minimum acceptable precision in binary digits.
PostgreSQL accepts
float(1)
to float(24)
as selecting the
real
type, while
float(25)
to float(53)
select
double precision
. Values of p
outside the allowed range draw an error.
float
with no precision specified is taken to mean
double precision
.
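A minimal sketch of that mapping, using a made-up table and the pg_typeof function to show the resolved column types:
CREATE TABLE measurements (
    a float(10),   -- precision 1..24  resolves to real
    b float(40),   -- precision 25..53 resolves to double precision
    c float        -- no precision     resolves to double precision
);
INSERT INTO measurements VALUES (1, 1, 1);
SELECT pg_typeof(a), pg_typeof(b), pg_typeof(c) FROM measurements;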
This section describes a PostgreSQL-specific way to create an autoincrementing column. Another way is to use the SQL-standard identity column feature, described at CREATE TABLE.
The data types smallserial
, serial
and
bigserial
are not true types, but merely
a notational convenience for creating unique identifier columns
(similar to the AUTO_INCREMENT
property
supported by some other databases). In the current
implementation, specifying:
CREATE TABLE tablename (
    colname SERIAL
);
is equivalent to specifying:
CREATE SEQUENCE tablename_colname_seq AS integer;
CREATE TABLE tablename (
    colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
Thus, we have created an integer column and arranged for its default
values to be assigned from a sequence generator. A NOT NULL
constraint is applied to ensure that a null value cannot be
inserted. (In most cases you would also want to attach a
UNIQUE
or PRIMARY KEY
constraint to prevent
duplicate values from being inserted by accident, but this is
not automatic.) Lastly, the sequence is marked as “owned by”
the column, so that it will be dropped if the column or table is dropped.
Because smallserial
, serial
and
bigserial
are implemented using sequences, there may
be "holes" or gaps in the sequence of values which appears in the
column, even if no rows are ever deleted. A value allocated
from the sequence is still "used up" even if a row containing that
value is never successfully inserted into the table column. This
may happen, for example, if the inserting transaction rolls back.
See nextval()
in Section 9.17
for details.
To insert the next value of the sequence into the serial
column, specify that the serial
column should be assigned its default value. This can be done
either by excluding the column from the list of columns in
the INSERT
statement, or through the use of
the DEFAULT
key word.
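For example (a small sketch with made-up names):
CREATE TABLE widgets (
    id   serial PRIMARY KEY,
    name text
);
INSERT INTO widgets (name) VALUES ('hammer');            -- id column omitted
INSERT INTO widgets (id, name) VALUES (DEFAULT, 'saw');  -- DEFAULT key word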
The type names serial
and serial4
are
equivalent: both create integer
columns. The type
names bigserial
and serial8
work
the same way, except that they create a bigint
column. bigserial
should be used if you anticipate
the use of more than 2³¹ identifiers over the
lifetime of the table. The type names smallserial
and
serial2
also work the same way, except that they
create a smallint
column.
The sequence created for a serial
column is
automatically dropped when the owning column is dropped.
You can drop the sequence without dropping the column, but this
will force removal of the column default expression.
The money
type stores a currency amount with a fixed
fractional precision; see Table 8.3. The fractional precision is
determined by the database's lc_monetary setting.
The range shown in the table assumes there are two fractional digits.
Input is accepted in a variety of formats, including integer and
floating-point literals, as well as typical
currency formatting, such as '$1,000.00'
.
Output is generally in the latter form but depends on the locale.
Table 8.3. Monetary Types
Name | Storage Size | Description | Range |
---|---|---|---|
money | 8 bytes | currency amount | -92233720368547758.08 to +92233720368547758.07 |
Since the output of this data type is locale-sensitive, it might not
work to load money
data into a database that has a different
setting of lc_monetary
. To avoid problems, before
restoring a dump into a new database make sure lc_monetary
has
the same or equivalent value as in the database that was dumped.
Values of the numeric
, int
, and
bigint
data types can be cast to money
.
Conversion from the real
and double precision
data types can be done by casting to numeric
first, for
example:
SELECT '12.34'::float8::numeric::money;
However, this is not recommended. Floating point numbers should not be used to handle money due to the potential for rounding errors.
A money
value can be cast to numeric
without
loss of precision. Conversion to other types could potentially lose
precision, and must also be done in two stages:
SELECT '52093.89'::money::numeric::float8;
Division of a money
value by an integer value is performed
with truncation of the fractional part towards zero. To get a rounded
result, divide by a floating-point value, or cast the money
value to numeric
before dividing and back to money
afterwards. (The latter is preferable to avoid risking precision loss.)
When a money
value is divided by another money
value, the result is double precision
(i.e., a pure number,
not money); the currency units cancel each other out in the division.
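A small sketch of these division behaviors (results not shown, since the money output format depends on the locale):
SELECT '100'::money / 3;                       -- money result, fraction truncated toward zero
SELECT '100'::money / 3.0::float8;             -- dividing by a floating-point value rounds
SELECT ('100'::money::numeric / 3)::money;     -- round via numeric, avoiding float precision loss
SELECT '100'::money / '3'::money;              -- double precision result; units cancel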
Table 8.4. Character Types
Name | Description |
---|---|
character varying(n) , varchar(n) | variable-length with limit |
character(n) , char(n) | fixed-length, blank padded |
text | variable unlimited length |
Table 8.4 shows the general-purpose character types available in PostgreSQL.
SQL defines two primary character types:
character varying(n) and
character(n), where n
is a positive integer. Both of these types can store strings up to
n characters (not bytes) in length. An attempt to store a
longer string into a column of these types will result in an
error, unless the excess characters are all spaces, in which case
the string will be truncated to the maximum length. (This somewhat
bizarre exception is required by the SQL
standard.) If the string to be stored is shorter than the declared
length, values of type character
will be space-padded;
values of type character varying
will simply store the
shorter
string.
If one explicitly casts a value to character varying(n)
or character(n), then an over-length
value will be truncated to n characters without
raising an error. (This too is required by the
SQL standard.)
The notations varchar(n) and char(n)
are aliases for character varying(n)
and character(n), respectively.
If specified, the length must be greater than zero and cannot exceed
10485760.
character
without length specifier is equivalent to
character(1)
. If character varying
is used
without length specifier, the type accepts strings of any size. The
latter is a PostgreSQL extension.
In addition, PostgreSQL provides the
text
type, which stores strings of any length.
Although the type text
is not in the
SQL standard, several other SQL database
management systems have it as well.
Values of type character
are physically padded
with spaces to the specified width n
, and are
stored and displayed that way. However, trailing spaces are treated as
semantically insignificant and disregarded when comparing two values
of type character
. In collations where whitespace
is significant, this behavior can produce unexpected results;
for example SELECT 'a '::CHAR(2) collate "C" <
E'a\n'::CHAR(2)
returns true, even though C
locale would consider a space to be greater than a newline.
Trailing spaces are removed when converting a character
value
to one of the other string types. Note that trailing spaces
are semantically significant in
character varying
and text
values, and
when using pattern matching, that is LIKE
and
regular expressions.
The characters that can be stored in any of these data types are determined by the database character set, which is selected when the database is created. Regardless of the specific character set, the character with code zero (sometimes called NUL) cannot be stored. For more information refer to Section 24.3.
The storage requirement for a short string (up to 126 bytes) is 1 byte
plus the actual string, which includes the space padding in the case of
character
. Longer strings have 4 bytes of overhead instead
of 1. Long strings are compressed by the system automatically, so
the physical requirement on disk might be less. Very long values are also
stored in background tables so that they do not interfere with rapid
access to shorter column values. In any case, the longest
possible character string that can be stored is about 1 GB. (The
maximum value that will be allowed for n
in the data
type declaration is less than that. It wouldn't be useful to
change this because with multibyte character encodings the number of
characters and bytes can be quite different. If you desire to
store long strings with no specific upper limit, use
text
or character varying
without a length
specifier, rather than making up an arbitrary length limit.)
There is no performance difference among these three types,
apart from increased storage space when using the blank-padded
type, and a few extra CPU cycles to check the length when storing into
a length-constrained column. While
character(n) has performance
advantages in some other database systems, there is no such advantage in
PostgreSQL; in fact character(n)
is usually the slowest of
the three because of its additional storage costs. In most situations
text
or character varying
should be used
instead.
Refer to Section 4.1.2.1 for information about the syntax of string literals, and to Chapter 9 for information about available operators and functions.
Example 8.1. Using the Character Types
CREATE TABLE test1 (a character(4));
INSERT INTO test1 VALUES ('ok');
SELECT a, char_length(a) FROM test1; -- (1)

  a   | char_length
------+-------------
 ok   |           2

CREATE TABLE test2 (b varchar(5));
INSERT INTO test2 VALUES ('ok');
INSERT INTO test2 VALUES ('good ');
INSERT INTO test2 VALUES ('too long');
ERROR: value too long for type character varying(5)

INSERT INTO test2 VALUES ('too long'::varchar(5)); -- explicit truncation
SELECT b, char_length(b) FROM test2;

   b   | char_length
-------+-------------
 ok    |           2
 good  |           5
 too l |           5
(1) The char_length function is discussed in Chapter 9.
There are two other fixed-length character types in
PostgreSQL, shown in Table 8.5. The name
type exists only for the storage of identifiers
in the internal system catalogs and is not intended for use by the general user. Its
length is currently defined as 64 bytes (63 usable characters plus
terminator) but should be referenced using the constant
NAMEDATALEN
in C
source code.
The length is set at compile time (and
is therefore adjustable for special uses); the default maximum
length might change in a future release. The type "char"
(note the quotes) is different from char(1)
in that it
only uses one byte of storage. It is internally used in the system
catalogs as a simplistic enumeration type.
Table 8.5. Special Character Types
Name | Storage Size | Description |
---|---|---|
"char" | 1 byte | single-byte internal type |
name | 64 bytes | internal type for object names |
The bytea
data type allows storage of binary strings;
see Table 8.6.
Table 8.6. Binary Data Types
Name | Storage Size | Description |
---|---|---|
bytea | 1 or 4 bytes plus the actual binary string | variable-length binary string |
A binary string is a sequence of octets (or bytes). Binary strings are distinguished from character strings in two ways. First, binary strings specifically allow storing octets of value zero and other “non-printable” octets (usually, octets outside the decimal range 32 to 126). Character strings disallow zero octets, and also disallow any other octet values and sequences of octet values that are invalid according to the database's selected character set encoding. Second, operations on binary strings process the actual bytes, whereas the processing of character strings depends on locale settings. In short, binary strings are appropriate for storing data that the programmer thinks of as “raw bytes”, whereas character strings are appropriate for storing text.
The bytea
type supports two
formats for input and output: “hex” format
and PostgreSQL's historical
“escape” format. Both
of these are always accepted on input. The output format depends
on the configuration parameter bytea_output;
the default is hex. (Note that the hex format was introduced in
PostgreSQL 9.0; earlier versions and some
tools don't understand it.)
The SQL standard defines a different binary
string type, called BLOB
or BINARY LARGE
OBJECT
. The input format is different from
bytea
, but the provided functions and operators are
mostly the same.
bytea Hex Format
The “hex” format encodes binary data as 2 hexadecimal digits
per byte, most significant nibble first. The entire string is
preceded by the sequence \x
(to distinguish it
from the escape format). In some contexts, the initial backslash may
need to be escaped by doubling it
(see Section 4.1.2.1).
For input, the hexadecimal digits can
be either upper or lower case, and whitespace is permitted between
digit pairs (but not within a digit pair nor in the starting
\x
sequence).
The hex format is compatible with a wide
range of external applications and protocols, and it tends to be
faster to convert than the escape format, so its use is preferred.
Example:
SET bytea_output = 'hex';

SELECT '\xDEADBEEF'::bytea;
   bytea
------------
 \xdeadbeef
bytea Escape Format
The “escape” format is the traditional
PostgreSQL format for the bytea
type. It
takes the approach of representing a binary string as a sequence
of ASCII characters, while converting those bytes that cannot be
represented as an ASCII character into special escape sequences.
If, from the point of view of the application, representing bytes
as characters makes sense, then this representation can be
convenient. But in practice it is usually confusing because it
fuzzes up the distinction between binary strings and character
strings, and also the particular escape mechanism that was chosen is
somewhat unwieldy. Therefore, this format should probably be avoided
for most new applications.
When entering bytea
values in escape format,
octets of certain
values must be escaped, while all octet
values can be escaped. In
general, to escape an octet, convert it into its three-digit
octal value and precede it by a backslash.
Backslash itself (octet decimal value 92) can alternatively be represented by
double backslashes.
Table 8.7
shows the characters that must be escaped, and gives the alternative
escape sequences where applicable.
Table 8.7. bytea Literal Escaped Octets
Decimal Octet Value | Description | Escaped Input Representation | Example | Hex Representation |
---|---|---|---|---|
0 | zero octet | '\000' | '\000'::bytea | \x00 |
39 | single quote | '''' or '\047' | ''''::bytea | \x27 |
92 | backslash | '\\' or '\134' | '\\'::bytea | \x5c |
0 to 31 and 127 to 255 | “non-printable” octets | '\xxx' (octal value) | '\001'::bytea | \x01 |
The requirement to escape non-printable octets varies depending on locale settings. In some instances you can get away with leaving them unescaped.
The reason that single quotes must be doubled, as shown
in Table 8.7, is that this
is true for any string literal in an SQL command. The generic
string-literal parser consumes the outermost single quotes
and reduces any pair of single quotes to one data character.
What the bytea
input function sees is just one
single quote, which it treats as a plain data character.
However, the bytea
input function treats
backslashes as special, and the other behaviors shown in
Table 8.7 are implemented by
that function.
In some contexts, backslashes must be doubled compared to what is shown above, because the generic string-literal parser will also reduce pairs of backslashes to one data character; see Section 4.1.2.1.
Bytea
octets are output in hex
format by default. If you change bytea_output
to escape
,
“non-printable” octets are converted to their
equivalent three-digit octal value and preceded by one backslash.
Most “printable” octets are output by their standard
representation in the client character set, e.g.:
SET bytea_output = 'escape';

SELECT 'abc \153\154\155 \052\251\124'::bytea;
     bytea
----------------
 abc klm *\251T
The octet with decimal value 92 (backslash) is doubled in the output. Details are in Table 8.8.
Table 8.8. bytea Output Escaped Octets
Decimal Octet Value | Description | Escaped Output Representation | Example | Output Result |
---|---|---|---|---|
92 | backslash | \\ | '\134'::bytea | \\ |
0 to 31 and 127 to 255 | “non-printable” octets | \xxx (octal value) | '\001'::bytea | \001 |
32 to 126 | “printable” octets | client character set representation | '\176'::bytea | ~ |
Depending on the front end to PostgreSQL you use,
you might have additional work to do in terms of escaping and
unescaping bytea
strings. For example, you might also
have to escape line feeds and carriage returns if your interface
automatically translates these.
PostgreSQL supports the full set of SQL date and time types, shown in Table 8.9. The operations available on these data types are described in Section 9.9. Dates are counted according to the Gregorian calendar, even in years before that calendar was introduced (see Section B.6 for more information).
Table 8.9. Date/Time Types
Name | Storage Size | Description | Low Value | High Value | Resolution |
---|---|---|---|---|---|
timestamp [ (p) ] [ without time zone ] | 8 bytes | both date and time (no time zone) | 4713 BC | 294276 AD | 1 microsecond |
timestamp [ (p) ] with time zone | 8 bytes | both date and time, with time zone | 4713 BC | 294276 AD | 1 microsecond |
date | 4 bytes | date (no time of day) | 4713 BC | 5874897 AD | 1 day |
time [ (p) ] [ without time zone ] | 8 bytes | time of day (no date) | 00:00:00 | 24:00:00 | 1 microsecond |
time [ (p) ] with time zone | 12 bytes | time of day (no date), with time zone | 00:00:00+1559 | 24:00:00-1559 | 1 microsecond |
interval [ fields ] [ (p) ] | 16 bytes | time interval | -178000000 years | 178000000 years | 1 microsecond |
The SQL standard requires that writing just timestamp
be equivalent to timestamp without time
zone
, and PostgreSQL honors that
behavior. timestamptz
is accepted as an
abbreviation for timestamp with time zone
; this is a
PostgreSQL extension.
time
, timestamp
, and
interval
accept an optional precision value
p
which specifies the number of
fractional digits retained in the seconds field. By default, there
is no explicit bound on precision. The allowed range of
p
is from 0 to 6.
The interval
type has an additional option, which is
to restrict the set of stored fields by writing one of these phrases:
YEAR
MONTH
DAY
HOUR
MINUTE
SECOND
YEAR TO MONTH
DAY TO HOUR
DAY TO MINUTE
DAY TO SECOND
HOUR TO MINUTE
HOUR TO SECOND
MINUTE TO SECOND
Note that if both fields
and
p
are specified, the
fields
must include SECOND
,
since the precision applies only to the seconds.
The type time with time zone
is defined by the SQL
standard, but the definition exhibits properties which lead to
questionable usefulness. In most cases, a combination of
date
, time
, timestamp without time
zone
, and timestamp with time zone
should
provide a complete range of date/time functionality required by
any application.
Date and time input is accepted in almost any reasonable format, including
ISO 8601, SQL-compatible,
traditional POSTGRES, and others.
For some formats, ordering of day, month, and year in date input is
ambiguous and there is support for specifying the expected
ordering of these fields. Set the DateStyle parameter
to MDY
to select month-day-year interpretation,
DMY
to select day-month-year interpretation, or
YMD
to select year-month-day interpretation.
PostgreSQL is more flexible in handling date/time input than the SQL standard requires. See Appendix B for the exact parsing rules of date/time input and for the recognized text fields including months, days of the week, and time zones.
Remember that any date or time literal input needs to be enclosed in single quotes, like text strings. Refer to Section 4.1.2.7 for more information. SQL requires the following syntax
type [ (p) ] 'value'
where p
is an optional precision
specification giving the number of
fractional digits in the seconds field. Precision can be
specified for time
, timestamp
, and
interval
types, and can range from 0 to 6.
If no precision is specified in a constant specification,
it defaults to the precision of the literal value (but not
more than 6 digits).
Table 8.10 shows some possible
inputs for the date
type.
Table 8.10. Date Input
Example | Description |
---|---|
1999-01-08 | ISO 8601; January 8 in any mode (recommended format) |
January 8, 1999 | unambiguous in any datestyle input mode |
1/8/1999 | January 8 in MDY mode; August 1 in DMY mode |
1/18/1999 | January 18 in MDY mode; rejected in other modes |
01/02/03 | January 2, 2003 in MDY mode; February 1, 2003 in DMY mode; February 3, 2001 in YMD mode |
1999-Jan-08 | January 8 in any mode |
Jan-08-1999 | January 8 in any mode |
08-Jan-1999 | January 8 in any mode |
99-Jan-08 | January 8 in YMD mode, else error |
08-Jan-99 | January 8, except error in YMD mode |
Jan-08-99 | January 8, except error in YMD mode |
19990108 | ISO 8601; January 8, 1999 in any mode |
990108 | ISO 8601; January 8, 1999 in any mode |
1999.008 | year and day of year |
J2451187 | Julian date |
January 8, 99 BC | year 99 BC |
The time-of-day types are time [ (p) ] without time zone and
time [ (p) ] with time zone.
time alone is equivalent to
time without time zone.
Valid input for these types consists of a time of day followed
by an optional time zone. (See Table 8.11
and Table 8.12.) If a time zone is
specified in the input for time without time zone
,
it is silently ignored. You can also specify a date but it will
be ignored, except when you use a time zone name that involves a
daylight-savings rule, such as
America/New_York
. In this case specifying the date
is required in order to determine whether standard or daylight-savings
time applies. The appropriate time zone offset is recorded in the
time with time zone
value and is output as stored;
it is not adjusted to the active time zone.
Table 8.11. Time Input
Example | Description |
---|---|
04:05:06.789 | ISO 8601 |
04:05:06 | ISO 8601 |
04:05 | ISO 8601 |
040506 | ISO 8601 |
04:05 AM | same as 04:05; AM does not affect value |
04:05 PM | same as 16:05; input hour must be <= 12 |
04:05:06.789-8 | ISO 8601, with time zone as UTC offset |
04:05:06-08:00 | ISO 8601, with time zone as UTC offset |
04:05-08:00 | ISO 8601, with time zone as UTC offset |
040506-08 | ISO 8601, with time zone as UTC offset |
040506+0730 | ISO 8601, with fractional-hour time zone as UTC offset |
040506+07:30:00 | UTC offset specified to seconds (not allowed in ISO 8601) |
04:05:06 PST | time zone specified by abbreviation |
2003-04-12 04:05:06 America/New_York | time zone specified by full name |
Table 8.12. Time Zone Input
Example | Description |
---|---|
PST | Abbreviation (for Pacific Standard Time) |
America/New_York | Full time zone name |
PST8PDT | POSIX-style time zone specification |
-8:00:00 | UTC offset for PST |
-8:00 | UTC offset for PST (ISO 8601 extended format) |
-800 | UTC offset for PST (ISO 8601 basic format) |
-8 | UTC offset for PST (ISO 8601 basic format) |
zulu | Military abbreviation for UTC |
z | Short form of zulu (also in ISO 8601) |
Refer to Section 8.5.3 for more information on how to specify time zones.
Valid input for the time stamp types consists of the concatenation
of a date and a time, followed by an optional time zone,
followed by an optional AD
or BC
.
(Alternatively, AD
/BC
can appear
before the time zone, but this is not the preferred ordering.)
Thus:
1999-01-08 04:05:06
and:
1999-01-08 04:05:06 -8:00
are valid values, which follow the ISO 8601 standard. In addition, the common format:
January 8 04:05:06 1999 PST
is supported.
The SQL standard differentiates
timestamp without time zone
and timestamp with time zone
literals by the presence of a
“+” or “-” symbol and time zone offset after
the time. Hence, according to the standard,
TIMESTAMP '2004-10-19 10:23:54'
is a timestamp without time zone
, while
TIMESTAMP '2004-10-19 10:23:54+02'
is a timestamp with time zone
.
PostgreSQL never examines the content of a
literal string before determining its type, and therefore will treat
both of the above as timestamp without time zone
. To
ensure that a literal is treated as timestamp with time
zone
, give it the correct explicit type:
TIMESTAMP WITH TIME ZONE '2004-10-19 10:23:54+02'
In a literal that has been determined to be timestamp without time
zone
, PostgreSQL will silently ignore
any time zone indication.
That is, the resulting value is derived from the date/time
fields in the input value, and is not adjusted for time zone.
For timestamp with time zone
, the internally stored
value is always in UTC (Universal
Coordinated Time, traditionally known as Greenwich Mean Time,
GMT). An input value that has an explicit
time zone specified is converted to UTC using the appropriate offset
for that time zone. If no time zone is stated in the input string,
then it is assumed to be in the time zone indicated by the system's
TimeZone parameter, and is converted to UTC using the
offset for the timezone
zone.
When a timestamp with time
zone
value is output, it is always converted from UTC to the
current timezone
zone, and displayed as local time in that
zone. To see the time in another time zone, either change
timezone
or use the AT TIME ZONE
construct
(see Section 9.9.4).
Conversions between timestamp without time zone
and
timestamp with time zone
normally assume that the
timestamp without time zone
value should be taken or given
as timezone
local time. A different time zone can
be specified for the conversion using AT TIME ZONE
.
PostgreSQL supports several
special date/time input values for convenience, as shown in Table 8.13. The values
infinity
and -infinity
are specially represented inside the system and will be displayed
unchanged; but the others are simply notational shorthands
that will be converted to ordinary date/time values when read.
(In particular, now
and related strings are converted
to a specific time value as soon as they are read.)
All of these values need to be enclosed in single quotes when used
as constants in SQL commands.
Table 8.13. Special Date/Time Inputs
Input String | Valid Types | Description |
---|---|---|
epoch | date , timestamp | 1970-01-01 00:00:00+00 (Unix system time zero) |
infinity | date , timestamp | later than all other time stamps |
-infinity | date , timestamp | earlier than all other time stamps |
now | date , time , timestamp | current transaction's start time |
today | date , timestamp | midnight (00:00 ) today |
tomorrow | date , timestamp | midnight (00:00 ) tomorrow |
yesterday | date , timestamp | midnight (00:00 ) yesterday |
allballs | time | 00:00:00.00 UTC |
The following SQL-compatible functions can also
be used to obtain the current time value for the corresponding data
type:
CURRENT_DATE
, CURRENT_TIME
,
CURRENT_TIMESTAMP
, LOCALTIME
,
LOCALTIMESTAMP
. (See Section 9.9.5.) Note that these are
SQL functions and are not recognized in data input strings.
While the input strings now
,
today
, tomorrow
,
and yesterday
are fine to use in interactive SQL
commands, they can have surprising behavior when the command is
saved to be executed later, for example in prepared statements,
views, and function definitions. The string can be converted to a
specific time value that continues to be used long after it becomes
stale. Use one of the SQL functions instead in such contexts.
For example, CURRENT_DATE + 1
is safer than
'tomorrow'::date
.
The output format of the date/time types can be set to one of the four
styles ISO 8601,
SQL (Ingres), traditional POSTGRES
(Unix date format), or
German. The default
is the ISO format. (The
SQL standard requires the use of the ISO 8601
format. The name of the “SQL” output format is a
historical accident.) Table 8.14 shows examples of each
output style. The output of the date
and
time
types is generally only the date or time part
in accordance with the given examples. However, the
POSTGRES style outputs date-only values in
ISO format.
Table 8.14. Date/Time Output Styles
Style Specification | Description | Example |
---|---|---|
ISO | ISO 8601, SQL standard | 1997-12-17 07:37:16-08 |
SQL | traditional style | 12/17/1997 07:37:16.00 PST |
Postgres | original style | Wed Dec 17 07:37:16 1997 PST |
German | regional style | 17.12.1997 07:37:16.00 PST |
ISO 8601 specifies the use of uppercase letter T
to separate
the date and time. PostgreSQL accepts that format on
input, but on output it uses a space rather than T
, as shown
above. This is for readability and for consistency with
RFC 3339 as
well as some other database systems.
In the SQL and POSTGRES styles, day appears before month if DMY field ordering has been specified, otherwise month appears before day. (See Section 8.5.1 for how this setting also affects interpretation of input values.) Table 8.15 shows examples.
Table 8.15. Date Order Conventions
datestyle Setting | Input Ordering | Example Output |
---|---|---|
SQL, DMY | day /month /year | 17/12/1997 15:37:16.00 CET |
SQL, MDY | month /day /year | 12/17/1997 07:37:16.00 PST |
Postgres, DMY | day /month /year | Wed 17 Dec 07:37:16 1997 PST |
In the ISO style, the time zone is always shown as
a signed numeric offset from UTC, with positive sign used for zones
east of Greenwich. The offset will be shown
as hh (hours only) if it is an integral
number of hours, else
as hh:mm if it
is an integral number of minutes, else as
hh:mm:ss.
(The third case is not possible with any modern time zone standard,
but it can appear when working with timestamps that predate the
adoption of standardized time zones.)
In the other date styles, the time zone is shown as an alphabetic
abbreviation if one is in common use in the current zone. Otherwise
it appears as a signed numeric offset in ISO 8601 basic format
(hh
or hhmm
).
The date/time style can be selected by the user using the
SET datestyle
command, the DateStyle parameter in the
postgresql.conf
configuration file, or the
PGDATESTYLE
environment variable on the server or
client.
The formatting function to_char
(see Section 9.8) is also available as
a more flexible way to format date/time output.
Time zones, and time-zone conventions, are influenced by political decisions, not just earth geometry. Time zones around the world became somewhat standardized during the 1900s, but continue to be prone to arbitrary changes, particularly with respect to daylight-savings rules. PostgreSQL uses the widely-used IANA (Olson) time zone database for information about historical time zone rules. For times in the future, the assumption is that the latest known rules for a given time zone will continue to be observed indefinitely far into the future.
PostgreSQL endeavors to be compatible with the SQL standard definitions for typical usage. However, the SQL standard has an odd mix of date and time types and capabilities. Two obvious problems are:
Although the date
type
cannot have an associated time zone, the
time
type can.
Time zones in the real world have little meaning unless
associated with a date as well as a time,
since the offset can vary through the year with daylight-saving
time boundaries.
The default time zone is specified as a constant numeric offset from UTC. It is therefore impossible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries.
To address these difficulties, we recommend using date/time types
that contain both date and time when using time zones. We
do not recommend using the type time with
time zone
(though it is supported by
PostgreSQL for legacy applications and
for compliance with the SQL standard).
PostgreSQL assumes
your local time zone for any type containing only date or time.
All timezone-aware dates and times are stored internally in UTC. They are converted to local time in the zone specified by the TimeZone configuration parameter before being displayed to the client.
PostgreSQL allows you to specify time zones in three different forms:
A full time zone name, for example America/New_York
.
The recognized time zone names are listed in the
pg_timezone_names
view (see Section 52.94).
PostgreSQL uses the widely-used IANA
time zone data for this purpose, so the same time zone
names are also recognized by other software.
A time zone abbreviation, for example PST
. Such a
specification merely defines a particular offset from UTC, in
contrast to full time zone names which can imply a set of daylight
savings transition rules as well. The recognized abbreviations
are listed in the pg_timezone_abbrevs
view (see Section 52.93). You cannot set the
configuration parameters TimeZone or
log_timezone to a time
zone abbreviation, but you can use abbreviations in
date/time input values and with the AT TIME ZONE
operator.
In addition to the timezone names and abbreviations, PostgreSQL will accept POSIX-style time zone specifications, as described in Section B.5. This option is not normally preferable to using a named time zone, but it may be necessary if no suitable IANA time zone entry is available.
In short, this is the difference between abbreviations
and full names: abbreviations represent a specific offset from UTC,
whereas many of the full names imply a local daylight-savings time
rule, and so have two possible UTC offsets. As an example,
2014-06-04 12:00 America/New_York
represents noon local
time in New York, which for this particular date was Eastern Daylight
Time (UTC-4). So 2014-06-04 12:00 EDT
specifies that
same time instant. But 2014-06-04 12:00 EST
specifies
noon Eastern Standard Time (UTC-5), regardless of whether daylight
savings was nominally in effect on that date.
To complicate matters, some jurisdictions have used the same timezone
abbreviation to mean different UTC offsets at different times; for
example, in Moscow MSK
has meant UTC+3 in some years and
UTC+4 in others. PostgreSQL interprets such
abbreviations according to whatever they meant (or had most recently
meant) on the specified date; but, as with the EST
example
above, this is not necessarily the same as local civil time on that date.
In all cases, timezone names and abbreviations are recognized case-insensitively. (This is a change from PostgreSQL versions prior to 8.2, which were case-sensitive in some contexts but not others.)
Neither timezone names nor abbreviations are hard-wired into the server;
they are obtained from configuration files stored under
.../share/timezone/
and .../share/timezonesets/
of the installation directory
(see Section B.4).
The TimeZone configuration parameter can
be set in the file postgresql.conf
, or in any of the
other standard ways described in Chapter 20.
There are also some special ways to set it:
The SQL command SET TIME ZONE
sets the time zone for the session. This is an alternative spelling
of SET TIMEZONE TO
with a more SQL-spec-compatible syntax.
The PGTZ
environment variable is used by
libpq clients
to send a SET TIME ZONE
command to the server upon connection.
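For instance, the session time zone can be changed with either spelling of the SET command described above (a minimal sketch):
SET TIME ZONE 'America/New_York';   -- SQL-spec-compatible spelling
SET timezone TO 'UTC';              -- equivalent parameter form
SHOW timezone;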
interval
values can be written using the following
verbose syntax:
[@] quantity unit [quantity unit ...] [direction]
where quantity
is a number (possibly signed);
unit
is microsecond
,
millisecond
, second
,
minute
, hour
, day
,
week
, month
, year
,
decade
, century
, millennium
,
or abbreviations or plurals of these units;
direction
can be ago
or
empty. The at sign (@
) is optional noise. The amounts
of the different units are implicitly added with appropriate
sign accounting. ago
negates all the fields.
This syntax is also used for interval output, if
IntervalStyle is set to
postgres_verbose
.
Quantities of days, hours, minutes, and seconds can be specified without
explicit unit markings. For example, '1 12:59:10'
is read
the same as '1 day 12 hours 59 min 10 sec'
. Also,
a combination of years and months can be specified with a dash;
for example '200-10'
is read the same as '200 years
10 months'
. (These shorter forms are in fact the only ones allowed
by the SQL standard, and are used for output when
IntervalStyle
is set to sql_standard
.)
Interval values can also be written as ISO 8601 time intervals, using either the “format with designators” of the standard's section 4.4.3.2 or the “alternative format” of section 4.4.3.3. The format with designators looks like this:
P quantity unit [ quantity unit ...] [ T [ quantity unit ...]]
The string must start with a P
, and may include a
T
that introduces the time-of-day units. The
available unit abbreviations are given in Table 8.16. Units may be
omitted, and may be specified in any order, but units smaller than
a day must appear after T
. In particular, the meaning of
M
depends on whether it is before or after
T
.
Table 8.16. ISO 8601 Interval Unit Abbreviations
Abbreviation | Meaning |
---|---|
Y | Years |
M | Months (in the date part) |
W | Weeks |
D | Days |
H | Hours |
M | Minutes (in the time part) |
S | Seconds |
In the alternative format:
P [ years-months-days ] [ T hours:minutes:seconds ]
the string must begin with P
, and a
T
separates the date and time parts of the interval.
The values are given as numbers similar to ISO 8601 dates.
When writing an interval constant with a fields
specification, or when assigning a string to an interval column that was
defined with a fields
specification, the interpretation of
unmarked quantities depends on the fields
. For
example INTERVAL '1' YEAR
is read as 1 year, whereas
INTERVAL '1'
means 1 second. Also, field values
“to the right” of the least significant field allowed by the
fields
specification are silently discarded. For
example, writing INTERVAL '1 day 2:03:04' HOUR TO MINUTE
results in dropping the seconds field, but not the day field.
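For example (a small sketch of the behaviors just described):
SELECT INTERVAL '1' YEAR;                        -- read as 1 year
SELECT INTERVAL '1';                             -- read as 1 second
SELECT INTERVAL '1 day 2:03:04' HOUR TO MINUTE;  -- seconds field dropped, day field kept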
According to the SQL standard all fields of an interval
value must have the same sign, so a leading negative sign applies to all
fields; for example the negative sign in the interval literal
'-1 2:03:04'
applies to both the days and hour/minute/second
parts. PostgreSQL allows the fields to have different
signs, and traditionally treats each field in the textual representation
as independently signed, so that the hour/minute/second part is
considered positive in this example. If IntervalStyle
is
set to sql_standard
then a leading sign is considered
to apply to all fields (but only if no additional signs appear).
Otherwise the traditional PostgreSQL interpretation is
used. To avoid ambiguity, it's recommended to attach an explicit sign
to each field if any field is negative.
Internally, interval
values are stored as three integral
fields: months, days, and microseconds. These fields are kept
separate because the number of days in a month varies, while a day
can have 23 or 25 hours if a daylight savings time transition is
involved. An interval input string that uses other units is
normalized into this format, and then reconstructed in a standardized
way for output, for example:
SELECT '2 years 15 months 100 weeks 99 hours 123456789 milliseconds'::interval;

               interval
---------------------------------------
 3 years 3 mons 700 days 133:17:36.789
Here weeks, which are understood as “7 days”, have been kept separate, while the smaller and larger time units were combined and normalized.
Input field values can have fractional parts, for example '1.5
weeks'
or '01:02:03.45'
. However,
because interval
internally stores only integral fields,
fractional values must be converted into smaller
units. Fractional parts of units greater than months are truncated to
be an integer number of months, e.g. '1.5 years'
becomes '1 year 6 mons'
. Fractional parts of
weeks and days are computed to be an integer number of days and
microseconds, assuming 30 days per month and 24 hours per day, e.g.,
'1.75 months'
becomes 1 mon 22 days
12:00:00
. Only seconds will ever be shown as fractional
on output.
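For example (results as described above):
SELECT '1.5 years'::interval;    -- 1 year 6 mons
SELECT '1.75 months'::interval;  -- 1 mon 22 days 12:00:00
SELECT '1.5 weeks'::interval;    -- 10 days 12:00:00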
Table 8.17 shows some examples
of valid interval
input.
Table 8.17. Interval Input
Example | Description |
---|---|
1-2 | SQL standard format: 1 year 2 months |
3 4:05:06 | SQL standard format: 3 days 4 hours 5 minutes 6 seconds |
1 year 2 months 3 days 4 hours 5 minutes 6 seconds | Traditional Postgres format: 1 year 2 months 3 days 4 hours 5 minutes 6 seconds |
P1Y2M3DT4H5M6S | ISO 8601 “format with designators”: same meaning as above |
P0001-02-03T04:05:06 | ISO 8601 “alternative format”: same meaning as above |
As previously explained, PostgreSQL
stores interval
values as months, days, and
microseconds. For output, the months field is converted to years and
months by dividing by 12. The days field is shown as-is. The
microseconds field is converted to hours, minutes, seconds, and
fractional seconds. Thus months, minutes, and seconds will never be
shown as exceeding the ranges 0–11, 0–59, and 0–59
respectively, while the displayed years, days, and hours fields can
be quite large. (The justify_days
and justify_hours
functions can be used if it is desirable to transpose large days or
hours values into the next higher field.)
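A small sketch of the justify functions mentioned above:
SELECT justify_hours(INTERVAL '27 hours');   -- 1 day 03:00:00
SELECT justify_days(INTERVAL '35 days');     -- 1 mon 5 days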
The output format of the interval type can be set to one of the
four styles sql_standard
, postgres
,
postgres_verbose
, or iso_8601
,
using the command SET intervalstyle
.
The default is the postgres
format.
Table 8.18 shows examples of each
output style.
The sql_standard
style produces output that conforms to
the SQL standard's specification for interval literal strings, if
the interval value meets the standard's restrictions (either year-month
only or day-time only, with no mixing of positive
and negative components). Otherwise the output looks like a standard
year-month literal string followed by a day-time literal string,
with explicit signs added to disambiguate mixed-sign intervals.
The output of the postgres
style matches the output of
PostgreSQL releases prior to 8.4 when the
DateStyle parameter was set to ISO
.
The output of the postgres_verbose
style matches the output of
PostgreSQL releases prior to 8.4 when the
DateStyle
parameter was set to non-ISO
output.
The output of the iso_8601
style matches the “format
with designators” described in section 4.4.3.2 of the
ISO 8601 standard.
Table 8.18. Interval Output Style Examples
Style Specification | Year-Month Interval | Day-Time Interval | Mixed Interval |
---|---|---|---|
sql_standard | 1-2 | 3 4:05:06 | -1-2 +3 -4:05:06 |
postgres | 1 year 2 mons | 3 days 04:05:06 | -1 year -2 mons +3 days -04:05:06 |
postgres_verbose | @ 1 year 2 mons | @ 3 days 4 hours 5 mins 6 secs | @ 1 year 2 mons -3 days 4 hours 5 mins 6 secs ago |
iso_8601 | P1Y2M | P3DT4H5M6S | P-1Y-2M3DT-4H-5M-6S |
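For example, the style can be switched within a session (an illustrative snippet):

SET intervalstyle = 'iso_8601';
SELECT interval '1 year 2 months 3 days 4 hours 5 minutes 6 seconds';
-- Result: P1Y2M3DT4H5M6S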
PostgreSQL provides the
standard SQL type boolean
;
see Table 8.19.
The boolean
type can have several states:
“true”, “false”, and a third state,
“unknown”, which is represented by the
SQL null value.
Table 8.19. Boolean Data Type
Name | Storage Size | Description |
---|---|---|
boolean | 1 byte | state of true or false |
Boolean constants can be represented in SQL queries by the SQL
key words TRUE
, FALSE
,
and NULL
.
The datatype input function for type boolean
accepts these
string representations for the “true” state:
true |
yes |
on |
1 |
and these representations for the “false” state:
false |
no |
off |
0 |
Unique prefixes of these strings are also accepted, for
example t
or n
.
Leading or trailing whitespace is ignored, and case does not matter.
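For instance (an illustrative query):

SELECT 'yes'::boolean AS yes, 'n'::boolean AS n, '  TRUE '::boolean AS padded;
-- Result: t, f, t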
The datatype output function for type boolean
always emits
either t
or f
, as shown in
Example 8.2.
Example 8.2. Using the boolean
Type
CREATE TABLE test1 (a boolean, b text);
INSERT INTO test1 VALUES (TRUE, 'sic est');
INSERT INTO test1 VALUES (FALSE, 'non est');
SELECT * FROM test1;
 a |    b
---+---------
 t | sic est
 f | non est

SELECT * FROM test1 WHERE a;
 a |    b
---+---------
 t | sic est
The key words TRUE
and FALSE
are
the preferred (SQL-compliant) method for writing
Boolean constants in SQL queries. But you can also use the string
representations by following the generic string-literal constant syntax
described in Section 4.1.2.7, for
example 'yes'::boolean
.
Note that the parser automatically understands
that TRUE
and FALSE
are of
type boolean
, but this is not so
for NULL
because that can have any type.
So in some contexts you might have to cast NULL
to boolean
explicitly, for
example NULL::boolean
. Conversely, the cast can be
omitted from a string-literal Boolean value in contexts where the parser
can deduce that the literal must be of type boolean
.
Enumerated (enum) types are data types that
comprise a static, ordered set of values.
They are equivalent to the enum
types supported in a number of programming languages. An example of an enum
type might be the days of the week, or a set of status values for
a piece of data.
Enum types are created using the CREATE TYPE command, for example:
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
Once created, the enum type can be used in table and function definitions much like any other type:
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
CREATE TABLE person (
    name text,
    current_mood mood
);
INSERT INTO person VALUES ('Moe', 'happy');
SELECT * FROM person WHERE current_mood = 'happy';
 name | current_mood
------+--------------
 Moe  | happy
(1 row)
The ordering of the values in an enum type is the order in which the values were listed when the type was created. All standard comparison operators and related aggregate functions are supported for enums. For example:
INSERT INTO person VALUES ('Larry', 'sad');
INSERT INTO person VALUES ('Curly', 'ok');
SELECT * FROM person WHERE current_mood > 'sad';
 name  | current_mood
-------+--------------
 Moe   | happy
 Curly | ok
(2 rows)

SELECT * FROM person WHERE current_mood > 'sad' ORDER BY current_mood;
 name  | current_mood
-------+--------------
 Curly | ok
 Moe   | happy
(2 rows)

SELECT name FROM person
  WHERE current_mood = (SELECT MIN(current_mood) FROM person);
 name
-------
 Larry
(1 row)
Each enumerated data type is separate and cannot be compared with other enumerated types. See this example:
CREATE TYPE happiness AS ENUM ('happy', 'very happy', 'ecstatic');
CREATE TABLE holidays (
    num_weeks integer,
    happiness happiness
);
INSERT INTO holidays(num_weeks,happiness) VALUES (4, 'happy');
INSERT INTO holidays(num_weeks,happiness) VALUES (6, 'very happy');
INSERT INTO holidays(num_weeks,happiness) VALUES (8, 'ecstatic');
INSERT INTO holidays(num_weeks,happiness) VALUES (2, 'sad');
ERROR:  invalid input value for enum happiness: "sad"
SELECT person.name, holidays.num_weeks FROM person, holidays
  WHERE person.current_mood = holidays.happiness;
ERROR:  operator does not exist: mood = happiness
If you really need to do something like that, you can either write a custom operator or add explicit casts to your query:
SELECT person.name, holidays.num_weeks FROM person, holidays
  WHERE person.current_mood::text = holidays.happiness::text;
 name | num_weeks
------+-----------
 Moe  | 4
(1 row)
Enum labels are case sensitive, so
'happy'
is not the same as 'HAPPY'
.
White space in the labels is significant too.
Although enum types are primarily intended for static sets of values, there is support for adding new values to an existing enum type, and for renaming values (see ALTER TYPE). Existing values cannot be removed from an enum type, nor can the sort ordering of such values be changed, short of dropping and re-creating the enum type.
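For example, using the mood type created above (illustrative commands; see ALTER TYPE for details):

ALTER TYPE mood ADD VALUE 'excited' AFTER 'ok';
ALTER TYPE mood RENAME VALUE 'ok' TO 'fine';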
An enum value occupies four bytes on disk. The length of an enum
value's textual label is limited by the NAMEDATALEN
setting compiled into PostgreSQL; in standard
builds this means at most 63 bytes.
The translations from internal enum values to textual labels are
kept in the system catalog
pg_enum
.
Querying this catalog directly can be useful.
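For example, a query like the following lists the labels of the mood type above in their sort order (illustrative):

SELECT enumlabel, enumsortorder
FROM pg_enum
WHERE enumtypid = 'mood'::regtype
ORDER BY enumsortorder;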
Geometric data types represent two-dimensional spatial objects. Table 8.20 shows the geometric types available in PostgreSQL.
Table 8.20. Geometric Types
Name | Storage Size | Description | Representation |
---|---|---|---|
point | 16 bytes | Point on a plane | (x,y) |
line | 24 bytes | Infinite line | {A,B,C} |
lseg | 32 bytes | Finite line segment | ((x1,y1),(x2,y2)) |
box | 32 bytes | Rectangular box | ((x1,y1),(x2,y2)) |
path | 16+16n bytes | Closed path (similar to polygon) | ((x1,y1),...) |
path | 16+16n bytes | Open path | [(x1,y1),...] |
polygon | 40+16n bytes | Polygon (similar to closed path) | ((x1,y1),...) |
circle | 24 bytes | Circle | <(x,y),r> (center point and radius) |
In all these types, the individual coordinates are stored as
double precision
(float8
) numbers.
A rich set of functions and operators is available to perform various geometric operations such as scaling, translation, rotation, and determining intersections. They are explained in Section 9.11.
Points are the fundamental two-dimensional building block for geometric
types. Values of type point
are specified using either of
the following syntaxes:
( x , y )
x , y
where x and y are the respective coordinates, as floating-point numbers.
Points are output using the first syntax.
Lines are represented by the linear
equation A
x + B
y + C
= 0,
where A
and B
are not both zero. Values
of type line
are input and output in the following form:
{ A, B, C }
Alternatively, any of the following forms can be used for input:
[ ( x1 , y1 ) , ( x2 , y2 ) ]
( ( x1 , y1 ) , ( x2 , y2 ) )
( x1 , y1 ) , ( x2 , y2 )
x1 , y1 , x2 , y2
where (x1,y1) and (x2,y2) are two different points on the line.
Line segments are represented by pairs of points that are the endpoints
of the segment. Values of type lseg
are specified using any
of the following syntaxes:
[ ( x1 , y1 ) , ( x2 , y2 ) ]
( ( x1 , y1 ) , ( x2 , y2 ) )
( x1 , y1 ) , ( x2 , y2 )
x1 , y1 , x2 , y2
where (x1,y1) and (x2,y2) are the end points of the line segment.
Line segments are output using the first syntax.
Boxes are represented by pairs of points that are opposite
corners of the box.
Values of type box
are specified using any of the following
syntaxes:
( ( x1 , y1 ) , ( x2 , y2 ) )
( x1 , y1 ) , ( x2 , y2 )
x1 , y1 , x2 , y2
where (x1,y1) and (x2,y2) are any two opposite corners of the box.
Boxes are output using the second syntax.
Any two opposite corners can be supplied on input, but the values will be reordered as needed to store the upper right and lower left corners, in that order.
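For example, both of the following inputs denote the same box and are stored and displayed identically (illustrative):

SELECT box '((0,0),(1,1))', box '((1,1),(0,0))';
-- Both output as (1,1),(0,0): upper right corner, then lower left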
Paths are represented by lists of connected points. Paths can be open, where the first and last points in the list are considered not connected, or closed, where the first and last points are considered connected.
Values of type path
are specified using any of the following
syntaxes:
[ ( x1 , y1 ) , ... , ( xn , yn ) ]
( ( x1 , y1 ) , ... , ( xn , yn ) )
( x1 , y1 ) , ... , ( xn , yn )
( x1 , y1 , ... , xn , yn )
x1 , y1 , ... , xn , yn
where the points are the end points of the line segments
comprising the path. Square brackets ([]
) indicate
an open path, while parentheses (()
) indicate a
closed path. When the outermost parentheses are omitted, as
in the third through fifth syntaxes, a closed path is assumed.
Paths are output using the first or second syntax, as appropriate.
Polygons are represented by lists of points (the vertexes of the polygon). Polygons are very similar to closed paths; the essential semantic difference is that a polygon is considered to include the area within it, while a path is not.
An important implementation difference between polygons and paths is that the stored representation of a polygon includes its smallest bounding box. This speeds up certain search operations, although computing the bounding box adds overhead while constructing new polygons.
Values of type polygon
are specified using any of the
following syntaxes:
( ( x1 , y1 ) , ... , ( xn , yn ) )
( x1 , y1 ) , ... , ( xn , yn )
( x1 , y1 , ... , xn , yn )
x1 , y1 , ... , xn , yn
where the points are the end points of the line segments comprising the boundary of the polygon.
Polygons are output using the first syntax.
Circles are represented by a center point and radius.
Values of type circle
are specified using any of the
following syntaxes:
< ( x , y ) , r >
( ( x , y ) , r )
( x , y ) , r
x , y , r
where (x,y) is the center point and r is the radius of the circle.
Circles are output using the first syntax.
PostgreSQL offers data types to store IPv4, IPv6, and MAC addresses, as shown in Table 8.21. It is better to use these types instead of plain text types to store network addresses, because these types offer input error checking and specialized operators and functions (see Section 9.12).
Table 8.21. Network Address Types
Name | Storage Size | Description |
---|---|---|
cidr | 7 or 19 bytes | IPv4 and IPv6 networks |
inet | 7 or 19 bytes | IPv4 and IPv6 hosts and networks |
macaddr | 6 bytes | MAC addresses |
macaddr8 | 8 bytes | MAC addresses (EUI-64 format) |
When sorting inet
or cidr
data types,
IPv4 addresses will always sort before IPv6 addresses, including
IPv4 addresses encapsulated or mapped to IPv6 addresses, such as
::10.2.3.4 or ::ffff:10.4.3.2.
inet
The inet
type holds an IPv4 or IPv6 host address, and
optionally its subnet, all in one field.
The subnet is represented by the number of network address bits
present in the host address (the
“netmask”). If the netmask is 32 and the address is IPv4,
then the value does not indicate a subnet, only a single host.
In IPv6, the address length is 128 bits, so 128 bits specify a
unique host address. Note that if you
want to accept only networks, you should use the
cidr
type rather than inet
.
The input format for this type is
address/y
where
address
is an IPv4 or IPv6 address and
y
is the number of bits in the netmask. If the
/y
portion is omitted, the
netmask is taken to be 32 for IPv4 or 128 for IPv6,
so the value represents
just a single host. On display, the
/y
portion is suppressed if the netmask specifies a single host.
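For example (illustrative):

SELECT '192.168.100.1'::inet;       -- displayed as 192.168.100.1 (single host, /32 suppressed)
SELECT '192.168.100.128/25'::inet;  -- displayed as 192.168.100.128/25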
cidr
The cidr
type holds an IPv4 or IPv6 network specification.
Input and output formats follow Classless Internet Domain Routing
conventions.
The format for specifying networks is address/y
where address
is the network's lowest
address represented as an
IPv4 or IPv6 address, and y
is the number of bits in the netmask. If
y
is omitted, it is calculated
using assumptions from the older classful network numbering system, except
it will be at least large enough to include all of the octets
written in the input. It is an error to specify a network address
that has bits set to the right of the specified netmask.
Table 8.22 shows some examples.
Table 8.22. cidr
Type Input Examples
cidr Input | cidr Output | abbrev(cidr) |
---|---|---|
192.168.100.128/25 | 192.168.100.128/25 | 192.168.100.128/25 |
192.168/24 | 192.168.0.0/24 | 192.168.0/24 |
192.168/25 | 192.168.0.0/25 | 192.168.0.0/25 |
192.168.1 | 192.168.1.0/24 | 192.168.1/24 |
192.168 | 192.168.0.0/24 | 192.168.0/24 |
128.1 | 128.1.0.0/16 | 128.1/16 |
128 | 128.0.0.0/16 | 128.0/16 |
128.1.2 | 128.1.2.0/24 | 128.1.2/24 |
10.1.2 | 10.1.2.0/24 | 10.1.2/24 |
10.1 | 10.1.0.0/16 | 10.1/16 |
10 | 10.0.0.0/8 | 10/8 |
10.1.2.3/32 | 10.1.2.3/32 | 10.1.2.3/32 |
2001:4f8:3:ba::/64 | 2001:4f8:3:ba::/64 | 2001:4f8:3:ba/64 |
2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 | 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 | 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 |
::ffff:1.2.3.0/120 | ::ffff:1.2.3.0/120 | ::ffff:1.2.3/120 |
::ffff:1.2.3.0/128 | ::ffff:1.2.3.0/128 | ::ffff:1.2.3.0/128 |
inet
vs. cidr
The essential difference between inet
and cidr
data types is that inet
accepts values with nonzero bits to
the right of the netmask, whereas cidr
does not. For
example, 192.168.0.1/24
is valid for inet
but not for cidr
.
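A quick way to see the difference (illustrative; the second cast fails because bits are set to the right of the mask):

SELECT '192.168.0.1/24'::inet;  -- accepted
SELECT '192.168.0.1/24'::cidr;  -- error: value has bits set to the right of the mask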
If you do not like the output format for inet
or
cidr
values, try the functions host
,
text
, and abbrev
.
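For example (illustrative; these functions are described in Section 9.12):

SELECT host(inet '192.168.0.2/24');   -- 192.168.0.2
SELECT text(inet '192.168.0.2/24');   -- 192.168.0.2/24
SELECT abbrev(cidr '10.1.0.0/16');    -- 10.1/16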
macaddr
The macaddr
type stores MAC addresses, known for example
from Ethernet card hardware addresses (although MAC addresses are
used for other purposes as well). Input is accepted in the
following formats:
'08:00:2b:01:02:03' |
'08-00-2b-01-02-03' |
'08002b:010203' |
'08002b-010203' |
'0800.2b01.0203' |
'0800-2b01-0203' |
'08002b010203' |
These examples all specify the same address. Upper and
lower case is accepted for the digits
a
through f
. Output is always in the
first of the forms shown.
IEEE Standard 802-2001 specifies the second form shown (with hyphens) as the canonical form for MAC addresses, and specifies the first form (with colons) as used with bit-reversed, MSB-first notation, so that 08-00-2b-01-02-03 = 10:00:D4:80:40:C0. This convention is widely ignored nowadays, and it is relevant only for obsolete network protocols (such as Token Ring). PostgreSQL makes no provisions for bit reversal; all accepted formats use the canonical LSB order.
The remaining five input formats are not part of any standard.
macaddr8
The macaddr8
type stores MAC addresses in EUI-64
format, known for example from Ethernet card hardware addresses
(although MAC addresses are used for other purposes as well).
This type can accept both 6 and 8 byte length MAC addresses
and stores them in 8 byte length format. MAC addresses given
in 6 byte format will be stored in 8 byte length format with the
4th and 5th bytes set to FF and FE, respectively.
Note that IPv6 uses a modified EUI-64 format where the 7th bit
should be set to one after the conversion from EUI-48. The
function macaddr8_set7bit
is provided to make this
change.
Generally speaking, any input which is comprised of pairs of hex
digits (on byte boundaries), optionally separated consistently by
one of ':'
, '-'
or '.'
, is
accepted. The number of hex digits must be either 16 (8 bytes) or
12 (6 bytes). Leading and trailing whitespace is ignored.
The following are examples of input formats that are accepted:
'08:00:2b:01:02:03:04:05' |
'08-00-2b-01-02-03-04-05' |
'08002b:0102030405' |
'08002b-0102030405' |
'0800.2b01.0203.0405' |
'0800-2b01-0203-0405' |
'08002b01:02030405' |
'08002b0102030405' |
These examples all specify the same address. Upper and
lower case is accepted for the digits
a
through f
. Output is always in the
first of the forms shown.
The last six input formats shown above are not part of any standard.
To convert a traditional 48 bit MAC address in EUI-48 format to
modified EUI-64 format to be included as the host portion of an
IPv6 address, use macaddr8_set7bit
as shown:
SELECT macaddr8_set7bit('08:00:2b:01:02:03');
macaddr8_set7bit
-------------------------
0a:00:2b:ff:fe:01:02:03
(1 row)
Bit strings are strings of 1's and 0's. They can be used to store
or visualize bit masks. There are two SQL bit types:
bit(n) and bit varying(n), where n is a positive integer.
bit
type data must match the length
n
exactly; it is an error to attempt to
store shorter or longer bit strings. bit varying
data is
of variable length up to the maximum length
n
; longer strings will be rejected.
Writing bit
without a length is equivalent to
bit(1)
, while bit varying
without a length
specification means unlimited length.
If one explicitly casts a bit-string value to bit(n), it will be truncated or
zero-padded on the right to be exactly n bits, without raising an error.
Similarly, if one explicitly casts a bit-string value to bit varying(n), it
will be truncated on the right if it is more than n bits.
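For example, these casts behave as just described (an illustrative query):

SELECT B'101'::bit(5) AS zero_padded,         -- 10100
       B'10101'::bit(3) AS truncated,         -- 101
       B'101101'::bit varying(4) AS v_trunc;  -- 1011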
Refer to Section 4.1.2.5 for information about the syntax of bit string constants. Bit-logical operators and string manipulation functions are available; see Section 9.6.
Example 8.3. Using the Bit String Types
CREATE TABLE test (a BIT(3), b BIT VARYING(5));
INSERT INTO test VALUES (B'101', B'00');
INSERT INTO test VALUES (B'10', B'101');
ERROR:  bit string length 2 does not match type bit(3)
INSERT INTO test VALUES (B'10'::bit(3), B'101');
SELECT * FROM test;
  a  |  b
-----+-----
 101 | 00
 100 | 101
A bit string value requires 1 byte for each group of 8 bits, plus 5 or 8 bytes overhead depending on the length of the string (but long values may be compressed or moved out-of-line, as explained in Section 8.3 for character strings).
PostgreSQL provides two data types that
are designed to support full text search, which is the activity of
searching through a collection of natural-language documents
to locate those that best match a query.
The tsvector
type represents a document in a form optimized
for text search; the tsquery
type similarly represents
a text query.
Chapter 12 provides a detailed explanation of this
facility, and Section 9.13 summarizes the
related functions and operators.
tsvector
A tsvector
value is a sorted list of distinct
lexemes, which are words that have been
normalized to merge different variants of the same word
(see Chapter 12 for details). Sorting and
duplicate-elimination are done automatically during input, as shown in
this example:
SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector; tsvector ---------------------------------------------------- 'a' 'and' 'ate' 'cat' 'fat' 'mat' 'on' 'rat' 'sat'
To represent lexemes containing whitespace or punctuation, surround them with quotes:
SELECT $$the lexeme ' ' contains spaces$$::tsvector; tsvector ------------------------------------------- ' ' 'contains' 'lexeme' 'spaces' 'the'
(We use dollar-quoted string literals in this example and the next one to avoid the confusion of having to double quote marks within the literals.) Embedded quotes and backslashes must be doubled:
SELECT $$the lexeme 'Joe''s' contains a quote$$::tsvector; tsvector ------------------------------------------------ 'Joe''s' 'a' 'contains' 'lexeme' 'quote' 'the'
Optionally, integer positions can be attached to lexemes:
SELECT 'a:1 fat:2 cat:3 sat:4 on:5 a:6 mat:7 and:8 ate:9 a:10 fat:11 rat:12'::tsvector; tsvector ------------------------------------------------------------------------------- 'a':1,6,10 'and':8 'ate':9 'cat':3 'fat':2,11 'mat':7 'on':5 'rat':12 'sat':4
A position normally indicates the source word's location in the document. Positional information can be used for proximity ranking. Position values can range from 1 to 16383; larger numbers are silently set to 16383. Duplicate positions for the same lexeme are discarded.
Lexemes that have positions can further be labeled with a
weight, which can be A
,
B
, C
, or D
.
D
is the default and hence is not shown on output:
SELECT 'a:1A fat:2B,4C cat:5D'::tsvector; tsvector ---------------------------- 'a':1A 'cat':5 'fat':2B,4C
Weights are typically used to reflect document structure, for example by marking title words differently from body words. Text search ranking functions can assign different priorities to the different weight markers.
It is important to understand that the
tsvector
type itself does not perform any word
normalization; it assumes the words it is given are normalized
appropriately for the application. For example,
SELECT 'The Fat Rats'::tsvector; tsvector -------------------- 'Fat' 'Rats' 'The'
For most English-text-searching applications the above words would
be considered non-normalized, but tsvector
doesn't care.
Raw document text should usually be passed through
to_tsvector
to normalize the words appropriately
for searching:
SELECT to_tsvector('english', 'The Fat Rats'); to_tsvector ----------------- 'fat':2 'rat':3
Again, see Chapter 12 for more detail.
tsquery
A tsquery
value stores lexemes that are to be
searched for, and can combine them using the Boolean operators
&
(AND), |
(OR), and
!
(NOT), as well as the phrase search operator
<->
(FOLLOWED BY). There is also a variant
<
of the FOLLOWED BY
operator, where N
>N
is an integer constant that
specifies the distance between the two lexemes being searched
for. <->
is equivalent to <1>
.
Parentheses can be used to enforce grouping of these operators.
In the absence of parentheses, !
(NOT) binds most tightly,
<->
(FOLLOWED BY) next most tightly, then
&
(AND), with |
(OR) binding
the least tightly.
Here are some examples:
SELECT 'fat & rat'::tsquery;
    tsquery
---------------
 'fat' & 'rat'

SELECT 'fat & (rat | cat)'::tsquery;
          tsquery
---------------------------
 'fat' & ( 'rat' | 'cat' )

SELECT 'fat & rat & ! cat'::tsquery;
        tsquery
------------------------
 'fat' & 'rat' & !'cat'
Optionally, lexemes in a tsquery
can be labeled with
one or more weight letters, which restricts them to match only
tsvector
lexemes with one of those weights:
SELECT 'fat:ab & cat'::tsquery; tsquery ------------------ 'fat':AB & 'cat'
Also, lexemes in a tsquery
can be labeled with *
to specify prefix matching:
SELECT 'super:*'::tsquery; tsquery ----------- 'super':*
This query will match any word in a tsvector
that begins
with “super”.
Quoting rules for lexemes are the same as described previously for
lexemes in tsvector
; and, as with tsvector
,
any required normalization of words must be done before converting
to the tsquery
type. The to_tsquery
function is convenient for performing such normalization:
SELECT to_tsquery('Fat:ab & Cats'); to_tsquery ------------------ 'fat':AB & 'cat'
Note that to_tsquery
will process prefixes in the same way
as other words, which means this comparison returns true:
SELECT to_tsvector( 'postgraduate' ) @@ to_tsquery( 'postgres:*' ); ?column? ---------- t
because postgres
gets stemmed to postgr
:
SELECT to_tsvector( 'postgraduate' ), to_tsquery( 'postgres:*' ); to_tsvector | to_tsquery ---------------+------------ 'postgradu':1 | 'postgr':*
which will match the stemmed form of postgraduate
.
The data type uuid
stores Universally Unique Identifiers
(UUID) as defined by RFC 4122,
ISO/IEC 9834-8:2005, and related standards.
(Some systems refer to this data type as a globally unique identifier, or
GUID, instead.) This
identifier is a 128-bit quantity that is generated by an algorithm chosen
to make it very unlikely that the same identifier will be generated by
anyone else in the known universe using the same algorithm. Therefore,
for distributed systems, these identifiers provide a better uniqueness
guarantee than sequence generators, which
are only unique within a single database.
A UUID is written as a sequence of lower-case hexadecimal digits, in several groups separated by hyphens, specifically a group of 8 digits followed by three groups of 4 digits followed by a group of 12 digits, for a total of 32 digits representing the 128 bits. An example of a UUID in this standard form is:
a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11
PostgreSQL also accepts the following alternative forms for input: use of upper-case digits, the standard format surrounded by braces, omitting some or all hyphens, adding a hyphen after any group of four digits. Examples are:
A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11
{a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11}
a0eebc999c0b4ef8bb6d6bb9bd380a11
a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11
{a0eebc99-9c0b4ef8-bb6d6bb9-bd380a11}
Output is always in the standard form.
See Section 9.14 for how to generate a UUID in PostgreSQL.
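For example, recent PostgreSQL releases provide the built-in gen_random_uuid() function for generating a random (version 4) UUID (illustrative; the value differs on every call):

SELECT gen_random_uuid();
-- e.g. 5e0fc234-27f3-4f2e-9be2-7d2a5ae1c0f3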
The xml
data type can be used to store XML data. Its
advantage over storing XML data in a text
field is that it
checks the input values for well-formedness, and there are support
functions to perform type-safe operations on it; see Section 9.15. Use of this data type requires the
installation to have been built with configure
--with-libxml
.
The xml
type can store well-formed
“documents”, as defined by the XML standard, as well
as “content” fragments, which are defined by reference
to the more permissive
“document node”
of the XQuery and XPath data model.
Roughly, this means that content fragments can have
more than one top-level element or character node. The expression
xmlvalue IS DOCUMENT can be used to evaluate whether a particular
xml value is a full document or only a content fragment.
Limits and compatibility notes for the xml
data type
can be found in Section D.3.
To produce a value of type xml
from character data,
use the function
xmlparse
:
XMLPARSE ( { DOCUMENT | CONTENT } value )
Examples:
XMLPARSE (DOCUMENT '<?xml version="1.0"?><book><title>Manual</title><chapter>...</chapter></book>')
XMLPARSE (CONTENT 'abc<foo>bar</foo><bar>foo</bar>')
While this is the only way to convert character strings into XML values according to the SQL standard, the PostgreSQL-specific syntaxes:
xml '<foo>bar</foo>'
'<foo>bar</foo>'::xml
can also be used.
The xml
type does not validate input values
against a document type declaration
(DTD),
even when the input value specifies a DTD.
There is also currently no built-in support for validating against
other XML schema languages such as XML Schema.
The inverse operation, producing a character string value from
xml
, uses the function
xmlserialize
:
XMLSERIALIZE ( { DOCUMENT | CONTENT } value AS type )
type can be character, character varying, or text
(or an alias for one of those). Again, according
to the SQL standard, this is the only way to convert between type
xml
and character types, but PostgreSQL also allows
you to simply cast the value.
When a character string value is cast to or from type
xml
without going through XMLPARSE
or
XMLSERIALIZE
, respectively, the choice of
DOCUMENT
versus CONTENT
is
determined by the “XML option”
session configuration parameter, which can be set using the
standard command:
SET XML OPTION { DOCUMENT | CONTENT };
or the more PostgreSQL-like syntax
SET xmloption TO { DOCUMENT | CONTENT };
The default is CONTENT
, so all forms of XML
data are allowed.
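For example, with the default CONTENT setting a fragment is accepted, while under DOCUMENT the same cast is rejected (an illustrative sketch, assuming a build with libxml support):

SET xmloption TO CONTENT;
SELECT 'abc<foo>bar</foo>'::xml;   -- accepted: a content fragment

SET xmloption TO DOCUMENT;
SELECT 'abc<foo>bar</foo>'::xml;   -- fails: not a single well-formed document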
Care must be taken when dealing with multiple character encodings
on the client, server, and in the XML data passed through them.
When using the text mode to pass queries to the server and query
results to the client (which is the normal mode), PostgreSQL
converts all character data passed between the client and the
server and vice versa to the character encoding of the respective
end; see Section 24.3. This includes string
representations of XML values, such as in the above examples.
This would ordinarily mean that encoding declarations contained in
XML data can become invalid as the character data is converted
to other encodings while traveling between client and server,
because the embedded encoding declaration is not changed. To cope
with this behavior, encoding declarations contained in
character strings presented for input to the xml
type
are ignored, and content is assumed
to be in the current server encoding. Consequently, for correct
processing, character strings of XML data must be sent
from the client in the current client encoding. It is the
responsibility of the client to either convert documents to the
current client encoding before sending them to the server, or to
adjust the client encoding appropriately. On output, values of
type xml
will not have an encoding declaration, and
clients should assume all data is in the current client
encoding.
When using binary mode to pass query parameters to the server and query results back to the client, no encoding conversion is performed, so the situation is different. In this case, an encoding declaration in the XML data will be observed, and if it is absent, the data will be assumed to be in UTF-8 (as required by the XML standard; note that PostgreSQL does not support UTF-16). On output, data will have an encoding declaration specifying the client encoding, unless the client encoding is UTF-8, in which case it will be omitted.
Needless to say, processing XML data with PostgreSQL will be less error-prone and more efficient if the XML data encoding, client encoding, and server encoding are the same. Since XML data is internally processed in UTF-8, computations will be most efficient if the server encoding is also UTF-8.
Some XML-related functions may not work at all on non-ASCII data
when the server encoding is not UTF-8. This is known to be an
issue for xmltable()
and xpath()
in particular.
The xml
data type is unusual in that it does not
provide any comparison operators. This is because there is no
well-defined and universally useful comparison algorithm for XML
data. One consequence of this is that you cannot retrieve rows by
comparing an xml
column against a search value. XML
values should therefore typically be accompanied by a separate key
field such as an ID. An alternative solution for comparing XML
values is to convert them to character strings first, but note
that character string comparison has little to do with a useful
XML comparison method.
Since there are no comparison operators for the xml
data type, it is not possible to create an index directly on a
column of this type. If speedy searches in XML data are desired,
possible workarounds include casting the expression to a
character string type and indexing that, or indexing an XPath
expression. Of course, the actual query would have to be adjusted
to search by the indexed expression.
The text-search functionality in PostgreSQL can also be used to speed up full-document searches of XML data. The necessary preprocessing support is, however, not yet available in the PostgreSQL distribution.
JSON data types are for storing JSON (JavaScript Object Notation)
data, as specified in RFC
7159. Such data can also be stored as text
, but
the JSON data types have the advantage of enforcing that each
stored value is valid according to the JSON rules. There are also
assorted JSON-specific functions and operators available for data stored
in these data types; see Section 9.16.
PostgreSQL offers two types for storing JSON
data: json
and jsonb
. To implement efficient query
mechanisms for these data types, PostgreSQL
also provides the jsonpath
data type described in
Section 8.14.7.
The json
and jsonb
data types
accept almost identical sets of values as
input. The major practical difference is one of efficiency. The
json
data type stores an exact copy of the input text,
which processing functions must reparse on each execution; while
jsonb
data is stored in a decomposed binary format that
makes it slightly slower to input due to added conversion
overhead, but significantly faster to process, since no reparsing
is needed. jsonb
also supports indexing, which can be a
significant advantage.
Because the json
type stores an exact copy of the input text, it
will preserve semantically-insignificant white space between tokens, as
well as the order of keys within JSON objects. Also, if a JSON object
within the value contains the same key more than once, all the key/value
pairs are kept. (The processing functions consider the last value as the
operative one.) By contrast, jsonb
does not preserve white
space, does not preserve the order of object keys, and does not keep
duplicate object keys. If duplicate keys are specified in the input,
only the last value is kept.
In general, most applications should prefer to store JSON data as
jsonb
, unless there are quite specialized needs, such as
legacy assumptions about ordering of object keys.
RFC 7159 specifies that JSON strings should be encoded in UTF8. It is therefore not possible for the JSON types to conform rigidly to the JSON specification unless the database encoding is UTF8. Attempts to directly include characters that cannot be represented in the database encoding will fail; conversely, characters that can be represented in the database encoding but not in UTF8 will be allowed.
RFC 7159 permits JSON strings to contain Unicode escape sequences
denoted by \uXXXX. In the input function for the json type, Unicode escapes are allowed
regardless of the database encoding, and are checked only for syntactic
correctness (that is, that four hex digits follow \u
).
However, the input function for jsonb
is stricter: it disallows
Unicode escapes for characters that cannot be represented in the database
encoding. The jsonb
type also
rejects \u0000
(because that cannot be represented in
PostgreSQL's text
type), and it insists
that any use of Unicode surrogate pairs to designate characters outside
the Unicode Basic Multilingual Plane be correct. Valid Unicode escapes
are converted to the equivalent single character for storage;
this includes folding surrogate pairs into a single character.
Many of the JSON processing functions described
in Section 9.16 will convert Unicode escapes to
regular characters, and will therefore throw the same types of errors
just described even if their input is of type json
not jsonb
. The fact that the json
input function does
not make these checks may be considered a historical artifact, although
it does allow for simple storage (without processing) of JSON Unicode
escapes in a database encoding that does not support the represented
characters.
When converting textual JSON input into jsonb
, the primitive
types described by RFC 7159 are effectively mapped onto
native PostgreSQL types, as shown
in Table 8.23.
Therefore, there are some minor additional constraints on what
constitutes valid jsonb
data that do not apply to
the json
type, nor to JSON in the abstract, corresponding
to limits on what can be represented by the underlying data type.
Notably, jsonb
will reject numbers that are outside the
range of the PostgreSQL numeric
data
type, while json
will not. Such implementation-defined
restrictions are permitted by RFC 7159. However, in
practice such problems are far more likely to occur in other
implementations, as it is common to represent JSON's number
primitive type as IEEE 754 double precision floating point
(which RFC 7159 explicitly anticipates and allows for).
When using JSON as an interchange format with such systems, the danger
of losing numeric precision compared to data originally stored
by PostgreSQL should be considered.
Conversely, as noted in the table there are some minor restrictions on the input format of JSON primitive types that do not apply to the corresponding PostgreSQL types.
Table 8.23. JSON Primitive Types and Corresponding PostgreSQL Types
JSON primitive type | PostgreSQL type | Notes |
---|---|---|
string | text | \u0000 is disallowed, as are Unicode escapes
representing characters not available in the database encoding |
number | numeric | NaN and infinity values are disallowed |
boolean | boolean | Only lowercase true and false spellings are accepted |
null | (none) | SQL NULL is a different concept |
The input/output syntax for the JSON data types is as specified in RFC 7159.
The following are all valid json
(or jsonb
) expressions:
-- Simple scalar/primitive value
-- Primitive values can be numbers, quoted strings, true, false, or null
SELECT '5'::json;

-- Array of zero or more elements (elements need not be of same type)
SELECT '[1, 2, "foo", null]'::json;

-- Object containing pairs of keys and values
-- Note that object keys must always be quoted strings
SELECT '{"bar": "baz", "balance": 7.77, "active": false}'::json;

-- Arrays and objects can be nested arbitrarily
SELECT '{"foo": [true, "bar"], "tags": {"a": 1, "b": null}}'::json;
As previously stated, when a JSON value is input and then printed without
any additional processing, json
outputs the same text that was
input, while jsonb
does not preserve semantically-insignificant
details such as whitespace. For example, note the differences here:
SELECT '{"bar": "baz", "balance": 7.77, "active":false}'::json; json ------------------------------------------------- {"bar": "baz", "balance": 7.77, "active":false} (1 row) SELECT '{"bar": "baz", "balance": 7.77, "active":false}'::jsonb; jsonb -------------------------------------------------- {"bar": "baz", "active": false, "balance": 7.77} (1 row)
One semantically-insignificant detail worth noting is that
in jsonb
, numbers will be printed according to the behavior of the
underlying numeric
type. In practice this means that numbers
entered with E
notation will be printed without it, for
example:
SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; json | jsonb -----------------------+------------------------- {"reading": 1.230e-5} | {"reading": 0.00001230} (1 row)
However, jsonb
will preserve trailing fractional zeroes, as seen
in this example, even though those are semantically insignificant for
purposes such as equality checks.
For the list of built-in functions and operators available for constructing and processing JSON values, see Section 9.16.
Representing data as JSON can be considerably more flexible than the traditional relational data model, which is compelling in environments where requirements are fluid. It is quite possible for both approaches to co-exist and complement each other within the same application. However, even for applications where maximal flexibility is desired, it is still recommended that JSON documents have a somewhat fixed structure. The structure is typically unenforced (though enforcing some business rules declaratively is possible), but having a predictable structure makes it easier to write queries that usefully summarize a set of “documents” (datums) in a table.
JSON data is subject to the same concurrency-control considerations as any other data type when stored in a table. Although storing large documents is practicable, keep in mind that any update acquires a row-level lock on the whole row. Consider limiting JSON documents to a manageable size in order to decrease lock contention among updating transactions. Ideally, JSON documents should each represent an atomic datum that business rules dictate cannot reasonably be further subdivided into smaller datums that could be modified independently.
jsonb
Containment and Existence
Testing containment is an important capability of
jsonb
. There is no parallel set of facilities for the
json
type. Containment tests whether
one jsonb
document has contained within it another one.
These examples return true except as noted:
-- Simple scalar/primitive values contain only the identical value:
SELECT '"foo"'::jsonb @> '"foo"'::jsonb;
-- The array on the right side is contained within the one on the left:
SELECT '[1, 2, 3]'::jsonb @> '[1, 3]'::jsonb;
-- Order of array elements is not significant, so this is also true:
SELECT '[1, 2, 3]'::jsonb @> '[3, 1]'::jsonb;
-- Duplicate array elements don't matter either:
SELECT '[1, 2, 3]'::jsonb @> '[1, 2, 2]'::jsonb;
-- The object with a single pair on the right side is contained
-- within the object on the left side:
SELECT '{"product": "PostgreSQL", "version": 9.4, "jsonb": true}'::jsonb @> '{"version": 9.4}'::jsonb;
-- The array on the right side is not considered contained within the
-- array on the left, even though a similar array is nested within it:
SELECT '[1, 2, [1, 3]]'::jsonb @> '[1, 3]'::jsonb; -- yields false
-- But with a layer of nesting, it is contained:
SELECT '[1, 2, [1, 3]]'::jsonb @> '[[1, 3]]'::jsonb;
-- Similarly, containment is not reported here:
SELECT '{"foo": {"bar": "baz"}}'::jsonb @> '{"bar": "baz"}'::jsonb; -- yields false
-- A top-level key and an empty object is contained:
SELECT '{"foo": {"bar": "baz"}}'::jsonb @> '{"foo": {}}'::jsonb;
The general principle is that the contained object must match the containing object as to structure and data contents, possibly after discarding some non-matching array elements or object key/value pairs from the containing object. But remember that the order of array elements is not significant when doing a containment match, and duplicate array elements are effectively considered only once.
As a special exception to the general principle that the structures must match, an array may contain a primitive value:
-- This array contains the primitive string value:
SELECT '["foo", "bar"]'::jsonb @> '"bar"'::jsonb;

-- This exception is not reciprocal -- non-containment is reported here:
SELECT '"bar"'::jsonb @> '["bar"]'::jsonb;  -- yields false
jsonb
also has an existence operator, which is
a variation on the theme of containment: it tests whether a string
(given as a text
value) appears as an object key or array
element at the top level of the jsonb
value.
These examples return true except as noted:
-- String exists as array element:
SELECT '["foo", "bar", "baz"]'::jsonb ? 'bar';

-- String exists as object key:
SELECT '{"foo": "bar"}'::jsonb ? 'foo';

-- Object values are not considered:
SELECT '{"foo": "bar"}'::jsonb ? 'bar';  -- yields false

-- As with containment, existence must match at the top level:
SELECT '{"foo": {"bar": "baz"}}'::jsonb ? 'bar'; -- yields false

-- A string is considered to exist if it matches a primitive JSON string:
SELECT '"foo"'::jsonb ? 'foo';
JSON objects are better suited than arrays for testing containment or existence when there are many keys or elements involved, because unlike arrays they are internally optimized for searching, and do not need to be searched linearly.
Because JSON containment is nested, an appropriate query can skip
explicit selection of sub-objects. As an example, suppose that we have
a doc
column containing objects at the top level, with
most objects containing tags
fields that contain arrays of
sub-objects. This query finds entries in which sub-objects containing
both "term":"paris"
and "term":"food"
appear,
while ignoring any such keys outside the tags
array:
SELECT doc->'site_name' FROM websites WHERE doc @> '{"tags":[{"term":"paris"}, {"term":"food"}]}';
One could accomplish the same thing with, say,
SELECT doc->'site_name' FROM websites WHERE doc->'tags' @> '[{"term":"paris"}, {"term":"food"}]';
but that approach is less flexible, and often less efficient as well.
On the other hand, the JSON existence operator is not nested: it will only look for the specified key or array element at top level of the JSON value.
The various containment and existence operators, along with all other JSON operators and functions are documented in Section 9.16.
jsonb
Indexing
GIN indexes can be used to efficiently search for
keys or key/value pairs occurring within a large number of
jsonb
documents (datums).
Two GIN “operator classes” are provided, offering different
performance and flexibility trade-offs.
The default GIN operator class for jsonb
supports queries with
the key-exists operators ?
, ?|
and ?&
, the containment operator
@>
, and the jsonpath
match
operators @?
and @@
.
(For details of the semantics that these operators
implement, see Table 9.45.)
An example of creating an index with this operator class is:
CREATE INDEX idxgin ON api USING GIN (jdoc);
The non-default GIN operator class jsonb_path_ops
does not support the key-exists operators, but it does support
@>
, @?
and @@
.
An example of creating an index with this operator class is:
CREATE INDEX idxginp ON api USING GIN (jdoc jsonb_path_ops);
Consider the example of a table that stores JSON documents retrieved from a third-party web service, with a documented schema definition. A typical document is:
{ "guid": "9c36adc1-7fb5-4d5b-83b4-90356a46061a", "name": "Angela Barton", "is_active": true, "company": "Magnafone", "address": "178 Howard Place, Gulf, Washington, 702", "registered": "2009-11-07T08:53:22 +08:00", "latitude": 19.793713, "longitude": 86.513373, "tags": [ "enim", "aliquip", "qui" ] }
We store these documents in a table named api
,
in a jsonb
column named jdoc
.
If a GIN index is created on this column,
queries like the following can make use of the index:
-- Find documents in which the key "company" has value "Magnafone" SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company": "Magnafone"}';
However, the index could not be used for queries like the
following, because though the operator ?
is indexable,
it is not applied directly to the indexed column jdoc
:
-- Find documents in which the key "tags" contains key or array element "qui" SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc -> 'tags' ? 'qui';
Still, with appropriate use of expression indexes, the above
query can use an index. If querying for particular items within
the "tags"
key is common, defining an index like this
may be worthwhile:
CREATE INDEX idxgintags ON api USING GIN ((jdoc -> 'tags'));
Now, the WHERE
clause jdoc -> 'tags' ? 'qui'
will be recognized as an application of the indexable
operator ?
to the indexed
expression jdoc -> 'tags'
.
(More information on expression indexes can be found in Section 11.7.)
Another approach to querying is to exploit containment, for example:
-- Find documents in which the key "tags" contains array element "qui" SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qui"]}';
A simple GIN index on the jdoc
column can support this
query. But note that such an index will store copies of every key and
value in the jdoc
column, whereas the expression index
of the previous example stores only data found under
the tags
key. While the simple-index approach is far more
flexible (since it supports queries about any key), targeted expression
indexes are likely to be smaller and faster to search than a simple
index.
GIN indexes also support the @?
and @@
operators, which
perform jsonpath
matching. Examples are
SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @? '$.tags[*] ? (@ == "qui")';
SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == "qui"';
For these operators, a GIN index extracts clauses of the form
accessors_chain = constant out of the jsonpath pattern, and does the
index search based on the keys and values mentioned in these clauses.
The accessors chain may include .key, [*], and [index] accessors.
The jsonb_ops operator class also supports .* and .** accessors,
but the jsonb_path_ops operator class does not.
Although the jsonb_path_ops
operator class supports
only queries with the @>
, @?
and @@
operators, it has notable
performance advantages over the default operator
class jsonb_ops
. A jsonb_path_ops
index is usually much smaller than a jsonb_ops
index over the same data, and the specificity of searches is better,
particularly when queries contain keys that appear frequently in the
data. Therefore search operations typically perform better
than with the default operator class.
The technical difference between a jsonb_ops
and a jsonb_path_ops
GIN index is that the former
creates independent index items for each key and value in the data,
while the latter creates index items only for each value in the
data.
[7]
Basically, each jsonb_path_ops
index item is
a hash of the value and the key(s) leading to it; for example to index
{"foo": {"bar": "baz"}}
, a single index item would
be created incorporating all three of foo
, bar
,
and baz
into the hash value. Thus a containment query
looking for this structure would result in an extremely specific index
search; but there is no way at all to find out whether foo
appears as a key. On the other hand, a jsonb_ops
index would create three index items representing foo
,
bar
, and baz
separately; then to do the
containment query, it would look for rows containing all three of
these items. While GIN indexes can perform such an AND search fairly
efficiently, it will still be less specific and slower than the
equivalent jsonb_path_ops
search, especially if
there are a very large number of rows containing any single one of the
three index items.
A disadvantage of the jsonb_path_ops
approach is
that it produces no index entries for JSON structures not containing
any values, such as {"a": {}}
. If a search for
documents containing such a structure is requested, it will require a
full-index scan, which is quite slow. jsonb_path_ops
is
therefore ill-suited for applications that often perform such searches.
jsonb
also supports btree
and hash
indexes. These are usually useful only if it's important to check
equality of complete JSON documents.
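For instance, a hash index on the api table used earlier could speed up whole-document equality tests (an illustrative sketch):

CREATE INDEX idxhash ON api USING HASH (jdoc);

SELECT jdoc->'name' FROM api
WHERE jdoc = '{"company": "Magnafone"}'::jsonb;  -- matches only documents equal to this exact value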
The btree
ordering for jsonb
datums is seldom
of great interest, but for completeness it is:
Object > Array > Boolean > Number > String > Null
Object with n pairs > object with n - 1 pairs
Array with n elements > array with n - 1 elements
Objects with equal numbers of pairs are compared in the order:
key-1, value-1, key-2 ...
Note that object keys are compared in their storage order; in particular, since shorter keys are stored before longer keys, this can lead to results that might be unintuitive, such as:
{ "aa": 1, "c": 1} > {"b": 1, "d": 1}
Similarly, arrays with equal numbers of elements are compared in the order:
element-1, element-2 ...
Primitive JSON values are compared using the same comparison rules as for the underlying PostgreSQL data type. Strings are compared using the default database collation.
jsonb
Subscripting
The jsonb
data type supports array-style subscripting expressions
to extract and modify elements. Nested values can be indicated by chaining
subscripting expressions, following the same rules as the path
argument in the jsonb_set
function. If a jsonb
value is an array, numeric subscripts start at zero, and negative integers count
backwards from the last element of the array. Slice expressions are not supported.
The result of a subscripting expression is always of the jsonb data type.
UPDATE
statements may use subscripting in the
SET
clause to modify jsonb
values. Subscript
paths must be traversable for all affected values insofar as they exist. For
instance, the path val['a']['b']['c']
can be traversed all
the way to c
if every val
,
val['a']
, and val['a']['b']
is an
object. If any val['a']
or val['a']['b']
is not defined, it will be created as an empty object and filled as
necessary. However, if any val
itself or one of the
intermediary values is defined as a non-object such as a string, number, or
jsonb
null
, traversal cannot proceed so
an error is raised and the transaction aborted.
An example of subscripting syntax:
-- Extract object value by key
SELECT ('{"a": 1}'::jsonb)['a'];

-- Extract nested object value by key path
SELECT ('{"a": {"b": {"c": 1}}}'::jsonb)['a']['b']['c'];

-- Extract array element by index
SELECT ('[1, "2", null]'::jsonb)[1];

-- Update object value by key. Note the quotes around '1': the assigned
-- value must be of the jsonb type as well
UPDATE table_name SET jsonb_field['key'] = '1';

-- This will raise an error if any record's jsonb_field['a']['b'] is something
-- other than an object. For example, the value {"a": 1} has a numeric value
-- of the key 'a'.
UPDATE table_name SET jsonb_field['a']['b']['c'] = '1';

-- Filter records using a WHERE clause with subscripting. Since the result of
-- subscripting is jsonb, the value we compare it against must also be jsonb.
-- The double quotes make "value" also a valid jsonb string.
SELECT * FROM table_name WHERE jsonb_field['key'] = '"value"';
jsonb
assignment via subscripting handles a few edge cases
differently from jsonb_set
. When a source jsonb
value is NULL
, assignment via subscripting will proceed
as if it was an empty JSON value of the type (object or array) implied by the
subscript key:
-- Where jsonb_field was NULL, it is now {"a": 1}
UPDATE table_name SET jsonb_field['a'] = '1';

-- Where jsonb_field was NULL, it is now [1]
UPDATE table_name SET jsonb_field[0] = '1';
If an index is specified for an array containing too few elements,
NULL
elements will be appended until the index is reachable
and the value can be set.
-- Where jsonb_field was [], it is now [null, null, 2];
-- where jsonb_field was [0], it is now [0, null, 2]
UPDATE table_name SET jsonb_field[2] = '2';
A jsonb
value will accept assignments to nonexistent subscript
paths as long as the last existing element to be traversed is an object or
array, as implied by the corresponding subscript (the element indicated by
the last subscript in the path is not traversed and may be anything). Nested
array and object structures will be created, and in the former case
null
-padded, as specified by the subscript path until the
assigned value can be placed.
-- Where jsonb_field was {}, it is now {"a": [{"b": 1}]}
UPDATE table_name SET jsonb_field['a'][0]['b'] = '1';

-- Where jsonb_field was [], it is now [null, {"a": 1}]
UPDATE table_name SET jsonb_field[1]['a'] = '1';
Additional extensions are available that implement transforms for the
jsonb
type for different procedural languages.
The extensions for PL/Perl are called jsonb_plperl
and
jsonb_plperlu
. If you use them, jsonb
values are mapped to Perl arrays, hashes, and scalars, as appropriate.
The extensions for PL/Python are called jsonb_plpythonu
,
jsonb_plpython2u
, and
jsonb_plpython3u
(see Section 46.1 for the PL/Python naming convention). If you
use them, jsonb
values are mapped to Python dictionaries,
lists, and scalars, as appropriate.
Of these extensions, jsonb_plperl
is
considered “trusted”, that is, it can be installed by
non-superusers who have CREATE
privilege on the
current database. The rest require superuser privilege to install.
The jsonpath
type implements support for the SQL/JSON path language
in PostgreSQL to efficiently query JSON data.
It provides a binary representation of the parsed SQL/JSON path
expression that specifies the items to be retrieved by the path
engine from the JSON data for further processing with the
SQL/JSON query functions.
The semantics of SQL/JSON path predicates and operators generally follow SQL. At the same time, to provide a natural way of working with JSON data, SQL/JSON path syntax uses some JavaScript conventions:
Dot (.
) is used for member access.
Square brackets ([]
) are used for array access.
SQL/JSON arrays are 0-relative, unlike regular SQL arrays that start from 1, as the example below illustrates.
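For instance, member access, array access, and 0-relative indexing can be seen with the jsonb_path_query function (illustrative; the query functions are covered in Section 9.16):

SELECT jsonb_path_query('{"a": [10, 20, 30]}', '$.a[0]');
-- Returns 10, the first (index 0) element of array "a"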
An SQL/JSON path expression is typically written in an SQL query as an
SQL character string literal, so it must be enclosed in single quotes,
and any single quotes desired within the value must be doubled
(see Section 4.1.2.1).
Some forms of path expressions require string literals within them.
These embedded string literals follow JavaScript/ECMAScript conventions:
they must be surrounded by double quotes, and backslash escapes may be
used within them to represent otherwise-hard-to-type characters.
In particular, the way to write a double quote within an embedded string
literal is \"
, and to write a backslash itself, you
must write \\
. Other special backslash sequences
include those recognized in JavaScript strings:
\b
,
\f
,
\n
,
\r
,
\t
,
\v
for various ASCII control characters,
\xNN for a character code written with only two hex digits,
\uNNNN for a Unicode character identified by its 4-hex-digit code point, and
\u{N...} for a Unicode character code point written with 1 to 6 hex digits.
A path expression consists of a sequence of path elements, which can be any of the following:
Path literals of JSON primitive types: Unicode text, numeric, true, false, or null.
Path variables listed in Table 8.24.
Accessor operators listed in Table 8.25.
jsonpath operators and methods listed in Section 9.16.2.2.
Parentheses, which can be used to provide filter expressions or define the order of path evaluation.
For details on using jsonpath expressions with SQL/JSON
query functions, see Section 9.16.2.
Table 8.24. jsonpath Variables

Variable | Description |
---|---|
$ | A variable representing the JSON value being queried (the context item). |
$varname | A named variable. Its value can be set by the parameter vars of several JSON processing functions; see Table 9.47 for details. |
@ | A variable representing the result of path evaluation in filter expressions. |
Table 8.25. jsonpath Accessors

Accessor Operator | Description |
---|---|
.key or ."$varname" | Member accessor that returns an object member with the specified key. If the key name matches some named variable starting with $, or does not meet the JavaScript rules for an identifier, it must be enclosed in double quotes to make it a string literal. |
.* | Wildcard member accessor that returns the values of all members located at the top level of the current object. |
.** | Recursive wildcard member accessor that processes all levels of the JSON hierarchy of the current object and returns all the member values, regardless of their nesting level. This is a PostgreSQL extension of the SQL/JSON standard. |
.**{level} or .**{start_level to end_level} | Like .**, but selects only the specified levels of the JSON hierarchy. Nesting levels are specified as integers; level zero corresponds to the current object. This is a PostgreSQL extension of the SQL/JSON standard. |
[subscript, ...] | Array element accessor. The specified subscript can be a single index or a range of indexes; index zero corresponds to the first array element. |
[*] | Wildcard array element accessor that returns all array elements. |
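For example, a path expression can be applied to a jsonb value with the jsonb_path_query function (described in Section 9.16.2); a minimal illustration:

-- return every element of "a" that is greater than 1
SELECT jsonb_path_query('{"a": [1, 2, 3]}', '$.a[*] ? (@ > 1)');
 jsonb_path_query
------------------
 2
 3
(2 rows)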
PostgreSQL allows columns of a table to be defined as variable-length multidimensional arrays. Arrays of any built-in or user-defined base type, enum type, composite type, range type, or domain can be created.
To illustrate the use of array types, we create this table:
CREATE TABLE sal_emp (
    name            text,
    pay_by_quarter  integer[],
    schedule        text[][]
);

As shown, an array data type is named by appending square brackets
([]) to the data type name of the array elements. The
above command will create a table named sal_emp with
a column of type text (name), a
one-dimensional array of type integer
(pay_by_quarter), which represents the
employee's salary by quarter, and a two-dimensional array of
text (schedule), which
represents the employee's weekly schedule.
The syntax for CREATE TABLE allows the exact size of
arrays to be specified, for example:

CREATE TABLE tictactoe (
    squares   integer[3][3]
);

However, the current implementation ignores any supplied array size limits, i.e., the behavior is the same as for arrays of unspecified length.
The current implementation does not enforce the declared
number of dimensions either. Arrays of a particular element type are
all considered to be of the same type, regardless of size or number
of dimensions. So, declaring the array size or number of dimensions in
CREATE TABLE is simply documentation; it does not
affect run-time behavior.
An alternative syntax, which conforms to the SQL standard by using
the keyword ARRAY, can be used for one-dimensional arrays.
pay_by_quarter could have been defined as:

    pay_by_quarter  integer ARRAY[4],

Or, if no array size is to be specified:

    pay_by_quarter  integer ARRAY,

As before, however, PostgreSQL does not enforce the size restriction in any case.
To write an array value as a literal constant, enclose the element values within curly braces and separate them by commas. (If you know C, this is not unlike the C syntax for initializing structures.) You can put double quotes around any element value, and must do so if it contains commas or curly braces. (More details appear below.) Thus, the general format of an array constant is the following:
'{ val1 delim val2 delim ... }'

where delim is the delimiter character
for the type, as recorded in its pg_type entry.
Among the standard data types provided in the
PostgreSQL distribution, all use a comma
(,), except for type box,
which uses a semicolon (;). Each val is
either a constant of the array element type, or a subarray. An example
of an array constant is:

'{{1,2,3},{4,5,6},{7,8,9}}'

This constant is a two-dimensional, 3-by-3 array consisting of three subarrays of integers.
To set an element of an array constant to NULL, write NULL
for the element value. (Any upper- or lower-case variant of
NULL will do.) If you want an actual string value
“NULL”, you must put double quotes around it.
(These kinds of array constants are actually only a special case of the generic type constants discussed in Section 4.1.2.7. The constant is initially treated as a string and passed to the array input conversion routine. An explicit type specification might be necessary.)
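For instance, a minimal illustration of the distinction between a null element and the string “NULL”:

-- the unquoted NULL is a null element; the quoted "NULL" is the string 'NULL'
SELECT ('{1,NULL,3}'::int[])[2] IS NULL;        -- t
SELECT ('{a,"NULL",c}'::text[])[2] = 'NULL';    -- t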
Now we can show some INSERT statements:

INSERT INTO sal_emp
    VALUES ('Bill',
    '{10000, 10000, 10000, 10000}',
    '{{"meeting", "lunch"}, {"training", "presentation"}}');

INSERT INTO sal_emp
    VALUES ('Carol',
    '{20000, 25000, 25000, 25000}',
    '{{"breakfast", "consulting"}, {"meeting", "lunch"}}');

The result of the previous two inserts looks like this:

SELECT * FROM sal_emp;
 name  |      pay_by_quarter       |                 schedule
-------+---------------------------+-------------------------------------------
 Bill  | {10000,10000,10000,10000} | {{meeting,lunch},{training,presentation}}
 Carol | {20000,25000,25000,25000} | {{breakfast,consulting},{meeting,lunch}}
(2 rows)

Multidimensional arrays must have matching extents for each dimension. A mismatch causes an error, for example:

INSERT INTO sal_emp
    VALUES ('Bill',
    '{10000, 10000, 10000, 10000}',
    '{{"meeting", "lunch"}, {"meeting"}}');
ERROR:  multidimensional arrays must have array expressions with matching dimensions

The ARRAY constructor syntax can also be used:

INSERT INTO sal_emp
    VALUES ('Bill',
    ARRAY[10000, 10000, 10000, 10000],
    ARRAY[['meeting', 'lunch'], ['training', 'presentation']]);

INSERT INTO sal_emp
    VALUES ('Carol',
    ARRAY[20000, 25000, 25000, 25000],
    ARRAY[['breakfast', 'consulting'], ['meeting', 'lunch']]);
Notice that the array elements are ordinary SQL constants or
expressions; for instance, string literals are single quoted, instead of
double quoted as they would be in an array literal. The ARRAY
constructor syntax is discussed in more detail in
Section 4.2.12.
Now, we can run some queries on the table. First, we show how to access a single element of an array. This query retrieves the names of the employees whose pay changed in the second quarter:
SELECT name FROM sal_emp WHERE pay_by_quarter[1] <> pay_by_quarter[2];

 name
-------
 Carol
(1 row)

The array subscript numbers are written within square brackets.
By default PostgreSQL uses a
one-based numbering convention for arrays, that is,
an array of n elements starts with array[1] and
ends with array[n].
This query retrieves the third quarter pay of all employees:

SELECT pay_by_quarter[3] FROM sal_emp;

 pay_by_quarter
----------------
          10000
          25000
(2 rows)
We can also access arbitrary rectangular slices of an array, or
subarrays. An array slice is denoted by writing
lower-bound:upper-bound
for one or more array dimensions. For example, this query retrieves the first
item on Bill's schedule for the first two days of the week:

SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill';

        schedule
------------------------
 {{meeting},{training}}
(1 row)
If any dimension is written as a slice, i.e., contains a colon, then all
dimensions are treated as slices. Any dimension that has only a single
number (no colon) is treated as being from 1
to the number specified. For example, [2] is treated as
[1:2], as in this example:

SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill';

                 schedule
-------------------------------------------
 {{meeting,lunch},{training,presentation}}
(1 row)

To avoid confusion with the non-slice case, it's best to use slice syntax
for all dimensions, e.g., [1:2][1:1], not [2][1:1].
It is possible to omit the lower-bound and/or
upper-bound of a slice specifier; the missing
bound is replaced by the lower or upper limit of the array's subscripts.
For example:

SELECT schedule[:2][2:] FROM sal_emp WHERE name = 'Bill';

         schedule
--------------------------
 {{lunch},{presentation}}
(1 row)

SELECT schedule[:][1:1] FROM sal_emp WHERE name = 'Bill';

        schedule
------------------------
 {{meeting},{training}}
(1 row)
An array subscript expression will return null if either the array itself or
any of the subscript expressions are null. Also, null is returned if a
subscript is outside the array bounds (this case does not raise an error).
For example, if schedule
currently has the dimensions [1:3][1:2], then referencing
schedule[3][3] yields NULL. Similarly, an array reference
with the wrong number of subscripts yields a null rather than an error.
An array slice expression likewise yields null if the array itself or any of the subscript expressions are null. However, in other cases such as selecting an array slice that is completely outside the current array bounds, a slice expression yields an empty (zero-dimensional) array instead of null. (This does not match non-slice behavior and is done for historical reasons.) If the requested slice partially overlaps the array bounds, then it is silently reduced to just the overlapping region instead of returning null.
The current dimensions of any array value can be retrieved with the
array_dims function:

SELECT array_dims(schedule) FROM sal_emp WHERE name = 'Carol';

 array_dims
------------
 [1:2][1:2]
(1 row)

array_dims produces a text result,
which is convenient for people to read but perhaps inconvenient
for programs. Dimensions can also be retrieved with
array_upper and array_lower,
which return the upper and lower bound of a
specified array dimension, respectively:

SELECT array_upper(schedule, 1) FROM sal_emp WHERE name = 'Carol';

 array_upper
-------------
           2
(1 row)

array_length will return the length of a specified
array dimension:

SELECT array_length(schedule, 1) FROM sal_emp WHERE name = 'Carol';

 array_length
--------------
            2
(1 row)

cardinality returns the total number of elements in an
array across all dimensions. It is effectively the number of rows a call to
unnest would yield:

SELECT cardinality(schedule) FROM sal_emp WHERE name = 'Carol';

 cardinality
-------------
           4
(1 row)
An array value can be replaced completely:

UPDATE sal_emp SET pay_by_quarter = '{25000,25000,27000,27000}'
    WHERE name = 'Carol';

or using the ARRAY expression syntax:

UPDATE sal_emp SET pay_by_quarter = ARRAY[25000,25000,27000,27000]
    WHERE name = 'Carol';

An array can also be updated at a single element:

UPDATE sal_emp SET pay_by_quarter[4] = 15000
    WHERE name = 'Bill';

or updated in a slice:

UPDATE sal_emp SET pay_by_quarter[1:2] = '{27000,27000}'
    WHERE name = 'Carol';

The slice syntaxes with omitted lower-bound and/or
upper-bound can be used too, but only when
updating an array value that is not NULL or zero-dimensional (otherwise,
there is no existing subscript limit to substitute).
A stored array value can be enlarged by assigning to elements not already
present. Any positions between those previously present and the newly
assigned elements will be filled with nulls. For example, if array
myarray currently has 4 elements, it will have six
elements after an update that assigns to myarray[6];
myarray[5] will contain null.
Currently, enlargement in this fashion is only allowed for one-dimensional
arrays, not multidimensional arrays.
Subscripted assignment allows creation of arrays that do not use one-based
subscripts. For example one might assign to myarray[-2:7] to
create an array with subscript values from -2 to 7.
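As a minimal sketch (the throwaway table arr_demo is illustrative; its integer[] column starts out NULL, and a slice assignment with explicit bounds should establish those bounds):

CREATE TEMP TABLE arr_demo (vals integer[]);
INSERT INTO arr_demo DEFAULT VALUES;            -- vals is initially NULL

-- assigning to the slice [-2:7] creates a ten-element array
UPDATE arr_demo SET vals[-2:7] = '{1,2,3,4,5,6,7,8,9,10}';

SELECT array_dims(vals) FROM arr_demo;          -- [-2:7]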
New array values can also be constructed using the concatenation operator,
||:

SELECT ARRAY[1,2] || ARRAY[3,4];
 ?column?
-----------
 {1,2,3,4}
(1 row)

SELECT ARRAY[5,6] || ARRAY[[1,2],[3,4]];
      ?column?
---------------------
 {{5,6},{1,2},{3,4}}
(1 row)

The concatenation operator allows a single element to be pushed onto the
beginning or end of a one-dimensional array. It also accepts two
N-dimensional arrays, or an N-dimensional
and an N+1-dimensional array.
When a single element is pushed onto either the beginning or end of a one-dimensional array, the result is an array with the same lower bound subscript as the array operand. For example:
SELECT array_dims(1 || '[0:1]={2,3}'::int[]);
 array_dims
------------
 [0:2]
(1 row)

SELECT array_dims(ARRAY[1,2] || 3);
 array_dims
------------
 [1:3]
(1 row)
When two arrays with an equal number of dimensions are concatenated, the result retains the lower bound subscript of the left-hand operand's outer dimension. The result is an array comprising every element of the left-hand operand followed by every element of the right-hand operand. For example:
SELECT array_dims(ARRAY[1,2] || ARRAY[3,4,5]);
 array_dims
------------
 [1:5]
(1 row)

SELECT array_dims(ARRAY[[1,2],[3,4]] || ARRAY[[5,6],[7,8],[9,0]]);
 array_dims
------------
 [1:5][1:2]
(1 row)
When an N-dimensional array is pushed onto the beginning
or end of an N+1-dimensional array, the result is
analogous to the element-array case above. Each N-dimensional
sub-array is essentially an element of the N+1-dimensional
array's outer dimension. For example:

SELECT array_dims(ARRAY[1,2] || ARRAY[[3,4],[5,6]]);
 array_dims
------------
 [1:3][1:2]
(1 row)
An array can also be constructed by using the functions
array_prepend, array_append,
or array_cat. The first two only support one-dimensional
arrays, but array_cat supports multidimensional arrays.
Some examples:

SELECT array_prepend(1, ARRAY[2,3]);
 array_prepend
---------------
 {1,2,3}
(1 row)

SELECT array_append(ARRAY[1,2], 3);
 array_append
--------------
 {1,2,3}
(1 row)

SELECT array_cat(ARRAY[1,2], ARRAY[3,4]);
 array_cat
-----------
 {1,2,3,4}
(1 row)

SELECT array_cat(ARRAY[[1,2],[3,4]], ARRAY[5,6]);
      array_cat
---------------------
 {{1,2},{3,4},{5,6}}
(1 row)

SELECT array_cat(ARRAY[5,6], ARRAY[[1,2],[3,4]]);
      array_cat
---------------------
 {{5,6},{1,2},{3,4}}
In simple cases, the concatenation operator discussed above is preferred over direct use of these functions. However, because the concatenation operator is overloaded to serve all three cases, there are situations where use of one of the functions is helpful to avoid ambiguity. For example consider:
SELECT ARRAY[1, 2] || '{3, 4}';  -- the untyped literal is taken as an array
 ?column?
-----------
 {1,2,3,4}

SELECT ARRAY[1, 2] || '7';                 -- so is this one
ERROR:  malformed array literal: "7"

SELECT ARRAY[1, 2] || NULL;                -- so is an undecorated NULL
 ?column?
----------
 {1,2}
(1 row)

SELECT array_append(ARRAY[1, 2], NULL);    -- this might have been meant
 array_append
--------------
 {1,2,NULL}
In the examples above, the parser sees an integer array on one side of the
concatenation operator, and a constant of undetermined type on the other.
The heuristic it uses to resolve the constant's type is to assume it's of
the same type as the operator's other input — in this case,
integer array. So the concatenation operator is presumed to
represent array_cat, not array_append. When
that's the wrong choice, it could be fixed by casting the constant to the
array's element type; but explicit use of array_append might
be a preferable solution.
To search for a value in an array, each value must be checked. This can be done manually, if you know the size of the array. For example:
SELECT * FROM sal_emp WHERE pay_by_quarter[1] = 10000 OR
                            pay_by_quarter[2] = 10000 OR
                            pay_by_quarter[3] = 10000 OR
                            pay_by_quarter[4] = 10000;
However, this quickly becomes tedious for large arrays, and is not helpful if the size of the array is unknown. An alternative method is described in Section 9.24. The above query could be replaced by:
SELECT * FROM sal_emp WHERE 10000 = ANY (pay_by_quarter);
In addition, you can find rows where the array has all values equal to 10000 with:
SELECT * FROM sal_emp WHERE 10000 = ALL (pay_by_quarter);
Alternatively, the generate_subscripts function can be used.
For example:

SELECT * FROM
   (SELECT pay_by_quarter,
           generate_subscripts(pay_by_quarter, 1) AS s
      FROM sal_emp) AS foo
 WHERE pay_by_quarter[s] = 10000;
This function is described in Table 9.64.
You can also search an array using the && operator,
which checks whether the left operand overlaps with the right operand.
For instance:

SELECT * FROM sal_emp WHERE pay_by_quarter && ARRAY[10000];
This and other array operators are further described in Section 9.19. It can be accelerated by an appropriate index, as described in Section 11.2.
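For example, a GIN index is one possible way to accelerate such searches (a minimal sketch; the index name is illustrative, and GIN's default operator class for arrays supports &&, @>, <@, and =):

CREATE INDEX sal_emp_pay_idx ON sal_emp USING GIN (pay_by_quarter);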
You can also search for specific values in an array using the array_position
and array_positions functions. The former returns the subscript of
the first occurrence of a value in an array; the latter returns an array with the
subscripts of all occurrences of the value in the array. For example:

SELECT array_position(ARRAY['sun','mon','tue','wed','thu','fri','sat'], 'mon');
 array_position
----------------
              2
(1 row)

SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1);
 array_positions
-----------------
 {1,4,8}
(1 row)
Arrays are not sets; searching for specific array elements can be a sign of database misdesign. Consider using a separate table with a row for each item that would be an array element. This will be easier to search, and is likely to scale better for a large number of elements.
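As a hedged sketch of that normalized alternative (the table and column names here are illustrative, not part of the examples above):

-- one row per employee and quarter instead of an integer[] column
CREATE TABLE emp_pay (
    name    text,
    quarter integer,
    pay     integer,
    PRIMARY KEY (name, quarter)
);

-- the array search "10000 = ANY (pay_by_quarter)" becomes an ordinary lookup
SELECT DISTINCT name FROM emp_pay WHERE pay = 10000;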
The external text representation of an array value consists of items that
are interpreted according to the I/O conversion rules for the array's
element type, plus decoration that indicates the array structure.
The decoration consists of curly braces ({ and })
around the array value plus delimiter characters between adjacent items.
The delimiter character is usually a comma (,) but can be
something else: it is determined by the typdelim setting
for the array's element type. Among the standard data types provided
in the PostgreSQL distribution, all use a comma,
except for type box, which uses a semicolon (;).
In a multidimensional array, each dimension (row, plane,
cube, etc.) gets its own level of curly braces, and delimiters
must be written between adjacent curly-braced entities of the same level.
The array output routine will put double quotes around element values
if they are empty strings, contain curly braces, delimiter characters,
double quotes, backslashes, or white space, or match the word
NULL. Double quotes and backslashes
embedded in element values will be backslash-escaped. For numeric
data types it is safe to assume that double quotes will never appear, but
for textual data types one should be prepared to cope with either the presence
or absence of quotes.
By default, the lower bound index value of an array's dimensions is
set to one. To represent arrays with other lower bounds, the array
subscript ranges can be specified explicitly before writing the
array contents.
This decoration consists of square brackets ([])
around each array dimension's lower and upper bounds, with
a colon (:) delimiter character in between. The
array dimension decoration is followed by an equal sign (=).
For example:

SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2
 FROM (SELECT '[1:1][-2:-1][3:5]={{{1,2,3},{4,5,6}}}'::int[] AS f1) AS ss;

 e1 | e2
----+----
  1 |  6
(1 row)
The array output routine will include explicit dimensions in its result only when there are one or more lower bounds different from one.
If the value written for an element is NULL (in any case
variant), the element is taken to be NULL. The presence of any quotes
or backslashes disables this and allows the literal string value
“NULL” to be entered. Also, for backward compatibility with
pre-8.2 versions of PostgreSQL, the array_nulls configuration parameter can be turned
off to suppress recognition of NULL as a NULL.
As shown previously, when writing an array value you can use double
quotes around any individual array element. You must do so
if the element value would otherwise confuse the array-value parser.
For example, elements containing curly braces, commas (or the data type's
delimiter character), double quotes, backslashes, or leading or trailing
whitespace must be double-quoted. Empty strings and strings matching the
word NULL must be quoted, too. To put a double
quote or backslash in a quoted array element value, precede it
with a backslash. Alternatively, you can avoid quotes and use
backslash-escaping to protect all data characters that would otherwise
be taken as array syntax.
You can add whitespace before a left brace or after a right brace. You can also add whitespace before or after any individual item string. In all of these cases the whitespace will be ignored. However, whitespace within double-quoted elements, or surrounded on both sides by non-whitespace characters of an element, is not ignored.
The ARRAY constructor syntax (see
Section 4.2.12) is often easier to work
with than the array-literal syntax when writing array values in SQL
commands. In ARRAY, individual element values are written the
same way they would be written when not members of an array.
A composite type represents the structure of a row or record; it is essentially just a list of field names and their data types. PostgreSQL allows composite types to be used in many of the same ways that simple types can be used. For example, a column of a table can be declared to be of a composite type.
Here are two simple examples of defining composite types:
CREATE TYPE complex AS (
    r       double precision,
    i       double precision
);

CREATE TYPE inventory_item AS (
    name            text,
    supplier_id     integer,
    price           numeric
);

The syntax is comparable to CREATE TABLE, except that only
field names and types can be specified; no constraints (such as NOT
NULL) can presently be included. Note that the AS keyword
is essential; without it, the system will think a different kind
of CREATE TYPE command is meant, and you will get odd syntax
errors.
Having defined the types, we can use them to create tables:

CREATE TABLE on_hand (
    item      inventory_item,
    count     integer
);

INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);

or functions:

CREATE FUNCTION price_extension(inventory_item, integer) RETURNS numeric
AS 'SELECT $1.price * $2' LANGUAGE SQL;

SELECT price_extension(item, 10) FROM on_hand;
Whenever you create a table, a composite type is also automatically created, with the same name as the table, to represent the table's row type. For example, had we said:
CREATE TABLE inventory_item (
    name            text,
    supplier_id     integer REFERENCES suppliers,
    price           numeric CHECK (price > 0)
);

then the same inventory_item composite type shown above would
come into being as a
byproduct, and could be used just as above. Note however an important
restriction of the current implementation: since no constraints are
associated with a composite type, the constraints shown in the table
definition do not apply to values of the composite type
outside the table. (To work around this, create a domain over the composite
type, and apply the desired constraints as CHECK
constraints of the domain.)
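A minimal sketch of that workaround (the domain name checked_item is illustrative):

-- a domain over the composite type carries the constraint wherever it is used
CREATE DOMAIN checked_item AS inventory_item
    CHECK ((VALUE).price > 0);

SELECT '("fuzzy dice",42,-1)'::checked_item;
-- ERROR:  value for domain checked_item violates check constraint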
To write a composite value as a literal constant, enclose the field values within parentheses and separate them by commas. You can put double quotes around any field value, and must do so if it contains commas or parentheses. (More details appear below.) Thus, the general format of a composite constant is the following:
'( val1 , val2 , ... )'

An example is:

'("fuzzy dice",42,1.99)'

which would be a valid value of the inventory_item type
defined above. To make a field be NULL, write no characters at all
in its position in the list. For example, this constant specifies
a NULL third field:

'("fuzzy dice",42,)'

If you want an empty string rather than NULL, write double quotes:

'("",42,)'

Here the first field is a non-NULL empty string, the third is NULL.
(These constants are actually only a special case of the generic type constants discussed in Section 4.1.2.7. The constant is initially treated as a string and passed to the composite-type input conversion routine. An explicit type specification might be necessary to tell which type to convert the constant to.)
The ROW expression syntax can also be used to
construct composite values. In most cases this is considerably
simpler to use than the string-literal syntax since you don't have
to worry about multiple layers of quoting. We already used this
method above:

ROW('fuzzy dice', 42, 1.99)
ROW('', 42, NULL)

The ROW keyword is actually optional as long as you have more than one field in the expression, so these can be simplified to:

('fuzzy dice', 42, 1.99)
('', 42, NULL)

The ROW expression syntax is discussed in more detail in Section 4.2.13.
To access a field of a composite column, one writes a dot and the field
name, much like selecting a field from a table name. In fact, it's so
much like selecting from a table name that you often have to use parentheses
to keep from confusing the parser. For example, you might try to select
some subfields from our on_hand example table with something
like:

SELECT item.name FROM on_hand WHERE item.price > 9.99;

This will not work since the name item is taken to be a table
name, not a column name of on_hand, per SQL syntax rules.
You must write it like this:

SELECT (item).name FROM on_hand WHERE (item).price > 9.99;

or if you need to use the table name as well (for instance in a multitable query), like this:

SELECT (on_hand.item).name FROM on_hand WHERE (on_hand.item).price > 9.99;

Now the parenthesized object is correctly interpreted as a reference to
the item column, and then the subfield can be selected from it.
Similar syntactic issues apply whenever you select a field from a composite value. For instance, to select just one field from the result of a function that returns a composite value, you'd need to write something like:
SELECT (my_func(...)).field FROM ...
Without the extra parentheses, this will generate a syntax error.
The special field name * means “all fields”, as
further explained in Section 8.16.5.
Here are some examples of the proper syntax for inserting and updating composite columns. First, inserting or updating a whole column:
INSERT INTO mytab (complex_col) VALUES((1.1,2.2));
UPDATE mytab SET complex_col = ROW(1.1,2.2) WHERE ...;

The first example omits ROW, the second uses it; we
could have done it either way.
We can update an individual subfield of a composite column:

UPDATE mytab SET complex_col.r = (complex_col).r + 1 WHERE ...;

Notice here that we don't need to (and indeed cannot)
put parentheses around the column name appearing just after
SET, but we do need parentheses when referencing the same
column in the expression to the right of the equal sign.
And we can specify subfields as targets for INSERT, too:

INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2);
Had we not supplied values for all the subfields of the column, the remaining subfields would have been filled with null values.
There are various special syntax rules and behaviors associated with composite types in queries. These rules provide useful shortcuts, but can be confusing if you don't know the logic behind them.
In PostgreSQL, a reference to a table name (or alias)
in a query is effectively a reference to the composite value of the
table's current row. For example, if we had a table
inventory_item as shown
above, we could write:

SELECT c FROM inventory_item c;

This query produces a single composite-valued column, so we might get output like:

           c
------------------------
 ("fuzzy dice",42,1.99)
(1 row)

Note however that simple names are matched to column names before table
names, so this example works only because there is no column
named c in the query's tables.
The ordinary qualified-column-name
syntax table_name.column_name
can be understood as applying field
selection to the composite value of the table's current row.
(For efficiency reasons, it's not actually implemented that way.)
When we write

SELECT c.* FROM inventory_item c;

then, according to the SQL standard, we should get the contents of the table expanded into separate columns:

    name    | supplier_id | price
------------+-------------+-------
 fuzzy dice |          42 |  1.99
(1 row)

as if the query were

SELECT c.name, c.supplier_id, c.price FROM inventory_item c;
PostgreSQL will apply this expansion behavior to
any composite-valued expression, although as shown above, you need to write parentheses
around the value that .* is applied to whenever it's not a
simple table name. For example, if myfunc() is a function
returning a composite type with columns a,
b, and c, then these two queries have the
same result:

SELECT (myfunc(x)).* FROM some_table;
SELECT (myfunc(x)).a, (myfunc(x)).b, (myfunc(x)).c FROM some_table;

PostgreSQL handles column expansion by
actually transforming the first form into the second. So, in this
example, myfunc() would get invoked three times per row
with either syntax. If it's an expensive function you may wish to
avoid that, which you can do with a query like:

SELECT m.* FROM some_table, LATERAL myfunc(x) AS m;

Placing the function in
a LATERAL FROM item keeps it from
being invoked more than once per row. m.* is still
expanded into m.a, m.b, m.c, but now those variables
are just references to the output of the FROM item.
(The LATERAL keyword is optional here, but we show it
to clarify that the function is getting x
from some_table.)
The composite_value.* syntax results in
column expansion of this kind when it appears at the top level of
a SELECT output
list, a RETURNING
list in INSERT/UPDATE/DELETE,
a VALUES clause, or
a row constructor.
In all other contexts (including when nested inside one of those
constructs), attaching .* to a composite value does not
change the value, since it means “all columns” and so the
same composite value is produced again. For example,
if somefunc() accepts a composite-valued argument,
these queries are the same:

SELECT somefunc(c.*) FROM inventory_item c;
SELECT somefunc(c) FROM inventory_item c;

In both cases, the current row of inventory_item is
passed to the function as a single composite-valued argument.
Even though .* does nothing in such cases, using it is good
style, since it makes clear that a composite value is intended. In
particular, the parser will consider c in c.* to
refer to a table name or alias, not to a column name, so that there is
no ambiguity; whereas without .*, it is not clear
whether c means a table name or a column name, and in fact
the column-name interpretation will be preferred if there is a column
named c.
Another example demonstrating these concepts is that all these queries mean the same thing:
SELECT * FROM inventory_item c ORDER BY c;
SELECT * FROM inventory_item c ORDER BY c.*;
SELECT * FROM inventory_item c ORDER BY ROW(c.*);

All of these ORDER BY clauses specify the row's composite
value, resulting in sorting the rows according to the rules described
in Section 9.24.6. However,
if inventory_item contained a column
named c, the first case would be different from the
others, as it would mean to sort by that column only. Given the column
names previously shown, these queries are also equivalent to those above:

SELECT * FROM inventory_item c ORDER BY ROW(c.name, c.supplier_id, c.price);
SELECT * FROM inventory_item c ORDER BY (c.name, c.supplier_id, c.price);

(The last case uses a row constructor with the key word ROW omitted.)
Another special syntactical behavior associated with composite values is
that we can use functional notation for extracting a field
of a composite value. The simple way to explain this is that
the notations field(table)
and table.field
are interchangeable. For example, these queries are equivalent:

SELECT c.name FROM inventory_item c WHERE c.price > 1000;
SELECT name(c) FROM inventory_item c WHERE price(c) > 1000;

Moreover, if we have a function that accepts a single argument of a composite type, we can call it with either notation. These queries are all equivalent:

SELECT somefunc(c) FROM inventory_item c;
SELECT somefunc(c.*) FROM inventory_item c;
SELECT c.somefunc FROM inventory_item c;
This equivalence between functional notation and field notation
makes it possible to use functions on composite types to implement
“computed fields”.
An application using the last query above wouldn't need to be directly
aware that somefunc isn't a real column of the table.
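For example, a minimal sketch of such a computed field on inventory_item (the function name price_with_tax and the 10% rate are illustrative):

CREATE FUNCTION price_with_tax(inventory_item) RETURNS numeric
AS 'SELECT $1.price * 1.10' LANGUAGE SQL;

-- field notation makes the function look like an ordinary column
SELECT c.name, c.price_with_tax FROM inventory_item c;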
Because of this behavior, it's unwise to give a function that takes a
single composite-type argument the same name as any of the fields of
that composite type. If there is ambiguity, the field-name
interpretation will be chosen if field-name syntax is used, while the
function will be chosen if function-call syntax is used. However,
PostgreSQL versions before 11 always chose the
field-name interpretation, unless the syntax of the call required it to
be a function call. One way to force the function interpretation in
older versions is to schema-qualify the function name, that is, write
schema.func(compositevalue).
The external text representation of a composite value consists of items that
are interpreted according to the I/O conversion rules for the individual
field types, plus decoration that indicates the composite structure.
The decoration consists of parentheses (( and ))
around the whole value, plus commas (,) between adjacent
items. Whitespace outside the parentheses is ignored, but within the
parentheses it is considered part of the field value, and might or might not be
significant depending on the input conversion rules for the field data type.
For example, in:

'( 42)'

the whitespace will be ignored if the field type is integer, but not if it is text.
As shown previously, when writing a composite value you can write double quotes around any individual field value. You must do so if the field value would otherwise confuse the composite-value parser. In particular, fields containing parentheses, commas, double quotes, or backslashes must be double-quoted. To put a double quote or backslash in a quoted composite field value, precede it with a backslash. (Also, a pair of double quotes within a double-quoted field value is taken to represent a double quote character, analogously to the rules for single quotes in SQL literal strings.) Alternatively, you can avoid quoting and use backslash-escaping to protect all data characters that would otherwise be taken as composite syntax.
A completely empty field value (no characters at all between the commas
or parentheses) represents a NULL. To write a value that is an empty
string rather than NULL, write "".
The composite output routine will put double quotes around field values if they are empty strings or contain parentheses, commas, double quotes, backslashes, or white space. (Doing so for white space is not essential, but aids legibility.) Double quotes and backslashes embedded in field values will be doubled.
Remember that what you write in an SQL command will first be interpreted
as a string literal, and then as a composite. This doubles the number of
backslashes you need (assuming escape string syntax is used).
For example, to insert a text field
containing a double quote and a backslash in a composite
value, you'd need to write:

INSERT ... VALUES ('("\"\\")');

The string-literal processor removes one level of backslashes, so that
what arrives at the composite-value parser looks like
("\"\\"). In turn, the string
fed to the text data type's input routine
becomes "\. (If we were working
with a data type whose input routine also treated backslashes specially,
bytea for example, we might need as many as eight backslashes
in the command to get one backslash into the stored composite field.)
Dollar quoting (see Section 4.1.2.4) can be
used to avoid the need to double backslashes.
The ROW constructor syntax is usually easier to work with
than the composite-literal syntax when writing composite values in SQL
commands.
In ROW, individual field values are written the same way
they would be written when not members of a composite.
Range types are data types representing a range of values of some
element type (called the range's subtype).
For instance, ranges
of timestamp might be used to represent the ranges of
time that a meeting room is reserved. In this case the data type
is tsrange (short for “timestamp range”),
and timestamp is the subtype. The subtype must have
a total order so that it is well-defined whether element values are
within, before, or after a range of values.
Range types are useful because they represent many element values in a single range value, and because concepts such as overlapping ranges can be expressed clearly. The use of time and date ranges for scheduling purposes is the clearest example; but price ranges, measurement ranges from an instrument, and so forth can also be useful.
Every range type has a corresponding multirange type. A multirange is an ordered list of non-contiguous, non-empty, non-null ranges. Most range operators also work on multiranges, and they have a few functions of their own.
PostgreSQL comes with the following built-in range types:
int4range — Range of integer, int4multirange — corresponding Multirange
int8range — Range of bigint, int8multirange — corresponding Multirange
numrange — Range of numeric, nummultirange — corresponding Multirange
tsrange — Range of timestamp without time zone, tsmultirange — corresponding Multirange
tstzrange — Range of timestamp with time zone, tstzmultirange — corresponding Multirange
daterange — Range of date, datemultirange — corresponding Multirange
In addition, you can define your own range types; see CREATE TYPE for more information.
CREATE TABLE reservation (room int, during tsrange);
INSERT INTO reservation VALUES
    (1108, '[2010-01-01 14:30, 2010-01-01 15:30)');

-- Containment
SELECT int4range(10, 20) @> 3;

-- Overlaps
SELECT numrange(11.1, 22.2) && numrange(20.0, 30.0);

-- Extract the upper bound
SELECT upper(int8range(15, 25));

-- Compute the intersection
SELECT int4range(10, 20) * int4range(15, 25);

-- Is the range empty?
SELECT isempty(numrange(1, 5));
See Table 9.53 and Table 9.55 for complete lists of operators and functions on range types.
Every non-empty range has two bounds, the lower bound and the upper bound. All points between these values are included in the range. An inclusive bound means that the boundary point itself is included in the range as well, while an exclusive bound means that the boundary point is not included in the range.
In the text form of a range, an inclusive lower bound is represented by
“[” while an exclusive lower bound is
represented by “(”. Likewise, an inclusive upper bound is represented by
“]”, while an exclusive upper bound is
represented by “)”.
(See Section 8.17.5 for more details.)
The functions lower_inc and upper_inc
test the inclusivity of the lower
and upper bounds of a range value, respectively.
The lower bound of a range can be omitted, meaning that all
values less than the upper bound are included in the range, e.g.,
(,3]. Likewise, if the upper bound of the range
is omitted, then all values greater than the lower bound are included
in the range. If both lower and upper bounds are omitted, all values
of the element type are considered to be in the range. Specifying a
missing bound as inclusive is automatically converted to exclusive,
e.g., [,] is converted to (,).
You can think of these missing values as +/-infinity, but they are
special range type values and are considered to be beyond any range
element type's +/-infinity values.
Element types that have the notion of “infinity” can
use them as explicit bound values. For example, with timestamp
ranges, [today,infinity) excludes the special
timestamp value infinity,
while [today,infinity] includes it, as do
[today,) and [today,].
The functions lower_inf and upper_inf
test for infinite lower
and upper bounds of a range, respectively.
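For instance, applied to a range with an omitted lower bound:

SELECT lower_inf('(,3]'::int4range) AS lower_is_infinite,
       upper_inf('(,3]'::int4range) AS upper_is_infinite;
 lower_is_infinite | upper_is_infinite
-------------------+-------------------
 t                 | f
(1 row)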
The input for a range value must follow one of the following patterns:

(lower-bound,upper-bound)
(lower-bound,upper-bound]
[lower-bound,upper-bound)
[lower-bound,upper-bound]
empty

The parentheses or brackets indicate whether the lower and upper bounds
are exclusive or inclusive, as described previously.
Notice that the final pattern is empty, which
represents an empty range (a range that contains no points).
The lower-bound may be either a string
that is valid input for the subtype, or empty to indicate no
lower bound. Likewise, upper-bound may be
either a string that is valid input for the subtype, or empty to
indicate no upper bound.
Each bound value can be quoted using " (double quote)
characters. This is necessary if the bound value contains parentheses,
brackets, commas, double quotes, or backslashes, since these characters
would otherwise be taken as part of the range syntax. To put a double
quote or backslash in a quoted bound value, precede it with a
backslash. (Also, a pair of double quotes within a double-quoted bound
value is taken to represent a double quote character, analogously to the
rules for single quotes in SQL literal strings.) Alternatively, you can
avoid quoting and use backslash-escaping to protect all data characters
that would otherwise be taken as range syntax. Also, to write a bound
value that is an empty string, write "", since writing
nothing means an infinite bound.
Whitespace is allowed before and after the range value, but any whitespace between the parentheses or brackets is taken as part of the lower or upper bound value. (Depending on the element type, it might or might not be significant.)
These rules are very similar to those for writing field values in composite-type literals. See Section 8.16.6 for additional commentary.
Examples:
-- includes 3, does not include 7, and does include all points in between
SELECT '[3,7)'::int4range;

-- does not include either 3 or 7, but includes all points in between
SELECT '(3,7)'::int4range;

-- includes only the single point 4
SELECT '[4,4]'::int4range;

-- includes no points (and will be normalized to 'empty')
SELECT '[4,4)'::int4range;

The input for a multirange is curly brackets ({ and
}) containing zero or more valid ranges,
separated by commas. Whitespace is permitted around the brackets and
commas. This is intended to be reminiscent of array syntax, although
multiranges are much simpler: they have just one dimension and there is
no need to quote their contents. (The bounds of their ranges may be
quoted as above however.)
Examples:

SELECT '{}'::int4multirange;
SELECT '{[3,7)}'::int4multirange;
SELECT '{[3,7), [8,9)}'::int4multirange;
Each range type has a constructor function with the same name as the range
type. Using the constructor function is frequently more convenient than
writing a range literal constant, since it avoids the need for extra
quoting of the bound values. The constructor function
accepts two or three arguments. The two-argument form constructs a range
in standard form (lower bound inclusive, upper bound exclusive), while
the three-argument form constructs a range with bounds of the form
specified by the third argument.
The third argument must be one of the strings
“()”, “(]”, “[)”, or “[]”.
For example:

-- The full form is: lower bound, upper bound, and text argument indicating
-- inclusivity/exclusivity of bounds.
SELECT numrange(1.0, 14.0, '(]');

-- If the third argument is omitted, '[)' is assumed.
SELECT numrange(1.0, 14.0);

-- Although '(]' is specified here, on display the value will be converted to
-- canonical form, since int8range is a discrete range type (see below).
SELECT int8range(1, 14, '(]');

-- Using NULL for either bound causes the range to be unbounded on that side.
SELECT numrange(NULL, 2.2);
Each range type also has a multirange constructor with the same name as the multirange type. The constructor function takes zero or more arguments which are all ranges of the appropriate type. For example:
SELECT nummultirange();
SELECT nummultirange(numrange(1.0, 14.0));
SELECT nummultirange(numrange(1.0, 14.0), numrange(20.0, 25.0));
A discrete range is one whose element type has a well-defined
“step”, such as integer or date.
In these types two elements can be said to be adjacent, when there are
no valid values between them. This contrasts with continuous ranges,
where it's always (or almost always) possible to identify other element
values between two given values. For example, a range over the
numeric type is continuous, as is a range over timestamp.
(Even though timestamp has limited precision, and so could
theoretically be treated as discrete, it's better to consider it continuous
since the step size is normally not of interest.)
Another way to think about a discrete range type is that there is a clear
idea of a “next” or “previous” value for each element value.
Knowing that, it is possible to convert between inclusive and exclusive
representations of a range's bounds, by choosing the next or previous
element value instead of the one originally given.
For example, in an integer range type [4,8] and
(3,9) denote the same set of values; but this would not be so
for a range over numeric.
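For instance:

-- int4range is discrete, so both inputs canonicalize to [4,9) and compare equal
SELECT '[4,8]'::int4range = '(3,9)'::int4range;    -- t

-- numrange is continuous, so the analogous ranges are different sets
SELECT '[4,8]'::numrange = '(3,9)'::numrange;      -- f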
A discrete range type should have a canonicalization function that is aware of the desired step size for the element type. The canonicalization function is charged with converting equivalent values of the range type to have identical representations, in particular consistently inclusive or exclusive bounds. If a canonicalization function is not specified, then ranges with different formatting will always be treated as unequal, even though they might represent the same set of values in reality.
The built-in range types int4range, int8range,
and daterange all use a canonical form that includes
the lower bound and excludes the upper bound; that is,
[). User-defined range types can use other conventions,
however.
Users can define their own range types. The most common reason to do
this is to use ranges over subtypes not provided among the built-in
range types.
For example, to define a new range type of subtype float8:

CREATE TYPE floatrange AS RANGE (
    subtype = float8,
    subtype_diff = float8mi
);

SELECT '[1.234, 5.678]'::floatrange;

Because float8 has no meaningful
“step”, we do not define a canonicalization
function in this example.
When you define your own range you automatically get a corresponding multirange type.
Defining your own range type also allows you to specify a different subtype B-tree operator class or collation to use, so as to change the sort ordering that determines which values fall into a given range.
If the subtype is considered to have discrete rather than continuous
values, the CREATE TYPE command should specify a
canonical function.
The canonicalization function takes an input range value, and must return
an equivalent range value that may have different bounds and formatting.
The canonical output for two ranges that represent the same set of values,
for example the integer ranges [1, 7] and [1, 8),
must be identical. It doesn't matter which representation
you choose to be the canonical one, so long as two equivalent values with
different formattings are always mapped to the same value with the same
formatting. In addition to adjusting the inclusive/exclusive bounds
format, a canonicalization function might round off boundary values, in
case the desired step size is larger than what the subtype is capable of
storing. For instance, a range type over timestamp could be
defined to have a step size of an hour, in which case the canonicalization
function would need to round off bounds that weren't a multiple of an hour,
or perhaps throw an error instead.
In addition, any range type that is meant to be used with GiST or SP-GiST
indexes should define a subtype difference, or subtype_diff,
function. (The index will still work without subtype_diff,
but it is likely to be considerably less efficient than if a difference
function is provided.) The subtype difference function takes two input
values of the subtype, and returns their difference
(i.e., X minus Y) represented as
a float8 value. In our example above, the
function float8mi that underlies the regular float8
minus operator can be used; but for any other subtype, some type
conversion would be necessary. Some creative thought about how to
represent differences as numbers might be needed, too. To the greatest
extent possible, the subtype_diff function should agree with
the sort ordering implied by the selected operator class and collation;
that is, its result should be positive whenever its first argument is
greater than its second according to the sort ordering.
A less-oversimplified example of a subtype_diff function is:

CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS
'SELECT EXTRACT(EPOCH FROM (x - y))' LANGUAGE sql STRICT IMMUTABLE;

CREATE TYPE timerange AS RANGE (
    subtype = time,
    subtype_diff = time_subtype_diff
);

SELECT '[11:10, 23:00]'::timerange;
See CREATE TYPE for more information about creating range types.
GiST and SP-GiST indexes can be created for table columns of range types. GiST indexes can be also created for table columns of multirange types. For instance, to create a GiST index:
CREATE INDEX reservation_idx ON reservation USING GIST (during);
A GiST or SP-GiST index on ranges can accelerate queries involving these
range operators: =, &&, <@, @>, <<, >>, -|-, &<, and &>.
A GiST index on multiranges can accelerate queries involving the same
set of multirange operators.
A GiST index on ranges and GiST index on multiranges can also accelerate
queries involving these cross-type range to multirange and multirange to
range operators correspondingly: &&, <@, @>, <<, >>, -|-, &<, and &>.
See Table 9.53 for more information.
In addition, B-tree and hash indexes can be created for table columns of
range types. For these index types, basically the only useful range
operation is equality. There is a B-tree sort ordering defined for range
values, with corresponding < and > operators,
but the ordering is rather arbitrary and not usually useful in the real
world. Range types' B-tree and hash support is primarily meant to
allow sorting and hashing internally in queries, rather than creation of
actual indexes.
While UNIQUE is a natural constraint for scalar
values, it is usually unsuitable for range types. Instead, an
exclusion constraint is often more appropriate
(see CREATE TABLE ... CONSTRAINT ... EXCLUDE). Exclusion constraints allow the
specification of constraints such as “non-overlapping” on a
range type. For example:

CREATE TABLE reservation (
    during tsrange,
    EXCLUDE USING GIST (during WITH &&)
);
That constraint will prevent any overlapping values from existing in the table at the same time:
INSERT INTO reservation VALUES
    ('[2010-01-01 11:30, 2010-01-01 15:00)');
INSERT 0 1

INSERT INTO reservation VALUES
    ('[2010-01-01 14:45, 2010-01-01 15:45)');
ERROR:  conflicting key value violates exclusion constraint "reservation_during_excl"
DETAIL:  Key (during)=(["2010-01-01 14:45:00","2010-01-01 15:45:00")) conflicts
with existing key (during)=(["2010-01-01 11:30:00","2010-01-01 15:00:00")).
You can use the btree_gist
extension to define exclusion constraints on plain scalar data types, which
can then be combined with range exclusions for maximum flexibility. For
example, after btree_gist is installed, the following
constraint will reject overlapping ranges only if the meeting room numbers
are equal:

CREATE EXTENSION btree_gist;
CREATE TABLE room_reservation (
    room text,
    during tsrange,
    EXCLUDE USING GIST (room WITH =, during WITH &&)
);

INSERT INTO room_reservation VALUES
    ('123A', '[2010-01-01 14:00, 2010-01-01 15:00)');
INSERT 0 1

INSERT INTO room_reservation VALUES
    ('123A', '[2010-01-01 14:30, 2010-01-01 15:30)');
ERROR:  conflicting key value violates exclusion constraint "room_reservation_room_during_excl"
DETAIL:  Key (room, during)=(123A, ["2010-01-01 14:30:00","2010-01-01 15:30:00")) conflicts
with existing key (room, during)=(123A, ["2010-01-01 14:00:00","2010-01-01 15:00:00")).

INSERT INTO room_reservation VALUES
    ('123B', '[2010-01-01 14:30, 2010-01-01 15:30)');
INSERT 0 1
A domain is a user-defined data type that is based on another underlying type. Optionally, it can have constraints that restrict its valid values to a subset of what the underlying type would allow. Otherwise it behaves like the underlying type — for example, any operator or function that can be applied to the underlying type will work on the domain type. The underlying type can be any built-in or user-defined base type, enum type, array type, composite type, range type, or another domain.
For example, we could create a domain over integers that accepts only positive integers:
CREATE DOMAIN posint AS integer CHECK (VALUE > 0);
CREATE TABLE mytable (id posint);
INSERT INTO mytable VALUES(1);   -- works
INSERT INTO mytable VALUES(-1);  -- fails

When an operator or function of the underlying type is applied to a
domain value, the domain is automatically down-cast to the underlying
type. Thus, for example, the result of mytable.id - 1
is considered to be of type integer not posint.
We could write (mytable.id - 1)::posint to cast the
result back to posint, causing the domain's constraints
to be rechecked. In this case, that would result in an error if the
expression had been applied to an id value of 1.
Assigning a value of the underlying type to a field or variable of
the domain type is allowed without writing an explicit cast, but the
domain's constraints will be checked.
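Continuing the example above, a minimal illustration:

-- subtraction yields plain integer, so no domain check happens here
SELECT pg_typeof(id - 1) FROM mytable WHERE id = 1;   -- integer

-- casting back to the domain rechecks the constraint and fails for id = 1
SELECT (id - 1)::posint FROM mytable WHERE id = 1;
-- ERROR:  value for domain posint violates check constraint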
For additional information see CREATE DOMAIN.
Object identifiers (OIDs) are used internally by
PostgreSQL as primary keys for various
system tables.
Type oid represents an object identifier. There are also
several alias types for oid, each
named regsomething.
Table 8.26 shows an overview.
The oid type is currently implemented as an unsigned
four-byte integer. Therefore, it is not large enough to provide
database-wide uniqueness in large databases, or even in large
individual tables.
The oid type itself has few operations beyond comparison.
It can be cast to integer, however, and then manipulated using the
standard integer operators. (Beware of possible
signed-versus-unsigned confusion if you do this.)
The OID alias types have no operations of their own except
for specialized input and output routines. These routines are able
to accept and display symbolic names for system objects, rather than
the raw numeric value that type oid would use. The alias
types allow simplified lookup of OID values for objects. For example,
to examine the pg_attribute rows related to a table
mytable, one could write:

SELECT * FROM pg_attribute WHERE attrelid = 'mytable'::regclass;

rather than:

SELECT * FROM pg_attribute
  WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable');

While that doesn't look all that bad by itself, it's still oversimplified.
A far more complicated sub-select would be needed to
select the right OID if there are multiple tables named
mytable in different schemas.
The regclass input converter handles the table lookup according
to the schema path setting, and so it does the “right thing”
automatically. Similarly, casting a table's OID to
regclass is handy for symbolic display of a numeric OID.
Table 8.26. Object Identifier Types
Name | References | Description | Value Example |
---|---|---|---|
oid | any | numeric object identifier | 564182 |
regclass | pg_class | relation name | pg_type |
regcollation | pg_collation | collation name | "POSIX" |
regconfig | pg_ts_config | text search configuration | english |
regdictionary | pg_ts_dict | text search dictionary | simple |
regnamespace | pg_namespace | namespace name | pg_catalog |
regoper | pg_operator | operator name | + |
regoperator | pg_operator | operator with argument types | *(integer,integer) or -(NONE,integer) |
regproc | pg_proc | function name | sum |
regprocedure | pg_proc | function with argument types | sum(int4) |
regrole | pg_authid | role name | smithee |
regtype | pg_type | data type name | integer |
All of the OID alias types for objects that are grouped by namespace
accept schema-qualified names, and will
display schema-qualified names on output if the object would not
be found in the current search path without being qualified.
For example, myschema.mytable is acceptable input
for regclass (if there is such a table). That value
might be output as myschema.mytable, or
just mytable, depending on the current search path.
The regproc and regoper alias types will only
accept input names that are unique (not overloaded), so they are
of limited use; for most uses regprocedure or
regoperator are more appropriate. For regoperator,
unary operators are identified by writing NONE for the unused
operand.
The input functions for these types allow whitespace between tokens,
and will fold upper-case letters to lower case, except within double
quotes; this is done to make the syntax rules similar to the way
object names are written in SQL. Conversely, the output functions
will use double quotes if needed to make the output be a valid SQL
identifier. For example, the OID of a function
named Foo (with upper case F)
taking two integer arguments could be entered as
' "Foo" ( int, integer ) '::regprocedure. The
output would look like "Foo"(integer,integer).
Both the function name and the argument type names could be
schema-qualified, too.
Many built-in PostgreSQL functions accept
the OID of a table, or another kind of database object, and for
convenience are declared as taking regclass (or the
appropriate OID alias type). This means you do not have to look up
the object's OID by hand, but can just enter its name as a string
literal. For example, the nextval(regclass) function
takes a sequence relation's OID, so you could call it like this:

nextval('foo')              operates on sequence foo
nextval('FOO')              same as above
nextval('"Foo"')            operates on sequence Foo
nextval('myschema.foo')     operates on myschema.foo
nextval('"myschema".foo')   same as above
nextval('foo')              searches search path for foo
When you write the argument of such a function as an unadorned literal string, it becomes a constant of type regclass (or the appropriate type). Since this is really just an OID, it will track the originally identified object despite later renaming, schema reassignment, etc. This “early binding” behavior is usually desirable for object references in column defaults and views. But sometimes you might want “late binding” where the object reference is resolved at run time. To get late-binding behavior, force the constant to be stored as a text constant instead of regclass:
nextval('foo'::text)        foo is looked up at runtime
The to_regclass() function and its siblings can also be used to perform run-time lookups. See Table 9.70.
Another practical example of use of regclass is to look up the OID of a table listed in the information_schema views, which don't supply such OIDs directly. One might for example wish to call the pg_relation_size() function, which requires the table OID. Taking the above rules into account, the correct way to do that is
SELECT table_schema, table_name, pg_relation_size((quote_ident(table_schema) || '.' || quote_ident(table_name))::regclass) FROM information_schema.tables WHERE ...
The quote_ident() function will take care of double-quoting the identifiers where needed. The seemingly easier
SELECT pg_relation_size(table_name) FROM information_schema.tables WHERE ...
is not recommended, because it will fail for tables that are outside your search path or have names that require quoting.
An additional property of most of the OID alias types is the creation of dependencies. If a constant of one of these types appears in a stored expression (such as a column default expression or view), it creates a dependency on the referenced object. For example, if a column has a default expression nextval('my_seq'::regclass), PostgreSQL understands that the default expression depends on the sequence my_seq, so the system will not let the sequence be dropped without first removing the default expression. The alternative of nextval('my_seq'::text) does not create a dependency. (regrole is an exception to this property. Constants of this type are not allowed in stored expressions.)
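A minimal sketch of this behavior (the sequence and table names are illustrative):
CREATE SEQUENCE my_seq;
CREATE TABLE my_table (id bigint DEFAULT nextval('my_seq'::regclass));
DROP SEQUENCE my_seq;   -- fails: the default on my_table.id depends on my_seq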
Another identifier type used by the system is xid, or transaction (abbreviated xact) identifier. This is the data type of the system columns xmin and xmax. Transaction identifiers are 32-bit quantities. In some contexts, a 64-bit variant xid8 is used. Unlike xid values, xid8 values increase strictly monotonically and cannot be reused in the lifetime of a database cluster.
A third identifier type used by the system is cid, or command identifier. This is the data type of the system columns cmin and cmax. Command identifiers are also 32-bit quantities.
A final identifier type used by the system is tid, or tuple identifier (row identifier). This is the data type of the system column ctid. A tuple ID is a pair (block number, tuple index within block) that identifies the physical location of the row within its table.
(The system columns are further explained in Section 5.5.)
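For example, these system columns can be inspected directly (using the mytable example from above, assuming it exists):
SELECT ctid, xmin, xmax, cmin, cmax FROM mytable LIMIT 1;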
pg_lsn Type
The pg_lsn data type can be used to store LSN (Log Sequence Number) data, which is a pointer to a location in the WAL. This type is a representation of XLogRecPtr and an internal system type of PostgreSQL.
Internally, an LSN is a 64-bit integer, representing a byte position in the write-ahead log stream. It is printed as two hexadecimal numbers of up to 8 digits each, separated by a slash; for example, 16/B374D848. The pg_lsn type supports the standard comparison operators, like = and >. Two LSNs can be subtracted using the - operator; the result is the number of bytes separating those write-ahead log locations. Also, a number of bytes can be added to and subtracted from an LSN using the +(pg_lsn,numeric) and -(pg_lsn,numeric) operators, respectively. Note that the calculated LSN should be in the range of the pg_lsn type, i.e., between 0/0 and FFFFFFFF/FFFFFFFF.
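A brief illustration of LSN arithmetic (the specific LSN literals here are arbitrary):
SELECT pg_current_wal_lsn();                            -- the server's current WAL write location
SELECT '16/B374D848'::pg_lsn - '16/B374D000'::pg_lsn;   -- number of bytes between the two locations
SELECT '16/B374D848'::pg_lsn + 16::numeric;             -- an LSN 16 bytes further along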
The PostgreSQL type system contains a number of special-purpose entries that are collectively called pseudo-types. A pseudo-type cannot be used as a column data type, but it can be used to declare a function's argument or result type. Each of the available pseudo-types is useful in situations where a function's behavior does not correspond to simply taking or returning a value of a specific SQL data type. Table 8.27 lists the existing pseudo-types.
Table 8.27. Pseudo-Types
Name | Description |
---|---|
any | Indicates that a function accepts any input data type. |
anyelement | Indicates that a function accepts any data type (see Section 38.2.5). |
anyarray | Indicates that a function accepts any array data type (see Section 38.2.5). |
anynonarray | Indicates that a function accepts any non-array data type (see Section 38.2.5). |
anyenum | Indicates that a function accepts any enum data type (see Section 38.2.5 and Section 8.7). |
anyrange | Indicates that a function accepts any range data type (see Section 38.2.5 and Section 8.17). |
anymultirange | Indicates that a function accepts any multirange data type (see Section 38.2.5 and Section 8.17). |
anycompatible | Indicates that a function accepts any data type, with automatic promotion of multiple arguments to a common data type (see Section 38.2.5). |
anycompatiblearray | Indicates that a function accepts any array data type, with automatic promotion of multiple arguments to a common data type (see Section 38.2.5). |
anycompatiblenonarray | Indicates that a function accepts any non-array data type, with automatic promotion of multiple arguments to a common data type (see Section 38.2.5). |
anycompatiblerange | Indicates that a function accepts any range data type, with automatic promotion of multiple arguments to a common data type (see Section 38.2.5 and Section 8.17). |
anycompatiblemultirange | Indicates that a function accepts any multirange data type, with automatic promotion of multiple arguments to a common data type (see Section 38.2.5 and Section 8.17). |
cstring | Indicates that a function accepts or returns a null-terminated C string. |
internal | Indicates that a function accepts or returns a server-internal data type. |
language_handler | A procedural language call handler is declared to return language_handler . |
fdw_handler | A foreign-data wrapper handler is declared to return fdw_handler . |
table_am_handler | A table access method handler is declared to return table_am_handler . |
index_am_handler | An index access method handler is declared to return index_am_handler . |
tsm_handler | A tablesample method handler is declared to return tsm_handler . |
record | Identifies a function taking or returning an unspecified row type. |
trigger | A trigger function is declared to return trigger. |
event_trigger | An event trigger function is declared to return event_trigger. |
pg_ddl_command | Identifies a representation of DDL commands that is available to event triggers. |
void | Indicates that a function returns no value. |
unknown | Identifies a not-yet-resolved type, e.g., of an undecorated string literal. |
Functions coded in C (whether built-in or dynamically loaded) can be declared to accept or return any of these pseudo-types. It is up to the function author to ensure that the function will behave safely when a pseudo-type is used as an argument type.
Functions coded in procedural languages can use pseudo-types only as allowed by their implementation languages. At present most procedural languages forbid use of a pseudo-type as an argument type, and allow only void and record as a result type (plus trigger or event_trigger when the function is used as a trigger or event trigger). Some also support polymorphic functions using the polymorphic pseudo-types, which are shown above and discussed in detail in Section 38.2.5.
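As a brief sketch (the function name here is illustrative), an SQL-language function can be written with the polymorphic pseudo-types anyarray and anyelement:
CREATE FUNCTION first_element(anyarray) RETURNS anyelement
    LANGUAGE sql IMMUTABLE AS 'SELECT $1[1]';
SELECT first_element(ARRAY[3, 2, 1]);   -- 3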
The internal pseudo-type is used to declare functions that are meant only to be called internally by the database system, and not by direct invocation in an SQL query. If a function has at least one internal-type argument then it cannot be called from SQL. To preserve the type safety of this restriction it is important to follow this coding rule: do not create any function that is declared to return internal unless it has at least one internal argument.
[7] For this purpose, the term “value” includes array elements, though JSON terminology sometimes considers array elements distinct from values within objects.
PostgreSQL provides a large number of functions and operators for the built-in data types. This chapter describes most of them, although additional special-purpose functions appear in relevant sections of the manual. Users can also define their own functions and operators, as described in Part V. The psql commands \df and \do can be used to list all available functions and operators, respectively.
The notation used throughout this chapter to describe the argument and result data types of a function or operator is like this:
repeat ( text, integer ) → text
which says that the function repeat takes one text and one integer argument and returns a result of type text. The right arrow is also used to indicate the result of an example, thus:
repeat('Pg', 4) → PgPgPgPg
If you are concerned about portability then note that most of the functions and operators described in this chapter, with the exception of the most trivial arithmetic and comparison operators and some explicitly marked functions, are not specified by the SQL standard. Some of this extended functionality is present in other SQL database management systems, and in many cases this functionality is compatible and consistent between the various implementations.
The usual logical operators are available:
boolean AND boolean → boolean
boolean OR boolean → boolean
NOT boolean → boolean
SQL uses a three-valued logic system with true, false, and null, which represents “unknown”. Observe the following truth tables:
a | b | a AND b | a OR b |
---|---|---|---|
TRUE | TRUE | TRUE | TRUE |
TRUE | FALSE | FALSE | TRUE |
TRUE | NULL | NULL | TRUE |
FALSE | FALSE | FALSE | FALSE |
FALSE | NULL | FALSE | NULL |
NULL | NULL | NULL | NULL |
a | NOT a |
---|---|
TRUE | FALSE |
FALSE | TRUE |
NULL | NULL |
The operators AND and OR are commutative, that is, you can switch the left and right operands without affecting the result. (However, it is not guaranteed that the left operand is evaluated before the right operand. See Section 4.2.14 for more information about the order of evaluation of subexpressions.)
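A quick illustration of the three-valued behavior:
SELECT NULL AND false;   -- false: the result cannot be true whatever the unknown input is
SELECT NULL OR true;     -- true
SELECT NULL AND true;    -- null (unknown)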
The usual comparison operators are available, as shown in Table 9.1.
Table 9.1. Comparison Operators
Operator | Description |
---|---|
datatype < datatype → boolean | Less than |
datatype > datatype → boolean | Greater than |
datatype <= datatype → boolean | Less than or equal to |
datatype >= datatype → boolean | Greater than or equal to |
datatype = datatype → boolean | Equal |
datatype <> datatype → boolean | Not equal |
datatype != datatype → boolean | Not equal |
<> is the standard SQL notation for “not equal”. != is an alias, which is converted to <> at a very early stage of parsing. Hence, it is not possible to implement != and <> operators that do different things.
These comparison operators are available for all built-in data types that have a natural ordering, including numeric, string, and date/time types. In addition, arrays, composite types, and ranges can be compared if their component data types are comparable.
It is usually possible to compare values of related data types as well; for example integer > bigint will work. Some cases of this sort are implemented directly by “cross-type” comparison operators, but if no such operator is available, the parser will coerce the less-general type to the more-general type and apply the latter's comparison operator.
As shown above, all comparison operators are binary operators that return values of type boolean. Thus, expressions like 1 < 2 < 3 are not valid (because there is no < operator to compare a Boolean value with 3). Use the BETWEEN predicates shown below to perform range tests.
There are also some comparison predicates, as shown in Table 9.2. These behave much like operators, but have special syntax mandated by the SQL standard.
Table 9.2. Comparison Predicates
Predicate | Description |
---|---|
datatype BETWEEN datatype AND datatype | Between (inclusive of the range endpoints). |
datatype NOT BETWEEN datatype AND datatype | Not between (the negation of BETWEEN). |
datatype BETWEEN SYMMETRIC datatype AND datatype | Between, after sorting the two endpoint values. |
datatype NOT BETWEEN SYMMETRIC datatype AND datatype | Not between, after sorting the two endpoint values. |
datatype IS DISTINCT FROM datatype | Not equal, treating null as a comparable value. |
datatype IS NOT DISTINCT FROM datatype | Equal, treating null as a comparable value. |
datatype IS NULL | Test whether value is null. |
datatype IS NOT NULL | Test whether value is not null. |
datatype ISNULL | Test whether value is null (nonstandard syntax). |
datatype NOTNULL | Test whether value is not null (nonstandard syntax). |
boolean IS TRUE | Test whether boolean expression yields true. |
boolean IS NOT TRUE | Test whether boolean expression yields false or unknown. |
boolean IS FALSE | Test whether boolean expression yields false. |
boolean IS NOT FALSE | Test whether boolean expression yields true or unknown. |
boolean IS UNKNOWN | Test whether boolean expression yields unknown. |
boolean IS NOT UNKNOWN | Test whether boolean expression yields true or false. |
The BETWEEN predicate simplifies range tests:
a BETWEEN x AND y
is equivalent to
a >= x AND a <= y
Notice that BETWEEN treats the endpoint values as included in the range.
BETWEEN SYMMETRIC is like BETWEEN except there is no requirement that the argument to the left of AND be less than or equal to the argument on the right. If it is not, those two arguments are automatically swapped, so that a nonempty range is always implied.
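For example:
SELECT 5 BETWEEN 1 AND 10;             -- true
SELECT 5 BETWEEN 10 AND 1;             -- false: the endpoints are not swapped
SELECT 5 BETWEEN SYMMETRIC 10 AND 1;   -- true: the endpoints are sorted first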
The various variants of BETWEEN are implemented in terms of the ordinary comparison operators, and therefore will work for any data type(s) that can be compared.
The use of AND in the BETWEEN syntax creates an ambiguity with the use of AND as a logical operator. To resolve this, only a limited set of expression types are allowed as the second argument of a BETWEEN clause. If you need to write a more complex sub-expression in BETWEEN, write parentheses around the sub-expression.
Ordinary comparison operators yield null (signifying “unknown”), not true or false, when either input is null. For example, 7 = NULL yields null, as does 7 <> NULL. When this behavior is not suitable, use the IS [ NOT ] DISTINCT FROM predicates:
a IS DISTINCT FROM b
a IS NOT DISTINCT FROM b
For non-null inputs, IS DISTINCT FROM is the same as the <> operator. However, if both inputs are null it returns false, and if only one input is null it returns true. Similarly, IS NOT DISTINCT FROM is identical to = for non-null inputs, but it returns true when both inputs are null, and false when only one input is null. Thus, these predicates effectively act as though null were a normal data value, rather than “unknown”.
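A quick illustration:
SELECT 7 = NULL;                         -- null
SELECT 7 IS DISTINCT FROM NULL;          -- true
SELECT NULL IS NOT DISTINCT FROM NULL;   -- true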
To check whether a value is or is not null, use the predicates:
expression IS NULL
expression IS NOT NULL
or the equivalent, but nonstandard, predicates:
expression ISNULL
expression NOTNULL
Do not write expression = NULL, because NULL is not “equal to” NULL. (The null value represents an unknown value, and it is not known whether two unknown values are equal.)
Some applications might expect that expression = NULL returns true if expression evaluates to the null value. It is highly recommended that these applications be modified to comply with the SQL standard. However, if that cannot be done the transform_null_equals configuration variable is available. If it is enabled, PostgreSQL will convert x = NULL clauses to x IS NULL.
If the expression is row-valued, then IS NULL is true when the row expression itself is null or when all the row's fields are null, while IS NOT NULL is true when the row expression itself is non-null and all the row's fields are non-null. Because of this behavior, IS NULL and IS NOT NULL do not always return inverse results for row-valued expressions; in particular, a row-valued expression that contains both null and non-null fields will return false for both tests. In some cases, it may be preferable to write row IS DISTINCT FROM NULL or row IS NOT DISTINCT FROM NULL, which will simply check whether the overall row value is null without any additional tests on the row fields.
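For example, a row containing both null and non-null fields fails both tests:
SELECT ROW(1, NULL) IS NULL;                 -- false
SELECT ROW(1, NULL) IS NOT NULL;             -- false
SELECT ROW(1, NULL) IS DISTINCT FROM NULL;   -- true: the row value itself is not null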
Boolean values can also be tested using the predicates
boolean_expression IS TRUE
boolean_expression IS NOT TRUE
boolean_expression IS FALSE
boolean_expression IS NOT FALSE
boolean_expression IS UNKNOWN
boolean_expression IS NOT UNKNOWN
These will always return true or false, never a null value, even when the operand is null.
A null input is treated as the logical value “unknown”. Notice that IS UNKNOWN and IS NOT UNKNOWN are effectively the same as IS NULL and IS NOT NULL, respectively, except that the input expression must be of Boolean type.
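For instance:
SELECT (NULL > 1) IS UNKNOWN;    -- true: the comparison yields unknown
SELECT false IS NOT TRUE;        -- true
SELECT NULL::boolean IS FALSE;   -- false, never null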
Some comparison-related functions are also available, as shown in Table 9.3.
Table 9.3. Comparison Functions
Mathematical operators are provided for many PostgreSQL types. For types without standard mathematical conventions (e.g., date/time types) we describe the actual behavior in subsequent sections.
Table 9.4 shows the mathematical
operators that are available for the standard numeric types.
Unless otherwise noted, operators shown as accepting numeric_type are available for all the types smallint, integer, bigint, numeric, real, and double precision. Operators shown as accepting integral_type are available for the types smallint, integer, and bigint. Except where noted, each form of an operator returns the same data type as its argument(s). Calls involving multiple argument data types, such as integer + numeric, are resolved by using the type appearing later in these lists.
Table 9.4. Mathematical Operators
Operator | Description |
---|---|
numeric_type + numeric_type → numeric_type | Addition |
+ numeric_type → numeric_type | Unary plus (no operation) |
numeric_type - numeric_type → numeric_type | Subtraction |
- numeric_type → numeric_type | Negation |
numeric_type * numeric_type → numeric_type | Multiplication |
numeric_type / numeric_type → numeric_type | Division (for integral types, division truncates the result towards zero) |
numeric_type % numeric_type → numeric_type | Modulo (remainder); available for smallint, integer, bigint, and numeric |
numeric ^ numeric → numeric | Exponentiation. Unlike typical mathematical practice, multiple uses of ^ associate left to right by default. |
|/ double precision → double precision | Square root |
||/ double precision → double precision | Cube root |
@ numeric_type → numeric_type | Absolute value |
integral_type & integral_type → integral_type | Bitwise AND |
integral_type | integral_type → integral_type | Bitwise OR |
integral_type # integral_type → integral_type | Bitwise exclusive OR |
~ integral_type → integral_type | Bitwise NOT |
integral_type << integer → integral_type | Bitwise shift left |
integral_type >> integer → integral_type | Bitwise shift right |
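A few illustrative calls corresponding to rows of this table:
SELECT 7 / 2;        -- 3: integer division truncates towards zero
SELECT 7 % 2;        -- 1
SELECT 2 ^ 3 ^ 2;    -- 64, i.e., (2 ^ 3) ^ 2
SELECT |/ 25.0;      -- 5
SELECT 91 & 15;      -- 11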
Table 9.5 shows the available mathematical functions. Many of these functions are provided in multiple forms with different argument types. Except where noted, any given form of a function returns the same data type as its argument(s); cross-type cases are resolved in the same way as explained above for operators. The functions working with double precision data are mostly implemented on top of the host system's C library; accuracy and behavior in boundary cases can therefore vary depending on the host system.
Table 9.5. Mathematical Functions
Table 9.6 shows functions for generating random numbers.
Table 9.6. Random Functions
The random() function uses a simple linear congruential algorithm. It is fast but not suitable for cryptographic applications; see the pgcrypto module for a more secure alternative. If setseed() is called, the series of results of subsequent random() calls in the current session can be repeated by re-issuing setseed() with the same argument. Without any prior setseed() call in the same session, the first random() call obtains a seed from a platform-dependent source of random bits.
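A brief illustration of session-repeatable results:
SELECT setseed(0.5);   -- fix the seed for this session
SELECT random();       -- a value in the range 0.0 <= x < 1.0
SELECT setseed(0.5);
SELECT random();       -- the same value as the previous random() call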
Table 9.7 shows the available trigonometric functions. Each of these functions comes in two variants, one that measures angles in radians and one that measures angles in degrees.
Table 9.7. Trigonometric Functions
Another way to work with angles measured in degrees is to use the unit transformation functions radians() and degrees() shown earlier. However, using the degree-based trigonometric functions is preferred, as that way avoids round-off error for special cases such as sind(30).
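For example:
SELECT sind(30);           -- exactly 0.5
SELECT sin(radians(30));   -- approximately 0.5, subject to floating-point round-off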
Table 9.8 shows the available hyperbolic functions.
Table 9.8. Hyperbolic Functions
This section describes functions and operators for examining and manipulating string values. Strings in this context include values of the types character, character varying, and text. Except where noted, these functions and operators are declared to accept and return type text. They will interchangeably accept character varying arguments. Values of type character will be converted to text before the function or operator is applied, resulting in stripping any trailing spaces in the character value.
SQL defines some string functions that use key words, rather than commas, to separate arguments. Details are in Table 9.9. PostgreSQL also provides versions of these functions that use the regular function invocation syntax (see Table 9.10).
The string concatenation operator (||) will accept non-string input, so long as at least one input is of string type, as shown in Table 9.9. For other cases, inserting an explicit coercion to text can be used to have non-string input accepted.
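For instance:
SELECT 'Value: ' || 42;    -- accepted: one input is of string type
SELECT 1 || 2;             -- error: neither input is of string type
SELECT 1::text || 2;       -- accepted after an explicit coercion to text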
Table 9.9. SQL String Functions and Operators
Function/Operator | Description |
---|---|
text || text → text | Concatenates the two strings. |
text || anynonarray → text, anynonarray || text → text | Converts the non-string input to text, then concatenates the two strings. (The non-string input cannot be of an array type, because that would create ambiguity with the array || operators.) |
text IS [ NOT ] [ form ] NORMALIZED → boolean | Checks whether the string is in the specified Unicode normalization form. The optional form key word specifies the form: NFC (the default), NFD, NFKC, or NFKD. |
bit_length ( text ) → integer | Returns number of bits in the string (8 times the octet_length). |
char_length ( text ) → integer | Returns number of characters in the string. |
lower ( text ) → text | Converts the string to all lower case, according to the rules of the database's locale. |
normalize ( text [, form ] ) → text | Converts the string to the specified Unicode normalization form. The optional form key word specifies the form: NFC (the default), NFD, NFKC, or NFKD. |
octet_length ( text ) → integer | Returns number of bytes in the string. |
octet_length ( character ) → integer | Returns number of bytes in the string. Since this version of the function accepts type character directly, it will not strip trailing spaces. |
overlay ( string text PLACING newsubstring text FROM start integer [ FOR count integer ] ) → text | Replaces the substring of string that starts at the start'th character and extends for count characters with newsubstring. |
position ( substring text IN string text ) → integer | Returns first starting index of the specified substring within string, or zero if it's not present. |
substring ( string text [ FROM start integer ] [ FOR count integer ] ) → text | Extracts the substring of string starting at the start'th character if that is specified, and stopping after count characters if that is specified. |
substring ( string text FROM pattern text ) → text | Extracts the first substring matching POSIX regular expression; see Section 9.7.3. |
substring ( string text SIMILAR pattern text ESCAPE escape text ) → text | Extracts the first substring matching SQL regular expression; see Section 9.7.2. The first form has been specified since SQL:2003; the second form was only in SQL:1999 and should be considered obsolete. |
trim ( [ LEADING | TRAILING | BOTH ] [ characters text ] FROM string text ) → text | Removes the longest string containing only characters in characters (a space by default) from the start, end, or both ends (BOTH is the default) of string. |
trim ( [ LEADING | TRAILING | BOTH ] [ FROM ] string text [, characters text ] ) → text | This is a non-standard syntax for trim(). |
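A few illustrative calls (results shown in comments):
SELECT 'Post' || 'greSQL';                 -- PostgreSQL
SELECT char_length('jose');                -- 4
SELECT lower('TOM');                       -- tom
SELECT position('om' in 'Thomas');         -- 3
SELECT substring('Thomas' from 2 for 3);   -- hom
SELECT trim(both 'xyz' from 'yxTomxx');    -- Tom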