Using prepared statements through the SPI

In the previous post, you saw how to use the server programming interface to execute statements that modify the database. This was done by creating a statement string containing all the data that needed to be inserted, but that also meant generating a string from already parsed data, which seems like a waste of cycles. It would be useful if we could somehow avoid this extra parsing step and instead use the JSONB object that we already created. In this post, you will see how PostgreSQL executes statements, learn what prepared statements are, and improve the previous implementation by exploiting the SPI support for preparing and executing statements.

Prepared statements

Prepared statements are a concept used to give better control over the execution of statements. When preparing a statement, you build a server-side object representing a partially executed query. This object can then be saved and executed later, one or multiple times. This allows you to avoid repeating an expensive part of executing a statement and only perform the part of the execution that is required to select an execution plan for the final statement. Preparing a statement is easy using the PREPARE statement:

PREPARE inject_metric(timestamptz, jsonb, jsonb) AS
    INSERT INTO metric.swap VALUES($1, $2, $3);

This prepares a statement that will insert a row into the metric.swap table. Prepared statements can also have a set of parameters, written as $1, $2, and so on, that are replaced with actual values once the final part of the execution is done. Once you have a prepared statement, you can execute it using the EXECUTE statement, which also allows you to provide the actual parameters for the prepared statement.

EXECUTE inject_metric(
  'Tue Nov 26 07:39:14 2019',
  '{"host": "fury"}',
  '{"in": "804974592", "out": "6606921728"}'
);

Prepared statements are available in the SQL interface, but they can also be used through the SPI, and this is where we can get a performance boost by avoiding re-parsing the statements. Before delving into how to use them from the SPI, it is useful to understand the steps that a statement goes through when being executed. When the PostgreSQL backend receives strings to execute, it has to process them and figure out what to do. This is done in a sequence of steps where the string is transformed into an execution plan and then executed.

This is how prepared statements work when you use them through the SQL interface, but they are also used internally when executing a query, so this is something that you can use in the background worker as well.

Prepared Statements and the Server Programming Interface

Figure 1. Executing statements

Internally, PostgreSQL separates statements into two categories: queries and utility statements. Queries are statements that require plans and optimization (SELECT statements in all their forms), while all other statements are utility statements. Utility statements do not require any planning or optimization, they are just executed, but they can also be transformed into a prepared form that basically looks like a plan internally (it makes execution simpler). This post mainly describes queries, but utility statements are executed in a similar way, just without the planning.

There are a few different ways that PostgreSQL executes queries that arrive at the backend. Queries go through the same steps in all cases, but memory is allocated in slightly different ways depending on whether they are sent to a backend, executed in a background worker using something like exec_single_statement, or executed through the SPI. Here the focus will be on the basic flow of executing statements through the SPI, but it works in a similar manner in the other cases. You can see the steps in Figure 1:

  1. Parsing the string will transform it into a list of parse trees, which are internal structures containing the information about each query or utility statement.
  2. Analyzing and rewriting the query will collect some basic information about the statement and apply rewrite rules to rewrite the statement into a list of query trees. The rewrite rules are, among other things, used to implement views.
  3. Prepare a plan from the query nodes to build a plan source. The plan source is a template for the actual plan and contains a list of candidate plans with placeholders for parameters.
  4. Build the actual plan from the plan source. At this point, it is possible to add actual parameter values to the plan source.
  5. Execute the plan.

In the previous post, SPI_execute was used to execute a query, which performs all five steps outlined above. Internally, these steps are separated into a prepare phase and an execute phase, just as in the SQL interface. The first three steps can be done using the function SPI_prepare and the last two steps using the function SPI_execute_plan. However, to make this work, it is necessary to do a few other steps first.
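To get a feel for how the two phases fit together, here is a minimal sketch that hard-codes the statement and the parameter types for the metric.swap table from the SQL example above; the rest of this post shows how to derive them from the table definition instead.

SPIPlanPtr plan;
Oid argtypes[3] = {TIMESTAMPTZOID, JSONBOID, JSONBOID}; /* OIDs from catalog/pg_type.h */
Datum values[3];
int err;

SPI_connect();

/* Steps 1-3: parse, analyze/rewrite, and build the plan source */
plan = SPI_prepare("INSERT INTO metric.swap VALUES ($1, $2, $3)", 3, argtypes);

/* ... fill in values[0..2] with the timestamp and the two JSONB objects ... */

/* Steps 4-5: build the actual plan from the plan source and execute it */
err = SPI_execute_plan(plan, values, NULL, false, 0);
if (err != SPI_OK_INSERT)
    elog(LOG, "SPI_execute_plan failed: %s", SPI_result_code_string(err));

SPI_finish();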

The advantage of using a prepared statement in this case is that you can pass parameters directly to the prepared statement when executing it, without having to build a string for them. This skips the parsing step, which is not necessary here since the JSONB structures are already built.

Retrieving the tuple descriptor for the table

To be able to prepare the statement, it is necessary to have information about the table structure, in particular the number of attributes and their types. As you saw in the post on Parsing InfluxDB Line Protocol, this information is stored in a tuple descriptor, which you can get by first opening the table.

Relation rel;
TupleDesc tupdesc;
int natts;
    .
    .
    .
/* Lock out table definition changes, but allow concurrent inserts */
rel = table_open(relid, AccessShareLock);
tupdesc = RelationGetDescr(rel);
natts = tupdesc->natts;
    .
    .
    .
/* NoLock keeps the lock until the end of the transaction */
table_close(rel, NoLock);

Opening the table and locking it is critical to prevent it from being changed while you read and use the definition. Imagine that you create a new tuple to insert into the table based on the tuple descriptor and then try to insert it, but another session changed the definition while you were working with it. This is not expected and is likely to cause a crash. Since you will be inserting into the table, you need to keep the table open until you have written the tuple, but it is fine if other sessions insert rows in the meantime as long as the table definition is not changed.

By using table_open with AccessShareLock, we open the table in ACCESS SHARE mode (see Explicit Locking in the PostgreSQL manual), which will prevent the table definition from changing but still allow inserts into the table. When closing the table, NoLock is used, which will close the table but leave the lock in place until the end of the transaction, and hence prevent changes to the table definition until the tuples have been inserted.

Building a prepared statement

Now that the table is locked and the number of attributes is available, you can build the statement to prepare as a string. It just contains $1, $2, and so on as placeholders for the actual parameters, with one parameter for each column of the table.

initStringInfo(&stmt);
appendStringInfo(
    &stmt, "INSERT INTO %s.%s VALUES (",
    quote_identifier(get_namespace_name(RelationGetNamespace(rel))),
    quote_identifier(RelationGetRelationName(rel)));
for (i = 0; i < natts; ++i)
  appendStringInfo(&stmt, "$%d%s", i + 1, (i < natts - 1 ? ", " : ""));
appendStringInfoString(&stmt, ")");
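
For the metric.swap table used in the examples above, this produces the string INSERT INTO metric.swap VALUES ($1, $2, $3), which is the same statement that was prepared manually in the SQL example.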

Extracting the argument types

There are several SPI functions for working with tuple descriptors that can be used to get information about the table definition. To extract the type id (the OID of the type in the pg_type table) for each column, use the SPI_gettypeid function. Note that the attribute numbers for these functions start at one, not at zero. Since the arrays are zero-indexed, it is a common mistake to forget to add 1 to the array index to get the attribute number when using a function like this.

Oid *argtypes = (Oid *)palloc(natts * sizeof(Oid));
    .
    .
    .
for (i = 0; i < natts; ++i)
  argtypes[i] = SPI_gettypeid(tupdesc, i + 1);

Preparing and executing the statement

Most of the critical information is now available, which allows the statement to be prepared and executed.

SPIPlanPtr plan;
    .
    .
    .
/* Prepare a plan from the statement and the argument types */
plan = SPI_prepare(stmt.data, natts, argtypes);

The first part just builds a prepared statement with parameters. This is pretty straightforward: just rely on the information from the tuple descriptor for the table that was saved away in the argtypes array. The function returns a (pointer to a) plan, which is actually a plan source wrapped in a structure: that is, it contains a set of candidate plans with placeholders for parameters. The plan is allocated under the transaction memory context (in this case) and will live until the end of the transaction.
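
As a side note, if you want to keep the plan around for longer than one transaction (for example, preparing it once and reusing it for every received row), the SPI provides SPI_keepplan to move the plan to a longer-lived memory context. This is not what the code here does, but a minimal sketch, using a hypothetical cached_plan variable, could look like this:

static SPIPlanPtr cached_plan = NULL;   /* hypothetical cache, not part of pg_influx */

if (cached_plan == NULL)
{
    SPIPlanPtr plan = SPI_prepare(stmt.data, natts, argtypes);
    if (plan == NULL)
        elog(ERROR, "SPI_prepare failed: %s", SPI_result_code_string(SPI_result));
    /* Move the plan out of the SPI memory context so that it survives SPI_finish */
    if (SPI_keepplan(plan) != 0)
        elog(ERROR, "SPI_keepplan failed");
    cached_plan = plan;
}

With the plan prepared, the next step is to collect the parameter values and execute the plan.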

Datum *values;
bool *nulls;
char *cnulls;
TupleDesc tupdesc;
   .
   .
   .
ParseInfluxCollect(state, tupdesc, values, nulls);
   .
   .
   .
cnulls = (char *)palloc(natts * sizeof(char));
for (i = 0; i < natts; ++i)
  cnulls[i] = (nulls[i]) ? 'n' : ' ';
   .
   .
   .
err = SPI_execute_plan(plan, values, cnulls, false, 0);
if (err != SPI_OK_INSERT)
  elog(LOG, "SPI_execute_plan failed executing query \"%s\": %s",
       stmt.data, SPI_result_code_string(err));

Executing the plan is done using SPI_execute_plan, which is quite straightforward to use: you pass in the values and the null status as arrays. There is a subtle trap here: the null array provided to this function is different from the null array provided to functions like heap_form_tuple. In this case, a character array is expected, with an “n” if the value is null and a space if it is not, so that array is built from the nulls array provided by ParseInfluxCollect. The two additional parameters after that just say that this is not a read-only plan and that there is no limit on the number of rows that should be returned, which is only relevant for queries anyway, so zero is passed in here.

But does it really make a difference?

Figure 2. Comparing SPI_execute with SPI_prepare + SPI_execute_plan

The goal of this post is to show how the prepare interface can be used and how it works, but a good question to ask at this point is whether this really makes a difference. After all, the code still prepares and executes the statement for each received row, so the only saving is that it is not necessary to parse the JSON. However, it turns out that there is actually quite a significant difference even for this small change.

To test this, I wrote an Influx Faker, a simple tool that fakes InfluxDB lines and sends them to a UDP port. Since UDP is used, lines that pg_influx cannot keep up with are simply dropped, so it is easy to measure performance by sending rows to the port and seeing how many of them are processed. Doing that and comparing the results gives the numbers in Figure 2. Even though this is an improvement, these are not outstanding figures. The Influx Faker is really simple and can send a lot of rows in a very short time frame, so it is unlikely that it will ever be possible to reach 100%. Fortunately, there are a lot more improvements that can be made, which also allow a peek into other aspects of the PostgreSQL server. In other words: more fun posts.
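
If you want to reproduce a test like this, the faker can be as simple as a small program that sends line protocol strings over UDP. A minimal sketch (this is not the actual Influx Faker; the port number and the line contents are just examples based on the data used earlier) could look like this:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* One fake line in InfluxDB line protocol format, matching the earlier example data */
    const char *line = "swap,host=fury in=804974592,out=6606921728";
    struct sockaddr_in addr;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8089);                 /* example port only */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    /* Send the line as a single UDP datagram; datagrams that are not consumed are dropped */
    sendto(sock, line, strlen(line), 0, (struct sockaddr *)&addr, sizeof(addr));
    close(sock);
    return 0;
}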

In the next post, there will be a brief digression into how input and output of types is handled and how that works. This will help with building a good architecture and a more useful tool, but will not really affect performance.

As always, the code is available in the pg_influx repository on GitHub.

Mats

