Postgres has some excellent features. One of these is set-returning functions. It’s possible to have a function (written in SQL, or some other language) that returns a set of values. For instance, the in-built function generate_series() returns a set of values:
SELECT day
  FROM generate_series(now(),
                       now() + INTERVAL '1 month',
                       '1 day') day;
This uses a set-returning function as a table source: in this case, a single-column table.
You can use scalar set-returning functions from within Django relatively easily: I blogged about it last year.
It is possible to create your own set-returning functions. Further, the return type can be a SETOF any table type in your database, or even a “new” table.
CREATE OR REPLACE FUNCTION foo(INTEGER, INTEGER)
RETURNS TABLE(id INTEGER,
              bar_id INTEGER,
              baz JSON) AS $$

  SELECT foo.id AS id,
         bar.id AS bar_id,
         json_agg(bar) AS baz  -- illustrative aggregate: build whatever JSON you need here
    FROM foo
   INNER JOIN bar ON (foo.id = bar.foo_id)
   WHERE foo.y = $1
     AND bar.x > $2
   GROUP BY foo.id, bar.id

$$ LANGUAGE SQL STABLE;
It’s possible to have a Postgres VIEW as the data source for a Django model (you just set the Meta.db_table on the model, and mark it as Meta.managed = False). Using a FUNCTION is a bit trickier.
You can use the QuerySet.raw() method, something like:
qs = Foo.objects.raw('SELECT * FROM foo(%s, %s)', [x, y])
The downside of using raw is that you can’t apply annotations, or use .filter() to limit the results.
What would be useful is if you could extract the relevant parameters out of a QuerySet, and inject them as the arguments to the function call.
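As a rough sketch of that idea (the names here are hypothetical, not actual Django API): given the set of argument names the function declares, we can split the kwargs passed to .filter() into function arguments and ordinary WHERE-clause filters:

```python
def split_filter_kwargs(function_fields, filter_kwargs):
    """Split .filter() kwargs into function-call arguments and
    ordinary filters that should become WHERE clauses."""
    func_args = {name: value for name, value in filter_kwargs.items()
                 if name in function_fields}
    where = {name: value for name, value in filter_kwargs.items()
             if name not in function_fields}
    return func_args, where
```

So a call like Foo.objects.filter(x=1, y=2, bar__gt=5) would feed x and y to foo(%s, %s), while bar__gt remains as a normal filter.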
But why would we want to have one of these set (or table) returning functions? Why not write a view?
I have some complex queries that reference a whole bunch of different tables. In order to be able to write a sane query, I decided to use a CTE. This allows me to write the query in a more structured manner:
WITH foo AS (
  SELECT ...
    FROM foo_bar
), bar AS (
  SELECT ...
    FROM foo
   GROUP BY ...
)
SELECT * FROM bar;
There is a drawback to this approach, specifically how it interacts with Django. We can turn a query like this into a view, but any filtering we want to do using the ORM will only apply after the view has executed. Normally, this is not a problem, because Postgres can “push down” the filters it finds in the query, down into the view.
But older versions of Postgres are unable to perform this optimisation on a CTE: each clause of a CTE must run (and be fully evaluated at that point in time) before the next one can run. In practice, if a clause of a CTE is not referenced later on, Postgres will not execute that clause, but that is the extent of the optimisation.
So, if we had 50 million objects in foo_bar, and we needed to filter them in a dynamic way (i.e., from the ORM), we would be unable to do this efficiently. The initial clause would execute for all 50 million rows, then any subsequent clauses would include all of these, and so on. Finally, the filtering would happen after the view had returned all its rows.
The workaround, using a function, is to use the parameters to do the filtering as early as possible:
CREATE OR REPLACE FUNCTION foo(INTEGER, INTEGER, INTEGER)
RETURNS TABLE(...) AS $$

  WITH foo_1 AS (
    SELECT ...
      FROM foo_bar
     WHERE x BETWEEN $1 AND $2
  ), bar AS (
    SELECT ...
      FROM foo_1
     INNER JOIN baz USING (baz_id)
     WHERE baz.qux > $3
     GROUP BY ...
  )
  SELECT * FROM bar;

$$ LANGUAGE SQL STRICT IMMUTABLE;
Notice that we filter foo_bar as early as we possibly can, and likewise filter baz the first time we reference it.
Now we have established why we may want to use a function as a model source, how do we go about doing that?
We are going to dig fairly deep into the internals of Django’s ORM now, so tighten up your boots!
When Django comes across a .filter() call, it looks at the arguments, and applies them to a new copy of the QuerySet’s query object: or, more specifically, its query.where node. This has a list of children, which Django will turn into SQL and execute later. The QuerySet does some validation at this point: it will only use those fields known to the QuerySet (either through being fields of the Model, or those added using .annotate()). Any others will result in an exception.
This will require some extension, as it is possible that one or more arguments to a Postgres function may not be fields on the Model class used to define the shape of the results.
Each Node within a QuerySet’s query has an .as_sql() method. This is the part that turns the Python objects into actual SQL code. We are going to leverage the fact that even the Python object that refers to the table itself uses .as_sql(), to safely allow passing parameters to the function-as-a-table.

The .as_sql() method of the BaseTable object returns just the name of the table (possibly with the current alias attached), and an empty list of params. We can swap this class out with one that will return an SQL fragment containing function_name(%s, %s) (with the correct number of %s arguments), and a list containing those parameters.
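Here is a minimal, self-contained sketch of that swap. These classes are stand-ins that mimic the shape of Django’s internals, not the real BaseTable:

```python
class BaseTable:
    """Stand-in for Django's table-reference object: renders as the
    (possibly aliased) table name, with no query parameters."""

    def __init__(self, table_name, alias=None):
        self.table_name = table_name
        self.alias = alias

    def as_sql(self, compiler=None, connection=None):
        sql = self.table_name
        if self.alias and self.alias != self.table_name:
            sql = '%s %s' % (self.table_name, self.alias)
        return sql, []


class FunctionTable(BaseTable):
    """Renders as function_name(%s, %s, ...) and carries the parameters,
    so a function call can stand anywhere a table name could."""

    def __init__(self, function_name, params, alias=None):
        super().__init__(function_name, alias)
        self.params = list(params)

    def as_sql(self, compiler=None, connection=None):
        placeholders = ', '.join(['%s'] * len(self.params))
        sql = '%s(%s)' % (self.table_name, placeholders)
        if self.alias and self.alias != self.table_name:
            sql = '%s %s' % (sql, self.alias)
        return sql, self.params
```

Because the parameters travel alongside the SQL fragment as bound params, they are escaped by the database driver rather than interpolated into the query string.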
Every Postgres function has a unique signature: the function name, and the list of parameters; or, more specifically, their types. Thus, Postgres will deem the functions:

foo(INTEGER, INTEGER)
foo(INTEGER, INTEGER, BOOLEAN)

as distinct entities. We will ignore for now the fact that it is possible to have optional arguments, variadic arguments and polymorphic functions.
We need some way of storing what the signature of a Postgres function is. Initially, I used an analogue (perhaps even just a subclass) of Django’s Model class. This enabled me to create (temporary) Exact(Col()) nodes to go in the query.where.children list, to be later removed and used for function arguments. I needed to ignore the primary key field, and it always felt wrong to add to the WhereNode, only to remove later.
I ended up settling on a class like Django’s Form class. It uses a metaclass (but requires a Meta.function_name), and uses Django’s form fields to define the arguments:

class FooFunction(Function):
    x = forms.IntegerField()
    y = forms.IntegerField()
    z = forms.BooleanField(required=False, default=True)

    class Meta:
        function_name = 'foo'
A Function subclass can generate a Manager on a Model class, and can also create the object that will in turn produce the relevant SQL. That part happens automatically, when a queryset created from the manager is filtered using the appropriate arguments. The Function subclass uses its fields to validate that the params it will be passing are valid for their types. It’s a bit like the clean() method on a Form.
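The declarative machinery can be sketched roughly as follows. This is a self-contained toy (a simple Field stand-in rather than Django’s form fields), so the shape is illustrative only:

```python
class Field:
    """Toy stand-in for a Django form field."""
    creation_counter = 0

    def __init__(self, required=True, default=None):
        self.required = required
        self.default = default
        # preserve declaration order, as Django's fields do
        self.creation_counter = Field.creation_counter
        Field.creation_counter += 1


class FunctionMeta(type):
    """Collects Field attributes and reads Meta.function_name."""
    def __new__(mcs, name, bases, attrs):
        meta = attrs.pop('Meta', None)
        fields = {key: value for key, value in attrs.items()
                  if isinstance(value, Field)}
        for key in fields:
            del attrs[key]
        cls = super().__new__(mcs, name, bases, attrs)
        cls.fields = dict(sorted(fields.items(),
                                 key=lambda item: item[1].creation_counter))
        cls.function_name = getattr(meta, 'function_name', None)
        return cls


class Function(metaclass=FunctionMeta):
    @classmethod
    def validate(cls, **kwargs):
        """Turn keyword arguments into an ordered parameter list,
        checking required arguments and filling in defaults."""
        params = []
        for name, field in cls.fields.items():
            if name in kwargs:
                params.append(kwargs[name])
            elif field.required:
                raise ValueError('Missing required argument: %s' % name)
            elif field.default is not None:
                params.append(field.default)
        return params


class FooFunction(Function):
    x = Field()
    y = Field()
    z = Field(required=False, default=True)

    class Meta:
        function_name = 'foo'
```

Validation then produces the ordered argument list for the function call, or raises when a required argument is missing.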
We then also need a model class (it could be a model you have already defined, if your function returns a SETOF that model’s table type):

class Foo(models.Model):
    bar = models.ForeignKey('foo.Bar', related_name='+', on_delete=models.DO_NOTHING)
    baz = JSONField()

    objects = FooFunction.as_manager()

    class Meta:
        db_table = 'foo()'
        managed = False
Because this is a new and unmanaged model, we need to set on_delete so that the ORM won’t try to cascade deletes, and mark the model as unmanaged. I also set the Meta.db_table to the function call without arguments, so it looks nicer in repr(queryset). It would be nice to be able to get the actual arguments in there, but I haven’t managed that yet.
If this was just a different way of fetching an existing model, you’d just need to add a new manager on to that model. Keep in mind that Django needs a primary key field, so ensure your function provides one.
Then, we can perform a query:
>>> print(Foo.objects.filter(x=1, y=2).query)
SELECT foo.id, foo.bar_id, foo.baz FROM foo(1, 2)
>>> print(Foo.objects.filter(x=1, y=2, z=True).query)
SELECT foo.id, foo.bar_id, foo.baz FROM foo(1, 2, true)
>>> print(Foo.objects.filter(x=1, y=2, bar__gt=5).query)
SELECT foo.id, foo.bar_id, foo.baz FROM foo(1, 2) WHERE foo.bar_id > 5
>>> print(Foo.objects.filter(x=1, y=2).annotate(qux=models.F('bar__name')).query)
SELECT foo.id, foo.bar_id, foo.baz, bar.name AS qux
FROM foo(1, 2) INNER JOIN bar ON (bar.id = foo.bar_id)
Note that filters that are not function parameters will apply after the function call. You can annotate on, and it will automatically create a join through a foreign key. If you omit a required parameter, then you’ll get an exception.
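Tying the pieces together, the final rendering step can be sketched like this. It is deliberately simplified (a handful of lookups, no joins, and hypothetical names, not the actual implementation):

```python
# Tiny subset of lookup suffixes mapped to SQL operators.
OPERATORS = {'': '=', 'exact': '=', 'gt': '>', 'gte': '>=', 'lt': '<', 'lte': '<='}


def compile_query(function_name, func_params, where):
    """Render SELECT * FROM function(...), with any remaining filters
    as a trailing WHERE; every value stays a bound parameter."""
    placeholders = ', '.join(['%s'] * len(func_params))
    sql = 'SELECT * FROM %s(%s)' % (function_name, placeholders)
    params = list(func_params)
    conditions = []
    for lookup, value in where.items():
        column, _, suffix = lookup.partition('__')
        conditions.append('%s %s %%s' % (column, OPERATORS[suffix]))
        params.append(value)
    if conditions:
        sql += ' WHERE ' + ' AND '.join(conditions)
    return sql, params
```

The function arguments render inside the FROM clause, while leftover filters land after it, mirroring the queryset output shown above.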
So, where’s the code for this?
Well, I need to clean it up a bit, and write up some automated tests on it. Expect it soon.