The book SQL Antipatterns is one of my favourite books. I took the opportunity to reread it on a trip to Xerocon in Sydney, and as usual it enlightened me to things I am probably doing in my database interactions.
So, I’m going to look at these Antipatterns, and discuss how you can avoid them when using Django. This post is intended to be read with each chapter of the book. I’ve used the section headings, but instead of the chapter headings, I’ve used the Antipattern headings. They are still in the same order, though.
It seems the printed version of this book is on sale now: I’m tempted to buy a few extra copies for gifts. Ahem, cow-orkers.
Logical Database Design Antipatterns
Format Comma-Separated Lists
This one is pretty simple: use a relation instead of a Comma Separated field. In the cases described in the book, a ManyToManyField is in fact simpler than a Comma Separated field. Django gets a gold star here, both in ease of use and in documentation about relations.
However, there may be times when a relation is overkill, and a real array is better. For instance, when storing data related to which days of the week are affected by a certain condition, it may make sense to store it in this way.
But we can do better than a simple Comma Separated field. Storing the data in a Postgres Array means we can rely on the database to validate the data, and allows searching. Similarly, we could store it in JSON, too.
I’ve maintained a JSONField for Django, although it’s not easily queryable. However, an ArrayField is coming in Django 1.8. There are alternatives already available if you need to use one now: I’ve got a project to mostly backport the django.contrib.postgres features to 1.7, django-postgres.
Things like JSON, Array and Hstore are a better solution than storing otherwise-delimited values in a straight text column, too. With Django 1.7, it became possible to write custom lookups, which can leverage the DBMS’s ability to query these datatypes.
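As a rough sketch of the days-of-the-week case (assuming Django 1.8’s django.contrib.postgres, or a backport; the Alert model here is hypothetical):

from django.contrib.postgres.fields import ArrayField
from django.db import models

class Alert(models.Model):
    # Days this alert applies to: 0=Monday through 6=Sunday.
    days = ArrayField(models.IntegerField(), default=list)

# The containment lookup lets the database do the searching:
Alert.objects.filter(days__contains=[2])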
Always Depend on One’s Parent
Read chapter online.
Straight into a trickier one! Django’s documentation points out how to create this type of relation, but does not call out the possible issues. This book is worth it for this section alone.
So, how do we deal with trees in Django?
We can use django-mptt. This gives us (from what I can see) the “Nested Sets” pattern outlined in the book, but under the name “Modified Preorder Tree Traversal”.
I’m quite interested in the idea of using a Closure Table, and there are a couple of projects with quite different approaches to this:
- django-ctt: uses a Model class you inherit from.
- django-ct: better documented, but uses an unusual pattern of a pseudo-manager-thing.
Knowing me, I’m probably going to spend some time building a not-complete implementation at some point.
Update: Whilst I haven’t built an implementation of a Closure Table, I did implement recursive queries for an Adjacency List.
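A minimal sketch of the Adjacency List approach, using a recursive CTE via .raw() (the Node model and the app_node table name are assumptions):

from django.db import models

class Node(models.Model):
    # Adjacency List: each row stores a reference to its parent.
    parent = models.ForeignKey('self', null=True, blank=True, related_name='children')

    def descendants(self):
        # WITH RECURSIVE walks the whole subtree in a single query.
        return Node.objects.raw("""
            WITH RECURSIVE tree AS (
                SELECT * FROM app_node WHERE parent_id = %s
                UNION ALL
                SELECT n.* FROM app_node n JOIN tree t ON n.parent_id = t.id
            )
            SELECT * FROM tree
        """, [self.pk])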
One Size Fits All
Using a field id for all tables by default is probably one of the biggest mistakes I think Django makes. And, as we shall see, we can’t yet avoid it, for at least a subset of situations.
Indeed, Django can use any single column for the primary key, and doesn’t require the use of a key column named id. So, in my mind, it would have been better to use <tablename>_id, as suggested in the book, especially since you may also access the primary key attribute using the pk shortcut.
class Foo(models.Model):
    foo_id = models.AutoField(primary_key=True)
However, it’s not currently possible to do composite primary keys (though that may come soon), which makes doing the best thing for a plain ManyToManyField impossible: indeed, you don’t control that table anyway, and if you remove the id column (and create a proper primary key), things don’t work. In practice, you can just ignore this issue, since you (mostly) don’t deal with this table, or the objects from it.
So, assuming we change the id column to the name suggested in the book, what does that give us?
Nothing, until we actually need to write raw SQL, specifically code that joins multiple tables. Then, we are able to use a slightly less verbose way of defining the join, and not worry about duplicate columns named id:
SELECT * FROM foo_foo JOIN foo_bar USING (foo_id);
I’m still not sure if it’s actually worthwhile doing this or not. I’m going to start doing it, just to see whether there are any drawbacks (I’ve already found one in some of my own code, which hard-coded an id field), or any great benefits.
Leave out the Constraints
Within Django, it’s more work to create relations without the relevant constraints, and it’s not possible to create a table without a primary key, so we can just pass this one by.
Use a Generic Attribute Table
Again, it’s possible to create this type of monstrosity in Django, but not easy. A better solution, if your table’s requirements change, is to use migrations (included in Django 1.7), or a more flexible store, like JSON or Hstore. This also has the added advantage of being a column rather than a related table, which means you can fetch it in one go, simply. Similarly, with Postgres 9.3 you can do all sorts of querying, and even more in 9.4.
Document or key stores are no substitute for proper attributes, but they do have their uses.
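As a sketch (using the HStoreField from django.contrib.postgres, or a backport; the Product model is hypothetical):

from django.contrib.postgres.fields import HStoreField
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)
    # Variable attributes live in a single column, not an EAV table.
    attributes = HStoreField(default=dict)

# Key lookups are handled by the provided custom lookups:
Product.objects.filter(attributes__colour='red')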
The other solution is to use Model inheritance, which Django does well. You can choose either abstract or concrete table inheritance, and with something like django-model-utils, even get some nice features like fetching only the subtypes when fetching a queryset of superclass models.
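For the inheritance approach, a minimal sketch (hypothetical models):

from django.db import models

class Vehicle(models.Model):
    name = models.CharField(max_length=50)

    class Meta:
        # Abstract: each subclass gets its own complete table.
        # Omit this for concrete (multi-table) inheritance, which
        # puts shared columns in a vehicle table and joins to it.
        abstract = True

class Car(Vehicle):
    doors = models.IntegerField()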
Use Dual-Purpose Foreign Key
Unfortunately, Django comes with a built-in way to do this: so-called Generic Relations.
Using this, it’s possible to have an association from a given model instance to any other object of any other model class.
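For reference, the machinery looks something like this (the Notification model is hypothetical):

from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class Notification(models.Model):
    # The dual-purpose pair: a type discriminator plus an id that
    # no database-level foreign key constraint backs.
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    target = GenericForeignKey('content_type', 'object_id')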
“You may find that this antipattern is unavoidable if you use an object-relational programming framework […]. Such a framework may mitigate the risks introduced by Polymorphic Associations by encapsulating application logic to maintain referential integrity. If you choose a mature and reputable framework, then you have some confidence that its designers have written the code to implement the association without error.”
I guess we’ll just have to rely on the fact Django is a mature and reputable framework.
In all reality, I’ve used this type of relation once: for notifications that need to be able to refer to any given object. It’s also possible to use, say, a tagging app that had generic relations. But, I’m struggling to think of too many situations where it would be better than a proper relation.
I’ve also come across it in django-reversion, and running queries against objects from it is a pain in the arse.
Create Multiple Columns
Interestingly, the example for this Antipattern is the example I just used above: tags. And, this type of situation should be done in a better way: a proper relation, or perhaps an Array type. It all depends how good your database is at querying arrays. django.contrib.postgres makes this rather easy:
class Post(models.Model):
    name = models.CharField(...)
    tags = ArrayField(models.CharField(...), blank=True)

Post.objects.filter(tags__contains=['foo'])
What may not be so easy is getting all of the tags in use. This may be possible: I just haven’t thought of a way to do this yet. A nice syntax might be:
Post.objects.aggregate(All('tags'))
The SQL you might be able to use to get this could look like:
SELECT array_agg(DISTINCT tag) AS tags
FROM (
    SELECT unnest(tags) AS tag FROM posts
) t;
I’m not sure if there’s a better way to get this data.
Clone Tables or Columns
I can’t actually see that doing this in Django would be easy, or likely. It’s gotten me interested in some method of seamlessly doing Horizontal Partitioning as a method of archiving old data, and perhaps moving it to a different database. Specifically, moving old audit data into a separate store may become necessary at some point.
Partitioning using a multi-tenancy approach using Postgres’ schemata is another of my interests, and I’ve been working on a django-specific way to do this: django-boardinghouse. Note, this is a partial-segmentation approach, where some tables are shared, but others are per-schema.
Physical Database Design Antipatterns
Use FLOAT Data Type
Just don’t.
There’s a DecimalField, and no reason not to use it.
Specify Values in the Column Definition
The example the book uses is defining check constraints on a given column. Django’s approach is a bit different: the valid choices are defined on the field in the model, and can be changed in code at any time. Any existing values that are no longer valid are fine, but any attempt to save an object will require it to have one of the currently valid choices.
This is both better and worse than the problem described in the book. There’s no way (short of a migration) to change the existing data, but maybe that’s actually just better.
Again, the best solution is just to use a related field, but in some cases this is indeed overkill: specifically if values are unlikely to change.
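For reference, Django’s version of this looks something like (a hypothetical model):

from django.db import models

class Bug(models.Model):
    STATUS_CHOICES = [
        ('new', 'New'),
        ('open', 'Open'),
        ('fixed', 'Fixed'),
    ]
    # The valid values live in code, not in a CHECK constraint:
    # changing them needs no ALTER TABLE, but existing rows are only
    # re-validated when the object is next saved.
    status = models.CharField(max_length=10, choices=STATUS_CHOICES)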
Assume You Must Use Files
I’m still 50-50 on this one. Basically, storing binary files in your database (a) makes the database much bigger, which means it takes longer to back it up (and restore it), and (b) means that it’s harder to do things like use the web server, rather than the application server, to serve static files (even user-supplied ones that must be authenticated).
The main disadvantage, that files stored outside the database are not covered by your database backups, is purely an operations issue.
The secondary disadvantage, the lack of transactionality, is also easily solved: don’t delete files (unless necessary), and don’t overwrite them. If you really must, then use a Postgres NOTIFY delete-file <filepath> or similar, and have a listener that handles that.
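A rough sketch of such a listener, using psycopg2 directly (the channel name and payload format are assumptions):

import os
import select

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect('dbname=mydb')
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
conn.cursor().execute('LISTEN delete_file;')

while True:
    # Wait until the connection has a notification for us.
    if select.select([conn], [], [], 60) == ([], [], []):
        continue
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop()
        # Sent as: NOTIFY delete_file, '/path/to/file'
        if os.path.exists(notify.payload):
            os.remove(notify.payload)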
The other disadvantage, about SQL privileges, is mostly moot under Django anyway, as you are always running as the one database user.
Using Indexes Without a Plan
Indexes are fairly tangential to an ORM, so I’m going to pass over this one without too much comment. I’ve been doing a fair bit of index-level optimisation on my production database lately, in an effort to improve performance. Mostly, it’s better to optimise the query, as the likely targets for indexes probably already have them.
Query Antipatterns
Use Null as an Ordinary Value, or Vice Versa.
Python has its own None type/value, and using it in queries basically converts it into NULL. Django is a little annoying in how, at times, it stores empty strings instead of NULL in string fields. I was playing around with making these into proper NULLs, but it seemed to create other problems.
At least there is no established pattern of using other values instead of NULL.
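For illustration, the distinction matters when querying (a hypothetical field):

from django.db import models

class Person(models.Model):
    # blank=True permits empty input; without null=True, Django
    # stores the empty string rather than NULL.
    nickname = models.CharField(max_length=50, blank=True)

# These match different rows at the database level:
Person.objects.filter(nickname='')            # empty string
Person.objects.filter(nickname__isnull=True)  # actual NULL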
Reference Non-grouped Columns
Since I’m dealing with Postgres, I understand this one is not much of an issue. Your query will fail if you build it wrong. Which should be the way databases work.
Sort Data Randomly
Read this chapter online.
The problem of how to fetch a single random instance from a Model comes up every now and then on IRC; indeed, it did again last weekend. Unsurprisingly, I provided a link to this chapter.
One solution that is presented in the book is to select a single row, using a random offset:
import random
# Note: the initial version of this would fail since queryset.count()
# is the number of elements, randint(a, b) includes the value 'b',
# and queryset[b] would be out of range.
index = random.randint(0, queryset.count() - 1)
instance = queryset.all()[index]
This converts to the query:
SELECT * FROM "table" LIMIT 1 OFFSET %s;
However, without an ordering, I believe this will still do a complete scan of the table. Instead, you want to order on a column with an index, like the primary key:
instance = queryset.order_by('pk')[index]
It does take two queries, but sometimes two queries is better than one. Obviously, if your table was always going to be small, it may be better to do the random ordering:
instance = queryset.order_by('?')[0]
Pattern Matching Predicates
I’m sorry to say Django makes it far too easy to do this:
queryset.filter(foo__contains='bar')
Becomes something like:
SELECT * FROM "table" WHERE "table"."foo" LIKE '%bar%';
In many cases, this will be fine, but as you can imagine, you may get surprising matches, or performance may really suck.
Using Postgres’s full-text search is relatively simple: you can quite easily make a custom field that handles this, and with Django 1.7 or later, you can even create your own lookups:
from django.db import models

class TSVectorField(models.Field):
    def db_type(self, connection):
        return 'tsvector'

class TSVectorMatches(models.lookups.BuiltinLookup):
    lookup_name = 'matches'

    def process_lhs(self, qn, connection, lhs=None):
        lhs = lhs or self.lhs
        return qn.compile(lhs)

    def get_rhs_op(self, connection, rhs):
        return '@@ to_tsquery(%s)' % rhs

TSVectorField.register_lookup(TSVectorMatches)
Then, on a correctly defined field, you are able to do:
queryset.filter(foo__matches='bar')
Which roughly translates to:
SELECT * FROM "table" WHERE (foo @@ to_tsquery('bar'));
It’s actually a little more complicated than that, but I have a working prototype at https://bitbucket.org/schinckel/django-postgres/. There is a field class, but also an example within the search sub-app.
Clearly, you’ll want to be creating the right indexes.
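For instance, a GIN index could be added in a migration (a sketch: the app, table and column names are assumptions):

from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [
        ('blog', '0002_add_search_field'),  # hypothetical
    ]

    operations = [
        # A GIN index makes @@ matches against the tsvector column fast.
        migrations.RunSQL(
            'CREATE INDEX blog_post_search ON blog_post USING gin(search);',
            'DROP INDEX blog_post_search;',
        ),
    ]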
Solve a Complex Problem in One Step
By their very nature, ORMs tend to make this a little less easy to do. Because you don’t normally write the queries by hand, this scenario is less common than it would be with raw SQL access.
However, with Django, it is possible to write over-complicated queries, and also to use things like .raw() and .extra() to write “Spaghetti Queries”.
That said, with judicious use of these features, you can indeed write queries that perform exceptionally well: far better than the ORM is able to generate for you. It’s also worth noting that you can write really, really bad queries that take a very long time just using the ORM (without even doing things like N+1 queries for related objects).
Indeed, the “how to recognize” section of this chapter shows the biggest red flag I have noticed lately: “Just stick another DISTINCT in there”.
I’ve seen first-hand how a .distinct() can cause a query to take a very long time. Removing the need for the distinct by removing the join, and instead using subqueries, caused a query that was taking around 17 seconds with a given data set to suddenly take less than 200ms.
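A sketch of that kind of rewrite (the Post and Comment models are hypothetical): instead of joining and then de-duplicating, filter on a subquery:

# Join plus DISTINCT: the join multiplies the rows, and then we
# pay to de-duplicate them.
Post.objects.filter(comments__author=user).distinct()

# Subquery: no row multiplication, so no DISTINCT required.
Post.objects.filter(
    pk__in=Comment.objects.filter(author=user).values('post_id'),
)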
That alone has forced me to reconsider each and every time I use .distinct() in my code (and probably explains why our code that runs queries against django-reversion performs so horribly).
A Shortcut That Gets You Lost
I’ve used, in my SQL snippets in this post, the shortcut that is mentioned here: SELECT * FROM .... Luckily, Django doesn’t use this shortcut, and instead lists out every column it expects to see.
This has a really nice side-effect: if your database tables have not been migrated to add that new column, then whenever you try to run any queries against that table, you will have an error. Which is much more likely to happen immediately, rather than at 3am when that column is first actually used.
Application Development Antipatterns
Store Password in Plain Text
There is no, I repeat, no reason you should ever be doing this. It’s a cardinal sin, and Django has a great authentication and authorisation framework that you can extend however you need.
As noted in the legitimate uses section: if you are accessing a third-party system, you may need to store the password in a readable format. In this case, something like OAuth, if available, may make things a little safer.
Execute Unverified Input As Code
Read this chapter online.
Most of the risks of SQL Injection are mitigated when you use an ORM like Django’s. Of course, if you write .raw() or .extra() queries that don’t properly escape user-provided data, then you may still be at risk. .extra() in particular has arguments that allow you to pass an iterable of parameters, which will then be correctly escaped as they are added to the query.
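For example, both accept a params argument, so user-provided data is never interpolated into the SQL string itself (the model is hypothetical):

# Dangerous: user input formatted straight into the SQL.
Post.objects.raw("SELECT * FROM blog_post WHERE title = '%s'" % title)

# Safe: the database adapter escapes the parameter.
Post.objects.raw('SELECT * FROM blog_post WHERE title = %s', [title])

# Likewise for .extra():
Post.objects.extra(where=['title = %s'], params=[title])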
Filling in the Corners
Educate your manager if (s)he thinks it’s a bad thing to have non-contiguous primary keys. Transaction rollbacks, deleted objects: there are all sorts of reasons why there may be gaps.
Making Bricks Without Straw
It goes without saying that you should have error handling within your Python code.
Make SQL a Second-Class Citizen
This is kind-of the point of an ORM: to remove from you the need to deal with creating complex queries in raw SQL.
Your Django models are the documentation of your table structure, or documentation can be generated from them. Your migrations files show the changes that have been made over time. Naturally, both of these will be stored in your Source Code Management system.
Clearly, as soon as you are doing anything in raw SQL, then you should follow the practices you do with the rest of your code.
Testing in-database is something I am a little bit interested in. As I move more code into the database (often for performance reasons, sometimes because it’s just fun), it would be nice to have tests for these functions. I have a long list of things in my Reading List about Postgres Unit Testing. Perhaps I’ll get around to them at some point. Integrating these with the Django test runner would be really neat.
The Model Is an Active Record
Django’s use of the Active Record pattern is slightly different to Rails’. In Rails, the column types in the database control what attributes are on the model, but in Django, the Python object is the master. I think this is more meaningful, because everything you need to know about an object is in the model definition: you don’t need to follow the migrations to see what attributes you have.
I do like the concept of a Domain Model: it’s an approach I’ve lightly tried in the past. Perhaps it is an avenue I’ll push down further at some point. In some ways, Django’s Form classes allow you to encapsulate this, but mostly business logic still lives on our Model classes.
Summary
So, how did Django do?
Pretty good, I’d say. The ones that were less successful either don’t really matter most of the time (the primary key column is always called id, choices defined in the model), or you don’t really need to use them (Generic Relations, searching using LIKE %foo%, using raw SQL).
We do fall down a bit with files stored in the database, and fat models, but I would argue that those patterns work just fine, at least for me right now.