Multi-table Inheritance and the Django Admin

Django’s admin interface is a great way to interact with your models without having to write any view code, and, within limits, it’s useful in production too. However, it can quickly become very crowded when you register lots of models.

Consider the situation where you are using Django’s multi-table inheritance:

from django.db import models

from model_utils.managers import InheritanceManager

class Sheep(models.Model):
    sheep_id = models.AutoField(primary_key=True)
    tag_id = models.CharField(max_length=32)
    date_of_birth = models.DateField()
    sire = models.ForeignKey('sheep.Ram', blank=True, null=True, related_name='progeny')
    dam = models.ForeignKey('sheep.Ewe', blank=True, null=True, related_name='progeny')

    objects = InheritanceManager()

    class Meta:
        verbose_name_plural = 'sheep'

    def __str__(self):
        return '{}: {}'.format(self._meta.verbose_name, self.tag_id)


class Ram(Sheep):
    sheep = models.OneToOneField(Sheep, parent_link=True, primary_key=True)

    class Meta:
        verbose_name = 'ram'
        verbose_name_plural = 'rams'


class Ewe(Sheep):
    sheep = models.OneToOneField(Sheep, parent_link=True, primary_key=True)

    class Meta:
        verbose_name = 'ewe'
        verbose_name_plural = 'ewes'

Ignore the fact that there is no specialisation on those child models: in practice, you’d normally have some.

Also note that I’ve manually included the primary key and the parent link fields. This has been done so that the actual columns in the database match: in this case, they will all be called sheep_id. This makes writing joins slightly simpler, and avoids the (not specific to Django) ORM anti-pattern of “always have a column named id”.

We can use the models like this, but it might be nice to have all sheep in the one admin changelist, and just allow filtering by subclass model.

First, we’ll put some extra stuff onto the parent model, to make obtaining the subclasses simpler. Some of these will use a new decorator, which creates a class version of the @property decorator.

class classproperty(property):
    def __get__(self, cls, owner):
        return self.fget.__get__(None, owner)()


class Sheep(models.Model):
    # Fields, etc. defined as above.

    @classproperty
    @classmethod
    def SUBCLASS_OBJECT_CHOICES(cls):
        "All known subclasses, keyed by a unique name per class."
        return {
            rel.name: rel.related_model
            for rel in cls._meta.related_objects
            if rel.parent_link
        }

    @classproperty
    @classmethod
    def SUBCLASS_CHOICES(cls):
        "Available subclass choices, with nice names."
        return [
            (name, model._meta.verbose_name)
            for name, model in cls.SUBCLASS_OBJECT_CHOICES.items()
        ]

    @classmethod
    def SUBCLASS(cls, name):
        "Given a subclass name, return the subclass."
        return cls.SUBCLASS_OBJECT_CHOICES.get(name, cls)

Note that we don’t need to enumerate the subclasses: adding a new subclass later in development will automatically add it to these properties, even though in this case it would be unlikely to happen.
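For the models above, these evaluate to something like the following (exact dict ordering may vary):

>>> Sheep.SUBCLASS_OBJECT_CHOICES
{'ram': <class 'sheep.models.Ram'>, 'ewe': <class 'sheep.models.Ewe'>}
>>> Sheep.SUBCLASS_CHOICES
[('ram', 'ram'), ('ewe', 'ewe')]
>>> Sheep.SUBCLASS('ewe')
<class 'sheep.models.Ewe'>
>>> Sheep.SUBCLASS('unknown')  # Falls back to the parent class.
<class 'sheep.models.Sheep'>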

From these, we can write some nice neat stuff to enable using these in the admin.

from django import forms
from django.conf.urls import url
from django.contrib import admin
from django.utils.translation import ugettext as _

from .models import Sheep


class SubclassFilter(admin.SimpleListFilter):
    title = _('gender')
    parameter_name = 'gender'

    def lookups(self, request, model_admin):
        return Sheep.SUBCLASS_CHOICES

    def queryset(self, request, queryset):
        if self.value():
            return queryset.exclude(**{self.value(): None})
        return queryset


@admin.register(Sheep)
class SheepAdmin(admin.ModelAdmin):
    list_display = [
        'tag_id',
        'date_of_birth',
        'gender'
    ]
    list_filter = [SubclassFilter]

    def get_queryset(self, request):
        return super(SheepAdmin, self).get_queryset(request).select_subclasses()

    def gender(self, obj):
        return obj._meta.verbose_name

    def get_form(self, request, obj=None, **kwargs):
        if obj is None:
            Model = Sheep.SUBCLASS(request.GET.get('gender'))
        else:
            Model = obj.__class__

        # When we change the selected gender in the create form, we want to reload the page.
        RELOAD_PAGE = "window.location.search='?gender=' + this.value"
        # We should also grab all existing field values, and pass them as query string values.

        class ModelForm(forms.ModelForm):
            if not obj:
                gender = forms.ChoiceField(
                    choices=[('', _('Please select...'))] + Sheep.SUBCLASS_CHOICES,
                    widget=forms.Select(attrs={'onchange': RELOAD_PAGE})
                )

            class Meta:
                model = Model
                exclude = ()

        return ModelForm

    def get_fields(self, request, obj=None):
        # We want gender to be the first field.
        fields = super(SheepAdmin, self).get_fields(request, obj)

        if 'gender' in fields:
            fields.remove('gender')
            fields = ['gender'] + fields

        return fields

    def get_urls(self):
        # We want to install named urls that match the subclass ones, but bounce to the relevant
        # superclass ones (since they should be able to handle rendering the correct form).
        urls = super(SheepAdmin, self).get_urls()
        existing = '{}_{}_'.format(self.model._meta.app_label, self.model._meta.model_name)
        subclass_urls = []
        for name, model in Sheep.SUBCLASS_OBJECT_CHOICES.items():
            opts = model._meta
            replace = '{}_{}_'.format(opts.app_label, opts.model_name)
            subclass_urls.extend([
                url(pattern.regex.pattern, pattern.callback, name=pattern.name.replace(existing, replace))
                for pattern in urls if pattern.name
            ])

        return urls + subclass_urls

Wow. There’s quite a lot going on there, but the summary is:

  • We create a custom filter that filters according to subclass.
  • The .select_subclasses() means that objects are downcast to their subclass when fetched.
  • There is a custom form, that, when in create mode, has a selector for the desired subclass.
  • When the subclass is changed (only on the create form), the page is reloaded. This is required in a situation where there are different fields on each of the subclass models.
  • We register the subclass admin url paths, but use the superclass admin views.
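As a result of that last point, reversing a subclass’s admin URL names resolves to the shared Sheep admin views. A quick sketch (assuming the app label is sheep):

from django.core.urlresolvers import reverse  # django.urls in newer Django.

reverse('admin:sheep_sheep_add')  # -> '/admin/sheep/sheep/add/'
reverse('admin:sheep_ram_add')    # -> '/admin/sheep/sheep/add/' (same view)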

I’ve had ideas about this for some time, and have just started using something like this in development: in my situation, there will be an arbitrary number of subclasses, all of which will have several new fields. The code in this page is extracted (and changed) from those ideas, so may not be completely correct. Corrections welcome.

Django Proxy Model Relations

I’ve got lots of code I’d do a different way if I were to start over, but often, we have to live with what we have.

One situation I would seriously reconsider is the structure I use for storing data related to how I interact with external systems. I have an Application object, and I create instances of this for each external system I interact with. Each new Application gets a UUID, and is created as part of a migration. Code in the system uses this UUID to determine if something is for that system.

But that’s not the worst of it.

I also have an AppConfig object, and other related objects that store a relation to an Application. This was fine initially, but as my code got more complex, I hit upon the idea of using Django’s Proxy models, and using the related Application to determine the subclass. So, I have AppConfig subclasses for a range of systems. This is nice: we can even ensure that we only get the right instances (using a lookup to the application to get the discriminator, which I’d probably do a different way next time).

However, we also have other bits of information that we need to store, which have a relation to this AppConfig object.

And here is where we run into problems. Eventually, I had the need to subclass these other objects too. That gives a similar benefit to the above for fetching filtered lists of objects; however, when we try to follow the relations between these, something annoying happens.

Instead of getting the subclass of AppConfig that we probably want to use (because the business logic hangs off that), we get the plain AppConfig instance. So, in order to get the subclass, we have to fetch the object again, or swizzle the __class__. Going back the other way has the same problem.

Python is a dynamic language, so we should be able to do better.

In theory, all we have to do is replace the attributes on the two classes with ones that will do what we want them to do. In practice, we need to muck around a little bit more to make sure it all works out right.

It would be nice to be able to decorate the declaration of the overridden field, but that’s not valid python syntax:

>>> class Foo(object):
...   @override
...   bar = object()
  File "<stdin>", line 3
    bar = object()

So, we’ll have to do one of two things: alter the class after it has been defined, or leverage the metaclass magic Django already does.

class Foo(models.Model):
    bar = models.ForeignKey('bar.Bar')


class FooProxy(Foo):
    bar = ProxyForeignKey('bar.BarProxy')  # Note the proxy class

    class Meta:
        proxy = True

However, we can’t just use the contribute_to_class(cls, name) method as-is, as the Proxy model attributes get dealt with before the parent model. So, we need to register a signal, and get the framework to run our code after the class has been prepared:

from django.db import models
from django.db.models import ForeignKey
from django.dispatch import receiver


class ProxyField(object):
    def __init__(self, field):
        self.field = field

    def contribute_to_class(self, model, name):
        @receiver(models.signals.class_prepared, sender=model, weak=False)
        def late_bind(sender, *args, **kwargs):
            override_model_field(model, name, self.field)


class ProxyForeignKey(ProxyField):
    def __init__(self, *args, **kwargs):
        super(ProxyForeignKey, self).__init__(ForeignKey(*args, **kwargs))

Then, it’s a matter of working out what needs to happen to override_model_field.

It turns out: not much. Until we start thinking about edge cases, anyway:

# This descriptor lives here in the Django versions this was written
# against; it was renamed and moved in later releases.
from django.db.models.fields.related import ForeignRelatedObjectsDescriptor


def override_model_field(model, name, field):
    original_field = model._meta.get_field(name)

    model.add_to_class(name, field)
    if field.rel:
        field.rel.to.add_to_class(
            field.related_name,
            ForeignRelatedObjectsDescriptor(field.related)
        )

There is a little more to it than that:

  • We need to use the passed-in related_name if one was provided in the new field definition, else we want to use what the original field’s related name was. However, if not explicitly set, then neither field will actually have a related_name attribute.
  • We cannot allow an override of a foreign key to a non-proxy model: that would hijack the original model’s related queryset.
  • Similarly, we can only allow a single proxy to override-relate to another proxy: any subsequent override-relations would likewise hijack the related object queryset.
  • For non-related fields, we can only allow an override if the field is compatible. What that means I’m not completely sure just yet, but for now, we will only allow the same field class (or a subclass). Things that would require a db change would be verboten.

So, we need to guard our override_model_field somewhat:

def override_model_field(model, name, field):
    original_field = model._meta.get_field(name)

    if not isinstance(field, original_field.__class__):
        raise TypeError('...')

    # Must do these checks before the `add_to_class`, otherwise it breaks tests.
    if field.rel:
        if not field.rel.to._meta.proxy:
            raise TypeError('...')

        related_name = getattr(field, 'related_name', original_field.related.get_accessor_name())
        related_model = getattr(field.rel.to, related_name).related.model

        # Do we already have an overridden relation to this model?
        if related_model._meta.proxy:
            raise TypeError('...')

    model.add_to_class(name, field)

    if field.rel:
        field.rel.to.add_to_class(
            related_name,
            ForeignRelatedObjectsDescriptor(field.related)
        )

There is an installable app that includes tests: django-proxy-overrides.

Horizontal Partitioning in Postgres

It never surprises me when I find another neat feature of Postgres that makes doing a potentially difficult task simple. Today, I discovered that since 9.0, Postgres has supported a really powerful way to horizontally partition data into separate tables.

For those who haven’t heard of the concept before, horizontal partitioning is where different rows are stored in different tables, depending upon something about the data within the row.

For instance, we could partition audit data into tables based upon the timestamp of the action. Thus, all entries created during October 2014 would be stored in a table called audit_2014_10, and entries created during March 2011 would be stored in a table called audit_2011_03. And so on. Alternatively, we could have a single table per year, or partition however else we want.

This is called “Range” partitioning.

There are a couple of ways you could think about doing this in a DBMS. You could have a writable view that redirects the writes to the correct table. The problem then is that when you add a new child table, you need to rewrite your view.

Instead, we can use Postgres’ neat table inheritance to handle all of this for you. Indeed, it is discussed in the Postgres documentation.

If we inherit one table from another, and do a query on the parent table, we will also get the rows that match the query from all child tables. That obviates the need for a view that uses UNION ALL or similar to fetch the data.

I’m going to use a toy example here, that just contains a single column.

CREATE TABLE "data" ("value" TIMESTAMPTZ);

CREATE TABLE "data_2014" (
  CHECK (
    "value" >= '2014-01-01' AND "value" < '2015-01-01')
) INHERITS ("data");

CREATE TABLE "data_2015" (
  CHECK (
    "value" >= '2015-01-01' AND "value" < '2016-01-01')
) INHERITS ("data");

This, however, is only part of the picture. Any data that is added to either of the child tables (or indeed the parent table, but we won’t be doing that) will be returned when we query the parent table.

But we want to ensure that data is partitioned out nicely. For that, we can use a trigger.

A naïve trigger may look something like:

CREATE OR REPLACE FUNCTION data_insert_trigger()
RETURNS TRIGGER AS $$

BEGIN

  IF (NEW.value >= '2014-01-01' AND NEW.value < '2015-01-01') THEN
    INSERT INTO data_2014 VALUES (NEW.*);
  ELSIF (NEW.value >= '2015-01-01' AND NEW.value < '2016-01-01') THEN
    INSERT INTO data_2015 VALUES (NEW.*);
  ELSE
    RAISE EXCEPTION 'Date out of range. Please fix the data_insert_trigger() function.';
  END IF;

  RETURN NULL;
END;

$$ LANGUAGE plpgsql;

As you can see from the ELSE clause, we will actually need to do maintenance on this function as we start to get data that falls outside of our existing ranges. We will also need to create a new table for those rows.

It would be nice if we could automatically create tables that are missing, and handle any arbitrary values.

CREATE OR REPLACE FUNCTION data_insert_trigger()
RETURNS TRIGGER AS $$

DECLARE
  table_name text;
  year integer;
  start text;
  finish text;

BEGIN
  year := date_part('year', NEW.value);
  table_name := 'data_' || year;
  start := year || '-01-01';
  finish := (year + 1) || '-01-01';

  PERFORM 1 FROM pg_tables WHERE tablename = table_name;

  IF NOT FOUND THEN
    EXECUTE
      'CREATE TABLE '
      || quote_ident(table_name)
      || ' (CHECK ("value" >= '
      || quote_literal(start)
      || ' AND "value" < '
      || quote_literal(finish)
      || ')) INHERITS (data)';
  END IF;

  EXECUTE
    'INSERT INTO '
    || quote_ident(table_name)
    || ' VALUES ($1.*)'
  USING NEW;

  RETURN NULL;
END;

$$ LANGUAGE plpgsql;

CREATE TRIGGER data_insert_trigger
BEFORE INSERT ON data
FOR EACH ROW EXECUTE PROCEDURE data_insert_trigger();

You would also want to create any indexes on the child tables, as this is where they need to be, rather than the parent table.

This function is pretty neat: it first stores what the table name should be in a variable, as well as the two bounds for this table (start and finish). Then, we see if that table exists, and if not, create it. Finally, we then insert the values into the correct child table. I’m not sure I’d recommend using it as-is: it’s quite possibly subject to a race condition if two new records came in at the same time.
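One way to mitigate that race (a sketch only, not part of the original function) is to attempt the CREATE TABLE inside a sub-block, and ignore the failure when a concurrent insert has created the table first:

  BEGIN
    EXECUTE
      'CREATE TABLE '
      || quote_ident(table_name)
      || ' (CHECK ("value" >= '
      || quote_literal(start)
      || ' AND "value" < '
      || quote_literal(finish)
      || ')) INHERITS (data)';
  EXCEPTION WHEN duplicate_table THEN
    -- Another session created the table first; fall through to the INSERT.
    NULL;
  END;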

The one thing that was concerning me was that DDL changes to the parent table would not propagate to the child tables; however, this turned out not to be an issue at all. Since I mostly use Django, I want as little as possible that would require custom migration operations. A change like the following is applied to the parent and all child tables:

ALTER TABLE data ADD COLUMN confirmed BOOLEAN DEFAULT false;

The other thing worth noting is that Postgres will do a really good job of limiting the tables that are accessed to those that contain the relevant data:

INSERT INTO data VALUES
  ('2014-01-06 09:00:00'),
  ('2015-01-09 12:00:00'),
  ('2016-02-22 15:39:00');

EXPLAIN
SELECT * FROM data
WHERE value > '2014-01-01'
AND value < '2014-07-01';
                                      QUERY PLAN
---------------------------------------------------------------------------------------
 Append  (cost=0.00..42.10 rows=12 width=8)
   ->  Seq Scan on data  (cost=0.00..0.00 rows=1 width=8)
         Filter: ((value > '2014-01-01 00:00:00'::timestamp with time zone) AND
                  (value < '2014-07-01 00:00:00'::timestamp with time zone))
   ->  Seq Scan on data_2014  (cost=0.00..42.10 rows=11 width=8)
         Filter: ((value > '2014-01-01 00:00:00'::timestamp with time zone) AND
                  (value < '2014-07-01 00:00:00'::timestamp with time zone))
 Planning time: 0.292 ms
(6 rows)

We see that, apart from the (empty) parent table, only the data_2014 table is hit by the query. If your constraints do something like cast to DATE, then this may not happen. This was causing me some concern earlier, but letting Postgres coerce the data types fixed it.
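For instance, a constraint written with a cast, like the following illustrative one, may prevent the planner from excluding child tables when the query compares against a timestamptz:

CHECK ("value"::date >= '2014-01-01' AND "value"::date < '2015-01-01')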

However, you can’t use a tstzrange to query if you want these constraints to help the query planner:

-- Actually hits every table.
SELECT * FROM data WHERE value <@ '[2014-01-01, 2014-07-01)'::tstzrange;

It’s worth noting that if you change a value that would cause that row to belong in a different partition, this will fail.
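For example, with the tables above, nothing re-routes an UPDATE to the correct child table, so moving a row across a partition boundary violates the child’s constraint:

UPDATE data SET value = '2015-06-01' WHERE value = '2014-01-06 09:00:00';
-- ERROR:  new row for relation "data_2014" violates check constraint "data_2014_value_check"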

There are moves afoot to have this feature more tightly integrated into Postgres, perhaps using a syntax like:

CREATE TABLE data (value TIMESTAMPTZ)
PARTITION BY RANGE (value)
(PARTITION data_2014 VALUES LESS THAN '2015-01-01');

CREATE PARTITION data_2015 ON data VALUES LESS THAN '2016-01-01';

It’s not clear to me how you actually define the range. It also seems counter-productive to have to manually create the partition tables.

There are also tools that might be useful to handle the heavy lifting, like pg_partman. I haven’t used this, but it looks interesting.

Django Single Table Inheritance on the cheap.

There was a recent question on Stack Overflow about Django Single Table Inheritance (STI). It got me thinking about how to use my FSM-proxy stuff to do just STI.

Note: this only works when all sub-classes have the same fields: the example we are going to use here is different to a state machine, in that an object may not change state after it has been created.

class Sheep(models.Model):
  type = models.CharField(max_length=4)
  tag_number = models.CharField(max_length=64)

class Ram(Sheep):
  class Meta:
    proxy = True
  
class Ewe(Sheep):
  class Meta:
    proxy = True

In this case, we can fetch all sheep as Sheep.objects.all(). However, this gives us the objects as Sheep instances, when we want those with type='ram' to return Ram instances, and those with type='ewe' to return Ewe instances.

We can do this by the magic of __class__.__subclasses__().

class Sheep(models.Model):
  # fields as above
  
  def __init__(self, *args, **kwargs):
    super(Sheep, self).__init__(*args, **kwargs)
    # If we don't have a subclass at all, then we need the type attribute to match
    # our current class. 
    if not self.__class__.__subclasses__():
      self.type = self.__class__.__name__.lower()
    else:
      subclass = [x for x in self.__class__.__subclasses__() if x.__name__.lower() == self.type]
      if subclass:
        self.__class__ = subclass[0]
      else:
        self.type = self.__class__.__name__.lower()
    

This will automatically downcast Sheep objects to the correct subclass, based upon the type field.

It also sets the type field on objects that are instantiated without one (based on the current instance class). This enables us to do things like:

# Fetch all Sheep, downcast to correct subclass.
>>> Sheep.objects.all()
[<Ram: Ram object>, <Ram: Ram object>, <Ewe: Ewe object>]

# Automatically set the type on a class.
>>> Ram()
<Ram: Ram object>
>>> Ram().type
'ram'
>>> Sheep()
<Sheep: Sheep object>
>>> Sheep().type
'sheep'

# Automatically set the class on a valid subclass/type
>>> Sheep(type='ram')
<Ram: Ram object>
# Force the type field on an invalid type argument. [see below]
>>> Ram(type='ewe')
<Ram: Ram object>
>>> Sheep(type='foo')
<Sheep: Sheep object>
>>> Sheep(type='foo').type
'sheep'

The assumption I have made here is that, when instantiating a class with a type value that is not valid (that is, not our class or one of our subclasses), the type field is changed to match the current class.

The other assumption is that the parent class is also valid. In this case, it wouldn’t be, as sheep must be either ewes or rams (or wethers, but that’s another story).

We also need to be able to fetch Ewe and Ram objects using their manager. This is just as simple as filtering on the type.

class ProxyManager(models.Manager):
  def get_query_set(self): # Note: get_queryset in Django1.6+
    return super(ProxyManager, self).get_query_set().filter(type=self.model.__name__.lower())

class Ram(Sheep):
  objects = ProxyManager()
  class Meta:
    proxy = True

class Ewe(Sheep):
  objects = ProxyManager()
  class Meta:
    proxy = True

Now, we can do:

>>> Ram.objects.all()
[<Ram: Ram object>, <Ram: Ram object>]

Clearly, the models have been simplified: I have not shown any model methods that would be the different behaviours that the subclasses have.

Metaclass magic registry pattern

The Registry Pattern is something I use relatively frequently. In django, for instance, we see it used for the admin interface, and I used very derivative code for my first API generation tool: django-rest-api. For our integration with external POS and other systems, we need to register importers, so that the automated stats fetching is able to look for units that need to fetch data from an external system’s website, or parse incoming email headers for matching delivered data.

I had been using something similar to:

from base import BaseStatsImporter, register

class FooStatsImporter(BaseStatsImporter):
    # ...

register(FooStatsImporter)

This is all well and good, but it is annoying. I need to remember to register each class after I declare it.

Then I discovered the magic of __metaclass__, used with __new__:

class RegistryMetaClass(type):
    def __new__(cls, clsname, bases, attrs):
        new_class = super(RegistryMetaClass, cls).__new__(cls, clsname, bases, attrs)
        register(new_class)
        return new_class
        
class BaseStatsImporter(object):
    __metaclass__ = RegistryMetaClass
    
    # ...

As long as your subclasses don’t override __metaclass__, then every new subclass will be added to the registry.
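For reference, here is a minimal, self-contained sketch of the same pattern in Python 3 syntax, where the metaclass is passed as a class keyword argument (the list-based registry is illustrative):

REGISTRY = []

def register(cls):
    REGISTRY.append(cls)

class RegistryMetaClass(type):
    def __new__(cls, clsname, bases, attrs):
        new_class = super().__new__(cls, clsname, bases, attrs)
        register(new_class)
        return new_class

# Note that the base class itself lands in the registry too.
class BaseStatsImporter(metaclass=RegistryMetaClass):
    pass

class FooStatsImporter(BaseStatsImporter):
    pass

assert FooStatsImporter in REGISTRY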

Obviously, this is magic, and in some cases the explicit way would be better.

Why CustomUser subclasses are not such a good idea

Background

The system I work on has People, who may or may not be Users, and (very infrequently) Users who are not a Person. In fact, an extension to the system has meant that there will be more of these: a User who needs to be able to generate reports (say, a Franchisor who should only be able to access aggregate data from franchises, which might belong to multiple companies), but who is never rostered on for shifts, which is what the Person class is all about.

Anyway, the long and the short of this was that I thought it might be a good idea to look at sub-classing User for ManagementUser.

I guess I should have listened to those smarter than me who shouted that sub-classing User is not cool. They never gave any concrete reasons, but now I have one.

You cannot easily convert a superclass object to a specialised sub-class. Once a user is a User, it’s hard to make them into a ManagementUser.

It can be done: the following code takes a User (or whatever) subclass, a User (or any parent class) instance, and any other keyword arguments that should be passed into the constructor. It saves the newly upgraded object, and returns it.

def create_subclass(SubClass, old_instance, **kwargs):
    new_instance = SubClass(**kwargs)
    for field in old_instance._meta.local_fields:
        setattr(new_instance, field.name, getattr(old_instance, field.name))
    new_instance.save()
    return new_instance

However, it really should check that there isn’t an existing instance, and maybe some other checks.
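A sketch of that existence check (the wrapper name is mine; with multi-table inheritance the subclass row shares the parent's primary key):

def create_subclass_checked(SubClass, old_instance, **kwargs):
    # Refuse to upgrade if a subclass row already exists for this pk.
    if SubClass.objects.filter(pk=old_instance.pk).exists():
        raise ValueError('Already a {}.'.format(SubClass.__name__))
    return create_subclass(SubClass, old_instance, **kwargs)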

What advantages does sub-classing have?

The biggest advantage, or so I thought, was to have it so you can automatically downcast your models on user login, and then get access to the extended user details. For instance, if your authentication backend automatically converts User to Person, then you can get access to the Person’s attributes (like the company they work for, their shifts, etc) without an extra level of attribute access:

# request.user is always an auth.User instance:
request.user.person.company
# request.user might be a person, etc.
request.user.company

But it turns out that even this is bad. Now, in guard decorators on view functions, you cannot just test the value of an attribute, as not all users will have that attribute. Instead, you need to test to see if the attribute exists, and then test the attribute itself.
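For example, a guard can no longer be a simple attribute test (company here is the hypothetical Person-only attribute from above):

from django.contrib.auth.decorators import user_passes_test

# Works only if every user is guaranteed to be a Person:
#   @user_passes_test(lambda u: u.company)

# With a mix of User and Person instances, we must check for the attribute too:
@user_passes_test(lambda u: hasattr(u, 'company') and u.company)
def franchise_report(request):
    pass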

So, what do you do instead?

The preferred method in django for extending User is to use a UserProfile class. This is just a model that has a OneToOneField linked back to User. I would look at doing a very small amount of duck-punching, just to make getting hold of the profile object simpler:

import logging
from django.contrib.auth.models import User
from django.db import models

class Person(models.Model):
    user = models.OneToOneField(User, related_name="_person")
    date_of_birth = models.DateField(null=True, blank=True)

def get_person(user):
    try:
        return user._person
    except Person.DoesNotExist:
        pass

def set_person(user, person):
    user._person = person

if hasattr(User, 'person'):
    logging.error('Model User already has an attribute "person".')
else:
    User.person = property(get_person, set_person)

By having the person’s related name attribute as _person, we can wrap read access to it in an exception handler, and then use a view decorator like:

@user_passes_test(lambda u: u.person)
def person_only_view(request, **kwargs):
    pass

We know this view will only be available to logged in users who have a related Person object.

I will point out that I am duck-punching/monkey-patching here. However, I feel that this particular method of doing it is relatively safe. I check before adding the property, and in reality I probably would raise an exception rather than just log an error.

Displaying only objects without subclasses

Sometimes, the django.contrib.auth User model just doesn’t cut it.

I have bounced around between ways of handling this sorry fact: from the nasty system of Person-User relationships my production system uses (where, due to old legacy code, I need to keep the primary keys in sync), to monkey-patching User, using UserProfiles, and subclassing User.

First, a little on the nasty hack I have in place (and how that will affect my choices later on).

My project in work is a rostering system, where not everyone who is a Person in the system needs to be a User. For instance, most managers (who are Users) do not need their staff to be able to log in. However, they themselves must be a Person as well as a User, if they are to be able to log in, but also be rostered on.

Thus, there are many people in the system who are not Users. They don’t have a username, and may not even have an email address. Not that having an email address is that useful in the django User model, as there is no unique constraint upon that.

So, I am currently kind-of using Person as a UserProfile object, but there are Person instances that do not have an associated User, and some of these are required to have an email address, and first and last names. So, there is lots of duplication across these two tables, which needs to be kept in sync.

The solution I am looking at now moves in the other direction.

A Person is a subclass of User. It has the extra data that we need for our business logic (mobile phone number, company they work for), but I have also monkey-patched User to not require a username. We are moving towards using email addresses for login names anyway, so that isn’t a problem. That has its own concerns (not everyone has a unique email address, but there are workarounds for that).

But not every User will have a Person attached. The admin team’s logins will not (and this will be used to allow them to masquerade as another user for testing and bug-hunting purposes). So, we can’t just ignore the User class altogether and do everything with the Person class.

This is all well and good. I have an authentication backend that will return a Person object instead of a User object (if one matches the credentials). Things are looking good.

Except then I look in the admin interface. And there we have all of the Person objects’ related User objects, in the User table. It would be nice if we only had the ‘pure’ Users in here, and all Person objects were just in their category.

So, I needed a way to filter this list.

Luckily, django’s admin has this capability. In my person/admin.py file, I had the following code:

from django.contrib import admin
from django.contrib import auth

class UserAdmin(auth.admin.UserAdmin):
    def queryset(self, request):
        return super(UserAdmin, self).queryset(request).filter(person=None)

admin.site.unregister(auth.models.User)
admin.site.register(auth.models.User, UserAdmin)

And, indeed, this works.

But then I added another User subclass. Now we needed a type of user that is distinct from Person (they are never rostered, are not associated with a given company, but do log into the system).

I wanted the changes to the admin to be isolated within the different apps, so I needed to be able to get the currently installed UserAdmin class, and subclass that to filter the queryset. So the code becomes (in both admin.py files):

from django.contrib import admin
from django.contrib import auth

BaseUserAdmin = type(admin.site._registry[auth.models.User])

class UserAdmin(BaseUserAdmin):
    def queryset(self, request):
        return super(UserAdmin, self).queryset(request).filter(foo=None)

admin.site.unregister(auth.models.User)
admin.site.register(auth.models.User, UserAdmin)

The only difference in the two files is the foo. This becomes whatever this sub-class’s name is. Thus, it is person in the person/admin.py file, and orguser in the orguser/admin.py file.

The next step is to change the backend so that it will automatically downcast the logged in user to their child class. Other people have detailed this in the past: mostly the performance issue vanishes here because we are only looking at a single database query for a single object.
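A minimal sketch of such a backend (names are mine; it assumes the Person subclass from above, and tries the most specific class first):

from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.models import User

from person.models import Person  # Hypothetical import path.

class DowncastingBackend(ModelBackend):
    def get_user(self, user_id):
        # Try the most specific class first, falling back to a plain User.
        try:
            return Person.objects.get(pk=user_id)
        except Person.DoesNotExist:
            pass
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None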