Django urlpattern nested regex groups

Had one of those annoying things where I could not figure out why it was not working; I learned something about how Django’s URL routing works along the way.

I created a new view within the admin that provides a summary of the permissions associated with a group, or set of groups. For our purposes, a company can have a number of groups associated with it, so I wanted to be able to optionally provide a company id: if it was provided, the view would only show the groups and permissions for that company; if not, it would show all of the groups and their permissions.

So, I had the urlpattern like:

# Included under '/company/...'
url(r'^((?P<company>\d+)?/)?groups/$', 'group_perms', name='company_group_permissions'),

This resolves fine. All of these variations work as expected: /company/groups/, /company//groups/ and /company/10/groups/.

However, I wanted to put a link in the admin change page for the company class, but was getting resolution errors, so I tried reverse directly:

reverse('group_perms', kwargs={'company': 10})
# -> NoReverseMatch: Reverse for 'group_perms' with 
#    arguments '()' and keyword arguments '{}' not found.

That’s odd. Maybe I was getting the name or something wrong:

# Checking what the working url actually resolves to:
from django.core.urlresolvers import resolve
resolve('/company/10/groups/')
# Result:
#   func=<function group_permissions at 0x104b96de8>, 
#   args=(), kwargs={'company': '10'}, 

Then, I removed the extra grouping in the regex:

url(r'^(?P<company>\d+)?/groups/$', 'group_perms', name='company_group_permissions'),

And it all works as expected. However, this slightly limits the available urls: /company/groups/ (no company id, and no empty path segment) no longer works.
I can live with that.
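You can see the difference between the two patterns using the re module alone, outside of Django entirely:

```python
import re

# The nested-group pattern, and the simplified one.
nested = re.compile(r'^((?P<company>\d+)?/)?groups/$')
flat = re.compile(r'^(?P<company>\d+)?/groups/$')

# Both match a provided company id, or an empty one with the slash kept.
for pattern in (nested, flat):
    assert pattern.match('10/groups/')
    assert pattern.match('/groups/')

# Only the nested version also matches a bare 'groups/'.
assert nested.match('groups/')
assert flat.match('groups/') is None
```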

I can’t find anything in the django docs that details this, although I kind-of remember reading that there are limits as to the ability of reverse() to generate urls.

Trust your tools, or how django's ORM bested me

Within my system, there is a complicated set of rules for determining if a person is “inactive”.

They may have been explicitly marked as inactive, or their company may have been marked as inactive. These are simple to discover and filter to only get active people:

Person.objects.filter(active=True, company__active=True)

The other clause for inactive users is if they only work at locations that have been marked as inactive. This means we can disable a location (within a company that remains active), and not have to manually deactivate the staff who only work at that location; it also means when we reactivate a location, staff will automatically be restored to an active state.

I’ve written the code several times that determines the activity status, but have never really been that happy with it. It generally degenerates into something that uses N+1 queries to discover the activity status of N people, or requires using django’s queryset.extra() method to run queries within the database.

Now, I had cause to fetch all active staff from the entire system. I had written a query to do this, but it was mistakenly including staff who are only active at inactive locations. I tried playing around with .extra(select={...}), but was not able to filter on the pseudo-fields that were generated.

Then, I had the idea to do the following:

# Assuming a matching manager method for active locations:
active = Location.objects.active()
inactive = Location.objects.inactive()

people = Person.objects.filter(
    Q(locations__in=active) | ~Q(locations__in=inactive))

As long as the objects active and inactive are querysets, they will be lazily evaluated, and the SQL that is generated is relatively concise:

SELECT "people".*
FROM "people"
LEFT OUTER JOIN "people_locations"
  ON ("people"."id" = "people_locations"."person_id")
WHERE (
  "people_locations"."location_id" IN (
    SELECT U0."id" FROM "location" U0 WHERE U0."status" = 0
  )
  OR NOT ((
    "people"."id" IN (
      SELECT U1."person_id" FROM "people_locations" U1
      WHERE (
        U1."location_id" IN (
          SELECT U0."id" FROM "location" U0 WHERE U0."status" = 1
        )
        AND U1."person_id" IS NOT NULL
      )
    )
    AND "people"."id" IS NOT NULL
  ))
)

This is much better than how I had previously done it, and has the bonus of being db-agnostic: whereas my previous solution used Postgres ARRAY types to aggregate the statuses of locations into a list.

The moral of the story: trust your high-level abstraction tools, and use them first. If you still have performance issues, then look at optimising.

Sorting dates in DataTables

If you have tabular data then, semantically, you’ll want to put it into an HTML table. It makes sense, and is certainly easier than trying to style nested divs to behave as a table.

The other really nice thing is that it’s fairly easy to use DataTables to then make that table dynamic. Especially useful if your table is large: I use it on a report of all customers in my work, and have just started using it in some user-facing pages. In essence, it is as simple as doing:

// Hypothetical table id; any rendered table element will do.
$('#customer-report').dataTable();

With this, you get sortable columns, pagination, and searching.

But sorting of dates sucks, unless they are in ISO8601 format. ISO8601 is fantastic, by the way. Not only are the dates/datetimes inherently unambiguous, but they sort alphabetically, as you would expect: because the fields run from most significant (year) to least significant, and every field is zero-padded, any date or datetime will be correctly sorted as a string.
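This property is easy to verify in a few lines of Python:

```python
from datetime import date

# Arbitrary sample dates, deliberately out of order.
dates = [date(2012, 6, 7), date(2009, 12, 25), date(2012, 1, 3)]

# ISO 8601 strings sort alphabetically into chronological order...
iso = [d.isoformat() for d in dates]
assert sorted(iso) == [d.isoformat() for d in sorted(dates)]

# ...while a "readable" rendering does not.
us_style = [d.strftime('%m/%d/%Y') for d in dates]
assert sorted(us_style) != [d.strftime('%m/%d/%Y') for d in sorted(dates)]
```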

However, the general public does not understand these two reasons for a ‘one true date format’, so we are generally forced to display it in a more readable format. Which doesn’t sort alphabetically.

There is a trick you can use to get sorting and nice dates using DataTables, though. For example, the following (rendered) html will sort correctly, both ascending and descending, but only display a nice format:

<td>
  <span style="display: none;">2012-06-07</span>
  Thursday, June 7th, 2012
</td>

In django, you can use the following snippet:

<td>
  <span style="display: none;">{{ value|date:"Y-m-d" }}</span>
  {{ value|date:"l, F jS, Y" }}
</td>

Recently, DataTables also had a blog post about how to use it with Twitter Bootstrap 2. I think it looks rather nice. And with this tip, it is so much more useful.

You can also use this way of thinking on other things that should be sorted differently to how they are printed.
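For instance (with invented sample values), file sizes can be sorted on the raw byte count while displaying a human-readable label:

```python
# Invented sample data: (display label, raw byte count) pairs.
sizes = [('2 MB', 2 * 1024 ** 2), ('512 bytes', 512), ('1 KB', 1024)]

# Sort on the raw value, display the label.
ordered = [label for label, raw in sorted(sizes, key=lambda pair: pair[1])]
print(ordered)  # → ['512 bytes', '1 KB', '2 MB']
```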

Spurious CORS Errors from Sentry

I realised the other day that Sentry, the awesome system we have been using for a while to track our error logs from our Django project, can also be used to track exceptions from other systems. Like Javascript. In fact, there is a client available: raven.js.

So, we have a server set up for work, but I have a side-project I have been working on, Workout Builder. So, I thought I’d set up a server in Heroku to act as my sentry server. And I found a nice simple way to get up and running: Daniel Watkins has a nice post over at Odd_Blog, Deploying Sentry on Heroku.

It’s pretty straightforward. I got it up and running in no time, and then attempted to set up an email service. Rather than use my actual account for sending, I thought I’d set up a sending-only account at my domain, hosted as a Gmail Apps domain. So, I set it up, and set about testing.

All of a sudden, I was getting errors (which only appeared after about 30 seconds) telling me my test domain was not permitted to send a request due to CORS. But I had been sending requests successfully before.

After lots of dicking around, I discovered it was because I did not have the gmail settings quite right. Instead of telling me what the problem was, something was masking the issue (that the server was timing out because the server/port combination was not correct), and jQuery thought it was a CORS issue.

So, fixing up the email sending settings, and it’s all gravy:

EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# The standard Gmail SMTP settings:
EMAIL_HOST = ''
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_PASSWORD = '<oh no you don\'t>'

Initially, I had used port 25 and EMAIL_USE_TLS = False. Eventually, I got it all right.

A new Garmin Communicator plugin

As part of my plan to create a workout editor, I had to look into the method of communicating between the Garmin plugin and the browser.

It feels like a Java application. It’s documented like one, too. But, it’s written in Prototype, and includes a whole stack of other tools, like XML handling, Ajax communication, and messaging. Things that should belong in separate parts, IMHO.

So, after a fair bit of plugging around, I was able to make enough sense of it to figure out exactly how it works:

  1. You unlock the plugin with a key-pair
  2. You get a list of devices
  3. If this is a send, then you set the value of a certain property.
  4. You start an Async communication.
  5. You poll the ‘finish’ version of that communication.
  6. When the communication is finished, if this was a receive, you load the data from a property.

I understand why they have made the plugin handle its communication in an async manner, but seriously: why not allow for a callback function when the communication is finished? To me, that feels like it would make so much more sense.

Anyway, my other main criticism is that it is inherently unsafe for multiple operations. Instead of, as would be possible with a callback that gets executed when the communication is finished, returning the data, it puts it into a property within the plugin. Which does mean that any bit of code can read it, but also means it’s possible to accidentally overwrite it, as the same property is used for writes.

So, the API for replacing it looks more like:

var plugin = new Garmin.Communicator();
plugin.readActivities(function(data) {
  // data contains the XML activity data.
});
It’s actually a little more complicated than this: we can pass in delegates, that will have callback methods called when certain events occur. These events are also pushed (using jQuery) onto the HTML element that is the plugin object. But, due to a jQuery bug, you need to listen further up the chain: so you can listen for these events on body.

This script will also add the plugin to the page if it cannot find it, and will run as a singleton: calling the constructor a second time will return the original object, but also add a new delegate to the list of delegates.

I’m tempted to remove the delegate handling, and simply have it as callback-based, but this is sort-of a transition from the way the Garmin team have done it. I’m concerned there may be issues with non-UI initiated read/write events (ie, those that happen on page load) ‘beating’ the plugin being ready, but that is a job for another day.

I’ve also written some Knockout bindings for this: but those are not quite ready for public consumption. I may actually write parsing code for the Training Center Database XML file, and the types it contains, and include that with this project. But, then I may be approaching the bloat seen in the actual Garmin plugin. At this stage, if you have a server that accepts TCX files, then this should be enough.

The project is on BitBucket, as usual: garmin-plugin.

TCX Files and Garmin Goals

I’m partway through writing a workout planning tool: it’s web-based, similar to Garmin Connect, but hopefully with a better interface. I want to be able to create workouts, but I’m really happy with Strava for my activity tracking.

Part of the appeal is being able to export the data to my Garmin Forerunner HRM: this really is one of those ‘scratch my own itch’ tools. So, I’ve had to learn a bit about the Garmin TCX format. There is documentation: it is just an XML file that matches the desired schema.

I’ve made a lot of progress with the workout creation, and even exporting this to TCX. Today, I decided to work on the Goal planning.

Some Garmin HRMs have a neat feature where you can set goals, which the watch will track as you work out. Thus, you could decide you want to run 50km in a given week, and it will show you how far along that goal you are, and how much time you have remaining. However, there is no way on the Forerunner 405cx to set goals on the device, nor with Garmin Training Center, and you have to use Garmin Connect.

The thing is, this part of the TCX file is undocumented. It is stored in the <Extensions> section, and here is my plan to document it a little better.

The basic structure of the file is:

<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<TrainingCenterDatabase>
  <Author xsi:type="Application_t">
    <!-- Application info goes here -->
  </Author>
  <Extensions>
    <ActivityGoals xmlns="">
      <!-- List of goals goes here -->
    </ActivityGoals>
  </Extensions>
</TrainingCenterDatabase>
We are only interested in what happens in the list of goals.

Mostly, a goal is fairly simple:

<ActivityGoal Current="0.0000000" Measure="DistanceMeters" Sport="All" Target="1000.0000000">
  <Name>Run 1km</Name>
  <Period Recurrence="Once">
    <StartDateTime>...</StartDateTime>
    <EndDateTime>...</EndDateTime>
  </Period>
</ActivityGoal>

From this we can see the following fields:

  • Current The amount of Measure that has been completed.
  • Measure The type of goal. Allowable values are: DistanceMeters, TimeSeconds, Calories and NumberOfSessions.
  • Sport You may limit the goal to activities of a given sport. Allowable values are: All, Running, Biking and Other. Note that Garmin Connect will allow you to choose other sports, however, the value will effectively be cast to one of these. Note also that these are the exact same values that are valid for a Workout sport type (with the addition of All).
  • Target What the actual target is.
  • Name The name of this goal. This will not be displayed on a Forerunner 405cx: not sure about other devices.
  • Period Recurrence At this stage, I’m not sure what other values than Once are permitted, but I will be investigating this: this could turn out to be a really nice way to have a repeating weekly goal.
  • StartDateTime, EndDateTime I’m happy to see these in ISO8601 format. Not surprised by that, as the Activity spec stuff (as well as Workout scheduling) is also all in ISO8601.
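For reference, a goal fragment like the one above can be built with Python’s standard library (element and attribute names as observed above; the dates are placeholders):

```python
import xml.etree.ElementTree as ET

# Build the ActivityGoal element with the observed attributes.
goal = ET.Element('ActivityGoal', {
    'Current': '0.0000000',
    'Measure': 'DistanceMeters',
    'Sport': 'All',
    'Target': '1000.0000000',
})
ET.SubElement(goal, 'Name').text = 'Run 1km'
period = ET.SubElement(goal, 'Period', {'Recurrence': 'Once'})
ET.SubElement(period, 'StartDateTime').text = '2012-07-16T00:00:00Z'
ET.SubElement(period, 'EndDateTime').text = '2012-07-21T23:59:59Z'

xml = ET.tostring(goal, encoding='unicode')
print(xml)
```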

I do have a couple of comments so far: the HRM watches are essentially timezone aware, and they pull their time from the GPS satellites. I wonder if goals will then respect this: I’m at +0930: if I set a goal to end at 2012-07-21T23:59:59Z, will it finish at that time (which is a UTC timestamp), or will it finish at midnight local time? Can you set goals that finish at other times than midnight?

Initial experiments appear to show no. Setting a time other than 23:59:59 means that the goal is not shown on the device. I don’t see this as a big disadvantage. Testing the timezone-ness of the period is harder: I need to wait until midnight to do so!

Secondly, what values are valid for the recurrence period? This requires some experimentation.

It appears to accept a value of Weekly, but as to if this actually does anything, I’m yet to discover. Considering it has an explicit StartDateTime and EndDateTime, unless the watch extrapolates and updates it, I’m not expecting it to do anything. Certainly, setting an EndDateTime in the past, and choosing Weekly does not appear to have any effect. Again, I’m going to have to wait until midnight clicks over to test this properly. Hopefully, it will update the start and finish times, and reset the current amount.

Also of interest: Garmin Connect sends through a dummy goal for every goal Measure you do not provide a goal for. However, this is not necessary: removing all but the goals you want to use from the generated TCX file does not prevent sending it to the device, but having an invalid Author block does prevent it from sending.

The Forerunner 405cx will only display one goal of each type (Measure). I believe it shows only the one that is closest to expiring.

When the Garmin agent sends the data to the watch, it removes it from the filesystem. This prevents it being re-sent. When data is received from the watch, it appears to re-create the activity file from the current goals set up in Garmin Connect. This kind-of makes sense, but is annoying, as any goals that have been set up in Garmin Connect will override the goals created elsewhere.

In practice, it means that in order to send goal data to the device, you must first download the relevant activities, and calculate just how much of each goal has been completed. I was hoping to be able to avoid this: if the watch sent us the goal Current figure, then we could just load this, and apply any changes to targets, without affecting the current value. However, with my device, at least, ActivityGoals are InputToUnit only. At least, if you have no goals in Garmin Connect, it doesn’t send back bogus (dummy) goal data!

Developing RESTful Web APIs with Python, ...

This week’s Python Weekly has a link to a presentation by Nicola Iarocci, called Developing RESTful Web APIs with Python, Flask and MongoDB.

I have a few minor concerns with some aspects of the content.

No form validation

This concerns me. I’ve recently started using django’s forms for my validation layer for API calls, and also for the generation of the serialised output. It’s not completely flawless, but it seems to be working quite well. It certainly is more robust than my hand-rolled scheme of validating data, and using code that is better tested than my own is always a bonus.

Instead, as we see later, there is a data validation layer. It has basically the same goal as django’s forms, but is somewhat more nested, rather than using classes. Also, using classes makes it easier to have inheritance, a great way to have shared rules. You could do this using the same function in your custom validation, but this feels disconnected.

scalable, high-performance, …

The integrity of my data is important to me. It’s very rare that the db is the limiting factor in my system’s performance, and having stuff written to disk as soon as it is ‘real’ is kind-of critical.

Okay, this is where I jump on my high horse: “versioning should happen in the media-type”. Or even better, resources should be forwards and backwards compatible, and clients should be written to handle (or ignore) changes to schemata.

@mimerender( ... )

A decorator that has 5 arguments? That will be applied to every view function? Surely there’s a way to do this without having to decorate every function. Django CBV FTW here.

“Thu, 1 Mar 2012 10:00:49 UTC”

Egad. I can’t think of a reason to have machine readable dates in any format other than ISO 8601. Purely for the reason of being able to sort dates whilst they are still strings.

Why not PUT?

Why not POST?

This is something that has been debated for ages. I think I kind-of agree with the author: PATCH is more explicitly a partial update. It does make me think about using some type of diff, but I guess using concurrency control covers the same ground.

"<link rel='parent' ... />"

Okay, HTML/XML inside a JSON object?

Why not have:

{
  "rel": "parent",
  "title": "...",
  "href": "..."
}

At least that way you’ll be able to parse the data out natively.

“updated”: “…”,
“etag”: “…”

I’m not sure if it is necessary/warranted/desired to have the etag as part of the representation. Especially if the etag is generated from the content: that would kind-of preclude it.

Personally, I generate etags from a higher resolution timestamp (possibly hashed with the object id, class or whatever). Whilst etags are opaque, having them as human readable helps with troubleshooting.
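A rough sketch of that scheme (the function name and details here are invented for the example):

```python
import hashlib
from datetime import datetime

# Hypothetical helper: build an etag from the object id and a
# microsecond-resolution timestamp; optionally hash it to make it opaque.
def make_etag(obj_id, updated, hashed=False):
    raw = '%s-%s' % (obj_id, updated.isoformat())  # human readable
    if hashed:
        return hashlib.sha1(raw.encode('utf-8')).hexdigest()
    return raw

stamp = datetime(2012, 3, 1, 10, 0, 49, 123456)
print(make_etag(42, stamp))  # → 42-2012-03-01T10:00:49.123456
```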

To me, this seems to be metadata, and should not be part of the object. I think you could argue a case that within Collection+JSON you could add this in, for convenience. It certainly would make it easier not to have to store the etag in a separate variable on the client, for one.

The discussion about Concurrency Control is quite good. Which reminds me: I enjoyed most of this presentation. I have some minor nitpicks, but some of those I understand the author’s choices. Some I don’t (date format). It’s certainly better than the REST API Design Rulebook, which is a load of junk.

KnockoutJS persistence using Simperium

I really like KnockoutJS. I’ve said that lots of times, but I mean it. It does one thing, two-way bindings between a data model and the GUI elements, really well.

Perhaps my biggest hesitation in using it in a big project is that there is no built-in persistence layer. This would appear to be a situation where something like Backbone has an advantage.

And then, last week, I came across Simperium.

“So,” I thought, “what if you were able to transparently persist KnockoutJS models using Simperium?”

// Assume we have a SIMPERIUM_APP_ID, and a logged in user's access_token.
var simperium = new Simperium(SIMPERIUM_APP_ID, {token: access_token});
// mappingOptions is a ko.mapping mappingOptions object: really only useful
// if your bucket contains homogenous objects.
var store = new BucketMapping(simperium.bucket(BUCKET_NAME), mappingOptions);

var tony = store.all()[0];

var alan = store.create({
  name: "Alan Tenari",
  date_of_birth: "1965-02-06",
  email: ""
});

Now, tony is an existing object we loaded up from the server, and alan is one we just created.

Both of these objects are mapped using ko.mapping, but, this is the exciting bit, every time we make a change to any of their attributes, they are automatically persisted back to simperium.

There is a little more to it than that: we may want to only persist valid objects, for instance.

This totally gets me excited. And, I’ve already written a big chunk of the code that actually does this!

But for that, you’ll just have to wait…

Metaclass magic registry pattern

The Registry Pattern is something I use relatively frequently. In django, for instance, we see it used for the admin interface, and I used very derivative code for my first API generation tool: django-rest-api. For our integration with external POS and other systems, we need to register importers, so that the automated stats fetching is able to look for units that need to fetch data from an external system’s website, or parse incoming email headers for matching delivered data.

I had been using something similar to:

from base import BaseStatsImporter, register

class FooStatsImporter(BaseStatsImporter):
    # ...

register(FooStatsImporter)

This is all well and good, but it is annoying. I need to remember to register each class after I declare it.

Then I discovered the magic of __metaclass__, used with __new__:

registry = []

class RegistryMetaClass(type):
    def __new__(cls, clsname, bases, attrs):
        new_class = super(RegistryMetaClass, cls).__new__(cls, clsname, bases, attrs)
        registry.append(new_class)
        return new_class

class BaseStatsImporter(object):
    __metaclass__ = RegistryMetaClass
    # ...

As long as your subclasses don’t override __metaclass__, then every new subclass will be added to the registry.
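For completeness, here is the whole pattern as a self-contained, runnable example; note that Python 3 spells it with the metaclass keyword argument instead of __metaclass__:

```python
registry = []

class RegistryMeta(type):
    def __new__(cls, clsname, bases, attrs):
        new_class = super().__new__(cls, clsname, bases, attrs)
        registry.append(new_class)  # every class built with this metaclass
        return new_class

# Python 3 syntax; under Python 2 this is __metaclass__ = RegistryMeta.
class BaseStatsImporter(metaclass=RegistryMeta):
    pass

class FooStatsImporter(BaseStatsImporter):
    pass

# Note that the base class itself is registered too, unless you guard
# against it in __new__.
print([cls.__name__ for cls in registry])
# → ['BaseStatsImporter', 'FooStatsImporter']
```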

Obviously, this is magic, and in some cases the explicit way would be better.

The Organism Application

I had an email from a self-confessed django beginner, asking for some assistance. Here is my solution, as I worked through it.

The Application

The application is designed to allow tracking information related to identifying various organisms. An organism may have many identifying features, such as on a tree, the height, and the leaf morphology, or on a bird, the colour of the feathers, size of the egg and so on. To make it simpler for the users, it would be useful to classify organisms as belonging to a type, which can then be used to limit the available choices of identifying features: if an organism is a bird, then we only show those features that make sense for a bird.

To do all of this, we can have a class structure that looks somewhat like:

from django.db import models

class OrganismType(models.Model):
    description = models.CharField(max_length=200)

class IdentificationField(models.Model):
    type = models.ForeignKey(OrganismType, related_name='id_fields')
    name = models.CharField(max_length=200)
    class Meta:
        unique_together = ('type', 'name')

class Organism(models.Model):
    common_name = models.CharField(max_length=200)
    latin_name = models.CharField(max_length=200, unique=True)
    type = models.ForeignKey(OrganismType, related_name='organisms')

class IdentificationDetail(models.Model):
    organism = models.ForeignKey(Organism, related_name="id_details")
    field = models.ForeignKey(IdentificationField)
    description = models.CharField(max_length=250)
    class Meta:
        unique_together = ('organism', 'field')

You’ll see I’ve also included a couple of unique_together constraints: I’ve assumed that each field for a given organism should only appear once.

Bending the admin to our will

Next, we can put all of this into the admin. This is really quite simple, but, as we will see, has its limits.

from django.contrib import admin

from models import OrganismType, Organism, IdentificationField, IdentificationDetail

class IdentificationFieldInline(admin.TabularInline):
    model = IdentificationField
    extra = 0

class OrganismTypeAdmin(admin.ModelAdmin):
    inlines = [IdentificationFieldInline]

class IdentificationDetailInline(admin.TabularInline):
    model = IdentificationDetail
    extra = 0

class OrganismAdmin(admin.ModelAdmin):
    inlines = [IdentificationDetailInline]
    list_display = ('common_name', 'latin_name', 'type')
    list_filter = ('type',), OrganismTypeAdmin), OrganismAdmin)

I’ve removed the extra empty forms on the formsets, it looks much cleaner. I’ve also used a couple of the nice features of the admin to make display of stuff better.

At this point, thanks to the magic of django, you now have an administrative interface. But, it doesn’t quite do what we want: that is, we haven’t limited which identification fields will be available in the organism’s inlines.

To do that, we need to fiddle with the formset.

from django import forms

from models import IdentificationField, OrganismType

class IdentificationDetailFormSet(forms.models.BaseInlineFormSet):
    def __init__(self, *args, **kwargs):
        super(IdentificationDetailFormSet, self).__init__(*args, **kwargs)
        for form in self.forms:
            self.update_choices(form)

    # We need to override the constructor (and the associated property) for the
    # empty form, so dynamic forms work.
    def _get_empty_form(self, **kwargs):
        form = super(IdentificationDetailFormSet, self)._get_empty_form(**kwargs)
        self.update_choices(form)
        return form
    empty_form = property(_get_empty_form)

    # This updates one form's 'field' field queryset, if there is an organism with type
    # associated with the formset. Otherwise, make the choice list empty.
    def update_choices(self, form):
        if 'type' in
            id_fields = OrganismType.objects.get(['type']).id_fields.all()
        elif and self.instance.type:
            id_fields = self.instance.type.id_fields.all()
        else:
            id_fields = IdentificationField.objects.none()
        form.fields['field'].queryset = id_fields

This process is something I’ve talked about before (and finding that post was what pointed the questioner in my direction), but I’ll discuss it again anyway. This is perhaps a more concrete example anyway.

We want to change the queryset available to a given field (in this case, confusingly called field), based on the value of a related object. In this case, we want to set the queryset of an identification detail’s field to all of the available identification fields on the related organism’s type. Whew!

As it turns out, it’s easier to see this in the code. Note also that if there is no selected organism type (as would be the case when an empty form is presented), no fields can be selected.

This alone would work: except that changing the organism’s type should change the available list of field types. There are two approaches that can be used: have all of the data available in the page somewhere, and use JavaScript to filter the available list of field types, or fetch the data dynamically from the server (again, using JavaScript) at the time the type is changed. If I were using something like KnockoutJS, then the former would be easier, and improve the responsiveness: the change would be immediate. Since I’m not using anything that doesn’t come with django, I’ll fetch the data on each change.

So, we are going to need some JavaScript. When we do the end-user page, it’s easy to see how to put that in, but we need to understand how to override django’s admin templates in order to inject it in this case.

The django documentation has some nice detail about how to do this: Overriding admin templates. In this case, we need to create a file within our app at templates/admin/organisms/organism/change_form.html. We want to just add data to the regular template, so we just inherit from it.

{% extends 'admin/change_form.html' %}

{% block after_related_objects %}
{{ block.super }}
<script type="text/javascript">
(function($) {
  $('#id_type').change(function() {
    $.ajax({
      url: "/admin/organisms/organismtype/" + this.value + '/fields/',
      type: 'get',
      success: function(data) {
        $('tr.form-row td.field-field select').html(data);
      }
    });
  });
})(django.jQuery);
</script>
{% endblock %}

The script here adds a change event handler to the organism type <select> element, that hits the server, and gets the list of fields for that type. It then sets the content of the inline identification detail field fields to the data the server returned. This clears whatever had been stored there previously, but that is probably the behaviour we want in this case. Note that I am hard-coding the URL for now: we’ll see a way to handle that in a better way later.

Only one thing remains: to actually write the view that returns the desired content of the <select> element. For now, we will put this into the admin class of the organism type. Again, later we’ll move this to a proper separate view, but doing it this way shows how easy it is to extend the admin interface.

Back in our admin module, we want to change the OrganismTypeAdmin class:


from django.contrib import admin
from django.conf.urls import patterns, url
from django.http import HttpResponse

# [snip]

class OrganismTypeAdmin(admin.ModelAdmin):
    inlines = [IdentificationFieldInline]

    def get_urls(self, **kwargs):
        urls = super(OrganismTypeAdmin, self).get_urls(**kwargs)
        urls = patterns('',
            url(r'^(.*)/fields/$', self.get_fields, name='organisms_organismtype_fields'),
        ) + urls
        return urls
    urls = property(get_urls)

    def get_fields(self, request, *args, **kwargs):
        data = "<option value>---------</option>"
        if args[0]:
            data += "".join([
                "<option value='%(id)s'>%(name)s</option>" % x
                for x in OrganismType.objects.get(pk=args[0]).id_fields.values()
            ])
        return HttpResponse(data)

We can use the fact that the admin model object provides its own urls, and we can override the method that generates them. We need to put our fields view before the existing ones (and allow empty strings where we want the primary key), else it will be matched by another route.

Finally, we write the view itself. If there was no primary key provided, we return a “null” option, otherwise we include that and the actual list of choices.

Doing it for real

Of course, in a real environment, we probably don’t want to give access to the admin interface to anyone but trusted users. And even then, limit that to as few as possible. In this case, I would suggest that the admin users would be creating the OrganismType objects, but creating Organism objects would be done by regular users. Which means we really only have a couple of pages that need to be written for the outside world:

  • View a list of organisms.
    • Filter the list of organisms by OrganismType
    • Search for an organism by common name or latin name
    • Search for an organism by some other means (feather colour, etc)
  • Create a new organism
  • Edit an existing organism
  • Fetch a list of field types for a given organism type (the get_fields view above.)

This may come in a future post: I had forgotten about this and need some time to get back into it.