Django sessions and security

We have an interesting set of requirements regarding session timeouts.

Our application is currently split into two parts: the newer stuff runs in a browser, but the older parts consume a JSON api, and are part of a native app. We recently stopped using HTTP Basic authentication, and instead use session-based authentication in both places. This was handy, as it allows us to:

  1. Not store the user’s password, even in memory on the local machine.
  2. Automatically have the user logged in when the native client links to an HTML page (by passing the session id through).

This is all well and good, but we have discovered a slight possible issue.

  1. User logs in to native client.
  2. User clicks on a button that loads a page in the browser (logging them in automatically).
  3. User closes browser window, but does not quit browser.
  4. Native client does not cleanly exit, and logout code is not called.

This means that the browser session is still logged in, even though the user would have no idea of this. This is a very bad thing(tm), as the next person to use the computer could have access to all of the previous user’s data.

So, we need the following to happen:

  • Logging out of the client logs out all of the linked (same session id) browser instances.
  • Closing a given browser window does not log out the session (the client may still be open, or there may be other linked browser windows).
  • When no requests are received within a given time period, the session expires.

So, we need a short session expiry time, but this should refresh every time a request occurs. The browser pages fetch notifications every 30 seconds, but the native client will also need to ping the server with some frequency for this to work.

This is somewhat different to the way django-session-security works. However, it does add a feature that may also be useful: if no user input is received on a given page within a timeout period, the session should expire. That may be hard to manage, though, as no activity may occur on one page while another page is getting lots of activity. For now, we might leave this out as a requirement.

It turns out Django can do everything that is required, out of the box. All you need to do is configure it correctly:

# settings.py

SESSION_SAVE_EVERY_REQUEST = True
SESSION_COOKIE_AGE = 60

The key is understanding that the session expiry time is only refreshed if the session is saved. Most requests will not save the session (my fetch of unread notifications doesn't, for instance), so without SESSION_SAVE_EVERY_REQUEST the session would expire after the expiry time, even if requests had been made in the meantime.
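For the native client's periodic ping mentioned above, a view as small as this would do. It's just a sketch (the view name and response body are mine, not from the post):

# views.py
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required
def ping(request):
  # With SESSION_SAVE_EVERY_REQUEST = True, handling any authenticated
  # request saves the session, pushing its expiry out by another
  # SESSION_COOKIE_AGE seconds.
  return HttpResponse('pong')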

Django Fieldsets

HTML forms contain a construct called a fieldset. These are generally used to segment a form: splitting a form into groups of fields that are logically grouped. Each fieldset may also have a legend.

Django’s forms have no native concept of a fieldset, but with a bit of patching we can make every django form capable of rendering itself using fieldsets, yet still be backwards compatible with non-fieldset-aware templates.

Ideally, we would like to be able to render a form in a way similar to:

<form>
  {% for fieldset in form.fieldsets %}
  <fieldset>
    <legend>{{ fieldset.title }}</legend>
    
    <ul>
      {% for field in fieldset %}
        <li>
          {{ field.label_tag }}
          {{ field }}
          {{ field.help_text }}
          {{ field.errors }}
        </li>
      {% endfor %}
    </ul>
  </fieldset>
  {% endfor %}
  
  <!-- submit button -->
</form>

And, it would make sense to be able to declare a form’s fieldsets in a manner such as:

class MyForm(forms.Form):
  field1 = forms.BooleanField(required=False)
  field2 = forms.CharField()
  
  class Meta:
    fieldsets = (
      ('Fieldset title', {
        'fields': ('field1', 'field2')
      }),
    )

This is similar to how fieldsets are declared in the django admin.

We can’t simply create a subclass of forms.Form and do everything there, as the metaclass stuff doesn’t work correctly. Instead, we need to duck-punch (monkey-patch).

First, we want to redefine the __init__ method of ModelFormOptions (the class the ModelForm metaclass uses to parse the Meta options), so it will accept a fieldsets attribute.

from django import forms
from django.forms.models import ModelFormOptions

_old_init = ModelFormOptions.__init__

def _new_init(self, options=None):
  _old_init(self, options)
  self.fieldsets = getattr(options, 'fieldsets', None)

ModelFormOptions.__init__ = _new_init

Next, we will need a Fieldset class:

class Fieldset(object):
  def __init__(self, form, title, fields, classes):
    self.form = form
    self.title = title
    self.fields = fields
    self.classes = classes
  
  def __iter__(self):
    # Similar to how a form can iterate through its fields...
    for field in self.fields:
      yield field

And finally, we need to give every form a fieldsets method, which will yield each fieldset, as a Fieldset defined above:

def fieldsets(self):
  meta = getattr(self, '_meta', None)
  if not meta:
    meta = getattr(self, 'Meta', None)
  
  if not meta or not getattr(meta, 'fieldsets', None):
    return
  
  for name, data in meta.fieldsets:
    yield Fieldset(
      form=self,
      title=name,
      fields=(self[f] for f in data.get('fields', ())),
      classes=data.get('classes', '')
    )

forms.BaseForm.fieldsets = fieldsets

I am using this code (or something very similar to it), in projects. It works for me, but your mileage may vary…

Django Proxy Model State Machine

Finite State Machines (FSMs) are a great way to model something that has, well, a finite number of known states. You can easily specify the different states, and the transitions between them.

Some time ago, I came across a great way of doing this in python: Dynamic State Machines. This maps well onto an idea I have been toying with lately, replacing a series of linked models representing different phases in a process with one model type. Initially, I had thought to just use a type flag, but actually changing the class seems like a better idea.

One aspect of django’s models that makes it easy to do this is the concept of a Proxy Model. These are models that share the database table, but have different class definitions. However, usually a model instance will be of the type that was used to fetch it:

class ModelOne(models.Model):
  field = models.CharField(max_length=32)
  
class ModelOneProxy(ModelOne):
  class Meta:
    proxy = True

ModelOneProxy.objects.get(pk=1) # Returns a ModelOneProxy object.
ModelOne.objects.all() # Returns all ModelOne objects.

However, by using a type field, we can, at the time it is fetched from the database, turn it into the correct type.

class StateMachineModel(models.Model):
  status = models.CharField(max_length=64)
  
  def __init__(self, *args, **kwargs):
    super(StateMachineModel, self).__init__(*args, **kwargs)
    self.__class__ = class_mapping[self.status]

However, having to store a registry of status : <ProxyModelClass> objects is not much fun.

Enter __subclasses__.

  @property
  def _get_states(self):
    """
    Get a mapping of {status: SubClass, ...}
    
    The status key will be the name of the SubClass, with the
    name of the superclass stripped out.
    
    It is intended that you prefix your subclasses with a meaningful
    name, that will be used as the status value.
    """
    return dict([
      (
        sub.__name__.replace(self.__class__.__name__, '').lower(),
        sub
      ) for sub in self.__class__.__subclasses__()
    ])
  
  # in __init__, above, replace the last line with:
    self.__class__ = self._get_states[self.status]

Now, we need to change the underlying class when the status gets changed:

  def __setattr__(self, attr, value):
    if attr == 'status':
      states = self._get_states
      if value in states:
        self.__class__ = states[value]
    return super(StateMachineModel, self).__setattr__(attr, value)

As the docstring on _get_states indicates, it looks at the subclass name, and compares it to the superclass name to work out the values that will be stored as the status (and used to dynamically change the class).

This has a fairly large implication: you cannot fetch database objects of any of the subclass types directly. Instead, you would need something like:

SuperClass.objects.filter(status="prefix")

Of course, you could use queryset methods to do this: that’s what I have been doing.
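A minimal sketch of that queryset-method approach (the manager and status names here are assumptions, not the post’s code):

class ShipmentManager(models.Manager):
  def pending(self):
    # Wraps the status filter, so callers never deal with the raw value.
    return self.filter(status='pending')

This would be attached to the concrete model as objects = ShipmentManager().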

This is still a bit of a work in progress: it’s not well tested, but is an interesting idea.

The full version of this model class, which is slightly different to above:

from django.db import models

class StateMachineModel(models.Model):
    status = models.CharField(max_length=64)
    
    class Meta:
        abstract = True
    
    def __init__(self, *args, **kwargs):
        self._states = dict([
            (sub.__name__.replace(self.__class__.__name__, '').lower(), sub)
            for sub in self.__class__.__subclasses__()
        ])
        super(StateMachineModel, self).__init__(*args, **kwargs)
        self._meta.get_field_by_name('status')[0]._choices = [(x, x) for x in self._states.keys()]
        self._set_state()
            
    def _set_state(self):
        if getattr(self, 'status', None) in self._states:
            self.__class__ = self._states[self.status]
    
    def __setattr__(self, attr, value):
        # Set the attribute first, then re-evaluate the class, so that
        # the new status value is the one used to pick the subclass.
        result = super(StateMachineModel, self).__setattr__(attr, value)
        if attr == 'status':
            self._set_state()
        return result
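As a usage sketch (the model and status names are made up, not from the post), the concrete model subclasses the abstract base, and each state is a proxy model whose name embeds the status value:

class Shipment(StateMachineModel):
    tracking_code = models.CharField(max_length=32)

class PendingShipment(Shipment):
    class Meta:
        proxy = True

class DeliveredShipment(Shipment):
    class Meta:
        proxy = True

Shipment(status='pending') comes back as a PendingShipment, and assigning shipment.status = 'delivered' swaps its class to DeliveredShipment.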

Neat and tidy read-only fields

There’s a recurring pattern I keep seeing: a field in a model that needs to be read-only. It is usually a Company to which an object belongs, but it also occurs where an object belongs to some collection, and isn’t permitted to be moved to a different collection.

Whilst there are some workarounds that apply the field’s value to the instance after creating it, it’s nicer to be able to apply the read-only nature declaratively, and not have to remember to do something in the form itself.

Unfortunately, in django, normal form field subclasses aren’t passed the initial value when cleaning their data. But forms.FileField objects are. So we can abuse that a little.

We also need a widget that always reports its value as unchanged, and always re-renders with the initial value.

from django import forms

class ReadOnlyWidget(forms.HiddenInput):
  def render(self, name, value, attrs=None):
    # Always render the initial value, ignoring any submitted data.
    value = getattr(self, 'initial', value)
    return super(ReadOnlyWidget, self).render(name, value, attrs)

  def _has_changed(self, initial, data):
    return False

class ReadOnlyField(forms.FileField):
  widget = ReadOnlyWidget

  def __init__(self, *args, **kwargs):
    forms.Field.__init__(self, *args, **kwargs)

  def clean(self, value, initial):
    self.widget.initial = initial
    return initial

So, that’s all well and good. But a common use for me was for this field to be a related field: a Company as described above, or a User.

Enter ReadOnlyModelField, and ReadOnlyUserField.

Now, ReadOnlyModelField is a bit tricky: it’s not actually a class, but a factory function, so we will look at ReadOnlyUserField first:

from django.contrib.auth.models import User

class ReadOnlyUserField(ReadOnlyField):
  def clean(self, value, initial):
    initial = super(ReadOnlyUserField, self).clean(value, initial)
    return User.objects.get(pk=initial)

Note that this will incur a database query.

Now, we are ready to look at a more general case:

def ReadOnlyModelField(ModelClass, *args, **kwargs):
  class ReadOnlyModelField(ReadOnlyField):
    def clean(self, value, initial):
      initial = super(ReadOnlyModelField, self).clean(value, initial)
      return ModelClass.objects.get(pk=initial)
  return ReadOnlyModelField(*args, **kwargs)

This is a bit tricky. We create a function that looks like a class, but actually creates a new class when it is called. This is so we can use it in a form definition:

class MyForm(forms.ModelForm):
  company = ReadOnlyModelField(Company)
  
  class Meta:
    model = MyModel

Django AJAX Forms

I think the more Django code I write, the more I like one particular feature.

Forms.

Simple as that. Forms are the reason I keep coming back to django, and discard other web frameworks in other languages, even though I really want to try them.

One pattern I have been using a fair bit, which was touched on in another post, is using AJAX to handle form submission, and displaying the response.

Before we continue, a quick recap on what Django’s forms offer us.

  • A declarative approach to defining the fields a form has, including validation functions.
  • Will render themselves to HTML input elements, as appropriate.
  • Handle validation of incoming form-encoded (or otherwise provided) data.
  • Fields can validate themselves, and can include validation error messages as part of the HTML output.
  • (Model forms) handle instantiation and updating of model instances.

A normal form-submission cycle contains a POST or GET request to the server, which responds with a fresh HTML page, which the browser renders. The normal pattern for successful POST requests is to redirect to a GET afterwards, to prevent duplicate submission of forms.

Doing an AJAX request instead of a full-page request means we can:

  • reduce the amount of data that is sent back from the server
  • improve apparent performance by only re-rendering the relevant data
  • reduce the amount of time spent rendering parts of the page that have not changed, such as menu, etc.

The way I have been doing it, in broad terms, is to have a template just for the form. If the request is an ajax request, then this will be rendered and returned. If it’s not an ajax request, then the full page will be returned.

Some example code, for one way to do this:

def view(request, pk):
  instance = MyModel.objects.get(pk=pk)
  
  if request.is_ajax():
    template = 'form.html'
  else:
    template = 'page.html'
  
  
  if request.method == 'POST':
    form = MyForm(request.POST, instance=instance)
    if form.is_valid():
      form.save()
      if not request.is_ajax():
        return redirect('redirect-name-here')
  else:
    form = MyForm(instance=instance)
  
  return render(request, template, {'form': form})

Our template files. page.html:

{% extends 'base.html' %}

{% block main %}
  {% include 'form.html' %}
{% endblock %}

{% block script %}
{# Assumes jQuery is loaded... #}
{# This should be in a separate script file #}
<script>
$(document).on('submit', 'form.dynamic-form', function(event) {
  // Stop the browser doing a full-page submission.
  event.preventDefault();
  var form = this;
  var $form = $(form);
  $.ajax({
    type: form.method,
    url: form.action,
    data: $form.serialize(),
    success: function(data) {
      // Swap the old form for the freshly rendered one.
      $form.replaceWith(data);
    }
  });
});
</script>
{% endblock %}

And form.html:

<form action="/path/to/url/" method="POST" class="dynamic-form">
  {% csrf_token %}
  {{ form }}
  <button type="input">Submit</button>
</form>

Obviously, this is a fairly cut-down example, but it gets the message across.

One thing I dislike in general about django is that failed form submissions are returned with a status code of 200: personally I think a 409 is more appropriate in most cases, but returning a 200 actually means this code is simpler.
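If you did want the non-200 behaviour, the view above only needs a small change. This is just a sketch (the 409 status is my preference here, not a Django convention):

  if request.method == 'POST':
    form = MyForm(request.POST, instance=instance)
    if form.is_valid():
      form.save()
      if not request.is_ajax():
        return redirect('redirect-name-here')
    elif request.is_ajax():
      # Invalid ajax submission: re-render just the form, with a 409.
      return render(request, template, {'form': form}, status=409)

The jQuery handler would then also need an error callback, since success only fires for 2xx responses.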

Capture and test sys.stdout/sys.stderr in unittest.TestCase

Testing in Django is usually done using the unittest framework, which comes with Python. You can also test using doctest, with a little bit of work.

One advantage of doctest is that it’s super-easy to test for an exception: you just expect the traceback (which can be trimmed using \n ... \n).

In a unittest.TestCase, you can do a similar thing, but it’s a little more work.

Basically, you want to temporarily replace sys.stdout (or sys.stderr) with a StringIO instance, and set it back after the block you care about has finished.

Python has had a nice feature for some time called Context Managers. These enable you to ensure that cleanup code will be run, regardless of what happens in the block.

The syntax for running code within a context manager is:

with context_manager(thing) as other:
  # Code we want to run
  # Can use 'other' in here.

One place you can see this syntax, in the context of testing with unittest, is checking that a specific exception is raised. The context-manager form is handy when the call uses keyword arguments, or when the code under test is a statement rather than a callable:

class FooTest(TestCase):
  def test_one_way(self):
    self.assertRaises(ExceptionType, callable, arg1, arg2)

  def test_another_way(self):
    with self.assertRaises(ExceptionType):
      callable(arg1, arg2)
      # Could also be:
      #     callable(arg1, arg2=arg2)
      # Or even:
      #     foo = bar + baz
      # Which are not possible in the test_one_way call.

So, we could come up with a similar way of calling our code that we want to capture the sys.stdout from:

class BarTest(TestCase):
  def test_and_capture(self):
    with capture(callable, *args, **kwargs) as output:
      self.assertEquals("Expected output", output)

And the context manager:

import sys
from cStringIO import StringIO
from contextlib import contextmanager

@contextmanager
def capture(command, *args, **kwargs):
  out, sys.stdout = sys.stdout, StringIO()
  try:
    command(*args, **kwargs)
    sys.stdout.seek(0)
    yield sys.stdout.read()
  finally:
    sys.stdout = out

It’s simple enough to do the same with sys.stderr.
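For completeness, a matching sketch for sys.stderr (same approach, same caveats as capture above):

@contextmanager
def capture_stderr(command, *args, **kwargs):
  # Identical to capture(), but swaps out sys.stderr instead.
  err, sys.stderr = sys.stderr, StringIO()
  try:
    command(*args, **kwargs)
    sys.stderr.seek(0)
    yield sys.stderr.read()
  finally:
    sys.stderr = err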

Update: thanks to Justin Patrin for pointing out that we should wrap the command in a try:finally: block.

My own private PyPI

PyPI, formerly the CheeseShop, is awesome. It’s a central repository of python packages. Knowing you can just do a pip install foo, and it will look on pypi for a package named foo, is superb. Using pip requirements files, or setuptools install_requires, means you can install all the packages you need, really simply.

And, the nice thing about pip is that it won’t bother downloading a package you already have installed, subject to version requirements, unless you specifically force it to. This is better than installing using pip install -e <scm>+https://... from a mercurial or git repository. This is a good reason to have published version numbers.

However, when installing into a new virtualenv, it still may take some time to download all of the packages, and not everything I do can be put onto pypi: quite a lot of my work is confidential and copyrighted by my employer. So, there is quite a lot of value to me to be able to have a local cache of packages.

You could use a shared (between all virtualenvs) --build directory, but the point of virtualenv is that every environment is isolated. So, a better option is a local cache server. And for publishing private packages, a server is required too. Being able to use the same workflow for publishing a private package as for an open source package is essential.

Because we deploy using packages, our private package server is located outside of our office network. We need to be able to install packages from it on our production servers. However, this negates the other advantage of a pypi cache. It does mean we control all of the infrastructure required to install: no more “We can’t deploy because github is down.”

So, the ideal situation is to actually have two levels of server: our private package server, and then a local cache server on each developer’s machine. You could also have a single cache server in the local network, or perhaps three levels. I’m not sure how much of a performance hit not having the cache on the local machine is.

To do this, you need two things. Your local cache needs to be able to use an upstream cache (no dicking around with /etc/hosts please), and your private server needs to be able to provide data to this.

The two tools I have been using handle neither of these. pypicache does not handle upstream caching; however, this was easy to patch. My fork handles upstream caching, plus uses setuptools, enabling it to install its own dependencies.

localshop, however, will not work as an upstream cache, at least for pypicache, which uses APIs other than those used by pip. However, it does have nice security features, and moving away from it would require me to extract the package data out. pypicache works to a certain extent with itself as an upstream cache, until you try to use its ‘requirements.txt caching’ feature. Which I tried to do tonight.

Oh well.

Django and RequireJS

Until very recently, I was very happy with django-compressor. It does a great job of combining and minifying static media files, specifically JavaScript and CSS files. It will manage compilation, allowing you to use, for example, SASS and CoffeeScript. Not that I do.

But, for me, the best part was the cache invalidation. By combining JavaScript (or CSS) into files that get named according to a hash of their contents, it’s trivial for clients to not have an old cached JS or CSS file.

However, recently I have begun using RequireJS. This enables me to declare dependencies, and greatly simplify the various pages within my site that use specific JavaScript modules. But this does not play so well with django-compressor. The problem lies with the fact that there is no real way to tell RequireJS that “instead of js/file.js, it should use js/file.123ABC.js”, where 123ABC is determined by the static files caching storage. RequireJS will do optimisation, and this includes combining files, but that’s not exactly what I want. I could create a built script for each page that has a require() call in it, but that would mean jQuery etc. get downloaded separately for each different script.

I have tried using django-require, but using the {% require_module %} tag fails spectacularly (with a SuspiciousOperation exception). And even then, the files that get required by a dependency hierarchy do not have the relevant version string.

That is, it seems that the only way to get the version numbering is to use django’s templating system over each of the javascript files.

There appear to be two options.

**List every static file in require.config({paths: ...}).**

This could be done manually, but it may be possible to rewrite a config.js file automatically, as we do have access to all of the processed files as part of the collectstatic process.

Basically, you need to use {% static 'js/file.js' %}, but strip off the trailing .js.
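One way that rewrite could be generated (a rough sketch, not django-require’s approach or my actual code; the module list is made up):

from django.contrib.staticfiles.storage import staticfiles_storage

MODULES = ('js/app', 'js/vendor/jquery')  # whatever your pages require()

def require_paths():
  # staticfiles_storage.url('js/app.js') returns the hashed name, e.g.
  # /static/js/app.123ABC.js; RequireJS paths entries must omit '.js'.
  return dict(
    (name, staticfiles_storage.url(name + '.js')[:-len('.js')])
    for name in MODULES
  )

The resulting mapping can then be dumped into a require.config({paths: ...}) call in a generated config.js.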

**Rewrite the static files.**

Since we are uglifying the files anyway, we could look at each require([...], function(){ ... }) call, and replace the required modules. I think this would actually be more work, as you would need to reprocess every file.

So, the former looks like the solution. django-require goes close, but, as mentioned, doesn’t quite get there.

Python deployment using fabric and pip

I’ve spent a not insignificant amount of time working on a deployment function for within my fabfile.py (the configuration file used by Fabric). It’s well worth the investment, as being able to deploy with a single command (potentially to many servers) is not only faster, but much less prone to human error.

Currently, I’m using Mercurial as my source control. I’m also using it for deployment, but I’d like to get away from that.

My deployment process looks something like this:

  1. Ensure the local repository has no uncommitted changes.
  2. Ensure the requirements.txt file is exactly the same as the output from pip freeze.
  3. Copy our public key to the remote server, for the user www-data, if it is not already installed there.
  4. Create a virtualenv in the desired location on the server, if there is not one already there.
  5. Ensure mercurial is installed on the server.
  6. Push the local repository to the remote server. This will include any subrepositories. I do a little bit of fancy magic to ensure the remote subrepositories exist.
  7. Update the remote server’s repository to the same revision as we are at locally. This means we don’t necessarily need to always deploy to tip.
  8. Install the dependencies on the remote server.
  9. Run some django management commands to ensure everything is setup correctly.
    • collect static files
    • sync the database
    • run migrations
    • ensure permissions are correct
    • compress static files
  10. Restart the various services that need to be restarted.

This process is based around requirements files for a very good reason. pip is very good at recognising which packages are already installed, and not reinstalling them if the version requirements are met. I use pip freeze > requirements.txt to ensure that what will be deployed matches exactly with what I have been developing (and testing) against.

However, this process has some issues.

  • Files must be committed to SCM before they can be deployed. This is fine for deployment to production, but is annoying for deploying to test servers. I have plenty of commits that turn on some extra debugging, and then a commit or two later, I turn it off.
  • I have some packages that I have installed locally using pip install -e /path/to/package. To deploy these, I need to:
    1. Uninstall the editable installation.
    2. Package up a new version of the app.
    3. Push the package to my package repository (I use localshop).
    4. Install the package from the package repository.
    5. Run pip freeze > requirements.txt.
    6. Commit the changes.
    7. Deploy to the test server.
  • Then, I usually need to develop further, so I reinstall using pip install -e ....

Today, I finally got around to spending some time looking at how pip can help improve this workflow.

With pip==1.3.1, we have a command that was not in pip==1.1 (which is what I had been using until now): pip bundle.

My ‘deploy-to-development/test’ process now looks something like:

  1. Get a list of packages installed as editable: pip list -e
  2. Create a bundle, without dependencies, of these packages.
  3. Get a list of all packages, other than those installed as editable: pip freeze | grep -v "^-e".
  4. Ensure the server is set up (virtualenv, etc)
  5. Push the local repository to the remote server.
  6. Upload the bundle and requirements files.
  7. Install from the requirements file on the server.
  8. Force install from the bundle file on the server, without dependencies.
  9. Repeat the post-installation stuff from above.
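A rough sketch of how those steps might look as a fabric task (not my actual fabfile: the paths, filenames and the exact pip bundle invocation are assumptions, and pip bundle only exists in this era of pip):

from fabric.api import cd, local, put, run

EDITABLE_SRC = ['src/my-package']  # paths of the locally editable packages

def deploy_test():
  # 1-3. Bundle the editable packages (without deps), freeze the rest.
  local('pip bundle --no-deps editable.pybundle %s' % ' '.join(EDITABLE_SRC))
  local('pip freeze | grep -v "^-e" > requirements.txt')
  # 6. Upload the bundle and requirements files.
  put('requirements.txt', '/srv/project/requirements.txt')
  put('editable.pybundle', '/srv/project/editable.pybundle')
  # 7-8. Install the requirements, then force-install the bundle.
  with cd('/srv/project'):
    run('env/bin/pip install -r requirements.txt')
    run('env/bin/pip install --force-reinstall --no-deps editable.pybundle')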

Some of this I’m never going to be able to avoid: ensuring we have the virtualenv, and the post-installation stuff. Migrations gotta migrate. However, I would like to move away from the pushing of the local repository.

My plan: turn my project into a package (complete with setup.py), so that it becomes just another entry in the requirements file. It will be editable, which means it will be bundled up for deployment.

However, it will mean I can get away from having the nested repositories that I currently have. Ultimately, I plan to be able to:

  1. Build a bundle of editable packages.
  2. Create a requirements file of non-editable packages.
  3. Upload both of these files to the server.
  4. Install the requirements.
  5. Install the bundle.
  6. Run the post installation tasks.

That would be bliss.

hg commit --prevent-stupidity

I’ve had a pre-commit hook in my mercurial ~/.hgrc for some time, that prevents me from committing code that contains the string import pdb; pdb.set_trace().

I’ve pushed commits containing this out to testing lots of times, and I think even onto production once or twice…

So, the pre-commit hook that has been doing this is:

[hooks]
pretxncommit.pdb_found = hg export tip | (! egrep -q '^\+[^\+].*set_trace\(\)')

This uses a regular expression to check whether the offending string is being added. However, it does not show the filename, and the other day I was burned by leaving a console.time(...) statement in a javascript file. So, I’ve improved the pre-commit hook to do a bit more.

## <somewhere-in-your-PYTHONPATH>/hg_hooks/debug_statements.py

import sys
import _ast
import re

class colour:
    HEADER = '\033[95m'
    OKBLUE = '\033[94m'
    OKGREEN = '\033[92m'
    WARNING = '\033[93m'
    FAIL = '\033[91m'
    END = '\033[0m'

ERROR_HEADER = "*** Unable to commit. There were errors in %s files. ***"
ERROR_MESSAGE = """  File "%s/%s", line %i,
%s%s%s"""

def syntax_check(filename, data):
    try:
        tree = compile(data, filename, "exec", _ast.PyCF_ONLY_AST)
    except SyntaxError:
        value = sys.exc_info()[1]
        msg = value.args[0]

        (lineno, offset, text) = value.lineno, value.offset, value.text
        
        if text is None:
            raise
        
        return lineno, ("%s: %s" % (msg, text)).strip('\n')

ERRORS = {
    'py': [
        re.compile(r'(^[^#\n]*import pdb; pdb\.set_trace\(\))', re.MULTILINE),
        syntax_check
    ],
    'js': [
        re.compile(r'(^[^(//)\n]*console\.[a-zA-Z]+\(.*\))', re.MULTILINE),
        re.compile(r'(^[^(//)\n]*debugger;)', re.MULTILINE)
    ],
}

def test_data(filename, data):
    for matcher in ERRORS.get(filename.split('.')[-1], []):
        if hasattr(matcher, 'finditer'):
            search = matcher.finditer(data)
            if search:
                for match in search:
                    line_number = data[:match.end()].count('\n')
                    yield line_number + 1, data.split('\n')[line_number]
        elif callable(matcher):
            errors = matcher(filename, data)
            if errors:
                yield errors

def test_repo(ui, repo, node=None, **kwargs):
    changeset = repo[node]
    
    errors = {}
    
    for filename in changeset:
        data = changeset.filectx(filename).data()
        our_errors = list(test_data(filename, data))
        if our_errors:
            errors[filename] = our_errors        
    
    if errors:
        print colour.HEADER + ERROR_HEADER % len(errors) + colour.END
        for filename, error_list in errors.items():
            print 
            for line_number, message in error_list:
                print ERROR_MESSAGE  % (
                    repo.root, filename, line_number,
                    colour.FAIL, message, colour.END,
                )
        print
        return True

Then, add the hook to your .hgrc:

[hooks]
pretxncommit.debug_statements = python:hg_hooks.debug_statements.test_repo

Note: I’ve updated the script to correctly show the line number since the start of the file, rather than the line number within the currently processed segment. Thanks to Vinay for reminding me about that!