tox and coverage.py

Tox makes it really easy to run your tests against multiple environments: different versions of Python, or different versions of a related library.

It’s still lacking proper matrix testing: you need to manually define each environment, but apparently that is going to change.

However, that’s not what I’m writing about today.

Today is about coverage testing, using coverage.py.

It’s possible, using tox, to get coverage.py to run:

[testenv]
commands=
  coverage run setup.py test
  coverage report
deps=
  coverage

However, this will generate a coverage report for just that environment. It would be better if you generated a coverage report for the whole project (although you may want per-environment coverage testing too).

So, we can abuse the fact that the environments in the tox envlist are created and processed in the order they appear:

[tox]
envlist = clean,py27,py34,stats

[testenv]
commands=
  coverage run -a setup.py test
deps=
  coverage

[testenv:clean]
commands=
  coverage erase

[testenv:stats]
commands=
  coverage report
  coverage html

You’ll then get a nice html report in htmlcov/, and a printed coverage report in your console.

Multiple Homebrew Pythons

The Homebrew project brings a really nice package installer to OS X, allowing you to install command line programs in a really simple way. For instance, to install the latest version of Python 3, you would do:

$ brew install python3

Because not all projects care about old versions the way Python does, when brew upgrades a package it removes the old version’s linked binaries and support files. For Python, this is not a good thing: it means you can no longer access the older interpreter.

Python keeps version-named interpreters, and then just symlinks the most recently installed one to the python executable. Thus, it’s not uncommon to see, for python2:

$ ls -1 /usr/bin/python*
/usr/bin/python
/usr/bin/python-config
/usr/bin/python2.5
/usr/bin/python2.5-config
/usr/bin/python2.6
/usr/bin/python2.6-config
/usr/bin/python2.7
/usr/bin/python2.7-config
/usr/bin/pythonw
/usr/bin/pythonw2.5
/usr/bin/pythonw2.6
/usr/bin/pythonw2.7

This means, if you want to run an older version (for instance, say you use tox and want to do some testing against a range of versions), you can just use:

$ python2.5
Python 2.5.6 (r256:Unversioned directory, Mar  9 2014, 22:15:03) 
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 

But Homebrew breaks this.

However, if you have the old versions already installed, you can easily recreate the symlinks. Here is a script that will visit each Homebrew-installed python3 version and recreate the missing symlinks:

cd /usr/local/Cellar/python3/

for VERSION in `ls`
do
  cd $VERSION
  
  # Re-link this version's binaries, libraries, share files and framework
  # back into /usr/local.
  find bin -name \*3\* -exec ln -s -f `pwd`/{} /usr/local/{} \;
  find lib -maxdepth 1 -name \*python\* -exec ln -s -F `pwd`/{} /usr/local/{} \;
  find share -name python\* -exec ln -s -f `pwd`/{} /usr/local/{} \;
  find Frameworks -name 3.\* -exec ln -s -f `pwd`/{} /usr/local/{} \;
  
  cd ..
done

It worked for me for python3 with the following versions installed:

  • 3.2.3
  • 3.3.3
  • 3.4.1

Now I just need to figure out how to get Homebrew to download and build specific versions of packages.

Adding JSON operators to PostgreSQL

Notably, the new JSONB data type in postgres is missing some of the features that the hstore data type has had for some time.

hstore - text     : delete key from left operand
hstore - text[]   : delete keys from left operand
hstore - hstore   : delete matching pairs from left operand

However, using two awesome features of Postgres, it’s possible to add these operators in.

Firstly, Python is available as a procedural language in Postgres:

matt=# CREATE LANGUAGE plpythonu;

Then, you can write functions in Python that import standard library modules, like json.

CREATE OR REPLACE FUNCTION json_subtract(json_object json, key text) RETURNS json AS $body$
import json
data = json.loads(json_object)
data.pop(key, None)
return json.dumps(data)
$body$ LANGUAGE plpythonu;

Finally, you can now overload the operator to get the syntactic sugar you want:

CREATE OPERATOR - (
  LEFTARG = json,
  RIGHTARG = text,
  PROCEDURE = json_subtract
);

Now, you can use the same syntax as for hstore:

matt=# SELECT '{"a":1, "b":2}'::json - 'a'::text;
 ?column? 
----------
 {"b": 2}
(1 row)

It’s possible to repeat these for the other subtraction operators:

CREATE OR REPLACE FUNCTION json_subtract(json_object json, keys text[]) RETURNS json AS $body$
import json
data = json.loads(json_object)
for key in keys:
    data.pop(key, None)
return json.dumps(data)
$body$ LANGUAGE plpythonu;

CREATE OPERATOR - (
  LEFTARG = json,
  RIGHTARG = text[],
  PROCEDURE = json_subtract
);

CREATE OR REPLACE FUNCTION json_subtract(json_object json, pairs json) RETURNS json AS $body$
import json
data = json.loads(json_object)
pairs_data = json.loads(pairs)
for key,value in pairs_data.items():
  if key in data and data[key] == value:
    data.pop(key)
return json.dumps(data)
$body$ LANGUAGE plpythonu;

CREATE OPERATOR - (
  LEFTARG = json,
  RIGHTARG = json,
  PROCEDURE = json_subtract
);

I’ll leave it as an exercise to write functions for the other operators.
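
If you want a quick sanity check from application code, something like this should work (a sketch using psycopg2 2.5+, which parses json results into Python objects; it assumes the function and operator above are installed in the database you connect to):

import psycopg2

conn = psycopg2.connect("")
cur = conn.cursor()
# The new operator can be used like any built-in operator.
cur.execute("SELECT %s::json - %s::text", ['{"a": 1, "b": 2}', 'a'])
print(cur.fetchone()[0])  # {'b': 2}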

In-Database Audit Trail

I’ve been thinking about audit trails, object versioning and the like a bit lately. We have situations where it’s actually important to be able to know what changes were made, when, and by whom.

The tool we are currently using in Django for this is django-reversion, but I’m not that happy with it. Part of the problem is that it adds a fair amount of overhead into each request, which we have noticed, and also that querying the data it produces is not simple, unless you are just looking for changes made to a specific object.

For our case, we want to know all the changes made to a set of objects (that may not all be of the same type), made within a given range, where there is (eventually) a foreign key relationship back to our Company model.

Part of the problem is just that: a custom solution could store the Company relationship explicitly, but also, we really care about what the changes were, rather than what the object looks like after the changes. Well, technically, we want both.

However, it has gotten me thinking about other ways to solve this problem.

In most DBMSs, it’s possible to get a trigger function to run when an update happens to each row, which makes it possible to get access to this information. Indeed, googling “Postgres audit trigger” pops up some hits that are highly relevant. I looked at Audit trigger 91plus, since it’s “improved”. It has some really nice features like storing the audit data in an hstore column, which means you can query it.

However, one place where this type of trigger breaks down is that it’s not generally possible to get the “application user” associated with a request, only the “database user”, which for our system (and most other web applications) is fixed for all access.

One way to get around this might be to inject, at the start of every database transaction, a statement that creates a temporary table containing the extra bits of data you want to log, and then use that table in the trigger function.

CREATE TEMP TABLE IF NOT EXISTS
  "_app_user" (user_id integer, ip_address inet);

Then we need to add (or update) the one row that will contain our data. We must ensure that we only ever have one row in this table.

UPDATE _app_user SET user_id=%s, ip_address=%s;
INSERT INTO _app_user (user_id, ip_address)
  SELECT %s, %s WHERE NOT EXISTS (SELECT * FROM _app_user);

This code will ensure that the first statement (UPDATE) will affect all rows in the table (of which there will be at most one), and the second statement (INSERT ... SELECT ... WHERE NOT EXISTS ...) will only create a new row if there are no rows currently in the table.

It’s up to you to then pass the correct data to this. I’m currently looking at doing this using Django middleware, although I suspect this may fall down with the newer transaction handling; with the old style, we could have just ensured our middleware ran after the TransactionMiddleware. It may be possible to do it with a custom database backend, but it needs to somehow get access to the request object (which contains the user and the IP address). Obviously, you could log other data about the request, too.
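
As a rough illustration, old-style Django middleware for this could look something like the following sketch (the class name AuditMiddleware is made up, and it assumes an authenticated user is available on the request):

from django.db import connection


class AuditMiddleware(object):
    # Push the application user and IP address into the one-row temporary
    # table, so the audit trigger can pick them up.
    def process_request(self, request):
        if not request.user.is_authenticated():
            return
        values = [request.user.pk, request.META.get('REMOTE_ADDR')]
        cursor = connection.cursor()
        cursor.execute(
            'CREATE TEMP TABLE IF NOT EXISTS'
            ' "_app_user" (user_id integer, ip_address inet)'
        )
        cursor.execute('UPDATE _app_user SET user_id=%s, ip_address=%s', values)
        cursor.execute(
            'INSERT INTO _app_user (user_id, ip_address)'
            ' SELECT %s, %s WHERE NOT EXISTS (SELECT * FROM _app_user)',
            values,
        )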

The final part of the puzzle is to inject this data into the row that will be used for the audit table entry. I modified the table definition so it included columns for the data I wanted: app_user_id and app_ip_address.

Then, inside the actual trigger function, after the audit_row object has been created, but before it is written to the table, we inject the data we want.

We need to be a little careful, as it’s possible the table does not exist:

BEGIN
  PERFORM 
    n.nspname, c.relname 
  FROM
    pg_catalog.pg_class c 
  LEFT JOIN 
    pg_catalog.pg_namespace n
  ON n.oid = c.relnamespace
  WHERE
    n.nspname like 'pg_temp_%' 
  AND
    c.relname = '_app_user';

  IF FOUND THEN
    FOR r IN SELECT * FROM _app_user LIMIT 1 LOOP
      audit_row.app_user_id = r.user_id;
      audit_row.app_ip_address = r.ip_address;
    END LOOP;
  END IF;
END;

This checks to see if the _app_user table exists in any of the valid temporary table namespaces, and if so, grabs the first (and only, from above) entry, using the values to update the row.

This function then works: if there is a temporary table with this name, it uses these fields when creating the audit row; if not, it creates the audit row with empty values. This would mean that some audit entries may not contain proper user data, but in the case of Django, it’s possible to make changes outside of the request-response cycle anyway. You could require that a user starting a shell session authenticates with a valid Django username and password, but that still leaves management commands. I guess you could have a system account, but leaving these entries blank serves much the same purpose as an explicit system user.

I haven’t got any production code using anything like this: I’d still want to test that it works as expected as part of the request, and would want to build up some method of querying it. There’s probably no reason you couldn’t do the table definition as a Django model (and indeed, have the function definition as a migration).

rrule to RFC-string

I’ve been playing around with Postgres lots lately, and I had the idea to store RRULE data in a Postgres Composite Type.

It then occurred to me I didn’t need to reimplement all of the great stuff that is in the python dateutil module: I can just use PL/Python and import it.

The next step was realising I didn’t need to use a custom type, but just use a new Domain of type text that validates the string is an RFC 2445 compatible string.

That’s all well and good, but dateutil doesn’t come with a way to convert rrule objects back into a string.

There is a patch, but it wasn’t quite right. So here is my method (which can be monkey-patched onto the rrule class, as seen at the end):

FREQNAMES = ['YEARLY','MONTHLY','WEEKLY','DAILY','HOURLY','MINUTELY','SECONDLY']

def rrule_to_string(rule):
    output = []
    h,m,s = [None] * 3
    if rule._dtstart:
        output.append(rule._dtstart.strftime('DTSTART:%Y%m%dT%H%M%S'))
        h,m,s = rule._dtstart.timetuple()[3:6]
    
    parts = ['FREQ='+FREQNAMES[rule._freq]]
    if rule._interval != 1:
        parts.append('INTERVAL='+str(rule._interval))
    if rule._wkst:
        parts.append('WKST='+str(rule._wkst))
    if rule._count:
        parts.append('COUNT='+str(rule._count))
    
    for name, value in [
            ('BYSETPOS', rule._bysetpos),
            ('BYMONTH', rule._bymonth),
            ('BYMONTHDAY', rule._bymonthday),
            ('BYYEARDAY', rule._byyearday),
            ('BYWEEKNO', rule._byweekno),
            ('BYWEEKDAY', rule._byweekday),
            ]:
        if value:
            parts.append(name+'='+','.join(str(v) for v in value))
    
    # Only include these if they differ from rule._dtstart
    if rule._byhour and rule._byhour[0] != h:
        parts.append('BYHOUR=' + ','.join(str(v) for v in rule._byhour))
    if rule._byminute and rule._byminute[0] != m:
        parts.append('BYMINUTE=' + ','.join(str(v) for v in rule._byminute))
    if rule._bysecond and rule._bysecond[0] != s:
        parts.append('BYSECOND=' + ','.join(str(v) for v in rule._bysecond))
    
    output.append(';'.join(parts))
    return '\n'.join(output)

from dateutil.rrule import rrule
rrule.__str__ = rrule_to_string
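
A quick sanity check of the round trip, using the same rule as in the SQL session below:

from datetime import datetime
from dateutil.rrule import DAILY, rrule

rule = rrule(DAILY, interval=10, count=5, dtstart=datetime(1997, 9, 2, 9, 0))
print(rule)
# DTSTART:19970902T090000
# FREQ=DAILY;INTERVAL=10;COUNT=5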

I’ve only lightly tested it (in both regular Python and PL/Python). I’ve also come up with a neat way of caching rrule objects between function calls, but that’s a topic for another day. Here it is in an SQL session:

SELECT rrulestr('DTSTART:19970902T090000 FREQ=DAILY;INTERVAL=10;COUNT=5');

            rrulestr            
--------------------------------
 DTSTART:19970902T090000       +
 FREQ=DAILY;INTERVAL=10;COUNT=5
(1 row)

Querying JSON in Postgres

Yesterday, I discovered how you can enable jsonb in postgres/psycopg2.

Today, I experimented around with how to query the data in json columns. There is documentation, but it wasn’t initially clear to me how the different operations worked.

CREATE TABLE json_test (
  id serial primary key,
  data jsonb
);

INSERT INTO json_test (data) VALUES 
  ('{}'),
  ('{"a": 1}'),
  ('{"a": 2, "b": ["c", "d"]}'),
  ('{"a": 1, "b": {"c": "d", "e": true}}'),
  ('{"b": 2}');
  

So far, so good. Let’s see what’s in there, to check:

SELECT * FROM json_test;
 id |                 data                 
----+--------------------------------------
  1 | {}
  2 | {"a": 1}
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
  5 | {"b": 2}
(5 rows)  

Super. Let’s have a go at filtering those results. There are several operators that we can use (and we’ll soon see why we chose jsonb).

Equality

Only available for jsonb, we can test that two JSON objects are identical:

SELECT * FROM json_test WHERE data = '{"a":1}';
 id |   data   
----+----------
  2 | {"a": 1}
(1 row)

Containment

Again, jsonb only, we can see if one JSON object contains another. In this case, containment means “is a subset of”.

SELECT * FROM json_test WHERE data @> '{"a":1}';

Give me all objects that contain the key "a", with the value 1 associated with that key:

 id |                 data                 
----+--------------------------------------
  2 | {"a": 1}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
(2 rows)

Containment goes both ways:

SELECT * FROM json_test WHERE data <@ '{"a":1}';

In this case, we can see that the query object is a superset of the empty object, as well as matching exactly to object 2.

 id |   data   
----+----------
  1 | {}
  2 | {"a": 1}
(2 rows)

Key/element existence

The last batch of jsonb only operators: we can test for the existence of a key (or an element of type string in an array, but we’ll get to those later).

SELECT * FROM json_test WHERE data ? 'a';

Give me all objects that have the key a.

 id |                 data                 
----+--------------------------------------
  2 | {"a": 1}
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
(3 rows)

We can also test for objects that have any of a list of keys:

SELECT * FROM json_test WHERE data ?| array['a', 'b'];
 id |                 data                 
----+--------------------------------------
  2 | {"a": 1}
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
  5 | {"b": 2}
(4 rows)

And, as you may expect, for objects that have all of the keys:

SELECT * FROM json_test WHERE data ?& array['a', 'b'];
 id |                 data                 
----+--------------------------------------
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
(2 rows)

Key-path traversal

We can also filter records that have a matching key-path. In simple cases, using the containment operators might be simpler, but in more complex situations, we would need to use these. These operations can also be used to extract a value, although at this stage I’m only interested in using them as part of a WHERE clause.

SELECT * FROM json_test WHERE data ->> 'a' > '1';

Give me all the records where the value of the element associated with key a is greater than 1. Notice the need to use a text value, rather than a number. I’m still investigating how this will play out.

 id |           data            
----+---------------------------
  3 | {"a": 2, "b": ["c", "d"]}
(1 row)

We can also do comparisons between primitives, objects and arrays:

SELECT * FROM json_test WHERE data -> 'b' > '1';
 id |                 data                 
----+--------------------------------------
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
  5 | {"b": 2}
(3 rows)

So, it seems that arrays and objects sort greater than numbers.

We can also look deeper down the path:

SELECT * FROM json_test WHERE data #> '{b,c}' = '"d"';

Give me objects where element b has a child object that has element c equal to the string "d". Neat.

 id |                 data                 
----+--------------------------------------
  4 | {"a": 1, "b": {"c": "d", "e": true}}

There are also versions of these operators that return a text, rather than a json object. In the case of the last query, that means we don’t need to compare to a JSON object (in the case where we actually want a string).

SELECT * FROM json_test WHERE data #>> '{b,c}' = 'd';
 id |                 data                 
----+--------------------------------------
  4 | {"a": 1, "b": {"c": "d", "e": true}}
(1 row)
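
These operators also parameterise nicely from Python: psycopg2 adapts a list to the text[] that the path operators expect (a sketch, run against the same table):

import psycopg2

conn = psycopg2.connect("")
cur = conn.cursor()
# Equivalent to: ... WHERE data #>> '{b,c}' = 'd'
cur.execute("SELECT id FROM json_test WHERE data #>> %s = %s", [['b', 'c'], 'd'])
print(cur.fetchall())  # [(4,)]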

Don’t cross the streams…

So, all good so far. We can query stuff, and this same stuff can be used to index jsonb columns, too.

However, the more astute reader may have noticed that I’ve been dealing with json data that has an object as its root. This needn’t be the case: arrays are also valid json, indeed so are any of the allowable atoms:

SELECT 
  'null'::json, 
  'true'::json, 
  'false'::json, 
  '2'::json,
  '1.00001'::json,
  '"abc"'::json, 
  '1E7'::jsonb;

Note the last one is a jsonb, which converts to canonical form:

 json | json | json  | json |  json   | json  |  jsonb   
------+------+-------+------+---------+-------+----------
 null | true | false | 2    | 1.00001 | "abc" | 10000000
(1 row)

Note also that a json null is different to an SQL NULL.

So, what happens when we start storing objects of mixed “type” in a json column?

I’m glad you asked.

INSERT INTO json_test (data) 
VALUES ('[]'), ('[1,2,"a"]'), ('null'), ('1E7'), ('"abc"');

SELECT * FROM json_test;
 id |                 data                 
----+--------------------------------------
  1 | {}
  2 | {"a": 1}
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
  5 | {"b": 2}
  6 | []
  7 | [1, 2, "a"]
  8 | null
  9 | 10000000
 10 | "abc"
(10 rows)

So far, so good. We can store those objects. And query?

Equality testing works fine:

SELECT * FROM json_test WHERE data = '{"a":1}';
SELECT * FROM json_test WHERE data = 'null';

Containment, too, works as expected.

SELECT * FROM json_test WHERE data @> '{"a":1}';
SELECT * FROM json_test WHERE data <@ '{"a":1}';

Key and element existence perform reliably: perhaps surprisingly, the same query will match string elements in an array, as well as keys in an object.

SELECT * FROM json_test WHERE data ? 'a';
 id |                 data                 
----+--------------------------------------
  2 | {"a": 1}
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
  7 | [1, 2, "a"]
(4 rows)
SELECT * FROM json_test WHERE data ?| array['a', 'b'];
 id |                 data                 
----+--------------------------------------
  2 | {"a": 1}
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
  5 | {"b": 2}
  7 | [1, 2, "a"]
(5 rows)
SELECT * FROM json_test WHERE data ?& array['a', 'b'];
 id |                 data                 
----+--------------------------------------
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
(2 rows)

But, as soon as we start doing key or element ‘get’, we hit a problem:

SELECT * FROM json_test WHERE data ->> 'a' > '1';

ERROR: cannot call jsonb_object_field_text 
       (jsonb ->> text operator) on an array

We can still use the key-path traversal, though, unless we have scalar values:

SELECT * FROM json_test WHERE data #> '{b,c}' = '"d"';
ERROR:  cannot call extract path from a scalar
SELECT * FROM json_test WHERE data #> '{b,c}' = '"d"' AND id < 8;
 id |                 data                 
----+--------------------------------------
  4 | {"a": 1, "b": {"c": "d", "e": true}}
(1 row)

Note the syntax for a key path: it only allows for strings (which json keys must be), or integers (which array indices are).

This seems like a pretty severe limitation. I’m not sure how things like MongoDB handle this, but in hindsight, if you are storing both array-based and object-based json data in the one column, you are probably going to be in a world of hurt anyway.

…or, maybe, do cross the streams

All is not lost, however: it’s possible to get just the object-based rows:

SELECT * FROM json_test WHERE data @> '{}';
 id |                 data                 
----+--------------------------------------
  1 | {}
  2 | {"a": 1}
  3 | {"a": 2, "b": ["c", "d"]}
  4 | {"a": 1, "b": {"c": "d", "e": true}}
  5 | {"b": 2}
(5 rows)

You could then combine this with a previously-forbidden query:

SELECT * FROM json_test WHERE data @> '{}' AND data ->> 'a' > '1';
 id |           data            
----+---------------------------
  3 | {"a": 2, "b": ["c", "d"]}
(1 row)

Indeed, postgres is so awesome you don’t even need to ensure the data @> '{}' bit comes first!

But what about limiting to just array-typed data? Turns out we can use the same trick:

SELECT * FROM json_test WHERE data @> '[]';
 id |    data     
----+-------------
  6 | []
  7 | [1, 2, "a"]
(2 rows)

And, again, combine with the other required operator:

SELECT * FROM json_test WHERE data @> '[]' AND data ->> 1 = '2';
 id |    data     
----+-------------
  7 | [1, 2, "a"]
(1 row)

Worth noting is that the @> operator is only available on jsonb columns, so you won’t be able to query mixed-form data in a regular json column.

Wow! What’s next?

This foray into querying jsonb data in postgres was an aside to a project I’m working on to bring json(b) querying to django. With django 1.7’s new custom lookup features, it will be possible to write things like:

# Exact
MyModel.objects.filter(data={'a': 1})
MyModel.objects.exclude(data={})
# Key/element existence
MyModel.objects.filter(data__has='a')
MyModel.objects.filter(data__has_any=['a', 'b'])
MyModel.objects.filter(data__has_all=['a', 'b'])
# Sub/superset of key/value pair testing
MyModel.objects.filter(data__contains={'a': 1})
MyModel.objects.filter(data__in={'a': 1, 'b': 2})
# Get element/field (compare with json)
MyModel.objects.filter(data__get=(2, {'a': 1}))
# Get element/field (compare with scalar, including gt/lt comparisons)
MyModel.objects.filter(data__get=(2, 'a'))
MyModel.objects.filter(data__get__gt=('a', 1))
# key path traversal, compare with json or scalar.
MyModel.objects.filter(data__get=('{a,2}', {'foo': 'bar'}))
MyModel.objects.filter(data__get=('{a,2}', 2))
MyModel.objects.filter(data__get__lte=('{a,2}', 2))

I’m still not sure about the lookup names, especially the last set. The name “get” seems a little generic, and maybe we could use different lookup names for the input type, although only integer and string values are permitted.
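
For the curious, a single lookup under the 1.7 custom lookup API might look roughly like this sketch, which maps data__has onto the jsonb ? operator (it assumes a JSONField class to register it against):

from django.db.models import Lookup

class HasKey(Lookup):
    # data__has='a'  ->  data ? 'a'
    lookup_name = 'has'

    def as_sql(self, compiler, connection):
        lhs, lhs_params = self.process_lhs(compiler, connection)
        rhs, rhs_params = self.process_rhs(compiler, connection)
        return '%s ? %s' % (lhs, rhs), lhs_params + rhs_params

# JSONField.register_lookup(HasKey)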

Python, postgres and jsonb

I maintain a json field for django, and was working today on getting the new (1.7+) lookup code to play nicely: in order for this to happen, you basically need to be running Postgres 9.4, and using a jsonb column. Otherwise, querying kind-of sucks.

After a significant amount of work, where I drifted backwards and forwards between having the old and the new code working, I had an idea.

Some time ago I discovered that psycopg2 has really nice support for some custom types. Indeed, it’s super-easy to get it to handle UUID and json data. But it seems that it hasn’t yet been made to work with jsonb.

However, the registration process for handling the data makes it possible, and indeed trivial, since the serialised form will be essentially identical for both:

psycopg2.extras.register_json(
    conn_or_curs=None,
    globally=False,
    loads=None,
    oid=None,
    array_oid=None
)

Note the last two arguments. We can trick psycopg2 into using jsonb instead of json.

In your database, execute:

SELECT oid, typarray FROM pg_type WHERE typname = 'jsonb';
-- oid      --> 3802
-- typarray --> 3807

(Syntax highlighting fail means I can’t include the actual results).

Your values may vary (I’m really not sure), but you’ll simply need to call register_json, passing these as oid and array_oid:

register_json(oid=3802, array_oid=3807)
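
If you’d rather not hard-code the values, you could look them up at connection time instead; a sketch, given an open psycopg2 connection conn:

from psycopg2.extras import register_json

def register_jsonb(conn):
    # Ask the database for the jsonb oids, and reuse the json typecaster.
    cur = conn.cursor()
    cur.execute("SELECT oid, typarray FROM pg_type WHERE typname = 'jsonb'")
    oid, array_oid = cur.fetchone()
    register_json(conn, oid=oid, array_oid=array_oid)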

Now, assuming you have a jsonb column, when you fetch data from it, it will already be turned into python objects.

Python 2.7.5 (default, Mar  9 2014, 22:15:05) 
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
>>> conn = psycopg2.connect("")
>>> cur = conn.cursor()
>>> cur.execute("SELECT * FROM jsonb_test;")
>>> data = cur.fetchone()
>>> data
(1, '{}')
>>> from psycopg2.extras import register_json
>>> register_json(oid=3802, array_oid=3807)
(<psycopg2._psycopg.type 'JSON' at 0x101713418>, <psycopg2._psycopg.type 'JSONARRAY' at 0x101721208>)
>>> cur.execute("SELECT * FROM jsonb_test;")
>>> data = cur.fetchone()
>>> data
(1, {})
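
If you want this for every connection in a project (a Django app, say), you could register the typecasters globally, once, at startup; something like:

from psycopg2.extras import register_json

# Process-wide registration (using the oids from above): every cursor will
# now decode jsonb values into Python objects.
register_json(oid=3802, array_oid=3807, globally=True)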

ssh ProxyCommand

If you need to tunnel through a machine to get to another one, using ssh, you can easily do this with a ProxyCommand entry in your .ssh/config:

Host foo
ProxyCommand ssh <gateway-machine> -W <target-machine>:%p

If you have avahi-daemon installed on your gateway machine, you can do even better:

Host foo.local
ProxyCommand ssh <gateway-machine> -W %h:%p

I use this to get access to my office machine(s), without having to worry about firing up a VPN. I can rewrite the hostname so that I don’t need to have a different entry for each machine in the office:

Host *.office
ProxyCommand ssh <gateway-machine> -W $(echo %h | sed s/.office/.local/):%p

Bingo, now I can get really simple access to any machine in the office (as long as it has avahi installed, or is a Mac).

matt@alpaca:~ $ ssh karma.office
Last login: Fri May 23 19:52:29 2014 from 10.0.0.2
matt@karma:~ $

Transparent header on scroll

I saw a nice effect the other day, on SourceJS. Basically, when you scroll the page at all, the header becomes transparent, but when you hover over it, it becomes opaque again.

Pretty, yet surprisingly easy to do.

Assume your HTML looks something like:

<body>
  <div id="main-menu">
    <!-- this contains the header -->
  </div>
</body>

The trick to getting stuff really smooth, with limited scripting required, is to use CSS transitions. So, instead of manually changing the opacity of the element, we just set/unset a class on the body, and have some CSS rules to set the opacity.

window.addEventListener('scroll', function () {
  document.body.classList[
    window.scrollY > 20 ? 'add': 'remove'
  ]('scrolled');
});

This fires every time there is a scroll event. In my browser, adding/removing this class on the classList takes ~0.01ms.

Finally, there is the required CSS.

body.scrolled #main-menu {
  opacity: 0.2;
  transition: opacity .2s;
}
body.scrolled #main-menu:hover {
  opacity: 1.0;
  transition: opacity .2s;
}

That’s all there is to it!

Here’s one I prepared earlier.

Thoughts on Mutation Testing in Python (part 1)

Writing code is fun.

Writing tests is a great way to have code that is likely to work.

Using a coverage tool will show you what percentage of your code is executed when you run your tests. But getting 100% coverage does not mean your code is 100% tested.

Take for example the following:

def product(a, b):
    return a * b

How might we go about testing this function?

>>> product(2, 2)
4

Okay, so technically, we now have 100% coverage of our function. Every line is executed when running the tests, but is it really tested?

What happens if we change our original function? Let’s see whether the tests still pass:

def product(a, b):
    return a + b

Hmm. When we run that, with those arguments, we still pass our test.

What we have done here is mutate our code, and in this case, the mutant survived.

In order to test this code correctly, we want all possible mutations to be killed (that is, our tests should fail when run against a mutant).
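
For this particular mutant, it is enough to pick arguments for which addition and multiplication give different answers:

>>> product(2, 3)
6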

This is the first post in a series on mutation testing in python. Up next, we will investigate the types of mutants/mutations, and how they apply to python.