Redirect all DNS traffic to the pi.hole

This is more to remind me than anything else, but I figured out how to configure my firewall to redirect all DNS traffic (except from the pihole itself) to the pihole.

My pihole has an IP address of 10.1.1.3:

# Send all LAN-originated DNS requests (except those from the pihole itself) to the pihole.
iptables -t nat -A PREROUTING -i br-lan ! -s 10.1.1.3 -p tcp --dport 53 -j DNAT --to 10.1.1.3
iptables -t nat -A PREROUTING -i br-lan ! -s 10.1.1.3 -p udp --dport 53 -j DNAT --to 10.1.1.3
# Rewrite the source address, so replies flow back through the router.
iptables -t nat -A POSTROUTING -j MASQUERADE

In OpenWrt, this needs to be pasted into Network → Firewall → Custom Rules, and then the router possibly rebooted.

It is likely that a reboot is not actually necessary: because of the MASQUERADE rule, replies appear to come from the external DNS server, so I thought I was still hitting it directly, when in fact my pihole was transparently handling the queries.
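
A quick way to verify the redirect is to query an upstream resolver directly from a LAN client: with the rules in place, the query never actually leaves the network, and the pihole answers it. A sketch, assuming doubleclick.net is on your blocklist:

# This looks like it goes to 8.8.8.8, but the router rewrites it to 10.1.1.3.
# A blocked response (0.0.0.0, or the pihole's IP on older versions) confirms the redirect.
dig @8.8.8.8 doubleclick.net +short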

Certificate Expiry Dates without extra software

I’ve got my Home Assistant set up and running, and have obtained a Let’s Encrypt certificate so I can serve it all over HTTPS.

One of the things that you can do is set it up to notify you about expiring certificates. However, this requires the installation of a specific package. Since I’m running Home Assistant in a docker container, I can’t really do this.

However, the tools you need to determine a certificate’s expiry date are already in most systems (otherwise how would they be able to tell if the certificate from a site is still valid?).

echo | \
  openssl s_client -connect example.com:443 2>/dev/null | \
  openssl x509 -noout -dates

This gives the very useful:

notBefore=Nov 28 00:00:00 2018 GMT
notAfter=Dec  2 12:00:00 2020 GMT

We can manipulate this using some other commands to get just the expiry date:

echo | \
  openssl s_client -connect example.com:443 2>/dev/null | \
  openssl x509 -noout -dates | \
  tail -n 1 | \
  cut -d '=' -f 2
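
(As an aside: openssl x509 also has an -enddate option, which prints only the notAfter line, making the tail step unnecessary. I’ve stuck with -dates above since it matches the output we’ve already seen.)

echo | \
  openssl s_client -connect example.com:443 2>/dev/null | \
  openssl x509 -noout -enddate | \
  cut -d '=' -f 2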

Now, we want to turn this into a number of days from today. Bash can do arithmetic; we just need to get the values into the right format. In this case, we’ll get date to give us epoch values (seconds since 1970), and divide the difference by 3600 * 24.

echo $(( ($(date +%s --date "2020-12-02 12:00:00") - $(date +%s)) / (3600 * 24) ))

That gives us 158: the number of days remaining on the day I wrote this. Now let’s substitute our command for the fixed date:

echo $((
  (
    $(date +%s --date "$(echo | \
      openssl s_client -connect example.com:443 2>/dev/null | \
      openssl x509 -noout -dates | \
      tail -n 1 | \
      cut -d '=' -f 2)") - $(date +%s)
  ) / (3600 * 24)
))

Okay, we still get our 158. That’s a good sign.

Now, to put this into a Home Assistant sensor, we need to edit our configuration.yaml. Note that inside the docker container, date is the BusyBox version, so I needed to supply the date parsing format explicitly: -D "%b %d %H:%M:%S %Y GMT".

sensor:
  - platform: command_line
    name: SSL Certificate Expiry
    unit_of_measurement: days
    scan_interval: 10800
    command: >-
      echo $(( (
      $(date +%s --date "$(echo | openssl s_client -connect example.com:443 2>/dev/null
      | openssl x509 -noout -dates
      | tail -n 1
      | cut -d '=' -f 2)" -D "%b %d %H:%M:%S %Y GMT")
      - $(date +%s) ) / (3600 * 24) ))

This should give us a sensor that we can then use to create an automation, as seen in the original post.
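
It’s also worth running the command by hand inside the container first, since that’s where the BusyBox date (and hence the -D flag) lives. A sketch, assuming your container is called home-assistant and the image ships openssl:

docker exec home-assistant sh -c 'echo $(( ( $(date +%s --date "$(echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -dates | tail -n 1 | cut -d "=" -f 2)" -D "%b %d %H:%M:%S %Y GMT") - $(date +%s) ) / (3600 * 24) ))'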

Don’t forget to change the domain to your Home Assistant hostname!
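
One caveat: if the certificate is served from a host with multiple HTTPS sites behind the one IP address, openssl s_client may need to be told which hostname you want via SNI; the -servername flag does this:

echo | \
  openssl s_client -servername example.com -connect example.com:443 2>/dev/null | \
  openssl x509 -noout -dates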

Smart Devices Aren't (or why connected devices suck)

I love tinkering with gadgets. I’ve put a bunch of sensors around my house, so I can see the temperature in various places, and have a couple of smart light and power devices too. At the moment, they are limited to my laundry (where the hard-wired switch is in the wrong place, due to the moving of a door), my workbench (because the overhead lights there run from a power point, so it was trivial to put in a smart switch), and the lounge room (where I had room in the light fitting to put a Sonoff Mini).

In all of these cases except the laundry, I have taken great care to ensure that the physical switches still toggle the light. In the laundry, where the switch is not really accessible, I have an Ikea bulb connected to an Ikea dimmer instead.

In my study, I have a desk lamp with a smart (dimmable) bulb in it, and it irks me no end that I have to use a smart device or computer to turn it on or off. I will be getting some more of the Ikea dimmers to alleviate this, but in the meantime, it’s a pain to use.

Having said that, I love the option of being able to automate power and lighting, or turn things off from a distance. I just don’t like that being the only way.

I installed Home Assistant on the weekend. But, in order to fit that onto my Raspberry Pi, I needed to use a bigger Micro SD card.

Which meant I needed to clone the old one.

Which took several hours.

I’d already installed Home Assistant before running out of space, and had converted a couple of my esphome devices to use the API instead of just MQTT for connection, including the lounge room light.

Now, it turns out that, by default, there is an “auto reboot if the API is not seen for 15 minutes” setting, which meant that during the four or five hours it took to create an image of the Micro SD, verify it, copy it to a new SD card, and then verify that, my lights (and a powerboard in my office) would flick off every 15 minutes. Likewise, if the devices cannot connect to a WiFi access point, they will power cycle. I believe this second one can be resolved using the captive portal (fallback AP) setting, which means that if they can’t connect to a network, they will create their own.

Which really got me thinking. Smart devices should continue to work in every way possible when they don’t have access to the local network, or the internet. In my case, my smart devices do not have access to the internet anyway, because they don’t need to. However, the point is the same.

In situations where a network connection, or even worse, a working connection to a server that you don’t control, is no longer available, you don’t want your lights or, god forbid, your coffee machine to be unable to perform their simple task.

This worries me somewhat about the current trends in smart homes. At some point, companies will stop supporting their devices (this has already happened), and they will become less useful than their dumb counterparts, adding further to our global waste problems.

But having a significant system outage (even an intentional one, like in my case), made me think about other aspects of my home automation as well.

I’ve been using NodeRED for a couple of automation tasks. One of them was to have different grind lengths for my coffee grinder, and to make this available to Siri.

However, with the device that runs NodeRED out of action, I was no longer able to rely on this.

I was heading this way philosophically before, but (OMG NO COFFEE) this just cemented something else in my mind. Automations, where they don’t rely on interaction between multiple devices, should live on the local device where possible. Further to this, where the interaction between devices is required for the automation (like the PIR sensor in the laundry I have that turns on the Ikea lightbulb), the devices should connect directly to one another, without requiring some other mechanism to trigger the automation.

In my case, I have a physical button that I press to trigger a long grind. But the grind only stops when the NodeRED server tells it to. And, when NodeRED was not running, I had no way to trigger a short grind.

I was able to fix this: I now have a short press triggering a long grind, and a long press triggering a short grind. That seems backwards, but since I mostly do a long grind in the morning before I’ve had time to properly wake up, I want that to be the easiest one to trigger…


Having to program this in my esphome firmware instead of NodeRED made for an interesting exercise. Because we need to turn off the device after a period of time, but need to be aware of other events that have happened in the meantime, we need to use scripts.

script:
  - id: short_grind
    then:
      - switch.turn_on: relay
      - delay: 13s
      - switch.turn_off: relay
  - id: long_grind
    then:
      - switch.turn_on: relay
      - delay: 17s
      - switch.turn_off: relay

Whenever our relay turns on, we want to start our long grind script, so that even if the relay was triggered some other way than through the script, it will turn off after 17s if not before. Whenever it turns off, we want to stop any instances of our scripts running. We can also use Template Switches to have logical devices we can use to trigger the different scripts, either from Home Assistant, or from button presses:

switch:
  - platform: gpio
    id: relay
    pin: GPIO2
    restore_mode: ALWAYS_OFF
    on_turn_on:
      - script.execute: long_grind
    on_turn_off:
      - script.stop: short_grind
      - script.stop: long_grind
  - platform: template
    name: "Grind a Single"
    optimistic: true
    id: grind_a_single
    icon: mdi:coffee-outline
    turn_on_action:
      - script.execute: short_grind
      - script.wait: short_grind
      - switch.template.publish:
          id: grind_a_single
          state: OFF
    turn_off_action:
      - switch.turn_off: relay
  - platform: template
    name: "Grind a Double"
    optimistic: true
    id: grind_a_double
    icon: mdi:coffee
    turn_on_action:
      - script.execute: long_grind
      - script.wait: long_grind
      - switch.template.publish:
          id: grind_a_double
          state: OFF
    turn_off_action:
      - switch.turn_off: relay

Both of these template switches will also turn off the grinder when toggled off if they are currently on.

There’s only one more bit of logic required, and that’s the handling of the physical button. I wanted this to trigger either length based on how long the button is held down, but I also wanted a UX affordance: a way of knowing when you have held it down long enough to trigger the alternate action. Finally, if the grinder is on, any type of press should turn it off, and not trigger a new grind.

binary_sensor:
  - platform: gpio
    pin:
      number: GPIO14
      inverted: true
      mode: INPUT_PULLUP
    on_press:
      # 'led' is an indicator light defined elsewhere in this config (not shown here);
      # it turning off after 500ms signals that releasing now gives the alternate action.
      - light.turn_on:
          id: led
          transition_length: 0s
      - delay: 500ms
      - light.turn_off:
          id: led
          transition_length: 0s
    on_click:
      - max_length: 350ms
        then:
          - if:
              condition:
                - switch.is_on: relay
              then:
                - switch.turn_off: relay
              else:
                - script.execute: long_grind
      - min_length: 500ms
        max_length: 2s
        then:
          - if:
              condition:
                - switch.is_on: relay
              then:
                - switch.turn_off: relay
              else:
                - script.execute: short_grind

Remember that the turning off of the relay will stop any running scripts.

So now, when you hold down the button, once the light turns off you can release it, and it will trigger a short grind. If you just tap the switch and release it immediately, it will trigger a long grind. Any button press while the grinder is already running will turn it off.
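
For reference, getting the firmware onto the device looks roughly like this (grinder.yaml is a hypothetical name for the config above; older esphome releases expect the filename before the subcommand):

esphome config grinder.yaml   # validate the configuration
esphome run grinder.yaml      # compile, upload (serial or OTA), and tail the logs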

Hacking Arlec's 'Smart' sensor light

Quite some time ago, I purchased one of the Arlec Smart Security Lights from Bunnings. The big draw for me was that these devices have an ESP8266 and run a Tuya firmware, which means they can be trivially reflashed without opening up the unit.

In this case, though, the device was not really that capable. Whilst there is a PIR sensor (and an LDR for tuning whether it is dark enough for the light to turn on), the status of the PIR is not exposed at all. Instead, the firmware allows toggling between three states: “ON”, “OFF”, and “SENSOR”.

That’s not actually that useful. For one, it makes using it in conjunction with a physical switch really inconvenient. The behaviour I would prefer is:

  • Light ON/OFF state can be toggled by network request.
  • Light ON/OFF state can be toggled by physical switch. It must not matter which state the switch is in, toggling it must toggle the light.
  • PIR ON turns ON the light.
  • PIR OFF turns OFF the light, but only if it was turned ON by PIR.

As the last point indicates, the only time the PIR turning off should turn the light off is if the light was turned on by the PIR. That is, either physical switch actuation or a network request should turn the light into manual override ON.

There is no manual override OFF.

Most of this was already present in a firmware I wrote for adding a PIR to a light strip. However, the ability to also toggle the state using a physical switch is important to me, not least because there is a switch inside the door where I want to mount this, and I’m very likely to accidentally turn it on when I go outside. It’s also a much better solution for a manual override than having to use HomeKit. I’ll possibly add that feature back into the aforementioned project.

Like all of the other Grid Connect hardware I’ve used so far, it was easy to flash using tuya-convert. But I did run into a bunch of problems from that point onwards. The base contains the ESP8266 (one of the TYWE2S modules), and it’s possible to see without opening up the sensor unit that it has three GPIO pins connected: GPIO4, GPIO5 and GPIO12.

With a custom firmware, it was possible to see that GPIO5 is connected to the green LED near the PIR sensor, but the other two appear to be connected to another IC on the PCB. I thought perhaps this could be accessed using the TuyaMCU protocol, but had no luck.

As it turns out, I’d prefer not to have to use that anyway. There are two more wires (GPIO4 and GPIO12): it would be great if I could connect one of them to the relay, and the other to the PIR.

Indeed, with limited rewiring (I did have to cut some tracks and run wires elsewhere), I was able to connect GPIO12 to the point on the PCB where the other IC’s output triggered the relay, and GPIO4 to the input where that IC was sensing the PIR output.

I also ran an extra pair of wires from GPIO14 and GND, to use to connect to the physical switch. These will only transmit low voltage.

Unfortunately, I forgot to take photos before putting it all back together and having it mounted on the wall.

Then we just need the firmware:

# device_name is referenced throughout via the ${device_name} substitution;
# pick whatever name suits your device.
substitutions:
  device_name: sensor_light

esphome:
  name: $device_name
  platform: ESP8266
  board: esp01_1m

# A wifi: block is also required; substitute your own credentials.
wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

globals:
  - id: manual_override
    type: bool
    restore_value: no
    initial_value: 'false'
  - id: mqtt_triggered
    type: bool
    restore_value: no
    initial_value: 'false'

sensor:
  - platform: wifi_signal
    name: "WiFi signal sensor"
    update_interval: 60s

binary_sensor:
  - platform: gpio
    pin: GPIO4
    id: pir
    device_class: motion
    filters:
      - delayed_off: 15s
    on_press:
      - light.turn_on: green_led
      - mqtt.publish:
          topic: HomeKit/${device_name}/MotionSensor/MotionDetected
          payload: "1"
      - switch.turn_on: relay
    on_release:
      - light.turn_off: green_led
      - mqtt.publish:
          topic: HomeKit/${device_name}/MotionSensor/MotionDetected
          payload: "0"
      - if:
          condition:
            lambda: 'return id(manual_override);'
          then:
            logger.log: "Manual override prevents auto off."
          else:
            switch.turn_off: relay

  - platform: gpio
    pin:
      number: GPIO14
      mode: INPUT_PULLUP
    name: "Toggle switch"
    filters:
      - delayed_on_off: 100ms
    on_state:
      - switch.toggle: relay
      - globals.set:
          id: manual_override
          value: !lambda "return id(relay).state;"

ota:

logger:

output:
  - platform: esp8266_pwm
    id: gpio5
    pin:
      number: GPIO5
    inverted: False

switch:
  - platform: gpio
    id: relay
    pin:
      number: GPIO12
      # inverted: True
      # mode: INPUT_PULLDOWN_16
    on_turn_on:
      - if:
          condition:
            lambda: 'return id(mqtt_triggered);'
          then:
            logger.log: "No MQTT message sent"
          else:
            mqtt.publish:
              topic: HomeKit/${device_name}/Lightbulb/On
              retain: ON
              payload: "1"
    on_turn_off:
      - if:
          condition:
            lambda: 'return id(mqtt_triggered);'
          then:
            logger.log: "No MQTT message sent"
          else:
            mqtt.publish:
              topic: HomeKit/${device_name}/Lightbulb/On
              retain: ON
              payload: "0"


light:
  - platform: monochromatic
    id: green_led
    output: gpio5
    restore_mode: ALWAYS_OFF
    default_transition_length: 100ms

mqtt:
  broker: "mqtt.lan"
  discovery: false
  topic_prefix: esphome/${device_name}
  on_message:
    - topic: HomeKit/${device_name}/Lightbulb/On
      payload: "1"
      then:
        - globals.set:
            id: mqtt_triggered
            value: 'true'
        - switch.turn_on: relay
        - globals.set:
            id: mqtt_triggered
            value: 'false'
        - globals.set:
            id: manual_override
            value: !lambda "return !id(pir).state;"
    - topic:  HomeKit/${device_name}/Lightbulb/On
      payload: "0"
      then:
        - globals.set:
            id: mqtt_triggered
            value: 'true'
        - switch.turn_off: relay
        - globals.set:
            id: manual_override
            value: 'false'
        - globals.set:
            id: mqtt_triggered
            value: 'false'

I’ve also implemented a filter on sending the state to MQTT: basically, we don’t want to publish a message if we are just acting on one we received. Without this, there is a race condition that can result in fast toggling of the relay, as each toggle sends a message, but then receives a message with the alternate state. I’d seen this on my Sonoff Mini firmware too.
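
To poke at the MQTT side without HomeKit in the loop, the standard mosquitto clients do the job. A sketch, assuming the device_name substitution is sensor_light and the broker is mqtt.lan, as in the config above:

# Watch motion events as the device publishes them:
mosquitto_sub -h mqtt.lan -t 'HomeKit/sensor_light/MotionSensor/MotionDetected' -v

# Turn the light on, then off, using the same topics HomeKit does:
mosquitto_pub -h mqtt.lan -t 'HomeKit/sensor_light/Lightbulb/On' -m '1'
mosquitto_pub -h mqtt.lan -t 'HomeKit/sensor_light/Lightbulb/On' -m '0'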

Extracting values from environment variables in tox

Tox is a great tool for automated testing. We use it not only to run matrix testing, but to run different types of tests in different environments, enabling us to parallelise our test runs and get better reporting about which types of tests failed.

Recently, we started using Robot Framework for some automated UI testing. This needs to run a Django server, and almost certainly wants to run against a different database. This requires our tox -e robot environment to drop the database if it exists, and then create it.

Because we use dj-database-url to provide our database settings, our Codeship configuration contains a DATABASE_URL environment variable. This contains the host, port and database name, as well as the username/password if applicable. However, we don’t have the database name (or port) directly available in their own environment variables.

Instead, I wanted to extract these out of the postgres://user:password@host:port/dbname string.
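
To see how the field numbers line up, split the URL on the / character: field 1 is the scheme (with its trailing colon), field 2 is the empty string between the slashes, field 3 is the user:password@host:port chunk, and field 4 is the database name.

# postgres://user:password@host:5432/dbname
#   f1: "postgres:"   f2: ""   f3: "user:password@host:5432"   f4: "dbname"
echo "postgres://user:password@host:5432/dbname" | cut -d "/" -f 4   # -> dbname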

My tox environment also needed to ensure that a distinct database was used for robot:

[testenv:robot]
setenv=
  CELERY_ALWAYS_EAGER=True
  DATABASE_URL={env:DATABASE_URL}_robot
  PORT=55002
  BROWSER=headlesschrome
whitelist_externals=
  /bin/sh
commands=
  sh -c 'dropdb --if-exists $(echo {env:DATABASE_URL} | cut -d "/" -f 4)'
  sh -c 'createdb $(echo {env:DATABASE_URL} | cut -d "/" -f 4)'
  coverage run --parallel-mode --branch manage.py robot --runserver={env:PORT}

And this was working great. I’m also using the $PG_USER environment variable, which Codeship supplies, but that just clutters things up, so I’ve left it out of these examples.

However, when merged to our main repo, which has its own Codeship environment, tests were failing. It would complain about the database not being present when attempting to run the robot tests.

It turned out that environment was using a different version of Postgres, and thus a different port.

So, how can we extract the port from the $DATABASE_URL?

commands=
  sh -c 'dropdb --if-exists \
                -p $(echo {env:DATABASE_URL} | cut -d "/" -f 3 | cut -d ":" -f 3) \
                $(echo {env:DATABASE_URL} | cut -d "/" -f 4)'

Which is all well and good, until you have a $DATABASE_URL that omits the port…

dropdb: error: missing required argument database name

Ah, that would mean the command being executed was:

$ dropdb --if-exists -p  <database-name>

Eventually, I came up with the following:

sh -c 'export PG_PORT=$(echo {env:DATABASE_URL} | cut -d "/" -f 3 | cut -d ":" -f 3); \
              dropdb --if-exists \
                     -p $\{PG_PORT:-5432} \
                     $(echo {env:DATABASE_URL} | cut -d "/" -f 4)'

Whew, that is a mouthful!

We store the extracted value in a variable PG_PORT, and then use bash variable substitution (rather than tox variable substitution) to insert it, with a default value. But because of tox variable substitution, we need to escape the curly brace so it is passed through to bash: $\{PG_PORT:-5432}. Also note that you’ll need a space before a line continuation, because leading whitespace is stripped from the continued line.
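
For reference, ${VAR:-default} is plain shell parameter expansion: it yields the default whenever the variable is unset or empty.

unset PG_PORT
echo "${PG_PORT:-5432}"    # -> 5432
PG_PORT=55432
echo "${PG_PORT:-5432}"    # -> 55432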

Django and Robot Framework

One of my colleagues has spent a bunch of time investigating and then implementing some testing using Robot Framework. Whilst at times the command line feels like it was written by someone who hasn’t used unix much, it’s pretty powerful. There are also some nice tools: several Google Chrome plugins will record what you are doing and generate a script based upon that, and there are other tools to help build testing scripts.

There is also an existing DjangoLibrary for integrating with Django.

It’s an interesting approach: you install some extra middleware that allows you to perform requests directly to the server to create instances using Factory Boy, or fetch data from querysets. However, it requires that the data is serialised before being sent to the django server, and the same in reverse. This means, for instance, that you cannot follow object references to get a related object without a bunch of legwork: usually you end up doing another QuerySet query.

There are some things in it that I do not like:

  • A new instance of the django runserver command is started for each Test Suite. In our case, this takes over 10 seconds to start as all imports are processed.
  • The database is flushed between Test Suites. We have data, added through migrations, that is required for the system to operate correctly, and in some cases for tests to execute. This is the same problem I’ve seen with TransactionTestCase.
  • Migrations are applied before running each Test Suite. This is unnecessary, and just takes more time.
  • Migrations are created automatically before running each Test Suite. This is just the wrong approach: at most you’d want to warn that migrations are not up to date. Otherwise you are testing migrations that may not have been committed: your CI would pass because the migrations were generated, but your system would fail in reality because those migrations do not really exist. Unless you are also making migrations directly on your production server and not committing them at all, in which case you really should stop that.

That’s in addition to having to install extra middleware.

But, back onto the initial issue: interacting with Django models.

What would be much nicer is if you could just call the python code directly. You’d get python objects back, which means you can follow references, and not have to deal with serialisation.

It’s fairly easy to write a Library for Robot Framework, as it already runs under Python. The tricky bit is that to access Django models (or Factory Boy factories), you’ll want to have the Django infrastructure all managed for you.

Let’s look at what the DjangoLibrary might look like if you are able to assume that django is already available and configured:

import importlib

from django.apps import apps
from django.urls import reverse  # django.core.urlresolvers on Django < 2.0

from robot.libraries.BuiltIn import BuiltIn


class DjangoLibrary:
    """

    Tools for making interaction with Django easier.

    Installation: ensure that in your `resource.robot` or test file, you have the
    following in your "***Settings***" section:

        Library         djangobot.DjangoLibrary     ${HOSTNAME}     ${PORT}

    The following keywords are provided:


    Factory:        execute the named factory with the args and kwargs. You may omit
                    the 'factories' module from the path to reduce the amount of code
                    required.

        ${obj}=     Factory     app_label.FactoryName       arg  kwarg=value
        ${obj}=     Factory     app_label.factories.FactoryName     arg  kwarg=value


    Queryset:       return a queryset of the installed model, using the default manager
                    and filtering according to any keyword arguments.

        ${qs}=      Queryset    auth.User       pk=1


    Method Call:    Execute the callable with the args/kwargs provided. This differs
                    from the Builtin "Call Method" in that it expects a callable, rather
                    than an instance and a method name.

        ${x}=       Method Call     ${foo.bar}      arg  kwargs=value


    Relative Url:   Resolve the named url and args/kwargs, and return the path. Not
                    quite as useful as the "Url", since it has no hostname, but may be
                    useful when dealing with `?next=/path/` values, for instance.

        ${url}=     Relative Url        foo:bar     baz=qux


    Url:            Resolve the named url with args/kwargs, and return the fully qualified url.

        ${url}=     Url                 foo:bar     baz=qux


    Fetch Url:      Resolve the named url with args/kwargs, and then using SeleniumLibrary,
                    navigate to that URL. This should be used instead of the "Go To" command,
                    as it allows using named urls instead of manually specifying urls.

        Fetch Url   foo:bar     baz=qux


    Url Should Match:   Assert that the current page matches the named url with args/kwargs.

        Url Should Match        foo:bar     baz=qux

    """

    def __init__(self, hostname, port, **kwargs):
        self.hostname = hostname
        self.port = port
        self.protocol = kwargs.pop('protocol', 'http')

    @property
    def selenium(self):
        return BuiltIn().get_library_instance('SeleniumLibrary')

    def factory(self, factory, *args, **kwargs):
        module, name = factory.rsplit('.', 1)
        try:
            factory = getattr(importlib.import_module(module), name)
        except (ImportError, AttributeError):
            # Allow the 'factories' module to be omitted from the path.
            module = '{}.factories'.format(module)
            factory = getattr(importlib.import_module(module), name)
        return factory(*args, **kwargs)

    def queryset(self, dotted_path, **kwargs):
        # apps.get_model accepts the 'app_label.ModelName' string directly.
        return apps.get_model(dotted_path)._default_manager.filter(**kwargs)

    def method_call(self, method, *args, **kwargs):
        return method(*args, **kwargs)

    def fetch_url(self, name, *args, **kwargs):
        return self.selenium.go_to(self.url(name, *args, **kwargs))

    def relative_url(self, name, *args, **kwargs):
        return reverse(name, args=args, kwargs=kwargs)

    def url(self, name, *args, **kwargs):
        return '{}://{}:{}'.format(
            self.protocol,
            self.hostname,
            self.port,
        ) + reverse(name, args=args, kwargs=kwargs)

    def url_should_match(self, name, *args, **kwargs):
        self.selenium.location_should_be(self.url(name, *args, **kwargs))

You can write a management command: this allows you to hook into Django’s existing infrastructure. Then, instead of calling robot directly, you use ./manage.py robot.

What’s even nicer about using a management command is that you can have it (optionally, because in development you will probably already have a dev server running) start runserver, and kill it when it’s finished. This is the same philosophy as robotframework-DjangoLibrary, except that we can start the server once before running our tests, and kill it at the end.

So, what could our management command look like? Omitting the code for starting runserver, it’s quite neat:

from __future__ import absolute_import

from django.core.management import BaseCommand, CommandError

import robot


class Command(BaseCommand):
    def add_arguments(self, parser):
        parser.add_argument('tests', nargs='?', action='append')
        parser.add_argument('--variable', action='append')
        parser.add_argument('--include', action='append')

    def handle(self, **options):
        robot_options = {
            'outputdir': 'robot_results',
            'variable': options.get('variable') or []
        }
        if options.get('include'):
            robot_options['include'] = options['include']

        args = [
            'robot_tests/{}_test.robot'.format(arg)
            for arg in options['tests'] or ()
            if arg
        ] or ['robot_tests']

        result = robot.run(*args, **robot_options)

        if result:
            raise CommandError('Robot tests failed: {}'.format(result))

I think I’d like to do a bit more work on finding tests, but this works as a starting point. We can call this like:

./manage.py robot foo --variable BROWSER:firefox --variable PORT:8000

This will find a test called robot_tests/foo_test.robot, and execute that. If you omit the test argument, it will run on all tests in the robot_tests/ directory.

I’ve still got a bit to do on cleaning up the code that starts/stops the server, but I think this is useful even without that.

Take that, Mr Morrison

People power still works.

Mr Morrison and his advisory panel still maintain that schools should remain open. For some reason, kids are immune to getting or spreading COVID-19 whilst at school, but if they visit their grandparents or a shopping centre, then all hell will break loose.

We, like many other parents around Australia, withdrew our children from physical schooling nearly two weeks ago (we are still doing remote learning, and our school has been very supportive of this).

A few days ago, we received an email from our school: the whole school will be remote-only, with exceptions for children of parents who are unable to support remote learning, either because they work in essential jobs or for whatever other reason. Those students will be managed at school as of next week, but all students will be receiving the same curriculum.

This is exactly what the NSW premier indicated her state’s schools would be doing. This makes perfect sense. Reduce the risk to teachers, and reduce the exposure of children to one another.

Now SA, and most other states, have announced early closure, and possible remote learning after the school holidays.

People voted by withdrawing their children from school. Maybe Mr Morrison should pay attention.

Human contact, or what the f*** are all these people doing

I went outside of my house today.

Initially I went to my office to get a few things that I’ll need to work from home (standing desk extension and a better trackpad were the main ones). But after that, I thought I’d swing by one of the smaller supermarkets in my area, and if it was quiet enough, pick up some supplies.

There were no customers when I went in. Good.

The first customer who came in, when trying to get to a section near me, took the path that went right by me instead of the one that avoided me. I quickly scrambled to get out of the way. Call me paranoid, but the fewer people I get close to, the smaller my chance of contracting the virus.

Eventually, too many people were in the store, so I left without everything I needed (I’d struggled to find some bits anyway). Getting closer to people than I would have liked, I braved the checkout area and made it outside.

So, for future reference, I came within a couple of metres of about 6 or 7 people at Drakes Mini between maybe 6:30 and 7pm.


The PM announced further measures, and said one thing I thought was interesting:

“It seems like lots of people want a complete lockdown”

Damn straight we do.

“Let me tell you, it won’t be for a short period of time.”

We are very aware of that: but it’s about public health. Man up, and lock it down. Have you not seen that China is able to start releasing restrictions because they have started to control it?

Close Contacts and Community Transmission

The Australian government (and specifically the South Australian government) has made a big deal about how, so far at least, there has been virtually no “Community Transmission”. That it’s all been imports from overseas, other states, or “Close Contacts”.

But what do the terms “Close Contacts” and “Community Transmission” actually mean? What would normal people assume they mean, and what are the governments using them to mean?

To me, a “Close Contact” would be a member of my immediate family, or perhaps one of the three people who share my office. Possibly even the other 6 people who work in my company in the same place as me, since the air we all breathe is the same, and we share the same kitchen and bathroom facilities. You’d probably also count my extended family: although I don’t see them often, when I do see them we are in close physical proximity. Indeed, even our close friends, the sort we don’t see as often as we would like but catch up with every few months for dinner, would count as close contacts.

A generous expansion of this might include the parents we see and chat with every morning doing school drop-off. And of course, our kids’ close contacts would include their teacher and their closest friends. Note the word close.

So, you’d think that “Community Transmission” would include everyone else. People you come across in service roles, like the guy at the sushi bar, or your barista. Maybe the neighbour you chat to while walking your dog. The other parents you stop briefly to say hello to. The principal of the school, who you talk to perhaps once a week.

But, according to The Conversation AU, these people would be counted by the official tally as “Close Contacts”.

I think this shows how serious a mismatch there is between the government’s current position and communication, and reality. Keeping schools open will not, according to their definition, cause Community Transmission. It cannot, because by definition, that spread would count as Close Contacts.

Perhaps this is part of the reason they want to keep schools open. Being able to claim that cases are still being spread only through Close Contacts, when in reality the virus is being transmitted through the school COMMUNITY.

Maybe I’m turning into a tin-hatter, but I feel like our government is not trying hard enough to reduce the growth in number of cases. At this stage, the number of cases in South Australia is speeding up: it was growing at 23%, but now seems to be growing at 33%. Sure, some of this is better testing, but it doesn’t seem at this stage that many of the measures they have started to take have had much impact on the spread.

Lockdown: Day 6

So I guess this is becoming a bit of a journal. I missed yesterday though… ;)

The Australian government announced further economic aid for people who are newly unemployed or under-employed due to COVID-19. Notably, this included doubling the Jobseeker allowance (or whatever it’s currently called). This is surely a kick in the guts for the previously unemployed. Does it not amount to an admission that the previous amount was way less than what an average person needs to survive? But now that there are going to be significant numbers of people who until a week ago were fully employed, the amount doubles.

Interesting.

We are going to take the boys out for a walk in the local National Park. I don’t know how draconian lockdown rules are going to be in the future. Keeping schools open seems to suggest they don’t really give a fuck though.

Our internet was pretty crappy all day yesterday; and a bit intermittent on Friday. Right now, it’s totally gone - allegedly there is an issue at some place that is up the chain from me: they have turned the power off there and are working on it. Hopefully it comes back on today.