QuerySets of various models

In general, the first thing we try to do to reduce the load time of a page in Django is to reduce the number of queries we are making to the database. This often has the greatest impact, sometimes orders of magnitude greater than other improvements.

One problem I recently hit with a specific set of pages was that there are potentially seven different models that may have zero or more items for a given request. This could mean we do seven queries that are all empty.

These are all distinct models, but in this case they are used for the same purpose, and in this case we always need all of them, if there are any relevant records. Whilst there are seven models now, there may be more in the future.

Each of these models has a specific method that validates some data against the rules applicable to that model and its stored attributes. But each model has a different set of attributes, and future models will more than likely differ again.

There are at least three different ways we could have solved this problem. It turns out we have solved similar problems in the past using the first two approaches; the third is the one I’ve used in this case.


Solution 1: Store all data in the same model, and use a JSON field to store the attributes specific to this “class”.

This also requires a “type” field of some sort. Then, when loading data, we have the model use this type field to work out which attributes should apply.
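The shape of that type-field dispatch can be sketched in plain Python. All of the names here (the type values, the attributes) are hypothetical, just to show the pattern of mapping a type value to its applicable attributes and validation rules:

```python
# Hypothetical sketch: each "type" value maps to the attributes (and
# per-attribute types) that apply to records of that type.
SCHEMAS = {
    "external_crm": {"base_url": str, "api_key": str},
    "payroll": {"employer_id": int, "api_key": str},
}


def validate(record_type, data):
    """Check a JSON blob against the schema for its declared type."""
    schema = SCHEMAS[record_type]
    unknown = set(data) - set(schema)
    if unknown:
        raise ValueError("Unexpected attributes: {}".format(sorted(unknown)))
    for attr, expected in schema.items():
        if attr not in data:
            raise ValueError("Missing attribute: {}".format(attr))
        if not isinstance(data[attr], expected):
            raise ValueError("{} should be {}".format(attr, expected.__name__))


validate("payroll", {"employer_id": 42, "api_key": "secret"})  # passes
```

In the real model, this dispatch would live behind the “type” field, and every new logical type means another entry in the registry.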

This has a bunch of problems. First and foremost is that it becomes far more difficult to get the database (postgres, in this case) to apply integrity constraints. It’s not impossible, but a constraint that checks the type field and then applies a JSON expression is much harder to read. Changing a constraint, assuming it’s done using a check constraint and not a trigger, is still possible, but is likely to be harder to understand.

Secondly, Django ModelForm support is no longer “automatic”. It’s not impossible to use a ModelForm, but you need to work a bit harder to get the initial data in, and to ensure the cleaned data for the fields is applied to the JSON field correctly.

Finally, as hinted above, using a “type” field means the logic for building the checks is more complex, unless you use class swizzling and proxy models or similar to have a different class for each type value. If an instance was accidentally updated to the wrong type, then all sorts of things could go wrong.

This was the first solution we used for our external integrations, and whilst convenient at some level, it turned out to be much harder to manage than distinct model classes. It is not subject to the problem that is the basis of this article: we can always fetch objects of different logical types in one query, as it’s all the same model class. Indeed, to fetch only a single type, we need to perform extra filtering.


Solution 2: Use concrete/multi-table inheritance.

This is the solution we moved to with our external integrations, and has been much nicer than the previous solution. Instead of having a JSON field with an arbitrary blob of data in it, we have distinct fields. This makes it much easier to have unique constraints, as well as requiring values, or values of a specific type. Knowing that the database is going to catch us accidentally putting a text value into the external id field for a system that requires an integer is reassuring.

This overcomes the second problem. We can now just use a Django ModelForm, and this makes writing views much easier. The validation for a given field or set of fields lives on the model, and where possible also in the database, as an exclusion or check constraint.

It also overcomes the third problem. We have distinct classes, which can have their own methods. We don’t need to try to use some magical proxy model code, and it’s easy for new developers to follow.

Finally, thanks to django-model-utils InheritanceManager, we can fetch all of our objects using the concrete parent model, and the .select_subclasses() method to downcast to our required class.

There are a couple of drawbacks to using concrete inheritance. Any fetch of an instance will perform a JOIN in your database, but more importantly, it’s impossible to perform a bulk_create() for models of these types.


Solution 3: Use a Postgres VIEW and JSONB to perform one query, and reconstruct models.

In the problem I’ve recently solved, we had a bunch of different models that, although used in similar ways, didn’t have that much in common. They were pre-existing, and it wasn’t worth the effort to move them to concrete inheritance, and using JSON fields for all data is not something I would repeat.

Instead, I came up with an idea that seems novel, based on some previous work I’ve done converting JSON data into models:

CREATE VIEW combined_objects AS

SELECT 'foo.Foo' AS model,
       foo.id,
       foo.tenant_id,
       TO_JSONB(foo) AS data
  FROM foo

 UNION ALL

SELECT 'foo.Bar' AS model,
       bar.id,
       baz.tenant_id,
       TO_JSONB(bar) AS data
  FROM bar
 INNER JOIN baz ON (baz.id = bar.baz_id)

 UNION ALL

SELECT 'qux.Qux' AS model,
       qux.id,
       tenant.id AS tenant_id,
       TO_JSONB(qux) AS data
  FROM tenant
  JOIN qux ON (true)

This builds up a postgres view that contains all of the data from the model, but in a generic way. It also contains a tenant_id, which in this case was the mechanism that we’ll be using to filter the ones that are required at any given time. This can be a field on a model (as shown in the first subquery), or a field on a related model (as shown in the second). It could even be every object in a table for every tenant, as shown in the third.

From there, we need a model that will recreate the model instances correctly:

from django.apps import apps
from django.contrib.postgres.fields import JSONField
from django.db import models
from django.utils.functional import cached_property


class CombinedObjectQuerySet(models.query.QuerySet):
    def for_tenant(self, tenant):
        return self.filter(tenant=tenant)

    def as_model_instances(self):
        return [x.instance for x in self]


class CombinedObject(models.Model):
    model = models.TextField()
    tenant = models.ForeignKey('tenant.Tenant', on_delete=models.DO_NOTHING)
    data = JSONField()

    objects = CombinedObjectQuerySet.as_manager()

    class Meta:
        managed = False
        db_table = 'combined_objects'

    def __str__(self):
        return '{} wrapper ({})'.format(self.model, self.instance)

    def __eq__(self, other):
        return isinstance(other, CombinedObject) and self.instance == other.instance

    @property
    def model_class(self):
        return apps.get_model(*self.model.split('.'))

    @cached_property
    def instance(self):
        return self.model_class(**self.data)

This works great, as long as you don’t apply additional annotations, typecast to python values, or want to deal with related objects. That is where it starts to get a bit tricky.

We can handle annotations and typecasting:

@cached_property
def instance(self):
    data = self.data
    model_class = self.model_class
    field_data = {
        field.name: field.to_python(data[field.name])
        for field in model_class._meta.fields
        if field.name in data
    }
    instance = model_class(**field_data)
    # Anything in the JSONB blob that isn't a model field (annotations,
    # for instance) is attached as a plain attribute.
    for attr, value in data.items():
        if attr not in field_data:
            setattr(instance, attr, value)
    return instance

There’s still the issue of foreign keys in the target models: in this case I know the code is not going to traverse these and trigger extra database hits. We could look at omitting those fields to prevent that being possible, but this is working well enough for now.
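If we did want to omit those fields, the filtering itself is simple. This is a plain-Python sketch (the function name is mine, and in Django the relation names would come from `model_class._meta.fields` where `field.is_relation`; here they are passed in explicitly):

```python
def strip_relation_fields(data, relation_names):
    """Drop foreign key columns from the JSONB payload, so the rebuilt
    instance cannot lazily traverse them and trigger extra queries."""
    # A FK may appear under its attname ("baz_id") or its name ("baz").
    drop = set(relation_names) | {name + "_id" for name in relation_names}
    return {key: value for key, value in data.items() if key not in drop}


strip_relation_fields({"id": 1, "name": "a", "baz_id": 7}, ["baz"])
# -> {"id": 1, "name": "a"}
```

Accessing the relation on the rebuilt instance would then raise an error immediately, rather than silently performing a query.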

Postgres VIEW from Django QuerySet

It’s already possible, given an existing Postgres (or other database) VIEW, to stick a Django Model in front of it, and have it fetch data from that instead of a table.

Creating the views can currently be done using raw SQL (and a RunSQL migration operation), or using some helpers to store the SQL in files for easy versioning.

It would be excellent if it was possible to use Django’s ORM to actually generate the VIEW, and even better if you could make the migration autodetector generate migrations.

But why would this be necessary? Surely, if you were able to create a QuerySet instance that contains the items in your view, that should be good enough?

Not quite, because currently using the ORM it is not possible to perform the following type of query:

SELECT foo.a,
       foo.b,
       bar.d
  FROM foo
  INNER JOIN (
    SELECT baz.a,
           ARRAY_AGG(baz.c) AS d
      FROM baz
     GROUP BY baz.a) bar ON (foo.a = bar.a)

That is, generating a join to a subquery is not possible in the ORM. You could probably get away with a correlated Subquery, however that would probably not perform as well as a join here: a subquery in the SELECT clause is evaluated once for each row, whereas a joined subquery is evaluated only once.
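To make that cost difference concrete, here are the semantics of the query above mocked up in plain Python (sample data is mine): the grouped subquery is computed once, and each outer row then just looks up the pre-aggregated result, rather than re-aggregating per row.

```python
from collections import defaultdict

foo = [{"a": 1, "b": "x"}, {"a": 1, "b": "y"}, {"a": 2, "b": "z"}]
baz = [{"a": 1, "c": "p"}, {"a": 1, "c": "q"}, {"a": 2, "c": "r"}]

# The subquery: GROUP BY baz.a, ARRAY_AGG(baz.c) AS d - evaluated once.
bar = defaultdict(list)
for row in baz:
    bar[row["a"]].append(row["c"])

# The join: each foo row picks up the already-aggregated array.
result = [
    {"a": row["a"], "b": row["b"], "d": bar[row["a"]]}
    for row in foo
]
# A correlated subquery would instead re-scan baz once per foo row.
```

With thousands of outer rows, that "once" versus "once per row" is where the join wins.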

So, we could use a VIEW for the subquery component:

CREATE OR REPLACE VIEW bar AS

SELECT baz.a,
       ARRAY_AGG(baz.c) AS d
  FROM baz
 GROUP BY baz.a;

And then stick a model in front of that, and join accordingly:

SELECT foo.a,
       foo.b,
       bar.d
  FROM foo
 INNER JOIN bar ON (foo.a = bar.a)

The Django model for the view would look something like:

class Bar(models.Model):
    a = models.OneToOneField(
        'foo.Foo',
        on_delete=models.DO_NOTHING,
        primary_key=True,
        related_name='bar'
    )
    d = django.contrib.postgres.fields.ArrayField(
        base_field=models.TextField()
    )

    class Meta:
        managed = False

The on_delete=models.DO_NOTHING is important: without it, a delete of a Foo instance would trigger an attempted delete of a Bar instance - which would cause a database error, because it’s coming from a VIEW instead of a TABLE.

Then, we’d be able to use:

queryset = Foo.objects.select_related('bar')

So, that’s the logic behind needing to be able to do a subquery, and it becomes even more compelling if you need that subquery/view to filter the objects, or perform some other expression/operation. So, how can we make Django emit code that will enable us to handle that?

There are two problems:

  • Turn a queryset into a VIEW.
  • Get the migration autodetector to trigger VIEW creation.

The other day I came across Create Table As Select in Django, and it made me realise that we can use basically the same logic for creating a view. So, we can create a migration operation that will perform this for us:

class CreateOrReplaceView(Operation):
    def __init__(self, view_name, queryset):
        self.view_name = view_name
        self.queryset = queryset

    def database_forwards(self, app_label, schema_editor, from_state, to_state):
        queryset = self.queryset
        compiler = queryset.query.get_compiler(using=schema_editor.connection.alias)
        sql, params = compiler.as_sql()
        sql = 'CREATE OR REPLACE VIEW {view} AS {sql}'.format(
            view=schema_editor.connection.ops.quote_name(self.view_name),
            sql=sql
        )
        schema_editor.execute(sql, params)

    def state_forwards(self, app_label, state):
        pass

We can then use this operation in a migration (it needs to be passed a queryset).

This doesn’t really solve how to define the queryset for the view, nor give us a mechanism for detecting changes made to that queryset (so we can generate a new migration if necessary). It also means we have a queryset written in our migration operation. We won’t be able to leave it like that: due to loading issues, you can’t import model classes during migration setup - and even if you could, you shouldn’t be accessing them during a migration anyway. You should use models from the ProjectState that is tied to where in the migration graph you currently are.

What would be excellent is if we could write something like:

class Bar(models.Model):
    a = models.OneToOneField(
        'foo.Foo',
        on_delete=models.DO_NOTHING,
        primary_key=True,
        related_name='bar',
    )
    d = django.contrib.postgres.fields.ArrayField(
        base_field=models.TextField()
    )

    class Meta:
        managed = False

    @property
    def view_queryset(self):
        return Baz.objects.values('a').annotate(d=ArrayAgg('c'))

And then, if we change our view definition:

@property
def view_queryset(self):
  return Baz.objects.values('a').filter(
      c__startswith='qux',
  ).annotate(
      d=ArrayAgg('c')
  )

… we would want a migration operation generated that includes the new queryset, or at least to know that it has changed. Ideally, we’d have a queryset attribute inside the model’s Meta class, which could itself be a property. However, that’s not possible without making changes to Django itself.

In the meantime, we can borrow the pattern used by RunPython to have a callable that is passed some parameters during application of the migration, which returns the queryset. We can then have a migration file that looks somewhat like:

def view_queryset(apps, schema_editor):
    Baz = apps.get_model('foo', 'Baz')

    return Baz.objects.values('a').filter(
        c__startswith='qux'
    ).annotate(
        d=ArrayAgg('c')
    )


class Migration(migrations.Migration):
    dependencies = [
        ('foo', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='Bar',
            fields=[
                ('a', models.OneToOneField(...)),
                ('d', ArrayField(base_field=models.TextField(), ...)),
            ],
            options={
                'managed': False,
            }
        ),
        CreateOrReplaceView('Bar', view_queryset),
    ]

We still need to have the CreateModel statement so Django knows about our model, but the important bit in this file is the CreateOrReplaceView, which references the callable.

Now for the actual migration operation.

class CreateOrReplaceView(migrations.Operation):
    def __init__(self, model, queryset_factory):
        self.model = model
        self.queryset_factory = queryset_factory

    def database_forwards(self, app_label, schema_editor, from_state, to_state):
        model = from_state.apps.get_model(app_label, self.model)
        queryset = self.queryset_factory(from_state.apps, schema_editor)
        compiler = queryset.query.get_compiler(using=schema_editor.connection.alias)
        sql, params = compiler.as_sql()
        sql = 'CREATE OR REPLACE VIEW {view_name} AS {query}'.format(
            view_name=model._meta.db_table,
            query=sql,
        )
        schema_editor.execute(sql, params)

The backwards migration is not quite a solved problem yet: I do have a working solution that steps up the stack to determine what the current migration name is, and then finds the previous migration that contains one of these operations for this model, but that’s a bit nasty.


There’s no (clean) way to inject ourselves into the migration autodetector and “notice” when we need to generate a new version of the view; however, we can leverage the checks framework to notify the user when the view queryset is out of date compared to the latest migration.

from django.apps import apps
from django.core.checks import register

@register()
def check_view_definitions(app_configs, **kwargs):
    errors = []

    if app_configs is None:
        app_configs = apps.app_configs.values()

    for app_config in app_configs:
        errors.extend(_check_view_definitions(app_config))

    return errors

And then we need to implement _check_view_definitions:

def get_out_of_date_views(app_config):
    app_name = app_config.name

    view_models = [
        model
        # We need the real app_config, not the migration one.
        for model in apps.get_app_config(app_name.split('.')[-1]).get_models()
        if not model._meta.managed and hasattr(model, 'get_view_queryset')
    ]

    for model in view_models:
        latest = get_latest_queryset(model)
        current = model.get_view_queryset()

        if latest is None or current.query.sql_with_params() != latest.query.sql_with_params():
            yield MissingViewMigration(
                model,
                current,
                latest,
                Warning(W003.format(app_name=app_name, model_name=model._meta.model_name), id='sql_helpers.W003'),
            )


def _check_view_definitions(app_config):
    return [x.warning for x in get_out_of_date_views(app_config)]

The last puzzle piece there is get_latest_queryset, which is a bit more complicated:

def get_latest_queryset(model, before=None):
    from django.db.migrations.loader import MigrationLoader
    from django.db import connection

    migration_loader = MigrationLoader(None)
    migrations = dict(migration_loader.disk_migrations)
    migration_loader.build_graph()
    state = migration_loader.project_state()
    app_label = model._meta.app_label
    root_node = dict(migration_loader.graph.root_nodes())[app_label]
    # We want to skip any migrations in our reverse list until we have
    # hit a specific node: however, if that is not supplied, it means
    # we don't skip any.
    # (Initialise before the loop: otherwise the check below raises a
    # NameError on the first iteration when a 'before' node was supplied.)
    seen_before = before is None
    for node in migration_loader.graph.backwards_plan((app_label, root_node)):
        if node == before:
            seen_before = True
            continue
        if not seen_before:
            continue
        migration = migrations[node]
        for operation in migration.operations:
            if (
                isinstance(operation, CreateOrReplaceView) and
                operation.model.lower() == model._meta.model_name.lower()
            ):
                return operation.queryset_factory(state.apps, connection.schema_editor())

This also has code to allow us to pass in a node (before), which limits the search to migrations that occur before that node in the forwards migration plan.

Since we already have the bits in place, we could also have a management command that creates a stub migration (without the queryset factory, that’s a problem I haven’t yet solved). I’ve built this into my related “load SQL from files” app.


This is still a bit of a work in progress, but writing it down helped me clarify some concepts.

Smart Lights or Smart Switches, or my philosophy of home automation

I really like home automation. Being able to see when things are turned on, and being able to turn them off remotely, or build automations around events in the home is fun. I’ve built a garage door opener, a bunch of temperature, humidity and air pressure sensors, and played around with various light bulbs and switches.

I haven’t invested deeply in any platform, for a couple of reasons. I have some of the Sonoff devices, and like that it’s possible to flash my own firmware. This allows me to lock down each device so it can only connect to the one MQTT broker in my network: indeed, my IoT devices are all on a separate network that has very strict limits.

But that’s not what I want to talk about today. I want to talk about smart lights and smart switches, and why switches are superior.

I don’t live by myself. I have a family, none of whom share my level of excitement about smart devices, but all of whom need to still turn lights on and off. They don’t have an Apple Watch (or iPhone, in most cases), and should not have to use one to turn the lights on.

In fact, I should not have to use my watch or phone to toggle the lights either. We have managed as a society to come up with a really simple system for controlling lights: you flick the switch, and the light toggles.

Direct control of lights using the existing switches is the number one priority of a smart light system. A smart bulb that you can turn off using Siri, but that then requires you to flick the switch off and back on before it can be turned on again, is not acceptable.

Likewise, requiring someone to use a smart device to turn a light on is unacceptable.

Having a switch that prevents the smart light from being smart (ie, when it’s off, you have to use the switch to turn it on) is also unacceptable.

This makes existing smart bulbs a non-starter for me. Even if you have a duplicate “smart” switch next to an existing switch, there’s still the chance someone will turn off the switch that needs to stay on.

What is a much better solution is to have the switch itself be smart. In most cases, that means the switch will no longer be mechanical, although it could be a press button. There are a bunch of smart switches that perform this way: they work manually, but also still allow an automated or remote toggle of the state.

These switches will not work in my home.

Pretty much all of these (with one exception from Sonoff: see this video for a really nice description of how it works) require a neutral wire at the switch. It seems that my house is wired with the neutral at the light only, and just a pair of wires (live, and switched live) travelling down to the switch. Thus, the switch is wired in series with the live wire, and the neutral is only connected directly to the fitting. Jonathon does a really good job in the above-linked video of describing the problem.

There is an alternative solution. Sonoff also make another device, the Sonoff Mini. This one is pretty small (much smaller than the Sonoff Basic), and can be wired up and programmed to behave just like a traditional hallway switch: toggling either the manual or smart switch will result in the lights toggling. The nice thing about these is that they have the new DIY mode (just pop them open, add a jumper, and you can flash them without having to connect header pins). In fact, I even wrote a script to automate the process. It’s not quite perfect, but it’s been good for flashing the five of these that I currently own.

You can then have the mini connected up to the light (at least, in other countries you can do this yourself: in Australia this is illegal unless you are an electrician, so be aware of that), and have the switch connected just to the switch pins on the mini. Bingo, smart switches the right way. When the smart fails, the switch still works the same as they have for generations.

(Unless the microcontroller itself dies, but that’s a problem I have not as yet solved).


As an aside, I wanted to comment on the setup from Superhouse. It’s quite nifty: all of the switches are low voltage, and control relays back in the switchboard. However, it relies on logic running in a computer (and running JavaScript!) to map each switch to its light.

This to me feels wrong. It’s better than having those connections over WiFi, but it still means there is a single point of failure. I think the architecture I have used - smart switches that are independent, and if they fail to connect to WiFi continue to work just the same as an old dumb switch - is superior. It’s more like the philosophy I have with web pages: the pages should still work when all of the JavaScript breaks, but when it doesn’t break, they can be nicer (and have some cool affordances).

To that end, I’ve tried to make each little module of my smart home somewhat independent. For instance, having an IR sensor connected to the same microcontroller that controls the lights means that even if WiFi breaks, the lights still behave exactly the same way.

Adding a PIR sensor to an Arlec Smart LED Strip

I built a shed on Australia Day.

Actually, I built two of them over the Australia Day long weekend. I bought them from easyshed.com.au, and they were pretty nifty. Specifically, I bought a couple of the off-the-wall sheds, to attach to the wall of my garage. There’s already a concrete path there that I was able to use as the floor.

One of the sheds is longer than the other, and is used for storage of tools. Mostly I need to get stuff during the day, but sometimes at night I may need to access something in there. So, I bought a 2m Grid Connect LED Strip Light. It’s colour changing, but I’m not really that fussed about the colours: this one has a warm white LED too, so that’s better than trying to twiddle the colours to get the right temperature.

Being Grid Connect, it’s really Tuya, which means it’s flashable. So I did. I’ve got a nice esphome firmware on there, which hooks it up to HomeKit via my MQTT bridge.

I wasn’t really able to get the colour parts working correctly, so I just disabled those pins, and use the light in monochromatic mode.

However, being the tinkerer I am, I opened it up to have a look inside. It’s got one of the standard Tuya mini-boards, in fact a TYWE3S.

This exposes a bunch of pins, most of which were in use:

  • GPIO 0: (Used during boot)
  • GPIO 2: UART0 (TX)
  • GPIO 4: Red LED channel
  • GPIO 5: Warm White LED channel
  • GPIO 12: Green LED channel
  • GPIO 13: unused
  • GPIO 14: Blue LED channel
  • GPIO 15: (Used during boot)
  • GPIO 16: unused

Because the LED strip runs at 12V, I was able to use this to power a PIR sensor, which I then hooked up to GPIO 13. I also tried to connect a DS18B20 temperature sensor to GPIO 16, but was not able to get it to be recognised. From the page linked above, perhaps I needed a pull-up resistor, however, I didn’t really need a temperature sensor.

Having a PIR sensor on the LED strip is worthwhile, however. Otherwise, you’d need to manually turn the lights on when going into the shed.

Having a sensor light is fantastic, but at times you might want the light to stay on, even when it does not detect motion. To achieve this, I have a global variable, manual_override, which is set when the light is turned on “manually” (using HomeKit, as there is no physical switch). When this variable is set, the light will not turn off when motion is no longer detected.

I also found it was simpler to have the motion detector (binary_sensor in esphome) have a “delayed off” filter for the period I wanted the light to stay on for, rather than try to manage that in conjunction with the manual override.

The full YAML follows:

esphome:
  name: $device_name
  platform: ESP8266
  board: esp01_1m
  on_loop:
    then:
      - if:
          condition:
            not:
              mqtt.connected:
          then:
            - globals.set:
                id: has_connected_to_mqtt
                value: 'false'

globals:
  - id: has_connected_to_mqtt
    type: bool
    restore_value: no
    initial_value: 'false'
  - id: manual_override
    type: bool
    restore_value: no
    initial_value: 'false'

light:
  - platform: monochromatic
    name: "White"
    id: white
    output: white_channel
    restore_mode: ALWAYS_OFF
    default_transition_length: 0.25s

output:
  - platform: esp8266_pwm
    pin:
      number: GPIO5
    id: white_channel
    inverted: False

sensor:
  - platform: wifi_signal
    name: "WiFi signal sensor"
    update_interval: 5min

binary_sensor:
  - platform: gpio
    pin: GPIO13
    name: "PIR Sensor"
    device_class: motion
    id: pir
    filters:
      - delayed_off: 15s
    on_press:
      then:
        - mqtt.publish:
            topic: HomeKit/${device_name}/MotionSensor/MotionDetected
            payload: "1"
        - if:
            condition:
              light.is_off: white
            then:
              - light.turn_on: white
              - mqtt.publish:
                  topic: HomeKit/${device_name}/Lightbulb/On
                  payload: "1"
    on_release:
      - mqtt.publish:
          topic: HomeKit/${device_name}/MotionSensor/MotionDetected
          payload: "0"
      - if:
          condition:
            lambda: 'return id(manual_override);'
          then:
            - logger.log: "Manual override prevents auto off."
          else:
            - logger.log: "Turning off after motion delay."
            - if:
                condition:
                  - light.is_on: white
                then:
                  - light.turn_off: white
                  - mqtt.publish:
                      topic: HomeKit/${device_name}/Lightbulb/On
                      payload: "0"

ota:

logger:

mqtt:
  broker: "mqtt.lan"
  discovery: false
  topic_prefix: esphome/${device_name}
  on_message:
    - topic: HomeKit/${device_name}/Lightbulb/Brightness
      then:
        - logger.log: "Brightness message"
        - globals.set:
            id: manual_override
            value: 'true'
        - light.turn_on:
            id: white
            brightness: !lambda "return atof(x.c_str()) / 100;"
        - globals.set:
            id: has_connected_to_mqtt
            value: 'true'

    - topic: HomeKit/${device_name}/Lightbulb/On
      payload: "1"
      then:
        - logger.log: "Turn on message"
        - if:
            condition:
              and:
                - binary_sensor.is_off: pir
                - lambda: "return id(has_connected_to_mqtt);"
            then:
              - globals.set:
                  id: manual_override
                  value: 'true'
              - logger.log: "Manual override enabled"
            else:
              - globals.set:
                  id: manual_override
                  value: 'false'
              - logger.log: "Manual override disabled"
        - light.turn_on:
            id: white
        - globals.set:
            id: has_connected_to_mqtt
            value: 'true'
    - topic: HomeKit/${device_name}/Lightbulb/On
      payload: "0"
      then:
        - logger.log: "Turn off message"
        - if:
            condition:
              lambda: 'return id(has_connected_to_mqtt);'
            then:
              - light.turn_off:
                  id: white
              - globals.set:
                  id: manual_override
                  value: 'false'
              - logger.log: "Manual override disabled"
        - globals.set:
            id: has_connected_to_mqtt
            value: 'true'

  birth_message:
    topic: HomeKit/${device_name}/Lightbulb/On
    payload: "1"
  will_message:
    topic: HomeKit/${device_name}/Lightbulb/On
    payload: "0"

There’s also some logic in there to prevent the light turning off and then back on when it first connects to the MQTT broker and is already turned on.

Heat Pump Hot Water

Heat pumps seem like magic. You use a certain amount of energy (let’s go with 1kWh) to extract heat from the ambient air, and this allows you to deliver a multiple (let’s go with 3kWh) of your original energy input.

This is significantly better than an element heater: there, the multiple is at most one, depending upon the efficiency of the element.

Similarly, burning gas to heat up water is less efficient - you don’t quite get to apply all of the energy from the amount of gas you burn to the water. I believe boilers and element heaters are around 85% efficient.
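The comparison in numbers, using the figures from the paragraphs above:

```python
electricity_in = 1.0  # kWh of input energy

# Heat pump: the extracted ambient heat multiplies the input (COP of 3).
heat_pump_cop = 3.0
# Element heater / gas boiler, per the ~85% estimate above.
element_efficiency = 0.85

heat_from_heat_pump = electricity_in * heat_pump_cop      # 3.0 kWh delivered
heat_from_element = electricity_in * element_efficiency   # 0.85 kWh delivered
```

So for the same 1kWh in, the heat pump delivers roughly three and a half times the heat of the element.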

This is also how reverse cycle air conditioners work - basically a transfer fluid is used to extract the heat from the ambient air, and either push this cooled air, or use the energy extracted to heat up other air to push around.

But heat pumps, unlike a gas heater, can use renewable energy as the source of heating water. This interests me greatly.

Currently, I spend around $100/month on gas - most of this is probably on my gas hot water, as a gas stove would use a small fraction. For my calculations, I went with 80% hot water, 20% stove. This would not reduce my gas bill to $20/month though, because of daily tariffs (the bane of low-consumption systems). Instead, an 80% reduction in usage would drop my bills to between $35 and $40 per month. Indeed, it would save me almost exactly $2/day.

But the other factor to take into account is how much more of my solar generation I would be using to run the heat pump. To do this, I increased my consumption measurements from my previous calculations by 4kWh (800W for 5 hours, picking the peak solar generation time).

This resulted in a projected increase in my power bill of about $1.10 per day.

So, having a heat pump would probably save me less than $1 per day.

With a $4200 installation price (including STC credits), this looks like a payback time in excess of 12 years, which is far longer than the warranty period of the system. This is looking marginal.

If I double my PV system size, and install a heat pump, this should result in my energy costs reducing by around $3.40 per day. This is about $1240 annually, but then there are two capital costs to cover: closer to $10k. This reduces the payback to around 8 years.
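These payback figures can be reproduced with some simple arithmetic, using the numbers from the post ($2/day gas saving less $1.10/day extra power for the heat pump alone, and $3.40/day for the combined upgrade):

```python
# Payback calculations using the figures quoted in the post.
# Heat pump alone:
net_saving_per_day = 2.00 - 1.10  # gas saving less extra power cost
capital = 4200                    # installed price after STC credits
payback_years = capital / (net_saving_per_day * 365)  # just under 13 years

# Heat pump plus doubled PV:
combined_saving_per_day = 3.40
combined_capital = 6000 + 4000    # PV upgrade plus heat pump
combined_annual = combined_saving_per_day * 365  # about $1240
combined_payback = combined_capital / combined_annual

print(f"heat pump alone: {payback_years:.1f} years")
print(f"with doubled PV: {combined_payback:.1f} years")
```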

Interestingly, this is better than the payback of just adding more solar panels - I used a cost of $6000 for the PV upgrade and $4000 for the heat pump here, rather than the $5k I used in a previous blog post.

But probably still not good enough, right now.

Arlec Powerboard esphome

I bought one of the Grid Connect powerboards from Bunnings last week, and flashed it with a custom firmware.

The model I bought was the PB89HA, which is the one with 5 sockets (one of which is not switchable).

The button is on GPIO3, and the LED is an inverted GPIO1.

The four relays are on GPIO5, GPIO4, GPIO13 and GPIO12, starting with the one closest to the button.

Since there is only one button, but four switchable outlets, I had to come up with a mechanism for controlling all of them. After playing around with single-through-quadruple clicks, I ended up with a single click toggling the first relay (the one nearest the button), a medium press (between 1 and 2 seconds) turning all relays off, and a long press (more than 2 seconds) turning them all on.

This is not really ideal, as there is no way to toggle just relays 2 to 4 without using some sort of external control - which goes against my smart home ethos.

Having said that, I’m not actually sure how I’m going to use this board…I don’t really have a bunch of things that are potentially close together that need individual control. I guess I could have it turn off everything in the entertainment unit except the PVR - that might be a way to save power overnight. I’d want the button more accessible than the powerboard that currently controls them though.

Anyway, the base YAML file follows - be aware that it does not include the wifi configuration, which would need to be added in an actual config file (along with a defined device_name substitution).

esphome:
  name: $device_name
  platform: ESP8266
  board: esp01_1m
  on_boot:
    - light.turn_on:
        id: led
        brightness: 20%

binary_sensor:
  - platform: status
    name: "Status"

  - platform: gpio
    pin:
      number: GPIO3
      inverted: true
      mode: INPUT_PULLUP
    name: button
    on_multi_click:
      - timing:
        - ON for 1s to 2s
        - OFF for at least 0.2s
        then:
          - switch.turn_off: relay_0
          - switch.turn_off: relay_1
          - switch.turn_off: relay_2
          - switch.turn_off: relay_3

      - timing:
        - ON for at least 2s
        - OFF for at least 0.2s
        then:
          - switch.turn_on: relay_0
          - switch.turn_on: relay_1
          - switch.turn_on: relay_2
          - switch.turn_on: relay_3

      - timing:
        - ON for at most 0.5s
        - OFF for at least 0.2s
        then:
          - switch.toggle: relay_0

switch:
  - platform: gpio
    id: relay_0
    pin: GPIO5
    on_turn_on:
      - mqtt.publish:
          topic: HomeKit/${device_name}/Outlet/0/On
          retain: ON
          payload: 1
    on_turn_off:
      - mqtt.publish:
          topic: HomeKit/${device_name}/Outlet/0/On
          retain: ON
          payload: 0

  - platform: gpio
    id: relay_1
    pin: GPIO4
    on_turn_on:
      - mqtt.publish:
          topic: HomeKit/${device_name}/Outlet/1/On
          retain: ON
          payload: 1
    on_turn_off:
      - mqtt.publish:
          topic: HomeKit/${device_name}/Outlet/1/On
          retain: ON
          payload: 0

  - platform: gpio
    id: relay_2
    pin: GPIO13
    on_turn_on:
      - mqtt.publish:
          topic: HomeKit/${device_name}/Outlet/2/On
          retain: ON
          payload: 1
    on_turn_off:
      - mqtt.publish:
          topic: HomeKit/${device_name}/Outlet/2/On
          retain: ON
          payload: 0

  - platform: gpio
    id: relay_3
    pin: GPIO12
    on_turn_on:
      - mqtt.publish:
          topic: HomeKit/${device_name}/Outlet/3/On
          retain: ON
          payload: 1
    on_turn_off:
      - mqtt.publish:
          topic: HomeKit/${device_name}/Outlet/3/On
          retain: ON
          payload: 0

light:
  - platform: monochromatic
    output: led1
    id: led
    restore_mode: ALWAYS_ON

output:
  - platform: esp8266_pwm
    pin:
      number: GPIO1
    id: led1
    inverted: True

sensor:
  - platform: wifi_signal
    name: "WiFi signal sensor"
    update_interval: 5min

ota:

logger:

mqtt:
  broker: "mqtt.lan"
  discovery: false
  topic_prefix: esphome/${device_name}
  on_message:
    - topic: HomeKit/${device_name}/Outlet/0/On
      payload: "1"
      then:
        - switch.turn_on:
            id: relay_0
    - topic: HomeKit/${device_name}/Outlet/0/On
      payload: "0"
      then:
        - switch.turn_off:
            id: relay_0

    - topic: HomeKit/${device_name}/Outlet/1/On
      payload: "1"
      then:
        - switch.turn_on:
            id: relay_1
    - topic: HomeKit/${device_name}/Outlet/1/On
      payload: "0"
      then:
        - switch.turn_off:
            id: relay_1

    - topic: HomeKit/${device_name}/Outlet/2/On
      payload: "1"
      then:
        - switch.turn_on:
            id: relay_2
    - topic: HomeKit/${device_name}/Outlet/2/On
      payload: "0"
      then:
        - switch.turn_off:
            id: relay_2

    - topic: HomeKit/${device_name}/Outlet/3/On
      payload: "1"
      then:
        - switch.turn_on:
            id: relay_3
    - topic: HomeKit/${device_name}/Outlet/3/On
      payload: "0"
      then:
        - switch.turn_off:
            id: relay_3