Fronius Dashboard 0.3.0

I released a new docker version of Fronius Dashboard a few days ago. It now allows you to specify your inverter power (in Watts), using an environment variable. The default value is 5000.

docker stop fronius-dashboard

docker rm fronius-dashboard

docker pull schinckel/fronius-dashboard

docker run \
  --publish 4000:4000 \
  --detach \
  --restart always \
  --name fronius-dashboard \
  --env INVERTER=<ip-address-or-hostname> \
  --env INVERTER_SIZE=<inverter-size-in-watts> \
  schinckel/fronius-dashboard

If your inverter is 5kW, then you don’t need to apply this update.

Fronius Dashboard

Last January, I had some solar panels installed. The inverter I chose was a Fronius Primo, which is a relatively well-regarded piece of equipment.

It comes with access to Solar.Web, which is Fronius’ online portal. Using this, you can get realtime access to your power generation, and hourly updated energy generation for the current day. They also have an Apple Watch app (and iPhone, naturally).

Additionally, I configured PVOutput, which gives me generation stats with a five minute resolution.

Shortly after, I decided to have a Fronius Smart Meter installed, which gives me realtime access to feed-in and feed-out data. This matches up fairly well with the SA Power Networks smart meter data, which is only available a day or so after the fact. I wish I’d had this installed from day one.

One of the key things I wanted to be able to do was have an easily readable display that shows how much power we are generating (and how much energy we have generated today), but also the current consumption, and the feed in or out right now.

After a bit of work, I came up with what I think is a really nice UI. I wrote it using Elixir, mainly because I wanted to write something using Phoenix LiveView. It’s actually a bit too much like Rails for me to want to use it frequently, but I do wish there was something a bit like it for Django.

Anyway, enough of the technology choices, let’s look at the dashboard:

Fronius Dashboard: Daytime View

The data is updated every second; and at night time, it goes into a dark mode:

Fronius Dashboard: Nighttime View

Like most of my side projects, this is freely available. You can install this now on any device that runs Docker, as long as it’s amd64 or arm. I’ve built it on a Raspberry Pi 3B+, but I think it should also work on a Raspberry Pi Zero.

$ docker run -it -p 4000:4000 \
    --restart unless-stopped \
    --name fronius-dashboard \
    --env INVERTER=<ip-address-or-hostname> \
    schinckel/fronius-dashboard

This should download the latest version, and start it up. You will need to supply your own inverter’s IP address or hostname, but if your inverter’s hostname is fronius.lan, then you can omit that line.

To make it available outside of your network, you’ll need to forward something to port 4000 of the device that is running this service. I have the aforementioned Raspberry Pi 3B+ running this service, along with an MQTT broker, Node-RED, and my MQTT2HomeKit bridge, and I expose only the Fronius Dashboard service to the internet.

It’s also designed to work well as a home screen app on iOS, and possibly Android devices.

There is no warranty, implied or otherwise as to the fitness of this to your purposes, but if you are nice, I’ll try to fix any bugs. You can view the source code at Source Hut.

I’m not sure how it will handle not having a Smart Meter installed.

I’ll also be more than happy to try to make this work with other inverters.

I’ve also been playing around with using nerves-project to build a standalone image that can be burned to a Raspberry Pi Zero, but I’m still working through some kinks related to being able to configure it. I’m hoping to have that as a turn-key solution that I can install for some friends who have purchased (or are purchasing) Fronius inverters.

Basically, I need a mechanism for storing the WiFi credentials on the nerves board, and allow configuring the IP address/hostname of the inverter. I got partway through an admin UI, but it’s not really complete yet.

asyncpg and upserting bulk data

I’ve been doing some work on analysing the output of my PV system, and needed a mechanism for inserting large numbers of rows of data pulled from the Fronius API. Since I mostly work in python, I decided to use a python script to get the data from the inverter, but I needed a clean way to insert potentially lots of rows (every 5 minute period from every day: batched in 16 day blocks).
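Splitting the date range into those 16-day blocks can be sketched like this (a minimal sketch with names of my own, not taken from the actual script):

```python
from datetime import date, timedelta

def date_windows(start, end, days=16):
    """Yield (window_start, window_end) pairs covering start..end
    inclusive, in blocks of at most `days` days."""
    current = start
    while current <= end:
        window_end = min(current + timedelta(days=days - 1), end)
        yield current, window_end
        current = window_end + timedelta(days=1)

# Each window can then be passed to the Fronius archive query.
windows = list(date_windows(date(2019, 1, 1), date(2019, 2, 5)))
```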

I’d normally just use psycopg2, but since I was using Python 3, I thought I’d try out asyncpg.

The API is quite nice:

import asyncio
import asyncpg

async def write():
    conn = await asyncpg.connect('postgres://:@:/database')
    # Arguments are passed positionally, matching $1, $2, … placeholders.
    await conn.execute('INSERT INTO data VALUES ($1, $2)', timestamp, value)
    await conn.close()

asyncio.run(write())


So, that part was fine. There are a few more hoops to jump through because of async, but meh.

But I had an interesting case: I wanted to do a bulk insert, and I didn’t know how many records I was going to be inserting. It seemed like it was going to be a bunch of work to build up placeholder strings based on the number of rows, and the number of columns in each row.
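To give a sense of the busywork involved, building those placeholders by hand looks roughly like this (a sketch; the names are mine):

```python
def build_insert(table, columns, rows):
    """Build a multi-row INSERT with numbered placeholders.
    Assumes `table` and `columns` are trusted values, not user input."""
    width = len(columns)
    groups = []
    for row_index in range(len(rows)):
        start = row_index * width
        # One "($1, $2)" style group per row, numbering continuing across rows.
        placeholders = ", ".join(f"${start + i + 1}" for i in range(width))
        groups.append(f"({placeholders})")
    query = (
        f"INSERT INTO {table} ({', '.join(columns)}) "
        f"VALUES {', '.join(groups)}"
    )
    # asyncpg takes the arguments as a flat sequence.
    args = [value for row in rows for value in row]
    return query, args
```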

But no, there is a function that works perfectly (and turned out to be even better, because it takes an argument as to the name of the table):

    conn = await asyncpg.connect(...)
    await conn.copy_records_to_table(table, records=values)
    await conn.close()

That was really easy. It’s a requirement that each row of values is in the same order as the table definition, but my tables in this case were simple.

But there’s one catch: this works great for inserting data, but what about when some of that data is already present? We want to insert some rows, but update others: an upsert. Postgres supports this using the ON CONFLICT ... DO ... syntax. So, how can this work with the bulk insert (which uses the Postgres COPY command under the hood)?

    # Our table just contains two columns: timestamp and value.
    await conn.execute('''CREATE TEMPORARY TABLE _data(
        timestamp TIMESTAMP, value NUMERIC
    )''')
    await conn.copy_records_to_table('_data', records=values)
    await conn.execute('''
        INSERT INTO {table}(timestamp, value)
        SELECT * FROM _data
        ON CONFLICT (timestamp)
        DO UPDATE SET value=EXCLUDED.value
    '''.format(table=table))


We can use string formatting here because the table name is controlled by us, so there is no SQL injection vector to worry about. You should not do this if that value could come from a user, though.
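If the table name ever could come from outside, a simple allowlist check is enough (a sketch; the table names here are hypothetical):

```python
# Hypothetical set of table names this script is allowed to touch.
ALLOWED_TABLES = {"generation", "consumption"}

def safe_table(name):
    """Refuse to interpolate any table name we don't control into SQL."""
    if name not in ALLOWED_TABLES:
        raise ValueError(f"unexpected table name: {name!r}")
    return name
```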

But there is one problem. We are updating rows even if the value has not changed. If you don’t think too hard, that’s probably fine. But the way Postgres works is that it rewrites every row that has “changed”, even if it hasn’t really changed. It marks the old row as superseded (a dead tuple), and that space will not be reclaimed until the table is vacuumed.

Instead, we should be able to just ignore those rows that have the same value (which, in my case, should be all rows except the last one, as that is the only one that should have different data).

    await conn.execute('''
        INSERT INTO {table}(timestamp, value)
        SELECT * FROM _data
        ON CONFLICT (timestamp)
        DO UPDATE SET value=EXCLUDED.value
        WHERE {table}.value <> EXCLUDED.value
    '''.format(table=table))

Oh nice: Postgres also supports a WHERE clause on the DO UPDATE part.