Brink Renovent Excellent 400 integration into eBus

This guide documents the complete integration of a Brink Renovent Excellent 400 ventilation system through ebusd and MQTT for monitoring in openHAB.

Overview
  • 28+ data points monitored every 10 seconds
  • Real-time monitoring of temperatures, airflows, fan speeds
  • Filter tracking (days used, volume processed)
  • Bypass status and configuration
  • System diagnostics (frost protection, preheater, errors)
  • Operating hours and performance metrics

Hardware Requirements

  1. Brink Renovent Excellent 400 ventilation system
  2. Esera ECO 305 eBUS Controller (or compatible eBus gateway)
  3. 2-wire cable for eBus connection (polarity-sensitive)
  4. Network connectivity for Esera gateway
  5. Proxmox server (or any Linux host) for LXC/VM

Configure Renovent for eBus Mode

Access the Renovent service menu: use the menu button on the display to enter service mode and navigate to Parameter 08. Set it to eBus (not OT / OpenTherm). Then navigate to Parameter 09 (eBus mode) and choose Master (0) or Slave (1).
The Renovent can operate as either eBus Master or Slave. Slave mode (1) is recommended for passive monitoring; if you experience issues, try Master mode (0).
I am running Master mode myself.

Physical Connection

Step 1: Locate Renovent eBus Connection

CRITICAL: Find the correct connector!
X1 Connector: 2-pin green terminal block on the back of the Renovent display housing. NOT the X2 connector (used for other purposes). Refer to the Renovent service manual, page 14, for connector location diagrams.

X1 connector location on the top rear of the Renovent Excellent 400

Step 2: Connect to Esera Gateway

Wire the eBus connection: switch off both the Renovent appliance and the Esera gateway, then connect the Renovent X1 terminals to the Esera eBus terminals. Polarity matters, so follow the wiring diagram. Standard 2-wire cable is fine (no shielding required for short runs); I used shielded KNX-EIB cable because I had some spare lying around. Keep the cable away from high-voltage AC lines. The following two pictures show which pins I connected on which device.

X1 connection using a KNX-EIB shielded cable
Esera gateway connection on pins 22 and 23

Do NOT use termination resistors. eBus does not need termination! Termination resistors will prevent proper communication. The Esera gateway has built-in termination.

Verify the connection: switch on both devices. The LED indicators on the Esera should show activity once connected.
PWR: solid green (power OK)
Station: solid green (eBus ready)
Data: flashing green at ~1 Hz (eBus communication)

Esera Gateway Setup

Step 1: Access Esera Web Interface

Connect your Esera gateway to your network and access the web interface at `http://192.168.X.Y` (or your gateway's IP). The default login password is eserapwd; change it immediately.

Step 2: Configure eBus Settings

Navigate to: eBus Settings. You need to select:
Server protocol: TCP
Port: 5001
Automatic Bus Calibration: Automatic Mode
Calibration Value: 130 (auto-detected)
Compatibility: “Support ebusd enhanced protocol” (should be checked already)

Important: The “enhanced protocol” checkbox CANNOT be disabled – this is normal. Port 5001 uses the enhanced eBus protocol (required for ebusd). Port 5000 uses Esera ASCII protocol (not used in this setup).

Step 3: Verify Connection

Test the raw data flow. In your CLI, enter:

nc 192.168.X.Y 5001

You should see binary data streaming (appears as garbled characters):
ƪƪƪƪw}@Ơƴƴƪƪƪƪ…

This confirms the Esera is receiving eBus data from the Renovent.
CRITICAL: The Esera gateway only allows ONE TCP connection at a time to port 5001! If `nc` is connected, ebusd cannot connect. Always close test connections before starting ebusd.
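Because only one client can hold port 5001, a time-boxed probe is safer than an open-ended `nc` session. Here is a sketch (the gateway IP is a placeholder) that samples a few bytes and then releases the port automatically:

```shell
# Sample up to 64 bytes from the gateway for at most 3 seconds, then
# disconnect so ebusd can claim the single TCP slot again.
sample_ebus_bytes() {
  timeout 3 nc "$1" 5001 | head -c 64 | wc -c
}
# Usage: sample_ebus_bytes 192.168.X.Y
# A result greater than 0 means eBus data is flowing.
```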

LXC Container Setup

Step 1: Create LXC Container in Proxmox

Here is my config:
OS: Ubuntu 24.04 LTS
Hostname: ebus
Memory: 512 MB
Swap: 512 MB
CPU: 1 core
Disk: 8 GB (minimal, only needs ~2 GB)
Network: Bridge to your LAN

Step 2: Initial Setup

SSH into your container, update your system and install wget.

ebusd Installation

Important: always use the mqtt1 variant (it includes MQTT support).

Step 1: Download ebusd

cd /tmp
wget https://github.com/john30/ebusd/releases/download/v26.1/ebusd-26.1_amd64-trixie_mqtt1.deb

Step 2: Install ebusd

dpkg -i ebusd-26.1_amd64-trixie_mqtt1.deb
# Fix any dependency issues
apt-get install -f

Step 3: Verify Installation

ebusd --version
# Should show: ebusd 26.1.26.1

Configuration Files

Step 1: Create Directory Structure

mkdir -p /etc/ebusd/encon

The encon directory is for the manufacturer ENCON (Brink’s eBus identifier).

Step 2: Obtain Renovent Configuration File

Grab the custom configuration file for the Renovent Excellent 400 from GitHub. Find the exact file link and wget it into /tmp.

Step 3: Fix Character Encoding

The original file contains German umlauts (ü, ö, ä) that cause parsing errors. Convert to ASCII:

sed 's/ü/ue/g; s/ä/ae/g; s/ö/oe/g; s/Ü/Ue/g; s/Ä/Ae/g; s/Ö/Oe/g; s/ß/ss/g' \
    /tmp/7c.renovent-excellent-400.csv > /etc/ebusd/encon/7c.csv

Verify the file:

head -20 /etc/ebusd/encon/7c.csv
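To confirm the sed conversion caught everything, you can also scan for any remaining non-ASCII bytes. A sketch (no output means the file is clean):

```shell
# Print line numbers containing any byte outside printable ASCII
# (tabs are allowed, since CSV fields may legitimately contain them)
find_non_ascii() {
  LC_ALL=C grep -n $'[^\t -~]' "$1"
}
# find_non_ascii /etc/ebusd/encon/7c.csv
```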

Step 4: Configure ebusd Service

Edit the configuration file:

nano /etc/default/ebusd

Add the following:

EBUSD_OPTS="--device=enh:192.168.X.Y:5001 \
  --latency=50 \
  --scanconfig \
  --configpath=/etc/ebusd \
  --mqtthost=192.168.A.B \
  --mqttport=1883 \
  --mqttjson \
  --mqtttopic=ebusd \
  --log=all:notice \
  --pollinterval=10"

Parameter explanations:

  • --device=enh:192.168.X.Y:5001: Esera gateway with enhanced protocol
  • --latency=50: 50 ms latency compensation
  • --scanconfig: auto-detect and load device configurations
  • --configpath=/etc/ebusd: where to find the CSV configs
  • --mqtthost=192.168.A.B: your MQTT broker IP
  • --mqttport=1883: standard MQTT port
  • --mqttjson: publish in JSON format
  • --mqtttopic=ebusd: MQTT topic prefix
  • --log=all:notice: logging level
  • --pollinterval=10: poll interval (not used for active polling in our setup)

Step 5: Start and Enable ebusd

systemctl enable ebusd
systemctl start ebusd

# Wait 30 seconds for initialization
sleep 30

# Verify it's running
systemctl status ebusd

Step 6: Verify ebusd Connection

ebusctl info

Expected output:
version: ebusd 26.1.26.1
device: 192.168.X.Y:5001, TCP, enhanced
signal: acquired
scan: finished
masters: 2
messages: 141
address 31: master #8, ebusd
address 36: slave #8, ebusd
address 77: master #19
address 7c: slave #19, scanned "MF=ENCON;ID= ;SW=-;HW=-", loaded "encon/7c.csv"

Key indicators of success:
`signal: acquired` (not "no signal")
`messages: 141` (config loaded)
`address 7c: slave #19, scanned "MF=ENCON"` (Renovent detected)
`loaded "encon/7c.csv"` (configuration file loaded)
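These checks can be scripted for monitoring. Here is a sketch that parses the output of `ebusctl info` on stdin, looking for the success strings shown above:

```shell
# Return 0 if the bus signal is acquired and the Renovent (ENCON)
# was scanned; reads `ebusctl info` output on stdin.
ebus_healthy() {
  local info
  info=$(cat)
  echo "$info" | grep -q 'signal: acquired' &&
  echo "$info" | grep -q 'MF=ENCON'
}
# Usage: ebusctl info | ebus_healthy && echo OK || echo "check ebusd"
```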

Test reading a value:

ebusctl read -c Excellent400 Aussenlufttemperatur
# Should return a temperature value like: 10.2

Polling Script

Why a Polling Script? ebusd can passively listen to eBus telegrams, but the Renovent doesn’t broadcast all values automatically. We need to actively request data.

The Challenge: Setting `poll` in CSV files doesn’t work reliably with ebusd 26.1. Solution: External polling script that calls `ebusctl read`.

Step 1: Create Polling Script

nano /root/ebus/poll_renovent.sh

Script content:

#!/bin/bash
while true; do
    # Temperatures
    ebusctl read -c Excellent400 Aussenlufttemperatur > /dev/null 2>&1
    ebusctl read -c Excellent400 Ablufttemperatur > /dev/null 2>&1
    ebusctl read -c Excellent400 ZusaetzlicherTemperaturfuehler > /dev/null 2>&1
    
    # Airflows - Actual
    ebusctl read -c Excellent400 TatsaechlicheZuluftmenge > /dev/null 2>&1
    ebusctl read -c Excellent400 TatsaechlicheAbluftmenge > /dev/null 2>&1
    ebusctl read -c Excellent400 Zuluftmenge > /dev/null 2>&1
    ebusctl read -c Excellent400 Abluftmenge > /dev/null 2>&1
    
    # Airflow Setpoints
    ebusctl read -c Excellent400 LuftmengeStufe0 > /dev/null 2>&1
    ebusctl read -c Excellent400 LuftmengeStufe1 > /dev/null 2>&1
    ebusctl read -c Excellent400 LuftmengeStufe2 > /dev/null 2>&1
    ebusctl read -c Excellent400 LuftmengeStufe3 > /dev/null 2>&1
    
    # Fan speeds
    ebusctl read -c Excellent400 TatsaechlicheDrehzahlZuluft > /dev/null 2>&1
    ebusctl read -c Excellent400 TatsaechlicheDrehzahlAbluft > /dev/null 2>&1
    
    # Operating Mode & Status
    ebusctl read -c Excellent400 Ventilatorbetrieb > /dev/null 2>&1
    ebusctl read -c Excellent400 StatusBypass > /dev/null 2>&1
    ebusctl read -c Excellent400 StatusVentilator > /dev/null 2>&1
    ebusctl read -c Excellent400 StatusFrostschutz > /dev/null 2>&1
    ebusctl read -c Excellent400 StatusVorheizregister > /dev/null 2>&1
    ebusctl read -c Excellent400 LeistungVorheizregister > /dev/null 2>&1
    
    # Switch position
    ebusctl read -c Excellent400 PositionStufenschalter > /dev/null 2>&1
    
    # Bypass
    ebusctl read -c Excellent400 BypassTemperatur > /dev/null 2>&1
    ebusctl read -c Excellent400 BypassHysterese > /dev/null 2>&1
    
    # Filter
    ebusctl read -c Excellent400 Filtermeldung > /dev/null 2>&1
    ebusctl read -c Excellent400 FilterverwendungTage > /dev/null 2>&1
    ebusctl read -c Excellent400 Filterverwendung > /dev/null 2>&1
    
    # Pressure
    ebusctl read -c Excellent400 IstwertZuluftdruck > /dev/null 2>&1
    ebusctl read -c Excellent400 IstwertAbluftdruck > /dev/null 2>&1
    
    sleep 10
done

Make it executable:

chmod +x /root/ebus/poll_renovent.sh

Script details:
Polls 28 data points every 10 seconds. Each read is redirected to `/dev/null` (we only care about the MQTT publication). `sleep 10` controls the update frequency.
Customization: Increase `sleep 10` to `sleep 30` for less frequent updates (lower CPU usage). Remove unwanted data points to reduce eBus traffic (these are the ones I use). Add more data points from the CSV file as needed.
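A more compact variant of the script keeps the datapoint names in an array, so adding or removing a point is a one-line change. A sketch with an abbreviated point list (same `ebusctl` calls as above):

```shell
#!/bin/bash
# Datapoints to poll (extend with any message name from encon/7c.csv)
POINTS=(Aussenlufttemperatur Ablufttemperatur TatsaechlicheZuluftmenge
        TatsaechlicheAbluftmenge StatusBypass Filtermeldung)

poll_once() {
  local p
  for p in "${POINTS[@]}"; do
    ebusctl read -c Excellent400 "$p" > /dev/null 2>&1
  done
}

# Main loop, as in the original script:
# while true; do poll_once; sleep 10; done
```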

Step 2: Create Systemd Service

nano /etc/systemd/system/ebusd-poll.service
[Unit]
Description=eBus Renovent Polling Script
After=ebusd.service
Requires=ebusd.service

[Service]
Type=simple
ExecStart=/root/ebus/poll_renovent.sh
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start:

systemctl daemon-reload
systemctl enable ebusd-poll
systemctl start ebusd-poll

# Check status
systemctl status ebusd-poll

Step 3: Verify Polling

Check that values are being updated: wait about 15 seconds, then run:

ebusctl find -c Excellent400 | grep -v "no data" | head -10

You should see values instead of “no data stored”:
Excellent400 Aussenlufttemperatur = 10.2
Excellent400 Ablufttemperatur = 22.0
Excellent400 TatsaechlicheZuluftmenge = 180

MQTT Integration

Verify MQTT Publishing

apt install mosquitto-clients

Subscribe to all ebusd topics:

mosquitto_sub -h 192.168.A.B -p 1883 -v -t 'ebusd/#'

Expected output (updates every 10 seconds):

ebusd/Excellent400/Aussenlufttemperatur {"0": {"name": "", "value": 10.1}}
ebusd/Excellent400/Ablufttemperatur {"0": {"name": "", "value": 22.0}}
ebusd/Excellent400/TatsaechlicheZuluftmenge {"0": {"name": "", "value": 180}}
ebusd/Excellent400/Ventilatorbetrieb {"0": {"name": "", "value": "Normal"}}
ebusd/Excellent400/StatusBypass {"0": {"name": "", "value": "Closed"}}
ebusd/Excellent400/Filtermeldung {"0": {"name": "", "value": "Clean"}}

MQTT Message Format:
All Excellent400 values: `{"0": {"name": "", "value": <VALUE>}}`
Error messages: `{"error": {"value": "E100"}}`
Global values: may vary in format
JSON path for extraction: $.0.value (for most messages)
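On the command line, the value can be pulled out of the JSON payload with jq. Note that the "0" key is a string key, so it has to be quoted (the broker IP is a placeholder):

```shell
# Extract the value field from an ebusd JSON payload
echo '{"0": {"name": "", "value": 10.1}}' | jq '."0".value'
# → 10.1

# Live, combined with mosquitto_sub:
# mosquitto_sub -h 192.168.A.B -t 'ebusd/Excellent400/Aussenlufttemperatur' | jq '."0".value'
```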

openHAB Configuration

Step 1: Install Required Add-ons

In openHAB UI:
Settings – Add-ons – Bindings:
Install: MQTT Binding
Settings – Add-ons – Transformations:
Install: JSONPath Transformation
Install: MAP Transformation
Install: REGEX Transformation (for multi-value fields)

Step 2: Configure MQTT Broker Thing

Settings → Things → + → MQTT Binding → MQTT Broker
Configuration:
Broker Hostname/IP: 192.168.A.B
Port: 1883
Client ID: openhab
Save and ensure status shows ONLINE.

Step 3: Create VMC MQTT Thing

Settings → Things → + → MQTT Binding → Generic MQTT Thing
Thing ID: MqttVMC
Label: VMC MQTT
Bridge: Select your MQTT Broker

Step 4: Add Channels

For each data point, add a channel. Here is one example:

Channel ID: VmcAussenlufttemperatur
Label: VMC Aussenluft Temperatur
Channel Type: Number Value
State Topic: ebusd/Excellent400/Aussenlufttemperatur
Incoming Value Transformations: JSONPATH:$.0.value
Unit of Measurement: °C


Here is a different example; the error message uses a different JSON path, $.error.value instead of $.0.value:

Channel ID: VMC_errorCode
Label: VMC Error Code
Channel Type: Text Value
State Topic: ebusd/Broadcast/Error
Incoming Value Transformations: JSONPATH:$.error.value

Step 5: Create MAP Transformation Files

Bypass Status Map: /etc/openhab/transform/vmc_bypass.map

0=Initializing
1=Opening
2=Closing
3=Open
4=Closed
5=Error
6=Calibrating
255=Error
-=Unknown
NULL=Unknown
UNDEF=Unknown

Filter Status Map: /etc/openhab/transform/vmc_filter.map

0=Clean
1=Dirty
Clean=Clean
Dirty=Dirty
-=Unknown
NULL=Unknown
UNDEF=Unknown

Switch Position Map: /etc/openhab/transform/vmc_switch.map

0=Standby
1=Position_1
2=Position_2
3=Position_3
Position_0=Standby
Position_1=1
Position_2=2
Position_3=3
-=Unknown
NULL=Unknown
UNDEF=Unknown

Step 6: Link Channels to Items

For each channel, create an Item and link it:
Settings → Items → Create Item
Select “Link to Channel”
Choose the VMC MQTT Thing and specific channel
Configure Item:

Type: Match channel type (Number/String)
Name: e.g., `VMC_Aussenluft_Temperatur`
Label: e.g., `Outside Temperature`
Category: Choose appropriate icon
State Description Pattern: Add MAP transformation if needed
- Example: `MAP(vmc_bypass.map):%s` for bypass status

Step 7: Understanding multi-value fields

Some values return multiple fields separated by semicolons:
Format: Current;Min;Max;Step;Default
Example: FilterverwendungTage returns 6;0;32767;1;0

- Current: 6 days
- Min: 0 days
- Max: 32767 days
- Step: 1 day
- Default: 0 days
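If you only need the current value, the transformation has to cut off everything after the first semicolon. Locally that looks like the snippet below; in openHAB, a chained REGEX transformation along the lines of `REGEX:([^;]*);.*` after the JSONPATH step achieves the same (treat that pattern as a sketch, not verified channel config):

```shell
# Keep only the first (current) field of a semicolon-separated reply
reply="6;0;32767;1;0"
current=$(echo "$reply" | cut -d';' -f1)
echo "$current"
# → 6
```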

This was one of the most complicated configurations I have gone through so far.

Reverse Proxy Protection by Authentik

auth_basic is a lousy protection for your websites; better use an SSO solution like Authentik. Here is my NGINX reverse proxy host configuration:

server {
    server_name HOST.DOMAIN.NAME;
    access_log /var/log/nginx/HOST.DOMAIN.NAME.access.log;
    error_log /var/log/nginx/HOST.DOMAIN.NAME.error.log;
    
    # Increase buffer size for large headers from Authentik
    proxy_buffers 8 16k;
    proxy_buffer_size 32k;
    
    # All requests to /outpost.goauthentik.io must be accessible without authentication
    location /outpost.goauthentik.io {
        # When using the embedded outpost, proxy to Authentik backend
        proxy_pass http://AUTHENTIK_IP:9000/outpost.goauthentik.io;
        
        # CRITICAL: Set Host to auth.DOMAIN.NAME so Authentik knows which provider to use
        proxy_set_header Host auth.DOMAIN.NAME;
        
        proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
        add_header Set-Cookie $auth_cookie;
        auth_request_set $auth_cookie $upstream_http_set_cookie;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
    
    # Special location for when the /auth endpoint returns a 401
    # For domain level, redirect to your authentik server with the full redirect path
    location @goauthentik_proxy_signin {
        internal;
        add_header Set-Cookie $auth_cookie;
        # CHANGED: Use full auth.DOMAIN.NAME URL for domain-level auth
        return 302 https://auth.DOMAIN.NAME/outpost.goauthentik.io/start?rd=$scheme://$http_host$request_uri;
    }
    
    location / {
        # -------------------------
        # BYPASS AUTH FOR LOCAL IPS
        # -------------------------
        satisfy any;
        allow 192.168.1.0/24;
        deny all;

        ##############################
        # authentik-specific config
        ##############################
        auth_request /outpost.goauthentik.io/auth/nginx;
        error_page 401 = @goauthentik_proxy_signin;
        auth_request_set $auth_cookie $upstream_http_set_cookie;
        add_header Set-Cookie $auth_cookie;

        # Translate headers from the outposts back to the actual upstream
        auth_request_set $authentik_username $upstream_http_x_authentik_username;
        auth_request_set $authentik_groups $upstream_http_x_authentik_groups;
        auth_request_set $authentik_entitlements $upstream_http_x_authentik_entitlements;
        auth_request_set $authentik_email $upstream_http_x_authentik_email;
        auth_request_set $authentik_name $upstream_http_x_authentik_name;
        auth_request_set $authentik_uid $upstream_http_x_authentik_uid;

        proxy_set_header X-authentik-username $authentik_username;
        proxy_set_header X-authentik-groups $authentik_groups;
        proxy_set_header X-authentik-entitlements $authentik_entitlements;
        proxy_set_header X-authentik-email $authentik_email;
        proxy_set_header X-authentik-name $authentik_name;
        proxy_set_header X-authentik-uid $authentik_uid;
        
        # Your original proxy settings to Homer
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        
        # Support for websocket
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        
        proxy_pass http://AUTHENTIK_IP:8090;
     }
    
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/HOST.DOMAIN.NAME/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/HOST.DOMAIN.NAME/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    if ($host = HOST.DOMAIN.NAME) {
        return 301 https://$host$request_uri;
    }
    listen 80;
    server_name HOST.DOMAIN.NAME;
    return 404;
}

In Authentik, you have to create a Provider:

In Authentik, create an Application:

Funkwhale: installation

Funkwhale is a very nice audio hosting platform. We’ll install it using docker-compose in Portainer.

services:
  funkwhale:
    image: funkwhale/all-in-one:latest
    container_name: funkwhale2 #I have 2 instances of funkwhale on this docker host
    restart: unless-stopped
    depends_on:
      - postgres
      - redis
    environment:
      - FUNKWHALE_HOSTNAME=audio.MYSITE.COM
      - FUNKWHALE_PROTOCOL=https
      - FUNKWHALE_PROTOCOL_HEADER=TRUE
      - FUNKWHALE_PROXY_BODY_SIZE=200M
      - FUNKWHALE_MAX_UPLOAD_SIZE=200M
      - DATABASE_URL=postgresql://funkwhale:XXX_MY_DB_PWD_XXX@postgres:5432/funkwhale
      - REDIS_URL=redis://redis:6379/0
      - DJANGO_SECRET_KEY=XXXXX_MYSECRET_XXXXX
      - FUNKWHALE_ENABLE_FEDERATION=false
      - FUNKWHALE_DISABLE_REGISTRATION=true
      # Email Configuration (for GMAIL)
      - EMAIL_CONFIG=smtp+tls://MY_GMAIL_ADDRESS@gmail.com:XX_MY_APP_PASSWORD_XX@smtp.gmail.com:587
      - DEFAULT_FROM_EMAIL=MY_GMAIL_ADDRESS@gmail.com
    volumes:
      - /mnt/funkwhale2-data/media:/data/media
      - /mnt/funkwhale2-data/static:/data/static
      - /mnt/funkwhale2-data/music:/music
    networks:
      - funkwhale2
    ports:
      - "5001:80"  # Choose e free host port; default is 5000
      
  postgres:
    image: postgres:14
    container_name: funkwhale2_postgres
    restart: unless-stopped
    environment:
      - POSTGRES_USER=funkwhale
      - POSTGRES_PASSWORD=XXX_MY_DB_PWD_XXX
      - POSTGRES_DB=funkwhale
    volumes:
      - ./postgres2:/var/lib/postgresql/data  # Different volume!
    networks:
      - funkwhale2
      
  redis:
    image: redis:7-alpine
    container_name: funkwhale2_redis
    restart: unless-stopped
    networks:
      - funkwhale2
      
networks:
  funkwhale2:  # I run 2 instances of Funkwhale on my docker.
    external: false

Create your Django secret (or passwords) with this:

openssl rand -base64 48

Funkwhale user admin

To check whether users are active in the database, or whether they confirmed the registration email, start by connecting to the postgres database on your docker host:

docker exec -it funkwhale2_postgres psql -U funkwhale -d funkwhale

Confirm registration & confirmation status:

SELECT
  id,
  username,
  email,
  is_active,
  date_joined,
  last_login
FROM users_user
ORDER BY date_joined DESC;

This will output something like:

 id | username |          email          | is_active |          date_joined          |          last_login           
----+----------+-------------------------+-----------+-------------------------------+-------------------------------
  4 | Paul     | paulXXX@domain.com      | t         | 2025-12-14 11:18:31.248306+00 | 2025-12-14 11:18:32.49417+00
  3 | Mary     | maryXYZ@domain.com      | t         | 2025-12-13 23:33:18.022068+00 | 2025-12-13 23:33:20.309663+00
  1 | admin    | siteadmin@domain.com    | t         | 2025-12-13 23:13:51.537683+00 | 2025-12-13 23:32:35.71689+00
(3 rows)

Find users who registered but never logged in:

SELECT username, email
FROM users_user
WHERE last_login IS NULL;

Find users who registered but never confirmed:

SELECT username, email
FROM users_user
WHERE is_active = false;
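The same queries can be run non-interactively from the docker host, which is handy for a cron job. A sketch, using the container and database names from the compose file above:

```shell
# Ask postgres inside the container for unconfirmed accounts
list_unconfirmed() {
  docker exec funkwhale2_postgres \
    psql -U funkwhale -d funkwhale -t -A -c \
    "SELECT username, email FROM users_user WHERE is_active = false;"
}
# list_unconfirmed
```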

Setlist

| # | Title | Artist | BPM | Key original | Key played | Guitar Tuning | Capo | Transp |
|---|-------|--------|-----|--------------|------------|---------------|------|--------|
| 1 | One Horse Town | Blackberry Smoke | 120 | C | C | | | -1 |
| 2 | Fire Away | Chris Stapleton | 90 | G | A | | | -1 |
| 3 | Ain't no Love in Oklahoma | Luke Combs | | Cm | Cm | | | 0 |
| 4 | Country must be Country Wide | Brantley Gilbert | 83 | C#m | | | | -1 |
| 5 | Sweet Home Alabama | Lynyrd Skynyrd | | | | | | |
| 6 | Keep the wolf away | Uncle Lucius | | | | | | |
| 7 | Beer never broke my heart | Luke Combs | | | | | | |
| 8 | Tennessee Whiskey | Chris Stapleton | | | | | | |
| 9 | Man of constant Sorrow | Blackberry Smoke | 130 | Gm | | | | |
| 10 | I think I'm in Love with You | Chris Stapleton | | | | | | |
| 11 | Change on the Rise | Avi Kaplan | | | | | | |
| 12 | Sleeping in the black top | Colter Wall | | | | | | |

Quick action: convert to m4a

If you do not want to keep large WAV or AIFF files on your disk that eat up precious space, you can use Apple’s Automator to write a quick action to easily convert these files to m4a.

Open Apple’s Automator and create a new quick action.

Automator: convert audio files to m4a
for f in "$@"
do
  afconvert -f m4af -d aac "$f" "${f%.*}.m4a"
done

You can also have Automator delete the original file after conversion:

Automator: convert audio to m4a and delete original file
for f in "$@"
do
  # Convert WAV to M4A
  afconvert -f m4af -d aac "$f" "${f%.*}.m4a"
  
  # Check if conversion succeeded before deleting
  if [ $? -eq 0 ]; then
    rm "$f"
  else
    echo "Conversion failed for $f, original not deleted."
  fi
done

These quick actions also work on video files (like mp4), in case you are only interested in keeping the audio track of a video file.

The above quick action workflows can be downloaded below:

To install the workflows as quick actions, download the files above, unzip and execute them; macOS will then install the workflows. macOS security protection still keeps them disabled, so you have to enable them in system preferences: open Login Items & Extensions, and under Extensions, on the Finder line, hit the "i" button:

macOS system settings: enable the quick action

Source: These scripts have been developed with the help of a generative large language model.

Luxembourg’s smart meter SMARTY

Luxembourg’s smart meter branded as Smarty by the electric grid managing company CREOS allows monitoring your electricity consumption and production as well as your gas consumption. Additionally it allows the grid operator to control certain electrical appliances in your household.

Shedding relays – relais de délestage

If parts of the electricity grid in Luxembourg become overloaded or unstable, the grid operator can remotely command the Smarty smart meter to take action to protect the grid. Smarty can reduce the power of domestic car charging points from 11 kW to 7 kW. As of writing (January 2025), CREOS confirms it is not using this feature.

Smarty can also reduce the power of photovoltaic systems to about a third of the available power, if your PV management system allows this. If not, the shedding relay (FR: relais de délestage) will fully shut down the PV system when asked by the grid operator. It seems that this is currently not used by CREOS either.

Smarty can also limit charging power of night storage heaters (DE: Nachspeicherheizung, FR: chauffage à accumulation) systems through shedding relays.

Billing

Every 15 minutes, the Smarty transmits the average electricity consumption of the last 15 minutes to the grid operator (or its third-party data collection partner). This is used to bill the quantity of electrical energy consumed, as well as to monitor the load you put on the grid.

As of 1st January 2025, every household in Luxembourg is categorised in a power consumption category. Most households are in category 3 kW. Households with electric cars or heat pumps might be categorised into 7 kW or 12 kW categories. The monthly costs depend on the power consumption class that you are in.

If you are in the 3 kW category and you draw 5 kW from the grid for about an hour, you have to pay an additional fee (FR: supplément pour le dépassement). In this case the excess is 5 - 3 = 2 kW for one hour, i.e. 2 kWh, which currently equals 2 × 0.1139 € = 0.2278 €.
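The excess-power fee, as a quick calculation (values taken from the example above):

```shell
# (drawn kW - category kW) * 1 h * price per kWh
awk 'BEGIN { printf "%.4f EUR\n", (5 - 3) * 0.1139 }'
# → 0.2278 EUR
```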

If you use, for example, a (tea) water heating device that pushes your maximum power consumption over the 3 kW limit for about a minute, Smarty's 15-minute averaging should ensure that this brief excursion over your category's boundary does not show up on your invoice.

https://www.creos-net.lu/fileadmin/dokumente/downloads/fr_tarif_electricite_2025.pdf

To find out what your power category is, check your electricity provider’s portal, like myenovos.eu.

Balancing out consumption and production on 3-phase grids

When you are running a balcony power plant (DE: Balkonkraftwerk), you might be interested in how the Smarty manages billing or settlements of electricity production over the three phases. In Luxembourg, a normal household is connected to the power grid on 3 phases each capable of pumping 40A of current.

When you connect such a small photovoltaic system to a power socket on your balcony, it feeds the produced electricity only into the phase it is connected to. That may not be the phase where your house consumes the most electricity, so you end up feeding your production into the grid while drawing the electricity you need on another phase (buying from the grid is much more expensive than what you get paid for injecting electricity).

But there is good news: the Smarty balances this out (DE: saldierender Stromzähler). It sums up power consumption across all three phases and power production across all three phases, and only considers the difference for billing. Without this, a (mono-phase) balcony power plant would not really be interesting for private use.
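As a worked example of the netting behaviour, assume 600 W injected on phase 1 while phases 2 and 3 consume 500 W and 300 W:

```shell
# Net metering across three phases: only the difference is billed
awk 'BEGIN {
  consumption = 0 + 500 + 300   # W drawn on phases 1..3
  production  = 600 + 0 + 0     # W injected on phases 1..3
  printf "net billed: %d W\n", consumption - production
}'
# → net billed: 200 W
```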

[Unfortunately I only have oral confirmation of this from an engineer. No guarantee on this information. It might also change over time.]

Monitoring your consumption and production

You have three possibilities to extract consumption and production data from your Smarty smart meter.

(1) By default, Smarty cycles through the most important values in a 5-second rhythm. Additionally, with the green button on your Smarty you can step through the data the meter gathers. To show which value is currently displayed, Smarty adds the OBIS code to the display.

OBIS codes

(2) Plug the Smarty+ dongle into your Smarty meter’s P1 port and monitor your data on your smartphone

(3) Connect the P1 port over a serial / USB to a computer and monitor your data using a home automation software like for example openHAB with the help of the DSMR binding.

When using the P1 port (solutions 2 and 3), you need to ask the grid operator for the P1 decryption key.

Source for most of the information above: https://www.creos-net.lu/en/individuals/practical-info/faq

Some information was confirmed in a telephone discussion with a grid engineer in January 2025. The general disclaimer applies here too, of course!

Proxmox rotating network interfaces

It sometimes happens that after I shut down Proxmox for some hours, the network interface does not work after restart. It turns out Proxmox rotates the network interface names, so the Linux network bridges no longer map to the correct network interface, which leads to no network connectivity. The only thing that helps is to adapt the /etc/network/interfaces file.

Some commands that can help:

brctl show
ifconfig vmbr0
dmesg
lspci | grep -i ethernet
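To spot a rotation, compare each NIC's current name and MAC address against the bridge-ports line in /etc/network/interfaces. A sketch:

```shell
# Print "name mac" pairs from `ip -o link show` output (read on stdin)
parse_nics() {
  awk '{
    name = $2; sub(/:$/, "", name)
    for (i = 1; i < NF; i++)
      if ($i == "link/ether") print name, $(i + 1)
  }'
}
# ip -o link show | parse_nics
# grep bridge-ports /etc/network/interfaces   # the name vmbr0 expects
```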

Increase size of disk partition in Ubuntu

If you try to upgrade your system and you lack disk space, here is a solution that might help.

Run “df -h” to check how much space your system is using.

df -h

If it is at 100%, check whether the partition still has some spare space left to be used.

lsblk

If the partition your Ubuntu system lives on (e.g. sda3) is bigger than the space allocated to "ubuntu--vg-ubuntu--lv", you can add space to the system partition.

Result of lsblk (15GB remaining to be added)

You need to extend the logical volume to the maximum free space available:

sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

Then you need to expand the filesystem:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

Check with df -h if the additional space has been allocated.