Syncing data from TrueNAS to QNAP using rsync

I was looking for a solution to back up my data from my main NAS, a TrueNAS Core instance, to a QNAP. Although I found a really good tutorial by Raid Owl explaining how to back up from TrueNAS to Synology, I did not find one for QNAP at the time. The following tutorial should give you an idea of what you need to do to sync data from TrueNAS to QNAP using rsync. Bear in mind that although rsync copies your data from A to B, it is technically not a backup solution.

Preparing the QNAP

Create a share

On the QNAP, create a location where the backups should go. In the Control Panel, under Privilege: Shared Folders, create a shared folder (e.g. TNbackup) and set the permissions. I personally prefer that no user besides the rsync user has access to the backup share, to avoid it getting corrupted. So denying access, or allowing read only, for all other users should do the trick.

Create a user

Create a user that TrueNAS will use to connect to the QNAP. Let’s say we call it rsync. Make sure that the rsync user is part of the administrator group. This is mandatory for it to access the QNAP via ssh. Give it read/write permissions on the TNbackup share.

In the Users tab of the control panel, enable the home folder for all users in the advanced settings.

In the Network & File Services tab of the Control Panel, activate SSH on port 22 and SFTP. You can also set the Access Permissions here.

Verify that you can log in to your QNAP using ssh and your newly created user.

ssh rsync@[QNAP-IP]

Prepare the SSH configuration on QNAP

In the terminal session, open the sshd configuration file in the vi editor. Unfortunately, nano is not installed on QNAP.

sudo vi /etc/ssh/sshd_config

Find the following two lines and uncomment them by deleting the #-sign. Position your cursor at the beginning of the corresponding line and hit the “i”-key for insert mode.

#PubkeyAuthentication yes
#AuthorizedKeysFile  .ssh/authorized_keys

After deleting the two #-signs, hit the Escape-key. Save using “:w” followed by the return key and quit using “:q”, followed by the return-key.

Now navigate to your user's home folder:

cd /share/homes/rsync/

Create the .ssh folder and the (empty) authorized_keys file:

mkdir .ssh
chmod 700 .ssh
touch .ssh/authorized_keys

You can check if the file has been created by using this command:

ls .ssh/

Set the user permissions on the .ssh folder:

sudo chmod -R 700 .ssh
sudo chown -R rsync .ssh

You might need to restart the rsync and ssh services on QNAP using the GUI.

Preparing TrueNAS

Create a home folder for your user, using the Shell provided in the GUI:

cd /mnt/[DATASET POOL]/
mkdir home
cd home
mkdir rsync

Create the user in the Accounts/Users tab, filling out the following fields:

Full name: [RSYNC to QNAP]
Username: rsync
Password: [password]
Confirm password: [password]
User ID (auto filled by TrueNAS)
Primary Group: rsync
Auxiliary Groups: [choose one that has access to the fileshares you want to backup]
Home Directory: /mnt/[DATASET POOL]/home/rsync

That’s it for now. Later on, we will fill in the SSH Public Key. Save for now.

Let’s go on and create the SSH key on TrueNAS using ssh with the rsync user. Make sure that the SSH service is running on TrueNAS (GUI: services tab). When creating the key, you can skip all the prompts with the Return-key.

Log in via SSH to your TrueNAS:

ssh rsync@[TrueNAS-IP]
ssh-keygen

To see the generated key, use:

cat .ssh/id_rsa.pub

Copy everything from that file, from ssh-rsa to something like truenas-local.

Go back to the TrueNAS GUI, edit the rsync user and paste the string into the SSH Public-Key field.

Now log in to your QNAP via SSH:

ssh rsync@[QNAP-IP]
vi .ssh/authorized_keys

Hit “i” for insert mode. Paste the key in the file. Hit Escape. Write to disk with “:w” and quit with “:q”.
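
As a side note, instead of pasting the key by hand you can let ssh-copy-id append it for you from the TrueNAS SSH session, assuming the tool is available in the TrueNAS shell and password login on the QNAP is still enabled:

ssh-copy-id rsync@[QNAP-IP]

This writes the public key into .ssh/authorized_keys on the QNAP, which is exactly what we just did manually.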

To test if the connection is working, go back to your TrueNAS SSH session and connect to your QNAP:

ssh rsync@[QNAP-IP]

The QNAP should NOT ask for a password, as it uses the key that we just generated and shared between the machines (for the user rsync). You will have to accept the host key fingerprint (it will be saved in your known_hosts file). If you do not accept it, the rsync task will most probably fail.

Create the Rsync task on TrueNAS

In the TrueNAS GUI, go to the Tasks tab, Rsync Tasks. Create a new Rsync Task.

Source:
Path: /mnt/[DATASET POOL]/fileshare_on_TrueNAS
User: rsync
Direction: PUSH
Description: Backup TrueNAS to QNAP
Schedule: whatever you like

Remote:
Remote Host: [IP OF QNAP]
Rsync Mode: SSH
Remote SSH Port: 22
Remote Path: /share/[destination fileshare_on_QNAP] 

I had to untick the compress checkbox for rsync to run. I also decided to untick the delete option, which means that if I inadvertently delete a file on my TrueNAS, it will still exist on the QNAP. Don’t forget to save.
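
Before relying on the scheduled task, you can do a quick dry run from a TrueNAS SSH session (logged in as the rsync user) to verify that the connection and paths work; the -n flag makes rsync only list what it would transfer without copying anything:

rsync -avn -e ssh /mnt/[DATASET POOL]/fileshare_on_TrueNAS/ rsync@[QNAP-IP]:/share/TNbackup/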

As an alternative to setting the auxiliary group on the TrueNAS user (see above), you can also make sure that the rsync user has the correct ACL permissions (read) on the TrueNAS fileshare that you want to sync.

Run the task manually. If it fails, click on the error button and download the error log.

Remember that this procedure will sync the TrueNAS folder’s content to the QNAP, but it is technically not a backup!

PeerTube administration

Upgrade

Run the upgrade script:

cd /var/www/peertube/peertube-latest/scripts && sudo -H -u peertube ./upgrade.sh
sudo systemctl restart peertube
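
Before running the upgrade script it does not hurt to dump the database first. A minimal sketch, assuming the default production database name peertube_prod (adjust the destination path to taste):

sudo -u postgres pg_dump -F c peertube_prod > /tmp/peertube_prod-$(date -Im).bak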

Update the PeerTube configuration

Check for configuration changes, and report them in your config/production.yaml file

cd /var/www/peertube/versions
diff -u "$(ls --sort=t | head -2 | tail -1)/config/production.yaml.example" "$(ls --sort=t | head -1)/config/production.yaml.example"

Source: https://docs.joinpeertube.org/install/any-os

Install QEMU guest agent inside VM

On Linux

1. Install the agent inside the VM:

apt install qemu-guest-agent

2. Shut down the VM

shutdown -h now

3. Enable the guest agent in Proxmox Interface

4. Start the VM and check if the agent is running

systemctl status qemu-guest-agent
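
If you prefer the command line over the GUI for steps 3 and 4, the same can be done from the Proxmox node's shell (replace 103 with your VM ID):

# enable the guest agent option for the VM
qm set 103 --agent enabled=1
# ping the agent inside the (running) VM
qm agent 103 ping

If the agent is reachable, the ping returns without output; otherwise qm prints an error.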

On FreeBSD 13

  1. Place virtio_console.ko in /boot/modules. (wget http://web.busana.lu/qemu/virtio_console.ko)
  2. Run kldload virtio_console.ko.
  3. Place qemu-ga in /usr/local/bin. (wget http://web.busana.lu/qemu/qemu-ga).
  4. Place qemu-guest-agent in /usr/local/etc/rc.d. (wget http://web.busana.lu/qemu/qemu-guest-agent)
  5. Place qemu-guest-agent in another location of your choice. This will be a copy that is re-added to the rc.d directory each time TrueNAS boots (/usr/local/qemuGB/qemu-guest-agent).
  6. Create Tunables in the TrueNAS web UI. System -> Tunables.
    1. Variable=qemu_guest_agent_enable Value=YES Type=RC Enabled=yes
    2. Variable=qemu_guest_agent_flags Value=-d -v -l /var/log/qemu-ga.log Type=RC Enabled=yes
    3. Variable=virtio_console_load Value=YES Type=LOADER Enabled=yes
  7. Create Init/Shutdown Scripts in the TrueNAS web UI. Tasks -> Init/Shutdown Scripts.
    1. Type=Command Command=service qemu-guest-agent start When=POSTINIT Enabled=yes Timeout=10
    2. Type=Command Command=cp /[path to a local copy of qemu-guest-agent file] /usr/local/etc/rc.d  When=PREINIT Enabled=yes Timeout=10 (cp /usr/local/qemuGB/qemu-guest-agent /usr/local/etc/rc.d).
  8. Reboot TrueNAS.
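
For reference, the Tunables created in step 6 are simply TrueNAS's managed way of setting what would otherwise live in /etc/rc.conf and /boot/loader.conf on a plain FreeBSD 13 system, roughly equivalent to the following (shown only as an illustration; do not edit these files directly on TrueNAS, as such changes may not survive an update):

# /etc/rc.conf equivalent of the RC tunables
qemu_guest_agent_enable="YES"
qemu_guest_agent_flags="-d -v -l /var/log/qemu-ga.log"

# /boot/loader.conf equivalent of the LOADER tunable
virtio_console_load="YES"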

Execute Nextcloud OCC commands

Go to /var/www/html and execute the following commands as the HTTP user (www-data); the sudo nextcloud.occ variants apply to snap-based installations instead:

sudo -u www-data php occ db:add-missing-indices
sudo -u www-data php occ translate:download-models
sudo -u www-data php occ maintenance:mode --off 
sudo nextcloud.occ files:scan --all
sudo nextcloud.occ maintenance:repair
sudo -u www-data php occ -h

Running your own Matrix-Synapse server

We are setting up a matrix-synapse instance on our homeserver using an Ubuntu 22.04 VM. Most of these instructions were collected from the official documentation on GitHub. We want our Matrix handles to use our domain (e.g. @john:example.com), not the hostname of the server (matrix.example.com).

Installation of Matrix-synapse

First install some packages we’ll need:

sudo apt install -y lsb-release wget apt-transport-https

Now add the keyrings and the repository for matrix-synapse server:

sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/matrix-org-archive-keyring.gpg] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
    sudo tee /etc/apt/sources.list.d/matrix-org.list

Now update your apt repositories:

sudo apt update

And install matrix-synapse

sudo apt install matrix-synapse-py3

When asked for the server name, we enter example.com. For further information on this, consult the first chapter of the installation instructions: Choosing your server name. Decide whether you want to share homeserver usage statistics and you are done.
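
If you later need to check or change the server name, the Debian package does not put it into homeserver.yaml but into a small drop-in file; the path below is an assumption for this kind of install, so adjust it if your setup differs:

sudo cat /etc/matrix-synapse/conf.d/server_name.yaml
# expected content: server_name: example.com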

Setting up Synapse

Install PostgreSQL

sudo apt install postgresql

Select your geographic area and you are done. Then install the libpq5 library:

sudo apt install libpq5

To set up the database, you need to be logged in as the postgres user:

sudo -u postgres bash

Create the synapse_user database user:

createuser --pwprompt synapse_user

Create a safe password and note it down. Create the database:

createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse

Let’s enable password authentication so that synapse_user can connect to the database:

cd /etc/postgresql/14/main
nano pg_hba.conf

Add the last line (the one for synapse_user) to the pg_hba.conf file as shown below. As we use the local connection type, you do not need to provide an address (leave the address column empty). Save and exit.

local   replication   all                                peer
host    replication   all        127.0.0.1/32            scram-sha-256
host    replication   all        ::1/128                 scram-sha-256
local   synapse       synapse_user                       scram-sha-256

Exit the postgres user's shell and fall back to your standard user:

exit

Configure Synapse to use PostgreSQL

cd /etc/matrix-synapse
nano homeserver.yaml

Modify the database parameters in homeserver.yaml to match the following, save and exit.

database:
  name: psycopg2
  args:
    user: synapse_user
    password: <password>
    database: synapse
    host: localhost
    cp_min: 5
    cp_max: 10

Make sure that PostgreSQL listens for local TCP connections (Synapse connects to localhost). Open the postgresql.conf file:

cd /etc/postgresql/14/main
nano postgresql.conf

Under Connection Settings, uncomment (delete the hash sign) the following line:

#listen_addresses = 'localhost' 

Save and exit. Now let’s restart the PostgreSQL and matrix-synapse services.

systemctl restart postgresql
systemctl restart matrix-synapse
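
To confirm that synapse_user can connect the same way Synapse will (over TCP to localhost, as configured with host: localhost above), you can run a quick test; enter the password you chose earlier and leave the psql prompt with \q:

psql --host=localhost --username=synapse_user synapse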

For more information or debugging, consult the official documentation.

TLS certificates and reverse proxy setup

We assume that you have a functioning Nginx server. We set up two basic reverse proxy configuration files in /etc/nginx/sites-available (see the first paragraph of this section for the reason): one for the matrix server (matrix.example.com) and one for the domain itself (example.com), starting from the default files provided. Create the SSL certificates (do not forget to enable the sites first). Do not forget to point the corresponding DNS records to your reverse proxy and to allow the incoming connections (ports 80 & 443) through the firewall.
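
For reference, enabling the two sites and requesting the certificates could look roughly like this; it is a sketch that assumes the configuration files are named after the domains and that certbot with the nginx plugin is installed:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/matrix.example.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d example.com -d matrix.example.com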

The matrix.example.com sites configuration file should look like this:

server {

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/matrix.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/matrix.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot


    server_name matrix.example.com;
    access_log /var/log/nginx/matrix.example.com.access.log;
    error_log /var/log/nginx/matrix.example.com.error.log;

    location ~ ^(/|/_matrix|/_synapse/client) {
        # note: do not add a path (even a single /) after the port in `proxy_pass`,
        # otherwise nginx will canonicalise the URI and cause signature verification
        # errors.
        proxy_pass http://<IP OF SYNAPSE SERVER>:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;

        # Nginx by default only allows file uploads up to 1M in size
        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
        client_max_body_size 50M;

        # Synapse responses may be chunked, which is an HTTP/1.1 feature.
        proxy_http_version 1.1;
    }

}


server {
    if ($host = matrix.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;

    server_name matrix.example.com;
    return 404; # managed by Certbot

}

The example.com sites configuration file should look like this:

server {

    server_name example.com;
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

# the root folder should redirect to your regular website under your domain name (if needed)
location / {

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_pass http://<WEBSERVER>;
        include proxy_params;
     }

location /.well-known/matrix/client {
    return 200 '{"m.homeserver": {"base_url": "https://matrix.example.com"}}';
    default_type application/json;
    add_header Access-Control-Allow-Origin *;
}

location /.well-known/matrix/server {
    return 200 '{"m.server": "matrix.example.com:443"}';
    default_type application/json;
    add_header Access-Control-Allow-Origin *;
}
}


server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen 80;

    server_name example.com;
    return 404; # managed by Certbot


}

When configuring your client, use the matrix.example.com URL. The .well-known JSON responses are used by other Matrix servers (and by clients) that want to connect to your server: they tell them to use a different hostname than the one appearing in the users’ handles. More information is available in the official delegation documentation.

The official documentation states that port 8448 should be open for federation and also handled by the reverse proxy, but our configuration works fine without it: the /.well-known/matrix/server response above explicitly delegates federation to matrix.example.com:443, so other servers connect over port 443 instead of the default 8448.
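
You can quickly check what your delegation endpoints return from any machine with curl:

curl https://example.com/.well-known/matrix/server
curl https://example.com/.well-known/matrix/client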

If your reverse proxy is running on a different machine than the Synapse server, then you have to adjust the listeners bind_addresses in /etc/matrix-synapse/homeserver.yaml as follows. Do not forget to restart the matrix-synapse service after saving the changes.

bind_addresses: ['::1', '0.0.0.0']

Test delegation / federation using the Matrix Federation Tester.

Adding a Matrix user

In order to be able to add users, you need to define the registration shared secret in the /etc/matrix-synapse/homeserver.yaml file. Add the following line at the end, replacing the value with a random string of your own. Restart the matrix-synapse service afterwards.

registration_shared_secret: YOURRANDOMSALTSTRING
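
Any sufficiently long random string will do; one way to generate one is:

openssl rand -base64 32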

Now add a user, using the following command:

register_new_matrix_user -u john -c /etc/matrix-synapse/homeserver.yaml -a https://matrix.example.com

The available options are:
-u, --user
Local part of the new user. Will prompt if omitted.
-p, --password
New password for user. Will prompt if omitted. Supplying the password on the command line is not recommended; use STDIN instead.
-a, --admin
Register new user as an admin. Will prompt if omitted.
-c, --config
Path to server config file containing the shared secret.
-k, --shared-secret
Shared secret as defined in server config file. This is an optional parameter as it can be also supplied via the YAML file.
server_url
URL of the home server. Defaults to 'https://localhost:8448'.

Use a matrix client (e.g. https://app.element.io) to test your server and user. Have fun!

Expand a VM disk in Proxmox (Ubuntu VM)

Disclaimer: this is a very risky operation!
Make sure you have one or more backups of your data!
Use at your own risk!

This tutorial will guide you through increasing the size of your virtual disk. Three steps are required:
1. increase the VM disk size in Proxmox
2. change the partition table to reflect the changes in Ubuntu
3. resize your file system in Ubuntu

1. Resizing VM disk size in Proxmox

1.1 Open the Proxmox web interface, select the VM whose disk size you want to modify and open the Hardware tab. Write down the VM ID (example: 103) and the hard disk identifier (example: scsi0).

Proxmox: Identify the hard disk identifier (scsi0) and the VM ID (103)

1.2 In the Proxmox resource tree, select the parent node (the Proxmox host) and open the node's shell. Resize the disk using the following command. This step can be done online (VM running) or offline (VM shut down).

qm resize [VM_ID] [DISK_IDENTIFIER] +[SIZE_INCREASE]G

Example:

qm resize 103 scsi0 +2000G

1.3 Verify that the Hardware tab reflects the changed VM disk size. If you try to add more space than is left on the underlying storage, you will get an error (zfs error: cannot set property for ‘vm-103-disk0’: size is greater than available space).

Now that your virtual disk has the desired size, we need to change the partition table so that Ubuntu sees the additional space.

2. Enlarge the partition in the virtual disk

2.1 Find the disk name of the disk whose partition you want to change inside your VM.

lsblk

My disk (sdb) now has 13.7 TB. You can notice that the partition (sdb1) does not use all the available space yet.

Current partition scheme

2.2 Before unmounting the disk, make sure that no service uses it. In my case, I preferred to shut down the Apache service to make sure that no user activity touches the disk while we work on it.

2.3 Unmount all filesystems of the disk on which you want to change the partition table. We want to change the partition table of sdb, so we have to unmount sdb1 (which is the only partition on this disk).

umount /dev/sdb1
Check that the partition has been unmounted (/data should not be listed anymore)

2.4 We will be using fdisk to repartition the disk.

fdisk command options

You can run fdisk -l to list all the partitions available.

Inside fdisk, you can use the following commands:

fdisk commands

Run the following command to start modifying your partition. Make sure to use the disk (sdb) and not the partition name (sdb1):

fdisk /dev/sdb

2.5 Now delete the current partition (that’s scary, isn’t it?) using the “d”-command. This only rewrites the partition table; the data on the disk stays in place as long as the new partition starts at the same sector.

fdisk delete partition

2.6 Create the new partition using the “n” command. We kept the default values and did not remove the ext4 signature.

2.7 Now you need to write the new partition scheme to the disk using the “w” command.

Write the partition scheme
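
As a side note, the delete-and-recreate dance in fdisk can also be done in one non-interactive step with growpart from the cloud-guest-utils package; this is an alternative to the manual procedure above, not what was used here:

apt install cloud-guest-utils
# grow partition 1 of /dev/sdb to fill the disk
growpart /dev/sdb 1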

2.8 Now check that the modifications have been saved, using lsblk.

lsblk
lsblk output (the partition shows the full space)

You will notice that the filesystem does not yet see all of the available space. Check with df -h.

Output of df -h (the file system does not reflect the whole available space)

So let’s resize the file system inside the partition sdb1.

3. Resize the file system

Run the resize2fs command on the newly created partition.

resize2fs /dev/sdb1
Output of resize2fs /dev/sdb1
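
If resize2fs refuses to run on the unmounted filesystem, it may ask you to run a file system check first. Afterwards, remount the partition (the example mount point here is /data) and start the services you stopped in step 2.2 again; the service name apache2 is an assumption for a default Ubuntu Apache install:

# only needed if resize2fs asks for a check
e2fsck -f /dev/sdb1
mount /dev/sdb1 /data
systemctl start apache2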

df -h now reflects the changes:

output of df -h (the file system now uses all available space)

Additional links

If you need further information on this topic, consult the following webpages:
Proxmox Wiki: Resize disks
How to Increase VM Disk Size in Proxmox
Resize partitions in Linux
Why df and lsblk command have different results?