Disclaimer:
This is probably the longest post on the whole blog.
Treat this as a technical note about what should be changed and where, rather than a proper tutorial explaining the choices made.
If anyone is interested in that, please PM me.
1) My hardware:
Topton N6005 ITX Board
To keep the story short, the motherboard is power-efficient and fairly (though not completely) silent.
The 11th-gen iGPU with a hardware H.264/H.265 encoder/decoder is enough for a multimedia server.
Multiple Ethernet ports allow it to be used as a firewall/router.
2) Installation
Earlier instances of my homelab were using:
- Arch/Manjaro + Docker
- OpenMediaVault + Docker
For the current revision, I decided to switch to a hypervisor, which is more reliable and makes backing up VMs/LXC containers easier.
Install it like any Linux distro, connect Ethernet and choose the management Ethernet device (more here).
After installation you can follow these steps.
3) Networking
Host networking
Edit /etc/network/interfaces:
auto lo
iface lo inet loopback
iface enp3s0 inet manual
auto vmbr0
# DHCP instead of static; the router hands out a static lease
iface vmbr0 inet dhcp
#iface vmbr0 inet static
#    address 192.168.0.165/24
#    gateway 192.168.0.1
bridge-ports enp3s0
bridge-stp off
bridge-fd 0
iface enp4s0 inet manual
iface enp5s0 inet manual
iface enp6s0 inet manual
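To apply the changes without a reboot, ifreload -a should do the trick (recent Proxmox releases ship ifupdown2 by default).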
VPN - WireGuard
Probably the best option is to install it in an LXC.
To simplify my setup, I run WireGuard on the host (which also allows access to the Proxmox webUI).
- open a shell on the host
apt install wireguard wireguard-tools
nano /etc/wireguard/wg0.conf
- paste your existing config and save
systemctl enable --now wg-quick@wg0.service
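For reference, a minimal wg0.conf for a single peer could look like the sketch below; all keys, addresses and the endpoint are placeholders for whatever your existing setup uses:
[Interface]
# This host's address inside the VPN and its private key (placeholders)
Address = 10.8.0.2/24
PrivateKey = <host_private_key>

[Peer]
# Remote WireGuard server (placeholder key and endpoint)
PublicKey = <server_public_key>
Endpoint = vpn.example.com:51820
# Route only the VPN subnet through the tunnel
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25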
Internal networking - use a single public IP
References: 1 2 3
In my existing setup, the server already sits behind a reverse proxy, so traffic is targeted at the specific port the app runs on.
- add a second bridge interface
- attach all VMs/LXC containers to this bridge (set the interface, IP and gateway manually during creation)
- set up host iptables
Edit /etc/network/interfaces
auto vmbr1
iface vmbr1 inet static
address 192.168.200.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up /usr/local/sbin/lxc_dnat.sh 192.168.200.0/24 ADD_RULES
post-down /usr/local/sbin/lxc_dnat.sh 192.168.200.0/24 REMOVE_RULES
Create /usr/local/sbin/lxc_dnat.sh:
#!/bin/bash
set -euo pipefail
VMBR_LXC_SUBNET=$1
RULE_ACTION=''
if [ "$2" == "ADD_RULES" ]; then
RULE_ACTION='-A'
elif [ "$2" == "REMOVE_RULES" ]; then
RULE_ACTION='-D'
else
exit 1
fi
# MASQUERADE all traffic from the internal bridge to the external one
iptables -t nat $RULE_ACTION POSTROUTING -s $VMBR_LXC_SUBNET -o vmbr0 -j MASQUERADE
# Rules redirecting traffic from a host port to an LXC on the internal bridge and a specified port
#iptables -t nat $RULE_ACTION PREROUTING -p tcp --dport <host_port> -j DNAT --to-destination <vmbr1_ip>:<port>
# Radicale example
iptables -t nat $RULE_ACTION PREROUTING -p tcp --dport 2305 -j DNAT --to-destination 192.168.200.5:22   # SSH
iptables -t nat $RULE_ACTION PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.200.5:9030 # webUI
Change permissions: chmod +x /usr/local/sbin/lxc_dnat.sh
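As a sketch of the container side (the VMID 105 is a made-up example; the IP matches the Radicale rules above), the internal bridge can also be attached from the host shell:
pct set 105 -net0 name=eth0,bridge=vmbr1,ip=192.168.200.5/24,gw=192.168.200.1
After that, the container should be reachable from outside via the host ports defined in lxc_dnat.sh.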
4) Storage
Disk setup:
- manually set up: format the drives, create mount points, and add mounts to fstab (see the sketch below)
- add them in Proxmox's interface under Datacenter->Storage->Directory
- (optional) set up MergerFS - the version in the repo is old, install it manually from here
- set up SnapRAID
My setup consists of 3 drives:
- 2TB, 3TB (both for data)
- 4TB (new, for parity data)
For my use case, full RAID (or rather ZFS) is overkill: it would consume too much power and too many resources for diminishing returns.
My files do not change on a daily basis, so a sync every few days (or on demand) is sufficient.
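A sketch of the manual part of the disk setup, assuming ext4 and my three-drive layout; device IDs, labels and mount points are examples to adjust:
# Format each drive (repeat per data/parity drive)
mkfs.ext4 -m 0 -L data1 /dev/disk/by-id/<drive_id>-part1
mkdir -p /mnt/data1 /mnt/data2 /mnt/parity1 /mnt/storage

# /etc/fstab entries
LABEL=data1   /mnt/data1   ext4 defaults,nofail 0 2
LABEL=data2   /mnt/data2   ext4 defaults,nofail 0 2
LABEL=parity1 /mnt/parity1 ext4 defaults,nofail 0 2
# Optional MergerFS pool spanning both data drives
/mnt/data1:/mnt/data2 /mnt/storage fuse.mergerfs defaults,cache.files=off,category.create=mfs,dropcacheonclose=true 0 0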
Download and compile SnapRAID.
More info can be found here and here
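It is a standard autotools build; roughly (assuming build-essential is installed):
apt install build-essential
# in the unpacked SnapRAID source tree
./configure
make
make check
make install   # installs to /usr/local/bin/snapraid
A minimal /etc/snapraid.conf matching the drive layout above might look like this (the paths follow the fstab sketch and are assumptions):
# Parity on the 4TB drive
parity /mnt/parity1/snapraid.parity
# Content files, keep copies on several drives
content /var/snapraid/snapraid.content
content /mnt/data1/snapraid.content
content /mnt/data2/snapraid.content
# Data drives
data d1 /mnt/data1/
data d2 /mnt/data2/
# Common exclusions
exclude *.unrecoverable
exclude /tmp/
exclude lost+found/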
For periodic execution we can simply use crontab, but there are quite a few helper scripts worth a look (e.g. snapraid-runner or SnapRAID AIO).
I'm using the first one - it adds scrubbing, thresholds and nice notifications.
- Edit snapraid-runner.conf
[snapraid]
; path to the snapraid executable (e.g. /bin/snapraid)
executable = /usr/local/bin/snapraid
; path to the snapraid config to be used
config = /etc/snapraid.conf
; abort operation if there are more deletes than this, set to -1 to disable
deletethreshold = 500
; if you want touch to be ran each time
touch = false
[logging]
; logfile to write to, leave empty to disable
file = snapraid.log
; maximum logfile size in KiB, leave empty for infinite
maxsize = 5000
[email]
; when to send an email, comma-separated list of [success, error]
sendon = success,error
; set to false to get full program output via email
short = true
subject = [SnapRAID] Status Report:
from = <from_mail>
to = <to_mail>
; maximum email size in KiB
maxsize = 500
[smtp]
host =
; leave empty for default port
port =
; set to "true" to activate
ssl = true
tls = true
user =
password =
[scrub]
; set to true to run scrub after sync
enabled = true
percentage = 12
older-than = 180
- Add a crontab entry (crontab -e) to run it periodically:
0 2 * * 1,5 python3 <script_path>snapraid-runner.py -c <config_path>snapraid-runner.conf >/dev/null 2>&1
- HDD Spindown
Install hdparm and create a rule under /usr/lib/udev/rules.d/85-hdparm.rules:
# PROXMOX DEFAULT, probably loads config from /etc/hdparm.conf
#ACTION=="add", SUBSYSTEM=="block", KERNEL=="[sh]d[a-z]", RUN+="/lib/udev/hdparm"
# Power management for hard drives: -B 127 is the highest APM value that allows spin-down, -S 240 = 20 min timeout
# /dev/sdX names can change, so it's better to match by ID
# HDD_PARITY, APM not supported
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sda", RUN+="/usr/sbin/hdparm -S 240 /dev/sda"
# HDD_HITACHI_2T
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sdb", RUN+="/usr/sbin/hdparm -B 127 -S 240 /dev/sdb"
# HDD_HITACHI_3T
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sdc", RUN+="/usr/sbin/hdparm -B 127 -S 240 /dev/sdc"
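Whether a drive has actually spun down can be checked with hdparm's power-state query; after the timeout it should report standby instead of active/idle:
hdparm -C /dev/sdb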
Interesting reference: 1
5) Applications:
Samba/sshfs:
- Download the turnkey-fileserver LXC template; create a container with small resources (2 cores/1GB RAM is probably enough), make it privileged (and later manually enable nesting), add bind mounts
- Set up the root SMB password in the CLI configurator
- Navigate to the web admin at ip:12321
- System->Users and Groups: create new Linux users for Samba; same with groups
- Tools->File manager: set permissions and ownership (for Samba only!)
- Samba Windows sharing: delete the default shares, convert users (select a user and set a password for Samba shares); create a new share (775? change the owner to specific users?)
- Edit the new share, Security and access control: writable->yes
- Then restart the Samba servers
To mount on Linux:
sudo mount -t cifs //<ip>/<share_name> /<mount_path> -o username=<username>,password=<password>,uid=$(id -u),gid=$(id -g)
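For a persistent mount, a matching /etc/fstab entry could look like this sketch (a credentials file keeps the password out of the world-readable fstab; the uid/gid values are examples):
# /etc/fstab
//<ip>/<share_name> /<mount_path> cifs credentials=/root/.smbcred,uid=1000,gid=1000,nofail 0 0

# /root/.smbcred (chmod 600)
username=<username>
password=<password>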
Jellyfin:
Container settings: privileged + nesting.
Jellyfin runs in an LXC, so do the following on the host (Proxmox) shell.
https://jellyfin.org/docs/general/installation/linux/
(Applies: Intel Gen 11 Jasper Lake and Elkhart Lake platforms (e.g. N5095, N5105, N6005, J6412) have quirks when using video encoders on Linux. The Low-Power Encoding mode MUST be configured and enabled for correct VBR and CBR bitrate control that is required by Jellyfin.)
Edit /etc/modprobe.d/i915.conf
options i915 enable_guc=2
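After a reboot, firmware loading can be sanity-checked with dmesg | grep -i guc - it should mention the GuC/HuC firmware being loaded.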
GRUB/initramfs can stay untouched.
Edit /etc/pve/lxc/<id>.conf and add:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
Then restart the container.
[Inside the container]
Add the jellyfin user to the group which has access to those mounts.
If owners inside the LXC are detected as nobody:nogroup, the container is unprivileged - use ID mapping or change the LXC type to privileged.
sudo usermod -aG input jellyfin
sudo usermod -aG video jellyfin
sudo usermod -aG render jellyfin
Restart the Jellyfin app:
systemctl restart jellyfin
Install intel-opencl-icd from here.
In the webUI enable what is supported (for me, everything except AV1), including low-power H.264 and HEVC encoding.
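Inside the container, VAAPI can be sanity-checked with vainfo (from the vainfo package); the output should list H.264/HEVC profiles with both decode and low-power encode entrypoints:
apt install vainfo
vainfo --display drm --device /dev/dri/renderD128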
6) Email notifications
Unfortunately, Proxmox currently only supports notifications via email.
To set up postfix:
- install libsasl2-modules
- edit /etc/postfix/main.cf
# See /usr/share/postfix/main.cf.dist for a commented, more complete version
myhostname=proxmox.selfhosted
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
#mydestination = $myhostname, localhost.$mydomain, localhost
#relayhost =
mynetworks = 127.0.0.0/8
inet_interfaces = loopback-only
recipient_delimiter = +
smtp_header_checks = regexp:/etc/postfix/header_check
sender_canonical_maps = hash:/etc/postfix/sender_canonical
compatibility_level = 2
relayhost = <enter_host>:port
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_tls_wrappermode = yes
smtp_tls_security_level = encrypt
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/Entrust_Root_Certification_Authority.pem
- edit /etc/postfix/sasl_passwd
<relay_address>:port <email_login>:<email_password>
postmap hash:/etc/postfix/sasl_passwd
chmod 600 /etc/postfix/sasl_passwd
- edit /etc/postfix/sender_canonical
<proxmox_username> <source_email>
systemctl restart postfix
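A quick way to confirm the relay works (sendmail comes with postfix; the recipient is a placeholder):
printf "Subject: Proxmox test\n\nIt works.\n" | sendmail <to_mail>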
Some reference: 1 2
Marek Pawlak