Plex Media Server on Kubernetes with Hardware Transcoding!

Running Plex Media Server on Kubernetes isn't just about containerization - it's about building a production-grade media infrastructure that leverages enterprise storage patterns, hardware optimization, and cloud-native principles. This setup delivers superior performance, simplified maintenance, and demonstrates advanced Kubernetes concepts that translate directly to enterprise environments.
This guide focuses on the architecture decisions and performance optimizations that make Kubernetes-hosted Plex superior to traditional deployments. We'll cover Intel QuickSync hardware transcoding, storage architecture choices (iSCSI vs NFS), and real-world performance improvements I've measured in production.
Architecture Overview¶
Prerequisites¶
What You'll Need
- Plex Pass subscription - Required for hardware transcoding
- Kubernetes cluster (1.30+) - This guide uses kubeadm on bare metal
- Intel CPU with QuickSync - 12th gen+ recommended for modern codecs
- Storage backend - NAS/SAN with NFS and iSCSI support
- Plex claim token - Generate at plex.tv/claim
Storage Architecture Decision¶
The key performance insight: Use iSCSI for Plex configuration and database storage, NFS for media files. This hybrid approach delivers:
- 60% faster UI response times in Plex apps (iOS, Apple TV, Samsung TV)
- Eliminated timeouts in *arr applications (Sonarr/Radarr search operations)
- Consistent database performance under concurrent access
- Cost-effective media storage via NFS for large static files
```mermaid
graph TB
    subgraph "Kubernetes Node"
        A[Plex Pod]
        B[Config Volume<br/>iSCSI LUN]
        C[Media Volume<br/>NFS Mount]
    end
    subgraph "Synology NAS"
        D[iSCSI LUN<br/>Config/DB/Transcode]
        E[NFS Share<br/>Media Files]
    end
    A --> B
    A --> C
    B -.-> D
    C -.-> E
```
This post assumes that you already have a Plex Pass subscription, which is required for hardware-accelerated streaming. If you don't have (or want) Plex Pass, you can still read on to set up Plex on k8s; just know that you are going to be limited to the beefiness of your CPU for software transcoding, which is a pretty intensive task. For single home streams this should be fine, but if you want to support multiple streams or higher bitrates, seriously consider making use of hardware transcoding!

???+ tip "Plex Pass Lifetime Subscription"
    Pro tip: The Plex Pass lifetime subscription goes on sale every so often for $70-100, so keep an eye on places like SlickDeals and snag one when it comes around if you're on the fence!
The Plex Pass also comes with the benefit of getting updates faster via the plexpass Docker image tag. If you don't have it, you will want to swap the plexpass tag in my spec for the stable latest tag, as shown below.
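For reference, the only line that changes in the Deployment spec (shown in full later in this post) is the image tag:

```yaml
# With Plex Pass (faster updates):
image: docker.io/plexinc/pms-docker:plexpass
# Without Plex Pass, use the stable tag instead:
image: docker.io/plexinc/pms-docker:latest
```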
A lot of folks like the linuxserver.io images, which I definitely recommend for ancillary applications like Sonarr, Radarr, etc. For Plex, though, I have found that using either plexinc/pms-docker:latest (the stable tag) or plexinc/pms-docker:plexpass (for the latest features) tends to be the quickest way to get bug fixes and new features, and that the linuxserver.io images lag behind by about a week or two.
Storage¶
Figure out where you are going to store your media! If you're starting from scratch or replacing something, definitely check out Synology. I was a sales engineer for data center SAN/NAS solutions and a storage SME, but after managing my own home NAS builds (FreeNAS/TrueNAS, Rockstor, OpenMediaVault, etc.), I would 1000% have saved money (and shreds of sanity) by buying a Synology sooner and avoiding the headaches of building my own. Synology Hybrid Raid (SHR) is awesome and lets you use dissimilar disk sizes, if you happen to have a hodgepodge of drives or want the future-proofing of being able to upgrade to different drives later.
Ultimately, so long as you have disk space somewhere that can be accessed at 1Gbps over your network or ample local storage, you're good to go!
I won't go into an exhaustive diatribe on what type of RAID you should use, but I would strongly recommend that you sacrifice some usable storage for the sake of redundancy! I use SHR-2 for two-drive fault tolerance because I use this as my home backup server as well, and I'd much rather spend a few hundred extra dollars than have to worry about a second drive failing while things rebuild, or spend time regathering my data from various places!
Set up your Linux server¶
Updated September 2024
This setup has been recently tested and confirmed working with Kubernetes 1.34.1 and Debian 12 (Bookworm). No specific kernel version requirements beyond Linux 6.x series.
Debian Bookworm works excellently for this setup. Previous concerns about Debian compatibility were unfounded - it's actually my preferred choice for stability and performance with this configuration.
Install Packages¶
We need to get the k8s packages and prerequisites as outlined in the Kubernetes documentation. Note that the old apt.kubernetes.io repository has been retired, so use the pkgs.k8s.io community repository instead:

```bash
sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings

# v1.34 path shown; point this at the minor version you want
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
sudo apt install -y kubeadm kubelet kubectl kubernetes-cni
sudo apt-mark hold kubeadm kubelet kubectl
```
If this all worked, then we are ready to get into configuration and tweaking!
Tweak swap and DNS¶
First thing, make sure you disable SWAP! It can cause all kinds of issues with the kubelet not wanting to start, so make double sure it does not come back by checking your kernel config and stuff, too!
```bash
# DISABLE SWAP
sudo swapoff -a
# make sure it's not in /etc/fstab too; comment or delete from here
sudo vi /etc/fstab
# remove any swap image
sudo rm /swap.img
```
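To double-check that it stuck (especially after a reboot), both of these should report no swap in use:

```bash
swapon --show   # no output means no active swap
free -h         # the Swap row should show 0B
```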
After those steps and a reboot you should be good, but you may have to dig deeper into Google or GPT knowledge banks for your distro.
Then we want to make sure we do a couple things to make our DNS work "right". You are going to want to edit your /etc/hosts file so you can set a new entry for k8smaster; this lets your cluster know who the control plane node is (in this case, just our solo node anyway).
```bash
vi /etc/hosts
```

```
# /etc/hosts file content
127.0.0.1 localhost hank
192.168.1.75 k8smaster k8smaster.mydomain.com hank

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
This is what my hosts file looks like; you can update your hostname (mine's hank because it lives in a network cabinet in my garage near my mower, where I hope it is happy, like Hank Hill might be).

???+ info "Static IP"
    Changing your IP later is a pain as far as getting k8s to be OK with it, so set a static IP or set your DHCP reservation to always assign the same IP to this device!
There's a bunch of different ways you may have opted to include packages like resolvconf
etc. so you may need to Google around a bit for your distro.
I use my local DNS, which is my Unifi Dream Machine (UDM) Pro and points to NextDNS. I also put in Cloudflare and Google public DNS IPs as fallbacks.
I recommend you reboot to make sure things stick; even though they should work with systemctl restart <whatever>, I've seen too many wacky DNS nuances across Linux distros, so I feel better if it all works after restarting.
Then, you can make sure these configs are working by pinging yourself, and something public from inside the k8s node itself:
```bash
jordy@hank:~$ ping k8smaster -c 4
PING k8smaster (192.168.1.75) 56(84) bytes of data.
64 bytes from k8smaster (192.168.1.75): icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from k8smaster (192.168.1.75): icmp_seq=2 ttl=64 time=0.067 ms
64 bytes from k8smaster (192.168.1.75): icmp_seq=3 ttl=64 time=0.083 ms
64 bytes from k8smaster (192.168.1.75): icmp_seq=4 ttl=64 time=0.051 ms

--- k8smaster ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3055ms
rtt min/avg/max/mdev = 0.051/0.066/0.083/0.011 ms

jordy@hank:~$ ping google.com -c 4
PING google.com (142.251.215.238) 56(84) bytes of data.
64 bytes from sea09s35-in-f14.1e100.net (142.251.215.238): icmp_seq=1 ttl=60 time=24.2 ms
64 bytes from sea09s35-in-f14.1e100.net (142.251.215.238): icmp_seq=2 ttl=60 time=23.5 ms
64 bytes from sea09s35-in-f14.1e100.net (142.251.215.238): icmp_seq=3 ttl=60 time=23.2 ms
64 bytes from sea09s35-in-f14.1e100.net (142.251.215.238): icmp_seq=4 ttl=60 time=24.5 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 23.241/23.866/24.496/0.496 ms
```
The -c 4 just says to ping for a count of 4 rather than pinging forever until you Ctrl + C, because Linux things.
If all looks good, we can move on to the next step!
Check kubelet Health¶
Make sure your kubelet
is healthy, otherwise we have to solve that first. Running sudo systemctl status kubelet
should come back with something like this:
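Roughly what a healthy unit looks like (trimmed output; your timestamps and versions will differ):

```bash
sudo systemctl status kubelet
# ● kubelet.service - kubelet: The Kubernetes Node Agent
#      Active: active (running) since ...
```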
If things aren't healthy, well then, welcome to the world of setting up a cluster 🙄
Here's the shortlist of items you can check out before letting Google / GPT take the wheel and banging your head against the wall:
- Double, even triple check that `swapoff -a` worked; swear to god you will find 9,000 threads of "is swap off?" followed by "oh that fixed it thanks!" Often this was not my challenge, so it can be frustrating, but keep moving down the list!
- If you look at the bottom part of the screenshot, the `--bootstrap-kubeconfig=` and `--kubeconfig=` arguments point at config files that `kubelet` relies on, and there are some extra arguments that might need to be given. Here's an example of `/var/lib/kubelet/kubeadm-flags.env`:

```
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"
```

You need to make sure the `kubelet` service knows to look for this, too, so double check that `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` is set up right. DON'T JUST OVERWRITE IT WITH MINE WITHOUT BEING CAREFUL!!! Back this file up before you make ANY changes:
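A simple copy somewhere safe does the trick (the destination path here is just an example):

```bash
sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ~/10-kubeadm.conf.bak
```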
Or something like that. Then you can compare to mine here:
```
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```
Then a `sudo systemctl daemon-reload` followed by `sudo systemctl restart kubelet` should get it running. If not, be patient with yourself and with k8s' learning curve; Google a bit and hack around, because this presents an awesome (if occasionally infuriating) opportunity to upskill in Linux!
Bootstrap the Cluster with kubeadm init¶
STOP HERE before you init
You will need something to manage pod networking, and a really popular open source option is Calico. It assumes a specific CIDR range is set up when the cluster is initialized, so follow the instructions from Tigera's Quickstart guide.
Basically, make sure you run your init command with a specified CIDR like this:
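A minimal sketch of that init command; 192.168.0.0/16 is Calico's quickstart default, so pick a different CIDR if it overlaps your LAN:

```bash
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
```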
With any luck, your stuff should build in a few minutes and you'll get the "congrats" message to start interacting with k8s!
You will need to make sure to untaint your node so you can actually run things on it. This is default behavior of k8s because you have both master and worker nodes; your control plane by default does not run pods because k8s assumes you have them as separate things. This is pretty easy by just running:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
You can validate that there are no more taints via:
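One way to check (the jq usage here is my own assumption; `kubectl describe node` works too):

```bash
# "null" (or empty) for every node means no taints remain
kubectl get nodes -o json | jq '.items[].spec.taints'
```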
Anything other than null means you have some taints left, so make sure none block your deployments! You can learn more about taints in scheduling and eviction concepts
Configure Calico¶
If you already set up Calico
from the link in the last section, just make sure it is all running:
```bash
kubectl get pod -A | grep tigera
tigera-operator tigera-operator-55585899bf-qcx4n 1/1 Running 5 (4d17h ago) 17d

kubectl get pod -A | grep calico
calico-apiserver calico-apiserver-5dbc7d5485-cfqjn 1/1 Running 5 (4d17h ago) 17d
calico-apiserver calico-apiserver-5dbc7d5485-ns9lc 1/1 Running 2 (8d ago) 17d
calico-system calico-kube-controllers-9f5754cf6-6vtlp 1/1 Running 2 (8d ago) 17d
calico-system calico-node-fm8bc 1/1 Running 2 (8d ago) 17d
calico-system calico-typha-744cb7c4c6-vx8zj 1/1 Running 2 (8d ago) 17d
calico-system csi-node-driver-7t9x4 2/2 Running 4 (8d ago) 17d
```
If anything is NOT running, stop and circle back to solve it first using your typical tools to figure it out:
```bash
kubectl describe pod <broken pod> -n <its namespace>
kubectl get events -n <its namespace>
kubectl get events
```
If you cannot get it to start at all, make sure the other pods in the kube-system namespace are running because maybe something bigger is wrong:
```bash
kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS        AGE
coredns-cc8d5d87-7zptv          1/1     Running   1 (8d ago)      17d
coredns-cc8d5d87-cdf2t          1/1     Running   1 (8d ago)      17d
etcd-hank                       1/1     Running   16 (8d ago)     17d
kube-apiserver-hank             1/1     Running   19 (4d17h ago)  17d
kube-controller-manager-hank    1/1     Running   3 (4d17h ago)   17d
kube-proxy-cfm9h                1/1     Running   2 (8d ago)      17d
kube-scheduler-hank             1/1     Running   23 (4d17h ago)  17d
```
AN IMPORTANT NOTE ABOUT THIS: coredns MAY NOT RUN until the Tigera & Calico stuff is running!
If you run into that, it is normal, but it should start after. If you find it isn't behaving after Calico
is running, you can kubectl delete
the pods, or run:
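For example, restarting the coredns deployment is usually enough:

```bash
kubectl -n kube-system rollout restart deployment coredns
```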
Configure MetalLB¶
Since this is a home/bare metal install of Kubernetes, we need a Load Balancer (LB) controller. This should be a pretty simple deployment; in fact, their Helm chart works well if you already have Helm set up, so just head over to the MetalLB install docs.

??? note "Helm is the Kubernetes package manager"
    If you are not familiar with helm, it is similar to apt or yum or whatever Linux package manager you use, but for k8s.
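If you do go the Helm route, the install is roughly this (same chart coordinates as the Terraform example at the end of this post):

```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb --namespace metallb-system --create-namespace
```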
You can also just follow the Quickstart that MetalLB provides on their site. Make sure you follow their instruction to modify kube-proxy settings for strictARP
```bash
kubectl edit configmap -n kube-system kube-proxy
```

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
```
That strictARP: true
is the crux of this part.
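If you'd rather not hand-edit the configmap, MetalLB's docs also show a non-interactive way to flip it, roughly:

```bash
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system
```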
You may need to create or modify custom resources (from MetalLB's CRDs) for the following:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.75-192.168.1.85 # Replace with your IP range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
```
You can save this config to metallb_config.yaml
and then run:
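```bash
kubectl apply -f metallb_config.yaml
```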
Extra Credit: Configure a Proxy¶
This is a bit outside of the scope of this post, but worth mentioning. There are a number of proxies and ingress options, but for home usage we will be able to expose an IP address to work with. If you want to be able to use a custom domain name, then you will want to manage a proxy of some sort and SSL certificates with something like LetsEncrypt.
Some examples are:
For home use, I really appreciate the simplicity (and UI) of NGINX Proxy Manager.
Intel GPU Driver¶
This is the fun stuff that seemed like a giant PITA; I scoured lots of forums and stitched together solutions from TrueNAS, Docker, and all kinds of other adjacent implementations, but I hope this post will make it easier. There are a few things to know about that will make more sense in a minute:
- the i915 drivers on your Linux server
- a daemonset that runs the Intel GPU device plugin for k8s
- a label for your node (I use the actual GPU name, like uhd770)
Drivers¶
You should be able to follow (most of) the instructions here:
https://dgpu-docs.intel.com/driver/installation.html#ubuntu-install-steps
I ignored the kernel stuff and most of their substituted values (the stuff in ${SOME_VALUE}) and made it work the first time I set things up. What seems to be a more reliable method is using a non-Intel-managed PPA repo:
https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa
Once you have your video drivers set up, you can set the modprobe
config for the GPU to be added to the kernel config:
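A sketch of what that can look like, reusing the same UHD 770 device ID (4680) that shows up in my GRUB line below; your GPU's PCI ID may differ:

```bash
echo "options i915 force_probe=4680" | sudo tee /etc/modprobe.d/i915.conf
```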
Save that config and then make sure it is also in your GRUB
config:
```bash
vi /etc/default/grub

# Add the following to your existing config
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=realloc=off i915.force_probe=4680 systemd.unified_cgroup_hierarchy=0"
```
You might also need something like:
YMMV by GPU, but luckily there's some docs and you should be able to follow the details on THIS PAGE about modprobe
and stuff.
Then run:
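On Debian/Ubuntu that's typically:

```bash
sudo update-grub
sudo update-initramfs -u
sudo reboot
```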
Intel Device Plugins for Kubernetes¶
If you were able to boot and the kernel didn't get f***ed up, then yay! You can move on to installing the GPU plugin for k8s, by following this:
https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/gpu_plugin/README.html#installation
Now, where it says to specify the <RELEASE_VERSION> is where you may have to experiment.

??? note "Intel Device Plugins for Kubernetes version compatibility"
    In my experience, not all releases actually work with Plex and HW encoding!!! I have tested and use the v0.24.0 package for reference.
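For reference, the kustomize-style install from their README looks roughly like this (double-check the exact path and version against the linked docs):

```bash
kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin?ref=v0.24.0'
```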
If things are working you should have a daemonset
running (in the default namespace if you didn't specify otherwise) so that you see something like this with:
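Something along these lines (exact names depend on how and where you installed the plugin):

```bash
kubectl get daemonset -A | grep -i gpu
```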
To make sure the i915
stuff took, you can run:
```bash
kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\n'}{' i915: '}{.status.allocatable.gpu\.intel\.com/i915}{'\n'}{end}"

# Expected output
hank
i915: 1
```
If you don't show a node with 1
, then you need to circle back to the Intel GPU drivers and modprobe
stuff, because your system may require a different config, and once again you are at the mercy of Google and ChatGPT.
If this is working like above, then there's one more thing, and that's to label your node:
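Using my node name and the label key/value that the Deployment's affinity expects later on:

```bash
kubectl label node hank gpu=uhd770
```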
Make sure it showed up with:
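```bash
kubectl get nodes --show-labels | grep gpu
```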

Enterprise Storage Architecture¶
Storage Protocol Selection: iSCSI vs NFS¶
After extensive testing, the optimal configuration uses dual storage protocols:
Hybrid Storage Strategy
- iSCSI LUNs: Plex configuration, database, and transcoding directories
- NFS mounts: Media files and downloads
This delivers 60% faster UI response and eliminates timeout issues in *arr applications.
Performance Impact Measured¶
**Before (Pure NFS):**

- Radarr large series searches: 45-60s with frequent timeouts
- Plex UI response (Apple TV/Samsung): 2-4s sluggish navigation
- iOS Plex app: Noticeable delays opening libraries

**After (iSCSI + NFS Hybrid):**

- Radarr searches: 8-12s consistent completion
- Plex UI response: <1s snappy navigation
- iOS Plex app: Near-instant library loading
Why This Architecture Works¶
- iSCSI for databases: Block-level storage with better locking and concurrency
- NFS for media: Cost-effective for large static files, sufficient for streaming
- Enterprise reliability: iSCSI multipathing provides redundancy and load balancing
Data Protection Reality Check
Neither storage protocol prevents corruption - both iSCSI sessions and NFS can experience issues. Always implement:
- Regular backups of your Plex configuration
- Storage snapshots if supported by your NAS/SAN
- RAID redundancy for hardware fault tolerance
- UPS protection to prevent unclean shutdowns
Storage Configuration Options¶
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: plex-config-iscsi
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  iscsi:
    targetPortal: 192.168.1.50:3260
    iqn: iqn.2000-01.com.synology:nas.plex-config
    lun: 1
    fsType: ext4
    readOnly: false
```
Best for: Database files, configuration, transcoding temp files
See: Kubernetes iSCSI Documentation
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: plex-media-nfs
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
    - proto=tcp
    - rsize=1048576 # 1MB blocks for streaming
    - wsize=1048576
  nfs:
    server: 192.168.1.50
    path: /volume1/media
```
Best for: Large media files, cost-effective storage
See: Kubernetes NFS Documentation
For advanced users wanting dynamic provisioning:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi-storage
provisioner: csi.san.synology.com
parameters:
  dsm: "192.168.1.50"
  location: "/volume1"
allowVolumeExpansion: true
```
See: Synology CSI Driver
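A sketch of a PVC that would consume that StorageClass (the claim name here is just illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-config-dynamic
  namespace: plex
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: synology-iscsi-storage
  resources:
    requests:
      storage: 50Gi
```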
NAS Configuration Requirements¶
Synology Setup
For iSCSI LUNs:
- SAN Manager → Create LUN → Thick provisioning recommended
- Target settings: Enable CHAP authentication for security
- Network: Dedicated iSCSI VLAN if possible
For NFS shares:
- Block size: 32KB+ for transcoding performance
- NFSv4.1: Required for proper Kubernetes integration
- Permissions: Squash root, map specific UID/GID
With our enterprise storage architecture in place, we can deploy a production-grade Plex configuration that showcases advanced Kubernetes concepts.
Set up Plex¶
We're here, folks! If you have stuck through it with me, you are about to unlock a really sweet way to manage your media server, because updates take seconds, and so long as no one is streaming at that very moment you'll barely notice a blip.
Let's get started, shall we!
Before you get started, let's go generate a **claim token**:
When you set up your server, this is what will tie it to your account / Plex Pass (if you have bought it). We will use this value to get things up and running.
We need to get a manifest created for each of the things we want, or just create a few of them if it is easier:
- a namespace to install Plex in (I just call mine plex)
- a persistent volume which will be for mounting the config volume to our container, pointing to /volume1/k8s_volumes/plex
- a persistent volume claim to actually consume the specified volume when we launch the container
- a deployment that specifies the bulk of our configuration and what pods to run
- a service that will let us expose our Plex server to the network and, y'know, stream media
Here is an example of my manifests:
This is a super boring one. You could also just:
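```bash
kubectl create namespace plex
```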
For some of the more involved manifests, see below.

???+ example "Example manifests"
=== "PV"
apiVersion: v1
kind: PersistentVolume
metadata:
name: plex-nfs
spec:
accessModes:
- ReadWriteOnce # (1)
capacity:
storage: 50Gi
mountOptions:
- hard # (2)
- nfsvers=4.1 # (3)
- proto=tcp
- rsize=32768 # (4)
- wsize=32768
nfs:
path: /volume1/k8s_volumes/plex
server: 192.168.1.50
persistentVolumeReclaimPolicy: Retain # (5)
storageClassName: nfs
volumeMode: Filesystem
1. `ReadWriteOnce` means that only one node can have a R/W mount of this
2. `hard` is a typical default NFS config, but you can learn more about that from IBM
3. `nfsvers=4.1` gives us the benefits that come with modern versions of NFS
4. `rsize=32768` and `wsize=32768` are where we match that 32KB block size we set earlier. If you have a larger/smaller size, match the bytes to the number of KB you set.
5. `persistentVolumeReclaimPolicy: Retain` means don't scrap the volume and treat it as ephemeral with the pod. We very likely want to re-mount this if we rebuild or change Plex stuff, so this is pretty typical of why we would even want a persistent volume (PV) anyway.
=== "PVC"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: plex-nfs
namespace: plex
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: nfs
volumeMode: Filesystem
volumeName: plex-nfs
=== "Deployment"
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: plex
name: plex
namespace: plex
spec:
replicas: 1
revisionHistoryLimit: 5
selector:
matchLabels:
app: plex
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: plex
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: gpu
operator: In
values:
- uhd770 # (1)
containers:
- env:
- name: PLEX_CLAIM
value: <CLAIM TOKEN YOU GOT FROM PLEX>
- name: TZ
value: America/Boise
image: docker.io/plexinc/pms-docker:plexpass
imagePullPolicy: Always
name: plex
ports:
- containerPort: 32400
name: plex-web
protocol: TCP
- containerPort: 32469
name: dlna-tcp
protocol: TCP
- containerPort: 1900
name: dlna-udp
protocol: UDP
- containerPort: 3005
name: plex-companion
protocol: TCP
- containerPort: 8324
name: plex-roku
protocol: TCP
resources:
limits:
gpu.intel.com/i915: "1" # (2)
requests:
cpu: "1"
gpu.intel.com/i915: "1"
memory: 4Gi
stdin: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
tty: true
volumeMounts: # (3)
- mountPath: /config
name: plex-config
- mountPath: /media
name: media-nfs
dnsPolicy: ClusterFirst
hostname: plex-k8s
restartPolicy: Always
terminationGracePeriodSeconds: 30
volumes: # (3)
- name: plex-config
persistentVolumeClaim:
claimName: plex-nfs
- name: media-nfs
nfs: # (4)
path: /volume1/media
server: 192.168.1.50
1. Make sure that the `affinity` settings are placing your pod on a node with the GPU, which you will label!
2. Required for the scheduling to actually work. Otherwise you will see failures in `kubectl describe pod plex-<whateverUID>`
3. Make sure the `volumeMounts` and `volumes` match.
4. Regular ol' NFS definition, basically mapping a share inside the pod with no volume object itself
=== "Service"
kind: Service
apiVersion: v1
metadata:
name: plex
namespace: plex
annotations:
metallb.universe.tf/allow-shared-ip: plex # (1)
spec:
selector:
app: plex
externalIPs:
- 192.168.1.75 # (2)
ports:
- port: 32400
targetPort: 32400
name: pms-web
protocol: TCP
- port: 3005
targetPort: 3005
name: plex-companion
- port: 8324
name: plex-roku
targetPort: 8324
protocol: TCP
- port: 32469
targetPort: 32469
name: dlna-tcp
protocol: TCP
- port: 1900
targetPort: 1900
name: dlna-udp
protocol: UDP
- port: 5353
targetPort: 5353
name: discovery-udp
protocol: UDP
- port: 32410
targetPort: 32410
name: gdm-32410
protocol: UDP
- port: 32412
targetPort: 32412
name: gdm-32412
protocol: UDP
- port: 32413
targetPort: 32413
name: gdm-32413
protocol: UDP
- port: 32414
targetPort: 32414
name: gdm-32414
protocol: UDP
type: LoadBalancer
loadBalancerIP: 192.168.1.75 # (3)
1. Be sure to annotate the service accordingly so that the LB knows that your IP may be shared with your host/other pods.
2. Associate the external IP address you want to use on your home network.
3. Set the LB IP address, which may be the same, as in my case, where it is running on the same host.
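Once you have all of these saved, applying them and checking on the pod looks roughly like this (the file names are just illustrative):

```bash
kubectl apply -f plex-namespace.yaml -f plex-pv.yaml -f plex-pvc.yaml -f plex-deployment.yaml -f plex-service.yaml
kubectl get pods -n plex
```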
If everything worked you should have a running Plex pod!
If it has not started, do a kubectl describe pod -n plex plex-<whatever its id is>
and see what is busted.
Now make sure the service came up:
```bash
kubectl get service -n plex plex
NAME   TYPE           CLUSTER-IP      EXTERNAL-IP                 PORT(S)                                                                         AGE
plex   LoadBalancer   10.102.66.192   192.168.1.75,192.168.1.75   32400:31359/TCP,32469:30193/TCP,1900:32522/UDP,3005:32288/TCP,8324:32391/UDP   11d
```

??? info "ClusterIP note"
    The `10.102.66.192` is a randomly assigned internal ClusterIP; the External IP address is the one you care about.

You should be able to verify that Plex is at least working by going to `http://192.168.1.75:32400`.
If you see something like this or the *Setup* page (if this is a new server), then you're good!
### Hardware Transcoding
Now what do we need to do to test HW transcoding? Go ahead and start up some media on whatever device you want, and make it convert to a different bitrate, like so:
Anything lower than the direct stream / max should force transcoding. Now go back to your settings page, click the **Dashboard** button in the top right, and check the results:
You can see that the session I set to convert is transcoding, and the *(hw)* tag tells you it is using **hardware-accelerated transcoding**!

??? note "Multiple sessions"
    For whatever reason the session shows twice, since I was in the same browser for the settings and playback, but you get the point. I also have a *Direct Play* stream going on in the house at the time of writing, and no stuttering or buffering issues.
## Upgrading Plex
This is where things get sweet! Let's say a new version of Plex releases and the latest Docker image is available. What does it take to upgrade your server?
```bash
kubectl rollout restart deployment plex -n plex
```

That's it! So long as you are using the `imagePullPolicy: Always` field in your `deployment.yaml` manifest, the rollout will pull the newest image and re-create the container.
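If you want to watch the new pod come up:

```bash
kubectl rollout status deployment/plex -n plex
kubectl get pods -n plex -w
```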
Conclusion¶
If you got this far and got the winning Transcode (hw)
like above, you're in the elite world of running an easy to maintain and upgrade k8s Plex Media Server decoupled from your NAS/media, which has a ton of benefits I won't even begin to diatribe on in this post.
Thanks for reading and I hope that someone enjoyed the wild ride that is stitching together Kubernetes and Plex, a trend I hope grows so that we get support for cool features outside of bare metal and Docker for Plex and other media applications!
Additional Considerations¶
Terraform¶
If you have checked any of my other posts, you probably know I like Terraform! This actually led me to the fact that you can use Terraform for stuff like this, too, which makes rebuilding your server or updating it (or just keeping it consistent) WAY easier. I will look at creating a follow-up to this post on that, as well as a GitHub repo, but here's a snippet of what that is like:
resource "helm_release" "metallb" {
name = "metallb"
repository = "https://metallb.github.io/metallb"
chart = "metallb"
version = "0.13.12"
namespace = "metallb-system"
create_namespace = true
}
You can build modules and have a server "stack" of all the stuff. Ugh, so sweet. So if you plan to use a handful of containers, really consider this method!
Kubernetes Transcoding without HW Acceleration¶
Also, there are some cool projects out there (though no longer updated, it seems) like kube-plex: https://github.com/munnerz/kube-plex
That project builds on core k8s principles and does something really cool: it uses API hooks to schedule a dedicated pod to handle the transcoding task whenever a transcode is requested!
Unfortunately, it is mutually exclusive with the hardware acceleration we configured in this post, because the limits configuration means no additional pods can be scheduled to the node.
If you can't use HW acceleration, but have CPU capacity that you can distribute like that, then kube-plex (or some similar project I may not be aware of) may be viable, so take a look and don't be afraid to get weird with it!