Streaming an HDMI Source Using a Raspberry Pi 4B

Lately I got my hands on a cool new project involving a Cisco PrecisionHD TTC8-02 telepresence camera (more to come…).
In this blog post I want to compare two different methods of capturing this camera through an HDMI grabber at 1080p30 (1920×1080 pixels at 30 FPS), using a Raspberry Pi 4B (2 GB in my case).


Before starting, I installed Pi OS (Bullseye) on my device.
I ran all of the following tests using a WiFi connection on the Pi and OBS (media source) as the client. As the built-in WiFi was not sufficient in my case, I added an external WiFi stick.
This post will not cover installing OBS, a WiFi driver, Pi OS, etc.

To use hardware acceleration for encoding, we have to add this line to /boot/config.txt:
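The exact line depends on your setup; a common choice on Bullseye is to raise the GPU memory split (this particular value is an assumption on my side, adjust it to your needs):

```ini
# /boot/config.txt – assumed value, adjust to your needs
gpu_mem=128
```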


Why external WiFi?

Originally I wanted to use Dicaffeine to NDI-stream the HDMI source using a cheap USB HDMI grabber (ref link) from Amazon.

I did not manage to receive the video on my PC. I always saw the source but never received a frame. Even with a custom-compiled FFmpeg I had no success with NDI at all, not even with a 720p5 test video. Over Ethernet, everything worked fine.

This led me to further diagnostics… In my environment, WiFi is a nightmare. Long story short, I ran some download tests:

Connection                                              Measured speed
Built-in WiFi 2.4 GHz                                   300 kB/s
Built-in WiFi 5 GHz                                     0.5 – 1 MB/s
External ancient WiFi “g-standard” dongle (2.4 GHz)     3 MB/s
New external WiFi dongle with external antenna          160 MB/s

I think the results, and my reasons for using an external adapter, are quite clear. In my case, the driver had to be compiled manually and required setting arm_64bit=0 in config.txt. I am pretty sure it would also work with the internal WiFi in cleaner air 😉

Test environment

I created a busy scene on my desk. Then I positioned the camera and wired it directly to the USB grabber, which is connected to the Pi’s USB 2 interface (the stick only supports USB 2).

For the CSI module tests, I got my hands on a CSI-2-connected HDMI grabber. For these tests, I removed the cable from the USB grabber and connected it to the CSI grabber.

All hardware configuration was done before the tests, so the environment was identical for every test. Both grabbers were always connected, with the unused one inactive at the time of testing.

The Pi sits in a case (ref link) that is cooled by a small “Pi Fan”.

I installed GStreamer from the official repository using the following command. Please note that it installs quite a lot…

apt -y install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-gl gstreamer1.0-gtk3

Installing the Grabbers

Installing the USB grabber is easy: connect it and it works. It does not require any configuration or setup.

The CSI-Grabber is a bit more complicated:

First of all, update your system (apt update && apt -y full-upgrade), then enable the overlay by putting this line into /boot/config.txt:
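Assuming a TC358743-based grabber (which, as far as I know, this kind of CSI-2 HDMI grabber is), the overlay line would be:

```ini
# /boot/config.txt – assumes a TC358743-based CSI HDMI grabber
dtoverlay=tc358743
```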


This can be a second line, right below the previously added line.

Then we need to edit /boot/cmdline.txt and add cma=96M to the beginning. The whole line will then start like this:

cma=96M console=tty1 root=PARTUUID=...

After a reboot, the device should be up and running.

Warning: In the following commands, I always use /dev/video0 for the CSI grabber and /dev/video1 for the USB grabber. Adapt the commands to your setup. If you only have one device, it is most likely /dev/video0.

Setting up OBS

You can use any receiver software, of course, but I am using OBS as this camera will be used for streaming later on. The media source must be set up like this (use your Pi’s IP address instead of 111.222.333.444):

The important settings are marked with red arrows.
Actually, I did not notice any difference without latency=0, but I read somewhere that this option reduces buffering.
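Since the GStreamer pipelines act as SRT listeners on port 8888, the media source input in OBS is simply the Pi’s address and that port, for example (with my placeholder address):

```
srt://111.222.333.444:8888
```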

Running an SRT-Stream using the USB-Grabber

Now that OBS is set up, we can start the stream (or we could have done this before starting OBS). The following command works pretty well for the USB grabber:

gst-launch-1.0 -vvv v4l2src device=/dev/video1 ! 'image/jpeg,colorimetry=2:4:5:1,width=1920,height=1080,framerate=30/1' ! jpegparse ! v4l2jpegdec ! v4l2convert ! v4l2h264enc ! "video/x-h264,profile=main,preset=veryfast,framerate=30/1,level=(string)4,tune=zerolatency" ! mpegtsmux ! srtsink uri=srt://:8888

Using this command, the delay (camera to OBS) is about one second in my environment. The color quality is good and the framerate is consistent after a few minutes of runtime.

This command lowers the delay to about 0.8 seconds by using the baseline profile, but the color quality is much worse (see the comparison below). CPU usage and framerate are largely identical:

gst-launch-1.0 -vvv v4l2src device=/dev/video1 ! 'image/jpeg,colorimetry=2:4:5:1,width=1920,height=1080,framerate=30/1' ! jpegparse ! v4l2jpegdec ! v4l2convert ! v4l2h264enc ! "video/x-h264,profile=baseline,framerate=30/1,level=(string)4,tune=zerolatency" ! mpegtsmux ! srtsink uri=srt://:8888

The same, using the CSI-Module

Then I ran the same experiment using the CSI module (ref link), a C790 from Geekvideo. Here we need some more setup before we are able to stream.
Unfortunately, this needs to be done after each reboot, otherwise GStreamer will not start. Again: this is not a tutorial but a technical experiment and report, so maybe I will come up with a solution later.

Step 1: Create an EDID file

Create a file called edid.txt containing the following – copy it exactly!



This file controls the capabilities the adapter advertises over HDMI to the source device. This specific variant locks it to 1920×1080p30. The file can of course be reused across later reboots.

Step 2: Apply EDID-Info and configure Video4Linux driver

First, apply the EDID file to the device using:

v4l2-ctl --set-edid=file=edid.txt -d /dev/video0

Next, we configure Video4Linux to the correct timings:

v4l2-ctl --set-dv-bt-timings query -d /dev/video0
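Since these two commands must be repeated after every boot, one possible way to automate them is a small systemd unit (an untested sketch; the unit name and the edid.txt path are my assumptions, and the HDMI source must already be connected at boot for the timings query to succeed):

```ini
# /etc/systemd/system/csi-edid.service (hypothetical name)
[Unit]
Description=Apply EDID and DV timings to the CSI HDMI grabber
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/bin/v4l2-ctl --set-edid=file=/home/pi/edid.txt -d /dev/video0
ExecStart=/usr/bin/v4l2-ctl --set-dv-bt-timings query -d /dev/video0

[Install]
WantedBy=multi-user.target
```

It would then be enabled once with systemctl enable csi-edid.service.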

Step 3: Run GStreamer

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=UYVY' ! v4l2h264enc ! 'video/x-h264,profile=main,preset=veryfast,level=(string)4,tune=zerolatency' ! mpegtsmux ! srtsink uri=srt://:8888

This variant caused slightly higher system load than the “bad color” USB variant, but achieved a lower delay of only 0.4 – 0.6 seconds.

Image comparison

I created a comparison image using the two grabbers. From top to bottom:

USB with baseline profile / USB with main profile / CSI with main profile

Regarding load: a second try showed much lower CPU usage in test 2, so the CPU usage figures may not be accurate.

As you can see, the USB main-profile variant has the best image quality, directly followed by the CSI variant. I think this could possibly be tuned further, but as we’re using the same encoding settings, I suspect the difference comes largely from the grabber chip.
Regarding load, the CSI approach is the clear winner when relating quality to load.

The next day… RGB!

A day later, I remembered reading somewhere that this CSI device is capable of RGB as well as UYVY. So I gave it another shot. Here is the command line:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=RGB,colorimetry=sRGB' ! capssetter caps="video/x-raw,format=BGR" ! v4l2h264enc ! 'video/x-h264,profile=main,preset=veryfast,level=(string)4,tune=zerolatency' ! mpegtsmux ! srtsink uri=srt://:8888

And this is the resulting comparison. The delay is identical to the UYVY variant, and the system load is slightly higher. I think the colors are a little better (bottom is RGB, top is UYVY)…

…but compared to the USB grabber, the result is still worse – USB on top, CSI on the bottom:

I also learned about a new chip (thanks to pr3sk from JvPeek’s Discord) built into newer USB 3 grabbers. I just ordered one of these and will report in a separate post whether those are maybe “the solution” to everything 🙂

Interestingly, the CPU usage while grabbing from USB was much lower this time. I have no idea why… Maybe yesterday’s load comparison was garbage…


I can say that both approaches seem to work. The USB variant appears to be a bit less stable (in a few tries, the stream froze after a few seconds).

After all this, I am not really sure how to proceed. The CSI variant is much more performant and never had an outage during testing. Regarding image quality, the USB variant (with main profile) is clearly better.

I am not a GStreamer pro, so maybe someone has ideas on how to improve these pipelines? Feel free to comment!


  • August 22: Added source declaration and further explanation of the EDID file
  • August 22: Added section “The next day”
Experience Report: Logitech “G” (Gaming) Series

[German article] I have been a satisfied “Logitech customer” for many years and am appalled at how the quality of their products has changed. If you want to know what has been bugging me, you are in the right place!

One thing up front: this is an experience report. Many things are personal preference, shaped by my attitude and expectations toward a device I pay a certain price for. Everyone may see certain things differently, and that is perfectly fine.

For my english readers: Sorry. Sometimes I feel like writing things down and don’t want to translate everything on my own – like in this case. This article is a testimonial about my experience with current logitech hardware.

Once upon a time, long, long ago… Back in my “childhood” I had my first Logitech G-Series keyboard, the G11, later the G15. I still owned the G15 until about 4-5 years ago. At that point, because of the G15’s wear, I finally modernized and switched to the G910. I no longer remember whether it was the Artemis or the Orion.

About half a year ago, something dripped into the keyboard and I had to remove the Delete keycap. In doing so, a piece of the cap (the clip) broke off, and it no longer held. While trying to glue the cap back on, the switch was unfortunately damaged, so I decided on the spot to replace the whole keyboard. I have never regretted anything as much as disposing of my G910. You ask why?

The “new” Logitech G910

That’s simple: I ordered a new G910 and connected it. It worked and I was happy. I disposed of the old 910 since it was, after all, defective. A few days later I noticed that I was mistyping surprisingly often. “rr” or “tt” appeared frequently, as did the same letters missing from words. So I ran a test:

I opened Notepad and alternately pressed “R” and “Enter”. This is what it looked like:



What does this show us? The R key sometimes registers twice and sometimes not at all. By the way, the depiction is not exaggerated. It really was that frequent!

Defective… It happens!

I figured even a Logitech keyboard can be a lemon. I contacted the online shop where I had bought the keyboard and was allowed to return it. Admittedly, it was a “used” model; make of that what you will.

I’ll spare you the details: I repeated this procedure several times. In the end I always ordered two, tested them, and sent them back until I had one that was supposedly okay. Supposedly!

You can surely guess that even the keyboard that ultimately “worked” developed key-registration problems after a few weeks. A Google search confirmed my suspicion…

I am not the only one with this problem. If you dig through the forums, you’ll find it happens very often. Some people have found workarounds, like flushing the switches with alcohol. But I am certainly not going to do that with a new keyboard. I need a keyboard that works, not a maintenance-heavy tinkering project.

My conclusion: they can keep this junk. So I sent the Logitech G910 back and ordered a different model. Among others, I tried the Logitech G213 Prodigy, which is a rubber-dome keyboard (no mechanical switches that can fail): Funny. Which engineer thought, “Let’s install a coil that whines when the keyboard’s RGB lighting is set to half brightness”? The 10 cents saved were surely worth it… Incidentally, this is also reflected in the product reviews on Amazon.

Something I did not experience with that model myself, but which actually comes up frequently in the reviews: individual keys not registering… Here, true to form, it apparently affects whole regions of keys at once. So presumably a PCB defect.

Will the Logitech G815 be my new keyboard?

You can guess (this happens a lot in this report): the Logitech G213 went back, of course. I ordered a G815 – the new flagship, but the wired version. An impressive keyboard! Design-wise it’s a change from all the injection-molded gaming plastic. It feels high-quality and is surprisingly heavy.

It is a keyboard with mechanical switches, but not the Romer-G switches (which the G910 has) that are known for their failures. Great! Typing on this keyboard is very pleasant. If only there weren’t individual keys that register twice… Groundhog Day all over again… I contacted the retailer and got a replacement.

The replacement arrived promptly and I put it into operation: the “+” key on the numpad only registers very occasionally, unless you hammer on it with force.

I’ll give the whole thing one more chance, otherwise I’ll switch to the linear switch type. Those don’t seem to have the problem. (I use a G815 with linear switches elsewhere; it sat here for a while too and was then replaced by the tactile one because I find its keystroke more pleasant.)

What about the mouse?

I mentioned at the beginning that I am the proud user of a G700s. After almost 10 years of use (no joke! Bought 11/2013) it still works flawlessly and I don’t really want to replace it, but the scroll wheel is slowly disintegrating and I keep getting small splinters of the scroll wheel’s coating stuck in my finger… I don’t think that’s entirely optimal, health-wise.

So I researched current models… The G903 and G502 look promising. The G903 would be my favorite, but what do the reviews say? You can hardly believe it… (The following are excerpts from the reviews on Amazon – affiliate link)

"For anyone looking to buy this mouse DO NOT BUY IT.
Like everyone else, after a few months of use my right click started to develop a double clicking issue, or just the right click never consistently detected it as clicked down."
"Great thing, except the right mouse button acts up a bit"
"Left-click failure after 4 months of use"
"After barely a year, I too fell victim to the double-click defect,"

I really have no appetite for that… So I ordered the G502 SE. It looks cool, has two side buttons, lets you decouple the scroll wheel (which I sometimes like to use while browsing), and, something I personally find very practical: a shoulder button activates a precision mode. That’s not only brilliant for gaming… you can also make good use of it when working with Unity.

Sounds good (actually it doesn’t), but well…

Okay, the G502 in all its variants is a mid-priced model in its field. Maybe I am too demanding, but the mouse feels cheap by comparison… The feel is like the mice usually bundled for free with office PCs. That is just about acceptable, but the big problem is the scroll wheel… It fits neither the look nor the feel of the rest of the mouse. It looks great in the pictures, but in reality it could be a wheel off a toy excavator. I would rather have paid a €5 premium for a decent scroll wheel. The mouse is also somewhat smaller than the G700s, which is just about okay for me.

The click sound is also interesting. The G700s has a clean “click”: short, crisp, high-pitched. The G502 SE has a lower tone with a slight reverberation. All together, these are the little things that make a €100 mouse worth €100: attention to detail and high quality through and through. Luckily, the G502 SE is not a €100 mouse.

I will test the mouse for another day or two and am considering replacing the scroll wheel on my G700s. Maybe it will last another 10 years with me…


Unfortunately, no other manufacturer has managed to convince me either. Roccat, for example, also makes nice keyboards, but why does the LED under the key have to shine directly into my face? Would a 5 mm “skirt” around the keycaps to prevent that have hurt? I have looked at and tried out keyboards in shops and a few here at home, and I have to say… either the feel is really bad or the keys are too small, and often the left Windows key is replaced by Fn (which annoys me immensely)… Some models have an actuation force that makes you think you’re pressing a button on a control cabinet…

But I have not found a really good keyboard with backlighting. Yes, I am picky, but at prices often well above €100 for a keyboard, I think I’m entitled to be.

One question keeps nagging me: What has become of Logitech? Is this the new standard? I used to be thrilled, but what I have seen of the current models… It makes me sad.

Which keyboard and mouse do you use, and have you had similar experiences? Off to the comments!

Title image source:

IGEL OS – VirtualCam User Settings

Using a webcam in a virtual desktop environment can be easy. Or it can be tricky! Read on to learn about the highs and lows and what a VirtualCam has to do with it…

Where I work, we are confronted with lots of users who have their own ideas and requirements. Usually in IT, one would enforce a concept or a policy, but that is not always the best solution.

For example, in this case we gave some users a mobile device (laptop) running IGEL OS 11 as a thin client. The requirement that had just reached me was to be able to use a webcam in a browser-based video conferencing app. Sounds simple, huh?

The main problem without VirtualCam User Settings

On a thin client, users are usually connected to a virtual desktop environment – in our case Citrix Virtual Apps and Desktops. You also usually do not want to spend too much bandwidth on transferring video data. This is especially true for mobile clients, which might access the system from external locations through an ADC (NetScaler) where bandwidth is licensed.

The solution to this and many more issues (buzzword: GPU requirement) is offloading.

Optimized Apps using Citrix Virtual Channel

The most common video conferencing providers like Cisco (WebEx) and Microsoft (Teams) have elaborate mechanisms for splitting their apps into a server and a client component. This is very cool, but unfortunately our application was different: it’s an HTML5 app.

Browser Content Redirection

The second idea was to use Citrix Browser Content Redirection (BCR) to offload the whole webpage to the client OS. Long story short: the application was unable to even start. The scripts were too complex to be redirected; we failed with about a thousand JavaScript errors.

The final solution: Citrix HDX Camera

The “fallback” is redirecting the video. Fortunately, our terminal servers have decent GPUs installed, which allows us to process video very efficiently. Because of this, we “only” needed to enable HDX camera redirection. After some initial struggles (which turned out to be Citrix policy issues), we managed to redirect the camera quite well using this basic Citrix feature, and it works as expected.

So far, so good…

Here comes: The User!

I mentioned initially that we have some special requirements… There you go: it didn’t take long until someone figured out that their cam was actually not working as expected.

What happened?

During the pandemic we were forced to buy several different webcams, as availability was a problem. So now we have about 5 to 10 different camera models plus integrated cameras in some devices. In the end, a user might even have two or more cameras available, but HDX only redirects the “primary” camera (as in, the first one enumerated). And for us, that was always the integrated laptop camera.

How we solved it using VirtualCam

IGEL luckily just introduced the “Virtual Camera Background” feature in IGEL OS 11.08. This not only allows blurring the background for any application; it additionally provides a registry setting to set the camera source to a specific camera (by name, number, or VID/PID). It then replaces the main camera with the virtual one, and this VirtualCam is redirected by Citrix HDX.

This was the basic solution, but we would need to configure the camera for every single client via the registry. While possible, that’s an annoying sysadmin task. Additionally, mobile users change workplaces from time to time and are then often forced to use a different cam (e.g. the integrated one on the go, a monitor cam at their desk).

Power to the user – Let them control VirtualCam settings!

I created a script that can be deployed to an IGEL Thin Client and linked as a custom application to allow the user to toggle and configure the VirtualCam feature.

All you need, is a basic profile to configure the Custom App and the basic VirtualCam parameters. The shell script I wrote contains the required logic.

How to install

To install the script, either put it into a custom partition or use the UMS’ files section to upload it to /wfs/. (For the following, we assume you uploaded the file to /wfs/.)

Then import the profile from the XML. Make sure that your firmware feature restriction does not remove any of the Virtual Camera Background features.

The profile creates a new folder on the desktop called “Settings” and puts a link in it called “VirtualCam”. You might want to change that under desktop integration.

Work Done!

Are you using uncommon video conferencing tools, too?
If so, which ones, and have you already encountered such a situation? How did you solve it?

Provider Comparison – Low Budget Cloud, Part 1

A new journey

Hi, I am Alex. I am going my own way in setting up the “business IT” for my own “company”, Firesplash Entertainment. You ask why? As a sole proprietor who turned his hobby into a small business, availability counts (we are serving a cool overlay game running on Twitch), but so does budget. I am known as someone who loves simple solutions that are actually manageable and affordable for a normal (as far as IT people can be normal) person. So, welcome to my think-outside-the-box HA cloud project number two. Let’s start with a cloud provider comparison…

Err.. Wait.. The Hetzner Hybrid Cloud Project..?

Did that introduction sound familiar to you? Yep, it’s me. Some time ago I started a series on my friend’s blog where I described how we set up a hybrid cluster on Hetzner. It consisted of two dedicated hardware servers and a cloud VM used as quorum and router. We used HCloud floating IPs and routed them over a GRE connection using Open vSwitch. Yes, it worked. But it had quite some pitfalls I was never able to solve – for example, for some reason the complete GRE tunnel dropped when I added a third node. The original plan was to move to cloud networks connected to vSwitches, but Hetzner did not implement IPv6 for this feature (funny, as they recently announced IPv6-only servers…).
This simple setup still cost us about €130 per month, and in the end the nodes were idle most of the time. Furthermore, Hetzner recently dropped highly available cloud VMs (Ceph-based), which is a breaking point for the concept, and they raised their prices for IPv4 addresses again (including for existing customers).
When I finally noticed that my Kubernetes cluster needed a major OS upgrade, that was the point where I decided to simply toss the existing solution and start working out a better one.

The Idea

I always kept an eye on Hetzner’s latest features on their cloud platform. They added load balancers, firewalls, private networking (still IPv4-only)… the basic stuff that turns a provider into a real cloud provider, you know?

For me, it was time to check whether I could migrate all my workloads from my dedicated servers into the cloud. The original plan was to cancel my two dedicated servers and consolidate some VMs into fewer ones on Hetzner’s cloud.

Maths – Calculations for a good Provider Comparison

Of course you always have to look at the expected cost – especially with modern cloud providers, things can become quite expensive very fast when you have non-cloud-native workloads. So I created a quick table to calculate my expected cost, which shall help us in our cloud provider comparison. First of all, I defined my new VMs including their sizing. I ended up with something like this:

A table showing the calculations for the provider comparison for Hetzner Cloud

As you can see in this table, my calculation includes Hetzner’s backup offerings. For further comparison we will take the ~€86 without those backup costs – but we will likely use another backup solution for our final systems. Also, the calculation might not be 100% accurate, because I think that backing up the additional volume on Collab (the GitLab server) will cost extra.

Other Options

I also checked a few other options, like moving our Kubernetes workloads to AWS, GKE, … and also OVH and other server providers.

One of these providers showed very good pricing. Hello, netcup! I will now show you my second calculation, including the same workloads – except the backup feature. Also, the load balancer has been removed and replaced by a floating IP (which would also work at Hetzner, saving us about €4, but more details in the next part).

A table showing the original calculations for the provider comparison for netcup
Server calculation at netcup with current prices including German VAT as of Feb. 2022

So at first glance, the price difference is minimal. We are talking about a €7 difference, while Hetzner Cloud definitely has the better user interface and UX. Also, netcup’s prices are only valid for subscriptions with a minimum contract period of 12 months, while Hetzner allows per-hour billing.

Provider comparison results: Why netcup is still the better choice for me

If I saw this cloud provider comparison in its current state, honestly… I would say “go with Hetzner“. But of course there is a difference. netcup’s VPS 2000 G9 has double the RAM of a CPX31 at Hetzner. The same goes for most of the “machines”: almost every single VM on this list has more power than in the Hetzner table. Local storage is also much bigger, which enables us to use persistent volumes in our Kubernetes environment. So in the end you get a lot more bang for your buck there.

When we add our “think-outside-the-box” manner, we can violate a best practice and converge the control planes into the worker nodes (because those now have more than enough RAM). This is a security consideration, but we don’t allow third parties to manage workloads on the cluster, so we know what is running on it. It’s a risk I am willing to take.

You might further have noticed that I converged the Nextcloud instance, which was planned as a StorageShare at Hetzner, into the Collab server – this is actually our current setup and is now possible because of the bigger disk and RAM sizes at netcup.

So with everything put back into the table, we actually end up with a saving of about €19/month (that’s €228 per year!) while still having more resources available for our workloads. Isn’t that awesome?
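Just to double-check the yearly figure (plain arithmetic from the tables above, nothing more):

```python
# Monthly saving from the comparison tables (EUR)
monthly_saving = 19
# Projected yearly saving
yearly_saving = monthly_saving * 12
print(yearly_saving)  # 228
```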

A table showing the modified calculations for the provider comparison for netcup

Completing the concept – Adding backups

Still, we need a backup strategy. We did not honor this in our cloud provider comparison as, in the end, that cost is quite identical for all solutions. Right now I have not decided which way we will go here. Hetzner recently changed their storage box model to provide more storage for less money. Unfortunately, they still do not support NFS… We might want to use Proxmox Backup Server (PBS) for our backups, as we have quite good experience with it and the pbs-client also allows us to back up “foreign” VMs. PBS lets us do a complete partition dump from inside the VM – and the other way round, we can quickly restore a full VM from a live Linux system, or only a few files locally if required. It also does a good job at deduplication – and I mean not only per VM!

Proxmox Backup Server has minimum requirements of 2+ cores and 2 GB RAM. We currently need about 450 GB of backup storage, but I expect that to grow a bit, as from now on we will back up file systems instead of images, which is a bit less efficient. This leaves me with three options:

  1. All-Hetzner classic solution
    • StorageBox 1TB (BX11)
    • CloudVM CPX11
    • Total cost: 8,20€/month
    • Implication: Backup traffic goes directly from our prod VMs to external IPs
  2. All-netcup
    • S 1000 G7 (1,5 TB, only 1 vCore)
    • Total cost: 15,99€/month
    • Implication: We violate the minimum requirements for PBS
  3. Hybrid
    • Hetzner StorageBox 1TB (BX11)
    • netcup VPS 500 G8
    • Total cost: 8,74€/month
    • Implication: Latency between PBS and Storage can cause trouble

There is another option: Borg Backup

Borg does not allow us to back up (and restore) the full VM, but it is perfect for backing up individual files or folders containing the required data.

At this point it is a decision: do we want (file system) “snapshots” or data backups? Let’s check the pricing. It is as simple as one Hetzner StorageBox BX11 for 3,45€/month, because BorgBackup is natively supported.
We still stick with Hetzner here, as their storage boxes are the cheapest backup solution and I am a fan of having last-resort backups somewhere else… even if it’s the same city in the end 😉 – but we could also send our storage box to Finland.

The big difference…

…is that PBS allows us to quickly restore the system state of our VM. All we have to do (at least in theory) is set up a new VM (or use the old one), boot the recovery system, install the PBS client, and restore the root partition (and probably all other partitions) to the mounted disk.

With Borg we would still have to install an OS and all the software and then restore all data folders, config files, etc. using Borg. Basically the same as setting up our system from scratch, plus the data restore.

Next steps…

At this point I have (had… this article was actually written months ago) not yet decided which backup strategy to go for. What would you do?

In the next part we will start setting up our VMs on netcup. Want to give it a try? I would appreciate it if you used one of the referral links contained in this cloud provider comparison to support me and Firesplash Entertainment. Also, for exploring netcup, I have a €5 voucher for new customers for you: 36nc16447952840

Upcoming articles in this series will be published with some delay, depending on when I find time to actually write them…

Multiple VLANs Using Windows 10/11 Onboard Tools

As Intel recently deprecated their ProSet tool for Windows 11, I had to find a solution to keep operating some computers that have to connect to multiple VLAN interfaces.

Right now, those computers use a single NIC connected to a switch with VLAN tagging enabled. Unfortunately (or fortunately?), most of our computers currently run a Realtek NIC, and the VLANs are configured using Realtek’s Diagnostic Utility.

Fortunately (or in this case unfortunately?), our latest generation of hardware now comes with an Intel network chip. This change pushed me into a new issue: how do I configure multiple VLANs under Windows 11 on an Intel NIC?

The Issue

Intel used to offer quite an easy configuration for multiple VLANs in their advanced driver settings, but as I already noted: the ProSet tool, including the advanced driver functions, has been discontinued by Intel.
So now I had to find a new way to meet that requirement.

On Linux, you can easily create virtual NICs, as the VLAN function is built into the Linux kernel’s network stack, but I thought Windows was not able to do anything similar – and I was wrong-ish.

Examining the advanced tab of my NIC, I saw that setting a single VLAN ID is not hard; it can be done in the advanced properties of the NIC or using PowerShell. In our case, as we need to operate on multiple VLANs, things are different. There is no option to add a virtual NIC, so I started tinkering.

The solution is Hyper-V

After some research, I found out that Hyper-V is VLAN-aware and that you can use its network stack without installing and running the full hypervisor. Additionally, you can create multiple virtual host NICs attached to a VLAN-aware Hyper-V switch.

Sounds like rocket science? It is actually quite straightforward, but you need to configure it using PowerShell.

Step 1: Installing the Windows components

In the Windows feature installation dialog, we have to install the two components called “Hyper-V Services” and “Hyper-V Module for Windows PowerShell”.

After the setup is done, you have to reboot the computer.

Step 2: Setting up the vSwitch

Hyper-V automatically creates a “Default vSwitch” that you cannot delete, and unfortunately it is not VLAN-aware. We will have to keep this one as it is and go ahead creating a second vSwitch. For this, we need to open PowerShell as an administrator and enter the following commands.

Your host will lose its network connection – do not do this remotely or with apps running.

# This will return a list of network adapters; find your physical NIC and note its "Name" - in most cases "Ethernet"
Get-NetAdapter

# This creates a new vSwitch named VLAN-vSwitch and bridging our physical NIC called "Ethernet". Also we allow to add virtual Host-NICs to this switch.
New-VMSwitch -name VLAN-vSwitch -NetAdapterName Ethernet -AllowManagementOS $true

# Hyper-V automatically creates a virtual NIC without a VLAN tag to keep the host online - remove it, unless you are using an untagged/tagged combination.
Remove-VMNetworkAdapter -ManagementOS -Name VLAN-vSwitch

We now have a clean, new VLAN-aware vSwitch.

Step 3: Setting up VLAN interfaces

# Now we create a new virtual host NIC and assign VLAN tag 123 to it. The interface name can be chosen freely; you might want to name them by purpose.
Add-VMNetworkAdapter -ManagementOS -Name "VLAN123" -SwitchName "VLAN-vSwitch" -Passthru | Set-VMNetworkAdapterVlan -Access -VlanId 123

# You can now add as many virtual NICs as you need
Add-VMNetworkAdapter -ManagementOS -Name "VLAN456" -SwitchName "VLAN-vSwitch" -Passthru | Set-VMNetworkAdapterVlan -Access -VlanId 456

# Finally, verify that all adapters are in place
Get-VMNetworkAdapter -ManagementOS

That’s all you need to do. You have now replicated the VLAN feature of the Realtek Diagnostic Utility or Intel ProSet with Windows “onboard tools”, and this solution should be compatible with any available NIC.

What do you think? Is this a clean solution, or do you still prefer using other tools – and if so, which?

Title Photo by Taylor Vick on Unsplash