Streaming an HDMI Source Using a Raspberry Pi 4B

Lately I got my hands on a cool new project utilizing a Cisco PrecisionHD TTC8-02 telepresence camera (more to come…).
In this blog post, I want to compare two different methods of capturing this camera through an HDMI grabber in 1080p30 (1920×1080 pixels at 30 FPS), using a Raspberry Pi 4B (2GB in my case).

Preparation

Before starting, I installed Pi OS (Bullseye) on my device.
I did all the following tests using a WiFi connection on the Pi and OBS (media source) as the client. As the built-in WiFi was not fast enough in my case, I added an external WiFi stick.
This post will not cover installing OBS, a WiFi driver, Pi OS, etc.

To use hardware-accelerated encoding, we have to add this line to /boot/config.txt:

dtoverlay=vc4-fkms-v3d
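After a reboot, you can check that the V4L2 codec devices are available. To my knowledge, the hardware H.264 encoder usually shows up as /dev/video11 on the Pi 4, but the numbering may differ on your system:

ls /dev/video*
v4l2-ctl -d /dev/video11 --info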

Why external WiFi?

Originally I wanted to use Dicaffeine to stream the HDMI source via NDI, using a cheap USB HDMI grabber (ref link) from Amazon.

I did not manage to receive the video on my PC. I always saw the source but never received a frame. Even with a custom-compiled FFmpeg, I had no success using NDI at all – not even with a 720p5 test video. Over Ethernet, everything worked fine.

This led me to further diagnostics… In my environment, WiFi is a nightmare. Long story short, I did some download tests:

| Connection | Measured speed |
| --- | --- |
| Built-in WiFi 2.4 GHz | 300 kB/s |
| Built-in WiFi 5 GHz | 0.5 – 1 MB/s |
| External ancient WiFi "g-standard" dongle (2.4 GHz) | 3 MB/s |
| New external WiFi dongle with external antenna | 160 MB/s |

I think the results, and my reasons for using an external adapter, are quite clear. In my case, the driver had to be compiled manually and required setting arm_64bit=0 in config.txt. I am pretty sure it will also work with the internal WiFi in cleaner air 😉
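If you want to reproduce such a download test yourself, a simple curl one-liner is enough. The URL is only an example – use any large file reachable from your network:

curl -o /dev/null -w 'average download speed: %{speed_download} bytes/s\n' https://speed.hetzner.de/100MB.bin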

Test environment

I created a busy scene on my desk. Then I positioned the camera and wired it directly to the USB grabber, which is connected to the Pi's USB 2.0 interface (the stick only supports USB 2.0).

For the CSI module tests, I got my hands on this CSI-2-connected HDMI grabber. For those tests, I removed the cable from the USB grabber and connected it to the CSI grabber.

All hardware configuration was done before the tests, so the environment is identical across all tests. Both grabbers were always connected, but the one not under test was inactive.

The Pi sits in a case (ref link) that is cooled by a small “Pi Fan”.

I installed GStreamer from the official repo using the following command. Please note, it installs quite a lot…

apt -y install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-gl gstreamer1.0-gtk3

Installing the Grabbers

Installing the USB grabber is easy: connect it -> works. It does not require any configuration or setup.

The CSI-Grabber is a bit more complicated:

First of all, update your system (apt update && apt -y full-upgrade), then enable the overlay by putting this line into /boot/config.txt:

dtoverlay=tc358743

This can be a second line, right below the previously added line.

Then we need to edit /boot/cmdline.txt and add cma=96M to the beginning. The whole line will then start like this:

cma=96M console=tty1 root=PARTUUID=...
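If you prefer doing this from the shell, a sed one-liner can prepend the parameter (assuming cma= is not already present in the file):

sudo sed -i '1s/^/cma=96M /' /boot/cmdline.txt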

After a reboot, the device should be up and running.

Warning: In the following commands, I always use /dev/video0 for the CSI grabber and /dev/video1 for the USB grabber. Adapt the commands to your setup. If you only have one of them, it is most likely /dev/video0.
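If you are unsure which device is which, v4l2-ctl can list all video devices together with their driver names; the CSI grabber should be recognizable by its tc358743/unicam driver, the USB grabber by its UVC name:

v4l2-ctl --list-devices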

Setting up OBS

You can use any receiver software of course, but I am using OBS, as this camera will be used for streaming later on. The media source must be set up like this (use your Pi's IP address instead of 111.222.333.444):

The important settings are marked with red arrows.
Actually, I did not notice any difference without latency=0, but I read somewhere that this option reduces buffering.
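In case the screenshot is not visible: essentially, "Local File" is unchecked and the SRT URL goes into the input field. Appending latency as a URI parameter is an assumption on my side, and field names may differ between OBS versions:

srt://111.222.333.444:8888?latency=0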

Running an SRT-Stream using the USB-Grabber

Now that OBS is set up, we can start the stream (or we could have done this before OBS was up). The following command works pretty well for the USB grabber:

gst-launch-1.0 -vvv v4l2src device=/dev/video1 ! 'image/jpeg,colorimetry=2:4:5:1,width=1920,height=1080,framerate=30/1' ! jpegparse ! v4l2jpegdec ! v4l2convert ! v4l2h264enc ! "video/x-h264,profile=main,preset=veryfast,framerate=30/1,level=(string)4,tune=zerolatency" ! mpegtsmux ! srtsink uri=srt://:8888
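For readers new to GStreamer, a quick rundown of what the individual elements in this pipeline do:

# v4l2src device=/dev/video1  -> grab frames from the USB grabber
# image/jpeg,...              -> request MJPEG at 1920x1080, 30 FPS from the device
# jpegparse ! v4l2jpegdec     -> parse the JPEG frames and decode them in hardware
# v4l2convert                 -> hardware-accelerated format/color conversion
# v4l2h264enc                 -> hardware H.264 encoding
# video/x-h264,...            -> select profile and level of the encoded stream
# mpegtsmux ! srtsink         -> mux into MPEG-TS and serve it as an SRT listener on port 8888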

Using this command, the delay (cam to OBS) is about 1 second in my environment. The color quality is good, and the framerate becomes consistent after a few minutes of runtime.

The following command lowers the delay to about 0.8 seconds by using the baseline profile, but the color quality is much worse (see the comparison below). CPU usage and framerate are largely identical:

gst-launch-1.0 -vvv v4l2src device=/dev/video1 ! 'image/jpeg,colorimetry=2:4:5:1,width=1920,height=1080,framerate=30/1' ! jpegparse ! v4l2jpegdec ! v4l2convert ! v4l2h264enc ! "video/x-h264,profile=baseline,framerate=30/1,level=(string)4,tune=zerolatency" ! mpegtsmux ! srtsink uri=srt://:8888

The same, using the CSI-Module

Then I did the same experiment using the CSI module, a C790 from Geekvideo (ref link). Here we need some more setup before we are able to stream.
Unfortunately, this needs to be done after each reboot, or GStreamer will not start. Again: this is not a tutorial but a technical experiment and report, so maybe I will come up with a solution later.

Step 1: Create an EDID file

Create a file called edid.txt containing the following – copy it exactly!

00ffffffffffff005262888800888888
1c150103800000780aEE91A3544C9926
0F505400000001010101010101010101
01010101010100000000000000000000
00000000000000000000000000000000
00000000000000000000000000000000
00000000000000000000000000000000
0000000000000000000000000000002f
0203144041a22309070766030c003000
80E3007F000000000000000000000000
00000000000000000000000000000000
00000000000000000000000000000000
00000000000000000000000000000000
00000000000000000000000000000000
00000000000000000000000000000000
00000000000000000000000000000003

Source: https://forums.raspberrypi.com/viewtopic.php?t=315247#p1885669

This file controls the capabilities the adapter advertises over HDMI to the source device. This specific variant locks it to 1920×1080p30. The file can of course be reused for later reboots.

Step 2: Apply EDID-Info and configure Video4Linux driver

First, apply the EDID-File to the device using

v4l2-ctl --set-edid=file=edid.txt -d /dev/video0

Next, we configure Video4Linux to the correct timings:

v4l2-ctl --set-dv-bt-timings query -d /dev/video0
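To verify that the timings were detected correctly, you can query them back from the driver. If the EDID was applied and the camera sends a signal, this should report 1920x1080 at 30 FPS:

v4l2-ctl --query-dv-timings -d /dev/video0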

Step 3: Run GStreamer

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=UYVY' ! v4l2h264enc ! 'video/x-h264,profile=main,preset=veryfast,level=(string)4,tune=zerolatency' ! mpegtsmux ! srtsink uri=srt://:8888

This variant caused slightly higher system load than the "bad color" USB variant, but achieved a lower delay of only 0.4 – 0.6 seconds.
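Since the EDID and timing setup is lost on every reboot, one possible fix is a small systemd oneshot service that re-applies both settings at boot. This is an untested sketch; the unit name and the path to edid.txt are my own choices:

# /etc/systemd/system/csi-hdmi-setup.service (hypothetical unit name)
[Unit]
Description=Apply EDID and DV timings to the CSI HDMI grabber

[Service]
Type=oneshot
ExecStart=/usr/bin/v4l2-ctl --set-edid=file=/home/pi/edid.txt -d /dev/video0
ExecStart=/usr/bin/v4l2-ctl --set-dv-bt-timings query -d /dev/video0

[Install]
WantedBy=multi-user.target

After a systemctl enable csi-hdmi-setup.service, the grabber should be ready right after boot.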

Image comparison

I created a side-by-side comparison image using the two grabbers. From top to bottom:

USB with baseline profile / USB with main profile / CSI with main profile

Regarding load: a second try caused much lower CPU usage on test 2, so maybe the CPU usage figures are not accurate.

As you can see, the USB main-profile variant has the best image quality, directly followed by the CSI variant. I think this could possibly be tuned further, but as we're using the same encoding settings, I fear the difference comes largely from the chip.
Regarding load, the CSI approach is the clear winner when relating quality to load.

The next day… RGB!

A day later, I remembered reading somewhere that this CSI grabber is capable of RGB as well as UYVY. So I gave it another shot. Here is the command line:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=RGB,colorimetry=sRGB' ! capssetter caps="video/x-raw,format=BGR" ! v4l2h264enc ! 'video/x-h264,profile=main,preset=veryfast,level=(string)4,tune=zerolatency' ! mpegtsmux ! srtsink uri=srt://:8888

And this is the resulting comparison. The delay is identical to the UYVY variant, while the system load is slightly higher. I think the colors are a little better (bottom is RGB, top is UYVY)…

…but compared to the USB-Grabber, the result is still worse – USB top, CSI bottom:

I also learned about a new chip (thanks to pr3sk from JvPeek's Discord) built into newer USB 3 grabbers. I just ordered one of these and will report in a separate post whether those are maybe "the solution" to everything 🙂

Interestingly, the CPU usage while grabbing from USB was much lower this time… I have no idea why… Maybe yesterday's load comparison was garbage…

End-Results

I can say that both approaches seem to work. It looks like the USB variant is a bit less stable (in a few tries, the stream froze after a few seconds).

After all, I am not really sure how to proceed. The CSI variant is much more performant and never had an outage during testing. Regarding image quality, the USB variant (with main profile) is clearly better.

I am not a GStreamer pro, so maybe someone has ideas on how to improve these pipelines? Feel free to comment!

Updates

  • August 22: Added source declaration and further explanation of the EDID file
  • August 22: Added section "The next day"

Provider Comparison – Low Budget Cloud, Part 1

A new journey

Hi, I am Alex, and I am going my own way of setting up the "business IT" for my own "company", Firesplash Entertainment. You ask why? As a sole proprietor who turned his hobby into a small business, availability counts (we are serving a cool overlay game running on Twitch), but so does budget. I am known as someone who loves simple solutions that are actually manageable and affordable for a normal (as far as IT people can be) human being. So, welcome to my think-outside-the-box HA cloud project number two. Let's start with a cloud provider comparison…

Err.. Wait.. The Hetzner Hybrid Cloud Project..?

Did that introduction sound familiar to you? Yep, it's me. Some time ago I started a series on my friend's blog where I described how we set up a hybrid cluster at Hetzner. It consisted of two dedicated hardware servers and a cloud VM used as quorum and router. We used HCloud floating IPs and routed them over a GRE connection using Open vSwitch. Yes, it worked. But it had quite some pitfalls I was never able to solve – for example, for some reason the complete GRE tunnel dropped when I added a third node. The original plan was to move to cloud networks connected to vSwitches, but Hetzner did not implement IPv6 for this feature (funny, as they recently announced IPv6-only servers…).
This simple setup still cost us about 130€ per month, and in the end the nodes were idle most of the time. Furthermore, Hetzner recently dropped their highly available (Ceph-based) cloud VMs, which were a breaking point for the concept, and they raised their prices for IPv4 addresses again (including for existing customers).
When I finally noticed that my Kubernetes cluster needed a major OS upgrade, that was the point for me to simply toss the existing setup and start elaborating a better solution.

The Idea

I always had an eye on the latest features of Hetzner's cloud platform. They added load balancers, firewalls, private networking (still IPv4-only), … the basic stuff that turns a provider into a real cloud provider, you know?

For me it’s been time to check if I can migrate all my workload from my dedicated servers into the cloud. The original plan was to cancel my two dedicated servers, consolidate some VMs into fewer ones on hetzner’s cloud.

Maths – Calculations for a good Provider Comparison

Of course you always have to look at the expected cost – especially with modern cloud providers, things can become quite expensive very fast when you have non-cloud-native workloads. So I created a quick table to calculate my expected cost, which shall help us with our cloud provider comparison. First of all, I defined my new VMs including their sizing. I ended up with something like this:

A table showing the calculations for the provider comparison for Hetzner Cloud

As you can see in this table, my calculation includes Hetzner's backup offerings. For further comparison we will take the ~86€ without those backup costs – we will likely use another backup solution for our final systems anyway. Also, the calculation might not be 100% accurate, because I think backing up the additional volume on Collab (the GitLab server) will cost extra.

Other Options

I also checked a few other options, like moving our Kubernetes workloads to AWS, GKE, …, and also OVH and other server providers.

One of these providers showed very good pricing. Hello netcup! I will now show you my second calculation, covering the same workloads – except the backup feature. Also, the load balancer has been removed and replaced by a floating IP (which would also work at Hetzner, saving us about 4€ – but more details in the next part).

A table showing the original calculations for the provider comparison for netcup
Server calculation at netcup with current prices including German VAT as of Feb. 2022

So at first glance, the price difference is minimal. We are talking about a 7€ difference, while Hetzner Cloud definitely has the better user interface and UX. Also, netcup's prices are only valid for subscriptions with a 12-month minimum contract period, while Hetzner allows per-hour billing.

Provider comparison results: Why netcup is still the better choice for me

If I saw this cloud provider comparison in its current state, honestly… I would say "go along with Hetzner". But of course there is a difference. netcup's VPS 2000 G9 has double the RAM of a CPX31 at Hetzner. The same goes for most of the "machines": almost every single VM on this list has more power than its counterpart in the Hetzner table. Local storage is also much bigger, which enables us to use persistent volumes in our Kubernetes environment. So in the end, you get a lot more bang for your buck there.

When we add our "think-outside-the-box" manner, we can violate a best practice and converge the control planes into the worker nodes (because those now have more than enough RAM). This is a security consideration, but as we don't allow third parties to manage workloads on the cluster, we know what is running on it. It's a risk I am willing to take.
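In practice, converging the control plane into the workers mostly means allowing regular pods to be scheduled on the control-plane nodes. On a kubeadm-style cluster, this is typically done by removing the corresponding taint (older clusters use node-role.kubernetes.io/master instead):

kubectl taint nodes --all node-role.kubernetes.io/control-plane-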

You might further have noticed that I converged the Nextcloud instance, which was planned as a StorageShare at Hetzner, into the Collab server – this is actually our current setup, and it is now possible because of the bigger disk and RAM sizes at netcup.

So, with everything put back into the table, we actually end up with a saving of about 19€/month (that's 228€ per year!) while still having more resources available for our workloads. Isn't that awesome?

A table showing the modified calculations for the provider comparison for netcup

Completing the concept – Adding backups

Still, we need a backup strategy. We did not honor this in our cloud provider comparison, as that cost is quite identical for all solutions in the end. Right now, I have not decided which way we will go here. Hetzner recently changed their storage box lineup to provide more storage for less money. Unfortunately, they still do not support NFS… We might want to use Proxmox Backup Server (PBS) for our backups, as we have had quite good experience with it, and the pbs-client also allows us to back up "foreign" VMs. PBS lets us do a complete partition dump from inside the VM – the other way round, we can quickly restore a full VM from a live Linux system, or just a few files locally if required. It also does a good job at deduplication – and I mean not only per VM!
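For illustration, backing up the root filesystem from inside a VM with the pbs-client looks roughly like this – user, hostname, and datastore are placeholders:

proxmox-backup-client backup root.pxar:/ --repository backup@pbs@pbs.example.com:datastore1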

Proxmox Backup Server has minimum requirements of 2+ cores and 2 GB RAM. We currently need about 450GB of backup storage, but I expect that to grow a bit, as from now on we will back up filesystems instead of images, which is a bit less efficient. I see three options:

  1. All-Hetzner classic solution
    • StorageBox 1TB (BX11)
    • CloudVM CPX11
    • Total cost: 8,20€/month
    • Implication: Backup traffic goes directly from our prod VMs to external IPs
  2. All-netcup
    • S 1000 G7 (1,5 TB, only 1 vCore)
    • Total cost: 15,99€/month
    • Implication: We violate the minimum requirements for PBS
  3. Hybrid
    • Hetzner StorageBox 1TB (BX11)
    • netcup VPS 500 G8
    • Total cost: 8,74€/month
    • Implication: Latency between PBS and Storage can cause trouble

There is another option: Borg Backup

Borg does not allow us to back up (and restore) the full VM, but it is perfect for backing up individual files or folders containing the required data.

At this point it is a decision: do we want (file system) "snapshots" or data backups? Let's check the pricing. It is as simple as one Hetzner StorageBox BX11 for 3,45€/month, because BorgBackup is natively supported.
We still stick with Hetzner here, as their storage boxes are the cheapest backup solution and I am a fan of having last-resort backups somewhere else… even if it's the same city in the end 😉 – but we could also send our storage box to Finland.
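A Borg setup against a Hetzner storage box could look roughly like this; the uXXXXXX account is a placeholder, and the storage boxes speak SSH on port 23:

borg init --encryption=repokey ssh://uXXXXXX@uXXXXXX.your-storagebox.de:23/./backups
borg create 'ssh://uXXXXXX@uXXXXXX.your-storagebox.de:23/./backups::{hostname}-{now}' /etc /home /var/backups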

The big difference…

…is that PBS allows us to quickly restore the system state of our VM. All we have to do (at least in theory) is set up a new VM (or use the old one), spin up the recovery system, install the PBS client, and restore the root partition (and probably all other partitions) to the mounted disk.
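Sketched with the pbs-client, such a restore from a live system could look like this (snapshot name, target path, and repository are placeholders):

proxmox-backup-client restore host/collab/2022-02-01T03:00:00Z root.pxar /mnt/restore --repository backup@pbs@pbs.example.com:datastore1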

With Borg, we will still have to install an OS and all the software, and then restore all data folders, config files, etc. using Borg. Basically the same as setting up our system from scratch, plus the data restore.

Next steps…

At this point, I have (had… this article was actually written months ago) not yet decided which backup strategy to go for. What would you do?

In the next part, we will start setting up our VMs on netcup. Want to give it a try? I would appreciate it if you used one of the referral links contained in this cloud provider comparison to support me and Firesplash Entertainment. Also, for exploring netcup, I have a 5€ voucher for new customers for you: 36nc16447952840

Upcoming articles in this series will be published with some delay, depending on when I find the time to actually write them…