So my second laptop is far too weak to run OBS Studio and my custom web-based music player at the same time. Its OpenGL implementation is too old for OBS Studio to run at all, and using FFmpeg to grab my screen directly and record audio doesn't work either, because I'm running Wayland and PipeWire, which FFmpeg does not support yet. Besides, what would YOU see if I were to stream my laptop's screen? Just a bland KDE Plasma desktop. So I had to do something more extreme.
I made a C++ program from scratch that generates the video and audio streams in real time and pipes them to FFmpeg for encoding. This is a test to see whether:
1. nothing breaks even when it runs for a long time
2. the audio and video do not drift out of sync over a long run
There are only two test songs for now: "Melolyn" and "Plum Rain" by TQ-Jam. Since this is just a test, more songs will be added once the stream becomes official.
The songs are stored as lossy WavPack files and decoded to lossless 16-bit stereo PCM before being sent to FFmpeg. Why lossy WavPack? Because in my experience its artifacts are less audible after re-encoding to another lossy codec than those of other lossy formats. The video is 720p, a safe bet for my trash laptop, and is also passed to FFmpeg losslessly, as raw RGB.
FFmpeg encodes the stream for YouTube as H.264 (CRF 16) in YUV420P; the audio is 16-bit stereo PCM.
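An invocation with those settings might look roughly like this. To be clear, this is my guess at the shape of the command, not the author's actual one: the pipe layout, audio source, sample rate, and the `STREAM_KEY` placeholder are all assumptions.

```shell
# Hypothetical sketch: raw RGB frames arrive on stdin, raw PCM from a second
# input; both are uncompressed, matching the "lossless to FFmpeg" handoff.
ffmpeg \
  -f rawvideo -pixel_format rgb24 -video_size 1280x720 -framerate 30 -i pipe:0 \
  -f s16le -ar 44100 -ac 2 -i audio.pcm \
  -c:v libx264 -crf 16 -pix_fmt yuv420p \
  -c:a pcm_s16le \
  -f flv rtmp://a.rtmp.youtube.com/live2/STREAM_KEY
```

The key idea is that FFmpeg never sees a compressed intermediate: the generator owns the content, and FFmpeg only does the final lossy encode.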