Or maybe it requires genius to be this productive?
I'm just amazed at what some people are able to build. Especially open source. Does he make money from ffmpeg? Or just do it as a side gig?
Reminds me of the photopea guy. He has basically built a browser based clone of photoshop. Something that took Adobe thousands of engineers and decades to build up over time.
It's not true in every case, but the efficiency of solo developers and tiny teams is a massive advantage over huge teams crippled by corporate bureaucracy and layers of management. Even though the latter have astronomical budgets compared with the former, that doesn't actually guarantee better software.
Fabrice Bellard started FFmpeg, but doesn't work on it anymore. He also started a few other little projects, including, most famously, QEMU.
# Read the same file twice; -itsoffset delays the second input's timestamps.
# Video (stream 0) comes from the first input, audio (stream 1) from the
# delayed second input, both copied without re-encoding.
ffmpeg \
-i "$input" -itsoffset "$offset" \
-i "$input" \
-map 0:0 \
-map 1:1 \
-acodec copy \
-vcodec copy \
"$output"

# Batch example: mux the matching .srt subtitle file into every .mkv found.
find . -name "*.mkv" -exec ffmpeg -i "{}" -i "{}.srt" -map 0 -map 1 -c copy "{}_merged.mkv" \;

The video had a slight delay which I didn’t quite notice till after I’d recorded it.
I basically wrote a quick script to generate variations of the recorded movies with delays from 50ms to 500ms in 50ms increments and then just watched each of them to see which appeared most correct for his hands playing the keys and the sound of the notes. It was slightly over 200ms and less than 250ms. 210ms seemed the best.
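A script along those lines might look like this. The filename and the stream layout are assumptions, not the original script; only the 50 ms to 500 ms sweep in 50 ms steps comes from the description above:

```shell
#!/bin/sh
# Sketch: generate one copy of the recording per candidate audio delay.
# "recording.mov" and its stream layout (video first, audio second) are
# assumptions -- adjust for your own files.
input="recording.mov"

for ms in $(seq 50 50 500); do
  offset=$(awk "BEGIN { printf \"%.3f\", $ms / 1000 }")  # ms -> seconds
  if [ -f "$input" ]; then
    # Read the file twice: video from the first copy, audio from the second
    # copy shifted by $offset seconds, with no re-encoding.
    ffmpeg -y -i "$input" -itsoffset "$offset" -i "$input" \
      -map 0:v:0 -map 1:a:0 -c copy "delayed_${ms}ms.mov"
  fi
done
```

Watching the `delayed_*.mov` outputs side by side is then just a matter of comparing hands against notes, as described above.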
I have now learned the importance of a clapperboard.
(Originally I was going to record the audio in GarageBand and the video on my phone and join them later, but I was learning as I went and I wanted to see if I could record the two streams together.)
Video is hard. It's a lot of data that needs to be read, processed, and displayed under tight time constraints and needs to be synchronized with associated audio playback.
This is all made more challenging if you're trying to do it on storage-, bandwidth-, memory-, and processing-constrained consumer hardware. Compression largely solved the storage and bandwidth problems, but not processing: higher-fidelity codecs needed a lot more processing power to decode.
In the early 90s you had MPEG-1 at the high end of the quality scale, but it was so processor-intensive that consumer hardware needed decoder ASICs just to play it back. Then you had codecs like Cinepak that were far less processor-intensive but middling quality. Then you had much lower-fidelity codecs like Microsoft's Video1, Apple Video, and even Smacker, which had very low decoding requirements but didn't look great.
Network delivery of any of those on consumer hardware was a pipe dream when 14.4k modems were still rare and 9.6k ones were common. The H.261 codec, which MPEG-1 was based on, had a minimum bitrate of 64 kbps, which was out of reach for pretty much everyone.
Besides its hardware decoding requirements, MPEG-1 was entirely unsuited for editing. Both QuickTime and Video for Windows were meant for editing on consumer desktop machines. The codecs they supported were meant for editing and then delivery (typically on CD-ROM).
In the mid-to-late 90s, processing power had advanced to the point that MPEG-1 and H.261/H.263 could be decoded in real time in software. RealVideo and Sorenson Video 1 were both based on drafts of the H.263 spec, which included video conferencing over POTS connections in its design criteria.
Again I'm not seeing dark times for digital video. There were lots of codecs because they had different uses, limitations, and strengths. The h.26x codecs were designed for video conferencing and it was Real and to a lesser extent Apple that realized they were also useful for streaming over the internet. Both MPEG-1/2 were unsuited for streaming as they didn't support variable frame rates and handled low dial-up bitrates poorly at best. It wasn't until the MPEG-4 überspec that internet streaming, video conferencing, and disc-based delivery settled under a single specification.
While ffmpeg is an amazing project and widely used, it didn't really do anything to settle the proliferation of codecs and containers. It really was MPEG-4 that allowed for that to happen to the extent it's happened.
One I know is https://handbrake.fr
Are there more?
You can add comments to the end of tricky commands with the # character. Then at a later date you can search for commands using the contents of those comments, using something like C-r.
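For example, with an `echo` standing in for a real ffmpeg invocation (the tag text "sync-fix" is made up for illustration):

```shell
# A trailing "# ..." comment is ignored by the shell but saved in history,
# so reverse-i-search (C-r) can later recall the full command by the tag.
# A real ffmpeg one-liner would go where this echo stand-in is.
echo "offset applied"  # sync-fix: the 210ms -itsoffset audio-delay trick
```

Later, pressing C-r and typing `sync-fix` pulls the whole command back out of history.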
No affiliation -- found it while searching for browser-based ports of FFmpeg.
And I truly appreciate people who create useful software and open-source it.
https://multimedia.cx/eggs/googles-youtube-uses-ffmpeg/
It wouldn't be too surprising if they still did.
But the ffmpeg processes are surely sandboxed with seccomp or similar, so it probably does not matter at all.
Literally the hacker would upload a video, wait for it to encode, and then once it was available for viewing on the website, they'd be looking at a video containing the text from `/etc/passwd` or your envvars or some secrets file or whatever.
Yes, most encoding services are very well sandboxed. Even when our tiny streaming platform at the time got hit by this when it first appeared a few years ago, it was a non-issue, because there was nothing valuable or compromising on the encode servers for them to read. (I think Ubuntu's AppArmor stopped it dead in its tracks on its own, anyway.)
[0] https://docs.google.com/presentation/d/1yqWy_aE3dQNXAhW8kxMx...
(quite likely, those are all actually virtual through some auto-scaling IaaS magic)
Edit: just realized mencoder seems to be still alive and kicking. I thought it was lost in the transition from mplayer starting to use ffmpeg code to mpv... I'll have to take a look at mencoder and see if it's as easy as I remember :)
It seems to have done a great job of keeping up to date, with things like GPUs and shader language and whatnot. I guess that's because he designed a good extension mechanism.
I'm grateful for it.
A big Thank You to any and all who have brought FFMPEG to where it is!
It's hilarious how my introduction to this taught me so much in terms of codecs and compression.
All to just share fansubbed anime.
A couple of highlights from my career using it over the years. The first was when I was working in production and post-production for a small studio that had a popular web series and was just about to transition to its first "big" shows, which would be produced for Hulu. This was when 6K raw video was just becoming a thing: we had over 50 TB of footage, GPU decoding was brand-new, Windows machines couldn't work practically with Apple ProRes, lots of challenges. I ended up building a system that did things like transcode raw footage into various formats automatically whenever the server noticed there was new footage, automatically collect and store the metadata from every shot somewhere we could centrally browse/search/filter it, etc. When it came time to deliver, it would automatically create various outputs for the web. We had to deliver ProRes masters in the end and had recently transitioned entirely to PCs. This was around when somebody successfully implemented a pretty good ProRes encoder for FFmpeg, so we were able to encode and deliver these huge ProRes outputs not only without needing a Mac, but entirely on our servers, no longer requiring someone's workstation to be hijacked for an entire day. It all may not seem too revolutionary, but there's no way we would've been able to work with the same efficiency, for the same cost, in the same timeframe otherwise.
A couple years later, at a new (now defunct) video platform with millions of videos and maybe 5 back-end engineers, FFmpeg allowed us to build our own service to encode all uploads into the many resolutions and formats required. Encoding services were (and still are) very expensive, but in just a couple weeks we had our own that ran on standard Ubuntu server instances, spinning up/down depending on load. Immense cost savings, and not tied to any particular company. Shortly thereafter, GPU instances were available from most cloud providers and `nvenc` was available in FFmpeg, so we were able to dramatically speed up the encode process with maybe a day of work by adding GPU encoding into the mix.
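A rough sketch of that kind of GPU-with-fallback encode step might look like the following. The encoder choice, filenames, and bitrate are assumptions for illustration, not the platform's actual pipeline:

```shell
#!/bin/sh
# Sketch: prefer NVENC when the local ffmpeg build exposes it, otherwise
# fall back to software x264. "upload.mp4"/"out_720p.mp4" are placeholders.
if ffmpeg -hide_banner -encoders 2>/dev/null | grep -q h264_nvenc; then
  enc=h264_nvenc   # hardware encode on the GPU
else
  enc=libx264      # software fallback
fi

if [ -f "upload.mp4" ]; then
  # Scale to 720p (width auto-computed, kept even), encode video with the
  # chosen encoder, and re-encode audio to AAC.
  ffmpeg -y -i upload.mp4 -vf scale=-2:720 -c:v "$enc" -b:v 2500k \
    -c:a aac "out_720p.mp4"
fi
```

Because the selection happens at runtime, the same script can run unchanged on both CPU-only and GPU instances, which is what makes the spin-up/spin-down model cheap.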
These may seem like pretty obvious possibilities now, but it cannot be overstated how insane it was, especially at the time, for tiny and/or cash-strapped teams to be able to do all of this so easily, and that the tool at the crux of it all, FFmpeg, was completely free. Yes, FFmpeg can be a pain in the ass to figure out, and it's easy these days to take it for granted, but in my opinion it has been truly revolutionary.
[1] https://www.youtube.com/watch?v=LsF5bHRxC_M [2] https://blog.twitch.tv/en/2017/10/10/live-video-transmuxing-...