What does HackerNews think of ffmpeg-libav-tutorial?

FFmpeg libav tutorial - learn how media works from basic to transmuxing, transcoding and more

Language: C

This tutorial is very outdated, e.g. `AVPicture`, which is used throughout, has been deprecated and removed from the library entirely, so you will hit compiler and linker errors trying to follow along and will have to replace every `avpicture` call the tutorial makes. Likewise, `avcodec_decode_video2` is deprecated; you have to use `avcodec_send_packet` and `avcodec_receive_frame` instead.

The FFmpeg libraries are possibly the worst thing I have ever worked with in my life. I have never been more afraid to use a library than this. The FFmpeg libraries routinely break their own API, so if you find an answer from a few years ago, chances are it's useless. Want to free an AVPacket? `av_free_packet` is deprecated. You can use `av_packet_unref` (but there's _also_ a function named `av_packet_free` that doesn't do quite the same thing).

Most questions on Stack Overflow or related platforms have no replies, and the library is not well-documented, so you always end up reading the massive source code alongside the API reference. Some things are outright undocumented, missing from the API reference entirely, and require digging through often-unfinished mailing-list threads with no resolution. The FFmpeg libraries do very little error handling, which means if you're a bad C developer like me you're forced to recompile FFmpeg with debugging information unstripped so you can trace segfaults in gdb.

https://github.com/leandromoreira/ffmpeg-libav-tutorial/ is a better and more up-to-date tutorial than this one.

Good overview of all the parts involved! I was hoping they’d talk a little more about the timing aspects, and keeping audio and video in sync during playback.

What I’ve learned from working on a video editor is that “keeping a/v in sync” is… sort of a misnomer? Or anyway, it sounds very “active”, like you’d have to line up all the frames and carefully set timers to play them or something.

But in practice, the audio and video frames are interleaved in the file, and they naturally come out in order (ish - see replies). The audio plays at a known rate (like 44.1 kHz), and every frame of audio and video has a "presentation timestamp"; these timestamps (are supposed to) line up between the streams.

So you’ve got the audio and video both coming out of the file at way-faster-than-realtime (ideally), and then the syncing ends up being more like: let the audio play, and hold back the next video frame until it’s time to show it. The audio updates a “clock” as it plays (with each audio frame’s timestamp), and a separate loop watches the clock until the next video frame’s time is up.
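In code, that "audio is the master clock" idea can be this small. A hypothetical sketch in plain C (illustrative names, not libav API; ffplay.c's real version adds drift thresholds and frame dropping):

```c
#include <stdbool.h>

/* Shared clock: seconds of audio handed to the device so far. */
static double audio_clock;

/* Audio path: after each chunk plays, advance the clock to the end
 * of that chunk (its presentation timestamp plus its duration). */
static void on_audio_played(double pts, double duration)
{
    audio_clock = pts + duration;
}

/* Video loop: hold the next frame back until its timestamp is due. */
static bool video_frame_due(double video_pts)
{
    return video_pts <= audio_clock;
}
```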

There seems to be surprisingly little material out there on this stuff, but the most helpful I found was the “Build a video editor in 1000 lines” tutorial [0] along with this spinoff [1], in conjunction with a few hours spent poring over the ffplay.c code trying to figure out how it works.

0: http://dranger.com/ffmpeg/

1: https://github.com/leandromoreira/ffmpeg-libav-tutorial

I think you're supposed to read the header files? I have no idea how people write ffmpeg stuff. The only good tutorial I've seen is: https://github.com/leandromoreira/ffmpeg-libav-tutorial