What does HackerNews think of Av1an?

Cross-platform command-line AV1 / VP9 / HEVC / H264 encoding framework with per scene quality encoding

Language: Rust

#30 in Docker
#61 in Python
#55 in Rust
The hardware encoders are very fast and generally better than x264, though the gap is smaller than you'd expect against x264's slow preset.

In addition, there are fast threaded AV1 encoders you may be overlooking, like SVT-AV1. For non-realtime, my favorite is av1an, which also yields better quality than is possible from aomenc alone and works with pretty much any encoder/codec: https://github.com/master-of-zen/Av1an

AFAIK state of the art is using fast presets of SVT-AV1, combined with Av1an [0] for parallelism.

[0] https://github.com/master-of-zen/Av1an

For anyone actually wanting to do this, Av1an handles all of it automatically for you.

https://github.com/master-of-zen/Av1an

Most of the new encoding tools split videos by scene and then run parallel encodes from there, such as av1an.

For a decently sized video (say a TV episode) there are usually around 100 split points to divvy up among the encoders.

https://github.com/master-of-zen/Av1an
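The split-then-encode idea can be sketched roughly like this (a simplified illustration, not Av1an's actual implementation; the scene timestamps, chunk filenames, and CRF value are assumptions): given a list of scene-change timestamps, build one independent ffmpeg + SVT-AV1 job per scene, so the jobs can run in parallel and the chunks be concatenated afterwards.

```python
# Build one ffmpeg+libsvtav1 command per scene from a list of cut
# points. Each command is an independent job, so all of them can be
# dispatched to a worker pool and run concurrently.

def scene_commands(src, scene_times, crf=30):
    """scene_times: sorted cut points in seconds, e.g. [0.0, 4.2, ...]."""
    cmds = []
    for i, (start, end) in enumerate(zip(scene_times, scene_times[1:])):
        cmds.append([
            "ffmpeg", "-ss", str(start), "-to", str(end), "-i", src,
            "-c:v", "libsvtav1", "-crf", str(crf), "-an",
            f"chunk_{i:04d}.mkv",
        ])
    return cmds

cmds = scene_commands("episode.mkv", [0.0, 4.2, 9.8, 15.0])
# four cut points -> three scenes -> three independent encode jobs
```

Each command could then be handed to `subprocess.run` from a bounded worker pool, with the resulting chunks stitched back together by ffmpeg's concat demuxer.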

>The more threads you add into a pool, the more encode overhead you will experience, since every row of encode requires the upper right CTU block to complete before it can proceed.

Nowadays, we don't have to worry about that, since we can use scene-based encoding instead. [1][2] In addition to allowing for full use of all cores, without any loss of encoding or processing efficiency, different encoding settings may be chosen for each scene, increasing potential efficiency.

[1] https://netflixtechblog.com/optimized-shot-based-encodes-for...

[2] https://github.com/master-of-zen/Av1an
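The per-scene-settings point can be made concrete with a toy heuristic (entirely hypothetical; neither Netflix's nor Av1an's actual model): once encoding is scene-based, each scene can get its own rate-control setting, for example spending more bits (a lower CRF) on visually complex scenes.

```python
# Map a 0..1 per-scene complexity score to a CRF value: complex
# scenes get a lower (higher-quality) CRF, simple scenes a higher one.
# The base CRF and span are made-up knobs for illustration.

def crf_for_scene(complexity, base_crf=32, span=8):
    complexity = min(max(complexity, 0.0), 1.0)  # clamp to [0, 1]
    return round(base_crf + span / 2 - span * complexity)

print([crf_for_scene(c) for c in (0.1, 0.5, 0.9)])  # [35, 32, 29]
```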

https://github.com/master-of-zen/Av1an just required it (nightly Rust). Not technically a library, but it's on my mind.

Rocket still uses nightly https://rocket.rs/v0.4/guide/getting-started/

Not sure about others, but those are the two I think of. It usually comes up when I start playing with Rust and want to do X; the last few times I've done that, the nightly requirement has hit.

You have to be smart about how you process the video.

Yes, if you just use AOM directly you'll have a hard time saturating the cores with work.

However, if you split your video up into scenes and start an AOM instance per scene, it becomes trivial to outperform the 12 core Ryzen with a 128 core machine. The main bottleneck becomes memory bandwidth.

This is what the av1an project is doing https://github.com/master-of-zen/Av1an

All that said, you need videos long enough for there to be a benefit here. The Ryzen will still win if you are talking about making 10 second av1 gifs.
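The one-instance-per-scene fan-out described above can be sketched as follows (the scene count, worker count, and encode stub are illustrative, not Av1an's API): run one encoder job per scene, bounded by a worker pool. Since each real job would just be a child `aomenc`/ffmpeg process that the parent waits on, threads are enough to drive it.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_scene(scene_id):
    # Stand-in for subprocess.run(["aomenc", ...]) on one scene chunk.
    return f"chunk_{scene_id:04d}.ivf"

scenes = range(100)  # ~100 split points for a TV episode
with ThreadPoolExecutor(max_workers=8) as pool:
    # One job per scene; the pool keeps at most 8 encoders running.
    outputs = list(pool.map(encode_scene, scenes))

print(outputs[:2])  # ['chunk_0000.ivf', 'chunk_0001.ivf']
```

This is also why short clips don't benefit: a 10-second video may only yield a handful of scenes, leaving most of a 128-core machine idle.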

I have been interested in this and to my surprise I was able to find just a single project that does something similar: https://github.com/master-of-zen/Av1an

It can do trial compression (with scene splitting) and evaluate the quality loss against a desired target.
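A trial-compression loop like that can be sketched as a search over the rate-control knob (a hedged illustration, not Av1an's implementation; the probe function here is a mock standing in for a real trial encode plus a metric such as VMAF): keep raising the CRF as long as the measured quality still meets the target.

```python
def target_quality_crf(score_at, target, lo=20, hi=50):
    """Binary-search the highest CRF whose quality score still meets
    `target`. score_at(crf) is assumed monotonically decreasing."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if score_at(mid) >= target:
            best, lo = mid, mid + 1  # quality ok: try compressing harder
        else:
            hi = mid - 1             # too lossy: back off
    return best

mock_score = lambda crf: 100 - 1.2 * crf  # pretend quality-vs-CRF curve
print(target_quality_crf(mock_score, target=60))
```

In practice `score_at` would run a short trial encode of a few frames from the scene and score it, which is far cheaper than encoding the whole scene at every candidate setting.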