dav2d is the fastest AV2 decoder on all platforms :)
Targeted to be small, portable and very fast.
If you're out of the loop like me: AV2 is the next-generation video coding specification from the Alliance for Open Media (AOMedia). Building on the foundation of AV1, AV2 is engineered to provide superior compression efficiency, enabling high-quality video delivery at significantly lower bitrates. It is optimized for the evolving demands of streaming, broadcasting, and real-time video conferencing.
- from https://av2.aomedia.org/

Oh no. Not another one. I presume this one makes lossy better, or faster, or both.
https://www.sisvel.com/insights/av2-is-coming-sisvel-is-prep...
yep
Aesthetics over function; style over substance. If that's their web design policy it's likely their policy in all other aspects.
I'm also not sure that they're aware that intellectual property rights no longer exist in the US. If AV2 was vibe coded, there would be no case.
…for copyright. Not for anything else. Patents would still apply.
Otherwise it was under a constant DDoS by the AI bots.
For instance: MCP, static sites that are easy to scale, or a cache in front of a dynamic site engine.
Our documentation and main website are not fronted by this protection, so they're still accessible to the scrapers.
What am I missing that explains the gap between this and “constant DDoS” of the site?
Even when the amount of AI requests isn't that high (generally hundreds per second at most for our services combined), that's still a load that causes issues for legitimate users/developers. We've seen it grow from somewhat reasonable to pretty much 99% of the responses we serve.
Can it be solved by throwing more hardware at the problem? Sure. But it's not sustainable, and the reasonable approach in our case is to filter off the parasitic traffic.
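For what it's worth, the cheapest first pass at that filtering is matching announced crawler User-Agents before reaching for rate limits or proof-of-work walls. A minimal sketch in Rust, assuming an application-layer hook; the denylist entries are illustrative examples, not an authoritative list, and plenty of scrapers don't announce themselves at all:

```rust
/// Hypothetical User-Agent denylist check; a real deployment would
/// pair this with per-IP rate limiting, since stealth scrapers send
/// browser-like UAs.
fn is_ai_scraper(user_agent: &str) -> bool {
    const BOT_MARKERS: &[&str] = &["gptbot", "claudebot", "ccbot", "bytespider"];
    let ua = user_agent.to_ascii_lowercase();
    BOT_MARKERS.iter().any(|marker| ua.contains(marker))
}

fn main() {
    let requests = [
        "Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)",
        "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0",
    ];
    for ua in requests {
        println!("blocked={} for {}", is_ai_scraper(ua), ua);
    }
}
```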
- AI scrapers will pull a bunch of docs from many sites in parallel (so instead of a human request where someone picks a single Google result, it hits a bunch of sites)
- AI will crawl the site looking for the correct answer, which may hit a handful of pages
- AI sends requests in quick succession (big bursts instead of small trickle over longer time)
- Personal assistants may crawl the site repeatedly scraping everything (we saw a fair bit of this at work, they announced themselves with user agents)
- At work (b2b SaaS webapp) we also found that the personal assistant variety tended to hammer really computationally expensive data export and reporting endpoints, generally without filters. While our app technically supported it, it was very inorganic traffic
That said, I don't think the solution is blanket blocks. Really, it's exposing sites that are poorly optimized for emerging technology.
I think the world gains more if the VideoLAN team focuses on their amazing, free contribution to the world than if they spend the same time trying to figure out how to save you two clicks.
We all hate that this is happening, but you don't need to attack everyone that is unfortunately caught up in it.
If you have discovered such an option, you could get very wealthy: minimizing friction for humans in e-commerce is valuable. If you're a drive-by critic not vested in the project, then yours is an instance of talk being cheap.
Keep in mind that those kinds of services:
- should not be MITMed by CDNs
- are generally run by volunteers with zero budget, money- and time-wise
It is incredibly annoying, but what can you do? AI scrapers ruined the web.
That being said, so many of the plebs suck. Like 2% will ruin everything for everyone.
It's rarely been the citizens that have been the problem, but the governments and companies that seek to use the network connection for their overwhelming benefit.
Re (above):
> Not on topic, but wow the internet has very quickly devolved into: click -> "making sure you're not a bot", click -> "making sure you're a human", click -> "COOKIES COOKIES COOKIES", click -> "cloudflare something something"
Then I press the X to close the all-caps banner commanding me to install the app, upon which I get sent to the app store. Users of the website refer to it as an app.
High hardware prices, locked information sources, plenty of AI slop etc.
Dav1d was the surprisingly fast assembly implementation of AV1 decoding. Even for something hand-coded in platform-specific assembly, I think the general impression was that they'd done amazing work to really chase down every last bit of potential performance.
It didn't exist when AV1 first rolled out, and its arrival was a step change in driving adoption on devices without hardware decoding.
Dav2d is likely to play a similar role, but it exists from the start of AV2 and can build on the work of dav1d, so should have an even bigger effect.
In a weird reverse chicken and egg scenario, having really good software decode that can be deployed will spur on hardware development and adoption due to network effects.
Dav1d = "Dav1d is an AV1 Decoder"
Dav2d = "Dav2d is an AV2 Decoder"
Just like "GNU's Not Unix".

Wow, this GitLab instance looked so much cleaner/simpler and less clunky than in my past experiences! It also loaded really fast on the first page load as well as on subsequent actions.
What's the current state of Dolby trying to attack the AV1 ecosystem (Snapchat more specifically)? I hope there is an organized fight back by AOM against these trolls.
https://www.deb-multimedia.org/dists/unstable/main/binary-am...
... it says "fast and small AV1 video stream decoder"
... should probably be "AV2" ?
Happy that AV2 decoding is already here.
:)
One day in the mysterious future the AV3 decoder will be dav3d.
https://en.wikipedia.org/wiki/Dangerous_Dave_in_the_Haunted_...
I wonder, if Rust had an effects system, whether a Jasmin MIR transform (i.e. like SPIR-V is for shaders) would be useful?
C compilers, Rust compilers, and assemblers are all deterministic.
Modern compilers are extremely clever and will produce machine code that takes full advantage of modern CPU branch predictors, and reorder instructions to better take advantage of pipelining. This in itself will make the same code run at different speeds depending on the input data.
Then there is the whole issue of compiler version roulette. As a developer you have no idea which versions of compilers your users and distros will use, and what new and wonderful optimisations they will bring.
Within a version, yes, but not across versions. Different versions of GCC/Clang etc. can give you completely different code.
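To make the roulette concrete: whether (and how well) a loop like the toy reduction below gets unrolled or autovectorized depends on the rustc/LLVM or GCC/Clang version and target flags on the user's machine, none of which the developer controls. Pinning hot paths in hand-written assembly, as dav1d does, removes that variance. A hedged illustration, not dav1d code:

```rust
/// Toy reduction: identical source, but the emitted machine code
/// (scalar loop, unrolled loop, or SIMD) varies by compiler version
/// and flags such as -C opt-level / -C target-cpu.
pub fn sum_bytes(xs: &[u8]) -> u64 {
    xs.iter().map(|&x| u64::from(x)).sum()
}

fn main() {
    let data = vec![1u8; 1 << 20];
    // Compare the assembly across compiler versions with a tool like
    // `cargo asm` or godbolt.org to see the divergence firsthand.
    println!("sum = {}", sum_bytes(&data));
}
```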
However, the container/extractor should absolutely be in a memory-safe language, and that's where a lot of the exploits/crashes are, too, as metadata is fuzzier.
As a practical example of this, see something like CrabbyAVIF: all the parser code is Rust, but it delegates to dav1d for the actual codec portion.
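A sketch of that split, under stated assumptions: the attacker-facing container/metadata parsing stays in safe, bounds-checked Rust, and only the opaque compressed payload crosses an FFI boundary into the native codec. The toy box format and names here are illustrative, not CrabbyAVIF's actual code or dav1d's real API:

```rust
/// Safe Rust owns the container layer: a toy ISO-BMFF-style box with
/// a 4-byte big-endian size (including this 8-byte header), a 4-byte
/// type tag, then the payload. Bounds come from checked slicing,
/// never from trusting the file.
fn parse_box(input: &[u8]) -> Option<([u8; 4], &[u8])> {
    let size = u32::from_be_bytes(input.get(0..4)?.try_into().ok()?) as usize;
    let tag: [u8; 4] = input.get(4..8)?.try_into().ok()?;
    let payload = input.get(8..size)?; // None if the size field lies
    Some((tag, payload))
}

fn main() {
    // A well-formed 16-byte box with an 8-byte payload...
    let mut file = Vec::new();
    file.extend_from_slice(&16u32.to_be_bytes());
    file.extend_from_slice(b"mdat");
    file.extend_from_slice(&[0xAB; 8]);
    // ...and a truncated copy that claims more data than it carries.
    let truncated = &file[..12];

    for input in [&file[..], truncated] {
        match parse_box(input) {
            // Only at this point would the payload cross into the C
            // codec (an unsafe dav1d/dav2d call in a real wrapper).
            Some((tag, payload)) => println!(
                "tag={} payload={}B -> hand off to native decoder",
                String::from_utf8_lossy(&tag),
                payload.len()
            ),
            None => println!("malformed box rejected in safe code"),
        }
    }
}
```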
Compare the number of CVEs against x264 (included decoders don't count!) and FFmpeg's H.264 decoder.
There are other memory-safe languages, and there's formal verification.
E.g. seL4 favors Pancake.
Really? How many codecs have your neighbors contributed money toward developing? Just curious.