The article touches on DFA-based engines and notes that they may not be the fastest in all use cases. To expand on that: generally, the reason is that the automaton you get from the regular expression has too many states. This can happen, for example, with very large counters, or when the state space includes a product of multiple counters due to non-determinism between them (backtracking engines would struggle with the latter case too, though).
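To make the blow-up concrete, here's a toy sketch (not any real engine's code) for the classic pattern `.*a.{k}` — "an 'a' followed by exactly k more characters". The NFA has only k + 2 states, but determinizing it by subset construction yields 2^(k+1) states, because the DFA must remember which of the last k + 1 characters were 'a':

```python
def count_dfa_states(k, alphabet="ab"):
    """Determinize a hand-written NFA for ".*a.{k}" over {a, b}
    and count the reachable DFA states (= reachable NFA state sets)."""
    def step(states, ch):
        # NFA: state 0 loops on any char and also moves to 1 on 'a';
        # states 1..k advance on any char; k+1 accepts, no out-edges.
        nxt = set()
        for s in states:
            if s == 0:
                nxt.add(0)
                if ch == "a":
                    nxt.add(1)
            elif s <= k:
                nxt.add(s + 1)
        return frozenset(nxt)

    start = frozenset({0})
    seen = {start}
    work = [start]
    while work:           # BFS over reachable state sets
        cur = work.pop()
        for ch in alphabet:
            nxt = step(cur, ch)
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return len(seen)
```

So `count_dfa_states(10)` is already 2048, and a counter like `.{1000}` in that position would be hopeless for an eager DFA.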

Something an automata-based engine can do (and IIRC RE2 does) is, instead of constructing the full DFA, construct the much smaller non-deterministic finite automaton (NFA) and perform the subset construction on-the-fly while caching the states you've seen. However, this only helps if your input doesn't actually visit all those states. A malicious input could target your NFA engine to cause states to be constantly kicked out of the cache. The slowdown here is linear, though, so you're still better off than with a backtracking engine.
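A minimal sketch of that on-the-fly construction, with a deliberately crude bounded cache (the NFA here is a made-up example for "contains 'ab'"; RE2's real cache and eviction policy are more sophisticated than clearing everything):

```python
def nfa_match_lazy(step, start, accepting, text, cache_limit=64):
    """Lazy determinization: DFA transitions (state set, char) -> state set
    are computed on demand and memoized. If an adversarial input keeps
    producing novel state sets, the cache fills and gets flushed, and we
    recompute transitions -- a constant-factor slowdown, still linear."""
    cache = {}
    cur = start
    for ch in text:
        key = (cur, ch)
        if key not in cache:
            if len(cache) >= cache_limit:
                cache.clear()          # crude stand-in for eviction
            cache[key] = step(cur, ch)
        cur = cache[key]
    return bool(cur & accepting)

# Tiny example NFA recognising strings containing "ab":
# 0 loops on anything and moves to 1 on 'a'; 1 moves to 2 on 'b';
# 2 is accepting and loops on anything.
def step(states, ch):
    nxt = set()
    for s in states:
        if s == 0:
            nxt.add(0)
            if ch == "a":
                nxt.add(1)
        elif s == 1 and ch == "b":
            nxt.add(2)
        elif s == 2:
            nxt.add(2)
    return frozenset(nxt)
```

Even in the worst case every input character costs one NFA step, which is where the linear bound comes from.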

A project I'm working on is the Symbolic Regex Matcher (SRM) [1], which uses a symbolic variant of regex derivatives. This technique allows an often-minimal DFA to be constructed lazily. It is similar to NFA-based engines, but results in far fewer states, which helps with the cache issue. SRM can also handle some classes of counters natively in how the automaton is constructed, which can turn a blow-up that would be exponential in a DFA-based engine into a linear one.
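For a flavour of the derivative approach, here's a bare-bones Brzozowski-derivative matcher (not SRM's symbolic version — in particular it omits the normalization of terms that makes syntactically different derivatives coincide, which is where the near-minimal state count comes from):

```python
# Tiny regex AST as tuples:
# ("nul",) empty language, ("eps",) empty string, ("chr", c),
# ("cat", r, s), ("alt", r, s), ("star", r)

def nullable(r):
    """Does r accept the empty string?"""
    tag = r[0]
    if tag in ("eps", "star"):
        return True
    if tag in ("nul", "chr"):
        return False
    if tag == "alt":
        return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])   # cat

def deriv(r, c):
    """Brzozowski derivative: the language of r after consuming c."""
    tag = r[0]
    if tag in ("eps", "nul"):
        return ("nul",)
    if tag == "chr":
        return ("eps",) if r[1] == c else ("nul",)
    if tag == "alt":
        return ("alt", deriv(r[1], c), deriv(r[2], c))
    if tag == "star":
        return ("cat", deriv(r[1], c), r)
    left = ("cat", deriv(r[1], c), r[2])       # cat
    return ("alt", left, deriv(r[2], c)) if nullable(r[1]) else left

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)
```

A derivative-based engine memoizes normalized derivatives as DFA states; each distinct derivative is one state, built lazily only when the input actually reaches it.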

[1]: https://github.com/AutomataDotNet/srm

An interesting project would be to have a go at automatically generating that malicious input. If I remember correctly, RE2 has heuristics to check whether construction is "still worth it"; a particularly nasty input would stay just on the verge of "worth doing construction" while costing as much as possible (if you introduce too many novel states, RE2 stops doing the construction at all).
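A toy version of that generator might greedily pick, at each position, a character that drives the lazy determinizer into a state set it hasn't built yet, maximizing cache churn. This ignores RE2's actual heuristics entirely, and the target NFA below (for ".*a.{4}") is a made-up example:

```python
def adversarial_input(step, start, alphabet, length):
    """Greedy sketch: prefer characters that reach a novel NFA state
    set, so an on-the-fly determinizer keeps building new states."""
    seen = {start}
    cur = start
    out = []
    for _ in range(length):
        chosen = None
        for ch in alphabet:
            nxt = step(cur, ch)
            if nxt not in seen:
                chosen, cur = ch, nxt
                break
        if chosen is None:                 # no novel set this step
            chosen = alphabet[0]
            cur = step(cur, chosen)
        seen.add(cur)
        out.append(chosen)
    return "".join(out)

# Target: NFA for ".*a.{4}" over {a, b}.
K = 4
def step(states, ch):
    nxt = set()
    for s in states:
        if s == 0:
            nxt.add(0)
            if ch == "a":
                nxt.add(1)
        elif s <= K:
            nxt.add(s + 1)
    return frozenset(nxt)
```

A real attack would additionally have to model the engine's cache size and "still worth it" thresholds, which is what makes it an interesting project rather than a one-liner.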

DFA-based engines are often relatively simple under the hood, and backtrackers (like modern JIT'd libpcre) may outrace them in practice because they are heavily optimized.

Hyperscan (https://github.com/intel/hyperscan), a project I worked on for many years, avoids the potential DoS attacks on backtrackers, and the ones RE2 is open to, but it surely has its own weak spots (notably, it assumes literal factors of the regular expression are rare in the input). We did build some mitigations for that, but fundamentally I bet Hyperscan could be made to slow down in a similar (linear) way to RE2.
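A toy model of that trade-off (an illustration of the general literal-prefilter idea, not Hyperscan's actual architecture): scan cheaply for a literal factor of the pattern and run the full regex only where it occurs. An input dense in that literal forces the slow path at every occurrence — still linear, but with a much worse constant:

```python
import re

def prefiltered_search(pattern, literal, text):
    """Literal-prefilter sketch: find `literal` (cheap and
    vectorizable in a real engine), verify with the full regex only
    at hit positions. Returns (match start or None, verify count)."""
    rx = re.compile(pattern)
    verifications = 0
    pos = text.find(literal)
    while pos != -1:
        verifications += 1                 # expensive path fires
        m = rx.match(text, pos)
        if m:
            return m.start(), verifications
        pos = text.find(literal, pos + 1)
    return None, verifications
```

On benign input where "foo" is rare, `prefiltered_search(r"foo\d{4}bar", "foo", ...)` verifies once; on input that is nothing but "foo" repeated, it verifies at every occurrence without ever matching.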