Looks like a fun project. I should get back into FPGA tinkering. Last time I played with one was about 20 years ago. I wonder if the development environment has improved since then?

Yes and no. Much heavier footprint, mostly the same proprietary bs, somewhat faster, and much better debugging/simulation tools. Languages have improved somewhat, and as long as you stay within an ecosystem and use the vendor-supplied tools and software you should be mostly ok; stray outside of that (for instance: open source toolchains for cutting edge FPGAs) and you'll be in for a world of trouble. The fabrics got a lot larger, there are more (and more interesting) building blocks to play with, and switching speeds are higher.

Someone who is more active in this field may have a more accurate and broader view than I do.

https://symbiflow.github.io/

Is one of the most recent and - for me - most significant developments. Note that for companies that use FPGAs none of the above is considered a hurdle (though their engineers may have a different opinion), and that the hobbyist/hacker market for FPGAs is so insignificant compared to the professional one that the vendors do not care about catering to it.

I think there have been a lot of major developments in the last 20 years, although I'm not active in the field. Symbiflow is largely a distribution of yosys, a bunch of other IceStorm projects, and nextpnr (is that part of IceStorm?), in the same sense that Debian is a distribution of Linux. Another one, but I think limited to Lattice FPGAs, is https://github.com/FPGAwars/apio.
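To give a flavour of what that open flow looks like, here's a rough sketch of pushing a trivial design through yosys, nextpnr and the IceStorm tools for a Lattice iCE40 part. The target device, pin constraints, clock frequency and file names are all placeholders for whatever board you actually have, not anything specific to Symbiflow:

    // blink.v - trivial design, just to have something to synthesize.
    // The 12 MHz clock assumption and counter width are arbitrary.
    module blink (
        input  wire clk,
        output wire led
    );
        reg [23:0] counter = 0;

        always @(posedge clk)
            counter <= counter + 1'b1;

        assign led = counter[23];   // MSB toggles at a human-visible rate
    endmodule

    // Typical open-toolchain invocation (device/package flags are board-specific):
    //   yosys -p "synth_ice40 -top blink -json blink.json" blink.v
    //   nextpnr-ice40 --hx8k --json blink.json --pcf blink.pcf --asc blink.asc
    //   icepack blink.asc blink.bin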

I think the biggest development, though, is that there's enormously more off-the-shelf Verilog and VHDL, not just on OpenCores like 20 years ago, but also on GitLab, GitHub, and so on. Easy examples are CPUs like James Bowman's J1a, the VexRiscv design used in Bunnie's Precursor: https://github.com/SpinalHDL/VexRiscv (as little as 504 Artix-7 LUTs and 505 flip-flops), and Google's OpenTitan.

But from my POV the more interesting reason for using an FPGA is for things that aren't CPUs. For example, the SUMP logic analyzer and its progeny the OLS https://sigrok.org/wiki/Openbench_Logic_Sniffer (32 channels at 200 MHz), although I think both of these require the proprietary vendor tools to synthesize. I'm gonna go out on a limb here and guess that reliably buffering up data at 6.4 gigabits per second (32 channels × 200 MHz) is not a thing that any CPU can do, even one that isn't a softcore; CPUs that run at speeds high enough to potentially do it invariably depend on cache hierarchies that monkeywrench your timing predictability.
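To make that concrete: on an FPGA the capture path is basically "register 32 pins into block RAM on every edge of a 200 MHz clock", with no cache or interrupt anywhere in the way, which is why the timing is deterministic. A rough sketch of such a front-end is below - this is not the actual SUMP/OLS design; trigger logic and read-out are omitted, and the buffer depth is an arbitrary assumption:

    // Latch 32 input channels into on-chip block RAM once per sample clock.
    // Depth of 4096 samples is made up; real designs add triggering, RLE, etc.
    module capture #(
        parameter DEPTH_BITS = 12              // 2^12 = 4096 samples
    ) (
        input  wire        clk,                // 200 MHz sample clock
        input  wire        arm,                // start a capture
        input  wire [31:0] probes,             // the 32 input channels
        output reg         done
    );
        reg [31:0] mem [0:(1 << DEPTH_BITS) - 1];  // infers block RAM
        reg [DEPTH_BITS-1:0] wr_addr = 0;
        reg running = 0;

        always @(posedge clk) begin
            if (arm && !running) begin
                running <= 1'b1;
                wr_addr <= 0;
                done    <= 1'b0;
            end else if (running) begin
                mem[wr_addr] <= probes;            // one 32-bit word per clock
                wr_addr      <= wr_addr + 1'b1;
                if (wr_addr == {DEPTH_BITS{1'b1}}) begin
                    running <= 1'b0;
                    done    <= 1'b1;               // buffer full, read out at leisure
                end
            end
        end
    endmodule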

As I said, though, I'm not active in the field, so all I know is hearsay.