ROCm is basically AMD's answer to CUDA. Just (as usual) more open, less polished, and harder to use. Using something called HIP, CUDA applications can be translated to work with ROCm instead (and therefore run on AMD cards without a complete rewrite of the app).
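To give an idea of what that looks like in practice, here is a minimal sketch of a HIP program (assuming ROCm and hipcc are installed; the kernel itself is just a made-up example, and error checking is omitted). The API mirrors CUDA almost one-to-one, which is why tools like hipify can do most of the translation mechanically:

    // Minimal HIP sketch; compile with: hipcc add_one.cpp -o add_one
    // Note how close it is to CUDA: cuda* calls become hip*, launch syntax is the same.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    __global__ void add_one(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;
    }

    int main() {
        const int n = 1024;
        float host[n] = {0};                    // host buffer, zero-initialized
        float* dev = nullptr;
        hipMalloc((void**)&dev, n * sizeof(float));                       // cudaMalloc -> hipMalloc
        hipMemcpy(dev, host, n * sizeof(float), hipMemcpyHostToDevice);   // cudaMemcpy -> hipMemcpy
        add_one<<<(n + 255) / 256, 256>>>(dev, n);                        // same <<<grid, block>>> launch as CUDA
        hipMemcpy(host, dev, n * sizeof(float), hipMemcpyDeviceToHost);
        hipFree(dev);
        std::printf("host[0] = %f\n", host[0]);                           // expect 1.0
        return 0;
    }

So the "translation" is usually closer to a search-and-replace than a real port, as long as the CUDA code sticks to features HIP actually covers.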
AFAIK they started working on it 6 or 7 years ago as the replacement for OpenCL. Not sure why exactly, but OpenCL apparently wasn't getting enough traction (and I think Blender even recently dropped OpenCL support).
After all this time, the HW support is still spotty (mostly only supporting the Radeon Pro cards, and still having no proper support for RDNA3 I think), and the SW support focuses mainly on Linux (and only three blessed distros, Ubuntu, RHEL and SUSE, get official packages, so it can be a pain to install anywhere else due to missing or conflicting dependencies).
So ROCm basically does work, and keeps getting better, but nVidia clearly has a larger SW dev team that makes the CUDA experience much more polished and painless.
> My major discontent with AV1 has been how the encoder blurs some details completely out
That's the main reason I did not personally switch to AV1 yet as well. (Second reason being that my laptop struggled with playback too much.) Last time I tested it was 2 years back or so, and only using libaom, so I definitely hoped it would be better by now. I was so hyped about Daala (and then AV1) all those years back, so it's a bit disappointing how it turned out to be amazing and "not good enough" at the same time. :)
Losing details seems to be a common problem with "young" encoders. HEVC had similar problems for quite some time; I remember many people preferred x264 for transparent encodings, because HEVC encoders tended to blur fine-grained textures even at high bitrates. That may still be true today; I haven't really paid attention to the topic in the last few years.
IIRC, it has to do mainly with perceptual optimizations: x264 was tweaked over many years to look good, even if it hurts objective metrics like PSNR or SSIM. On the other hand, new encoders are optimized for those metrics first, because that's how you know if a change you made to the code helped or made things worse.
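To make "objective metric" a bit more concrete, here's a rough sketch of PSNR over two 8-bit frames (purely an illustration of the kind of score being optimized, not how any particular encoder computes it internally):

    // Toy PSNR computation for two 8-bit frames of equal size (illustration only).
    // Assumes the frames actually differ somewhere (mse > 0).
    #include <cmath>
    #include <cstdint>
    #include <vector>

    double psnr(const std::vector<uint8_t>& ref, const std::vector<uint8_t>& enc) {
        double mse = 0.0;
        for (size_t i = 0; i < ref.size(); ++i) {
            double d = double(ref[i]) - double(enc[i]);
            mse += d * d;                         // accumulate squared error per pixel
        }
        mse /= double(ref.size());                // mean squared error
        return 10.0 * std::log10(255.0 * 255.0 / mse);  // higher = closer to the reference
    }

Which is exactly where blurring sneaks in: a smoothed-out block can score fine on an MSE-based metric while looking clearly worse to a human than slightly noisy but preserved grain.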
I suppose it's only once the encoder reaches maturity and you know it preserves as much real detail as possible that you can go wild and start adding fake detail or allocating bits to areas that matter more for subjective quality. I'm sure some (many? most?) of these techniques are already supported and used by AV1 encoders, but getting the most out of them may still take some time.
I'm planning to do a very similar upgrade from an RX 580 4GB, but I want at least 16 GB of VRAM and I need the PCIe slot next to the card. Even with a riser, the maximum thickness that fits is 45 mm, which leaves a second-hand RX 6800 reference model as the only AMD option (all the partner models have larger coolers taking around 2.5 slots (50 mm), and the lower-tier cards have less VRAM).
So I'm also glad I checked the dimensions of new cards before buying, but in my case the result wasn't that positive... :)
Hopefully the RX 7800 XT stays under 250 W (like the RX 6800) so there is a chance for a current-gen 2-slot alternative. (The 7900 GRE is 260 W with 80 CU, so I would hope the 7800 XT with 60 CU draws less power, even if they push the clocks a bit higher... Can't wait for a more detailed leak that includes TBP. :) )
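(As a very naive back-of-the-envelope guess, scaling by CU count alone gives 260 W × 60/80 ≈ 195 W, though higher clocks and a different memory configuration could easily eat into that margin.)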
GANTZ was great, but I would personally recommend reading the original manga for the best experience. From what I remember, the anime did not really capture the dark atmosphere in all its details (even though it was still plenty dark), and the actors in the live-action adaptation were perhaps over-acting a bit? And of course there is only so much you can cram into a 2-hour movie.
That being said, reading the manga is a much larger time investment, so I suppose watching the movie or anime is still a much better experience than not experiencing GANTZ at all. :)