I don't actually know if it's considered a deepfake when it's just a voice; but I've been using the hell out of Speechify, which basically deepfakes voices and pairs them with a text input.
...so... nursing school, we have an absolute fuck-ton of reading assignments. Staring at a page of text makes my brain melt, but thankfully nowadays everything's digital, so I can copy entire chapters at a time and paste them into Speechify. Now suddenly I have Snoop Dogg giving me a lecture on how to manage a patient as they're coming out of general anesthesia. Gets me through the reading fucking fast, and I retain so, SO much more than when I'm just trying to cram a bunch of flavorless text.
That's also the business model behind ad localization now: they'll pay the actor once for appearing on set, then pay them royalties while AI keeps editing the commercial to feature different products in different countries.
If they're up front about it and the actor agrees to it (as with Speechify), I don't see a problem with that. SAG should also be involved to help determine fair compensation.
I think it comes down more to understanding what the tech is potentially good at, and executing it in an ethical way. My personal use is one thing; but Speechify made an entire business out of it, and people aren't calling for them to be burned to the ground.
As opposed to Google's take of "OMG AI! RUB IT INTO EVERYONE'S NOSE, THEY'RE GONNA LOVE IT!" and just slapping it onto the internet, and then pretending to be surprised when people ask for a pizza recipe and it tells them to add Elmer's Glue to it...
Two controlled inputs giving a predictable output, vs. just letting it browse 4chan and seeing what happens. The tech industry definitely seems to lean toward the latter, which is fucking tragic, but there are gems scattered throughout the otherwise pure pile of shit that LLMs are at the moment.