Legal (though I'm not a lawyer!)
For most folks in the US, there are only a few instances where copying your voice would get someone in trouble. Laws against fraud, libel, and slander can apply to intentional deception via fake audio or video, just as they apply to a forged signature or false statements made with malicious intent. But that intent is usually the key element. If it's just a voice that happens to sound like you, there's no problem. If it's supposed to be you, that's where these laws come in.
You may have more luck in the future using privacy law to argue against processing of your specific voice by any company within that law's jurisdiction. A few US states have laws against storing and processing the biometric data of their residents, as does the EU. Enforcement of those laws is still very much up in the air, though, and companies again argue that they're learning from your voice but not (usually) directly copying it, nor using it to track or identify you.
Philosophical (Anybody met a "real philosopher" lately? It's not my universe, I only live here!)
I think that the mission may shrink over time, but it is unlikely to disappear. Even if these "AI" tools rivaled the best human performances and had access to every work, the mission would only disappear once everyone has such a tool.
The hobby will certainly continue as an art, so long as humans have both verbal and written communication. I'd love to see "AI" that can read a messy scan of any book and render it into a fully acted scene of human drama, but to mirror your comment: my metaphorical, personal voice is mine. It belongs to me, and I'm thrilled to share it - and that's the hobby to me: using one voice in service of the other.