To be honest, AI would be a path for a lot of you. The issue is you'd actually have to learn it so that A.) the output doesn't look like slop, and B.) you actually get what you want instead of "close enoughs."
The reason a lot of it looks like slop is that people are generating it through websites. If someone downloads the model and runs it locally, they can do all kinds of things with it. You can strike a pose in real life, feed a photo of that pose into the AI, and it'll make the intended character mirror it. Pose too complex to hold IRL? Use any posing art tool or animation program and a generic rig to make the reference. (This is image2image.)
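A rough sketch of what "locally" means in practice. If you run the model yourself (for example the AUTOMATIC1111 webui started with its API flag), image2image is just a request with your reference photo attached. The endpoint name, field names, and default port below are from that webui's API and are illustrative assumptions, not gospel; check your own install's docs page.

```python
# Hypothetical sketch of an img2img request to a locally running
# Stable Diffusion server (AUTOMATIC1111-style API is assumed here).
import base64

def build_img2img_payload(reference_path, prompt, strength=0.6):
    """Bundle a pose-reference image plus a prompt into an img2img request.
    Lower `strength` keeps more of your reference pose; higher lets the
    model wander further from it."""
    with open(reference_path, "rb") as f:
        ref_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "init_images": [ref_b64],        # the pose reference you shot or rigged
        "prompt": prompt,                # who/what should be mirroring the pose
        "denoising_strength": strength,  # how far the model may drift from it
    }

# Usage against a real local server would look something like:
#   import json, urllib.request
#   payload = build_img2img_payload("my_pose.jpg", "character standing, arms crossed")
#   req = urllib.request.Request("http://127.0.0.1:7860/sdapi/v1/img2img",
#                                data=json.dumps(payload).encode(),
#                                headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```

The point is only that nothing here is locked behind a website: the reference image, the prompt, and the knobs are all yours to control.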
Hands, eyes, and feet mess up because most images are generated at somewhere between 700x700 and 2300x2300 depending on your model. That's too few pixels for it to "keep track" of the fine-detail objects in the scene. All a user need do is re-run the image through image2image combined with a painted mask, something called in-painting: you blob over the problem area as if lazily censoring it. The model then takes the broken thing, JUST the broken thing, redraws it using the full size of the canvas, shrinks it back down, pastes it where the bad art was censored, and then finally the whole image gets upscaled.
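The geometry behind that crop-redraw-paste trick can be sketched without any model at all: expand the mask's bounding box by a bit of padding, figure out how much to blow the crop up so it fills the model's working canvas, then scale back and paste. The padding and target resolution below are illustrative defaults, not fixed values from any particular tool.

```python
# Sketch of the "inpaint only the masked area" geometry. The model's
# redraw step is omitted; this is just the crop/scale arithmetic.

def padded_crop_box(mask_bbox, image_size, pad=32):
    """Expand the mask's bounding box by `pad` px, clamped to the image.
    The extra context helps the model blend the redrawn patch in."""
    x0, y0, x1, y1 = mask_bbox
    w, h = image_size
    return (max(0, x0 - pad), max(0, y0 - pad),
            min(w, x1 + pad), min(h, y1 + pad))

def upscale_factor(crop_box, target=512):
    """How much to enlarge the crop so its long edge fills the canvas.
    A hand that was 100 px wide gets redrawn at several times the detail."""
    x0, y0, x1, y1 = crop_box
    long_edge = max(x1 - x0, y1 - y0)
    return target / long_edge

if __name__ == "__main__":
    # A broken hand occupying a 100x80 patch of a 768x768 image:
    box = padded_crop_box((300, 400, 400, 480), (768, 768))
    print(box)  # the padded region the model will actually see
    print(upscale_factor(box))  # roughly 3x more pixels spent on the hand
```

This is why in-painted hands come out clean: the model is no longer squeezing five fingers into a hundred pixels.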
You can also train models of specific characters, and use different base models and different training data to get different art styles instead of that generic slop shit. This is essentially what the "From where you live" furry desirer dev is doing.
If you have an ethical issue with it, you could always pay an artist to make your character's training art with the express understanding that you'll be using it via AI. A reference sheet plus a few extra angles of the head, and you're in business.