Theoretically they could fine-tune for consistent eye reflections, but even then I suspect image models wouldn't get it right. Physical correctness isn't really something these models enforce.
This might be one case where I'd be okay with keeping the methods these tools use as a trade secret, at least for a while, to slow the arms race. The longer we're able to detect fake images, the better.