"Fake frames" is "frame generation"; for Nvidia it's called DLSS.
Rather than having the graphics card render 120 frames, you can crank the settings up to where you only get 60, then AI "guesses" what the next frame would show, doubling it to 120 while keeping the higher settings.
This can make things blurry because the AI may guess wrong. So every odd frame is real, every even frame is just a guess.
Frame 1: real
Frame 2: guess
Frame 3: real
If the guess for #2 is accurate, everything is cool. If #2 guessed a target moved left when it actually moved right, then #3 corrects it, and that "blink" is the problem.
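To make the pattern above concrete, here's a toy Python sketch (purely illustrative, nothing like Nvidia's actual pipeline) where each frame is just a number, say a target's x position, and each generated frame extrapolates the last bit of motion:

```python
# Toy sketch of frame generation (hypothetical, not Nvidia's real algorithm).
# Real frames carry a value (a target's x position); generated frames
# extrapolate the motion seen between the last two real frames.

def generate_frames(real_frames):
    """Interleave one extrapolated 'guess' frame after each real frame."""
    output = []
    for i, frame in enumerate(real_frames):
        output.append(("real", frame))
        if i + 1 < len(real_frames):
            # Guess: assume motion continues at the same rate. The true
            # midpoint would need the *next* frame, which isn't rendered yet.
            prev = real_frames[i - 1] if i > 0 else frame
            guess = frame + (frame - prev) / 2
            output.append(("guess", guess))
    return output

# Target moves steadily right: guesses land close to the truth.
print(generate_frames([0, 10, 20]))
# Target reverses direction: the guess overshoots, and the next real
# frame snaps back -- that correction is the visible "blink".
print(generate_frames([0, 10, 0]))
```

With steady motion the guesses land near the truth; when the target reverses, the guess overshoots and the next real frame snaps it back, which is exactly the "blink" described above.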
The bigger issue is developers relying on that tech so they don't have to optimize their code. So rather than DLSS being extra oomph, it's going to be required for "acceptable" performance.
Not to be nitpicky but DLSS is a different technology than frame generation, though it also involves AI guessing - just in a different way. DLSS (Deep Learning Super Sampling) means rendering the game at a lower resolution than your screen's output, then having it upscaled to the correct resolution via AI. This is much more performance friendly than native rendering and can often lead to a better looking visual end product than turning graphics features off and rendering natively - though it will depend on the game, genre and personal preference.
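The upscaling part can be sketched in a few lines of Python. This only shows the shape of the pipeline (render small, then scale up to the display resolution); real DLSS replaces the dumb nearest-neighbor step below with a trained neural network fed with motion vectors:

```python
# Sketch of the upscaling idea only: render at low resolution, then scale
# up to the output resolution. Nearest-neighbor stands in for the AI step.

def upscale_nearest(image, factor):
    """Upscale a 2D list of pixel values by an integer factor."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide] * factor)
    return out

# "Render" at 2x2 instead of 4x4, then upscale to the output resolution.
low_res = [[1, 2],
           [3, 4]]
print(upscale_nearest(low_res, 2))
```

The performance win comes from the render step touching a quarter of the pixels; the quality question is entirely about how good the scale-up step is.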
Frame generation is as you described. Worth noting is that DLSS without frame generation doesn't suffer issues like artifacts and input lag in the same manner as FG turned on. Frame generation also works better the higher your base frame rate is, so it's a bit of a "win-more". Using FG to go from 30 to 60 FPS will feel much worse than using it to go from 60 to 120.
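The "win-more" point comes down to simple arithmetic: generated frames don't sample your input, so input is effectively only read at the base frame rate. A quick illustration:

```python
# Why frame generation "wins more" at higher base rates: generated frames
# don't sample your input, so input is only read at the *base* frame rate.

def frame_times_ms(base_fps, fg_multiplier):
    real_interval = 1000 / base_fps                # gap between real frames
    displayed = 1000 / (base_fps * fg_multiplier)  # gap between shown frames
    return real_interval, displayed

for base in (30, 60):
    real, shown = frame_times_ms(base, 2)
    print(f"{base} FPS base -> {base * 2} FPS shown: "
          f"input sampled every {real:.1f} ms, frame shown every {shown:.1f} ms")
```

At a 30 FPS base your input is only sampled every 33 ms no matter how smooth the picture looks, while at a 60 FPS base it's every 17 ms, which is why the former feels so much worse.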
The fake frames memes I believe stem from the updated frame generation technology in the 50 series guessing three frames at a time instead of one. So in effect you'll end up with a majority of the frames you see being "fake".
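The fraction works out simply: with N generated frames per real one, N / (N + 1) of what you see is "fake". A quick sketch:

```python
# With multi-frame generation, each real frame is followed by N generated
# frames, so the fraction of "fake" frames on screen is N / (N + 1).

def fake_fraction(generated_per_real):
    return generated_per_real / (generated_per_real + 1)

print(fake_fraction(1))  # classic frame gen: half the frames are generated
print(fake_fraction(3))  # three guesses per real frame: three quarters are
```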
On the other hand, NVIDIA started consolidating all these technologies under the NVIDIA DLSS umbrella a few months ago, for some reason.
So it's DLSS Super Resolution, DLSS Frame Generation, DLSS Ray Reconstruction and so on, with the exception of DLAA. Probably because that would get too stupid even for them.
It is image processing with statistics rather than traditional rendering. It is a completely separate process. Nvidia GPUs (and the upcoming AMD ones too) also have hardware built into the chip specifically for this.
I saw a graphic the other day comparing the number of frames generated between the 40 series and 50 series, and people in the comments were saying that the 50 series uses AI frame generation to speed things up.
People in the know realize that AI is largely hype, and the generated frames probably don't look as good as they would if they had been properly rendered.