I'm a layman, so please keep this at lay level. Fundamentally,
how does Google Images recognize two images as having the
same content?
And is it similar to how software finds tanks and missiles
in a spy satellite photo?
What I have been told is that they both do an FFT on the
pixel information, and from that detailed frequency data
they look for characteristic "signature patterns".
But I know no more than that (and even that may be wrong).
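To show where my (possibly wrong) mental picture comes from, here is a toy sketch of what I imagine an "FFT signature" might mean: take the magnitude of the 2D FFT, normalize it, and compare the two spectra. The function names and the comparison score are purely my own invention, not anything Google actually does.

```python
import numpy as np

def fft_signature(img):
    """My naive guess at a 'signature': the magnitude of the 2D FFT,
    normalized so overall brightness doesn't matter."""
    spectrum = np.abs(np.fft.fft2(img))
    return spectrum / spectrum.sum()

def similarity(a, b):
    """Compare two signatures; 1.0 means identical frequency content."""
    sa, sb = fft_signature(a), fft_signature(b)
    return 1.0 - 0.5 * np.abs(sa - sb).sum()

# Two toy 8x8 "images": the same pattern at two brightness levels.
base = np.zeros((8, 8))
base[2:6, 2:6] = 1.0      # a bright square
brighter = base * 3.0     # same square, just brighter

rng = np.random.default_rng(0)
noise = rng.random((8, 8))

print(similarity(base, brighter))  # same shape, different brightness: near 1.0
print(similarity(base, noise))     # unrelated noise: a lower score
```

Is something like this (presumably far more sophisticated) what is actually going on?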
Can you shed light (in layman's terms) on this process?