Originally Posted by Mr B
Correct me if i'm wrong, but what it does, is takes the data "used" to create the image, and blows it up in size?
Nope, a flickerfixer does not "blow up" the picture in size; it speeds up the data rate without changing the size.
The very first flickerfixers just took the picture and doubled the pixel clock, so for every frame that the Amiga generates, two frames were generated for the VGA monitor.
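A toy model of that principle (hypothetical code, not the real FPGA logic): at double the pixel clock, every incoming frame is simply sent out a second time, unmodified.

```python
# Toy model of the very first flickerfixers: each incoming Amiga frame
# is repeated once at double the pixel clock, so the refresh rate
# doubles (e.g. 50 Hz in -> 100 Hz out) while no pixel is changed.

def double_frame_rate(amiga_frames):
    vga_frames = []
    for frame in amiga_frames:
        vga_frames.append(frame)  # first pass of the frame
        vga_frames.append(frame)  # same frame again, unmodified
    return vga_frames

print(double_frame_rate(["frame1", "frame2"]))
```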
Indivision AGA first introduced a full framebuffer, allowing non-integer multiplication factors (pixel clock x2.5), which alters the vertical frequency as well.
With the full framebuffer, HighGFX was brought back to life: While the Amiga generates a picture with very low horizontal and vertical frequencies (outside any real monitor's capabilities), the flickerfixer takes care of pumping that up to today's monitors' requirements. Again, no scaling, just changing the data rate.
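A minimal sketch of what the full framebuffer buys you (toy model, assuming whole frames as the unit; the real hardware works per pixel): the input side writes frames at its own rate, the output side reads the latest frame at the monitor's rate, so non-integer factors like x2.5 just mean some frames are shown two times and some three times.

```python
# Sketch of framebuffer-based rate conversion: repeat buffered frames
# so an in_hz input stream plays back at out_hz. Pixels are untouched;
# only the rate changes.

def resample_frames(input_frames, in_hz, out_hz):
    out = []
    for n in range(int(len(input_frames) * out_hz / in_hz)):
        src = int(n * in_hz / out_hz)  # input frame currently in the buffer
        out.append(input_frames[src])
    return out

# x2.5 factor: 50 Hz in, 125 Hz out -> frames alternate between
# appearing three times and twice.
print(resample_frames(["A", "B"], 50, 125))
```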
The only things that Indivision AGA MK2 implements differently from the old model are the data rate and the output interface: While the old model is limited to about 330 MBytes/second, this new model can handle over 600 MBytes per second. In addition to the known VGA output, I have also added DVI output (complete DVI-I implementation). The basic principle remains the same: One pixel that's coming in will still be a single pixel on the output interface.
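To get a feel for those numbers, the arithmetic is simply width x height x bytes per pixel x refresh rate (the mode and byte count below are illustrative picks, not the actual Indivision figures):

```python
# Rough output-side bandwidth arithmetic (illustrative numbers only).

def data_rate(width, height, bytes_per_pixel, refresh_hz):
    """Bytes per second needed to stream one video mode."""
    return width * height * bytes_per_pixel * refresh_hz

# A hypothetical 1024x768 mode at 75 Hz, 24-bit color padded to 4 bytes:
rate = data_rate(1024, 768, 4, 75)
print(rate / 1e6, "MB/s")  # well under a 600 MB/s budget
```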
If you want to scale, you have to look at several pixels at a time and calculate a proper "in-between value" for each output pixel. I have no intention of going that way, because in order to do it properly, you have to implement fairly complicated algorithms: Good picture interpolation is non-linear, so I'd have to implement DSP functions in the FPGA. Anything else just looks blurry or distorted. Not impossible, but also not within the scope of the product.
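To illustrate why scaling forces you to look at several pixels at once, here is a one-dimensional sketch (my own example, not anything from the product): mapping 4 input pixels onto 5 output pixels means most output samples fall between two inputs and have to be computed from both. This uses plain linear interpolation, which is exactly the kind of simple filter the post says ends up looking blurry; a good scaler needs non-linear, DSP-style filtering.

```python
# Scaling a scanline from len(pixels) samples to out_len samples:
# each output position falls between two input pixels, so its value
# is a weighted mix of both neighbours (linear interpolation).

def scale_line(pixels, out_len):
    out = []
    for i in range(out_len):
        pos = i * (len(pixels) - 1) / (out_len - 1)  # position in input
        lo = int(pos)
        hi = min(lo + 1, len(pixels) - 1)
        frac = pos - lo
        out.append(pixels[lo] * (1 - frac) + pixels[hi] * frac)
    return out

print(scale_line([0, 100, 200, 300], 5))  # "in-between values" appear
```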
As you have already pointed out, picture quality is best when it's not scaled. So once again, Indivision does not scale, which is a synonym for "does not change size".