Lossless compression doesn’t really do well for pictures of real life. For screenshots it’s ideal, but for complex images PNGs are just way too big for the virtually unnoticeable difference.
A high-quality JPG is going to look good. What doesn’t look good is when it gets resized, recompressed, screenshotted, and recompressed again 50 times.
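Easy to check for yourself; a quick Pillow sketch encoding the same photo both ways (file names are hypothetical):

```python
import os

from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
img.save("as_png.png")              # lossless PNG
img.save("as_jpg.jpg", quality=90)  # high-quality lossy JPEG

for path in ("as_png.png", "as_jpg.jpg"):
    print(path, os.path.getsize(path) // 1024, "KiB")
```

For a typical photo the PNG usually comes out several times larger.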
PNG is the wrong approach for lossless web images. The correct answer is WebP: https://siipo.la/blog/whats-the-best-lossless-image-format-comparing-png-webp-avif-and-jpeg-xl
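If you want to try that, a minimal sketch with Pillow (hypothetical file names; assumes a Pillow build compiled with WebP support):

```python
from PIL import Image

# lossless=True is what switches Pillow's WebP encoder out of the
# default lossy mode.
Image.open("screenshot.png").save("screenshot.webp", format="WEBP", lossless=True)
```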
I found that quite a lot of AVIF encoders lied about their lossless encoding modes, and instead used the normal lossy mode at a very high quality setting. I eventually found one that did true lossless, and I don’t think it ever managed to produce a file smaller than the input.
Turns out, that’s a well-known issue with the format. It’s just another case where Google’s marketing makes AVIF out to be fantastic, but in reality it’s actually quite mediocre.
They lied about the lossiness?! I can’t begin to exclaim loudly enough about how anxious this makes me.
The funny thing is, I knew something was off because Windows was generating correct thumbnails for the output files, and at that time the OS-provided thumbnailer was incapable of generating correct thumbnails for anything but the simplest baseline files.
(Might be better now, idk, not running Windows anymore.)
That’s how I knew the last encoder was producing something different: even before checking the output file size, the thumbnail was bogus.
This story is a nightmare and I’m not sure if it’s better or worse now knowing that it was ancient ICO files that tipped you off.
Open question to you or the world: for every lossless compression I perform, is the only way to verify it to decode both the original and the compressed file to bitmaps (or XCFs) and compare them? And if the before-bitmap and after-bitmap aren’t identical, does that mean lossy compression has occurred?
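For what it’s worth, a minimal sketch of that decode-and-compare approach with Pillow and NumPy; the point is to compare the decoded pixels, not the container bytes. File names are hypothetical, decoding AVIF may need the pillow-avif-plugin, and converting to 8-bit RGBA means 16-bit sources need more care:

```python
import numpy as np
from PIL import Image

def is_lossless(original_path: str, compressed_path: str) -> bool:
    """Decode both files to raw RGBA pixel arrays and compare bit-for-bit."""
    a = np.asarray(Image.open(original_path).convert("RGBA"))
    b = np.asarray(Image.open(compressed_path).convert("RGBA"))
    return a.shape == b.shape and np.array_equal(a, b)

print(is_lossless("input.png", "output.avif"))  # True only if truly lossless
```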
JXL is a much better format, for a multitude of reasons beyond the article, but it doesn’t have much adoption yet. On the Chromium team (the most important platform, unfortunately), someone seems to be actively power-tripping and blocking it.
Yeah, Google is trying to keep control of their image format, and they’re abusing their monopoly to do so.
Webp, yo!
.tif or nothing, yo.
A high-quality JPG looks good. The 100th compression into a JPG looks bad.
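You can watch that happen with Pillow. A rough sketch (hypothetical file name): the slight resize between generations mimics real-world resize/screenshot round trips, since re-encoding identical pixels at a fixed quality settles down quickly.

```python
import numpy as np
from PIL import Image

img = Image.open("photo.png").convert("RGB")
w, h = img.size
first = np.asarray(img, dtype=np.int16)

for _ in range(100):
    # Mimic sharing: a slight resize, then another lossy encode.
    img = img.resize((w - 1, h - 1)).resize((w, h))
    img.save("gen.jpg", quality=85)
    img = Image.open("gen.jpg").convert("RGB")

drift = np.abs(np.asarray(img, dtype=np.int16) - first).mean()
print(f"mean per-channel drift after 100 generations: {drift:.1f}")
```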
I know compression has a lot of upsides, but I’ve genuinely hated it ever since broadband was a thing. Quality over quantity all the way. My websites have always used dynamic resizing, with the resolution passed in a parameter, resulting in lightning-fast load times and full quality when you need it.
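As a sketch of the idea, a hypothetical Flask endpoint along those lines (no caching or input validation, which a real site would need):

```python
from io import BytesIO

from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)

@app.get("/img/<name>")
def resized(name):
    # Caller picks the resolution via a query parameter, e.g. /img/hero.png?w=640
    w = int(request.args.get("w", 1024))
    img = Image.open(f"originals/{name}")
    img = img.resize((w, round(img.height * w / img.width)))
    buf = BytesIO()
    img.save(buf, format="WEBP", quality=85)
    buf.seek(0)
    return send_file(buf, mimetype="image/webp")
```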
The way things are shared on the internet is with screenshots and social media, and it’s been like that for at least 15 years. JPG is just slowly deep-frying the internet.