Guetzli JPEG Encoder: useful or useless?
A couple of days ago Google announced on its Research Blog the release of its new JPEG encoder Guetzli as open source. Since Google claims a size reduction of 35%, I was curious.
Introduction
Google considers releasing the Guetzli JPEG encoder as open source a service for its own company ;-) but also for other companies. The new encoder is supposed to generate images that are 35% smaller with fewer artefacts, or rather with artefacts which, according to Google’s tests, users will not perceive as disturbing. For background information on the algorithm, Google published a paper on arXiv (hosted by Cornell University).
Practice
To back up their new encoder, Google shows a couple of pictures in the article which are meant to prove that Guetzli’s results are better than those of the very common libjpeg.
On the left the original, in the middle the result from libjpeg, and Guetzli on the right. This example didn’t really convince me :-( But the source and binaries for Windows are available via GitHub, so I could try it out myself.
The background for developments like this one is mainly web services and access from mobile devices, where the goal is to reduce the amount of data being transferred.
I took a pretty detailed image captured with my Lumix and exported it from Lightroom with a quality setting of 100, just as in the JPEGMini comparison. This results in an image with a size of 6,989 KB. Processing this image with Guetzli yields an image size of 5,686 KB, a reduction of almost 19%, far away from the promised 35%.
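For reference, this is roughly how the encoder is driven; a minimal Python sketch, assuming the guetzli binary is on your PATH (the file names are my own placeholders, not from Google’s article). Guetzli’s --quality flag accepts values from 84 to 100 and defaults to 95.

    import subprocess
    from pathlib import Path

    src = Path("lumix-export.jpg")    # hypothetical input, replace with your own
    dst = Path("lumix-guetzli.jpg")   # hypothetical output name

    # Run the external guetzli encoder (quality 84..100, default 95).
    subprocess.run(["guetzli", "--quality", "95", str(src), str(dst)], check=True)

    # Same arithmetic as above: (6989 - 5686) / 6989 is roughly 18.6%, i.e. almost 19%.
    reduction = 1 - dst.stat().st_size / src.stat().st_size
    print(f"Size reduction: {reduction:.1%}")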
In some areas of the picture you get the impression that the Guetzli version is not as sharp as the original, but that’s only an impression. The size reduction of around 19% is the result of an 11 min. process (on another picture I cancelled the process after almost 40 min.). This amount of time is not feasible for a photographer. The implementation doesn’t use multiple CPU cores, and if you additionally consider that pictures shared on social networks are usually resized to around 2,000 px on the long edge, the benefit shrinks even more. Since a single encode can’t be sped up, the only practical workaround is to process several images in parallel, as in the sketch below.
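A minimal sketch of that workaround, assuming a folder of exported JPEGs (the folder name and output suffix are hypothetical). It doesn’t make an individual encode any faster; it only keeps all cores busy by running one single-threaded guetzli process per image:

    import subprocess
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def encode(src: Path) -> Path:
        """Run one single-threaded guetzli process on one image."""
        dst = src.with_name(src.stem + "-guetzli.jpg")
        subprocess.run(["guetzli", str(src), str(dst)], check=True)
        return dst

    if __name__ == "__main__":
        images = sorted(Path("export").glob("*.jpg"))  # hypothetical export folder
        # One worker per core; each image still takes many minutes on its own.
        with ProcessPoolExecutor() as pool:
            for done in pool.map(encode, images):
                print("finished:", done)

Bear in mind that Guetzli is also memory-hungry (the README recommends roughly 300 MB of RAM per megapixel of input), so one process per core can exhaust memory on large images.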
Conclusion
Bear in mind that Google published research results, so there is a lot left to do. The algorithm has to become faster for practical use. On the other hand, all mobile network providers are dealing with increasing data volumes, so there is the question of the benefit for Google, Flickr, Facebook and other services where millions of images are uploaded each day.
I don’t want to call Google’s release useless, but there is a lot to do before the encoder can be used in a production environment, even though optimizing and uploading an image has to be done only once. What do you think? Useful, interesting, or just not usable in practice?
ciao tuxoche