Follow-up blog post about a conference call with Xerox: http://www.dkriesel.com/en/blog/2013/0806_conference_call_wi...
It shouldn't be seen with any setting. Nothing you can do to the device (short of involving a hammer) should change the content in any way. Compress, resize, zoom, do whatever, but it simply must not change the content at any time at any resolution/quality.
I'm just flabbergasted that such a compression scheme was ever implemented in the first place. Surely there are alternative OCR-based compression methods that don't introduce these artifacts (and that's putting it mildly) at lower resolutions.
Having various compression/quality options allows you to pick the tradeoff (file size vs. resulting quality) that is acceptable for your situation. There is no perfect setting for all situations. Even the original bitmap is an imperfect (i.e. lossy) rendering of the original document.
I can just see a legal loophole now for anyone using these devices, for example "the electronic document was modified by a Xerox and we don't have the original, those numbers were not what we signed, contract void".
Whether or not it's an optional setting, and regardless of the font size involved, this can have major consequences for people who trust the device to produce, in all cases, an accurate representation of what they put into it.
Looks like the company is trying to weasel out of it and there are going to have to be lawsuits. Though I didn't really expect otherwise; if the dice come up badly, the damage from this could exceed the net value of the company.
Don't get me wrong - using OCR is a great compression technique, but if it isn't reliable enough, it shouldn't be the default or "normal" setting.
"Textual regions are compressed as follows: the foreground pixels in the regions are grouped into symbols. A dictionary of symbols is then created and encoded, typically also using context-dependent arithmetic coding, and the regions are encoded by describing which symbols appear where."
Then from the OCR wiki[2].
"Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching" or "pattern recognition"."
Furrow your brow and smash the down-vote arrow all you wish. It won't stop JBIG2 from doing much of what people consider OCR to be doing today: recognizing characters. JBIG2 just adds building its own dictionary, which is what opened the path to this topic today.
[1] http://en.wikipedia.org/wiki/JBIG2 [2] http://en.wikipedia.org/wiki/Optical_character_recognition
All it's doing is recognizing "similar" patches of the image and coalescing them, which is what it's supposed to do, according to the standard. Yes, it's too aggressive.
A major and highly pertinent difference is that if this OCR-ish procedure incorrectly classifies two identical letters as being different, accuracy is not affected, and the only consequence is a larger file. With normal OCR, seeing two As and saying they're different would be an error, but in this case, it's fine.
What this means is that, while regular OCR is inherently error-prone, this compression procedure can be fully tuned anywhere between no errors and nothing but errors, with file size being the tradeoff.
The ability to run this algorithm in a way that produces no errors may be enough to disqualify it as "OCR", depending on your point of view. In any case, it certainly changes things from "that's just how it is" to "this is a royal cock-up on Xerox's part".
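The tunability point can be made concrete. Below is a minimal Python sketch (my own toy illustration, not Xerox's or JBIG2's actual matcher) of a symbol dictionary where a Hamming-distance threshold sets the tradeoff: at threshold 0 only pixel-identical glyphs are coalesced (lossless, larger file), while a looser threshold merges merely similar glyphs and can silently substitute one character for another:

```python
def hamming(a, b):
    """Number of pixels that differ between two equal-size glyph bitmaps."""
    return sum(x != y for x, y in zip(a, b))

def compress(glyphs, threshold):
    """Greedy symbol-dictionary compression: each glyph is replaced by the
    index of the first dictionary entry within `threshold` pixels of it."""
    dictionary, indices = [], []
    for g in glyphs:
        for i, d in enumerate(dictionary):
            if hamming(g, d) <= threshold:
                indices.append(i)              # reuse an existing symbol
                break
        else:
            dictionary.append(g)               # new symbol
            indices.append(len(dictionary) - 1)
    return dictionary, indices

# Two made-up 4x5 glyphs (a "6"-ish and an "8"-ish shape) differing in 4 pixels.
glyph_6 = (0,1,1,0, 1,0,0,0, 1,1,1,0, 1,0,1,0, 0,1,1,0)
glyph_8 = (0,1,1,0, 1,0,0,1, 0,1,1,0, 1,0,0,1, 0,1,1,0)

# Exact matching: both symbols kept, output is faithful (just a bigger file).
assert compress([glyph_6, glyph_8], threshold=0) == ([glyph_6, glyph_8], [0, 1])
# Aggressive matching: the "8" is silently rendered with the "6" symbol.
assert compress([glyph_6, glyph_8], threshold=4) == ([glyph_6], [0, 0])
```

With threshold 0 the algorithm is a pure deduplicator and cannot change content; only a nonzero threshold makes mangled digits possible.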
I might have read it wrong, but from how I understood it the default settings don't have this problem. It's when people adjust the quality settings to be lower. Am I wrong?
I was expecting "Here is new firmware and we apologize for using JBIG2, won't happen again."
One wonders if JBIG2 is used in the storing of checks by banks (my bank these days only sends me images of my checks, never the actual check any more) or DMV records, or any number of things.
So in the previous thread I suggested a JBIG2 test image, now I want to build one that if you copy it, it goes from one thing to something else entirely!
First and foremost, I agree that Xerox putting their name on a product which creates an unfaithful copy is corporate suicide. Such an ancient paragon of computer innovation should be able to come up with a clever algorithm that compresses but doesn't substitute image bits.
But...
- The original story[1] didn't mention that the product itself warns against the very thing they are reporting. Did they ignore that warning, did the copier not show it, did they use a setting that did not have the warning? Their further posts cover the issue, so it looks like somebody else set the resolution and ignored the warning.
- Calling what the JBIG2 algorithm does "OCR" is misleading. OCR is pretty much understood to be analog text (image) to digital text (ASCII, UTF-32). Matching to a real character set and outputting those characters is a defining part of true OCR. It's also confusing because the copiers have a true OCR function, and this is not related. What JBIG2 does, I would call it "sub-image matching and substitution."
- Calling JBIG2 "lossy" is also misleading. I suppose it is lossy by definition, but "lossy" is usually limited to pixel-level effects as seen in JPG, not whole image blocks.
- JBIG2 seems like an algorithm that shouldn't be used on low-res text documents. You might say it's just a configuration of the algorithm, but if engineers can't take it as a tool and use it correctly, you start to wonder if it's a problem with the tool.
[1] http://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres_...?
There comes a point when the quality is so poor that you no longer trust your interpretation. Is that a 3? An 8? If you can't tell, you will not act on that information without further clarification.
This compression algorithm destroys this process.
How can you trust what you are reading anymore? How do we know there isn't a bug that sometimes causes the content substitution when the source text is large and perfectly legible?
Disk space is not at enough of a premium to justify this.
  convert *.jpg JPEG.pdf                                 -- 43777 kB
  convert *.png PNG.pdf                                  --  6907 kB
  jbig2 -b J -d -p -s *.jpg;    pdf.py J > JBIG2.pdf     --   947 kB
  jbig2 -b J -d -p -s -2 *.jpg; pdf.py J > 2xJBIG2.pdf   --  1451 kB
Quite a difference. I don't quite understand why JPEG fares so poorly compared to (lossless) PNG; maybe because it doesn't do monochrome?[1] http://ssdigit.nothingisreal.com/2010/03/pdfs-jpeg-vs-png-vs...
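To put "quite a difference" in numbers, here are the ratios implied by the sizes above (simple arithmetic on the reported figures, taking JPEG as the baseline):

```python
# Reported PDF sizes in kB from the comparison above.
sizes_kb = {"JPEG": 43777, "PNG": 6907, "JBIG2": 947, "2xJBIG2": 1451}

baseline = sizes_kb["JPEG"]
for name, kb in sizes_kb.items():
    # How many times smaller each output is than the JPEG-based PDF.
    print(f"{name:8s} {kb:6d} kB  ({baseline / kb:5.1f}x smaller than JPEG)")
```

So the JBIG2 output is roughly 46x smaller than the JPEG version and about 7x smaller than the lossless PNG version.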
The only acceptable fix for this is to disable any compression-quality setting that could EVER cause this to happen.
"Normal" is an overly aggressive compression setting? Is that an overly aggressive setting for the end-user or for Xerox to be implementing in their hardware marketed to law firms?
I expected something better from Xerox; instead it's a sort of: "You are a stupid customer. Leave it on the default and stop bothering me; it's not my fault you find bugs when not using the default."
Pretend you care, blame the users, and don't take any action. Hey, what could be wrong with that?
Why on earth does a scanner have a web interface?