Because PDFs might not have the data in a structured form; how would you get the structured data out of an image in the PDF?
Misspellings, default names, a mixture, home brew naming schemes, meticulous schemes, I’ve seen it all. It’s definitely easier to just rasterize it and OCR it.
Because you're right if you're being paid to evaluate all the formats with the Mark 1 eyeball and write a custom parser for each. It sounds like that's feasible for your application.
If you want a generic solution that doesn't rely on a human spending a week figuring out that those 4 absolutely positioned text fields together form the invoice number (and in the order 1, 4, 2, 3), maybe you're wrong.
Source: I don't parse pdfs for a living, but sometimes I have to select text out of pdf schematics. A lot of times I just give up and type what my Mark 1 eyeball sees in a text editor.
So it is absurd to pretend you can solve the parsing problem by rendering the PDF into an image instead of extracting the structured format. By rendering it into a raster, you now have three problems: parsing the PDF, rendering a quality raster, then OCR'ing the raster. It is mind-numbingly absurd.
We also parse millions of PDFs per month in all kinds of languages (both Western and Asian alphabets).
Getting the basics of PDF parsing to work is really not that complicated -- a few months' work. And it is an order of magnitude more efficient than generating an image at 300-600 DPI and running OCR or a visual LLM on it.
But some of the challenges (which we have solved) are:
• Glyph-to-Unicode tables are often limited or incorrect (a quick way to spot this is sketched right after this list)
• "Boxing" blocks of text into "paragraphs" can be tricky
• Handling extra spaces and missing spaces between letters and words. Often PDFs do not include the spaces, or the spaces are incorrect, so you need to identify the gaps yourself.
• Often graphic designers of magazines/newspapers will hide text behind e.g. a simple white rectangle and place a new version of the text on top. So you need to keep track of z-order and ignore hidden text.
• Common text can be embedded as vector paths -- not just logos, we also see it with ordinary text. So you need a way to handle that.
• Drop caps and similar "artistic" choices can be a bit painful
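A quick way to spot the first issue on your own files (pypdf here; the file name is just a placeholder): list the fonts on each page that ship no /ToUnicode CMap, since for those the character codes in the content stream may not map cleanly back to Unicode at all.

    from pypdf import PdfReader

    reader = PdfReader("sample.pdf")                     # placeholder file name
    for i, page in enumerate(reader.pages):
        res = page.get("/Resources")
        fonts = None if res is None else res.get_object().get("/Font")
        if not fonts:
            continue
        for name, ref in fonts.get_object().items():
            font = ref.get_object()
            if "/ToUnicode" not in font:
                print(f"page {i}: {name} ({font.get('/BaseFont')}) has no /ToUnicode map")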
There are a lot of other smaller issues -- but they are generally edge cases.
OCR handles some of these issues for you. But we found that OCR often misidentifies letters (all the major OCR engines do), and they are certainly not perfect with spaces either. So if you are going for quality, you get better results by parsing the PDFs.
Visual transformers are not good with accurate coordinates/boxing yet -- at least we haven't seen a good enough implementation of it, even though it is getting better.
PDFs don't always use UTF-8; sometimes they assign random-seeming numbers to individual glyphs (this is common if unused glyphs are stripped from an embedded font, for example)
etc etc
When extracting text directly, the goal is to put it back into content order, regardless of stream order. Then turn that into a string. As fast as possible.
That's straight text. If you want layout info, it does more. But it's also not just processing it as a straight stream and rasterizing the result; it's trying to avoid doing that work.
This is non-trivial on lots of PDFs, and a source of lots of parsing issues/errors, precisely because the extractor is not processing everything and rasterizing it, but trying to avoid that work.
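A tiny illustration of the difference, with PyMuPDF (assumed available; the file name is a placeholder): the same page extracted in raw content-stream order versus re-sorted into reading order.

    import fitz  # PyMuPDF

    page = fitz.open("sample.pdf")[0]           # placeholder file name
    raw = page.get_text("text", sort=False)     # order the text appears in the content stream
    ordered = page.get_text("text", sort=True)  # re-sorted top-to-bottom, left-to-right
    print(raw == ordered)                       # often False on real-world documents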
When rasterizing, you don't care about any of this at all. PDFs were made to raster easily. It does not matter what order the text is in the file, or where the tables are, because if you parse it straight through, raster, and splat it to the screen, it will be in the proper display order and look right.
So if you splat it onto the screen, and then extract it, it will be in the proper content/display order for you. Same is true of the tables, etc.
So the direct extraction problems don't exist if you can parse the screen into whatever you want, with 100% accuracy (and of course it doesn't matter if you use AI or not to do it).
Now, I am not sure I would use this method anyway, but your claim that the same problems exist is definitely wrong.
I think people are suggesting: use a ready-made renderer > use ready-made OCR pipelines/APIs > run it on PDFs.
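Sketched with off-the-shelf parts, it's roughly this (PyMuPDF as the renderer and Tesseract via pytesseract as the OCR are just one possible pairing; the file name is a placeholder):

    import io

    import fitz                                  # PyMuPDF: the ready-made renderer
    import pytesseract                           # the ready-made OCR
    from PIL import Image

    doc = fitz.open("input.pdf")                 # placeholder file name
    for page in doc:
        pix = page.get_pixmap(dpi=300)           # rasterize the page
        img = Image.open(io.BytesIO(pix.tobytes("png")))
        print(pytesseract.image_to_string(img))  # hand the raster to the OCR engine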
A colleague uses a document scanner to create a pdf of a document and sends it to you
You must return the data represented in it retaining as much structure as possible
How would you proceed? Return just the metadata of when the scan was made and how?
Genuinely wondering
I suggest spending a few minutes using a PDF editor program with some real-world PDFs, or even just copying and pasting text from a range of different PDFs. These files are made up of cute tricks and hacks that whatever produced them used to make something that visually works. The high-quality implementations just put the pixels where they're told to. The underlying "structured data" is a lie.
EDIT: I see from further down the thread that your experience of PDFs comes from programmatically generated invoice templates, which may explain why you think this way.
We have algorithms that combine the individual letters into words, words into lines, and lines into boxes, all by looking at the geometry. Obviously they identify the spaces between words as well.
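A minimal sketch of the geometric idea, not their actual implementation (PyMuPDF per-character boxes; the 25%-of-glyph-height gap threshold is my own assumption):

    import fitz  # PyMuPDF

    page = fitz.open("sample.pdf")[0]                        # placeholder file name
    for block in page.get_text("rawdict")["blocks"]:
        for line in block.get("lines", []):                  # image blocks have no lines
            for span in line["spans"]:
                chars = span["chars"]                        # each: {"c": ch, "bbox": (x0, y0, x1, y1)}
                words, current = [], ""
                for prev, cur in zip(chars, chars[1:] + [None]):
                    current += prev["c"]
                    height = prev["bbox"][3] - prev["bbox"][1]
                    gap = None if cur is None else cur["bbox"][0] - prev["bbox"][2]
                    if cur is None or gap > 0.25 * height:   # large horizontal gap => word boundary
                        words.append(current)
                        current = ""
                print(words)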
We handle hidden text and problematic glyph-to-unicode tables.
The output is similar to OCR except we don't do the rasterization and quality is higher because we don't depend on vision based text recognition.
I wrote the base implementation of all this in less than a month, 10 years ago, and we rarely, if ever, touch it.
We do machine learning afterwards on the structure output too.
> quality is higher because we don't depend on vision based text recognition
This surprises me a bit; outside of an actual scan leaving the computer, I'd expect PDF->image->text done entirely inside a computer to be essentially lossless.
So if it is scanned, the PDF contains just a single image -- no text.
OCR programs will commonly create a PDF where the text/background and detected images are separate. And then the OCR program inserts transparent (no-draw) letters in place of the text it has identified, or (less frequently) places the letters behind the scanned image in the PDF (i.e. with lower z).
We can detect whether something has been generated by an OCR program by looking at the "Creator" data in the PDF that describes the program used to create it. So we can handle that differently (and we do handle it a little bit differently).
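Reading that metadata is trivial, for anyone curious (pypdf; the file name is a placeholder):

    from pypdf import PdfReader

    meta = PdfReader("scan.pdf").metadata        # placeholder file name
    if meta:
        print("Creator: ", meta.creator)         # OCR tools usually identify themselves here
        print("Producer:", meta.producer)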
PDF->image->text is 100% not lossless.
When you rasterize the PDF, you lose information, because you are going from a resolution-independent format to a specific resolution:
• Text must be rasterized into letters at the target resolution
• Images must be resampled at the target resolution
• Vector paths must be rasterized to the target resolution
So, for example, the target resolution must be high enough that small text stays legible.
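The arithmetic is simple: PDF user space is 72 points per inch, so a glyph's height in pixels is roughly point size / 72 * DPI.

    # Pixel height of a 6 pt footnote at a few rasterization resolutions.
    for dpi in (72, 150, 300, 600):
        print(f"{dpi:>3} DPI -> ~{6 / 72 * dpi:.0f} px tall")
    # ~6, ~12, ~25 and ~50 px respectively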
If you perform OCR, you depend on the ability of the OCR program to accurately identify the letters based on the rasterized form.
OCR is not 100% accurate, because it is a computer-vision recognition problem, and:
• There are hundreds of thousands of fonts in the wild, each with different details and appearances.
• Two letters can look the same; a simple example where trivial OCR/recognition fails is the capital letter "I" and the lower-case "l". These are both vertical lines, so you need the context (the letters nearby). Same with "O" and zero.
• OCR is also pretty hopeless with e.g. headlines/text written on top of images, because it is hard to distinguish the letters from the background. But even regular black-on-white text fails sometimes.
• OCR will also commonly identify "ghost" letters in images that are not really there, i.e. pick up a bunch of pixels that get detected as a letter but are really just some pixel structure within the image (not even necessarily text in the image) -- a form of hallucination.
Because the underlying "structured data" is never checked while the visual output is checked by dozens of people.
"Truth" is the stuff that the meatbags call "truth" as seen by their squishy ocular balls--what the computer sees doesn't matter.
The other thing is segmenting a document and linearizing it so that an LLM can understand the content better. Layout understanding helps with figuring out the natural reading order of various blocks of the page.
> There are many cases images are exported as PDFs.
One client of a client would print out her documents, then "scan" them with an Android app (actually just a photograph wrapped in a PDF). She was taught that this application is the way to create PDF files, and would staunchly not be retrained. She came up with this print-then-photograph routine after being told not to photograph the computer monitor -- that's the furthest retraining she was able to absorb. Make no mistake, this woman was extremely successful in her field. Successful enough to be a client of my client. But she was taught that PDF equals that specific app, and wasn't going to change her workflow to accommodate others.
You might think of your post as a <div>. Some kind of paragraph or box of text in which the text is laid out and styles applied. That's how HTML does it.
PDF doesn't necessarily work that way. Different lines, words, or letters can be in entirely different places in the document. Anything that resembles a separator, table, etc can also be anywhere in the document and might be output as a bunch of separate lines disconnected from both each other and the text. A renderer might output two-column text as it runs horizontally across the page so when you "parse" it by machine the text from both columns gets interleaved. Or it might output the columns separately.
You can see a user-visible side-effect of this when PDF text selection is done the straightforward way: sometimes you have no problem selecting text. In other documents selection seems to jump around or select abject nonsense unrelated to cursor position. That's because the underlying objects are not laid out in a display "flow" the way HTML does by default so selection is selecting the next object in the document rather than the next object by visual position.
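For the two-column case specifically, here is a crude sketch of what ordering by visual position (rather than stream order) looks like, assuming a simple two-column layout (PyMuPDF word boxes; the file name is a placeholder):

    import fitz  # PyMuPDF

    page = fitz.open("two_column.pdf")[0]        # placeholder file name
    mid = page.rect.width / 2

    words = page.get_text("words")               # tuples: (x0, y0, x1, y1, word, ...)
    words.sort(key=lambda w: (w[0] >= mid,       # left column before right column
                              round(w[1]),       # then top-to-bottom
                              w[0]))             # then left-to-right within a line
    print(" ".join(w[4] for w in words))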
If you were leading Tensorlake, running on early stage VC with only 10 employees (https://pitchbook.com/profiles/company/594250-75), you'd focus all your resources on shipping products quickly, iterating over unseen customer needs that could make the business skyrocket, and making your customers so happy that they tell everyone and buy lots more licenses.
Because you're a stellar tech leader and strategist, you wouldn't waste a penny reinventing low-level plumbing that's available off-the-shelf, either cheaply or as free OSS. You'd be thinking about the inevitable opportunity costs: If I build X then I can't build Y, simply because a tiny startup doesn't have enough resources to build X and Y. You'd quickly conclude that building a homegrown, robust PDF parser would be an open-ended tar pit that precludes us from focusing on making our customers happy and growing the business.
And the rest of us would watch in awe, seeing truly great tech leadership at work, making it all look easy.
Let's assume we have a staff of 10 and they're fully allocated to committed features and deadlines, so they can't be shifted elsewhere. You're the CTO and you ask the BOD for another $150k/y (fully burdened) + equity to hire a new developer with PDF skills.
The COB asks you directly: "You can get a battle-tested PDF parser off-the-shelf for little or no cost. We're not in the PDF parser business, and we know that building a robust PDF parser is an open-ended project, because real-world PDFs are so gross inside. Why are you asking for new money to build our own PDF parser? What's your economic argument?"
And the killer question comes next: "Why aren't you spending that $150k/y on building functionality that our customers need?" If you don't give a convincing business justification, you're shoved out the door, because, as a CTO, your job is building technology that satisfies the business objectives.
So CTO, what's your economic answer?
Only three of them can process all 2500 files I tried (which are just machine manuals from major manufacturers, so not highly weird shit) without hitting errors, let alone producing correct results.
About 10 of them have a 5% or less failure rate on parsing the files (let alone extracting text). This is horrible.
From there it goes rapidly downhill.
I'm retired, so I have time to fuck around like this. But going into it, there is no way I would have expected these results, or had the time to figure out which 3 libraries could actually be used.
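(For concreteness, a loop like the one below per library -- counting which files it can even get through -- is all such a test needs; pypdf is shown purely as an example and the directory name is a placeholder.)

    from pathlib import Path
    from pypdf import PdfReader

    ok, failed = 0, []
    for path in Path("manuals").glob("*.pdf"):   # placeholder directory for the 2500 manuals
        try:
            for p in PdfReader(path).pages:      # just parse and extract text, nothing fancy
                p.extract_text()
            ok += 1
        except Exception as exc:                 # any exception counts as a parse failure
            failed.append((path.name, repr(exc)))

    print(f"{ok} parsed, {len(failed)} failed")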
> just using the "quality implementation"?
What is the quality implementation?