• TropicalDingdong@lemmy.world · 9 hours ago

    Thanks for sharing this.

    I’ve been running OCR on the images folder of the release since last week and just reached out to the creator to see if they want the data I’ve processed. Right now that entire graph covers ONLY the “text” portion of the dump. There are 26k images, mostly pictures of emails and other documents, and I’m about 80% of the way through processing them (although I’ve had some hiccups in the past 24 hours).

    https://codeberg.org/sillyhonu/Image_OCR_Processing_Epstein
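
    For anyone who just wants the general shape of it, a batch OCR loop over a folder looks roughly like this. This is a minimal sketch, not the linked script (which hands the pages to a model in Colab); it assumes pytesseract and a local images/ directory as placeholders:

    ```python
    # Minimal batch-OCR sketch -- NOT the linked script, which hands pages to a
    # model in Colab. Assumes Tesseract is installed and images live in ./images.
    from pathlib import Path

    import pytesseract
    from PIL import Image

    IMAGE_DIR = Path("images")      # placeholder for a local copy of the image dump
    OUTPUT_DIR = Path("ocr_text")
    OUTPUT_DIR.mkdir(exist_ok=True)

    for img_path in sorted(IMAGE_DIR.glob("*.jpg")):
        # Run OCR on each page and write the extracted text next to it by name.
        text = pytesseract.image_to_string(Image.open(img_path))
        (OUTPUT_DIR / f"{img_path.stem}.txt").write_text(text, encoding="utf-8")
        print(f"{img_path.name}: {len(text)} characters extracted")
    ```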

      • TropicalDingdong@lemmy.world · 7 hours ago

        It’s phenomenal. I have found a few places where it falls down, and it’s usually when the text is incredibly small; you can see the page is being downsampled before it gets handed off to the model. One example I found, some bank disclosure documentation from Bank of America, just came out as all I’s and O’s.
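
        If anyone wants to poke at that failure mode, my guess is those pages need to be upscaled before they get handed off. Something like this (Pillow only; the 2x factor is untested guesswork, not something I’ve validated against those documents):

        ```python
        # Sketch: enlarge a page before OCR so very small print survives the
        # downsampling described above. The 2x factor is a guess, not a tested value.
        from PIL import Image

        def upscale_for_ocr(path: str, factor: int = 2) -> Image.Image:
            """Return an enlarged copy of the page so tiny print is still readable."""
            img = Image.open(path)
            return img.resize((img.width * factor, img.height * factor), Image.LANCZOS)
        ```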

        For the emails, book text, letters, etc… I genuinely haven’t found a place where it didn’t work correctly as I’ve been spot-checking the output.

        If you have Colab you can just try the script I put up. All you need to do to make it run is bookmark the House Oversight Committee Google Drive folder into your own Google Drive.
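
        Once the shortcut (“Add shortcut to Drive” on the committee folder) is in place, the only Colab-side step is mounting your Drive so the notebook can see it. Roughly (the folder name below is a placeholder for whatever your shortcut is called):

        ```python
        # In a Colab cell: mount Google Drive so the bookmarked (shortcut) folder is visible.
        from google.colab import drive
        drive.mount("/content/drive")

        import os
        # "Oversight Released Files" is a placeholder -- use your own shortcut's name.
        folder = "/content/drive/MyDrive/Oversight Released Files"
        print(os.listdir(folder)[:10])  # sanity check that the images are visible
        ```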

    • pelespirit@sh.itjust.works · 9 hours ago

      Whoa, I hope they’re interested. I didn’t realize the info from the pics wasn’t included. Thanks for doing all that work. I looked through some of it and there’s a ton there.

      • TropicalDingdong@lemmy.world · 9 hours ago

        Yeah, it’s ridiculous how much is in there. I’m pulling their current repo to see how they’re building their DB so that, if they don’t get back to me, I can at least combine the two databases.

        And if anyone reading this wants a copy of what I’ve processed so far, I’m more than happy to share.

        But it looks to me like they dropped a couple hundred dollars just on processing those text files. It would be north of $2.5k on top of that to process the data I’m creating.

        That being said, mine only goes as far as extracting the contents and creating a SHA-256 hash of each file to keep track of the documents themselves and flag any tampering. It doesn’t take the next step of extracting names, locations, dates, etc…
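
        The hashing part is just the standard library, roughly this (the paths are placeholders):

        ```python
        # One SHA-256 fingerprint per source file: any later modification (tampering)
        # changes the hash, so documents can be tracked and verified.
        import hashlib
        from pathlib import Path

        def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Example: print a hash for each image so it can be stored with the OCR output.
        for img in Path("images").glob("*.jpg"):   # "images" is a placeholder path
            print(img.name, sha256_of_file(img))
        ```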

        I’m working out that extraction step now, but it seems like the way to do it would be to structure the output so it fits into their DB seamlessly.
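
        I haven’t settled on tooling for that step yet; one option would be an off-the-shelf NER model like spaCy’s, along these lines (spaCy is my assumption here, not something their repo uses):

        ```python
        # Rough sketch of the "next step": pull people, places, orgs, and dates out of
        # the OCR'd text with spaCy's pretrained English model (my tool choice is an
        # assumption, not what the other project uses).
        # pip install spacy && python -m spacy download en_core_web_sm
        import spacy

        nlp = spacy.load("en_core_web_sm")
        WANTED = {"PERSON", "GPE", "LOC", "DATE", "ORG"}

        def extract_entities(text: str) -> dict[str, list[str]]:
            """Group the entities found in one OCR'd document by label."""
            doc = nlp(text)
            out: dict[str, list[str]] = {}
            for ent in doc.ents:
                if ent.label_ in WANTED:
                    out.setdefault(ent.label_, []).append(ent.text)
            return out
        ```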