A nonprofit with international reach had accumulated more than 300,000 images over its 150-plus-year history, but only a limited set of metadata was available to describe them. Most of that metadata was filed away in physical typewritten or handwritten documents, mixed in with other records of interest. The organization engaged Elder Research to computationally generate descriptive metadata for these images using AI and ML technologies, to explore the possibility of applying AI to digitize its document collection, and to digitize and preserve the metadata that already existed.
We applied a set of commercial-grade AI tools and approached the problem from multiple angles, focusing on models for image understanding and image-to-text extraction. Using these models, we digitized the existing metadata documents in a way that preserved meaningful key-value relationships; extracted and archived any metadata already in digital form; and applied models to the image set to detect objects of interest, generate tags and descriptions, and more. We also applied facial recognition technology to identify individuals across images. The results of this work were then organized into a collection mapping metadata to source images.
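To give a sense of what preserving key-value relationships during digitization involves, the sketch below parses OCR output from a typewritten metadata sheet into labeled fields. This is an illustrative simplification, not the client's actual pipeline or schema: the `parse_metadata` helper, the field names, and the `Key: Value` layout are all assumptions for the example.

```python
import re

def parse_metadata(ocr_text):
    """Extract key-value pairs from OCR'd text of a metadata sheet.

    Assumes fields appear as 'Key: Value' on their own lines; lines
    without a colon are folded into the value of the last key seen,
    preserving multi-line entries.
    """
    records = {}
    last_key = None
    for line in ocr_text.splitlines():
        match = re.match(r"^\s*([A-Za-z][\w /]*?)\s*:\s*(.*)$", line)
        if match:
            last_key = match.group(1).strip()
            records[last_key] = match.group(2).strip()
        elif last_key and line.strip():
            # Continuation line: append to the previous field's value.
            records[last_key] += " " + line.strip()
    return records

# Hypothetical OCR output from a scanned index card.
sheet = """Photographer: J. Smith
Date: 1923
Location: Field office,
  northern district"""

print(parse_metadata(sheet))
```

In practice, commercial document-understanding services return key-value pairs with positional and confidence information; the point here is simply that the digitized text retains its structure rather than becoming a flat blob.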
Elder Research produced or extracted more than 30 million metadata entries describing the image collection and document cache, more than 80 entries per image. We used the digitized text in conjunction with image-to-image matching algorithms to link digitized metadata records to their source images, then joined this information with the computationally generated image-based metadata and facial recognition results to give the client the basis for a rich, searchable image archive.
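One common family of image-to-image matching techniques is perceptual hashing, where visually similar images produce hashes that differ in only a few bits. The sketch below implements a toy average-hash over tiny grayscale grids to match a scanned copy back to its source image. It is a simplified stand-in under stated assumptions, not the algorithm actually used on this project; real work would operate on full images resized to a small grid (e.g., 8x8) and would typically use an established library.

```python
def average_hash(pixels):
    """Simple perceptual hash: one bit per pixel, set when the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def best_match(record_pixels, archive):
    """Return the archive key whose image hash is closest to the
    hash of the digitized record's image."""
    record_hash = average_hash(record_pixels)
    return min(archive,
               key=lambda k: hamming(record_hash, average_hash(archive[k])))

# Toy 2x2 grayscale images (values 0-255) standing in for the archive.
archive = {
    "IMG_001": [[10, 200], [15, 210]],
    "IMG_002": [[200, 10], [210, 15]],
}
scan = [[12, 190], [20, 205]]  # noisy scan of the print on a metadata card

print(best_match(scan, archive))  # -> IMG_001
```

Because the hash tolerates small brightness and noise differences, a degraded print on a scanned index card can still be linked to its digital source, which is the join that lets record-level metadata attach to the right image.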