Collaborators

The following research groups and institutions contributed technology to Assembler.

Color & Noise Patterns
— University of Maryland

This machine learning model combines two distinct approaches. It uses the image's color values to find anomalies such as strong contrast differences or unnatural boundaries. It also examines the image's noise patterns for inconsistencies. Together, these signals may indicate the use of image editing software.
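
As a rough illustration of how two such signals can be fused, the Python sketch below combines a placeholder color-anomaly map with a placeholder noise-inconsistency map into a single heatmap. The filters and weights are illustrative assumptions, not the University of Maryland model.

    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def color_anomaly_map(rgb):
        # Placeholder color branch: strong local color gradients stand in
        # for "unnatural boundaries"; the real model learns this signal.
        rgb = np.asarray(rgb, dtype=float)
        grad = sum(np.hypot(sobel(rgb[..., c], 0), sobel(rgb[..., c], 1))
                   for c in range(3))
        return grad / (grad.max() + 1e-8)

    def noise_inconsistency_map(gray, win=16):
        # Placeholder noise branch: local variance of a high-pass residual;
        # regions whose residual statistics differ from the image-wide
        # average receive higher scores.
        residual = gray - uniform_filter(gray, size=3)
        local_var = uniform_filter(residual ** 2, size=win)
        deviation = np.abs(local_var - local_var.mean())
        return deviation / (deviation.max() + 1e-8)

    def fused_heatmap(rgb):
        # A trained model would learn how to weight the two branches;
        # here the normalized maps are simply averaged.
        gray = np.asarray(rgb, dtype=float).mean(axis=2)
        return 0.5 * color_anomaly_map(rgb) + 0.5 * noise_inconsistency_map(gray)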

Dense-Field Copy-Move
— University Federico II of Naples

This algorithm finds similar-looking patches within the image to determine whether one of them was copied and pasted over another area, covering something that was previously there.
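
The published dense-field approach matches per-pixel features densely across the whole image; as a much cruder illustration of the idea, the Python sketch below looks for pairs of identical quantized blocks that lie far apart.

    import numpy as np
    from collections import defaultdict

    def naive_copy_move(gray, block=16, min_shift=24):
        # Crude copy-move check, not the dense-field algorithm: hash every
        # block of quantized pixels and report pairs of identical blocks
        # that are far enough apart not to be trivial self-overlaps. Flat
        # regions (sky, walls) will still produce spurious pairs.
        h, w = gray.shape
        q = (gray // 8).astype(np.uint8)      # quantize to tolerate mild noise
        seen = defaultdict(list)
        matches = []
        for y in range(0, h - block, block // 2):
            for x in range(0, w - block, block // 2):
                key = q[y:y + block, x:x + block].tobytes()
                for py, px in seen[key]:
                    if abs(py - y) + abs(px - x) >= min_shift:
                        matches.append(((py, px), (y, x)))
                seen[key].append((y, x))
        return matches    # pairs of block corners that look copied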

JPEG Dimples
— UC Berkeley

This algorithm finds areas of a JPEG-compressed image that do not display the patterns expected from point-of-capture compression, indicating that the image may have been edited in those areas.
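
The point-of-capture artifact this detector targets is subtle; as a loose stand-in, the Python sketch below only measures how strongly each region still shows the ordinary 8x8 JPEG block grid and flags regions where that trace is unusually weak.

    import numpy as np

    def block_grid_strength(gray, tile=64):
        # Simplified stand-in for the real signal: JPEG compression at
        # capture leaves an 8x8 grid of small discontinuities. For each
        # tile, compare pixel steps that straddle block boundaries with
        # steps inside blocks; tiles where the two look alike retain
        # little trace of the original grid and may have been altered.
        diffs = np.abs(np.diff(gray.astype(float), axis=1))   # horizontal steps
        h, w = diffs.shape
        on_boundary = (np.arange(w) % 8) == 7   # steps crossing an 8x8 block edge
        scores = {}
        for y in range(0, h - tile, tile):
            for x in range(0, w - tile, tile):
                patch = diffs[y:y + tile, x:x + tile]
                mask = on_boundary[x:x + tile]
                edge = patch[:, mask].mean()
                inner = patch[:, ~mask].mean()
                scores[(y, x)] = edge / (inner + 1e-8)   # near 1 means no grid trace
        return scores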

Self-Consistency Splice
— UC Berkeley

This machine learning model examines the pixel properties of different parts of the image to determine whether different EXIF settings might have been used to produce them. Inconsistencies may indicate that more than one camera was used to create this image.
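
A minimal Python sketch of the aggregation step is shown below; metadata_consistency() is a hypothetical stand-in for the learned patch-comparison model, replaced here by a crude noise-statistics similarity.

    import itertools
    import numpy as np

    def metadata_consistency(patch_a, patch_b):
        # Hypothetical stand-in for the trained model, which predicts how
        # likely two patches are to share the same camera metadata. Here:
        # similarity of a simple noise statistic.
        na = float(np.std(np.diff(patch_a.astype(float), axis=0)))
        nb = float(np.std(np.diff(patch_b.astype(float), axis=0)))
        return 1.0 - abs(na - nb) / (na + nb + 1e-8)

    def consistency_map(image, size=128):
        # Average each patch's consistency with every other patch; patches
        # spliced in from another source tend to score lower than the rest.
        h, w = image.shape[:2]
        corners = [(y, x) for y in range(0, h - size + 1, size)
                          for x in range(0, w - size + 1, size)]
        scores = {c: [] for c in corners}
        for a, b in itertools.combinations(corners, 2):
            pa = image[a[0]:a[0] + size, a[1]:a[1] + size]
            pb = image[b[0]:b[0] + size, b[1]:b[1] + size]
            s = metadata_consistency(pa, pb)
            scores[a].append(s)
            scores[b].append(s)
        return {c: float(np.mean(v)) for c, v in scores.items() if v}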

Splicebuster
— University Federico II of Naples

This algorithm identifies inconsistencies in noise patterns when comparing different parts of the image. The presence of inconsistencies may indicate that more than one camera (make and model) was used to create this image.
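
A simplified Python illustration of the idea follows: it extracts a high-pass noise residual, summarizes each block with a couple of statistics, and scores blocks by their distance from the image-wide average. The published Splicebuster algorithm uses richer features and a more careful statistical model.

    import numpy as np

    def noise_residual(gray):
        # Third-order high-pass filter along rows, which suppresses image
        # content and exposes the noise left by the sensor and processing.
        g = gray.astype(float)
        return g[:, :-3] - 3 * g[:, 1:-2] + 3 * g[:, 2:-1] - g[:, 3:]

    def residual_anomaly_map(gray, block=64):
        # Score each block by how far its residual statistics sit from the
        # image-wide average; unusually distant blocks are candidates for
        # material captured with a different camera.
        res = noise_residual(gray)
        h, w = res.shape
        feats, corners = [], []
        for y in range(0, h - block, block):
            for x in range(0, w - block, block):
                r = res[y:y + block, x:x + block]
                feats.append([r.std(), np.abs(r).mean()])
                corners.append((y, x))
        feats = np.array(feats)
        mu, sd = feats.mean(axis=0), feats.std(axis=0) + 1e-8
        dist = np.abs((feats - mu) / sd).sum(axis=1)   # summed z-scores
        return dict(zip(corners, dist))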

StyleGAN detector
— Jigsaw

This machine learning model was trained to distinguish real images of people from deepfake images created specifically with a technique called StyleGAN.
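
The architecture and training data of this detector are not described here; the PyTorch snippet below only illustrates the general shape of a binary real-versus-generated classifier and one training step on dummy data.

    import torch
    from torch import nn

    # Illustrative classifier only; not Jigsaw's actual model.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),    # one logit: how likely the face is generated
    )
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One training step on a dummy batch; real training would use crops of
    # genuine photographs (label 0) and StyleGAN-generated faces (label 1).
    images = torch.randn(8, 3, 128, 128)
    labels = torch.randint(0, 2, (8, 1)).float()
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()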

Media Forensics Challenge Dataset
— National Institute of Standards and Technology (NIST)

Assembler’s ensemble model is trained to classify images as manipulated or not manipulated based, in part, on the dataset distributed by NIST as part of the Media Forensics Challenge. Through this challenge, NIST supports research and helps advance the state of the art for image and video forensics technologies.
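
As an illustration of the stacking idea only (not Assembler's actual ensemble), the Python snippet below fits a simple meta-classifier on per-detector scores using manipulated/not-manipulated labels of the kind the Media Forensics Challenge dataset provides. The scores and labels here are random placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    detector_scores = rng.random((500, 5))     # placeholder: one column per detector
    is_manipulated = rng.integers(0, 2, 500)   # placeholder: labels from the training set

    # Fit a meta-classifier on top of the individual detectors' outputs.
    ensemble = LogisticRegression()
    ensemble.fit(detector_scores, is_manipulated)

    new_image_scores = rng.random((1, 5))
    print(ensemble.predict_proba(new_image_scores)[0, 1])   # probability of manipulation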