Did AI Ignite The Chris Rock Eye-Pad Conspiracy Theory?
By now you've likely seen the clip of Will Smith slapping Chris Rock during the 2022 Oscars. The surprising altercation, which has since been condemned by the Academy, was almost immediately followed by speculation that the entire thing had been staged — a rumor that still persists across social media despite Smith's apology and a statement from the Los Angeles Police Department. Such baseless allegations can be found on Facebook, Twitter, and Reddit, among other online destinations.
Fueling the speculation is a viral tweet alleging that a cushioned pad is visible on Rock's cheek when the footage is viewed at an 8K resolution, seemingly suggesting he was prepared for a pre-planned slap. The idea is easily dismissed when one views high-resolution images captured at the event, but it raises the question: where did the viral image come from, and why does it appear to show a cheek pad? Whether the clip was legitimately sourced from an Ultra HD TV, upscaled using an app, or deliberately created for social media engagement, one possible explanation fits every scenario: imperfect artificial intelligence.
The viral Chris Rock image may be the result of AI
The tweet was published by @viralclipsss on Twitter, an account that claims a pad is visible beneath Rock's eye when the altercation is viewed in "8K quality images." At the time of writing, the tweet has received more than 19,000 "likes," though many on social media have pointed out that images captured by professional photographers who attended the event (like the one above) don't feature the same alleged cheek cushion.
The claim is easily dismissed based on those professional photographs, but that still leaves the question of why the Twitter image seemingly shows a pad on Rock's cheek. It's impossible to say for sure, but the tweet's wording suggests someone upscaled a low-resolution screenshot of the incident to a higher "8K" resolution. That process is fairly simple thanks to apps like AI Enlarger and desktop software like Topaz Labs' Gigapixel AI (though there's no reason to assume either of those products was used to create this particular image).
Upscaling software and apps use machine learning to analyze an image, scale it up to a higher resolution, and then fill in the missing pixels with the model's best guess about what's supposed to be there. The process can seem magical at times, making it possible, for example, to watch 1080p movies on a 4K or 8K television. The technology is not perfect, however: depending on the quality of the source image, the upscaled result may include odd artifacts or elements fabricated entirely by the software.
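To make the distinction concrete, here's a minimal sketch using OpenCV's dnn_superres module, which wraps several pretrained super-resolution networks. The filenames are hypothetical, the ESPCN model file must be downloaded separately, and nothing suggests this particular model produced the viral image; it simply illustrates the difference between AI upscaling and plain interpolation:

```python
import cv2
from cv2 import dnn_superres  # requires opencv-contrib-python

# Load a low-resolution frame grab (hypothetical filename).
low_res = cv2.imread("slap_frame_lowres.png")

# "ESPCN_x4.pb" is a pretrained model file, downloaded separately.
sr = dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")
sr.setModel("espcn", 4)  # 4x enlargement

# The network infers the missing pixels, its "best guess," rather
# than simply interpolating between the pixels that already exist.
ai_upscaled = sr.upsample(low_res)

# Plain bicubic interpolation for comparison: blurrier, but it
# never invents detail that wasn't in the source image.
h, w = low_res.shape[:2]
bicubic = cv2.resize(low_res, (w * 4, h * 4), interpolation=cv2.INTER_CUBIC)

cv2.imwrite("ai_upscaled.png", ai_upscaled)
cv2.imwrite("bicubic.png", bicubic)
```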
Assuming the first image featured in the viral tweet was put through an AI upscaler, its low resolution likely led the software to mistake natural skin creases for hard edges, leaving Rock's cheek with a pad-like appearance. Gizmodo ran with this idea and tested it, feeding the original low-resolution image through an upscaler and producing the same sort of cheek distortion.
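That kind of round-trip test is straightforward to sketch: degrade a genuine high-resolution photo, run it back through an upscaler, and see where the output disagrees with the original. The disagreements are, by definition, details the network invented. Filenames are again hypothetical, and the ESPCN model again stands in for whatever tool was actually used:

```python
import cv2
from cv2 import dnn_superres  # requires opencv-contrib-python

# Start from a genuine high-resolution photo (hypothetical filename).
original = cv2.imread("press_photo_highres.png")

# Simulate a low-quality screenshot: shrink 4x, discarding real detail.
h, w = original.shape[:2]
degraded = cv2.resize(original, (w // 4, h // 4), interpolation=cv2.INTER_AREA)

# Upscale the degraded copy back up with a pretrained network.
sr = dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")
sr.setModel("espcn", 4)
restored = sr.upsample(degraded)

# Guard against a few pixels of size mismatch from integer division,
# then map out where the "restored" image disagrees with the original.
# Bright areas in the diff are details the network fabricated.
restored = cv2.resize(restored, (w, h))
diff = cv2.absdiff(original, restored)
cv2.imwrite("fabricated_detail_map.png", diff)
```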
Viral image may hint at a growing problem
Though it's easy to dismiss the viral Chris Rock claim by comparing it with photos taken during the live event, not every image altered with artificial intelligence or photo-editing software is as easy to spot. There are growing concerns about the use of AI to manipulate images and videos, which makes it possible, for example, to impersonate politicians in strikingly realistic deepfake videos. Beyond deepfakes, freely available AI tools can now generate realistic images of people who don't exist.
Fortunately, artificial intelligence may also be the solution to this problem. In 2018, for example, Adobe detailed a neural network trained to analyze the image sensor noise patterns in photos, enabling it to spot manipulated images even when those images appeared entirely authentic to the human eye. On the flip side, NVIDIA showcased a different AI technology not long after that used a neural network to scrub those same digital noise patterns, underscoring how difficult it may be to develop a foolproof detection system.
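To illustrate the underlying idea only (this is emphatically not Adobe's method), a crude noise residual can be approximated by subtracting a denoised copy of an image from the original, leaving mostly sensor noise behind. Manipulated or AI-generated regions often carry noise statistics that don't match the rest of the frame. A toy sketch, with a hypothetical filename:

```python
import cv2
import numpy as np

# Subtracting a denoised copy from the original isolates a rough
# noise residual; real forensic systems learn far subtler patterns.
image = cv2.imread("suspect_image.png")
denoised = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)
residual = cv2.absdiff(image, denoised)

# Compare noise strength across tiles of the frame. A tile whose
# variance is wildly out of line with its neighbors is a candidate
# for having been pasted in or synthetically generated.
gray = cv2.cvtColor(residual, cv2.COLOR_BGR2GRAY).astype(np.float32)
tile = 64
for y in range(0, gray.shape[0] - tile, tile):
    for x in range(0, gray.shape[1] - tile, tile):
        block = gray[y:y + tile, x:x + tile]
        print(f"tile ({x:4d},{y:4d}) noise variance: {block.var():8.2f}")
```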
It's easy to imagine social media platforms like Twitter one day using this kind of technology to proactively analyze uploaded images and videos, labeling those that appear to have been manipulated. Such labels already exist on Facebook, for example, which flags certain content that likely contains misinformation. The bigger question is whether those labels will be enough to convince viewers to doubt what their eyes see, or whether such content will flourish despite our best efforts to prevent it.