Catholic priest and podcaster Father Mike Schmitz informed his YouTube congregation of over 1.2 million subscribers in November that artificial intelligence deepfakes were impersonating him in an attempt to scam them. Schmitz warned his followers that they "couldn't always trust the words coming out of his mouth," because sometimes it wasn't really his mouth or his words.
The deepfakes featured a digitally fabricated version of Schmitz soliciting prayers and blessings in exchange for clicking a link. In one instance, the fake Schmitz, with an hourglass looming behind him, urged viewers to "act quickly, because the spots for sending prayers are already running out. And the next trip will only take place in four months." The real Schmitz, based in Duluth, Minnesota, included examples of the AI-generated impersonations in his public service announcement, highlighting the subtle robotic quality of the voice.
Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else's likeness using artificial intelligence. These AI-generated forgeries have become increasingly sophisticated, making it difficult for the average person to distinguish them from authentic content. The technology relies on machine learning algorithms, specifically deep learning, to analyze and replicate a person's facial expressions, voice, and mannerisms. This allows malicious actors to create convincing fake videos and audio recordings for various deceptive purposes, including fraud, disinformation campaigns, and identity theft.
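For readers curious about the mechanics, the classic face-swap design can be sketched in a few dozen lines: one shared encoder learns pose and expression, while a separate decoder is trained for each identity, so encoding one person's face and decoding it with the other person's decoder produces the swap. The sketch below is illustrative only; the class names, image size, and training details are assumptions, not taken from any particular deepfake tool.

```python
# Conceptual sketch of the shared-encoder / per-identity-decoder design behind
# classic face-swap deepfakes. Names and dimensions are illustrative; assumes
# face crops are already detected, aligned, and resized to 64x64 RGB tensors.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for one specific identity from the latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns pose and expression; each decoder learns one identity.
encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's face crops
decoder_b = Decoder()  # trained only on person B's face crops

# Simplified training objective: each decoder reconstructs its own identity.
loss_fn = nn.L1Loss()
faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch of person A crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in batch of person B crops
loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
        + loss_fn(decoder_b(encoder(faces_b)), faces_b))
loss.backward()  # an optimizer would minimize this over many batches

# The swap: encode person A's expression, decode with person B's decoder,
# yielding person B's face performing person A's expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```

Because the shared encoder never needs to tell the two identities apart, it tends to capture identity-neutral information such as head pose and expression, and that is precisely what makes the cross-decoding step produce a convincing impersonation.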
The rise of deepfake technology poses a significant challenge to society, eroding trust in digital media and raising concerns about the potential for manipulation. Experts warn that the increasing accessibility of deepfake creation tools could lead to a proliferation of scams targeting vulnerable populations.
Schmitz acknowledged the difficulty people have in discerning real from fake. "I can look at them and say 'That's ridiculous, I would never say that,'" Schmitz said. "But people can't necessarily tell. That's a problem."
Law enforcement agencies and technology companies are working to develop methods for detecting and combating deepfakes. These efforts include creating algorithms that can identify subtle inconsistencies in synthetic media and educating the public about the risks associated with deepfakes.
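As a rough illustration of the classifier-based branch of this detection work, the sketch below trains a small network to score individual face crops as real or synthetic. It is a toy under assumed inputs (64x64 crops, made-up labels), not a production detector; deployed systems typically combine many more signals, such as lip-sync consistency, blink patterns, and frequency-domain artifacts.

```python
# Minimal sketch of a per-frame real-vs-synthetic classifier. The model,
# labels, and data here are stand-ins for illustration only.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Predicts the probability that a 64x64 face crop is synthetic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 1),  # single logit: synthetic vs. real
        )

    def forward(self, x):
        return self.net(x)

model = FrameClassifier()
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: half "real" crops labeled 0, half "synthetic" crops labeled 1.
frames = torch.rand(16, 3, 64, 64)
labels = torch.cat([torch.zeros(8), torch.ones(8)]).unsqueeze(1)

loss = loss_fn(model(frames), labels)
loss.backward()  # one illustrative gradient step of training

# At inference, per-frame scores are typically averaged over a whole clip.
with torch.no_grad():
    fake_probability = torch.sigmoid(model(frames)).mean()
print(f"estimated share of synthetic frames: {fake_probability.item():.2f}")
```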