If you've used social media lately, you've probably seen a parade of deepfake ads — fake videos of celebrities endorsing products, or clips of politicians saying things that they never uttered. Annoying? Yes. But mostly, it’s just harmless clickbait, right?
Think again. Deepfakes are synthetic media created through machine learning. Or, in simpler terms, they are images or videos of people that are digitally altered, often maliciously, to spread misinformation. In recent years, they have become a problem worldwide. In December 2024, Romania annulled the first round of its presidential election after AI-generated content helped propel an unexpected candidate to a surprise first-round win. Was this the political "infocalypse" experts have been warning us about since 2017? Not quite.
Conversations around deepfakes have centered on politics and the fear that AI-generated media will sway, or even determine, elections. So far, that's not what experts have observed. Pornography and financial scams, not ballot boxes, are where this technology has done the most damage. The best defense against deepfakes can't be limited to new AI detection tools and stricter regulations. We also need to do something simpler, yet harder: recognize that algorithms across platforms are running the same playbook, triggering our fear, hatred, and greed to bypass our critical thinking. We must learn to notice when our buttons are being pushed and resist long enough to think for ourselves.
In 2023, technologists and watchdogs predicted deepfakes would dominate future elections. So far, Romania appears to be an anomaly. According to analyses by Time Magazine and the Knight Institute, AI's impact on U.S. elections in 2024 was underwhelming. For now, traditional misinformation tactics, such as presenting media out of context, spreading rumors, or simply lying, remain cheap and effective.
Experts suggest we resist the inclination to simply blame AI itself for misinformation in political campaigns. Rather than fixating on the supply of misinformation, they argue, we must confront the demand for it.
This demand is better known as confirmation bias. People really like being misinformed when the bad information aligns with beliefs they already hold. Addressing this phenomenon means looking in the mirror personally, politically, and societally. That's the hard part.
I could teach you to spot deepfakes. There are methods: checking for watermarks, visual glitches, unnatural eye movements, or mismatched audio. But that misses the bigger point: The same critical thinking that protects you from other forms of misinformation also works here. The question isn't "Is this a deepfake?" It's "Why do I want to believe this?"
The first widespread use of deepfakes involved using images of people to create porn without their consent.
In 2019, cybersecurity company Deeptrace found that 96% of deepfake videos were pornographic and that those videos overwhelmingly targeted and harmed women. The harms are well documented and include privacy invasion, humiliation, intimidation, and irreparable damage to reputation. Governments worldwide are trying to address these harms through legislation, with some success. The U.S. went nearly a decade without a federal law against deepfake porn until Congress passed the TAKE IT DOWN Act in April 2025.
Scams have been similarly supercharged. In a survey of 46,000 adults across 42 markets, the nonprofit Global Anti-Scam Alliance (GASA) found that 7 in 10 adults worldwide encountered a scam in the past year, and nearly a quarter lost money. GASA also estimates global losses from scams at about $42 billion. While scams existed long before AI, here's what's changed: Deepfakes and AI-generated content transformed scamming from a specialized skill into a point-and-click operation. Anyone with Internet access can generate convincing deepfake videos, clone people's voices, or craft tailored phishing emails in dozens of languages.
Deepfakes enable voice-cloning scams where criminals fake your loved one's voice, making you believe you're wiring bail money for a grandchild who's actually safe at home. They've also fueled a dramatic rise in sextortion, a form of blackmail involving threats to distribute explicit images unless the victim pays money, sends additional images, or meets other demands, particularly targeting teenage boys. The Financial Crimes Enforcement Network (FinCEN) reports sextortion complaints increased 59% between 2023 and 2024, with 55,000 victims reporting losses totaling $33.5 million. These schemes have been linked to multiple youth suicides, as extortionists often threaten to distribute fake but convincing nude images unless victims pay.
No defense is foolproof, but education and vigilance help. The online protection services company McAfee recommends extra caution when dealing with anyone "trying to short-circuit your critical thinking by putting you under pressure" through extreme urgency, emotional manipulation, demands for secrecy, unusual payment methods, or emotional hooks.
At the same time, AI can also defend against threats. When your bank flags a suspicious transaction, that's AI protecting you from fraud. The key is not to throw the AI baby out with the bathwater. Rather, it's to ask critical questions: What is this specific technology designed to do? How is it achieving that purpose? Are there documented risks and harms? What safeguards protect users from harm?
Today, AI hype and misinformation have sown hysteria, confusion, and panic, spawning yet another buzzword: AI literacy. But we don't need a new form of literacy to cut through the AI fog. What we need is to develop and trust our own ability to think critically and act ethically. Instead of an AI agent, we need to practice agency over our own lives.
Each time we allow a tool to dictate our attention, decisions, and worldview without question, we hand over pieces of ourselves. The ever-expanding pools of data, time, and money these systems extract prove this deeper reality. By intentionally practicing discernment, raising questions, and withholding consent, we can learn to trust ourselves and each other more while trusting the machines less. As a saying often attributed to Socrates puts it, "To find yourself, think for yourself." We can still rebuild a culture of critical thinking before we lose ourselves to the machines.
Di Zhang is a teacher and librarian in South King County. Passionate about promoting digital citizenship and information literacy in all forms, Di has taught these skills to media organizations, educators and students, librarians, and the general public. He lives in Federal Way with his wife, daughter, and son.