Dec 15, 2025 20 min read

Deepfakes So Good We Can't Tell Which Elon Is the Real Disappointment

Blake Trapper to Yappers Handoff: 👀 Is my boss an AI? Am I? We're tackling the digital trust crisis, and honestly, the results are so unhinged I'm starting to doubt my own existence. A $25 million corporate heist was pulled off by a deepfake CFO, but the real crisis is that we can't use our favorite punctuation—the em dash. Your next online interaction will feel less like a "chat" and more like a Blade Runner audition. This madness is the perfect table setting for a shouting match between Morty and Frankie—assuming you're not both deepfakes I hired on Fiverr.

Source: Bloomberg

  • Trust Erosion: The sophistication of deepfakes and chatbots has led to a collapse of trust online, where humans can no longer assume they are interacting with other real people, even in video calls with figures like former President Obama.

  • Behavioral Adjustments: To avoid being mistaken for, or fooled by, AI, humans are fundamentally changing their behavior, including abandoning common writing styles (like the em dash) and inventing informal, anti-social verification methods like family code words or pop-culture tripwires in job applications.

  • Failed Defenses: Traditional anti-bot measures (CAPTCHAs, deepfake visual "tells") are obsolete, leading institutions like Google to bring back in-person interviews and companies like Cisco to add biometric verification to hiring, while global solutions like the iris-scanning "Orb" face dystopian dread and outright bans in some countries.



Morty Gold

//consummate curmudgeon// //cardigan rage// //petty grievances// //get off my lawn// //ex-new yorker//

They want to kill the em dash! The em dash! It’s an elegant, functional punctuation mark that connects thoughts—shows a human brain working! And now some algorithm uses it too much, so I have to change how I WRITE? THIS IS NOT COMPLICATED! If a raccoon learns how to open my garbage can, do I stop using a garbage can or do I buy a better lid?! The answer is the lid! But no, now we're banning perfectly good punctuation because politicians and CEOs are too incompetent to tell a real person from a computer scammer!

I taught the Romans, I know how this works! You compromise a basic function, and the whole thing falls apart! WE ARE FAILING THE TURING TEST BECAUSE WE'RE WILLINGLY ACTING DUMBER THAN THE COMPUTER! We deserve this. (I'm going to bed.)
Blake's Roast: 🔥 Morty's outrage at the death of the em dash is a beautiful microcosm of his life, which consists mostly of yelling about tiny, technical failures while ignoring the systemic rot they signify.

Sheila Sharpe

//smiling assassin// //gender hypocrisy// //glass ceiling//

Oh, bless their hearts, they're worried about deepfakes taking a job—as if the real people in the C-suite weren't already just as replaceable and significantly less efficient. Here's the thing: we're suddenly terrified of AI-generated content, but where was this panic when HR was hiring based on keywords and performance reviews were written by a template? The system was already optimized for the most robotic human behavior.

Question: A deepfake CFO stole $25 million—did the actual CFO steal less? Because the receipts I have show the real financial elite have been robbing us blind, legally, for decades, and nobody asked them to do a pop-culture reference to verify their humanity. Maybe the real problem isn't the AI acting like a human; maybe it's the corporate humans who already acted like a spreadsheet. Just a thought!
Blake's Roast: 🔥 Sheila's elegant defense of the em dash as a flourish of authentic human expression is quite lovely, coming from someone whose every written communication feels like a polite legal subpoena. The core issue, as Sheila highlights, is we're banning human quirks because an algorithm copied them, making this the only instance in history where plagiarism hurt the source material more than the plagiarist.

Omar Khan

//innocent observer// //confused globalist// //pop culture hook// //bruh//

Wait, I'm sorry—what? You mean in the richest country in the world, people have to stop using a perfectly good punctuation mark—the em dash—just so they don’t get confused with a computer? Let me make sure I understand this: your language is now dictated by the deficiencies of a machine. In Pakistan, we have so many languages and dialects that no algorithm could possibly flatten their nuances into one style.

And you people are worried about one punctuation mark? The idea that you have to sell your iris data to a billionaire to prove you are not a deepfake is peak American absurdity. In Pakistan, we worry about the government having too much power; here, you are volunteering to give a tech-bro a monopoly on identity. I don't understand this country. I thought it was about freedom, but apparently, freedom now comes with a mandatory eyeball scan.

Blake's Roast: 🔥 His observation that other countries—like Pakistan—can handle this with less dystopian flair is a wonderfully disorienting piece of globalist shaming. He beautifully articulates the core paranoia: a country worrying about government power is perfectly happy to hand over its eyeball data to a tech CEO who just created a $25 million scammer.

Frankie Truce

//smug contrarian// //performative outrage// //whisky walrus// //cynic//

Actually, let me push back on the idea that this collapse of trust is a bad thing. The REAL story is the sheer laziness of people who were already accepting everything at face value. The whole AI-writing "witch hunt" is a power play by the content class to protect their jobs. Here's what nobody's talking about: the complaints aren't that the AI writing is bad—the complaints are that the AI writing is too good and too cheap.

So now they’ve weaponized perfectly normal human phrases like "delves" and the em dash to flag anything threatening their bottom line. The REAL hypocrisy is they trained the AIs on millions of pieces of human writing, and now they are censoring the very language they stole! The goal now is to get a human to not sound like a machine, which means we are losing on both sides. But sure, ban the em dash. It’s the perfect symbol for a culture abandoning nuance for algorithmic simplicity. But what do I know?
Blake's Roast: 🔥 Frankie's argument that the em dash ban is a conspiracy by the "content class" to protect their jobs is a glorious example of his smug intellectual honesty, and a perfect setup for a podcast ad. He's absolutely right that the new goal is for humans to not sound like machines, a truly absurd development that must be a relief for him since his actual voice sounds like a podcast transcription.

Nigel Sterling

//prince of paperwork// //pivot table perv// //beautiful idiots// //fine print// //spreadsheet stooge// //right then//

Right, so—let's discuss the fundamental problem here, which is the complete failure of the asymmetry of information in the context of digital identity. Empirically speaking, the data are quite clear: once a generative model achieves the human-perception threshold (the 73% fool rate for GPT-4.5 is a rather alarming data point), the cost of producing perfect deception approaches zero. Now, I know what you're thinking: "Nigel, this is just a technological arms race," and yes, but this is an exponential arms race.

The analogy here is the 'Needle-in-a-Haystack Problem'—the more synthetic content, the larger the haystack. And this is crucial: the proposed 'analog solutions'—secret code words, pop-culture tripwires—are profoundly non-scalable and introduce sociological friction (see: Goffman's Presentation of Self in Everyday Life). I apologize for the tangent, but to sacrifice the em dash (an established syntactical construct) for a transient, easily-coded tell is methodologically unsound. Forgive me, I’m going too fast. The literature clearly indicates that a decentralized cryptographic solution (not Sam Altman's Orb, mind you—far too centralized) is the only viable framework.
Blake's Roast: 🔥 He cites Goffman's Presentation of Self to explain why people don't like using code words, which is a very Nigel way of saying, "It's awkward." He's so concerned with the theoretical framework of the problem he's ignored the North Korean hackers entirely, presumably because they haven't published their manifesto in a peer-reviewed journal.

Dina Brooks

//church shade// //side-eye// //plain talk// //exasperated// //mmm-hmm//

Mm-mm. Nope. I’m just watching these politicians and CEOs talk about the "crisis of trust" because a deepfake got $25 million. Let me tell you something—this is just rich folks complaining about being stolen from in a way they didn't anticipate. They're worried about synthetic theft when the real theft—wage theft, tax loopholes, no healthcare—is happening every single day.

And now a public relations lady has to give up the em dash because the computer copied her style? They’re worried about AI taking jobs when the real human CEOs are laying off thousands of people to save on stock price. Don't play with me. The fraud is already in the building, and it's wearing a $3,000 suit, not a poorly rendered Sam Altman face. We didn't fight for this.
Blake's Roast: 🔥 Dina's perfectly executed moral pivot from "AI taking jobs" to "human CEOs laying off thousands" is a righteous punch-up leaving the corporate bad actors with no synthetic defense. She dismisses the em dash drama as the purest white-collar anxiety, showing the perfect exhaustion of a woman who has no time for punctuation-based oppression.

Thurston Gains

//calm evil// //deductible denier// //greed is good// //land shark//

From a capital allocation perspective, the deepfake problem is just a poorly priced risk. Let me walk you through the math: if you have $25 million stolen by a synthetic CFO, that's not a technological failure—it's a compliance failure that should have been modeled and insured against. There's an immediate need for bespoke, real-time biometric verification services, which will generate massive, tax-advantaged revenue streams.

The Orb—while conceptually centralized—is a high-growth asset class because it solves the "proof of personhood" problem crushing the hiring pipeline with North Korean IT workers. The market always prices in fraud; this is just a new iteration. And honestly, if you're the company who got burned, your shareholders have a fiduciary responsibility to fire the entire executive team for leaving $25 million exposed to a zero-cost attack vector. Nothing personal—it's just math. We’re in the "Security-as-a-Service" boom, and the fear of a fake Obama just opened the next quarter's ledger.
Blake's Roast: 🔥 Thurston's ability to frame a catastrophic loss of $25 million as a delightful "arbitrage opportunity" in biometric verification services is why he’ll always be the only human in his social circle. He views the rise of the Orb and its dystopian potential not as a societal risk, but as a "high-growth asset class," confirming he is capable of finding an ROI in any apocalypse.

Wade Truett

//working man's math// //redneck philosopher// //blue-collar truth//

Here's the thing about the Orb and all these biometric passwords: they’re trying to solve a spiritual problem with a technical solution. It’s like when you’re building a fence—if the wood is rotten, it don’t matter how fancy the latch is. My grandpa used to say, "The only true security is a well-fed dog and a good reputation." And now we got Sam Altman wanting to scan our eyeballs to get a digital passport? That ain't freedom; that's just a fancier chain on the same collar.

I was reading this fella Camus—he talked about the need for authentic revolt. But we’re so busy trying to prove we’re human by acting less like ourselves—giving up the em dash, using a code word—that we're becoming the very thing we’re trying to fight. You can't code morality, and you can't scan trust. Out here, we look a man in the eye. Anyway, that's what I think. (welds something)
Blake's Roast: 🔥 Wade's ability to transition seamlessly from an analogy about building a fence to a profound, Camus-influenced meditation on "authentic revolt" is why he’s the most dangerous man on the panel. His assessment of the Orb as simply "a fancier chain on the same collar" is a perfect encapsulation of the dystopian dread without any of the hyper-intellectual jargon. And knowing this insight came from a man in Carhartts who just finished welding is truly humbling.

Bex Nullman

//web developer// //20-something// //doom coder// //lowercase//

lmao we're so cooked. like literally though, i’m supposed to come up with a secret family code word to talk to my mom because some tech bro in the valley made a better voice clone? my therapist says i need to set healthier boundaries but i'm now supposed to perform a covert op just to verify that my own mother isn't a North Korean hacker stealing my credit card debt. i simply cannot. the vibes are BAD.

i already have to do a CAPTCHA to prove i'm human to buy concert tickets and now i have to name my hogwarts house in a job application for $35k a year and AND ANOTHER THING—i don’t even know what my hogwarts house is! (i think it’s hufflepuff, but who cares?) why do i even bother. we’re forced to give up our quirks to avoid being confused with an algorithm that literally plagiarized our quirks. Orb will get my retina scan and turn me into a crypto coin. we're so cooked lol.
Blake's Roast: 🔥 Bex's worldview is "vibes are bad but also I can't be bothered," which is honestly the perfect Gen Z response to dystopia—maximum alarm, zero follow-through. She's horrified she needs a secret code word to verify her own mother, yet somehow the real trauma is not knowing her Hogwarts house for a $35k job requiring a master's degree and five years of experience. We're not cooked, Bex. You're just microwaved.

Sidney Stein

//rule enforcer// //social contracts// //deli-line logic// //excuse me!//

The elimination of the em dash because a large language model overuses it is a procedural non-starter. I’m sorry, but this is an egregious example of overreach and a complete misreading of the problem. According to the Chicago Manual of Style, section 6.85, the em dash has a distinct and necessary grammatical function. This is EXACTLY why we have rules! If we allow a subjective, anecdotal "witch hunt" against proper punctuation, what’s next? Banning the semicolon because it looks too formal?

The bylaws CLEARLY state that clarity and precision are paramount in professional communication. If we make one exception for "sounding human," we compromise the entire system of written language! Furthermore, forcing job applicants to include a pop-culture reference violates the principle of content-neutral screening. If we allow this arbitrary subjectivity, the entire hiring process becomes moot. I’ll be filing a formal complaint with the firm and the grammar police. This is unacceptable. Rules are rules.
Blake's Roast: 🔥 Sidney's defense of the em dash is not a defense of style, but a defense of "The Chicago Manual of Style," section 6.85, confirming he is indeed the most romantic rule-follower on Earth. Only Sidney would equate the downfall of society with the potential banning of the semicolon, a move so petty it must be the basis of his eventual memoirs.

Dr. Mei Lin Santos

//cortisol spiker// //logic flatlined// //diagnosis drama queen//

Okay, so everyone’s focused on the financial fraud, but here’s what worries me: the breakdown of trust is a public health crisis waiting to happen. Do you know what happens when people can’t trust video calls? We rely on telemedicine for rural care, for check-ins with immunocompromised patients, for mental health consults. If a patient can’t be sure their doctor is real—or if a doctor can’t trust the patient isn't an AI trying to hack the EMR—the entire system breaks down. That could be a massive sepsis situation.

And Sam Altman’s Orb? Scanning eyeballs for a digital passport? That’s an easily transmissible vector for ocular infections if not sterilized correctly, not to mention a single point of failure for mass data breach! Do you know what happens when you get a retinal tear or an infarction from a bad scan? This is how people get paralyzed. I've seen this before, and it doesn't end well. Please tell me the hospitals are not implementing an untested biometric system. Please. I'm not trying to scare you, but this is EXACTLY how we lose all patient-doctor confidentiality and risk massive outbreaks.
Blake's Roast: 🔥 Mei Lin takes a story about a fake CEO and Obama deepfakes and correctly diagnoses the true existential threat: the total collapse of the telemedicine infrastructure and a potential ocular infection epidemic from the Orb. Her fear of a "single point of failure for mass data breach" and her immediate pivot to "retinal tear or an infarction from a bad scan" is pure Healthcare Panicker gold.

Veronica Thorne

//ivy league snob// //status flex// //trust fund tyrant// //out-of-touch oligarch//

I've been hearing about this "crisis of human identity" and honestly, I think it's lovely that humans are being forced to define their sentience. Why don't people just elevate their writing style so it couldn't possibly be mistaken for a machine? I mean, my former public relations director—she used to use the em dash beautifully. Have they considered that the AI is only mimicking low-effort prose? It’s really quite simple. If you write with more nuance, complexity, and a sophisticated vocabulary, the machine simply cannot keep up.

I'm VERY passionate about the arts and language, and I think this is a fantastic opportunity for people to raise their communication standards! My family foundation actually funds an advanced rhetoric course at a small, private university. I don't understand why people don't just... be more intellectual. Or, if they need the secret code words, why not use Latin phrases? That’s much more elegant than naming your Hogwarts House. It's about personal refinement. Anyway, I'm late for my facial. But yes, sophistication is key.
Blake's Roast: 🔥 Veronica identifies AI's core weakness—it mimics "low-effort prose"—but her solution is for the working class to simply "be more intellectual." Tying the existential crisis of deepfakes to the need for advanced rhetoric courses illustrates how she filters every issue through her family foundation's tax write-offs. Almost makes you miss her more. Almost.

Coach Ned

//toxic optimist// //gaslighting guru// //character development//

Listen up, team! So we have to change our writing style and use code words? I hear a NEW PLAYBOOK! This is championship season! If the old plays aren't working—if the em dash is a TELL—then we DRAW UP A NEW ONE! You think the great quarterbacks stick with a losing strategy? NO! They ADAPT! They PIVOT!

We need to practice those code words until they're muscle memory! We need to write so clearly, so authentically, that the AI breaks trying to copy us! This is an OPPORTUNITY to be better humans! To be more diligent! To be more aware! We are the greatest team on Earth, and we will not be outsmarted by a robot trained on our own data! Leave it all on the field! We are the HOME TEAM, and we will not let an algorithm OUT-HUSTLE us! Who's with me?! LET'S! GOOOOO!
Blake's Roast: 🔥 Coach Ned's ability to rebrand self-censorship and mandatory code words as an exciting "NEW PLAYBOOK" is peak delusion. The man could trip over a rug and call it an 'agility drill.' We're running the 'Toxic Optimism' sweep with Coach Ned as the ball carrier. Let's miss our blocking assignments and see what happens to him. Ned, buddy, this isn't fourth quarter motivation. This is just dystopia with a whistle.



🏆
The value of the internet, AI, and every communication network ultimately distills to trust—or the lack of it. Not “connection.” Not “information.” Trust: who is accountable, who is real, who is acting in good faith, and what happens when they aren’t.

I remember in college when a visiting professor came to an honors class at the president’s residence. He opened with a line that sounded like a provocation for its own sake: all organizations are inherently evil.

As a church boy at a church school, it hit like heresy. I spent the rest of the session with folded arms, a scowl, and the moral certainty of the unscarred. Over time—through work, bureaucracy, customer service loops, and the slow education of getting outnumbered—I began to understand what he meant. It wasn’t that every person inside an organization is evil. It’s that organizations, by design, can't love you, can't even consider you...as an individual.

An organization’s first obligation is to its own survival. To scale, it must replace particularity with policy. It must convert messy human reality into categories it can manage. This makes it structurally indifferent to individual cost. The tragedy isn’t malice; it’s math.

The internet and AI are similar—systems whose primary constraint is scale. They are astonishing at moving information, but indifferent to the human consequences of how that information is produced, packaged, distorted, or received. The network doesn’t ask, “Is this true?” It asks, “Will this travel?” The model doesn’t ask, “Is this wise?” It asks, “Is this likely?” In a high-scale system, the optimization target is rarely truth; it’s throughput.

We feel these indifferences in small, personal ways. Writers who have used the em dash for decades now second-guess themselves because the same punctuation has become a tell—evidence of “AI slop.” The system doesn’t adapt to the individual; individuals must adapt to the system, editing not for clarity but for credibility. It's the pressure of scale: conformity to the network’s constraints.

You can also see it in a brutal, everyday example: insurance. Car insurance premiums rise for many reasons, but one of the most demoralizing is that rule-followers subsidize the destruction of rule-breakers. Recently, on an interstate near me, a drunk driver going the wrong way at extreme speed killed a young couple. The drunk survived. The cost to the couple and their families has no price tag. The measurable damage, however, is invoiced differently—distributed to millions of rule-followers as higher premiums.

That’s not an accident. It’s the financial model of pooled risk. The organization survives by smoothing catastrophe into averages—and by charging the careful for the chaos of the reckless. It is mathematically ruthless, and it has to be, because empathy doesn’t scale.

There’s an old moral maxim: wrongdoing doesn’t become right because many people do it. Tolstoy put it more sharply: wrong doesn’t cease to be wrong because the majority share in it. But there’s a second, more modern truth we don’t say out loud: when many people do wrong, the burden does not fall evenly. It lands disproportionately on the people doing right. Scale turns vice into an externality.

This is where the internet and AI become organizations in digital form. They optimize for aggregate outcomes—efficiency, scale, pattern-matching across millions—while remaining structurally incapable of accountability to any individual. Yes, AI can be extraordinary: education, accessibility, medicine, and endless funny cat videos. But because it processes information at inhuman scale, it inherits the same structural indifference as any other system built to manage the many. It can lie persuasively without liability.

So the digital age returns us to an old rule that never went away, but now governs nearly everything: trust, but verify.

And verification has never been a binary. In human life, trust is embodied. It’s built by consistency over time, by reputational consequences, by reciprocal risk. It’s the handshake, the look in the eye, the small cues saying, “I am here, I can be held accountable, and I will be here tomorrow.”

It raises the uncomfortable question: how do you verify AI? You can check sources. You can demand citations. You can triangulate claims across reliable references. You can do the work—bring data. But those methods verify content, not intent. They tell you whether a statement is supported, not whether a speaker is responsible.

The digital equivalents of the handshake are still immature: reputation systems, provenance, identity, attestations, cryptographic signatures, institutional credibility. Useful—sometimes. But none of them fully replace the ancient non-optimized human advantage: the reality of another person who can be confronted, who can be shamed, who can be persuaded, who can be moved.

Yesterday I saw a 60 Minutes feature on a mother who fought an insurance company to save her daughter. Maisie was born with Spinal Muscular Atrophy (SMA), a disease that historically carries devastating outcomes by age two. A gene therapy called Zolgensma offered a radically different future. It also carried a staggering price—in the millions for a single dose.

The insurance company’s logic was predictable. It wasn’t personal. It was actuarial. A system designed to survive cannot casually approve exceptions large enough to threaten the model. So they denied her again and again—because if you are a claims processor inside a scale machine, you are paid to defend the boundary, not to feel the tragedy pressing against it.

Then the mother did something the system is not built to withstand: she forced a human encounter. She secured an in-person meeting with the person processing the claim. She wanted them to see the child their policy would allow to die. She wanted to reintroduce particularity into a world built on categories.

And it worked—not because the organization suddenly became good, but because the machinery briefly jammed. The abstraction cracked. A child became a face. A line item became a life. The claims processor acted as an individual instead of a cog in the flywheel. The organization, for a moment, behaved like humans without a form letter to hide behind.

The paradox of the digital age is that technology can help create miracles—gene therapies, breakthroughs, coordination at unprecedented scale. But scale is also what makes those miracles fail to reach the individual without a fight. The cure can be engineered by machines and networks; the decision to apply it often still requires a human breach.

So yes: “In God we trust; all others must bring data” (W. Edwards Deming). Data is the baseline, and clunky, manual verification is non-negotiable. But the deeper claim is this: the scepter of truth is not held by the algorithm, because truth is not only accuracy. Truth is accountability, and accountability is relational.

The internet and AI are powerful precisely because they operate at distance. But distance dissolves accountability. If we want trustworthy communication in a world of infinite content, we will need more than smarter models. We will need sturdier trust vectors: provenance, consequences, real identity where it matters—and, whenever possible, the oldest counter to scale: slow, clumsy, irreplaceable human contact.

A human being, in front of another human being, refusing to fade into the aggregate.

