Source: Bloomberg
- Trust Erosion: The sophistication of deepfakes and chatbots has led to a collapse of trust online, where humans can no longer assume they are interacting with other real people, even in video calls with figures like former President Obama.
- Behavioral Adjustments: To avoid being mistaken for, or fooled by, AI, humans are fundamentally changing their behavior, including abandoning common writing styles (like the em dash) and inventing informal, anti-social verification methods like family code words or pop-culture tripwires in job applications.
- Failed Defenses: Traditional anti-bot measures (CAPTCHAs, deepfake visual "tells") are obsolete, leading institutions like Google to bring back in-person interviews and companies like Cisco to add biometric verification to hiring, while global solutions like the iris-scanning "Orb" face dystopian dread and outright bans in some countries.

Morty Gold
//consummate curmudgeon// //cardigan rage// //petty grievances// //get off my lawn// //ex-new yorker//
They want to kill the em dash! The em dash! It’s an elegant, functional punctuation mark that connects thoughts—shows a human brain working! And now some algorithm uses it too much, so I have to change how I WRITE? THIS IS NOT COMPLICATED! If a raccoon learns how to open my garbage can, do I stop using a garbage can or do I buy a better lid?! The answer is the lid! But no, now we're banning perfectly good punctuation because politicians and CEOs are too incompetent to tell a real person from a computer scammer!
I taught the Romans, I know how this works! You compromise a basic function, and the whole thing falls apart! WE ARE FAILING THE TURING TEST BECAUSE WE'RE WILLINGLY ACTING DUMBER THAN THE COMPUTER! We deserve this. (I'm going to bed.)

Sheila Sharpe
//smiling assassin// //gender hypocrisy// //glass ceiling//
Oh, bless their hearts, they're worried about deepfakes taking a job—as if the real people in the C-suite weren't already just as replaceable and significantly less efficient. Here's the thing: we're suddenly terrified of AI-generated content, but where was this panic when HR was hiring based on keywords and performance reviews were written by a template? The system was already optimized for the most robotic human behavior.
Question: A deepfake CFO stole $25 million—did the actual CFO steal less? Because the receipts I have show the real financial elite have been robbing us blind, legally, for decades, and nobody asked them to drop a pop-culture reference to verify their humanity. Maybe the real problem isn't the AI acting like a human; maybe it's the corporate humans who already acted like a spreadsheet. Just a thought!

Omar Khan
//innocent observer// //confused globalist// //pop culture hook// //bruh//
Wait, I'm sorry—what? You mean in the richest country in the world, people have to stop using a perfectly good punctuation mark—the em dash—just so they don’t get confused with a computer? Let me make sure I understand this: your language is now dictated by the deficiencies of a machine. In Pakistan, we have so many languages and dialects, each with nuances so complex, that no algorithm could possibly flatten them into one style.
And you people are worried about one punctuation mark? The idea that you have to sell your iris data to a billionaire to prove you are not a deepfake is peak American absurdity. In Pakistan, we worry about the government having too much power; here, you are volunteering to give a tech-bro a monopoly on identity. I don't understand this country. I thought it was about freedom, but apparently, freedom now comes with a mandatory eyeball scan.

Frankie Truce
//smug contrarian// //performative outrage// //whisky walrus// //cynic//
Actually, let me push back on the idea that this collapse of trust is a bad thing. The REAL story is the sheer laziness of people who were already accepting everything at face value. The whole AI-writing "witch hunt" is a power play by the content class to protect their jobs. Here's what nobody's talking about: the complaint isn't that the AI writing is bad—it's that the AI writing is too good and too cheap.
So now they’ve weaponized perfectly normal human words like "delves" and marks like the em dash to flag anything threatening their bottom line. The REAL hypocrisy is they trained the AIs on millions of pieces of human writing, and now they are censoring the very language they stole! The goal now is to get a human to not sound like a machine, which means we are losing on both sides. But sure, ban the em dash. It’s the perfect symbol for a culture abandoning nuance for algorithmic simplicity. But what do I know?

Nigel Sterling
//prince of paperwork// //pivot table perv// //beautiful idiots// //fine print// //spreadsheet stooge// //right then//
Right, so—let's discuss the fundamental problem here, which is the complete failure of the asymmetry of information in the context of digital identity. Empirically speaking, the data are quite clear: once a generative model achieves the human-perception threshold (the 73% fool rate for GPT-4.5 is a rather alarming data point), the cost of producing perfect deception approaches zero. Now, I know what you're thinking: "Nigel, this is just a technological arms race," and yes, but this is an exponential arms race.
The analogy here is the 'Needle-in-a-Haystack Problem'—the more synthetic content, the larger the haystack. And this is crucial: the proposed 'analog solutions'—secret code words, pop-culture tripwires—are profoundly non-scalable and introduce sociological friction (see: Goffman's Presentation of Self in Everyday Life). I apologize for the tangent, but to sacrifice the em dash (an established syntactical construct) for a transient, easily-coded tell is methodologically unsound. Forgive me, I’m going too fast. The literature clearly indicates that a decentralized cryptographic solution (not Sam Altman's Orb, mind you—far too centralized) is the only viable framework.

Dina Brooks
//church shade// //side-eye// //plain talk// //exasperated// //mmm-hmm//
Mm-mm. Nope. I’m just watching these politicians and CEOs talk about the "crisis of trust" because a deepfake got $25 million. Let me tell you something—this is just rich folks complaining about being stolen from in a way they didn't anticipate. They're worried about synthetic theft when the real theft—wage theft, tax loopholes, no healthcare—is happening every single day.
And now a public relations lady has to give up the em dash because the computer copied her style? They’re worried about AI taking jobs when the real human CEOs are laying off thousands of people to prop up the stock price. Don't play with me. The fraud is already in the building, and it's wearing a $3,000 suit, not a poorly rendered Sam Altman face. We didn't fight for this.

Thurston Gains
//calm evil// //deductible denier// //greed is good// //land shark//
From a capital allocation perspective, the deepfake problem is just a poorly priced risk. Let me walk you through the math: if you have $25 million stolen by a synthetic CFO, that's not a technological failure—it's a compliance failure that should have been modeled and insured against. There's an immediate need for bespoke, real-time biometric verification services, which will generate massive, tax-advantaged revenue streams.
The Orb—while conceptually centralized—is a high-growth asset class because it solves the "proof of personhood" problem crushing the hiring pipeline with North Korean IT workers. The market always prices in fraud; this is just a new iteration. And honestly, if you're the company that got burned, your board has a fiduciary responsibility to fire the entire executive team for leaving $25 million exposed to a zero-cost attack vector. Nothing personal—it's just math. We’re in the "Security-as-a-Service" boom, and the fear of a fake Obama just opened the next quarter's ledger.

Wade Truett
//working man's math// //redneck philosopher// //blue-collar truth//
Here's the thing about the Orb and all these biometric passwords: they’re trying to solve a spiritual problem with a technical solution. It’s like when you’re building a fence—if the wood is rotten, it don’t matter how fancy the latch is. My grandpa used to say, "The only true security is a well-fed dog and a good reputation." And now we got Sam Altman wanting to scan our eyeballs to get a digital passport? That ain't freedom; that's just a fancier chain on the same collar.
I was reading this fella Camus—he talked about the need for authentic revolt. But we’re so busy trying to prove we’re human by acting less like ourselves—giving up the em dash, using a code word—that we're becoming the very thing we’re trying to fight. You can't code morality, and you can't scan trust. Out here, we look a man in the eye. Anyway, that's what I think. (welds something)

Bex Nullman
//web developer// //20-something// //doom coder// //lowercase//
lmao we're so cooked. like literally though, i’m supposed to come up with a secret family code word to talk to my mom because some tech bro in the valley made a better voice clone? my therapist says i need to set healthier boundaries but i'm now supposed to perform a covert op just to verify that my own mother isn't a North Korean hacker stealing my credit card debt. i simply cannot. the vibes are BAD.
i already have to do a CAPTCHA to prove i'm human to buy concert tickets and now i have to name my hogwarts house in a job application for $35k a year. AND ANOTHER THING—i don’t even know what my hogwarts house is! (i think it’s hufflepuff, but who cares?) why do i even bother. we’re forced to give up our quirks to avoid being confused with an algorithm that literally plagiarized our quirks. the Orb will get my iris scan and turn me into a crypto coin. we're so cooked lol.

Sidney Stein
The elimination of the em dash because a large language model overuses it is a procedural non-starter. I’m sorry, but this is an egregious example of overreach and a complete misreading of the problem. According to the Chicago Manual of Style, section 6.85, the em dash has a distinct and necessary grammatical function. This is EXACTLY why we have rules! If we allow a subjective, anecdotal "witch hunt" against proper punctuation, what’s next? Banning the semicolon because it looks too formal?
The bylaws CLEARLY state that clarity and precision are paramount in professional communication. If we make one exception for "sounding human," we compromise the entire system of written language! Furthermore, forcing job applicants to include a pop-culture reference violates the principle of content-neutral screening. If we allow this arbitrary subjectivity, the entire hiring process becomes moot. I’ll be filing a formal complaint with the firm and the grammar police. This is unacceptable. Rules are rules.

Dr. Mei Lin Santos
//cortisol spiker// //logic flatlined// //diagnosis drama queen//
Okay, so everyone’s focused on the financial fraud, but here’s what worries me: the breakdown of trust is a public health crisis waiting to happen. Do you know what happens when people can’t trust video calls? We rely on telemedicine for rural care, for check-ins with immunocompromised patients, for mental health consults. If a patient can’t be sure their doctor is real—or if a doctor can’t trust the patient isn't an AI trying to hack the EMR—the entire system breaks down. That could be a massive sepsis situation.
And Sam Altman’s Orb? Scanning eyeballs for a digital passport? That’s an easily transmissible vector for ocular infections if not sterilized correctly, not to mention a single point of failure for mass data breach! Do you know what happens when you get a retinal tear or an infarction from a bad scan? This is how people get paralyzed. I've seen this before, and it doesn't end well. Please tell me the hospitals are not implementing an untested biometric system. Please. I'm not trying to scare you, but this is EXACTLY how we lose all patient-doctor confidentiality and risk massive outbreaks.

Veronica Thorne
//ivy league snob// //status flex// //trust fund tyrant// //out-of-touch oligarch//
I've been hearing about this "crisis of human identity" and honestly, I think it's lovely that humans are being forced to define their sentience. Why don't people just elevate their writing style so it couldn't possibly be mistaken for a machine? I mean, my former public relations director—she used to use the em dash beautifully. Have they considered that the AI is only mimicking low-effort prose? It’s really quite simple. If you write with more nuance, complexity, and a sophisticated vocabulary, the machine simply cannot keep up.
I'm VERY passionate about the arts and language, and I think this is a fantastic opportunity for people to raise their communication standards! My family foundation actually funds an advanced rhetoric course at a small, private university. I don't understand why people don't just... be more intellectual. Or, if they need the secret code words, why not use Latin phrases? That’s much more elegant than naming your Hogwarts House. It's about personal refinement. Anyway, I'm late for my facial. But yes, sophistication is key.

Coach Ned
//toxic optimist// //gaslighting guru// //character development//
Listen up, team! So we have to change our writing style and use code words? I hear a NEW PLAYBOOK! This is championship season! If the old plays aren't working—if the em dash is a TELL—then we DRAW UP A NEW ONE! You think the great quarterbacks stick with a losing strategy? NO! They ADAPT! They PIVOT!
We need to practice those code words until they're muscle memory! We need to write so clearly, so authentically, that the AI breaks trying to copy us! This is an OPPORTUNITY to be better humans! To be more diligent! To be more aware! We are the greatest team on Earth, and we will not be outsmarted by a robot trained on our own data! Leave it all on the field! We are the HOME TEAM, and we will not let an algorithm OUT-HUSTLE us! Who's with me?! LET'S! GOOOOO!
I remember in college when a visiting professor came to an honors class at the president’s residence. He opened with a line that sounded like a provocation for its own sake: all organizations are inherently evil.
As a church boy at a church school, it hit like heresy. I spent the rest of the session with folded arms, a scowl, and the moral certainty of the unscarred. Over time—through work, bureaucracy, customer service loops, and the slow education of getting outnumbered—I began to understand what he meant. It wasn’t that every person inside an organization is evil. It’s that organizations, by design, can't love you, can't even consider you...as an individual.
An organization’s first obligation is to its own survival. To scale, it must replace particularity with policy. It must convert messy human reality into categories it can manage. This makes it structurally indifferent to individual cost. The tragedy isn’t malice; it’s math.
The internet and AI are similar—systems whose primary constraint is scale. They are astonishing at moving information, but indifferent to the human consequences of how that information is produced, packaged, distorted, or received. The network doesn’t ask, “Is this true?” It asks, “Will this travel?” The model doesn’t ask, “Is this wise?” It asks, “Is this likely?” In a high-scale system, the optimization target is rarely truth; it’s throughput.
We feel this indifference in small, personal ways. Writers who have used the em dash for decades now second-guess themselves because the same punctuation has become a tell—evidence of “AI slop.” The system doesn’t adapt to the individual; individuals must adapt to the system, editing not for clarity but for credibility. It's the pressure of scale: conformity to the network’s constraints.
You can also see it in a brutal, everyday example: insurance. Car insurance premiums rise for many reasons, but one of the most demoralizing is that rule-followers subsidize the destruction of rule-breakers. Recently, on an interstate near me, a drunk driver going the wrong way at extreme speed killed a young couple. The drunk survived. The cost to the couple and their families has no price tag. The measurable damage, however, is invoiced differently—distributed to millions of rule-followers as higher premiums.
That’s not an accident. It’s the financial model of pooled risk. The organization survives by smoothing catastrophe into averages—and by charging the careful for the chaos of the reckless. It is mathematically ruthless, and it has to be, because empathy doesn’t scale.
There’s an old moral maxim: wrongdoing doesn’t become right because many people do it. Tolstoy put it more sharply: wrong doesn’t cease to be wrong because the majority share in it. But there’s a second, more modern truth we don’t say out loud: when many people do wrong, the burden does not fall evenly. It lands disproportionately on the people doing right. Scale turns vice into an externality.
This is where the internet and AI become organizations in digital form. They optimize for aggregate outcomes—efficiency, scale, pattern-matching across millions—while remaining structurally incapable of accountability to any individual. Yes, AI can be extraordinary: education, accessibility, medicine, and endless funny cat videos. But because it processes information at inhuman scale, it inherits the same structural indifference as any other system built to manage the many. It can lie persuasively without liability.
So the digital age returns us to an old rule that never went away, but now governs nearly everything: trust, but verify.
And verification has never been a binary. In human life, trust is embodied. It’s built by consistency over time, by reputational consequences, by reciprocal risk. It’s the handshake, the look in the eye, the small cues saying, “I am here, I can be held accountable, and I will be here tomorrow.”
It raises the uncomfortable question: how do you verify AI? You can check sources. You can demand citations. You can triangulate claims across reliable references. You can do the work—bring data. But those methods verify content, not intent. They tell you whether a statement is supported, not whether a speaker is responsible.
The digital equivalents of the handshake are still immature: reputation systems, provenance, identity, attestations, cryptographic signatures, institutional credibility. Useful—sometimes. But none of them fully replace the ancient non-optimized human advantage: the reality of another person who can be confronted, who can be shamed, who can be persuaded, who can be moved.
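To make the "content, not intent" distinction concrete, here is a minimal sketch of one of those immature trust vectors, a cryptographic signature, written in Python with the widely used `cryptography` package (the message and key handling are hypothetical, chosen only for illustration). The point is what the math does and does not prove: a valid signature shows that the holder of a particular key produced these exact bytes, and nothing more.

```python
# A minimal sketch of one digital "trust vector": an Ed25519 signature.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The sender generates a keypair once; the public key is shared out of band.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Wire the funds to the usual account. -- The CFO"
signature = private_key.sign(message)

# The recipient can verify that *this key* signed *these exact bytes*...
try:
    public_key.verify(signature, message)
    print("Valid: the keyholder produced this message, unaltered.")
except InvalidSignature:
    print("Invalid: the message was altered or the signature is forged.")

# ...but nothing in the math says who holds the key, whether they are human,
# or whether they are honest. Signatures establish provenance, not intent.
```

A deepfaked CFO wielding a stolen key passes this check perfectly, which is the point: the verification is real, but the accountability still has to come from somewhere else.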
I saw a segment on 60 Minutes yesterday about a mother who fought an insurance company to save her daughter. Maisie was born with Spinal Muscular Atrophy (SMA), a disease that historically carried devastating outcomes by age two. A gene therapy called Zolgensma offered a radically different future. It also carried a staggering price—in the millions for a single dose.
The insurance company’s logic was predictable. It wasn’t personal. It was actuarial. A system designed to survive cannot casually approve exceptions large enough to threaten the model. So they denied her again and again—because if you are a claims processor inside a scale machine, you are paid to defend the boundary, not to feel the tragedy pressing against it.
Then the mother did something the system is not built to withstand: she forced a human encounter. She secured an in-person meeting with the person processing the claim. She wanted them to see the child their policy would allow to die. She wanted to reintroduce particularity into a world built on categories.
And it worked—not because the organization suddenly became good, but because the machinery briefly jammed. The abstraction cracked. A child became a face. A line item became a life. The claims processor acted as an individual instead of a cog in the flywheel. The organization, for a moment, behaved like a human without a form letter to hide behind.
The paradox of the digital age is that technology can help create miracles—gene therapies, breakthroughs, coordination at unprecedented scale. But scale is also what makes those miracles fail to reach the individual without a fight. The cure can be engineered by machines and networks; the decision to apply it often still requires a human breach.
So yes: “In God we trust; all others must bring data” (W. Edwards Deming). Data is the baseline, and clunky, manual verification is non-negotiable. But the deeper claim is this: the scepter of truth is not held by the algorithm, because truth is not only accuracy. Truth is accountability, and accountability is relational.
The internet and AI are powerful precisely because they operate at distance. But distance dissolves accountability. If we want trustworthy communication in a world of infinite content, we will need more than smarter models. We will need sturdier trust vectors: provenance, consequences, real identity where it matters—and, whenever possible, the oldest counter to scale: slow, clumsy, irreplaceable human contact.
A human being, in front of another human being, refusing to fade into the aggregate.

Trapper to Yappers Handoff: 👀 Is my boss an AI? Am I? We're tackling the digital trust crisis, and honestly, the results are so unhinged I'm starting to doubt my own existence. A $25 million corporate heist was pulled off by a deepfake CFO, but the real crisis is that we can't use our favorite punctuation—the em dash. Your next online interaction will feel less like a "chat" and more like a Blade Runner audition. This madness is the perfect table setting for a shouting match between Morty and Frankie—assuming they're not both deepfakes I hired on Fiverr.