I Woke Up Jobless—and Found a Bot Doing My Work

A reported essay on fear, candor, and the new jobs born beside AI
Last August, Mateusz Demski, a radio journalist in Kraków, walked into the studio for the last time. His termination notice was bloodless: “financial reasons.” A few months later the station’s schedule carried shows hosted by avatars—perfect voices that never needed a pause or a sick day. “I spent twenty years learning to love silence on air,” he told me. “They don’t need silence.”
Across town in London, graphic designer Jadun Sykes stepped out of a glass meeting room feeling hollow. “HR said the company was ‘re-formatting processes.’ What it meant was: ‘Your block now belongs to AI.’” In Berlin, whole shifts of TikTok moderators filed out together; their union banner didn’t need translation—“Machines are replacing us.”
That’s the opening scene of a story we are all groping through in the dark. It includes fear and exaggeration, austerity and big business, haste and sobriety—failures, yes, and also new roles. And here is the twist: AI doesn’t only take jobs; it also creates them where yesterday there wasn’t even a name.
The Year the First Rung Disappeared
For decades, the ladder into a profession was built from simple tasks. In newsrooms: templated layouts and basic fact checks. In service companies: stock replies to stock questions. In offices: cleaning spreadsheets. Then 2025 arrived and that first rung gave way. Managers realized that, right now, bots are "good enough" at the chores that once trained interns.
“We never laid off staff,” Duolingo’s CEO insisted this spring as the contractor pool suddenly thinned. “We just stopped ordering the work AI can now do.” Soft words, hard consequences: thousands didn’t lose “jobs”—they lost the purchase orders that kept them afloat.
There’s another version of the truth—less dramatic, more honest. Some cuts aren’t “AI took our bread,” but old-fashioned cost control wearing a high-tech mask. One engineer told me after his exit: “The news said AI replaced us. In reality they just merged teams.” In 2025, AI-washing became a household term; sometimes “intelligence” is just a convenient sign on the door.
Where the Machine Actually Took the Seat
Trust & Safety is the front line. Machine learning has long caught spam and the obvious; over the last two years it has learned more of the in-between. At platform scale, multiplying a model across millions of daily events isn't a metaphor—it's a line in the P&L. "We're investing in automated moderation," the official statement read as European offices were pared back. The union ver.di translated bluntly: "They're replacing us with AI and cheaper contractors."
Then there’s the market that moved. Chegg sold answers to students—until answers arrived free inside chatbots. No proprietary AI could save that: the customer now had the tool. It’s a new geometry of displacement: intelligence on the buyer’s side collapses the seller’s model.
And the painful, instructive case of Klarna. With one boast—"our bot does the work of 700 agents"—the company signed its own reputational death warrant. Empathy doesn't automate by blitzkrieg. Months later the CEO conceded: "We probably over-indexed. We're correcting course." Human support returned.
Now Zoom Out: Who Actually Wins
There are numbers in this picture that don’t line up with dread. The industries that truly ride AI grow faster; revenue per employee climbs by multiples. Workers with the right skills command a premium; the market is willing to pay it. And at the frontier, a new job description has arrived: the agent boss—the person who doesn’t “use AI” so much as assigns it tasks, wires it to company data, audits the output, and measures the effect.
Japan, where labor is scarce, shows the split more cleanly. Some holding companies practice preventive restructuring, trimming expensive legacy roles to finance transformation. Others—Panasonic Connect among them—speak openly of human augmentation: hundreds of thousands of hours of drudgery shifted to AI while employees move up to harder problems. Two parallel strategies, both pointed at the future.
“My Bot Was Wrong—but It Had a Way to Fix It”
The new professions, minus the magic act
AI-Ops & Agent Orchestration. When experiments turn to production, someone has to own the pipeline: logging, monitoring, access controls, safety, cost. The work ties into CRM/ERP, sets human-in-the-loop checkpoints for the riskiest 10 percent, and survives weekends. This isn’t “write a prompt”; it’s “run a service.”
Agent QA. Traditional test cases (“2+2=4”) don’t apply to probabilistic systems. You test with scenarios and metrics: did the customer reach a solution? did the bot drift? did it hallucinate? This is a craft for meticulous people who love checklists and feel responsible for the truth.
Data & AI Governance. AI is data. Without clean, current, privacy-safe datasets, you won’t get a clean model. Add regulation—Europe’s AI Act demands logs, traceability, and human oversight—and you need translators who can classify risk, document processes, and sit comfortably with both lawyers and engineers.
AI Product / Process Manager. The adult in the room. These PMs convert “we want AI” into “we want +5 pts True Deflection with CSAT intact,” design experiments, and kill the theater while keeping the value.
There are hybrids too—marketer + Copilot, sales-ops + automation, finance + agent. The shift is simple: people stop doing L1 chores and start owning the process, the guardrails, the result. Teams that implemented Copilot with discipline now count saved hours by the hundreds. That’s no longer “wow”; it’s the new baseline.

“I Was Let Go Yesterday. What Do I Do Today?”
A humane, step-by-step plan for the next 30 days
First, breathe. You didn’t “fall out of the market”; your context shifted. Over the next month your goal is to turn experience into proof: I already work alongside AI and deliver measurable value. Here’s how.
Days 1–3: Name things honestly.
Split your role—past or present—into two piles.
L1 — repeatable steps by instruction (templated replies, spreadsheet cleanup, copy fixes, basic moderation).
L2/L3 — the parts that need you: gray-zone decisions, empathy, negotiations, the one-offs.
Days 4–10: One tool, one win.
Pick one tool (Microsoft Copilot, ChatGPT, or Claude) and feed it your materials—FAQ, templates, briefs. Automate 2–3 L1 processes. Time them. Log before/after. You’ll usually see a 20–30% speed gain by week’s end.
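The before/after log can be a few lines of anything, even a script. A minimal sketch, with hypothetical task names and made-up minutes standing in for your own measurements:

```python
# Hypothetical before/after timing log for two automated L1 tasks.
# Task names and minutes are illustrative, not real measurements.
tasks = {
    "templated_replies": (30, 22),    # (minutes before, minutes after)
    "spreadsheet_cleanup": (45, 34),
}

total_before = sum(before for before, _ in tasks.values())
total_after = sum(after for _, after in tasks.values())
gain = (total_before - total_after) / total_before * 100  # percent saved

for name, (before, after) in tasks.items():
    print(f"{name}: {before} -> {after} min")
print(f"overall speed gain: {gain:.0f}%")
```

With these invented numbers the script reports a roughly 25 percent gain, in the range the plan predicts; what matters is that you recorded the before, not that the percentage is flattering.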
Days 11–15: Install common-sense safety.
Create two short docs:
• trust checklist (sources cited? dates and amounts match? privacy intact? tone on brand?). Any “no” → escalate to a human.
• bot error log (date → prompt → bot output → what failed → how fixed → what to change next time). Congratulations: that’s mini AI-Ops.
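The error log needs no special tooling; a spreadsheet works, and so does a CSV file you append to. A minimal sketch, assuming the essay's columns and an invented example entry (file name, entry contents, and helper are all hypothetical):

```python
import csv
import io

# Columns follow the essay's format: date -> prompt -> bot output ->
# what failed -> how fixed -> what to change next time.
FIELDS = ["date", "prompt", "bot_output", "what_failed", "how_fixed", "next_time"]

def log_error(f, entry):
    """Append one error entry (a dict keyed by FIELDS) to an open CSV file."""
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if f.tell() == 0:  # write the header only once, on an empty file
        writer.writeheader()
    writer.writerow(entry)

# An in-memory buffer stands in for a real errors.csv on disk.
buf = io.StringIO()
log_error(buf, {
    "date": "2025-05-12",
    "prompt": "Summarize the refund policy for a customer",
    "bot_output": "Cited a 60-day refund window",
    "what_failed": "Policy is 30 days; dates-and-amounts check caught it",
    "how_fixed": "Corrected the reply by hand, escalated per the checklist",
    "next_time": "Add the current policy doc to the bot's context",
})
print(buf.getvalue())
```

The habit matters more than the format: every row is evidence that you check the bot's work, and the "next time" column is where your prompts and guardrails actually improve.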
Days 16–23: Build three tiny cases.
Each in the format Task → AI pipeline → Metric.
Support: “12 frequent questions automated; average reply time fell 11→4 minutes; CSAT steady.”
Marketing: “Drafted a personalized newsletter; prep time −3 hours; open rate +7%.”
Ops: “Semi-automated invoice checks; errors −60%; an audit trail now exists.”
Days 24–30: Package and show.
Put it all in one link (Notion, GitHub Pages, Google Doc). Add a one-pager—“How I brief a bot and guard quality.” Then go two ways:
Roles near AI: AI-Ops/Process (if you like systems), Agent QA or Data/AI Governance (if you’re detail- and policy-minded), AI PM / Process Designer (if you love metrics and experiments).
Recruiter script: “I automated part of my role, kept quality, and proved impact with numbers. Here are three short cases and my guardrails. I can reproduce this in your process—with measurable results in two weeks.”
For admin or L1 support, the shortest bridge is Governance or Agent QA. Your strengths—attention, process discipline, customer sense—matter more here than code. If you’re junior, skip “I know ChatGPT” and show one working process with a metric—even on public data. That’s your ticket in.

The Dark Room—and How to Turn on the Light
Good stories rarely have a single villain. There are leaders chasing progress who sometimes confuse tempo with safety. There are companies hiding behind “AI” while counting pennies. There are people genuinely replaced by algorithms. And there are thousands standing on the edge where crises mint professions: at the crossing of order, data, and responsibility.
Sykes, the designer, put it this way: “I went home, opened my laptop, and wrote: I need a compass job, not a mirror.” In his notebook the new words are “orchestration,” “deflection,” “human-in-the-loop.” Next to them: three crisp case studies he now brings to interviews—how many minutes and how much money his little agent saved, and exactly how he guarantees its quality.
Here is the real ending: AI doesn’t relieve us of responsibility—it hands it to those willing to carry it. Tomorrow every strong team will have bosses of agents—people who set goals for machines, keep them within bounds, build the process, and own the result. Fear is a dark room. The light goes on not with a slogan but with a habit of counting: what I do each day → what I’ll hand to a machine → what I will own. In that moment you stop being “another specialist” and become what frontier firms are hiring for: a conductor who hears both people and algorithms—and makes them play together.