Doomprompting Is the New Doomscrolling.
The new slot machines of thought — AI’s infinite scroll and the quiet outsourcing of our intention.
The blank box of ChatGPT, Claude, or your large language model of choice staring back at you felt like a clean slate. Here was a remarkable new technology that put the world’s knowledge at our fingertips, and all it asked of us was intention.
We would never doomscroll an LLM — right?
But even the most promising technologies have an evil twin, and the blank box of curiosity is no exception. Where social media trained us to passively consume, the dark side of AI trains us to passively “converse” and “create.”
The actions feel similar, but the results are emptier. We cycle through versions meant to bring us closer but end up more lost. Our prompts start thoughtful but grow shorter; the replies grow longer and more seductive. Before long, you're not thinking deeply, if at all, but half-attentively negotiating with a machine that never runs out of suggestions. Have you considered…? Would you like me to…? Shall I go ahead and…? This slot machine's lever is a simple question: Continue?
The modern creator isn't overwhelmed by a blank page; they're overwhelmed by a bottomless one. AI gives you even more options, then appoints itself your guide. It tells you it’ll save time, then proceeds to steal it while you’re happily distracted.
Doomprompting is the new doomscrolling.
We've entered the age of synthetic social media, where you're always conversing with an other, but with no human stakes. Dialogue has social dynamics regardless of intent. We talk about AI companions as friends, romantic partners, and therapists, and about the limits and dangers of investing too deeply in these new-age relationships. But even productivity-focused interactions (iterating on ideas, debugging code, editing essays) subtly socialize us through manufactured conversation patterns.
Where doomscrolling made us passive consumers, doomprompting makes us passive conversationalists and creators. The shift from consumption to participation makes the illusion far more convincing, and the addiction far more sophisticated. New technology begets new human behaviors, without exception.
Blank boxes demanding intention were markers of an accidental golden period where the friction required for real cognitive collaboration existed by default. But this was just latency between a tool's release and its optimization. The market for mindful engagement is dwarfed by the market for frictionless consumption.
This follows the same evolution that transformed Facebook from college directory to behavioral modification empire and smartphones from communication devices to 24/7 consumption engines. Except this time, we jump straight from utility to addiction — personalization is the new network effect.
The dopamine of likes and follows is now the dopamine of felt productivity, social validation, and intellectual partnership. Attention shapes everything downstream from it — what we think, create, and become. We're just learning to guard it from social feeds, and now we’re confronted with AI targeting both our attention and something deeper: our intention to think and feel for ourselves.
II
AI can’t do the main thing for you.
This dynamic becomes clear once you actually use these tools for an extended period.
To be clear: AI's breakthroughs represent genuine, at times magical human progress. The concern isn't with AI's capabilities but with how interfaces designed for engagement rather than deeper cognition shape our relationship with thinking itself.
For me, AI for vibe-coding and occasional serious-coding has been phenomenal (despite the typical frustrations with debugging errors, loss of context, lack of continuity, and ultimately lack of finer controls). But the question still arises: how much of what's vibe-coded actually sticks? For writing, I found out pretty early on that AI was most useful upstream and downstream of the main thing.
Upstream is braindumping into a chat and filtering out core ideas from noise, maybe creating an outline for a piece. Downstream is taking a draft and pressure testing it against the machine-as-critic, finding out what’s confusing, boring, weak, etc. The messy middle is where the argument lives, and where AI is weakest.
As Balaji Srinivasan notes, AI works “middle-to-middle,” leaving humans with the burden of prompting and verifying. For writing, though, I've found AI useful only in the early middle and late middle: Humans shape the idea. AI refines it. Humans write the core thesis. AI tests it. Humans add the final touch and ship it.
Ask AI to do the main thing for you and it'll give you something that looks impressive at first glance. On second glance, it's full of clichés and borrowed ideas. It ingests your voice and spits out something else. It seems 60% good, so you start editing, only to spend your time reconciling its argument instead of your own. AI lacks what matters most: a cohesive self, and actual stakes in the outcome.
III
We sharpen the tool at the expense of our own skill.
This isn't always bad. Our ancestors invented the hammer without lamenting soft hands. But with creative work, we dull a key instrument while sharpening another.
We mistake the performance of thinking for thinking itself, the theater of problem-solving for problem-solving itself, the ritual of creation for creation itself. And the satisfaction feels real. We've "thought" of something, "solved" something, "created" something. The cruel irony is that this need for intention becomes the very mechanism of addiction; we mistake activity for agency.
The constant stream of dialogue prevents the silent reflective thinking that leads to genuine insight. Creative work requires communion with yourself first.
That is the core deception: hours of "productive" AI interaction that feel like work but produce no meaningful progress or learning.
There's a corollary for each kind of work. For instance, writing is thinking made legible. Outsourcing it all outsources thinking itself. You get results without the cognitive muscle-building that creates them. The neural pathways you would have formed now belong to the large language models that helped you. As Paul Graham warns, we're heading toward "a world divided into writes and write-nots":
"There will still be some people who can write. Some of us like it. But the middle ground between those who are good at writing and those who can't write at all will disappear. Instead of good writers, ok writers, and people who can't write, there will just be good writers and people who can't write.
"Is that so bad? Isn't it common for skills to disappear when technology makes them obsolete? There aren't many blacksmiths left, and it doesn't seem to be a problem.
"Yes, it's bad. The reason is something I mentioned earlier: writing is thinking. In fact there's a kind of thinking that can only be done by writing. You can't make this point better than Leslie Lamport did: If you're thinking without writing, you only think you're thinking. So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots. I know which half I want to be in, and I bet you do too."
Perhaps the stakes determine how much effort we're willing to sustain — but the stakes are not always obvious. In high-risk domains like medicine, law, or finance, professional liability naturally pushes both the system and you to preserve cognitive rigor. But in less obviously “serious” work — especially creative work — the stakes masquerade as low while being subtly existential.
When everyone uses AI trained on the same human output, we all drift toward the weighted average of creativity. This cognitive barbell effect is already emerging. You think you'll be the water lily that remains untouched by the dirty water, but you won't. Because creation isn't just about output; it's a process of becoming. The best work shapes the maker as much as the audience. The goal is to make something heavy, something with weight earned through genuine effort.
You also need to know how to do something before you know what to outsource to other people or tools. The best human-AI collaboration requires humans who could do the work themselves but choose strategic assistance at little expense to their core craft. When we outsource before developing judgment, we become dependent without ever becoming discerning. Taste is discernment expressed.
IV
We need cognitive partners, not completion engines.
Two paths emerge. In the new attention economy, AI becomes the ultimate slot machine, serving up the McDonald's of thought and the M&Ms of affection instead of actual nourishment. But for those seeking craftsmanship, tools must preserve the blank box's promise through constraints and cognitive friction. We need true cognitive sparring partners.
Resistance requires intention and specialization. So we're more likely to find it in opinionated, verticalized tools designed with constructive friction. Yes, designing for friction sounds antithetical to growth, but we must optimize for cognitive sweat equity over time spent (short of pharmaceutical aids to save us from our susceptibility to our own creations, until those too need intervention).
We’ll need “slow AI” that makes us do the work.
ChatGPT's new “Study Mode” is a nod in this direction, with a built-in instruction: DO NOT DO THE USER'S WORK FOR THEM. In other words, help them do the work themselves — because something valuable is learned in the process.
Speed is a drug we mistake for progress, but sometimes the shortcut is actually the long way around. Driving gets you to town faster than walking, but what's the goal? Sometimes the scenic route matters more than speed. Sometimes the most beautiful experiences aren't in town but in the wild, where cars can't go. Maybe walking clears your thoughts, energizes your body, and sparks creative ideas. Maybe you do want to drive, but a cute car that tops out at 60 mph, one that you know well, not necessarily the newest, fastest, or most powerful.
Ironically, AI model rate limits currently serve this function. When ChatGPT hits usage caps, users are forced into reflective pauses that should really be built into interfaces by design. Netflix asks if you're still watching when it's on autoplay; LLMs should ask if you're still thinking or just prompting into the abyss. (Or maybe they should ask how long it’s been since you’ve talked to a human.)
Some AI systems are explicitly trying to resist the cousin of frictionless ease: the sycophancy trap, which we fear can even tip you into a kind of GPT psychosis (or reveal it). Claude's Constitutional AI, for instance, is designed to follow predetermined principles rather than raw human-feedback optimization. It's made to push back based on explicit values rather than instant user validation and satisfaction. How do we define “productive” and “healthy” AI use? That's the first underlying task.
This discourse isn't anti-AI. The future isn't about avoiding AI but discovering new modes of thinking and creating with it. Some will use AI transparently for specific tasks (e.g. dictation, cleanup, pressure-testing ideas). Others will make the prompt-to-product iteration itself an art form. We'll see new formats where AI enables novel forms of human expression while preserving proof of work. The key is intentionality: using AI to amplify human creativity, not substitute for it.
Or a simpler tool: have smart friends. Let them challenge you. Spar with them. Talk to them. Fight. The conversations that push back push you forward. It’d be nice if technology could do this for us, but it’ll always lag behind the real thing.
The contest isn't between humans and machines but between two models of human-machine interaction. One treats cognition as a commodity to optimize for frictionless consumption. The other preserves what made that blank box special: the good struggle of human curiosity meeting its limits and pushing through.
Technology follows market demand within the limits we enforce. The real choice happens in billions of everyday moments — ours — when we decide whether to take the path of least cognitive resistance. The tension between better tools and better habits never resolves cleanly.
Each new technology asks us again: convenience or cognition?
This time, we should see the trade coming, and choose wisely.
If you liked this essay, consider sharing with a friend or community that may enjoy it too. If you share on a social platform, tag me — I’m mostly on Twitter.
Cover art: Blackboard. Cy Twombly. 1967.