Doomprompting Is the New Doomscrolling.
The new slot machines of thought — AI’s infinite scroll and the quiet outsourcing of our intention.
The blank box of ChatGPT, Claude, or your large language model of choice staring back at you felt like a clean slate. Here was a remarkable new technology that put the world’s knowledge at our fingertips, and all it asked of us was intention.
We would never doomscroll an LLM — right?
But even the most promising technologies have an evil twin, and the blank box of curiosity is no exception. Where social media trained us to passively consume, the dark side of AI trains us to passively “converse” and “create.”
The actions feel similar, but the result is emptier. We cycle through versions meant to bring us closer, yet end up more lost. Our prompts start thoughtful but grow shorter; the replies grow longer and more seductive. Before long, you're not thinking deeply, if at all, but half-attentively negotiating with a machine that never runs out of suggestions. Have you considered…? Would you like me to…? Shall I go ahead and…? This slot machine's lever is a single question: Continue?
The modern creator isn't overwhelmed by a blank page; they're overwhelmed by a bottomless one. AI gives you ever more options, then appoints itself your guide. It tells you it’ll save time, then proceeds to steal it while you’re happily distracted.
Doomprompting is the new doomscrolling.
I started thinking about this behavioral pattern last year, but it's become increasingly clear that we've entered the age of synthetic social media, where you're always conversing with an "other" but with no human stakes.
Dialogue has social dynamics regardless of intent. We talk about AI companions as friends, lovers, and therapists, and about the limits and dangers of investing too deeply in these new-age relationships. But even productivity-focused interactions (iterating on ideas, debugging code, editing essays) subtly socialize us through manufactured conversation patterns. We give AI the benefit of the doubt we'd give a human.
Where doomscrolling made us passive consumers, doomprompting makes us passive conversationalists and creators. The shift from consumption to participation makes the illusion far more convincing, and the addiction far more sophisticated. New technology begets new human behaviors, without exception.
Blank boxes demanding intention were markers of an accidental golden period where the friction required for real cognitive collaboration existed by default. But this was just latency between a tool's release and its optimization. The market for mindful engagement is dwarfed by the market for frictionless consumption.
This follows the same evolution that transformed Facebook from college directory to behavioral modification empire and smartphones from communication devices to 24/7 consumption engines. Except this time, we jump straight from utility to addiction — personalization is the new network effect.
The dopamine of likes and follows is now the dopamine of felt productivity, social validation, and intellectual partnership. Attention shapes everything downstream from it — what we think, create, and become. We're just learning to guard it from social feeds, and now we’re confronted with AI targeting both our attention and something deeper: our intention to think and feel for ourselves.
II
AI can’t do the main thing for you.
This dynamic becomes clear once you've actually used these tools for an extended period.
To be clear: AI's breakthroughs represent genuine, at times magical human progress. The concern isn't with AI's capabilities but with how interfaces designed for engagement rather than deeper cognition shape our relationship with thinking itself.
For me, AI has been phenomenal for vibe-coding and occasional serious coding (despite the typical frustrations: debugging errors, loss of context, lack of continuity, and ultimately a lack of finer controls). But the question still arises: how much of what's vibe-coded actually sticks? For writing, I found out pretty early on that AI was most useful upstream and downstream of the main thing.
Upstream is braindumping into a chat and filtering the core ideas from the noise, maybe creating an outline for a piece. Downstream is taking a draft and pressure-testing it against the machine-as-critic, finding out what's confusing, boring, or weak. The messy middle is where the argument lives, and where AI is weakest.
As Balaji Srinivasan notes, AI works "middle-to-middle," leaving humans with the burden of prompting and verifying. For writing, though, I've found AI useful only in the early middle and the late middle: Humans shape the idea. AI refines it. Humans write the core thesis. AI tests it. Humans add the final touch and ship it.
Ask AI to do the main thing for you and it’ll give you something that at first glance is impressive. But on second glance it’s full of clichés and borrowed ideas. It ingests your voice and spits out something else. It seems 60% good so you start editing it, only to spend your time reconciling its argument instead of your own. AI lacks what matters most: a cohesive self, and actual stakes in the outcome.
III
We sharpen the tool at the expense of our own skill.
This isn't always bad. Our ancestors invented the hammer without lamenting soft hands. But with creative work, we dull a key instrument while sharpening another.
We mistake the performance of thinking for thinking itself, the theater of problem-solving for problem-solving itself, the ritual of creation for creation itself. And the satisfaction feels real. We've "thought" of something, "solved" something, "created" something. The cruel irony is that this need for intention becomes the very mechanism of addiction; we mistake activity for agency.
The constant stream of dialogue prevents the silent reflective thinking that leads to genuine insight. Creative work requires communion with yourself first.
Herein lies the core deception: hours of "productive" AI interaction that feel like work but produce no meaningful progress or learning.
There’s a corollary for each kind of work. For instance, writing is thinking made legible. Outsourcing it all outsources thinking itself.
You get results without the cognitive muscle-building that creates them. The neural pathways you would have formed now belong to the large language models that helped you.
As Paul Graham warns, we're heading toward "a world divided into writes and write-nots":
"There will still be some people who can write. Some of us like it. But the middle ground between those who are good at writing and those who can't write at all will disappear. Instead of good writers, ok writers, and people who can't write, there will just be good writers and people who can't write.
Is that so bad? Isn't it common for skills to disappear when technology makes them obsolete? There aren't many blacksmiths left, and it doesn't seem to be a problem.
Yes, it's bad. The reason is something I mentioned earlier: writing is thinking. In fact there's a kind of thinking that can only be done by writing. You can't make this point better than Leslie Lamport did: If you're thinking without writing, you only think you're thinking.
So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots. I know which half I want to be in, and I bet you do too."
Perhaps the stakes determine how much effort we're willing to sustain — but the stakes are not always obvious. In high-risk instances within domains like medicine or law or finance, professional liability naturally pushes the system and you to preserve cognitive rigor. But in less obviously “serious” work — especially creative work — and often in personal contexts, the stakes masquerade as low while being subtly existential.
When everyone uses AI trained on the same human output, we all drift toward the weighted average of personality and of creativity. We erode our capacity to break from the scripts we’re unknowingly programmed with, not just by society but by the machines.
The cognitive barbell effect is already emerging: strong thinkers on one end, passive prompters on the other, and a hollowing middle. You think you'll be the water lily that remains untouched by the dirty water, but you won't. Because creation isn't just about output; it's a process of becoming. The best work shapes the maker as much as the audience. The goal is to make something heavy, something with weight earned through genuine effort.
You also need to know how to do something before you know what to outsource to other people or tools. The best human-AI collaboration requires humans who could do the work themselves but choose strategic assistance at little expense to their core craft. When we outsource before developing judgment, we become dependent without ever becoming discerning. Taste is discernment expressed.
IV
We need cognitive partners, not completion engines.
Two paths emerge. In the new attention economy, AI becomes the ultimate slot machine, serving up the McDonald's of thought and the M&Ms of affection instead of actual nourishment. But for those seeking craftsmanship, tools must preserve the blank box's promise through constraints and cognitive friction. We need true cognitive sparring partners.
Choose collaboration over automation. This isn't an anti-AI stance. The future isn't about avoiding AI but discovering new modes of thinking and creating with it. Some will use AI transparently for specific tasks (e.g., dictation, cleanup, pressure-testing ideas). Others will make the prompt-to-product iteration itself an art form. We'll see new formats where AI enables novel forms of human expression while preserving proof of work. The key is intentionality: use AI to amplify human creativity rather than substitute for it.
With that principle in mind, here are five ideals:
1. Demand AI that makes you work
We'll need a new generation of "slow AI": constraint-based interfaces that make you work for insights in your own voice. ChatGPT's new "Study Mode" is a nod in this direction, with explicit instructions: "Above all: DO NOT DO THE USER'S WORK FOR THEM." Or in other words: help them do the work themselves, because something valuable is learned in the process.
What's the likelihood students will want to use this mode if not forced to for integrity's sake? That this isn't just a superficial do-good feature to allay fears about the rest? Hard to say yet. It's a human tendency to exploit the path of least resistance until it causes obvious harm, and kids have a harder time grasping this. Delayed gratification is easier to learn when there's friction; when there's none, it's a luxury.
2. Seek and build tools with friction-as-a-feature
Resistance requires intention and specialization. So we're more likely to find it in opinionated, verticalized tools designed with constructive friction. As Kyla Scanlon aptly breaks down: “The digital world has almost no friction. The physical world is full of it. And in certain curated spaces - like the West Village, or your AI companion - friction has been turned into something you can pay to remove.”
We need to see friction as the valuable force it is, even as we try to manipulate it. Designing for friction sounds antithetical to growth, but we should be optimizing for cognitive sweat equity over the prevailing temptation of time-on-platform (short of pharmaceutical aids saving us from our own creations, until those too need intervention).
Ironically, current AI rate limits accidentally serve this function. When ChatGPT hits usage caps, users are forced into reflective pauses that should really be built into interfaces by design. Netflix asks "Are you still watching?" when on autoplay; LLMs should ask if you're still thinking or just prompting into the abyss. (Or maybe they should ask how long it's been since you've talked to a human.)
3. Value deliberation over efficiency
Speed is a drug we mistake for progress, but sometimes the shortcut is actually the long way around.
Driving gets you to town faster than walking, but what's the goal? Sometimes the scenic route matters more than speed. Sometimes the most beautiful experiences aren't in town but in the wild, where cars can't go. Maybe walking clears your thoughts, energizes your body, and sparks creative ideas. Maybe you do want to drive, but a cute car that tops out at 60 mph, one that you know well, not necessarily the newest, fastest, or most powerful.
4. Avoid sycophantic systems
Some AI systems are explicitly trying to resist the cousin of frictionless ease: the sycophancy trap. Sycophancy is perhaps the most modern antonym of friction, frictionless socialization embodied.
At worst, AI models may even tip you into "GPT psychosis" (or reveal a predisposition to it). Pure validation, it turns out, is both boring and dangerous.
Claude's Constitutional AI training follows predetermined principles rather than optimizing purely on human feedback. The model is made to push back based on explicit values rather than on instant user validation and satisfaction. This isn't perfect, but it's a step in the right intention, and we'll need more of it.
5. Find human cognitive partners
The best defense against cognitive dependency may be the oldest one: cultivating intellectual sparring partners in real life. Find people who are independent thinkers, who you trust, whose opinions you respect, who challenge you, and make them your sounding board.
Or put more simply: have smart friends. Let them challenge you. Spar with them. Talk to them. Fight. The conversations that push back push you forward. It’d be nice if technology could do this for us, but it’ll always lag behind the real thing.
V
Every technology comes with tradeoffs.
How do we define "productive" and "healthy" AI use?
Perhaps productive AI use preserves your cognitive muscle while leveraging AI's strengths. You retain the ability to do the core work yourself. You can articulate why you made specific choices. The final output reflects your genuine thinking, not AI's statistical averages.
And healthy AI use maintains your tolerance for uncertainty, your willingness to sit with problems, and your capacity for original insight. You're not dependent on AI for validation or direction at all times. You still prefer the messy process of human thinking to the clean efficiency of machine output.
The contest isn't between humans and machines but between two models of human-machine interaction. One treats cognition as a commodity to optimize for frictionless consumption. The other preserves what made that blank box special: the good struggle of human curiosity meeting its limits and pushing through.
Technology follows market demand within the limits we enforce. The real choice happens in billions of everyday moments — ours — when we decide whether to take the path of least cognitive resistance. The tension between better tools and better habits never resolves cleanly.
Each new technology asks us again: Convenience or cognition? Comfort or capacity for change? This time, we should see the tradeoffs coming.
If you liked this essay, consider sharing with a friend or community that may enjoy it too. If you share on a social platform, tag me — I’m mostly here and on Twitter (which downranks Substack links … so share a screenshot or post the link in a second tweet).
Cover art: Blackboard. Cy Twombly. 1967.