A scientist recently wrote about a moment that stopped her cold. She'd been earnestly participating in online discussions, offering insights, engaging with ideas, until she encountered a prominent writer who dismissed a critic simply because "he'd only gotten 170 subscribers in 6 months." The scientist felt instant shame, as if she'd been "parading around in holey, saggy underwear" while thinking she was well-dressed.
But then the absurdity hit her. In her day job, she realized, she welcomes everyone, from seasoned postdocs to eager freshmen, because she's learned that "the farther I go, the less I know." Her students, especially the newest ones, constantly humble her with perspectives "not shaped by the stiffening grooves of repetition or dulled by ego-inflating praise."
What happened to that wisdom when she moved online? More troubling still: what happens when that online logic begins to reshape how she thinks even when she's offline?
The Quiet Violence of Cognitive Colonization
We think of social media metrics as external measures - likes, follows, engagement rates tallying up somewhere outside our heads. But what if they've moved inside? What if follower counts and engagement statistics have become the invisible architecture of how we think, filtering our ideas before we're even conscious of having them?
This isn't mere influence. It's cognitive colonization - the gradual replacement of our natural creative impulses with the logic of the platform. Each time we soften an uncomfortable question or abandon a weird tangent because it won't "perform well," we're laying down neural pathways that make the next act of self-censorship easier and more automatic.
Yet we might ask: is this necessarily destructive? Consider the counterargument that constraints often fuel creativity. Sonnet forms didn't kill poetry; they channeled it into new expressions of beauty. Jazz musicians find freedom within strict chord progressions. Perhaps these digital constraints are simply the artistic limitations of our era, forcing us to distill complex ideas into more accessible forms.
But here lies a crucial distinction: traditional artistic constraints are transparent and chosen. A poet knows they're working within fourteen lines; a musician understands the harmonic framework they're navigating. The colonization of cognition operates differently; it's invisible, unconscious, and often mistaken for our own authentic preferences. We begin to believe we genuinely prefer sanitized thoughts over messy ones, not recognizing that we've internalized an external optimization function.
"The things you own end up owning you," Chuck Palahniuk observed in Fight Club. But what happens when the metrics we chase end up chasing us - not just in our posting behavior, but in the very formation of our thoughts? The question becomes whether we're gaining expressive power or losing cognitive autonomy.
The Seduction of Strategic Thinking
We tell ourselves we're being strategic, professional, smart about our audience. These feel like mature considerations, evidence of growth and sophistication. After all, isn't considering your audience a fundamental principle of good communication? Isn't clarity a virtue? Isn't reaching more people with your ideas generally better than reaching fewer?
These aren't trivial questions. The democratization of information through social platforms has given voice to perspectives previously marginalized by traditional gatekeepers. Scientists can bypass academic journals to share discoveries directly. Artists can find audiences without gallery representation. The strategic thinking that optimizes for platform success might represent a genuine evolution in how knowledge spreads, breaking down elitist barriers that once kept important ideas locked away.
But strategy in creative work often becomes self-domestication in disguise. The subtle shift happens when we begin optimizing not just our expression of ideas, but the ideas themselves. When did we start believing that the optimized thought is superior to the unfiltered one? The ideas that matter most - the ones that shift paradigms, challenge assumptions, open new possibilities - rarely arrive in tweet-ready format. They emerge from extended confusion, from following hunches that seem foolish, from being willing to sound incoherent while working through complexity.
Consider the paradox: if our most strategic thinking is shaped by what has previously succeeded on these platforms, we're essentially training ourselves to reproduce the past rather than discover the future. We've created a feedback loop where yesterday's viral content becomes today's cognitive template, which shapes tomorrow's supposedly innovative thinking. The result isn't strategic adaptation—it's intellectual domestication disguised as professional growth.
We've trained ourselves out of intellectual courage, mistaking the performance of certainty for actual understanding. But perhaps more insidiously, we've begun to mistake the feeling of having performed well for the satisfaction of having thought well. How many breakthrough insights never see daylight because they failed the internal editor's engagement audit?
The Paradox of Viral Visibility
Here's a question that should make us uncomfortable: What if going viral isn't a badge of success but a warning sign? What if the ideas that spread effortlessly do so precisely because they've been pre-processed to slide smoothly through existing mental grooves—which means they're probably not changing anything?
This challenges our intuitive understanding of impact. We assume reach equals influence, that viral spread indicates powerful ideas. But viral content often succeeds because it confirms what people already believe or triggers predictable emotional responses. It activates existing neural pathways rather than creating new ones. True paradigm shifts, by contrast, often feel uncomfortable, require sustained attention, and resist easy summarization.
Yet we shouldn't romanticize obscurity either. Some ideas deserve wide audiences. Some truths become more powerful when they reach more minds. The civil rights movement understood this – crafting messages that could spread while maintaining their transformative core. The question isn't whether broad reach is inherently problematic, but whether the mechanisms that create viral spread are compatible with the kinds of cognitive complexity our most pressing problems require.
Large audiences bring what we might call the statistical certainty of misunderstanding. When you're speaking to thousands, you're not just dealing with people; you're dealing with the mathematical inevitability of bad faith interpretation, context collapse, and the flattening effect of trying to communicate nuance to everyone at once. Every qualification you add to prevent misunderstanding becomes another point of potential confusion. Every attempt at precision creates new opportunities for distortion.
Small audiences, by contrast, preserve the possibility of being partially wrong in interesting ways. They represent people who chose to engage with your thinking rather than your brand, who might follow you down an uncertain path because they're curious about where it leads. They allow for the kind of incremental understanding that builds over time, where today's confusion can become tomorrow's insight without the pressure of having to be immediately and universally comprehensible.
Could it be that obscurity isn't the enemy of good thinking but its protector? Perhaps the most important ideas need time to develop in relative privacy before they're ready for public scrutiny. Like photographs developing in a darkroom, certain insights require protection from harsh light during their formation.
Laboratory Time vs. Stage Time
Consider two different temporal rhythms that govern intellectual work, each creating its own relationship to knowledge and uncertainty:
Laboratory time operates slowly, iteratively. It's comfortable with dead ends and patient with process. Failure becomes data rather than humiliation. Small experiments matter as much as grand conclusions. Questions are asked not to showcase what you already know, but to discover what you don't. In laboratory time, saying "I don't know" is the beginning of inquiry, not its end. Confusion is a research methodology, not a character flaw.
Stage time demands immediate results. Every utterance must land perfectly on the first try. Process gets hidden; only polished products get shared. Uncertainty reads as incompetence rather than honesty. In stage time, you're either right or wrong, insightful or ignorant, worth following or worth dismissing. There's no space for the intermediate states where most actual thinking happens.
Social media has dragged all intellectual discourse onto stage time. But genuine inquiry requires laboratory time, the space to fumble, to be boring while you figure something out, to say "I wonder" instead of "I know." This creates a fundamental mismatch between how platforms operate and how minds actually work.
The tragedy isn't just that we perform certainty we don't feel - it's that we begin to mistake performance for actual cognitive process. We start to believe that real thinking should feel like stage time: confident, polished, immediately coherent. When our private mental experience doesn't match this standard, we assume something is wrong with us rather than recognizing the impossibility of the standard itself.
What if laboratory messiness isn't a flaw, but the feature? The most valuable thinking often looks indistinguishable from confusion to outside observers. As physicist Richard Feynman noted, "I would rather have questions that can't be answered than answers that can't be questioned." Yet our platforms systematically reward the latter while punishing the former.
Can we create spaces that operate on laboratory time within systems designed for stage time? Or do we need entirely different environments for the slow work of understanding?
The Epistemic Paradox of Highlight Reels
We're comparing our rough drafts to everyone else's highlights, our internal chaos to their curated clarity. This creates a peculiar epistemic environment where we see everyone else's finished thoughts but only our own messy process. The asymmetry is profound: we experience our uncertainty from the inside while observing others' certainty from the outside.
No wonder impostor syndrome runs rampant in creative spaces. We begin to believe there's something wrong with us for not knowing sooner, for still being confused while others seem so certain. But everyone is confused - they just know how to crop their screenshots.
This epistemic paradox extends beyond individual psychology to reshape our collective understanding of how knowledge works. When we primarily see intellectual confidence rather than intellectual process, we develop unrealistic expectations about thinking itself. We forget that good ideas usually begin as bad ones, that clarity emerges from confusion, that the most sophisticated insights often start as half-formed hunches.
Yet here too we find complexity rather than simple condemnation. The curation of thoughts isn't inherently problematic; it's how we've always shared knowledge. The scientific paper doesn't document every false start and dead end; the finished novel doesn't include every deleted paragraph. Curation serves the audience by distilling process into product, making complex thinking accessible and actionable.
The problem emerges when we lose sight of the curation itself, when we begin to believe that polished output mirrors an equally polished process, when we forget that behind every coherent argument lie hours of incoherent struggle, and behind every confident assertion lies a history of uncertainty and revision.
What would happen if we saw more of the confusion? If we normalized the appearance of uncertainty in public discourse? If we treated "I don't know yet" as a complete sentence rather than an admission of failure? This isn't about celebrating ignorance or abandoning intellectual standards; it's about creating space for the cognitive processes that actually generate understanding.
But perhaps more importantly: what would happen if we became more comfortable with our own confusion, recognizing it not as evidence of inadequacy but as the natural state of minds engaged with genuinely difficult problems?
The Crisis of Selecting Against Insight
Perhaps the most troubling implication of this shift is that we're systematically selecting against the very cognitive habits that produce genuine insight. Patience, confusion, wrongness, and boredom - the precursors to brilliance - are now liabilities in our attention economy.
But let's examine this claim more carefully. Are we actually selecting against insight, or are we discovering new forms of it? The rapid iteration possible in digital environments allows for kinds of collective intelligence that weren't previously feasible. Ideas can be tested, refined, and improved through massive parallel processing across thousands of minds. The real-time feedback loops of social platforms might represent an evolutionary leap in how human knowledge develops.
Consider Wikipedia, a project that violated every traditional assumption about how authoritative knowledge gets created, yet produced the most comprehensive encyclopedia in human history. The collaborative, iterative, openly uncertain process that makes Wikipedia possible might offer a model for other forms of distributed cognition. Perhaps what looks like the death of deep thinking is actually its transformation into something more collective and responsive.
Yet the concern remains valid: are we optimizing for the kinds of insights our current systems can recognize and reward, potentially missing forms of understanding that don't translate well to digital formats? The contemplative traditions speak of insights that emerge only through sustained attention, through the kind of boredom and apparent non-productivity that our engagement-driven systems actively discourage.
We're replacing the slow fermentation of understanding with the fast sugar highs of certainty. The result isn't just worse content—it's worse thinking. We're training an entire generation to mistake the performance of knowledge for knowledge itself. But we're also creating new forms of cognitive collaboration, new ways of thinking together, new possibilities for collective intelligence that wouldn't exist without these same systems.
The challenge isn't simply to reject digital thinking tools, but to understand their strengths and limitations well enough to use them intentionally rather than being unconsciously shaped by them. As writer Annie Dillard observed, "How we spend our days is, of course, how we spend our lives." How we spend our attention is how we spend our minds. And right now, we're spending them on both unprecedented opportunities and unprecedented risks.
Reclaiming the Unoptimized Mind
The scientist who inspired this reflection made a crucial realization: "The farther I go, the less I know." This isn't false modesty, it's intellectual courage. It's the recognition that expertise should make us more curious, not more dismissive. But how do we cultivate this orientation within systems designed to reward the opposite?
Perhaps the answer isn't to escape these systems entirely, but to develop what we might call dual consciousness, the ability to operate strategically within platform constraints while maintaining access to unoptimized thinking. This requires recognizing the difference between thoughts shaped for sharing and thoughts shaped for understanding, between ideas optimized for agreement and ideas optimized for truth-seeking.
What if we treated online spaces more like research labs than performance stages? What if we valued the person willing to say "I'm confused" over the one who always has an answer? What if we measured success not by reach but by the quality of questions being asked? But also: what if we recognized that different kinds of thinking require different environments, and that not every insight needs to be immediately shareable?
The metrics will always be there, humming in the background of our digital lives. But we can choose whether to let them colonize our cognition or simply observe them as one data point among many. We can practice what we might call metric resistance: the deliberate cultivation of thoughts that refuse optimization.
The weird stuff, the uncertain stuff, the ideas that can't be summarized in a hook - these aren't bugs in our thinking. They're features. They're where the new growth happens, where the surprising connections emerge, where the mind remains wild and generative rather than domesticated and predictable.
The future of human cognition may depend not on choosing between these modes of thinking, but on learning to move fluidly between them, knowing when to optimize and when to resist optimization, when to perform certainty and when to embrace confusion, when to think for others and when to think for the sheer pleasure of thinking itself.
Let the metrics hum in the background. But don't let them train your thoughts to sit and stay.
—Sal
I finally got around to reading this! Glad I kept it on my radar. As you said about my response to your other post, there are SO many threads in here to pull on. You sound so much like me. And you nailed the reason this country is where it is now. Performance over thought. Bingo.
First, thanks for sharing my story and expanding upon it in ways I hadn’t imagined. Second, there’s a lot of rich content here that will take me a while to parse, but (perhaps unsurprisingly) I like the lab analogy best. In science, failing is part of the process and in some ways the whole point. Each time you fail, you learn something and keep going. It’s true that on social media that messiness and process have been smoothed out. On the flip side, there’s always been a smoothing over of ideas. It’s just that previously it was done by editors or by other scientists in peer review or other gatekeepers, and now that’s being increasingly replaced by the masses. I’m not sure if that’s inherently worse, except insofar as it dilutes the very ideas from conception vs helping refine them at the end. Not sure where I’m going with this but I appreciate this food for thought and surely will come back to it.