
· 3 min read
Gaurav Parashar

The moment you commit to regular swimming, you enter an unspoken pact with chlorinated water that extends far beyond improved cardiovascular health and shoulder strength. Swimmer's toe, technically known as exfoliative keratolysis or pool toes, manifests as cracking and peeling skin under the toes after prolonged pool exposure. This condition represents one of those peculiar realities of aquatic life that swim coaches forget to mention during orientation sessions. The skin becomes saturated with chemically treated water, creating an environment where normal cellular turnover accelerates into something resembling a controlled demolition project occurring at the tips of your feet.

The phenomenon mirrors what happens during extended bathtub sessions, except the pool version carries the potential for actual discomfort. Extended exposure to chlorinated water creates a perfect storm of chemical irritation and mechanical friction that transforms the ordinarily resilient skin under your toes into something approaching tissue paper consistency. The process begins subtly, with slight roughness that might be dismissed as normal wear from pool deck contact. Within days of consistent training, however, the skin develops a characteristic pattern of horizontal splits that appear precisely along the natural creases of the toe pads. These fissures often develop their own microclimate, remaining perpetually moist from subsequent pool sessions while simultaneously attempting to heal between workouts.

The timing of swimmer's toe development follows predictable patterns that correlate directly with training intensity and pool chemistry conditions. Most swimmers report initial symptoms appearing after their third consecutive week of daily training, particularly during periods when pool maintenance schedules result in elevated chlorine concentrations. The condition tends to affect the third and fourth toes most severely, likely due to their position creating optimal friction conditions against pool surfaces during push-offs and turns.

Experienced swimmers develop a peculiar relationship with this condition, viewing its appearance as a badge of dedication rather than a medical concern. The peeling process often becomes ritualistic, with many swimmers unconsciously picking at loose skin during post-workout conversations or while reviewing technique videos.

Podiatrists recommend limiting pool exposure time, applying barrier creams before swimming, and immediately moisturizing after pool sessions. These recommendations assume swimmers possess the luxury of abbreviated training sessions and access to high-quality foot care products in locker room environments. Reality presents different constraints, particularly for competitive swimmers whose training demands cannot accommodate skin care considerations. Some swimmers experiment with waterproof tape applications, creating makeshift protective barriers that inevitably fail after the first flip turn. Others adopt post-swim rituals involving aggressive toweling and immediate application of petroleum-based products, though these approaches often prove incompatible with rushed transitions between training sessions.

Experienced swimmers rarely discuss the condition directly, instead referencing it through coded language about "pool feet" or "deck toe." New swimmers often experience genuine concern upon discovering their first episodes of skin peeling, prompting informal mentoring sessions from veteran athletes who normalize the experience through shared anecdotes. Team environments develop unofficial hierarchies based partly on the severity of swimmer's toe presentation, with heavily peeling feet serving as visible proof of training commitment. Pool maintenance staff, observing this phenomenon across thousands of swimmers, develop their own theories about optimal chemical balance points that minimize skin irritation while maintaining sanitation standards. The condition ultimately represents one element of the broader adaptation process that transforms casual pool users into dedicated swimmers, complete with its own set of management strategies and acceptance rituals.

· 3 min read
Gaurav Parashar

EEG readings revealed a stark contrast between participants writing with digital tools and those working unaided. The tool-assisted groups showed erratic beta wave spikes in parietal regions, indicative of constant attention switching between writing and their digital aids. Meanwhile, the Brain-only group maintained steady theta waves in frontal areas, the neural signature of deep focus seen in expert meditators and absorbed artists. This neurological evidence confirms what productivity research has long suggested - what we call multitasking is often just rapid attention fragmentation that comes at a cognitive cost.

The parietal beta activity observed in tool users resembles patterns seen during divided attention tasks, where the brain struggles to maintain multiple competing threads. Each switch between writing and consulting an AI or search engine triggered a micro-interruption in cognitive flow, requiring fresh orientation. These constant transitions appeared to prevent the brain from reaching the sustained concentration state where original insights typically emerge. The unaided writers, by contrast, entered what neuroscientists call the "cognitive tunnel" - that rare mental space where time distorts and ideas connect in unexpected ways because nothing competes for attention.

What's particularly revealing is how these neural states correlated with output quality. While the multitasking groups produced work faster, their essays lacked the conceptual depth and creative connections of the focused writers. This aligns with studies showing that people in flow states not only work more deeply but make more unexpected associations between ideas. The steady frontal theta waves of the Brain-only group suggest their thinking operated at a different level - less about rapid information processing and more about meaningful integration. Quality of thought, it seems, depends on undisturbed thinking time.

The modern workplace increasingly rewards this fractured attention style, celebrating the ability to juggle multiple digital tools simultaneously. But the study's findings question whether this is genuine productivity or just the illusion of it. Like a computer rapidly switching between processes, our brains can handle multiple tasks, but with each switch comes overhead - the neural equivalent of loading and unloading working memory. The participants who worked uninterrupted may have appeared less busy in the moment but achieved more substantive results in the same timeframe.

These insights suggest we need to rethink our relationship with digital tools. Periodic single-tasking sessions - what some researchers call "cognitive fasting" - may be necessary to maintain our capacity for deep work. The study implies that the most valuable thinking happens not when we're most connected to information sources, but when we're most connected to our own uninterrupted thought processes. In an age of constant digital stimulation, preserving the conditions for sustained focus may be one of the most important cognitive skills we can cultivate.

· 3 min read
Gaurav Parashar

The study's most concerning finding emerged when AI-assisted writers switched to unaided composition. Their brain activity failed to match that of participants who had worked without AI from the beginning, showing weaker connectivity in regions critical for independent problem solving. This neural lag suggests that relying on AI tools may gradually diminish our capacity for unaided thinking, similar to how muscles weaken without regular use. The effect appeared after just a few sessions, raising questions about what prolonged AI dependence might do to our cognitive flexibility over time.

What makes this adaptation particularly troubling is its persistence. Even when aware they'd be writing without assistance, former AI users couldn't fully reactivate the neural networks needed for independent composition. Their brain activity resembled someone attempting to recall a forgotten skill rather than exercise a practiced one. This echoes research on "digital amnesia," where outsourcing memory to devices leads to poorer organic recall. The difference here is more fundamental: it's not just memory but the underlying capacity for generative thinking that appears affected. The convenience of AI assistance may come at the cost of our ability to think without it.

The adaptation pattern varied interestingly by task type. For structured assignments like essays, AI users struggled most with idea generation and organization. For more open-ended writing, their challenges centered on originality and voice. This implies that different cognitive muscles atrophy at different rates - structured thinking may decline faster than creative capacity. The EEG data supported this, showing the weakest rebound in frontal theta waves associated with planning and executive function. These are precisely the skills AI excels at supporting, making their erosion particularly ironic.

Educational contexts reveal this trap most clearly. Students who used AI for initial assignments performed progressively worse on subsequent unaided tasks compared to peers who never used assistance. The gap widened over time, suggesting cumulative effects. This mirrors findings in mathematics education, where calculator overuse in early learning leads to poorer conceptual understanding later. The common thread is that tools designed to support learning can inadvertently undermine it when they replace rather than supplement cognitive effort. The brain appears to need regular unaided practice to maintain its problem-solving capacities.

Breaking this cycle requires deliberate strategies. The study found that participants who alternated between AI-assisted and unaided writing maintained better independent skills. Others benefited from using AI only after completing initial drafts themselves. The key seems to be maintaining regular "cognitive workouts" - periods where we intentionally engage unaided with challenging tasks. As AI becomes more embedded in our workflows, we'll need to be as intentional about preserving our independent thinking skills as we are about maintaining physical health in a world of conveniences. The tools aren't the problem - it's how we allow them to reshape our cognitive habits that matters.

· 3 min read
Gaurav Parashar

The study revealed an unexpected pattern in essay quality assessments. While AI-assisted submissions consistently scored higher on technical metrics like structure and grammar, human evaluators frequently described them as generic or impersonal. The unaided essays, despite their imperfections, contained more original ideas and distinctive phrasing that made them memorable. This suggests AI assistance creates a tradeoff between polish and personality: the more we rely on these tools, the more our work risks losing its unique fingerprint. The neural data showed corresponding differences, with unaided writers demonstrating stronger connectivity in brain regions associated with creative insight.

There's something fundamentally different about ideas that emerge through struggle versus those received prefabricated. The study's Brain-only group produced work with what researchers called "cognitive fingerprints" - telltale signs of individual thought processes visible in sentence structure, metaphor choice, and argument development. These quirks, often smoothed away by AI, may represent more than just stylistic preferences. They appear to reflect deeper differences in how individuals organize and express knowledge. When we use AI to refine our writing, we're not just cleaning up grammar - we're potentially filtering out the very elements that make our thinking distinctive.

The educational implications are particularly significant. Students using AI tools produced technically proficient work that earned good grades, but their long-term retention suffered. This aligns with existing research showing that the more cognitive effort we expend in creating something, the better we remember it. The struggle to articulate an idea appears to be part of how we make it our own. AI-assisted writing shortcuts this process, potentially creating what one researcher called "the illusion of competence" - the appearance of mastery without the underlying neural architecture that supports real understanding.

What's most concerning is how this effect compounds over time. The study found that participants who regularly used AI assistance showed decreasing originality in their unaided work as well. Their brains seemed to adapt to the smoother, more conventional patterns of AI-generated text, making it harder to access their own unconventional ideas. This resembles what happens when artists rely too heavily on reference images - their ability to draw from imagination atrophies. The convenience of AI may come with hidden creative costs that only become apparent over extended use.

Some participants managed to avoid these creative costs by using AI for structural suggestions rather than content generation, or by writing first drafts unaided before applying selective refinements. The key appears to be maintaining the cognitive struggle that fuels creativity while using AI to solve specific problems rather than bypass the creative process entirely. As these tools become more sophisticated, we'll need to be increasingly intentional about protecting the messy, inefficient, but ultimately more rewarding parts of thinking for ourselves.

· 3 min read
Gaurav Parashar

The study revealed distinct neural patterns between participants using search engines versus AI for writing tasks. Those relying on search engines showed heightened beta wave activity, particularly in visual processing and integration areas, suggesting active engagement with multiple information sources. In contrast, AI users exhibited weaker theta wave connectivity, indicating reduced deep cognitive processing and memory formation. This neurological difference mirrors the practical experience of researching versus receiving answers: one requires active synthesis while the other emphasizes evaluation. The brain appears to treat these as fundamentally different cognitive activities, not just variations of the same process.

Search engine use activated parietal and occipital regions associated with visual scanning and spatial reasoning. This makes sense given the need to navigate search results, assess webpage layouts, and synthesize information from multiple tabs or sources. The cognitive load was distributed across perception, comprehension, and decision-making networks. AI assistance, by contrast, concentrated activity in frontal evaluation areas as users assessed the quality of generated content rather than its origin. The reduced theta activity suggests less engagement of the hippocampal memory system, potentially explaining why AI-assisted work feels less personally memorable or owned.

The temporal dimension of these activities also differs. Search engine use follows a nonlinear, investigative rhythm - querying, skimming, returning to sources, and gradually building understanding. This stop-start pattern appears to encourage neural plasticity as the brain makes and remakes connections between concepts. AI interactions tend toward linear efficiency: prompt, response, refinement. While productive, this streamlined exchange may bypass some of the cognitive benefits of struggle and discovery. The study's EEG readings show search engine users maintaining more persistent connectivity between brain regions, while AI users' patterns were more transient and task-specific.

These findings have implications for how we approach learning and problem-solving. Search engines foster what might be called "investigative cognition" - skills in sourcing, comparing, and synthesizing information. AI promotes "evaluative cognition" - skills in assessing, editing, and applying pre-formed solutions. Both are valuable, but they develop different mental capacities. In educational contexts, this suggests a need for balance between letting students find information and having it provided to them. The neural evidence indicates these approaches aren't interchangeable in terms of cognitive development, even when they produce similar end results.

What emerges is a picture of complementary rather than competing tools. Search engines exercise our information-gathering and critical thinking muscles, while AI tests our judgment and refinement abilities. The study participants who performed best overall were those who used both methods strategically - researching broadly before turning to AI for refinement. This hybrid approach seemed to engage the widest range of cognitive processes while maintaining personal investment in the work.

· 3 min read
Gaurav Parashar

The study revealed a curious psychological effect of using AI for writing: participants who relied on ChatGPT consistently reported feeling less ownership over their work compared to those who wrote unaided. This wasn't just a subjective impression - it manifested in concrete ways, like their inability to recall specific passages from their own essays minutes after writing them. The brain scans showed corresponding differences, with the AI-assisted group displaying weaker activity in regions associated with personal memory encoding and emotional connection to content. It suggests that when we outsource the creative process, we may be outsourcing part of our psychological investment as well.

This phenomenon extends beyond writing. We've all experienced how personally crafted solutions stick in memory better than borrowed ones, or how a hand-assembled piece of furniture creates a different attachment than a store-bought one. The neurological basis appears similar: the more cognitive effort we expend in creation, the stronger the neural pathways we build around that creation. When AI generates content for us, we're essentially adopting someone else's neural patterns rather than forming our own. The result is work that may be technically proficient but feels strangely disconnected from ourselves, like wearing clothes tailored for someone else's body.

The ownership illusion becomes particularly problematic in learning contexts. Students using AI for assignments often report feeling like they haven't truly mastered the material, even when their outputs are correct. This aligns with the study's findings about memory retention - the unaided writers could recall their arguments and phrasing more accurately because they'd formed those connections themselves. There's an important distinction between knowing information and knowing how to produce it, between having access to answers and possessing the ability to generate them. AI blurs this line in ways that might undermine long-term learning.

What's most concerning is how quickly this effect takes hold. The study participants developed reduced ownership feelings after just a few AI-assisted writing sessions. This rapid adaptation suggests our brains are eager to offload cognitive labor when given the chance, prioritizing efficiency over engagement. It raises questions about what might happen to creative confidence and intellectual autonomy after prolonged AI use. Will we eventually feel like caretakers rather than creators of our own work? The participants who edited AI outputs rather than copying them verbatim showed slightly better retention, hinting that active engagement might mitigate some of these effects.

The challenge moving forward will be finding ways to use AI that preserve our sense of authorship while still benefiting from its capabilities. This might mean using it for research and ideation but not generation, or employing it in iterative rather than wholesale ways. The study's garden analogy holds true: there's value in both growing plants and arranging store-bought flowers, but only one fosters the deeper connection that comes from nurturing something from seed. As AI becomes more embedded in creative processes, we'll need to be intentional about what parts of the work we keep for ourselves, not because the AI can't do them, but because we shouldn't lose the ability to.

· 3 min read
Gaurav Parashar

The EEG results from the study reveal a clear distinction between writing with and without AI assistance. Participants who composed essays unaided showed significantly stronger neural connectivity, particularly in theta and alpha frequency bands. These brainwave patterns are associated with deep cognitive processing, memory formation, and creative thinking. In contrast, those using ChatGPT exhibited weaker overall brain connectivity, suggesting their neural engagement was more superficial. The difference resembles what we see when comparing active problem-solving to passive information consumption. One builds neural pathways while the other merely utilizes them.
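For readers unfamiliar with the band terminology, the theta and alpha measures discussed here are, at their simplest, spectral power within conventional frequency ranges (theta roughly 4-8 Hz, alpha roughly 8-12 Hz). The following is a minimal, hypothetical sketch on synthetic data, using only NumPy; real EEG analysis relies on dedicated pipelines (filtering, artifact rejection, connectivity metrics) well beyond raw band power.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Sum spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum()

fs = 256                          # samples per second
t = np.arange(0, 4, 1.0 / fs)    # four seconds of synthetic data
rng = np.random.default_rng(0)
# Synthetic trace dominated by a 6 Hz (theta-range) oscillation plus noise.
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(len(t))

theta = band_power(eeg, fs, 4, 8)    # conventional theta band
alpha = band_power(eeg, fs, 8, 12)   # conventional alpha band
print(theta > alpha)  # → True: the 6 Hz component lands in the theta band
```

The study's connectivity findings go further than this single-channel power estimate, comparing how activity in one region co-varies with another, but the band definitions are the same.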

What's particularly interesting is how these neural patterns correlate with subjective experience. The Brain-only group reported greater mental effort during writing, yet their brain activity showed more coherent communication between regions. This aligns with research on flow states, where challenge and skill balance produces optimal engagement. The AI-assisted group experienced less strain, but their brain activity appeared fragmented, with reduced coordination between frontal and temporal lobes. It's as if their cognition was divided between generating ideas and evaluating the AI's suggestions, never fully committing to either process.

The theta band findings are especially noteworthy. Strong theta activity in the unaided writers suggests robust working memory engagement and internal focus. This is the brainwave pattern observed during deep concentration, meditation, and complex problem-solving. The AI users' weaker theta connectivity implies they weren't maintaining the same level of sustained attention or mental integration. Their experience was perhaps more akin to editing than composing, with less need to hold multiple concepts in mind simultaneously. The convenience of AI may come at the cost of this valuable cognitive exercise.

These neural differences persisted beyond the writing task itself. In follow-up assessments, the Brain-only group demonstrated better recall of their own writing and stronger feelings of ownership over their work. This suggests that the depth of initial neural engagement affects long-term memory encoding and personal connection to creative output. The implications extend beyond writing - any cognitive task we outsource to AI might fail to produce the same neural imprint as doing it ourselves. There's a neurological basis for why easy work often feels less meaningful or memorable.

The study doesn't argue against AI tools, but it does highlight a tradeoff. Just as physical exercise requires actual movement of muscles, cognitive development seems to require genuine mental effort. Perhaps the solution lies in intentional use - employing AI for certain tasks while preserving others for unaided work. The brain's plasticity means we can likely maintain neural engagement by choosing when and how we use these tools, rather than defaulting to automation for everything. The key is being aware that convenience has a neurological cost we're only beginning to understand.

· 3 min read
Gaurav Parashar

It’s easy to think of tools like ChatGPT as pure productivity boosters: type a prompt, get coherent text, save time. But a recent study tracking brain activity during essay writing suggests there’s a hidden cost. Participants who used AI showed weaker neural connectivity in key regions associated with memory and critical thinking compared to those writing unaided. The more they relied on AI, the less their brains engaged in the deep, effortful work of composition. It’s not just about the output; it’s about what happens to your cognitive processes when you outsource thinking. The study calls this "cognitive debt," a gradual erosion of mental faculties that comes from leaning too heavily on automation. Like skipping the gym because you’ve bought a wheelchair, convenience can quietly undermine capability.

What struck me was how participants described their relationship to the essays they’d written with AI. Many struggled to recall their own arguments or even verbatim sentences minutes after finishing. Some admitted they felt little ownership over the work, as if they’d curated rather than created it. The EEG data mirrored this: the AI group’s brain activity resembled that of an editor, not a writer—more evaluation, less generation. There’s an obvious parallel to how we use GPS and lose our sense of direction, or how spellcheck weakens spelling. The brain seems to treat externally sourced ideas as rentals, not possessions. When you don’t sweat the details, they don’t stick.

The counterintuitive part? Participants using AI reported higher satisfaction with their essays. The work was polished, structurally sound, and technically proficient—everything we’re taught to value. But the human graders noticed something off. They described these essays as "soulless," lacking the quirks and originality of unaided writing. It’s a tension I’ve felt myself: the smoother the process, the more generic the result. AI excels at producing the average, but the average is forgettable. The study’s Brain-only group, for all their typos and awkward phrasing, had stronger activation in creative networks. Their struggle showed up on the page—and in their brains—as something unmistakably theirs.

There’s a lesson here about the difference between efficiency and mastery. Shortcuts get the job done, but they don’t build the mental infrastructure for doing it better next time. The study’s most worrying finding was what happened when AI users switched to writing without help. Their brain activity didn’t bounce back to the level of those who’d practiced unaided from the start. It’s as if the AI had done the mental heavy lifting for them, leaving their own muscles underdeveloped. This isn’t an argument against tools—it’s a case for mindful use. Maybe some tasks are worth the friction, not despite the effort but because of it.

The brain adapts to what we ask of it. The question is what we want it to become.

I’ve started leaving gaps in my workflow where AI could easily slot in. A paragraph written from scratch before tweaking it with suggestions, or a problem solved manually before checking the answer. The goal isn’t to reject help but to stay in dialogue with it. The study’s participants who used AI critically—questioning outputs, rewriting chunks—showed more ownership than those who copy-pasted. That’s the balance I’m after: tools as collaborators, not crutches.

· 2 min read
Gaurav Parashar

The constant influx of video, music, movies, podcasts, and notifications creates a perpetual state of stimulation, fundamentally altering our cognitive engagement. This continuous stream, amplified by the internet, presents a significant challenge to sustained focus and deep work. The pervasive nature of these digital distractions raises a critical question about our ability to find contentment and purpose independent of online connectivity.

Our reliance on immediate digital gratification has evolved to a point where uninterrupted stretches of quiet contemplation or focused effort feel increasingly alien. The brain, accustomed to rapid-fire information and novel stimuli, struggles to adapt to environments devoid of constant digital input. This shift is not merely a matter of preference but reflects a neurological reshaping influenced by habitual exposure to high-stimulus digital content. The capacity for internal reflection and original thought may diminish when external entertainment sources are always readily available.

Consider a scenario where internet access is suddenly unavailable. The initial reaction for many would likely be a sense of unease or boredom, stemming from a dependency on digital channels for entertainment and information. This dependency highlights a subtle yet profound alteration in how we perceive and engage with our immediate surroundings. The absence of digital noise reveals the extent to which we have externalized our amusement, relying on devices rather than internal resources or real-world interactions for engagement.

This pervasive stimulation impacts not only individual focus but also the collective capacity for critical thinking and nuanced understanding. Complex issues are often reduced to soundbites or sensationalized clips, catering to short attention spans. The continuous flow of information, while seemingly enriching, can paradoxically limit depth of comprehension and encourage a superficial engagement with ideas. Navigating this environment requires a deliberate re-evaluation of how we allocate our attention and where we seek intellectual and emotional fulfillment. This involves a conscious effort to disengage from constant stimulation, allowing for periods of unstructured thought and genuine connection with the non-digital world. The ability to find enjoyment and meaning without the crutch of perpetual digital entertainment is an important measure of our adaptability in an increasingly connected, yet potentially distracting, reality.

· 3 min read
Gaurav Parashar

The launch of ChatGPT agent feels like a significant inflection point for how one interacts with artificial intelligence. This isn't just about better conversational abilities; it's about a shift from a responsive tool to a proactive agent that can think and act independently. The unified agentic system, bringing together capabilities like web interaction (Operator), deep research, and ChatGPT's core intelligence, means the AI can now approach tasks with a broader, more integrated set of skills. It operates on its own virtual computer, making decisions about which tools to use—visual browser, text-based browser, terminal, or even API access—to complete a given instruction. This level of autonomy represents a material change in the AI landscape, moving beyond simple information retrieval or content generation.

The practical implications of this agentic capability are immediately apparent. Tasks that previously required multiple steps, often jumping between different applications or browser tabs, can now theoretically be delegated to ChatGPT. The examples provided—planning and buying ingredients for a meal, analyzing competitors and creating a slide deck, or managing calendar events based on news—highlight a move towards more complex, real-world problem-solving. This hints at a future where the AI isn't just an assistant but a genuine collaborator, capable of executing entire workflows. It implies a reduction in friction for digital tasks, allowing one to focus more on higher-level strategic thinking rather than the granular execution.

A key aspect is the shift in control dynamics. While the agent operates autonomously, the user retains oversight. The ability to interrupt, clarify, or completely change course mid-task is crucial. This iterative, collaborative workflow means the AI can proactively seek additional details when needed, ensuring alignment with the user's goals. It’s not a black box; there's a visible narration of what ChatGPT is doing, and the option to take over the browser or pause tasks ensures transparency and accountability. This balance between AI autonomy and human control seems critical for building trust and managing the inherent risks of such powerful tools.

However, the experimental nature of this technology, as cautioned by OpenAI, cannot be overlooked. While the advancements are impressive, relying on it for "high-stakes uses or with a lot of personal information" warrants considerable caution. The potential for prompt injection or unintended consequences remains a factor. Safeguards are in place, including rigorous security architectures and training to prevent misuse, particularly in sensitive domains. Yet, as with any nascent technology, understanding its limitations and exercising careful judgment in its application is paramount. The system is designed to ask for explicit user confirmation before taking "consequential" actions, which is a sensible measure.

This evolution of ChatGPT into a thinking and acting agent fundamentally alters the user-AI interaction model. It transitions from a command-and-response dynamic to one of delegation and supervision. The AI is no longer just a source of information or a content generator; it's now a doer, capable of navigating complex digital environments to achieve specified outcomes. This shift will likely redefine productivity tools, pushing them towards more integrated, intelligent systems that can automate multi-step processes. The long-term impact on daily workflows, both personal and professional, will be interesting to observe as this technology matures and becomes more widely adopted.