The 70.9% Paradox: When AI Matches Experts—And When It Fails Catastrophically

In 2005, a freestyle chess tournament attracted grandmasters, supercomputers, and everyone in between. The rules were simple: any combination of humans and computers could compete. Chess purists expected the grandmasters with their cutting-edge hardware to dominate. Technology enthusiasts predicted pure chess engines would crush human opponents.

Both groups were wrong.

The winners were Steven Cramton and Zackary Stephen, two amateur players from New Hampshire with chess ratings that wouldn’t qualify them for most local tournaments. Using three consumer-grade PCs, they defeated grandmasters partnered with military-grade supercomputers. They beat pure chess engines running on hardware that cost more than most people’s houses. Their secret wasn’t chess mastery or computational power—it was knowing how to orchestrate their AI tools, when to trust which engine, and which strategic questions to explore.

As Garry Kasparov later observed, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”

That was 2005. Twenty years later, we finally have data on what AI can actually do in the economy. And the pace of change should both terrify and intrigue every middle manager reading this.

The New Benchmark That Changes Everything

In September 2025, OpenAI introduced something called GDPval—a benchmark that measures AI performance not on abstract reasoning or exam-style questions, but on actual economically valuable work. Real tasks created by professionals with an average of 14 years of experience across 44 occupations: legal briefs, engineering blueprints, customer support conversations, sales forecasts, medical assessments, marketing plans.

These aren’t toy problems. They’re the tasks that contribute $3 trillion annually to the U.S. economy. The tasks that define knowledge work. The tasks that, until very recently, we assumed required human expertise, judgment, and years of training.

When GDPval launched in September, the best AI models were matching human experts roughly 50% of the time. Impressive, and close to parity, but not yet clearly ahead of human performance overall.

Then, just three months later in December 2025, OpenAI’s GPT-5.2 model achieved something remarkable: a 70.9% win or tie rate against human experts. In ninety days, AI jumped from rough parity to clear superiority on professional knowledge work tasks. And it produces that work roughly eleven times faster and at one-hundredth the cost.
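
To make the metric concrete: results like this come from graders comparing an AI-produced deliverable with a human expert’s and recording a win, a tie, or a loss for the model. Below is a minimal sketch of how such a win-or-tie rate is tallied; the verdicts are invented for illustration, and this is not OpenAI’s grading code.

```python
# Toy illustration: tallying a "win or tie" rate from per-task grader verdicts.
# The verdict list is made up for this example; it is not GDPval data.
from collections import Counter

# One verdict per task: did the model's deliverable win, tie with, or lose to the expert's?
verdicts = ["win", "tie", "loss", "win", "win", "tie", "loss", "loss", "win", "win"]

counts = Counter(verdicts)
win_or_tie_rate = (counts["win"] + counts["tie"]) / len(verdicts)
print(f"Win-or-tie rate against human experts: {win_or_tie_rate:.1%}")  # 70.0% here
```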

If you’re a middle manager responsible for tasks that can be clearly defined, measured, and evaluated—you should be paying attention.

The Number That Should Terrify You (Because of How Fast It’s Moving)

Seventy percent. That’s past the tipping point. That’s not “AI is getting there” or “AI shows promise.” That’s AI demonstrably outperforming humans on most professional knowledge work tasks that can be tested.

But here’s what should really get your attention: the speed. Three months ago, AI was at rough parity with humans. Now it’s clearly superior. That’s not a gradual slope—that’s a vertical climb.

The economic pressure is immediate and real. When you can get expert-level output in minutes instead of days, at a fraction of the cost, why wouldn’t you automate? The spreadsheet practically writes itself: 100x cost reduction, 11x speed improvement, 70% reliability. For any CFO looking at that math, the decision seems obvious.

But here’s where the story gets interesting—and where the chess lesson from 2005 becomes critically important.

Because hidden in that 70.9% success rate is a catastrophic failure mode that changes everything.

The Plot Twist: When AI Fails, It Fails Spectacularly

GDPval’s analysis revealed something that should make every executive pause before clicking “automate everything.” The 70.9% figure doesn’t tell the whole story.

Here’s what matters: of the AI outputs evaluated, roughly 27% were classified as “bad”—meaning not fit for use—and 3% were classified as “catastrophic”—meaning they could cause actual harm if deployed.

But here’s the more subtle issue: even within the 70% that win or tie with human experts, the quality isn’t uniform. A deliverable might be 70% excellent and 30% flawed. An AI-generated legal brief might nail seven arguments but miss a critical precedent. An engineering blueprint might specify correct dimensions but overlook a safety requirement. A customer service response might be 80% perfect but include one sentence that violates company policy.

Think about what that means in practice. Imagine you’re a law firm that starts using AI for brief writing. Most briefs look great—professional, well-researched, properly formatted. But buried in some of them are mistakes that could get your client sanctioned or your firm disciplined. The AI doesn’t flag these errors. It presents everything with the same confidence.

Or you’re a hospital deploying AI for patient documentation. Most notes are thorough and accurate. But occasionally, critical information is omitted or mischaracterized—and nothing in the system signals which notes need extra scrutiny.

This isn’t like having a junior employee who needs supervision. When humans make mistakes, they’re usually within a reasonable margin of error. They might miss something or make a suboptimal choice, but they rarely produce work that’s fundamentally dangerous. AI, by contrast, generates outputs that look confident and competent—until they catastrophically aren’t.

The math suddenly looks different. It’s not just about whether AI can do the work. It’s about whether you can afford the cost of catching the failures before they cause harm.
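
To see how, here is a back-of-the-envelope sketch in Python. It combines the figures cited above (roughly one-hundredth the cost, about 70% usable, 27% bad, 3% catastrophic) with assumed review, rework, and failure costs. Every dollar amount, the reviewer catch rate, and the harm cost are illustrative assumptions, not measured data.

```python
# Back-of-the-envelope comparison: human-only work vs. AI-plus-human-review.
# Dollar figures, catch rate, and harm cost are assumptions made up for illustration;
# only the outcome mix echoes the figures cited in the article.

HUMAN_COST = 1_000.0            # assumed cost of one expert-produced deliverable
AI_COST = HUMAN_COST / 100      # "one-hundredth the cost"

# Approximate outcome mix for AI outputs, per the article
P_GOOD, P_BAD, P_CATASTROPHIC = 0.70, 0.27, 0.03

REVIEW_COST = 150.0             # assumed: expert time to check every AI output
REDO_COST = HUMAN_COST          # outputs judged not fit for use get redone by a human
HARM_COST = 50_000.0            # assumed: damage from a catastrophic failure that ships
CATCH_RATE = 0.9                # assumed: share of catastrophic outputs the reviewer catches

human_only = HUMAN_COST

ai_with_review = (
    AI_COST
    + REVIEW_COST
    + P_BAD * REDO_COST                              # bad outputs are redone
    + P_CATASTROPHIC * CATCH_RATE * REDO_COST        # caught catastrophes are redone too
    + P_CATASTROPHIC * (1 - CATCH_RATE) * HARM_COST  # the ones that slip through
)

print(f"Human only:        ${human_only:,.0f} per deliverable")
print(f"AI + human review: ${ai_with_review:,.0f} per deliverable")
```

Under these made-up numbers the AI workflow is still cheaper, but nowhere near one-hundredth the cost, and the answer swings heavily on how expensive an uncaught failure is and how reliably reviewers catch the 3%. That sensitivity is exactly where human judgment enters the picture.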

The Question That Should Keep You Up At Night

So here’s the real question: if AI can handle 70% of professional work tasks but includes subtle or catastrophic flaws that look indistinguishable from quality work—what does that make your job?

The answer is both sobering and liberating: your job becomes judgment.

Not judgment in the sense of deciding whether AI is “good” or “bad.” Not judgment as superiority or gatekeeping. Judgment as the practical skill of evaluating quality, accuracy, and appropriateness—the ability to look at an output and determine whether it’s actually fit for purpose.

Specifically:

  • Recognizing which tasks are safe to delegate to AI and which require human handling
  • Spotting the subtle errors that look correct but aren’t
  • Catching the 3% catastrophic failures before they cause harm
  • Evaluating whether an AI-generated deliverable actually solves the problem
  • Determining when an output is “good enough” versus when it needs human refinement

This is the new skill gap. Not whether you know how to prompt AI or use the latest tools. Whether you can evaluate the outputs well enough to make the whole system work—like Steven Cramton and Zackary Stephen orchestrating chess engines on their three PCs in 2005.

The Centaur Solution (And Why Process Beats Power—For Now)

After Kasparov’s defeat by Deep Blue, he didn’t retreat from AI—he invented a new form of chess called “Advanced Chess” or “Centaur Chess.” The name comes from the mythological centaur: half-human, half-horse, combining the strengths of both. In this context, it means human intelligence guiding and evaluating AI computational power—neither competing against each other, but working as an integrated team.

This is the solution we’re suggesting: not humans versus AI, but humans orchestrating AI through superior process and judgment.

Remember those amateur chess players? They didn’t beat grandmasters because they were better at chess. They beat grandmasters because they had a better process for integrating human judgment with AI capabilities. They knew when to trust which engine. They knew how to combine outputs. They knew which questions to explore.

The grandmasters, ironically, struggled with this. They were so confident in their chess expertise that they either over-trusted the machines or ignored them entirely. The amateurs, by contrast, understood something fundamental: success wasn’t about being smarter than the AI or having more powerful tools. It was about orchestrating the human-AI partnership effectively.

Right now, today, the 70.9% number makes this orchestration essential. Yes, AI can match experts on most tasks. But someone still needs to evaluate which outputs are in the 70% and which are in the catastrophic 3%. Someone needs to determine when AI’s confident answer is actually correct versus when it’s confidently wrong.

That evaluation skill—that judgment—is what makes the system work. It’s what turns a 70% success rate with occasional catastrophic failures into a reliable business process.

The Honest Path Forward

Here’s what we know today, in December 2025:

  • AI performance on professional tasks jumped from 50% to 70.9% in just three months
  • Quality issues—both subtle and catastrophic—remain present even in “successful” outputs
  • The rate of improvement suggests these numbers will continue climbing rapidly

Here’s what we don’t know:

  • How quickly AI will continue improving (though recent progress suggests: very fast)
  • Whether the quality control challenge will get easier or harder as AI improves
  • What the next three months will bring, let alone the next year

But here’s what seems increasingly clear: the future of knowledge work isn’t human versus machine. It’s not even human or machine. It’s human and machine, with humans providing the judgment layer that catches errors, evaluates quality, and determines fitness for purpose.

The question isn’t whether AI will impact your job. At 70.9% and climbing fast, that question is answered. The question is whether you’re developing the judgment skills to evaluate AI outputs effectively—knowing when to trust them, when to refine them, and when to start over.

Twenty years ago, two amateur chess players taught us that weak humans with machines and better process could beat strong humans with machines and inferior process. Today, with AI capabilities advancing every quarter, that lesson about process and judgment has never been more relevant.

The difference is this: in 2005, the chess engines were relatively stable. Today, the AI is improving so fast that the game itself keeps changing. Which means the judgment skills—the ability to evaluate quality, spot errors, and determine appropriateness—become even more critical.

Because in a world where AI can handle 70% of tasks (and counting), the work that remains is precisely the work that requires human judgment about whether the AI’s work is actually good enough.

Sources & Further Reading

  • ChessBase: “Dark horse ZackS wins Freestyle Chess Tournament” (June 2005)
  • OpenAI: “Measuring the performance of our models on real-world tasks” (GDPval introduction)
  • Kasparov, Garry: Various writings on centaur chess and human-AI collaboration
  • OpenAI: GPT-5.2 GDPval benchmark results (December 2025)

This is the first in a series exploring what AI’s measured capabilities tell us about the future of knowledge work, human judgment, and the evolving nature of professional expertise.

#HumanPurpose #HumanJudgment #FutureOfWork #AI #ExperienceGap #Leadership #DecisionMaking #ProfessionalDevelopment

The University in the Age of AI: Reimagining Higher Education

Higher education stands at a critical crossroads. The traditional model of knowledge transmission – where universities were the primary gatekeepers of information and expertise – is rapidly becoming obsolete in an era of AI-powered, democratized learning.

The Changing Landscape of Knowledge

Historically, universities served three fundamental purposes:

  1. Knowledge Preservation
  2. Knowledge Transmission
  3. Credentialing and Skill Validation

Artificial intelligence is fundamentally disrupting each of these pillars. With vast information repositories instantly accessible and AI systems capable of explaining complex concepts, the traditional lecture model is becoming increasingly irrelevant.

Beyond Information: The New Value Proposition

In a world where information is free and instantaneous, universities must pivot from being information providers to becoming transformation environments. Their true value will emerge from:

1. Critical Thinking Development

  • Teaching students how to question, analyze, and synthesize information
  • Developing human judgment in an AI-saturated world
  • Cultivating skills AI cannot replicate: emotional intelligence, creative problem-solving, ethical reasoning

2. Collaborative Learning Spaces

  • Creating environments for human interaction
  • Facilitating deep, nuanced discussions
  • Developing interpersonal skills and collaborative capabilities

3. Experiential Learning

  • Providing real-world project experiences
  • Connecting theoretical knowledge with practical application
  • Offering mentorship and guided exploration

The Credentialing Revolution

Traditional degrees are losing their monopoly. The future likely involves:

  • Micro-credentials
  • Skill-based certifications
  • Continuous learning pathways
  • Dynamic, portfolio-based assessment

The Philosophical Challenge: Positive Impact as the Central Mission

Beyond technological adaptation lies a more profound imperative: cultivating positive impact on humanity. In an era of unprecedented technological capability, universities must become more than knowledge centers – they must become crucibles of human potential dedicated to meaningful global transformation.

Positive Impact as the Core Educational Paradigm

Higher education’s future is fundamentally about developing human capacity for:

  • Ethical problem-solving
  • Interdisciplinary collaboration
  • Meaningful innovation
  • Deep empathetic understanding
  • Responsible technological development

This approach transforms universities from passive knowledge repositories to active “impact laboratories” where students learn to:

  • Address global challenges
  • Create sustainable solutions
  • Prioritize collective human welfare
  • Break down academic silos
  • Develop technologies that enhance human dignity

Human Judgment in the AI Era

Universities must become centers of human potential development. This means:

  • Teaching meta-learning skills
  • Developing adaptability
  • Cultivating curiosity
  • Nurturing interdisciplinary thinking

Practical Recommendations for Universities

  1. Redesign curriculum to emphasize:
  • AI interaction skills
  • Critical thinking
  • Interdisciplinary problem-solving
  • Emotional and social intelligence
  2. Create flexible learning models:
  • Modular courses
  • Lifelong learning programs
  • Industry-aligned skill development
  3. Invest in human-centric technologies:
  • Advanced simulation environments
  • Collaborative digital platforms
  • AI-assisted personalized learning

Redefining Success: Beyond Individual Achievement

The metric of educational success shifts from individual credentials to collective human advancement. Universities must now ask:

  • How does this learning serve humanity?
  • What global challenges can we address?
  • How can we develop solutions that elevate human potential?

Conclusion: The University as a Catalyst for Human Flourishing

The future of higher education is not about competing with AI, but about unleashing human potential where technology cannot reach. We need institutions that don’t just transfer information, but transform individuals into agents of meaningful change.

The university of tomorrow will be defined not by what students know, but by their capacity to imagine, create, and implement solutions that genuinely improve the human condition.

Our greatest challenge – and opportunity – is to reimagine education as a powerful instrument of positive human impact.

The Pulse: The Acceleration Paradox

The Pulse: AI’s Human Impact Report — Oct 27, 2025

The Acceleration Paradox

Real-time, verified, multi-source reporting
This Week’s Pulse

Education pivot: High-school STEM programs are shifting emphasis from coding drills to data literacy and critical analysis as AI handles more routine programming — according to Education Week.

Global health governance: Regulators and partners from 40+ countries urged a collaborative approach to safe, ethical and equitable AI in health at WHO’s AIRIS 2025 summit in Incheon — according to the World Health Organization.

Workload reality: Elite AI teams at top labs and startups are logging 80–100-hour weeks amid intensifying competition — according to The Wall Street Journal.

Productivity paradox: “Workslop” — polished, low-value AI output — is spreading, prompting companies to add quality gates and don’t-use-AI-here rules — according to Bloomberg and Harvard Business Review.

Infrastructure pressure: The energy and water demands of hyperscale AI data centers are sparking community and policy debates, including around facilities like “Stargate” — according to WIRED.

“Acceleration without assimilation creates disorientation. Adaptation demands reflection.”
60 Seconds Overview
  • STEM evolution: Data literacy & judgment skills rise over coding drills (Education Week).
  • Health AI cooperation: WHO AIRIS 2025 calls for safe, equitable AI in health (WHO).
  • 100-hour weeks: Elite AI teams push wartime schedules (WSJ).
  • Workslop backlash: Quality gates + “don’t-use-AI” zones (Bloomberg, HBR).
  • Data-center debate: Energy & water scrutiny intensifies (WIRED).
Jobs & Skills Watch

Skill reallocation: Managers are retraining teams to evaluate AI outputs (fact-checking, standards alignment) rather than just generate drafts. Critical-thinking and domain judgment are rising in value as content automation expands.

Worker strain: Extreme schedules in top AI teams underscore burnout risk and retention concerns — according to The Wall Street Journal.

Workslop response: Enterprises are introducing AI impact audits so outputs must show measurable efficiency or customer value to stay in workflows — according to Bloomberg and HBR.

Policy & AI Announcements

At the WHO AIRIS 2025 Summit, delegates urged international cooperation on registries, bias audits, training standards, and human oversight for health-AI — according to WHO. The framing treats AI as part of a human health system, not a bolt-on tool.

“Governance is shifting from principles to procedures — and that’s progress.”
Worker Impact

Fatigue curve: Reports of 80–100-hour weeks in elite AI teams highlight the human costs of acceleration — according to WSJ.

AI-free focus: Leaders adopt deep-work windows and don’t-use-AI zones to restore attention and quality — according to HBR.

What to Watch
  • Data-center siting & sustainability: Energy/water tradeoffs in new regions (WIRED).
  • Public-sector AI audits: Follow-through after WHO’s collaboration call (WHO).
  • K-12 guidance: District policies as classrooms rebalance STEM toward data literacy (Education Week).
  • Occupational health: HR tools for burnout tracking in AI-heavy teams (WSJ).
The Deeper Question

How fast is too fast? Students must unlearn old curricula as teachers retrain; engineers stretch their limits in the name of progress; governments rush to govern what they barely grasp. Acceleration without assimilation creates disorientation, but adaptation demands reflection. The real measure of AI success may be our ability to slow down just enough to decide what should not be automated.

This week’s question: Can we design progress that respects the human clock?

Thanks for reading The Pulse. For deeper dives, listen to our podcast, “Beyond the Code: AI’s Role in Society.” Want help building critical-use AI into your workflow? Book a consult or subscribe for next week’s human-first edition.

© 2025 The Pulse: AI’s Human Impact Report. All rights reserved.

The Pulse: Human Adaptation — Learning to Live with AI

The Pulse: AI’s Human Impact Report — Oct 20, 2025

Human Adaptation — Learning to Live with AI

Hybrid Edition • Real-time, verified, multi-source reporting
This Week’s Pulse

The center of gravity this week is education—not as a tech showcase but as a human-systems challenge. The U.S. is moving to professionalize AI literacy for teachers through AFT/NEA training hubs funded by Microsoft, OpenAI, and Anthropic, according to the Associated Press.

Google committed $1 billion to AI education and job training, signaling that teacher enablement is strategy, not charity, according to Google and Reuters.

UK teens report that AI can erode study habits and original thinking, and are calling for clear rules and guidance, according to The Guardian. In San Francisco, the AI-centric Alpha School accelerates learning through personalization and coach models—critics question its equity and developmental fit.

“Adoption doesn’t equal learning—or value. Without redesign, AI just accelerates whatever system you already have.”

In workplaces, leaders face a productivity paradox: AI boosts activity but not outcomes. “Workslop” (polished, low-value output) spreads, according to Bloomberg, while HBR urges quality gates and use-and-don’t-use rules.

60 Seconds Overview
  • Teacher training goes big: Microsoft, OpenAI, and Anthropic fund large-scale educator training, according to AP.
  • Google’s $1B pledge: $1B for education and training, according to Google and Reuters.
  • AI-first schooling debate: Alpha School sparks equity concerns, according to The Guardian.
  • Students ask for rules: Pupils want responsible AI use guidance, according to The Guardian.
  • Workplace reality check: “AI workslop” rises, according to Bloomberg & HBR.
  • Guardrails & misuse: OpenAI disrupts malicious networks; EU AI Act obligations in force.
Jobs & Skills Watch

Hiring shifts: Growth in AI platform engineering, data stewardship, and AI compliance tracks new guardrails and training, according to Reuters and EU AI Act guidance.

What’s automating: drafting, summarization, and media creation compress entry-level roles, according to HBR.

Skills gaining value: critical-use literacy, facilitation, and governance expertise, according to AP and EU Commission.

Skills losing value: “templateable” outputs and unreviewed solo work, according to Bloomberg.

Policy & AI Announcements

Classroom AI: Public-private teacher training hubs expand, according to AP. Google widens Gemini for Education access, according to Google.

EU AI Act cadence: Prohibitions & literacy obligations since Feb 2025; GPAI obligations since Aug 2025, according to EU Commission.

Platform governance: OpenAI disrupts 40+ malicious networks, according to OpenAI.

Adult-gated policy: Sam Altman confirms ChatGPT erotica for verified adults, according to TechCrunch and The Verge.

“The new curriculum isn’t just math and reading. It’s judgment—when to trust the model, and when to override it.”
Worker Impact

AI-assisted productivity collides with messy human workflows. “Workslop”—polished but low-value output—erodes trust, according to Bloomberg. HBR urges quality gates and defined use-cases.

Teachers save drafting time but must realign AI lesson plans with goals, according to Ars Technica. Students echo the need for critical-use pedagogy, according to The Guardian.

Platform hygiene—OpenAI’s network takedowns—reduces background risk, according to OpenAI.

What to Watch
  • Teacher-training ROI: Will results show measurable learning gains?
  • District guardrails: Procurement changes as adult-gated tools roll out.
  • EU AI Act guidance: Member-state implementations for AI literacy.
  • From workslop to workflow: Redesigns yielding true productivity.
  • Student voice: Surveys on AI’s impact on creativity and motivation.
The Deeper Question

If learning means choosing effort and wrestling with ideas, what happens when intelligence becomes ambient and frictionless? Teens say AI makes study “too easy.” That’s not nostalgia—it’s about meaning. We grow through intentional struggle.

The answer isn’t to avoid AI but to re-engineer goals around judgment, empathy, and originality. In schools, AI should be the sparring partner, not the answer key; in offices, the draft mule, not the decision-maker.

This week’s question: What constraints will we embrace so the human work remains ours?

Thanks for reading The Pulse. For deeper dives, listen to our podcast, “Beyond the Code: AI’s Role in Society.” Want help building critical-use AI into your workflow? Book a consult or subscribe for next week’s human-first edition.

© 2025 The Pulse: AI’s Human Impact Report. All rights reserved.

In Memory of Charlie Kirk

Humanity at Its Best

Yesterday, we lost Charlie Kirk—not just a political voice, but a human being who embodied something increasingly rare in our world: the courage to engage authentically with ideas and people, both those who disagreed with him and those with whom he disagreed.

Kirk’s “Prove Me Wrong” format wasn’t just clever branding—it was a declaration of faith in human discourse. He approached disagreement with curiosity rather than contempt, seeking to understand before seeking to be understood. When he sat under that tent at Utah Valley University, engaging with students who challenged his views, he was demonstrating something profound about what it means to be human.

The Gift of Genuine Dialogue

What made Charlie irreplaceable wasn’t the positions he held, but how he held them. He brought to every conversation a willingness to be genuinely present with other human beings, to risk being changed by encounter with different perspectives, and to treat even his opponents as fellow travelers in the search for truth.

This capacity for authentic engagement—for vulnerability in the face of disagreement—represents humanity at its finest. It requires intellectual courage to expose your ideas to challenge. It demands emotional maturity to remain gracious when others question what you hold dear. Most importantly, it asks us to see the person behind the position, to recognize our shared humanity even across deep differences.

What We’ve Lost

The shooter who killed Charlie Kirk attacked more than a person; they attacked the very possibility of civil discourse itself. In Charlie’s death, we’ve lost not just a voice, but a model of how human beings can engage with one another across difference without losing their dignity or their humanity.

Charlie showed us that it’s possible to hold strong convictions while remaining open to dialogue. He demonstrated that we can disagree passionately while still treating one another with respect. He proved that the pursuit of truth is not a zero-sum game, but a collaborative endeavor that requires the participation of people who see the world differently.

Honoring His Memory

As we mourn Charlie Kirk, we must also commit to preserving what he represented: the irreplaceable humanity that makes authentic dialogue possible. His legacy isn’t found in any particular political position, but in his approach to human engagement—the capacity to listen with genuine interest, to speak with honest conviction, and to treat every conversation as an opportunity to understand something new about the world and the people in it.

This is what we must not let die with him: the belief that civil discourse is possible, that good people can disagree in good faith, and that our shared humanity is stronger than our political divisions.

Charlie Kirk believed in the power of conversation to bridge divides and illuminate truth. In his memory, let us recommit ourselves to the kind of dialogue he championed—graceful, authentic, and fundamentally hopeful about what human beings can accomplish when we engage with one another as fellow seekers of understanding.

Rest in peace, Charlie. Your example of humanity at its best will not be forgotten.

Solution Sunday: The “Is This Real?” Game

Turn Summer Downtime into Literacy Detective Work

The Challenge: Kids believe everything they read online, and summer screen time often means less critical thinking practice.

The Opportunity: Summer’s flexible schedule gives families perfect moments—car rides, park picnics, rainy afternoons—to build fact-checking skills that will serve kids for life.

The Solution: A fun family game using AI to create mystery statements that kids research and verify, turning them into information detectives.


How The Game Works

Step 1: AI Generates Mystery Statements

Ask ChatGPT or Claude to create a mix of true and false statements tailored to your child’s interests and reading level.

Sample Prompt for Ages 6-9:

“Create 5 fascinating statements about animals that kids would find interesting. Make 3 true and 2 false, but make them all sound believable. Keep language at a 2nd-3rd grade reading level.”

Sample Prompt for Ages 10-12:

“Generate 7 surprising facts about space exploration. Make 4 true and 3 false. Include some details that make the false ones tricky to spot. Use 5th-6th grade vocabulary.”

Sample Prompt for Ages 13+:

“Create 6 statements about historical events that sound amazing but might not be true. Mix real and fictional events. Make them engaging for teenagers who like surprising stories.”

Step 2: Present the Challenge

Read the statements aloud or write them on cards. Tell your kids: “Some of these are completely true, and some are made up. Your job is to figure out which is which!”

Step 3: Research & Verify

Give kids time to investigate using:

  • Library books (if at home)
  • Phone research (with parent guidance)
  • Asking other adults they trust
  • Looking up official sources

Step 4: The Big Reveal

Come back together and let each child share their verdict and reasoning. Then reveal the answers and celebrate good detective work!


Age-Appropriate Adaptations

Ages 5-7: “True or Silly?”

  • Use simple, concrete topics (animals, toys, food)
  • Make false statements obviously silly once investigated
  • Focus on “How do we find out?” rather than complex verification

Example Set:

  • ✅ TRUE: “Octopuses have three hearts”
  • ❌ FALSE: “Cats can see in complete darkness with no light at all”
  • ✅ TRUE: “A group of flamingos is called a flamboyance”
  • ❌ FALSE: “Dogs can only see in black and white”

Ages 8-11: “Fact Detective”

  • Include more nuanced true/false distinctions
  • Introduce concept of “mostly true but missing details”
  • Start teaching source evaluation

Example Set:

  • ✅ TRUE: “There are more possible chess games than atoms in the observable universe”
  • ❌ FALSE: “Sharks never sleep”
  • ✅ TRUE: “Honey never spoils if stored properly”
  • ❌ FALSE: “Lightning never strikes the same place twice”
  • ✅ TRUE: “A cloud can weigh more than a million pounds”

Ages 12+: “Misinformation Hunters”

  • Include statements that require checking multiple sources
  • Discuss bias and how “true” information can be misleading
  • Connect to current events and social media literacy

Example Set:

  • ✅ TRUE: “The Great Wall of China isn’t visible from space with the naked eye”
  • ❌ FALSE: “Einstein failed math in elementary school”
  • ✅ TRUE: “There are more trees on Earth than stars in the Milky Way galaxy”
  • ❌ FALSE: “We only use 10% of our brains”

Summer Settings & Variations

🚗 Car Ride Version

  • Prepare statement cards before leaving
  • Kids research at rest stops or when you arrive
  • Perfect for long drives to keep minds active

🏖️ Vacation Detective

  • Create statements about your destination
  • Research using hotel WiFi or visitor center resources
  • Make it part of exploring new places

🏠 Rainy Day Challenge

  • Generate statements about indoor topics (science, history, books)
  • Use home resources: encyclopedias, library books, online search
  • Make it a weekly tradition

🌳 Park Picnic Game

  • Focus on nature-based statements
  • Research using nature apps or field guides
  • Combine with outdoor observation

🎭 Family Game Night

  • Create themed rounds (sports, movies, history)
  • Keep score and rotate who presents statements
  • Make it competitive but collaborative

Building Real Skills

What Kids Learn:

  • Source evaluation: “Where did this information come from?”
  • Multiple verification: “Can I find this in more than one place?”
  • Question formation: “What should I search for to check this?”
  • Evidence weighing: “Which source seems most reliable?”
  • Healthy skepticism: “This sounds amazing—is it too good to be true?”

Parent Coaching Moments:

  • “What made you decide to believe/doubt that statement?”
  • “Where could we look to double-check this?”
  • “What questions should we ask about this source?”
  • “How can we tell if a website is trustworthy?”

Sample AI-Generated Statement Sets

Ocean Mysteries (Ages 8-12)

Ask AI: “Create 6 ocean facts for kids. Make 4 true and 2 false. Include some that sound unbelievable but are real.”

Possible Results:

  1. The ocean produces more than 50% of the world’s oxygen ✅
  2. There are underwater waterfalls in the ocean ✅
  3. Dolphins have names for each other ✅
  4. The deepest part of the ocean has been fully explored ❌
  5. Some fish can live for over 400 years ✅
  6. Seahorses are the fastest swimmers in the ocean ❌

Space Adventures (Ages 10-14)

Ask AI: “Generate 5 space facts that sound incredible. Make 3 true and 2 false. Include surprising details.”

Historical Surprises (Ages 12+)

Ask AI: “Create 6 statements about historical events that sound too wild to be true. Mix real and fictional events.”


Troubleshooting Common Challenges

“This is too hard!”

  • Start with more obviously false statements
  • Work together as a team initially
  • Celebrate the process, not just correct answers

“I can’t find the answer!”

  • Teach different search strategies
  • Show them how to rephrase questions
  • Make it okay to say “I’m not sure” and keep investigating

“The AI made mistakes!”

Sometimes AI generates incorrect “true” statements. This becomes a teaching moment: “Even AI can be wrong! That’s why we always check multiple sources.”


Making It Stick: Building the Habit

Start Small

  • Begin with 2-3 statements per session
  • Choose topics your child already loves
  • Keep sessions under 20 minutes initially

Create Rituals

  • “Mystery Monday” statements each week
  • Vacation tradition for each new city
  • Bedtime wind-down activity

Celebrate Success

  • Keep a “Detective Journal” of statements investigated
  • Award “Truth Seeker” badges for good questioning
  • Share favorite discoveries with extended family

Beyond the Game: Real-World Applications

As kids get comfortable with the game, connect it to:

  • School projects: “Let’s fact-check this before including it”
  • News stories: “Should we verify this before sharing?”
  • Social media: “How could we check if this viral post is true?”
  • Friend claims: “That sounds interesting—where did you hear that?”

Parent Success Stories

“My 9-year-old now automatically asks ‘How do we know that’s true?’ when she hears surprising facts. The game turned her into a natural skeptic in the best way.” —Maria, mom of two

“We started this on a road trip to Yellowstone. By the end of the week, my kids were fact-checking the park ranger! (Respectfully, of course.)” —David, dad of three

“The best part is watching my daughter teach her younger brother how to ‘be a detective.’ She’s become the fact-checker of the family.” —Sarah, homeschool mom


Getting Started This Week

  1. Choose your AI tool: ChatGPT, Claude, or similar
  2. Pick your child’s interest: Animals, sports, science, history
  3. Generate 3-5 statements using the prompts above
  4. Find 15-20 minutes during a natural family moment
  5. Present the challenge and research together
  6. Celebrate the detective work regardless of right/wrong answers

The Big Picture

In an age where information travels faster than verification, teaching kids to question, research, and think critically isn’t just academic—it’s essential life preparation. The “Is This Real?” game turns this vital skill into summer fun.

Remember: The goal isn’t to make kids suspicious of everything, but to help them become thoughtful consumers of information who know how to seek truth in a noisy world.

Your turn: Try the game this week and share your results! What statements surprised your family? What detective strategies worked best?

Connect with other families exploring AI-powered literacy at [your community platform]. Summer learning doesn’t have to feel like school—it can feel like an adventure.

I Didn’t Write This Paper – I Composed It: Redefining Creativity in the AI Age

A personal exploration of what it means to create when AI handles the execution

The Question That Stopped Me Cold

“Did you write your most recent white paper?”

My friend’s question yesterday was innocent enough, but it hit me like a brick wall. I found myself stumbling through an answer that felt both true and inadequate at the same time.

“Well, AI wrote it, but I…” I started, then stopped. “I mean, I used AI as a tool, but I spent hours…” Another pause. “It’s complicated.”

The conversation moved on, but the question lingered. Had I written the paper? In the most literal sense—fingers on keyboard, words appearing on screen—no, I hadn’t. But dismissing my role felt wrong too. I had spent hours in conversation with AI, bringing my critical thinking, life experience, and pattern recognition to bear on the content. I had shaped every argument, guided every direction, and made countless decisions about what belonged and what didn’t.

Later that evening, the right word finally came to me: I hadn’t written the paper. I had composed it.

Read the full whitepaper here:
https://avimaderer.com/the-experience-gap/

The Composer’s Role in the Age of AI

This distinction—between writing and composing—might seem semantic, but it’s actually profound. It points to a new form of creative collaboration that we’re all navigating but haven’t quite learned to articulate yet.

When a composer writes a symphony, they don’t physically play every instrument. They create the structure, choose the harmonies, guide the emotional arc, and make countless decisions about how ideas should flow together. The orchestra executes their vision, but no one questions who created the music.

Working with AI feels remarkably similar. I brought:

  • The conceptual framework – What questions needed exploring?
  • The narrative structure – How should ideas build on each other?
  • Critical synthesis – Which connections matter and why?
  • Experiential wisdom – What insights from my own life apply here?
  • Editorial judgment – What serves the reader and what doesn’t?

AI handled the execution—the actual sentence construction, formatting, and technical writing mechanics. But every substantive choice was mine.

Beyond “Human in the Loop”

The current discourse around AI creativity often falls into two camps: either AI is doing everything (threat narrative) or humans are completely in control (reassurance narrative). Both miss the nuanced reality of genuine human-AI collaboration.

The phrase “human in the loop” suggests we’re just quality control—reviewing and approving AI’s work. But that’s not what happened with my white paper. I wasn’t in the loop; I was conducting the orchestra.

I was:

  • Initiating every major direction
  • Questioning assumptions and pushing for deeper thinking
  • Connecting disparate ideas across domains
  • Filtering through my values and experience
  • Iterating toward a vision only I could see

This is composition, not editing. Creation, not curation.

A Framework for Creative Collaboration

If we’re going to navigate this new landscape of human-AI creativity, we need better language and clearer frameworks. Here’s how I’m starting to think about it:

Execution vs. Composition

Execution includes:

  • Sentence construction and grammar
  • Formatting and structure
  • Research compilation
  • Technical writing mechanics
  • Style consistency

Composition includes:

  • Conceptual framework development
  • Narrative arc creation
  • Critical analysis and synthesis
  • Value-based filtering and judgment
  • Creative direction and vision

The key insight: AI excels at execution but cannot compose without human intentionality, experience, and judgment.

The Four Pillars of AI-Assisted Composition

When I reflect on my white paper process, four distinct human contributions emerge:

  1. Vision Setting – What is this really about? What matters here?
  2. Connection Making – How do disparate ideas relate? What patterns exist?
  3. Experience Integration – What do I know from living that informs this?
  4. Value Filtering – What serves the reader? What aligns with my beliefs?

These capabilities remain uniquely human because they require lived experience, emotional intelligence, and the kind of contextual judgment that comes from being embedded in the world.

Practical Implications

This framework isn’t just philosophical—it has real implications for how we work, communicate, and understand our roles in an AI-enabled world.

For Professionals

When someone asks if you “wrote” something created with AI assistance, you can confidently say: “I composed it. AI handled the execution, but every substantive decision was mine.”

For Evaluating Creative Work

Instead of asking “Did a human write this?” we might ask: “Who provided the creative vision, made the connections, and exercised judgment about what matters?”

For Understanding Value

Your value as a creative professional isn’t in your typing speed or grammar skills—it’s in your ability to see patterns, make connections, integrate experience, and guide work toward meaningful outcomes.

The Bigger Picture

This shift from writing to composing reflects something larger happening across all creative fields. We’re moving from a world where human value came from executing tasks to one where it comes from creative direction, synthesis, and judgment.

The Bridge Generation—those of us navigating this transition—has a unique opportunity. We remember what pure human creation felt like, but we’re also learning to collaborate with AI in ways that amplify rather than replace our essential human capabilities.

The question isn’t whether AI will change how we create—it already has. The question is whether we can articulate and claim our evolving role as composers, conductors, and creative directors in this new landscape.

Moving Forward

I’m still learning to navigate these conversations about AI-assisted creation. But I’m getting more confident about claiming my role as a composer rather than apologizing for not being a traditional writer.

The next time someone asks if I wrote something, I’ll know exactly what to say: “I composed it through conversation with AI, bringing my experience, judgment, and vision to guide every substantive decision. AI was my instrument; I was the composer.”

That feels both honest and empowering. Most importantly, it feels true.

Read the full whitepaper here:
https://avimaderer.com/the-experience-gap/


What’s your experience with AI-assisted creativity? How do you describe your role when AI handles execution but you provide the vision and judgment? I’d love to hear how you’re navigating these questions.

Heartfelt Emotions, Gut Feelings, Head Knowledge: We Can’t Fight Biology

“We cannot fight biology. While AI is having massive effects on thought processes, thinking, and pattern recognition, AI will never—or will take a very long time, if ever—replace the truly human aspects of our existence.” Sam Altman (paraphrased)

Watch Here

Sam Altman speaking at a Federal Reserve conference this week.

This is good news.

When Sam Altman—CEO of OpenAI and one of the most influential figures in artificial intelligence—tells Federal Reserve officials that AI companies “cannot fight biology,” he’s delivering a message of profound optimism about humanity’s irreplaceable role in our AI future.

This isn’t fear-mongering from a technology skeptic. This is insider knowledge from someone building the most advanced AI systems on Earth. And his conclusion? No matter how sophisticated AI becomes, there’s something fundamentally unique about human intelligence that will always be essential.

The secret lies in how our entire body thinks.

The Biological Advantage AI Cannot Replicate

While AI excels at processing information and recognizing patterns, human intelligence operates on a completely different level. We don’t just think with our heads—we integrate information from three distinct neural networks that create the rich complexity of human wisdom:

Your Heart’s Neural Network contains approximately 40,000 neurons that sense, feel, learn, and remember independently. Your heart actually sends more signals to your brain than it receives, directly shaping emotional processing, attention, and perception. When we talk about “heartfelt emotions,” we’re describing real neural activity that influences every decision we make.

Your Gut’s Neural Network—the enteric nervous system—houses about 500 million neurons, more than your spinal cord. This “second brain” operates independently, influencing mood, immune response, and decision-making through direct communication with your head-brain. Those “gut feelings” aren’t metaphors—they’re sophisticated information processing that helps guide your choices.

Your Head’s Neural Network integrates signals from these other centers while handling analysis, language, and conscious reasoning. This is the kind of processing AI does exceptionally well—but it’s only one part of human intelligence.

The Integration That Makes Us Irreplaceable

Here’s why Altman is optimistic about humanity’s future: we don’t use these systems in isolation. Human intelligence emerges from the dynamic conversation between heart, gut, and head—creating something far more sophisticated than any single system could achieve.

When you make important decisions, you’re not just running calculations. You start with emotional responses from your heart network, integrate intuitive processing from your gut network, and apply analytical thinking from your head network. The result is embodied wisdom—a way of knowing that’s rooted in your biological reality as a feeling, sensing being moving through the world.

This is what AI cannot replicate, no matter how advanced it becomes. An AI system might analyze data about love, loss, or moral dilemmas, but it cannot access the felt sense of a racing heart, a sinking stomach, or the weight of responsibility in making choices that matter.

Why Our Biology Is Our Strength

Our biological nature isn’t a limitation—it’s our competitive advantage. We experience hunger, fatigue, joy, and connection. We carry stress in our bodies and feel laughter in our bellies. We wake at night confronting our mortality, and we create meaning from our shared vulnerability.

These experiences aren’t bugs in the human system. They’re features that generate empathy, courage, creativity, and wisdom that emerges from lived experience.

When we comfort someone in grief, fall in love, create art that moves others, or make moral choices under pressure, we’re drawing from the deep well of our integrated biological intelligence. We’re not just processing information—we’re responding from the totality of our embodied existence.

The Optimistic Future Altman Sees

Altman’s message to the Federal Reserve wasn’t about humans becoming obsolete—it was about recognizing our unique and irreplaceable value. As AI handles more cognitive tasks, human worth doesn’t diminish. Instead, our distinctly biological intelligence becomes more precious.

The future belongs to humans who can integrate heart, gut, and head wisdom. Who can create meaning from embodied experience. Who can navigate complex relationships, make ethical choices under uncertainty, and generate insights that emerge from the beautiful complexity of biological consciousness.

We don’t need to compete with AI on computational tasks—that’s not where our strength lies. Our power comes from the integration of our three neural networks, informed by our mortality, and motivated by our capacity for genuine connection and understanding.

The Insider’s Perspective

When one of AI’s most influential leaders tells us that even the most advanced systems cannot replicate human biological intelligence, we should listen. This isn’t speculation—it’s a recognition from someone building the future that humans will always be essential to that future.

Our biological complexity, with its distributed neural networks and embodied wisdom, isn’t something to overcome. It’s something to celebrate and cultivate. In an age of artificial intelligence, our humanity isn’t our limitation—it’s our superpower.

The companies building AI know this. The question isn’t whether humans will remain relevant, but how we’ll embrace and develop the uniquely biological intelligence that makes us irreplaceable.


The next time you need to make an important decision, pay attention to all three centers. What does your heart tell you? What does your gut sense? What does your head know? The conversation between them is where your uniquely human—and irreplaceable—wisdom lives.

The Human Adaptation Lag: Why AI’s Speed May Leave Us Behind

We’re living through what may be the fastest technological transformation in human history. Yet there’s a fundamental mismatch between the pace of AI development and our ability to adapt to it. This “human adaptation lag” could determine whether the AI revolution becomes a gradual evolution or a jarring disruption that catches entire societies off guard.

However, many experts believe this adaptation challenge, while daunting, may be manageable with the right approach. Economists, sociologists, and AI researchers are divided on whether human societies can successfully navigate this transition—some point to our historical resilience and adaptability, while others warn that this time truly is different. Those in the optimistic camp suggest that by focusing on building adaptive capacity rather than trying to predict the unpredictable, we can develop strategies that help individuals, organizations, and society navigate rapid change. The key may lie in cultivating meta-skills like learning agility, embracing hybrid human-AI collaboration, and creating flexible systems that can evolve with technological advancement. Rather than being passive victims of change, we might become active participants in shaping how AI integrates into our world.

How We Used to Adapt to Change

Throughout history, major technological shifts unfolded over decades, giving people and institutions time to gradually adjust. The industrial revolution took nearly a century. The internet transformation happened over about 30 years, from early networks in the 1970s to widespread adoption in the 2000s. Smartphones took roughly 15 years to reshape how we communicate and work.

This slower pace allowed for organic adaptation. Workers could retrain gradually. Educational systems could evolve their curricula. Governments could develop regulations through trial and error. Companies could experiment with new business models without facing immediate obsolescence.

Most importantly, individuals had time to learn the new rules. A factory worker displaced by automation might spend years retraining for a service job. A journalist could gradually learn digital skills as newspapers slowly moved online. The changes were significant, but they rarely required overnight transformation of entire skill sets.

The AI Acceleration

AI development has compressed this timeline dramatically. Capabilities that took months to develop just a few years ago now emerge in weeks. Models that seemed cutting-edge six months ago are quickly surpassed. We’re seeing tools that can write code, create art, analyze data, and even engage in complex reasoning—all improving at an exponential pace.

This creates what we might call “technological whiplash.” The rules of entire industries are changing faster than our ability to understand them, let alone master them. Skills that professionals spent years developing may become obsolete in months. Business models that seemed stable are suddenly under threat.

Our brains, education systems, and institutions evolved for a world where major changes happened over generations, not years. We’re experiencing a fundamental mismatch between the speed of technological change and the speed of human adaptation.

The Critical Timeline Question

Perhaps the most important unknown is the timeline for AI’s transition to a stable new equilibrium. Are we looking at 2-5 years or 20 years? This isn’t just an academic question—it fundamentally changes how we should prepare.

The 2-5 Year Scenario: If AI reaches its transformative potential within the next few years, we’re essentially already behind. There’s no time for gradual adaptation. Educational systems can’t be overhauled quickly enough. Workers can’t be retrained at scale. Governments can’t develop thoughtful regulations for rapidly evolving technology. This scenario demands emergency-level responses and accepts that significant disruption is unavoidable.

The 20-Year Scenario: A longer timeline allows for more measured responses. Educational curricula can evolve. Workers can gradually acquire new skills. Policymakers can experiment with different regulatory approaches. Companies can test hybrid models that combine human expertise with AI capabilities. Society can adapt more organically to the new technological landscape.

The uncertainty itself is paralyzing. It’s nearly impossible to make rational decisions about career planning, educational investment, or business strategy when the fundamental timeline is unknown. Do you retrain for a new career that might not exist in five years? Do you invest in skills that AI might soon replicate?

The Adaptation Challenge

This speed mismatch creates several specific challenges:

Career Planning Becomes Nearly Impossible: Traditional career advice assumes relatively stable job markets with predictable skill requirements. When entire professions might be transformed in a few years, how do you plan a 20-year career? The safe choice might be to develop skills that seem AI-resistant, but even those categories are shrinking and shifting rapidly.

Educational Systems Lag Behind: Universities and schools are teaching students for jobs that may not exist by the time they graduate. By the time curricula are updated, the landscape has shifted again. The students entering the workforce today need skills that may be completely different from what they’re learning.

Policy Makers Struggle with Moving Targets: Regulating AI is like trying to write rules for a game that’s still being invented. By the time legislation is drafted, debated, and passed, the technology has often evolved beyond what the regulations anticipated. This creates a regulatory lag that leaves society vulnerable during the transition.

Individual Learning Can’t Keep Pace: Even highly motivated individuals struggle to stay current with rapid technological change. The half-life of technical skills is shrinking. Professional development that once happened over years now needs to happen continuously, but humans have limited bandwidth for constant learning and adaptation.

The Stakes

This isn’t just about jobs or economic disruption. The human adaptation lag affects how quickly we can restructure fundamental aspects of society: how we work, learn, govern, and relate to each other. If the timeline is compressed, we may not have time to thoughtfully navigate these changes.

The risk isn’t just that some people will be left behind—it’s that our collective ability to adapt may be overwhelmed by the pace of change. We could end up with a society where technology advances faster than our wisdom about how to use it responsibly.

What This Means for All of Us

The human adaptation lag suggests we need to think differently about preparation and response. Rather than trying to predict specific outcomes, we might need to focus on building adaptive capacity: the ability to learn quickly, think flexibly, and navigate uncertainty.

This means investing in meta-skills that help us learn and adapt, rather than just specific technical abilities. It means creating institutions that can evolve rapidly rather than just respond to predetermined scenarios. Most importantly, it means acknowledging that the speed of change itself is now one of our biggest challenges.

The AI revolution isn’t just about what artificial intelligence can do—it’s about whether human intelligence can adapt fast enough to keep pace with it. The next few years will likely determine whether we successfully navigate this transition or find ourselves struggling to catch up with a world that has moved beyond our ability to understand it.

Building Adaptive Capacity: A Path Forward

While the human adaptation lag presents significant challenges, recognizing it also points toward actionable strategies. Rather than trying to predict exactly what skills will be needed in an uncertain future, we can focus on building our capacity to adapt quickly and effectively.

For Individuals

Develop Meta-Learning Skills: Focus on learning how to learn efficiently. This includes critical thinking, pattern recognition, and the ability to quickly synthesize information from multiple sources. These skills remain valuable regardless of technological changes.

Build Hybrid Competencies: Combine technical familiarity with uniquely human strengths. Understanding how AI tools work while maintaining skills in creativity, emotional intelligence, complex problem-solving, and ethical reasoning creates a powerful combination.

Cultivate Adaptability: Practice working with new tools and technologies regularly. The goal isn’t to master every new platform, but to become comfortable with the process of quickly understanding and adapting to new systems.

Stay Connected to Networks: Maintain relationships with people across different industries and disciplines. These connections provide early signals about changes and opportunities that might not be visible from within a single field.

Embrace Continuous Learning: Shift from thinking about education as something that happens early in life to viewing it as an ongoing process. This might mean setting aside time each week for learning new skills or exploring emerging trends.

For Organizations

Design for Flexibility: Create systems and processes that can evolve quickly rather than optimizing for current conditions. This includes flatter organizational structures, cross-functional teams, and decision-making processes that can adapt to new information.

Invest in Human Development: Prioritize employee learning and development programs that focus on adaptability rather than just current job requirements. This creates a workforce that can grow with technological change.

Experiment Thoughtfully: Rather than waiting for perfect information, run small experiments to test how new technologies might fit into existing workflows. This allows for learning and adaptation without betting the entire organization on unproven approaches.

For Society

Reform Educational Systems: Push for educational approaches that emphasize critical thinking, creativity, and adaptability over rote memorization. This might include more project-based learning, interdisciplinary studies, and regular curriculum updates.

Support Transition Assistance: Advocate for policies that help workers transition between industries and roles, including retraining programs, portable benefits, and social safety nets that provide stability during periods of change.

Encourage Public Dialogue: Foster conversations about how we want to integrate AI into society, rather than just accepting whatever emerges from technological development. This includes discussions about ethics, governance, and the kind of future we want to create.

Reasons for Optimism

Despite the challenges, there are reasons to be hopeful about navigating the human adaptation lag:

Humans Are Remarkably Adaptable: Throughout history, we’ve successfully adapted to massive changes, from agricultural revolutions to industrial transformations. Our capacity for learning and growth is one of our greatest strengths.

AI Can Accelerate Learning: The same technology creating the adaptation challenge can also help us meet it. AI tutors, personalized learning systems, and intelligent training programs can help us learn more efficiently than ever before.

Hybrid Models Are Emerging: Rather than complete replacement, we’re seeing the development of human-AI collaboration models that amplify human capabilities rather than simply substituting for them.

Increased Awareness: The fact that we’re having these conversations now, rather than being caught completely off guard, suggests that society is becoming more conscious of the need to manage technological transitions thoughtfully.

The human adaptation lag is real, but it’s not insurmountable. By focusing on building adaptive capacity rather than trying to predict the unpredictable, we can position ourselves to thrive in an uncertain future. The key is to start now, remain flexible, and remember that our greatest asset in navigating change is our uniquely human ability to learn, connect, and create meaning from new experiences.

Understanding the human adaptation lag doesn’t solve the problem, but it does help us recognize what we’re really up against and, more importantly, what we can do about it. The future may be uncertain, but our response to it doesn’t have to be.

AI in the Loop: How Artificial Intelligence Can Transform Human Conversations

When disagreements arise, we have a powerful new tool to help us seek truth together

I had a profound realization yesterday that shifted how I think about human conversation in the age of AI. It happened during a discussion where two people—myself and a good friend—found ourselves on opposite sides of a complex issue. In the past, this scenario would have played out predictably: we’d either rely on whoever claimed to have the most expertise, or we’d agree to “look it up later” and move on with the disagreement unresolved.

But something different happened this time. After our conversation ended without resolution, I turned to GPT for deeper exploration. Within minutes, I had access to comprehensive information that would have taken hours to research traditionally. More importantly, I had the kind of nuanced, multi-perspective analysis that neither of us could have provided alone.

This experience sparked what I’m calling “AI in the Loop”—using artificial intelligence not to replace human conversation, but to enhance it in real-time.

The Old Model of Disagreement

Think about how we’ve traditionally handled disagreements about factual matters. When two people have different understandings of a situation, we typically fall back on one of these approaches:

The Authority Model: We defer to whoever seems most knowledgeable or confident, even if their expertise might be limited or biased.

The Research Promise: We agree to “look it up later” and research independently, often never actually following through or sharing what we find.

The Stalemate: We agree to disagree, leaving important questions unresolved and potentially missing opportunities for learning and growth.

Each of these approaches has significant limitations. The authority model can reinforce existing biases and shut down productive inquiry. The research promise often leads to no resolution at all. The stalemate prevents the kind of collaborative truth-seeking that deepens understanding and relationships.

The AI in the Loop Alternative

What if, instead of these limiting patterns, we invited AI to join our conversation as a research partner? Not as the final authority, but as a tool for rapidly accessing diverse perspectives and comprehensive information?

Here’s how it might work:

During the Conversation: When we encounter a factual disagreement or need deeper information, we pause and engage AI together. “Let’s ask GPT to help us understand this better.”

Collaborative Inquiry: Both parties participate in questioning the AI, ensuring we’re exploring multiple angles and challenging potential biases in the responses.

Critical Thinking Applied: We use our human judgment to evaluate the AI’s responses, identifying gaps, biases, or areas that need further exploration.

Shared Resolution: We reach conclusions together, informed by comprehensive research but grounded in our collective critical thinking.
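
To make the “engage AI together” step concrete, here is a minimal sketch using the OpenAI Python client. The model name, the disputed claim, and the prompt wording are all illustrative assumptions rather than a recommended setup; the point is simply that both people see the same question, and that the question asks for evidence on every side instead of a verdict.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A hypothetical disputed claim, typed in together so both people
# agree on exactly what is being asked.
claim = "Remote teams are less productive than co-located teams."

# Ask for perspectives and uncertainty, not a ruling, so the AI acts
# as a shared research partner rather than a referee.
prompt = (
    f"Two colleagues disagree about this claim: '{claim}'.\n"
    "1. Summarize the strongest evidence supporting it.\n"
    "2. Summarize the strongest evidence against it.\n"
    "3. Note what is genuinely uncertain or context-dependent.\n"
    "For each point, say what kind of source it relies on."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The tooling matters far less than the structure of the prompt: asking for the case on both sides, plus what remains uncertain, is what keeps the AI in the role of research partner rather than tiebreaker.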

Why This Matters Now

This approach addresses a crucial challenge of our information age: the gap between the speed of conversation and the depth of research required for informed discussion. In the past, thorough research took time that most conversations couldn’t accommodate. Now, we can access comprehensive information within minutes—if we know how to use it effectively.

The key is maintaining our role as critical thinkers while leveraging AI’s research capabilities. In my experience yesterday, I had to push back against the AI’s initial responses, which showed clear bias. Through careful questioning and critical evaluation, I was able to get more accurate, nuanced information. This process required human judgment and expertise—AI provided the breadth, I provided the depth of analysis.

The Benefits of AI in the Loop

Enhanced Understanding: Access to multiple perspectives and comprehensive information in real-time.

Reduced Bias: When used thoughtfully, AI can help us move beyond our individual knowledge limitations and preconceptions.

Collaborative Learning: The process of questioning AI together can deepen relationships and shared understanding.

Practical Resolution: Conversations can move from opinion-based disagreement to evidence-informed discussion.

Skill Development: Regular practice with AI in the loop helps develop better critical thinking and information evaluation skills.

The Critical Thinking Requirement

This approach only works if we maintain our critical thinking skills. AI responses can contain biases, inaccuracies, or oversimplifications. The human role remains essential:

  • Asking follow-up questions that reveal bias or gaps
  • Challenging assumptions in AI responses
  • Seeking multiple perspectives on complex issues
  • Evaluating sources and reasoning
  • Applying context and nuance that AI might miss

Practical Implementation

To make AI in the loop work effectively:

Set Clear Intentions: Establish that you’re seeking truth together, not trying to “win” the argument.

Share the Process: Both parties should participate in questioning the AI and evaluating responses.

Maintain Skepticism: Treat AI responses as starting points for investigation, not final answers.

Practice Critical Evaluation: Develop skills in identifying bias, gaps, and limitations in AI responses.

Focus on Learning: Approach the conversation as collaborative inquiry rather than debate.
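
“Maintain Skepticism” in particular can be turned into a habit. One simple version, sketched below under the same illustrative assumptions as the earlier example, is to feed the AI’s first answer back and ask it to critique itself before either person treats that answer as settled.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder for the AI's first answer from the earlier round of questioning.
first_answer = "...the model's initial summary of the evidence on both sides..."

critique_prompt = (
    "Here is an answer you gave about a disputed claim:\n\n"
    f"{first_answer}\n\n"
    "What in it is most likely to be wrong, outdated, or oversimplified? "
    "Which perspectives or kinds of evidence did it leave out?"
)

critique = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": critique_prompt}],
)

print(critique.choices[0].message.content)
```

The self-critique isn’t authoritative either, but it hands both people a concrete list of weak points to probe before they reach a shared conclusion.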

The Broader Implications

AI in the loop represents a new model for human-AI collaboration that goes beyond simple automation. Instead of replacing human conversation, it enhances our capacity for informed discussion and collaborative truth-seeking.

This approach could transform how we handle disagreements in families, workplaces, and communities. Rather than relying on authority, avoiding difficult topics, or getting stuck in unproductive debates, we could engage in deeper, more informed conversations that actually resolve important questions.

As we navigate an era of rapid change and complex challenges, our ability to have productive conversations about difficult topics becomes increasingly important. AI in the loop offers a practical tool for upgrading the quality of human discourse—but only if we’re willing to engage our critical thinking skills and approach these conversations with genuine curiosity and openness to learning.

The future of human-AI collaboration isn’t about choosing between human wisdom and artificial intelligence. It’s about finding ways to combine our unique strengths to tackle challenges neither could handle alone. AI in the loop is just the beginning of what this partnership might look like in practice.


What conversations in your life could benefit from AI in the loop? The key is starting with curiosity rather than certainty, and maintaining our commitment to critical thinking even as we leverage AI’s research capabilities.