Humanity in the Loop

A close friend — a nurse practitioner whose life’s work sits at the intersection of oncology and spirituality in healthcare — shared something with me recently that I haven’t been able to put down.

She was describing the moments she witnesses at the bedside. The ones that don’t make it into charts or protocols. The moments where something shifts — not because of a treatment or a technology, but because one human being chose to truly meet another. She called it loving-kindness. Not as a sentiment. As a practice. A daily, renewable choice.

Every day we encounter people who challenge us. We feel irritated, misunderstood, too busy. And in that moment we have a choice: reframe toward connection, or turn away and remain fixed in our stance. Loving-kindness, she said, is rarely spoken. It lives in the tone of voice. The softness in the eyes. The willingness to stay present.

I keep thinking: this is where everything begins.


There is a word that gets used constantly in the conversation about AI and the future: abundance. It is usually followed by statistics. Productivity gains. Cost curves. Vertical farms producing 360 times the yield per square foot on 95% less water. These numbers are real and they matter.

But abundance is not only an economic event. It is a civilizational one. And civilizations are not built from the top down. They are built from the quality of the encounters between people — one moment of choosing connection over withdrawal, multiplied across billions of lives.

We are living through a transition unlike anything in recorded history. Some of us were born into a world of genuine scarcity — where resources ran out, where opportunity was rationed, where survival competed with flourishing. And some are being born right now into a world that will never know that logic. They will inherit tools and possibilities their grandparents could not have imagined.

We are the bridge generation. We stand in the middle — carrying the memory of scarcity in our bodies while moving into a world that operates by a different set of rules. That position is not an accident. It is a responsibility.


There is an important distinction being missed in almost every conversation about AI.

Some resources are extractive. Oil. Coal. Minerals. Every unit consumed is a unit gone. The world that built our institutions, our economic systems, our psychological defaults — that world was built on extractive logic. Scarcity was not a mistake. It was physics.

But there is another category of resource entirely. The sun does not deplete when it gives. A forest, wisely managed, renews itself indefinitely. Knowledge compounds. Capability builds on capability. Exponential technologies belong to this second category — they are generative, not extractive. The model isn’t a pie being divided. It’s a pie that expands as more people reach for it.

The problem is that we are trying to receive a generative abundance with an extractive mindset. We are asking scarcity questions in an abundance world. And no amount of data will fix that. Because data is not wisdom.

This is what Socrates understood and what we keep forgetting: science gives us truth. Accurate, verifiable, extraordinary truth. But truth without relevance is inert. It does not move us. The transformation — the actual shift from scarcity thinking to abundance living — is not an information problem. It is a wisdom problem. It is a human problem. It requires the work that no system can do for us.


In the ancient philosophical tradition, the psyche was understood not merely as an inner life but as a mover. Self-moving, and capable of moving things beyond itself. The foundation stone beneath the Temple Mount in Jerusalem is described in similar terms — the thing at the center of the world, hidden and weighty, around which everything orients.

Purpose works this way. When a person discovers what they are genuinely here to do, they do not simply feel better. They become a different kind of force in the world. They begin to move things.

This is what is at stake in the AI era — not whether the technology works, but whether the humans wielding it are themselves purposeful. Whether we are moving things, or being moved.


There is a phrase used in technology circles: human in the loop. It describes a system design where a person remains involved in decisions — a safeguard, a checkpoint, a corrective.

I want to propose something different. Not human in the loop. Humanity in the loop.

Because the risk is not only that AI makes bad decisions. The risk is that we automate away the very qualities that make decisions worth making — the loving-kindness, the wisdom, the capacity to meet another person in their full reality and choose connection over withdrawal.

We cannot stop what is coming. And we should not want to. But we are not passengers. We are links in a chain — one of the most critical links in the entire history of civilization.

There are perhaps a few hundred people in the world making the foundational decisions about how AI is built. Their thinking, their wisdom, their sense of responsibility matters enormously. But the direction AI actually takes — the values it serves, the lives it shapes, the civilization it helps build or erode — will not be determined by them alone. It will be determined by all of us.

Not primarily through legislation or protest. Through application. Through the nurse who uses AI to spend less time on paperwork and more time at the bedside. Through the small business owner who leverages it to reach a market they could never have accessed alone. Through the teacher, the farmer, the caregiver, the entrepreneur in a city that has never before had access to these tools — each one deciding, consciously and purposefully, what they are going to build with what abundance is making available.

That is the point of humanity in the loop. Not a political movement. A human one. Every person who brings their purpose into contact with these tools multiplies both. Every community that decides what flourishing means for them, and uses every available resource to get there, is doing exactly what this moment in history is asking of us.

That work begins in small places. At a bedside. In a difficult conversation. In the choice, made again this morning, to meet the world with open eyes and an open heart.

The pie is expanding. The question is whether we are becoming the kinds of people who can truly receive it — and give something worth multiplying back.

The Conductor’s Podium Is Empty. And Waiting!

I sat down for coffee this morning next to someone I hadn’t seen in a while.

She’s experienced. Capable. She spent years building things in the startup world — managing people, navigating complexity, holding organizations together when everything was moving fast and resources were thin. Real work. Hard-earned expertise.

She’s looking for a job now.

And the conversation left me sitting with something I couldn’t shake for the rest of the day.

The roles she’s qualified for are shrinking. Not because she lacks ability — but because AI is quietly absorbing the entry points, the mid-level positions, the rungs of the ladder people spend careers climbing. It isn’t just where you start that’s changing. It’s whether the ladder itself still exists in the form we’ve always known it.

She doesn’t need a better CV.

She needs to understand that the world she’s trying to enter has already been replaced by a larger, more open one.

And she is not alone. What she is experiencing is not a personal career setback. It is a signal — arriving at café tables and inbox rejections and awkward performance reviews all over the world — that something structural has shifted. The woman across from me this morning is standing at the edge of a transformation that is rewriting the rules for everyone, at every stage, in every field.


Here’s the frame I keep returning to.

The old equation governed everyone — not just the entrepreneur dreaming of a startup, but the professional climbing a corporate ladder, the freelancer chasing clients, the mid-career expert waiting to be recognized, the employee who traded autonomy for the security of a steady role. Every position in the value chain was a different strategy for managing the same underlying condition: resources were limited, access was controlled, and you organized your working life around getting close enough to both.

Most visions — entrepreneurial or otherwise — died somewhere in that structure. Not for lack of passion or capability. For lack of position. For lack of runway. For lack of permission from someone further up the chain who held what you needed.

We celebrated the people who navigated it successfully as a special category of human: the entrepreneur. Risk-taker. Visionary. The one who could absorb what others couldn’t.

But I think we misread them.

They weren’t drawn to the risk. They were willing to absorb it in service of something they believed in. The risk was never the feature. It was the tax. And the rest of us — the employed, the climbing, the hustling, the waiting — were paying a different version of the same tax. We called it compromise. We called it patience. We called it being realistic.

AI is eliminating the tax. For everyone.

Execution costs are collapsing. Access to tools, talent, and infrastructure that once required significant capital or institutional backing is approaching zero. That old consolation of optimists — the impossible just takes a little longer — is quietly becoming a literal statement of fact. The impossible is now within reach, and arriving faster than anyone has fully adjusted to.

It has been said: “You have a purpose and you’re motivated — you can go out and do anything you want now.”

Read that not as inspiration. Read it as a description of a new reality. One that applies not just to founders and visionaries — but to everyone who ever had something they wanted to build, contribute, or express, and found the old structure standing in the way.


Which means the conductor’s podium has been empty for a while now.

Most people just haven’t noticed it yet.

The person who thrives in the AI era looks nothing like the archetypes the old structure produced. No longer a risk-absorber. No longer a scarcity manager. No longer someone who survives the gauntlet through sheer force of will, the right connections, or proximity to capital. No longer someone waiting to be chosen.

More like a conductor.

Someone who knows what the music should sound like, and has the judgment to direct the ensemble toward it. The ensemble — the agents, the tools, the resources that are now abundant and largely free — takes direction. What it cannot do is supply the intention.

That’s the human job now.

And intention is another word for purpose.


So back to the woman at the café.

She isn’t losing a seat at a shrinking table. She’s standing in front of a conductor’s podium — open, waiting, hers — and wondering why nobody has offered her a chair.

The chair was never the point.

What she needs isn’t a job description written by someone else. What she needs is the internal permission to step up and say: I know what music I want to make. Her skills aren’t obsolete. The role she’s reaching for — fitting herself into someone else’s organizational chart, waiting to be chosen, fulfilling someone else’s vision of what her contribution should look like — is a solution to a problem that is rapidly ceasing to exist.

Most of us have spent our careers doing exactly that — and calling it a career. The structure rewarded it. The structure was built around it. Show up, perform within the defined boundaries of someone else’s vision, secure your place in the chain.

The podium asks only one thing: that you know what you’re here to direct.

This is the conversation we are not having loudly enough.

AI isn’t just changing what’s possible. It’s changing who gets to do the possible. But only for people who know what they want to do with the possible. That’s the gap. Not technology. Not access. Not even opportunity.

The gap is self-knowledge.

And closing that gap — finally, urgently — is the most practical work any of us can do right now.

Counting to Freedom: AI, the Jubilee, and the End of Human Slavery

A reflection for the season of Passover — and what it means for our civilizational moment

We are in the season of freedom. Jews around the world are about to recline at the Passover seder, reliving the Exodus from Egypt — the archetypal story of liberation from bondage. And then begins the Omer: 49 days of intentional counting, one day at a time, toward Shavuot on the 50th day. In Jewish tradition, 50 is the number of liberation — not by mystical calculation, but by direct divine command. The Yovel — the Jubilee — falls every 50th year: slaves are freed, debts cancelled, land returned to its original owners. The 50th is when the cosmic ledger resets.

What if we are living through something like a civilizational Jubilee?

The Long Arc of Bondage

The story of human slavery is older than civilization itself. For most of recorded history, some humans were the literal property of others — bought, sold, worked without dignity or recourse. The Exodus story was radical precisely because it declared this an affront to the Divine image carried by every human being.

But the formal abolition of chattel slavery — hard-won and still incomplete — did not end the deeper question. The Industrial Revolution introduced a subtler bondage: humans as inputs to capital. The factory whistle replaced the overseer’s whip, but the fundamental equation remained — the many selling their time, their bodies, and their cognitive labor to the few who controlled the machinery and systems of production. For the past two centuries, power has resided with whoever controlled the means: the machines, the data, the expertise, the access.

Before continuing, a necessary pause. The word “slavery” carries the weight of immense historical suffering — the transatlantic slave trade, generations destroyed, trauma that echoes into the present and, tragically, still exists in active form in parts of the world today. To use the term in a broader philosophical context is not to minimize that horror. It is precisely because literal slavery was — and remains — so devastating that its echoes in every system that reduces humans to instruments deserve to be named and confronted. This essay uses the lens of liberation because the wound is real.

The Gutenberg Moment for the Mind

There is a useful historical parallel. In medieval Europe, the scriptoria — monks who hand-copied manuscripts — were the gatekeepers of knowledge. Information was scarce, controlled, hierarchical. The printing press didn’t just create books; it demolished a power structure. Suddenly, ideas could travel faster than institutions could suppress them. The Reformation, the Enlightenment, the Scientific Revolution — all downstream of Gutenberg.

AI is a Gutenberg moment for cognitive labor itself. For centuries, the advantage of the educated and credentialed derived partly from genuine skill — and partly from exclusive access to tools the average person simply could not reach. AI is collapsing that asymmetry. A first-generation entrepreneur in Nairobi now has access to the same quality of legal, financial, and strategic thinking as a Fortune 500 boardroom. That is not a minor adjustment. That is a restructuring of who gets to participate in the creation of value.

The Age of Abundance and the Obsolescence of Scarcity

We are entering what leading futurists have begun calling the Age of Abundance — a period where exponential advances in AI, robotics, and clean energy are driving the marginal cost of goods and services toward zero. Healthcare, education, legal counsel, creative production, financial planning — all are becoming radically more accessible. The entire edifice of industrial-era economics was built on the assumption that resources are scarce and that controlling their allocation is the source of power. That assumption is being dismantled. The old economics of zero-sum competition are giving way to a world where the creation of value no longer requires the extraction of it from someone else.

The Jubilee reset is not merely metaphorical: the accumulated advantages of access — the cognitive capital that has concentrated in fewer and fewer hands — are being cancelled.


Freedom Is Not Yet Flourishing

But this is where the Passover story offers a sobering counterpoint. The Israelites, freed from Egypt, wandered for 40 years. Liberation is not the same as flourishing. Tradition associates the revelation at Sinai — the giving of the Torah — with the 50th day of the Omer, marked on Shavuot. The lesson: freedom without direction is just a different kind of lostness.

And yet the Torah narrative contains an even deeper lesson — one that cuts to the heart of our moment. The first tablets were written entirely by God. Pure divine download. But they were shattered — not destroyed by the enemy, but broken by Moses himself, in the moment he descended to find the Israelites worshipping the Golden Calf. The dazzling technology of revelation, received passively, could not hold. It broke against human unreadiness. The second tablets were different. God commanded Moses:

“Carve for yourself two stone tablets” — Moses had to hew the stone with his own hands. Only then did God fill them with the words. The covenant that endured was the one where human effort prepared the vessel for divine content. The partnership, not the download, was what lasted.

The Golden Calf is not merely an ancient warning about idolatry. It is a parable for every generation that mistakes a powerful tool for its own salvation. Pure technological capability, received passively and worshipped uncritically, shatters. The lasting covenant requires human hands in the process — our values, our judgment, our intentional preparation of who we are becoming. AI that amplifies human purpose is the second tablet. AI that replaces human agency is the golden calf.

AI can free human beings from compulsory cognitive drudgery. What it cannot do is tell us what we are freed for. Viktor Frankl, who survived the ultimate reduction of human beings to numbered instruments, wrote that the primary human drive is not pleasure or power but meaning. The world of abundance will produce more leisure, more creative possibility, more options than any generation has ever known. And it will make the question of purpose more urgent, not less.

The counting of the Omer is a daily practice of intentional preparation — one deliberate day at a time, moving toward revelation. That may be the model for this transition: not passive waiting for abundance to arrive, but active inner work — the carving of our own tablets — to become the kind of people who know what to do with freedom. The Jubilee resets the ledger. What we write on it next is entirely up to us.

Avi Maderer | AviMaderer.com

The 70.9% Paradox: When AI Matches Experts—And When It Fails Catastrophically

In 2005, a freestyle chess tournament attracted grandmasters, supercomputers, and everyone in between. The rules were simple: any combination of humans and computers could compete. Chess purists expected the grandmasters with their cutting-edge hardware to dominate. Technology enthusiasts predicted pure chess engines would crush human opponents.

Both groups were wrong.

The winners were Steven Cramton and Zackary Stephen, two amateur players from New Hampshire with chess ratings that wouldn’t qualify them for most local tournaments. Using three consumer-grade PCs, they defeated grandmasters partnered with military-grade supercomputers. They beat pure chess engines running on hardware that cost more than most people’s houses. Their secret wasn’t chess mastery or computational power—it was knowing how to orchestrate their AI tools, when to trust which engine, and which strategic questions to explore.

As Garry Kasparov later observed, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”

That was 2005. Twenty years later, we finally have data on what AI can actually do in the economy. And the pace of change should both terrify and intrigue every middle manager reading this.

The New Benchmark That Changes Everything

In September 2025, OpenAI introduced something called GDPval—a benchmark that measures AI performance not on abstract reasoning or exam-style questions, but on actual economically valuable work. Real tasks created by professionals with an average of 14 years of experience across 44 occupations: legal briefs, engineering blueprints, customer support conversations, sales forecasts, medical assessments, marketing plans.

These aren’t toy problems. They’re the tasks that contribute $3 trillion annually to the U.S. economy. The tasks that define knowledge work. The tasks that, until very recently, we assumed required human expertise, judgment, and years of training.

When GDPval launched in September, the best AI models were matching human experts roughly 50% of the time. Impressive, but still clearly behind human performance overall.

Then, just three months later in December 2025, OpenAI’s GPT-5.2 model achieved something remarkable: a 70.9% win or tie rate against human experts. In ninety days, AI jumped from parity to clear superiority on professional knowledge work tasks. And it produces that work roughly eleven times faster and at one-hundredth the cost.

If you’re a middle manager responsible for tasks that can be clearly defined, measured, and evaluated—you should be paying attention.

The Number That Should Terrify You (Because of How Fast It’s Moving)

Seventy percent. That’s past the tipping point. That’s not “AI is getting there” or “AI shows promise.” That’s AI demonstrably outperforming humans on most professional knowledge work tasks that can be tested.

But here’s what should really get your attention: the speed. Three months ago, AI was at rough parity with humans. Now it’s clearly superior. That’s not a gradual slope—that’s a vertical climb.

The economic pressure is immediate and real. When you can get expert-level output in minutes instead of days, at a fraction of the cost, why wouldn’t you automate? The spreadsheet practically writes itself: 100x cost reduction, 11x speed improvement, 70% reliability. For any CFO looking at that math, the decision seems obvious.

But here’s where the story gets interesting—and where the chess lesson from 2005 becomes critically important.

Because hidden in that 70.9% success rate is a catastrophic failure mode that changes everything.

The Plot Twist: When AI Fails, It Fails Spectacularly

GDPval’s analysis revealed something that should make every executive pause before clicking “automate everything.” The 70.9% figure doesn’t tell the whole story.

Here’s what matters: of the AI outputs evaluated, roughly 27% were classified as “bad”—meaning not fit for use—and 3% were classified as “catastrophic”—meaning they could cause actual harm if deployed.

But here’s the more subtle issue: even within the 70% that win or tie with human experts, the quality isn’t uniform. A deliverable might be 70% excellent and 30% flawed. An AI-generated legal brief might nail seven arguments but miss a critical precedent. An engineering blueprint might specify correct dimensions but overlook a safety requirement. A customer service response might be 80% perfect but include one sentence that violates company policy.

Think about what that means in practice. Imagine you’re a law firm that starts using AI for brief writing. Most briefs look great—professional, well-researched, properly formatted. But buried in some of them are mistakes that could get your client sanctioned or your firm disciplined. The AI doesn’t flag these errors. It presents everything with the same confidence.

Or you’re a hospital deploying AI for patient documentation. Most notes are thorough and accurate. But occasionally, critical information is omitted or mischaracterized—and nothing in the system signals which notes need extra scrutiny.

This isn’t like having a junior employee who needs supervision. When humans make mistakes, they’re usually within a reasonable margin of error. They might miss something or make a suboptimal choice, but they rarely produce work that’s fundamentally dangerous. AI, by contrast, generates outputs that look confident and competent—until they catastrophically aren’t.

The math suddenly looks different. It’s not just about whether AI can do the work. It’s about whether you can afford the cost of catching the failures before they cause harm.
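To make that tradeoff concrete, here is a minimal back-of-envelope sketch in Python. The 70/27/3 output split and the 100x cost ratio come from the figures above; every dollar amount (the human deliverable cost, review cost, and downstream harm cost) is an illustrative assumption, not a GDPval figure.

```python
# Back-of-envelope comparison: naive automation vs. automation plus human review.
# The 70/27/3 split and the 100x cost ratio come from the benchmark discussion;
# all dollar amounts below are illustrative assumptions.

HUMAN_COST = 1000.0          # assumed cost of one expert-produced deliverable
AI_COST = HUMAN_COST / 100   # ~100x cheaper, per the figures cited above

P_GOOD, P_BAD, P_CATASTROPHIC = 0.70, 0.27, 0.03

REVIEW_COST = 150.0          # assumed cost of an expert reviewing one AI output
REWORK_COST = HUMAN_COST     # a "bad" output must be redone by a human
HARM_COST = 50_000.0         # assumed downstream cost of one shipped catastrophic failure

# Naive automation: ship everything, redo the bad outputs once discovered,
# and absorb the harm from the catastrophic ones.
naive = AI_COST + P_BAD * REWORK_COST + P_CATASTROPHIC * HARM_COST

# Centaur process: review every output, catch the bad and catastrophic
# ones before they ship, and redo those by hand.
centaur = AI_COST + REVIEW_COST + (P_BAD + P_CATASTROPHIC) * REWORK_COST

print(f"human only:       ${HUMAN_COST:,.0f}")
print(f"naive automation: ${naive:,.0f}")
print(f"AI + review:      ${centaur:,.0f}")
```

Under these assumed numbers, naive automation ends up costing more per deliverable than the human it replaced, because a 3% chance of a $50,000 failure dominates the savings, while automation plus review comes out well ahead of both. The specific figures are invented; the structure of the argument is the point.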

The Question That Should Keep You Up At Night

So here’s the real question: if AI can handle 70% of professional work tasks but includes subtle or catastrophic flaws that look indistinguishable from quality work—what does that make your job?

The answer is both sobering and liberating: your job becomes judgment.

Not judgment in the sense of deciding whether AI is “good” or “bad.” Not judgment as superiority or gatekeeping. Judgment as the practical skill of evaluating quality, accuracy, and appropriateness—the ability to look at an output and determine whether it’s actually fit for purpose.

Specifically:

  • Recognizing which tasks are safe to delegate to AI and which require human handling
  • Spotting the subtle errors that look correct but aren’t
  • Catching the 3% catastrophic failures before they cause harm
  • Evaluating whether an AI-generated deliverable actually solves the problem
  • Determining when an output is “good enough” versus when it needs human refinement
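One way to picture these judgment calls in practice is as a triage gate in front of every AI deliverable: route each output to ship, human review, or human redo. This is an illustrative sketch only; the risk tiers, confidence score, and thresholds are assumptions invented for the example, not part of any real system or benchmark.

```python
# Illustrative triage gate: route each AI output based on how costly a miss
# would be and on some evaluation signal you trust. The tiers and thresholds
# are assumptions for the sketch, not real policy values.

from dataclasses import dataclass

@dataclass
class Output:
    task: str
    risk: str          # "low" or "high": how costly a mistake would be
    confidence: float  # 0..1, from whatever evaluation signal you trust

def route(o: Output) -> str:
    if o.risk == "high":
        # Legal briefs, medical notes: always get expert eyes, regardless of score.
        return "human review"
    if o.confidence >= 0.9:
        return "ship"
    if o.confidence >= 0.6:
        return "human review"
    return "human redo"

for o in [Output("marketing plan", "low", 0.95),
          Output("legal brief", "high", 0.95),
          Output("sales forecast", "low", 0.40)]:
    print(o.task, "->", route(o))
```

The sketch encodes the core judgment skill from the list above: deciding in advance which categories of work can never bypass a human, and reserving automation's full speed for the outputs where a miss is cheap.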

This is the new skill gap. Not whether you know how to prompt AI or use the latest tools. Whether you can evaluate the outputs well enough to make the whole system work—like Steven Cramton and Zackary Stephen orchestrating their three chess engines in 2005.

The Centaur Solution (And Why Process Beats Power—For Now)

After Kasparov’s defeat by Deep Blue, he didn’t retreat from AI—he invented a new form of chess called “Advanced Chess” or “Centaur Chess.” The name comes from the mythological centaur: half-human, half-horse, combining the strengths of both. In this context, it means human intelligence guiding and evaluating AI computational power—neither competing against each other, but working as an integrated team.

This is the solution we’re suggesting: not humans versus AI, but humans orchestrating AI through superior process and judgment.

Remember those amateur chess players? They didn’t beat grandmasters because they were better at chess. They beat grandmasters because they had a better process for integrating human judgment with AI capabilities. They knew when to trust which engine. They knew how to combine outputs. They knew which questions to explore.

The grandmasters, ironically, struggled with this. They were so confident in their chess expertise that they either over-trusted the machines or ignored them entirely. The amateurs, by contrast, understood something fundamental: success wasn’t about being smarter than the AI or having more powerful tools. It was about orchestrating the human-AI partnership effectively.

Right now, today, the 70.9% number makes this orchestration essential. Yes, AI can match experts on most tasks. But someone still needs to evaluate which outputs are in the 70% and which are in the catastrophic 3%. Someone needs to determine when AI’s confident answer is actually correct versus when it’s confidently wrong.

That evaluation skill—that judgment—is what makes the system work. It’s what turns a 70% success rate with occasional catastrophic failures into a reliable business process.

The Honest Path Forward

Here’s what we know today, in December 2025:

  • AI performance on professional tasks jumped from 50% to 70.9% in just three months
  • Quality issues—both subtle and catastrophic—remain present even in “successful” outputs
  • The rate of improvement suggests these numbers will continue climbing rapidly

Here’s what we don’t know:

  • How quickly AI will continue improving (though recent progress suggests: very fast)
  • Whether the quality control challenge will get easier or harder as AI improves
  • What the next three months will bring, let alone the next year

But here’s what seems increasingly clear: the future of knowledge work isn’t human versus machine. It’s not even human or machine. It’s human and machine, with humans providing the judgment layer that catches errors, evaluates quality, and determines fitness for purpose.

The question isn’t whether AI will impact your job. At 70.9% and climbing fast, that question is answered. The question is whether you’re developing the judgment skills to evaluate AI outputs effectively—knowing when to trust them, when to refine them, and when to start over.

Twenty years ago, two amateur chess players taught us that weak humans with machines and better process could beat strong humans with machines and inferior process. Today, with AI capabilities advancing every quarter, that lesson about process and judgment has never been more relevant.

The difference is this: in 2005, the chess engines were relatively stable. Today, the AI is improving so fast that the game itself keeps changing. Which means the judgment skills—the ability to evaluate quality, spot errors, and determine appropriateness—become even more critical.

Because in a world where AI can handle 70% of tasks (and counting), the work that remains is precisely the work that requires human judgment about whether the AI’s work is actually good enough.

Sources & Further Reading

  • ChessBase: “Dark horse ZackS wins Freestyle Chess Tournament” (June 2005)
  • OpenAI: “Measuring the performance of our models on real-world tasks” (GDPval introduction)
  • Kasparov, Garry: Various writings on centaur chess and human-AI collaboration
  • OpenAI: GPT-5.2 GDPval benchmark results (December 2025)

This is the first in a series exploring what AI’s measured capabilities tell us about the future of knowledge work, human judgment, and the evolving nature of professional expertise.

#HumanPurpose #HumanJudgment #FutureOfWork #AI #ExperienceGap #Leadership #DecisionMaking #ProfessionalDevelopment

The University in the Age of AI: Reimagining Higher Education

Higher education stands at a critical crossroads. The traditional model of knowledge transmission – where universities were the primary gatekeepers of information and expertise – is rapidly becoming obsolete in an era of AI-powered, democratized learning.

The Changing Landscape of Knowledge

Historically, universities served three fundamental purposes:

  1. Knowledge Preservation
  2. Knowledge Transmission
  3. Credentialing and Skill Validation

Artificial intelligence is fundamentally disrupting each of these pillars. With vast information repositories instantly accessible and AI systems capable of explaining complex concepts, the traditional lecture model is becoming increasingly irrelevant.

Beyond Information: The New Value Proposition

In a world where information is free and instantaneous, universities must pivot from being information providers to becoming transformation environments. Their true value will emerge from:

1. Critical Thinking Development

  • Teaching students how to question, analyze, and synthesize information
  • Developing human judgment in an AI-saturated world
  • Cultivating skills AI cannot replicate: emotional intelligence, creative problem-solving, ethical reasoning

2. Collaborative Learning Spaces

  • Creating environments for human interaction
  • Facilitating deep, nuanced discussions
  • Developing interpersonal skills and collaborative capabilities

3. Experiential Learning

  • Providing real-world project experiences
  • Connecting theoretical knowledge with practical application
  • Offering mentorship and guided exploration

The Credentialing Revolution

Traditional degrees are losing their monopoly. The future likely involves:

  • Micro-credentials
  • Skill-based certifications
  • Continuous learning pathways
  • Dynamic, portfolio-based assessment

The Philosophical Challenge: Positive Impact as the Central Mission

Beyond technological adaptation lies a more profound imperative: cultivating positive impact on humanity. In an era of unprecedented technological capability, universities must become more than knowledge centers – they must become crucibles of human potential dedicated to meaningful global transformation.

Positive Impact as the Core Educational Paradigm

Higher education’s future is fundamentally about developing human capacity for:

  • Ethical problem-solving
  • Interdisciplinary collaboration
  • Meaningful innovation
  • Deep empathetic understanding
  • Responsible technological development

This approach transforms universities from passive knowledge repositories to active “impact laboratories” where students learn to:

  • Address global challenges
  • Create sustainable solutions
  • Prioritize collective human welfare
  • Break down academic silos
  • Develop technologies that enhance human dignity

Human Judgment in the AI Era

Universities must become centers of human potential development. This means:

  • Teaching meta-learning skills
  • Developing adaptability
  • Cultivating curiosity
  • Nurturing interdisciplinary thinking

Practical Recommendations for Universities

  1. Redesign curriculum to emphasize:
  • AI interaction skills
  • Critical thinking
  • Interdisciplinary problem-solving
  • Emotional and social intelligence
  2. Create flexible learning models:
  • Modular courses
  • Lifelong learning programs
  • Industry-aligned skill development
  3. Invest in human-centric technologies:
  • Advanced simulation environments
  • Collaborative digital platforms
  • AI-assisted personalized learning

Redefining Success: Beyond Individual Achievement

The metric of educational success shifts from individual credentials to collective human advancement. Universities must now ask:

  • How does this learning serve humanity?
  • What global challenges can we address?
  • How can we develop solutions that elevate human potential?

Conclusion: The University as a Catalyst for Human Flourishing

The future of higher education is not about competing with AI, but about unleashing human potential where technology cannot reach. We need institutions that don’t just transfer information, but transform individuals into agents of meaningful change.

The university of tomorrow will be defined not by what students know, but by their capacity to imagine, create, and implement solutions that genuinely improve the human condition.

Our greatest challenge – and opportunity – is to reimagine education as a powerful instrument of positive human impact.

The Pulse: The Acceleration Paradox

The Pulse: AI’s Human Impact Report — Oct 27, 2025

The Acceleration Paradox

Real-time, verified, multi-source reporting
This Week’s Pulse

Education pivot: High-school STEM programs are shifting emphasis from coding drills to data literacy and critical analysis as AI handles more routine programming — according to Education Week. Source

Global health governance: Regulators and partners from 40+ countries urged a collaborative approach to safe, ethical and equitable AI in health at WHO’s AIRIS 2025 summit in Incheon — according to the World Health Organization. Source

Workload reality: Elite AI teams at top labs and startups are logging 80–100-hour weeks amid intensifying competition — according to The Wall Street Journal. Source

Productivity paradox: “Workslop” — polished, low-value AI output — is spreading, prompting companies to add quality gates and don’t-use-AI-here rules — according to Bloomberg and Harvard Business Review. Bloomberg · HBR

Infrastructure pressure: The energy and water demands of hyperscale AI data centers are sparking community and policy debates, including around facilities like “Stargate” — according to WIRED. Source

“Acceleration without assimilation creates disorientation. Adaptation demands reflection.”
60 Seconds Overview
  • STEM evolution: Data literacy & judgment skills rise over coding drills (Education Week).
  • Health AI cooperation: WHO AIRIS 2025 calls for safe, equitable AI in health (WHO).
  • 100-hour weeks: Elite AI teams push wartime schedules (WSJ).
  • Workslop backlash: Quality gates + “don’t-use-AI” zones (Bloomberg, HBR).
  • Data-center debate: Energy & water scrutiny intensifies (WIRED).
Jobs & Skills Watch

Skill reallocation: Managers are retraining teams to evaluate AI outputs (fact-checking, standards alignment) rather than just generate drafts. Critical-thinking and domain judgment are rising in value as content automation expands.

Worker strain: Extreme schedules in top AI teams underscore burnout risk and retention concerns — according to The Wall Street Journal. Source

Workslop response: Enterprises are introducing AI impact audits so outputs must show measurable efficiency or customer value to stay in workflows — according to Bloomberg and HBR. Bloomberg · HBR

Policy & AI Announcements

At the WHO AIRIS 2025 Summit, delegates urged international cooperation on registries, bias audits, training standards, and human oversight for health-AI — according to WHO. The framing treats AI as part of a human health system, not a bolt-on tool. Source

“Governance is shifting from principles to procedures — and that’s progress.”
Worker Impact

Fatigue curve: Reports of 80–100-hour weeks in elite AI teams highlight the human costs of acceleration — according to WSJ. Source

AI-free focus: Leaders adopt deep-work windows and don’t-use-AI zones to restore attention and quality — according to HBR. Source

What to Watch
  • Data-center siting & sustainability: Energy/water tradeoffs in new regions (WIRED).
  • Public-sector AI audits: Follow-through after WHO’s collaboration call (WHO).
  • K-12 guidance: District policies as classrooms rebalance STEM toward data literacy (Education Week).
  • Occupational health: HR tools for burnout tracking in AI-heavy teams (WSJ).
The Deeper Question

How fast is too fast? Students must unlearn old curricula as teachers retrain; engineers stretch their limits in the name of progress; governments rush to govern what they barely grasp. Acceleration without assimilation creates disorientation, but adaptation demands reflection. The real measure of AI success may be our ability to slow down just enough to decide what should not be automated.

This week’s question: Can we design progress that respects the human clock?

Thanks for reading The Pulse. For deeper dives, listen to our podcast, “Beyond the Code: AI’s Role in Society.” Want help building critical-use AI into your workflow? Book a consult or subscribe for next week’s human-first edition.

© 2025 The Pulse: AI’s Human Impact Report. All rights reserved.

The Pulse: Human Adaptation — Learning to Live with AI

The Pulse: AI’s Human Impact Report — Oct 20, 2025

Human Adaptation — Learning to Live with AI

Hybrid Edition • Real-time, verified, multi-source reporting
This Week’s Pulse

The center of gravity this week is education—not as a tech showcase but as a human-systems challenge. The U.S. is moving to professionalize AI literacy for teachers through AFT/NEA training hubs funded by Microsoft, OpenAI, and Anthropic, according to the Associated Press. Source

Google committed $1 billion to AI education and job training, signaling that teacher enablement is strategy, not charity, according to Google and Reuters. Google Blog · Reuters

UK teens report AI can erode study habits and original thinking, calling for clear rules and guidance, according to The Guardian. Source

In San Francisco, the AI-centric Alpha School accelerates learning through personalization and coach models—critics question equity and developmental fit. Source

“Adoption doesn’t equal learning—or value. Without redesign, AI just accelerates whatever system you already have.”

In workplaces, leaders face a productivity paradox: AI boosts activity but not outcomes. “Workslop” (polished, low-value output) spreads, according to Bloomberg, while HBR urges quality gates and use-and-don’t-use rules. Bloomberg · HBR

60 Seconds Overview
  • Teacher training goes big: Microsoft, OpenAI, and Anthropic fund large-scale educator training, according to AP.
  • Google’s $1B pledge: $1B for education and training, according to Google and Reuters.
  • AI-first schooling debate: Alpha School sparks equity debate, according to The Guardian.
  • Students ask for rules: Pupils want responsible AI use guidance, according to The Guardian.
  • Workplace reality check: “AI workslop” rises, according to Bloomberg & HBR.
  • Guardrails & misuse: OpenAI disrupts malicious networks; EU AI Act obligations in force.
Jobs & Skills Watch

Hiring shifts: Growth in AI platform engineering, data stewardship, and AI compliance tracks new guardrails and training, according to Reuters and EU AI Act guidance.

What’s automating: drafting, summarization, and media creation compress entry-level roles, according to HBR.

Skills gaining value: critical-use literacy, facilitation, and governance expertise, according to AP and EU Commission.

Skills losing value: “templateable” outputs and unreviewed solo work, according to Bloomberg.

Policy & AI Announcements

Classroom AI: Public-private teacher training hubs expand, according to AP. Google widens Gemini for Education access, according to Google.

EU AI Act cadence: Prohibitions & literacy obligations since Feb 2025; GPAI obligations since Aug 2025, according to EU Commission.

Platform governance: OpenAI disrupts 40+ malicious networks, according to OpenAI.

Adult-gated policy: Sam Altman confirms ChatGPT erotica for verified adults, according to TechCrunch and The Verge.

“The new curriculum isn’t just math and reading. It’s judgment—when to trust the model, and when to override it.”
Worker Impact

AI-assisted productivity collides with messy human workflows. “Workslop”—polished but low-value output—erodes trust, according to Bloomberg. HBR urges quality gates and defined use-cases.

Teachers save drafting time but must realign AI lesson plans with goals, according to Ars Technica. Students echo the need for critical-use pedagogy, according to The Guardian.

Platform hygiene—OpenAI’s network takedowns—reduces background risk, according to OpenAI.

What to Watch
  • Teacher-training ROI: Will results show measurable learning gains?
  • District guardrails: Procurement changes as adult-gated tools roll out.
  • EU AI Act guidance: Member-state implementations for AI literacy.
  • From workslop to workflow: Redesigns yielding true productivity.
  • Student voice: Surveys on AI’s impact on creativity and motivation.
The Deeper Question

If learning means choosing effort and wrestling with ideas, what happens when intelligence becomes ambient and frictionless? Teens say AI makes study “too easy.” That’s not nostalgia—it’s about meaning. We grow through intentional struggle.

The answer isn’t to avoid AI but to re-engineer goals around judgment, empathy, and originality. In schools, AI should be the sparring partner, not the answer key; in offices, the draft mule, not the decision-maker.

This week’s question: What constraints will we embrace so the human work remains ours?

Thanks for reading The Pulse. For deeper dives, listen to our podcast, “Beyond the Code: AI’s Role in Society.” Want help building critical-use AI into your workflow? Book a consult or subscribe for next week’s human-first edition.

© 2025 The Pulse: AI’s Human Impact Report. All rights reserved.

In Memory of Charlie Kirk

Humanity at Its Best

Yesterday, we lost Charlie Kirk—not just a political voice, but a human being who embodied something increasingly rare in our world: the courage to engage authentically with ideas and people, especially those who disagreed with him.

Kirk’s “Prove Me Wrong” format wasn’t just clever branding—it was a declaration of faith in human discourse. He approached disagreement with curiosity rather than contempt, seeking to understand before seeking to be understood. When he sat under that tent at Utah Valley University, engaging with students who challenged his views, he was demonstrating something profound about what it means to be human.

The Gift of Genuine Dialogue

What made Charlie irreplaceable wasn’t the positions he held, but how he held them. He brought to every conversation a willingness to be genuinely present with other human beings, to risk being changed by encounter with different perspectives, and to treat even his opponents as fellow travelers in the search for truth.

This capacity for authentic engagement—for vulnerability in the face of disagreement—represents humanity at its finest. It requires intellectual courage to expose your ideas to challenge. It demands emotional maturity to remain gracious when others question what you hold dear. Most importantly, it asks us to see the person behind the position, to recognize our shared humanity even across deep differences.

What We’ve Lost

The shooter who killed Charlie Kirk attacked more than a person; they attacked the very possibility of civil discourse itself. In Charlie’s death, we’ve lost not just a voice, but a model of how human beings can engage with one another across difference without losing their dignity or their humanity.

Charlie showed us that it’s possible to hold strong convictions while remaining open to dialogue. He demonstrated that we can disagree passionately while still treating one another with respect. He proved that the pursuit of truth is not a zero-sum game, but a collaborative endeavor that requires the participation of people who see the world differently.

Honoring His Memory

As we mourn Charlie Kirk, we must also commit to preserving what he represented: the irreplaceable humanity that makes authentic dialogue possible. His legacy isn’t found in any particular political position, but in his approach to human engagement—the capacity to listen with genuine interest, to speak with honest conviction, and to treat every conversation as an opportunity to understand something new about the world and the people in it.

This is what we must not let die with him: the belief that civil discourse is possible, that good people can disagree in good faith, and that our shared humanity is stronger than our political divisions.

Charlie Kirk believed in the power of conversation to bridge divides and illuminate truth. In his memory, let us recommit ourselves to the kind of dialogue he championed—graceful, authentic, and fundamentally hopeful about what human beings can accomplish when we engage with one another as fellow seekers of understanding.

Rest in peace, Charlie. Your example of humanity at its best will not be forgotten.

Solution Sunday: The “Is This Real?” Game

Turn Summer Downtime into Literacy Detective Work

The Challenge: Kids believe everything they read online, and summer screen time often means less critical thinking practice.

The Opportunity: Summer’s flexible schedule gives families perfect moments—car rides, park picnics, rainy afternoons—to build fact-checking skills that will serve kids for life.

The Solution: A fun family game using AI to create mystery statements that kids research and verify, turning them into information detectives.


How The Game Works

Step 1: AI Generates Mystery Statements

Ask ChatGPT or Claude to create a mix of true and false statements tailored to your child’s interests and reading level.

Sample Prompt for Ages 6-9:

“Create 5 fascinating statements about animals that kids would find interesting. Make 3 true and 2 false, but make them all sound believable. Keep language at a 2nd-3rd grade reading level.”

Sample Prompt for Ages 10-12:

“Generate 7 surprising facts about space exploration. Make 4 true and 3 false. Include some details that make the false ones tricky to spot. Use 5th-6th grade vocabulary.”

Sample Prompt for Ages 13+:

“Create 6 statements about historical events that sound amazing but might not be true. Mix real and fictional events. Make them engaging for teenagers who like surprising stories.”

Step 2: Present the Challenge

Read the statements aloud or write them on cards. Tell your kids: “Some of these are completely true, and some are made up. Your job is to figure out which is which!”

Step 3: Research & Verify

Give kids time to investigate using:

  • Library books (if at home)
  • Phone research (with parent guidance)
  • Asking other adults they trust
  • Looking up official sources

Step 4: The Big Reveal

Come back together and let each child share their verdict and reasoning. Then reveal the answers and celebrate good detective work!
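
For families who like to tinker, the four steps can also be sketched as a tiny Python script. The statements below are placeholders pulled from the example sets in this post; in practice you would paste in whatever ChatGPT or Claude generated from the sample prompts:

```python
import random

# Each mystery statement pairs its text with whether it's actually true.
# These four come from the "True or Silly?" example set; swap in
# AI-generated statements for a fresh round.
STATEMENTS = [
    ("Octopuses have three hearts", True),
    ("Cats can see in complete darkness with no light at all", False),
    ("A group of flamingos is called a flamboyance", True),
    ("Dogs can only see in black and white", False),
]

def play_round(statements, verdicts):
    """Score one round: one point per statement correctly judged."""
    score = 0
    for (text, is_true), verdict in zip(statements, verdicts):
        if verdict == is_true:
            score += 1
    return score

# Shuffle so the true/false pattern isn't predictable, then compare
# the detective's verdicts (in deck order) against reality.
deck = STATEMENTS[:]
random.shuffle(deck)
verdicts = [True] * len(deck)  # a detective who believes everything
print(f"Detective score: {play_round(deck, verdicts)}/{len(deck)}")
# → Detective score: 2/4
```

Notice that a detective who marks everything true only gets the true half of the deck right — the game’s core lesson in miniature.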


Age-Appropriate Adaptations

Ages 5-7: “True or Silly?”

  • Use simple, concrete topics (animals, toys, food)
  • Make false statements obviously silly once investigated
  • Focus on “How do we find out?” rather than complex verification

Example Set:

  • ✅ TRUE: “Octopuses have three hearts”
  • ❌ FALSE: “Cats can see in complete darkness with no light at all”
  • ✅ TRUE: “A group of flamingos is called a flamboyance”
  • ❌ FALSE: “Dogs can only see in black and white”

Ages 8-11: “Fact Detective”

  • Include more nuanced true/false distinctions
  • Introduce concept of “mostly true but missing details”
  • Start teaching source evaluation

Example Set:

  • ✅ TRUE: “There are more possible chess games than atoms in the observable universe”
  • ❌ FALSE: “Sharks never sleep”
  • ✅ TRUE: “Honey never spoils if stored properly”
  • ❌ FALSE: “Lightning never strikes the same place twice”
  • ✅ TRUE: “A cloud can weigh more than a million pounds”

Ages 12+: “Misinformation Hunters”

  • Include statements that require checking multiple sources
  • Discuss bias and how “true” information can be misleading
  • Connect to current events and social media literacy

Example Set:

  • ✅ TRUE: “The Great Wall of China isn’t visible from space with the naked eye”
  • ❌ FALSE: “Einstein failed math in elementary school”
  • ✅ TRUE: “There are more trees on Earth than stars in the Milky Way galaxy”
  • ❌ FALSE: “We only use 10% of our brains”

Summer Settings & Variations

🚗 Car Ride Version

  • Prepare statement cards before leaving
  • Kids research at rest stops or when you arrive
  • Perfect for long drives to keep minds active

🏖️ Vacation Detective

  • Create statements about your destination
  • Research using hotel WiFi or visitor center resources
  • Make it part of exploring new places

🏠 Rainy Day Challenge

  • Generate statements about indoor topics (science, history, books)
  • Use home resources: encyclopedias, library books, online search
  • Make it a weekly tradition

🌳 Park Picnic Game

  • Focus on nature-based statements
  • Research using nature apps or field guides
  • Combine with outdoor observation

🎭 Family Game Night

  • Create themed rounds (sports, movies, history)
  • Keep score and rotate who presents statements
  • Make it competitive but collaborative

Building Real Skills

What Kids Learn:

  • Source evaluation: “Where did this information come from?”
  • Multiple verification: “Can I find this in more than one place?”
  • Question formation: “What should I search for to check this?”
  • Evidence weighing: “Which source seems most reliable?”
  • Healthy skepticism: “This sounds amazing—is it too good to be true?”

Parent Coaching Moments:

  • “What made you decide to believe/doubt that statement?”
  • “Where could we look to double-check this?”
  • “What questions should we ask about this source?”
  • “How can we tell if a website is trustworthy?”

Sample AI-Generated Statement Sets

Ocean Mysteries (Ages 8-12)

Ask AI: “Create 6 ocean facts for kids. Make 4 true and 2 false. Include some that sound unbelievable but are real.”

Possible Results:

  1. The ocean produces more than 50% of the world’s oxygen ✅
  2. There are underwater waterfalls in the ocean ✅
  3. Dolphins have names for each other ✅
  4. The deepest part of the ocean has been fully explored ❌
  5. Some fish can live for over 400 years ✅
  6. Seahorses are the fastest swimmers in the ocean ❌

Space Adventures (Ages 10-14)

Ask AI: “Generate 5 space facts that sound incredible. Make 3 true and 2 false. Include surprising details.”

Historical Surprises (Ages 12+)

Ask AI: “Create 6 statements about historical events that sound too wild to be true. Mix real and fictional events.”


Troubleshooting Common Challenges

“This is too hard!”

  • Start with more obviously false statements
  • Work together as a team initially
  • Celebrate the process, not just correct answers

“I can’t find the answer!”

  • Teach different search strategies
  • Show them how to rephrase questions
  • Make it okay to say “I’m not sure” and keep investigating

“The AI made mistakes!”

Sometimes AI generates incorrect “true” statements. This becomes a teaching moment: “Even AI can be wrong! That’s why we always check multiple sources.”


Making It Stick: Building the Habit

Start Small

  • Begin with 2-3 statements per session
  • Choose topics your child already loves
  • Keep sessions under 20 minutes initially

Create Rituals

  • “Mystery Monday” statements each week
  • Vacation tradition for each new city
  • Bedtime wind-down activity

Celebrate Success

  • Keep a “Detective Journal” of statements investigated
  • Award “Truth Seeker” badges for good questioning
  • Share favorite discoveries with extended family

Beyond the Game: Real-World Applications

As kids get comfortable with the game, connect it to:

  • School projects: “Let’s fact-check this before including it”
  • News stories: “Should we verify this before sharing?”
  • Social media: “How could we check if this viral post is true?”
  • Friend claims: “That sounds interesting—where did you hear that?”

Parent Success Stories

“My 9-year-old now automatically asks ‘How do we know that’s true?’ when she hears surprising facts. The game turned her into a natural skeptic in the best way.” —Maria, mom of two

“We started this on a road trip to Yellowstone. By the end of the week, my kids were fact-checking the park ranger! (Respectfully, of course.)” —David, dad of three

“The best part is watching my daughter teach her younger brother how to ‘be a detective.’ She’s become the fact-checker of the family.” —Sarah, homeschool mom


Getting Started This Week

  1. Choose your AI tool: ChatGPT, Claude, or similar
  2. Pick your child’s interest: Animals, sports, science, history
  3. Generate 3-5 statements using the prompts above
  4. Find 15-20 minutes during a natural family moment
  5. Present the challenge and research together
  6. Celebrate the detective work regardless of right/wrong answers

The Big Picture

In an age where information travels faster than verification, teaching kids to question, research, and think critically isn’t just academic—it’s essential life preparation. The “Is This Real?” game turns this vital skill into summer fun.

Remember: The goal isn’t to make kids suspicious of everything, but to help them become thoughtful consumers of information who know how to seek truth in a noisy world.

Your turn: Try the game this week and share your results! What statements surprised your family? What detective strategies worked best?

Connect with other families exploring AI-powered literacy at [your community platform]. Summer learning doesn’t have to feel like school—it can feel like an adventure.

I Didn’t Write This Paper – I Composed It: Redefining Creativity in the AI Age

A personal exploration of what it means to create when AI handles the execution

The Question That Stopped Me Cold

“Did you write your most recent white paper?”

My friend’s question yesterday was innocent enough, but it hit me like a ton of bricks. I found myself stumbling through an answer that felt both true and inadequate.

“Well, AI wrote it, but I…” I started, then stopped. “I mean, I used AI as a tool, but I spent hours…” Another pause. “It’s complicated.”

The conversation moved on, but the question lingered. Had I written the paper? In the most literal sense—fingers on keyboard, words appearing on screen—no, I hadn’t. But dismissing my role felt wrong too. I had spent hours in conversation with AI, bringing my critical thinking, life experience, and pattern recognition to bear on the content. I had shaped every argument, guided every direction, and made countless decisions about what belonged and what didn’t.

Later that evening, the right word finally came to me: I hadn’t written the paper. I had composed it.

Read the full whitepaper here:
https://avimaderer.com/the-experience-gap/

The Composer’s Role in the Age of AI

This distinction—between writing and composing—might seem semantic, but it’s actually profound. It points to a new form of creative collaboration that we’re all navigating but haven’t quite learned to articulate yet.

When a composer writes a symphony, they don’t physically play every instrument. They create the structure, choose the harmonies, guide the emotional arc, and make countless decisions about how ideas should flow together. The orchestra executes their vision, but no one questions who created the music.

Working with AI feels remarkably similar. I brought:

  • The conceptual framework – What questions needed exploring?
  • The narrative structure – How should ideas build on each other?
  • Critical synthesis – Which connections matter and why?
  • Experiential wisdom – What insights from my own life apply here?
  • Editorial judgment – What serves the reader and what doesn’t?

AI handled the execution—the actual sentence construction, formatting, and technical writing mechanics. But every substantive choice was mine.

Beyond “Human in the Loop”

The current discourse around AI creativity often falls into two camps: either AI is doing everything (threat narrative) or humans are completely in control (reassurance narrative). Both miss the nuanced reality of genuine human-AI collaboration.

The phrase “human in the loop” suggests we’re just quality control—reviewing and approving AI’s work. But that’s not what happened with my white paper. I wasn’t in the loop; I was conducting the orchestra.

I was:

  • Initiating every major direction
  • Questioning assumptions and pushing for deeper thinking
  • Connecting disparate ideas across domains
  • Filtering through my values and experience
  • Iterating toward a vision only I could see

This is composition, not editing. Creation, not curation.

A Framework for Creative Collaboration

If we’re going to navigate this new landscape of human-AI creativity, we need better language and clearer frameworks. Here’s how I’m starting to think about it:

Execution vs. Composition

Execution includes:

  • Sentence construction and grammar
  • Formatting and structure
  • Research compilation
  • Technical writing mechanics
  • Style consistency

Composition includes:

  • Conceptual framework development
  • Narrative arc creation
  • Critical analysis and synthesis
  • Value-based filtering and judgment
  • Creative direction and vision

The key insight: AI excels at execution but cannot compose without human intentionality, experience, and judgment.

The Four Pillars of AI-Assisted Composition

When I reflect on my white paper process, four distinct human contributions emerge:

  1. Vision Setting – What is this really about? What matters here?
  2. Connection Making – How do disparate ideas relate? What patterns exist?
  3. Experience Integration – What do I know from living that informs this?
  4. Value Filtering – What serves the reader? What aligns with my beliefs?

These capabilities remain uniquely human because they require lived experience, emotional intelligence, and the kind of contextual judgment that comes from being embedded in the world.

Practical Implications

This framework isn’t just philosophical—it has real implications for how we work, communicate, and understand our roles in an AI-enabled world.

For Professionals

When someone asks if you “wrote” something created with AI assistance, you can confidently say: “I composed it. AI handled the execution, but every substantive decision was mine.”

For Evaluating Creative Work

Instead of asking “Did a human write this?” we might ask: “Who provided the creative vision, made the connections, and exercised judgment about what matters?”

For Understanding Value

Your value as a creative professional isn’t in your typing speed or grammar skills—it’s in your ability to see patterns, make connections, integrate experience, and guide work toward meaningful outcomes.

The Bigger Picture

This shift from writing to composing reflects something larger happening across all creative fields. We’re moving from a world where human value came from executing tasks to one where it comes from creative direction, synthesis, and judgment.

The Bridge Generation—those of us navigating this transition—has a unique opportunity. We remember what pure human creation felt like, but we’re also learning to collaborate with AI in ways that amplify rather than replace our essential human capabilities.

The question isn’t whether AI will change how we create—it already has. The question is whether we can articulate and claim our evolving role as composers, conductors, and creative directors in this new landscape.

Moving Forward

I’m still learning to navigate these conversations about AI-assisted creation. But I’m getting more confident about claiming my role as a composer rather than apologizing for not being a traditional writer.

The next time someone asks if I wrote something, I’ll know exactly what to say: “I composed it through conversation with AI, bringing my experience, judgment, and vision to guide every substantive decision. AI was my instrument; I was the composer.”

That feels both honest and empowering. Most importantly, it feels true.

Read the full whitepaper here:
https://avimaderer.com/the-experience-gap/


What’s your experience with AI-assisted creativity? How do you describe your role when AI handles execution but you provide the vision and judgment? I’d love to hear how you’re navigating these questions.