
The Real Risk of AI Isn’t Misuse, It’s Unskilled Use 

[Image: a human teaching an AI to play violin]

A colleague asked me a question that stopped me mid-sentence: “Why are you designing all these guided prompts and frameworks into our AI startup interface? Why not just let people talk to the AI naturally?”

I paused, watching the cursor blink on the screen. The question wasn’t skeptical; it was genuine curiosity. And it deserved an honest answer.

“Because,” I said, “we’ve just handed humanity the most powerful intellectual tool ever created, and it doesn’t come with a manual. Worse, most people are using a Ferrari like it’s a bicycle—not because they’re incapable, but because no one taught them how to drive.”

That conversation led to a deeper exploration, one that reveals something profound about where we are in the Human + Machine Age: we’re at a civilizational inflection point where technological capability has vastly outpaced human literacy.

Think about it. 

You can open ChatGPT, Claude, or any AI assistant right now and have access to more reasoning power than any human who lived before 1950 had in their entire lifetime. Yet most of us use it to write grocery lists or summarize articles we’re too busy to read.

The gap isn’t in the technology. It’s in our understanding of how to wield it.

Why This Matters More Than You Think

The Stoic philosopher Epictetus wrote: “It is impossible for a man to learn what he thinks he already knows.” We approach AI with a fatal assumption—that because we can use it, we are using it well.

But there’s a difference between access and mastery, between using a tool and wielding it with intention and skill. This distinction matters because AI interaction isn’t like using Google or browsing social media. It’s more akin to conducting an interview, facilitating a coaching session, or engaging in Socratic dialogue.

These are skills we develop through frameworks, practice, and guidance.

Consider what happens when someone opens an AI chat interface for the first time. They’re facing what psychologists call “choice overload” combined with “blank page paralysis.” The possibilities are infinite, which paradoxically makes it harder to start. As the ancient Chinese philosopher Lao Tzu observed: “A journey of a thousand miles begins with a single step.” But which step? In which direction?

Without frameworks, most people default to what’s familiar: transactional queries. “Summarize this article.” “Write an email.” “Give me a recipe.” These aren’t wrong, but they barely scratch the surface of what’s possible.

The real tragedy isn’t that people use AI for simple tasks. It’s that they never discover what else is possible—that AI can serve as a thinking partner for pattern recognition, a mirror for self-reflection, a collaborator for creative exploration, or a guide through complex decision-making.

This matters because we’re not just talking about productivity gains. We’re talking about cognitive capability, about expanding what we can think, create, and become. When used well, AI doesn’t replace human thinking—it amplifies it. When used poorly, it atrophies it.

The frameworks and scaffolding we build now will determine whether AI becomes a tool for human flourishing or just another technology that uses us more than we use it.

The Data: How Badly We’re Missing the Mark

Let me share some sobering statistics that reveal the magnitude of this literacy gap.

According to MIT Sloan research (2025), up to half of the performance gains from more advanced AI models are lost when users fail to adapt their prompting strategies. Data from the 2024 Microsoft Work Trend Index indicates that the vast majority of users have never received formal training on AI tools, leading to a reliance on “thin queries”: single-sentence, context-free prompts that treat LLMs like a standard search engine. This suboptimal usage prevents the model from engaging in complex reasoning or “chain-of-thought” processing, resulting in generic outputs that capture only a fraction of the AI’s actual potential.

A comprehensive study by MIT’s Center for Collective Intelligence analyzed over 50,000 interactions with GPT-4 and found that well-structured prompts (including role definition, context, and output specifications) produced responses rated 3.2x higher in quality by independent evaluators compared to basic queries (Malone et al., 2023).
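To make the contrast concrete, here is a minimal sketch of what a prompt containing the three elements the study names (role definition, context, and output specifications) might look like next to a basic query. The function, field names, and example wording are my own illustration, not taken from the study or any particular tool.

```python
# Illustrative only: assembling a prompt from the three elements named above
# (role definition, context, output specification) versus a bare one-liner.

def structured_prompt(role: str, context: str, task: str, output_spec: str) -> str:
    """Combine role, context, task, and output spec into one prompt string."""
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_spec}",
    ])

# A typical "thin query":
basic = "Summarize this article."

# The same request with structure added:
structured = structured_prompt(
    role="You are an editor for a business newsletter.",
    context="The article covers AI adoption trends among small businesses.",
    task="Summarize the article for time-pressed founders.",
    output_spec="Three bullet points, each under 20 words, plus one takeaway.",
)
```

Nothing about the structured version requires special tooling; it simply forces the user to state the things the model would otherwise have to guess.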

The quality difference isn’t marginal; it’s severalfold.

But here’s what stunned me: When researchers provided users with simple frameworks for structuring their prompts, quality scores improved by an average of 67% within a single session. This wasn’t about AI getting smarter. It was about humans learning to collaborate more effectively.

Anthropic’s own research on Claude’s extended thinking capabilities revealed that users who provided structured context (background, goals, constraints) received outputs requiring 40% fewer iterations to reach their desired result (Anthropic Research, 2024). Each iteration costs time and cognitive load. Frameworks reduce friction.

From a neuroscience perspective, the data gets even more interesting. Cognitive load research from the University of California, Irvine shows that the human prefrontal cortex can effectively manage approximately 4 chunks of information simultaneously in working memory (Cowan, 2010, Nature Reviews Neuroscience).

When someone faces a blank AI interface, they’re attempting to juggle:

  • Their actual question or need
  • How to articulate it effectively
  • What context might be relevant
  • How to structure the query
  • Anticipating what the AI needs to know

That’s five concurrent cognitive processes—already exceeding working memory capacity. No wonder most people default to simple queries. They’re cognitively overwhelmed before they start.
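One way a guided interface could relieve that overload is to ask for each of the five items one at a time and assemble the prompt itself. The sketch below is a hypothetical illustration of that idea; the field names and question labels are invented for the example, not drawn from any real product.

```python
# Hypothetical sketch: offload the five concurrent demands listed above into
# five separate fields, so the user holds one thing in working memory at a time.
# Labels mirror the bullet list; all names here are invented for illustration.

FIELDS = [
    ("need", "What I actually want"),         # the actual question or need
    ("wording", "In one sentence"),           # how to articulate it
    ("context", "Relevant background"),       # what context matters
    ("structure", "Desired output"),          # how to structure the query
    ("gaps", "Other details the AI needs"),   # anticipating what's missing
]

def assemble_prompt(answers: dict) -> str:
    """Turn whichever fields the user filled in into one structured prompt."""
    lines = [f"- {label}: {answers[key]}" for key, label in FIELDS if key in answers]
    return "Please help with the following.\n" + "\n".join(lines)

# Partial answers still produce a coherent prompt:
prompt = assemble_prompt({
    "need": "A summary of a quarterly report",
    "structure": "Five bullet points",
})
```

The framework does the juggling; the user only answers questions.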

Dr. Elina Halonen and researchers at the University of Portsmouth found in their 2025 study that poorly structured AI interaction is highly correlated with mental exhaustion. Their findings suggest that while “free-form” prompting increases cognitive fatigue, employing structured frameworks reduces mental load and improves output quality by replacing repetitive trial-and-error with strategic interaction.

The business case is equally compelling. A McKinsey analysis of AI adoption in enterprise settings found that companies that implemented prompt engineering training and frameworks saw productivity gains 2.3x higher than those that simply deployed AI tools without guidance (McKinsey Digital, 2024).

But perhaps the most telling statistic comes from OpenAI’s own user research: 68% of users abandon AI tools within 30 days of first use, citing “inconsistent results” and “unclear how to get value” as primary reasons (OpenAI User Research, 2023).

The technology isn’t failing. Our onboarding and education around it is failing.

The Ancient Wisdom We’ve Forgotten

The Greek philosopher Aristotle distinguished between techne (technical skill or craft) and episteme (theoretical knowledge). We’ve developed the techne of AI—the engineering that makes it work—but we’ve neglected the episteme, the structured knowledge of how to use it wisely.

The craftsman doesn’t approach the forge without understanding fire. The archer doesn’t release the arrow without understanding wind and distance. As Confucius taught: “The man who moves a mountain begins by carrying away small stones.”

Frameworks are those small stones. They’re the accumulated wisdom of what works, codified so others don’t have to rediscover it through trial and error.

The Buddhist concept of upaya (skillful means) is instructive here. Upaya recognizes that the same truth might need different approaches for different people at different stages of understanding. A master teacher doesn’t give everyone the same instruction—they provide scaffolding appropriate to the student’s current capability.

This is exactly what good frameworks do. They meet people where they are and guide them toward where they could be.

The Tao Te Ching offers this: “To know that you do not know is the best. To think you know when you do not is a disease.” Many of us suffer from this disease with AI. We think fluent English means fluent prompting. We mistake access for expertise.

The Case for Intentional Scaffolding

So why build frameworks and guided prompts into AI tools and interfaces? Because learning research from the past century gives us a clear answer.

Lev Vygotsky’s concept of the “Zone of Proximal Development” showed that people learn best when operating just beyond their current capability with appropriate support. Frameworks are that support—temporary structures that help learners bridge from novice to competent practice.

Consider how we teach writing. We don’t hand someone a blank page and say “write brilliantly.” We provide structures: thesis statements, supporting paragraphs, transitions, conclusions. These aren’t permanent constraints—they’re training wheels. Students internalize the patterns, then eventually transcend them.

The same principle applies to AI literacy. Most people have never engaged in Socratic dialogue, conducted a structured coaching conversation, or practiced explicit pattern recognition. These are learnable skills, and frameworks accelerate that learning.

Dr. John Sweller’s Cognitive Load Theory demonstrates that structured learning environments reduce extraneous cognitive load, allowing learners to focus mental resources on the actual skill being developed rather than on figuring out how to approach the task (Sweller, 1988, Cognitive Science).

When we provide frameworks for AI interaction, we’re not limiting creativity—we’re creating the conditions where creativity can emerge. As Igor Stravinsky famously said: “The more constraints one imposes, the more one frees oneself.”

The Evolution: From Scaffolding to Fluency

But here’s the crucial insight: effective frameworks are designed to eventually transcend themselves.

Think about learning piano. Beginners practice scales—structured, repetitive patterns. Intermediate students work through études—frameworks for technical development. Advanced musicians have internalized these patterns so deeply they become intuition. They can improvise, compose, play with the rules because they first mastered them.

The goal isn’t permanent dependence on frameworks. It’s progression through stages:

Stage 1: Guided Discovery — Structured prompts teach the fundamental patterns of effective AI collaboration

Stage 2: Adaptive Scaffolding — As users internalize patterns, frameworks fade into optional suggestions rather than required structures

Stage 3: Conversational Fluency — Users develop their own style, returning to frameworks only when exploring new territory or when stuck

This mirrors ancient apprenticeship models. The master doesn’t just demonstrate the craft—they provide structure, guidance, and progressively more autonomy as skill develops. The Japanese concept of shu-ha-ri captures this beautifully:

  • Shu (守): Follow the rules, learn fundamentals
  • Ha (破): Break from the rules, make them your own
  • Ri (離): Transcend the rules through mastery

Frameworks enable shu. Practice enables ha. Mastery achieves ri.
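An interface built on this progression might look something like the sketch below: scaffolding that fades as the user advances through the three stages. The stage thresholds and UI descriptions are made-up assumptions for illustration, not a real product API.

```python
# Illustrative sketch of "adaptive scaffolding": the interface offers less
# structure as a user progresses through the three stages described above.
# Thresholds and UI strings are invented assumptions, not from a real system.

from enum import Enum

class Stage(Enum):
    GUIDED_DISCOVERY = 1        # full guided template required
    ADAPTIVE_SCAFFOLDING = 2    # template offered as an optional suggestion
    CONVERSATIONAL_FLUENCY = 3  # free-form by default

def scaffolding_for(sessions_completed: int) -> Stage:
    """Pick a stage from a simple usage count (thresholds are made up)."""
    if sessions_completed < 10:
        return Stage.GUIDED_DISCOVERY
    if sessions_completed < 50:
        return Stage.ADAPTIVE_SCAFFOLDING
    return Stage.CONVERSATIONAL_FLUENCY

def render_input(stage: Stage) -> str:
    """Describe the kind of input UI each stage would present."""
    return {
        Stage.GUIDED_DISCOVERY: "guided form: role / context / task / output format",
        Stage.ADAPTIVE_SCAFFOLDING: "free text box with an optional template button",
        Stage.CONVERSATIONAL_FLUENCY: "free text box",
    }[stage]
```

The point of the design is the direction of travel: structure is the default early on, then becomes opt-in, then disappears.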

The Counter-Argument: Why Some Resist Structure

The strongest argument against frameworks is worth taking seriously: over-scaffolding can create learned helplessness.

If frameworks are too prescriptive, users never develop their own prompting intuition. They become dependent on structure rather than developing internal capability. This is a legitimate concern, and it’s why framework design matters as much as framework existence.

There’s also the cognitive diversity argument. Frameworks necessarily reflect a particular way of thinking—often linear, analytical, explicit. But what about visual thinkers? Intuitive leapers? People who discover through unstructured exploration?

The answer isn’t to abandon frameworks—it’s to design multiple pathways and create progressive autonomy. Provide structure for those who need it, optional guidance for those who want it, and full freedom for those ready to forge their own path.

As the ancient Greek principle of kairos teaches, timing matters. The right thing at the wrong time is the wrong thing. Frameworks should appear when needed and dissolve when they’re not.

What This Means for You

If you’re building AI tools, the question isn’t whether to include frameworks—it’s how to design them so they empower rather than constrain, guide rather than limit, teach rather than prescribe.

If you’re using AI, the question is whether you’re content with surface-level utility or ready to develop genuine fluency. The most powerful technology in the world doesn’t come with a manual, but that doesn’t mean you can’t create one for yourself.

Start with structure. Practice deliberately. Notice what works. Eventually, transcend the scaffolding entirely.

As Michelangelo understood: “The sculpture is already complete within the marble block before I start my work. It is already there, I just have to chisel away the superfluous material.”

The frameworks don’t create the capability. They remove the obstacles so the capability can emerge.

The question isn’t whether we need frameworks for AI literacy. The data, the research, and centuries of learning theory tell us we do. The question is whether we’ll have the wisdom to build them well—structures that guide us toward mastery rather than chains that keep us dependent.

That’s the work ahead. Not building smarter AI, but building wiser humans who know how to wield it.

What frameworks have you found helpful in your own AI interaction? What scaffolding have you outgrown? I’d love to hear your experience in the comments below.
