
THE EISENHOWER URGENT/IMPORTANT MATRIX NEEDS AN UPDATE FOR THE AI AGE

  • Writer: candyandgrim
  • Jan 6
  • 5 min read

I've lived with the Eisenhower Matrix in my head for 15+ years. It's muscle memory at this point—every brief, every request gets instantly sorted: urgent and important, urgent but not important, and so on. It lives in my brain and lets me shuffle priorities at speed.

But lately, something's been off.

Two Missing Dimensions

The matrix is brilliant at telling you what to prioritize. But it's completely silent on two critical questions that now dominate my workday:

1. How should I resource this work?

Should I do it manually? Use AI assistance? Build an automated pipeline? The explosion of AI tools has created an entirely new decision layer that the Eisenhower Matrix doesn't address. Every task now has this secondary question: how do we execute it most effectively?

2. What's the actual bandwidth cost?

I recently came across research showing that knowledge workers now switch between applications nearly 1,200 times per day, and it takes 23 minutes to fully refocus after each interruption. Time isn't our constraint anymore—mental bandwidth is. But the Eisenhower Matrix treats all "important" tasks as equal, regardless of their cognitive load.

So Here's What I'm Wrestling With:

Can the Eisenhower Matrix be updated to account for AI and bandwidth? Or does it need to be combined with other tools? Or completely reimagined?

I don't have the answer yet, but I've been experimenting with a few approaches:

Experiment 1: Overlay a Resourcing Layer onto Each Quadrant

What if we keep the Eisenhower Matrix but add execution guidance for the AI age?

Urgent + Important


  • Human + proven AI pipeline only

  • Use automation only where risk is eliminated

  • Speed matters, quality non-negotiable

  • No time to experiment or debug


Urgent + Not Important


  • Critical needle-positioning decision

  • If pipeline exists: use it

  • If not: calculate build cost vs. manual cost

  • May justify human delegation instead


Important + Not Urgent


  • Prime pipeline development window

  • Strategic investment zone

  • Build systems for future scale/repetition

  • R&D and tool development time


Not Important + Not Urgent


  • Challenge necessity first

  • If kept: experimentation zone

  • Low-stakes testing ground for new AI tools

  • Minimal investment justified


This keeps the familiar structure but adds the how to the what.
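The overlay above can be captured as a simple lookup keyed on the two Eisenhower axes. This is just a sketch of the idea; the function name and the condensed guidance strings are my own shorthand for the bullet lists above.

```python
# The resourcing overlay above, expressed as a lookup keyed on the two
# Eisenhower axes: (urgent, important). Guidance strings are condensed
# from the quadrant lists above.

RESOURCING_OVERLAY = {
    (True,  True):  "human + proven AI pipeline only; no experimenting",
    (True,  False): "use an existing pipeline, else weigh build vs. manual cost",
    (False, True):  "prime window for building pipelines and tooling",
    (False, False): "challenge necessity first; low-stakes AI experimentation",
}

def resourcing_guidance(urgent: bool, important: bool) -> str:
    """Return the execution guidance for a quadrant."""
    return RESOURCING_OVERLAY[(urgent, important)]

print(resourcing_guidance(urgent=True, important=True))
# human + proven AI pipeline only; no experimenting
```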


Experiment 2: The Resourcing Spectrum


Or maybe the resourcing decision is separate—a spectrum you apply after you've determined priority:

Fully Agentic AI ←→ AI-Assisted ←→ Smart Automation ←→ Manual Process

Where you place the needle depends on:


  • Scale and recurrence

  • Complexity

  • Risk tolerance

  • Upfront investment vs. per-instance bandwidth cost
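The four factors above could, in principle, be turned into a rough placement score. The sketch below is purely illustrative: the 0-3 scales, the equal weighting, and the assumption that higher complexity pushes toward manual work are all my own assumptions, not a validated model.

```python
# Toy scoring of where to place the needle on the resourcing spectrum,
# using the four factors listed above. Scales, weights, and the direction
# of the complexity factor are illustrative assumptions.

SPECTRUM = ["Manual Process", "Smart Automation", "AI-Assisted", "Fully Agentic AI"]

def spectrum_position(recurrence: int, complexity: int,
                      risk_tolerance: int, bandwidth_available: int) -> str:
    """Each factor scored 0-3; higher totals justify more automation.

    High complexity counts *against* automation here (harder to build
    a reliable pipeline), hence the (3 - complexity) term.
    """
    score = recurrence + (3 - complexity) + risk_tolerance + bandwidth_available
    # score ranges 0-12; bucket into the four spectrum positions
    return SPECTRUM[min(score // 3, 3)]

print(spectrum_position(recurrence=3, complexity=0,
                        risk_tolerance=3, bandwidth_available=3))
# Fully Agentic AI
```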


This trade-off became crystal clear to me during my time at ZeroLight (2015-2017), an automotive visualization startup that was doing something fascinating. They were using real-time game engine technology to build car configurators that could generate photorealistic renders instantly, competing against traditional CGI studios that rendered each image manually.

I came from the CGI and motion graphics world—Cinema 4D, After Effects, traditional render pipelines. Everyone else at ZeroLight was game tech. This gave me a strange duality of perspective: I understood both approaches intimately.

The traditional CGI approach:


  • Fast to produce the first image (known tools, established workflow)

  • But significant render time cost for each subsequent variation

  • Every new angle, every new color, every new configuration = another render queue

  • Predictable process, but doesn't scale


The ZeroLight approach:


  • Massive upfront investment (100+ hours building the real-time system)

  • Complex pipeline requiring technical expertise

  • But once built: instant generation at any scale

  • 1 image or 10,000 images = same cost


The strategic question we constantly faced: At what point does the upfront pipeline investment pay off versus accepting per-instance manual costs?

The answer depended on variables most people didn't consider:


  • How many variations would the client actually need? (They always said "just a few" but usually meant dozens)

  • How likely was the project to recur? (One hero shot vs. ongoing campaign)

  • What was the client's approval process like? (Collaborative iteration vs. endless revision cycles)

  • Did we have the bandwidth to build a robust system, or just enough to get something out the door?


That calculation—that exact same trade-off—is now a daily decision with AI tools.

Do you:


  • Spend 2 hours building a ComfyUI workflow that can generate variations instantly?

  • Or spend 20 minutes manually prompting MidJourney each time you need something?


The answer depends on recurrence, scale, and crucially: how much mental bandwidth you have available to build and maintain the system.

Sometimes the "inefficient" manual approach preserves more cognitive capacity than the "efficient" automated one.
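The build-vs-manual arithmetic above reduces to a simple break-even calculation. The numbers below reuse the 2-hour ComfyUI build and 20-minute manual prompting costs from the example; the function itself is my own sketch, not a formal model.

```python
# Rough break-even sketch for the build-vs-manual trade-off described above.
# The 120-minute build and 20-minute per-run manual costs come from the
# ComfyUI-vs-manual-prompting example in the text.

def break_even_runs(build_minutes: float,
                    manual_minutes_per_run: float,
                    automated_minutes_per_run: float = 0.0) -> float:
    """Number of runs after which the upfront pipeline pays for itself."""
    saving_per_run = manual_minutes_per_run - automated_minutes_per_run
    if saving_per_run <= 0:
        return float("inf")  # automation never pays off
    return build_minutes / saving_per_run

# 2-hour ComfyUI build vs. 20 minutes of manual prompting per request:
print(break_even_runs(build_minutes=120, manual_minutes_per_run=20))
# 6.0 -- the pipeline wins from the seventh request onward
```

Of course, this only counts clock time; it says nothing about the bandwidth cost of building and maintaining the pipeline, which is exactly the dimension the next experiment tries to capture.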


Experiment 3: Story Points for Honest Effort Assessment

I started exploring how developers estimate work complexity. They use something called story points—a way to account for effort, uncertainty, and complexity rather than just time. The brilliant part? You can add extra points for things like difficult stakeholders (I call this the "arsehole tax") or venturing into completely new territory.

(If you're unfamiliar with story points, Asana has a solid explainer—but the short version is: it's a more honest way to estimate work than pretending everything fits neatly into hours.)



Here's the formula I've been using:

Story Points = Base Complexity + Uncertainty Tax + Arsehole Tax + Bandwidth Premium

Where:


  • Base Complexity: Pure technical/creative difficulty (1-3 for simple, 5-8 for moderate, 13+ for complex)

  • Uncertainty Tax: +1-5 for new territory, untested approaches, unknown outcomes

  • Arsehole Tax: +1-5 for difficult stakeholders, complex approval processes, political minefields

  • Bandwidth Premium: +1-3 for work requiring deep focus or flow states (creative work has a high switching penalty)


This reveals something crucial:

For small tasks (1-3 story points total) with no existing pipeline: Just do it manually. The bandwidth cost of setting up AI workflows, prompting, reviewing outputs, and context-switching often exceeds just doing the work. This preserves mental energy and maintains flow.

For larger, recurring tasks (8+ story points): The upfront investment in building an automated pipeline makes sense. The bandwidth savings compound over time.
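The formula and the two thresholds above can be sketched in a few lines. The component ranges and the 3-point/8-point cut-offs come straight from the text; the `Task` dataclass and function names are my own illustrative choices.

```python
# A minimal sketch of the story-point formula and thresholds described above.
# Component ranges and the 3/8-point cut-offs come from the text; the names
# are illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    base_complexity: int    # 1-3 simple, 5-8 moderate, 13+ complex
    uncertainty_tax: int    # +1-5 for new territory / untested approaches
    arsehole_tax: int       # +1-5 for difficult stakeholders / politics
    bandwidth_premium: int  # +1-3 for deep-focus or flow-state work

    @property
    def story_points(self) -> int:
        return (self.base_complexity + self.uncertainty_tax
                + self.arsehole_tax + self.bandwidth_premium)

def resourcing_hint(task: Task, pipeline_exists: bool) -> str:
    points = task.story_points
    if points <= 3 and not pipeline_exists:
        return "do it manually"        # setup cost exceeds the work itself
    if points >= 8:
        return "invest in a pipeline"  # bandwidth savings compound over time
    return "judgement call"            # between the two thresholds

quick_fix = Task(base_complexity=2, uncertainty_tax=0,
                 arsehole_tax=0, bandwidth_premium=1)
print(quick_fix.story_points, resourcing_hint(quick_fix, pipeline_exists=False))
# 3 do it manually
```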

But story points also reveal something the productivity gurus won't tell you: the "x4, x10 output" promise of AI-assisted work simply isn't achievable without addressing the bandwidth cost of constant human-AI switching.

If you're toggling between apps 1,200 times per day and each switch carries up to 23 minutes of attention residue, you're already operating at severely degraded capacity. Adding more AI-assisted tasks just accelerates the depletion.

Maybe They're Separate Tools, Not One Unified System

The more I experiment, the more I think these aren't meant to merge into one elegant matrix. Maybe what we need is a toolkit of complementary instruments:

1. Eisenhower Matrix (Priority)


  • What matters? When does it matter?

  • This remains unchanged and valuable


2. Story Points (Honest Effort Assessment)


  • How complex is this really?

  • Including uncertainty, difficult stakeholders, and bandwidth cost

  • Prevents us from lying to ourselves about "quick tasks"


3. Resourcing Spectrum (Execution Method)


  • Manual vs. automation decision

  • Informed by story points, recurrence, and bandwidth availability

  • Accounts for upfront investment vs. ongoing cognitive load


Each tool does one job well. Together, they address the dimensions the original Eisenhower Matrix was never designed to handle.

What's Your Experience?


  • Are you still using Eisenhower as-is, or have you adapted it?

  • Have you found ways to account for AI resourcing decisions in your prioritization?

  • How do you manage bandwidth—not just time—in your planning?


I'm genuinely curious how others are thinking about this. The tools that got us here might not be the tools that get us through the AI transformation.

Let's figure this out together.

 
 
 
