The Slide Rule Moment: When Better Tools Threatened "Real" Work

Picture this: It's 1963. You're a structural engineer at a mid-sized firm in Chicago. For fifteen years, you've prided yourself on your slide rule mastery—the way you can estimate load-bearing calculations with a few deft movements, the muscle memory of converting logarithms, the quiet authority that comes from being the person others check their work against.

Then one morning, your firm's newest hire walks in with an electronic calculator.

He completes in 90 seconds what takes you eight minutes. Your carefully cultivated expertise—the thing that made you you—suddenly looks quaint. Worse, the partners are excited about it. "Think how much more we can bid on!" they say.

You feel something uncomfortable in your chest. If a machine can do the math, are you even an engineer anymore? Or just a button-pusher?

And here's the question that keeps you up at night: When you submit calculations to the city for a building permit, should you check the box that says "I certify this is my original work"?

The Anxiety Is Ancient

This moment—this precise flavor of obsolescence anxiety paired with ethical uncertainty—has happened thousands of times across human history. And every single time, we ask the same two questions:

  1. If a tool makes it easier, does that make the work less mine?
  2. Am I being honest when I claim it as my own?

The scribes of medieval monasteries felt it when the printing press arrived. For generations, copying manuscripts was sacred work—the slower you went, the more devotional the act. Then Gutenberg showed up with his machine, and suddenly any merchant could produce more books in a month than a monastery could in a decade.

The monks didn't just fear unemployment. They feared something deeper: If someone commissioned a manuscript and they used this new press, were they lying when they delivered it as "their work"? The machine did the copying, after all.

The calligraphers had the answer, of course: a printed book and an illuminated manuscript weren't the same thing at all. One scaled execution; the other embodied artistic judgment. But you had to live through the disruption to see the distinction clearly.

When Faster Became Suspicious

Let's try another one:

It's 1905. You're a professional typist—yes, that's a real profession—working for a law firm. You trained for months to reach 60 words per minute on a manual typewriter. Your fingers are strong. You know exactly how much force each key needs. You can hear errors before you see them.

Then the firm brings in the new Underwood Model 5 with its lighter touch and fewer jams. The junior typist—hired last month—is suddenly hitting 80 words per minute.

When you submit a typed legal brief to the court, the cover page asks you to certify that you "prepared this document in accordance with professional standards." Should you check that box? The machine did most of the physical work. You just... guided it?

The question gnawed at professional typists for decades. When IBM Selectrics arrived in the 1960s with their automatic correction ribbons, the debate reignited: If you could erase mistakes invisibly, were you being truthful about your competence when you signed your name to the document?

The answer, obvious in hindsight: Yes. Because "preparing a document" meant applying professional judgment about what to type, not demonstrating manual dexterity. The tool changed what the work was—from physical copying to intellectual organization.

The Architects' Existential Crisis

Let's add another wrinkle to the slide rule vs. calculator example we started with: AutoCAD.

It's 1985. You're an architect who spent your career learning to draft by hand. You can draw a perfect circle freehand. You know building codes by heart because you've hand-lettered them onto blueprints a thousand times.

Then AutoCAD shows up.

The new associate (why is it always the young kids showing up with the newest technology?!) produces cleaner elevations in an afternoon than you can in a week. The precision is inhuman—literally. Every line is perfectly parallel. Every dimension updates automatically when you change a wall.

Here's the thing that keeps you up at night: When you submit plans to the city for approval, you have to sign a statement certifying "these plans represent my original architectural work and professional judgment."

Can you sign that in good faith? The computer drew all the lines. It did all the technical drafting. You just... told it what to draw.

The question felt existential in 1985. By 1995, it was silly. Of course you sign it. The computer didn't design anything. It executed your design instructions with perfect precision. The originality wasn't in the line work—it was in the architectural thinking the lines represented.

But that clarity only came after living through the uncertainty—and the bar was raised. The tool was no longer just a calculator; it was a computer. (Insert ominous foreshadowing music about AI raising the bar even more, which we'll get to below.)

The Pattern: Execution vs. Authorship

Every single time a tool becomes powerful enough to automate significant portions of professional work, we face two parallel anxieties:

  1. Economic: Will I become obsolete?
  2. Ethical: Can I honestly claim this work as mine?

And every single time, the answer to the ethical question hinges on understanding what "mine" actually means.

The medieval scribe copying a manuscript by hand wasn't creating original work—they were executing someone else's original text with manual labor. When the printing press automated that execution, nothing ethically changed. The original author was still the author. The scribe/printer was still the executor.

The architect drafting by hand wasn't creating originality through the act of drawing lines—they were expressing their original architectural vision through lines. When AutoCAD automated the drawing, nothing ethically changed. The architect was still the author of the design. The software was the executor.

The question isn't "did a tool do significant work?" The question is "did I author the intellectual content the tool helped me express?"

What the Law Says Now

The legal landscape around AI and copyright has evolved significantly through 2025 and early 2026, providing clearer guidance on when work is "yours" under the law.

The Landmark Thaler v. Perlmutter Decision (March 2025)

On March 18, 2025, the U.S. Court of Appeals for the D.C. Circuit issued the first appellate-level decision on AI authorship in Thaler v. Perlmutter. The case involved Dr. Stephen Thaler, who created an AI system called the "Creativity Machine" that autonomously generated an artwork titled "A Recent Entrance to Paradise."

The court's ruling was unequivocal: Human authorship is a bedrock requirement of copyright law. Works created solely by AI, with no human creative input, cannot be copyrighted.

Judge Patricia Millett wrote: "As a matter of statutory law, the Copyright Act requires all work to be authored in the first instance by a human being."

Just six weeks before the Thaler decision, on January 29, 2025, the U.S. Copyright Office released Part 2 of its comprehensive AI report, addressing the copyrightability of AI-generated outputs. The report reaffirms human authorship as essential while providing practical guidance for works created with AI assistance.

The Copyright Office identifies three scenarios where AI-assisted works may qualify for copyright protection:

  1. Using AI as an assistive tool – Where AI helps execute human creative choices but doesn't replace human authorship
  2. Incorporating human-created elements into AI output – Where humans add substantial creative contributions to AI-generated material
  3. Creatively arranging or modifying AI-generated elements – Where humans exercise significant creative judgment in selecting, arranging, or transforming AI outputs

Critical Limitation: Prompts Alone Are Not Enough

The Copyright Office explicitly states that text prompts alone—even detailed, sophisticated ones—do not currently provide sufficient human control to qualify as copyrightable authorship. This determination could change as technology evolves, but as of 2026, simply writing a prompt is not enough.

The day after releasing its report, on January 30, 2025, the Copyright Office granted its first copyright registration to an AI-assisted image titled "A Single Piece of American Cheese." This registration demonstrates that works created with AI assistance—where sufficient human authorship is present—can receive copyright protection.

The Training Data Question

Multiple lawsuits continue to test whether using copyrighted works to train AI models constitutes fair use. Two significant cases from the Northern District of California in late 2025 reached different conclusions:

  • Bartz v. Anthropic: The court held that using lawfully obtained copyrighted books to train LLMs qualifies as "spectacularly transformative" fair use—though using pirated copies did not
  • Kadrey v. Meta Platforms: Reached a different outcome on similar facts

The Copyright Office's Part 3 report on training AI models was released in pre-publication form on May 9, 2025, with a final version expected soon.

What This Means for "Original Work" Certification

The emerging legal framework from 2025-2026 suggests that using AI tools doesn't automatically disqualify work from being "original"—but the nature of your contribution is critical.

You can certify work as "original" if you:

  • Provide the core intellectual content, ideas, and arguments
  • Make strategic decisions about structure and approach
  • Select, arrange, and synthesize the output with creative judgment
  • Exercise expertise in determining what to include and exclude
  • Could defend every significant choice based on your professional judgment

You cannot honestly certify work as "original" if you:

  • Use prompts to generate content with minimal human contribution
  • Use AI output verbatim or with only minor edits
  • Contribute no original thinking or synthesis beyond prompt engineering
  • Cannot explain or defend the substantive choices in the work

The legal standard is now clear: Human authorship means exercising creative control over the expression, not just instructing a tool what to create. If you want a deeper dive on this, read the U.S. Copyright Office's January 2025 Copyright and AI report.

The Test: The "Defend Your Choices" Standard

Here's a practical test I recommend for whether work is "yours" in any context:

Can you defend every significant choice in the work without referring to "the AI suggested it"?

If someone asks you:

  • Why did you structure the argument this way?
  • Why did you include this example?
  • Why did you use this methodology?
  • Why did you emphasize these points?

...and your honest answers are all rooted in your strategic thinking, your expertise, your judgment—then it's your original work, regardless of what tools you used to execute it.

If your honest answers would be "I'm not sure, Storytell.ai generated that part" or "That's just what the AI came up with and it seemed fine"—then it's not your original work, because you're not authoring; you're curating.

The Slide Rule Engineer Redux

So back to 1963. The engineer with the calculator, about to sign a building permit certification that says "I certify these calculations represent my original professional work."

Should he sign it?

Yes. Absolutely yes.

Because "his work" doesn't mean "he personally performed every arithmetic operation." It means:

  • He selected the appropriate engineering methods
  • He applied the correct formulas for the load conditions
  • He interpreted building codes properly
  • He exercised engineering judgment about safety factors
  • He takes professional responsibility for the results

The calculator executed the arithmetic. He authored the engineering analysis. The work is his.

The 2026 Writer Using Storytell.ai

Now it's 2026. You're a writer about to submit a piece that asks "I certify this is my original work."

You uploaded a lot of your character and plot ideas to a Storytell.ai project. You used Storytell to help you think through story arcs, and to take all your rambling notes, sketches, and voice memos and pull them together into an outline that you iterated on. You used it to reorganize sections. You used it to find better words for concepts you'd developed. Maybe you even used it to generate a first draft from your outline—and then you substantially rewrote it.

Should you check "yes"?

Ask yourself the slide rule engineer's questions:

  • Did you develop the core ideas and arguments?
  • Did you make the strategic choices about structure and emphasis?
  • Did you select and synthesize the research?
  • Did you exercise judgment about what belonged in the piece?

And most importantly, in my opinion: Can you defend the strategic thinking, expertise and judgment as yours?

If the answers are yes—even if Storytell.ai was the equivalent of your calculator or AutoCAD—then I'd say yes, it's your original work.

The New Obligation

Here's what's changed since the slide rule era—and why the ethical question is now harder, not easier:

AI tools are powerful enough to obscure whether you're doing the authorship or just the curation.

When the architect used AutoCAD, it was obvious: The software drew lines according to the architect's specifications. The authorship was clearly human; the execution was clearly machine. It was even simpler when the calculator was just doing math.

When you use Storytell or any AI system, it can be ambiguous: Storytell generates prose that sounds human, contains ideas that seem reasonable, and includes structure that appears logical.

This means the ethical obligation is now on you to know the difference.

The slide rule engineer couldn't accidentally let the calculator do his engineering thinking—calculators don't think. But you can accidentally let Storytell do your intellectual work, because it produces output that mimics thinking.

So when you see "I certify this is my original work," you have to ask yourself harder questions than the slide rule engineer did:

  • Did I use the tool as a thought partner in my thinking, or to do my thinking for me?
  • Am I the author who used a powerful execution and creativity tool, or am I the curator who selected among AI-generated options?
  • If this tool disappeared tomorrow, could I recreate the essential substance from my own expertise?

The Standard Just Went Up

Remember what we learned from history: When tools eliminate execution work, they don't lower the standard—they raise it.

The AutoCAD architect faced a higher standard than the hand-drafting architect, because the tool removed their ability to hide behind technical execution quality. Now everyone's drawings look professional. The only differentiation is the quality of architectural thinking.

The same is true with AI tools like Storytell.

You can no longer hide behind "but I worked so hard on the prose." Everyone has access to well-structured, grammatically perfect prose. The only differentiation left is the quality of your thinking, your insight, your judgment.

When you certify something as "your original work," you're not just making a legal claim. You're making an ethical claim: This represents my intellectual contribution, and I take responsibility for it.

If you can make that claim honestly—if the thinking is yours even though the execution was assisted, with creative thought partnership between you and the machine helping to structure your thoughts and generate creative iterations—then check the box with confidence.

If you can't, or you're uncertain whether you could defend the choices, then don't check the box. Because "original work" isn't about what tools you used. It's about whether you can stand behind the irreducibly human part: the thinking, the judging, the meaning-making.

When AI Knows All the Answers, Your Questions Become What Matters

The slide rule engineers in 1963 had a choice:

They could resent the calculator for devaluing their hard-won computational skill.

Or they could recognize that the calculator was calling them to a higher standard—one where their ability to ask the right engineering questions mattered more than ever, because calculation was no longer the bottleneck.

You're standing at the same threshold—but the stakes are higher.

AI doesn't just execute anymore. It answers. It suggests. It generates solutions to problems you haven't fully articulated yet. In a world where every answer is instantly accessible, the only scarce resource left is the quality of your questions.

When you see "I certify this is my original work," the real question isn't whether the AI wrote the sentences. It's whether you were the one asking the questions that mattered—the questions that shaped what the AI explored, the questions that challenged its first answers, the questions that revealed what was missing, the questions that led to genuine insight rather than plausible-sounding nonsense.

The new standard for originality isn't about what you wrote. It's about what you were curious enough to ask.

The AutoCAD architect couldn't hide behind beautiful drafting skills anymore—they had to be a better architect. You can't hide behind well-structured prose anymore—you have to be a better thinker. But here's what's different now: The best thinking isn't solo anymore. It's collaborative. It's creative thought partnership between human curiosity and machine capability.

The original work isn't the output the AI generated. The original work is the exploration you led—the questions you asked, the directions you pursued, the judgment calls you made, the creative leaps you imagined, the synthesis you recognized when you saw it.

If you can defend your questions—if you can explain why you asked what you asked, pursued what you pursued, kept what you kept—then yes, it's your original work. The thinking is yours. The creative partnership was yours. The judgment was yours.

But if you can't defend the questions—if you just accepted whatever the AI offered first—then you weren't doing original work. You were hoping the AI would do it for you.

So when you reach for that checkbox, ask yourself: In a world where AI has all the answers, was I the one asking the questions that mattered? Did I think creatively enough, probe deeply enough, challenge critically enough to be a true thought partner with this tool?

If yes—if your questions shaped the exploration, if your curiosity drove the discovery, if your judgment distinguished signal from noise—then check that box with confidence.

Because in the age of AI, the people who ask better questions don't just have an advantage. They're the only ones doing original work at all.


If you'd like to know how I use Storytell.ai as a thought partner in my day-to-day work as a CEO, head over to my "Pro-Tips: How DROdio Uses Storytell.ai" Substack.