Telltale Labs

Why Growth Teams Keep Leaving AI Leverage on the Table

A Practical Playbook for Growth Leaders

Tyler Pennell · Telltale Labs

This is a practical playbook for marketers to apply AI in a more powerful way than ad hoc queries. Its core is a set of foundational principles that hold true regardless of what AI model or tool drops next.

While developing these principles, my experience leading growth in finance and healthcare start-ups kept forcing the question: How could we deploy autonomous agents in data-sensitive environments, where we spend tens of millions of marketing dollars each month, and where tactics and processes change constantly?

I found that there's a comfortable middle ground between transactional chats and autonomous agents, one that can solve many marketing problems immediately.

Following this framework, you can achieve step-function increases in speed, decision quality, and creativity by combining agentic coding (using AI to write and reason about code) with deterministic automation (structured workflows that execute predictably).

I've built systems like these myself with rudimentary technical skills. No engineering team. No data scientist. Just a coding agent, my experience in growth, and the framework shared below.

With this knowledge you'll be able to build products that:

Automatically analyze marketing channels and deliver daily optimization recommendations
Generate, prioritize, and distribute SEO content outlines to freelance writers based on search data
Identify creative performance trends across channels, formats, iterations, and inconsistent naming conventions

What the Architecture Looks Like

Institutional Memory

A structured changelog that timestamps every meaningful change, from budget shifts to creative swaps, so models can connect actions to outcomes.

Encoded Logic

The unwritten rules your best operators follow, translated into directives a model can apply consistently.

Orchestrated Workflows

Individual scripts stitched into a system: fresh data in; analysis, recommendations, or content out; delivered to the right person in the right format, at the right time.

The Playbook

Preamble

Teams will have different enablement, cost, security, and compliance constraints. Some companies will enable marketers to build code-based tools themselves (Ramp, for example); others will require collaborating with technical teams.

Steps 1–5 are relevant whether or not your team will be doing agentic coding.

Steps 6–9 focus on building an automated channel analysis tool using agentic coding. Even if you're not coding yourself, a cursory understanding of how coding agents function is helpful when collaborating with tech teams.

Step 1

Build a Changelog

Why: A well-maintained changelog is infrastructure for AI. Language models don't retain memory between sessions. A changelog gives them the data to connect cause and effect.
How: Track every meaningful change, timestamped. Campaign launches, budget shifts, creative swaps, audience changes, landing page updates. If it could move a metric, log it. Start manually, then automate entries. Too much data degrades AI output quality and increases cost, so you'll need a clear, consistent entry structure that lets you filter what the model ingests.
Deliverable: A shared, structured changelog that timestamps every meaningful change and can be consumed alongside performance data by your AI tools.
Pitfalls: Incomplete entries, inconsistent formatting, and lack of participation will undermine the system. This requires process discipline and ownership. Close the loop: log when an experiment ends or when you turn off a campaign.
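As a minimal sketch of what a structured, filterable entry could look like (the field names and the `log_change` helper are illustrative assumptions, not a prescribed schema):

```python
import csv
from datetime import datetime, timezone

# Hypothetical changelog schema; field names are illustrative, not prescriptive.
FIELDS = ["timestamp", "channel", "change_type", "entity", "description", "owner"]

def log_change(path, channel, change_type, entity, description, owner):
    """Append one timestamped entry to a shared changelog CSV."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "channel": channel,
            "change_type": change_type,   # e.g. budget_shift, creative_swap
            "entity": entity,             # the campaign, ad set, or asset affected
            "description": description,
            "owner": owner,
        })

log_change("changelog.csv", "paid_search", "budget_shift",
           "Campaign: Brand US", "Daily budget $500 -> $750", "tyler")
```

A consistent `channel` and `change_type` vocabulary is what makes the filtering possible: you can hand the model only the paid-search entries from the window you're analyzing.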
Step 2

Map Your Opportunities

Why: You shouldn't build solutions for problems that don't exist. Understanding your opportunities and their size will help you gain internal support and build a prioritized roadmap.
How: Look for two types of problems:
  • Manual bottlenecks: Repetitive work that can be automated
  • Decision gaps: Decisions made with insufficient data or made too slowly

Map those to AI solutions:

  • Generation: Keywords, ad copy, landing pages, email variants, creative briefs
  • Analysis: Deep channel insights, review mining, cross-source creative trends
  • Process optimization: Briefing, compliance review, creative naming

Start broad across your stack, then select a single channel to analyze for your first tool.

Deliverable: Impact-ranked opportunities and a specific channel to build your first analysis tool around.
Pitfalls: Don't force AI onto problems that don't warrant it or can be solved just as well without it.
Step 3

Designate a Growth AI Lead

Why: Using AI tools beyond ad hoc queries typically requires internal collaboration and approvals. A dedicated lead makes that collaboration easier.
How: Assign a team member to own this initiative full-time for an initial sprint (2–4 weeks). Form a cross-functional team with your AI lead, an engineer or data scientist, and someone who understands compliance and security.
Deliverable: A named AI lead with dedicated time and a cross-functional task force.
Pitfalls: Treating AI like an add-on rather than a core surface area will not produce transformational results. This can't be a half measure.
Step 4

Document Your Standard Operating Procedures (SOPs) in Plain Language

Why: Your SOPs become the foundation for every AI tool you build. They are the knowledge layer your agent will reference, capturing hard-earned insights and serving as the basis for the directives the model will use. SOPs and directives improve recursively as your experimentation is captured in the changelog.
How: Capture three inputs:
  • Internal best practices: What already works? The unwritten rules your best operators follow. You'll use the changelog to refine this over time.
  • Data-derived patterns: Re-review historical tests and performance data. This manual deep dive often surfaces missed insights before automation begins.
  • Industry best practices (optional): Proximity to a channel creates blind spots: you stop seeing what's obvious from the outside. Use an LLM to synthesize public information into a best practices summary, then compare against your own SOP to identify any obvious gaps.
Deliverable: A plain-language SOP for a single, priority marketing channel, written for a human reader and structured for AI reference.
Pitfalls: Don't over-index on trivial operational steps. This isn't employee onboarding documentation. It should capture hard-earned insights from rigorous experimentation, not basic task instructions.
Step 5

Build Directives

Why: Directives tell the coding agent how to use your SOP: how the model should interpret inputs, apply rules, and output decisions. Directives ensure you get the results you're looking for.
How: Draft input, execution, and output directives for your channel analysis tool:
  • Input directives: Specify what data the model receives, formatting rules, and how to interpret metrics using your SOP logic.
  • Execution directives: Use an LLM to ingest your SOPs and generate execution directives in markdown. Review and iterate to fill gaps in reasoning. Should the model rank opportunities? Is there a threshold that makes certain data meaningful (e.g., a minimum ad spend)? Be explicit about the decision framework.
  • Output directives: Define what actionable output looks like. What decision should this inform? Should it output a spreadsheet, report, Slack message, API payload? What does a weak output look like vs. a strong one?
Deliverable: Clear input, execution, and output directives derived from your SOPs using an LLM.
Pitfalls: Vague directives produce vague results. Ambiguity creates model confusion and system drift. Be explicit and decisive. You can refine directives later, but clarity upfront is critical.

What This Looks Like in Practice

Paid Search Audit Tool

The following examples show how Steps 4, 5, and 6 connect. A plain-language SOP for keyword review becomes a structured set of IF/THEN directives, which a coding agent then uses to produce a prioritized audit output. The channel is paid search, but the architecture is the same for any channel.

Step 4 Output

The SOP

Hard-won rules encoded in plain language.

SOP excerpt for paid search keyword review
Step 5 Output

The Directives

Directives are your SOPs turned into a precise framework for the model to evaluate data it receives.

Directives excerpt — Keyword & Match Type Audit
Step 6 Output

The Audit

What the tool produces: a prioritized table of issues, each with a clear impact and specific next action. No interpretation required: the model has already done it.

Output — Paid Search Audit Results
Step 6

Use Agentic Coding to Build a Simple Tool Without Automation

Why: The underlying scripts are the core building blocks. You need to build, test, and iterate on those before introducing additional complexity and risk with automation.
How: You're going to build your channel analysis tool and then have the agent run it with raw data before automating it. Open your coding agent (Claude Code, Cursor, Windsurf, etc.) in the folder where your directives and raw data live. Paste this prompt, adjusting the bracketed fields:

Sample Prompt

I want to build a Python script that analyzes [channel] performance data.


I've included:

- directives.md — instructions for how to interpret inputs, run analysis, and format outputs

- data.csv — raw performance export from [platform]


Build a script that reads the CSV, applies the logic in the directives, and outputs a [Markdown report / CSV / Slack message].


Do not add scheduling or automation. The script should run locally with a single command.


Before writing code, summarize your understanding of what the script should do and ask me any clarifying questions.

Monitor the model's progress and thinking. If results don't match expectations, tell the agent why and ask for solutions. The final line of the prompt — asking the agent to summarize before building — surfaces misunderstandings before they become code you have to debug.

Pitfalls: Don't describe the problem conversationally without handing the agent your directives file. The directives are what separates a generic AI output from something calibrated to your actual SOP.
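For illustration, here is a minimal sketch of the kind of script the agent might produce from that prompt. The column names, thresholds, and the single IF/THEN rule are stand-ins for whatever your directives actually specify, and an inline sample stands in for data.csv:

```python
# Illustrative thresholds; in practice these come from your directives.md
MIN_SPEND = 100.0   # ignore keywords below this spend (assumed threshold)
MAX_CPA = 50.0      # flag keywords above this cost per acquisition

def audit(rows):
    """Apply simple IF/THEN rules from the directives and return flagged issues."""
    issues = []
    for r in rows:
        spend, conv = float(r["spend"]), int(r["conversions"])
        if spend < MIN_SPEND:
            continue  # below the meaningful-spend threshold
        cpa = spend / conv if conv else float("inf")
        if cpa > MAX_CPA:
            issues.append({
                "keyword": r["keyword"],
                "issue": f"CPA ${cpa:.2f} exceeds ${MAX_CPA:.2f} target",
                "action": "Reduce bid or pause pending review",
            })
    return issues

def to_markdown(issues):
    """Format flagged issues as the prioritized table the directives call for."""
    lines = ["| Keyword | Issue | Next Action |", "|---|---|---|"]
    lines += [f"| {i['keyword']} | {i['issue']} | {i['action']} |" for i in issues]
    return "\n".join(lines)

# Inline sample standing in for data.csv
sample = [
    {"keyword": "mortgage rates", "spend": "400", "conversions": "2"},
    {"keyword": "refinance", "spend": "50", "conversions": "0"},
    {"keyword": "home loan", "spend": "300", "conversions": "10"},
]
report = to_markdown(audit(sample))
print(report)
```

The structure mirrors the directives: input rules (which rows count), execution rules (the CPA test), and output rules (a table with a next action per row). That mapping is what you check when you review the agent's first draft.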
Step 7

Orchestrate a Deterministic Workflow

Why: Individual tools are useful. Connected tools can be transformative. Orchestration moves you from "a script that does X" to "actionable recommendations in Slack every morning."
How: The basic process for automating your channel analysis tool:

  • Query the API for fresh data
  • Run Python scripts to clean, analyze, and generate output
  • Deliver the report via Slack, email, or dashboard

We've already built the Python scripts. Scheduling and delivery can be orchestrated with different tools: additional Python scripts with triggers, off-the-shelf automation tools like n8n, or an AI agent. Because agents are unpredictable, reserve them for processes where a follow-on action depends on an unpredictable prior output.

Deliverable: A flowchart with inputs, hand-offs, and outputs mapped. Regardless of how you orchestrate your workflow, this mapping is prerequisite. Collaborate with your cross-functional tech partners for the first run.
Pitfalls: Automation adds complexity and operational risk, which multiplies with autonomous agents. Start with deterministic workflows, assign clear owners, implement monitoring and logging, and define failure protocols. Automation can be incredibly powerful, but if efficiency gains from automation are marginal, don't do it.
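The three-stage workflow above can be sketched as a deterministic pipeline. The `fetch_data` and `deliver` functions are stubs standing in for real API and Slack/email integrations; the point is the fixed order and the failure protocol:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def fetch_data():
    # In practice: query the ad platform's reporting API (stubbed for illustration)
    return [{"campaign": "Brand US", "spend": 750.0, "conversions": 15}]

def analyze(rows):
    # In practice: run the Step 6 audit script against its directives
    return [f"{r['campaign']}: CPA ${r['spend'] / r['conversions']:.2f}" for r in rows]

def deliver(report_lines):
    # In practice: post to a Slack webhook or send an email (stubbed for illustration)
    return "\n".join(report_lines)

def run_pipeline():
    """Deterministic orchestration: stages run in a fixed order, and any
    failure halts the run and is logged so the workflow owner can triage it."""
    try:
        return deliver(analyze(fetch_data()))
    except Exception:
        log.exception("Pipeline failed; alert the workflow owner")
        raise

print(run_pipeline())
```

Unlike an agent, this pipeline does the same thing every run, which is exactly what you want for a daily report: predictable output, and a logged error instead of a silent improvisation when something breaks.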
Step 8

Iterate & Maintain Your Systems

Why: The tools you built aren't set-it-and-forget-it: your knowledge evolves, external platforms change, and your scope grows. Each requires a different response.
How: Build maintenance into your rhythm: weekly output reviews, monthly directive audits against your SOPs, and quarterly impact reviews using your Step 9 scores to decide what to iterate, scale, or shut down.

When Your SOPs Change

  • Version older directives files rather than deleting them, so you can compare outputs before cutting over.
  • Re-engage your coding agent to update directives: "My directives have changed. Review the current script and identify what logic needs to change. Summarize before making edits."
  • Backtest against historical outputs. Unexplained divergence means your new directives introduced ambiguity or a conflict with prior directives.

When a Platform Breaks Your Tool

An API deprecates a field, Google Ads renames a column, and your script fails. Taking a renamed column as the example:

  • Fix with your coding agent. Export a sample of the new data format and prompt: "This script is breaking because the platform changed a column name. Here is a sample of the new format. Identify what changed and update the script."
  • Design for failure modes up front. Tell the coding agent to define expected column names as variables at the top of the script, not hardcoded throughout. A renamed column becomes a one-line fix.
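A sketch of that pattern, with a validation check so a schema change fails loudly at the start of a run (the column names here are placeholders, not the platform's actual export schema):

```python
# Expected platform column names, defined once at the top of the script.
# If the platform renames a column, only these constants change.
COL_KEYWORD = "Keyword"
COL_SPEND = "Cost"
COL_CONVERSIONS = "Conversions"

REQUIRED = [COL_KEYWORD, COL_SPEND, COL_CONVERSIONS]

def validate_columns(header):
    """Fail fast with a clear message instead of breaking mid-analysis."""
    missing = [c for c in REQUIRED if c not in header]
    if missing:
        raise ValueError(f"Export schema changed; missing columns: {missing}")
    return True

validate_columns(["Keyword", "Cost", "Conversions", "Clicks"])
```

When the rename happens, the fix is a one-line constant update, and until you make it, the script raises a readable error instead of producing a silently wrong report.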

When You Want to Add a New Step

  • Update your directives first. The directives are the spec. Code should follow them, not the other way around.
  • Extend, do not rewrite. Prompt your agent: "Add a new analysis step after Step 3. Do not change existing logic unless required. Summarize what you are adding before making changes."
  • Test the addition in isolation before integrating into the full pipeline.
Deliverable: After using your channel analysis tool for a week or two, update it with insights learned from recent experimentation using the guidelines above.
Pitfalls: Skipping steps to ship something marginally faster will cause compounding technical debt. Version control, process discipline, and backtesting will save time in the long run.
Step 9

Measure Impact

Why: Deployment isn't the finish line. Internal AI projects die quietly because their impact was unclear, not because the tools broke.
How: Evaluate your project on three dimensions: time savings, performance impact, and decision quality. Decision quality is hardest to score. One way is to run a test where you log, but don't execute, AI recommendations. Then compare what your team did without AI to what they could have done with it.
Deliverable: A documented evaluation of the tool's effectiveness, including a clear score or decision: iterate, scale, or shut it down.
Pitfalls: A portfolio of ineffective tools erodes organizational trust. AI has real costs—security, maintenance, usage fees—and must earn its ROI. You will make mistakes. Measure them, learn, and iterate.
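One way to sketch the shadow-mode comparison from Step 9, assuming you log each AI recommendation alongside what the team actually did (entity and action names are illustrative):

```python
# Shadow-mode log: what the AI recommended vs. what the team actually did.
entries = [
    {"entity": "kw: refinance", "ai_action": "pause", "human_action": "pause"},
    {"entity": "kw: home loan", "ai_action": "raise_bid", "human_action": "no_change"},
]

def divergence_rate(log):
    """Share of decisions where the team acted differently from the AI.
    Reviewing the divergent cases shows whether the AI would have done better."""
    diff = sum(1 for e in log if e["ai_action"] != e["human_action"])
    return diff / len(log)

print(divergence_rate(entries))
```

The rate itself isn't the score; it tells you which decisions to review. If the AI's divergent calls would have outperformed the team's, decision quality improved.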

Parting Notes

Marketing and growth will look fundamentally different a year from now. Teams, processes, and responsibilities will shift. Change is uncomfortable, but the outcome is higher leverage.

Growth teams will spend less time moving tickets and trafficking media, and more time applying creativity, judgment, and systems thinking.

The hardest part is starting. This playbook gives you a practical framework to do exactly that.

Get in Touch

Ready to build your growth engine? Let's talk.