AI Workflows for Product Discovery: How Two PMs Are Doing It Right Now

Author: Productboard
1st May 2026 · AI Product Management, Spark

Every product team has more signal than ever. Sales calls recorded by default. Customer success calls transcribed automatically. Product analytics dashboards tracking every click. Discovery calls, Slack threads, NPS surveys, support tickets. It all exists. It's all somewhere.

And yet, most product managers are still staring at the same hard question: what do we build next?

AI was supposed to fix this. In some ways, it has. But speed without the right signal doesn't sharpen your decisions; it just means wrong bets compound faster. The real bottleneck is knowing what's worth building.

That's the tension Frank Lee, Principal Product Manager at Amplitude, and Chris Patton, Principal Product Manager at Productboard, sat down to address in a recent live session on AI-powered product discovery. Both have rebuilt their workflows from the ground up. Both were willing to show exactly how. This post captures those workflows, step by step, so you can start applying them today.

The Real Problem Isn't a Lack of Data

Ten years ago, getting customer feedback was slow and difficult. Getting on a call with a customer required coordination, follow-up, and luck. Insights arrived in batches: quarterly research projects, annual surveys, the occasional product review.

Today the opposite is true. Everything is recorded. Everything is transcribed. The problem isn't that product managers lack signal. The problem is that the signal lives everywhere and nowhere at once. Behavioral analytics in one tool, customer calls in another, feedback tickets in a third, experiment results in a fourth. Connecting those dots manually is a full-time job on top of the actual job.

What Frank and Chris both argue—and what their workflows demonstrate—is that AI doesn't just speed up this process. When set up correctly, it collapses the distance between signal and decision entirely.

How Frank Lee Rebuilt Product Discovery Around AI Agents

Frank Lee is a Principal PM at Amplitude, where he works on agents, MCP integrations, and embedding AI across the product development lifecycle. He's been thinking about AI-powered products since 2018, and his approach to PM work reflects that: he rebuilds his workflows on a quarterly basis, every time a new model ships something meaningfully better.

His philosophy is direct: figure out where you add the most value as a PM, then hand everything else to agents.

Here's what that looks like in practice.

The Foundation: A GitHub Repo as Your PM Brain

Before any of Frank's agent workflows make sense, there's infrastructure underneath them; specifically, a GitHub repo that serves as a centralized context layer for everything he does.

Inside it live strategy docs, product specs, templates, pricing references, vision documents, and a growing library of skills: prompt templates that teach agents how to run specific PM tasks. Whenever Frank gets an output he doesn't like, he feeds it back to the model and asks it to fix the underlying skill. The repo improves continuously, in small increments, as a natural byproduct of daily work.
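To make that concrete, here's a sketch of how such a repo might be organized and how a skill gets loaded. The layout and file names are illustrative, not Frank's actual structure:

```python
from pathlib import Path

# Illustrative layout (directory and file names are hypothetical):
#   pm-brain/
#     strategy/   vision docs, quarterly strategy
#     specs/      product specs and templates
#     pricing/    pricing references
#     skills/     prompt templates for recurring PM tasks
#       weekly-brief.md
#       analyze-chart.md

def load_skill(name: str, repo: str = "pm-brain") -> str:
    """Skills are plain markdown prompt templates, versioned in git alongside
    strategy docs, so every agent run starts from the same shared context."""
    return Path(repo, "skills", f"{name}.md").read_text()
```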

Frank's GitHub repository, which holds pertinent information like product lines and specs.

"Anytime you get an output that you don't like, you can actually just ask the model—I copy paste the exact output again and say, I didn't like this aspect about it. How can I change the skill so that you don't make this mistake again?" — Frank Lee, Principal PM, Amplitude

This single habit of treating your prompt library as a product that gets better over time is the foundation everything else sits on.
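That habit is easy to systematize. A minimal sketch of the feedback loop, assuming a generic `call_model` function (hypothetical, standing in for whichever client you use):

```python
def improve_skill(skill_text: str, bad_output: str, critique: str, call_model) -> str:
    """Feed a disliked output back to the model and ask it to revise the skill
    itself, mirroring Frank's 'fix the skill so you don't make this mistake
    again' prompt."""
    prompt = (
        "Here is a skill (prompt template) and an output it produced.\n\n"
        f"SKILL:\n{skill_text}\n\n"
        f"OUTPUT:\n{bad_output}\n\n"
        f"I didn't like this aspect of the output: {critique}\n"
        "How can the skill change so you don't make this mistake again? "
        "Return only the revised skill text."
    )
    return call_model(prompt)

# Commit each revision to the repo; the skill improves as a byproduct of daily work.
```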



Baby Amp, the repo Frank now uses to prototype new Amplitude features.

Workflow 1: The Automated Weekly Product Brief

Frank spent years of his early career building WBR (weekly business review) decks manually. Now an agent does it for him.

The workflow: an agent connected to Amplitude via MCP scans Frank's recent dashboards, charts, session replays, and web vitals. It synthesizes everything into a structured brief covering high-level trends, what's working, what's not, areas worth investigating, and recommended next steps.

Frank’s Claude Cowork connected to his skills library. This is the first step of his discovery workflow: generating a weekly brief.

What used to take a Sunday is now asynchronous. The brief arrives. Frank reviews it and redirects his attention to the judgment calls it surfaces, not the data gathering itself.

How to set this up:

  1. Connect your analytics platform via MCP to your AI client of choice (Frank uses Claude Code and Cursor)
  2. Define a skill that specifies which data sources to pull, what the output format should be, and what kinds of recommendations you want surfaced
  3. Run it weekly, feed back any outputs you don't like, and let the skill improve over time
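In practice Frank connects through his AI client's MCP configuration rather than writing code, but it helps to see what the round-trip in step 1 looks like. Here's a minimal sketch using the official MCP Python SDK; the server launch command, tool name, and arguments are placeholders for whatever your analytics vendor's MCP server actually exposes:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command -- consult your analytics vendor's MCP server docs.
server = StdioServerParameters(command="npx", args=["-y", "your-analytics-mcp-server"])

async def pull_weekly_data() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            # Hypothetical tool name and arguments:
            result = await session.call_tool("list_dashboards", {"days": 7})
            print(result.content)

asyncio.run(pull_weekly_data())
```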

Workflow 2: Automated Root Cause Analysis

When a metric moves in either direction, Frank no longer digs into it manually. Instead, he runs a skill called "analyze chart."

The agent takes a specific chart as its starting point, identifies correlated charts, runs group-by analysis across segments, and generates hypotheses about why the metric is changing. It calls out anomalies and regressions, explains where the change is likely coming from, and recommends what to do next.

This is the second step of Frank's discovery workflow: automated root cause analysis on individual charts.

If Frank likes the output, he can tell the agent to go ahead and create a notebook or build a dashboard, and it does. No manual handoff required.

How to set this up:

  1. Build a skill that instructs the agent to start with one chart, find correlations, and run multi-dimensional analysis
  2. Include instructions for hypothesis generation: push the agent to explain why something is happening, not just describe what happened
  3. Add a validation step: ask the agent to check its own assumptions against the underlying data before presenting conclusions
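Those three instructions can live directly in the skill text. A sketch of what such a skill might say; the wording is illustrative, not Frank's actual "analyze chart" skill:

```python
ANALYZE_CHART_SKILL = """\
# Skill: Analyze Chart
Given one chart as the starting point:
1. Find correlated charts and run group-by analysis across key segments
   (e.g. plan, platform, new vs. returning users).
2. For each anomaly or regression, generate 2-3 hypotheses explaining WHY the
   metric moved, not just a description of WHAT moved.
3. Before presenting conclusions, re-check each hypothesis against the
   underlying data and drop any the data contradicts.
Output: anomalies found, ranked hypotheses with evidence, and a recommended
next step for each. If asked, create a notebook or dashboard from the result.
"""
```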

Workflow 3: Session Replay Analysis at Scale

Watching session replays (what Frank calls "watching film") is one of the most valuable and time-consuming parts of product work. An agent now does the first pass.

Frank points the agent at session replay data for a specific user segment or time window. The agent analyzes behavioral events like click patterns, scroll behavior, where users drop off. It surfaces clusters of friction, flags potential bugs, and identifies intent patterns across sessions.

This is the third step of Frank’s discovery workflow: session replays.

The output isn't a replacement for watching replays yourself. It's a filter. Instead of spending hours in the queue, Frank reviews what the agent flagged and goes deep only where it matters.

How to set this up:

  1. Ensure your session replay tool exposes data via MCP (Amplitude's session replay is available this way)
  2. Write a skill that defines what signals to look for: drop-offs, rage clicks, unexpected navigation patterns, recurring errors
  3. Set it to run asynchronously and alert you when a meaningful cluster appears
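The signal definitions in step 2 are where the judgment lives. As a toy illustration of what a "first pass" filter means (not how Amplitude's agent actually works), a session-level check might look like:

```python
from collections import Counter

RAGE_CLICK_THRESHOLD = 3  # same target clicked 3+ times in one session

def friction_signals(events: list[dict]) -> dict:
    """First-pass filter over one session's events (hypothetical event shape:
    {"type": "click", "target": "#submit"}). The agent flags sessions like
    these so a human only watches the replays that matter."""
    clicks = Counter(e["target"] for e in events if e["type"] == "click")
    return {
        "rage_clicks": [t for t, n in clicks.items() if n >= RAGE_CLICK_THRESHOLD],
        "errors": sum(1 for e in events if e["type"] == "error"),
        "dropped_mid_flow": bool(events) and events[-1]["type"] != "flow_complete",
    }
```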

Workflow 4: Always-On Product Opportunity Discovery

This is the most expansive workflow: agents that continuously monitor the full picture (charts, dashboards, feedback, session replays, web vitals, active experiments) and surface prioritized opportunities.

The output is a ranked list of problems and recommended actions, scored by impact. Frank gives the example of an agent noticing that a button click does nothing in a prototype, or flagging that an experiment treatment should be fully rolled out based on the data, or catching a KYC step in a funnel causing friction.

This is the fourth step of Frank’s discovery workflow: opportunity discovery.

Each flagged opportunity becomes a mini-spec that Frank can pass to Claude Code, which then plans the work, drafts changes, and kicks off parallel sub-agents to build or investigate.

How to set this up:

  1. Define a skill that instructs the agent to look across all available data sources for problems, not just surface trends
  2. Build in a scoring mechanism (Frank uses a "RISE score") so outputs are prioritized rather than just listed
  3. Connect it to your codebase if possible, so the agent can validate whether a flagged issue still exists in production before you act on it
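The session doesn't spell out the RISE formula, so as a stand-in, here's a RICE-style scoring sketch that shows the idea behind step 2: opportunities come back ranked, not just listed. Names and numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    reach: int         # users hitting the problem per week
    impact: float      # expected lift if fixed (0-3 scale)
    confidence: float  # 0-1
    effort: float      # person-weeks

def score(o: Opportunity) -> float:
    # RICE-style stand-in for Frank's RISE score, which isn't specified here.
    return o.reach * o.impact * o.confidence / o.effort

opportunities = [
    Opportunity("Dead button in prototype", 1_200, 2.0, 0.9, 0.5),
    Opportunity("KYC step causing funnel friction", 5_000, 1.5, 0.7, 3.0),
]
for o in sorted(opportunities, key=score, reverse=True):
    print(f"{score(o):8.1f}  {o.name}")
```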

Frank's Advice for Getting Started

Don't wait until your stack is perfect. Start with one repo, one skill, one workflow. Connect one data source via MCP. Run the output, note what you don't like, and ask the model to improve its own prompt. That loop (run, evaluate, improve) is what compounds over time.

"The agents are so good when given the right context and instructions on figuring out where or why they might have veered off in the wrong way. You can literally just coach the model to fix its prompt a few times until you get the output that you need." — Frank Lee, Principal PM, Amplitude

Spend time understanding your data taxonomy before anything else. The quality of your agent outputs is directly proportional to how well you've defined the data structures underneath them.

How Chris Patton Uses Productboard Spark to Connect Qualitative and Quantitative Product Data

Chris Patton is a Principal PM at Productboard, where he's been focused for the past year on customer feedback analysis and building Productboard Spark, an agentic platform purpose-built for product managers.

His core belief is that staying grounded in customer feedback has always been the most important part of the PM job. What's changed is that the volume of signal has exploded, and the old methods for making sense of it don't scale.

Why Most Teams Are Doing Feedback Analysis Wrong

The instinct when you have too much data is to dump it all into an LLM and ask for a summary. It's tempting because it's fast. But Chris argues this is the wrong approach, and the results can actually be misleading.

The core problem is context window management. Even with million-token context windows, analysis of large transcript volumes suffers from what Chris calls the "messy missed middle": insights buried in the middle of the context that the model deprioritizes or misses entirely.

The right approach is more structured: analyze individual transcripts or conversations first, extract key insights from each, then categorize and cluster across sources. It takes more steps, but the outputs are actually reliable. That matters when you're using them to justify what you build next.
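The shape of that pipeline matters more than the exact prompts. A minimal sketch, again assuming a generic `call_model` function (hypothetical):

```python
def extract_insights(transcript: str, call_model) -> str:
    """Map step: analyze each transcript alone, so no insight sits in the
    'missed middle' of one enormous combined context."""
    return call_model(
        "Extract the key customer problems from this transcript as short "
        f"bullets, each with a supporting quote:\n\n{transcript}"
    )

def cluster_insights(per_call: list[str], call_model) -> str:
    """Reduce step: categorize already-extracted insights across sources,
    keeping quotes attached so the themes stay auditable."""
    return call_model(
        "Group these per-call insights into themes, count how often each "
        "theme recurs, and keep the quotes attached:\n\n" + "\n\n".join(per_call)
    )

def analyze(transcripts: list[str], call_model) -> str:
    per_call = [extract_insights(t, call_model) for t in transcripts]
    return cluster_insights(per_call, call_model)
```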

The End-to-End Spark Workflow for Product Teams

Here's how Chris runs discovery in practice, using Productboard Spark connected to Amplitude via MCP.

Step 1: Start with an OKR-anchored prompt

Chris begins every analysis grounded in a specific objective. In his demo, the OKR is improving onboarding activation. The prompt asks Spark to surface problems in the data related to onboarding, scoped to that strategic goal from the start.

Chris’ Spark onboarding prompt.

Step 2: Pull quantitative data first

Spark connects out to Amplitude via MCP and returns funnel data: completion rates broken down by step, where users drop off, and how significant the drop-off is. In the demo, the data shows meaningful friction at the identity verification step.

Step 3: Layer in qualitative signal

With the numbers in view, Spark then synthesizes the qualitative data (customer feedback, call transcripts, support tickets) to explain why those drop-offs are happening. The output surfaces the top issues: identity verification failures, account funding errors, problems with joint account setup, email entry problems. Each finding includes citations back to the raw source data, so you can audit the evidence directly.

The qualitative analysis output from Spark.

Simultaneously, the Amplitude agent generates supporting dashboards automatically. The quant evidence is built and waiting before you even ask for it.

An Amplitude-generated dashboard visually showing the quant evidence.

Step 4: Dig into the biggest problem

Chris selects the highest-impact issue (Plaid integration failures during the funding step) and asks Spark to dig deeper and brainstorm fallback solutions.

Deep dive response into the Plaid failure.

Spark returns a targeted analysis: direct quotes from affected users, patterns in how the failure manifests, gaps in the current experience. Then it generates a set of proposed solutions: micro-deposit verification, debit card funding, or allowing users to skip the funding step entirely.

Fallback solutions to consider.

Step 5: Evaluate solutions against the OKR

With the goal of accelerating onboarding, Chris evaluates the tradeoffs. Micro-deposits take one to two business days. Too slow. Debit card funding is instantaneous. That becomes the direction.

Step 6: Draft the brief

Chris asks Spark to draft a full product brief based on everything surfaced. Spark pulls the brief template, incorporates all the quantitative and qualitative evidence, and produces a structured document covering:

  • Problem statement grounded in data
  • Quantitative supporting evidence (baseline metrics, funnel drop-off rates)
  • Qualitative supporting evidence (user quotes, themes, citations)
  • Proposed solution
  • Success metrics with baselines and OKR-aligned targets
  • User scope and constraints
  • Risks

What used to take days of synthesis and writing happens in a single workflow.

Final product brief generated by Spark.

Where Chris Uses This in Practice

The workflow above covers deep-dive discovery, but Chris applies variations of it across his entire PM lifecycle:

  • Weekly team syncs: A digest of all feedback across the product domain, used to steer where the team focuses
  • Problem area deep dives: Synthesizing all targeted follow-up calls around a specific issue into one cohesive analysis
  • Marketing: Surfacing strong customer quotes that can be used as social proof
  • Beta recruitment: Asking Spark which customers to contact based on who surfaced the problems the new feature addresses

"Customer feedback analysis and staying customer-centric has always been, in my opinion, the most important part of being a product manager. What's changed is that we have this tsunami of signal coming in, and now we have to figure out how to actually make sense of all of it." — Chris Patton, Principal PM, Productboard

The Principle Both Workflows Share

Frank's workflows and Chris's workflows look different on the surface—one is deeply engineering-adjacent, the other rooted in qualitative synthesis—but they share the same underlying conviction.

Speed is only valuable when it's pointed at the right problem.

Frank builds a validation step into every agent workflow. Before acting on any output, he asks the agent to check its own assumptions and confirm the underlying data is solid. Chris builds structured layers into feedback analysis specifically to avoid the false confidence that comes from dumping everything into a single prompt and accepting what comes back.

Both of them also push back on the instinct to just ship because you can.

"There is a temptation right now, particularly with the AI acceleration of development, to just throw everything out there and ship it because you can. Just because we can doesn't mean we should. Stay grounded in your customer feedback. Make sure you're moving the right metrics. There is still craft to product management." — Chris Patton, Principal PM, Productboard

The AI is doing real work here. Analysis, synthesis, drafting, prototyping. But judgment is still the job. What to build, why it matters, whether this is worth the bet: those questions aren't answered by the speed of the workflow. They're answered by the quality of the thinking that the workflow frees you up to do.


Frequently Asked Questions

What are AI workflows for product discovery?

AI workflows for product discovery are structured, repeatable processes that use AI agents and tools to collect, synthesize, and analyze product signals (behavioral data, customer feedback, experiment results) and translate them into prioritized opportunities and decisions. Rather than one-off prompts, they're systems that run continuously and improve over time.

How do AI agents help product managers with discovery?

AI agents can automate the time-intensive parts of discovery: generating weekly product briefs, running root cause analysis on metric changes, scanning session replays for friction patterns, and monitoring experiments. This frees product managers to focus on interpretation and judgment rather than data gathering and synthesis.

What is MCP and why does it matter for product teams?

MCP stands for Model Context Protocol. It's a standard that lets AI clients like Claude or Cursor connect directly to external tools and data sources like Amplitude, Productboard, or Linear, without manual copy-pasting. For product teams, it means agents can pull live data, generate analyses, and push outputs back into your tools of record automatically.

How do you combine qualitative and quantitative data in product discovery?

The most effective approach is to use quantitative data to identify where problems are happening (drop-off rates, metric regressions, usage gaps) and qualitative data to explain why. Tools like Productboard Spark, connected to analytics platforms via MCP, can surface both layers in a single workflow and ground the synthesis in cited source data.

What is Productboard Spark?

Productboard Spark is an agentic platform built specifically for product managers. It connects to your customer feedback, analytics tools, and product data to surface insights, synthesize signals, and help draft briefs and plans grounded in a decade of product management frameworks and best practices. It's currently available in public beta.

See It in Action

Reading about these workflows is useful. Watching them run in real time, with actual data, actual tools, and two practitioners who've spent years building them, is a different thing entirely.

Frank and Chris demo all of this live in the on-demand webinar: their repos, their agent workflows, the Spark demo from prompt to finished brief.

Watch the full webinar: How AI Agents Are Rewiring Product Discovery →
