AI-empowered product management

published by Ben Allen

Getting Started: My GenAI Journey

During my time at GitHub, I had a front-row seat to the Generative AI ("GenAI") revolution. GitHub Copilot was one of the first GenAI products delivered at scale, and it was fascinating to see how it transformed the developer experience.

As a hobbyist pythonista, I started using Copilot for coding tasks, and "wow moments" were frequent. As a Product Manager ("PM") who spends a lot of time writing in VS Code, I soon realized Copilot's potential extended beyond writing code. It was super handy for PM tasks too; writing user stories with Copilot was a breeze, and I was hooked.

Fast forward a year, and the world seems to have changed:

  • The arrival of general-purpose GenAI tools such as ChatGPT and Claude AI
  • Every software product you've ever used scrambling to add value with "AI features"
  • The rise of Model Context Protocol (MCP) and the potential for GenAI to connect to existing tools

Progress has been rapid and GenAI tools are clearly becoming essential to the PM toolkit. After six months of daily Claude AI use at work, I still remember my first interaction. Unlike other tools I'd used, it didn't provide immediate answers. Instead, it asked clarifying questions back. It wanted to understand my intent first. This flipped a switch in my mind. Was this tool a thinking partner, a collaborator? If so, what does that mean for product managers?

Based on my six-month journey, let's explore practical examples of how GenAI tools can empower PMs across discovery, delivery, communication, and more. Can time-poor PMs start to get some time back? Can we use these tools to be more effective in our roles? Or is this some hype-bubble-fad nonsense, and a distraction from real work?

Discovery and Ideation: AI as Your Thinking Partner

PMs are always thinking through risks. In Inspired, Marty Cagan discusses the importance of value, usability, feasibility, and business viability risk. Digging into these risks requires some form of discovery to address the overarching question: "What are we building, and why?"

GenAI tools can help with this discovery process. For example, you can use Claude to generate customer interview questions, and analyze customer feedback. This is an area where I've had great success and saved a lot of time.

For GenAI newcomers, start with this mindset: your response quality depends entirely on the context you provide. The challenge becomes: what constitutes good context for customer interviews or usability tests? I start with a simple Product Requirements Document (PRD) template and fill in what I know. This might include:

  1. Problem statement
  2. Value proposition
  3. User personas
  4. Goals and tasks
  5. An outline of features
  6. Known risks, assumptions, and constraints

This PRD becomes my context foundation. Using this as input, here's my three-step collaboration process.
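To make this concrete, here's a minimal Python sketch of how filled-in PRD sections can be assembled into prompt context. The section names mirror the template above; `build_prompt` is a hypothetical helper for illustration, not part of any library.

```python
# Sketch: assembling a Claude prompt from PRD sections.
# build_prompt is a made-up helper; the section names follow
# the PRD template described above.

PRD_SECTIONS = [
    "Problem statement",
    "Value proposition",
    "User personas",
    "Goals and tasks",
    "Feature outline",
    "Risks, assumptions, and constraints",
]

def build_prompt(prd: dict, task: str) -> str:
    """Concatenate filled-in PRD sections, then append the task."""
    parts = []
    for section in PRD_SECTIONS:
        content = prd.get(section, "").strip()
        if content:  # skip sections you haven't filled in yet
            parts.append(f"## {section}\n{content}")
    parts.append(f"## Task\n{task}")
    return "\n\n".join(parts)

# Illustrative PRD content only
prd = {
    "Problem statement": "Developers lose time re-running failed uploads.",
    "User personas": "Engaged developer: ships daily, values fast feedback.",
}
prompt = build_prompt(
    prd,
    "Generate customer interview questions exploring the value risk "
    "of this feature.",
)
print(prompt.splitlines()[0])  # → "## Problem statement"
```

Whether you paste the result into the Claude app or send it via an API, the point is the same: structured, filled-in context travels with every request.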

Step 1: Create a detailed prompt

I start with specific context and clear instructions. For example:

Play the role of a user researcher, expert in customer interviews and familiar with research best practices described by the Nielsen Norman Group (https://www.nngroup.com/) and other sources such as Steve Krug (https://sensible.com/).

Given the attached PRD, generate a set of customer interview questions that will help me explore the value risk of the feature.

I'm defining value risk as "will customers find this feature valuable enough to use it regularly?"

I will be interviewing customers who roughly match the "engaged developer" user persona defined in the PRD.

Ask any clarifying questions before addressing the task.

Step 2: Ask Claude to critique the prompt

Before using any prompt, I ask Claude to review it:

Play the role of a prompt engineer, expert in prompt design and familiar with best practices described by the Anthropic AI Fluency course (https://www.anthropic.com/ai-fluency).

Consider the following prompt and provide feedback on how it could be improved.

[insert prompt here]

Step 3: Iterate on the results

With a strong prompt in hand, Claude generates interview questions, and it's time to iterate on them, either through manual editing or by asking Claude to refine them further.

Analysis and More

Once you have a set of interview questions, you can use them to conduct customer interviews. After the interviews, you can analyze the responses and look for patterns. This is another area where AI can really shine. You can ask Claude to summarize the responses, identify key themes, and generate insights based on the data.
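As a toy illustration of the pattern-finding step, here's a keyword-based first pass in Python. The themes, keywords, and notes are invented; in practice I hand the raw responses to Claude for far richer analysis, but the data-shaping work looks like this.

```python
# Sketch: a first-pass theme count over interview notes, assuming
# you maintain a small keyword list per theme. Themes and sample
# notes are made up for illustration.
from collections import Counter

THEMES = {
    "pricing": ["price", "cost", "expensive"],
    "onboarding": ["setup", "install", "getting started"],
}

def count_themes(transcripts: list[str]) -> Counter:
    """Count how many transcripts mention each theme at least once."""
    counts = Counter()
    for text in transcripts:
        lower = text.lower()
        for theme, keywords in THEMES.items():
            if any(kw in lower for kw in keywords):
                counts[theme] += 1
    return counts

notes = [
    "Setup was painful, took me an hour to install the CLI.",
    "Love the tool but the price feels high for solo devs.",
    "Install worked fine; cost is fine too.",
]
print(count_themes(notes))  # onboarding: 2, pricing: 2
```

A keyword count only surfaces what you already thought to look for; the value of handing transcripts to Claude is catching the themes you didn't anticipate.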

This is just one example of how GenAI can enhance the product discovery process. Other examples of where I've had success include:

  • Using speech-to-text tools to transcribe user interviews, then using Claude to analyze the transcripts for key themes and insights
  • Writing and editing PRDs
  • Brainstorming different approaches to a problem, such as how to improve a feature or address a customer pain point

I'm using AI to accelerate my discovery process while keeping my PM judgment in the driver's seat. Speed and quality gains are real, but they require thoughtful collaboration, not blind delegation.

Product Delivery: User Stories and Edge Cases

My AI-Enhanced Workflow

Moving from discovery to delivery, I've had great success using GenAI to improve the issues I write - whether epics, user stories, bugs, or feature requests. My typical workflow is as follows:

  1. Write a first draft issue in VS Code
     • Use GitHub Copilot to help with this process!
  2. Ask Claude AI to review the issue and provide feedback
     • Even better, you could stay in VS Code and ask Claude Code to review the issue
  3. Iterate on the issue based on Claude's feedback

I've seen the biggest value gains when writing "epics". GenAI is really good at breaking down large features into smaller user stories, or suggesting features you hadn't thought of.

The Edge Case Discovery Story

I recently wrote a user story for a simple CLI feature that uploads data to a server. After writing the first draft, Claude's feedback was incredibly helpful. It suggested additional edge cases I hadn't considered, such as:

  • What happens if the server is down?
  • What happens if the user sends multiple files and only some of them are processed by the server?
  • What happens if the user tries to upload a file that is too large?

These seem obvious in hindsight, but while juggling five other tasks and coordinating school pickup, I'd missed them completely. Claude's feedback helped me to write a more complete issue that covered more edge cases and was a better starting point for discussion with my engineering team.
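To show how those edge cases translate into testable requirements, here's a hedged Python sketch. The 10 MB limit and function names are illustrative assumptions, not the actual feature.

```python
# Sketch: handling two of the edge cases above, before and after an
# upload. MAX_UPLOAD_BYTES and both function names are assumptions
# made for illustration.

MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # assumed server-side limit

def too_large(size_bytes: int) -> bool:
    """Edge case: reject a file before upload if the server would."""
    return size_bytes > MAX_UPLOAD_BYTES

def needs_retry(sent: list[str], processed: list[str]) -> list[str]:
    """Edge case: after a partial batch, which files should be re-sent?"""
    done = set(processed)
    return [name for name in sent if name not in done]

print(too_large(20 * 1024 * 1024))                 # → True
print(needs_retry(["a.csv", "b.csv"], ["a.csv"]))  # → ['b.csv']
```

Writing the edge cases down as acceptance criteria like this gives the engineering team something concrete to confirm or push back on.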

Different PMs will have different levels of responsibility for writing issues. Some PMs will write them themselves, others will hand over to someone else on the team. Regardless, I think GenAI can help improve the quality of issues and make the process more efficient.

Building Effective Prompts

Given how frequently we write user stories, investing in a great prompt pays huge dividends. A good prompt might consider:

  • The elements of a good issue (e.g., user story, acceptance criteria, edge cases, constraints, and dependencies)
  • Examples of good issues that you and your team have written in the past
  • Context on the feature, such as the PRD or other relevant documentation
  • Information about the product architecture
  • Marketing documentation which explains positioning and value propositions

Connecting PM Requirements to AI Development

This systematic approach to issue improvement got me thinking about standardizing AI guidance across our entire development workflow. For example, you could add GenAI instructions to your code repository (like CLAUDE.md memory management) to direct how your engineering team uses AI for code generation. If your instructions include "all front-end code must be accessible", how does that change the requirements you write? This connection between PM requirements and AI-assisted development is an area which I think will evolve significantly over time.
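For illustration, a CLAUDE.md might sketch rules like the following. This is an invented example, not a file from any real repository:

```markdown
# CLAUDE.md (illustrative sketch)

## Code generation rules
- All front-end code must be accessible (WCAG 2.1 AA).
- Prefer existing design-system components over custom CSS.

## When writing issues
- Use the team user-story template: "As a <persona>, I want..."
- Always include acceptance criteria and known edge cases.
```

If rules like these shape every AI-assisted change, the PM's requirements can lean on them instead of restating them in every issue.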

Communication and Automation: AI as Your Content Partner

Go-to-Market Content Creation

Go-to-market responsibilities are often a key part of a PM's role. This includes writing marketing copy, product documentation, and stakeholder communication. GenAI tools can help with these tasks too.

As with discovery work, context is crucial. For example, if you're writing marketing copy, you might provide:

  • A style guide or brand guidelines
  • Examples of previous marketing copy that you like
  • Information about the target audience and their pain points
  • Plenty of context about the feature you're adding or changing

In previous jobs, I've been lucky enough to work with marketing teams or product marketing professionals. Usually, these folks are experts in their field and I'm not trying to replace them with GenAI. My main goal is often "how do I communicate this feature to our product marketing team so they can write the marketing copy?" Simplifying concepts, generating useful analogies, and providing clear explanations are all things GenAI can help with and I would encourage PMs to explore.

Other areas where I've had success include:

  • Writing and improving product documentation
  • Writing scripts and presentations for video demos
  • Iterating on ideas from the marketing team
  • Writing more engaging Slack messages for internal communication

Product Operations Automation

Beyond content creation, automation offers huge time-saving potential for PMs. While I'm more technical than most PMs, I believe GenAI can help anyone with tasks like:

  • Writing bash scripts to automate repetitive tasks
  • Pulling down and analyzing data from different sources
  • Generating GitHub Actions workflows to automate product processes

As a simple example, I recently had Claude Code write a GitHub Action to count labels on GitHub issues to help analyze the frequency of customer feature requests. These counts made quarterly planning easier and more data-driven.
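The counting step itself is simple once the issue data is in hand. Here's a Python sketch over made-up issue data; in my real workflow, a GitHub Action fetched the issues from the GitHub API first.

```python
# Sketch: counting labels across GitHub issues. The issue data here
# is invented; in practice it would come from the GitHub API.
from collections import Counter

issues = [
    {"number": 101, "labels": ["feature-request", "cli"]},
    {"number": 102, "labels": ["bug"]},
    {"number": 103, "labels": ["feature-request", "auth"]},
]

label_counts = Counter(
    label for issue in issues for label in issue["labels"]
)
print(label_counts.most_common(1))  # → [('feature-request', 2)]
```

Even a count this simple turns "I think customers keep asking for this" into a number you can rank against other requests at planning time.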

This GitHub Action example hints at broader possibilities - automated data pipelines, workflow optimization, and reporting systems that eliminate manual PM tasks. I expect this kind of AI-powered product operations to become essential as teams scale and tools mature.

Future Possibilities: MCP and Connected AI Systems

Model Context Protocol (MCP) is a new way to connect GenAI models to tools, prompts, and resources, allowing the model and third parties to share context and work together more effectively.

Claude has great "connectors" and, while writing this article, the VS Code team recently announced the MCP marketplace.

If you consider the four data types from Cagan's Inspired book - customer, product, market, and business - MCP opens up exciting possibilities for PMs. Imagine prompts that connect directly to your tools - pulling customer feedback, analyzing product data, and generating insights automatically.

For example: a prompt that pulls recent customer feedback and identifies key themes, or one that analyzes product usage patterns and highlights engagement trends for new features.

Today, I often use the following MCP servers:

  • GitHub MCP Server - to create issues from VS Code, and to pull down data from GitHub issues
  • Google Docs - for seamless document sharing and PRD collaboration
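For reference, wiring up an MCP server for a Claude desktop client typically means adding an entry to a JSON config file such as `claude_desktop_config.json`. The snippet below shows one common shape for a GitHub server; the package name, fields, and token handling vary by server and change over time, so treat this as a sketch and check the server's own documentation:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```

Once registered, the model can call the server's tools (e.g., "list issues") directly from a conversation, with no manual export/import step.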

It's still early days, but I think MCP has the potential to transform how PMs access and analyze data by eliminating manual export/import workflows. The key will be finding the right tools and prompts to make this a reality. Assuming the right connections exist, it feels like you're only limited by your imagination.

Of course, MCP isn't magic. Setup can be complex, especially for less technical PMs, and you're still dependent on your underlying data quality and tool APIs. Security policies, IT constraints, and integration challenges mean that while MCP expands possibilities, real-world implementation often requires patience and technical support.

The Reality Check: Limitations and Cautionary Notes

GenAI tools aren't perfect. They can make mistakes, misunderstand context, or generate irrelevant responses. As PMs, we need to be aware of these limitations and not over-rely on AI for critical decisions.

The Domain Expertise Requirement

I would describe the current state of GenAI tools as "awesome when you have some domain expertise". For example, if you're using GenAI to create customer interview questions, you need to know what good interview questions look like. You need to know to look out for things like leading questions. If you have no experience in user research, GenAI isn't going to magically make you a great researcher.

Similarly, if you're using GenAI to write automation scripts, it can be very useful to understand the underlying code and architecture created by the tool. If you're a non-technical PM, you might find it more difficult to unlock the full potential of GenAI in automation and product operations.

While I was at GitHub, the company frequently used an "office hours" model to help small subject-matter-expert teams scale. This gave product teams access to resources they typically wouldn't have. For example, you could "book" office hours with data scientists to get feedback on data problems your team was facing. My recommendation is that if you have access to experts, use them! Bring your GenAI co-created assets to them and ask for a review. If you don't have access to experts, consider investing in training or resources to build your own expertise. You'll become a better PM and GenAI will be there to enhance your new skills.

Making the Investment Decision

It's also critical to understand that GenAI isn't a "genie in a lamp" - you can't just wish for a perfect user story, comprehensive competitive analysis, or breakthrough product insight. You need to define clear roles (yours vs. AI's), set goals, provide context, and iterate on results. If you have to invest time into a solution, you will inevitably face the question: "is this worth it?"

  • When does it make sense to invest time in great prompts?
  • When does it make sense to use simple "search-engine-like" queries with GenAI tools?
  • When is it better to stick with traditional methods?

Remember the mindset: you're working with a very intelligent, very fast intern. This intern can help you with many tasks, but you still need to provide guidance, review their work, and ensure quality control.

Future of the PM Profession

These quality considerations lead to bigger questions about our profession's future.

  • What's the end game for PMs?
  • What does our profession look like in 5-10 years?
  • Will we be replaced by AI tools that can do our jobs better, faster, and cheaper?
  • Will we evolve into a new role that focuses on strategic thinking, creativity, and human connection?
  • Is there some horrid middle ground where 1 PM with AI tools is expected to do the work of 3 "traditional PMs"?

I sincerely hope that the value of PMs is recognized and that we can use AI to enhance our roles, not replace them. I hope we can remain focused on a very small number of products and customer problems rather than being stretched into some kind of context-switching hell-scape.

On a more optimistic note, GenAI tools are still evolving rapidly. Large Language Models (LLMs) will continue to improve, and we will find new ways to use them. As PMs, we need to stay adaptable, curious, and ready to learn from these rapid changes.

Next Steps: Building AI into Your PM Practice

Essential Training Resources

Using GenAI is a skill, so it's important to seek training and opportunities to practice. From a training perspective, I've almost exclusively used three resources:

  1. Anthropic's prompt engineering docs
  2. Claude AI itself - using Claude to critique prompts and provide feedback
  3. Anthropic's AI Fluency course

The first two resources are great for learning the basics of prompt engineering and how to use these technologies effectively. The AI Fluency course is more comprehensive, and has really taken my understanding of GenAI to the next level. I highly recommend it to anyone looking to expand their knowledge and skills. It's free, and you can complete it at your own pace.

Staying Current with AI Evolution

Given the rate of change in this space, in addition to training, I recommend following the different model companies to see how their capabilities are evolving. Social media is a great way to do this; for example, I follow several LLM companies on LinkedIn.

The same companies are also worth a "subscribe" on YouTube, where they often post videos of new features and capabilities.

Getting Organizational Support

This learning approach works best when you have organizational support to experiment. I've been fortunate in my career to work with companies that have embraced AI technologies early. This has given me the opportunity to experiment, learn, and share my experiences with others. If you're in a position to do so, I encourage you to explore these tools and see how they can enhance your PM practice.

Options Without Organizational Support

If you're not in a position to experiment with AI capabilities, what are your options? You could:

  1. Advocate for your team to explore these tools at work
  2. Use these tools in your personal projects
  3. Find another job that does allow these tools!

I've certainly seen enough value from GenAI tools to make artificial intelligence a core part of my PM practice. I believe the argument that "you won't lose your job to AI, but you will lose your job to someone who uses AI" has merit in our field. Embracing these tools and integrating them into your workflow can provide a significant competitive advantage.

Final Thoughts: The PM Superpower?

Wrapping up: GenAI is a new tool in the PM toolkit. A super powerful tool that requires training and practice to use effectively. I'd argue that GenAI is more than just a tool - it's a thinking partner that can help you be more effective in your role. The exciting part for PMs is that the skills we already have - critical thinking, problem solving, technical writing, and communication - are both required by and enhanced by GenAI.

So, is AI truly empowering for PMs? I believe the answer is yes - but only for those willing to develop the judgment and skills to use it thoughtfully. PMs who embrace AI as a thinking partner, while maintaining their critical thinking and domain expertise, will find themselves capable of work that seemed impossible just a few years ago.

Appendix: Transparency Statement

In creating this content, I collaborated with Claude AI and Claude Code, using the Claude Sonnet 4 model, to assist with plan creation, outline generation, content drafting, and editing. I affirm that all AI-generated and co-created content underwent thorough review and evaluation, including verification of factual claims and alignment with my understanding and expertise.

The final output accurately reflects my knowledge, professional perspective, and intended meaning. While AI assistance was instrumental in streamlining the creation process, I maintain full responsibility for the content, its accuracy, and its presentation.

This disclosure is made in the spirit of transparency and to acknowledge the valuable role of AI collaboration in developing materials for Product Managers, AI enthusiasts, and those interested in the intersection of AI and product management.
