#4 - GenAI for PMs: Using GitHub Copilot for Documentation Workflows
Creating and updating documentation is a recurring, yet often tedious, task for PMs. LLMs have become an invaluable asset for documentation, largely due to their proficiency in generating and refining text-based content. In the fourth installment of my series, I’ll explore how I use GitHub Copilot to augment different aspects of product documentation.
1. From Rough Drafts to Polished Markdown
Generating formal documentation from rough drafts is perhaps the most common AI use case for Product Managers. Typically, this involves providing an AI with a draft and reference links, then asking it to match a specific tone and style. However, you can significantly augment this workflow by providing the agent with properly scoped knowledge and tools.
For my product, documentation is hosted in a public GitHub repository. By working within a local fork in VS Code with GitHub Copilot enabled, the AI gains full context of our existing formatting and product capabilities. I can simply provide a few bullet points, and Copilot generates high-quality documentation without further instruction. Furthermore, by using "agent mode," the AI can autonomously create the Markdown file, update the Table of Contents (TOC), and raise a PR for final approval.
2. Generating FAQs and Best Practice docs from existing fragmented docs
Mature products often end up accumulating pages upon pages of documentation. In the Azure product docs, for example, many products span well over 100 pages. AI is very good at distilling insights from these extensive libraries to create high-value standalone assets, such as FAQs, "tips and tricks" sheets, or best practice guides.
I’ll give a recent example: I needed to create a "Best Practices" guide for customers migrating from a legacy product version to a modern one. I had a list of about 30 different pages written by different people tackling various aspects of the migration, such as which APIs to use, how specific features would behave, what tooling to employ, how monitoring would work, and so on. Since all these files were available in my local repository, I asked GitHub Copilot to analyze the entire set and generate a comprehensive "Best Practices" document. It did this effortlessly, saving me the hours it would have taken to manually synthesize that much information.
3. Making Context-aware Bulk Edits
There are times when you must make small, repetitive edits across an entire repo—such as adding a "retirement banner" when a specific feature or SKU is deprecated. Before AI, this required a tedious manual search to identify every relevant page, followed by a series of repetitive copy-paste actions. In extensive documentation libraries, ensuring 100% coverage is difficult, as features are often mentioned in multiple places and in nested sections.
With GitHub Copilot’s "agent mode," this bulk editing becomes effortless. You simply instruct the agent to "Add a retirement banner to every page referencing this specific feature." The AI scans the repository, identifies the appropriate files, and determines the optimal placement for each. While a banner typically lives at the top of a page, Copilot is context-aware enough to place it mid-document if the feature is only mentioned in a specific subsection.
Once the mapping is complete, it applies the changes across all files and raises a PR for your review. What used to be an hour of repetitive "busy work" is now a two-minute automated task.
4. Distilling documentation from chats and emails
In support cases or email threads, you often realize a product’s behavior in specific scenarios needs to be more explicitly documented. Traditionally, this meant taking an informal discussion, distilling the essence into proper documentation, and identifying the section of your docs that required an update. This can happen multiple times in a week, so to avoid constant context switching, I used to bookmark these threads and process them whenever I found free time. Very often, that meant the update landed weeks later.
Now, I simply provide the entire conversation thread to GitHub Copilot. The agent distills the core technical details, determines the precise wording for the update, identifies all applicable files in the repository, and raises a PR. This enables near-real-time documentation improvements without the cognitive burden of manually synthesizing the note and making the changes.
In the next article of this series, I will explore how to improve on this further by building an agent that proactively scrapes work communications to identify these documentation gaps and manage the entire update process automatically.
Parting Note
While GenAI is a powerful tool for improving fluency, structure, and consistency, it is important to remember that it predicts the most likely next word rather than verifying objective truth. Therefore, human review remains critical. I have noticed instances in the past where AI generated documentation that appeared logically sound but was factually incorrect. Always review every edit or addition made to your documentation, as even a small error can lead to widespread customer confusion.
Note: My AI toolkit primarily consists of Microsoft products, as they are free for me as an employee, compliant for handling work-related data, and fully integrated into the ecosystem I use daily. However, you can build similar agents using whichever generative AI tools you prefer.