AI Product Management

AI Product Management is evolving rapidly. The growth of generative AI and AI-based developer tools has created numerous opportunities to build AI applications. It is now possible to build new kinds of things, which in turn is driving shifts in best practices in product management — the discipline of defining what to build to serve users. In this post, I’ll share some best practices I have noticed.

Use concrete examples to specify AI products. Starting with a concrete idea helps teams gain speed. If a product manager (PM) proposes to build “a chatbot to answer banking inquiries that relate to user accounts,” this is a vague specification that leaves much to the imagination. For instance, should the chatbot answer questions only about account balances, or also about interest rates, processes for initiating a wire transfer, and so on? But if the PM writes out a number (say, between 10 and 50) of concrete examples of conversations they’d like the chatbot to handle, the scope of the proposal becomes much clearer. Just as a machine learning algorithm needs training examples to learn from, an AI product development team needs concrete examples of what we want an AI system to do. In other words, the data is your PRD (product requirements document)!

In a similar vein, if someone requests “a vision system to detect pedestrians outside our store,” it’s hard for a developer to understand the boundary conditions. Is the system expected to work at night? What is the range of permissible camera angles? Is it expected to detect pedestrians who appear in the image even though they’re 100m away? But if the PM collects a handful of pictures and annotates them with the desired output, the meaning of “detect pedestrians” becomes concrete. An engineer can assess whether the specification is technically feasible and, if so, build toward it.
Initially, the data might be obtained via a one-off, scrappy process, such as the PM walking around taking pictures and annotating them. Eventually, the data mix will shift to real-world data collected by a system running in production. Using examples (such as inputs and desired outputs) to specify a product has been helpful for many years, but the explosion of possible AI applications is creating a need for more product managers to learn this practice.

Assess technical feasibility of LLM-based applications by prompting. When a PM scopes out a potential AI application, whether the application can actually be built — that is, its technical feasibility — is a key criterion in deciding what to do next. For many ideas for LLM-based applications, it’s increasingly possible for a PM, who might not be a software engineer, to try prompting — or write just small amounts of code — to get an initial sense of feasibility.

[Reached length limit. Full text: https://lnkd.in/gYY-hvHh ]
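The “data is your PRD” idea can be made literal in code. Below is a minimal, hypothetical sketch (the class, field names, and example conversations are invented for illustration, not taken from the post): the spec for the banking chatbot is simply a list of concrete example requests, each marked in or out of scope, that engineers can inspect to judge feasibility.

```python
from dataclasses import dataclass

@dataclass
class SpecExample:
    user_message: str        # a concrete input the PM expects
    desired_behavior: str    # what the chatbot should do with it
    in_scope: bool           # whether this request is part of the product

# The PM would write 10-50 of these; a handful shown here.
SPEC = [
    SpecExample("What's my checking account balance?",
                "Look up and state the balance", in_scope=True),
    SpecExample("How do I start a wire transfer?",
                "Explain the wire-transfer steps", in_scope=True),
    SpecExample("What stocks should I buy?",
                "Decline politely: investment advice is out of scope",
                in_scope=False),
]

def scope_summary(spec):
    """Summarize the spec so engineers can see the proposal's boundaries."""
    return {
        "in_scope": sum(e.in_scope for e in spec),
        "out_of_scope": sum(not e.in_scope for e in spec),
    }
```

The same examples can later double as an evaluation set for the shipped system, which is what makes this format more useful than a prose-only PRD.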
Data-Driven Innovation Analysis
-
#TeachMeTuesday How much innovation are we missing if we only look at the R&D line in financial statements? 👉 Result: a lot!

📄 In a newly published Research Policy article (👉 https://lnkd.in/enDMWuVA) by Neophytos Lambertides, Marina Magidou, and Anna Emilia Maruska, Ph.D., the authors ask a simple question: what if part of the innovation signal is hiding in plain sight — in what firms say, not only in what they report numerically? Evidence in the paper shows that over 10% of firms reporting zero or missing R&D expenditures nonetheless exhibit clear signals of innovation activity.

🧠 What this implies for innovation measurement
The results reinforce a broader message: innovation is increasingly intangible, distributed, and poorly aligned with traditional reporting categories. Relying narrowly on reported R&D risks underestimating innovation — especially in services, digital-intensive sectors, and firms innovating through processes rather than products.

🏛️ What could better measurement look like?
Building on the paper’s contribution, three complementary directions stand out:
🔔 Systematic use of narrative disclosures to capture latent innovation activity beyond formal R&D
🔔 Broader “total innovation investment” concepts, combining R&D with other innovation-relevant intangible expenditures
🔔 Richer output indicators, integrating patents with trademarks, designs, and market-based innovation signals

🌍 Link to global policy work
These insights align closely with the measurement philosophy of the WIPO, which combines traditional and non-traditional indicators to better capture innovation in all its forms.
-
Machine Learning is NOT a one-man job!

When it comes to building ML solutions, it is important to think end-to-end: from the customer to the customer. This will help to architect, plan, and execute. As part of planning, it is important to understand who will need to be involved and when. Let's run through a typical project.

Someone has a great idea (an executive, an engineer, a product manager, etc.)! Let's assume we already reframed the business problem as a machine learning solution, and let's assume that we validated the idea as a financially viable project. A product manager is going to establish a set of business requirements (How many inference requests per day or per second? How many users? Minimum predictive performance? Acceptable latency?) by talking to customers, running surveys, or simply by looking at the alignment of the stars. The product manager will then communicate the requirements to a technical lead, who, in turn, will need to convert those into technical requirements (Batch or real-time? How many servers? Fallback mechanisms? Do we need databases or queues to store the resulting data?). This work usually results in a set of system design blueprints.

The technical lead and product manager can then start with strategic planning: what are the success metrics, the milestones, the timelines, the headcount, the required resources, and, more importantly, the budget? When the plan is established, we can then assign the work. There are usually 3 axes of development: ML modeling, the data pipelines, and the operational infrastructure. The ML engineers iterate on the ML models, the data engineers build the data pipelines to and from the development and serving pipelines, and the MLOps engineers provide different levels of automation, testing, and monitoring of the underlying services.
Data engineers need to work with the database architects of the original data sources, while data protection officers ensure the data complies with the different regulations (GDPR, CCPA, PII, HIPAA, FCRA, etc.). The ML system itself will generate data: the features, the inferences, the user feedback loop… That data can be analyzed by data scientists, who in turn can partner with the ML engineers and the other engineers to provide insight on how to improve things. We also need frontend and backend engineers to expose the resulting inference to users.

It takes a village! As in many engineering domains, communication skills are what separate a senior engineer from a junior one, and an effective tech lead needs to dabble in every aspect of the process to orchestrate a project to success.

#machinelearning #datascience #artificialintelligence -- 👉 50% off my LangChain course: https://lnkd.in/gquCdf45 --
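The hand-off the post describes — business requirements in, technical requirements out — can be sketched as a small translation step. This is a hypothetical illustration (the field names, the 10x peak-traffic factor, and the per-replica throughput are invented assumptions, not a real capacity-planning method):

```python
import math
from dataclasses import dataclass

@dataclass
class BusinessRequirements:      # what the product manager gathers
    requests_per_day: int
    max_latency_ms: int
    min_accuracy: float

@dataclass
class TechnicalRequirements:     # what the tech lead derives
    serving_mode: str            # "batch" or "real-time"
    replicas: int

def derive_technical(biz, per_replica_rps=50.0):
    # Sub-second latency targets force real-time serving.
    mode = "real-time" if biz.max_latency_ms < 1000 else "batch"
    # Assume traffic peaks at 10x the daily average (invented factor).
    peak_rps = biz.requests_per_day / 86_400 * 10
    replicas = max(1, math.ceil(peak_rps / per_replica_rps))
    return TechnicalRequirements(serving_mode=mode, replicas=replicas)

tech = derive_technical(BusinessRequirements(1_000_000, 200, 0.90))
```

Writing the conversion down, even this crudely, forces the two roles to agree on what each requirement actually implies for the system design blueprints.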
-
New streaming data sources and AI’s use of them have revitalized the real-time event stream processing market and boosted revenue. Product leaders can use this research to assess how real-time data, analytics, and AI can enhance and differentiate their offerings, and to adjust their roadmaps to leverage this potential. Gartner recommends that product leaders:

🔵 Allocate a portion of the engineering budget to evaluate the accessibility and applicability of real-time data and analytics that can impact desired business outcomes. Do so by experimenting with new data streams and event logs to understand their ability to inform and adapt products and services.

🔵 Work with engineering teams to design an architecture that can leverage real-time event stream data by identifying technology and requisite technology partnerships to consume the data within the reasonable confines of your product’s existing architecture.

🔵 Demonstrate the positive effect on decision quality and outcomes that results from including real-time contextual data in your products and services. Do so by measuring the accuracy of models that either predict outcomes or recommend actions, as well as embedding the best models in decision workflows.

I asked Kevin R. Quinn, Vice President, Analyst - Technical Product Management, Gartner, why he believes this research matters:

💡 "AI is accelerating every aspect of business. Decisions can’t just be based on what happened, but need to account for what is happening right now."
💡 "Real-time data enables timely decision-making, enhances responsiveness, improves operational efficiency, and provides a competitive edge in rapidly changing environments."

Our research shows how the market for real-time streaming data is changing, and how it is more accessible and relevant for providers and end users than ever before. Check out the insights from Kevin R. Quinn and me (David Pidsley), which are exclusively available to Gartner clients who are product leaders subscribed to our "Emerging Technologies and Trends Impact on Products and Services" research.

▶️ "Emerging Tech: Revolutionize Your Products With Real-Time Data and AI" [Published 31 January 2025]
🔗 https://lnkd.in/ev7nk82R (requires client login)

#DecisionIntelligence #RealTime #Data #AI #RealTimeData #StreamingData #StreamingAnalytics #StreamAnalytics #EventStream #EventStreamProcessing
-
(Part 3 of my series: The Boardroom Guide to AI-Ready Data Strategy)

Traditional Data Governance was built for an era of static reports and predictable workflows. But the moment you introduce Generative AI and autonomous agents, the entire risk landscape shifts. In this new world, bad data isn’t just a quality issue, it is a reputational, regulatory, and financial threat. If your governance model is still focused on locking down access and enforcing compliance checklists, you are operating as the Department of No.

Modern AI Governance requires a different philosophy: The Two-Sided Governance Model

🛡 Defensive (The Shield):
• Regulatory compliance
• PII masking & privacy
• Access control (RBAC/ABAC)
• Model risk assessments
This keeps us safe and compliant.

⚔ Offensive (The Sword):
• Real-time data lineage
• Data quality scoring
• Metadata enrichment
• Policy versioning and model attribution
This gives AI the context it needs to behave reliably.

Why Metadata Matters More Than Ever: LLMs reason on context. If your metadata is outdated or missing, your AI will confidently generate wrong answers, outdated policies, or biased decisions. RAG without metadata is just a search engine wearing a suit.

This is no longer governance as a cost centre. This is governance as a business enabler, the safety harness that lets us move fast without falling off the cliff. As CAIOs and CDOs, the responsibility is to build governance systems that accelerate innovation, not block it.

#AIGovernance #ResponsibleAI #RiskManagement #DataPrivacy #EnterpriseRisk #GenAI #DataLeadership
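The metadata point can be made concrete with a small governance filter in front of retrieval. A hypothetical sketch (the `Chunk` fields, versions, and thresholds are invented for illustration): each retrieved fragment carries governance metadata, and off-version or stale content is dropped before it ever reaches the LLM.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Chunk:                     # a retrieved document fragment
    text: str
    policy_version: str          # which policy edition it documents
    last_reviewed: date          # governance freshness signal

def governed_filter(chunks, current_version, today, max_age_days=365):
    """Keep only chunks citing the current policy with a fresh review date."""
    return [c for c in chunks
            if c.policy_version == current_version
            and (today - c.last_reviewed).days <= max_age_days]

chunks = [
    Chunk("Refunds allowed within 30 days.", "v3", date(2025, 1, 10)),
    Chunk("Refunds allowed within 90 days.", "v1", date(2021, 6, 1)),
]
kept = governed_filter(chunks, current_version="v3", today=date(2025, 6, 1))
```

Without the metadata fields, both chunks would look equally retrievable, and the model could confidently quote a policy that was retired years ago.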
-
What if AI could unlock the hidden knowledge from your databases to boost your next project?

This is the second post of my series about how #AI is reshaping R&D/Product Management, with BCG X, BCG Platinion and Boston Consulting Group (BCG).

In many organizations, R&D sits on a goldmine… buried. Structured data, experimental results, test logs, field performance, documentation of successes and failures... everything is there. Yet when a new project starts, teams often rely on partial knowledge, intuition, or what they happen to find. The real problem isn’t access or data quality. It’s the invisibility of the knowledge - patterns, lessons, unfinished activities, blocked items… signals trapped across datasets.

For a Large Equipment OEM, we unleashed this invisible knowledge with an agentic approach: an orchestration of agents that (1) extract data from a best-practices library, (2) adapt knowledge to the specific context, and (3) leverage calculation tools to combine experience databases and physics.

👉 What if we moved from static databases to agentic data systems designed to surface what you don’t know you know?

👀 Imagine an intelligent agent embedded in your R&D #data ecosystem. It uncovers hidden patterns across historical datasets, connects weak signals that no single query would reveal, and brings forward past learnings relevant to the ongoing project. It highlights what has been tried - and what has been overlooked - turning fragmented data into contextual, decision-ready insight. “Across multiple datasets, similar approaches showed consistent limitations. However, a less-explored parameter delivered better outcomes, here’s how you could build on it.”

No more rediscovering the same limits. No more missing hidden opportunities.

➡️ The impact is tangible: faster and smarter project ramp-up, systematic reuse of hidden knowledge, better exploration of the solution space, and ultimately a higher success rate for new initiatives.

#RnD #tech Ralph Rahme Roberto Ventura
-
Product managers & designers working with AI face a unique challenge: designing a delightful product experience that cannot fully be predicted. Traditionally, product development followed a linear path. A PM defines the problem, a designer draws the solution, and the software teams code the product. The outcome was largely predictable, and the user experience was consistent.

However, with AI, the rules have changed. Non-deterministic ML models introduce uncertainty & chaotic behavior. The same question asked four times produces different outputs. Asking the same question in different ways - even just an extra space in the question - elicits different results. How does one design a product experience in the fog of AI? The answer lies in embracing the unpredictable nature of AI and adapting your design approach. Here are a few strategies to consider:

1. Fast feedback loops: Great machine learning products elicit user feedback passively. Just click on the first result of a Google search and come back to the second one. That’s a great signal for Google to know that the first result is not optimal - without typing a word.

2. Evaluation: Before products launch, it’s critical to run the machine learning systems through a battery of tests to understand how the LLM will respond in the most likely use cases.

3. Over-measurement: It’s unclear what will matter in product experiences today, so measure as much as possible in the user experience, whether it’s session times, conversation topic analysis, sentiment scores, or other numbers.

4. Couple with deterministic systems: Some startups are using large language models to suggest ideas that are then evaluated with deterministic or classic machine learning systems. This design pattern can quash some of the chaotic and non-deterministic nature of LLMs.

5. Smaller models: Smaller models that are tuned or optimized for specific use cases will produce narrower output, controlling the experience.
The goal is not to eliminate unpredictability altogether but to design a product that can adapt and learn alongside its users. Just as the technology has changed products, our design processes must evolve as well.
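Strategy 4 above can be sketched as a propose-then-verify loop: a non-deterministic generator (here a random stand-in for an LLM call; the function names and discount values are invented) proposes candidates, and a deterministic business rule decides what is allowed through.

```python
import random

def llm_suggest_discount(rng):
    # Stand-in for a non-deterministic LLM suggestion (invented values).
    return rng.choice([5, 15, 30, 80])   # percent off

def is_valid_discount(pct):
    # Deterministic business rule: discounts capped at 25%.
    return 0 < pct <= 25

def suggest_with_guardrail(rng, max_tries=10):
    # Keep asking the generator until a proposal passes the rule.
    for _ in range(max_tries):
        pct = llm_suggest_discount(rng)
        if is_valid_discount(pct):
            return pct
    return None   # caller falls back to a safe default

result = suggest_with_guardrail(random.Random(0))
```

The generator stays unpredictable, but the deterministic check bounds the worst case: nothing outside the rule ever reaches the user.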
-
How do you get from an idea to a Machine Learning product? While many view machine learning as simply training models with Python code, the reality is far more complex and structured. The ML development process is a systematic journey from business problem to deployed solution, requiring careful consideration at each stage to ensure technical delivery leads to business value. Here's the lifecycle broken down:

𝟭. 🔎 𝗠𝗼𝗱𝗲𝗹 𝗦𝗰𝗼𝗽𝗶𝗻𝗴 & 𝗗𝗮𝘁𝗮 𝗙𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻𝘀
Set the foundation for success by defining clear objectives and ensuring data readiness.
Problem Definition – Define clear business problems and figure out the use case for ML
Data Sourcing & Considerations – Consider data accessibility, regulatory requirements and permissions
Data Ingestion – Establish reliable data pipelines that feed your model
Data Preparation – Transform raw data into clean, analysis-ready formats through pipelines
Exploratory Data Analysis – Conduct exploratory analysis to understand patterns before modelling

𝟮. 🧠 𝗠𝗼𝗱𝗲𝗹 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁
Build a functioning machine learning model based on your prepared data while factoring in reproducibility and performance.
Feature Engineering – Convert raw data into meaningful features your model can actually use
Model Selection – Test multiple algorithmic approaches against your constraints
Baseline Model Development – Develop simple baseline models before investing in complexity
Version Control – Implement version control for code, data, AND experiments
Model Training – Train models through constant iteration and cross-validation

𝟯. 🚀 𝗠𝗼𝗱𝗲𝗹 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁
Bring the model to production so it can deliver value throughout the organisation.
Model Evaluation & Validation – Validate performance through comprehensive testing frameworks
Model Serialization & Packaging – Serialize and package models with all dependencies
Resource Planning – Plan computational resources and scaling strategies
Deployment Architecture Planning – Design deployment architecture considering reproducibility
Business Integration – Integrate with business systems through well-designed APIs
Model Registry – Maintain a registry of all model versions and metadata

𝟰. 🔄 𝗠𝗮𝗶𝗻𝘁𝗲𝗻𝗮𝗻𝗰𝗲
Ensure your deployed model continues to perform effectively over time and learn from new data.
Feedback Loops & Continuous Learning – Establish feedback loops to capture user interactions, helping build future model iterations
Performance Tracking – Track business impact alongside operational costs to identify value creation
Model Monitoring & Observability – Monitor for data drift and model degradation

Check out my latest article on productionising a Machine Learning model (link in the comments) and let me know what you think!
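The "baseline model development" step deserves a concrete illustration: a majority-class predictor, a few lines of stdlib Python, sets the accuracy floor that any more complex model must beat before it earns its complexity. (The toy churn labels are invented.)

```python
from collections import Counter

def majority_baseline(train_labels):
    """Return a model that always predicts the most common training label."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return lambda _features: majority

def accuracy(model, rows):
    """rows: iterable of (features, true_label) pairs."""
    rows = list(rows)
    return sum(model(x) == y for x, y in rows) / len(rows)

# 3 of 4 training examples are "stay", so the baseline always says "stay".
model = majority_baseline(["stay", "stay", "stay", "churn"])
score = accuracy(model, [({"tenure": 2}, "stay"), ({"tenure": 9}, "churn")])
```

If a trained model cannot beat this score, the problem lies in the data or the framing, not in the choice of algorithm.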
-
Machine learning applications rarely stay static—they evolve. What begins as a simple baseline often grows into a multi-stage system shaped by scale, data complexity, and real-world constraints. In this tech blog, the engineering team at Shopify explains how their product classification system evolved as the platform scaled. The journey unfolds across three distinct stages, each with its own technical character. - Stage one focused on a traditional machine learning baseline: logistic regression with TF-IDF features built purely on product text. It was simple, interpretable, and efficient—a practical starting point. - Stage two introduced a multimodal approach, combining both text and image signals within a single model. This significantly improved accuracy, especially when product descriptions were incomplete or ambiguous. However, it remained largely a task-specific classifier trained on a fixed taxonomy. - Stage three marked a shift toward vision-language models. Instead of simply mapping inputs to predefined labels, these models learn richer semantic representations by aligning images and text in a shared embedding space. This enables deeper product understanding and better generalization as taxonomies evolve and new product types emerge. The key takeaway is that real-world machine learning systems mature in layers. You don’t jump straight to the most sophisticated model. Instead, you iterate—balancing accuracy with scalability—and design systems that can adapt as the business grows. #DataScience #MachineLearning #Classification #Evolution #Iteration #SnacksWeeklyonDataScience – – – Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts: -- Spotify: https://lnkd.in/gKgaMvbh -- Apple Podcast: https://lnkd.in/gFYvfB8V -- Youtube: https://lnkd.in/gcwPeBmR https://lnkd.in/gYuU_dNT
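Stage one of the evolution described above (TF-IDF features over product text) is easy to sketch from scratch. This stdlib-only toy (a real system would use a library vectorizer; the formula here is the plain tf × log(N/df) variant) shows the core idea: terms appearing in every product carry no signal, while rarer terms get up-weighted.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Return one {term: weight} dict per document."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents each term appears.
    df = Counter(term for tokens in tokenized for term in set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({term: (count / len(tokens)) * math.log(n / df[term])
                        for term, count in tf.items()})
    return vectors

vecs = tf_idf(["red shirt", "red shoe"])
```

Here "red" occurs in both documents, so its weight collapses to zero, while "shirt" and "shoe" keep distinguishing weight, which is exactly why such a baseline is interpretable.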
-
While building Planbow, I realized that a product manager needs market insights even more than marketing and sales teams do, and that’s the biggest reason a modern PM should be equipped with AI superpowers. Let’s understand why:

Matching the speed of development - With development co-pilots and low-code and no-code tools ready with their disruptive capabilities, building software is now possible in weeks. Conventional product management cannot match this agility and will become the bottleneck.

Data-Driven Decisions - A product manager needs to make decisions based on ever-changing market dynamics, customer behavior, and competitor strategies. AI helps in gathering and analyzing vast amounts of data quickly, providing actionable insights that go beyond traditional research methods.

Predicting Trends - AI can analyze historical data and predict future trends, enabling product managers to stay ahead of the curve. This is crucial for crafting features and strategies that resonate with future market needs, not just current demands.

Customer Insights - Understanding customer pain points and preferences is key to successful product development. AI-powered tools can analyze customer feedback, reviews, and behavior in real time, helping PMs refine the product roadmap.

Efficiency in Execution - AI can automate repetitive tasks like A/B testing, performance tracking, and even certain design decisions, allowing product managers to focus on strategic initiatives that drive growth.

Personalization - In today’s competitive landscape, personalization is everything. AI allows product managers to create highly personalized user experiences based on data, ensuring that the product remains relevant to diverse user segments.

In short, AI empowers product managers to make smarter, faster, and more precise decisions, ensuring that their product stays competitive and innovative in a constantly evolving market.