A lot of product teams yearn for a concept of "Isolated Bet Impact." The challenge lies in understanding how individual initiatives influence key metrics within a complex system where multiple factors—both internal and external—interact simultaneously. One of your bets might have a positive impact even though your KPIs are down overall. Conversely, you might have placed a bad bet that is obscured by overall growth in the business. Teams risk falling into "resulting," as the poker player Annie Duke describes it: mistakenly judging the quality of decisions solely by lagging business outcomes rather than by the true impact of the bets.

Running multivariate experiments is part of the solution, but not the entire solution. While an experiment can isolate the impact of a product change on an input metric, the cumulative impact on lagging business KPIs remains unclear. Without a clear way to isolate the impact of bets, teams often give up on being metrics-driven and instead rely solely on intuition to prioritize bets or choose which experiments to run.

However, there is a structured approach that radically improves how teams use data to make decisions: deterministic KPI trees, a tool for bringing clarity and rigor to bet evaluation. These trees define relationships between metrics mathematically, providing a framework to estimate the contribution of individual bets to overarching business goals.

The attached screenshot showcases DoubleLoop's new Bet Simulator prototype (development just kicked off today!), which uses a deterministic KPI tree to estimate the impact of bets on a business's KPIs. Through the deterministic KPI tree, each factor can be analyzed in isolation. The relationships between metrics—such as Sales = Revenue per Visitor × Total Visitors—provide a structured way to attribute changes in the root metric (sales) to specific bets and external factors.

While deterministic KPI trees offer clarity, there are limitations. Many metric relationships are probabilistic, not purely mathematical. That said, even an approximate sizing of bet opportunities with deterministic KPI trees is far superior to not estimating bet impact at all. Using deterministic models enables teams to:

- Use data to discuss the relative importance of different bets
- Avoid overestimating or underestimating impacts
- Ensure assumptions align with realistic expectations and the nuances of the business
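To make the idea concrete, here is a minimal sketch of a deterministic KPI tree built around the Sales = Revenue per Visitor × Total Visitors relationship mentioned above. All metric values and bet effect sizes are hypothetical, not taken from any real product:

```python
def sales(revenue_per_visitor: float, visitors: float) -> float:
    """Root metric defined deterministically from its child metrics."""
    return revenue_per_visitor * visitors

# Baseline metrics before any bet lands (illustrative numbers).
baseline = sales(revenue_per_visitor=2.00, visitors=50_000)

# Bet A: a checkout redesign assumed to lift revenue per visitor by 5%.
with_bet_a = sales(revenue_per_visitor=2.00 * 1.05, visitors=50_000)

# Bet B: an SEO push assumed to add 8,000 visitors.
with_bet_b = sales(revenue_per_visitor=2.00, visitors=58_000)

# Because the tree is deterministic, each bet's contribution to the
# root metric can be isolated by holding everything else constant.
impact_a = with_bet_a - baseline
impact_b = with_bet_b - baseline
print(f"Bet A impact: {impact_a:,.0f}, Bet B impact: {impact_b:,.0f}")
```

Even this toy version supports the discussion the post describes: with a shared tree, the team can argue about the assumed lifts (5%, 8,000 visitors) instead of arguing about intuition.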
How Data Modeling Influences Decision Making
Summary
Data modeling is the process of structuring information so it mirrors real-world relationships, helping organizations make more informed decisions. By organizing data thoughtfully, teams can reveal hidden connections, clarify business impacts, and reduce errors when choosing a direction.
- Clarify outcomes: Start with clear questions or business goals to keep your model focused and avoid unnecessary complexity.
- Reveal hidden risks: Use models that capture real-life connections so you can spot opportunities or threats that spreadsheets often miss.
- Build trust: Make your data models transparent and explainable so decision-makers feel confident rather than relying on gut instinct.
For years, I thought most companies made decisions based on data and careful analysis. Then I got closer to the inside of those decisions.

I saw supply chain executives fighting over spreadsheets with 20 tabs, each one producing a slightly different answer. I saw managers defaulting to “the way we’ve always done it,” even when the stakes were in the millions. I saw incredibly smart teams chasing gut instincts because the data wasn’t trusted, the process wasn’t clear, or the models weren’t explainable.

That changed the way I thought about my own work. It wasn’t enough to just build a solver model, or an elegant piece of code. The real question was: 👉 Does this decision process give leaders confidence that they’re not leaving money on the table?

I’ve come to believe three things:
1️⃣ Most organizations don’t measure the cost of being wrong. They underestimate how expensive “good enough” really is.
2️⃣ Consistency is underrated. A process that gives a repeatable, explainable answer beats a one-off “heroic” decision every time.
3️⃣ Bias creeps in quietly. Without structured frameworks, politics and personalities decide more than we admit.

Looking back, some of the most impactful projects I’ve been part of weren’t the flashiest. They were the ones where we gave decision-makers clarity: here is why this is the best choice, here is what it costs if you do otherwise, here’s the confidence level behind it.

That’s why I work in optimization today. Not because I love algorithms (though I do), but because I’ve seen what happens when organizations fly blind.

So here’s my challenge to you. When your team makes its next critical decision, pause and ask yourself:
✅ Could I defend this choice if a board member or regulator asked me “why this?”
✅ Do I know the cost of being wrong?
✅ Am I confident this is the best decision, or just a reasonable one?

Because if you don’t know the answers, you’re not really making decisions. You’re just hoping.
-
Your data strategy has a blind spot. It's missing critical connections.

Most organizations store data in ways that hide their most valuable insights. This leads to missed opportunities and undetected risks. Graph modeling changes this by representing data the way your business actually works. Instead of forcing relationships into rigid tables, graphs capture the natural connections between customers, products, suppliers, and transactions. Your data finally reflects reality.

Graphs have four key elements:
🎯 Nodes represent your business entities like customers, products, and locations.
🔗 Relationships show how they connect through purchases, partnerships, and dependencies.
📋 Properties store relevant details such as dates, amounts, and contact information.
🏷️ Labels organize similar entities for easy identification.

When fraud investigators model accounts and addresses as connected nodes, criminal rings become visible instantly. When supply chains are mapped as graphs, single points of failure emerge before they cause disruptions. The connections reveal what spreadsheets hide.

The real power emerges when you ask business questions that span multiple relationships:
↳ Who are your most influential customers?
↳ Which suppliers create the biggest risk?
↳ What paths do successful transactions take?

Organizations using graph modeling detect fraud faster, optimize supply chains more effectively, and identify opportunities competitors miss. That's why at data² we built the reView platform on a foundation of graphs.

💬 What hidden relationships could transform your business decisions? Share your thoughts below.
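The four graph elements described above can be sketched without any graph database at all. This toy version uses plain Python dicts; the entities, labels, and relationship types are all illustrative, not tied to any specific platform:

```python
# Nodes carry a label (entity type) and properties (details).
nodes = {
    "c1": {"label": "Customer", "name": "Acme Corp"},
    "c2": {"label": "Customer", "name": "Globex"},
    "a1": {"label": "Address", "city": "Springfield"},
}

# Relationships connect nodes and can carry their own properties.
relationships = [
    ("c1", "REGISTERED_AT", "a1", {"since": "2023-01-10"}),
    ("c2", "REGISTERED_AT", "a1", {"since": "2023-02-02"}),
]

def neighbors(node_id: str, rel_type: str) -> list[str]:
    """Follow one hop of a given relationship type from a node."""
    return [dst for src, rel, dst, _ in relationships
            if src == node_id and rel == rel_type]

# A fraud-style question that spans relationships:
# which customers share the same registered address?
sharing = [nid for nid, props in nodes.items()
           if props["label"] == "Customer"
           and "a1" in neighbors(nid, "REGISTERED_AT")]
print(sharing)  # both customers point at the same address node
```

In a real system this traversal would be a one-line query in a graph database; the point of the sketch is that the shared-address pattern is explicit in the relationship structure, where a row-per-customer table would hide it.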
-
Your data model already made a clinical decision. Most teams do not realize it.

Clinical workflow and data structure are not the same thing. Someone translates between them. That translation decides meaning. Data models are not neutral. Here is where it shows up.

Examples from real healthcare work:
• A visit starts when the patient arrives, but the table starts when someone clicks check in
• A discharge time reflects paperwork, not when care ended
• A provider field captures one name, not the care team
• An encounter ignores unit transfers
• A visit count mixes office visits, procedures, and phone calls

None of these are bugs. They are design decisions. The failure is hiding them.

The missing step sits between workflow and schema:
• Walk through one real patient
• Mark where decisions happen
• Ask what the clinician believes is true
• Decide what the model will preserve
• Write down what the model will lose

This is translator work. When no one owns it:
• Metrics fight each other
• Clinicians stop trusting reports
• Rework feels endless

Strong analytics respect how care actually happens.

One simple test: if a clinician asks about one patient, can you show the full context in the chart? If not, the analysis is not ready to scale. Pressure test your next metric on one real case.
-
Start with the End in Mind: A Key to Better Data Models

When it comes to real-world data modeling, always start with the end goal. Ask yourself: what questions does this model need to answer?

Here’s why this approach works:
* It keeps your model focused on business needs.
* You avoid unnecessary complexity by only including relevant tables and fields.
* Stakeholders get the insights they need faster.

At a previous company, we needed to analyze customer churn. By focusing on this specific outcome, we identified key tables: Customer, Sales Transactions, and Product Returns. This targeted approach allowed us to create a concise model that directly addressed our churn analysis needs.

Tips for new users:
* Define key metrics/KPIs first, like total sales or monthly trends.
* Build a functional model quickly and refine it as needed.
* Create measures early to guide relationships and validate results.

Remember: a model that serves its purpose is better than a perfect one that doesn’t.

#DataModeling #PowerBI #BusinessIntelligence #DataAnalytics #BIBestPractices #DataVisualization #CustomerChurn #KPIDriven #DataDrivenDecisions #EfficientModeling #BIInsights #DataOptimization #PowerBIModels #ChurnAnalysis #AnalyticsTips
-
One of the biggest mistakes data teams make is underestimating the importance of data modeling. If you take a closer look within a cloud data platform, data modeling is an 𝒊𝒎𝒑𝒆𝒓𝒂𝒕𝒊𝒗𝒆 piece in each data layer. 𝐇𝐞𝐫𝐞'𝐬 𝐡𝐨𝐰:

𝐒𝐭𝐚𝐠𝐢𝐧𝐠 𝐋𝐚𝐲𝐞𝐫: This initial phase involves collecting raw data from various sources. Proper data modeling at this stage ensures that the data is accurately represented and organized for subsequent processes.

𝐂𝐨𝐫𝐞 𝐋𝐚𝐲𝐞𝐫: Integration of data from multiple sources happens here. Data modeling helps in creating consistent and unified entities, facilitating smoother data transformations and ensuring data integrity.

𝐂𝐨𝐧𝐟𝐨𝐫𝐦𝐞𝐝 𝐋𝐚𝐲𝐞𝐫: Applying complex transformations and business rules, this layer benefits immensely from robust data models, which provide a clear structure and standardize data across the organization.

𝐏𝐫𝐞𝐬𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧 𝐋𝐚𝐲𝐞𝐫: In the final stage, data is made consumption-ready for analytics and reporting. A well-designed data model ensures that the data is easily accessible and comprehensible for business users, enabling quick and accurate decision-making.

Effective data modeling ensures data integrity and quality, and enables quick decision-making.

#datamodeling #dataarchitecture #cloud
-
Most organisations don’t struggle to build predictive models. They struggle to turn predictions into better decisions.

I’ve seen this pattern repeatedly across industries. Teams invest heavily in improving model accuracy, yet the business outcomes barely move. Not because the models are wrong, but because the decision process around them hasn’t changed. Prediction alone rarely creates value. Decisions do.

In my latest article, “Decision Intelligence: Moving from Prediction to Action,” I explore why many AI initiatives stall after the modelling phase and what it takes to close the gap between analytical insight and operational impact.

The shift is subtle but important. Rather than treating models as the final output, organisations need to think in terms of decision systems, where predictions, policies, workflows, and feedback loops operate together. This is where disciplines like decision orchestration, real-time analytics, and continuous learning become critical. In practice, the most effective AI systems are not defined by how accurately they predict the future. They are defined by how effectively they shape actions in the present.

This article is the latest edition of The Data Science Decoder, where I unpack ideas shaping how AI is actually deployed inside organisations. If you’re working on operationalising AI, or trying to move beyond isolated models toward real business impact, you may find the perspective useful.

Read the full article here:
-
Most analytics teams are strong in tools. But weak in 𝐟𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧𝐬 𝐭𝐡𝐚𝐭 𝐚𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐝𝐫𝐢𝐯𝐞 𝐝𝐞𝐜𝐢𝐬𝐢𝐨𝐧𝐬.

In 2026, dashboards are not the bottleneck. 𝐃𝐞𝐜𝐢𝐬𝐢𝐨𝐧 𝐪𝐮𝐚𝐥𝐢𝐭𝐲 𝐢𝐬. The gap is not SQL or BI tools. It is how well teams connect 𝐝𝐚𝐭𝐚 → 𝐢𝐧𝐬𝐢𝐠𝐡𝐭 → 𝐚𝐜𝐭𝐢𝐨𝐧.

Here are the 5 pillars that separate reporting teams from decision engines:

→ 𝐃𝐚𝐭𝐚 𝐂𝐨𝐥𝐥𝐞𝐜𝐭𝐢𝐨𝐧 & 𝐌𝐨𝐝𝐞𝐥𝐢𝐧𝐠
• Clear schemas, clean pipelines, reliable data quality
• Strong modeling decisions define everything downstream
• Poor foundations = misleading insights at scale

→ 𝐒𝐐𝐋 & 𝐃𝐚𝐭𝐚 𝐌𝐚𝐧𝐢𝐩𝐮𝐥𝐚𝐭𝐢𝐨𝐧
• Ability to shape, join, and aggregate data correctly
• Performance-aware querying at scale
• This is still the core skill most teams underestimate

→ 𝐒𝐭𝐚𝐭𝐢𝐬𝐭𝐢𝐜𝐬 𝐟𝐨𝐫 𝐃𝐞𝐜𝐢𝐬𝐢𝐨𝐧-𝐌𝐚𝐤𝐢𝐧𝐠
• A/B testing, variance, confidence, causality
• Understanding what is signal vs noise
• Without this, insights become opinions

→ 𝐕𝐢𝐬𝐮𝐚𝐥𝐢𝐳𝐚𝐭𝐢𝐨𝐧 & 𝐒𝐭𝐨𝐫𝐲𝐭𝐞𝐥𝐥𝐢𝐧𝐠
• Translating data into clear narratives
• Choosing the right metrics and visuals
• Driving alignment, not just dashboards

→ 𝐁𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐓𝐡𝐢𝐧𝐤𝐢𝐧𝐠 & 𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐨𝐥𝐯𝐢𝐧𝐠
• Metric design (North Star, funnels, trade-offs)
• Root cause analysis and decision framing
• Connecting analysis to revenue and outcomes

The mistake most organizations make: they optimize for tools and dashboards. But high-performing teams optimize for decision systems. Because in the end, data does not create value. Decisions do.

P.S. Which pillar is strongest in your team today, and which one is holding you back?
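The "signal vs noise" point in the statistics pillar can be made concrete with a two-proportion z-test, a standard way to check whether an A/B difference in conversion rates is likely real. This sketch uses only the standard library, and the conversion counts are hypothetical:

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """z statistic and two-sided p-value for a difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF expressed with erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5.2% vs 5.8% conversion on 10,000 users each.
z, p = two_proportion_z(520, 10_000, 580, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A difference that looks meaningful on a dashboard (5.2% vs 5.8%) can still sit near the edge of statistical significance; without a check like this, as the post says, insights become opinions.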
-
🚀 𝐃𝐚𝐭𝐚 𝐒𝐜𝐢𝐞𝐧𝐜𝐞 𝐚𝐧𝐝 𝐌𝐚𝐜𝐡𝐢𝐧𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐀𝐫𝐞 𝐎𝐟𝐭𝐞𝐧 𝐓𝐚𝐮𝐠𝐡𝐭 𝐚𝐬 𝐓𝐨𝐨𝐥𝐬 𝐁𝐮𝐭 𝐓𝐡𝐞𝐲’𝐫𝐞 𝐑𝐞𝐚𝐥𝐥𝐲 𝐚 𝐖𝐚𝐲 𝐨𝐟 𝐓𝐡𝐢𝐧𝐤𝐢𝐧𝐠

Many people learn data science as a collection of disconnected topics. Statistics here. ML algorithms there. Visualization at the end. This document reinforces a more important idea: data science is an end-to-end reasoning process, not a toolbox.

Everything starts with understanding the problem. Before models, before features, before metrics, there’s a question to be answered and a decision to be improved. Without that clarity, even technically perfect models fail to create value.

Statistics isn’t just theory in this process. Descriptive statistics help you understand what the data is saying. Probability and distributions help you understand uncertainty. Inferential statistics help you decide whether patterns are real or accidental. These concepts shape better modeling decisions long before algorithms enter the picture.

Machine learning then becomes a natural extension, not a replacement. Regression, classification, clustering, dimensionality reduction, and ensembles are simply different ways to formalize patterns already observed in the data. When fundamentals are strong, model choice becomes logical instead of experimental.

Another important takeaway is that data preparation and feature engineering are not “pre-work.” They are core modeling steps. Scaling, binning, handling missing values, and managing imbalance often matter more than switching between algorithms.

Validation is where maturity shows. Understanding the bias–variance tradeoff, cross-validation strategies, and appropriate evaluation metrics is what separates models that look good in notebooks from models that hold up in production.

What I appreciate most about this material is that it treats data science as a lifecycle, from data understanding to modeling to communication. Visualization and storytelling aren’t optional add-ons; they’re how insights actually influence decisions.

I’m uploading this document because it captures the full picture clearly. If you’re learning data science, preparing for interviews, or revisiting fundamentals after working in the field, this kind of structured perspective compounds over time. Good data science isn’t about knowing more algorithms. It’s about making better decisions with data.

#DataScience #MachineLearning #AI #ArtificialIntelligence #Statistics #Analytics #MLFundamentals #DataVisualization #TechCareers #LearningInPublic #BuildInPublic #FutureOfWork
-
In data science we build AI/ML models that "make decisions". For example, you build a model that determines the likelihood of a customer churning. OK, so what? Now that you know, what actions should you take?

Now that you know the customer is likely to leave, or their estimated LTV is below some threshold, you need to take an action that leads to a desired outcome. This is where things get murky for data teams: things are often done haphazardly, and businesses wonder what the point of data is. So what do you do? What's the desired outcome? Should you call them? What if they use the call as an opportunity to cancel? Should you send them a promotion? Do nothing?

This "action that leads to a desired outcome" is at the heart of Decision Intelligence (DI). DI concerns itself with clarifying the action => outcome causal chain. Here's where human intuition and artificial intelligence get integrated into a coherent whole. It also happens to be the key element businesses care about when it comes to the value of data, because if data isn't helping you take the best possible actions to achieve your desired outcomes, then what's the point?

DI tries to answer these types of questions:
- If I take this action, what outcome will it lead to?
- What actions should I take to get the outcome I want?
- Given these actions that are available to me, what outcomes can I expect?

DI does this through a simple diagram, called a Causal Decision Diagram (or CDD), where the actions (or levers), intermediate effects, external factors, outcomes, and their cause-effect relationships are explicitly laid out for everyone to see. The ML/AI model is only one of a multitude of inputs to this diagram.

My point is that this should be the key deliverable for a data team. An agreed-upon CDD is often more valuable than the ML/AI model you were planning to build. In fact, you might even discover that the ML/AI model doesn't need to be that complicated. A few simple rules may be all it takes.
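A CDD is usually drawn as a diagram, but its logic can be sketched in code: levers, external factors, intermediate effects, and outcomes wired together with explicit cause-effect functions. In this toy churn example every effect size, cost, and LTV figure is hypothetical, standing in for estimates a team would agree on together:

```python
def churn_probability(send_promo: bool, competitor_launch: bool) -> float:
    """Intermediate effect: churn risk, driven by a lever and an external factor."""
    risk = 0.30                 # assumed baseline churn risk
    if send_promo:
        risk -= 0.08            # assumed effect of the retention promotion (lever)
    if competitor_launch:
        risk += 0.10            # assumed external pressure (not under our control)
    return risk

def expected_value(send_promo: bool, competitor_launch: bool,
                   ltv: float = 1_000.0, promo_cost: float = 50.0) -> float:
    """Outcome: expected customer value after the action is taken."""
    retained = 1 - churn_probability(send_promo, competitor_launch)
    return retained * ltv - (promo_cost if send_promo else 0.0)

# "If I take this action, what outcome will it lead to?"
for promo in (False, True):
    ev = expected_value(promo, competitor_launch=True)
    print(f"send_promo={promo}: expected value = {ev:.0f}")
```

The churn model itself only supplies one input (the baseline risk); the value comes from making the action => outcome chain explicit, so the team can debate the assumed effect sizes rather than the algorithm.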