Managers ask "What's the probability this closes?" when they SHOULD ask "What evidence do we have that this buyer can actually purchase?" Those are completely different questions. One is guessing. The other is qualifying. The good ol’ pipeline review sounds like this: "How's the Microsoft deal looking?" "Great, had a good demo last week. They're really interested. I'd say 70% to close." Based on what? A feeling? Their enthusiasm level? The number of follow-up questions they asked? Folks should really start structuring evidence-based pipeline reviews. Try examining four things: 1. Purchasing authority evidence: - Has this person bought software like this before? - What was the process and timeline for their last similar purchase? - Who else was involved in that decision? - What budget threshold requires additional approvals? 2. Internal alignment evidence: - Have they shared our proposal with their team? - What feedback came back from those conversations? - Who asked questions and what were they? - Has anyone expressed concerns or objections? 3. Urgency evidence: - What specific business event is driving this timeline? - What happens if they don't solve this problem by their stated deadline? - Is this tied to budget cycles, project launches, or compliance requirements? - Who gets in trouble if this doesn't happen? 4. Decision-making process evidence: - What steps remain in their evaluation process? - Who reviews contracts and how long does that typically take? - What other vendors are they considering and why? - When do they need to make a final decision and why? Think about it. A traditional pipeline review sounds like: "They love the product and want to move forward. The champion is pushing internally. Should close next month." Meanwhile, an evidence-based pipeline review sounds like: "Champion confirmed budget approval authority up to $100K. Shared proposal with IT director who asked about security protocols - we're scheduling that call Wednesday. 
Legal review typically takes 2 weeks. Final decision needed by month-end to start implementation before Q1 planning cycle." Which sounds better to you? Exactly. :) Most sales forecasts fail because they're built on enthusiasm metrics instead of buying evidence. Prospects can be excited about your solution and still never purchase it. Start changing up your questions: - Instead of "How confident are you?" ask "What evidence do we have?" - Instead of "What's the close probability?" ask "What could prevent this from happening?" - Instead of "When will they decide?" ask "What specific event triggers their decision?" Your forecast accuracy will improve when you stop predicting outcomes and start evaluating evidence.
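The four evidence categories above amount to a checklist you can actually audit. As a minimal sketch, assuming nothing beyond the post itself, here is one way to represent a deal's evidence in Python and flag the categories still resting on enthusiasm rather than facts (the class, field names, and example entries are all illustrative, not a real methodology):

```python
from dataclasses import dataclass, field

# Hypothetical evidence checklist for one deal. The four categories mirror
# the post; storing evidence as free-text notes is an illustrative choice.
@dataclass
class DealEvidence:
    purchasing_authority: list = field(default_factory=list)
    internal_alignment: list = field(default_factory=list)
    urgency: list = field(default_factory=list)
    decision_process: list = field(default_factory=list)

    def gaps(self):
        """Return the categories with no documented evidence yet."""
        return [name for name, items in vars(self).items() if not items]

deal = DealEvidence(
    purchasing_authority=["Champion confirmed approval authority up to $100K"],
    decision_process=["Legal review takes ~2 weeks", "Decision needed by month-end"],
)
print(deal.gaps())  # → ['internal_alignment', 'urgency']
```

A review run this way replaces "I'd say 70%" with a concrete list of what is still unknown about the deal.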
How to Make Evidence-Based Decisions
Summary
Making evidence-based decisions means relying on data, facts, and logical reasoning instead of guesses or gut feelings to choose the best course of action. This approach helps leaders and organizations avoid costly mistakes and ensures decisions are grounded in reality rather than assumptions.
- Gather relevant information: Take time to collect and analyze data that directly relates to your decision, so you know you're working with facts instead of opinions.
- Test your assumptions: Move from just having opinions to forming hypotheses that can be proven or disproven by looking at specific outcomes or evidence.
- Distinguish decision types: Identify whether a decision is reversible or not, and adjust your level of analysis and caution accordingly, so you don't rush important choices.
Stop having opinions. Start having hypotheses.

Everyone in business has opinions. But the best executives work with hypotheses.

An opinion is a point of view, sometimes even backed by data. “We’re losing customers because we’re more expensive.” It sounds logical. You check the numbers, and yes, you are more expensive. But that doesn’t prove that price is the reason they leave. That’s just a correlation.

A hypothesis, on the other hand, is testable. “If price is really driving churn, then the customers who pay the highest premium should be the ones leaving fastest.” See the difference? The opinion describes the symptom. The hypothesis seeks the cause and tells you what data will confirm or kill your assumption.

That’s what makes hypothesis thinking so powerful: it forces you to move from “I think” to “let’s check,” from debating opinions to discovering truth.

Here’s how to apply it like a master:
1. Start every discussion with an “if–then.” “If X is true, then we should observe Y.” It makes your thinking structured and measurable.
2. Define what would make you change your mind. Don’t just say “we’ll look at data.” Be specific about what evidence would disprove your idea.
3. Refine fast. Good consultants don’t cling to their first hypothesis. They update it every time new facts appear.

In short: Opinions sound smart. Hypotheses make you smarter.
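The churn hypothesis in the post is precise enough to check directly. A minimal sketch, using invented data purely for illustration: if price drives churn, the churn rate among the highest-premium customers should exceed the rate among the lowest-premium ones.

```python
# Invented illustrative data: each customer's price premium over the market
# rate, and whether they churned. None of these numbers come from the post.
customers = [
    {"premium_over_market": 0.05, "churned": False},
    {"premium_over_market": 0.05, "churned": False},
    {"premium_over_market": 0.20, "churned": True},
    {"premium_over_market": 0.20, "churned": False},
    {"premium_over_market": 0.35, "churned": True},
    {"premium_over_market": 0.35, "churned": True},
]

def churn_rate(group):
    return sum(c["churned"] for c in group) / len(group)

low = [c for c in customers if c["premium_over_market"] <= 0.10]
high = [c for c in customers if c["premium_over_market"] >= 0.30]

# The "if-then": if price drives churn, high-premium churn exceeds low-premium churn.
print(churn_rate(high) > churn_rate(low))  # → True
```

Note the hedge in the comment: a passing check supports the hypothesis; it does not prove it, since other differences between the groups could explain the gap.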
-
One of the ways that Jeff and the S-Team instilled operational excellence at Amazon was through disciplined, data-based decision-making. Most CXOs don't have a method to ensure their organizations make high-quality decisions. Below is my take on a set of principles and processes to operationalize good decision-making:

1. Timely - "Last Responsible Moment" (LRM): The concept of LRM emphasizes understanding the latest date by which each decision must be made to keep a project on track. Early decisions can lead to mistakes, and late decisions make it harder to meet operating goals. Forcing the organization to determine the last responsible moment improves its understanding of the decision, and it also spreads out the time between decisions.

2. Differentiated - One-Way vs. Two-Way Doors: The idea is simple: a two-way door decision is one where, if you walk through the door and don’t like what you see, you simply turn around and go back through. Two-way door decisions are reversible and can be made quickly without extensive analysis, enabling greater operational agility. One-way door decisions, on the other hand, are either irreversible or very expensive to reverse. These should be made slowly and with great care.

3. Informed Truth Seeking: Decisions should be made after a period of dedicated data gathering, analysis, and truth seeking, supported by a clear and concise business narrative. High-quality analysis includes objectively exploring multiple courses of action, with recommendations based on costs and benefits.

4. Debate: In the words of Peter Drucker, a decision is a judgement, not a choice between right and wrong. To understand an issue, a robust debate between high-judgement leaders offering different viewpoints is required. Corporate cultures that encourage open, data-based debate excel at this.

5. Consistent Forum: Decisions of consequence (one-way doors) should be made in a consistent forum. At Amazon, this meant reading a narrative at a meeting with Jeff and the S-Team. The decision(s) would be made in the meeting with all of the relevant people present. The decision wouldn't be reversed by a subsequent conversation with the CEO.

6. Detailed: The details of any decision matter a lot. The documentation used to make a decision should include all relevant implications and details: costs, personnel, timeline, and detailed features. This enables alignment with the CEO and allows teams to move fast once a decision has been made.

7. Experienced Leaders: The only way to get good at decisions is to make lots of them and be held responsible for the consequences. We all learn more from mistakes than from success. This requires an organizational structure and culture of ownership (not an ambiguous matrix), as well as a willingness to fail.

Leaders – what are your thoughts on my list? What would you edit, add, or subtract?
-
Virginia recently passed a law requiring health systems to destroy certain medical records that are over a decade old. On the surface, that seems straightforward. But when researchers at the University of Virginia looked at the data, they found that 5% of imaging studies over 10 years old were still actively used in clinical decisions.

That finding raises an important question: Should we get rid of old records just because they’re old, or keep them because they could still be clinically valuable?

On one hand, the risk and cost of holding onto data grow over time. Older records may have weaker security, higher storage costs, and, from a public health perspective, often provide limited value. On the other hand, even a small percentage of data can have a significant clinical decision-making impact. If a handful of patients benefit from that old information, purging it could do harm.

The UVA study also highlights a bigger point: our regulations are often arbitrary, not evidence-based. Ten years isn’t necessarily the right cutoff. What if we used data to define a threshold? What if we could measure how many years represent a standard deviation of use, so we know the point where the cost of storage and risk of a breach outweighs clinical benefit?

Researchers and regulators need to work together. Instead of blanket rules, we need decisions grounded in data. Only then can we make the right decisions when it comes to public health and public safety.
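A data-defined threshold of the kind proposed above could be computed very simply. The sketch below is hypothetical: the usage counts are invented (the UVA study only reports that 5% of imaging studies over 10 years old were still used), and the 1% cutoff is an arbitrary example of a policy choice, not a recommendation.

```python
# Invented counts of imaging studies still used in clinical decisions,
# keyed by record age in years. Real policy would use real usage data.
use_counts_by_age = {1: 900, 3: 400, 5: 150, 8: 60, 10: 30, 12: 12, 15: 3}
total = sum(use_counts_by_age.values())

def cutoff_age(threshold=0.01):
    """Smallest age at which records that old or older account for less
    than `threshold` of all clinical use (i.e., a data-driven retention cutoff)."""
    remaining = total  # clinical uses from records at least this old
    for age in sorted(use_counts_by_age):
        if remaining / total < threshold:
            return age
        remaining -= use_counts_by_age[age]
    return None  # usage never falls below the threshold in this data

print(cutoff_age(0.01))  # → 12
```

With these made-up numbers, records 12 or more years old account for under 1% of clinical use, so 12 years, not a legislated 10, would be the evidence-based cutoff. The point of the post stands: the right answer depends on measured usage, not a round number.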
-
Leaders are often celebrated for their "gut instinct," but between a hunch and a great decision lies a bridge forged in logic. While instinct may be where a leader's thinking begins, the true engine of rational decision-making is the consistent practice of logical reasoning. This isn't about rigid, academic formality. It's about a clear-eyed approach to how we navigate complex problems.

The two most powerful tools in a leader's arsenal are deductive and inductive reasoning. Understanding the difference between them is the first step toward mastering both.

Deduction: Think of this as a top-down process. It's about starting with a general principle or theory and applying it to a specific case to reach a logical conclusion. If your initial principle is true, your conclusion must also be true.

Induction: This is a bottom-up process. It’s about taking a series of specific observations to form a general theory or hypothesis. It is less certain than deduction, as the conclusion is only probable, not guaranteed.

If you're a grad student, you can apply this to your job search.

The Deductive Approach: You might start with a general assumption: "Most companies in this industry hire from a small list of target schools." You check the company’s career page, and your school is not on it. Deductively, your conclusion is that your chances are low.

The Inductive Approach: You notice the company recently hired three people from your school for other roles. You see the job description emphasizes skills you have. You connect with a recruiter who says the company is open to non-target applicants. Inductively, you infer a new theory: this company is more flexible than the common assumption suggests, and your chances may be better.

The Power of Both: The best leaders use both forms of reasoning. Deductive reasoning helps you filter out noise, avoid dead ends, and make quick, logical decisions based on established facts. Inductive reasoning, on the other hand, helps you test assumptions, find new opportunities, and discover a path where none seemed to exist. Critical thinking isn’t about choosing one over the other; it’s about knowing when to use which.

As a finance manager, I practice this daily. Deductive reasoning is crucial for tasks bound by policy or a model, like applying a standard forecasting methodology to a new business proposal. Inductive reasoning, conversely, is the engine of strategic thought. It's used when we need to find new signals in the noise, like analyzing customer feedback to form a new hypothesis about pricing or feature development. The best decisions are rarely driven by one alone; they are a conversation between the logical rigor of deduction and the insightful leaps of induction.

🧭 What's a decision where your "gut instinct" and data were at odds? How did you reconcile them?

#CriticalThinking #LogicalReasoning #DecisionMaking #BusinessStrategy #FinanceLeadership #LeadershipSkills #ProblemSolving #PostMBAReflections
-
Why evidence matters more than headlines: and the danger of snake oil!

In the world of neurodevelopmental conditions and traits such as autism and ADHD, one thing is constant: families are searching for answers. That very human drive for explanation and certainty can, unfortunately, make us vulnerable to misinformation, fear-mongering, and snake oil solutions. Two recent and well-known examples illustrate this risk.

🔹 The MMR vaccine and autism
In 1998, a small case series suggested a link between the MMR vaccine and autism. Despite only involving 12 children, the story made global headlines. We now know that this research was not only flawed but fraudulent, and it has since been fully retracted. Yet the damage was real: vaccination rates dropped, measles outbreaks occurred, and public trust was eroded.

🔹 Paracetamol use in pregnancy
More recently, headlines have suggested that paracetamol (acetaminophen) during pregnancy may increase the risk of autism or ADHD. However, the largest study to date – covering more than 2.4 million children in Sweden – found no increased risk when careful sibling comparisons were made. Earlier “signals” were most likely due to family and environmental factors, not the medication.

These cases highlight a bigger problem: headlines are short, bold, and dramatic; science is careful, slow, and nuanced. False claims spread faster than careful corrections. Once fear takes hold, it can drive people toward unproven interventions, costly treatments, or avoidance of safe healthcare.

This is where “snake oil” creeps in. When people are scared, there will always be those ready to exploit that fear with promises of miracle cures, restrictive diets, or expensive “brain training” that lacks evidence.

So, what should we do?
✔️ Check the evidence base, not just one study – ask whether findings are replicated, large-scale, and peer-reviewed.
✔️ Be wary of anecdotes over data – individual stories can be powerful but are not proof.
✔️ Look at who benefits – is the claim linked to financial or political gain?
✔️ Communicate carefully – those of us in healthcare, education, or research have a duty to share science accessibly, honestly, and without sensationalism.

For families and professionals alike, the message is clear: don’t let fear be louder than facts. Evidence is our best safeguard against misinformation and exploitation, and it is what truly helps us support autistic and ADHD individuals to thrive.
-
Every referral in healthcare is a $10,000 decision. But too often, they’re still made by habit, not data.

“We’ve always referred here.” “That’s who we know.” “They’ve always been good to us.”

That mindset is common. But it leaves money on the table, and patients at risk. When you redesign referral networks around data instead of anecdotes, 3 things happen:
✅ Patients are guided to specialists with stronger outcomes
✅ Health systems cut avoidable costs at scale
✅ PCPs make decisions with clarity and confidence

So how do you put data into action?
→ Map referral patterns with scorecards. Use data to see where patients are actually going, highlight costly leakage, and identify which specialists consistently deliver better outcomes.
→ Lead with quality to win trust. PCPs care most about patient outcomes. Show them evidence of higher-quality care first; cost savings will follow naturally.
→ Set clear standards with data-backed agreements. Define expectations around access, communication, and coordination, backed by real data, so accountability is built in from day one.

Because the point isn’t to gather more data. It’s to use the right data to guide better action for patients, for systems, for growth. That’s how networks stop being a cost center and start driving sustainable growth.

👉 If you were redesigning your referral network today, which single data point would you put at the center?
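A referral scorecard like the one described above could start as nothing more than a weighted ranking. This is an illustrative sketch only: the specialists, scores, costs, weight, and the $15K cost cap are all invented assumptions, and a real scorecard would need risk adjustment and far richer outcome measures.

```python
# Hypothetical specialist data: an outcome quality score in [0, 1] and an
# average cost per episode of care. All numbers are invented for illustration.
specialists = [
    {"name": "A", "outcome_score": 0.92, "avg_episode_cost": 9500},
    {"name": "B", "outcome_score": 0.80, "avg_episode_cost": 8000},
    {"name": "C", "outcome_score": 0.95, "avg_episode_cost": 12000},
]

def scorecard(s, outcome_weight=0.7):
    """Blend outcome quality and cost; weighting quality higher reflects the
    post's advice to lead with quality. The $15K cap is an assumed ceiling."""
    cost_score = 1 - s["avg_episode_cost"] / 15000  # cheaper maps closer to 1
    return outcome_weight * s["outcome_score"] + (1 - outcome_weight) * cost_score

ranked = sorted(specialists, key=scorecard, reverse=True)
print([s["name"] for s in ranked])  # → ['A', 'C', 'B']
```

Note what the weighting does here: the cheapest specialist (B) ranks last because quality dominates the score, which is exactly the "lead with quality, cost savings follow" ordering the post argues for.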
-
Leaders often fall into a trap, believing they can recognize customer needs without actually talking to customers. This assumption is one of the biggest barriers to creating truly valuable products. I've seen this pattern repeatedly in my work with organizations. Leaders, bolstered by past successes, convince themselves they already know what customers want. It's easier to rely on assumptions than to embrace the uncertainty of product discovery. But here's what works: Start small. Build evidence through wins. Each time your team talks to customers and improves the product based on real feedback, you create proof that customer-centered approaches work. These wins compound into trust, which leads to more autonomy for teams. Evidence beats ego. When you can show that understanding customers leads to better products, you begin to shift the culture from assumption-based to learning-based decisions.