✅ Post Title: How to Run a GRC Program Review That Actually Moves the Needle

👋 Welcome to Week 12 of the series!

▪️ You've built policies.
▪️ You've assessed risks.
▪️ You've survived audits.

But when was the last time you zoomed out and asked:
💭 Is our GRC program actually working?

Many GRC programs just "exist." Few are strategically reviewed. Fewer are improved.

🧠 The Real-World Challenge:
You're so busy doing GRC... you don't stop to assess GRC itself. But leadership, regulators, and your own team want to know:
▪️ What's working?
▪️ What's stalled?
▪️ Where should we invest next?

📊 Best Practices for Reviewing Your GRC Program

1. Use a maturity model
▪️ Try NIST CSF, ISO 27001, or CMMI scales (1–5)
▪️ Assess across People, Process, and Technology (a minimal scoring sketch follows this post)

2. Collect honest input
▪️ Interview stakeholders across departments
▪️ Ask: "What frustrates you about our current GRC processes?"

3. Measure performance, not activity
▪️ Don't just count risks reviewed—show % of critical risks mitigated
▪️ Don't just show training completions—show behavior changes

4. Benchmark yourself
▪️ Compare against industry frameworks or peers
▪️ Highlight gaps and quick wins

5. Summarize in a story, not a spreadsheet
▪️ Use visuals and executive-level takeaways: "Here's what we've improved. Here's where we're still at risk. Here's what we'll do next."

🧠 Pro Tip: Make it part of your annual cycle. Your GRC program deserves the same review rhythm as Finance, HR, and IT.

📌 TL;DR
GRC programs need reviews too.
Don't just assess risk—assess your ability to manage risk.
Insight + roadmap > dashboard alone.

📅 Coming Up: "Mapping Risk Scenarios to Controls"

💬 Have you ever reviewed your GRC program from the outside in? What did you learn?

#GRC #RiskManagement #Compliance #Governance #SecurityLeadership #ProgramReview #MaturityModel #Cybersecurity #GRCBestPractices #ContinuousImprovement
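To make the maturity-model step concrete, here is a minimal scoring sketch in Python. It assumes a CMMI-style 1–5 scale across the People/Process/Technology lens from the post; the capability names, levels, and the target of 4 are invented for illustration and are not taken from NIST CSF, ISO 27001, or CMMI.

```python
# Minimal sketch of a maturity scorecard for a GRC program review.
# Capability names, assessed levels, and the target are illustrative only.

from statistics import mean

# Assessed maturity (1 = initial, 5 = optimizing) per capability,
# grouped by the People / Process / Technology lens.
scores = {
    "People":     {"security awareness": 3, "role accountability": 2},
    "Process":    {"risk assessment": 4, "policy lifecycle": 2},
    "Technology": {"control monitoring": 3, "GRC tooling": 1},
}
TARGET = 4  # hypothetical target level agreed with leadership

for domain, capabilities in scores.items():
    avg = mean(capabilities.values())
    gaps = [name for name, level in capabilities.items() if level < TARGET]
    print(f"{domain}: average {avg:.1f}/5; below target: {', '.join(gaps) or 'none'}")
```

A per-domain average plus a named gap list maps directly onto the "story, not a spreadsheet" advice: it gives the executive summary (the average) and the roadmap (the gaps) in one pass.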
Program Review Guidelines
Explore top LinkedIn content from expert professionals.
Summary
Program review guidelines are structured approaches used to assess whether an organization's programs are meeting their intended goals, informing decisions about improvement, investment, or discontinuation. These guidelines provide practical steps for evaluating program quality, impact, and alignment with organizational priorities, making sure resources are used wisely and outcomes are meaningful.
- Ask critical questions: Take time to regularly examine whether your programs are driving real results and still fit with business goals, rather than just tracking activities or participation.
- Integrate evaluation routines: Build program reviews into your annual planning cycle to keep your initiatives aligned with changing organizational needs and to spot opportunities for improvement.
- Use clear measurement tools: Select reliable indicators and benchmarks to track progress, compare impact, and guide decisions about which programs to scale, refine, or phase out.
-
In the last 10 years I've designed, delivered and assessed the impact of several large-scale leadership development programmes. Want to know how I make sure they actually matter and aren't just a pretty certificate or a report of butts on seats?

It's my 6 power questions. Start asking these and you're guaranteed to have leadership programmes that create long-lasting behaviour change AND reportable outcomes.

1) What are the core leadership capabilities and behaviours we need both now and in the future?
This is where you survey leaders at all levels to identify essential skills. If you're not talking to your audience then you're missing a HUGE piece of the puzzle. And for the love of god please incorporate strategy here too. What does the business need to achieve and what role does leadership play?

2) How will you assess current leadership competencies and development needs across the organisation?
Are you using 360 reviews, skills assessments, interviews?

3) What development formats will allow for skills practice, real-world application and feedback?
This could include workshops, cohorts, mentoring, job rotations, special project assignments... something that lets them practise is essential.

4) How will leadership development intersect with your talent management processes?
The number of times this isn't considered is staggering. Look at integration points with recruitment, promotion, succession planning and performance management. This is crucial.

5) What measures will define the success of this programme at the participant, leadership bench strength, and organisational level?
Identify key leading and lagging indicators. Wanna know what these are?
💡 Leading = participation rates, completion of tasks, engagement surveys, tests etc.
💡 Lagging = leadership pipeline for critical roles, whether your programmes affect things like EVP and brand, leadership retention, and your key metrics around profitability etc.
Great programmes measure both ⬆️

6) How will you evolve curriculums over time to meet changing business objectives and leadership needs?
Build in processes for continuous review and refresh. This is my biggest non-negotiable. At a push you should review every 3 years, but I suggest a review every year in line with strategy and business objectives + engagement surveys and employee data.

Leadership development is a serious game, friends. It's not just away days and leadership theory. This is how you future-proof your organisation, from grass roots through to established leadership.

Anything I've missed that you would add? 👇
-
Are your programs making the impact you envision, or are they costing more than they give back?

A few years ago, I worked with an organization grappling with a tough question: Which programs should we keep, grow, or let go? They felt stretched thin, with some initiatives thriving and others barely holding on. It was clear they needed a clearer strategy to align their programs with their long-term goals.

We introduced a tool that breaks programs into four categories, each with its own strategic path:

- Heart: These programs deliver immense value but come with high costs. The team asked, "Can we achieve the same impact with a leaner approach?" They restructured staffing and reduced overhead, preserving the program's impact while cutting costs by 15%.
- Star: High-impact, high-revenue programs that beg for investment. The team explored expanding partnerships for a standout program and saw a 30% increase in revenue within two years.
- Stop Sign: Programs that drain resources without delivering results. One initiative had consistently low engagement. They gave it a six-month review period but ultimately decided to phase it out, freeing resources for more promising efforts.
- Money Tree: The revenue-generating champions. Here, the focus was on growth: investing in marketing and improving operations to double their margin within a year.

This structured approach led to more confident decision-making and, most importantly, brought them closer to their goal of sustainable success. According to a report by Bain & Company, organizations that regularly assess program performance against strategic priorities see a 40% increase in efficiency and long-term viability. Yet many teams shy away from the hard conversations this requires.

The lesson? Not every program needs to stay. Evaluating them through a thoughtful lens of impact and profitability ensures you're investing where it matters most (a minimal classification sketch follows this post).

What's a program in your organization that could benefit from this kind of review?
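For readers who like to see the mechanics, here is a minimal sketch of the four-quadrant tool as a classification function. The two axes (an impact score and net revenue) and the thresholds are assumptions for illustration; a real review would score programs against agreed benchmarks rather than hard-coded bars.

```python
# Minimal sketch of the Heart / Star / Stop Sign / Money Tree tool.
# Axes and thresholds are hypothetical placeholders.

def categorize(impact: float, net_revenue: float,
               impact_bar: float = 3.0, revenue_bar: float = 0.0) -> str:
    """Map a program to a quadrant: impact on one axis, financials on the other."""
    if impact >= impact_bar:
        return "Star" if net_revenue >= revenue_bar else "Heart"
    return "Money Tree" if net_revenue >= revenue_bar else "Stop Sign"

# Invented example portfolio: (impact score 1-5, annual net revenue in $).
programs = {
    "Community clinic": (4.5, -120_000),  # high impact, costly -> Heart
    "Annual gala":      (2.0, 250_000),   # low impact, profitable -> Money Tree
    "Job training":     (4.8, 80_000),    # high impact, profitable -> Star
    "Print newsletter": (1.5, -30_000),   # low impact, costly -> Stop Sign
}

for name, (impact, net) in programs.items():
    print(f"{name}: {categorize(impact, net)}")
```

The value of the tool is less the labels than the forced conversation: every program gets scored on both axes, so "keep, grow, or let go" becomes a comparison rather than a gut call.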
-
The CDC has updated its Framework for Program Evaluation in Public Health for the first time in 25 years.

This is an essential resource for anyone involved in programme evaluation—whether in public health, community-led initiatives, or systems change. It reflects how evaluation itself has evolved, integrating principles like advancing equity, learning from insights, and engaging collaboratively. The CDC team describes it as a "practical, nonprescriptive tool". The framework is designed for real-world application, helping practitioners move beyond just measuring impact to truly understand and improve programmes.

I particularly like the way they frame common evaluation misconceptions, including:

1️⃣ Evaluation is only for proving success. Instead, it should help refine and adapt programmes over time.
2️⃣ Evaluation is separate from programme implementation. The best evaluations are integrated from the start, shaping decision-making in real time.
3️⃣ A "rigorous" evaluation must be experimental. The framework highlights that rigour is about credibility and usefulness, not just methodology.
4️⃣ Equity and evaluation are separate. The new framework embeds equity at every stage—who is involved, what is measured, and how findings are used.

Evaluation is about learning, continuous improvement, and decision-making, rather than just assessment or accountability. As they put it:

"Evaluations are conducted to provide results that inform decision making. Although the focus is often on the final evaluation findings and recommendations to inform action, opportunities exist throughout the evaluation to learn about the program and evaluation itself and to use these insights for improvement and decision making."

This update is a great reminder that evaluation should be dynamic, inclusive, and action-oriented—a process that helps us listen better, adjust faster, and drive real change.

"Evaluators have an important role in facilitating continuous learning, use of insights, and improvement throughout the evaluation (48,49). By approaching each evaluation with this role in mind, evaluators can enable learning and use from the beginning of evaluation planning. Successful evaluators build relationships, cultivate trust, and model the way for interest holders to see value and utility in evaluation insights."

Source: Kidder, D. P. (2024). CDC Program Evaluation Framework, 2024. MMWR Recommendations and Reports, 73.
-
Indicators are essential tools for monitoring and evaluation (M&E), providing measurable insights into program performance and impact. They allow for comparisons over time and across contexts, ensuring decisions are based on concrete data. This document introduces key principles of indicator selection, design, and interpretation, equipping professionals with the tools to track progress, assess efficiency, and enhance accountability.

The guide covers validity, reliability, and feasibility, ensuring data collection remains meaningful. It explores input, output, outcome, and impact indicators, demonstrating their role in evaluation frameworks. It also outlines best practices for selecting indicators, avoiding pitfalls like misinterpretation and over-reliance on single metrics. Techniques such as indicator pyramids and data triangulation ensure a comprehensive and balanced approach to performance measurement.

For M&E professionals and humanitarian practitioners, this document is a critical resource for improving evaluation frameworks and decision-making. It highlights the importance of selecting high-quality indicators, maintaining data consistency, and using findings to refine programs. Whether assessing health, governance, or development initiatives, these insights help professionals optimize resources and maximize impact.
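As a concrete illustration of the input/output/outcome/impact hierarchy the guide describes, here is a minimal sketch of an indicator record with a simple progress calculation. The indicator names, units, and values are invented; real M&E frameworks layer validity, reliability, and feasibility checks on top of a structure like this.

```python
# Minimal sketch of an M&E indicator using the input / output / outcome /
# impact levels described above. All concrete values are invented.

from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    INPUT = "input"      # resources invested (e.g., funds disbursed)
    OUTPUT = "output"    # direct deliverables (e.g., clinics equipped)
    OUTCOME = "outcome"  # behaviour or condition change (e.g., coverage rate)
    IMPACT = "impact"    # long-term effect (e.g., disease incidence)

@dataclass
class Indicator:
    name: str
    level: Level
    unit: str
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        span = self.target - self.baseline
        return (self.current - self.baseline) / span if span else 1.0

ind = Indicator("Measles vaccination coverage", Level.OUTCOME,
                "% of children under 5", baseline=62.0, target=90.0, current=75.0)
print(f"{ind.name} ({ind.level.value}): {ind.progress():.0%} of target distance")
```

Tracking progress against a baseline-to-target span, rather than reporting raw numbers, is one way to avoid the over-reliance-on-single-metrics pitfall the guide warns about: the same calculation works across levels, so indicators stay comparable.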
-
Senior engineers don't just write good code. They make everyone else's code better.

7 code review principles that raise the bar:

1. Review for learning, not just correctness
The best code reviews teach. Both ways.
→ Explain the "why" behind your suggestions
→ Ask questions that make people think
→ Call out things you learned (praise good work)
When reviews become teaching moments, your whole team gets better at engineering.

2. Stop debating style, start automating it
Don't waste review cycles arguing about 2 spaces vs 4 spaces.
→ Use language-specific formatters
→ Enforce standards through CI, not human judgment
→ Save mental energy for architecture and logic
Standards + automation = no more style bikeshedding.

3. Label risk levels and match scrutiny
Use Risk: [HIGH], Risk: [MEDIUM], Risk: [LOW] in your PR titles (a minimal CI-check sketch follows this post).
Not all changes are equal. A database schema update needs more review than a doc update. A typo fix shouldn't get the same scrutiny as a payment processing change.

4. Write detailed PR descriptions (AI makes this easy)
Always explain: What is the change? Why is it needed? What should the reviewer focus on?
→ Use Claude Code to draft descriptions from your commits
→ Include screenshots for UI changes
→ Call out non-obvious implications or edge cases
AI tools make this easier than ever. Embrace better tools to improve your quality bar.

5. Review the system, not just the diff
After your first pass, zoom out:
→ How does this affect the broader architecture?
→ Does this introduce new patterns or follow existing ones?
→ What happens when this code needs to change again?
The best reviews catch problems that won't surface for months.

6. Document recurring patterns
Keep a living document or checklist of common review issues:
→ "We always forget to handle the empty state"
→ "Remember to validate input at API boundaries"
→ "Use our existing auth helper, don't write new ones"
Turn repeated feedback into shared knowledge.

7. Use AI! (Seriously!!)
Your role as a senior today isn't just writing code. It's ensuring your team uses the best possible tools available. That means AI code review tools like CodeRabbit.
But here's the key: AI alone can't catch everything. Human judgment alone can't either.
→ AI catches syntax issues, potential bugs, and performance problems
→ Humans catch architectural decisions, business logic, and team context
→ Together you get comprehensive reviews without the tedium
Don't let ego hold you back from tools that amplify your expertise.

Good code review practices help build teams that ship faster, learn quicker, and make fewer mistakes over time.

What advice would you give on code review?

---

PS: I write a weekly newsletter on AI engineering you might like. It's free: https://lnkd.in/e7Ymdh_j

Found this useful? ♻️ Repost for your team and follow Owain Lewis for more
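As one way to wire principle 3 into CI, here is a minimal Python sketch that fails a build when a PR title lacks a risk label. The PR_TITLE environment variable is an assumption; substitute whatever variable your CI system actually exposes for the pull request title.

```python
# Minimal CI gate for the "Risk: [LEVEL]" PR-title convention (principle 3).
# Assumes the CI job exports the PR title as PR_TITLE; adapt to your system.

import os
import re
import sys

PATTERN = re.compile(r"Risk:\s*\[(HIGH|MEDIUM|LOW)\]")

title = os.environ.get("PR_TITLE", "")
match = PATTERN.search(title)
if not match:
    # Exit non-zero so the pipeline fails and the author adds a label.
    sys.exit("PR title must include a risk label: Risk: [HIGH], [MEDIUM], or [LOW]")
print(f"Risk level {match.group(1)}: apply the matching review depth.")
```

Enforcing the label mechanically keeps the convention alive without anyone having to police it in review comments, which is exactly the spirit of principle 2.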
-
One of the hardest parts of being a TPM running AI/ML programs is assessing whether the work is delivering on the business goals you signed up for. This is where program reviews matter.

If you're running MBRs or QBRs for AI/ML programs, you need to move the conversation beyond demos and accuracy numbers. Make it data-driven with metrics that tie directly to customer and business outcomes.

The TPM role is about driving clarity. In the context of AI/ML programs, that means making sure programs are evaluated on the same standards we use for any critical system: performance, reliability, and cost, all tied back to the business.

At a high level, here is how I structure metrics in program reviews to keep them efficient and grounded, while accommodating the differing needs of AI/ML solutions in production (a minimal sketch follows this post):

1. **Performance Metrics**
Agree on the right north star metric for your specific business use-case.
2. **Operational Metrics**
These tell you if the model can actually operate at scale and stay relevant as data changes.
3. **Cost Metrics**
These force the team to show if the spend is justified.

When I facilitate program reviews, these metrics keep discussions focused on outcomes, not technical details. They also create alignment across engineering, science, product, and finance, so that decisions are made using data.

👉 In my latest blog, I go deeper into how to set up these reviews, the specific metrics for each category, their definitions, and what they represent: https://lnkd.in/eWG63PXj

💭 Drop me a comment with questions or how you deal with reporting on AI/ML programs. I appreciate you taking the time to learn and share!

#TechnicalProgramManagement #AI #metrics #TPM #MachineLearning
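As a rough illustration of the three-category structure, here is a minimal scorecard sketch. All metric names, values, targets, and the "higher/lower is better" directions are invented; in practice these would come from your dashboards and be agreed with engineering, science, product, and finance.

```python
# Minimal sketch of an MBR/QBR scorecard grouped by the three categories
# above. Every concrete value here is an invented placeholder.

# Each metric: (name, actual, target, which direction is better).
review = {
    "Performance": [("north star: tickets auto-resolved (%)", 62.0, 70.0, "higher")],
    "Operational": [("p99 inference latency (ms)", 180.0, 250.0, "lower"),
                    ("data drift alerts this month", 3, 5, "lower")],
    "Cost":        [("cost per 1k predictions ($)", 0.42, 0.50, "lower")],
}

for category, metrics in review.items():
    print(category)
    for name, actual, target, better in metrics:
        on_track = actual >= target if better == "higher" else actual <= target
        status = "on track" if on_track else "at risk"
        print(f"  {name}: {actual} (target {target}) -> {status}")
```

Printing every metric with its target and an explicit status is what keeps the review on outcomes: the discussion starts from "the north star is at risk," not from a model demo.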