Are bursary programmes and universities unintentionally failing students?

Not because they don't care. Not because support isn't funded. But because many systems only see risk when it's already too late.

I've seen millions invested in student support → and still watched students fail.

One of the hardest moments I've witnessed happens every exam season. A bursary student comes to us for academic support. The support is approved. Then the DP list is released. And we find out the student didn't qualify for the exam.

Not because support wasn't provided. But because the warning signs showed up months earlier → and no one had clear visibility into them early enough to act.

By that point, the bursary has already spent money on fees and last-minute support, and the student has already failed a course (or two).

Here's why: academic struggles compound quietly throughout the year. Mental health dips go unnoticed. And in the worst cases, students are excluded from bursary programmes altogether the following year → money wasted, potential wasted.

That's when it became clear to me: this isn't a lack-of-support problem. It's a coordination problem.

Bursary programmes fund academic support. Universities provide wellness services. Support teams genuinely care about student success. But academic performance lives in one place. Wellness data sits in another. Interventions happen across multiple partners. And very few people ever see the full picture of a student in real time.

Who is quietly falling behind? Which students are showing early risk signals? What support has already been delivered? And most importantly → is it actually working?

That insight fundamentally reshaped what we're building at Genius UP. At its core, the platform isn't about replacing tutoring or wellness support. It's about sitting above those interventions → at the system level.

We're building infrastructure that helps bursary programmes and universities:
• Track academic performance continuously
• Surface early risk signals before crisis hits
• Coordinate academic and wellness support across partners
• Turn fragmented support activity into clear, actionable insight and reporting

In other words, moving from reacting to problems → to detecting risk early → acting intentionally → and measuring whether interventions actually made a difference.

The screenshot attached is a glimpse into a platform we've been actively building and testing at Genius UP: systems designed to help student support teams see risk earlier, coordinate better, and support students more effectively, with a single view across students, support activities, and outcomes.

If you work in bursary programme management, student success, or higher-education operations, I'd love to hear how you're thinking about early risk and coordination. If not, a repost or tag of someone who should be part of this conversation would really help.

✅ Drop a comment → I'd love to hear your thoughts!
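To make the "early risk signals" idea above concrete: the post doesn't describe Genius UP's actual detection logic, so the following is only a minimal, hypothetical sketch of a rule-based early-warning check that joins data normally split across academic and wellness systems. All field names and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical student record combining data that usually lives in separate systems.
@dataclass
class StudentSnapshot:
    student_id: str
    avg_course_mark: float          # running average, 0-100
    attendance_rate: float          # 0.0-1.0 over the last month
    wellness_referrals: int         # referrals this semester
    supports_delivered: list = field(default_factory=list)

def early_risk_flags(s: StudentSnapshot) -> list:
    """Return human-readable risk flags, long before a DP list is published."""
    flags = []
    if s.avg_course_mark < 50:      # threshold is an assumption, not a DP rule
        flags.append("marks below sub-minimum")
    if s.attendance_rate < 0.7:
        flags.append("attendance slipping")
    if s.wellness_referrals >= 2 and not s.supports_delivered:
        flags.append("wellness referrals with no recorded intervention")
    return flags

at_risk = early_risk_flags(
    StudentSnapshot("stu-042", avg_course_mark=46.0,
                    attendance_rate=0.55, wellness_referrals=2)
)
print(at_risk)  # all three flags fire for this snapshot
```

A real platform would replace these hand-set thresholds with institution-specific rules or a trained model, but the value is the same: one view that triggers action months before the exam-eligibility list is released.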
Educational Data Analysis
-
Edtech is often criticised for poor quality, misuse of student data, and limited learning impact (I've voiced those concerns myself several times). But we can't hold systems accountable without first showing what good or exceptional performance looks like. Once that's clear, we can create competitive pressure and drive improvement.

⬇️ Excited to finally share our paper in HSCC Springer Nature that outlines key benchmark criteria for high-quality EdTech. The paper summarises the work our research group has been doing over the past three years. It focuses on educational impact and edtech's added value for students' learning.

📚 After an extensive literature review and cross-sector consultations, we've developed a multidimensional framework grounded in the "5Es": efficacy, effectiveness, ethics, equity, and environment. Efficacy and effectiveness combine experimental evidence with process-focused metrics and pedagogical implementation studies. The broader metrics cover ethical data processing, inclusive and equitable approaches, and edtech's environmental impact.

👇 The fifteen tiered impact indicators already guide a comprehensive and flexible evaluation process for international policymakers, educators, EdTech developers, and certification bodies (see EduEvidence - The International Certification of Evidence of Impact in Education and our case studies).

🙏 Huge thanks to all who contributed, especially through our participatory Delphi process. Your insights were invaluable! Nicola Pitchford, Anna Lindroos Cermakova, Olav Schewe, Janine Campbell, Rhys Spence, Jakub Labun, Samuel Kembou, PhD, Tal Havivi, Ayça Atabey, Dr. Yenda Prado, Sofia Shengjergji, PhD, Parker Van Nostrand, David Dockterman, Stephen Cory Robinson, Andra Siibak, Petra Vackova, Stef Mills, Michael H. Levine

#EdTech #ImpactMeasurement #5Es #EdTechQuality #EdTechStandards

👇 Read here or download from:
-
Last time I wrote about results, it was to celebrate the progress of NYC Reads. This time, NAEP results are sounding the alarm: national trends show concerning declines.

But as Dan Heath reminds us, there's power in looking for the bright spots. And a few districts are bucking the trend, showing what's possible. The big question: what are they doing differently, and how can we replicate those lessons across the country? Because if some students are succeeding, all students can.

What This Administration of NAEP Revealed:
1️⃣ Across the board, scores were record lows.
2️⃣ Few seniors displayed strong skills. Only 22% of seniors scored Proficient in math and 35% in reading; 45% were Below Basic in math and 32% Below Basic in reading.
3️⃣ No post-pandemic recovery. Five years after the pandemic, this lack of recovery is a sobering reality check.

Potential Drivers of the Outcomes:
1️⃣ Chronic absenteeism is eroding learning. In 2024, nearly one in three 12th graders reported missing three or more days of school in the prior month, up from one in four in 2019. Younger grades show similar trends, meaning millions of students are losing out on learning time.
2️⃣ Recovery has been fragmented and short-lived. Academic outcomes were slipping before COVID, and the pandemic accelerated the decline. Yet recovery efforts have been piecemeal, short-term, underfunded, and uncoordinated.
3️⃣ Too few students receive consistent, high-quality instruction. Even when students are in school, many are not exposed to grade-level work or effective teaching.
4️⃣ Accountability has weakened just as urgency is needed. Since ESSA, momentum for clear, data-driven accountability has stalled.

Bright Spots:
1️⃣ Richmond, VA: Richmond Public Schools has seen notable recovery in reading. In the 2023-24 school year, 50% of RPS students were proficient in reading, up from about 47% two years prior. Reading proficiency for economically disadvantaged students jumped from the mid-30s (percent proficient) in 2021-22 to the mid-40s by 2023-24, roughly a 10 percentage point gain over two years.
2️⃣ Mississippi: Sustained gains in reading and math over the past decade. 2024 results showed Mississippi achieving its highest-ever NAEP proficiency rates, improving across all four main NAEP tests (4th & 8th grade reading and math).
3️⃣ Louisiana: Major improvement in 4th-grade reading (above pre-pandemic level). Louisiana was the only state in 2024 to statistically surpass its 2019 fourth-grade reading score.
4️⃣ Tennessee: Tennessee's 2024 NAEP results showed gains in 4th and 8th grade, in both ELA and math, propelling the state's national rankings upward by 10 or more spots in each category.

What's Working?
1️⃣ Guarantee coherent, evidence-based instruction
2️⃣ Invest in targeted, high-dosage interventions
3️⃣ Build systemwide coherence
4️⃣ Double down on accountability and leadership
5️⃣ Engage families and communities
-
The book "Generative AI in Higher Education: The ChatGPT Effect" examines the profound shift in the academic landscape following the rise of Large Language Models, framing the future as a period of significant educational uncertainty regarding assessment, pedagogy, and the very definition of learning. Uncertainty in Assessment and Academic Integrity A primary concern is the potential collapse of traditional methods used to evaluate student knowledge. -The "Cheating" Wildcard: There is deep uncertainty about how to distinguish between genuine student effort and AI-generated output, leading to a crisis of trust in high-stakes testing. -Obsolescence of Traditional Tasks: Standard assignments, such as the five-paragraph essay, face an uncertain future as AI can produce them in seconds, forcing educators to reconsider what "evidence of learning" looks like. -Detection Efficacy: The report highlights the unpredictable reliability of AI-detection tools, creating a volatile environment where false positives and negatives disrupt the teacher-student relationship. Pedagogical and Curricular Uncertainty The document explores the "unknown" future of how subjects should be taught when AI can serve as a universal tutor. -The Role of the Educator: There is uncertainty regarding the future role of professors—transitioning from "knowledge providers" to "learning facilitators"—and whether institutions can adapt their training fast enough. -Curriculum Lag: A critical uncertainty is the "lag" between the rapid advancement of AI capabilities and the slow pace of institutional curriculum reform, potentially leaving graduates ill-prepared for an AI-integrated workforce. .Standardized Learning Risks: There is a concern that over-reliance on AI-generated content might lead to a "homogenization" of thought, where students lose the ability to engage in unique, critical inquiry. Ethical and Socio-Economic Uncertainty The broader societal implications of AI in education introduce significant strategic wildcards. -The "AI Divide": There is profound uncertainty regarding whether generative AI will democratize education by providing personalized support or exacerbate existing inequalities between those with and without access to premium AI tools. -Data and Bias: The future reliability of AI as an educational resource is shadowed by uncertainty regarding the "black box" nature of its training data and the potential for embedded algorithmic biases to influence student worldviews. In conclusion, the document suggests that higher education is at a pivotal crossroads. The future is defined not by the certainty of AI’s dominance, but by the uncertainty of whether human institutions can reinvent themselves fast enough to harness AI's potential while protecting the core values of critical thinking and academic rigor.
-
This week's theme in my workshops (and, by extension, my posts to you here) is assessing data collection tools (like surveys) for inclusion and access.

Most of my workshops start at the same place: nearly everyone in the room has designed at least one survey in a current or past job or in their education. And then it takes three hours and some meaningful collective learning to realize that planning a survey is much more than just writing a list of questions. It is an opportunity to connect with your community directly, hear their stories, and understand their experiences and expressions of engagement.

In this post, I want to share 5 "red flag" behaviors I often see during the survey design phase:

● When the only questions included ask for positive feedback. We all love hearing good things, but only asking for positive feedback closes off some real growth opportunities. Example: A question like, "What did you love most about our event?" assumes your respondent loved the event and offers no room for a different experience.

● When questions are overloaded with complicated words or jargon that only a few will know. You know your mission inside and out, but your community might not understand the same terms you do. Speak their language. Think of your survey as a conversation. Example: A question like, "How would you rate the efficacy of our donor stewardship activities?" assumes everyone understands the details of "stewardship".

● When every possible question about every possible aspect of the mission is asked, because "why not". Surveys that run past 10-12 minutes, without context, can feel like asking for too much. Be mindful of respondents and of what the data collection actually needs. Every question should have a purpose.

● When questions contradict anonymity. Our communities are diverse, and our surveys should hold a safe space for them. Accessibility, balanced with truly useful demographic questions, means not compromising anyone's anonymity, which makes the experience of collecting data easier and more meaningful. Example: A survey asking about racial and ethnic diversity in a group with a 99% homogenous population (making the 1% racially diverse respondents nervous about a possible breach of anonymity).

● When questions do not offer an opt-out because everything is required. Some questions may feel too personal or uncomfortable to answer, and our surveys must create space for that. Give respondents the option to skip a question if they need to. Example: A survey that requires donors to disclose their income range without offering a way to skip the question if they're uncomfortable sharing that information.

Stay tuned for a soon-to-come post on what we can do differently. Have any other such behaviors? Share them here.

In the meantime, try some of these resources (all designed to do good with data): https://lnkd.in/gUK-6M_Y

#nonprofits #community
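Two of these red flags (required sensitive questions and overlong surveys) lend themselves to an automated pre-launch check. The sketch below is not from the post and not any survey tool's real schema; the field names and the per-question time estimate are assumptions purely for illustration.

```python
# Hypothetical survey definition; "sensitive" and "required" flags and the
# per-question time estimate are illustrative assumptions, not a real schema.
questions = [
    {"id": "q1", "text": "What did you enjoy or not enjoy about the event?",
     "required": False, "sensitive": False},
    {"id": "q2", "text": "What is your income range?",
     "required": True, "sensitive": True},   # red flag: sensitive AND required
]

SECONDS_PER_QUESTION = 30   # rough assumption
MAX_MINUTES = 12            # the 10-12 minute ceiling mentioned in the post

def review_survey(qs):
    """Flag questions that break the opt-out rule, and overlong surveys."""
    warnings = []
    for q in qs:
        if q["required"] and q["sensitive"]:
            warnings.append(f'{q["id"]}: sensitive question with no opt-out')
    est_minutes = len(qs) * SECONDS_PER_QUESTION / 60
    if est_minutes > MAX_MINUTES:
        warnings.append(f"survey likely too long (~{est_minutes:.0f} min)")
    return warnings

for w in review_survey(questions):
    print("⚠", w)   # prints: ⚠ q2: sensitive question with no opt-out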
-
Your programme works. You have data to prove it. Then the hard questions come:

'How do you KNOW it was YOUR intervention?'
'Which parts must stay the same when we replicate this in 12 countries?'
'Why did it work in the first place?'

Silence. You're not alone in not having the answers. Most programmes (innovative or traditional) can't answer these questions because they collected activity data, not evidence for scale.

Here's what you should be measuring at each stage instead:

📍 Early stage (Pilot): Don't just count participants. Measure: Did it work? Was it feasible? Do users actually want this?

📍 Mid-stage (Acceleration): Don't just report more numbers. Measure: What are the core elements that CAN'T change? What CAN flex for different contexts?

📍 Scale stage: Don't just show reach. Measure: Can you prove YOUR intervention caused the change? Can others sustain it without you?

UNICEF's Innovation MEL Toolbox breaks down exactly what evidence you need at each stage (from ideation to scale), including practical tools like:
→ Theory of Change for different stages
→ Contribution Analysis (when RCTs aren't possible)
→ Fidelity & Adaptation Monitoring (see the sketch below for the core idea)
→ Scaling Approach frameworks

Whether you're testing something new, expanding what works, or adapting proven approaches to new contexts, this document is for you.

🔥 If this resonated, follow me. I break down Monitoring and Evaluation (M&E) concepts daily with practical, implementable tips that are grounded in facilitation experience across sectors.

#MonitoringAndEvaluation
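As a purely illustrative sketch of the fidelity-and-adaptation idea (not UNICEF's actual instrument), the snippet below tallies how faithfully each site delivers the non-negotiable core elements versus which flexible elements it adapted. The element names and site reports are made up.

```python
# Illustrative only: core vs. flexible elements are invented for this sketch.
CORE = {"weekly_tutoring", "diagnostic_baseline", "parent_checkins"}
FLEXIBLE = {"delivery_language", "session_length", "venue"}

site_reports = {
    "site_A": {"weekly_tutoring", "diagnostic_baseline", "parent_checkins", "venue"},
    "site_B": {"weekly_tutoring", "parent_checkins"},  # dropped the baseline
}

for site, delivered in site_reports.items():
    fidelity = len(delivered & CORE) / len(CORE)   # share of core elements kept
    adaptations = delivered & FLEXIBLE             # acceptable local changes
    print(f"{site}: fidelity {fidelity:.0%}, adaptations {sorted(adaptations)}")
```

Low fidelity on a core element (site_B's missing baseline) is exactly the kind of signal that explains why an intervention "stopped working" when replicated.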
-
The presentation by Jessina McGregor, PhD, explains how Interrupted Time Series (ITS) is a robust quasi-experimental design widely used to evaluate the effect of interventions, especially when randomized controlled trials aren't feasible. ITS analyzes outcome data collected at multiple, evenly spaced time points before and after an intervention to assess whether the intervention caused a change. It can detect both immediate effects (a sudden shift in the outcome level) and gradual effects (a change in the trend over time).

ITS belongs to the broader field of causal inference because it aims to answer: did the intervention cause a change in the outcome? While ITS can't guarantee the same level of causal certainty as randomization, its strength comes from its structured design: using many observations over time, ruling out pre-existing trends, and sometimes including control groups, staggered rollouts, or intervention removal to strengthen causal claims.

Statistical analysis of ITS often uses segmented regression or ARIMA models, which properly account for autocorrelation (the fact that observations over time are related). Careful planning is critical: defining the intervention clearly, selecting measurable outcomes, collecting long enough baseline and follow-up periods, and adjusting for other events like policy changes or seasonal effects.

Overall, ITS is an essential tool in causal inference, particularly valuable for evaluating large-scale or system-level interventions in fields like antimicrobial stewardship, where randomized trials are often impractical.

Link: https://lnkd.in/eGby6kMc

#statistics #quasiexperimental #causalinference
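This is not from the presentation itself, but the segmented-regression specification it mentions is standard and can be sketched on synthetic data. The model is Y_t = b0 + b1*time + b2*post + b3*time_since_intervention + e, where b2 captures the immediate level change and b3 the change in trend; autocorrelation-robust (HAC) standard errors stand in here for the fuller ARIMA treatment.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic example: 24 monthly observations, intervention after month 12.
rng = np.random.default_rng(0)
n, t0 = 24, 12
time = np.arange(n)                              # overall trend
post = (time >= t0).astype(float)                # 1 after the intervention
time_since = np.where(time >= t0, time - t0, 0)  # post-intervention trend

# True data-generating process: level drop of 5, slope change of -0.8 after t0.
y = 50 + 0.5 * time - 5 * post - 0.8 * time_since + rng.normal(0, 1.5, n)

# Segmented regression: Y = b0 + b1*time + b2*post + b3*time_since + e
X = sm.add_constant(np.column_stack([time, post, time_since]))
fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 3})  # robust to autocorrelation
print(fit.params)  # b2 ≈ immediate level change, b3 ≈ change in slope
```

With real data you would also plot the series, test for seasonality, and, as the presentation advises, consider a control series or staggered rollout to rule out co-occurring events.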
-
COVID-19 induced school closures did not result in learning losses everywhere!

My new paper with Syedah Aroob Iqbal shows one country where pandemic school closures did not harm student learning. Despite widespread school disruptions in Uzbekistan, grade-5 math scores actually IMPROVED by 0.29 standard deviations during the pandemic period. Even more striking: students tested in 2019 and retested in 2021 showed remarkable gains of 0.72 standard deviations over those two years. This suggests that learning continuity was maintained despite COVID-induced disruptions to traditional schooling.

Uzbekistan's experience demonstrates that effective responses, perhaps national TV broadcasts of daily lessons by the best teachers in the country, can actually support continued academic progress during crisis periods. The findings raise important questions about what policies and practices enabled this success, and how other education systems might learn from Uzbekistan's approach to maintaining learning continuity during unprecedented disruptions. https://shorturl.at/Fxl2c

It was with some trepidation that I looked towards distance education done right to alleviate the situation. I am glad I was proven right, but of course, this is all due to the students, families, teachers, administrators, and Ministry of Education of Uzbekistan.

(Me on Uzbek TV in 2020: https://lnkd.in/eJQfa3E4. For background, my blog with Nodira Meliboeva and Janssen Teixeira in 2020 on what Uzbekistan did: https://lnkd.in/eJDy3d7Y.)
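For readers unfamiliar with effect sizes in standard-deviation units: this is not the paper's estimation code (which would adjust for covariates and sampling design), just a minimal reminder of what a standardized mean difference is, on toy data constructed to mimic the headline +0.29 SD figure.

```python
import numpy as np

def standardized_gain(before: np.ndarray, after: np.ndarray) -> float:
    """Cohen's d-style effect size: mean score change in pooled-SD units."""
    pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
    return (after.mean() - before.mean()) / pooled_sd

# Toy data: simulated test scores, illustration only.
rng = np.random.default_rng(1)
before = rng.normal(500, 100, 2000)
after = rng.normal(529, 100, 2000)   # a +0.29 SD shift, like the headline number
print(f"{standardized_gain(before, after):.2f}")  # ≈ 0.29
```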
-
📊 Only 5 percent of genAI pilots deliver fast revenue gains. The other 95 percent do not move the P&L. The question is not "does AI work?"; it is "are we setting it up to work?"

🧩 MIT's new analysis shows heavy investment with light returns, especially when projects stay at the demo stage. The winners embed AI into real workflows, adapt systems over time, and measure business outcomes, not novelty.

🎓 For education and EdTech this matters even more. If revenue-led use cases struggle to show quick wins, learning-led use cases will need patient design, teacher training, strong data governance, and clear guardrails. Quick demos do not equal durable classroom impact.

👩🏫 As EdTech Specialist & AI Lead, I focus on long-term value. I am building AI literacy pathways for staff and students, running practical PD tied to lessons, and aligning tools with GDPR and the EU AI Act. We track time saved, feedback quality, and student outcomes, not hype.

💡 Short-term metrics can underprice long-term transformation. The real gains show up in better feedback loops, improved planning, and consistent assessment, plus safer data practices that unlock responsible innovation. That takes strategy, not just spend.

💬 How are you balancing quick wins with long-term AI investment in your school or organisation? Which 2 or 3 metrics prove value in the first 12 months without chasing vanity numbers? Share your approach below!
-
AI is revolutionizing academic mentorship in ways we never imagined possible. Here's how one tool increased student retention by 22% and faculty engagement by 20%.

The data from higher education reveals a critical problem: while 76% of faculty recognize mentorship as crucial for student success, only 37% of students ever receive formal mentorship.

The traditional system is fundamentally broken. Universities randomly pair students with faculty mentors, ignoring research interests, teaching philosophies, and learning styles. This creates a dangerous cycle of "marginal mentoring", where mentorship becomes an afterthought. Faculty become overwhelmed balancing multiple responsibilities. Students feel disconnected from their academic guidance.

But AI has changed everything. Modern AI platforms analyze critical dimensions that humans often miss:
• Research interest alignment
• Teaching and learning styles
• Academic career goals
• Communication preferences
• Project compatibility

The results are transformative:
• 22% higher student retention rates
• 20% higher faculty engagement
• Increased research productivity
• Higher publication rates
• Stronger academic relationships

But here's the most profound discovery: the most effective academic mentorship often comes from faculty who recently navigated similar research challenges, something the traditional academic hierarchy completely missed.

Through machine learning, these platforms provide:
• Real-time academic guidance
• Early intervention alerts
• Progress analytics
• Development tracking
• Targeted research resources

The future of higher education is a powerful combination of faculty wisdom and AI capabilities:
• Personalized academic matching
• Evidence-based program optimization
• Proactive student support
• Enhanced learning experiences

This isn't just about better matching; it's about transforming the entire educational experience.
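The post names neither the tool nor its algorithm, so the following is a generic illustration of one common approach to interest-based mentor matching (cosine similarity over profile vectors), not necessarily what this platform does. The profiles, dimensions, and names are hypothetical.

```python
import numpy as np

# Hypothetical profiles: each dimension scores affinity for a research area,
# e.g. [ML, education policy, psychometrics, HCI]. Values are made up.
students = {"alice": np.array([0.9, 0.1, 0.4, 0.2])}
mentors = {
    "prof_chen":   np.array([0.8, 0.0, 0.5, 0.1]),
    "prof_okafor": np.array([0.1, 0.9, 0.2, 0.3]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical interest direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for student, s_vec in students.items():
    ranked = sorted(mentors, key=lambda m: cosine(s_vec, mentors[m]), reverse=True)
    print(student, "->", ranked[0])  # best-aligned mentor first: prof_chen
```

Production systems would layer in the other dimensions the post lists (communication preferences, career goals, mentor load balancing), but a similarity ranking like this is the usual starting point.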