When I first started teaching online back in 2017, the course evaluation process bothered me. Initially, I was excited to get feedback from my students about their learning experience. Then I saw the survey questions. Even though there were about 15 of them, none actually helped me improve the course. They were all extremely generic and left me scratching my head, unsure of what to do with the information. It's not like I could ask follow-up questions or suggest improvements to the survey itself. Understandably, the institution used these evaluations for its own data points, and there wasn't much chance of me influencing that process.

So, I decided to take a different approach. What if I created my own informal course evaluation that was completely optional? In this survey, I could ask course-specific and teaching-style questions to figure out how to improve the course before the next run started. After several revisions, I came up with these questions:

- Overall course rating (1–5 stars)
- What was your favorite part (if any) of this course?
- What did you find the least helpful (if any) during this course?
- Please rate the relevancy of the learning materials (readings and videos) to your academic journey, career, or instructional design journey. (1 = not relevant at all, 10 = extremely relevant)
- Please rate the relevancy of the learning activities and assessments to your academic journey, career, or instructional design journey. (1 = not relevant at all, 10 = extremely relevant)
- Did you find my teaching style and feedback helpful for your assignments?
- What suggestions do you have for improving the course (if any)?
- Are there any other comments you'd like to share with me?

I was, and still am, pleasantly surprised at how many students complete both the optional course survey and the official one. If you're looking for more meaningful feedback about your courses, I recommend giving this a try! This process has really helped me improve my learning experiences over time.
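As a minimal sketch of how responses to a survey like this could be tallied, the snippet below reads a CSV export and summarizes the numeric items while printing the open-ended suggestions verbatim. The file name and every column name (overall_stars, materials_relevancy, activities_relevancy, suggestions) are hypothetical; adapt them to whatever your survey tool actually exports.

```python
# Hypothetical sketch: summarize responses from an optional course survey.
# Column names are assumptions; match them to your survey tool's CSV export.
import csv
from statistics import mean

with open("course_survey.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

print(f"Responses: {len(rows)}")

# Numeric items: overall stars (1-5) and the two relevancy ratings (1-10).
for col in ("overall_stars", "materials_relevancy", "activities_relevancy"):
    scores = [float(r[col]) for r in rows if r.get(col, "").strip()]
    if scores:
        print(f"{col}: mean {mean(scores):.2f} over {len(scores)} answers")

# Open-ended items are best read in full; print the improvement suggestions.
for r in rows:
    if r.get("suggestions", "").strip():
        print("-", r["suggestions"].strip())
```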
Evaluation of Student Surveys
Explore top LinkedIn content from expert professionals.
Summary
Evaluation of student surveys refers to the process of reviewing and analyzing feedback collected from students to assess teaching quality, course content, and learning experiences. This practice aims to use student input to guide improvements, but it often faces challenges related to bias, specificity, and the way survey results are used by institutions.
- Review survey questions: Consider tailoring survey questions to address course-specific aspects and your teaching style so you can gather more meaningful and actionable feedback from students.
- Look for hidden patterns: Analyze survey data with attention to differences across student groups and course types to uncover trends or biases that may affect ratings and outcomes.
- Use multiple feedback sources: Supplement official surveys with informal feedback or student partnerships to gain a fuller picture of student experiences and address gaps in traditional survey methods.
-
As module teams finish marking and prepare to review student performance data, this is a crucial time to evaluate what's working and for which students 📊 Here are 5 ways to uncover what the data is really telling us:

1. Beyond Face Value Data: Module feedback and NSS scores are helpful, but informal feedback can be invaluable. Consider asking open-ended questions 10 minutes before the end of the lecture, such as "How did you find the session? What was clear? What was less clear?" These can be answered anonymously using tools like Mentimeter or Padlet and can uncover what's working and what's not for all our students.

2. Review with Specificity: Is your data reviewed with an intersectional lens? For example, indicators like disability and ethnicity encompass many declared disabilities (both visible and invisible) and various ethnic heritages. By splitting data sets by specific disability categories and ethnicity descriptors (see the sketch after this post), we can better understand diverse student experiences and unique needs. Reviewing year-on-year, as well as post-pandemic vs. pre-pandemic, can add another layer of insight. You can visualise this using software like Tableau to make it easily accessible for the whole team.

3. Observing Differences Without Deficit: We often have expectations about students' lives but may lack awareness of the invisible barriers they experience as well as their navigational capacity for overcoming those barriers. For one student, a part-time job might present significant difficulties, while for another, it can offer essential structure that complements their studies. It's crucial to first develop a genuine awareness of students' lives and then listen to their perspectives on how they navigate their experiences beyond what we can see and without judgement.

4. Beneath the Metrics: Beyond final attainment, consider the hidden metrics that highlight differences across the student journey. These include extension requests, core module grade averages, late submissions, and failed or repeated assessments. These indicators can signal that some students with certain protected characteristics are experiencing very different learning journeys than we might expect.

5. Ensuring Representative Feedback: The truth is, not all students provide feedback. Often, the students we most want to hear from may not engage through traditional methods at all. Beyond student reps, consider appointing student partners who can capture the student voice in different ways throughout the year. Having close peers discuss what's working and what's not for different students can be much more impactful and revealing over time.

I'm curious to know, are there other innovative ways you've seen student data/voice captured on a degree course with inclusion in mind? Please share your thoughts 🔍
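Here is a short sketch of point 2 above: splitting outcomes by specific disability categories and ethnicity descriptors rather than broad yes/no flags, plus a year-on-year view. All file and column names (module_results.csv, student_id, final_grade, late_submission, and so on) are assumptions; adapt them to your institution's data dictionary before use.

```python
# Sketch of "Review with Specificity": disaggregate by specific descriptors
# instead of broad flags. All column names here are assumptions.
import pandas as pd

df = pd.read_csv("module_results.csv")  # hypothetical export, one row per student

by_group = (
    df.groupby(["disability_category", "ethnicity_descriptor"], dropna=False)
      .agg(n=("student_id", "size"),
           mean_grade=("final_grade", "mean"),
           late_rate=("late_submission", "mean"))
      .query("n >= 5")          # suppress tiny cells before sharing widely
      .sort_values("mean_grade")
)
print(by_group)

# Year-on-year comparison for the extra layer of insight mentioned above.
print(df.groupby(["academic_year", "disability_category"])["final_grade"]
        .mean()
        .unstack("academic_year"))
```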
-
More evidence that student evaluations suck (or the problem may be the university administration)

A while back, I posted an article on why teaching evaluations suck. They often simply don't do what they are intended to do, that is, evaluate the quality of teaching, and they often do have unintended consequences, that is, encourage pandering for grades (see here: https://lnkd.in/erXhXFEx).

Troy Heffernan (University of Oxford) and Paul Harpur OAM (The University of Queensland) offer additional evidence of why teaching evaluations suck. They reviewed 39 Australian universities' policies on student evaluations and analyzed how they affect faculty. Simply put: much to worry about. They found:

(1) Bias in student evaluations against women and other marginalized academics. These biases negatively impact career progression, including hiring, promotions, and grant opportunities, despite universities relying on SETs for such decisions.

(2) Many universities do not comply with anti-discrimination laws and workplace health and safety regulations. Universities could be legally liable if they use discriminatory SET data in employment decisions or fail to protect academics from mental distress caused by biased evaluations.

(3) A need for reform in how universities collect and use SET data, including implementing better safeguards to remove abusive or prejudiced comments. They suggest universities align policies with legal frameworks to create a more equitable academic environment.

Check it out. It's eye-opening!

Reference: Heffernan, T., & Harpur, P. (2023). Discrimination against academics and career implications of student evaluations: university policy versus legal compliance. Assessment & Evaluation in Higher Education, 48(8), 1283–1294. Link: https://lnkd.in/eRcZYekW

Abstract: Across the international higher education sector, existing studies highlight that student evaluations of courses and teaching are biased and prejudiced towards academics and can cause mental distress. Yet student evaluation data is often used as part of faculty hiring, firing, promotion, award, and grant decisions. That a data source known to be prejudiced and biased is used for employment and career decisions raises questions around whether these university policies are discriminatory towards university staff. This paper investigates these questions via an analysis of: a) what are the common university policies relating to evaluation data collection and its use, b) are these policies leaving academics exposed to discrimination, and c) what types of policies may be leaving universities liable to legal ramifications due to non-compliance with anti-discrimination and workplace health and safety laws? The work demonstrates why most institutions are operating outside the bounds of the law, highlights to academics what types of policies may fail to meet discrimination and workplace laws, and informs university leaders of the actions that may be exposing their universities to legal implications for failing to protect their staff.
-
Student evaluations of teaching: it's not only how you teach, it's also whom you teach.

New paper by Sara Ayllón et al. finds that "less generous students systematically sort into certain fields, courses, and instructors' sections". As the figure in the paper shows, there is "significant variation in the average ratings across majors, with instructors in the lowest-rated majors (e.g. Architecture and Economics) receiving approximately 0.5 SD lower ratings than the highest-rated majors (e.g. Medicine and Philosophy). While differences in instructional quality may partially explain these gaps, it is likely that student sorting plays an important role."

The paper also documents "considerable variability in the disadvantage faced by female faculty across and within fields". Notably: "female faculty in Business and Economics face substantially more gender-biased students than faculty in Arts and Communications and, as a result, receive significantly worse student ratings."

The good news is: there are ways to correct for this. "A complex solution is to provide ratings for female and male faculty that adjust for gender-specific generosity and are normed to be equivalent across genders. This is technically feasible, but sacrifices transparency. A simpler solution flags to administrators courses in which female faculty face an expected disadvantage."

Read the full paper here: Sara Ayllón, Lars Lefgren, Richard W. Patterson, Olga Stoddard, Nicolás Urdaneta (2025), 'Sorting' Out Gender Discrimination and Disadvantage: Evidence from Student Evaluations of Teaching, National Bureau of Economic Research working paper 33911. https://lnkd.in/ecKBEZEi (open access) https://lnkd.in/eDZnQbf8 (gated)
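To make the two corrections concrete, here is an illustrative sketch, not the paper's actual method: the "complex solution" is approximated by z-scoring ratings within instructor gender so gender-specific generosity drops out, and the "simpler solution" by ranking fields by their raw gender gap. The file name, the column names (instructor_gender, mean_rating, field), and the male/female labels are all assumptions.

```python
# Illustrative only: norm each instructor's mean rating against peers of the
# same gender, so gender-specific "generosity" is removed from comparisons.
# Column names and category labels are assumptions, not the paper's data.
import pandas as pd

ratings = pd.read_csv("set_ratings.csv")  # hypothetical: one row per course section

def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std(ddof=0)

ratings["adjusted"] = ratings.groupby("instructor_gender")["mean_rating"].transform(zscore)

# The simpler alternative: flag fields where female faculty face the largest
# expected disadvantage, i.e. the biggest raw gaps in average ratings.
by_field = ratings.pivot_table(index="field", columns="instructor_gender",
                               values="mean_rating", aggfunc="mean")
by_field["gap"] = by_field["male"] - by_field["female"]
print(by_field.sort_values("gap", ascending=False))
```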
-
At semester's end, many universities lean on student evaluations of teaching as a proxy for quality. By Kirkpatrick's classic framework, that tool mostly taps Level 1, "Reaction" (how much students liked the class), not whether they actually learned (Level 2), transferred what they learned (Level 3), or achieved meaningful outcomes (Level 4). For example, a student comment like "great lecturer!" is reaction; showing they can solve novel problems on a closed-book exam is learning; applying concepts in later courses or internships is behavior and results.

A recently published study, "The boys' club: gender biases in students' evaluations of their philosophy professors" (https://lnkd.in/eVksAVqX), further shows why caution is needed. When identical content was presented as if delivered by a man versus a woman, the "man professor" was consistently rated higher on competence, clarity, confidence, interest, and willingness to enroll, while the "woman professor" was more often judged on "care." Making gender cues more realistic (using voices) preserved these differences, and they persisted even among students who endorsed egalitarian views.

In short, student evaluations reflect preference and stereotype, maybe even more than they reflect pedagogy. If student evaluations mostly assess reaction and are systematically gender-biased, they are not a sound stand-alone basis for quality management (...or for hiring and promotion). But what could be a good way to actually evaluate teaching?

#HigherEducation #GenderBias #UniversityTeaching #AcademicLeadership #QualityManagement
-
🎯 It's all about feedback: student evaluations

We all need feedback to grow: at work, in science, and in teaching. In industry or national labs, our managers (who may not know every technical detail) still give us valuable input on teamwork, professional growth, and contribution to the team's success. In academia, we get constant feedback via paper and grant reviews, and through student course evaluations.

Many colleagues ask, "How can students evaluate professors?" Student comments can be blunt or even harsh, testing your moral fiber to read them. But feedback, however imperfect, is essential to improve. What matters isn't just what I know, but how well I communicate and support learning. To make evaluations more useful, I explain why they matter and how I'll act on them. Then, at semester's end, I steel myself to review the results, and I can clearly see how things evolve!

Spring 2024 vs. Spring 2025 (averages)

| Metric | 2024 Avg | 2025 Avg |
| --- | --- | --- |
| Instructor contributed to understanding | 4.40 | 4.60 |
| Course challenged you | 4.60 | 5.00 |
| Atmosphere invited extra help | 4.20 | 4.50 |
| Responded to inquiries in 48–72 hrs | 4.40 | 4.56 |
| Respectful & positive environment | 4.40 | 4.90 |
| Useful feedback on assignments | 4.20 | 4.11 |
| Sessions well organized | 4.60 | 4.70 |
| Materials enhanced learning | 4.40 | 4.70 |
| Hours/week outside class | ~6–7 | ~8–9 |

Key takeaways
- Higher engagement: response rate up, students feel more challenged
- Stronger climate: positive, supportive scores climbed across the board
- Room to grow: "Useful feedback" dipped slightly (4.20 → 4.11), so it's time to refine assignment comments

Grateful for every piece of feedback. Here's to iterating and communicating even more effectively next semester!
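For anyone tracking their own evaluations the same way, here is a tiny sketch of the year-over-year comparison above, with the post's averages hard-coded (the hours-per-week row is a range, so it is omitted): it computes each metric's delta and flags the dips worth acting on.

```python
# Sketch: compute year-over-year deltas for evaluation averages and flag
# any metric that dipped. Values are copied from the table above.
metrics = {
    "Instructor contributed to understanding": (4.40, 4.60),
    "Course challenged you": (4.60, 5.00),
    "Atmosphere invited extra help": (4.20, 4.50),
    "Responded to inquiries in 48-72 hrs": (4.40, 4.56),
    "Respectful & positive environment": (4.40, 4.90),
    "Useful feedback on assignments": (4.20, 4.11),
    "Sessions well organized": (4.60, 4.70),
    "Materials enhanced learning": (4.40, 4.70),
}

for name, (avg_2024, avg_2025) in metrics.items():
    delta = avg_2025 - avg_2024
    note = "  <- room to grow" if delta < 0 else ""
    print(f"{name}: {avg_2024:.2f} -> {avg_2025:.2f} ({delta:+.2f}){note}")
```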
-
📊 Student evaluations are never as straightforward as they seem.

Those clean, bright numbers look so confident and clear, but they mask a lot of important detail, without which we can't make the right kinds of change for the right students (and teachers, and course materials, and...).

If you're regularly incorporating student evaluations into your operational and strategic decisions, are you also considering:
- Rigour, response rates and bias
- Over-reliance on quantitative measures and/or single sources of data
- Impact on teaching staff
- Formative vs. summative use of data
- Unhelpful comparisons of very different disciplines, study modes and cohort sizes

CAULLT recently hosted a webinar sharing the outcomes of a grant project looking at how teaching and learning leaders use student evaluation surveys. Gail Crimmins and Dr Sarah Casey presented the findings and launched a Good Practice Guide on Student Evaluation of Learning and Teaching to support effective use of these surveys. If you couldn't make it, here are my notes... (my highlights and interpretation; there's more in the guide and published article!).

#HigherEducation #Evaluation #StudentResearch
-
Can Student Feedback Improve Teaching? New research shows a positive (but small) effect.

You could argue it's a promising study, but in my opinion a mean effect size of 0.27 is a poor return for the amount of time, resources, and often negative impact on teachers that student evaluations cause.

As Karpicke, Butler and Roediger showed, students appear to know almost nothing about how learning happens (or instructional design, for that matter), so what exactly are they evaluating? These are subjective student judgments, which are inevitably influenced by factors completely unrelated to actual teaching effectiveness (e.g., teacher gender, strictness, or personality), so what you often get is basically a popularity contest.

Also, the cited improvement in teacher quality here seems to be assessed using proxies such as teacher behaviours or classroom dynamics, rather than direct measures like student achievement.

Full paper here: https://lnkd.in/esapKicj
Karpicke, Butler, & Roediger paper: https://lnkd.in/edREtyqd