Monitoring and evaluation are recognised as critical instruments for enhancing accountability, learning and results-based management within the United Nations Institute for Training and Research (UNITAR). This policy framework provides the institutional foundation for establishing a coherent, standardised and effective M&E system that supports evidence-informed decision-making, performance improvement and transparency across all programmes.

The policy framework on monitoring and evaluation defines the following core elements:

- Introduction and background on UNITAR’s mandate, the evolution of its M&E practices and their integration within strategic reforms
- Clear definitions of monitoring, evaluation and related concepts, including audit, review, appraisal and investigation
- Complementarity between the monitoring and evaluation functions and their integration into strategic and operational planning
- Detailed guidance on logical framework requirements, SMART indicators, baseline establishment and risk management
- Purpose and criteria of evaluation (relevance, effectiveness, efficiency, impact, sustainability) following OECD DAC standards
- Categories of evaluation (corporate, decentralised and external) and the corresponding modalities, timing and requirements
- Mandatory evaluation thresholds and discretionary evaluation provisions, including cost allocation (2.5% of project budget for independent evaluations)
- Procedures for evaluation planning, management, reporting, dissemination and capacity development
- Defined institutional roles and responsibilities for the Executive Director, the Planning, Performance and Results Section (PPRS) and programme management
- A glossary of terms to ensure shared understanding and consistency across all UNITAR entities

The content underscores that a structured M&E policy framework ensures coherence, rigour and accountability throughout the project cycle.
By institutionalising common standards, evaluation criteria and quality assurance mechanisms, UNITAR strengthens evidence generation, promotes learning and enhances its contribution to the 2030 Agenda for Sustainable Development and the UN Evaluation Group’s norms and standards.
Assessment Policy Development
Summary
Assessment policy development refers to the process of creating guidelines that shape how student learning is measured and how program outcomes are evaluated, especially in the context of new challenges such as artificial intelligence. Recent discussions highlight the need for flexible and fair policies that uphold academic integrity while supporting meaningful learning experiences.
- Review current practices: Examine your existing assessment methods to identify areas where AI tools might pose risks and adapt strategies accordingly.
- Integrate real-world tasks: Design assessments that require students to apply knowledge in practical, authentic situations to promote deeper understanding and discourage misuse of technology.
- Encourage process transparency: Include components like oral interviews, staged submissions, or peer review to monitor students’ learning journeys and provide ongoing feedback.
This guide from the University of Melbourne discusses adapting assessment strategies in academic settings due to the challenges posed by AI-generated text, focusing on practical strategies for assessment design to ensure integrity and enhance learning. The authors suggest:

1. Shifting Emphasis from Assessing Product to Assessing Process: encourages assessing the learning journey rather than just the end product. For example, using platforms like Cadmus to track and evaluate students' progress on assignments provides insights into their learning processes.
2. Incorporating Tasks that Require Evaluative Judgement: involves tasks where students review or evaluate work against a set of criteria, fostering critical thinking. An example is peer review, where students assess each other's work and reflect on feedback received to improve their own submissions.
3. Designing Nested or Staged Assessments: breaks down a large task into smaller, interconnected tasks, allowing for ongoing feedback and development (e.g. a semester-long project broken into stages, such as initial research, draft submission and final presentation, with each stage building on the previous one).
4. Diversifying Assessment Formats: expands the types of assessment beyond traditional essays and reports to include videos, podcasts and other multimedia formats. This approach can enhance creativity and cater to diverse learning styles. For instance, students might create a podcast discussing a topic or a video presentation summarising their research findings.
5. Incorporating More Authentic, Context-Specific, or Personal Assignments: makes assessments more relevant to real-world scenarios or personal experiences, which can increase student engagement and reduce the temptation to misuse AI. An example could be analysing a local case study or applying theories to personal experiences relevant to the subject matter.
6. Including More In-Class and Group Assignments: facilitates collaboration and learning from peers, while also making it harder for students to rely on AI tools. This might involve group discussions, projects or in-class presentations on assigned topics.
7. Incorporating Oral Interviews to Test Understanding or Application of Knowledge: requires students to verbally articulate their understanding or reasoning in response to prompts, making it difficult for AI to assist. Examples include scenario-based interviews or explaining procedures and safety protocols in practical subjects.

https://lnkd.in/g2t-dDCM
'Assessment reform for the age of artificial intelligence' was the compass, and it still points truly. But it has become clear that we need a map. The Tertiary Education Quality and Standards Agency has today published 'Enacting assessment reform in a time of artificial intelligence'. This is our collective attempt to provide a map. In it, we describe three pathways:

- taking a program-wide approach to assessment reform
- assuring learning in every unit/subject
- implementing a combination of these approaches

The strengths and challenges of each pathway are discussed. We also provide a set of critical questions for institutions and learning and teaching leaders to consider. *See links in comments.*

Thank you to the brilliant team who put this together, co-led by Margaret Bearman and Phillip Dawson with Helen Gniel, Lenka Ucnik, Jan McLean, Rowena Harper and Danny Liu. Thank you to the Assessment Forum members who contributed: Simon Buckingham Shum, Christopher Deneen, Cath Ellis, Tim Fawns, Michael Henderson, Sarah Howard, Lina Markauskaite and Christine Slade. This resource would not be what it is without the generous and thoughtful input from colleagues across the sector. Thank you to everyone who contributed (LinkedIn won't let me tag you all - I tried, sorry).
This short paper sets out The Quality Assurance Agency for Higher Education’s advice for providers on how to approach the assessment of students in a world where students have access to Generative Artificial Intelligence (AI) tools. The principles set out here are applicable to both higher and further education. This resource develops a theme first introduced in our earlier advice - Maintaining quality and standards in the ChatGPT era: QAA advice on the opportunities and challenges posed by Generative Artificial Intelligence - around the (re)design of assessment strategies to mitigate the risks to academic integrity posed by the increased use of Generative Artificial Intelligence tools (such as ChatGPT) by students and learners.

Reviewing assessment strategies:

a. Reducing the volume of assessment by removing items that are susceptible to misuse of Generative Artificial Intelligence tools to generate unauthorised outputs, and repurposing the time available for other pedagogical activities.
b. Promoting a shift towards greater use of synoptic assessments that test programme-level outcomes by requiring students to synthesise knowledge from different parts of the programme. Some of these may permit or incorporate the use of Generative Artificial Intelligence tools.
c. Developing a range of authentic assessments in which students are asked to use and apply their knowledge and competencies in real-life, often workplace-related, settings. Ideally, authentic assessments should have a synoptic element.

The paper also sets out seven types of assessment that could be deployed when developing programme-level assessment strategies. https://lnkd.in/dn98XPWp