Great discussions and exchanges are underway at the India AI Impact Summit 2026. As we head into Day 3 of the Summit sessions, the conversations are gathering momentum. Tomorrow, the Masakhane African Languages Hub is forging Global South alliances and architecting the future of multilingual governance. Our Director Chenai Chair will participate in a session on South-South Collaboration, aiming to highlight how the Global South must not only supply data but also shape the standards for AI. Tajuddeen Gwadabe will take part in a session on Advancing Multilingual AI, in collaboration with the Center for Democracy & Technology and the Cornell University Global AI Initiative. We look forward to your attendance and participation in the sessions in New Delhi, and invite you to watch the streamed sessions online here: https://lnkd.in/gt5WgzXf #IndiaAIImpactSummit2026 #MasakhaneAfricanLanguagesHub #Masakhane #ResponsibleAI #AIEquity #NLP
India AI Impact Summit 2026: Global South Alliances & Multilingual Governance
Multimodal AI is often tested on either perception or reasoning—but rarely both together. MERaLiON2-Omni asks: what happens when you need both simultaneously? The paper introduces a model designed for omni-modal understanding—combining vision, audio, and text with strong logical reasoning. The focus is on Southeast Asian languages, where multilingual multimodal benchmarks are scarce. Key finding: there's a real tradeoff between perceptual capability and reasoning depth. Models optimized for one tend to degrade on the other. MERaLiON2-Omni is explicitly designed to balance this tension. The architecture integrates cross-modal attention with chain-of-thought reasoning across modalities—not just concatenating representations, but reasoning over the interactions between them. Benchmarks show strong gains on tasks requiring cross-modal inference: e.g., answering questions that require seeing an image, hearing audio context, and reasoning through the relationship. Particularly relevant for real-world AI assistants in multilingual, multimodal environments. #MultimodalAI #LLM #Reasoning #NLP #AIResearch
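The paper's exact architecture isn't reproduced here, but the core idea of reasoning over modality interactions rather than concatenating representations can be sketched as cross-modal attention, where text tokens attend over audio or image features. This is a minimal illustration, not MERaLiON2-Omni's actual implementation; the shapes, projections, and names are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text, other, d_k=64):
    """Text tokens (queries) attend over another modality's features (keys/values).

    text:  (T, d) text token embeddings
    other: (S, d) image or audio features
    Returns (T, d_k): text representations enriched with cross-modal context.
    """
    rng = np.random.default_rng(0)
    d = text.shape[1]
    # Hypothetical learned projections, random here for illustration only.
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    Q, K, V = text @ Wq, other @ Wk, other @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (T, S) alignment weights
    return attn @ V

text = np.ones((5, 32))   # 5 text tokens
audio = np.ones((8, 32))  # 8 audio frames
fused = cross_modal_attention(text, audio)
print(fused.shape)  # (5, 64)
```

The fused text representations can then feed a chain-of-thought decoding stage, which is where the perception-versus-reasoning tradeoff the post describes would surface.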
Our paper has been accepted as an Oral Presentation at the AI Across Cultures Workshop @ CHI 2026 in Barcelona - together with Samantha Adorno! "Let’s Talk, Not Type: An Oral-First Multi-Agent Architecture for Guaraní" AI systems are often framed as universal, but their interaction paradigms are almost always text-first. For communities like Guaraní speakers in Paraguay, this isn't a minor inconvenience. It means that "language support" risks becoming symbolic rather than meaningful. Our work proposes an oral-first, multi-agent architecture where turn-taking, repair, shared context, and governance are treated as core features, not afterthoughts. By separating language understanding from conversation state and permission mechanisms, we make conversational structure explicit and enable reasoning over interaction dynamics rather than isolated commands. The goal: AI that is more interpretable and genuinely responsive in oral and low-resource settings. Preprint soon! #CHI2026 #NLP #AIForGood #IndigenousLanguages #HCI #MultiAgentSystems #Guaraní
We’re thrilled to share that our paper, "Empathy in Greek Exam-Related Support Conversations: A Comparative Evaluation of LLM Responses", has been accepted at #LREC2026 in Mallorca! 🌴 In this work, co-authored with Prokopis Prokopidis at the Institute for Language and Speech Processing, we explore a crucial question: How well do modern AI models handle highly sensitive, real-world human anxiety? We evaluated the Greek-focused KriKri-8B against state-of-the-art multilingual LLMs to see how empathetically and safely they respond to teenagers facing the immense stress of the Greek Panhellenic exams. To do this, we introduce the GEAR dataset, designed to assess AI systems across Understanding, Empathy, Reasoning, and Harm. If you're attending LREC this year, drop by our poster session! I’d love to connect and discuss conversational AI, culturally-aware models, and ethical safeguards in education. #NLP #LargeLanguageModels #ConversationalAI #AIethics #LREC2026 #KriKri #AthenaRC
When AI Hallucinates, a Culture Erodes. I asked a global AI model to translate a proverb from my home in Bukavu: "Harhenga Mwami Haje Undi." The AI told me it means leaders are replaceable. It was dead wrong. In our Mashi culture, it’s a command of duty: "A king must NOT abandon his throne." This isn’t just a "mistranslation"; it’s Semantic Erasure. When global models force our complex oral traditions into generic logic, we lose the "Soul" of our languages. 📉 Current LLMs exhibit a 60% Semantic Gap when interpreting African languages (Source: Africa Declaration on AI, 2025). 💰 Closing this gap through Cultural Grounding could unlock $15 Billion in value for the African AI sector. We don't need "Text-Hungry" models; we need Native Audio-to-Audio architectures that listen to our elders, not just the internet. Let's build AI that speaks the truth about who we are. 🤝 #IAMali #LelapaAI #Masakhane #DeepLearningIndaba #SautiYetu #InclusiveAI #NLP #CongoTech #Bukavu #Goma #DigitalSovereignty #AfricanLanguages #GitHub #TechLeadership #voiceAI #BlackinAI #MasakhaneAfricanLanguagesHub #DataScienceforSocialImpact #DSFSI
I’m delighted to share that our paper “When Meaning Isn’t Literal: Exploring Idiomatic Meaning Across Languages and Modalities” has been accepted at the IEEE International Conference on Multimedia and Expo (ICME 2026) — a CORE A-ranked conference with an acceptance rate of 28.89%. 🎉 Idioms often carry meanings that go far beyond their literal words, making them particularly challenging for current AI systems to interpret, especially across different languages and modalities. In this work, we explore how VLMs interpret these culturally grounded, non-literal expressions, and how that understanding can improve cross-lingual semantic understanding. With my co-authors: Sarmistha Das, Shreyas Guha, Suvrayan Bandyopadhyay, Salisa Phosit, and Kitsuchart Pasupa. Looking forward to presenting our work at ICME 2026 and engaging with the research community! #ICME2026 #MultimodalAI #NLP #Idioms #CrossLingualAI #Research
🗣️ AI speaks many languages. But does it treat them all fairly? Large language models (LLMs) are now embedded in tools we use every day - from chatbots to decision support systems. Yet these models can reproduce or even amplify biases, especially when moving across languages, cultures, and contexts. 👉 In the fourth session of the seminar series ‘Current Trends in AI: Fairness in LLMs’, Pieter Delobelle takes a closer look at the hidden assumptions inside LLMs. We explore how bias emerges, how it can be measured, and what techniques exist to mitigate it, particularly across different languages. For those who want to look beyond performance metrics and engage with the societal impact of language technology 😊 📅 23 April 2026 📍KU Leuven Brugge or livestream 🔗 More info via: https://lnkd.in/eF7ZDrur ❓Contact programme manager Benedicte Seynhaeve #AI #LLM #FairAI VAIA - Flanders AI Academy
🌟 Weekly Scientist Spotlight Series! 🌟 Building AI systems that truly understand and process language and speech across diverse contexts is a significant challenge. From enabling real-time speech recognition in noisy environments to preserving linguistic and cultural nuances, researchers are pushing the boundaries of what's possible. Today, we feature scientists advancing speech and language AI with technical rigor while ensuring human-centeredness, accessibility, and context. 📊 Meet this week's featured researchers: Sakinat Folorunso, Associate Professor, Olabisi Onabanjo University (O.O.U), Nigeria, works on culturally grounded, energy-aware AI and multilingual NLP for low-resource and African contexts. Find out more here: https://lnkd.in/eYpVy47d Rohun Nisa, Researcher, University of Kashmir, India, develops human-centered speech processing systems using deep learning for real-time applications. Find out more here: https://lnkd.in/erimzYQH Find out more about their research in the carousel! 📝 To participate in this series, fill in this form: https://lnkd.in/etdJ-Z6G For opportunities to work on such initiatives, contact Women in AI’s Chief Research Officer: Vinutha Magal Shreenath, PhD #WomeninAI #ai #ScientistSpotlight #DiversityInTech #WomenInSTEM #DiversityMatters #WomenInScience Dr. Alessandra Sala | Vinutha Magal Shreenath, PhD | Angela Kim | Bhuva Shakti (MS MBA) | Lisel Engelbrecht | Silvia A. Carretta, PhD | Aalya Dhawan | Nebahat Arslan | Bhagi Agrawal | Frincy Clement | Yasmin Al Enazi | Nabanita Roy
The "Universal Grammar" status quo has long dominated AI development, pushing a Western-centric lens that forces diverse languages into narrow boxes. The result? A 15% error rate in even the most advanced models when they hit cultural and morphological "divergence." With Mātṛ, we aren't retraining models or looking for similarities. We are mathematically modeling the divergence. By quantifying exactly where and how languages drift apart, we’ve built a model-agnostic enhancement layer that: - Identifies semantic and morphological errors existing benchmarks miss. - Corrects 60%+ of those errors without retraining. - Doubles interpretation accuracy for underserved markets. From our base in Dubai AI Campus, we are building the global infrastructure for truly inclusive AI. It’s time for AI to stop just "translating" and start truly interpreting. You can find more info here: www.zwag.ai Read our pre-print on arXiv: https://lnkd.in/dvy2xBeT #AI #Linguistics #Mātṛ #DeepTech #NLP #Innovation #GlobalAI Andrew Jackson, PhD Eyad Saleh Aidan Gomez Prashant K. (PK) Gulati John Ball
I had the privilege of sharing Mātr’s work at the India AI Impact Summit 2026, and this research is exactly why. There’s a growing temptation in our ecosystem to solve Indic language AI by building separate, dedicated models. But every time a frontier model leaps forward (and they do, every few months), a standalone Indic model falls further behind. Mātr flips this entirely. Instead of competing with GPT or Claude, it sits on top of any LLM as an interpretation layer — what the researchers call a “metalinguistic decision layer.” When the base model improves, Mātr’s corrections automatically ride that wave. No retraining. No parallel data. No starting over. That’s not a workaround — it’s architecturally smarter. Here’s how it works technically: Their Universal Metalinguistic Framework (UMF) maps every language across 16 typological dimensions — word order, case marking, morphological complexity, honorifics, evidentiality, pro-drop patterns, and more — derived from the World Atlas of Language Structures. It then computes a divergence vector between source and target languages, quantifying exactly where and how much they differ structurally. At inference time, the engine generates 32 translation candidates via beam search, then applies a dual-layer evaluation: a semantic constraint layer that resolves lexical ambiguities (like “play” defaulting to “play a musical instrument” in Sinhala instead of “play/have fun” — because the former appears more frequently in training data), and a typological scoring layer that checks structural compliance against the target language’s grammatical requirements.
The results across 341 test sentences and 9 languages are revealing: → Languages with high typological divergence from English (Sinhala, Tamil, Hindi) triggered the most interventions — validating that the framework correctly identifies where LLMs systematically fail → Hindi achieved 84.9% intervention precision with a 2.14x gain-risk ratio → Chinese hit 100% precision → Intervention rates correlated strongly with typological distance (Pearson r = 0.82, p < 0.01) What I respect the most is that the team is transparent about limitations. For morphologically dense languages like Sinhala and Tamil, the system detects errors but doesn’t always resolve them correctly yet — case marking mismatches and honorific selection remain challenging. They’ve mapped every failure into an interpretable error taxonomy rather than hiding behind aggregate scores. That’s real science. And critically, conventional benchmarks like BLEU and BERTScore barely register UMF’s improvements, because these metrics measure surface-form similarity, not structural correctness. A translation can have perfect n-gram overlap and still be cognitively wrong for a native speaker. UMF operates orthogonally to these metrics, catching errors they’re structurally incapable of detecting. Garage Labs Technologies is proud to support ZWAG AI team. #AI #Linguistics #DeepTech #NLP #InclusiveAI
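The divergence-then-intervene mechanism described above can be sketched in miniature. This is a toy illustration only: the feature names, values, threshold, and scoring are hypothetical stand-ins, not ZWAG's actual UMF, which spans 16 WALS-derived dimensions:

```python
import numpy as np

# Hypothetical typological feature vectors (values in [0, 1]) over a few
# WALS-style dimensions like those the post names.
FEATURES = ["word_order", "case_marking", "morph_complexity", "honorifics", "pro_drop"]
LANGS = {
    "english": np.array([0.1, 0.1, 0.2, 0.0, 0.1]),
    "hindi":   np.array([0.8, 0.7, 0.6, 0.6, 0.7]),
    "sinhala": np.array([0.8, 0.9, 0.9, 0.8, 0.8]),
}

def divergence_vector(src: str, tgt: str) -> np.ndarray:
    """Per-dimension structural divergence between source and target language."""
    return np.abs(LANGS[tgt] - LANGS[src])

def should_intervene(src: str, tgt: str, threshold: float = 0.5):
    """Trigger a correction pass on the LLM's output when divergence is high.

    In the real system this would gate re-scoring of beam-search candidates;
    here it just compares mean divergence to a hypothetical threshold.
    """
    d = divergence_vector(src, tgt)
    return d.mean() > threshold, d

for tgt in ("hindi", "sinhala"):
    fire, d = should_intervene("english", tgt)
    print(tgt, round(float(d.mean()), 2), "intervene" if fire else "pass-through")
```

Under these made-up numbers, both Hindi and Sinhala exceed the threshold, mirroring the post's observation that high-divergence languages trigger the most interventions.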
Building Mātr - Universal Interpretation Model. Just as TCP/IP standardized data, Mātṛ unlocks the 84% non-English world via a model-agnostic layer for universal interpretation.
Exploring the reliability of Large Language Models (LLMs) has been the focus of my recent work. While modern LLMs demonstrate impressive capabilities in natural language generation and open-domain question answering, they can sometimes produce responses that are factually incorrect or unsupported by evidence — a phenomenon commonly referred to as AI hallucination. Currently, I am conducting an empirical investigation to better understand how and when these hallucinations occur, and how reliable model responses are across different types of questions and datasets. The work involves evaluating model outputs against well-known benchmarks such as TruthfulQA, FEVER, and Natural Questions, and analysing patterns in factual, logical, and contextual errors. The goal is to contribute to a deeper understanding of reliability in generative AI systems and support the development of more trustworthy AI applications. As generative AI continues to be integrated into real-world products and decision-support systems, improving evaluation methods and transparency around model behaviour is becoming increasingly important. Excited to keep exploring this space and share insights along the way. #AI #GenerativeAI #LargeLanguageModels #AIResearch #MachineLearning #DataScience
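The evaluation loop this kind of study runs can be illustrated with a minimal sketch. The data and matching rule here are hypothetical; real benchmarks such as TruthfulQA or FEVER require far more careful metrics and label schemes than lenient exact match:

```python
def normalize(text: str) -> str:
    """Lowercase and strip punctuation for lenient answer matching."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def evaluate(predictions: dict, references: dict) -> float:
    """Fraction of questions where the model's answer matches any reference."""
    hits = sum(
        normalize(predictions[q]) in {normalize(r) for r in refs}
        for q, refs in references.items()
    )
    return hits / len(references)

# Hypothetical mini-benchmark in the spirit of Natural Questions.
references = {
    "capital_fr": {"Paris"},
    "boiling_c":  {"100", "100 degrees"},
}
predictions = {
    "capital_fr": "paris.",
    "boiling_c":  "It boils at 212F",  # wrong unit: a factual/contextual error
}
print(evaluate(predictions, references))  # 0.5
```

Disaggregating the misses by error type (factual, logical, contextual), as the post describes, is what turns a single accuracy number into an analysis of when hallucinations occur.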