AI coding tools have an accessibility problem. I decided to fix it.

I am a screen reader user and accessibility specialist. I use Claude Code every day to build apps at Techopolis LLC. And every day, I have to fight for the fundamentals: labeled inputs, focus trapping, semantic HTML, contrast ratios, live regions. These are not advanced requirements. They are the basics. And AI drops them constantly.

I tried writing detailed instructions. I tried custom skills. I tried adding reminders to every prompt. None of it stuck. As conversations grow, the model deprioritizes accessibility. Every time.

So I built something different: six specialized AI agents, each with one focused job it cannot ignore. An ARIA Specialist. A Modal Specialist. A Contrast Master. A Keyboard Navigator. A Live Region Controller. And an Accessibility Lead that coordinates them.

A hook fires on every prompt. If the task involves UI code, the team activates automatically. If it does not, Claude works normally. It enforces WCAG 2.1 Level AA compliance. It covers VoiceOver, NVDA, and JAWS compatibility. It catches framework-specific pitfalls, like React conditional rendering breaking live regions and Tailwind color classes failing contrast.

It is open source, MIT licensed, and installs in about thirty seconds. I built it because I need it. And I know I am not the only one. If you work with AI coding tools and care about accessibility, star the repo and share this with your team. The more people involved, the better it gets.

GitHub: https://lnkd.in/geYhcZm3
Full writeup: https://lnkd.in/gZdQVxr5

#Accessibility #a11y #OpenSource #WCAG #ClaudeCode #AI #WebDevelopment #AssistiveTechnology #ScreenReader #DevTools #InclusiveDesign
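To make the contrast point concrete, here is a minimal sketch of the check a "Contrast Master" agent would enforce, written directly from the WCAG 2.1 relative-luminance and contrast-ratio formulas. This is an illustration of the standard, not code from the repo:

```python
# Illustrative WCAG 2.1 contrast check (not code from the project above).

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2.1 relative luminance of an sRGB color (0-255 per channel)."""
    def linearize(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def meets_aa(fg, bg, large_text=False):
    """WCAG 2.1 Level AA: 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

By these formulas, black on white is exactly 21:1 and passes easily, while a mid-gray like rgb(119, 119, 119) on white lands just under 4.5:1 and fails AA for normal text, which is exactly the kind of near-miss that is easy to ship without an automated check.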
Improving Access to AI Tools
Explore top LinkedIn content from expert professionals.
Summary
Improving access to AI tools means making artificial intelligence technologies easier to use and more widely available for everyone, regardless of technical background, abilities, or resources. This includes breaking down barriers such as limited accessibility, complicated interfaces, and unequal opportunities so more people can benefit from AI in their work and daily lives.
- Prioritize inclusivity: Design AI tools with accessibility features like screen reader compatibility and clear labeling so people of all abilities can participate.
- Expand training opportunities: Offer flexible learning programs and mentorship to help people from diverse backgrounds build AI skills without strict schedules or prerequisites.
- Build scalable systems: Organize AI tools into structured workflows and automate repeatable tasks so teams and individuals can consistently use AI without technical bottlenecks.
My recent research, which examines the adoption of emerging technologies through a gender lens, illuminates continued disparities in women's experiences with Generative AI. Day after day we continue to hear about the ways GenAI will change how we work, the types of jobs that will be needed, and how it will enhance our productivity, but are these benefits equally accessible to everyone? My research suggests otherwise, particularly for women.

🕰️ The Time Crunch: Women, especially those juggling careers with care responsibilities, are facing a significant time deficit. Across the globe women spend up to twice as much time as men on care and household duties, resulting in women not having the luxury of time to upskill in GenAI technologies. This "second shift" at home is increasing an already wide divide.

💻 Tech Access Gap: Beyond time constraints, many women face limited access to the technology needed to engage with GenAI effectively. This isn't just about owning a computer; it's about having consistent, uninterrupted access to high-speed internet and up-to-date hardware capable of running advanced AI tools. According to the GSMA, women in low- and middle-income countries are 20% less likely than men to own a smartphone and 49% less likely to use mobile internet.

🚀 Career Advancement Hurdles: The combination of time poverty and tech access limitations is creating a perfect storm. As GenAI skills become increasingly expected in the workplace, women risk falling further behind in career advancement opportunities and pay. This is especially an issue in tech-related fields and leadership positions. Women account for only about 25% of engineers working in AI, and less than 20% of speakers at AI conferences are women.

🔍 Applying a Gender Lens: By viewing this issue through a gender lens, we can see that the rapid advancement of GenAI threatens to exacerbate existing inequalities.
It's not enough to create powerful AI tools; we must ensure equitable access and opportunity to leverage these tools.

📈 Moving Forward: To address this growing divide, we need targeted interventions:
- Flexible, asynchronous training programs that accommodate varied schedules.
- Initiatives to improve tech access in underserved communities.
- Workplace policies that recognize and support employees with caregiving responsibilities.
- Mentorship programs specifically designed to support women in acquiring GenAI skills.

There is great potential with GenAI, but also a risk of leaving half our workforce behind. It's time for tech companies, employers, and policymakers to recognize and address these gender-specific barriers. Please share initiatives or ideas you have for making GenAI more inclusive and accessible for everyone.

#GenderEquity #GenAI #WomenInTech #InclusiveAI #WorkplaceEquality
-
AI field note: introducing Toolshed from PwC, a novel approach to scaling tool use with AI agents (and winner of best paper/poster at ICAART).

LLMs limit the number of external tools an agent can use at once, usually to about 128. That sounds like a lot, but in a real-world enterprise it quickly becomes a constraint, creating a major bottleneck for applications like database operations or collaborative AI systems that need access to hundreds or thousands of specialized functions.

Enter Toolshed, a novel approach from PwC that reimagines tool retrieval and usage, enabling AI systems to effectively utilize thousands of tools without fine-tuning or retraining. Toolshed introduces two primary technical components that work together to enable scalable tool use beyond the typical 128-tool limit:

📚 Toolshed Knowledge Bases: Vector databases optimized for tool retrieval that store enhanced representations of each tool, including the tool name and description, the argument schema with parameter details, synthetically generated hypothetical questions, key topics and intents the tool addresses, and tool-specific metadata for execution.

🧲 Advanced RAG-Tool Fusion: A comprehensive three-phase approach that creatively applies retrieval-augmented generation techniques to the tool selection problem: enhancing tool documents with rich metadata and contextual information, decomposing queries into independent sub-tasks, and reranking to ensure optimal tool selection.

The paper demonstrates significant quantitative improvements over existing methods through rigorous benchmarking and systematic testing:
⚡️ 46-56% improvement in retrieval accuracy (on the ToolE and Seal-Tools benchmarks vs. standard methods like BM25).
✨ An optimized top-k selection threshold that systematically balances retrieval accuracy with agent performance and token costs.
💫 Scalability testing: proven effective when scaling to 4,000 tools.
🎁 Zero fine-tuning required: Works with out-of-the-box embeddings and LLMs. Not too shabby. Toolshed addresses challenges in enterprise AI deployment, offering practical solutions for complex production environments such as cross-domain versatility (we successfully tested across finance, healthcare, and database domains), secure database interactions, multi-agent orchestration, and cost optimization. Congratulations to Elias Lumer, Vamse Kumar Subbiah, and team for winning the best poster award at the International Conference on Agents and AI! For any organization building production AI systems, Toolshed offers a practical path to more capable, reliable tool usage at scale. Really impressive and encouraging work. Link in description.
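The core retrieval idea can be sketched in a few lines. This is a toy illustration only, not the paper's implementation: it uses bag-of-words vectors where Toolshed uses dense LLM embeddings and a real vector database, and the tool names and descriptions here are invented for the example:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; Toolshed stores enhanced tool documents
    # (descriptions, schemas, hypothetical questions) as dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical tool registry: name -> retrievable description document.
TOOLS = {
    "get_invoice": "retrieve an invoice by id from the billing database",
    "send_email": "send an email message to a recipient",
    "list_patients": "list patient records from the healthcare system",
}

def retrieve_tools(query: str, k: int = 2) -> list[str]:
    """Return the top-k tools by similarity; only these are passed to the LLM,
    so the agent never sees the full registry at once."""
    q = embed(query)
    ranked = sorted(
        TOOLS, key=lambda name: cosine(q, embed(TOOLS[name])), reverse=True
    )
    return ranked[:k]
```

The point of the design is that the model's context only ever holds the k retrieved tool schemas, so the registry behind the retriever can grow to thousands of tools without touching the 128-tool ceiling.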
-
This seems to be on everyone’s mind: how to operationalize your product team around AI. Peter Yang and I recently chatted about this topic, and here’s what I shared about how we are doing this at Duolingo.

For improving our product:
- Using AI to solve problems that weren’t solvable before. One of the problems we had been trying to solve for years was conversation practice. With our Max feature, Video Call, learners can now practice conversations with our character Lily. The conversations are also personalized to each learner’s proficiency level.
- Prototyping with AI to speed up the product process. For example, for Duolingo Chess, PMs vibe-coded with LLMs to quickly build a prototype. This decreased rounds of iteration, allowing our Engineers to start building the final product much sooner.
- Integrating AI into our tooling to scale. This allowed us to go from 100 language courses in 12 years to nearly 150 new ones in the last 12 months.

For increasing AI adoption:
- Building with AI Slack channels. Created an AI Slack channel for people to show and tell and share prototypes and tips.
- “AI Show and Tell” at All-Hands meetings. Added a five-minute live demo slot in every all-hands meeting for people to share updates on AI work.
- FriAIdays. Protected a two-hour block every Friday for hands-on experimentation and demos.
- Function-specific AI working groups. Assembled a cross-functional group (Eng, PM, Design, etc.) to test new tools and share best practices with the rest of the org.
- Company-wide AI hackathon. Scheduled a 3-day hackathon focused on using generative AI.

Here are some of our favorite AI tools and how we are using them:
- ChatGPT as a general assistant
- Cursor or Replit for vibe coding or prototyping
- Granola or Fathom for taking meeting notes
- Glean for internal company search

#productmanagement #duolingo
-
Most AI tool lists miss the point. The advantage doesn’t come from knowing more tools. It comes from knowing where they fit in your workflow.

Right now most people use AI like this: try a tool → generate something → move on. No structure. No repeatability. So the productivity gains stay small. The real leverage appears when you treat AI tools like a stack, not a collection of apps. Almost every modern AI workflow fits into four layers. If you understand these layers, you can build systems that run every week without starting from scratch.

1️⃣ Thinking layer: tools that help you clarify problems and structure ideas (ChatGPT, Claude). Use them to research unfamiliar topics, break down complex problems, outline strategies and plans, and stress-test ideas before execution. Most people jump straight to creation. The real value often starts one step earlier: better thinking.

2️⃣ Creation layer: tools that turn ideas into assets. Writing tools (Jasper, Writesonic), design tools (Canva AI, Flair), image tools (Midjourney, DALL-E, Stable Diffusion), and video tools (Runway, HeyGen, Synthesia). This layer turns raw ideas into presentations, visuals, videos, marketing assets, and documentation. Think of it as production infrastructure for knowledge work.

3️⃣ Automation layer: tools that connect steps together (Zapier, Make, Bardeen). Instead of repeating tasks manually, these tools move information between systems, trigger actions automatically, and remove repetitive work. Example: research → draft → create visuals → publish. Automation turns that into a repeatable pipeline.

4️⃣ Deployment layer: tools that deliver work to customers and teams. Websites (Framer, Durable), chatbots (Chatbase, SiteGPT), and marketing tools (AdCreative, Simplified). This is where work becomes websites, marketing campaigns, customer experiences, and digital products. Without deployment, great AI output never reaches the real world.

If you run a business or lead a team, here’s a simple playbook:

Step 1: Pick one tool per layer. You don’t need ten tools doing the same job.
Step 2: Design one repeatable workflow. Example: research with ChatGPT → draft content → create visuals in Canva → automate publishing with Zapier.
Step 3: Automate the steps that repeat every week. Anything you do more than three times should become a system.
Step 4: Improve the workflow over time. Small improvements compound faster than constantly switching tools.

The people getting the most value from AI right now are not the ones testing every new tool. They are the ones building simple systems that run every day. Tools will change. Workflows compound.

💾 Save this if you’re building your AI stack.
♻️ Repost to help others move from experimenting with AI to actually using it in their work.
➕ Follow Gabriel Millien for practical insights on AI execution and building real leverage with AI.

Image credit: Aditya Goenka
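The "repeatable workflow" in the playbook above is just function composition: one step per layer, chained so the weekly run becomes a single call. A minimal sketch, with placeholder step bodies standing in for real tool API calls:

```python
# Each layer of the stack becomes one step function; the step names and
# bodies here are placeholders, not real tool integrations.

def research(topic: str) -> str:
    return f"notes on {topic}"        # thinking layer (e.g. ChatGPT)

def draft(notes: str) -> str:
    return f"draft based on {notes}"  # creation layer (writing tool)

def design(draft_text: str) -> str:
    return f"visuals for {draft_text}"  # creation layer (design tool)

def publish(asset: str) -> str:
    return f"published: {asset}"      # deployment layer

def run_pipeline(topic: str, steps=(research, draft, design, publish)) -> str:
    """Chain the steps so the whole workflow is one repeatable call."""
    result = topic
    for step in steps:
        result = step(result)
    return result
```

Because the pipeline takes its steps as a parameter, swapping one tool for another means replacing one function, not rebuilding the workflow, which is the "tools will change, workflows compound" point in code form.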
-
Unlocking the Potential of AI in Emerging Economies: Reflections on the UN’s High-Level Report

Over the last 5 days at UNGA in NY, I've had the privilege of giving 4 keynotes on the intersections of AI and AMR, misinformation, climate, and even pandemic resilience. I've been wowed by the frontier tech on display by Google, Meta, OpenAI, and many others! AI and DPI (Digital Public Infrastructure) are in nearly every discussion, amplified by the new Global Digital Compact.

The UN’s report "Governing AI for Humanity" from the High-level Advisory Body on AI is a milestone. While AI has long been used in global health for tasks like diagnostic imaging and predicting sepsis, fast-evolving large language and multimodal models require us to rethink how we govern, assess, and implement AI. WHO has emphasized the importance of ethical and regulatory frameworks to ensure AI is used responsibly. Our guidance (below) focuses on creating AI that is technically robust, culturally relevant, and contextually appropriate for diverse settings. AI must be designed to work in the unique environments where it will be deployed, addressing priorities on the ground.

How do we make AI tools more accessible? Open-sourcing models is a good start, but it’s not enough. Models should be validated in real-world settings, under the conditions typical of many LMICs, where low bandwidth, intermittent connectivity, and limited access to advanced compute are common obstacles. AI systems have to operate effectively within these constraints, and we have to develop the necessary infrastructure to enable continuous evaluation and fine-tuning.

We have to move beyond the notion that LMICs need to gather more data before they can fully engage with AI. The truth is, we don’t need a perfect starting point, and we will likely never have one! Foundational AI models are designed to learn and evolve.
It’s up to us to create systems that allow these models to be refined and adapted to local contexts, with appropriate safeguards. Waiting only risks widening the digital divide and leaving many countries behind in the global race for AI innovation.

We have to shift focus from simply validating AI models to validating the entire process of using AI in health. Systems are dynamic and evolving, and we need to be just as agile in how we monitor their deployment. We're not there yet with the tools and benchmarking that's needed, but we're working on it!

So, what’s next? We must act quickly:
- Invest in the necessary infrastructure, such as computing power, not just for training models but for deploying them where they are needed.
- Support large-scale collaborations that build systems in a sustainable and inclusive way.
- Foster strong partnerships across governments, academia, and the private sector to ensure transparency and accountability.

#AIforHealth #DigitalHealth #LMICs #UNGA79 Nick Martin Bilal A Mateen Annie Hartley Sameer Pujari Rubayat Khan Rebecca Distler Fred Hersch Trevor Mundel
-
Throwing AI tools at your team without a plan is like giving them a Ferrari without driving lessons. AI only drives impact if your workforce knows how to use it effectively.

After (1) defining objectives, (2) assessing readiness, and (3) piloting use cases with a tiger team, Step 4 is about empowering the broader team to leverage AI confidently. Boston Consulting Group (BCG) research and Gilbert’s Behavior Engineering Model show that high-impact AI adoption is 80% about people, 20% about tech. Here’s how to make that happen:

1️⃣ Environmental Supports: Build the Framework for Success
- Clear guidance: Define AI’s role in specific tasks. If a tool like Momentum.io automates data entry, outline how it frees up time for strategic activities.
- Accessible tools: Ensure AI tools are easy to use and well-integrated. For tools like ChatGPT, create a prompt library so employees don’t have to start from scratch.
- Recognition: Acknowledge team members who make measurable improvements with AI, like reducing response times or boosting engagement. Recognition fuels adoption.

2️⃣ Empower with Tiger Team Champions
- Use tiger/pilot team champions: Leverage your pilot team members as champions who share workflows and real-world results. Their successes give others confidence and practical insights.
- Role-specific training: Focus on high-impact skills for each role. Sales might use prompts for lead scoring, while support teams focus on customer inquiries. Keep it relevant and simple.
- Match tools to skill levels: For non-technical roles, choose tools with low-code interfaces or embedded automation. Keep adoption smooth by aligning with current abilities.

3️⃣ Continuous Feedback and Real-Time Learning
- Pilot insights: Apply findings from the pilot phase to refine processes and address any gaps. Updates based on tiger team feedback benefit the entire workforce.
- Knowledge hub: Create an evolving resource library with top prompts, troubleshooting guides, and FAQs. Let it grow as employees share tips and adjustments.
- Peer learning: Champions from the tiger team can host peer-led sessions to show AI’s real impact, making it more approachable.

4️⃣ Just-in-Time Enablement
- On-demand help channels: Offer immediate support options, like a Slack channel or help desk, to address issues as they arise.
- Use AI to enable AI: Create custom GPTs that are task- or job-specific to lighten workload and cognitive load. Leverage NotebookLM.
- Troubleshooting guide: Provide a quick-reference guide for common AI issues, empowering employees to solve small challenges independently.

AI’s true power lies in your team’s ability to use it well. Step 4 is about support, practical training, and peer learning led by tiger team champions. By building confidence and competence, you’re creating an AI-enabled workforce ready to drive real impact.

Step 5 coming next ;)

P.S. My next podcast guest and I talk about what happens when AI does a lot of what humans used to do… Stay tuned.
-
Chief Justice John Roberts’ end-of-year letter addresses AI at the courts and its potential to increase access to justice. Miriam Kim and I, with students, have posted a new short paper on this topic, How LLMs Can Help Address the Access to Justice Gap through the Courts (https://lnkd.in/gedyRnJG), that demonstrates ways generative AI tools can help low-income Americans get the help they need.

In our paper, the first in a series, using the example of the Arizona courts, we show how LLMs can (1) translate court documents into languages like Navajo, (2) help consumers find legal assistance, (3) simplify the expungement process, (4) provide assistance in eviction proceedings, and (5) help courts determine the steps needed to make such capabilities available to the public. The legal system is impenetrable and inaccessible to many, and the translation and refactoring capabilities of LLMs can help bridge the gap and realize the promise of the law.

To support further work, and for illustrative purposes, we publish two GPT-powered chatbots built on information from existing websites hosted by the Arizona state courts (https://lnkd.in/g3eGem46 and https://lnkd.in/gWfeiEak), provide all of our prompts and instructions for implementing the five use cases described above in an appendix, and compare and contrast the different responses we get from the different platforms. This is an early draft, and we welcome your comments and thoughts on the topic.

The paper: https://lnkd.in/gedyRnJG by Colleen Chien, Miriam Kim, Akhil Raj, Rohit Rathish, Shaunak Chaudry
-
🌟 As a faculty mentor at the American Association of Colleges and Universities (AAC&U) Institute, I had the privilege of moderating discussions with representatives from forward-thinking institutions last week. https://lnkd.in/gak9VPjP These conversations revealed exciting projects and critical insights into integrating AI in teaching, learning, and research.

🚀 Innovative Projects:
1. William & Mary: Developed a “Gen AI Proficiency” framework to guide curriculum changes.
2. University of Virginia: Hired students to co-create AI guides, workshops, and a hackathon, centering student voices.
3. Clemson University: Launched a “10-Step AI Challenge”, a sandbox program for faculty and students to explore AI tools in a low-stakes environment.
4. University of the Pacific: Conducted AI surveys and implemented tools like Scopus AI and Grammarly.
5. Virginia Commonwealth University: Piloting AI literacy in general education with faculty ambassadors promoting discipline-specific strategies.
6. Moravian University: Refining AI policies and running pilot programs to explore AI’s benefits and challenges.
7. Seton Hall University: Focused on faculty autonomy with sample syllabus statements and student-led AI discussions.

💡 Key Insights from the Discussions:
1. Defining #AILiteracy: Institutions are grappling with creating flexible definitions of AI literacy that align with diverse disciplines and evolving technologies.
2. Addressing #Access and #Equity: Access to advanced AI tools, like Scopus AI, remains unequal, creating challenges for under-resourced institutions. Open AI resources are crucial for closing these gaps.
3. #Faculty Buy-In: Hesitation persists due to concerns about ethics, workload, and job security. Tailored workshops and faculty ambassadors are helping to build confidence.
4. #Student-Centered Approaches: Engaging students as co-creators in AI initiatives, as seen at UVA, adds valuable perspectives and fosters adoption.
5. Long-Term #Sustainability: Institutions are shifting focus to teaching principles and transferable skills to adapt to AI’s rapidly changing landscape.

Thank you to Sharon Stoerger for co-facilitating with me, and to C. Edward Watson and Hannah Schneider for organizing a wonderful Mid-Year event! #AIinEducation #HigherEducation #GenerativeAI #FutureOfLearning #AACU
-
Giving someone an AI tool without context is like handing them a nail gun when they’ve only ever swung a hammer. Risky at best. Costly at worst.

If you want to be pragmatic about using AI, then you should be intentional about learning how it works. That’s the mindset we’ve taken across our entire organization, and we’ve made specific investments in our training processes and workflows that ensure we’re using AI responsibly, effectively, and securely. Here’s how we’ve built that structure:

✅ Step 1: Start with education. Before anyone gets access to AI tools, like GitHub Copilot, they take a formal training course. It covers the fundamentals, best practices, and what to avoid. They have to pass an exam to show they understand the basics.

🔓 Step 2: Unlock tool access. Only after passing the exam do employees get access to our internal AI assistant. It’s a clear barrier designed to ensure responsible usage and protect the integrity of our codebase.

🔁 Step 3: Create a learning loop. We host monthly learning sessions that serve two purposes:
- Keep the team up to date on the latest advancements
- Create space for engineers to share what’s working, what’s not, and lessons learned along the way
These sessions also double as a safe place for feedback and iteration, so our own internal knowledge can evolve as the tools we use evolve.

This is what it looks like to put pragmatism into practice. It’s structured. It’s intentional. It protects the business while empowering the team.