🔥 A prison fire in Chile killed dozens. A supermarket fire in Paraguay ended with the same outcome. In both cases, blame fell on individuals at the scene. Guards were accused of failing to act. A store manager was held responsible for keeping people inside. But as fire safety engineer Jose Torero explained at a National Academy of Engineering and Committee on Human Rights workshop, the deeper causes were built into the #systems. Poor design, hazardous materials, and weak regulations shaped the outcome. In Chile, overcrowding and facility conditions left prison staff unable to rescue those trapped. In Paraguay, outdated safety codes allowed flammable insulation and locked exits. People had no way to escape.

These cases show that #safety isn't automatic. It depends on #engineering decisions, competent oversight, and a commitment to protect lives before such emergencies occur.

"Issues at the Intersection of Engineering and Human Rights," the newly released proceedings from The National Academies of Sciences, Engineering, and Medicine, explores how choices in #infrastructure, #design, and #education influence dignity, access, and protection. Some takeaways:

🔧 Safety is technical, but it's also political. Fires, floods, and failures often trace back to decisions about what gets built, where corners are cut, and who is expected to cope.

🏚️ Disparities are often engineered. People living with unsafe water, poor roads, or weak buildings are often living with someone else's decisions.

📐 Ethics alone aren't enough. Most engineering curricula treat human rights as a side note, if at all. As a result, professionals are trained to solve problems without asking whose problem it is.

♿ Design reaches further than intended. A wheelchair ramp helps more than wheelchair users. Good engineering has ripple effects.

🧠 Communities bring more than stories. They bring knowledge. When engineers treat them as vital input, the results are better for everyone.
🧭 Human rights point to what matters: who's affected, who decides, who's left out.

📘 Read more at: https://lnkd.in/ezkT8dZJ

👥 Thanks to the crew:
Leading: Charlie Bolden, Jr., Betsy Popken, Davis Chacon-Hurtado, Glen Daigger, Wesley Harris, Deb Niemeier.
Contributing: Theresa Harris, Maya Elizabeth Carrasquillo, Tyler Giannini, Shareen Hertel, Muhammad Hamid Zaman, Bernard Amadei, Mira Olson, Shirley Ann Jackson, Darshan Karwat, Carlton Waterhouse, Eric Buckley, Bethany Hoy, Kimberly L. Jones, Amy Smith, John Kleba, Michael Ashley Stein, Jay Aronson, Julie Owono, José Torero, Lindsey Andersen, Alice Agogino, Tamara E. Brown, Wendy Hui Kyong Chun, Katie Shay.
Staffing: David Butler, Rebecca Everly, Casey Gibson, Ana Deros, Hoang-Nam Vu, Chessie Briggs, and Joe Alper.
Human Rights Considerations in Engineering
Summary
Human rights considerations in engineering involve making sure that technical decisions, designs, and practices respect and protect people's basic dignity, safety, and rights. From digital technology to infrastructure projects, this means engineers must think beyond technical outcomes to consider the real-world impacts on individuals and communities.
- Prioritize safety and dignity: Design and implement projects with an emphasis on protecting lives, preventing harm, and ensuring everyone’s basic rights are respected.
- Engage impacted communities: Include the voices and experiences of those affected by engineering decisions to create solutions that are fair and responsive to real needs.
- Support transparency and accountability: Adopt clear processes and accessible tools to monitor risks, address concerns, and remedy any harms that might occur during engineering projects.
The adoption of digital technologies is transforming how organisations tackle Environmental, Social, and Governance (ESG) challenges, particularly in managing human rights risks in operations and supply chains. Since being awarded an Australian Research Council DECRA in 2020, I have focused on exploring the role of digital tools in ESG. Through my research at RMIT University, I have examined innovative approaches that combine technology with human rights principles to drive accountability and sustainability.

From blockchain and mobile apps to AI-driven analytics, a growing array of tools is helping businesses detect and address human rights risks in their operations and supply chains. But with the explosion of commercial and non-commercial options, effective adoption requires a well-planned, coherently executed strategy. Based on insights from my recent comparative study of digital innovations, here are a few guidelines for ESG managers:

*** Understand Value Beyond Auditing ***
Many tools promise real-time reporting, but often they are just another auditing function. Ask whether the technology provides more than static assessments: does it actively help mitigate risks? For instance, can it identify vulnerabilities and offer solutions to reduce them?

*** Prioritise Compatibility and User-Friendliness ***
Digital tools must align with your organisation's systems, workflows, and values. Poor alignment often leads to tools being underutilised or abandoned. Choose solutions that integrate seamlessly and are easy to use across teams.

*** Connect Tools to Training and Remediation ***
Effective digital tools don't just survey affected communities; they empower them. The most impactful tools build trust, provide training, and include grievance mechanisms to resolve issues face-to-face. This approach drives meaningful risk reduction.
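The "value beyond auditing" idea above can be sketched in code. This is a hypothetical illustration, not any real tool's API: the supplier names, signal fields, and remediation text are all invented. The point is the shape of the output — each detected risk is paired with a suggested remediation step, rather than reduced to a static pass/fail audit flag.

```python
from dataclasses import dataclass

# Invented field names for illustration only; a real tool would draw
# on worker surveys, payroll data, grievance logs, etc.
@dataclass
class SupplierSignal:
    supplier: str
    wage_below_minimum: bool
    grievances_open: int

# Each risk maps to a concrete next step, not just a red flag.
REMEDIATIONS = {
    "wage_below_minimum": "Escalate to contract review and worker interviews",
    "grievances_open": "Route to grievance mechanism with training follow-up",
}

def screen(signal: SupplierSignal) -> list[tuple[str, str]]:
    """Return (risk, suggested remediation) pairs for one supplier."""
    findings = []
    if signal.wage_below_minimum:
        findings.append(("wage_below_minimum", REMEDIATIONS["wage_below_minimum"]))
    if signal.grievances_open > 0:
        findings.append(("grievances_open", REMEDIATIONS["grievances_open"]))
    return findings

print(screen(SupplierSignal("Factory A", wage_below_minimum=True, grievances_open=2)))
```

A tool structured this way answers the question posed above — it identifies vulnerabilities *and* offers steps to reduce them — rather than producing one more static assessment.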
These research efforts align with my broader commitment to equipping organisations with the tools and strategies needed to address modern slavery risks and uphold human rights standards within their ESG frameworks. Achieving progress in ESG requires innovation, collaboration and actionable insights. If you’re interested in learning more or collaborating, I’d love to connect and discuss further. 🔗 https://lnkd.in/g2ffcVuX #ESG #HumanRights #DigitalInnovation #ModernSlavery #bizhumanrights #Sustainability, RMIT College of Business and Law, Kok-Leong Ong, RMIT Business and Human Rights Centre, Finn Devlin
-
📢 Human Rights Assessment of the GenAI Value Chain by BSR

If you're working on responsible AI, this one's a must. Really, what could be more important than considering human rights in AI governance? BSR has just released a comprehensive human rights risk assessment for the generative AI value chain, and it's not just theory. This report breaks down how each actor (from dataset providers to end users) may be connected to real-world human rights impacts.

🧩 What's inside?
-> A value chain analysis covering suppliers, foundation model developers, downstream developers, deployers, and users.
-> Detailed breakdowns of risk pathways for discrimination, surveillance, misinformation, privacy violations, job loss, and more.
-> Clear links to international human rights instruments (UDHR, ICCPR, ICESCR, CEDAW, etc.).
-> Practical and well-structured recommendations for each actor, including suppliers and deployers, who are often left out.
-> A full set of practitioner guides covering stakeholder engagement, transparency, governance, impact assessment, risk mitigation, enforcement, and remedy.

💡 Why it matters: While many AI governance tools focus on abstract principles, BSR's approach is concrete, actionable, and grounded in international law. This series stands out because it puts people at the center of AI governance, with direct links to the UNGPs and a sharp focus on actual and foreseeable harm.

🙌 Sincere thanks to the full team at BSR for producing this high-quality, deeply thoughtful resource. Special recognition to J.Y. Hoh, Samone Nigam, Lindsey Andersen, and Hannah Darnton: your work reflects a rare balance of clarity, strategic depth, and moral grounding. This series will be an essential reference for anyone serious about integrating human rights into AI governance.

#AIGovernance #HumanRights #ResponsibleAI #BSR #GenerativeAI

=== Did you like this post? Connect or Follow 🎯 Jakub Szarmach. Want to see all my posts? Ring that 🔔.
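The value-chain framing described above lends itself to a simple lookup structure. The sketch below is illustrative only: the actor names follow the post, but the assignment of specific risk pathways to actors is an invented example, not BSR's actual mapping.

```python
# Hypothetical actor -> risk-pathway map for the GenAI value chain.
# Actor names come from the report's framing; the risk lists are
# examples, not BSR's assessment.
GENAI_VALUE_CHAIN = {
    "dataset providers": ["privacy violations", "discrimination"],
    "foundation model developers": ["discrimination", "misinformation"],
    "downstream developers": ["surveillance", "misinformation"],
    "deployers": ["surveillance", "job loss"],
    "users": ["misinformation"],
}

def actors_exposed_to(risk: str) -> list[str]:
    """List every actor whose risk pathway includes the given risk."""
    return [actor for actor, risks in GENAI_VALUE_CHAIN.items() if risk in risks]

print(actors_exposed_to("misinformation"))
```

Structuring the assessment this way makes the per-actor recommendations queryable: given a risk of concern, you can immediately see which actors in the chain need mitigation guidance.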
-
A thoughtful comment on my recent post about New Orleans' secret facial recognition program (https://lnkd.in/eJ3hV6_f) raised a question I hear often: "Who decides what values should guide tech deployment? Values change over time and vary across people—isn't building policy on values exactly what creates marginalization?"

It's a fair and honest question that gets to the heart of why many organizations struggle with ethical tech decisions. But it's not as intractable as it seems. We don't need perfect consensus on all values to make good decisions about technology deployment. We need alignment on fundamental principles that protect human dignity and agency.

Here's what works in practice:

• Start with shared human fundamentals. Despite our differences, most people agree on the basics: the right to be treated with dignity, to have agency over our lives, to be safe from arbitrary harm. As Cennydd Bowles notes in "Future Ethics" (a must-read!), forty-eight nations found enough common ground to encode these into the Universal Declaration of Human Rights.

• Focus on process transparency, not value prescription. Instead of asking "What values should guide this?" ask "Who gets to participate in this decision, and how?" The New Orleans case failed precisely because there was no inclusive decision-making process.

• Use meaning-making as a framework. As I explore in "What Matters Next," meaningful tech emerges from the overlap between what we intend, what we actually do, and what others understand. Values-aligned tech happens when there is transparency and alignment, not when a small group decides for everyone else.

• Ground decisions in shared harm prevention. Reid Blackman, Ph.D.'s "Ethical Machines" (another must-read!) makes a crucial point: we don't need to agree on grand ethical theories to identify ethical risks. Most people can agree that systematic discrimination, privacy violations, and erosion of trust constitute harm worth preventing.
• Balance the harms of action versus inaction. This isn't about avoiding all risk; it's about choosing which harms to confront. In New Orleans, leaders weighed the harms of action (potential privacy violations, erosion of trust) against the harms of inaction (potential security risks). But they made this choice in secret, without community input. When we delay ethical decision-making because we can't achieve perfect consensus, we're not avoiding harm; we're choosing to accept the harms of the status quo.

The question isn't "Whose values win?" It's "How do we create systems where affected communities have meaningful input into decisions that impact them?" That's not moral relativism; that's democratic responsibility.

What's been your experience finding common ground on these issues?

#TechHumanist #DigitalEthics #TechEthics #AIEthics #MeaningfulTech #TechGovernance #WhatMattersNextbook

(📸: John A. DeMato)
-
The Dark Side of India's Infrastructure Boom

India's infrastructure sector is witnessing rapid growth, with projects in highways, railways, and other areas advancing swiftly. However, the rise of small construction companies has raised serious concerns. These firms often secure contracts by bidding at unrealistically low rates, only to exploit their workforce, particularly civil engineers, to cut costs.

These companies demand 12–14-hour workdays from engineers without weekly holidays. If an employee refuses to work on Sundays, their salary is cut, or they receive warnings threatening termination. Salaries are often delayed by one to two months, and some companies even withhold the final month's salary if an employee resigns. This exploitation is rampant in firms handling highway and railway projects.

Employees are housed in makeshift camps in remote areas, lacking basic facilities like proper medical care or nutritious food. Such conditions severely disrupt the social and professional lives of civil engineers, leaving them physically and mentally drained. By age 35–40, many engineers suffer from various health issues due to relentless work pressure and poor living conditions.

Despite these clear violations of labour laws, both companies and authorities remain silent, ignoring the exploitation. This unchecked malpractice not only harms employees but also undermines the quality and sustainability of India's infrastructure growth.

#InfrastructureBoom #ConstructionIndustry #WorkerExploitation #CivilEngineers #LabourRights