Evolving Corporate Values

Explore top LinkedIn content from expert professionals.

  • View profile for Marc Beierschoder
    Marc Beierschoder is an Influencer

    Most companies scale the wrong things. I fix that. | From complexity to repeatable execution | Partner, Deloitte

    147,636 followers

    66% of AI users say data privacy is their top concern. What does that tell us? Trust isn’t just a feature - it’s the foundation of AI’s future. When breaches happen, the cost isn’t measured in fines or headlines alone - it’s measured in lost trust.

    I recently spoke with a healthcare executive who shared a haunting story: after a data breach, patients stopped using their app - not because they didn’t need the service, but because they no longer felt safe. This isn’t just about data. It’s about people’s lives - trust broken, confidence shattered.

    Consider the October 2023 incident at 23andMe: unauthorized access exposed the genetic and personal information of 6.9 million users. Imagine seeing your most private data compromised.

    At Deloitte, we’ve helped organizations turn privacy challenges into opportunities by embedding trust into their AI strategies. For example, we recently partnered with a global financial institution to design a privacy-by-design framework that not only met regulatory requirements but also restored customer confidence. The result? A 15% increase in customer engagement within six months.

    How can leaders rebuild trust when it’s lost?

    ✔️ Turn Privacy into Empowerment: Privacy isn’t just about compliance. It’s about empowering customers to own their data. When people feel in control, they trust more.
    ✔️ Proactively Protect Privacy: AI can do more than process data - it can safeguard it. Predictive privacy models can spot risks before they become problems, demonstrating your commitment to trust and innovation.
    ✔️ Lead with Ethics, Not Just Compliance: Collaborate with peers, regulators, and even competitors to set new privacy standards. Customers notice when you lead the charge for their protection.
    ✔️ Design for Anonymity: Techniques like differential privacy keep sensitive data safe while enabling innovation. Your customers shouldn’t have to trade their privacy for progress.

    Trust is fragile, but it’s also resilient when leaders take responsibility. AI without trust isn’t just limited - it’s destined to fail. How would you regain trust in this situation? Let’s share and inspire each other 👇

    #AI #DataPrivacy #Leadership #CustomerTrust #Ethics
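The "design for anonymity" point can be made concrete. Below is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy, applied to a counting query. The dataset, threshold, and epsilon values are illustrative assumptions, not from any engagement described above.

```python
import math
import random

def dp_count(values, epsilon: float, threshold: float) -> float:
    """Return a differentially private count of values above a threshold.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if v > threshold)
    scale = 1.0 / epsilon
    # Inverse-transform sample from Laplace(0, scale).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; the analyst sees a useful aggregate while no single individual's record can be confidently inferred from the output.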

  • View profile for Daniel Baulch

    Managing Director | Investigations | Integrity Advisory | Financial Crime, ABC & Regulatory Risk | Governance, Assurance & High-Risk Matters

    10,572 followers

    What would PNG look like if we held MPs to the same standards?

    RNZ reports that a second Fijian Deputy Prime Minister has been charged by the country’s anti-corruption body in as many weeks, with resignations following and court dates set. That’s institutions doing their job—openly, quickly, and without fear or favour.

    Now imagine the same in PNG. Too many of our political leaders maintain private business interests that receive public money—creating conflicts that would be unacceptable under best-practice integrity regimes. If we applied clear, enforced rules, here’s what would change:

    *️⃣ Real conflict-of-interest controls: Ministers and MPs fully disclose interests, recuse where required, and place assets into blind trusts when necessary.
    *️⃣ Beneficial-ownership transparency: A public register that makes it obvious when a company seeking public funds is tied to an office-holder.
    *️⃣ Clean procurement: Hard bans (and active auditing) on contracts to entities controlled by sitting MPs or their close associates.
    *️⃣ Independent enforcement: Well-resourced integrity agencies able to investigate and lay charges—without political interference.
    *️⃣ Consequences that bite: Immediate step-aside/resignation conventions when charged; lifetime debarment from public tenders for proven misconduct.

    This isn’t about politics—it’s about protecting the public purse and restoring trust. Good people and honest businesses win when the rules are clear and applied equally.

    Question for all of us: if these standards were applied today, how much leakage would stop tomorrow—and how quickly would service delivery improve?

    #IntegrityMatters #PNG #GoodGovernance #AntiCorruption #ConflictOfInterest #Transparency #RuleOfLaw #PublicFinance https://lnkd.in/g3TtKcNX

  • View profile for Paula Cipierre
    Paula Cipierre is an Influencer

    Global Head of Privacy | LL.M. IT Law | Certified Privacy (CIPP/E) and AI Governance Professional (AIGP)

    9,513 followers

    Can law help build ethical AI systems by design, or does ethics resist formalization?

    In earlier posts, I argued that ethics is about reasoned judgement under uncertainty, and that regulation can create clarity where organizations otherwise struggle. With today’s post I want to connect law and ethics to technical implementation; specifically, the role that law can play in facilitating ethical data practices by design.

    Privacy professionals are well familiar with this concept, epitomized by Art. 25 GDPR, which requires organizations to implement data protection by design and by default. But as Prof. Christian Djeffal outlines in a recent article, law by design has since become a fixture of EU law: it translates legal and ethical goals into technical and organizational obligations while deliberately leaving discretion as to implementation.

    ➡️ What law can do well

    Frameworks like the GDPR and the AI Act show how law can meaningfully support ethical data practices by design:
    ✅ They shape how organizations structure the lifecycle of data processing, starting with an initial assessment of the necessity and proportionality of processing.
    ✅ They require organizations to clearly define roles and responsibilities from the beginning, and to document any relevant risks.
    ✅ They encourage organizations to seek diverse perspectives when developing and deploying new technologies, reflecting the inherently interdisciplinary nature of sociotechnical design.

    ➡️ What this means for ethical AI

    Ethics is no longer a nice-to-have when it is hardcoded into legal requirements. As I argued in my master’s thesis, the AI Act translates ethical obligations into technical requirements, specifically mandating:
    ✅ Respect for human autonomy, by requiring human oversight of the development and deployment of AI systems.
    ✅ The prevention of harm, through accuracy, robustness, and security.
    ✅ Fairness and explainability, through robust data governance and record-keeping.

    ➡️ Where law reaches its limits

    At the same time, law by design does not resolve dilemmas or trade-offs. Ethical behavior is not a technological fact but the result of human deliberation. Procedure matters just as much as outcome, and legal requirements alone do not tell organizations how to weigh competing priorities in practice.

    ➡️ What this means for leaders on ethical AI

    Law by design is not a shortcut to ethical AI, but it can create the right incentives. Leaders should:
    ✅ Leverage law-by-design requirements as a foundation for responsible data processing.
    ✅ Facilitate ethical deliberation to translate law-by-design requirements into concrete deliverables.
    ✅ Open up room for innovation by, in Djeffal’s words, "prompting the development of solutions where none yet exist."

    Link to Djeffal’s article: https://bit.ly/45Sj76P

    #ResponsibleAI #AIGovernance #DataEthics #Leadership

  • View profile for Vanessa Larco

    Formerly Partner @ NEA | Early Stage Investor in Category Creating Companies

    20,633 followers

    Before diving headfirst into AI, companies need to define what data privacy means to them in order to use GenAI safely.

    After decades of harvesting and storing data, many tech companies have created vast troves of the stuff - and not all of it is safe to use when training new GenAI models. Most companies can easily recognize obvious examples of personally identifiable information (PII) like Social Security numbers (SSNs) - but what about home addresses, phone numbers, or even details like how many kids a customer has? These details can be just as critical to ensuring newly built GenAI products don’t compromise their users’ privacy - or safety - but once this information has entered an LLM, it can be very difficult to excise.

    To safely build the next generation of AI, companies need to consider some key issues:

    ⚠️ Defining Sensitive Data: Companies need to decide what they consider sensitive beyond the obvious. PII covers more than just SSNs and contact information - it can include any data that paints a detailed picture of an individual and needs to be redacted to protect customers.

    🔒 Using Tools to Ensure Privacy: Ensuring privacy in AI requires a range of tools that can help tech companies process, redact, and safeguard sensitive information. Without these tools in place, they risk exposing critical data in their AI models.

    🏗️ Building a Framework for Privacy: Redacting sensitive data isn’t just a one-time process; it needs to be a cornerstone of any company’s data management strategy as they continue to scale AI efforts. Since PII is so difficult to remove from an LLM once added, GenAI companies need to devote resources to making sure it doesn’t enter their databases in the first place.

    Ultimately, AI is only as safe as the data you feed into it. Companies need a clear, actionable plan to protect their customers - and the time to implement it is now.
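The redaction step described above is often bootstrapped with pattern matching before any record reaches a training corpus. A minimal illustrative sketch follows; the patterns and placeholder labels are hypothetical, and production systems combine regexes with NER models, validation logic, and human review.

```python
import re

# Illustrative patterns for common PII types (not exhaustive).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII spans with a typed placeholder so the
    original values never enter the training data."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running the filter at ingestion time, rather than trying to scrub a trained model later, follows the post's point that PII is far easier to keep out of an LLM than to remove from one.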

  • View profile for Vishal Chopra

    Data Analytics & Excel Reports | Leveraging Insights to Drive Business Growth | ☕Coffee Aficionado | TEDx Speaker | ⚽Arsenal FC Member | 🌍World Economic Forum Member | Enabling Smarter Decisions

    12,512 followers

    As businesses integrate AI into their operations, the landscape of data governance and privacy laws is evolving rapidly. Governments worldwide are strengthening regulations, with frameworks like the GDPR, CCPA, and India’s DPDP Act setting higher compliance standards. But as AI becomes more embedded in decision-making, new challenges arise.

    🔍 Key Trends in Data Governance & Privacy Compliance

    ✔ Stricter AI Regulations: The EU AI Act mandates greater transparency, accountability, and ethical AI deployment. Businesses must document AI decision-making processes to ensure fairness.
    ✔ Beyond GDPR: Laws like China’s PIPL and Brazil’s LGPD signal a global shift toward tougher data protection measures.
    ✔ Scrutiny of Automated Decisions: Regulations are focusing on AI-driven decisions in areas like hiring, finance, and healthcare, demanding explainability and fairness.
    ✔ Consumer Control Over Data: The push for data sovereignty and stricter consent mechanisms means businesses must rethink their data collection strategies.

    💡 How Businesses Must Adapt

    To remain compliant and build trust, companies must:
    🔹 Implement Ethical AI Practices: Use privacy-enhancing techniques like differential privacy and federated learning to minimize risks.
    🔹 Strengthen Data Governance: Establish clear data access controls, retention policies, and audit mechanisms to meet compliance standards.
    🔹 Adopt Proactive Compliance Measures: Rather than reacting to regulations, businesses should embed privacy-by-design principles into their AI and data strategies.

    In this new era of ethical AI and data accountability, businesses that prioritize compliance, transparency, and responsible AI deployment will gain a competitive advantage.

    Is your business ready for the next wave of AI and privacy regulations? What steps are you taking to stay ahead?

    #DataPrivacy #EthicalAI #datadrivendecisionmaking #dataanalytics
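Of the privacy-enhancing techniques named above, federated learning is the one that changes the training topology: model updates travel, raw data does not. A toy sketch of federated averaging (FedAvg) for a linear model, with synthetic data standing in for client datasets; the learning rate, epoch counts, and data shapes are illustrative assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a least-squares objective.
    Raw data (X, y) never leaves the client; only weights are shared."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weights, clients):
    """FedAvg round: each client trains locally from the global weights,
    then the server averages updates weighted by local dataset size."""
    updates = [local_update(weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)
```

Because the server only ever sees weight vectors, the sensitive records stay on each client's infrastructure, which is the compliance property the post highlights.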

  • View profile for Jo Bird ✨
    Jo Bird ✨ is an Influencer

    Brand partnership Creativity Speaker, Brand Story Creator & Creative Director | Tell better stories, build better brands | Ex-Gymshark

    102,813 followers

    This Semrush rebrand needs to be STUDIED 🤯

    A rebrand by a tech giant with 28 million users is never just about a new logo. It’s a statement about where they believe the industry is going. Here’s what Semrush said: search has changed. Completely.

    - People aren’t just Googling anymore
    - AI-driven search is up 527% year on year
    - Consumers are finding brands through social, community forums, and AI
    - Discovery is fragmented, and most brand strategies aren’t prepared for it

    This isn’t a ‘2026 marketing trend.’ This is a fundamental change in how brand visibility works. Which is why I’m not just interested in the visuals (spoiler: I do love them 😏) - I’m interested in the story beneath them. For Semrush, the story is visibility, unified intelligence, and accessibility for all marketers.

    Here’s what I think they got right:

    → The Positioning: Moving from SEO tool to ‘brand visibility platform’ is a bigger idea and more honest about what modern brand discovery actually requires.
    → The Tagline: "Be found everywhere search happens" is an actionable phrase. Every brand can use it to test whether their current strategy works.
    → The Identity: Tech-inspired colours, intelligence-inspired graphics. It feels futuristic without losing the Semrush character we all know and love.

    What matters most about this rebrand is brand story integrity. Having worked inside brands for many years, I know that tech can help scale visibility, but it cannot be responsible for the identity itself. The right system is: story, then scale.

    Read more about the rebrand here → https://bit.ly/4d3GYVG

    What do you think? Has Semrush nailed this rebrand? 👇 [AD]

  • View profile for DAVID Sayce

    Head of Digital Marketing / Marketing Consultant for B2B & Professional Services. Helping firms fix what’s not working in Strategy, Search, Brand Visibility & AI-Driven Visibility ~ Available from September 2026

    25,824 followers

    Why is authenticity crucial in rebranding?

    For a rebrand to truly resonate, it needs to be authentic. Clients, partners, and stakeholders increasingly look for brands that communicate transparently and genuinely. Sharing the “why” behind a rebrand helps build trust, letting your audience see the purpose and values driving your transformation. When a rebrand feels authentic, it strengthens client connections, making them more likely to stay engaged with the new brand.

    Here are some key ways to communicate your “why” effectively:

    1️⃣ Transparent Messaging: Clearly outline the reasons behind your rebrand, whether it’s to reflect growth, a new direction, or a commitment to better serve your clients.
    2️⃣ Engaging Stories: Share stories that demonstrate your values and journey, helping clients connect emotionally with the new brand.
    3️⃣ Consistent Tone: Maintain a consistent, genuine tone across all channels to reinforce trust and familiarity.
    4️⃣ Feedback and Involvement: Engage your audience in the rebranding process by seeking feedback or even previewing changes. This inclusion fosters a sense of ownership and loyalty.

    By focusing on authenticity, a rebrand can become more than a visual change; it becomes an opportunity to reinforce your brand’s values and mission.

    What’s the main message you’d want to communicate in a rebrand?

    #Marketing #Branding #authenticity #DigitalMarketing

  • View profile for David Shields
    David Shields is an Influencer

    Chief Executive Officer

    23,836 followers

    This report from the Business & Human Rights Resource Centre, 'Bitter Truth: Migrant Worker Abuse in the Production of Sugar, Cocoa, and Coffee in Chiapas', published in April 2025, explores the harsh realities faced by agricultural workers in Chiapas, Mexico. It highlights a number of significant issues with #supplychain and #procurement practices within the sector:

    1. Labour Exploitation: Migrant workers, including Indigenous peoples from Central America, suffer from low wages, excessively long hours, unsanitary housing, harassment, and violence, particularly targeting women.
    2. Forced and Child Labour: Cases of modern slavery persist, with children exposed to hazardous working conditions.
    3. Health & Living Conditions: Lack of healthcare and social benefits; overcrowded and unsafe housing; exposure to agrochemical pollution, linked to childhood leukaemia and other illnesses.
    4. Climate Crisis Impacts: Rising temperatures affect crop yields, particularly coffee. Environmental degradation due to deforestation, agrochemical use, and industrial waste mismanagement.
    5. Transparency Issues: Many firms lack public #humanrights policies, particularly in the sugarcane sector.

    The lessons for #procurement and #supplychain functions from the report include:
    - Strengthen supplier accountability and require suppliers to publicly disclose human rights policies.
    - Ensure compliance with fair labour standards.
    - Implement ethical sourcing practices and prioritise suppliers with strong human rights commitments.
    - Avoid sourcing from companies with documented labour abuses.
    - Monitor and audit supply chains, conducting regular audits to verify compliance with labour rights and environmental standards.
    - Use independent verification mechanisms.
    - Support sustainable procurement, encouraging suppliers to reduce agrochemical use and adopt renewable energy.
    - Promote fair trade models that empower local communities.

    These recommendations aim to protect workers, increase transparency, and promote sustainability in agroindustry, but they are clearly applicable across many similar supply chains.

  • View profile for Asad Ansari

    Founder | Data & AI Transformation Leader | Driving Digital & Technology Innovation across UK Government and Financial Services | Board Member | Commercial Partnerships | Proven success in Data, AI, and IT Strategy

    29,664 followers

    What actually breaks transformation programmes: technology, or fragmentation?

    Three legacy systems. Zero single source of truth. One transformation programme to fix it.

    We led the mobilisation phase for a major public sector transformation replacing three legacy systems that had operated independently for years. The challenge was not technical complexity; it was operational fragmentation. Data existed in multiple places with no master version. Teams worked in silos using different methodologies. Deployment frequency was constrained by a lack of data-led insights. Enterprise data architecture suffered because nobody owned the complete picture.

    Here's what we actually did:
    1. Established integrated project teams pairing our experts with client resources.
    2. Worked side by side to build capability that stays after we finished.
    3. Conducted workshops and hands-on sessions on Agile data management.
    4. Implemented master data management processes and data governance tools.
    5. Created insights dashboards that gave visibility into what was actually happening.
    6. Introduced KPI monitoring and feedback mechanisms so teams could see impact.
    7. Delivered comprehensive training through train-the-trainer programmes.

    As a result:
    → 60 percent improvement in data team Agile development competency.
    → 40 percent increase in deployment frequency.
    → A pool of master trainers created who can upskill new joiners.
    → A single source of truth for data established across previously siloed systems.
    → A culture of continuous learning fostered instead of reliance on external expertise.

    The insight most organisations miss: transformation fails when it treats capability building as separate from delivery. The best programmes are the ones where external specialists work alongside internal teams, not instead of them. Where knowledge transfer is designed in from day one, not added as an afterthought when contracts end.

    The work is not finished when systems go live. It is finished when the organisation can run, improve, and evolve those systems without external dependency.

    How much of your transformation budget goes to building internal capability versus buying external delivery?
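Establishing a single source of truth typically comes down to a survivorship rule in master data management: group duplicate records by key and let the freshest non-empty value win per field. A minimal sketch of that merge, where the field names, the key, and the recency rule are illustrative assumptions rather than the programme's actual design:

```python
from collections import defaultdict

def build_golden_records(records, key="customer_id"):
    """Merge duplicate records from several source systems into one
    'golden' record per key, preferring the most recently updated
    non-empty value for each field."""
    grouped = defaultdict(list)
    for rec in records:
        grouped[rec[key]].append(rec)
    golden = {}
    for k, recs in grouped.items():
        # Oldest first, so later (newer) non-empty values overwrite.
        recs.sort(key=lambda r: r.get("updated_at", ""))
        merged = {}
        for rec in recs:
            for field, value in rec.items():
                if value not in (None, ""):
                    merged[field] = value
        golden[k] = merged
    return golden
```

Real MDM tooling adds fuzzy matching, source-system trust rankings, and stewardship workflows on top, but the core survivorship logic is this small.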

  • View profile for Preeth Pandalay

    Helping Agile leaders and teams make better decisions in the age of AI | Trainer & Advisor

    14,576 followers

    The Hidden Costs of Undervaluing Change Agents in Transformation

    In today's tech industry, Agile transformation is a strategic necessity. However, many organizations make the mistake of undervaluing their change agents, treating them as interchangeable roles or optional extras. This misstep undermines Agile implementation and leads to "Agile debt," a concept akin to technical debt.

    Why change agents matter: Change agents are more than facilitators or coaches; they are crucial leaders in Agile transformation. Their deep understanding of Agile principles and leadership experience drives meaningful change. Yet when companies substitute these roles with less experienced staff—like programmers doubling as change agents—they miss out on strategic leadership, causing long-term damage. Global employee engagement challenges magnify this issue. According to Gallup, only 23% of employees are actively engaged at work, while 15% are actively disengaged. Without strong Agile leadership, efforts to boost engagement and drive change often fall short, contributing to disengagement and stalling transformation.

    The risks of Agile debt: Agile debt accumulates when organizations fail to invest in skilled Agile practitioners. Over time, this debt manifests as misaligned teams, ineffective practices, and a lack of progress. This superficial approach to Agile leads to longer time-to-market, degraded customer satisfaction, and a diminished capacity to innovate—risks no organization can afford. Marginalizing change agents fosters a hollow Agile adoption in which the deeper values and principles are overlooked. Given already low global engagement rates, this can be particularly harmful, exacerbating disengagement and creating a vicious cycle.

    Addressing Agile debt: To avoid Agile debt and ensure successful transformation, organizations must:
    1. Invest in expertise: Place experienced change agents in these roles. Their leadership is essential in navigating complex transformation challenges.
    2. Avoid shortcuts: Don't fill roles with underqualified individuals. Short-term cost savings can lead to long-term challenges that are far more costly to resolve.
    3. Prioritize continuous improvement: Agile transformation is ongoing. Regularly inspect and adapt your practices, empowering your Agile leaders to guide teams effectively.
    4. Enable and empower Agile leaders: Provide change agents with the resources and autonomy they need to lead successfully. Their role is critical to the health of your Agile practices and the success of your transformation.

    In conclusion, the success of Agile transformation hinges on valuing change agents as true leaders. Organizations can achieve sustainable, long-term success by avoiding Agile debt and empowering these roles. Let's move beyond superficial Agile adoption and commit to meaningful change that drives innovation and customer satisfaction.

    #Agile #Scrum #Leadershipandmanagement
