Tech Policy Advocacy

Explore top LinkedIn content from expert professionals.

  • 🚀 Now publicly available 🚀 The Data Innovation Toolkit! And Repository! (✍️ coauthored with Maria Claudia Bodino, Nathan da Silva Carvalho, Marcelo Cogo, and Arianna Dafne Fini Storchi, and commissioned by the Digital Innovation Lab (iLab) of DG DIGIT at the European Commission)
    👉 Despite growing awareness of the value of data for addressing societal issues, the excitement around AI, and the potential for transformative insights, many organizations struggle to translate data into actionable strategies and meaningful innovations.
    🔹 How can those working in the public interest better leverage data for the public good?
    🔹 What practical resources can help navigate data innovation challenges?
    To bridge these gaps, we developed a practical, easy-to-use toolkit designed to support decision makers and public leaders managing data-driven initiatives.
    🛠️ What’s inside the first version of the Data Innovation Toolkit (105 pages)?
    👉 A repository of educational materials and best practices from the public sector, academia, NGOs, and think tanks.
    👉 Practical resources to enhance data innovation efforts, including:
    ✅ Checklists to ensure key aspects of data initiatives are properly assessed.
    ✅ Interactive exercises to engage teams and build essential data skills.
    ✅ Canvas models for structured planning and brainstorming.
    ✅ Workshop templates to facilitate collaboration, ideation, and problem-solving.
    🔍 How was the toolkit developed?
    📚 Repository: curated literature review and a user-friendly interface for easy access.
    🎤 Interviews & workshops: direct engagement with public sector professionals to refine relevance.
    🚀 Minimum Viable Product (MVP): iterative development of an initial set of tools.
    🧪 Usability tests & pilots: ensuring functionality and user-friendliness.
    This is just the beginning! We’re excited to continue refining and expanding this toolkit to support data innovation across public administrations.
    🔗 Check it out and let us know your thoughts:
    💻 Data Innovation Toolkit: https://lnkd.in/e68kqmZn
    💻 Data Innovation Repository: https://lnkd.in/eU-vZqdC
    #DataInnovation #PublicSector #DigitalTransformation #OpenData #AIforGood #GovTech #DataForPublicGood

  • View profile for Daniel Bamidele

    Emerging AI Governance, Safety & Compliance Leader | Managed $1M+ Programmes Across 3 Continents | Information & Data Protection | Lead Contributor, Tokens & Tangents Newsletter. Building joincalen.com

    7,187 followers

    A Canadian government department wanted to use AI to process visa applications faster. Before they could deploy, they had to complete an Algorithmic Impact Assessment.
    Question 15: "Could this system's decisions affect someone's legal rights?" Yes.
    Question 23: "Will decisions be automatically made without human review?" Partially.
    Question 31: "Does the system use machine learning trained on historical data?" Yes.
    Final score: Level 3 (High Impact). Requirements triggered:
    → Explainability for every decision
    → Human review for all rejections
    → Quarterly bias testing
    → Public audit trail
    The department couldn't deploy until these were in place.
    Six months later, the system was processing applications 40% faster. But monitoring revealed something interesting: applications from certain countries were flagged for review at 3x the predicted rate. Because the assessment was public, a researcher noticed this gap. Investigation revealed the AI had learned patterns from old data, from a period when those countries had different visa requirements. The system was retrained, the assessment was updated, and a public report explained what was learned.
    This is what good governance looks like: not rules preventing deployment, not audits finding problems later, but transparency creating continuous learning.
    The Canadian approach proves something crucial: you don't need complex regulations. You need organizations to commit publicly to their AI's impact, then govern the gap between promise and reality. Simple. Transparent. Effective.
    Why isn't everyone doing this?
    #AIRegulation #AIPolicy #DigitalGovernance #TechPolicy #RegulatoryCompliance
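    The questionnaire-to-requirements flow described above can be sketched in a few lines. This is a minimal illustration only: the question names, answer weights, and level thresholds below are hypothetical, not the published scoring of Canada's actual Algorithmic Impact Assessment.

```python
# Illustrative sketch of a questionnaire-driven impact assessment.
# Weights and thresholds are invented for illustration; the real Canadian
# AIA uses its own published questions and scoring.

ANSWER_WEIGHTS = {"yes": 2, "partially": 1, "no": 0}

def impact_level(responses: dict[str, str]) -> int:
    """Map questionnaire answers to an impact level (1 = low, 3 = high)."""
    score = sum(ANSWER_WEIGHTS[a.lower()] for a in responses.values())
    if score >= 5:
        return 3
    if score >= 3:
        return 2
    return 1

# Obligations triggered at each level (hypothetical mapping).
REQUIREMENTS = {
    3: ["explainability for every decision",
        "human review for all rejections",
        "quarterly bias testing",
        "public audit trail"],
}

responses = {
    "affects_legal_rights": "yes",        # weight 2
    "no_human_review": "partially",       # weight 1
    "ml_on_historical_data": "yes",       # weight 2
}
level = impact_level(responses)           # total score 5 -> level 3
print(level, REQUIREMENTS.get(level, []))
```

    The point of the sketch is the governance mechanism, not the numbers: because the mapping from answers to obligations is explicit and public, anyone can check whether a deployed system is meeting the requirements its own assessment triggered.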

  • View profile for Vlada Bortnik

    CEO & Co-founder, Marco Polo | Helping millions feel close | Writing & speaking on inspired capitalism, conscious leadership, and the inner journey towards *being* enough

    8,620 followers

    6% of global revenue. That's the fine Denmark will impose on social platforms that fail to keep kids under 15 off their sites.
    Right now, 94% of Danish kids under 13 have social media profiles. More than half of kids under 10 do, too — despite platforms' own age restrictions.
    Denmark's Digital Minister Caroline Stage Olsen put it bluntly: "The amount of violence, self-harm that they are exposed to online is simply too great a risk for our children."
    For years, platforms followed a familiar playbook: maximize time-on-site, optimize for engagement, ignore the consequences. And when regulators push back? Cry complexity. But complexity isn't the issue. Willingness is.
    This is bigger than Denmark. Australia's age restriction law kicks in next month. France and the Netherlands are advancing similar measures. The trend is clear: governments are done waiting for self-regulation that never arrived.
    Why the 6% fine matters: when the cost of inaction exceeds the profit from delay, innovation becomes inevitable. Companies like Marco Polo, Sage Haven, and VSCO® are proving you can build with wellbeing at the core and still thrive. What does that look like?
    - Focus on connection and value, not time spent.
    - Friction where it's needed.
    - Few (or no) algorithmic accelerants.
    - Transparent research on mental health impacts.
    Stage said it plainly: "We've given the tech giants so many chances to stand up and do something about what is happening on their platforms. They haven't done it."
    Now the question isn't whether reform is coming — it's whether platforms will shape it or resist it country by country, year by year.
    #ConsciousLeadership #TechEthics #DigitalResponsibility
    Update: To be clear, lawmakers are aligned on the ban, but from the article: "Stage said a ban won’t take effect immediately. Allied lawmakers on the issue from across the political spectrum who make up a majority in parliament will likely take months to pass relevant legislation. “I can assure you that Denmark will hurry, but we won’t do it too quickly because we need to make sure that the regulation is right and that there are no loopholes for the tech giants to go through,” Stage said. Her ministry said pressure from tech giants’ business models was “too massive.”"

  • View profile for Anurag (Anu) Karuparti

    Agentic AI Strategist @Microsoft (30k+) | Author - Generative AI for Cloud Solutions | LinkedIn Learning Instructor | Responsible AI Advisor | Ex-PwC, EY | Marathon Runner

    31,519 followers

    𝐀𝐈 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞 & 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐭𝐞𝐜𝐭𝐢𝐨𝐧 𝐋𝐚𝐰𝐬 𝐟𝐨𝐫 𝐆𝐞𝐧𝐀𝐈 𝐀𝐩𝐩𝐬
    Building GenAI apps for a global audience? Understanding regional data protection and AI laws is not optional; it is foundational. Here is what you need to know:
    1. UNDERSTANDING GLOBAL REGULATORY VARIANCE
    Key regulations by region:
    • EU AI Act: risk-based obligations for certain AI systems and transparency requirements for specific use cases
    • GDPR (EU): transparency & consent
    • DPDP (India): digital personal data protection
    • PIPL (China): strict data localization
    • CCPA (California): data access & opt-out
    • LGPD (Brazil): local compliance rules
    2. IMPACT OF THESE REGULATIONS ON YOUR AI TRAINING DATA
    To build compliant GenAI apps, ensure that data used for training AI models follows the regional rules across the pipeline: Data Collection → Processing → Model Training → Deployment.
    Three core requirements:
    a. User Consent: obtain explicit consent for data collection and use
    b. Data Minimization: collect only the data necessary for the intended purpose
    c. Anonymization: remove personally identifiable information from training data
    3. MITIGATING AI ETHICS AND BIAS RISKS
    AI systems must be fair and ethical, particularly in high-risk areas:
    a. Fairness: ensure your AI models don't discriminate, especially in areas like recruitment or finance.
    b. Bias Mitigation: regularly test and adjust your models to reduce bias in the outputs.
    4. ENSURING TRANSPARENCY IN AI MODEL DEVELOPMENT
    Transparency is a cornerstone of compliance, especially when your AI impacts users directly:
    a. Explainability: document how the model reaches its outputs so decisions can be explained to users and regulators.
    b. Consent Management: collect, track, and manage user consent.
    c. Privacy by Design: embed privacy into every system layer.
    5. MANAGING CROSS-BORDER DATA FLOW
    GenAI apps often rely on data from various regions, so it's critical to understand data sovereignty laws:
    a. Data Sovereignty: follow local laws on where data is stored and processed.
    b. Data Transfer Agreements: use SCCs or BCRs for compliant cross-border transfers.
    THE COMPLIANCE CHECKLIST
    Before launching GenAI globally, verify:
    1. Regional compliance: GDPR for the EU (transparency & consent)? DPDP for India (data protection)? PIPL for China (data localization)? CCPA for California (access & opt-out)? LGPD for Brazil (local rules)?
    2. Training data: user consent obtained? Data minimized? PII anonymized?
    3. Ethics & bias: fairness tested? Bias mitigation in place?
    4. Transparency: explainability documented? Consent management system? Privacy by design?
    5. Cross-border: data sovereignty compliance? Transfer agreements (SCCs/BCRs)?
    Each region has different requirements. Build for the strictest, adapt for the rest.
    Which regulation applies to your GenAI app?
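    The "build for the strictest, adapt for the rest" idea can be modeled as taking the union of every target region's obligations. A minimal sketch, assuming the simplified obligation labels below (the regulation names come from the post; the obligation strings are illustrative shorthand, not legal text):

```python
# Hypothetical sketch: an app targeting several markets must satisfy the
# union of each region's obligations. Obligation labels are simplified.

REGIONAL_RULES = {
    "EU (GDPR / AI Act)": {"explicit_consent", "transparency", "risk_assessment"},
    "India (DPDP)": {"explicit_consent", "data_protection"},
    "China (PIPL)": {"explicit_consent", "data_localization"},
    "California (CCPA)": {"access_requests", "opt_out"},
    "Brazil (LGPD)": {"explicit_consent", "local_compliance"},
}

def obligations(target_regions: list[str]) -> set[str]:
    """'Build for the strictest': union of every target region's rules."""
    rules: set[str] = set()
    for region in target_regions:
        rules |= REGIONAL_RULES[region]
    return rules

checklist = obligations(["EU (GDPR / AI Act)", "China (PIPL)"])
print(sorted(checklist))
```

    Launch gating then becomes a simple check that every obligation in the combined set has been addressed, which is exactly what the checklist above walks through by hand.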

  • View profile for Maryam Abass

    Translating the balance between innovation & human rights into ethical tech insights | AI & Privacy Analyst | CIPP/E | CIPM | AIGP

    1,939 followers

    Understanding the Difference Between DPIA and AI Impact Assessment
    As AI systems become deeply embedded in how we live and work, effective governance has never been more important. Yet one common source of confusion remains: the difference between a Data Protection Impact Assessment (DPIA) and an AI Impact Assessment (AIIA). Though both assess risk, their focus and purpose are fundamentally different.
    1. The DPIA — Protecting Data Privacy
    A DPIA is a legal requirement under frameworks like the EU and UK GDPR.
    - Purpose: to protect individuals’ rights and freedoms when their personal data is being processed.
    - Key Question: could this data processing activity, especially when using AI, pose a high risk to privacy?
    - Triggers: the use of new technologies, large-scale processing of sensitive data, or systematic monitoring.
    2. The AIIA — Managing Broader AI Risks
    An AIIA (or Algorithmic Impact Assessment) takes a wider view. While not yet a legal requirement in most regions, it is emerging as a best practice, especially under frameworks like the EU AI Act.
    - Purpose: to identify and mitigate ethical, societal, and operational risks beyond privacy.
    - Key Question: is the AI system trustworthy, safe, fair, and accountable in its design and outcomes?
    - Scope: the entire AI lifecycle, from data collection and model design to deployment and human oversight.
    Why the Distinction Matters
    When deploying AI, organizations often need to complete a DPIA because AI typically involves personal data and new technology. But that alone isn’t enough. A DPIA doesn’t fully capture risks like algorithmic bias, discrimination, or lack of transparency — these are the domain of the AIIA.
    To build trustworthy and responsible AI systems, organizations should:
    1. Conduct an AIIA to address systemic and ethical risks.
    2. Integrate the findings of the DPIA to ensure privacy compliance.
    This layered approach turns governance from a compliance checkbox into a framework for responsible and innovative AI deployment.
    #AIGovernance #ResponsibleAI #DPIA #AIIA #DataPrivacy #GDPR #Compliance #Ethics
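    The layered approach can be pictured as two independent checks gating one deployment decision. A minimal sketch under invented criteria (the trigger conditions and risk labels below are illustrative, not the GDPR's or the EU AI Act's actual tests):

```python
# Illustrative sketch of layering a DPIA (privacy risk) with an AIIA
# (broader ethical/operational risk), gating deployment on both.
# All criteria here are hypothetical simplifications.

def dpia_findings(system: dict) -> list[str]:
    """Privacy-focused check, in the spirit of a GDPR-style DPIA."""
    risks = []
    if system.get("processes_personal_data") and system.get("new_technology"):
        risks.append("high-risk processing: document safeguards, consult DPO")
    return risks

def aiia_findings(system: dict) -> list[str]:
    """Wider AI impact check: bias, transparency, human oversight."""
    risks = []
    if not system.get("bias_tested"):
        risks.append("algorithmic bias untested")
    if not system.get("human_oversight"):
        risks.append("no human oversight of outcomes")
    return risks

def may_deploy(system: dict) -> bool:
    # Both assessments must come back clean before launch.
    return not dpia_findings(system) and not aiia_findings(system)

system = {"processes_personal_data": True, "new_technology": True,
          "bias_tested": True, "human_oversight": True}
print(may_deploy(system))
```

    The structure, not the specific rules, is the point: neither check subsumes the other, so a system can pass its DPIA while still failing the AIIA, and vice versa.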

  • View profile for Panagiotis Kriaris

    FinTech | Payments | Banking | Innovation | Leadership

    158,944 followers

    The report is out: State of the Crypto Industry 2026. If you want to understand where the industry is heading, here are my main take-aways.
    𝟭. Crypto platforms are being evaluated on their ability to run compliant, reliable systems at scale, where onboarding, monitoring, and transaction controls are expected to work together consistently under regulatory pressure, not as separate functions.
    𝟮. Identity verification is no longer a one-off check but a continuous capability embedded across the full user lifecycle, influencing risk decisions, conversion, and regulatory outcomes.
    𝟯. Performance improvements are coming from how verification flows are structured, with platforms reducing unnecessary steps, limiting retries, and routing users based on risk.
    𝟰. Fraud activity is becoming more organized and persistent, with automated attacks, synthetic identities, and coordinated behavior targeting weaknesses in verification and monitoring, requiring systems that can detect patterns, not just isolated events.
    𝟱. AI is being applied to connect identity data, user behavior, and transaction activity into a single decision process, allowing platforms to identify risks in real time and adjust controls dynamically.
    𝟲. Most platforms are combining internal decision-making with external verification capabilities, keeping control over risk logic and user experience while relying on specialized providers for document verification, biometrics, and screening.
    𝟳. Regulatory frameworks such as MiCA and the Travel Rule are directly affecting how products are built and how transactions are processed, with many firms still facing challenges in implementation, data sharing, and cross-border requirements.
    𝟴. Transaction value is increasingly driven by businesses rather than individuals, reflecting the growing use of crypto for treasury management, cross-border settlement, and day-to-day movement of funds.
    𝟵. Stablecoins are increasingly used as a transaction medium where stability and efficiency matter, shifting activity toward payment and settlement use cases rather than trading.
    𝟭𝟬. Platforms are redesigning onboarding so that the level of verification matches the level of risk, reducing unnecessary steps for low-risk users while applying deeper checks only where needed, as both over-checking and under-checking create direct commercial and regulatory costs.
    𝗢𝗻𝗲 𝘁𝗵𝗶𝗻𝗴 𝗶𝘀 𝗰𝗹𝗲𝗮𝗿: As crypto matures, the focus is moving away from simply enabling access to assets toward managing how transactions are executed in practice, including who is allowed to participate, how those decisions are made, and how risk and compliance are applied throughout the lifecycle.
    𝗪𝗵𝗮𝘁 𝗶𝘀 𝘆𝗼𝘂𝗿 𝗺𝗮𝗶𝗻 𝘁𝗮𝗸𝗲-𝗮𝘄𝗮𝘆?
    Opinions: my own. Source: Sumsub.
    𝐒𝐮𝐛𝐬𝐜𝐫𝐢𝐛𝐞 𝐭𝐨 𝐦𝐲 𝐧𝐞𝐰𝐬𝐥𝐞𝐭𝐭𝐞𝐫: https://lnkd.in/dkqhnxdg
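    The risk-based onboarding described in points 3 and 10 can be sketched as a score that determines which verification steps a user sees. Everything here is illustrative: the risk signals, weights, tiers, and step names are invented for the sketch, not taken from any platform or the Sumsub report.

```python
# Hypothetical sketch of risk-tiered verification routing: low-risk users
# get a light flow, higher-risk users get deeper checks. Signals, weights,
# and step names are illustrative only.

def risk_score(user: dict) -> int:
    score = 0
    if user.get("high_risk_jurisdiction"):
        score += 2
    if user.get("expected_volume_usd", 0) > 10_000:
        score += 2
    if user.get("pep_or_sanctions_hit"):
        score += 3
    return score

def verification_steps(user: dict) -> list[str]:
    """Match the depth of verification to the level of risk."""
    steps = ["id_document"]                    # baseline for everyone
    score = risk_score(user)
    if score >= 2:                             # elevated risk
        steps += ["liveness_check", "proof_of_address"]
    if score >= 4:                             # high risk
        steps += ["source_of_funds", "enhanced_due_diligence"]
    return steps

print(verification_steps({"expected_volume_usd": 500}))
```

    The commercial logic in point 10 falls out directly: every step removed from the low-risk path reduces drop-off, while the high-risk path keeps the checks regulators expect.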

  • View profile for Amir Tabch

    Chairman & CEO | Senior Executive Officer | Regulated Digital Asset Market Infrastructure | Bridging Capital Markets & Virtual Assets | Exchange, Brokerage, Custody, Tokenization | Crypto, OTC, On/Off Ramps, Stablecoins

    33,718 followers

    Interesting signal from the #UAE this week. Khaleej Times reports that #UAE residents can now pay insurance premiums and receive claims in cryptocurrencies, following the launch of the first crypto-asset digital wallet by Dubai Insurance.
    This matters less because crypto payments are new, and more because insurance is one of the most conservative corners of financial services. If digital assets are good enough for premiums, claims, and regulated balance sheets, they are no longer experimental.
    The interesting part here is not the technology; it is the governance. Insurance, banking, and regulated exchanges moving onto crypto rails is how volatility turns into infrastructure. The #UAE keeps showing that adoption happens fastest when regulators, incumbents, and builders move in the same direction.
    It also reinforces a broader trend. The #UAE now ranks fifth globally for crypto adoption, according to the World Crypto Rankings 2025 research report. Regulation first, infrastructure second, innovation third.
    Crypto adoption does not arrive through slogans. It arrives when regulated institutions quietly make it operational.
    https://lnkd.in/eWibWEk4
    #UAE #Crypto #DigitalAssets #VirtualAssets #Regulation #Insurance #InstitutionalAdoption #FinancialInfrastructure #Cryptocurrency #Cryptocurrencies

  • View profile for Sara Noggler

    Founder & CEO Polyhedra | LinkedIn Top Voice | Director of Future Blend — Rome Policy Forum | 30k+ connections (Limit reached - Please Follow)

    35,350 followers

    Crypto regulation is no longer a wild frontier. It’s becoming global, structured — and strategic. The newly released PwC Global Crypto Regulation Report 2025 marks a regulatory turning point for digital assets. Here are some key takeaways worth your attention:
    1) US Pivot: a clear shift away from “regulation by enforcement” toward well-defined frameworks. Spot Bitcoin & Ethereum ETFs are just the beginning; staked ETFs are coming next.
    2) MiCAR in Full Effect: the EU now has a single market for crypto. Authorization, whitepapers, and AML rules are now standard.
    3) Stablecoins in Focus: regulators worldwide are setting strict but innovation-friendly rules. Europe treats them as payment tools, while the US signals support for bank-issued stablecoins.
    4) DeFi Under the Microscope: expect more scrutiny. Global regulators are applying “same risk, same rule” logic to lending, DEXs, and even mixing services.
    5) Tokenization Rising: from pilot programs in the EU to SEC-CFTC coordination in the US, real-world asset tokenization is becoming a regulated frontier for capital markets.
    Regulatory clarity is no longer optional. Time to adapt, align, and build responsibly.
    #CryptoRegulation #MiCAR #Stablecoins #Tokenization #DeFi #DigitalAssets #Web3Policy #PwC #FutureOfFinance

  • Ellen K. Pao and I teamed up for the Los Angeles Times to share our ideas about regulating social media. “The status quo of social media companies in the U.S. is akin to having an unregulated flight industry. Imagine if we didn’t track flight times or delays or if we didn’t record crashes and investigate why they happened. Imagine if we never found out about rogue pilots or passengers and those individuals were not blacklisted from future flights. Airlines would have less of an idea of what needs to be done and where the problems are. They would also face less accountability. The lack of social media industry standards and metrics to track safety and harm has driven us to a race to the bottom.” Using our past experiences working on the most consequential social media decisions in modern history, we argue that it’s time for Congress to create a new agency to establish and enforce baseline safety and privacy rules for technology companies. Link to the piece is in the comments! #contentmoderation #socialmedia #trustandsafety #internetgovernance

  • View profile for M Nagarajan

    Sustainable Cities | Startup Ecosystem Builder | Deep Tech for Impact

    19,618 followers

    𝐖𝐞𝐛𝟑 𝐭𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲 stands as a transformative force, especially in how it can create digital public goods that go beyond corporate interests to benefit society as a whole. Japan’s Ministry of Economy, Trade, and Industry (METI) is pioneering a "𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐏𝐮𝐛𝐥𝐢𝐜 𝐆𝐨𝐨𝐝𝐬" project that we, in the Indian startup ecosystem, can draw valuable insights from.
    METI’s Demonstration Project for Building Digital Public Goods Using Web 3.0 and Blockchain challenges the conventional perception of blockchain as merely speculative. It brings forward initiatives that include marketplaces for tokenized assets, smart contracts in sports, and decentralized governance for regional revitalization. Japan’s Sake World showcases how Web3 can transform the traditional sake market, allowing buyers and sellers to interact directly with NFTs while ensuring fair royalties for breweries. It’s a visionary step that bridges Japan’s heritage with the latest tech trends in a globalized manner.
    𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐟𝐫𝐨𝐦 𝐉𝐚𝐩𝐚𝐧’𝐬 𝐕𝐢𝐬𝐢𝐨𝐧: 𝐒𝐨𝐜𝐢𝐞𝐭𝐲 𝟓.𝟎 𝐚𝐧𝐝 𝐏𝐮𝐛𝐥𝐢𝐜 𝐆𝐨𝐨𝐝 𝐓𝐞𝐜𝐡 𝐢𝐧 𝐈𝐧𝐝𝐢𝐚
    Japan’s vision aligns with its Society 5.0 framework, which seeks to blend the digital and physical worlds for societal progress. Such projects are built not solely for profit but for broad societal impact, creating models that India can adopt as we aim to innovate across industries like health, education, and environmental management.
    𝐖𝐡𝐲 𝐈𝐧𝐝𝐢𝐚’𝐬 𝐒𝐭𝐚𝐫𝐭𝐮𝐩 𝐄𝐜𝐨𝐬𝐲𝐬𝐭𝐞𝐦 𝐒𝐡𝐨𝐮𝐥𝐝 𝐀𝐝𝐨𝐩𝐭 𝐏𝐮𝐛𝐥𝐢𝐜 𝐆𝐨𝐨𝐝𝐬 𝐢𝐧 𝐖𝐞𝐛𝟑
    India has an immense opportunity to leverage Web3 technology as a public good that serves beyond corporate interests. For instance, digital platforms using blockchain can enhance transparency, support SMEs in tracking supply chains, and allow decentralized governance models that involve local communities directly.
    𝐓𝐫𝐞𝐧𝐝𝐬 𝐚𝐧𝐝 𝐒𝐭𝐚𝐭𝐢𝐬𝐭𝐢𝐜𝐬
    According to PwC, 75% of enterprises are expected to adopt some form of blockchain technology by 2030, yet only a small portion actively utilize it for public benefit. The EU’s Horizon 2020 programme serves as a relevant reference point, as it has shown success in incentivizing innovation through public-private collaborations that focus on societal benefits.
    The journey to adopting public goods in Web3 involves thoughtful regulation, public-private partnerships, and continuous support from government bodies. By creating legal clarity and public guidelines, we can unlock the power of decentralized technology as a vehicle for digital transformation, bringing tangible benefits to communities across India. Japan’s Sake World marketplace leverages NFTs to enhance traceability and royalties for artisans, a model that could inspire India’s rich cultural industries like textiles, handicrafts, and spices.
    What’s your take on this?
    #blockchaintechnology #web3technology #japan #startupecosystem #indianstartups
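    The royalty mechanism behind the Sake World example is mechanically simple: each secondary sale carves out a fixed share for the original creator. A minimal sketch, assuming a hypothetical 5% rate expressed in basis points (the rate and fee structure are invented for illustration, not taken from METI's project):

```python
# Illustrative royalty split for a secondary NFT sale, in the spirit of the
# Sake World example. The 5% (500 bps) rate is hypothetical.

def settle_secondary_sale(sale_price: int, royalty_bps: int = 500) -> dict:
    """Split a sale: creator royalty in basis points, remainder to seller."""
    royalty = sale_price * royalty_bps // 10_000
    return {"brewery_royalty": royalty,
            "seller_proceeds": sale_price - royalty}

print(settle_secondary_sale(20_000))
```

    On-chain standards such as ERC-2981 expose essentially this calculation to marketplaces, which is what lets artisans keep earning from resales without trusting each intermediary.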
