Trust in closed source systems declining

Summary

Trust in closed source systems, meaning technologies whose inner workings are kept secret and unavailable for public inspection, is steadily declining even as reliance on these tools grows. The erosion is driven by concerns about hidden flaws, biased data, and a lack of accountability, especially as closed-source AI and cloud services take on larger roles in critical decisions.

  • Prioritize transparency: Whenever possible, choose open systems or demand clear explanations about how closed-source technologies work and make decisions.
  • Encourage user feedback: Invite users to share concerns and experiences, so issues can be identified and addressed more quickly, preventing silent dependency on questionable systems.
  • Promote independent review: Support third-party audits or collaborative oversight for closed systems, ensuring trust is built through external validation rather than blind reliance.
Summarized by AI based on LinkedIn member posts
  • Graham Cooke

    CEO & Founder @bravaxyz. Defining Intelligent Capital Markets | AI policy engines + stablecoin rails for automated, transparent credit | Author | Ex-Google | Exited Founder | NED

    15,222 followers

    OpenAI's closed-source approach puts them on the wrong side of history. Here's why the future of AI must be open and transparent:

    The battle between open and closed systems defines tech history. In each case, closed systems dominated early but open systems won in the end:
    • CompuServe vs Internet
    • Windows vs Linux
    • iOS vs Android

    Why? Because open systems harness collective intelligence: thousands of developers spot bugs faster than any single company, security issues get fixed quickly, innovation accelerates exponentially, and the cost of development plummets.

    But with AI, the stakes are higher than ever. Imagine AI making life-or-death decisions:
    • Medical diagnoses
    • Self-driving cars
    • Financial systems
    • Military applications

    Would you trust a system you can't inspect?

    This is why OpenAI's transformation is concerning. In 2015, they launched with a clear mission: create AI that benefits humanity through open-source development. By 2019, everything changed:
    • Shifted to a "capped-profit" model
    • Took $1B from Microsoft
    • Went closed source

    This betrayal of principles led to Musk's lawsuit in 2024. But this isn't about personal drama. It's about the future of humanity's relationship with AI.

    Open source creates trust through transparency. When code is visible, you can verify what it does. With closed systems, you're forced to trust black boxes.

    The pattern throughout tech history is clear:
    • Open systems start slower
    • But they win in the end
    • They harness humanity's collective intelligence
    • They build trust through transparency

    We're at a pivotal moment in AI development. Will it be controlled by a few companies, or will it be open, transparent, and community-driven? History suggests the winner is clear. The question is: which side of history will you be on?

    ---

    I'm Graham. Former Google employee who built $2B+ revenue products. Author of "Web3: The End of Business as Usual." Currently building bravaxyz to make blockchain technology accessible to billions. Follow for more insights on AI, blockchain, and the future of technology.

  • Dion Wiggins

    CTO at Omniscien Technologies | Board Member | Strategic Advisor | Consultant | Author

    12,932 followers

    Trust Betrayed. Again.

    Anthropic—the company that branded itself as “privacy-first” and “safety-driven”—just torched its own moat. Starting now, Claude will train on your chat transcripts and coding sessions unless you manually opt out by September 28. Five years of storage replaces the old 30-day deletion rule. Free, Pro, Max, Claude Code—no exceptions.

    This is not an update. It is a betrayal.

    → Hypocrisy laid bare: The self-proclaimed “responsible” AI company now runs the same playbook as the rest—harvest first, ask forgiveness later.
    → Compliance nightmare: Sensitive conversations, contracts, legal docs, and code can now sit in Anthropic’s servers for half a decade. Opt-out ≠ consent.
    → Structural exposure: For governments and enterprises that bought Claude for its privacy promises, the foundation just cracked.
    → Pattern confirmed: In the end, every closed model company caves to the same growth imperative: extract more data, hold it longer, and lock users in.

    The last fig leaf of “privacy-first AI” has fallen. The message is simple: sovereignty and control cannot be outsourced.

    The question for every policymaker, CIO, and enterprise is now clear: how many more times will you let “responsible AI” vendors betray your trust before you build systems you truly control?

    https://lnkd.in/gm2J-T6h

  • Albert Yu

    Co-Founder & CTO at Anzenna | Ex-Google, Ex-Yahoo Paranoid

    1,811 followers

    The recent Copilot audit log bug immediately brought to mind Ken Thompson's foundational 1984 paper, "Reflections on Trusting Trust."

    This particular bug, as highlighted by a recent report, allowed Copilot to access files and return information without those actions being properly recorded in the audit logs. For organizations that rely on comprehensive audit trails for security and compliance, this created a significant blind spot: access that effectively left no trace.

    Thompson's core warning was remarkably visionary: you cannot fully trust code you did not write yourself, particularly because the very tools used to build or run that code can be subtly subverted in ways you will never see.

    Forty years later, this challenge emerges again with the proliferation of locally installed AI tools. Many, like Claude Code, often suggest installation via sudo. The rationale is clear: if such tools resided in a user-writable path, they could silently update themselves or be tampered with by external actors. This is the Trusting Trust problem in its modern manifestation: when you do not control the toolchain, you are effectively trusting the vendor and their update pipeline implicitly.

    The challenge for insider risk has evolved, and so must our approach. The core question now is: how are you independently verifying the behavior of the AI agents (insiders) operating in your environment?
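
As a concrete starting point for the verification question that post ends on, the sketch below pins a SHA-256 digest for a locally installed agent binary and re-checks it outside the vendor's own update pipeline. This is a minimal sketch under stated assumptions: the tool path and pinned digest are hypothetical placeholders, not real Claude Code or Copilot values, and a real deployment would also need to protect the pinned digest itself from tampering.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical placeholder values: in practice you would record the digest
# yourself at install time, from a build you have vetted, and store it
# somewhere the tool's own updater cannot write to.
TOOL_PATH = Path("/usr/local/bin/ai-agent")  # placeholder install path
PINNED_SHA256 = "0123abcd..."                # placeholder digest

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large binaries don't load into RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify() -> bool:
    """Return True only if the on-disk binary matches the pinned digest."""
    actual = sha256_of(TOOL_PATH)
    if actual != PINNED_SHA256:
        print(f"MISMATCH: {TOOL_PATH} hashes to {actual}", file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if verify() else 1)
```

A check like this only moves trust from the vendor's update channel to your own record of the binary: it catches silent self-updates and on-disk tampering between audits you control, though not the deeper runtime-behavior problem Thompson described.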
