Open Source Research and Development


Summary

Open source research and development means sharing the code, data, and methods behind technologies so anyone can study, modify, or build upon them. This approach encourages collaboration, transparency, and innovation in fields like artificial intelligence, allowing a wider range of people and organizations to contribute and benefit.

  • Promote open access: Make your research materials and software freely available so others can learn, adapt, and improve upon your work without barriers.
  • Encourage community input: Invite feedback and contributions from users worldwide to spark fresh ideas and accelerate progress.
  • Support transparency: Clearly document your development process and share details about your data and models to build trust and allow others to replicate your results.
Summarized by AI based on LinkedIn member posts
  • Ian Hogarth

    founder and investor

    Today the UK's AI Safety Institute is open sourcing our safety evaluations platform. We call it "Inspect". Inspect is a software library that enables testers to assess specific capabilities of individual models. Released under an open source licence, it is now freely available for the AI community to use.

    As a team, we are big believers in the power of open source software: it can enable more people to contribute, counteract centralisation of power, improve transparency and reproducibility, give end users more control over their tools, and reduce costs for all. However, 'open' vs 'closed' is a complex topic. Large corporations can use 'open' as a business tactic to catch up and compete (e.g. Android vs iOS), and often something important will remain proprietary. See: https://lnkd.in/eWgtjKMN

    Within the AI space there are some remarkable efforts to drive forward openness - consider DeepMind's AlphaFold work or Meta's OpenCatalyst project. I am personally very attracted to projects that attempt to truly open up the full process of training AI models, for example GPT-NeoX, OLMo or Pythia, which all have publicly available training data and OSI-licensed training and evaluation code and model weights. These projects are truly open source rather than merely open weight: you can see the data each model was trained on. To date these projects have mostly been developed by non-profits like EleutherAI and the Allen Institute for AI (AI2).

    I'm not sure how common it is for governments to ship open source software, but I'm glad that the UK AI Safety Institute is taking this step. I'd like to especially thank JJ Allaire, the legendary creator of ColdFusion, who joined AISI and spearheaded this project. Thank you JJ!

    One of the structural challenges in AI is the need for coordination across borders and institutions. I believe academia, start-ups, large companies, government and civil society all have a role to play, and open source can be a mechanism for broader coordination. It may be an inconvenient truth, but open source software is currently one of the ways that America and China 'work together' on AI research - perhaps this points at another mechanism for international collaboration over safety: https://lnkd.in/eYXDd-wy

    This work is a continuation of what Rishi Sunak and Michelle Donelan MP kicked off with the AI Safety Summit, which brought together countries, academia, civil society and the private sector to coordinate around tackling risks from AI so we can enjoy its benefits. More details and GitHub repo here: https://lnkd.in/eXdDqcAt
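
    Inspect's actual API is documented in its GitHub repository; as a rough illustration of the pattern such an evaluation library implements - a task pairing a dataset of prompts with a solver (the model) and a scorer - here is a minimal, self-contained sketch. All names here are hypothetical, not Inspect's real interface, and the model is a stub standing in for a real LLM call.

```python
# Generic sketch of the task = dataset + solver + scorer pattern used by
# evaluation libraries like Inspect. All names are hypothetical; the
# "model" is a stub standing in for a real LLM call.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Sample:
    input: str    # prompt sent to the model
    target: str   # expected answer


def run_eval(samples: List[Sample],
             solver: Callable[[str], str],
             scorer: Callable[[str, str], bool]) -> float:
    """Run each sample through the solver and return the fraction scored correct."""
    correct = sum(scorer(solver(s.input), s.target) for s in samples)
    return correct / len(samples)


# Stub model and exact-match scorer, for demonstration only.
def stub_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unknown"


def exact_match(output: str, target: str) -> bool:
    return output.strip() == target.strip()


samples = [Sample("What is 2 + 2?", "4"),
           Sample("What is the capital of France?", "Paris")]

accuracy = run_eval(samples, stub_model, exact_match)
print(accuracy)  # stub gets 1 of 2 right -> 0.5
```

    A real harness swaps the stub for an API or local-model call and the exact-match scorer for something capability-specific, but the task/solver/scorer separation is the core idea.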

  • Amar Ratnakar Naik

    AI Leader | Driving Transformation with Products and Engineering

    For years, the open-source community has challenged the dominance of closed-source players. Today, OpenAI released gpt-oss-120b and gpt-oss-20b, two new open-weight reasoning models. This is a monumental shift, and here's why it's a game-changer for the entire industry:

    - Open license: The models ship under the permissive Apache 2.0 license, allowing free commercial use without restrictions - a direct response to developer demand for freedom.
    - Agentic power: Built for advanced agentic tasks like tool use and code execution, they're not just powerful but practical for real-world applications.
    - Deep customization: They support full-parameter fine-tuning, giving developers unprecedented control to adapt the models to any use case.
    - Unprecedented transparency: For the first time, you get full access to the chain-of-thought, for easier debugging and higher trust in model outputs.

    OpenAI's entry into the open-weight space is a major catalyst for the entire AI ecosystem, promising to:

    - Accelerate competition: It forces all players to innovate faster, release better models, and offer more compelling features to attract developers; that competition drives rapid improvement across the board.
    - Democratise AI: Powerful open-weight models lower the barrier to entry for developers and startups, who no longer need multi-billion-dollar budgets to access advanced AI capabilities. A much wider range of individuals and small teams can experiment, build, and deploy AI solutions, creating a larger pool of innovators.
    - Speed customization and specialization: Open-weight models are perfect for fine-tuning with specific data. Developers can take a strong base model like gpt-oss-20b and specialize it for a niche industry, a company's internal knowledge base, or a unique application - tailored solutions that were previously too expensive or complex to build.
    - Enable community-driven development: The principles of open source mean a global community can now inspect, debug, and improve these models.

    The LLM market is projected to be worth over $80 billion by 2033, and the fight for developer mindshare is at its core. In essence, this move can push the AI landscape toward a decentralized ecosystem where innovation flourishes at all levels.

    👇 Try them here:
    - Blog: https://lnkd.in/g4kprY4v
    - GitHub: https://lnkd.in/gHf2M3mV
    - Hugging Face: https://lnkd.in/gWESjjDt
    - Try the models: https://www.gpt-oss.com/

    What does this mean for other open models? Let's discuss! 👇
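
    The chain-of-thought access mentioned above means a response can carry a reasoning segment separate from the final answer. As a hypothetical sketch only - the delimiter and function here are illustrative, not OpenAI's actual gpt-oss output format - separating the two might look like this:

```python
# Hypothetical sketch of splitting a model response into its reasoning
# and final-answer parts. The "<final>" delimiter is invented for this
# illustration; it is not gpt-oss's real output format.
def split_response(raw: str) -> dict:
    """Return the reasoning and final-answer segments of a raw response."""
    reasoning, _, final = raw.partition("<final>")
    return {"reasoning": reasoning.strip(), "final": final.strip()}


raw = "The user asks 2 + 2; adding gives 4. <final> 4"
parts = split_response(raw)
print(parts["final"])      # "4"
print(parts["reasoning"])  # the chain-of-thought text
```

    Having the reasoning segment in hand is what makes the debugging and auditing benefits described above possible.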

  • Katharina Koerner

    AI Governance, Privacy & Security | Trace3 : Innovating with risk-managed AI/IT - Passionate about Strategies to Advance Business Goals through AI Governance, Privacy & Security

    The Open Source Initiative (OSI) has released a new definition of "open-source" AI after a two-year effort involving consultation with global experts. Founded in 1998, OSI is known for establishing the Open Source Definition, a widely respected standard that outlines the criteria software must meet to be considered "open source", e.g. the ability to view, modify, and distribute source code. The new OSI definition for open-source AI requires AI models to make their training data, code, and model weights fully accessible.

    Definition: "What is Open Source AI? When we refer to a 'system,' we are speaking both broadly about a fully functional structure and its discrete structural elements. To be considered Open Source, the requirements are the same, whether applied to a system, a model, weights and parameters, or other structural elements. An Open Source AI is an AI system made available under terms and in a way that grant the freedoms to:
    - Use the system for any purpose and without having to ask for permission.
    - Study how the system works and inspect its components.
    - Modify the system for any purpose, including to change its output.
    - Share the system for others to use with or without modifications, for any purpose.
    These freedoms apply both to a fully functional system and to discrete elements of a system."

    The OSI definition also specifies conditions for open source AI systems, emphasizing access to the preferred form for making modifications, which includes:
    - Data information: Detailed information about the training data that allows skilled individuals to recreate the system, including descriptions of data sources, selection processes, labeling, and methodologies. Public and third-party data sources must also be disclosed.
    - Code: The full source code used for training and operating the system should be open source, detailing data processing, training procedures, and the model architecture.
    - Parameters: Model parameters, such as weights and configurations, should be accessible under open source terms, including training checkpoints and final states.

    There's broad press coverage of this OSI accomplishment, e.g., with these examples mentioned in SiliconANGLE & theCUBE:
    1) Meta's Llama models: highlighted as failing to meet the OSI's open-source AI criteria because they restrict commercial use and do not provide open access to the training data or details about it, making it impossible to recreate the models freely.
    2) Stability AI's Stable Diffusion models: although described as "open" by Stability AI, they require businesses with more than $1 million in annual revenue to purchase an enterprise license.
    3) Mistral's models: Mistral places restrictions on the use of its Mistral 3B and 8B models for certain commercial ventures.

    On the other hand, these organizations have endorsed the new definition: https://lnkd.in/gkeUaQzB
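
    As a rough illustration only (not an official OSI tool, and deliberately simplified), the "preferred form for modification" requirements above can be expressed as a checklist: a release counts as open-source AI only if every required component is openly available, which is exactly where open-weight-only releases fall short.

```python
# Hypothetical checklist mirroring the three OSI components described
# above; simplified for illustration, not an official OSI tool.
REQUIRED_COMPONENTS = {
    "data_information",  # data sources, selection, labeling, methodology
    "training_code",     # full source code for training and operation
    "parameters",        # weights, configs, checkpoints under open terms
}


def meets_osi_definition(released: set) -> bool:
    """True only if every required component is openly released."""
    return REQUIRED_COMPONENTS <= released


# An open-weight model that withholds training data and code fails:
print(meets_osi_definition({"parameters"}))                   # False
# A fully open release (extras are fine) passes:
print(meets_osi_definition(REQUIRED_COMPONENTS | {"paper"}))  # True
```

    This is the distinction behind the press examples that follow: releasing weights alone, or weights under restrictive terms, does not satisfy the definition.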

  • The administration released its AI Action Plan today - making a strong endorsement of open source AI and open weights (page 4). I encourage you to read it and understand the critical role open source will play in the future of our country's AI strategy. Don't discount the role that efforts like OpenFL will have in bringing federated learning to government and helping them partner with industry to build the best models available. "Open-source and open-weight AI models are made freely available by developers for anyone in the world to download and modify. Models distributed this way have unique value for innovation because startups can use them flexibly without being dependent on a closed model provider. They also benefit commercial and government adoption of AI because many businesses and governments have sensitive data that they cannot send to closed model vendors. And they are essential for academic research, which often relies on access to the weights and training data of a model to perform scientifically rigorous experiments. We need to ensure America has leading open models founded on American values. Open-source and open-weight models could become global standards in some areas of business and in academic research worldwide. For that reason, they also have geostrategic value. While the decision of whether and how to release an open or closed model is fundamentally up to the developer, the Federal government should create a supportive environment for open models." https://lnkd.in/eHDvT8r3

  • Loris Degioanni

    CTO and Founder at Sysdig

    “Why are companies like OpenAI giving away their intellectual property?” Well, the short answer is: open source underpins all great technologies.

    This week, OpenAI released two versions of “GPT-OSS,” the company’s first open models in more than half a decade. These models can be run locally, giving developers more control over costs, privacy, and performance. And while this isn’t “open source” in the truest sense of the term, it’s a huge step in the right direction.

    It’s a big move, but it’s not a new idea. The most powerful tech innovations have been built on open source: Linux, Kubernetes, Docker, the list goes on. You don’t build a strong ecosystem by locking it behind an API. You build it by giving people the freedom to run and improve the technology themselves. That’s also how you drive real adoption, and that’s how you move the industry forward.

    I’ve said it many times before: the future is built on open source. I’ve founded my whole career on this belief, and I’ve developed and supported open source projects for almost 30 years. We made packet analysis accessible with Wireshark, we gave the world a detection engine with Falco, and we even delivered cloud system forensics this year with Stratoshark. Each of these projects has grown because of the people who have used them, extended them, and shared what they’ve learned. That’s not by accident. That’s the model.

    When it comes to cloud security, for instance, we shouldn’t be fighting an asymmetrical battle. Attackers are already collaborating: they’re trading tactics, sharing malware, and refining operations together. Defenders must do the same. Security is stronger when it’s collaborative. So is AI. So is innovation.

    What I’m saying is that open source isn’t just a licensing model. It’s a model for distribution, collaboration, and trust. It creates leverage for everyone - builders, users, companies, and even countries - to shape the future on their terms.

    So that’s why OpenAI is “giving away IP.” Because when you build in the open, everyone wins. (And also because they know they will still have a huge market of users willing to pay for the premium tier, but my point still stands.)

  • Jason Corso

    Toyota Professor of AI at Michigan | Voxel51 Co-Founder and Chief Scientist | Creator, Builder, Writer, Coder, Human

    💉 💊 🩺 Why open source transparency is critical for the future of medical AI

    As AI transforms everything from diagnostics to drug development, we're at a crossroads. The traditional model of proprietary research and data silos is holding back innovation at a time when we desperately need it - whether that means addressing the global doctor shortage or cutting the $2.6 billion cost of bringing a new drug to market. The path forward requires balancing patient privacy with knowledge sharing. When we embrace true open source principles with full data transparency, we unlock faster innovation, catch biases earlier, and build AI systems we can actually trust with human lives.

    In my piece published this week on HIT Consultant, I explore the challenges we face, from HIPAA compliance to corporate data hoarding, and why initiatives like NIH's lung imaging database show what's possible when industry, academia, and government collaborate. Read the full piece: https://lnkd.in/eg6gcMxF

    My lab has some big open source medical releases coming up this year. Stay tuned!

    University of Michigan College of Engineering, Advanced Research Projects Agency for Health (ARPA-H), Bon Ku, Morgan Hutchinson, MD, Karlyn Beer, MS, PhD, Mike Oelke, Voxel51, Filippos Bellos, Donald Likosky, Michigan AI Laboratory
