Why AI Won’t “Kill Open Source”

In a recent ZDNet piece, David Gewirtz warns that open source may not survive the rise of generative AI. I see it differently. AI doesn’t change what open source is about: transparency, collaboration, sharing, and freedom. It only changes how we write code.

I understand the angle, but it leaves me puzzled.

Open source is, first, a tool for transparency. When you can see the source code, you can verify that it does not hide anything dangerous or dishonest, and you can check it for bugs. It is also a way to let others contribute to a project: they can fix bugs, propose new features, or adapt the code for another purpose. And there is another important aspect: sharing and forking. Sharing code is how open source spreads knowledge and helps others build new things. Forking allows innovation, diversity, and sometimes even new communities to grow from the same base.

All of that stays true no matter how the code is written. Whether it comes from a human typing every line or from a machine generating it, what open source means does not change. If you make the code proprietary, you lose those essential benefits, whatever tool you used to create it: transparency, collaboration, sharing, and freedom.

Yes, AI can now produce working code in seconds. But I see it like a student who learned to code by reading open source examples. Models like OpenAI Codex have learned syntax, programming rules, and best practices by reading code written by humans. They do not copy-paste code; they generate new code by following patterns and logic that already exist.

It’s learning, not theft.

Attribution also doesn’t change. The authorship of code remains with humans: the ones who drive the AI system and decide what to generate, what to keep, and what to publish. AI is a tool, not an author. The person who commits and shares the code is still the responsible contributor, just like before.

So what does AI really change? Not much. The principles of open source remain the same. The tools evolve, but the model stays. Humans are still responsible for what they publish. The code is still open to review, improvement, and redistribution.

The real challenge is not AI. It’s what happens when companies build powerful AI systems but keep them closed, trained on public data they don’t share back. That is the real risk for the open ecosystem. And it is not new: we have seen it before with proprietary software built on open foundations. The answer is not fear. The answer is to make AI open too, with open models, open data, and clear licenses.

In the end, AI doesn’t destroy open source. It shows why we need it more than ever. When machines can write code, we still need a way to check what they do, to trust the result, and to improve it together. That is open source. And that doesn’t change.

Gaël Duval – November 2025.

CEO at Murena.com – /e/OS and Mandrake Linux creator. Open source/Free Software evangelist since 1997.
