Linux now allows AI code, but the human still owns the risk
Linux is opening the door to AI code in kernel submissions in 2026. But here's the part that matters to you: AI can help write code, yet you're still the one responsible for what gets sent upstream. The spirit of the new policy is simple: AI tooling isn't inherently bad for the kernel, but AI models tend to produce code that is far harder to maintain when nobody checks it carefully. In other words, Linux is saying yes to AI assistance, not yes to shrugging off responsibility.
That is the real story behind the new Linux kernel AI policy.
What changed in the Linux kernel policy
According to new Linux project documentation highlighted by XDA Developers, Linux kernel development now permits AI-generated code submissions as long as those patches follow the normal kernel coding and contribution rules.
That means AI in Linux kernel work is allowed if the submission:
- follows kernel submission guidelines
- fits Linux kernel licensing requirements
- is properly attributed to the bot or AI tool used
This is not a free pass. It is closer to saying, "Use the tool if you want, but follow all the same rules you already had to follow."
Why you are still on the hook
The key rule is the most human one: the person who submits the patch owns it.
Linux treats AI-written code as the contributor's work. So if your assistant generated the patch, you still have to:
- review all AI-generated code
- ensure the code complies with licensing rules
- add your own Signed-off-by tag
- take full responsibility for the contribution
That last point is the one people will remember. If the patch breaks something, creates a legal problem, or slips in bad logic, you cannot point at the bot and walk away.
I think that is the only rule that makes sense. An AI tool cannot answer mailing list questions, defend the design, or own the consequences later.
The Signed-off-by rule matters more than it sounds
One detail from the documentation is easy to miss, but it is huge.
AI agents must not add Signed-off-by tags.
Why? Because the Linux kernel uses the Developer Certificate of Origin, or DCO. That sign-off is a legal and practical statement from a human being. It says you have the right to submit the work and that you stand behind it.
A bot cannot legally certify that.
So even if the AI wrote 90 percent of a patch, the submission still needs a human sign-off from a person who reviewed it and accepts responsibility.
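To make that split concrete, here is one plausible shape for an AI-assisted patch's commit message. Everything in it is invented for illustration (the subject line, the names, the model identifier), and the `Co-developed-by:` trailer is shown as one way to satisfy the attribution requirement, not a verbatim rule quoted from the kernel docs:

```
mm: fix off-by-one in range check

Describe what the patch does and why, in your own words,
after reviewing every generated line yourself.

Co-developed-by: ExampleAI <example-model-v1>
Signed-off-by: Jane Developer <jane@example.org>
```

Note the asymmetry: the tool can be credited in an attribution trailer, but the Signed-off-by line belongs to the human who reviewed the patch, because only a human can make the DCO certification.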
This is bigger than Linux
The Linux policy is really a template for how serious software teams will handle AI code.
You can see the same pattern across the wider AI coding debate:
- AI speeds up output
- humans still own quality
- humans still own legal compliance
- humans still own long-term maintenance
That matches what many engineers have been saying for months. AI can generate code quickly, but quick code is not the same as good code.
One recent analysis from The New Stack put it bluntly: AI code often creates duplication, fragile logic, standards drift, and context blindness inside a repo. Those are boring problems, but boring problems are exactly what make maintenance painful a year later.
Why maintainability is the real danger
A lot of AI-generated code looks fine on first read. That is part of the problem.
It can:
- duplicate existing functions instead of reusing them
- miss edge cases
- break team conventions quietly
- use outdated or deprecated patterns
- solve the local prompt while ignoring the larger system
If you maintain Linux code, or any production code, that stuff matters more than whether the first demo worked.
This is also why some developers have started pulling back from using AI for production work. The speed boost is real, but so is the cleanup cost. If your team spends the next six months untangling weird abstractions and duplicate helpers, the tool did not really save time.
AI tools are not magic wrappers, and they are not excuses either
There is another useful angle here. Some people dismiss modern AI products as "just wrappers," while others treat them like independent engineers. Both views miss the point.
The useful middle ground is this: AI tools can be powerful, but they still need human structure around them.
That structure includes:
- clear coding standards
- tests written by humans
- review workflows
- audit trails
- security checks
- context about the repo and architecture
So yes, AI can help you write code. No, it does not replace your judgment.
The cost problem most people ignore
The responsibility issue is not only about code quality. It is also about cost and tooling choices.
A recent Almost Timely newsletter argued that agentic AI systems can consume vastly more usage than a normal chat session. The piece claimed a 30-minute chat might use 15,000 to 20,000 words, while agent systems in the same period could consume 15 to 20 million. It also argued that command line integrations can use far less than MCP-based approaches in some cases.
Even if you do not buy every number, the broader point is solid: once AI starts acting across tools, terminals, services, and repos, the human operator still has to set guardrails.
You choose the permissions. You choose the workflow. You choose whether the agent runs in a sandbox or directly on your machine. You pay the bill if it goes wrong.
That sounds a lot like the Linux kernel policy, just in a different form.
What smart teams should do next
If you use AI to write code for Linux or for your own products, use a simple checklist:
- Review every generated patch line by line.
- Check license compatibility before submission.
- Make sure bot attribution is correct.
- Never let an AI agent add Signed-off-by tags.
- Run tests that you trust.
- Search the repo before generating new code to avoid duplication.
- Keep tasks small so reviews stay realistic.
- Enforce standards in CI, not just in your head.
- Give AI the right context, not the whole universe.
- Assume the human submitter owns the result every time.
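A few of the items above can be wired into a small pre-submission gate. The sketch below is a minimal shell illustration, assuming POSIX `grep -E`; the helper names and the sample message are made up for this example, and a real gate would also run the kernel's own `scripts/checkpatch.pl` and your test suite:

```shell
# Hypothetical helpers for gating an AI-assisted patch before submission.
# The function names and the sample message are invented for illustration.

# has_human_signoff MSG: succeed if the commit message carries a
# Signed-off-by trailer, the certification the DCO requires from a human.
has_human_signoff() {
    printf '%s\n' "$1" | grep -q '^Signed-off-by: '
}

# signoff_looks_automated MSG: succeed if a Signed-off-by line appears to
# name a bot, agent, or assistant, which the policy forbids.
signoff_looks_automated() {
    printf '%s\n' "$1" | grep -iqE '^Signed-off-by:.*(bot|assistant|agent)'
}

msg='mm: example fix

Co-developed-by: ExampleAI <example-model-v1>
Signed-off-by: Jane Developer <jane@example.org>'

if has_human_signoff "$msg" && ! signoff_looks_automated "$msg"; then
    echo "sign-off gate: pass"
else
    echo "sign-off gate: fail"
fi
# prints "sign-off gate: pass"
```

The point of a gate like this is not that grep catches every abuse; it is that the machine-enforceable parts of the checklist should live in CI, so the human reviewer can spend their attention on design and correctness.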
That last one is the rule behind all the others.
What this means for open source maintainers
There is real tension here.
On one side, maintainers want more help, faster fixes, and lower contribution friction. On the other, there is growing worry that low-effort AI patches could flood open source projects with code that works today but weakens the project over time.
You can see that anxiety in community discussions that claim AI code is hollowing out open source. Even when those complaints are light on evidence, the concern itself is understandable. Maintainers are not just reviewing syntax. They are protecting design quality, legal safety, and project culture.
Linux's answer is pragmatic: accept the tool, keep the accountability human.
FAQ
Is Linus Torvalds using AI?
Yes, Linus Torvalds has used AI for personal coding work, including a Python visualizer component mentioned in early 2026 discussions. But that does not mean he wants AI to replace human judgment in kernel development. The Linux approach is closer to "use AI as a tool, then review like a professional."
Is AI writing 90% of code?
Not in any universal sense. A widely repeated claim said Claude was responsible for about 90% of Anthropic's code at one point, but broader reporting has also said the average merged code share across Anthropic was closer to 50%. The safe takeaway is that AI writes a lot of code in some organizations, but the percentage depends on the team, the workflow, and how "written by AI" is measured.
Why did I stop using AI for coding?
A common reason is that AI makes coding faster in the short term but introduces problems in production work. Developers often cite old syntax, deprecated components, extra loops, duplicate logic, weak edge-case handling, and a creeping sense that they are no longer thinking deeply about the code. If you feel that, you are not alone.
Has AI made coding obsolete?
No. AI has changed coding, but it has not made coding obsolete. Someone still has to define the problem, choose the architecture, verify the output, test edge cases, handle licensing, secure the system, and maintain the code later. AI reduces some typing. It does not remove responsibility.
Final takeaway
Linux now allows AI-written code, but it does not allow AI-written accountability.
That is the part worth remembering. If you submit the patch, you own the patch. If you ship the feature, you own the bug. If you sign off on the code, you own the decision.
AI can help you move faster. It cannot carry your responsibility for you.

