The image of a developer signing a line of code may seem outdated in an era where AI agents generate patches, tests, and documentation in seconds. But that signature still carries more weight than ever. Not because software should revert to slow, manual craftsmanship, but because critical systems require something no model can provide on its own: verifiable human accountability.
The rise of programming assistants has changed the pace of development. Tools like Copilot, Claude Code, Cursor, or Codex help write functions, locate bugs, suggest refactorings, and automate tasks that previously took hours. In some teams, the debate is no longer whether to use AI, but how to integrate it without losing control over quality, security, and authorship.
The problem arises when speed is mistaken for reliability. An AI-generated patch can compile, follow the project’s style, and pass simple tests, but fail in an edge case, introduce an insecure dependency, or replicate a vulnerable pattern learned from public code. The appearance of correctness is not enough. In software—especially in critical infrastructure—the cost of a failure often manifests only when the system is in production.
Linux sets a boundary: AI helps but does not sign
The Linux kernel has formalized a rule that sums up this moment well. AI assistants can aid development, but they cannot add the Signed-off-by tag. That signature is linked to the Developer Certificate of Origin, the mechanism by which a contributor certifies they have the right to submit that code and take responsibility for it.
The kernel guide is clear: only a person can review AI-generated code, verify its licensing, add their own signature, and be accountable for the result. When an AI tool has participated, it must be indicated with a Created-by or Assisted-by tag, identifying the agent, model version, and any other analysis tools used.
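By way of illustration, a contribution made under such a policy might carry trailers along these lines. The author, tool, and model names below are placeholders, and the exact tag wording should be taken from the project’s own documentation:

```
subsystem: brief description of the change

Explanation of what the patch does and why it is needed.

Assisted-by: ExampleAgent (example-model-v1)
Signed-off-by: Jane Developer <jane.developer@example.org>
```

The last line is the one only a human may add: it is the Signed-off-by that binds the change to the Developer Certificate of Origin.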
This isn’t a rejection of AI. It’s a way to embed it within an existing trust process. The message for other projects is simple: AI can write, suggest, or accelerate, but it cannot bear legal or technical responsibility. If a patch breaks something, introduces a vulnerability, or violates a license, accountability cannot rest on a model, an API, or a distant provider.
This distinction will become increasingly crucial. In development environments with autonomous agents, the temptation to delegate complete decisions will grow. But the more autonomous a tool is, the more essential it becomes to record what it did, with what permissions, on which repository, under what instructions, and who validated the final result.
Code generated quickly also creates debt quickly
Security in AI-assisted development cannot just rely on running a linter or basic tests. A Veracode report on AI-generated code found that 45% of the analyzed samples contained known security flaws. This doesn’t mean all AI-generated code is unsafe, nor that human code is perfect. But it indicates that productivity without review can build up technical and security debt at a faster pace.
Here’s an idea that should become standard practice: treat AI-generated code as untrusted until proven otherwise. This means human review, automated testing, static analysis, dependency checks, secret management, security testing, and architectural review when changes affect sensitive areas.
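A minimal sketch of what “untrusted until proven otherwise” can look like as a pre-merge gate is shown below. The specific tools invoked (pytest, bandit, pip-audit, detect-secrets) are examples, not requirements; substitute whatever test runner, static analyzer, dependency auditor, and secret scanner your stack already uses.

```python
"""Pre-merge gate: treat every change, AI-assisted or not, as untrusted
until a fixed battery of automated checks has passed. Tool choices are examples."""
import subprocess
import sys

# Each entry: (label, command). Swap in the tools your project actually uses.
CHECKS = [
    ("unit tests", ["pytest", "-q"]),
    ("static analysis", ["bandit", "-r", "src", "-q"]),
    ("dependency audit", ["pip-audit", "--strict"]),
    ("secret scan", ["detect-secrets", "scan", "--all-files"]),
]

def main() -> int:
    failures = []
    for label, cmd in CHECKS:
        print(f"running {label}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failures.append(label)
    if failures:
        print(f"blocked: {', '.join(failures)} failed; fix and re-run before review")
        return 1
    print("automated checks passed; a human reviewer still has to approve the merge")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A gate like this only filters out the obvious problems; it does not replace the human review and architectural judgment described above.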
It’s also important to monitor dependencies. Models may suggest libraries that don’t exist, are outdated, or poorly maintained. In a traditional workflow, a developer usually understands the ecosystem they’re working in. In an agent-driven workflow, a tool might include packages for convenience without understanding the organization’s security policies. That risk isn’t mitigated by trust but by controls.
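One way to turn that into a control rather than a habit is to compare every declared dependency against a list someone has actually reviewed, before anything is installed. A sketch, assuming dependencies live in a requirements.txt and the reviewed list is a plain approved-packages.txt maintained by the platform or security team (both file names are assumptions):

```python
"""Reject dependencies that have not been explicitly reviewed.
File names and allowlist format are illustrative assumptions."""
from pathlib import Path
import re
import sys

def package_names(path: str) -> set[str]:
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Keep only the package name, dropping version pins and extras.
        match = re.match(r"[A-Za-z0-9._-]+", line)
        if match:
            names.add(match.group(0).lower())
    return names

declared = package_names("requirements.txt")
approved = package_names("approved-packages.txt")
unreviewed = sorted(declared - approved)

if unreviewed:
    print("unreviewed dependencies:", ", ".join(unreviewed))
    sys.exit(1)
print("all declared dependencies are on the approved list")
```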
The classic principle of least privilege should be extended with another: least agency. An agent shouldn’t be able to do everything it technically can, only what’s necessary to complete a specific task. It’s not the same to ask it to generate a patch proposal as to grant permissions to modify branches, open pull requests, change pipelines, install dependencies, or deploy to production.
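A deny-by-default task policy is one way to make least agency concrete. The structure and action names below are hypothetical; in practice the enforcement belongs in the platform or tooling layer, not in the agent’s own prompt:

```python
"""Deny-by-default action policy for a single agent task (illustrative)."""
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TaskPolicy:
    repository: str
    allowed_actions: frozenset = field(default_factory=frozenset)

    def permits(self, action: str, repository: str) -> bool:
        # Anything not explicitly granted for this repository is refused.
        return repository == self.repository and action in self.allowed_actions

# A patch-proposal task: the agent may read code and open a draft PR,
# but not push to branches, edit pipelines, install packages, or deploy.
policy = TaskPolicy(
    repository="org/payments-service",
    allowed_actions=frozenset({"read_repo", "propose_patch", "open_draft_pr"}),
)

for action in ("propose_patch", "modify_pipeline", "deploy_production"):
    verdict = "allowed" if policy.permits(action, "org/payments-service") else "denied"
    print(f"{action}: {verdict}")
```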
The supply chain shows that implicit trust fails
Recent incidents outside of AI-assisted development reinforce this lesson. In April 2026, the CPUID site—known for tools like CPU-Z and HWMonitor—was compromised for less than 24 hours. According to Kaspersky, attackers replaced legitimate download links with trojanized installers containing a signed executable and a malicious DLL named CRYPTBASE.dll, loaded via DLL sideloading.
The detail matters. Compromising users of a signed binary doesn’t always require modifying the binary itself. Attackers can instead alter the distribution chain, drop a malicious library in the right location, and exploit legacy DLL search-order rules. The signature on the executable still conveys trust, but the package that reaches the user is already compromised.
In April 2026, Apple patched CVE-2026-28950, a vulnerability in Notification Services, which could cause notifications marked for deletion to remain unexpectedly on the device. Security media linked the flaw to forensic retrieval of content from Signal notifications. End-to-end encryption can be robust, but final privacy depends on the OS, notifications, local backups, logs, and extraction tools.
These cases share a common lesson: security no longer depends on a single layer or promise. A model might generate seemingly correct code. An executable might be signed. An app might encrypt messages. But if the supply chain, the OS, library loading, or log management fail, trust is broken elsewhere.
Continuous verification, not ceremonial auditing
AI governance shouldn’t be just a set of policies reviewed once a year. When agents generate code, modify documentation, open issues, review pull requests, or propose deployments, verification must be part of daily workflows.
This involves recording which parts of a change were AI-assisted, maintaining traceability of prompts and relevant actions, requiring human review in critical paths, applying automatic controls before merging, and limiting agent permissions per repository, environment, and task. It also means measuring false positives, repeated errors, and zones where the tool tends to make mistakes.
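In practice, much of that traceability can be captured as a structured record per agent action, so a later review can answer who ran what, where, with which permissions, and under whose approval. A minimal sketch; the field names are assumptions, not any established standard:

```python
"""Append-only audit record for agent activity (illustrative field names)."""
import hashlib
import json
from datetime import datetime, timezone

def audit_record(agent, model, repository, action, prompt, permissions, reviewer):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "model": model,
        "repository": repository,
        "action": action,
        # Hash the prompt so it can be matched against retained logs
        # without copying potentially sensitive text into every record.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "permissions": permissions,
        "human_reviewer": reviewer,  # stays None until a person signs off
    }

record = audit_record(
    agent="ExampleAgent",
    model="example-model-v1",
    repository="org/payments-service",
    action="open_draft_pr",
    prompt="Refactor the retry logic in the billing client",
    permissions=["read_repo", "open_draft_pr"],
    reviewer=None,
)
print(json.dumps(record, indent=2))
```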
The goal isn’t to halt AI adoption but to make it sustainable. Innovation without auditability is unmanageable. Automation without boundaries is uncontrollable. And code that no one signs today can become a serious problem when it fails tomorrow.
The role of developers doesn’t disappear with agents. It changes. They become less typists of repetitive functions and more responsible for judgment, context, architecture, security, and validation. AI can write a lot, but someone must understand what should not be written, what should not be deployed, and what deserves slower review.
By 2026, the question isn’t whether an organization uses AI to develop software. It’s who signs what that AI produces, what controls verify it, and what happens when it fails. Humans remain in charge not out of nostalgia, but because they are still the only ones who can answer for the result.
Frequently Asked Questions
Can an AI sign code in the Linux kernel?
No. The kernel guide states that AI agents cannot add Signed-off-by tags. Only a person can certify the Developer Certificate of Origin and assume responsibility.
What does the Assisted-by tag mean?
It indicates that an AI tool helped with the contribution. It provides transparency without transferring legal or technical responsibility to the AI.
Is all AI-generated code unsafe?
Not necessarily. But it should be treated as untrusted code until reviewed. Security studies have found common flaws in AI samples, so human review and testing remain necessary.
What controls should a company applying AI agents in programming implement?
They should restrict permissions, log actions, require human review for sensitive changes, conduct security analysis, monitor dependencies, and maintain continuous verification within the development flow.

