In recent years, AI-assisted programming, popularly known as vibe coding, has become common practice in software development. According to Palo Alto Networks' State of Cloud Security Report 2025, 99% of organizations already use AI agents in their development processes. However, the latest report from Unit 42, the company's threat intelligence division, warns that this approach can pose significant security risks if not implemented with proper controls.
These tools enable both professional developers and citizen developers (users without advanced training in code review or cybersecurity) to produce large quantities of working code in very little time. However, the generated software can embed hidden vulnerabilities, insecure coding practices, or unreliable dependencies that go undetected in early testing phases and only surface once the applications are already in production.
From Assisted Development to Automated Risk
According to Unit 42, the main issue does not stem from the technology itself but from the false sense of security it can create, concealing flaws that are difficult to detect in the early stages. Vibe-coded or AI-generated software often “looks correct” and functions as intended, but it is not inherently designed to meet secure coding standards.
Vibe coding also changes how vulnerabilities arise: many security flaws now originate directly in the development phase, when code is generated automatically and at speed. This increases the likelihood of integrating different types of issues, such as:
- Insecure application development leading to a security breach.
- Insecure platform logic resulting in code execution.
- Insecure platform logic allowing authentication bypass.
Additionally, these tools can become new targets for cybercriminals, who seek to manipulate their behavior through techniques such as prompt injection (inserting hidden instructions into the assistant's inputs) or by pulling in code fragments from external sources that host malicious software.
Securing the “Code Lifecycle”: A Strategic Priority
To address this scenario, Unit 42 has introduced the SHIELD framework, designed to reintroduce secure design principles into AI-assisted coding. This framework provides organizations with a practical guide to balance productivity in vibe coding with effective risk management, preventing innovation from expanding the attack surface:
- S – Separation of Duties: Vibe coding platforms may grant excessive privileges. Avoid combining incompatible functions in a single agent and restrict AI agents to development and testing environments.
- H – Human in the Loop: Require mandatory human review and approval of pull requests (PRs) for any critical code.
- I – Input/Output Validation: Separate trusted instructions from untrusted data and apply static application security testing (SAST) before merging code.
- E – Enforce Security-Focused Helper Models: Use specialized agents to validate security, scan for secrets, and verify controls before deployment.
- L – Least Agency: Apply the principle of least privilege, restricting access and destructive commands.
- D – Defensive Technical Controls: Implement software composition analysis (SCA) and disable auto-execution to strengthen monitoring and security during deployment.
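To make the defensive controls above concrete, here is a minimal sketch of what an automated pre-merge gate might look like: before a human reviewer approves AI-generated code, the proposed change is scanned for hardcoded secrets and destructive commands. This is an illustrative example, not part of the SHIELD framework itself; the patterns and the `review_diff` helper are hypothetical and far simpler than a real SAST or secret-scanning tool.

```python
import re

# Hypothetical patterns a pre-merge gate might check. Real secret
# scanners and SAST tools use far more extensive rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

DESTRUCTIVE_COMMANDS = [
    re.compile(r"\brm\s+-rf\s+/"),               # recursive filesystem delete
    re.compile(r"\bDROP\s+TABLE\b", re.IGNORECASE),  # destructive SQL
]

def review_diff(diff_text: str) -> list[str]:
    """Return a list of findings; an empty list means the gate passes
    and the change can move on to mandatory human review."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                findings.append(f"line {lineno}: possible hardcoded secret")
        for pat in DESTRUCTIVE_COMMANDS:
            if pat.search(line):
                findings.append(f"line {lineno}: destructive command")
    return findings

if __name__ == "__main__":
    sample = 'api_key = "sk-live-1234567890abcdef"\nos.system("rm -rf /tmp/build")'
    for finding in review_diff(sample):
        print(finding)
```

In practice a gate like this would run in CI alongside SCA and secret-scanning tools, blocking the merge whenever `review_diff` returns findings, so that human approval (the "H" in SHIELD) is never the only line of defense.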
While vibe coding represents a natural evolution in software development, Palo Alto Networks warns that many organizations are neglecting established security principles, such as the “least privilege” approach, in favor of speed and functionality. Therefore, this practice should not be adopted without thorough security review, as protection cannot be limited to the final phases of the application lifecycle. Instead, security must be integrated from the initial design and code generation stages to ensure secure and reliable development.

