Paul Krill
Editor at Large

GitHub Copilot makes insecure code even less secure, Snyk says

news
Feb 22, 2024 | 2 mins

Developer security firm warns that Copilot and other AI-powered coding assistants may replicate security vulnerabilities already present in the user’s codebase.


GitHub’s AI-powered coding assistant, GitHub Copilot, may suggest insecure code when the user’s existing codebase contains security issues, according to developer security company Snyk.

GitHub Copilot can replicate existing security issues in code, Snyk said in a blog post published February 22. “This means that existing security debt in a project can make insecure developers using Copilot even less secure,” the company said. Conversely, GitHub Copilot is less likely to suggest insecure code in projects without security issues, because there is less insecure code for it to draw on as context.

Generative AI coding assistants such as GitHub Copilot, Amazon CodeWhisperer, and ChatGPT offer a significant leap forward in productivity and code efficiency, Snyk said. But these tools do not understand code semantics and thus cannot judge the code they produce.

GitHub Copilot generates code snippets based on patterns and structures it has learned from a vast repository of existing code. While this approach has advantages, it also can have a glaring drawback in the context of security, Snyk said. Copilot’s code suggestions may inadvertently replicate existing security vulnerabilities and bad practices present in neighbor files.
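As an illustration of the kind of pattern an assistant could pick up from neighboring files, consider a hard-coded credential, one of the bad practices Snyk calls out. This is a hedged sketch, not code from Snyk’s post; the variable and function names are hypothetical:

```python
import os

# Insecure pattern: a secret committed directly in source code.
# If this appears in a project's files, an AI assistant drawing on
# that context may suggest similar hard-coded credentials elsewhere.
API_KEY_INSECURE = "sk-live-1234567890abcdef"  # bad practice

# Safer pattern: resolve the secret from the environment at runtime,
# so no credential ever lands in the repository.
def get_api_key() -> str:
    key = os.environ.get("API_KEY")
    if key is None:
        raise RuntimeError("API_KEY environment variable is not set")
    return key
```

The point is not the specific key format but the context effect Snyk describes: the more a codebase normalizes a bad pattern, the more material an assistant has to reproduce it from.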

To mitigate duplication of existing security issues in code generated by AI assistants, Snyk advises the following steps:

  • Developers should conduct manual reviews of code.
  • Security teams should put a SAST (static application security testing) guardrail in place, including policies.
  • Developers should adhere to secure coding guidelines.
  • Security teams should provide training and awareness to development teams and prioritize and triage the backlog of issues per team.
  • Executive teams should mandate security guardrails.

According to Snyk data, the average commercial software project has 40 vulnerabilities in first-party code, and almost a third of those are high-severity issues. “This is the playground in which AI generation tools can duplicate code by using these vulnerabilities as their context,” Snyk said. The most common issues Snyk sees in commercial projects are cross-site scripting, path traversal, SQL injection, and hard-coded secrets and credentials.
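SQL injection, one of the common issues Snyk cites, shows concretely how an existing bad pattern differs from the safe one. A minimal sketch using Python’s standard sqlite3 module (the function names are hypothetical, not from Snyk’s post):

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning
    # and can return every row in the table.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input strictly
    # as data, so the same payload matches nothing.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

If a project’s existing files build queries the first way, that is exactly the sort of context an assistant could replicate; a SAST guardrail is meant to catch the pattern regardless of whether a human or a tool wrote it.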

GitHub could not be reached late Wednesday afternoon to respond to Snyk’s comments about GitHub Copilot.



Paul Krill is editor at large at InfoWorld. Paul has been covering computer technology as a news and feature reporter for more than 35 years, including 30 years at InfoWorld. He has specialized in coverage of software development tools and technologies since the 1990s, and he continues to lead InfoWorld’s news coverage of software development platforms including Java and .NET and programming languages including JavaScript, TypeScript, PHP, Python, Ruby, Rust, and Go. Long trusted as a reporter who prioritizes accuracy, integrity, and the best interests of readers, Paul is sought out by technology companies and industry organizations who want to reach InfoWorld’s audience of software developers and other information technology professionals. Paul has won a “Best Technology News Coverage” award from IDG.
