
You write insecure code with AI


AI assistants like GitHub Copilot can help you write code faster. But is that code secure? You might believe it's more secure; it likely isn't.

Before the era of AI, in April 2011, Sony shut down its PlayStation Network after a major hack leaked personal data. The fallout was significant:

  • 77 million accounts compromised.
  • The PlayStation Network was offline for 23 days.
  • US$171 million in costs to Sony as of May 2011 (about US$239 million adjusted to August 2024).

The details of the hack were never publicly released. However, some assume an insecure Application Programming Interface (API) was the cause. I feel sorry for the developer who wrote that code.

How do we avoid insecure code?

Human error leads to insecure code, but developers can mitigate this through:

  • Learning about common security flaws.
  • Peer-reviewing code.
  • Writing tests to validate functionality.
  • Engaging security experts (red team) to hack APIs.

These strategies are essential for every project. However, the key to secure code is to slow down, focus on quality, and thoroughly understand your work.

The best way to write secure code is to slow down.

Rushed code, driven by deadlines or lack of interest, often results in security vulnerabilities. Quality coding takes time. Shortcuts can be costly.

Early this decade, AI code assistants emerged as a vital tool for developers. They appear to offer a powerful shortcut without the usual drawbacks.

AI Code Assistants speed up development

AI code assistants speed up development by creating code from text prompts and completing partial lines.

The first popular AI code assistant was GitHub Copilot.


An example of using GitHub Copilot to write code that validates phone numbers. Animation source: github.com

When you write code, GitHub Copilot anticipates what comes next, providing inline suggestions and reducing the physical typing required.
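
To make that concrete, here is a rough sketch of the kind of completion Copilot offers. This is my own illustration, not the exact code from the animation above, and the function name and regular expression are mine:

    import re

    # Prompt: validate a US phone number such as (555) 123-4567 or 555-123-4567
    def is_valid_phone_number(number: str) -> bool:
        """Return True if the string looks like a US phone number."""
        pattern = r"^\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}$"
        return re.match(pattern, number) is not None

You type the comment (or the first line of the function), and the assistant fills in the rest.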

GitHub surveyed over 2,000 developers about their experience with GitHub Copilot. Here are some key takeaways:

  • 88% perceived they were more productive using GitHub Copilot.
  • 77% spent less time searching.
  • 96% were faster with repetitive tasks.

I've used GitHub Copilot a lot. It speeds up my typing and is enjoyable, but it didn't revolutionise my coding. The real change arrived with ChatGPT-4, which genuinely transformed how I program.

AI Code Assistants' security risks

AI code assistants can write portions of code similar to the APIs Sony uses to run its PlayStation Network. However, they write that code without context. They don't understand your system's requirements, so they don't understand its architectural characteristics (also known as "-ilities"), such as:

  • Availability.
  • Reliability.
  • Maintainability.
  • Security.

AI code assistants speed up coding, but they often miss potential issues.
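
As a hypothetical illustration (this is not code from Sony's systems, nor from any real assistant session), imagine asking an assistant for "a function that looks up a user by ID". It will happily suggest something like the first version below; it takes a human, thinking about security, to insist on the parameterised second version:

    import sqlite3

    def get_user_insecure(conn: sqlite3.Connection, user_id: str):
        # A typical quick suggestion: the query is built with string formatting,
        # so input like "1 OR 1=1" returns every row (SQL injection).
        return conn.execute(f"SELECT * FROM users WHERE id = {user_id}").fetchone()

    def get_user_secure(conn: sqlite3.Connection, user_id: str):
        # The safer version binds the value as a parameter, so the input is
        # treated as data rather than as part of the SQL statement.
        return conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchone()

Both versions "work" in a quick demo, which is exactly why the insecure one is so easy to ship.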

Unlike AI code assistants, an experienced developer would encourage you to consider the bigger picture, discussing security, maintenance, testing, and constraints instead of just providing the quickest solution.

Developers need to be aware of AI's security shortfalls. However, a 2022 study published on Cornell University's arXiv suggests that awareness is lacking. In the study "Do Users Write More Insecure Code with AI Assistants?", the authors set out to answer the following questions:

  • Do users write more insecure code when they have access to an AI programming assistant?
  • Do users trust AI assistants to write secure code?
  • How do users' language and behaviour when interacting with an AI assistant affect the degree of security vulnerabilities in their code?

Within the abstract, the authors wrote:

"Overall, we find that participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant."

And in their conclusion:

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

This is the most concerning statement in the study. Put simply, when you use an AI assistant you are more likely to write insecure code, while believing it is more secure than the code you would have written yourself.

Developers need to be aware of this false confidence.

The authors noted that participants who put more effort into creating prompts were more likely to receive secure solutions.
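
As a sketch of what that extra effort can look like (the prompt wording and the hash_password function below are my own illustration, not material from the study), a security-specific prompt such as "hash the password with a random per-user salt and a slow key-derivation function, and never store or log the plaintext" tends to steer an assistant towards something like this rather than a bare SHA-256 call:

    import hashlib
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Hash a password with a random per-user salt using PBKDF2-HMAC-SHA256."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
        return salt, digest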

How do we avoid insecure code when using AI Code Assistants?

AI assistants are essential for competitiveness but come with security risks. So, how do we mitigate the risks?

The study had a limited pool of developers, mostly university students. The researchers suggested that more experienced developers may have more robust security backgrounds.

My top tips are:

  • Junior developers must understand the code an AI assistant has written before adding it to a codebase. Additionally, they need to let code reviewers know where AI was used.
  • Although senior developers are less likely to fall for common traps, they must always be critical of AI assistants' code.

And most importantly:

  • Developers need to be aware of the false confidence AI assistants can create.

We've solved this before. It's just one more common security flaw.

Do you have a development team that needs to use AI assistants safely? If so, get in touch. I'd love to hear your story.
