You write insecure code with AI
by Sean Kelly on 15 August 2024
AI assistants like GitHub Copilot can help you write code faster. But is that code secure? You might believe it is; research suggests it likely isn't.
Before the era of AI, in April 2011, Sony shut down its PlayStation Network due to a major hack that leaked personal data. The fallout was significant:
- 77 million accounts compromised.
- The PlayStation Network was offline for 23 days.
- US$ 171 million (May 2011) in costs to Sony (US$ 239 million adjusted to August 2024).
The details of the hack were never publicly released. However, some assume an insecure Application Programming Interface (API) was the cause. I feel sorry for the developer who wrote that code.
How do we avoid insecure code?
Human error leads to insecure code, but developers can mitigate this through:
- Learning about common security flaws.
- Peer-reviewing code.
- Writing tests to validate functionality.
- Engaging security experts (red team) to hack APIs.
These strategies are essential for every project. However, the key to secure code is to slow down, focus on quality, and thoroughly understand your work.
The best way to write secure code is to slow down.
Rushed code, driven by deadlines or lack of interest, often results in security vulnerabilities. Quality coding takes time. Shortcuts can be costly.
Early this decade, AI code assistants emerged as a vital tool for developers. They appear to offer a powerful shortcut without the usual drawbacks.
AI Code Assistants speed up development
AI code assistants speed up development by creating code from text prompts and completing partial lines.
The first popular AI code assistant was GitHub Copilot.
An example of using GitHub Copilot to write code that validates phone numbers. Animation source: github.com
When you write code, GitHub Copilot anticipates what comes next, providing inline suggestions and reducing the physical typing required.
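A completion of the kind shown in that animation might look like the sketch below. The function name, regex, and number format are illustrative only, not taken from the animation:

```python
import re

# A phone-number pattern of the sort an assistant might suggest:
# an optional +64 or leading 0, then digit groups separated by
# optional spaces or hyphens. Illustrative only - real-world phone
# validation is considerably messier than one regex.
_PHONE_PATTERN = re.compile(
    r"^(?:\+64|0)[\s-]?\d{1,3}[\s-]?\d{3}[\s-]?\d{3,4}$"
)

def validate_phone_number(phone: str) -> bool:
    """Return True if the string looks like a NZ-style phone number."""
    return bool(_PHONE_PATTERN.match(phone.strip()))
```

The point isn't the regex itself; it's that the assistant produces a plausible-looking solution instantly, whether or not it actually covers your requirements.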
GitHub surveyed over 2,000 developers about their experience with GitHub Copilot. Here are some key takeaways:
- 88% perceived they were more productive using GitHub Copilot.
- 77% spent less time searching.
- 96% were faster with repetitive tasks.
I've used GitHub Copilot a lot. It speeds up my typing and is enjoyable to use, but it didn't revolutionise my coding. For me, the real change arrived with ChatGPT-4.
AI Code Assistants' security risks
AI code assistants can write portions of code similar to the APIs that Sony uses to run their PlayStation Network. However, they write code without context. They don't understand the requirements of your system; therefore, they don't understand the architectural characteristics (also known as "-ilities") of your system, such as:
- Availability.
- Reliability.
- Maintainability.
- Security.
AI code assistants speed up coding, but they often miss potential issues.
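A classic illustration of the kind of issue an assistant can miss is SQL injection. The example below is my own sketch (it is not from the Sony incident or any study): an assistant will happily complete the string-concatenation pattern, even though the parameterised version is just as easy to write:

```python
import sqlite3

# In-memory database for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_insecure(name: str) -> list:
    # An assistant will readily complete this pattern. The input is
    # concatenated straight into the SQL, so a crafted value like
    # "' OR '1'='1" returns every row: SQL injection.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_secure(name: str) -> list:
    # A parameterised query lets the driver handle escaping,
    # so the same crafted input matches nothing.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Both functions look correct when queried with an ordinary name, which is exactly why the insecure one survives a casual review.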
Unlike AI code assistants, an experienced developer would encourage you to consider the bigger picture, discussing security, maintenance, testing, and constraints instead of just providing the quickest solution.
Developers need to be aware of AI's security shortfalls. However, a 2022 study by Stanford researchers suggests awareness is lacking. In the study "Do Users Write More Insecure Code with AI Assistants?" the authors set out to answer the following questions:
- Do users write more insecure code when they have access to an AI programming assistant?
- Do users trust AI assistants to write secure code?
- How do users' language and behaviour when interacting with an AI assistant affect the degree of security vulnerabilities in their code?
Within the abstract, the authors wrote:
"Overall, we find that participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant."
And in their conclusion:
"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."
This is the most concerning statement in the study. Put simply: when using an AI assistant, you are more likely to write insecure code, while believing it is more secure than code you would have written yourself.
Developers need to be aware of this false confidence.
The authors noted that participants who put more effort into creating prompts were more likely to receive secure solutions.
How do we avoid insecure code when using AI Code Assistants?
AI assistants are essential for competitiveness but come with security risks. So, how do we mitigate the risks?
The study had a limited pool of developers, mostly university students. The researchers suggested that more experienced developers may have more robust security backgrounds.
My top tips are:
- Junior developers must understand the code an AI assistant has written before adding it to a codebase. Additionally, they need to let code reviewers know where AI was used.
- Although senior developers are less likely to fall for common traps, they must always be critical of AI assistants' code.
And most importantly:
- Developers need to be aware of the false confidence AI assistants can create.
We've solved this before. It's just one more common security flaw.
Do you have a development team that needs to use AI assistants safely? If so, get in touch. I'd love to hear your story.