Professor Dan Boneh

Dan Boneh and team find that relying on AI assistants makes your code more likely to be buggy

Summary

Their study examined how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages.

Jan 2023

Professor Dan Boneh and team share the findings of their study, "Do Users Write More Insecure Code with AI Assistants?"

As stated in the abstract, the authors found that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities.
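For readers unfamiliar with the "adjusting temperature" knob mentioned above: temperature is a sampling parameter of the underlying model, where lower values make completions more deterministic. The following is a minimal, illustrative sketch of how such a query could be phrased and tuned using the legacy (pre-1.0) OpenAI Python SDK; the API key and prompt are placeholders, and "code-davinci-002" is OpenAI's public identifier for the Codex model family. This is not the study's experimental setup, only an example of the kind of prompt and parameter experimentation the authors observed.

    import openai  # legacy pre-1.0 SDK; the Codex models have since been deprecated

    openai.api_key = "sk-..."  # placeholder; supply a real key

    # Lower temperature -> more deterministic sampling. Participants who
    # experimented with settings like this, and who re-phrased their prompts,
    # tended to produce code with fewer security vulnerabilities.
    response = openai.Completion.create(
        model="code-davinci-002",  # public name of the Codex completion model
        prompt="# Python 3\n# Write a function that securely hashes a password.\n",
        temperature=0.2,           # default is 1.0; lower = less random
        max_tokens=256,
    )

    print(response.choices[0].text)

Re-running the same prompt at different temperatures, or re-phrasing the comment, is the kind of engagement the abstract credits with producing more secure solutions.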

The authors conclude that AI assistants should be viewed with caution because they can mislead inexperienced developers and create security vulnerabilities.

The authors also hope their findings will lead to improvements in the way AI assistants are designed because they have the potential to make programmers more productive, to lower barriers to entry, and to make software development more accessible to those who dislike the hostility of internet forums.

Excerpted from "Study finds AI assistants help developers produce code that's more likely to be buggy: At the same time, tools like GitHub Copilot and Facebook InCoder make developers believe their code is sound," December 21, 2022.

Published: Jan 11th, 2023 at 12:25 pm
Updated: Jan 11th, 2023 at 12:29 pm