
A study has found that AI coding tools lead to more errors and vulnerabilities than writing code by hand.
It can be tempting to speed up your coding with an AI tool such as GitHub Copilot or Facebook's InCoder, but be careful.
A new study from Stanford University has found that programmers and developers who use AI tools to write code end up with less secure code than those who code on their own. At the same time, developers who use these tools are more likely to believe that their code is secure.
That's according to The Register.
“We found that participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection,” the paper states, and continues:
“Surprisingly, we also found that participants given access to an AI assistant were more likely to believe they were writing secure code than those without access to the AI assistant.”
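To make the SQL injection finding concrete, here is a minimal sketch in C, one of the languages the study covered, using SQLite's C API. The "users" table, the queries, and the function names are hypothetical illustrations, not code from the study: the unsafe version splices user input into the SQL text, while the safe version binds it as a parameter.

```c
/* Hypothetical illustration of SQL injection, not code from the study.
 * Build with: cc demo.c -lsqlite3 */
#include <stdio.h>
#include <sqlite3.h>

/* Print each row returned by sqlite3_exec(). */
static int print_row(void *unused, int argc, char **argv, char **cols) {
    (void)unused; (void)cols;
    for (int i = 0; i < argc; i++)
        printf("%s ", argv[i] ? argv[i] : "NULL");
    printf("\n");
    return 0;
}

/* Vulnerable: untrusted input is spliced into the SQL text, so an
 * input like "x' OR '1'='1" changes the meaning of the query. */
static void find_user_unsafe(sqlite3 *db, const char *name) {
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT id, name FROM users WHERE name = '%s';", name);
    sqlite3_exec(db, sql, print_row, NULL, NULL);
}

/* Safer: a prepared statement binds the input as data, never as SQL. */
static void find_user_safe(sqlite3 *db, const char *name) {
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "SELECT id, name FROM users WHERE name = ?;",
                           -1, &stmt, NULL) != SQLITE_OK)
        return;
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%d %s\n", sqlite3_column_int(stmt, 0),
               (const char *)sqlite3_column_text(stmt, 1));
    sqlite3_finalize(stmt);
}

int main(void) {
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db,
                 "CREATE TABLE users (id INTEGER, name TEXT);"
                 "INSERT INTO users VALUES (1, 'alice'), (2, 'bob');",
                 NULL, NULL, NULL);
    find_user_safe(db, "alice");          /* matches only alice */
    find_user_unsafe(db, "x' OR '1'='1"); /* injection: matches everyone */
    sqlite3_close(db);
    return 0;
}
```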
However, the study only looked at 25 vulnerabilities and three programming languages: Python, C, and Verilog.
“Although the results are inconclusive as to whether the AI assistant helped or harmed participants, we observed that participants in the AI assistant group were significantly more likely to introduce integer overflow errors into their solutions,” the Stanford researchers write, according to The Register.
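For readers unfamiliar with this bug class, here is a minimal, hypothetical illustration of signed integer overflow in C; it is not one of the study's tasks. Adding two large int values overflows, while widening to a larger type before adding avoids it.

```c
/* Hypothetical illustration of integer overflow, not code from the study. */
#include <stdio.h>
#include <limits.h>

/* Buggy: for large inputs, a + b exceeds INT_MAX and overflows
 * (undefined behavior for signed int; in practice it usually wraps
 * to a negative value, so the "average" comes out wrong). */
static int average_unsafe(int a, int b) {
    return (a + b) / 2;
}

/* Safer: widen to a larger type before adding, so the intermediate
 * sum cannot overflow (assuming long long is wider than int). */
static int average_safe(int a, int b) {
    return (int)(((long long)a + b) / 2);
}

int main(void) {
    int big = INT_MAX;
    printf("unsafe: %d\n", average_unsafe(big, big)); /* typically -1 */
    printf("safe:   %d\n", average_safe(big, big));   /* INT_MAX */
    return 0;
}
```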
The bottom line is that AI assistants should be used with caution, as they can mislead less experienced developers and introduce security vulnerabilities.