Code-generating tools could be more of a security hindrance than help


New research from a group of Stanford-affiliated researchers has found that code-generating AI tools such as GitHub Copilot can present more security risks than many users realize.

The study looked specifically at Codex, a product of OpenAI, the research lab that Elon Musk co-founded.

Codex powers the Microsoft-owned GitHub Copilot platform, which is designed to make coding easier and more accessible by translating natural language into code and suggesting changes based on contextual evidence.

AI-coding problems

Lead co-author of the study, Neil Perry, explains that “code-generating systems are currently not a replacement for human developers”.

The study asked 47 developers with varying levels of experience to use Codex to complete security-related programming tasks in Python, JavaScript, and C. It concluded that the participants who relied on Codex were more likely to write insecure code than a control group working without it.
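The study's actual tasks aren't reproduced here, but the kind of gap it measures is familiar. As a hypothetical illustration (the function names and table schema below are invented, not taken from the study), consider a database query built by string formatting, a pattern an assistant can plausibly suggest, next to the parameterized form that resists SQL injection:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: user input is spliced directly into the SQL text.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Safer: the driver binds the value, so input cannot alter the query.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"  # classic injection string
print(len(find_user_unsafe(conn, payload)))  # matches every row: 2
print(len(find_user_safe(conn, payload)))    # no user has that name: 0
```

Both functions look correct on well-behaved input, which is exactly why a developer trusting a suggestion can miss the difference.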

Perry explained: “Developers using [coding tools] to complete tasks outside of their own areas of expertise should be concerned, and those using them to speed up tasks that they are already skilled at should carefully double-check the outputs and the context that they are used in in the overall project.”

This isn’t the first time that AI-powered coding tools have come under scrutiny. GitHub Copilot has already drawn legal action over the Microsoft-owned company’s failure to attribute the work of other developers: a lawsuit seeking $9 billion in damages for 3.6 million alleged individual Section 1202 violations.

For now, AI-powered code-generating tools are best thought of as a helping hand that can speed up programming rather than an all-out replacement. However, if their development over the past few years is anything to go by, they may one day supplant traditional coding.

Via TechCrunch



from TechRadar - All the latest technology news https://ift.tt/4w6ZaiC
