Despite becoming an overnight hit when it launched, ChatGPT is still struggling to excel in some areas, particularly when it comes to assisting with coding, new research has claimed.
Generative AI tools such as GitHub’s Copilot have been positioned as an ideal solution to programming problems, and some developers have been using them to speed up their workflow and free up more time for productive work.
However, a new study from researchers at Purdue University found that more than half (52%) of the responses ChatGPT produced to programming questions were incorrect.
ChatGPT helping with coding
The researchers analyzed 517 questions from Stack Overflow and compared ChatGPT’s answers to human responses, finding that the AI’s errors were widespread. In all, more than half (54%) involved conceptual misunderstandings, around one in three (36%) contained factual inaccuracies, a similar proportion (28%) included logical mistakes in code, and 12% featured terminology errors; the categories overlap, since a single answer can contain more than one type of mistake.
In the paper, ChatGPT was also criticized for producing unnecessarily long and complex responses that contained more detail than needed, leading to potential confusion and distraction. However, in the ultra-small-scale poll of 12 programmers, a third still preferred ChatGPT’s articulate, textbook-like answers, highlighting how easily coders can be misled.
The implications of these findings are significant, because errors in code can ultimately lead to bigger problems downstream, affecting multiple departments or even entire organizations.
The paper’s authors summarize: “Since ChatGPT produces a large number of incorrect answers, our results emphasize the necessity of caution and awareness regarding the usage of ChatGPT answers in programming tasks.”
Besides exercising caution, the researchers also call for further research into identifying and mitigating such errors, as well as greater transparency and communication around potential inaccuracies.