#AI #GenerativeAI #Cybersecurity #Programming #ChatGPT: "Unless you have been living under a rock, you’ll be well aware of the generative AI craze. In the last year, millions of people have started using ChatGPT to support their work efforts, finding that it can significantly ease the burdens of their day-to-day workloads. That being said, there are some shortcomings.
We’ve seen ChatGPT generate URLs, references, and even code libraries and functions that do not actually exist. These LLM (large language model) hallucinations have been reported before and may be the result of old training data.
If ChatGPT is fabricating code libraries (packages), attackers could use these hallucinations to spread malicious packages without using familiar techniques like typosquatting or masquerading.
Those techniques are suspicious and already detectable. But if an attacker can create a package to replace the “fake” packages recommended by ChatGPT, they might be able to get a victim to download and use it.
The impact of this issue becomes clear when considering that whereas previously developers had been searching for coding solutions online (for example, on Stack Overflow), many have now turned to ChatGPT for answers, creating a major opportunity for attackers."
https://vulcan.io/blog/ai-hallucinations-package-risk
Can you trust ChatGPT’s package recommendations?
ChatGPT can offer coding solutions, but its tendency to hallucinate presents attackers with an opportunity. Here's what we learned. Bar Lanyado (Vulcan Cyber)
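
A minimal sketch of one possible mitigation, not taken from the Vulcan Cyber post: before installing a package an LLM suggests, query the public PyPI JSON metadata endpoint (https://pypi.org/pypi/<name>/json) to confirm the project actually exists and to see when it was first published. A hallucinated name returns 404; a name that was registered only very recently, yet matches an LLM "recommendation", deserves extra scrutiny before use. The script and its output messages are illustrative assumptions, not part of the original article.

import json
import sys
import urllib.request
from urllib.error import HTTPError

# Public PyPI metadata endpoint for a given project name.
PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def check_package(name: str) -> None:
    """Report whether a suggested package exists on PyPI and when it first appeared."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            meta = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            print(f"{name}: not on PyPI -- possibly a hallucinated package name")
            return
        raise
    # Earliest upload time across all release files gives a rough "first published" date.
    releases = meta.get("releases", {})
    upload_times = [
        f["upload_time"]
        for files in releases.values()
        for f in files
        if "upload_time" in f
    ]
    first_seen = min(upload_times) if upload_times else "unknown"
    print(f"{name}: exists on PyPI, first release uploaded {first_seen}")

if __name__ == "__main__":
    # Usage (hypothetical package names): python check_packages.py requests some-suggested-lib
    for pkg in sys.argv[1:]:
        check_package(pkg)

Existence and age checks like this are only a first filter: an attacker can pre-register a hallucinated name, so reviewing the package's maintainer, source repository, and contents before depending on it remains necessary.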