
Can you trust ChatGPT’s package recommendations?

vulcan.io

ChatGPT can offer coding solutions, but its tendency to hallucinate gives attackers an opening. Here's what we learned.

“- People ask LLMs to write code
- LLMs recommend imports that don't actually exist
- Attackers work out what these imports' names are, and create & upload them with malicious payloads
- People using LLM-written code then auto-add malware themselves”
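The chain above works because nothing checks that a model-suggested package is legitimate before it gets installed. A minimal defensive sketch in Python (the allowlist and the package names here are purely illustrative, not from the article): vet LLM-suggested imports against a list of packages your team has actually reviewed, instead of piping them straight into `pip install`.

```python
# Hypothetical allowlist of packages your team has reviewed.
APPROVED_PACKAGES = {"requests", "numpy", "pandas"}

def vet_imports(suggested):
    """Split LLM-suggested package names into approved and unverified.

    An unverified name is not necessarily malicious, but it should be
    checked by a human before installation -- attackers can register
    hallucinated names on public indexes with malicious payloads.
    """
    approved = [p for p in suggested if p in APPROVED_PACKAGES]
    unverified = [p for p in suggested if p not in APPROVED_PACKAGES]
    return approved, unverified

# "arangodb-turbo-client" is a made-up name standing in for a
# hallucinated import an LLM might suggest.
ok, suspect = vet_imports(["requests", "arangodb-turbo-client"])
print(ok)       # approved names
print(suspect)  # names needing manual review
```

A stricter variant could also query the index (e.g. PyPI's JSON API) to see whether the name exists at all; a missing name is a strong signal the model invented the import, though mere existence proves nothing, since attackers register exactly those names.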


