
Can you trust ChatGPT’s package recommendations?

vulcan.io

ChatGPT can offer coding solutions, but its tendency to hallucinate presents attackers with an opportunity. Here's what we learned.

From https://twitter.com/llm_sec/status/1667573374426701824

  1. People ask LLMs to write code
  2. LLMs recommend imports that don't actually exist
  3. Attackers work out what these imports' names are, and create & upload them with malicious payloads
  4. People using LLM-written code then auto-add malware themselves
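To make step 4 concrete: before installing anything an LLM suggested, it's worth confirming the package even exists on the registry. Below is a minimal sketch in TypeScript, assuming Node 18+ (built-in fetch) and the public npm registry; the same idea applies to pip/PyPI, and the second package name is invented for illustration.

```typescript
// Guard against step 4: verify every package an LLM suggested actually exists
// on the registry before running `npm install`. A 404 means the name is
// unclaimed today – exactly the kind of name an attacker could later register
// with a malicious payload. Assumes Node 18+ (global fetch); the second
// package name below is made up for illustration.

const suggestedPackages = ["express", "left-pad-retry-utils"]; // hypothetical LLM output

async function existsOnNpm(name: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  return res.ok; // 200 = published package, 404 = no such package
}

async function main(): Promise<void> {
  for (const name of suggestedPackages) {
    const ok = await existsOnNpm(name);
    console.log(
      ok
        ? `${name}: exists on npm – still verify it is the package you think it is`
        : `${name}: NOT on npm – likely hallucinated, do not install blindly`
    );
  }
}

main();
```

An existing name isn't automatically safe either, since an attacker may have registered it already; that's where the vetting habits discussed in the comments come in.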

14 comments
  • I wouldn't trust the packages used by an LLM, or any other part of code they write, or any other text response they generate. Everything must be treated skeptically and verified!

    I use ChatGPT daily to assist with my job as a fullstack web developer. If I'm asking it for boilerplate code, I already know exactly what it should look like, and having ChatGPT generate the code and then proofreading it myself is usually a small timesaver. And even if it's not, the lower perceived effort on my part is beneficial for my mental load.

    When I ask it for code where I don't already know the 'right' answer (e.g. refactoring an algorithm to use loops instead of recursion, an example for a library with poor documentation, or creating a new function from scratch), I always write a series of test cases to ensure the code behaves as expected.

    Similarly, if it suggests a library I'm unfamiliar with, I'll check its GitHub or its stats on npmjs to verify it's maintained and commonly used. Though it almost always picks the same library I had picked previously (when one of my previous projects was in a similar situation), probably because those libraries were the most commonly used. I've never experienced a made-up import; however, there were a couple of instances where it made up a function that did not actually exist in the library.
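The "check its GitHub or npm stats" step from this comment can be scripted too. Here's a rough sketch using two public npm endpoints (the registry packument for the last-publish date, and the download-counts API for weekly downloads), assuming Node 18+; "express" is only an example, and what counts as "maintained enough" remains a judgment call.

```typescript
// Scripted version of the "is it maintained and commonly used?" check, using
// two public npm endpoints: the registry packument (last-publish date) and the
// download-counts API (weekly downloads). Assumes Node 18+.

async function vetPackage(name: string): Promise<void> {
  const meta = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  if (!meta.ok) throw new Error(`${name} is not published on npm at all`);

  const packument = (await meta.json()) as { time?: { modified?: string } };
  const lastPublish = packument.time?.modified ?? "unknown";

  const dl = await fetch(`https://api.npmjs.org/downloads/point/last-week/${name}`);
  const { downloads } = (await dl.json()) as { downloads: number };

  console.log(`${name}: last publish ${lastPublish}, ~${downloads} downloads last week`);
  // Numbers alone don't prove safety: still skim the repo, the issue tracker,
  // and whether the package actually does what the LLM claims it does.
}

vetPackage("express").catch(console.error);
```

And the test-case habit from the same comment can be very lightweight. A minimal sketch for the "rewrite recursion as a loop" scenario, with hypothetical function names, comparing the generated rewrite against the original implementation using only node:assert:

```typescript
// Minimal version of "write test cases for LLM output": check a (hypothetical)
// LLM-generated iterative rewrite against the original recursive function on a
// handful of inputs. No test framework needed.
import assert from "node:assert/strict";

// Original, known-good recursive implementation.
function sumDigitsRecursive(n: number): number {
  return n < 10 ? n : (n % 10) + sumDigitsRecursive(Math.floor(n / 10));
}

// Hypothetical loop-based rewrite produced by the LLM, to be verified.
function sumDigitsIterative(n: number): number {
  let total = 0;
  while (n > 0) {
    total += n % 10;
    n = Math.floor(n / 10);
  }
  return total;
}

for (const input of [0, 7, 10, 99, 12345, 900001]) {
  assert.equal(
    sumDigitsIterative(input),
    sumDigitsRecursive(input),
    `mismatch for input ${input}`
  );
}
console.log("iterative rewrite matches the recursive original on every test input");
```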
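Checking the rewrite against the original like this catches behavioural regressions cheaply; it does not, of course, prove the original was correct in the first place.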
