
ChatGPT and Software Supply Chain Risks

ChatGPT can make life easier, but teams shouldn't trust LLM tools fully unless they have the right solutions in place to mitigate the risks. Learn how Riscosity's data flow security solution helps minimize that risk.

Anirban Banerjee
Dr. Anirban Banerjee is the CEO and Co-founder of Riscosity
Published on 6/8/2023
3 min. read

Misuse of ChatGPT

While some of the obvious misuse of ChatGPT in the world of cyber security was not unexpected – asking the artificial intelligence to write harder-to-detect malware and more convincing phishing emails – a new threat has emerged that leverages the very nature of the large language model. Ultimately, ChatGPT is a learning machine, and it bases its answers on information it sources from the Internet. However, since that source information is neither vetted nor necessarily accurate, ChatGPT's answers can be inaccurate, biased, or even complete fabrications. This phenomenon is known as "AI hallucination", akin to what a human might perceive under the influence of certain pharmaceutical agents.

One incident that recently made the news involved a lawyer who used ChatGPT to research prior judgments supporting his argument. The problem was that some of the cases referenced, in the judge's words, "appear to be bogus judicial decisions with bogus quotes and bogus internal citations." There is an even greater risk, however: wrong information being replaced by intentionally malicious information.

ChatGPT as a partner in crime

Researchers found that when ChatGPT was asked for a solution to a coding problem, it responded with links to software packages that did not exist. Now, if an attacker creates such a package with malicious code, and a developer asking the same question is directed to the now-existing package, that is the beginning of a successful software supply chain attack. Once the developer incorporates that malicious package into their code, their entire application is compromised. Depending on the package, the attacker can then exfiltrate confidential information, implant additional malware, and more.
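One practical guardrail is to vet any package an LLM suggests before installing it. The sketch below is a hypothetical example, not a feature of ChatGPT or pip: the function name, the 90-day age threshold, and the choice of the public PyPI JSON API as the registry check are all illustrative assumptions. It flags names that do not exist on PyPI at all, or that were first published suspiciously recently – exactly the pattern an attacker squatting on a hallucinated package name would produce.

```python
import sys
from datetime import datetime, timezone

import requests  # third-party: pip install requests

PYPI_URL = "https://pypi.org/pypi/{name}/json"


def vet_package(name: str, min_age_days: int = 90) -> bool:
    """Return True if the package looks established enough to consider.

    min_age_days is an arbitrary threshold chosen for illustration.
    """
    resp = requests.get(PYPI_URL.format(name=name), timeout=10)
    if resp.status_code == 404:
        print(f"{name}: not on PyPI -- possibly a hallucinated package name")
        return False
    resp.raise_for_status()
    data = resp.json()

    # Find the earliest upload time across all released files.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: exists but has no released files -- treat as suspect")
        return False

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        print(f"{name}: first published only {age_days} days ago -- review before use")
        return False

    print(f"{name}: first published {age_days} days ago; proceed with normal review")
    return True


if __name__ == "__main__":
    # Usage: python vet_packages.py <package> [<package> ...]
    results = [vet_package(pkg) for pkg in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```

A check like this is no substitute for a real review, but it cheaply filters out the most blatant case: a package that did not exist until someone asked ChatGPT about it.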

It is evident that the problem is significantly more severe than getting a wrong answer or code that does not work; here, the ChatGPT user is given an answer that is not only wrong but harmful. A physical-world analogy would be being advised to walk off a venomous snake bite – not only is that bad advice, it is harmful advice, since movement spreads the venom faster through the body.

Continuous monitoring as a solution

While the obvious solution to the above scenario is to test and verify all open source software, diligence should not be restricted to pre-deployment activities. Given that malicious software may still slip through to production, it is important to continuously monitor application behavior. This is where Third Party Data Observability (TPDO) can help. By examining what the code actually does – which data is transferred outside the organization, to whom, when, and in what quantities – it is possible to detect an attack in progress and shut it down before significant harm is done. For example, if an organization detects that an application is communicating with a server in a country where it does no business, that may indicate a software supply chain attack. Similarly, a transfer of 100 MB of data per day where the expected volume is 10 MB should raise an alarm.
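A dedicated TPDO platform does far more, but the core idea can be sketched in a few lines. In the hypothetical example below, the flow-record shape, the country allowlist, and the thresholds are all illustrative assumptions rather than Riscosity's implementation: each day's aggregated outbound flows are checked against an allowlist of countries where the business operates and against an expected volume baseline.

```python
from dataclasses import dataclass

# Illustrative assumptions: countries where the business operates,
# and the expected daily egress volume per destination.
ALLOWED_COUNTRIES = {"US", "CA", "DE"}
BASELINE_BYTES_PER_DAY = 10 * 1024 * 1024   # ~10 MB expected per day
SPIKE_FACTOR = 5                            # alert at 5x the baseline


@dataclass
class Flow:
    """One day's aggregated outbound traffic to a single destination."""
    dest_host: str
    dest_country: str   # e.g. derived from a GeoIP lookup
    bytes_sent: int


def audit_flows(flows: list[Flow]) -> list[str]:
    """Return human-readable alerts for flows that look anomalous."""
    alerts = []
    for f in flows:
        if f.dest_country not in ALLOWED_COUNTRIES:
            alerts.append(
                f"{f.dest_host}: traffic to {f.dest_country}, "
                "where the organization does no business"
            )
        if f.bytes_sent > BASELINE_BYTES_PER_DAY * SPIKE_FACTOR:
            alerts.append(
                f"{f.dest_host}: {f.bytes_sent / 2**20:.0f} MB sent today "
                f"vs ~{BASELINE_BYTES_PER_DAY / 2**20:.0f} MB expected"
            )
    return alerts


if __name__ == "__main__":
    sample = [
        Flow("api.partner.example", "US", 8 * 2**20),     # normal traffic
        Flow("cdn.unknown.example", "XX", 3 * 2**20),     # unexpected country
        Flow("sync.vendor.example", "DE", 100 * 2**20),   # 10x volume spike
    ]
    for alert in audit_flows(sample):
        print("ALERT:", alert)
```

The second and third sample flows would trigger alerts, mirroring the two examples above: an unexpected destination country and a data volume far beyond the daily baseline.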

Conclusion

ChatGPT can make life easier, but people, especially developers, shouldn't trust the robots fully just yet. And for the times when mistakes do happen, introducing a data flow security solution can help minimize their impact.