Interest in artificial intelligence spiked at the end of November 2022, when the startup OpenAI launched ChatGPT – a new chat interface for its Large Language Model (LLM).
Although ChatGPT has many interesting uses, cyber security experts are already at work studying the critical issues such a technology could raise for cybersecurity, and the use hackers may make of it. Let’s look at them together.
Hackers using ChatGPT to create “infostealers”
As early as late December 2022 – barely a month after the launch of ChatGPT – a user on an underground hacking forum advanced the idea of using ChatGPT to quickly and easily generate code for malware capable of stealing sensitive information from unsuspecting users.
So-called infostealers typically enter a system through unsafe software downloads or visits to malicious pages.
The hacker reportedly posted a screenshot on the forum showing lines of code created with ChatGPT. The goal? To infiltrate PCs as quietly as possible, then search for and steal information from Word documents, PDFs, and even images.
Thanks to ChatGPT’s artificial intelligence, which can accurately read and interpret the information in these documents, scamming people and stealing their data could become even easier.
Hackers using ChatGPT to create encryption tools
Encrypting files is not always done to keep them safe. A scam that has gained momentum in recent years – ransomware – involves “taking hostage” a user’s documents, photos, and other files and then demanding a ransom.
How? By getting into the PC through malware – perhaps contained in an insecure program downloaded from an unsavory source. This malware encrypts the files, changing their extension and making them unusable.
Often, this procedure is accompanied by the release of a .txt file in which the criminal explains that, to decrypt the files, the victim must pay a disproportionate sum of money – only for the criminal to disappear into thin air after receiving payment, leaving the user with hundreds of useless files.
Also in late December 2022, another hacker posted a screenshot on a forum in which he claimed to have created a working script by taking inspiration from OpenAI’s technology.
Facilitation of illegal activities on the dark web thanks to ChatGPT
A final example of the potential threats posed by hackers’ use of ChatGPT is, for now, only a hypothesis – but one that seems to be becoming more and more concrete.
While the first two examples show how ChatGPT can help cybercriminals write code for potential malware, this case concerns a more pragmatic use of the software in fraudulent activities.
That is, its use on the dark web. Thanks to ChatGPT, it might be possible to create entire illegal marketplaces to facilitate fraudulent activities in the most hidden corners of the internet – without any possibility of tracking.
From setting up the e-commerce side to integrating the most sophisticated and untraceable payment methods – namely blockchain and cryptocurrencies – the whole system could be easily created through ChatGPT.
The threat to the creative environment: will artificial intelligences steal our jobs?
Although this last threat is not strictly a security issue, it is part of the concerns expressed by those in the creative industries, who see their work as “potentially” being stolen by artificial intelligences.
Artists, writers, videomakers, and the entire creative sector – especially in the digital realm – are watching increasingly advanced artificial intelligences create works of art, manuscripts, and lines of code in a matter of minutes. And the threat to these professions seems real.
Moreover, the recent news that artificial intelligences use crawlers to scan the web for inspiration has caused outrage. Many artists have complained that the same artificial intelligences that threaten their work can “steal” unique elements from their creations.
In conclusion, is ChatGPT dangerous?
The answer is far from straightforward. The simplest one, after analyzing the cases above, is yes. But the same could be said of any software used to create code – after all, malware existed long before artificial intelligences of this kind.
Much depends on how different countries go about spreading digital literacy at all levels. The Internet is scary when one is unfamiliar with it and navigates it blindly. But for those well versed in its threats, there is little to fear.
And as for intellectual property in the arts, many bring up the example of Canva – a piece of software that, at its launch, made many graphic designers turn up their noses, concerned that such a technology would steal their work.
Canva turned 10 years old last year, and the world still needs graphic designers, creatives, and artists. The same perspective should probably be applied to ChatGPT, which will be able to create flawless text and code, but will always need human tweaking to give it soul and form.