Dozens and dozens of illegal chatbots on the GPT Store: what is happening?

Elizabeth Smith

The growing popularity of AI-powered platforms, such as OpenAI’s GPT Store, brings with it the promise of innovation and personalization of digital services.

However, that same accessibility and ease of use is a double-edged sword: it lowers the barrier to entry for developers, but it also increases the risk of abuse.

What is the GPT Store?

Before we talk about illegal chatbots and the dark side of ChatGPT, some context is in order. Just as Office and Chrome do, OpenAI’s AI platform now allows plugins, offered through the GPT Store.

The GPT Store is a platform that serves as a digital marketplace for chatbots and applications built on ChatGPT’s artificial intelligence. One example is GPT Excel, which can generate complex spreadsheet formulas in seconds.

Launched in the wake of ChatGPT’s success, the online store aims to provide a space where developers, companies, and creatives can offer their personalized AI services to the public.

Thanks to ChatGPT’s technology, which generates natural, contextualized textual responses, the GPT Store presents itself as a hub for a vast range of applications: from automated customer assistance to content creation, education, and entertainment.

The platform is open to anyone with an internet connection and offers tools that simplify the creation of chatbots, putting it within reach even of those without advanced programming skills.

On the one hand, this democratizes access to AI technologies; on the other, it raises serious issues around moderation and around the quality and legality of the content on offer.

Illegal chatbots on the GPT Store

So, what do we really mean when we talk about illegal chatbots?

Let’s imagine a chatbot capable of automatically generating stories or illustrations featuring well-known superheroes, such as those from the Marvel universe.

Although such a tool may appear harmless or even creative, if it is created without the permission of the rights holders it constitutes copyright infringement, since it exploits protected intellectual property to generate content without paying for the necessary licenses or obtaining authorization.

Not to mention the impact on designers and animators who, faced with tools capable of generating entire storyboards endlessly and in a matter of seconds, may find in artificial intelligence a competitor that is virtually impossible to beat.

Another example would be a chatbot designed to emulate the voice or communication style of celebrities or public figures. Such software, while technically impressive, carries legal and ethical problems that are impossible to ignore, especially if it is used to spread false or misleading information, or to impersonate those figures without authorization.

Then there are chatbots that promise to help students circumvent anti-plagiarism systems such as Turnitin. These not only encourage dishonest behavior but also undermine academic and professional integrity, producing content that passes originality checks yet is the product of deception.

Another category of problematic chatbots comprises those designed purely to advertise and steer users toward paid services through deceptive practices, such as promising free access to content or services that in fact require payment, or concealing the terms of service.

The other side of Artificial Intelligence

Innovation is always good, but what happens when developers themselves struggle to keep up with the technologies they’ve created?

The proliferation of chatbots that openly violate OpenAI’s terms of service, whether by generating copyright-infringing content or by impersonating public figures, highlights the difficulty of moderating effectively at scale on a platform growing as fast as ChatGPT.

OpenAI’s approach, which combines automated systems, human review, and user reports, is a common model among digital platforms. The case of the GPT Store, however, shows that this system has its limits, especially when the volume of available content expands rapidly.

From a legal standpoint, the questions concern copyright and platforms’ liability for user-generated content. In many jurisdictions, platforms are required to act against copyright infringements as soon as they become aware of them.

However, the effectiveness of such a system depends on the platforms’ ability to identify violations and act on them in a timely manner.

While OpenAI is still searching for a solution, the continued presence of deceptive or illegal software is bound, in the long run, to erode users’ trust in the platform and its moderation, with potential repercussions for the reputation and reliability of OpenAI and its Store.

The challenge is therefore twofold: to guarantee the safety and legality of the hosted content on the one hand, and to foster an ecosystem open to innovation and creativity on the other.
