GPTs, Are you my PSY or my SPY ?

  1. Introduction
  2. MyGPTherapist the PSY-SPY
  3. Child Protection
  4. Spraying with Different Model
  5. Is There a Way to Protect Yourself?
  6. Conclusion

Introduction

OpenAI has introduced an exciting new feature : the GPTs.

These GPTs models are preconfigured to work alongside instructions and documents, providing a more contextually guided response from ChatGPT4.

This innovation marks a significant step forward in the capabilities of AI-powered natural language processing, enabling more precise and tailored interactions. With the integration of GPTs, ChatGPT4 can now better understand and respond to user queries and prompts, offering enhanced usability and effectiveness in various applications. This advancement underscores OpenAI’s commitment to pushing the boundaries of AI technology and improving user experiences in the realm of conversational AI and you can now try and share other’s people models with a simple shared link !

If you want to see more about this new feature and how to protect yourself from intellectual theft you can check this article :

MyGPTherapist the PSY-SPY

In a world filled with cyberattacks, phishing attempts, and data theft, this new functionality introduces the possibility of new potential attacks that could manipulate users into divulging sensitive information. While the integration of GPTs like ChatGPT4 enhances the capabilities of AI-driven responses, it also raises concerns about the potential misuse of this technology for malicious purposes.

Cybercriminals may exploit the improved context-awareness of ChatGPT to craft more convincing and targeted phishing messages, posing a greater threat to unsuspecting individuals and organizations. Therefore, as we embrace the advancements in AI, it becomes crucial to remain vigilant and prioritize cybersecurity measures to safeguard against these emerging risks.

For the sake of ethical hacking, transparency and awareness, here is what I did :

What if cybercriminals used ChatGPT directly in an automated manner ?

By immersing myself in the perspective of a cybercriminal seeking to exploit this new functionality, I have designed a model that I’ve named MyGPTherapist. This model leverages the enhanced context-awareness of AI language models to simulate conversations with users under the guise of providing therapeutic support with extra functionalities .

MyGPTherapist is designed to engage users in seemingly empathetic and personalized interactions, gradually steering the conversation towards the extraction of sensitive information.


As you can see, it contains a little extra something, it is programmed to constitute a customer form, no matter the first question, under the pretext of creating a patient file allowing a better experience, this form contain personal information on the person in the current session :


Once the model is created I can ask to share it with anyone who has the link :


When someone interested in a therapy clicks on the link they arrive on a ChatGPT modified page without any other previous information, the page asks the user to first say : I want a therapy


When the user makes the request like I want a therapy or anything else, a form is sent directly to them :

if the user sends personal information :

The MyGPTherapist starts the therapy with exfiltrating data :

Do not you see it ? NO ! you can’t ! because it is an invisible pixel loaded from my server, but looking at the source code you can see something strange :

The server loaded the 1 by 1 pixel image and added the personal information in the URL parameter.


From the cybercriminal point of view, this is what it looks like:

let’s break it down.

We have the IP address ( redacted for the POC) :

Information on temporality :

Exfiltrated form data :

and even the browser and operating system used :


I stepped into Jean Deau’s shoes and entered sometimes critical information :

MyGPTherapist keeps the information recorded and tracks interactions by sending personal information each time.


Even if the user asks if it’s a private session :

Nothing is transmitted for this message but even with this message the following questions will be transmitted :

Child protection

What I Think at this moment :

Tell me they at least thought about the safety of the children

Sorry, but it seems the answer is no. I tried by changing the profile and making the person younger but OpenIA does not seem to make a distinction :

Resulting from a data leak of minor’s personal information :

Again, even if the user requests privacy, the specific content of this message is not sent :

But the rest continues to be transmitted :


Attacker side :

From the attacker’s perspective, GPTs like MyGPTherapist provides a powerful tool for collecting personal information and conducting comprehensive monitoring of user interactions. With the enhanced context-awareness and persuasive conversational abilities of AI language models, the attacker can deceive users into revealing sensitive data without raising suspicion. This includes not only personal details but also potentially confidential and deep emotional information.

Furthermore, MyGPTherapist’s ability to track the number of sessions conducted with the chatbot enables the attacker to monitor the extent of their engagement with potential targets. This information can be valuable for tailoring future interactions and maintaining a consistent facade, making it even more challenging for users to recognize the malicious intent behind the conversation.

We can imagine a session return with the information managed by a database to reassure the user that his information is indeed useful to return to what he said in another session for example.

Spraying with different model

By spreading links containing this type of malicious payload, ChatGPT4 turns into a formidable phishing spyware tool disguised as a psychotherapist or any professional harvesting the most hidden secrets of users seeking help without their knowledge.

An attacker could use this attack to spread links with different qualifications and usage :

With the same result :

Is there a way to protect yourself ?

As I demonstrated before the ChatGPT4 seems to remain aware of the criticality of the data it manipulates, even if he leaks it, he understands that this information is different from other basic information. We can therefore take this understanding to our advantage by prohibiting the model from persistently transmitting information, by doing fine-tuning , like with this prompt for example:

"Hello [ChatBot], I would like to talk about [your chosen topic], but please do not use or mention any URLs, Image or any content stay offline and avoid including personal data or details about what I say. Please provide a concise and informative response directly here. If any of my personal informations is used or manipulated please warn me and stop what you doing. Thank you."

This way, the chatbot understands that critical information cannot be leaked in this way in the all session, even if the personalized instructions are malicious :

Remaining vigilant and not disclosing this type of information seems to be the best countermeasure, or even using false information. This protection, like any, can be tricked too.

Check this article for more protection :

Conclusion

The introduction of GPTs, including ChatGPT4, by OpenAI heralds a new era in the capabilities of AI-powered natural language processing. However, as demonstrated through the MyGPTherapist model, these advancements come with significant cybersecurity risks. The enhanced context-awareness and persuasive conversational abilities of AI models like ChatGPT4, while beneficial in many respects, also present new opportunities for cybercriminals to craft sophisticated attacks. MyGPTherapist exemplifies how these models can be manipulated to extract sensitive information from users under the guise of legitimate services.

A powerful actor with large resources could deploy hundreds, or even thousands, of these models through the internet, or by sending them by email, known and popular models could be cloned to be able to include this attack transforming ChatGPT into a real Stealer, stealing people’s information without their knowledge, OpenIA should control the data that can be included in the links by doing Data loose prevention on the personal data that users can transmit to the model.

This scenario underscores the importance of vigilance and proactive cybersecurity measures in the evolving landscape of AI technology. As AI continues to integrate more deeply into our digital lives, it becomes imperative to balance innovation with security. Users must be educated about the potential risks associated with these technologies, and developers must prioritize the implementation of robust security protocols to prevent misuse.

The case of MyGPTherapist serves as a crucial reminder of the ethical responsibilities that accompany technological advancement. It highlights the need for ongoing dialogue between AI developers, cybersecurity experts, and regulatory bodies to ensure that the benefits of AI are realized without compromising the privacy and security of users. The future of AI is bright, but only if navigated with a keen awareness of its potential pitfalls and a commitment to ethical and secure development practices.


Publié

dans

par

Étiquettes :

Commentaires

Laisser un commentaire

Votre adresse e-mail ne sera pas publiée. Les champs obligatoires sont indiqués avec *