We have seen a lot of enhancements to ChatGPT from OpenAI lately. However, a new discovery may lead users to question the privacy and safety of their data in OpenAI's ChatGPT Mac app: the app has been storing all of your conversations with the chatbot in plain text.
Highlights:
- The ChatGPT app on macOS was found to store conversations in plain text.
- Those plain-text files were accessible to third-party apps and to anyone with access to the Mac.
- OpenAI has since resolved the issue; conversation data is now encrypted.
ChatGPT Mac App is Storing Your Chats in Plain Text
If you frequently use the ChatGPT Mac app, it is high time you updated it. Since its launch, OpenAI's first desktop app for ChatGPT had been saving all of your chats on your Mac in unencrypted plain text.
As Pedro José Pereira Vieito demonstrated on Threads, another application could open those conversation files and display the plain text of your chats immediately after they took place, simply because the files were readable.
Additionally, Pedro noted that the ChatGPT app did not use the default macOS sandbox, which safeguards user and app data. That is hardly reassuring when data this sensitive is involved.
This implies that every discussion you’ve ever had with OpenAI’s chatbot through the Mac app can be read by other apps on your Mac or by anybody who gains access to your computer.
That potentially covers everything in your ChatGPT conversations: personal details, sensitive information, and much more.
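To illustrate the risk, here is a minimal Swift sketch of how any process running under your user account (and without a sandbox of its own) could read such an unencrypted file. The file path is a hypothetical placeholder, not the actual location used by the ChatGPT app.

```swift
import Foundation

// Hypothetical path for illustration only; the real location used by the
// ChatGPT app is not reproduced here.
let conversationsFile = FileManager.default
    .homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/ChatGPT/conversations.json")

// A plain-text file has no protection beyond ordinary file permissions,
// so any process running as the same user can read it directly.
if let contents = try? String(contentsOf: conversationsFile, encoding: .utf8) {
    print(contents)   // the full chat history, readable as-is
} else {
    print("File not found or not readable")
}
```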
Why did this issue arise?
Vieito explained that these safeguards were absent because of how the application was built: OpenAI opted out of the sandbox and stored the conversations as plain text in a directory that other software can read, leaving all of those protections disabled.
Sandboxing is a standard security mechanism that runs an app in an isolated, secure environment on a device. It lets developers keep app data and user information away from other apps and encrypt that data while it sits on the user's device.
In a separate post, the developer highlighted that ever since macOS Mojave shipped in 2018, macOS has blocked access to private user data, including other apps' sandboxed data, by default. As a result, any app running on the operating system needs explicit user permission before it can access data belonging to another app.
This makes it clear why ChatGPT's Mac app storing conversations in plain text, outside those protections, left them so exposed.
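As a rough, illustrative check of the difference: a sandboxed Mac app is confined to its own container under ~/Library/Containers, while an unsandboxed app sees the full user home directory. The snippet below is an assumed way one might verify this, not anything published by OpenAI or Vieito.

```swift
import Foundation

// A sandboxed macOS app sees a home directory inside its own container
// (~/Library/Containers/<bundle id>/Data); an unsandboxed app sees the
// real user home. Checking which one we get is an informal way to tell
// whether the App Sandbox is active for the current process.
let home = NSHomeDirectory()

if home.contains("/Library/Containers/") {
    print("Sandboxed; confined to container: \(home)")
} else {
    print("Not sandboxed; full user home visible: \(home)")
}
```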
Has OpenAI fixed this?
After The Verge contacted the company about the problem, OpenAI released an update that it says encrypts the conversations.
A representative for OpenAI told The Verge:
“We are aware of this issue and have shipped a new version of the application which encrypts these conversations. We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.”
Taya Christianson, OpenAI spokesperson
OpenAI fixed the problem promptly after being made aware of it. Thanks to the recent update, your conversations are now encrypted and no longer kept in plain text.
If you have automatic updates enabled, you may already be running the patched version of the ChatGPT macOS app. If not, you can update it manually:
- Launch the ChatGPT macOS app.
- In the menu bar, select the ChatGPT option.
- Click on Check for Updates.
- To update to the most recent version of the app, click Install.
With this update, all of your conversations are encrypted, and no third-party app or tool can read them as plain text.
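OpenAI has not said exactly how the new version protects the files, but encrypting conversations at rest typically looks something like the following Swift sketch using Apple's CryptoKit. It is an illustration under assumed names (saveEncrypted, loadDecrypted), not OpenAI's implementation; a real app would keep the key in the Keychain rather than in memory.

```swift
import Foundation
import CryptoKit

// Illustration only: encrypt a conversation before writing it to disk so
// other apps reading the file see ciphertext rather than plain text.
// A real app would store the key in the Keychain, not hold it in memory.
let key = SymmetricKey(size: .bits256)

func saveEncrypted(_ conversation: String, to url: URL) throws {
    let plaintext = Data(conversation.utf8)
    let sealed = try AES.GCM.seal(plaintext, using: key)
    // sealed.combined packs nonce + ciphertext + authentication tag together.
    try sealed.combined!.write(to: url)
}

func loadDecrypted(from url: URL) throws -> String {
    let data = try Data(contentsOf: url)
    let box = try AES.GCM.SealedBox(combined: data)
    let plaintext = try AES.GCM.open(box, using: key)
    return String(decoding: plaintext, as: UTF8.self)
}
```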
Implications for the Future
Although the problem has been solved, it is a useful reminder that the applications we use every day can put our private data at risk. Users should take a more active role, staying informed about the security measures of the apps they rely on and keeping those apps up to date.
- Increased Vigilance on Privacy: This incident underscores the importance of stringent data handling and privacy measures. Users will likely demand better transparency and assurance about how their data is stored and managed in future software releases.
- Stronger Data Encryption: Developers might implement stronger encryption techniques to ensure that user data is not easily accessible or readable by other applications, enhancing overall security.
- Enhanced App Testing: More rigorous testing protocols, especially for data security vulnerabilities, will be essential before launching updates or new features.
- User Control Over Data: There may be a shift towards giving users more control over their data, such as improved options for data management and deletion within apps.
- Regulatory Compliance: Future developments will likely align more closely with data protection regulations (like GDPR or CCPA) to avoid similar issues and build user trust.
As these implications play out, it will be interesting to see how the relationship between people and AI tools continues to grow, especially where sensitive matters such as personal information are concerned.
In another recent development, ChatGPT's voice mode was found to be vulnerable to a jailbreak technique called 'VoiceJailbreak', which coaxes it into answering questions on prohibited topics.
Conclusion
In the world of ever-evolving generative AI technologies, it is normal for major AI companies to make mistakes. Sometimes, however, those mistakes can cost us dearly. All we can do is stay secure and think twice before sharing anything too personal or sensitive with today's chatbots.