A serious security flaw has been discovered in ChatGPT that could allow attackers to embed malicious SVG (Scalable Vector Graphics) and image files into shared chats. This vulnerability, tracked as CVE-2025-43714, affects ChatGPT through March 30, 2025, and raises major concerns about phishing and user safety.
How the Vulnerability Works
Security researchers found that ChatGPT improperly renders SVG files in shared conversations. Instead of showing the SVG code as plain text in code blocks, the system renders it directly in the browser—creating a stored cross-site scripting (XSS) vulnerability.
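In practice, the safe behavior the researchers describe amounts to HTML-escaping user-supplied markup before it reaches the page, so the browser displays the SVG source as text instead of executing it. A minimal Python sketch (my illustration of the general technique, not ChatGPT's actual rendering code):

```python
from html import escape

def render_chat_message_safe(user_content: str) -> str:
    """Escape user-supplied markup so the browser shows it as text
    inside a code block, rather than interpreting it as live HTML/SVG."""
    return "<pre><code>" + escape(user_content) + "</code></pre>"

svg = '<svg xmlns="http://www.w3.org/2000/svg" onload="alert(1)"></svg>'

# Unsafe pattern: concatenating the raw string hands live markup to the
# browser, which is the stored-XSS condition described above.
unsafe_html = "<div>" + svg + "</div>"

# Safe pattern: angle brackets and quotes become entities, so nothing runs.
safe_html = render_chat_message_safe(svg)
print(safe_html)
```

The escaped output still shows the reader exactly what the SVG contained; it just can no longer execute.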
“The ChatGPT system through 2025-03-30 performs inline rendering of SVG documents… enabling HTML injection within most modern graphical web browsers,” explained researcher zer0dac.
This means an attacker could embed harmful content that executes in the user’s browser when they view a shared chat. Unlike JPG or PNG images, SVGs can include embedded HTML and JavaScript, making them more dangerous if not handled correctly.
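To see why SVG needs special handling, consider a simple inspection pass that flags the scriptable parts of an SVG document. This is an illustrative denylist sketch, not a production sanitizer (real sanitizers such as DOMPurify use far broader, allowlist-based rules):

```python
import xml.etree.ElementTree as ET

# Elements that let an SVG carry active or embedded-HTML content.
DANGEROUS_TAGS = {"script", "foreignObject"}

def has_active_content(svg_source: str) -> bool:
    """Return True if the SVG contains scriptable elements or
    on* event-handler attributes (e.g. onload)."""
    root = ET.fromstring(svg_source)
    for elem in root.iter():
        # ElementTree prefixes tags with their namespace: "{uri}tag"
        local = elem.tag.rsplit("}", 1)[-1]
        if local in DANGEROUS_TAGS:
            return True
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True
    return False

benign = '<svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>'
malicious = ('<svg xmlns="http://www.w3.org/2000/svg" '
             'onload="alert(1)"><script>alert(2)</script></svg>')

print(has_active_content(benign))     # → False
print(has_active_content(malicious))  # → True
```

A plain JPG or PNG has no equivalent of `<script>` or `onload`, which is why rendering those inline is comparatively safe.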
Real-World Risks
The vulnerability could allow attackers to display phishing messages that look legitimate or trick users into clicking harmful links. Because SVG supports animation, a malicious file could even display flashing content designed to harm photosensitive users.
Experts note that even without JavaScript, visual manipulation alone can mislead or harm users, especially those unfamiliar with technical details.
OpenAI’s Response
OpenAI reportedly responded by temporarily disabling the chat link-sharing feature after the issue was reported. However, a permanent fix that changes how SVG content is rendered is still in progress.
The company has not yet issued a full patch, so users should remain cautious when opening shared ChatGPT conversations from unknown sources.
Expert Recommendations
- Do not click on shared ChatGPT links from untrusted users.
- Be cautious of unusual images or visual effects in shared chats.
- Use up-to-date browsers and security software that can help block unsafe scripts.
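On the platform side, services that serve user-supplied SVG files can also harden delivery with standard HTTP headers. The header names and values below are standard HTTP; applying them to SVG delivery is a suggested mitigation on my part, not a documented OpenAI fix:

```python
# Response headers a service might attach when serving user-uploaded SVGs.
SVG_RESPONSE_HEADERS = {
    # Strip scripting from anything the browser renders from this response.
    "Content-Security-Policy": "sandbox; script-src 'none'",
    # Prevent MIME sniffing back to an executable type.
    "X-Content-Type-Options": "nosniff",
    # Force download instead of inline rendering in the page.
    "Content-Disposition": 'attachment; filename="image.svg"',
}

for name, value in SVG_RESPONSE_HEADERS.items():
    print(f"{name}: {value}")
```

Any one of these would have blunted the inline-rendering behavior the researchers describe; together they provide defense in depth.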
Security researchers emphasize the need to secure AI chat platforms like ChatGPT against traditional web vulnerabilities. As these tools become more common in daily work and communication, the risks from even simple features like image sharing can become serious threats.