A deep dive into the updated Terms of Service and why creatives are sounding the alarm
What if the files you thought were private (your unreleased songs, design drafts, private photos, legal contracts, or other confidential documents) could be used to train an AI model?
Recently, I received an email from WeTransfer announcing updates to both its Privacy Policy and Terms of Service, including a cryptic new clause in Section 6.3. What’s especially noteworthy is the shift from the company’s previous, minimalist stance: until now, WeTransfer claimed only the limited usage rights needed to transfer files, with no explicit permissions for commercial use, AI training, or derivative works.

This article explains, rather than simply asserts, what Section 6.3 actually means, why its language raised eyebrows, what WeTransfer later clarified, and what creators should do next. We’ll unpack the original license, the public reaction, WeTransfer’s response, and practical steps you can take to protect your work.
1. What Section 6.3 Originally Said (and Why It Mattered)
The new Section 6.3 granted WeTransfer a sweeping license over user content, far broader than most users expected. Starting August 8th, 2025, users would grant “a perpetual, worldwide, non‑exclusive, royalty‑free, transferable license […] including to reproduce, distribute, modify, create derivative works […] to train AI models” (El País). That is not just storage rights; it is ownership-level access. Any file you share could be digested, altered, or incorporated into WeTransfer’s technology stack, including AI systems.
So why did creatives push back so forcefully?
2. The Outcry from Users and Creatives
Professional creators immediately sounded the alarm, worried that sensitive or embargoed content was no longer private. TechRadar noted widespread concern among designers and filmmakers, many warning they would switch platforms or encrypt their files (Fix Gaming Channel). One widely shared reaction put it bluntly:
“Time to stop using @WeTransfer … they’ll own anything you transfer to power AI.”
This wasn’t paranoia; it was a rational reaction. Creatives rely on confidentiality, and the prospect of their work fueling AI models without notice or payment is a real threat. Faced with the backlash, WeTransfer responded quickly.
3. WeTransfer Strikes Back with Clarifications
Within days, WeTransfer revised Section 6.3 and publicly assured users that it was not using their content or data to train AI.
- AlternativeTo reports that the company removed references to “machine learning,” “derivative works,” and “commercializing” user content (AlternativeTo).
- Gadgets360 confirmed that WeTransfer explicitly clarified that user files would not be used to train AI models after the backlash (Gadgets360).
This suggests the original wording was boilerplate overreach, not genuine business intent. But why wasn’t that clear from the start? That question goes to the heart of the issue.
4. Why This Matters - Beyond Fancy Legal Words
Even if unintended, such vagueness in Terms of Service (ToS) sows distrust and sets dangerous precedents. TechFinitive points out that WeTransfer repeatedly emphasizes “Your content is always your content”, yet that reassurance is overshadowed by expansive, fear-inducing license language.
This episode echoes past controversies at Adobe and Dropbox. Broad license language, even if harmless in practice, fuels suspicion and forces users to wade through legalese just to understand what has actually changed.
So, what should you do with this knowledge?
5. Protecting Your Content - Wise Practices Going Forward
Rather than assuming the worst, let’s focus on smart, proactive habits that keep your content safe while preserving flexibility. After all, you might be building the next generation of file-sharing platforms.
Use password protection on files
Password protection gives you a simple, immediate layer of defense, and almost anyone can set it up. Start by encrypting your documents: ZIP archives with strong passwords (or the built-in file protection in tools like Office or macOS) are a simple yet powerful first step (Wired). This adds a layer of protection before your files ever leave your device.
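If you prepare files in a script or from the command line, the same habit is easy to automate. Here is a minimal sketch using the third-party pyzipper library (an assumption on my part; it is not in Python’s standard library) to produce an AES-encrypted ZIP, with placeholder file names and password:

```python
# Minimal sketch: pack a file into an AES-encrypted, password-protected ZIP
# before sharing it. Assumes the third-party "pyzipper" package is installed
# (pip install pyzipper); file names and the password are placeholders.
import pyzipper

def protect(src_path: str, archive_path: str, password: bytes) -> None:
    """Write src_path into a password-protected, AES-encrypted archive."""
    with pyzipper.AESZipFile(archive_path, "w",
                             compression=pyzipper.ZIP_LZMA,
                             encryption=pyzipper.WZ_AES) as zf:
        zf.setpassword(password)
        zf.write(src_path)

protect("draft_track.wav", "draft_track.zip", b"use-a-long-unique-passphrase")
```

The recipient only needs the passphrase (shared over a separate channel) and any archive tool that supports AES-encrypted ZIPs.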
Choose privacy‑first services
Privacy‑first platforms reduce risk from the start, but still feel familiar to users. Opt for platforms that prioritize end-to-end encryption and zero-knowledge policies, like Tresorit, Cryptee, or NordLocker. These tools often include additional safeguards: time-limited links, multi-factor authentication, and transparency about who can access your files.
Be especially cautious with sensitive content
For sensitive content, a hybrid approach means you’re protected even if one layer fails. For high-stakes work, confidential client files, and pre-launch assets, consider combining password protection with a privacy-first service: encrypt before you upload, then use a platform you trust to manage access securely (Suralink).
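For that first layer, you can encrypt locally before anything is uploaded, so the sharing service only ever sees ciphertext. Below is a minimal sketch using the widely used Python cryptography package (assumed installed; file names and the passphrase are illustrative, not a production recipe):

```python
# Minimal sketch of the "encrypt before upload" layer.
# Assumes the third-party "cryptography" package is installed
# (pip install cryptography); file names and the passphrase are placeholders.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_for_upload(path: str, passphrase: str) -> str:
    """Encrypt a file with a key derived from a passphrase; return the new path."""
    salt = os.urandom(16)  # random salt, stored with the ciphertext
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480_000)
    key = base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))
    with open(path, "rb") as src:
        ciphertext = Fernet(key).encrypt(src.read())
    out_path = path + ".enc"
    with open(out_path, "wb") as dst:
        dst.write(salt + ciphertext)  # keep the salt so the key can be re-derived
    return out_path

# Share the returned ".enc" file through whichever platform you trust;
# only someone who knows the passphrase can recover the original.
encrypt_for_upload("client_contract.pdf", "a-long-unique-passphrase")
```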
Think of it as a layered strategy, not paranoia. As creators and future service providers, this approach lets you stay in control, confident, and a step ahead in a world where trust is earned, not given.
To sum up…
- WeTransfer’s Section 6.3 briefly granted it broad rights, covering commercial use, AI/ML training, derivative works, sublicensing, and perpetual access, even after file deletion. That marked a significant shift from its original, minimalist approach.
- While WeTransfer quickly walked back the most alarming claims and clarified that they will not train AI on user content, the incident shines a light on the importance of staying alert to wording changes, even in trusted services.
- Rather than reacting with panic, lean into practical, layered strategies: password-protect files, choose privacy-first platforms, and treat especially sensitive content with extra caution. These steps don’t require drastic measures; they help you stay in control, informed, and strategically prepared.
- Don’t assume reputable platforms have your best interests at heart; review changes, ask hard questions, and safeguard your content wisely.
- The best tool for privacy is knowledge and a willingness to protect what you create.