
OpenAI Protects Its Platform, Not Its Subscribers



The uncomfortable truth about who ChatGPT really serves



By Dr. Wil Rodríguez

For TOCSIN Magazine



You’ve probably used ChatGPT. Maybe you even pay for it—$20 a month for Plus, perhaps $200 for Pro. You think you’re the customer, right? You’re paying good money for a service that helps you write, think, create, and solve problems.


Think again.


What if I told you that the moment you hit “send” on that innocent question about your marriage problems, your mental health struggles, or your business strategy, you’ve just created a permanent record that OpenAI can—and will—hand over to lawyers, governments, and anyone else with a subpoena?


What if I told you that even though you’re paying premium prices, OpenAI will secretly switch you to a different, dumbed-down AI model the moment you ask something they don’t like—and you can’t turn that off?


What if I told you that the CEO himself admits they haven’t figured out how to protect your privacy, but they’re charging you anyway?


Welcome to the real world of OpenAI. Where you’re not the customer. You’re the product. And your data? That’s just inventory.



The Data Breach They Blamed on Someone Else


March 2023. You’re a paying ChatGPT Plus subscriber. You trust the company with your credit card, your conversations, your ideas. Then boom—a bug exposes the payment details of 1.2% of Plus subscribers (thousands of people). Names. Email addresses. Billing addresses. The last four digits and expiration dates of credit cards. And for a nine-hour window, strangers could see each other’s conversation titles.


How did CEO Sam Altman respond? He blamed an “open source library bug.” The flaw was in the Redis client that OpenAI itself chose and deployed, but apparently that’s not OpenAI’s responsibility. Just some free software they used. Nothing to see here, folks.


But here’s the thing: when you’re building what you claim will be “artificial general intelligence”—an AI smarter than humans—maybe you should be able to secure a basic credit card database. Just saying.


Imagine if your bank said, “Sorry we lost your money, but hey, the vault manufacturer made a bad lock.” Would you accept that? Of course not. But with OpenAI, we’re supposed to shrug and move on.



Your $20 Doesn’t Buy You Control


September 2025. You’re still paying your monthly fee. You’ve gotten used to ChatGPT. Maybe you depend on it for work. Then OpenAI rolls out “safety guardrails” that sound nice in a press release but mean something very different in practice.


Now, when you ask certain questions—questions they’ve decided are “sensitive”—ChatGPT secretly switches you to a different, more cautious model. You don’t get to choose. You don’t get a warning. You just get a different AI, one that’s been lobotomized to protect OpenAI from lawsuits.


Paying subscribers started screaming about it online: “Adults deserve to choose the model that fits their workflow, context, and risk tolerance.” They called it what it is: “silent overrides” and “secret safety routers.”


You’re paying premium prices for a premium product, but OpenAI treats you like a child who can’t be trusted with sharp objects. And you can’t turn it off. At all.
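

Want to see how trivially a “silent override” can be built? Here’s a minimal sketch in Python. To be clear: every name in it is hypothetical, since OpenAI has never published its router. This illustrates the shape of the mechanism subscribers describe, not the company’s actual code.

```python
# Hypothetical sketch of a server-side "safety router".
# None of these names come from OpenAI; they illustrate the
# mechanism subscribers describe, not any real implementation.

KEYWORDS = {
    "self_harm": ["suicide", "hurt myself"],
    "medical":   ["diagnosis", "symptoms"],
    "legal":     ["lawsuit", "liability"],
}

def looks_sensitive(prompt: str) -> bool:
    """Stand-in for a topic classifier. A real system would use a
    trained model; keyword matching keeps the sketch readable."""
    lowered = prompt.lower()
    return any(word in lowered for words in KEYWORDS.values() for word in words)

def route(prompt: str, requested_model: str) -> str:
    """Return the model that actually answers. Note what is absent:
    no notification to the user, no opt-out parameter."""
    if looks_sensitive(prompt):
        return "restricted-safety-model"  # the silent override
    return requested_model                # the model the user paid for

print(route("Help me plan my week", "premium-model"))
# -> premium-model
print(route("I keep thinking about suicide", "premium-model"))
# -> restricted-safety-model
```

Notice what this design makes structurally impossible: the user never learns the swap happened, and there is no parameter to refuse it.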


This isn’t about safety. It’s about liability. OpenAI is protecting OpenAI—not you.



Your Secrets Aren’t Secret (And They Know It)


Here’s where it gets really disturbing.


People tell ChatGPT everything. Their depression. Their marriages falling apart. The medical symptoms they’re too embarrassed to ask a doctor about. Their business plans. Their political views. Their fears, dreams, and midnight thoughts.


They think they’re talking to a machine. Private. Confidential. Safe.


They’re wrong.


Sam Altman himself—the CEO—admitted it: there’s no legal confidentiality when you talk to ChatGPT. No doctor-patient privilege. No attorney-client protection. If a court demands your conversations, OpenAI has to hand them over. And Altman’s response? He called the situation “very screwed up” but said they “haven’t figured that out yet.”


Let that sink in. They haven’t figured it out. But they’re still charging you. Still collecting your data. Still letting you pour your heart out to their chatbot, knowing full well that a lawyer, a government agency, or a corporation could demand to read everything you’ve ever said.


Would you tell your deepest secrets to someone who openly admits they’ll snitch on you if a lawyer asks? No? Then why are you telling ChatGPT?



The New York Times Wants Your Deleted Chats Forever


When The New York Times sued OpenAI for copyright infringement, it made a chilling demand, and a federal court granted it: OpenAI must preserve all user chat logs indefinitely. Including your deleted ChatGPT conversations. Including everything you thought you’d erased.


OpenAI claims this data is “stored separately in a secure system” accessible only to a “small, audited legal and security team.”


Feel better? I don’t.


Because here’s the reality: your “deleted” conversations aren’t deleted. They’re evidence in a corporate lawsuit. And if OpenAI loses that case, what happens next? Will they be forced to hand over millions of users’ private conversations to prove they didn’t steal copyrighted material?


You deleted those messages because you wanted them gone. OpenAI kept them because lawyers told them to. Whose interests are they protecting?
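

A note for the technically curious: in most production systems, “delete” means soft delete. A flag flips; the bytes stay on disk. And once a litigation hold lands, even the eventual purge job skips your data. The sketch below is generic industry practice, assumed here for illustration; OpenAI’s actual schema is not public.

```python
# Generic illustration of soft deletion plus a litigation hold.
# Standard industry practice, assumed for illustration; OpenAI's
# real schema is not public.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Conversation:
    user_id: str
    text: str
    deleted_at: datetime | None = None  # the delete button sets a timestamp
    legal_hold: bool = False            # set when a preservation order lands

def user_deletes(convo: Conversation) -> None:
    """What 'delete' actually does: the row stays on disk."""
    convo.deleted_at = datetime.now(timezone.utc)

def purge(rows: list[Conversation]) -> list[Conversation]:
    """The background job that would eventually erase soft-deleted rows.
    A legal hold exempts a row from purging, indefinitely."""
    return [r for r in rows if r.legal_hold or r.deleted_at is None]

rows = [Conversation("u1", "my divorce plans"),
        Conversation("u1", "a grocery list")]
rows[0].legal_hold = True  # the court order arrives first
user_deletes(rows[0])      # the user hits delete and believes it worked
rows = purge(rows)
print(len(rows))           # 2: the "deleted" conversation survives the purge
```

Under a scheme like this, your delete button is a request. A court order outranks it.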



Countries Are Banning ChatGPT—And There’s a Reason


Italy’s data protection authority temporarily banned ChatGPT in 2023. Why? Because it found “no legal basis” for OpenAI’s “massive collection and processing of personal data.”


Canada launched a formal investigation into OpenAI for “collecting, using, and disclosing personal information without consent.”


These aren’t authoritarian regimes trying to suppress technology. These are Western democracies with privacy laws, and they looked at OpenAI’s practices and said, “Hell no.”


But in the U.S.? We just keep swiping our credit cards.



When They Remove Features You Paid For


Remember “Browse with Bing”? The feature ChatGPT Plus subscribers paid for that let the AI search the web for current information? OpenAI disabled it for months. Why? Because it let users “bypass paywalls.”


Translation: content owners complained, and OpenAI threw paying subscribers under the bus.


You paid for that feature. OpenAI removed it. Not because it didn’t work. Not because it was dangerous. But because it pissed off the wrong people.


Again: whose interests is OpenAI protecting?



The Stack Overflow Rebellion


When Stack Overflow—the massive programming Q&A site—partnered with OpenAI to feed forum posts into ChatGPT’s training data, users revolted. Programmers who’d spent years answering questions for free tried to delete their posts to keep AI companies from profiting off their work.


Stack Overflow’s response? It suspended users, en masse, for deleting their own content, then restored the deleted answers.


Why? Because they’d already sold that data to OpenAI. User consent didn’t matter. User control didn’t matter. The deal mattered.


If you’ve ever answered a question on Stack Overflow, your knowledge is now ChatGPT’s knowledge. And you got nothing. Not a dime. Not even a thank you.



The Pattern Is Clear


Every time there’s a conflict between:


  • Your privacy vs. OpenAI’s legal protection → OpenAI wins

  • Your control vs. OpenAI’s liability → OpenAI wins

  • Your features vs. corporate partnerships → OpenAI wins

  • Your consent vs. their business model → OpenAI wins



You are not the customer. You are the raw material.


OpenAI is building an empire on your words, your ideas, your problems, your secrets. They’re charging you for the privilege. And when push comes to shove, they will throw you under the bus to protect themselves.


Sam Altman talks about building “beneficial AGI for all of humanity.” But his company can’t even build basic privacy protections for paying customers.


They can’t even let you choose which version of their AI you use.


They can’t even keep your credit card number safe.


But sure. Trust them with artificial general intelligence.



REFLECTION BOX


Ask yourself:


  • What have you told ChatGPT that you wouldn’t want a lawyer to read in court?

  • If you’re paying $20-$200/month, what are you actually getting that protects you instead of protecting them?

  • When OpenAI says “trust us” with AGI, why should you believe them when they can’t even secure basic user data?

  • Who profits from your data? (Hint: it’s not you)

  • If your own country’s privacy regulators are investigating or banning ChatGPT, why are you still using it?



The uncomfortable truth: You’ve been having a one-sided relationship with a corporation that sees you as a product, not a person.


Maybe it’s time to rethink what you’re giving away—and what you’re getting back.



What Now?


I’m not saying don’t use AI. I’m saying understand what you’re trading.


Every prompt you send is data they own. Every problem you share is training material. Every secret you confess is potential evidence.
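

One concrete habit, if you keep using these tools: scrub identifiers on your side before a prompt ever leaves your machine. Here’s a minimal sketch, assuming crude regex patterns fit your threat model. Real PII detection is much harder than this, and names, addresses, and context clues will still slip through.

```python
# Minimal client-side redaction before a prompt leaves your machine.
# Regex scrubbing is a rough first pass, not real PII detection;
# names and contextual details will still get through.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

raw = "Email me at jane.doe@example.com or call 555-867-5309 about the case."
print(redact(raw))
# -> Email me at [EMAIL] or call [PHONE] about the case.
```

It won’t stop a subpoena. It just means there’s less of you in the record to subpoena.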


They’re not evil. They’re just a corporation. And corporations protect corporations.


The question is: who’s protecting you?





WANT MORE UNCOMFORTABLE TRUTHS?



Subscribe to TOCSIN MAGAZINE — where we ask the questions Silicon Valley doesn’t want you asking.


We don’t do press releases. We don’t do corporate PR. We do reality.


Because somebody has to.


🔥 TOCSIN MAGAZINE — The antidote to tech propaganda

tocsinmag.com | Subscribe now for unfiltered investigative journalism


Join the conversation that scares the powerful.
