McGraw Hill’s response to the ShinyHunters breach is almost a perfect specimen of how organisations use the shared responsibility model to manage blame rather than manage risk.
The company confirmed that 13.5 million accounts were exposed and then, in the same breath, noted that the incident “appears to be part of a broader issue involving a misconfiguration within Salesforce’s environment that has impacted multiple organisations.” Translation: it may not be our misconfiguration. Salesforce might have done this to us. We are a victim of infrastructure we pay for and depend on but do not control.
This is technically defensible. It may even be factually accurate. And it changes almost nothing for the 13.5 million people whose contact data is now publicly available on a dark web leak site.
What Shared Responsibility Actually Is
Every major cloud and SaaS vendor publishes a shared responsibility model. AWS, Azure, Salesforce, Snowflake — the diagrams look slightly different but the structure is the same: the vendor is responsible for some things, the customer for others, and there is a line between them. The line is drawn to protect the vendor.
Shared responsibility is a liability framework masquerading as a security framework. It answers the question “who is responsible when something goes wrong?” — a legal and contractual question — while doing very little to answer “what prevents something going wrong?” The security work required to configure Salesforce Guest User permissions correctly, audit sharing rules, and monitor API usage falls entirely on the customer. The vendor’s responsibility is to provide the tooling to do that work and to operate the infrastructure that runs beneath it. Configuration and monitoring remain yours.
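What that customer-side work looks like in practice can be sketched in a few lines. The SOQL below uses Salesforce's standard ObjectPermissions object, which ties object-level permissions to profiles and permission sets; the sensitive-object list and the helper function are illustrative assumptions, not a Salesforce API, and you should verify the query against your org's API version before relying on it.

```python
# Hedged sketch: audit what a Guest User profile can read.
# Profile-owned permission sets expose a Profile relationship, and guest
# profiles are distinguished by UserType = 'Guest'. Run this query via
# GET /services/data/vXX.0/query?q=... with a valid OAuth token.
GUEST_OBJECT_PERMS_SOQL = (
    "SELECT Parent.Profile.Name, SobjectType, "
    "PermissionsRead, PermissionsViewAllRecords "
    "FROM ObjectPermissions "
    "WHERE Parent.Profile.UserType = 'Guest'"
)

# Illustrative assumption: objects an unauthenticated guest should
# normally never be able to read.
SENSITIVE_OBJECTS = {"Contact", "Account", "Case", "User"}

def risky_guest_permissions(rows):
    """Return query rows that grant guest read access to sensitive objects.

    `rows` is the `records` list from a Salesforce query response:
    each row is a dict keyed by the selected field names.
    """
    return [
        r for r in rows
        if r["SobjectType"] in SENSITIVE_OBJECTS
        and (r.get("PermissionsRead") or r.get("PermissionsViewAllRecords"))
    ]

# Example with stand-in query results: a knowledge-article object being
# guest-readable is expected; Contact being guest-readable is not.
sample_rows = [
    {"SobjectType": "Contact", "PermissionsRead": True,
     "PermissionsViewAllRecords": False},
    {"SobjectType": "FAQ__kav", "PermissionsRead": True,
     "PermissionsViewAllRecords": False},
]
flagged = risky_guest_permissions(sample_rows)
```

This is exactly the kind of audit the shared responsibility model assigns to the customer: the vendor supplies the query API, and nothing runs it unless you do.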
This arrangement works tolerably when both parties understand it. The problem is that most organisations signing SaaS contracts do not read the shared responsibility documentation carefully, do not have Salesforce administrators with deep security specialisation, and inherit configurations from implementation partners who optimised for functionality rather than least-privilege access control. The shared responsibility model does not care. You clicked the accept button on the contract. The responsibility is yours.
The Configuration Gap Is Not Closing
Salesforce has had misconfiguration-driven data exposure events consistently for years. Guest User access in Experience Cloud sites has leaked sensitive records from healthcare, government, and financial services organisations. The pattern is not a Salesforce vulnerability in the conventional sense — it is correctly functioning software configured in a way that exposes data to unintended audiences. Salesforce provides Health Check. It provides security settings guidance. It publishes well-documented best practice for Guest User permissions. The tooling is there.
And yet breaches attributed to Salesforce misconfiguration continue at scale. ShinyHunters has hit Salesforce-connected organisations repeatedly within a single month — Rockstar, McGraw Hill, and the suggestion of further unnamed victims in the same campaign. The attack surface is not a zero-day. It is a known configuration weakness that most customers have not closed.
This is the real problem the shared responsibility model obscures: the burden of correct configuration is placed on thousands of individual customers, each of whom must independently discover, understand, and apply security best practice for a platform whose configuration surface area is enormous. Most will not do this completely. Some will not do it at all. The aggregate result is a vast population of misconfigured instances at any given time — precisely the target environment that systematic actors like ShinyHunters are built to exploit.
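The arithmetic behind "vast population" is worth making explicit. The figures below are assumptions for illustration, not Salesforce numbers: the point is that even a small per-customer failure rate, multiplied across a large customer base, produces thousands of exposed instances at any given time.

```python
# Back-of-envelope illustration (both numbers are assumptions):
# a modest misconfiguration rate across a large customer base still
# yields a large absolute population of exposed instances.
customers = 150_000        # assumed number of customer orgs
misconfig_rate = 0.02      # assumed 2% with an open Guest User config

expected_exposed = customers * misconfig_rate
print(f"Expected misconfigured orgs at any given time: {expected_exposed:.0f}")
```

For an attacker scanning systematically, that expected population is the target environment; no individual customer's diligence changes the aggregate.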
What Needs to Change
The honest conversation about SaaS security requires three things that the industry currently avoids.
First, secure defaults are not optional. If Salesforce Guest Users should not have broad record access, the default configuration should not permit it. Every configuration option that requires the customer to actively restrict access is a configuration option that will be left open by a percentage of customers. Vendors can close this by shipping restrictive defaults that require deliberate action to open, not permissive defaults that require expertise to close.
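The principle reduces to a simple design choice, sketched below. The class and setting names are illustrative, not Salesforce APIs: the default state grants nothing, and widening access requires a deliberate, named action.

```python
# Sketch of "restrictive by default": access is denied unless the
# customer takes an explicit, auditable step to open it. Names here
# are hypothetical, not a real vendor configuration surface.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GuestAccessPolicy:
    # Secure default: no object is guest-readable until explicitly listed.
    readable_objects: frozenset = field(default_factory=frozenset)

    def allow_read(self, sobject: str) -> "GuestAccessPolicy":
        """Deliberately open one object; returns a new policy."""
        return GuestAccessPolicy(self.readable_objects | {sobject})

    def can_read(self, sobject: str) -> bool:
        return sobject in self.readable_objects

# A fresh policy exposes nothing; opening access is an explicit act
# that leaves a trace in code review or change logs.
default_policy = GuestAccessPolicy()
opened = default_policy.allow_read("FAQ__kav")
```

Under this shape, a customer who does nothing is safe by omission rather than exposed by omission — which inverts where the percentage of inattentive customers ends up.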
Second, configuration visibility needs to be contractual, not aspirational. Organisations should require, as a contract condition, that their SaaS vendors continuously scan configuration against published security baselines and alert on deviations. Some do this today. Most do not. “Use our Health Check tool” is not a substitute for “we will tell you when your configuration diverges from what we expect.”
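Mechanically, the contractual requirement amounts to continuous comparison against a baseline. A minimal sketch, assuming the settings keys below (they are illustrative, not real Salesforce setting names); a real scanner would pull live values from the vendor's configuration or metadata API and page an owner on any deviation.

```python
# Sketch of configuration drift detection against a published baseline.
# Keys and values are illustrative assumptions, not vendor settings.
SECURITY_BASELINE = {
    "guest_user_api_enabled": False,
    "guest_record_access": "none",
    "session_timeout_minutes": 30,
}

def config_drift(baseline: dict, actual: dict) -> dict:
    """Map each deviating key to its (expected, actual) pair."""
    return {
        key: (expected, actual.get(key))
        for key, expected in baseline.items()
        if actual.get(key) != expected
    }

# Stand-in for live settings fetched from the vendor's config API.
live_settings = {
    "guest_user_api_enabled": True,      # drifted
    "guest_record_access": "none",
    "session_timeout_minutes": 120,      # drifted
}
deviations = config_drift(SECURITY_BASELINE, live_settings)
```

The point of making this contractual is the direction of the obligation: the comparison runs whether or not the customer remembers to run it.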
Third, the shared responsibility model needs to be disclosed honestly in the sales process. Salesforce implementations are sold with significant promises about security and reliability. The specific ways in which that security depends entirely on customer configuration choices — and the consequences of misconfiguration at scale — are not routinely part of that conversation.
Until these change, expect more McGraw Hills. The shared responsibility model tells you whose fault it is. It does nothing to make it stop.