Our practice
The work, not a checklist.
A definitive checklist for trauma-informed content design does not exist. Anyone who sells you one is selling you something else. What does exist is a set of commitments that shape how the work runs: where audits start, where training begins, what architecture does that rewording cannot and what changes when AI enters the pipeline. This page sets those out.
The argument behind the practice is on a companion page, Our approach.
Where accessibility ends
Beyond WCAG
WCAG 2.2 is necessary but not sufficient. Accessibility standards test whether content can be perceived and operated, whether it meets contrast ratios, works with assistive technology, survives keyboard navigation and degrades gracefully. They do not test whether it can be understood, acted on or recovered from by a person whose working memory is compromised. That second test is where most real-world failures sit.
A form can pass every WCAG criterion and still be unusable by a bereaved partner in the first week. A letter can meet every plain-language guideline and still trigger panic in the person it was written for. The gap between technical accessibility and cognitive accessibility is where trauma-informed design does its work.
Our audits start where WCAG ends. We look for the places where content assumes cognitive capacity the user will not have: density, sequencing, jargon reintroduced under a different name, decisions loaded onto the user when the system should have made them and tone that signals judgement where the user needs permission. None of this is captured by a standards checker. All of it matters more than most of what is.
More than rewording
Architecture, not just language
Most content improvement projects stop at language: simpler words, shorter sentences and a more human tone. This is the easy part of the work, and often the least useful.
The harder work is architectural. How is a journey sequenced when the user may abandon it three times before completing it? Where does a decision sit relative to the information a user needs to make it? How is content grouped when the reader cannot hold more than one idea in working memory? What does a page do when the user has arrived at it in the wrong order, from a search result they did not understand, on a device that is not the one the service was designed for?
The Cancer Research UK work was not primarily a rewording exercise. It was a redesign of how thousands of pages of cancer information were structured, so that a person in active treatment could find the next useful thing rather than drowning in a comprehensive library written for someone with time to read.
Universal Credit was not about friendly copy. It was about whether the journey held together when a claimant was making it for the third time at midnight on a borrowed phone.
Language matters. Architecture matters more. A well-written sentence in the wrong place serves no one.
Scale and its costs
AI and trauma-informed content
AI is already being used to generate, personalise and triage content at a scale no human content team can match. Used well, this is an opportunity; trauma-informed principles can be embedded in style guides, prompt libraries, tone-of-voice systems and content pipelines in ways that reach far more users than any hand-crafted intervention ever could.
Used badly, it is an amplifier for the failure modes the rest of this page describes. A model trained on existing corporate content will produce more corporate content. A generation pipeline optimised for engagement will produce content that performs in lived experience and fails in living experience.
Personalisation that targets demographic categories will entrench the framing that vulnerability is a property of certain people rather than a state most people pass through.
The question is not whether to use AI in trauma-informed content work. It is whether the pipeline has been designed with living experience, reduced capacity and regulatory obligation in scope from the start, or whether those constraints will be retrofitted after the first regulatory finding. We work with teams on the first version of that conversation, not the second.
How engagements run
What working with us looks like
Most engagements start with a focused assessment of your highest-risk content, whatever a regulator, a complaints team or a vulnerability lead would flag first. From there, work usually expands into one of three shapes: implementation support, where we redesign content alongside your teams; training grounded in your specific regulatory context rather than generic empathy; or strategic input on content systems and AI pipelines.
We do not sell templates, frameworks without implementation paths or motivational keynotes. We are clear about what we have not worked on, where the evidence is thin and where someone else would serve you better. If you tell us what the actual problem is, we will tell you honestly whether we are the right people to help.