
Client
Salesforce
Deliverables
Website redesign for Salesforce’s Office of Ethical & Humane Use
Results
We reshaped Salesforce’s Ethical & Humane Use page from a compliance index into a narrative-led trust hub. The new architecture follows a story arc — principles, then proof, then practice — organized as a hub-and-spoke structure that lets each page compete on its own SEO terms while preserving the authority of the existing URLs. The result is a foundation that grows with Salesforce’s trusted AI work: it doesn’t just state the company’s AI posture, it shows the receipts.
View Website
Salesforce’s Office of Ethical & Humane Use (OEHU) leads the company’s trusted AI work — the principles, governance, and practices behind every Agentforce release. But the existing public page was organized around the team’s internal structure (RAIT, Policy, Accessibility), which made sense to insiders and confused everyone else. Customers, policymakers, and journalists couldn’t find what they needed, and the copy assumed trust it hadn’t earned. The task: redesign the site into a narrative-rich destination that explains how Salesforce builds trusted AI in language humans actually understand — especially in the age of agents.
We ran a Figma workshop with OEHU's policy, product, and accessibility teams to map three audiences — customers and partners, policymakers and civil society, internal stakeholders — and five trust priority areas. We followed it with two rounds of competitive research: first across the responsible AI sites of major tech vendors, then across narrative-driven trust sites outside the AI space, to study how principle-heavy content can feel human.
The research surfaced one decisive gap. Across the AI industry, principles are presented as lists, governance as org charts, and resources as link collections. Nobody proves they actually do what they say. From there we proposed three strategic directions and the OEHU team chose Narrative-First — the one built around showing the work, not stating it.
The chosen architecture restructures the site as a narrative arc: Hub → How We Build Trusted AI → How We Govern AI Responsibly → Responsible AI in Practice → Resources. The Hub establishes Salesforce's foundations. The middle pages show the technical implementation and the oversight structure that enforces it. The Practice page is the proof: real cases where principles, platform features, and governance work together. Resources sits at the end, organized not by content type but by what a visitor wants to do: build AI, evaluate AI, or shape AI policy.
Every page is built around the same design rule: evidence over aspiration. Principles are paired with the policies that enforce them, the people accountable for them, and the products where they show up. A claim earns its place by what it points to.
We helped Salesforce reshape its Trusted AI presence — moving from a compliance reference page to a public hub where customers, policymakers, and partners can actually understand how Salesforce builds and governs agentic AI.
The new architecture follows a narrative arc: principles, technical implementation, governance, proof in practice, and resources organized by visitor intent. Each page is a chapter, building toward the next.

Behind every page is the same rule: evidence over aspiration. Every claim is paired with the policy, person, or product that proves it. The result is a site that doesn’t just state Salesforce’s trusted AI posture: it shows the receipts.