In 2014, the U.S. Department of Veterans Affairs was caught falsifying wait-time records while veterans died waiting for care. The scandal wasn't a design failure in the conventional sense—it was a systems failure that reinforced decades of distrust among the very population the agency existed to serve. For designers tasked with improving VA digital services afterward, the challenge wasn't information architecture or interface polish. It was the fact that their users had rational, evidence-based reasons to believe the system would fail them again.
This is the uncomfortable territory where service design meets institutional legitimacy. Most design methodologies assume a baseline of user willingness—that if you build something usable, people will use it. But what happens when the intended users have learned, through direct experience or generational knowledge, that engaging with government systems leads to harm, surveillance, denial, or humiliation? Standard user-centered design frameworks have surprisingly little to say about this.
The challenge is structural, not cosmetic. You cannot solve distrust with a friendlier interface any more than you can solve food insecurity with better restaurant menus. Yet design does have tools for this work—tools rooted in transparency, accountability, and the deliberate redistribution of power within service interactions. The question is whether institutions are willing to deploy them honestly, or whether they'll treat trust-building as another optimization metric to game.
Trust Deficits Are Rational, Not Irrational
Design practice tends to frame user resistance as a problem to be overcome—a friction point in the journey map, an adoption barrier to be smoothed away. When citizens avoid government services they're entitled to, the instinct is to diagnose a usability problem. But for many populations, avoidance isn't a failure to understand the system. It's a perfectly calibrated response to how the system has historically treated them.
Consider the documented history: Indigenous communities subjected to forced assimilation programs administered through government services. Black Americans navigating welfare systems designed with surveillance and punishment baked into their logic. Immigrant communities where engagement with any government agency risks exposure to enforcement mechanisms. These aren't abstract grievances. They're institutional patterns with living witnesses and ongoing consequences.
Herbert Simon's concept of bounded rationality is useful here. Citizens aren't making irrational decisions when they avoid services—they're making rational decisions based on the information available to them. That information includes personal experience, community knowledge, and observable institutional behavior. A grandmother who tells her family not to apply for benefits isn't being stubborn. She's transmitting survival intelligence.
For designers, this reframing matters enormously. If distrust is irrational, the solution is persuasion—better messaging, simpler forms, friendlier branding. If distrust is rational, the solution requires changing the conditions that produced it. That means the design brief isn't 'make people trust us' but rather 'give people legitimate reasons to reconsider their assessment.' The distinction sounds subtle, but it redirects the entire design effort from communication to structural change.
This is where many public service redesigns fail before they begin. They treat trust as an input variable—something you establish in onboarding and then leverage throughout the service. But for populations with trust deficits, trust isn't the starting point. It's the outcome, if it arrives at all. Designing for this reality means building services that function without trust and that earn it incrementally through demonstrated behavior.
Takeaway: When users avoid a service, the design question isn't 'how do we persuade them?' but 'what has this system done to make avoidance the rational choice?' The answer to that question is where real redesign begins.
Designing for Trustworthiness, Not Trust
There's a critical distinction in organizational theory between trust and trustworthiness. Trust is a psychological state held by the user. Trustworthiness is a property of the system. Designers can't directly create trust—that's the citizen's decision to make. But they can systematically design for trustworthiness by embedding specific qualities into every service interaction.
The first quality is radical transparency. Not transparency as a marketing gesture—a cheerful 'here's how we use your data' banner—but operational transparency that shows the machinery. What happens after you submit this form? Who sees it? What are the decision criteria? What's the timeline? When Estonia built its X-Road digital infrastructure, it included a feature that lets citizens see exactly which government officials have accessed their records and when. That's not a UX nicety. It's a power redistribution mechanism.
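The X-Road access-visibility feature can be sketched as a data structure. This is a minimal illustration, not Estonia's actual implementation: an append-only log that records who viewed a record and why, which the citizen can read in full. All names here (`AccessEvent`, `AccessLog`, the field names) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AccessEvent:
    """One official's view of one citizen record: who, what, and why."""
    official_id: str
    record_type: str
    purpose: str
    timestamp: datetime


@dataclass
class AccessLog:
    """Append-only: officials can add entries but never edit or remove them,
    and the citizen sees every entry without filtering."""
    _events: list = field(default_factory=list)

    def record(self, official_id: str, record_type: str, purpose: str) -> None:
        self._events.append(AccessEvent(
            official_id, record_type, purpose,
            datetime.now(timezone.utc)))

    def citizen_view(self) -> list:
        # Every access, newest first -- nothing withheld from the citizen.
        return sorted(self._events, key=lambda e: e.timestamp, reverse=True)


log = AccessLog()
log.record("official-17", "health-record", "benefits eligibility review")
for event in log.citizen_view():
    print(event.official_id, event.record_type, event.purpose)
```

The design choice doing the work is the asymmetry: the institution writes, the citizen reads everything. That is what makes the log a power redistribution mechanism rather than a reporting feature.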
The second quality is user control, and it cuts against deep institutional instincts. Government services are typically designed to extract information from citizens in exchange for access to entitlements. The interaction assumes the institution asks and the citizen answers. Trust-building design inverts this wherever possible. It gives citizens control over what they share, when they share it, and the ability to withdraw. It builds in opt-out paths that don't carry penalties. It treats consent as ongoing, not one-time.
The third quality is visible accountability. When the system makes an error—and it will—citizens need to see consequences and corrections. This means designing complaint mechanisms that aren't dead ends, publishing error rates and resolution times, and creating feedback loops where citizen reports visibly change system behavior. Most government services bury their failure modes. Trust-building services surface them deliberately.
Together, these three qualities—transparency, control, and accountability—form what we might call a trustworthiness architecture. It's the structural equivalent of what sociologists call 'trust signals,' except it operates at the system level rather than the interpersonal one. The design challenge is that each of these qualities requires institutions to accept vulnerability, which runs counter to the bureaucratic instinct for self-protection.
Takeaway: You cannot design trust into a system—trust belongs to the user. What you can design is trustworthiness: transparency that reveals the machinery, control that redistributes power, and accountability that makes failure visible and correctable.
Incremental Credibility: Trust as a Design Outcome
If trust cannot be assumed at the outset, it must be accumulated through experience. This suggests a design pattern we might call incremental credibility—structuring services so that each interaction is a small, low-risk test that the institution can pass or fail. Over time, passed tests compound into something that begins to resemble trust.
The pattern draws on game theory's concept of iterated interactions. In a single encounter, there's no basis for trust. But in repeated encounters with consistent outcomes, cooperation becomes rational. Applied to service design, this means creating frequent, low-stakes touchpoints where the institution can demonstrate reliability. A text message that accurately predicts a wait time. A status update that arrives when promised. A caseworker who remembers what was discussed last time. None of these are remarkable individually, but their consistency is the mechanism.
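The compounding logic of repeated interactions can be made concrete with a toy scoring function. This is an illustrative model, not a measurement instrument: each interaction is a kept or broken promise, recent interactions weigh more, and a broken promise is assumed to cost more than a kept one earns (the weights here are arbitrary).

```python
def credibility(outcomes, decay=0.9, kept_gain=1.0, broken_cost=3.0):
    """Fold a history of kept/broken promises into one running score.

    outcomes: sequence of booleans, True = promise kept, False = broken.
    Older interactions decay; a break costs more than a keep earns,
    reflecting the asymmetry of trust formation.
    """
    score = 0.0
    for kept in outcomes:
        score = decay * score + (kept_gain if kept else -broken_cost)
    return score


# Ten small kept promises accumulate steadily...
steady = credibility([True] * 10)
# ...while a single late failure erases much of the gain.
broken = credibility([True] * 9 + [False])
```

The point of the asymmetric weights is the essay's argument in miniature: consistency is the mechanism, and one visible failure costs more than several unremarkable successes earn.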
This has practical implications for service architecture. Rather than designing monolithic application processes—submit everything at once and wait—incremental credibility suggests phased engagement models. Let citizens access basic information anonymously. Let them create accounts without committing to applications. Let them start applications without submitting them. Each phase is a test: did the system behave as expected? Was the experience respectful? Did anything unexpected happen with my information?
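The phased engagement model above amounts to an explicit state machine in which every transition requires fresh consent and stopping is always a valid outcome. A minimal sketch, with hypothetical phase names:

```python
from enum import Enum, auto


class Phase(Enum):
    ANONYMOUS_BROWSE = auto()   # read eligibility info; no identity required
    ACCOUNT_ONLY = auto()       # account exists; no application started
    DRAFT_APPLICATION = auto()  # application in progress; nothing submitted
    SUBMITTED = auto()


# Each phase advances at most one step. There is deliberately no
# transition that skips ahead or that the system can force.
NEXT = {
    Phase.ANONYMOUS_BROWSE: Phase.ACCOUNT_ONLY,
    Phase.ACCOUNT_ONLY: Phase.DRAFT_APPLICATION,
    Phase.DRAFT_APPLICATION: Phase.SUBMITTED,
}


def advance(phase: Phase, citizen_consents: bool) -> Phase:
    """Move forward only with explicit, per-phase consent."""
    if not citizen_consents or phase not in NEXT:
        return phase  # staying put carries no penalty
    return NEXT[phase]
```

Encoding the service this way makes the trust tests auditable: each transition is a discrete moment where the system either behaved as expected or did not.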
The Finnish social services agency Kela offers a useful reference. Its redesigned benefits system allows citizens to explore eligibility through anonymous calculators before any personal data is exchanged. The first interaction carries zero risk. This isn't just good UX—it's a deliberate trust-building strategy that acknowledges the asymmetry of power in citizen-government interactions and temporarily levels it.
The hardest part of incremental credibility isn't the design—it's the institutional patience it requires. Organizations accustomed to measuring success by application completions and processing volumes will struggle with a model that treats a citizen's decision to not proceed as a legitimate and respected outcome. But that respect is precisely what builds credibility. A service that pressures you to complete an application is optimizing for its own metrics. A service that lets you walk away and come back is optimizing for your autonomy. Citizens can tell the difference.
Takeaway: Trust isn't a switch you flip at onboarding—it's a compound interest account funded by consistent, low-risk interactions where the institution behaves exactly as promised. Design the deposits, not the withdrawal.
The design profession's most uncomfortable truth in this space is that the best service design cannot compensate for an untrustworthy institution. If the underlying policies are punitive, if the data is shared with enforcement agencies, if the system is built to deny rather than serve—then elegant interaction design becomes a more sophisticated form of deception. The design work has to be honest about what it can and cannot change.
What design can do is make institutional behavior legible. It can create structures where trustworthiness is observable, where power asymmetries are acknowledged and partially corrected, and where citizens accumulate evidence through direct experience rather than being asked to take faith-based leaps.
The strategic insight for practitioners is this: stop designing services that require trust. Start designing services that earn it—interaction by interaction, kept promise by kept promise. The citizens you're designing for have been paying attention longer than you have. Design accordingly.