Earn trust with these 7 signals
It’ll come as no surprise that trust demonstrably impacts the way organisations operate and perform. Take, for example, Accenture’s 2018 review of the performance of 7,030 companies. This ‘competitive agility index’ helped expose that trust has a potentially disproportionate impact on bottom-line business outcomes like growth and EBITDA.
Much has been written about this topic. “We need more trust.” “We need to earn back the trust we have lost.” “Trust is at an all-time low.”
This is common rhetoric. Whether it’s completely accurate is a slightly different story (see Ipsos Mori’s recent work for more). The ‘what’ — trust matters and we, as organisations, need to do better at being worthy of it — is well understood by business leaders. What is less common, however, is a tangible and well-utilised reference point for the ‘how’. What’s missing are open and transparent demonstrations of these models, methods and frameworks in action, along with the results they help produce.
Although this is a complex and somewhat nuanced problem or opportunity space, recent work from TIGTech — the result of a two-year review of the trust literature and a variety of stakeholder engagement activities — offers useful insights for leaders and practitioners to consider and potentially act on.
The purpose of this article is to introduce privacy leaders and practitioners to what I have found to be immensely useful work. I’m doing this because I’ve long held the belief that privacy, in a reasonably holistic sense, can be an enabler of trustworthiness and responsible innovation. It’s my experience that privacy professionals are well positioned to lead and contribute to work that makes this a reality.
This article is not an exhaustive overview of the topic, but rather a basic introduction. At the end I share resources that you can access for free if you’d like to learn and do more.
What is trust?
There are a number of definitions, and most of them are categorically quite similar. Trust is commonly thought of as hope that expectations will be fulfilled, confidence in the face of the unknown, or a belief in the reliability or truth of someone or something.
Intuitively we seem to ‘get’ this. But trust is complex. It works two (or more) ways, depending on the number of parties involved. It’s influenced by a variety of factors, from our cultural settings to our cognitive biases and even, perhaps, our genetics. NYU’s neuroscience research, which found that our brains decide whether a face is trustworthy before we consciously perceive it, highlights this complexity rather beautifully.
Trust is also a spectrum. It’s more than a binary relational state. It’s not as simple as you trust or don’t.
Image credit: TIGTech
Even though all of this is complex, much can be done to influence trust states for the better.
How is trust earned?
This, again, is somewhat complex. The answer “it depends” likely won’t cut it, so let’s go a little further.
Trust is perhaps best earned through consistent demonstrations of trustworthiness. This is likely true in both interpersonal and organisation-to-individual settings. It’s for this reason that Hilary, the lead author of the TIGTech research I referenced in the introduction, has come to believe that trust is better described as “a belief in another’s trustworthiness”.
But what does it mean to be worthy of trust?
TIGTech identifies 7 drivers of trust, or ‘signals of trustworthiness’ (I prefer the latter framing for a variety of reasons).
Image credit: Mat Mytka from Greater Than Learning and Hilary Sutcliffe from TIGTech
“For such diverse fields of research there was an unusual and remarkable consensus on the qualities which are important for trust — intent, competence, respect, integrity, inclusion, fairness and openness. Our research made it very clear that these are not just abstract concepts, or academic theories. These 7 Trust Drivers are deeply rooted in our individual and collective psychology and the fundamental ways our societies work and have evolved.”
By defining what these signals mean to your organisation and its stakeholders, breaking them down into concrete actions, and progressively doing and communicating the work, you put yourself in the best position to give good evidence of your organisation’s trustworthiness (the eighth category in the visual above). From there, there’s a pretty good chance that this will impact trust states for the better.
Just a quick note at this point. Being worthy of trust is about much more than simply communicating values or qualities. Trustworthiness has to be demonstrable and understandable if it’s to be appreciated and impactful. Put simply, you don’t want to risk falling into the category of companies where there is seemingly no meaningful association between stated values and actions. This will very likely negatively impact trust states.
It’s also worth noting that it may be unwise to simply expect or desire more trust (again, this is common rhetoric). Given the variables that seem to impact trust states — and the fact that many trust-based decisions seem to be less active than we’d like (in Kahneman’s framing, many of these decisions are largely ‘system 1’ thinking) — it’s perhaps more prudent to think about a world where trust is better placed: a world where those who are trustworthy benefit from more positive trust states, and those who lack trustworthiness experience the opposite.
Moving on.
How might this work in practice?
Let’s start by describing a hypothetical scenario. Keep in mind that this is an oversimplified, ‘on paper’ example of how an organisation might embed the signals of trustworthiness into its strategic planning and daily practice. This may be very different to your interpretation or practical use of the signals.
Work with me.
You’re a FinTech. You’ve decided to embrace some of this work on signals of trustworthiness as part of a new Open Banking proposition you’re developing.
You’re operating in the UK, which means you have to consider PSD2 consent requirements (and everything else that comes along with Open Banking) and other relevant regulations, such as the GDPR, PECR etc.
You establish a cross-functional team. You’ve got privacy and data protection, legal counsel, a product manager, a user experience designer, a few software engineers, a user researcher and a comms professional. This gives you a great foundation from which to approach the project holistically.
You might lead the process with some discovery work that consists of different research practices. To better define the opportunity you use the Problem Worth Solving framework and Jobs to be Done switching interviews. You also conduct desktop research and related activities that help the team understand everything from relevant behavioural science through to the regulatory landscape (including the rules, technical specifications and CX guidelines for Open Banking). There’s likely a concept PIA/DPIA embedded into this process somewhere too.
Given you’ve committed to the 7 signals of trust, you choose to add an additional set of complementary activities.
You lead this process by defining the intent of the proposition. Specifically, how delivering it will benefit the people directly and indirectly impacted by it. An activity such as consequence scanning can be immensely useful at this stage.
You then engage in a process of defining a series of actions, considerations and potential outcomes for the other 6 signals of trust. These considerations inform the way you make decisions, specifically the parameters that support ethical decision making and the navigation of unavoidable trade-offs.
As you continue with the relevant workflows and practices, the 7 signals feature prominently in strategic planning, as well as in proactive, retroactive and retrospective meetings. The signals also feature in everyday workflows by helping inform the many micro-decisions being made.
As you progress you might decide to invite a representative sample of your prospective stakeholders into a research process (this ideally occurs multiple times). This brings to life a combination of intent, openness, respect and inclusion.
You do this as part of a broader approach to ethical decision making and participatory design by using a supporting tool like the ethics decision making framework. You execute a program of research that puts prototypes to the test using outcome focused usability sessions with supporting contextual inquiry (although this is in a lab setting, this approach enables you to generate proxy quantitative data with supporting qualitative data quickly and cost effectively).
Here’s an example of this approach for consent based data sharing as part of Australia’s Consumer Data Right (the ‘regime’ that enables Open Banking down under).
The research participants engage with the prototype unimpeded and answer a series of scripted and off-the-cuff questions from the researchers.
The data from these sessions is gathered and analysed. The results might be published in a decision log as part of a commitment to openness, helping to show progress and inviting stakeholders to both observe and participate in the project as it evolves and matures.
As the project gets closer to implementation, assessments are again conducted against the 7 signals of trustworthiness. Pass/fail parameters are defined; for instance, the average of all signals must be >5.5 for the project to proceed as is. These assessments can be made progressively to support the process, help communicate openly with stakeholders, and encourage the most challenging question many organisations fail to explicitly ask: “just because we can, should we?”
The process in its entirety — the goals, the practices, the measures, the documentation, the data, the communication — forms the evidence of trustworthiness that’s likely to impact trust states for the better.
Now, if all of this sounds like gobbledygook and you’re struggling to visualise how these signals might be embedded into a sequence of work activities, let me give you something actionable to start with. This practice can be embedded into almost anything, from formal assessments through to meetings and collaborative design sessions.
Ask and answer 8 questions:
- How aligned is this proposal (decision, action, tool, product etc.) with our intent (purpose) as an organisation: 1–7 (1 being not aligned at all, 7 being it couldn’t be better)
- How is this proposal likely to impact our ability to deliver on our value promise (competence): 1–7 (1 being not at all, 7 being something like we couldn’t possibly deliver our proposition consistently without it)
- How is this proposal likely to affect our ability to treat our stakeholders with respect: 1–7 (1 being it’ll destroy our ability to treat those we care about with respect, 7 being something like it’s essential to treating the people we care about with the respect they deserve)
- How might this proposal impact our ability to be open and vulnerable: 1–7 (1 being something like this will significantly hinder our ability, 7 being something like this is essential to enabling us to operate openly and with a willingness to be vulnerable to the actions of others)
- How might this proposal impact our ability to be honest, accountable for our actions and impartial wherever possible (integrity): 1–7 (1 being something like this will destroy our ability to do so, 7 being something like this is essential to us operating with integrity)
- How might this proposal impact our ability to engage in just processes that deliver equitable outcomes (fairness): 1–7 (1 being something like this will destroy our ability to do so, 7 being something like this is essential to an organisational model that is fair, by design)
- How might this proposal impact our ability to actively include our stakeholders in ways that are valuable, meaningful and engaging: 1–7 (1 being something like this will destroy our ability to do so, 7 being something like this is essential to us designing effective processes and practices of inclusion i.e. participatory design, co-design etc.)
- How impactful and useful is our evidence of trustworthiness: 1–7 (1 being it literally has no impact on trust states, and 7 being it directly impacts the way our stakeholders make decisions about their relationship with us)
From this simple practice you will have eight single-digit answers. Add them up and divide by 8. This gives you an average score to work with.
It’s up to you to decide how you want to interpret the acceptability of the outcome. In my experience, unless it’s >5.5, something probably needs to be revisited and reworked.
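To make the arithmetic concrete, here’s a minimal sketch in Python. The signal names and the >5.5 threshold come from the practice described above; everything else (the function name, the example scores) is purely illustrative.

```python
# A minimal sketch of the 8-question trustworthiness scoring described above.
# The signal names and the >5.5 threshold follow the article; the function
# and variable names are illustrative, not part of any formal framework.

SIGNALS = [
    "intent", "competence", "respect", "openness",
    "integrity", "fairness", "inclusion", "evidence",
]

def average_score(scores: dict[str, int]) -> float:
    """Average the eight 1-7 answers into a single working number."""
    for signal in SIGNALS:
        if not 1 <= scores[signal] <= 7:
            raise ValueError(f"{signal} must be scored between 1 and 7")
    return sum(scores[s] for s in SIGNALS) / len(SIGNALS)

# Example assessment of a hypothetical proposal.
scores = {
    "intent": 6, "competence": 5, "respect": 6, "openness": 4,
    "integrity": 6, "fairness": 5, "inclusion": 3, "evidence": 5,
}

avg = average_score(scores)
print(f"Average: {avg:.2f}")  # Average: 5.00
if avg > 5.5:
    print("Proceed as is")
else:
    print("Revisit the weakest signals")  # here, inclusion and openness
```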
You could also visually plot this using a spider diagram. I’ve already found this to be highly effective in my work, primarily as a communication tool that supports meaningful discussion and collaboration. It can help you explore ways to improve different signals. You can track (through versioning) the impact that new ideas or tactics have on the scores over time.
Image credit: TIGTech (representative data only)
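And if you’d rather generate the spider diagram programmatically than sketch it by hand, a basic matplotlib radar chart will do the job. The scores below are illustrative only; you could version the resulting images over time to track progress, as suggested above.

```python
# A sketch of a spider (radar) diagram for the 8 signals using matplotlib.
# Scores are illustrative; regenerate and version these plots over time.
import matplotlib.pyplot as plt
import numpy as np

signals = ["Intent", "Competence", "Respect", "Openness",
           "Integrity", "Fairness", "Inclusion", "Evidence"]
scores = [6, 5, 6, 4, 6, 5, 3, 5]

# Close the polygon by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(signals), endpoint=False).tolist()
angles += angles[:1]
scores_closed = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, scores_closed, linewidth=2)
ax.fill(angles, scores_closed, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(signals)
ax.set_ylim(0, 7)  # scores run from 1 to 7
ax.set_title("Trustworthiness assessment (illustrative data)")
plt.show()
```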
So where do we get to with all of this?
Start small
This sounds and looks like a lot of new work. I get that. It absolutely can be. But I don’t think that’s the only way to peel the carrot. Much can be done by starting small and being pragmatic.
My simplest piece of advice to walk away with is this: next time you’re assessing a proposal from the business or a client, whether via a formal process like a PIA or a more dynamic process like a design review meeting, encourage the group to ask and answer those 8 questions. Scribble a spider diagram on a whiteboard or in a visual collaboration tool. Make the trustworthiness assessment visible. Invite a candid discussion about where the gaps are and where there might be ways to enhance the trustworthiness of what you’re trying to do and how.
More formally integrating this into workflows will take time. It’ll be challenging. It will require patience. So start small, get some wins on the board and if you’re open to it, share your progress with other privacy pros around the world. We could all benefit from more data on the effectiveness of these types of approaches in practice.
If you’re interested in learning more about this research and the way you might be able to use it, you can access Hilary’s micro-learning course via Mural here (this is a visitor link, so there’s no need for you to have a Mural account if you don’t want one). You can also head to www.TIGTech.org for the full report and further thinking on trust and tech governance.
If you want to chat about this further, reach out to Hilary Sutcliffe or Mat Mytka. They are both awesome.