How does trust impact data sharing?

Nathan Kinch
4 min read · Oct 31, 2021
From an upcoming keynote. Source visuals come from ODI and Frontier Economics

I’ve written about this topic before. Quite a lot, in fact. Much of my previous writing focused on our work conducting basic experiments that tested our approach to service design, referred to as Data Trust by Design (“our” being Greater Than X, the services firm I used to run).

The scope of these experiments was pretty basic. We started by using behavioural data (event logs and the like) to develop a baseline understanding of how data sharing works in a given context today. We assigned little weight to this, primarily because most data sharing today is passive*. There’s limited choice, and when choice is offered, it’s often largely illusory.

*Very happy to expand on this if useful.

So the status quo on its own wasn’t the control. The control was actually the way things currently work, combined with a step-by-step explanation of the data processing activities that are actually occurring in any given context (think of this as a guided attempt to ‘fully’ inform individuals so that they are able to make an actual choice when the decision moment is reached).

Basically, research participants went through the process of searching for and purchasing something on an eCommerce site. Rather than all of this happening smoothly (as it does in the real world), commentary was added to the experience describing how it was being brought to life (the commentary was delivered by the researchers; these sessions took place in a lab setting).

This tends to result in a very low propensity for people to willingly share data with organisations (there’s too much processing, the data is thought of as overly sensitive, the risks seem disproportionate to the outcomes, and so on). It often surfaces a consistent mental model: shock > a feeling of helplessness > apathy, which translates behaviourally into seeming like people don’t care. If they are given choice, without unnecessary tradeoffs, they seemingly do care*. This is something we published about in our work with the Consumer Policy Research Centre (work that has been cited frequently, particularly in Australian circles, as it relates to AdTech).

*Happy to expand on all this and more if useful.

We’d then build upon this new data by designing interventions that utilised practices from Data Trust by Design, like The Better Disclosure Toolkit, to fundamentally change the way a given individual experiences a proposition. This approach seeks to embed qualities of trustworthiness into the experience, make these qualities visible, enable the individual to exercise meaningful agency, and demonstrate how the limited data being used is actually delivering value (in effect, showing a cause-and-effect type relationship so people can appreciate the value of the data they’ve decided to share).

One of very many design patterns developed for consent (in this case, revocation, using an interaction to showcase the revocation event occurring. We achieved this by quickly decreasing the opacity until the specific attribute or group of attributes disappeared, followed by a state transition confirming success)
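The revocation pattern described above can be sketched as a small state machine: the attribute fades out step by step, and once invisible, the view transitions to a success state. This is a hypothetical TypeScript sketch of my own; the names (`AttributeView`, `stepRevocation`, `revoke`) and the fade step size are assumptions, not the actual implementation from the toolkit.

```typescript
// Hypothetical sketch of the revocation interaction: fade the revoked
// attribute out, then transition to a state confirming success.

type RevocationState = "active" | "fading" | "revoked";

interface AttributeView {
  name: string;     // e.g. "email address"
  opacity: number;  // 1 = fully visible, 0 = removed from view
  state: RevocationState;
}

// Advance the animation by one step and return the updated view.
function stepRevocation(view: AttributeView, fadeStep = 0.25): AttributeView {
  if (view.state === "active") {
    // User triggered revocation: begin fading.
    return { ...view, state: "fading" };
  }
  if (view.state === "fading") {
    const opacity = Math.max(0, view.opacity - fadeStep);
    // Once invisible, transition to the success state.
    return opacity === 0
      ? { ...view, opacity, state: "revoked" }
      : { ...view, opacity };
  }
  return view; // already revoked; nothing more to do
}

// Run the revocation interaction to completion for one attribute.
function revoke(view: AttributeView): AttributeView {
  let current = view;
  while (current.state !== "revoked") {
    current = stepRevocation(current);
  }
  return current;
}
```

In a real interface the per-step fade would be driven by an animation frame or CSS transition rather than a loop; the point here is simply that revocation is made visible as an explicit, confirmable event rather than a silent database update.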

In these settings we were able to demonstrate an 8x increase in propensity to share data (in an environment where sharing is optional, as with consent). Not relative to how things work today, but relative to the control we established (the one where most folks are shocked by the extent of the data processing).

Why such a huge increase (relative to the peer reviewed literature)?

https://theodi.org/article/the-economic-impact-of-trust-in-data-ecosystems-frontier-economics-for-the-odi-report/

Truthfully, it’s hard to say with any confidence. These experiments were speculative at best. We’ve always tried to, if anything, downplay the results.

However, our speculation was that this was almost like a ‘paradigm shift’. The shift in individuals’ mental models and mindset resulted in a sizeable shift in trust states. This was only possible because of the information delivered as part of the control experience (folks were no longer operating in the dark, which can otherwise lead to active distrust).

We went from an environment of resigned trust (often at best) to one of active trust.

Again, from my upcoming keynote. Source is from Sutcliffe et al. at TIGTech

And because the new baseline we established was so low (very likely biased by a variety of factors, including the inherent limitations of the research methods), the relative increase was huge.

Now, I’m spending less time at the moment on these more tactical things. I’m thinking much more about organisational design and the ways in which qualities of trustworthiness can be embedded into the complex adaptive system that is the organisation. But this — the relationship between trustworthiness, trust and data sharing — is a topic that isn’t going away any time soon.

What I’d like to do is start a discussion, share learnings from the ‘trenches’ and encourage the community at large to figure out better ways to make the trustworthiness of — and trust in — data sharing a new normal.

Comment away.
