10 years later, my reflections on trust in tech

Nathan Kinch
11 min read · Oct 18, 2021

The mask is the result of my three-year-old’s instructions. I literally had to wear this all day…

I’ve been quiet by my usual standards. This is because I stepped back from Greater Than X and Greater Than Learning. Our services biz had a great run. It achieved the stuff we wanted it to achieve. It was always designed to be a temporary structure that helped us learn and deliver value along the way.

At the same time, I was supporting my family through some challenging health issues. We’ve now come through the worst of it. And with some renewed focus and energy, I’m ready to get back to better than my best ;)

To very briefly summarise my near term focus, I’ll be working 3 days a week with boards, executive committees and various ‘leaders’ and teams. I’ll help them better understand the complexity and nuance of trust and trustworthiness, specifically as it relates to modern business. This work will help inform corporate governance (setting strategy and identifying/managing risk). It will also inform various business functions. Together we will better design for the qualities of trustworthiness. Through this we will try to positively impact trust states. By positively impacting trust states we will hopefully contribute to better relational dynamics that have a positive business, social and environmental impact (yes, everything is connected!).

Time will tell if this turns out to be true.

I’m working 3 days because I have a better sense of what I value. I want to spend more of my life truly experiencing life’s most meaningful moments. I also want to support my wife/bestie/life partner as she embarks on a new journey (additional study, getting back to work, transforming her ‘career’ etc.). I’ll be there to pick up the slack and ‘be’ with our three-year-old.

With that out of the way, let’s get to the purpose of this post.

I’ve now spent about a decade fascinated by the variables that impact how we, as a species, cooperate, compete and coordinate at scale. Much of my career has focused on applied ethics and trust in tech companies, along with the products and services they build. It’s never that simple, but this is a good enough explanation for now.

I’d like to reflect on my experience as I reset and prepare for what’s next. I trust some of what I share will be interesting, insightful and/or useful.

For those of you short on time, here’s the TL;DR.

Ten years ago less was known (just walk away now with that killer insight. Seriously, I’m on my game today…). What I now see as being of critical importance — something that a much larger group might agree with — was a fringe issue for most corporate decision makers (at best). There was an emerging body of evidence, plenty of great work being done by passionate people and an undercurrent of public discourse. But, it was all pretty early.

Today things are hot. There’s an entire movement. Even though I’m over the moon about this, a big part of me remains skeptical. In systems parlance many of our efforts are focused on the ‘events’.

In the future I believe far more emphasis will be placed on the lower levels of the iceberg model. Through constant optimisation of today’s paradigm, with parallel efforts attempting to design new systems that ‘transcend the paradigm’, we might well design a future where technology serves, respects and protects us, by design (this has to be supported by something other than today’s corporatism… Probably obvious, but shouldn’t go unspoken).

This will require investment. This will require focus. This will require many of us to confront our mental models of today and curiously consider what might be, even if our speculations and ponderings seem completely utopian. This will require us to lean into the complexity and try to better map the system and its relationships so we can intervene to greater effect.

This might result in a world where far more of us spend more of our days truly experiencing life’s most meaningful moments. Technology’s role in this future is something I’ve published about before.

I’m excited to be a part of this journey.

For those of you wanting a little more of an experience, read on.

Where have we come from?

A broad question indeed. Pretty sure Yuval Noah Harari covered this in depth. Jokes aside, let me narrow the focus a little.

I’m not going back to the original intent of the internet or the values held by those involved in its early days. I’ll focus primarily on how our understanding of trust in tech has evolved. I’ll do this in a way that is hopefully relatable: describing my experience over the last decade and augmenting that basic narrative with some relevant data.

I first became fascinated with these issues — how technology and technology businesses were impacting society — when I was building my first company. I was trying to solve a specific issue I’d experienced myself: injury in elite sport.

To do this we needed to get to know athletes intimately. We’d do this by accessing various categories of data. We’d then build a model. These types of models would ideally inform interventions that decreased injury and increased the likelihood folks performed at their best. There’s real value in decent predictions in sport.

I took a very athlete centric view to all of this. After all, this was what drove me in the first place.

During this experience I realised — given that the teams were in fact our customers — that there was a power imbalance at play. This was driven by various factors, with information asymmetry playing a significant role.

I didn’t feel comfortable with this. So I took it upon myself to try and learn more. I ended up down rabbit holes that Alice would envy. In fact, I’m still navigating that maze.

This encouraged me to open myself up. It amplified my inherent curiosity. It led me to learn about a variety of topics, from moral philosophy through to sociology and economics (a non-exhaustive list).

Without getting too epistemic, if there’s anything I now ‘know’ it’s that all of this is complex.

I’ve been on this journey more than once…

Some slightly more detailed version of the Dunning-Kruger effect

I eventually got to the point where I’d established some basic beliefs:

  • Technology was impacting society in myriad ways, many of which we understood poorly or weren’t yet aware of. Often the benefits of technologies were the focus. Less attention was paid to the other side of that proverbial coin
  • Our right to privacy was (and is) important, but any meaningful privacy rights were seemingly being eroded by ‘the system’. This obviously led me down the, “what is privacy anyways?” line of questioning
  • Privacy Enhancing Tech was becoming increasingly prevalent, but it wasn’t a priority (or even close to one) within corporations
  • Everyday folks were seemingly nonchalant about the whole thing (there was cognitive dissonance and an intent-action gap; I’ve written about this a bunch in the past, specifically in relation to the mental models many of us hold, but this post won’t get into that). Maybe it was all too complex and abstract. This might also have been informed by cognitive biases such as hyperbolic discounting: any future risks, which might never be realised, were less tangible and relevant than the immediate benefits on offer today
  • The prevailing paradigm, goals of the systems, incentives of the system etc. were in many ways actively discouraging organisations from protecting people’s privacy (and other rights) by design
  • A lot of smart people around the world were doing brilliant work on all of these issues and more, but this was a very ‘fringe’ thing

This is simplified and non-exhaustive.

What this perspective did do was encourage me to very explicitly focus my work in this direction. I didn’t want to just build something where this was a consideration (a sports tech company with decent privacy, data protection and data ethics practices), I wanted this to be the thing I contributed to building (whatever that turned out to be).

What’s kind of interesting is that the data was pretty mixed at this time. Social media and other ‘big tech’ were often seen more favourably than they are today. There was more general support for, and belief in, the missions of these types of companies. Fewer issues had surfaced publicly. My conversations with various folks within commercial organisations were largely unproductive. The opportunities and challenges I was attempting to expose were simply not seen as important. At best it was as if these issues might become material one day, and at that point, decisions would be made to consider the related implications.

Then things began to accelerate. Pew’s privacy studies were increasingly alarming (keeping in mind the intent-action gap). Ipsos MORI’s work for the Royal Society showcased a data trust deficit, at least in the UK. The World Economic Forum began publishing various reports. These issues started making news in places like TechCrunch. The major tech companies really started getting into trouble, year on year slipping further down various ‘trust leaderboards’. Big things like Open Banking were coming into effect. The regulatory landscape was evolving reasonably quickly.

Oh yeah, and there’s that whole crypto/DLT thing. What’s up with that? ;)

More recently we’ve seen pretty extensive research that demonstrates trust might well disproportionately impact bottom line business outcomes (Accenture’s 2018 competitive agility index). There’s a growing body of research showcasing a relationship between trust and active data sharing (ODI and Frontier Economics), along with its broader value on society and the economy.

ODI and Frontier Economics, 2021

There’s more academic emphasis, particularly in the behavioural sciences, being placed on how choice architecture impacts our privacy preferences and related behaviours. I’ve actually contributed to some super interesting research in this area with Kent Grayson from Northwestern. There’s pretty strong evidence of a solid gap between stated corporate values and actual behaviour (Donald Sull et al., MIT CultureX), which is clearly a much broader issue than we can cover in this post. Folks around the world boycotted WhatsApp at scale, almost overnight. Big Tech, as ironic as this might sound, is under more intense surveillance than ever before.

I could go on. And I’m not even going to touch the pandemic, for now.

In short, society is having a more meaningful conversation, not just about what has happened, but whether it was (morally) ‘right’, ‘optimal’ or ‘better’ than the alternatives.

Thankfully this conversation is alive and well today.

So, where are we today?

There’s an entire movement around tech ethics and responsible innovation. It’s stronger than ever before. More folks are contributing their voices. Folks are leaving companies misaligned with their values. More companies are ‘leaning in’. There are more supportive academic pathways for folks interested in these opportunities and challenges. Heck, even the folks at Davos are joining the party ;)

Even though this is the case, I’m healthily skeptical about the trajectory. In the simplest of terms, this is mostly because we seem to be (in systems parlance) tackling the events.

Disrupt Design

There’s less pattern work. Even less structural work. And, as far as I can tell, far less work on mental models or attempts to transcend the paradigm (see the Iceberg Model. I can also highly recommend Disrupt Design’s courses and work more generally. They are awesome!).

I talked about this at a Microsoft event earlier in the year. Funny that…

Let me be clear. Being skeptical doesn’t mean I’m not optimistic about our prospects. I believe we can meaningfully transform. It’s possible. I believe we are better placed than ever, largely because there is such a diverse group of folks globally directing attention this way. I believe that by working together, we give ourselves the best chance of a better future (even if the specifics are unclear right now).

So to summarise quickly before moving on, this — the importance of trustworthiness and trust in tech (+ other businesses and institutions) — is a real thing. This is a board level conversation. This impacts corporate strategy. It’s impacting the way products and services are designed. It’s impacting everyday conversations. We have begun to change the narrative. We might be on a better trajectory.

But, for more reasons than I’ll describe right now, there is a lot of mistrust and active distrust.

To end up in a world where there is well-placed trust, and where those who are most trustworthy are the recipients of said trust, we have to keep pushing the boundaries and keep attempting to do better.

What about the future…?

Predictions are hard, especially about the future. So let me pose some basic speculations instead.

I see two ongoing streams, or themes, of work today.

The first is about making incremental change in the hope it betters our current systems. This is about designing better tech within the current architecture. This is about better aligning incentives. This is about less manipulative nudging. This is about trying to do what is right, rather than what we can get away with. This is about regulating (maybe). This is about many things that have begun to feel quite familiar.

Most of my ‘active’ career has been focused here.

The second is about designing new systems that may, over time, replace the systems we rely upon today. This is more ambitious, longer term, tough for most folks to ‘get’ and something that I have mostly been observing.

I expect to continue spending plenty of time in the first category. My hope is that, by doing this, I’ll help progressively expose different ways of operating, get plenty of wins on the board and help folks productively, and on their terms, challenge their own mental models.

I still see this being where most of the emphasis is for the next few years at minimum.

I also see continued parallel work in the second category. Over time, though it’s impossible to say so with any confidence, these new systems might gain popularity. Eventually they could replace the systems we rely upon today.

I see this as being far broader than something like a transition towards the SAFE Network. I see this being more connected and dependent. Or rather, interconnected and interdependent.

Some folks might refer to this as something like collective consciousness. In development models, specifically as they relate to organisations and organising, this could be likened to Evolutionary Teal. I am not sure what to call it. I am not sure what it will become. I just see the potential for much of our existence to be better supported by systems that are fit for our purposes, our aspirations and our general flourishing as a collective.

And I think (my best guess right now) that for this to happen, we have to see these connections for what they are. It might be simpler to separate something like the ethics of the tech we build from the ethics of an organisation’s broader operating model (how it impacts people’s lives, the biosphere etc.), but I’m not sure that approach will serve us in the way we need.

I’ll stop now. I’ve gone on long enough. I hope I’ve encouraged some pondering of the possibility.

I’m excited to get back to this work. I feel fresh. I’m ready to go.

Hit me up with comments, observations about what I’ve missed, requests for clarity or further information etc. Let’s keep building the conversation. Let’s actively explore alternatives. Let’s build optimism through empirical validation. From there we can double down on the stuff that works and scrap what doesn’t. Through this intentional process we can strive for a better tomorrow.

Hoping you are all safe and healthy. With love.

Nathan Kinch

A confluence of Happy Gilmore, Conor McGregor and the Dalai Lama.