Talks: Trusting Machines and Strangers With Our Future


Rachel Botsman: We’ve stopped trusting institutions and started trusting strangers

Rachel Botsman is an author and a visiting academic at the University of Oxford, Saïd Business School. Her work focuses on how technology is enabling trust in ways that are changing the way we live, work, bank and consume. She defined the theory of “collaborative consumption” in her first book, What’s Mine Is Yours, which she co-authored with Roo Rogers. The concept was subsequently named by TIME as one of the “10 Ideas that Will Change the World” and by Thinkers50 as the 2015 Breakthrough Idea.

Named a “Young Global Leader” by the World Economic Forum, Botsman examines the growth and challenges of start-ups such as Airbnb, TaskRabbit and Uber. She is a regular writer and commentator for leading international publications, including the New York Times, The Wall Street Journal, Harvard Business Review, The Economist and WIRED. She is currently writing a new book that explores why the real disruption happening isn’t technology; it’s a profound shift in trust.

  • A trust leap happens when we take the risk to do something new or different from the way we’ve always done it.
  • I define trust as a confident relationship to the unknown. Now, when you view trust through this lens, it starts to explain why it has the unique capacity to enable us to cope with uncertainty, to place our faith in strangers, to keep moving forward.
  • “Climbing the trust stack”: let me use BlaBlaCar as an example to bring it to life. On the first level, you have to trust the idea — that ride-sharing is safe and worth trying. The second level is about having confidence in the platform, that BlaBlaCar will help you if something goes wrong. And the third level is about using little bits of information to decide whether the other person is trustworthy.

Zeynep Tufekci: Machine intelligence makes human morals more important

We’ve entered an era of digital connectivity and machine intelligence. Complex algorithms are increasingly used to make consequential decisions about us. Many of these decisions are subjective and have no right answer: who should be hired, fired or promoted; what news should be shown to whom; which of your friends’ updates you see; which convict should be paroled. With the increasing use of machine learning in these systems, we often don’t even understand how exactly they are making these decisions. Zeynep Tufekci studies what this historic transition means for culture, markets, politics and personal life.

Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at University of North Carolina, Chapel Hill, and a faculty associate at Harvard’s Berkman Klein Center for Internet and Society.

  • Machine learning is different from traditional programming, where you give the computer detailed, exact, painstaking instructions. It’s more like you take the system and feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data (see the sketch after this list for a minimal contrast).
  • Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember, this is for things you haven’t even disclosed. This is inference.
  • We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms.
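
To make Tufekci’s contrast concrete, here is a minimal sketch in Python. The rule-based function and the toy spam data are illustrative inventions, not from her talk; the second half uses scikit-learn simply as an example of a system that learns by churning through data rather than following hand-written instructions.

    # Traditional programming: the computer gets detailed, exact,
    # hand-written instructions from a human.
    def spam_by_rules(message: str) -> bool:
        return "free money" in message.lower() or "act now" in message.lower()

    # Machine learning: the system is fed labeled examples and
    # infers the decision rule itself.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = ["free money, act now!", "lunch at noon?",
                "you won free money", "see you at noon"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy data)

    vectorizer = CountVectorizer()
    model = MultinomialNB()
    model.fit(vectorizer.fit_transform(messages), labels)

    # The trained model classifies text it has never seen, based on
    # whatever patterns it extracted from the data.
    print(model.predict(vectorizer.transform(["claim your free money"])))  # [1]

The point of the contrast: in the first approach a human encodes the decision rule and can read it back; in the second, the rule is whatever the data implies, which is why such systems can be opaque even to the people who deploy them.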