📰 How incentives impact design decisions in tech (Spam Mail #5)

Interfaces, unintended consequences, and capitalism.

Hi there,

I'm trying something a little different this week. If you're just here looking for 💩 Cool shit, scroll down for the links.

📰 A word: I've been reading about the role technology plays in society today, and thinking about why it seems fraught with so many issues. So I'm going to try to thread a few different articles together and dissect what's happening. Let me know what you think. I hope you like it.


We judge new technology by how it makes our lives easier - how quickly you can get anything to your door, how simply you can send a meme to anyone. That way of thinking exists entirely in the present, with no pause to consider what could happen next. In that paradigm, technology only solves immediate, specific problems, and design decisions pay little regard to unintended consequences.

I'm going to talk about interfaces a bit here. The present bias in tech means we think of interfaces as just a way of controlling the tech we use. But I don't think that's true. From Why Do We Interface?:

We design our interfaces and they in turn redefine what it means to be human.

There's a lot said about bias in algorithms, but not enough about bias in the design of tech itself. The design decisions we make can outlive us. Just as algorithms adopt the bias of their training data, technology adopts the bias of its design decisions.

There's a bias toward speed in the tech world, and with it a desire to make customer experiences frictionless. From How Design Contributes to Individualism:

Human-centered design, with its focus on individual needs, risks being an agent of individualism: products are too often designed for pleasure before sustainability; to mirror what people believe rather than broadening their perspective; to make things effortless instead of deliberative. Addressing communal needs requires inspiring collective effort and shifting collective behavior - and collective effort often requires individual sacrifice.

That individualistic thinking is fixed in the present bias. Design for today. A fascinating example of this is HOTorNOT. Remember it? Either way, you should go read this wonderful piece about it by Mashable. The site was a thing of its time on the web, but its core idea of scoring and rating people set the blueprint for just about every dating and social app we use today:

Created on a lark in 2000, HOTorNOT became what we'd now call an overnight viral hit by letting people upload pictures of themselves to the internet so total strangers could rate their attractiveness on a scale of 1 to 10. Twenty years later, it's a conceit that smacks of the juvenile "edginess" of the early web. It's now seen at best as superficial and crass, at worst as problematic and potentially offensive. However, the deeper you dive into HOTorNOT's history, the more surprised you'll be by the thoughtfulness bubbling below its shallow surface - and its fundamental impact on internet history.

If technology is a reflection of its time, HOTorNOT may have been fun in the early days of the web, but as the internet matured, so did business models. Capitalism now rules our favorite platforms.

When tech faces issues such as bias, hate speech, or disinformation, the entire frame of thinking again falls into that paradigm of immediate, specific problems. 'Solutions' to these issues become a game of whack-a-mole, addressing discrete cases, because tackling the systemic issues would fundamentally threaten the existence of many tech businesses. The NYT reported that Facebook temporarily tweaked its news feed algorithm to combat disinformation. Arguably a qualitative improvement, but one that challenged its bottom line (hence temporary). The incentive is capitalism - growth at all costs - and the business model is to achieve that by solving discrete, individual customer problems at the expense of any future societal outcomes.

That conflict has come up a few times in just the past few weeks. It's why Facebook's Oversight Board is spending 90 days discussing nipples on Instagram instead of tackling disinformation:

The remit of the Oversight Board is designed to be restrictively narrow: The board can only take up appeals against removal of content (meaning it cannot look at cases where disputed material is left up rather than taken down), and it cannot review cases that are not appealed. Despite Facebook's alleged commitment to transparency, confidentiality is a key priority for the company when it comes to the inner workings of the Oversight Board. Members of the board cannot discuss their work except through authorized public relations channels.

And it's why, when Google's co-lead AI ethicist, Timnit Gebru, challenged the risks of language models in the search algorithm, she was forced out.

[AI language models] have grown increasingly popular - and increasingly large - in the last three years. They are now extraordinarily good, under the right conditions, at producing what looks like convincing, meaningful new text - and sometimes at estimating meaning from language. But, says the introduction to the paper, "we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks."

How can we design technology for the future of society if there's no incentive to do so? This is where I come back to interfaces. Just as HOTorNOT created a blueprint for algorithmically scoring people, TikTok is reimagining the human-computer interface. Eugene Wei calls it algorithm-friendly design: the app's interface is explicitly designed to train its recommendation algorithm. TikTok's success is its recommendation engine, and that engine is as much its interface as its algorithm. A slide deck by Attention Factory lays out TikTok's design like this:

The difference between search engines and recommendation engines is encapsulated by the concept of "people looking for information" vs. "information looking for people."

That is a seismic shift in how we think of interfacing with technology, and it's a step beyond the recommendation engines we use in Facebook, Instagram, and Google (those are still based on looking for something first). So I come back to unintended consequences. If we continue to design so that information comes to us, what does that mean for our collective experience in ten years? Remember, solving discrete, individual problems isn't the primary incentive; that's only how companies grow. So I can only wonder how far we're moving away from tech improving our lives when the very design of the apps we use is predominantly not for us.
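To make that shift concrete, here's a minimal sketch in Python of the two paradigms. It's entirely hypothetical - every name and number below is made up for illustration, and this is not how TikTok's actual system works:

```python
# A toy contrast between "people looking for information" and
# "information looking for people". Hypothetical illustration only -
# not TikTok's actual system.
import random
from collections import defaultdict

VIDEOS = {
    "v1": {"cooking"}, "v2": {"dance"}, "v3": {"cooking", "comedy"},
    "v4": {"dance", "comedy"}, "v5": {"news"},
}

def search(query_tag):
    # Search-engine paradigm: nothing happens until the user
    # explicitly asks for something.
    return [vid for vid, tags in VIDEOS.items() if query_tag in tags]

class ForYouFeed:
    # Recommendation paradigm: one full-screen video at a time, so
    # every swipe and watch is an unambiguous training signal.
    def __init__(self):
        self.interest = defaultdict(float)  # learned score per tag
        self.seen = set()

    def next_video(self):
        # Serve the unseen video that best matches learned interests,
        # with a dash of randomness for exploration.
        candidates = [v for v in VIDEOS if v not in self.seen] or list(VIDEOS)
        pick = max(
            candidates,
            key=lambda v: sum(self.interest[t] for t in VIDEOS[v]) + random.random(),
        )
        self.seen.add(pick)
        return pick

    def record_watch(self, vid, watch_fraction):
        # Implicit feedback: finishing a video (fraction near 1.0) is a
        # positive signal; swiping away instantly is a negative one.
        signal = watch_fraction - 0.5
        for tag in VIDEOS[vid]:
            self.interest[tag] += signal

feed = ForYouFeed()
first = feed.next_video()      # the feed decides what you see; you never asked
feed.record_watch(first, 0.9)  # merely watching trains the model
print(search("cooking"))       # the old paradigm still needs a query
print(feed.next_video())       # the new one already "knows" you a little
```

The point of the sketch is the asymmetry: search() does nothing until you ask, while the feed learns from the mere act of watching. The interface itself is the data-collection instrument.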

I'll caveat that blaming all of these societal issues on tech interfaces is overly simplistic. Jacobin Mag has an excellent essay arguing this: Don't Blame Social Media. Blame Capitalism:

This techno-deterministic narrative vastly inflates the capabilities of data capture and algorithms, and, in so doing, blames a whole range of problems on technology that have their root in more fundamental social and economic conditions of modern society. It is important to understand what effects these technologies are having on us, both personally and collectively, but failing to recognize the longer history of these problems and the broader structures that contribute to them will lead us to solutions that don't actually get to the root causes.

I agree with this take. I've focused on interfaces here because I feel that's where many of the decisions that lead to unintended consequences are made. Why that happens is rooted in politics and capitalism. We don't think about that enough.

***

I've focused on trying to make sense of what's going on. Identifying the problems is one thing; coming up with solutions is far more difficult. I'll likely dive deeper into solutions in a future issue, but for now, what I feel we need is to approach new technology with a little more friction.

Platforms like Facebook and Google feel public because they are so large, but when their decisions aren't made publicly, it's hard for academics and journalists to really assess what's going on.

In many other industries, risk assessments exist to identify potential harm: in engineering to prevent bridges from collapsing, in healthcare to prevent adverse reactions to drugs, and even digitally in cybersecurity. We should look to those methodologies for assessing risk in new technology too.

So there's a lack of transparency and accountability. Those seem like obvious principles to start with, but I don't see tech companies having the incentive to adopt them (see: capitalism). That's why I believe adding a bit of friction is needed, and regulation is one way to do that. In other industries, like financial services and automobiles, regulation added protections for people. Individually, though, a simple thing we can do is pause and think about the unintended consequences of the new tech we design and use.

And one final note: Wired has a wonderful idea that nods to how Walt Whitman would 'advocate for a space that would accommodate everyone' in the suggestion To Mend a Broken Internet, Create Online Parks:

Much of our communal life now unfolds in digital spaces that feel public but are not. When technologists refer to platforms like Facebook and Twitter as "walled gardens" - environments where the corporate owner has total control - they're literally referring to those same private pleasure gardens that Whitman was reacting to. And while Facebook and Twitter may be open to all, as in those gardens, their owners determine the rules.

💩 Cool shit

Random and awesome links from the web to end the week with.

Network Effect - It explores the psychological effect of internet use on humanity. It's also one of the coolest interactive data projects ever.

YDays - This app has a really fresh take on social. Get a daily drawing prompt for you and a group of friends. How fun!

Kode Club - An incredible open-world website promoting a new sports club. It brings many video game elements to web interfaces. My fan starts whirring when I use it, but it's worth it.

Soundwalk - Designed to accompany walks around Central Park. It's a clever way of thinking spatially about music: depending on the route you take, the soundtrack changes.

AI Incident Database - AI is always claimed to be the panacea for all of life's problems. We all know that's not true; it's actually riddled with issues. Here's a database of all those incidents.

COVID-19 Indoor Safety Guideline - Calculate the COVID-19 risk of different indoor activities. Given how the virus is spreading in the US, this should be better known.



Your computer is now infected. If you ❤️ what you read, you can fix it by sharing this email with a friend.