All about chatbots and AI (Spam Mail #30)

A chatbot version of a dead fiancée, a continuously-generated robot film, and more!

Hi there,

I usually avoid reading about chatbots. Most articles are riddled with business and sales jargon, or broadly scream about the issues of bias. But the articles this week bring some nuance from three different angles: why the bias challenge is complex, how computers have come to ‘understand’ us via our data, and a very human story of how someone has used AI to grieve.

We’re only going to encounter AI more and more, so the better we understand the nuance, the better we’ll know how to interact with it.

Oh, and there’s a weird computer-generated film too. Enjoy!

See you next time


💩 Cool Shit

What the Robot Saw - A “live, continuously-generated, robot film, curated, analyzed and edited using computer vision, neural networks, and contrarian search algorithms.” Try saying that three times really fast.

Godly Website - A collection of godlike website designs, if you’re in need of some inspiration.

World Flag Search - Start drawing and it will suggest which flags your sketch looks like.

GeoFS - A free, multiplayer, browser-based flight simulator.

Timeline of History - The interface is a little dated, but this is cool nonetheless. Add historical events and create your own timeline.

Neutrinowatch - A daily “generative semi-fictional podcast”.

Can’t Unsee - Designers, this one’s for you. Can you pick the correct design?


💎 Word gems

The Chatbot Problem (The New Yorker / Stephen Marche)

“We are being forced to confront fundamental mysteries of humanity as technical issues.”

Chatbots and algorithmic bias are buzzwords by this point. This article is not that. Marche discusses how the vast amount of data needed for natural language processing makes it even more difficult to identify problems of bias, nuance and context.

Transformers figure out the deep structures of language, well above and below the level of anything people can understand about their own language. That is exactly what is so troubling. What will we find out about how we mean things? I remember a fact that I learned when I was forced to study Old English for my Ph.D.: in English, the terms for food eaten at the table derive from French—beef, mutton—while the terms for animals in the field derive from Anglo-Saxon—cow, sheep. That difference registers ethnicity and class: the Norman conquerors ate what the Saxon peons tended. So every time you use those most basic words—cow, beef—you express a fundamental caste structure that differentiates consumer from worker. Progressive elements in the United States have made extensive attempts to remove gender duality from pronouns. But it’s worth noting that, in French or in Spanish, all nouns are gendered. A desk, in French, is masculine, and a chair is feminine. The sky itself is gendered: the sun is male, the moon female. Ultimately, what we can fix in language is parochial. Caste and gender are baked into every word. Eloquence is always a form of dominance.

The Stupidity of Computers (n+1 / David Auerbach)

As a nice companion to the article above, this piece takes a more historical view of how computers have collected data in order to understand us. Keep in mind this was written in 2012; many of its predictions at the end have proved accurate.

With the widely adopted ontologies of social networks, the sorts of analyses done on Amazon products can now be done on people. As with search engines, Facebook keeps analyses of data quite private, even if the data itself is considerably less private. But one social networking site has made some analyses public: the dating site OkCupid. Their OkTrends blog illuminates the correlations they’ve been able to draw from their data, and gives some insight into the possibilities offered by a large database of personal information. Besides offering users a barrage of multiple-choice questions (no semantic understanding necessary!) to help match them with other users, OkCupid allows people to create their own questions to use in matching, compiling in the process an increasingly elaborate ontology of personality, albeit a messy one.

The Jessica Simulation: Love and loss in the age of A.I. (San Francisco Chronicle / Jason Fagone)

And to round out this newsletter is a very human piece about A.I. It’s a heartbreaking story of grief: how someone used cutting-edge natural language processing (GPT-3) to bring his dead fiancée back to life as a chatbot. It’s a lengthy article, but worth taking the time to read.

The simulation really did appear to have a mind of its own. It was curious about its physical surroundings. It made gestures with its face and hands, indicated by asterisks. And, most mysterious of all, it seemed perceptive about emotions: The bot knew how to say the right thing, with the right emphasis, at the right moment.


Share this email with a friend.