#83: We're distracted + the cost of LLMs in search

Plus SMB plumbing, some GPT-powered products, and more.

💎 Word gems

We’ve always been distracted (Aeon / Joe Stadolnik)

This is not an excuse to dismiss the negative impacts of new technology, but it is helpful to put our current concerns about our lack of attention into context.

For as long as technologies of writing and reading have been extending the mind, writers have offered strategies for managing that interaction and given advice for thinking properly in media environments that appeared hostile to ‘proper’ thought. It’s not hard to find past theories of the ways that technologies, such as printed books or writing, shaped thought in past millennia. However, these theories don’t give us a sense of exactly how minds were being shaped, or a sense of what was gained by thinking differently. To understand the entanglement of books and minds as it was being shaped, we might turn to readers and writers in Europe during the Middle Ages, when bookshelves swelled with manuscripts but memory and attention seemed to shrivel.

The processing costs of LLMs are a threat to Google's search revenue (SemiAnalysis / Dylan Patel and Afzal Ahmad)

This two-part analysis by SemiAnalysis is outstanding. Some numbers may be estimates, but the insights are great nonetheless.

The current cost of an LLM query is higher than that of a search query:

ChatGPT currently costs ~$700,000 a day to operate in hardware inference costs. If the current implementation and operation of ChatGPT were ham-fisted into every Google Search, it would represent a tremendous increase in cost structure to the tune of $36 billion. Google’s annual net income for their services business unit would drop from $55.5 billion in 2022 to $19.5 billion.
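As a sanity check, the quoted figures hang together arithmetically. A quick back-of-envelope sketch (only the three dollar amounts come from the article; everything else is simple arithmetic):

```python
# Figures quoted from the SemiAnalysis article:
chatgpt_daily_cost = 700_000   # USD/day, ChatGPT hardware inference cost
added_search_cost = 36e9       # USD/year, if LLMs ran on every Google search
google_net_2022 = 55.5e9       # USD, Google services net income in 2022

# Annualizing ChatGPT's stated daily cost:
chatgpt_annual_cost = chatgpt_daily_cost * 365  # ~$255.5M/year

# The article's projected net income after absorbing the LLM cost:
net_after_llm = google_net_2022 - added_search_cost  # ~$19.5B

print(f"ChatGPT alone: ~${chatgpt_annual_cost / 1e6:.1f}M/year")
print(f"Google services net income after LLM cost: ~${net_after_llm / 1e9:.1f}B")
```

The striking part is the gap between the two: ChatGPT at its current scale costs hundreds of millions a year, but at Google's query volume the same approach would cost tens of billions.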

And this is to Microsoft’s advantage against Google:

Bing has a meager market share. Any share gains Microsoft grabs will give them tremendous top-line and bottom-line financials.

If anything can be predicted in tech, it's that processing costs go down and everything becomes cheaper. But in the short-to-medium term, while everyone is still optimizing, OpenAI is shaking up the search economy, and I'm very curious to see how it plays out:

The key for the user-facing models and future AI silicon is increasing their context window so more of the preceding model or source material can be fed forward through the layers. Scaling sequence length is also very costly in terms of inference costs, which will balloon your cost structure.
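The reason scaling sequence length balloons costs is that self-attention compares every token with every other token, so its compute grows quadratically with context length. A toy illustration (the model dimension and FLOP count here are illustrative, not from the article):

```python
def attention_flops(seq_len: int, d_model: int = 4096) -> int:
    """Rough multiply-add count for one self-attention layer:
    the QK^T score matrix plus the attention-weighted values,
    each roughly seq_len^2 * d_model operations."""
    return 2 * seq_len**2 * d_model

base = attention_flops(2048)     # e.g. a 2K context window
doubled = attention_flops(4096)  # doubling the context window...
print(doubled / base)            # ...quadruples the attention cost
```

This is why a bigger context window isn't a free product improvement: serving it gets disproportionately more expensive per query.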

💩 Cool shit

Super Mario Bros. Plumbing - The official site for the Mario & Luigi plumbing business featured in the latest movie ad. It's filled with lots of nostalgic fun.

Sheet Formula - Type what you want your Excel formula to do in plain text and it will generate it for you.

Stable Attribution - A great tool that shows you the source images behind Stable Diffusion.

Roastd - I don’t know how real this is but if you want someone to roast your landing page here you go.

Why Pelé was so great - Some great visualizations here celebrating Pelé.

ChefGPT - An example of the types of niche products we’re going to be seeing more of with large language models. This one generates recipes based on the ingredients you have.

WTF does this company do - Put in a URL and it uses GPT-3 to explain what the company does.


Share this email with a friend.