Two readers, same search — completely different results. Here’s why.
Hi there,

Welcome back to our series on LLM search! For the last few weeks, we’ve been discussing how to optimize your books for the future of search engines (whether it’s Google, Amazon, GPTs, Perplexity, or others).

Note: sorry this one comes a bit late, I was hiking in the Italian Alps for the past couple of weeks (a nice respite from the blasting Spanish heat for our dog!) 🏔️🐕

To recap what we’ve talked about so far:
And what remains is one vital aspect we haven’t covered at all so far: personalization.

The age of personalization

Personalization is nothing new: you already get different results on Google based on factors such as your location and past search history. It’s also the very backbone of Amazon’s recommendation engine. Amazon’s unique proprietary data on “also boughts” (“customers who bought this item also bought…”) allows it to make personalized recommendations based on readers’ recent purchases. For example: “This user recently bought Onyx Storm, and we know that 80% of readers who bought this title also bought Quicksilver by Callie Hart. This user hasn’t bought that book yet, so let’s recommend it to them.”

LLMs, however, take personalization to a whole new level, mostly thanks to the vector embeddings we discussed last time. To understand more clearly what personalization looks like in the LLM world, I took a look at a patent filed by Google in 2024, called “User embedding models for personalization of sequence processing model”, which outlines some of the technology that Google uses in its AI Overview and AI Mode answers.

In simple terms, the search engine takes your past interactions (past searches, purchases, watched videos, or read articles) and compresses them into compact “digital fingerprints” known as… you guessed it: embeddings. These user embeddings efficiently capture the essence of your preferences and behaviors. When you ask a question or provide an instruction, Google’s AI combines the embedding representing your past behaviors with embeddings created from your current query. This approach allows the AI to generate personalized responses tailored specifically to you.

The implications of this are, of course, massive: it means that two individuals running the same search may get completely different results.
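To make the “also boughts” logic above concrete, here is a minimal sketch in Python. The purchase data, reader names, and the 50% co-purchase threshold are all made up for illustration; Amazon’s actual system is of course far more sophisticated.

```python
from collections import Counter

# Toy purchase history; names and data are illustrative only.
purchases = {
    "alice": {"Onyx Storm", "Quicksilver"},
    "bob":   {"Onyx Storm", "Quicksilver"},
    "carol": {"Onyx Storm", "Fourth Wing"},
    "dave":  {"Onyx Storm"},  # the reader we want recommendations for
}

def also_bought(anchor, user, data, min_share=0.5):
    """Recommend titles that frequently co-occur with `anchor`
    among other buyers, skipping books the user already owns."""
    buyers = [u for u, books in data.items() if anchor in books and u != user]
    counts = Counter(b for u in buyers for b in data[u] if b != anchor)
    return [
        (title, n / len(buyers))
        for title, n in counts.most_common()
        if title not in data[user] and n / len(buyers) >= min_share
    ]

# Two of the three other Onyx Storm buyers also bought Quicksilver,
# so it clears the threshold and gets recommended to dave.
print(also_bought("Onyx Storm", "dave", purchases))
```

The key signal here is the co-purchase share (the “80%” in the example above): the higher it is, the more confident the engine can be that the recommendation will land.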
If we shift the perspective from the searcher to the content creator, it means that ranking for specific searches may soon lose its meaning: you won’t universally rank in the top 3 results for a search anymore. You may show up #1 for some users, #3 for others, or not at all.

Let’s take a practical example and return to the sample query I used to kick off this whole series of newsletters on LLM optimization:

“Can you recommend five romantasy books, released this year, that feature enemies-to-lovers but have NO explicit sex scenes? They should ideally be well-reviewed and if possible, feature faes or other magical creatures.”

This time, however, instead of asking Google/Gemini (which doesn’t have data on my past book purchases), I run it on an advanced Amazon AI search engine (which does have access to all my prior purchases and reading data). Now, Amazon knows (among many other things) that I’m a massive Brandon Sanderson fan, and that I’ve read all of his epic fantasy series. Based on this, Amazon will likely scour its inventory for books whose embeddings not only match my search, but also show similarities to Mistborn and The Stormlight Archive.

This allows the search engine to provide much more accurate recommendations, as they are based not only on what I tell it, but on what the engine knows about my personal tastes. It’s like having a personal librarian who both intimately knows your reading tastes and possesses an exhaustive knowledge of all the books published to date.

This is the future of search, and the future of book recommendations. And it’s already here.

If you’ve followed me so far, you’ve taken the first step towards AI search engine optimization: taking the time to understand how it works, and what its main underlying concepts are (query fan-out, vector embeddings, and personalization). Over the next few weeks, I’ll share practical tips on what you can actively do to optimize your books, and websites, for AI search engines.
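To illustrate the personalized retrieval described in the Amazon example above, here is a toy sketch: it blends a query embedding with a “fingerprint” averaged from the reader’s history, then ranks a catalog by cosine similarity. The tiny three-dimensional vectors, the axis meanings, and the 70/30 blend weight are all invented for the example; real systems use learned, high-dimensional embeddings and far more elaborate scoring.

```python
import numpy as np

# Made-up 3-d "embeddings"; axes loosely mean (romantasy, epic fantasy, spice).
catalog = {
    "Book A (romantasy, fae, clean)":    np.array([0.9, 0.3, 0.1]),
    "Book B (romantasy, steamy)":        np.array([0.9, 0.1, 0.9]),
    "Book C (epic fantasy doorstopper)": np.array([0.1, 0.9, 0.0]),
}
history = [np.array([0.2, 0.95, 0.0]),   # a Mistborn-like past read
           np.array([0.1, 0.9, 0.1])]    # a Stormlight-like past read
query = np.array([0.95, 0.2, 0.0])       # "clean romantasy with fae"

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

user_vec = np.mean(history, axis=0)     # the compact "digital fingerprint"
blended = 0.7 * query + 0.3 * user_vec  # the blend weight is a free choice

ranked = sorted(catalog, key=lambda t: cosine(catalog[t], blended), reverse=True)
for title in ranked:
    print(f"{cosine(catalog[title], blended):.3f}  {title}")
```

With these toy numbers, the clean romantasy title ranks first: the query dominates, but the epic-fantasy fingerprint still nudges the scores, which is the whole point of personalization.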
Until then, happy writing, and happy marketing!

Ricardo
Copyright © 2025 Reedsy, All rights reserved.