Is LLM Optimization a Myth? Two Machine Learning Engineers Explain What’s Actually True.

If you spend enough time on LinkedIn, you’ll notice many self-proclaimed LLM content optimization experts preaching the death of SEO (again). Their claims are bold: if you want customers to find you, you need to show up not just in Google Search, but also in the answers ChatGPT, Gemini, Claude, and Perplexity AI give.

The idea that you can optimize content for AI models the same way you once did for search engines is certainly tempting. But is it actually possible?

We wanted to find out what’s happening behind the buzz. Do large language models really change how content becomes visible online? Or is content optimization for AI search just a new term marketers use to sell the same old SEO?

To get an honest, technical perspective, we spoke with two people who build these systems, not just write about them: Machine Learning Engineers Oleh Demkovych and Yurii Guts.

They explain how LLMs read, interpret, and retrieve content, and what that means for anyone writing it.

Introduction:

Yurii Guts has spent more than fifteen years working across software, AI, and data science, building enterprise AI systems for Fortune 500 and public-sector clients. He believes LLMs are indeed changing the rules, but SEO isn’t going anywhere.

Oleh Demkovych has over eight years of experience in machine learning and another five in Java development. He approaches new marketing trends with the precision of an engineer and a fair amount of skepticism, especially when claims sound too good to be true.

Do LLMs really change online visibility or just reshape SEO content strategy?

Marketers say attention is shifting from Google search results to AI-powered conversational platforms. From your perspective, does that actually change how businesses gain online visibility and web traffic? Or do the same ranking factors still apply behind the scenes?

Yurii Guts:

I’m cautious about big claims, but I do think LLMs are changing how content gets noticed online. AI tools attract users because they give answers fast, feel more personal, and (at least for now) skip the ads. 

Meanwhile, traditional search is going through a bit of a crisis. Users keep running into irrelevant, sponsored, or low-quality results. Over time, it’s natural that people start leaning toward AI tools that feel cleaner and more direct.

Oleh Demkovych:

LLMs are definitely changing how we interact with information, but not necessarily how content becomes visible. Under the hood, they still depend on web search or curated data sources.

The same fundamentals — accessibility, structure, and credibility — still decide what gets surfaced. 

The attention shift is real, but the way content becomes visible hasn’t really changed.

Takeaway:

While LLMs are changing where people look for information, the mechanics of visibility haven’t been reinvented. AI tools still rely on indexed, structured, and credible web content. What’s really shifting is user behavior, and not the underlying principles that decide what surfaces.

Is LLM optimization replacing SEO content optimization, or just changing how we think about search?

Some marketers claim SEO is becoming irrelevant as “LLM optimization” takes over. Others say it’s simply evolving and adapting to new user behaviors, AI tools, and search interfaces. So which is it: another marketing buzzword, or a genuine shift in how visibility works online?

Yurii Guts:

SEO isn’t dying — it’s reshaping. I’m not a marketer, but I’ve been building websites since the early 2000s, and I’ve always wanted SEO — in the sense of “manipulating the algorithm” — to disappear.

What makes more sense now is a multichannel approach: combining search, social media, messengers, and yes, AI tools too. Especially with younger audiences, that’s the natural direction. People will use whatever helps them find answers faster. That’s the real shift.

Oleh Demkovych:

Keyword stuffing — that’s dying for sure. 

Traditional SEO tricks like matching exact keywords don’t work anymore.

LLMs and even modern Google Search already use vector-based retrieval instead of literal keyword matching. That shift began long before LLMs became mainstream.

But classic indexing still matters. For your content to appear in an AI answer, it first has to be indexed by a search engine. If the search engine ranks it poorly — say, because it’s just a list of bullet points with no depth — it won’t show up in search results, and the model won’t see it either.
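To make the vector-retrieval point concrete, here is a minimal sketch of semantic matching. The sentence-transformers library and the all-MiniLM-L6-v2 model are arbitrary, open-source stand-ins chosen for illustration; commercial search and AI systems use their own embedding models and pipelines.

```python
# A minimal sketch of vector-based retrieval, for illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "cheap tools to track shipments"
passages = [
    "Our low-cost logistics software lets small businesses follow every delivery in real time.",
    "The museum's new exhibit explores Renaissance painting techniques.",
]

# Query and passages are mapped into the same semantic space...
query_vec = model.encode(query, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)

# ...and ranked by cosine similarity. The first passage scores far higher
# even though it shares almost no exact words with the query: the match
# is on meaning, not keywords.
print(util.cos_sim(query_vec, passage_vecs))
```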

Takeaway:

The hype around “LLM optimization” may be new, but the principle isn’t. SEO isn’t dead — it’s adapting. The focus is shifting from keyword games to structure, clarity, and credibility. Whether your content is surfaced by Google or an AI model, quality remains the key to visibility.

How do LLMs understand the content they process, and where do their answers come from?

Let’s look at a practical example. When someone asks an AI chatbot for “top logistics software companies,” how does a specific brand or business name end up in that answer? Does the model rely on pre-trained data, fresh crawling, or web indexing in real time?

Yurii Guts:

We don’t know exactly how commercial AI services generate their results — they’re proprietary systems. But if you look at how they behave, you can see they follow the same basic logic as most modern LLMs.

When a user sends a query, the model also receives a kind of hidden instruction that it can use certain external tools. One of those tools is web search.

After that, the model can go two ways: it can generate an answer based on its own training data, which gets updated a few times a year from large internet-crawl datasets. Or it can decide to run a web search first, pull fresh results, and then use those to build a richer context for its final answer.

In practice, what you get is a blend: the model combines what it already ‘knows’ with what it just found online.

Oleh Demkovych:

Most LLMs rely on a hybrid approach: they blend pre-trained knowledge with real-time retrieval.

A model already has its own knowledge — the training data it was built on. But that data doesn’t cover everything. It’s curated and moderated by the team developing the model, not the entire Internet. So the chance that an article about a random company will end up in a model’s training set is actually very low. Training is expensive and happens only a few times a year.

To make the model more useful in real-world settings, developers connect it to tools like search engines. For example, Gemini uses Google Search under the hood, pulls relevant data, and then adds reasoning on top of it. So even if the LLM “sounds” self-contained, it still relies on search to refresh its answers.
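As a rough sketch of that hybrid flow (not how any particular vendor actually implements it), the logic looks something like this, with search_web and llm_answer as placeholder stubs invented for the example:

```python
# A simplified sketch of the hybrid flow described above. `search_web` and
# `llm_answer` are placeholder stubs, not any vendor's real API.

def search_web(query: str) -> list[str]:
    # Placeholder: a real system would call a search engine here.
    return ["Fresh snippet about the query...", "Another indexed page excerpt..."]

def llm_answer(prompt: str) -> str:
    # Placeholder: a real system would call the language model here.
    return f"Answer built from: {prompt[:80]}..."

def answer(query: str, needs_fresh_data: bool) -> str:
    if needs_fresh_data:
        # Retrieval path: pull search results and fold them into the prompt,
        # so the model reasons over fresh, indexed content.
        context = "\n".join(search_web(query))
        prompt = f"Use these sources:\n{context}\n\nQuestion: {query}"
    else:
        # Parametric path: rely only on what the model learned in training.
        prompt = query
    return llm_answer(prompt)

print(answer("top logistics software companies", needs_fresh_data=True))
```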

Takeaway:

Large language models don’t generate answers in isolation. They combine what they’ve already learned with real-time search results, pulling from the same SERP everyone else uses.

Do LLMs care more about entities or keywords when processing content?

Writers often hear that models like GPT-4 focus more on context and entities (like “AWS,” “Kubernetes,” or “Stripe”) than on keywords. How do LLMs really interpret unstructured data? And does using exact brand or product names still make a difference for content ranking?

Yurii Guts:

LLMs are built on two key ideas: attention and meaning. They don’t match exact words; they understand context. Every word or phrase is represented as a point in a huge semantic space, where similar concepts sit close together.

When the model generates text, it uses attention to look for relevant meanings. It doesn’t search by keywords. It works with algebraic relationships between those points in that semantic space.

Technically, it doesn’t even process words — it works with tokens, which can be a single letter, part of a word, or a few words. The model’s tokenizer builds that vocabulary during training.

LLMs aren’t thrown off by synonyms or different word forms. They’ve seen those millions of times before and learned to connect them. The only time you might need to be careful is with very niche terms that have little public data.
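For a concrete look at tokens, here is a tiny example using OpenAI’s open-source tiktoken tokenizer; it’s just one tokenizer among many, since every model family builds its own vocabulary.

```python
# Different word forms become different token sequences, yet the model's
# learned embeddings keep them close together in semantic space.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # vocabulary used by several OpenAI models

for word in ["optimize", "optimizing", "optimization"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{word!r:18} -> {token_ids} {pieces}")
```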

Oleh Demkovych:

I wouldn’t say models really focus on entities. It’s broader than that. They process entities together with the relationships between them and the context they appear in. Essentially, yes, they look at the whole text to understand the meaning.

Smaller, topic-focused pieces work better. Unlike old SEO, where you’d write long articles packed with keywords, LLMs look for smaller, semantically relevant chunks. If your paragraph covers multiple topics at once, the chance it matches any specific query goes down.

So, it’s better to create specific, context-rich pieces than long general overviews.

The more concentrated the topic, the better models (and people) understand it.
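A toy sketch of why that works: imagine a retrieval layer splitting an article into paragraph-sized chunks and scoring each one against a query. Word overlap is used below as a deliberately crude stand-in for the embedding similarity a real system would compute.

```python
# Toy example: split an article into paragraph chunks and score each one
# against a query. Word overlap is a crude stand-in for embedding similarity.
article = """Freight audit software checks carrier invoices against contracted rates
and flags overcharges automatically.

Our company was founded in 2009, we enjoy table tennis, and we also do
freight audits, plus some consulting and a podcast."""

query = "software to audit freight invoices"

chunks = [p.strip() for p in article.split("\n\n") if p.strip()]
query_words = set(query.lower().split())

for chunk in chunks:
    overlap = len(query_words & set(chunk.lower().split()))
    print(f"score={overlap}  {chunk[:60]}...")
# The focused chunk scores higher; the chunk that mixes several topics
# dilutes its match with any single query.
```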

Takeaway:

LLMs don’t search by exact keywords. They interpret meaning through context and relationships between entities. For writers, that means clarity matters more than repetition — it’s better to connect ideas naturally than to overuse specific terms.

Should writers still care about content structure and definitions in the age of LLMs?

Writers are often told to give clear definitions, use structured templates, and follow SEO best practices. Does that actually help an AI system’s natural-language understanding, or is it mainly about improving the user experience?

Yurii Guts:

In theory, the attention mechanism doesn’t really care whether a definition appears right after the term or a few lines later. But giving the definition immediately definitely doesn’t hurt.

The model has likely seen that pattern thousands of times — think Wikipedia-style text — so it’s familiar with it. 

I’d say follow your instinct: if a human reader would appreciate the definition at that point, leave it in.

As for structure, it absolutely helps. LLMs handle structured text well: headings, lists, sections with clear starts and endings. They also understand Markdown, XML tags, and JSON objects. A well-organized format makes it easier for the model to parse and interpret what it’s reading.

Oleh Demkovych:

Definitions make sense when a term is specific or easy to confuse. If the meaning is already obvious from context, there’s no need to repeat it.

In general, a clean structure helps both people and models. Poorly organized text makes it harder for retrieval tools to match your content with the right query.

Takeaway:

Structure still matters. Clear hierarchy, defined terms, and logical flow help both people and machines understand your message. A well-organized text is easier to parse, retrieve, and interpret — whether it’s a search engine crawler or an LLM doing the reading.

Does adding more detail help LLMs understand your content, or just make it harder to process?

Some experts say detailed blog posts and in-depth analysis help LLMs “understand” topics better, while others say shorter, information-rich sections improve performance. What’s the right balance for LLM-friendly content?

Yurii Guts:

It depends on the query. At some point, too much detail starts working against you — extra words dilute the context. 

The goal is to stay focused and avoid filler.

That said, more detail often helps, because it gives the model richer context for attention to work with. The ideal is information-dense writing — specific and relevant, but without padding.

To be certain, you’d need an experiment: short and long versions of the same text, a set of real queries, and a measurable quality metric. But generally, clarity and focus win.

Oleh Demkovych:

More detail is usually better, as long as it’s organized. Think of your article as a collection of logical blocks. Each will likely be split into smaller chunks during retrieval, so it’s important that every block makes sense on its own.

The idea is to keep detailed context within those focused sections. Too much scattered information makes it harder for the model to connect the meaning.

Takeaway:

More detail helps only up to a point. Both LLMs and readers benefit from focused, information-rich writing that stays on topic. Overloading text with filler or tangents weakens both comprehension and retrievability.

Do formatting and markup actually help LLMs make sense of content?

What about technical details like schema markup, metadata, internal links, or TL;DR summaries? Do those still influence how AI systems index and analyze a web page, or are they mainly a factor for search algorithms, not large language models?

Yurii Guts:

For LLMs, structured formatting is generally helpful. Think Markdown, XML tags, JSON objects. It makes it easier to understand where one section ends and another begins.

That kind of pseudo-markup also shows up in training datasets, so models are used to the structure.

As for what exactly the model “sees” from your website — that depends on the tools doing the scraping and indexing. Some pass the raw markup; others pre-process the page and keep only the text.

Schema tags are probably more important for web crawlers that prepare data for models than for the models themselves.

Hyperlinks, on the other hand, are critical for those crawlers and scrapers. They help connect sources. But for the model itself, links don’t matter much: under the hood, it just processes a continuous stream of text, and the attention mechanism handles relationships between ideas.

Oleh Demkovych:

Not all structure works equally well for LLMs. For instance, large tables are hard for models to interpret, especially if they have many cells or complex dependencies between rows and columns. LLMs can easily lose track of what belongs where.

Lists and headings work far better. Many web scrapers split pages using Markdown or header tags like H2 and H3, and from my experience, models handle that perfectly. 

When the hierarchy is clear — what’s nested, what’s grouped — the model parses it without confusion.

If you start your article with a short summary or TL;DR, that’s more useful for people than for models. During contextual retrieval, only small, relevant chunks are usually pulled in. Sometimes that’s your summary, sometimes a section deeper in the text.

In short, structure absolutely helps, but clarity beats complexity. Clean Markdown and logical flow make your content easier to read — both for humans and machines.
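Here is a minimal sketch of that heading-based splitting, similar in spirit to what many scraping and retrieval pipelines do with H2/H3 sections (the regex is an illustration, not any specific tool’s implementation).

```python
# Split a Markdown page into sections at H2/H3 headings, keeping each
# heading together with its body: one self-contained chunk per section.
import re

page = """## What is freight forwarding?
Freight forwarding is the coordination of shipments on behalf of shippers...

## How to choose a provider
Compare coverage, pricing models, and integrations with your existing stack...

### Pricing models
Per-shipment fees versus monthly subscriptions...
"""

sections = [s for s in re.split(r"(?m)^(?=#{2,3} )", page) if s.strip()]
for section in sections:
    heading, _, body = section.partition("\n")
    print(f"{heading}  ({len(body.split())} words)")
```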

Takeaway:

Formatting helps when it adds clarity, not decoration. Headings, lists, and simple structure make content easier to segment and interpret, while heavy markup or complex tables can do the opposite. Keep the layout clean and information easy to extract.

How fast do LLM content strategies change, and what happens when models learn from AI-generated text?

The tech industry moves fast — faster than most marketing teams can adapt. How quickly do LLM content strategies become outdated? And what happens if AI models start learning from their own AI-generated text instead of high-quality, human-created resources?

Yurii Guts:

Yes, everything in this space changes incredibly fast. Some of what I’m saying could already be outdated by this afternoon. The core ideas haven’t shifted much over the past few years, but new architectures and training methods keep showing up.

The real challenge now is data. 

If models start training mostly on AI-generated content, we’ll face what’s known as model collapse: quality slowly degrades because the system keeps learning from its own output.

Developers need strong safeguards to avoid that — tools and processes that make sure only high-quality, human-written data goes into training.

Many believe LLMs are already close to a kind of glass ceiling: they’ve used up most of the human-created text available online. Finding new, valuable data is the next big hurdle.

Oleh Demkovych:

I don’t think anyone can promise that today’s optimization practices will last long. Retraining large models is expensive, so companies only do it a few times a year. But the tools around them (search systems, retrieval layers) change constantly.

Whatever best practices we have now will need to evolve. The only constant is that quality content always matters. Models may change how it’s found, but not the need for it.

Takeaway:

LLM optimization tactics evolve quickly, but the fundamentals remain stable. As models improve and data sources shift, the only sustainable strategy is quality — original, well-written, human content that adds genuine insight.

How can writers create LLM-friendly content without losing human readers?

Given everything we’ve discussed — structure, optimization, and user intent — what practical advice would you give writers who want to improve both readability and discoverability? How do you create usable, human-centered content that’s also easy for AI systems to process?

Yurii Guts:

I always remember something a colleague once told me.

I asked her why some people still put two spaces after a period, claiming it makes the text more readable. She said, “The best way to make text easy to read is to write clearly — not to overthink the spacing.”

That sums it up perfectly. Write for people, and let technology adapt.

Oleh Demkovych:

It’s not about chasing algorithms. It’s about giving readers and models something clear to work with.

The best-performing content is always focused, specific, and easy to follow for both humans and machines.

Skip the buzzwords, keep the structure logical, and make sure the context stays clean. That’s what keeps your content visible in any system, LLM or not.

Takeaway:

The best content works for both. Write clearly, structure logically, and focus on meaning instead of algorithms. When writing is strong, readable, and precise, it’s naturally optimized for people, search engines, and AI models alike.

Viktoriia Bezsmolna

Viktoriia is a senior content writer, editor, and CEO of Raccoon Writing. She is also an experienced speaker and author of several courses on content writing for tech. Most importantly, Viktoriia is a fan of mythology, linguistics, comic books, and Lego.