6 Ways SEO Has Changed in the Past 10 Years — Every Website Owner Needs to Know This

If you built a website around 2015 and optimized it with the tools and thinking of that era, here’s something you need to hear: the rulebook has been rewritten. Not once — six times over.

The strategies that once moved the needle — stuffing pages with keywords, building link networks, launching a mobile “m-dot” subdomain — don’t just underperform today. Some of them will actively hurt you. Google has spent the past decade rewarding websites that serve real people, and penalizing ones that try to game the system.

Here’s what changed, and why it matters to your site right now.


1. Mobile-First Indexing (2015–2018)

In 2015, Google fired a warning shot nicknamed “Mobilegeddon”: a ranking update that punished sites with poor mobile experiences. In 2018, Google began rolling out mobile-first indexing, and it is now the default: Google indexes and ranks your mobile version first. Your desktop site is secondary.

If your site still isn’t responsive — if users have to pinch and zoom to read your content — you’re not just losing mobile visitors. You’re losing rankings across the board.


2. RankBrain & Machine Learning (2015–2019)

In 2015, Google introduced RankBrain, an AI system that interprets the meaning behind search queries rather than just matching keywords. It was the beginning of the end for exact-match keyword stuffing.

By 2019, a follow-up system called BERT allowed Google to understand context, nuance, and natural language at a deep level. Writing the same keyword 15 times on a page no longer signals relevance — it signals spam. What matters now is whether your content genuinely answers what a searcher is trying to accomplish.


3. E-E-A-T: The Rise of Content Authority (2018–2022)

In 2018, a major algorithm update — nicknamed “Medic” — elevated a new set of signals: Expertise, Authoritativeness, and Trustworthiness (E-A-T). In 2022, Google added a second “E” for Experience, reflecting that first-hand knowledge matters.

This is especially critical for any site covering health, finance, legal topics, or advice of consequence. Anonymous, generic content lost ground fast. What replaced it: named authors with credentials, cited sources, brand reputation signals, and content that demonstrates real-world experience. If your “About” page is vague and your articles have no byline, this one’s for you.


4. Core Web Vitals & Technical UX (2020–2021)

In 2021, Google formalized a set of performance metrics called Core Web Vitals, built around three questions: How fast does your main content load (Largest Contentful Paint)? How quickly can users interact with the page (originally First Input Delay, since replaced by Interaction to Next Paint)? Does the layout jump around while loading (Cumulative Layout Shift)?

For the first time, technical page performance became an explicit ranking factor — not just a user experience nicety. A slow, janky website, even with great content, now faces a measurable disadvantage. If you haven’t audited your site’s performance scores recently, Google’s free PageSpeed Insights tool will give you a rude awakening.
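
For readers who would rather script that audit than paste URLs into the web UI, the same Lighthouse data is exposed by the public PageSpeed Insights v5 API. The sketch below only builds the request URL (fetch it with any HTTP client; an API key is optional for light use), and the example page URL is a placeholder:

```python
# Sketch: audit Core Web Vitals programmatically via the public
# PageSpeed Insights v5 API. Only the request URL is built here.
import urllib.parse

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(page_url: str, strategy: str = "mobile") -> str:
    """Build the audit URL for one page; strategy is "mobile" or "desktop"."""
    params = urllib.parse.urlencode({"url": page_url, "strategy": strategy})
    return f"{PSI_ENDPOINT}?{params}"

# The JSON response nests the 0-1 performance score under
# lighthouseResult -> categories -> performance -> score.
audit_url = psi_request_url("https://example.com")
```

Since mobile-first indexing is the default, auditing with `strategy="mobile"` is the score that matters most for rankings.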


5. Helpful Content: Human-First Writing (2022)

In 2022, Google launched its Helpful Content Update with a clear target in its sights: content written for search engines rather than for people. The kind of content that hits every keyword, follows every on-page SEO formula, but says nothing useful to an actual human reader.

This was a philosophical shift. Optimization itself wasn’t devalued — but optimization divorced from genuine usefulness was. Sites built on thin, templated, or AI-generated filler took significant ranking hits. The question Google now asks is simple: if search didn’t exist, would anyone find this content worth reading?


6. AI Overviews & the Zero-Click Era (2023–2025)

This is the one that changes everything. Google now answers many queries directly in the search results — no click required. And with AI Overviews rolling out broadly in 2024, that trend has accelerated dramatically.

Ranking #1 no longer guarantees traffic. The goal posts have moved: the new prize is being cited as a source inside AI-generated answers. That requires authoritative, clearly structured, trustworthy content — which loops back to every point above.


The Thread Running Through All Six

A decade ago, SEO rewarded sites that understood Google’s algorithm. Today, it rewards sites that serve their readers well enough that Google’s algorithm has no choice but to take notice. If your strategy hasn’t changed since 2015, it isn’t just outdated — it’s working against you.

The good news: the same update that fixes your mobile experience helps your Core Web Vitals. The same investment in authoritative content helps your E-E-A-T signals and your chances of appearing in AI citations. The improvements compound.

Start with one. The best time to update your SEO thinking was ten years ago. The second best time is now.

The “Docs-as-Code” Transition: Moving Beyond the CMS

For years, the standard for technical documentation was the monolithic CMS: systems designed for “content” in the abstract, often divorced from the actual environment where software is built. My journey across organizations like Google, Microsoft, and Grafana Labs has fundamentally shifted my perspective toward docs-as-code workflows.

Why the Shift?

Early in my career, I saw the friction caused by siloed documentation. When docs live in a separate web portal managed by a non-developer editor, they naturally drift away from the source code. By adopting tools like Git, Markdown, Hugo, and Docusaurus, we bring documentation into the developer’s native habitat. This allows developers to take responsibility for documenting their own work, and it allows technical writers to be more fully integrated into the team’s development process.

For technical writers transitioning from traditional CMS platforms, this shift represents a fundamental reimagining of your role within the development team. You’re no longer the downstream recipient of incomplete information; you become an embedded collaborator who can see, understand, and influence the code alongside the documentation. This visibility transforms the quality and accuracy of what you produce.

Treating documentation like code means it follows the same lifecycle as the product:

Version Control: Using Git allows for precise tracking of changes and the ability to revert errors instantly. Beyond basic rollback capabilities, Git enables powerful branching strategies where documentation updates can be developed in parallel with features, tested in staging environments, and merged only when the feature ships. This synchronization prevents the common problem of documentation being published too early or too late relative to feature releases.

Peer Review: At Google and elsewhere, I submitted substantial changelists and pull requests (PRs), ensuring every word was vetted by engineers through the same code review process they use for features. This peer review culture catches technical inaccuracies before publication and creates shared ownership of documentation quality: engineers become invested stakeholders rather than reluctant contributors. The review process also doubles as informal mentorship. Junior engineers learn by watching senior developers critique and improve documentation, while writers gain deeper technical insight from reviewer feedback.

Automation: CI/CD pipelines can run linters to check for broken links or style guide violations before a single page is published. At Grafana Labs, we used linters to check for divergences from the team’s Writers’ Toolkit (our style guide). Advanced teams integrate Vale or other prose linters to enforce terminology consistency, readability metrics, and brand voice guidelines. Some organizations even run automated accessibility checks, ensuring documentation meets Web Content Accessibility Guidelines (WCAG) standards before deployment. This automation amplifies editorial judgment by catching mechanical errors that would otherwise consume review cycles.
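
As a concrete illustration of the kind of pre-publish check described above, here is a minimal sketch of a docs linter a CI job could run. The banned-term list and the `docs/` root are illustrative stand-ins for a real style guide and repo layout, not any particular team's rules:

```python
# Sketch of a CI docs linter: flags broken relative links and
# style-guide terminology violations in Markdown files.
import pathlib
import re

DOCS_ROOT = pathlib.Path("docs")  # illustrative repo layout
BANNED_TERMS = {  # illustrative style-guide word list
    "click here": "use descriptive link text",
    "utilize": "prefer 'use'",
}
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#?]+)")

def lint_file(path: pathlib.Path) -> list[str]:
    """Return human-readable problems found in one Markdown file."""
    problems = []
    text = path.read_text(encoding="utf-8")
    # Relative links must point at files that actually exist on disk.
    for target in LINK_RE.findall(text):
        if not target.startswith(("http://", "https://", "mailto:")):
            if not (path.parent / target).exists():
                problems.append(f"{path.name}: broken link -> {target}")
    # Crude Vale-style terminology pass.
    lowered = text.lower()
    for term, advice in BANNED_TERMS.items():
        if term in lowered:
            problems.append(f"{path.name}: avoid '{term}' ({advice})")
    return problems

def lint_tree(root: pathlib.Path) -> list[str]:
    """Lint every Markdown file under root; CI fails the build if non-empty."""
    if not root.is_dir():
        return []
    return [p for md in sorted(root.rglob("*.md")) for p in lint_file(md)]
```

In a pipeline, a wrapper would call `lint_tree(DOCS_ROOT)` and exit non-zero when the list is non-empty, blocking the merge the same way a failing unit test would. Real teams would reach for Vale or a dedicated link checker; the point is that the gate runs before a single page is published.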

Impact on Developer Experience

At Grafana Labs, I collaborated with a director of development and another engineer to lead the build of a developer documentation portal (grafana.com/developers) designed for discoverability. Engineers there had built several distinct silos of information for their separate projects, which made it frustrating to find what you needed. We brought the company’s divergent threads for plugin building, specifications, and the design system together under one umbrella. You can read about the journey in “The Grafana developer portal: your gateway to enhanced plugin development.”

When documentation lives in the repository, it becomes more of a “living” entity. Paid developers and open-source contributors alike are more likely to contribute updates or suggest edits when they can simply open a PR. The psychological barrier to contribution drops precipitously when the workflow mirrors what developers already do dozens of times per day.

Moreover, having documentation in the repository enables powerful cross-referencing. Code comments can link directly to documentation sections; documentation can reference specific lines of code with permanent links that update as the codebase evolves. This bidirectional relationship creates a cohesive knowledge ecosystem rather than two separate information silos.

Conclusion

The transition is about a transformation of documentation culture. It’s an acknowledgment that documentation is a first-class citizen of the software development life cycle (SDLC). When we treat docs like code, we bring software engineers and writers together to build better products. This cultural shift manifests in tangible ways: documentation tickets appear in the same sprint planning as feature work, documentation coverage becomes a release criterion, and engineers budget time for documentation the same way they budget for testing.

In the seven years since I first adopted a docs-as-code workflow, I’ve learned that it isn’t just about version control or static site generators. It’s about breaking down the artificial barriers between code and documentation, between engineers and writers, between the product and its explanation. When those barriers dissolve, both the code and the docs improve, creating a virtuous cycle that benefits everyone: developers, writers, and most importantly, the users trying to understand and use what we’ve built.

AI-Assisted Workflows: The Future of the Technical Communicator

The rise of Generative AI has sparked intense debate in the technical writing community. Some observers see it as an existential threat, while others view it as an unprecedented boon. I see it as something more nuanced: a powerful extension of our existing toolkit that can be transformative or destructive depending on how thoughtfully it’s deployed.

Today, integrating tools like ChatGPT, Gemini, and Claude is mandatory for technical communicators: used well, they enhance both velocity and quality. There’s no fighting the shift, so the best option is to evolve with it.

AI as a Force Multiplier

I use AI not to replace the fundamental writing process, but to automate what I call the “scaffolding” of documentation: the structural and repetitive elements that consume time without adding unique value. My expertise in AI-assisted content workflows and prompt engineering allows me to accelerate delivery without compromising technical accuracy or depth.

Here’s how I strategically integrate these tools into my daily practice:

  • Outline Generation: I leverage AI to brainstorm structural frameworks for complex tutorials and technical guides, ensuring I haven’t overlooked standard conceptual steps or logical progressions. This is particularly valuable when documenting unfamiliar systems or when tackling sprawling enterprise platforms.
  • Code Sample Refinement: AI excels at generating clean boilerplate code in languages like Python, JavaScript, or TypeScript. I use it to quickly produce initial examples, which I then rigorously test against actual environments, refine for edge cases, and optimize for clarity and best practices.
  • Drafting Alt-Text and Metadata: I delegate repetitive, SEO-heavy tasks such as crafting image descriptions, meta descriptions, and keyword-rich headers. AI drafts these elements, freeing up cognitive bandwidth for the core technical narrative and the complex explanations that truly require human expertise.
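
Much of the scaffolding work above reduces to careful prompt construction. This hypothetical helper sketches how an alt-text prompt can be assembled from context the writer already has; the provider call itself is omitted since it varies by vendor, and every name and filename here is illustrative:

```python
# Sketch: build a reusable alt-text prompt from document context.
# The model call is left out; any chat-completion API would consume this.
def alt_text_prompt(image_file: str, section_heading: str,
                    surrounding_text: str, max_words: int = 20) -> str:
    """Prompt asking an LLM for concise, non-redundant, accessible alt text."""
    return (
        f"Write alt text (max {max_words} words) for the image "
        f"'{image_file}'. It appears under the heading "
        f"'{section_heading}'. Surrounding paragraph: {surrounding_text!r}. "
        "Describe what the image shows; do not repeat the surrounding text "
        "and do not start with 'Image of'."
    )

# Hypothetical inputs for illustration only.
prompt = alt_text_prompt("cwv-dashboard.png", "Core Web Vitals",
                         "The dashboard shows LCP trends over 28 days.")
```

Encoding the constraints (word limit, no redundancy, no “Image of” prefix) in one template keeps hundreds of AI-drafted descriptions consistent, which is exactly the kind of mechanical consistency that doesn’t need a human to retype each time.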

In some cases, I use AI to prototype articles and help topics, if it’s a well-matched use case. For example, if I find a company’s news release and want to write a journalistic article about it, then AI is well-suited to the task of writing a simple news brief, provided it’s been prompted to do so in my own preferred style.

The key is treating AI as a collaborator in the mundane, not a replacement for critical thinking.

The Importance of Human Oversight

During my time as a technical journalist at Wellesley Information Services, I covered the AI/ML beat extensively, researching emerging models and their practical applications to stay current with rapidly evolving technologies. The most critical lesson I learned is that AI can confidently hallucinate technical details: inventing API parameters, fabricating version numbers, or presenting deprecated methods as current best practices. Beyond factual accuracy, AI often lacks creativity in its prose; its formulaic patterns and predictable phrasing are easily spotted by discerning readers and quickly become tiresome or even alienating.

This is where the technical writer’s value proposition has fundamentally shifted. We’re no longer just writers; we’re what the industry calls “Human-in-the-Loop” operators, though I prefer a more precise term: “Expert-in-the-Loop.” Here’s what that expertise looks like in practice:

  1. Verification and Validation: Every AI-generated code snippet must be meticulously verified against the actual API documentation, tested in representative environments, and validated for current best practices. I’ve caught countless instances where AI confidently suggested outdated information.
  2. Voice and Tone Consistency: AI often produces technically adequate but tonally generic content that lacks the nuanced voice required for specific brand identities. Whether it’s the approachable, reader-first standards I helped establish for SAPinsider or the precision-focused clarity expected at Grafana Labs, maintaining authentic brand voice requires human judgment and sensitivity.
  3. Complex Synthesis and Context: AI fundamentally struggles with activities that require deep contextual understanding such as interviewing subject matter experts to extract the “why” behind new product releases or synthesizing conflicting stakeholder requirements. These remain uniquely human skills that draw on empathy, experience, and professional intuition.
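
Parts of the verification step can themselves be automated. The sketch below is a first mechanical pass over an AI-drafted Markdown file: it flags fenced Python snippets that do not even parse, plus method calls whose names a human reviewer has not yet verified against the real API docs (the allowlist here is purely illustrative). It complements, never replaces, testing the code in a representative environment:

```python
# Sketch: screen AI-drafted docs for unparseable snippets and calls
# to method names outside a human-reviewed allowlist.
import ast
import re

FENCE_RE = re.compile(r"```python\n(.*?)```", re.DOTALL)
# Names a human has already verified against current API docs (illustrative).
KNOWN_API = {"connect", "query", "close"}

def check_draft(markdown: str) -> list[str]:
    """Return findings for every fenced Python snippet in an AI draft."""
    findings = []
    for i, snippet in enumerate(FENCE_RE.findall(markdown), start=1):
        try:
            tree = ast.parse(snippet)
        except SyntaxError as exc:
            findings.append(f"snippet {i}: does not parse ({exc.msg})")
            continue
        # Flag attribute calls that aren't on the reviewed allowlist:
        # hallucinated or deprecated method names tend to surface here.
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
                if node.func.attr not in KNOWN_API:
                    findings.append(
                        f"snippet {i}: unverified call .{node.func.attr}()")
    return findings
```

A pass like this catches the cheap failures (typos, syntax errors, made-up method names) so that human review cycles can be spent on the expensive ones: whether the example actually works, and whether it reflects current best practice.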

The Takeaway

The future of technical communication isn’t framed by an “AI vs. Human” dichotomy—it’s defined by the emergence of the AI-augmented writer. By mastering prompt engineering, understanding the capabilities and limitations of large language models, and thoughtfully integrating these tools into our docs-as-code pipelines, we can deliver higher-quality documentation at the accelerated pace demanded by modern software development cycles.

The writers who will thrive in this new landscape aren’t those who resist these tools or those who uncritically embrace them. They’re the professionals who develop the discernment to know when to leverage AI for efficiency and when to rely exclusively on human expertise for accuracy, creativity, and strategic thinking. Expert judgment is what separates good documentation from exceptional documentation in the age of AI.