Server-side SEO: a practical guide

A working definition of server-side SEO, the four signals it must control, why JavaScript-injected fixes are unreliable, and how to deploy edge-based HTML rewriting in practice.

"Server-side" gets confused with framework choice. Next.js renders on the server. Astro renders on the server. So does PHP. None of those phrases describe SEO.

Server-side SEO is the practice of fixing your SEO at the network layer — modifying the HTML response before it reaches the requesting client. It's the only mode in which fixes are guaranteed to land in the source Google indexes.

This guide covers what server-side SEO is, why it matters more than rendering framework choice, and how to deploy it without rewriting your stack.

The four signals that must live in HTML

Most SEO factors split into two buckets: signals that depend on rendered HTML, and signals that depend on user behavior. The first bucket is the one we can control directly:

  • Title and meta description. Google reads these from the response head. JavaScript can rewrite them, but Googlebot may snapshot the page before that happens.
  • Canonical, hreflang, and robots directives. These tell Google which version of a page to index. Missing or wrong canonicals are a top duplicate-content cause.
  • Structured data (JSON-LD). Eligibility for rich snippets depends on parsable schema in the HTML response.
  • Heading structure. One h1, sensible h2/h3 hierarchy. Tools that inject headings via JavaScript won't fix the underlying structure.

Each of these is something Google evaluates from the raw HTML. If your fix only happens in the browser, you're hoping Google saw it. Hoping is not a strategy.
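
A quick way to see a page as raw-HTML crawlers do is to fetch the response and inspect it before any script runs. A minimal TypeScript sketch (runnable as an ES module under Node 18+; the URL is a placeholder and the regex checks are illustrative, since a real audit should use an HTML parser):

// Fetch the raw HTML (no JavaScript executed) and check for the four
// signals. Adapt the URL and patterns to your own pages.
const res = await fetch("https://example.com/products/widget-x");
const html = await res.text();

const checks = {
  title: /<title>[^<]+<\/title>/i.test(html),
  canonical: /<link[^>]+rel=["']canonical["']/i.test(html),
  jsonLd: /<script[^>]+application\/ld\+json/i.test(html),
  singleH1: (html.match(/<h1[\s>]/gi) ?? []).length === 1,
};

console.log(checks); // any false value here is invisible to raw-HTML crawlers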

Why JavaScript-injected fixes are unreliable

Modern Googlebot does execute JavaScript — but with caveats:

  • Two-stage indexing. Google first indexes the raw HTML, then queues the page for JavaScript rendering on a separate timeline. The first signal it stores is whatever was in the response.
  • Time and resource limits. Googlebot won't wait indefinitely for scripts to settle. Pages that rely on heavy client-side rendering risk partial indexing.
  • Other crawlers exist. Bingbot, AI assistants, social link previews, and competitors' SEO crawlers mostly read raw HTML. JS-injected fixes are invisible to all of them.

The result: a fix that works for a logged-in user may not work for a search engine, and the gap is invisible until rankings drift.

The edge-rewriting pattern

The pattern below is what EdgeRank implements. It works for any site behind a CDN that supports response transformation — most notably Cloudflare Workers (with HTMLRewriter), Akamai EdgeWorkers, AWS Lambda@Edge, and Fastly.

  1. A request hits the CDN edge before reaching your origin.
  2. The edge worker fetches the response from your origin (or cache), then streams it through an HTML rewriter that knows which selectors to modify and what to set them to.
  3. The modified response is sent back to the requesting client. From the client's perspective, the HTML has always been this way.

Cloudflare's HTMLRewriter is particularly suited to this because it's streaming-aware: you don't buffer the entire response, so latency stays in the milliseconds even on large pages.
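
Here is what the three steps can look like as a minimal Cloudflare Worker. This is a sketch rather than EdgeRank's implementation; the single rule is a placeholder, and the worked example below fills in realistic ones:

export default {
  async fetch(request: Request): Promise<Response> {
    // Steps 1 and 2: the request hit the edge; fetch the origin (or cache).
    const origin = await fetch(request);

    // Rewrite only HTML responses; pass everything else through untouched.
    const type = origin.headers.get("content-type") ?? "";
    if (!type.includes("text/html")) return origin;

    // Steps 2 and 3: stream the body through HTMLRewriter and return it.
    // Handlers fire element by element, so the page is never fully buffered.
    return new HTMLRewriter()
      .on("head", {
        element(head) {
          // Placeholder rule: inject a tag into <head>.
          head.append('<meta name="robots" content="index,follow">', { html: true });
        },
      })
      .transform(origin);
  },
};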

A worked example

Suppose your audit identified these issues on /products/widget-x:

  • Missing canonical tag
  • Title is generic ("Home — Acme")
  • No Open Graph image
  • Schema is empty

Without server-side rewriting, you file a ticket, an engineer updates the template, you wait for the next deploy. Two sprints later, the fix lands.

With edge rewriting, the worker config is updated to apply (in pseudocode):

when path matches "/products/widget-x":
  set <title> = "Widget X — Specs, Pricing, and Reviews | Acme"
  inject <link rel="canonical" href="https://acme.com/products/widget-x">
  inject <meta property="og:image" content="/og/widget-x.png">
  inject <script type="application/ld+json">{ Product schema }</script>
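
With Cloudflare's HTMLRewriter, those rules translate roughly as follows. This assumes the worker skeleton sketched earlier (request and origin in scope there), and the Product schema is abbreviated to a stub:

// The widget-x rules from the pseudocode, as HTMLRewriter handlers.
function rewriteWidgetX(request: Request, origin: Response): Response {
  const url = new URL(request.url);
  let rewriter = new HTMLRewriter();

  if (url.pathname === "/products/widget-x") {
    rewriter = rewriter
      .on("title", {
        element(el) {
          el.setInnerContent("Widget X — Specs, Pricing, and Reviews | Acme");
        },
      })
      .on("head", {
        element(head) {
          head.append(
            '<link rel="canonical" href="https://acme.com/products/widget-x">' +
              '<meta property="og:image" content="/og/widget-x.png">',
            { html: true }
          );
          // Stub schema; a real config carries the full Product object.
          const schema = { "@context": "https://schema.org", "@type": "Product", name: "Widget X" };
          head.append(
            `<script type="application/ld+json">${JSON.stringify(schema)}</script>`,
            { html: true }
          );
        },
      });
  }
  return rewriter.transform(origin);
}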

The change is live within seconds. The next time Googlebot fetches the page, it sees the corrected HTML. No deploy, no template owner, no waiting room.

Tradeoffs and considerations

  • Source of truth. Edge rewrites layer on top of your origin. They don't fix the underlying template. For long-term hygiene, you eventually want the canonical fix in code — but the edge rewrite buys time.
  • Caching. Cache keys must include the rewriter version, or you'll serve stale rewrites (see the sketch after this list). Most edge platforms handle this; verify yours does.
  • Auditability. When something looks wrong on production, engineers need to know whether it's the template or the worker. A clear audit trail of edge changes (who, what, when) is mandatory.
  • Verification. Some rewrites are cross-cutting; a bad one can affect every page. Always preview against a staging traffic slice before rolling out to 100%.
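
On the caching point: one way to avoid serving stale rewrites is to fold a rules version into the cache key, so each rules deploy misses old entries. A Cloudflare Worker sketch (REWRITER_VERSION and the __rw parameter are hypothetical conventions, not platform features):

// Versioned cache lookup: bump REWRITER_VERSION on every rules deploy
// and previously cached rewrites stop matching.
const REWRITER_VERSION = "2025-01-15.2";

async function matchCached(request: Request): Promise<Response | undefined> {
  const keyUrl = new URL(request.url);
  keyUrl.searchParams.set("__rw", REWRITER_VERSION); // version lives only in the key
  const cacheKey = new Request(keyUrl.toString(), request);
  return caches.default.match(cacheKey);
}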

Getting started

You can implement this yourself: spin up a Cloudflare Worker, write HTMLRewriter handlers, build a config UI, integrate with audits, handle rollbacks, monitor for regressions. It's a real engineering project — typically 2–4 months from scratch.

Or you can use EdgeRank's Optimizer, which ships the worker, the rewriter, the dashboard, the audit pipeline, and the rollback story as one product. Start free and watch your first fix deploy at the edge in under five minutes.

Server-side SEO has been the obviously-correct architecture for years. The reason it isn't standard practice is that most teams haven't had a tool for it. Now they do.