How to build an “SEO control dashboard” in Looker Studio: GSC + GA4 + rankings + conversions (2026)

An SEO dashboard is only useful if it helps you answer the same questions every week: what changed, where it changed, why it changed, and what to do next. In 2026, you can assemble that view in Looker Studio by combining Google Search Console for demand and visibility, GA4 for behaviour and outcomes, and a reliable feed of keyword positions—then mapping it all to key events so SEO is judged by business impact, not vanity metrics.

1) Data model: decide what “good” looks like before you connect anything

Start by defining the dashboard’s job in one sentence, for example: “Show organic performance by landing page and query, and prove how it contributes to leads or sales.” That sentence becomes your field list. For GSC, you’ll typically need clicks, impressions, CTR, and average position, segmented by query, page, country, and device. For GA4, you’ll want organic sessions (or engaged sessions), engagement rate, key events, and revenue (if you have ecommerce).

Next, standardise naming so the report stays readable when multiple people edit it. Use one set of labels for channel groupings (Organic Search, Paid Search, Direct, Referral), one set for device categories, and a clear “landing page” definition. If your GA4 property includes multiple domains or subfolders, decide whether you’re reporting at hostname level, directory level, or full URL. This choice affects blends, filters, and how you diagnose drops.

Finally, decide the three time comparisons you’ll use everywhere. A practical set is: last 28 days vs previous 28 days, month-to-date vs previous month-to-date, and year-on-year (same dates). When those comparisons are consistent across charts, the dashboard becomes a control panel instead of a gallery of graphs. Add one place for annotations (algorithm changes, releases, migrations, outages), because “what changed” is often explained by “what we shipped”.
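Looker Studio's date controls handle these comparisons natively, but if you also pre-compute them for exports or alerts, the three windows can be derived consistently in code. A minimal sketch (the window names and the "end yesterday to avoid partial days" rule are conventions from this article, not a Looker Studio API):

```python
from datetime import date, timedelta

def comparison_ranges(today=None):
    """Return the three standard comparison windows as (start, end) date pairs.
    Every window ends yesterday so a partial current day never skews a delta."""
    today = today or date.today()
    end = today - timedelta(days=1)
    # Last 28 days vs the previous 28 days
    last_28 = (end - timedelta(days=27), end)
    prev_28 = (last_28[0] - timedelta(days=28), last_28[0] - timedelta(days=1))
    # Month-to-date vs previous month-to-date (same number of elapsed days)
    mtd = (today.replace(day=1), end)
    prev_month_end = today.replace(day=1) - timedelta(days=1)
    days_elapsed = (end - mtd[0]).days
    prev_mtd = (prev_month_end.replace(day=1),
                prev_month_end.replace(day=1) + timedelta(days=days_elapsed))
    # Year-on-year, same dates (note: .replace(year=...) breaks on 29 Feb)
    yoy = (last_28[0].replace(year=last_28[0].year - 1),
           last_28[1].replace(year=last_28[1].year - 1))
    return {"last_28": last_28, "prev_28": prev_28,
            "mtd": mtd, "prev_mtd": prev_mtd, "yoy_28": yoy}
```

Keeping this logic in one function (or one set of Looker Studio date parameters) is what makes every chart agree on what "vs previous period" means.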

Dashboard blueprint: sections and the exact questions each section must answer

Create four fixed sections. (1) Executive snapshot: organic clicks, organic sessions, total key events from organic, and revenue or lead value. The question here is simple: are we up or down, and is it material? Add a trend line and a comparison delta so you can read it in 30 seconds.

(2) Demand and visibility (GSC-led): top queries and top landing pages, plus a “winners/losers” view that surfaces the biggest absolute click changes. The key question: are we losing because demand dropped, because rankings shifted, or because SERP features changed what users click? You can’t answer that with GA4 alone; GSC gives the search-side reality.

(3) Outcomes (GA4-led): key events, funnel drop-offs, and landing pages that drive outcomes—not just traffic. This section answers: which pages and topics produce leads or sales, and which pages attract visits that never convert? (4) Diagnostics: indexation or coverage indicators you monitor externally, plus ranking trend by keyword group. This answers: what to fix next, and where the risk is building.
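The "winners/losers" view in section (2) can be pre-computed outside the report from two GSC exports. A minimal sketch, assuming each export is reduced to a mapping of landing page to clicks (that schema is an assumption about your export, not a connector field name):

```python
def winners_losers(current, previous, top_n=10):
    """current/previous: dicts of landing page -> GSC clicks for two periods.
    Returns (losers, winners) ranked by absolute click change, so the
    biggest moves surface first."""
    pages = set(current) | set(previous)
    deltas = {p: current.get(p, 0) - previous.get(p, 0) for p in pages}
    ranked = sorted(deltas.items(), key=lambda kv: kv[1])
    losers = [kv for kv in ranked if kv[1] < 0][:top_n]
    winners = [kv for kv in reversed(ranked) if kv[1] > 0][:top_n]
    return losers, winners
```

Ranking by absolute change rather than percentage keeps tiny pages (3 clicks to 6 clicks, "+100%") from drowning out the pages that actually move the totals.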

2) Connect GSC and GA4 correctly, then make the numbers comparable

When you connect Google Search Console, be deliberate about the view you choose. You’ll usually need both site-level and URL-level reporting in real work: site-level is useful for query and country trends, while URL-level is vital for landing-page diagnosis. Treat these as separate data sources so you don’t mix aggregation rules and misread changes. In the report, label them clearly so nobody mistakes a site-wide query table for a page-level analysis.

For GA4, build with key events in mind. If your organisation still talks about “conversions”, align terminology internally but implement using key events so reporting is consistent with GA4’s current logic. Define which actions matter (form submit, purchase, signup, demo request, phone click), confirm how each action is tracked (native events, Tag Manager, or GA4 event creation), then mark only the correct event as a key event. If you mark a broad event like page_view, you’ll destroy the meaning of the metric.
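If any of those actions are recorded server-side (phone-call tracking is a common case), events reach GA4 via the Measurement Protocol rather than Tag Manager. A hedged sketch of the request body only; you would POST it to `https://www.google-analytics.com/mp/collect` with your `measurement_id` and `api_secret` as query parameters, and the event and parameter names below are examples, not required values:

```python
import json

def key_event_payload(client_id, event_name, params=None):
    """Build a GA4 Measurement Protocol request body for one event.
    The event name must match the event you marked as a key event in GA4,
    or it will be collected but excluded from key-event reporting."""
    return json.dumps({
        "client_id": client_id,
        "events": [{"name": event_name, "params": params or {}}],
    })
```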

To make GSC and GA4 comparable, create a small “dictionary” table inside the report (or in a connected sheet) that defines each metric and its caveats. For example: GSC clicks are clicks from Google Search results; GA4 sessions are measured on-site; CTR in GSC is search CTR, not landing-page click-through; GA4 engagement is user behaviour after the click. The point is not to force the numbers to match—they won’t—but to stop the team drawing the wrong conclusion from two correct datasets.

Blending rules that stop common SEO reporting mistakes

Blending is tempting because you want “query + landing page + conversions” in one table. Use blends, but treat them like joins in a database: if the join keys are messy, the output will be wrong. The safest join key for GSC-to-GA4 is landing page URL (or a normalised version of it), and even then you’ll have edge cases: trailing slashes, uppercase, parameters, and mixed http/https. Fix that with a normalised field (lowercase, remove UTM and irrelevant parameters, enforce one canonical format) before you rely on blended tables.
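The normalisation steps above can be expressed as one small function and applied to both datasets before blending. A sketch using the standard library; the tracking-parameter list is an assumption you should extend for your own stack:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PREFIXES = ("utm_", "gclid", "fbclid")  # extend for your stack

def normalise_url(url):
    """Normalise a landing-page URL for use as a blend join key:
    one scheme, lowercase host and path, tracking parameters dropped,
    trailing slash and fragment removed."""
    parts = urlsplit(url.strip())
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if not k.lower().startswith(TRACKING_PREFIXES)]
    path = parts.path.lower().rstrip("/") or "/"
    return urlunsplit(("https", parts.netloc.lower(), path,
                       urlencode(query), ""))
```

In Looker Studio itself, the same idea becomes a calculated field (LOWER plus REGEXP_REPLACE) defined once per data source and reused as the join key everywhere.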

Keep your joins simple: left join from the dataset you trust as the “source of truth” for the question you’re answering. For “which landing pages lost organic clicks?”, the truth is GSC, so GSC should be the left side, then you add GA4 outcomes as extra columns. For “which landing pages lost leads from organic?”, GA4 is the left side, and you add GSC visibility indicators to explain the drop. This one change prevents endless arguments when totals don’t match.
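The left-join rule is easy to verify outside the report. A minimal sketch with plain dicts (column names like `clicks` and `key_events` are illustrative):

```python
def left_join(left_rows, right_rows, key):
    """Left join two lists of dicts on `key`: every left-side row survives,
    and matching right-side columns are appended. This mirrors the blend
    rule above -- the 'source of truth' table always keeps all its rows."""
    index = {row[key]: row for row in right_rows}
    return [{**row, **index.get(row[key], {})} for row in left_rows]

gsc = [{"page": "/a", "clicks": 120}, {"page": "/b", "clicks": 40}]
ga4 = [{"page": "/a", "key_events": 6}]
joined = left_join(gsc, ga4, "page")  # "/b" survives with no GA4 columns
```

Notice that "/b" is kept even though GA4 has no row for it; with GSC on the left, a page that lost all its traffic still appears in the losers table instead of silently vanishing.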

Use blends only where they add decision value. If the blended chart becomes slow, breaks under quotas, or confuses the team, split it into two charts with the same filter controls. A dashboard that loads fast and answers questions beats a “perfect” table that fails when stakeholders need it. Where precision matters (board reporting, finance tie-outs), pull GA4 into BigQuery and report from that dataset rather than relying on interactive connector calls.

[Figure: keyword ranking chart]

3) Add rankings and conversions you can trust, then operationalise the dashboard

Keyword positions rarely come “clean” out of a tracker: locations vary, personalisation exists, and tools may report different rank definitions. Your goal is consistency, not an argument about whose number is correct. Export rankings on a schedule (daily or weekly), store them in Google Sheets or BigQuery, and keep the schema stable: date, keyword, keyword group, landing page (if available), location, device, position, and SERP features flags. If your tracker supports tagging, make tags the backbone of grouping in the dashboard.
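Keeping that schema stable is the whole game. A minimal sketch of an append step that enforces it, assuming a CSV destination (the same idea applies to a Sheets range or a BigQuery table; the file path and column names below mirror the field list above):

```python
import csv
import os

RANK_SCHEMA = ["date", "keyword", "keyword_group", "landing_page",
               "location", "device", "position", "serp_features"]

def append_rankings(path, rows):
    """Append tracker export rows to a CSV with a fixed schema: columns the
    tracker adds are dropped, columns it omits become empty strings, so the
    file never drifts between exports."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=RANK_SCHEMA,
                                extrasaction="ignore")
        if write_header:
            writer.writeheader()
        for row in rows:
            writer.writerow({k: row.get(k, "") for k in RANK_SCHEMA})
```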

For conversions, don’t stop at counting events. Assign a value model that matches how the business thinks. Ecommerce is straightforward: revenue and transactions. Lead generation needs a proxy: either a fixed value per lead type, or a weighted model (demo request > newsletter signup). Put the assumptions in the report so future you understands why “SEO value” moved even if lead count didn’t. This is also where you decide attribution: use GA4’s default reporting attribution for trend consistency, but be explicit that SEO often assists, not just “last touch”.
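A weighted lead model is small enough to live next to the dashboard as a documented assumption. A sketch; the weights below are placeholders to agree with the business, not recommendations:

```python
LEAD_VALUES = {  # illustrative weights -- agree these with the business
    "purchase": None,         # revenue-backed: use actual revenue, no proxy
    "demo_request": 150.0,
    "signup": 25.0,
    "newsletter_signup": 2.0,
}

def organic_value(key_events, revenue=0.0):
    """key_events: dict of key-event name -> count from organic traffic.
    Returns modelled value: real revenue plus weighted lead proxies.
    Events with no weight (unknown, or revenue-backed) are skipped."""
    value = revenue
    for name, count in key_events.items():
        weight = LEAD_VALUES.get(name)
        if weight:
            value += weight * count
    return value
```

Because the weights are explicit, "SEO value went up while lead count was flat" stops being a mystery: the mix shifted toward higher-weighted lead types, and anyone can check that against the table.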

Operationalise the dashboard with routines. Set a weekly review: 20 minutes, same order every time—snapshot, winners/losers, outcomes, diagnostics. Add alert thresholds: for example, “organic clicks down 20% week-on-week” or “key events down across the top-10 landing-page group”. Looker Studio doesn’t replace technical checks, so keep your crawl, logs, and indexation monitoring outside the report, but link to the supporting evidence so the dashboard remains a single starting point.
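Those thresholds are simple enough to script against your weekly exports. A minimal sketch, assuming each week's metrics are reduced to a dict (metric names and the 20% threshold are the examples from the text, not fixed values):

```python
def check_alerts(this_week, last_week, drop_threshold=0.20):
    """Compare week-on-week metrics (dicts of metric -> value) and return
    an alert string for every metric that dropped more than the threshold.
    Metrics with no baseline (missing or zero last week) are skipped."""
    alerts = []
    for metric, current in this_week.items():
        previous = last_week.get(metric)
        if not previous:
            continue
        change = (current - previous) / previous
        if change <= -drop_threshold:
            alerts.append(f"{metric} down {abs(change):.0%} week-on-week")
    return alerts
```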

Quality control checklist: what to validate before you share the report

First, validate time zones and date alignment. GA4 reports in the time zone configured on the property, while other datasets may effectively behave in UTC depending on how they’re pulled. A one-day shift can make week-on-week comparisons look broken. In your report, include a tiny “data freshness” indicator (last updated date for rankings, and the latest date available for GSC and GA4) so nobody panics over partial days.
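The freshness indicator can be computed from each source's latest available date. A sketch; the three-day default lag is an assumption to tune per connector (GSC data, for instance, routinely trails by a couple of days):

```python
from datetime import date

def freshness(latest_dates, today=None, max_lag_days=3):
    """latest_dates: dict of source name -> latest date with data.
    Flags any source whose lag exceeds the expected connector delay, so a
    late export reads as 'data pending', not as a traffic drop."""
    today = today or date.today()
    report = {}
    for source, latest in latest_dates.items():
        lag = (today - latest).days
        report[source] = {"latest": latest.isoformat(),
                          "lag_days": lag,
                          "stale": lag > max_lag_days}
    return report
```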

Second, run reconciliation checks that catch silent failures. Examples: total organic sessions in the dashboard should match GA4 within a small tolerance; total GSC clicks for the selected property should match what you see in Search Console for the same dates (accepting that sampling and filters can differ). For rankings, check that the number of keywords per group matches expectations—missing rows usually means an export failed or a sheet range changed.
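The "within a small tolerance" rule is worth pinning down in code so the check is the same every week. A sketch; the 2% default is an arbitrary starting point to tighten once you trust the pipeline:

```python
def reconciles(dashboard_total, source_total, tolerance=0.02):
    """True if the dashboard total agrees with the source-of-truth total
    within a relative tolerance. A zero source total must match exactly --
    'both empty' is fine, 'dashboard shows traffic the source lacks' is not."""
    if source_total == 0:
        return dashboard_total == 0
    return abs(dashboard_total - source_total) / source_total <= tolerance
```

Run it per metric (organic sessions vs GA4, clicks vs GSC, keyword counts per group vs the tracker) and surface only the failures; a reconciliation table that is usually empty is one people actually read.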

Third, lock down governance. Restrict edit access, use consistent field naming, and document custom calculations inside the report (calculated fields can become tribal knowledge). If multiple teams use the same dashboard, create a “core” report and a separate “team copy” for experiments. That approach preserves trust: stakeholders keep a stable view, while analysts can iterate without breaking the controls everyone relies on.