Since Google Analytics switched from Universal Analytics to GA4, page loading speed monitoring is no longer available in GA by default, so we had to reproduce that functionality in Google Tag Manager with custom JavaScript. There are plenty of code examples for regular websites, but we couldn't find any that work with single-page application (SPA) websites.

Long story short – after much vibe coding and flesh-n-blood developer verification, here's the full JS code that works for both SPA and regular websites. You can paste it into GTM as a Custom HTML tag and trigger it on Initialization – All Pages. The code pushes a single event into the dataLayer with the calculated page loading time in seconds, plus a flag telling you whether it came from a hard page load or a SPA navigation. It fires for every page refresh, navigation, and URL history change. We could get deep into a discussion about whether this tracks “true” page loading time – that's what the comment section is for – but as with anything else in web analytics, we believe this method is good enough to be directionally useful. We haven't encountered any issues so far, so please let us know if you see any strange behavior.

Here’s the full code.  

<script>
(function () {
  // Prevent double setup
  if (window.__pageLoadTimingSetupDone) return;
  window.__pageLoadTimingSetupDone = true;

  var dataLayer = window.dataLayer = window.dataLayer || [];
  var perf = window.performance || window.webkitPerformance || window.msPerformance || window.mozPerformance;

  function toSecondsTwoDecimals(ms) {
    // ms -> seconds with 2 decimals
    return Math.round(ms / 10) / 100;
  }

  function pushTiming(ms, source) {
    if (!ms || ms <= 0) return;

    dataLayer.push({
      event: 'page_load_time_calc',
      page_load_timing: toSecondsTwoDecimals(ms),
      page_load_source: source
    });
  }

  /* -------------------------------
   *  INITIAL HARD PAGE LOAD
   * ----------------------------- */

  var initialSent = false;

  function calcInitialLoad() {
    if (initialSent || !perf) return;

    var ms = 0;

    // Navigation Timing v2
    if (typeof perf.getEntriesByType === 'function') {
      var navEntries = perf.getEntriesByType('navigation');
      if (navEntries && navEntries.length) {
        var nav = navEntries[0];

        if (typeof nav.loadEventEnd === 'number' && nav.loadEventEnd > 0) {
          ms = nav.loadEventEnd; // since timeOrigin
        } else if (typeof nav.domComplete === 'number' && nav.domComplete > 0) {
          ms = nav.domComplete;
        }
      }
    }

    // Legacy Navigation Timing fallback
    if (!ms && perf.timing) {
      var t = perf.timing;
      if (t.loadEventEnd && t.navigationStart) {
        ms = t.loadEventEnd - t.navigationStart;
      } else if (t.domComplete && t.navigationStart) {
        ms = t.domComplete - t.navigationStart;
      }
    }

    if (ms && ms > 0) {
      initialSent = true;
      pushTiming(ms, 'initial');
    }
  }

  if (document.readyState === 'complete') {
    // Load already fired
    calcInitialLoad();
  } else {
    window.addEventListener('load', calcInitialLoad);
  }

  /* -------------------------------
   *  SPA NAVIGATION TIMING
   * ----------------------------- */

  if (!perf) return; // need perf for SPA timing

  var spaStart = null;        // when the current route change started
  var spaLastActivity = null; // last network activity seen for that route change
  var spaTimer = null;        // "network quiet" timer

  // Heuristic: a SPA navigation counts as "loaded" once no new resources
  // (XHRs, images, scripts...) have finished for this long after the route change.
  var SPA_QUIET_MS = 1500;

  function nowMs() {
    if (typeof perf.now === 'function') {
      return perf.now(); // relative to timeOrigin
    }
    return Date.now(); // fallback; only used for deltas, so the epoch clock is fine
  }

  function restartQuietTimer() {
    if (spaTimer) {
      clearTimeout(spaTimer);
    }
    spaTimer = setTimeout(function () {
      endSpaNav('spa_quiet');
    }, SPA_QUIET_MS);
  }

  function startSpaNav() {
    spaStart = nowMs();
    spaLastActivity = spaStart;
    restartQuietTimer();
  }

  function endSpaNav(reason) {
    if (spaStart == null) return;

    // Measure up to the last observed resource, not up to the moment the quiet
    // timer fires, so the idle wait itself isn't counted as load time.
    var duration = spaLastActivity - spaStart;
    spaStart = null;
    spaLastActivity = null;

    if (spaTimer) {
      clearTimeout(spaTimer);
      spaTimer = null;
    }

    if (duration > 0) {
      pushTiming(duration, 'spa');
    }
  }

  // Every resource that finishes while a SPA navigation is in flight updates
  // the activity timestamp and extends the quiet period, so assets and API
  // calls triggered by the new route are included in the measurement.
  // (Without PerformanceObserver, or when a route change loads nothing at all,
  // no SPA timing event is pushed.)
  if (window.PerformanceObserver) {
    try {
      new PerformanceObserver(function () {
        if (spaStart == null) return;
        spaLastActivity = nowMs();
        restartQuietTimer();
      }).observe({ entryTypes: ['resource'] });
    } catch (e) {
      // fail silently if resource observation isn't supported
    }
  }

  // Patch history API once to detect SPA route changes
  try {
    var originalPushState = history.pushState;
    history.pushState = function () {
      var rv = originalPushState && originalPushState.apply(this, arguments);
      startSpaNav();
      return rv;
    };

    var originalReplaceState = history.replaceState;
    history.replaceState = function () {
      var rv = originalReplaceState && originalReplaceState.apply(this, arguments);
      startSpaNav();
      return rv;
    };

    window.addEventListener('popstate', function () {
      startSpaNav();
    });
  } catch (e) {
    // fail silently if history patching isn't allowed
  }
})();
</script>
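For reference, each push ends up looking roughly like this in the dataLayer (the values below are illustrative). In GTM, you can read the two fields with Data Layer Variables named page_load_timing and page_load_source, trigger a GA4 event tag on the page_load_time_calc custom event, and pass them along as event parameters:

// Illustrative example of the object the tag pushes (not part of the tag itself)
window.dataLayer.push({
  event: 'page_load_time_calc',  // use this name for a Custom Event trigger
  page_load_timing: 2.34,        // load time in seconds, two decimals
  page_load_source: 'initial'    // 'initial' for hard loads, 'spa' for route changes
});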

Any SEO expert who has ever crawled a large website has faced this dilemma. The audit is done and you're left with a huge table of broken internal links – how should you present it to the client?

Most SEOs do one of two things: hand over the full crawl export and let the client's team dig through it, or send a trimmed-down summary with just a handful of example URLs per issue.

The first method relies heavily on the client’s team to make sense of the audit and the massive table of inlinks, while the second approach may lead to faster but incomplete resolution.

While both are standard practices among SEO consultants, we have never been truly comfortable with either. So we went a step or two further to make our technical SEO audits more readable and actionable – ultimately leading to more issues actually getting fixed, an easier workflow for all teams involved, and generally better client relationships.

Here’s our process with one massive e-commerce site as an example:

It all starts with a Screaming Frog SEO Spider crawl

For this site, the crawl ran for about 35 hours. Among other issues and exports, we pulled a CSV of all link relationships on the website – roughly 20 GB and 50 million rows (yikes!). We didn't want to burden the client's team with downloading and unpacking such a huge table; those folks should be busy fixing the website, not figuring out how to work with massive CSVs.

Removing the burden of handling a massive dataset

So instead, we uploaded the file with all link relationships to Google Cloud Storage and loaded it into BigQuery from there. Direct file uploads to BigQuery are capped at 100 MB, so our 20 GB file had to go through Cloud Storage first. We then did some minor transformations in BigQuery – cleaning a few fields and dropping the ones that weren't needed – and saved the output as a BigQuery view.
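If you prefer to script that step instead of clicking through the console, here's a minimal sketch using Google's Node.js clients. The bucket, dataset, table, and column names below are placeholders for illustration (not the ones from this project), and the cleanup is just an example of the kind of trimming we mean:

// Minimal sketch (placeholder names): load the inlinks CSV from Cloud Storage
// into BigQuery, then expose a slimmed-down view for the dashboard.
const { Storage } = require('@google-cloud/storage');
const { BigQuery } = require('@google-cloud/bigquery');

const storage = new Storage();
const bigquery = new BigQuery();

async function loadInlinks() {
  // 1) Load the raw export into a staging table (schema autodetected)
  await bigquery
    .dataset('seo_audit')
    .table('all_inlinks_raw')
    .load(storage.bucket('my-audit-bucket').file('all_inlinks.csv'), {
      sourceFormat: 'CSV',
      skipLeadingRows: 1,
      autodetect: true,
      writeDisposition: 'WRITE_TRUNCATE',
    });

  // 2) Keep only the fields the dashboard needs, plus a status bucket
  //    that makes the 3xx/4xx/5xx filters trivial in Looker Studio
  await bigquery.query(`
    CREATE OR REPLACE VIEW seo_audit.inlinks_clean AS
    SELECT
      source,
      destination,
      anchor,
      link_position,
      link_origin,
      status_code,
      CASE
        WHEN status_code BETWEEN 500 AND 599 THEN '5xx'
        WHEN status_code BETWEEN 400 AND 499 THEN '4xx'
        WHEN status_code BETWEEN 300 AND 399 THEN 'redirect'
        ELSE 'ok'
      END AS status_bucket
    FROM seo_audit.all_inlinks_raw
  `);
}

loadInlinks().catch(console.error);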

The Dashboard

Finally, we built a quick but comprehensive Looker Studio dashboard – not just to visualize the data, but to serve as a functional tool for the client's web team to work with. They could easily select all inlinks with specific issues, like 5xx and 4xx errors, redirects, empty category pages, or discontinued products.

Status code filter

 

We even went a step further and prepared a companion document with pre-filtered links to the dashboard containing complex RegEx filters: for example, URL paths with a double slash (https?://.*//.*), any non-ASCII characters (.*[^\x00-\x7F].*), or diacritical characters specifically (.*[ČčĆćĐ𩹮ž].*). We also flagged repeated folder paths, though those filters had to be generated manually with the site's specific folder names, since RE2 doesn't support the backreferences a generic pattern would need.
Just remember to enable “view filters in report link” in the Looker Studio report settings.
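If you want to sanity-check patterns like these before wiring them into the dashboard, a quick snippet is enough. The sample URLs below are made up; note that JavaScript's regex engine isn't RE2, but these particular patterns behave the same in both:

// Quick sanity check of the filter patterns against a few sample URLs.
var filters = {
  double_slash_in_path: /https?:\/\/.*\/\/.*/,
  non_ascii: /.*[^\x00-\x7F].*/,
  diacritics: /.*[ČčĆćĐ𩹮ž].*/
};

var sampleUrls = [
  'https://example.com/shop//red-shoes',  // double slash in the path
  'https://example.com/košulje',          // non-ASCII + diacritical characters
  'https://example.com/fine-url'          // should match nothing
];

sampleUrls.forEach(function (url) {
  Object.keys(filters).forEach(function (name) {
    if (filters[name].test(url)) {
      console.log(name + ' matches: ' + url);
    }
  });
});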

The dashboard workflow is simple, yet it reveals all instances of a particular issue found in the crawl.

First, use the filters to focus on a specific linking error → the first table lists all the link destination URLs that match the selected filters.

Filter the issue → see every affected destination URL

 

Then click any of those rows, and the table below reveals all the pages linking to that URL, exactly which on-page element each link sits in, the anchor text, and whether the link is found in the initial HTML, the JS-rendered page, or both.

Click a destination URL → get the exact source pages, placement, and anchor text that need fixing
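For the curious, the drill-down behind that second table boils down to a query like this against the view from the earlier step. This is a hypothetical sketch – Looker Studio generates its own SQL, and the column names follow the placeholder view above:

// Hypothetical sketch of the drill-down: every page linking to one selected
// destination URL, with placement, origin (HTML vs. rendered) and anchor text.
const { BigQuery } = require('@google-cloud/bigquery');

async function inlinksFor(destinationUrl) {
  const bigquery = new BigQuery();
  const [rows] = await bigquery.query({
    query: `
      SELECT source, link_position, link_origin, anchor
      FROM seo_audit.inlinks_clean
      WHERE destination = @destination
      ORDER BY source`,
    params: { destination: destinationUrl },
  });
  return rows;
}

inlinksFor('https://example.com/discontinued-product')
  .then(function (rows) { console.log(rows); });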

 

Finally, since this table also contains internal links to all assets, not just web pages, the same dashboard can be used to find oversized images, ZIP files, or PDFs on the website. Handy, isn't it?

 

Why this matters

We hope this inspires other SEOs to put less pressure on their clients' internal teams and make it as easy as possible for them to work with the SEO audit findings. Technical SEO is difficult enough; a consultant's job is not to give the client a headache, but to reduce friction and drive action that leads to meaningful improvements.

 

Learn more about our approach to SEO and discover how we help businesses stand out online.