Another tool you can use is Chrome's Performance Monitor tab, which is actually different from the Performance tab. I see this as a sort of heartbeat monitor of how your website is doing perf-wise, without the hassle of manually starting and stopping a trace. If you see constant CPU usage here on an otherwise inert webpage, then you probably have a power usage problem.
[...] But in terms of memory usage, we have a new browser API that helps quite a bit with measuring it: performance.measureUserAgentSpecificMemory (which replaced the older performance.measureMemory, which sadly was much less of a mouthful). There are several advantages to this API:
- It returns a promise that automatically resolves after garbage collection. (No more need for weird hacks to force GC!)
- In the case of cross-origin iframes, which are process-isolated due to Site Isolation, it will break down the memory attribution per frame. So you can know exactly how memory-hungry your ads and embeds are!
[...] That said, this API can still be finicky to use. First off, it's only available in Chrome 89+. (In slightly older releases, you can set the "enable experimental web platform features" flag and use the old performance.measureMemory version.)
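As a rough sketch of what using the API looks like (the helper name and logging here are my own, not part of the API):

```javascript
// Sketch of using performance.measureUserAgentSpecificMemory (Chrome 89+,
// cross-origin isolated pages only). The helper name and logging format
// are illustrative choices, not part of the API.

// Pure helper: flatten the per-frame breakdown into { bytes, urls } pairs,
// so ads and embeds show up with their own byte counts.
function summarizeBreakdown(result) {
  return result.breakdown.map((entry) => ({
    bytes: entry.bytes,
    urls: entry.attribution.map((a) => a.url),
  }));
}

async function logMemory() {
  if (typeof performance === 'undefined' ||
      typeof performance.measureUserAgentSpecificMemory !== 'function') {
    return null; // unsupported browser, or page is not cross-origin isolated
  }
  // The returned promise resolves after a garbage collection,
  // so there's no need for hacks that force GC.
  const result = await performance.measureUserAgentSpecificMemory();
  console.log(`total: ${result.bytes} bytes`, summarizeBreakdown(result));
  return result;
}
```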
[...] The first thing to notice is that there are 6 fonts loading right after the HTML has been downloaded and parsed (requests 2–7). This is a sure sign that these fonts are being preloaded. What usually happens is that the DOM is constructed, then the CSS is downloaded to create the CSSOM. These are both combined to form the render tree. At this point, fonts referenced in the
@font-face rule will be discovered and requested by the browser (assuming they are needed to render the specified text on the page).
There's an important point to consider here. The order in which you list your font sources in the
@font-face rule is very important. According to the CSS Fonts Module Level 3 specification:
It [the src descriptor] is required for the @font-face rule to be valid. Its value is a prioritized, comma-separated list of external references or locally-installed font face names. When a font is needed the user agent iterates over the set of references listed, using the first one it can successfully activate. Fonts containing invalid data or local font faces that are not found are ignored and the user agent loads the next font in the list.
In other words, the browser starts at the top of the list looking for a font format it supports. The first one it encounters is loaded, and if that load succeeds, the sources listed later are ignored.
At request 9 we can see the CSS file being downloaded and parsed by the browser. Also notice how only 2 of the preloaded fonts have been fully downloaded. This is a critical part of the page load, with only limited bandwidth available. Under the hood, the browser still needs the fonts to be able to render text on screen. A preload will prime the browser cache, but it won't add a font to the FontFaceSet. For that you either need to use the CSS Font Loading API, or the traditional @font-face rule.
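The Font Loading API route might look like the following sketch. The family name and URLs are made up; the source string uses the same prioritized, comma-separated format (first activatable source wins) as the @font-face src descriptor:

```javascript
// Sketch of the CSS Font Loading API route. The 'Body' family name and
// font URLs are hypothetical.

// Pure helper: build a prioritized src list from { url, format } pairs.
function fontSourceList(sources) {
  return sources
    .map((s) => `url('${s.url}') format('${s.format}')`)
    .join(', ');
}

if (typeof document !== 'undefined' && 'fonts' in document) {
  const font = new FontFace('Body', fontSourceList([
    { url: '/fonts/body.woff2', format: 'woff2' }, // tried first
    { url: '/fonts/body.woff', format: 'woff' },   // fallback
  ]));
  // load() fetches the first usable source (hitting the preload-primed
  // HTTP cache), and adding the font puts it in the FontFaceSet.
  font.load().then(() => document.fonts.add(font));
}
```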
As you scroll down the GitHub homepage, we animate in certain elements to bring your attention to them. A typical way of building this has traditionally relied on listening to the scroll event, calculating the visibility of all elements that you're tracking, and triggering animations depending on the elements' position in the viewport. There's at least one big problem with this approach: calls to getBoundingClientRect() trigger reflows, so this technique can quickly create a performance bottleneck. Luckily,
IntersectionObservers are supported in all modern browsers, and they can be set up to notify you of an element's position in the viewport without ever listening to scroll events or calling getBoundingClientRect().
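A minimal sketch of such an observer-driven reveal might look like this. The '.animate-in' selector and 'is-visible' class are assumptions for illustration; any class that kicks off a CSS transition or animation works:

```javascript
// Sketch: reveal elements as they enter the viewport, with no scroll
// listeners and no getBoundingClientRect() calls.
function revealEntries(entries, observer) {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.classList.add('is-visible'); // starts the CSS animation
      observer.unobserve(entry.target);         // animate each element once
    }
  }
}

if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver(revealEntries, {
    threshold: 0.25, // notify once a quarter of the element is visible
  });
  document.querySelectorAll('.animate-in').forEach((el) => observer.observe(el));
}
```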
If you're powering any animations through video elements, you likely want to do two things: only play the video while it's visible in the viewport, and lazy-load the video when it's needed. Sadly, the loading="lazy" attribute doesn't work on videos, but if we use
IntersectionObservers to play videos as they appear in the viewport, we can get both of these features in one go. Together with setting preload to none, this simple observer setup saves us several megabytes on each page load.
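A sketch of that play/pause-on-visibility setup (the selector is an assumption; it matches videos marked up with preload="none"):

```javascript
// Sketch: play each video only while visible. Combined with
// preload="none" in the markup, the first play() call is also what
// triggers the deferred video download.
function toggleVideo(entry) {
  const video = entry.target;
  if (entry.isIntersecting) {
    video.play();   // also kicks off the deferred load the first time
  } else {
    video.pause();  // stop playback and decoding while off-screen
  }
}

if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver((entries) => entries.forEach(toggleVideo));
  document.querySelectorAll('video[preload="none"]').forEach((v) => observer.observe(v));
}
```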
Chrome has two levels of caching for V8-compiled code (both classic scripts and module scripts): a low-cost "best effort" in-memory cache maintained by V8 (the Isolate cache), and a full serialized on-disk cache.
The Isolate cache operates on scripts compiled in the same V8 Isolate (i.e. the same process; roughly, "the same website's pages when navigating in the same tab"). It is "best-effort" in the sense that it tries to be as fast and as minimal as possible, using data already available to us, at the cost of a potentially lower hit rate and a lack of caching across processes. [...]
The on-disk code cache is managed by Chrome (specifically, by Blink), and it fills the gap that the Isolate cache cannot: sharing code caches between processes, and between multiple Chrome sessions. It takes advantage of the existing HTTP resource cache, which manages caching and expiring data received from the web.
Code caching is done on a coarse, per-script basis, meaning that changes to any part of the script invalidate the cache for the entire script. If your shipping code consists of both stable and changing parts in a single script, e.g. libraries and business logic, then changes to the business logic code invalidate the cache of the library code.
Instead, you can split out the stable library code into a separate script, and include it separately. Then, the library code can be cached once, and stay cached when the business logic changes.
This has additional benefits if the libraries are shared across different pages on your website: since the code cache is attached to the script, the code cache for the libraries is also shared between pages.
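Concretely, the split might look like this (the file names are illustrative):

```html
<!-- Sketch: stable dependencies in their own script, so an app deploy
     doesn't evict the library's code cache. File names are made up. -->
<script src="/static/vendor.js"></script> <!-- changes rarely: stays cached -->
<script src="/static/app.js"></script>    <!-- changes often: only this cache is invalidated -->
```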
Only the functions that are compiled by the time the script finishes executing count towards the code cache, so there are many kinds of functions that won’t be cached despite executing at some later point. Event handlers (even onload), promise chains, unused library functions, and anything else that is lazily compiled without being called by the time </script> is seen all stays lazy and is not cached.
One way to force these functions to be cached is to force them to be compiled, and a common way of forcing compilation is by using IIFE heuristics. [...]
This means that functions that should be in the code cache can be forced into it by wrapping them in parentheses. This can, however, make startup time suffer if the hint is applied incorrectly, and in general this is somewhat of an abuse of heuristics, so our advice is to avoid doing this unless it is necessary.
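A minimal illustration of the difference (the function bodies are placeholders):

```javascript
// Sketch of the heuristic: a leading '(' makes V8 treat the function as a
// likely IIFE and compile it eagerly, so it lands in the code cache even
// though nothing calls it during the initial script execution.

// Lazily compiled: if nothing calls it before </script>, it isn't cached.
function lazyHandler(n) {
  return n * 2;
}

// The wrapping parentheses trigger eager compilation, so this function is
// compiled (and thus cacheable) by the time the script finishes executing.
const eagerHandler = (function (n) {
  return n * 2;
});
```

Both functions behave identically at runtime; only the compile timing, and therefore cacheability, differs.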
None of the above suggestions is guaranteed to speed up your web app. Unfortunately, code caching information is not currently exposed in DevTools, so the most robust way to find out which of your web app’s scripts are code-cached is to use the slightly lower-level chrome://tracing.