We're continuing on from the previous experiment: in Chrome M68, we have
added an experimental MediaStreamTrack constraint to control which echo
canceller is used, added support for a native echo canceller on Windows, and
improved the functionality of the native echo canceller on macOS. As before,
all of this is behind an Origin
Trial, so you'll have to sign up, or
start Chrome with a command line flag, if you want to try it out. For more
information, see below.
What's new?
First and foremost, it's now possible to control which echo canceller is being
used by including a new constraint in your getUserMedia calls, e.g.:
echoCancellationType: type
where type can be one of:
browser to use the software implementation provided by the browser; or
system to use the implementation provided by the underlying system.
Currently, these are the experimental native implementations on macOS and
Windows that are part of this trial.
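For example, a getUserMedia call that asks for the system echo canceller
could look like this (a sketch; error handling omitted):

navigator.mediaDevices.getUserMedia({
  audio: { echoCancellationType: 'system' },
}).then((stream) => {
  // Use the stream as usual, e.g. in a WebRTC call.
});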
If you leave the constraint out, Chrome will select an echo canceller as it
always has: if there's hardware echo cancellation, it will be used; otherwise
Chrome's software echo canceller will be. Without the constraint, Chrome will
never choose one of the two experimental echo cancellers that are part of
this trial.
As echoCancellationType works like any other constraint, it's possible to
specify system as an ideal value and have Chrome use it if it's available, or
fall back to the browser one otherwise. The browser echo canceller is always
available in Chrome. To figure out which echo canceller was picked, you
can call getSettings() on the getUserMedia audio track and check the value of
the echoCancellationType field.
Finally, you can check what echo cancellers are available for a
MediaStreamTrack by calling getCapabilities() on it. However,
echoCancellationType is not yet implemented for InputDeviceInfo.
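Putting this together, here's a sketch (assumed to run inside an async
function) that prefers the system echo canceller, falls back to the browser
one, and reports what was selected and what's available:

const stream = await navigator.mediaDevices.getUserMedia({
  audio: { echoCancellationType: { ideal: 'system' } },
});
const [track] = stream.getAudioTracks();
// Which echo canceller was actually picked:
console.log(track.getSettings().echoCancellationType);
// Which echo cancellers this track supports:
console.log(track.getCapabilities().echoCancellationType);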
Windows echo cancellation support
We've expanded the native echo canceller support to include Windows, using
the Voice Capture DSP component. As with the macOS echo canceller, we want
to evaluate its
performance, and see if there are cases where it performs better than our
software solution, if only for being placed closer to the audio hardware.
Contrary to the case with macOS, our initial testing on Windows hasn't been very
promising. We will continue to tweak the implementation to see if we can get it
to perform better. For now, it's probably best to avoid experimenting with the
Windows echo canceller on any larger scale. Try it out in controlled settings,
such as on your local machine, but don't expect it to work flawlessly!
Improved macOS echo cancellation support
During the previous experiment, the macOS implementation lacked the ability to
correctly track which output device was being used. This meant it would be
unable to cancel echo from any device that wasn't the computer's default
device. In many cases, this might not have been a problem, since macOS can
automatically switch the default device when headsets and the like are
plugged in or unplugged. It wouldn't work correctly in all cases, though.
This functionality has been added in Chrome M68 and is implemented for both
the macOS and Windows echo cancellers. Chrome's software echo canceller has
never been affected by this limitation, as it uses an internal loopback to
get the playout audio to cancel.
How to enable the experiment
To get this new behavior on your site, you need to be signed up for the
"Experimental support for native AEC" Origin Trial. If you just want to try
it out locally, the experiment can be enabled with a command line flag.
Passing this flag on the command line makes the new echoCancellationType
constraint globally available in Chrome for the current session. Using this
constraint, you can then test the native echo cancellers in your app, as
described above. This is the same command line flag as in the previous trial; on
Chrome M68 it will enable the new functionality. Enabling the new origin trial
will only activate the new functionality – it will not trigger the previous
trial in older versions of Chrome.
Filing feedback
As with the previous experiment, we're interested in the qualitative performance
of the macOS and Windows echo cancellers; primarily the former. We would also
like feedback on how well the new echoCancellationType constraint works in
practice, how easy it is to use, etc. This includes its inclusion in
getSettings and getCapabilities.
We're also interested in how Chrome interacts with other applications when using
these native echo cancellers, as well as any stability issues or other problems
with the implementation.
If you're trying this out, please file your feedback in this
bug.
If possible, include what hardware was used (OS version, hardware model,
microphone / headset / etc.). If you're doing larger-scale experiments, links
to comparative statistics on audio call quality, whether objective or
subjective, are appreciated.
Feature Policy allows web developers to selectively enable, disable, and
modify the behavior of certain APIs and web features in the browser. It's like
CSP but instead of controlling security, it
controls features!
The feature policies themselves are little opt-in agreements between developer
and browser that can help foster our goals of building (and maintaining) high
quality web apps.
Introduction
Building for the web is a rocky adventure. It's hard enough to build a top-notch
web app that nails performance and uses all the latest best practices. It's even
harder to keep that experience great over time. As your project evolves,
developers come on board, new features land, and the codebase grows. That
Great Experience ™ you once achieved may begin to deteriorate and UX
starts to suffer! Feature Policy is designed to keep you on track.
With Feature Policy, you opt-in to a set of "policies" for the browser to
enforce on specific features used throughout your site. These policies restrict
what APIs the site can access or modify the browser's default behavior for
certain features.
Here are examples of things you can do with Feature Policy:
Change the default behavior
of autoplay on mobile and third party videos.
Restrict a site from using sensitive APIs like camera or microphone.
Allow iframes to use the fullscreen API.
Block the use of outdated APIs like synchronous XHR and document.write().
Ensure images are sized properly (e.g. to prevent layout thrashing) and are
not too big for the viewport (e.g. to avoid wasting the user's bandwidth).
Policies are a contract between developer and browser. They inform the
browser about what the developer's intent is and thus, help keep us honest when
our app tries to go off the rails and do something bad. If the site or embedded
third-party content attempts to violate any of the developer's preselected
rules, the browser overrides the behavior with better UX or blocks the API
altogether.
Using Feature Policy
Feature Policy provides two ways to control features:
Through the Feature-Policy HTTP header.
With the allow attribute on iframes.
The biggest difference between the HTTP header and the allow attribute is
that the allow attribute only controls features within an iframe. The header
can control features in the main response + any iframe'd content within
the page. This is because iframes inherit the policies of their parent
page.
The Feature-Policy HTTP header
The easiest way to use Feature Policy is by sending the Feature-Policy HTTP
header with the response of a page. The value of this header is a policy or set
of policies that you want the browser to respect for a given origin:
Feature-Policy: <feature> <allow list origin(s)>
The origin allow list can take several different values:
*: The feature is allowed in top-level browsing contexts and in nested
browsing contexts (iframes).
'self': The feature is allowed in top-level browsing contexts and
same-origin nested browsing contexts. It is disallowed in cross-origin
documents in nested browsing contexts.
'none': The feature is disallowed in top-level browsing contexts and
disallowed in nested browsing contexts.
<origin(s)>: specific origins to enable the policy for (e.g. https://example.com).
Example
Let's say you wanted to block all content from using
the Geolocation API across your site. You can do that by sending a strict
allowlist of 'none' for the geolocation feature:
Feature-Policy: geolocation 'none'
If a piece of code or iframe tries to use the Geolocation API, the browser
blocks it. This is true even if the user has previously given
permission to share their location.
In other cases, it might make sense to relax this policy a bit. We can allow
our own origin to use the Geolocation API but prevent third-party content from
accessing it by setting 'self' in the allow list:
Feature-Policy: geolocation 'self'
The iframe allow attribute
The second way to use Feature Policy is for controlling content within
an iframe. Use the allow attribute to specify a policy list for
embedded content:
<!-- Allow all browsing contexts within this iframe to use fullscreen. -->
<iframe src="https://example.com..." allow="fullscreen"></iframe>
<!-- Equivalent to: -->
<iframe src="https://example.com..." allow="fullscreen *"></iframe>
<!-- Allow only iframe content on a particular origin to access the user's location. -->
<iframe src="https://google-developers.appspot.com/demos/..."
allow="geolocation https://google-developers.appspot.com"></iframe>
Note: Frames inherit the policy settings of their parent page. If the page
and iframe both specify a policy list, the more restrictive allow list is
used. See Inheritance rules.
What about the existing iframe attributes?
Some of the features controlled by Feature Policy have an existing
attribute to control their behavior. For example, <iframe allowfullscreen>
is an attribute that allows iframe content to use the
HTMLElement.requestFullscreen() API. There's also the allowpaymentrequest and
allowusermedia attributes for allowing the
Payment Request API and getUserMedia(),
respectively.
Try to use the allow attribute instead of these old
attributes where possible. For cases where you need to support backwards
compatibility using the allow attribute with an equivalent legacy attribute
is perfectly fine (e.g. <iframe allowfullscreen allow="fullscreen">).
Just note that the more restrictive policy wins. For example, the following
iframe would not be allowed to enter fullscreen because
allow="fullscreen 'none'" is more restrictive than allowfullscreen:
<!-- Blocks fullscreen access if the browser supports feature policy. -->
<iframe allowfullscreen allow="fullscreen 'none'" src="...">
Controlling multiple policies at once
Several features can be controlled at the same time by sending the HTTP header
with a ; separated list of policies:
Feature-Policy: unsized-media 'none'; geolocation 'self' https://example.com; camera *
or by sending a separate header for each policy:
Feature-Policy: unsized-media 'none'
Feature-Policy: geolocation 'self' https://example.com
Feature-Policy: camera *
This example would do the following:
Disallows the use of unsized-media for all browsing contexts.
Disallows the use of geolocation for all browsing contexts except for the
page's own origin and https://example.com.
Allows camera access for all browsing contexts.
Example - setting multiple policies on an iframe
<!-- Blocks the iframe from using the camera and microphone
(if the browser supports feature policy). -->
<iframe allow="camera 'none'; microphone 'none'">
JavaScript API
Heads up: While Chrome 60 added support for the Feature-Policy HTTP header
and the allow attribute on iframes, the JavaScript API is still
being fleshed out and is likely to change as it goes through the standardization
process. You can enable the API using the
--enable-experimental-web-platform-features flag in chrome://flags.
Feature Policy includes a small JavaScript API to allow client-side
code to determine what features are allowed by a page or frame. You can access
its goodies under document.policy for the main document or frame.policy for
iframes.
Example
The example below illustrates the results of sending a policy of
Feature-Policy: geolocation 'self' on the site https://example.com:
/* @return {Array<string>} List of feature policies allowed by the page. */
document.policy.allowedFeatures();
// → ["geolocation", "midi", "camera", "usb", "autoplay",...]
/* @return {boolean} True if the page allows the 'geolocation' feature. */
document.policy.allowsFeature('geolocation');
// → true
/* @return {boolean} True if the provided origin allows the 'geolocation' feature. */
document.policy.allowsFeature('geolocation', 'https://google-developers.appspot.com/');
// → false
/* @return {Array<string>} List of origins (used throughout the page) that are
allowed to use the 'geolocation' feature. */
document.policy.getAllowlistForFeature('geolocation');
// → ["https://example.com"]
List of policies
So what features can be controlled through Feature Policy?
Right now, there's a lack of documentation on what policies are implemented
and how to use them. The list will also grow over time as different browsers
adopt the spec and implement various policies. Feature policy will be a moving
target and good reference docs will definitely be needed.
For now, there are a couple of ways to see what features are controllable.
Check out our Feature Policy Kitchen Sink of demos. It has examples
of each policy that's been implemented in Blink.
If you have the --enable-experimental-web-platform-features flag turned on
in chrome://flags, query document.policy.allowedFeatures() on about:blank
to find the list:
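document.policy.allowedFeatures();
// → ["geolocation", "midi", "camera", ...] (the exact list varies by
// Chrome version and which flags are enabled)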
Check chromestatus.com for the policies that have been
implemented or are being considered in Blink.
To determine how to use some of these policies, check out the
spec's GitHub repo.
It contains a few explainers on some of the policies.
FAQ
When do I use Feature Policy?
All policies are opt-in, so use Feature Policy when/where it makes sense. For
example, if your app is an image gallery, the maximum-downscaling-image
policy would help you avoid sending gigantic images to mobile viewports.
Other policies like document-write and sync-xhr should be used with more
care. Turning them on could break third-party content like ads. On the
other hand, Feature Policy can be a gut check to make sure your pages
never use these terrible APIs!
Pro tip: Enable Feature Policy on your own content before enabling it on
third-party content.
Do I use Feature Policy in development or production?
Both. We recommend turning policies on during development and keeping the
policies active while in production. Turning policies on during development can
help you start off on the right track. It'll help you catch any unexpected
regressions before they happen. Keep policies turned on in production
to guarantee a certain UX for users.
Is there a way to report policy violations to my server?
A Reporting API
is in the works! Similar to how sites can opt-in to receiving reports about
CSP violations or
deprecations, you'll
be able to receive reports about feature policy violations in the wild.
What are the inheritance rules for iframe content?
Scripts (either first or third-party) inherit the policy of their browsing
context. That means that top-level scripts inherit the main document's policies.
iframes inherit the policies of their parent page. If the iframe has an
allow attribute, the stricter policy between the parent page and the allow
list wins. For more information on iframe usage, see the
allow attribute on iframes.
Disabling a feature policy is a one-way toggle. Once a policy is disabled, it
cannot be re-enabled by any frame or descendant.
If I apply a policy, does it last across page navigations?
No. The lifetime of a policy is for a single page navigation response. If
the user navigates to a new page, the Feature-Policy header must be explicitly
sent in the new response for the policy to apply.
As of now, Chrome is the only browser to support feature policy. However,
since the entire API surface is opt-in or feature-detectable, Feature Policy
lends itself nicely to progressive enhancement.
Conclusion
Feature Policy can help provide a well-lit path towards better UX and
good performance. It's especially handy when developing or maintaining an app
since it can help avoid potential footguns before they sneak into your codebase.
Experimenting with First Input Delay in the Chrome UX Report
The goal of the
Chrome User Experience Report
is to help the web community understand the distribution and evolution of real
user performance. To date, our focus has been on paint and page load metrics
like First Contentful Paint (FCP) and Onload (OL), which have helped us
understand how websites visually perform for users. Starting with the
June 2018 release, we’re experimenting with a new user-centric metric that
focuses on the interactivity of web pages:
First Input Delay
(FID). This new metric will enable us to better understand how responsive
websites are to user input.
FID was recently made available in Chrome as an
origin trial,
which means that websites can opt into experimenting with this new web platform
feature. Similarly, FID will be available in the Chrome UX Report as an
experimental metric, which means it will be available for the duration of the
origin trial within a separate "experimental" namespace.
How FID is measured
So what exactly is FID? Here’s how it’s defined in the
First Input Delay
announcement blog post:
First Input Delay (FID) measures the time from when a user first interacts
with your site (i.e. when they click a link, tap on a button, or use a custom,
JavaScript-powered control) to the time when the browser is actually able to
respond to that interaction.
It’s like measuring the time from ringing someone’s doorbell to them answering
the door. If it takes a long time, there could be many reasons. For example,
maybe the person is far away from the door or maybe they cannot move quickly.
Similarly, web pages may be busy doing other work or the user’s device may be
slow.
Exploring FID in the Chrome UX Report
With one month of FID data from millions of origins, there is already a wealth
of interesting insights to be discovered. Let’s look at a few queries that
demonstrate how to extract these insights from the Chrome UX Report on BigQuery.
Let’s start by querying for the percent of fast FID experiences for developers.google.com.
We can define a fast experience as one in which FID is less than 100 ms.
Per RAIL recommendations,
if the delay is 100 ms or better, it should feel instantaneous to the user.
SELECT
ROUND(SUM(IF(fid.start < 100, fid.density, 0)), 4) AS fast_fid
FROM
`chrome-ux-report.all.201806`,
UNNEST(experimental.first_input_delay.histogram.bin) AS fid
WHERE
origin = 'https://developers.google.com'
The results show that 95% of FID experiences on this origin are perceived as
instantaneous. That seems really good, but how does it compare to all origins
in the dataset?
SELECT
ROUND(SUM(IF(fid.start < 100, fid.density, 0)) / SUM(fid.density), 4) AS fast_fid
FROM
`chrome-ux-report.all.201806`,
UNNEST(experimental.first_input_delay.histogram.bin) AS fid
The results of this query show that 84% of FID experiences are less than 100 ms.
So developers.google.com is above average.
Note: This is also your periodic reminder that origin popularity is not
represented anywhere in the dataset. So an origin that gets a few million
instant FID experiences may have the same density as one that only gets a few
hundred visitors with instant FID experiences.
Next, let’s try slicing this data to see if there’s a difference between the
percent of fast FID on desktop versus mobile. One hypothesis is that mobile
devices have slower FID values, possibly due to slower hardware compared to
desktop computers. If the CPU is less powerful, it may be busier for a longer
time and result in slower FID experiences.
SELECT
form_factor.name AS form_factor,
ROUND(SUM(IF(fid.start < 100, fid.density, 0)) / SUM(fid.density), 4) AS fast_fid
FROM
`chrome-ux-report.all.201806`,
UNNEST(experimental.first_input_delay.histogram.bin) AS fid
GROUP BY
form_factor
form_factor    fast_fid
desktop        96.02%
phone          79.90%
tablet         76.48%
The results corroborate our hypothesis. Desktop has a higher cumulative density
of fast FID experiences than phone and tablet form factors. Understanding why
these differences exist, e.g. CPU performance, would require A/B testing outside
the scope of the Chrome UX Report.
Now that we’ve seen how to identify whether an origin has fast FID experiences,
let’s take a look at a couple of origins that perform really well.
This origin has 98%
of FID experiences under 100 ms. How do they do it? Analyzing how it’s built in
WebPageTest,
we can see that it’s quite an image-heavy WordPress page but it has 168 KB of
JavaScript that executes in about 500 ms on our lab machine. This is not very
much JavaScript according to the HTTP Archive,
which puts this page in the 28th percentile.
The pink bar spanning 2.7 to 3.0 seconds is the Parse HTML phase. During this
time the page is not interactive and appears visually incomplete (see “3.0s”
in the filmstrip above). After that, any long tasks that do need to be processed
are broken up to ensure that the main thread stays quiescent. The pink lines on
row 11 demonstrate how the JavaScript work is spread out in quick bursts.
This origin has 96% instant FID
experiences. It loads 267 KB of JavaScript (38th percentile in HTTP Archive) and
processes it for 900 ms on the lab machine. The filmstrip shows that the page
takes about 5 seconds to paint the background and about 2 more seconds to paint
the content.
What’s most interesting about the results
is that nothing interactive is even visible while the main thread is busy
between 3 and 5 seconds. It’s actually the slowness of this page’s FCP that
improves the FID. This is a good example of the importance of using many metrics
to represent the user experience.
Start exploring
You can learn more about FID in this week's episode of The State of the Web.
Having FID available in the Chrome UX Report enables us to establish a baseline
of interactivity experiences. Using this baseline, we can observe its change in
future releases or benchmark individual origins. If you’d like to start
collecting FID in your own site’s field measurements, sign up for the origin
trial by going to bit.ly/event-timing-ot
and selecting the Event Timing feature. And of course, start exploring
the dataset for interesting insights into the state of interactivity on the web.
This is still an experimental metric, so please give us your feedback and share
your analysis on the Chrome UX Report discussion group
or @ChromeUXReport on Twitter.
Chrome 67 on desktop has a new feature called Site Isolation enabled by
default. This
article explains what Site Isolation is all about, why it’s necessary, and why web developers should
be aware of it.
What is Site Isolation?
The internet is for watching cat videos and managing cryptocurrency wallets, amongst other things —
but you wouldn’t want fluffycats.example to have access to your precious cryptocoins! Luckily,
websites typically cannot access each other’s data inside the browser thanks to the Same-Origin
Policy. Still, malicious websites may try to bypass this policy to attack other websites, and
occasionally, security bugs are found in the browser code that enforces the Same-Origin Policy. The
Chrome team aims to fix such bugs as quickly as possible.
Site Isolation is a security feature in Chrome that offers an additional line of defense to make
such attacks less likely to succeed. It ensures that pages from different websites are always put
into different processes, each running in a sandbox that limits what the process is allowed to do.
It also blocks the process from receiving certain types of sensitive data from other sites. As a
result, with Site Isolation it’s much more difficult for a malicious website to use speculative
side-channel attacks like Spectre to steal data from other sites. As the Chrome team finishes
additional enforcements, Site Isolation will also help even when an attacker’s page can break some
of the rules in its own process.
Site Isolation effectively makes it harder for untrusted websites to access or steal information
from your accounts on other websites. It offers additional protection against various types of
security bugs, such as
the recent Meltdown and Spectre side-channel attacks.
Even when all cross-site pages are put into separate processes, pages can still legitimately request
some cross-site subresources, such as images and JavaScript. A malicious web page could use an
<img> element to load a JSON file with sensitive data, like your bank balance:
<img src="https://your-bank.example/balance.json">
<!-- Note: the attacker refused to add an `alt` attribute, for extra evil points. -->
Without Site Isolation, the contents of the JSON file would make it to the memory of the renderer
process, at which point the renderer notices that it’s not a valid image format and doesn’t render
an image. But, the attacker could then exploit a vulnerability like Spectre to potentially read that
chunk of memory.
Instead of using <img>, the attacker could also use <script> to commit the sensitive data to
memory:
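<script src="https://your-bank.example/balance.json"></script>
<!-- Same idea as the <img> attack above: the response ends up in renderer
     memory even though it isn't valid JavaScript. -->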
Cross-Origin Read Blocking, or CORB, is a new security feature that prevents
the contents of balance.json from ever entering the memory of the renderer
process, based on its MIME type.
Let’s break down how CORB works. A website can request two types of resources from a server:
data resources such as HTML, XML, or JSON documents
media resources such as images, JavaScript, CSS, or fonts
A website is able to receive data resources from its own origin or from other origins with
permissive CORS headers such as
Access-Control-Allow-Origin: *. On the other hand, media resources can be included from any
origin, even without permissive CORS headers.
CORB prevents the renderer process from receiving a cross-origin data resource (i.e. HTML, XML, or
JSON) if:
the resource has an X-Content-Type-Options: nosniff header
CORS doesn’t explicitly allow access to the resource
If the cross-origin data resource doesn’t have the X-Content-Type-Options: nosniff header set,
CORB attempts to sniff the response body to determine whether it’s HTML, XML, or JSON. This is
necessary because some web servers are misconfigured and serve images as text/html, for example.
Data resources that are blocked by the CORB policy are presented to the process as empty, although
the request does still happen in the background. As a result, a malicious web page has a hard time
pulling cross-site data into its process to steal.
For optimal security and to benefit from CORB, we recommend the following:
Mark responses with the correct Content-Type header. (For example, HTML resources should be
served as text/html, JSON resources with
a JSON MIME type and XML resources with
an XML MIME type).
Opt out of sniffing by using the X-Content-Type-Options: nosniff header. Without this header,
Chrome does do a quick content analysis to try to confirm that the type is correct, but since this
errs on the side of allowing responses through to avoid blocking things like JavaScript files,
you’re better off affirmatively doing the right thing yourself.
Why should web developers care about Site Isolation?
For the most part, Site Isolation is a behind-the-scenes browser feature that is not directly
exposed to web developers. There is no new web-exposed API to learn, for example. In general, web
pages shouldn’t be able to tell the difference when running with or without Site Isolation.
However, there are some exceptions to this rule. Enabling Site Isolation comes with a few subtle
side-effects that might affect your website. We maintain
a list of known Site Isolation issues,
and we elaborate on the most important ones below.
Full-page layout is no longer synchronous
With Site Isolation, full-page layout is no longer guaranteed to be synchronous, since the frames of
a page may now be spread across multiple processes. This might affect pages if they assume that a
layout change immediately propagates to all frames on the page.
As an example, let’s consider a website named fluffykittens.example that communicates with a
social widget hosted on social-widget.example:
<!-- https://fluffykittens.example/ -->
<iframe src="https://social-widget.example/" width="123"></iframe>
<script>
const iframe = document.querySelector('iframe');
iframe.width = 456;
iframe.contentWindow.postMessage(
// The message to send:
'Meow!',
// The target origin:
'https://social-widget.example'
);
</script>
At first, the social widget <iframe>'s width is 123 pixels. But then, the FluffyKittens page
changes the width to 456 pixels (triggering layout) and sends a message to the social widget,
which has the following code:
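// https://social-widget.example/ (a sketch reconstructing the widget's
// handler from the description below)
window.addEventListener('message', () => {
  console.log(document.documentElement.clientWidth);
});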
Whenever the social widget receives a message through the postMessage API, it logs the width of
its root <html> element.
Which width value gets logged? Before Chrome enabled Site Isolation, the answer was 456. Accessing
document.documentElement.clientWidth forces layout, which used to be synchronous before Chrome
enabled Site Isolation. However, with Site Isolation enabled, the cross-origin social widget
re-layout now happens asynchronously in a separate process. As such, the answer can now also be
123, i.e. the old width value.
If a page changes the size of a cross-origin <iframe> and then sends a postMessage to it, with
Site Isolation the receiving frame may not yet know its new size when receiving the message. More
generally, this might break pages if they assume that a layout change immediately propagates to all
frames on the page.
In this particular example, a more robust solution would set the width in the parent frame, and
detect that change in the <iframe> by listening for a resize event.
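A sketch of that more robust approach, running inside the widget:

// Inside https://social-widget.example/: react to the size change itself
// instead of relying on the timing of the postMessage.
window.addEventListener('resize', () => {
  console.log(document.documentElement.clientWidth);
});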
Key Point: In general, avoid making implicit assumptions about browser layout behavior. Full-page
layout involving cross-origin <iframe>s was never explicitly specified to be synchronous, so it’s
best to not write code that relies on this.
Unload handlers might time out more often
When a frame navigates or closes, the old document as well as any subframe documents embedded in it
all run their unload handlers. If the new navigation happens in the same renderer process (e.g. for
a same-origin navigation), the unload handlers of the old document and its subframes can run for
an arbitrarily long time before allowing the new navigation to commit.
In this situation, the unload handlers in all frames are very reliable.
However, even without Site Isolation some main frame navigations are cross-process, which impacts
unload handler behavior. For example, if you navigate from old.example to new.example by typing
the URL in the address bar, the new.example navigation happens in a new process. The unload
handlers for old.example and its subframes run in the old.example process in the background,
after the new.example page is shown, and the old unload handlers are terminated if they don’t
finish within a certain timeout. Because the unload handlers may not finish before the timeout,
the unload behavior is less reliable.
Note: Currently, DevTools support for unload handlers is largely missing. For example, breakpoints
inside of unload handlers don’t work, any requests made during unload handlers don’t show up in the
Network pane, any console.log calls made during unload handlers may not show up, etc. Star
Chromium issue #851882 to
receive updates.
With Site Isolation, all cross-site navigations become cross-process, so that documents from
different sites don’t share a process with each other. As a result, the above situation applies in
more cases, and unload handlers in <iframe>s often have the background and timeout behaviors
described above.
Another difference resulting from Site Isolation is the new parallel ordering of unload handlers:
without Site Isolation, unload handlers run in a strict top-down order across frames. But with Site
Isolation, unload handlers run in parallel across different processes.
These are fundamental consequences of enabling Site Isolation. The Chrome team is working on
improving the reliability of unload handlers for common use cases, where feasible. We’re also
aware of bugs where subframe unload handlers aren’t yet able to utilize certain features and are
working to resolve them.
An important case for unload handlers is to send end-of-session pings. This is commonly done as
follows:
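// A sketch of the common pattern: an end-of-session ping sent from an
// unload handler. The '/end-of-session' endpoint is a placeholder.
window.addEventListener('unload', () => {
  navigator.sendBeacon('/end-of-session');
});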
Site Isolation makes it harder for untrusted websites to access or steal information from your
accounts on other websites by isolating each site into its own process. As part of that, CORB tries
to keep sensitive data resources out of the renderer process. Our recommendations above ensure you
get the most out of these new security features.
Thanks to
Alex Moshchuk,
Charlie Reis,
Jason Miller,
Nasko Oskov,
Philip Walton,
Shubhie Panicker, and
Thomas Steiner
for reading a draft version of this article and giving their feedback.
You've designed a webapp, built its code and service worker, and finally added the
Web App Manifest to describe how it should behave when
'installed' on a user's device. This includes things like high-resolution icons to use for e.g. a
mobile phone's launcher or app switcher, or how your webapp should start when opened from the
user's home screen.
And while many browsers will respect the Web App Manifest, not every browser will load or respect
every value you specify. Enter PWACompat, a
library that takes your Web App Manifest and automatically inserts relevant meta or link tags
for icons of different sizes, the favicon, startup mode, colors etc.
This means you no longer have to add innumerable, non-standard tags (like <link rel="icon" ... />
or <meta name="orientation" ... />) to your pages. And for iOS home screen applications, PWACompat
will even dynamically create splash screens so you don't have to generate one for every different
screen size.
Using PWACompat
To use PWACompat, be sure to link to your Web App
Manifest on all your pages:
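A minimal sketch (the manifest filename and the PWACompat script URL are
illustrative; check the library's README for the current path):

<link rel="manifest" href="manifest.webmanifest" />
<!-- Include PWACompat after the manifest link; it reads the manifest and
     generates the compatibility tags for you. -->
<script async src="https://cdn.jsdelivr.net/npm/pwacompat" crossorigin="anonymous"></script>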
Modern browsers today will sometimes suspend pages or discard them entirely when
system resources are constrained. In the future, browsers want to do this
proactively, so they consume less power and memory. The Page Lifecycle
API, shipping in Chrome 68,
provides lifecycle hooks so your pages can safely handle these browser
interventions without affecting the user experience. Take a look at the API to
see whether you should be implementing these features in your application.
Background
Application lifecycle is a key way that modern operating systems manage
resources. On Android, iOS, and recent Windows versions, apps can be started and
stopped at any time by the OS. This allows these platforms to streamline and
reallocate resources where they best benefit the user.
On the web, there has historically been no such lifecycle, and apps can be kept
alive indefinitely. With large numbers of web pages running, critical system
resources such as memory, CPU, battery, and network can be oversubscribed,
leading to a bad end-user experience.
While the web platform has long had events related to lifecycle states
— like load,
unload, and
visibilitychange
— these events only allow developers
to respond to user-initiated lifecycle state changes. For the web to work
reliably on low-powered devices (and be more resource conscious in general on
all platforms) browsers need a way to proactively reclaim and re-allocate system
resources.
In fact, browsers today already do take active measures to conserve
resources
for pages in background tabs, and many browsers (especially Chrome) would like
to do a lot more of this — to lessen their overall resource footprint.
The problem is developers currently have no way to prepare for these types of
system-initiated interventions or even know that they're happening. This means
browsers need to be conservative or risk breaking web pages.
The Page Lifecycle API attempts to solve these problems by:
Introducing and standardizing the concept of lifecycle states on the web.
Defining new, system-initiated states that allow browsers to limit the
resources that can be consumed by hidden or inactive tabs.
Creating new APIs and events that allow web developers to respond to
transitions to and from these new system-initiated states.
This solution provides the predictability web developers need to build
applications resilient to system interventions, and it allows browsers to more
aggressively optimize system resources, ultimately benefiting all web users.
The rest of this post will introduce the new Page Lifecycle features shipping in
Chrome 68 and explore how they relate to all the existing web platform states
and events. It will also give recommendations and best-practices for the types
of work developers should (and should not) be doing in each state.
Overview of Page Lifecycle states and events
All Page Lifecycle states are discrete and mutually exclusive, meaning a page
can only be in one state at a time. Most changes to a page's lifecycle state
are observable via DOM events (see the developer recommendations for each
state for the exceptions).
Perhaps the easiest way to explain the Page Lifecycle states — as well as
the events that signal transitions between them — is with a diagram:
States
The following table explains each state in detail. It also lists the possible
states that can come before and after as well as the events developers can
use to observe changes.
State
Description
Active
A page is in the active state if it is visible and has
input focus.
Possible previous states: passive (via the focus event)
Frozen
In the frozen state the browser suspends execution of
freezable
tasks in the page's
task queues until the page is unfrozen. This means things like
JavaScript timers and fetch callbacks do not run. Already-running
tasks can finish (most importantly the freeze callback), but they may be limited in what they
can do and how long they can run.
Browsers freeze pages as a way to preserve CPU/battery/data usage; they
also do it as a way to enable faster
back/forward navigations — avoiding the need for a full page
reload.
Possible previous states: hidden (via the freeze event)
Terminated
A page is in the terminated state once it has started being
unloaded and cleared from memory by the browser. No
new tasks can start in this state, and in-progress tasks may be
killed if they run too long.
Possible previous states: hidden (via the pagehide event)
Possible next states:
NONE
Discarded
A page is in the discarded state when it is unloaded by the
browser in order to conserve resources. No tasks, event callbacks, or
JavaScript of any kind can run in this state, as discards typically
occur under resource constraints, where starting new processes is
impossible.
Browsers dispatch a lot of events, but only a small portion of them signal a
possible change in Page Lifecycle state. The table below outlines all events
that pertain to lifecycle and lists what states they may transition to and from.
Event
Description
visibilitychange
The document's
visibilityState value has changed. This can
happen when a user navigates to a new page, switches tabs, closes a tab,
minimizes or closes the browser, or switches apps on mobile operating
systems.
pageshow
This could be either a brand new page load or a page taken from the
page navigation cache. If the page
was taken from the page navigation cache, the event's
persisted property is true, otherwise it is
false.
Possible previous states: frozen (a resume
event would have also fired)
pagehide
If the user is navigating to another page and the browser is able to add
the current page to the page navigation
cache to be reused later, the event's persisted property
is true. When true, the page is entering the
frozen state, otherwise it is entering the terminated state.
* Indicates a new event defined by the Page Lifecycle API
New features added in Chrome 68
The chart above shows two states that are system-initiated rather than
user-initiated: frozen and discarded.
As mentioned above, browsers today already occasionally freeze and discard
hidden tabs (at their discretion), but developers have no way of knowing when
this is happening.
In Chrome 68, developers can now observe when a hidden tab is frozen and
unfrozen by listening for the freeze
and resume events on document.
document.addEventListener('freeze', (event) => {
// The page is now frozen.
});
document.addEventListener('resume', (event) => {
// The page has been unfrozen.
});
In Chrome 68 the document object also now includes a
wasDiscarded
property. To determine whether a page was discarded while in a hidden
tab, you can inspect the value of this property at page load time (note:
discarded pages must be reloaded to use again).
if (document.wasDiscarded) {
// Page was previously discarded by the browser while in a hidden tab.
}
For advice on what things are important to do in the freeze and resume
events, as well as how to handle and prepare for pages being discarded, see
developer recommendations for each state.
The next several sections offer an overview of how these new features fit into
the existing web platform states and events.
Observing Page Lifecycle states in code
In the active, passive, and hidden
states, it's possible to run JavaScript code that determines the current
Page Lifecycle state from existing web platform APIs.
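As a sketch, a getState() function consistent with that description (and
referenced by the code below) can be built from document.visibilityState and
document.hasFocus():

const getState = () => {
  if (document.visibilityState === 'hidden') {
    return 'hidden';
  }
  if (document.hasFocus()) {
    return 'active';
  }
  return 'passive';
};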
The frozen and terminated states, on the
other hand, can only be detected in their respective event listener
(freeze and pagehide) as the state is
changing.
Observing state changes
Building on the getState() function defined above, you can observe all Page
Lifecycle state changes with the following code.
// Stores the initial state using the `getState()` function (defined above).
let state = getState();
// Accepts a next state and, if there's been a state change, logs the
// change to the console. It also updates the `state` value defined above.
const logStateChange = (nextState) => {
const prevState = state;
if (nextState !== prevState) {
console.log(`State change: ${prevState} >>> ${nextState}`);
state = nextState;
}
};
// These lifecycle events can all use the same listener to observe state
// changes (they call the `getState()` function to determine the next state).
['pageshow', 'focus', 'blur', 'visibilitychange', 'resume'].forEach((type) => {
window.addEventListener(type, () => logStateChange(getState()), {capture: true});
});
// The next two listeners, on the other hand, can determine the next
// state from the event itself.
window.addEventListener('freeze', () => {
// In the freeze event, the next state is always frozen.
logStateChange('frozen');
}, {capture: true});
window.addEventListener('pagehide', (event) => {
if (event.persisted) {
// If the event's persisted property is `true` the page is about
// to enter the page navigation cache, which is also in the frozen state.
logStateChange('frozen');
} else {
// If the event's persisted property is not `true` the page is
// about to be unloaded.
logStateChange('terminated');
}
}, {capture: true});
The above code does three things:
Sets the initial state using the getState() function.
Defines a function that accepts a next state and, if there's a change,
logs the state changes to the console.
Adds capturing event listeners for all necessary lifecycle events, which
in turn call logStateChange(), passing in the next state.
One thing to note about the above code is that all the event listeners are added
to window and they all pass
{capture: true}.
There are a few reasons for this:
Not all Page Lifecycle events have the same target. pagehide and
pageshow are fired on window; visibilitychange, freeze, and
resume are fired on document, and focus and blur are fired on their
respective DOM elements.
Most of these events do not bubble, which means it's impossible to add
non-capturing event listeners to a common ancestor element and observe all
of them.
The capture phase executes before the target or bubble phases, so adding
listeners there helps ensure they run before other code can cancel them.
Managing cross-browser differences
The chart in the beginning of this article outlines the state and event flow
according to the Page Lifecycle API. But since this API has just been
introduced, the new events and DOM APIs have not been implemented in all
browsers.
Furthermore, the events that are implemented in all browsers today are not
implemented consistently. For example:
Some browsers do not fire a blur event when switching tabs. This means
(contrary to the diagram and tables above) a page could go from the active
state to the hidden state without going through passive first.
Several browsers implement a page navigation cache,
and the Page Lifecycle API classifies cached pages as being in the frozen
state. Since this API is brand new, these browsers do not yet implement the
freeze and resume events, though this state can still be observed via
the pagehide and pageshow events.
Older versions of Internet Explorer (10 and below) do not implement the
visibilitychange event.
The dispatch order of the pagehide and visibilitychange events has
changed. Previously
browsers would dispatch visibilitychange after pagehide if the page's
visibility state was visible when the page was being unloaded. New Chrome
versions will dispatch visibilitychange before pagehide, regardless of
the document's visibility state at unload time.
To make it easier for developers to deal with these cross-browser
inconsistencies and focus solely on following the lifecycle state recommendations
and best practices, we've released
PageLifecycle.js, a
JavaScript library for observing Page Lifecycle API state changes.
PageLifecycle.js
normalizes cross-browser differences in event firing order so that state changes
always occur exactly as outlined in the chart and tables in this article (and
do so consistently in all browsers).
Developer recommendations for each state
As developers, it's important to both understand Page Lifecycle states and
know how to observe them in code because the type of work you should (and should
not) be doing depends largely on what state your page is in.
For example, it clearly doesn't make sense to display a transient notification
to the user if the page is in the hidden state. While this example is pretty
obvious, there are other recommendations that aren't so obvious that are worth
enumerating.
State
Developer recommendations
Active
The active state is the most critical time for the user and thus
the most important time for your page to be
responsive to user input.
In the passive state the user is not interacting with the page,
but they can still see it. This means UI updates and animations should still
be smooth, but the timing of when these updates occur is less critical.
When the page changes from active to passive, it's a
good time to persist unsaved application state.
Hidden
When the page changes from passive to hidden, it's
possible the user will not interact with it again until it's reloaded.
The transition to hidden is also often the last state change
that's reliably observable by developers (this is especially true on
mobile, as users can close tabs or the browser app itself, and the
beforeunload, pagehide, and unload
events are not fired in those cases).
This means you should treat the hidden state as the likely end to the
user's session. In other words, persist any unsaved application state
and send any unsent analytics data.
You should also stop making UI updates (since they won't be seen
by the user), and you should stop any tasks that a user wouldn't want
running in the background.
Frozen
In the frozen state,
freezable tasks in the
task queues are suspended until the page is unfrozen — which may
never happen (e.g. if the page is discarded).
This means when the page changes from hidden to frozen
it's essential that you stop any timers or tear down any connections that,
if frozen, could affect other open tabs in the same origin, or affect the
browser's ability to put the page in the
page navigation cache.
You should also persist any dynamic view state (e.g. scroll position
in an infinite list view) to
sessionStorage (or
IndexedDB via
commit()) that you'd want restored if the page were
discarded and reloaded later.
If the page transitions from frozen back to hidden,
you can reopen any closed connections or restart any polling you
stopped when the page was initially frozen.
Terminated
You generally do not need to take any action when a page transitions
to the terminated state.
Since pages being unloaded as a result of user action always go
through the hidden state before entering the terminated
state, the hidden state is where session-ending logic (e.g.
persisting application state and reporting to analytics) should be
performed.
Also (as mentioned in the recommendations for
the hidden state), it's very important for developers to realize
that the transition to the terminated state cannot be reliably
detected in many cases (especially on mobile), so developers who depend
on termination events (e.g. beforeunload,
pagehide, and unload) are likely losing data.
Discarded
The discarded state is not observable by developers at the
time a page is being discarded. This is because pages are typically
discarded under resource constraints, and unfreezing a page just to allow
script to run in response to a discard event is simply not possible in
most cases.
As a result, you should prepare for the possibility of a discard in
the change from hidden to frozen, and then you can
react to the restoration of a discarded page at page load time by
checking document.wasDiscarded.
Once again, since reliability and ordering of lifecycle events is not
consistently implemented in all browsers, the easiest way to follow the advice
in the table above is to use
PageLifecycle.js.
Legacy lifecycle APIs to avoid
The unload event
Many developers treat the unload event as a guaranteed callback and use it as
an end-of-session signal to save state and send analytics data, but doing this
is extremely unreliable, especially on mobile! The unload event does not
fire in many typical unload situations, including closing a tab from the tab
switcher on mobile or closing the browser app from the app switcher.
Furthermore, the mere presence of a registered unload event handler (via
either onunload or addEventListener()) can prevent browsers from being able
to put pages in the page navigation cache for faster
back and forward loads.
In all modern browsers (including IE11), it's recommended to always use the
pagehide event to detect possible page unloads (a.k.a the
terminated state) rather than the unload event. If you
need to support Internet Explorer versions 10 and lower, you should feature
detect the pagehide event and only use unload if the browser doesn't support
pagehide:
const terminationEvent = 'onpagehide' in self ? 'pagehide' : 'unload';
addEventListener(terminationEvent, (event) => {
// Note: if the browser is able to cache the page, `event.persisted`
// is `true`, and the state is frozen rather than terminated.
}, {capture: true});
For more information on page navigation caches, and why the unload event
harms them, see the "What is the page navigation cache?" FAQ below.
The beforeunload event
The beforeunload event has a similar problem to the unload event, in that
when present it prevents browsers from caching the page in their
page navigation cache.
The difference between beforeunload and unload, though, is that there are
legitimate uses of beforeunload. For instance, when you want to warn the user
that they have unsaved changes they'll lose if they continue unloading the page.
Since there are valid reasons to use beforeunload but using it prevents pages
from being added to the page navigation cache, it's recommended that you only
add beforeunload listeners when a user has unsaved changes and then remove
them immediately after the unsaved changes are saved.
In other words, don't do this (since it adds a beforeunload listener
unconditionally):
addEventListener('beforeunload', (event) => {
// A function that returns `true` if the page has unsaved changes.
if (pageHasUnsavedChanges()) {
event.preventDefault();
return event.returnValue = 'Are you sure you want to exit?';
}
}, {capture: true});
Instead do this (since it only adds the beforeunload listener when it's
needed, and removes it when it's not):
const beforeUnloadListener = (event) => {
event.preventDefault();
return event.returnValue = 'Are you sure you want to exit?';
};
// A function that invokes a callback when the page has unsaved changes.
onPageHasUnsavedChanges(() => {
addEventListener('beforeunload', beforeUnloadListener, {capture: true});
});
// A function that invokes a callback when the page's unsaved changes are resolved.
onAllChangesSaved(() => {
removeEventListener('beforeunload', beforeUnloadListener, {capture: true});
});
FAQs
My page does important work when it's hidden,
how can I stop it from being frozen or discarded?
There are lots of legitimate reasons web pages shouldn't be frozen while running
in the hidden state. The most obvious example is an app that plays music.
There are also situations where it would be risky for Chrome to discard a page,
like if it contains a form with unsubmitted user input, or if it has a
beforeunload handler that warns when the page is unloading.
For the moment, Chrome is going to be conservative when discarding pages and
only do so when it's confident it won't affect users. For example, pages that
have been observed to do any of the following while in the hidden state will not
be discarded unless under extreme resource constraints:
Playing audio
Using WebRTC
Updating the tab title or favicon
Showing alerts
Sending push notifications
What is the page navigation cache?
The page navigation cache is a general term used to describe a navigation
optimization some browsers implement that makes using the back and forward
buttons faster. WebKit calls it the
Page Cache and
Firefox calls it the
Back-Forwards Cache
(or bfcache for short).
When a user navigates away from a page, these browsers freeze a version of that
page so that it can be quickly resumed in case the user navigates back using
the back or forward buttons. Remember that adding a beforeunload or unload
event handler prevents this optimization from being
possible.
For all intents and purposes, this freezing is functionally the same as
the freezing browsers perform to conserve CPU/battery; for that reason it's
considered part of the frozen lifecycle state.
Why aren't the load or DOMContentLoaded events mentioned?
The Page Lifecycle API defines states to be discrete and mutually exclusive.
Since a page can be loaded in either the active, passive, or hidden state, a
separate loading state does not make sense, and since the load and
DOMContentLoaded events don't signal a lifecycle state change, they're not
relevant to this API.
If I can't run asynchronous APIs in the frozen or terminated states,
how can I save data to IndexedDB?
In frozen and terminated states,
freezable tasks
in a page's task queues
are suspended, which means asynchronous and callback-based APIs such as IndexedDB
cannot be reliably used.
In the future, we will add a commit() method to
IDBTransaction objects, which will
give developers a way to perform what are effectively write-only transactions
that don't require callbacks. In other words, if the developer is just writing
data to IndexedDB and not performing a complex transaction consisting of reads
and writes, the commit() method will be able to finish before task queues are
suspended (assuming the IndexedDB database is already open).
For code that needs to work today, however, developers have two options:
Use Session Storage: Session Storage
is synchronous and is persisted across page discards.
Use IndexedDB from your service worker: a service worker can store data in
IndexedDB after the page has been terminated or discarded. In the freeze or
pagehide event listener you can send data to your service worker via
postMessage(),
and the service worker can handle saving the data.
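Here's a sketch of that second option; appState is a hypothetical object
holding the unsaved data, and the service worker is assumed to write it to
IndexedDB in its own message handler:

document.addEventListener('freeze', () => {
  if (navigator.serviceWorker.controller) {
    navigator.serviceWorker.controller.postMessage({
      type: 'save-state',
      state: appState,
    });
  }
});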
Testing your app in the frozen and discarded states
To test how your app behaves in the frozen and discarded states, you can visit
chrome://discards to actually freeze or discard any of your
open tabs.
This allows you to ensure your page correctly handles the freeze and resume
events as well as the document.wasDiscarded flag when pages are reloaded after
a discard.
Summary
Developers who want to respect the system resources of their user's devices
should build their apps with Page Lifecycle states in mind. It's critical that
web pages are not consuming excessive system resources in situations that the
user wouldn't expect.
In addition, the more developers start implementing the new Page Lifecycle APIs,
the safer it will be for browsers to freeze and discard pages that aren't being
used. This means browsers will consume less memory, CPU, battery, and network
resources, which is a win for users.
Lastly, developers who want to implement the
best practices described in this
article but don't want to memorize all the possible state and event transitions
paths can use
PageLifecycle.js to easily
observe lifecycle state changes consistently in all browsers.
If your site meets the
add to home screen criteria,
Chrome will no longer show the add to home screen banner. Instead, you're in
control of when and how to prompt the user.
To prompt the user, listen for the beforeinstallprompt event, then, save
the event and add a button or other UI element to your app to indicate it can
be installed.
let installPromptEvent;

window.addEventListener('beforeinstallprompt', (event) => {
  // Prevent Chrome <= 67 from automatically showing the prompt.
  event.preventDefault();
  // Stash the event so it can be triggered later.
  installPromptEvent = event;
  // Update the install UI to notify the user the app can be installed.
  document.querySelector('#install-button').disabled = false;
});
When the user clicks the install button, call prompt() on the saved
beforeinstallprompt event; Chrome then shows the add to home screen dialog.
btnInstall.addEventListener('click', () => {
  // Update the install UI to remove the install button.
  document.querySelector('#install-button').disabled = true;
  // Show the modal add to home screen dialog.
  installPromptEvent.prompt();
  // Wait for the user to respond to the prompt.
  installPromptEvent.userChoice.then(handleInstall);
});
To give you time to update your site, Chrome will show a mini-infobar the first
time a user visits a site that meets the add to home screen criteria. Once
dismissed, the mini-infobar will not be shown again for a while.
When a user has a large number of tabs running, critical resources such as
memory, CPU, battery and the network can be oversubscribed, leading to a
bad user experience.
If your site is running in the background, the system may suspend it to
conserve resources. With the new Page Lifecycle API, you can now listen for,
and respond to these events.
For example, if a user's had a tab in the background for a while, the browser
may choose to suspend script execution on that page to conserve resources.
Before doing so, it will fire the freeze event, allowing you to close open
IndexedDB or network connections or save any unsaved view state. Then, when
the user refocuses the tab, the resume event is fired, where you can
reinitialize anything that was torn down.
const prepareForFreeze = () => {
  // Close any open IndexedDB connections.
  // Release any web locks.
  // Stop timers or polling.
};

const reInitializeApp = () => {
  // Restore IndexedDB connections.
  // Re-acquire any needed web locks.
  // Restart timers or polling.
};

document.addEventListener('freeze', prepareForFreeze);
document.addEventListener('resume', reInitializeApp);
Check out Phil's Page Lifecycle API
post for lots more detail, including code samples, tips and more.
You can find the spec and an
explainer doc on GitHub.
Payment Handler API
The Payment Request API is an open,
standards-based way to accept payments. The
Payment Handler API extends the
reach of Payment Request by enabling web-based payment apps to facilitate
payments directly within the Payment Request experience.
As a seller, adding an existing web-based payment app is as easy as adding an
entry to the supportedMethods property.
const request = new PaymentRequest([{
  // Your custom payment method identifier comes here.
  supportedMethods: 'https://bobpay.xyz/pay'
}], {
  total: {
    label: 'total',
    amount: { value: '10', currency: 'USD' }
  }
});
If a service worker that can handle the specified payment method is installed,
it will show up in the Payment Request UI and the user can pay with it.
Eiji has a great post that shows
how to implement this for merchant sites, and for payment handlers.
And more!
These are just a few of the changes in Chrome 68 for developers; of course,
there's plenty more.
Be sure to check out New in Chrome DevTools to
learn what's new in DevTools in Chrome 68.
Subscribe
Click the subscribe button on our
YouTube channel, and
you'll get an email notification whenever we launch a new video, or add our
RSS feed to your feed reader.
I’m Pete LePage, and as soon as Chrome 69 is released, I’ll be right
here to tell you -- what’s new in Chrome!
Speed is now a landing page factor for Google Search and Ads
When real users have a slow experience on mobile, they're much less likely
to find what they are looking for or purchase from you in the future. For many
sites this equates to a huge missed opportunity, especially when more than half
of visits are abandoned if a mobile page takes over 3 seconds to
load.
Last week, Google Search and Ads teams announced two new speed initiatives to
help improve user-experience on the web. Both efforts are leveraging real-world
user experience data (see Chrome User Experience
Report) to prioritize and highlight
pages that deliver optimized and fast user experiences.
Speed is now used as a ranking factor for mobile searches
Users want to find answers to their questions quickly and
data
shows that people really care about how quickly their pages load. The Search
team announced speed would be a ranking
signal
for desktop searches in 2010 and as of this month (July 2018), page speed will
be a ranking factor for mobile
searches
too.
If you're a developer working on a site, now is a good time to evaluate your
performance using our speed
tools. Think about how
performance affects the user experience
of your pages and consider measuring a variety of real-world user-centric
performance metrics.
PageSpeed Insights is an online tool that shows
speed field data for
your site, alongside suggestions for common optimizations to improve it.
Lighthouse is a lab
tool providing
personalized advice on how to improve your website across performance,
accessibility, PWA, SEO, and other best practices.
The Mobile Speed Score for ads landing pages
Advertising and speed go hand in hand, with faster landing pages delivering
better ROI. Last week, at Google Marketing Live, the Ads team introduced the
new mobile speed
score.
The 1-10 mobile speed score (10 being the
fastest) is based on
real-world user experience data, taking into account many factors, including
the relationship between page speed and potential conversion rates. This score
lets you quickly see which of your mobile landing pages are providing a fast
experience and which need some work.
You should also implement Parallel
tracking, which will soon
(October 30th, 2018) become mandatory for all Ads accounts. This enhancement
helps load landing pages more quickly, which can reduce lost visits. Parallel
tracking sends customers directly from your ad to your final URL while click
measurement happens in the background using the browser's
navigator.sendBeacon()
method.
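For context, navigator.sendBeacon() queues a small asynchronous request that the browser completes even as the page navigates away; a minimal sketch (the endpoint and payload are illustrative):
document.querySelector('#cta').addEventListener('click', () => {
  // Fire-and-forget measurement; the navigation is not delayed.
  navigator.sendBeacon('/analytics/click', JSON.stringify({id: 'cta'}));
});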
To help discuss and prioritize speed in your organization, we've made available
tools like the Speed
Scorecard, allowing you to
compare mobile site-speed to your peers, and the Impact
Calculator, a tool for
estimating the revenue impact investing in speed could have on your mobile
site.
Next steps: measure, optimize, monitor and repeat.
Optimized web experiences lead to higher user engagement, conversions, and ROI;
performance is a feature and a competitive edge.
Looking for tips on which tools and metrics to use, or how to
evaluate and make a business case for performance? Check out our "How to Think
about Speed Tools" guide for a
hands-on overview.
NoState Prefetch is a new mechanism in Chrome that is an alternative to the deprecated
prerendering process, used to power features like <link rel="prerender">. Like prerendering, it fetches resources in advance; but unlike prerendering
it does not execute JavaScript or render any part of the page in advance. The goal of NoState
Prefetch is to use less memory than prerendering, while still reducing page load times.
NoState Prefetch is not an API but rather a mechanism used by Chrome to implement various APIs
and features. The Resource Hints API, as well as the
prefetching of pages by the Chrome address bar, are both
implemented using NoState Prefetch. If you’re using Chrome 63 or later, your browser is already
using NoState Prefetch for features like <link rel="prerender">.
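As a quick refresher, the hint that triggers it is just a link element; a minimal sketch of adding one from script (the URL is illustrative):
const hint = document.createElement('link');
hint.rel = 'prerender';
hint.href = 'https://example.com/likely-next-page.html';
document.head.appendChild(hint);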
This article explains how NoState Prefetch works, the motivations for introducing it, and
instructions for using Chrome's histograms to view stats about its usage.
Motivation
There were two primary motivations for introducing NoState Prefetch:
Reduce memory usage
NoState Prefetch only uses ~45MiB of memory. Maintaining the preload scanner is the primary
memory expense for NoState Prefetch and this cost remains relatively constant across different
use cases. Increasing the size or volume of fetches does not have a significant effect on the
amount of memory consumed by NoState Prefetch.
By contrast, prerendering typically consumes 100MiB of memory and memory consumption is capped at
150MiB. This high memory consumption makes it unsuitable for low-end (i.e. <= 512MB of RAM)
devices. As a result, Chrome does not do prerendering on low-end devices and instead will
preconnect.
Facilitate support of new web platform features
With prerendering, no user-facing (e.g., playing music or video) or stateful actions (e.g.,
mutating session or local storage) should occur. However, it can be difficult and complex to
prevent these actions from occurring while rendering a page. NoState Prefetch only fetches
resources in advance: it does not execute code or render the page. This makes it simpler to
prevent user-facing and stateful actions from occurring.
Implementation
The following steps explain how NoState Prefetch works.
NoStatePrefetch is triggered.
A prerender resource hint (i.e. <link rel="prerender">) and some Chrome features will
trigger NoState Prefetch provided that the following two conditions are met: a) the user is
not on a low-end device, and b) the user is not on a cellular network.
A new, dedicated renderer is created for the NoState Prefetch.
In Chrome, a "renderer" is a process responsible for taking an HTML document, parsing it,
constructing its render tree, and painting the result to the screen. Each tab in Chrome, as
well as each NoState Prefetch process, has its own renderer to provide isolation. This
helps minimize the effects of something going wrong (e.g., a tab crashing) as well as
prevent malicious code from accessing other tabs or other parts of the system.
The resource that is being loaded with NoState Prefetch is fetched. The HTMLPreloadScanner
then scans this resource to discover any subresources that need to be fetched.
If the main resource or any of its subresources has a registered service worker, these requests will go through the appropriate service worker.
NoState Prefetch only supports the GET HTTP method; it will not fetch any subresources that
require the use of other HTTP methods. Additionally, it will not fetch any resources that
require user actions (e.g., auth popups, SSL client certificate, or manual overrides).
Subresources that are fetched will be fetched with an “IDLE” Net Priority.
The “IDLE” Net Priority is the lowest possible Net Priority in Chrome.
All resources retrieved by the NoState Prefetch are cached according to their cache headers.
NoState Prefetch will cache all resources except those with the no-store Cache-Control
header. A resource will be revalidated before use if there is a Vary response header,
no-cache Cache-Control header, or if the resource is more than 5 minutes old.
The renderer is killed after all subresources are loaded.
If subresources time out, the renderer will be killed after 30 seconds.
The browser does not make any state modifications besides updating the cookie store and the
local DNS cache.
It’s important to call this out because this is the “NoState” in “NoState Prefetch”.
At this point in the “normal” page load process, the browser would probably do things that
would modify the browser state: for example, executing JavaScript, mutating sessionStorage
or localStorage, playing music or videos, using the History API, or prompting the user. The
only state modifications that occur in NoState Prefetch are the updating of the DNS cache
when responses arrive and the updating of the cookie store if a response contains the
Set-Cookie header.
When the resource is needed, it is loaded into the browser window.
However, unlike a prerendered page, the page won't be immediately visible - it still needs
to be rendered by the browser. The browser will not reuse the renderer it used for the
NoState Prefetch and will instead use a new renderer. Not rendering the page in advance
reduces the memory consumption of NoStatePrefetch, but it also lessens the possible impact
it can have on page load times.
If the page has a service worker, this page load will go through the service worker again.
If NoState Prefetch has not finished fetching subresources by the time the page is needed,
the browser will continue with the page load process from where NoState Prefetch left off.
The browser will still need to fetch resources, but not as many as would be necessary if
NoState Prefetch had not been initiated.
Impact on Web Analytics
Pages loaded using NoState Prefetch are registered by web analytics tools at slightly different
times depending on whether the tool collects data on the client-side or the server-side.
Client-side analytics scripts register a pageview when the page is shown to the user. These
scripts rely on the execution of JavaScript, and NoState Prefetch does not execute any JavaScript.
Server-side analytics tools register metrics when a request is handled. For resources loaded via
NoState Prefetch, there can be a significant gap of time between when a request is handled and
when the response is actually used by the client (if it is used at all). Currently, there is no
server-side mechanism for determining whether a request was made via NoStatePrefetch.
Check it out
NoStatePrefetch shipped in December 2017 in Chrome 63. It's currently used to:
Implement the prerender resource hint
Fetch the first result in Google Search results
Fetch pages that the Chrome address bar predicts are likely to be visited next
You can use Chrome's internal pages to see how you've been using NoState Prefetch.
To view the list of sites that have been loaded with NoState Prefetch, go to
chrome://net-internals/#prerender.
To view stats on your NoState Prefetch usage, go to chrome://histograms and search for
“NoStatePrefetch”. There are three different NoState Prefetch histograms - one for each use case
of NoState Prefetch:
“NoStatePrefetch” (stats for usage by prerender resource hints)
“gws_NoStatePrefetch” (stats for usage by the Google search results page)
“omnibox_NoStatePrefetch” (stats for usage by the Chrome address bar)
There's a new observer in town! ReportingObserver is a new API that lets you
know when your site uses a deprecated API or runs into a
browser intervention:
const observer = new ReportingObserver((reports, observer) => {
  for (const report of reports) {
    console.log(report.type, report.url, report.body);
  }
}, {buffered: true});

observer.observe();
The callback can be used to send reports to a backend or analytics provider
for further analysis.
Why is that useful? Until now, deprecation and
intervention warnings were only available in the DevTools as console messages.
Interventions in particular are only triggered by various real-world constraints
like device and network conditions. Thus, you may never even see these messages
when developing/testing a site locally. ReportingObserver provides
the solution to this problem. When users experience potential issues in the wild,
we can be notified about them.
ReportingObserver has only shipped in Chrome 69. It is being considered by
other browsers.
Introduction
A while back, I wrote a blog post ("Observing your web app")
because I found it fascinating how many APIs there are for monitoring the
"stuff" that happens in a web app. For example, there are APIs that can observe
information about the DOM: ResizeObserver,
IntersectionObserver, MutationObserver. There are APIs for capturing
performance measurements: PerformanceObserver. Other
APIs like window.onerror and window.onunhandledrejection even let us know
when something goes wrong.
However, there are other types of warnings that are not captured by these
existing APIs. When your site uses a deprecated API or runs up
against a browser intervention, DevTools is the first to tell you
about them.
One would naturally think window.onerror captures these warnings. It does not!
That's because window.onerror does not fire for warnings
generated directly by the user agent itself. It fires for runtime errors
(JS exceptions and syntax errors) caused by executing your code.
ReportingObserver picks up the slack. It provides a programmatic way to be
notified about browser-issued warnings such as deprecations
and interventions. You can use it as a reporting tool and
lose less sleep wondering if users are hitting unexpected issues on your live
site.
ReportingObserver is part of a larger spec, the Reporting API,
which provides a common way to send these different reports to a backend.
The Reporting API is basically a generic framework to specify a set of server
endpoints to report issues to.
The API
The API is not unlike the other "observer" APIs such
as IntersectionObserver and ResizeObserver. You give it a callback;
it gives you information. The information that the callback receives is a
list of issues that the page caused:
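const observer = new ReportingObserver((reports, observer) => {
  // Handle each report; this mirrors the example at the top of this article.
  for (const report of reports) {
    console.log(report.type, report.url, report.body);
  }
}, {buffered: true});

observer.observe();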
Passing buffered: true lets the observer surface reports that were generated
before it was created. This is great for situations like lazy-loading a library
that uses a ReportingObserver: the observer gets added late, but you
don't miss out on anything that happened earlier in the page load.
Stop observing
Yep! It's got a disconnect method:
observer.disconnect(); // Stop the observer from collecting reports.
Examples
Example - report browser interventions to an analytics provider:
const observer = new ReportingObserver((reports, observer) => {
  for (const report of reports) {
    sendReportToAnalytics(JSON.stringify(report.body));
  }
}, {types: ['intervention'], buffered: true});

observer.observe();
Example - be notified when APIs are going to be removed:
const observer = new ReportingObserver((reports, observer) => {
  for (const report of reports) {
    if (report.type === 'deprecation') {
      sendToBackend(`Using a deprecated API in ${report.body.sourceFile} which will be removed on ${report.body.anticipatedRemoval}. Info: ${report.body.message}`);
    }
  }
});

observer.observe();
Conclusion
ReportingObserver gives you an additional way to discover and monitor
potential issues in your web app. It's even a useful tool for understanding the
health of your code base (or lack thereof). Send reports to a backend,
know about the real-world issues users are hitting on your site, update
code, profit!
Future work
In the future, my hope is that ReportingObserver becomes the de-facto API
for catching all types of issues in JS. Imagine one API to catch everything
that goes wrong in your app:
JS exceptions and errors (currently serviced by window.onerror).
Unhandled JS promise rejections (currently serviced by window.onunhandledrejection).
I'm also excited about tools integrating ReportingObserver into
their workflows. Lighthouse is an example of a tool
that already flags browser deprecations when you run its
"Avoids deprecated APIs" audit:
Lighthouse currently uses the DevTools protocol
to scrape console messages and report these issues to developers. Instead, it
might be interesting to switch to ReportingObserver
for its well-structured deprecation reports and additional metadata like the
anticipatedRemoval date.
The CSS Scroll Snap feature allows web
developers to create well-controlled scroll experiences by declaring scroll
snapping positions. Paginated articles and image carousels are two commonly used
examples of this. CSS Scroll Snap provides an easy-to-use and consistent API for
building these popular UX patterns, and Chrome is shipping a high-fidelity and
fast implementation of it in version 69.
Background
The case for scroll snapping
Scrolling is a popular and natural way to interact with content on the web. It
is the platform's native means of providing access to more information than is
visible on the screen at once, becoming especially vital on mobile platforms
with limited screen real estate. So it is no surprise that web authors
increasingly prefer to organize content into scrollable flat lists as opposed to
deep hierarchies.
Scrolling's main drawback is its lack of precision. Rarely does a scroll end up
aligned to a paragraph or sentence. This is even more pronounced for paginated
or itemized content with meaningful boundaries, when the scroll finishes in the
middle of a page or image, leaving it partially visible. These use cases
benefit from a well-controlled scrolling experience.
Web developers have long relied on JavaScript-based solutions for controlling
the scroll to help address this shortcoming. However, JavaScript-based solutions
fall short of providing a full-fidelity solution due to the lack of scroll
customization primitives or access to composited scrolling. CSS Scroll Snap
ensures there is a fast, high-fidelity, and easy-to-use solution that works
consistently across browsers.
CSS Scroll Snap allows web authors to mark each scroll container with boundaries
for scroll operations at which to finish. Browsers then choose the most
appropriate end position depending on the particulars of the scroll operation,
scroll container's layout and visibility, and details of the snap positions,
then smoothly animate to it. Going back to our earlier example, as the user
finishes scrolling the carousel, its visible image snaps into place. No scroll
adjustments needed by JavaScript.
The API history
CSS Scroll Snap has been under discussion for several
years. As a
result, several browsers implemented earlier draft specifications, before it
underwent a fundamental design
change. The
final design changed the underlying point alignment based snapping model to a
box alignment model. The change ensures scroll snapping can handle responsive
designs and layout changes by default without requiring authors to re-calculate
snap points. It also enables browsers to make better scroll snapping decisions
e.g., correctly snapping targets larger than the scroll container.
Chrome, Opera and Safari are shipping the latest specifications with the other
major browser vendors planning to follow along in the near future
(Firefox bug,
Edge bug).
This means you'll find several tutorials on the web which discuss the old syntax
which is still currently implemented by Edge and Firefox.
CSS Scroll Snap
Scroll snapping is the act of adjusting the scroll offset of a scroll container
to be at a preferred snap position once the scroll operation is finished.
A scroll container may be opted into scroll snapping by using the scroll-snap-type
property. This tells the browser that it should consider snapping this scroll
container to the snap positions produced by its descendants. scroll-snap-type
determines the axis on which scrolling occurs: x, y, or both, and the
snapping strictness: mandatory or proximity. More on these later.
A snap position can be produced by declaring a desired alignment on an element.
This position is the scroll offset at which the nearest ancestor scroll
container and the element are aligned as specified for the given axis. The
following alignments are possible on each axis: start, end, center.
A start alignment means that the scroll container snapport start edge should
be flush with the element snap area start edge. Similarly, the end and
center alignments mean that the scroll container snapport end edge or center
should be flush with the element snap area end edge or center.
The snapport is the area
of the scroll container to which the snap areas are aligned. By default it is
the same as the visual viewport of the scroll container, but it can be adjusted
using the scroll-padding property.
The following examples illustrate how these concepts can be used in practice.
Example - Horizontal gallery
A common use case for scroll snapping is an image carousel. For example, to
create a horizontal image carousel that snaps to each image as you scroll, we
can specify the scroll container to have a mandatory scroll-snap-type on the
horizontal axis, and set each image to scroll-snap-align: center to ensure that
the snapping centers each image within the carousel.
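A minimal sketch of that setup (the class names are illustrative):
<style>
  .gallery {
    display: flex;
    overflow-x: scroll;
    /* Always snap on the horizontal axis. */
    scroll-snap-type: x mandatory;
  }
  .gallery img {
    /* Center each image within the carousel when snapping. */
    scroll-snap-align: center;
  }
</style>
<div class="gallery">
  <img src="one.jpg" alt="">
  <img src="two.jpg" alt="">
  <img src="three.jpg" alt="">
</div>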
Because snap positions are associated with an element, the snapping algorithm
can be smart about when and how it snaps given the element and the scroll
container size. For example, consider the case where one image is larger than
the carousel. A naïve snapping algorithm may prevent the user from panning
around to see the full image. But the
specification
requires implementations to detect this case and allow the user to freely scroll
around within that image, only snapping at its edges.
Example - Journeyed product page
Another common case that can benefit from scroll snapping is pages with
multiple logical sections that are vertically scrolled through, e.g., a typical
product page. scroll-snap-type: y proximity; is a more natural fit for cases
like this. It does not interfere when the user scrolls to the middle of a particular
section, but it also snaps and brings attention to a new section when they scroll
close enough to it.
Here is how this can be achieved:
<style>
  article {
    scroll-snap-type: y proximity;
    /* Reserve space for header plus some extra space for sneak peeking. */
    scroll-padding-top: 15vh;
    overflow-y: scroll;
  }
  section {
    /* Snap align start. */
    scroll-snap-align: start;
  }
  header {
    position: fixed;
    height: 10vh;
  }
</style>
<article>
  <header> Header </header>
  <section> Section One </section>
  <section> Section Two </section>
  <section> Section Three </section>
</article>
Scroll padding and margin
Our product page has a fixed-position top header. Our design also called for some
of the top section to remain visible when the scroll container is snapped, in order
to provide a design cue to users about the content above.
scroll-padding is a new CSS property that can be used to adjust the effective
viewable region of the scroll container. This region is also known as the snapport and
is used when calculating scroll snap alignments. The property defines an inset
against the scroll container's padding box. In our example, an additional 15vh inset
was added to the top, which instructs the browser to consider a lower position,
15vh below the top edge of the scroll container, as its vertical start edge for
scroll snapping. When snapping, the start edge of the snap target element will
become flush with this new position, thus leaving space above.
scroll-margin defines the outset amount used to adjust the snap target's
effective box, similar to how scroll-padding functions on the snap scroll
container.
You may have noticed that these two properties do not have the word "snap" in
them. This is intentional, as they actually modify the box for all relevant
scroll operations, not just scroll snapping. For example, Chrome takes
them into account when calculating page size for paging scroll operations such
as PageDown and PageUp, and also when calculating the scroll amount for the
Element.scrollIntoView() operation.
Interaction with other scrolling APIs
DOM Scrolling API
Scroll snapping happens after all scroll operations including those
initiated by script. When you are using APIs like Element.scrollTo, the
browser will calculate the intended scroll position of the operation, then apply
appropriate snapping logic to find the final snapped location. Thus, there is
no need for user script to do any manual calculations for snapping.
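A small sketch to illustrate (the element and offsets are illustrative):
const gallery = document.querySelector('.gallery');

// Request a scroll to 800px; with snapping in effect, the browser may
// land on a nearby snap position rather than exactly 800px.
gallery.scrollTo({left: 800, behavior: 'smooth'});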
Smooth Scrolling
Smooth scrolling controls the behavior of a programmatic scroll operation while
scroll snap determines its destination. Since they control orthogonal aspects of
scrolling, they can be used together and complement each other.
Overscroll Behavior
Overscroll behavior API controls how
scroll is chained across multiple elements and it is not affected by scroll
snap.
Caveats and best practices
Avoid using mandatory snapping when target elements are widely spaced apart.
This can cause content in between the snap positions to become inaccessible.
Use CSS.supports for feature-detecting CSS Scroll Snap, but avoid testing for
scroll-snap-type, which is also present in the deprecated specification and can
be unreliable.
if (CSS.supports('scroll-snap-align: start')) {
  // Use CSS Scroll Snap.
} else {
  // Use a fallback.
}
Do not assume that programmatically scrolling APIs such as Element.scrollTo
always finish at the requested scroll offset. Scroll snapping may adjust the
scroll offset after programmatic scrolling is complete. Note that this was not a
good assumption even before scroll snap since scrolling may have been
interrupted for other reasons but it is especially the case with scroll
snapping.
Note: There is an
upcoming proposal to change various scrolling APIs to return a promise.
The promise is resolved when the user agent either completes or aborts the
scrolling operation. Once this is standardized and implemented, it will provide an
ergonomic and efficient way to follow up a script-initiated scroll
with other actions.
Future work
Chrome 69 ships the core functionality specified in CSS Scroll Snap
specification. The main omissions are snapping for keyboard scrolling and
fragment navigations which at the moment are not supported by any other
implementations. Chrome will continue improving this feature over time
particularly focusing on missing features, improving snap selection algorithm,
animation smoothness, and devtools facilities.
Chrome 69 adds an AV1 decoder to Chrome Desktop (Windows, Mac, Linux, ChromeOS)
based on the official bitstream specification. At this time, support is
limited to "Main" profile 0 and does not include encoding capabilities. The
supported container is ISO-BMFF (MP4). See From raw video to web ready for
a brief explanation of containers. To enable this feature, use the
chrome://flags/#enable-av1-decoder flag.
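If you'd rather feature-detect support than rely on the flag, a sketch like this can help (the codec string is an illustrative AV1 "Main" profile string; adjust the level and bit depth to your content):
const av1Type = 'video/mp4; codecs="av01.0.05M.08"';
if (MediaSource.isTypeSupported(av1Type)) {
  // Serve AV1 via Media Source Extensions.
} else {
  // Fall back to VP9 or H.264.
}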
Some platforms or key systems only support CENC mode, while others only support
CBCS mode. Still others are able to support both. These two encryption schemes
are incompatible, so web developers must be able to make intelligent choices
about what content to serve.
To avoid having to determine which platform they're on to check for "known"
encryption scheme support, a new encryptionScheme key has been added to the
MediaKeySystemMediaCapability dictionary to allow websites to specify
which encryption scheme could be used in Encrypted Media Extensions (EME).
The new encryptionScheme key can be one of two values:
'cenc' AES-CTR mode full sample and video NAL subsample encryption.
'cbcs' AES-CBC mode partial video NAL pattern encryption.
If not specified, it indicates that any encryption scheme is acceptable. Note
that Clear Key always supports the 'cenc' scheme.
In the example below, a single configuration containing two different encryption
schemes is queried. Chrome will discard any capability object
it cannot support, so the accumulated configuration may contain one encryption
scheme or both.
await navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
  videoCapabilities: [
    { // A video capability using the "cenc" encryption scheme
      contentType: 'video/mp4; codecs="avc1.640028"',
      encryptionScheme: 'cenc'
    },
    { // A video capability using the "cbcs" encryption scheme
      contentType: 'video/mp4; codecs="avc1.640028"',
      encryptionScheme: 'cbcs'
    },
  ],
  audioCapabilities: [
    { // An audio capability using the "cenc" encryption scheme
      contentType: 'audio/mp4; codecs="mp4a.40.2"',
      encryptionScheme: 'cenc'
    },
    { // An audio capability using the "cbcs" encryption scheme
      contentType: 'audio/mp4; codecs="mp4a.40.2"',
      encryptionScheme: 'cbcs'
    },
  ],
  initDataTypes: ['keyids']
}]);
Nowadays, HDCP is a common policy requirement for streaming protected content
at high resolutions. Web developers who want to enforce an HDCP policy
must either wait for the license exchange to complete or start streaming
content at a low resolution. This is a sad situation that the HDCP Policy
Check API aims to solve.
This proposed API allows web developers to query whether a certain HDCP policy
can be enforced so that playback can be started at the optimum resolution for
the best user experience. It consists of a simple method to query the status of
a hypothetical key associated with an HDCP policy, without the need to create a
MediaKeySession or fetch a real license. It does not require MediaKeys to be
attached to any audio or video elements either.
The HDCP Policy Check API works simply by calling
mediaKeys.getStatusForPolicy() with an object that has a minHdcpVersion key
and a valid value. If HDCP is available at the specified version, the returned
promise resolves with a MediaKeyStatus of 'usable'. Otherwise, the promise
resolves with other error values of MediaKeyStatus such as
'output-restricted' or 'output-downscaled'. If the key system does not
support HDCP Policy Check at all (e.g. Clear Key System), the promise rejects.
In a nutshell, here’s how the API works for now. Check out the official sample
to try out all versions of HDCP.
const config = [{
  videoCapabilities: [{
    contentType: 'video/webm; codecs="vp09.00.10.08"',
    robustness: 'SW_SECURE_DECODE' // Widevine L3
  }]
}];

navigator.requestMediaKeySystemAccess('com.widevine.alpha', config)
  .then(mediaKeySystemAccess => mediaKeySystemAccess.createMediaKeys())
  .then(mediaKeys => {
    // Get status for HDCP 2.2.
    return mediaKeys.getStatusForPolicy({ minHdcpVersion: 'hdcp-2.2' })
      .then(status => {
        if (status !== 'usable')
          return Promise.reject(status);

        console.log('HDCP 2.2 can be enforced.');
        // TODO: Fetch high resolution protected content...
      });
  })
  .catch(error => {
    // TODO: Fallback to fetch license or stream low-resolution content...
  });
Available for Origin Trials
To get feedback from web developers, the HDCP Policy Check API is available as
an Origin Trial in Chrome 69 for Desktop (Chrome OS, Linux, Mac, and
Windows). You will need to request a token, so that the feature is
automatically enabled for your origin for a limited period of time, without the
need to enable the experimental "Web Platform Features" flag at
chrome://flags/#enable-experimental-web-platform-features.
Buffered ranges and duration values are now reported by Presentation Time Stamp
(PTS) intervals, rather than by Decode Time Stamp (DTS) intervals in Media
Source Extensions (MSE).
When MSE was new, Chrome's implementation was tested against WebM and MP3,
media stream formats in which there is no distinction between PTS and DTS. This
worked fine until ISO BMFF (aka MP4) was added. This container
frequently contains streams whose presentation order differs from their decode
order (for codecs like H.264, for example), causing DTS and PTS to differ. That
caused Chrome to report (usually just slightly) different buffered ranges and
duration values than expected. This new behavior will roll out gradually in
Chrome 69 and make its MSE implementation compliant with the MSE specification.
This change affects MediaSource.duration (and consequently
HTMLMediaElement.duration), SourceBuffer.buffered (and consequently
HTMLMediaElement.buffered), and SourceBuffer.remove(start, end).
If you’re not sure which method is used to report buffered ranges and duration
values, you can go to the internal chrome://media-internals page and search for
"ChunkDemuxer: buffering by PTS" or "ChunkDemuxer: buffering by DTS" in the
logs.
Android Go is a lightweight version of Android designed for entry-level
smartphones. Because of this, it does not necessarily ship with some media-viewing
applications, so if a user tries to open a downloaded video, for instance, they
won't have any application to handle that intent.
To fix this, Chrome 69 on Android Go now listens for media-viewing intents so
users can view downloaded audio, videos, and images. In other words, it takes
the place of the missing viewing applications.
Note that this Chrome feature is enabled on all Android devices running Android
O and onwards with 1 GB of RAM or less.
Removal of “stalled” events for media elements using MSE
A "stalled" event is raised on a media element if downloading media data has
failed to progress for about 3 seconds. When using Media Source Extensions
(MSE), the web app manages the download and the media element is not aware of
its progress. This caused Chrome to raise "stalled" events at inappropriate
times whenever the website has not appended new media data chunks with
SourceBuffer.appendBuffer() in the last 3 seconds.
As websites may decide to append large chunks of data at a low frequency, this
is not a useful signal about buffering health. Removing "stalled" events for
media elements using MSE clears up confusion and brings Chrome more in line
with the MSE specification. Note that media elements that don't use MSE will
continue to raise "stalled" events as they do today.
As noted above, Chrome 69 removed the stalled event for media elements using
MSE. You'll find the full explanation in Audio/Video Updates in Chrome
69 by François Beaufort.
Removal of document.createTouchList()
The TouchEvent() constructor has been
supported in Chrome
since version 48. To comply with the specification, document.createTouchList()
is now removed. The document.createTouch() method was removed in Chrome 68.
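If you were building synthetic touch events with those methods, the constructors cover the same ground; a minimal sketch (the target and coordinates are illustrative):
const touch = new Touch({
  identifier: 0,
  target: document.body,
  clientX: 10,
  clientY: 10,
});

const touchEvent = new TouchEvent('touchstart', {
  touches: [touch],
  bubbles: true,
});

document.body.dispatchEvent(touchEvent);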
The window.confirm() method no longer activates its parent tab
Calling window.confirm() on a background tab will no longer activate that
tab. If it is called on a background tab, the function returns immediately with
false and no dialog box is shown to the user. If the tab is active, the call
behaves as usual.
The window.alert() method was abused by sites for years, allowing them to
force themselves to the front, disrupting whatever the user was doing. There
was a similar problem with window.prompt(). Because these behaviors were
removed in Chrome 64 and 56, respectively, the abuse has been moving to
window.confirm().
Custom site performance reports with the CrUX Dashboard
Continuous performance monitoring is crucial to identify trends and
regressions before they negatively affect your site engagement and bottom line
metrics. The
Chrome UX Report
(CrUX) enables you to track user experience and performance metrics for
millions of origins -- and yes, you can even compare competitors' performance
head-to-head! Today we're releasing the CrUX Dashboard that you can use to
better understand how an origin's performance evolves. It's built on
Data Studio, automatically syncs
with the latest datasets, and can be easily customized and shared with everyone
on your team.
Go try it out at g.co/chromeuxdash -- it only
takes a minute to set up! There are a few one-time confirmation prompts, so
if you have any hesitation, refer to this helpful walkthrough video:
There are now three ways to explore the Chrome UX Report dataset, so let's see
what makes this one so special.
BigQuery is
great for slicing and dicing the raw data at will across any number of
origins. You get 1 TB of querying for
free each month and a
billing account is required to cover any overages.
PageSpeed Insights allows you to explore the
latest snapshot of the user experience for a single URL or origin. You can see
how the page load performance is distributed in a web interface or API.
The CrUX Dashboard enables you to see how the
user experience of an origin changes over time. All of the data querying and
visualizing is done for you with unlimited free usage and the data is
automatically updated for you.
This dashboard is built on Data Studio,
Google's dashboarding and reporting platform that is free to use. Under the
hood, the entire data pipeline is managed for you thanks to the Chrome UX
Report's community connector. All you need to do is
enter an origin and it will load the data and generate the visualizations for
you. It's even open source, so you can explore how it works in the
GoogleDataStudio/community-connectors
repository on GitHub.
In this release we've set you up with three charts:
First Contentful Paint
Device Distribution
Connection Distribution
Each chart includes historical data so you can see how the distribution
changes over time. And this really is a live dashboard; the visualizations
will automatically update after each monthly release.
Some features we're exploring for future improvements are more metrics like
First Input Delay,
better error handling of unrecognized origins, and the ability to compare
multiple origins. If you have any suggestions to make the dashboard even
better, we'd love to hear from you on the
forum
or
@ChromeUXReport.
OffscreenCanvas — Speed up Your Canvas Operations with a Web Worker
TL;DR: Now you can render your graphics off the main thread with OffscreenCanvas!
Canvas is a popular way
of drawing all kinds of graphics on the screen and an entry point to the world of WebGL.
It can be used to draw shapes, images, run animations, or even display and process video content.
It is often used to create beautiful user experiences in media-rich web applications and
online games.
It is scriptable, which means that the content drawn on canvas can be created programmatically,
e.g., in JavaScript. This gives canvas great flexibility.
At the same time, in modern websites, script execution is one of the most frequent
sources of user responsiveness issues.
Because canvas logic and rendering happen on the same thread as user interaction,
the (sometimes heavy) computations involved in animations can harm the app's real
and perceived performance.
Until now, canvas drawing capabilities were tied to the <canvas> element,
which meant they depended directly on the DOM. OffscreenCanvas, as the name implies,
decouples the DOM and the Canvas API by moving it off-screen.
Thanks to this decoupling, rendering of OffscreenCanvas is fully detached from the DOM and
therefore offers some speed improvements over the regular canvas as there is no synchronization
between the two.
What is more, though, is that it can be used in a Web Worker, even though there is no
DOM available. This enables all kinds of interesting use cases.
Use OffscreenCanvas in a worker
Workers
are the web’s version of threads — they allow you to run tasks in the background.
Moving some of your scripting to a worker gives your app more headroom to perform user-critical
tasks on the main thread. Until now, there was no way to use the Canvas API in a worker, as there
is no DOM available.
OffscreenCanvas does not depend on the DOM, so it can be used instead. Here I use OffscreenCanvas
to calculate a gradient color in a worker:
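// A sketch along these lines, run inside a worker (worker.js):
function getGradientColor(percent) {
  const canvas = new OffscreenCanvas(100, 1);
  const ctx = canvas.getContext('2d');
  const gradient = ctx.createLinearGradient(0, 0, canvas.width, 0);
  gradient.addColorStop(0, 'red');
  gradient.addColorStop(1, 'blue');
  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, canvas.width, 1);
  // Sample the pixel at the requested position along the gradient.
  const pixel = ctx.getImageData(percent, 0, 1, 1).data;
  return `rgba(${pixel[0]}, ${pixel[1]}, ${pixel[2]}, ${pixel[3] / 255})`;
}

getGradientColor(40); // e.g. "rgba(153, 0, 102, 1)"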
It gets more interesting when moving heavy calculation to a worker allows you to free up
significant resources on the main thread. We can use the transferControlToOffscreen
method to mirror the regular canvas to an OffscreenCanvas instance. Operations applied to
OffscreenCanvas will be rendered on the source canvas automatically.
OffscreenCanvas is transferable.
Apart from specifying it as a field in the message, you need to also pass it as a second argument
in postMessage (a transfer) so that it can be used in the worker context.
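A minimal sketch of the hand-off (file names are illustrative):
// main.js: hand the on-screen canvas over to a worker.
const canvasEl = document.querySelector('canvas');
const offscreen = canvasEl.transferControlToOffscreen();
const worker = new Worker('animation-worker.js');
worker.postMessage({canvas: offscreen}, [offscreen]);

// animation-worker.js: draw as usual; frames appear on the source canvas.
self.onmessage = (event) => {
  const ctx = event.data.canvas.getContext('2d');
  ctx.fillStyle = 'rebeccapurple';
  ctx.fillRect(0, 0, 50, 50);
};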
In the example below, the "heavy calculation" happens when the color theme is changing - it should
take a few milliseconds even on a fast desktop. You can choose to run animations on the main thread
or in the worker. In the case of the main thread, you cannot interact with the button while the heavy
task is running - the thread is blocked. In the case of the worker, there is no impact on
UI responsiveness.
It works the other way too: the busy main thread does not influence the animation running on
a worker. You can use this feature to avoid visual jank and guarantee a smooth animation
despite main thread traffic:
In the case of a regular canvas, the animation stops when the main thread gets artificially
overworked, while the worker-based OffscreenCanvas plays smoothly.
Use with popular libraries
Because the OffscreenCanvas API is generally compatible with the regular canvas element, you can
easily use it as a progressive enhancement, including with some of the leading graphics libraries
on the market.
For example, you can feature-detect it and if available, use it with Three.js by specifying
the canvas option in the renderer constructor:
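// A sketch, not Three.js's official recipe; see the gotcha below.
const canvasEl = document.querySelector('canvas');
let renderer;

if ('transferControlToOffscreen' in canvasEl) {
  const offscreen = canvasEl.transferControlToOffscreen();
  // Three.js expects style.width/height, which OffscreenCanvas lacks; stub it.
  offscreen.style = {width: '', height: ''};
  renderer = new THREE.WebGLRenderer({canvas: offscreen});
} else {
  renderer = new THREE.WebGLRenderer({canvas: canvasEl});
}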
The one gotcha here is that Three.js expects the canvas to have style.width and style.height
properties. OffscreenCanvas, being fully detached from the DOM, does not have them, so you need to
provide them yourself, either by stubbing them out or by providing logic that ties these values to
the original canvas dimensions.
Here is a demo of how to run a basic Three.js animation in a worker:
Bear in mind that some of the DOM related APIs are not readily available in a worker, so if you
want to use more advanced Three.js features like textures, you might need more workarounds.
For some ideas on how to start experimenting with these, take a look at the
video from Google I/O 2017.
Examples in this video use the deprecated commit() call. Use requestAnimationFrame() in the worker instead.
Conclusion
If you’re making heavy use of the graphical capabilities of canvas, OffscreenCanvas can positively
influence your app’s performance. Making canvas rendering contexts available to workers increases
parallelism in web applications and makes better use of multi-core systems.
OffscreenCanvas is available without a flag in Chrome 69. It is also
in development in Firefox.
Because its API is closely aligned with the regular canvas element, you can easily feature-detect it
and use it as a progressive enhancement, without breaking existing app or library logic.
It offers a performance advantage in all cases where the graphics and animations are not tied
closely to the DOM surrounding the canvas.
Web Performance Made Easy: Google I/O 2018 edition
We've been pretty busy over the past year trying to figure out how to make the web faster and
more performant. This led to new tools, approaches, and libraries that we'd like to share with you
in this article. In the first part, we'll show you some optimization techniques we used in practice
when developing The Oodles Theater app. In the second part,
we'll talk about our experiments with predictive loading and the new
Guess.js initiative.
Note: Prefer a video to an article? You can watch the presentation on which this
was based instead:
The need for performance
The internet gets heavier and heavier every year. If we check
the state of the web, we can see that a median
page on mobile weighs in at about 1.5MB, with the majority of that being JavaScript and images.
The growing size of websites, together with other factors like network latency,
CPU limitations, render-blocking patterns, or superfluous third-party code, contributes to
the complicated performance puzzle.
Most users rate speed as being at the very top of the UX hierarchy of their needs. This isn't too
surprising, because you can't really do a whole lot until a page is finished loading. You can't
derive value from the page, you can't admire its aesthetics.
We know that performance matters to users, but knowing where to start optimizing can feel
like a mystery. Fortunately, there are tools that can help you on the way.
Lighthouse - a base for performance workflow
Lighthouse is a part of Chrome DevTools
that allows you to audit your website and gives you hints on how to make it better.
We recently launched a bunch of
new performance audits
that are really useful in everyday development workflow.
Let’s explore how you can take advantage of them on a practical example:
The Oodles Theater app. It’s a little demo web app,
where you can try out some of our favourite interactive Google Doodles and even play a game or two.
While building the app, we wanted to make sure that it was as performant as possible. The starting
point for optimization was a Lighthouse report.
The initial performance of our app, as seen in the Lighthouse report, was pretty terrible.
On a 3G network, the user needed to wait 15 seconds for the first meaningful paint, or for
the app to get interactive. Lighthouse highlighted a ton of issues with our site, and the overall
performance score of 23 mirrored exactly that.
The page weighed about 3.4MB - we desperately needed to cut some fat.
This started our first performance challenge: find things that we can easily remove without
affecting the overall experience.
Performance optimization opportunities
Remove unnecessary resources
There are some obvious things that can be safely removed: whitespace and comments.
Lighthouse highlights this opportunity in the Unminified CSS & JavaScript audit. We were using
webpack for our build process, so in order to get minification we simply used the
Uglify JS plugin.
Minification is a common task, so you should be able to find a ready-made solution for whichever
build process you happen to use.
Another useful audit in that space is Enable text compression. There is no reason to
send uncompressed files, and most of
the CDNs support this out of the
box these days.
We were using Firebase Hosting to host our code,
and Firebase enables gzipping by default, so by the sheer
virtue of hosting our code on a reasonable CDN we got that for free.
While gzip is a very popular way of compressing, other mechanisms like
Zopfli and Brotli are
getting traction as well. Brotli enjoys support in most browsers, and you can use a binary
to pre-compress your assets before sending them to the server.
Use efficient cache policies
Our next step was to ensure that we don't send resources twice unnecessarily.
The Inefficient cache policy audit in Lighthouse helped us notice that we could be optimizing
our caching strategies in order to achieve exactly that. By setting a max-age expiration header
in our server, we made sure that on a repeated visit the user can reuse the resources they have
downloaded before.
Ideally, you should aim to cache as many resources as possible, as securely as possible, for the
longest possible period of time, and provide validation tokens for efficient revalidation of the
resources that got updated.
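As a sketch of what that can look like, assuming a Node/Express server (the demo itself used Firebase Hosting, which offers equivalent header configuration):
const express = require('express');
const app = express();

// Cache static assets for a year; ETags act as validation tokens so
// updated resources can be revalidated efficiently.
app.use(express.static('public', {maxAge: '1y', etag: true}));

app.listen(8080);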
Remove unused code
So far we removed the obvious parts of the unnecessary download, but what about the less obvious
parts? For example, unused code.
Sometimes we include code in our apps that is not really necessary. This happens especially
if you work on your app for a long period of time, your team or your dependencies change,
and sometimes an orphaned library gets left behind. That's exactly what happened to us.
At the beginning, we were using the Material Components library to quickly prototype our app.
In time, we moved to a more custom look and feel, and we forgot entirely about that library.
Fortunately, the code coverage check helped us rediscover it in our bundle.
You can check your code coverage stats in DevTools, both for the runtime as well as load time of
your application. You can see the two big red stripes in the bottom screenshot - we had over 95%
of our CSS unused, and a big bunch of JavaScript as well.
Lighthouse also picked up this issue in the unused CSS rules audit. It showed a potential saving
of over 400KB. So we went back to our code and removed both the JavaScript and CSS parts
of that library.
This brought our CSS bundle down 20-fold, which is pretty good for a tiny, two-line-long commit.
Of course, it made our performance score go up, and also the
Time to Interactive
got much better.
However, with changes like this, it's not enough to check your metrics and scores alone.
Removing actual code is never risk-free, so you should always look out for potential regressions.
Our code was 95% unused - that 5% is still somewhere. Apparently, one of our components
was still using the styles from that library - the little arrows in the doodle slider. Because it
was so small, though, we could just go and manually incorporate those styles back into the buttons.
So if you remove code, just make sure you have a proper testing workflow in place to help you
guard against potential visual regressions.
Avoid enormous network payloads
We know that large resources can slow down web page loads. They can cost our users money and
they can have a big impact on their data plans, so it's really important to be mindful of this.
Lighthouse was able to detect that we had an issue with some of our network payloads using the
Enormous network payload
audit.
Here we saw that we had over 3MB worth of code being shipped down - which is quite a lot,
especially on mobile.
At the very top of this list, Lighthouse highlighted that we had a JavaScript vendor bundle
that was 2MB of uncompressed code. This is also a problem highlighted by webpack.
As the saying goes: the fastest request is the one that's not made.
Ideally you should be measuring the value of every single asset you're serving down to your users,
measuring the performance of those assets, and making a call on whether it's worth actually shipping
down with the initial experience. Because sometimes these assets can be deferred, or lazily loaded,
or processed during idle time.
In our case, because we're dealing with a lot of JavaScript bundles, we were fortunate because
the JavaScript community has a rich set of JavaScript bundle auditing tools.
We started off with webpack bundle analyzer, which informed us that we were including a
dependency called unicode that was 1.6MB of parsed JavaScript - quite a lot.
We then went over to our editor and, using the
Import Cost plugin for Visual Studio Code,
we were able to
visualize the cost of every module we were importing. This allowed us to discover which
component was including code that referenced this module.
We then switched over to another tool, BundlePhobia. This is a tool
that allows you to enter the name of any npm package and see what its minified and
gzipped size is estimated to be. We found a nice alternative for the slug module we were using
that only weighed 2.2KB, so we switched it up.
This had a big impact on our performance. Between this change and discovering other opportunities
to trim down our JavaScript bundle size, we saved 2.1MB of code.
Factoring in the gzipped and minified sizes of these bundles, we saw a 65% improvement overall,
and we found that this was really worth doing as a process.
So, in general, try to eliminate unnecessary downloads in your sites and apps.
Making an inventory of your assets and measuring their performance impact can make a really big
difference, so make sure you audit your assets fairly regularly.
Lower JavaScript boot-up time with code splitting
Although large network payloads can have a big impact on our app, there's another thing that can
have a really big impact, and that is JavaScript.
JavaScript is your
most expensive asset.
On mobile, if you're sending
down large bundles of JavaScript, it can delay how soon your users are able to interact with your
user interface components. That means they can be tapping on UI without anything meaningful
actually happening. So it's important for us to understand why JavaScript costs so much.
This is how a browser processes JavaScript:
we first have to download the script, then a JavaScript engine needs to parse
that code, compile it, and execute it.
Now, these phases don't take a whole lot of time on a high-end device like
a desktop machine or a laptop, maybe even a high-end phone. But on a median mobile phone, this
process can take anywhere between five and ten times longer. This is what delays interactivity,
so it's important for us to try trimming this down.
In the case of the Oodle app, DevTools told us that we had 1.8 seconds of time spent in JavaScript
boot-up. What was happening was that we were statically importing all of our routes and
components into one monolithic JavaScript bundle.
One technique for working around this is using code splitting.
Code splitting is the notion that, instead of giving your users a whole pizza's worth of JavaScript,
you give them only one slice at a time, as they need it.
Code splitting can be applied at a route level or a component level. It works great with React and
React Loadable, Vue.js, Angular, Polymer, Preact, and multiple other libraries.
We incorporated code splitting into our application: we switched over from static imports to
dynamic imports, allowing us to asynchronously lazy-load code as we needed it.
The impact was both shrinking the size of our bundles and decreasing our
JavaScript boot-up time. It went down to 0.78 seconds, making the app 56% faster.
In general, if you're building a JavaScript-heavy experience, be sure to only send code to
the user that they need.
Take advantage of concepts like code splitting, explore ideas like tree shaking, and check out
the webpack-libs-optimizations
repo for a few ideas on how you can trim down your library size if you happen to be using webpack.
Optimize images
In the Oodle app we're using a lot of images. Unfortunately, Lighthouse was much less enthusiastic
about them than we were. As a matter of fact, we failed all three image-related audits.
We had forgotten to optimize our images, we were not sizing them correctly, and we could also
gain something from using other image formats.
We started with optimizing our images.
For a one-off optimization round, you can use visual tools like
ImageOptim or XnConvert.
A more automated approach is to add an image optimization step to your build process, with
libraries like imagemin.
This way you make sure that the images added in the future get optimized automatically.
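A build-step sketch using imagemin with the pngquant plugin (the paths and plugin choice are
ours, and the exact options vary between imagemin versions):
const imagemin = require('imagemin');
const imageminPngquant = require('imagemin-pngquant');

(async () => {
  // Compress every PNG under src/images into build/images.
  const files = await imagemin(['src/images/*.png'], {
    destination: 'build/images',
    plugins: [imageminPngquant({quality: [0.6, 0.8]})],
  });
  console.log(`Optimized ${files.length} images`);
})();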
Some CDNs, for example Akamai, and third-party solutions like
Cloudinary or Fastly offer comprehensive
image optimization, so you can also simply host your images on those services.
If you don't want to do that because of the cost, or latency issues, projects like
Thumbor or Imageflow offer
self-hosted alternatives.
Our background PNG was flagged in webpack as big, and rightly so. After sizing it correctly to
the viewport and running it through ImageOptim, we went down to 100kb, which is acceptable.
Repeating this for multiple images on our site allowed us to bring down the overall page weight
significantly.
Use the right format for animated content
GIFs can get really expensive. Surprisingly, the GIF format was never intended as an animation
platform in the first place. Therefore, switching to a more suitable video format offers you large
savings in terms of file size.
In the Oodle app, we were using a GIF as an intro sequence on the home page. According to
Lighthouse, we could be saving over 7mb by switching to a more efficient video format. Our clip
weighed about 7.3mb - way too much for any reasonable website - so instead we turned it into a
video element with two source files: an mp4 and a WebM for wider browser support.
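In markup, that swap looks something like this (file names are ours):
<!-- autoplay, loop, and muted make the video behave like the GIF it replaces -->
<video autoplay loop muted playsinline>
  <source src="intro.webm" type="video/webm">
  <source src="intro.mp4" type="video/mp4">
</video>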
We used the FFmpeg tool to convert our animated GIF into the mp4 file.
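A command along these lines does that conversion (the file names are ours):
ffmpeg -i animation.gif -b:v 0 -crf 25 -f mp4 -vcodec libx264 -pix_fmt yuv420p video.mp4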
The WebM format offers you even larger savings - the ImageOptim API can do such conversion for you.
We managed to save over 80% of our overall weight thanks to this conversion. This brought
us down to around 1mb.
Still, 1mb is a large resource to push down the wire, especially for a user on restricted
bandwidth. Luckily, we could use the
Effective Connection Type API
to detect that they're on a slow connection, and give them a much smaller JPEG instead.
This interface uses the effective round-trip time and downlink values to estimate the type of
network the user is on. It simply returns a string: slow-2g, 2g, 3g, or 4g. Depending on
this value, if the user is on anything below 4g, we could replace the video element with the image.
if (navigator.connection.effectiveType) { ... }
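A fuller sketch of that swap (the element ID and file name are our own invention):
const connection = navigator.connection;
if (connection && connection.effectiveType !== '4g') {
  // On slow-2g, 2g, or 3g, replace the video with a lightweight still image.
  const video = document.getElementById('intro-video');
  const image = document.createElement('img');
  image.src = 'intro-still.jpg';
  image.alt = 'Intro animation';
  video.replaceWith(image);
}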
It does remove a little bit from the experience, but at least the site is usable on a slow
connection.
Lazy-load off-screen images
Carousels, sliders, or really long pages often load images, even though the user cannot see them
on the page straight away.
Lighthouse will flag this behavior in the off-screen images audit, and you can also see it
for yourself in the network panel of DevTools. If you see a lot of images incoming while only
a few are visible on the page, it means that maybe you could consider lazy loading them instead.
Lazy loading is not yet supported natively in the browser, so we have to use JavaScript to add this
capability. We used the lazysizes library to add lazy
loading behavior to our Oodle covers.
<!-- Import the library in your bundle... -->
import lazysizes from 'lazysizes';

<!-- ...or include it with a script tag -->
<script src="lazysizes.min.js"></script>

<!-- Use it -->
<img data-src="image.jpg" class="lazyload"/>
<img class="lazyload"
     data-sizes="auto"
     data-src="image2.jpg"
     data-srcset="image1.jpg 300w,
                  image2.jpg 600w,
                  image3.jpg 900w"/>
Lazysizes is smart because it not only tracks the visibility changes of an element, but also
proactively prefetches elements that are near the viewport, for an optimal user experience.
It also offers an optional IntersectionObserver integration, which gives you very
efficient visibility lookups.
After this change our images are being fetched on-demand. If you want to dig deeper into that
topic, check out images.guide - a very handy and comprehensive resource.
Help the browser deliver critical resources early
Not every byte that's shipped down the wire to the browser has the same degree of importance,
and the browser knows this. A lot of browsers have heuristics to decide what they should be
fetching first. So sometimes they'll fetch CSS before images or scripts.
Something that could be useful is us, as authors of the page, informing the browser what's
actually really important to us. Thankfully, over the last couple of years browser vendors have
been adding a number of features to help us with this, e.g.
resource hints like link rel=preconnect,
preload, or prefetch.
These capabilities, brought to the web platform, help the browser fetch the right thing
at the right time, and they can be a little more efficient than custom loading logic
written in script instead.
Let's see how Lighthouse actually guides us towards using some of these features effectively.
The first thing Lighthouse tells us to do is avoid multiple costly round trips to any origin.
In the case of the Oodle app, we're actually heavily using Google Fonts. Whenever you drop a
Google Fonts stylesheet into your page, it's going to connect to up to two subdomains. What
Lighthouse is telling us is that if we were able to warm up that connection, we could save
anywhere up to 300 milliseconds in our initial connection time.
Taking advantage of link rel=preconnect, we can effectively mask that connection latency.
Especially with something like Google Fonts, where our font-face CSS is hosted
on googleapis.com and our font
resources are hosted on gstatic, this can have a really big
impact. So we applied this optimization and shaved off a few hundred milliseconds.
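In our case that meant one hint per origin, along these lines:
<link rel="preconnect" href="https://fonts.googleapis.com">
<!-- Font files are fetched with CORS, hence the crossorigin attribute -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>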
The next thing that Lighthouse suggests is that we preload key requests.
<link rel=preload> is really powerful. It informs the browser that a resource is needed as
part of the current navigation, and it tries to get the browser fetching it as soon as possible.
Here, Lighthouse is telling us that we should go and preload our key web font
resources, because we're loading in two web fonts.
Preloading a web font looks like this: specifying rel=preload, you pass in as with a value of
font, and then you specify the type of font you're trying to load, such as woff2.
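Applied to one of our fonts, the markup looks something like this (the URL is ours; note that
font preloads need the crossorigin attribute):
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/my-web-font.woff2" crossorigin>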
The impact this can have on your page is quite stark.
Normally, without link rel=preload, if web fonts happen to be critical to your page, the
browser first has to fetch your HTML, then parse your CSS, and only somewhere much later down
the line will it finally go and fetch your web fonts.
Using link rel preload, as soon as the browser has parsed your HTML it can actually start fetching
those web fonts much earlier on. In the case of our app, this was able to shave a second off the
time it took for us to render text using our web fonts.
Now, it's not quite that straightforward if you're going to try preloading fonts served by
Google Fonts - there is one gotcha.
The Google Fonts URLs that we specify in the @font-face rules of our stylesheets happen to be
something the fonts team updates fairly regularly. These URLs can expire or get updated, so if
you want complete control over your font loading experience, we suggest self-hosting your web
fonts. This can be great because it gives you access
to things like link rel=preload.
In our case we found the tool
Google Web Fonts Helper
really useful in helping us offline some of those web fonts and set them up locally, so check that
tool out.
Whether you're using web fonts as part of your critical resources, or it happens to be JavaScript,
try to help the browser deliver your critical resources as soon as possible.
Experimental: Priority Hints
We've got something special to share with you today. In addition to features like resource hints
and preload, we've been working on a brand new experimental browser feature we're calling
priority hints.
This is a new feature that allows you to hint to the browser how important a resource is. It
exposes a new attribute - importance - with the values low, high or auto.
This allows us to lower the priority of less important resources, such as non-critical
styles, images, or fetch API calls, to reduce contention. We can also boost the priority of more
important things, like our hero images.
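In markup, that can look like this (the resource names are ours):
<!-- Lower the priority of a non-critical, below-the-fold image -->
<img src="thumbnail.jpg" importance="low">
<!-- Boost the priority of the hero image -->
<img src="hero.jpg" importance="high">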
In the case of our Oodle app, this actually led to one practical place where we could optimize.
Before we added lazy loading to our images, the browser was fetching the entire image carousel
of doodles with high priority, early on. Unfortunately, it was the images in the middle of the
carousel that mattered most to the user. So we set the importance of the background images to
low and of the foreground ones to high, and this had a two-second impact over slow 3G on how
quickly we were able to fetch and render those images.
A nice, positive experience.
We're hoping to bring this feature to Canary
in a few weeks, so keep an eye out for that.
Have a web font loading strategy
Typography is fundamental to good design, and if you're using web fonts you ideally don't want to
block rendering of your text, and you definitely don't want to show invisible text.
We highlight this in Lighthouse now, with the avoid invisible text while web fonts are loading
audit.
If you load your web fonts using an @font-face block, you're letting the browser decide what to
do if the web font takes a long time to fetch. Some browsers will wait anywhere up
to three seconds before falling back to a system font, and they'll eventually swap in your font
once it's downloaded.
We're trying to avoid this invisible text, so in this case we wouldn't have been able to see this
week's classic doodles if the web font had taken too long. Thankfully, with a new feature called
font-display, you actually get a lot more control over this process.
font-display helps you decide how web fonts will render or fall back based on how long it takes
for them to swap.
In this case we're using font-display: swap. Swap gives the font face a zero-second block period
and an infinite swap period. This means the browser is going to draw your text pretty much
immediately with a fallback font if the font takes a while to load, and it's going to swap to
your font face once it's available.
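As a sketch, with a hypothetical self-hosted font, the rule looks like this:
@font-face {
  font-family: 'My Web Font';
  src: url('/fonts/my-web-font.woff2') format('woff2');
  /* Zero-second block period, infinite swap period */
  font-display: swap;
}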
What made this great for our app is that it allowed us to display some meaningful text
very early on and transition over to the web font once it was ready.
In general, if you happen to be using web fonts, as a large percentage of the web does, have
a good web font loading strategy in place.
There are a lot of web platform features you can use to optimize your loading experience for fonts,
but also check out Zach Leatherman's Web Font Recipes repo,
because it's really great.
Reduce render-blocking scripts
There are other parts of our application that we could push earlier in the download chain to provide
at least some basic user experience a little bit earlier.
On the Lighthouse timeline strip you can see that during these first few seconds when all the
resources are loading, the user cannot really see any content.
Downloading and processing external stylesheets is blocking our rendering process from making
any progress.
We can try to optimize our critical rendering path by delivering some of the styles a bit earlier.
If we extract the styles that are responsible for this initial render and inline them in our HTML,
the browser is able to render them straight away without waiting for the external stylesheets
to arrive.
In our case, we used an NPM module called Critical
to inline our critical content in index.html during a build step.
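A build-step sketch (the option names vary a little between versions of the module, and the
paths are ours):
const critical = require('critical');

critical.generate({
  base: 'dist/',          // our build output directory
  src: 'index.html',
  target: 'index.html',   // write the result back with critical CSS inlined
  inline: true,
});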
While this module did most of the heavy lifting for us, it was still a little bit tricky to get
this working smoothly across different routes.
If you are not careful, or if your site structure is really complex, it might be really difficult
to introduce this type of pattern if you did not plan for an
app shell architecture
from the beginning.
This is why it's so important to take performance into consideration early on. If you don't design
for performance from the start, there is a high chance that you will run into issues doing it later.
In the end, the risk paid off: we managed to make it work, the app started delivering content much
earlier, and our first meaningful paint time improved significantly.
The outcome
That was a long list of performance optimizations we applied to our site. Let's take a look at the
outcome. This is how our app loaded on a median mobile device on a 3G network, before and after
the optimization.
The Lighthouse performance score went up from 23 to 91. That's pretty nice progress in terms
of speed. All of the changes were fueled by us continuously checking and following the Lighthouse
report. If you'd like to check out how we technically implemented all of the improvements, feel
free to take a look at our repo, especially at the PRs that
landed there.
Predictive performance - data-driven user experiences
We believe that machine learning represents an exciting opportunity for the future in many areas.
One idea that we hope will spark more experimentation in the future is that real data can really
guide the user experiences we're creating.
Today, we make a lot of arbitrary decisions about what the user might want or need, and therefore
what is worth prefetching, preloading, or pre-caching. If we guess right, we're able to
prioritize a small amount of resources, but it's really hard to scale that to a whole website.
We actually have data available to better inform our optimizations today.
Using the Google Analytics Reporting API,
we can take a look at the top next pages and exit
percentages for any URL on our site, and from that draw conclusions about which resources we
should prioritize.
If we combine this with a good probability model, we avoid wasting our users' data by aggressively
over-prefetching content. We can take advantage of that Google
Analytics data, and use machine learning models like
Markov chains or
neural networks
to implement such predictions.
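As a toy sketch of the idea, page-transition counts pulled from analytics can be turned into
next-page probabilities that decide what's worth prefetching (all data here is invented):
// Transition counts per page, e.g. aggregated from the Reporting API.
const transitions = {
  '/home': {'/doodles': 80, '/about': 15, '/contact': 5},
};

function nextPageProbabilities(page) {
  const counts = transitions[page] || {};
  const total = Object.values(counts).reduce((sum, n) => sum + n, 0);
  const probabilities = {};
  for (const [url, n] of Object.entries(counts)) {
    probabilities[url] = n / total;
  }
  return probabilities;
}

// {'/doodles': 0.8, '/about': 0.15, '/contact': 0.05} - prefetch '/doodles'.
console.log(nextPageProbabilities('/home'));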
To facilitate these experiments, we're happy to announce a new initiative we're calling
Guess.js.
Guess.js is a project focused on data-driven user experiences for the web. We hope that it's going
to inspire exploration of using data to improve web performance and go beyond that. It's all
open source and available on GitHub today. This was built in collaboration with the open source
community by Minko Gechev, Kyle Mathews from Gatsby, Katie Hempenius, and a number of others.
Check out Guess.js, let us know what you think.
Summary
Scores and metrics are helpful in improving the speed of the web, but they are just the means,
not the goals themselves.
We've all experienced slow page loads on the go, but we now have an opportunity to give our
users more delightful experiences that load really quickly.
Improving performance is a journey. Lots of small changes can lead to big gains. By using the
right optimization tools and keeping an eye on the Lighthouse reports, you can provide a better
and more inclusive experience to your users.
With special thanks to: Ward Peeters, Minko Gechev, Kyle Mathews, Katie Hempenius, Dom Farolino,
Yoav Weiss, Susie Lu, Yusuke Utsunomiya, Tom Ankers, Lighthouse & Google Doodles.
One of the best things about WebAssembly is the
ability to experiment with new capabilities and implement new ideas before the
browser ships those features natively (if at all). You can think of using
WebAssembly this way as a high-performance polyfill mechanism, where you
write your feature in C/C++ or Rust rather than JavaScript.
With a plethora of existing code available for porting, it's possible to do
things in the browser that weren't viable until WebAssembly came along.
This article will walk through an example of how to take the existing AV1
video codec source code, build a wrapper for it, and try it out inside your
browser, along with tips to help you build a test harness for debugging the wrapper.
Full source code for the example here is available at
github.com/GoogleChromeLabs/wasm-av1 for reference.
TL;DR: Download one of these two 24fps test
video files
and try them on our built demo.
Choosing an interesting code-base
For a number of years now, we've seen that a large percentage of traffic on
the web consists of video data; Cisco
estimates it at as much as 80%, in fact! Of course, browser
vendors and video sites are hugely aware of the desire to reduce the data
consumed by all this video content. The key to that, of course, is better
compression, and as you'd expect there is a lot of research into
next-generation video compression aimed at reducing the data burden of shipping
video across the internet.
As it happens, the Alliance for Open Media has been
working on a next generation video compression scheme called
AV1 that promises to shrink
video data size considerably. In the future, we'd expect browsers to ship
native support for AV1, but luckily the source code for the compressor and
decompressor is open source, which
makes it an ideal candidate for compiling into WebAssembly so
we can experiment with it in the browser.
Adapting for use in the browser
One of the first things we need to do to get this code into the browser is to
get to know the existing code to understand what the API is like. When first
looking at this code, two things stand out:
The source tree is built using a tool called cmake; and
There are a number of examples that all assume some kind of file-based interface.
All the examples that get built by default can be run on the command line, and
that is likely to be true in many other code bases available in the community.
So, the interface we're going to build to make it run in the browser could be
useful for many other command line tools.
Using cmake to build the source code
Fortunately, the AV1 authors have been experimenting with
Emscripten, the SDK we're going to
use to build our WebAssembly version. In the root of the
AV1 repository, the file
CMakeLists.txt contains these build rules:
if(EMSCRIPTEN)
  add_preproc_definition(_POSIX_SOURCE)
  append_link_flag_to_target("inspect" "-s TOTAL_MEMORY=402653184")
  append_link_flag_to_target("inspect" "-s MODULARIZE=1")
  append_link_flag_to_target("inspect"
                             "-s EXPORT_NAME=\"\'DecoderModule\'\"")
  append_link_flag_to_target("inspect" "--memory-init-file 0")
  if("${CMAKE_BUILD_TYPE}" STREQUAL "")
    # Default to -O3 when no build type is specified.
    append_compiler_flag("-O3")
  endif()
  em_link_post_js(inspect "${AOM_ROOT}/tools/inspect-post.js")
endif()
The Emscripten toolchain can generate output in two formats, one is called
asm.js and the other is WebAssembly.
We'll be targeting WebAssembly as it produces smaller output and can run
faster. These existing build rules are meant to compile an
asm.js version of the library for use in an
inspector application that's leveraged to look at the content of a video
file. For our usage, we need WebAssembly output so we add these lines just
before the closing endif() statement in the
rules above.
# Force generation of Wasm instead of asm.js
append_link_flag_to_target("inspect" "-s WASM=1")
append_compiler_flag("-s WASM=1")
Note: It's important to create a build directory that's separate from the
source code tree, and run all the commands below inside that build directory.
Building with cmake means first generating some
Makefiles by running
cmake itself, followed by running the command
make, which performs the compilation step.
Note that since we are using Emscripten, we need to use the
Emscripten compiler toolchain rather than the default host compiler.
That's achieved by using Emscripten.cmake, which
is part of the Emscripten SDK, and
passing its path as a parameter to cmake itself.
The command line we use to generate the Makefiles takes roughly the following shape:
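# A minimal sketch; run from a separate build directory. The real invocation
# may pass additional AV1 configuration flags.
cmake path/to/aom \
  -DCMAKE_TOOLCHAIN_FILE=path/to/emsdk-portable/.../Emscripten.cmake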
The parameter path/to/aom should be set to the full path of
the location of the AV1 library source files. The
path/to/emsdk-portable/.../Emscripten.cmake parameter needs
to be set to the path for the Emscripten.cmake toolchain description file.
For convenience, we use a small shell script to locate that file, along these lines:
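# Hypothetical helper: search the SDK checkout for the toolchain file.
find path/to/emsdk-portable -name Emscripten.cmake -print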
If you look at the top-level Makefile for this project, you
can see how that script is used to configure the build.
Now that all of the setup has been done, we simply call make,
which builds the entire source tree, including samples, but most
importantly generates libaom.a, which contains the
video decoder compiled and ready for us to incorporate into our project.
Designing an API to interface to the library
Once we've built our library, we need to work out how to interface with it to
send compressed video data to it and then read back frames of video that we
can display in the browser.
Taking a look inside the AV1 code tree, a good starting point is an example
video decoder which can be found in the file
simple_decoder.c.
That decoder reads in an IVF file
and decodes it into a series of images that represent the frames in the video.
We implement our interface in the source file
decode-av1.c.
Since our browser can't read files from the file system, we need to design some
form of interface that lets us abstract away our I/O so that we can build
something similar to the example decoder to get data into our AV1 library.
On the command line, file I/O is what's known as a stream interface, so we can
just define our own interface that looks like stream I/O and build whatever we
like in the underlying implementation.
We define our interface like this:
DATA_Source *DS_open(const char *what);
size_t DS_read(DATA_Source *ds,
               unsigned char *buf, size_t bytes);
int DS_empty(DATA_Source *ds);
void DS_close(DATA_Source *ds);
// Helper function for blob support
void DS_set_blob(DATA_Source *ds, void *buf, size_t len);
The open/read/empty/close functions look a lot like normal
file I/O operations, which allows us to map them easily onto file I/O for a
command line application, or implement them some other way when running inside
a browser. The DATA_Source type is opaque from
the JavaScript side, and just serves to encapsulate the interface. Note that
building an API that closely follows file semantics makes it easy to reuse in
many other code-bases that are intended to be used from a command line
(e.g. diff, sed, etc.).
We also need to define a helper function called DS_set_blob
that binds raw binary data to our stream I/O functions. This lets the blob be
'read' as if it were a stream (i.e. looking like a sequentially read file).
Our example implementation enables reading the passed-in blob as if it were a
sequentially read data source. The reference code can be found in the file
blob-api.c; a sketch of that implementation, with struct fields named by us,
looks like this:
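#include <stdlib.h>
#include <string.h>

// A DATA_Source backed by an in-memory blob plus a read cursor.
typedef struct DATA_Source {
    unsigned char *buf;  // start of the blob
    size_t len;          // total size of the blob
    size_t pos;          // current read position
} DATA_Source;

DATA_Source *DS_open(const char *what) {
    (void)what;  // unused: the blob is attached later via DS_set_blob()
    return calloc(1, sizeof(DATA_Source));
}

size_t DS_read(DATA_Source *ds, unsigned char *buf, size_t bytes) {
    size_t avail = ds->len - ds->pos;
    if (bytes > avail) {
        bytes = avail;  // clamp reads at the end of the blob
    }
    memcpy(buf, ds->buf + ds->pos, bytes);
    ds->pos += bytes;
    return bytes;
}

int DS_empty(DATA_Source *ds) {
    return ds->pos >= ds->len;
}

void DS_close(DATA_Source *ds) {
    free(ds);
}

void DS_set_blob(DATA_Source *ds, void *buf, size_t len) {
    ds->buf = buf;
    ds->len = len;
    ds->pos = 0;
}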
Building a test harness to test outside the browser
One of the best practices in software engineering is to build unit tests for
code in conjunction with integration tests.
When building with WebAssembly in the browser, it makes sense to build some
form of unit test for the interface to the code we're working with so we can
debug outside of the browser and also be able to test out the interface we've
built.
In this example, we've been emulating a stream-based API as the interface to
the AV1 library. So, logically, it makes sense to build a test harness with
which we can build a version of our API that runs on the command line and does
actual file I/O under the hood, by implementing the file I/O itself underneath
our DATA_Source API.
The stream I/O code for our test harness is straightforward: it simply maps our
DATA_Source calls onto C file I/O. A sketch of it looks like this:
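#include <stdio.h>
#include <stdlib.h>

// On the command line, a DATA_Source is just a thin wrapper around a FILE.
typedef struct DATA_Source {
    FILE *file;
} DATA_Source;

DATA_Source *DS_open(const char *what) {
    DATA_Source *ds = malloc(sizeof(DATA_Source));
    if (ds != NULL) {
        ds->file = fopen(what, "rb");
        if (ds->file == NULL) {
            free(ds);
            return NULL;
        }
    }
    return ds;
}

size_t DS_read(DATA_Source *ds, unsigned char *buf, size_t bytes) {
    return fread(buf, 1, bytes, ds->file);
}

int DS_empty(DATA_Source *ds) {
    return feof(ds->file);
}

void DS_close(DATA_Source *ds) {
    fclose(ds->file);
    free(ds);
}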
By abstracting the stream interface we can build our WebAssembly module to
use binary data blobs when in the browser, and interface to real files when
we build the code to test from the command line. Our test harness code can be
found in the example source file
test.c.
Implementing a buffering mechanism for multiple video frames
When playing back video, it's common practice to buffer a few frames to help
with smoother playback. For our purposes we'll just implement a buffer of 10
frames of video, so we'll buffer 10 frames before we start playback. Then each
time a frame is displayed, we'll try to decode another frame so we keep the
buffer full. This approach makes sure frames are available in advance to help
stop the video stuttering.
With our simple example, the entire compressed video is available to read, so
the buffering isn't really needed. However, if we're to extend the source data
interface to support streaming input from a server, then we need to have the
buffering mechanism in place.
The code in
decode-av1.c
that reads frames of video data from the AV1 library and stores them in the buffer
looks like this:
void
AVX_Decoder_run(AVX_Decoder *ad) {
...
// Try to decode an image from the compressed stream, and buffer
while (ad->ad_NumBuffered < NUM_FRAMES_BUFFERED) {
ad->ad_Image = aom_codec_get_frame(&ad->ad_Codec,
&ad->ad_Iterator);
if (ad->ad_Image == NULL) {
break;
}
else {
buffer_frame(ad);
}
}
We've chosen to make the buffer contain 10 frames of video, which is just an
arbitrary choice. Buffering more frames means more waiting time for the video
to begin playback, whilst buffering too few frames can cause stalling during
playback. In a native browser implementation, buffering of frames is far more
complex than this implementation.
Getting the video frames onto the page with WebGL
The frames of video that we've buffered need to be displayed on our page. Since
this is dynamic video content, we want to be able to do that as fast as
possible. For that, we turn to
WebGL.
WebGL lets us take an image, such as a frame of video, and use it as a texture
that gets painted on to some geometry. In the WebGL world, everything consists
of triangles. So, for our case we can use a convenient built-in feature of
WebGL, called gl.TRIANGLE_FAN.
However, there is a minor problem. WebGL textures are supposed to be RGB
images, one byte per color channel. The output from our AV1 decoder is images
in a so-called YUV format, where the default output has 16 bits per channel,
and also each U or V value corresponds to 4 pixels in the actual output image.
This all means we need to color convert the image before we can pass it to
WebGL for display.
To do so, we implement a function AVX_YUV_to_RGB() which you
can find in the source file
yuv-to-rgb.c.
That function converts the output from the AV1 decoder into something we can
pass to WebGL. Note, that when we call this function from JavaScript we need
to make sure that the memory we're writing the converted image into has been
allocated inside the WebAssembly module's memory - otherwise it can't get
access to it. The function to get an image out from the WebAssembly module and
paint it to the screen is this:
function show_frame(af) {
    if (rgb_image != 0) {
        // Convert the 16-bit YUV to 8-bit RGB
        let buf = Module._AVX_Video_Frame_get_buffer(af);
        Module._AVX_YUV_to_RGB(rgb_image, buf, WIDTH, HEIGHT);
        // Paint the image onto the canvas
        drawImageToCanvas(new Uint8Array(Module.HEAPU8.buffer,
                                         rgb_image, 3 * WIDTH * HEIGHT),
                          WIDTH, HEIGHT);
    }
}
The drawImageToCanvas() function that implements the WebGL painting can be
found in the source file
draw-image.js
for reference.
Future work and takeaways
Trying our demo out on two test
video files
(recorded as 24fps video) teaches us a few things:
It's entirely feasible to build a complex code-base to run performantly in the browser using WebAssembly; and
Something as CPU intensive as advanced video decoding is feasible via WebAssembly.
There are some limitations though: the implementation all runs on the
main thread, and we interleave painting and video decoding on that single
thread. Offloading the decoding into a web worker could give us
smoother playback, as the time to decode a frame is highly dependent on the
content of that frame and can sometimes take more time than we have budgeted.
The compilation into WebAssembly uses the AV1 configuration for a generic CPU
type. If we compile natively on the command line for a generic CPU, we see
similar CPU load to decode the video as with the WebAssembly version; however,
the AV1 decoder library also includes
SIMD implementations that run up to
5 times faster. The WebAssembly Community Group is currently working on
extending the standard to include
SIMD primitives,
and when that lands it promises to speed up decoding considerably.
When that happens, it'll be entirely feasible to decode 4K HD video in
real time from a WebAssembly video decoder.
In any case, the example code is useful as a guide to help port any existing
command line utility to run as a WebAssembly module and shows what's possible
on the web already today.
Credits
Thanks to Jeff Posnick, Eric Bidelman and Thomas Steiner for providing
valuable review and feedback.
In my last wasm article I talked
about how to compile a C library to wasm so you can use it on the web. One thing
that stood out to me (and to many readers) is the crude and slightly awkward way
you have to manually declare which functions of your wasm module you are using.
To refresh your memory, this is the kind of code snippet I am talking about
(a sketch based on that article's buffer example):
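const api = {
  version: Module.cwrap('version', 'number', []),
  create_buffer: Module.cwrap('create_buffer', 'number', ['number', 'number']),
  destroy_buffer: Module.cwrap('destroy_buffer', '', ['number']),
};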
Here we declare the names of the functions that we marked with
EMSCRIPTEN_KEEPALIVE, what their return types are, and what the types of their
arguments are. Afterwards, we can use the functions on the api object to invoke
them. However, using wasm this way doesn't support strings and
requires you to manually move chunks of memory around, which makes many library
APIs very tedious to use. Isn't there a better way? Why yes, there is - otherwise
what would this article be about?
C++ name mangling
While the developer experience would be reason enough to build a tool that helps
with these bindings, there's actually a more pressing reason: When you compile C
or C++ code, each file is compiled separately. Then a linker takes care of
munging all these so-called object files together and turning them into a wasm
file. With C, the names of the functions are still available in the object file
for the linker to use. All you need to be able to call a C function is the name,
which we are providing as a string to cwrap().
C++ on the other hand supports function overloading, meaning you can implement
the same function multiple times as long as the signature is different (e.g.
differently typed parameters). At the compiler level, a nice name like add
would get mangled into something that encodes the signature in the function
name for the linker. As a result, we wouldn't be able to look up our function
with its name anymore.
Note: You can prevent the compiler from mangling your functions' names by
annotating them with extern "C".
Enter embind
embind
is part of the Emscripten toolchain and provides you with a bunch of C++ macros
that allow you to annotate C++ code. You can declare which functions, enums,
classes or value types you are planning to use from JavaScript. Let's start
simple with some plain functions:
#include <emscripten/bind.h>
using namespace emscripten;
double add(double a, double b) {
return a + b;
}
std::string exclaim(std::string message) {
return message + "!";
}
EMSCRIPTEN_BINDINGS(my_module) {
  function("add", &add);
  function("exclaim", &exclaim);
}
Compared to my previous article, we are not including emscripten.h anymore, as
we no longer have to annotate our functions with EMSCRIPTEN_KEEPALIVE.
Instead, we have an EMSCRIPTEN_BINDINGS section in which we list the names under
which we want to expose our functions to JavaScript.
Note: The parameter for the EMSCRIPTEN_BINDINGS macro is mostly used to avoid
name conflicts in bigger projects.
To compile this file, we can use the same setup (or, if you want, the same
Docker image) as in the previous
article. To use embind,
we add the --bind flag:
$ emcc --bind -O3 add.cpp
Now all that's left is whipping up an HTML file that loads our freshly
created wasm module. A sketch, assuming Emscripten's default a.out.js output name:
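<script src="/a.out.js"></script>
<script>
  Module.onRuntimeInitialized = _ => {
    console.log(Module.add(1, 2.3));      // prints 3.3
    console.log(Module.exclaim('hello')); // prints "hello!"
  };
</script>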
Note: If you are curious, I wrote up the
same C++ module
without embind to give you a feel for how much work it is doing for you.
As you can see, we aren't using cwrap() anymore. This just works straight out
of the box. But more importantly, we don't have to worry about manually copying
chunks of memory around to make strings work! embind gives you that for free,
along with type checks:
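For example, calling a bound function with the wrong number of arguments throws instead of
silently misbehaving (an illustrative session; the exact message can differ between Emscripten
versions):
Module.add(1);
// throws: function add called with 1 arguments, expected 2 args!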
This is pretty great as we can catch some errors early instead of dealing with
the occasionally quite unwieldy wasm errors.
Objects
Many JavaScript constructors and functions use options objects. It's a nice
pattern in JavaScript, but extremely tedious to realize in wasm manually. embind
can help here, too!
For example, I came up with this incredibly useful C++ function that processes my
strings, and I urgently want to use it on the web. Here is a sketch of how I did
that (the options struct and its fields are illustrative):
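#include <emscripten/bind.h>
#include <algorithm>
#include <string>

using namespace emscripten;

struct ProcessMessageOpts {
  bool reverse;
  bool exclaim;
};

std::string processMessage(std::string message, ProcessMessageOpts opts) {
  if (opts.reverse) {
    std::reverse(message.begin(), message.end());
  }
  if (opts.exclaim) {
    message += "!";
  }
  return message;
}

EMSCRIPTEN_BINDINGS(my_module) {
  // Expose the struct to JavaScript as a plain options object.
  value_object<ProcessMessageOpts>("ProcessMessageOpts")
      .field("reverse", &ProcessMessageOpts::reverse)
      .field("exclaim", &ProcessMessageOpts::exclaim);
  function("processMessage", &processMessage);
}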
I am defining a struct for the options of my processMessage() function. In the
EMSCRIPTEN_BINDINGS block, I can use value_object to make JavaScript see
this C++ value as an object. I could also use value_array if I preferred to
use this C++ value as an array. I also bind the processMessage() function, and
the rest is embind magic. I can now call the processMessage() function from
JavaScript without any boilerplate code:
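Continuing the sketch, the call site needs no boilerplate at all:
Module.onRuntimeInitialized = _ => {
  const result = Module.processMessage('hello world', {
    reverse: false,
    exclaim: true,
  });
  console.log(result); // prints "hello world!"
};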
For completeness' sake, I should also show you how embind allows you to expose
entire classes, which brings a lot of synergy with ES6 classes. You can probably
start to see a pattern by now:
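Here's a sketch of the C++ side, with the bindings inferred from the JavaScript
usage below:
#include <emscripten/bind.h>

using namespace emscripten;

class Counter {
 public:
  explicit Counter(int init) : counter_(init) {}
  void increase() { counter_++; }
  int squareCounter() const { return counter_ * counter_; }
  int getCounter() const { return counter_; }
  void setCounter(int value) { counter_ = value; }

 private:
  int counter_;
};

EMSCRIPTEN_BINDINGS(my_module) {
  class_<Counter>("Counter")
      .constructor<int>()
      .function("increase", &Counter::increase)
      .function("squareCounter", &Counter::squareCounter)
      // Exposed as a property, so JavaScript can read `c.counter` directly.
      .property("counter", &Counter::getCounter, &Counter::setCounter);
}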
On the JavaScript side, this almost feels like a native class:
<script src="/a.out.js"></script>
<script>
Module.onRuntimeInitialized = _ => {
const c = new Module.Counter(22);
console.log(c.counter); // prints 22
c.increase();
console.log(c.counter); // prints 23
console.log(c.squareCounter()); // prints 529
};
</script>
What about C?
embind was written for C++ and can only be used in C++ files, but that doesn't
mean that you can't link against C files! To mix C and C++, you only need to
separate your input files into two groups - one for C files and one for C++
files - and augment the CLI flags for emcc as follows:
$ emcc --bind -O3 --std=c++11 a_c_file.c another_c_file.c -x c++ your_cpp_file.cpp
Conclusion
embind gives you great improvements in the developer experience when working
with wasm and C/C++. This article does not cover all the options embind offers.
If you are interested, I recommend continuing with embind's
documentation.
Keep in mind that using embind can make both your wasm module and your
JavaScript glue code bigger by up to 11k when gzip'd — most notably on small
modules. If you only have a very small wasm surface, embind might cost more than
it's worth in a production environment! Nonetheless, you should definitely give
it a try.
Note: We'll publish the video version of What's New In DevTools (Chrome 70) around
mid-October 2018.
Welcome back! It's been about 12 weeks since our last update, which was for Chrome 68.
We skipped Chrome 69 because we didn't have enough new features or UI changes to warrant a post.
New features and major changes coming to DevTools in Chrome 70 include:
Break on AudioContext events. Use the Event Listener Breakpoints
pane to pause on the first line of an AudioContext lifecycle event handler.
Debug Node.js apps with ndb. Detect and attach to child processes, place breakpoints
before modules are required, edit files from the DevTools UI, blackbox scripts outside
of the working directory, and more.
Live Expressions in the Console
Pin a Live Expression to the top of your Console when you want to monitor its value in
real-time.
1. Click Create Live Expression. The Live Expression UI opens.
2. Type the expression that you want to monitor.
3. Click outside of the Live Expression UI to save your expression.
Live Expression values update every 250 milliseconds.
Highlight DOM nodes during Eager Evaluation
Type an expression that evaluates to a DOM node in the Console and Eager Evaluation
highlights that node in the viewport.
Performance panel optimizations
When profiling a large page, the Performance panel previously took tens of seconds to process
and visualize the data. Clicking on an event to learn more about it in the Summary tab also
sometimes took multiple seconds to load. Processing and visualizing is faster in Chrome 70.
More reliable debugging
Chrome 70 fixes some bugs that were causing breakpoints to disappear or not get
triggered.
It also fixes bugs related to sourcemaps. Some TypeScript users would instruct
DevTools to blackbox a certain TypeScript file while stepping through code, and instead DevTools
would blackbox the entire bundled JavaScript file.
These fixes also address an issue that was causing the Sources panel to generally run slowly.
Enable network throttling from the Command Menu
You can now set network throttling to fast 3G or slow 3G from the Command Menu.
If you're on Mac or Windows, consider using Chrome Canary as your default
development browser. Canary gives you access to the latest DevTools features.
Note: Canary is released as soon as it's built, without testing. This means that Canary
breaks about once a month. It's usually fixed within a day. You can go back to using Chrome
Stable while Canary is broken.
Previous release notes
See the devtools-whatsnew tag for links to all previous DevTools
release notes.
It’s been ten years since Chrome was first released. A lot has changed
since then, but our goal of building a solid foundation for modern web
applications hasn’t!
In Chrome 69, we've added support for:
CSS Scroll Snap allows you to create smooth, slick,
scroll experiences.
Display cutouts let you use the full area of the screen,
including any space behind the display cutout, sometimes called a notch.
The Web Locks API allows you to asynchronously acquire a
lock, hold it while work is performed, then release it.
CSS Scroll Snap
allows you to create smooth, slick, scroll experiences, by declaring scroll
snap positions that tell the browser where to stop after each scrolling
operation. This is super helpful for image carousels, or paginated sections
where you want the user to scroll to a specific point.
For an image carousel, I’d add scroll-snap-type: x mandatory; to the
scroll container, and scroll-snap-align: center; to each image. Then, as the
user scrolls through the carousel, each image will be smoothly scrolled into
the perfect position.
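As a sketch (the class names are ours):
.carousel {
  overflow-x: scroll;
  /* Always stop on a snap position when scrolling along the x axis. */
  scroll-snap-type: x mandatory;
}

.carousel img {
  /* Each image snaps to the center of the scroller. */
  scroll-snap-align: center;
}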
CSS Scroll Snapping works well, even when the snap targets have different
sizes or when they are larger than the scroller! Check out the post
Well-Controlled Scrolling with CSS Scroll Snap
for more details and samples you can try!
Display cutouts (aka notches)
There are an
increasing number of mobile devices
being released with a display cutout, sometimes called a notch. To deal with
that, browsers add a little bit of extra margin to your page so the content
isn’t obscured by the notch.
But what if you want to use that space?
With CSS environment variables and the
viewport-fit
meta tag, now you can. For example, to tell the browser to expand into the
display cutout area, set the viewport-fit property in the viewport meta
tag to cover.
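The markup, plus a matching CSS rule that uses the safe-area environment variables (the class
name is ours), can look like this:
<meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover">

/* Keep essential content clear of the cutout using the safe-area insets. */
.content {
  padding: env(safe-area-inset-top) env(safe-area-inset-right)
           env(safe-area-inset-bottom) env(safe-area-inset-left);
}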
Web Locks API
The Web Locks API allows you to
asynchronously acquire a lock, hold it while work is performed, then release it.
While the lock is held, no other script in the origin can acquire the same lock,
helping to coordinate the usage of shared resources.
For example, if a web app running in multiple tabs wants to ensure that only
one tab is syncing to the network, the sync code would attempt to acquire a
lock named network_sync_lock.
navigator.locks.request('network_sync_lock', async lock => {
// The lock has been acquired.
await do_something();
await do_something_else();
// Now the lock will be released.
});
The first tab to acquire the lock will sync the data to the network. If
another tab tries to acquire the same lock, it’ll be queued. Once the lock has
been released, the next queued request will be granted the lock, and execute.
MDN has a great Web Locks primer
and includes a more in-depth explanation and lots of examples.
And more!
These are just a few of the changes in Chrome 69 for developers; of course,
there's plenty more.
From the CSS4 spec, you can now create color transitions around the
circumference of a circle, using
conic gradients.
Lea Verou has a
CSS conic-gradient() polyfill
that you can use, and the page includes a whole bunch of really cool
community submitted samples.
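For example, a classic color wheel is just a few lines of CSS (the class name is ours):
.color-wheel {
  width: 200px;
  height: 200px;
  border-radius: 50%;
  background: conic-gradient(red, yellow, lime, aqua, blue, magenta, red);
}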
A special thanks to all the people who have helped to make New in Chrome
happen. Every single one of these people are awesome!
Heather Duthie
Tim Malieckal
Rick Murphy
Derek Bass
Kiran Puri
Nilesh Bell-Gorsia
Lee Carruthers
Philip Maniaci
Chris Turiello
Andrew Barker
Alex Crowe
Izzy Cosentino
Norm Magnuson
Loren Borja
Michelle Ortega
Varun Bajaj
Ted Maroney
Andrew Bender
Andrew Naugle
Michelle Michelson
Todd Rawiszer
Anthony Mcgowen
Victoria Canty
Alexander Koht
Jarrod Kloiber
Andre Szyszkowski
Kelsey Allen
Liam Spradlin
And of course, thank you for watching and providing your
comments and feedback! I read all of them, and take your suggestions
to heart. These videos have gotten better because of you!
Thanks for watching!
New in Chrome Bloopers
Video coming soon!
We put together a fun little blooper reel for you to enjoy! After watching
it, I've learned a few things:
When I trip over my words, I make some weird noises.
I make faces and stick my tongue out.
I wiggle, a lot.
Subscribe
Want to stay up to date with our videos? Then subscribe
to our Chrome Developers YouTube channel,
and you'll get an email notification whenever we launch a new video. Or, add our
RSS feed to your feed reader.
I’m Pete LePage, and as soon as Chrome 70 is released, I’ll be right
here to tell you -- what’s new in Chrome!