
What's New In DevTools (Chrome 90)


New CSS flexbox debugging tools

DevTools now has dedicated CSS flexbox debugging tools!

CSS flexbox debugging tools

When an HTML element on your page has display: flex or display: inline-flex applied to it, you can see a flex badge next to it in the Elements panel. Click the badge to toggle the display of a flex overlay on the page.

In the Styles pane, you can click the new icon next to display: flex or display: inline-flex to open the Flexbox editor. The Flexbox editor provides a quick way to edit the flexbox properties. Try it yourself!

In addition, the Layout pane has a Flexbox section that displays all the flexbox elements on the page. You can toggle the overlay of each element.

Flexbox section in the Layout pane

Chromium issues: 1166710, 1175699

New Core Web Vitals overlay

Better visualize and measure your page performance with the new Core Web Vitals overlay.

Core Web Vitals is an initiative by Google to provide unified guidance for quality signals that are essential to delivering a great user experience on the web.

Open the Command Menu, run the Show Rendering command, and then enable the Core Web Vitals checkbox.

The overlay currently displays:

  • Largest Contentful Paint (LCP): measures loading performance. To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading.
  • First Input Delay (FID): measures interactivity. To provide a good user experience, pages should have a FID of less than 100 milliseconds.
  • Cumulative Layout Shift (CLS): measures visual stability. To provide a good user experience, pages should maintain a CLS of less than 0.1.
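
The overlay is a visualization aid; if you also want to log these metrics from your own page code, a minimal sketch using the standard PerformanceObserver API (independent of the overlay, with simplified handling) might look like this:

// Log Largest Contentful Paint candidates as they are reported.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate (ms):', entry.startTime);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Accumulate Cumulative Layout Shift, ignoring shifts caused by recent user input.
let clsScore = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) clsScore += entry.value;
  }
  console.log('CLS so far:', clsScore);
}).observe({ type: 'layout-shift', buffered: true });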

Core Web Vitals overlay

Chromium issue: 1152089

Issues tab updates

Moved issue count to the Console status bar

A new issue count button in the Console status bar improves the visibility of issue warnings. It replaces the issue message in the Console.

Issue count in the Console status bar

Chromium issue: 1140516

Report Trusted Web Activity issues

The Issues tab now reports Trusted Web Activity issues. This aims to help developers understand and fix the Trusted Web Activity issues of their sites, improving the quality of their applications.

Open a Trusted Web Activity. Then, open the Issues tab by clicking the Issues count button in the Console status bar to view the issues. Watch this talk by Andre to learn more about how to create and deploy a Trusted Web Activity.

Trusted Web Activity issues in the Issues tab

Chromium issue: 1147479

Format strings as (valid) JavaScript string literals in the Console

The Console now formats strings as valid JavaScript string literals. Previously, the Console would not escape double quotes when printing strings.
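
As a rough illustration (the exact Console output formatting may differ slightly), evaluating a string that contains double quotes now prints an escaped, copy-pasteable literal:

'What a "great" feature'
// Before: "What a "great" feature"
// Now:    "What a \"great\" feature"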

Format strings as (valid) JavaScript string literals

Chromium issue: 1178530

New Trust Tokens pane in the Application panel

DevTools now displays all available Trust Tokens in the current browsing context in the new Trust Tokens pane, under the Application panel.

Trust Token is a new API to help combat fraud and distinguish bots from real humans, without passive tracking. Learn how to get started with Trust Tokens.
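
As a very rough sketch of what the API shape looked like during its origin trial (the parameter names here may have changed since and should be treated as illustrative, not a reference), a page could ask an issuer for tokens with a fetch call:

// Ask a trust token issuer to issue tokens for this client (origin-trial era shape).
fetch('https://issuer.example/request-tokens', {
  trustToken: { type: 'token-request' },
});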

New Trust Tokens pane

Chromium issue: 1126824

Emulate the CSS color-gamut media feature

Emulate the CSS color-gamut media feature

The color-gamut media query lets you test the approximate range of colors that are supported by the browser and the output device. For example, if the color-gamut: p3 media query matches, it means that the user’s device supports the Display-P3 colorspace.
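
The same check is available from JavaScript through window.matchMedia, which is a handy way to confirm what the emulated value reports; a minimal sketch:

// True when the display (or the emulated media feature) covers the Display-P3 gamut.
if (window.matchMedia('(color-gamut: p3)').matches) {
  console.log('Display-P3 color space is supported');
}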

Open the Command Menu, run the Show Rendering command, and then set the Emulate CSS media feature color-gamut dropdown.

Chromium issue: 1073887

Improved Progressive Web Apps tooling

DevTools now displays a more detailed Progressive Web Apps (PWA) installability warning message in the Console, with a link to documentation.

PWA installability warning

The Manifest pane now shows a warning message if the manifest description exceeds 324 characters.

PWA description truncate warning

In addition, the Manifest pane now shows a warning message if the screenshot of the PWA doesn’t match the requirements. Learn more about the PWA screenshots property and its requirements here.

PWA screenshot warning

Chromium issues: 1146450, 1169689, 965802

New Remote Address Space column in the Network panel

Use the new Remote Address Space column in the Network panel to see the network IP address space of each network resource.

New “Remote Address Space” column

Chromium issue: 1128885

Performance improvements

Page load performance with DevTools open is now improved. In some extreme cases, we saw a 10x performance improvement.

DevTools collects stack traces and attaches them to console messages or asynchronous tasks for later consumption by the developer in case of an issue. Since this collection has to happen synchronously in the browser engine, slow stack trace collection can significantly slow down the page with DevTools open. We've managed to reduce the overhead of stack trace collection significantly.

Stay tuned for a more detailed engineering blog post explaining the implementation.

Chromium issues: 1069425, 1077657

Display allowed/disallowed features in the Frame details view

The Frame details view now shows a list of allowed and disallowed browser features controlled by Permissions Policy.

Permissions policy is a web platform API which gives a website the ability to allow or block the use of browser features in its own frame or in iframes that it embeds.

Allowed/disallowed features based on Permission policy

Chromium issue: 1158827

New SameParty column in the Cookies pane

The Cookies pane in the Application panel now displays the SameParty attribute of the cookies. The SameParty attribute is a new boolean attribute that indicates whether a cookie should be included in requests to origins in the same First-Party Set.

SameParty column

Chromium issue: 1161427

Deprecated non-standard fn.displayName support

Support for the non-standard fn.displayName has been deprecated. Use fn.name instead.

Deprecation of non-standard `fn.displayName` support

Chrome has traditionally supported the non-standard fn.displayName property as a way for developers to control debug names for functions that show up in error.stack and in DevTools stack traces. In the example above, the Call Stack would previously show noLongerSupport.

Replace fn.displayName with the standard fn.name, which was made configurable (via Object.defineProperty) in ECMAScript 2015 to replace the non-standard fn.displayName property.
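
As a small illustration (the function and names are hypothetical), the deprecated property and its standard replacement look like this:

function processData() {}

// Non-standard and now deprecated: no longer honored by DevTools.
processData.displayName = 'Data processor';

// Standard replacement: fn.name has been configurable since ECMAScript 2015.
Object.defineProperty(processData, 'name', { value: 'Data processor' });
console.log(processData.name);  // "Data processor"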

Support for fn.displayName has been unreliable and inconsistent across browser engines. It also slows down stack trace collection, a cost developers pay whether or not they actually use fn.displayName.

Use `fn.name` to control debug names for functions

Chromium issue: 1177685

Deprecation of Don't show Chrome Data Saver warning in the Settings menu

The Don't show Chrome Data Saver warning setting is removed because Chrome Data Saver has been deprecated.

Deprecated “Don't show Chrome Data Saver warning” settings

Chromium issue: 1056922

Experimental features

Automatic low-contrast issue reporting in the Issues tab

DevTools added experimental support for reporting contrast issues in the Issues tab automatically.

Low-contrast text is the most common automatically-detectable accessibility issue on the web. Displaying these issues in the Issues tab helps developers discover them more easily.

Open a page with low-contrast issues (e.g. this demo). Then, open the Issues tab by clicking the Issues count button in the Console status bar to view the issues.

Automatic low-contrast issue reporting

Chromium issue: 1155460

Full accessibility tree view in the Elements panel

You can now toggle to view the new and improved full accessibility tree view of a page.

The current accessibility pane provides a limited display of its nodes, only showing the direct ancestor chain from the root node to the inspected node. The new accessibility tree view aims to improve that and makes the accessibility tree more explorable, useful, and easier for developers to use.

After enabling the experiment, a new button appears in the Elements panel; click it to switch between the existing DOM tree and the full accessibility tree.

Please note that this is an early-stage experiment. We plan to improve and expand the functionality over time.

Full accessibility tree view

Chromium issue: 887173



Adding Rank Magnitude to the CrUX Report in BigQuery.


Starting with the February 2021 dataset, we’re adding an experimental metric to the CrUX report in BigQuery which distinguishes the popularity of origins by orders of magnitude: The top 1k origins, top 10k, top 100k, top 1M, ... Let’s see how this looks in practice:

SELECT
  experimental.popularity.rank AS rank_magnitude,
  COUNT(DISTINCT origin) AS num_origins
FROM
  `chrome-ux-report.all.202102`
GROUP BY
  rank_magnitude
ORDER BY
  rank_magnitude
Row  rank_magnitude  num_origins
  1            1000         1000
  2           10000         9000
  3          100000        90000
  4         1000000       900000
  5        10000000      7264371

For the February 2021 global data set, we get 5 buckets. As expected, in row 1, we see that there are 1000 origins with rank magnitude 1000 - the 1k most popular origins by our metric. Row 2 may look surprising, indicating that there are only 9k origins in the top 10k set; this is because the origins in row 1 are also part of the top 10k set. To select the top 10k origins, one needs to specify experimental.popularity.rank <= 10000 when querying.

The dataset also contains country specific rank magnitude. For example, this query lists the 10k origins that are most popular in Germany.

SELECT DISTINCT origin
FROM `chrome-ux-report.country_de.202102`
WHERE experimental.popularity.rank <= 10000

To touch on the potential of our new popularity metric, let’s see how popularity segments of the web differ with respect to the first contentful paint metric (FCP). For the purpose of this query, we consider 1 second a fast user experience.

SELECT
  SUM(fcp.density) / COUNT(DISTINCT origin)
FROM
  `chrome-ux-report.all.202102`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
  fcp.start < 1000 AND experimental.popularity.rank <= 1000

For the origins with experimental.popularity.rank <= 1000, the query sums all histogram bucket densities for FCP metric values smaller than 1000ms and divides it by the number of origins - that is, it calculates the average percentage of fast FCP loads for the 1k most popular origins. In this query, all origins have equal weight, so arguably this is not perfect. But let’s see whether the result is sensitive to changing the rank magnitude, by altering the where clause to specify experimental.popularity.rank <= 10000. We do this for 10k, 100k, and so on:

Rank magnitude of origins    Percentage of FCP < 1s, averaged over origins
1,000                        53.6%
10,000                       49.6%
100,000                      45.9%
1,000,000                    43.2%
10,000,000                   39.9%

This indicates that a faster user experience on the web is correlated with being more popular.

Learn more about using CrUX on BigQuery and browse the CrUX Cookbook for more example queries. Share your queries if you like, and let us know what you find.

How we built the Chrome DevTools WebAuthn tab


The Web Authentication API, also known as WebAuthn, allows servers to use public key cryptography - rather than passwords - to register and authenticate users. It does this by enabling integration between these servers and strong authenticators. These authenticators may be dedicated physical devices (e.g. security keys) or integrated with platforms (e.g. fingerprint readers). You can read more about WebAuthn here at webauthn.guide.

Developer pain points

Prior to this project, WebAuthn lacked native debugging support in Chrome. A developer building an app that used WebAuthn needed access to physical authenticators. This was especially difficult for two reasons:

  1. There are many different flavors of authenticators. Debugging the breadth of configurations and capabilities necessitated that the developer have access to many different, sometimes expensive, authenticators.

  2. Physical authenticators are, by design, secure. Therefore, inspecting their state is usually impossible.

We wanted to make this easier by adding debugging support right in the Chrome DevTools.

Solution: a new WebAuthn tab

The WebAuthn DevTools tab makes debugging WebAuthn much easier by allowing developers to emulate these authenticators, customize their capabilities, and inspect their states.

New WebAuthn tab

Implementation

Adding debugging support to WebAuthn was a two-part process.

Two-part process

Part 1: Adding WebAuthn Domain to the Chrome DevTools Protocol

First, we implemented a new domain in the Chrome DevTools Protocol (CDP) which hooks into a handler that communicates with the WebAuthn backend.

The CDP connects the DevTools front end with Chromium. In our case, the WebAuthn domain acts as the bridge between the WebAuthn DevTools tab and Chromium's implementation of WebAuthn.

The WebAuthn domain allows enabling and disabling the Virtual Authenticator Environment, which disconnects the browser from the real Authenticator Discovery and plugs in a Virtual Discovery instead.

We also expose methods in the domain that act as a thin layer to the existing Virtual Authenticator and Virtual Discovery interfaces, which are part of Chromium's WebAuthn implementation. These methods include adding and removing authenticators as well as creating, getting, and clearing their registered credentials.
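
To make this concrete, here is a rough sketch of driving the WebAuthn CDP domain from any client that speaks CDP, for example a Puppeteer script with a raw CDP session (the option values are illustrative):

// `page` is a Puppeteer Page; open a raw CDP session to send WebAuthn commands.
const client = await page.target().createCDPSession();
await client.send('WebAuthn.enable'); // switch on the Virtual Authenticator Environment
const { authenticatorId } = await client.send('WebAuthn.addVirtualAuthenticator', {
  options: {
    protocol: 'ctap2',
    transport: 'usb',
    hasResidentKey: true,
    hasUserVerification: true,
  },
});
// Later: inspect the credentials registered on that virtual authenticator.
const { credentials } = await client.send('WebAuthn.getCredentials', { authenticatorId });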

(Read the code)

Part 2: Building the user-facing tab

Second, we built a user-facing tab in the DevTools frontend. The tab is made up of a view and a model. An auto-generated agent connects the domain with the tab.

While three components are needed, we only had to be concerned with two of them: the model and the view. The third component, the agent, is auto-generated after adding the domain. Briefly, the agent is the layer that carries the calls between the front end and the CDP.

The model

The model is the JavaScript layer that connects the agent and the view. For our case, the model is quite simple. It takes commands from the view, builds the requests such that they're consumable by the CDP, and then sends them through via the agent. These requests are usually one-directional with no information being sent back to the view.

However, we do sometimes pass back a response from the model either to provide an ID for a newly-created authenticator or to return the credentials stored in an existing authenticator.
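
As an illustrative sketch (the method and agent names here are hypothetical, not the actual DevTools source), a model method that forwards a view command through the auto-generated agent and passes a response back might look like:

// Forward the view's "add authenticator" request to the CDP WebAuthn domain via
// the agent, then hand the new authenticator's ID back to the view.
async addAuthenticator(options) {
  const response = await this._agent.invoke_addVirtualAuthenticator({ options });
  return response.authenticatorId;
}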

(Read the code)

The view

New WebAuthn tab

We use the view to provide the user interface that a developer can find when accessing DevTools. It contains:

  1. A toolbar to enable virtual authenticator environment.
  2. A section to add authenticators.
  3. A section for created authenticators.

(Read the code)

Toolbar to enable virtual authenticator environment

toolbar

Since most user interactions are with one authenticator at a time rather than the entire tab, the only functionality we provide in the toolbar is toggling the virtual environment on and off.

Why is this necessary? It's important that the user has to explicitly toggle the environment because doing so disconnects the browser from the real Authenticator Discovery. Therefore, while it's on, connected physical authenticators like a fingerprint reader won't be recognized.

We decided that an explicit toggle means a better user experience, especially for those who wander into the WebAuthn tab without expecting real discovery to be disabled.

Adding the Authenticators section

Adding the Authenticators section

Upon enabling the virtual authenticator environment, we present the developer with an inline-form that allows them to add a virtual authenticator. Within this form, we provide customization options which allow the user to decide the authenticator's protocol and transport methods, as well as whether the authenticator supports resident keys and user verification.

Once the user clicks Add, these options are bundled and sent to the model which makes the call to create an authenticator. Once that's complete, the front end will receive a response and then modify the UI to include the newly-created authenticator.

The Authenticator view

The Authenticator view

Each time an authenticator is emulated, we add a section to the authenticator view to represent it. Each authenticator section includes a name, ID, configuration options, buttons to remove the authenticator or set it active, and a credential table.

The Authenticator name

The authenticator's name is customizable and defaults to "Authenticator" concatenated with the last 5 characters of its ID. Originally, the authenticator's name was its full ID and unchangeable. We implemented customizable names so the user can label the authenticator based on its capabilities, the physical authenticator it's meant to emulate, or any nickname that's a bit easier to digest than a 36-character ID.

Credentials table

We added a table to each authenticator section that shows all the credentials registered by an authenticator. Within each row, we provide information about a credential, as well as buttons to remove or export the credential.

Currently, we gather the information to fill these tables by polling the CDP to get the registered credentials for each authenticator. In the future, we plan on adding an event for credential creation.

The Active button

We also added an Active radio button to each authenticator section. The authenticator that is currently set active will be the only one that listens for and registers credentials. Without this, which authenticator registers a credential is non-deterministic when multiple authenticators are present, which would be a fatal flaw when trying to test WebAuthn with them.

We implemented the active status using the SetUserPresence method on virtual authenticators. The SetUserPresence method sets whether tests of user presence succeed for a given authenticator. If we turn it off, an authenticator won't be able to listen for credentials. Therefore, by ensuring that it is on for at most one authenticator (the one set as active), and disabling user presence for all the others, we can force deterministic behavior.

An interesting challenge we faced while adding the active button was avoiding a race condition. Consider the following scenario:

  1. User clicks Active radio button for Authenticator X, sending a request to the CDP to set X as active. The Active radio button for X is selected, and all the others are deselected.

  2. Immediately after, user clicks Active radio button for Authenticator Y, sending a request to the CDP to set Y as active. The Active radio button for Y is selected, and all the others, including for X, are deselected.

  3. In the backend, the call to set Y as active is completed and resolved. Y is now active and all other authenticators are not.

  4. In the backend, the call to set X as active is completed and resolved. X is now active and all other authenticators, including Y, are not.

Now, the resulting situation is as follows: X is the active authenticator. However, the Active radio button for X isn't selected. Y isn't the active authenticator. However, the Active radio button for Y is selected. There is a disagreement between the front end and the actual status of the authenticators. Obviously, that's a problem.

Our solution: Establish pseudo two-way communication between the radio buttons and the active authenticator. First, we maintain a variable activeId in the view to keep track of the ID of the currently active authenticator. Then, we wait for the call to set an authenticator active to return, and then set activeId to the ID of that authenticator. Lastly, we loop through each authenticator section. If the ID of that section equals activeId, we set the button to selected. Otherwise, we set the button to unselected.

Here's what that looks like:


 async _setActiveAuthenticator(authenticatorId) {
   await this._clearActiveAuthenticator();
   await this._model.setAutomaticPresenceSimulation(authenticatorId, true);
   this._activeId = authenticatorId;
   this._updateActiveButtons();
 }

 _updateActiveButtons() {
   const authenticators = this._authenticatorsView.getElementsByClassName('authenticator-section');
   Array.from(authenticators).forEach(authenticator => {
     authenticator.querySelector('input.dt-radio-button').checked =
         authenticator.getAttribute('data-authenticator-id') === this._activeId;
   });
 }

 async _clearActiveAuthenticator() {
   if (this._activeId) {
     await this._model.setAutomaticPresenceSimulation(this._activeId, false);
   }
   this._activeId = null;
 }

Usage metrics

We wanted to track this feature's usage. Initially, we came up with two options.

  1. Count each time the WebAuthn tab in DevTools was opened. This option could potentially lead to overcounting, as someone may open the tab without actually using it.

  2. Track the number of times the "Enable virtual authenticator environment" checkbox in the toolbar was toggled. This option also had a potential overcounting problem as some may toggle the environment on and off multiple times in the same session.

Ultimately, we decided to go with the latter but restrict the counting by having a check to see if the environment had already been enabled in the session. Therefore, we would only increase the count by 1 regardless of how many times the developer toggled the environment. This works because a new session is created each time the tab is reopened, thus resetting the check and allowing for the metric to be incremented again.

Summary

Thank you for reading! If you have any suggestions to improve the WebAuthn tab, let us know by filing a bug.

Here're some resources if you would like to learn more about WebAuthn:


Puppetaria: accessibility-first Puppeteer scripts


Puppeteer and its approach to selectors

Puppeteer is a browser automation library for Node: it lets you control a browser using a simple and modern JavaScript API.

The most prominent browser task is, of course, browsing web pages. Automating this task essentially amounts to automating interactions with the webpage.

In Puppeteer, this is achieved by querying for DOM elements using string-based selectors and performing actions such as clicking or typing text on the elements. For example, a script that opens developers.google.com, finds the search box, and searches for puppetaria could look like this:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://developers.google.com/', { waitUntil: 'load' });
  // Find the search box using a suitable CSS selector.
  const search = await page.$('devsite-search > form > div.devsite-search-container');
  // Click to expand search box and focus it.
  await search.click();
  // Enter search string and press Enter.
  await search.type('puppetaria');
  await search.press('Enter');
})();

How elements are identified using query selectors is therefore a defining part of the Puppeteer experience. Until now, selectors in Puppeteer have been limited to CSS and XPath selectors which, albeit very expressive, can have drawbacks for persisting browser interactions in scripts.

Syntactic vs. semantic selectors

CSS selectors are syntactic in nature; they are tightly bound to the inner workings of the textual representation of the DOM tree in the sense that they reference IDs and class names from the DOM. As such, they provide an integral tool for web developers for modifying or adding styles to an element in a page, but in that context the developer has full control over the page and its DOM tree.

On the other hand, a Puppeteer script is an external observer of a page, so when CSS selectors are used in this context, it introduces hidden assumptions about how the page is implemented which the Puppeteer script has no control over.

The effect is that such scripts can be brittle and susceptible to source code changes. Suppose, for example, that one uses Puppeteer scripts for automated testing for a web application containing the node <button>Submit</button> as the third child of the body element. One snippet from a test case might look like this:

const button = await page.$('body:nth-child(3)'); // problematic selector
await button.click();

Here, we are using the selector 'body:nth-child(3)' to find the submit button, but this is tightly bound to exactly this version of the webpage. If an element is later added above the button, this selector no longer works!

This is not news to test writers: Puppeteer users already attempt to pick selectors that are robust to such changes. With Puppetaria, we give users a new tool in this quest.

Puppeteer now ships with an alternative query handler based on querying the accessibility tree rather than relying on CSS selectors. The underlying philosophy here is that if the concrete element we want to select has not changed, then the corresponding accessibility node should not have changed either.

We name such selectors “ARIA selectors” and support querying for the computed accessible name and role of the accessibility tree. Compared to the CSS selectors, these properties are semantic in nature. They are not tied to syntactic properties of the DOM but instead descriptors for how the page is observed through assistive technologies such as screen readers.

In the test script example above, we could instead use the selector aria/Submit[role="button"] to select the wanted button, where Submit refers to the accessible name of the element:

const button = await page.$('aria/Submit[role="button"]');
await button.click();

Now, if we later decide to change the text content of our button from Submit to Done the test will again fail, but in this case that is desirable; by changing the name of the button we change the page's content, as opposed to its visual presentation or how it happens to be structured in the DOM. Our tests should warn us about such changes to ensure that such changes are intentional.

Going back to the larger example with the search bar, we could leverage the new aria handler and replace

const search = await page.$('devsite-search > form > div.devsite-search-container');

with

const search = await page.$('aria/Open search[role="button"]');

to locate the search bar!

More generally, we believe that using such ARIA selectors can provide the following benefits to Puppeteer users:

  • Make selectors in test scripts more resilient to source code changes.
  • Make test scripts more readable (accessible names are semantic descriptors).
  • Motivate good practices for assigning accessibility properties to elements.

The rest of this article dives into the details on how we implemented the Puppetaria project.

The design process

Background

As motivated above, we want to enable querying elements by their accessible name and role. These are properties of the accessibility tree, a dual to the usual DOM tree, that is used by devices such as screen readers to show webpages.

From looking at the specification for computing the accessible name, it is clear that computing the name for an element is a non-trivial task, so from the beginning we decided that we wanted to reuse Chromium’s existing infrastructure for this.

How we approached implementing it

Even limiting ourselves to using Chromium’s accessibility tree, there are quite a few ways that we could implement ARIA querying in Puppeteer. To see why, let’s first see how Puppeteer controls the browser.

The browser exposes a debugging interface via a protocol called the Chrome DevTools Protocol (CDP). This exposes functionality such as "reload the page" or "execute this piece of JavaScript in the page and hand back the result" via a language-agnostic interface.

Both the DevTools front-end and Puppeteer are using CDP to talk to the browser. To implement CDP commands, there is DevTools infrastructure inside all components of Chrome: in the browser, in the renderer, and so on. CDP takes care of routing the commands to the right place.

Puppeteer actions such as querying, clicking, and evaluating expressions are performed by leveraging CDP commands such as Runtime.evaluate that evaluates JavaScript directly in the page context and hands back the result. Other Puppeteer actions such as emulating color vision deficiency, taking screenshots, or capturing traces use CDP to communicate directly with the Blink rendering process.

CDP

This already leaves us with two paths for implementing our querying functionality; we can:

  • Write our querying logic in JavaScript and have that injected into the page using Runtime.evaluate, or
  • Use a CDP endpoint that can access and query the accessibility tree directly in the Blink process.

We implemented 3 prototypes:

  • JS DOM traversal - based on injecting JavaScript into the page
  • Puppeteer AXTree traversal - based on using the existing CDP access to the accessibility tree
  • CDP DOM traversal - using a new CDP endpoint purpose-built for querying the accessibility tree

JS DOM traversal

This prototype does a full traversal of the DOM and uses element.computedName and element.computedRole, gated on the ComputedAccessibilityInfo launch flag, to retrieve the name and role for each element during the traversal.

Puppeteer AXTree traversal

Here, we instead retrieve the full accessibility tree through CDP and traverse it in Puppeteer. The resulting accessibility nodes are then mapped to DOM nodes.

CDP DOM traversal

For this prototype, we implemented a new CDP endpoint specifically for querying the accessibility tree. This way, the querying can happen on the back-end through a C++ implementation instead of in the page context via JavaScript.

Unit test benchmark

The following figure compares the total runtime of querying four elements 1000 times for the 3 prototypes. The benchmark was executed in 3 different configurations varying the page size and whether or not caching of accessibility elements was enabled.

Benchmark: Total runtime of querying four elements 1000 times

It is quite clear that there is a considerable performance gap between the CDP-backed querying mechanism and the two others implemented solely in Puppeteer, and the relative difference seems to increase dramatically with the page size. It is somewhat interesting to see that the JS DOM traversal prototype responds so well to enabling accessibility caching. With the Accessibility domain disabled, the accessibility tree is computed on demand and discarded after each interaction; enabling the domain makes Chromium cache the computed tree instead.

For the JS DOM traversal we ask for the accessible name and role for every element during the traversal, so if caching is disabled, Chromium computes and discards the accessibility tree for every element we visit. For the CDP based approaches, on the other hand, the tree is only discarded between each call to CDP, i.e. for every query. These approaches also benefit from enabling caching, as the accessibility tree is then persisted across CDP calls, but the performance boost is therefore comparatively smaller.

Even though enabling caching looks desirable here, it does come with a cost of additional memory usage. For Puppeteer scripts that, for example, record trace files, this could be problematic. We therefore decided not to enable accessibility tree caching by default. Users can turn on caching themselves by enabling the CDP Accessibility domain.

DevTools test suite benchmark

The previous benchmark showed that implementing our querying mechanism at the CDP layer gives a performance boost in a clinical unit-test scenario.

To see if the difference is pronounced enough to make it noticeable in a more realistic scenario of running a full test suite, we patched the DevTools end-to-end test suite to make use of the JavaScript and CDP-based prototypes and compared the runtimes. In this benchmark, we changed a total of 43 selectors from [aria-label=…] to a custom query handler aria/…, which we then implemented using each of the prototypes.

Some of the selectors are used multiple times in test scripts, so the actual number of executions of the aria query handler was 113 per run of the suite. The total number of query selections was 2253, so only a fraction of the query selections happened through the prototypes.

Benchmark: e2e test suite

As seen in the figure above, there is a discernible difference in the total runtime. The data is too noisy to conclude anything specific, but it is clear that the performance gap between the two prototypes shows in this scenario as well.

A new CDP endpoint

In light of the above benchmarks, and since the launch flag-based approach was undesirable in general, we decided to move forward with implementing a new CDP command for querying the accessibility tree. Now, we had to figure out the interface of this new endpoint.

For our use case in Puppeteer, we need the endpoint to take so-called RemoteObjectIds as argument and, to enable us to find the corresponding DOM elements afterwards, it should return a list of objects that contains the backendNodeIds for the DOM elements.
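
As a rough sketch of such an endpoint, here is what a call to the experimental Accessibility.queryAXTree CDP command looks like from a raw CDP session (the session setup is assumed; treat the exact shape as illustrative):

// Query the accessibility tree for nodes named "Submit" with role "button",
// starting from a given element identified by its RemoteObjectId.
const { nodes } = await client.send('Accessibility.queryAXTree', {
  objectId: elementRemoteObjectId,
  accessibleName: 'Submit',
  role: 'button',
});
// Each returned AXNode carries a backendDOMNodeId, which lets the caller
// resolve the matching DOM element afterwards.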

As seen in the chart below, we tried quite a few approaches satisfying this interface. From this, we found that the size of the returned objects, i.e. whether we returned full accessibility nodes or only the backendNodeIds, made no discernible difference. On the other hand, we found that using the existing NextInPreOrderIncludingIgnored was a poor choice for implementing the traversal logic here, as it yielded a noticeable slow-down.

Benchmark: Comparison of CDP-based AXTree traversal prototypes

Wrapping it all up

Now, with the CDP endpoint in place, we implemented the query handler on the Puppeteer side. The bulk of the work here was to restructure the query handling code so that queries can resolve directly through CDP instead of through JavaScript evaluated in the page context.

What’s next?

The new aria handler shipped with Puppeteer v5.4.0 as a built-in query handler. We are looking forward to seeing how users adopt it into their test scripts, and we cannot wait to hear your ideas on how we can make this even more useful!


Handling Heavy Ad Interventions


Ads that consume a disproportionate amount of resources on a device negatively impact the user’s experience—from the obvious effects of degrading performance to less visible consequences such as draining the battery or eating up bandwidth allowances. These ads range from the actively malicious, such as cryptocurrency miners, through to genuine content with inadvertent bugs or performance issues.

Chrome is experimenting with setting limits on the resources an ad may use and unloading that ad if the limits are exceeded. You can read the announcement on the Chromium blog for more details. The mechanism used for unloading ads is the Heavy Ad Intervention.

Heavy Ad criteria

An ad is considered heavy if the user has not interacted with it (for example, has not tapped or clicked it) and it meets any of the following criteria:

  • Uses the main thread for more than 60 seconds in total
  • Uses the main thread for more than 15 seconds in any 30 second window
  • Uses more than 4 megabytes of network bandwidth

All resources used by any descendant iframes of the ad frame count against the limits for intervening on that ad. It’s important to note that the main thread time limits are not the same as elapsed time since loading the ad. The limits are on how long the CPU takes to execute the ad's code.

Testing the intervention

You can test the new intervention in Chrome 84 and upwards.

  • Enable chrome://flags/#enable-heavy-ad-intervention
  • Disable chrome://flags/#heavy-ad-privacy-mitigations

Setting chrome://flags/#enable-heavy-ad-intervention to Enabled activates the new behavior, but by default there is some noise and variability added to the thresholds to protect user privacy. Setting chrome://flags/#heavy-ad-privacy-mitigations to Disabled prevents this, meaning the restrictions are applied deterministically, purely according to the limits. This should make debugging and testing easier.

Note: Earlier versions of Chrome include the #heavy-ad-privacy-mitigations-opt-out flag which should be set to Enabled for testing.

When the intervention is triggered you should see the content in the iframe for a heavy ad replaced with an Ad removed message. If you follow the included Details link, you will see a message explaining: "This ad uses too many resources for your device, so Chrome removed it."

You can see the intervention applied to sample content on heavy-ads.glitch.me. You can also use this test site to load an arbitrary URL as a quick way of testing your own content.

Be aware when testing that there are a number of reasons that may prevent an intervention being applied.

  • Reloading the same ad within the same page will exempt that combination from the intervention. Clearing your browsing history and opening the page in a new tab can help here.
  • Ensure the page remains in focus - backgrounding the page (switching to another window) will pause task queues for the page, and so will not trigger the CPU intervention.
  • Ensure you do not tap or click ad content while testing - the intervention will not be applied to content that receives any user interaction.

What do you need to do?

You show ads from a third-party provider on your site

No action needed, just be aware that users may see ads that exceed the limits removed when on your site.

You show first-party ads on your site or you provide ads for third-party display

Continue reading to ensure you implement the necessary monitoring via the Reporting API for Heavy Ad interventions.

You create ad content or you maintain a tool for creating ad content

Continue reading to ensure that you are aware of how to test your content for performance and resource usage issues. You should also refer to the guidance on the ad platforms of your choice as they may provide additional technical advice or restrictions, for example, see the Google Guidelines for display creatives. Consider building configurable thresholds directly into your authoring tools to prevent poor performing ads escaping into the wild.

What happens when an ad is removed?

An intervention in Chrome is reported via the aptly named Reporting API with an intervention report type. You can use the Reporting API to be notified about interventions either by a POST request to a reporting endpoint or within your JavaScript.

These reports are triggered on the root ad-tagged iframe along with all of its descendants, i.e. every frame unloaded by the intervention. This means that if an ad comes from a third-party source, i.e. a cross-site iframe, then it’s up to that third-party (for example, the ad provider) to handle the report.

To configure the page for HTTP reports, the response should include the Report-To header:

Report-To: { "url": "https://example.com/reports", "max_age": 86400 }

The POST request triggered will include a report like this:

POST /reports HTTP/1.1
Host: example.com
…
Content-Type: application/report

[{
 "type": "intervention",
 "age": 60,
 "url": "https://example.com/url/of/ad.html",
 "body": {
   "sourceFile": null,
   "lineNumber": null,
   "columnNumber": null,
   "id": "HeavyAdIntervention",
   "message": "Ad was removed because its CPU usage exceeded the limit. See https://www.chromestatus.com/feature/4800491902992384"
 }
}]

Note: The null values are expected. The intervention will trigger when the limits are reached, but that particular point in the code is not necessarily the problem.

The JavaScript API provides the ReportingObserver with an observe() method that can be used to trigger a provided callback on interventions. This can be useful if you want to attach additional information to the report to aid in debugging.

// callback that will handle intervention reports
function sendReports(reports) {
  for (const report of reports) {
    // Serialize the report and send it to your own reporting endpoint
    navigator.sendBeacon(
      'https://report.example/your-endpoint',
      JSON.stringify(report)
    );
  }
}

// create the observer with the callback
const observer = new ReportingObserver(
  (reports, observer) => {
    sendReports(reports);
  },
  { buffered: true }
);

// start watching for interventions
observer.observe();

However, because the intervention will literally remove the page from the iframe, you should add a failsafe to ensure that the report is captured before the page, for example an ad within an iframe, is gone completely. For this, you can hook the same callback into the pagehide event.

window.addEventListener('pagehide', (event) => {
  // pull all pending reports from the queue
  let reports = observer.takeRecords();
  sendReports(reports);
});

Remember that, to protect the user experience, the pagehide event restricts the amount of work that can happen within it. For example, trying to send a fetch() request with the reports will result in that request being canceled. You should use navigator.sendBeacon() to send that report and even then, this is only best-effort by the browser not a guarantee.

Caution: Do not use the unload and beforeunload events here. This will actively hurt your page caching and performance across multiple browsers.

The resulting JSON from the JavaScript is similar to that sent on the POST request:

[
  {
    type: 'intervention',
    url: 'https://example.com/url/of/ad.html',
    body: {
      sourceFile: null,
      lineNumber: null,
      columnNumber: null,
      id: 'HeavyAdIntervention',
      message:
        'Ad was removed because its network usage exceeded the limit. See https://www.chromestatus.com/feature/4800491902992384',
    },
  },
];

Diagnosing the cause of an intervention

Ad content is just web content, so make use of tools like Lighthouse to audit the overall performance of your content. The resulting audits provide inline guidance on improvements. You can also refer to the web.dev/fast collection.

You may find it helpful to test your ad in a more isolated context. You can use the custom URL option on https://heavy-ads.glitch.me to test this with a ready-made, ad-tagged iframe. You can use Chrome DevTools to validate that content has been tagged as an ad. In the Rendering panel (accessible via the three-dot menu, then More Tools > Rendering) select "Highlight Ad Frames". If you test content in the top-level window or another context where it is not tagged as an ad, the intervention will not be triggered, but you can still manually check against the thresholds.

Network usage

Bring up the Network panel in Chrome DevTools to see the overall network activity for the ad. You will want to ensure the "Disable cache" option is checked to get consistent results over repeated loads.

Network panel in DevTools.

The transferred value at the bottom of the page will show you the amount transferred for the entire page. Consider using the Filter input at the top to restrict the requests just to the ones related to the ad.

If you find the initial request for the ad, for example, the source for the iframe, you can also use the Initiator tab within the request to see all of the requests it triggers.

Initiator tab for a request.

Sorting the overall list of requests by size is a good way to spot overly large resources. Common culprits include images and videos that have not been optimized.

Sort requests by response size.

Additionally, sorting by name can be a good way to spot repeated requests. It may not be a single large resource triggering the intervention, but a large number of repeated requests that incrementally go over the limit.

CPU usage

The Performance panel in DevTools will help diagnose CPU usage issues. The first step is to open up the Capture Settings menu. Use the CPU dropdown to slow down the CPU as much as possible. The interventions for CPU are far more likely to trigger on lower-powered devices than high-end development machines.

Enable network and CPU throttling in the Performance panel.

Next, click the Record button to begin recording activity. You may want to experiment with when and how long you record for, as a long trace can take quite a while to load. Once the recording is loaded you can use the top timeline to select a portion of the recording. Focus on areas on the graph in solid yellow, purple, or green that represent scripting, rendering, and painting.

Summary of a trace in the Performance panel.

Explore the Bottom-Up, Call Tree, and Event Log tabs at the bottom. Sorting those columns by Self Time and Total Time can help identify bottlenecks in the code.

Sort by Self Time in the Bottom-Up tab.

The associated source file is also linked there, so you can follow it through to the Sources panel to examine the cost of each line.

Execution time shown in the Sources panel.

Note: DevTools may not always display the timing information if the frame has already been unloaded, so you may want to capture the traces with the ad isolated or with the intervention disabled.

Common issues to look for here are poorly optimized animations that are triggering continuous layout and paint or costly operations that are hidden within an included library.

How to report incorrect interventions

Chrome tags content as an ad by matching resource requests against a filter list. If non-ad content has been tagged, consider changing that code to avoid matching the filtering rules. If you suspect an intervention has been incorrectly applied, then you can raise an issue via this template. Please ensure you have captured an example of the intervention report and have a sample URL to reproduce the issue.

Deprecations and removals in Chrome 84

Note: Chrome expects to start the spec-mandated turn down of AppCache in Chrome 85. For details and instructions for managing the transition gracefully, see Preparing for AppCache removal. For information on a feature that will help you identify uses of this and other deprecated APIs, see Know your code health.


@import rules in CSSStyleSheet.replace() removed

The original spec for constructable stylesheets allowed for calls to:

sheet.replace("@import('some.css');")

This use case is being removed. Calls to replace() now throw an exception if @import rules are found in the replaced content.
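
A minimal sketch of the affected pattern, and one possible workaround of fetching the CSS text yourself before calling replace(), might look like this:

const sheet = new CSSStyleSheet();

// Now fails: @import rules are rejected in replace().
sheet.replace("@import url('some.css');").catch((err) => console.error(err));

// Workaround sketch: fetch the stylesheet text and pass plain rules instead.
fetch('some.css')
  .then((response) => response.text())
  .then((css) => sheet.replace(css))
  .then(() => {
    document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];
  });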

Intent to Remove | Chrome Platform Status | Chromium Bug

Remove TLS 1.0 and TLS 1.1

TLS (Transport Layer Security) is the protocol which secures HTTPS. It has a long history stretching back to the nearly twenty-year-old TLS 1.0 and its even older predecessor, SSL. Both TLS 1.0 and 1.1 have a number of weaknesses.

  • TLS 1.0 and 1.1 use MD5 and SHA-1, both weak hashes, in the transcript hash for the Finished message.
  • TLS 1.0 and 1.1 use MD5 and SHA-1 in the server signature. (Note: this is not the signature in the certificate.)
  • TLS 1.0 and 1.1 only support RC4 and CBC ciphers. RC4 is broken and has since been removed. TLS’s CBC mode construction is flawed and is vulnerable to attacks.
  • TLS 1.0’s CBC ciphers additionally construct their initialization vectors incorrectly.
  • TLS 1.0 is no longer PCI-DSS compliant.

Supporting TLS 1.2 is a prerequisite to avoiding the above problems. The TLS working group has deprecated TLS 1.0 and 1.1. Chrome has now also deprecated these protocols.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Feedback

Using Custom Tabs with Android 11


Android 11 introduced changes on how apps can interact with other apps that the user has installed on the device. You can read more about those changes on Android documentation.

When an Android app using Custom Tabs targets SDK level 30 or above some changes may be necessary. This article goes over the changes that may be needed for those apps.

In the simplest case, Custom Tabs can be launched with a one-liner like so:

new CustomTabsIntent.Builder().build()
        .launchUrl(this, Uri.parse("https://www.example.com"));

Applications launching Custom Tabs with this approach, or even adding UI customizations like changing the toolbar color or adding an action button, won't need to make any changes.

Preferring Native Apps

But if you followed the best practices, some changes may be required.

The first relevant best practice is that applications should prefer a native app to handle the intent, instead of a Custom Tab, when an installed app is capable of handling it.

On Android 11 and above

Android 11 introduces a new Intent flag, FLAG_ACTIVITY_REQUIRE_NON_BROWSER, which is the recommended way to try opening a native app, as it doesn’t require the app to declare any package manager queries.

static boolean launchNativeApi30(Context context, Uri uri) {
    Intent nativeAppIntent = new Intent(Intent.ACTION_VIEW, uri)
            .addCategory(Intent.CATEGORY_BROWSABLE)
            .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK |
                    Intent.FLAG_ACTIVITY_REQUIRE_NON_BROWSER);
    try {
        context.startActivity(nativeAppIntent);
        return true;
    } catch (ActivityNotFoundException ex) {
        return false;
    }
}

The solution is to try to launch the Intent and use FLAG_ACTIVITY_REQUIRE_NON_BROWSER to ask Android to avoid browsers when launching.

If a native app that is capable of handling this Intent is not found, an ActivityNotFoundException will be thrown.

Before Android 11

Even though the application may target Android 11, or API level 30, previous Android versions will not understand the FLAG_ACTIVITY_REQUIRE_NON_BROWSER flag, so we need to resort to querying the Package Manager in those cases:

private static boolean launchNativeBeforeApi30(Context context, Uri uri) {
    PackageManager pm = context.getPackageManager();

    // Get all Apps that resolve a generic url
    Intent browserActivityIntent = new Intent()
            .setAction(Intent.ACTION_VIEW)
            .addCategory(Intent.CATEGORY_BROWSABLE)
            .setData(Uri.fromParts("http", "", null));
    Set<String> genericResolvedList = extractPackageNames(
            pm.queryIntentActivities(browserActivityIntent, 0));

    // Get all apps that resolve the specific Url
    Intent specializedActivityIntent = new Intent(Intent.ACTION_VIEW, uri)
            .addCategory(Intent.CATEGORY_BROWSABLE);
    Set<String> resolvedSpecializedList = extractPackageNames(
            pm.queryIntentActivities(specializedActivityIntent, 0));

    // Keep only the apps that resolve the specific URL but not the
    // generic one.
    resolvedSpecializedList.removeAll(genericResolvedList);

    // If the list is empty, no native app handlers were found.
    if (resolvedSpecializedList.isEmpty()) {
        return false;
    }

    // We found native handlers. Launch the Intent.
    specializedActivityIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    context.startActivity(specializedActivityIntent);
    return true;
}

The approach used here is to query the Package Manager for applications that support a generic http intent. Those applications are likely browsers.

Then, query for applications that handle intents for the specific URL we want to launch. This will return both browsers and native applications set up to handle that URL.

Now, remove all browsers found on the first list from the second list, and we’ll be left only with native apps.

If the list is empty, we know there are no native handlers and return false. Otherwise, we launch the intent for the native handler.

Putting it all together

We need to ensure using the right method for each occasion:

static void launchUri(Context context, Uri uri) {
    boolean launched = Build.VERSION.SDK_INT >= 30 ?
            launchNativeApi30(context, uri) :
            launchNativeBeforeApi30(context, uri);

    if (!launched) {
        new CustomTabsIntent.Builder()
                .build()
                .launchUrl(context, uri);
    }
}

Build.VERSION.SDK_INT provides the information we need. If it's equal to or greater than 30, Android knows the FLAG_ACTIVITY_REQUIRE_NON_BROWSER flag and we can try launching a native app with the new approach. Otherwise, we try launching with the old approach.

If launching a native app fails, we then launch a Custom Tab.

There’s some boilerplate involved in this best practice. We’re working on making this simpler by encapsulating the complexity in a library. Stay tuned for updates to the android-browser-helper support library.

Detecting browsers that support Custom Tabs

Another common pattern is to use the PackageManager to detect which browsers support Custom Tabs on the device. Common use-cases for this are setting the package on the Intent to avoid the app disambiguation dialog or choosing which browser to connect to when connecting to the Custom Tabs service.

When targeting API level 30, developers will need to add a queries section to their Android Manifest, declaring an intent-filter that matches browsers with Custom Tabs support.

<queries>
    <intent>
        <action android:name=
            "android.support.customtabs.action.CustomTabsService" />
    </intent>
</queries>

With the markup in place, the existing code used to query for browsers that support Custom Tabs will work as expected.

Frequently Asked Questions

Q: The code that looks for Custom Tabs providers queries for applications that can handle https:// intents, but the query filter only declares an android.support.customtabs.action.CustomTabsService query. Shouldn’t a query for https:// intents be declared?

A: When declaring a query filter, it will filter the responses to a query to the PackageManager, not the query itself. Since browsers that support Custom Tabs declare handling the CustomTabsService, they won’t be filtered out. Browsers that don’t support Custom Tabs will be filtered out.

Conclusion

Those are all the changes required to adapt an existing Custom Tabs integration to work with Android 11. To learn more about integrating Custom Tabs into an Android app, start with the implementation guide then check out the best practices to learn about building a first-class integration.

Let us know if you have any questions or feedback!

Deprecations and removals in Chrome 85


AppCache Removal Begins

Chrome 85 starts a spec-mandated turn down of AppCache in Chrome. For details and instructions for managing the transition gracefully, see Preparing for AppCache removal. For information on a feature that will help you identify uses of this and other deprecated APIs, see Know your code health

Intent to Remove | Chrome Platform Status | Chromium Bug

Reject insecure SameSite=None cookies

Use of cookies with SameSite set to None without the Secure attribute is no longer supported. Any cookie that requests SameSite=None but is not marked Secure will be rejected. This feature started rolling out to users of Stable Chrome on July 14, 2020. See SameSite Updates for a full timeline and details. Cookies delivered over plaintext channels may be cataloged or modified by network attackers. Requiring secure transport for cookies intended for cross-site usage reduces this risk.
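
As a quick illustration (the cookie name is an example), a cookie intended for cross-site use must now carry both attributes, whether it is set from JavaScript or via a Set-Cookie header:

// Accepted: cross-site usage explicitly requested, delivered over HTTPS only.
document.cookie = 'widget_session=abc123; SameSite=None; Secure';

// Rejected from Chrome 85: SameSite=None without Secure.
document.cookie = 'widget_session=abc123; SameSite=None';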

Intent to Remove | Chrome Platform Status | Chromium Bug

-webkit-box quirks from -webkit-line-clamp

Intent to Remove | Chrome Platform Status | Chromium Bug

Feedback


A new default Referrer-Policy for Chrome: strict-origin-when-cross-origin


Before we start:

  • If you're unsure of the difference between "site" and "origin", check out Understanding "same-site" and "same-origin".
  • The Referer header is missing an R, due to an original misspelling in the spec. The Referrer-Policy header and referrer in JavaScript and the DOM are spelled correctly.

Summary

  • Browsers are evolving towards privacy-enhancing default referrer policies, to provide a good fallback when a website has no policy set.
  • Chrome plans to gradually enable strict-origin-when-cross-origin as the default policy in 85; this may impact use cases relying on the referrer value from another origin.
  • This is the new default, but websites can still pick a policy of their choice.
  • To try out the change in Chrome, enable the flag at chrome://flags/#reduced-referrer-granularity. You can also check out this demo to see the change in action.
  • Beyond the referrer policy, the way browsers deal with referrers might change—so keep an eye on it.

What's changing and why?

HTTP requests may include the optional Referer header, which indicates the origin or web page URL the request was made from. The Referrer-Policy header defines what data is made available in the Referer header, and (for navigations and iframes) in the destination's document.referrer.

Exactly what information is sent in the Referer header in a request from your site is determined by the Referrer-Policy header you set.

Diagram: Referrer-Policy and the Referer header sent in a request.

When no policy is set, the browser's default is used. Websites often defer to the browser’s default.

For navigations and iframes, the data present in the Referer header can also be accessed via JavaScript using document.referrer.

Up until recently, no-referrer-when-downgrade has been a widespread default policy across browsers. But now many browsers are in some stage of moving to more privacy-enhancing defaults.

Chrome plans to switch its default policy from no-referrer-when-downgrade to strict-origin-when-cross-origin, starting in version 85.

This means that if no policy is set for your website, Chrome will use strict-origin-when-cross-origin by default. Note that you can still set a policy of your choice; this change will only have an effect on websites that have no policy set.

Note: this step to help reduce silent cross-site user tracking is part of a larger initiative: the Privacy Sandbox. Check Digging into the Privacy Sandbox for more details.

What does this change mean?

strict-origin-when-cross-origin offers more privacy. With this policy, only the origin is sent in the Referer header of cross-origin requests.

This prevents leaks of private data that may be accessible from other parts of the full URL such as the path and query string.

Diagram: Referer (and document.referrer) sent for a cross-origin request, depending on the policy.

For example:

Cross-origin request, sent from https://site-one.example/stuff/detail?tag=red to https://site-two.example/…: with strict-origin-when-cross-origin, only the origin https://site-one.example/ is sent in the Referer header (and exposed in document.referrer).

What stays the same?

  • Like no-referrer-when-downgrade, strict-origin-when-cross-origin is secure: no referrer (Referer header and document.referrer) is present when the request is made from an HTTPS origin (secure) to an HTTP one (insecure). This way, if your website uses HTTPS (if not, make it a priority), your website's URLs won't leak in non-HTTPS requests, because anyone on the network can see these and it would expose your users to man-in-the-middle attacks.
  • Within the same origin, the Referer header value is the full URL.

For example: Same-origin request, sent from https://site-one.example/stuff/detail?tag=red to https://site-one.example/…: the full URL https://site-one.example/stuff/detail?tag=red is sent in the Referer header (and exposed in document.referrer).

What's the impact?

Based on discussions with other browsers and Chrome's own experimentation run in Chrome 84, user-visible breakage is expected to be limited.

Server-side logging or analytics that rely on the full referrer URL being available are likely to be impacted by reduced granularity in that information.

What do you need to do?

Chrome plans to start rolling out the new default referrer policy in 85 (July 2020 for beta, August 2020 for stable). See status in the Chrome status entry.

Understand and detect the change

To understand what the new default changes in practice, you can check out this demo.

You can also use this demo to detect what policy is applied in the Chrome instance you are running.

Test the change, and figure out if this will impact your site

You can already try out the change starting from Chrome 81: visit chrome://flags/#reduced-referrer-granularity in Chrome and enable the flag. When this flag is enabled, all websites without a policy will use the new strict-origin-when-cross-origin default.

Chrome screenshot: enabling the chrome://flags/#reduced-referrer-granularity flag.

You can now check how your website and backend behave.

Another thing to do to detect impact is to check if your website's codebase uses the referrer—either via the Referer header of incoming requests on the server, or from document.referrer in JavaScript.

Certain features on your site might break or behave differently if you're using the referrer of requests from another origin to your site (more specifically the path and/or query string) AND this origin uses the browser's default referrer policy (i.e. it has no policy set).
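As a quick, hand-rolled check (a sketch, not an official tool), you can log what a page actually receives and compare it against what your code expects:

// Log the referrer this page receives; values below are illustrative.
// Under strict-origin-when-cross-origin, a cross-origin navigation exposes only
// the origin, e.g. "https://site-one.example/", never the path or query string.
console.log('document.referrer:', document.referrer || '(empty)');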

If this impacts your site, consider alternatives

If you're using the referrer to access the full path or query string for requests to your site, you have a few options:

  • Use alternative techniques and headers such as Origin and Sec-Fetch-Site for your CSRF protection, logging, and other use cases (a minimal sketch follows below). Check out Referer and Referrer-Policy: best practices.
  • You can align with partners on a specific policy if this is needed and transparent to your users. Access control, where the referrer is used by websites to grant other origins access to specific resources, might be such a case, although with Chrome's change the origin will still be shared in the Referer header (and in document.referrer).

Note that most browsers are moving in a similar direction when it comes to the referrer (see browser defaults and their evolutions in Referer and Referrer-Policy: best practices).
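For example, a minimal Node.js sketch of a Sec-Fetch-Site based check (the route and response text are hypothetical) could look like this:

const http = require('http');

http.createServer((req, res) => {
  // Possible values: 'same-origin', 'same-site', 'cross-site', or 'none'.
  const fetchSite = req.headers['sec-fetch-site'];

  // Reject cross-site POSTs instead of inspecting the Referer header.
  if (req.method === 'POST' && fetchSite === 'cross-site') {
    res.statusCode = 403;
    res.end('Cross-site POST rejected');
    return;
  }

  res.end('ok');
}).listen(8080);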

Implement an explicit, privacy-enhancing policy across your site

What Referer should be sent in requests originated by your website, i.e. what policy should you set for your site?

Even with Chrome's change in mind, it's a good idea to set an explicit, privacy-enhancing policy like strict-origin-when-cross-origin or stricter right now.

This protects your users and makes your website behave more predictably across browsers. Most importantly, it gives you control, rather than having your site depend on browser defaults.

Check Referer and Referrer-Policy: best practices for details on setting a policy.
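As a rough illustration (a sketch assuming a plain Node.js server; the markup and port are hypothetical), setting the policy is a single response header. A <meta name="referrer" content="strict-origin-when-cross-origin"> tag in the document head works too if you can't easily set headers.

const http = require('http');

http.createServer((req, res) => {
  // Send an explicit policy instead of relying on the browser default.
  res.setHeader('Referrer-Policy', 'strict-origin-when-cross-origin');
  res.setHeader('Content-Type', 'text/html');
  res.end('<a href="https://site-two.example/">cross-origin link</a>');
}).listen(8080);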

About Chrome enterprise

The Chrome enterprise policy ForceLegacyDefaultReferrerPolicy is available to IT administrators who would like to force the previous default referrer policy of no-referrer-when-downgrade in enterprise environments. This allows enterprises additional time to test and update their applications.

This policy will be removed in Chrome 88.

Send feedback

Do you have feedback to share or something to report? Share feedback on Chrome's intent to ship, or tweet your questions at @maudnals.

With many thanks for contributions and feedback to all reviewers - especially Kaustubha Govind, David Van Cleve, Mike West, Sam Dutton, Rowan Merewood, Jxck and Kayce Basques.

Resources

Deprecations and removals in Chrome 86


Remove WebComponents v0

Web Components v0 was removed from desktop and Android in Chrome 80. Chromium 86 removes them from WebView. This removal includes Custom Elements v0, Shadow DOM v0, and HTML Imports.

Deprecate FTP support

Chrome is deprecating and removing support for FTP URLs. The current FTP implementation in Google Chrome has no support for encrypted connections (FTPS), nor proxies. Usage of FTP in the browser is sufficiently low that it is no longer viable to invest in improving the existing FTP client. In addition, more capable FTP clients are available on all affected platforms.

Google Chrome 72 and later removed support for fetching document subresources over FTP and rendering of top level FTP resources. Currently navigating to FTP URLs results in showing a directory listing or a download depending on the type of resource. A bug in Google Chrome 74 and later resulted in dropping support for accessing FTP URLs over HTTP proxies. Proxy support for FTP was removed entirely in Google Chrome 76.

The remaining capabilities of Google Chrome’s FTP implementation are restricted to either displaying a directory listing or downloading a resource over unencrypted connections.

Deprecation of support will follow this timeline:

Chrome 86

FTP is still enabled by default for most users, but turned off for pre-release channels (Canary and Beta) and will be experimentally turned off for one percent of stable users. In this version you can re-enable it from the command line using either the --enable-ftp command line flag or the --enable-features=FtpProtocol flag.

Chrome 87

FTP support will be disabled by default for fifty percent of users but can be enabled using the flags listed above.

Chrome 88

FTP support will be disabled.

Feedback

Gaining security and privacy by partitioning the cache


In general, caching can improve performance by storing data so future requests for the same data are served faster. For example, a cached resource from the network can avoid a round trip to the server. A cached computational result can omit the time to do the same calculation.

In Chrome, the cache mechanism is used in various ways and HTTP Cache is one example.

How Chrome's HTTP Cache currently works

As of version 85, Chrome caches resources fetched from the network, using their respective resource URLs as the cache key. (A cache key is used to identify a cached resource.)

The following example illustrates how a single image is cached and treated in three different contexts:

Cache Key: { https://x.example/doge.png }

A user visits a page (https://a.example) that requests an image (https://x.example/doge.png). The image is requested from the network and cached using https://x.example/doge.png as the key.

Cache Key: { https://x.example/doge.png }

The same user visits another page (https://b.example), which requests the same image (https://x.example/doge.png).
The browser checks its HTTP Cache to see if it already has this resource cached, using the image URL as the key. The browser finds a match in its Cache, so it uses the cached version of the resource.

Cache Key: { https://x.example/doge.png }

It doesn't matter if the image is loaded from within an iframe. If the user visits another website (https://c.example) with an iframe (https://d.example) and the iframe requests the same image (https://x.example/doge.png), the browser can still load the image from its cache because the cache key is the same across all of the pages.

This mechanism has been working well from a performance perspective for a long time. However, the time a website takes to respond to HTTP requests can reveal that the browser has accessed the same resource in the past, which opens the browser to security and privacy attacks, like the following:

  • Detect if a user has visited a specific site: An adversary can detect a user's browsing history by checking if the cache has a resource which might be specific to a particular site or cohort of sites.
  • Cross-site search attack: An adversary can detect if an arbitrary string is in the user's search results by checking whether a 'no search results' image used by a particular website is in the browser's cache.
  • Cross-site tracking: The cache can be used to store cookie-like identifiers as a cross-site tracking mechanism.

To mitigate these risks, Chrome will partition its HTTP cache starting in Chrome 86.

How will cache partitioning affect Chrome's HTTP Cache?

With cache partitioning, cached resources will be keyed using a new "Network Isolation Key" in addition to the resource URL. The Network Isolation Key is composed of the top-level site and the current-frame site.

Note: The "site" is recognized using "scheme://eTLD+1" so if requests are from different pages, but they have the same scheme and effective top-level domain+1 they will use the same cache partition. To learn more about this, read Understanding "same-site" and "same-origin".

Look again at the previous example to see how cache partitioning works in different contexts:

Cache Key: { https://a.example, https://a.example, https://x.example/doge.png }

A user visits a page (https://a.example) which requests an image (https://x.example/doge.png). In this case, the image is requested from the network and cached using a tuple consisting of https://a.example (the top-level site), https://a.example (the current-frame site), and https://x.example/doge.png (the resource URL) as the key. (Note that when the resource request is from the top-level frame, the top-level site and current-frame site in the Network Isolation Key are the same.)

Cache Key: { https://b.example, https://b.example, https://x.example/doge.png }

The same user visits a different page (https://b.example) which requests the same image (https://x.example/doge.png). Though the same image was loaded in the previous example, since the key doesn't match it will not be a cache hit.

The image is requested from the network and cached using a tuple consisting of https://b.example, https://b.example, and https://x.example/doge.png as the key.

Cache Key: { https://a.example, https://a.example, https://x.example/doge.png }

Now the user comes back to https://a.example but this time the image (https://x.example/doge.png) is embedded in an iframe. In this case, the key is a tuple containing https://a.example, https://a.example, and https://x.example/doge.png and a cache hit occurs. (Note that when the top-level site and the iframe are the same site, the resource cached with the top-level frame can be used.)

Cache Key: { https://a.example, https://c.example, https://x.example/doge.png }

The user is back at https://a.example but this time the image is hosted in an iframe from https://c.example.

In this case, the image is downloaded from the network because there is no resource in the cache that matches the key consisting of https://a.example, https://c.example, and https://x.example/doge.png.

Cache Key: { https://a.example, https://c.example, https://x.example/doge.png }

What if the domain contains a subdomain or a port number? The user visits https://subdomain.a.example, which embeds an iframe (https://c.example:8080), which requests the image.

Because the key is created based on "scheme://eTLD+1", subdomains and port numbers are ignored. Hence a cache hit occurs.

Cache Key: { https://a.example, https://c.example, https://x.example/doge.png }

What if the iframe is nested multiple times? The user visits https://a.example, which embeds an iframe (https://b.example), which embeds yet another iframe (https://c.example), which finally requests the image.

Because the key is taken from the top-level frame (https://a.example) and the immediate frame which loads the resource (https://c.example), a cache hit occurs.

FAQs

Is it already enabled on my Chrome? How can I check?

The feature is being rolled out through late 2020. To check whether your Chrome instance already supports it:

  1. Open chrome://net-export/ and press Start Logging to Disk.
  2. Specify where to save the log file on your computer.
  3. Browse the web on Chrome for a minute.
  4. Go back to chrome://net-export/ and press Stop Logging.
  5. Go to https://netlog-viewer.appspot.com/#import.
  6. Press Choose File and pass the log file you saved.

You will see the output of the log file.

On the same page, find SplitCacheByNetworkIsolationKey. If it is followed by Experiment_[****], HTTP Cache partitioning is enabled on your Chrome. If it is followed by Control_[****] or Default_[****], it is not enabled.

How can I test HTTP Cache partitioning on my Chrome?

To test HTTP Cache partitioning on your Chrome, you need to launch Chrome with a command line flag: --enable-features=SplitCacheByNetworkIsolationKey. Follow the instructions at Run Chromium with flags to learn how to launch Chrome with a command line flag on your platform.

As a web developer, is there any action I should take in response to this change?

This is not a breaking change, but it may impose performance considerations for some web services.

For example, those that serve large volumes of highly cacheable resources across many sites (such as fonts and popular scripts) may see an increase in their traffic. Also, those who consume such services may have an increased reliance on them.

(There's a proposal to enable shared libraries in a privacy-preserving way called Web Shared Libraries (presentation video), but it's still under consideration.)

What is the impact of this behavioral change?

The overall cache miss rate increases by about 3.6%, changes to the FCP (First Contentful Paint) are modest (~0.3%), and the overall fraction of bytes loaded from the network increases by around 4%. You can learn more about the impact on performance in the HTTP cache partitioning explainer.

Is this standardized? Do other browsers behave differently?

"HTTP cache partitions" is standardized in the fetch spec though browsers behave differently:

  • Chrome: Uses top-level scheme://eTLD+1 and frame scheme://eTLD+1
  • Safari: Uses top-level eTLD+1
  • Firefox: Planning to implement with top-level scheme://eTLD+1 and considering including a second key like Chrome

How is fetch from workers treated?

Dedicated workers use the same key as their current frame. Service workers and shared workers are more complicated since they may be shared among multiple top-level sites. The solution for them is currently under discussion.

Resources

Feedback

Deprecations and removals in Chrome 87


Chrome 87 beta was released on October 15, 2020 and stable was released on November 17, 2020.

Comma separator in iframe allow attribute

Permissions policy declarations in an <iframe> tag can no longer use commas as a separator between items. Developers should use semicolons instead.
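For example, a minimal sketch using the DOM (the embed URL and features are hypothetical):

const frame = document.createElement('iframe');
frame.src = 'https://embed.example/player';

// No longer supported as a separator: frame.allow = 'camera, microphone';
// Use semicolons between items instead:
frame.allow = 'camera; microphone';

document.body.appendChild(frame);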

-webkit-font-size-delta

Blink will no longer support the rarely-used -webkit-font-size-delta property. Developers should use font-size to control font size instead.

Deprecate FTP support

Chrome is deprecating and removing support for FTP URLs. The current FTP implementation in Google Chrome has no support for encrypted connections (FTPS), nor proxies. Usage of FTP in the browser is sufficiently low that it is no longer viable to invest in improving the existing FTP client. In addition, more capable FTP clients are available on all affected platforms.

Google Chrome 72 and later removed support for fetching document subresources over FTP and rendering of top level FTP resources. Currently navigating to FTP URLs results in showing a directory listing or a download depending on the type of resource. A bug in Google Chrome 74 and later resulted in dropping support for accessing FTP URLs over HTTP proxies. Proxy support for FTP was removed entirely in Google Chrome 76. In Chrome 86, FTP was turned off for pre-release channels (Canary and Beta) and was experimentally turned off for one percent of stable users.

The remaining capabilities of Google Chrome’s FTP implementation are restricted to either displaying a directory listing or downloading a resource over unencrypted connections.

The remainder of the deprecation follows this timeline:

Chrome 87

FTP support will be disabled by default for fifty percent of users but can be enabled using the --enable-ftp command line flag or the --enable-features=FtpProtocol flag.

Chrome 88

FTP support will be disabled.

Feedback

Augmented reality with model-viewer


In February, we introduced the <model-viewer> web component, which lets you declaratively add a 3D model to a web page, while hosting the model on your own site. One thing it didn't support was augmented reality. That is, you could not render the component's source image on top of a device's camera feed.

To do that, we've since added support for Magic Leap, and Quick Look on iOS. Now we're announcing support for AR on Android with the addition of the ar attribute. This attribute is built on a new ARCore feature called Scene Viewer, an external app for viewing 3D models. To learn more about Scene Viewer, check out Viewing 3D models in AR from an Android browser.

Mars Rover

Let's see how to do augmented reality with <model-viewer>.

The attribute

A web component, as you may recall, requires no special knowledge to use. It behaves like a standard HTML element, having a unique tag as well as properties and methods. After installing it with a <script> tag, use it like any other HTML element.

<model-viewer alt="A 3D model of an astronaut." src="Astronaut.gltf" ios-src="Astronaut.usdz" magic-leap ar></model-viewer>

This looks much the same as what I had in my earlier article. Notice the ar attribute at the very end. That's the new attribute.

Installing the new version

If you're using <model-viewer> already, you're probably importing the component using the <script> tags exactly as shown in the documentation. We're continually making improvements. If you want to test new features before deliberately upgrading and deploying, you'll want to install a specific version of <model-viewer>. To do this, add the version number to the file URLs as shown below. Then, watch the releases page for updates.

<script type="module"
  src="https://unpkg.com/@google/model-viewer@0.3.1/dist/model-viewer.js">
</script>

<script nomodule
  src="https://unpkg.com/@google/model-viewer@0.3.1/dist/model-viewer-legacy.js">
</script>

Conclusion

Give the new version of <model-viewer> a try and let us know what you think. Issues and feedback are welcome on GitHub.

In Chrome 76 you can hide the Add to Home screen mini-infobar


Background on Progressive Web Apps and the mini-infobar

Progressive Web Apps (PWA) are a pattern for creating app-like, instant loading, reliable and installable websites.

Example of the Add to Home screen mini-infobar for AirHorner

When your PWA passes the installability checklist on Android, a Chrome system dialog called the mini-infobar will appear at the bottom of the browser window.

Today the Add to Home screen mini-infobar is shown at the same time as the beforeinstallprompt event.

Changes in Chrome 76

Note: Chrome 76 went to stable in July 2019.

We’ve been listening to our community and what we heard was that developers want more control over when to ask users to install a PWA. We heard you!

Starting in Chrome 76, you can prevent the mini-infobar by calling preventDefault() on the beforeinstallprompt event.

The beforeinstallprompt event can help you promote the installation of your progressive web app, making it easy for users to add it to their home screen. Our community has shared that users who install a PWA to the home screen are highly engaged, with more repeat visits, more time spent in the app and, when applicable, higher conversion rates.

Example of Pinterest using an install banner to promote the installability of their PWA.

See Add to Home Screen for complete details on the add to home screen flow, including code samples, and other best practices.

To promote your web app without the mini-infobar, listen for the beforeinstallprompt event, then, save the event. Next, update your user interface to indicate your PWA can be installed, for example by adding an install button, showing an install banner, using in-feed promotions, or a menu option. When the user clicks on the install element, call prompt() on the saved beforeinstallprompt event to show the add to home screen modal dialog.
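Here's a minimal sketch of that flow (the #install-button element is hypothetical):

let deferredPrompt;

window.addEventListener('beforeinstallprompt', (event) => {
  // Prevent Chrome 76+ from showing the mini-infobar.
  event.preventDefault();
  // Save the event so it can be triggered later, and show your own install UI.
  deferredPrompt = event;
  document.querySelector('#install-button').hidden = false;
});

document.querySelector('#install-button').addEventListener('click', async () => {
  if (!deferredPrompt) return;
  // Show the add to home screen dialog.
  deferredPrompt.prompt();
  // Optionally, inspect the user's choice.
  const { outcome } = await deferredPrompt.userChoice;
  console.log('User response to the install prompt:', outcome);
  deferredPrompt = null;
});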

Future of the Add to Home screen mini-infobar

The use of the add to home screen infobar is still a temporary measure. We’re experimenting with new UI patterns that will continue to give Progressive Web App users the ability to install, and do this in a way that reduces clutter in the browsing experience.

How do I notify users that my PWA is installable?


If your PWA has use cases where it’s helpful for a user to install your app, for example if you have users who use your app more than once a week, you should be promoting the installation of your PWA within the web UI of your app.

But how should you notify the user that your PWA can be installed?

Check out Patterns for Promoting PWA Installation, a series of recommended patterns and best practices that you can use to promote the installation of your Progressive Web App.

It includes patterns for notifying users within the core UI of your app, within your app's content, or just letting the browser notify the user. And, it includes recommendations on how to place the notification for different types of apps.


Address Bar Install for Progressive Web Apps on the Desktop


On desktop, there's typically no indication to a user that a Progressive Web App is installable, and if it is, the install flow is hidden within the three dot menu.

In Chrome 76 (beta mid-June 2019), we're making it easier for users to install Progressive Web Apps on the desktop by adding an install button to the address bar (omnibox). If a site meets the Progressive Web App installability criteria, Chrome will automatically show an install icon in the address bar. Clicking the button prompts the user to install the PWA.

Like other install events, you can listen for the appinstalled event to detect if the user installed your PWA.
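For example (a minimal sketch; what you do inside the handler, such as reporting to analytics, is up to you):

window.addEventListener('appinstalled', () => {
  console.log('PWA was installed');
  // For example, record the install in your analytics here.
});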

Adding your own install prompt

If your PWA has use cases where it’s helpful for a user to install your app, for example if you have users who use your app more than once a week, you should be promoting the installation of your PWA within the web UI of your app.

To add your own custom install button, listen for the beforeinstallprompt event. When it’s fired, save a reference to the event, and update your user interface to let the user know they can install your Progressive Web App.

Patterns for promoting the installation of your PWA

There are three key ways you can promote the installation of your PWA:

  • Automatic browser promotion, like the address bar install button.
  • Application UI promotion, where UI elements appear in the application interface, such as banners, buttons in the header or navigation menu, etc.
  • Inline promotional patterns interweave promotions within the site content.

Check out Patterns for Promoting PWA Installation (mobile) for more details. Its focus is mobile, but many of the patterns are applicable for desktop, or can be easily modified to work for desktop experiences.

Feedback

Updating WebAPKs More Frequently


When a Progressive Web App is installed on Android, Chrome automatically requests and installs a WebAPK of your app. Being installed via an APK makes it possible for your app to show up in the app launcher, in Android's app settings and to register a set of intent filters.

Chrome 76 and later

Chrome checks for updates either every 1 day or every 30 days. Checking for updates every day happens the large majority of the time. It switches to the 30 day interval in unlikely cases where the update server cannot provide an update.

Hypothetical update check for Chrome 76 and later

  • January 1: Install WebAPK
  • January 1: Launch WebAPK → No update check (0 days have passed)
  • January 2: Launch WebAPK → Check whether update is needed (1 day has passed)
  • January 4: Launch Chrome → No update check (Launching Chrome has no effect)
  • January 4: Launch WebAPK → Check whether update is needed (1+ days have passed)
  • January 6: Clear Chrome's data in Android settings
  • January 9: Launch WebAPK → No update check (From Chrome's perspective this is the first WebAPK launch)
  • January 10: Launch WebAPK → Check whether update is needed (1 day has passed)

Chrome 75 and earlier

Chrome checks for updates either every 3 days or every 30 days. Checking for updates every 3 days happens the large majority of the time. It switches to the 30 day interval in unlikely cases where the update server cannot provide an update.

Hypothetical update check for Chrome 75 and earlier

  • January 1: Install WebAPK
  • January 1: Launch WebAPK → No update check (0 days have passed)
  • January 2: Launch WebAPK → No update check (1 day has passed)
  • January 4: Launch Chrome → No update check (Launching Chrome has no effect)
  • January 4: Launch WebAPK → Check whether update is needed (3+ days have passed)
  • January 6: Clear Chrome's data in Android settings
  • January 9: Launch WebAPK → No update check (From Chrome's perspective this is the first WebAPK launch)
  • January 12: Launch WebAPK → Check whether update is needed (3+ days have passed)

Further reading

For complete details, including additional triggers that cause Chrome to check the manifest, and potentially request and install a new WebAPK, refer to the Updating the WebAPK section of the WebAPK docs.

Web Components update: more time to upgrade to v1 APIs


The Web Components v1 APIs are a web platform standard that has shipped in Chrome, Safari, Firefox, and (soon) Edge. The v1 APIs are used by literally millions of sites, reaching billions of users daily. Developers using the draft v0 APIs provided valuable feedback that influenced the specifications. But the v0 APIs were only supported by Chrome. In order to ensure interoperability, late last year, we announced that these draft APIs were deprecated and were scheduled for removal as of Chrome 73.

Because enough developers requested more time to migrate, the APIs have not yet been removed from Chrome. The v0 draft APIs are now targeted for removal in Chrome 80, around February 2020.

Here's what you need to know if you believe you might be using the v0 APIs:

Back to the future: disabling the v0 APIs

To test your site with the v0 APIs disabled, you need to start Chrome from the command line with the following flags:

--disable-blink-features=ShadowDOMV0,CustomElementsV0,HTMLImports

You'll need to exit Chrome before starting it from the command line. If you have Chrome Canary installed, you can run the test in Canary while keeping this page loaded in Chrome.

To test your site with v0 APIs disabled:

  1. If you're on Mac OS, browse to chrome://version to find the executable path for Chrome. You'll need the path in step 3.
  2. Quit Chrome.
  3. Restart Chrome with the command-line flags:

    --disable-blink-features=ShadowDOMV0,CustomElementsV0,HTMLImports

    For instructions on starting Chrome with flags, see Run Chromium with flags. For example, on Windows, you might run:

    chrome.exe --disable-blink-features=ShadowDOMV0,CustomElementsV0,HTMLImports
    
  4. To double check that you have started the browser correctly, open a new tab, open the DevTools console, and run the following command:

    console.log(
    "Native HTML Imports?", 'import' in document.createElement('link'),
    "Native Custom Elements v0?", 'registerElement' in document, 
    "Native Shadow DOM v0?", 'createShadowRoot' in document.createElement('div'));
    

    If everything is working as expected, you should see:

    Native HTML Imports? false Native Custom Elements v0? false Native Shadow DOM v0? false
    

    If the browser reports true for any or all of these features, something's wrong. Make sure you've fully quit Chrome before restarting with the flags.

  5. Finally, load your app and run through the features. If everything works as expected, you're done.

Use the v0 polyfills

The Web Components v0 polyfills provide support for the v0 APIs on browsers that don't provide native support. If your site isn't working on Chrome with the v0 APIs disabled, you probably aren't loading the polyfills. There are several possibilities:

  • You're not loading the polyfills at all. In this case, your site should fail on other browsers, like Firefox and Safari.
  • You're loading the polyfills conditionally based on browser sniffing. In this case, your site should work on other browsers. Skip ahead to Load the polyfills.

In the past, the Polymer Project team and others have recommended loading the polyfills conditionally based on feature detection. This approach should work fine with the v0 APIs disabled.
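One possible shape of that feature-detection approach is sketched below (illustrative only; the polyfill path depends on where you installed it):

// Only load the v0 polyfills when the browser lacks native v0 support.
const needsPolyfills =
  !('import' in document.createElement('link')) ||        // HTML Imports
  !('registerElement' in document) ||                      // Custom Elements v0
  !('createShadowRoot' in document.createElement('div'));  // Shadow DOM v0

if (needsPolyfills) {
  const script = document.createElement('script');
  script.src = '/bower_components/webcomponents/webcomponentsjs/webcomponents-lite-min.js';
  document.head.appendChild(script);
}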

Install the v0 polyfills

The Web Components v0 polyfills were never published to the npm registry. You can install the polyfills using Bower, if your project is already using Bower. Or you can install from a zip file.

  • To install with Bower:

    bower install --save webcomponents/webcomponentsjs#^0.7.0

  • To install from a zip file, download the latest 0.7.x release from GitHub:

    https://github.com/webcomponents/webcomponentsjs/releases/tag/v0.7.24

    If you install from a zip file, you'll have to manually update the polyfills if another version comes out. You'll probably want to check the polyfills in with your code.

Load the v0 polyfills

The Web Components v0 polyfills are provided as two separate bundles:

  • webcomponents-min.js: Includes all of the polyfills. This bundle includes the shadow DOM polyfill, which is much larger than the other polyfills, and has greater performance impact. Only use this bundle if you need shadow DOM support.
  • webcomponents-lite-min.js: Includes all polyfills except for shadow DOM. Note: Polymer 1.x users should choose this bundle, since Shadow DOM emulation was included directly in the Polymer library. Polymer 2.x users need a different version of the polyfills. See the Polymer Project blog post for details.

There are also individual polyfills available as part of the Web Components polyfill package. Using individual polyfills separately is an advanced topic, not covered here.

To load the polyfills unconditionally:

<script src="/bower_components/webcomponents/webcomponentsjs/webcomponents-lite-min.js">
</script>

Or:

<script src="/bower_components/webcomponents/webcomponentsjs/webcomponents-min.js">
</script>

If you installed the polyfills directly from GitHub, you'll need to supply the path where you installed the polyfills.

If you conditionally load the polyfills based on feature detection, your site should continue to work when v0 support is removed.

If you conditionally load the polyfills using browser sniffing (that is, loading the polyfills on non-Chrome browsers), your site will fail when v0 support is removed from Chrome.

Help! I don't know what APIs I'm using!

If you don't know whether you're using any of the v0 APIs—or which APIs you're using—you can check the console output in Chrome.

  1. If you previously started Chrome with the flags to disable the v0 APIs, close Chrome and restart it normally.
  2. Open a new Chrome tab and load your site.
  3. Press Control+Shift+J (Windows, Linux, Chrome OS) or Command+Option+J (Mac) to open the DevTools Console.
  4. Check the console output for deprecation messages. If there's a lot of console output, enter "Deprecation" in the Filter box.

Console window showing deprecation warnings

You should see deprecation messages for each v0 API you're using. The output above shows messages for HTML Imports, custom elements v0, and shadow DOM v0.

If you're using one or more of these APIs, see Use the v0 polyfills.

Next steps: moving off of v0

Making sure the v0 polyfills are getting loaded should ensure your site keeps working when the v0 APIs are removed. We recommend moving to the Web Components v1 APIs, which are broadly supported.

If you're using a Web Components library, like the Polymer library, X-Tag, or SkateJS, the first step is to check whether newer versions of the library are available that support the v1 APIs.

If you have your own library, or wrote custom elements without a library, you'll need to update to the new APIs. These resources might help:

Summing up

The Web Components v0 draft APIs are going away. If you take one thing away from this post, make sure you test your app with the v0 APIs disabled and load the polyfills as needed.

For the long run, we encourage you to upgrade to the latest APIs, and keep using Web Components!

Deprecations and removals in Chrome 77


Removals

Card issuer networks as payment method names

Removes support for calling PaymentRequest with card issuer networks (e.g., "visa", "amex", "mastercard") in the supportedMethods field.

Intent to Remove | Chrome Platform Status | Chromium Bug

Deprecate Web MIDI use on insecure origins

Web MIDI use is classified into two groups: non-privilege use, and privilege use with sysex permission. Until Chrome 77, only the latter use prompts users for permission. To reduce security concerns, permissions will always be requested regardless of sysex use. This means that using Web MIDI on insecure origins will no longer be allowed.
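For reference, requesting MIDI access looks like this (a minimal sketch; from Chrome 77 the call prompts the user even without sysex, and only works on secure origins):

navigator.requestMIDIAccess({ sysex: false })
  .then((access) => {
    console.log('MIDI inputs available:', access.inputs.size);
  })
  .catch((err) => {
    console.error('MIDI access was denied or is unavailable:', err);
  });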

Intent to Remove | Chrome Platform Status | Chromium Bug

Deprecations

Deprecate WebVR 1.1 API

This API is now deprecated in Chrome, being replaced by the WebXR Device API, which is expected to ship in Chrome 78. The WebVR Origin Trial ended on July 24, 2018.

WebVR was never enabled by default in Chrome, and was never ratified as a web standard. The WebXR Device API is the replacement API for WebVR. Removing WebVR from Chrome allows us to focus on the future of WebXR and remove the maintenance burden of WebVR, as well as reaffirm that Chrome is committed to WebXR as the future for building immersive web-based experiences. Removal is expected in Chrome 79.

Intent to Remove | Chrome Platform Status | Chromium Bug

Feedback

Fresher service workers, by default


Note: This article was updated to reflect that the byte-for-byte service worker update check applies to imported scripts starting in Chrome 78.

tl;dr

Starting in Chrome 68, HTTP requests that check for updates to the service worker script will no longer be fulfilled by the HTTP cache by default. This works around a common developer pain point, in which setting an inadvertent Cache-Control header on your service worker script could lead to delayed updates.

If you've already opted-out of HTTP caching for your /service-worker.js script by serving it with Cache-Control: max-age=0, then you shouldn't see any changes due to the new default behavior.

Additionally, starting in Chrome 78, the byte-for-byte comparison will be applied to scripts loaded in a service worker via importScripts(). Any change made to an imported script will trigger the service worker update flow, just like a change to the top-level service worker would.

Background

Every time you navigate to a new page that's under a service worker's scope, explicitly call registration.update() from JavaScript, or a service worker is "woken up" via a push or sync event, the browser will, in parallel, request the JavaScript resource that was originally passed in to the navigator.serviceWorker.register() call, to look for updates to the service worker script.
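For instance, a manual update check from page code is a one-liner (a sketch; where you trigger it from is up to you):

// Explicitly trigger an update check for the registered service worker.
navigator.serviceWorker.ready.then((registration) => {
  registration.update();
});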

For the purposes of this article, let's assume its URL is /service-worker.js and that it contains a single call to importScripts(), which loads additional code that's run inside the service worker:

// Inside our /service-worker.js file:
importScripts('path/to/import.js');

// Other top-level code goes here.

What's changing?

Prior to Chrome 68, the update request for /service-worker.js would be made via the HTTP cache (as most fetches are). This meant if the script was originally sent with Cache-Control: max-age=600, updates within the next 600 seconds (10 minutes) would not go to the network, so the user may not receive the most up-to-date version of the service worker. However, if max-age was greater than 86400 (24 hours), it would be treated as if it were 86400, to avoid users being stuck with a particular version forever.

Starting in 68, the HTTP cache will be ignored when requesting updates to the service worker script, so existing web applications may see an increase in the frequency of requests for their service worker script. Requests for importScripts will still go via the HTTP cache. But this is just the default—a new registration option, updateViaCache is available that offers control over this behavior.

updateViaCache

Developers can now pass in a new option when calling navigator.serviceWorker.register(): the updateViaCache parameter. It takes one of three values: 'imports', 'all', or 'none'.

The values determine if and how the browser's standard HTTP cache comes into play when making the HTTP request to check for updated service worker resources.

  • When set to 'imports', the HTTP cache will never be consulted when checking for updates to the /service-worker.js script, but will be consulted when fetching any imported scripts (path/to/import.js, in our example). This is the default, and it matches the behavior starting in Chrome 68.

  • When set to 'all', the HTTP cache will be consulted when making requests for both the top-level /service-worker.js script, as well as any scripts imported inside of the service worker, like path/to/import.js. This option corresponds to the previous behavior in Chrome, prior to Chrome 68.

  • When set to 'none', the HTTP cache will not be consulted when making requests for either the top-level /service-worker.js or for any imported scripts, such as the hypothetical path/to/import.js.

For example, the following code will register a service worker, and ensure that the HTTP cache is never consulted when checking for updates to either the /service-worker.js script, or for any scripts that are referenced via importScripts() inside of /service-worker.js:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js', {
    updateViaCache: 'none',
    // Optionally, set 'scope' here, if needed.
  });
}

Checks for updates to imported scripts

Prior to Chrome 78, any service worker script loaded via importScripts() would be retrieved only once (checking first against the HTTP cache, or via the network, depending on the updateViaCache configuration). After that initial retrieval, it would be stored internally by the browser, and never re-fetched.

The only way to force an already installed service worker to pick up changes to an imported script was to change the script's URL, usually either by adding in a semver value (e.g. importScripts('https://example.com/v1.1.0/index.js')) or by including a hash of the contents (e.g. importScripts('https://example.com/index.abcd1234.js')). A side-effect of changing the imported URL is that the top-level service worker script's contents change, which in turn triggers the service worker update flow.

Starting with Chrome 78, each time an update check is performed for a top-level service worker file, checks will be made at the same time to determine whether or not the contents of any imported scripts have changed. Depending on the Cache-Control headers used, these imported script checks might be fulfilled by the HTTP cache if updateViaCache is set to 'all' or 'imports' (which is the default value), or the checks might go directly against the network if updateViaCache is set to 'none'.

If an update check for an imported script results in a byte-for-byte difference compared to what was previously stored by the service worker, that will in turn trigger the full service worker update flow, even if the top-level service worker file remains the same.

The Chrome 78 behavior matches what Firefox implemented several years ago, in Firefox 56. Safari already implements this behavior as well.

What do developers need to do?

If you've effectively opted-out of HTTP caching for your /service-worker.js script by serving it with Cache-Control: max-age=0 (or a similar value), then you shouldn't see any changes due to the new default behavior.

If you do serve your /service-worker.js script with HTTP caching enabled, either intentionally or because it's just the default for your hosting environment, you may start seeing an uptick of additional HTTP requests for /service-worker.js made against your server—these are requests that used to be fulfilled by the HTTP cache. If you want to continue allowing the Cache-Control header value to influence the freshness of your /service-worker.js, you'll need to start explicitly setting updateViaCache: 'all' when registering your service worker.
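That registration would look like the earlier example, but with 'all' (a sketch, mirroring the snippet above):

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js', {
    // Keep the pre-Chrome 68 behavior: let Cache-Control influence update checks
    // for both the top-level script and imported scripts.
    updateViaCache: 'all',
  });
}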

Given that there may be a long-tail of users on older browser versions, it's still a good idea to continue setting the Cache-Control: max-age=0 HTTP header on service worker scripts, even though newer browsers might ignore them.

Developers can use this opportunity to decide whether they want to explicitly opt their imported scripts out of HTTP caching now, and add in updateViaCache: 'none' to their service worker registration if appropriate.

Serving imported scripts

Starting with Chrome 78, developers might see more incoming HTTP requests for resources loaded via importScripts(), since they will now be checked for updates.

If you would like to avoid this additional HTTP traffic, set long-lived Cache-Control headers when serving scripts that include semver or hashes in their URLs, and rely on the default updateViaCache behavior of 'imports'.

Alternatively, if you want your imported scripts to be checked for frequent updates, then make sure you either serve them with Cache-Control: max-age=0, or that you use updateViaCache: 'none'.

Further reading

"The Service Worker Lifecycle" and "Caching best practices & max-age gotchas", both by Jake Archibald, are recommended reading for all developers who deploy anything to the web.
