What's New In DevTools (Chrome 87)

New CSS Grid debugging tools

DevTools now has better support for CSS grid debugging!

CSS grid debugging

When an HTML element on your page has display: grid or display: inline-grid applied to it, you can see a grid badge next to it in the Elements panel. Click the badge to toggle the display of a grid overlay on the page.
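For reference, here is a minimal grid container of the kind that triggers the badge (the class name is illustrative):

/* Any element whose display resolves to grid or inline-grid gets the badge. */
.container {
  display: grid;
  grid-template-columns: 1fr 2fr;
  gap: 8px;
}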

The new Layout pane has a Grid section offering you a number of options for viewing the grids.

Check out the documentation to learn more.

Chromium issue: 1047356

New WebAuthn tab

You can now emulate authenticators and debug the Web Authentication API with the new WebAuthn tab.

Select More options > More tools > WebAuthn to open the WebAuthn tab.

WebAuthn tab

Prior to the new WebAuthn tab, there was no native WebAuthn debugging support in Chrome. Developers needed physical authenticators to test their web applications with the Web Authentication API.

With the new WebAuthn tab, web developers can now emulate these authenticators, customize their capabilities, and inspect their states, without the need for any physical authenticators. This makes the debugging experience much easier.
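For context, here is a simplified sketch of the kind of registration call you might debug with the tab; the relying-party and user values are placeholders, and a real server would supply a random challenge:

// Simplified WebAuthn registration sketch; all values are placeholders.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: new Uint8Array(32),  // normally random bytes from your server
    rp: { name: 'Example Site' },
    user: {
      id: new Uint8Array(16),       // a server-side user handle
      name: 'user@example.com',
      displayName: 'Example User'
    },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }]  // -7 is ES256
  }
});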

Check out our documentation to learn more about the WebAuthn feature.

Chromium issue: 1034663

Move tools between top and bottom panel

DevTools now supports moving tools between the top and bottom panels. This way, you can view any two tools at once.

For example, if you would like to view the Elements and Sources panels at once, right-click the Sources tab and select Move to bottom to move it to the bottom.

Move to bottom

Similarly, you can move any bottom tab to the top by right-clicking the tab and selecting Move to top.

Move to top

Chromium issue: 1075732

Elements panel updates

View the Computed sidebar pane in the Styles pane

You can now toggle the Computed sidebar pane in the Styles pane.

The Computed sidebar pane in the Styles pane is collapsed by default. Click on the button to toggle it.

Computed sidebar pane

Chromium issue: 1073899

Grouping CSS properties in the Computed pane

You can now group CSS properties by category in the Computed pane.

With this new grouping feature, it is easier to navigate the Computed pane (less scrolling) and to selectively focus on a set of related properties for CSS inspection.

On the Elements panel, select an element. Toggle the Group checkbox to group or ungroup the CSS properties.

Grouping CSS properties

Chromium issues: 1096230, 1084673, 1106251

Lighthouse 6.4 in the Lighthouse panel

The Lighthouse panel is now running Lighthouse 6.4. Check out the release notes for a full list of changes.

Lighthouse

New audits in Lighthouse 6.4:

  • Preload fonts. Validates if all fonts that use font-display: optional were preloaded.
  • Valid sourcemaps. Audits if a page has valid sourcemaps for large, first-party JavaScript.
  • [Experimental] Large JavaScript library. Large JavaScript libraries can lead to poor performance. This audit suggests cheaper alternatives to common, large JavaScript libraries like moment.js.

Chromium issue: 772558

performance.mark() events in the Timings section

The Timings section of a Performance recording now marks performance.mark() events.

Performance.mark events
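As a refresher, marks are named timestamps you add from your own code; entries like these now show up in the Timings section (the names and initApp() call below are illustrative):

// Named timestamps that appear in the Timings section of a recording.
performance.mark('app-init-start');
initApp();  // your own code (illustrative)
performance.mark('app-init-end');
performance.measure('app-init', 'app-init-start', 'app-init-end');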

New resource-type and url filters in the Network panel

Use the new resource-type and url keywords in the Network panel to filter network requests.

For example, use resource-type:image to focus on the network requests that are images.

resource-type filter

Check out filter requests by properties to discover more special keywords like resource-type and url.

Chromium issues: 1121141, 1104188

Frame details view updates

Display COEP and COOP reporting to endpoint

You can now view the Cross-Origin Embedder Policy (COEP) and Cross-Origin Opener Policy (COOP) reporting to endpoint under the Security & Isolation section.

The Reporting API defines a new HTTP header, Report-To, that gives web developers a way to specify server endpoints for the browser to send warnings and errors to.
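For example, a response could declare a reporting group and point COEP violation reports at it (the endpoint URL and group name are illustrative):

Report-To: { "group": "coep-endpoint", "max_age": 86400,
             "endpoints": [{ "url": "https://reports.example.com/coep" }] }
Cross-Origin-Embedder-Policy: require-corp; report-to="coep-endpoint"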

reporting to endpoint

Read this article to learn more about how to enable COEP and COOP and make your website "cross-origin isolated".

Chromium issue: 1051466

Display COEP and COOP report-only mode

DevTools now displays a report-only label for COEP and COOP headers that are set to report-only mode.
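Report-only mode is set with the dedicated -Report-Only header variants, which report violations without enforcing the policy (the group names are illustrative):

Cross-Origin-Embedder-Policy-Report-Only: require-corp; report-to="coep-endpoint"
Cross-Origin-Opener-Policy-Report-Only: same-origin; report-to="coop-endpoint"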

report-only label

Watch this video to learn about how to prevent information leaks and enable COOP and COEP in your website.

Chromium issue: 1051466

Deprecation of Settings in the More tools menu

The Settings option in the More tools menu has been deprecated. Open Settings from the main panel instead.

Settings in the main panel

Chromium issue: 1121312

Experimental features

View and fix color contrast issues in the CSS Overview panel

The CSS Overview panel now displays a list of low color contrast text on your page.

In this example, the demo page has a low color contrast issue. Click on the issue to view a list of elements that have it.

Low color contrast issues

Click on an element in the list to open it in the Elements panel. DevTools provides an automatic color suggestion to help you fix the low-contrast text.

Chromium issue: 1120316

Customize keyboard shortcuts in DevTools

You can now customize the keyboard shortcuts for your favorite commands in DevTools.

Go to Settings > Shortcuts, hover over a command, and click the Edit button (pen icon) to customize the keyboard shortcut.

Customize keyboard shortcuts

To reset all shortcuts, click on Restore default shortcuts.

Chromium issue: 174309



New in Chrome 86


Chrome 86 is starting to roll out to stable now.

Here's what you need to know:

I’m Pete LePage. Working and shooting from home, let’s dive in and see what’s new for developers in Chrome 86!

File System Access

Today, you can use the <input type="file"> element to read a file from disk. To save changes, add a download attribute to an anchor tag; it’ll show the file picker, then save the file. There’s no way to write back to the same file you opened. That workflow is annoying.

With the File System Access API (formerly the Native File System API), which graduated from its origin trial and is now available in stable, you can call showOpenFilePicker(), which shows a file picker and returns a file handle that you can use to read the file.

async function getFileHandle() {
  const opts = {
    types: [
      {
        description: 'Text Files',
        accept: {
          'text/plain': ['.txt', '.text'],
          'text/html': ['.html', '.htm']
        }
      }
    ]
  };
  // showOpenFilePicker() resolves with an array of handles;
  // grab the first, since multiple selection isn't enabled here.
  const [handle] = await window.showOpenFilePicker(opts);
  return handle;
}

To save a file to disk, you can either use that file handle that you got earlier, or call showSaveFilePicker() to get a new file handle.

async function saveFile(fileHandle, contents) {
  if (!fileHandle) {
    fileHandle = await window.showSaveFilePicker();
  }
  const writable = await fileHandle.createWritable();
  await writable.write(contents);
  await writable.close();
}
permission prompt screen shot
Prompt to the user requesting permission to write to a file.

Before writing, Chrome will check if the user has granted write permission; if it hasn’t been granted, Chrome will prompt the user first.

Calling showDirectoryPicker() will open a directory, allowing you to get a list of files, or create new files in that directory. Perfect for things like IDEs, or media players that interact with lots of files. Of course, before you can write anything, the user must grant write permission.
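As a rough sketch, listing the entries of a user-picked directory looks like this; the handle's values() iterator yields both files and subdirectories:

// List the entries in a directory the user picked.
const dirHandle = await window.showDirectoryPicker();
for await (const entry of dirHandle.values()) {
  console.log(entry.kind, entry.name);  // kind is 'file' or 'directory'
}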

There’s a lot more to the API, so check out the File System Access article on web.dev.

Origin Trial: WebHID

Game controller
Game controller.

Human interface devices, commonly referred to as HID, take input from, or provide output to, humans. There’s a long tail of human interface devices that are too new, too old, or too uncommon to be accessible by the system’s device drivers.

The WebHID API, now available as an origin trial, solves this by providing a way to access these devices in JavaScript. With WebHID, web based games can take full advantage of gamepads, including all of the buttons, joysticks, sensors, triggers, LEDs, rumble packs, and more.

butOpenHID.addEventListener('click', async (e) => {
  const deviceFilter = { vendorId: 0x0fd9 };
  const opts = { filters: [deviceFilter] };
  const devices = await navigator.hid.requestDevice(opts);
  myDevice = devices[0];
  await myDevice.open();
  myDevice.addEventListener('inputreport', handleInpRpt);
});

Web-based video chat apps can use the telephony buttons on specialized speakers to start or end calls, mute the audio, and more.

HID device picker
HID device picker.

Of course, powerful APIs like this can only interact with devices when the user explicitly chooses to allow it.

Check out Connecting to uncommon HID devices for more details, examples, how you can get started, and a cool demo.

Origin Trial: Multi-Screen Window Placement API

Today, you can get the properties of the screen the browser window is on by reading window.screen. But what if you have a multi-monitor setup? Sorry, the browser will only tell you about the screen it’s currently on.

const screen = window.screen;
console.log(screen);
// {
//   availHeight: 1612,
//   availLeft: 0,
//   availTop: 23,
//   availWidth: 3008,
//   colorDepth: 24,
//   orientation: {...},
//   pixelDepth: 24,
//   width: 3008
// }

The Multi-Screen Window Placement API starts an origin trial in Chrome 86. It allows you to enumerate the screens connected to your machine and place windows on specific screens. Being able to place windows on specific screens is critical for things like presentation apps, financial services apps, and more.

Before you can use the API, you’ll need to request permission. If you don’t, the browser will prompt the user when you first try to use it.

async function getPermission() {
  const opt = { name: 'window-placement' };
  try {
    const perm = await navigator.permissions.query(opt);
    return perm.state === 'granted';
  } catch {
    return false;
  }
}

Once the user has granted permission, calling window.getScreens() returns a promise that resolves with an array of Screen objects.

const screens = await window.getScreens();
console.log(screens);
// [
//   {id: 0, internal: false, primary: true, left: 0, ...},
//   {id: 1, internal: true, primary: false, left: 3008, ...},
// ]

I can then use that information when calling requestFullscreen(), or when placing new windows. Tom has all the details in his Managing several displays with the Multi-Screen Window Placement API article on web.dev.

And more

The new CSS selector :focus-visible lets you opt in to the same heuristic the browser uses when it's deciding whether to display the default focus indicator.

/* Focusing the button with a keyboard will
   show a dashed black line. */
button:focus-visible {
  outline: 4px dashed black;
}

/* Focusing the button with a mouse, touch,
   or stylus will show a subtle drop shadow. */
button:focus:not(:focus-visible) {
  outline: none;
  box-shadow: 1px 1px 5px rgba(1, 1, 0, .7);
}

You can customize the color, size, or type of number or bullet for lists with the CSS ::marker pseudo-element.

li::marker {
  content: '😍';
}
li:last-child::marker {
  content: '🤯';
}

And Chrome Dev Summit will be coming to a screen near you, so stay tuned to our YouTube channel for more info!

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 86.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel, and you’ll get an email notification whenever we launch a new video. Or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 87 is released, I’ll be right here to tell you -- what’s new in Chrome!


Gaining security and privacy by partitioning the cache


In general, caching can improve performance by storing data so future requests for the same data are served faster. For example, a cached resource from the network can avoid a round trip to the server. A cached computational result can omit the time to do the same calculation.

In Chrome, caching is used in various ways, and the HTTP Cache is one example.

How Chrome's HTTP Cache currently works

As of version 85, Chrome caches resources fetched from the network, using their respective resource URLs as the cache key. (A cache key is used to identify a cached resource.)

The following example illustrates how a single image is cached and treated in three different contexts:

Cache Key: { https://x.example/doge.png }

A user visits a page (https://a.example) that requests an image (https://x.example/doge.png). The image is requested from the network and cached using https://x.example/doge.png as the key.

Cache Key: { https://x.example/doge.png }

The same user visits another page (https://b.example), which requests the same image (https://x.example/doge.png).
The browser checks its HTTP Cache to see if it already has this resource cached, using the image URL as the key. The browser finds a match in its Cache, so it uses the cached version of the resource.

Cache Key: { https://x.example/doge.png }

It doesn't matter if the image is loaded from within an iframe. If the user visits another website (https://c.example) with an iframe (https://d.example) and the iframe requests the same image (https://x.example/doge.png), the browser can still load the image from its cache because the cache key is the same across all of the pages.

This mechanism has been working well from a performance perspective for a long time. However, the time a website takes to respond to HTTP requests can reveal that the browser has accessed the same resource in the past, which opens the browser to security and privacy attacks, like the following:

  • Detect if a user has visited a specific site: An adversary can detect a user's browsing history by checking if the cache has a resource which might be specific to a particular site or cohort of sites.
  • Cross-site search attack: An adversary can detect if an arbitrary string is in the user's search results by checking whether a 'no search results' image used by a particular website is in the browser's cache.
  • Cross-site tracking: The cache can be used to store cookie-like identifiers as a cross-site tracking mechanism.

To mitigate these risks, Chrome will partition its HTTP cache starting in Chrome 86.

How will cache partitioning affect Chrome's HTTP Cache?

With cache partitioning, cached resources will be keyed using a new "Network Isolation Key" in addition to the resource URL. The Network Isolation Key is composed of the top-level site and the current-frame site.

Note: The "site" is recognized using "scheme://eTLD+1" so if requests are from different pages, but they have the same scheme and effective top-level domain+1 they will use the same cache partition. To learn more about this, read Understanding "same-site" and "same-origin".

Look again at the previous example to see how cache partitioning works in different contexts:

Cache Key: { https://a.example, https://a.example, https://x.example/doge.png }

A user visits a page (https://a.example) which requests an image (https://x.example/doge.png). In this case, the image is requested from the network and cached using a tuple consisting of https://a.example (the top-level site), https://a.example (the current-frame site), and https://x.example/doge.png (the resource URL) as the key. (Note that when the resource request is from the top-level frame, the top-level site and current-frame site in the Network Isolation Key are the same.)

Cache Key: { https://b.example, https://b.example, https://x.example/doge.png }

The same user visits a different page (https://b.example) which requests the same image (https://x.example/doge.png). Though the same image was loaded in the previous example, the key doesn't match, so it will not be a cache hit.

The image is requested from the network and cached using a tuple consisting of https://b.example, https://b.example, and https://x.example/doge.png as the key.

Cache Key: { https://a.example, https://a.example, https://x.example/doge.png }

Now the user comes back to https://a.example but this time the image (https://x.example/doge.png) is embedded in an iframe. In this case, the key is a tuple containing https://a.example, https://a.example, and https://x.example/doge.png and a cache hit occurs. (Note that when the top-level site and the iframe are the same site, the resource cached with the top-level frame can be used.)

Cache Key: { https://a.example, https://c.example, https://x.example/doge.png }

The user is back at https://a.example but this time the image is hosted in an iframe from https://c.example.

In this case, the image is downloaded from the network because there is no resource in the cache that matches the key consisting of https://a.example, https://c.example, and https://x.example/doge.png.

Cache Key: { https://a.example, https://c.example, https://x.example/doge.png }

What if the domain contains a subdomain or a port number? The user visits https://subdomain.a.example, which embeds an iframe (https://c.example:8080), which requests the image.

Because the key is created based on "scheme://eTLD+1", subdomains and port numbers are ignored. Hence a cache hit occurs.

Cache Key: { https://a.example, https://c.example, https://x.example/doge.png }

What if the iframe is nested multiple times? The user visits https://a.example, which embeds an iframe (https://b.example), which embeds yet another iframe (https://c.example), which finally requests the image.

Because the key is taken from the top-frame (https://a.example) and the immediate frame which loads the resource (https://c.example), a cache hit occurs.

FAQs

As a web developer, are there any actions I should take in response to this change?

This is not a breaking change, but it may impose performance considerations for some web services.

For example, those that serve large volumes of highly cacheable resources across many sites (such as fonts and popular scripts) may see an increase in their traffic. Also, those who consume such services may have an increased reliance on them.

(There's a proposal to enable shared libraries in a privacy-preserving way called Web Shared Libraries (presentation video), but it's still under consideration.)

What is the impact of this behavioral change?

The overall cache miss rate increases by about 3.6%, changes to the FCP (First Contentful Paint) are modest (~0.3%), and the overall fraction of bytes loaded from the network increases by around 4%. You can learn more about the impact on performance in the HTTP cache partitioning explainer.

Is this standardized? Do other browsers behave differently?

"HTTP cache partitions" is standardized in the fetch spec though browsers behave differently:

  • Chrome: Uses top-level scheme://eTLD+1 and frame scheme://eTLD+1
  • Safari: Uses top-level eTLD+1
  • Firefox: Planning to implement with top-level scheme://eTLD+1 and considering including a second key like Chrome

How is fetch from workers treated?

Dedicated workers use the same key as their current frame. Service workers and shared workers are more complicated since they may be shared among multiple top-level sites. The solution for them is currently under discussion.


Deprecations and removals in Chrome 87


Comma separator in iframe allow attribute

Permissions policy declarations in an <iframe> tag can no longer use commas as a separator between items. Developers should use semicolons instead.
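For example (the URL and features are illustrative):

<!-- No longer supported: comma-separated items. -->
<iframe src="https://example.com" allow="camera, microphone"></iframe>

<!-- Use semicolons instead. -->
<iframe src="https://example.com" allow="camera; microphone"></iframe>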

-webkit-font-size-delta

Blink will no longer support the rarely-used -webkit-font-size-delta property. Developers should use font-size to control font size instead.

Deprecate FTP support

Chrome is deprecating and removing support for FTP URLs. The current FTP implementation in Google Chrome has no support for encrypted connections (FTPS), nor proxies. Usage of FTP in the browser is sufficiently low that it is no longer viable to invest in improving the existing FTP client. In addition, more capable FTP clients are available on all affected platforms.

Google Chrome 72 and later removed support for fetching document subresources over FTP and rendering of top level FTP resources. Currently navigating to FTP URLs results in showing a directory listing or a download depending on the type of resource. A bug in Google Chrome 74 and later resulted in dropping support for accessing FTP URLs over HTTP proxies. Proxy support for FTP was removed entirely in Google Chrome 76. In Chrome 86, FTP was turned off for pre-release channels (Canary and Beta) and was experimentally turned off for one percent of stable users.

The remaining capabilities of Google Chrome’s FTP implementation are restricted to either displaying a directory listing or downloading a resource over unencrypted connections.

The remainder of the deprecation follows this timeline:

Chrome 87

FTP support will be disabled by default for fifty percent of users but can be enabled using the flags listed above.

Chrome 88

FTP support will be disabled.


How we built the Chrome DevTools WebAuthn tab


The Web Authentication API, also known as WebAuthn, allows servers to use public key cryptography - rather than passwords - to register and authenticate users. It does this by enabling integration between these servers and strong authenticators. These authenticators may be dedicated physical devices (e.g. security keys) or integrated with platforms (e.g. fingerprint readers). You can read more about WebAuthn at webauthn.guide.

Developer pain points

Prior to this project, WebAuthn lacked native debugging support in Chrome. A developer building an app that used WebAuthn needed access to physical authenticators. This was especially difficult for two reasons:

  1. There are many different flavors of authenticators. Debugging the breadth of configurations and capabilities necessitated that the developer have access to many different, sometimes expensive, authenticators.

  2. Physical authenticators are, by design, secure. Therefore, inspecting their state is usually impossible.

We wanted to make this easier by adding debugging support right in the Chrome DevTools.

Solution: a new WebAuthn tab

The WebAuthn DevTools tab makes debugging WebAuthn much easier by allowing developers to emulate these authenticators, customize their capabilities, and inspect their states.

New WebAuthn tab

Implementation

Adding debugging support to WebAuthn was a two-part process.

Two-part process

Part 1: Adding WebAuthn Domain to the Chrome DevTools Protocol

First, we implemented a new domain in the Chrome DevTools Protocol (CDP) which hooks into a handler that communicates with the WebAuthn backend.

The CDP connects the DevTools front end with Chromium. In our case, the WebAuthn domain acts as the bridge between the WebAuthn DevTools tab and Chromium's implementation of WebAuthn.

The WebAuthn domain allows enabling and disabling the Virtual Authenticator Environment, which disconnects the browser from the real Authenticator Discovery and plugs in a Virtual Discovery instead.

We also expose methods in the domain that act as a thin layer to the existing Virtual Authenticator and Virtual Discovery interfaces, which are part of Chromium's WebAuthn implementation. These methods include adding and removing authenticators as well as creating, getting, and clearing their registered credentials.
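As an illustration, a raw CDP session could drive the domain roughly like this; the option values are illustrative:

// Sketch of driving the CDP WebAuthn domain from a raw session
// (e.g. a Puppeteer CDPSession). Option values are illustrative.
await client.send('WebAuthn.enable');
const { authenticatorId } = await client.send('WebAuthn.addVirtualAuthenticator', {
  options: {
    protocol: 'ctap2',
    transport: 'usb',
    hasResidentKey: true,
    hasUserVerification: true
  }
});
const { credentials } = await client.send('WebAuthn.getCredentials', { authenticatorId });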

(Read the code)

Part 2: Building the user-facing tab

Second, we built a user-facing tab in the DevTools frontend. The tab is made up of a view and a model. An auto-generated agent connects the domain with the tab.

While there are three necessary components, we only needed to be concerned with two of them: the model and the view. The third component, the agent, is autogenerated after adding the domain. Briefly, the agent is the layer that carries the calls between the front end and the CDP.

The model

The model is the JavaScript layer that connects the agent and the view. For our case, the model is quite simple. It takes commands from the view, builds the requests such that they're consumable by the CDP, and then sends them through via the agent. These requests are usually one-directional with no information being sent back to the view.

However, we do sometimes pass back a response from the model either to provide an ID for a newly-created authenticator or to return the credentials stored in an existing authenticator.

(Read the code)

The view

New WebAuthn tab

We use the view to provide the user interface that a developer sees when accessing DevTools. It contains:

  1. A toolbar to enable the virtual authenticator environment.
  2. A section to add authenticators.
  3. A section for created authenticators.

(Read the code)

Toolbar to enable virtual authenticator environment

toolbar

Since most of the user interactions are with one authenticator at a time rather than the entire tab, the only functionality we provide in the toolbar is toggling the virtual environment on and off.

Why is this necessary? It's important that the user explicitly toggles the environment, because doing so disconnects the browser from the real Authenticator Discovery. Therefore, while it's on, connected physical authenticators such as a fingerprint reader won't be recognized.

We decided that an explicit toggle means a better user experience, especially for those who wander into the WebAuthn tab without expecting real discovery to be disabled.

Adding the Authenticators section

Adding the Authenticators section

Upon enabling the virtual authenticator environment, we present the developer with an inline form for adding a virtual authenticator. Within this form, we provide customization options that let the user choose the authenticator's protocol and transport methods, as well as whether the authenticator supports resident keys and user verification.

Once the user clicks Add, these options are bundled and sent to the model which makes the call to create an authenticator. Once that's complete, the front end will receive a response and then modify the UI to include the newly-created authenticator.

The Authenticator view

The Authenticator view

Each time an authenticator is emulated, we add a section to the authenticator view to represent it. Each authenticator section includes a name, ID, configuration options, buttons to remove the authenticator or set it active, and a credential table.

The Authenticator name

The authenticator's name is customizable and defaults to "Authenticator" concatenated with the last 5 characters of its ID. Originally, the authenticator's name was its full ID and unchangeable. We implemented customizable names so the user can label the authenticator based on its capabilities, the physical authenticator it's meant to emulate, or any nickname that's a bit easier to digest than a 36-character ID.

Credentials table

We added a table to each authenticator section that shows all the credentials registered by an authenticator. Within each row, we provide information about a credential, as well as buttons to remove or export the credential.

Currently, we gather the information to fill these tables by polling the CDP to get the registered credentials for each authenticator. In the future, we plan on adding an event for credential creation.

The Active button

We also added an Active radio button to each authenticator section. The authenticator that is currently set active will be the only one that listens for and registers credentials. Without this, the registration of credentials with multiple authenticators present is non-deterministic, which would be a fatal flaw when trying to test WebAuthn.

We implemented the active status using the SetUserPresence method on virtual authenticators. The SetUserPresence method sets whether tests of user presence succeed for a given authenticator. If we turn it off, an authenticator won't be able to listen for credentials. Therefore, by ensuring that it is on for at most one authenticator (the one set as active), and disabling user presence for all the others, we can force deterministic behavior.

An interesting challenge we faced while adding the active button was avoiding a race condition. Consider the following scenario:

  1. User clicks Active radio button for Authenticator X, sending a request to the CDP to set X as active. The Active radio button for X is selected, and all the others are deselected.

  2. Immediately after, user clicks Active radio button for Authenticator Y, sending a request to the CDP to set Y as active. The Active radio button for Y is selected, and all the others, including for X, are deselected.

  3. In the backend, the call to set Y as active is completed and resolved. Y is now active and all other authenticators are not.

  4. In the backend, the call to set X as active is completed and resolved. X is now active and all other authenticators, including Y, are not.

Now, the resulting situation is as follows: X is the active authenticator. However, the Active radio button for X isn't selected. Y isn't the active authenticator. However, the Active radio button for Y is selected. There is a disagreement between the front end and the actual status of the authenticators. Obviously, that's a problem.

Our solution: establish pseudo two-way communication between the radio buttons and the active authenticator. First, we maintain a variable activeId in the view to keep track of the ID of the currently active authenticator. Then, we wait for the call that sets an authenticator active to return before setting activeId to that authenticator's ID. Lastly, we loop through each authenticator section. If the ID of a section equals activeId, we set its button to selected. Otherwise, we set the button to unselected.

Here's what that looks like:


 async _setActiveAuthenticator(authenticatorId) {
   await this._clearActiveAuthenticator();
   await this._model.setAutomaticPresenceSimulation(authenticatorId, true);
   this._activeId = authenticatorId;
   this._updateActiveButtons();
 }

 _updateActiveButtons() {
   const authenticators = this._authenticatorsView.getElementsByClassName('authenticator-section');
   Array.from(authenticators).forEach(authenticator => {
     authenticator.querySelector('input.dt-radio-button').checked =
         authenticator.getAttribute('data-authenticator-id') === this._activeId;
   });
 }

 async _clearActiveAuthenticator() {
   if (this._activeId) {
     await this._model.setAutomaticPresenceSimulation(this._activeId, false);
   }
   this._activeId = null;
 }

Usage metrics

We wanted to track this feature's usage. Initially, we came up with two options.

  1. Count each time the WebAuthn tab in DevTools was opened. This option could potentially lead to overcounting, as someone may open the tab without actually using it.

  2. Track the number of times the "Enable virtual authenticator environment" checkbox in the toolbar was toggled. This option also had a potential overcounting problem as some may toggle the environment on and off multiple times in the same session.

Ultimately, we decided to go with the latter, but restricted the counting with a check of whether the environment had already been enabled in the session. Therefore, we only increase the count by 1 regardless of how many times the developer toggles the environment. This works because a new session is created each time the tab is reopened, resetting the check and allowing the metric to be incremented again.

Summary

Thank you for reading! If you have any suggestions to improve the WebAuthn tab, let us know by filing a bug.

Here are some resources if you would like to learn more about WebAuthn:


The Chromium Chronicle: Adding Tests to the Waterfall


Episode 14: by Zhaoyang Li in MTV, and Eric Aleshire in TOK (October 2020)
Previous episodes

Want to detect regressions for your new feature in Chrome? Add your tests to the waterfall (Chrome’s continuous build and test infrastructure)!

There are many builders on Chrome's waterfall that run tests on a variety of platforms. This article describes how to add a test suite to an existing builder. Before proceeding, consider these questions:

Should the new tests live in a brand new suite, or just be added to an existing one?

  • Tests are organized in test suites by proximity of source location and theme. If your new tests can’t logically fit into any existing suite, you probably need a new suite.

Should the tests run on a public builder or an internal builder?

  • Use an internal builder if the code lives in an internal repo, or the tests involve confidential data.

Should the tests run in FYI CI, main CI, or the commit queue (CQ)?

  • FYI CI requires you to monitor it yourself and is used for test refinement or experimentation.
  • Main CI tests are regularly monitored by sheriffs.
  • CQ blocks CL submission on failure but takes more infra resources. A new suite should always start in CI before being promoted to CQ.
  • If you’re not sure, your platform’s EngProd team can help you decide.

I already have a test suite running in CI; how do I add it to CQ? / What if I need a new builder?

How to add a test suite to an existing builder

To add a test suite to an existing builder, you need to configure some files in //src/testing/buildbot/:

  1. Create a key in gn_isolate_map.pyl for the new test suite, with the test target label and type info (see the sketch after this list).
  2. Add that key to a test group in test_suites.pyl. (You can find the mapping from builder name to the test groups on builder in waterfalls.pyl.)

     'all_simulator_tests': {
       'previously_existing_test_suite': {},
       'exciting_new_feature_test_suite': {},
     },
    
  3. Apply finer tuning as needed:

    • mixins.pyl contains arguments that can be applied to a group of tests at various group levels.
    • variants.pyl helps run a suite in multiple instances with different arguments.
  4. Regenerate configuration files by running generate_buildbot_json.py.
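For step 1, an entry in gn_isolate_map.pyl looks roughly like this; the label and type below are illustrative, so copy the conventions of neighboring entries:

     'exciting_new_feature_test_suite': {
       'label': '//chrome/test:exciting_new_feature_test_suite',
       'type': 'console_test_launcher',
     },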

After these steps, it is a simple matter of checking in your config changes; the builders running this suite will pick up the new tests automatically, and the results will begin to flow in on the builder's web interface on the waterfall, complete with plenty of debug info in the case of failures!

Puppetaria: accessibility-first Puppeteer scripts


Puppeteer and its approach to selectors

Puppeteer is a browser automation library for Node: it lets you control a browser using a simple and modern JavaScript API.

The most prominent browser task is, of course, browsing web pages. Automating this task essentially amounts to automating interactions with the webpage.

In Puppeteer, this is achieved by querying for DOM elements using string-based selectors and performing actions such as clicking or typing text on the elements. For example, a script that opens developers.google.com, finds the search box, and searches for puppetaria could look like this:

const puppeteer = require('puppeteer');

(async () => {
   const browser = await puppeteer.launch({ headless: false });
   const page = await browser.newPage();
   await page.goto('https://developers.google.com/', { waitUntil: 'load' });
   // Find the search box using a suitable CSS selector.
   const search = await page.$('devsite-search > form > div.devsite-search-container');
   // Click to expand search box and focus it.
   await search.click();
   // Enter search string and press Enter.
   await search.type('puppetaria');
   await search.press('Enter');
 })();

How elements are identified using query selectors is therefore a defining part of the Puppeteer experience. Until now, selectors in Puppeteer have been limited to CSS and XPath selectors which, albeit very expressive, can have drawbacks for persisting browser interactions in scripts.

Syntactic vs. semantic selectors

CSS selectors are syntactic in nature; they are tightly bound to the inner workings of the textual representation of the DOM tree in the sense that they reference IDs and class names from the DOM. As such, they provide an integral tool for web developers for modifying or adding styles to an element in a page, but in that context the developer has full control over the page and its DOM tree.

On the other hand, a Puppeteer script is an external observer of a page, so when CSS selectors are used in this context, it introduces hidden assumptions about how the page is implemented which the Puppeteer script has no control over.

The effect is that such scripts can be brittle and susceptible to source code changes. Suppose, for example, that one uses Puppeteer scripts for automated testing of a web application containing the node <button>Submit</button> as the third child of the body element. One snippet from a test case might look like this:

const button = await page.$('body > :nth-child(3)'); // problematic selector
await button.click();

Here, we are using the selector 'body > :nth-child(3)' to find the submit button, but this is tightly bound to exactly this version of the webpage. If an element is later added above the button, this selector no longer works!

This is not news to test writers: Puppeteer users already attempt to pick selectors that are robust to such changes. With Puppetaria, we give users a new tool in this quest.

Puppeteer now ships with an alternative query handler based on querying the accessibility tree rather than relying on CSS selectors. The underlying philosophy here is that if the concrete element we want to select has not changed, then the corresponding accessibility node should not have changed either.

We name such selectors “ARIA selectors” and support querying by the computed accessible name and role in the accessibility tree. Compared to CSS selectors, these properties are semantic in nature. They are not tied to syntactic properties of the DOM but are instead descriptors for how the page is observed through assistive technologies such as screen readers.

In the test script example above, we could instead use the selector aria/Submit[role="button"] to select the wanted button, where Submit refers to the accessible name of the element:

const button = await page.$('aria/Submit[role="button"]');
await button.click();

Now, if we later decide to change the text content of our button from Submit to Done, the test will fail again, but in this case that is desirable: by changing the name of the button we change the page's content, as opposed to its visual presentation or how it happens to be structured in the DOM. Our tests should warn us about such changes to ensure that they are intentional.

Going back to the larger example with the search bar, we could leverage the new aria handler and replace

const search = await page.$('devsite-search > form > div.devsite-search-container');

with

const search = await page.$('aria/Open search[role="button"]');

to locate the search bar!

More generally, we believe that using such ARIA selectors can provide the following benefits to Puppeteer users:

  • Make selectors in test scripts more resilient to source code changes.
  • Make test scripts more readable (accessible names are semantic descriptors).
  • Motivate good practices for assigning accessibility properties to elements.

The rest of this article dives into the details on how we implemented the Puppetaria project.

The design process

Background

As motivated above, we want to enable querying elements by their accessible name and role. These are properties of the accessibility tree, a dual to the usual DOM tree, that is used by devices such as screen readers to show webpages.

From looking at the specification for computing the accessible name, it is clear that computing the name for an element is a non-trivial task, so from the beginning we decided that we wanted to reuse Chromium’s existing infrastructure for this.

How we approached implementing it

Even limiting ourselves to using Chromium’s accessibility tree, there are quite a few ways that we could implement ARIA querying in Puppeteer. To see why, let’s first see how Puppeteer controls the browser.

The browser exposes a debugging interface via a protocol called the Chrome DevTools Protocol (CDP). This exposes functionality such as "reload the page" or "execute this piece of JavaScript in the page and hand back the result" via a language-agnostic interface.

Both the DevTools front-end and Puppeteer are using CDP to talk to the browser. To implement CDP commands, there is DevTools infrastructure inside all components of Chrome: in the browser, in the renderer, and so on. CDP takes care of routing the commands to the right place.

Puppeteer actions such as querying, clicking, and evaluating expressions are performed by leveraging CDP commands such as Runtime.evaluate that evaluates JavaScript directly in the page context and hands back the result. Other Puppeteer actions such as emulating color vision deficiency, taking screenshots, or capturing traces use CDP to communicate directly with the Blink rendering process.

CDP

This already leaves us with two paths for implementing our querying functionality; we can:

  • Write our querying logic in JavaScript and have that injected into the page using Runtime.evaluate, or
  • Use a CDP endpoint that can access and query the accessibility tree directly in the Blink process.

We implemented 3 prototypes:

  • JS DOM traversal - based on injecting JavaScript into the page
  • Puppeteer AXTree traversal - based on using the existing CDP access to the accessibility tree
  • CDP DOM traversal - using a new CDP endpoint purpose-built for querying the accessibility tree

JS DOM traversal

This prototype does a full traversal of the DOM and uses element.computedName and element.computedRole, gated on the ComputedAccessibilityInfo launch flag, to retrieve the name and role for each element during the traversal.

Puppeteer AXTree traversal

Here, we instead retrieve the full accessibility tree through CDP and traverse it in Puppeteer. The resulting accessibility nodes are then mapped to DOM nodes.

CDP DOM traversal

For this prototype, we implemented a new CDP endpoint specifically for querying the accessibility tree. This way, the querying can happen on the back-end through a C++ implementation instead of in the page context via JavaScript.

Unit test benchmark

The following figure compares the total runtime of querying four elements 1000 times for the 3 prototypes. The benchmark was executed in 3 different configurations varying the page size and whether or not caching of accessibility elements was enabled.

Benchmark: Total runtime of querying four elements 1000 times

It is quite clear that there is a considerable performance gap between the CDP-backed querying mechanism and the two others implemented solely in Puppeteer, and the relative difference seems to increase dramatically with the page size. It is somewhat interesting to see how well the JS DOM traversal prototype responds to enabling accessibility caching. With caching disabled, the accessibility tree is computed on demand and discarded after each interaction; enabling the Accessibility domain makes Chromium cache the computed tree instead.

For the JS DOM traversal we ask for the accessible name and role for every element during the traversal, so if caching is disabled, Chromium computes and discards the accessibility tree for every element we visit. For the CDP based approaches, on the other hand, the tree is only discarded between each call to CDP, i.e. for every query. These approaches also benefit from enabling caching, as the accessibility tree is then persisted across CDP calls, but the performance boost is therefore comparatively smaller.

Even though enabling caching looks desirable here, it does come with a cost of additional memory usage. For Puppeteer scripts that, for example, record trace files, this could be problematic. We therefore decided not to enable accessibility tree caching by default. Users can turn on caching themselves by enabling the CDP Accessibility domain.

DevTools test suite benchmark

The previous benchmark showed that implementing our querying mechanism at the CDP layer gives a performance boost in a clinical unit-test scenario.

To see if the difference is pronounced enough to make it noticeable in a more realistic scenario of running a full test suite, we patched the DevTools end-to-end test suite to make use of the JavaScript and CDP-based prototypes and compared the runtimes. In this benchmark, we changed a total of 43 selectors from [aria-label=…] to a custom query handler aria/…, which we then implemented using each of the prototypes.

Some of the selectors are used multiple times in test scripts, so the actual number of executions of the aria query handler was 113 per run of the suite. The total number of query selections was 2253, so only a fraction of the query selections happened through the prototypes.

Benchmark: e2e test suite

As seen in the figure above, there is a discernible difference in the total runtime. The data is too noisy to conclude anything specific, but it is clear that the performance gap between the two prototypes shows in this scenario as well.

A new CDP endpoint

In light of the above benchmarks, and since the launch flag-based approach was undesirable in general, we decided to move forward with implementing a new CDP command for querying the accessibility tree. Now, we had to figure out the interface of this new endpoint.

For our use case in Puppeteer, we need the endpoint to take so-called RemoteObjectIds as arguments and, to enable us to find the corresponding DOM elements afterwards, return a list of objects containing the backendNodeIds of the DOM elements.

As seen in the chart below, we tried quite a few approaches satisfying this interface. From this, we found that the size of the returned objects, i.e. whether we returned full accessibility nodes or only the backendNodeIds, made no discernible difference. On the other hand, we found that using the existing NextInPreOrderIncludingIgnored function was a poor choice for implementing the traversal logic here, as it yielded a noticeable slow-down.

Benchmark: Comparison of CDP-based AXTree traversal prototypes

Wrapping it all up

Now, with the CDP endpoint in place, we implemented the query handler on the Puppeteer side. The brunt of the work here was restructuring the query handling code so that queries can resolve directly through CDP instead of through JavaScript evaluated in the page context.

What’s next?

The new aria handler shipped with Puppeteer v5.4.0 as a built-in query handler. We are looking forward to seeing how users adopt it into their test scripts, and we cannot wait to hear your ideas on how we can make this even more useful!


What's New In DevTools (Chrome 88)


Faster DevTools startup

DevTools startup is now ~37% faster in terms of JavaScript compilation (from 6.9s down to 5s)! 🎉

The team did some optimization to reduce the performance overhead of serialization, parsing, and deserialization during startup.

There will be an upcoming engineering blog post explaining the implementation in detail. Stay tuned!

Chromium issue: 1029427

New CSS angle visualization tools

DevTools now has better support for CSS angle debugging!

CSS angle

When an HTML element on your page has a CSS angle applied to it (e.g. background: linear-gradient(angle, color-stop1, color-stop2) or transform: rotate(angle)), a clock icon is shown next to the angle in the Styles pane. Click the clock icon to toggle the clock overlay. Click anywhere in the clock, or drag the needle, to change the angle!
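For instance, both declarations below contain an angle that gets a clock icon (the values are illustrative):

/* Each angle value below gets a clock icon in the Styles pane. */
.banner {
  background: linear-gradient(45deg, #2196f3, #ff5722);
  transform: rotate(10deg);
}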

There are mouse and keyboard shortcuts to change the angle value as well; check out our documentation to learn more!

Chromium issues: 1126178, 1138633

Emulate unsupported image types

DevTools added two new emulations in the Rendering tab, allowing you to disable AVIF and WebP image formats. These new emulations make it easier for developers to test different image loading scenarios without having to switch browsers.

Suppose we have the following HTML code to serve an image in AVIF and WebP for newer browsers, with a fallback PNG image for older browsers.

<picture>
  <source srcset="test.avif" type="image/avif">
  <source srcset="test.webp" type="image/webp">
  <img src="test.png" alt="A test image">
</picture>

Open the Rendering tab, select “Disable AVIF image format” and refresh the page. Hover over the img src. The current image src (currentSrc) is now the fallback WebP image.

Emulate image types

Chromium issue: 1130556

Simulate storage quota size in the Storage pane

You can now override storage quota size in the Storage pane. This feature gives you the ability to simulate different devices and test the behavior of your apps in low disk availability scenarios.

Go to Application > Storage, enable the Simulate custom storage quota checkbox, and enter any valid number to simulate the storage quota.
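To check what your app sees while the override is active, you can read the current estimate in code; a minimal sketch:

// Reports the (possibly simulated) quota along with current usage.
const { usage, quota } = await navigator.storage.estimate();
console.log(`Using ${usage} of ${quota} bytes`);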

Simulate storage quota size

Chromium issues: 945786, 1146985

New Web Vitals lane in the Performance panel recordings

Performance recordings now have an option to display Web Vitals information.

After recording your load performance, enable the Web Vitals checkbox in the Performance panel to view the new Web Vitals lane.

The lane currently displays Web Vitals information such as First Contentful Paint (FCP), Largest Contentful Paint (LCP) and Layout Shift (LS).
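If you also want to capture one of these metrics from your own code, a simplified PerformanceObserver sketch like this logs LCP candidates (the web-vitals library handles the edge cases for you):

// Simplified sketch: log Largest Contentful Paint candidates.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate:', entry.startTime, entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });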

Check out web.dev/vitals to learn more about how to optimize user experience with the Web Vitals metrics.

Web Vitals lane

Chromium issue: N/A

Report CORS errors in the Network panel

DevTools now shows a CORS error when a network request fails due to Cross-Origin Resource Sharing (CORS).

In the Network panel, observe the failed CORS network request. The Status column shows “CORS error”. Hover over the error; the tooltip now shows the error code. Previously, DevTools only showed a generic “(failed)” status for CORS errors.

This lays the foundation for our next enhancements: providing more detailed descriptions of the CORS problems!

CORS errors

Chromium issue: 1141824

Frame details view updates

Cross-origin isolation information in the Frame details view

The cross-origin isolated status is now displayed under the Security & Isolation section.

The new API availability section displays the availability of SharedArrayBuffers (SAB) and whether they can be shared using postMessage().

A deprecation warning will show if SAB and postMessage() are currently available but the context is not cross-origin isolated. Learn more about cross-origin isolation and why it will be required for features like SharedArrayBuffers in this article.

Cross-origin information

Chromium issue: 1139899

New Web Workers information in the Frame details view

DevTools now displays dedicated web workers under the frame that creates them.

In the Application panel, expand a frame with web workers, then select a worker under the Workers tree to view the web worker's details.

Web workers information

Chromium issues: 1122507, 1051466

Display opener frame details for opened windows

You can now view details about which frame caused another window to be opened.

Select an opened window under the Frames tree to view the window details. Click on the Opener Frame link to reveal the opener in the Elements panel.

Opener frame details

Chromium issue: 1107766

Open Network panel from the Service Workers pane

View all service worker (SW) request routing information with the new Network requests link. This gives developers added context when debugging the SW.

Go to Application > Service Workers and click the Network requests link of a SW. The Network panel opens in the bottom panel, displaying all service-worker-related requests (the network requests are filtered by “is:service-worker-intercepted”).

Open Network panel from the Service Workers

Chromium issue: N/A

New copy options in the Network panel

Copy property value

The new “Copy value” option in the context menu lets you copy the property value of a network request.

Copy property value

In the Network panel, click on a network request to open the Headers pane. Right-click one of the properties under these sections:

  • Request Payload (JSON)
  • Form Data
  • Query String Parameters
  • Request Headers
  • Response Headers

Then, you can select Copy value to copy the property value to your clipboard.

Chromium issue: 1132084

Copy stacktrace for network initiator

Right-click a network request, then select Copy stacktrace to copy the stacktrace to your clipboard.

Copy stacktrace

Chromium issue: 1139615

Preview Wasm variable value on mouseover

When hovering over a variable in WebAssembly (Wasm) disassembly while paused on a breakpoint, DevTools now shows the variable's current value.

In the Sources panel, open a Wasm file, set a breakpoint, and refresh the page. Hover over a variable to see its value.

Preview Wasm variable on mouseover

Chromium issues: 1058836, 1071432

Consistent units of measurement for file/memory sizes

DevTools now consistently uses kB for displaying file and memory sizes. Previously, DevTools mixed kB (1000 bytes) and KiB (1024 bytes). For example, the Network panel previously used “kB” labels but actually performed calculations using KiB, which caused needless confusion.

Chromium issue: 1035309

Experimental features

CSS Flexbox debugging tools

Flexbox debugging tools are coming!

For starters, DevTools now shows a flex badge in the Elements panel for elements with display: flex applied to them.

Besides that, new alignment icons are added for the following flexbox properties:

  • flex-direction
  • align-items
  • align-content
  • align-self
  • justify-items
  • justify-content

On top of that, these icons are context-aware. The icon direction will be adjusted according to:

  • flex-direction
  • direction
  • writing-mode

These icons aim to help you better visualize the flexbox layout of the page.

CSS Flex debugging

Here is the design doc of the Flexbox tooling features. More features will be added soon.

Give it a try and let us know what you think!

Chromium issues: 1144090, 1139945

Customize chords keyboard shortcuts

DevTools added experimental support for customizing keyboard shortcuts in the last release.

You can now create chords (a.k.a. multi-keypress shortcuts) in the shortcut editor.

Go to Settings > Shortcuts, hover over a command, and click the Edit button (pen icon) to customize the chord shortcut.

Chords keyboard shortcuts

Chromium issue: 174309



New in Chrome 87


Chrome 87 is starting to roll out to stable now.

Here's what you need to know:

I’m Pete LePage. Working and shooting from home, let’s dive in and see what’s new for developers in Chrome 87!

Chrome Dev Summit

Chrome Dev Summit logo

The Chrome Dev Summit is back on December 9th and 10th with its 8th chapter. But this time, we're coming to you. We're bringing all the latest updates, lots of new content, and plenty of Chromies.

There are a ton of great talks, workshops, office hours, and more, and we'll be in the YouTube chat to answer your questions. Learn more, and find out how you can not just watch, but participate!

Camera pan, tilt, zoom

Most meeting rooms at Google have cameras with pan, tilt, and zoom capabilities, so that the camera can be pointed at the people in the room. But it’s not just fancy conference cameras that support PTZ (pan, tilt, zoom); many webcams support it too.

Starting in Chrome 87, once a user has granted permission, you can control the PTZ features on a camera.

Feature detection is a little different from what you're probably used to. You’ll need to call navigator.mediaDevices.getSupportedConstraints() to see if the browser supports PTZ.

const supports = navigator.mediaDevices.getSupportedConstraints();

if (supports.pan && supports.tilt && supports.zoom) {
  // Browser supports camera PTZ.
}
Permission prompt for PTZ

Then, as with other powerful APIs, the user will need to grant permission, not just to the camera but also to its PTZ functionality.

To request permission for PTZ functionality, call navigator.mediaDevices.getUserMedia() with the PTZ constraints. This will prompt the user to grant both regular camera and camera with PTZ permissions.

try {
  const opts = {video: {pan: true, tilt: true, zoom: true}};
  const stream = await navigator.mediaDevices.getUserMedia(opts);
  document.querySelector("#video").srcObject = stream;
} catch (error) {
  // User denies prompt, or
  // matching media is not available.
}

Finally, calls to MediaStreamTrack.getCapabilities() and getSettings() will tell you what the camera supports and its current settings.

const [videoTrack] = stream.getVideoTracks();
const capabilities = videoTrack.getCapabilities();
const settings = videoTrack.getSettings();

if ('pan' in settings) {
  enablePan(capabilities, settings);
}
// Similar for tilt and zoom...

Once the user has granted permission, you can then call videoTrack.applyConstraints() to adjust the pan, tilt, and zoom.

function enablePan(capabilities, settings) {
  const input = document.getElementById('rangePan');
  input.min = capabilities.pan.min;
  input.max = capabilities.pan.max;
  input.step = capabilities.pan.step;
  input.value = settings.pan;

  input.addEventListener('input', async () => {
    const opts = { advanced: [{ pan: input.value }] };
    await videoTrack.applyConstraints(opts);
  });
}

Personally, I’m really excited about PTZ, so I can hide my messy kitchen, but you'll have to check out the video to see that!

Francois has a great post Control camera pan, tilt, and zoom on web.dev with code samples, complete details on the best way to request permission, and a demo, so you can try it out and see if your webcam supports PTZ.

Range requests and service workers

HTTP range requests, which have been available in major browsers for several years, allow servers to send requested data to the client in chunks. This is especially useful for large media files, where the user experience is improved through smoother playback, enhanced scrubbing, and better pause and resume functions.

Historically, range requests and service workers did not work well together, forcing developers to build workarounds. Starting in Chrome 87, passing range requests through to the network from inside a service worker will "just work."

self.addEventListener('fetch', (event) => {
  // The Range: header will pass through
  // in browsers that behave correctly.
  event.respondWith(fetch(event.request));
});
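
For context, a range request is just an ordinary fetch that carries a Range header. Here's a minimal sketch (the /video.mp4 path is a placeholder):

// Request only the first kilobyte of a media file.
const response = await fetch('/video.mp4', {
  headers: { Range: 'bytes=0-1023' },
});
// 206 Partial Content if the server honors the range,
// 200 with the full body if it doesn't.
console.log(response.status);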

For an explanation of the issues with range requests and what's changed in Chrome 87, see Jeff’s article Handling range requests in a service worker on web.dev.

Origin Trial: Font access API

Photopea image editor

Bringing design apps like Figma, Gravit, and Photopea to the web is great, and we’re seeing a lot more coming. While the web can offer a plethora of fonts, not everything is available on the web: fonts installed locally on a user's machine are out of reach.

For many designers, there are some fonts installed on their computer that are critical to their work. For example, corporate logo fonts, or specialized fonts for CAD and other design applications.

With the font access API, which starts an origin trial in Chrome 87, a site can now enumerate the installed fonts, giving users access to all of the fonts on their system.

// Query for all available fonts and log metadata.
const fonts = navigator.fonts.query();
try {
  for await (const metadata of fonts) {
    console.log(`${metadata.family} (${metadata.fullName})`);
  }
} catch (err) {
  console.error(err);
}

// Roboto (Roboto Black)
// Roboto (Roboto Black Italic)
// Roboto (Roboto Bold)

And sites can hook in at lower levels to get access to the font bytes, allowing them to do their own OpenType layout implementation, or to perform vector filters or transforms on the glyph shapes.

const fonts = navigator.fonts.query();
try {
  for await (const metadata of fonts) {
    const sfnt = await metadata.blob();
    makeMagic(metadata.family, sfnt);
  }
} catch (err) {
  console.error(err);
}

Check out Tom’s article Use advanced typography with local fonts on web.dev with all the details, and the link to the Origin Trial so you can try it yourself.

And more

  • Transferable Streams - ReadableStream, WritableStream, and TransformStream objects can now be passed as arguments to postMessage(); see the sketch after this list.
  • We’ve implemented the most granular flow-relative features of the CSS Logical Properties and Values spec, including shorthands and offsets to make these logical properties and values a bit easier to write. For example, a single margin-block property can replace separate margin-block-start and margin-block-end rules.
  • New @font-face descriptors ascent-override, descent-override, and line-gap-override have been added to override the metrics of a font.
  • There are several new text-decoration and underline properties.
  • And there are a number of changes related to cross-origin isolation.
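
For the first item, a minimal sketch of what transferring a stream might look like (worker.js is a placeholder file name):

// main.js: create a stream and transfer it to a worker.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.close();
  },
});
const worker = new Worker('worker.js');
// Listing the stream in the transfer array moves it;
// it is no longer usable on this side after the call.
worker.postMessage({ stream }, [stream]);

// worker.js: read from the transferred stream.
self.onmessage = async ({ data }) => {
  const reader = data.stream.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    console.log(value); // 'hello'
  }
};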

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 87.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 88 is released, I’ll be right here to tell you -- what’s new in Chrome!

Simulating color vision deficiencies in the Blink Renderer

This article describes why and how we implemented color vision deficiency simulation in DevTools and the Blink Renderer.

Note: If you prefer watching a presentation over reading articles, then enjoy the video below! If not, skip the video and read on.

Background: bad color contrast

Low-contrast text is the most common automatically-detectable accessibility issue on the web.

A list of common accessibility issues on the web. Low-contrast text is by far the most common issue.

According to WebAIM’s accessibility analysis of the top 1-million websites, over 86% of home pages have low contrast. On average, each home page has 36 distinct instances of low-contrast text.

Using DevTools to find, understand, and fix contrast issues

Chrome DevTools has several tools that help developers and designers improve contrast and pick more accessible color schemes for web apps.

We’ve recently added a new tool to this set, and it’s a bit different from the others. The existing tools mainly focus on surfacing contrast ratio information and giving you options to fix it. We realized that DevTools was still missing a way for developers to get a deeper understanding of this problem space. To address this, we implemented vision deficiency simulation in the DevTools Rendering tab.

In Puppeteer, the new page.emulateVisionDeficiency(type) API lets you programmatically enable these simulations.

Color vision deficiencies

Roughly 1 in 20 people suffer from a color vision deficiency (also known by the less accurate term “color blindness”). Such impairments make it harder to tell different colors apart, which can amplify contrast issues.

A colorful picture of melted crayons, with no color vision deficiencies simulated.
The impact of simulating achromatopsia on a colorful picture of melted crayons.
The impact of simulating deuteranopia on a colorful picture of melted crayons.
The impact of simulating protanopia on a colorful picture of melted crayons.
The impact of simulating tritanopia on a colorful picture of melted crayons.

As a developer with regular vision, you might see DevTools display a bad contrast ratio for color pairs that visually look okay to you. This happens because the contrast ratio formulas take into account these color vision deficiencies! You might still be able to read low-contrast text in some cases, but people with vision impairments don’t have that privilege.

By letting designers and developers simulate the effect of these vision deficiencies on their own web apps, we aim to provide the missing piece: not only can DevTools help you find and fix contrast issues, now you can also understand them!

Simulating color vision deficiencies with HTML, CSS, SVG, and C++

Before we dive into the Blink Renderer implementation of our feature, it helps to understand how you’d implement equivalent functionality using web technology.

You can think of each of these color vision deficiency simulations as an overlay covering the entire page. The Web Platform has a way to do that: CSS filters! With the CSS filter property, you can use some predefined filter functions, such as blur, contrast, grayscale, hue-rotate, and many more. For even more control, the filter property also accepts a URL which can point to a custom SVG filter definition:

<style>
  :root {
    filter: url(#deuteranopia);
  }
</style>
<svg>
  <filter id="deuteranopia">
    <feColorMatrix values="0.367  0.861 -0.228  0.000  0.000
                           0.280  0.673  0.047  0.000  0.000
                          -0.012  0.043  0.969  0.000  0.000
                           0.000  0.000  0.000  1.000  0.000">
  </filter>
</svg>

The above example uses a custom filter definition based on a color matrix. Conceptually, every pixel’s [Red, Green, Blue, Alpha] color value is matrix-multiplied to create a new color [R′, G′, B′, A′].

Each row in the matrix contains 5 values: a multiplier for (from left to right) R, G, B, and A, plus a constant shift value. There are 4 rows: the first row of the matrix is used to compute the new Red value, the second row Green, the third row Blue, and the last row Alpha.
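
To make that concrete, here is a small JavaScript sketch (not Blink's actual code) of how a single pixel would be transformed, with channel values normalized to the 0..1 range:

// Apply a 5x4 color matrix to one RGBA pixel.
function applyColorMatrix(matrix, [r, g, b, a]) {
  const input = [r, g, b, a, 1]; // the trailing 1 picks up the constant shift
  return matrix.map(
      (row) => row.reduce((sum, m, i) => sum + m * input[i], 0));
}

const deuteranopia = [
  [ 0.367, 0.861, -0.228, 0, 0],
  [ 0.280, 0.673,  0.047, 0, 0],
  [-0.012, 0.043,  0.969, 0, 0],
  [ 0.000, 0.000,  0.000, 1, 0],
];

// Pure red becomes a dimmer yellow-brown under deuteranopia.
console.log(applyColorMatrix(deuteranopia, [1, 0, 0, 1]));
// → [0.367, 0.28, -0.012, 1] (negative values are clamped to 0 in practice)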

You might be wondering where the exact numbers in our example come from. What makes this color matrix a good approximation of deuteranopia? The answer is: science! The values are based on a physiologically accurate color vision deficiency simulation model by Machado, Oliveira, and Fernandes.

Anyway, we have this SVG filter, and we can now apply it to arbitrary elements on the page using CSS. We can repeat the same pattern for other vision deficiencies. Here’s a demo of what that looks like:

A photo of melted crayons

The same photo of melted crayons, optionally with CSS and SVG filter effects applied

If we wanted to, we could build our DevTools feature as follows: when the user emulates a vision deficiency in the DevTools UI, we inject the SVG filter into the inspected document, and then we apply the filter style on the root element. However, there are several problems with that approach:

  • The page might already have a filter on its root element, which our code might then override.
  • The page might already have an element with id="deuteranopia", clashing with our filter definition.
  • The page might rely on a certain DOM structure, and by inserting the <svg> into the DOM we might violate these assumptions.

Edge cases aside, the main problem with this approach is that we’d be making programmatically observable changes to the page. If a DevTools user inspects the DOM, they might suddenly see an <svg> element they never added, or a CSS filter they never wrote. That would be confusing! To implement this functionality in DevTools, we need a solution that doesn’t have these drawbacks.

Let’s see how we can make this less intrusive. There are two parts to this solution that we need to hide: 1) the CSS style with the filter property, and 2) the SVG filter definition, which is currently part of the DOM.

<!-- Part 1: the CSS style with the filter property -->
<style>
  :root {
    filter: url(#deuteranopia);
  }
</style>
<!-- Part 2: the SVG filter definition -->
<svg>
  <filter id="deuteranopia">
    <feColorMatrix values="0.367  0.861 -0.228  0.000  0.000
                           0.280  0.673  0.047  0.000  0.000
                          -0.012  0.043  0.969  0.000  0.000
                           0.000  0.000  0.000  1.000  0.000">
  </filter>
</svg>

Avoiding the in-document SVG dependency

Let’s start with part 2: how can we avoid adding the SVG to the DOM? One idea is to move it to a separate SVG file. We can copy the <svg>…</svg> from the above HTML and save it as filter.svg — but we need to make some changes first! Inline SVG in HTML follows the HTML parsing rules. That means you can get away with things like omitting quotes around attribute values in some cases. However, SVG in separate files is supposed to be valid XML — and XML parsing is way more strict than HTML. Here’s our SVG-in-HTML snippet again:

<svg>
  <filter id="deuteranopia">
    <feColorMatrix values="0.367  0.861 -0.228  0.000  0.000
                           0.280  0.673  0.047  0.000  0.000
                          -0.012  0.043  0.969  0.000  0.000
                           0.000  0.000  0.000  1.000  0.000">
  </filter>
</svg>

To make this valid standalone SVG (and thus XML), we need to make two changes. Can you guess which?

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="deuteranopia">
    <feColorMatrix values="0.367  0.861 -0.228  0.000  0.000
                           0.280  0.673  0.047  0.000  0.000
                          -0.012  0.043  0.969  0.000  0.000
                           0.000  0.000  0.000  1.000  0.000" />
  </filter>
</svg>

The first change is the XML namespace declaration at the top. The second addition is the so-called “solidus” — the slash that indicates the <feColorMatrix> tag both opens and closes the element. The HTML parser couldn’t care less about /> vs. >, but in XML the difference matters.

Anyway, with those changes, we can finally save this as a valid SVG file, and point to it from the CSS filter property value in our HTML document:

<style>
  :root {
    filter: url(filter.svg#deuteranopia);
  }
</style>

Hurrah, we no longer have to inject SVG into the document! That’s already a lot better. But… we now depend on a separate file. That’s still a dependency. Can we somehow get rid of it?

As it turns out, we don’t actually need a file. We can encode the entire file within a URL by using a data URL. To make this happen, we literally take the contents of the SVG file we had before, add the data: prefix, configure the proper MIME type, and we’ve got ourselves a valid data URL that represents the very same SVG file:

data:image/svg+xml,
  <svg xmlns="http://www.w3.org/2000/svg">
    <filter id="deuteranopia">
      <feColorMatrix values="0.367  0.861 -0.228  0.000  0.000
                             0.280  0.673  0.047  0.000  0.000
                            -0.012  0.043  0.969  0.000  0.000
                             0.000  0.000  0.000  1.000  0.000" />
    </filter>
  </svg>

The benefit is that now, we no longer need to store the file anywhere, or load it from disk or over the network just to use it in our HTML document. So instead of referring to the filename like we did before, we can now point to the data URL:

<style>
  :root {
    filter: url('data:image/svg+xml,\
      <svg xmlns="http://www.w3.org/2000/svg">\
        <filter id="deuteranopia">\
          <feColorMatrix values="0.367  0.861 -0.228  0.000  0.000\
                                 0.280  0.673  0.047  0.000  0.000\
                                -0.012  0.043  0.969  0.000  0.000\
                                 0.000  0.000  0.000  1.000  0.000" />\
        </filter>\
      </svg>#deuteranopia');
  }
</style>

At the end of the URL, we still specify the ID of the filter we want to use, just like before. Note that there’s no need to Base64-encode the SVG document in the URL — doing so would only hurt readability and increase file size. We added backslashes at the end of each line to ensure the newline characters in the data URL don’t terminate the CSS string literal.

So far, we’ve only talked about how to simulate vision deficiencies using web technology. Interestingly, our final implementation in the Blink Renderer is actually quite similar. Here’s a C++ helper utility we’ve added to create a data URL with a given filter definition, based on the same technique:

AtomicString CreateFilterDataUrl(const char* piece) {
  AtomicString url =
      "data:image/svg+xml,"
        "<svg xmlns=\"http://www.w3.org/2000/svg\">"
          "<filter id=\"f\">" +
            StringView(piece) +
          "</filter>"
        "</svg>"
      "#f";
  return url;
}

And here’s how we’re using it to create all the filters we need:

AtomicString CreateVisionDeficiencyFilterUrl(VisionDeficiency vision_deficiency) {
  switch (vision_deficiency) {
    case VisionDeficiency::kAchromatopsia:
      return CreateFilterDataUrl("…");
    case VisionDeficiency::kBlurredVision:
      return CreateFilterDataUrl("<feGaussianBlur stdDeviation=\"2\"/>");
    case VisionDeficiency::kDeuteranopia:
      return CreateFilterDataUrl(
          "<feColorMatrix values=\""
          " 0.367  0.861 -0.228  0.000  0.000 "
          " 0.280  0.673  0.047  0.000  0.000 "
          "-0.012  0.043  0.969  0.000  0.000 "
          " 0.000  0.000  0.000  1.000  0.000 "
          "\"/>");
    case VisionDeficiency::kProtanopia:
      return CreateFilterDataUrl("…");
    case VisionDeficiency::kTritanopia:
      return CreateFilterDataUrl("…");
    case VisionDeficiency::kNoVisionDeficiency:
      NOTREACHED();
      return "";
  }
}

Note that this technique gives us access to the full power of SVG filters without having to re-implement anything or re-invent any wheels. We’re implementing a Blink Renderer feature, but we’re doing so by leveraging the Web Platform.

Okay, so we’ve figured out how to construct SVG filters and turn them into data URLs that we can use within our CSS filter property value. Can you think of a problem with this technique? It turns out, we can’t actually rely on the data URL being loaded in all cases, since the target page might have a Content-Security-Policy that blocks data URLs. Our final Blink-level implementation takes special care to bypass CSP for these “internal” data URLs during loading.

Edge cases aside, we’ve made some good progress. Because we no longer depend on inline <svg> being present in the same document, we’ve effectively reduced our solution to just a single self-contained CSS filter property definition. Great! Now let’s get rid of that too.

Avoiding the in-document CSS dependency

Just to recap, this is where we’re at so far:

<style>
  :root {
    filter: url('data:…');
  }
</style>

We still depend on this CSS filter property, which might override a filter in the real document and break things. It would also show up when inspecting the computed styles in DevTools, which would be confusing. How can we avoid these issues? We need to find a way to add a filter to the document without it being programmatically observable to developers.

One idea that came up was to create a new Chrome-internal CSS property that behaves like filter, but has a different name, like --internal-devtools-filter. We could then add special logic to ensure this property never shows up in DevTools or in the computed styles in the DOM. We could even make sure it only works on the one element we need it for: the root element. However, this solution wouldn't be ideal: we’d be duplicating functionality that already exists with filter, and even if we try hard to hide this non-standard property, web developers could still find out about it and start using it, which would be bad for the Web Platform. We need some other way of applying a CSS style without it being observable in the DOM. Any ideas?

The CSS spec has a section introducing the visual formatting model it uses, and one of the key concepts there is the viewport. This is the visual view through which users consult the web page. A closely related concept is the initial containing block, which is kind of like a styleable viewport <div> that only exists at the spec level. The spec refers to this “viewport” concept all over the place. For example, you know how the browser shows scrollbars when the content doesn’t fit? This is all defined in the CSS spec, based on this “viewport”.

This viewport exists within the Blink Renderer as well, as an implementation detail. Here’s the code that applies the default viewport styles according to the spec:

scoped_refptr<ComputedStyle> StyleResolver::StyleForViewport() {
  scoped_refptr<ComputedStyle> viewport_style =
      InitialStyleForElement(GetDocument());
  viewport_style->SetZIndex(0);
  viewport_style->SetIsStackingContextWithoutContainment(true);
  viewport_style->SetDisplay(EDisplay::kBlock);
  viewport_style->SetPosition(EPosition::kAbsolute);
  viewport_style->SetOverflowX(EOverflow::kAuto);
  viewport_style->SetOverflowY(EOverflow::kAuto);
  // …
  return viewport_style;
}

You don't need to understand C++ or the intricacies of Blink’s Style engine to see that this code handles the viewport’s (or more accurately: the initial containing block’s) z-index, display, position, and overflow. Those are all concepts you might be familiar with from CSS! There’s some other magic related to stacking contexts, which doesn’t directly translate to a CSS property, but overall you could think of this viewport object as something that can be styled using CSS from within Blink, just like a DOM element — except it’s not part of the DOM.

This gives us exactly what we want! We can apply our filter styles to the viewport object, which visually affects the rendering, without interfering with the observable page styles or the DOM in any way.

Conclusion

To recap our little journey here, we started out by building a prototype using web technology instead of C++, and then started working on moving parts of it to the Blink Renderer.

  • We first made our prototype more self-contained by inlining data URLs.
  • We then made those internal data URLs CSP-friendly, by special-casing their loading.
  • We made our implementation DOM-agnostic and programmatically unobservable by moving styles to the Blink-internal viewport.

What’s unique about this implementation is that our HTML/CSS/SVG prototype ended up influencing the final technical design. We found a way to use the Web Platform, even within the Blink Renderer!

For more background, check out our design proposal or the Chromium tracking bug which references all related patches.

The Chromium Chronicle: Restricting Target Visibility

Episode 15: by Joe Mason in Montreal (November 2020)
Previous episodes

Chrome is a big project with many sub-systems. It’s common to find code written for one component that would be useful elsewhere, but might have hidden restrictions. For safety, limit external access to dangerous functionality. For instance, a custom function tuned for specific performance needs:

// Blazing fast for 2-char strings, O(n^3) otherwise.
std::string ConcatShortStringsFast(const std::string& a, const std::string& b);

There are several ways to restrict access. GN visibility rules stop code outside your component from depending on a target. By default targets are visible to all, but you can modify that:

# In components/restricted_component/BUILD.gn
visibility = [
  # Applies to all targets in this file. Only the given targets can depend on them.
  "//components/restricted_component:*",
  "//components/authorized_other_component:a_single_target",
]
source_set("internal") {
  # This dangerous target should be locked down even more.
  visibility = [ "//components/restricted_component:privileged_target" ]
}

Visibility declarations are validated with gn check, which runs as part of every GN build.

Another mechanism is DEPS include_rules, which limits access to header files. Every directory inherits include_rules from its parent, and can modify those rules in its own DEPS file. All header files included from outside directories must be allowed by the include_rules.

# In //components/authorized_other_component/DEPS
include_rules = [
  # Common directories like //base are inherited from //components/DEPS or //DEPS.
  # Also allow includes from restricted_component, but not restricted_component/internal.
  "+components/restricted_component",
  "-components/restricted_component/internal",
  # But do allow a single header from internal, for testing.
  "+components/restricted_component/internal/test_support.h",
]

To ensure these dependencies are appropriate, changes that add a directory to include_rules must be approved by that directory's OWNERS. No approval is needed to restrict a directory using include_rules! You can ensure that everyone changing your component remembers not to use certain headers by adding an include_rule forbidding them.

include_rules are checked by the presubmit, so you won’t see any errors until you try to upload a change. To test include_rules without uploading, run buildtools/checkdeps/checkdeps.py <directory>.

Deprecations and removals in Chrome 88

Chrome 88 beta was released on December 3, 2020 and is expected to become the stable version in the third week of January 2021.

Don't allow popups during page unload (enterprises)

Since Chrome 80, pages have no longer been able to open a new page during unloading using window.open(). Since then enterprises have been able to use the AllowPopupsDuringPageUnload policy flag to allow popups during page unload. Starting in Chrome 88, this flag is no longer supported.

Web Components v0 removed

Web Components v0 have been in a reverse origin trial since Chrome 80. This allowed users of the API time to upgrade their sites while ensuring that new adopters of Web Components used version 1. The reverse origin trial ends with Chrome 87, making Chrome 88 the first in which version 0 is no longer supported. The Web Components v1 APIs replace Web Components v0 and are fully supported in Chrome, Safari, Firefox, and Edge. This removal covers the items listed below.

  • Custom Elements v0
  • HTML Imports
  • Shadow DOM v0

FTP support removed

Chrome has removed support for FTP URLs. The legacy FTP implementation in Chrome has no support for encrypted connections (FTPS) or proxies. Usage of FTP in the browser is sufficiently low that it is no longer viable to invest in improving the existing FTP client. In addition, more capable FTP clients are available on all affected platforms.

Google Chrome 72 and later removed support for fetching document subresources over FTP and rendering of top-level FTP resources. Navigating to FTP URLs results in a directory listing or a download, depending on the type of resource. A bug in Google Chrome 74 and later resulted in dropping support for accessing FTP URLs over HTTP proxies. Proxy support for FTP was removed entirely in Google Chrome 76.

The remaining capabilities of Google Chrome’s FTP implementation were restricted to either displaying a directory listing or downloading a resource over unencrypted connections.

In Chrome 77, FTP support was disabled by default for fifty percent of users but was available with flags.

In Chrome 88 all FTP support is disabled.

DevTools architecture refresh: migrating to Web Components

When DevTools was first created many, many years ago, the team chose to build a bespoke UI framework. This was a reasonable choice at the time and has served DevTools well.

But since then various features have landed in the platform and one of those, Web Components, is a great fit for building new UI elements in DevTools. By leaning on what the platform provides we can greatly reduce the amount of bespoke UI code we have to maintain and invest more in building features for DevTools, rather than supporting bespoke infrastructure.

To help with the transition, we created a guide to building UI elements in DevTools to share with the wider DevTools team. Some of the guide is bespoke to DevTools and its architecture, which brings its own set of constraints, but some of it is generic guidance on the approaches we’ve used to build, structure, and test Web Components.

Today, we’re making this document publicly available at goo.gle/building-ui-devtools. If you’ve ever wondered more about how Web Components are used in large, real world applications, or some of the challenges that come with integrating components into a large, pre-existing codebase, this document could help and provide some answers. If you have any questions about our guidelines, feel free to tweet me.

Debugging WebAssembly with modern tools

The road so far

A year ago, Chrome announced initial support for native WebAssembly debugging in Chrome DevTools.

We demonstrated basic stepping support and talked about the opportunities that using DWARF information instead of source maps opens up for us in the future:

  • Resolving variable names
  • Pretty-printing types
  • Evaluating expressions in source languages
  • …and much more!

Today, we’re excited to showcase the promised features coming to life and the progress the Emscripten and Chrome DevTools teams have made over this year, in particular for C and C++ apps.

Before we start, please keep in mind that this is still a beta version of the new experience. You need to use the latest version of all tools, at your own risk, and if you run into any issues, please report them to https://bugs.chromium.org/p/chromium/issues/entry?template=DevTools+issue.

Let’s start with the same simple C example as the last time:

#include <stdlib.h>

void assert_less(int x, int y) {
  if (x >= y) {
    abort();
  }
}

int main() {
  assert_less(10, 20);
  assert_less(30, 20);
}

To compile it, we use the latest Emscripten and pass the -g flag, just like in the original post, to include debug information:

emcc -g temp.c -o temp.html

Now we can serve the generated page from a localhost HTTP server (for example, with serve), and open it in the latest Chrome Canary.

This time we’ll also need a helper extension that integrates with Chrome DevTools and helps it make sense of all the debugging information encoded in the WebAssembly file. Please install it by going to this link: goo.gle/wasm-debugging-extension

You’ll also want to enable WebAssembly debugging in the DevTools Experiments. Open Chrome DevTools, click the gear icon in the top right corner of the DevTools pane, go to the Experiments panel, and tick WebAssembly Debugging: Enable DWARF support.

Experiments pane of the DevTools settings

When you close the Settings, DevTools will suggest reloading itself to apply the settings, so let’s do just that. That’s it for the one-off setup.

Now we can go back to the Sources panel, enable Pause on exceptions (⏸ icon), then check Pause on caught exceptions and reload the page. You should see the DevTools paused on an exception:

Screenshot of the Sources panel showing how to enable "Pause on caught exceptions"

By default, it stops in Emscripten-generated glue code, but on the right you can see a Call Stack view representing the stacktrace of the error, and you can navigate to the original C line that invoked abort:

DevTools paused in the `assert_less` function and showing values of `x` and `y` in the Scope view

Now, if you look in the Scope view, you can see the original names and values of variables in the C/C++ code, and no longer have to figure out what mangled names like $localN mean and how they relate to the source code you’ve written.

This applies not only to primitive values like integers, but to compound types like structures, classes, arrays, etc., too!

Rich type support

Let’s take a look at a more complicated example to show them off. This time, we’ll draw a Mandelbrot fractal with the following C++ code:

#include <SDL2/SDL.h>
#include <complex>

int main() {
  // Init SDL.
  int width = 600, height = 600;
  SDL_Init(SDL_INIT_VIDEO);
  SDL_Window* window;
  SDL_Renderer* renderer;
  SDL_CreateWindowAndRenderer(width, height, SDL_WINDOW_OPENGL, &window,
                              &renderer);

  // Generate a palette with random colors.
  enum { MAX_ITER_COUNT = 256 };
  SDL_Color palette[MAX_ITER_COUNT];
  srand(time(0));
  for (int i = 0; i < MAX_ITER_COUNT; ++i) {
    palette[i] = {
        .r = (uint8_t)rand(),
        .g = (uint8_t)rand(),
        .b = (uint8_t)rand(),
        .a = 255,
    };
  }

  // Calculate and draw the Mandelbrot set.
  std::complex<double> center(0.5, 0.5);
  double scale = 4.0;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      std::complex<double> point((double)x / width, (double)y / height);
      std::complex<double> c = (point - center) * scale;
      std::complex<double> z(0, 0);
      int i = 0;
      for (; i < MAX_ITER_COUNT - 1; i++) {
        z = z * z + c;
        if (abs(z) > 2.0)
          break;
      }
      SDL_Color color = palette[i];
      SDL_SetRenderDrawColor(renderer, color.r, color.g, color.b, color.a);
      SDL_RenderDrawPoint(renderer, x, y);
    }
  }

  // Render everything we've drawn to the canvas.
  SDL_RenderPresent(renderer);

  // SDL_Quit();
}

You can see that this application is still fairly small – it’s a single file containing 50 lines of code – but this time I’m also using some external APIs, like the SDL library for graphics, as well as complex numbers from the C++ standard library.

I’m going to compile it with the same -g flag as above to include debug information, and also I’ll ask Emscripten to provide the SDL2 library and allow arbitrarily-sized memory:

emcc -g mandelbrot.cc -o mandelbrot.html \
     -s USE_SDL=2 \
     -s ALLOW_MEMORY_GROWTH=1

When I visit the generated page in the browser, I can see the beautiful fractal shape with some random colors:

Demo page

When I open DevTools, once again, I can see the original C++ file. This time, however, we don’t have an error in the code (whew!), so let’s set a breakpoint at the beginning of our code instead.

When we reload the page again, the debugger will pause right inside our C++ source:

DevTools paused on the `SDL_Init` call

We can already see all our variables on the right, but only width and height are initialized at the moment, so there isn’t much to inspect.

Let’s set another breakpoint inside our main Mandelbrot loop, and resume execution to skip a bit forward.

DevTools paused inside the nested loops

At this point our palette has been filled with some random colors, and we can expand both the array itself, as well as the individual SDL_Color structures and inspect their components to verify that everything looks good (for example, that “alpha” channel is always set to full opacity). Similarly, we can expand and check the real and imaginary parts of the complex number stored in the center variable.

If you want to access a deeply nested property that is otherwise hard to navigate to via the Scope view, you can use the Console evaluation, too! However, note that more complex C++ expressions are not yet supported.

Console panel showing the result of `palette[10].r`

Let’s resume execution a few times and we can see how the inner x is changing as well – either by looking in the Scope view again, adding the variable name to the watch list, evaluating it in the console, or by hovering over the variable in the source code:

Tooltip over the variable `x` in the source showing its value `3`

From here, we can step-in or step-over C++ statements, and observe how other variables are changing too:

Tooltips and Scope view showing values of `color`, `point` and other variables

Okay, so this all works great when debug information is available, but what if we want to debug code that wasn’t built with the debugging options?

Raw WebAssembly debugging

For example, we asked Emscripten to provide a prebuilt SDL library for us, instead of compiling it ourselves from source, so – at least currently – there’s no way for the debugger to find the associated sources. Let’s step in again, this time into SDL_RenderDrawColor:

DevTools showing disassembly view of `mandelbrot.wasm`

We’re back to the raw WebAssembly debugging experience.

Now, it looks a bit scary and isn’t something most web developers will ever need to deal with, but occasionally you might want to debug a library built without debug information – whether because it’s a third-party library you have no control over, or because you’re running into one of those bugs that occurs only in production.

To aid in those cases, we’ve made some improvements to the basic debugging experience, too.

First of all, if you’ve used raw WebAssembly debugging before, you might notice that the entire disassembly is now shown in a single file – no more guessing which function a Sources entry like wasm-53834e3e/wasm-53834e3e-7 corresponds to.

New name generation scheme

We improved names in the disassembly view, too. Previously, you’d see just numeric indices or, in the case of functions, no name at all.

Now we’re generating names similarly to other disassembly tools: by using hints from the WebAssembly name section and import/export paths, and, if everything else fails, by generating them based on the type and the index of the item, like $func123. You can see how, in the screenshot above, this already helps to get slightly more readable stacktraces and disassembly.

When there is no type information available, it might be hard to inspect any values besides the primitives – for example, pointers will show up as regular integers, with no way of knowing what’s stored behind them in memory.

Memory inspection

Previously, you could only expand the WebAssembly memory object – represented by env.memory in the Scope view – to look up individual bytes. This worked in some trivial scenarios, but wasn’t particularly convenient, and didn’t allow you to reinterpret the data in formats other than byte values. We’ve added a new feature to help with this, too: a linear memory inspector.

If you right-click on the env.memory, you should now see a new option called Inspect memory:

Context menu on the `env.memory` in the Scope pane showing an "Inspect Memory" item

Once clicked, it will bring up a Memory Inspector, in which you can inspect the WebAssembly memory in hexadecimal and ASCII views, navigate to specific addresses, as well as interpret the data in different formats:

Memory Inspector pane in DevTools showing a hex and ASCII views of the memory

Advanced scenarios and caveats

Profiling WebAssembly code

When you open DevTools, WebAssembly code gets “tiered down” to an unoptimized version to enable debugging. This version is a lot slower, which means that you can’t rely on console.time, performance.now, and other methods of measuring the speed of your code while DevTools is open, as the numbers you get won’t represent the real-world performance at all.

Instead, you should use the DevTools Performance panel, which will run the code at full speed and provide you with a detailed breakdown of the time spent in different functions:

Profiling panel showing various Wasm functions

Alternatively, you can run your application with DevTools closed, and open them once finished to inspect the Console.

We’ll be improving profiling scenarios in the future, but for now it’s a caveat to be aware of. If you want to learn more about WebAssembly tiering scenarios, check out our docs on WebAssembly compilation pipeline.

Building and debugging on different machines (including Docker / host)

When building in a Docker container, a virtual machine, or on a remote build server, you will likely run into situations where the paths to the source files used during the build don’t match the paths on your own filesystem where Chrome DevTools is running. In this case, files will show up in the Sources panel but fail to load.

To fix this issue, we have implemented a path mapping functionality in the C/C++ extension options. You can use it to remap arbitrary paths and help the DevTools locate sources.

For example, if the project on your host machine is under a path C:\src\my_project, but was built inside a Docker container where that path was represented as /mnt/c/src/my_project, you can remap it back during debugging by specifying those paths as prefixes:

Options page of the C/C++ debugging extension

The first matched prefix “wins”. If you’re familiar with other C++ debuggers, this option is similar to the set substitute-path command in GDB or a target.source-map setting in LLDB.

Debugging optimized builds

Like with any other languages, debugging works best if optimizations are disabled. Optimizations might inline functions one into another, reorder code, or remove parts of the code altogether – and all of this has a chance to confuse the debugger and, consequently, you as the user.

If you don’t mind a more limited debugging experience and still want to debug an optimized build, then most of the optimizations will work as expected, except for function inlining. We plan to address the remaining issues in the future, but, for now, please use -fno-inline to disable it when compiling with any -O level optimizations, e.g.:

emcc -g temp.c -o temp.html \
     -O3 -fno-inline

Separating the debug information

Debug information preserves lots of details about your code: defined types, variables, functions, scopes, and locations – anything that might be useful to the debugger. As a result, it can often be larger than the code itself.

To speed up loading and compilation of the WebAssembly module, you might want to split out this debug information into a separate WebAssembly file. To do that in Emscripten, pass a -gseparate-dwarf=… flag with a desired filename:

emcc -g temp.c -o temp.html \
     -gseparate-dwarf=temp.debug.wasm

In this case, the main application will only store the file name temp.debug.wasm, and the helper extension will be able to locate and load it when you open DevTools.

When combined with optimizations as described above, this feature can even be used to ship almost-fully-optimized production builds of your application and later debug them with a local side file. In this case, we additionally need to override the stored URL to help the extension find the side file, for example:

emcc -g temp.c -o temp.html \
     -O3 -fno-inline \
     -gseparate-dwarf=temp.debug.wasm \
     -s SEPARATE_DWARF_URL=file://[local path to temp.debug.wasm]

To be continued…

Whew, that was a lot of new features!

With all these new integrations, Chrome DevTools becomes a viable, powerful debugger not only for JavaScript, but also for C and C++ apps, making it easier than ever to take apps built in a variety of technologies and bring them to a shared, cross-platform web.

However, our journey is not over yet. Some of the things we’ll be working on from here on:

  • Cleaning up the rough edges in the debugging experience.
  • Adding support for custom type formatters.
  • Working on improvements to the profiling for WebAssembly apps.
  • Adding support for code coverage to make it easier to find unused code.
  • Improving support for expressions in console evaluation.
  • Adding support for more languages.
  • …and more!

Meanwhile, please help us out by trying the current beta on your own code and reporting any found issues to https://bugs.chromium.org/p/chromium/issues/entry?template=DevTools+issue.

Stay tuned for future updates!

New in Chrome 88

Chrome 88 is starting to roll out to stable now.

Here's what you need to know:

I’m Pete LePage, working and shooting from home. Let’s dive in and see what’s new for developers in Chrome 88!

Manifest v3

Chrome 88 now supports extensions built with manifest v3, and you can upload them to the Chrome Web Store. Manifest v3 is a new extension platform that makes Chrome extensions more secure, performant, and privacy-respecting by default.

For example, it disallows remotely hosted code, which helps Chrome Web Store reviewers better understand what risks an extension poses. It should also allow you to update your extensions faster.

It introduces service workers as a replacement for background pages. Since service workers are only resident in memory when needed, extensions will use less system resources.

And to give users greater visibility and control over how extensions use and share their data, in a future release we will be adopting a new install flow that allows users to withhold sensitive permissions at install time.

Check out developer.chrome.com for complete details, and how to migrate your current extension to manifest v3.

CSS aspect-ratio property

Normally, only some elements, such as images, have an intrinsic aspect ratio. For them, if only the width or the height is specified, the other is automatically computed using that intrinsic aspect ratio.

<!-- Height is auto-computed from width & aspect ratio -->
<img src="..." style="width: 800px;">

In Chrome 88, the aspect-ratio property allows you to explicitly specify an aspect ratio, enabling a similar behavior.

.square {
  aspect-ratio: 1 / 1;
}

You can also use progressive enhancement to check if it’s supported in the browser and apply a fallback if necessary. Then, with the @supports not syntax, you can make your code a little cleaner!

.square {
  aspect-ratio: 1 / 1;
}

@supports not (aspect-ratio: 1 / 1) {
  .square {
    height: 4rem;
    width: 4rem;
  }
}

Thanks to Jen Simmons for calling out that this is supported in the latest Safari Technology Preview, so we should see it in Safari soon! And check out Una's demo to see it in action.

Heavy throttling of chained JS timers

Chrome 88 will heavily throttle chained JavaScript timers for hidden pages under particular conditions. This will reduce CPU usage, which will also reduce battery usage. There are some edge cases where this changes behavior, but timers are often used where a different API would be more efficient and more reliable.

That was pretty jargon heavy, and a bit ambiguous, so check out Jake's article Heavy throttling of chained JS timers beginning in Chrome 88 on developer.chrome.com for all the details.

Play billing in Trusted Web Activity

You can now use Play Billing in your Trusted Web Activity to sell digital goods and subscriptions using the new Digital Goods API. It’s available as an origin trial in Chrome 88 on Android, and we expect the origin trial to expand to Chrome OS in the next release.

Once your accounts are set up, update your Trusted Web Activity to enable Play Billing, and create your digital goods in the Play Developer Console. Then, in your PWA, add your origin trial token, and you’re ready to add the code to check for existing purchases, query for available purchases, and make new purchases.

// Get list of potential digital goods

const itemService =
  await window.getDigitalGoodsService("https://play.google.com/billing");

const details =
  await itemService.getDetails(['ripe_bananas', 'walnuts', 'pecans']);

Adriana and Andre go into more detail in their Chrome Dev Summit talk - What’s new for web apps in Play, or check out the docs.

And more

And of course there’s plenty more.

  • To conform to a change in the HTML standard, anchor tags with target="_blank" will now imply rel="noopener" by default; this helps prevent tab-napping attacks.
  • Most operating systems enable mouse acceleration by default, but that can be a problem for some games. In Chrome 88, the Pointer Lock API allows you to disable mouse acceleration. That means the same physical motion, slow or fast, results in the same rotation, providing a better gaming experience and higher accuracy.
  • And addEventListener now takes an AbortSignal as an option. Calling abort() on the associated controller removes that event listener, making it easy to shut down event listeners when no longer needed; see the sketch after this list.
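
A minimal sketch of the new addEventListener option:

const controller = new AbortController();

window.addEventListener('resize', () => {
  console.log('resized');
}, { signal: controller.signal });

// Later: removes the listener, no handler reference needed.
controller.abort();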

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 88.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel, and you’ll get an email notification whenever we launch a new video.

I’m Pete LePage, and as soon as Chrome 89 is released, I’ll be right here to tell you -- what’s new in Chrome!

What's New In DevTools (Chrome 89)

Debugging support for Trusted Types violations

Breakpoint on Trusted Type violations

You can now set breakpoints and catch exceptions on Trusted Type Violations in the Sources panel.

The Trusted Types API helps you prevent DOM-based cross-site scripting vulnerabilities. Learn how to write, review, and maintain applications free of DOM XSS vulnerabilities with Trusted Types here.

In the Sources panel, open the debugger sidebar pane. Expand the CSP Violation Breakpoints section and enable the Trusted Type violations checkbox to pause on the exceptions. Try it yourself with this demo page.
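
As a rough sketch of what such a violation looks like, assume a page served with the CSP header require-trusted-types-for 'script' (the element id and policy name below are made up):

const el = document.querySelector('#output'); // hypothetical element

// A plain string assignment to a DOM sink now throws a Trusted Types
// violation, which the new breakpoint can pause on:
// el.innerHTML = location.hash.slice(1);

// Creating a policy and passing its output to the sink instead:
const policy = trustedTypes.createPolicy('escape-html', {
  createHTML: (input) => input.replace(/</g, '&lt;'),
});
el.innerHTML = policy.createHTML(location.hash.slice(1));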

Breakpoint on Trusted Type violations

Chromium issue: 1142804

The Sources panel now shows a warning icon next to the line that violates Trusted Types. Hover over it to preview the exception. Click it to expand the Issues tab, which provides more details on the exception and guidance on how to fix it.

Link issue in the Sources panel to the Issues tab

Chromium issue: 1150883

Capture node screenshot beyond viewport

You can now capture node screenshots of a full node, including content below the fold. Previously, the screenshot was cut off for content not visible in the viewport. Full-page screenshots are now precise as well.

In the Elements panel, right click on an element and select Capture node screenshot.

Capture node screenshot beyond viewport

Chromium issue: 1003629

New Trust Tokens tab for network requests

Inspect the Trust Token network requests with the new Trust Tokens tab.

Trust Token is a new API to help combat fraud and distinguish bots from real humans, without passive tracking. Learn how to get started with Trust Tokens.

Further debugging support will come in the next releases.

New Trust Token tab for network requests

Chromium issue: 1126824

Lighthouse 7 in the Lighthouse panel

The Lighthouse panel is now running Lighthouse 7. Check out the release notes for a full list of changes.

Lighthouse 7 in the Lighthouse panel

New audits in Lighthouse 7:

  • Preload Largest Contentful Paint (LCP) image. Audits if the image used by the LCP element is preloaded in order to improve your LCP time.
  • Issues logged to the Issues panel. Indicates a list of unresolved issues in the Issues panel.
  • Progressive Web Apps (PWA). The PWA category changed fairly significantly:

    • The Installable group is now powered entirely by the capability checks that enable Chrome's installable criteria. These are the same signals seen in the Manifest pane.
    • The "Registers a service worker…" audit moves to the PWA Optimized group, and the "Uses HTTPS" audit is now included as part of the key "installability requirements" audit.
    • The Fast and reliable group is removed. As the revamped "installability requirements" audit includes offline-capability checking, the “current page and start_url respond with 200 when offline” audit was removed. The "Page load is fast enough on mobile network" audit was removed too.

Chromium issue: 772558

Elements panel updates

Support forcing the CSS :target state

You can now use DevTools to force and inspect the CSS :target state.

In the Elements panel, select an element and toggle the element state. Enable the :target checkbox to force and inspect the styles.

Use the :target pseudo-class to style an element when the hash in the URL and the id of the element are the same. Check out this demo to try it yourself. This new DevTools feature lets you test such styles without having to manually change the URL all the time.

forcing the CSS `:target` state

Chromium issue: 1156628

New shortcut to duplicate element

Use the new Duplicate element shortcut to clone an element instantly.

Right-click an element in the Elements panel and select Duplicate element. A new element will be created under it.

Alternatively, you can duplicate an element with keyboard shortcuts:

  • Mac: Shift + Option + ⬇️
  • Windows / Linux: Shift + Alt + ⬇️

Duplicate element

Chromium issue: 1150797

Color pickers for custom CSS properties

The Styles pane now shows color pickers for custom CSS properties.

In addition, you can hold the Shift key and click on the color picker to cycle through the RGBA, HSLA, and Hex representations of the color value.

Color pickers for custom CSS properties

Chromium issue: 1147016

New shortcuts to copy CSS properties

You can now copy CSS properties quicker with a few new shortcuts.

In the Elements panel, select an element. Then, right-click on a CSS class or a CSS property in the Styles pane to copy the value.

New shortcuts to copy CSS properties

Copy options for CSS class:

  • Copy selector. Copy the current selector name.
  • Copy rule. Copy the rule of the current selector.
  • Copy all declarations. Copy all declarations under the current rule, including invalid and prefixed properties.

Copy options for CSS property:

  • Copy declaration. Copy the declaration of the current line.
  • Copy property. Copy the property of the current line.
  • Copy value. Copy the value of the current line.

Chromium issue: 1152391

Cookies updates

New option to show URL-decoded cookies

You can now opt to view URL-decoded cookie values in the Cookies pane.

Go to the Application panel and select the Cookies pane. Select any cookie on the list. Enable the new Show URL decoded checkbox to view the decoded cookie.

Option to show URL-decoded cookies

Chromium issue: 997625

Clear only visible cookies

The Clear all cookies button in the Cookies pane has been replaced by the Clear filtered cookies button.

In the Application panel > Cookies pane, enter text in the textbox to filter the cookies. In our example here, we filter the list by “PREF”. Click the Clear filtered cookies button to delete the visible cookies. Clear the filter text, and you will see that the other cookies remain in the list. Previously, your only option was to clear all cookies.

Clear only visible cookies

Chromium issue: 978059

New option to clear third-party cookies in the Storage pane

When clearing site data in the Storage pane, DevTools now clears only first-party cookies by default. Enable the including third-party cookies checkbox to clear third-party cookies as well.

Option to clear third-party cookies

Chromium issue: 1012337

Edit User-Agent Client Hints for custom devices

You can now edit User-Agent Client Hints for custom devices.

Go to Settings > Devices and click on Add custom device.... Expand the User agent client hints section to edit the client hints.

Edit User-Agent Client Hints

User-Agent Client Hints are an alternative to User-Agent string that enables developers to access information about a user's browser in a privacy-preserving and ergonomic way. Learn more about User-Agent Client Hints in web.dev/user-agent-client-hints/.
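
As a sketch, reading client hints from script looks roughly like this (run from an async context, such as a module or the DevTools console; see the linked article for the authoritative API):

// Low-entropy hints are available synchronously.
console.log(navigator.userAgentData.brands);
console.log(navigator.userAgentData.mobile);

// High-entropy hints must be requested explicitly.
const hints = await navigator.userAgentData.getHighEntropyValues(
    ['platform', 'platformVersion', 'model']);
console.log(hints);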

Chromium issue: 1073909

Network panel updates

Persist “record network log” setting

DevTools now persists the “Record network log” setting. Previously, DevTools reset the user’s choice whenever the page reloaded.

Record network log

Chromium issue: 1122580

View WebTransport connections in the Network panel

The Network panel now displays WebTransport connections.

WebTransport connections

WebTransport is a new API offering low-latency, bidirectional, client-server messaging. Learn more about its use cases, and how to give feedback about the future of the implementation in web.dev/webtransport/.
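
As a rough sketch of the API shape as currently specified (the endpoint URL is a placeholder, and the origin-trial shape may differ; see the linked article for current details):

const transport = new WebTransport('https://echo.example:4999/wt');
await transport.ready;

// Send an unreliable datagram.
const writer = transport.datagrams.writable.getWriter();
await writer.write(new Uint8Array([1, 2, 3]));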

Chromium issue: 1152290

“Online” renamed to “No throttling”

The network emulation option “Online” is now renamed to “No Throttling”.

Record network log

Chromium issue: 1028078

New copy options in the Console and Sources panel

New shortcuts to copy object in the Console and Sources panel

You can now copy object values with new shortcuts in the Console and Sources panel. This is especially handy when you have a large object (e.g. a long array) to copy.

Copy object in the Console

Copy object in the Sources panel

Chromium issues: 1149859, 1148353

New shortcuts to copy file name in the Sources panel

You can now copy a file name by right-clicking a file in the Sources panel and selecting Copy file name.

Copy file name

Chromium issue: 1155120

Frame details view updates

New Service Workers information in the Frame details view

DevTools now displays dedicated service workers under the frame that creates them.

In the Application panel, expand a frame with service workers, then select a service worker under the Service Workers tree to view the details.

Service Workers information in the Frame details view

Chromium issue: 1122507

Measure Memory information in the Frame details view

The performance.measureMemory() API status is now displayed under the API availability section.

The new performance.measureMemory() API estimates the memory usage of the entire web page. Learn how to monitor your web page's total memory usage with this new API in this article.

Measure Memory
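A minimal sketch of calling the API follows; it was experimental and flag-gated at the time, and the result shape has since evolved, so treat the output as illustrative.

// Minimal sketch of the experimental API; the promise rejects with a
// SecurityError if the context does not meet the isolation requirements.
if ('measureMemory' in performance) {
  (performance as any)
    .measureMemory()
    .then((result: unknown) => console.log('Estimated memory usage:', result))
    .catch((error: Error) => console.warn(error));
}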

Chromium issue: 1139899

Provide feedback from the Issues tab

If you ever want to improve an issue message, open the Issues tab from the Console or via More options > More tools > Issues. Expand an issue message, click Is the issue message helpful to you?, and provide your feedback in the pop-up.

Issue feedback link

Dropped frames in the Performance panel

When analyzing load performance in the Performance panel, the Frames section now marks dropped frames in red. Hover over one to see its frame rate.

Dropped frames

Chromium issue: 1075865

Emulate foldable and dual-screen in Device Mode

You can now emulate dual-screen and foldable devices in DevTools.

After enabling the Device Toolbar, select one of these devices: Surface Duo or Samsung Galaxy Fold.

Click the new span icon to toggle between the single-screen (folded) and dual-screen (unfolded) postures.

You can also enable Experimental Web Platform features to access the new CSS screen-spanning media feature and the JavaScript getWindowSegments API: navigate to chrome://flags and toggle the Experimental Web Platform features flag. The experimental icon displays the state of the flag and is highlighted when the flag is turned on.

Emulate dual-screen
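To experiment with the flag-gated APIs mentioned above, something like the following sketch can be used; these APIs were experimental and have since been reshaped, so treat the names as historical.

// Hedged sketch of the experimental dual-screen JavaScript API.
// getWindowSegments() only exists with Experimental Web Platform features on.
const segments = (window as any).getWindowSegments?.();
if (segments && segments.length > 1) {
  console.log('Dual-screen posture; segment rectangles:', segments);
}

// The matching CSS media feature (also flag-gated at the time):
//   @media (screen-spanning: single-fold-vertical) { /* dual-screen styles */ }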

Chromium issue: 1054281

Experimental features

Automate browser testing with Puppeteer Recorder

DevTools can now generate Puppeteer scripts based on your interactions with the browser, making it easier for you to automate browser testing. Puppeteer is a Node.js library that provides a high-level API to control Chrome or Chromium over the DevTools Protocol.

Go to this demo page. Open the Sources panel in DevTools. Select the Recording tab on the left pane. Add a new recording and name the file (e.g. test01.js).

Click the Record button at the bottom to start recording your interactions. Fill in the on-screen form and observe that Puppeteer commands are appended to the file accordingly. Click the Record button again to stop recording.

To run the script, follow the Getting started guide on the official Puppeteer site.
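A generated recording might resemble the following sketch; the URL, selectors, and exact output format are illustrative, since the experiment's output may differ.

// Illustrative sketch of a recorded script; not the Recorder's exact output.
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/form'); // hypothetical demo page
  await page.type('#name', 'Jane');            // hypothetical selector
  await page.click('button[type="submit"]');
  await browser.close();
})();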

Please note that this is an early-stage experiment. We plan to improve and expand the Recorder functionality over time.

Puppeteer Recorder

Chromium issue: 1144127

Font editor in the Styles pane

The new Font Editor is a popover editor in the Styles pane for font related properties to help you find the perfect typography for your webpage.

The popover provides a clean UI to dynamically manipulate typography with a series of intuitive input types.

Font editor in the Styles pane

Chromium issue: 1093229

CSS flexbox debugging tools

DevTools has included experimental support for flexbox debugging since the last release.

DevTools now draws a guiding line to help you better visualize the CSS align-items property. The CSS gap property is supported as well. In our example here, we have CSS gap: 12px;. Notice the hatching pattern for each gap.

flexbox

Chromium issue: 1139949

New CSP Violations tab

View all Content Security Policy (CSP) violations at a glance in the new CSP Violations tab. This new tab is an experiment that should make it easier to work with web pages that have a large number of CSP and Trusted Types violations.

CSP Violations tab

Chromium issue: 1137837

New color contrast calculation - Advanced Perceptual Contrast Algorithm (APCA)

The Advanced Perceptual Contrast Algorithm (APCA) is replacing the AA/AAA guidelines contrast ratio in the Color Picker.

APCA is a new way to compute contrast based on modern research on color perception. Compared to AA/AAA guidelines, APCA is more context-dependent. The contrast is calculated based on the text's spatial properties (font weight and size), color (perceived lightness difference between text and background), and context (ambient light, surroundings, intended purpose of the text).

APCA in Color Picker

Chromium issue: 1121900


Deprecations and removals in Chrome 87


Chrome 87 beta was released on October 15, 2020, and stable was released on November 17, 2020.

Comma separator in iframe allow attribute

Permissions policy declarations in an <iframe> tag can no longer use commas as a separator between items. Developers should use semicolons instead.
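For example (values illustrative), an embedding page now needs semicolons between policy items, as in this sketch.

// Illustrative sketch: the allow attribute now requires semicolon separators.
const frame = document.createElement('iframe');
// No longer supported: frame.allow = 'camera *, microphone *';
frame.allow = 'camera *; microphone *';
document.body.append(frame);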

-webkit-font-size-delta

Blink will no longer support the rarely-used -webkit-font-size-delta property. Developers should use font-size to control font size instead.
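A minimal sketch of the replacement, assuming a script previously relied on the non-standard property:

// Illustrative: replace the non-standard delta with an explicit font-size.
// Before (no longer supported in Blink):
//   element.style.setProperty('-webkit-font-size-delta', '2px');
const element = document.querySelector('p');
if (element instanceof HTMLElement) {
  element.style.fontSize = 'calc(1em + 2px)';
}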

Deprecate FTP support

Chrome is deprecating and removing support for FTP URLs. The current FTP implementation in Google Chrome supports neither encrypted connections (FTPS) nor proxies. Usage of FTP in the browser is sufficiently low that it is no longer viable to invest in improving the existing FTP client. In addition, more capable FTP clients are available on all affected platforms.

Google Chrome 72 and later removed support for fetching document subresources over FTP and rendering of top-level FTP resources. Currently, navigating to an FTP URL results in either a directory listing or a download, depending on the type of resource. A bug in Google Chrome 74 and later resulted in dropping support for accessing FTP URLs over HTTP proxies. Proxy support for FTP was removed entirely in Google Chrome 76. In Chrome 86, FTP was turned off for pre-release channels (Canary and Beta) and was experimentally turned off for one percent of stable users.

The remaining capabilities of Google Chrome’s FTP implementation are restricted to either displaying a directory listing or downloading a resource over unencrypted connections.

The remainder of the deprecation follows this timeline:

Chrome 87

FTP support will be disabled by default for fifty percent of users, but can be re-enabled via browser flags.

Chrome 88

FTP support will be disabled.


Migrating Puppeteer to TypeScript



We’re big fans of TypeScript on the DevTools team — so much so that new code in DevTools is being authored in it and we’re in the middle of a big migration of the entire codebase to being type-checked by TypeScript. You can find out more about that migration in our talk at Chrome Dev Summit 2020. It therefore made perfect sense to look at migrating Puppeteer’s codebase to TypeScript, too.

Planning the migration

When planning how to migrate, we wanted to be able to make progress in small steps. This keeps the overhead of the migration down, since you're working on only a small part of the code at any one time, and it keeps the risk down, too. If anything goes wrong with one of the steps, you can easily revert it. Puppeteer has a lot of users and a broken release would cause problems for many of them, so it was vital that we kept the risk of breaking changes to a minimum.

We were also fortunate that Puppeteer has a robust set of unit tests in place covering all of its functionality. This meant we could be confident that we weren’t breaking code as we migrated, but also that we weren’t introducing changes to our API. The goal of the migration was to complete it without any Puppeteer users even realising that we’d migrated, and the tests were a vital part of that strategy. If we hadn't had good test coverage, we would have added that before continuing with the migration.

Performing any code change without tests is risky, but changes where you’re touching entire files or the entirety of the codebase are especially risky. When making mechanical changes, it’s easy to miss a step, and on multiple occasions the tests caught a problem that had slipped past both the implementer and the reviewer.

One thing we did invest time in upfront was our Continuous Integration (CI) setup. We noticed that CI runs against pull requests were flaky and often failed. This happened so often that we’d gotten into the habit of ignoring our CI and merging the pull requests anyway, assuming that the failure was a one-off issue on CI rather than a problem in Puppeteer.

After some general maintenance and dedicated time to fix some regular test flakes, we got it into a much more consistently passing state, enabling us to listen to CI and know that a failure was indicating an actual problem. This work isn’t glamorous, and it’s frustrating watching endless CI runs, but it was vital to have our test suite running reliably given the number of pull requests that the migration was throwing at it.

Pick and land one file

At this point we had our migration ready to go and a robust CI server full of tests to watch our backs. Rather than dive in on any arbitrary file, we purposefully picked a small file to migrate. This is a useful exercise because it lets you validate the planned process you’re about to undertake. If it works on this file, your approach is valid; if not, you can go back to the drawing board.

Additionally, going file by file (and with regular Puppeteer releases, so all the changes didn't ship in the same npm version) kept the risk down. We picked DeviceDescriptors.js as the first file because it was one of the most straightforward files in the codebase. It can feel slightly underwhelming to do all this prep work and land such a small change, but the goal isn't to make huge changes immediately; it is to proceed cautiously and methodically, file by file. Time spent validating the approach definitely saves time later in the migration when you hit those more complicated files.

Prove the pattern and repeat

Thankfully the change to DeviceDescriptors.js successfully made it into the codebase, and the plan worked as we’d hoped it would! At this point you’re ready to knuckle down and get on with it, which is exactly what we did. Using a GitHub label is a really nice way to group all pull requests together, and we found that useful to track progress.

Get it migrated and improve it later

For any individual JavaScript file our process was:

  1. Rename the file from .js to .ts.
  2. Run the TypeScript compiler.
  3. Fix any issues.
  4. Create the pull request.

Most of the work in these initial pull requests was to extract TypeScript interfaces for existing data structures. In the case of the first pull request that migrated DeviceDescriptors.js that we discussed previously, the code went from:

module.exports = [
  { 
    name: 'Pixel 4',
    … // Other fields omitted to save space
  }, 
  …
]

And became:

interface Device {
  name: string,
  …
}

const devices: Device[] = [{name: 'Pixel 4', …}, …]

module.exports = devices;

This process meant that we worked through every line of the codebase, checking for issues. As with any codebase that's been around a few years and grown over time, there are areas of opportunity to refactor code and improve the situation. Especially with the move to TypeScript, we saw places where a slight restructure of the code would enable us to lean on the compiler more and get better type safety.

Counter-intuitively, it’s really important to resist making these changes straight away. The goal of the migration is to get the codebase into TypeScript, and at all times during a large migration you should be thinking about the risk of causing breakages to the software and to your users. By keeping the initial changes minimal, we kept that risk low. Once the file was merged and migrated to TypeScript, we could then make follow-up changes to improve the code to leverage the type system. Make sure you set strict boundaries for your migration and try to stay within them.
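As an illustrative (not actual) example of such a follow-up change: once a file like DeviceDescriptors.ts was safely merged, a later pull request could move it off module.exports and tighten the types.

// Illustrative follow-up, not the actual Puppeteer change: switch to an
// ES module export and a readonly array once the file is in TypeScript.
interface Device {
  name: string;
  userAgent: string;
}

const devices: readonly Device[] = [
  {name: 'Pixel 4', userAgent: 'illustrative-ua-string'},
];

export default devices;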

Migrating the tests to test our type definitions

Once we had the entire source code migrated to TypeScript, we could turn our focus to our tests. Our tests had great coverage, but were all written in JavaScript. This meant that one thing they didn’t test was our type definitions. One of the long-term goals of the project (which we’re still working on) is to ship high-quality type definitions out of the box with Puppeteer, but we didn’t have any checks in our codebase about our type definitions.

By migrating the tests to TypeScript (following the same process, going file by file), we found issues with our TypeScript that would otherwise have been left up to users to find for us. Now our tests not only cover all our functionality, but act as a quality check of our TypeScript too!
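The effect is easy to see in a sketch: once a test file is TypeScript, a wrong type definition fails at compile time rather than in a user's project. The test body below is illustrative.

// Illustrative: a TypeScript test exercises the published type definitions.
import puppeteer, {Browser, Page} from 'puppeteer';

(async () => {
  const browser: Browser = await puppeteer.launch();
  const page: Page = await browser.newPage();
  // If newPage()'s declared return type were wrong, this would not compile.
  await page.goto('https://example.com');
  await browser.close();
})();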

We’ve already benefited hugely from TypeScript as engineers who work on the Puppeteer codebase. Coupled with our much improved CI environment, it’s enabled us to become more productive when working on Puppeteer and have TypeScript catch bugs that otherwise would have made it into an npm release. We’re excited to get high quality TypeScript definitions shipped to enable all the developers using Puppeteer to benefit from this work too.

<<../../_shared/devtools-feedback.md>>

<<../../_shared/discover-devtools-blog.md>>

Improving DevTools startup time



DevTools startup is now ~13% faster 🎉 (from 11.2s down to 10s).

TL;DR: The result was achieved by removing a redundant serialization step.

Overview

While DevTools is starting up, it needs to make some calls to the V8 JavaScript engine.

DevTools startup process

The mechanism Chromium uses to send DevTools commands to V8 (and for IPC in general) is called Mojo. My teammates Benedikt Meurer and Sigurd Schneider discovered an inefficiency while working on another task, and came up with an idea to improve the process by removing two redundant steps in how these messages are sent and received.

Let's dive into how the Mojo mechanism works!

The Mojo mechanisms

The Mojo mechanisms

There is a Mojo command, EvaluateScript, which runs a JS command. It serializes the whole JS command, including its arguments, into a string of JavaScript source code that can be passed to eval(). As you might imagine, these strings can become quite long and expensive. After the command is received by V8, this string of JavaScript code is deserialized before running. Serializing and deserializing every single message creates significant overhead.

Benedikt Meurer realized that serializing and deserializing the arguments is quite expensive, and that the whole "Serialize JS command to JS string" and "Deserialize JS string" steps are redundant and can be skipped.

Technical details: RenderFrameHostImpl::ExecuteJavaScript

How we improved

Improved mechanisms

We introduced another Mojo API method that allows us to pass the object name, the method to be called, and the list of arguments directly, instead of having to create a string of JavaScript source code. This lets us skip serialization and deserialization, and removes the need to parse JavaScript code.
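Conceptually, the change looks like this on the calling side. This is a sketch only: the real interface is Chromium C++ Mojo (see the CLs below), and the names here are illustrative.

// Conceptual sketch only; names are illustrative, not the real Mojo API.
declare const frame: {
  executeJavaScript(source: string): void;
  executeJavaScriptMethod(obj: string, method: string, args: unknown[]): void;
};
declare const message: object;

// Before: the whole call is flattened into a JavaScript source string that
// V8 must parse and evaluate.
frame.executeJavaScript(`DevToolsAPI.dispatchMessage(${JSON.stringify(message)})`);

// After: object name, method name, and arguments travel as structured data,
// so no JS source string is built, parsed, or evaluated.
frame.executeJavaScriptMethod('DevToolsAPI', 'dispatchMessage', [message]);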

For technical details on how we implemented this optimization, consult these two patches:

  1. CL 2431864: [devtools] Reduce performance overhead of message dispatch in the front-end
  2. CL 2442012: [devtools] Use ExecuteJavaScriptMethod in DevTools

Impact

To measure the effectiveness of the change, we compared Chromium revisions cb971089a058 and 4f213b39d581 (before and after the change).

For both revisions, we ran the following scenario 5 times:

  1. Record a trace using chrome://tracing.
  2. Open DevTools-on-DevTools.
  3. Get the recorded CrRendererMain trace and compare the V8-specific metrics.

Based on these experiments, DevTools opens ~13% faster (from 11.2s down to 10s) with the optimization.

Highlights, CPU durations

Method name       Not optimized (ms)   Optimized (ms)   Difference (ms)   Speed improvement (%)
Total             11,213.19            9,953.99         -1,259.20         12.65%
v8.run            499.67               3.61             -496.06           12.65%
V8.Execute        1,654.87             1,349.61         -305.25           3.07%
v8.callFunction   1,171.84             1,339.77         167.94            -1.69%
v8.compile        133.93               3.56             -130.37           1.31%

DevTools load CPU time (ms)

Full tracing metrics comparison table

As a result, DevTools opens and works faster with less CPU usage. 🎉


CSS-in-JS support in DevTools



This article talks about the CSS-in-JS support in DevTools that has landed since Chrome 85 and, more generally, about what we mean by CSS-in-JS and how it differs from the regular CSS that DevTools has supported for a long time.

What is CSS-in-JS?

The definition of CSS-in-JS is rather vague. In a broad sense, it’s an approach for managing CSS code using JavaScript. For example, it could mean that the CSS content is defined using JavaScript and the final CSS output is generated on the fly by the app.

In the context of DevTools, CSS-in-JS means that the CSS content is injected into the page using CSSOM APIs. Regular CSS is injected using <style> or <link> elements, and it has a static source (e.g. a DOM node or a network resource). In contrast, CSS-in-JS often does not have a static source. A special case here is that the content of a <style> element can be updated using CSSOM API, causing the source to become out of sync with the actual CSS stylesheet.

If you use a CSS-in-JS library (e.g. styled-components, Emotion, JSS), the library might inject styles using CSSOM APIs under the hood, depending on the development mode and the browser.

Let's look at some examples of how you can inject a stylesheet using CSSOM APIs, similar to what CSS-in-JS libraries do.

// Insert a new rule into an existing CSS stylesheet.
// Note: replace() and replaceSync() throw on stylesheets that are not
// constructed, so only insertRule()/deleteRule() apply here.
const element = document.querySelector('style');
const stylesheet = element.sheet;
stylesheet.insertRule('.some { color: green; }');

You can create a completely new stylesheet as well:

// Create a completely new stylesheet
const sheet = new CSSStyleSheet();
sheet.replaceSync('.some { color: blue; }');
sheet.insertRule('.some { color: green; }');

// Apply constructed stylesheet to the document
document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];

CSS support in DevTools

In DevTools, the most commonly used feature for dealing with CSS is the Styles pane. In the Styles pane, you can view which rules apply to a particular element, and you can edit the rules and see the changes on the page in real time.

Before last year, the support for CSS rules modified using CSSOM APIs was rather limited: you could only see the applied rules but could not edit them. The main goal we had last year was to allow editing of CSS-in-JS rules using the Styles pane. Sometimes we also call CSS-in-JS styles “constructed” to indicate that they were constructed using Web APIs.

Let's dive into the details of how style editing works in DevTools.

Style editing mechanism in DevTools

Style editing mechanism in DevTools

When you select an element in DevTools, the Styles pane is shown. The Styles pane issues a CDP command called CSS.getMatchedStylesForNode to get the CSS rules that apply to the element. CDP stands for Chrome DevTools Protocol; it's an API that allows the DevTools frontend to get additional information about the inspected page.
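You can issue the same command yourself over CDP. Here is a minimal sketch using Puppeteer's raw CDP session; the page URL and selector are illustrative.

// Minimal sketch: calling CSS.getMatchedStylesForNode over a raw CDP session.
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  const client = await page.target().createCDPSession();
  await client.send('DOM.enable');
  await client.send('CSS.enable');

  const {root} = await client.send('DOM.getDocument');
  const {nodeId} = await client.send('DOM.querySelector', {
    nodeId: root.nodeId,
    selector: 'h1', // illustrative selector
  });

  // The same command the Styles pane uses to populate its view.
  const matched = await client.send('CSS.getMatchedStylesForNode', {nodeId});
  console.log(matched.matchedCSSRules);

  await browser.close();
})();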

When invoked, CSS.getMatchedStylesForNode identifies all the stylesheets in the document and parses them using the browser’s CSS parser. Then it builds an index that associates every CSS rule with a position in the stylesheet source.

You might ask, why does it need to parse the CSS again? The problem here is that for performance reasons the browser itself is not concerned with the source positions of CSS rules and, therefore, it does not store them. But DevTools needs the source positions to support CSS editing. We don’t want regular Chrome users to pay the performance penalty, but we do want DevTools users to have access to the source positions. This re-parsing approach addresses both use cases with minimal downsides.

Next, the CSS.getMatchedStylesForNode implementation asks the browser's style engine to provide the CSS rules that match the given element. Finally, the method associates the rules returned by the style engine with the source code and provides a structured response about the CSS rules, so that DevTools knows which part of a rule is the selector and which parts are properties. This allows DevTools to edit the selector and the properties independently.

Now let’s look at editing. Remember that CSS.getMatchedStylesForNode returns source positions for every rule? That’s crucial for editing. When you change a rule, DevTools issues another CDP command that actually updates the page. The command includes the original position of the fragment of the rule that is being updated and the new text that the fragment needs to be updated with.
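Continuing the CDP-session sketch above, such an edit call looks roughly like this; the ids and the range are illustrative.

// Illustrative sketch: a style edit sends the stylesheet id, the source
// range of the fragment being replaced, and the replacement text.
// `styleSheetId` would come from an earlier CSS.styleSheetAdded event.
await client.send('CSS.setStyleTexts', {
  edits: [{
    styleSheetId,
    range: {startLine: 3, startColumn: 14, endLine: 3, endColumn: 32},
    text: 'color: green;',
  }],
});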

On the backend, when handling the edit call, DevTools updates the target stylesheet. It also updates the copy of the stylesheet source that it maintains and updates the source positions for the updated rule. In response to the edit call, the DevTools frontend gets back the updated positions for the text fragment that has been just updated.

This explains why editing CSS-in-JS in DevTools didn’t work out of the box: CSS-in-JS doesn’t have an actual source stored anywhere and the CSS rules live in the browser’s memory in CSSOM data structures.

How we added support for CSS-in-JS

So, to support editing of CSS-in-JS rules, we decided that the best solution would be to create a source for constructed stylesheets that can be edited using the existing mechanism described above.

The first step is to build the source text. The browser’s style engine stores the CSS rules in the CSSStyleSheet class. That class is the one whose instances you can create from JavaScript as discussed previously. The code to build the source text is as follows:

String InspectorStyleSheet::CollectStyleSheetRules() {
  StringBuilder builder;
  // Serialize every rule in the stylesheet, one rule per line.
  for (unsigned i = 0; i < page_style_sheet_->length(); i++) {
    builder.Append(page_style_sheet_->item(i)->cssText());
    builder.Append('\n');
  }
  return builder.ToString();
}

It iterates over the rules found in a CSSStyleSheet instance and builds a single string out of them. This method is invoked when an instance of the InspectorStyleSheet class is created. The InspectorStyleSheet class wraps a CSSStyleSheet instance and extracts additional metadata that is required by DevTools:

void InspectorStyleSheet::UpdateText() {
  String text;
  bool success = InspectorStyleSheetText(&text);
  if (!success)
    success = InlineStyleSheetText(&text);
  if (!success)
    success = ResourceStyleSheetText(&text);
  if (!success)
    success = CSSOMStyleSheetText(&text);
  if (success)
    InnerSetText(text, false);
}

In this snippet, we see CSSOMStyleSheetText, which calls CollectStyleSheetRules internally. CSSOMStyleSheetText is invoked only if the stylesheet is neither an inline nor a resource stylesheet. Basically, these two snippets already allow basic editing of stylesheets that are created using the new CSSStyleSheet() constructor.

A special case is stylesheets associated with a <style> tag that have been mutated using the CSSOM API. In this case, the stylesheet contains the source text plus additional rules that are not present in the source. To handle this case, we introduced a method to merge those additional rules into the source text. Here, the order matters, because CSS rules can be inserted in the middle of the original source text. For example, imagine that the original <style> element contained the following text:

/* comment */
.rule1 {}
.rule3 {}

Then the page inserted some new rules using the JS API, producing the following order of rules: .rule0, .rule1, .rule2, .rule3, .rule4. The resulting source text after the merge operation should be as follows:

.rule0 {}
/* comment */
.rule1 {}
.rule2 {}
.rule3 {}
.rule4 {}

The preservation of the original comments and indentation is important for the editing process because the source text positions of rules have to be precise.
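A minimal sketch of the merge idea, far simpler than the actual Chromium implementation: walk the CSSOM rules in their authoritative order, reuse the original source fragment (comments included) for rules that exist in the source, and fall back to cssText for rules that were inserted programmatically.

// Illustrative sketch of the merge, keyed by the rule's serialized text.
// `sourceFragments` maps a rule's cssText to its original source fragment
// (including any preceding comment); `cssomRules` defines the final order.
function mergeRuleText(
  cssomRules: CSSRule[],
  sourceFragments: Map<string, string>,
): string {
  return cssomRules
    .map((rule) => sourceFragments.get(rule.cssText) ?? rule.cssText)
    .join('\n');
}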

Another special aspect of CSS-in-JS stylesheets is that they can be changed by the page at any time. If the actual CSSOM rules went out of sync with the text version, editing would not work. For this, we introduced a so-called probe that allows the browser to notify the DevTools backend when a stylesheet is mutated. Mutated stylesheets are then synchronized during the next call to CSS.getMatchedStylesForNode.

With all these pieces in place, CSS-in-JS editing already works, but we wanted to improve the UI to indicate whether a stylesheet was constructed. We added a new attribute called isConstructed to CDP's CSS.CSSStyleSheetHeader, which the frontend uses to properly display the source of a CSS rule:

Constructable stylesheet

Conclusions

To recap, we went through the relevant CSS-in-JS use cases that DevTools didn't support and walked through the solution for supporting them. The interesting part of this implementation is that we were able to leverage existing functionality by giving CSSOM CSS rules a regular source text, avoiding the need to completely re-architect style editing in DevTools.

For more background, check out our design proposal or the Chromium tracking bug which references all related patches.

