DevTools Digest, August 2016


Hi, I’m Kayce, technical writer for DevTools, here to tell you about the latest happenings in DevTools land.

The Resources panel is now the Application panel

As of Chrome 52, the Resources panel is no more! It has been renamed to the Application panel. All of the old features of the Resources panel are still available, plus many new ones to help you debug Progressive Web Apps. Now you, too, can experience the joys of debugging the service worker lifecycle and cache timing issues.

Check out our new guide, written by yours truly, to learn more about the new features: Debug Progressive Web Apps

ChromeLens

ChromeLens is an excellent new extension to help you make your website more accessible to the visually impaired.

P.S. Rob Dodson just launched a new video series on accessibility, a11ycasts.

New features now in Canary

Canary is currently Chrome 54. So, for future readers, if you’re using Chrome 54 or beyond, you can use these features!

The Color Picker is in the Sources panel.

[Image: Color Picker in the Sources panel]

Right-click on the Resources pane in the Network panel and you can copy a string of cURL requests to download all of your resources.

[Image: Copy All as cURL]

JavaScript can be disabled from the Command Menu. This used to be available only from Settings.

[Image: disable JavaScript from the Command Menu]

console.warn() now includes a stack trace.

[Image: console.warn() stack trace]

This last feature has been around for a few months, but it’s worth another mention. Create a Timeline recording with the JS Profile option enabled, and you can see a function-by-function breakdown of execution times in the Sources panel.

[Image: function execution times in the Sources panel]

New ideas from the community

Here are some new ideas from the community that may be coming to a future DevTools Near You.

  • @matthewcp: Speed up memory leak debugging by displaying a simple list of growing objects.
  • @jonathanbingram: Increase / decrease font-weight values with the increment / decrement keyboard shortcuts.
  • @_bl4de: Edit cookies (actually a long-standing request, but thanks for bringing it up again). Rumor has it that @kdzwinel has a PR in the works.
  • @kienzle_s: Add OR filters to the Network panel filter.

Got a new idea? We’d love to hear it. Ping us on Twitter at @ChromeDevTools and tell us what’s up.

While I’ve got your attention, if there are any docs that need fixing, or features that need explaining, feel free to open an issue on the docs repository.

Until next month!


API Deprecations and Removals in Chrome 53


In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and the capabilities of the Web Platform. This article describes the deprecations and removals in Chrome 53, which is currently in beta. This list is subject to change at any time.

Deprecation policy

To keep the platform healthy, we sometimes remove APIs from the Web Platform which have run their course. There can be many reasons why we would remove an API: it may be superseded by a newer API, it may be updated to reflect changes to specifications and to bring alignment and consistency with other browsers, or it may be an early experiment that never came to fruition in other browsers and thus increases the burden of support for web developers.

Some of these changes might have an effect on a very small number of sites. To mitigate issues ahead of time, we try to give developers advance notice so that, if needed, they can make the required changes to keep their sites running.

Chrome currently has a process for deprecations and removals of APIs, and the TL;DR is:

  • Announce on blink-dev.
  • Set warnings and give time scales in the developer console of the browser when usage is detected on a page.
  • Wait, monitor, and then remove feature as usage drops.

You can find a list of all deprecated features in chromestatus.com using the deprecated filter and removed features by applying the removed filter. We will also try to summarize some of the changes, reasoning, and migration paths in these posts.

DHE-based ciphers being phased out

TL;DR: DHE-based cipher suites are being removed from Chrome 53 on desktop because they’re insufficient for long-term use. Servers should employ ECDHE if it’s available, or a plain-RSA cipher if it is not.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Last year, we increased Chrome’s minimum TLS Diffie-Hellman group size from 512-bit to 1024-bit; however, 1024-bit is insufficient for the long term. Metrics report that around 95% of DHE connections seen by Chrome use 1024-bit DHE. This, compounded with how DHE is negotiated in TLS, makes it difficult to move past 1024-bit.

Although there is a draft specification that fixes this problem, it is still a draft and requires both client and server changes. Meanwhile, ECDHE is already widely implemented and deployed. Servers should upgrade to ECDHE if available. Otherwise, ensure a plain-RSA cipher suite is enabled.

DHE-based ciphers have been deprecated since Chrome 51. Support is being removed from desktop in Chrome 53.

FileError deprecation warning

TL;DR: Removal of the deprecated FileError interface is expected in Chrome 54. Replace references to err.code with err.name and err.message.

Intent to Remove | Chromestatus Tracker | Chromium Bug

The current version of the File API standard does not contain the FileError interface, and its support was deprecated some time in 2013. In Chrome 53, this deprecation warning is printed to the DevTools console:

‘FileError’ is deprecated and will be removed in 54. Please use the ‘name’ or ‘message’ attributes of the error rather than ‘code’.

This has different effects in different contexts.

  • FileReader.error and FileWriter.error will be DOMException objects instead of FileError objects.
  • For asynchronous FileSystem calls the ErrorCallback will be passed FileError.ErrorCode instead of FileError.
  • For synchronous FileSystem calls FileError.ErrorCode will be thrown instead of FileError.

This change only impacts code that relies on comparing the error instance’s code (e.code) directly against FileError enum values (FileError.NOT_FOUND_ERR, etc.). Code that tests against hard-coded constants (for example e.code === 1) could fail by reporting incorrect errors to the user.

Fortunately the FileError, DOMError, and DOMException error types all share name and message properties which give consistent names for error cases (e.g. e.name === "NotFoundError"). Code should use those properties instead, which will work across browsers and continue to work once the FileError interface itself has been removed.
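
As a hedged sketch (the handleError name and the NotFoundError branch are illustrative, not from the article), a migration for an asynchronous FileSystem ErrorCallback might look like this:

function handleError(err) {
  // Deprecated pattern: comparing numeric codes directly, e.g.
  // err.code === FileError.NOT_FOUND_ERR
  // Portable pattern: compare the error's name instead.
  if (err.name === 'NotFoundError') {
    console.error('File not found:', err.message);
  } else {
    console.error(err.name + ': ' + err.message);
  }
}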

The removal of FileError is anticipated in Chrome 54.

Remove results attribute for <input type=search>

TL;DR: The results attribute is being removed because it’s not part of any standard and is inconsistently implemented across browsers.

Intent to Remove | Chromestatus Tracker | Chromium Bug

The results attribute is only implemented in WebKit-based browsers, and it behaves highly inconsistently in those that do support it. For example, Chrome adds a magnifier icon to the input box, while on Safari desktop it controls how many previous searches are shown in a popup opened by clicking the magnifier icon. Since this isn’t part of any standard, it’s being removed.

If you still need to include the search icon in your input field then you will have to add some custom styling to the element. You can do this by including a background image and specifying a left padding on the input field.

input[type=search] {
  background: url(some-great-icon.png) no-repeat scroll 15px 15px;
  padding-left: 30px;
}

This attribute has been deprecated since Chrome 51.

Web Animations API hits cross-browser milestone


The Web Animations API is part of a new web standard, currently under development by browser engineers from Mozilla and Google.

Chrome 36 implemented the element.animate() method from the Web Animations API, empowering developers to build performant, compositor-threaded animations using JavaScript.
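
For example, a minimal fade-in with element.animate() might look like this (the #box selector is hypothetical):

// Animate opacity from 0 to 1 over 300ms and keep the final state.
const element = document.querySelector('#box');
element.animate(
  [{opacity: 0}, {opacity: 1}],
  {duration: 300, fill: 'forwards'}
);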

We’re excited that Mozilla has now shipped its implementation of element.animate() in Firefox 48, enabling true cross-browser accelerated animations using this emerging JS API. Google and Mozilla have worked hard together to make sure our implementations are interoperable. This has truly been a collaborative effort!

The benefits of using the Web Animations API include faster frame rates with lower power consumption, which translates to a better user experience on all devices, especially mobile.

The Web Animations API can be used in all browsers via a polyfill that will use the full-speed native implementation where it exists, and gracefully fall back to a JavaScript implementation otherwise. We’re encouraged by the WebKit community considering their own implementation, and the Edge team adding it to their backlog. We look forward to Web Animations soon being supported in all major browsers.

To get the full accelerated Web Animations experience in Chrome, Firefox, or Opera, head over to these demo pages and try it for yourself.

Access USB devices on the Web


If I said, plain and simple, “USB”, there’s a good chance you’d immediately think of keyboards, mice, audio, video, and storage devices. You’d be right, but you’ll find other kinds of Universal Serial Bus (USB) devices out there.

These non-standardized USB devices require hardware vendors to write native drivers and SDKs in order for you (the developer) to take advantage of them. Sadly this native code has historically prevented these devices from being used by the Web. And that’s one of the reasons the WebUSB API has been created: to provide a way to expose USB device services to the Web. With this API, hardware manufacturers will be able to build cross-platform JavaScript SDKs for their devices. But most importantly this will make USB safer and easier to use by bringing it to the Web.

Let’s see what you could expect with the WebUSB API:

  1. Buy a USB device.
  2. Plug it into your computer.
  3. A notification appears right away, with the right website to go to for this device.
  4. Simply click on it. The website is there and ready to use!
  5. Click to connect and a USB device chooser shows up in Chrome, where you can pick your device.
  6. Tada!

What would this procedure be like without the WebUSB API?

  • Read a box or label, or search online, and possibly end up on the wrong website.
  • Have to install a native application.
  • Is it supported on my operating system? Make sure you download the “right” thing.
  • Scary OS prompts pop up and warn you about installing drivers/applications from the Internet.
  • Malfunctioning code harms the whole computer. The Web is built to contain malfunctioning websites.
  • Only use the USB device once? On the Web, the website is gone once you close the tab. On a computer, the code sticks around.

Before we start

This article assumes you have some basic knowledge of how USB works. If not, I recommend reading USB in a NutShell. For background information about USB, check out the official USB specifications.

The WebUSB API is currently a draft, which means that it is far enough along to be real and usable, but there is still time to make fixes that developers need. That’s why the Chrome team is actively looking for eager developers to try it and give feedback on the spec and on the implementation.

In the very near future we plan for you to be able to enable WebUSB on your origin via Origin Trials. Until then you can enable it on your local computer for development purposes by flipping an experimental flag. The implementation is partially complete and currently available on Chrome OS, Linux, Mac, and Windows. Go to chrome://flags/#enable-experimental-web-platform-features, enable the highlighted flag, restart Chrome and you should be good to go.

[Image: WebUSB flag highlighted in chrome://flags]

Available for Origin Trials

In order to get as much feedback as possible from developers using the WebUSB API in the field, we will also add this feature in Chrome 54 as an origin trial for Chrome OS, Linux, Mac, and Windows. During the origin trial, the API may still change in backward-incompatible ways before we freeze it into the web platform. To use this experimental API in Chrome with no flag, you’ll need to request a token for your origin and insert it in your application.

The trial will end in March 2017. By that point, we expect to have figured out any changes necessary to stabilize the feature and move it out from Origin Trials.

Privacy & Security

Attacks against USB devices

The WebUSB API does not even try to provide a way for a web page to connect to arbitrary USB devices. There are plenty of published attacks against USB devices that make it unsafe to allow this.

For this reason, a USB device can define a set of origins that are allowed to connect to it. This is similar to the CORS mechanism in HTTP. In other words, WebUSB devices are associated with a web origin and can only be accessed from a page from the same origin.

Hardware manufacturers will have to update the firmware in their USB devices in order to enable WebUSB access to their device via the Platform Capability descriptor. Later a Public Device Registry will be created so that hardware manufacturers can support WebUSB on existing devices.

HTTPS Only

Because this experimental API is a powerful new feature added to the Web, Chrome aims to make it available only to secure contexts. This means you’ll need to build with TLS in mind.

We care deeply about security, so you will notice that new Web capabilities require HTTPS. The WebUSB API is no different, and is yet another good reason to get HTTPS up and running on your site.

During development you’ll be able to interact with WebUSB through http://localhost by using tools like the Chrome Dev Editor or the handy python -m SimpleHTTPServer, but to deploy it on a site you’ll need to have HTTPS set up on your server. I personally enjoy GitHub Pages for demo purposes.

To add HTTPS to your server you’ll need to get a TLS certificate and set it up. Be sure to check out the Security with HTTPS article for best practices there. For info, you can now get free TLS certificates with the new Certificate Authority Let’s Encrypt.

User Gesture Required

As a security feature, navigator.usb.requestDevice must be called via a user gesture like a touch or mouse click.
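
A minimal sketch of what that looks like in practice, assuming a hypothetical #connect button and the Arduino vendor ID used later in this article:

// requestDevice() must run inside a user-gesture handler such as a click.
document.querySelector('#connect').addEventListener('click', () => {
  navigator.usb.requestDevice({ filters: [{ vendorId: 0x2341 }] })
    .then(device => console.log('Selected:', device.productName))
    .catch(error => console.log(error)); // e.g. the user dismissed the chooser
});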

Let’s start coding

The WebUSB API relies heavily on JavaScript Promises. If you’re not familiar with them, check out this great Promises tutorial. One more thing: () => {} is simply ECMAScript 2015 arrow function syntax. Arrow functions have a shorter syntax compared to function expressions and lexically bind the value of this.

Get Access to USB devices

You can either prompt the user to select a single connected USB device using navigator.usb.requestDevice or call navigator.usb.getDevices to get a list of all connected USB devices the origin has access to.

The navigator.usb.requestDevice function takes a mandatory JavaScript object that defines filters. These filters are used to match any USB device with the given vendor (vendorId) and, optionally, product (productId) identifiers. The classCode, subclassCode and protocolCode keys can also be defined there.

[Image: USB device chooser]

For instance, here’s how to get access to a connected Arduino device configured to allow the origin.

navigator.usb.requestDevice({ filters: [{ vendorId: 0x2341 }] })
.then(device => {
  console.log(device.productName);      // "Arduino Micro"
  console.log(device.manufacturerName); // "Arduino LLC"
})
.catch(error => { console.log(error); });

Before you ask, I didn’t magically come up with this 0x2341 hexadecimal number. I simply searched for the word “Arduino” in this List of USB IDs.

The USB device returned in the fulfilled promise above has some basic, yet important, information about the device, such as the supported USB version, maximum packet size, vendor and product IDs, and the number of possible configurations the device can have: basically all fields contained in the device’s USB Descriptor.

For info, if a USB device announces its support for WebUSB, as well as defining a landing page URL, Chrome will show a persistent notification when the USB device is plugged in. Clicking on this notification will open the landing page.

[Image: WebUSB notification]

From there, you can simply call navigator.usb.getDevices and get access to your Arduino device as shown below.

navigator.usb.getDevices().then(devices => {
  devices.map(device => {
    console.log(device.productName);      // "Arduino Micro"
    console.log(device.manufacturerName); // "Arduino LLC"
  });
})

Talk to an Arduino USB board

Okay, now let’s see how easy it is to communicate from a WebUSB compatible Arduino board over the USB port. Check out instructions at https://github.com/webusb/arduino to WebUSB-enable your sketches.

Don’t worry, I’ll cover all the WebUSB device methods mentioned below later in this article.

var device;

navigator.usb.requestDevice({ filters: [{ vendorId: 0x2341 }] })
.then(selectedDevice => {
  device = selectedDevice;
  return device.open(); // Begin a session.
})
.then(() => device.selectConfiguration(1)) // Select configuration #1 for the device.
.then(() => device.claimInterface(2)) // Request exclusive control over interface #2.
.then(() => device.controlTransferOut({
    requestType: 'class',
    recipient: 'interface',
    request: 0x22,
    value: 0x01,
    index: 0x02})) // Ready to receive data
.then(() => device.transferIn(5, 64)) // Waiting for 64 bytes of data from endpoint #5.
.then(result => {
  let decoder = new TextDecoder();
  console.log('Received: ' + decoder.decode(result.data));
})
.catch(error => { console.log(error); });

Please keep in mind that the WebUSB library we are using here is just implementing one example protocol (based on the standard USB serial protocol) and that manufacturers can create any set and types of endpoints they wish. Control transfers are especially nice for small configuration commands as they get bus priority and have a well defined structure.

And here’s the sketch that has been uploaded to the Arduino board.

// Third-party WebUSB Arduino library
#include <WebUSB.h>

const WebUSBURL URLS[] = {
  { 1, "webusb.github.io/arduino/demos/" },
  { 0, "localhost:8000" },
};

const uint8_t ALLOWED_ORIGINS[] = { 1, 2 };

WebUSB WebUSBSerial(URLS, 2, 1, ALLOWED_ORIGINS, 2);

#define Serial WebUSBSerial

void setup() {
  Serial.begin(9600);
  while (!Serial) {
    ; // Wait for serial port to connect.
  }
  Serial.write("WebUSB FTW!");
  Serial.flush();
}

void loop() {
  // Nothing here for now.
}

The third-party WebUSB Arduino library used in the sample code above does basically two things:

  • The device acts as a WebUSB device, enabling Chrome to read the landing page URL and the list of origins allowed to communicate with it.
  • It exposes a WebUSB Serial API that you may use to override the default one.

Let’s look at the JavaScript code again. Once we get the device picked by the user, device.open simply runs all platform-specific steps to start a session with the USB device. Then, we have to select an available USB Configuration with device.selectConfiguration. Remember that a Configuration specifies how the device is powered, its maximum power consumption and its number of interfaces. Talking about interfaces, we also need to request exclusive access with device.claimInterface since data can only be transferred to an interface or associated endpoints when the interface is claimed. Finally calling device.controlTransferOut is needed to set up the Arduino device with the appropriate commands to communicate through the WebUSB Serial API.

From there, device.transferIn performs a bulk transfer onto the device to inform it that the host is ready to receive bulk data. Then, the promise is fulfilled with a result object containing a DataView (data) that has to be parsed appropriately.

For those who are familiar with USB, all of this should look pretty familiar.

I want moar

The WebUSB API lets you interact with all the USB transfer/endpoint types:

  • CONTROL transfers, used to send or receive configuration or command parameters to a USB device, are handled with controlTransferIn(setup, length) and controlTransferOut(setup, data).
  • INTERRUPT transfers, used for a small amount of time-sensitive data, are handled with the same methods as BULK transfers: transferIn(endpointNumber, length) and transferOut(endpointNumber, data). See the sketch after this list.
  • ISOCHRONOUS transfers, used for streams of data like video and sound, are handled with isochronousTransferIn(endpointNumber, packetLengths) and isochronousTransferOut(endpointNumber, data, packetLengths).
  • BULK transfers, used to transfer a large amount of non-time-sensitive data in a reliable way, are handled with transferIn(endpointNumber, length) and transferOut(endpointNumber, data).
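
As a rough sketch of an INTERRUPT transfer loop (the endpoint number 1 and the 8-byte packet size are assumptions, not values from this article):

function pollInterruptEndpoint(device) {
  // Wait for up to 8 bytes from the (assumed) IN endpoint #1.
  return device.transferIn(1, 8).then(result => {
    console.log('Interrupt data:', new Uint8Array(result.data.buffer));
    return pollInterruptEndpoint(device); // Keep listening.
  });
}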

You may also want to have a look at Mike Tsao’s WebLight project which provides a ground-up example of building a USB-controlled LED device designed for the WebUSB API (not using an Arduino here). You’ll find hardware, software, and firmware.

Tips

Debugging USB in Chrome is easier with the internal page chrome://device-log where you can see all USB device related events in one single place.

[Image: chrome://device-log internal page]

Early adopters who want to test their existing devices with WebUSB, before updating their firmware or before the Public Device Registry is implemented, are not out of luck. To disable checking of the WebUSB allowed-origins descriptors (which implement the CORS-like mechanism that secures origin-to-device communications), run Chrome with the --disable-webusb-security switch.

On most Linux systems, USB devices are mapped with read-only permissions by default. To allow Chrome to open a USB device, you will need to add a new udev rule. Create a file at /etc/udev/rules.d/50-yourdevicename.rules with the following content:

SUBSYSTEM=="usb", ATTR{idVendor}=="[yourdevicevendor]", MODE="0664", GROUP="plugdev"

where [yourdevicevendor] is 2341 if your device is an Arduino for instance. ATTR{idProduct} can also be added for a more specific rule. Make sure your user is a member of the plugdev group. Then, just reconnect your device.

What’s next

A second iteration of the WebUSB API will look at Shared Worker and Service Worker support. Imagine for instance a security key website using the WebUSB API that would install a service worker to act as a middle man to authenticate users.

And for your greatest pleasure, the WebUSB API is already available now on Android in Chrome 54.

Resources

Please share your WebUSB demos with the #webusb hashtag.

Intervening against document.write()


Have you recently seen a warning like the following in your Developer Console in Chrome and wondered what it was?

(index):34 A Parser-blocking, cross-origin script, https://paul.kinlan.me/ad-inject.js, is invoked via document.write(). This may be blocked by the browser if the device has poor network connectivity.

Composability is one of the great powers of the web, allowing us to easily integrate with services built by third parties to build great new products! One of the downsides of composability is that it implies a shared responsibility over the user experience. If the integration is sub-optimal, the user experience will be negatively impacted.

One known cause of poor performance is the use of document.write() inside pages, specifically those uses that inject scripts. As innocuous as the following looks, it can cause real issues for users.

document.write('<script src="https://paul.kinlan.me/ad-inject.js"></script>');

Before the browser can render a page, it has to build the DOM tree by parsing the HTML markup. Whenever the parser encounters a script, it has to stop and execute it before it can continue parsing the HTML. If the script dynamically injects another script, the parser is forced to wait even longer for the resource to download, which can incur one or more network round trips and delay the time to first render of the page.

For users on slow connections, such as 2G, external scripts dynamically injected via document.write() can delay the display of main page content for tens of seconds, or cause pages to either fail to load or take so long that the user just gives up. Based on instrumentation in Chrome, we’ve learned that pages featuring third-party scripts inserted via document.write() are typically twice as slow to load as other pages on 2G.

We collected data from a 28 day field trial on 1% of Chrome stable users, restricted to users on 2G connections. We saw that 7.6% of all page loads on 2G included at least one cross-origin, parser-blocking script that was inserted via document.write() in the top level document. As a result of blocking the load of these scripts, we saw the following improvements on those loads:

  • 10% more page loads reaching first contentful paint (a visual confirmation for the user that the page is effectively loading), 25% more page loads reaching the fully parsed state, and 10% fewer reloads, suggesting a decrease in user frustration.
  • A 21% decrease in the mean time until first contentful paint (over one second faster).
  • A 38% reduction in the mean time it takes to parse a page, representing an improvement of nearly six seconds, dramatically reducing the time it takes to display what matters to the user.

With this data in mind, the Chrome team has recently announced an intention to intervene on behalf of all users when we detect this known-bad pattern, by changing how document.write() is handled in Chrome (see Chrome Status). Specifically, Chrome will not execute <script> elements injected via document.write() when all of the following conditions are met:

  1. The user is on a slow connection, specifically when the user is on 2G. (In the future, the change might be extended to other users on slow connections, such as slow 3G or slow WiFi.)
  2. The document.write() is in a top-level document. The intervention does not apply to scripts inserted via document.write() within iframes, as they don’t block the rendering of the main page.
  3. The script in the document.write() is parser-blocking. Scripts with the ‘async’ or ‘defer’ attributes will still execute.
  4. The script is not already in the browser HTTP cache. Scripts in the cache will not incur a network delay and will still execute.
  5. The request for the page is not a reload. Chrome will not intervene if the user triggered a reload and will execute the page as normal.

Third party snippets sometimes use document.write() to load scripts. Fortunately, most third parties provide asynchronous loading alternatives, which allow third party scripts to load without blocking the display of the rest of the content on the page.

How do I fix this?

The simple answer is: don’t inject scripts using document.write(). We are maintaining a set of known services that provide asynchronous loader support that we encourage you to keep checking.

If your provider is not on the list and does support asynchronous script loading then please let us know and we can update the page to help all users.

If your provider does not support the ability to asynchronously load scripts into your page then we encourage you to contact them and let us and them know how they will be affected.

If your provider gives you a snippet that includes document.write(), it might be possible for you to add an async attribute to the script element, or to add the script elements with DOM APIs like document.appendChild() or parentNode.insertBefore(), much like Google Analytics does.
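
For instance, a minimal sketch of the DOM-insertion approach (the script URL here is hypothetical):

// Instead of document.write('<script src="...">...'), create the script
// element yourself; the async attribute keeps it from blocking the parser.
var script = document.createElement('script');
script.src = 'https://third-party.example.com/widget.js';
script.async = true;
document.head.appendChild(script);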

How to detect when your site is affected

There are a large number of criteria that determine whether the restriction is enforced, so how do you know if you are affected?

Detecting when a user is on 2G

To understand the potential impact of this change you first need to understand how many of your users will be on 2G. You can detect the user’s current network type and speed by using the Network Information API that is available in Chrome and then send a heads-up to your analytic or Real User Metrics (RUM) systems.

if (navigator.connection &&
    navigator.connection.type === 'cellular' &&
    navigator.connection.downlinkMax <= 0.115) {
  // Notify your service to indicate that you might be affected by this restriction.
}

Catch warnings in Chrome DevTools

Since Chrome 53, DevTools issues warnings for problematic document.write() statements. Specifically, if a document.write() request meets criteria 2 to 5 (Chrome ignores the connection criterion when sending this warning), the warning will look something like the one quoted at the top of this article.

Seeing warnings in Chrome DevTools is great, but how do you detect this at scale? You can check for HTTP headers that are sent to your server when the intervention happens.

Check your HTTP headers on the script resource

When a script inserted via document.write has been blocked, Chrome will send the following header to the requested resource:

Intervention: <https://shorturl/relevant/spec>;

When a script inserted via document.write is found and could be blocked in different circumstances, Chrome might send:

Intervention: <https://shorturl/relevant/spec>; level="warning"

The intervention header will be sent as part of the GET request for the script (asynchronously in case of an actual intervention).
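
If you want to detect this at scale, you could log requests that arrive with the header on your server. Here’s a minimal sketch using Node.js with Express (an assumption; the article doesn’t prescribe a server stack):

const express = require('express');
const app = express();

// Log any script request that carries an Intervention header.
app.use('/ad-inject.js', (req, res, next) => {
  const intervention = req.get('Intervention');
  if (intervention) {
    console.log('Chrome reported an intervention:', intervention);
  }
  next();
});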

What does the future hold?

The initial plan is to execute this intervention when we detect the criteria being met. We started with showing just a warning in the Developer Console in Chrome 53. (Beta was in July 2016. We expect Stable to be available for all users in September 2016.)

We will intervene to block injected scripts for 2G users tentatively starting in Chrome 54, which is estimated to be in a stable release for all users in mid-October 2016. Check out the Chrome Status entry for more updates.

Over time, we’re looking to intervene when any user has a slow connection (i.e., slow 3G or WiFi). Follow this Chrome Status entry.

Want to learn more?

To learn more, see these additional resources:

BroadcastChannel API: a message bus for the web


The BroadcastChannel API allows same-origin scripts to send messages to other browsing contexts. It can be thought of as a simple message bus that allows pub/sub semantics between windows/tabs, iframes, web workers, and service workers.

API Basics

The Broadcast Channel API is a simple API that makes communicating between browsing contexts easier. That is, communicating between windows/tabs, iframes, web workers, and service workers. Messages which are posted to a given channel are delivered to all listeners of that channel.

The BroadcastChannel constructor takes a single parameter: the name of a channel. The name identifies the channel and lives across browsing contexts.

// Connect to the channel named "my_bus".
const channel = new BroadcastChannel('my_bus');

// Send a message on "my_bus".
channel.postMessage('This is a test message.');

// Listen for messages on "my_bus".
channel.onmessage = function(e) {
  console.log('Received', e.data);
};

// Close the channel when you're done.
channel.close();

Sending messages

Messages can be strings or anything supported by the structured clone algorithm (Strings, Objects, Arrays, Blobs, ArrayBuffer, Map).

Example - sending a Blob or File

channel.postMessage(new Blob(['foo', 'bar'], {type: 'text/plain'}));

A channel won’t broadcast to itself. So if you have an onmessage listener on the same page as a postMessage() to the same channel, that message event doesn’t fire.

Differences with other techniques

At this point you might be wondering how this relates to other techniques for message passing like WebSockets, SharedWorkers, the MessageChannel API, and window.postMessage(). The Broadcast Channel API doesn’t replace these APIs. Each serves a purpose. The Broadcast Channel API is meant for easy one-to-many communication between scripts on the same origin.

Some use cases for broadcast channels:

  • Detect user actions in other tabs
  • Know when a user logs into an account in another window/tab.
  • Instruct a worker to perform some background work
  • Know when a service is done performing some action.
  • When the user uploads a photo in one window, pass it around to other open pages.

Example - page that knows when the user logs out, even from another open tab on the same site:

<button id="logout">Logout</button>

<script>
function doLogout() {
  // update the UI login state for this page.
}

const authChannel = new BroadcastChannel('auth');

const button = document.querySelector('#logout');
button.addEventListener('click', e => {
  // A channel won't broadcast to itself so we invoke doLogout()
  // manually on this page.
  doLogout();
  authChannel.postMessage({cmd: 'logout', user: 'Eric Bidelman'});
});

authChannel.onmessage = function(e) {
  if (e.data.cmd === 'logout') {
    doLogout();
  }
};
</script>

In another example, let’s say you wanted to instruct a service worker to remove cached content after the user changes their “offline storage setting” in your app. You could delete their caches using window.caches, but the service worker may already contain a utility to do this. We can use the Broadcast Channel API to reuse that code! Without the Broadcast Channel API, you’d have to loop over the results of self.clients.matchAll() and call postMessage() on each client in order to achieve the communication from a service worker to all of its clients (actual code that does that). Using a Broadcast Channel makes this O(1) instead of O(N).
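
For reference, that one-message-per-client pattern looks roughly like this inside the service worker:

// Without the Broadcast Channel API: message each window client individually.
self.clients.matchAll().then(clients => {
  clients.forEach(client => {
    client.postMessage({action: 'clearcache'});
  });
});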

Example - instruct a service worker to remove a cache, reusing its internal utility methods.

// In index.html

const channel = new BroadcastChannel('app-channel');
channel.postMessage({action: 'clearcache'});

channel.onmessage = function(e) {
  if (e.data.action === 'clearcache') {
    console.log('Cache removed:', e.data.removed);
  }
};

// In sw.js

function nukeCache(cacheName) {
  return caches.delete(cacheName).then(removed => {
    // ...do more stuff (internal) to this service worker...
    return removed;
  });
}

const channel = new BroadcastChannel('app-channel');

channel.onmessage = function(e) {
  const action = e.data.action;
  const cacheName = e.data.cacheName;

  if (action === 'clearcache') {
    nukeCache(cacheName).then(removed => {
      channel.postMessage({action, removed});
    });
  }
};

Difference with postMessage()

Unlike postMessage(), you no longer have to maintain a reference to an iframe or worker in order to communicate with it:

// Don't have to save references to window objects.
const popup = window.open('https://another-origin.com', ...);
popup.postMessage('Sup popup!', 'https://another-origin.com');

window.postMessage() also allows you to communicate across origins. The Broadcast Channel API is same-origin. Since messages are guaranteed to come from the same origin, there’s no need to validate them like we used to with window.postMessage():

// Don't have to validate the origin of a message.
const iframe = document.querySelector('iframe');
iframe.contentWindow.onmessage = function(e) {
  if (e.origin !== 'https://expected-origin.com') {
    return;
  }
  e.source.postMessage('Ack!', e.origin);
};

Simply “subscribe” to a particular channel and have secure, bidirectional communication!

Difference with SharedWorkers

Use BroadcastChannel for simple cases where you need to send a message to potentially several windows/tabs, or workers.

For fancier use cases like managing locks, shared state, synchronizing resources between a server and multiple clients, or sharing a WebSocket connection with a remote host, shared workers are the most appropriate solution.

Difference with MessageChannel API

The main difference between the Channel Messaging API and BroadcastChannel is that the latter is a means to dispatch messages to multiple listeners (one-to-many). MessageChannel is meant for one-to-one communication directly between scripts. It’s also more involved, requiring you to set up channels with a port on each end.
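
A minimal sketch of that port-based setup:

// MessageChannel: one-to-one messaging over a pair of entangled ports.
const messageChannel = new MessageChannel();
messageChannel.port1.onmessage = e => console.log('port1 received:', e.data);
// In real code, port2 is usually transferred to an iframe or worker via
// postMessage(); here we use it directly for illustration.
messageChannel.port2.postMessage('ping');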

Feature detection & browser support

Currently, Chrome 54, Firefox 38, and Opera 41 support the Broadcast Channel API.

if ('BroadcastChannel' in self) {
  // BroadcastChannel API supported!
}

As for polyfills, there are a few out there:

I haven’t tried these, so your mileage may vary.

Resources

Options of a PushSubscription


When a pushsubscriptionchange event occurs, it’s an opportunity for a developer to re-subscribe the user for push. One of the pain points of this is that to re-subscribe a user, the developer has to keep the applicationServerKey (and any other subscribe() options) in sync between the web page’s JavaScript and their service worker.

In Chrome 54 and later you can now access the options via the options parameter in a subscription object, known as PushSubscriptionOptions.

You can copy and paste the following code snippet into simple-push-demo to see what the options look like. The code simply gets the current subscription and prints out subscription.options.

navigator.serviceWorker.ready.then(registration => {
  return registration.pushManager.getSubscription();
})
.then(subscription => {
  if (!subscription) {
    console.log('No subscription 😞');
    return;
  }

  console.log('Here are the options 🎉');
  console.log(subscription.options);
});

With this small piece of information you can re-subscribe a user in the pushsubscriptionchange event like so:

self.addEventListener('pushsubscriptionchange', e => {
  e.waitUntil(registration.pushManager.subscribe(e.oldSubscription.options)
    .then(subscription => {
      // TODO: Send new subscription to application server
    }));
});

It’s a small change that will be super useful in the future.

DevTools Digest, September 2016: Perf Roundup


Hallo! It’s Kayce again, tech writer for DevTools. For this DevTools Digest I thought I’d switch it up a little and do a roundup of some perf tooling improvements in DevTools over the last few Chrome releases.

All features are already in Chrome Stable unless noted otherwise.

CPU Throttling for a Mobile-First World

Available in Chrome 54, which is currently Canary.

Software is eating the world, and mobile is eating software. DevTools is steadily evolving to better meet the needs of a mobile-first development world. The latest development in DevTools’ mobile-first tooling is CPU Throttling. Use this feature to gain better awareness of how your site performs on resource-constrained devices.

Select one of the options from the CPU Throttling dropdown menu on the Timeline panel to handicap the computing power of your development machine.

[Image: CPU Throttling dropdown]

Some notes about CPU throttling:

  • Throttling immediately takes effect and continues until you disable it, just like network throttling.
  • This feature is for general awareness of how your site would probably perform on a resource-constrained device. It’s impossible for DevTools to truly emulate the performance characteristics of a mobile system on chip.
  • Throttling is relative to your development machine. In other words, 5x throttling on a top-of-the-line desktop will yield different results than 5x throttling on a five-year-old budget laptop.

With that said, combine CPU Throttling with Network Throttling and Device Mode, and you start to get a much better picture about how your site will look and perform on mobile devices, right from the convenience of your development machine browser.

Network View in Timeline Recordings

Enable the Network checkbox next time you take a Timeline recording to analyze how your page downloaded its resources. Click on a resource to view more information about it in the Summary pane.

[Image: Network view in Timeline]

The Initiator field in the summary is particularly useful. This field tells you where the resource is being requested.

Passive Event Listeners

Passive event listeners are an emerging standard to improve scroll performance. Check out this article by yours truly to learn more:

Improving scroll performance with passive event listeners

DevTools has shipped a couple features to help you find listeners that could benefit from a little {passive: true} love.
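
Marking a listener as passive is a one-line change; here’s a minimal sketch (handleTouch is a hypothetical handler):

function handleTouch(e) {
  // Read-only work; a passive listener must not call e.preventDefault().
}

// The {passive: true} flag tells the browser this listener won't block
// scrolling, so scrolling can start immediately.
document.addEventListener('touchstart', handleTouch, {passive: true});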

First off, the Console emits a warning when a synchronous listener is blocking page scroll for unreasonable amounts of time.

[Image: synchronous listener warning]

You can test this out for yourself in the demo below:

Scroll jank due to touch/wheel handlers demo

Next, you can use the little dropdown menu on the Event Listeners pane to filter for passive or blocking listeners.

[Image: passive listeners filter]

Last, you can toggle the passive or blocking state of a listener by hovering over it and pressing Toggle Passive. This feature is currently limited to touchstart, touchmove, mousewheel, and wheel event listeners.

[Image: Toggle Passive button]

I’ll wrap this section up with a little tip. Enable the Scrolling Performance Issues checkbox on the Rendering drawer to get a visual representation of potential scrolling issues. When a section of a page is highlighted, it means that there is a listener bound to that section of the page that might negatively affect scroll performance.

Scrolling performance issues demo

Group by Activity

Back in mid-June the Call Tree pane on the Timeline panel got a new sorting category: Group by Activity. This grouping lets you view how much time your page spent parsing HTML, evaluating scripts, painting, and so on.

[Image: Group by Activity]

Timeline Stats in the Sources Panel

Create a Timeline recording with the JS Profile option enabled, and you can see a function-by-function breakdown of execution times in the Sources panel.

[Image: Timeline stats in the Sources panel]

Share your perspective

As always, we’d love to hear your feedback or ideas on anything DevTools related.

Until next month!


CacheQueryOptions arrive in Chrome 54


If you use the Cache Storage API, either within a service worker or directly from web apps via window.caches, there’s some good news: starting in Chrome 54, the full set of CacheQueryOptions is supported, making it easier to find the cached responses you’re looking for.

What options are available?

The following options can be set in any call to CacheStorage.match() or Cache.match(). When not set, they all default to false (or undefined for cacheName), and you can use multiple options in a single call to match().

ignoreSearch

This instructs the matching algorithm to ignore the search portion of a URL, also known as the URL query parameters. This can come in handy when you have a source URL that contains query parameters that are used for, e.g., analytics tracking, but are not significant in terms of uniquely identifying a resource in the cache. For example, many folks have fallen prey to the following service worker “gotcha”:

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open('my-cache')
      .then(cache => cache.add('index.html'))
  );
});

self.addEventListener('fetch', event => {
  // Make sure this is a navigation request before responding.
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match(event.request)
        .then(response => response || fetch(event.request))
    );
  }
});

This sort of code works as expected when a user navigates directly to index.html, but what if your web app uses an analytics provider to keep track of inbound links, and the user navigates to index.html?utm_source=some-referral? By default, passing index.html?utm_source=some-referral to caches.match() won’t return the entry for index.html. But if ignoreSearch is set to true, you can retrieve the cached response you’d expect regardless of what query parameters are set:

caches.match(event.request, {ignoreSearch: true})

cacheName

cacheName comes in handy when you have multiple caches and you want a response that’s stored in one specific cache. Using it can make your queries more efficient (since the browser only has to check inside one cache, instead of all of them) and allows you to retrieve a specific response for a given URL when multiple caches might have that URL as a key. cacheName only has an effect when used with CacheStorage.match(), not Cache.match(), because Cache.match() already operates on a single, named cache.

// The following are functionally equivalent:
caches.open('my-cache')
  .then(cache => cache.match('index.html'));

// or...
caches.match('index.html', {cacheName: 'my-cache'});

ignoreMethod and ignoreVary

ignoreMethod and ignoreVary are a bit more niche than ignoreSearch and cacheName, but they serve specific purposes.

ignoreMethod allows you to pass in a Request object that has any method (POST, PUT, etc.) as the first parameter to match(). Normally, only GET or HEAD requests are allowed.

// In a more realistic scenario, postRequest might come from
// the request property of a FetchEvent.
const postRequest = new Request('index.html', {method: 'post'});

// This will never match anything.
caches.match(postRequest);

// This will match index.html in any cache.
caches.match(postRequest, {ignoreMethod: true});

If set to true, ignoreVary means that cache lookups will be done without regard to any Vary headers that are set in the cached responses. If you know that you are not dealing with cached responses that use the Vary header, then you don’t have to worry about setting this option.
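
For example, if a response was cached with a Vary: Accept-Language header, a lookup that would otherwise miss can opt out of the Vary check:

// Match index.html even if the cached response carries a Vary header
// that would normally cause this lookup to miss.
caches.match('index.html', {ignoreVary: true});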

Browser support

CacheQueryOptions is only relevant in browsers that support the Cache Storage API. Besides Chrome and Chromium-based browsers, that’s currently limited to Firefox, which already natively supports CacheQueryOptions.

Developers who want to use CacheQueryOptions in versions of Chrome prior to 54 can make use of a polyfill, courtesy of Arthur Stolyar.

Cross-origin Service Workers: Experimenting with Foreign Fetch

$
0
0

Background

Service workers give web developers the ability to respond to network requests made by their web applications, allowing them to continue working even while offline, fight lie-fi, and implement complex cache interactions like stale-while-revalidate. But service workers have historically been tied to a specific origin—as the owner of a web app, it’s your responsibility to write and deploy a service worker to intercept all the network requests your web app makes. In that model, each service worker is responsible for handling even cross-origin requests, for example to a third-party API or for web fonts.

What if a third-party provider of an API, or web fonts, or other commonly used service had the power to deploy their own service worker that got a chance to handle requests made by other origins to their origin? Providers could implement their own custom networking logic, and take advantage of a single, authoritative cache instance for storing their responses. Now, thanks to foreign fetch, that type of third-party service worker deployment is a reality.

Deploying a service worker that implements foreign fetch makes sense for any provider of a service that’s accessed via HTTPS requests from browsers—just think about scenarios in which you could provide a network-independent version of your service, in which browsers could take advantage of a common resource cache. Services that could benefit from this include, but are not limited to:

  • API providers with RESTful interfaces
  • Web font providers
  • Analytics providers
  • Image hosting providers
  • Generic content delivery networks

Imagine, for instance, that you’re an analytics provider. By deploying a foreign fetch service worker, you can ensure that all requests to your service that fail while a user is offline are queued and replayed once connectivity returns. While it’s been possible for a service’s clients to implement similar behavior via first-party service workers, requiring each and every client to write bespoke logic for your service is not as scalable as relying on a shared foreign fetch service worker that you deploy.

Prerequisites

Origin Trial token

Foreign fetch is still considered experimental. In order to keep from prematurely baking this design in before it’s fully specified and agreed upon by browser vendors, it’s been implemented in Chrome 54 as an Origin Trial. As long as foreign fetch remains experimental, to use this new feature with the service you host, you’ll need to request a token that’s scoped to your service’s specific origin. The token should be included as an HTTP response header in all cross-origin requests for resources that you want to handle via foreign fetch, as well as in the response for your service worker JavaScript resource:

Origin-Trial: token_obtained_from_signup

The trial will end in March 2017. By that point, we expect to have figured out any changes necessary to stabilize the feature, and (hopefully) enable it by default. If foreign fetch is not enabled by default by that time, the functionality tied to existing Origin Trial tokens will stop working.

To facilitate experimenting with foreign fetch prior to registering for an official Origin Trial token, you can bypass the requirement in Chrome for your local computer by going to chrome://flags/#enable-experimental-web-platform-features and enabling the “Experimental Web Platform features” flag. Please note that this needs to be done in every instance of Chrome that you want to use in your local experimentations, whereas with an Origin Trial token the feature will be available to all of your Chrome users.

HTTPS

As with all service worker deployments, the web server you use for serving both your resources and your service worker script needs to be accessed via HTTPS. Additionally, foreign fetch interception only applies to requests that originate from pages hosted on secure origins, so the clients of your service need to use HTTPS to take advantage of your foreign fetch implementation.

Using Foreign Fetch

With the prerequisites out of the way, let’s dive into the technical details needed to get a foreign fetch service worker up and running.

Registering your service worker

The first challenge that you’re likely to bump into is how to register your service worker. If you’ve worked with service workers before, you’re probably familiar with the following:

// You can't do this!
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('service-worker.js');
}

This JavaScript code for a first-party service worker registration makes sense in the context of a web app, triggered by a user navigating to a URL you control. But it’s not a viable approach to registering a third-party service worker, when the only interaction the browser will have with your server is requesting a specific subresource, not a full navigation. If the browser requests, say, an image from a CDN server that you maintain, you can’t prepend that snippet of JavaScript to your response and expect that it will be run. A different method of service worker registration, outside the normal JavaScript execution context, is required.

The solution comes in the form of an HTTP header that your server can include in any response:

Link: </service-worker.js>; rel="serviceworker"; scope="/"

Let’s break down that example header into its components, each of which is separated by a ; character.

  • </service-worker.js> is required, and is used to specify the path to your service worker file (replace /service-worker.js with the appropriate path to your script). This corresponds directly to the scriptURL string that would otherwise be passed as the first parameter to navigator.serviceWorker.register(). The value needs to be enclosed in <> characters (as required by the Link header specification), and if a relative rather than absolute URL is provided, it will be interpreted as being relative to the location of the response.
  • rel="serviceworker" is also required, and should be included without any need for customization.
  • scope="/" is an optional scope declaration, equivalent to the options.scope string you can pass in as the second parameter to navigator.serviceWorker.register(). For many use cases, you’re fine with using the default scope, so feel free to leave this out unless you know that you need it. The same restrictions around maximum allowed scope, along with the ability to relax those restrictions via the Service-Worker-Allowed header, apply to Link header registrations.

Just like with a “traditional” service worker registration, using the Link header will install a service worker that will be used for the next request made against the registered scope. The body of the response that includes the special header will be used as-is, and is available to the page immediately, without waiting for the foreign service worker to finish installation.

Remember that foreign fetch is currently implemented as an Origin Trial, so alongside your Link response header, you’ll need to include a valid Origin-Trial header as well. The minimum set of response headers to add in order to register your foreign fetch service worker is:

Link: </service-worker.js>; rel="serviceworker"
Origin-Trial: token_obtained_from_signup

Note: Astute readers of the service worker specification may have noticed another means of performing service worker registration, via a <link rel="serviceworker"> DOM element. Support for <link>-based registration in Chrome is currently controlled by the same Origin Trial as the Link header, so it is not yet enabled by default. <link>-based registration has the same limitations as JavaScript-based registration when it comes to foreign fetch registration, so for the purposes of this article, the Link header is what you should be using.

Debugging Registration

During development, you’ll probably want to confirm that your foreign fetch service worker is properly installed and processing requests. There are a few things you can check in Chrome’s Developer Tools to confirm that things are working as expected.

Are the proper response headers being sent?

In order to register the foreign fetch service worker, you need to set a Link header on a response to a resource hosted on your domain, as described earlier in this post. During the Origin Trial period, and assuming you don’t have chrome://flags/#enable-experimental-web-platform-features set, you also need to set an Origin-Trial response header. You can confirm that your web server is setting those headers by looking at the entry in the Network panel of DevTools:

[Image: headers displayed in the Network panel of DevTools]

Is the foreign fetch service worker properly registered?

You can also confirm the underlying service worker registration, including its scope, by looking at the full list of service workers in the Application panel of DevTools. Make sure to select the “Show all” option, since by default, you’ll only see service workers for the current origin.

[Image: the foreign fetch service worker in the Application panel of DevTools]

The install event handler

Now that you’ve registered your third-party service worker, it will get a chance to respond to the install and activate events, just like any other service worker would. It can take advantage of those events to, for example, populate caches with required resources during the install event, or prune out-of-date caches in the activate event.

Beyond normal install event caching activities, there’s an additional step that’s required inside your third-party service worker’s install event handler. Your code needs to call registerForeignFetch(), as in the following example:

self.addEventListener('install', event => {
  event.registerForeignFetch({
    scopes: [self.registration.scope], // or some sub-scope
    origins: ['*'] // or ['https://example.com']
  });
});

There are two configuration options, both required:

  • scopes takes an array of one or more strings, each of which represents a scope for requests that will trigger a foreignfetch event. But wait, you may be thinking, I’ve already defined a scope during service worker registration! That’s true, and that overall scope is still relevant—each scope that you specify here must be either equal to or a sub-scope of the service worker’s overall scope. The additional scoping restrictions here allow you to deploy an all-purpose service worker that can handle both first-party fetch events (for requests made from your own site) and third-party foreignfetch events (for requests made from other domains), and make it clear that only a subset of your larger scope should trigger foreignfetch. In practice, if you’re deploying a service worker dedicated to handling only third-party, foreignfetch events, you’re just going to want to use a single, explicit scope that’s equal to your service worker’s overall scope. That’s what the example above will do, using the value self.registration.scope.
  • origins also takes an array of one or more strings, and allows you to restrict your foreignfetch handler to only respond to requests from specific domains. For example, if you explicitly whitelist 'https://example.com', then a request made from a page hosted at https://example.com/path/to/page.html for a resource served from your foreign fetch scope will trigger your foreign fetch handler, but requests made from https://random-domain.com/path/to/page.html won't trigger your handler. Unless you have a specific reason to only trigger your foreign fetch logic for a subset of remote origins, you can just specify '*' as the only value in the array, and all origins will be whitelisted.

The foreignfetch event handler

Now that you’ve installed your third-party service worker and it’s been configured via registerForeignFetch(), it will get a chance to intercept cross-origin subresource requests to your server that fall within the foreign fetch scope.

Note: There’s an additional restriction in Chrome’s current implementation: only GET, POST, or HEAD requests that contain only CORS-safelisted headers are eligible for foreign fetch. This restriction is not part of the foreign fetch specification and may be relaxed in future versions of Chrome.

In a traditional, first-party service worker, each request would trigger a fetch event that your service worker had a chance to respond to. Our third-party service worker is given a chance to handle a slightly different event, named foreignfetch. Conceptually, the two events are quite similar, and they give you the opportunity to inspect the incoming request, and optionally provide a response to it via respondWith():

self.addEventListener('foreignfetch', event => {
  // Assume that requestLogic() is a custom function that takes
  // a Request and returns a Promise which resolves with a Response.
  event.respondWith(
    requestLogic(event.request).then(response => {
      return {
        response: response,
        // Omit origin to return an opaque response.
        // With this set, the client will receive a CORS response.
        origin: event.origin,
        // Omit headers unless you need additional header filtering.
        // With this set, only Content-Type will be exposed.
        headers: ['Content-Type']
      };
    })
  );
});

Despite the conceptual similarities, there are a few differences in practice when calling respondWith() on a ForeignFetchEvent. Instead of just providing a Response (or Promise that resolves with a Response) to respondWith(), like you do with a FetchEvent, you need to pass a Promise that resolves with an Object with specific properties to the ForeignFetchEvent’s respondWith():

  • response is required, and must be set to the Response object that will be returned to the client that made the request. If you provide anything other than a valid Response, the client’s request will be terminated with a network error. Unlike when calling respondWith() inside a fetch event handler, you must provide a Response here, not a Promise which resolves with a Response! You can construct your response via a promise chain, and pass that chain as the parameter to foreignfetch’s respondWith(), but the chain must resolve with an Object that contains the response property set to a Response object. You can see a demonstration of this in the code sample above.
  • origin is optional, and it's used to determine whether or not the response that's returned is opaque. If you leave this out, the response will be opaque, and the client will have limited access to the response's body and headers. If the request was made with mode: 'cors', then returning an opaque response will be treated as an error. However, if you specify a string value equal to the origin of the remote client (which can be obtained via event.origin), you're explicitly opting in to providing a CORS-enabled response to the client.
  • headers is also optional, and is only useful if you’re also specifying origin and returning a CORS response. By default, only headers in the CORS-safelisted response header list will be included in your response. If you need to further filter what’s returned, headers takes a list of one or more header names, and it will use that as a whitelist of which headers to expose in the response. This allows you to opt-in to CORS while still preventing potentially sensitive response headers from being exposed directly to the remote client.

It’s important to note that when the foreignfetch handler is run, it has access to all the credentials and ambient authority of the origin hosting the service worker. As a developer deploying a foreign fetch-enabled service worker, it’s your responsibility to ensure that you do not leak any privileged response data that would not otherwise be available by virtue of those credentials. Requiring an opt-in for CORS responses is one step to limit inadvertent exposure, but as a developer you can explicitly make fetch() requests inside your foreignfetch handler that do not use the implied credentials via:

self.addEventListener('foreignfetch', event => {
  // The new Request will have credentials omitted by default.
  const noCredentialsRequest = new Request(event.request.url);
  event.respondWith(
    // Replace with your own request logic as appropriate.
    fetch(noCredentialsRequest)
      .catch(() => caches.match(noCredentialsRequest))
      .then(response => ({response}))
  );
});

Client considerations

There are some additional considerations that affect how your foreign fetch service worker handles requests made from clients of your service.

Clients that have their own first-party service worker

Some clients of your service may already have their own first-party service worker, handling requests originating from their web app. What does this mean for your third-party, foreign fetch service worker?

The fetch handler(s) in a first-party service worker get the first opportunity to respond to all requests made by the web app, even if there’s a third-party service worker with foreignfetch enabled with a scope that covers the request. But clients with first-party service workers can still take advantage of your foreign fetch service worker!

Inside a first-party service worker, using fetch() to retrieve cross-origin resources will trigger the appropriate foreign fetch service worker. That means code like the following can take advantage of your foreignfetch handler:

// Inside a client's first-party service-worker.js:
self.addEventListener('fetch', event => {
  // If event.request is under your foreign fetch service worker's
  // scope, this will trigger your foreignfetch handler.
  event.respondWith(fetch(event.request));
});

Similarly, if there are first-party fetch handlers, but they don’t call event.respondWith() when handling requests for your cross-origin resource, the request will automatically “fall through” to your foreignfetch handler:

// Inside a client's first-party service-worker.js:
self.addEventListener('fetch', event => {
  if (event.request.mode === 'same-origin') {
    event.respondWith(localRequestLogic(event.request));
  }

  // Since event.respondWith() isn't called for cross-origin requests,
  // any foreignfetch handlers scoped to the request will get a chance
  // to provide a response.
});

If a first-party fetch handler calls event.respondWith() but does not use fetch() to request a resource under your foreign fetch scope, then your foreign fetch service worker will not get a chance to handle the request.

Clients that don’t have their own service worker

All clients that make requests to a third-party service can benefit when the service deploys a foreign fetch service worker, even if they aren’t already using their own service worker. There is nothing specific that clients need to do in order to opt-in to using a foreign fetch service worker, as long as they’re using a browser that supports it. This means that by deploying a foreign fetch service worker, your custom request logic and shared cache will benefit many of your service’s clients immediately, without them taking further steps.

Putting it all together: where clients look for a response

Taking into account the information above, we can assemble a hierarchy of sources a client will use to find a response for a cross-origin request.

  1. A first-party service worker’s fetch handler (if present)
  2. A third-party service worker’s foreignfetch handler (if present, and only for cross-origin requests)
  3. The browser’s HTTP cache (if a fresh response exists)
  4. The network

The browser starts from the top and, depending on the service worker implementation, will continue down the list until it finds a source for the response.

Learn more

Stay up to date

Chrome's implementation of the foreign fetch Origin Trial is subject to change as we address feedback from developers. We'll keep this post up to date via inline changes, and will make note of the specific changes below as they happen. We'll also share information about major changes via the @chromiumdev Twitter account.

API Deprecations and Removals in Chrome 54

In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and also capabilities of the Web Platform. This article describes the deprecations and removals in Chrome 54, which is in beta as of September 15. This list is subject to change at any time.

Deprecation policy

To keep the platform healthy, we sometimes remove APIs from the Web Platform which have run their course. There can be many reasons why we would remove an API, such as: they are superseded by newer APIs, they are updated to reflect changes to specifications to bring alignment and consistency with other browsers, or they are early experiments that never came to fruition in other browsers and thus can increase the burden of support for web developers.

Some of these changes will have an effect on a very small number of sites. To mitigate issues ahead of time, we try to give developers advanced notice so that if needed, they can make the required changes to keep their sites running.

Chrome currently has a process for deprecations and removals of APIs, and the TL;DR is:

  • Announce on the blink-dev mailing list.
  • Set warnings and give time scales in the Chrome DevTools Console when usage is detected on a page.
  • Wait, monitor, and then remove feature as usage drops.

You can find a list of all deprecated features in chromestatus.com using the deprecated filter and removed features by applying the removed filter. We will also try to summarize some of the changes, reasoning, and migration paths in these posts.

Disable navigations in the unload handler

TL;DR: All cross-origin navigations will be disallowed in window.onunload event handlers, bringing Chrome in line with the HTML spec as well as Firefox and Safari.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Previous versions of Chrome allowed a cross-origin navigation to be interrupted inside window.onunload by setting window.location.href = '#fragment'. According to the HTML spec, only in-page navigations are allowed in unload handlers, and in previous versions of Chrome other methods of navigating were already blocked as required by the spec. Starting in Chrome 54, such navigations will be disallowed to bring us in line with the spec as well as Firefox and Safari.
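
For illustration, the kind of pattern that will no longer have any effect looks roughly like this (a sketch; the fragment name is arbitrary):

window.addEventListener('unload', () => {
  // In Chrome 54+, this no longer interrupts an in-flight cross-origin
  // navigation; the fragment navigation is simply disallowed.
  window.location.href = '#dont-leave';
});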

HTTP/0.9 deprecated

TL;DR: HTTP/0.9 is deprecated. Developers should move to a later version, preferably HTTP/2.

Intent to Remove | Chromestatus Tracker | Chromium Bug

HTTP/0.9 is the predecessor to HTTP/1.x. It lacks many features of its successors. A particular concern for the modern web is its lack of response headers. Without them, there's no way to verify that an HTTP/0.9 response is really an HTTP/0.9 response. This can cause several problems, including:

  • Clients that treat certain error responses as valid HTTP/0.9 responses.
  • Servers that fail to close the request socket, causing clients to treat the response as a hanging GET that stays alive either eternally or until the user navigates away from the page that made the request.
  • Servers that are unable to indicate to the browser that a request failed, which can cause problems with caching heuristics.

The only foolproof way to fix these issues is to remove support for HTTP/0.9 altogether, which is why it is removed in Chrome 54.

Use of initTouchEvent is removed

TL;DR: initTouchEvent has been deprecated in favor of the TouchEvent constructor to improve spec compliance and will be removed altogether in Chrome 54.

Intent to Remove | Chromestatus Tracker | CRBug Issue

For a long time developers have been able to create synthetic touch events in Chrome using the initTouchEvent API. These are frequently used to simulate touch events, either for testing or for automating some UI in your site. Since Chrome 49, this deprecated API has displayed the following warning:

'TouchEvent.initTouchEvent' is deprecated and will be removed in M53, around September 2016. Please use the TouchEvent constructor instead. See https://www.chromestatus.com/features/5730982598541312 for more details.

Aside from not being in the Touch Events spec, there are a number of reasons why this change is good. The Chrome implementation of initTouchEvent was not compatible at all with Safari's initTouchEvent API, and differed from Firefox on Android's. And finally, the TouchEvent constructor is a lot easier to use.

For these reasons we decided to follow the spec rather than maintain an API that is neither specced nor compatible with the only other implementation. Developers needing an alternative should use the TouchEvent constructor.

Because the iOS and Android/Chrome implementations of the initTouchEvent API were so wildly different, sites would often have code along the lines of the following (frequently forgetting Firefox):

var event = document.createEvent('TouchEvent');

if(ua === 'Android') {
  event.initTouchEvent(touchItem, touchItem, touchItem, "touchstart", window,
    300, 300, 200, 200, false, false, false, false);
} else {
  event.initTouchEvent("touchstart", false, false, window, 0, 300, 300, 200,
    200, false, false, false, false, touches, targetTouches, changedTouches, 0, 0);
}

document.body.dispatchEvent(event);

This is bad because it looks for "Android" in the User-Agent, which Chrome on Android matches, so it hits this deprecation. The User-Agent check can't be removed just yet, though, because there will be other WebKit- and older Blink-based browsers on Android for a while, and for those you will still need to support the older API.

To correctly handle touch events on the web, you should change your code to support Firefox, Edge, and Chrome by checking for the existence of TouchEvent on the window object; if it has a positive "length" (indicating it's a constructor that takes an argument), you should use that.

var event;

if ('TouchEvent' in window && TouchEvent.length > 0) {
  var touch = new Touch({
    identifier: 42,
    target: document.body,
    clientX: 200,
    clientY: 200,
    screenX: 300,
    screenY: 300,
    pageX: 200,
    pageY: 200,
    radiusX: 5,
    radiusY: 5
  });

  event = new TouchEvent("touchstart", {
    cancelable: true,
    bubbles: true,
    touches: [touch],
    targetTouches: [touch],
    changedTouches: [touch]
  });
}
else {
  event = document.createEvent('TouchEvent');

  if(ua === 'Android') {
    event.initTouchEvent(touchItem, touchItem, touchItem, "touchstart", window,
      300, 300, 200, 200, false, false, false, false);
  } else {
    event.initTouchEvent("touchstart", false, false, window, 0, 300, 300, 200,
      200, false, false, false, false, touches, targetTouches,
      changedTouches, 0, 0);
  }
}

document.body.dispatchEvent(event);

KeyboardEvent.keyIdentifier attribute removed

TL;DR: The little-supported KeyboardEvent.keyIdentifier property is being removed in favor of the standards-based KeyboardEvent.key property.

Intent to Remove | Chromestatus Tracker | Chromium Bug

The KeyboardEvent.keyIdentifier attribute was briefly part of a W3C specification in 2009 and 2010. However, it was only ever implemented in WebKit.

Developers needing to replace this attribute can use either the standards-based KeyboardEvent.key property or the KeyboardEvent.code property (as described in an article we did last spring). The former has the widest implementation base, being supported on all major desktop browsers except Safari. The latter is currently supported in Chrome, Firefox, and Opera. Removing this feature is intended to drive adoption of the KeyboardEvent.key property. There is no word from Apple as to whether it will support this; however, the also-deprecated (but not yet removed from Chrome) KeyboardEvent.keyCode and KeyboardEvent.charCode properties are still available in Safari.
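
As a rough migration sketch, you can prefer the standards-based property and fall back to the deprecated ones only where they still exist (the logging is purely illustrative):

document.addEventListener('keydown', (e) => {
  if (e.key !== undefined) {
    console.log('Standards-based value:', e.key); // e.g. "Enter" or "a"
  } else if (e.keyIdentifier !== undefined) {
    console.log('Legacy WebKit value:', e.keyIdentifier); // e.g. "U+0041"
  } else {
    console.log('Last resort:', e.keyCode);
  }
});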

Remove MediaStream ended event and attribute and onended attribute

TL;DR: The ended event and attribute and the onended event handler are being removed because they have been removed from the Media Capture and Streams spec.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Neither the ended event nor the onended event handler has been part of the WebRTC spec for about three years. Developers wanting to watch for these events should use MediaStreamTrack instead of MediaStream.
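
A minimal sketch of the track-based approach, assuming stream is an active MediaStream (for example, one obtained from getUserMedia()):

stream.getTracks().forEach((track) => {
  // Each MediaStreamTrack fires its own ended event.
  track.addEventListener('ended', () => {
    console.log('The ' + track.kind + ' track has ended.');
  });
});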

Deprecate SVGSVGElement.viewPort

The SVGSVGElement.viewPort attribute has not worked in Chrome since 2012. It is not present at all in other browsers, and it has been removed from the specification. For these reasons, the property is being deprecated. Removal is anticipated in Chrome 55.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Deprecate SVGViewElement.viewTarget

The SVGViewElement.viewTarget attribute is not part of the SVG 2.0 specification, and its usage is small or nonexistent. This attribute is deprecated in Chrome 54. Removal is anticipated in Chrome 56.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove SVGZoomEvent

The SVGZoomEvent is not part of the SVG 2.0 specification and does not function in Chromium. Despite that, it is still feature-detectable, which can confuse developers. It will be removed.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Re-rastering Composited Layers on Scale Change

TL;DR

Starting in Chrome 53, all content is re-rastered when its transform scale changes, if it does not have the will-change: transform CSS property. In other words, will-change: transform means “please animate it fast”.

This only applies to transform scales that happen via script manipulation, and does not apply to CSS animations or Web Animations.

This means your site will likely get better-looking content, but it may also be slower without some simple changes outlined below.

Implications for web developers

Under this change, will-change: transform can be thought of as forcing the content to be rastered into a fixed bitmap, which subsequently never changes under transform updates. This allows developers to increase the speed of transform animations on that bitmap, such as moving it around, rotating or scaling it.

Note

  • We do not distinguish between scale and translation transforms.

Put will-change: transform on elements when you need very fast (in other words, 60fps) transform animation speeds, and it is expected that rastering the element at high quality on every frame is not fast enough. Otherwise, avoid will-change: transform.

To optimize the performance-quality tradeoff, you may want to add will-change: transform when animations begin and remove it when they end. Be aware, however, that there is often a large one-time performance cost to adding or removing will-change: transform.

Additional implementation considerations

Removing will-change: transform causes content to be re-rastered at a crisp scale, but only on the next animation frame (via requestAnimationFrame). Thus if you have a layer with will-change: transform on it and simply wish to trigger a re-raster but then continue animating, you must remove will-change: transform, then re-add it in a requestAnimationFrame() callback.

If at any time during an animation you want to raster at the current scale, apply the above technique: remove will-change: transform in one frame, then re-add it in a subsequent frame.
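
Here is a minimal sketch of that remove-then-re-add dance, assuming element is the node being animated:

function rerasterAtCurrentScale(element) {
  // Removing will-change triggers a crisp re-raster on the next frame...
  element.style.willChange = 'auto';
  requestAnimationFrame(() => {
    // ...and re-adding it in a subsequent frame restores fast
    // transform animations.
    element.style.willChange = 'transform';
  });
}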

This may have the side-effect of the content losing its composited layer, causing the above recommendation to be somewhat expensive. If that is a problem, we recommend adding transform: translateZ(0) to the content as well to ensure it remains in a composited layer during this operation.

Summary of impact

This change has implications for rendered content quality, performance, and developer control.

  • Rendered content quality: rendered output of elements which animate transform scale will always be crisp by default.
  • Performance: animating transform when will-change: transform is present will be fast.
  • Developer control: developers can choose between quality and speed, on a per-element and per-animation-frame basis, by adding and removing will-change: transform.

See the referenced design doc above for much more detail.

Examples

In this example, the element with the remainsBlurry id will stay blurry after this change, but the element with the noLongerBlurry id will become crisp. That is because the former has a will-change: transform CSS property on it.

Examples of transform scale animations from real applications

Updates to developers.google.com/web

We launched WebFundamentals two years ago to help ensure that developers had the latest guidance on how to build great sites and apps that worked well on desktop, but more importantly, on mobile.

A lot has changed since then: the mobile web experience has improved dramatically, and it's opened up a ton of new possibilities. Service workers let us build a web that is instant and reliable. Progressive Web Apps raise the bar for building amazing web experiences.

Last week, we launched a new visual design for WebFundamentals to make it easier for you to find the content you're looking for, and get the information you need. We updated content to ensure it's accurate and added a number of new articles to help you build better web experiences and amazing Progressive Web Apps.

Some of the new content includes:

Of course, we still have lots of work to do, there's new guidance that needs to be developed, new content that needs to be written, and issues that need to be fixed. But, we're working on it!

One of the goals for this update was to make it easier for you to contribute. We've greatly simplified the development process, removing many of the prerequisites that used to exist, and shortened the deployment process. If you find an issue, you can either file it in our issue tracker or fix it yourself and submit a pull request to our WebFundamentals GitHub repository.

As we create and update developers.google.com/web we also have to think about the future of our other resources. Many of you will know that our team created and, through the community, supported the growth of HTML5Rocks, but over the last two years it has seen no updates. We've already migrated updates.html5rocks.com to Web Updates, and we are working to move additional content from HTML5Rocks here. We've added support for HTTPS to HTML5Rocks, and we are committed to ensuring that the great content that's there won't disappear.

I personally want to thank our contributors, the developers who have helped translate content, and you. Your feedback, bug reports, translations, new content, questions, and the content you've contributed to HTML5Rocks and WebFundamentals have been invaluable. We couldn't have done it without your help! Thank you!

Intervening against document.write()

Have you recently seen a warning like the following in your Developer Console in Chrome and wondered what it was?

(index):34 A Parser-blocking, cross-origin script,
https://paul.kinlan.me/ad-inject.js, is invoked via document.write().
This may be blocked by the browser if the device has poor network connectivity.

Composability is one of the great powers of the web, allowing us to easily integrate with services built by third parties to build great new products! One of the downsides of composability is that it implies a shared responsibility over the user experience. If the integration is sub-optimal, the user experience will be negatively impacted.

One known cause of poor performance is the use of document.write() inside pages, specifically those uses that inject scripts. As innocuous as the following looks, it can cause real issues for users.

document.write('<script src="https://paul.kinlan.me/ad-inject.js"></script>');

Before the browser can render a page, it has to build the DOM tree by parsing the HTML markup. Whenever the parser encounters a script, it has to stop and execute it before it can continue parsing the HTML. If the script dynamically injects another script, the parser is forced to wait even longer for the resource to download, which can incur one or more network round trips and delay the time to first render of the page.

For users on slow connections, such as 2G, external scripts dynamically injected via document.write() can delay the display of main page content for tens of seconds, or cause pages to either fail to load or take so long that the user just gives up. Based on instrumentation in Chrome, we've learned that pages featuring third-party scripts inserted via document.write() are typically twice as slow to load as other pages on 2G.

We collected data from a 28 day field trial on 1% of Chrome stable users, restricted to users on 2G connections. We saw that 7.6% of all page loads on 2G included at least one cross-origin, parser-blocking script that was inserted via document.write() in the top level document. As a result of blocking the load of these scripts, we saw the following improvements on those loads:

  • 10% more page loads reaching first contentful paint (a visual confirmation for the user that the page is effectively loading), 25% more page loads reaching the fully parsed state, and 10% fewer reloads, suggesting a decrease in user frustration.
  • 21% decrease in the mean time (over one second faster) until the first contentful paint.
  • 38% reduction in the mean time it takes to parse a page, representing an improvement of nearly six seconds, dramatically reducing the time it takes to display what matters to the user.

With this data in mind, the Chrome team has recently announced an intention to intervene on behalf of all users when we detect this known-bad pattern, by changing how document.write() is handled in Chrome (see Chrome Status). Specifically, Chrome will not execute the <script> elements injected via document.write() when all of the following conditions are met:

  1. The user is on a slow connection, specifically when the user is on 2G. (In the future, the change might be extended to other users on slow connections, such as slow 3G or slow WiFi.)
  2. The document.write() is in a top level document. The intervention does not apply to document.written scripts within iframes as they don't block the rendering of the main page.
  3. The script in the document.write() is parser-blocking. Scripts with the 'async' or 'defer' attributes will still execute.
  4. The script is not already in the browser HTTP cache. Scripts in the cache will not incur a network delay and will still execute.
  5. The request for the page is not a reload. Chrome will not intervene if the user triggered a reload and will execute the page as normal.

Third party snippets sometimes use document.write() to load scripts. Fortunately, most third parties provide asynchronous loading alternatives, which allow third party scripts to load without blocking the display of the rest of the content on the page.

How do I fix this?

The simple answer is: don't inject scripts using document.write(). We are maintaining a set of known services that provide asynchronous loader support, which we encourage you to keep checking.

If your provider is not on the list and does support asynchronous script loading then please let us know and we can update the page to help all users.

If your provider does not support the ability to asynchronously load scripts into your page then we encourage you to contact them and let us and them know how they will be affected.

If your provider gives you a snippet that includes document.write(), it might be possible for you to add an async attribute to the script element, or to add the script element via DOM APIs like appendChild() or insertBefore(), much like Google Analytics does.
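
For example, a synchronous document.write() injection can often be rewritten along these lines (the script URL is a placeholder):

// Instead of:
// document.write('<script src="https://example.com/widget.js"><\/script>');
var script = document.createElement('script');
script.src = 'https://example.com/widget.js';
script.async = true;
document.head.appendChild(script);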

How to detect when your site is affected

There are a large number of criteria that determine whether the restriction is enforced, so how do you know if you are affected?

Detecting when a user is on 2G

To understand the potential impact of this change, you first need to understand how many of your users will be on 2G. You can detect the user's current network type and speed by using the Network Information API that is available in Chrome, and then send a heads-up to your analytics or Real User Metrics (RUM) systems.

if(navigator.connection &&
   navigator.connection.type === 'cellular' &&
   navigator.connection.downlinkMax <= 0.115) {
  // Notify your service to indicate that you might be affected by this restriction.
}

Catch warnings in Chrome DevTools

Since Chrome 53, DevTools issues warnings for problematic document.write() statements. Specifically, if a document.write() request meets criteria 2 to 5 (Chrome ignores the connection criteria when sending this warning), the warning will look something like:

Seeing warnings in Chrome DevTools is great, but how do you detect this at scale? You can check for HTTP headers that are sent to your server when the intervention happens.

Check your HTTP headers on the script resource

When a script inserted via document.write() has been blocked, Chrome will send the following header with the request for the resource:

Intervention: <https://shorturl/relevant/spec>;

When a script inserted via document.write() is found and could be blocked in different circumstances, Chrome might send:

Intervention: <https://shorturl/relevant/spec>; level="warning"

The intervention header will be sent as part of the GET request for the script (asynchronously in case of an actual intervention).
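
If your server runs Node, a sketch of logging these headers might look like the following (Express-style middleware, purely illustrative):

app.use((req, res, next) => {
  const intervention = req.get('Intervention');
  if (intervention) {
    // Record that Chrome blocked, or warned about, this script request.
    console.log('Intervention for ' + req.originalUrl + ': ' + intervention);
  }
  next();
});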

What does the future hold?

The initial plan is to execute this intervention when we detect the criteria being met. We started with showing just a warning in the Developer Console in Chrome 53. (Beta was in July 2016. We expect Stable to be available for all users in September 2016.)

We will intervene to block injected scripts for 2G users tentatively starting in Chrome 54, which is estimated to be in a stable release for all users in mid-October 2016. Check out the Chrome Status entry for more updates.

Over time, we're looking to intervene when any user has a slow connection (i.e., slow 3G or WiFi). Follow this Chrome Status entry.

Want to learn more?

To learn more, see these additional resources:

BroadcastChannel API: A Message Bus for the Web

The BroadcastChannel API allows same-origin scripts to send messages to other browsing contexts. It can be thought of as a simple message bus that allows pub/sub semantics between windows/tabs, iframes, web workers, and service workers.

API basics

The Broadcast Channel API is a simple API that makes communicating between browsing contexts easier. That is, communicating between windows/tabs, iframes, web workers, and service workers. Messages which are posted to a given channel are delivered to all listeners of that channel.

The BroadcastChannel constructor takes a single parameter: the name of a channel. The name identifies the channel and lives across browsing contexts.

// Connect to the channel named "my_bus".
const channel = new BroadcastChannel('my_bus');

// Send a message on "my_bus".
channel.postMessage('This is a test message.');

// Listen for messages on "my_bus".
channel.onmessage = function(e) {
  console.log('Received', e.data);
};

// Close the channel when you're done.
channel.close();

Sending messages

Messages can be strings or anything supported by the structured clone algorithm (Strings, Objects, Arrays, Blobs, ArrayBuffer, Map).

Example - sending a Blob or File

channel.postMessage(new Blob(['foo', 'bar'], {type: 'text/plain'}));

A channel won't broadcast to itself, so if you have an onmessage listener on the same page as a postMessage() to the same channel, that message event doesn't fire.

Differences with other techniques

At this point you might be wondering how this relates to other techniques for message passing like WebSockets, SharedWorkers, the MessageChannel API, and window.postMessage(). The Broadcast Channel API doesn't replace these APIs. Each serves a purpose. The Broadcast Channel API is meant for easy one-to-many communication between scripts on the same origin.

Some use cases for broadcast channels:

  • Detect user actions in other tabs.
  • Know when a user logs into an account in another window/tab.
  • Instruct a worker to perform some background work.
  • Know when a service is done performing some action.
  • When the user uploads a photo in one window, pass it around to other open pages.

Example - page that knows when the user logs out, even from another open tab on the same site:

<button id="logout">Logout</button>

<script>
function doLogout() {
  // update the UI login state for this page.
}

const authChannel = new BroadcastChannel('auth');

const button = document.querySelector('#logout');
button.addEventListener('click', e => {
  // A channel won't broadcast to itself so we invoke doLogout()
  // manually on this page.
  doLogout();
  authChannel.postMessage({cmd: 'logout', user: 'Eric Bidelman'});
});

authChannel.onmessage = function(e) {
  if (e.data.cmd === 'logout') {
    doLogout();
  }
};
</script>

In another example, let's say you wanted to instruct a service worker to remove cached content after the user changes their "offline storage setting" in your app. You could delete their caches using window.caches, but the service worker may already contain a utility to do this. We can use the Broadcast Channel API to reuse that code! Without the Broadcast Channel API, you'd have to loop over the results of self.clients.matchAll() and call postMessage() on each client in order to achieve the communication from a service worker to all of its clients (actual code that does that). Using a Broadcast Channel makes this O(1) instead of O(N).

Example - instruct a service worker to remove a cache, reusing its internal utility methods.

// In index.html

const channel = new BroadcastChannel('app-channel');
channel.onmessage = function(e) {
  if (e.data.action === 'clearcache') {
    console.log('Cache removed:', e.data.removed);
  }
};

const messageChannel = new MessageChannel();

// Send the service worker a message to clear the cache.
// We can't use a BroadcastChannel for this because the
// service worker may need to be woken up. MessageChannels do that.
navigator.serviceWorker.controller.postMessage({
  action: 'clearcache',
  cacheName: 'v1-cache'
}, [messageChannel.port2]);



// In sw.js

function nukeCache(cacheName) {
  return caches.delete(cacheName).then(removed => {
    // ...do more stuff (internal) to this service worker...
    return removed;
  });
}

self.onmessage = function(e) {
  const action = e.data.action;
  const cacheName = e.data.cacheName;

  if (action === 'clearcache') {
    nukeCache(cacheName).then(removed => {
      // Send the main page a response via the BroadcastChannel API.
      // We could also use e.ports[0].postMessage(), but the benefit
      // of responding with the BroadcastChannel API is that other
      // subscribers may be listening.
      const channel = new BroadcastChannel('app-channel');
      channel.postMessage({action, removed});
    });
  }
};

Difference with postMessage()

Unlike postMessage(), you no longer have to maintain a reference to an iframe or worker in order to communicate with it:

// Don't have to save references to window objects.
const popup = window.open('https://another-origin.com', ...);
popup.postMessage('Sup popup!', 'https://another-origin.com');

window.postMessage() also allows you to communicate across origins. The Broadcast Channel API is same-origin. Since messages are guaranteed to come from the same origin, there's no need to validate them like we used to with window.postMessage():

// Don't have to validate the origin of a message.
window.addEventListener('message', function(e) {
  if (e.origin !== 'https://expected-origin.com') {
    return;
  }
  e.source.postMessage('Ack!', e.origin);
});

Simply "subscribe" to a particular channel and you have secure, bidirectional communication!

Difference with SharedWorkers

Use BroadcastChannel for simple cases where you need to send messages to potentially several windows/tabs or workers.

For fancier use cases like managing locks, shared state, synchronizing resources between a server and multiple clients, or sharing a WebSocket connection with a remote host, shared workers are the most appropriate solution.

Difference with MessageChannel API

The main difference between the Channel Messaging API and BroadcastChannel is that the latter is a means to dispatch messages to multiple listeners (one-to-many), while MessageChannel is meant for one-to-one communication directly between scripts. MessageChannel is also more involved, requiring you to set up channels with a port on each end.
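
For comparison, here is roughly what the MessageChannel wiring looks like (a sketch; the iframe and the listener inside it are assumed to exist):

const {port1, port2} = new MessageChannel();

// Listen for the one-to-one reply on this end.
port1.onmessage = (e) => console.log('Reply:', e.data);

// Hand the other port to a specific context by transferring it.
const iframe = document.querySelector('iframe');
iframe.contentWindow.postMessage('init', '*', [port2]);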

Feature detection and browser support

Currently, Chrome 54, Firefox 38, and Opera 41 support the Broadcast Channel API.

if ('BroadcastChannel' in self) {
  // BroadcastChannel API supported!
}

As for polyfills, there are a few out there:

I haven't tried these, so your mileage may vary.

Resources


DevTools Digest, September 2016: Perf Roundup

Hallo! It's Kayce again, tech writer for DevTools. For this DevTools Digest I thought I'd switch it up a little and do a roundup of some perf tooling improvements in DevTools over the last few Chrome releases.

All features are already in Chrome Stable unless noted otherwise.

CPU throttling for a mobile-first world

Available in Chrome 54, which is currently Canary.

Software is eating the world, and mobile is eating software. DevTools is steadily evolving to better meet the needs of a mobile-first development world. The latest development in DevTools' mobile-first tooling is CPU Throttling. Use this feature to gain better awareness of how your site performs on resource-constrained devices.

Select one of the options from the CPU Throttling dropdown menu on the Timeline panel to handicap the computing power of your development machine.

CPU Throttling

Some notes about CPU throttling:

  • Throttling immediately takes effect and continues until you disable it, just like network throttling.
  • This feature is for general awareness of how your site would probably perform on a resource-constrained device. It's impossible for DevTools to truly emulate the performance characteristics of a mobile system on chip.
  • Throttling is relative to your development machine. In other words, 5x throttling on a top-of-the-line desktop will yield different results than 5x throttling on a five-year-old budget laptop.

With that said, combine CPU Throttling with Network Throttling and Device Mode, and you start to get a much better picture about how your site will look and perform on mobile devices, right from the convenience of your development machine browser.

Network view in timeline recordings

Enable the Network checkbox next time you take a Timeline recording to analyze how your page downloaded its resources. Click on a resource to view more information about it in the Summary pane.

Network view in Timeline

The Initiator field in the summary is particularly useful. This field tells you where the request for the resource originated.

Passive event listeners

Passive event listeners are an emerging standard to improve scroll performance. Check out this article by yours truly to learn more:

Improving scroll performance with passive event listeners

DevTools has shipped a couple features to help you find listeners that could benefit from a little {passive: true} love.
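
As a refresher, a passive listener is an ordinary listener registered with the passive flag, which promises the browser that it won't call preventDefault() (the handler name is a placeholder):

function handleTouchStart(e) {
  // This listener never calls e.preventDefault(), so the browser
  // can start scrolling immediately.
}

document.addEventListener('touchstart', handleTouchStart, {passive: true});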

First off, the Console emits a warning when a synchronous listener is blocking page scroll for unreasonable amounts of time.

Synchronous listener warning

You can test this out for yourself in the demo below:

Scroll jank due to touch/wheel handlers demo

Next, you can use the little dropdown menu on the Event Listeners pane to filter for passive or blocking listeners.

Passive listeners filter

Last, you can toggle the passive or blocking state of a listener by hovering over it and pressing Toggle Passive. This feature is currently limited to touchstart, touchmove, mousewheel, and wheel event listeners.

Toggle passive

I'll wrap this section up with a little tip. Enable the Scrolling Performance Issues checkbox on the Rendering drawer to get a visual representation of potential scrolling issues. When a section of a page is highlighted, it means that there is a listener bound to that section of the page that might negatively affect scroll performance.

Scrolling performance issues demo

Group by activity

Back in mid-June the Call Tree pane on the Timeline panel got a new sorting category: Group by Activity. This grouping lets you view how much time your page spent parsing HTML, evaluating scripts, painting, and so on.

Group by activity

Timeline stats in the sources panel

Create a Timeline recording with the JS Profile option enabled, and you can see a function-by-function breakdown of execution times in the Sources panel.

Timeline stats in Sources panel

Share your perspective

As always, we'd love to hear your feedback or ideas on anything DevTools related.

Until next month!

Options of a PushSubscription

When a pushsubscriptionchange event occurs, it's an opportunity for a developer to re-subscribe the user for push. One of the pain points of this is that to re-subscribe a user, the developer has to keep the applicationServerKey (and any other subscribe() options) in sync between the web page's JavaScript and their service worker.

In Chrome 54 and later you can access these options via the options attribute of a subscription object, known as PushSubscriptionOptions.

You can copy and paste the following code snippet into simple-push-demo to see what the options look like. The code simply gets the current subscription and prints out subscription.options.

navigator.serviceWorker.ready.then(registration => {
  return registration.pushManager.getSubscription();
})
.then(subscription => {
  if (!subscription) {
    console.log('No subscription 😞');
    return;
  }

  console.log('Here are the options 🎉');
  console.log(subscription.options);
});

With this small piece of information you can re-subscribe a user in the pushsubscriptionchange event like so:

self.addEventListener('pushsubscriptionchange', e => {
  e.waitUntil(registration.pushManager.subscribe(e.oldSubscription.options)
    .then(subscription => {
      // TODO: Send new subscription to application server
    }));
});

It's a small change that will be super useful in the future.

CacheQueryOptions Arrive in Chrome 54

If you use the Cache Storage API, either within a service worker or directly from web apps via window.caches, there's some good news: starting in Chrome 54, the full set of CacheQueryOptions is supported, making it easier to find the cached responses you're looking for.

What options are available?

The following options can be set in any call to CacheStorage.match() or Cache.match(). When not set, they all default to false (or undefined for cacheName), and you can use multiple options in a single call to match().

ignoreSearch

This instructs the matching algorithm to ignore the search portion of a URL, also known as the URL query parameters. This can come in handy when you have a source URL that contains query parameters that are used for, say, analytics tracking, but are not significant in terms of uniquely identifying a resource in the cache. For example, many folks have fallen prey to the following service worker "gotcha":

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open('my-cache')
      .then(cache => cache.add('index.html'))
  );
});

self.addEventListener('fetch', event => {
  // Make sure this is a navigation request before responding.
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match(event.request)
        .then(cachedResponse => cachedResponse || fetch(event.request))
    );
  }
});

This sort of code works as expected when a user navigates directly to index.html, but what if your web app uses an analytics provider to keep track of inbound links, and the user navigates to index.html?utm_source=some-referral? By default, passing index.html?utm_source=some-referral to caches.match() won't return the entry for index.html. But if ignoreSearch is set to true, you can retrieve the cached response you'd expect regardless of what query parameters are set:

caches.match(event.request, {ignoreSearch: true})

cacheName

cacheName comes in handy when you have multiple caches and you want a response that's stored in one specific cache. Using it can make your queries more efficient (since the browser only has to check inside one cache, instead of all of them) and allows you to retrieve a specific response for a given URL when multiple caches might have that URL as a key. cacheName only has an effect when used with CacheStorage.match(), not Cache.match(), because Cache.match() already operates on a single, named cache.

// The following are functionally equivalent:
caches.open('my-cache')
  .then(cache => cache.match('index.html'));

// or...
caches.match('index.html', {cacheName: 'my-cache'});

ignoreMethod and ignoreVary

ignoreMethod and ignoreVary are a bit more niche than ignoreSearch and cacheName, but they serve specific purposes.

ignoreMethod allows you to pass in a Request object that has any method (POST, PUT, etc.) as the first parameter to match(). Normally, only GET or HEAD requests are allowed.

// In a more realistic scenario, postRequest might come from
// the request property of a FetchEvent.
const postRequest = new Request('index.html', {method: 'post'});

// This will never match anything.
caches.match(postRequest);

// This will match index.html in any cache.
caches.match(postRequest, {ignoreMethod: true});

If set to true, ignoreVary means that cache lookups will be done without regard to any Vary headers that are set in the cached responses. If you know that you are not dealing with cached responses that use the Vary header, then you don't have to worry about setting this option.
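
A brief sketch, assuming a response for styles.css was cached with a Vary header that would otherwise prevent a match:

// Without ignoreVary, a response stored with, say, `Vary: Accept-Encoding`
// might not match this lookup.
caches.match('styles.css', {ignoreVary: true}).then((response) => {
  if (response) {
    console.log('Found a match, Vary header notwithstanding.');
  }
});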

Browser support

CacheQueryOptions is only relevant in browsers that support the Cache Storage API. Besides Chrome and Chromium-based browsers, that's currently limited to Firefox, which already natively supports CacheQueryOptions.

Developers who want to use CacheQueryOptions in versions of Chrome prior to 54 can make use of a polyfill, courtesy of Arthur Stolyar.

Cross-origin Service Workers: Experimenting with Foreign Fetch

Background

Service workers give web developers the ability to respond to network requests made by their web applications, allowing them to continue working even while offline, fight lie-fi, and implement complex cache interactions like stale-while-revalidate. But service workers have historically been tied to a specific origin—as the owner of a web app, it's your responsibility to write and deploy a service worker to intercept all the network requests your web app makes. In that model, each service worker is responsible for handling even cross-origin requests, for example to a third-party API or for web fonts.

What if a third-party provider of an API, or web fonts, or other commonly used service had the power to deploy their own service worker that got a chance to handle requests made by other origins to their origin? Providers could implement their own custom networking logic, and take advantage of a single, authoritative cache instance for storing their responses. Now, thanks to foreign fetch, that type of third-party service worker deployment is a reality.

Deploying a service worker that implements foreign fetch makes sense for any provider of a service that's accessed via HTTPS requests from browsers—just think about scenarios in which you could provide a network-independent version of your service, in which browsers could take advantage of a common resource cache. Services that could benefit from this include, but are not limited to:

  • API providers with RESTful interfaces
  • Web font providers
  • Analytics providers
  • Image hosting providers
  • Generic content delivery networks

Imagine, for instance, that you're an analytics provider. By deploying a foreign fetch service worker, you can ensure that all requests to your service that fail while a user is offline are queued and replayed once connectivity returns. While it's been possible for a service's clients to implement similar behavior via first-party service workers, requiring each and every client to write bespoke logic for your service is not as scalable as relying on a shared foreign fetch service worker that you deploy.

Prerequisites

Origin Trial token

Foreign fetch is still considered experimental. In order to keep from prematurely baking this design in before it’s fully specified and agreed upon by browser vendors, it's been implemented in Chrome 54 as an Origin Trial. As long as foreign fetch remains experimental, to use this new feature with the service you host, you’ll need to request a token that's scoped to your service's specific origin. The token should be included as an HTTP response header in all cross-origin requests for resources that you want to handle via foreign fetch, as well as in the response for your service worker JavaScript resource:

Origin-Trial: token_obtained_from_signup

The trial will end in March 2017. By that point, we expect to have figured out any changes necessary to stabilize the feature, and (hopefully) enable it by default. If foreign fetch is not enabled by default by that time, the functionality tied to existing Origin Trial tokens will stop working.

To facilitate experimenting with foreign fetch prior to registering for an official Origin Trial token, you can bypass the requirement in Chrome for your local computer by going to chrome://flags/#enable-experimental-web-platform-features and enabling the "Experimental Web Platform features" flag. Please note that this needs to be done in every instance of Chrome that you want to use in your local experimentations, whereas with an Origin Trial token the feature will be available to all of your Chrome users.

HTTPS

As with all service worker deployments, the web server you use for serving both your resources and your service worker script needs to be accessed via HTTPS. Additionally, foreign fetch interception only applies to requests that originate from pages hosted on secure origins, so the clients of your service need to use HTTPS to take advantage of your foreign fetch implementation.

Using Foreign Fetch

With the prerequisites out of the way, let's dive into the technical details needed to get a foreign fetch service worker up and running.

Registering your service worker

The first challenge that you're likely to bump into is how to register your service worker. If you've worked with service workers before, you're probably familiar with the following:

// You can't do this!
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('service-worker.js');
}

This JavaScript code for a first-party service worker registration makes sense in the context of a web app, triggered by a user navigating to a URL you control. But it's not a viable approach to registering a third-party service worker, when the only interaction the browser will have with your server is requesting a specific subresource, not a full navigation. If the browser requests, say, an image from a CDN server that you maintain, you can't prepend that snippet of JavaScript to your response and expect that it will be run. A different method of service worker registration, outside the normal JavaScript execution context, is required.

The solution comes in the form of an HTTP header that your server can include in any response:

Link: </service-worker.js>; rel="serviceworker"; scope="/"

Let's break down that example header into its components, each of which is separated by a ; character.

  • </service-worker.js> is required, and is used to specify the path to your service worker file (replace /service-worker.js with the appropriate path to your script). This corresponds directly to the scriptURL string that would otherwise be passed as the first parameter to navigator.serviceWorker.register(). The value needs to be enclosed in <> characters (as required by the Link header specification), and if a relative rather than absolute URL is provided, it will be interpreted as being relative to the location of the response.
  • rel="serviceworker" is also required, and should be included without any need for customization.
  • scope="/" is an optional scope declaration, equivalent to the options.scope string you can pass in as the second parameter to navigator.serviceWorker.register(). For many use cases, you're fine with using the default scope, so feel free to leave this out unless you know that you need it. The same restrictions around maximum allowed scope, along with the ability to relax those restrictions via the Service-Worker-Allowed header, apply to Link header registrations.

Just like with a "traditional" service worker registration, using the Link header will install a service worker that will be used for the next request made against the registered scope. The body of the response that includes the special header will be used as-is, and is available to the page immediately, without waiting for the foreign service worker to finish installation.

Remember that foreign fetch is currently implemented as an Origin Trial, so alongside your Link response header, you'll need to include a valid Origin-Trial header as well. The minimum set of response headers to add in order to register your foreign fetch service worker is:

Link: </service-worker.js>; rel="serviceworker"
Origin-Trial: token_obtained_from_signup
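
How you attach those headers depends on your server; as one illustrative sketch, Express-style middleware might look like this (the path and token are placeholders):

app.use((req, res, next) => {
  // Advertise the foreign fetch service worker on every response...
  res.set('Link', '</service-worker.js>; rel="serviceworker"');
  // ...and include the token obtained from the Origin Trial signup.
  res.set('Origin-Trial', 'token_obtained_from_signup');
  next();
});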

Note: Astute readers of the service worker specification may have noticed another means of performing service worker registration, via a <link rel="serviceworker"> DOM element. Support for <link>-based registration in Chrome is currently controlled by the same Origin Trial as the Link header, so it is not yet enabled by default. <link>-based registration has the same limitations as JavaScript-based registration when it comes to foreign fetch registration, so for the purposes of this article, the Link header is what you should be using.

Debugging registration

During development, you'll probably want to confirm that your foreign fetch service worker is properly installed and processing requests. There are a few things you can check in Chrome's Developer Tools to confirm that things are working as expected.

Are the proper response headers being sent?

In order to register the foreign fetch service worker, you need to set a Link header on a response to a resource hosted on your domain, as described earlier in this post. During the Origin Trial period, and assuming you don't have chrome://flags/#enable-experimental-web-platform-features set, you also need to set an Origin-Trial response header. You can confirm that your web server is setting those headers by looking at the entry in the Network panel of DevTools:

Headers displayed in the Network panel

Is the foreign fetch service worker properly registered?

You can also confirm the underlying service worker registration, including its scope, by looking at the full list of service workers in the Application panel of DevTools. Make sure to select the "Show all" option, since by default, you'll only see service workers for the current origin.

The foreign fetch service worker in the Application panel

The install event handler

Now that you've registered your third-party service worker, it will get a chance to respond to the install and activate events, just like any other service worker would. It can take advantage of those events to, for example, populate caches with required resources during the install event, or prune out-of-date caches in the activate event.

Beyond normal install event caching activities, there's an additional step that's required inside your third-party service worker's install event handler. Your code needs to call registerForeignFetch(), as in the following example:

self.addEventListener('install', event => {
  event.registerForeignFetch({
    scopes: [self.registration.scope], // or some sub-scope
    origins: ['*'] // or ['https://example.com']
  });
});

There are two configuration options, both required:

  • scopes takes an array of one or more strings, each of which represents a scope for requests that will trigger a foreignfetch event. But wait, you may be thinking, I've already defined a scope during service worker registration! That's true, and that overall scope is still relevant—each scope that you specify here must be either equal to or a sub-scope of the service worker's overall scope. The additional scoping restrictions here allow you to deploy an all-purpose service worker that can handle both first-party fetch events (for requests made from your own site) and third-party foreignfetch events (for requests made from other domains), and make it clear that only a subset of your larger scope should trigger foreignfetch. In practice, if you're deploying a service worker dedicated to handling only third-party, foreignfetch events, you're just going to want to use a single, explicit scope that's equal to your service worker's overall scope. That's what the example above will do, using the value self.registration.scope.
  • origins also takes an array of one or more strings, and allows you to restrict your foreignfetch handler to only respond to requests from specific domains. For example, if you explicitly whitelist 'https://example.com', then a request made from a page hosted at https://example.com/path/to/page.html for a resource served from your foreign fetch scope will trigger your foreign fetch handler, but requests made from https://random-domain.com/path/to/page.html won't trigger your handler. Unless you have a specific reason to only trigger your foreign fetch logic for a subset of remote origins, you can just specify '*' as the only value in the array, and all origins will be whitelisted. A sketch of a more restrictive configuration follows this list.
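As promised, here's a sketch of an install handler that uses both options restrictively; the /api/ sub-scope and the https://example.com origin are hypothetical values standing in for whatever subset of your service you want to expose:

self.addEventListener('install', event => {
  event.registerForeignFetch({
    // Only requests under the /api/ sub-scope trigger foreignfetch.
    scopes: [new URL('./api/', self.registration.scope).href],
    // Only pages hosted on this specific origin can use the handler.
    origins: ['https://example.com']
  });
});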

The foreignfetch event handler

Now that you've installed your third-party service worker and it's been configured via registerForeignFetch(), it will get a chance to intercept cross-origin subresource requests to your server that fall within the foreign fetch scope.

Note: There's an additional restriction in Chrome's current implementation: only GET, POST, or HEAD requests that contain only CORS-safelisted headers are eligible for foreign fetch. This restriction is not part of the foreign fetch specification and may be relaxed in future versions of Chrome.

In a traditional, first-party service worker, each request would trigger a fetch event that your service worker had a chance to respond to. Our third-party service worker is given a chance to handle a slightly different event, named foreignfetch. Conceptually, the two events are quite similar, and they give you the opportunity to inspect the incoming request, and optionally provide a response to it via respondWith():

self.addEventListener('foreignfetch', event => {
  // Assume that requestLogic() is a custom function that takes
  // a Request and returns a Promise which resolves with a Response.
  event.respondWith(
    requestLogic(event.request).then(response => {
      return {
        response: response,
        // Omit the origin to return an opaque response.
        // With this set, the client will receive a CORS response.
        origin: event.origin,
        // Omit headers unless you need additional header filtering.
        // With this set, only Content-Type will be exposed.
        headers: ['Content-Type']
      };
    })
  );
});

Despite the conceptual similarities, there are a few differences in practice when calling respondWith() on a ForeignFetchEvent. Instead of just providing a Response (or Promise that resolves with a Response) to respondWith(), like you do with a FetchEvent, you need to pass a Promise that resolves with an Object with specific properties to the ForeignFetchEvent's respondWith():

  • response is required, and must be set to the Response object that will be returned to the client that made the request. If you provide anything other than a valid Response, the client's request will be terminated with a network error. Unlike when calling respondWith() inside a fetch event handler, you must provide a Response here, not a Promise which resolves with a Response! You can construct your response via a promise chain, and pass that chain as the parameter to foreignfetch's respondWith(), but the chain must resolve with an Object that contains the response property set to a Response object. You can see a demonstration of this in the code sample above.
  • origin is optional, and it's used to determine whether or not the response that's returned is opaque. If you leave this out, the response will be opaque, and the client will have limited access to the response's body and headers. If the request was made with mode: 'cors', then returning an opaque response will be treated as an error. However, if you specify a string value equal to the origin of the remote client (which can be obtained via event.origin), you're explicitly opting in to providing a CORS-enabled response to the client.
  • headers is also optional, and is only useful if you're also specifying origin and returning a CORS response. By default, only headers in the CORS-safelisted response header list will be included in your response. If you need to further filter what's returned, headers takes a list of one or more header names, and it will use that as a whitelist of which headers to expose in the response. This allows you to opt-in to CORS while still preventing potentially sensitive response headers from being exposed directly to the remote client.

It's important to note that when the foreignfetch handler is run, it has access to all the credentials and ambient authority of the origin hosting the service worker. As a developer deploying a foreign fetch-enabled service worker, it's your responsibility to ensure that you do not leak any privileged response data that would not otherwise be available without those credentials. Requiring an opt-in for CORS responses is one step to limit inadvertent exposure, but you can also explicitly make fetch() requests inside your foreignfetch handler that do not use the implied credentials:

self.addEventListener('foreignfetch', event => {
  // The new Request will have credentials omitted by default.
  const noCredentialsRequest = new Request(event.request.url);
  event.respondWith(
    // Replace with your own request logic as appropriate.
    fetch(noCredentialsRequest)
      .catch(() => caches.match(noCredentialsRequest))
      .then(response => ({response}))
  );
});

Client considerations

There are some additional considerations that affect how your foreign fetch service worker handles requests made from clients of your service.

Clients that have their own first-party service worker

Some clients of your service may already have their own first-party service worker, handling requests originating from their web app. What does this mean for your third-party, foreign fetch service worker?

The fetch handler(s) in a first-party service worker get the first opportunity to respond to all requests made by the web app, even if there's a third-party service worker with foreignfetch enabled with a scope that covers the request. But clients with first-party service workers can still take advantage of your foreign fetch service worker!

Inside a first-party service worker, using fetch() to retrieve cross-origin resources will trigger the appropriate foreign fetch service worker. That means code like the following can take advantage of your foreignfetch handler:

// Inside a client's first-party service-worker.js:
self.addEventListener('fetch', event => {
  // If event.request is under your foreign fetch service worker's
  // scope, this will trigger your foreignfetch handler.
  event.respondWith(fetch(event.request));
});

Similarly, if there are first-party fetch handlers, but they don't call event.respondWith() when handling requests for your cross-origin resource, the request will automatically "fall through" to your foreignfetch handler:

// Inside a client's first-party service-worker.js:
self.addEventListener('fetch', event => {
  if (event.request.mode === 'same-origin') {
    event.respondWith(localRequestLogic(event.request));
  }

  // Since event.respondWith() isn't called for cross-origin requests,
  // any foreignfetch handlers scoped to the request will get a chance
  // to provide a response.
});

If a first-party fetch handler calls event.respondWith() but does not use fetch() to request a resource under your foreign fetch scope, then your foreign fetch service worker will not get a chance to handle the request.
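To make that concrete, here's a contrived sketch of a first-party fetch handler that answers every request from its own caches; because it calls event.respondWith() without ever calling fetch(), a foreignfetch handler scoped to the request never gets a chance to run:

// Inside a client's first-party service-worker.js:
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => {
      // Responding from the local cache (or failing outright) means
      // any third-party foreignfetch handler is bypassed entirely.
      return cached || new Response('Not cached.', {status: 503});
    })
  );
});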

Clients that don't have their own service worker

All clients that make requests to a third-party service can benefit when the service deploys a foreign fetch service worker, even if they aren't already using their own service worker. There is nothing specific that clients need to do in order to opt-in to using a foreign fetch service worker, as long as they're using a browser that supports it. This means that by deploying a foreign fetch service worker, your custom request logic and shared cache will benefit many of your service's clients immediately, without them taking further steps.

Putting it all together: where clients look for a response

Taking into account the information above, we can assemble a hierarchy of sources a client will use to find a response for a cross-origin request.

  1. A first-party service worker's fetch handler (if present)
  2. A third-party service worker's foreignfetch handler (if present, and only for cross-origin requests)
  3. The browser's HTTP cache (if a fresh response exists)
  4. The network

The browser starts from the top and, depending on how those service workers are implemented, continues down the list until it finds a source for the response.

Learn more

Stay up to date

Chrome's implementation of the foreign fetch Origin Trial is subject to change as we address feedback from developers. We'll keep this post up to date via inline changes, and will make note of the specific changes below as they happen. We'll also share information about major changes via the @chromiumdev Twitter account.

ResizeObserver: It’s Like document.onresize for Elements

$
0
0

After MutationObserver, PerformanceObserver and IntersectionObserver, we have another observer for your collection! ResizeObserver allows you to be notified when an element’s content rectangle has changed its size, and react accordingly. The spec is currently being iterated on in the WICG and your feedback is very much welcome.

Motivation

Previously, you had to attach a listener to the window’s resize event to get notified of any change to the viewport’s dimensions. In the event handler, you would then have to figure out which elements were affected by that change and call a specific routine to react appropriately. If you needed the new dimensions of an element after a resize, you had to call getBoundingClientRect or getComputedStyle, which can cause layout thrashing if you don’t take care of batching all your reads and all your writes.

And then you realize that this doesn’t even cover the cases where elements change their size without the main window having been resized. For example, appending new children, setting an element’s display style to none, or similar actions can change the size of an element, its siblings or ancestors.

This is why ResizeObserver is a useful primitive. It reacts to changes in size of any of the observed elements, independent of what caused the change. It provides you access to the new size of the observed elements, too. Let’s get straight into it.

API

All the APIs with the “observer” suffix I mentioned above share a simple API design. ResizeObserver is no exception. You create a ResizeObserver object and pass a callback to the constructor. The callback will be given an array of ResizeObserverEntries – one entry per observed element – which contain the new dimensions for the element.

const ro = new ResizeObserver(entries => {
  for (let entry of entries) {
    const cr = entry.contentRect;
    console.log('Element:', entry.target);
    console.log(`Element size: ${cr.width}px x ${cr.height}px`);
    console.log(`Element padding: ${cr.top}px ; ${cr.left}px`);
  }
});

// Observe one or multiple elements
ro.observe(someElement);
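Like the other observers, a ResizeObserver can also be torn down. Per the spec draft, unobserve() and disconnect() cover the two cases:

// Stop watching a single element...
ro.unobserve(someElement);

// ...or stop watching everything this observer was tracking.
ro.disconnect();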

Some details

What is being reported?

Generally, ResizeObserver reports the content rectangle of an element. The content rectangle is the box in which content can be placed. It is the border box minus the element's border and padding.

It's important to note that while ResizeObserver reports both the dimensions of the contentRect and the padding, it only watches the contentRect. Don't confuse contentRect with the bounding box of the element. The bounding box, as reported by getBoundingClientRect, is the box that contains the entire element and its descendants. SVGs are an exception to the rule, where ResizeObserver will report the dimensions of the bounding box.
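If you want to see the difference for yourself, here's a small sketch comparing the two boxes for a hypothetical .padded element that has both a border and padding:

const el = document.querySelector('.padded'); // hypothetical element
const ro = new ResizeObserver(entries => {
  // contentRect: the content box, excluding border and padding.
  const cr = entries[0].contentRect;
  // getBoundingClientRect: the whole element, border included.
  const bb = el.getBoundingClientRect();
  console.log(`content box: ${cr.width}px x ${cr.height}px`);
  console.log(`bounding box: ${bb.width}px x ${bb.height}px`);
});
ro.observe(el);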

When is it being reported?

The spec prescribes that ResizeObserver should process all resize events before paint and after layout. This makes the callback of a ResizeObserver the ideal place to make changes to your page’s layout. Because ResizeObserver processing happens between layout and paint, doing so will only invalidate layout, not paint.

Gotcha

You might be asking yourself: What happens if I change the size of an observed element inside the ResizeObserver’s callback? The answer is: You will trigger another call to the callback right away. However, ResizeObserver has a mechanism to avoid infinite callback loops and cyclic dependencies. Changes will only be processed in the same frame if the resized element is deeper in the DOM tree than the shallowest element processed in the previous callback. Otherwise, they’ll get deferred to the next frame.
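Here's a contrived sketch that exercises this mechanism: the callback grows the very element it is observing, so each follow-up notification is deferred to the next frame rather than looping within the current one (the .box selector and the 500px cap are arbitrary):

const box = document.querySelector('.box'); // hypothetical element
const ro = new ResizeObserver(entries => {
  const width = entries[0].contentRect.width;
  // Growing the observed element re-triggers the callback, but since
  // the element isn't deeper than the shallowest element processed
  // this frame, the follow-up notification is deferred a frame.
  if (width < 500) {
    box.style.width = (width + 10) + 'px';
  }
});
ro.observe(box);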

Application

One thing that ResizeObserver allows you to do is to implement per-element media queries. By observing elements, you can imperatively define your design breakpoints and change the element’s styles. In the following example, the second box will change its border radius according to its width.

const ro = new ResizeObserver(entries => {
  for (let entry of entries) {
    entry.target.style.borderRadius = Math.max(0, 250 - entry.contentRect.width) + 'px';
  }
});
// Only observe the second box
ro.observe(document.querySelector('.box:nth-child(2)'));

Another interesting example to look at is a chat window. The problem that arises in a typical top-to-bottom conversation layout is scroll positioning. To avoid confusing the user, it is helpful if the window sticks to the bottom of the conversation, where the newest messages will appear. Additionally, any kind of layout change (think of a phone going from landscape to portrait or vice versa) should strive to achieve the same.

ResizeObserver allows you to write a single piece of code that takes care of both scenarios. Resizing the window is an event that a ResizeObserver can capture almost by definition, but calling appendChild() also resizes that element (unless overflow: hidden is set), because it needs to make space for the new elements. With this in mind, you can get away with a couple of lines to achieve the desired effect:

const ro = new ResizeObserver(entries => {
  document.scrollingElement.scrollTop =
    document.scrollingElement.scrollHeight;
});

// Observe the scrollingElement for when the window gets resized
ro.observe(document.scrollingElement);
// Observe the timeline to process new messages; `timeline` is assumed
// to be the chat's message container, e.g.
// const timeline = document.querySelector('.chat-timeline');
ro.observe(timeline);

Pretty neat, huh?

From here, we could add more code to handle the case where the user has scrolled up manually and we want the scrolling to stick to that message when a new message comes in.
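Here's one way that could look, as a sketch: track whether the user is near the bottom in a scroll listener, and only snap down when they are (the 10px threshold is arbitrary):

const scroller = document.scrollingElement;
let stickToBottom = true;

// Track whether the user is currently within 10px of the bottom.
window.addEventListener('scroll', () => {
  stickToBottom =
    scroller.scrollHeight - scroller.scrollTop - scroller.clientHeight < 10;
});

const ro = new ResizeObserver(() => {
  if (stickToBottom) {
    scroller.scrollTop = scroller.scrollHeight;
  }
});
ro.observe(scroller);
ro.observe(timeline); // the same assumed chat timeline element as above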

Another use case is for any kind of custom element that is doing its own layout. Until ResizeObserver, there was no reliable way to get notified when your own dimensions change so you can re-layout your own children.

Out now!

As with a lot of the observer APIs, ResizeObserver is not 100% polyfillable, which is why native implementations are needed. Current polyfill implementations either rely on polling or on adding sentinel elements to the DOM. The former will drain your battery on mobile by keeping the CPU busy while the latter modifies your DOM and might mess up styling and other DOM-reliant code.

ResizeObserver is in Chrome 55 Canary, behind the Experimental Web Platform features flag. It is a small primitive that allows you to write certain effects in a much more efficient way. Try it out and let us know what you think or if you have questions.
