New in Chrome 58

Note: Want the full list of changes? Check out the Chromium source repository change list

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 58!

IndexedDB 2.0

The structure of your site’s database has a large impact on its performance, and can be difficult to change. IndexedDB 2.0 changes that.

  • Object stores and indexes can now be renamed in place after a refactoring.
  • Binary keys allow more natural keys without worrying about performance penalties.
  • Data retrieval is easier with the getKey(), openKeyCursor() and continuePrimaryKey() methods.

And with the getAll() and getAllKeys() methods, bulk retrieval of entire datasets no longer needs a cursor.
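To make this concrete, here's a minimal sketch of both features; the database, store, and record names are hypothetical, and error handling is omitted:

// Renaming is only allowed inside an upgrade (versionchange) transaction.
const openRequest = indexedDB.open('library', 2); // hypothetical name and version

openRequest.onupgradeneeded = () => {
  // Assumes a 'books' store was created in an earlier version.
  const store = openRequest.transaction.objectStore('books');
  store.name = 'publications'; // rename in place after a refactoring
};

openRequest.onsuccess = () => {
  const db = openRequest.result;
  const store = db.transaction('publications').objectStore('publications');

  // Bulk retrieval of the entire dataset, no cursor required.
  const getAllRequest = store.getAll();
  getAllRequest.onsuccess = () => console.log('All records:', getAllRequest.result);
};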

Full screen Progressive Web Apps

When Progressive Web Apps are launched from the Android home screen, they launch in a standalone app-like mode that hides the omnibox. This helps create an engaging user experience, and frees up screen space for content.

However, for even more immersive experiences like games, video players, or other rich content, mobile UI elements such as the system bars can still be a distraction and take up valuable pixels that you may want.

Now you can make your Progressive Web App feel fully immersive by setting the display property to fullscreen in your web app manifest.

A PWA launched from the home screen (left), launched from the home screen in standalone mode (middle), and launched from the home screen in fullscreen mode (right).

When your app is launched from the home screen, all non-app mobile UI elements will be hidden.
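For example, a web app manifest requesting fullscreen display might look like this sketch (the names and start URL are placeholders):

{
  "name": "My Immersive Game",
  "short_name": "Game",
  "start_url": "/index.html",
  "display": "fullscreen"
}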

Sandboxed iframe Improvements

Chrome 58 now supports the new iframe sandbox keyword allow-top-navigation-by-user-activation.

When triggered by a user interaction, this keyword gives sandboxed iframes the ability to navigate the top-level page, while still blocking auto-redirects.
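As a rough sketch (the embedded URL is a placeholder), the new keyword goes in the iframe's sandbox attribute alongside any others you need:

<!-- The framed page may navigate the top-level page only in response to a
     user gesture such as a tap or click; scripted auto-redirects stay blocked. -->
<iframe sandbox="allow-scripts allow-top-navigation-by-user-activation"
        src="https://third-party.example.com/widget.html"></iframe>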

And more!

And of course, there’s plenty more.

  • Say goodbye to the clearfix hack. Instead of manually resetting multiple layout properties like float and clear, you can now add a new block-formatting context using display: flow-root.
  • PointerEvent.getCoalescedEvents() allows you to access all input events since the last time a PointerEvent was delivered. Perfect for when you need a precise history of points for things like drawing apps; see the sketch after this list.
  • And Workers and SharedWorkers can now be created using data: URLs, making development with Workers more secure by giving them an opaque origin.
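Here's a minimal sketch of getCoalescedEvents() in a drawing app; canvas is assumed to be your drawing surface, and drawPoint() is a hypothetical helper that plots a single point:

canvas.addEventListener('pointermove', (event) => {
  // getCoalescedEvents() returns the intermediate pointer positions that
  // were coalesced since the last pointermove was delivered.
  const events = event.getCoalescedEvents ? event.getCoalescedEvents() : [event];
  for (const e of events) {
    drawPoint(e.clientX, e.clientY); // hypothetical drawing helper
  }
});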

These are just a few of the changes in Chrome 58 for developers.

If you enjoyed this video, check out Designer vs. Developer, a new video series that tries to solve the challenges faced when designers and developers work together.

Then subscribe to our YouTube channel to get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 59 is released, I’ll be right here to tell you -- what’s new in Chrome!


Moving to the Native Notification System on Mac OS X

Starting in Chrome 59, notifications sent via the Notifications API or the chrome.notifications extensions API will be shown directly by the Mac OS X native notification system instead of Chrome's own system.

This change makes Chrome on Mac OS X feel much better integrated into the platform and fixes a number of long standing bugs, such as Chrome not respecting the system Do Not Disturb setting.

Below we'll look at the differences this change introduces to the existing APIs.

Notification center

One of the benefits of this change is that notifications will be displayed in OS X's notification center.

Google Chrome Notifications will be displayed in the Mac OS X notification center

Differences

Icon size and positioning

The appearance of icons will change: they'll be smaller, and padding is applied. Consider switching from a solid-color background to a transparent one so your icon remains aesthetically pleasing in the new UI.

Before and after for Chrome on Mac notification icons displayed by Chrome vs displayed by Mac OS X

Action icons

Before this change, action buttons and their icons were displayed in the notification. With native notifications, action button icons are not used, and the user needs to hover over the notification and select the "More" button to see the available actions.

Before and after of notification action buttons with icons displayed by Chrome vs displayed by Mac OS X

The Chrome logo will always be displayed and cannot be replaced or altered. This is a requirement for third party applications on Mac OS X.

Images

The image option will no longer be supported on OS X. If you define an image property, the notification will still be displayed, but the image parameter will be ignored (see the example below).

Before and after of notification image for Chrome on Mac OS X

You can feature detect image support with the following code:

if ('image' in Notification.prototype) {
  // Image is supported.
} else {
  // Image is NOT supported.
}
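For example, you could use that check to attach the image only where it's supported; the notification content and asset paths here are hypothetical:

// Assumes notification permission has already been granted.
const options = {
  body: 'Alice shared a new photo with you', // hypothetical content
  icon: '/images/icon-192.png'               // hypothetical asset
};
if ('image' in Notification.prototype) {
  options.image = '/images/photo.jpg';       // attach only where supported
}
new Notification('Photo shared', options);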

Chrome extension changes

Chrome extensions have the concept of notification templates, which will behave differently with this change.

The image notification template will no longer show the image. You should ensure that images are supplemental and not required to be useful to your users.

Before and after for image templates in the chrome.notification API

The list notification template will only show the first item in the list. You may want to consider moving back to the basic notification style and using body text to summarize the set of changes.

Before and after for list templates in the chrome.notification API
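As a rough sketch of that fallback (the notification ID, icon, and message content are hypothetical):

// Instead of a 'list' template, which now shows only the first item on
// Mac OS X, summarize the items in the body of a 'basic' notification.
const items = ['Message from Ada', 'Message from Grace', 'Message from Alan'];
chrome.notifications.create('inbox-summary', {
  type: 'basic',
  iconUrl: 'images/icon-128.png', // hypothetical extension asset
  title: items.length + ' new messages',
  message: items.join('\n')
});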

Progress notifications will append a percentage value to the notification title to indicate the progress instead of a progress bar.

Before and after for progress templates in the chrome.notification API

The last difference in notification UI is that the appIconMaskUrl will no longer be used on Mac OS X.

Before and after for appIconMaskUrl in the chrome.notification API

Getting Started with Headless Chrome

TL;DR

Headless Chrome is a way to run the Chrome browser in a headless environment. Essentially, running Chrome without chrome! It brings all modern web platform features provided by Chromium and the Blink rendering engine to the command line.

Why is that useful?

A headless browser is a great tool for automated testing and server environments where you don't need a visible UI shell. For example, you may want to run some tests against a real web page, create a PDF of it, or just inspect how the browser renders a URL.

Note: Headless mode is available on Mac and Linux in Chrome 59. Windows support is coming soon!

Starting Headless (CLI)

The easiest way to get started with headless mode is to open the Chrome binary from the command line. If you've got Chrome 59+ installed, start Chrome with the --headless flag:

chrome \
  --headless \                   # Runs Chrome in headless mode.
  --disable-gpu \                # Temporarily needed for now.
  --remote-debugging-port=9222 \
  https://www.chromestatus.com   # URL to open. Defaults to about:blank.

Note: Right now, you'll also want to include the --disable-gpu flag. That will eventually go away.

chrome should point to your installation of Chrome. The exact location will vary from platform to platform. Since I'm on Mac, I created convenient aliases for each version of Chrome that I have installed:

alias chrome="/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome"
alias chrome-canary="/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary"
alias chromium="/Applications/Chromium.app/Contents/MacOS/Chromium"

Command line features

In some cases, you may not need to programmatically script Headless Chrome. There are some useful command line flags to perform common tasks.

Note: You may also need to include the --disable-gpu flag for now when running these commands.

Printing the DOM

The --dump-dom flag prints document.body.innerHTML to stdout:

chrome --headless --dump-dom https://www.chromestatus.com/

Create a PDF

The --print-to-pdf flag creates a PDF of the page:

chrome --headless --print-to-pdf https://www.chromestatus.com/

Taking screenshots

To capture a screenshot of a page, use the --screenshot flag:

chrome --headless --screenshot https://www.chromestatus.com/

# Size of a standard letterhead.
chrome --headless --screenshot --window-size=1280,1696 https://www.chromestatus.com/

# Nexus 5x
chrome --headless --screenshot --window-size=412,732 https://www.chromestatus.com/

Running with --screenshot will produce a file named screenshot.png in the current working directory. If you're looking for full-page screenshots, things are a tad more involved. There's a great blog post from David Schnurr that has you covered: check out Using headless Chrome as an automated screenshot tool.

Debugging Chrome without a browser UI?

When you run Chrome with --remote-debugging-port=9222, it starts an instance with the DevTools Protocol enabled. The protocol is used to communicate with Chrome and drive the headless browser instance. It's also what tools like Sublime, VS Code, and Node use for remote debugging an application. #synergy

Since you don't have browser UI to see the page, navigate to http://localhost:9222 in another browser to check that everything is working. You'll see a list of inspectable pages where you can click through and see what Headless is rendering:

DevTools Remote
DevTools remote debugging UI
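The same list of inspectable targets is available as JSON from the /json endpoint, which is handy for scripting:

curl http://localhost:9222/json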

From here, you can use the familiar DevTools features to inspect, debug, and tweak the page as you normally would. If you're using Headless programmatically, this page is also a powerful debugging tool for seeing all the raw DevTools protocol commands going across the wire, communicating with the browser.

Using programmatically (Node)

Launching Chrome

In the previous section, we started Chrome manually using --headless --remote-debugging-port=9222. However, to fully automate tests, you'll probably want to spawn Chrome from your application.

One way is to use child_process:

const exec = require('child_process').exec;

function launchHeadlessChrome(url, callback) {
  // Assuming MacOSx.
  const CHROME = '/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome';
  exec(`${CHROME} --headless --remote-debugging-port=9222 ${url}`, callback);
}

launchHeadlessChrome('https://www.chromestatus.com', (err, stdout, stderr) => {
  ...
});

But things get tricky if you want a portable solution that works across multiple platforms. Just look at that hard-coded path to Chrome :(

Using Lighthouse's ChromeLauncher

Lighthouse is a marvelous tool for testing the quality of your web apps. One thing people don't realize is that it ships with some really nice helper modules for working with Chrome. One of those modules is ChromeLauncher. ChromeLauncher will find where Chrome is installed, set up a debug instance, launch the browser, and kill it when your program is done. Best part is that it works cross-platform thanks to Node!

Note: The Lighthouse team is exploring a standalone package for ChromeLauncher with an improved API. Let us know if you have feedback.

By default, ChromeLauncher will try to launch Chrome Canary (if it's installed), but you can change that to manually select which Chrome to use. To use it, first install Lighthouse from npm:

yarn add lighthouse

Example - using ChromeLauncher to launch Headless

const {ChromeLauncher} = require('lighthouse/lighthouse-cli/chrome-launcher');

/**
 * Launches a debugging instance of Chrome on port 9222.
 * @param {boolean=} headless True (default) to launch Chrome in headless mode.
 *     Set to false to launch Chrome normally.
 * @return {Promise<ChromeLauncher>}
 */
function launchChrome(headless = true) {
  const launcher = new ChromeLauncher({
    port: 9222,
    autoSelectChrome: true, // False to manually select which Chrome install.
    additionalFlags: [headless ? '--headless' : '']
  });

  return launcher.run().then(() => launcher)
    .catch(err => {
      return launcher.kill().then(() => { // Kill Chrome if there's an error.
        throw err;
      }, console.error);
    });
}

launchChrome(true).then(launcher => {
  ...
});

Running this script doesn't do much, but you should see an instance of Chrome fire up in the task manager that loaded about:blank. Remember, there won't be any browser UI. We're headless.

To control the browser, we need the DevTools protocol!

Retrieving information about the page

chrome-remote-interface is a great Node package that provides high-level APIs on top of the DevTools Protocol. You can use it to orchestrate Headless Chrome, navigate to pages, and fetch information about those pages.

Warning: The DevTools protocol can do a ton of interesting stuff, but it can be a bit daunting at first. I recommend spending a bit of time browsing the DevTools Protocol API Viewer, first. Then, move on to the chrome-remote-interface API docs to see how it wraps the raw protocol.

Let's install the library:

yarn add chrome-remote-interface

Examples

Example - print the user agent

const chrome = require('chrome-remote-interface');

launchChrome().then(launcher => {
  chrome.Version().then(version => console.log(version['User-Agent']));
});

Results in something like: HeadlessChrome/60.0.3082.0

Example - check if the site has a web app manifest

const chrome = require('chrome-remote-interface');

function onPageLoad(Page) {
  return Page.getAppManifest().then(response => {
    if (!response.url) {
      console.log('Site has no app manifest');
      return;
    }
    console.log('Manifest: ' + response.url);
    console.log(response.data);
  });
}

launchChrome().then(launcher => {

  chrome(protocol => {
    // Extract the parts of the DevTools protocol we need for the task.
    // See API docs: https://chromedevtools.github.io/debugger-protocol-viewer/
    const {Page} = protocol;

    // First, enable the Page domain we're going to use.
    Page.enable().then(() => {
      Page.navigate({url: 'https://www.chromestatus.com/'});

      // Wait for window.onload before doing stuff.
      Page.loadEventFired(() => {
        onPageLoad(Page).then(() => {
          protocol.close();
          launcher.kill(); // Kill Chrome.
        });
      });
    });

  }).on('error', err => {
    throw Error('Cannot connect to Chrome:' + err);
  });

});

Example - extract the <title> of the page using DOM APIs.

const chrome = require('chrome-remote-interface');

function onPageLoad(Runtime) {
  const js = "document.querySelector('title').textContent";

  // Evaluate the JS expression in the page.
  return Runtime.evaluate({expression: js}).then(result => {
    console.log('Title of page: ' + result.result.value);
  });
}

launchChrome().then(launcher => {

  chrome(protocol => {
    // Extract the parts of the DevTools protocol we need for the task.
    // See API docs: https://chromedevtools.github.io/debugger-protocol-viewer/
    const {Page, Runtime} = protocol;

    // First, need to enable the domains we're going to use.
    Promise.all([
      Page.enable(),
      Runtime.enable()
    ]).then(() => {
      Page.navigate({url: 'https://www.chromestatus.com/'});

      // Wait for window.onload before doing stuff.
      Page.loadEventFired(() => {
        onPageLoad(Runtime).then(() => {
          protocol.close();
          launcher.kill(); // Kill Chrome.
        });
      });

    });

  }).on('error', err => {
    throw Error('Cannot connect to Chrome:' + err);
  });

});

Further resources

Here are some useful resources to get you started:

Docs

Tools

Demos

  • "The Headless Web" - Paul Kinlan's great blog post on using Headless with api.ai.

FAQ

How do I create a Docker container that runs Headless Chrome?

Check out lighthouse-ci. It has an example Dockerfile that uses Ubuntu as a base image, and installs + runs Lighthouse in an App Engine Flexible container.

How is this related to PhantomJS?

Headless Chrome is similar to tools like PhantomJS. Both can be used for automated testing in a headless environment. The main difference between the two is that Phantom uses an older version of WebKit as its rendering engine while Headless Chrome uses the latest version of Blink.

At the moment, Phantom also provides a higher level API than the DevTools Protocol.

Where do I report bugs?

For bugs against Headless Chrome, file them on crbug.com.

For bugs in the DevTools protocol, file them at github.com/ChromeDevTools/devtools-protocol.


Detect if your Native app is installed from your web site

As the capabilities of the web become more closely aligned with what was once the domain of native experiences, there are an increasing number of occasions where a developer will want to reduce confusion for users who have both the web and native apps installed.

Take notifications, for example, introduced in Chrome 42; they allow developers to easily re-engage with users who opt to receive messages. But what if the user also has your native app installed? There was no way for you as the developer to know whether your user had your app installed on their current device. If the user has the app installed, there might be no reason to prompt for notifications from the web as well.

In Chrome 59 we are introducing a new API called getInstalledRelatedApps(). This new API lets you determine whether your native app is installed on a device.

This is an incredibly powerful API because it gives you access to information that you can't infer from the web. This means that there must be a provable bi-directional relationship between your site and your native app. There are three core components that make this work.

  1. There is a reference to your native app from your Web App Manifest via the related_applications property. This is your site saying that it is related to the native app.
  2. There is a native app installed with the same package name as the one referenced in your Web App Manifest. The app must have a reference from your AndroidManifest.xml via the asset_statements element. This asserts that your native app has a relationship with your site.
  3. If the above two criteria are met, a call to getInstalledRelatedApps() will resolve with the list of installed apps.

These three steps are in place to ensure that only you can query your apps and that you have reliably demonstrated ownership of the site and app. Each step is described in more detail below.

Define the relationship to your native app in your Web App Manifest

You need to ensure that you have a Web App Manifest linked to from your site. In the manifest you must define a related_applications property that contains a list of the apps that you want to detect. The related_applications property is an array of objects that contain the platform on which the app is hosted and the unique identifier for your app on that platform.

{
  ...
  "related_applications": [{
    "platform": "play",
    "id": "<package-name>"
  }],
  ...
}

Note: Only Chrome on Android supports this, so the platform must be set to "play". You also need your "id" to be the exact package name for your Android App.

Create the relationship to your site in your AndroidManifest.xml

Next, you need to have your native app signal to the device that it is related to your Web App. You need to define the relationship with your site by ensuring that the app has the same package name as that defined in the Web App Manifest and that it also refers back to the website using the Android Digital Asset Links infrastructure.

Creating the link back to your site is possible by adding the following to your AndroidManifest.xml:

<manifest>
  <application>
   ...
    <meta-data android:name="asset_statements" android:resource="@string/asset_statements" />
   ...
  </application>
</manifest>

And then adding the following to your strings.xml resource, replacing the <site-domain> with the domain of your site:

<string name="asset_statements">
  [{
    \"relation\": [\"delegate_permission/common.handle_all_urls\"],
    \"target\": {
      \"namespace\": \"web\",
      \"site\": \"https://<site-domain>\"
    }
  }]
</string>

Test for presence of the app

Once you have the required metadata deployed on your app and on your site, you should be able to call navigator.getInstalledRelatedApps(). This returns a promise that resolves to the list of apps that are installed on the user's device that meet the above criteria (i.e., have been proved to be owned by the app developer).

navigator.getInstalledRelatedApps().then(relatedApps => {
  for (let app of relatedApps) {
    console.log(app.platform);
    console.log(app.url);
    console.log(app.id);
  }
});
Related Apps in Devtools
Demo of Related Apps in DevTools

Testing on localhost

In your strings.xml set the "site" property value to be http://localhost:[yourportnumber] like the following.

<string name="asset_statements">
  [{
    \"relation\": [\"delegate_permission/common.handle_all_urls\"],
    \"target\": {
      \"namespace\": \"web\",
      \"site\": \"http://localhost:8000\"
    }
  }]
</string>

Ensure that your Web App Manifest has the correct package name for your locally installed app. When you deploy your Android app, make sure that you update the site property value to be the correct URL for your site. The most efficient way to manage this is through your build variants (e.g., release and debug), which let you specify different resource files based on the build target, meaning that your release build will only ever contain your live domain.
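For example, under the standard Android project layout, a debug-only strings.xml can override the release value; the paths below are illustrative:

app/src/main/res/values/strings.xml    # release asset_statements: your live domain
app/src/debug/res/values/strings.xml   # debug asset_statements: http://localhost:8000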

Use cases

There are many different ways that you can use this API. Here are some quick examples of possible uses and common pieces of functionality that you will find useful.

Cancel the progressive web app installation if the native app is installed?

You can intercept the beforeinstallprompt event so the banner doesn't show right away, check whether any related apps are installed, and if none are, call prompt() to show the banner.

window.addEventListener("beforeinstallprompt", e => {
  if (navigator.getInstalledRelatedApps) {
    e.preventDefault();  // Stop automated install prompt.
    navigator.getInstalledRelatedApps().then(relatedApps => {
        if (relatedApps.length == 0) {
        e.prompt();
      }
    });
  }
});

Prevent duplicate notifications

One of the intended uses for this API is to allow you to de-dupe notifications from both your Native Application and your Web App. There are a couple of options available to you.

If the user has your application installed and they don't have notifications enabled already on your site, you can use the getInstalledRelatedApps() method to selectively disable the UI that the user would normally use to enable the feature.

window.addEventListener("load", e => {
  if (navigator.getInstalledRelatedApps) {
    navigator.getInstalledRelatedApps()
    .then(apps => {
      if (apps.length > 0) { /* Hide the UI */ }
    });
  }
});

Alternatively, if the user has already been to your site and has Web Push enabled you can unregister the push subscription during the onload event.

window.addEventListener("load", e => {
  if (navigator.getInstalledRelatedApps) {
    let sw = navigator.serviceWorker.ready;

    navigator.getInstalledRelatedApps()
    .then(apps => (apps.length > 0) ? sw.then(reg => reg.pushManager) : undefined)
    .then(pushManager => {
      if (pushManager) pushManager.unsubscribe();
    });
  }
});

This API is not available directly to the service worker so it is not possible to de-duplicate notifications as they arrive at your service worker.

Detecting if your PWA is installed from a native app

If the user has installed your Progressive Web App through the old Add to Homescreen method (i.e., anything prior to Chrome 58), it is not possible to detect if your app is installed. Chrome added your site to the Homescreen as a bookmark, and this data was not exposed to the system.

If the user has installed the web app using the new Web APK functionality, it is possible to determine if your web app is installed. If you know the package id of your Web APK, you can use the context.getPackageManager().getApplicationInfo() API to determine whether it is installed. Please note that this is experimental.

Not Working?

File a bug right here against the Chrome implementation. The correct people will be notified (I've sneakily put this in, so I am sure they will be grateful).

We are keen to keep getting feedback on the spec, so if you have any issues or suggestions, file an issue against the spec.

Deprecations and Removals in Chrome 59

In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and also to the capabilities of the Web Platform. This article describes the deprecations and removals in Chrome 59, which is in beta as of April 27. This list is subject to change at any time.

Remove features from WebVR that are not in the revised spec

The current implementation of WebVR, originally implemented in Chrome 52, contained several methods and properties that will not be in the final spec. Deprecation messages were added for these features for the Origin Trial that started in Chrome 56. These features are now being removed. They include:

  • VRDisplay.getPose()
  • VRDisplay.resetPose()
  • VRDisplay.isConnected
  • VRDisplayCapabilities.hasOrientation
  • VREyeParameters.fieldOfView

Intent to Experiment | Chromestatus Tracker | Chromium Bug | Origin Trial Results so Far

Remove FileReaderSync from service workers

The Service Worker spec has always had the (non-normative) note that "any type of synchronous requests must not be initiated inside of a service worker", to avoid blocking the service worker (as blocking the service worker would block all network requests from controlled pages). However, synchronous APIs such as FileReaderSync were still available in service workers. FileReaderSync was deprecated in Chrome 57 and is removed in Chrome 59.

Intent to Deprecate | Chromestatus Tracker | Chromium Bug

Remove non-standard DeviceOrientation Event initialization functions

For some time now there's been a general trend in browser APIs away from initialization functions and toward object constructors. The most recent version of the DeviceOrientation Event Specification follows this trend by requiring constructors for both DeviceOrientationEvent and DeviceMotionEvent.

Since Chrome is enabling these constructors by default in Chrome 59, the legacy initialization functions, initDeviceMotionEvent() and initDeviceOrientationEvent(), are also removed. Edge has deprecated the initialization functions, and Firefox has already shipped the constructors.
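For illustration, here's roughly how the constructor form replaces the legacy call; the sensor values are placeholders:

// Legacy form, removed in Chrome 59:
// var event = document.createEvent('DeviceOrientationEvent');
// event.initDeviceOrientationEvent('deviceorientation', true, false, 30, 45, 60, true);

// Constructor form:
const event = new DeviceOrientationEvent('deviceorientation', {
  alpha: 30, beta: 45, gamma: 60, absolute: true
});
window.dispatchEvent(event);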

Intent to Remove | Chromium Bug

Remove "on-demand" value for hover/any-hover media queries

The “on-demand” value for hover/any-hover media queries was removed from the spec about a year ago. Consequently, this value is removed in Chrome 59.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove remote and readonly members of MediaStreamTrack

In Chrome 48 the MediaStreamTrack.remote and MediaStreamTrack.readonly properties were added in support of the Media Capture and Streams API with the goal of allowing JavaScript to know whether a WebRTC MediaStreamTrack is from a remote source or a local one.

Since that time, these properties have been removed from the spec. As of Chrome 59, they are no longer supported.

Chromium Bug

Remove support for ProgressEvent

Earlier versions of the DOM spec required implementations to support document.createEvent("ProgressEvent"). However, usage was always low, and support has already been removed from Gecko and WebKit. The createEvent("ProgressEvent") variant was removed from the spec in March of this year.

To conform with the platform and the most recent spec, document.createEvent("ProgressEvent") is now removed from Chrome.

Chromium Bug

Remove SVGTests.required Features

In the first version of the SVG spec, an application could call DOMImplementation.hasFeature to verify that a particular SVG interface was supported. Many SVG elements contained a requiredFeatures attribute that returned the same information.

In SVG2, DOMImplementation.hasFeature always returns true, so requiredFeatures no longer does anything useful. Because it was removed from the spec, it was deprecated in Chrome 54 and has now been removed.

Intent to Remove | Chromestatus Tracker | Chromium Bug


What's New In DevTools (Chrome 60)

Welcome! Here's what's new in DevTools in Chrome 60. You can check what version of Chrome you're running at chrome://version.

New features

New Audits panel, powered by Lighthouse

The Audits panel is now powered by Lighthouse. Lighthouse provides a comprehensive set of tests for measuring the quality of your web pages.

A Lighthouse report
Figure 1. A Lighthouse report

Check out the DevTools talk from Google I/O '17 below to learn more. Lighthouse is discussed at 32:30.

To audit a page:

  1. Click the Audits tab.
  2. Click Perform an audit.
  3. Click Run audit. Lighthouse sets up DevTools to emulate a mobile device, runs a bunch of tests against the page, and then displays the results in the Audits panel.

The scores at the top for Progressive Web App, Performance, Accessibility, and Best Practices are your aggregate scores for each of those categories. The rest of the report is a breakdown of each of the tests that determined your scores. Improve the quality of your web page by fixing the failing tests.

Lighthouse is an open-source project. To learn lots more about how it works and how to contribute to it, check out the Lighthouse talk from Google I/O '17 below.

Third-party badges

Use third-party badges to get more insight into the entities that are making network requests on a page and logging to the Console.

Hovering over a third-party badge in the Network panel
Figure 2. Hovering over a third-party badge in the Network panel
Hovering over a third-party badge in the Console
Figure 3. Hovering over a third-party badge in the Console

To enable third-party badges:

  1. Open the Command Menu.
  2. Run the Show third party badges command.

Use the Group by product option in the Call Tree and Bottom-Up tabs to group performance recording activity by the third-party entities that caused the activities.

Grouping by product in the Bottom-Up tab
Figure 4. Grouping by product in the Bottom-Up tab

A new keyboard shortcut for Continue to Here

When stepping through code, hold Command (Mac) or Control (Windows, Linux) and then click to continue to that line of code.

Continue to Here
Figure 5. Continue To Here

Changes

More informative object previews in the Console

Previously, when you logged or evaluated an object in the Console, the Console would only display Object, which is not particularly helpful. Now, the Console provides more information about the contents of the object.

How the Console used to preview objects
Figure 6. How the Console used to preview objects
How the Console now previews objects
Figure 7. How the Console now previews objects

More informative context selection menu in the Console

The Console's Context Selection menu now provides more information about available contexts.

  • The title describes what each item is.
  • The subtitle below the title describes the domain where the item came from.
  • Hover over an iframe context to highlight it in the viewport.
The new Context Selection menu
Figure 8. Hovering over an iframe in the new Context Selection menu highlights it in the viewport

Real-time updates in the Coverage tab

When recording code coverage in Chrome 59, the Coverage tab would just display Recording, with no visibility into what code was being used. Now, the Coverage tab shows you in real-time what code is being used.

Loading and interacting with a page using the old Coverage tab
Figure 9. Loading and interacting with a page using the old Coverage tab
Loading and interacting with a page using the new Coverage tab
Figure 10. Loading and interacting with a page using the new Coverage tab

Simpler network throttling options

The network throttling menus in the Network and Performance panels have been simplified to include only three options: Offline, Slow 3G, which is common in places like India, and Fast 3G, which is common in places like the United States.

The new network throttling options
Figure 11. The new network throttling options

The throttling options have been tweaked to match other, kernel-level throttling tools. DevTools no longer shows the latency, download, and upload metrics next to each option, because those values were misleading. The goal is to match the true experience of each option.

Async stacks on by default

The Async checkbox has been removed from the Sources panel. Async stack traces are now on by default. In the past, this option was opt-in, because of performance overhead. The overhead is now minimal enough to enable the feature by default. If you prefer to have async stack traces disabled, you can turn them off in Settings or by running the Do not capture async stack traces command in the Command Menu.

Feedback

The best place to discuss any of the features or changes you see here is the google-chrome-developer-tools@googlegroups.com mailing list.

That's all for what's new in DevTools in Chrome 60. See you in 6 weeks!

Leveraging the Performance Metrics that Most Affect User Experience

You've probably heard time and time again that performance matters, and it's critical that your web apps are fast.

But as you try to answer the question "How fast is my app?", you'll realize that fast is a very vague term. What exactly do we mean when we say fast? In what context? And fast for whom?

When talking about performance it's important to be precise, so we don't create misconceptions or spread myths that can sometimes lead to well-intentioned developers optimizing for the wrong things—ultimately harming the user experience rather than improving it.

To offer a specific example, it's very common today to hear people say something like: I tested my app, and it loads in X.XX seconds.

The problem with this statement is not that it's false, it's that it misrepresents reality. Load times will vary dramatically from user to user, depending on their device capabilities and network conditions. Presenting load times as a single number ignores the users who experienced much longer loads.

In reality, your app's load time is the collection of all load times from every individual user, and the only way to fully represent that is with a distribution like in the histogram below:

A histogram of load times for website visitors

The numbers along the x-axis show load times, and the height of the bars on the y-axis shows the relative number of users who experienced a load time in that particular bucket. As this chart shows, while the largest segment of users experienced loads of less than one or two seconds, many of them still saw much longer load times.

The other reason "my site loads in X.XX seconds" is a myth is that load is not a single moment in time—it's an experience that no one metric can fully capture. There are multiple moments during the load experience that can affect whether a user perceives it as "fast", and if you just focus on one you might miss bad experiences that happen during the rest of the time.

For example, consider an app that optimizes for a fast initial render, delivering content to the user right away. If that app then loads a large JavaScript bundle that takes several seconds to parse and execute, the content on the page will not be interactive until after that JavaScript runs. If a user can see a link on the page but can't click on it, or if they can see a text box but can't type in it, they probably won't care how fast the page rendered.

So rather than measuring load with just one metric, we should measure every moment throughout the experience that can affect the user's perception of load.

A second example of a performance myth is that performance is only a concern at load time.

We as a team have been guilty of making this mistake, and it can be magnified by the fact that most performance tools only measure load performance.

But the reality is poor performance can happen at any time, not just during load. Apps that don't respond quickly to taps or clicks and apps that don't scroll or animate smoothly can be just as bad as apps that load slowly. Users care about the entire experience, and we developers should too.

A common theme in all of these performance misconceptions is they focus on things that have little or nothing to do with the user experience. Likewise, traditional performance metrics like load time or DOMContentLoaded time are extremely unreliable since when they occur may or may not correspond to when the user thinks the app is loaded.

So to ensure we don't make this mistake going forward, we have to answer these questions:

  1. What metrics most accurately measure performance as perceived by a human?
  2. How do we measure these metrics on our actual users?
  3. How do we interpret our measurements to determine whether an app is "fast"?
  4. Once we understand our app's real-user performance, what do we do to prevent regressions and hopefully improve performance in the future?

User-centric performance metrics

When a user navigates to a web page, they're typically looking for visual feedback to reassure them that everything is going to work as expected.

Is it happening? Did the navigation start successfully? Has the server responded?
Is it useful? Has enough content rendered that I can actually engage with it?
Is it usable? Can I interact with the page, or is it still busy loading?
Is it delightful? Are the interactions smooth and natural, free of lag and jank?

To understand when a page delivers this feedback to its users, we've defined several new metrics:

First paint and first contentful paint

The Paint Timing API defines two metrics: first paint (FP) and first contentful paint (FCP). These metrics mark the points, immediately after navigation, when the browser renders pixels to the screen. This is important to the user because it answers the question: is it happening?

The primary difference between the two metrics is FP marks the point when the browser renders anything that is visually different from what was on the screen prior to navigation. By contrast, FCP is the point when the browser renders the first bit of content from the DOM, which may be text, an image, SVG, or even a <canvas> element.

First meaningful paint and hero element timing

First meaningful paint (FMP) is the metric that answers the question: "is it useful?". While the concept of "useful" is very hard to spec in a way that applies generically to all web pages (and thus no spec exists, yet), it's quite easy for web developers themselves to know what parts of their pages are going to be most useful to their users.

Examples of hero elements on various websites

These "most important parts" of a web page are often referred to as hero elements. For example, on the YouTube watch page, the hero element is the primary video. On Twitter it's probably the notification badges and the first tweet. On a weather app it's the forecast for the specified location. And on a news site it's likely the primary story and featured image.

Web pages almost always have parts that are more important than others. If the most important parts of a page load quickly, the user may not even notice if the rest of the page doesn't.

Long tasks

Browsers respond to user input by adding tasks to a queue on the main thread to be executed one by one. This is also where the browser executes your application's JavaScript, so in that sense the browser is single-threaded.

In some cases, these tasks can take a long time to run, and if that happens, the main thread is blocked and all other tasks in the queue have to wait.

Long tasks as seen in the Chrome developer tools

To the user this appears as lag or jank, and it's a major source of bad experiences on the web today.

The long tasks API identifies any task longer than 50 milliseconds as potentially problematic, and it exposes those tasks to the app developer. The 50 millisecond time was chosen so applications could meet the RAIL guidelines of responding to user input within 100 ms.

Time to interactive

The metric Time to interactive (TTI) marks the point at which your application is both visually rendered and capable of reliably responding to user input. An application could be unable to respond to user input for a couple of reasons:

  • The JavaScript needed to make the components on the page work hasn't yet loaded.
  • There are long tasks blocking the main thread (as described in the last section).

The TTI metric identifies the point at which the page's initial JavaScript is loaded and the main thread is idle (free of long tasks).

Mapping metrics to user experience

Getting back to the questions we previously identified as being the most important to the user experience, this table outlines how each of the metrics we just listed maps to the experience we hope to optimize:

The Experience The Metric
Is it happening? First Paint (FP) / First Contentful Paint (FCP)
Is it useful? First Meaningful Paint (FMP) / Hero Element Timing
Is it usable? Time to Interactive (TTI)
Is it delightful? Long Tasks (technically the absence of long tasks)

And these screenshots of a load timeline should help you better visualize where the load metrics fit in the load experience:

Screenshots of where these metrics occur in the load experience

The next section details how to measure these metrics on real users.

Measuring these metrics on real users

One of the main reasons we've historically optimized for metrics like load and DOMContentLoaded is because they're exposed as events in the browser and easy to measure on real users.

By contrast, a lot of these other metrics have been historically very hard to measure. For example, this code is a hack we often see developers use to detect long tasks:

(function detectLongFrame() {
  var lastFrameTime = Date.now();
  requestAnimationFrame(function() {
    var currentFrameTime = Date.now();

    if (currentFrameTime - lastFrameTime > 50) {
      // Report long frame here...
    }

    detectLongFrame();
  });
}());

This code starts an infinite requestAnimationFrame loop and records the time on each iteration. If the current time is more than 50 milliseconds after the previous time, it assumes it was the result of a long task. While this code mostly works, it has a lot of downsides: it adds overhead to every frame, it prevents idle blocks, and it's terrible for battery life.

The most important rule of performance measurement code is that it shouldn't make performance worse.

Services like Lighthouse and Web Page Test have offered some of these new metrics for a while now (and in general they're great tools for testing performance on features prior to releasing them), but these tools don't run on your user's devices, so they don't reflect the actual performance experience of your users.

Luckily, with the addition of a few new browser APIs, measuring these metrics on real users is finally possible without a lot of hacks or workarounds that would themselves make performance worse.

These new APIs are PerformanceObserver, PerformanceEntry, and DOMHighResTimeStamp. To see them in action, the following code example creates a new PerformanceObserver instance and subscribes to be notified about paint entries (e.g. FP and FCP) as well as any long tasks that occur:

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // `entry` is a PerformanceEntry instance.
    console.log(entry.entryType);
    console.log(entry.startTime); // DOMHighResTimeStamp
    console.log(entry.duration); // DOMHighResTimeStamp
  }
});

// Start observing the entry types you care about.
observer.observe({entryTypes: ['paint', 'longtask']});

What PerformanceObserver gives us that we've never had before is the ability to subscribe to performance events after they happen and respond to them in an asynchronous fashion. This replaces the older PerformanceTiming interface, which often required polling to see when the data was available.

Tracking FP/FCP

Once you have the data for a particular performance event, you can send it to whatever analytics service you use to capture the metric for the current user. For example, using Google Analytics you might track first paint times as follows:

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // `name` will be either 'first-paint' or 'first-contentful-paint'.
    const metricName = entry.name;
    const time = Math.round(entry.startTime + entry.duration);

    ga('send', 'event', {
      eventCategory: 'Performance Metrics',
      eventAction: metricName,
      eventValue: time,
      nonInteraction: true,
    });
  }
});

// Start observing paint entries.
observer.observe({entryTypes: ['paint']});

Tracking FMP using hero elements

Once you've identified what elements on the page are the hero elements, you'll want to track the point at which they're visible to your users.

We don't yet have a standardized definition for FMP (and thus no performance entry type either). This is in part because of how difficult it is to determine, in a generic way, what "meaningful" means for all pages.

However, in the context of a single page or a single application, it's generally best to consider FMP to be the moment when your hero elements are visible on the screen.

Steve Souders has a great article called User Timing and Custom Metrics that details many of the techniques for using browser's performance APIs to determine in code when various types of media are visible.
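As a rough sketch of one such technique, you can mark the moment a hero image finishes loading with the User Timing API; the element ID and measure names are hypothetical:

const heroImage = document.querySelector('#hero-image'); // hypothetical element
heroImage.addEventListener('load', () => {
  // Mark when the hero became visible and measure from navigation start.
  performance.mark('hero-image-visible');
  performance.measure('fmp-candidate', undefined, 'hero-image-visible');
});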

Tracking TTI

In the long term, we hope to have a TTI metric standardized and exposed in the browser via PerformanceObserver. In the meantime, we've developed a polyfill that can be used to detect TTI today and works in any browser that supports the Long Tasks API.

The polyfill exposes a getFirstConsistentlyInteractive() method, which returns a promise that resolves with the TTI value. You can track TTI using Google Analytics as follows:

import ttiPolyfill from './path/to/tti-polyfill.js';

ttiPolyfill.getFirstConsistentlyInteractive().then((tti) => {
  ga('send', 'event', {
    eventCategory: 'Performance Metrics',
    eventAction: 'TTI',
    eventValue: tti,
    nonInteraction: true,
  });
});

The getFirstConsistentlyInteractive() method accepts an optional startTime configuration option, allowing you to specify a lower bound for which you know your app cannot be interactive before. By default the polyfill uses DOMContentLoaded as the start time, but it's often more accurate to use something like the moment your hero elements are visible or the point when you know all your event listeners have been added.
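For example, assuming you've recorded a heroVisibleTime timestamp (e.g., from performance.now() when your hero elements appeared), you could pass it as that lower bound:

ttiPolyfill.getFirstConsistentlyInteractive({startTime: heroVisibleTime})
  .then((tti) => {
    console.log('TTI:', tti);
  });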

Refer to the TTI polyfill documentation for complete installation and usage instructions.

Tracking long tasks

I mentioned above that long tasks will often cause some sort of negative user experience (e.g. a sluggish event handler or a dropped frame). It's good to be aware of how often this is happening, so you can make efforts to minimize it.

To detect long tasks in JavaScript you create a new PerformanceObserver and observe entries of type longtask. One nice feature of long task entries is they contain an attribution property, so you can more easily track down which code caused the long task:

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    ga('send', 'event', {
      eventCategory: 'Performance Metrics',
      eventAction: 'longtask',
      eventValue: Math.round(entry.startTime + entry.duration),
      eventLabel: JSON.stringify(entry.attribution),
    });
  }
});

observer.observe({entryTypes: ['longtask']});

The attribution property will tell you what frame context was responsible for the long task, which is helpful in determining if third party iframe scripts are causing issues. Future versions of the spec are planning to add more granularity and expose script URL, line, and column number, which will be very helpful in determining whether your own scripts are causing slowness.

Tracking input latency

Long tasks that block the main thread can prevent your event listeners from executing in a timely manner. The RAIL performance model teaches us that in order for a user interface to feel smooth, it should respond within 100 ms of user input, and if this isn't happening, it's important to know about it.

To detect input latency in code you can compare the event's time stamp to the current time, and if the difference is larger than 100 ms, you can (and should) report it.

const subscribeBtn = document.querySelector('#subscribe');

subscribeBtn.addEventListener('click', (event) => {
  // Event listener logic goes here...

  const lag = performance.now() - event.timeStamp;
  if (lag > 100) {
    ga('send', 'event', {
      eventCategory: 'Performance Metrics',
      eventAction: 'input-latency',
      eventLabel: '#subscribe:click',
      eventValue: Math.round(lag),
      nonInteraction: true,
    });
  }
});

Since event latency is usually the result of a long task, you can combine your event latency detection logic with your long task detection logic: if a long task was blocking the main thread at the same time as event.timeStamp you could report that long task's attribution value as well. This would allow you to draw a very clear line between negative performance experiences and the code that caused it.

While this technique isn't perfect (it doesn't handle long event listeners later in the propagation phase, and it doesn't work for scrolling or composited animations that don't run on the main thread), it's a good first step into better understanding how often long running JavaScript code affects user experience.

Interpreting the data

Once you've started collecting performance metrics for real users, you need to put that data into action. Real-user performance data is useful for a few primary reasons:

  • Validating that your app performs as expected.
  • Identifying places where poor performance is negatively affecting conversions (whatever that means for your app).
  • Finding opportunities to improve the user experience and delight your users.

One thing definitely worth comparing is how your app performs on mobile devices vs desktop. The following chart shows the distribution of TTI across desktop (blue) and mobile (orange). As you can see from this example, the TTI value on mobile was quite a bit longer than on desktop:

TTI distribution across desktop and mobile

While the numbers here are app-specific (don't assume they'd match your numbers; test for yourself), this gives you an example of how you might approach reporting on your usage metrics:

Desktop

Percentile TTI (seconds)
50% 2.3
75% 4.7
90% 8.3

Mobile

Percentile TTI (seconds)
50% 3.9
75% 8.0
90% 12.6

Breaking your results down across mobile and desktop and analyzing the data as a distribution allows you to get quick insight into the experiences of real users. For example, looking at the above table, I can easily see that, for this app, 10% of mobile users took longer than 12 seconds to become interactive!

How performance affects business

One huge advantage of tracking performance in your analytics tools is you can then use that data to analyze how performance affects business.

If you're tracking goal completions or ecommerce conversions in analytics, you could create reports that explore any correlations between these and the app's performance metrics. For example:

  • Do users with faster interactive times buy more stuff?
  • Do users who experience more long tasks during the checkout flow drop off at higher rates?

If correlations are found, it'll be substantially easier to make the business case that performance is important and should be prioritized.

Load abandonment

We know that users will often leave if a page takes too long to load. Unfortunately, this means that all of our performance metrics share the problem of survivorship bias, where the data doesn't include load metrics from people who didn't wait for the page to finish loading (which likely means the numbers are too low).

While you can't track what the numbers would have been if those users had stuck around, you can track how often this happens as well as how long each user stayed for.

This is a bit tricky to do with Google Analytics since the analytics.js library is typically loaded asynchronously, and it may not be available when the user decides to leave. However, you don't need to wait for analytics.js to load before sending data to Google Analytics. You can send it directly via the Measurement Protocol.

This code adds a listener to the visibilitychange event (which fires if the page is being unloaded or goes into the background) and sends the value of performance.now() at that point.

window.__trackAbandons = () => {
  // Remove the listener so it only runs once.
  document.removeEventListener('visibilitychange', window.__trackAbandons);
  const ANALYTICS_URL = 'https://www.google-analytics.com/collect';
  const GA_COOKIE = document.cookie.replace(
    /(?:(?:^|.*;)\s*_ga\s*\=\s*(?:\w+\.\d\.)([^;]*).*$)|^.*$/, '$1');
  const TRACKING_ID = 'UA-21292978-3';
  const CLIENT_ID = GA_COOKIE || (Math.random() * Math.pow(2, 52));

  // Send the data to Google Analytics via the Measurement Protocol.
  navigator.sendBeacon && navigator.sendBeacon(ANALYTICS_URL, [
    'v=1', 't=event', 'ec=Load', 'ea=abandon', 'ni=1',
    'tid=' + TRACKING_ID,
    'cid=' + CLIENT_ID,
    'ev=' + Math.round(performance.now()),
  ].join('&'));
};
document.addEventListener('visibilitychange', window.__trackAbandons);

Of course, you'll want to make sure you remove this listener once the page becomes interactive or you'll be reporting abandonment for loads where you were also reporting TTI.

document.removeEventListener('visibilitychange', window.__trackAbandons);

Optimizing performance and preventing regression

The great thing about defining user-centric metrics is when you optimize for them, you inevitably improve user experience as well.

One of the simplest ways to improve performance is to just ship less JavaScript code to the client, but in cases where reducing code size is not an option, it's critical that you think about how you deliver your JavaScript.

Optimizing FP/FCP

You can lower the time to first paint and first contentful paint by removing any render blocking scripts or stylesheets from the <head> of your document.

By taking the time to identify the minimal set of styles needed to show the user that "it's happening" and inlining them in the <head> (or using HTTP/2 server push), you can get incredibly fast first paint times.
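A rough sketch of that pattern, with placeholder file names: inline the critical rules, then load the full stylesheet without blocking the first render:

<head>
  <style>
    /* Minimal critical styles for the first paint go inline here. */
  </style>
  <!-- Load the remaining styles without blocking rendering. -->
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null; this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/site.css"></noscript>
</head>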

The app shell pattern is a great example of how to do this for Progressive Web Apps.

Optimizing FMP/TTI

Once you've identified the most critical UI elements on your page (the hero elements), you should ensure that your initial script load contains just the code needed to get those elements rendered and make them interactive.

Any code unrelated to your hero elements that is included in your initial JavaScript bundle will slow down your time to interactivity. There's no reason to force your user's devices to download and parse JavaScript code they don't need right away.

As a general rule, you should try as hard as possible to minimize the time between FMP and TTI. In cases where it's not possible to minimize this time, it's absolutely critical that your interfaces make it clear that the page isn't yet interactive.

One of the most frustrating experiences for a user is tapping on an element and then having nothing happen.

Preventing long tasks

By splitting up your code and prioritizing the order in which it's loaded, you can not only get your pages interactive faster, you can also reduce long tasks, which in turn means less input latency and fewer slow frames.

In addition to splitting up code into separate files, you can also split up large chunks of synchronous code into smaller chunks that can execute asynchronously or be deferred to the next idle point. By executing this logic asynchronously in smaller chunks, you leave room on the main thread for the browser to respond to user input.
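Here's a minimal sketch of that idea using requestIdleCallback; items and processItem() are hypothetical stand-ins for your own data and per-item work:

function processInChunks(items) {
  let i = 0;
  function doChunk(deadline) {
    // Work only while the browser reports idle time left in this frame.
    while (i < items.length && deadline.timeRemaining() > 0) {
      processItem(items[i++]); // hypothetical per-item work
    }
    if (i < items.length) {
      requestIdleCallback(doChunk); // defer the rest to the next idle point
    }
  }
  requestIdleCallback(doChunk);
}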

Lastly, you should make sure you're testing your third party code and holding any slow running code accountable. Third party ads or tracking scripts that cause lots of long tasks may end up hurting your business more than they're helping it.

Preventing regressions

This article has focused heavily on performance measurement on real users, and while it's true that RUM data is the performance data that ultimately matters, lab data is still critical in ensuring your app performs well (and doesn't regress) prior to releasing new features. Lab tests are ideal for regression detection, as they run in a controlled environment and are far less prone to the random variability of RUM tests.

Tools like Lighthouse and Web Page Test can be integrated into your continuous integration server, and you can write tests that fail a build if key metrics regress or drop below a certain threshold.

And for code already released, you can add custom alerts to inform you if there are unexpected spikes in the occurrence of negative performance events. This could happen, for example, if a third party releases a new version of one of their services and suddenly your users start seeing significantly more long tasks.

To successfully prevent regressions, you need to be testing performance in both the lab and the wild with every new feature release.

A flow chart of RUM and lab testing in the release process

Wrapping up and looking forward

We've made significant strides in the last year in exposing user-centric metrics to developers in the browser, but we're not done yet, and we have a lot more planned.

We'd really like to standardize time to interactive and hero element metrics, so developers won't have to measure these themselves or depend on polyfills. We'd also like to make it easier for developers to attribute dropped frames and input latency to particular long tasks and the code that caused them.

While we have more work to do, we're excited about the progress we've made. With new APIs like PerformanceObserver and long tasks supported natively in the browser, developers finally have the primitives they need to measure performance on real users without degrading their experience.

The metrics that matter the most are the ones that represent real user experiences, and we want to make it as easy as possible for developers to delight their users and create great applications.

Staying connected

File spec issues:

File polyfill issues:

Ask questions:

Voice your support or concerns on new API proposals:

New in Chrome 59

  • Headless Chrome allows you to run Chrome in an automated environment without a user interface or peripherals.
  • Notifications on macOS will be shown directly by the native macOS notification system.
  • You can now capture full resolution photos with the image capture API, and there’s plenty more!

Note: Want the full list of changes? Check out the Chromium source repository change list

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 59!

Headless Chrome

A headless browser is a great tool for running automated tests and for server environments where you don't need to see the rendered output or have a visible UI shell. For example:

  • Using Selenium for unit tests against your progressive web app
  • To create a PDF of a Wikipedia page
  • Inspecting a page with DevTools

Starting in Chrome 59, you can now run headless Chrome. It brings all modern web platform features provided by Chrome to the command line.
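
For example, here's a minimal sketch of printing a page to PDF from Node; it assumes Chrome 59+ is installed and reachable on your PATH as chrome (the actual binary name and location vary by platform):

// Launch headless Chrome and print a page to output.pdf
// in the current directory.
const { execFile } = require('child_process');

execFile('chrome', [
  '--headless',     // run without a visible UI
  '--disable-gpu',  // currently needed on some platforms
  '--print-to-pdf', // writes output.pdf to the current directory
  'https://en.wikipedia.org/wiki/Headless_browser',
], (error) => {
  if (error) throw error;
  console.log('PDF written to output.pdf');
});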

Check out Eric Bidelman’s post on Updates for full details. He’s got examples on how you can use it to convert pages to PDF, dump the DOM and how to use it programmatically in Node.

Native notifications on macOS

Chrome has historically included its own notification system for web and extension developers to show notifications to users. But, we’ve heard from users and developers alike that they want Chrome to use the native OS notification system.

Starting in Chrome 59 on macOS, Chrome will use the native notification system, improving the user experience and ensuring that notifications feel more integrated with the platform. My personal favorite: notifications will now respect my Do Not Disturb setting.

Notification generated by Chrome (left), Native macOS generated notification (right).

Because of the way macOS handles notifications, there are a few low usage APIs that are now discouraged, as they’ll result in a degraded experience on macOS.

Check out our Updates post for all the details.

Image capture API

Capturing high res photos in a web app can be hard. Either the user has to upload a photo they’ve already taken, or switch from the browser to the camera, take the photo, switch back to the browser and upload the photo.

With the new Image Capture API in Chrome 59, you can access the full resolution capabilities of any available camera. The API provides control of features such as zoom, brightness, contrast, ISO and even white balance.
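
For example, here's a minimal sketch of taking a full resolution photo from the first available camera; it assumes an img element already exists on the page:

// Grab a video track from the camera, then take a full resolution photo.
navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => {
    const track = stream.getVideoTracks()[0];
    const imageCapture = new ImageCapture(track);
    return imageCapture.takePhoto();
  })
  .then((blob) => {
    img.src = URL.createObjectURL(blob); // `img` is an <img> on the page
  })
  .catch((error) => console.error('takePhoto() failed:', error));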

Check Sam’s post for full details and sample code you can use to get started right away.

And more!

  • The MediaError.message string provides, if available, any additional error message detail to help web developers debug media player errors.

These are just a few of the changes in Chrome 59 for developers.

If you enjoyed this video, check out Designer vs. Developer, a new video series that tries to solve the challenges faced when designers and developers work together.

Then subscribe to our YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 60 is released, I’ll be right here to tell you -- what’s new in Chrome!


Object rest and spread properties

Before discussing object rest and spread properties, let’s take a trip down memory lane and remind ourselves of a very similar feature.

ES2015 array rest and spread elements

Good ol’ ECMAScript 2015 introduced rest elements for array destructuring assignment and spread elements for array literals.

// Rest elements for array destructuring assignment:
const primes = [2, 3, 5, 7, 11];
const [first, second, ...rest] = primes;
console.log(first); // 2
console.log(second); // 3
console.log(rest); // [5, 7, 11]

// Spread elements for array literals:
const primesCopy = [first, second, ...rest];
console.log(primesCopy); // [2, 3, 5, 7, 11]

These ES2015 features have been supported since Chrome 46 and Chrome 47, respectively.

ES.next: object rest and spread properties 🆕

So what’s new, then? Well, a stage 3 proposal enables rest and spread properties for object literals, too.

// Rest properties for object destructuring assignment:
const person = {
    firstName: 'Sebastian',
    lastName: 'Markbåge',
    country: 'USA',
    state: 'CA',
};
const { firstName, lastName, ...rest } = person;
console.log(firstName); // Sebastian
console.log(lastName); // Markbåge
console.log(rest); // { country: 'USA', state: 'CA' }

// Spread properties for object literals:
const personCopy = { firstName, lastName, ...rest };
console.log(personCopy);
// { firstName: 'Sebastian', lastName: 'Markbåge', country: 'USA', state: 'CA' }

Spread properties offer a more elegant alternative to Object.assign() in many situations:

// Shallow-clone an object:
const data = { x: 42, y: 27, label: 'Treasure' };
// The old way:
const clone1 = Object.assign({}, data);
// The new way:
const clone2 = { ...data };
// Either results in:
// { x: 42, y: 27, label: 'Treasure' }

// Merge two objects:
const defaultSettings = { logWarnings: false, logErrors: false };
const userSettings = { logErrors: true };
// The old way:
const settings1 = Object.assign({}, defaultSettings, userSettings);
// The new way:
const settings2 = { ...defaultSettings, ...userSettings };
// Either results in:
// { logWarnings: false, logErrors: true }

However, there are some subtle differences in how spreading handles setters:

  1. Object.assign() triggers setters; spread doesn't (see the sketch below).
  2. Inherited read-only properties on the target can stop Object.assign() from creating own properties, but they have no effect on the spread operator.
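
Here's a minimal sketch of the first difference; the property name is just for illustration:

const target = {
  set foo(value) {
    console.log('setter called with', value);
  },
};

// Object.assign() performs a [[Set]] on the target, so the setter runs:
Object.assign(target, { foo: 42 }); // logs 'setter called with 42'

// Spread defines new data properties on the result instead,
// so no setter is involved:
const copy = { ...{ foo: 42 } };
console.log(copy.foo); // 42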

Axel Rauschmayer’s write-up explains these gotchas in more detail.

Object rest and spread properties are supported by default in V8 v6.0.75+ and Chrome 60+. Consider transpiling your code until this feature is more widely supported across engines.

Deprecations and Removals in Chrome 60

In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and also capabilities of the Web Platform. This article describes the deprecations and removals in Chrome 60, which is in beta as of June 8. This list is subject to change at any time.

Security

crypto.subtle now requires a secure origin

The Web Crypto API, which has been supported since Chrome 37, has always worked on non-secure origins. Because of Chrome's long-standing policy of preferring secure origins for powerful features, crypto.subtle is now only visible on secure origins.

Intent to Remove | Chromium Bug

Remove content-initiated top frame navigations to data URLs

Because of their unfamiliarity to non-technical browser users, we're increasingly seeing the data: scheme being used in spoofing and phishing attacks. To prevent this, we're blocking web pages from loading data: URLs in the top frame. This applies to <a> tags, window.open(), window.location, and similar mechanisms. The data: scheme will still work for resources loaded by a page.

This feature was deprecated in Chrome 58 and is now removed.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Temporarily disable navigator.sendBeacon() for some blobs

The navigator.sendBeacon() function has been available since Chrome 39. As originally implemented, the function's data argument could contain any arbitrary blob whose type is not CORS-safelisted. We believe this is a potential security threat, though no one has yet tried to exploit it. Because we do not have a reasonable immediate fix, sendBeacon() temporarily can no longer be invoked on blobs whose type is not CORS-safelisted.

Although this change was implemented for Chrome 60, it has since been merged back to Chrome 59.

Chromium Bug

CSS

Make shadow-piercing descendant combinator behave like descendant combinator

The shadow-piercing descendant combinator (>>>), part of CSS Scoping Module Level 1, was intended to match the children of a particular ancestor element even when they appeared inside of a shadow tree. This had some limitations. First, per the spec, it could only be used in JavaScript calls such as querySelector() and did not work in stylesheets. More importantly, browser vendors were unable to make it work beyond one level of the Shadow DOM.

Consequently, the descendant combinator has been removed from relevant specs including Shadow DOM v1. Rather than break web pages by removing this selector from Chromium, we've chosen instead to alias the shadow-piercing descendant combinator to the descendant combinator. The original behavior was deprecated in Chrome 45. The new behavior is implemented in Chrome 60.

Intent to Remove | Chromestatus Tracker | Chromium Bug

JavaScript

Move getContextAttributes() behind a flag

The getContextAttributes() function has been supported on CanvasRenderingContext2D since 2013. However, the feature was not part of any standard and has not become part of one since that time. It should have been implemented behind the --enable-experimental-canvas-features command line flag, but mistakenly was not. In Chrome 60 this oversight has been corrected. It's believed that this change is safe, since there's no data showing that anyone is using the method.

Chromium Bug

Remove Headers.prototype.getAll()

The Headers.prototype.getAll() function is being removed per the latest version of the Fetch specification.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove indexedDB.webkitGetDatabaseNames()

We added this feature when Indexed DB was relatively new in Chrome and prefixing was all the rage. The API asynchronously returns a list of existing database names in an origin, which seemed sensible enough.

Unfortunately, the design is flawed, in that the results may be obsolete as soon as they are returned, so it can really only be used for logging, not serious application logic. The GitHub issue tracks and links to previous discussion on alternatives, which would require a different approach. While there's been on-and-off interest from developers, given the lack of cross-browser progress the problem has been worked around by library authors.

Developers needing this functionality need to develop their own solution. Libraries like Dexie.js, for example, use a global table, which is itself another database, to track the names of databases.
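
A minimal sketch of that approach, assuming every database is opened through a helper like the one below (the helper and the '_database-names' bookkeeping database are hypothetical, not Dexie.js internals):

// Record each database name in a dedicated bookkeeping database
// before opening the requested database itself.
function openTrackedDatabase(name, version, onupgradeneeded) {
  const bookkeeping = indexedDB.open('_database-names', 1);
  bookkeeping.onupgradeneeded = () => {
    bookkeeping.result.createObjectStore('names', { keyPath: 'name' });
  };
  bookkeeping.onsuccess = () => {
    bookkeeping.result
      .transaction('names', 'readwrite')
      .objectStore('names')
      .put({ name: name });
  };
  const request = indexedDB.open(name, version);
  request.onupgradeneeded = onupgradeneeded;
  return request;
}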

This feature was deprecated in Chrome 58 and is now removed.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove WEBKIT_KEYFRAMES_RULE and WEBKIT_KEYFRAME_RULE

The non-standard WEBKIT_KEYFRAMES_RULE and WEBKIT_KEYFRAME_RULE constants are removed from CSSRule. Developers should use KEYFRAMES_RULE and KEYFRAME_RULE instead.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Introduction to the Budget API

The Push Messaging API enables us to send notifications to a user even when the browser is closed. Many developers want to be able to use this messaging to update and synchronize content without the browser being open, but the API has one important restriction: you must always display a notification for every single push message received.

Being able to send a push message to synchronize data on a user's device can be extremely useful for users and developers, but allowing a web app to do work in the background without the user knowing is open to abuse.

The Budget API is a new API designed to allow developers to perform background work without notifying the user, such as a silent push or a background fetch. In Chrome 60 and above you'll be able to start using this API, and the Chrome team is eager to get feedback from developers.

To allow developers to consume a user's resources in the background, the web platform is introducing the concept of a budget using the new Budget API. Each site has a set amount of resources that it can consume for background actions, such as a silent push, and each operation depletes the budget. When the budget is spent, background actions can no longer be performed without user visibility or consent. The user agent is responsible for determining the budget assigned to a web app based on its own heuristics; for example, the budget allowance could be linked to user engagement. Each browser can decide its own heuristics.

TL;DR: The Budget API allows you to reserve budget, use budget, get a list of remaining budget, and understand the cost of background operations.

Reserving Budget

In Chrome 60 and above, the navigator.budget.reserve() method will be available without any flags.

The reserve() method allows you to request budget for a specific operation and it'll return a boolean to indicate if the budget can be reserved. If the budget was reserved, there is no need to notify the user of your background work.

In the example of push notifications, you can attempt to reserve budget for a "silent-push" operation. If reserve() resolves with true, the operation is allowed. Otherwise it'll resolve with false and you'll need to show a notification.

self.addEventListener('push', event => {
 const promiseChain = navigator.budget.reserve('silent-push')
   .then((reserved) => {
     if (reserved) {
       // No need to show a notification.
       return;
     }

     // Not enough budget is available, must show a notification.
     return registration.showNotification(...);
   });
 event.waitUntil(promiseChain);
});

In Chrome 60, 'silent-push' is the only operation type that is available, but you can find a full list of operation types in the spec. There is also no way to reset your budget once it's used; the only way to get more budget is to use a new profile. Sadly you can't use incognito for this either, as the Budget API will return a budget of zero in Incognito (although there is a bug that results in an error during my testing).

You should only call reserve() when you intend to perform the operation you are reserving. If you called reserve() in the above example but still showed a notification, the budget would still be used.

One common use case that isn't enabled by reserve() alone is the ability to schedule a silent push from a backend. The Budget API does have APIs to enable this use case, but they are still being worked on in Chrome and are currently only available behind flags and / or an Origin Trial.

Budget API and Origin Trials

There are two methods, getBudget() and getCost(), that can be used by a web app to plan its budget usage.

In Chrome 60, both of these methods are behind an origin trial but you can use them locally by enabling the Experimental Web Platform features flag (Open chrome://flags/#enable-experimental-web-platform-features in Chrome).

Let's look at how to use these APIs.

Get your Budget

You can find your available budget with the getBudget() method. This returns an array of BudgetState entries that indicate your budget at various points in time.

To list the budget entries we can run:

navigator.budget.getBudget()
.then((budgets) => {
  budgets.forEach((element) => {
    console.log(`At '${new Date(element.time).toString()}' ` +
      `your budget will be '${element.budgetAt}'.`);
  });
});

The first entry will be your current budget and additional values will be future changes to your budget.

At 'Mon Jun 05 2017 12:47:20' your budget will be '1'.
At 'Fri Jun 09 2017 10:42:57' your budget will be '2'.
At 'Fri Jun 09 2017 12:31:09' your budget will be '3'.

One of the benefits of including future budget allowances is that developers can share this information with their backend to adapt their server side behavior (i.e. only send a push message to trigger an update when the client has budget for a silent push).
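
As a sketch, the entries could be posted to a hypothetical endpoint like this:

// Share upcoming budget allowances with the server so it can decide
// whether a silent push is worthwhile. '/api/budget' is hypothetical.
navigator.budget.getBudget()
.then((budgets) => {
  const entries = budgets.map((b) => ({ time: b.time, budgetAt: b.budgetAt }));
  return fetch('/api/budget', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(entries),
  });
});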

Note: At the time of writing there is a bug where the budget can decrease to a negative value. Your budget should never go below zero.

Get the Cost of an Operation

To find out how much an operation will cost, call getCost(). It returns a number indicating the maximum amount of budget that will be consumed if you call reserve() for that operation.

For example, we can find out the cost of not showing a notification when you receive a push message (i.e. the cost of a silent push), with the following code:

navigator.budget.getCost('silent-push')
.then((cost) => {
  console.log('Cost of silent push is:', cost);
})
.catch((err) => {
  console.error('Unable to get cost:', err);
});

At the time of writing, Chrome 60 will print:

Cost of silent push is: 2

One thing to highlight with the reserve() and getCost() methods is that the actual cost of an operation can be less than the cost returned by getCost(). You may still be able to reserve an operation even if your current budget is less than the indicated cost. The specific details from the spec are as follows:

The reserved cost of certain background operations could be less than the cost indicated by getCost() when the user's device is in favorable conditions, for example because it's not on battery power.

Note: If you pass in an operation type that the browser does not support or is invalid, the promise will reject with a TypeError.

That's the current API in Chrome. As the web continues to support new APIs that require the ability to perform background work, like background fetch, the Budget API can be used to manage the number of operations you can perform without notifying the user.

As you use the API please provide feedback through the Origin Trial or crbug.com.

Latest Updates to the Credential Management API

Some of the updates described here are explained at the Google I/O session, Secure and Seamless Sign-In: Keeping Users Engaged:

Chrome 57

Chrome 57 introduced this important change to the Credential Management API.

Credentials can be shared from a different subdomain

Chrome can now retrieve a credential stored in a different subdomain using the Credential Management API. For example, if a password is stored for login.example.com, a script on www.example.com can show it as one of the account items in the account chooser dialog.

You must explicitly store the password using navigator.credentials.store(), so that when a user chooses a credential by tapping on the dialog, the password gets passed and copied to the current origin.

Once it's stored, the password is available as a credential on the exact same origin, www.example.com, from then onward.

In the following screenshot, credential information stored under login.aliexpress.com is visible to m.aliexpress.com and available for the user to choose from:

Account chooser showing selected subdomain login details

Chrome 60

Chrome 60 introduces several important changes to the Credential Management API:

PasswordCredential object now includes password

The Credential Management API took a conservative approach to handling passwords. It concealed passwords from JavaScript, requiring developers to send the PasswordCredential object directly to their server for validation via an extension to the fetch() API.

But this approach introduced a number of restrictions. We received feedback that developers could not use the API because:

  • They had to send the password as part of a JSON object.

  • They had to send the hash value of the password to their server.

After performing a security analysis and recognizing that concealing passwords from JavaScript did not prevent all attack vectors as effectively as we were hoping, we have decided to make a change.

The Credential Management API now includes a raw password in a returned credential object so you have access to it as plain text. You can use existing methods to deliver credential information to your server:

navigator.credentials.get({
  password: true,
  federated: {
    provider: [ 'https://accounts.google.com' ]
  },
  mediation: 'silent'
}).then(c => {
  if (c) {
    let form = new FormData();
    form.append('email', c.id);
    form.append('password', c.password);
    form.append('csrf_token', csrf_token);
    return fetch('/signin', {
      method: 'POST',
      credentials: 'include',
      body: form
    });
  } else {

    // Fallback to sign-in form
  }
}).then(res => {
  if (res.status === 200) {
    return res.json();
  } else {
    throw 'Auth failed';
  }
}).then(profile => {
  console.log('Auth succeeded', profile);
});

Custom fetch will be deprecated soon

To determine if you are using a custom fetch() function, check if it uses a PasswordCredential object or a FederatedCredential object as the value of the credentials property, for example:

fetch('/signin', {
  method: 'POST',
  credentials: c
})

Using a regular fetch() function, as shown in the previous code example, or using an XMLHttpRequest is recommended.

navigator.credentials.get() now accepts an enum mediation

Until Chrome 60, navigator.credentials.get() accepted an optional unmediated property with a boolean flag. For example:

navigator.credentials.get({
  password: true,
  federated: {
    provider: [ 'https://accounts.google.com' ]
  },
  unmediated: true
}).then(c => {

  // Sign-in
});

Setting unmediated: true lets the browser pass a credential without showing the account chooser, skipping user mediation.

The flag is now extended into a mediation enum. User mediation can happen when:

  • A user needs to choose an account to sign-in with.

  • A user wants to explicitly sign in after a navigator.credentials.preventSilentAccess() call.

Choose one of the following options for the mediation value:

  • silent: equals unmediated: true. The credential is passed without showing an account chooser.
  • optional: equals unmediated: false. Shows an account chooser if preventSilentAccess() was called previously.
  • required: a new option. Always shows an account chooser. Useful when you want to let a user switch accounts using the native account chooser dialog.

In this example, the credential is passed without showing an account chooser, the equivalent of the previous flag unmediated: true:

navigator.credentials.get({
  password: true,
  federated: {
    provider: [ 'https://accounts.google.com' ]
  },
  mediation: 'silent'
}).then(c => {

  // Sign-in
});

requireUserMediation() renamed to preventSilentAccess()

To align nicely with the new mediation property offered in the get() call, the navigator.credentials.requireUserMediation() method has been renamed to navigator.credentials.preventSilentAccess().

The renamed method prevents passing a credential without showing the account chooser (sometimes called without user mediation). This is useful when a user signs out of a website or unregisters from one and doesn't want to get signed back in automatically at the next visit.

signoutUser();
if (navigator.credentials) {
  navigator.credentials.preventSilentAccess();
}

Create credential objects asynchronously with new method navigator.credentials.create()

You now have the option to create credential objects asynchronously with the new method, navigator.credentials.create(). Read on for a comparison between both the sync and async approaches.

Creating a PasswordCredential object

Sync approach:

let c = new PasswordCredential(form);

Async approach (new):

let c = await navigator.credentials.create({
  password: form
});

or:

let c = await navigator.credentials.create({
  id: id,
  password: password
});

Creating a FederatedCredential object

Sync approach:

let c = new FederatedCredential({
  id:       'agektmr',
  name:     'Eiji Kitamura',
  provider: 'https://accounts.google.com',
  iconURL:  'https://*****'
});

Async approach (new):

let c = await navigator.credentials.create({
  id:       'agektmr',
  name:     'Eiji Kitamura',
  provider: 'https://accounts.google.com',
  iconURL:  'https://*****'
});

Automated testing with Headless Chrome

If you want to run automated tests using Headless Chrome, look no further! This article will get you all set up using Karma as a runner and Mocha+Chai for authoring tests.

What are these things?

Karma, Mocha, Chai, Headless Chrome, oh my!

Karma is a testing harness that works with any of the most popular testing frameworks (Jasmine, Mocha, QUnit).

Chai is an assertion library that works with Node and in the browser. We need the latter.

Headless Chrome is a way to run the Chrome browser in a headless environment without the full browser UI. One of the benefits of using Headless Chrome (as opposed to testing directly in Node) is that your JavaScript tests will be executed in the same environment as users of your site. Headless Chrome gives you a real browser context without the memory overhead of running a full version of Chrome.

Setup

Installation

Install Karma, the relevant plugins, and the test runners using yarn:

yarn add --dev karma karma-chrome-launcher karma-mocha karma-chai
yarn add --dev mocha chai

or use npm:

npm i --save-dev karma karma-chrome-launcher karma-mocha karma-chai
npm i --save-dev mocha chai

I'm using Mocha and Chai in this post, but if you're not a fan, choose your favorite assertion library that works in the browser.

Configure Karma

Create a karma.conf.js file that uses the ChromeHeadless launcher.

karma.conf.js

module.exports = function(config) {
  config.set({
    frameworks: ['mocha', 'chai'],
    files: ['test/**/*.js'],
    reporters: ['progress'],
    port: 9876,  // karma web server port
    colors: true,
    logLevel: config.LOG_INFO,
    browsers: ['ChromeHeadless'],
    autoWatch: false,
    // singleRun: false, // Karma captures browsers, runs the tests and exits
    concurrency: Infinity
  })
}

Note: Run ./node_modules/karma/bin/karma init karma.conf.js to generate the Karma configuration file.

Write a test

Create a test in /test/test.js.

/test/test.js

describe('Array', () => {
  describe('#indexOf()', () => {
    it('should return -1 when the value is not present', () => {
      assert.equal(-1, [1,2,3].indexOf(4));
    });
  });
});

Run your tests

Add a test script in package.json that runs Karma with our settings.

package.json

"scripts": {
  "test": "karma start --single-run --browsers ChromeHeadless karma.conf.js"
}

When you run your tests (yarn test), Headless Chrome should fire up and output the results to the terminal:

Output from Karma

Creating your own Headless Chrome launcher

The ChromeHeadless launcher is great because it works out of the box for testing on Headless Chrome. It includes the appropriate Chrome flags for you and launches a remote debugging version of Chrome on port 9222.

However, sometimes you may want to pass custom flags to Chrome or change the remote debugging port the launcher uses. To do that, create a customLaunchers field that extends the base ChromeHeadless launcher:

karma.conf.js

module.exports = function(config) {
  ...

  config.set({
    browsers: ['Chrome', 'ChromeHeadless', 'MyHeadlessChrome'],

    customLaunchers: {
      MyHeadlessChrome: {
        base: 'ChromeHeadless',
        flags: ['--disable-translate', '--disable-extensions', '--remote-debugging-port=9223']
      }
    },
  });
};

Running it all on Travis CI

Configuring Karma to run your tests in Headless Chrome is the hard part. Continuous integration in Travis is just a few lines away!

To run your tests in Travis, use dist: trusty and install the Chrome stable addon:

.travis.yml

language: node_js
node_js:
  - "7"
dist: trusty # needs Ubuntu Trusty
sudo: false  # no need for virtualization.
addons:
  chrome: stable # have Travis install chrome stable.
cache:
  yarn: true
  directories:
    - node_modules
install:
  - yarn
script:
  - yarn test

Note: check out the example repo for reference.


DOMException: The play() request was interrupted

Did you just stumble upon this unexpected media error in the Chrome DevTools JavaScript Console?

Uncaught (in promise) DOMException: The play() request was interrupted by a call to pause().

or

Uncaught (in promise) DOMException: The play() request was interrupted by a load request.

You're in the right place then. Have no fear. I'll explain what is causing this and how to fix it.

What is causing this

Here's some JavaScript code below that reproduces the "Uncaught (in promise)" error you're seeing:

DON'T

<video id="video" preload="none" src="https://example.com/file.mp4"></video>

<script>
  video.play(); // <-- This is asynchronous!
  video.pause();
</script>

The code above results in this error message in Chrome DevTools:

Uncaught (in promise) DOMException: The play() request was interrupted by a call to pause().

As the video is not loaded due to preload="none", video playback doesn't necessarily start immediately after video.play() is executed.

Moreover, since Chrome 50, a play() call on a <video> or <audio> element returns a Promise, an object that represents the eventual result of a single asynchronous operation. If playback succeeds, the Promise is fulfilled, and if playback fails, the Promise is rejected along with an error message explaining the failure.

Now here's what's happening:

  1. video.play() starts loading video content asynchronously.
  2. video.pause() interrupts video loading because it is not ready yet.
  3. video.play() rejects loudly and asynchronously.

Since we're not handling the video play Promise in our code, an error message appears in Chrome DevTools.

Note: Calling video.pause() isn't the only way to interrupt a video "play()" request. You can entirely reset the video playback state, including the buffer, with video.load() or video.src = ''.

How to fix it

Now that we understand the root cause, let's see what we can do to fix this properly.

First, don't ever assume a media element (video or audio) will play. Look at the Promise returned by the play function to see if it was rejected. It is worth noting that the Promise won't fulfill until playback has actually started, meaning the code inside the then() will not execute until the media is playing.

Example: Autoplay

<video id="video" preload="none" src="https://example.com/file.mp4"></video>

<script>
  var playPromise = video.play();

  if (playPromise !== undefined) {
    playPromise.then(_ => {
      // Automatic playback started!
    })
    .catch(error => {
      // Auto-play was prevented
      // Show a UI element to let the user manually start playback
    });
  }
</script>

Example: Play & Pause

<video id="video" preload="none" src="https://example.com/file.mp4"></video>
 
<script>
  var playPromise = video.play();
 
  if (playPromise !== undefined) {
    playPromise.then(_ => {
      // Automatic playback started!
      // We can now safely pause video...
      video.pause();
    })
    .catch(error => {
      // Auto-play was prevented
      // Show a UI element to let the user manually start playback
    });
  }
</script>

That's great for this simple example, but what if you want to use video.play() to play a video later, when the user interacts with the website?

I'll tell you a secret... you don't have to use video.play(); you can use video.load(). Here's how:

Example: Fetch & Play

<video id="video"></video>
<button id="button"></button>

<script>
  button.addEventListener('click', onButtonClick);

  function onButtonClick() {
    // This will allow us to play video later...
    video.load();
    fetchVideoAndPlay();
  }

  function fetchVideoAndPlay() {
    fetch('https://example.com/file.mp4')
    .then(response => response.blob())
    .then(blob => {
      video.srcObject = blob;
      return video.play();
    })
    .then(_ => {
      // Video playback started ;)
    })
    .catch(e => {
      // Video playback failed ;(
    })
  }
</script>

Warning: Don't make your onButtonClick function asynchronous with the async keyword for instance. You'll lose the "user gesture token" required to allow your video to play later.

Play promise support

At the time of writing, HTMLMediaElement.play() returns a promise in Chrome, Firefox, Opera, and Safari. Edge is still working on it.

Aligned Input Events

TL;DR

  • Chrome 60 reduces jank by lowering event frequency, thereby improving the consistency of frame timing.
  • The getCoalescedEvents() method, introduced in Chrome 58, provides the same wealth of event information you've had all along.

Providing a smooth user experience is important for the web. The time between receiving an input event and when the visuals actually update matters, and generally doing less work matters too. Over the past few releases of Chrome, we have driven down input latency across devices.

In the interest of smoothness and performance, in Chrome 60 we are making a change that causes these events to occur at a lower frequency while increasing the granularity of the information provided. Much like when Jelly Bean was released and brought the Choreographer, which aligns input on Android, we are bringing frame-aligned input to the web on all platforms.

But sometimes you need more events. So, in Chrome 58 we implemented a method called getCoalescedEvents(), which lets your application retrieve the full path of the pointer even while it's receiving fewer events.

Let's talk about event frequency first.

Lowering event frequency

Let's understand some basics: touch screens deliver input at 60-120Hz, and mice typically deliver input at 100Hz (but can go anywhere up to 2000Hz). Yet the typical refresh rate of a monitor is 60Hz. So what does that actually mean? It means that we receive input at a higher rate than we actually update the display. So let's look at a performance timeline from DevTools for a simple canvas painting app.

In the picture below, with requestAnimationFrame()-aligned input disabled, you can see multiple processing blocks per frame with an inconsistent frame time. The small yellow blocks indicate hit testing for such things as the target of the DOM event, dispatching the event, running JavaScript, updating the hovered node, and possibly re-calculating layout and styles.

A performance timeline showing inconsistent frame timing.

So why are we doing extra work that doesn't cause any visual updates? Ideally we don't want to do any work that doesn't ultimately benefit the user. Starting in Chrome 60 the input pipeline will delay dispatching continuous events (wheel, mousewheel, touchmove, pointermove, mousemove) and dispatch them right before the requestAnimationFrame() callback occurs. In the picture below (with the feature enabled) you see a more consistent frame time and less time processing events.

We have been running an experiment with this feature enabled on the Canary and Dev channels and have found that we perform 35% fewer hit tests, which allows the main thread to be ready to run more often.

An important note that web developers should be aware of is that any discrete event (such as keydown, keyup, mouseup, mousedown, touchstart, touchend) that occurs will be dispatched right away along with any pending events, preserving the relative ordering. With this feature enabled a lot of the work is streamlined into the normal event loop flow, providing a consistent input interval. This brings continuous events inline with scroll and resize events which have already been streamlined into the event loop flow in Chrome.

A performance timeline showing relatively consistent frame timing.

We've found that the vast majority of applications consuming such events have no use for the higher frequency. Android has already aligned events for a number of years so nothing there is new, but sites may experience less granular events on desktop platforms. There has always been a problem with janky main threads causing hiccups for input smoothness meaning that you might see jumps in position whenever the application is doing work, making it impossible to know how the pointer got from one spot to the other.

The getCoalescedEvents() method

As I said, there are rare scenarios where the application would prefer to know the full path of the pointer. So to fix the case where you see large jumps and the reduced frequency of events, in Chrome 58 we launched an extension to pointer events called getCoalescedEvents(). And below is an example of how jank on the main thread is hidden from the application if you use this API.

Comparing standard and coalesced events.

Instead of receiving a single event you can access the array of historical events that caused the event. Android, iOS, and Windows all have very similar APIs in their native SDKs, and we are exposing a similar API to the web.

Typically a drawing app may have drawn a point by looking at the offsets on the event:

window.addEventListener("pointermove", function(event) {
  drawPoint(event.pageX, event.pageY)
});

This code can easily be changed to use the array of events:

window.addEventListener("pointermove", function(event) {
  const events = 'getCoalescedEvents' in event ? event.getCoalescedEvents() : [event];
  for (let e of events) {
    drawPoint(e.pageX, e.pageY)
  }
});

Note that not every property on the coalesced events is populated. Since the coalesced events are not really dispatched but are just along for the ride, they aren't hit tested. Some fields, such as currentTarget and eventPhase, will have their default values. Calling dispatch-related methods such as stopPropagation() or preventDefault() will have no effect on the parent event.


Supercharged Live Stream Blog: Code Splitting

Note: As always – this is not production-ready code. I have simplified the code at the cost of generality. Our main goal is to convey concepts and demystify buzzwords.

In our most recent Supercharged Live Stream we implemented code splitting and route-based chunking. With HTTP/2 and native ES6 modules, these techniques will become essential to enabling efficient loading and caching of script resources.

Miscellaneous tips & tricks in this episode

  • asyncFunction().catch() with error.stack: 9:55
  • Modules and nomodule attribute on <script> tags: 7:30
  • promisify() in Node 8: 17:20

TL;DR:

How to do code splitting via route-based chunking:

  1. Obtain a list of your entry points.
  2. Extract the module dependencies of all these entry points.
  3. Find shared dependencies between all entry points.
  4. Bundle the shared dependencies.
  5. Rewrite the entry points.

Code splitting vs. route-based chunking

Time stamp: 1:50

Code splitting and route-based chunking are closely related and are often used interchangeably. This has caused some confusion. Let’s try to clear this up:

  • Code splitting: Code splitting is the process of splitting your code into multiple bundles. If you are not shipping one big bundle with all of your JavaScript to the client, you are doing code splitting. One specific way of splitting your code is to use route-based chunking.
  • Route-based chunking: Route-based chunking creates bundles that are related to your app’s routes. By analyzing your routes and their dependencies, we can change which modules go into which bundle.

Why do code splitting?

Loose modules

With native ES6 modules, every JavaScript module can import its own dependencies. When the browser receives a module, all import statements will trigger additional fetches to get ahold of the modules that are necessary to run the code. However, all these modules can have dependencies of their own. The danger is that the browser ends up with a cascade of fetches that last for multiple round trips before the code can finally be executed.

Bundling

Bundling, which is inlining all your modules into one single bundle, will make sure the browser has all the code it needs after one round trip and can start running the code more quickly. This, however, forces the user to download a lot of code that is not needed, so bandwidth and time have been wasted. Additionally, every change to one of our original modules will result in a change to the bundle, invalidating any cached version of the bundle. Users will have to re-download the entire thing.

Code splitting

Code splitting is the middle ground. We are willing to invest additional round trips to get network efficiency by only downloading what we need, and better caching efficiency by making the number of modules per bundle much smaller. If the bundling is done right, the total number of round trips will be much lower than with loose modules. Finally, we could make use of pushing mechanisms like link[rel=preload] or HTTP/2 Push to save additional round trip times if needed.

Step 1: Obtain a list of your entry points

Time stamp: 16:02

This is only one of many approaches, but in the episode we parsed the website’s sitemap.xml to get the entry points to our website. Usually, a dedicated JSON file listing all entry points is used.
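
The dedicated-file variant can be as simple as this sketch, where entrypoints.json is a hypothetical file listing the entry module paths:

// entrypoints.json (hypothetical): ["static/home.js", "static/about.js"]
const fs = require('fs');
const entryPoints = JSON.parse(fs.readFileSync('./entrypoints.json', 'utf-8'));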

Using babel to process JavaScript

Time stamp: 25:25 to 28:25

Babel is commonly used for “transpiling”: consuming bleeding-edge JavaScript code and turning it into an older version of JavaScript so that more browsers are able to execute the code. The first step here is to parse the new JavaScript with a parser (Babel uses babylon) that turns the code into a so-called “Abstract Syntax Tree” (AST). Once the AST has been generated, a series of plugins analyze and mangle the AST.

Note: I am not very experienced with Babel. The plugins I built work, but they are probably neither efficient nor idiomatic Babel. For a more in-depth guide to authoring Babel plugins, I recommend the Babel Handbook.

We are going to make heavy use of Babel to detect (and later manipulate) the imports of a JavaScript module. You might be tempted to resort to regular expressions, but regular expressions are not powerful enough to properly parse a language and are hard to maintain. Relying on tried-and-tested tools like Babel will save you many headaches.

Here’s a simple example of running Babel with a custom plugin:

const plugin = {
  visitor: {
    ImportDeclaration(decl) {
      /* ... */
    }
  }
}
const {code} = babel.transform(inputCode, {plugins: [plugin]});

A plugin can provide a visitor object. The visitor contains a function for any node type that the plugin wants to handle. When a node of that type is encountered while traversing the AST the corresponding function in the visitor object will be invoked with that node as a parameter. In the example above, the ImportDeclaration() method will be called for every import declaration in the file. To get more of a feeling for node types and the AST, take a look at astexplorer.net.

Step 2: Extract the module dependencies

Time stamp: 28:25 to 34:57

To build the dependency tree of a module, we will parse that module and create a list of all the modules it imports. We also need to parse those dependencies, as they in turn might have dependencies as well. A classic case for recursion!

async function buildDependencyTree(file) {
  let code = await readFile(file);
  code = code.toString('utf-8');

  // `dep` will collect all dependencies of `file`
  let dep = [];
  const plugin = {
    visitor: {
      ImportDeclaration(decl) {
        const importedFile = decl.node.source.value;
        // Recursion: Push an array of the dependency’s dependencies onto the list
        dep.push((async function() {
          return await buildDependencyTree(`./app/${importedFile}`);
        })());
        // Push the dependency itself onto the list
        dep.push(importedFile);
      }
    }
  }
  // Run the plugin
  babel.transform(code, {plugins: [plugin]});
  // Wait for all promises to resolve and then flatten the array
  return flatten(await Promise.all(dep));
}
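
The snippet above relies on a promisified readFile and a flatten helper that aren't shown. Here's a minimal sketch of both, using util.promisify from Node 8 (mentioned in the tips above):

const util = require('util');
const fs = require('fs');

// Promise-returning version of fs.readFile (Node 8+).
const readFile = util.promisify(fs.readFile);

// Flatten arbitrarily nested arrays into a single flat array.
function flatten(arr) {
  return arr.reduce(
    (acc, item) => acc.concat(Array.isArray(item) ? flatten(item) : item),
    []);
}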

Step 3: Find shared dependencies between all entry points

Time stamp: 34:57 to 38:30

Since we have a set of dependency trees – a dependency forest if you will – we can find the shared dependencies by looking for nodes that appear in every tree. We will flatten and deduplicate our forest and filter to only keep the elements that appear in all trees.

function findCommonDeps(depTrees) {
  const depSet = new Set();
  // Flatten
  depTrees.forEach(depTree => {
    depTree.forEach(dep => depSet.add(dep));
  });
  // Filter
  return Array.from(depSet)
    .filter(dep => depTrees.every(depTree => depTree.includes(dep)));
}

Step 4: Bundle shared dependencies

Time stamp: 39:20 to 46:43

To bundle our set of shared dependencies, we could just concatenate all the module files. Two problems arise when using that approach: The first problem is that the bundle will still contain import statements, which will make the browser attempt to fetch resources. The second problem is that the dependencies’ dependencies have not been bundled. Because we have done it before, we are going to write another Babel plugin.

Note: I am convinced someone with more experience with Babel and its APIs would be able to do all of this in one pass. For the sake of brevity and clarity, I chose to write multiple plugins and parse some files multiple times so that the steps are truly separate.

The code is fairly similar to our first plugin, but instead of just extracting the imports, we will also be removing them and inserting a bundled version of the imported file:

async function bundle(oldCode) {
  // `newCode` will be filled with code fragments that eventually form the bundle.
  let newCode = [];
  const plugin = {
    visitor: {
      ImportDeclaration(decl) {
        const importedFile = decl.node.source.value;
        newCode.push((async function() {
          // Bundle the imported file and add it to the output.
          return await bundle(await readFile(`./app/${importedFile}`));
        })());
        // Remove the import declaration from the AST.
        decl.remove();
      }
    }
  };
  // Save the stringified, transformed AST. This code is the same as `oldCode`
  // but without any import statements.
  const {code} = babel.transform(oldCode, {plugins: [plugin]});
  newCode.push(code);
  // `newCode` contains all the bundled dependencies as well as the
  // import-less version of the code itself. Concatenate to generate the code
  // for the bundle.
  return flatten(await Promise.all(newCode)).join('\n');
}

Step 5: Rewrite entry points

Time stamp: 46:43 to 46:43

For the last step we will write yet another Babel plugin. Its job is to remove all imports of modules that are in the shared bundle.

async function rewrite(section, sharedBundle) {
  let oldCode = await readFile(`./app/static/${section}.js`);
  oldCode = oldCode.toString('utf-8');
  const plugin = {
    visitor: {
      ImportDeclaration(decl) {
        const importedFile = decl.node.source.value;
        // If this import statement imports a file that is in the shared bundle, remove it.
        if(sharedBundle.includes(importedFile))
          decl.remove();
      }
    }
  };
  let {code} = babel.transform(oldCode, {plugins: [plugin]});
  // Prepend an import statement for the shared bundle.
  code = `import '/static/_shared.js';\n${code}`;
  await writeFile(`./app/static/_${section}.js`, code);
}

End

This was quite the ride, wasn’t it? Please remember that our goal for this episode was to explain and demystify code splitting. The result works – but it’s specific to our demo site and will fail horribly in the generic case. For production, I’d recommend relying on established tools like webpack, Rollup, etc.

You can find our code in the GitHub repository.

See you next time!

Upcoming Regular Expression Features

ES2015 introduced many new features to the JavaScript language, including significant improvements to the regular expression syntax with the Unicode (/u) and sticky (/y) flags. But development has not stopped since then. In tight collaboration with other members at TC39 (the ECMAScript standards body), the V8 team has proposed and co-designed several new features to make regular expressions even more powerful.

These features are currently being proposed for inclusion in the JavaScript specification. Even though the proposals have not been fully accepted, they are already at Stage 3 in the TC39 process. We have implemented these features behind a flag (see below) in order to be able to provide timely design and implementation feedback to the respective proposal authors before the specification is finalized.

This blog post gives you a preview of this exciting future. If you'd like to follow along with the upcoming examples, enable experimental JavaScript features at chrome://flags/#enable-javascript-harmony.

Named Captures

Regular expressions can contain so-called captures (or groups), which can capture a portion of the matched text. So far, developers could only refer to these captures by their index, which is determined by the position of the capture within the pattern.

const pattern = /(\d{4})-(\d{2})-(\d{2})/u;
const result = pattern.exec('2017-07-10');
// result[0] === '2017-07-10'
// result[1] === '2017'
// result[2] === '07'
// result[3] === '10'

But regular expressions are already notoriously difficult to read, write, and maintain, and numeric references can add further complications. For instance, in longer patterns it can be tricky to determine the index of a particular capture:

/(?:(.)(.(?<=[^(])(.)))/  // Index of the last capture?

And even worse, changes to a pattern can potentially shift the indices of all existing captures:

/(a)(b)(c)\3\2\1/     // A few simple numbered backreferences.
/(.)(a)(b)(c)\4\3\2/  // All need to be updated.

Named captures are an upcoming feature that helps mitigate these issues by allowing developers to assign names to captures. The syntax is similar to Perl, Java, .Net, and Ruby:

const pattern = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/u;
const result = pattern.exec('2017-07-10');
// result.groups.year === '2017'
// result.groups.month === '07'
// result.groups.day === '10'

Named captures can also be referenced by named backreferences and through String.prototype.replace:

// Named backreferences.
/(?<LowerCaseX>x)y\k<LowerCaseX>/.test('xyx');  // true

// String replacement.
const pattern = /(?<fst>a)(?<snd>b)/;
'ab'.replace(pattern, '$<snd>$<fst>');                              // 'ba'
'ab'.replace(pattern, (m, p1, p2, o, s, {fst, snd}) => fst + snd);  // 'ba'

Full details of this new feature are available in the specification proposal.

dotAll Flag

By default, the . atom in regular expressions matches any character except for line terminators:

/foo.bar/u.test('foo\nbar');   // false

A proposal introduces dotAll mode, enabled through the /s flag. In dotAll mode, . matches line terminators as well.

/foo.bar/su.test('foo\nbar');  // true

Full details of this new feature are available in the specification proposal.

Unicode Property Escapes

Regular expression syntax has always included shorthands for certain character classes. \d represents digits and is really just [0-9]; \w is short for word characters, or [A-Za-z0-9_].

With Unicode awareness introduced in ES2015, there are suddenly many more characters that could be considered numbers, for example the circled digit one: ①; or considered word characters, for example the Chinese character for snow: 雪.

Neither of these can be matched with \d or \w. Changing the meaning of these shorthands would break existing regular expression patterns.

Instead, new character classes are being introduced. Note that they are only available for Unicode-aware regular expressions denoted by the /u flag.

/\p{Number}/u.test('①');      // true
/\p{Alphabetic}/u.test('雪');  // true

The inverse can be matched with \P.

/\P{Number}/u.test('①');      // false
/\P{Alphabetic}/u.test('雪');  // false

The Unicode consortium defines many more ways to classify code points, for example math symbols or Japanese Hiragana characters:

/^\p{Math}+$/u.test('∛∞∉');                            // true
/^\p{Script_Extensions=Hiragana}+$/u.test('ひらがな');  // true

The full list of supported Unicode property classes can be found in the current specification proposal. For more examples, take a look at this informative article.

Lookbehind Assertions

Lookahead assertions have been part of JavaScript’s regular expression syntax from the start. Their counterpart, lookbehind assertions, are finally being introduced. Some of you may remember that this has been part of V8 for quite some time already. We even use lookbehind assertions under the hood to implement the Unicode flag specified in ES2015.

The name already describes its meaning pretty well: it offers a way to restrict a pattern so that it only matches if preceded by the pattern in the lookbehind group. It comes in both matching and non-matching flavors:

/(?<=\$)\d+/.exec('$1 is worth about ¥123');  // ['1']
/(?<!\$)\d+/.exec('$1 is worth about ¥123');  // ['123']

For more details, check out our previous blog post dedicated to lookbehind assertions, and examples in related V8 test cases.

Acknowledgements

This blog post wouldn’t be complete without mentioning some of the people that have worked hard to make this happen: especially language champions Mathias Bynens, Dan Ehrenberg, Claude Pache, Brian Terlson, Thomas Wood, Gorkem Yakin, and Irregexp guru Erik Corry; but also everyone else who has contributed to the language specification and V8’s implementation of these features.

We hope you’re as excited about these new regular expression features as we are!

What's New In DevTools (Chrome 61)

New features and major changes coming to DevTools in Chrome 61 include:

Note: You can check what version of Chrome you're running at chrome://version. Chrome auto-updates to a new major version about every 6 weeks.

Simulate low-end and mid-tier mobile devices in Device Mode

The Device Mode Throttling menu is now exposed by default, and it now lets you simulate a low-end or mid-tier mobile device with a couple of clicks.

The Throttling Menu
Figure 1. The Throttling Menu
Throttling Menu definitions
Figure 2. Hover over the Throttling menu or open the Capture Settings menu to see the definitions for Mid-tier mobile and Low-end mobile

View storage usage

The new Usage section in the Clear Storage tab of the Application panel shows you how much storage an origin is using, as well as the maximum quota for the entire device.

The Usage section
Figure 3. The Usage section shows that https://airhorner.com is using 66.9KB out of the device's quota of 15214MB
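
If you'd rather query these numbers from code, the Storage API's estimate() method (also available in Chrome 61) reports similar values. A minimal sketch; both numbers are approximate by design:

// Logs the current origin's approximate usage and quota in bytes.
navigator.storage.estimate().then(({ usage, quota }) => {
  console.log(`Using ${usage} of ~${quota} bytes.`);
});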

View when a service worker cached responses

The new Time Cached column in the Cache Storage tab shows you when a service worker cached responses.

The Time Cached column
Figure 4. The Time Cached column

Enable the FPS Meter from the Command Menu

You can now enable the FPS Meter from the Command Menu.

Enabling the FPS Meter from the Command Menu
Figure 5. Enabling the FPS Meter from the Command Menu

Set mousewheel behavior to zoom or scroll with Performance recordings

Open Settings and set the new Flamechart mouse wheel action setting to change how mousewheels behave on the Performance panel.

For example, when you use a mousewheel on the Main section of a recording, or when you swipe with two fingers on a trackpad, the default behavior is to zoom in or out. When you change the setting to Scroll, this gesture now scrolls up or down.

The 'Flamechart mouse wheel action' setting
Figure 6. The Flamechart mouse wheel action setting

Debugging support for ES6 Modules

ES6 Modules are shipping natively in Chrome 61. There's not much going on here with regard to DevTools, other than that debugging works as you'd expect. Try setting breakpoints in Paul Irish's ES6-Module implementation of TodoMVC and stepping through it to see for yourself.
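
If you want something smaller to experiment with, a minimal two-file module works just as well (file and function names here are only for illustration):

// lib.js
export function greet(name) {
  return `Hello, ${name}!`;
}

// main.js, loaded with <script type="module" src="main.js"></script>
import { greet } from './lib.js';
console.log(greet('DevTools'));  // Set a breakpoint here and step into greet().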

Feedback

The best place to discuss any of the features or changes you see here is the google-chrome-developer-tools@googlegroups.com mailing list. You can also tweet us at @ChromeDevTools if you're short on time.

That's all for what's new in DevTools in Chrome 61. See you in 6 weeks for Chrome 62!

New in Chrome 60

  • The Paint Timing API allows you to measure time to first paint and time to first contentful paint.
  • The font-display property allows you to control how fonts are rendered before they're fully downloaded.
  • WebAssembly has landed.
  • And there’s plenty more!

Note: Want the full list of changes? Check out the Chromium source repository change list

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 60!

Paint timings API

When a user navigates to a web page, they look for some visual feedback to reassure them that everything is working. With the new paint timings API, we can now measure that.

The API exposes two metrics:

  • Time to first paint - which marks the point when the browser renders anything that is visually different from the pre-navigation screen.
  • Time to first contentful paint - which marks the point when the browser renders the first bit of content from the DOM: text, an image, etc.
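
Both metrics show up as 'paint' entries in the Performance Timeline. Here's a minimal sketch using PerformanceObserver to log them:

// Logs first-paint and first-contentful-paint as they are recorded.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is 'first-paint' or 'first-contentful-paint'.
    console.log(`${entry.name}: ${Math.round(entry.startTime)}ms`);
  }
});
observer.observe({ entryTypes: ['paint'] });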

Check out Leveraging the Performance Metrics that Most Affect User Experience to learn how you can track these metrics and use them to improve your experience.

CSS font-display property

Web Fonts give you the ability to incorporate rich typography. But, if the user doesn’t already have the typeface, it needs to be downloaded, potentially making your site appear slow.

Thankfully, most browsers will use a fallback if the font takes too long to download. The new font-display property allows you to control how a downloadable font renders before it’s fully loaded.

  • auto uses whatever font display strategy the user-agent uses.
  • block gives the font face a short block period and an infinite swap period.
  • swap gives the font face a zero second block period and an infinite swap period.
  • fallback gives the font face an extremely small block period and a short swap period.
  • optional gives the font face an extremely small block period and a zero second swap period.

It’s supported in Chrome 60 and Opera, and is in development in Firefox. Check out Controlling Font Performance with font-display for more information.

WebAssembly

WebAssembly, or wasm, provides a new way to run code written in languages like C and C++ on the web, at near-native speed.

It provides the speed necessary to build an in-browser video editor or to run a Unity game at a high frame rate utilizing existing standards-based web platform APIs.

You can find more info at webassembly.org, including demos, docs and how to get started.
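
As a taste of the JavaScript side, here's a minimal sketch that loads and runs a module; module.wasm and its add export are hypothetical:

// Fetch, compile, and instantiate a module, then call one of its exports.
fetch('module.wasm')                          // hypothetical URL
  .then((response) => response.arrayBuffer())
  .then((bytes) => WebAssembly.instantiate(bytes))
  .then(({ instance }) => {
    console.log(instance.exports.add(2, 3));  // hypothetical export
  });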

And more!

  • The new Web Budget API enables sites with the Push Notification permission to send a limited number of push messages that trigger background work such as syncing data or dismissing notifications, without the need to show a user-visible notification (see the sketch after this list).
  • PushSubscription.expirationTime is now available, notifying sites when and if a subscription will expire.
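
The Budget API is still experimental, so treat the following as a hedged sketch of the draft surface rather than a stable recipe; reserve() asks whether enough background budget remains for one operation:

// Hedged sketch based on the Budget API draft; the surface may change.
navigator.budget.reserve('silent-push').then((reserved) => {
  if (reserved) {
    // Enough budget: handle the push silently, e.g. sync data.
  } else {
    // No budget left: show a user-visible notification instead.
  }
});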

Note: The Payment Request API was pushed to Chrome 61.

These are just a few of the changes in Chrome 60 for developers.

Subscribe to our YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 61 is released, I’ll be right here to tell you -- what’s new in Chrome!

Media Updates in Chrome 61

Background video track optimizations (MSE only)

To improve battery life, Chrome now disables video tracks when the video is played in the background if the video uses Media Source Extensions (MSE).

You can inspect these changes by going to the chrome://media-internals page and filtering for the "info" property. When the tab containing a playing video becomes inactive, you'll see a message like Selected video track: [], indicating that the video track has been disabled. When the tab becomes active again, the video track is re-enabled automatically.

Log panel in the chrome://media-internals page
Figure 1. Log panel in the chrome://media-internals page

For those who want to understand what is happening, here's a JavaScript code snippet that shows you what Chrome is roughly doing behind the scenes.

var video = document.querySelector('video');
var selectedVideoTrackIndex;

document.addEventListener('visibilitychange', function() {
  if (document.hidden) {
    // Disable video track when page is hidden.
    selectedVideoTrackIndex = video.videoTracks.selectedIndex;
    video.videoTracks[selectedVideoTrackIndex].selected = false;
  } else {
    // Re-enable video track when page is not hidden anymore.
    video.videoTracks[selectedVideoTrackIndex].selected = true;
  }
});

You may want to reduce the quality of the video stream when the video track is disabled. It can be as simple as using the Page Visibility API, as shown above, to detect when the page is hidden, as in the sketch below.
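
A rough sketch, where switchQuality() is a hypothetical helper in your player that starts appending segments from a different rendition to the SourceBuffer:

// switchQuality() is hypothetical; swap in your player's own logic.
document.addEventListener('visibilitychange', () => {
  switchQuality(document.hidden ? 'low' : 'high');
});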

And here are some restrictions:

  • This optimization only applies to videos with a keyframe distance < 5s.
  • If the video doesn't contain any audio tracks, the video will be automatically paused when played in the background.

Chromium Bug

Automatic video fullscreen when device is rotated

If you rotate a device to landscape while a video is playing in the viewport, playback will automatically switch to fullscreen mode. Rotating the device to portrait puts the video back to windowed mode.

Note that you can implement this behaviour manually yourself (see the Mobile Web Video Playback article).
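
As a rough sketch of the manual approach (see the article above for a production-ready version; fullscreen may still require a user gesture, and older Chrome needs the webkit-prefixed methods):

// Enter fullscreen on landscape, exit on portrait, while playing.
const video = document.querySelector('video');

screen.orientation.addEventListener('change', () => {
  const isLandscape = screen.orientation.type.startsWith('landscape');
  if (isLandscape && !video.paused) {
    (video.requestFullscreen || video.webkitRequestFullscreen).call(video);
  } else if (document.fullscreenElement || document.webkitFullscreenElement) {
    (document.exitFullscreen || document.webkitExitFullscreen).call(document);
  }
});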

Automatic video fullscreen when device is rotated

This magic behaviour only happens when:

  • device is an Android phone (not a tablet)
  • user's screen orientation is set to "Auto-rotate"
  • video size is at least 200x200px
  • video uses native controls
  • video is currently playing
  • at least 75% of the video is visible (on-screen)
  • orientation rotates by 90 degrees (not 180 degrees)
  • there is no fullscreen element yet
  • screen is not locked using the Screen Orientation API

Chromium Bug
