
Deprecations and removals in Chrome 63

In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and also capabilities of the Web Platform. This article describes some of the deprecations and removals in Chrome 63, which is in beta as of October 26. Visit the deprecations page for more deprecations and removals from this and previous versions of Chrome. This list is subject to change at any time.

JavaScript and APIs

Interface properties with a Promise type no longer throw exceptions

Interface properties and functions that return a promise have been inconsistent about whether error conditions throw exceptions or reject, which would invoke a promise's catch() block. The current version of the IDL spec calls for all promise-returning properties and functions to reject rather than throw an exception.

For example, previously, a call to MediaKeySession.closed would throw a TypeError for illegal invocation if called at the wrong time. With this change such calls must now implement a catch() block.

This change brings Chrome in line with the specification. The same change has already been made for functions.
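As a rough illustration of the difference (these stand-in functions are mine, not Chrome's actual implementation), compare a getter that throws with one that rejects:

```javascript
// Legacy behavior: a synchronous throw that try/catch can see.
const legacyClosed = () => {
  throw new TypeError('Illegal invocation');
};

// Spec-conformant behavior: the returned promise rejects instead,
// so the error must be handled with catch().
const conformantClosed = () =>
  Promise.reject(new TypeError('Illegal invocation'));

conformantClosed()
  .catch((error) => console.log(`caught via catch(): ${error.message}`));
// → logs 'caught via catch(): Illegal invocation'
```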

Chromestatus Tracker | Chromium Bug

Remove getMatchedCSSRules()

The getMatchedCSSRules() method is a WebKit-only API that returns a list of all the style rules applied to a particular element. WebKit has an open bug to remove it. For these reasons it is removed from Chrome in version 63. Developers who need this functionality can look at this Stack Overflow post.
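If you just need a rough equivalent, one commonly suggested approach (along the lines of the Stack Overflow discussion; sketched here with no claims of full fidelity) is to walk document.styleSheets and test each rule's selector with Element.matches():

```javascript
// Rough stand-in for the removed getMatchedCSSRules(). Ignores media
// queries, specificity ordering, and nested rules for simplicity.
function getMatchedCSSRules(element) {
  const matched = [];
  for (const sheet of document.styleSheets) {
    let rules;
    try {
      rules = sheet.cssRules;
    } catch (e) {
      continue; // Cross-origin style sheets throw on access.
    }
    for (const rule of rules) {
      if (rule.selectorText && element.matches(rule.selectorText)) {
        matched.push(rule);
      }
    }
  }
  return matched;
}
```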

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove RTCRtcpMuxPolicy of "negotiate"

The rtcpMuxPolicy is used by Chrome to specify its preferred policy regarding use of RTP/RTCP multiplexing. In Chrome 57, we changed the default rtcpMuxPolicy to "require" and deprecated "negotiate" for the following reasons:

  • Non-muxed RTCP uses extra network resources.
  • Removing "negotiate" will make the API surface simpler, since an "RtpSender"/"RtpReceiver" will then only ever have a single transport.

In Chrome 63, "negotiate" is removed.

Intent to Deprecate | Chromium Bug

Exceeding the buffering quota

If you're working with Media Source Extensions (MSE), one thing you will eventually need to deal with is an over-full buffer. When this occurs, you'll get what's called a QuotaExceededError. In this article, I'll cover some of the ways to deal with it.

What is the QuotaExceededError?

Basically, QuotaExceededError is what you get if you try to add too much data to your SourceBuffer object. (Adding more SourceBuffer objects to a parent MediaSource element can also throw this error. That's outside the scope of this article.) If SourceBuffer has too much data in it, calling SourceBuffer.appendBuffer() will trigger the following message in the Chrome console window.

[Image: the QuotaExceededError message in the Chrome console]

There are a few things to note about this. First, notice that the name QuotaExceededError appears nowhere in the message. To see that, set a breakpoint at a location where you can catch the error and examine it in your watch or scope window. I've shown this below.

[Image: examining the caught QuotaExceededError in the debugger's scope pane]

Second, there's no definitive way to find out how much data the SourceBuffer can handle.
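In practice, that means wrapping appendBuffer() in a try/catch and checking the error's name. Here's a minimal sketch; the stub object stands in for a real SourceBuffer, since the real quota varies by browser:

```javascript
// Returns a status instead of letting QuotaExceededError escape.
function tryAppend(buffer, data) {
  try {
    buffer.appendBuffer(data);
    return 'appended';
  } catch (e) {
    if (e.name !== 'QuotaExceededError') {
      throw e; // Unrelated errors should still surface.
    }
    return 'quota-exceeded'; // Caller can remove data or shrink the append.
  }
}

// Stub that always reports a full buffer, for illustration only.
const fullBuffer = {
  appendBuffer() {
    const err = new Error('The SourceBuffer is full.');
    err.name = 'QuotaExceededError';
    throw err;
  }
};

console.log(tryAppend(fullBuffer, new Uint8Array(1024)));
// → logs 'quota-exceeded'
```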

Behavior in other browsers

At the time of writing, Safari does not throw a QuotaExceededError in many of its builds. Instead it removes frames using a two-step algorithm, stopping when there is enough room to handle the appendBuffer(). First, it frees frames from between 0 and 30 seconds before the current time, in 30 second chunks. Next, it frees frames in 30 second chunks, working backwards from the duration to as close as 30 seconds after currentTime. You can read more about this in a WebKit changeset from 2014.

Fortunately, Chrome, Edge, and Firefox all throw this error. If you're using another browser, you'll need to do your own testing. Though probably not what you'd build for a real-life media player, François Beaufort's source buffer limit test at least lets you observe the behavior.

How much data can I append?

The exact number varies from browser to browser. Since you can't query for the amount of currently appended data, you'll have to keep track of how much you're appending yourself. As for what to watch for, here's the best data I can gather at the time of writing. For Chrome, these numbers are upper limits, meaning they can be smaller when the system encounters memory pressure.

        Chrome   Chromecast*   Firefox   Safari   Edge
Video   150MB    30MB          100MB     290MB    Unknown
Audio   12MB     2MB           15MB      14MB     Unknown

* Or other limited-memory Chrome devices.
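Since there's no API for querying the buffer's fill level, a simple running total is enough to know roughly where you stand against the limits above. A sketch (the class name is mine):

```javascript
// Tracks appended bytes manually, since SourceBuffer can't be queried for it.
class AppendTracker {
  constructor() {
    this.bytesAppended = 0;
  }
  record(chunk) {
    this.bytesAppended += chunk.byteLength;
  }
  recordRemoval(bytes) {
    this.bytesAppended -= bytes;
  }
}

const tracker = new AppendTracker();
tracker.record(new Uint8Array(1024));
tracker.record(new Uint8Array(2048));
console.log(tracker.bytesAppended);
// → logs 3072
```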

So what do I do?

Since the amount of supported data varies so widely, and since you can't query a SourceBuffer for how much data it holds, you must deal with the limit indirectly by handling the QuotaExceededError.

There are several approaches to dealing with QuotaExceededError; in practice, a combination of them works best. The general strategy is to base how much you fetch and attempt to append beyond HTMLMediaElement.currentTime on what the browser will accept, shrinking that size whenever you hit a QuotaExceededError. Using a manifest of some kind, such as an mpd file (MPEG-DASH) or an m3u8 file (HLS), can also help you keep track of the data you're appending to the buffer.

Let's look at several approaches to dealing with the QuotaExceededError:

  • Remove unneeded data and re-append.
  • Append smaller fragments.
  • Lower the playback resolution.

Though they can be used in combination, I'll cover them one at a time.

Remove unneeded data and re-append

Really this one should be called, "Remove least-likely-to-be-used-soon data, and then retry append of data likely-to-be-used-soon." That was too long of a title. You'll just need to remember what I really mean.

Removing data is not as simple as calling SourceBuffer.remove(). To remove data from the SourceBuffer, its updating flag must be false. If it is not, call SourceBuffer.abort() before removing any data.

There are a few things to keep in mind when calling SourceBuffer.remove().

  • This could have a negative impact on playback. For example, if you want the video to replay or loop soon, you may not want to remove the beginning of the video. Likewise, if you or the user seeks to a part of the video where you've removed data, you'll have to append that data again to satisfy that seek.
  • Remove as conservatively as you can, and beware of removing the currently playing group of frames, beginning at the keyframe at or before currentTime, because doing so can stall playback. If keyframe information isn't available in a manifest, the web app may need to parse it out of the bytestream. A media manifest, or app knowledge of the keyframe intervals in the media, can help guide your choice of removal ranges. Generally, don't remove anything beyond the current time unless you're certain the media is no longer needed; removing close to the playhead may cause a stall.
  • Safari 9 and Safari 10 do not correctly implement SourceBuffer.abort(). In fact, they throw errors that will halt playback. Fortunately, there are open bug trackers here and here. In the meantime, you'll have to work around this somehow. Shaka Player does it by stubbing out an empty abort() function on those versions of Safari.
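Putting those cautions together, the abort-before-remove dance might be sketched like this (freeBufferedRange is my own name, not a standard API):

```javascript
// Frees the buffered time range [start, end), aborting any in-flight
// append first because remove() requires updating to be false.
function freeBufferedRange(sourceBuffer, start, end) {
  if (sourceBuffer.updating) {
    // Note: abort() is broken on Safari 9/10; Shaka Player stubs it
    // out with an empty function on those versions.
    sourceBuffer.abort();
  }
  sourceBuffer.remove(start, end);
}
```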

Append smaller fragments

I've shown the procedure below. This may not work in every case, but it has the advantage that the size of the smaller chunks can be adjusted to suit your needs. It also doesn't require going back to the network which might incur additional data costs for some users.

// Start with the entire payload as a single piece.
const pieces = [data];

(function appendFragments(pieces) {
  if (pieces.length === 0 || sourceBuffer.updating) {
    return;
  }
  try {
    sourceBuffer.appendBuffer(pieces[0]);
    // Once this piece has been processed, append the rest.
    sourceBuffer.addEventListener('updateend', () => {
      appendFragments(pieces.slice(1));
    }, {once: true});
  } catch (e) {
    if (e.name !== 'QuotaExceededError') {
      throw e;
    }

    // Reduction schedule: 80%, 64%, 51%, 41%, ... of the original size,
    // failing once a piece would drop below 4% of the payload.
    const reduction = Math.floor(pieces[0].byteLength * 0.8);
    if (reduction / data.byteLength < 0.04) {
      throw new Error('MediaSource threw QuotaExceededError too many times');
    }
    // Split the first piece in two and try again.
    const newPieces = [
      pieces[0].slice(0, reduction),
      pieces[0].slice(reduction)
    ];
    appendFragments(newPieces.concat(pieces.slice(1)));
  }
})(pieces);

Lower the playback resolution

This is similar to removing recent data and re-appending. In fact, the two may be done together, though the example below only shows lowering the resolution.

There are a few things to keep in mind when using this technique:

  • You must append a new initialization segment. You must do this any time you change representations. The new initialization segment must be for the media segments that follow.
  • The presentation timestamp of the appended media should match the timestamp of the data in the buffer as closely as possible, but not jump ahead. Overlapping the buffered data may cause a stutter or brief stall, depending on the browser. Regardless of what you append, don't overlap the playhead as this will throw errors.
  • Seeking may interrupt playback. You may be tempted to seek to a specific location and resume playback from there. Be aware that this will cause playback interruption until the seek is completed.
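A rough sketch of the switch itself, assuming a manifest gives you the lower-resolution init and media segment URLs. The helpers fetchSegment, appendOnce, and switchToLowerResolution are hypothetical names of mine, not a standard API:

```javascript
// Resolves once a single append has been fully processed.
function appendOnce(sourceBuffer, data) {
  return new Promise((resolve, reject) => {
    sourceBuffer.addEventListener('updateend', resolve, {once: true});
    try {
      sourceBuffer.appendBuffer(data);
    } catch (e) {
      reject(e); // e.g. QuotaExceededError
    }
  });
}

// Fetches a segment as an ArrayBuffer.
async function fetchSegment(url) {
  const response = await fetch(url);
  return response.arrayBuffer();
}

// The new initialization segment must be appended before the media
// segments that follow it.
async function switchToLowerResolution(sourceBuffer, initUrl, mediaUrl) {
  await appendOnce(sourceBuffer, await fetchSegment(initUrl));
  await appendOnce(sourceBuffer, await fetchSegment(mediaUrl));
}
```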

Promise.prototype.finally

Promise.prototype.finally is enabled by default in V8 v6.3.165+ and Chrome 63+. It allows registering a callback to be invoked when a promise is settled (i.e. either fulfilled, or rejected).

Imagine you want to fetch some data to show on the page. Oh, and you want to show a loading spinner when the request starts, and hide it when the request completes. When something goes wrong, you show an error message instead.

const fetchAndDisplay = ({ url, element }) => {
  showLoadingSpinner();
  fetch(url)
    .then((response) => response.text())
    .then((text) => {
      element.textContent = text;
      hideLoadingSpinner();
    })
    .catch((error) => {
      element.textContent = error.message;
      hideLoadingSpinner();
    });
};

fetchAndDisplay({
  url: someUrl,
  element: document.querySelector('#output')
});

If the request succeeds, we display the data. If something goes wrong, we display an error message instead.

In either case we need to call hideLoadingSpinner(). Until now, we had no choice but to duplicate this call in both the then() and the catch() block. With Promise.prototype.finally, we can do better:

const fetchAndDisplay = ({ url, element }) => {
  showLoadingSpinner();
  fetch(url)
    .then((response) => response.text())
    .then((text) => {
      element.textContent = text;
    })
    .catch((error) => {
      element.textContent = error.message;
    })
    .finally(() => {
      hideLoadingSpinner();
    });
};

Not only does this reduce code duplication, it also separates the success/error handling phase and the cleanup phase more clearly. Neat!
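One detail worth knowing: the finally() callback receives no arguments, and the original settled value (or rejection reason) passes through it untouched:

```javascript
Promise.resolve(42)
  .finally(() => {
    // Runs for fulfillment and rejection alike; receives no arguments.
    console.log('cleanup');
  })
  .then((value) => {
    // The original value is preserved.
    console.log(value);
  });
// → logs 'cleanup', then 42
```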

The same thing is already possible today with async/await, without Promise.prototype.finally:

const fetchAndDisplay = async ({ url, element }) => {
  showLoadingSpinner();
  try {
    const response = await fetch(url);
    const text = await response.text();
    element.textContent = text;
  } catch (error) {
    element.textContent = error.message;
  } finally {
    hideLoadingSpinner();
  }
};

Since async and await are strictly better, my recommendation remains to use them instead of vanilla promises. That said, if you prefer vanilla promises for some reason, Promise.prototype.finally can help make your code simpler and cleaner.

Removing ::shadow and /deep/ in Chrome 63

Starting in Chrome 63, you cannot use the shadow-piercing selectors ::shadow and /deep/ to style content inside of a shadow root.

  • The /deep/ combinator will act as a descendant selector. x-foo /deep/ div will work like x-foo div.
  • The ::shadow pseudo-element will not match any elements.
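For example, given a hypothetical x-foo custom element with a shadow root:

```css
/* Before Chrome 63: styled <div>s inside x-foo's shadow root.
   From Chrome 63: behaves exactly like the descendant selector x-foo div. */
x-foo /deep/ div {
  border: 1px solid red;
}

/* From Chrome 63: matches no elements at all. */
x-foo::shadow div {
  border: 1px solid red;
}
```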

Note: If your site uses Polymer, the team has put together a thorough guide walking through steps to migrate off of ::shadow and /deep/.

The decision to remove

::shadow and /deep/ were deprecated in Chrome 45. This was decided by all of the participants at the April 2015 Web Components meetup.

The primary concern with shadow-piercing selectors is that they violate encapsulation and create situations where a component can no longer change its internal implementation.

Note: For the moment, ::shadow and /deep/ will continue to work with JavaScript APIs like querySelector() and querySelectorAll(). Ongoing support for these APIs is being discussed on GitHub.

The CSS Shadow Parts spec is being advanced as an alternative to shadow piercing selectors. Shadow Parts will allow a component author to expose named elements in a way that preserves encapsulation and still allows page authors the ability to style multiple properties at once.

What should I do if my site uses ::shadow and /deep/?

The ::shadow and /deep/ selectors only affect legacy Shadow DOM v0 components. If you're using Shadow DOM v1, you should not need to change anything on your site.

You can use Chrome Canary to verify your site does not break with these new changes. If you notice issues, try and remove any usage of ::shadow and /deep/. If it's too difficult to remove usage of these selectors, consider switching from native shadow DOM over to the shady DOM polyfill. You should only need to make this change if your site relies on native shadow DOM v0.

Using Trusted Web Activity

Note: Looking for the code? The support library and a sample using Trusted Web activity will be available soon, in the Android support library version 27.

There are many different ways to integrate web content on Android, each with its own unique benefits and drawbacks. Developers have frequently asked for a simple way to launch content fullscreen, like a WebView, but run using the user's latest preferred browser.

At the Chrome Developer Summit 2017 (October 2017), we announced a new technology called Trusted Web activities, now available in Chrome’s Canary channel. Trusted Web activities are a new way to integrate your web-app content, such as your PWA, with your Android app, using a protocol similar to Chrome Custom Tabs.

There are a few things that make Trusted Web activities different from other ways to integrate web content with your app:

  1. Content in a Trusted Web activity is trusted -- the app and the site it opens are expected to come from the same developer. (This is verified using Digital Asset Links.)
  2. Trusted Web activities come from the web: they’re rendered by the user’s browser, in exactly the same way as a user would see it in their browser except they are run fullscreen. Web content should be accessible and useful in the browser first.
  3. Browsers are also updated independent of Android and your app -- Chrome is available back to Android Jelly Bean. That saves on APK size and ensures you can use a modern web runtime. (Note that since Lollipop, WebView has also been updated independent of Android, but there are a significant number of pre-Lollipop Android users.)
  4. The host app doesn’t have direct access to web content in a Trusted Web activity or any other kind of web state, like cookies and localStorage. Nevertheless, you can coordinate with the web content by passing data to and from the page in URLs (e.g. through query parameters.)
  5. Transitions between web and native content are between activities. Each activity (i.e. screen) of your app is either completely provided by the web, or by an Android activity.
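The URL-based coordination from point 4 can be as simple as building the launch URL with query parameters. The URL and parameter names here are made up for illustration:

```javascript
// The host app can't read the web content's cookies or localStorage,
// so state is passed explicitly in the URL instead.
const launchUrl = new URL('https://example.com/checkout');
launchUrl.searchParams.set('utm_source', 'android-app');
launchUrl.searchParams.set('cart', 'abc123');

console.log(launchUrl.toString());
// → logs 'https://example.com/checkout?utm_source=android-app&cart=abc123'
```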

To make it easier to test, there are currently no qualifications for content opened in the preview of Trusted Web activities. You can expect, however, that Trusted Web activities will ultimately need to meet requirements similar to the improved Add to Home screen criteria, which are designed to be a baseline of interactivity and performance. You can audit your site for these requirements using the Lighthouse “user can be prompted to Add to Home screen” audit.

Today, if the user’s version of Chrome doesn’t support Trusted Web activities, we’ll fall back to a simple toolbar like the one you’d see in a Custom Tab. It is also possible for other browsers to implement the same protocol that Trusted Web activities use. While the host app has the final say on which browser gets opened, the best practice for Trusted Web activities today is the same as for Custom Tabs: use the user’s default browser, so long as that browser provides Custom Tabs.

We hope that you can experiment with this API and give us feedback at @ChromiumDev.

Take control of your scroll: customizing pull-to-refresh and overflow effects

TL;DR

The CSS overscroll-behavior property allows developers to override the browser's default overflow scroll behavior when reaching the top/bottom of content. Use cases include disabling the pull-to-refresh feature on mobile, removing overscroll glow and rubberbanding effects, and preventing page content from scrolling when it's beneath a modal/overlay.

overscroll-behavior requires Chrome 63+. It's in development or being considered by other browsers. See chromestatus.com for more information.

Background

Scroll boundaries and scroll chaining

Scroll chaining on Chrome Android.

Scrolling is one of the most fundamental ways to interact with a page, but certain UX patterns can be tricky to deal with because of the browser's quirky default behaviors. As an example, take an app drawer with a large number of items that the user may have to scroll through. When they reach the bottom, the overflow container stops scrolling because there's no more content to consume. In other words, the user reaches a "scroll boundary". But notice what happens if the user continues to scroll. The content behind the drawer starts scrolling! Scrolling is taken over by the parent container; the main page itself in the example.

Turns out this behavior is called scroll chaining; it's the browser's default behavior when scrolling content. Oftentimes the default is pretty nice, but sometimes it's not desirable or even unexpected. Certain apps may want to provide a different user experience when the user hits a scroll boundary.

The pull-to-refresh effect

Pull-to-refresh is an intuitive gesture popularized by mobile apps such as Facebook and Twitter. Pulling down on a social feed and releasing creates new space for more recent posts to be loaded. In fact, this particular UX has become so popular that mobile browsers like Chrome on Android have adopted the same effect. Swiping down at the top of the page refreshes the entire page:

Twitter's custom pull-to-refresh
when refreshing a feed in their PWA.
Chrome Android's native pull-to-refresh action
refreshes the entire page.

For situations like the Twitter PWA, it might make sense to disable the native pull-to-refresh action. Why? In this app, you probably don't want the user accidentally refreshing the page. There's also the potential to see a double refresh animation! Alternatively, it might be nicer to customize the browser's action, aligning it more closely with the site's branding. The unfortunate part is that this type of customization has been tricky to pull off. Developers end up writing unnecessary JavaScript, adding non-passive touch listeners (which block scrolling), or sticking the entire page in a 100vw/vh <div> (to prevent the page from overflowing). These workarounds have well-documented negative effects on scrolling performance.

We can do better!

Introducing overscroll-behavior

The overscroll-behavior property is a new CSS feature that controls the behavior of what happens when you over-scroll a container (including the page itself). You can use it to cancel scroll chaining, disable/customize the pull-to-refresh action, disable rubberbanding effects on iOS (when Safari implements overscroll-behavior), and more. The best part is that using overscroll-behavior does not adversely affect page performance like the hacks mentioned in the intro!

The property takes three possible values:

  1. auto - Default. Scrolls that originate on the element may propagate to ancestor elements.
  2. contain - prevents scroll chaining. Scrolls do not propagate to ancestors but local effects within the node are shown. For example, the overscroll glow effect on Android or the rubberbanding effect on iOS which notifies the user when they've hit a scroll boundary. Note: using overscroll-behavior: contain on the html element prevents overscroll navigation actions.
  3. none - same as contain but it also prevents overscroll effects within the node itself (e.g. Android overscroll glow or iOS rubberbanding).

Note: overscroll-behavior also has the longhand properties overscroll-behavior-x and overscroll-behavior-y, if you only want to define behavior for a particular axis.

Let's dive into some examples to see how to use overscroll-behavior.

Prevent scrolls from escaping a fixed position element

The chatbox scenario

Content beneath the chat window scrolls too :(

Consider a fixed-position chatbox that sits at the bottom of the page. The intention is that the chatbox is a self-contained component that scrolls separately from the content behind it. However, because of scroll chaining, the document starts scrolling as soon as the user hits the last message in the chat history.

For this app, it's more appropriate to have scrolls that originate within the chatbox stay within the chat. We can make that happen by adding overscroll-behavior: contain to the element that holds the chat messages:

#chat .msgs {
  overflow: auto;
  overscroll-behavior: contain;
  height: 300px;
}

Essentially, we're creating a logical separation between the chatbox's scrolling context and the main page. The end result is that the main page stays put when the user reaches the top/bottom of the chat history. Scrolls that start in the chatbox do not propagate out.

The page overlay scenario

Another variation of the "underscroll" scenario is when you see content scrolling behind a fixed position overlay. A dead giveaway that overscroll-behavior is in order! The browser is trying to be helpful, but it ends up making the site look buggy.

Example - modal with and without overscroll-behavior: contain:

Before: page content scrolls beneath overlay.
After: page content doesn't scroll beneath overlay.

Disabling pull-to-refresh

Turning off the pull-to-refresh action is a single line of CSS. Just prevent scroll chaining on the entire viewport-defining element. In most cases, that's <html> or <body>:

body {
  /* Disables pull-to-refresh but allows overscroll glow effects. */
  overscroll-behavior-y: contain;
}

With this simple addition, we fix the double pull-to-refresh animations in the demo and can instead implement a custom effect, which uses a neater loading animation. The entire inbox also blurs as it refreshes:

Before
After

Here's a snippet of the full code:

<style>
  body.refreshing #inbox {
    filter: blur(1px);
    touch-action: none; /* prevent scrolling */
  }
  body.refreshing .refresher {
    transform: translate3d(0,150%,0) scale(1);
    z-index: 1;
  }
  .refresher {
    --refresh-width: 55px;
    pointer-events: none;
    width: var(--refresh-width);
    height: var(--refresh-width);
    border-radius: 50%;
    position: absolute;
    transition: all 300ms cubic-bezier(0,0,0.2,1);
    will-change: transform, opacity;
    ...
  }
</style>

<div class="refresher">
  <div class="loading-bar"></div>
  <div class="loading-bar"></div>
  <div class="loading-bar"></div>
  <div class="loading-bar"></div>
</div>

<section id="inbox"><!-- msgs --></section>

<script>
  let _startY;
  const inbox = document.querySelector('#inbox');

  inbox.addEventListener('touchstart', e => {
    _startY = e.touches[0].pageY;
  }, {passive: true});

  inbox.addEventListener('touchmove', e => {
    const y = e.touches[0].pageY;
    // Activate custom pull-to-refresh effects when at the top of the container
    // and user is scrolling up.
    if (document.scrollingElement.scrollTop === 0 && y > _startY &&
        !document.body.classList.contains('refreshing')) {
      // refresh inbox.
    }
  }, {passive: true});
</script>

Disabling overscroll glow and rubberbanding effects

To disable the bounce effect when hitting a scroll boundary, use overscroll-behavior-y: none:

body {
  /* Disables pull-to-refresh and overscroll glow effect.
     Still keeps swipe navigations. */
  overscroll-behavior-y: none;
}
Before: hitting scroll boundary shows a glow.
After: glow disabled.

Note: This will still preserve left/right swipe navigations. To prevent navigations, you can use overscroll-behavior-x: none. However, this is still being implemented in Chrome.

Full demo

Putting it all together, the full chatbox demo uses overscroll-behavior to create a custom pull-to-refresh animation and to disable scrolls from escaping the chatbox widget. This provides an optimal user experience that would have been tricky to achieve without CSS overscroll-behavior.

View demo | Source


Dynamic import()

Note: Dynamic import() is available in Chrome 63 and Safari Technology Preview 24.

Dynamic import() introduces a new function-like form of import that unlocks new capabilities compared to static import. This article compares the two and gives an overview of what's new.

Static import (recap)

Back in September, Chrome 61 shipped with support for the ES2015 import statement within modules.

Consider the following module, located at ./utils.mjs:

// Default export
export default () => {
  console.log('Hi from the default export!');
};

// Named export `doStuff`
export const doStuff = () => {
  console.log('Doing stuff…');
};

Here's how to statically import and use the ./utils.mjs module:

<script type="module">
  import * as module from './utils.mjs';
  module.default();
  // → logs 'Hi from the default export!'
  module.doStuff();
  // → logs 'Doing stuff…'
</script>

Note: The previous example uses the .mjs extension to signal that it's a module rather than a regular script. On the web, file extensions don't really matter, as long as the files are served with the correct MIME type (e.g. text/javascript for JavaScript files) in the Content-Type HTTP header. The .mjs extension is especially useful on other platforms such as Node.js, where there's no concept of MIME types or other hooks such as type="module" to determine whether something is a module or a regular script. We're using the same extension here for consistency across platforms and to clearly make the distinction between modules and regular scripts.

This syntactic form for importing modules is a static declaration: it only accepts a string literal as the module specifier, and introduces bindings into the local scope via a pre-runtime “linking” process. The static import syntax can only be used at the top-level of the file. Static import enables important use cases such as static analysis, bundling tools, and tree-shaking.

In some cases, it's useful to:

  • import a module on-demand (or conditionally)
  • compute the module specifier at runtime
  • import a module from within a regular script (as opposed to a module)

None of those are possible with static import.

Dynamic import() 🔥

Dynamic import() introduces a new function-like form of import that caters to those use cases. import(moduleSpecifier) returns a promise for the module namespace object of the requested module, which is created after fetching, instantiating, and evaluating all of the module's dependencies, as well as the module itself.

Here's how to dynamically import and use the ./utils.mjs module:

<script type="module">
  const moduleSpecifier = './utils.mjs';
  import(moduleSpecifier)
    .then((module) => {
      module.default();
      // → logs 'Hi from the default export!'
      module.doStuff();
      // → logs 'Doing stuff…'
    });
</script>

Note: Although import() looks like a function call, it is specified as syntax that just happens to use parentheses (similar to super()). That means that import doesn't inherit from Function.prototype so you cannot call or apply it, and things like const importAlias = import don't work — heck, import is not even an object! This doesn't really matter in practice, though.

Here's an example of how dynamic import() enables lazy-loading modules upon navigation in a small single-page application:

<!DOCTYPE html>
<meta charset="utf-8">
<title>My library</title>
<nav>
  <a href="books.html" data-entry-module="books">Books</a>
  <a href="movies.html" data-entry-module="movies">Movies</a>
  <a href="video-games.html" data-entry-module="video-games">Video Games</a>
</nav>
<main>This is a placeholder for the content that will be loaded on-demand.</main>
<script>
  const main = document.querySelector('main');
  const links = document.querySelectorAll('nav > a');
  for (const link of links) {
    link.addEventListener('click', async (event) => {
      event.preventDefault();
      try {
        const module = await import(`/${link.dataset.entryModule}.mjs`);
        // The module exports a function named `loadPageInto`.
        module.loadPageInto(main);
      } catch (error) {
        main.textContent = error.message;
      }
    });
  }
</script>

The lazy-loading capabilities enabled by dynamic import() can be quite powerful when applied correctly. For demonstration purposes, Addy modified an example Hacker News PWA that statically imported all its dependencies, including comments, on first load. The updated version uses dynamic import() to lazily load the comments, avoiding the load, parse, and compile cost until the user really needs them.

Recommendations

Static import and dynamic import() are both useful. Each have their own, very distinct, use cases. Use static imports for initial paint dependencies, especially for above-the-fold content. In other cases, consider loading dependencies on-demand with dynamic import().

What's New In DevTools (Chrome 64)

Note: The video version of these release notes will be published around late-January 2018.

Welcome back! New features coming to DevTools in Chrome 64 include Local Overrides, a Performance Monitor, a Console Sidebar, and grouping of similar Console messages.

Note: Check what version of Chrome you're running at chrome://version. If you're running an earlier version, these features won't exist. If you're running a later version, these features may have changed. Chrome auto-updates to a new major version about every 6 weeks.

Local Overrides

Suppose you're using DevTools to change a site's HTML, CSS, or JavaScript. When you reload the page, the changes are lost. Local Overrides make it possible to persist the changes across page loads. To use Local Overrides:

  1. Open the Sources panel.
  2. Open the Overrides tab.

    Figure 1. The Overrides tab
  3. Click Setup Overrides.

  4. Select the directory where you want to save the changes.
  5. Click Allow to give DevTools write access to the directory. The Overrides tab now shows the mapping between the site and your local directory.

    Figure 2. The Overrides tab shows the mapping between the site developers.google.com and the local directory test
  6. Make some changes to the site's code via the Styles pane or the Sources panel. These changes now persist across page loads.

    Figure 3. Setting the color property of the selected h1 element to crimson
  7. Open the local directory that you mapped to the site. For every change you make, DevTools saves a copy of the modified file to this directory.

    Figure 4. Viewing the local copy of a modified file

Check out Paul Irish's talk from Chrome Dev Summit 2017 for a video example of this workflow.

Performance Monitor

Use the Performance Monitor to get a real-time view of various aspects of a page's performance, including:

  • CPU usage.
  • JavaScript heap size.
  • The total number of DOM nodes, JavaScript event listeners, documents, and frames on the page.
  • Layouts and style recalculations per second.

To use the Performance Monitor:

  1. Open the Command Menu.
  2. Start typing Performance then select Show Performance Monitor.

    Figure 5. The Performance Monitor
  3. Click a metric to show or hide it. In Figure 5 the CPU Usage, JS heap size, and JS event listeners charts are shown.

Console Sidebar

On large sites, the Console can quickly get flooded with irrelevant messages. Use the new Console Sidebar to reduce the noise and focus on the messages that are important to you.

Figure 6. Using the Console Sidebar to show error messages only

The Console Sidebar is hidden by default. Click Show Console Sidebar to show it.

Group similar Console messages

The Console now groups similar messages together by default. For example, in Figure 7 there are 27 instances of the message [Violation] Avoid using document.write().

Figure 7. An example of the Console grouping similar messages together

Click on a group to expand it and see each instance of the message.

Figure 8. An example of an expanded group of Console messages

Uncheck the Group Similar checkbox to disable this feature.

Feedback

The best place to discuss any of the features or changes you see here is the google-chrome-developer-tools@googlegroups.com mailing list. You can also tweet us at @ChromeDevTools if you're short on time. If you're sure that you've encountered a bug in DevTools, please open an issue.

Previous release notes

See the devtools-whatsnew tag for links to all previous DevTools release notes.


New in Chrome 63


And there’s plenty more!

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 63!

Note: Want the full list of changes? Check out the Chromium source repository change list.

Dynamic module imports

Importing JavaScript modules is super handy, but it's static: you can't import modules based on runtime conditions.

Thankfully, that changes in Chrome 63, with the new dynamic import syntax. It allows you to dynamically load code into modules and scripts at runtime. It can be used to lazy load a script only when it’s needed, improving the performance of your application.

button.addEventListener('click', event => {
  import('./dialogBox.js')
  .then(dialogBox => {
    dialogBox.open();
  })
  .catch(error => {
    /* Error handling */
  });
});

Instead of loading your whole application when the user first hits your page, you can grab the resources you need to sign in. Your initial load is small and screaming fast. Then once the user signs in, load the rest, and you’re good to go.

Async iterators and generators

Writing code that does any sort of iteration with async functions can be ugly. In fact, it’s the core part of my favorite interview coding question.

Now, with async generator functions and the async iteration protocol, consumption or implementation of streaming data sources becomes streamlined, and my coding question becomes much easier.

async function* getChunkSizes(url) {
  const response = await fetch(url);
  const b = response.body;
  // Note: `magic` is a placeholder for a helper that exposes the
  // ReadableStream `b` as an async iterable of chunks.
  for await (const chunk of magic(b)) {
    yield chunk.length;
  }
}

Async iterators can be used in for await...of loops and also to create your own custom async iterators through async iterator factories.
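To make the pattern concrete, here's a minimal, self-contained sketch of an async generator; the name chunkLengths and the inline data are invented for illustration, with resolved promises standing in for chunks arriving over the network.

```javascript
// A minimal async generator: it awaits each pending chunk, then yields
// its length to the consumer.
async function* chunkLengths(chunks) {
  for (const pending of chunks) {
    const chunk = await pending; // wait for the chunk to "arrive"
    yield chunk.length;          // hand its size to the consumer
  }
}

// Consume it with a for await...of loop.
(async () => {
  const source = [Promise.resolve('abc'), Promise.resolve('hello')];
  for await (const length of chunkLengths(source)) {
    console.log(length); // logs 3, then 5
  }
})();
```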

Over-scroll behavior

Scrolling is one of the most fundamental ways to interact with a page, but certain patterns can be tricky to deal with. For example, the browser's pull-to-refresh feature, where swiping down at the top of the page does a hard reload.

Entire page reloads
Custom refresh behavior

In some cases, you might want to override that behavior and provide your own experience. That's what Twitter's progressive web app does: when you pull down, instead of reloading the whole page, it simply adds any new tweets to the current view.

Chrome 63 now supports the CSS overscroll-behavior property, making it easy to override the browser's default overflow scroll behavior.

You can use it to:

The best part: overscroll-behavior doesn't have a negative effect on your page performance!
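As a sketch of how you might apply the property (the selector names are illustrative):

```css
/* Keep scrolling inside a custom container from triggering
   pull-to-refresh or scroll chaining on the page behind it. */
.chat-history {
  overflow-y: auto;
  overscroll-behavior-y: contain;
}

/* Or opt the whole page out of overscroll effects entirely. */
body {
  overscroll-behavior: none;
}
```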

Permission UI changes

I love web push notifications but I’ve been really frustrated by the number of sites asking for permission on page load, without any context - and I’m not alone.

90% of all permission requests are ignored or temporarily blocked.

In Chrome 59, we started to address this problem by temporarily blocking a permission if the user dismissed the request three times. Now in Chrome 63, Chrome for Android makes permission requests modal dialogs.

Remember, this isn’t just for push notifications, this is for all permission requests. If you ask permission at the appropriate time and in context, we’ve found that users are two and a half times more likely to grant permission!

And more!

These are just a few of the changes in Chrome 63 for developers; of course, there's plenty more.

  • finally is now available on Promise instances and is invoked after a Promise has been fulfilled or rejected.
  • The new Device Memory JavaScript API helps you understand performance constraints by giving you hints about the total amount of RAM on the user's device. You can tailor your experience at runtime, reducing complexity on lower end devices, providing users a better experience with fewer frustrations.
  • The Intl.PluralRules API allows you to build applications that understand pluralization of a given language by indicating which plural form applies for a given number, and language. And can help with ordinal numbers.
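Here's a quick sketch of two of these additions; the category strings shown for English ordinals are what the spec-defined CLDR rules produce.

```javascript
// Promise.prototype.finally runs whether the promise fulfills or rejects,
// which is handy for cleanup such as hiding a loading spinner.
Promise.reject(new Error('network down'))
  .catch((error) => console.log('recovered from:', error.message))
  .finally(() => console.log('cleanup runs either way'));

// Intl.PluralRules reports which plural (or ordinal) category applies
// to a number in a given language.
const ordinals = new Intl.PluralRules('en-US', { type: 'ordinal' });
console.log(ordinals.select(1)); // 'one'   -> "1st"
console.log(ordinals.select(2)); // 'two'   -> "2nd"
console.log(ordinals.select(3)); // 'few'   -> "3rd"
console.log(ordinals.select(4)); // 'other' -> "4th"
```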

Be sure to subscribe to our YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 64 is released, I’ll be right here to tell you -- what’s new in Chrome!

The Device Memory API


The range of capabilities of devices that can connect to the web is wider today than it's ever been before. The same web application that's served to a high-end desktop computer may also be served to a low-powered phone, watch, or tablet, and it can be incredibly challenging to create compelling experiences that work seamlessly on any device.

The Device Memory API is a new web platform feature aimed at helping web developers deal with the modern device landscape. It adds a read-only property to the navigator object, navigator.deviceMemory, which returns how much RAM the device has in gigabytes, rounded down to the nearest power of two. The API also features a Client Hints Header, Device-Memory, that reports the same value.

The Device Memory API gives developers the ability to do two primary things:

  • Make runtime decisions about what resources to serve based on the returned device memory value (e.g. serve a "lite" version of an app to users on low-memory devices).
  • Report this value to an analytics service so you can better understand how device memory correlates with user behavior, conversions, or other metrics important to your business.

Tailoring content dynamically based on device memory

If you're running your own web server and are able to modify the logic that serves resources, you can conditionally respond to requests that contain the Device-Memory Client Hints Header.

GET /main.js HTTP/1.1
Host: www.example.com
Device-Memory: 0.5
Accept: */*

With this technique you can create one or more versions of your application script(s) and respond to requests from the client conditionally based on the value set in the Device-Memory header. These versions don't need to contain completely different code (as that's harder to maintain). Most of the time the "lite" version will just exclude features that may be expensive and not critical to the user experience.

Using the Client Hints Header

To enable the Device-Memory header, either add the Client Hints <meta> tag to the <head> of your document:

<meta http-equiv="Accept-CH" content="Device-Memory">

Or include "Device-Memory" in your server's Accept-CH response headers:

HTTP/1.1 200 OK
Date: Thu Dec 07 2017 11:44:31 GMT
Content-Type: text/html
Accept-CH: Device-Memory, Downlink, Viewport-Width

This tells the browser to send the Device-Memory header with all sub-resource requests for the current page.

In other words, once you've implemented one of the options above for your website, if a user visits on a device with 0.5 GB of RAM, all image, CSS, and JavaScript requests from this page will include the Device-Memory header with the value set to "0.5", and your server can respond to such requests however you see fit.

For example, the following Express route serves a "lite" version of a script if the Device-Memory header is set and its value is less than 1, or it serves a "full" version if the browser doesn't support the Device-Memory header or the value is 1 or greater:

app.get('/static/js/:scriptId', (req, res) => {
  // Low-memory devices should load the "lite" version of the component.
  // The logic below will set `scriptVersion` to "lite" if (and only if)
  // `Device-Memory` isn't undefined and returns a number less than 1.
  const scriptVersion = req.get('Device-Memory') < 1 ? 'lite' : 'full';

  // Respond with the file based on `scriptVersion` above.
  res.sendFile(`./path/to/${req.params.scriptId}.${scriptVersion}.js`);
});

Using the JavaScript API

In some cases (like with a static file server or a CDN) you won't be able to conditionally respond to requests based on an HTTP header. In these cases you can use the JavaScript API to make conditional requests in your JavaScript code.

The following logic is similar to the Express route above, except it dynamically determines the script URL in the client-side logic:

// Low-memory devices should load the "lite" version of the component.
// The logic below will set `componentVersion` to "lite" if (and only if)
// deviceMemory isn't undefined and returns a number less than 1.
const componentVersion = navigator.deviceMemory < 1 ? 'lite' : 'full';

const component = await import(`path/to/component.${componentVersion}.js`);
component.init();

While conditionally serving different versions of the same component based on device capabilities is a good strategy, sometimes it can be even better to not serve a component at all.

In many cases, components are purely enhancements. They add some nice touches to the experience, but they aren't required for the app's core functionality. In these cases, it may be wise to not load such components in the first place. If a component intended to improve the user experience makes the app sluggish or unresponsive, it's not achieving its goal.

With any decision you make that affects the user experience, it's critical you measure its impact. It's also critical that you have a clear picture of how your app performs today.

Understanding how device memory correlates with user behavior for the current version of your app will better inform what action needs to be taken, and it'll give you a baseline against which you can measure the success of future changes.

Tracking device memory with analytics

The Device Memory API is new, and most analytics providers are not tracking it by default. Fortunately, most analytics providers give you a way to track custom data (for example, Google Analytics has a feature called Custom Dimensions), which you can use to track device memory for your users' devices.

Using a custom device memory dimension

Using custom dimensions in Google Analytics is a two-step process.

  1. Set up the custom dimension in Google Analytics.
  2. Update your tracking code to set the device memory value for the custom dimension you just created.

When creating the custom dimension, give it the name "Device Memory" and choose a scope of "session" since the value will not change during the course of a user's browsing session:

Creating a Device Memory custom dimension in Google Analytics

Next, update your tracking code. Here's an example of what it might look like. Note that for browsers that don't support the Device Memory API, the dimension value will be "(not set)".

// Create the tracker from your tracking ID.
// Replace "UA-XXXXX-Y" with your Google Analytics tracking ID.
ga('create', 'UA-XXXXX-Y', 'auto');

// Set the device memory value as a custom dimension on the tracker.
// This will ensure it gets sent with all future data to Google Analytics.
// Note: replace "XX" with the index of the custom dimension you created
// in the Google Analytics admin.
ga('set', 'dimensionXX', navigator.deviceMemory || '(not set)');

// Do any other other custom setup you want...

// Send the initial pageview.
ga('send', 'pageview');

Reporting on Device Memory data

Once the device memory dimension is set on the tracker object, all data you send to Google Analytics will include this value. This will allow you to break down any metric you want (e.g. page load times, goal completion rate, etc.) by device memory to see if there are any correlations.

Since device memory is a custom dimension rather than a built-in dimension, you won't see it in any of the standard reports. To access this data you'll have to create a custom report. For example, the configuration for a custom report that compares load times by device memory might look like this:

Creating a Device Memory custom report in Google Analytics

And the report it generates might look like this:

Device Memory report

Once you're collecting device memory data and have a baseline for how users are experiencing your application across all ranges of the memory spectrum, you can experiment with serving different resources to different users (using the techniques described in the section above). Afterwards you'll be able to look at the results and see if they've improved.

Wrapping up

This post outlines how to use the Device Memory API to tailor your application to the capabilities of your users' devices, and it shows how to measure how these users experience your app.

While this post focuses on the Device Memory API, most of the techniques described here could be applied to any API that reports device capabilities or network conditions.

As the device landscape continues to widen, it's more important than ever that web developers consider the entire spectrum of users when making decisions that affect their experience.

Audio/Video Updates in Chrome 63/64


Media Capabilities - Decoding Info API

Today, web developers rely on isTypeSupported() or canPlayType() to vaguely know if some media can be decoded or not. The real question, though, should be: “How well would it perform on this device?”

This is exactly one of the things the proposed Media Capabilities API wants to solve: a way to query the browser about the decoding abilities of the device based on information such as the codecs, profile, resolution, and bitrates. It exposes information such as whether the playback should be smooth and power efficient, based on previous playback statistics recorded by the browser.

In a nutshell, here’s how the Decoding Info API works for now. Check out the official sample.

const mediaConfig = {
  type: 'media-source', // or 'file'
  audio: {
    contentType: 'audio/webm; codecs=opus',
    channels: '2', // audio channels used by the track
    bitrate: 132266, // number of bits used to encode a second of audio
    samplerate: 48000 // number of samples of audio carried per second
  },
  video: {
    contentType: 'video/webm; codecs="vp09.00.10.08"',
    width: 1920,
    height: 1080,
    bitrate: 2646242, // number of bits used to encode a second of video
    framerate: '25' // number of frames used in one second
  }
};

navigator.mediaCapabilities.decodingInfo(mediaConfig).then(result => {
  console.log('This configuration is' +
      (result.supported ? '' : ' NOT') + ' supported,' +
      (result.smooth ? '' : ' NOT') + ' smooth and' +
      (result.powerEfficient ? '' : ' NOT') + ' power efficient.');
});

You can try different media configurations until you find the best one (smooth and powerEfficient) and use it to play the appropriate media stream. By the way, Chrome’s current implementation is based on previously recorded playback information. It defines smooth as true when the percentage of dropped frames is less than 10% while powerEfficient is true when more than 50% of frames are decoded by the hardware. Small frames are always considered power efficient.

Note: The result returned from navigator.mediaCapabilities.decodingInfo will always be reported as smooth and power-efficient if the media configuration is supported and playback stats have not been recorded yet by the browser.
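One way to act on these results is to probe an ordered list of configurations and take the first one reported as smooth and power efficient. The helper below is a sketch with an invented name; it takes the decodingInfo function as a parameter so it can be exercised outside the browser, where in a page you would pass (config) => navigator.mediaCapabilities.decodingInfo(config).

```javascript
// Walk the configurations (ordered best-first) and return the first one
// reported as both smooth and power efficient; otherwise fall back to
// the first merely-supported configuration, or null if none is supported.
async function pickBestConfig(configs, decodingInfo) {
  let firstSupported = null;
  for (const config of configs) {
    const result = await decodingInfo(config);
    if (!result.supported) continue;
    if (result.smooth && result.powerEfficient) return config;
    if (!firstSupported) firstSupported = config;
  }
  return firstSupported;
}
```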

I recommend using a snippet similar to the one below to detect availability and fallback to your current implementation for browsers that don’t support this API.

function isMediaConfigSupported(mediaConfig) {

  const promise = new Promise((resolve, reject) => {
    if (!('mediaCapabilities' in navigator)) {
      return reject('MediaCapabilities API not available');
    }
    if (!('decodingInfo' in navigator.mediaCapabilities)) {
      return reject('Decoding Info not available');
    }
    return resolve(navigator.mediaCapabilities.decodingInfo(mediaConfig));
  });

  return promise.catch(_ => {
    let fallbackResult = {
      supported: false,
      smooth: false, // always false
      powerEfficient: false // always false
    };
    if ('video' in mediaConfig) {
      fallbackResult.supported = MediaSource.isTypeSupported(mediaConfig.video.contentType);
      if (!fallbackResult.supported) {
        return fallbackResult;
      }
    }
    if ('audio' in mediaConfig) {
      fallbackResult.supported = MediaSource.isTypeSupported(mediaConfig.audio.contentType);
    }
    return fallbackResult;
  });
}

Caution: The snippet above must use canPlayType() instead of isTypeSupported() if the media configuration type is "file".

Available for Origin Trials

In order to get your valuable feedback, the Decoding Info API (part of Media Capabilities) is available as an Origin Trial in Chrome 64. You will need to request a token so that the feature is automatically enabled for your origin for a limited period of time, without the need to enable the experimental "Web Platform Features" flag at chrome://flags/#enable-experimental-web-platform-features.

Intent to Experiment | Chromestatus Tracker | Chromium Bug

HDR video playback on Windows 10

High Dynamic Range (HDR) videos have higher contrast, revealing precise, detailed shadows and stunning highlights with more clarity than ever. Moreover, support for wide color gamut means colors are more vibrant.

Figure 1. Simulated SDR vs HDR comparison (seeing true HDR requires an HDR display)

As VP9 Profile 2 10-bit playback is now supported in Chrome for Windows 10 Fall Creator Update, Chrome additionally supports HDR video playback when Windows 10 is in HDR mode. On a technical note, Chrome 64 now supports the scRGB color profile which in turn allows media to play back in HDR.

You can give it a try by watching The World in HDR in 4K (ULTRA HD) on YouTube and check that it plays HDR by looking at the YouTube player quality setting.

Figure 2. YouTube player quality setting featuring HDR

All you need for now is the Windows 10 Fall Creator Update, an HDR-compatible graphics card and display (e.g. an NVIDIA 10-series card and an LG HDR TV or monitor), and HDR mode turned on in Windows display settings.

Web developers can detect the approximate color gamut supported by the output device with the recent color-gamut media query and the number of bits used to display a color on the screen with screen.colorDepth. Here’s one way of using those to detect if VP9 HDR is supported for instance:

// Detect if display is in HDR mode and if browser supports VP9 HDR.
function canPlayVp9Hdr() {

  // TODO: Adjust VP9 codec string based on your video encoding properties.
  return (window.matchMedia('(color-gamut: p3)').matches &&
      screen.colorDepth >= 48 &&
      MediaSource.isTypeSupported('video/webm; codecs="vp09.02.10.10.01.09.16.09.01"'))
}

The VP9 codec string with Profile 2 passed to isTypeSupported() in the example above needs to be updated based on your video encoding properties.

For reference, the checks in the example above verify that:

  • The screen supports approximately the gamut specified by the DCI P3 color space or more.
  • The color depth is 48 bits or more.
  • The browser supports VP9 Profile 2, Level 1, 10-bit YUV content.

Note that it is not possible yet to define HDR colors in CSS, canvas, images and protected content. The Chrome team is working on it. Stay tuned!

Persistent licenses for Windows and Mac

Persistent license in Encrypted Media Extensions (EME) means the license can be persisted on the device so that applications can load the license into memory without sending another license request to the server. This is how offline playback is supported in EME.

Until now, Chrome OS and Android were the only platforms to support persistent licenses. That is no longer the case: playing protected content through EME while the device is offline is now possible in Chrome 64 on Windows and Mac as well.

const config = [{
  sessionTypes: ['persistent-license'],
  videoCapabilities: [{
    contentType: 'video/webm; codecs="vp9"',
    robustness: 'SW_SECURE_DECODE' // Widevine L3
  }]
}];

navigator.requestMediaKeySystemAccess('com.widevine.alpha', config)
.then(access => {
  // User will be able to watch encrypted content while being offline when
  // license is stored locally on device and loaded later.
})
.catch(error => {
  // Persistent licenses are not supported on this platform yet.
});

You can try persistent licenses yourself by checking out the Sample Media PWA and following these steps:

  1. Go to https://biograf-155113.appspot.com/ttt/episode-2/
  2. Click "Make available offline" and wait for the video to be downloaded.
  3. Turn off your internet connection.
  4. Click the "Play" button and enjoy the video!

Media preload defaults to "metadata"

Matching other browsers' implementations, Chrome desktop now sets the default preload value for <video> and <audio> elements to "metadata" in order to reduce bandwidth and resource usage. This new behavior only applies in Chrome 64 to cases where no preload value is set. Note that the preload attribute's hint is discarded when a MediaSource is attached to the media element, as the website handles its own preloading.

In other words, the <video> preload value is now "metadata", while the <video preload="auto"> preload value stays "auto". Give the official sample a try.
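In markup, the two cases look like this (the file name is illustrative):

```html
<!-- Chrome desktop now treats this as preload="metadata": only metadata
     (dimensions, duration, etc.) is fetched up front. -->
<video src="movie.webm" controls></video>

<!-- Explicitly opt back in to aggressive preloading for media the user
     is very likely to play. -->
<video src="movie.webm" controls preload="auto"></video>
```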

Intent to Ship | Chromestatus Tracker | Chromium Bug

Unsupported playbackRate raises an exception

Following an HTML specification change, when media elements' playbackRate is set to a value not supported by Chrome (e.g. a negative value), a "NotSupportedError" DOMException is thrown in Chrome 63.

const audio = document.querySelector('audio');
try {
  audio.playbackRate = -1;
} catch(error) {
  console.log(error.message); // Failed to set the playbackRate property
}

By the way, Chrome’s current implementation raises this exception when playbackRate is either negative, less than 0.0625, or more than 16. Give the official sample a try to see this in action.

Intent to Ship | Chromestatus Tracker | Chromium Bug

Background video track optimizations

The Chrome team is always trying to find new ways to improve battery life, and Chrome 63 was no exception.

If a video doesn't contain any audio tracks, it is now automatically paused when played in the background in Chrome desktop. This is a follow-up to a similar change in Chrome 62 that applied only to MSE videos.

Chromium Bug

Remove muting for extreme playbackRates

Before Chrome 64, sound was muted when playbackRate was below 0.5 or above 4, as the quality degraded significantly. As Chrome has switched to a Waveform-Similarity-Overlap-Add (WSOLA) approach to limit quality degradation, sound doesn't need to be muted anymore. It means you can play sound crazy slow and crazy fast now.

Chromium Bug

Lighthouse 2.6 Updates


Lighthouse 2.6 is out! Highlights include:

See the 2.6 release notes for the full list of new features, changes, and bug fixes.

How to update to 2.6

  • NPM. Run npm update lighthouse. Run npm update lighthouse -g if you installed Lighthouse globally.
  • Chrome Extension. The extension should automatically update, but you can manually update it via chrome://extensions.
  • DevTools. The Audits panel will be shipping with 2.6 in Chrome 65. You can check what version of Chrome you're running via chrome://version. Chrome updates to a new version about every 6 weeks. You can run the latest Chrome code by downloading Chrome Canary.

New performance audits

JavaScript boot-up time is high

View a breakdown of the time your page spends parsing, compiling, and executing each script. JavaScript boot-up time is a somewhat-hidden but important factor in page load time.

Figure 1. The JavaScript boot-up time is high audit

Uses inefficient cache policy on static assets

Make sure that the browser properly caches each of your resources.

Figure 2. The Uses inefficient cache policy on static assets audit

Avoids page redirects

Page redirects add an extra network roundtrip, or two if an extra DNS lookup is required. Minimize redirects in order to speed up page load time.

Figure 3. The Avoids page redirects audit

Overhaul of the accessibility section score

In Lighthouse 2.6, the aggregate accessibility score is calculated differently. The score weighs each accessibility audit based on the severity of its impact on user experience, as well as the frequency of the issue, based on the HTTP Archive dataset. See googlechrome/lighthouse/issues/3444 for an in-depth discussion.

Report UX Improvements

Note: These updates are in the Chrome Extension version of Lighthouse only. They are not yet in the Audits panel of DevTools.

Top-level errors

At the top of your report, Lighthouse alerts you to errors that may have affected your page's scores.

Figure 4. Top-level errors at the top of a report

Click Export Report, then select Print Summary or Print Expanded to print out summarized or detailed versions of your reports.

Figure 5. Print summary and expanded views

Aspect ratio bug fix

2.6 also fixes a bug that caused the Displays images with correct aspect ratio audit to fail even when there were no images on the page, or all images were properly sized.

Enter AudioWorklet


Chrome 64 comes with a highly anticipated new feature in the Web Audio API: AudioWorklet. This article introduces its concept and usage for those who are eager to create a custom audio processor with JavaScript code. Please take a look at the live demos on GitHub or the instructions on how to use this experimental feature in Chrome 64.

Concepts

AudioWorklet nicely keeps the user-supplied JavaScript code all within the audio processing thread — that is, it doesn’t have to jump over to the main thread to process audio. This means the user-supplied script code gets to run on the audio rendering thread (AudioWorkletGlobalScope) along with other built-in AudioNodes, which ensures zero additional latency and synchronous rendering.

Registration and Instantiation

Using AudioWorklet consists of two parts: AudioWorkletProcessor and AudioWorkletNode. This is more involved than using ScriptProcessorNode, but it is needed to give developers the low-level capability for custom audio processing. AudioWorkletProcessor represents the actual audio processor written in JavaScript code, and it lives in the AudioWorkletGlobalScope. AudioWorkletNode is the counterpart of AudioWorkletProcessor and takes care of the connection to and from other AudioNodes in the main thread. It is exposed in the main global scope and functions like a regular AudioNode.

Here's a pair of code snippets that demonstrate the registration and the instantiation.

// The code in the main global scope.
class MyWorkletNode extends AudioWorkletNode {
  constructor(context) {
    super(context, 'my-worklet-processor');
  }
}

let context = new AudioContext();

context.audioWorklet.addModule('processors.js').then(() => {
  let node = new MyWorkletNode(context);
});

Creating an AudioWorkletNode requires at least two things: an AudioContext object and the processor name as a string. A processor definition can be loaded and registered by the new AudioWorklet object's addModule() call. Worklet APIs, including AudioWorklet, are only available in a secure context, thus a page using them must be served over HTTPS, although http://localhost is considered secure for local testing.

It is also worth noting that you can subclass AudioWorkletNode to define a custom node backed by the processor running on the worklet.

// This is the "processors.js" file, evaluated in AudioWorkletGlobalScope upon
// the audioWorklet.addModule() call in the main global scope.
class MyWorkletProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
  }

  process(inputs, outputs, parameters) {
    // audio processing code here.
  }
}

registerProcessor('my-worklet-processor', MyWorkletProcessor);

The registerProcessor() method in the AudioWorkletGlobalScope takes a string for the name of the processor to be registered and the class definition. After the completion of script code evaluation in the global scope, the promise from AudioWorklet.addModule() will be resolved, notifying users that the class definition is ready to be used in the main global scope.

Custom AudioParam

One of the useful things about AudioNodes is schedulable parameter automation with AudioParams. AudioWorkletNodes can use these too, exposing parameters that can be automatically controlled at audio rate.

User-defined AudioParams can be declared in an AudioWorkletProcessor class definition by setting up a set of AudioParamDescriptors. The underlying WebAudio engine will pick up this information upon the construction of an AudioWorkletNode, and will then create and link AudioParam objects to the node accordingly.

/* A separate script file, like "my-worklet-processor.js" */
class MyWorkletProcessor extends AudioWorkletProcessor {

  // Static getter to define AudioParam objects in this custom processor.
  static get parameterDescriptors() {
    return [{
      name: 'myParam',
      defaultValue: 0.707
    }];
  }

  constructor() { super(); }

  process(inputs, outputs, parameters) {
    // |myParamValues| is a Float32Array of 128 audio samples calculated
    // by the WebAudio engine from regular AudioParam operations (automation
    // methods, setter). By default this array would be all values of 0.707.
    let myParamValues = parameters.myParam;
  }
}

AudioWorkletProcessor.process() method

The actual audio processing happens in the process() callback method of the AudioWorkletProcessor, and it must be implemented by the user in the class definition. The WebAudio engine invokes this function in an isochronous fashion to feed inputs and parameters and to fetch outputs.

/* AudioWorkletProcessor.process() method */
process(inputs, outputs, parameters) {
  // The processor may have multiple inputs and outputs. Get the first input and
  // output.
  let input = inputs[0];
  let output = outputs[0];

  // Each input or output may have multiple channels. Get the first channel.
  let inputChannel0 = input[0];
  let outputChannel0 = output[0];

  // Get the parameter value array.
  let myParamValues = parameters.myParam;

  // Simple gain (multiplication) processing over a render quantum (128 samples).
  // This processor only supports the mono channel.
  for (let i = 0; i < inputChannel0.length; ++i) {
    outputChannel0[i] = inputChannel0[i] * myParamValues[i];
  }

  // To keep this processor alive.
  return true;
}

Additionally, the return value of the process() method can be used to control the lifetime of the AudioWorkletNode so that developers can manage the memory footprint. Returning false from the process() method marks the processor as inactive, and the WebAudio engine no longer invokes the method. To keep the processor alive, the method must return true. Otherwise, the node/processor pair is eventually garbage collected by the system.
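
As a plain-JavaScript sketch of this lifetime rule (the SilenceGate class, its method, and the threshold are hypothetical; in a real processor this logic would live inside process()), a processor might return false after a few consecutive silent render quanta:

```javascript
// Hypothetical sketch: keep a processor alive while there is signal, and
// return false (allowing eventual garbage collection) after
// `maxSilentQuanta` consecutive silent render quanta.
class SilenceGate {
  constructor(maxSilentQuanta) {
    this.maxSilentQuanta = maxSilentQuanta;
    this.silentQuanta = 0;
  }

  // Mirrors the boolean contract of AudioWorkletProcessor.process():
  // true keeps the node/processor pair alive, false marks it inactive.
  process(channel) {
    const isSilent = channel.every((sample) => sample === 0);
    this.silentQuanta = isSilent ? this.silentQuanta + 1 : 0;
    return this.silentQuanta < this.maxSilentQuanta;
  }
}
```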

Bi-directional Communication with MessagePort

Sometimes custom AudioWorkletNodes will want to expose controls that do not map to AudioParam. For example, a string-based type attribute could be used to control a custom filter. For this purpose and beyond, AudioWorkletNode and AudioWorkletProcessor are equipped with a MessagePort for bi-directional communication. Any kind of custom data can be exchanged through this channel.

MessagePort can be accessed via .port attribute on both the node and the processor. The node's port.postMessage() method sends a message to the associated processor's port.onmessage handler and vice versa.

/* The code in the main global scope. */
context.audioWorklet.addModule('processors.js').then(() => {
  let node = new AudioWorkletNode(context, 'port-processor');
  node.port.onmessage = (event) => {
    // Handling data from the processor.
    console.log(event.data);
  };

  node.port.postMessage('Hello!');
});
/* "processor.js" file. */
class PortProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.port.onmessage = (event) => {
      // Handling data from the node.
      console.log(event.data);
    };

    this.port.postMessage('Hi!');
  }

  process(inputs, outputs, parameters) {
    // Do nothing, producing silent output.
    return true;
  }
}

registerProcessor('port-processor', PortProcessor);

Also note that MessagePort supports Transferable, which allows you to transfer data storage or a WASM module over the thread boundary. This opens up countless possibilities for how the AudioWorklet system can be utilized.

Walkthrough: building a GainNode

Putting everything together, here's a complete example of GainNode built on top of AudioWorkletNode and AudioWorkletProcessor.

index.html

<!doctype html>
<html>
<script>
  const context = new AudioContext();

  // Loads module script via AudioWorklet.
  context.audioWorklet.addModule('gain-processor.js').then(() => {
    let oscillator = new OscillatorNode(context);

    // After the resolution of module loading, an AudioWorkletNode can be
    // constructed.
    let gainWorkletNode = new AudioWorkletNode(context, 'gain-processor');

    // AudioWorkletNode can be interoperable with other native AudioNodes.
    oscillator.connect(gainWorkletNode).connect(context.destination);
    oscillator.start();
  });
</script>
</html>

gain-processor.js

class GainProcessor extends AudioWorkletProcessor {

  // Custom AudioParams can be defined with this static getter.
  static get parameterDescriptors() {
    return [{ name: 'gain', defaultValue: 1 }];
  }

  constructor() {
    // The super constructor call is required.
    super();
  }

  process(inputs, outputs, parameters) {
    let input = inputs[0];
    let output = outputs[0];
    let gain = parameters.gain;
    for (let channel = 0; channel < input.length; ++channel) {
      let inputChannel = input[channel];
      let outputChannel = output[channel];
      for (let i = 0; i < inputChannel.length; ++i)
        outputChannel[i] = inputChannel[i] * gain[i];
    }

    return true;
  }
}

registerProcessor('gain-processor', GainProcessor);

This covers the fundamentals of the AudioWorklet system. Live demos are available at the Chrome WebAudio team's GitHub repository.

Experimental Usage and Origin Trials

AudioWorklet is available on Chrome 64 (currently beta) behind the experimental flag. You can activate the feature with the following command line option:

--enable-blink-features=Worklet,AudioWorklet

Alternatively, you can go to chrome://flags, enable "Experimental Web Platform Features", and relaunch the browser. Note that this enables the entire set of experimental features in the browser. Along with the experimental release, we have added this feature in Chrome 64 as an Origin Trial for all platforms. With Origin Trials, you can deploy code using AudioWorklet to users running Chrome 64 and get feedback from them. To participate in this trial, please use the signup form.

Chrome 64 to deprecate the chrome.loadTimes() API


chrome.loadTimes() is a non-standard API that exposes loading metrics and network information to developers in order to help them better understand their site's performance in the real world.

Since this API was implemented in 2009, all of the useful information it reports has become available through standardized APIs such as Navigation Timing 2 and Paint Timing.

These standardized APIs are being implemented by multiple browser vendors. As a result, chrome.loadTimes() is being deprecated in Chrome 64.

The deprecated API

The chrome.loadTimes() function returns a single object containing all of its loading and network information. For example, the following object is the result of calling chrome.loadTimes() on www.google.com:

{
  "requestTime": 1513186741.847,
  "startLoadTime": 1513186741.847,
  "commitLoadTime": 1513186742.637,
  "finishDocumentLoadTime": 1513186742.842,
  "finishLoadTime": 1513186743.582,
  "firstPaintTime": 1513186742.829,
  "firstPaintAfterLoadTime": 0,
  "navigationType": "Reload",
  "wasFetchedViaSpdy": true,
  "wasNpnNegotiated": true,
  "npnNegotiatedProtocol": "h2",
  "wasAlternateProtocolAvailable": false,
  "connectionInfo": "h2"
}

Standardized replacements

You can now find each of the above values using standardized APIs. The following table matches each value to its standardized API, and the sections below show code examples of how to get each value in the old API with modern equivalents.

chrome.loadTimes() feature     Standardized API replacement
requestTime                    Navigation Timing 2
startLoadTime                  Navigation Timing 2
commitLoadTime                 Navigation Timing 2
finishDocumentLoadTime         Navigation Timing 2
finishLoadTime                 Navigation Timing 2
firstPaintTime                 Paint Timing
firstPaintAfterLoadTime        N/A
navigationType                 Navigation Timing 2
wasFetchedViaSpdy              Navigation Timing 2
wasNpnNegotiated               Navigation Timing 2
npnNegotiatedProtocol          Navigation Timing 2
wasAlternateProtocolAvailable  N/A
connectionInfo                 Navigation Timing 2

The code examples below return values equivalent to those returned by chrome.loadTimes(). However, these examples are not recommended for new code, because chrome.loadTimes() reports time values as epoch time in seconds, whereas the new performance APIs typically report values in milliseconds relative to the page's time origin, which tends to be more useful for performance analysis.
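
The difference between the two conventions can be bridged with a pair of small converters (the function names here are hypothetical), given the page's timeOrigin in milliseconds:

```javascript
// Convert a chrome.loadTimes()-style epoch-seconds value to a
// milliseconds-relative-to-timeOrigin value, as used by modern
// performance APIs, and back again.
function epochSecondsToRelativeMs(epochSeconds, timeOriginMs) {
  return epochSeconds * 1000 - timeOriginMs;
}

function relativeMsToEpochSeconds(relativeMs, timeOriginMs) {
  return (relativeMs + timeOriginMs) / 1000;
}
```

For instance, with the sample object shown earlier, commitLoadTime (1513186742.637) sits 790 ms after requestTime (1513186741.847), which is what a Navigation Timing 2 responseStart relative to that timeOrigin would report.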

Several of the examples also favor Performance Timeline 2 APIs (e.g. performance.getEntriesByType()) but provide fallbacks for the older Navigation Timing 1 API as it has wider browser support. Going forward, Performance Timeline APIs are preferred and are typically reported with higher precision.

requestTime

function requestTime() {
  // If the browser supports the Navigation Timing 2 and HR Time APIs, use
  // them, otherwise fall back to the Navigation Timing 1 API.
  if (window.PerformanceNavigationTiming && performance.timeOrigin) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return (ntEntry.startTime + performance.timeOrigin) / 1000;
  } else {
    return performance.timing.navigationStart / 1000;
  }
}

startLoadTime

function startLoadTime() {
  // If the browser supports the Navigation Timing 2 and HR Time APIs, use
  // them, otherwise fall back to the Navigation Timing 1 API.
  if (window.PerformanceNavigationTiming && performance.timeOrigin) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return (ntEntry.startTime + performance.timeOrigin) / 1000;
  } else {
    return performance.timing.navigationStart / 1000;
  }
}

commitLoadTime

function commitLoadTime() {
  // If the browser supports the Navigation Timing 2 and HR Time APIs, use
  // them, otherwise fall back to the Navigation Timing 1 API.
  if (window.PerformanceNavigationTiming && performance.timeOrigin) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return (ntEntry.responseStart + performance.timeOrigin) / 1000;
  } else {
    return performance.timing.responseStart / 1000;
  }
}

finishDocumentLoadTime

function finishDocumentLoadTime() {
  // If the browser supports the Navigation Timing 2 and HR Time APIs, use
  // them, otherwise fall back to the Navigation Timing 1 API.
  if (window.PerformanceNavigationTiming && performance.timeOrigin) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return (ntEntry.domContentLoadedEventEnd + performance.timeOrigin) / 1000;
  } else {
    return performance.timing.domContentLoadedEventEnd / 1000;
  }
}

finishLoadTime

function finishLoadTime() {
  // If the browser supports the Navigation Timing 2 and HR Time APIs, use
  // them, otherwise fall back to the Navigation Timing 1 API.
  if (window.PerformanceNavigationTiming && performance.timeOrigin) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return (ntEntry.loadEventEnd + performance.timeOrigin) / 1000;
  } else {
    return performance.timing.loadEventEnd / 1000;
  }
}

firstPaintTime

function firstPaintTime() {
  if (window.PerformancePaintTiming) {
    const fpEntry = performance.getEntriesByType('paint')[0];
    return (fpEntry.startTime + performance.timeOrigin) / 1000;
  }
}

firstPaintAfterLoadTime

function firstPaintAfterLoadTime() {
  // This was never actually implemented and always returns 0.
  return 0;
}

navigationType

function navigationType() {
  if (window.PerformanceNavigationTiming) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return ntEntry.type;
  }
}

wasFetchedViaSpdy

function wasFetchedViaSpdy() {
  // SPDY is deprecated in favor of HTTP/2, but this implementation returns
  // true for HTTP/2 or HTTP2+QUIC/39 as well.
  if (window.PerformanceNavigationTiming) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return ['h2', 'hq'].includes(ntEntry.nextHopProtocol);
  }
}

wasNpnNegotiated

function wasNpnNegotiated() {
  // NPN is deprecated in favor of ALPN, but this implementation returns true
  // for HTTP/2 or HTTP2+QUIC/39 requests negotiated via ALPN.
  if (window.PerformanceNavigationTiming) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return ['h2', 'hq'].includes(ntEntry.nextHopProtocol);
  }
}

npnNegotiatedProtocol

function npnNegotiatedProtocol() {
  // NPN is deprecated in favor of ALPN, but this implementation returns the
  // protocol (HTTP/2 or HTTP2+QUIC/39) negotiated via ALPN.
  if (window.PerformanceNavigationTiming) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return ['h2', 'hq'].includes(ntEntry.nextHopProtocol) ?
        ntEntry.nextHopProtocol : 'unknown';
  }
}
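
The three ALPN-related shims above all share the same protocol test. Factored out as a small predicate (the helper name is hypothetical), it reduces to:

```javascript
// True for the next-hop protocols that chrome.loadTimes() reported as
// "SPDY": 'h2' (HTTP/2) and 'hq' (HTTP2+QUIC/39), both negotiated via ALPN.
function isAlpnMultiplexedProtocol(nextHopProtocol) {
  return ['h2', 'hq'].includes(nextHopProtocol);
}
```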

wasAlternateProtocolAvailable

function wasAlternateProtocolAvailable() {
  // The Alternate-Protocol header is deprecated in favor of Alt-Svc
  // (https://www.mnot.net/blog/2016/03/09/alt-svc), so technically this
  // should always return false.
  return false;
}

connectionInfo

function connectionInfo() {
  if (window.PerformanceNavigationTiming) {
    const ntEntry = performance.getEntriesByType('navigation')[0];
    return ntEntry.nextHopProtocol;
  }
}

Removal plan

The chrome.loadTimes() API will be deprecated in Chrome 64 and is targeted to be removed in late 2018. Developers should migrate their code as soon as possible to avoid any loss in data.

Intent to Deprecate | Chromestatus Tracker | Chromium Bug

Deprecations and removals in Chrome 64


In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and also capabilities of the Web Platform. This article describes some of the deprecations and removals in Chrome 64, which is in beta as of December 14.

To see all deprecations and removals for this and previous versions of Chrome, visit the deprecations page. This list is subject to change at any time.

Remove support for multiple shadow roots

Shadow DOM version 0 allowed multiple shadow roots. At a standards meeting in April 2015 it was decided that this feature should not be part of version 1. Support was deprecated shortly thereafter in Chrome 45. In Chrome 64 support is now removed.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove getMatchedCSSRules()

The getMatchedCSSRules() method is a non-standard, WebKit-only API that retrieves a list of style rules applied to a particular element. This has been deprecated since 2014. It's now being removed because it's not on a standards track.

Since there is currently no standards-based alternative, developers would need to create their own. There is at least one example on StackOverflow.
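
A hedged sketch of the usual replacement approach is to walk the style rules and test each selector against the element. Here the rule list and the matcher are injected so the core logic is environment-independent; the function name is hypothetical:

```javascript
// Sketch of a getMatchedCSSRules() stand-in. In the browser, `styleRules`
// would be gathered from document.styleSheets[*].cssRules, and `matches`
// would be (selector) => element.matches(selector).
function getMatchedRules(styleRules, matches) {
  return styleRules.filter((rule) => {
    try {
      return matches(rule.selectorText);
    } catch (e) {
      // Ignore selectors the engine cannot parse.
      return false;
    }
  });
}
```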

Intent to Remove | Chromestatus Tracker | Chromium Bug

<<../../_deprecation-policy.md>>


Chrome User Experience Report: expanding to top 1 Million+ origins


Today, we're happy to announce a new Chrome User Experience Report (CrUX) with expanded coverage of over 1 million top origins on the web. Originally announced at the Chrome Developer Summit 2017, the report is a public dataset of key user experience metrics for popular sites on the web.

All data included in the new dataset reflects real-user measurement captured during the month of November 2017. CrUX performance data is based on real-world measurement, as experienced by Chrome users across a diverse set of hardware and network conditions around the world. Moving forward, we will release a new report monthly to provide insight into trends and user experience changes on the web.

A key goal of CrUX is to enable macro-level analysis of real-world user experience trends on the web, expanding the scope of performance analysis beyond an individual page or website. It has been exciting to see the community begin to experiment with this data.

For details on the dataset format, how to access it, and best practices for analysis, please see our developer documentation, and join the discussion if you have questions or feedback. We're excited to see what you'll build with the expanded dataset!

Preloading modules


Browsers are finally starting to natively support JavaScript modules, both with static and dynamic import support. This means it's now possible to write module-based JavaScript that runs natively in the browser, without transpilers or bundlers.

Module-based development offers some real advantages in terms of cacheability, helping you reduce the number of bytes you need to ship to your users. The finer granularity of the code also helps with the loading story, by letting you prioritise the critical code in your application.

However, module dependencies introduce a loading problem, in that the browser needs to wait for a module to load before it finds out what its dependencies are. One way around this is by preloading the dependencies, so that the browser knows about all the files ahead of time and can keep the connection busy.

Until now, there wasn't really a good way of declaratively preloading modules. Chrome 64 ships with <link rel="modulepreload"> behind the "Experimental Web Platform Features" flag. <link rel="modulepreload"> is a module-specific version of <link rel="preload"> that solves a number of the latter's problems.

Warning: It's still very much early days for modules in the browser, so while we encourage experimentation, we advise caution when using this technology in production for now!

Chrome added support for <link rel="preload"> back in version 50, as a way of declaratively requesting resources ahead of time, before the browser needs them.

<head>
  <link rel="preload" as="style" href="critical-styles.css">
  <link rel="preload" as="font" crossorigin type="font/woff2" href="myfont.woff2">
</head>

This works particularly well with resources such as fonts, which are often hidden inside CSS files, sometimes several levels deep. In that situation, the browser would have to wait for multiple roundtrips before finding out that it needs to fetch a large font file, when it could have used that time to start the download and take advantage of the full connection bandwidth.

<link rel="preload"> and its HTTP header equivalent provide a simple, declarative way of letting the browser know straight away about critical files that will be needed as part of the current navigation. When the browser sees the preload, it starts a high priority download for the resource, so that by the time it's actually needed it's either already fetched or partly there.

Preloading JavaScript modules, however, is where things get tricky. There are several credentials modes for resources, and in order to get a cache hit they must match; otherwise you end up fetching the resource twice. Needless to say, double-fetching is bad, because it wastes the user's bandwidth and makes them wait longer, for no good reason.

For <script> and <link> tags, you can set the credentials mode with the crossorigin attribute. However, it turns out that a <script type="module"> with no crossorigin attribute indicates a credentials mode of omit, which doesn't exist for <link rel="preload">. This means that you would have to change the crossorigin attribute in both your <script> and <link> to one of the other values, and you might not have an easy way of doing so if what you're trying to preload is a dependency of other modules.
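
As a sketch of what that old-style workaround looked like (the file name is hypothetical), both the preload and the module script would need an explicitly matching crossorigin value to get a cache hit:

```html
<!-- Both tags must agree on the credentials mode, here "anonymous",
     or the browser fetches module.js twice. -->
<link rel="preload" as="script" crossorigin="anonymous" href="module.js">
...
<script type="module" crossorigin="anonymous" src="module.js"></script>
```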

Furthermore, fetching the file is only the first step in actually running the code. First, the browser has to parse and compile it. Ideally, this should happen ahead of time as well, so that when the module is needed, the code is ready to run. However, V8 (Chrome's JavaScript engine) parses and compiles modules differently from other JavaScript. <link rel="preload"> doesn't provide any way of indicating that the file being loaded is a module, so all the browser can do is load the file and put it in the cache. Once the script is loaded using a <script type="module"> tag (or it's loaded by another module), the browser parses and compiles the code as a JavaScript module.

In a nutshell, yes: <link rel="modulepreload"> solves these problems. By having a specific link type for preloading modules, we can write simple HTML without worrying about which credentials mode we're using. The defaults just work.

<head>
  <link rel="modulepreload" href="super-critical-stuff.js">
</head>
[...]
<script type="module" src="super-critical-stuff.js">

And since Chrome now knows that what you're preloading is a module, it can be smart and parse and compile the module as soon as it's done fetching, instead of waiting until it tries to run.

But what about modules' dependencies?

Funny you should ask! There is indeed something we haven't talked about: recursivity.

The <link rel="modulepreload"> spec actually allows for optionally loading not just the requested module, but all of its dependency tree as well. Browsers don't have to do this, but they can.

So what would be the best cross-browser solution for preloading a module and its dependency tree, since you'll need the full dependency tree to run the app?

Browsers that choose to preload dependencies recursively should have robust deduplication of modules, so in general the best practice would be to declare the module and the flat list of its dependencies, and trust the browser not to fetch the same module twice.

<head>
  <!-- dog.js imports dog-head.js, which in turn imports
       dog-head-mouth.js, and so on.  -->
  <link rel="modulepreload" href="dog.js">
  <link rel="modulepreload" href="dog-head.js">
  <link rel="modulepreload" href="dog-head-mouth.js">
  <link rel="modulepreload" href="dog-head-mouth-tongue.js">
</head>
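
If the dependency graph is known at build time (the helper name and the map shape here are hypothetical), the flat list behind those tags can be produced with a simple traversal that deduplicates modules:

```javascript
// Flatten a known module dependency map into a deduplicated list,
// suitable for emitting one <link rel="modulepreload"> per entry.
function flattenDeps(entry, depMap, seen = new Set()) {
  if (!seen.has(entry)) {
    seen.add(entry);
    for (const dep of depMap[entry] || []) {
      flattenDeps(dep, depMap, seen);
    }
  }
  return [...seen];
}
```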

Does preloading modules help performance?

Preloading can help in maximizing bandwidth usage, by telling the browser about what it needs to fetch so that it's not stuck with nothing to do during those long roundtrips. If you're experimenting with modules and running into performance issues due to deep dependency trees, creating a flat list of preloads can definitely help!

That said, module performance is still being worked on, so make sure you take a close look at what's happening in your application with Developer Tools, and consider bundling your application into several chunks in the meantime. There's plenty of ongoing module work happening in Chrome, though, so we're getting closer to giving bundlers their well-earned rest!

An update on Better Ads


Yesterday, the Coalition for Better Ads announced the "Better Ads Experience Program." This Program provides guidelines for companies like Google on how they can use the Better Ads Standards to help improve users' experience with ads on the web.

In June, we announced Chrome's plans to support the Better Ads Standards in early 2018. Violations of the Standards are reported to sites via the Ad Experience Report, and site owners can submit their site for re-review once the violations have been fixed. Starting on February 15, in line with the Coalition's guidelines, Chrome will remove all ads from sites that have a "failing" status in the Ad Experience Report for more than 30 days. All of this information can be found in the Ad Experience Report Help Center, and our product forums are available to help address any questions or feedback.

We look forward to continuing to work with industry bodies to improve the user experience for everyone.

Disabling hardware noise suppression


In Chrome 64 we're trying a new behavior for getUserMedia audio streams that have the echoCancellation constraint enabled. What's new is that such streams will temporarily disable hardware noise suppression for the duration of the stream. We anticipate this will make the echo canceller perform better. As this functionality is experimental, it needs to be explicitly turned on; see below.

At this point, this behavior is only supported for certain input devices and only on macOS. Support is limited to devices which have togglable “ambient noise reduction” in the Sound panel of System Preferences.

Background

An echo canceller tries to remove any sound played out on the speakers from the audio signal that's picked up by the microphone. Without this, what you're saying as one party of a call will be picked up by the microphones of the other parties and then sent back to you. You'll hear an echo of yourself!

To be successful in removing echo, WebRTC’s echo canceller (which is used in Chrome) needs to get as clean an audio signal as possible from the microphone. Processing that's applied before the audio reaches the echo canceller, such as hardware noise suppression, will normally impede its performance. Moreover, there is already software noise suppression in place, but only after the echo canceller has done its processing.

Details of the new behavior

Web developers can enable the new behavior on their sites by opting in to an Origin Trial. End users can enable it globally by passing a command-line flag when starting Chrome. For more information, see below.

When this is enabled, and a web page calls getUserMedia to get audio from an input device, the following happens:

  • If the echoCancellation constraint is enabled, hardware noise suppression will be turned off for the duration of the newly created audio stream.

  • Since this setting is system-wide this will apply to all audio input streams from the same device (i.e. the same microphone).

  • Once the last stream that wants hardware noise suppression turned off closes, hardware noise suppression is turned back on.

  • If hardware noise suppression was already disabled beforehand, Chrome will not change its state.

  • If getUserMedia is called without echoCancellation enabled, Chrome will not touch hardware noise suppression.

As this setting is also user-controllable, there are some specific interactions with the user:

  • If Chrome has turned hardware noise suppression off, and the user turns it back on, Chrome will not attempt to disable it again for that stream.

  • If Chrome has turned hardware noise suppression off, and the user turns it back on, then off again, Chrome will still re-enable it once the stream ends.

The behavior takes effect by simply enabling the experiment. There are no API changes necessary.

How to enable the experiment

To get this new behavior on your site, you need to be signed up for the "Disable Hardware Noise Suppression" Origin Trial. If you just want to try it out locally, it can also be enabled on the command line:

chrome --enable-blink-features=DisableHardwareNoiseSuppression

Passing this flag on the command-line enables the feature globally for the current session.

There are a couple of aspects we wish to evaluate with this experiment:

  • Qualitative differences, in the field, between having hardware noise suppression turned on vs. off.

  • How does changing this setting from within Chrome affect the end user and other software they may be running?

We are interested in feedback on both of these aspects. Are calls better or worse with this feature turned on? Are there problems with the implementation that cause unexpected behaviors? In any case, if you're trying this out, please file feedback on this bug. If possible, include which microphone / headset / etc. was used and whether it supports ambient noise reduction. If you are doing larger-scale experiments, links to comparative statistics on audio call quality are appreciated.

What's New In DevTools (Chrome 63)


Welcome back! New features coming to DevTools in Chrome 63 include:

Note: You can check what version of Chrome you're running at chrome://version. Chrome auto-updates to a new major version about every 6 weeks.

Read on or watch the video below to learn more!

Multi-client remote debugging support

If you've ever tried debugging an app from an IDE like VS Code or WebStorm, you've probably discovered that opening DevTools messes up your debug session. This issue also made it impossible to use DevTools to debug WebDriver tests.

As of Chrome 63, DevTools now supports multiple remote debugging clients by default, no configuration needed.

Multi-client remote debugging was the number 1 most-popular DevTools issue on crbug.com, and number 3 across the entire Chromium project. Multi-client support also opens up quite a few interesting opportunities for integrating other tools with DevTools, or using those tools in new ways. For example:

  • Protocol clients such as ChromeDriver or the Chrome debugging extensions for VS Code and Webstorm, and WebSocket clients such as Puppeteer, can now run at the same time as DevTools.
  • Two separate WebSocket protocol clients, such as Puppeteer or chrome-remote-interface, can now connect to the same tab simultaneously.
  • Chrome Extensions using the chrome.debugger API can now run at the same time as DevTools.
  • Multiple different Chrome Extensions can now use the chrome.debugger API on the same tab simultaneously.

Workspaces 2.0

Workspaces have been around for some time in DevTools. This feature enables you to use DevTools as your IDE. You make some changes to your source code within DevTools, and the changes persist to the local version of your project on your file system.

Workspaces 2.0 builds off of 1.0, adding a more helpful UX and improved auto-mapping of transpiled code. This feature was originally scheduled to be released shortly after Chrome Developer Summit (CDS) 2016, but the team postponed it to sort out some issues.

Check out the "Authoring" part (around 14:28) of the DevTools talk from CDS 2016 to see Workspaces 2.0 in action.

Four new audits

In Chrome 63 the Audits panel has 4 new audits:

  • Serve images as WebP.
  • Use images with appropriate aspect ratios.
  • Avoid frontend JavaScript libraries with known security vulnerabilities.
  • Browser errors logged to the Console.

See Run Lighthouse in Chrome DevTools to learn how to use the Audits panel to improve the quality of your pages.

See Lighthouse to learn more about the project that powers the Audits panel.

Simulate push notifications with custom data

Simulating push notifications has been around for a while in DevTools, with one limitation: you couldn't send custom data. But with the new Push text box coming to the Service Worker pane in Chrome 63, now you can. Try it now:

  1. Go to Simple Push Demo.
  2. Click Enable Push Notifications.
  3. Click Allow when Chrome prompts you to allow notifications.
  4. Open DevTools.
  5. Go to the Service Workers pane.
  6. Write something in the Push text box.

    Figure 1. Simulating a push notification with custom data via the Push text box in the Service Worker pane
  7. Click Push to send the notification.

    Figure 2. The simulated push notification

Trigger background sync events with custom tags

Triggering background sync events has also been in the Service Workers pane for some time, but now you can send custom tags:

  1. Open DevTools.
  2. Go to the Service Workers pane.
  3. Enter some text in the Sync text box.
  4. Click Sync.
Figure 3. After clicking Sync, DevTools sends a background sync event with the custom tag update-content to the service worker

Feedback

The best place to discuss any of the features or changes you see here is the google-chrome-developer-tools@googlegroups.com mailing list. You can also tweet us at @ChromeDevTools if you're short on time. If you're sure that you've encountered a bug in DevTools, please open an issue.

Previous release notes

See the devtools-whatsnew tag for links to all previous DevTools release notes.
