Estimating Available Storage Space

tl;dr

Chrome 61, with more browsers to follow, now exposes an estimate of how much storage a web app is using and how much is available via:

if ('storage' in navigator && 'estimate' in navigator.storage) {
  navigator.storage.estimate().then(({usage, quota}) => {
    console.log(`Using ${usage} out of ${quota} bytes.`);
  });
}

Modern web apps and data storage

When you think about the storage needs of a modern web application, it helps to break what's being stored into two categories: the core data needed to load the web application, and the data needed for meaningful user interaction once the application's loaded.

The first type of data, what's needed to load your web app, consists of HTML, JavaScript, CSS, and perhaps some images. Service workers, along with the Cache Storage API, provide the needed infrastructure for saving those core resources and then using them later to quickly load your web app, ideally bypassing the network entirely. (Tools that integrate with your web app's build process, like the new Workbox libraries or the older sw-precache, can fully automate the process of storing, updating, and using this type of data.)

But what about the other type of data? These are resources that aren't needed to load your web app, but which might play a crucial role in your overall user experience. If you're writing an image editing web app, for instance, you may want to save one or more local copies of an image, allowing users to switch between revisions and undo their work. Or if you're developing an offline media playback experience, saving audio or video files locally would be a critical feature. Every web app that can be personalized ends up needing to save some sort of state information. How do you know how much space is available for this type of runtime storage, and what happens when you run out of room?

The past: window.webkitStorageInfo and navigator.webkitTemporaryStorage

Browsers have historically supported this type of introspection via prefixed interfaces, like the very old (and deprecated) window.webkitStorageInfo, and the not-quite-as-old, but still non-standard navigator.webkitTemporaryStorage. While these interfaces provided useful information, they don't have a future as web standards.

That's where the WHATWG Storage Standard enters the picture.

The future: navigator.storage

As part of the ongoing work on the Storage Living Standard, a couple of useful APIs have made it to the StorageManager interface, which is exposed to browsers as navigator.storage. Like many other newer web APIs, navigator.storage is only available on secure (served via HTTPS, or localhost) origins.

Last year, we introduced the navigator.storage.persist() method, which allows your web application to request that its storage be exempted from automatic cleanup.

It's now joined by the navigator.storage.estimate() method, which serves as a modern replacement for navigator.webkitTemporaryStorage.queryUsageAndQuota(). estimate() returns similar information, but it exposes a promise-based interface, which is in keeping with other modern asynchronous APIs. The promise that estimate() returns resolves with an object containing two properties: usage, representing the number of bytes currently used, and quota, representing the maximum bytes that can be stored by the current origin. (Like everything else related to storage, quota is applied across an entire origin.)

If a web application attempts to store—using, for example, IndexedDB or the Cache Storage API—data that's large enough to bring a given origin over its available quota, the request will fail with a QuotaExceededError exception.
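
Here's a minimal sketch of handling that failure, assuming the write goes through the Cache Storage API; the cache name and URL are illustrative:

async function storeWithQuotaCheck(dataUrl) {
  try {
    const cache = await caches.open('data-cache');
    await cache.add(dataUrl);
  } catch (error) {
    // DOMException names are stable; QuotaExceededError signals that the
    // origin is out of space.
    if (error.name === 'QuotaExceededError') {
      // Prompt the user to free up locally stored data.
      console.warn('Storage quota exceeded; consider cleaning up old data.');
    } else {
      throw error;
    }
  }
}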

Storage estimates in action

Exactly how you use estimate() depends on the type of data your app needs to store. For example, you could update a control in your interface letting users know how much space is being used after each storage operation is complete. You'd ideally then provide an interface allowing users to manually clean up data that's no longer needed. You might write code along the lines of:

// For a primer on async/await, see
// https://developers.google.com/web/fundamentals/getting-started/primers/async-functions
async function storeDataAndUpdateUI(dataUrl) {
  // Pro-tip: The Cache Storage API is available outside of service workers!
  // See https://googlechrome.github.io/samples/service-worker/window-caches/
  const cache = await caches.open('data-cache');
  await cache.add(dataUrl);

  if ('storage' in navigator && 'estimate' in navigator.storage) {
    const {usage, quota} = await navigator.storage.estimate();
    const percentUsed = Math.round(usage / quota * 100);
    const usageInMib = Math.round(usage / (1024 * 1024));
    const quotaInMib = Math.round(quota / (1024 * 1024));

    const details = `${usageInMib} out of ${quotaInMib} MiB used (${percentUsed}%)`;

    // This assumes there's a <span id="storageEstimate"> or similar on the page.
    document.querySelector('#storageEstimate').innerText = details;
  }
}

How accurate is the estimate?

It's hard to miss the fact that the data you get back from the function is just an estimate of the space an origin is using. It's right there in the function name! Neither the usage nor the quota values are intended to be stable, so it's recommended that you take the following into account:

  • usage reflects how many bytes a given origin is effectively using for same-origin data, which in turn can be impacted by internal compression techniques, fixed-size allocation blocks that might include unused space, and the presence of "tombstone" records that might be created temporarily following a deletion. To prevent the leakage of exact size information, cross-origin, opaque resources saved locally may contribute additional padding bytes to the overall usage value.
  • quota reflects the amount of space currently reserved for an origin. The value depends on some constant factors like the overall storage size, but also a number of potentially volatile factors, including the amount of storage space that's currently unused. So as other applications on a device write or delete data, the amount of space that the browser is willing to devote to your web app's origin will likely change.

The present: feature detection and fallbacks

estimate() is enabled by default starting in Chrome 61. Firefox is experimenting with navigator.storage, but, as of August 2017, it's not turned on by default. You need to enable the dom.storageManager.enabled preference in order to test it.

When working with functionality that isn't yet supported in all browsers, feature detection is a must. You can combine feature detection along with a promise-based wrapper on top of the older navigator.webkitTemporaryStorage methods to provide a consistent interface along the lines of:

function storageEstimateWrapper() {
  if ('storage' in navigator && 'estimate' in navigator.storage) {
    // We've got the real thing! Return its response.
    return navigator.storage.estimate();
  }

  if ('webkitTemporaryStorage' in navigator &&
      'queryUsageAndQuota' in navigator.webkitTemporaryStorage) {
    // Return a promise-based wrapper that will follow the expected interface.
    return new Promise(function(resolve, reject) {
      navigator.webkitTemporaryStorage.queryUsageAndQuota(
        function(usage, quota) {resolve({usage: usage, quota: quota})},
        reject
      );
    });
  }

  // If we can't estimate the values, return a Promise that resolves with NaN.
  return Promise.resolve({usage: NaN, quota: NaN});
}

Deprecations and Removals in Chrome 61

In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and also the capabilities of the Web Platform. This article describes the deprecations and removals in Chrome 61, which is currently in beta. This list is subject to change at any time.

Security and Privacy

Block resources whose URLs contain '\n' and '<' characters.

There is a type of attack called dangling markup injection in which a truncated URL is used to send data to an external endpoint. For example, consider a page containing <img src='https://evil.com/?. Because the URL has no closing quote, browsers will read up to the next quote that occurs and treat the enclosed characters as part of a single URL.

Chrome 61 mitigates this vulnerability by restricting the character sets allowed in href and src attributes. Specifically, Chrome will stop processing URLs when it encounters newline characters (\n) or less-than characters (<).

Developers with a legitimate use case for newline and less-than characters in a URL should instead escape these characters.
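
As a minimal sketch, encodeURIComponent() percent-escapes both of the blocked characters ('\n' becomes %0A and '<' becomes %3C); the names and URL here are illustrative:

const img = document.createElement('img');
const userInput = 'line one\nline two <tag>'; // untrusted value
// After escaping, the URL contains no raw '\n' or '<' and won't be blocked.
img.src = 'https://example.com/?q=' + encodeURIComponent(userInput);
document.body.appendChild(img);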

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove usage of notifications from insecure iframes

Permission requests from iframes can confuse users since it is difficult to distinguish between the containing page's origin and the origin of the iframe that is making the request. When the request's scope is unclear, it is difficult for users to judge whether to grant or deny permission.

Disallowing notifications in iframes will also align the requirements for notification permission with that of push notifications, easing friction for developers.

Developers who need this functionality can open a new window to request notification permission.
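
A minimal sketch of that workaround, assuming a hypothetical same-origin helper page that performs the request from a top-level context:

// In the iframe: open the helper page in response to a user gesture.
document.querySelector('#enableNotifications').addEventListener('click', () => {
  window.open('/request-notification-permission.html'); // hypothetical page
});

// In the opened helper page, which is a top-level secure context:
Notification.requestPermission().then((permission) => {
  console.log(`Notification permission: ${permission}`);
});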

Intent to Remove | Chromestatus Tracker | Chromium Bug

Deprecate and remove Presentation API on insecure contexts

It's been found that the Presentation API can be used as an attack vector on insecure origins. Since presentation displays don't have address bars, the API can be used to spoof content. It's also possible to exfiltrate data from a running presentation.

In aligning with Blink’s intention to remove powerful features on insecure origins, we plan to deprecate and remove support for the Presentation API on insecure contexts. Starting in Chrome 61, PresentationRequest.start() will no longer function on insecure origins.

Intent to Remove | Chromestatus Tracker | Chromium Bug

CSS

Make shadow-piercing descendant combinator behave like descendant combinator

Note: This change was originally slated for Chrome 60, but was bumped after Chrome 60 removals were published.

The shadow-piercing descendant combinator (>>>), part of CSS Scoping Module Level 1, was intended to match the children of a particular ancestor element even when they appeared inside of a shadow tree. This had some limitations. First, per the spec, it could only be used in JavaScript calls such as querySelector() and did not work in stylesheets. More importantly, browser vendors were unable to make it work beyond one level of the Shadow DOM.

Consequently, the shadow-piercing descendant combinator has been removed from relevant specs including Shadow DOM v1. Rather than break web pages by removing this selector from Chromium, we've chosen instead to alias the shadow-piercing descendant combinator to the descendant combinator. The original behavior was deprecated in Chrome 45. The new behavior is implemented in Chrome 61.

Intent to Remove | Chromestatus Tracker | Chromium Bug

JavaScript

Disallow defining indexed properties on window

Previously some browsers allowed for JavaScript assignments like the following:

window[0] = 1;

The current HTML spec notes that this is an explicit violation of the JavaScript spec. As such, this ability is removed in Chrome 61. As of February 2016, Firefox is already in compliance.

Chromium Bug


What's New In DevTools (Chrome 62)

New features coming to DevTools in Chrome 62:

Note: You can check what version of Chrome you're running at chrome://version. Chrome auto-updates to a new major version about every 6 weeks.

Top-level await operators in the Console

The Console now supports top-level await operators.

Using top-level await operators in the Console
Figure 1. Using top-level await operators in the Console
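
For example, you can now type something like the following straight into the Console, with no wrapping async function (the URL is just an illustrative endpoint):

// Evaluated directly in the Console, one line at a time.
const response = await fetch('https://example.com/data.json');
await response.json();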

New screenshot workflows

You can now take a screenshot of a portion of the viewport, or of a specific HTML node.

Screenshots of a portion of the viewport

To take a screenshot of a portion of your viewport:

  1. Click Inspect or press Command+Shift+C (Mac) or Control+Shift+C (Windows, Linux) to enter Inspect Element Mode.
  2. Hold Control and select the portion of the viewport that you want to take a screenshot of.
  3. Release your mouse. DevTools downloads a screenshot of the portion that you selected.
Taking a screenshot of a portion of the viewport
Figure 2. Taking a screenshot of a portion of the viewport

Screenshots of specific HTML nodes

To take a screenshot of a specific HTML node:

  1. Select an element in the Elements panel.

    An example of a node
    Figure 3. In this example, the goal is to take a screenshot of the blue header that contains the text Tools. Note that this node is already selected in the DOM Tree of the Elements panel
  2. Open the Command Menu.
  3. Start typing node and select Capture node screenshot. DevTools downloads a screenshot of the selected node.

    The result of the 'Capture node screenshot' command
    Figure 4. The result of the Capture node screenshot command

CSS Grid highlighting

To view the CSS Grid that's affecting an element, hover over an element in the DOM Tree of the Elements panel. A dashed border appears around each of the grid items. This only works when the selected item, or the parent of the selected item, has display:grid applied to it.

Highlighting a CSS Grid
Figure 5. Highlighting a CSS Grid

A new API for querying objects

Call queryObjects(Constructor) from the Console to return an array of objects that were created with the specified constructor. For example:

  • queryObjects(Promise). Returns all Promises.
  • queryObjects(HTMLElement). Returns all HTML elements.
  • queryObjects(foo), where foo is a function name. Returns all objects that were instantiated via new foo().

New Console filters

The Console now supports negative and URL filters.

Negative filters

Type -<text> in the Filter box to filter out any Console message that includes that <text>.

In general, if <text> is found anywhere in the UI that DevTools presents for the message, then the Console hides the message. This includes the message text, the filename from which the message originated, and the stack trace text (when applicable).

An example of 3 messages that will be filtered out
Figure 6. An example of 3 messages that will be filtered out
The result after applying the negative filter
Figure 7. The result after applying the -foo Console filter

URL filters

Type url:<text> in the Filter box to only show messages that originated from a script whose URL includes <text>.

The filter uses fuzzy matching. If <text> appears anywhere in the URL, then DevTools shows the message.

An example of URL filtering
Figure 8. Using URL filtering to only display messages that originate from scripts whose URL includes hymn. By hovering over the script name, you can see that the host name includes this text

HAR imports in the Network panel

Drag and drop a HAR file into the Network panel to import it.

Importing a HAR file
Figure 9. Importing a HAR file

Previewable cache resources in the Application panel

Click a row in a Cache Storage table to see a preview of that resource below the table.

Previewing a cache resource
Figure 10. Previewing a cache resource

Feedback

The best place to discuss any of the features or changes you see here is the google-chrome-developer-tools@googlegroups.com mailing list. You can also tweet us at @ChromeDevTools if you're short on time.

Previous release notes

Introducing visualViewport

What if I told you there's more than one viewport?

BRRRRAAAAAAAMMMMMMMMMM

And the viewport you're using right now is actually a viewport within a viewport.

BRRRRAAAAAAAMMMMMMMMMM

And sometimes, the data the DOM gives you refers to one of those viewports and not the other.

BRRRRAAAAM… wait what?

It's true. Take a look:

Layout viewport vs visual viewport

The video above shows a web page being scrolled and pinch-zoomed, along with a mini-map on the right showing the position of viewports within the page.

Things are pretty straightforward during regular scrolling. The green area represents the layout viewport, which position: fixed items stick to.

Things get weird when pinch-zooming is introduced. The red box represents the visual viewport, which is the part of the page we can actually see. This viewport can move around while position: fixed elements remain where they were, attached to the layout viewport. If we pan at a boundary of the layout viewport, it drags the layout viewport along with it.

Improving compatibility

Unfortunately web APIs are inconsistent in terms of which viewport they refer to, and they're also inconsistent across browsers.

For instance, element.getBoundingClientRect().y returns the offset within the layout viewport. That's cool, but we often want the position within the page, so we write:

element.getBoundingClientRect().y + window.scrollY

However, many browsers use the visual viewport for window.scrollY, meaning the above code breaks when the user pinch-zooms.

Chrome 61 changes window.scrollY to refer to the layout viewport instead, meaning the above code works even when pinch-zoomed. In fact, browsers are slowly changing all positional properties to refer to the layout viewport.

With the exception of one new property…

Exposing the visual viewport to script

A new API exposes the visual viewport as window.visualViewport. It's a draft spec, with cross-browser approval, and it's landing in Chrome 61.

console.log(window.visualViewport.width);

Here's what window.visualViewport gives us:

visualViewport properties:

  • offsetLeft: Distance between the left edge of the visual viewport and the left edge of the layout viewport, in CSS pixels.
  • offsetTop: Distance between the top edge of the visual viewport and the top edge of the layout viewport, in CSS pixels.
  • pageLeft: Distance between the left edge of the visual viewport and the left boundary of the document, in CSS pixels.
  • pageTop: Distance between the top edge of the visual viewport and the top boundary of the document, in CSS pixels.
  • width: Width of the visual viewport in CSS pixels.
  • height: Height of the visual viewport in CSS pixels.
  • scale: The scale applied by pinch-zooming. If content is twice the size due to zooming, this would return 2. This is not affected by devicePixelRatio.

There are also a couple of events:

window.visualViewport.addEventListener('resize', listener);
visualViewport events:

  • resize: Fired when width, height, or scale changes.
  • scroll: Fired when offsetLeft or offsetTop changes.

Demo

The video at the start of this article was created using visualViewport; check it out in Chrome 61+. It uses visualViewport to make the mini-map stick to the top-right of the visual viewport, and applies an inverse scale so the mini-map always appears the same size, despite pinch-zooming.
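
Here's a rough sketch of that technique, assuming a position: fixed element with the id mini-map; the names are illustrative, not taken from the demo source:

const miniMap = document.querySelector('#mini-map');
miniMap.style.transformOrigin = '0 0';

function reposition() {
  const v = window.visualViewport;
  // Move the element to the visual viewport's top-right corner, then apply
  // the inverse of the pinch-zoom scale so its on-screen size stays constant.
  miniMap.style.transform =
    `translate(${v.offsetLeft + v.width}px, ${v.offsetTop}px) ` +
    `scale(${1 / v.scale}) translateX(-100%)`;
}

// As the gotchas below explain, you need all three listeners.
visualViewport.addEventListener('scroll', reposition);
visualViewport.addEventListener('resize', reposition);
window.addEventListener('scroll', reposition);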

Gotchas

Events only fire when the visual viewport changes

It feels like an obvious thing to state, but it caught me out when I first played with visualViewport.

If the layout viewport resizes but the visual viewport doesn't, you don't get a resize event. However, it's unusual for the layout viewport to resize without the visual viewport also changing width/height.

The real gotcha is scrolling. If scrolling occurs, but the visual viewport remains static relative to the layout viewport, you don't get a scroll event on visualViewport, and this is really common. During regular document scrolling, the visual viewport stays locked to the top-left of the layout viewport, so scroll does not fire on visualViewport.

If you want to hear about all changes to the visual viewport, including pageTop and pageLeft, you'll have to listen to the window's scroll event too:

visualViewport.addEventListener('scroll', update);
visualViewport.addEventListener('resize', update);
window.addEventListener('scroll', update);

Avoid duplicating work with multiple listeners

Similar to listening to scroll & resize on the window, you're likely to call some kind of "update" function as a result. However, it's common for many of these events to happen at the same time. If the user resizes the window, it'll trigger resize, but quite often scroll too. To improve performance, avoid handling the change multiple times:

// Add listeners
visualViewport.addEventListener('scroll', update);
visualViewport.addEventListener('resize', update);
addEventListener('scroll', update);

let pendingUpdate = false;

function update() {
  // If we're already going to handle an update, return
  if (pendingUpdate) return;

  pendingUpdate = true;

  requestAnimationFrame(() => {
    pendingUpdate = false;

    // Handle update here
  });
}

I've filed a spec issue for this, as I think there may be a better way, such as a single update event.

Event handlers don't work

Due to a Chrome bug, this does not work:

Buggy – uses an event handler

visualViewport.onscroll = () => console.log('scroll!');

Instead:

Works – uses an event listener

visualViewport.addEventListener('scroll', () => console.log('scroll'));

Offset values are rounded

I think (well, I hope) this is another Chrome bug.

offsetLeft and offsetTop are rounded, which is pretty inaccurate once the user has zoomed in. You can see the issues with this during the demo – if the user zooms in and pans slowly, the mini-map snaps between unzoomed pixels.

The event rate is slow

Like other resize and scroll events, these do not fire every frame, especially on mobile. You can see this during the demo – once you pinch-zoom, the mini-map has trouble staying locked to the viewport.

Accessibility

In the demo I used visualViewport to counteract the user's pinch-zoom. It makes sense for this particular demo, but you should think carefully before doing anything that overrides the user's desire to zoom in.

visualViewport can be used to improve accessibility. For instance, if the user is zooming in, you may choose to hide decorative position: fixed items, to get them out of the user's way. But again, be careful you're not hiding something the user is trying to get a closer look at.

You could consider posting to an analytics service when the user zooms in. This could help you identify pages that users are having difficulty with at the default zoom level.

visualViewport.addEventListener('resize', () => {
  if (visualViewport.scale > 1) {
    // Post data to analytics service
  }
});

And that's it! visualViewport is a nice little API which solves compatibility issues along the way.

New in Chrome 61

  • Chrome 61 now supports JavaScript modules natively, unifying the way modular JavaScript can be written.
  • You can now use navigator.share to trigger the native Android share dialog.
  • The WebUSB API has landed, allowing web apps to access user-permitted USB devices.
  • And there’s plenty more!

Note: Want the full list of changes? Check out the Chromium source repository change list.

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 61!

JavaScript Modules

Chrome 61 adds native support for JavaScript modules via the <script type="module"> element. That makes it possible for Chrome to fetch granular dependencies in parallel, taking advantage of caching, avoiding duplications across the page and ensuring that script executes in the correct order.

<script type="module">
  import {addText} from './utils.js';
  addText('Modules are pretty cool.');
</script>
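
For completeness, here's what a matching utils.js module might look like; this is a hypothetical implementation of the addText export used above:

// utils.js
export function addText(text) {
  const p = document.createElement('p');
  p.textContent = text;
  document.body.appendChild(p);
}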

This standardized module system unifies the way modular JavaScript can be written and shipped to web browsers. In the future, the same system will be available in Node, making it easier for you to write and deploy isomorphic JavaScript.

You can learn more about modules and the aspects of JavaScript that are affected by modules from the links below.

Web Share API

If you want users to be easily able to share your content on their favorite social network, you need to integrate sharing buttons into your site for each social network. It adds bloat to your page, doesn’t always fit your UI nicely, and means you need to include code from a third party site.

The Web Share API, available today in Chrome for Android, allows you to invoke the native sharing capabilities of the user's device, allowing the user to easily share text or links with any of their installed native apps!

In a future release, this API will also be able to share to installed web apps. To use it, simply call navigator.share with the details of the page you want to share, and the system will handle the rest.

navigator.share({
  title: document.title, text: 'Hello',
  url: window.location.href
}).then(() => {
  console.log('Successful share');
});

Check out Paul’s WebShare API Update for full details and some best practices that you should be following.

WebUSB

USB Device Chooser screenshot

Most hardware peripherals such as keyboards, mice, printers, and gamepads are supported by high-level web platform APIs. But, using specialized educational, scientific, industrial or other USB devices in the browser has been hard, often requiring specialized drivers.

Chrome now supports the WebUSB API, allowing web apps to communicate with USB devices, after the user has provided their consent. To learn more about the security and privacy considerations and how they’re addressed, have a peek at the WebUSB spec.

Then, when you’re ready to dive in, take a look at Francois’ WebUSB post on updates.

And more!

  • You can now specify scrolling smoothness with the scroll-behavior CSS property.
  • CSS hex color values can now specify alpha transparency by adding digits to the end of the string (see the example after this list).
  • You can access the relative positions of the screen content with the Visual Viewport API, exposing complex functionality like pinch-and-zoom in a more direct way.
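
For example, sketched in JavaScript (the same values work directly in stylesheets):

// '#RRGGBBAA': the last two hex digits are the alpha channel.
document.body.style.backgroundColor = '#ff000080'; // 50%-transparent red
// The 4-digit shorthand works too: '#f008' expands to '#ff000088'.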

These are just a few of the changes in Chrome 61 for developers.

Subscribe to our YouTube channel and you'll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 62 is released, I’ll be right here to tell you -- what’s new in Chrome!

Introducing the Web Share API

Good news, everybody! In Chrome 61 for Android, we've launched the navigator.share() method, which allows websites to invoke the native sharing capabilities of the host platform.

This method, part of the simple Web Share API—written by Matt Giuca on the Chrome team—allows you to easily trigger the native Android share dialog, passing either a URL or text to share. This is an important API, as it gives your users control of how and where their data is shared.

Usage

The Web Share API is a promise-based, single-method API. It accepts an object which must have at least one of the properties named text or url.

if (navigator.share) {
  navigator.share({
      title: 'Web Fundamentals',
      text: 'Check out Web Fundamentals — it rocks!',
      url: 'https://developers.google.com/web',
  })
    .then(() => console.log('Successful share'))
    .catch((error) => console.log('Error sharing', error));
}

Once invoked, it will bring up the native picker (see video) and allow you to share the data with the app chosen by the user.

To use the Web Share API:

  • you must be served over HTTPS

  • you can only invoke the API in response to a user action, such as a click (e.g., you can't call navigator.share as part of the page load)

  • you can share any URL, not just URLs under your website's current scope, and you may also share text without a URL

  • you should feature-detect it in case it's not available on your users' platform (e.g., via navigator.share !== undefined)

The URL

For the initial launch on Android, users of the Web Share API will be on a mobile device. Some sites might have an "m." URL, or a custom URL for the user's context. You can share any URL via the Web Share API, but you could reuse a canonical URL on your page to provide a better experience to the user. For example, you might do:

let url = document.location.href;
const canonicalElement = document.querySelector('link[rel=canonical]');
if (canonicalElement !== null) {
    url = canonicalElement.href;
}
navigator.share({url: url});

Case Study

Santa Tracker is a holiday tradition here at Google. Every December, you can celebrate the season with games and educational experiences; and in the new year, Santa Tracker is open-sourced and delivered.

In 2016, we used the Web Share API on Android via an Origin Trial (note: this is not required to use the Web Share API now, as part of Chrome 61). This API was a perfect fit for mobile—in previous years, we had disabled share buttons on mobile, as space is at a premium and we couldn't justify having several share targets.

Santa Tracker share button

With the Web Share API, we were able to present just one button, saving precious pixels. We also found that users shared with Web Share around 20% more than users without the API enabled.

(If you're on Chrome 61 on Android, head to Santa Tracker and see Web Share in action.)

History

The Web Share API was originally launched as an Origin Trial as part of Chrome 55.

Prior to the Web Share API, there were a number of ways to invoke the platform's native sharing capabilities, but they all had significant drawbacks:

  • Web Intents (dead)
  • Protocol handling via registerProtocolHandler, but this has zero support on mobile
  • Direct sharing to a well-known service URL such as Twitter
  • Android intent: URL syntax (which was, unfortunately, Android-only, and required apps to opt-in)

More Information

Read more about the launch at Chrome Platform Status.

In the future, websites will be able to register themselves as "share receivers", enabling sharing to the web—from both the web and native apps. We on the Chrome team are incredibly excited by this.

Audio/Video Updates in Chrome 62

Persistent licenses for Android

Persistent license in Encrypted Media Extensions (EME) means the license can be persisted on the device so that applications can load the license into memory without sending another license request to the server. This is how offline playback is supported in EME.

Until now, Chrome OS was the only platform to support persistent licenses. That's no longer the case: playing protected content through EME while the device is offline is now possible on Android as well.

const config = [{
  sessionTypes: ['persistent-license'],
  videoCapabilities: [{
    contentType: 'video/webm; codecs="vp9"',
    robustness: 'SW_SECURE_DECODE' // Widevine L3
  }]
}];

// Chrome will prompt user if website is allowed to uniquely identify
// user's device to play protected content.
navigator.requestMediaKeySystemAccess('com.widevine.alpha', config)
.then(access => {
  // User will be able to watch encrypted content while being offline when
  // license is stored locally on device and loaded later.
})
.catch(error => {
  // Persistent licenses are not supported on this platform yet.
});

You can try persistent licenses yourself by checking out the Sample Media PWA and following these steps:

  1. Go to https://biograf-155113.appspot.com/ttt/episode-2/
  2. Click "Make available offline" and wait for the video to be downloaded.
  3. Turn airplane mode on.
  4. Click the "Play" button and enjoy the video!

Note: Widevine support is disabled in Incognito mode on Android so that users do not inadvertently lose paid licenses when closing Incognito tabs.

Widevine L1 for Android

As you may already know, all Android devices are required to support Widevine Security Level 3 (Widevine L3). However, there are many devices out there that also support the highest security level: Widevine Security Level 1 (Widevine L1), where all content processing, cryptography, and control is performed within the Trusted Execution Environment (TEE).

Good news! Widevine L1 is now supported in Chrome for Android so that media can be played in the most secure way. Note that it was already supported on Chrome OS.

const config = [{
  videoCapabilities: [{
    contentType: 'video/webm; codecs="vp9"',
    robustness: 'HW_SECURE_ALL' // Widevine L1
  }]
}];

// Chrome will prompt user if website is allowed to uniquely identify
// user's device to play protected content.
navigator.requestMediaKeySystemAccess('com.widevine.alpha', config)
.then(access => {
  // User will be able to watch encrypted content in the most secure way.
})
.catch(error => {
  // Widevine L1 is not supported on this platform yet.
});

Shaka Player, the JavaScript library for adaptive media formats (such as DASH and HLS), has a demo for you to try out Widevine L1:

  1. Go to https://shaka-player-demo.appspot.com/demo/ and click "Allow" when prompted.
  2. Pick "Angel One (multicodec, multilingual, Widevine)".
  3. Enter HW_SECURE_ALL in the "Video Robustness" field of the "Configuration" section.
  4. Click the "Load" button and enjoy the video!

Background video track optimizations (MSE only)

The Chrome team is always trying to find new ways to improve battery life, and Chrome 62 is no exception.

Chrome now disables video tracks when the video is played in the background if the video uses Media Source Extensions (MSE). Check out our previous article to learn more.

Customize seekable range on live MSE streams

As you may already know, the seekable attribute contains the ranges of the media resource to which the browser can seek. Typically, it contains a single time range which starts at 0 and ends at the media resource duration. If the duration is not available though, such as a live stream, the time range may continuously change.

The good news is that, when the media source duration is +Infinity, you can now more effectively customize the seekable range logic with Media Source Extensions (MSE) by providing or removing a single seekable range that is unioned with the current buffered ranges, resulting in a single seekable range that fits both.

In the code below, the media source has already been attached to a media element and contains only its init segment:

const mediaSource = new MediaSource();
...

mediaSource.duration = +Infinity;
// Seekable time ranges: { }
// Buffered time ranges: { }

mediaSource.setLiveSeekableRange(1 /* start */, 4 /* end */);
// Seekable time ranges: { [1.000, 4.000) }
// Buffered time ranges: { }

// Let's append a media segment that starts at 3 seconds and ends at 6.
mediaSource.sourceBuffers[0].appendBuffer(someData);
// Seekable time ranges: { [1.000, 6.000) }
// Buffered time ranges: { [3.000, 6.000) }

mediaSource.clearLiveSeekableRange();
// Seekable time ranges: { [0.000, 6.000) }
// Buffered time ranges: { [3.000, 6.000) }

There are many cases that I didn't cover above, so I'd suggest you try the official sample to see how buffered and seekable time ranges react to different MSE events.

Intent to Ship | Chromestatus Tracker | Chromium Bug

FLAC in MP4 for MSE

The lossless audio coding format FLAC has been supported in regular media playback since Chrome 56. FLAC in ISO-BMFF support (aka FLAC in MP4) was added shortly after. And now FLAC in MP4 is available in Chrome 62 for Media Source Extensions (MSE).

For the record, Firefox folks developed and implemented support for the FLAC in MP4 encapsulation spec, and the BBC has been experimenting with using it with MSE. You can read the BBC's "Delivering Radio 3 Concert Sound" post to learn more.

Here's how you can detect if FLAC in MP4 is supported for MSE:

if (MediaSource.isTypeSupported('audio/mp4; codecs="flac"')) {
  // TODO: Fetch data and feed it to a media source.
}

If you want to see a full example, check out our official sample.

Intent to Ship | Chromestatus Tracker | Chromium Bug

Video automatically goes fullscreen when the device is rotated

If you rotate a device to landscape while a video is playing in the viewport, playback will automatically switch to fullscreen mode. Rotating the device to portrait puts the video back to windowed mode. Check out our past article for more details.


Picture In Picture (PiP)

Since April 2017, Chrome on Android O has supported Picture In Picture. It allows users to play a <video> element in a small overlay window that isn't blocked by other windows, so that they can watch while doing other things.

Here's how it works: open Chrome, go to a website that contains a video and play it fullscreen. From there, press the Home button to go to your Android Home Screen and the playing video will automatically transition to Picture In Picture. That's all! Pretty cool right?

Android Picture in Picture photo
Figure 1. Android Picture in Picture photo

It is, but... what about desktop? What if the website wants to control that experience?

The good news is that a Picture In Picture Web API specification is being drafted as we speak. This spec aims to allow websites to initiate and control this behavior by exposing the following capabilities:

  • Notify the website when a video enters and leaves Picture in Picture mode.
  • Allow the website to trigger Picture in Picture on a video element via a user gesture.
  • Allow the website to exit Picture in Picture.
  • Allow the website to check if Picture in Picture can be triggered.

And this is how it could look:

<video id="video" src="https://example.com/file.mp4"></video>

<button id="pipButton"></button>

<script>
  // Hide button if Picture In Picture is not supported.
  pipButton.hidden = !document.pictureInPictureEnabled;

  pipButton.addEventListener('click', function() {
    // If there is no element in Picture In Picture yet, let's request Picture
    // In Picture for the video, otherwise leave it.
    if (!document.pictureInPictureElement) {
      video.requestPictureInPicture()
      .catch(error => {
        // Video failed to enter Picture In Picture mode.
      });
    } else {
      document.exitPictureInPicture()
      .catch(error => {
        // Video failed to leave Picture In Picture mode.
      });
    }
  });
</script>

Warning: The code above is not implemented by browsers yet.

Feedback

So what do you think? Please submit your feedback and raise issues in the Picture In Picture WICG repository. We're eager to hear your thoughts!

Preventing Android's default PIP behavior

Today, you can prevent video from using Android's default PiP behavior in Chrome by responding to a resize event, and detecting when the window size has changed significantly (see code below). This is not recommended as a permanent solution but provides a temporary option until the Web API is implemented.

// See whether resize is small enough to be PiP. It's a hack, but it'll
// work for now.
window.addEventListener('resize', function() {
  if (!document.fullscreenElement) {
    return;
  }

  var minimumScreenSize = 0.33;
  var screenArea = screen.width * screen.height;
  var windowArea = window.outerHeight * window.outerWidth;

  // If the size of the window relative to the screen is less than a third,
  // let's assume we're in PiP and exit fullscreen to prevent Auto PiP.
  if ((windowArea / screenArea) < minimumScreenSize) {
    document.exitFullscreen();
  }
});

Autoplay Policy Changes

Chrome's autoplay policies are about to change in 2018 and I'm here to tell you why and how this is going to affect video playback with sound. Spoiler alert: Users are going to love it!

Figure 1. Internet memes tagged "autoplay" found on Imgflip and Imgur

New behaviors

As you may have noticed, web browsers are moving towards stricter autoplay policies in order to improve the web experience for users, minimize the incentives to install extensions that block ads, and reduce data consumption on expensive and/or constrained networks.

With these new autoplay policies, the Chrome team aims to give users greater control over content playing in their browser. They will also benefit publishers who have legitimate autoplay use cases.

Chrome's autoplay policies are simple:

  • Muted autoplay is always allowed.
  • Autoplay with sound is allowed if the user has interacted with the domain (with a click or a tap, for example), or, on desktop, if the user's Media Engagement Index threshold has been crossed.
  • The top frame can delegate autoplay permission to its iframes to allow autoplay with sound.

Media Engagement Index (MEI)

The MEI measures an individual's propensity to consume media on a site. Chrome's current approach is a ratio of visits to significant media playback events per origin, where a significant playback event satisfies all of the following:

  • Consumption of the media (audio/video) must be greater than 7 seconds.
  • Audio must be present and unmuted.
  • Tab with video is active.
  • Size of the video (in px) must be greater than 200x140.

From that, Chrome calculates a media engagement score which is highest on sites where media is played on a regular basis. When it is high enough, media playback is allowed to autoplay on desktop only.

A user's MEI is available at the chrome://media-engagement internal page.

Screenshot of the chrome://media-engagement page
Figure 2. Screenshot of the chrome://media-engagement internal page

Iframe delegation

Once an origin has received autoplay permission, it can delegate that permission to iframes via a new HTML attribute. Check out the Gesture Delegation API proposal to learn more.

<iframe src="myvideo.html" gesture="media">

Without iframe delegation, videos will not be able to autoplay with sound.

Example scenarios

Example 1: Every time a user visits VideoSubscriptionSite.com on their laptop they watch a TV show or a movie. As their media engagement score is high, autoplay is allowed.

Example 2: GlobalNewsSite.com has both text and video content. Most users go to the site for text content and watch videos only occasionally. Users' media engagement score is low, so autoplay wouldn't be allowed if a user navigates directly from a social media page or search.

Example 3: LocalNewsSite.com has both text and video content. Most people enter the site through the homepage and then click on the news articles. Autoplay on the news article pages would be allowed because of user interaction with the domain. However, care should be taken to make sure users aren't surprised by autoplaying content.

Example 4: MyMovieReviewBlog.com embeds an iframe with a movie trailer to go along with their review. The user interacted with the domain to get to the specific blog, so autoplay is allowed. However, the blog needs to explicitly delegate that privilege to the iframe in order for the content to autoplay.

Best practices for web developers

Here's the one thing to remember: Don't ever assume a video will play, and don't show a pause button when the video is not actually playing. It is so important that I'm going to write it one more time below for those who simply skim through this post.

Key Point: Don't ever assume a video will play, and don't show a pause button when the video is not actually playing.

You should always look at the Promise returned by the play function to see if it was rejected:

var promise = document.querySelector('video').play();

if (promise !== undefined) {
  promise.then(_ => {
    // Autoplay started!
  }).catch(error => {
    // Autoplay was prevented.
    // Show a "Play" button so that user can start playback.
  });
}

Warning: Don't play interstitial ads without showing any media controls as they may not autoplay and users will have no way of starting playback.

One cool way to engage users is to use muted autoplay and let them choose to unmute (see the code snippet below). Some websites already do this effectively, including Facebook, Instagram, Twitter, and YouTube.

<video id="video" muted autoplay>
<button id="unmuteButton"></button>

<script>
  unmuteButton.addEventListener('click', function() {
    video.muted = false;
  });
</script>

Feedback

At the time of writing, Chrome's autoplay policies aren't carved in stone. Please reach out to the Chrome team at @ChromiumDev on Twitter to share your thoughts.

Deprecations and Removals in Chrome 62

In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and also the capabilities of the Web Platform. This article describes the deprecations and removals in Chrome 62, which is in beta as of September 14. This list is subject to change at any time.

Remove RTCPeerConnection.getStreamById()

Nearly two years ago, getStreamById() was removed from the WebRTC spec. Most other browsers have already removed this from their implementations, and the feature was deprecated in Chrome 60. Though this function is believed to be little-used, it's also believed there is some minor interoperability risk with Edge and WebKit-based browsers other than Safari where getStreamById() is still supported. Developers needing an alternative implementation can find example code in the Intent to Remove, below.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove SharedWorker.workerStart

This property, which was intended for use in monitoring worker performance, was removed from the spec more than two years ago, and it is not supported in the other major browsers. A more modern approach to tracking the performance of a worker would use Performance.timeOrigin.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove SVGPathElement.getPathSegAtLength()

In Chrome 48, SVGPathElement.pathSegList() and related interfaces were removed in compliance with the SVG specification. At that time, this method was mistakenly left in. We don't expect this removal to break any web pages since, for the last two years, it has returned an object that no longer exists in Blink.

Intent to Remove | Chromestatus Tracker | Chromium Bug


Sensors For The Web!

Today, sensor data is used in many native applications to enable use cases such as immersive gaming, fitness tracking, and augmented or virtual reality. Wouldn't it be cool to bridge the gap between native and web applications? Enter the Generic Sensor API, for the web!

What is Generic Sensor API?

The Generic Sensor API is a set of interfaces which expose sensor devices to the web platform. The API consists of the base Sensor interface and a set of concrete sensor classes built on top. Having a base interface simplifies the implementation and specification process for the concrete sensor classes. For instance, take a look at the Gyroscope class; it is super tiny! The core functionality is specified by the base interface, and Gyroscope merely extends it with three attributes representing angular velocity.
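
For instance, here's roughly how Gyroscope is used; its three attributes report angular velocity around the device's X, Y, and Z axes:

const gyroscope = new Gyroscope({frequency: 60});
gyroscope.onreading = () => {
  console.log(`Angular velocity around the X axis: ${gyroscope.x} rad/s`);
  console.log(`Angular velocity around the Y axis: ${gyroscope.y} rad/s`);
  console.log(`Angular velocity around the Z axis: ${gyroscope.z} rad/s`);
};
gyroscope.start();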

Typically a concrete sensor class represents an actual sensor on the platform, e.g., an accelerometer or gyroscope. However, in some cases, the implementation of a sensor class fuses data from several platform sensors and exposes the result in a convenient way to the user. For example, the AbsoluteOrientationSensor provides a ready-to-use 4x4 rotation matrix based on the data obtained from the accelerometer, gyroscope and magnetometer.

You might think that the web platform already provides sensor data, and you are absolutely right! For instance, DeviceMotion and DeviceOrientation events expose motion sensor data, while other experimental APIs provide data from environmental sensors. So why do we need a new API?

Compared to the existing interfaces, the Generic Sensor API provides a great number of advantages:

  • Generic Sensor API is a sensor framework that can be easily extended with new sensor classes and each of these classes will keep the generic interface. The client code written for one sensor type can be reused for another one with very few modifications!
  • You can configure the sensor; for example, you can set a sampling frequency suitable for your application's needs (see the sketch after this list).
  • You can detect whether a sensor is available on the platform.
  • Sensor readings have high precision timestamps, enabling better synchronization with other activities in your application.
  • Sensor data models and coordinate systems are clearly defined, allowing browser vendors to implement interoperable solutions.
  • The Generic Sensor based interfaces are not bound to the DOM (neither Navigator nor Window objects), and it opens up future opportunities of using the same API within service workers or implementing Generic Sensor API in headless JS runtimes, for instance, on embedded devices.
  • Security and privacy are top priorities for the Generic Sensor API, which provides a much better security level than older sensor APIs, including integration with the Permissions API.
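
As a sketch of the first few points in the list above, here's how you might set a sampling frequency and detect whether a sensor is available; treat the details as illustrative:

try {
  // Request a 10 Hz sampling frequency via the constructor options.
  const accelerometer = new Accelerometer({frequency: 10});
  accelerometer.onerror = (event) => {
    // NotReadableError means the sensor is not available on this platform.
    console.log(`Sensor error: ${event.error.name}`);
  };
  accelerometer.onreading = () => {
    console.log(`Acceleration: ${accelerometer.x}, ${accelerometer.y}, ${accelerometer.z} m/s2`);
  };
  accelerometer.start();
} catch (error) {
  // A ReferenceError here means the interface isn't exposed at all;
  // a SecurityError means access is blocked by browser policy.
  console.log(`Accelerometer construction failed: ${error.name}`);
}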

Generic Sensor APIs in Chrome

At the time of writing, Chrome supports several sensors that you can experiment with.

Motion sensors:

  • Accelerometer
  • Gyroscope
  • LinearAccelerationSensor
  • AbsoluteOrientationSensor
  • RelativeOrientationSensor

Environmental sensors:

  • AmbientLightSensor
  • Magnetometer

You can enable Generic Sensor APIs for development purposes by turning on a feature flag. Go to chrome://flags/#enable-generic-sensor to enable motion sensors or chrome://flags/#enable-generic-sensor-extra-classes to enable environmental sensors. Restart Chrome and you should be good to go.

More information on browser implementation status can be found on chromestatus.com.

Motion sensors are available as an origin trial

In order to get your valuable feedback, the Generic Sensor API will be available in Chrome 62 as an origin trial. You will need to request a token so that the feature is automatically enabled for your origin, without the need to enable the Chrome flag.

Need to renew your origin trial token? No worries. Use this form to renew your token and don't forget to leave your valuable feedback.

What are all these sensors? How can I use them?

Sensors are a fairly specialized area that might need a brief introduction. If you are familiar with sensors, you can jump right to the hands-on coding section. Otherwise, let's look at each supported sensor in detail.

Accelerometer and linear acceleration sensor

Accelerometer sensor measurements
Figure 1: Accelerometer sensor measurements

The Accelerometer sensor measures acceleration of a device hosting the sensor on three axes (X, Y and Z). This sensor is an inertial sensor, meaning that when the device is in linear free fall, the total measured acceleration would be 0 m/s2, and when a device is lying flat on a table, the acceleration in the upwards direction (Z axis) will be equal to the Earth's gravity, i.e. g ≈ +9.8 m/s2, as it is measuring the force of the table pushing the device upwards. If you push the device to the right, acceleration along the X axis would be positive, or negative if the device is accelerated from right to left.

Accelerometers can be used for things like step counting, motion sensing, or simple device orientation. Quite often, accelerometer measurements are combined with data from other sources in order to create fusion sensors, such as orientation sensors.

The LinearAccelerationSensor measures acceleration that is applied to the device hosting the sensor, excluding the contribution of gravity. When a device is at rest, for instance lying flat on the table, the sensor would measure ≈ 0 m/s2 acceleration on all three axes.

Gyroscope

Gyroscope sensor measurements
Figure 2: Gyroscope sensor measurements

The Gyroscope sensor measures angular velocity in rad/s around the device's local X, Y and Z axes. Most consumer devices have mechanical (MEMS) gyroscopes, which are inertial sensors that measure rotation rate based on the inertial Coriolis force. MEMS gyroscopes are prone to drift caused by the sensor's gravitational sensitivity, which deforms the sensor's internal mechanical system. Gyroscopes oscillate at relatively high frequencies, e.g., tens of kHz, and therefore might consume more power compared to other sensors.

Orientation sensors

AbsoluteOrientation sensor measurements
Figure 3: AbsoluteOrientation sensor measurements

The AbsoluteOrientationSensor is a fusion sensor that measures rotation of a device in relation to the Earth’s coordinate system, while the RelativeOrientationSensor provides data representing rotation of a device hosting motion sensors in relation to a stationary reference coordinate system.

All modern 3D JavaScript frameworks support quaternions and rotation matrices to represent rotation; however, if you use WebGL directly, the OrientationSensor interface has convenient methods for WebGL-compatible rotation matrices. Here are a few snippets:

three.js

let torusGeometry = new THREE.TorusGeometry(7, 1.6, 4, 3, 6.3);
let material = new THREE.MeshBasicMaterial({ color: 0x0071C5 });
let torus = new THREE.Mesh(torusGeometry, material);
scene.add(torus);

// Update mesh rotation using quaternion.
const sensorAbs = new AbsoluteOrientationSensor();
sensorAbs.onreading = () => torus.quaternion.fromArray(sensorAbs.quaternion);
sensorAbs.start();

// Update mesh rotation using rotation matrix.
const sensorRel = new RelativeOrientationSensor();
let rotationMatrix = new Float32Array(16);
torus.matrixAutoUpdate = false; // We set the matrix manually on each reading.
sensorRel.onreading = () => {
    sensorRel.populateMatrix(rotationMatrix);
    torus.matrix.fromArray(rotationMatrix);
}
sensorRel.start();

BABYLON

const mesh = BABYLON.Mesh.CreateCylinder("mesh", 0.9, 0.3, 0.6, 9, 1, scene);
const sensorRel = new RelativeOrientationSensor({frequency: 30});
sensorRel.onreading = () => mesh.rotationQuaternion = BABYLON.Quaternion.FromArray(sensorRel.quaternion);
sensorRel.start();

WebGL

// Initialize sensor and update model matrix when new reading is available.
let modMatrix = new Float32Array([1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]);
const sensorAbs = new AbsoluteOrientationSensor({frequency: 60});
sensorAbs.onreading = () => sensorAbs.populateMatrix(modMatrix);
sensorAbs.start();

// Somewhere in the rendering code, update the model matrix uniform.
gl.uniformMatrix4fv(modMatrixAttr, false, modMatrix);

Orientation sensors enable various use cases, such as immersive gaming, augmented and virtual reality.

For more information about motion sensors, advanced use cases, and requirements, please check the motion sensors explainer document.

Let’s code!

The Generic Sensor API is very simple and easy to use! The Sensor interface has start() and stop() methods to control sensor state, and several event handlers for receiving notifications about sensor activation, errors, and newly available readings. The concrete sensor classes usually add their specific reading attributes to the base class.
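
To make that concrete, here is a minimal sketch using the Accelerometer class; every concrete sensor class follows the same pattern:

const sensor = new Accelerometer({frequency: 10});
sensor.onactivate = () => console.log('Sensor is active');
sensor.onerror = event => console.error(event.error.name, event.error.message);
sensor.onreading = () => console.log(sensor.x, sensor.y, sensor.z);
sensor.start();
// Call sensor.stop() when readings are no longer needed.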

Development environment

During development you'll be able to use sensors through localhost. The simplest way is to serve your web application using Web Server for Chrome. If you are developing for mobile devices, set up port forwarding for your local server, and you are ready to rock!

When your code is ready, deploy it on a server that supports HTTPS. GitHub Pages are served over HTTPS, making it a great place to share your demos.

Note: Don't forget to enable Generic Sensor API in Chrome.

3D model rotation

In this simple example, we use the data from an absolute orientation sensor to modify the rotation quaternion of a 3D model. The model is a three.js Object3D class instance that has a quaternion property. The following code snippet from the orientation phone demo illustrates how the absolute orientation sensor can be used to rotate a 3D model.

function initSensor() {
    sensor = new AbsoluteOrientationSensor({frequency: 60});
    sensor.onreading = () => model.quaternion.fromArray(sensor.quaternion);
    sensor.onerror = event => {
        if (event.error.name === 'NotReadableError') {
            console.log("Sensor is not available.");
        }
    }
    sensor.start();
}

The device's orientation will be reflected in 3D model rotation within the WebGL scene.

Figure 4: Sensor updates orientation of a 3D model

Punchmeter

The following code snippet, extracted from the punchmeter demo, illustrates how the linear acceleration sensor can be used to calculate the maximum velocity of a device under the assumption that it is initially lying still.

const accel = new LinearAccelerationSensor({frequency: 60});

let maxSpeed = 0;
let vx = 0; // Velocity along the X axis.
let ax = 0; // Previous acceleration reading.
let t = 0;  // Previous reading timestamp.

function onreading() {
    let dt = (accel.timestamp - t) * 0.001; // In seconds.
    // Average the two latest readings and integrate to get velocity.
    vx += (accel.x + ax) / 2 * dt;

    let speed = Math.abs(vx);

    if (maxSpeed < speed) {
        maxSpeed = speed;
    }

    t = accel.timestamp;
    ax = accel.x;
}

accel.addEventListener('reading', onreading);
accel.start();

The current velocity is calculated as an approximation to the integral of the acceleration function.
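
In other words, each reading performs one step of trapezoidal integration (this is the same update the snippet above applies, written out):

$$v_n = v_{n-1} + \frac{a_n + a_{n-1}}{2}\,\Delta t$$

where $a_n$ is the latest acceleration reading along the X axis, $\Delta t$ is the time between readings, and the reported punch speed is the maximum of $|v_n|$.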

Figure 5: Measurement of a punch speed

Privacy and security

Sensor readings are sensitive data which can be subject to various attacks from malicious web pages. Chrome's implementation of the Generic Sensor APIs enforces a few limitations to mitigate possible security and privacy risks. Developers who intend to use the API must take these limitations into account, so let’s briefly list them.

Only HTTPS

Because the Generic Sensor API is a powerful feature, Chrome only allows it in secure contexts. In practice, this means that to use the Generic Sensor API you'll need to access your page through HTTPS. During development you can do so via http://localhost, but for production you'll need HTTPS on your server. See the Security with HTTPS article for best practices and guidelines.

Only the main frame

To prevent iframes from reading sensor data, Sensor objects can be created only within the main frame.

Sensor readings delivery can be suspended

Sensor readings are only accessible to a visible web page, i.e., when the user is actually interacting with it. Moreover, sensor data is not provided if the user moves focus from the main frame to a cross-origin iframe, so that the main frame cannot infer user input.

What’s next?

There is a set of already-specified sensor classes to be implemented in the near future, such as the Proximity sensor and the Gravity sensor; however, thanks to the great extensibility of the Generic Sensor framework, we can anticipate the appearance of even more new classes representing various sensor types.

Another important area of future work is improving the Generic Sensor API itself. The Generic Sensor specification is currently a draft, which means that there is still time to make fixes and bring new functionality that developers need.

You can help!

The sensor specifications are in active development, and we need your feedback to make sure this development goes in the right direction. Try the APIs, either by enabling runtime flags in Chrome or by taking part in the origin trial, and share your experience. Let us know what features would be great to add, or if there is something you would like to modify in the current API.

Please fill out the survey. Also, feel free to file specification issues as well as bugs for the Chrome implementation.


An event for CSS position:sticky

TL;DR

Here's a secret: You may not need scroll events in your next app. Using an IntersectionObserver, I show how you can fire a custom event when position:sticky elements become fixed or when they stop sticking. All without the use of scroll listeners. There's even an awesome demo to prove it:

View demo | Source

Introducing the sticky-change event

An event is the missing feature of CSS position:sticky.

One of the practical limitations of using CSS sticky position is that it doesn't provide a platform signal to know when the property is active. In other words, there's no event to know when an element becomes sticky or when it stops being sticky.

Take the following example, which fixes a <div class="sticky"> 10px from the top of its parent container:

.sticky {
  position: sticky;
  top: 10px;
}

Wouldn't it be nice if the browser told you when an element hits that mark? Apparently I'm not the only one who thinks so. A signal for position:sticky could unlock a number of use cases:

  1. Apply a drop shadow to a banner as it sticks.
  2. As a user reads through your content, record analytics hits to know their progress.
  3. As a user scrolls the page, update a floating TOC widget to the current section.

With these use cases in mind, we've crafted an end goal: create an event that fires when a position:sticky element becomes fixed. Let's call it the sticky-change event:

document.addEventListener('sticky-change', e => {
  const header = e.detail.target;  // header became sticky or stopped sticking.
  const sticking = e.detail.stuck; // true when header is sticky.
  header.classList.toggle('shadow', sticking); // add drop shadow when sticking.

  document.querySelector('.who-is-sticking').textContent = header.textContent;
});

The demo uses this event to give headers a drop shadow when they become fixed. It also updates the new title at the top of the page.

In the demo, effects are applied without scroll events.

Scroll effects without scroll events?

Terminology
Structure of the page.

Let's get some terminology out of the way so I can refer to these names throughout the rest of the post:

  1. Scrolling container - the content area (visible viewport) containing the list of "blog posts".
  2. Headers - blue title in each section that have position:sticky.
  3. Sticky sections - each content section. The text that scrolls under the sticky headers.
  4. "Sticky mode" - when position:sticky is applying to the element.

To know which header enters "sticky mode", we need some way of determining the scroll offset of the scrolling container. That would give us a way to calculate the header that's currently showing. However, that gets pretty tricky to do without scroll events :) The other problem is that position:sticky removes the element from layout when it becomes fixed.

So without scroll events, we've lost the ability to perform layout-related calculations on the headers.

Adding dummy DOM to determine scroll position

Instead of scroll events, we're going to use an IntersectionObserver to determine when headers enter and exit sticky mode. Adding two nodes (aka sentinels) in each sticky section, one at the top and one at the bottom, will act as waypoints for figuring out scroll position. As these markers enter and leave the container, their visibility changes and the Intersection Observer fires a callback.

The hidden sentinel elements.

We need two sentinels to cover four cases of scrolling up and down:

  1. Scrolling down - header becomes sticky when its top sentinel crosses the top of the container.
  2. Scrolling down - header leaves sticky mode as it reaches the bottom of the section and its bottom sentinel crosses the top of the container.
  3. Scrolling up - header leaves sticky mode when its top sentinel scrolls back into view from the top.
  4. Scrolling up - header becomes sticky as its bottom sentinel crosses back into view from the top.

It's helpful to see a screencast of 1-4 in the order they happen:

Intersection Observers fire callbacks when the sentinels enter/leave the scroll container.

The CSS

The sentinels are positioned at the top and bottom of each section. .sticky_sentinel--top sits on the top of the header while .sticky_sentinel--bottom rests at the bottom of the section:

Position of the top and bottom sentinel elements.

:root {
  --default-padding: 16px;
  --header-height: 80px;
}
.sticky {
  position: sticky;
  top: 10px; /* adjust sentinel height/positioning based on this position. */
  height: var(--header-height);
  padding: 0 var(--default-padding);
}
.sticky_sentinel {
  position: absolute;
  left: 0;
  right: 0; /* needs dimensions */
  visibility: hidden;
}
.sticky_sentinel--top {
  /* Adjust the height and top values based on your sticky top position.
  e.g. make the height bigger and adjust the top so observeHeaders()'s
  IntersectionObserver fires as soon as the bottom of the sentinel crosses the
  top of the intersection container. */
  height: 40px;
  top: -24px;
}
.sticky_sentinel--bottom {
  /* Height should match the top of the header when it's at the bottom of the
  intersection container. */
  height: calc(var(--header-height) + var(--default-padding));
  bottom: 0;
}

Setting up the Intersection Observers

Intersection Observers asynchronously observe changes in the intersection of a target element with the document viewport or a parent container. In our case, we're observing intersections with a parent container.

The magic sauce is IntersectionObserver. Each sentinel gets an IntersectionObserver to observe its intersection visibility within the scroll container. When a sentinel scrolls into the visible viewport, we know a header has become fixed or stopped being sticky; likewise when a sentinel exits the viewport.

First, I set up observers for the header and footer sentinels:

/**
 * Notifies when elements w/ the `sticky` class begin to stick or stop sticking.
 * Note: the elements should be children of `container`.
 * @param {!Element} container
 */
function observeStickyHeaderChanges(container) {
  observeHeaders(container);
  observeFooters(container);
}

observeStickyHeaderChanges(document.querySelector('#scroll-container'));

Then, I added an observer to fire when .sticky_sentinel--top elements pass through the top of the scrolling container (in either direction). The observeHeaders function creates the top sentinels and adds them to each section. The observer calculates the intersection of the sentinel with the top of the container and decides whether it's entering or leaving the viewport. That information determines whether the section header is sticking or not.

/**
 * Sets up an intersection observer to notify when elements with the class
 * `.sticky_sentinel--top` become visible/invisible at the top of the container.
 * @param {!Element} container
 */
function observeHeaders(container) {
  const observer = new IntersectionObserver((records, observer) => {
    for (const record of records) {
      const targetInfo = record.boundingClientRect;
      const stickyTarget = record.target.parentElement.querySelector('.sticky');
      const rootBoundsInfo = record.rootBounds;

      // Started sticking.
      if (targetInfo.bottom < rootBoundsInfo.top) {
        fireEvent(true, stickyTarget);
      }

      // Stopped sticking.
      if (targetInfo.bottom >= rootBoundsInfo.top &&
          targetInfo.bottom < rootBoundsInfo.bottom) {
        fireEvent(false, stickyTarget);
      }
    }
  }, {threshold: [0], root: container});

  // Add the top sentinels to each section and attach an observer.
  const sentinels = addSentinels(container, 'sticky_sentinel--top');
  sentinels.forEach(el => observer.observe(el));
}

The observer is configured with threshold: [0] so its callback fires as soon as the sentinel becomes visible.

The process is similar for the bottom sentinel (.sticky_sentinel--bottom). A second observer is created to fire when the footers pass through the bottom of the scrolling container. The observeFooters function creates the sentinel nodes and attaches them to each section. The observer calculates the intersection of the sentinel with the bottom of the container and decides whether it's entering or leaving. That information determines if the section header is sticking or not.

/**
 * Sets up an intersection observer to notify when elements with the class
 * `.sticky_sentinel--bottom` become visible/invisible at the bottom of the
 * container.
 * @param {!Element} container
 */
function observeFooters(container) {
  const observer = new IntersectionObserver((records, observer) => {
    for (const record of records) {
      const targetInfo = record.boundingClientRect;
      const stickyTarget = record.target.parentElement.querySelector('.sticky');
      const rootBoundsInfo = record.rootBounds;
      const ratio = record.intersectionRatio;

      // Started sticking.
      if (targetInfo.bottom > rootBoundsInfo.top && ratio === 1) {
        fireEvent(true, stickyTarget);
      }

      // Stopped sticking.
      if (targetInfo.top < rootBoundsInfo.top &&
          targetInfo.bottom < rootBoundsInfo.bottom) {
        fireEvent(false, stickyTarget);
      }
    }
  }, {threshold: [1], root: container});

  // Add the bottom sentinels to each section and attach an observer.
  const sentinels = addSentinels(container, 'sticky_sentinel--bottom');
  sentinels.forEach(el => observer.observe(el));
}

The observer is configured with threshold: [1] so its callback fires when the entire node is within view.

Lastly, there are my two utilities for firing the sticky-change custom event and generating the sentinels:

/**
 * @param {!Element} container
 * @param {string} className
 */
function addSentinels(container, className) {
  return Array.from(container.querySelectorAll('.sticky')).map(el => {
    const sentinel = document.createElement('div');
    sentinel.classList.add('sticky_sentinel', className);
    return el.parentElement.appendChild(sentinel);
  });
}

/**
 * Dispatches the `sticky-change` custom event on the target element.
 * @param {boolean} stuck True if `target` is sticky.
 * @param {!Element} target Element to fire the event on.
 */
function fireEvent(stuck, target) {
  const e = new CustomEvent('sticky-change', {detail: {stuck, target}});
  document.dispatchEvent(e);
}

That's it!

Final demo

We created a custom event when elements with position:sticky become fixed and added scroll effects without the use of scroll events.

View demo | Source

Conclusion

I've often wondered if IntersectionObserver would be a helpful tool to replace some of the scroll event-based UI patterns that have developed over the years. Turns out the answer is yes and no. The semantics of the IntersectionObserver API make it hard to use for everything. But as I've shown here, you can use it for some interesting techniques.

Another way to detect style changes?

Not really. What we needed was a way to observe style changes on a DOM element. Unfortunately, nothing in the web platform APIs allows you to watch style changes.

A MutationObserver would be a logical first choice but that doesn't work for most cases. For example, in the demo, we'd receive a callback when the sticky class is added to an element, but not when the element's computed style changes. Recall that the sticky class was already declared on page load.
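
To illustrate the gap, here's a minimal sketch of what a MutationObserver can report; it only sees DOM mutations such as attribute edits, never computed-style changes:

const observer = new MutationObserver(mutations => {
  for (const mutation of mutations) {
    // Fires for DOM changes such as class attribute edits...
    console.log(`${mutation.attributeName} changed on`, mutation.target);
  }
});
// ...but never fires when an element's computed style changes.
observer.observe(document.querySelector('.sticky'), {attributes: true});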

In the future, a "Style Mutation Observer" extension to Mutation Observers might be useful to observe changes to an element's computed styles, such as position: sticky taking effect.


Abortable fetch

The original GitHub issue for "Aborting a fetch" was opened in 2015. Now, if I take 2015 away from 2017 (the current year), I get 2. This demonstrates a bug in maths, because 2015 was in fact "forever" ago.

2015 was when we first started exploring aborting ongoing fetches, and after 780 GitHub comments, a couple of false starts, and 5 pull requests, we finally have abortable fetch landing in browsers, with the first being Firefox 57.

I'll dive into the history later, but first, the API:

The controller + signal manoeuvre

Meet the AbortController and AbortSignal:

const controller = new AbortController();
const signal = controller.signal;

The controller only has one method:

controller.abort();

When you do this, it notifies the signal:

signal.addEventListener('abort', () => {
  // Logs true:
  console.log(signal.aborted);
});

This API is provided by the DOM standard, and that's the entire API. It's deliberately generic so it can be used by other web standards and JavaScript libraries.

Abort signals and fetch

Fetch can take an AbortSignal. For instance, here's how you'd make a fetch timeout after 5 seconds:

const controller = new AbortController();
const signal = controller.signal;

setTimeout(() => controller.abort(), 5000);

fetch(url, { signal }).then(response => {
  return response.text();
}).then(text => {
  console.log(text);
});

When you abort a fetch, it aborts both the request and response, so any reading of the response body (such as response.text()) is also aborted.

Note: It's ok to call .abort() after the fetch has already completed; fetch simply ignores it.

Here's a demo – At time of writing, the only browser which supports this is Firefox 57. Also, brace yourself, no one with any design skill was involved in creating the demo.

Alternatively, the signal can be given to a request object and later passed to fetch:

const controller = new AbortController();
const signal = controller.signal;
const request = new Request(url, { signal });

fetch(request);

This works because request.signal is an AbortSignal.

Note: Technically, request.signal isn't the same signal you pass to the constructor. It's a new AbortSignal that mimics the signal passed to the constructor. This means every Request has a signal, whether one is given to its constructor or not.
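
A quick illustration of that note (the URL is just a placeholder):

const plainRequest = new Request('/data.json');
console.log(plainRequest.signal instanceof AbortSignal); // true
console.log(plainRequest.signal.aborted); // false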

Reacting to an aborted fetch

When you abort an async operation, the promise rejects with a DOMException named AbortError:

fetch(url, { signal }).then(response => {
  return response.text();
}).then(text => {
  console.log(text);
}).catch(err => {
  if (err.name === 'AbortError') {
    console.log('Fetch aborted');
  } else {
    console.error('Uh oh, an error!', err);
  }
});

You don't often want to show an error message if the user aborted the operation, as it isn't an "error" if you successfully do what the user asked. To avoid this, use an if-statement such as the one above to handle abort errors specifically.

Here's an example that gives the user a button to load content, and a button to abort. If the fetch errors, an error is shown, unless it's an abort error:

// This will allow us to abort the fetch.
let controller;

// Abort if the user clicks:
abortBtn.addEventListener('click', () => {
  if (controller) controller.abort();
});

// Load the content:
loadBtn.addEventListener('click', async () => {
  controller = new AbortController();
  const signal = controller.signal;

  // Prevent another click until this fetch is done
  loadBtn.disabled = true;
  abortBtn.disabled = false;

  try {
    // Fetch the content & use the signal for aborting
    const response = await fetch(contentUrl, { signal });
    // Add the content to the page
    output.innerHTML = await response.text();
  }
  catch (err) {
    // Avoid showing an error message if the fetch was aborted
    if (err.name !== 'AbortError') {
      output.textContent = "Oh no! Fetching failed.";
    }
  }

  // These actions happen no matter how the fetch ends
  loadBtn.disabled = false;
  abortBtn.disabled = true;
});

Note: This example uses async functions.

Here's a demo – At time of writing, the only browser which supports this is Firefox 57.

One signal, many fetches

A single signal can be used to abort many fetches at once:

async function fetchStory({ signal }={}) {
  const storyResponse = await fetch('/story.json', { signal });
  const data = await storyResponse.json();

  const chapterFetches = data.chapterUrls.map(async url => {
    const response = await fetch(url, { signal });
    return response.text();
  });

  return Promise.all(chapterFetches);
}

In the above example, the same signal is used for the initial fetch, and for the parallel chapter fetches. Here's how you'd use fetchStory:

const controller = new AbortController();
const signal = controller.signal;

fetchStory({ signal }).then(chapters => {
  console.log(chapters);
});

In this case, calling controller.abort() will abort whichever fetches are in-progress.

The future

Other browsers

Firefox did a great job shipping this first. Their engineers implemented it from the test suite while the spec was still being written. For other browsers, here are the tickets to follow:

  • Edge – closed as "fixed", but it hasn't made it into a released version yet.
  • Chrome.
  • Safari.

In a service worker

I need to finish the spec for the service worker parts, but here's the plan:

As I mentioned before, every Request object has a signal property. Within a service worker, fetchEvent.request.signal will signal abort if the page is no longer interested in the response. As a result, code like this just works:

addEventListener('fetch', event => {
  event.respondWith(fetch(event.request));
});

If the page aborts the fetch, fetchEvent.request.signal signals abort, so the fetch within the service worker also aborts.

If you're fetching something other than event.request, you'll need to pass the signal to your custom fetch(es).

addEventListener('fetch', event => {
  const url = new URL(event.request.url);

  if (event.request.method == 'GET' && url.pathname == '/about/') {
    // Modify the URL
    url.searchParams.set('from-service-worker', 'true');
    // Fetch, but pass the signal through
    event.respondWith(
      fetch(url, { signal: event.request.signal })
    );
  }
});

Follow the spec to track this – I'll add links to browser tickets once it's ready for implementation.

The history

Yeah… it took a long time for this relatively simple API to come together. Here's why:

API disagreement

As you can see, the GitHub discussion is pretty long. There's a lot of nuance in that thread (and some lack of nuance), but the key disagreement is that one group wanted the abort method to exist on the object returned by fetch(), whereas the other wanted a separation between getting the response and affecting the response.

These requirements are incompatible, so one group wasn't going to get what they wanted. If that's you, sorry! If it makes you feel better, I was also in that group. But seeing AbortSignal fit the requirements of other APIs makes it seem like the right choice. Also, allowing chained promises to become abortable would have been very complicated, if not impossible.

If you wanted to return an object that provides a response, but can also abort, you could create a simple wrapper:

function abortableFetch(request, opts) {
  const controller = new AbortController();
  const signal = controller.signal;

  return {
    abort: () => controller.abort(),
    ready: fetch(request, { ...opts, signal })
  };
}
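
And a usage sketch; the cancel button and URL here are hypothetical:

const {abort, ready} = abortableFetch('/story.json');
cancelBtn.addEventListener('click', abort);
ready
  .then(response => response.text())
  .then(text => console.log(text))
  .catch(err => {
    // Ignore aborts; rethrow real errors.
    if (err.name !== 'AbortError') throw err;
  });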

False starts in TC39

There was an effort to make a cancelled action distinct from an error. This included a third promise state to signify "cancelled", and some new syntax to handle cancellation in both sync and async code:

Not real code — proposal was withdrawn

try {
  // Start spinner, then:
  await someAction();
}
catch cancel (reason) {
  // Maybe do nothing?
}
catch (err) {
  // Show error message
}
finally {
  // Stop spinner
}

The most common thing to do when an action is cancelled is nothing. The above proposal separated cancellation from errors so you didn't need to handle abort errors specifically. catch cancel let you hear about cancelled actions, but most of the time you wouldn't need to.

This got to stage 1 in TC39, but consensus wasn't achieved, and the proposal was withdrawn.

Our alternative proposal, AbortController, didn't require any new syntax, so it didn't make sense to spec it within TC39. Everything we needed from JavaScript was already there, so we defined the interfaces within the web platform, specifically the DOM standard. Once we'd made that decision, the rest came together relatively quickly.

Large spec change

XMLHttpRequest has been abortable for years, but the spec was pretty vague. It wasn't clear at which points the underlying network activity could be avoided, or terminated, or what happened if there was a race condition between abort() being called and the fetch completing.

We wanted to get it right this time, but that resulted in a large spec change that needed a lot of reviewing (that's my fault, and a huge thanks to Anne van Kesteren and Domenic Denicola for dragging me through it) and a decent set of tests.

But we're here now! We have a new web primitive for aborting async actions, and multiple fetches can be controlled at once! Further down the line, we'll look at enabling priority changes throughout the life of a fetch, and a higher-level API to observe fetch progress.


WebVR changes in Chrome 62

The current WebVR origin trial is ending on November 14, 2017, shortly after the stable release of Chrome 62. We have begun a new trial with the WebVR 1.1 API in Chrome 62 that will continue through Chrome 64.

The new trial includes some API behavior updates that are consistent with the direction of the forthcoming WebVR 2.0 spec:

  • Use of WebVR is restricted in cross-origin iframes. If you intend for embedded cross-origin iframes to be able to use WebVR, add the attribute allow="vr" to the iframe tag, or use a Feature-Policy header (spec discussion, bug).
  • Limit use of getFrameData() and submitFrame() to VRDisplay.requestAnimationFrame() (spec discussion, bug).
  • window.requestAnimationFrame() does not fire if the page is not visible, meaning it will not fire on Android while WebVR is presenting (spec discussion, bug).
  • The synthetic click event at viewport (0, 0) has been removed (for both Cardboard and the Daydream controller touchpad) (bug). The vrdisplayactivate event is now considered a user gesture, and may be used to request presentation and begin media playback, without relying on the click event. Code that was previously relying on click event handlers for input should be converted to check for gamepad button presses. (Example implementation)
  • Chrome may exit presentation if the page takes more than 5 seconds to display the first frame (code change). It is recommended that the page display within two seconds and that a splash screen be used if needed.

Your current WebVR Origin Trial tokens will not be recognized by Chrome 62. To participate in this new trial, please use the sign-up form.



The Intl.PluralRules API

Iñtërnâtiônàlizætiøn is hard. Handling plurals is one of many problems that might seem simple, until you realize every language has its own pluralization rules.

For English pluralization, there are only two possible outcomes. Let’s use the word “cat” as an example:

  • 1 cat, i.e. the 'one' form, known as the singular in English
  • 2 cats, but also 42 cats, 0.5 cats, etc., i.e. the 'other' form (the only other), known as the plural in English.

The brand new Intl.PluralRules API tells you which form applies in a language of your choice based on a given number.

const pr = new Intl.PluralRules('en-US');
pr.select(0);   // 'other' (e.g. '0 cats')
pr.select(0.5); // 'other' (e.g. '0.5 cats')
pr.select(1);   // 'one'   (e.g. '1 cat')
pr.select(1.5); // 'other' (e.g. '1.5 cats')
pr.select(2);   // 'other' (e.g. '2 cats')

Unlike other internationalization APIs, Intl.PluralRules is a low-level API that does not perform any formatting itself. Instead, you can build your own formatter on top of it:

const suffixes = new Map([
    // Note: in real-world scenarios, you wouldn’t hardcode the plurals
    // like this; they’d be part of your translation files.
    ['one',   'cat'],
    ['other', 'cats'],
]);
const pr = new Intl.PluralRules('en-US');
const formatCats = (n) => {
    const rule = pr.select(n);
    const suffix = suffixes.get(rule);
    return `${n} ${suffix}`;
};

formatCats(1);   // '1 cat'
formatCats(0);   // '0 cats'
formatCats(0.5); // '0.5 cats'
formatCats(1.5); // '1.5 cats'
formatCats(2);   // '2 cats'

For the relatively simple English pluralization rules, this might seem like overkill; however, not all languages follow the same rules. Some languages have only a single pluralization form, and some languages have multiple forms. Welsh, for example, has six different pluralization forms!

const suffixes = new Map([
    ['zero',  'cathod'],
    ['one',   'gath'],
    // Note: the `two` form happens to be the same as the `'one'`
    // form for this word specifically, but that is not true for
    // all words in Welsh.
    ['two',   'gath'],
    ['few',   'cath'],
    ['many',  'chath'],
    ['other', 'cath'],
]);
const pr = new Intl.PluralRules('cy');
const formatWelshCats = (n) => {
    const rule = pr.select(n);
    const suffix = suffixes.get(rule);
    return `${n} ${suffix}`;
};

formatWelshCats(0);   // '0 cathod'
formatWelshCats(1);   // '1 gath'
formatWelshCats(1.5); // '1.5 cath'
formatWelshCats(2);   // '2 gath'
formatWelshCats(3);   // '3 cath'
formatWelshCats(6);   // '6 chath'
formatWelshCats(42);  // '42 cath'

To implement correct pluralization while supporting multiple languages, a database of languages and their pluralization rules is needed. The Unicode CLDR includes this data, but to use it in JavaScript, it has to be embedded and shipped alongside your other JavaScript code, increasing load times, parse times, and memory usage. The Intl.PluralRules API shifts that burden to the JavaScript engine, enabling more performant internationalized pluralizations.

Note: While CLDR data includes the form mappings per language, it doesn’t come with a list of singular/plural forms for individual words. You still have to translate and provide those yourself, just like before.

Ordinal numbers

The Intl.PluralRules API supports various selection rules through the type property on the optional options argument. Its implicit default value (as used in the above examples) is 'cardinal'. To figure out the ordinal indicator for a given number instead (e.g. 1st, 2nd, 3rd, etc.), use { type: 'ordinal' }:

const pr = new Intl.PluralRules('en-US', {
    type: 'ordinal'
});
const suffixes = new Map([
    ['one',    'st'],
    ['two',    'nd'],
    ['few',    'rd'],
    ['other',  'th'],
]);
const formatOrdinals = (n) => {
    const rule = pr.select(n);
    const suffix = suffixes.get(rule);
    return `${n}${suffix}`;
};

formatOrdinals(0);   // '0th'
formatOrdinals(1);   // '1st'
formatOrdinals(2);   // '2nd'
formatOrdinals(3);   // '3rd'
formatOrdinals(4);   // '4th'
formatOrdinals(11);  // '11th'
formatOrdinals(21);  // '21st'
formatOrdinals(42);  // '42nd'
formatOrdinals(103); // '103rd'

Intl.PluralRules is a low-level API, especially when compared to other internationalization features. As such, even if you’re not using it directly, you might be using a library or framework that depends on it.

Intl.PluralRules is available by default in V8 v6.3.172, Chrome 63, and Firefox 58. As this API becomes more widely available, you’ll find libraries such as Globalize dropping their dependency on hardcoded CLDR databases in favor of the native functionality, thereby improving load-time performance, parse-time performance, run-time performance, and memory usage.
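
If you want to use the native functionality today while staying safe on older browsers, a simple feature check does the trick:

if (typeof Intl === 'object' && 'PluralRules' in Intl) {
  // Native Intl.PluralRules is available; use it directly.
} else {
  // Fall back to a library, or ship the CLDR data you need yourself.
}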


Animating a Blur

Blurring is a great way to redirect a user's focus. Making some visual elements appear blurred while keeping others in focus naturally directs the user's attention. Users ignore the blurred content and instead focus on the content they can read. One example would be a list of icons that display details about the individual items when hovered over. During that time, the remaining choices could be blurred to redirect the user to the newly displayed information.

TL;DR:

Animating a blur is not really an option as it is very slow. Instead, pre-compute a series of increasingly blurred versions and cross-fade between them. My colleague Yi Gu wrote a library to take care of everything for you! Take a look at our demo.

However, this technique can be quite jarring when applied without any transitional period. Animating a blur — transitioning from unblurred to blurred — seems like a reasonable choice, but if you've ever tried doing this on the web, you probably found that the animations are anything but smooth, as this demo shows if you don't have a powerful machine. Can we do better?

Note: Always test your web apps on mobile devices. Desktop machines tend to have deceptively powerful GPUs.

The problem

Markup is turned into textures by the CPU. Textures are uploaded to the GPU. The GPU draws these textures to the framebuffer using shaders. The blurring happens in the shader.

As of now, we can't make animating a blur work efficiently. We can, however, find a work-around that looks good enough but is, technically speaking, not an animated blur. To get started, let's first understand why the animated blur is slow. To blur elements on the web, there are two techniques: the CSS filter property and SVG filters. Thanks to increased support and ease of use, CSS filters are typically used. Unfortunately, if you are required to support Internet Explorer, you have no choice but to use SVG filters, as IE 10 and 11 support those but not CSS filters. The good news is that our workaround for animating a blur works with both techniques. So let's try to find the bottleneck by looking at DevTools:

If you enable "Paint Flashing" in DevTools, you won’t see any flashes at all. It looks like no repaints are happening. And that's technically correct as a "repaint" refers to the CPU having to repaint the texture of a promoted element. Whenever an element is both promoted and blurred, the blur is applied by the GPU using a shader.

Both SVG filters and CSS filters use convolution filters to apply a blur. Convolution filters are fairly expensive, as for every output pixel a number of input pixels have to be considered. The bigger the image or the bigger the blur radius, the more costly the effect is.

And that's where the problem lies: we are running a rather expensive GPU operation every frame, blowing our frame budget of 16ms and therefore ending up well below 60fps.

Down the rabbit hole

So what can we do to make this run smoothly? We can use sleight of hand! Instead of animating the actual blur value (the radius of the blur), we pre-compute a couple of blurred copies where the blur value increases exponentially, then cross-fade between them using opacity.

The cross-fade is a series of overlapping opacity fade-ins and fade-outs. If we have four blur stages for example, we fade out the first stage while fading in the second stage at the same time. Once the second stage reaches 100% opacity and the first one has reached 0%, we fade out the second stage while fading in the third stage. Once that is done, we finally fade out the third stage and fade in the fourth and final version. In this scenario, each stage would take ¼ of the total desired duration. Visually, this looks very similar to a real, animated blur.
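
Here's a minimal sketch of the staging setup; the element names are hypothetical, and the actual cross-fade scheduling and cleanup are left out (the library mentioned above handles those):

const STAGES = 4;
const original = document.querySelector('.to-blur'); // Hypothetical element.
const copies = [];
for (let i = 0; i < STAGES; i++) {
  const copy = original.cloneNode(true);
  copy.style.filter = `blur(${2 ** i}px)`;  // 1px, 2px, 4px, 8px.
  copy.style.opacity = i === 0 ? '1' : '0'; // Start on the sharpest copy.
  copy.style.transition = 'opacity 250ms linear';
  original.parentNode.appendChild(copy);
  copies.push(copy);
}
// Cross-fade: fade copy i out while fading copy i + 1 in, one pair at a time.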

In our experiments increasing the blur radius exponentially per stage yielded the best visual results. Example: If we have four blur stages we'd apply filter: blur(2^n) to each stage, i.e. stage 0: 1px, stage 1: 2px, stage 2: 4px and stage 3: 8px. If we force each of these blurred copies onto their own layer (called "promoting") using will-change: transform, changing opacity on these elements should be super-duper fast. In theory, this would allow us to front-load the expensive work of blurring. Turns out, the logic is flawed. If you run this demo, you'll see that framerate is still below 60fps, and the blurring is actually worse than before.

DevTools showing a trace where the GPU has long periods of busy time.

A quick look into DevTools reveals that the GPU is still extremely busy and stretches each frame to ~90ms. But why? We are not changing the blur value anymore, only the opacity, so what's happening? The problem lies, once again, in the nature of the blur effect: As explained before, if the element is both promoted and blurred, the effect is applied by the GPU. So even though we are not animating the blur value anymore, the texture itself is still unblurred and needs to be re-blurred every frame by the GPU. The reason for the frame rate being even worse than before stems from the fact that compared to the naïve implementation the GPU actually has more work than before as most of the time two textures are visible that need to be blurred independently.

What we came up with is not pretty, but it makes the animation blazingly fast. We go back to not promoting the to-be-blurred element, but instead promote a parent wrapper. If an element is both blurred and promoted, the effect is applied by the GPU. This is what made our demo slow. If the element is blurred but not promoted, the CPU calculates the blur instead and rasterizes it to the nearest parent texture. In our case that's the promoted parent wrapper element. The blurred image is now the texture of the parent element and can be re-used for all future frames. This only works because we know that the blurred elements are not animated and caching them is actually beneficial. Here's a demo that implements this technique. I wonder what the Moto G4 thinks of this approach? Spoiler alert: it thinks it's great:

DevTools showing a trace where the GPU has lots of idle time.

Now we've got lots of headroom on the GPU and a silky-smooth 60fps. We did it!

Productionizing

In our demo we duplicated a DOM structure multiple times to have copies of the content to blur at different strengths. You might be wondering how this would work in a production environment as that might have some unintended side-effects with the author's CSS styles or even their JavaScript. You are right. Enter Shadow DOM!

While most people think about Shadow DOM as a way to attach "internal" elements to their Custom Elements, it is also an isolation and performance primitive! JavaScript and CSS cannot pierce Shadow DOM boundaries, which allows us to duplicate content without interfering with the developer's styles or application logic. We already have a <div> element for each copy to rasterize onto, and we now use these <div>s as shadow hosts. We create a ShadowRoot using attachShadow({mode: 'closed'}) and attach a copy of the content to the ShadowRoot instead of the <div> itself. We have to make sure to also copy all stylesheets into the ShadowRoot to guarantee that our copies are styled the same way as the original.
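
A sketch of creating one such isolated copy; contentToBlur and the host element are hypothetical names:

const host = document.querySelector('.blur-stage'); // Hypothetical host <div>.
const shadowRoot = host.attachShadow({mode: 'closed'});
// contentToBlur is a hypothetical reference to the content being copied.
shadowRoot.appendChild(contentToBlur.cloneNode(true));
// Copy stylesheets so the copy is styled exactly like the original.
document.querySelectorAll('style, link[rel="stylesheet"]').forEach(sheet => {
  shadowRoot.appendChild(sheet.cloneNode(true));
});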

Note: In most cases — especially when writing custom elements — we advise against using closed ShadowRoots. Find out more in Eric's article.

Some browsers do not support Shadow DOM v1 and for those we fall back to just duplicating the content and hoping for the best that nothing breaks. We could use the Shadow DOM polyfill with ShadyCSS, but we did not implement this in our library.

And there you go. After our journey down Chrome's rendering pipeline we figured out how we can animate blurs efficiently across browsers!

Conclusion

This kind of effect is not to be used lightly. Because we copy DOM elements and force them onto their own layer, we can push the limits of lower-end devices. Copying all stylesheets into each ShadowRoot is a potential performance risk as well, so you should decide whether you would rather adjust your logic and styles to not be affected by copies in the LightDOM, or use our ShadowDOM technique. But sometimes our technique might be a worthwhile investment. Take a look at the code in our GitHub repository as well as the demo, and hit me up on Twitter if you have any questions!


What's New In DevTools (Chrome 63)

Note: The video version of these release notes will be published around early-December 2017.

Welcome back! New features coming to DevTools in Chrome 63 include multi-client remote debugging support, Workspaces 2.0, four new audits, simulating push notifications with custom data, and triggering background sync events with custom tags.

Note: You can check what version of Chrome you're running at chrome://version. Chrome auto-updates to a new major version about every 6 weeks.

Multi-client remote debugging support

If you've ever tried debugging an app from an IDE like VS Code or WebStorm, you've probably discovered that opening DevTools messes up your debug session. This issue also made it impossible to use DevTools to debug WebDriver tests. See the video below for an example of the issue in VS Code.

As of Chrome 63, DevTools now supports multiple remote debugging clients by default, no configuration needed. Watch the video below to see an example of VS Code and DevTools in action, side-by-side.

Multi-client remote debugging was the most popular DevTools issue on Crbug, and number 3 across the entire Chromium project. Multi-client support also opens up quite a few interesting opportunities for integrating other tools with DevTools, or for using those tools in new ways. For example:

  • Two WebSocket protocol clients, such as two Puppeteer sessions, can now connect to the same tab simultaneously (see the sketch after this list).
  • Chrome Extensions using the chrome.debugger API can now run at the same time as DevTools.
  • Multiple different Chrome Extensions can now use the chrome.debugger API on the same tab simultaneously.
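
For instance, here's a rough sketch of two protocol clients attached to the same Chrome instance. It uses Puppeteer's real launch(), connect(), and wsEndpoint() APIs, but the scenario is simplified for illustration:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  // Attach a second client to the same browser over the DevTools protocol.
  const second = await puppeteer.connect({
    browserWSEndpoint: browser.wsEndpoint(),
  });
  // Both clients can now drive the same Chrome instance at the same time.
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await second.disconnect();
  await browser.close();
})();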

Workspaces 2.0

Workspaces have been around for some time in DevTools. This feature enables you to use DevTools as your IDE. You make some changes to your source code within DevTools, and the changes persist to the local version of your project on your file system.

Workspaces 2.0 builds off of 1.0, adding a more helpful UX and improved auto-mapping of transpiled code. This feature was originally scheduled to be released shortly after Chrome Developer Summit (CDS) 2016, but the team postponed it to sort out some issues.

Check out the "Authoring" part (around 14:28) of the DevTools talk from CDS 2016 to see Workspaces 2.0 in action.

Four new audits

In Chrome 63, the Audits panel has four new audits:

  • Serve images as WebP.
  • Use images with appropriate aspect ratios.
  • Avoid frontend JavaScript libraries with known security vulnerabilities.
  • Browser errors logged to the Console.

See Run Lighthouse in Chrome DevTools to learn how to use the Audits panel to improve the quality of your pages.

See Lighthouse to learn more about the project that powers the Audits panel.

Simulate push notifications with custom data

Simulating push notifications has been around for a while in DevTools, with one limitation: you couldn't send custom data. But with the new Push text box coming to the Service Workers pane in Chrome 63, now you can. Try it now:

  1. Go to Simple Push Demo.
  2. Click Enable Push Notifications.
  3. Click Allow when Chrome prompts you to allow notifications.
  4. Open DevTools.
  5. Go to the Service Workers pane.
  6. Write something in the Push text box.

    Figure 1. Simulating a push notification with custom data via the Push text box in the Service Workers pane
  7. Click Push to send the notification.

    Figure 2. The simulated push notification
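
On the receiving side, a service worker surfaces the custom data in its push handler. Here's a minimal sketch, not the demo's exact code:

self.addEventListener('push', event => {
  // The Push text box contents arrive as the message payload.
  const payload = event.data ? event.data.text() : '(no payload)';
  event.waitUntil(
    self.registration.showNotification('Push received', {body: payload})
  );
});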

Trigger background sync events with custom tags

Triggering background sync events has also been in the Service Workers pane for some time, but now you can send custom tags:

  1. Open DevTools.
  2. Go to the Service Workers pane.
  3. Enter some text in the Sync text box.
  4. Click Sync.
Figure 3. After clicking Sync, DevTools sends a background sync event with the custom tag update-content to the service worker
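
The corresponding service worker side might look like this sketch, where updateContent() is a hypothetical function:

self.addEventListener('sync', event => {
  // The custom tag entered in the Sync text box shows up as event.tag.
  if (event.tag === 'update-content') {
    event.waitUntil(updateContent());
  }
});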

Feedback

The best place to discuss any of the features or changes you see here is the google-chrome-developer-tools@googlegroups.com mailing list. You can also tweet us at @ChromeDevTools if you're short on time. If you're sure that you've encountered a bug in DevTools, please open an issue.

Previous release notes

See the devtools-whatsnew tag for links to all previous DevTools release notes.


New in Chrome 62

And there’s plenty more!

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 62!

Note: Want the full list of changes? Check out the Chromium source repository change list.

Network Quality Indicator

The Network Information API has been available in Chrome for a while, but it only provides theoretical network speeds given the user’s connection. Imagine you’re on WiFi, but connected to a cellular hotspot that only has 2G speeds. The API would still report WiFi!

console.log(navigator.connection.type);
> wifi

In Chrome 62, the API has been expanded to provide actual network performance metrics from the client. Using these network quality signals, you can tailor content to the network. For example, on very slow connections, you could improve page load performance by serving a reduced version.

To simplify your application logic, the API returns the measured network performance in terms of how it would compare to a cellular connection. For example, connected to a super fast fiber connection, the API would report 4G.

console.log(navigator.connection.effectiveType);
> 4g
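
The connection object also fires change events, so you can react as measured quality shifts; the thresholds below are illustrative:

navigator.connection.addEventListener('change', () => {
  const type = navigator.connection.effectiveType;
  if (type === 'slow-2g' || type === '2g') {
    // Swap in low-resolution images, defer non-critical requests, etc.
  }
});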

These signals will also be available as HTTP request headers and enabled via Client Hints. Check out the sample and have a look at the spec for a deeper dive.

OpenType Variable Fonts

Traditionally, one font contained only a single instance of a font family, for example one weight or one stretch. If you wanted regular, bold and italic, you’d need to include three separate fonts, increasing the weight of your page.

An OpenType variable font is the equivalent of multiple individual fonts compactly packaged within a single font file. By adjusting the font-variation-settings CSS property, the stretch, style, weight, and more can easily be adjusted, providing an infinite number of stylistic variations. Those three fonts can now be combined into a single, compact file.

.heading {
  font-family: "Avenir Next Variable";
  font-size: 48px;
  font-variation-settings: 'wght' 700, 'wdth' 75;
}
.content {
  font-family: "Avenir Next Variable";
  font-size: 24px;
  font-variation-settings: 'wght' 400;
}

OpenType variable fonts give us a powerful new tool to create responsive typography and reduce our page weight. Check out Introducing OpenType Variable Fonts by John Hudson for more details.

Media capture from DOM elements

You can now live-capture content into a MediaStream directly from HTMLMediaElements like audio and video, with the Media Capture from DOM Elements API.

After invoking captureStream() on an HTML media element, the streamed content can be manipulated, processed, sent remotely or recorded. Imagine using web audio to create your own equalizer or vocoder. Or stream the content to a remote site using WebRTC. The possibilities are almost endless.
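
As a minimal sketch, here's how you might record a few seconds of a playing <video> element; the element selector and MIME type are illustrative:

const video = document.querySelector('#player'); // Hypothetical element.
const stream = video.captureStream();
const recorder = new MediaRecorder(stream, {mimeType: 'video/webm'});
const chunks = [];
recorder.ondataavailable = event => chunks.push(event.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, {type: 'video/webm'});
  // Upload the blob, or play it back via URL.createObjectURL(blob).
};
recorder.start();
setTimeout(() => recorder.stop(), 5000); // Record five seconds.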

Not Secure labels for some HTTP pages

As we announced previously, starting in Chrome 62, when a user enters data on an HTTP page, Chrome will mark the page as "Not Secure" with a label in the address bar. This label will also be shown in Incognito Mode for all HTTP pages.

And more!

These are just a few of the changes in Chrome 62 for developers; of course, there’s plenty more.

Subscribe to our YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 63 is released, I’ll be right here to tell you -- what’s new in Chrome!


Lighthouse 2.5 Updates

Lighthouse 2.5 is now released! Highlights include a standalone chrome-launcher Node module, five new audits, and a new throttling guide.

See the release notes for the full list of new features, changes, and bug fixes coming to Lighthouse in version 2.5.

chrome-launcher is now a standalone Node module

chrome-launcher is now a standalone Node module, making it easier to launch Google Chrome from your own Node applications.
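
A minimal usage sketch:

const chromeLauncher = require('chrome-launcher');

chromeLauncher.launch({chromeFlags: ['--headless']}).then(chrome => {
  console.log(`Chrome is debuggable on port ${chrome.port}`);
  return chrome.kill();
});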

Five new audits

Appropriate aspect ratios

Category: Best Practices

The Does not use images with appropriate aspect ratios audit alerts you when an image's rendered aspect ratio is significantly different from the image's actual dimensions. The aspect ratio is the ratio between width and height. If the ratio is significantly different when rendered, the image probably looks distorted.

Figure 1. The Does not use images with appropriate aspect ratios audit

JavaScript libraries with security vulnerabilities

Category: Best Practices

The Includes front-end JavaScript libraries with known security vulnerabilities audit warns you about how many vulnerabilities a library has, as well as the highest severity level among those vulnerabilities.

Figure 2. The Includes front-end JavaScript libraries with known security vulnerabilities audit

Unused JavaScript

Category: Performance

The Unused JavaScript audit breaks down how much JavaScript a page loads but does not use during startup.

Note: This audit is only available when running Lighthouse from Node or the command line in full-config mode.

Figure 3. The Unused JavaScript audit

Low server response times

Category: Performance

The Keep server response times low (TTFB) audit measures how long it takes the client to receive the first byte of the main document. If Time To First Byte (TTFB) is long, then the request is taking a long time traveling through the network, or the server is slow.

Figure 4. The Keep server response times low audit

Console errors

Category: Best Practices

The Browser errors were logged to the console audit alerts you to any errors that are logged to the console as the page loads.

Figure 5. The Browser errors were logged to the console audit

Throttling guide

Check out the new Throttling Guide to learn how to conduct high-quality, packet-level throttling. This guide is intended for advanced audiences.
