
HTMLMediaElement.play() Returns a Promise


Automatically playing audio and video on the web is a powerful capability, and one that’s subject to different restrictions on different platforms. Today, most desktop browsers will always allow web pages to begin <video> or <audio> playback via JavaScript without user interaction. Most mobile browsers, however, require an explicit user gesture before JavaScript-initiated playback can occur. This helps ensure that mobile users, many of whom pay for bandwidth or who might be in a public environment, don’t accidentally start downloading and playing media without explicitly interacting with the page.

It’s historically been difficult to determine whether user interaction is required to start playback, and to detect the failures that happen when (automatic) playback is attempted and fails. Various workarounds exist, but are less than ideal. An improvement to the underlying play() method to address this uncertainty is long overdue, and this has now made it to the web platform, with an initial implementation in Chrome 50.

A play() call on a <video> or <audio> element now returns a Promise. If playback succeeds, the Promise is fulfilled, and if playback fails, the Promise is rejected along with an error message explaining the failure. This lets you write intuitive code like the following:

var playPromise = document.querySelector('video').play();

// In browsers that don’t yet support this functionality,
// playPromise won’t be defined.
if (playPromise !== undefined) {
  playPromise.then(function() {
    // Automatic playback started!
  }).catch(function(error) {
    // Automatic playback failed.
    // Show a UI element to let the user manually start playback.
  });
}

In addition to detecting whether the play() method was successful, the new Promise-based interface allows you to determine when the play() method succeeded. There are contexts in which a web browser may decide to delay the start of playback—for instance, desktop Chrome will not begin playback of a <video> until the tab is visible. The Promise won’t fulfill until playback has actually started, meaning the code inside the then() will not execute until the media is playing. Previous methods of determining if play() is successful, such as waiting a set amount of time for a playing event and assuming failure if it doesn’t fire, are susceptible to false negatives in delayed-playback scenarios.

We’ve published a live example of this new functionality. View it in a browser such as Chrome 50 that supports this Promise-based interface. Be forewarned: the page will automatically play music when you visit it. (Unless, of course, it doesn’t!)


Web Notification Improvements in Chrome 50: Icons, Close Events, Renotify Preferences and Timestamps


Push notifications allow you to provide a great app-like experience for your users, alerting them of important and timely updates like incoming chat messages. The notification platform is relatively new in browsers and as more and more use cases and requirements are fleshed out, we are seeing many additions to the APIs for notifications. Chrome 50 (beta in March 2016) is no exception, with no fewer than four new features that give developers more control over notifications. You get the ability to:

  • add icons to notification buttons,
  • modify the timestamp to help create a consistent experience,
  • track notification close events to help synchronise notifications and provide analytics,
  • manage the renotify experience when a notification replaces the currently displayed notification.

Chrome 50 has also added Payloads for Push notifications. To stay up to date with the Notifications API as it’s implemented in Chrome, follow the spec and the spec issue tracker.

Create Compelling Action Buttons with Custom Icons

In a recent post about notification action buttons in Chrome 49, I mentioned that you couldn’t attach images to notification buttons to make them snazzy and appealing, though you could use Unicode characters to inline emoji and other symbols. Now you don’t have to worry: with this recent addition you can specify an image on the action button:

self.registration.showNotification('New message from Alice', {
  actions: [
   {action: 'like', title: 'Like', icon: 'https://example/like.png'},
   {action: 'reply', title: 'Reply', icon: 'https://example/reply.png'}]
});

The action icon’s appearance differs by platform. For example, on Android the icon will have a dark grey filter applied in Lollipop and above, and a white filter pre-Lollipop, while on desktop it will be full colour. (Note: there is discussion about the future of this on desktop.) Some platforms might not even be able to display action icons, so ensure that you are using the icons to provide context to the action and not as the sole indicator of the intent.

And finally, because the resources must be downloaded, it is good practice to keep the icons as small as possible and to precache them in your install event. (At the time of this writing, fetches of notification resources in Chrome are not yet routed through the service worker.)
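For example, here’s a quick sketch of precaching those icons during the install event. The cache name is just a placeholder, the icon URLs are the ones from the example above, and note the caveat about notification fetches not yet going through the service worker:

self.addEventListener('install', event => {
  event.waitUntil(
    // Warm up a cache with the action icons so they're available early.
    caches.open('notification-icons-v1').then(cache => cache.addAll([
      'https://example/like.png',
      'https://example/reply.png'
    ]))
  );
});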

Notification Close Events

A frequently requested feature of notifications is the ability to know when the user has dismissed a notification. We had no way to do that until a recent set of changes to the notification specification added a notificationclose event.

By using the notificationclick and notificationclose events you can understand how your users are interacting with your notifications. Are they leaving them open for a long time and then actively dismissing them, or are they acting on them right away?

One popular use case is to synchronise notifications between devices. If the user dismisses a notification on their desktop device, the same notification on their mobile device should also be dismissed. We don’t yet have the ability to do this silently (remember, every push message must have a notification displayed), but notificationclose opens up the ability to handle this: you can track the notification state for the user on your server and synchronise it with their other devices as they use them.

To use the notificationclose event, register it inside your service worker and it will fire only when the user has actively dismissed a notification, for example, if the user dismisses a specific notification or dismisses all the notifications in their tray (on Android).

If the requireInteraction flag is false or not set, and the notification is dismissed automatically by the system rather than manually by the user, the notificationclose event will not be triggered.

A simple implementation is shown below. When the user dismisses the notification you get access to the notification object from which you can perform custom logic.

self.addEventListener('notificationclose', e => console.log(e.notification));

You can test this in the Notification Generator; you will get an alert when you close the notification.
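For the cross-device synchronisation use case described above, a sketch might look like the following. The /notification-closed endpoint is hypothetical, and it assumes each notification carries an id in its data property:

self.addEventListener('notificationclose', event => {
  const data = event.notification.data || {};
  // Tell the server this notification was dismissed, so the same
  // notification can be cleared on the user's other devices.
  event.waitUntil(fetch('/notification-closed', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({id: data.id})
  }));
});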

Don’t Annoy Your Users When You Replace an Existing Notification

I am pretty sure Uncle Ben was talking about the notification system and not the powers of Peter Parker when he said “With great power comes great responsibility”. The notification system is a powerful medium for interacting with users. If you abuse their trust they will turn off all notifications and you may lose them entirely.

When you create a notification you can set it to create an audible alert or vibrate to get the attention of the user. Additionally, you can replace an existing notification by reusing its ‘tag’ attribute on a new notification object.

Prior to Chrome 50, every time you created a notification or replaced an existing one, it would run a vibration pattern or play an audible alert, and this could cause frustration for your users. In Chrome 50 you have control over what happens during renotification via a simple boolean flag called renotify. The new default behaviour when using the same tag for subsequent notifications is to be silent; as the developer, you must opt in to re-notifying the user by setting the flag to true.

self.registration.showNotification('Oi!', {
  'renotify': true,
  'tag': 'tag-id-1'
});

You can try this out in the Notification Generator.

Manage the Timestamp Displayed to the User

On Android, Chrome’s notifications show their creation times in the top right corner by default. Unfortunately, this might not be the time that the notification was actually generated by your system. For example, the event might have been triggered while the device was offline, or the notification could be for an upcoming meeting. As of Chrome 50, a new timestamp property enables developers to provide the time that should be displayed in the notification.

self.registration.showNotification('Best day evar!', {
  'timestamp': 360370800000
});

The timestamp is currently only visible on Chrome for Android. Although it is not visible on desktop, it will affect the notification order on both mobile and desktop.

Device Orientation Changes Are Coming to Chrome 50


Developers working on virtual or augmented reality web apps are undoubtedly familiar with the DeviceOrientationEvent. For the uninitiated, “This End Up: Using Device Orientation” provides a great overview of how a deviceorientation event listener can respond to a device twisting and turning.

In earlier versions of Chrome, the alpha, beta, and gamma values included in the DeviceOrientationEvent were provided as absolute degrees with respect to the Earth’s coordinate frame. Providing absolute degrees requires using a device’s magnetometer sensor to detect the Earth’s magnetic field, and that in turn is susceptible to nearby magnetic field fluctuations that could throw off the readings. In practice, this could lead to a web app registering a bunch of DeviceOrientationEvents due to a nearby magnet, despite the device itself not actually moving. For a virtual reality application that only cares about tracking changes in orientation, this magnetic noise is bad news.

What’s Changing?

Starting with Chrome 50, the degrees included in the DeviceOrientationEvent are by default no longer absolute with respect to the Earth’s coordinate frame. This means that DeviceOrientationEvents should only be triggered when there’s actual movement, as detected by some combination of a device’s accelerometer and gyroscope. The magnetometer, and false readings due to magnetic field fluctuations, are out of the picture.

But I Still Need Absolute Degrees!

If you’re writing JavaScript that needs to use absolute degrees, perhaps as part of an augmented reality web application that needs to map directly onto the physical world, you’re not out of luck. The previous behavior, dependent on a device’s magnetometer, is available via a new deviceorientationabsolute event. From a developer’s perspective, it’s analogous to the existing DeviceOrientationEvent, with the guarantee that the absolute property will be set to true.
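A minimal sketch of listening for it:

window.addEventListener('deviceorientationabsolute', function(event) {
  // event.absolute is guaranteed to be true here.
  console.log('alpha: ' + event.alpha +
              ', beta: ' + event.beta +
              ', gamma: ' + event.gamma);
});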

Detecting What’s Supported

Developers who would prefer absolute degrees can use feature detection to determine whether they’re on a browser that supports the new deviceorientationabsolute event:

if ('ondeviceorientationabsolute' in window) {
  // We can listen for the new deviceorientationabsolute event.
} else if ('ondeviceorientation' in window) {
  // We can still listen for deviceorientation events.
  // The `absolute` property of the event tells us whether
  // or not the degrees are absolute.
}

Cross-Browser Compatibility

The values reported in the DeviceOrientationEvent have never been consistent.

Safari and Firefox on iOS use relative values for the degrees, which matches the implementation change introduced in Chrome 50. The change should lead to more consistency with web applications that were written with iOS in mind.

Firefox (on platforms other than iOS), Edge, and Chrome versions prior to 50 use absolute degree values for the DeviceOrientationEvent when run on devices with the appropriate sensors.

As of this writing, Chrome 50 is the first browser to support the new deviceorientationabsolute event.

Advanced Orientation Tracking with the DeviceMotionEvent

Boris Smus has a fantastically detailed article covering some of the downsides of using the DeviceOrientationEvent, and how to implement bespoke sensor fusion using DeviceMotionEvents. They provide low-level access to the accelerometer and gyroscope, and can lead to a more accurate virtual reality experience for your users.


Web Push Payload Encryption


Prior to Chrome 50, push messages could not contain any payload data. When the push event fired in your service worker, all you knew was that the server was trying to tell you something, but not what it might be. You then had to make a follow-up request to the server to obtain the details of the notification to show, which might fail in poor network conditions.

Now in Chrome 50 (and in the current version of Firefox on desktop) you can send some arbitrary data along with the push so that the client can avoid making the extra request. However, with great power comes great responsibility, so all payload data must be encrypted.

Encryption of payloads is an important part of the security story for web push. HTTPS gives you security when communicating between the browser and your own server, because you trust the server. However, the browser chooses which push provider will be used to actually deliver the payload, so you, as the app developer, have no control over it.

Here, HTTPS can only guarantee that no one can snoop on the message in transit to the push service provider. Once they receive it, they are free to do what they like, including re-transmitting the payload to third-parties or maliciously altering it to something else. To protect against this we use encryption to ensure that push services can’t read or tamper with the payloads in transit.

Client-side changes

If you have already implemented push notifications without payloads then there are only two small changes that you need to make on the client-side.

The first is that when you send the subscription information to your backend server you need to gather some extra information. If you already use JSON.stringify() on the PushSubscription object to serialize it for sending to your server, then you don’t need to change anything. The subscription will now have some extra data in the keys property.

> JSON.stringify(subscription)
{"endpoint":"https://android.googleapis.com/gcm/send/f1LsxkKphfQ:APA91bFUx7ja4BK4JVrNgVjpg1cs9lGSGI6IMNL4mQ3Xe6mDGxvt_C_gItKYJI9CAx5i_Ss6cmDxdWZoLyhS2RJhkcv7LeE6hkiOsK6oBzbyifvKCdUYU7ADIRBiYNxIVpLIYeZ8kq_A","keys":{"p256dh":"BLc4xRzKlKORKWlbdgFaBrrPK3ydWAHo4M0gs0i1oEKgPpWC5cW8OCzVrOQRv-1npXRWk8udnW3oYhIO4475rds=","auth":"5I2Bu2oKdyy9CwL8QVF0NQ=="}}

The two values p256dh and auth are encoded in a variant of Base64 that I’ll call URL-Safe Base64.
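Many decoders (including Node’s Buffer) accept this variant directly, but if yours doesn’t, converting it back to standard Base64 is a simple character swap. A sketch:

function urlSafeBase64Decode(urlSafe) {
  // Restore the characters that the URL-safe alphabet replaces.
  return new Buffer(urlSafe.replace(/-/g, '+').replace(/_/g, '/'), 'base64');
}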

If you want to get right at the bytes instead, you can use the new getKey() method on the subscription, which returns the key material as an ArrayBuffer. The two keys that you need are auth and p256dh.

> new Uint8Array(subscription.getKey('auth'));
[228, 141, 129, ...] (16 bytes)

> new Uint8Array(subscription.getKey('p256dh'));
[4, 183, 56, ...] (65 bytes)

The second change is a new data property when the push event fires. It has various synchronous methods for parsing the received data, such as .text(), .json(), .arrayBuffer() and .blob().

self.addEventListener('push', function(event) {
  if (event.data) {
    console.log(event.data.json());
  }
});

Server-side changes

On the server side, things change a bit more. The basic process is that you use the encryption key information you got from the client to encrypt the payload and then send that as the body of a POST request to the endpoint in the subscription, adding some extra HTTP headers.

The details are relatively complex, and as with anything related to encryption it’s better to use an actively developed library than to roll your own. The Chrome team has published a library for Node.js, with more languages and platforms coming soon. This handles both encryption and the web push protocol, so that sending a push message from a Node.js server is as easy as webpush.sendWebPush(message, subscription).

While we definitely recommend using a library, this is a new feature and there are many popular languages that don’t yet have any libraries. If you do need to implement this for yourself, here are the details.

I’ll be illustrating the algorithms using Node-flavored JavaScript, but the basic principles should be the same in any language.

Inputs

In order to encrypt a message, we first need to get two things from the subscription object that we received from the client. If you used JSON.stringify() on the client and transmitted that to your server then the client’s public key is stored in the keys.p256dh field, while the shared authentication secret is in the keys.auth field. Both of these will be URL-safe Base64 encoded, as mentioned above. The binary format of the client public key is an uncompressed P-256 elliptic curve point.

const clientPublicKey = new Buffer(subscription.keys.p256dh, 'base64');
const clientAuthSecret = new Buffer(subscription.keys.auth, 'base64');

The public key allows us to encrypt the message such that it can only be decrypted using the client’s private key.

Public keys are usually considered to be, well, public, so to allow the client to authenticate that the message was sent by a trusted server we also use the authentication secret. Unsurprisingly, this should be kept secret, shared only with the application server that you want to be able to send you messages, and treated like a password.

We also need to generate some new data. We need a 16-byte cryptographically secure random salt and a public/private pair of elliptic curve keys. The particular curve used by the push encryption spec is called P-256, or prime256v1. For the best security the key pair should be generated from scratch every time you encrypt a message, and you should never reuse a salt.

ECDH

Let’s take a little aside to talk about a neat property of elliptic curve cryptography. There is a relatively simple process which combines your private key with someone else’s public key to derive a value. So what? Well, if the other party combines their private key with your public key, they will derive the exact same value!

This is the basis of the elliptic curve Diffie-Hellman (ECDH) key agreement protocol, which allows both parties to have the same shared secret even though they only exchanged public keys. We’ll use this shared secret as the basis for our actual encryption key.

const crypto = require('crypto');

const salt = crypto.randomBytes(16);

// Node has ECDH built-in to the standard crypto library. For some languages
// you may need to use a third-party library.
const serverECDH = crypto.createECDH('prime256v1');
const serverPublicKey = serverECDH.generateKeys();
const sharedSecret = serverECDH.computeSecret(clientPublicKey);

HKDF

Already time for another aside. Let’s say that you have some secret data that you want to use as an encryption key, but it isn’t cryptographically secure enough. You can use the HMAC-based Key Derivation Function (HKDF) to turn a secret with low security into one with high security.

One consequence of the way that it works is that it allows you to take a secret of any number of bits and produce another secret of any size up to 255 times as long as a hash produced by whatever hashing algorithm you use. For push, the spec requires us to use SHA-256, which has a hash length of 32 bytes (256 bits).

As it happens, we know that we only need to generate keys up to 32 bytes in size. This means that we can use a simplified version of the algorithm that can’t handle larger output sizes.

I’ve included the code for a Node version below, but you can find out how it actually works in RFC 5869.

The inputs to HKDF are a salt, some initial keying material (ikm), an optional piece of structured data specific to the current use-case (info) and the length in bytes of the desired output key.

// Simplified HKDF, returning keys up to 32 bytes long
function hkdf(salt, ikm, info, length) {
  if (length > 32) {
    throw new Error(`Cannot return keys of more than 32 bytes, ${length} requested`);
  }

  // Extract
  const keyHmac = crypto.createHmac('sha256', salt);
  keyHmac.update(ikm);
  const key = keyHmac.digest();

  // Expand
  const infoHmac = crypto.createHmac('sha256', key);
  infoHmac.update(info);
  // A one byte long buffer containing only 0x01
  const ONE_BUFFER = new Buffer(1).fill(1);
  infoHmac.update(ONE_BUFFER);
  return infoHmac.digest().slice(0, length);
}

Deriving the encryption parameters

We now use HKDF to turn the data we have into the parameters for the actual encryption.

The first thing we do is use HKDF to mix the client auth secret and the shared secret into a longer, more cryptographically secure secret. In the spec this is referred to as a Pseudo-Random Key (PRK) so that’s what I’ll call it here, though cryptography purists may note that this isn’t strictly a PRK.

Now we create the final content encryption key and a nonce that will be passed to the cipher. These are created by making a simple data structure for each, referred to in the spec as an info, that contains information specific to the elliptic curve, sender and receiver of the information in order to further verify the message’s source. Then we use HKDF with the PRK, our salt and the info to derive the key and nonce of the correct size.

The info type for the content encryption is ‘aesgcm’, which is the name of the cipher used for push encryption.

const authInfo = new Buffer('Content-Encoding: auth\0', 'utf8');
const prk = hkdf(clientAuthSecret, sharedSecret, authInfo, 32);

function createInfo(type, clientPublicKey, serverPublicKey) {
  const len = type.length;

  // The start index for each element within the buffer is:
  // value               | length | start    |
  // -----------------------------------------
  // 'Content-Encoding: '| 18     | 0        |
  // type                | len    | 18       |
  // nul byte            | 1      | 18 + len |
  // 'P-256'             | 5      | 19 + len |
  // nul byte            | 1      | 24 + len |
  // client key length   | 2      | 25 + len |
  // client key          | 65     | 27 + len |
  // server key length   | 2      | 92 + len |
  // server key          | 65     | 94 + len |
  // For the purposes of push encryption the length of the keys will
  // always be 65 bytes.
  const info = new Buffer(18 + len + 1 + 5 + 1 + 2 + 65 + 2 + 65);

  // The string 'Content-Encoding: ', as utf-8
  info.write('Content-Encoding: ');
  // The 'type' of the record, a utf-8 string
  info.write(type, 18);
  // A single null-byte
  info.write('\0', 18 + len);
  // The string 'P-256', declaring the elliptic curve being used
  info.write('P-256', 19 + len);
  // A single null-byte
  info.write('\0', 24 + len);
  // The length of the client's public key as a 16-bit integer
  info.writeUInt16BE(clientPublicKey.length, 25 + len);
  // Now the actual client public key
  clientPublicKey.copy(info, 27 + len);
  // Length of our public key
  info.writeUInt16BE(serverPublicKey.length, 92 + len);
  // The key itself
  serverPublicKey.copy(info, 94 + len);

  return info;
}

// Derive the Content Encryption Key
const contentEncryptionKeyInfo = createInfo('aesgcm', clientPublicKey, serverPublicKey);
const contentEncryptionKey = hkdf(salt, prk, contentEncryptionKeyInfo, 16);

// Derive the Nonce
const nonceInfo = createInfo('nonce', clientPublicKey, serverPublicKey);
const nonce = hkdf(salt, prk, nonceInfo, 12);

Padding

Another aside, and time for a silly and contrived example. Let’s say that your boss has a server that sends her a push message every few minutes with the company stock price. The plain message for this will always be a 32-bit integer with the value in cents. She also has a sneaky deal with the catering staff which means that they can send her the string “doughnuts in the break room” 5 minutes before they are actually delivered so that she can “coincidentally” be there when they arrive and grab the best one.

The cipher used by Web Push creates encrypted values that are exactly 16 bytes longer than the unencrypted input. Since “doughnuts in the break room” is longer than a 32-bit stock price, any snooping employee will be able to tell when the doughnuts are arriving without decrypting the messages, just from the length of the data.

For this reason, the web push protocol allows you to add padding to the beginning of the data. How you use this is up to your application, but in the above example you could pad all messages to be exactly 32 bytes, making it impossible to distinguish the messages based only on length.

The padding value is a 16-bit big-endian integer specifying the padding length followed by that number of NUL bytes of padding. So the minimum padding is two bytes - the number zero encoded into 16 bits.

// paddingLength is chosen by your application; 0 means no extra padding.
const paddingLength = 0;
const padding = new Buffer(2 + paddingLength);
// The buffer must be only zeroes, except the length
padding.fill(0);
padding.writeUInt16BE(paddingLength, 0);

When your push message arrives at the client, the browser will be able to automatically strip out any padding, so your client code only receives the unpadded message.

Encryption

Now we finally have all of the things to do the encryption. The cipher required for Web Push is AES128 using GCM. We use our content encryption key as the key and the nonce as the initialization vector (IV).

In this example our data is a string, but it could be any binary data. You can send payloads of up to 4078 bytes: 4096 bytes maximum per post, minus 16 bytes for encryption information and at least 2 bytes for padding.

// Create a buffer from our data, in this case a UTF-8 encoded string
const plaintext = new Buffer('Push notification payload!', 'utf8');
const cipher = crypto.createCipheriv('id-aes128-GCM', contentEncryptionKey,
                                     nonce);

// Note that Buffer.concat takes an array of buffers
const result = cipher.update(Buffer.concat([padding, plaintext]));
cipher.final();

// Append the auth tag to the result - https://nodejs.org/api/crypto.html#crypto_cipher_getauthtag
const encryptedPayload = Buffer.concat([result, cipher.getAuthTag()]);

Web Push

Phew! Now that you have an encrypted payload, you just need to make a relatively simple HTTP POST request to the endpoint specified by the user’s subscription.

You need to set three headers.

Encryption: salt=<SALT>
Crypto-Key: dh=<PUBLICKEY>
Content-Encoding: aesgcm

<SALT> and <PUBLICKEY> are the salt and server public key used in the encryption, encoded as URL-safe Base64.
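Putting that together in Node might look something like this sketch, where urlSafeBase64Encode is a hypothetical helper doing the reverse of the decoding described earlier:

function urlSafeBase64Encode(buffer) {
  // Standard Base64, then swap to the URL-safe alphabet and drop padding.
  return buffer.toString('base64')
      .replace(/\+/g, '-')
      .replace(/\//g, '_')
      .replace(/=+$/, '');
}

const headers = {
  'Encryption': 'salt=' + urlSafeBase64Encode(salt),
  'Crypto-Key': 'dh=' + urlSafeBase64Encode(serverPublicKey),
  'Content-Encoding': 'aesgcm'
};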

When using the Web Push protocol, the body of the POST is then just the raw bytes of the encrypted message. However, until Chrome and Google Cloud Messaging support the protocol, you can easily include the data in your existing JSON payload as follows.

{
    "registration_ids": [ ],
    "raw_data": "BIXzEKOFquzVlr/1tS1bhmobZ…"
}

The value of the raw_data property must be the Base64 encoded representation of the encrypted message.

Debugging / Verifier

Peter Beverloo, one of the Chrome engineers who implemented the feature (and one of the people who worked on the spec), has created a verifier.

By getting your code to output each of the intermediate values of the encryption you can paste them into the verifier and check that you are on the right track.

Prioritizing Your Resources with <link rel='preload'>


Have you ever wanted to let the browser know about an important font, script, or other resource that will be needed by the page, without delaying the page’s onload event? <link rel="preload"> gives web developers the power to do just that, using a familiar HTML element syntax with a few key attributes to determine the exact behavior. It’s a draft standard that’s shipping as part of the Chrome 50 release.

Resources loaded via <link rel="preload"> are stored locally in the browser, and are effectively inert until they’re referenced in the DOM, JavaScript, or CSS. For example, here’s one potential use case in which a script file is preloaded, but not executed immediately, as it would have been if it were included via a <script> tag in the DOM.

<link rel="preload" href="used-later.js" as="script">
<!-- ...other HTML... -->
<script>
  // Later on, after some condition has been met, we run the preloaded
  // JavaScript by inserting a <script> tag into the DOM.
  var usedLaterScript = document.createElement('script');
  usedLaterScript.src = 'used-later.js';
  document.body.appendChild(usedLaterScript);
</script>

So what’s happening here? The href attribute used in that example should be familiar to web developers, as it’s the standard attribute used to specify the URL of any linked resource.

The as attribute is probably new to you, however; it’s used in the context of a <link> element to give the browser more context about the destination of the preloading request being made. This additional information ensures that the browser sets the appropriate request headers and request priority, and applies any relevant Content Security Policy directives for the correct resource context.

Learn (a Lot) More

Yoav Weiss wrote the definitive guide to using <link rel="preload">. If you’re intrigued and want to start using it on your own pages, I’d recommend reading through his article to learn more about the benefits and creative use cases.

<link rel="preload"> supersedes <link rel="subresource">, which has significant bugs and drawbacks, and which was never implemented in browsers other than Chrome. As such, Chrome 50 removes support for <link rel="subresource">.

Web Animations Improvements in Chrome 50


The Web Animations API, which first shipped in Chrome 36, provides convenient JavaScript control of animations in the browser, and is also being implemented in Gecko and WebKit.

Chrome 50 introduces changes to improve interoperability with other browsers and to be more compliant with the spec. These changes are:

  • Cancel events
  • Animation.id
  • State change for the pause() method
  • Deprecation of dashed names as keys in keyframes

Cancel Events

The Animation interface includes a method to cancel an animation, funnily enough called cancel(). Chrome 50 implements firing of the cancel event when the method is called as per the spec, which triggers event handling through the oncancel attribute if it’s been initialized.
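A quick sketch of what that looks like:

var animation = element.animate([{opacity: 1}, {opacity: 0}], 500);
animation.oncancel = function() {
  console.log('Animation was cancelled');
};
animation.cancel();  // Fires the cancel event, which invokes oncancel.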

Support for Animation.id

When you create an animation using element.animate() you can pass in a number of properties. For example, here’s an example of animating opacity on an object:

element.animate([ { opacity: 1 }, { opacity: 0 } ], 500);

If you specify the id property, it’ll be set on the returned Animation object, which can help when debugging content with lots of Animation objects to deal with. Here’s an example of how you’d specify an id for an animation you instantiate:

element.animate([{opacity: 1}, {opacity: 0}], {duration: 500, id: "foo"});

State Change for the pause() Method

The pause() method is used to pause an animation that’s in progress. If you check the state of the animation using the playState attribute, it should be set to paused after the pause() method has been called. In Chrome versions prior to 50, the playState attribute would indicate idle if the animation hadn’t started yet; now it correctly reflects paused.
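A short sketch of the corrected behaviour:

var animation = element.animate([{opacity: 1}, {opacity: 0}], 500);
animation.pause();
// Reported 'idle' in Chrome versions prior to 50; now reports 'paused'.
console.log(animation.playState);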

Deprecating Dashed Names as Keys in Keyframes

To further comply with the spec and other implementations, Chrome 50 sends a warning to the console if dashed names are used for keys in keyframe animations, although they will continue to work. The correct strings to use are camelCase names, as per the CSS property to IDL attribute conversion algorithm.

For example, the CSS property margin-left would require you to pass in marginLeft as the key.
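As a sketch:

// Correct: camelCase keys, per the conversion algorithm.
element.animate([{marginLeft: '0px'}, {marginLeft: '100px'}], 500);

// Deprecated: dashed keys now trigger a console warning.
// element.animate([{'margin-left': '0px'}, {'margin-left': '100px'}], 500);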

Chrome 51 will remove support for dashed names altogether, so this is a good time to correct any existing content with the correct naming as per the spec.

These changes bring Chrome’s implementation of Web Animations closer to other browsers’ implementations and make it more compliant with the specification, which helps simplify web page content authoring for better interoperability.

Media Source API: Automatically Ensure Seamless Playback of Media Segments in Append Order


The HTML audio and video elements enable you to load, decode and play media, simply by providing a src URL:

<video src='foo.webm'></video>

That works well in simple use cases, but for techniques such as adaptive streaming, the Media Source Extensions API (MSE) provides more control. MSE enables streams to be built in JavaScript from segments of audio or video.

You can try out MSE at simpl.info/mse:

Screenshot of video played back using the MSE API

The code below is from that example.

A MediaSource represents a source of media for an audio or video element. Once a MediaSource object is instantiated and its open event has fired, SourceBuffers can be added to it. These act as buffers for media segments:

var mediaSource = new MediaSource();
video.src = window.URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', function() {
  var sourceBuffer =
      mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
  // Get video segments and append them to sourceBuffer.
});

Media segments are ‘streamed’ to an audio or video element by adding each segment to a SourceBuffer with appendBuffer(). In this example, video is fetched from the server then stored using the File APIs:

reader.onload = function (e) {
  sourceBuffer.appendBuffer(new Uint8Array(e.target.result));
  if (i === NUM_CHUNKS - 1) {
    mediaSource.endOfStream();
  } else {
    if (video.paused) {
      // start playing after first chunk is appended
      video.play();
    }
    readChunk(++i);
  }
};

Setting Playback Order

Chrome 50 adds additional support to the SourceBuffer mode attribute, allowing you to specify that media segments are played back continuously, in the order that they were appended, no matter whether the media segments initially had discontinuous timestamps.

Use the mode attribute to specify playback order for media segments. It has one of two values:

  • segments: The timestamp of each segment (which may have been modified by timestampOffset) determines playback order, no matter the order in which segments are appended.
  • sequence: The order of segments buffered in the media timeline is determined by the order in which segments are appended to the SourceBuffer.

If the media segments have timestamps parsed from byte stream data when they are appended to the SourceBuffer, the SourceBuffer’s mode property will be set to segments; otherwise mode will be set to sequence. Note that timestamps are not optional: they must be present for most stream types, and cannot be present for others, since inband timestamps are innate to the stream types that contain them.

Setting the mode attribute is optional. For streams that don’t contain timestamps (audio/mpeg and audio/aac) mode can only be changed from segments to sequence: an error will be thrown if you try to change mode from sequence to segments. For streams that have timestamps, it is possible to switch between segments and sequence, though in practice that would probably produce behaviour that was undesirable, hard to understand or difficult to predict.

For all stream types, you can change the value from segments to sequence. This means segments will be played back in the order they were appended, and new timestamps generated accordingly:

sourceBuffer.mode = 'sequence';

Being able to set the mode value to sequence ensures continuous media playback, even if the media segment timestamps were discontinuous (for example, if there were problems with video muxing, or if discontinuous segments are appended for whatever reason). It is possible for an app to polyfill with timestampOffset to ensure continuous playback, if correct stream metadata is available, but sequence mode makes the process simpler and less error prone.

MSE Apps and Demos

A number of published apps and demos show MSE in action, though without SourceBuffer.mode manipulation.

Browser Support

  • Chrome 50 and above by default
  • For Firefox, see MDN for details


Removing Headaches from Focus Management


The 'sequential focus navigation starting point' feature defines where we start to search for focusable elements for sequential focus navigation ([Tab] or [Shift-Tab]) when there is no focused area. It's especially helpful for accessibility features like "skip links" and managing focus in the document.


HTML provides us with a lot of built-in support for dealing with keyboard interactions, which means it’s pretty easy to write pages that can be used via the keyboard, whether because a motor impairment prevents us from using a mouse or because we’re so efficient that removing our hands from the keyboard wastes precious milliseconds.

Keyboard handling revolves around focus, which determines where keyboard events will go in the page. There are a few situations in which, up till now, we’ve needed to do some extra work to make things work well for keyboard users. The focus() method allows us to manage focus by selectively choosing an element to focus in response to a user action. However, this best practice suffers from a lot of gotchas and requires some tricky JavaScript hackery to provide a baseline experience.

While this technique isn’t going to completely go away any time soon, in Chrome 50 it will be necessary in fewer cases thanks to the Sequential Focus Navigation Start Point. With this change, well-authored pages will automatically become more accessible without any need for extra manual focus management. Let’s look at an example.

Linking Within a Page

Text heavy sites often interlink within the same page to help users quickly jump to important sections.

<!-- Table of Contents -->
<a href="#recipes">Recipes</a>
<a href="#ingredients">Ingredients</a>

<!-- Recipes Section -->
<h2 id="recipes">Recipes</h2>
<h3>Vegemite Cheesecake</h3>
<p>
  Vegemite cheesecake is delicious. We promise.
  <a href="cheesecake.html">Read More</a>
</p>

If I were a keyboard user (and a glutton for Australian foods) my next series of actions would go something like this:

  • Press [Tab] twice to focus the Recipes link
  • Press [Enter] to jump to the Recipes section
  • Press [Tab] again to focus the Read More link

Let’s see that in action using Chrome 49.

Oh. Well that didn’t go quite according to plan did it?

Instead of focusing the Read More link, pressing [Tab] for the final time moved focus to the next item in the table of contents. This is because the developer did not set tabindex="-1" on the header to make it focusable, so clicking on the #recipes named anchor did not move focus. It’s a subtle mistake, and not a big deal if you’re a mouse user. But it’s potentially a very big deal if you’re a keyboard or switch device user. Consider the amount of interlinking on a typical Wikipedia page: it would be frustrating to have to constantly tab back and forth through all of those anchors!

Let’s look at the same experience now using Chrome 50.

Wow, that’s exactly what we wanted, and best of all, we didn’t have to change our code. The browser just figured out where focus should go based on where we were in the document.

How Does it Work?

Prior to the implementation of the focus starting point, focus would always move from either the previous focused element, or the top of the page. This means that choosing what gets focused next can involve moving focus to something which shouldn’t really be focusable, like a container element or a heading. This causes all sorts of weirdness, including showing a focus ring if you happen to idly click such an element.

The focus start point, as the name suggests, provides a mechanism for suggesting where to start looking for the next focusable element when we press [Tab] or [Shift-Tab].

It can be set in a number of ways:

  • If something has focus, it’s also the focus navigation start point, just like before.
  • Also just like before, if nothing else has set the focus navigation start point, then it will be the current document or, if available and supported, the currently active dialog.
  • If we navigate to a page fragment like in the example above, that will now set the focus start point.
  • If we click any element on the page, regardless of whether it is focusable, that will now set the focus navigation start point.
  • Finally, if the element which was the focus start point is removed from the DOM, its parent becomes the focus start point.

No more focus whack-a-mole!

Other Use Cases

Aside from the above example, there are many other scenarios where this feature can come in handy.

Hiding Elements

There may be times when a user will be focused on an item that needs to be set to visibility: hidden or display: none. An example of this would be clickable items within a carousel. In prior versions of Chrome, hiding the currently focused item in this manner would reset focus back to the default starting point, turning the aforementioned carousel into a nasty trap for motor impaired users. With sequential focus starting point, this is no longer the case. If an element is hidden through either of the above methods, pressing the [Tab] key will simply move to the next focusable item.

Skip Links

Skip links are invisible anchors which can only be reached via the keyboard. They allow users to “skip” navigation elements in order to jump straight into the content of a page, and they can be extremely beneficial for keyboard and switch device users. As explained on the WebAIM site:

Without some sort of system for bypassing the long list of links, some users are at a huge disadvantage. Consider users with no arm movement, who use computers by tapping their heads on a switch or that use a stick in their mouth to press keyboard keys. Requiring users to perform any action perhaps 100s of times before reaching the main content is simply unacceptable.

Many popular websites implement skip links, though you may have never noticed them.

A skip link on GitHub.com

Because skip links are named anchors, they work in the same fashion as our original table of contents example.
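Here’s a hypothetical example of the markup, which is the same named-anchor pattern:

<!-- Visually hidden until focused, e.g. positioned off-screen with CSS. -->
<a class="skip-link" href="#main-content">Skip to main content</a>

<!-- Navigation, ads, and other preamble here... -->

<main id="main-content">
  <!-- The actual content of the page. -->
</main>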

Caveats and Support

Sequential focus navigation starting point is currently only supported in Chrome 50, Firefox, and Opera. Until it is supported in all browsers you’ll still need to add tabindex="-1" (and remove the focus outline) to your named anchor targets.

Demo

Sequential focus navigation starting point is a great addition to the browser’s set of accessibility primitives. It’s easy to grok and actually lets us remove code from our application while improving the experience for our users. Double win! Take a look at the demo to explore this feature in more depth.


Chrome Supports createImageBitmap() in Chrome 50


Decoding images for use with a canvas is pretty common, whether it’s to allow users to customize an avatar, crop an image, or just zoom in on a picture. The problem with decoding images is that it can be CPU intensive, and that can sometimes mean jank or checkerboarding. As of Chrome 50 (and in Firefox 42+) you now have another option: createImageBitmap(). It allows you to decode an image in the background, and get access to a new ImageBitmap primitive, which you can draw into a canvas in the same way you would an <img> element, another canvas, or a video.

Drawing Blobs with createImageBitmap()

Let’s say you download a blob image with fetch() (or XHR), and you want to draw it into a canvas. Without createImageBitmap() you would have to create an image element and a Blob URL to get the image into a format you could use. With it you get a much more direct route to painting:

fetch(url)
  .then(response => response.blob())
  .then(blob => createImageBitmap(blob))
  .then(imageBitmap => ctx.drawImage(imageBitmap, 0, 0));

This approach will also work with images stored as blobs in IndexedDB, making blobs something of a convenient intermediate format. As it happens Chrome 50 also supports the .toBlob() method on canvas elements, which means you can – for example – generate blobs from canvas elements.

Using createImageBitmap() in Web Workers

One of the nicest features of createImageBitmap() is that it’s also available in workers, meaning that you can now decode images wherever you want to. If you have a lot of images to decode that you consider non-essential, you could ship their URLs to a Web Worker, which would download and decode them as time allows. It would then transfer them back to the main thread for drawing into a canvas.

data flow with createImageBitmap and web workers

The code for doing this may look something like:

// In the worker.
fetch(imageURL)
  .then(response => response.blob())
  .then(blob => createImageBitmap(blob))
  .then(imageBitmap => {
    // Transfer the imageBitmap back to main thread.
    self.postMessage({ imageBitmap }, [imageBitmap]);
  }, err => {
    self.postMessage({ err });
  });

// In the main thread.
worker.onmessage = (evt) => {
  if (evt.data.err)
    throw new Error(evt.data.err);

  canvasContext.drawImage(evt.data.imageBitmap, 0, 0);
};

Today if you call createImageBitmap() on the main thread, that’s exactly where the decoding will be done. The plans are, however, to have Chrome automatically do the decoding in another thread, helping to keep the main thread workload down. In the meantime, however, you should be mindful of doing the decoding on the main thread, as it is intensive work that could block other essential tasks, like JavaScript, style calculations, layout, painting, or compositing.

A Helper Library

To make life a little simpler, I have created a helper library that handles the decoding on a worker and sends the decoded image back to the main thread, where it’s drawn into a canvas. You should, of course, feel free to reverse engineer it and apply the model to your own apps. The major benefit is more control, but that (as usual) comes with more code, more to debug, and more edge cases to consider than using an <img> element.

If you need more control with image decoding, createImageBitmap() is your new best friend. Check it out in Chrome 50, and let us know how you get on!

FormData methods for inspection and modification


FormData is the XHR user’s best friend, and it’s getting an upgrade in Chrome 50. We’re adding methods allowing you to inspect your FormData objects or modify them after-the-fact. You can now use get(), delete(), and iteration helpers like entries, keys, and more. (Check out the full list.)

If you’re not already using FormData, it’s a simple, well-supported API that allows you to programmatically build a virtual form and send it to a far away place using window.fetch() or XMLHttpRequest.send(formData).

For some examples, read on!

Parse Real Forms Like a Pro

FormData can be constructed from a real HTML form, taking a snapshot of all its current values. However, the object used to be entirely opaque. All you could do was send it on, unchanged, to a server. Now, you can take it, modify it, bop it, observe it, shrink it, change it, and finally, upload it:

function sendRequest(theFormElement) {
  var formData = new FormData(theFormElement);
  formData.delete("secret_user_data"); // don't include this one!
  if (formData.has("include_favorite_color")) {
    formData.set("color", userPrefs.getColor());
  }
  // log all values like <input name="widget">
  console.info("User selected widgets", formData.getAll("widget"));

  window.fetch(url, {method: 'POST', body: formData});
}

You can also send FormData via the older XMLHttpRequest:

var x = new XMLHttpRequest();
x.open('POST', url);
x.send(formData);

Don’t throw away your FormData

If you’re building your own FormData from scratch, you might have found it frustrating that you couldn’t reuse it; you’ve spent a lot of time on those fields! As both the window.fetch() and XMLHttpRequest.send() methods take a snapshot of the FormData, you can now safely reuse and modify your work. Check this example out:

// append allows multiple values for the same key
var formData = new FormData();
formData.append("article", "id-123");
formData.append("article", "id-42");

// send "like" request
formData.set("action", "like");
window.fetch(url, {method: 'POST', body: formData});

// send reshare request
formData.set("action", "reshare");  // overrides previous "action"
window.fetch(url, {method: 'POST', body: formData});

DOMTokenList Validation Added in Chrome 50


In Chrome 50, you’ll be able to check the support of options for some HTML attributes that are backed by DOMTokenList instances in JavaScript. Right now, these places are:

  • iframe sandbox options
  • link relations (the rel attribute, or relList in JavaScript)

Here’s a quick example:

var iframe = document.getElementById(...);
if (iframe.sandbox.supports('an-upcoming-feature')) {
  // support code for mystery future feature
} else {
  // fallback code
}
if (iframe.sandbox.supports('allow-scripts')) {
  // instruct frame to run JavaScript
  // NOTE: this is well-supported, and just an example!
}

As the list of supported options grows and changes, you can use feature detection to perform the correct actions for your web applications.
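Link relations can be checked the same way through relList. A sketch:

var link = document.createElement('link');
if (link.relList.supports('preload')) {
  // <link rel="preload"> is available.
} else {
  // Fall back to another preloading strategy.
}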

Canvas toBlob() support added in Chrome 50


The canvas element is getting an upgrade as of Chrome 50: it now supports the toBlob() method! This is great news for anyone generating images on the client side, who wants to – say – upload them to their server, or store them in IndexedDB for future use.

function sendImageToServer (canvas, url) {

  function onBlob (blob) {
    var request = new XMLHttpRequest();
    request.open('POST', url);
    request.onload = function (evt) {
      // Blob sent to server.
    }

    request.send(blob);
  }

  canvas.toBlob(onBlob);
}

Using toBlob() is great because, instead of manipulating a base64 encoded string that you get from toDataURL(), you can now work with the encoded binary data directly. It’s smaller, and it tends to fit more use cases than a data URI.

If you’re wondering whether you can draw image blobs to another canvas context, the answer is – in Firefox and Chrome – yes, absolutely! You can do this with the createImageBitmap() API, which is also landing in Chrome 50.
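For example, here’s a sketch of round-tripping one canvas into another, assuming sourceCanvas and destinationContext already exist:

sourceCanvas.toBlob(function(blob) {
  // Decode the blob asynchronously, then paint it into the other canvas.
  createImageBitmap(blob).then(function(imageBitmap) {
    destinationContext.drawImage(imageBitmap, 0, 0);
  });
});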

API Deprecations and Removals in Chrome 50


In nearly every version of Chrome we see a significant number of updates and improvements to the product, its performance, and also capabilities of the web platform.

Deprecation policy

To keep the platform healthy we sometimes remove APIs from the web platform that have run their course. There can be many reasons why we would remove an API: it has been superseded by a newer API, it is updated to reflect changes to specifications, it brings alignment and consistency with other browsers, or it was an early experiment that never came to fruition in other browsers and thus increases the burden of support for web developers.

Some of these changes might affect a very small number of sites, and to mitigate issues ahead of time we try to give developers advance notice so that, if needed, they can make the required changes to keep their sites running.

Chrome currently has a process for deprecations and removals of APIs, and the TL;DR is:

  • Announce on blink-dev
  • Set warnings and give time scales in the developer console of the browser when usage is detected on a page
  • Wait, monitor and then remove feature as usage drops

You can find a list of all deprecated features in chromestatus.com using the deprecated filter and removed features by applying the removed filter. We will also try to summarize some of the changes, reasoning, and migration paths in these posts.

In Chrome 50 (Estimated beta date: March 10 to 17) there are a number of changes to Chrome. This list is subject to change at any time.

Remove Support for SPDY/3.1

TL;DR: Support for HTTP/2 is widespread enough that SPDY/3.1 support can be dropped.

Intent to Remove | Chromestatus Tracker | Chromium Bug

SPDY/3.1 was an experimental application layer protocol that provided performance improvements over HTTP/1.1 through, for example, connection multiplexing and server push. Many of its features were incorporated into HTTP/2, which was published as an RFC last May. Since HTTP/2 is supported by major servers and clients, it’s time to remove SPDY/3.1 from Chrome.

Remove TLS Next Protocol Negotiation (NPN)

TL;DR: As part of the removal of SPDY, NPN is also being removed, having previously been replaced by ALPN.

Intent to Remove | Chromestatus Tracker | Chromium Bug

NPN was the TLS extension used to negotiate SPDY (and, in transition, HTTP/2). During the standardization process, NPN was replaced with ALPN, published as RFC 7301 in July 2014. We intend to remove NPN at the same time as the SPDY removal.

AppCache Deprecated on Insecure Contexts

TL;DR: To hinder cross-site scripting, we’re deprecating AppCache on insecure origins. We expect that in Chrome 52 it will only work on origins serving content over HTTPS.

Intent to Remove | Chromestatus Tracker | Chromium Bug

AppCache is a feature that allows offline and persistent access to an origin, which is a powerful privilege escalation for a cross-site scripting attack. As part of a larger effort to remove powerful features on insecure origins, Chrome is removing this attack vector by only allowing AppCache over HTTPS. We’re deprecating HTTP support in Chrome 50 and expect to remove it entirely in Chrome 52.

Document.defaultCharset is Removed

TL;DR: document.defaultCharset has been removed to improve spec compliance.

Intent to Remove | Chromestatus Tracker | Chromium Bug

document.defaultCharset, deprecated in Chrome 49, is a read-only property that returns the default character encoding of the user’s system based on their regional settings. Maintaining this value hasn’t been found useful, because of the way browsers use the character encoding information in the HTTP response or in the meta tag embedded in the page.

Instead, use document.characterSet to get the value specified in the HTTP header. If that is not present, then you will get the value specified in the charset attribute of the <meta> element (for example, <meta charset="utf-8">). Finally, if none of those are available, document.characterSet will be the user’s system setting.
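In other words, the replacement is a one-liner:

// Reflects the HTTP header, then any <meta charset>, then the system default.
console.log(document.characterSet);  // e.g. "UTF-8"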

You can read more discussion of the reasoning not to spec this out in this GitHub issue.

Remove Support for <link rel="subresource">

TL;DR: Remove support for the subresource value for the rel attribute of HTMLLinkElement.

Intent to Remove | Chromestatus Tracker | Chromium Bug

The intent of the subresource value on <link> was to prefetch a resource during a browser’s idle time. After a browser downloaded a page, it could then pre-download resources such as other pages, so that when they were requested by users they could simply be retrieved from the browser cache.

The subresource value suffered from a number of problems. First, it never worked as intended: referenced resources were downloaded with low priority. It was never implemented in any browser other than Chrome, and the Chrome implementation had a bug that caused resources to be downloaded twice.

Developers looking to improve the user experience through preloading of content have a number of options, the most customizable of which is to build a service worker to take advantage of precaching and the Caches API. Additional solutions include other values for the rel attribute including preconnect, prefetch, preload, prerender. Some of these options are experimental and may not be widely supported.

Remove Insecure TLS Version Fallback

TL;DR: Remove a mechanism for forcing servers to return data using less- or non-secure versions of TLS.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Transport layer security (TLS) supports a mechanism for negotiating versions, allowing for the introduction of new TLS versions without breaking compatibility. Some servers implemented this in such a way that browsers were required to use insecure endpoints as a fallback. Because of this, attackers could force any web site, not just those that are incorrectly configured, to negotiate for weaker versions of TLS.

Remove KeyboardEvent.prototype.keyLocation

TL;DR: Remove an unneeded alias for the KeyboardEvent.prototype.location attribute.

Intent to Remove | Chromestatus Tracker | Chromium Bug

This attribute is simply an alias of the KeyboardEvent.prototype.location attribute, which allows disambiguation between keys that appear in multiple places on a keyboard. For example, both attributes allow developers to distinguish between the two Enter keys on an extended keyboard.
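
As a short sketch, this is how you might tell the two Enter keys apart using the standard location attribute (the same value the deprecated alias returned):

document.addEventListener('keydown', function(e) {
  if (e.keyCode === 13) { // Enter
    if (e.location === KeyboardEvent.DOM_KEY_LOCATION_NUMPAD) {
      console.log('Numpad Enter pressed');
    } else {
      console.log('Main Enter pressed');
    }
  }
});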

Error and Success Handlers Required in RTCPeerConnection Methods

TL;DR: The WebRTC RTCPeerConnection methods createOffer() and createAnswer() now require an error handler as well as a success handler. Previously it had been possible to call these methods with only a success handler. That usage is deprecated.

Intent to Remove | Chromestatus Tracker | Chromium Bug

In Chrome 49, we added a warning if you call setLocalDescription() or setRemoteDescription() without supplying an error handler. The error handler argument is mandatory as of Chrome 50.

This is part of clearing the way for introducing promises on these methods, as required by the WebRTC spec.

Here’s an example from the WebRTC RTCPeerConnection demo (main.js, line 126):

function onCreateOfferSuccess(desc) {
  // Apply the offer locally on pc1 and remotely on pc2, passing both
  // a success callback and the now-mandatory error handler.
  pc1.setLocalDescription(desc, function() {
    onSetLocalSuccess(pc1);
  }, onSetSessionDescriptionError);
  pc2.setRemoteDescription(desc, function() {
    onSetRemoteSuccess(pc2);
  }, onSetSessionDescriptionError);
  pc2.createAnswer(onCreateAnswerSuccess, onCreateSessionDescriptionError);
}

Note that both setLocalDescription() and setRemoteDescription() have an error handler. Older browsers expecting only a success handler will simply ignore the error handler argument if it’s present; calling this code in an older browser will not cause an exception.

In general, for production WebRTC applications we recommend that you use adapter.js, a shim maintained by the WebRTC project, to insulate apps from spec changes and prefix differences.

The XMLHttpRequestProgressEvent is No Longer Supported

TL;DR: The XMLHttpRequestProgressEvent interface will be removed, together with the attributes position and totalSize.

Intent to Remove | Chromestatus Tracker | Chromium Bug

This event existed to support the Gecko compatibility properties position and totalSize. Support for all three was dropped in Firefox 22, and the functionality has long been superseded by ProgressEvent, as shown below:

var progressBar = document.getElementById("p"),
      client = new XMLHttpRequest()
  client.open("GET", "magical-unicorns")
  client.onprogress = function(pe) {
    if(pe.lengthComputable) {
      progressBar.max = pe.total
      progressBar.value = pe.loaded
    }
  }

Remove Prefixed Encrypted Media Extensions

TL;DR: Prefixed encrypted media extensions have been removed in favor of a spec-based, unprefixed replacement.

Intent to Remove | Chromestatus Tracker | Chromium Bug

In Chrome 42, we shipped a specification-based, unprefixed version of encrypted media extensions. This API is used to discover, select, and interact with Digital Rights Management systems for use with HTMLMediaElement.

That was nearly a year ago, and since the unprefixed version has more capabilities than the prefixed version, it’s time to remove the prefixed version of the API.

Remove Support for SVGElement.offset Properties

TL;DR: Offset properties for SVGElement have been dropped in favor of the more widely-supported properties on HTMLElement.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Offset properties have long been supported by both HTMLElement and SVGElement; however, Gecko and Edge only support them on HTMLElement. To improve consistency between browsers, these properties were deprecated in Chrome 48 and are now being removed.

Though equivalent properties are part of HTMLElement, developers looking for an alternative can also use getBoundingClientRect().
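
A minimal sketch of measuring an SVG element this way ('#my-rect' is a placeholder selector):

var rect = document.querySelector('#my-rect').getBoundingClientRect();
// Viewport-relative position and size; add the scroll offsets for
// document-relative coordinates comparable to offsetTop/offsetLeft.
console.log(rect.top + window.scrollY, rect.left + window.scrollX,
            rect.width, rect.height);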

Creating a Web-enabled IoT device with Intel® Edison

The Internet of Things is on everyone's lips these days, and it makes tinkerers and programmers like me very excited. Nothing is cooler than bringing your own inventions to life and being able to talk to them!

Client application

Web and IoT, a match to be

There are still a lot of hurdles to overcome before the Internet of Things can be a huge success. One obstacle is companies that require people to install a separate app for each device they purchase, cluttering users’ phones with a multitude of apps that they rarely use. This is something that we would like to avoid.

For this reason, we are very excited about the Physical Web project, which allows devices to broadcast a URL to an online website in a non-intrusive way. In combination with emerging web technologies such as Web Bluetooth, Web USB and Web NFC, the sites can connect directly to the device or at least explain the proper way of doing so.

Although we focus primarily on Web Bluetooth in this article, some use cases might be better suited for Web NFC or Web USB. For example, Web USB is preferred if you require a physical connection for security reasons.

The web site can also serve as a Progressive Web App (PWA). We encourage readers to check out Google’s explanation of PWAs. PWAs are sites that have a responsive, app-like user experience, can work offline and can be added to the device home screen.

As a proof of concept, I have been building a small device using the Intel® Edison Arduino breakout board. The device contains a temperature sensor (TMP36) as well as an actuator (colored LED cathode). The schematics for this device can be found at the end of this article.

Breadboard

The Edison is an interesting device because it can run a full Linux distribution. Therefore I can easily program it using Node.js. The installer lets you install the Intel® XDK which makes it easy to get started, although you can program and upload to your device manually as well.

Note: It is possible to use Brillo or Ostro instead of the default OS software. If you do, follow the Brillo or Ostro OS documentation to get a Node.js application running on the device.

For my Node.js app, I required three node modules, as well as their dependencies:

  • eddystone-beacon
  • parse-color
  • johnny-five

The first of these, eddystone-beacon, automatically installs noble, which is the node module that I use to talk via Bluetooth Low Energy.

Note: It is important not to list noble as a dependency in the package.json file, as you need to use the same noble instance as eddystone-beacon for them to work together.

You can find more info here.

The package.json file for the project looks like this:

{
  "name": "edison-webbluetooth-demo-server",
  "version": "1.0.0",
  "main": "main.js",
  "engines": {
    "node": ">=0.10.0"
  },
  "dependencies": {
    "eddystone-beacon": "^1.0.5",
    "johnny-five": "^0.9.30",
    "parse-color": "^1.0.0"
  }
}

Announcing the web site

The latest version (M49) of Chrome on Android supports Physical Web, which allows Chrome to see URLs being broadcasted by devices around it. There are a few requirements: the sites need to be publicly accessible and use HTTPS.

The Eddystone protocol has an 18-byte size limit on URLs, so to make the URL for my demo app work (https://edison-webbt.appspot.com/), I need to use a URL shortener.

Broadcasting the URL is quite simple. Import the required libraries and call a few functions. One way of doing this is by calling advertiseUrl when the BLE chip is turned on:

var beacon = require("eddystone-beacon");
var bleno = require('eddystone-beacon/node_modules/bleno');

bleno.on('stateChange', function(state) {
  // Start advertising the URL once the BLE chip is powered on.
  if (state === 'poweredOn') {
    beacon.advertiseUrl("https://goo.gl/9FomQC", {name: 'Edison'});
  }
});

That really couldn’t be much easier. You see in the image below that Chrome finds the device nicely.

Note: Physical Web enabled devices only show up on your phone when Bluetooth is turned on and you launch Chrome (currently this only works with the Beta channel). Additionally, you have to opt into the feature the first time you launch Chrome Beta.

Chrome announces nearby Physical Web beacons. The web app URL is listed.

Communicating with the sensor/actuator

We use Johnny-Five to talk to our board enhancements. In simple cases like this, it is not strictly easier than communicating with the raw inputs (pins) manually, but for bigger projects it can be a real help. Johnny-Five has a nice abstraction for talking to the TMP36 sensor, but for some reason I could only get it to return undefined as the current temperature, so I went ahead and read the temperature value manually.

Below you can find the simple code for listening to temperature changes as well as setting the initial LED color.

var five = require("johnny-five");
var Edison = require("edison-io");
var board = new five.Board({
  io: new Edison()
});

board.on("ready", function() {
  var led = new five.Led.RGB({
      pins: {
          red: 3,
          green: 5,
          blue: 6
      },
  });

  colorCharacteristic._led = led;
  led.color(colorCharacteristic._value);
  led.intensity(30);

  board.analogRead("A0", function(raw) {
    var mV = 5 * 1000 * (raw / 1024);
    var value = (mV / 10) - 50;
    temperatureCharacteristic.valueChange(value);
  });
}

You can ignore the above *Characteristic variables for now; these will be defined in the later section about interfacing with Bluetooth.

As you may notice, I talk to the TMP36 via the analog A0 port. The voltage legs on the color LED cathode are connected to digital pins 3, 5 and 6, which happen to be the pulse-width modulation (PWM) pins on the Edison Arduino breakout board.

Edison board

Talking to Bluetooth

Talking to Bluetooth couldn’t be much easier than it is with noble.

In the following example, we create two Bluetooth Low Energy characteristics: one for the LED and one for the temperature sensor. The former allows us to read the current LED color and set a new color. The latter allows us to subscribe to temperature change events.

Initially I had some problems with the Bluetooth connection being unstable, not working on every startup, or bailing out with a Frame Reassemble failure while connecting.

If that happens, run the rfkill block bluetooth command, followed by rfkill unblock bluetooth over the serial connection to make it work again. The startup issue went away when I started powering the device from a power supply instead of using USB for power.

If you encounter Frame Reassemble failures, reduce how often you send temperature change events until you no longer encounter the failure.

Generally you should always use external power when using Bluetooth or when you connect something like a servo to your board.

With noble, creating a characteristic is quite easy. All you need to do is to define how the characteristic communicates and define a UUID. The communication options are read, write, notify, or any combination thereof. The easiest way to do this is to create a new object and inherit from bleno.Characteristic.

Note: I am not using ES2015/ES2016 features here, as the Edison currently ships with an older version of Node.js.

With the newly launched Ostro Project, which supports the Edison, that is no longer the case. If you use Brillo as part of the Brillo Early Access Program, it is also possible to compile and install a recent version of Node.js.

The resulting characteristic object looks like the following:

var TemperatureCharacteristic = function() {
  bleno.Characteristic.call(this, {
    uuid: 'fc0a',
    properties: ['read', 'notify'],
    value: null
  });

  this._lastValue = 0;
  this._total = 0;
  this._samples = 0;
  this._onChange = null;
};

util.inherits(TemperatureCharacteristic, bleno.Characteristic);

We store the current temperature value in this._lastValue. For a “read” to work, we need to add an onReadRequest method and encode the value.

TemperatureCharacteristic.prototype.onReadRequest = function(offset, callback) {
  var data = new Buffer(8);
  data.writeDoubleLE(this._lastValue, 0);
  callback(this.RESULT_SUCCESS, data);
};

For “notify” we need to add methods to handle subscription and unsubscription. Basically, we simply store a callback. When we have a new temperature reading we want to send, we call that callback with the new value (encoded as above).

TemperatureCharacteristic.prototype.onSubscribe = function(maxValueSize, updateValueCallback) {
  console.log("Subscribed to temperature change.");
  this._onChange = updateValueCallback;
  this._lastValue = undefined;
};

TemperatureCharacteristic.prototype.onUnsubscribe = function() {
  console.log("Unsubscribed to temperature change.");
  this._onChange = null;
};

As values can fluctuate a bit, we need to smooth out the values we get from the TMP36 sensor. I opted to simply take the average of 100 samples (NO_SAMPLES in the code below) and only send updates when the temperature changes by at least 1 degree.

var NO_SAMPLES = 100; // number of samples to average, per the text above

TemperatureCharacteristic.prototype.valueChange = function(value) {
  this._total += value;
  this._samples++;

  if (this._samples < NO_SAMPLES) {
    return;
  }

  var newValue = Math.round(this._total / NO_SAMPLES);

  this._total = 0;
  this._samples = 0;

  if (this._lastValue && Math.abs(this._lastValue - newValue) < 1) {
    return;
  }

  this._lastValue = newValue;

  console.log(newValue);
  var data = new Buffer(8);
  data.writeDoubleLE(newValue, 0);

  if (this._onChange) {
    this._onChange(data);
  }
};

That was the temperature sensor. The color LED is simpler. The object as well as the “read” method are shown below. The characteristic is configured to allow for “read” and “write” operations and has a different UUID than the temperature characteristic.

var ColorCharacteristic = function() {
  bleno.Characteristic.call(this, {
    uuid: 'fc0b',
    properties: ['read', 'write'],
    value: null
  });
  this._value = 'ffffff';
  this._led = null;
};

util.inherits(ColorCharacteristic, bleno.Characteristic);

ColorCharacteristic.prototype.onReadRequest = function(offset, callback) {
  var data = new Buffer(this._value);
  callback(this.RESULT_SUCCESS, data);
};

To control the LED from the object, I add a this._led member which I use to store the Johnny-Five LED object. I also set the color of the LED to its default value (white, aka #ffffff).

board.on("ready", function() {
  ...
  colorCharacteristic._led = led;
  led.color(colorCharacteristic._value);
  led.intensity(30);
  ...
});

The “write” method receives a string (just like “read” sends a string), which can consist of a CSS color code (for example, CSS names like rebeccapurple or hex codes like #ff00bb). I use a node module called parse-color to always get the hex value, which is what Johnny-Five expects.

var parse = require('parse-color'); // the parse-color module mentioned above

ColorCharacteristic.prototype.onWriteRequest = function(data, offset, withoutResponse, callback) {
  var value = parse(data.toString('utf8')).hex;
  if (!value) {
    callback(this.RESULT_SUCCESS);
    return;
  }

  this._value = value;
  console.log(value);

  if (this._led) {
    this._led.color(this._value);
  }
  callback(this.RESULT_SUCCESS);
};

All of the above will not work if we don’t include the bleno module, and eddystone-beacon will not work with bleno unless you use the bleno instance distributed with it. Luckily, doing that is quite simple:

var bleno = require('eddystone-beacon/node_modules/bleno');
var util = require('util');

Now all we need is for it to advertise our device (UUID) and its characteristics (other UUIDs):

bleno.on('advertisingStart', function(error) {
    ...
    bleno.setServices([
      new bleno.PrimaryService({
        uuid: 'fc00',
        characteristics: [
          temperatureCharacteristic, colorCharacteristic
        ]
      })
    ]);
});

Creating the client web app

Without getting into too many details on how the non-Bluetooth parts of the client app work, we can demonstrate a responsive user interface created in Polymer as an example. The resulting app is shown below:

Client app on phone Error message

The right side shows an earlier version that includes a simple error log I added to ease development.

Web Bluetooth makes it easy to communicate with Bluetooth Low Energy devices, so let’s look at a simplified version of my connection code. If you don’t know how promises work, check out this resource before reading further.

Connecting to a Bluetooth device involves a chain of promises. First we filter for the device (UUID: FC00, name: Edison). This displays a dialog to allow the user to select the device given the filter. Then we connect to the GATT service and get the primary service and associated characteristics, and then we read the values and set up notification callbacks.

Note: To make successive reads/writes in the promise chain happen properly, it is best practice to avoid fetching the characteristics in parallel with something like Promise.all([p1, p2]).

The simplified version of our code below only works with the latest Web Bluetooth API and therefore requires Chrome Dev (M49) on Android.

navigator.bluetooth.requestDevice({
  filters: [{ name: 'Edison' }],
  optionalServices: [0xFC00]
})

.then(device => device.gatt.connect())

.then(server => server.getPrimaryService(0xFC00))

.then(service => {
  let p1 = () => service.getCharacteristic(0xFC0B)
  .then(characteristic => {
    this.colorLedCharacteristic = characteristic;
    return this.readLedColor();
   });

  let p2 = () => service.getCharacteristic(0xFC0A)
  .then(characteristic => {
    characteristic.addEventListener(
      'characteristicvaluechanged', this.onTemperatureChange);
    return characteristic.startNotifications();
  });

  return p1().then(p2);
})

.catch(err => {
  // Catch any error.
})

.then(() => {
  // Connection fully established, unless there was an error above.
});

Reading and writing a string from a DataView / ArrayBuffer (what the WebBluetooth API uses) is just as easy as using Buffer on the Node.js side. All we need to use is TextEncoder and TextDecoder:

readLedColor: function() {
  return this.colorLedCharacteristic.readValue()
  .then(data => {
    // In Chrome 50+, a DataView is returned instead of an ArrayBuffer.
    data = data.buffer ? data : new DataView(data);
    let decoder = new TextDecoder("utf-8");
    let decodedString = decoder.decode(data);
    document.querySelector('#color').value = decodedString;
  });
},

writeLedColor: function() {
  let encoder = new TextEncoder("utf-8");
  let value = document.querySelector('#color').value;
  let encodedString = encoder.encode(value.toLowerCase());

  return this.colorLedCharacteristic.writeValue(encodedString);
},

Handling the characteristicvaluechanged event for the temperature sensor is also quite easy:

onTemperatureChange: function(event) {
  let data = event.target.value;
  // In Chrome 50+, a DataView is returned instead of an ArrayBuffer.
  data = data.buffer ? data : new DataView(data);
  let temperature = data.getFloat64(0, /*littleEndian=*/ true);
  document.querySelector('#temp').innerHTML = temperature.toFixed(0);
},

Summary

That was it folks! As you can see, communicating with Bluetooth Low Energy using Web Bluetooth on the client side and Node.js on the Edison is quite easy and very powerful.

Using the Physical Web and Web Bluetooth, Chrome finds the device and allows the user to easily connect to it, without installing applications that are rarely used yet keep updating from time to time.

Demo

You can try the client to get inspired on how you can create your own web apps to connect to your custom Internet of Things devices.

Source code

The source code is available here. Feel free to report issues or send patches.

Sketch

If you are really adventurous and want to reproduce what I have done, refer to the Edison and breadboard sketch below:

Sketch

A new Device Mode for a mobile-first generation

We introduced Device Mode, a way to emulate devices and work with responsive designs, a bit more than a year ago. Now it’s time for its first major upgrade, starting in Chrome 49. So, what’s new?

Mobile is becoming the starting point in Chrome DevTools. While we offered ways to emulate mobile in the past, the development default was desktop. Mobile emulation always had to be turned on. Now that consumption of mobile sites has overtaken desktop in many places, we’re switching our position in DevTools as well.

What’s new?

New Device Mode

First and foremost, the UI is streamlined and uses a lot less space. We expect the new Device Mode to become the main development mode for most, so a clean and simple design that extends the main DevTools navigation bar was a requirement.

The new Device Mode, with the quick-jump device ruler above the media queries.

In addition, we’ve centered the viewport and added a new quick-jump device ruler at the top that gives you an idea of the most common device sizes, a great help when designing responsively.

And finally, a lot of options have been bundled or hidden behind a toggle whenever possible. These new composite options make it a lot easier to switch between modes. To toggle certain controls or customize your experience of the toolbar, hit the little three-dot menu icon.

Responsive by default

Device Mode dropdown

The main DevTools toolbar now expands to the left side of the browser window and includes the most important tools to emulate a variety of mobile and desktop devices. You can choose between two development modes:

  • Responsive
  • Specific Device

In both modes, the viewport sits in its own resizable window within Chrome. This has the significant advantage that you can maximize your browser window and the DevTools the way you like them and not have them jump around when you test multiple sizes of your page and go back and forth.

Responsive is the mode you’ll want to be in during active iteration to make sure your site works on all sorts of devices, not just a few specific ones. In this mode, the handles next to the viewport are freely resizable.

Specific Device refers to when you choose a specific device and lock the viewport to its size. This becomes useful when you want to get in final fixes and touches for a few popular devices near launch, which is why the dropdown doesn’t show a huge list of all sorts of devices, just the currently most popular ones. If you select one, we do our best to make it behave as closely as possible to the real deal: touch events, user agent, viewport, and device chrome and UI (if available) are all emulated.

Integrated Remote Debugging

Emulations, even the best ones available, can only get you so far. There are simply things that emulations can’t do today, like:

  • Check if a button is large enough for your thumb.
  • Test the performance of your site on a slower phone.
  • Debug random quirks and limitations of certain devices.

To sufficiently test all of these scenarios, you need to test, work and debug using actual physical devices.

Inspect Devices dialog

For a while now, you could browse to chrome://inspect, connect your device over USB and open a remote debugging session via DevTools. But we’ve now gone one step further and refactored how remote debugging looks and behaves, embedding it into the core of DevTools. Instead of browsing to another page, you can now access Inspect Devices as a dialog directly within the new main menu. This makes it much easier to include physical debugging into your workflow – just plug in your phone, no need to exit your DevTools!

New homes for the rest of the former emulation controls

Since mobile is now the default across DevTools, features like network throttling moved to their proper home, in this case the Network Panel.

More Tools

Some features, like sensor emulation or rendering settings such as emulating print media, have been moved to a consistent place in the Drawer. You can find all of the extras in the new main menu under “More tools”.

We know this is a significant change that we’ll all have to get used to. You’ll find full coverage about everything that’s in it in the just-updated Device Mode docs. We’d love to hear from you on Twitter or, if you need more than 140 characters, on our bug tracker (yes, even for feature requests).


Experiment Time: Scroll Anchoring

Have you ever visited a web page, started reading some of the content, and then the page suddenly pops due to ads or images loading, making you lose your place on the page?

Well it might be worth checking out the Scroll Anchoring flag in Chrome Dev / Canary.

Scroll Anchoring keeps track of where you are on the page and prevents anything that causes a reflow from disrupting your position on the page.

To try this feature out for yourself do the following:

  1. Go to chrome://flags/#enable-scroll-anchoring on Chrome Dev / Canary
  2. Select “Enabled” from the dropdown
  3. Click “Relaunch Now” at the bottom of the screen

With this you’ll have scroll anchoring enabled.

We’ve been using this for a while and we believe that this drastically improves the experience for all users on the web but we want to make sure that it works well everywhere. If you spot any examples where scroll anchoring failed to handle reflows in the page or examples where it shouldn’t have intervened, we desperately want to hear about it!

Send us feedback / examples where you’ve seen unexpected behaviour by filling out this form: g.co/reportbadreflow

FAQ

How Does This Change Affect JavaScript Scrolling?

In short - it doesn’t.

This change alters the effect of scrolling caused by reflows. For example, adding a class name to an element that causes it to increase in height will cause a reflow and scroll anchoring will prevent the page from jumping around.

Calling window.scrollTo(0, 1) (yes, the old school hack) doesn’t cause a reflow and will behave normally. The same goes for touch events.

If you find an example where scroll anchoring is affecting your page, please send feedback via this form: g.co/reportbadreflow

What’s New with KeyboardEvents? Keys and Codes!

The past few versions of Chrome have seen two additions to KeyboardEvents, which are used as a parameter passed to keydown, keypress, and keyup event listeners. Both the code attribute (added in Chrome 48) and the key attribute (added in Chrome 51) give developers a straightforward way to get information that would otherwise be difficult using legacy attributes.

The code Attribute

First up is the code attribute. This is set to a string representing the key that was pressed to generate the KeyboardEvent, without taking the current keyboard layout (e.g., QWERTY vs. Dvorak), locale (e.g., English vs. French), or any modifier keys into account. This is useful when you care about which physical key was pressed, rather than which character it corresponds to. For example, if you’re writing a game, you might want a certain set of keys to move the player in different directions, and that mapping should ideally be independent of keyboard layout.

The key Attribute

Next, we have the new key attribute. It’s also set to a string, but while code returns information about the physical key that was pressed, key contains the character that is generated by that key, taking into account the current keyboard layout, locale and modifier keys. Looking at the key attribute’s value comes in handy when you need to know what character would be displayed on the screen as if the user had typed into a text input.

What’s This Mean in Practice?

To give a concrete example, let’s assume your user is using a U.S. keyboard with a QWERTY keyboard layout. Pressing the physical Q key on that keyboard will result in a KeyboardEvent with a code attribute set to "KeyQ". This is true regardless of keyboard layout, and regardless of any other modifier keys. For comparison, on a French (AZERTY) keyboard this key would still have a code of "KeyQ" even though the letter printed on the keycap is an “a”.

Pressing the physical Q key on that same U.S. keyboard will typically generate a KeyboardEvent with key set to "q" (with no modifier keys), or "Q" (with Shift or CapsLock), or "œ" (on OS X, with Alt). On a French AZERTY keyboard, this same key would generate an “a” (or “A” with Shift or CapsLock). And for other keyboard layouts, the key value could be "й", "ض", "ㅂ", "た", or some other character.

Revisiting our game example from earlier, if you want your game to use the WASD keys for movement, you can use the code attribute and check for "KeyW", "KeyA", "KeyS" and "KeyD". This will work for all keyboards and all layouts—even AZERTY keyboards that swap the position of the “w” and “z” keys.
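
As a sketch, a layout-independent movement handler might look like the following, where movePlayer() is a hypothetical game function:

document.addEventListener('keydown', function(e) {
  // e.code identifies the physical key, so this works the same on
  // QWERTY, AZERTY and Dvorak layouts. movePlayer() is hypothetical.
  switch (e.code) {
    case 'KeyW': movePlayer(0, -1); break;
    case 'KeyA': movePlayer(-1, 0); break;
    case 'KeyS': movePlayer(0, 1); break;
    case 'KeyD': movePlayer(1, 0); break;
  }
});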

Virtual Keyboards

You’ll notice that up until now, we’ve been focusing on the behavior assuming a physical keyboard. What about users who are typing on a virtual keyboard or an alternative input device? The specification offers official guidance for the code attribute. To summarize, a virtual keyboard that mimics the layout of a standard keyboard is expected to result in an appropriate code attribute being set, but virtual keyboards that adopt non-traditional layouts may result in code not being set at all.

Things are more straightforward for the key attribute, which you should expect to be set to a string based on which character the user (virtually) typed.

Try it Out

Gary Kačmarčík has put together a fantastic demo for visualizing all the attributes associated with KeyboardEvents.

Cross-Browser Support

Support for the code attribute is, as of this writing, limited to Chrome 48+, Opera 35+, and Firefox 44+. The key attribute is supported in Firefox 44+, Chrome 51+, and Opera 38+, with partial support in Internet Explorer 9+ and Edge 13+.

Streamlining the Sign-in Flow Using Credential Management API

To provide a sophisticated user experience, it’s important to help users authenticate themselves to your website. Authenticated users can interact with each other using a dedicated profile, sync data across devices, or process data while offline; the list goes on and on. But creating, remembering and typing passwords tends to be cumbersome for end users, especially on mobile screens, which leads them to re-use the same passwords across different sites. This, of course, is a security risk.

The latest version of Chrome (51) supports the Credential Management API. It’s a standards-track proposal at the W3C that gives developers programmatic access to a browser’s credential manager and helps users sign in more easily.

What is the Credential Management API?

The Credential Management API enables developers to store and retrieve password credentials and federated credentials, and it provides three functions:

  • navigator.credentials.get()
  • navigator.credentials.store()
  • navigator.credentials.requireUserMediation()

By using these simple APIs, developers can do powerful things like:

  • Enable users to sign in with just one tap.
  • Remember the federated account the user has used to sign in with.
  • Sign users back in when a session expires.

In Chrome’s implementation, credentials are stored in Chrome’s password manager. If users are signed into Chrome, their passwords are synced across devices. Those synced passwords can also be shared with Android apps that have integrated the Smart Lock for Passwords API for Android, for a seamless cross-platform experience.

Integrating Credential Management API to Your Site

The way you use the Credential Management API with your website can vary depending on its architecture. Is it a single page app? Is it a legacy architecture with page transitions? Is the sign-in form located only at the top page? Are sign-in buttons located everywhere? Can users meaningfully browse your website without signing in? Does federation work within popup windows? Or does it require interaction across multiple pages?

It’s nearly impossible to cover all those cases, but let’s have a look at a typical single page app.

  • The top page is a registration form.
  • By tapping on “Sign In” button, users will navigate to a sign-in form.
  • Both the registration and sign-in forms have the typical options of id/password credentials and federation, e.g. with Google Sign-In and Facebook Sign-In.

By using the Credential Management API, you can, for example, add the following features to the site:

  • Show an account chooser when signing in: Shows a native account chooser UI when a user taps “Sign In”.
  • Store credentials: Upon successful sign-in, offer to store the credential information to the browser’s password manager for later use.
  • Let the user automatically sign back in: Let the user sign back in if a session is expired.
  • Mediate auto sign-in: Once a user signs out, disable automatic sign-in for the next visit of the user.

You can experience these features implemented in a demo site with its sample code.

Note that this API needs to be used on secure origins such as HTTPS domains or localhost.

Show Account Chooser when Signing In

Between a user tap of a “Sign In” button and navigation to a sign-in form, you can use navigator.credentials.get() to get credential information. Chrome will show an account chooser UI from which the user can pick an account.


An account chooser UI pops up for the user to select an account to sign in with

Getting a Password Credential Object

To show password credentials as account options, use password: true.

navigator.credentials.get({
  password: true, // `true` to obtain password credentials  
}).then(function(cred) {
  // continuation  
  ...

Using a Password Credential to Sign In

Once the user makes an account selection, the resolving function will receive a password credential. You can send it to the server using fetch():

// continued from previous example  
}).then(function(cred) {
  if (cred) {
    if (cred.type == 'password') {
      // Construct FormData object  
      var form = new FormData();

      // Append CSRF Token from the hidden form field  
      var csrf_token = document.querySelector('input[name="csrf_token"]').value;
      form.append('csrf_token', csrf_token);

      // You can append additional credential data to `.additionalData`  
      cred.additionalData = form;

      // `POST` the credential object as `credentials`.  
      // id, password and the additional data will be encoded and  
      // sent to the url as the HTTP body.  
      fetch(url, {           // Make sure the URL is HTTPS  
        method: 'POST',      // Use POST  
        credentials: cred    // Add the password credential object  
      }).then(function() {
        // continuation  
      });
    } else if (cred.type == 'federated') {
      // continuation

Using a Federated Credential to Sign In

To show federated accounts to a user, add federated, which takes an array of identity providers, to the get() options.


When multiple accounts are stored in the password manager

navigator.credentials.get({
  password: true, // `true` to obtain password credentials  
  federated: {
    providers: [  // Specify an array of IdP strings  
      'https://accounts.google.com',
      'https://www.facebook.com'
    ]
  }
}).then(function(cred) {
  // continuation  
  ...

You can examine the type property of the credential object to see if it’s PasswordCredential (type == 'password') or FederatedCredential (type == 'federated').
If the credential is a FederatedCredential, you can call the appropriate API using information it contains.

});
    } else if (cred.type == 'federated') {
      // `provider` contains the identity provider string  
      switch (cred.provider) {
        case 'https://accounts.google.com':
          // Federated login using Google Sign-In  
          var auth2 = gapi.auth2.getAuthInstance();

          // In the Google Sign-In library, you can specify an account.  
          // Attempt to sign in by using `login_hint`.
          return auth2.signIn({
            login_hint: cred.id || ''
          }).then(function(profile) {
            // continuation  
          });
          break;

        case 'https://www.facebook.com':
          // Federated login using Facebook Login  
          // continuation  
          break;

        default:
          // show form  
          break;
      }
    }
  // if the credential is `undefined`  
  } else {
    // show form

Store Credentials

When a user signs in to your website using a form, you can use navigator.credentials.store() to store the credential. The user will be prompted to store it or not. Depending on the type of the credential, use new PasswordCredential() or new FederatedCredential() to create a credential object you’d like to store.


Chrome asks users if they want to store the credential (or a federation provider)

Creating and Storing a Password Credential from a Form Element

The following code uses autocomplete attributes to automatically map the form’s elements to PasswordCredential object parameters.

HTML

<form id="form" method="post">
  <input type="text" name="id" autocomplete="username" />
  <input type="password" name="password" autocomplete="current-password" />
  <input type="hidden" name="csrf_token" value="******" />
</form>

JavaScript

var form = document.querySelector('\#form');
var cred = new PasswordCredential(form);
// Store it  
navigator.credentials.store(cred)
.then(function() {
  // continuation  
});

Creating and Storing a Federated Credential

// After a federation, create a FederatedCredential object using   
// information you have obtained  
var cred = new FederatedCredential({
  id: id,                                  // The id for the user  
  name: name,                              // Optional user name  
  provider: 'https://accounts.google.com', // A string that represents the identity provider  
  iconURL: iconUrl                         // Optional user avatar image url  
});
// Store it  
navigator.credentials.store(cred)
.then(function() {
  // continuation  
});

Let the User Automatically Sign Back In

When a user leaves your website and comes back later, it’s possible that the session has expired. Don’t make the user type their password every time they come back; let the user automatically sign back in.


When a user is automatically signed in, a notification will pop up.

Getting a Credential Object

navigator.credentials.get({
  password: true, // Obtain password credentials or not  
  federated: {    // Obtain federation credentials or not  
    providers: [  // Specify an array of IdP strings  
      'https://accounts.google.com',
      'https://www.facebook.com'
    ]
  },
  unmediated: true // `unmediated: true` lets the user automatically sign in  
}).then(function(cred) {
  if (cred) {
    // auto sign-in possible  
    ...
  } else {
    // auto sign-in not possible  
    ...
  }
});

The code should look similar to what you’ve seen in the “Show Account Chooser when Signing In” section. The only difference is that you will set unmediated: true.

Setting this causes the promise to resolve immediately with the credential so that you can automatically sign the user in. Auto sign-in works only under a few conditions:

  • The user has acknowledged the automatic sign-in feature in a warm welcome.
  • The user has previously signed in to the website using the Credential Management API.
  • The user has only one credential stored for your origin.
  • The user did not explicitly sign out in the previous session.

If any of these conditions are not met, the promise will resolve with undefined instead.

Mediate Auto Sign-in

When a user signs out from your website, it’s your responsibility to ensure that the user will not be automatically signed back in. To ensure this, the Credential Management API provides a mechanism called mediation. You can enable mediation mode by calling navigator.credentials.requireUserMediation(). As long as the user’s mediation status for the origin is turned on, calling navigator.credentials.get() with unmediated: true will resolve with undefined.

Mediating Auto Sign-in

navigator.credentials.requireUserMediation();

FAQ

Is it possible for JavaScript on the website to retrieve a raw password?
No. You can only obtain passwords as a part of PasswordCredential and it’s not exposable by any means.

Is it possible to store three sets of digits for an id using the Credential Management API?
Not currently. Your feedback on the specification will be highly appreciated.

Can I use the Credential Management API inside an iframe?
The API is restricted to top-level contexts. Calls to .get() or .store() in an iframe will resolve immediately without effect.

Can I integrate my password management Chrome extension with the Credential Management API?
You may override navigator.credentials and hook it to your Chrome Extension to get() or store() credentials.

API Deprecations and Removals in Chrome 51

In nearly every version of Chrome we see a significant number of updates and improvements to the product, its performance, and also capabilities of the web platform.

Deprecation policy

To keep the platform healthy, we sometimes remove APIs from the Web Platform that have run their course. There can be many reasons to remove an API: it may be superseded by a newer API, it may be updated to reflect changes to specifications, it may need to change for alignment and consistency with other browsers, or it may be an early experiment that never came to fruition in other browsers and thus increases the burden of support for web developers.

Some of these changes might affect a very small number of sites, and to mitigate issues ahead of time we try to give developers advance notice so that, if needed, they can make the required changes to keep their sites running.

Chrome currently has a process for deprecations and removals of APIs and the TL;DR is:

  • Announce on blink-dev
  • Set warnings and give time scales in the developer console of the browser when usage is detected on a page
  • Wait, monitor and then remove feature as usage drops

You can find a list of all deprecated features in chromestatus.com using the deprecated filter and removed features by applying the removed filter. We will also try to summarize some of the changes, reasoning, and migration paths in these posts.

In Chrome 51 (April 2016) there are a number of changes to Chrome.

Remove Custom Messages in onbeforeunload Dialogs

TL;DR: A window’s onbeforeunload property no longer supports a custom string.

Intent to Remove | Chromestatus Tracker | Chromium Bug

A window’s onbeforeunload property may be set to a function that returns a string that is shown to the user in a dialog box to confirm that the user wants to navigate away. This was intended to prevent users from losing data during navigation. Unfortunately, it is often used to scam users.

Starting in Chrome 51, a custom string will no longer be shown to the user. Chrome will still show a dialog to prevent users from losing data, but its contents will be set by the browser instead of the web page.

With this change, Chrome will be consistent with Safari 9.1 and later, as well as Firefox 4 and later.
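
To illustrate, here is a minimal sketch of a beforeunload handler; in these browsers the dialog still appears, but the custom string is ignored in favor of generic browser text:

window.addEventListener('beforeunload', function(e) {
  var message = 'You have unsaved changes.';
  // Setting returnValue (or returning a string) triggers the dialog.
  // Chrome 51+, Safari 9.1+ and Firefox 4+ ignore the string itself.
  e.returnValue = message;
  return message;
});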

Deprecated results attribute for <input type=search>

TL;DR: The results attribute is being deprecated because it’s not part of any standard and is inconsistently implemented across browsers.

Intent to Remove | Chromestatus Tracker | Chromium Bug

The results attribute is not implemented in most browsers and behaves inconsistently in those that do. For example, Chrome responds by adding a magnifier icon to the input box, while on desktop Safari it controls how many previously submitted queries are shown in a popup displayed by clicking the magnifier icon. Since this isn’t part of any standard, it’s being deprecated.

Removal is expected in Chrome 53.

IntersectionObserver’s Coming into View

Let’s say you want to track when an element in your DOM enters the visible viewport. You might want to do this so you can lazy-load images just in time, or because you need to know if the user is actually looking at a certain ad banner. You can do that by hooking up the scroll event or by using a periodic timer and calling getBoundingClientRect() on that element. This approach, however, is painfully slow, as each call to getBoundingClientRect() forces the browser to re-layout the entire page and will introduce considerable jank to your website. Matters get close to impossible when you know your site is being loaded inside an iframe and you want to know when the user can see an element: the same-origin policy means the browser won’t let you access any data from the web page that contains the iframe. This is a common problem for ads, for example, which are frequently loaded using iframes.

Making this visibility test more efficient is what IntersectionObserver was designed for, and it’s landed in Chrome 51 (which is, as of this writing, the beta release). IntersectionObservers let you know when an observed element enters or exits the browser’s viewport.

How to Create an IntersectionObserver

The API is rather small, and best described using an example:

var io = new IntersectionObserver(
	entries => {
		console.log(entries);
	},
	{
		/* Using default options. Details below */
	}
);
// Start observing an element
io.observe(element);

// Stop observing an element
// io.unobserve(element);

// Disable entire IntersectionObserver
// io.disconnect();

Using the default options for IntersectionObserver, your callback will be called both when the element comes partially into view and when it completely leaves the viewport.

If you need to observe multiple elements, it is both possible and advised to do so using a single IntersectionObserver instance by calling observe() multiple times.
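
For example, reusing the io instance from above ('img[data-src]' is a placeholder selector for lazy-loadable images):

var lazyImages = document.querySelectorAll('img[data-src]');
for (var i = 0; i < lazyImages.length; i++) {
  // All elements report through the same callback; a single callback
  // invocation may contain entries for several observed elements.
  io.observe(lazyImages[i]);
}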

An entries parameter is passed to your callback which is an array of IntersectionObserverEntry objects. Each such object contains updated intersection data for one of your observed elements.


🔽[IntersectionObserverEntry]
    time: 3893.92
    intersectionRatio: 0.54
  🔽rootBounds: ClientRect
      bottom: 920
      height: 920
      left: 0
      right: 1024
      top: 0
      width: 1024
  🔽boundingClientRect: ClientRect
      // ...
  🔽intersectionRect: ClientRect
      // ...
  🔽target: div#observee
      // ...

rootBounds is the result of calling getBoundingClientRect() on the root element, which is the viewport by default. boundingClientRect is the result of getBoundingClientRect() called on the observed element. intersectionRect is the intersection of these two rectangles and effectively tells you which part of the observed element is visible. intersectionRatio is closely related, and tells you how much of the element is visible. With this info at your disposal, you are now able to implement features like just-in-time loading of assets before they become visible on screen. Efficiently.

IntersectionObservers deliver their data asynchronously, and your callback code will run in the main thread. Additionally, the spec actually says that IntersectionObserver implementations should use requestIdleCallback(). This means that the call to your provided callback is low priority and will be made by the browser during idle time. This is a conscious design decision.

Scrolling divs

I am not a big fan of scrolling inside an element, but I am not here to judge, and neither are IntersectionObservers. The options object takes a root option that lets you define an alternative to the viewport as your root. It is important to keep in mind that root needs to be an ancestor of all the observed elements.
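
A short sketch, assuming a scrollable container with the placeholder id scroll-container:

var container = document.querySelector('#scroll-container');
var io = new IntersectionObserver(
  entries => console.log(entries),
  { root: container } // intersections are computed against this element
);
// The observed element must be a descendant of the root.
io.observe(container.querySelector('.list-item'));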

Intersect all the Things!

No! Bad developer! That’s not mindful usage of your user’s CPU cycles. Let’s think about an infinite scroller as an example: In that scenario, it is definitely advisable to add sentinels to the DOM and observe (and recycle!) those. You should add a sentinel close to the last item in the infinite scroller. When that sentinel comes into view, you can use the callback to load data, create the next items, attach them to the DOM and reposition the sentinel accordingly. If you properly recycle the sentinel, no additional call to observe() is needed. The IntersectionObserver keeps working.
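
Here is a rough sketch of that pattern; loadMoreItems() and the #sentinel element are hypothetical:

var sentinel = document.querySelector('#sentinel');
var io = new IntersectionObserver(entries => {
  // With the default threshold the callback also fires when the
  // sentinel leaves the viewport; only act when it is coming into view.
  if (entries[0].intersectionRatio <= 0) return;
  loadMoreItems().then(() => {
    // Recycle the sentinel by moving it behind the freshly appended
    // items; no additional observe() call is needed.
    sentinel.parentNode.appendChild(sentinel);
  });
});
io.observe(sentinel);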

Moar Updates, Please

As mentioned earlier, the callback will be triggered a single time when the observed element comes partially into view and another time when it has left the viewport. This way IntersectionObserver gives you an answer to the question, “Is element X in view?”. In some use cases, however, that might not be enough.

That’s where the threshold option comes into play. It allows you to define an array of intersectionRatio thresholds. Your callback will be called every time intersectionRatio crosses one of these values. The default value for threshold is [0], which explains the default behavior. If we change threshold to [0, 0.25, 0.5, 0.75, 1], we will get notified every time an additional quarter of the element becomes visible:
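
A brief sketch, reusing the div#observee element from the entry dump above:

var io = new IntersectionObserver(
  entries => {
    entries.forEach(entry =>
      console.log(Math.round(entry.intersectionRatio * 100) + '% visible'));
  },
  { threshold: [0, 0.25, 0.5, 0.75, 1] }
);
io.observe(document.querySelector('#observee'));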

Any Other Options?

As of now, there’s only one additional option beyond the ones listed above. rootMargin allows you to specify margins for the root, effectively allowing you to either grow or shrink the area used for intersections. These margins are specified using a CSS-style string, à la “10px 20px 30px 40px”, specifying top, right, bottom and left margin respectively. To summarize, the IntersectionObserver options struct offers the following options:

new IntersectionObserver(entries => { /* … */ }, {
  // The root to use for intersection.
  // If not provided, use the top-level document’s viewport.
  root: null,
  // Same as margin; can be 1, 2, 3 or 4 components, possibly negative lengths.
  // If an explicit root element is specified, components may be percentages of the
  // root element size. If no explicit root element is specified, using a percentage
  // is an error.
  rootMargin: "0px",
  // Threshold(s) at which to trigger callback, specified as a ratio, or list of
  // ratios, of (visible area / total area) of the observed element (hence all
  // entries must be in the range [0, 1]). Callback will be invoked when the visible
  // ratio of the observed element crosses a threshold in the list.
  threshold: [0]
});

iframe Magic

IntersectionObservers were designed specifically with ads services and social network widgets in mind, which frequently use iframes and could benefit from knowing whether they are in view. If an iframe observes one of its elements, both scrolling the iframe as well as scrolling the window containing the iframe will trigger the callback at the appropriate times. For the latter case, however, rootBounds will be set to null to avoid leaking data across origins.

What is IntersectionObserver Not About?

Something to keep in mind is that IntersectionObservers are intentionally neither pixel perfect nor low latency. Using them to implement endeavours like scroll-dependent animations is bound to fail, as the data will be – strictly speaking – out of date by the time you get to use it. The explainer has more details about the original use cases for IntersectionObserver.

How Much Work Can I Do in the Callback?

Short’n’Sweet: Spending too much time in the callback will make your app lag – all the common practices apply.

Go Forth and Intersect thy Elements

The browser support for IntersectionObservers is still fairly slim, so it won’t work everywhere right off the bat just yet. In the meantime, a polyfill is being worked on in the WICG’s repository. Obviously, you won’t get the performance benefits using that polyfill that a native implementation would give you.

You can start using IntersectionObservers right now in Chrome Canary! Tell us what you came up with.
