
Geolocation API removed from unsecured origins in Chrome 50


Chrome has publicly stated its intent to deprecate powerful features like geolocation on non-secure origins, and we hope that others will follow.

Starting with Chrome version 50, Chrome no longer supports the HTML5 Geolocation API over non-secure connections to obtain the user’s location. This means that the page that’s making the Geolocation API call must be served over a secure context such as HTTPS and localhost.

When is this changing?

This change is effective as of Chrome version 50 (12PM PST April 20 2016). Chrome has been providing warnings since version 44 (released July 21 2015).
There have been other public announcements, so hopefully this isn’t the first time you’ve seen this.

Why are we making this change?

Location is sensitive data! Requiring HTTPS is the only way to protect the privacy of your users’ location data. If the user’s location is sent over a non-secure connection, anyone on the network can see where that user is. This seriously compromises user privacy.

Who does this affect?

This affects any page currently using the Geolocation API from pages served over HTTP (non-secure). It also affects HTTPS iframes that use the Geolocation API if they are embedded in HTTP pages (you won’t be able to polyfill using a shared HTTPS-based service).

Does my whole web app need HTTPS?

It is not a requirement that the whole app be served via HTTPS to use Geolocation. Only pages that use Geolocation need to be served via HTTPS. However, we strongly suggest that you migrate to HTTPS.

I need to use Geolocation. What should I do?

If you would like to use the HTML5 Geolocation API, or if your site already uses the Geolocation API, please migrate the page making the Geolocation API JavaScript function call to HTTPS, ensuring that it’s used in a secure context.

There are other fallback options for obtaining a user’s location that are not affected by this change, such as the Google Maps Geolocation API, GeoIP (as one example among other geo-based solutions), and user-entered zip codes. However, we strongly recommend moving to HTTPS as the best path to ensure ongoing access to geolocation.
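If you have pages you can’t migrate right away, you can gate the Geolocation call on the origin and fall back otherwise. A minimal sketch of the secure-context rules described above (`isSecureOrigin` is an illustrative helper, not a browser API):

```javascript
// Illustrative helper mirroring the rule above: HTTPS and localhost
// keep access to the Geolocation API in Chrome 50+.
function isSecureOrigin(protocol, hostname) {
  return protocol === 'https:' ||
         hostname === 'localhost' ||
         hostname === '127.0.0.1';
}

// In page code you might guard the call like this:
// if (isSecureOrigin(location.protocol, location.hostname)) {
//   navigator.geolocation.getCurrentPosition(onSuccess, onError);
// } else {
//   // fall back to e.g. a user-entered zip code
// }
```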


DevTools Digest: More power with the new Command Menu


Read about DevTools' new command menu and its over 60 actions that enable crazy fast workflows.

Cmd+Shift+P to bring up the Command Menu

Command Menu in DevTools

The “Jump to File” dialog that appears when you press Cmd+P in the Sources panel isn’t terribly well known, but has been around for a while. We’ve now gone much further than that and developed a text-editor-inspired command menu that can drive almost every important action in DevTools.

Hit Cmd+Shift+P anywhere (even when the page is in focus!) to bring up the Command Menu, then type to filter and hit Enter to trigger the action. A few sample actions you could try:

  • Appearance: Switch to Dark Theme
  • DevTools: Dock to bottom
  • Mobile: Inspect Devices…
  • Network: Go offline

The new command menu is a super quick way to navigate and discover new settings and actions across DevTools.

Pretty-print HTML

Pretty HTML

We’ve had pretty-print for JS and CSS sources built into the Sources panel for a while now, but have just extended it to the Elements panel for full-blown HTML pretty-printing. Give it a try – not only does it reformat the HTML, it also reformats the JavaScript and CSS within it!


As always, let us know what you think via Twitter or the comments below, and submit bugs to crbug.com/new.

Until next month! Paul Bakaus & the DevTools team

Houdini – Demystifying CSS


Have you ever thought about the amount of work CSS does? You change a single attribute and suddenly your entire website appears in a different layout. It’s kind of magic in that regard (can you tell where I am going with this?!). So far, we – the community of web developers – have only been able to witness and observe the magic. What if we want to come up with our own magic tricks and perform them? What if we want to become the magician? Enter Houdini!

The Houdini task force consists of engineers from Mozilla, Apple, Opera, Microsoft, HP, Intel and Google working together to expose certain parts of the CSS engine to web developers. The task force is working on a collection of drafts with the goal to get them accepted by the W3C to become actual web standards. They set themselves a few high-level goals, turned them into specification drafts which in turn gave birth to a set of supporting, lower-level specification drafts. The combination of these drafts is what is usually meant when someone is talking about “Houdini”. At the time of writing, the list of drafts is still incomplete and some of the drafts are mere placeholders. That’s how early in development of Houdini we are.

Disclaimer: I want to give a quick overview of the Houdini drafts so you have an idea of what kind of problems Houdini tries to tackle. As far as the current state of the specs allows, I’ll try to give code examples as well. Keeping that in mind, please be aware that all of these specs are drafts and very volatile. There’s no guarantee that these code samples will be even remotely correct in the future, or that any of these drafts become reality at all.

The specifications

Worklets (spec)

Worklets by themselves are not really useful. They are a concept introduced to make many of the later drafts possible. If you thought of Web Workers when you read “worklet”, you are not wrong. They have a lot of conceptual overlap. So why a new thing when we already have workers? Houdini’s goal is to expose new APIs to allow web developers to hook up their own code into the CSS engine and the surrounding systems. It’s probably not unrealistic to assume that some of these code fragments will have to be run every. single. frame. Some of them have to by definition. Quoting the Web Worker spec:

Workers […] are relatively heavy-weight, and are not intended to be used in large numbers. For example, it would be inappropriate to launch one worker for each pixel of a four megapixel image.

That means web workers are not viable for the things Houdini plans to do. Therefore, worklets were invented. Worklets make use of ES2015 classes as a nice way to define a collection of methods, the signatures of which are predefined by the type of the worklet. They are light-weight and short-lived.

Paint Worklet (spec)

I am starting with this as it introduces the fewest new concepts. From the spec draft itself:

The paint stage of CSS is responsible for painting the background, content and highlight of an element based on that element’s geometry (as generated by the layout stage) and computed style.

This allows you not only to define how an element should draw itself (think of the synergy with Web Components!) but also alter the visuals of existing elements. No need for hacky things like DOM elements to create a ripple effect on buttons. This would allow a significant reduction in DOM node count for common visuals. Another big advantage of running your code at paint time of an element compared to using a regular <canvas> is that you will know the size of the element you are supposed to paint and that you will be aware of fragments and handle them appropriately.

Wait, what are fragments?

Fragments

I think of elements in the DOM tree as boxes that are laid out by the CSS engine to make up my website. It turns out, however, that this mental model becomes flawed once inline elements come into play. A <span> may need to be wrapped; so while still technically being a single DOM node, it has been fragmented into 2, well, fragments. The spec calls the bounding box of these 2 fragments a fragmentainer. I am not even kidding.

Back to the Paint Worklet: Effectively, your code will get called for each fragment and will be given access to a stripped down <canvas>-like API as well as the styles applied to the element, which allows you to draw (but not visually inspect) the fragment. You can even request an “overflow” margin to allow you to draw effects around the element’s boundaries, just like box-shadow.

class {
  static get inputProperties() {
    return ['border-color', 'border-size'];
  }
  paint(ctx, geom, inputProperties) {
    var offset = inputProperties['border-size'];
    var colors = inputProperties['border-color'];
    this.drawFadingEdge(
      ctx,
      0-offset[0], 0-offset[0],
      geom.width+offset[0], 0-offset[0],
      colors[0]);
    this.drawFadingEdge(
      ctx,
      geom.width+offset[1], 0-offset[1],
      geom.width+offset[1], geom.height+offset[1],
      colors[1]);
    this.drawFadingEdge(
      ctx,
      0-offset[2], geom.height+offset[2],
      geom.width+offset[2], geom.height+offset[2],
      colors[2]);
    this.drawFadingEdge(
      ctx,
      0-offset[3], 0-offset[3],
      0-offset[3], geom.height+offset[3],
      colors[3]);
  }
  drawFadingEdge(ctx, x0, y0, x1, y1, color) {
    var gradient =
      ctx.createLinearGradient(x0, y0, x1, y1);
    gradient.addColorStop(0, color);
    var colorCopy = new ColorValue(color);
    colorCopy.opacity = 0;
    gradient.addColorStop(0.5, colorCopy);
    gradient.addColorStop(1, color);
    // Actually stroke the edge with the gradient.
    ctx.strokeStyle = gradient;
    ctx.beginPath();
    ctx.moveTo(x0, y0);
    ctx.lineTo(x1, y1);
    ctx.stroke();
  }
  overflow(inputProperties) {
    // Taking a wild guess here. The return type
    // of overflow() is currently specified
    // as `void`, lol.
    return {
      top: inputProperties['border-size'][0],
      right: inputProperties['border-size'][1],
      bottom: inputProperties['border-size'][2],
      left: inputProperties['border-size'][3],
    };
  }
};
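For completeness, wiring this class up might look something like the following pseudocode. The registration API here is my guess, mirroring the compositor worklet’s import() shown later; nothing like it is specified yet:

```js
// main.js – hypothetical registration, analogous to compositorWorklet.import()
window.paintWorklet.import('fading-border.js');

/* style.css – hypothetical: reference the painter by name
   .fancy { background-image: paint(fading-border); } */
```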

Compositor Worklet

At the time of writing, the compositor worklet doesn’t even have a proper draft, and yet it’s the one that gets me excited the most. As you might know, some operations are outsourced by the CSS engine to the graphics card of your computer, although that depends on both your graphics card and your device in general. A browser usually takes the DOM tree and, based on specific criteria, decides to give some branches and subtrees their own layer. These subtrees paint themselves onto it (maybe using a paint worklet in the future). As a final step, all these individual, now painted, layers are stacked and positioned on top of each other, respecting z-indices, 3D transforms and such, to yield the final image that is visible on your screen. This process is called “compositing” and is executed by the “compositor”. The advantage of this process is that you don’t have to make all the elements repaint themselves when you scroll a tiny bit. Instead, you can reuse the layers from the previous frame and just re-run the compositor with the changed scroll position. This makes things fast. This makes us reach 60fps. This makes Paul Lewis happy.

As the name suggests, the compositor worklet lets you hook into the compositor and influence the way an element’s layer, which has already been painted, is positioned and layered on top of the other layers. To get a little more specific: you can tell the browser that you want to hook into the compositing process for a certain DOM node and can request access to certain attributes like scroll position, transform or opacity. This will force the element onto its own layer, and on each frame your code gets called. You can move your layer around by manipulating the layer’s transform and change its attributes (like opacity), allowing you to do fancy-schmancy things at a whopping 60fps. Here’s a full implementation of parallax scrolling using the compositor worklet.

// main.js
window.compositorWorklet.import('worklet.js')
  .then(function() {
    var animator = new CompositorAnimator('parallax');
    animator.postMessage([
      new CompositorProxy($('.scroller'), ['scrollTop']),
      new CompositorProxy($('.parallax'), ['transform']),
    ]);
  });
// worklet.js
registerCompositorAnimator('parallax', class {
  tick(timestamp) {
    var t = self.parallax.transform;
    t.m42 = -0.1 * self.scroller.scrollTop;
    self.parallax.transform = t;
  }

  onmessage(e) {
    self.scroller = e.data[0];
    self.parallax = e.data[1];
  }
});

My colleague Robert Flack has written a polyfill for the compositor worklet so you can give it a try already – obviously with a much higher performance impact.

Layout Worklet (spec)

Again, a specification that is practically empty, but the concept is intriguing: write your own layout! The layout worklet is supposed to enable you to do display: layout('myLayout') and run your JavaScript to arrange a node’s children in the node’s box. Of course, running a full JavaScript implementation of CSS’s flex-box layout will be slower than running an equivalent native implementation – but it’s easy to imagine a scenario where cutting corners can yield a performance gain. Imagine a website consisting of nothing but tiles à la Windows 10, or a Masonry-style layout. Absolute/fixed positioning is not used, neither is z-index, nor do elements ever overlap or have any kind of border or overflow. Being able to skip all these checks on re-layout could yield a performance gain.

registerLayout('random-layout', class {
    static get inputProperties() {
      return [];
    }
    static get childrenInputProperties() {
      return [];
    }
    layout(children, constraintSpace, styleMap) {
        const width = constraintSpace.width;
        const height = constraintSpace.height;
        for (let child of children) {
            const x = Math.random()*width;
            const y = Math.random()*height;
            const constraintSubSpace = new ConstraintSpace();
            constraintSubSpace.width = width-x;
            constraintSubSpace.height = height-y;
            const childFragment = child.doLayout(constraintSubSpace);
            childFragment.x = x;
            childFragment.y = y;
        }

        return {
            minContent: 0,
            maxContent: 0,
            width: width,
            height: height,
            fragments: [],
            unPositionedChildren: [],
            breakToken: null
        };
    }
});

Typed CSSOM (spec)

Typed CSSOM (CSS Object Model (Cascading Style Sheets Object Model)) addresses a problem we probably all have encountered and just learned to put up with. Let me illustrate with a line of JavaScript:

$('#someDiv').style.height = getRandomInt() + 'px';

We are doing math, converting a number to a string to append a unit just to have the browser parse that string and convert it back to a number for the CSS engine to use. This gets even uglier when you manipulate transforms with JavaScript. No more! CSS is about to get some typing! This draft is one of the more mature ones and a polyfill is actually already being worked on (Disclaimer: Using the polyfill will obviously add even more computational overhead. The point is to show how convenient the API is).

Instead of strings you will be working on an element’s StylePropertyMap, where each CSS attribute has its own key and corresponding value type. Attributes like width have LengthValue as their value type. A LengthValue is a dictionary of all CSS units like em, rem, px, percent, etc. Setting height: calc(5px + 5%) would yield a LengthValue{px: 5, percent: 5}. Some properties like box-sizing just accept certain keywords and therefore have a KeywordValue. The validity of those attributes could now be checked at runtime.

<div style="width: 200px;" id="div1"></div>
<div style="width: 300px;" id="div2"></div>
<div id="div3"></div>
<div style="margin-left: calc(5em + 50%);" id="div4"></div>
var w1 = $('#div1').styleMap.get('width');
var w2 = $('#div2').styleMap.get('width');
$('#div3').styleMap.set('background-size',
  [new SimpleLength(200, 'px'), w1.add(w2)])
$('#div4').styleMap.get('margin-left')
  // => {em: 5, percent: 50}
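To make the per-unit bookkeeping concrete, here is a plain-JavaScript sketch (not the spec API) of how adding two LengthValue-style dictionaries could work, modeling each value as a map from unit to number, the way the draft describes calc() results:

```javascript
// Illustrative only: model a LengthValue as e.g. {px: 5, percent: 5}
// and add two of them unit-by-unit, like w1.add(w2) above.
function addLengths(a, b) {
  var result = {};
  [a, b].forEach(function(lengthValue) {
    Object.keys(lengthValue).forEach(function(unit) {
      result[unit] = (result[unit] || 0) + lengthValue[unit];
    });
  });
  return result;
}

// addLengths({px: 5}, {px: 200, percent: 5})
//   -> {px: 205, percent: 5}
```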

Properties and Values (spec)

Do you know CSS Custom Properties (or their unofficial alias “CSS Variables”)? This is them but with types! So far, variables could only have string values and used a simple search-and-replace approach. This draft would allow you to not only specify a type for your variables, but also define a default value and influence the inheritance behavior using a JavaScript API. Technically, this would also allow custom properties to get animated with standard CSS transitions and animations, which is being considered.

["--scale-x", "--scale-y"].forEach(function(name) {
  document.registerProperty({
    name: name,
    syntax: "<number>",
    inherits: false,
    initialValue: "1"
  });
});

Font Metrics

Font metrics is exactly what it sounds like. What is the bounding box (or the bounding boxes when we are wrapping) when I render string X with font Y at size Z? What if I go all crazy unicode on you like using ruby annotations? This has been requested a lot in the past and Houdini should finally make these wishes come true.

But wait, there’s more!

There are even more specs in Houdini’s list of drafts, but their future is rather uncertain and they are not much more than placeholders for an idea at this point. Examples include custom overflow behaviors, a CSS syntax extension API, extensions of native scroll behavior, and similarly ambitious things that would allow us to do things on the web platform that weren’t possible before.

Gimme!

As of now, none of the Houdini specs has shipped in Chrome. However, some of them may be available behind a “OMG this is totally not for production” flag soon(tm) in Canary.

For what it’s worth, I have open-sourced the code for the demo (live demo using polyfill) videos I made so you can get a feel for what working with worklets is like.

If you want to get involved, there’s always the Houdini mailing list.

Stream Your Way to Immediate Responses


Anyone who’s used service workers could tell you that they’re asynchronous all the way down. They rely exclusively on event-based interfaces, like FetchEvent, and use promises to signal when asynchronous operations are complete.

Asynchronicity is equally important, albeit less visible to the developer, when it comes to responses provided by a service worker’s fetch event handler. Streaming responses are the gold standard here: they allow the page that made the original request to start working with the response as soon as the first chunk of data is available, and potentially use parsers that are optimized for streaming to progressively display the content.

When writing your own fetch event handler, it’s common to just pass the respondWith() method a Response (or a promise for a Response) that you get via fetch() or caches.match(), and call it a day. The good news is that the Responses created by both of those methods are already streamable! The bad news is that “manually” constructed Responses aren’t streamable, at least until now. That’s where the Streams API enters the picture.

Streams?

A stream is a data source that can be created and manipulated incrementally, and provides an interface for reading or writing asynchronous chunks of data, only a subset of which might be available in memory at any given time. For now, we’re interested in ReadableStreams, which can be used to construct a Response object that’s passed to fetchEvent.respondWith():

self.addEventListener('fetch', event => {
  var stream = new ReadableStream({
    start(controller) {
      if (/* there's more data */) {
        controller.enqueue(/* your data here */);
      } else {
        controller.close();
      }
    }
  });

  var response = new Response(stream, {
    headers: {'content-type': /* your content-type here */}
  });

  event.respondWith(response);
});

The page whose request triggered the fetch event will get a streaming response back as soon as event.respondWith() is called, and it will keep reading from that stream as long as the service worker continues enqueue()ing additional data. The response flowing from the service worker to the page is truly asynchronous, and we have complete control over filling the stream!

Real-world Uses

You’ve probably noticed that the previous example had some placeholder /* your data here */ comments, and was light on actual implementation details. So what would a real-world example look like?

Jake Archibald (not surprisingly!) has a great example of using streams to stitch together an HTML response from multiple cached HTML snippets, along with “live” data streamed via fetch()—in this case, content for his blog.

The advantage of using a streaming response, as Jake explains, is that the browser can parse and render the HTML as it streams in, including the initial bit that’s loaded quickly from the cache, without having to wait for the entire blog content fetch to complete. This takes full advantage of the browser’s progressive HTML rendering capabilities. Other resources that can also be progressively rendered, like some image and video formats, can benefit from this approach as well.
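A simplified sketch of the stitching idea (the `stitchStream` helper and the chunk names are illustrative, not Jake’s actual code): enqueue the cached shell chunks first, then the dynamic body, and hand the resulting stream to a Response.

```javascript
// Build one ReadableStream out of several HTML chunks. In a real service
// worker the first chunks would come from the cache and the rest from fetch().
function stitchStream(chunks) {
  var encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      chunks.forEach(function(chunk) {
        controller.enqueue(encoder.encode(chunk));
      });
      controller.close();
    }
  });
}

// Inside a fetch handler you might then respond with:
// event.respondWith(new Response(
//   stitchStream([cachedHeaderHtml, liveBodyHtml, cachedFooterHtml]),
//   {headers: {'content-type': 'text/html'}}
// ));
```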

Streams? Or App Shells?

The existing best practices around using service workers to power your web apps focus on an App Shell + dynamic content model. That approach relies on aggressively caching the “shell” of your web application—the minimal HTML, JavaScript, and CSS needed to display your structure and layout—and then loading the dynamic content needed for each specific page via a client-side request.

Streams bring with them an alternative to the App Shell model, one in which a fuller HTML response is streamed to the browser when a user navigates to a new page. The streamed response can make use of cached resources—so it can still provide the initial chunk of HTML quickly, even while offline!—but it ends up looking more like a traditional, server-rendered response body. For example, if your web app is powered by a content management system that server-renders HTML by stitching together partial templates, that model translates directly into using streaming responses, with the templating logic replicated in the service worker instead of your server. As the following video demonstrates, for that use case, the speed advantage that streamed responses offer can be striking:

One important advantage of streaming the entire HTML response, explaining why it’s the fastest alternative in the video, is that HTML rendered during the initial navigation request can take full advantage of the browser’s streaming HTML parser. Chunks of HTML that are inserted into a document after the page has loaded (as is common in the App Shell model) can’t take advantage of this optimization.

So if you’re in the planning stages of your service worker implementation, which model should you adopt: streamed responses that are progressively rendered, or a lightweight shell coupled with a client-side request for dynamic content? The answer is, not surprisingly, that it depends: on whether you have an existing implementation that relies on a CMS and partial templates (advantage: stream); on whether you expect single, large HTML payloads that would benefit from progressive rendering (advantage: stream); on whether your web app is best modeled as a single-page application (advantage: App Shell); and on whether you need a model that’s currently supported across multiple browsers’ stable releases (advantage: App Shell).

We’re still in the very early days of service worker-powered streaming responses, and we look forward to seeing the different models mature and especially to seeing more tooling developed to automate common use cases.

Diving Deeper into Streams

If you’re constructing your own readable streams, simply calling controller.enqueue() indiscriminately might be neither sufficient nor efficient. Jake goes into some detail about how the start(), pull(), and cancel() methods can be used in tandem to create a data stream that’s tailored to your use case.

For those who want even more detail, the Streams specification has you covered.

Compatibility

Support for constructing a Response object inside a service worker using a ReadableStream as its source was added in Chrome 52.

Firefox’s service worker implementation does not yet support responses backed by ReadableStreams, but there is a relevant tracking bug for Streams API support that you can follow.

Progress on unprefixed Streams API support in Edge, along with overall service worker support, can be tracked at Microsoft’s Platform status page.

Improving scroll performance with passive event listeners


New to Chrome 51, passive event listeners are an emerging web standard that provide a major potential boost to scroll performance. Check out the video below for a side-by-side demo of the improvements in action:

How it works

When you scroll a page and there’s such a delay that the page doesn’t feel anchored to your finger, that’s called scroll jank. Many times when you encounter scroll jank, the culprit is a touch event listener. Touch event listeners are often useful for tracking user interactions and creating custom scroll experiences, such as cancelling the scroll altogether when interacting with an embedded Google Map. Currently, browsers can’t know if a touch event listener is going to cancel the scroll, so they always wait for the listener to finish before scrolling the page. Passive event listeners solve this problem by enabling you to set a flag in the options parameter of addEventListener indicating that the listener will never cancel the scroll. That information enables browsers to scroll the page immediately, rather than after the listener has finished.
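Opting in looks something like the following sketch, with feature detection so older browsers that expect a boolean third argument don’t break. The `supportsPassive` helper is written against any EventTarget-like object so it’s easy to test; `handleTouch` is a placeholder:

```javascript
// Detect support by handing addEventListener an options object whose
// `passive` property reports when it is read.
function supportsPassive(target) {
  var supported = false;
  try {
    var opts = Object.defineProperty({}, 'passive', {
      get: function() { supported = true; return true; }
    });
    target.addEventListener('test', function() {}, opts);
    target.removeEventListener('test', function() {}, opts);
  } catch (e) {}
  return supported;
}

// In a page you might then register a non-blocking touch listener:
// document.addEventListener('touchstart', handleTouch,
//   supportsPassive(window) ? {passive: true} : false);
```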

Learn more

Check out the Chromium blog for a high-level overview of how passive event listeners work:

New APIs to help developers improve scroll performance

And the specification’s repository to learn how to implement passive event listeners:

Passive event listeners explainer

DevTools Digest: DevTools in 2016 and Beyond


Google I/O 2016 is a wrap. DevTools had a strong presence at I/O, including a talk by Paul Bakaus, Paul Irish, and Seth Thompson outlining the future of DevTools. Check out the video below or read on to learn more about where DevTools is headed in 2016 and beyond.

Authoring

DevTools aims to make every stage of the web development lifecycle easier. You probably know that DevTools can help you debug or profile a website, but you may not know how to use it to help you author a site. By “authoring” we mean the act of creating a site. One of the goals in the foreseeable future is to make it easier for you to iterate, experiment, and emulate your site across multiple devices.

Device Mode

The DevTools team continues to iterate on the Device Mode experience, which received its first major upgrade in Chrome 49. Check out the post from March (A new Device Mode for a mobile-first world) for an overview of the updates. The overarching goal is to provide a seamless workflow for viewing and emulating your site across all form factors.

Here’s a quick summary of some Device Mode updates that have landed in Canary since we posted the article back in March.

When viewing a page as a specific device, you can immerse yourself more in the experience by showing the device hardware around your page.

Showing device frame

The device orientation menu lets you view your page when different system UI elements, such as the navigation bar and keyboard, are active.

Showing system UI elements

The desktop story has improved, too. Thanks to Device Mode’s zoom feature, you can emulate desktop screens larger than the screen that you’re actually working on, such as a 4K (2560px) screen.

Showing a 4K screen

You can throttle the network directly from the Device Mode UI, rather than having to switch to the Network panel.

Network throttling

Source diffs

When you iterate upon a site’s styling, it’s easy to lose track of your changes. To fix this, DevTools is going to use visual cues on the line number gutter of the Sources panel to help you keep track of your changes. Deleted lines are indicated with a red line, modified lines are highlighted purple, and new lines are highlighted green.

Sources diff in Sources panel

You’ll also be able to keep track of your changes in the new Quick Source drawer tab:

Quick source drawer tab

For the first time, the Quick Source tab lets you focus on your HTML or JavaScript source code at the same time as your CSS rules. Also, when you click on a CSS property in the Styles pane, the Quick Source tab automatically jumps to and highlights the definition in the source.

Enable the sources diff experiment in Chrome Canary to try it out today.

Live Sass editing

Here’s a sneak peek of some upcoming major improvements to the Sass editing workflow. These features are very early in the experimental phase. We’ll follow up in a later post when they’re ready for you to try out.

Basically, DevTools is going to let you view and edit Sass variables like you always hoped it would. Click on a value that was compiled from a Sass variable, and the new Quick Sources tab jumps to the definition of the variable.

Viewing a Sass variable definition

When editing a value that was compiled from a Sass variable, your edit updates the Sass variable, not just the individual property that you selected.

Editing a Sass variable

Progressive Web Apps

Look at the list of web and Chrome talks at Google I/O 2016 and you’ll see a huge theme emerging in the world of web development: Progressive Web Apps.

As the Progressive Web App model emerges, DevTools is iterating rapidly to provide the tools developers need to create great progressive web apps. We realized that building and debugging these modern applications often comes with unique requirements, so DevTools has dedicated an entire panel to Progressive Web App development. Open up Chrome Canary and you’ll see that the Resources panel has been replaced with the Application panel. All of the previous functionality from the Resources panel is still there. There are just a few new panes designed specifically for Progressive Web App development:

The Manifest pane gives you a visual representation of the app manifest file. From here you can manually trigger the “Add to homescreen” workflow.

Manifest pane

The Service Workers pane lets you inspect and interact with the service worker registered for the current page.

Service Worker pane

And the Clear Storage pane lets you wipe all sorts of data so that you can view a page with a clean slate.

Clear Storage pane

JavaScript

Crossing the boundary between frontend and backend is a difficult part of fullstack web development. If you discover that your backend is returning a 500 status code while debugging a web app, you have effectively reached the limit of DevTools’ usefulness, requiring you to change contexts and fire up your backend development environment to debug the problem.

With backends written in Node.js, however, the boundaries between frontend and backend are starting to blur. Since Node.js runs on the V8 JavaScript engine (the same engine that powers Google Chrome) we wanted to make it possible to debug Node.js from DevTools. Thanks to the V8, DevTools, and Google Cloud Platform for Node.js teams, you can now use all of DevTools’ powerful debugging features to introspect a Node.js app. The functionality has already reached Node.js nightly builds, although DevTools integration is still being polished before being included in a major release. Debugging your Node.js app from DevTools will someday be as simple as passing node --inspect app.js and connecting from DevTools in any Chrome window.

Although existing tools such as Node Inspector provide GUI-based debugging experiences, the new Node.js DevTools integration will remain up-to-date with DevTools’ latest debugging features, such as async stack debugging, blackboxing, and ES6 support.

Performance Observer - Efficient Access to Performance Data


Progressive Web Apps enable developers to build a new class of applications that deliver reliable, high performance user experiences. But to be sure a web app is achieving its desired performance goals, developers need access to high resolution performance measurement data. The W3C Performance Timeline specification defines such an interface for browsers to provide programmatic access to low level timing data. This opens the door to some interesting use cases:

  • offline and custom performance analysis
  • third party performance analysis and visualization tools
  • performance assessment integrated into IDEs and other developer tools

Access to this kind of timing data is already available in most major browsers for navigation timing, resource timing, and user timing. The newest addition is the performance observer interface, which is essentially a streaming interface to gather low level timing information asynchronously, as it’s gathered by the browser. This new interface provides a number of critical advantages over previous methods to access the timeline:

  • Today, apps have to periodically poll and diff the stored measurements, which is costly. This interface offers them a callback (i.e. no need to poll). As a result apps using this API can be more responsive and more efficient.
  • It’s not subject to buffer limits (most buffers are set to 150 items by default), and avoids race conditions between different consumers that may want to modify the buffer.
  • Performance observer notifications are delivered asynchronously and the browser can dispatch them during idle time to avoid competing with critical rendering work.

Beginning in Chrome 52, the performance observer interface is enabled by default. Let’s take a look at how to use it.

<html>
<head>
  <script>
    var observer = new PerformanceObserver(list => {
      list.getEntries().forEach(entry => {
        // Display each reported measurement on console
        if (console) {
          console.log("Name: "       + entry.name      +
                      ", Type: "     + entry.entryType +
                      ", Start: "    + entry.startTime +
                      ", Duration: " + entry.duration  + "\n");
        }
      })
    });
    observer.observe({entryTypes: ['resource', 'mark', 'measure']});
    performance.mark('registered-observer');

    function clicked(elem) {
      // Measure from the 'registered-observer' mark to now
      performance.measure('button clicked', 'registered-observer');
    }
  </script>
</head>
<body>
  <button onclick="clicked(this)">Measure</button>
</body>
</html>

This simple page starts with a script tag defining some JavaScript code:

  • We instantiate a new PerformanceObserver object and pass an event handler function to the object constructor. The constructor initializes the object such that our handler will be called every time a new set of measurement data is ready to be processed (with the measurement data passed as a list of objects). The handler is defined here as an anonymous function that simply displays the formatted measurement data on the console. In a real world scenario, this data might be stored in the cloud for subsequent analysis, or piped into an interactive visualization tool.
  • We register for the types of timing events we’re interested in via the observe() method and call the mark() method to mark the instant at which we registered, which we’ll consider the beginning of our timing intervals.
  • We define a click handler for a button defined in the page body. This click handler calls the measure() method to capture timing data about when the button was clicked.

In the body of the page, we define a button, assign our click handler to the onclick event, and we’re ready to go.

Now, if we load the page and open the Chrome DevTools panel to watch the JavaScript console, a performance measurement is taken every time we click the button. Because we’ve registered to observe such measurements, they are forwarded asynchronously to our event handler (no polling of the timeline required), which displays them on the console as they occur:

The start value represents the starting timestamp for events of type mark (of which this app has only one). Events of type measure have no inherent starting time; they represent timing measurements taken relative to a mark. Thus, the duration values seen here represent the elapsed time between the call to mark(), which serves as a common interval starting point, and each subsequent call to measure().

As you can see, this API is quite simple and it offers the ability to gather filtered, high resolution, real time performance data without polling, which should open the door to more efficient performance tooling for web apps.

API Deprecations in Chrome 52

In nearly every version of Chrome we see a significant number of updates and improvements to the product, its performance, and also capabilities of the web platform. This article describes the changes in Chrome 52, which is in beta as of June 9. This list is subject to change at any time.

Deprecation policy

To keep the platform healthy we sometimes remove APIs from the Web Platform which have run their course. There can be many reasons why we would remove an API: it has been superseded by a newer API, it has been updated to reflect changes to specifications, it needs to change for alignment and consistency with other browsers, or it was an early experiment that never came to fruition in other browsers and thus increases the burden of support for web developers.

Some of these changes might have an effect on a very small number of sites. To mitigate issues ahead of time, we try to give developers advance notice so that, if needed, they can make the required changes to keep their sites running.

Chrome currently has a process for deprecations and removals of APIs, and the TL;DR is:

  • Announce on blink-dev.
  • Set warnings and give time scales in the developer console of the browser when usage is detected on a page.
  • Wait, monitor and then remove feature as usage drops.

You can find a list of all deprecated features in chromestatus.com using the deprecated filter and removed features by applying the removed filter. We will also try to summarize some of the changes, reasoning, and migration paths in these posts.

Block pop-ups from cross-origin iframes during touch events except during a tap gesture

TL;DR: Chrome will begin disallowing pop-ups and other sensitive operations on touch events that don’t correspond to a tap from inside of cross-origin iframes.

Intent to Remove | Chromestatus Tracker | Chromium Bug

By their very nature, touch events can be ambiguous when compared to their corresponding mouse events. For example, if a user slides a finger across the screen, is said user sliding a toggle switch or scrolling the view? Some third-party content in iframes has taken advantage of this ambiguity to intentionally disable scrolling on the containing page.

To combat this, pop-ups and other sensitive operations will be disallowed on touch events from cross-origin iframes. The touchend event will continue to behave as before.

Deprecate overload of postMessage()

TL;DR: An unneeded and little-used variant of the postMessage() interface is being deprecated, specifically postMessage(message, transferables, targetOrigin).

Intent to Remove | Chromestatus Tracker | Chromium Bug

The postMessage() method is a way to securely communicate between the scripts of pages on different origins. WebKit/Blink supports three versions:

  • postMessage(message, targetOrigin)
  • postMessage(message, targetOrigin, transferables)
  • postMessage(message, transferables, targetOrigin)

The last item in this list was an accident from the history of the spec’s evolution and implementation. Because it is rarely used, it will be deprecated and later removed. This applies to both window.postMessage() and worker.postMessage().

Removal is anticipated in Chrome 54.

Remove support for X-Frame-Options in <meta> tags

TL;DR: To both comply with the spec and increase consistency with other browsers, support for X-Frame-Options inside a <meta> tag is being removed.

Intent to Remove | Chromium Bug

The X-Frame-Options HTTP response header indicates whether a browser may render a page in a <frame>, <iframe>, or <object> tag. This lets a site avoid clickjacking, since such pages cannot be embedded in other sites. The current version of the X-Frame-Options spec explicitly restricts user agents from supporting this field inside a <meta> tag.

To both comply with the spec and increase consistency with other browsers, support for X-Frame-Options inside a <meta> tag is being removed.

Remove non-primary button click event

To bring Chrome in line with the UIEvents spec, we’re removing the mouse events for non-primary mouse buttons. What counts as a non-primary mouse button varies by device; generally this means anything other than a right or left mouse button.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove requestAutocomplete()

The requestAutocomplete() function allowed forms to be filled out on demand by the browser’s autofill capability. Yet, more than two years on, this capability is only supported in Blink and its usage is low. For these reasons, requestAutocomplete() is removed in Chrome 52.

Intent to Remove


CSS Containment in Chrome 52

TL;DR

The new CSS Containment property lets developers limit the scope of the browser’s styles, layout and paint work.

CSS Containment. Before: layout takes 59.6ms. After: layout takes 0.05ms

It accepts a few values, giving it this syntax:

contain: none | strict | content | [ size || layout || style || paint ]

It’s in Chrome 52+ and Opera 40+ (and it has public support from Firefox), so give it a whirl and let us know how you go!

The contain property

When making a web app, or even a complex site, a key performance challenge is limiting the effects of styles, layout and paint. Oftentimes the entirety of the DOM is considered “in scope” for computation work, which can mean that attempting a self-contained “view” in a web app can prove tricky: changes in one part of the DOM can affect other parts, and there’s no way to tell the browser what should be in or out of scope.

For example, let’s say part of your DOM looks like this:

<section class="view">
  Home
</section>

<section class="view">
  About
</section>

<section class="view">
  Contact
</section>

And you append a new element to one view, which will trigger styles, layout and paint:

<section class="view">
  Home
</section>

<section class="view">
  About
  <div class="newly-added-element">Check me out</div>
</section>

<section class="view">
  Contact
</section>

In this case, however, the whole DOM is effectively in scope, meaning that style, layout, and paint calculations will have to consider all the elements irrespective of whether or not they were changed. The bigger the DOM, the more computation work that involves, meaning that you could well make your app unresponsive to user input.

The good news is that modern browsers are getting really smart about limiting the scope of styles, layout, and paint work automatically, meaning that things are getting faster without you having to do anything.

But the even better news is that there’s a new CSS property that hands scope controls over to developers: Containment.

CSS Containment is a new property, with the keyword contain, which supports four values:

  • layout
  • paint
  • size
  • style

Each of these values allows you to limit how much rendering work the browser needs to do. Let’s take a look at each in a little more detail.

Layout (contain: layout)

This value turns on layout containment for the element. This ensures that the containing element is totally opaque for layout purposes; nothing outside can affect its internal layout, and vice versa. Containment spec

Layout containment is probably the biggest benefit of containment, along with contain: paint.

Layout is normally document-scoped, making it scale proportionally to the size of your DOM, so if you change an element’s left property (as just one example), every single element in the DOM might need to be checked.

Enabling containment here can potentially reduce the number of elements to just a handful, rather than the whole document, saving the browser a ton of unnecessary work and significantly improving performance.

Paint (contain: paint)

This value turns on paint containment for the element. This ensures that the descendants of the containing element don’t display outside its bounds, so if an element is off-screen or otherwise not visible, its descendants are also guaranteed to be not visible. Containment spec

Scoping paint is another incredibly useful benefit of containment. Paint containment essentially clips the element in question, but it also has a few other side effects:

  • It acts as a containing block for absolutely positioned and fixed position elements. This means any children are positioned based on the element with contain: paint, not on any other parent element such as the document.
  • It becomes a stacking context. This means that things like z-index will have an effect on the element, and children will be stacked according to the new context.
  • It becomes a new formatting context. This means that if you have, for example, a block level element with paint containment, it will be treated as a new, independent layout environment, and layout outside of the element won’t typically affect the containing element’s children.

Size (contain: size)

This value turns on size containment for the element. This ensures that the containing element can be laid out without needing to examine its descendants. Containment spec

What contain: size means is that the element’s children do not affect the parent’s size, and that its inferred or declared dimensions will be the ones used. Consequently if you were to set contain: size but didn’t specify dimensions for the element (either directly or via flex properties), it would be rendered at 0px by 0px!

Size containment is really a belt-and-braces measure to ensure you don’t rely on child elements for sizing, but by itself it doesn’t offer much performance benefit.
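
To avoid that zero-size pitfall, a hypothetical fixed-size widget would pair size containment with explicit dimensions:

```css
/* Hypothetical example: the dimensions must be declared up front,
   because size containment stops children from influencing the box size. */
.widget {
  contain: size;
  width: 200px;
  height: 100px;
}
```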

Style (contain: style)

This value turns on style containment for the element. This ensures that, for properties which can have effects on more than just an element and its descendants, those effects don’t escape the containing element. Containment spec

It can be hard to predict what the effects on the DOM tree of changing an element’s styles will be back up the tree. One example of this is in something like CSS counters, where changing a counter in a child can affect counter values of the same name used elsewhere in the document. With contain: style set, style changes won’t get propagated back up past the containing element.

To be super clear, what contain: style doesn’t provide is scoped styling as you’d get from Shadow DOM; containment here is purely about limiting the parts of the tree that are under consideration when styles are mutated, not when they are declared.

Strict and content containment

You can also combine keywords, such as contain: layout paint, which will apply only those behaviors to an element. But contain also supports two additional values:

  • contain: strict means the same as contain: layout style paint size
  • contain: content means the same as contain: layout style paint

Using strict containment is great when you know the size of the element ahead of time (or wish to reserve its dimensions), but bear in mind that if you declare strict containment without dimensions, because of the implied size containment, the element may be rendered as a 0px by 0px box.

Content containment, on the other hand, offers significant scope improvements, but does not require you to know or specify the dimensions of the element ahead of time.

Of the two, contain: content is the one you should look to use by default. You should treat strict containment as more of an escape hatch for when contain: content isn’t strong enough for your needs.
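
Applied to the earlier .view markup, that advice might look like this (a sketch; the modifier class is hypothetical):

```css
/* Sensible default: no dimensions need to be known ahead of time. */
.view {
  contain: content; /* shorthand for: layout style paint */
}

/* Escape hatch: strict containment, safe only with explicit dimensions. */
.view--fixed {
  contain: strict;  /* shorthand for: layout style paint size */
  height: 600px;
}
```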

Let us know how you get on

Containment is a great way to start indicating to the browser what you intend to be kept isolated within your page. Give it a try in Chrome 52+ and let us know how you get on!

Service worker caching, playbackRate and blob URLs for audio and video on Chrome for Android

Sometimes good things have boring names.

Case in point: the Unified Media Pipeline, UMP for short.

This may sound like a sinister Soviet era directive, but in fact it’s an important step towards consistent cross-platform audio and video delivery. Chrome on Android will now use the same media stack as desktop Chrome, rather than relying on the underlying platform implementation.

UMP enables you to do a lot:

  • Cache audio and video with service workers, since media delivery is now implemented directly within Chrome rather than being passed off to the Android media stack.
  • Use blob URLs for audio and video elements.
  • Set playbackRate for audio and video.
  • Pass MediaStreams between Web Audio and MediaRecorder.
  • Develop and maintain media apps more easily across devices — media works the same on desktop and Android.

UMP took some hard engineering work to implement:

  • A new caching layer for improved power performance.
  • A new MediaCodec-based video decoder hosted in Chrome’s GPU process.
  • Lots of testing and iteration on different devices.

Here’s a demo of video caching with a service worker:

Screenshot of video playback

Caching the video file and the video poster image is as simple as adding their paths to the list of URLs to prefetch:

<video controls poster="static/poster.jpg">
  <source src="static/video.webm" type="video/webm" />
  <p>This browser does not support the video element.</p>
</video>

var urlsToPrefetch = [
  'static/video.webm', 'static/poster.jpg',
];

There’s some shim code in this demo to handle range requests, which are not yet implemented by Service Worker.

The inability to change playbackRate on Android has been a long-standing bug. UMP fixes this. For the demo at simpl.info/video/playbackrate, playbackRate is set to 2. Try it out!

Screenshot of video playback with playbackRate set to 2

UMP enables blob URLs for media elements — which means that, for example, you can now play back a video recorded using the MediaRecorder API in a video element on Android:

Screenshot of playback in Chrome on Android of a video recorded using the MediaRecorder API

Here’s the relevant code:

var recordedBlobs = [];

mediaRecorder.ondataavailable = function(event) {
  if (event.data && event.data.size > 0) {
    recordedBlobs.push(event.data);
  }
};

function play() {
  var superBuffer = new Blob(recordedBlobs, {type: 'video/webm'});
  recordedVideo.src = window.URL.createObjectURL(superBuffer);
}

For the demo at simpl.info/video/offline, video is stored using the File APIs, then played back using a Blob URL:

Screenshot of playback in Chrome on Android of video stored using the File APIs

function writeToFile(fileEntry, blob) {
  fileEntry.createWriter(function(fileWriter) {
    fileWriter.onwriteend = function() {
      readFromFile(fileEntry.fullPath);
    };
    fileWriter.onerror = function(e) {
      log('Write failed: ' + e.toString());
    };
    fileWriter.write(blob);
  }, handleError);
}

function readFromFile(fullPath) {
  window.fileSystem.root.getFile(fullPath, {}, function(fileEntry) {
    fileEntry.file(function(file) {
      var reader = new FileReader();
      reader.onloadend = function() {
        video.src = URL.createObjectURL(new Blob([this.result]));
      };
      reader.readAsArrayBuffer(file);
    }, handleError);
  }, handleError);
}

The Unified Media Pipeline has also been enabled for Media Source Extensions (MSE) and Encrypted Media Extensions (EME).

This is another step towards unifying mobile and desktop Chrome. You don’t need to change your code, but building a consistent media experience across desktop and mobile should now be easier, since the media stack is the same across platforms. Debugging with Chrome DevTools? Mobile emulation now uses the ‘real’ audio and video stack.

If you experience problems as a result of the Unified Media Pipeline, please file issues on the implementation bug or via new.crbug.com.

Demos

Relevant bugs

There are a couple of known bugs affecting <video>, service workers and the Cache Storage API.

Browser support

  • Enabled by default in Chrome 52 and above.

Flexbox gets new behavior for absolute-positioned children

A previous version of the CSS Flexible Box Layout specification set the static position of absolute-positioned children as though they were a 0x0 flex item. The latest version of the spec takes them fully out of flow and sets the static position based on align and justify properties. At the time of this writing, Edge and Opera 39 for desktop and Android already support this.

For an example, let’s apply some positioning behaviors to the following HTML.

<div class="container">
  <div>
    <p>In Chrome 52 and later, the green box should be centered vertically and horizontally in the red box.</p>
  </div>
</div>

We’ll add something like this:

.container {
  display: flex;
  align-items: center;
  justify-content: center;
}
.container > * {
  position: absolute;
}

In Chrome 52 or later, the nested <div> will be perfectly centered in the container <div>.

In non-conforming browsers, the top left corner of the green box will be in the top center of the red box.

Goodbye Short Sessions: a proposal for using service-workers to improve cookie management on the web

We all love how native apps will ask you to login only once and then remember you until you tell them you want to log out. Unfortunately the web doesn’t always work that way.

Now that devices, especially mobile devices, are more personal, and more sites are sending all traffic over HTTPS (reducing the risk of token theft), websites should reconsider their short-lived cookie policies and adopt more user-friendly, longer-lived sessions.

However, even if you want to make sessions last longer, some websites don’t verify the user’s authentication on each request, which means there is no way to revoke a session cookie once it has been issued. This normally leads to short sessions: the user is forced to sign in frequently so their authentication can be re-validated, allowing things like a password change to invalidate existing sessions within a known amount of time.

If this is an approach that you use, we have a technical solution that may help you automatically re-validate the stateless authentication cookie. It works by having a secondary long-lived token that can be used to refresh your existing short-lived authentication cookie. Leveraging the new service worker pattern allows us to regularly “check in” with the long-lived token, verify the user’s authentication (e.g. check that they have not recently changed their password or otherwise revoked the session) and re-issue a new short-lived authentication cookie.

A practical proposal for migrating to safe long sessions on the web

From here, this post describes a new technique we’re proposing that we call 2-Cookie-Handoff (2CH). We are hoping to use this article to hear community feedback on whether this approach seems positive, and if so to work with the industry on documenting best practices for using 2CH.

Service workers are a new technology supported by multiple browsers such as Chrome, Firefox and Opera, and coming soon to Edge. They allow you to intercept all network requests from your site through a common point of code on the client, without modifying the existing pages. This allows you to set up a “2CH worker” for logged-in users that can intercept all of the network requests your page is making and perform the token swapping just like mobile apps do.

Your server may already have an endpoint used by mobile apps to obtain a new short-lived token, typically using the OAuth protocol. To enable this on the web, that endpoint just needs to be updated to understand when it is being called by a service worker, and then return a new short-lived session cookie formatted in a way that other pages on the site already expect.

If your server doesn’t already have such an endpoint, it can create one just for browser session management.

The two-token pattern with service workers follows the OAuth 2.0 pattern fairly closely; if you already run an OAuth token endpoint, you can likely re-use it with service workers for your web authentication.

You may also be wondering what happens if the user visits your site in a browser that doesn’t support service workers. If you implement the above approach, they will simply experience no difference and continue to have short sessions.

We have published a sample client and backend. We hope you will try it for yourself and answer a survey about session management.

ECDSA for WebRTC: better security, better privacy and better performance

From Chrome 52, WebRTC uses a much more efficient and secure algorithm for certificate (RTCCertificate) generation: ECDSA. In addition, RTCCertificates can now be stored with IndexedDB.

RTCCertificates are the self-signed certificates used in the DTLS handshake when setting up a WebRTC peer connection. (DTLS is an implementation of the cryptographic protocol TLS for datagram protocols such as UDP, which is used by WebRTC.)

Until recently, WebRTC used RSA-1024 keys for certificates. There are several disadvantages with these keys:

  • Generating RSA-1024 keys can add up to around 1000ms in call setup time.
  • 1024-bit RSA keys do not provide adequate cryptographic strength.

Because certificate generation with RSA-1024 is slow, some mobile apps have resorted to preparing certificates in advance or reusing them.

The key strength issue could be resolved by going to 2048-bit RSA keys or more, but that would delay call setup by several additional seconds. Instead of changing the RSA key size, Chrome 52 implements ECDSA keys (Elliptic Curve Digital Signature Algorithm) for use in certificates. These are as strong as 3072-bit RSA keys, but several thousand times faster: call setup overhead with ECDSA is just a few milliseconds.

Breaking an RSA key requires you to factor a large number. We are pretty good at factoring large numbers and getting better all the time. Breaking an ECDSA key requires you to solve the Elliptic Curve Discrete Logarithm Problem (ECDLP). The mathematical community has not made any major progress in improving algorithms to solve this problem since it was independently introduced by Koblitz and Miller in 1985.

— Nick Sullivan, CloudFlare

All in all, ECDSA keys mean better security, better privacy and better performance — especially on mobile. For these reasons, ECDSA has been mandated in the WebRTC Security Architecture draft.

From Chrome 47 you can opt in to ECDSA:

// or webkitRTCPeerConnection
RTCPeerConnection.generateCertificate({
  name: "ECDSA",
  namedCurve: "P-256"
}).then(function(certificate) {
  var pc = new RTCPeerConnection({..., certificates: [certificate]});
});

From Chrome 52, though ECDSA is enabled by default, you can still choose to generate RSA certificates:

pc.generateCertificate({
  name: "RSASSA-PKCS1-v1_5",
  modulusLength: 2048,
  publicExponent: new Uint8Array([1, 0, 1]),
  hash: "SHA-256"
})

(See the W3C draft for more information about generateCertificate().)

Storing RTCCertificate in IndexedDB

Another improvement in Chrome 52: the RTCCertificates used by WebRTC can be saved and loaded from IndexedDB storage, avoiding the need to generate new certificates between sessions. This can be useful, for example, if you still need to use RSA and want to avoid the RSA generation overhead. With ECDSA, caching is not necessary since it is fast enough to generate a new certificate every time.

RTCCertificate IndexedDB storage has already shipped in Firefox and is in Opera 39.

Find out more

Persistent Storage

With Chrome 52, we’re introducing the ability to make storage persistent. Storage for web applications is a complex topic, and persistence for data on the frequently-ephemeral web doubly so, so I should explain.

Normally, web applications store local data in various ways - in IndexedDB databases, through the Cache API, even (gasp) localStorage. All of this storage for a given domain takes up space on the local machine, of course.

When storage on the local machine is running tight (“under storage pressure”), user agents automatically clear storage to make more available space. Of course, for offline apps, this may be unfortunate, as they may not have synced their data to the server yet, or they may be apps that the user expects to just work offline (like a music player); so the Storage spec defines two different modes for storage for a given domain - “best effort” and “persistent”. The default mode, of course, is “best effort”. Storage for a domain that is “best effort” (aka “not persistent”) can be cleared automatically, without interrupting or asking the user. However, “persistent” data will not be automatically cleared. (If the system is still under storage pressure after clearing all non-persistent data, the user will need to manually clear any remaining persistent storage.)

How do I make my storage persistent?

So how do I make my storage persistent? Well, you have to ask for it explicitly:

if (navigator.storage && navigator.storage.persist)
  navigator.storage.persist().then(granted => {
    if (granted)
      alert("Storage will not be cleared except by explicit user action");
    else
      alert("Storage may be cleared by the UA under storage pressure.");
  });

This feature is still somewhat experimental. So in order to keep from prematurely baking this design in before it’s fully specified and agreed upon, we’ve implemented this feature in Chrome Stable as an Origin Trial. To use this API in Chrome Stable, you’ll need to request a token and insert it in your application.

The trial will end in October 2016. (By that point, we expect to have figured out any changes necessary to stabilize the feature and move it out from Origin Trials.) Or, of course, your users can use Chrome Canary, or enable experimental web features in chrome://flags.

Today the permission will be automatically granted to any sites that the user has bookmarked, and automatically denied otherwise. We plan to change this very soon to be a usage-based heuristic that takes into account user actions like add to home screen. The goal is to ensure that users can rely on their favorite web apps and not find they have suddenly been cleared.

You can also use the JavaScript API to tell if persistence has been granted already:

if (navigator.storage && navigator.storage.persist)
  navigator.storage.persisted().then(persistent=>{
    if (persistent)
      console.log("Storage will not be cleared except by explicit user action");
    else
      console.log("Storage may be cleared by the UA under storage pressure.");
  });

You probably want to request permission, but then use the persisted() API to decide whether to display offline UI (like enabling a checkbox for “make available offline”), confirming to the user that they can be confident it will be available offline (even under storage pressure). This gives graceful degradation when the user won’t get an offline experience.

What about “Clear Data”? Will the user still inadvertently wipe my data?

This is still under development, but in short, the goal is to make sure users are aware of “persistent” data before clearing it - ideally letting them manually manage any such data. We’re still designing the fine-grained options and user flow for how this can best work, but from your app, you can presume that “persistent” means your data won’t get cleared without the user being explicitly informed and directly in control of that deletion.

The landscape of persistently storing data in the web platform is still changing, but we’re excited to take this strong first step in making web applications more reliable!


Complexities of an infinite scroller

TL;DR: Re-use your DOM elements and remove the ones that are far away from the viewport. Use placeholders to account for delayed data. Here’s a demo and the code for the infinite scroller.

Infinite scrollers pop up all over the internet. Google Music’s artist list is one, Facebook’s timeline is one and Twitter’s live feed is one as well. You scroll down and before you reach the bottom, new content magically appears seemingly out of nowhere. It’s a seamless experience for users and it’s easy to see the appeal.

The technical challenge behind an infinite scroller, however, is harder than it seems. The range of problems you encounter when you want to do The Right Thing™ is vast. It starts with simple things like the links in the footer becoming practically unreachable because content keeps pushing the footer away. But the problems get harder. How do you handle a resize event when someone turns their phone from portrait to landscape or how do you prevent your phone from grinding to a painful halt when the list gets too long?

The Right Thing™

We thought that was reason enough to come up with a reference implementation that shows a way to tackle all these problems in a reusable way while maintaining performance standards.

We are going to use 3 techniques to achieve our goal: DOM recycling, tombstones and scroll anchoring.

Our demo case is going to be a Hangouts-like chat window where we can scroll through the messages. The first thing we need is an infinite source of chat messages. Technically, none of the infinite scrollers out there are truly infinite, but with the amount of data that is available to get pumped into these scrollers they might as well be. For simplicity’s sake we will just hard-code a set of chat messages and pick message, author and occasional image attachment at random with a sprinkle of artificial delay to behave a little bit more like the real network.

DOM recycling

DOM recycling is an underutilized technique to keep the DOM node count low. The general idea is to use already created DOM elements that are off-screen instead of creating new ones. Admittedly, DOM nodes themselves are cheap, but they are not free, as each of them adds extra cost in memory, layout, style and paint. Low-end devices will get noticeably slower, if not completely unusable, if the website has too big of a DOM to manage. Also keep in mind that every relayout and reapplication of your styles – a process that is triggered whenever a class is added or removed from a node – grows more expensive with a bigger DOM. Recycling your DOM nodes means that we are going to keep the total number of DOM nodes considerably lower, making all these processes faster.
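
A minimal recycling pool can be sketched as follows (the names `createPool`, `get` and `recycle` are illustrative, not taken from the demo code):

```javascript
// A minimal element pool: reuse off-screen nodes instead of creating new ones.
// `createFn` stands in for document.createElement in a real page.
function createPool(createFn) {
  const free = [];
  return {
    get() {
      // Reuse a recycled node if one is available, otherwise create one.
      return free.pop() || createFn();
    },
    recycle(node) {
      // Called when a node scrolls far away from the viewport.
      free.push(node);
    }
  };
}
```

In the real scroller, a recycled node is repopulated with new content and repositioned before it re-enters the viewport.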

The first hurdle is the scrolling itself. Since we will only have a tiny subset of all available items in the DOM at any given time, we need to find another way to make the browser’s scrollbar properly reflect the amount of content that is theoretically there. We will use a 1px by 1px sentinel element with a transform to force the element that contains the items – the runway – to have the desired height. We will promote every element in the runway to its own layer to make sure the layer of the runway itself is completely empty. No background color, nothing. If the runway’s layer is non-empty, it is not eligible for the browser’s optimizations and we will have to store a texture on our graphics card that has a height of a couple of hundred thousand pixels. Definitely not viable on a mobile device.

Whenever we scroll, we will check if the viewport has come sufficiently close to the end of the runway. If so, we will extend the runway by moving the sentinel element and moving the items that have left the viewport to the bottom of the runway and populate them with new content.
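
The “sufficiently close” check is simple arithmetic; a sketch (the threshold value is an arbitrary choice, not taken from the demo):

```javascript
// Decide whether the runway needs to grow, given the current scroll state.
// All values are in CSS pixels.
function shouldExtendRunway(scrollTop, viewportHeight, runwayHeight, threshold) {
  // How much runway is left below the bottom edge of the viewport?
  const distanceToEnd = runwayHeight - (scrollTop + viewportHeight);
  return distanceToEnd < threshold;
}
```

When this returns true, the scroller moves the sentinel down and repositions recycled items at the bottom of the runway.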

(Diagram: the runway, the sentinel element and the viewport.)

The same goes for scrolling in the other direction. We will, however, never shrink the runway in our implementation, so that the scrollbar position stays consistent.

Tombstones

As we mentioned earlier, we try to make our data source behave like something in the real world. With network latency and everything. That means that if our users make use of flicky scrolling, they can easily scroll past the last element we have data for. If that happens, we will place a tombstone item – a placeholder – that will get replaced by the item with actual content once the data has arrived. Tombstones are also recycled and have a separate pool for re-usable DOM elements. We need that so we can make a nice transition from a tombstone to the item populated with content, which would otherwise be very jarring to the user and might actually make them lose track of what they were focusing on.

Such tomb. Very stone. Wow.

An interesting challenge here is that real items can have a bigger height than the tombstone item because of differing amounts of text per item or an attached image. To resolve this, we will adjust the current scroll position every time data comes in and a tombstone is being replaced above the viewport, anchoring the scroll position to an element rather than a pixel value. This concept is called scroll anchoring.

Scroll Anchoring

Our scroll anchoring will be invoked both when tombstones are being replaced as well as when the window gets resized (which also happens when the device is being flipped!). We will have to figure out what the top-most visible element in the viewport is. As that element could only be partially visible, we will also store the offset from the top of the element where the viewport begins.

If the viewport is resized and the runway has changed, we are able to restore a situation that feels visually identical to the user. Win! Except a resized window means that each item has potentially changed its height, so how do we know how far down the anchored content should be placed? We don’t! To find out we would have to lay out every element above the anchored item and add up all of their heights; this could cause a significant pause after a resize, and we don’t want that. Instead, we resort to assuming that every item above is the same size as a tombstone and adjust our scroll position accordingly. As elements are scrolled into the runway, we adjust our scroll position, effectively deferring the layout work to when it is actually needed.
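
Finding and restoring the anchor can be sketched as a scan over the cached item positions (a simplified stand-in for the demo’s implementation; function names are ours):

```javascript
// Given the cached top offsets of each item (ascending) and the current
// scrollTop, return the top-most visible item and how far the viewport
// has scrolled into it.
function findAnchor(itemTops, scrollTop) {
  for (let i = itemTops.length - 1; i >= 0; i--) {
    if (itemTops[i] <= scrollTop) {
      return { index: i, offset: scrollTop - itemTops[i] };
    }
  }
  return { index: 0, offset: 0 };
}

// After a resize, restore scrollTop from the anchor using the new layout
// (in practice the new tops are estimated from the tombstone height).
function restoreScroll(newItemTops, anchor) {
  return newItemTops[anchor.index] + anchor.offset;
}
```
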

Layout

I have skipped over an important detail: Layout. Each recycling of a DOM element would normally relayout the entire runway which would bring us well below our target of 60 frames per second. To avoid this, we are taking the burden of layout onto ourselves and use absolutely positioned elements with transforms. This way we can pretend that all the elements further up the runway are still taking up space when in actuality there is only empty space. Since we are doing layout ourselves, we can cache the positions where each item ends up and we can immediately load the correct element from cache when the user scrolls backwards.

Ideally, items would only get repainted once when they get attached to the DOM and be unfazed by additions or removals of other items in the runway. That is possible, but only with modern browsers.

Bleeding-edge tweaks

Recently, Chrome added support for CSS Containment, a feature that allows us developers to tell the browser that an element is a boundary for layout and paint work. Since we are doing layout ourselves here, it’s a prime application for containment. Whenever we add an element to the runway, we know the other items don’t need to be affected by the relayout. So each item should get contain: layout. We also don’t want to affect the rest of our website, so the runway itself should get this style directive as well.

Another thing we considered is using IntersectionObservers as a mechanism to detect when the user has scrolled far enough for us to start recycling elements and load new data. However, IntersectionObservers are specified to be high latency (as if using requestIdleCallback), so we might actually feel less responsive with IntersectionObservers than without. Even our current implementation using the scroll event suffers from this problem, as scroll events are dispatched on a “best effort” basis. Eventually, Houdini’s Compositor Worklet would be the high fidelity solution to this problem.

It’s still not perfect

Our current implementation of DOM recycling is not ideal as it adds all elements that pass through the viewport, instead of just caring about the ones that are actually on screen. This means that when you scroll reaaally fast, you put so much layout and paint work on Chrome that it can’t keep up. You will end up seeing nothing but the background. It’s not the end of the world but definitely something to improve on.

We hope you see how challenging simple problems can become when you want to combine a great user experience with high performance standards. With Progressive Web Apps becoming first-class citizens on mobile phones, this will become more important and web developers will have to continue investing into using patterns that respect performance constraints.

All the code can be found in our repository. We have done our best to keep it reusable, but won’t be publishing it as an actual library on npm or as a separate repo. The primary use is educational.

Offline Google Analytics Made Easy

So you’ve got a progressive web app, complete with a service worker that allows it to work offline. Great! But you’ve also got existing Google Analytics set up for your web app, and you don’t want to miss out on any analytical insights coming from usage that occurs while offline. But if you try to send data to Google Analytics while offline, those requests will fail and the data will be lost.

The solution, it shouldn’t surprise you to learn, is service workers! Specifically, it’s adding code to your service worker to store Google Analytics requests (using IndexedDB) and retry them later when there’s hopefully a network available. We shared code to handle this logic as part of the open source Google I/O web app, but realized this was a useful pattern, and copying and pasting code can be fragile.

Today, we’re happy to announce that everything you need to handle offline Google Analytics requests within your service worker has been bundled up into an npm package: npm install --save-dev sw-offline-google-analytics

Using sw-offline-google-analytics

From within your existing service worker code, add the following:

// This code should live inside your service worker JavaScript, ideally
// before any other 'fetch' event handlers are defined:

// First, import the library into the service worker global scope:
importScripts('path/to/offline-google-analytics-import.js');

// Then, call goog.offlineGoogleAnalytics.initialize():
// See https://github.com/GoogleChrome/sw-helpers/tree/master/projects/sw-offline-google-analytics#googofflinegoogleanalyticsinitialize
goog.offlineGoogleAnalytics.initialize();

// At this point, implement any other service worker caching strategies
// appropriate for your web app.

That’s all there is to it!

What’s going on under the hood?

sw-offline-google-analytics sets up a new fetch event handler in your service worker, which responds to requests made to the Google Analytics domain. (The library ignores non-Google Analytics requests, giving your service worker’s other fetch event handlers a chance to implement appropriate strategies for those resources.) It will first attempt to fulfill the request against the network. If the user is online, that will proceed as normal.

If the network request fails, the library will automatically store information about the request to IndexedDB, along with a timestamp indicating when the request was initially made. Each time your service worker starts up, the library will check for queued requests and attempt to resend them, along with some additional Google Analytics parameters:

  • A qt parameter, set to the amount of time that has passed since the request was initially attempted, to ensure that the original time is properly attributed.
  • Any additional parameters and values supplied in the parameterOverrides property of the configuration object passed to goog.offlineGoogleAnalytics.initialize(). For example, you could include a custom dimension to distinguish requests that were resent from the service worker from those that were sent immediately.

If resending the request succeeds, then great! The request is removed from IndexedDB. If the retry fails, and the initial request was made less than 24 hours ago, it will be kept in IndexedDB to be retried the next time the service worker starts. You should note that Google Analytics hits older than four hours are not guaranteed to be processed, but resending somewhat older hits “just in case” shouldn’t hurt.
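
The retry bookkeeping boils down to a small decision per queued hit; a sketch (the helper name and the shape of the stored `hit` record are illustrative, not the library’s internals):

```javascript
// Decide how to replay a queued analytics request.
// `hit` carries the original request URL and the time it was first attempted;
// `now` is the current time. Both are in milliseconds since the epoch.
function replayUrl(hit, now) {
  const age = now - hit.timestamp;
  if (age > 24 * 60 * 60 * 1000) {
    return null; // older than 24 hours: give up and drop the hit
  }
  // qt tells Google Analytics how long ago the hit really happened.
  return hit.url + '&qt=' + age;
}
```
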

sw-offline-google-analytics also implements a “network first, falling back to cache” strategy for the actual analytics.js JavaScript code needed to bootstrap Google Analytics.

There’s more to come!

sw-offline-google-analytics is part of the larger sw-helpers project, which is a collection of libraries meant to provide drop-in enhancements to existing service worker implementations.

Also part of that project is sw-appcache-behavior, a library that implements caching strategies defined in an existing AppCache manifest inside of a service worker. It’s intended to help you migrate from AppCache to service workers while maintaining a consistent caching strategy, at least initially.

If you have other library ideas, we’d love to hear from you. So please file a request in the issue tracker!

Web Push Interoperability Wins

When Chrome first supported the Web Push API, it relied on the non-standard Google Cloud Messaging (GCM) sender ID and its protocol. Although it was proprietary, it allowed the Web Push API to be made available to developers at a time when the Web Push Protocol spec was still being written, and it later provided authentication (meaning the message sender is who they say they are) at a time when the Web Push Protocol lacked it. Good news: neither of these is true anymore.

GCM and Chrome now support the standard Web Push Protocol, while sender authentication can be achieved by implementing VAPID, meaning your web app no longer needs a gcm_sender_id.

In this article, I’m going to first describe how to convert your existing server code to use the Web Push Protocol with GCM. Next, I’ll show you how to implement VAPID in both your client and server code.

GCM Supports Web Push Protocol

Let’s start with a little context. When your web application registers for a push subscription, it’s given the URL of a push service. Your server will use this endpoint to send data to your user via your web app. In Chrome you’ll be given a GCM endpoint if you subscribe a user without VAPID. (We’ll cover VAPID later.) Before GCM supported the Web Push Protocol you had to extract the GCM registration ID from the end of the URL and put it in the header before making a GCM API request. For example, a GCM endpoint of https://android.googleapis.com/gcm/send/ABCD1234 would have a registration ID of ‘ABCD1234’.
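
Extracting the registration ID from the legacy endpoint was a matter of string slicing; something like:

```javascript
// Pull the GCM registration ID off the end of a legacy endpoint URL,
// e.g. 'https://android.googleapis.com/gcm/send/ABCD1234' -> 'ABCD1234'.
function gcmRegistrationId(endpoint) {
  return endpoint.split('/').pop();
}
```
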

Now that GCM supports Web Push Protocol you can leave the endpoint intact and use the URL as a Web Push Protocol endpoint. (This brings it in line with Firefox and hopefully every other future browser.)

Before we dive into VAPID, we need to make sure our server code correctly handles the GCM endpoint. Below is an example of making a request to a push service in Node. Notice that for GCM we’re adding the API key to the request headers. For other push service endpoints this won’t be needed. For Chrome prior to version 52, Opera Android and the Samsung Browser, you’re also still required to include a gcm_sender_id in your web app’s manifest.json. The API key and gcm_sender_id are used to check whether the server making the requests is actually allowed to send messages to the receiving user.

const headers = new Headers();
// 12-hour notification time to live.
headers.append('TTL', 12 * 60 * 60);
// Assuming no data is going to be sent
headers.append('Content-Length', 0);

// Assuming you're not using VAPID (read on), this
// proprietary header is needed
if (subscription.endpoint
  .indexOf('https://android.googleapis.com/gcm/send/') === 0) {
  // Replace GCM_API_KEY with the API key from your Google Developer project.
  headers.append('Authorization', 'key=GCM_API_KEY');
}

fetch(subscription.endpoint, {
  method: 'POST',
  headers: headers
})
.then(response => {
  if (response.status !== 201) {
    throw new Error('Unable to send push message');
  }
});

Remember, this is a change to GCM’s API, so you don’t need to update your subscriptions, just change your server code to define the headers as shown above.

Introducing VAPID for Server Identification

VAPID is the cool new short name for “Voluntary Application Server Identification”. This new spec essentially defines a handshake between your app server and the push service and allows the push service to confirm which site is sending messages. With VAPID you can avoid the GCM-specific steps for sending a push message. You no longer need a Google Developer project, a gcm_sender_id, or an Authorization header.

The process is pretty simple:

  1. Your application server creates a public/private key pair. The public key is given to your web app.
  2. When the user elects to receive pushes, add the public key to the subscribe() call’s options object.
  3. When your app server sends a push message, include a signed JSON Web Token along with the public key.

Let’s look at these steps in detail.

Create a Public/Private Key Pair

I’m terrible at encryption, so here’s the relevant section from the spec regarding the format of the VAPID public/private keys:

Application servers SHOULD generate and maintain a signing key pair usable with elliptic curve digital signature (ECDSA) over the P-256 curve.

You can see how to do this in the web-push node library:

function generateVAPIDKeys() {
  var curve = crypto.createECDH('prime256v1');
  curve.generateKeys();

  return {
    publicKey: curve.getPublicKey(),
    privateKey: curve.getPrivateKey(),
  };
}

Subscribing with the Public Key

To subscribe a Chrome user for push with the VAPID public key, you need to pass the public key as a Uint8Array using the applicationServerKey parameter of the subscribe() method.

const publicKey = new Uint8Array([0x4, 0x37, 0x77, 0xfe /* … */]);
serviceWorkerRegistration.pushManager.subscribe(
  {
    userVisibleOnly: true,
    applicationServerKey: publicKey
  }
);
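
Hard-coding a byte array is awkward. A commonly used helper (not part of the Push API itself) converts the base64 url encoded public key string from your server into the Uint8Array that subscribe() expects:

```javascript
// Convert a URL-safe base64 string to a Uint8Array, as needed for the
// applicationServerKey option. atob() is available in browsers (and
// recent versions of Node).
function urlBase64ToUint8Array(base64String) {
  const padding = '='.repeat((4 - base64String.length % 4) % 4);
  const base64 = (base64String + padding)
    .replace(/-/g, '+')
    .replace(/_/g, '/');
  const rawData = atob(base64);
  const outputArray = new Uint8Array(rawData.length);
  for (let i = 0; i < rawData.length; ++i) {
    outputArray[i] = rawData.charCodeAt(i);
  }
  return outputArray;
}
```
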

You’ll know if it has worked by examining the endpoint in the resulting subscription object: if the origin is fcm.googleapis.com, it’s working.

https://fcm.googleapis.com/fcm/send/ABCD1234

Note: Even though this is an FCM URL, use the Web Push Protocol, not the FCM protocol; this way your server-side code will work for any push service.
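
That check can be done with the URL API (a sketch; the origin string is specific to Chrome’s push service):

```javascript
// A Chrome subscription created with applicationServerKey reports an
// FCM endpoint rather than the legacy GCM one.
function isFcmEndpoint(endpoint) {
  return new URL(endpoint).origin === 'https://fcm.googleapis.com';
}
```
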

Sending a Push Message

To send a message using VAPID, you need to make a normal Web Push Protocol request with two additional HTTP headers: an Authorization header and a Crypto-Key header.

Authorization Header

The Authorization header is a signed JSON Web Token (JWT) with ‘Bearer ‘ in front of it, to indicate that the JWT is a bearer token.

A JWT is a way of sharing a JSON object with a second party in such a way that the sending party can sign it and the receiving party can verify the signature is from the expected sender. The structure of a JWT is three base64 url encoded strings, joined with a single dot between them.

<JWTHeader>.<Payload>.<Signature>

JWT Header

The JWT Header contains the algorithm name used for signing and the type of token. For VAPID this must be:

{
  "typ": "JWT",
  "alg": "ES256"
}

This is then base64 url encoded and forms the first part of the JWT.

Payload

The Payload is another JSON object containing the following:

  • Audience (“aud”)
    • This is the origin of the push service (NOT the origin of your site). In JavaScript, you could do the following to get the audience: const audience = new URL(subscription.endpoint).origin
  • Expiration Time (“exp”)
    • This is the number of seconds until the request should be regarded as expired. This MUST be within 24 hours of the request being made, in UTC.
  • Subject (“sub”)
    • The subject needs to be a URL or a mailto: URL. This provides a point of contact in case the push service needs to contact the message sender.

An example payload could look like the following:

{
    "aud": "https://push-service.example.com",
    "exp": Math.floor((Date.now() / 1000) + (12 * 60 * 60)),
    "sub": "mailto:my-email@some-url.com"
}

This JSON object is base64 url encoded and forms the second part of the JWT.
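
The first two segments can be produced with a few lines of Node (a sketch; `b64urlJson` is our own helper name, and the payload values are illustrative):

```javascript
// Serialize a JSON object as an unpadded, URL-safe base64 string.
function b64urlJson(obj) {
  return Buffer.from(JSON.stringify(obj)).toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/g, '');
}

const header = { typ: 'JWT', alg: 'ES256' };
const payload = {
  aud: 'https://fcm.googleapis.com',
  exp: Math.floor(Date.now() / 1000) + 12 * 60 * 60,
  sub: 'mailto:my-email@some-url.com'
};

// Header and payload joined with a dot; the signature segment comes next.
const unsignedToken = b64urlJson(header) + '.' + b64urlJson(payload);
```
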

Signature

The Signature is the result of joining the encoded header and payload with a dot and then signing the result using the VAPID private key you created earlier. The signature itself is then appended to the header and payload with another dot.

I’m not going to show a code sample for this as there are a number of libraries that will take the header and payload JSON objects and generate this signature for you.

The signed JWT is used as the Authorization header with ‘Bearer ‘ prepended to it and will look something like the following:

Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHRwczovL2ZjbS5nb29nbGVhcGlzLmNvbSIsImV4cCI6MTQ2NjY2ODU5NCwic3ViIjoibWFpbHRvOnNpbXBsZS1wdXNoLWRlbW9AZ2F1bnRmYWNlLmNvLnVrIn0.Ec0VR8dtf5qb8Fb5Wk91br-evfho9sZT6jBRuQwxVMFyK5S8bhOjk8kuxvilLqTBmDXJM5l3uVrVOQirSsjq0A

Notice a few things about this. First, the Authorization header literally contains the word ‘Bearer’ and should be followed by a space then the JWT. Also notice the dots separating the JWT header, payload, and signature.

Crypto-Key Header

As well as the Authorization header, you must add your VAPID public key to the Crypto-Key header as a base64 url encoded string with p256ecdsa= prepended to it.

p256ecdsa=BDd3_hVL9fZi9Ybo2UUzA284WG5FZR30_95YeZJsiApwXKpNcF1rRPF3foIiBHXRdJI2Qhumhf6_LFTeZaNndIo

When you are sending a notification with encrypted data, you will already be using the Crypto-Key header, so to add the application server key, you just need to add a semicolon before adding the above content, resulting in:

dh=BGEw2wsHgLwzerjvnMTkbKrFRxdmwJ5S_k7zi7A1coR_sVjHmGrlvzYpAT1n4NPbioFlQkIrTNL8EH4V3ZZ4vJE; p256ecdsa=BDd3_hVL9fZi9Ybo2UUzA284WG5FZR30_95YeZJsiApwXKpNcF1rRPF3foIiBHXRdJI2Qhumhf6_LFTeZaN

NOTE: The separating semicolon should actually be a comma but there is a bug in Chrome prior to version 52 which prevents push from working if a comma is sent. This is fixed in Chrome version 53, so you should be able to change to a comma once this hits stable.
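
Assembling the header value is simple string concatenation; a hypothetical helper (the separator is a parameter because of the Chrome bug noted above):

```javascript
// Build the Crypto-Key header value. `dh` is the Diffie-Hellman public key
// used for payload encryption (pass null for notifications without data);
// `appServerKey` is the base64 url encoded VAPID public key.
function cryptoKeyHeader(appServerKey, dh, separator) {
  const vapidPart = 'p256ecdsa=' + appServerKey;
  return dh ? 'dh=' + dh + separator + ' ' + vapidPart : vapidPart;
}
```
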

Reality of these Changes

With VAPID you no longer need to sign up for an account with GCM to use push in Chrome and you can use the same code path for subscribing a user and sending a message to a user in both Chrome and Firefox. Both are following the standards.

What you need to bear in mind is that in Chrome version 51 and before, Opera for Android and Samsung browser you’ll still need to define the gcm_sender_id in your web app manifest and you’ll need to add the Authorization header to the GCM endpoint that will be returned.

VAPID provides an off ramp from these proprietary requirements. If you implement VAPID it’ll work in all browsers that support web push. As more browsers support VAPID you can decide when to drop the gcm_sender_id from your manifest.

Muted autoplay on mobile: say goodbye to <canvas> hacks and animated GIFs!

Muted autoplay for video is supported by Chrome for Android as of version 53. Playback will start automatically for a video element once it comes into view if both autoplay and muted are set, and playback of muted videos can be initiated programmatically with play(). Previously, playback on mobile had to be initiated by a user gesture, regardless of the muted state.

<video autoplay muted>
  <source src="video.webm" type="video/webm" />
  <source src="video.mp4" type="video/mp4" />
</video>

You can see this in action by visiting this sample. Playback of the muted video starts automatically in Chrome 53 or later.

In addition, muted playback can now be initiated using the play() method. Previously, play() would only initiate playback if it came from a user gesture such as a button click. Compare the following two demos on Android — try them on Chrome 53, then on an older version:

We recommend using the autoplay attribute whenever possible, and the play() method only if necessary.

It’s possible to unmute a video programmatically in response to a user gesture such as a click, but if you attempt to unmute a video programmatically without a user gesture, playback will pause.

The muted autoplay change will also make it possible to use play() with a video element not created in the DOM, for example to drive WebGL playback.

The play() method also returns a promise, which can be used to check whether muted programmatic playback is enabled. There is an example of this at simpl.info/video/scripted.

Why the change?

Autoplay was disabled in previous versions of Chrome on Android because it can be disruptive and data-hungry, and many users don’t like it.

Disabling autoplay had the unintended effect of driving developers to alternatives such as animated GIFs, as well as <canvas> and <img> hacks. These techniques are much worse than optimized video in terms of power consumption, performance, bandwidth requirements, data cost and memory usage. Video can provide higher quality than animated GIFs, with far better compression: around 10 times on average, and up to 100 times at best. Video decoding in JavaScript is possible, but it’s a huge drain on battery power.

Compare the following — the first is a video and the second is an animated GIF:

They look pretty similar, but the video is less than 200KB in size and the animated GIF is over 900KB.

Chrome and other browser vendors are extremely cautious about user bandwidth. For many users in many contexts, high data cost is often a greater barrier to access than poor connectivity. Given the prevalence of workarounds, muted autoplay isn’t something that can be blocked, so offering good APIs and defaults is the best the platform can do.

The web is increasingly media centric. Designers and developers continue to find new and unforeseen ways to use video — and they want consistent behaviour across platforms, for example when using background video as a design element. Muted autoplay enables functionality like this on both mobile and desktop.

The finer points

  • From an accessibility viewpoint, autoplay can be particularly problematic. Chrome 53 and above on Android provide a setting to disable autoplay completely: from Media settings, select Autoplay.
  • This change does not affect the audio element: autoplay is still disabled on Chrome on Android, because muted autoplay doesn’t make much sense for audio.
  • There is no autoplay if Data Saver mode is enabled: with Data Saver on, autoplay is disabled in Media settings.
  • Muted autoplay will work for any visible video element in any visible document, iframe or otherwise.
  • Remember that to take advantage of the new behaviour, you’ll need to add muted as well as autoplay: compare simpl.info/video with simpl.info/video/muted.

Support

  • Muted autoplay is supported by Safari on iOS 10 and later.
  • Autoplay, whether muted or not, is already supported on Android by Firefox and UC Browser: they do not block any kind of autoplay.

Find out more

Bringing easy and fast checkout with Payment Request API

It’s no surprise that the majority of online shopping is happening on mobile devices these days. But did you know that 66% of mobile purchases are made through websites rather than apps? Unfortunately though, conversion rate on mobile websites is only 33% of that on desktop. We need to fix this.

Chrome 53 for Android (desktop to be supported in the future) introduces a new API called Payment Request - a new approach for developers to eliminate checkout forms and improve users’ payment experience from the ground up.

Introducing Payment Request API

Payment Request is a new API for the open web that makes checkout flows easier, faster and more consistent on shopping sites.

  • Provides a native user interface for users to select or add a payment method, a shipping address and a shipping option in an easy, fast and consistent way.
  • Provides standardized imperative APIs for developers to obtain a user’s payment preferences in a consistent format.

How Payment Request API works

Let’s peek at how Payment Request API works in some code. Here’s a minimal example that collects a user’s credit card information and submits it to a server.

function onBuyClicked() {
  if (!window.PaymentRequest) {
    // PaymentRequest API is not available. Forwarding to
    // legacy form based experience.
    location.href = '/checkout';
    return;
  }

  // Supported payment methods
  var supportedInstruments = [{
    supportedMethods: [
      'visa', 'mastercard', 'amex', 'discover',
      'diners', 'jcb', 'unionpay'
    ]
  }];

  // Checkout details
  var details = {
    displayItems: [{
      label: 'Original donation amount',
      amount: { currency: 'USD', value: '65.00' }
    }, {
      label: 'Friends and family discount',
      amount: { currency: 'USD', value: '-10.00' }
    }],
    total: {
      label: 'Total due',
      amount: { currency: 'USD', value : '55.00' }
    }
  };

  // 1. Create a `PaymentRequest` instance
  var request = new PaymentRequest(supportedInstruments, details);

  // 2. Show the native UI with `.show()`
  request.show()
  // 3. Process the payment
  .then(result => {
    var data = {};
    data.methodName = result.methodName;
    data.details    = result.details;

    // POST the payment information to the server
    return fetch('/pay', {
      method: 'POST',
      credentials: 'include',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(data)
    }).then(response => {
      // Examine server response
      if (response.status === 200) {
        // Payment successful
        return result.complete('success');
      } else {
        // Payment failure
        return result.complete('fail');
      }
    }).catch(() => {
      return result.complete('fail');
    });
  });
}

document.querySelector('#start').addEventListener('click', onBuyClicked);

1. Create a PaymentRequest instance

When a user taps on “Checkout”, start a payment procedure by instantiating PaymentRequest.

var request = new PaymentRequest(supportedInstruments, details);

2. Show the native UI with .show()

Show the native payment UI with show(). Within this UI, the user can select a payment method already stored in the browser or add a new one.

request.show()
  .then(payment => {
    // pressed "Pay"
  });

3. Process the payment

When the user taps the “PAY” button, the promise resolves and the payment information is passed to the resolving function. You can send the information to your own server, or pass it to a third party such as Stripe for processing.

request.show()
  .then(result => {
    var data = {};
    data.methodName = result.methodName;
    data.details    = result.details;

    // POST the payment information to the server
    return fetch('/pay', {
      method: 'POST',
      credentials: 'include',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(data)
    }).then(response => {
      // Examine server response
      if (response.status === 200) {
        // Payment successful
        return result.complete('success');
      } else {
        // Payment failure
        return result.complete('fail');
      }
    }).catch(() => {
      return result.complete('fail');
    });
  });

4. Display payment result

If the payment verification succeeded, call .complete('success') to complete the purchase; otherwise call .complete('fail'). The success or failure status is displayed using a native UI. Once the promise returned by .complete() resolves, you can proceed to the next step.

Payment Request API can do more

Shipping items

If you are selling physical goods, you’ll probably need to collect the user’s shipping address and a shipping preference such as “Free shipping” or “Express shipping”. Payment Request API certainly supports those use cases. See the integration guide to learn more.

Adding more payment solutions

Credit card is not the only supported payment solution for Payment Request. There are a number of other payment services and solutions in the wild and the Payment Request API is designed to support as many of those as possible. Google is working to bring Android Pay to Chrome. Other third party solutions will be supported in the near future as well. Stay tuned for updates.

Resources

To learn more about Payment Request API, a few documents and resources are available:

FAQ

Any restrictions to use the API?

Use Chrome for Android, version 53 or later. The API requires a secure origin: HTTPS, localhost or file:///.

Is it possible to query the available payment methods?

Currently not supported, but we’re investigating ways of exposing this capability in a privacy-sensitive way.
Payment Request API is designed to support the broadest possible array of payment methods. Each payment method is identified by a payment method identifier.
Payment Method Identifiers will support distributed extensibility, meaning that there does not need to be a central machine-readable registry to discover or register payment methods.

Do you plan on offering a coupon code?

We are investigating how best to do this. For now, you can manually ask for a coupon code before or after calling the API.

Does this work with iframes?

Not currently allowed, but support is planned for the future.
