Inside look at modern web browser (part 1)

CPU, GPU, Memory, and multi-process architecture

In this 4-part blog series, we’ll look inside the Chrome browser from high-level architecture to the specifics of the rendering pipeline. If you ever wondered how the browser turns your code into a functional website, or you are unsure why a specific technique is suggested for performance improvements, this series is for you.

As part 1 of this series, we’ll take a look at core computing terminology and Chrome’s multi-process architecture.

Note: If you are familiar with the idea of CPU/GPU and process/thread you may skip to Browser Architecture.

At the core of the computer are the CPU and GPU

In order to understand the environment that the browser is running in, we need to understand a few computer parts and what they do.

CPU

CPU
Figure 1: 4 CPU cores as office workers sitting at each desk handling tasks as they come in

First is the Central Processing Unit - or CPU. The CPU can be considered your computer’s brain. A CPU core, pictured here as an office worker, can handle many different tasks one by one as they come in. It can handle everything from math to art while knowing how to reply to a customer call. In the past most CPUs were a single chip. A core is like another CPU living in the same chip. In modern hardware, you often get more than one core, giving more computing power to your phones and laptops.

GPU

GPU
Figure 2: Many GPU cores with wrench suggesting they handle a limited task

Graphics Processing Unit - or GPU is another part of the computer. Unlike CPU, GPU is good at handling simple tasks but across multiple cores at the same time. As the name suggests, it was first developed to handle graphics. This is why in the context of graphics "using GPU" or "GPU-backed" is associated with fast rendering and smooth interaction. In recent years, with GPU-accelerated computing, more and more computation is becoming possible on GPU alone.

When you start an application on your computer or phone, the CPU and GPU are the ones powering the application. Usually, applications run on the CPU and GPU using mechanisms provided by the Operating System.

Hardware, OS, Application
Figure 3: Three layers of computer architecture. Machine Hardware at the bottom, Operating System in the middle, and Application on top.

Executing program on Process and Thread

process and threads
Figure 4: Process as a bounding box, threads as abstract fish swimming inside of a process

Another concept to grasp before diving into browser architecture is Process and Thread. A process can be described as an application’s executing program. A thread is the one that lives inside of a process and executes any part of its process’s program.

When you start an application, a process is created. The program might create thread(s) to help it do work, but that's optional. The Operating System gives the process a "slab" of memory to work with and all application state is kept in that private memory space. When you close the application, the process also goes away and the Operating System frees up the memory.

process and memory
Figure 5: Diagram of a process using memory space and storing application data

A process can ask the Operating System to spin up another process to run different tasks. When this happens, different parts of the memory are allocated for the new process. If two processes need to talk, they can do so by using Inter Process Communication (IPC). Many applications are designed to work this way so that if a worker process becomes unresponsive, it can be restarted without stopping the other processes which are running different parts of the application.
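
To make the idea of separate processes and IPC concrete, here is a minimal sketch using Node.js child processes. This is only an illustration of the pattern, not Chrome's actual IPC; parent.js, worker.js, and the message shape are made-up names for the example.

// parent.js - one process asks the OS for another process and talks to it
// over a message channel (illustration only, not Chrome's IPC).
const { fork } = require('child_process');

// Spin up a second process with its own private memory space.
const worker = fork('./worker.js');

// Messages are serialized and copied between the two memory spaces.
worker.on('message', (msg) => console.log('from worker:', msg));

worker.on('exit', () => {
  // If the worker crashes or becomes unresponsive, this process keeps
  // running and could simply fork a replacement.
  console.log('worker exited');
});

worker.send({ task: 'resize-image', width: 300 });

// worker.js (the other process) could respond like this:
// process.on('message', (msg) => process.send({ done: msg.task }));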

worker process and IPC
Figure 6: Diagram of separate processes communicating over IPC

Browser Architecture

So how is a web browser built using processes and threads? Well, it could be one process with many different threads or many different processes with a few threads communicating over IPC.

browser architecture
Figure 7: Different browser architectures in process/thread diagram

The important thing to note here is that these different architectures are implementation details. There is no standard specification on how one might build a web browser. One browser’s approach may be completely different from another.

For the sake of this blog series, we are going to use Chrome’s recent architecture described in the diagram below.

At the top is the browser process coordinating with other processes that take care of different parts of the application. For the renderer process, multiple processes are created and assigned to each tab. Until very recently, Chrome gave each tab a process when it could; now it tries to give each site its own process, including iframes (see Site Isolation).

browser architecture
Figure 8: Diagram of Chrome’s multi-process architecture. Multiple layers are shown under Renderer Process to represent Chrome running multiple Renderer Processes for each tab.

Which process controls what?

The following describes each Chrome process and what it controls:

  • Browser: Controls the "chrome" part of the application, including the address bar, bookmarks, and the back and forward buttons. Also handles the invisible, privileged parts of a web browser such as network requests and file access.
  • Renderer: Controls anything inside of the tab where a website is displayed.
  • Plugin: Controls any plugins used by the website, for example, Flash.
  • GPU: Handles GPU tasks in isolation from other processes. It is separated into a different process because GPUs handle requests from multiple apps and draw them on the same surface.
Chrome processes
Figure 9: Different processes pointing to different parts of browser UI

There are even more processes like the Extension process and utility processes. If you want to see how many processes are running in your Chrome, click the options menu icon (⋮) at the top right corner, select More Tools, then select Task Manager. This opens up a window with a list of processes that are currently running and how much CPU/Memory they are using.

The benefit of multi-process architecture in Chrome

Earlier, I mentioned Chrome uses multiple renderer processes. In the simplest case, you can imagine each tab has its own renderer process. Let’s say you have 3 tabs open and each tab is run by an independent renderer process. If one tab becomes unresponsive, then you can close the unresponsive tab and move on while keeping the other tabs alive. If all tabs are running in one process, when one tab becomes unresponsive, all the tabs are unresponsive. That’s sad.

multiple renderer for tabs
Figure 10: Diagram showing multiple processes running each tab

Because processes have their own private memory space, they often contain copies of common infrastructure (like V8, which is Chrome's JavaScript engine). This means more memory usage, as the copies can't be shared the way they would be if they were threads inside the same process. In order to save memory, Chrome puts a limit on how many processes it can spin up. The limit varies depending on how much memory and CPU power your device has, but when Chrome hits the limit, it starts to run multiple tabs from the same site in one process.

Saving more memory - Servicification in Chrome

The same approach is applied to the browser process. Chrome is undergoing architecture changes to run each part of the browser program as a service, making it possible to easily split them into different processes or aggregate them into one.

The general idea is that when Chrome is running on powerful hardware, it may split each service into different processes, giving more stability, but if it is on a resource-constrained device, Chrome consolidates services into one process, saving memory footprint. A similar approach of consolidating processes for less memory usage has been used on platforms like Android before this change.

Chrome servicification
Figure 11: Diagram of Chrome’s servicification moving different services into multiple processes and a single browser process

Per-frame renderer processes - Site Isolation

Site Isolation is a recently introduced feature in Chrome that runs a separate renderer process for each cross-site iframe. We’ve been talking about the one-renderer-process-per-tab model, which allowed cross-site iframes to run in a single renderer process, sharing memory space between different sites. Running a.com and b.com in the same renderer process might seem okay.

The Same Origin Policy is the core security model of the web; it makes sure one site cannot access data from other sites without consent. Bypassing this policy is a primary goal of security attacks. Process isolation is the most effective way to separate sites. With Meltdown and Spectre, it became even more apparent that we need to separate sites using processes. With Site Isolation enabled on desktop by default since Chrome 67, each cross-site iframe in a tab gets a separate renderer process.

site isolation
Figure 12: Diagram of site isolation; multiple renderer processes pointing to iframes within a site

Enabling Site Isolation has been a multi-year engineering effort. Site Isolation isn’t as simple as assigning different renderer processes; it fundamentally changes the way iframes talk to each other. Opening DevTools on a page with iframes running in different processes means DevTools had to implement behind-the-scenes work to make it appear seamless. Even running a simple Ctrl+F to find a word in a page means searching across different renderer processes. You can see why browser engineers talk about the release of Site Isolation as a major milestone!

Wrap-up

In this post, we’ve covered a high-level view of browser architecture and the benefits of a multi-process architecture. We also covered Servicification and Site Isolation in Chrome, which are deeply related to multi-process architecture. In the next post, we’ll start diving into what happens between these processes and threads in order to display a website.

Did you enjoy the post? If you have any questions or suggestions for future posts, I'd love to hear from you in the comment section below or @kosamari on Twitter.



The Reporting API


TL;DR

The Reporting API defines a new HTTP header, Report-To, that gives web developers a way to specify server endpoints for the browser to send warnings and errors to. Browser-generated warnings like CSP violations, Feature Policy violations, deprecations, browser interventions, and network errors are some of the things that can be collected using the Reporting API.

DevTools console warnings for deprecations and interventions.
Browser-generated warnings in the DevTools console.

Introduction

Some errors only occur in production (aka The Wild). You never see them locally or during development because real users, real networks, and real devices change the game. Not to mention all the cross browser issues that get thrown into the mix.

As an example, say your new site relies on document.write() to load critical scripts. New users from different parts of the world will eventually find your site, but they're probably on much slower connections than you tested with. Unbeknownst to you, your site starts breaking for them because of Chrome's browser intervention for blocking document.write() on 2G networks. Yikes! Without the Reporting API there's no way to know this is happening to your precious users.

The Reporting API helps catch potential (even future) errors across your site. Setting it up gives you "peace of mind" that nothing terrible is happening when real users visit your site. If/when they do experience unforeseen errors, you'll be in the know. 👍

The Report-To Header

Right now, the API needs to be enabled using a runtime command line flag: --enable-features=Reporting.

The Reporting API introduces a new HTTP response header, Report-To. Its value is an object which describes an endpoint group for the browser to report errors to:

Report-To: {
             "max_age": 10886400,
             "endpoints": [{
               "url": "https://analytics.provider.com/browser-errors"
             }]
           }

Note: If your endpoint URL lives on a different origin than your site, the endpoint should support CORS preflight requests (e.g. Access-Control-Allow-Origin: *; Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS; Access-Control-Allow-Headers: Content-Type, Authorization, Content-Length, X-Requested-With).

In the example, sending this response header with your main page configures the browser to report browser-generated warnings to the endpoint https://analytics.provider.com/browser-errors for max_age seconds. It's important to note that Report-To headers on subsequent HTTP requests made by the page (for images, scripts, etc.) are ignored; configuration is set up during the response of the main page.

Configuring multiple endpoints

A single response can configure several endpoints at once by sending multiple Report-To headers:

Report-To: {
             "group": "default",
             "max_age": 10886400,
             "endpoints": [{
               "url": "https://example.com/browser-reports"
             }]
           }
Report-To: {
             "group": "csp-endpoint",
             "max_age": 10886400,
             "endpoints": [{
               "url": "https://example.com/csp-reports"
             }]
           }

or by combining them into a single HTTP header:

Report-To: {
             "group": "csp-endpoint",
             "max_age": 10886400,
             "endpoints": [{
               "url": "https://example.com/csp-reports"
             }]
           },
           {
             "group": "network-endpoint",
             "max_age": 10886400,
             "endpoints": [{
               "url": "https://example.com/network-errors"
             }]
           },
           {
             "max_age": 10886400,
             "endpoints": [{
               "url": "https://example.com/browser-errors"
             }]
           }

Once you've sent the Report-To header, the browser caches the endpoints according to their max_age values, and sends all of those nasty console warnings/errors to your URLs. Boom!

DevTools console warnings for deprecations and interventions.
DevTools warnings and errors that can be sent using the Reporting API.

Explanation of header fields

Each endpoint configuration contains a group name, max_age, and endpoints array. You can also choose whether to consider subdomains when reporting errors by using the include_subdomains field.

  • group (string): Optional. If a group name is not specified, the endpoint is given a name of "default".
  • max_age (number): Required. A non-negative integer that defines the lifetime of the endpoint in seconds. A value of "0" will cause the endpoint group to be removed from the user agent's reporting cache.
  • endpoints (Array<Object>): Required. An array of JSON objects that specify the actual URL of your report collector.
  • include_subdomains (boolean): Optional. A boolean that enables the endpoint group for all subdomains of the current origin's host. If omitted or anything other than "true", the subdomains are not reported to the endpoint.

The group name is a unique name used to associate a string with an endpoint. Use this name in other places that integrate with the Reporting API to refer to a specific endpoint group.

The max_age field is also required and specifies how long the browser should use the endpoint and report errors to it.

The endpoints field is an array to provide failover and load balancing features. See the section on Failover and load balancing. It's important to note that the browser will select only one endpoint, even if the group lists several collectors in endpoints. If you want to send a report to several servers at once, your backend will need to forward the reports.

How the browser sends a report

Reports are delivered out-of-band from your app, meaning the browser controls when reports are sent to your server(s). The browser attempts to deliver queued reports as soon as they're ready (in order to provide timely feedback to the developer) but it can also delay delivery if it's busy processing higher priority work or the user is on a slow and/or congested network at the time. The browser may also prioritize sending reports about a particular origin first, if the user is a frequent visitor.

The browser periodically batches reports and sends them to the reporting endpoints that you configure. To send reports, the browser issues a POST request with Content-Type: application/reports+json and a body containing the array of warnings/errors which were captured.

Example - browser issues a POST request to your CSP errors endpoint:

POST /csp-reports HTTP/1.1
Host: example.com
Content-Type: application/reports+json

[{
  "type": "csp",
  "age": 10,
  "url": "https://example.com/vulnerable-page/",
  "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0",
  "body": {
    "blocked": "https://evil.com/evil.js",
    "directive": "script-src",
    "policy": "script-src 'self'; object-src 'none'",
    "status": 200,
    "referrer": "https://evil.com/"
  }
}, {
  ...
}]

The Report types section discusses how to set up CSP reporting.

The Reporting API was designed to be out of band from your web app. The browser captures, queues and batches, then sends reports automatically at the most opportune time. Reports are sent internally by the browser, so there's little to no performance concern (e.g. network contention with your app) when using the Reporting API. There's also no way to control when the browser sends queued reports.

Debugging report configurations

If you don't see reports showing up on your server, head over to chrome://net-internals/#reporting. That page is useful for verifying things are configured correctly and reports are being sent out properly.

Reporting section in chrome://net-internals/#reporting

Report types

The Reporting API can be used for more than just browser intervention and deprecation messages. In fact, it can be configured to send many other types of interesting warnings/issues that happen throughout your site:

CSP reports

A long time ago, the web elders realized that sending client-side CSP violations to a backend would be pretty handy. If your site breaks because of some new powerful feature (i.e. CSP), you probably want to be notified! Thus, we've had a reporting mechanism for CSP from day one.

When a site violates a CSP rule, it can (optionally) tell the browser to send the error to a server. This is done by adding the report-uri directive in the CSP header:

Content-Security-Policy: ...; report-uri https://example.com/csp-reports
Content-Security-Policy-Report-Only: ...; report-uri https://example.com/csp-reports

The Reporting API integrates with CSP reports by adding a new report-to directive. Unlike report-uri which takes a URL, report-to takes an endpoint group name. It still has a URL, but that gets moved inside endpoints in the configuration object:


Content-Security-Policy-Report-Only: ...; report-to csp-endpoint
Report-To: {
    ...
  }, {
    "group": "csp-endpoint",
    "max_age": 10886400,
    "endpoints": [{
      "url": "https://example.com/csp-reports"
    }]
  }

For backwards compatibility, continue to use report-uri along with report-to. In other words: Content-Security-Policy: ...; report-uri https://endpoint.com; report-to groupname. Browsers that support report-to will use it instead of the former.

Network errors

The Network Error Logging (NEL) spec defines a mechanism for collecting client-side network errors from an origin. It uses the new NEL HTTP response header to tell the browser to collect network errors, then integrates with the Reporting API to report the errors to a server.

To use NEL, first set up the Report-To header with a collector that uses a named group:

Report-To: {
    ...
  }, {
    "group": "network-errors",
    "max_age": 2592000,
    "endpoints": [{
      "url": "https://analytics.provider.com/networkerrors"
    }]
  }

Next, send the NEL response header to start collecting errors. Since NEL is opt-in for an origin, you only need to send the header once. Both NEL and Report-To will apply to future requests to the same origin and will continue to collect errors according to the max_age value that was used to set up the collector.

The header value should be a JSON object that contains a max_age and report_to field. Use the latter to reference the group name of your network errors collector:

GET /index.html HTTP/1.1
NEL: {"report_to": "network-errors", "max_age": 2592000}

Note that the Report-To header name uses a hyphen, while the report_to field in the NEL value uses an underscore.

Sub-resources

NEL works across navigations and subresource fetches. But for subresources, there's an important point to highlight: the containing page has no visibility into the NEL reports about cross-origin requests that it makes. This means that if example.com loads foobar.com/cat.gif and that resource fails to load, foobar.com's NEL collector is notified, not example.com's. The rule of thumb is that NEL is reproducing server-side logs, just generated on the client. Since example.com has no visibility into foobar.com's server logs, it also has no visibility into its NEL reports.

Feature Policy violations

Currently, Feature Policy violations are not captured with the Reporting API. However, the plan is to integrate Feature Policy into the Reporting API.

Crash reports

Browser crash reports are also still in development but will eventually be capturable via the Reporting API.

Failover and load balancing

Most of the time you'll be configuring one URL collector per group. However, since reporting can generate a good deal of traffic, the spec includes failover and load-balancing features inspired by the DNS SRV record.

The browser will do its best to deliver a report to at most one endpoint in a group. Endpoints can be assigned a weight to distribute load, with each endpoint receiving a specified fraction of reporting traffic. Endpoints can also be assigned a priority to set up fallback collectors.

Example - creating a fallback collector at https://backup.com/reports.

Report-To: {
             "group": "endpoint-1",
             "max_age": 10886400,
             "endpoints": [
               {"url": "https://example.com/reports", "priority": 1},
               {"url": "https://backup.com/reports", "priority": 2}
             ]
           }

Fallback collectors are only tried when uploads to primary collectors fail.

Example server

HTTP examples are great. Actual code is even better.

To see all this stuff in context, below is an example Node server that uses Express and brings together all the pieces discussed in this article. It shows how to configure reporting for several different report types and create separate handlers to capture the results.

const express = require('express');

const app = express();
app.use(express.json({
  type: ['application/json', 'application/csp-report', 'application/reports+json']
}));
app.use(express.urlencoded());

app.get('/', (request, response) => {
  // Note:  report-to replaces report-uri, but it is not supported yet.
  response.set('Content-Security-Policy-Report-Only',
      `default-src 'self'; report-to csp-endpoint`);
   // Note: report_to and not report-to for NEL.
  response.set('NEL', `{"report_to": "network-errors", "max_age": 2592000}`);

  // The Report-To header tells the browser where to send
  // CSP violations, browser interventions, deprecations, and network errors.
  // The default group (first example below) captures interventions and
  // deprecation reports. Other groups are referenced by their "group" name.
  response.set('Report-To', `{
    "max_age": 2592000,
    "endpoints": [{
      "url": "https://reporting-observer-api-demo.glitch.me/reports"
    }]
  }, {
    "group": "csp-endpoint",
    "max_age": 2592000,
    "endpoints": [{
      "url": "https://reporting-observer-api-demo.glitch.me/csp-reports"
    }]
  }, {
    "group": "network-errors",
    "max_age": 2592000,
    "endpoints": [{
      "url": "https://reporting-observer-api-demo.glitch.me/network-reports"
    }]
  }`);

  response.sendFile('./index.html');
});

function echoReports(request, response) {
  // Record report in server logs or otherwise process results.
  for (const report of request.body) {
    console.log(report.body);
  }
  response.send(request.body);
}

app.post('/csp-reports', (request, response) => {
  console.log(`${request.body.length} CSP violation reports:`);
  echoReports(request, response);
});

app.post('/network-reports', (request, response) => {
  console.log(`${request.body.length} Network error reports:`);
  echoReports(request, response);
});

app.post('/reports', (request, response) => {
  console.log(`${request.body.length} deprecation/intervention reports:`);
  echoReports(request, response);
});


const listener = app.listen(process.env.PORT, () => {
  console.log(`Your app is listening on port ${listener.address().port}`);
});

What about ReportingObserver?

Although both are part of the same Reporting API spec, ReportingObserver and the Report-To header overlap with each other but enable slightly different use cases.

ReportingObserver is a JavaScript API that can observe simple client-side warnings like deprecation and intervention. Reports are not automatically sent to a server (unless you choose to do so in the callback):

const observer = new ReportingObserver((reports, observer) => {
  for (const report of reports) {
    // Send report somewhere.
  }
}, {buffered: true});

observer.observe();

More sensitive types of errors like CSP violations and network errors cannot be observed by a ReportingObserver. Enter Report-To.

The Report-To header is more powerful in that it can capture more types of error reports (network, CSP, browser crashes) in addition to the ones supported in ReportingObserver. Use it when you want to automatically report errors to a server or capture errors that are otherwise impossible to see in JavaScript (network errors).

Conclusion

Although the Reporting API is a ways out from shipping in all browsers, it's a promising tool for diagnosing issues across your site.

Warnings that get logged to the DevTools console are super helpful but have limited value to you as the site author. That's because they're local to the user's browser! The Reporting API changes this. Use it to detect errors and report them to a server, even when your own code cannot see them. Propagate browser warnings to a backend, catch issues across your site before they grow out of control, and prevent future bugs before they happen (e.g. know about deprecated APIs ahead of their removal).

Asynchronous Access to HTTP Cookies


The Cookie Store API is available for Origin Trials starting in Chrome 69. The API introduces the following exciting possibilities:

  • Cookies can be accessed asynchronously, avoiding jank on the main thread.
  • Changes to cookies can be observed, avoiding polling.
  • Cookies can be accessed from service workers.

You (probably) don't need cookies

Before diving into the new API, I'd like to state that cookies are still the Web platform's worst client-side storage primitive, and should still be used as a last resort. This isn't an accident - cookies were the Web's first client-side storage mechanism, and we've learned a lot since then.

The main reasons for avoiding cookies are:

  • Cookies bring your storage schema into your backend API. Each HTTP request carries a snapshot of the cookie jar. This makes it easy for backend engineers to introduce dependencies on the current cookie format. Once this happens, your frontend can't change its storage schema without deploying a matching change to the backend.

  • Cookies have a complex security model. Modern Web platform features follow the same origin policy, meaning that each application gets its own sandbox, and is completely independent from other applications that the user might be running. Cookie scopes make for a significantly more complex security story, and merely attempting to summarize that would double the size of this article.

  • Cookies have high performance costs. Browsers need to include a snapshot of your cookies in every HTTP request, so every change to cookies must be propagated across the storage and network stacks. Modern browsers have highly optimized cookie store implementations, but we'll never be able to make cookies as efficient as the other storage mechanisms, which don't need to talk to the network stack.

For all the reasons above, modern Web applications should avoid cookies and instead store a session identifier into IndexedDB, and explicitly add the identifier to the header or body of specific HTTP requests, via the fetch API.
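
As a rough sketch of that pattern (the 'app-db' database, 'kv' store, and X-Session-Id header names are made up for illustration), the session identifier can be read from IndexedDB and attached explicitly to a fetch() request:

// A sketch of the cookie-less pattern described above.
// 'app-db', 'kv', and 'X-Session-Id' are made-up names for illustration.
async function getSessionId() {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open('app-db', 1);
    open.onupgradeneeded = () => open.result.createObjectStore('kv');
    open.onsuccess = () => {
      const tx = open.result.transaction('kv', 'readonly');
      const get = tx.objectStore('kv').get('session_id');
      get.onsuccess = () => resolve(get.result);
      get.onerror = () => reject(get.error);
    };
    open.onerror = () => reject(open.error);
  });
}

async function fetchProfile() {
  const sessionId = await getSessionId();
  // The identifier is sent explicitly, only on the requests that need it.
  return fetch('/api/profile', {
    headers: { 'X-Session-Id': sessionId },
  });
}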

That being said, you're still reading this article because you have a good reason to use cookies...

Say goodbye to jank

The venerable document.cookie API is a fairly guaranteed source of jank for your application. For example, whenever you use the document.cookie getter, the browser has to stop executing JavaScript until it has the cookie information you requested. This can take a process hop or a disk read, and will cause your UI to jank.

A straightforward fix for this problem is switching from the document.cookie getter to the asynchronous Cookie Store API.

await cookieStore.get('session_id')

// {
//   domain: "example.com",
//   expires: 1593745721000,
//   name: "session_id",
//   path: "/",
//   sameSite: "unrestricted",
//   secure: true,
//   value: "yxlgco2xtqb.ly25tv3tkb8"
// }

The document.cookie setter can be replaced in a similar manner. Keep in mind that the change is only guaranteed to be applied after the Promise returned by cookieStore.set resolves.

await cookieStore.set({ name: 'opt_out', value: '1' });

// undefined

Observe, don't poll

A popular application for accessing cookies from JavaScript is detecting when the user logs out, and updating the UI. This is currently done by polling document.cookie, which introduces jank and has a negative impact on battery life.

The Cookie Store API brings an alternative method for observing cookie changes, which does not require polling.

cookieStore.addEventListener('change', (event) => {
  for (const cookie of event.changed) {
    if (cookie.name === 'session_id')
      sessionCookieChanged(cookie.value);
  }
  for (const cookie of event.deleted) {
    if (cookie.name === 'session_id')
      sessionCookieChanged(null);
  }
});

Welcome service workers

Because of its synchronous design, the document.cookie API has not been made available to service workers. The Cookie Store API is asynchronous, and is therefore allowed in service workers.

Interacting with cookies works the same way in document contexts and in service workers.

// Works in documents and service workers.
async function logOut() {
  await cookieStore.delete('session_id');
}

However, observing cookie changes is a bit different in service workers. Waking up a service worker can be pretty expensive, so we have to explicitly describe the cookie changes that the worker is interested in.

In the example below, an application that uses IndexedDB to cache user data monitors changes to the session cookie, and discards the cached data when the user logs off.

// Specify the cookie changes we're interested in during the install event.
self.addEventListener('install', (event) => {
  event.waitUntil(
      cookieStore.subscribeToChanges([{ name: 'session_id' }]));
});

// Delete cached data when the user logs out.
self.addEventListener('cookiechange', (event) => {
  for (const cookie of event.deleted) {
    if (cookie.name === 'session_id') {
      indexedDB.deleteDatabase('user_cache');
      break;
    }
  }
});

How to enable the API

To get access to this new API on your site, please sign up for the "Cookie Store API" Origin Trial. If you just want to try it out locally, the API can be enabled on the command line:

chrome --enable-blink-features=CookieStore

Passing this flag on the command line enables the API globally in Chrome for the current session.

If you give this API a try, please let us know what you think! Please direct feedback on the API shape to the specification repository, and report implementation bugs to the CookiesAPI Blink component.

We are especially interested to learn about performance measurements and use cases beyond the ones outlined in the explainer.


Inside look at modern web browser (part 2)


What happens in navigation

This is part 2 of a 4-part blog series looking at the inner workings of Chrome. In the previous post, we looked at how different processes and threads handle different parts of a browser. In this post, we dig deeper into how each process and thread communicate in order to display a website.

Let’s look at a simple use case of web browsing: you type a URL into a browser, then the browser fetches data from the internet and displays a page. In this post, we’ll focus on the part where a user requests a site and the browser prepares to render a page - also known as a navigation.

It starts with a browser process

Browser processes
Figure 1: Browser UI at the top, diagram of the browser process with UI, network, and storage thread inside at the bottom

As we covered in part 1: CPU, GPU, Memory, and multi-process architecture, everything outside of a tab is handled by the browser process. The browser process has threads like the UI thread which draws buttons and input fields of the browser, the network thread which deals with the network stack to receive data from the internet, the storage thread which controls access to files, and more. When you type a URL into the address bar, your input is handled by the browser process’s UI thread.

A simple navigation

Step 1: Handling input

When a user starts to type into the address bar, the first thing the UI thread asks is "Is this a search query or a URL?". In Chrome, the address bar is also a search input field, so the UI thread needs to parse and decide whether to send you to a search engine, or to the site you requested.

Handling user input
Figure 2: UI Thread asking if the input is a search query or a URL

Step 2: Start navigation

When a user hits enter, the UI thread initiates a network call to get the site content. A loading spinner is displayed on the corner of the tab, and the network thread goes through appropriate protocols like DNS lookup and establishing a TLS connection for the request.

Navigation start
Figure 3: the UI thread talking to the network thread to navigate to mysite.com

At this point, the network thread may receive a server redirect header like HTTP 301. In that case, the network thread communicates with the UI thread that the server is requesting a redirect. Then, another URL request will be initiated.

Step 3: Read response

HTTP response
Figure 4: response header which contains Content-Type and payload which is the actual data

Once the response body (payload) starts to come in, the network thread looks at the first few bytes of the stream if necessary. The response's Content-Type header should say what type of data it is, but since it may be missing or wrong, MIME Type sniffing is done here. This is a "tricky business" as commented in the source code. You can read the comment to see how different browsers treat content-type/payload pairs.

If the response is an HTML file, then the next step would be to pass the data to the renderer process, but if it is a zip file or some other file, then that means it is a download request, so the data needs to be passed to the download manager.

MIME type sniffing
Figure 5: Network thread asking if response data is HTML from a safe site

This is also where the SafeBrowsing check happens. If the domain and the response data seem to match a known malicious site, then the network thread raises an alert to display a warning page. Additionally, a Cross Origin Read Blocking (CORB) check happens in order to make sure sensitive cross-site data does not make it to the renderer process.

Step 4: Find a renderer process

Once all of the checks are done and the network thread is confident that the browser should navigate to the requested site, the network thread tells the UI thread that the data is ready. The UI thread then finds a renderer process to carry on rendering the web page.

Find renderer process
Figure 6: Network thread telling UI thread to find Renderer Process

Since the network request could take several hundred milliseconds to get a response back, an optimization to speed up this process is applied. When the UI thread is sending a URL request to the network thread at step 2, it already knows which site it is navigating to. The UI thread tries to proactively find or start a renderer process in parallel to the network request. This way, if all goes as expected, a renderer process is already in standby position when the network thread receives the data. This standby process might not get used if the navigation redirects cross-site, in which case a different process might be needed.

Step 5: Commit navigation

Now that the data and the renderer process are ready, an IPC is sent from the browser process to the renderer process to commit the navigation. It also passes on the data stream so the renderer process can keep receiving HTML data. Once the browser process hears confirmation that the commit has happened in the renderer process, the navigation is complete and the document loading phase begins.

At this point, the address bar is updated and the security indicator and site settings UI reflect the site information of the new page. The session history for the tab will be updated so back/forward buttons will step through the site that was just navigated to. To facilitate tab/session restore when you close a tab or window, the session history is stored on disk.

Commit the navigation
Figure 7: IPC between the browser and the renderer processes, requesting to render the page

Extra Step: Initial load complete

Once the navigation is committed, the renderer process carries on loading resources and renders the page. We will go over the details of what happens at this stage in the next post. Once the renderer process "finishes" rendering, it sends an IPC back to the browser process (this is after all the onload events have fired on all frames in the page and have finished executing). At this point, the UI thread stops the loading spinner on the tab.

I say "finishes", because client side JavaScript could still load additional resources and render new views after this point.

Page finish loading
Figure 8: IPC from the renderer to the browser process to notify the page has "loaded"

The simple navigation is complete! But what happens if a user puts a different URL into the address bar? Well, the browser process goes through the same steps to navigate to the different site. But before it can do that, it needs to check with the currently rendered site whether it cares about the beforeunload event.

beforeunload can create a "Leave this site?" alert when you try to navigate away or close the tab. Everything inside of a tab, including your JavaScript code, is handled by the renderer process, so the browser process has to check with the current renderer process when a new navigation request comes in.

Caution: Do not add unconditional beforeunload handlers. It creates more latency because the handler needs to be executed before the navigation can even be started. This event handler should be added only when needed, for example if users need to be warned that they might lose data they've entered on the page.
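
For example, the handler can be attached only while there is actually unsaved data (a small sketch; isFormDirty() is a hypothetical check for unsaved input):

// Only register beforeunload while there is unsaved data.
// isFormDirty() is a hypothetical check for unsaved user input.
function onBeforeUnload(event) {
  event.preventDefault();
  event.returnValue = ''; // Some browsers require this to show the prompt.
}

function updateBeforeUnload() {
  if (isFormDirty()) {
    window.addEventListener('beforeunload', onBeforeUnload);
  } else {
    window.removeEventListener('beforeunload', onBeforeUnload);
  }
}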

beforeunload event handler
Figure 9: IPC from the browser process to a renderer process telling it that it's about to navigate to a different site

If the navigation was initiated from the renderer process (like when the user clicks on a link or client-side JavaScript runs window.location = "https://newsite.com"), the renderer process first checks beforeunload handlers. Then, it goes through the same process as browser-process-initiated navigation. The only difference is that the navigation request is kicked off from the renderer process to the browser process.

When the new navigation is made to a different site than the currently rendered one, a separate renderer process is called in to handle the new navigation while the current renderer process is kept around to handle events like unload. For more, see an overview of page lifecycle states and how you can hook into events with the Page Lifecycle API.

new navigation and unload
Figure 10: 2 IPCs from a browser process telling a new renderer process to render the page and telling the old renderer process to unload

In case of Service Worker

One recent change to this navigation process is the introduction of service workers. A service worker is a way to write a network proxy in your application code, allowing web developers to have more control over what to cache locally and when to get new data from the network. If a service worker is set up to load the page from the cache, there is no need to request the data from the network.

The important part to remember is that a service worker is JavaScript code that runs in a renderer process. But when the navigation request comes in, how does the browser process know the site has a service worker?

Service worker scope lookup
Figure 11: the network thread in the browser process looking up service worker scope

When a service worker is registered, the scope of the service worker is kept as a reference (you can read more about scope in The Service Worker Lifecycle article). When a navigation happens, the network thread checks the domain against the registered service worker scopes. If a service worker is registered for that URL, the UI thread finds a renderer process in order to execute the service worker code. The service worker may load data from cache, eliminating the need to request data from the network, or it may request new resources from the network.
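
For reference, here is roughly what that looks like in application code (a minimal sketch; sw.js and the cache usage are placeholders): the page registers a service worker for a scope, and the worker's fetch handler decides whether to serve from the cache or go to the network.

// On the page: register a service worker (sw.js is a placeholder file name).
navigator.serviceWorker.register('/sw.js', { scope: '/' });

// In sw.js: decide per request whether to serve from cache or the network.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve a cached response if we have one; otherwise hit the network.
      return cached || fetch(event.request);
    })
  );
});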

serviceworker navigation
Figure 12: the UI thread in a browser process starting up a renderer process to handle service workers; a worker thread in a renderer process then requests data from the network

You can see this round trip between the browser process and the renderer process could result in delays if the service worker eventually decides to request data from the network. Navigation Preload is a mechanism to speed up this process by loading resources in parallel to service worker startup. It marks these requests with a header, allowing servers to decide to send different content for these requests; for example, just updated data instead of a full document.
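
Roughly, a service worker opts into navigation preload and then consumes the preloaded response in its fetch handler (a minimal sketch):

// In the service worker: enable navigation preload once activated.
self.addEventListener('activate', (event) => {
  event.waitUntil(self.registration.navigationPreload.enable());
});

self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith((async () => {
      // Use the response the browser started fetching in parallel,
      // falling back to a regular network request if it isn't available.
      const preloaded = await event.preloadResponse;
      return preloaded || fetch(event.request);
    })());
  }
});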

Navigation preload
Figure 13: the UI thread in a browser process starting up a renderer process to handle the service worker while kicking off the network request in parallel

Wrap-up

In this post, we looked at what happens during a navigation and how your web application code such as response headers and client-side JavaScript interact with the browser. Knowing the steps a browser goes through to get data from the network makes it easier to understand why APIs like Navigation Preload were developed. In the next post, we’ll dive into how the browser evaluates our HTML/CSS/JavaScript to render pages.

Did you enjoy the post? If you have any questions or suggestions for future posts, I'd love to hear from you in the comment section below or @kosamari on Twitter.


Deprecations and removals in Chrome 69


Remove AppCache from insecure contexts

When used over insecure contexts, AppCache potentially allows persistent online and offline cross-site scripting attacks. This is a serious escalation from regular cross-site scripting.

To mitigate this threat, AppCache is now only supported on origins that serve over HTTPS.

Developers looking for an alternative to AppCache are encouraged to use service workers. An experimental library is available to ease that transition.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove anonymous getter for HTMLFrameSetElement

The anonymous getter for HTMLFrameSetElement is non-standard and is therefore being removed. This feature was added 13 years ago to resolve a compatibility issue that existed at the time but no longer does. Because this is a non-standard feature, no alternatives are available. Usage is low enough that we do not expect this to be a problem.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Deprecate and remove Gamepads.item()

The legacy item() accessor is removed from the Gamepads array. This change improves compatibility with Firefox, which is so far the only browser to implement GamepadList.
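
If your code relied on item(), plain index access works instead (a small sketch):

// Before (removed): const gamepad = navigator.getGamepads().item(0);
// After: use ordinary index access.
const gamepads = navigator.getGamepads();
const first = gamepads[0];
if (first) {
  console.log(`Connected gamepad: ${first.id}`);
}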

Chromestatus Tracker | Chromium Bug

Deprecate Custom Elements v0

Custom Elements are a Web Components technology that lets you create new HTML tags, beef up existing tags, or extend components authored by other developers. Custom Elements v1 have been implemented in Chrome since version 54, which shipped in October 2016. Custom Elements v0 was an experimental version not implemented in other browsers. As such it is now deprecated with removal expected in Chrome 73, around April 2019.
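
For sites migrating off v0, the v1 API is class-based and uses customElements.define() (a minimal sketch with a made-up tag name):

// Custom Elements v1: define a new tag with a class-based API.
class MyGreeting extends HTMLElement {
  connectedCallback() {
    this.textContent = 'Hello from a v1 custom element!';
  }
}
customElements.define('my-greeting', MyGreeting);

// Usage:
// document.body.appendChild(document.createElement('my-greeting'));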

Intent to Deprecate | Chromestatus Tracker | Chromium Bug

Deprecate HTML Imports

HTML Imports allow HTML to be imported from one document to another. This feature was part of the early experimental version of Web Components not implemented in other browsers. As such it is now deprecated with removal expected in Chrome 73, around April 2019.

Intent to Deprecate | Chromestatus Tracker | Chromium Bug

Deprecate Shadow DOM v0

Shadow DOM is a Web Components technology that uses scoped subtrees inside elements. Shadow DOM v1 has been implemented in Chrome since version 53, which shipped in August of 2016. Shadow DOM v0 was an experimental version not implemented in other browsers. As such it is now deprecated with removal expected in Chrome 73, around April 2019.
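
For reference, the v1 way of creating a shadow root uses attachShadow() (a minimal sketch; the #host element is made up):

// Shadow DOM v1: attach a scoped subtree to a host element.
const host = document.querySelector('#host'); // assumes <div id="host"> exists
const shadowRoot = host.attachShadow({ mode: 'open' });
shadowRoot.innerHTML = '<p>Scoped content inside the shadow tree</p>';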

Intent to Deprecate | Chromestatus Tracker | Chromium Bug

Deprecate SpeechSynthesis.speak() without user activation

The SpeechSynthesis interface is actively being abused on the web. There's anecdotal evidence that because other autoplay avenues are being closed, abuse is moving to the Web Speech API, which doesn't follow autoplay rules.

The speechSynthesis.speak() function now throws an error if the document has not received a user activation. Removal is expected in Chrome 71, some time in late November.
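
In practice this means calling speak() from a handler for a user gesture, for example a click (the #speak-button element here is made up):

// speechSynthesis.speak() now requires a user activation,
// so trigger it from a user gesture such as a click.
document.querySelector('#speak-button').addEventListener('click', () => {
  speechSynthesis.speak(new SpeechSynthesisUtterance('Hello!'));
});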

Intent to Deprecate | Chromestatus Tracker | Chromium Bug

Audio/Video Updates in Chrome 70


Support for codec and container switching in MSE

Chrome is adding support for improved cross-codec or cross-bytestream transitions in Media Source Extensions (MSE) playback using a new changeType() method on SourceBuffer. It allows the type of media bytes appended to the SourceBuffer to be changed afterwards.

The current version of MSE (W3C Recommendation 17 November 2016) supports adaptive playback of media; however adaptation requires that any media appended to a SourceBuffer must conform to the MIME type provided when initially creating the SourceBuffer via MediaSource.addSourceBuffer(type). Codecs from that type and any previously parsed initialization segments must remain the same throughout. This means the website has to take explicit steps to accomplish codec or bytestream switching (by using multiple media elements or SourceBuffer tracks and switching among those), increasing application complexity and user-visible latency. (Such transitions require the web app to take synchronous action on the renderer main thread). This transition latency impairs the smoothness of media playback across transitions.

With its new changeType() method, a SourceBuffer can buffer and support playback across different bytestream formats and codecs. This new method retains previously buffered media, modulo future MSE coded frame eviction or removal, and leverages the splicing and buffering logic in the existing MSE coded frame processing algorithm.

Here's how to use the changeType() method:

const sourceBuffer = myMediaSource.addSourceBuffer('video/webm; codecs="opus, vp09.00.10.08"');
sourceBuffer.appendBuffer(someWebmOpusVP9Data);

// Later on...
if ('changeType' in sourceBuffer) {
  // Change source buffer type and append new data.
  sourceBuffer.changeType('video/mp4; codecs="mp4a.40.5, avc1.4d001e"');
  sourceBuffer.appendBuffer(someMp4AacAvcData);
}

As expected, if the passed type is not supported by the browser, this method throws a NotSupportedError exception.

Check out the sample to play with cross-codec and cross-bytestream buffering and playback of an audio element.

Intent to Ship | Chromestatus Tracker | Chromium Bug

Opus in MP4 for MSE

The open and highly versatile audio codec Opus has been supported in the <audio> and <video> elements since Chrome 33. Opus in ISO-BMFF support (aka Opus in MP4) was added later. And now Opus in MP4 is available in Chrome 70 for Media Source Extensions (MSE).

Here's how you can detect if Opus in MP4 is supported for MSE:

if (MediaSource.isTypeSupported('audio/mp4; codecs="opus"')) {
  // TODO: Fetch data and feed it to a media source.
}

If you want to see a full example, check out our official sample.

Due to a lack of tools for muxing Opus in MP4 with correct end trimming and preskip values, if such precision is important to you, you'll need to use SourceBuffer.appendWindow{Start,End} and SourceBuffer.timestampOffset in Chrome to obtain sample-accurate playback.
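
A rough sketch of what that trimming could look like; the preskip and end-trim values below are placeholders that depend entirely on how your content was encoded and muxed:

// A sketch only: the exact values depend on your encode/mux settings.
function appendOpusSegment(mediaSource, opusInMp4Data, segmentDuration) {
  const PRESKIP_SECONDS = 0.08;   // hypothetical Opus preskip for this encode
  const END_TRIM_SECONDS = 0.02;  // hypothetical padding to trim at the end

  const sourceBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="opus"');

  // Shift appended frames so audible playback starts after the preskip...
  sourceBuffer.timestampOffset = -PRESKIP_SECONDS;
  // ...and drop coded frames outside the window we actually want to keep.
  sourceBuffer.appendWindowStart = 0;
  sourceBuffer.appendWindowEnd = segmentDuration - END_TRIM_SECONDS;

  sourceBuffer.appendBuffer(opusInMp4Data);
}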

Warning: Chrome for Android does not support encrypted Opus content on Android versions prior to Lollipop.

Intent to Ship | Chromestatus Tracker | Chromium Bug

Allow protected content playback by default on Android

In Chrome 70 for Android, the default value of the “protected content” site setting changes from “Ask first” to “Allowed”, lowering the friction associated with playback of such media. This change is possible, in part, because of additional steps taken to clear media licenses alongside cookies and site data, ensuring that media licenses are not used by sites to track users who have cleared browsing data.

Protected content setting in Android.
Figure 1. Protected content setting in Android.

Inside look at modern web browser (part 3)


Inner workings of a Renderer Process

This is part 3 of a 4-part blog series looking at how browsers work. Previously, we covered multi-process architecture and navigation flow. In this post, we are going to look at what happens inside of the renderer process.

The renderer process touches many aspects of web performance. Since there is a lot happening inside of the renderer process, this post is only a general overview. If you'd like to dig deeper, the Performance section of Web Fundamentals has many more resources.

Renderer processes handle web contents

The renderer process is responsible for everything that happens inside of a tab. In a renderer process, the main thread handles most of the code you send to the user. Sometimes parts of your JavaScript are handled by worker threads if you use a web worker or a service worker. Compositor and raster threads also run inside of a renderer process to render a page efficiently and smoothly.

The renderer process's core job is to turn HTML, CSS, and JavaScript into a web page that the user can interact with.

Renderer process
Figure 1: Renderer process with a main thread, worker threads, a compositor thread, and a raster thread inside

Parsing

Construction of a DOM

When the renderer process receives a commit message for a navigation and starts to receive HTML data, the main thread begins to parse the text string (HTML) and turn it into a Document Object Model (DOM).

The DOM is the browser's internal representation of the page as well as the data structure and API that web developers can interact with via JavaScript.

Parsing an HTML document into a DOM is defined by the HTML Standard. You may have noticed that feeding HTML to a browser never throws an error. For example, a missing closing </p> tag is valid HTML. Erroneous markup like Hi! <b>I'm <i>Chrome</b>!</i> (the b tag is closed before the i tag) is treated as if you wrote Hi! <b>I'm <i>Chrome</i></b><i>!</i>. This is because the HTML specification is designed to handle those errors gracefully. If you are curious how these things are done, you can read the "An introduction to error handling and strange cases in the parser" section of the HTML spec.

Subresource loading

A website usually uses external resources like images, CSS, and JavaScript. Those files need to be loaded from the network or cache. The main thread could request them one by one as it finds them while parsing to build a DOM, but in order to speed things up, a "preload scanner" is run concurrently. If there are things like <img> or <link> in the HTML document, the preload scanner peeks at tokens generated by the HTML parser and sends requests to the network thread in the browser process.

DOM
Figure 2: The main thread parsing HTML and building a DOM tree

JavaScript can block the parsing

When the HTML parser finds a <script> tag, it pauses the parsing of the HTML document and has to load, parse, and execute the JavaScript code. Why? Because JavaScript can change the shape of the document using things like document.write(), which changes the entire DOM structure (the overview of the parsing model in the HTML spec has a nice diagram). This is why the HTML parser has to wait for JavaScript to run before it can resume parsing of the HTML document. If you are curious about what happens in JavaScript execution, the V8 team has talks and blog posts on this.

Hint to browser how you want to load resources

There are many ways web developers can send hints to the browser in order to load resources nicely. If your JavaScript does not use document.write(), you can add the async or defer attribute to the <script> tag. The browser then loads and runs the JavaScript code asynchronously and does not block the parsing. You may also use JavaScript modules if that's suitable. <link rel="preload"> is a way to inform the browser that the resource is definitely needed for the current navigation and that you would like to download it as soon as possible. You can read more on this at Resource Prioritization – Getting the Browser to Help You.

Style calculation

Having a DOM is not enough to know what the page would look like because we can style page elements in CSS. The main thread parses CSS and determines the computed style for each DOM node. This is information about what kind of style is applied to each element based on CSS selectors. You can see this information in the computed section of DevTools.

computed style
Figure 3: The main thread parsing CSS to add computed style

Even if you do not provide any CSS, each DOM node has a computed style. An <h1> tag is displayed bigger than an <h2> tag, and margins are defined for each element. This is because the browser has a default style sheet. If you want to know what Chrome's default CSS is like, you can see the source code here.

Layout

Now the renderer process knows the structure of a document and the styles for each node, but that is not enough to render a page. Imagine you are trying to describe a painting to your friend over the phone. "There is a big red circle and a small blue square" is not enough information for your friend to know what exactly the painting would look like.

game of human fax machine
Figure 4: A person standing in front of a painting, phone line connected to the other person

Layout is a process to find the geometry of elements. The main thread walks through the DOM and computed styles and creates the layout tree which has information like x y coordinates and bounding box sizes. The layout tree may have a similar structure to the DOM tree, but it only contains information related to what's visible on the page. If display: none is applied, that element is not part of the layout tree (however, an element with visibility: hidden is in the layout tree). Similarly, if a pseudo-element with content like p::before{content:"Hi!"} is applied, it is included in the layout tree even though that content is not in the DOM.

layout
Figure 5: The main thread going over DOM tree with computed styles and producing layout tree
Figure 6: Box layout for a paragraph moving due to line break change

Determining the layout of a page is a challenging task. Even the simplest page layout, like a block flow from top to bottom, has to consider how big the font is and where to break lines, because those affect the size and shape of a paragraph; which then affects where the following paragraph needs to be.

CSS can float an element to one side, clip overflowing content, and change writing direction. You can imagine, this layout stage has a mighty task. In Chrome, a whole team of engineers works on layout. If you want to see details of their work, a few talks from the BlinkOn Conference are recorded and quite interesting to watch.

Paint

drawing game
Figure 7: A person in front of a canvas holding a paintbrush, wondering whether they should draw a circle first or a square first

Having a DOM, style, and layout is still not enough to render a page. Let's say you are trying to reproduce a painting. You know the size, shape, and location of elements, but you still have to judge in what order you paint them.

For example, z-index might be set for certain elements; in that case, painting elements in the order they are written in the HTML will result in incorrect rendering.

z-index fail
Figure 8: Page elements appearing in order of an HTML markup, resulting in wrong rendered image because z-index was not taken into account

At this paint step, the main thread walks the layout tree to create paint records. A paint record is a note of the painting process like "background first, then text, then rectangle". If you have ever drawn on a <canvas> element using JavaScript, this process might be familiar to you.

paint records
Figure 9: The main thread walking through layout tree and producing paint records

Updating the rendering pipeline is costly

Figure 10: DOM+Style, Layout, and Paint trees in order it is generated

The most important thing to grasp about the rendering pipeline is that at each step the result of the previous operation is used to create new data. For example, if something changes in the layout tree, then the paint order needs to be regenerated for the affected parts of the document.

If you are animating elements, the browser has to run these operations in between every frame. Most displays refresh the screen 60 times a second (60 fps); an animation will appear smooth to human eyes when you are moving things across the screen at every frame. However, if the animation misses frames in between, then the page will appear "janky".

page jank by missing frames
Figure 11: Animation frames on a timeline

Even if your rendering operations are keeping up with screen refresh, these calculations are running on the main thread, which means it could be blocked when your application is running JavaScript.

page jank by JavaScript
Figure 12: Animation frames on a timeline, but one frame is blocked by JavaScript

You can divide JavaScript operations into small chunks and schedule them to run at every frame using requestAnimationFrame(). For more on this topic, please see Optimize JavaScript Execution. You might also run your JavaScript in Web Workers to avoid blocking the main thread.
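A rough sketch of that chunking approach might look like this, where tasks is a hypothetical queue of small work items:

const tasks = [/* ... many small functions to run ... */];

function processChunk() {
  const start = performance.now();
  // Do a few milliseconds of work, then yield so the browser
  // can produce the next animation frame.
  while (tasks.length > 0 && performance.now() - start < 4) {
    const task = tasks.shift();
    task();
  }
  if (tasks.length > 0) {
    requestAnimationFrame(processChunk);
  }
}

requestAnimationFrame(processChunk);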

request animation frame
Figure 13: Smaller chunks of JavaScript running on a timeline with animation frame

Compositing

How would you draw a page?

Figure 14: Animation of a naive rasterizing process

Now that the browser knows the structure of the document, the style of each element, the geometry of the page, and the paint order, how does it draw a page? Turning this information into pixels on the screen is called rasterizing.

Perhaps a naive way to handle this would be to rasterize parts inside of the viewport. If a user scrolls the page, then move the rastered frame, and fill in the missing parts by rastering more. This is how Chrome handled rasterizing when it was first released. However, a modern browser runs a more sophisticated process called compositing.

What is compositing

Figure 15: Animation of the compositing process

Compositing is a technique to separate parts of a page into layers, rasterize them separately, and composite them into a page in a separate thread called the compositor thread. If a scroll happens, since the layers are already rasterized, all it has to do is composite a new frame. Animation can be achieved in the same way by moving layers and compositing a new frame.

You can see how your website is divided into layers in DevTools using Layers panel.

Dividing into layers

In order to find out which elements need to be in which layers, the main thread walks through the layout tree to create the layer tree (this part is called "Update Layer Tree" in the DevTools performance panel). If certain parts of a page that should be a separate layer (like a slide-in side menu) are not getting one, then you can hint to the browser by using the will-change property in CSS.
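For example, a slide-in side menu could hint that it will be animated with something like the following (.side-menu is a placeholder selector; layers are not free, so measure before and after adding the hint):

.side-menu {
  will-change: transform;
}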

layer tree
Figure 16: The main thread walking through layout tree producing layer tree

You might be tempted to give layers to every element, but compositing across an excessive number of layers could result in slower operation than rasterizing small parts of a page every frame, so it is crucial that you measure the rendering performance of your application. For more on this topic, see Stick to Compositor-Only Properties and Manage Layer Count.

Raster and composite off of the main thread

Once the layer tree is created and paint orders are determined, the main thread commits that information to the compositor thread. The compositor thread then rasterizes each layer. A layer could be large like the entire length of a page, so the compositor thread divides them into tiles and sends each tile off to raster threads. Raster threads rasterize each tile and store them in GPU memory.

raster
Figure 17: Raster threads creating the bitmap of tiles and sending to GPU

The compositor thread can prioritize different raster threads so that things within the viewport (or nearby) can be rastered first. A layer also has multiple tilings for different resolutions to handle things like a zoom-in action.

Once tiles are rastered, the compositor thread gathers tile information called draw quads to create a compositor frame.

Draw quads: Contains information such as the tile's location in memory and where in the page to draw the tile, taking page compositing into consideration.
Compositor frame: A collection of draw quads that represents a frame of a page.

A compositor frame is then submitted to the browser process via IPC. At this point, another compositor frame could be added from the UI thread for a browser UI change, or from other renderer processes for extensions. These compositor frames are sent to the GPU to display them on the screen. If a scroll event comes in, the compositor thread creates another compositor frame to be sent to the GPU.

composit
Figure 18: The compositor thread creating a compositor frame. The frame is sent to the browser process and then to the GPU

The benefit of compositing is that it is done without involving the main thread. The compositor thread does not need to wait for style calculation or JavaScript execution. This is why compositor-only animations are considered the best for smooth performance. If layout or paint needs to be calculated again, then the main thread has to be involved.
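For instance, an animation sketched like the one below only changes transform and opacity, so once the layers exist it can be handled by the compositor alone; animating a layout property such as width or left instead would pull the main thread back in for layout and paint on every frame:

@keyframes slide-in {
  from { transform: translateX(-100%); opacity: 0; }
  to   { transform: translateX(0); opacity: 1; }
}

.banner {
  animation: slide-in 300ms ease-out;
}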

Wrap Up

In this post, we looked at the rendering pipeline from parsing to compositing. Hopefully, you now feel empowered to read more about performance optimization of a website.

In the next and last post of this series, we'll look at the compositor thread in more detail and see what happens when user input like a mouse move or click comes in.

Did you enjoy the post? If you have any questions or suggestions for a future post, I'd love to hear from you in the comment section below or @kosamari on Twitter.

Next: Input is coming to the compositor

Inside look at modern web browser (part 4)

Input is coming to the Compositor

This is the last of the 4-part blog series looking inside of Chrome, investigating how it handles our code to display a website. In the previous post, we looked at the rendering process and learned about the compositor. In this post, we'll look at how the compositor enables smooth interaction when user input comes in.

Input events from the browser's point of view

When you hear "input events" you might only think of a typing in textbox or mouse click, but from the browser's point of view, input means any gesture from the user. Mouse wheel scroll is an input event and touch or mouse over is also an input event.

When a user gesture like a touch on a screen occurs, the browser process is the one that receives the gesture first. However, the browser process is only aware of where that gesture occurred, since the content inside of a tab is handled by the renderer process. So the browser process sends the event type (like touchstart) and its coordinates to the renderer process. The renderer process handles the event appropriately by finding the event target and running the event listeners that are attached.

input event
Figure 1: Input event routed through the browser process to the renderer process

Compositor receives input events

Figure 2: Viewport hovering over page layers

In the previous post, we looked at how the compositor could handle scroll smoothly by compositing rasterized layers. If no input event listeners are attached to the page, the compositor thread can create a new composite frame completely independent of the main thread. But what if some event listeners are attached to the page? How does the compositor thread find out whether the event needs to be handled?

Understanding non-fast scrollable region

Since running JavaScript is the main thread's job, when a page is composited, the compositor thread marks any region of the page that has event handlers attached as a "Non-Fast Scrollable Region". By having this information, the compositor thread can make sure to send input events to the main thread if an event occurs in that region. If the input event comes from outside of this region, then the compositor thread carries on compositing a new frame without waiting for the main thread.

limited non fast scrollable region
Figure 3: Diagram of described input to the non-fast scrollable region

Be aware when you write event handlers

A common event handling pattern in web development is event delegation. Since events bubble, you can attach one event handler at the topmost element and delegate tasks based on the event target. You might have seen or written code like the one below.

document.body.addEventListener('touchstart', event => {
    if (event.target === area) {
        event.preventDefault();
    }
});

Since you only need to write one event handler for all elements, the ergonomics of this event delegation pattern are attractive. However, if you look at this code from the browser's point of view, now the entire page is marked as a non-fast scrollable region. This means that even if your application doesn't care about input from certain parts of the page, the compositor thread has to communicate with the main thread and wait for it every time an input event comes in. Thus, the smooth scrolling ability of the compositor is defeated.

full page non fast scrollable region
Figure 4: Diagram of described input to the non-fast scrollable region covering an entire page

In order to mitigate this, you can pass the passive: true option to your event listener. This hints to the browser that you still want to listen to the event on the main thread, but the compositor can go ahead and composite a new frame as well.

document.body.addEventListener('touchstart', event => {
    if (event.target === area) {
        event.preventDefault();
    }
}, {passive: true});

Check if the event is cancelable

page scroll
Figure 5: A web page with part of the page fixed to horizontal scroll

Imagine you have a box on a page where you want to limit the scroll direction to horizontal scroll only.

Using the passive: true option in your pointer event means that page scrolling can stay smooth, but a vertical scroll might have already started by the time you want to preventDefault in order to limit the scroll direction. You can check against this by using the event.cancelable property.

document.body.addEventListener('pointermove', event => {
    if (event.cancelable) {
        event.preventDefault(); // block the native scroll
        /*
        *  do what you want the application to do here
        */
    } 
}, {passive: true});

Alternatively, you may use a CSS rule like touch-action to eliminate the event handler completely.

#area { 
  touch-action: pan-x; 
}

Finding the event target

hit test
Figure 6: The main thread looking at the paint records asking what's drawn on x.y point

When the compositor thread sends an input event to the main thread, the first thing to run is a hit test to find the event target. Hit testing uses the paint records data that was generated in the rendering process to find out what is underneath the point coordinates at which the event occurred.

Minimizing event dispatches to the main thread

In the previous post, we discussed how a typical display refreshes the screen 60 times a second and how we need to keep up with that cadence for smooth animation. For input, a typical touch-screen device delivers touch events 60-120 times a second, and a typical mouse delivers events 100 times a second. Input events have higher fidelity than our screen can refresh.

If a continuous event like touchmove were sent to the main thread 120 times a second, it might trigger an excessive amount of hit tests and JavaScript execution compared to how slowly the screen can refresh.

unfiltered events
Figure 7: Events flooding the frame timeline causing page jank

To minimize excessive calls to the main thread, Chrome coalesces continuous events (such as wheel, mousewheel, mousemove, pointermove, touchmove ) and delays dispatching until right before the next requestAnimationFrame.

coalesced events
Figure 8: Same timeline as before but event being coalesced and delayed

Any discrete events like keydown, keyup, mouseup, mousedown, touchstart, and touchend are dispatched immediately.

Use getCoalescedEvents to get intra-frame events

For most web applications, coalesced events should be enough to provide a good user experience. However, if you are building something like a drawing application and drawing a path based on touchmove coordinates, you may lose the in-between coordinates needed to draw a smooth line. In that case, you can use the getCoalescedEvents method on the pointer event to get information about those coalesced events.

getCoalescedEvents
Figure 9: Smooth touch gesture path on the left, coalesced limited path on the right

window.addEventListener('pointermove', event => {
    const events = event.getCoalescedEvents();
    for (let event of events) {
        const x = event.pageX;
        const y = event.pageY;
        // draw a line using x and y coordinates.
    }
});

Next steps

In this series, we've covered the inner workings of a web browser. If you have never thought about why DevTools recommends adding {passive: true} to your event handler or why you might write the async attribute in your script tag, I hope this series shed some light on why a browser needs that information to provide a faster and smoother web experience.

Use Lighthouse

If you want to make your code nice to the browser but have no idea where to start, Lighthouse is a tool that runs an audit of any website and gives you a report on what's being done right and what needs improvement. Reading through the list of audits also gives you an idea of what kinds of things a browser cares about.

Learn how to measure performance

Performance tweaks vary between sites, so it is crucial that you measure the performance of your site and decide what fits it best. The Chrome DevTools team has a few tutorials on how to measure your site's performance.

Add Feature Policy to your site

If you want to take an extra step, Feature Policy is a new web platform feature that can be a guardrail when you are building your project. Turning on a feature policy guarantees certain behavior of your app and prevents you from making mistakes. For example, if you want to ensure your app will never block parsing, you can run your app with the synchronous scripts policy. When sync-script: 'none' is enabled, parser-blocking JavaScript will be prevented from executing. This prevents any of your code from blocking the parser, and the browser doesn't need to worry about pausing the parser.

Wrap up

thank you

When I started building websites, I almost only cared about how I would write my code and what would help me be more productive. Those things are important, but we should also think about how the browser handles the code we write. Modern browsers have invested, and continue to invest, in ways to provide a better web experience for users. Being nice to the browser by organizing our code, in turn, improves your users' experience. I hope you join me in the quest to be nice to the browsers!

Huge thank you to everyone who reviewed early drafts of this series, including (but not limited to): Alex Russell, Paul Irish, Meggin Kearney, Eric Bidelman, Mathias Bynens, Addy Osmani, Kinuko Yasuda, Nasko Oskov, and Charlie Reis.

Did you enjoy this series? If you have any questions or suggestions for a future post, I'd love to hear from you in the comment section below or @kosamari on Twitter.

Houdini's Animation Worklet

Supercharge your webapp's animations

TL;DR: Animation Worklet allows you to write imperative animations that run at the device's native frame rate for that extra buttery jank-free smoothness™, are more resilient against main-thread jank, and can be linked to scroll instead of time. Animation Worklet is in Chrome Canary (behind the "Experimental Web Platform features" flag) and we are planning an Origin Trial for Chrome 71. You can start using it as a progressive enhancement today.

Another Animation API?

Actually no, it is an extension of what we already have, and with good reason! Let's start at the beginning. If you want to animate any DOM element on the web today, you have 2 ½ choices: CSS Transitions for simple A to B transitions, CSS Animations for potentially cyclical, more complex time-based animations and Web Animations API (WAAPI) for almost arbitrarily complex animations. WAAPI's support matrix is looking pretty grim, but it's on the way up. Until then, there is a polyfill.

What all these methods have in common is that they are stateless and time-driven. But some of the effects developers are trying are neither time-driven nor stateless. For example the infamous parallax scroller is, as the name implies, scroll-driven. Implementing a performant parallax scroller on the web today is surprisingly hard.

And what about statelessness? Think about Chrome's address bar on Android, for example. If you scroll down, it scrolls out of view. But the second you scroll up, it comes back, even if you are half way down that page. The animation depends not only on scroll position, but also on your previous scroll direction. It is stateful.

Another issue is styling scrollbars. They are notoriously unstylable — or at least not styleable enough. What if I want a nyan cat as my scrollbar? Whatever technique you choose, building a custom scrollbar is neither performant, nor easy.

The point is that all of these things are awkward and hard, if not impossible, to implement efficiently. Most of them rely on events and/or requestAnimationFrame, which might keep you at 60fps even when your screen is capable of running at 90fps, 120fps or higher, and they use a fraction of your precious main-thread frame budget.

Animation Worklet extends the capabilities of the web's animations stack to make these kind of effects easier. Before we dive in, let's make sure we are up-to-date on the basics of animations.

A primer on animations and timelines

WAAPI and Animation Worklet make extensive use of timelines to allow you to orchestrate animations and effects in the way that you want. This section is a quick refresher or introduction to timelines and how they work with animations.

Each document has document.timeline. It starts at 0 when the document is created and counts the milliseconds since the document started existing. All of a document's animations work relative to this timeline.

To make things a little more concrete, let's take a look at this WAAPI snippet

const animation = new Animation(
  new KeyframeEffect(
    document.querySelector('#a'),
    [
      {
        transform: 'translateX(0)'
      },
      {
        transform: 'translateX(500px)'
      },
      {
        transform: 'translateY(500px)'
      }
    ],
    {
      delay: 3000,
      duration: 2000,
      iterations: 3
    }
  ),
  document.timeline
);

animation.play();

When we call animation.play(), the animation uses the timeline’s currentTime as its start time. Our animation has a delay of 3000ms, meaning that the animation will start (or become "active") when the timeline reaches startTime + 3000. After that time, the animation engine will animate the given element from the first keyframe (translateX(0)), through all intermediate keyframes (translateX(500px)), all the way to the last keyframe (translateY(500px)) in exactly 2000ms, as prescribed by the duration option. Since we have a duration of 2000ms, we will reach the middle keyframe when the timeline’s currentTime is startTime + 3000 + 1000 and the last keyframe at startTime + 3000 + 2000. The point is, the timeline controls where we are in our animation!

Once the animation has reached the last keyframe, it will jump back to the first keyframe and start the next iteration of the animation. This process repeats a total of 3 times since we set iterations: 3. If we wanted the animation to never stop, we would write iterations: Number.POSITIVE_INFINITY. Here's the result of the code above.

Note: All demos currently require Canary with the "Experimental Web Platform features" flag enabled on chrome://flags.

WAAPI is incredibly powerful, and there are many more features in this API, like easing, start offsets, keyframe weightings, and fill behavior, that are beyond the scope of this article. If you would like to know more, I recommend reading this article on CSS Animations on CSS Tricks.

Writing an Animation Worklet

Now that we have the concept of timelines down, we can start looking at Animation Worklet and how it allows you to mess with timelines! The Animation Worklet API is not only based on WAAPI, but is — in the sense of the extensible web — a lower-level primitive that explains how WAAPI functions. In terms of syntax, they are incredibly similar:

Animation Worklet:

new WorkletAnimation(
  'passthrough',
  new KeyframeEffect(
    document.querySelector('#a'),
    [
      {
        transform: 'translateX(0)'
      },
      {
        transform: 'translateX(500px)'
      }
    ],
    {
      duration: 2000,
      iterations: Number.POSITIVE_INFINITY
    }
  ),
  document.timeline
).play();

WAAPI:

new Animation(
  new KeyframeEffect(
    document.querySelector('#a'),
    [
      {
        transform: 'translateX(0)'
      },
      {
        transform: 'translateX(500px)'
      }
    ],
    {
      duration: 2000,
      iterations: Number.POSITIVE_INFINITY
    }
  ),
  document.timeline
).play();

The difference is in the first parameter, which is the name of the worklet that drives this animation.

Feature detection

Chrome is the first browser to ship this feature, so you need to make sure your code doesn't just expect AnimationWorklet to be there. Before loading the worklet, detect whether the user's browser supports AnimationWorklet with a simple check:

if('animationWorklet' in CSS) {
  // AnimationWorklet is supported!
}

Loading a worklet

Worklets are a new concept introduced by the Houdini task force to make many of the new APIs easier to build and scale. We will cover the details of worklets a bit more later, but for simplicity you can think of them as cheap and lightweight threads (like workers) for now.

We need to make sure we have loaded a worklet with the name "passthrough", before declaring the animation:

// index.html
await CSS.animationWorklet.addModule("passthrough-aw.js");
// ... WorkletAnimation initialization from above ...

// passthrough-aw.js
registerAnimator('passthrough', class {
  animate(currentTime, effect) {
    effect.localTime = currentTime;
  }
});

What is happening here? We are registering a class as an animator using the AnimationWorklet's registerAnimator() call, giving it the name "passthrough". It's the same name we used in the WorkletAnimation() constructor above. Once the registration is complete, the promise returned by addModule() will resolve and we can start creating animations using that worklet.

The animate() method of our instance will be called for every frame the browser wants to render, passing the currentTime of the animation's timeline as well as the effect that is currently being processed. We only have one effect, the KeyframeEffect and we are using currentTime to set the effect's localTime, hence why this animator is called "passthrough". With this code for the worklet, the WAAPI and the AnimationWorklet above behave exactly the same, as you can see in the demo.

Master of time

The currentTime parameter of our animate() method is the currentTime of the timeline we passed to the WorkletAnimation() constructor. In the previous example, we just passed that time through to the effect. But since this is JavaScript code, we can distort time 💫

function remap(minIn, maxIn, minOut, maxOut, v) {
  return (v - minIn)/(maxIn - minIn) * (maxOut - minOut) + minOut;
}
registerAnimator('sin', class {
  animate(currentTime, effect) {
    effect.localTime =
      remap(-1, 1, 0, 2000, Math.sin(currentTime * 2 * Math.PI / 2000));
  }
});

Note: currentTime can be NaN in certain circumstances (more later). You should keep that in mind when writing animation worklets. Since all mathematical operations can handle NaN (they return NaN when one of their inputs is NaN) we are fine here!

We are taking the Math.sin() of the currentTime, and remapping that value to the range [0; 2000], which is the time range that our effect is defined for. Now the animation looks very different, without having changed the keyframes or the animation's options. The worklet code can be arbitrarily complex, and allows you to programmatically define which effects are played in which order and to which extent.

Options over Options

You might want to reuse a worklet and change its numbers. For this reason the WorkletAnimation constructor allows you to pass an options object to the worklet:

registerAnimator('factor', class {
  constructor(options = {}) {
    this.factor = options.factor || 1;
  }
  animate(currentTime, effect) {
    effect.localTime = currentTime * this.factor;
  }
});

new WorkletAnimation(
  'factor',
  new KeyframeEffect(
    document.querySelector('#b'),
    [ /* ... same keyframes as before ... */ ],
    {
      duration: 2000,
      iterations: Number.POSITIVE_INFINITY
    }
  ),
  document.timeline,
  {factor: 0.5}
).play();

Note: The options object will be structurally cloned when it is being sent to the worklet, similar to how postMessage() operates.

In this example, both animations are driven with the same code, but with different options.

Gimme your local state!

As I hinted at before, one of the key problems animation worklet aims to solve is stateful animations. Animation worklets are allowed to hold state. However, one of the core features of worklets is that they can be migrated to a different thread or even be destroyed to save resources, which would also destroy their state. To prevent state loss, animation worklet offers a hook that is called before a worklet is destroyed, which you can use to return a state object. That object will be passed to the constructor when the worklet is re-created. On initial creation, that parameter will be undefined.

registerAnimator('randomspin', class {
  constructor(options = {}, state = {}) {
    this.direction = state.direction || (Math.random() > 0.5 ? 1 : -1);
  }
  animate(currentTime, effect) {
    // Some math to make sure that `localTime` is always > 0.
    effect.localTime = 2000 + this.direction * (currentTime % 2000);
  }
  destroy() {
    return {
      direction: this.direction
    };
  }
});

Every time you refresh this demo, you have a 50/50 chance of which direction the square will spin. If the browser were to tear down the worklet and migrate it to a different thread, there would be another Math.random() call on creation, which could cause a sudden change of direction. To make sure that doesn't happen, we return the animation's randomly-chosen direction as state and use it in the constructor, if provided.

Note: The destroy() lifecycle hook has been replaced by a getter method, but this change is not reflected in the spec or Chrome’s implementation just yet.

Hooking into the space-time continuum: ScrollTimeline

As the previous section has shown, AnimationWorklet allows us to programmatically define how advancing the timeline affects the effects of the animation. But so far, our timeline has always been document.timeline, which tracks time.

ScrollTimeline opens up new possibilities and allows you to drive animations with scrolling instead of time. We are going to reuse our very first "passthrough" worklet for this demo:

new WorkletAnimation(
  'passthrough',
  new KeyframeEffect(
    document.querySelector('#a'),
    [
      {
        transform: 'translateX(0)'
      },
      {
        transform: 'translateX(500px)'
      }
    ],
    {
      duration: 2000,
      fill: 'both'
    }
  ),
  new ScrollTimeline({
    scrollSource: document.querySelector('main'),
    orientation: "vertical", // "horizontal" or "vertical".
    timeRange: 2000
  })
).play();

Instead of passing document.timeline, we are creating a new ScrollTimeline. You might have guessed it, ScrollTimeline doesn't use time but the scrollSource's scroll position to set the currentTime in the worklet. Being scrolled all the way to the top (or left) means currentTime = 0, while being scrolled all the way to the bottom (or right) sets currentTime to timeRange. If you scroll the box in this demo, you can control the position of the red box.

Note: It might look like you should be able to use ScrollTimeline with a normal Animation, and we agree. This is planned, but currently not supported in Chrome.

If you create a ScrollTimeline with an element that doesn't scroll, the timeline's currentTime will be NaN. So especially with responsive design in mind, you should always be prepared for NaN as your currentTime. It’s often sensible to default to a value of 0.
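A small sketch of that guard, building on the earlier "passthrough" animator (the 'scroll-passthrough' name here is just an example):

registerAnimator('scroll-passthrough', class {
  animate(currentTime, effect) {
    // currentTime is NaN when the scrollSource cannot scroll,
    // so fall back to 0 to keep the effect at its starting state.
    effect.localTime = Number.isNaN(currentTime) ? 0 : currentTime;
  }
});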

Linking animations with scroll position is something that has long been sought, but was never really achieved at this level of fidelity (apart from hacky workarounds with CSS3D). Animation Worklet allows these effects to be implemented in a straightforward way while being highly performant. For example: a parallax scrolling effect like this demo shows that it now takes just a couple of lines to define a scroll-driven animation.

Under the hood

Worklets

Worklets are JavaScript contexts with an isolated scope and a very small API surface. The small API surface allows more aggressive optimization from the browser, especially on low-end devices. Additionally, worklets are not bound to a specific event loop, but can be moved between threads as necessary. This is especially important for AnimationWorklet.

Compositor NSync

You might know that certain CSS properties are fast to animate, while others are not. Some properties just need some work on the GPU to be animated, while others force the browser to re-layout the entire document. Sites like CSSTriggers.com show you which properties are fast to animate, and which are not.

In Chrome (as in many other browsers) we have a process called the compositor, whose job it is — and I'm very much simplifying here — to arrange layers and textures and then utilize the GPU to update the screen as regularly as possible, ideally as fast as the screen can update (typically 60Hz). Depending on which CSS properties are being animated, the browser might just need to have the compositor do its work, while other properties need to run layout, which is an operation that only the main thread can do. Depending on which properties you are planning to animate, your animation worklet will either be bound to the main thread or run in a separate thread in sync with the compositor.

Note: You should avoid "slow" properties at all costs. Limit yourself to animating opacity and transform to make sure your animations run smoothly even on slow devices.

Slap on the wrist

There is usually only one compositor process which is potentially shared across multiple tabs, as the GPU is a highly-contended resource. If the compositor gets somehow blocked, the entire browser grinds to a halt and becomes unresponsive to user input. This needs to be avoided at all costs. So what happens if your worklet cannot deliver the data the compositor needs in time for the frame to be rendered?

If this happens the worklet is allowed — per spec — to "slip". It falls behind the compositor, and the compositor is allowed to re-use the last frame's data to keep the frame rate up. Visually, this will look like jank, but the big difference is that the browser is still responsive to user input.

Note: This is what the spec allows the browser to do. Chrome does not currently do any of these things, but will implement these behaviors soon™

Conclusion

There are many facets to AnimationWorklet and the benefits it brings to the web. The obvious benefits are more control over animations and new ways to drive animations to bring a new level of visual fidelity to the web. But the API's design also allows you to make your app more resilient to jank while getting access to all the new goodness at the same time.

Animation Worklet is in Canary and we are aiming for an Origin Trial with Chrome 71. We are eagerly awaiting your great new web experiences and hearing about what we can improve. There is also a polyfill that gives you the same API, but doesn't provide the performance isolation.

Keep in mind that CSS Transitions and CSS Animations are still valid options and can be much simpler for basic animations. But if you need to go fancy, AnimationWorklet has your back!

What's New In DevTools (Chrome 71)

New features and major changes coming to Chrome DevTools in Chrome 71 include:

Hover over a Live Expression to highlight a DOM node

When a Live Expression evaluates to a DOM node, hover over the Live Expression result to highlight that node in the viewport.

Hovering over a Live Expression result to highlight the node in the viewport.
Figure 1. Hovering over a Live Expression result to highlight the node in the viewport

Store DOM nodes as global variables

To store a DOM node as a global variable, run an expression in the Console that evaluates to a node, right-click the result, and then select Store as global variable.

Store as global variable in the Console.
Figure 2. Store as global variable in the Console

Or, right-click the node in the DOM Tree and select Store as global variable.

Store as global variable in the DOM Tree.
Figure 3. Store as global variable in the DOM Tree

Initiator and priority information now in HAR imports and exports

If you'd like to diagnose network logs with colleagues, you can export the network requests to a HAR file.

Exporting network requests to a HAR file.
Figure 8. Exporting network requests to a HAR file

To import the file back into the Network panel, just drag and drop it.

When you export a HAR file, DevTools now includes initiator and priority information in the HAR file. When you import HAR files back into DevTools, the Initiator and Priority columns are now populated.

The _initiator field provides more context around what caused the resource to be requested. This maps to the Initiator column in the Requests table.

The initiator column.
Figure 9. The initiator column

You can also hold Shift and hover over a request to view its initiator and dependencies.

Viewing initiators and dependencies.
Figure 10. Viewing initiators and dependencies

The _priority field states what priority level the browser assigned to the resource. This maps to the Priority column in the Requests table, which is hidden by default.

The Priority column.
Figure 11. The Priority column

Right-click the header of the Requests table and select Priority to show the Priority column.

How to show the Priority column.
Figure 12. How to show the Priority column

Note: The _initiator and _priority fields begin with underscores because the HAR spec states that custom fields must begin with underscores.

Access the Command Menu from the Main Menu

Use the Command Menu for a fast way to access DevTools panels, tabs, and features.

The Command Menu.
Figure 13. The Command Menu

You can now open the Command Menu from the Main Menu. Click the Main Menu button and select Run command.

Opening the Command Menu from the Main Menu.
Figure 14. Opening the Command Menu from the Main Menu

Picture-in-Picture breakpoints

Picture-in-Picture is a new experimental API that enables a page to create a floating video window over the desktop.

Enable the enterpictureinpicture, leavepictureinpicture, and resize checkboxes in the Event Listener Breakpoints pane to pause whenever one of these picture-in-picture events fires. DevTools pauses on the first line of the handler.

Picture-in-Picture events in the Event Listener Breakpoints pane.
Figure 16. Picture-in-Picture events in the Event Listener Breakpoints pane

Feedback

To discuss the new features and changes in this post, or anything else related to DevTools:

  • File bug reports at Chromium Bugs.
  • Discuss features and changes on the Mailing List. Please don't use the mailing list for support questions. Use Stack Overflow, instead.
  • Get help on how to use DevTools on Stack Overflow. Please don't file bugs on Stack Overflow. Use Chromium Bugs, instead.
  • Tweet us at @ChromeDevTools.
  • File bugs on this doc in the Web Fundamentals repository.

Consider Canary

If you're on Mac or Windows, consider using Chrome Canary as your default development browser. Canary gives you access to the latest DevTools features.

Note: Canary is released as soon as it's built, without testing. This means that Canary breaks about once a month. It's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.

Previous release notes

See the devtools-whatsnew tag for links to all previous DevTools release notes.

Chrome 69 Paint Timing Issues

Chrome 69 includes an incorrect change to our paint-timing metrics. The change was intended to capture more of the rendering pipeline, resulting in more accurate timestamps.

This introduced two issues with the first-paint and first-contentful-paint metrics, which may show up in your site's analytics.

  • A small number of incorrectly high values.
  • About 5% of samples are incorrectly reported as having a 0 value.

To address this, we recommend that you ignore samples with a 0 value and avoid looking at percentiles above 99% and the mean.

The frequency of incorrectly high values is low enough that it's unlikely to affect percentiles below the 99.5th percentile. However, the mean and other statistics influenced heavily by outliers may show significant skew.

The increased number of 0 values results in significant inaccuracies in low percentiles (0-10%).

Percentiles from 50-99% should continue to be reliable, and the data will return to normal in Chrome 70.
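If you collect these metrics in the page yourself, one rough way to drop the 0-value samples before reporting is sketched below (sendToAnalytics is a placeholder for your own reporting code):

const paintEntries = performance.getEntriesByType('paint');
for (const entry of paintEntries) {
  // Skip the incorrectly reported 0-value samples from Chrome 69.
  if (entry.startTime > 0) {
    sendToAnalytics(entry.name, entry.startTime);
  }
}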

If a site you own runs into issues analyzing paint-timing data for Chrome 69, don't hesitate to reach out to speed-metrics-dev@chromium.org. For the nitty gritty details, see this Chrome bug.

New in Chrome 70

In Chrome 70, we've added support for:

  • Desktop Progressive Web Apps on Windows
  • Public Key Credentials in the Credential Management API
  • Named workers

And there’s plenty more!

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 70!

Note: Want the full list of changes? Check out the Chromium source repository change list.

Desktop Progressive Web Apps on Windows

Users can now install Desktop Progressive Web Apps on Windows!

Once installed, they’re launched from the Start menu, and run like all other installed apps, without an address bar or tabs. Service workers ensure that they’re fast and reliable, and the app window experience makes them feel like any other installed app.

Getting started isn't any different than what you're already doing today. All of the work you've done for your existing Progressive Web App still applies! If your app meets the standard PWA criteria, Chrome will fire the beforeinstallprompt event. Save the event; then, add some UI (like an install app button) to tell the user your app can be installed. Then, when the user clicks the button, call prompt() on the saved event; Chrome will then show the prompt to the user. If they click add, Chrome will add your PWA to their start menu and desktop.
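A minimal sketch of that flow might look like the following, assuming an installButton element already exists on your page:

let deferredPrompt;

window.addEventListener('beforeinstallprompt', (event) => {
  // Stash the event so the prompt can be shown later, on a user gesture.
  deferredPrompt = event;
  installButton.hidden = false;
});

installButton.addEventListener('click', async () => {
  if (!deferredPrompt) {
    return;
  }
  // Show the saved install prompt and wait for the user's choice.
  deferredPrompt.prompt();
  const choice = await deferredPrompt.userChoice;
  console.log(`User ${choice.outcome} the install prompt`);
  deferredPrompt = null;
});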

See my Desktop PWAs post for complete details.

Note: Mac and Linux support is expected to arrive in Chrome 72.

Credential Management API: Public Key Credentials

The Credential Management API makes signing in super simple for users. It allows your site to interact with the browser’s credential manager or federated account services like Google and Facebook to sign users in.

Chrome 70 adds support for a third type of credential: Public Key Credential, which allows web applications to create and use strong, cryptographically attested, application-scoped credentials to strongly authenticate users.

I'm pretty excited about it because it allows sites to use my fingerprint for 2-factor authentication. But, it also adds support for additional types of security keys and better security on the web.

Check the Credential Management API docs for more details, or give it a try with the WebAuthn Demo to see how you can get started!

Named workers

Workers are an easy way to move JavaScript off the main thread and into the background. This is critical to keeping your site interactive, because it means that the main thread won’t lock up when it’s running an expensive or complex JavaScript computation.

Without WebWorkers

  • Main thread: Lots of heavy JavaScript running, resulting in a slow, janky experience.

With WebWorkers

  • Main thread: No heavy JavaScript running, resulting in a fast, smooth experience.
  • WebWorker: Lots of heavy JavaScript running, which doesn't affect the main thread.

In Chrome 70, workers now have a name attribute, which is specified by an optional argument on the constructor.

const url = '/scripts/my-worker.js';

const wNYC = new Worker(url, {name: 'NewYork'});

const oSF = {name: 'SanFrancisco'};
const wSF = new Worker(url, oSF);

This lets you distinguish dedicated workers by name when you have multiple workers with the same URL. You can also print the name in the DevTools console, making it much easier to know which worker you’re debugging!

Naming workers is already available in Firefox, Edge, and Safari. See the discussion on GitHub for more details.

And more!

These are just a few of the changes in Chrome 70 for developers. Of course, there’s plenty more.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 71 is released, I’ll be right here to tell you -- what’s new in Chrome!

Tweaks to cache.addAll() and importScripts() coming in Chrome 71

Developers using service workers and the Cache Storage API should be on the lookout for two small changes rolling out in Chrome 71. Both changes bring Chrome's implementation more in line with specifications and other browsers.

Disallowing asynchronous importScripts()

importScripts() tells your main service worker script to pause its current execution, download additional code from a given URL, and run it to completion in the current global scope. Once that's done, the main service worker script resumes execution. importScripts() comes in handy when you want to break your main service worker script into smaller pieces for organizational reasons, or pull in third-party code to add functionality to your service worker.

Browsers attempt to mitigate the possible performance gotchas of "download and run some synchronous code" by automatically caching anything pulled in via importScripts(), meaning that after the initial download, there's very little overhead involved in executing the imported code.

For that to work, though, the browser needs to know that there won't be any "surprise" code imported into the service worker after the initial installation. As per the service worker specification, calling importScripts() is supposed to only work during the synchronous execution of the top-level service worker script, or if needed, asynchronously inside of the install handler.

Prior to Chrome 71, calling importScripts() asynchronously outside of the install handler would work. Starting with Chrome 71, those calls throw a runtime exception (unless the same URL was previously imported in an install handler), matching the behavior in other browsers.

Instead of code like this:

// This only works in Chrome 70 and below.
self.addEventListener('fetch', event => {
  importScripts('my-fetch-logic.js');
  event.respondWith(self.customFetchLogic(event));
});

Your service worker code should look like:

// Move the importScripts() to the top-level scope.
// (Alternatively, import the same URL in the install handler.)
importScripts('my-fetch-logic.js');
self.addEventListener('fetch', event => {
  event.respondWith(self.customFetchLogic(event));
});

Note: Some users of the Workbox library might be implicitly relying on asynchronous calls to importScripts() without realizing it. Please see this guidance to make sure you don't run into issues in Chrome 71.

Deprecating repeated URLs passed to cache.addAll()

If you're using the Cache Storage API alongside of a service worker, there's another small change in Chrome 71 to align with the relevant specification. When the same URL is passed in multiple times to a single call to cache.addAll(), the specification says that the promise returned by the call should reject.

Prior to Chrome 71, that was not detected, and the duplicate URLs would effectively be ignored.

A screenshot of the warning message in Chrome's console.
Starting in Chrome 71, you'll see a warning message logged to the console.

This logging is a prelude to Chrome 72, where instead of just a logged warning, duplicate URLs will lead to cache.addAll() rejecting. If you're calling cache.addAll() as part of a promise chain passed to InstallEvent.waitUntil(), as is common practice, that rejection might cause your service worker to fail to install.

Here's how you might run into trouble:

const urlsToCache = [
  '/index.html',
  '/main.css',
  '/app.js',
  '/index.html'  // Oops! This is listed twice and should be removed.
];

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open('my-cache')
      .then(cache => cache.addAll(urlsToCache))
  );
});

This restriction only applies to the actual URLs being passed to cache.addAll(), and caching what ends up being two equivalent responses that have different URLs—like '/' and '/index.html'—will not trigger a rejection.
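If the list is assembled dynamically and duplicates might sneak in, one simple safeguard is to de-duplicate it before handing it to cache.addAll():

self.addEventListener('install', event => {
  // Remove accidental duplicates; cache.addAll() will reject on them in Chrome 72+.
  const uniqueUrlsToCache = [...new Set(urlsToCache)];
  event.waitUntil(
    caches.open('my-cache')
      .then(cache => cache.addAll(uniqueUrlsToCache))
  );
});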

Test your service worker implementation widely

Service workers are widely implemented across all major "evergreen" browsers at this point. If you regularly test your progressive web app against a number of browsers, or if you have a significant number of users who don't use Chrome, then chances are you've already detected the inconsistency and updated your code. But on the off chance that you haven't noticed this behavior in other browsers, we wanted to call out the change before switching Chrome's behavior.

Watch video using Picture-in-Picture

Picture-in-Picture (PiP) allows users to watch videos in a floating window (always on top of other windows) so they can keep an eye on what they’re watching while interacting with other sites or applications.

With the new Picture-in-Picture Web API, you can initiate and control Picture-in-Picture for video elements on your website. Try it out on our official Picture-in-Picture sample.

Background

In September 2016, Safari added Picture-in-Picture support through a WebKit API in macOS Sierra. Six months later, Chrome automatically played Picture-in-Picture video on mobile with the release of Android O using a native Android API. Six months later, we announced our intent to build and standardize a Web API, feature compatible with Safari’s, that would allow web developers to create and control the full experience around Picture-in-Picture. And here we are!

Get into the code

Enter Picture-in-Picture

Let’s start simply with a video element and a way for the user to interact with it, such as a button element.

<video id="videoElement" src="https://example.com/file.mp4"></video>
<button id="pipButtonElement"></button>

Only request Picture-in-Picture in response to a user gesture, and never in the promise returned by videoElement.play(). This is because promises do not yet propagate user gestures. Instead, call requestPictureInPicture() in a click handler on pipButtonElement as shown below. It is your responsibility to handle what happens if a user clicks twice.

pipButtonElement.addEventListener('click', async function() {
  pipButtonElement.disabled = true;

  await videoElement.requestPictureInPicture();

  pipButtonElement.disabled = false;
});

When the promise resolves, Chrome shrinks the video into a small window that the user can move around and position over other windows.

You’re done. Great job! You can stop reading and go take your well-deserved vacation. Sadly, that is not always the case. The promise may reject for any of the following reasons:

  • Picture-in-Picture is not supported by the system.
  • Document is not allowed to use Picture-in-Picture due to a restrictive feature policy.
  • Video metadata has not been loaded yet (videoElement.readyState === 0).
  • Video file is audio-only.
  • The new disablePictureInPicture attribute is present on the video element.
  • The call was not made in a user gesture event handler (e.g. a button click).

The Feature support section below shows how to enable/disable a button based on these restrictions.

Let’s add a try...catch block to capture these potential errors and let the user know what’s going on.

pipButtonElement.addEventListener('click', async function() {
  pipButtonElement.disabled = true;

  try {
    await videoElement.requestPictureInPicture();
  }
  catch(error) {
    // TODO: Show error message to user.
  }
  finally {
    pipButtonElement.disabled = false;
  }
})

The video element behaves the same whether it is in Picture-in-Picture or not: events are fired and calling methods work. It reflects changes of state in the Picture-in-Picture window (such as play, pause, seek, etc.) and it is also possible to change state programmatically in JavaScript.

Exit Picture-in-Picture

Now, let's make our button toggle entering and exiting Picture-in-Picture. We first have to check if the read-only object document.pictureInPictureElement is our video element. If it isn’t, we send a request to enter Picture-in-Picture as above. Otherwise, we ask to leave by calling document.exitPictureInPicture(), which means the video will appear back in the original tab. Note that this method also returns a promise.

...
try {
  if (videoElement !== document.pictureInPictureElement) {
    await videoElement.requestPictureInPicture();
  } else {
    await document.exitPictureInPicture();
  }
}
...

Listen to Picture-in-Picture events

Operating systems usually restrict Picture-in-Picture to one window, so Chrome's implementation follows this pattern. This means users can only play one Picture-in-Picture video at a time. You should expect users to exit Picture-in-Picture even when you didn't ask for it.

The new enterpictureinpicture and leavepictureinpicture event handlers let us tailor the experience for users. It could be anything from browsing a catalog of videos, to surfacing a livestream chat.

videoElement.addEventListener('enterpictureinpicture', function(event) {
  // Video entered Picture-in-Picture.
});

videoElement.addEventListener('leavepictureinpicture', function(event) {
  // Video left Picture-in-Picture.
  // User may have played a Picture-in-Picture video from a different page.
});

Get the Picture-in-Picture window size

If you want to adjust the video quality when the video enters and leaves Picture-in-Picture, you need to know the Picture-in-Picture window size and be notified if a user manually resizes the window.

The example below shows how to get the width and height of the Picture-in-Picture window when it is created or resized.

let pipWindow;

videoElement.addEventListener('enterpictureinpicture', function(event) {
  pipWindow = event.pictureInPictureWindow;
  console.log(`> Window size is ${pipWindow.width}x${pipWindow.height}`);
  pipWindow.addEventListener('resize', onPipWindowResize);
});

videoElement.addEventListener('leavepictureinpicture', function(event) {
  pipWindow.removeEventListener('resize', onPipWindowResize);
});

function onPipWindowResize(event) {
  console.log(`> Window size changed to ${pipWindow.width}x${pipWindow.height}`);
  // TODO: Change video quality based on Picture-in-Picture window size.
}

I’d suggest not hooking directly into the resize event, as each small change made to the Picture-in-Picture window size will fire a separate event, which may cause performance issues if you’re doing an expensive operation on each resize. In other words, the resize operation will fire the events over and over again very rapidly. I’d recommend using common techniques such as throttling and debouncing to address this problem.
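For example, a small hand-rolled debounce helper applied to the resize handler above could look like this:

// Run the expensive work only after resize events have been quiet for `delay` ms.
function debounce(callback, delay) {
  let timeoutId;
  return function(...args) {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => callback.apply(this, args), delay);
  };
}

const onPipWindowResizeDebounced = debounce(onPipWindowResize, 100);
// Then register onPipWindowResizeDebounced instead of onPipWindowResize
// in the 'enterpictureinpicture' handler above.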

Feature support

The Picture-in-Picture Web API may not be supported, so you have to detect this to provide progressive enhancement. Even when it is supported, it may be turned off by the user or disabled by a feature policy. Luckily, you can use the new boolean document.pictureInPictureEnabled to determine this.

if (!('pictureInPictureEnabled' in document)) {
  console.log('The Picture-in-Picture Web API is not available.');
}
else if (!document.pictureInPictureEnabled) {
  console.log('The Picture-in-Picture Web API is disabled.');
}

Applied to a specific button element for a video, this is how you may want to handle your Picture-in-Picture button visibility.

if ('pictureInPictureEnabled' in document) {
  // Set button ability depending on whether Picture-in-Picture can be used.
  setPipButton();
  videoElement.addEventListener('loadedmetadata', setPipButton);
  videoElement.addEventListener('emptied', setPipButton);
} else {
  // Hide button if Picture-in-Picture is not supported.
  pipButtonElement.hidden = true;
}

function setPipButton() {
  pipButtonElement.disabled = (videoElement.readyState === 0) ||
                              !document.pictureInPictureEnabled ||
                              videoElement.disablePictureInPicture;
}

Samples, demos, and codelabs

Check out our official Picture-in-Picture sample to try the Picture-in-Picture Web API.

Demos and codelabs will follow.

What’s next

First, check out the implementation status page to know which parts of the API are currently implemented in Chrome and other browsers.

Here's what you can expect to see in the near future:

  • Picture-in-Picture will be supported in Chrome OS and Android O.
  • MediaStreams from MediaDevices.getUserMedia() will work with Picture-in-Picture.
  • Web developers will be able to add custom Picture-in-Picture controls.

Resources

Many thanks to Mounir Lamouri and [Jennifer Apacible] for their work on Picture-in-Picture, and help with this article. And a huge thanks to everyone involved in the [standardization effort].

[https://crbug.com/?q=component:Blink>Media>PictureInPicture]: https://crbug.com/?q=component:Blink>Media>PictureInPicture
[https://wicg.github.io/picture-in-picture]: https://wicg.github.io/picture-in-picture
[https://github.com/WICG/picture-in-picture/issues]: https://github.com/WICG/picture-in-picture/issues
[https://googlechrome.github.io/samples/picture-in-picture/]: https://googlechrome.github.io/samples/picture-in-picture/
[https://github.com/gbentaieb/pip-polyfill/]: https://github.com/gbentaieb/pip-polyfill/
[standardization effort]: https://github.com/WICG/picture-in-picture/issues?utf8=%E2%9C%93&q=
[Jennifer Apacible]: https://twitter.com/japacible

The Intl.RelativeTimeFormat API

Modern web applications often use phrases like “yesterday”, “42 seconds ago”, or “in 3 months” instead of full dates and timestamps. Such relative time-formatted values have become so common that several popular libraries implement utility functions that format them in a localized manner. (Examples include Moment.js, Globalize, and date-fns.)

One problem with implementing a localized relative time formatter is that you need a list of customary words or phrases (such as “yesterday” or “last quarter”) for each language you want to support. The Unicode CLDR provides this data, but to use it in JavaScript, it has to be embedded and shipped alongside the other library code. This unfortunately increases the bundle size for such libraries, which negatively impacts load times, parse/compile cost, and memory consumption.

The brand new Intl.RelativeTimeFormat API shifts that burden to the JavaScript engine, which can ship the locale data and make it directly available to JavaScript developers. Intl.RelativeTimeFormat enables localized formatting of relative times without sacrificing performance.

Usage examples

The following example shows how to create a relative time formatter using the English language.

const rtf = new Intl.RelativeTimeFormat('en');

rtf.format(3.14, 'second');
// → 'in 3.14 seconds'

rtf.format(-15, 'minute');
// → '15 minutes ago'

rtf.format(8, 'hour');
// → 'in 8 hours'

rtf.format(-2, 'day');
// → '2 days ago'

rtf.format(3, 'week');
// → 'in 3 weeks'

rtf.format(-5, 'month');
// → '5 months ago'

rtf.format(2, 'quarter');
// → 'in 2 quarters'

rtf.format(-42, 'year');
// → '42 years ago'

Note that the argument passed to the Intl.RelativeTimeFormat constructor can be either a string holding a BCP 47 language tag or an array of such language tags.

Here’s an example of using a different language (Spanish):

const rtf = new Intl.RelativeTimeFormat('es');

rtf.format(3.14, 'second');
// → 'dentro de 3,14 segundos'

rtf.format(-15, 'minute');
// → 'hace 15 minutos'

rtf.format(8, 'hour');
// → 'dentro de 8 horas'

rtf.format(-2, 'day');
// → 'hace 2 días'

rtf.format(3, 'week');
// → 'dentro de 3 semanas'

rtf.format(-5, 'month');
// → 'hace 5 meses'

rtf.format(2, 'quarter');
// → 'dentro de 2 trimestres'

rtf.format(-42, 'year');
// → 'hace 42 años'

Additionally, the Intl.RelativeTimeFormat constructor accepts an optional options argument, which gives fine-grained control over the output. To illustrate the flexibility, let’s look at some more English output based on the default settings:

// Create a relative time formatter for the English language, using the
// default settings (just like before). In this example, the default
// values are explicitly passed in.
const rtf = new Intl.RelativeTimeFormat('en', {
 localeMatcher: 'best fit', // other values: 'lookup'
 style: 'long', // other values: 'short' or 'narrow'
 numeric: 'always', // other values: 'auto'
});

// Now, let’s try some special cases!

rtf.format(-1, 'day');
// → '1 day ago'

rtf.format(0, 'day');
// → 'in 0 days'

rtf.format(1, 'day');
// → 'in 1 day'

rtf.format(-1, 'week');
// → '1 week ago'

rtf.format(0, 'week');
// → 'in 0 weeks'

rtf.format(1, 'week');
// → 'in 1 week'

You may have noticed that the above formatter produced the string '1 day ago' instead of 'yesterday', and the slightly awkward 'in 0 weeks' instead of 'this week'. This happens because by default, the formatter uses the numeric value in the output.

To change this behavior, set the numeric option to 'auto' (instead of the implicit default of 'always'):

// Create a relative time formatter for the English language that does
// not always have to use numeric value in the output.
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

rtf.format(-1, 'day');
// → 'yesterday'

rtf.format(0, 'day');
// → 'today'

rtf.format(1, 'day');
// → 'tomorrow'

rtf.format(-1, 'week');
// → 'last week'

rtf.format(0, 'week');
// → 'this week'

rtf.format(1, 'week');
// → 'next week'

Analogous to other Intl classes, Intl.RelativeTimeFormat has a formatToParts method in addition to the format method. Although format covers the most common use case, formatToParts can be helpful if you need access to the individual parts of the generated output:

// Create a relative time formatter for the English language that does
// not always have to use numeric value in the output.
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

rtf.format(-1, 'day');
// → 'yesterday'

rtf.formatToParts(-1, 'day');
// → [{ type: 'literal', value: 'yesterday' }]

rtf.format(3, 'week');
// → 'in 3 weeks'

rtf.formatToParts(3, 'week');
// → [
//  { type: 'literal', value: 'in ' },
//  { type: 'integer', value: '3', unit: 'week' },
//  { type: 'literal', value: ' weeks' }
// ]

For more information about the remaining options and their behavior, see the API docs in the proposal repository.

Conclusion

Intl.RelativeTimeFormat is available by default in V8 v7.1.179 and Chrome 71. As this API becomes more widely available, you’ll find libraries such as Moment.js, Globalize, and date-fns dropping their dependency on hardcoded CLDR databases in favor of the native relative time formatting functionality, thereby improving load-time performance, parse- and compile-time performance, run-time performance, and memory usage.
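For example, an application or library might feature-detect the API and only fall back to bundled data when it is missing. Here’s a rough sketch (formatRelativeDays and the fallback strings are hypothetical, not part of any library):

// Use the native API when available, otherwise fall back (e.g. to a library).
function formatRelativeDays(days) {
  if (typeof Intl !== 'undefined' && 'RelativeTimeFormat' in Intl) {
    const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
    return rtf.format(days, 'day');
  }
  // Simplistic fallback for engines without the API.
  return days < 0 ? `${-days} day(s) ago` : `in ${days} day(s)`;
}

formatRelativeDays(-1);
// → 'yesterday' where Intl.RelativeTimeFormat is supported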

Questions about this API? Comments about this article? Feel free to ping me on Twitter via @mathias!


Deprecations and removals in Chrome 71

Chrome 71 also includes changes to cache.addAll() and importScripts(). Read about it in Tweaks to cache.addAll() and importScripts() coming in Chrome 71 by Jeff Posnick.

Remove SpeechSynthesis.speak() without user activation

The SpeechSynthesis interface is actively being abused on the web. There is anecdotal evidence that, because other autoplay avenues are being closed, abuse is moving to the Web Speech API, which doesn't follow autoplay rules.

The speechSynthesis.speak() function now throws an error if the document has not received a user activation. This feature has been deprecated since Chrome 70.
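For example, here's a minimal sketch of calling speak() from within a user gesture, which satisfies the activation requirement (the #speak-button element is a hypothetical example):

// Hypothetical button; the click handler provides the required user activation.
document.querySelector('#speak-button').addEventListener('click', () => {
  const utterance = new SpeechSynthesisUtterance('Hello from the Web Speech API!');
  speechSynthesis.speak(utterance); // Allowed: runs in response to a user gesture.
});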

Intent to Deprecate | Chromestatus Tracker | Chromium Bug

Remove prefixed versions of APIs

Chrome has removed non-standard aliases for two widely supported standard interfaces.

WebKitAnimationEvent

WebKitAnimationEvent has been fully replaced by AnimationEvent, the event interface used for events relating to CSS Animations. The prefixed form is only supported in Safari. Firefox and Edge only support the un-prefixed AnimationEvent.

Intent to Remove | Chromestatus Tracker | Chromium Bug

WebKitTransitionEvent

WebKitTransitionEvent has been fully replaced by TransitionEvent, the event interface used for events relating to CSS Transitions (for example, transitionstart). The prefixed form is only supported in Safari. Firefox and Edge only support the un-prefixed TransitionEvent.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove URL.createObjectURL from MediaStream

The URL.createObjectURL() method has been removed from the MediaStream interface. This method was deprecated in 2013 and superseded by assigning streams to HTMLMediaElement.srcObject. The old method was removed because it is less safe, requiring a call to URL.revokeObjectURL() to end the stream. Other user agents have either deprecated (Firefox) or removed (Safari) this feature.
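As a minimal sketch of the replacement (assuming a <video> element on the page):

const video = document.querySelector('video');

navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => {
    // Removed approach: video.src = URL.createObjectURL(stream);
    video.srcObject = stream; // Preferred: no revokeObjectURL() bookkeeping needed.
  })
  .catch((err) => console.error('getUserMedia failed:', err));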

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove document.origin

The document.origin property has been removed. This property was only ever implemented in Chromium and WebKit. It is redundant with self.origin, which can be used in both window and worker contexts and has wider support.
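For reference, a one-line sketch of the replacement:

// self.origin works in both window and worker contexts.
console.log(self.origin); // e.g. 'https://example.com', depending on the page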

Intent to Remove | Chromestatus Tracker | Chromium Bug

WebAssembly Threads ready to try in Chrome 70

WebAssembly (Wasm) enables compilation of code written in C++ and other languages to run on the web. One very useful feature of native applications is the ability to use threads - a primitive for parallel computation. Most C and C++ developers are familiar with pthreads, a standardized API for thread management in an application.

The WebAssembly Community Group has been working on bringing threads to the web to enable real multi-threaded applications. As part of this effort, V8 has implemented necessary support for threads in the WebAssembly engine, available through an Origin Trial. Origin Trials allow developers to experiment with new web features before they are fully standardized. This allows us to gather real-world feedback from intrepid developers, which is critical to validate and improve new features.

The Chrome 70 release supports threads for WebAssembly and we encourage interested developers to start using them and give us feedback.

Threads? What about Workers?

Browsers have supported parallelism via Web Workers since 2012 in Chrome 4; in fact it's normal to hear terms like 'on the main thread' etc. However, Web Workers do not share mutable data between them, instead relying on message-passing for communication. In fact, Chrome allocates a new V8 engine for each of them (called isolates). Isolates share neither compiled code nor JavaScript objects, and thus they cannot share mutable data like pthreads.

WebAssembly threads, on the other hand, are threads that can share the same Wasm memory. The underlying storage of the shared memory is accomplished with a SharedArrayBuffer, a JavaScript primitive that allows sharing a single ArrayBuffer's contents concurrently between workers. Each WebAssembly thread runs in a Web Worker, but their shared Wasm memory allows them to work much like they do on native platforms. This means that the applications that use Wasm threads are responsible for managing access to the shared memory as in any traditional threaded application. There are many existing code libraries written in C or C++ that use pthreads, and those can be compiled to Wasm and run in true threaded mode, allowing more cores to work on the same data simultaneously.
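To make that sharing model concrete, here's a minimal JavaScript sketch of a SharedArrayBuffer shared between the main thread and a worker (worker.js is a hypothetical file, shown inline as a comment):

const sab = new SharedArrayBuffer(4);   // 4 bytes of memory shared between threads
const shared = new Int32Array(sab);

const worker = new Worker('worker.js');
worker.postMessage(sab);                // The buffer is shared, not copied.

// worker.js (hypothetical):
//   onmessage = (e) => {
//     const shared = new Int32Array(e.data);
//     Atomics.store(shared, 0, 42);    // Write is visible to the main thread.
//     postMessage('done');
//   };

worker.onmessage = () => {
  console.log(Atomics.load(shared, 0)); // → 42
};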

A simple example

Here's an example of a simple 'C' program that uses threads.

#include <pthread.h>
#include <stdio.h>

// Calculate Fibonacci numbers shared function
int fibonacci(int iterations) {
    int     val = 1;
    int     last = 0;

    if (iterations == 0) {
        return 0;
    }
    for (int i = 1; i < iterations; i++) {
        int     seq;

        seq = val + last;
        last = val;
        val = seq;
    }
    return val;
}
// Start function for the background thread
void *bg_func(void *arg) {
    int     *iter = (void *)arg;

    *iter = fibonacci(*iter);
    return arg;
}
// Foreground thread and main entry point
int main(int argc, char *argv[]) {
    int         fg_val = 54;
    int         bg_val = 42;
    pthread_t   bg_thread;

    // Create the background thread
    if (pthread_create(&bg_thread, NULL, bg_func, &bg_val)) {
        perror("Thread create failed");
        return 1;
    }
    // Calculate on the foreground thread
    fg_val = fibonacci(fg_val);
    // Wait for background thread to finish
    if (pthread_join(bg_thread, NULL)) {
        perror("Thread join failed");
        return 2;
    }
    // Show the result from background and foreground threads
    printf("Fib(42) is %d, Fib(6 * 9) is %d\n", bg_val, fg_val);

    return 0;
}

That code begins with the main() function, which declares two variables, fg_val and bg_val. There is also a function called fibonacci() which is called by both of the threads in this example. The main() function creates a background thread using pthread_create() whose task is to calculate the Fibonacci sequence value corresponding to the value of the bg_val variable. Meanwhile, the main() function running in the foreground thread calculates it for the fg_val variable. Once the background thread has finished, the results are printed out.

Compile for thread support

First, you should have the emscripten SDK installed, preferably version 1.38.11 or later. To build our example code with threads enabled for running in the browser, we need to pass a couple of extra flags to the emscripten emcc compiler. Our command line looks like this:

emcc -O2 -s USE_PTHREADS=1 -s PTHREAD_POOL_SIZE=2 -o test.js test.c

The command line argument '-s USE_PTHREADS=1' turns on threading support for the compiled WebAssembly module and the argument '-s PTHREAD_POOL_SIZE=2' tells the compiler to generate a pool of two (2) threads.

When the program is run, under the hood it will load the WebAssembly module, create a Web Worker for each of the threads in the thread pool (two in this case), and share the module with each of the workers; those workers are then used whenever a call to pthread_create() is made. Each worker instantiates the Wasm module with the same memory, allowing them to cooperate. V8's newest changes in version 7.0 share the compiled native code of Wasm modules that are passed between workers, which allows even very large applications to scale to many workers. Note that it makes sense to set the thread pool size to the maximum number of threads your application needs, or thread creation may fail. At the same time, if the thread pool size is too large, you'll be creating unnecessary Web Workers that sit around doing nothing but using memory.

How to try it out

The quickest way to test out our WebAssembly module is to turn on the experimental WebAssembly threads support in Chrome 70 onwards. Navigate to the URL chrome://flags in your browser as shown below:

Chrome flags page

Next, find the experimental WebAssembly threads setting which looks like this:

WebAssembly threads setting

Change the setting to Enabled as shown below, then restart your browser.

WebAssembly threads setting enabled

After the browser has restarted we can try loading the threaded WebAssembly module with a minimal HTML page, containing just this content:

<!DOCTYPE html>
<html>
  <head>
    <title>Threads test</title>
  </head>
  <body>
    <script src="test.js"></script>
  </body>
</html>

To try this page you'll need to run some form of web server and load it from your browser. That will cause the WebAssembly module to load and run. Opening DevTools shows us the output from the run, and you should see something like the output image below in the console:

Console output from fibonacci program

Our WebAssembly program with threads has executed successfully! We'd encourage you to try out your own threaded application using the steps outlined above.

Testing in the field with an Origin Trial

Trying out threads by turning on experimental flags in the browser is fine for development purposes, but if you'd like to test your application out in the field, you can do so with what's known as an origin trial.

Origin trials let you try out experimental features with your users by obtaining a testing token that's tied to your domain. You can then deploy your app and expect it to work in a browser that can support the feature you're testing (in this case Chrome 70 onwards). To obtain your own token to run an origin trial, use the application form here.
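Because not every browser that loads your page will support threads, you may also want a runtime check before loading the threaded module. Here's a hedged sketch (test-singlethreaded.js is a hypothetical fallback build, not part of this article's example):

// Allocating a shared WebAssembly.Memory throws if threads are unsupported.
function wasmThreadsSupported() {
  if (typeof SharedArrayBuffer === 'undefined') return false;
  try {
    new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
    return true;
  } catch (e) {
    return false;
  }
}

const script = document.createElement('script');
script.src = wasmThreadsSupported() ? 'test.js' : 'test-singlethreaded.js';
document.body.appendChild(script);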

We've hosted our simple example above using an origin trial token, so you can try it out for yourself without needing to build anything.

If you want to see what 4 threads running in parallel can do for ASCII art, take a look at this demo as well!

Give us feedback

WebAssembly threads are an extremely useful new primitive for porting applications to the web. It's now possible to run C and C++ applications and libraries which require pthreads support in the WebAssembly environment.

We're looking for feedback from developers trying out this feature as it'll help inform the standardization process as well as validate its usefulness. The best way to send feedback is to report issues and/or get involved with the standardization process in the WebAssembly Community Group.

Signed HTTP Exchanges

TL;DR We are starting an origin trial for Signed HTTP Exchange in Chrome 71, and we’d love to hear your feedback.

Signed HTTP Exchange (or "SXG") is a subset of the emerging technology called Web Packages, which enables publishers to safely make their content portable, i.e. available for redistribution by other parties, while still keeping the content’s integrity and attribution. Portable content has many benefits, from enabling faster content delivery to facilitating content sharing between users, and simpler offline experiences.

So, how do Signed HTTP Exchanges work? This technology allows a publisher to sign a single HTTP exchange (i.e., a request/response pair) in such a way that the signed exchange can be served from any caching server. When the browser loads this signed exchange, it can safely show the publisher’s URL in the address bar because the signature in the exchange is sufficient proof that the content originally came from the publisher’s origin.

Signed Exchange: The essence

This decouples the origin of the content from who distributes it. Your content can be published on the web, without relying on a specific server, connection, or hosting service! We're excited about possible uses of SXG such as:

  • Privacy-preserving prefetching: While prefetching resources (e.g., by link rel=prefetch) for a subsequent navigation can make the navigation feel a lot faster, it also has privacy downsides. For instance, prefetching resources for cross-origin navigations will disclose to the destination site that the user is potentially interested in a piece of information even if the user ultimately didn’t visit the site. On the other hand, SXG allows for prefetching cross-origin resources from a fast cache without ever reaching out to the destination site, thereby only communicating user interest if and when the navigation occurs. We believe that this can be useful for sites whose goal is to send their users to other websites. In particular, Google plans to use this on Google search result pages to improve AMP URLs and speed up clicks on search results.

  • Benefits of a CDN without ceding control of your certificate private key: Content that has suddenly become popular (e.g. linked from reddit.com's first page) often overloads the site where the content is served, and if the site is relatively small, it tends to slow down or even temporarily become unavailable. This situation can be avoided if the content is shared using fast, powerful cache servers, and SXG makes this possible without sharing your TLS keys.

Trying out Signed Exchanges

Chrome is experimenting with SXG and it is going to be available as an origin trial starting in Chrome 71. The origin trial will allow you to temporarily enable the feature for the users on your website. The experiment process is temporary and iterative, and the feature may be changed before it becomes shippable.

There will be two types of participants for this experiment:

  • Content publishers: If you want to create SXGs for your content to share them with aggregators and cache operators (who can collect and serve SXGs), you do NOT need to participate in the origin trial. Instead, you will need your origin’s certificate to sign the SXGs. If you belong to this group, skip to the Creating your SXG section below.

  • Content servers: If you want to host SXGs created by publishers on their behalf, you can participate in the origin trial to have the SXGs processed by Chrome without requiring your users to turn on a flag. If you belong to this group, keep reading the Participate in the Origin Trial section below.

Participate in the Origin Trial

If you want to serve SXGs on your own site, please follow these instructions:

  • To serve SXGs on your site and have them processed by Chrome: Request a token for your origin via this form and configure your site to send the provided token as an Origin-Trial HTTP header (a minimal server sketch follows below). Note that you need to send the token via HTTP headers, i.e. using <meta http-equiv> will not work for the SXG origin trial.

  • If you'd like Chrome to advertise support for the trial via Accept: application/signed-exchange HTTP header: Send an email with the subject "SXG Accept Header Sign-up: " to webpackage-ot-application@chromium.org. Please also indicate if you want all subdomains of the origin included. We will get back to you with the necessary procedure, and process your request within 5 business days. Note that this process requires that you have permission to modify the DNS entry of the requested origin for validation.

You do not need to sign up for both, but if your site wants to rely on the Accept header for feature detection, consider applying for the latter too.
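As a minimal sketch of sending the token as an HTTP header, here is a plain Node.js server (the token string is a placeholder for the one issued for your origin):

const http = require('http');

http.createServer((req, res) => {
  // Send the origin trial token with every response from this origin.
  res.setHeader('Origin-Trial', 'YOUR_ORIGIN_TRIAL_TOKEN'); // placeholder
  res.setHeader('Content-Type', 'text/html');
  res.end('<!DOCTYPE html><title>SXG origin trial</title>');
}).listen(8080);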

Please note that an origin trial will be globally shut off if its usage exceeds 0.5% of all Chrome page loads, so large sites should target a small fraction of their users.

Creating your SXG

In order to create SXGs for your origin (as a publisher), you need a certificate and its private key to sign the exchange, and the certificate must have a special "CanSignHttpExchanges" extension to be processed as a valid SXG. As of November 2018, DigiCert is the only CA that supports this extension, and you can request a certificate that works for SXG from this page.

Once you get a certificate for SXG, you can create your own SXGs using the reference generator tools published on GitHub.

You can also take a look at the actual SXG example files in Chrome’s code repository (e.g. this one is the simplest, created for a simple text file). Note that they are generated primarily for local testing; do not expect them to have valid certificates and timestamps in the signature.

Testing the Feature Locally

For development purposes, you can also enable the feature locally via chrome://flags/#enable-signed-http-exchange.

For creating SXGs for testing purposes, you can create a self-signed certificate and enable chrome://flags/#allow-sxg-certs-without-extension to have your Chrome process the SXGs created with the certificate without the special extension.

Code like the following should work if your server, certificate, and SXGs are correctly set up:

<!-- prefetch the sample.sxg -->
<link rel="prefetch" href="https://your-site.com/sample.sxg">

<!-- clicking the link below should make Chrome navigate to the inner
     response of sample.sxg (and the prefetched SXG is used) -->
<a href="https://your-site.com/sample.sxg">Sample</a>

Note that SXG is only supported by the anchor tag (<a>) and link rel=prefetch in Chrome M71. Also note that the signature’s validity is capped at 7 days per the spec, so your signed content will expire relatively quickly.

Providing Feedback

We are keen to hear your feedback on this experiment at webpackage-dev@chromium.org. You can also join the spec discussion, or report a chrome bug to the team. Your feedback will greatly help the standardization process and also help us address implementation issues.

Capabilities

There are some capabilities, like file system access, idle detection, and more, that are available to native apps but aren’t available on the web. These missing capabilities mean some types of apps can’t be delivered on the web, or are less useful.

We strongly believe that every developer should have access to the capabilities they need to make a great web experience, and we are committed to a more capable web.

We want to close the capability gap between the web and native and make it easy for developers to build great experiences on the open web. We plan to design and develop these new capabilities in an open and transparent way, using the existing open web platform standards processes while getting early feedback from developers and other browser vendors as we iterate on the design, to ensure an interoperable design.

In flight

Capability: Writable Files API
Description: The writable files API is designed to increase interoperability of web applications with native applications, making it possible for users to choose files or directories that a web app can interact with on the native file system, without having to use a native wrapper like Electron to ship your web app.

See the full list of capabilities, including the backlog of ones we haven't started working on yet.

How will we design & implement these new capabilities?

We developed this process to make it possible to design and develop new web platform capabilities that meet the needs of developers quickly, in the open, and most importantly, work within the existing standards process. It’s no different than how we develop every other web platform feature, but it puts an emphasis on developer feedback.

Developer feedback is critical to help us ensure we’re shipping the right features, but when it comes in late in the process, it can be hard to change course. That’s why we’re starting to ask for feedback earlier. When actionable technical and use-case feedback comes in early, it’s easier to course correct or even stop development, without having shipped poorly thought out or badly implemented features. Features being developed at WICG are not set in stone, and your input can make a big difference in how they evolve.

It’s worth noting that many ideas never make it past the explainer or origin trial stage. The goal of the process is to ship the right feature. That means we need to learn and iterate quickly. Not shipping a feature because it doesn’t solve the developer need is OK. To enable this learning, we have come to employ the following process (although there is frequently some re-ordering of later steps due to feedback):

Identify the developer need

The first step is to identify and understand the developer need. What is the developer trying to accomplish? Who would use it? How are they doing it today? And what problems or frustrations would be fixed by this new capability? Typically, these come in as feature requests from developers, frequently through bugs filed on bugs.chromium.org.

Create an explainer

After identifying the need for a new capability, create an explainer. The explainer should have enough detail to identify the problem the new capability solves and to help people understand the scope of the problem. The explainer is a living design document that will go through heavy iteration as the new capability evolves.

Get feedback and iterate on the explainer

Once the explainer has a reasonable level of clarity, it’s time to publicize it, to solicit feedback, and iterate on the design. This is an opportunity to verify the new capability meets the needs of developers and works in a way that they expect. This is also an opportunity to gather public support and verify that there really is a need for this capability.

Move the design to a specification & iterate

At this point, the design work will transition into the standards process, creating a formal specification, working with developers and other browser vendors to iterate and improve on the design.

As the design begins to stabilize, an origin trial might be helpful. Origin trials provide a means to safely experiment with new web platform features in Chrome and help to verify the proposal solves the problem it set out to solve.

Ship it

Finally, after the spec has been finalized, the origin trial is complete and all of the steps and approvals from the Blink launch process have been completed, it’s time to ship it.

The Writable Files API: Simplifying local file access

What is the Writable Files API?

Today, if a user wants to edit a local file in a web app, the web app needs to ask the user to open the file. Then, after editing the file, the only way to save changes is to download the file to the Downloads folder, or to replace the original file by navigating the directory structure to find the original folder and file. This user experience leaves a lot to be desired, and it makes it hard to build web apps that access user files.

The writable files API is designed to increase interoperability of web applications with native applications, making it possible for users to choose files or directories that a web app can interact with on the native file system, without having to use a native wrapper like Electron to ship your web app.

With the Writable Files API, you could create a simple, single-file editor that opens a file, allows the user to edit it, and saves the changes back to the same file. Or you could build a multi-file editor, like an IDE or CAD-style application, where the user opens a project containing multiple files, usually together in the same directory. And there are plenty more possibilities.

Note: Want to see how this might be implemented? Check the explainer for some sample code.

Read explainer

Security considerations

The primary entry point for this API is a file picker, which ensures that the user is always in full control over what files and directories a website has access to. Every access to a user selected file (either reading or writing) is done through an asynchronous API, allowing the browser to potentially include additional prompting and/or permission checks.

The Writable Files API provides web developers with significant access to user data and has the potential to be abused. There are both privacy risks (for example, websites getting access to private data they weren’t supposed to have access to) and security risks (for example, websites being able to modify executables, encrypt user data, and so forth). The Writable Files API must be designed in such a way as to limit how much damage a website can do, and to make sure that the user understands what they’re giving the site access to.

Current status

Step                                        Status
1. Create explainer                         Complete
2. Create initial draft of specification    In progress
3. Gather feedback & iterate on design      In progress
4. Origin trial                             Not started
5. Launch                                   Not started

Feedback

We need your help to design the Writable Files API in a way that is useful, secure, and protective of user privacy.

  • Have an idea for a use case or an idea where you'd use it?
  • Are there types of files or directories you don’t expect to have access to?
  • Do you plan to use this?
  • Like it, and want to show your support?

Share your thoughts on the Writable Files WICG Discourse discussion.
