
The model-viewer web component

Adding 3D models to a website can be tricky. Ideally, 3D models are shown in a viewer that works responsively on all browsers, from smartphones to desktops to new head-mounted displays. The viewer should support progressive enhancement for performance, rendering quality, and use cases on all devices, ranging from older, lower-powered smartphones to newer devices that support augmented reality. It should stay up to date with current technologies. It should be performant and accessible. However, building such a viewer requires specialty 3D programming skills, and it can be a challenge for web developers who want to host their own models instead of using a third-party hosting service.

To help with that, we're introducing the <model-viewer> web component, which lets you declaratively add a 3D model to a web page while hosting the model on your own site. The web component supports responsive design and use cases like augmented reality on some devices, and we're adding features for accessibility, rendering quality, and interactivity. The goal of the component is to make it easy to add 3D models to your website without having to stay on top of the latest changes in the underlying technology and platforms.

What is a web component?

A web component is a custom HTML element built from standard web platform features. A web component behaves for all intents and purposes like a standard element. It has a unique tag, it can have properties and methods, and it can fire and respond to events. In short, you don't need to know anything special to use it. In this article, I will show you some things that are particular to <model-viewer>.

What can <model-viewer> do?

More specifically, what can it do now? I'll show you its current capabilities. You'll get a great experience today, and it will get better over time as we add new features and improve rendering quality. The examples I've provided are just to give you a sense of what it does. If you want to try them, there are installation and usage instructions in its GitHub repo.

Basic 3D models

Embedding a 3D model is as simple as the markup below. By using glTF files, we've ensured that this component will work on any major browser.

<model-viewer src="assets/Astronaut.gltf" alt="A 3D model of an astronaut"></model-viewer>

To see it in action, check out our demo hosted on Glitch.

By adding two attributes, I can also make the model rotate and allow users to control it.
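If you prefer to create the element from script, the same attributes can be set with standard DOM APIs. Here is a minimal sketch, using the attribute names from the markup in this article:

// Create a <model-viewer> element from script and opt into rotation and user controls.
const viewer = document.createElement('model-viewer');
viewer.setAttribute('src', 'assets/Astronaut.gltf');
viewer.setAttribute('alt', 'A 3D model of an astronaut');
viewer.setAttribute('auto-rotate', ''); // rotate the model
viewer.setAttribute('controls', '');    // let users orbit and zoom it
document.body.appendChild(viewer);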

Poster image/delayed loading

Some 3D models can be very large, so you might want to hold off loading them until the user has requested the model. For this, the component has a built-in means of delaying loading until the user wants it.

<model-viewer src="assets/Astronaut.gltf" controls auto-rotate
    poster="assets/poster2.png"></model-viewer>

To show your users that it's a 3D model, and not just an image, you can provide some preload animation by using script to switch between multiple posters.

<model-viewer id="toggle-poster" src="assets/Astronaut.gltf" controls
    auto-rotate poster="assets/poster2.png"></model-viewer>
<script>
  const posters = ['poster.png', 'poster2.png'];
  let i = 0;
  // Swap the poster every two seconds to hint that this is a 3D model, not a static image.
  setInterval(() =>
    document.querySelector('#toggle-poster')
      .setAttribute('poster', `assets/${posters[i++ % 2]}`), 2000);
</script>

Responsive Design

The component handles some types of responsive design, scaling for both mobile and desktop. It can also manage multiple instances on a page and uses Intersection Observer to conserve battery power and GPU cycles when a model isn't visible.

Looking Forward

Install <model-viewer> and give it a try. We want <model-viewer> to be useful to you, and we want your input on its future. That's not to say we don't have ideas of our own; you can find them on our project roadmap. So give it a try and let us know what you think by filing an issue on GitHub.

Using Trusted Web Activities

Last updated: February 6th, 2019

Trusted Web Activities are a new way to integrate your web-app content such as your PWA with your Android app using a protocol based on Custom Tabs.

Note: Trusted Web Activities are available in Chrome on Android, version 72 and above.

Looking for the code?

There are a few things that make Trusted Web Activities different from other ways to integrate web content with your app:

  1. Content in a Trusted Web activity is trusted -- the app and the site it opens are expected to come from the same developer. (This is verified using Digital Asset Links.)
  2. Trusted Web activities come from the web: they're rendered by the user's browser in exactly the same way as they would appear if the user visited them in the browser, except that they run fullscreen. Web content should be accessible and useful in the browser first.
  3. Browsers are also updated independent of Android and your app -- Chrome, for example, is available back to Android Jelly Bean. That saves on APK size and ensures you can use a modern web runtime. (Note that since Lollipop, WebView has also been updated independent of Android, but there are a significant number of pre-Lollipop Android users.)
  4. The host app doesn't have direct access to web content in a Trusted Web activity or any other kind of web state, like cookies and localStorage. Nevertheless, you can coordinate with the web content by passing data to and from the page in URLs (e.g. through query parameters, custom HTTP headers, and intent URIs); see the sketch after this list.
  5. Transitions between web and native content are between activities. Each activity (i.e. screen) of your app is either completely provided by the web, or by an Android activity.
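As an example of point 4, the web side can read data the host app appended to the launch URL. A small sketch, where the parameter name is purely illustrative:

// Read a value the Android app passed via a query parameter on the launch URL.
const params = new URLSearchParams(window.location.search);
const source = params.get('launch_source'); // hypothetical parameter name
if (source === 'android-app') {
  console.log('Opened from the Trusted Web Activity');
}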

To make it easier to test, there are currently no qualifications for content opened in the preview of Trusted Web activities. You can expect, however, that Trusted Web activities will need to meet the same Add to Home Screen requirements. You can audit your site for these requirements using the Lighthouse "user can be prompted to Add to Home screen" audit.

Today, if the user's version of Chrome doesn't support Trusted Web activities, Chrome will fall back to a simple toolbar using a Custom Tab. It is also possible for other browsers to implement the same protocol that Trusted Web activities use. While the host app has the final say on what browser gets opened, we recommend the same policy as for Custom Tabs: use the user's default browser, so long as that browser provides the required capabilities.

Getting started

Setting up a Trusted Web Activity (TWA) doesn’t require developers to author Java code, but Android Studio is required. This guide was created using Android Studio 3.3. Check the docs on how to install it.

Create a Trusted Web Activity Project

When using Trusted Web Activities, the project must target API 16 or higher.

Note: This section will guide you on setting up a new project on Android Studio. If you are already familiar with the tool feel free to skip to the Getting the TWA Library section.

Open Android Studio and click on Start a new Android Studio project.

Android Studio will prompt to choose an Activity type. Since TWAs use an Activity provided by the support library, choose Add No Activity and click Next.

In the next step, the wizard will prompt for the project's configuration. Here's a short description of each field:

  • Name: The name that will be used for your application on the Android Launcher.
  • Package Name: A unique identifier for Android Applications on the Play Store and on Android devices. Check the documentation for more information on requirements and best practices for creating package names for Android apps.
  • Save location: Where Android Studio will create the project in the file system.
  • Language: The project doesn't require writing any Java or Kotlin code. Select Java, as the default.
  • Minimum API Level: The Support Library requires at least API Level 16. Select API 16 or any version above.

Leave the remaining checkboxes unchecked, as we will not be using Instant Apps or AndroidX artifacts, and click Finish.

Get the TWA Support Library

To set up the TWA library in the project you will need to edit a couple of files. Look for the Gradle Scripts section in the Project Navigator. Both files are called build.gradle, which may be a bit confusing, but the descriptions in parentheses help identify the correct one.

The first file is the Project level build.gradle. Look for the one with your project name next to it.

Add the Jitpack configuration to the list of repositories under the allprojects section:

allprojects {
   repositories {
       google()
       jcenter()
       maven { url "https://jitpack.io" }
   }
}

Android Studio will prompt to synchronize the project. Click on the Sync Now link.

Note: The support library for Trusted Web Activities will be part of Jetpack in the future, and the previous step won’t be required anymore.

The second file we need to change is the Module level build.gradle.

The Trusted Web Activities library uses Java 8 features, so the first change is to enable Java 8. Add a compileOptions section to the bottom of the android section, as below:

android {
        ...
    compileOptions {
       sourceCompatibility JavaVersion.VERSION_1_8
       targetCompatibility JavaVersion.VERSION_1_8
    }
}

The next step will add the TWA Support Library to the project. Add a new dependency to the dependencies section:

dependencies {
   implementation 'com.github.GoogleChrome.custom-tabs-client:customtabs:3a71a75c9f'
}

Android Studio will show a prompt asking to synchronize the project once more. Click on the Sync Now link.

Add the TWA Activity

Setting up the TWA Activity is achieved by editing the Android App Manifest.

In the Project Navigator, expand the app section, followed by manifests, and double-click AndroidManifest.xml to open the file.

Since we asked Android Studio not to add any Activity to our project when creating it, the manifest is empty and contains only the application tag.

Add the TWA Activity by inserting an activity tag into the application tag:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    package="com.example.twa.myapplication">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme"
        tools:ignore="GoogleAppIndexingWarning">
        <activity
            android:name="android.support.customtabs.trusted.LauncherActivity">

           <!-- Edit android:value to change the url opened by the TWA -->
           <meta-data
               android:name="android.support.customtabs.trusted.DEFAULT_URL"
               android:value="https://airhorner.com" />

           <!-- This intent-filter adds the TWA to the Android Launcher -->
           <intent-filter>
               <action android:name="android.intent.action.MAIN" />
               <category android:name="android.intent.category.LAUNCHER" />
           </intent-filter>

           <!--
             This intent-filter allows the TWA to handle Intents to open
             airhorner.com.
           -->
           <intent-filter>
               <action android:name="android.intent.action.VIEW"/>
               <category android:name="android.intent.category.DEFAULT" />
               <category android:name="android.intent.category.BROWSABLE"/>

               <!-- Edit android:host to handle links to the target URL-->
               <data
                 android:scheme="https"
                 android:host="airhorner.com"/>
           </intent-filter>
        </activity>
    </application>
</manifest>

The tags added to the XML are standard Android App Manifest tags. There are two relevant pieces of information for the context of Trusted Web Activities:

  1. The meta-data tag tells the TWA Activity which URL it should open. Change the android:value attribute with the URL of the PWA you want to open. In this example, it is https://airhorner.com.
  2. The second intent-filter tag allows the TWA to intercept Android Intents that open https://airhorner.com. The android:host attribute inside the data tag must point to the domain being opened by the TWA.

Note: When running the project at this stage, the URL Bar from Custom Tabs will still show on the top of the screen. This is not a bug.

The next section will show how to set up Digital Asset Links to verify the relationship between the website and the app, and remove the URL bar.

Remove the URL bar

Trusted Web Activities require an association between the Android application and the website to be established to remove the URL bar.

This association is created via Digital Asset Links and the association must be established in both ways, linking from the app to the website and from the website to the app.

For debugging purposes, it is possible to set up the app-to-website validation and configure Chrome to skip the website-to-app validation.

Establish an association from app to the website

Open the string resources file app > res > values > strings.xml and add the Digital AssetLinks statement below:

<resources>
    <string name="app_name">AirHorner TWA</string>
    <string name="asset_statements">
        [{
            \"relation\": [\"delegate_permission/common.handle_all_urls\"],
            \"target\": {
                \"namespace\": \"web\",
                \"site\": \"https://airhorner.com\"}
        }]
    </string>
</resources>

Change the contents of the site attribute to match the scheme and domain opened by the TWA.

Back in the Android App Manifest file, AndroidManifest.xml, link to the statement by adding a new meta-data tag, but this time as a child of the application tag:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.twa.myapplication">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">

        <meta-data
            android:name="asset_statements"
            android:resource="@string/asset_statements" />

        <activity>
            ...
        </activity>

    </application>
</manifest>

We have now established a relationship from the Android application to the website. It is helpful to debug this part of the relationship before creating the website-to-application validation.

Here’s how to test this on a development device:

Enable debug mode

  1. Open Chrome on the development device, navigate to chrome://flags, search for an item called Enable command line on non-rooted devices and change it to ENABLED and then restart the browser.
  2. Next, on the Terminal application of your operating system, use the Android Debug Bridge (installed with Android Studio), and run the following command:
adb shell "echo '_ --disable-digital-asset-link-verification-for-url=\"https://airhorner.com\"' > /data/local/tmp/chrome-command-line"

Close Chrome and re-launch your application from Android Studio. The application should now be shown in full-screen.

Note: It may be necessary to force-close Chrome so that it restarts with the correct command line. Go to Android Settings > Apps & notifications > Chrome, and click on Force stop.

Establish an association from the website to the app

There are two pieces of information that the developer needs to collect from the app in order to create the association:

  • Package Name: The first piece of information is the package name for the app. This is the same package name generated when creating the app. It can also be found inside the Module build.gradle, under Gradle Scripts > build.gradle (Module: app), and is the value of the applicationId attribute.
  • SHA-256 Fingerprint: Android applications must be signed in order to be uploaded to the Play Store. The same signature is used to establish the connection between the website and the app through the SHA-256 fingerprint of the upload key.

The Android documentation explains in detail how to generate a key using Android Studio. Make sure to take note of the path, alias, and passwords for the key store, as you will need them for the next step.

Extract the SHA-256 fingerprint using the keytool, with the following command:

keytool -list -v -keystore [keystore-path] -alias [key-alias] -storepass [keystore-password] -keypass [key-password]

The value for the SHA-256 fingerprint is printed under the Certificate fingerprints section. Here’s an example output:

keytool -list -v -keystore ./mykeystore.ks -alias test -storepass password -keypass password

Alias name: key0
Creation date: 28 Jan 2019
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=Test Test, OU=Test, O=Test, L=London, ST=London, C=GB
Issuer: CN=Test Test, OU=Test, O=Test, L=London, ST=London, C=GB
Serial number: ea67d3d
Valid from: Mon Jan 28 14:58:00 GMT 2019 until: Fri Jan 22 14:58:00 GMT 2044
Certificate fingerprints:
     SHA1: 38:03:D6:95:91:7C:9C:EE:4A:A0:58:43:A7:43:A5:D2:76:52:EF:9B
     SHA256: F5:08:9F:8A:D4:C8:4A:15:6D:0A:B1:3F:61:96:BE:C7:87:8C:DE:05:59:92:B2:A3:2D:05:05:A5:62:A5:2F:34
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

With both pieces of information at hand, head over to the assetlinks generator, fill in the fields, and hit Generate Statement. Copy the generated statement and serve it from your domain at the URL /.well-known/assetlinks.json.

Note: The AssetLinks file must be under /.well-known/assetlinks.json, at the root of the domain, as that's the only place Chrome will look for it.

Wrapping Up

With the assetlinks file in place in your domain and the asset_statements tag configured in the Android application, the next step is generating a signed app. Again, the steps for this are widely documented.

The output APK can be installed into a test device, using adb:

adb install app-release.apk

If the verification step fails, it is possible to check for error messages using the Android Debug Bridge, from your OS's terminal and with the test device connected:

adb logcat | grep -e OriginVerifier -e digital_asset_links

With the upload APK generated, you can now upload the app to the Play Store.

We are looking forward to seeing what developers build with Trusted Web Activities. To share feedback, reach out to us at @ChromiumDev.

Rendering on the Web

As developers, we are often faced with decisions that will affect the entire architecture of our applications. One of the core decisions web developers must make is where to implement logic and rendering in their application. This can be a difficult decision, since there are a number of different ways to build a website.

Our understanding of this space is informed by our work in Chrome talking to large sites over the past few years. Broadly speaking, we would encourage developers to consider server rendering or static rendering over a full rehydration approach.

In order to better understand the architectures we’re choosing from when we make this decision, we need to have a solid understanding of each approach and consistent terminology to use when speaking about them. The differences between these approaches help illustrate the trade-offs of rendering on the web through the lens of performance.

Terminology

Rendering

  • SSR: Server-Side Rendering - rendering a client-side or universal app to HTML on the server.
  • CSR: Client-Side Rendering - rendering an app in a browser, generally using the DOM.
  • Rehydration: “booting up” JavaScript views on the client such that they reuse the server-rendered HTML’s DOM tree and data.
  • Prerendering: running a client-side application at build time to capture its initial state as static HTML.

Performance

  • TTFB: Time to First Byte - seen as the time between clicking a link and the first bit of content coming in.
  • FP: First Paint - the first time any pixel becomes visible to the user.
  • FCP: First Contentful Paint - the time when requested content (article body, etc.) becomes visible.
  • TTI: Time To Interactive - the time at which a page becomes interactive (events wired up, etc).

Server Rendering

Server rendering generates the full HTML for a page on the server in response to navigation. This avoids additional round-trips for data fetching and templating on the client, since it’s handled before the browser gets a response.

Server rendering generally produces a fast First Paint (FP) and First Contentful Paint (FCP). Running page logic and rendering on the server makes it possible to avoid sending lots of JavaScript to the client, which helps achieve a fast Time to Interactive (TTI). This makes sense, since with server rendering you’re really just sending text and links to the user’s browser. This approach can work well for a large spectrum of device and network conditions, and opens up interesting browser optimizations like streaming document parsing.

Diagram showing server rendering and JS execution affecting FCP and TTI

With server rendering, users are unlikely to be left waiting for CPU-bound JavaScript to process before they can use your site. Even when third-party JS can’t be avoided, using server rendering to reduce your own first-party JS costs can give you more "budget" for the rest. However, there is one primary drawback to this approach: generating pages on the server takes time, which can often result in a slower Time to First Byte (TTFB).

Whether server rendering is enough for your application largely depends on what type of experience you are building. There is a longstanding debate over the correct applications of server rendering versus client-side rendering, but it’s important to remember that you can opt to use server rendering for some pages and not others. Some sites have adopted hybrid rendering techniques with success. Netflix server-renders its relatively static landing pages, while prefetching the JS for interaction-heavy pages, giving these heavier client-rendered pages a better chance of loading quickly.

Many modern frameworks, libraries and architectures make it possible to render the same application on both the client and the server. These techniques can be used for Server Rendering, however it’s important to note that architectures where rendering happens both on the server and on the client are their own class of solution with very different performance characteristics and tradeoffs. React users can use renderToString() or solutions built atop it like Next.js for server rendering. Vue users can look at Vue’s server rendering guide or Nuxt. Angular has Universal. Most popular solutions employ some form of hydration though, so be aware of the approach in use before selecting a tool.
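For concreteness, here is a minimal sketch of the React flavour of this, using renderToString() behind an Express route. The Express setup and the App component are assumptions for illustration, not part of any particular project:

import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App.js'; // hypothetical root component

const server = express();
server.get('*', (req, res) => {
  // Render the whole tree to an HTML string on the server for this navigation.
  const html = renderToString(React.createElement(App));
  res.send(`<!doctype html><div id="root">${html}</div>`);
});
server.listen(3000);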

Static Rendering

Static rendering happens at build-time and offers a fast First Paint, First Contentful Paint and Time To Interactive - assuming the amount of client-side JS is limited. Unlike Server Rendering, it also manages to achieve a consistently fast Time To First Byte, since the HTML for a page doesn’t have to be generated on the fly. Generally, static rendering means producing a separate HTML file for each URL ahead of time. With HTML responses being generated in advance, static renders can be deployed to multiple CDNs to take advantage of edge-caching.

Diagram showing static rendering and optional JS execution affecting FCP
and TTI

Solutions for static rendering come in all shapes and sizes. Tools like Gatsby are designed to make developers feel like their application is being rendered dynamically rather than generated as a build step. Others like Jekyll and Metalsmith embrace their static nature, providing a more template-driven approach.

One of the downsides to static rendering is that individual HTML files must be generated for every possible URL. This can be challenging or even infeasible when you can't predict what those URLs will be ahead of time, or for sites with a large number of unique pages.

React users may be familiar with Gatsby, Next.js static export or Navi - all of these make it convenient to author using components. However, it’s important to understand the difference between static rendering and prerendering: static rendered pages are interactive without the need to execute much client-side JS, whereas prerendering improves the First Paint or First Contentful Paint of a Single Page Application that must be booted on the client in order for pages to be truly interactive.

If you’re unsure whether a given solution is static rendering or prerendering, try this test: disable JavaScript and load the created web pages. For statically rendered pages, most of the functionality will still exist without JavaScript enabled. For prerendered pages, there may still be some basic functionality like links, but most of the page will be inert.

Another useful test is to slow your network down using Chrome DevTools, and observe how much JavaScript has been downloaded before a page becomes interactive. Prerendering generally requires more JavaScript to get interactive, and that JavaScript tends to be more complex than the Progressive Enhancement approach used by static rendering.

Server Rendering vs Static Rendering

Server rendering is not a silver bullet - its dynamic nature can come with significant compute overhead costs. Many server rendering solutions don't flush early, can delay TTFB or double the data being sent (e.g. inlined state used by JS on the client). In React, renderToString() can be slow as it's synchronous and single-threaded. Getting server rendering "right" can involve finding or building a solution for component caching, managing memory consumption, applying memoization techniques, and many other concerns. You're generally processing/rebuilding the same application multiple times - once on the client and once on the server. Just because server rendering can make something show up sooner doesn't suddenly mean you have less work to do.

Server rendering produces HTML on-demand for each URL but can be slower than just serving static rendered content. If you can put in the additional leg-work, server rendering + HTML caching can massively reduce server render time. The upside to server rendering is the ability to pull more "live" data and respond to a more complete set of requests than is possible with static rendering. Pages requiring personalization are a concrete example of the type of request that would not work well with static rendering.
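As a sketch of what "server rendering + HTML caching" can look like, here is a naive per-URL cache in front of a renderer; the Express route and the renderPageToHtml() helper are hypothetical stand-ins:

import express from 'express';

const server = express();
const htmlCache = new Map(); // naive per-URL cache: no invalidation or size limit

server.get('*', async (req, res) => {
  let html = htmlCache.get(req.url);
  if (!html) {
    html = await renderPageToHtml(req.url); // hypothetical: whatever produces the markup
    htmlCache.set(req.url, html);
  }
  res.send(html);
});
server.listen(3000);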

Server rendering can also present interesting decisions when building a PWA. Is it better to use full-page service worker caching, or just server-render individual pieces of content?

Client-Side Rendering (CSR)

Client-side rendering (CSR) means rendering pages directly in the browser using JavaScript. All logic, data fetching, templating and routing are handled on the client rather than the server.

Client-side rendering can be difficult to get and keep fast for mobile. It can approach the performance of pure server-rendering if doing minimal work, keeping a tight JavaScript budget and delivering value in as few RTTs as possible. Critical scripts and data can be delivered sooner using HTTP/2 Server Push or <link rel=preload>, which gets the parser working for you sooner. Patterns like PRPL are worth evaluating in order to ensure initial and subsequent navigations feel instant.

Diagram showing client-side rendering affecting FCP and TTI

The primary downside to Client-Side Rendering is that the amount of JavaScript required tends to grow as an application grows. This becomes especially difficult with the addition of new JavaScript libraries, polyfills and third-party code, which compete for processing power and must often be processed before a page’s content can be rendered. Experiences built with CSR that rely on large JavaScript bundles should consider aggressive code-splitting, and be sure to lazy-load JavaScript - "serve only what you need, when you need it". For experiences with little or no interactivity, server rendering can represent a more scalable solution to these issues.
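One common way to do that lazy-loading is dynamic import(). A small sketch, in which the module path, export, and button are hypothetical:

// Load the heavy feature only when the user actually asks for it.
const button = document.querySelector('#edit-button'); // hypothetical trigger element
button.addEventListener('click', async () => {
  const { openEditor } = await import('./heavy-editor.js'); // hypothetical module
  openEditor();
});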

For folks building a Single Page Application, identifying core parts of the User Interface shared by most pages means you can apply the Application Shell caching technique. Combined with service workers, this can dramatically improve perceived performance on repeat visits.

Combining server rendering and CSR via rehydration

Often referred to as Universal Rendering or simply “SSR”, this approach attempts to smooth over the trade-offs between Client-Side Rendering and Server Rendering by doing both. Navigation requests like full page loads or reloads are handled by a server that renders the application to HTML, then the JavaScript and data used for rendering is embedded into the resulting document. When implemented carefully, this achieves a fast First Contentful Paint just like Server Rendering, then “picks up” by rendering again on the client using a technique called (re)hydration. This is a novel solution, but it can have some considerable performance drawbacks.

The primary downside of SSR with rehydration is that it can have a significant negative impact on Time To Interactive, even if it improves First Paint. SSR’d pages often look deceptively loaded and interactive, but can’t actually respond to input until the client-side JS is executed and event handlers have been attached. This can take seconds or even minutes on mobile.

Perhaps you've experienced this yourself - for a period of time after it looks like a page has loaded, clicking or tapping does nothing. This quickly becomes frustrating... "Why is nothing happening? Why can't I scroll?"

A Rehydration Problem: One App for the Price of Two

Rehydration issues can often be worse than delayed interactivity due to JS. In order for the client-side JavaScript to be able to accurately “pick up” where the server left off without having to re-request all of the data the server used to render its HTML, current SSR solutions generally serialize the response from a UI’s data dependencies into the document as script tags. The resulting HTML document contains a high level of duplication:

HTML document
containing serialized UI, inlined data and a bundle.js script

As you can see, the server is returning a description of the application’s UI in response to a navigation request, but it’s also returning the source data used to compose that UI, and a complete copy of the UI’s implementation which then boots up on the client. Only after bundle.js has finished loading and executing does this UI become interactive.

Performance metrics collected from real websites using SSR rehydration indicate its use should be heavily discouraged. Ultimately, the reason comes down to User Experience: it's extremely easy to end up leaving users in an “uncanny valley”.

Diagram showing client rendering negatively affecting TTI

There’s hope for SSR with rehydration, though. In the short term, only using SSR for highly cacheable content can reduce the TTFB delay, producing similar results to prerendering. Rehydrating incrementally, progressively, or partially may be the key to making this technique more viable in the future.

Streaming server rendering and Progressive Rehydration

Server rendering has had a number of developments over the last few years.

Streaming server rendering allows you to send HTML in chunks that the browser can progressively render as it's received. This can provide a fast First Paint and First Contentful Paint as markup arrives to users faster. In React, renderToNodeStream() being asynchronous - compared to the synchronous renderToString() - means backpressure is handled well.
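A minimal sketch of what that looks like in practice, again assuming Express and a hypothetical App component:

import express from 'express';
import React from 'react';
import { renderToNodeStream } from 'react-dom/server';
import App from './App.js'; // hypothetical root component

const server = express();
server.get('*', (req, res) => {
  res.write('<!doctype html><div id="root">');
  const stream = renderToNodeStream(React.createElement(App));
  stream.pipe(res, { end: false }); // markup is flushed in chunks as it's produced
  stream.on('end', () => res.end('</div>'));
});
server.listen(3000);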

Progressive rehydration is also worth keeping an eye on, and something React has been exploring. With this approach, individual pieces of a server-rendered application are “booted up” over time, rather than the current common approach of initializing the entire application at once. This can help reduce the amount of JavaScript required to make pages interactive, since client-side upgrading of low priority parts of the page can be deferred to prevent blocking the main thread. It can also help avoid one of the most common SSR Rehydration pitfalls, where a server-rendered DOM tree gets destroyed and then immediately rebuilt - most often because the initial synchronous client-side render required data that wasn’t quite ready, perhaps awaiting Promise resolution.

Partial Rehydration

Partial rehydration has proven difficult to implement. This approach is an extension of the idea of progressive rehydration, where the individual pieces (components / views / trees) to be progressively rehydrated are analyzed and those with little interactivity or no reactivity are identified. For each of these mostly-static parts, the corresponding JavaScript code is then transformed into inert references and decorative functionality, reducing their client-side footprint to near-zero.

The partial hydration approach comes with its own issues and compromises. It poses some interesting challenges for caching, and client-side navigation means we can't assume server-rendered HTML for inert parts of the application will be available without a new request.

Trisomorphic Rendering

If service workers are an option for you, "trisomorphic" rendering may also be of interest. It's a technique where you can use streaming server rendering for initial/non-JS navigations, and then have your service worker take on rendering of HTML for navigations after it has been installed. This can keep cached components and templates up to date and enables SPA-style navigations for rendering new views in the same session. This approach works best when you can share the same templating and routing code between the server, the client page, and the service worker.

Diagram of Trisomorphic rendering, showing a browser and service worker
communicating with the server

Wrapping up...

When deciding on an approach to rendering, measure and understand what your bottlenecks are. Consider whether static rendering or server rendering can get you 90% of the way there. It's perfectly okay to mostly ship HTML with minimal JS to get an experience interactive. Here’s a handy infographic showing the server-client spectrum:

Infographic showing the spectrum of options described in this article

Credits

Thanks to everyone for their reviews and inspiration:

Jeffrey Posnick, Houssein Djirdeh, Shubhie Panicker, Chris Harrelson, and Sebastian Markbåge

Audio/Video Updates in Chrome 73

In this article, I'll discuss the new media features in Chrome 73, which include hardware media key support, an HDCP policy check for encrypted media, origin trials for Auto Picture-in-Picture and for a "Skip Ad" button in the Picture-in-Picture window, and autoplay for installed desktop PWAs.

Hardware Media Keys support

Many keyboards nowadays have keys to control basic media playback functions such as play/pause, previous and next track. Headsets have them too. Until now, desktop users couldn’t use these media keys to control audio and video playback in Chrome. This changes today!

Keyboard media keys
Figure 1. Keyboard media keys

If the user presses the pause key, the active media element playing in Chrome will be paused and receive a "pause" media event. If the play key is pressed, the previously paused media element will be resumed and receive a "play" media event. It works whether Chrome is in the foreground or background.

In Chrome OS, Android apps using audio focus will now tell Chrome to pause and resume audio to create a seamless media experience between websites on Chrome, Chrome Apps, and Android Apps. This is currently supported only on Chrome OS devices running Android P.

In short, it’s a good practice to always listen to these media events and act accordingly.

video.addEventListener('pause', function() {
  // Video is now paused.
  // TODO: Let's update UI accordingly.
});

video.addEventListener('play', function() {
  // Video is now playing.
  // TODO: Let's update UI accordingly.
});

But wait, there’s more! With the Media Session API now available on desktop (it was supported on mobile only before), web developers can handle media related events such as track changing that are triggered by media keys. The events previoustrack, nexttrack, seekbackward, and seekforward are currently supported.

navigator.mediaSession.setActionHandler('previoustrack', function() {
  // User hit "Previous Track" key.
});

navigator.mediaSession.setActionHandler('nexttrack', function() {
  // User hit "Next Track" key.
});

navigator.mediaSession.setActionHandler('seekbackward', function() {
  // User hit "Seek Backward" key.
});

navigator.mediaSession.setActionHandler('seekforward', function() {
  // User hit "Seek Forward" key.
});

Play and pause keys are handled automatically by Chrome. However, if the default behavior doesn't work for you, you can still set action handlers for "play" and "pause" to override it.

navigator.mediaSession.setActionHandler('play', function() {
  // User hit "Play" key.
});

navigator.mediaSession.setActionHandler('pause', function() {
  // User hit "Pause" key.
});

Hardware Media Keys support is available on Chrome OS, macOS, and Windows. Linux will come later.

Note: Setting some media session metadata such as the title, artist, album name, and artwork with the Media Session API is available but not hooked up to desktop notifications yet. It will come to supported platforms later.
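Setting that metadata is already straightforward. A small sketch, where the titles and artwork URL are illustrative values:

if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'Episode 42',   // illustrative values
    artist: 'A Podcast',
    album: 'Season 3',
    artwork: [{ src: 'https://example.com/cover-512.png', sizes: '512x512', type: 'image/png' }],
  });
}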

Check out our existing developer documentation and try out the official Media Session samples.

Chromestatus Tracker | Chromium Bug

Encrypted Media: HDCP Policy Check

Thanks to the HDCP Policy Check API, web developers can now query whether a specific policy, e.g. HDCP requirement, can be enforced before requesting Widevine licenses, and loading media.

const status = await video.mediaKeys.getStatusForPolicy({ minHdcpVersion: '2.2' });

if (status == 'usable')
  console.log('HDCP 2.2 can be enforced.');

The API is available on all platforms. However, the actual policy checks might not be available on certain platforms. For example, the HDCP policy check promise will reject with a NotSupportedError on Android and Android WebView.
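Here is a hedged sketch that probes a few versions and tolerates the rejection described above; it assumes the same video element as the snippet earlier, with mediaKeys already set up through Encrypted Media Extensions:

async function probeHdcpVersions(video) {
  for (const minHdcpVersion of ['1.0', '1.4', '2.0', '2.2']) {
    try {
      const status = await video.mediaKeys.getStatusForPolicy({ minHdcpVersion });
      console.log(`HDCP ${minHdcpVersion}: ${status}`);
    } catch (error) {
      // Some platforms reject the promise instead of returning a status.
      console.log(`HDCP policy check not available: ${error.name}`);
    }
  }
}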

Check out our previous developer documentation and try the official sample to see all supported HDCP versions.

Intent to Ship | Chromestatus Tracker | Chromium Bug

Origin Trial for Auto Picture-in-Picture for installed PWAs

Some pages may want to automatically enter and leave Picture-in-Picture for a video element; for example, video conferencing web apps would benefit from some automatic Picture-in-Picture behavior when user switches back and forth between the web app and other applications or tabs. This is sadly not possible with the user gesture requirement. So here comes Auto Picture-in-Picture!

To support these tab- and app-switching cases, a new autopictureinpicture attribute has been added to the <video> element.

<video autopictureinpicture></video>

Here’s roughly how it works:

  • When document becomes hidden, the video element whose autopictureinpicture attribute was set most recently automatically enters Picture-in-Picture, if allowed.
  • When document becomes visible, the video element in Picture-in-Picture automatically leaves it.

And that’s it! Note that the Auto Picture-in-Picture feature applies only to Progressive Web Apps (PWAs) that users have installed on desktop.
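The attribute can also be toggled from script, for example only while a call is active. A minimal sketch, in which the element id is hypothetical:

const video = document.querySelector('#conference-video'); // hypothetical element
video.setAttribute('autopictureinpicture', '');            // opt this video in
// ...and when the call ends:
video.removeAttribute('autopictureinpicture');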

Check out the spec for more details and try out using the official PWA sample.

Dogfood: To get feedback from web developers, the Auto Picture-in-Picture feature is available as an Origin Trial in Chrome 73 for desktop (Chrome OS, Linux, Mac, and Windows). You will need to request a token, so that the feature is automatically enabled for your origin for a limited period of time. This will eliminate the need to enable the "Web Platform Features" flag.

Intent to Experiment | Chromestatus Tracker | Chromium Bug

Origin Trial for Skip Ad in Picture-in-Picture window

The video advertisement model usually consists of pre-roll ads. Content providers often provide the ability to skip the ad after a few seconds. Sadly, as the Picture-in-Picture window is not interactive, users watching a video in Picture-in-Picture can’t do this today.

With the Media Session API now available on desktop (it was supported on mobile only before), a new skipad media session action may be used to offer this option in Picture-in-Picture.

Skip Ad button in Picture-in-Picture window
Figure 2. "Skip Ad" button in Picture-in-Picture window

To provide this feature, pass a handler function for the "skipad" action when calling setActionHandler(). To hide the button, pass null. As you can see below, it is pretty straightforward.

try {
  navigator.mediaSession.setActionHandler('skipad', null);
  showSkipAdButton();
} catch(error) {
   // The "Skip Ad" media session action is not supported.
}

function showSkipAdButton() {
  // The Picture-in-Picture window will show a "Skip Ad" button.
  navigator.mediaSession.setActionHandler('skipad', onSkipAdButtonClick);
}

function onSkipAdButtonClick() {
  // User clicked "Skip Ad" button, let's hide it now.
  navigator.mediaSession.setActionHandler('skipad', null);

  // TODO: Stop ad and play video.
}

Note: Media session action handlers persist. I'd suggest always resetting them when media playback starts and ends to avoid showing an unexpected "Skip Ad" button.

Try out the official "Skip Ad" sample and let us know how this feature can be improved.

Dogfood: To get feedback from web developers, the Skip Ad in Picture-in-Picture window feature is available as an Origin Trial in Chrome 73 for desktop (Chrome OS, Linux, Mac, and Windows). You will need to request a token, so that the feature is automatically enabled for your origin for a limited period of time. This will eliminate the need to enable the "Web Platform Features" flag.

Intent to Experiment | Chromestatus Tracker | Chromium Bug

Autoplay granted for Desktop PWAs

Now that Progressive Web Apps (PWAs) are available on all desktop platforms, we are extending the rule that we had on mobile to desktop: autoplay with sound is now allowed for installed PWAs. Note that it only applies to pages in the scope of the web app manifest.
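Outside an installed PWA's manifest scope, autoplay with sound can still be blocked, so it's worth handling a rejected play() promise. A defensive sketch:

const video = document.querySelector('video');
video.play().catch(() => {
  // Autoplay with sound was blocked; fall back to muted playback (or show a play button).
  video.muted = true;
  return video.play();
});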

Chromium Bug

Better match results with String.prototype.matchAll()

Chrome 73 introduces the String.prototype.matchAll() method. It behaves similarly to match(), but returns an iterator with all regular expression matches in a global or sticky regular expression. This offers a simple way to iterate over matches, especially when you need access to capture groups.

What's wrong with match()?

The short answer is nothing, unless you're trying to return global matches with capturing groups. Here's a programming puzzle for you. Consider the following code:

const regex = /t(e)(st(\d?))/g;
const string = 'test1test2';
const results = string.match(regex);
console.log(results);
// → ['test1', 'test2']

Run this in a console and notice that it returns an array containing the strings 'test1' and 'test2'. If I remove the g flag from the regular expression, I get all of my capturing groups, but only the first match. It looks like this:

['test1', 'e', 'st1', '1', index: 0, input: 'test1test2', groups: undefined]

This string contains a second possible match beginning with 'test2' but I don't have it. Now here's the puzzle: how do I get all of the capturing groups for each match? The explainer for the String.prototype.matchAll() proposal shows two possible approaches. I won't describe them because hopefully you won't need them much longer.
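For reference, one of those pre-matchAll approaches is the classic loop over RegExp.prototype.exec(), which advances lastIndex on each call for a global regular expression. A quick sketch:

const regex = /t(e)(st(\d?))/g;
const string = 'test1test2';
let match;
while ((match = regex.exec(string)) !== null) {
  console.log(match);
  // → ['test1', 'e', 'st1', '1', index: 0, ...] then ['test2', 'e', 'st2', '2', index: 5, ...]
}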

String.prototype.matchAll()

What would the explainer examples look like with matchAll()? Have a look.

const regex = /t(e)(st(\d?))/g;
const string = 'test1test2';
const matches = string.matchAll(regex);
for (const match of matches) {
  console.log(match);
}

There are a few things to note about this. Unlike match() which returns an array on a global search, matchAll() returns an iterable object that works beautifully with for...of loops. The iterable object produces an array for each match, including the capturing groups with a few extras. If you print these to the console they'll look like this:

['test1', 'e', 'st1', '1', index: 0, input: 'test1test2', groups: undefined]
['test2', 'e', 'st2', '2', index: 5, input: 'test1test2', groups: undefined]

You may notice that the value for each match is an array in exactly the same format returned by match() for non-global regular expressions.

Bonus material

This is mainly for people who are new to regular expressions or who aren't experts with them. You may have noticed that the results of both match() and matchAll() (for each iteration) are arrays with some additional named properties. While preparing this article, I noticed that these properties have some documentation deficiencies on MDN (which I've fixed). Here's a quick description.

  • index: The index of the match in the original string. In the above example, test2 starts at position 5, hence its index has the value 5.
  • input: The complete string that matchAll() was run against. In my example, that was 'test1test2'.
  • groups: Contains the results of any named capturing groups specified in your regular expression (see the sketch below).
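For example, with a named capturing group the groups property is populated. A small sketch:

const regex = /t(?<vowel>e)st\d?/g;
for (const match of 'test1test2'.matchAll(regex)) {
  console.log(match.groups.vowel); // → 'e' for each match
}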

Conclusion

If I've missed anything please let me know in the comments below. You can read more about recent changes to JavaScript in previous updates or on the V8 website.

Making wheel scrolling fast by default

To improve wheel scrolling/zooming performance developers are encouraged to register wheel and mousewheel event listeners as passive by passing the {passive: true} option to addEventListener(). Registering the event listeners as passive tells the browser that the wheel listeners will not call preventDefault() and the browser can safely perform scrolling and zooming without blocking on the listeners.

The problem is that most often wheel event listeners are conceptually passive (they do not call preventDefault()) but are not explicitly specified as such, requiring the browser to wait for the JS event handling to finish before it starts scrolling/zooming, even though waiting is not necessary. In Chrome 56 we fixed this issue for touchstart and touchmove, and that change was later adopted by both Safari and Firefox. As you can see from the demonstration video we made at that time, leaving the behavior as it was produced a noticeable delay in scroll response. Now in Chrome 73, we've applied the same intervention to wheel and mousewheel events.

The Intervention

Our goal with this change is to reduce the time it takes to update the display after the user starts scrolling by wheel or touchpad without developers needing to change code. Our metrics show that 75% of the wheel and mousewheel event listeners that are registered on root targets (window, document, or body) do not specify any values for the passive option and more than 98% of such listeners do not call preventDefault(). In Chrome 73 we are changing the wheel and mousewheel listeners registered on root targets (window, document, or body) to be passive by default. It means that an event listener like:

window.addEventListener("wheel", func);

becomes equivalent to:

window.addEventListener("wheel", func, {passive: true});

And calling preventDefault() inside the listener will be ignored with the following DevTools warning:

[Intervention] Unable to preventDefault inside passive event listener due
to target being treated as passive. See https://www.chromestatus.com/features/6662647093133312

Breakage and Guidance

In the vast majority of cases, no breakage will be observed. Only in rare cases (less than 0.3% of pages according to our metrics) might unintended scrolling/zooming happen, because a preventDefault() call is ignored inside a listener that is treated as passive by default. Your application can determine whether it may be hitting this in the wild by checking whether calling preventDefault() had any effect via the defaultPrevented property. The fix for the affected cases is relatively easy: pass {passive: false} to addEventListener() to override the default behavior and preserve the event listener as blocking.
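For example, a listener that genuinely needs to cancel scrolling can opt out of the new default explicitly:

window.addEventListener('wheel', (event) => {
  event.preventDefault();                 // honored, because this listener is registered as blocking
  console.assert(event.defaultPrevented); // defaultPrevented lets you verify this at runtime
}, { passive: false });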

Deprecations and removals in Chrome 73

Removals

Remove EXPLAIN and REINDEX support in WebSQL

EXPLAIN's output is not guaranteed to be stable over SQLite versions, so developers cannot rely on it. REINDEX is only useful when collation sequence definitions change, and Chrome only uses the built-in collation sequences. Both features are now removed.

Chrome Platform Status

Remove isomorphic decoding of URL fragment identifier

When Chrome opens a URL with a fragment id, it decodes %xx and applies isomorphic-decode to it, then tries to find an element with the decoding result as an ID in some cases. For example, if a user opens example.com/#%F8%C0, Chrome does the following:

  1. It searches the page for an element with id="%F8%C0".
  2. If it's not found, it searches the page for an element with id="øÀ". No other browsers do this, and it's not defined by the standard. Starting in version 73, Chrome no longer does this either.

Chrome Platform Status | Chromium Bug

Deprecations

Deprecate 'drive-by downloads' in sandboxed iframes

Chrome has deprecated downloads in sandboxed iframes that lack a user gesture ('drive-by downloads'), though this restriction could be lifted via an allow-downloads-without-user-activation keyword in the sandbox attribute list. This allows content providers to restrict malicious or abusive downloads.
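As a sketch, opting an embedded frame back into gesture-less downloads with that keyword could look like this from script (the URL is illustrative):

const frame = document.createElement('iframe');
// sandbox is a DOMTokenList, so keywords can be added individually.
frame.sandbox.add('allow-scripts', 'allow-downloads-without-user-activation');
frame.src = 'https://example.com/embedded-page'; // illustrative URL
document.body.appendChild(frame);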

Downloads can bring security vulnerabilities to a system. Even though additional security checks are done in Chrome and the operating system, we feel that blocking downloads in sandboxed iframes also fits the general thought behind the sandbox. Apart from security concerns, it is a more pleasant user experience for a click to trigger a download on the same page, compared with downloads started automatically when landing at a new page, or started non-spontaneously after the click.

Removal is expected in Chrome 74.

Chrome Platform Status

Constructable Stylesheets: seamless reusable styles

Constructable Stylesheets are a new way to create and distribute reusable styles when using Shadow DOM.

It has always been possible to create stylesheets using JavaScript. However, the process has historically been to create a <style> element using document.createElement('style'), and then access its sheet property to obtain a reference to the underlying CSSStyleSheet instance. This method can produce duplicate CSS code and its attendant bloat, and the act of attaching leads to a flash of unstyled content whether there is bloat or not. The CSSStyleSheet interface is the root of a collection of CSS representation interfaces referred to as the CSSOM, offering a programmatic way to manipulate stylesheets as well as eliminating the problems associated with the old method.

Constructable Stylesheets make it possible to define and prepare shared CSS styles, and then apply those styles to multiple Shadow Roots or the Document easily and without duplication. Updates to a shared CSSStyleSheet are applied to all roots into which it has been adopted, and adopting a stylesheet is fast and synchronous once the sheet has been loaded.

The association set up by Constructable Stylesheets lends itself well to a number of different applications. It can be used to provide a centralized theme used by many components: the theme can be a CSSStyleSheet instance passed to components, with updates to the theme propagating out to components automatically. It can be used to distribute CSS Custom Property values to specific DOM subtrees without relying on the cascade. It can even be used as a direct interface to the browser’s CSS parser, making it easy to preload stylesheets without injecting them into the DOM.
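A small sketch of the shared-theme idea: one sheet adopted by several roots and updated in one place. Here shadowRootA and shadowRootB stand in for shadow roots your components expose:

const theme = new CSSStyleSheet();
theme.replaceSync(':host { --accent-color: rebeccapurple; }');

// Adopt the same sheet in multiple roots (hypothetical shadow roots).
shadowRootA.adoptedStyleSheets = [...shadowRootA.adoptedStyleSheets, theme];
shadowRootB.adoptedStyleSheets = [...shadowRootB.adoptedStyleSheets, theme];

// A single update restyles every root that adopted the sheet:
theme.replaceSync(':host { --accent-color: teal; }');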

Constructing a StyleSheet

Rather than introducing a new API to accomplish this, the Constructable StyleSheets specification makes it possible to create stylesheets imperatively by invoking the CSSStyleSheet() constructor. The resulting CSSStyleSheet object has two new methods that make it safer to add and update stylesheet rules without triggering Flash of Unstyled Content (FOUC). replace() returns a Promise that resolves once any external references (@imports) are loaded, whereas replaceSync() doesn’t allow external references at all:

const sheet = new CSSStyleSheet();

// replace all styles synchronously:
sheet.replaceSync('a { color: red; }');

// this throws an exception:
try {
  sheet.replaceSync('@import url("styles.css")');
} catch (err) {
  console.error(err); // imports are not allowed
}

// replace all styles, allowing external resources:
sheet.replace('@import url("styles.css")')
  .then(sheet => {
    console.log('Styles loaded successfully');
  })
  .catch(err => {
    console.error('Failed to load:', err);
  });

Using Constructed StyleSheets

The second new feature introduced by Constructable StyleSheets is an adoptedStyleSheets property available on Shadow Roots and Documents. This lets us explicitly apply the styles defined by a CSSStyleSheet to a given DOM subtree. To do so, we set the property to an array of one or more stylesheets to apply to that element.

// Create our shared stylesheet:
const sheet = new CSSStyleSheet();
sheet.replaceSync('a { color: red; }');

// Apply the stylesheet to a document:
document.adoptedStyleSheets = [sheet];

// Apply the stylesheet to a Shadow Root:
const node = document.createElement('div');
const shadow = node.attachShadow({ mode: 'open' });
shadow.adoptedStyleSheets = [sheet];

Notice that we’re overriding the value of adoptedStyleSheets instead of changing the array in place. This is required because the array is frozen; in-place mutations like push() throw an exception, so we have to assign a new array. To preserve any existing StyleSheets added via adoptedStyleSheets, we can use concat to create a new array that includes the existing sheets as well as additional ones to add:

const sheet = new CSSStyleSheet();
sheet.replaceSync('a { color: red; }');

// Combine existing sheets with our new one:
document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];

Putting it All Together

With Constructable StyleSheets, web developers now have an explicit solution for creating CSS StyleSheets and applying them to DOM trees. We have a new Promise-based API for loading StyleSheets from a string of CSS source that uses the browser’s built-in parser and loading semantics. Finally, we have a mechanism for applying stylesheet updates to all usages of a StyleSheet, simplifying things like theme changes and color preferences.

View Demo

Looking Ahead

The initial version of Constructable Stylesheets is shipping with the API described here, but there’s work underway to make things easier to use. There’s a proposal to extend the adoptedStyleSheets FrozenArray with dedicated methods for inserting and removing stylesheets, which would obviate the need for array cloning and avoid potential duplicate stylesheet references.


Replacing a hot path in your app's JavaScript with WebAssembly

It's consistently fast, yo.

In my previous articles I talked about how WebAssembly allows you to bring the library ecosystem of C/C++ to the web. One app that makes extensive use of C/C++ libraries is squoosh, our web app that allows you to compress images with a variety of codecs that have been compiled from C++ to WebAssembly.

WebAssembly is a low-level virtual machine that runs the bytecode stored in .wasm files. This bytecode is strongly typed and structured in such a way that it can be compiled and optimized for the host system much more quickly than JavaScript can be. WebAssembly provides an environment to run code that had sandboxing and embedding in mind from the very start.

In my experience, most performance problems on the web are caused by forced layout and excessive paint but every now and then an app needs to do a computationally expensive task that takes a lot of time. WebAssembly can help here.

Note: Due to legal concerns, I won’t name any browsers in this article.

The Hot Path

In squoosh we wrote a JavaScript function that rotates an image buffer by multiples of 90 degrees. While OffscreenCanvas would be ideal for this, it isn't supported across the browsers we were targeting, and it's a little buggy in Chrome.

This function iterates over every pixel of an input image and copies it to a different position in the output image to achieve rotation. For a 4096px by 4096px image (16 megapixels) it would need over 16 million iterations of the inner code block, which is what we call a "hot path". Despite that rather big number of iterations, two out of three browsers we tested finish the task in 2 seconds or less. An acceptable duration for this type of interaction.

for (let d2 = d2Start; d2 >= 0 && d2 < d2Limit; d2 += d2Advance) {
  for (let d1 = d1Start; d1 >= 0 && d1 < d1Limit; d1 += d1Advance) {
    const in_idx = ((d1 * d1Multiplier) + (d2 * d2Multiplier));
    outBuffer[i] = inBuffer[in_idx];
    i += 1;
  }
}

One browser, however, takes over 8 seconds. The way browsers optimize JavaScript is really complicated, and different engines optimize for different things. Some optimize for raw execution speed, some for interaction with the DOM. In this case, we've hit an unoptimized path in one browser.

WebAssembly on the other hand is built entirely around raw execution speed. So if we want fast, predictable performance across browsers for code like this, WebAssembly can help.

WebAssembly for predictable performance

In general, JavaScript and WebAssembly can achieve the same peak performance. However, in JavaScript it's often tricky to stay on the "fast path". One key benefit that WebAssembly offers is predictable performance, even across browsers. The strict typing and low-level architecture allows for stronger assumptions and more optimizations during compilation. The function above is a prime candidate for WebAssembly. But how do you turn a hot path written in JavaScript into WebAssembly?

Writing for WebAssembly

Previously we took C/C++ libraries and compiled them to WebAssembly to use their functionality on the web. We didn't really touch the code of the libraries, we just wrote small amounts of C/C++ code to form the bridge between the browser and the library. This time our motivation is different: We want to write something from scratch with WebAssembly in mind so we can make use of the advantages that WebAssembly has.

WebAssembly architecture

When writing for WebAssembly, it's beneficial to understand a bit more about what WebAssembly actually is.

To quote WebAssembly.org:

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.

When you compile a piece of C or Rust code to WebAssembly, you get a .wasm file that contains a module declaration. This declaration consists of a list of "imports" the module expects from its environment, a list of exports that this module makes available to the host (functions, constants, chunks of memory) and of course the actual binary instructions for the functions contained within.

Something that I didn't realize until I looked into this: The stack that makes WebAssembly a "stack-based virtual machine" is not stored in the chunk of memory that WebAssembly modules use. The stack is completely VM-internal and inaccessible to web developers (except through DevTools). As such it is possible to write WebAssembly modules that don't need any additional memory at all and only use the VM-internal stack.

Note: (for the curious) Compilers like Emscripten still use the WebAssembly memory to implement their own stack. This is necessary so you can access values anywhere on the stack through constructs like pointers in C, something the VM-internal stack does not allow. So, somewhat confusingly, when you run C via WebAssembly, two stacks are in use!

In our case we will need to use some additional memory to allow arbitrary access to the pixels of our image and generate a rotated version of that image. This is what WebAssembly.Memory is for.

Memory management

Commonly, once you use additional memory you will find the need to somehow manage that memory. Which parts of the memory are in use? Which ones are free? In C, for example, you have the malloc(n) function that finds a memory space of n consecutive bytes. Functions of this kind are also called "allocators". Of course the implementation of the allocator in use must be included in your WebAssembly module and will increase your file size. The size and performance of these memory management functions can vary quite significantly depending on the algorithm used, which is why many languages offer multiple implementations to choose from ("dmalloc", "emmalloc", "wee_alloc",...).

In our case we know the dimensions of the input image (and therefore the dimensions of the output image) before we run the WebAssembly module. Here we saw an opportunity: Traditionally, we'd pass the input image's RGBA buffer as a parameter to a WebAssembly function and return the rotated image as a return value. To generate that return value we would have to make use of the allocator. But since we know the total amount of memory needed (twice the size of the input image, once for input and once for output), we can put the input image into the WebAssembly memory using JavaScript, run the WebAssembly module to generate a 2nd, rotated image and then use JavaScript to read back the result. We can get away without using any memory management at all!
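
To make that concrete, here's a rough sketch of the JavaScript side of this approach. It assumes the module exports its linear memory as memory and a rotate(width, height, rotations) function, and that the output buffer starts right after the input buffer; the actual export names and layout depend on how the module is written and compiled:

async function rotateWithWasm(imageData, rotations) {
  // Instantiate the module with the vanilla WebAssembly API (no glue code).
  const { instance } = await WebAssembly.instantiateStreaming(fetch('rotate.wasm'));
  const { memory, rotate } = instance.exports;

  // Make sure the linear memory can hold the input and the output image.
  const numBytes = imageData.data.byteLength;
  const pagesNeeded = Math.ceil((numBytes * 2 + 8) / (64 * 1024));
  const currentPages = memory.buffer.byteLength / (64 * 1024);
  if (pagesNeeded > currentPages) {
    memory.grow(pagesNeeded - currentPages);
  }

  // Copy the input image into WebAssembly memory, starting at address 4.
  const view = new Uint8ClampedArray(memory.buffer);
  view.set(imageData.data, 4);

  // Run the hot path inside WebAssembly.
  rotate(imageData.width, imageData.height, rotations);

  // Read the rotated RGBA bytes back out of the module's memory.
  return view.slice(4 + numBytes, 4 + numBytes * 2);
}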

Spoiled for choice

If you looked at the original JavaScript function that we want to WebAssembly-fy, you can see that it's purely computational code with no JavaScript-specific APIs. As such it should be fairly straightforward to port this code to any language. We evaluated 3 different languages that compile to WebAssembly: C/C++, Rust and AssemblyScript. The only question we need to answer for each of the languages is: How do we access raw memory without using memory management functions?

Note: I skipped the "boring" parts in the code samples and focused on the actual hot path and the memory access. The full version of each sample along with the benchmark can be found in the gist.

C and Emscripten

Emscripten is a C compiler for the WebAssembly target. Emscripten's goal is to function as a drop-in replacement for well-known C compilers like GCC or clang and is mostly flag compatible. This is a core part of Emscripten's mission, as it wants to make compiling existing C and C++ code to WebAssembly as easy as possible.

Accessing raw memory is in the very nature of C and pointers exist for that very reason:

uint8_t* ptr = (uint8_t*)0x124;
ptr[0] = 0xFF;

Here we are turning the number 0x124 into a pointer to unsigned, 8-bit integers (or bytes). This effectively turns the ptr variable into an array starting at memory address 0x124, that we can use like any other array, allowing us to access individual bytes for reading and writing. In our case we are looking at an RGBA buffer of an image that we want to re-order to achieve rotation. To move a pixel we actually need to move 4 consecutive bytes at once (one byte for each channel: R, G, B and A). To make this easier we can create an array of unsigned, 32-bit integers. By convention, our input image will start at address 4 and our output image will start directly after the input image ends:

int bpp = 4;
int imageSize = inputWidth * inputHeight * bpp;
uint32_t* inBuffer = (uint32_t*) 4;
uint32_t* outBuffer = (uint32_t*) (inBuffer + imageSize);

for (int d2 = d2Start; d2 >= 0 && d2 < d2Limit; d2 += d2Advance) {
  for (int d1 = d1Start; d1 >= 0 && d1 < d1Limit; d1 += d1Advance) {
    int in_idx = ((d1 * d1Multiplier) + (d2 * d2Multiplier));
    outBuffer[i] = inBuffer[in_idx];
    i += 1;
  }
}

Note: The reason we chose to start at address 4 and not 0 is because address 0 has a special meaning in many languages: It's the dreaded null pointer. While technically 0 is a perfectly valid address, many languages exclude 0 as a valid value for pointers and either throw an exception or just tumble into undefined behavior.

After porting the entire JavaScript function to C, we can compile the C file with emcc:

$ emcc -O3 -s ALLOW_MEMORY_GROWTH=1 -o c.js rotate.c

As always, Emscripten generates a glue code file called c.js and a wasm module called c.wasm. Note that the wasm module gzips to only ~260 Bytes, while the glue code is around 3.5KB after gzip. After some fiddling, we were able to ditch the glue code and instantiate the WebAssembly modules with the vanilla APIs. This is often possible with Emscripten as long as you are not using anything from the C standard library.
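
If you're curious what that looks like, instantiating the module directly takes just a couple of lines (the export name rotate is illustrative):

// Only works because our module needs nothing from the C standard library.
WebAssembly.instantiateStreaming(fetch('c.wasm')).then(({ instance }) => {
  const { rotate, memory } = instance.exports;
  // Write pixels into memory, call rotate(), and read the result back,
  // as outlined in the memory management section above.
});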

Note: We are working with the Emscripten team to make the glue code smaller or even non-existent at times.

Rust

Rust is a new, modern programming language with a rich type system, no runtime and an ownership model that guarantees memory-safety and thread-safety. Rust also supports WebAssembly as a first-class citizen and the Rust team has contributed a lot of excellent tooling to the WebAssembly ecosystem.

One of these tools is wasm-pack, by the rustwasm working group. wasm-pack takes your code and turns it into a web-friendly module that works out-of-the-box with bundlers like webpack. wasm-pack is an extremely convenient experience, but currently only works for Rust. The group is considering adding support for other WebAssembly-targeting languages.

In Rust, slices are what arrays are in C. And just like in C, we need to create slices that use our start addresses. This goes against the memory safety model that Rust enforces, so to get our way we have to use the unsafe keyword, allowing us to write code that doesn't comply with that model.

Note: This is not a best practice. In our experience it is usually worth it to use binding mechanisms like embind in Emscripten or wasm-bindgen for Rust to work at a higher level.

let imageSize = (inputWidth * inputHeight) as usize;
let inBuffer: &mut [u32];
let outBuffer: &mut [u32];
unsafe {
  inBuffer = slice::from_raw_parts_mut::<u32>(4 as *mut u32, imageSize);
  outBuffer = slice::from_raw_parts_mut::<u32>((imageSize * 4 + 4) as *mut u32, imageSize);
}

for d2 in 0..d2Limit {
  for d1 in 0..d1Limit {
    let in_idx = (d1Start + d1 * d1Advance) * d1Multiplier + (d2Start + d2 * d2Advance) * d2Multiplier;
    outBuffer[i as usize] = inBuffer[in_idx as usize];
    i += 1;
  }
}

Compiling the Rust files using

$ wasm-pack build

yields a 7.6KB wasm module with about 100 bytes of glue code (both after gzip).

AssemblyScript

AssemblyScript is a fairly young project that aims to be a TypeScript-to-WebAssembly compiler. It's important to note, however, that it won't just consume any TypeScript. AssemblyScript uses the same syntax as TypeScript but switches out the standard library for their own. Their standard library models the capabilities of WebAssembly. That means you can't just compile any TypeScript you have lying around to WebAssembly, but it does mean that you don't have to learn a new programming language to write WebAssembly!

for (let d2 = d2Start; d2 >= 0 && d2 < d2Limit; d2 += d2Advance) {
  for (let d1 = d1Start; d1 >= 0 && d1 < d1Limit; d1 += d1Advance) {
    let in_idx = ((d1 * d1Multiplier) + (d2 * d2Multiplier));
    store<u32>(offset + i * 4 + 4, load<u32>(in_idx * 4 + 4));
    i += 1;
  }
}

Considering the small type surface that our rotate() function has, it was fairly easy to port this code to AssemblyScript. The functions load<T>(ptr: usize) and store<T>(ptr: usize, value: T) are provided by AssemblyScript to access raw memory. To compile our AssemblyScript file, we only need to install the AssemblyScript/assemblyscript npm package and run

$ asc rotate.ts -b assemblyscript.wasm --validate -O3

AssemblyScript will provide us with a ~300 Bytes wasm module and _no_ glue code. The module just works with the vanilla WebAssembly APIs.

WebAssembly Forensics

Rust's 7.6KB is surprisingly big when compared to the other two languages. There are a couple of tools in the WebAssembly ecosystem that can help you analyze your WebAssembly files (regardless of the language they were created with), tell you what is going on, and help you improve your situation.

Twiggy

Twiggy is another tool from Rust's WebAssembly team that extracts a bunch of insightful data from a WebAssembly module. The tool is not Rust-specific and allows you to inspect things like the module's call graph, determine unused or superfluous sections and figure out which sections are contributing to the total file size of your module. The latter can be done with Twiggy's top command:

$ twiggy top rotate_bg.wasm

In this case we can see that a majority of our file size stems from the allocator. That was surprising since our code is not using dynamic allocations. Another big contributing factor is a "function names" subsection.

wasm-strip

wasm-strip is a tool from the WebAssembly Binary Toolkit, or wabt for short. It contains a couple of tools that allow you to inspect and manipulate WebAssembly modules. wasm2wat is a disassembler that turns a binary wasm module into a human-readable format. Wabt also contains wat2wasm which allows you to turn that human-readable format back into a binary wasm module. While we did use these two complementary tools to inspect our WebAssembly files, we found wasm-strip to be the most useful. wasm-strip removes unnecessary sections and metadata from a WebAssembly module:

$ wasm-strip rotate_bg.wasm

This reduces the file size of the Rust module from 7.5KB to 6.6KB (after gzip).

wasm-opt

wasm-opt is a tool from Binaryen. It takes a WebAssembly module and tries to optimize it both for size and performance based only on the bytecode. Some tools like Emscripten already run this tool, some others do not. It's usually a good idea to try and save some additional bytes by using these tools.

$ wasm-opt -O3 -o rotate_bg_opt.wasm rotate_bg.wasm

With wasm-opt we can shave off another handful of bytes to leave a total of 6.2KB after gzip.

#![no_std]

After some consultation and research, we re-wrote our Rust code without using Rust's standard library, using the #![no_std] feature. This also disables dynamic memory allocations altogether, removing the allocator code from our module. Compiling this Rust file with

$ rustc --target=wasm32-unknown-unknown -C opt-level=3 -o rust.wasm rotate.rs

yielded a 1.6KB wasm module after wasm-opt, wasm-strip and gzip. While it is still bigger than the modules generated by C and AssemblyScript, it is small enough to be considered lightweight.

Note: According to Twiggy, the main contributor to the file size is core::fmt, a module that turns data into strings (like C's printf()). It is used by code paths that could trigger an exception, as they generate human-readable exception messages. Rust's WebAssembly team is aware of this and is actively working on improvements here.

Performance

Before we jump to conclusions based on file size alone — we went on this journey to optimize performance, not file size. So how did we measure performance and what were the results?

How to benchmark

Despite WebAssembly being a low-level bytecode format, it still needs to be sent through a compiler to generate host-specific machine code. Just like JavaScript, the compiler works in multiple stages. Said simply: The first stage is much faster at compiling but tends to generate slower code. Once the module starts running, the browser observes which parts are frequently used and sends those through a more optimizing but slower compiler.

Our use-case is interesting in that the code for rotating an image will be used once, maybe twice. So in the vast majority of cases we will never get the benefits of the optimizing compiler. This is important to keep in mind when benchmarking. Running our WebAssembly modules 10,000 times in a loop would give unrealistic results. To get realistic numbers, we should run the module once and make decisions based on the numbers from that single run.

Note: Ideally, we should have automated this process of reloading the page and running the module once, and doing that process a large number of times. We decided that a few manual runs are good enough to make an informed decision based on those averaged numbers.
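
In practice, the measurement itself can be as simple as timing one call with performance.now() and reloading the page between runs. wasmRotate here is a placeholder for whichever module wrapper is being measured:

const start = performance.now();
wasmRotate(imageData, 1); // run the hot path exactly once
console.log(`rotate took ${(performance.now() - start).toFixed(1)}ms`);
// Reload the page before measuring again, so the optimizing compiler
// never gets a chance to kick in between runs.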

Performance comparison

These two graphs are different views onto the same data. In the first graph we compare per browser, in the second graph we compare per language used. Please note that I chose a logarithmic timescale. It’s also important that all benchmarks were using the same 16 megapixel test image and the same host machine, except for one browser, which could not be run on the same machine.

Without analyzing these graphs too much, it is clear that we solved our original performance problem: All WebAssembly modules run in ~500ms or less. This confirms what we laid out at the start: WebAssembly gives you predictable performance. No matter which language we choose, the variance between browsers and languages is minimal. To be exact: The standard deviation of JavaScript across all browsers is ~400ms, while the standard deviation of all our WebAssembly modules across all browsers is ~80ms.

Effort

Another metric is the amount of effort we had to put in to create and integrate our WebAssembly module into squoosh. It is hard to assign a numeric value to effort, so I won't create any graphs but there are a few things I would like to point out:

AssemblyScript was frictionless. Not only does it allow you to use TypeScript to write WebAssembly, making code-review very easy for my colleagues, but it also produces glue-free WebAssembly modules that are very small with decent performance. The tooling in the TypeScript ecosystem, like prettier and tslint, will likely just work.

Rust in combination with wasm-pack is also extremely convenient, but excels more at bigger WebAssembly projects where bindings and memory management are needed. We had to diverge a bit from the happy-path to achieve a competitive file size.

C and Emscripten created a very small and highly performant WebAssembly module out of the box, but without the courage to jump into glue code and reduce it to the bare necessities the total size (WebAssembly module + glue code) ends up being quite big.

Conclusion

So what language should you use if you have a JS hot path and want to make it faster or more consistent with WebAssembly? As always with performance questions, the answer is: it depends. So what did we ship?

Note: Again, please note that both axes are logarithmic and that the x axis goes to 2000 Bytes, while the y axis goes up to 10 seconds.

Comparing the module size / performance tradeoff of the different languages we used, the best choice seems to be either C or AssemblyScript. We decided to ship Rust. There are multiple reasons for this decision: All the codecs shipped in Squoosh so far are compiled using Emscripten. We wanted to broaden our knowledge about the WebAssembly ecosystem and use a different language in production. AssemblyScript is a strong alternative, but the project is relatively young and the compiler isn't as mature as the Rust compiler.

While the difference in file size between Rust and the other languages looks quite drastic in the scatter graph, it is not that big a deal in reality: loading 500B or 1.6KB even over 2G takes less than 1/10th of a second. And Rust will hopefully close the gap in terms of module size soon.

In terms of runtime performance, Rust has a faster average across browsers than AssemblyScript. Especially on bigger projects Rust will be more likely to produce faster code without needing manual code optimizations. But that shouldn't keep you from using what you are most comfortable with.

That all being said: AssemblyScript has been a great discovery. It allows web developers to produce WebAssembly modules without having to learn a new language. The AssemblyScript team has been very responsive and is actively working on improving their toolchain. We will definitely keep an eye on AssemblyScript in the future.

_Special thanks to Ashley Williams, Steve Klabnik and Max Graey for all their help on this journey._

Get Ready for Priority Hints


As performance becomes increasingly important, it's exciting to see browsers implement new features which give developers more control over resource loading. Resource Hints such as rel=preload and rel=preconnect give developers more control over resource loading and connections to cross-origin servers, respectively. Client Hints expose details of a user's device and preferences that developers can use to improve performance in nuanced ways. Continuing in this vein, a new experimental feature known as Priority Hints is available through an Origin Trial in Chrome Beta which will allow you to tell the browser how resources should be prioritized.

Resource priority? What's that?

When a browser downloads a resource, the resource is assigned a priority. By default, priorities depend on the type of resource (e.g., script, image, etc.), and the location of the resource reference in the document. For example in Chrome, CSS loaded in typical fashion via the <link> element in the <head> will be assigned a priority of highest, as it blocks rendering. Images in the viewport may be assigned a priority of high, whereas images outside the viewport may be assigned a priority of low. A <script> loaded at the end of the document may receive a priority assignment of medium or low, but this can be influenced by defer and async.

Resources and their priorities

Figure 1. A list of resources and their corresponding priorities in the network panel of DevTools.

For a long time, we've had little control over resource priority beyond modifying the critical rendering path until rel=preload came around. rel=preload changes the discovery order of a resource by telling the browser about it before the browser would otherwise find it in due course. At the same time, rel=preload doesn't reprioritize the resource; it only assigns it the default priority for that particular resource type. Regardless, there are times when browsers prioritize resources in undesirable ways in specific situations: async scripts may be assumed to be of low priority when that may not have been the author's intent, images may be of higher priority than, e.g., non-critical stylesheets, etc. These are the kinds of situations Priority Hints can help developers address.

Using priority hints

Priority Hints can be set for resources in HTML by specifying an importance attribute on a <script>, <img>, or <link> element (though other elements such as <iframe> may see support later). An example can be something like this:

<!-- An image the browser assigns "High" priority, but we don't actually want that. -->
<img src="/images/in_viewport_but_not_important.svg" importance="low" alt="I'm an unimportant image!">

The importance attribute accepts one of three values:

  • high: The resource may be prioritized, if the browser's own heuristics don't prevent that from happening.
  • low: The resource may be _de_prioritized, if the browser's heuristics permit.
  • auto: Let the browser decide what priority is appropriate for a resource. This is the default value.

Because <link> elements are affected by the importance attribute, this means priority can be changed not only for typical stylesheet includes, but also for rel=preload hints:

<!-- We want to initiate an early fetch for a resource, but also deprioritize it -->
<link rel="preload" href="/js/script.js" as="script" importance="low">

Priority Hints aren't restricted to HTML usage. You can also change the priority of fetch requests via the importance option, which takes the same values as the HTML attribute:

fetch("https://example.com/", {importance: "low"}).then(data => {
    // Do whatever you normally would with fetch data
});

Priority Hints have a slightly different impact depending on your network stack. With HTTP/1.X, the only way for the browser to prioritize resources is to delay their requests from going out. As a result, lower priority requests simply hit the network after high priority ones, assuming that there are higher priority requests in the queue. If there aren't, the browser may still delay some low priority requests if it predicts that higher priority requests will come along soon (e.g., if the document's <head> is still open and render-critical resources are likely to be discovered there).

With HTTP/2, the browser may still delay some low-priority requests, but on top of that, it can also set their resource's stream priority to a lower level, enabling the server to better prioritize the resources it is sending down.


Note: Priority Hints in its current form does not affect <iframe> elements, but may, as the implementation matures. This could be useful for demoting priority of third party <iframe>s and their subresources.


So in what circumstances might Priority Hints come in useful? Let's take a look at some quick use cases and find out!

How can I tell if Priority Hints works?

The easiest way to tell if Priority Hints are working is to load your site, open the network panel in DevTools, and make sure the Priority column is visible; if it isn't, right-click any of the column headers and enable it.

DevTools network panel header context menu

Figure 2. The header options context menu in the network panel of DevTools with the Priority option highlighted.

Once enabled, the priority information for resources will be visible as shown in Figure 1. From here, pick any resource in the list and look at its priority. For example, I've chosen a script assigned a low priority in the browser:

Low priority script resource

Figure 3. A script element listed in DevTools given a low priority.

This script is requested via a <script> tag in the footer and uses the defer attribute as well, which causes the browser to lower this script's priority. Let's change that and give it an importance attribute with a value of high:

<script src="/js/app.js" defer importance="high"></script>

When this change is made and deployed, I reload the page and check the value of the Priority column for the script, which should now be given a higher priority:

High priority script resource

Figure 4. A script element listed in DevTools given a high priority.

That's pretty much how it works: If you drop a hint that you would like an element to be prioritized differently, check that resource's priority value in DevTools. If it changes, your priority hint did something!

Use cases

Resource priorities are nuanced and fluctuate based on a number of factors determined by the browser. Once you modify them, the effect can start to become a little less clear. Let's take a look at a few cases where Priority Hints can improve performance.

Deprioritizing images

Browsers do their best to assign reasonable priorities for images so that those in the viewport appear as soon as reasonably possible. In most cases, that's what you want them to do, but what if some above-the-fold imagery just isn't as important as other page resources? Priority Hints may provide a solution for that.

Here's a common scenario: A carousel of images is at the top of a page with the first slide visible and the remaining slides invisible. The markup of this carousel might look something like this:

<ul class="carousel">
    <!-- This item is visible, since it's the first. -->
    <li class="carousel__item"><img src="img/carousel-1.jpg" alt="I'm a carousel image!"></li>
    <!-- The next few, not so much, as they are hidden by CSS, or occluded by other elements. -->
    <li class="carousel__item"><img src="img/carousel-2.jpg" alt="I'm a carousel image!"></li>
    <li class="carousel__item"><img src="img/carousel-3.jpg" alt="I'm a carousel image!"></li>
    <li class="carousel__item"><img src="img/carousel-4.jpg" alt="I'm a carousel image!"></li>
</ul>

Because of browser heuristics, all four images may be given a high priority ranking, even though three of them are not initially visible. The browser can't really know when those images will actually be scrolled into view, so the cautious thing to do here is to consider them "in the viewport". At the same time, that may not be the desired outcome from the developer's perspective, as they know that those images are of lower priority than the async script that is responsible for making the carousel interactive in the first place.

They could use rel=preload to preload the first image in the carousel, but doing so may not provide the outcome we expect: Using rel=preload may effectively prioritize that image above everything else, and if that image is large, it may block rendering as it will get downloaded before critical stylesheets or blocking scripts. Priority Hints may be the solution here:

<ul class="carousel">
    <!-- We'll let the browser know this image is important: -->
    <li class="carousel__item"><img src="img/carousel-1.jpg" alt="I'm a carousel image!" importance="high"></li>
    <!-- But we'll set the less-important ones to low priority: -->
    <li class="carousel__item"><img src="img/carousel-2.jpg" alt="I'm a carousel image!" importance="low"></li>
    <li class="carousel__item"><img src="img/carousel-3.jpg" alt="I'm a carousel image!" importance="low"></li>
    <li class="carousel__item"><img src="img/carousel-4.jpg" alt="I'm a carousel image!" importance="low"></li>
</ul>

Assigning the off-screen images low priority creates less contention between the remaining high priority images and other high priority resources.

Re-prioritizing scripts

The priority of script resource downloads varies wildly in Chrome depending on the script tag's location in the HTML, and on whether the script is declared as async or defer. That means that as a developer, when you avoid making your script a blocking one (which is a known best-practice), you're also implicitly telling the browser that your script is not that important.

While those heuristics work well for many common cases, they may not work well for you.

Maybe you're trying to load a critical script, but in a non-blocking way, so you've made it async to make sure it runs whenever it is available. One example for that may be a script that's responsible for parts of the page's interaction, but which shouldn't block rendering.

Alternatively, maybe you have a blocking script at the bottom of the page (as it relies on running in a specific DOM state), but at the same time, it should not necessarily run before other async scripts, and therefore can be deprioritized.

There exist various hacks that enable you to work around some of these heuristics, but Priority Hints enable you to explicitly declare your intention to the browser and have it do the right thing.

So, if you wanted to prioritize an async script, you could indicate:

<script src="async_but_important.js" async importance="high"></script>

Similarly, for a bottom-of-the-page blocking script, you could indicate the fact that it's less important than other resources, by stating it explicitly:

<script src="blocking_but_unimportant.js" importance="low"></script> 

Deprioritizing fetches

This may not be a common scenario, but it can happen in modern applications: Let's say you have a high volume of fetch calls that fire around the same time. Because fetches are given high priority, they'll contend with one another (and other high priority requests) if enough of them occur in the same space of time. What you could do in this scenario is set an importance of low on fetches for non-critical data:

// Important user data (high by default)
let userData = await fetch("/user");

// Less important content data (explicitly low)
let newsFeedContent = await fetch("/content/news-feed", {importance: "low"});
let suggestedContent = await fetch("/content/suggested", {importance: "low"});

This approach ensures that fetches for critical data won't contend with other fetches for less important data. This could potentially improve performance in some scenarios, particularly where bandwidth is low, and the number of fetch calls is high.

Caveats and conclusion

Now that you've gotten a taste, and you're ready to run out there and start using Priority Hints: hold on! There's a few things you should be aware of before you start dropping hints all over the place.

Priority Hints are hints, not instructions

Hint is the key word. When it comes to resource prioritization, the browser has the final say. Sure, you can slap them on a bunch of elements, and the browser may do what you're asking it to. Or it may ignore some hints and decide the default priority is the best choice for the given situation. This behavior may change as Chrome's implementation matures, so test often!

It's going to take trial and error

Perhaps because Priority Hints are hints rather than instructions, it will take some trial and error to observe their effects. One useful way of looking at how Priority Hints work is to compare them to rel=preload: Where rel=preload's effects are often observable and easily measurable, Priority Hints are much more nuanced. If you don't notice any difference when using them, it could be for any number of reasons, including, but not limited to:

  1. Resource priorities help to make sure critical resources get to the browser before non-critical ones. But that only helps in environments where resource download is a bottleneck. That happens when you're using HTTP/1.X, where the number of connections the browser has open limits the number of resources you can download in each round-trip time. This also happens when using HTTP/2, but mainly in bandwidth constrained environments. High bandwidth HTTP/2 connections are less likely to benefit from better resource prioritization.
  2. HTTP/2 servers and their prioritization implementations are… not always perfect. Pat Meenan wrote about common hurdles in such implementations and how to fix them. Andy Davies has run a few tests to see which CDNs and services are getting it right. But generally, if you see that HTTP/2 prioritization is not having the impact you expect it to have, make sure that your server is handling it right.
  3. The browser either ignored the hint you gave it, or you attempted to set a priority for a resource that would have been the same as the browser's original choice.

A good way to approach using Priority Hints is that it's a fine-tuning optimization technique that should come later in your performance improvement plan rather than sooner. If you haven't looked at other techniques like image optimization, code splitting, rel=preload, and so forth, do those things first and consider Priority Hints later.

Priority Hints are experimental

The Priority Hints implementation is, like your favorite website from 1996: under construction. The API shape and functionality are not yet set in stone. Given this reality, you need to be aware that the behavior of Priority Hints and their impact could change over time. If you plan to experiment with them, you probably want to keep track of the feature and its implementation evolution. At the same time, as Priority Hints is a performance optimization, those modifications should not cause breaking changes, but may render what you're trying to use Priority Hints for less effective.

Try them out!

Starting from Chrome 73, Priority Hints are going into an Origin Trial. That means that you can register your domain and have the feature turned on for your users for the next two releases of Chrome.

We would love you to take the feature out for a spin, try it to improve your site's performance, and report back the results. We want to get a better understanding of the real world benefits of shipping what we have now, despite the caveats mentioned above, before potentially iterating over the feature a bit more.

So please, if you love speeding up websites and want to try to make them faster while helping us improve the feature, take Priority Hints out for a spin, and let us know how it went!

Trusted Types help prevent Cross-Site Scripting


TL;DR

We've created a new experimental API that aims to prevent DOM-Based Cross Site Scripting in modern web applications.

Cross-Site Scripting

Cross-Site Scripting (XSS) is the most prevalent vulnerability affecting web applications. We see this reflected both in our own data, and throughout the industry. Practice shows that maintaining an XSS-free application is still a difficult challenge, especially if the application is complex. While solutions for preventing server-side XSS are well known, DOM-based Cross-Site Scripting (DOM XSS) is a growing problem. For example, in Google's Vulnerability Reward Program DOM XSS is already the most common variant.

Why is that? We think it's caused by two separate issues:

XSS is easy to introduce

DOM XSS occurs when one of the injection sinks in the DOM or other browser APIs is called with user-controlled data. For example, consider this snippet that intends to load a stylesheet for a given UI template the application uses:

const templateId = location.hash.match(/tplid=([^;&]*)/)[1];
// ...
document.head.innerHTML += `<link rel="stylesheet" href="./templates/${templateId}/style.css">`

This code introduces DOM XSS by linking the attacker-controlled source (location.hash) with the injection sink (innerHTML). The attacker can exploit this bug by tricking their victim into visiting the following URL:

https://example.com#tplid="><img src=x onerror=alert(1)>

It's easy to make this mistake in code, especially if the code changes often. For example, maybe templateId was once generated and validated on the server, so this value used to be trustworthy? When assigning to innerHTML, all we know is that the value is a string, but should it be trusted? Where does it really come from?

Additionally, the problem is not limited to just innerHTML. In a typical browser environment, there are over 60 sink functions or properties that require this caution. The DOM API is insecure by default and requires special treatment to prevent XSS.

XSS is difficult to detect

The code above is just an example, so it's trivial to see the bug. In practice, the sources and the sinks are often accessed in completely different application parts. The data from the source is passed around, and eventually reaches the sink. There are some functions that sanitize and verify the data. But was the right function called?

Looking at the source code alone, it's difficult to know if it introduces a DOM XSS. It's not enough to grep the .js files for sensitive patterns. For one, the sensitive functions are often used through various wrappers and real-world vulnerabilities look more like this.

Sometimes it's not even possible to tell if a codebase is vulnerable by only looking at it.

obj[prop] = templateID

If obj points to the Location object, and prop value is "href", this is very likely a DOM XSS, but one can only find that out when executing the code. As any part of your application can potentially hit a DOM sink, all of the code should undergo a manual security review to be sure - and the reviewer has to be extra careful to spot the bug. That's unlikely to happen.

Trusted Types

Trusted Types is the new browser API that might help address the above problems at the root cause - and in practice help obliterate DOM XSS.

Trusted Types allow you to lock down the dangerous injection sinks - they stop being insecure by default, and cannot be called with strings. You can enable this enforcement by setting a special value in the Content Security Policy HTTP response header:

Content-Security-Policy: trusted-types *

Then, in the document you can no longer use strings with the injection sinks:

const templateId = location.hash.match(/tplid=([^;&]*)/)[1];
// typeof templateId == "string"
document.head.innerHTML += templateId // Throws a TypeError.

To interact with those functions, you create special typed objects - Trusted Types. Those objects can be created only by certain functions in your application called Trusted Type Policies. The example code, "fixed" with Trusted Types, would look like this:

const templatePolicy = TrustedTypes.createPolicy('template', {
  createHTML: (templateId) => {
    const tpl = templateId;
    if (/^[0-9a-z-]+$/.test(tpl)) {
      return `<link rel="stylesheet" href="./templates/${tpl}/style.css">`;
    }
    throw new TypeError();
  }
});

const html = templatePolicy.createHTML(location.hash.match(/tplid=([^;&]*)/)[1]);
// html instanceof TrustedHTML
document.head.innerHTML += html;

Here, we create a template policy that verifies the passed template ID parameter and creates the resulting HTML. The policy object create* function calls into a respective user-defined function, and wraps the result in a Trusted Type object. In this case, templatePolicy.createHTML calls the provided templateId validation function, and returns a TrustedHTML with the <link ...> snippet. The browser allows TrustedHTML to be used with an injection sink that expects HTML - like innerHTML.

It might seem that the only improvement is in adding the following check:

if (/^[0-9a-z-]+$/.test(tpl)) { /* allow the tplId */ }

Indeed, this line is necessary to fix XSS. However, the real change is more profound. With Trusted Types enforcement, the only code that could introduce a DOM XSS vulnerability is the code of the policies. No other code can produce a value that the sink functions accept. As such, only the policies need to be reviewed for security issues. In our example, it doesn't really matter where the templateId value comes from, as the policy makes sure it's correctly validated first - the output of this particular policy does not introduce XSS.

Limiting policies

Did you notice the * value that we used in the Content-Security-Policy header? It indicates that the application can create an arbitrary number of policies, provided each of them has a unique name. If applications can freely create a large number of policies, preventing DOM XSS in practice becomes difficult.

However, we can further limit this by specifying a whitelist of policy names like so:

Content-Security-Policy: trusted-types template

This assures that only a single policy, named template, can be created. That policy is then easy to identify in the source code, and can be effectively reviewed. With this, we can be certain that the application is free from DOM XSS. Nice job!

In practice, modern web applications need only a small number of policies. The rule of thumb is to create a policy where the client-side code produces HTML or URLs - in script loaders, HTML templating libraries or HTML sanitizers. All the numerous dependencies that do not interact with the DOM do not need policies. Trusted Types assures that they can't be the cause of XSS.
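
As a sketch of what such a policy might look like, here is a hypothetical script-loader policy written against the same experimental API shape used above. The policy name and URL rule are made up, and the name would need to be added to the trusted-types header:

const scriptPolicy = TrustedTypes.createPolicy('script-loader', {
  createScriptURL: (url) => {
    // Only allow same-origin scripts from our own /static/js/ directory.
    const parsed = new URL(url, location.origin);
    if (parsed.origin === location.origin &&
        parsed.pathname.startsWith('/static/js/')) {
      return parsed.href;
    }
    throw new TypeError('Refusing to load script from ' + url);
  }
});

const script = document.createElement('script');
script.src = scriptPolicy.createScriptURL('/static/js/app.js');
document.head.appendChild(script);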

Get started

This is just a short overview of the API. We are working on providing more code examples, guides and documentation on how to migrate applications to Trusted Types. We feel this is the right moment for the web developer community to start experimenting with it.

To get this new behavior on your site, you need to be signed up for the "Trusted Types" Origin Trial (in Chrome 73 through 76). If you just want to try it out locally, starting from Chrome 73 the experiment can be enabled on the command line:

chrome --enable-blink-features=TrustedDOMTypes

or

chrome --enable-experimental-web-platform-features

Alternatively, visit chrome://flags/#enable-experimental-web-platform-features and enable the feature. All of those options enable the feature globally in Chrome for the current session.

We have also created a polyfill that enables you to test Trusted Types in other browsers.

As always, let us know what you think. You can reach us on the trusted-types Google group or file issues on GitHub.

Trust is Good, Observation is Better—Intersection Observer v2


Intersection Observer v1 is one of those APIs that's probably universally loved, and, now that Safari supports it as well, it's also finally universally usable in all major browsers. For a quick refresher of the API, I recommend watching Surma's Supercharged Microtip on Intersection Observer v1—also embedded below for your viewing pleasure—or reading Surma's in-depth article. People have used Intersection Observer v1 for a wide range of use cases like lazy loading of images and videos, being notified when elements reach position: sticky, firing analytics events, and many more.

For the full details, check out the Intersection Observer docs on MDN, but as a short reminder, this is what the Intersection Observer v1 API looks like in the most basic case:

const onIntersection = (entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      console.log(entry);
    }
  }
};

const observer = new IntersectionObserver(onIntersection);
observer.observe(document.querySelector('#some-target'));

What's challenging with Intersection Observer v1?

To be clear, Intersection Observer v1 is great, but it's not perfect. There are some corner cases where the API falls short. Let's have a closer look! The Intersection Observer v1 API can perfectly tell you when an element is scrolled into the window's viewport, but it doesn't tell you whether the element is covered by any other page content (that is, when the element is occluded) or whether the element's visual display has been modified by visual effects like transform, opacity, filter, etc., which effectively can make it invisible.

Now while for an element in the top-level document this information can be determined by analyzing the DOM via JavaScript, for example via DocumentOrShadowRoot.elementFromPoint() and then digging deeper, the same information cannot be obtained if the element in question is located in a third-party iframe.

Why is actual visibility such a big deal?

The Internet is, unfortunately, a place that attracts bad actors with even worse intentions. For example, a shady publisher that serves pay-per-click ads on a content site might be incentivized to trick people into clicking their ads to increase the publisher's ad payout (at least for a short period, until the ad network catches them). Typically, such ads are served in iframes. Now if the publisher wanted to get users to click such ads, they could make the ad iframes completely transparent by applying a CSS rule iframe { opacity: 0; } and overlaying the iframes on top of something attractive, like a cute cat video that users would actually want to click. This is called clickjacking. You can see such a clickjacking attack in action in the upper section of this demo (try "watching" the 🐈 cat video and ☑️ activate "trick mode"). You will notice that the ad in the iframe "thinks" it received legitimate clicks, even if it was completely transparent when you (pretend-involuntarily) clicked it.

Tricking a user into clicking an ad by styling it transparent and overlaying it on top of something attractive.

How does Intersection Observer v2 fix this?

Intersection Observer v2 introduces the concept of tracking the actual "visibility" of a target element as a human being would define it. By setting an option in the IntersectionObserver constructor, intersecting IntersectionObserverEntrys (pardon the wrong plural ending here) will then contain a new boolean field named isVisible. A true value for isVisible is a strong guarantee from the underlying implementation that the target element is completely unoccluded by other content and has no visual effects applied that would alter or distort its display on screen. In contrast, a false value means that the implementation cannot make that guarantee.

An important detail of the spec is that the implementation is permitted to report false negatives (that is, setting isVisible to false even when the target element is completely visible and unmodified). For performance or other reasons, implementations should limit themselves to working with bounding boxes and rectilinear geometry; they shouldn't try to achieve pixel-perfect results for modifications like border-radius.

That said, false positives are not permitted under any circumstances (that is, setting isVisible to true when the target element is not completely visible and unmodified).

Warning: Visibility is much more expensive to compute than intersection. For that reason, Intersection Observer v2 is not intended to be used broadly in the way that Intersection Observer v1 is. Intersection Observer v2 is focused on combatting fraud and should be used only when Intersection Observer v1 functionality is truly insufficient.

What does the new code look like in practice?

The IntersectionObserver constructor now takes two additional configuration properties: delay and trackVisibility. The delay is a number indicating the minimum delay in milliseconds between notifications from the observer for a given target. The trackVisibility is a boolean indicating whether the observer will track changes in a target's visibility.

⚠️ It's important to note here that when trackVisibility is true, delay is required to be at least 100 (that is, no more than one notification every 100ms). As noted before, visibility is expensive to calculate, and this requirement is a precaution against performance degradation (and battery consumption). The responsible developer will use the largest tolerable value for delay.

According to the current spec, visibility is calculated as follows:

  • If the observer's trackVisibility attribute is false, then the target is considered visible. This corresponds to the current v1 behavior.

  • If the target has an effective transformation matrix other than a 2D translation or proportional 2D upscaling, then the target is considered invisible.

  • If the target, or any element in its containing block chain, has an effective opacity other than 1.0, then the target is considered invisible.

  • If the target, or any element in its containing block chain, has any filters applied, then the target is considered invisible.

  • If the implementation cannot guarantee that the target is completely unoccluded by other page content, then the target is considered invisible.

This means current implementations are pretty conservative with guaranteeing visibility. For example, applying an almost unnoticeable grayscale filter like filter: grayscale(0.01%) or setting an almost invisible transparency with opacity: 0.99 would all render the element invisible.

Below is a short code sample that illustrates the new API features. You can see this click tracking logic in action in the second section of the demo (but now, try "watching" the 🐶 puppy video). Be sure to activate "trick mode" again to immediately convert yourself into a shady publisher and see how Intersection Observer v2 prevents non-legitimate ad clicks from being tracked. This time, Intersection Observer v2 has our back! 🎉

Note: Unlike typical lazy-loading code, if you use Intersection Observer to prevent this kind of clickjacking attack, you must not unobserve the element after the first intersection.

Intersection Observer v2 preventing an unintended click on an ad.

<!DOCTYPE html>
<!-- This is the ad running in the iframe -->
<button id="callToActionButton">Buy now!</button>
// This is code running in the iframe.

// The iframe must be visible for at least 800ms prior to an input event
// for the input event to be considered valid.
const minimumVisibleDuration = 800;

// Keep track of when the button transitioned to a visible state.
let visibleSince = 0;

const button = document.querySelector('#callToActionButton');
button.addEventListener('click', (event) => {
  if ((visibleSince > 0) &&
      (performance.now() - visibleSince >= minimumVisibleDuration)) {
    trackAdClick();
  } else {
    rejectAdClick();
  }
});

const observer = new IntersectionObserver((changes) => {
  for (const change of changes) {
    // ⚠️ Feature detection
    if (typeof change.isVisible === 'undefined') {
      // The browser doesn't support Intersection Observer v2, falling back to v1 behavior.
      change.isVisible = true;
    }
    if (change.isIntersecting && change.isVisible) {
      visibleSince = change.time;
    } else {
      visibleSince = 0;
    }
  }
}, {
  threshold: [1.0],
  // 🆕 Track the actual visibility of the element
  trackVisibility: true,
  // 🆕 Set a minimum delay between notifications
  delay: 100
});

// Require that the entire iframe be visible.
observer.observe(document.querySelector('#ad'));

Acknowledgements

Thanks to Simeon Vincent, Yoav Weiss, and Mathias Bynens for reviewing this article, as well as Stefan Zager likewise for reviewing and for implementing the feature in Chrome.

Exploring a back/forward cache for Chrome


On the Chrome team, we are exploring a new back/forward cache to cache pages in-memory (preserving JavaScript & DOM state) when the user navigates away. This is definitely not a trivial endeavor but if it succeeds it will make navigating back and forth very fast.

A back/forward cache (bfcache) caches whole pages (including the JavaScript heap) when navigating away from a page, so that the full state of the page can be restored when the user navigates back. Think of it as pausing a page when you leave it and playing it when you return.
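
Pages can already detect this pause-and-resume behavior in browsers that have a bfcache today: the pageshow event fires with persisted set to true when a page is restored from the cache instead of being loaded from scratch. A minimal sketch:

window.addEventListener('pageshow', (event) => {
  if (event.persisted) {
    // The page was resumed from the back/forward cache; refresh anything
    // time-sensitive instead of re-running full initialization.
    console.log('Restored from the back/forward cache');
  }
});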

Below is a first-look of an early prototype of back/forward cache in action on desktop:

We also have a preview of the back/forward cache working on Chrome for Android:

We estimate this change could improve performance up to 19% of all navigations for mobile Chrome. You can find more detail about this feature in the bfcache explainer.

There is medium cross-browser interop risk with this change. Both Firefox and Safari already have back-forward cache implementations that are subtly different. Chrome is opting not to use WebKit’s implementation of bfcache due to incompatibility with Chrome’s multiprocess architecture.

Our formal intent-to-implement for the back-forward cache is on blink-dev for anyone wishing to contribute to the discussions.

Thanks to Arthur Sonzogni, Alexander Timin, Kenji Baheux and Sami for their help putting together our prototype videos.

Web Dev Ecosystem team - February wrap up


Welcome to the first installment of a monthly wrap up in which we look back at what's been happening in the Web Developer Ecosystem team ✨

We are a team of engineers and communicators who produce articles and code samples for this website, Web Fundamentals, and our brand new portal, web.dev. You can also catch our work over on our YouTube Channel, and don't forget to follow us on @ChromiumDev :)

February is a short month but we are certainly not short on content. Let's start with big releases from the team.

Releases

Workbox

Hot off the press, Workbox 4.0 was released just a few days ago.🎉 This release includes great new features like workbox-window and improvements to many of the existing workbox packages. For those of you who are already using Workbox, check out the v3 to v4 migration guide. Wondering how you can use Workbox in your existing project? Here is a guide to using it with the bundler of your choice. Not sure what problem Workbox helps to solve? Check out this interview on service workers over on the State of the Web show.

lit-html and LitElement

The team at the Polymer Project has been busy working on the stable releases of lit-html and LitElement - two next-generation web development libraries. Want to try them out? Start with the Try LitElement guide 📝

Trusted Web Activities

With the release of Chrome 72, Trusted Web Activities (TWAs) have entered the market! TWAs let you have full-screen Chrome inside of an Android Activity, which means you can bring your web content into the app sphere 📱 Check out this getting started guide or read about how @svenbudak put their PWA on the Google Play Store!

What's coming next

With the Chrome 73 stable release on the horizon (March 12), we have lots of exciting features to cover!

V8, Chrome's JavaScript engine, has a bunch of updates including Object.fromEntries and String.prototype.matchAll. Check out the V8 release notes.

Working with audio and video on the web? Hardware media keys support is here and "Skip Ad" in the Picture-in-Picture window is now in an origin trial! Check out Audio/Video Updates in Chrome 73 for more.

Speaking of origin trials, get ready for Priority Hints! With Priority Hints, developers can set the importance of a <script>, <img>, or <link> element to hint to the browser how it should load them. It is still an experimental feature, so please do try it out and send feedback!

Rendering performance is always on top of our mind. In Chrome 73, wheel and mousewheel listeners registered on root targets (window, document, or body) will be passive by default, providing fast wheel scrolling out of the box.

As we say hello to new features, we also have to say goodbye, so be sure to check deprecations and removals for Chrome 73 as well!

New development

Here are a few more things we've been working on that will hit a browser near you.

To help prevent Cross-Site Scripting, we are developing a new API called Trusted Types. Opting into Trusted Types (via Content Security Policy) locks the document down against DOM-based injection. We are working on providing more code examples and guides, but in the meantime please read more about Trusted Types to try it out.
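
As a rough illustration only (the API is still evolving, so the trustedTypes.createPolicy entry point and the CSP directive in the comment below are assumptions based on the current proposal), creating and using a policy looks roughly like this:

// The page opts in via a response header such as:
//   Content-Security-Policy: trusted-types myPolicy
// Once enforced, assigning a plain string to a sink like innerHTML throws;
// only values produced by a registered policy are accepted.
const policy = trustedTypes.createPolicy('myPolicy', {
  // Deliberately naive "sanitizer", just for illustration.
  createHTML: (input) => input.replace(/</g, '&lt;'),
});

const userInput = '<img src=x onerror=alert(1)>';
document.body.innerHTML = policy.createHTML(userInput);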

Hitting the back and forward buttons in Chrome may soon be really fast! We are exploring a new back/forward cache to keep pages in memory when the user navigates away. Check out the explainer and the prototype of bfcache in the post above.

Lastly, Intersection Observer v2 introduces the idea of tracking the actual "visibility" of a target.
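
A brief sketch of the new options (trackVisibility and delay) and the isVisible flag they add to entries; the observed #sponsored-widget element is just a hypothetical example:

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    // isVisible is the v2 signal: true only if the target is actually
    // drawn on screen and not covered, transformed, or faded by other content.
    console.log(entry.target.id,
                'intersecting:', entry.isIntersecting,
                'visible:', entry.isVisible);
  }
}, {
  threshold: [1.0],
  trackVisibility: true,  // opt in to v2 visibility tracking
  delay: 100,             // minimum milliseconds between notifications
});

// Hypothetical element used only for illustration.
observer.observe(document.querySelector('#sponsored-widget'));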

What we are tinkering with

Our work does not end at browser features! We also look at web application performance, build web apps, and think about different ways to help web developers everywhere. Here are some of the things we've been tinkering with this month.

New Videos and Podcasts

Martin is starting a new series called JavaScript SEO; the first episode is about how Google Search indexes JavaScript sites! Meggin recently presented reflections on the web.dev project at a meetup. Jake and Surma are back with a new HTTP203 podcast episode discussing an image rotation experiment.

We also have regular shows such as "New in Chrome", "What's New in DevTools", and "The State of the Web" on our YouTube channel.

Special shout-out

Have you seen Puppeteer Examples? You might have seen it from Eric Bidelman's tweet "📯The 12 Days of Puppeteer 🤹🏻‍♂️🎁" last year. It's an awesome collection of Puppeteer code samples that lets you think creatively about what you can do with the browser. You should check them out!

(Best of luck in your new endeavor, Eric! We'll miss you!!)

Wrapping up

How did you like the first monthly wrap-up? If you enjoyed it or have ideas to improve it, please do let me know on Twitter @kosamari.

If you've built something new using features introduced here or changed something in your codebase based on our articles, be sure to let us know at @ChromiumDev.

In March, a few of us are off to India hoping to learn more about mobile web experience there ✈️ Looking forward to sharing what we learn there!

See you next month👋

What's New In DevTools (Chrome 74)

Whoops! Our deadline snuck up on us. We'll have the full post up by Monday at the latest.

In the meantime, check out our new DOM tutorial.

Feedback

To discuss the new features and changes in this post, or anything else related to DevTools:

  • File bug reports at Chromium Bugs.
  • Discuss features and changes on the Mailing List. Please don't use the mailing list for support questions. Use Stack Overflow, instead.
  • Get help on how to use DevTools on Stack Overflow. Please don't file bugs on Stack Overflow. Use Chromium Bugs, instead.
  • Tweet us at @ChromeDevTools.
  • File bugs on this doc in the Web Fundamentals repository.

Consider Canary

If you're on Mac or Windows, consider using Chrome Canary as your default development browser. Canary gives you access to the latest DevTools features.

Note: Canary is released as soon as it's built, without testing. This means that Canary breaks about once a month. It's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.



Move Ya! Or maybe, don't, if the user prefers-reduced-motion!

tl;dr: Not everyone likes decorative animations or transitions, and some users outright experience motion sickness when faced with parallax scrolling, zooming effects, etc. Chrome (as of Canary 74) now supports a user preference media query prefers-reduced-motion that lets you design a motion-reduced variant of your site for users who have expressed this preference.

Too much motion in real life and on the web

The other day, I was ice skating with my kids. It was a lovely day, the sun was shining, and the ice rink was crammed with people ⛸. The only issue with that: I don't cope with crowds well. With so many moving targets, I fail to focus on anything, and end up lost and with a feeling of complete visual overload, almost like staring at an anthill 🐜.

Throng of feet of ice skating people
Figure 1: Visual overload in real life.

Occasionally, the same can happen on the web: with flashing ads, fancy parallax effects, surprising reveal animations, autoplaying videos, etc., the web sometimes can honestly be quite overwhelming… Happily, unlike in real life, there is a solution to that. The CSS media query prefers-reduced-motion lets developers create a variant of a page for users who, well, prefer reduced motion. This can comprise anything from refraining from having autoplaying videos to disabling certain purely decorative effects, to completely redesigning a page for certain users.

Before I dive into the feature, let's take one step back and think of what animations are used for on the web. If you want, you can also skip the background information and jump right into the technical details below.

Animation on the web

Animation is oftentimes used to provide feedback to the user, for example, to let them know that an action was received and is being processed. More concretely, on a shopping website, a product could be animated to "fly" into a virtual shopping cart, depicted as an icon in the top-right corner of the site.

Another use case involves using motion to hack user perception: a mixture of skeleton screens, contextual metadata, and low quality image previews occupies the user's attention and makes the whole experience feel faster. The idea is to give the user context about what's coming while loading things in as quickly as possible.

Finally, there are decorative effects like animated gradients, parallax scrolling, background videos, and several others. While many users enjoy such animations, some users dislike them because they feel distracted or slowed down by them. In the worst case, users may even suffer from motion sickness as if it were a real life experience, so for these users reducing animations is a medical necessity.

Motion-triggered vestibular spectrum disorder

Some users experience distraction or nausea from animated content. For example, if scrolling a page causes elements to move other than the essential movement associated with scrolling—as with parallax scrolling, where backgrounds move at a different rate to foregrounds—it can trigger vestibular disorders. Vestibular (inner ear) disorder reactions include dizziness, nausea and headaches. The impact of animation on people with vestibular disorders can be quite severe. Triggered reactions include nausea, migraine headaches, and potentially needing bed rest to recover.

Remove motion on operating systems

Operating systems like Android, iOS, macOS, and Windows have long allowed users to reduce motion wherever possible via their accessibility settings. The screenshots below show Android Pie's "remove animations" preference and macOS Mojave's "reduce motion" preference; when checked, they cause the operating system to skip decorative effects like app launching animations. Applications themselves can and should honor this setting, too, and remove all unnecessary animations.

Android settings screen with 'remove animations' checkbox checked macOS settings screen with 'reduce motion' checkbox checked
Figure 2: Prefers reduced motion settings in Android and macOS.

Remove motion on the web

Media Queries Level 5 brings this user preference to the web as well. Media queries allow authors to test and query values or features of the user agent or display device, independent of the document being rendered. The media query prefers-reduced-motion is used to detect if the user has requested the system minimize the amount of animation or motion it uses. It can take two possible values:

  • no-preference: Indicates that the user has made no preference known to the system. This keyword value evaluates as false in the boolean context.
  • reduce: Indicates that the user has notified the system that they prefer an interface that minimizes the amount of movement or animation, preferably to the point where all non-essential movement is removed.

Working with the media query

Note: prefers-reduced-motion is available as of Chrome Canary 74. For other browsers, let me refer you to the Can I use tables.

As all media queries, prefers-reduced-motion can be checked from a CSS context and from a JavaScript context.

To illustrate both, let's say I have an important sign-up button that I want the user to click. I could define an attention-catching "vibrate" animation, but as a good web citizen I only want to play it for users who are explicitly OK with animations, and not for everyone else: that includes users who have opted out of animations, as well as users on browsers that don't understand the media query.

/*
  If the user has expressed their preference for
  reduced motion, then don't use animations on buttons.
*/
@media (prefers-reduced-motion: reduce) {
  button {
    animation: none;
  }
}

/*
  If the browser understands the media query and the user
  explicitly hasn't set a preference, then use animations on buttons.
*/
@media (prefers-reduced-motion: no-preference) {
  button {
    /* `vibrate` keyframes are defined elsewhere */
    animation: vibrate 0.3s linear infinite both;
  }
}

Note: If you have a lot of animation-related CSS, you can spare your opted-out users from downloading it by outsourcing all animation-related CSS into a separate stylesheet that you only load conditionally via the media attribute on the link element 😎:
<link rel="stylesheet" href="animations.css" media="(prefers-reduced-motion: no-preference)">

To illustrate how to work with prefers-reduced-motion with JavaScript, let's imagine I have defined a complex animation with the Web Animations API. While CSS rules will be dynamically triggered by the browser when the user preference changes, for JavaScript animations I have to listen for changes myself, and then manually stop my potentially in-flight animations (or restart them if the user lets me):

const mediaQuery = window.matchMedia('(prefers-reduced-motion: reduce)');
mediaQuery.addEventListener('change', () => {
  console.log(mediaQuery.media, mediaQuery.matches);
  // Stop JavaScript-based animations.
});

Note: The parentheses around the actual media query are obligatory:
/* 🚫 Wrong */ window.matchMedia('prefers-reduced-motion: reduce')
You always have to use this syntax:
/* ✅ Correct */ window.matchMedia('(prefers-reduced-motion: reduce)')

Demo

I have created a little demo based on Rogério Vicente's amazing 🐈 HTTP status cats. First, take a moment to appreciate the joke, it's hilarious and I'll wait. Now that you're back, let me introduce the demo. When you scroll down, each HTTP status cat alternately appears from either the right or the left side. It's a buttery-smooth 60fps animation, but as outlined above, some users may dislike it or even get motion sick from it, so the demo is programmed to respect prefers-reduced-motion. This even works dynamically, so users can change their preference on the fly, no reload required. If a user prefers reduced motion, the non-essential reveal animations are gone, and just the regular scrolling motion is left. The screencast below shows the demo in action:

Figure 3: Video of the prefers-reduced-motion demo app (test it on Chrome Canary 74 or later).

(Bonus) Forcing reduced motion on all websites

Not every site will use prefers-reduced-motion, or maybe not consistently enough for your taste. If you, for whatever reason, want to stop motion on all websites, you actually can. One way to make this happen is to inject a stylesheet with the following CSS into every web page you visit. There are several browser extensions out there (use at your own risk!) that allow for this.

@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.001s !important;
    transition-duration: 0.001s !important;
  }
}

The way this works is that the CSS above overrides the durations of all animations and transitions to such a short time that they are not noticeable anymore. As some websites depend on an animation to be run in order to work correctly (maybe because a certain step depends on the firing of the animationend event), the more radical animation: none !important; approach wouldn't work. Even the above hack is not guaranteed to succeed on all websites (for example, it can't stop motion that was initiated via the Web Animations API), so be sure to deactivate it when you notice breakage.

Conclusions

Respecting user preferences is key for modern websites, and browsers are exposing more and more features to enable web developers to do so. The CSS Working Group is currently standardizing more user preference media queries like prefers-reduced-transparency (detects if the user prefers reduced transparency), prefers-contrast (detects if the user has requested the system increase or decrease the amount of contrast between adjacent colors), prefers-color-scheme (detects if the user prefers a light or dark color scheme), and inverted-colors (detects if the user prefers inverted colors). 👀 Watch this space, we will definitely let you know once they launch in Chrome!

Acknowledgements

Massive shout-out to Stephen McGruer who has implemented prefers-reduced-motion in Chrome and—together with Rob Dodson—has also reviewed this article.

KV Storage, the Web's First Built-in Module

Browser vendors and web performance experts have been saying for the better part of the last decade that localStorage is slow, and web developers should stop using it.

To be fair, the people saying this are not wrong. localStorage is a synchronous API that blocks the main thread, and any time you access it you potentially prevent your page from being interactive.

The problem is the localStorage API is just so temptingly simple, and the only asynchronous alternative to localStorage is IndexedDB, which (let's face it) is not known for its ease of use or welcoming API.

So developers are left with a choice between something hard to use and something bad for performance. And while there are libraries that offer the simplicity of the localStorage API while actually using asynchronous storage APIs under the hood, including one of those libraries in your app has a file-size cost and can eat into your performance budget.

But what if it were possible to get the performance of an asynchronous storage API with the simplicity of the localStorage API, without having to pay the file size cost?

Well, now there is. Chrome is experimenting with a new feature called built-in modules, and the first one we're planning to ship is an asynchronous key/value storage module called KV Storage.

But before I get into the details of the KV Storage module, let me explain what I mean by built-in modules.

What are built-in modules?

Built-in modules are just like regular JavaScript modules, except that they don't have to be downloaded because they ship with the browser.

Like traditional web APIs, built-in modules must go through a standardization process and have well-defined specifications, but unlike traditional web APIs, they're not exposed on the global scope—they're only available via imports.

Not exposing built-in modules globally has a lot of advantages: they won't add any overhead to starting up a new JavaScript runtime context (e.g. a new tab, worker, or service worker), and they won't consume any memory or CPU unless they're actually imported. Furthermore, they don't run the risk of naming collisions with other variables defined in your code.

To import a built-in module you use the prefix std: followed by the built-in module's identifier. For example, in supported browsers, you could import the KV Storage module with the following code (see below for how to use a KV Storage polyfill in unsupported browsers):

import {storage, StorageArea} from 'std:kv-storage';

The KV Storage module

The KV Storage module is similar in its simplicity to the localStorage API, but its API shape is actually closer to a JavaScript Map. Instead of getItem(), setItem(), and removeItem(), it has get(), set(), and delete(). It also has other map-like methods not available to localStorage, like keys(), values(), and entries(), and like Map, its keys do not have to be strings. They can be any structured-serializable type.

Unlike Map, all KV Storage methods return either promises or async iterators (since the main point of this module is that it's not synchronous, in contrast to localStorage). To see the full API in detail, you can refer to the specification.

As you may have noticed from the code example above, the KV Storage module has two named exports: storage and StorageArea.

storage is an instance of the StorageArea class with the name 'default', and it's what developers will use most often in their application code. The StorageArea class is provided for cases where additional isolation is needed (e.g. a third-party library that stores data and wants to avoid conflicts with data stored via the default storage instance). StorageArea data is stored in an IndexedDB database with the name kv-storage:${name}, where name is the name of the StorageArea instance.
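
For instance, a library could keep its data in its own named area and walk it with the map-like async methods. A brief sketch based on the specification (the 'my-library' name is just an example):

import {StorageArea} from 'std:kv-storage';

// Data ends up in an IndexedDB database named 'kv-storage:my-library'.
const libraryStorage = new StorageArea('my-library');

(async () => {
  await libraryStorage.set('last-sync', Date.now());

  // entries() returns an async iterator, so we can loop with for-await-of.
  for await (const [key, value] of libraryStorage.entries()) {
    console.log(key, value);
  }
})();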

Here's an example of how to use the KV Storage module in your code:

import {storage} from 'std:kv-storage';

const main = async () => {
  const oldPreferences = await storage.get('preferences');

  // The callback must be async so that `await` can be used inside it.
  document.querySelector('form').addEventListener('submit', async () => {
    const newPreferences = Object.assign({}, oldPreferences, {
      // Updated preferences go here...
    });

    await storage.set('preferences', newPreferences);
  });
};

main();

What if a browser doesn't support a built-in module?

If you're familiar with using native JavaScript modules in browsers, you probably know that (at least up until now) importing anything other than a URL will generate an error. And std:kv-storage is not a valid URL.

So that raises the question: do we have to wait until all browsers support built-in modules before we can use them in our code?

Thankfully, the answer is no! You can actually use built-in modules in your code today, with the help of another new feature called import maps.

Import maps

Import maps are essentially a mechanism by which developers can alias import identifiers to one or more alternate identifiers.

This is powerful because it gives you a way to change (at runtime) how a browser resolves a particular import identifier across your entire application.

In the case of built-in modules, this allows you to reference a polyfill of the module in your application code, but a browser that supports the built-in module can load that version instead!

Here's how you would declare an import map to make this work with the KV Storage module:

<!-- The import map is inlined into your page -->
<script type="importmap">
{
  "imports": {
    "/path/to/kv-storage-polyfill.mjs": [
      "std:kv-storage",
      "/path/to/kv-storage-polyfill.mjs"
    ]
  }
}
</script>

<!-- Then any module scripts with import statements use the above map -->
<script type="module">
  import {storage} from '/path/to/kv-storage-polyfill.mjs';

  // Use `storage` ...
</script>

The key point in the above code is the URL /path/to/kv-storage-polyfill.mjs is being mapped to two different resources: std:kv-storage and then the original URL again, /path/to/kv-storage-polyfill.mjs.

So when the browser encounters an import statement referencing that URL (/path/to/kv-storage-polyfill.mjs), it first tries to load std:kv-storage, and if it can't then it falls back to loading /path/to/kv-storage-polyfill.mjs.

Again, the magic here is that the browser doesn't need to support import maps or built-in modules for this technique to work, since the URL being passed to the import statement is the URL for the polyfill. The polyfill is not actually a fallback, it's the default. The built-in module is a progressive enhancement!

What about browsers that don't support modules at all?

In order to use import maps to conditionally load built-in modules, you have to actually use import statements, which also means you have to use module scripts, i.e. <script type="module">.

Currently, more than 80% of browsers support modules, and for browsers that don't, you can use the module/nomodule technique to serve a legacy bundle to older browsers. Note that when generating your nomodule build, you'll need to include all polyfills because you know for sure that browsers that don't support modules will definitely not support built-in modules.
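
The module/nomodule pattern itself is just two script tags: modern browsers ignore the nomodule one, and legacy browsers skip type="module" scripts they don't understand (the file names here are placeholders):

<!-- Loaded by browsers that understand modules (and import maps, where supported) -->
<script type="module" src="/app.mjs"></script>

<!-- Legacy bundle, with all polyfills baked in, for everything else -->
<script nomodule src="/app-legacy.js"></script>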

KV Storage demo

To illustrate that it's possible to use built-in modules today while still supporting older browsers, I've put together a demo that incorporates all the techniques described above:

  • Browsers that support modules, import maps, and the built-in module do not load any unneeded code.
  • Browsers that support modules and import maps but do not support the built-in module load the KV Storage polyfill (via the browser's module loader).
  • Browsers that support modules but do not support import maps also load the KV Storage polyfill (via the browser's module loader).
  • Browsers that do not support modules at all get the KV Storage polyfill in their legacy bundle (loaded via <script nomodule>).

The demo is hosted on Glitch, so you can view its source. I also have a detailed explanation of the implementation in the README. Feel free to take a look if you're curious to see how it's built.

In order to actually see the native built-in module in action, you have to load the demo in Chrome 74 (currently Chrome Dev or Canary) with the experimental web platform features flag turned on (chrome://flags/#enable-experimental-web-platform-features).

You can verify that the built-in module is being loaded because you won't see the polyfill script in the source panel in DevTools; instead you'll see the built-in module version (fun fact: you can actually inspect the module's source code or even put breakpoints in it!):

The KV Storage module source in Chrome DevTools

Please give us feedback

This introduction should have given you a taste of what's possible with built-in modules. And hopefully you're excited! We'd really love for developers to try out the KV Storage module (as well as all the new features discussed here) and give us feedback.

Here are the GitHub links where you can give us feedback for each of the features mentioned in this article:

If your site currently uses localStorage, you should try switching to the KV Storage API, and if you sign up for the KV Storage origin trial, you can actually deploy your changes today! All your users should benefit from better performance, and Chrome 74+ users won't have to pay any extra download cost.

New in Chrome 73

In Chrome 73, we've added support for:

And there’s plenty more!

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 73!

Change log

This covers only some of the key highlights; check the links below for additional changes in Chrome 73.

Progressive Web Apps work everywhere

Progressive Web Apps provide an installable, app-like experience, built and delivered directly via the web. In Chrome 73, we’ve added support for macOS, bringing support for Progressive Web Apps to all desktop platforms - Mac, Windows, Chrome OS and Linux, as well as mobile, simplifying web app development.

A Progressive Web App is fast, and reliably so; always loading and performing at the same speed, regardless of network connection. They provide rich, engaging experiences via modern web features that take full advantage of the device capabilities.

Users can install your PWA from Chrome’s context menu, or you can directly promote the installation experience using the beforeinstallprompt event. Once installed, a PWA integrates with the OS to behave like a native application: users find and launch them from the same place as other apps, they run in their own window, they appear in the task switcher, their icons can show notification badging, and so on.
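
The usual pattern for promoting installation from your own UI is to stash the beforeinstallprompt event and trigger it later. A minimal sketch (the #install-button element is assumed to exist in your page):

let deferredPrompt;

window.addEventListener('beforeinstallprompt', (event) => {
  // Stop the automatic prompt and keep the event around for later.
  event.preventDefault();
  deferredPrompt = event;
  document.querySelector('#install-button').hidden = false;
});

document.querySelector('#install-button').addEventListener('click', async () => {
  deferredPrompt.prompt();
  const {outcome} = await deferredPrompt.userChoice;
  console.log('User response to the install prompt:', outcome);
  deferredPrompt = null;
});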

We want to close the capability gap between the web and native to provide a solid foundation for modern applications delivered on the web. We're working to add new web platform capabilities that give you access to things like the file system, wake lock, adding an ambient badge to the address bar to let users know your PWA can be installed, policy installation for enterprises, and plenty more.

If you’re already building a mobile PWA, a desktop PWA is no different. In fact, if you’ve used responsive design, you’re likely good to go already. Your single codebase will work across desktop and mobile. If you’re just starting out with PWAs, you’ll be surprised at how easy it is to create them!

  1. Add a manifest
  2. Create a set of icons
  3. Add a boilerplate service worker

Then, iterate from there.

Signed HTTP Exchanges

Signed HTTP Exchanges (SXG), part of an emerging technology called Web Packages, are now available in Chrome 73. A Signed HTTP Exchange makes it possible to create "portable" content that can be delivered by other parties and, this is the key aspect, still retains the integrity and attribution of the original site.

Signed Exchange: The essence

This decouples the origin of the content from the server that delivers it, but because it’s signed, it’s like it’s being delivered from your server. When the browser loads this Signed Exchange, it can safely show your URL in the address bar because the signature in the exchange indicates the content originally came from your origin.

Signed HTTP Exchanges enable faster content delivery for users, making it possible to get the benefits of a CDN without having to cede control of your certificate's private key. The AMP team is planning to use Signed HTTP Exchanges on Google Search result pages to improve AMP URLs and speed up clicks on search results.

Check out Kinuko’s Signed HTTP Exchanges post for details on how to get started.

Constructable style sheets

Constructable Stylesheets, new in Chrome 73, gives us a new way to create and distribute reusable styles, which is particularly important when using Shadow DOM.

It’s always been possible to create stylesheets using JavaScript. Create a <style> element using document.createElement('style'). Then access its sheet property to obtain a reference to the underlying CSSStyleSheet instance, and set the style.

Diagram showing preparation and application of CSS

Using this method tends to lead to style sheet bloat. Even worse, it causes a flash of unstyled content. Constructable Stylesheets make it possible to define and prepare shared CSS styles, and then apply those styles to multiple Shadow Roots or the Document easily and without duplication.

Updates to a shared CSSStyleSheet are applied to all roots where it’s been adopted, and adopting a stylesheet is fast and synchronous once the sheet has been loaded.

Getting started is simple: create a new instance of CSSStyleSheet, then use either replace or replaceSync to update the stylesheet rules.

const sheet = new CSSStyleSheet();

// replace all styles synchronously:
sheet.replaceSync('a { color: red; }');

// this throws an exception:
try {
  sheet.replaceSync('@import url("styles.css")');
} catch (err) {
  console.error(err); // imports are not allowed
}

// replace all styles, allowing external resources:
sheet.replace('@import url("styles.css")')
  .then(sheet => {
    console.log('Styles loaded successfully');
  })
  .catch(err => {
    console.error('Failed to load:', err);
  });
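
Continuing the example above, adopting the prepared sheet into the document or into any shadow root is a one-line assignment:

// Apply the same sheet to the document and to a shadow root, without duplication.
document.adoptedStyleSheets = [sheet];

const host = document.createElement('div');
const shadowRoot = host.attachShadow({mode: 'open'});
shadowRoot.adoptedStyleSheets = [sheet];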

Check out Jason Miller’s Constructable Stylesheets: seamless reusable styles post for more details and code samples!

And more!

These are just a few of the changes in Chrome 73 for developers, of course, there’s plenty more.

  • matchAll() is a new regular expression matching method on the string prototype. It returns an iterator over all of the matches, including their capture groups (see the short sketch after this list).
  • The <link> element now supports imagesrcset and imagesizes properties to correspond to srcset and sizes attributes of HTMLImageElement.
  • Blink's shadow blur radius implementation now matches Firefox and Safari.
  • Dark mode is now supported on Mac, and Windows support is on the way.
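
Here's the short matchAll() sketch mentioned above, showing the iterator of match objects with their capture groups:

const text = 'Chrome 73, Chrome 74';
const regex = /Chrome (\d+)/g;

for (const match of text.matchAll(regex)) {
  // match[0] is the full match, match[1] the captured version number.
  console.log(match[0], '->', match[1]);
}
// Logs: "Chrome 73 -> 73", then "Chrome 74 -> 74"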

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you'll get a notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 74 is released, I’ll be right here to tell you -- what’s new in Chrome!


Deprecations and removals in Chrome 74

Remove PaymentAddress's languageCode property

The PaymentAddress.languageCode property has been removed from the Payment Request API. This property is the browser's best guess for the language of the text in the shipping, billing, delivery, or pickup address in the Payment Request API. The languageCode property is marked at risk in the specification and has already been removed from Firefox and Safari. Usage in Chrome is small enough for safe removal.

Intent to Remove | Chrome Platform Status | Chromium Bug

Don't allow popups during page unload

Pages may no longer use window.open() to open a new page during unload. The Chrome popup blocker already prohibited this, but now it is prohibited whether or not the popup blocker is enabled.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Deprecate drive-by downloads in sandboxed iframes

Chrome will soon prevent downloads in sandboxed iframes that lack a user gesture, though this restriction could be lifted via an allow-downloads-without-user-activation keyword in the sandbox attribute list. This allows content providers to restrict malicious or abusive downloads.
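
A sketch of what the opt-in looks like with the keyword named above (the exact behavior is still being finalized, and the URLs are placeholders):

<!-- Downloads that lack a user gesture will be blocked in this frame... -->
<iframe sandbox="allow-scripts" src="https://third-party.example/widget.html"></iframe>

<!-- ...unless the embedder explicitly allows them. -->
<iframe sandbox="allow-scripts allow-downloads-without-user-activation"
        src="https://third-party.example/widget.html"></iframe>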

Downloads can bring security vulnerabilities to a system. Even though additional security checks are done in Chrome and the operating system, we feel that blocking downloads in sandboxed iframes also fits the general intent of the sandbox. Apart from the security concerns, it is a more pleasant user experience for a click to trigger a download on the same page, compared with downloads that start automatically when a user lands on a new page, or that start some time after the click without an obvious trigger.

Removal is expected in Chrome 74.

Intent to Remove | Chrome Platform Status | Chromium Bug


The Chromium Chronicle: Task Scheduling Best Practices

The Chrome team is proud to introduce the Chromium Chronicle, a monthly series geared specifically to Chromium developers, developers who build the browser.

The Chromium Chronicle will primarily focus on spreading technical knowledge and best practices to write, build, and test Chrome. Our plan is to feature topics that are relevant and useful to Chromium developers, such as code health, helpful tools, unit testing, accessibility and much more! Each article will be written and edited by Chrome engineers.

We are excited about this new series, and hope you are too! Ready to dive in? Take a look at our first episode below!

Task Scheduling Best Practices

Episode 1: April 2019

by Gabriel Charette in Montréal

Chrome code that needs in-process asynchronous execution typically posts tasks to sequences. Sequences are chrome-managed “virtual threads” and are preferred to creating your own thread. How does an object know which sequence to post to?

The old paradigm is to receive a SequencedTaskRunner from the creator:

Foo::Foo(scoped_refptr<base::SequencedTaskRunner> backend_task_runner)
    : backend_task_runner_(std::move(backend_task_runner)) {}

The preferred paradigm is to create an independent SequencedTaskRunner:

Foo::Foo()
    : backend_task_runner_(
          base::CreateSequencedTaskRunnerWithTraits({
              base::MayBlock(), base::TaskPriority::BEST_EFFORT})) {}

This is easier to read and write as all the information is local and there’s no risk of inter-dependency with unrelated tasks.

This paradigm is also better when it comes to testing. Instead of injecting task runners manually, tests can instantiate a controlled task environment to manage Foo’s tasks:

class FooTest : public testing::Test {
 public:
  (...)
 protected:
  base::test::ScopedTaskEnvironment task_environment_;
  Foo foo_;
};

Having ScopedTaskEnvironment first in the fixture naturally ensures it manages the task environment throughout Foo’s lifetime. The ScopedTaskEnvironment will capture Foo’s request-on-construction to create a SequencedTaskRunner and will manage its tasks under each FooTest.

To test the result of asynchronous execution, use the RunLoop::Run()+QuitClosure() paradigm:

TEST_F(FooTest, TestAsyncWork) {
  RunLoop run_loop;
  foo_.BeginAsyncWork(run_loop.QuitClosure());
  run_loop.Run();
  EXPECT_TRUE(foo_.work_done());
}

This is preferred to RunUntilIdle(), which can be flaky if the asynchronous workload involves a task outside of the ScopedTaskEnvironment’s purview, e.g. a system event, so use RunUntilIdle() with care.

Want to learn more? Read our documentation on threading and tasks or get involved in the migration to ScopedTaskEnvironment!
