
Get started with GPU Compute on the Web

This article is about me playing with the experimental WebGPU API and sharing my journey with web developers interested in performing data-parallel computations using the GPU.

Background

As you may already know, the Graphic Processing Unit (GPU) is an electronic subsystem within a computer that was originally specialized for processing graphics. However, in the past 10 years, it has evolved towards a more flexible architecture allowing developers to implement many types of algorithms, not just render 3D graphics, while taking advantage of the unique architecture of the GPU. These capabilities are referred to as GPU Compute, and using a GPU as a coprocessor for general-purpose scientific computing is called general-purpose GPU (GPGPU) programming.

GPU Compute has contributed significantly to the recent machine learning boom, as convolutional neural networks and other models can take advantage of the architecture to run more efficiently on GPUs. With the current Web Platform lacking in GPU Compute capabilities, the W3C’s “GPU for the Web” Community Group is designing an API to expose the modern GPU APIs that are available on most current devices. This API is called WebGPU.

WebGPU is a low-level API, like WebGL. It is very powerful and quite verbose, as you’ll see. But that’s OK. What we’re looking for is performance.

In this article, I’m going to focus on the GPU Compute part of WebGPU and, to be honest, I'm just scratching the surface, so that you can start playing on your own. I will be diving deeper and covering WebGPU rendering (canvas, texture, etc.) in forthcoming articles.

Dogfood: WebGPU is available for now in Chrome 78 for macOS behind an experimental flag. You can enable it at chrome://flags/#enable-unsafe-webgpu. The API is constantly changing and currently unsafe. As GPU sandboxing isn't implemented yet for the WebGPU API, it is possible to read GPU data for other processes! Don’t browse the web with it enabled.

Access the GPU

Accessing the GPU is easy in WebGPU. Calling navigator.gpu.requestAdapter() returns a JavaScript promise that will asynchronously resolve with a GPU adapter. Think of this adapter as the graphics card. It can either be integrated (on the same chip as the CPU) or discrete (usually a PCIe card that is more performant but uses more power).

Once you have the GPU adapter, call adapter.requestDevice() to get a promise that will resolve with a GPU device you’ll use to do some GPU computation.

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

Both functions take options that allow you to be specific about the kind of adapter (power preference) and device (extensions, limits) you want. For the sake of simplicity, we’ll use the default options in this article.
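For instance, here is a minimal sketch of what those options can look like, using the option names from the WebGPU draft at the time of writing (they may change as the API evolves):

// Prefer a high-performance (usually discrete) adapter; "low-power" is the other option.
const adapter = await navigator.gpu.requestAdapter({
  powerPreference: "high-performance"
});

// Extensions and limits could be requested here; this sketch sticks to the defaults.
const device = await adapter.requestDevice({});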

Write buffer memory

Let’s see how to use JavaScript to write data to memory for the GPU. This process isn’t straightforward because of the sandboxing model used in modern web browsers.

The example below shows you how to write four bytes to buffer memory accessible from the GPU. It calls device.createBufferMappedAsync() which takes the size of the buffer and its usage. Even though the usage flag GPUBufferUsage.MAP_WRITE is not required for this specific call, let's be explicit that we want to write to this buffer. The resulting promise resolves with a GPU buffer object and its associated raw binary data buffer.

Writing bytes is familiar if you’ve already played with ArrayBuffer; use a TypedArray and copy the values into it.

// Get a GPU buffer and an arrayBuffer for writing.
// Upon success the GPU buffer is put in the mapped state.
const [gpuBuffer, arrayBuffer] = await device.createBufferMappedAsync({
  size: 4,
  usage: GPUBufferUsage.MAP_WRITE
});

// Write bytes to buffer.
new Uint8Array(arrayBuffer).set([0, 1, 2, 3]);

At this point, the GPU buffer is mapped, meaning it is owned by the CPU, and it’s accessible in read/write from JavaScript. So that the GPU can access it, it has to be unmapped which is as simple as calling gpuBuffer.unmap().

The concept of mapped/unmapped is needed to prevent race conditions where GPU and CPU access memory at the same time.

Read buffer memory

Now let’s see how to copy a GPU buffer to another GPU buffer and read it back.

Since we’re writing in the first GPU buffer and we want to copy it to a second GPU buffer, a new usage flag GPUBufferUsage.COPY_SRC is required. The second GPU buffer is created in an unmapped state with the synchronous device.createBuffer(). Its usage flag is GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ as it will be used as the destination of the first GPU buffer and read in JavaScript once GPU copy commands have been executed.

// Get a GPU buffer and an arrayBuffer for writing.
// Upon success the GPU buffer is returned in the mapped state.
const [gpuWriteBuffer, arrayBuffer] = await device.createBufferMappedAsync({
  size: 4,
  usage: GPUBufferUsage.MAP_WRITE | GPUBufferUsage.COPY_SRC
});

// Write bytes to buffer.
new Uint8Array(arrayBuffer).set([0, 1, 2, 3]);

// Unmap buffer so that it can be used later for copy.
gpuWriteBuffer.unmap();

// Get a GPU buffer for reading in an unmapped state.
const gpuReadBuffer = device.createBuffer({
  size: 4,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ
});

Because the GPU is an independent coprocessor, all GPU commands are executed asynchronously. This is why there is a list of GPU commands built up and sent in batches when needed. In WebGPU, the GPU command encoder returned by device.createCommandEncoder() is the JavaScript object that builds a batch of “buffered” commands that will be sent to the GPU at some point. The methods on GPUBuffer, on the other hand, are “unbuffered”, meaning they execute atomically at the time they are called.

Once you have the GPU command encoder, call copyEncoder.copyBufferToBuffer() as shown below to add this command to the command queue for later execution. Finally, finish encoding commands by calling copyEncoder.finish() and submit those to the GPU device command queue. The queue is responsible for handling submissions done via device.defaultQueue.submit() with the GPU commands as arguments. This will atomically execute all the commands stored in the array in order.

// Encode commands for copying buffer to buffer.
const copyEncoder = device.createCommandEncoder();
copyEncoder.copyBufferToBuffer(
  gpuWriteBuffer /* source buffer */,
  0 /* source offset */,
  gpuReadBuffer /* destination buffer */,
  0 /* destination offset */,
  4 /* size */
);

// Submit copy commands.
const copyCommands = copyEncoder.finish();
device.defaultQueue.submit([copyCommands]);

At this point, GPU queue commands have been sent, but not necessarily executed. To read the second GPU buffer, call gpuReadBuffer.mapReadAsync(). It returns a promise that will resolve with an ArrayBuffer containing the same values as the first GPU buffer once all queued GPU commands have been executed.

// Read buffer.
const copyArrayBuffer = await gpuReadBuffer.mapReadAsync();
console.log(new Uint8Array(copyArrayBuffer));

You can try out this sample.

In short, here’s what you need to remember regarding buffer memory operations:

  • GPU buffers have to be unmapped to be used in device queue submission.
  • When mapped, GPU buffers can be read and written in JavaScript.
  • GPU buffers are mapped when mapReadAsync(), mapWriteAsync(), createBufferMappedAsync() and createBufferMapped() are called.

Shader programming

Programs running on the GPU that only perform computations (and don't draw triangles) are called compute shaders. They are executed in parallel by hundreds of GPU cores (which are smaller than CPU cores) that operate together to crunch data. Their input and output are buffers in WebGPU.

To illustrate the use of compute shaders in WebGPU, we’ll play with matrix multiplication, a common algorithm in machine learning illustrated below.

Matrix multiplication diagram
Figure 1. Matrix multiplication diagram

In short, here’s what we’re going to do:

  1. Create three GPU buffers (two for the matrices to multiply and one for the result matrix)
  2. Describe input and output for the compute shader
  3. Compile the compute shader code
  4. Set up a compute pipeline
  5. Submit in batch the encoded commands to the GPU
  6. Read the result matrix GPU buffer

GPU Buffers creation

For the sake of simplicity, matrices will be represented as a list of floating point numbers. The first element is the number of rows, the second element the number of columns, and the rest is the actual numbers of the matrix.

Simple representation of a matrix in JavaScript and its equivalent in mathematical notation
Figure 2. Simple representation of a matrix in JavaScript and its equivalent in mathematical notation

The three GPU buffers are storage buffers as we need to store and retrieve data in the compute shader. This explains why the GPU buffer usage flags include GPUBufferUsage.STORAGE for all of them. The result matrix usage flag also has GPUBufferUsage.COPY_SRC because it will be copied to another buffer for reading once all GPU queue commands have been executed.

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();


// First Matrix

const firstMatrix = new Float32Array([
  2 /* rows */, 4 /* columns */,
  1, 2, 3, 4,
  5, 6, 7, 8
]);

const [gpuBufferFirstMatrix, arrayBufferFirstMatrix] = await device.createBufferMappedAsync({
  size: firstMatrix.byteLength,
  usage: GPUBufferUsage.STORAGE,
});
new Float32Array(arrayBufferFirstMatrix).set(firstMatrix);
gpuBufferFirstMatrix.unmap();


// Second Matrix

const secondMatrix = new Float32Array([
  4 /* rows */, 2 /* columns */,
  1, 2,
  3, 4,
  5, 6,
  7, 8
]);

const [gpuBufferSecondMatrix, arrayBufferSecondMatrix] = await device.createBufferMappedAsync({
  size: secondMatrix.byteLength,
  usage: GPUBufferUsage.STORAGE,
});
new Float32Array(arrayBufferSecondMatrix).set(secondMatrix);
gpuBufferSecondMatrix.unmap();


// Result Matrix

const resultMatrixBufferSize = Float32Array.BYTES_PER_ELEMENT * (2 + firstMatrix[0] * secondMatrix[1]);
const resultMatrixBuffer = device.createBuffer({
  size: resultMatrixBufferSize,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC
});

Bind group layout and bind group

Concepts of bind group layout and bind group are specific to WebGPU. A bind group layout defines the input/output interface expected by a shader, while a bind group represents the actual input/output data for a shader.

In the example below, the bind group layout expects some storage buffers at numbered bindings 0, 1, and 2 for the compute shader. The bind group on the other hand, defined for this bind group layout, associates GPU buffers to the bindings: gpuBufferFirstMatrix to the binding 0, gpuBufferSecondMatrix to the binding 1, and resultMatrixBuffer to the binding 2.

const bindGroupLayout = device.createBindGroupLayout({
  bindings: [
    {
      binding: 0,
      visibility: GPUShaderStage.COMPUTE,
      type: "storage-buffer"
    },
    {
      binding: 1,
      visibility: GPUShaderStage.COMPUTE,
      type: "storage-buffer"
    },
    {
      binding: 2,
      visibility: GPUShaderStage.COMPUTE,
      type: "storage-buffer"
    }
  ]
});

const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  bindings: [
    {
      binding: 0,
      resource: {
        buffer: gpuBufferFirstMatrix
      }
    },
    {
      binding: 1,
      resource: {
        buffer: gpuBufferSecondMatrix
      }
    },
    {
      binding: 2,
      resource: {
        buffer: resultMatrixBuffer
      }
    }
  ]
});

Compute shader code

The compute shader code for multiplying matrices is written in GLSL, a high-level shading language used in WebGL, which has a syntax based on the C programming language. Without going into detail, you should find below the three storage buffers marked with the keyword buffer. The program will use firstMatrix and secondMatrix as inputs and resultMatrix as its output.

Note that each storage buffer has a binding qualifier used that corresponds to the same index defined in bind group layouts and bind groups declared above.

const computeShaderCode = `#version 450

  layout(std430, set = 0, binding = 0) readonly buffer FirstMatrix {
      vec2 size;
      float numbers[];
  } firstMatrix;

  layout(std430, set = 0, binding = 1) readonly buffer SecondMatrix {
      vec2 size;
      float numbers[];
  } secondMatrix;

  layout(std430, set = 0, binding = 2) buffer ResultMatrix {
      vec2 size;
      float numbers[];
  } resultMatrix;

  void main() {
    resultMatrix.size = vec2(firstMatrix.size.x, secondMatrix.size.y);

    ivec2 resultCell = ivec2(gl_GlobalInvocationID.x, gl_GlobalInvocationID.y);
    float result = 0.0;
    for (int i = 0; i < firstMatrix.size.y; i++) {
      int a = i + resultCell.x * int(firstMatrix.size.y);
      int b = resultCell.y + i * int(secondMatrix.size.y);
      result += firstMatrix.numbers[a] * secondMatrix.numbers[b];
    }

    int index = resultCell.y + resultCell.x * int(secondMatrix.size.y);
    resultMatrix.numbers[index] = result;
  }
`;

Pipeline setup

WebGPU in Chrome currently uses bytecode instead of raw GLSL code. This means we have to compile computeShaderCode before running the compute shader. Luckily for us, the @webgpu/glslang package allows us to compile computeShaderCode in a format that WebGPU in Chrome accepts. This bytecode format is based on a safe subset of SPIR-V.

Note that the “GPU for the Web” W3C Community Group has still not decided, at the time of writing, on the shading language for WebGPU.

import glslangModule from 'https://unpkg.com/@webgpu/glslang@0.0.8/dist/web-devel/glslang.js';

The compute pipeline is the object that actually describes the compute operation we're going to perform. Create it by calling device.createComputePipeline(). It takes two arguments: the bind group layout we created earlier, and a compute stage defining the entry point of our compute shader (the main GLSL function) and the actual compute shader module compiled with glslang.compileGLSL().

const glslang = await glslangModule();

const computePipeline = device.createComputePipeline({
  layout: device.createPipelineLayout({
    bindGroupLayouts: [bindGroupLayout]
  }),
  computeStage: {
    module: device.createShaderModule({
      code: glslang.compileGLSL(computeShaderCode, "compute")
    }),
    entryPoint: "main"
  }
});

Commands submission

After instantiating a bind group with our three GPU buffers and a compute pipeline with a bind group layout, it is time to use them.

Let’s start a programmable compute pass encoder with commandEncoder.beginComputePass(). We'll use this to encode GPU commands that will perform the matrix multiplication. Set its pipeline with passEncoder.setPipeline(computePipeline) and its bind group at index 0 with passEncoder.setBindGroup(0, bindGroup). The index 0 corresponds to the set = 0 qualifier in the GLSL code.

Now, let’s talk about how this compute shader is going to run on the GPU. Our goal is to execute this program in parallel for each cell of the result matrix, step by step. For a result matrix of size 2 by 4 for instance, we’d call passEncoder.dispatch(2, 4) to encode the command of execution. The first argument “x” is the first dimension, the second one “y” is the second dimension, and the last one “z” is the third dimension that defaults to 1 as we don’t need it here. In the GPU compute world, encoding a command to execute a kernel function on a set of data is called dispatching.

Execution in parallel for each result matrix cell
Figure 3. Execution in parallel for each result matrix cell

In our code, “x” and “y” will be respectively the number of rows of the first matrix and the number of columns of the second matrix. With that, we can now dispatch a compute call with passEncoder.dispatch(firstMatrix[0], secondMatrix[1]).

As seen in the drawing above, each shader will have access to a unique gl_GlobalInvocationID object that will be used to know which result matrix cell to compute.

const commandEncoder = device.createCommandEncoder();

const passEncoder = commandEncoder.beginComputePass();
passEncoder.setPipeline(computePipeline);
passEncoder.setBindGroup(0, bindGroup);
passEncoder.dispatch(firstMatrix[0] /* x */, secondMatrix[1] /* y */);
passEncoder.endPass();

To end the compute pass encoder, call passEncoder.endPass(). Then, create a GPU buffer to use as a destination to copy the result matrix buffer with copyBufferToBuffer. Finally, finish encoding commands with copyEncoder.finish() and submit those to the GPU device queue by calling device.defaultQueue.submit() with the GPU commands.

// Get a GPU buffer for reading in an unmapped state.
const gpuReadBuffer = device.createBuffer({
  size: resultMatrixBufferSize,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ
});

// Encode commands for copying buffer to buffer.
commandEncoder.copyBufferToBuffer(
  resultMatrixBuffer /* source buffer */,
  0 /* source offset */,
  gpuReadBuffer /* destination buffer */,
  0 /* destination offset */,
  resultMatrixBufferSize /* size */
);

// Submit GPU commands.
const gpuCommands = commandEncoder.finish();
device.defaultQueue.submit([gpuCommands]);

Read result matrix

Reading the result matrix is as easy as calling gpuReadBuffer.mapReadAsync() and logging the ArrayBuffer returned by the resulting promise.

Matrix multiplication result
Figure 4. Matrix multiplication result

In our code, the result logged in DevTools JavaScript console is “2, 2, 50, 60, 114, 140”.

// Read buffer.
const arrayBuffer = await gpuReadBuffer.mapReadAsync();
console.log(new Float32Array(arrayBuffer));

Congratulations! You made it. You can play with the sample.

Performance findings

So how does running matrix multiplication on a GPU compare to running it on a CPU? To find out, I wrote the program just described for a CPU. And as you can see in the graph below, using the full power of the GPU seems like an obvious choice when the size of the matrices is greater than 256 by 256.

GPU vs CPU benchmark
Figure 5. GPU vs CPU benchmark
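For reference, here is a rough sketch of the CPU version of the algorithm, using the same flat [rows, columns, ...numbers] matrix representation; it is an illustration, not the exact benchmark code:

function multiplyOnCpu(firstMatrix, secondMatrix) {
  const rowsA = firstMatrix[0];
  const colsA = firstMatrix[1];
  const colsB = secondMatrix[1];
  const result = new Float32Array(2 + rowsA * colsB);
  result[0] = rowsA;
  result[1] = colsB;
  for (let row = 0; row < rowsA; row++) {
    for (let col = 0; col < colsB; col++) {
      let sum = 0;
      for (let i = 0; i < colsA; i++) {
        sum += firstMatrix[2 + row * colsA + i] * secondMatrix[2 + i * colsB + col];
      }
      result[2 + row * colsB + col] = sum;
    }
  }
  return result;
}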

This article was just the beginning of my journey exploring WebGPU. Expect more articles soon featuring more deep dives in GPU Compute and on how rendering (canvas, texture, sampler) works in WebGPU.


The Chromium Chronicle: GWP-ASan: Detect bugs in the wild

Episode 8: November, 2019

by Vlad Tsyrklevich in Seattle

Debugging memory safety errors, such as use-after-frees or buffer overflows, can be difficult. Tools like AddressSanitizer (ASan) are helpful to pinpoint memory errors in unit tests and fuzzers, but many bugs only manifest after deployment to users where ASan’s overhead is prohibitively high.

GWP-ASan is a heap-only memory error detector designed to be used in the wild. It detects use-after-frees, buffer overflows/underflows, and double frees. Unlike ASan, it does not detect errors on the stack or in globals.

By sampling a tiny percentage of allocations, GWP-ASan is able to provide probabilistic error detection with negligible memory and performance overhead. GWP-ASan will cause the process to crash immediately when a memory error occurs with a sampled allocation. This makes it easier to spot the bug as the crash happens right where the error is made instead of at some later point when corrupt memory is used.

Like ASan, GWP-ASan crash reports include allocation and deallocation stack traces to help debug memory issues. Let's take a look at an example (crbug/956230) of some of the additional data presented in the crash UI:

The use and deallocation both originate in PDFiumEngine::ExtendSelection(). The source quickly shows the bug is a use of an invalidated std::vector iterator.

GWP-ASan is enabled on the stable channel for allocations made using malloc/new and PartitionAlloc on Windows and macOS. Android support is in progress. Over 60 GWP-ASan bugs have been reported so far and about 70% have been fixed. GWP-ASan crashes are all candidate security issues that may be exploitable so please triage them quickly and request backports where necessary.

What's New In DevTools (Chrome 80)

Support for let and class redeclarations in the Console

The Console now supports redeclarations of let and class statements. The inability to redeclare was a common annoyance for web developers who use the Console to experiment with new JavaScript code.

For example, previously, when redeclaring a local variable with let, the Console would throw an error:

A screenshot of the Console in Chrome 78 showing that the let redeclaration fails.

Now, the Console allows the redeclaration:

A screenshot of the Console in Chrome 80 showing that the let redeclaration succeeds.
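In other words, running something like the following two statements, one after the other, no longer throws an error:

let greeting = 'hello';
let greeting = 'world'; // Chrome 78 would reject this redeclaration; the Chrome 80 Console accepts it.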

Chromium issue #1004193

Improved WebAssembly debugging

DevTools has started to support the DWARF Debugging Standard, which means increased support for stepping over code, setting breakpoints, and resolving stack traces in your source languages within DevTools. Check out Improved WebAssembly debugging in Chrome DevTools for the full story.

A screenshot of the new DWARF-powered WebAssembly debugging.

Network panel updates

Request Initiator Chains in the Initiator tab

You can now view the initiators and dependencies of a network request as a nested list. This can help you understand why a resource was requested, or what network activity a certain resource (such as a script) caused.

A screenshot of a Request Initiator Chain in the Initiator tab

After logging network activity in the Network panel, click a resource and then go to the Initiator tab to view its Request Initiator Chain:

  • The inspected resource is bold. In the screenshot above, https://web.dev/default-627898b5.js is the inspected resource.
  • The resources above the inspected resource are the initiators. In the screenshot above, https://web.dev/bootstrap.js is the initiator of https://web.dev/default-627898b5.js. In other words, https://web.dev/bootstrap.js caused the network request for https://web.dev/default-627898b5.js.
  • The resources below the inspected resource are the dependencies. In the screenshot above, https://web.dev/chunk-f34f99f7.js is a dependency of https://web.dev/default-627898b5.js. In other words, https://web.dev/default-627898b5.js caused the network request for https://web.dev/chunk-f34f99f7.js.

Chromium issue #842488

Highlight the selected network request in the Overview

After you click a network resource in order to inspect it, the Network panel now puts a blue border around that resource in the Overview. This can help you detect if the network request is happening earlier or later than expected.

A screenshot of the Overview pane highlighting the inspected resource.

Chromium issue #988253

URL and path columns in the Network panel

Use the new Path and URL columns in the Network panel to see the absolute path or full URL of each network resource.

A screenshot of the new Path and URL columns in the Network panel.

Right-click the Waterfall table header and select Path or URL to show the new columns.

Chromium issue #993366

Updated User-Agent strings

DevTools supports setting a custom User-Agent string through the Network Conditions tab. The User-Agent string affects the User-Agent HTTP header attached to network resources, and also the value of navigator.userAgent.

The predefined User-Agent strings have been updated to reflect modern browser versions.

A screenshot of the User Agent menu in the Network Conditions tab.

To access Network Conditions, open the Command Menu and run the Show Network Conditions command.

Chromium issue #1029031

Audits panel updates

New configuration UI

The configuration UI has a new, responsive design, and the throttling configuration options have been simplified. See Audits Panel Throttling for more information on the throttling UI changes.

The new configuration UI.

Coverage tab updates

Per-function or per-block coverage modes

The Coverage tab has a new dropdown menu that lets you specify whether code coverage data should be collected per function or per block. Per block coverage is more detailed but also far more expensive to collect. DevTools uses per function coverage by default now.

The coverage mode dropdown menu.

Coverage must now be initiated by a page reload

Toggling code coverage without a page reload has been removed because the coverage data was unreliable. For example, a function can be reported as unused if it was last executed a long time ago and V8's garbage collector has since cleaned it up.

Chromium issue #1004203


Consider Canary

If you're on Mac or Windows, consider using Chrome Canary as your default development browser. Canary gives you access to the latest DevTools features.

Note: Canary is released as soon as it's built, without testing. This means that Canary breaks about once a month. It's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.


Improved WebAssembly debugging in Chrome DevTools

Background

Until recently, the only WebAssembly debugging that Chrome DevTools supported was viewing raw WebAssembly stack traces, and stepping over individual instructions in a disassembled WebAssembly text format.

A screenshot of the previously limited WebAssembly debugging support in 
            Chrome DevTools.

While this works with any WebAssembly module and helps somewhat with debugging small, isolated functions, it’s not very practical for larger apps where the mapping between the disassembled code and your sources is less obvious.

A temporary workaround

To work around this problem, Emscripten and DevTools have temporarily adapted the existing source maps format to WebAssembly. This allows binary offsets in the compiled module to be mapped to original locations in the source files.

A screenshot of the source-maps-powered debugging.

However, source maps were designed for text formats with clear mappings to JavaScript concepts and values, not for binary formats like WebAssembly with arbitrary source languages, type systems, and a linear memory. This made the integration hacky, limited, and not widely supported outside Emscripten.

Enter DWARF

On the other hand, many native languages already have a common debugging format, DWARF, that provides all the necessary information for debuggers to resolve locations, variable names, type layouts, and more.

While there are still some WebAssembly-specific features that need to be added for full compatibility, compilers like Clang and Rust already support emitting DWARF information in WebAssembly modules, which enabled the DevTools team to start using it directly in DevTools.

As a first step, DevTools now supports native source mapping using this information, so you can start debugging Wasm modules produced by any of these compilers without resorting to the disassembled format or having to use any custom scripts.

Instead, you just need to tell your compiler to include debug info like you normally would on other platforms. For example, in Clang this can be done by passing the -g flag during compilation:

clang -g ...sources… -target wasm32 -o out.wasm

You can use the same -g flag in Rust:

rustc -g source.rs --target wasm32-unknown-unknown -o out.wasm

Or, if you’re using Cargo, the debug info will be included by default:

cargo build --target wasm32-unknown-unknown

This new DevTools integration with DWARF already covers support for stepping over the code, setting breakpoints, and resolving stack traces in your source languages.

A screenshot of the new DWARF-powered debugging.

The future

There is still quite a bit of work to do though. For example, on the tooling side, Emscripten (Binaryen) and wasm-pack (wasm-bindgen) don’t support updating DWARF information on transformations they perform yet. For now, they won’t benefit from this integration.

And on the Chrome DevTools side, we’ll be evolving integration more over time to ensure a seamless debugging experience, including:

  • Resolving variable names
  • Pretty-printing types
  • Evaluating expressions in source languages
  • …and much more!

Stay tuned for future updates!

New in Chrome 79

Chrome 79 is rolling out now!

I’m Pete LePage, let’s dive in and see what’s new for developers in Chrome 79!

Maskable Icons

If you’re running Android O or later, and you’ve installed a Progressive Web App, you’ve probably noticed the annoying white circle around the icon.

Thankfully, Chrome 79 now supports maskable icons for installed Progressive Web Apps. You'll need to design your icon to fit within the safe zone - essentially a circle with a diameter that's 80% of the canvas.

Then, in the web app manifest, you’ll need to add a new purpose property to the icon, and set its value to maskable.

{
  ...
  "icons": [
    ...
    {
      "src": "path/to/maskable_icon.png",
      "sizes": "196x196",
      "type": "image/png",
      "purpose": "maskable"
    }
  ]
  ...
}

Tiger Oakes has a great post on CSS Tricks - Maskable Icons: Android Adaptive Icons for Your PWA with all of the details, and has a great tool you can use for testing your icons to make sure they’ll fit.

Web XR

You can now create immersive experiences for smartphones and head-mounted displays with the WebXR Device API.

WebXR enables a whole spectrum of immersive experiences. From using augmented reality to see what a new couch might look like in your home before you buy it, to virtual reality games and 360 degree movies, and more.

To get started with the new API, read Virtual Reality Comes to the Web.

New origin trials

Origin trials provide an opportunity for us to validate experimental features and APIs, and make it possible for you to provide feedback on their usability and effectiveness in broader deployment.

Experimental features are typically only available behind a flag, but when we offer an Origin Trial for a feature, you can register for that origin trial to enable the feature for all users on your origin.

Opting into an origin trial allows you to build demos and prototypes that your beta testing users can try for the duration of the trial without requiring them to flip any special flags in Chrome.

There’s more info on origin trials in the Origin Trials Guide for Web Developers. You can see a list of active origin trials, and sign up for them on the Chrome Origin Trials page.

Wake Lock

One of my biggest pet peeves about Google Slides is that if you leave the deck open on a single slide for too long, the screensaver kicks in. Before you can continue, you need to unlock your computer. Ugh.

But, with the new Wake Lock API, a page can request a lock, and prevent the screen from dimming or the screensaver from kicking in. It’s perfect for Slides, but it’s also helpful for things like recipe sites - where you might want to keep the screen on while you follow the instructions.

To request a wake lock, you need to call navigator.wakeLock.request(), and save the WakeLockSentinel object that it returns.

// The wake lock sentinel.
let wakeLock = null;

// Function that attempts to request a wake lock.
const requestWakeLock = async () => {
  try {
    wakeLock = await navigator.wakeLock.request('screen');
    wakeLock.addEventListener('release', () => {
      console.log('Wake Lock was released');
    });
    console.log('Wake Lock is active');
  } catch (err) {
    console.error(`${err.name}, ${err.message}`);
  }
};

The lock is maintained until the user navigates away from the page, or you call release on the WakeLockSentinel object you saved earlier.

// Function that attempts to release the wake lock.
const releaseWakeLock = async () => {
  if (!wakeLock) {
    return;
  }
  try {
    await wakeLock.release();
    wakeLock = null;
  } catch (err) {
    console.error(`${err.name}, ${err.message}`);
  }
};

More details are at web.dev/wakelock.

rendersubtree attribute

There are times when you don’t want part of the DOM to render immediately. For example scrollers with a large amount of content, or tabbed UIs where only some of the content is visible at any given time.

The new rendersubtree attribute tells the browser it can skip rendering that subtree. This allows the browser to spend more time processing the rest of the page, increasing performance.

When rendersubtree is set to invisible, the element's content is not drawn or hit-tested, allowing for rendering optimizations.

Changing rendersubtree to activatable makes the content visible by removing the invisible attribute and rendering the content.
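As a rough sketch (the attribute was still experimental, and the element selector here is hypothetical), you could toggle it from JavaScript like this:

// Hypothetical long scroller section whose rendering we want to skip for now.
const section = document.querySelector('.long-scroller-section');

// Skip drawing and hit-testing for this subtree.
section.setAttribute('rendersubtree', 'invisible');

// Later, let the content be rendered again.
section.setAttribute('rendersubtree', 'activatable');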

Chrome Dev Summit 2019

If you missed Chrome Dev Summit, all of the talks are on our YouTube channel.

Jake also has a great Twitter thread with all the fun stuff that went on between the talks, including the newest member of our team, Surjiko.

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 79.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you'll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 80 is released, I’ll be right here to tell you -- what’s new in Chrome!

The Chromium Chronicle: ClusterFuzz

Episode 9: December, 2019

by Adrian Taylor in Mountain View

You may find you are asked to fix high-priority security bugs discovered by ClusterFuzz. What is it? Should you take those bugs seriously? How can you help?

Fuzzing flow chart

ClusterFuzz feeds input to Chrome and watches for crashes. Some of those Chrome builds have extra checks turned on, for example AddressSanitizer, which looks for memory safety errors.

ClusterFuzz assigns components based on the crash location, and assigns severity based on the type of crash and whether it happened in a sandboxed process. For example, a heap use-after-free will be high severity, unless it’s in the browser process, in which case it’s critical (no sandbox to limit impact!):

class Foo {
  Widget* widget;
};

void Foo::Bar() {
  delete widget;
  ...
  widget->Activate();  // Bad in the renderer process, worse in the browser process.
}                      // Obviously, real bugs are more subtle. Usually.

ClusterFuzz generates input from fuzzers or from bugs submitted externally. Some fuzzers are powered by libFuzzer, which evolves input to increase code coverage. Some understand the grammar of the input language converted into protobufs. Once ClusterFuzz finds a crash, it will try to minimize the input test case and even bisect to find the offending commit. It finds a lot...

You can help:

  • Be paranoid about object lifetimes & integer overflows.
  • Add new fuzzers, especially when you process untrustworthy data or IPC (see links below, often < 20 lines of code).
  • Fix ClusterFuzz-reported bugs: its severity heuristics can be trusted because they’re based on real-world exploitability: Even a single byte overflow has led to arbitrary code execution by an attacker.

Resources

WebVR 1.1 removed from Chrome

Deprecations and removals in Chrome 78

Disallow Synchronous XMLHTTPRequest() in Page Dismissal

Chrome now disallows synchronous calls to XMLHTTPRequest() during page dismissal when the page is being navigated away from or is closed by the user. This applies to beforeunload, unload, pagehide, and visibilitychange.

To ensure that data is sent to the server when a page unloads, we recommend sendBeacon() or Fetch keep-alive. For now, enterprise users can use the AllowSyncXHRInPageDismissal policy flag and developers can use the origin trial flag allow-sync-xhr-in-page-dismissal to allow synchronous XHR requests during page unload. This is a temporary "opt-out" measure, and we expect to remove this flag in Chrome 82.
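For example, a minimal sketch of reporting analytics data without a synchronous XHR (the /analytics endpoint is a placeholder):

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    const payload = JSON.stringify({ event: 'page-hidden', time: Date.now() });

    // Option 1: queue the data with sendBeacon().
    navigator.sendBeacon('/analytics', payload);

    // Option 2: fetch() with keepalive lets the request outlive the page.
    // fetch('/analytics', { method: 'POST', body: payload, keepalive: true });
  }
});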

For details about this and the alternatives, see Disallowing synchronous XMLHTTPRequest() during page dismissal.

Intent to Remove | Chrome Platform Status | Chromium Bug

FTP support deprecated

The current FTP implementation in Chrome has no support for encrypted connections (FTPS), nor proxies. Usage of FTP in the browser is sufficiently low that it is no longer viable to invest in improving the existing FTP client. In addition, more capable FTP clients are available on all affected platforms.

Chrome 72 removed support for fetching document subresources over FTP and rendering of top-level FTP resources. Currently, navigating to FTP URLs results in showing a directory listing or a download, depending on the type of resource. A bug in Google Chrome 74 and later resulted in dropping support for accessing FTP URLs over HTTP proxies. Proxy support for FTP was removed entirely in Google Chrome 76.

The remaining capabilities of Google Chrome’s FTP implementation are restricted to either displaying a directory listing or downloading a resource over unencrypted connections.

The deprecation timeline is tentatively set as follows:

Chrome 80 (stable in February 2020)

FTP is disabled by default for non-enterprise clients, but may be turned on using either the --enable-ftp or the --enable-features=FtpProtocol command-line flags. Alternatively, it can be turned on using the #enable-ftp option on chrome://flags.

Chrome 81 (stable in March 2020)

FTP is disabled by default for all Chrome installations, but may be turned on using either the --enable-ftp or the --enable-features=FtpProtocol command-line flags.

Chrome 82 (stable in April 2020)

FTP support will be completely removed.

Intent to Remove | Chrome Platform Status | Chromium Bug

Disallow popups during page unload

Pages may no longer use window.open() to open a new page during unload. The Chrome popup blocker already prohibited this, but now it is prohibited whether or not the popup blocker is enabled.

Enterprises can use the AllowPopupsDuringPageUnload policy flag to allow popups during unload. Chrome expects to remove this flag in Chrome 82.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Non-origin-clean ImageBitmap serialization and transferring removed

Errors will now be raised when a script tries to serialize or transfer a non-origin-clean ImageBitmap. A non-origin-clean ImageBitmap is one that contains data from cross-origin images that has not been verified by CORS logic.

Intent to Remove | Chrome Platform Status | Chromium Bug

Protocol handling now requires a secure context

The methods registerProtocolHandler() and unregisterProtocolHandler() now require a secure context. These methods are capable of reconfiguring client state in a way that would allow transmission of potentially sensitive data over a network.

The registerProtocolHandler() method gives a webpage a mechanism to register itself to handle a protocol after a user consents. For example, a web-based email application could register to handle the mailto: scheme. The corresponding unregisterProtocolHandler() method allows a site to abandon its protocol-handling registration.
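For example, a web-based email application served over HTTPS might register (and later abandon) a mailto: handler like this; mail.example.com is a placeholder:

// Both calls must now be made from a secure context.
navigator.registerProtocolHandler(
  'mailto',
  'https://mail.example.com/compose?to=%s',
  'Example Mail' // title argument, still accepted by Chrome at the time of writing
);

navigator.unregisterProtocolHandler('mailto', 'https://mail.example.com/compose?to=%s');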

Intent to Remove | Chrome Platform Status | Chromium Bug

Web Components v0 removed

Web Components v0 are now removed from Chrome. The Web Components v1 APIs are a web platform standard that has shipped in Chrome, Safari, Firefox, and (soon) Edge. For guidance on upgrading, read Web Components update: more time to upgrade to v1 APIs. This removal covers the items listed below.

Custom Elements

Intent to Remove | Chrome Platform Status | Chromium Bug

HTML Imports

Intent to Remove | Chrome Platform Status | Chromium Bug

Shadow DOM

Intent to Remove | Chrome Platform Status | Chromium Bug

Remove -webkit-appearance:button for arbitrary elements

-webkit-appearance: button now works only with <button> and <input> buttons. If button is specified for an unsupported element, the element gets its default appearance. All other -webkit-appearance keywords already have this restriction.

Intent to Remove | Chrome Platform Status | Chromium Bug

Introducing android-browser-helper, a library for building Trusted Web Activities

We have released version 1.0.0 of android-browser-helper, a new Android library for Trusted Web Activities (TWAs). Besides being built on top of the modern Android Jetpack libraries, it makes it easier for developers to use Trusted Web Activities to build their Android applications.

android-browser-helper is now the recommended library for building applications that use Trusted Web Activities.

The library is hosted on the official Google Maven repository, which works out of the box in Android Projects, and is also compatible with AndroidX, which was a common issue with the previous library.

More features and developer experience improvements will be added to this library. This is a short list of what has already been added:

  • Handles opening the content in a browser that supports TWA and, if one is not installed, implements a fallback strategy.
  • Makes the fallback strategy customizable, so developers can control how their application behaves when a browser that supports TWA is not installed. The twa-webview-fallback demo shows how to use a fallback strategy that uses the Android WebView, for example.
  • Makes configuring TWAs that work with multiple origins easier, as illustrated on the twa-multi-domain demo.

The library can be added to an Android application by adding the following dependency to the application's build.gradle:

dependencies {
    //...
    implementation 'com.google.androidbrowserhelper:androidbrowserhelper:1.0.0'
}

Migrating from the custom-tabs-client

Developers who were using the previous custom-tabs-client will have to implement a few changes in their application, when migrating to android-browser-helper.

Fortunately, besides replacing the old library with the new one, those changes mainly involve searching and replacing a few strings throughout AndroidManifest.xml.

Here’s a summary of the names changed:

Old name on custom-tabs-client → New name on android-browser-helper

  • android.support.customtabs.trusted.LauncherActivity → com.google.androidbrowserhelper.trusted.LauncherActivity
  • android.support.v4.content.FileProvider → androidx.core.content.FileProvider
  • android.support.customtabs.trusted.TrustedWebActivityService → com.google.androidbrowserhelper.trusted.DelegationService

The svgomg-twa demo has been updated to use android-browser-helper. This diff shows all the changes required when migrating an existing project from custom-tabs-client to android-browser-helper.

Are we missing anything?

android-browser-helper has the goal of simplifying the development of applications using Trusted Web Activities. The library will continue to evolve as TWAs get more features.

If you are missing a feature in Trusted Web Activities, can think of ways that android-browser-helper could make the development work simpler, or have a question on how to use the library, make sure to pop by the GitHub repository and file an issue.


Multi-Origin Trusted Web Activities

If you are new to Trusted Web Activities (TWAs), you may want to read the TWA Quick Start Guide or the Introduction to TWAs before reading this documentation.


Trusted Web Activities are a new way to integrate your web-app content such as your PWA with your Android app using a protocol based on Custom Tabs.

Off-origin navigation
Figure 1. Off-origin navigation

A Trusted Web Activity needs the origins being opened to be validated using Digital Asset Links, in order to show the content in full-screen.

When a user navigates off the validated origin, Custom Tab UI is shown. The URL bar in the Custom Tab tells the users they are now navigating in a domain outside the application, while also providing the user with an X button that allows them to quickly return to the validated origin.

But it is also common for web apps to create experiences that span multiple origins - an example would be a shopping application with the main experience at www.example.com, while the checkout flow is hosted at checkout.example.com.

In cases like that, showing the Custom Tabs is undesirable, not only because the user is in the same application, but also because the top bar could make the user think they left the application and abandon the checkout.

TWAs allow developers to validate multiple origins, and the user will remain in full-screen when navigating across those origins. As with the main domain, the developer must be able to control each validated origin.

Setting up validation for multiple origins

As in the main origin, the validation is achieved via Digital Asset Links and each domain to be validated needs to have its own assetlinks.json file.

In our example with www.example.com and checkout.example.com, we would have:

  • https://www.example.com/.well-known/assetlinks.json
  • https://checkout.example.com/.well-known/assetlinks.json

Since each domain is getting connected to the same Android application, the assetlinks.json files look exactly the same.

Assuming the package name for the Android application is com.example.twa, both assetlinks.json files would contain something similar to the following:

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.twa",
    "sha256_cert_fingerprints": ["..."]
  }
}]

Note: An application using TWAs can have any number of validated domains, as long as Digital Asset Links are implemented for all of them.

Add multiple origins to the Android Application

On the Android application, the asset_statements declaration needs to be updated to contain all origins that need to be validated:

<string name="asset_statements">
[{
    \"relation\": [\"delegate_permission/common.handle_all_urls\"],
    \"target\": {
        \"namespace\": \"web\",
        \"site\": \"https://www.example.com\"
    }
},
{
    \"relation\": [\"delegate_permission/common.handle_all_urls\"],
    \"target\": {
        \"namespace\": \"web\",
        \"site\": \"https://checkout.example.com\"
    }
}]
</string>

Note: Applications based on the svgomg-twa demo application or llama-pack have the asset_statements declaration inside app/build.gradle. Even though the location of the declaration is different, the JSON content is the same.

Add extra origins to the LauncherActivity

Using the default LauncherActivity

The LauncherActivity that is part of the android-browser-helper support library provides a way to add multiple origins to be validated by configuring the Android project.

First, add a string-array element to the res/values/strings.xml file. Each extra URL to be validated will be inside an item sub-element:

...
<string-array name="additional_trusted_origins">
    <item>https://www.google.com</item>
</string-array>
...

Next, add a new meta-data tag inside the existing activity element that references the LauncherActivity, inside AndroidManifest.xml:

...
<activity android:name="com.google.androidbrowserhelper.trusted.LauncherActivity"
    android:label="@string/app_name">


    <meta-data
        android:name="android.support.customtabs.trusted.ADDITIONAL_TRUSTED_ORIGINS"
        android:value="@array/additional_trusted_origins" />


    ...
</activity>
...

Using a custom LauncherActivity

When using custom code to launch a TWA, adding extra origins can be achieved by calling setAdditionalTrustedOrigins when building the Intent to launch the TWA:

com.example.CustomLauncherActivity

public void launcherWithMultipleOrigins(View view) {
  List<String> origins = Arrays.asList(
      "https://checkout.example.com/"
  );


  TrustedWebActivityIntentBuilder builder = new TrustedWebActivityIntentBuilder(LAUNCH_URI)
      .setAdditionalTrustedOrigins(origins);


  new TwaLauncher(this).launch(builder, null, null);
}

Conclusion

With those steps, the TWA is now ready to support multiple origins. android-browser-helper has a sample application for multi-origin TWAs. Make sure to check it out.

Troubleshooting

Setting up Digital Asset Links has a few moving parts. If the application is still showing the Custom Tabs bar on the top, it’s likely that something is wrong with the configuration.

The TWA Quick Start Guide has a great troubleshooting section on how to debug Digital Asset Link issues.

There’s also the amazing Peter’s Asset Link Tool, which helps debug Digital Asset Links on applications installed on the device.

What's New In DevTools (Chrome 81)

Moto G4 support in Device Mode

After enabling the Device Toolbar you can now simulate the dimensions of a Moto G4 viewport from the Device list.

Simulating a Moto G4 viewport

Click Show Device Frame to show the Moto G4 hardware around the viewport.

Showing the Moto G4 hardware

Related features:

  • Open the Command Menu and run the Capture screenshot command to take a screenshot of the viewport that includes the Moto G4 hardware (after enabling Show Device Frame).
  • Throttle the network and CPU to more accurately simulate a mobile user's web browsing conditions.

Chromium issue #924693

Blocked cookies in the Cookies pane

The Cookies pane in the Application panel now colors blocked cookies with a yellow background.

Blocked cookies in the Cookies pane of the Application panel

See also Debug why a cookie was blocked to learn how to access a similar UI from the Network panel.

Chromium issue #1030258

The Cookies tables in the Network and Application panels now include a Priority column.

Chromium issue #1026879

All cells in the Cookie tables are editable now, except cells in the Size column because that column represents the network size of the cookie, in bytes. See Fields for an explanation of each column.

Editing a cookie value

Right-click a network request and select Copy > Copy as Node.js fetch to get a fetch expression that includes cookie data.

Copy as Node.js fetch
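The copied expression looks roughly like the following; the URL, headers, and cookie values are purely illustrative:

fetch('https://example.com/api/items', {
  headers: {
    'accept': 'application/json',
    'cookie': 'session_id=abc123' // cookie data is included in the Node.js variant
  },
  method: 'GET'
});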

Chromium issue #1029826

More accurate web app manifest icons

Previously, the Manifest pane in the Application panel would perform its own requests in order to display web app manifest icons. DevTools now shows the exact same manifest icon that Chrome uses.

Icons in the Manifest pane

Chromium issue #985402

Hover over CSS content properties to see unescaped values

Hover over the value of a content property to see the unescaped version of the value.

For example, on this demo when you inspect the p::after pseudo-element you see an escaped string in the Styles pane:

The escaped string

When you hover over the content value you see the unescaped string:

The unescaped string

Source map errors in the Console

The Console now tells you when a source map has failed to load or parse.

A source map loading error in the Console

Setting for disabling scrolling past the end of a file

Open Settings and then disable Preferences > Sources > Allow scrolling past end of file to disable the default UI behavior that allows you to scroll well past the end of a file in the Sources panel.

Here's a GIF of the feature.


New in Chrome 80

Chrome 80 is rolling out now, and there’s a ton of new stuff in it for developers!

There’s support for module workers, optional chaining, new origin trials, and much more.

I’m Pete LePage, let’s dive in and see what’s new for developers in Chrome 80!

Module workers

Module workers, a new mode for web workers with the ergonomics and performance benefits of JavaScript modules, are now available. The Worker constructor accepts a new {type: "module"} option, which changes the way scripts are loaded and executed to match <script type="module">.

const worker = new Worker('worker.js', {
  type: 'module'
});

Moving to JavaScript modules also enables the use of dynamic import for lazy-loading code, without blocking execution of the worker. Check out Jason’s post Threading the web with module workers on web.dev for more details.
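For instance, a small sketch of a module worker that lazily imports a hypothetical helper module only when the first message arrives:

// worker.js, loaded with new Worker('worker.js', { type: 'module' })
addEventListener('message', async (event) => {
  // The './process.js' module name is illustrative.
  const { process } = await import('./process.js');
  postMessage(process(event.data));
});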

Optional chaining

Trying to read deeply nested properties in an object can be error-prone, especially if there’s a chance something might not evaluate.

// Error-prone version, could throw.
const nameLength = db.user.name.length;

Checking each value before proceeding easily turns into a deeply nested if statement, or requires a try / catch block.

// Less error-prone, but harder to read.
let nameLength;
if (db && db.user && db.user.name)
  nameLength = db.user.name.length;

Chrome 80 adds support for a new JavaScript feature called optional chaining. With optional chaining, if one of the properties returns a null, or undefined, instead of throwing an error, the whole thing simply returns undefined.

// Still checks for errors and is much more readable.
const nameLength = db?.user?.name?.length;

Check out the Optional Chaining blog post on the V8 blog for all the details!

Origin trial graduations

There are three new capabilities that graduated from Origin Trial to stable, allowing them to be used by any site, without a token.

Periodic background sync

First up is periodic background sync. It periodically synchronizes data in the background, so that when a user opens your installed PWA, they always have the freshest data.
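As a rough sketch of how a page might opt in (the 'latest-news' tag and the interval are illustrative, and the feature generally requires an installed PWA):

// Register a periodic sync once the service worker is ready.
const registration = await navigator.serviceWorker.ready;
if ('periodicSync' in registration) {
  await registration.periodicSync.register('latest-news', {
    minInterval: 24 * 60 * 60 * 1000, // at most roughly once a day
  });
}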

Contact picker

Next up is the Contact Picker API, an on-demand API that allows users to select entries from their contact list and share limited details of the selected entries with a website.

It allows users to share only what they want, when they want, and makes it easier for users to reach and connect with their friends and family.
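Here's a minimal sketch of a call; the requested properties follow the API shape, and the try/catch handles the user canceling the picker:

// Let the user pick contacts and share only selected fields.
const props = ['name', 'email', 'tel'];
const opts = { multiple: true };

try {
  const contacts = await navigator.contacts.select(props, opts);
  console.log(contacts); // e.g. [{ name: [...], email: [...], tel: [...] }]
} catch (err) {
  // The user canceled the picker, or the API isn't available here.
  console.error(err);
}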

Get installed related apps

And finally, the getInstalledRelatedApps() method allows your web app to check whether your native app is installed on a user's device.

One of the most common use cases is deciding whether to promote the installation of your PWA if your native app isn’t installed. Or, you might want to disable some functionality in one app if it’s provided by the other.
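A quick sketch of that promotion check; the showPwaInstallPromotion() helper is hypothetical, and only apps that declare a relationship with your site are reported:

// Only promote installing the PWA when the native app isn't present.
const relatedApps = await navigator.getInstalledRelatedApps();
const nativeAppInstalled = relatedApps.some((app) => app.platform === 'play');

if (!nativeAppInstalled) {
  showPwaInstallPromotion(); // hypothetical helper defined elsewhere
}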

New origin trials

Content indexing API

How do you let users know about content you’ve cached in your PWA? There’s a discovery problem here. Will they know to open your app? Or what content is available?

The Content Indexing API, a new origin trial, allows you to add the URLs and metadata of offline-capable content to a local index that is maintained by the browser and easily visible to the user.

const registration = await navigator.serviceWorker.ready;
await registration.index.add({
  id: 'article-123',
  launchUrl: '/articles/123',
  title: 'Article title',
  description: 'Amazing article about things!',
  icons: [{
    src: '/img/article-123.png',
    sizes: '64x64',
    type: 'image/png',
  }],
});

To add something to the index, I need to get the service worker registration, then call index.add, and provide metadata about the content.

Once the index is populated, it’s shown in a dedicated area of Chrome for Android’s Downloads page. Check out Jeff’s post Indexing your offline-capable pages with the Content Indexing API on web.dev for complete details.

Notification triggers

Notifications are a critical part of many apps. But, push notifications are only as reliable as the network you’re connected to. While that works in most cases, it sometimes breaks. For example, if a calendar reminder notifying you of an important event doesn’t come through because you’re in airplane mode, you might miss the event.

Notification Triggers let you schedule notifications in advance, so that the operating system will deliver the notification at the right time - even if there is no network connectivity, or the device is in battery saver mode.

// Example values; in a real app the title, tag, and timestamp come from your data.
const title = 'Calendar reminder';
const tag = 'reminder-123';
const timestamp = Date.now();

const swReg = await navigator.serviceWorker.getRegistration();
swReg.showNotification(title, {
  tag: tag,
  body: "This notification was scheduled 30 seconds ago",
  showTrigger: new TimestampTrigger(timestamp + 30 * 1000)
});

To schedule a notification, call showNotification on the service worker registration. In the notification options, add a showTrigger property with a TimestampTrigger. Then, when the time arrives, the browser will show the notification.

The origin trial is planned to run through Chrome 83, so check out Tom’s Notification Triggers post on web.dev for complete details.

Other origin trials

There are a few other origin trials starting in Chrome 80:

  • Web Serial
  • The ability for PWAs to register as file handlers
  • New properties for the contact picker

Check https://developers.chrome.com/origintrials/#/trials/active for a complete list of features in origin trial.

And more

Of course, there’s plenty more!

  • You can now link directly to text fragments on a page, by using #:~:text=something. Chrome will scroll to and highlight the first instance of that text fragment. For example https://en.wikipedia.org/wiki/Rickrolling#:~:text=New%20York
  • Setting display: minimal-ui on a Desktop PWA adds a back and reload button to the title bar of the installed PWA.
  • And Chrome now supports using SVG images as favicons.

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 80.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 81 is released, I’ll be right here to tell you -- what’s new in Chrome!



The Chromium Chronicle: Catching UI Regressions with Pixel Tests

Episode 10: January, 2020

by Sven Zheng in Bellevue, WA

Chrome’s testing strategy relies heavily on automated functional correctness tests and manual testing, but neither of these reliably catch minor UI regressions. Use pixel tests to automate testing your desktop browser UI.

When writing a pixel test, avoid flakiness by (1) disabling animation, (2) using mock data, and (3) testing the minimum possible surface area.

Here is a sample image used to verify pixel correctness for the omnibox:

Omnibox image used for pixel comparison.

And the code to verify the browser matches this image:


IN_PROC_BROWSER_TEST_F(SkiaGoldDemoPixelTest, TestOmnibox) {
  // Always disable animation for stability.
  ui::ScopedAnimationDurationScaleMode disable_animation(
      ui::ScopedAnimationDurationScaleMode::ZERO_DURATION);
  GURL url("chrome://bookmarks");
  AddTabAtIndex(0, url, ui::PageTransition::PAGE_TRANSITION_FIRST);
  auto* const browser_view = BrowserView::GetBrowserViewForBrowser(browser());
  // CompareScreenshot() takes a screenshot and compares it with the golden image,
  // which was previously human-approved, is stored server-side, and is managed
  // by Skia Gold. If any pixels differ, the test will fail and output a link
  // for the author to triage the new image.
  bool ret = GetPixelDiff().CompareScreenshot("omnibox",
      browser_view->GetLocationBarView());
  EXPECT_TRUE(ret);
}

This code lives at chrome/test/pixel/demo/skia_gold_demo_pixeltest.cc. The relevant headers are skia_gold_pixel_diff.h for unit tests and browser_skia_gold_pixel_diff.h for browser tests.

The pixel diff and approval process is powered by Skia Gold. Skia Gold pixel tests provide a visual approval workflow, and allow developers to accept small flakes by approving multiple gold images.

Currently the test suite is running on the Windows FYI bot. Browser tests and Views unit tests are supported.


Adding notification permission data to the Chrome User Experience Report

Chrome 80 introduced quieter permission UI for notifications. To help site owners understand notification permission metrics, we’re adding this data to the Chrome User Experience Report (CrUX) in the 202001 dataset, released on February 11, 2020. This will allow site owners to gain a better understanding of typical user notification permission responses for their sites and comparable sites in their category.

CrUX only provides a high level summary of notification permission request Accept, Block, Ignore, and Dismiss rates. We recommend you augment this data with detailed analytics from your preferred analytics platform.

About CrUX notification permission data

The CrUX data format and methodology is described in detail in the developer documentation, and you should review considerations on population and analysis best practices. Because reported data is only from opt-in users, there may be variance between data in the CrUX dataset and data you collect from your own analytics.

When a notification permission is requested, Chrome shows users a prompt. Users can actively or passively take one of four actions, described below.

  • Allow: The user explicitly allows the website to show them notifications.
  • Block: The user explicitly disallows the website from showing them notifications.
  • Dismiss: The user closes the permission prompt without giving an explicit response.
  • Ignore: The user does not interact with the prompt at all.

The CrUX dataset includes data for each of these user actions as a percentage of responses.

How to interpret your data

Block and Accept rates are the two most important metrics. As described in the quieter notification permissions blog post, Chrome will automatically enroll sites with very low Accept rates into the quieter permissions UI. Block rate is also a strong signal: when a user clicks Block, they have sent a clear message that they are not interested in receiving the site’s notifications, not just at that moment but at any time. Most often this means that the user does not understand the intended use of the notifications, does not see the value of the product or service, or has not established trust with your website. Either a low Accept rate or a high Block rate is a clear indicator that the website should review the recommended patterns section in this article.

It is normal and expected that different types of sites will have different Accept and Block rates. For example, a chat app or email app has a very strong use case and we could expect Accept rates to be quite high. It’s also normal that rates for the same app may vary significantly between desktop and mobile, as the use cases can be different and users may have a strong preference for notification on one type of device over the other.

As more users enroll in quieter notifications UI we expect that Ignore rates will increase relative to other metrics. You should view this trend as normal and expected.

Let your users take the initiative. Integrate toggles or buttons into your website’s user interface and allow users to turn on notifications at their own pace. Only actively prompt for notifications when the benefit is obvious from the context. For example, on an ecommerce site an order delivery notification is an obvious value add to the user and sites asking for permission for this purpose have very high Accept rates.

Avoid requesting the notification permission immediately after a user lands on the site. The user’s browsing experience is interrupted without context as to why notifications are needed or useful to them.

Querying the dataset

Beginning with the 202001 CrUX dataset, you can access notification permission data by querying the experimental.permission.notifications field.

SELECT
  SUM(experimental.permission.notifications.accept) AS accept,
  SUM(experimental.permission.notifications.deny) AS deny,
  SUM(experimental.permission.notifications.ignore) AS `ignore`,
  SUM(experimental.permission.notifications.dismiss) AS dismiss
FROM
  `chrome-ux-report.all.202001`
WHERE
  origin = 'https://news.google.com'

Note: Because "ignore" is a reserved keyword in BigQuery, we have to enclose it in backticks so it's interpreted as a field name.

In this example, we're querying the notification permission data for Google News. We use the SUM function to add up the permission rates for each dimension (form factor and effective connection type) so we get an origin-wide view.

accept deny ignore dismiss
0.8231 0.0476 0.0502 0.0791

Pie chart representing accept rates

The results show that 82.3% of users accept the notification permission prompt, while 4.8% deny, 5.0% ignore, and 7.9% dismiss it.

Learn more about using CrUX on BigQuery and browse the CrUX Cookbook for more example queries.

Feedback

For any questions, or to share your thoughts/feedback about the notification permission data in CrUX, you can reach us on the CrUX support forum or @ChromeUXReport on Twitter.



Trusted Web Activities, the Lay of the Land

There’s a fair amount of ecosystem around Trusted Web Activities, and it can be quite difficult to see how everything relates and what you should use. This article hopes to address that.

If you are new to Trusted Web Activities or just looking for the recommended set of tools you should be using today, here’s what you need to be aware of:

  • llama-pack: a NodeJS tool that allows developers to create and build an Android APK that wraps an existing PWA. The generated application is powered by TWAs, but this is transparent to the developer. No Android development experience is required. Check the llama-pack documentation to get started.
  • android-browser-helper: an Android Library that encapsulates the TWA protocol. Recommended for developers who are familiar with Android development and want to use TWAs as one of the Activities in their Android App or make customisations that are not supported by llama-pack. To get started with android-browser-helper, check the documentation and our demos.

The next section gives a brief summary of all the projects in relation to each other. Finally (for the really curious) there’s a history section to show you how we got here and where we’re planning to go in the near future.

An overview of the libraries

Here’s a short, single sentence summary of each of the libraries you may end up using:

  • androidx.browser, an Android library for interacting with the browser installed on the user’s device.
  • The Android Browser Helper, a library building on androidx.browser for Trusted Web Activity clients providing convenience methods and sensible defaults.
  • Llama-pack, a tool to create Trusted Web Activities from PWAs without touching any Java code.

In addition, each of these libraries and tools replaces an older one, as described in the History section below.

History

The Android Support Library

The Android Support Library extends the Android platform with new APIs and compatibility features. It is split across multiple packages, with the Custom Tabs Support Library containing functionality for interacting with browsers on the user’s system. Development of the Custom Tabs Support Library was primarily done in the custom-tabs-client GitHub repo, with the changes being upstreamed back into the Android Support Library.

A Custom Tab is an Android Activity that uses a browser to display a web page. The primary benefit for the developer is that it can be themed and has a close button, so the user still remains in the developer’s app (instead of leaving the app and going to the full browsing experience). As an Android API, Custom Tabs can be supported by any browser and will use the user’s default browser (although this can be overridden by developers).

Because Trusted Web Activities are built on top of Custom Tabs, they started their life in this custom-tabs-client library. Trusted Web Activities remove the Custom Tabs top bar when the user is browsing a site owned by the app’s developer. This opens the doors to seamless integration of your website within a native Android app, and can be used to create apps where all functionality is provided by the web.

AndroidX

The Android Support Library was later rebranded as AndroidX, which itself is part of a larger effort to improve developer experience called JetPack. So, Custom Tabs and Trusted Web Activities had to move from the Custom Tabs Support Library to the new androidx.browser.

Some of the code that we had written in custom-tabs-client was appropriate for a library of Trusted Web Activity helper classes, but not for an Android API. Code dealing with checking for out of date Chrome versions and prompting the user to update or making decisions about how data should be stored could not move into AndroidX. Therefore, we created an alternative library to contain these parts of custom-tabs-client that couldn’t go into androidx.browser, and so the Android Browser Helper was born.

The Android Browser Helper was created to contain code that can be specific to browsers (not just Chrome, we’re open to code specifically for other browsers) and can make concrete decisions that libraries shouldn’t. We took this opportunity to generally separate the roles of these two libraries:

  • androidx.browser contains the basic building blocks for interacting with browsers on the user’s system.
  • The Android Browser Helper contains convenient to use and sensible default implementations.

Bootstrapping

Developers are busy people, with a lot of work to do and deadlines to meet. To help with this, we created two tools to let users bootstrap their TWA.

The first (and oldest) is svgomg-twa, a GitHub-hosted Android project that launches a Trusted Web Activity. It was originally designed to be a demo project but evolved into more of a template. Users can clone the repo and modify the build.gradle file to point to their own website, then build it and produce a Trusted Web Activity without touching any Java code. (Getting the Digital Asset Links verified does require more effort; read more here.)

svgomg-twa started out depending on custom-tabs-client, but then moved over to the Android Browser Helper (and transitively androidx.browser).

The newest and shiniest tool is llama-pack, a Node.js tool that will take your Web App Manifest and generate a Trusted Web Activity for you. This is the easiest way to create a Trusted Web Activity from an existing PWA and doesn’t require any Android development knowledge.

Near Future

We will be deprecating svgomg-twa for two reasons:

  • llama-pack essentially generates a filled out svgomg-twa for a developer. It does this interactively and can take the configuration from a Web App manifest (which a PWA will likely already have).
  • If developers want a reference for how to start their own Trusted Web Activity project from scratch, they can look at the Android Browser Helper's demos directory.

New developers should use llama-pack to generate their project instead. If you’re already using svgomg-twa and have made some heavy modifications, you’ll be fine to continue doing so, but won’t get updates.

We plan to make llama-pack as capable as possible, so if there’s an obvious feature missing or you come across a bug, feel free to create an issue.



Deprecations and removals in Chrome 81

Deprecation and removal of "basic-card" support in Payment Handler

This version of Chrome removes the basic-card polyfill for Payment Request API in iOS Chrome. As a result, the Payment Request API is temporarily disabled in iOS Chrome. For full details, see Rethinking Payment Request for iOS.

Intent to Remove | Chrome Platform Status | Chromium Bug

Remove supportedType field from BasicCardRequest

Specifying the "supportedTypes": [type] parameter for the "basic-card" payment method shows cards of only the requested type, which is one of "credit", "debit", or "prepaid".

The card type parameter has been removed from the spec and is now removed from Chrome, because of the difficulty of accurate card type determination. Merchants today must check card type with their PSP, because they cannot trust the card type filter in the browser:

  • Only issuing banks know the card type with certainty and downloadable card type databases have low accuracy, so it's impossible to know accurately the type of the cards stored locally in the browser.
  • The "basic-card" payment method in Chrome no longer shows cards from Google Pay, which may have connections with issuing banks.

Intent to Remove | Chrome Platform Status | Chromium Bug

Remove the <discard> element

Chrome 81 removes the <discard> element. It is only implemented in Chromium, and is thus not possible to use interoperably. For most use cases it can be replaced with a combination of animation of the display property and a removal (JavaScript) callback/event handler.
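As a rough sketch of that replacement approach (not an official recipe), a script can listen for the end of an SVG fade-out animation and remove the target element itself; the fadeOut id is hypothetical:

// Remove an SVG element once its fade-out animation finishes,
// instead of relying on the removed <discard> element.
const animation = document.querySelector('#fadeOut'); // an <animate> element
animation.addEventListener('endEvent', () => {
  animation.targetElement.remove();
});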

Intent to Remove | Chrome Platform Status | Chromium Bug

Remove TLS 1.0 and TLS 1.1

TLS (Transport Layer Security) is the protocol which secures HTTPS. It has a long history stretching back to the nearly twenty-year-old TLS 1.0 and its even older predecessor, SSL. Both TLS 1.0 and 1.1 have a number of weaknesses.

  • TLS 1.0 and 1.1 use MD5 and SHA-1, both weak hashes, in the transcript hash for the Finished message.
  • TLS 1.0 and 1.1 use MD5 and SHA-1 in the server signature. (Note: this is not the signature in the certificate.)
  • TLS 1.0 and 1.1 only support RC4 and CBC ciphers. RC4 is broken and has since been removed. TLS’s CBC mode construction is flawed and was vulnerable to attacks.
  • TLS 1.0’s CBC ciphers additionally construct their initialization vectors incorrectly.
  • TLS 1.0 is no longer PCI-DSS compliant.

Supporting TLS 1.2 is a prerequisite to avoiding the above problems. The TLS working group has deprecated TLS 1.0 and 1.1. Chrome has now also deprecated these protocols.

Intent to Remove | Chromestatus Tracker | Chromium Bug

TLS 1.3 downgrade hardening bypass

TLS 1.3 includes a backwards-compatible hardening measure to strengthen downgrade protections. However, when we shipped TLS 1.3 last year, we had to partially disable this measure due to incompatibilities with some non-compliant TLS-terminating proxies. Chrome currently implements the hardening measure for certificates which chain up to known roots, but allows a bypass for certificates chaining up to unknown roots. We intend to enable it for all connections.

Downgrade protection mitigates the security impact of the various legacy options we retain for compatibility. This means users' connections are more secure and, when security vulnerabilities are discovered, it is less of a scramble to respond to them. (That, in turn, means fewer broken sites for users down the road.) This also aligns with RFC 8446.

Intent to Remove | Chrome Platform Status | Chromium Bug



Passing Information to a Trusted Web Activity using Query Parameters

When using Trusted Web Activities in their applications, developers may need to pass information from the native part of the application into the Progressive Web App (PWA).

A common use-case for this is implementing custom analytics segmentations to measure installations and sessions started from the Trusted Web Activity. Query parameters can be added to the launch URL to implement this.

Modifying the start URL

If the parameter being passed to the PWA will remain the same across users and launches, the parameter can be appended directly to the launch URL. An example of this usage is when developers want to measure the number of navigation sessions created from a Trusted Web Activity.

Using llama-pack

Llama Pack 🦙 is a tool created to help developers create a project for an Android application that launches an existing PWA using a Trusted Web Activity. It contains both a library and a Command Line Interface (CLI).

Creating a new project:

When using the llama-pack CLI, a project is initialized with the init command, which creates default values from a Web App Manifest provided as a parameter:

llama-pack init --manifest https://material.money/manifest.json

The wizard will use the start_url from the Web App Manifest as the default and will ask users to confirm the value, giving developers the chance to add extra parameters to the URL used to start the Progressive Web App.

Showing the llama-pack CLI output

Modifying an existing project

When llama-pack generates a project, information for that particular project is stored in a file called twa-manifest.json, in the project folder. To modify the start URL for an existing project, developers need to edit that file:

{
  ...

  "startUrl": "/?utm_source=trusted-web-activity",
  ...

}

Then, re-generate the project files and apply the new start URL:

llama-pack update

Using Android Studio

When using Android Studio and the default LauncherActivity, the startUrl is defined as a meta tag inside AndroidManifest.xml, and we can change the URL used to launch the Trusted Web Activity by modifying it:

<activity android:name="com.google.androidbrowserhelper.trusted.LauncherActivity"
    android:label="@string/app_name">
    ...
    <meta-data android:name="android.support.customtabs.trusted.DEFAULT_URL"
        android:value="https://svgomg.firebaseapp.com/?utm_source=trusted-web-activity" />
    ...
</activity>

Note: llama-pack takes care of ensuring the URLs across the application belong to the same origin and, therefore, uses a relative URL for the start URL. When modifying AndroidManifest.xml, the entire URL, including scheme and domain, must be used.

Modifying the start URL dynamically

In other cases, developers may want to create parameters that change across users or sessions. In most cases, this involves collecting details from the Android side of the application and passing them to the Progressive Web App.

Step 1: Create a custom LauncherActivity:

public class CustomQueryStringLauncherActivity extends LauncherActivity {
    private String getDynamicParameterValue() {
        return String.valueOf((int)(Math.random() * 1000));
    }

    @Override
    protected Uri getLaunchingUrl() {
        // Get the original launch Url.
        Uri uri = super.getLaunchingUrl();

        // Get the value we want to use for the parameter value
        String customParameterValue = getDynamicParameterValue();

        // Append the extra parameter to the launch Url
        return uri
                .buildUpon()
                .appendQueryParameter("my_parameter", customParameterValue)
                .build();
    }
}

Step 2: Modify the AndroidManifest.xml to use the custom LauncherActivity

<activity android:name="com.myapp.CustomQueryStringLauncherActivity"
    android:label="@string/app_name">
    ...
    <meta-data android:name="android.support.customtabs.trusted.DEFAULT_URL"
        android:value="https://squoosh.app/?utm_source=trusted-web-activity" />
    ...
</activity>

Note: llama-pack doesn’t support dynamically generating query parameters at this moment. We’re interested in hearing from developers who have the need for this feature. Check out the llama-pack issue tracker and tell us about your use-case.

Conclusion

Passing information from the native part to the web part of an application can be achieved by using query parameters. When a parameter is added to the query string, it will be accessible to scripts running on the page and may also become part of the referrer when users navigate to a different page or when the developer implements a share action.

Developers must be aware of those implications, and can mitigate them by using link rel=noreferrer or by cleaning up the URL using the page location API.
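As an illustration (the parameter name matches the utm_source example above, and history.replaceState is a standard way to rewrite the address bar), the PWA could read the parameter and then remove it:

// Read the parameter passed in by the Trusted Web Activity.
const url = new URL(window.location.href);
const source = url.searchParams.get('utm_source'); // e.g. 'trusted-web-activity'

// Optionally clean up the address bar so the parameter doesn't end up in
// copied URLs or in referrers for later navigations.
url.searchParams.delete('utm_source');
window.history.replaceState(null, '', url.toString());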

The Trusted Web Activity protocol doesn’t currently provide a mechanism to exchange messages with the native part of the application after the web part is invoked.

We believe existing or upcoming Web Platform APIs enable most use cases needed by developers. If you are looking for new or upcoming Web APIs, check out the New Capabilities status page.



What's New In DevTools (Chrome 80)

Support for let and class redeclarations in the Console

The Console now supports redeclarations of let and class statements. The inability to redeclare was a common annoyance for web developers who use the Console to experiment with new JavaScript code.

For example, previously, when redeclaring a local variable with let, the Console would throw an error:

A screenshot of the Console in Chrome 78 showing that the let redeclaration fails.

Now, the Console allows the redeclaration:

A screenshot of the Console in Chrome 80 showing that the let redeclaration succeeds.

Chromium issue #1004193

Improved WebAssembly debugging

DevTools has started to support the DWARF Debugging Standard, which means increased support for stepping over code, setting breakpoints, and resolving stack traces in your source languages within DevTools. Check out Improved WebAssembly debugging in Chrome DevTools for the full story.

A screenshot of the new DWARF-powered WebAssembly debugging.

Network panel updates

Request Initiator Chains in the Initiator tab

You can now view the initiators and dependencies of a network request as a nested list. This can help you understand why a resource was requested, or what network activity a certain resource (such as a script) caused.

A screenshot of a Request Initiator Chain in the Initiator tab

After logging network activity in the Network panel, click a resource and then go to the Initiator tab to view its Request Initiator Chain:

  • The inspected resource is bold. In the screenshot above, https://web.dev/default-627898b5.js is the inspected resource.
  • The resources above the inspected resource are the initiators. In the screenshot above, https://web.dev/bootstrap.js is the initiator of https://web.dev/default-627898b5.js. In other words, https://web.dev/bootstrap.js caused the network request for https://web.dev/default-627898b5.js.
  • The resources below the inspected resource are the dependencies. In the screenshot above, https://web.dev/chunk-f34f99f7.js is a dependency of https://web.dev/default-627898b5.js. In other words, https://web.dev/default-627898b5.js caused the network request for https://web.dev/chunk-f34f99f7.js.

Chromium issue #842488

Highlight the selected network request in the Overview

After you click a network resource in order to inspect it, the Network panel now puts a blue border around that resource in the Overview. This can help you detect if the network request is happening earlier or later than expected.

A screenshot of the Overview pane highlighting the inspected resource.

Chromium issue #988253

URL and path columns in the Network panel

Use the new Path and URL columns in the Network panel to see the absolute path or full URL of each network resource.

A screenshot of the new Path and URL columns in the Network panel.

Right-click the Waterfall table header and select Path or URL to show the new columns.

Chromium issue #993366

Updated User-Agent strings

DevTools supports setting a custom User-Agent string through the Network Conditions tab. The User-Agent string affects the User-Agent HTTP header attached to network resources, and also the value of navigator.userAgent.

The predefined User-Agent strings have been updated to reflect modern browser versions.

A screenshot of the User Agent menu in the Network Conditions tab.

To access Network Conditions, open the Command Menu and run the Show Network Conditions command.

Chromium issue #1029031

Audits panel updates

New configuration UI

The configuration UI has a new, responsive design, and the throttling configuration options have been simplified. See Audits Panel Throttling for more information on the throttling UI changes.

The new configuration UI.

Coverage tab updates

Per-function or per-block coverage modes

The Coverage tab has a new dropdown menu that lets you specify whether code coverage data should be collected per function or per block. Per-block coverage is more detailed but also far more expensive to collect. DevTools now uses per-function coverage by default.

The coverage mode dropdown menu.

Coverage must now be initiated by a page reload

Toggling code coverage without a page reload has been removed because the coverage data was unreliable. For example, a function could be reported as unused if it was executed a long time ago and V8's garbage collector had since cleaned it up.

Chromium issue #1004203



New in Chrome 79

Chrome 79 is rolling out now!

I’m Pete LePage, let’s dive in and see what’s new for developers in Chrome 79!

Maskable Icons

If you’re running Android O or later, and you’ve installed a Progressive Web App, you’ve probably noticed the annoying white circle around the icon.

Thankfully, Chrome 79 now supports maskable icons for installed Progressive Web Apps. You’ll need to design your icon to fit within the safe zone - essentially a circle with a diameter that’s 80% of the canvas.

Then, in the web app manifest, you’ll need to add a new purpose property to the icon, and set its value to maskable.

{
  ...
  "icons": [
    ...
    {
      "src": "path/to/maskable_icon.png",
      "sizes": "196x196",
      "type": "image/png",
      "purpose": "maskable"
    }
  ]
  ...
}

Tiger Oakes has a great post on CSS-Tricks - Maskable Icons: Android Adaptive Icons for Your PWA - with all of the details, and a tool you can use to test your icons and make sure they’ll fit.

Web XR

You can now create immersive experiences for smartphones and head-mounted displays with the WebXR Device API.

WebXR enables a whole spectrum of immersive experiences. From using augmented reality to see what a new couch might look like in your home before you buy it, to virtual reality games and 360 degree movies, and more.

To get started with the new API, read Virtual Reality Comes to the Web.

New origin trials

Origin trials provide an opportunity for us to validate experimental features and APIs, and make it possible for you to provide feedback on their usability and effectiveness in broader deployment.

Experimental features are typically only available behind a flag, but when we offer an Origin Trial for a feature, you can register for that origin trial to enable the feature for all users on your origin.

Opting into an origin trial allows you to build demos and prototypes that your beta testing users can try for the duration of the trial without requiring them to flip any special flags in Chrome.

There’s more info on origin trials in the Origin Trials Guide for Web Developers. You can see a list of active origin trials, and sign up for them on the Chrome Origin Trials page.

Wake Lock

One of my biggest pet peeves about Google Slides is that if you leave the deck open on a single slide for too long, the screensaver kicks in. Before you can continue, you need to unlock your computer. Ugh.

But, with the new Wake Lock API, a page can request a lock, and prevent the screen from dimming or the screensaver from kicking in. It’s perfect for Slides, but it’s also helpful for things like recipe sites - where you might want to keep the screen on while you follow the instructions.

To request a wake lock, you need to call navigator.wakeLock.request(), and save the WakeLockSentinel object that it returns.

// The wake lock sentinel.
let wakeLock = null;

// Function that attempts to request a wake lock.
const requestWakeLock = async () => {
  try {
    wakeLock = await navigator.wakeLock.request('screen');
    wakeLock.addEventListener('release', () => {
      console.log('Wake Lock was released');
    });
    console.log('Wake Lock is active');
  } catch (err) {
    console.error(`${err.name}, ${err.message}`);
  }
};

The lock is maintained until the user navigates away from the page, or you call release on the WakeLockSentinel object you saved earlier.

// Function that attempts to release the wake lock.
const releaseWakeLock = async () => {
  if (!wakeLock) {
    return;
  }
  try {
    await wakeLock.release();
    wakeLock = null;
  } catch (err) {
    console.error(`${err.name}, ${err.message}`);
  }
};

More details are at web.dev/wakelock.

rendersubtree attribute

There are times when you don’t want part of the DOM to render immediately. For example scrollers with a large amount of content, or tabbed UIs where only some of the content is visible at any given time.

The new rendersubtree attribute tells the browser it can skip rendering that subtree. This allows the browser to spend more time processing the rest of the page, increasing performance.

When rendersubtree is set to invisible, the element's content is not drawn or hit-tested, allowing for rendering optimizations.

Changing rendersubtree to activatable makes the content visible by removing the invisible attribute and rendering the content.
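The attribute was experimental at the time, so treat the following as an illustrative sketch only, following the description above; the #archive element id is hypothetical:

// Skip rendering work for a large section that isn't visible yet.
const archive = document.querySelector('#archive');
archive.setAttribute('rendersubtree', 'invisible');

// Later, when the user opens this section, allow it to be rendered again.
archive.setAttribute('rendersubtree', 'activatable');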

Chrome Dev Summit 2019

If you missed Chrome Dev Summit, all of the talks are on our YouTube channel.

Jake also has a great Twitter thread with all the fun stuff that went on between the talks, including the newest member of our team, Surjiko.

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 79.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 80 is released, I’ll be right here to tell you -- what’s new in Chrome!

