
Audio/Video Updates in Chrome 75


Chrome's media capabilities were updated in version 75. In this article, I'll discuss those new features which include:

  • Predicting whether playback will be smooth and power efficient for encrypted media.
  • Support for the video element's playsInline attribute hint.

Encrypted Media: Decoding Info

Since Chrome 66, web developers have been able to use Decoding Info to query the browser about the clear content decoding abilities of the device based on information such as the codecs, profile, resolution, bitrates, etc. It indicates whether the playback will be smooth (timely) and power efficient based on previous playback statistics recorded by the browser.

The Media Capabilities API specification, defining Decoding Info, has since been updated to handle encrypted media configurations as well so that websites using encrypted media (EME) can select the optimal media streams.

In a nutshell, here’s how Decoding Info for EME works. Give the official sample a try.

const encryptedMediaConfig = {
  type: 'media-source', // or 'file'
  video: {
    contentType: 'video/webm; codecs="vp09.00.10.08"',
    width: 1920,
    height: 1080,
    bitrate: 2646242, // number of bits used to encode a second of video
    framerate: '25' // number of frames used in one second
  },
  keySystemConfiguration: {
    keySystem: 'com.widevine.alpha',
    videoRobustness: 'SW_SECURE_DECODE' // Widevine L3
  }
};

navigator.mediaCapabilities.decodingInfo(encryptedMediaConfig).then(result => {
  if (!result.supported) {
    console.log('Argh! This encrypted media configuration is not supported.');
    return;
  }

  if (!result.keySystemAccess) {
    console.log('Argh! Encrypted media support is not available.')
    return;
  }

  console.log('This encrypted media configuration is supported.');
  console.log('Playback should be' +
      (result.smooth ? '' : ' NOT') + ' smooth and' +
      (result.powerEfficient ? '' : ' NOT') + ' power efficient.');

  // TODO: Use `result.keySystemAccess.createMediaKeys()` to setup EME playback.
});

EME playbacks have specialized decoding and rendering code paths, meaning different codec support and performance compared to clear playbacks. Hence a new keySystemConfiguration key must be set in the media configuration object passed to navigator.mediaCapabilities.decodingInfo(). The value of this key is a dictionary that holds a number of well-known EME types. This replicates the inputs provided to EME's requestMediaKeySystemAccess() with one major difference: sequences of inputs provided to requestMediaKeySystemAccess() are flattened to a single value wherever the intent of the sequence was to have requestMediaKeySystemAccess() choose a subset it supports.

Decoding Info describes the quality (smoothness and power efficiency) of support for a single pair of audio and video streams without making a decision for the caller. Callers should still order media configurations as they do with requestMediaKeySystemAccess(). Only now they walk the list themselves.

navigator.mediaCapabilities.decodingInfo() returns a promise that resolves asynchronously with an object containing three booleans: supported, smooth, and powerEfficient. However, when a keySystemConfiguration key is set and supported is true, a MediaKeySystemAccess object named keySystemAccess is returned as well. It can be used to request media keys and set up encrypted media playback. Here’s an example:

// Like rMSKA(), orderedMediaConfigs is ordered from most to least wanted.
const capabilitiesPromises = orderedMediaConfigs.map(mediaConfig =>
  navigator.mediaCapabilities.decodingInfo(mediaConfig)
);

// Assume this app wants a supported and smooth media playback.
let bestConfig = null;
for await (const result of capabilitiesPromises) {
  if (result.supported && result.smooth) {
    bestConfig = result;
    break;
  }
}

if (bestConfig) {
  const mediaKeys = await bestConfig.keySystemAccess.createMediaKeys();
  // TODO: rest of EME path as-is
} else {
  // Argh! No smooth configs found.
  // TODO: Maybe choose the lowest resolution and framerate available.
}

Note that Decoding Info for encrypted media requires HTTPS.

Moreover, be aware that it may trigger a user prompt on Android and Chrome OS in the same way as requestMediaKeySystemAccess(). It won’t show more prompts than requestMediaKeySystemAccess() though, in spite of requiring more calls to set up encrypted media playback.

Protected content prompt
Figure 1. Protected content prompt

Dogfood: To get feedback from web developers, this feature is available as an Origin Trial in Chrome 75. You will need to request a token, so that the feature is automatically enabled for your origin for a limited period of time.

Intent to Experiment | Chromestatus Tracker | Chromium Bug

HTMLVideoElement.playsInline

Chrome now supports the playsInline boolean attribute. If present, it hints to the browser that the video ought to be displayed "inline" in the document by default, constrained to the element's playback area.

Similarly to Safari, where video elements on iPhone automatically enter fullscreen mode when playback begins, this hint allows some embedders to offer an auto-fullscreen video playback experience. Web developers can use it to opt out of that experience if needed.

<video playsinline></video>

Because Chrome on Android and desktop doesn’t implement auto-fullscreen, the playsInline video element attribute hint is not currently used there.

Intent to Ship | Chromestatus Tracker | Chromium Bug


The Chromium Chronicle: Test your Web Platform Features with WPT


Episode 4: July 2019

by Robert in Waterloo

If you work on Blink, you might know of web_tests (formerly LayoutTests). web-platform-tests (WPT) lives inside web_tests/external/wpt. WPT is the preferred way to test web-exposed features as it is shared with other browsers via GitHub. It has two main types of tests: reftests and testharness.js tests.

reftests take and compare screenshots of two pages. By default, screenshots are taken after the load event is fired; if you add a reftest-wait class to the <html> element, the screenshot will be taken when the class is removed. Disabled tests mean diminishing test coverage. Be aware of font-related flakiness; use the Ahem font when possible.
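Here is a minimal sketch of the reftest-wait mechanism described above; the async step is illustrative, not from a real test. The test page's <html> element carries class="reftest-wait", and removing the class signals that the screenshot can be taken.

// Illustrative only: defer the reftest screenshot until async rendering is done.
async function markTestReady() {
  await document.fonts.ready;  // e.g. wait for the Ahem font to be available
  requestAnimationFrame(() => {
    // Removing reftest-wait tells the harness it can now take the screenshot.
    document.documentElement.classList.remove('reftest-wait');
  });
}
markTestReady();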

testharness.js is a JavaScript framework for testing anything except rendering. When writing testharness.js tests, pay attention to timing, and remember to clean up global state.

Flaky timeout & potential leaked states:

<script>
promise_test(async t => {
  assert_equals(await slowLocalStorageTest(), "expected", "message");
  localStorage.clear();
});
</script>

A better test with long timeout & cleanup:

<meta name="timeout" content="long">
<script>
promise_test(async t => {
  t.add_cleanup(() => localStorage.clear());
  assert_equals(await slowLocalStorageTest(), "expected", "message");
});
</script>

Use testdriver.js if you need automation otherwise unavailable on the web. You can get a user gesture from test_driver.bless, generate complex, trusted inputs with test_driver.action_sequence, etc.
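As a hedged sketch, a testharness.js test might obtain a user gesture like this (the fullscreen scenario is illustrative; it assumes testharness.js, testharnessreport.js, testdriver.js, and testdriver-vendor.js are loaded):

promise_test(async t => {
  t.add_cleanup(() => document.exitFullscreen());
  // test_driver.bless() runs the callback with a simulated user gesture,
  // which requestFullscreen() requires.
  await test_driver.bless('request fullscreen', () => {
    return document.documentElement.requestFullscreen();
  });
  assert_equals(document.fullscreenElement, document.documentElement);
}, 'requestFullscreen succeeds with a gesture from test_driver.bless');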

WPT also provides some useful server-side features through file names. Multi-global tests (.any.js and its friends) run the same tests in different scopes (window, worker, etc.); .https.sub.html asks the test to be loaded over HTTPS with server-side substitution support like below:

var anotherOrigin = "https://{{hosts[][www1]}}:{{ports[https][0]}}/path/to/page.html";
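And here is a hedged sketch of what a multi-global test might look like (the file name, say exposure.any.js, and the assertion are illustrative):

// META: global=window,worker
// The harness generates window and worker variants of this single file.
test(() => {
  assert_true('crypto' in self, 'crypto should be exposed in this global');
}, 'crypto is exposed in every tested global');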

Some features can also be enabled in query strings. baz.html?pipe=sub|header(X-Key,val)|trickle(d1) enables substitution, adds X-Key: val to the headers of the response, and delays 1 second before responding. Search for "pipes" on web-platform-tests.org for more.

WPT can also test behaviors that are not included in specs yet; just name the test as .tentative. If you need Blink internal APIs (e.g. testRunner, internals), put your tests in web_tests/wpt_internal.

Changes made to WPT are automatically exported to GitHub. You will see comments from a bot in your CL. GitHub changes from other vendors are also continuously imported. To receive automatically filed bugs when new failures are imported, create an OWNERS file in a subdirectory in WPT:

# TEAM: your-team@chromium.org
# COMPONENT: Blink>YourComponent
# WPT-NOTIFY: true
emails-here-will-be-cc@chromium.org

Additional Resources

  • Want to find out how your tests run on other browsers, and how interoperable your feature is? Use wpt.fyi.
  • Looking for more documentation on APIs, guidelines, examples, tips and more? Visit web-platform-tests.org.

New in Chrome 76


In Chrome 76, we've added support for several new features.

I’m Pete LePage, let’s dive in and see what’s new for developers in Chrome 76!

PWA Omnibox Install Button

In Chrome 76, we're making it easier for users to install Progressive Web Apps on the desktop, by adding an install button to the address bar, sometimes called the omnibox.

If your site meets the Progressive Web App installability criteria, Chrome will show an install button in the omnibox indicating to the user that your PWA can be installed. If the user clicks the install button, it’s essentially the same as calling prompt() on the beforeinstallprompt event; it shows the install dialog, making it easy for the user to install your PWA.

See Address Bar Install for Progressive Web Apps on the Desktop for complete details.

More control over the PWA mini-infobar

Example of the Add to Home screen mini-infobar for AirHorner

On mobile, Chrome shows the mini-infobar the first time a user visits your site if it meets the Progressive Web App installability criteria. We heard from you that you want to be able to prevent the mini-infobar from appearing, and provide your own install promotion instead.

Starting in Chrome 76, calling preventDefault() on the beforeinstallprompt event will stop the mini-infobar from appearing.

window.addEventListener('beforeinstallprompt', (e) => {
  // Don't show mini-infobar
  e.preventDefault();
  // Stash the event so it can be triggered later.
  deferredPrompt = e;
  // Update UI to promote PWA installation
  pwaInstallAvailable(true);
});

Be sure to update your UI to let users know your PWA can be installed. Check out Patterns for Promoting PWA Installation for our recommended best practices.
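As a hedged sketch, the install button you promote might later use the stashed event like this (butInstall is a hypothetical button element that pwaInstallAvailable(true) above is assumed to have revealed):

butInstall.addEventListener('click', async () => {
  if (!deferredPrompt) {
    return;
  }
  // Show the browser's install dialog, then wait for the user's choice.
  deferredPrompt.prompt();
  const {outcome} = await deferredPrompt.userChoice;
  console.log(`User ${outcome} the install prompt`);  // 'accepted' or 'dismissed'
  deferredPrompt = null;
});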

Faster updates to WebAPKs

When a Progressive Web App is installed on Android, Chrome automatically requests and installs a WebAPK. After it’s been installed, Chrome periodically checks whether the web app manifest has changed (maybe you’ve updated the icons, colors, or app name) to see if a new WebAPK is required.

Starting in Chrome 76, Chrome checks the manifest more frequently: every day, instead of every three days. If any of the key properties have changed, Chrome will request and install a new WebAPK, ensuring the title, icons, and other properties are up to date.

See Updating WebAPKs More Frequently for complete details.

Dark mode

Many operating systems now support a dark mode, or dark theme.

The prefers-color-scheme media query allows you to adjust the look and feel of your site to match the user's preferred mode.

@media (prefers-color-scheme: dark) {
  body {
    background-color: black;
    color: white;
  }
}

Tom has a great article, Hello darkness, my old friend, on web.dev with everything you need to know, plus tips for architecting your style sheets to support both a light and a dark mode.
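If you also need to react to the preference from JavaScript (for example, to swap a chart palette), here's a small sketch using matchMedia (the dark-theme class is hypothetical):

// Apply the theme now and whenever the OS-level preference changes.
const darkModeQuery = window.matchMedia('(prefers-color-scheme: dark)');

function applyTheme(isDark) {
  document.body.classList.toggle('dark-theme', isDark);  // hypothetical class
}

applyTheme(darkModeQuery.matches);
darkModeQuery.addEventListener('change', (event) => applyTheme(event.matches));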

And more!

These are just a few of the changes in Chrome 76 for developers; of course, there’s plenty more.

Promise.allSettled()

Personally, I’m really excited about Promise.allSettled(). It’s similar to Promise.all(), except it waits until all of the promises are settled before returning.

const promises = [
  fetch('/api-call-1'),
  fetch('/api-call-2'),
  fetch('/api-call-3'),
];
// Imagine some of these requests fail, and some succeed.

await Promise.allSettled(promises);
// All API calls have finished (either failed or succeeded).
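Unlike Promise.all(), the resolved value tells you how each promise fared. A quick sketch of inspecting the results:

const results = await Promise.allSettled(promises);
for (const result of results) {
  // Each entry is {status: 'fulfilled', value} or {status: 'rejected', reason}.
  if (result.status === 'fulfilled') {
    console.log('Request succeeded:', result.value.url);
  } else {
    console.log('Request failed:', result.reason);
  }
}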

Reading blobs is easier

Blobs are easier to read with three new methods: text(), arrayBuffer(), and stream(); this means we don’t have to create a wrapper around FileReader any more!

// New easier way
const text = await blob.text();
const aBuff = await blob.arrayBuffer();
const stream = await blob.stream();

// Old, wrapped reader
return new Promise((resolve) => {
  const reader = new FileReader();
  reader.addEventListener('loadend', (e) => {
    const text = e.srcElement.result;
    resolve(text);
  });
  reader.readAsText(file);
});

Image support in the async clipboard API

And, we’ve added support for images to the Asynchronous Clipboard API, making it easy to programmatically copy and paste images.

Further reading

This covers only some of the key highlights, check the links below for additional changes in Chrome 76.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 77 is released, I’ll be right here to tell you -- what’s new in Chrome!

A Contact Picker for the Web


What is the Contact Picker API?

Access to the user’s contacts has been a feature of native apps since (almost) the dawn of time. It’s one of the most common feature requests I hear from web developers, and is often the key reason they build a native app.

The Contact Picker API is a new, on-demand picker that allows users to select entries from their contact list and share limited details of the selected entries with a website. It allows users to share only what they want, when they want, and makes it easier for users to reach and connect with their friends and family.

For example, a web-based email client could use the Contact Picker API to select the recipient(s) of an email. A voice-over-IP app could look up which phone number to call. Or a social network could help a user discover which friends have already joined.

Want to give the Contact Picker API a try? Check out the Contact Picker API demo or view the source.

Current status

Step                                       Status
1. Create explainer                        Complete
2. Create initial draft of specification   In progress
3. Gather feedback & iterate on design     In progress
4. Origin trial                            Starts in Chrome 77; expected to run through Chrome 80
5. Launch                                  Not started

Using the Contact Picker API

The Contact Picker API requires a single API call with an options parameter that specifies the types of contact information you want.

Note: Want to try the Contact Picker API? Check out the Contact Picker API demo and view the source.

Enabling via chrome://flags

To experiment with the Contact Picker API locally, without an origin trial token, enable the #enable-experimental-web-platform-features flag in chrome://flags.

Enabling support during the origin trial phase

Starting in Chrome 77, the Contact Picker API will be available as an origin trial on Chrome for Android. Origin trials allow you to try new features and give feedback on their usability, practicality, and effectiveness, both to us, and to the web standards community. For more information, see the Origin Trials Guide for Web Developers.

  1. Request a token for your origin.
  2. Add the token to your pages. There are two ways to provide this token on any page in your origin:
    • Add an origin-trial <meta> tag to the head of any page. For example, this may look something like: <meta http-equiv="origin-trial" content="TOKEN_GOES_HERE">
    • If you can configure your server, you can also provide the token on pages using an Origin-Trial HTTP header. The resulting response header should look something like: Origin-Trial: TOKEN_GOES_HERE

Feature detection

To check if the Contact Picker API is supported, use:

const supported = ('contacts' in navigator && 'ContactsManager' in window);

Opening the Contact Picker

The entry point to the Contact Picker API is navigator.contacts.select(). When called, it returns a promise and shows the Contact Picker, allowing the user to select the contact(s) they want to share with the site. After selecting what to share and clicking Done, the promise resolves with an array of contacts selected by the user.

You must provide an array of properties you’d like returned as the first parameter, and optionally whether multiple contacts can be selected as a second parameter.

const props = ['name', 'email', 'tel'];
const opts = {multiple: true};

try {
  const contacts = await navigator.contacts.select(props, opts);
  handleResults(contacts);
} catch (ex) {
  // Handle any errors here.
}

The Contact Picker API can only be called from a secure, top-level browsing context, and like other powerful APIs, it requires a user gesture.

Handling the results

The Contact Picker API returns an array of contacts, and each contact includes an array of the requested properties. If a contact doesn’t have data for a requested property, or the user chooses to opt out of sharing a particular property, the API returns an empty array for it.

For example, if a site requests name, email, and tel, and a user selects a single contact that has data in the name field, provides two phone numbers, but does not have an email address, the response returned will be:

[{
  "email": [],
  "name": ["Queen O'Hearts"],
  "tel": ["+1-206-555-1000", "+1-206-555-1111"]
}]

Note: Labels and other semantic information on contact fields are dropped.
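A hedged sketch of the handleResults() function called earlier, walking that structure (addRecipient is a hypothetical UI helper):

function handleResults(contacts) {
  for (const contact of contacts) {
    // Each property is an array; empty means missing or not shared.
    const name = contact.name.length ? contact.name[0] : '(no name)';
    for (const email of contact.email) {
      addRecipient(`${name} <${email}>`);  // hypothetical UI helper
    }
  }
}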

Security and permissions

We’ve designed and implemented the Contact Picker API using the core principles defined in Controlling Access to Powerful Web Platform Features, including user control, transparency, and ergonomics.

User control

Access to the user’s contacts is via the picker, which can only be shown with a user gesture in a secure, top-level browsing context. This ensures that a site can’t show the picker on page load, or randomly show the picker without any context.

Users can choose not to share some properties. In this screenshot, the user has unchecked the 'Phone numbers' button; even though the site asked for phone numbers, they will not be shared with the site.

There's no option to bulk-select all contacts, so users are encouraged to select only the contacts that they need to share with that particular website. Users can also control which properties are shared with the site by toggling the property button at the top of the picker.

Transparency

To clarify which contact details are being shared, the picker will always show the contact's name and icon, plus any properties that the site has requested. For example, if a site requests name, email, and tel, values for all three properties will be shown in the picker. Alternatively, if a site only requests tel, the picker will show only the name and telephone numbers.

Picker, site requesting name, email, and tel, one contact selected.
Picker, site requesting only tel, one contact selected.

A long press on a contact will show all of the information that will be shared if the contact is selected (image right).

No permission persistence

Access to contacts is on-demand, and not persisted. Each time a site wants access, it must call navigator.contacts.select() with a user gesture, and the user must individually choose the contact(s) they want to share with the site.

Feedback

We want to hear about your experiences with the Contact Picker API.

Tell us about the API design

Is there something about the API that doesn’t work like you expected? Or are there missing methods or properties that you need to implement your idea?

Problem with the implementation?

Did you find a bug with Chrome's implementation? Or is the implementation different from the spec?

  • File a bug at https://new.crbug.com. Be sure to include as much detail as you can, simple instructions for reproducing, and set Components to Blink>Contacts. Glitch works great for sharing quick and easy repros.

Planning to use the API?

Planning to use the Contact Picker API? Your public support helps us to prioritize features, and shows other browser vendors how critical it is to support them.

Thanks

Big shout out and thanks to Finnur Thorarinsson and Rayan Kanso, who are implementing the feature, and Peter Beverloo, whose code I shamelessly stole and refactored for the demo.

PS: The 'names' in my contact picker are characters from Alice in Wonderland.

Deprecations and removals in Chrome 77


Removals

Card issuer networks as payment method names

Removes support for calling PaymentRequest with card issuer networks (e.g., "visa", "amex", "mastercard") in the supportedMethods field.
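For reference, a hedged sketch of the migration: instead of naming card networks directly as payment methods, use the basic-card payment method and list the networks you accept in its data (the details object is assumed to be defined elsewhere):

// Removed: card issuer networks as payment method names.
// const methods = [{supportedMethods: 'visa'}, {supportedMethods: 'amex'}];

// Instead, use 'basic-card' and declare the supported networks in data.
const methods = [{
  supportedMethods: 'basic-card',
  data: {supportedNetworks: ['visa', 'amex', 'mastercard']},
}];
const request = new PaymentRequest(methods, details);  // `details` defined elsewhere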

Intent to Remove | Chrome Platform Status | Chromium Bug

Deprecate Web MIDI use on insecure origins

Web MIDI use is classified into two groups: non-privileged use, and privileged use with sysex permission. Until Chrome 77, only the latter prompted users for permission. To reduce security concerns, permission will now always be requested regardless of sysex use. This means that using Web MIDI on insecure origins will no longer be allowed.

Intent to Remove | Chrome Platform Status | Chromium Bug

Deprecations

Deprecate WebVR 1.1 API

This API is now deprecated in Chrome, being replaced by the WebXR Device API, which is expected to ship in Chrome 78. The WebVR Origin Trial ended on July 24, 2018.

WebVR was never enabled by default in Chrome, and was never ratified as a web standard. The WebXR Device API is the replacement API for WebVR. Removing WebVR from Chrome allows us to focus on the future of WebXR and remove the maintenance burden of WebVR, as well as reaffirm that Chrome is committed to WebXR as the future for building immersive web-based experiences. Removal is expected in Chrome 79.

Intent to Remove | Chrome Platform Status | Chromium Bug

Experimenting with Periodic Background Sync


What's periodic background sync?

Have you ever been in any of the following situations? Riding a fast train or the subway with flaky or no connectivity, being throttled by your carrier after watching too many videos on the go, or living in a country where bandwidth is struggling to keep up with the demand? If you have, then you’ve surely experienced the frustration of getting certain things done on the web, and wondered why native apps tend to do better in these scenarios.

Native apps can fetch fresh content, such as timely news articles or up-to-date weather information, ahead of time. Even if there’s no network in the subway, you can still read the news. Periodic background sync (PBS) is an experimental feature that brings the same capability to the web. You can enjoy instant page loads with the latest news from your favorite newspaper, have enough music or videos to entertain yourself during an otherwise boring no-connectivity commute, and more.

Why add periodic background sync to your web app?

Consider a web app that uses a service worker to offer a rich offline experience:

  • When a person launches the app, it may only have stale content loaded.
  • Without periodic background sync, the app can only refresh itself when launched. As a result, people will see a flash of old content being slowly replaced by new content, or just a loading spinner.
  • With PBS, the app can update itself in the background, giving people a smoother and reliably fresh experience.
  • Now people can read the latest news, even in the subway!

Let’s now look at two types of updates that would be beneficial if done ahead of time.

Updating an application

This is the data required for your web app to work correctly.

Examples:

  • Updated search index for a search app.
  • A critical application update.
  • Updated icons or user interface.

Updating content

If your web app regularly publishes updates, you can fetch the newest content to give folks using your site a better experience.

Examples:

  • Fresh articles from news sites.
  • New songs from a favorite artist.
  • Badges and achievements in a fitness app.

Non-goals

Triggering events at a specific time is outside the scope of this API. PBS can't be used for time-based "alarm clock" scenarios.

There is no guaranteed cadence of the periodic sync tasks. When registering for PBS, you provide a minInterval value that acts as a lower bound for the sync interval, but there is no way to guarantee an upper bound. The browser decides this cadence for each web app.

A web app can register multiple periodic tasks, and the frequency determined by the browser for the tasks may or may not end up being the same.

Getting this right

We are putting periodic background sync through a trial period so that you can help us make sure that we got it right. This section explains some of the design decisions we took to make this feature as helpful as possible.

The first design decision we made is that a web app can only use PBS once a person has installed it on their device, and has launched it as a distinct application. PBS is not available in the context of a regular tab in Chrome.

Furthermore, since we don’t want unused or seldom used web apps to gratuitously consume battery or data, we designed PBS such that developers will have to earn it by providing value to their users. Concretely, we are using a site engagement score to determine if and how often periodic background syncs can happen for a given web app. In other words, a periodicsync event won't be fired at all unless the engagement score is greater than zero, and its value will affect the frequency at which the periodicsync event will fire. This ensures that the only apps syncing in the background are the ones you are actively using.

PBS shares some similarities with existing APIs and practices on popular platforms. For instance, one-off background sync as well as push notifications allow a web app's logic to live a little longer (via its service worker) after a person has closed the page. On most platforms, it’s common for people to have installed apps that periodically access the network in the background to provide a better user experience—for critical updates, prefetching content, syncing data, etc. Similarly, periodic background sync also extends the lifetime of a web app's logic to run at regular periods, for what might be a few minutes at a time.

If the browser allowed this to occur frequently and without restrictions, it could result in some privacy concerns. Here's how Chrome has addressed this risk for PBS:

  • The background sync activity only occurs on a network that the device has previously connected to. We recommend only connecting to networks operated by trustworthy parties.
  • As with all internet communications, PBS reveals the IP addresses of the client and the server it's talking to, and the name of the server. To reduce this exposure to roughly what it would be if the app only synced when it was in the foreground, the browser limits the frequency of an app's background syncs to align with how often the person uses that app. If the person stops frequently interacting with the app, PBS will stop triggering. This is a net improvement over the status quo in native apps.

Alternatives

Before PBS, web apps had to jump through hoops to keep content fresh—like triggering a push notification to wake up their service worker and update content as a side effect. But the timing of those notifications is decided by the developer. PBS leaves it to the browser to work with the operating system to figure out when an update should happen, allowing it to optimize for things like power and connectivity state, and prevent resource abuse in the background.

Using PBS instead of push notifications also means that these updates will happen without the fear of interrupting users, which might be the case with a regular notification. Developers still have the option of using push notifications for truly important updates, such as significant breaking news. Users can uninstall the web app, or disable the "Background Sync" site setting for specific web apps if needed.

Note: Periodic background sync should not be confused with a different web platform feature: "one-off" background sync. While their names are similar, their use cases are different. One-off background sync allows your web app's service worker to respond to network availability on a non-repeated basis. It's most commonly used to automatically retry sending a request that failed because the network was temporarily unavailable.

Origin trial

The current experimental implementation of periodic background sync is available in Chrome 77 and higher. It's implemented as an "origin trial," and you must join the origin trial before it can be enabled for your web app's users.

Note: Origin trials allow you to try new features and give feedback on their usability, practicality, and effectiveness to the web standards community. For more information, see the Origin Trials Guide for Web Developers.

We anticipate that the trial will end around March 2020, at which point the web platform community can use the feedback collected during the trial to inform a decision about the future of the feature.

During the origin trial, PBS can be tested on all platforms on which Chrome supports installing web apps, including macOS, Windows, Linux, Chrome OS, and Android. On macOS, Windows, and Linux, PBS events will only be fired if an instance of Chrome is actively running. This restriction is similar to how push notifications work on those platforms. If Chrome is quit and then re-launched after multiple background sync intervals have elapsed, a single periodicsync event will be fired soon after Chrome starts up, assuming all other conditions are met.

Note: For local testing purposes, developers can also try out PBS functionality by visiting chrome://flags/#periodic-background-sync in Chrome 77 and above, and enabling the feature there. This setting only applies to your local copy of Chrome, and is not a scalable substitute for the origin trial.

As part of the origin trial process, the Chrome team welcomes your input. Feedback on the experimental specification can be provided via GitHub, and comments or bug reports on Chrome's implementation can be provided by filing a bug with the Component field set to "Blink>BackgroundSync".

Example code

The following snippets cover common scenarios for interacting with periodic background sync. Some of them are meant to run within the context of your web app, possibly in response to someone clicking a UI element that opts-in to periodic background sync. Other snippets are meant to be run in your service worker's code.

You can see these snippets in context by reading the source code for the live demo.

Checking whether periodic sync can be used

The Permissions API tells you whether PBS can be enabled. You can query for 'periodic-background-sync' permission from either your web app's window context, or from within a service worker.

If the status is 'granted', then your web app meets the requirements to register for PBS.

If the status is anything other than 'granted' (most likely 'denied'), then your web app can't use PBS. This might be because the current browser doesn't support it, or because one of the other requirements outlined above hasn't been met.

const status = await navigator.permissions.query({
  name: 'periodic-background-sync',
});
if (status.state === 'granted') {
  // PBS can be used.
} else {
  // PBS cannot be used.
}

Registering a periodic sync

You can register for PBS within your web app's window context, but it must be after the service worker is registered. Both a tag ('content-sync' in the below example) and a minimum sync interval (in milliseconds) are required. You can use whatever string you'd like for the tag, and it will be passed in as a parameter to the corresponding periodicsync event in your service worker. This allows you to distinguish between multiple types of sync activity that you might register.

If you attempt to register when PBS is not supported, the call will throw an exception.

const registration = await navigator.serviceWorker.ready;
if ('periodicSync' in registration) {
  try {
    await registration.periodicSync.register('content-sync', {
      // An interval of one day.
      minInterval: 24 * 60 * 60 * 1000,
    });
  } catch (error) {
    // PBS cannot be used.
  }
}

Responding to a periodic sync event

To respond to PBS syncs, add a periodicsync event listener to your service worker. The callback parameter contains the tag matching the string you used during registration. This allows you to customize the callback's behavior—like updating one set of cached data as opposed to another—based on different tag values.

self.addEventListener('periodicsync', (event) => {
  if (event.tag === 'content-sync') {
    // See the "Think before you sync" section for
    // checks you could perform before syncing.  
    event.waitUntil(syncContent());
  }
  // Other logic for different tags as needed.
});

Checking if a sync with a given tag is registered

You can use the getTags() method to retrieve an array of tag strings, corresponding to active PBS registrations.

One use case is to check whether or not a PBS registration used to update cached data is already active, and if it is, avoid updating the cached data again.

You might also use this method to show a list of active registrations in your web app's settings page, and allow people to enable or disable specific types of syncs based on their preferences.

const registration = await navigator.serviceWorker.ready;
if ('periodicSync' in registration) {
  const tags = await registration.periodicSync.getTags();
  // Only update content if sync isn't set up.
  if (!tags.includes('content-sync')) {
    updateContentOnPageLoad();
  }
} else {
  // If PBS isn't supported, always update.
  updateContentOnPageLoad();
}

Unregistering a previously registered sync

You can stop future periodicsync events from firing by calling unregister() and passing in a tag string that was previously registered.

const registration = await navigator.serviceWorker.ready;
if ('periodicSync' in registration) {
  registration.periodicSync.unregister('content-sync');
}

Think before you sync

When your service worker wakes up to handle a periodicsync event, you have the opportunity to request data, but not the obligation to do so. While handling the event, you may want to take the current network, data saver status, and available storage quota into account before refreshing cached data. You also might structure your code so that there are "lightweight" and "heavyweight" network payloads, depending on those criteria.

Features like the Network Information API (navigator.connection) and the StorageManager API (navigator.storage.estimate()) can be used inside of a service worker to help make the decision about how much (if anything) to refresh inside your periodicsync handler.
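A hedged sketch of the kind of checks a syncContent() implementation might make before deciding between a lightweight and a heavyweight refresh (the thresholds and helper names are illustrative):

async function syncContent() {
  const connection = navigator.connection;  // Network Information API
  const saveData = connection ? connection.saveData : false;
  const slowNetwork = connection ?
      ['slow-2g', '2g'].includes(connection.effectiveType) : false;

  const {usage, quota} = await navigator.storage.estimate();
  const lowOnStorage = (quota - usage) < 10 * 1024 * 1024;  // < 10 MB free

  if (saveData || slowNetwork || lowOnStorage) {
    return refreshHeadlinesOnly();   // hypothetical "lightweight" payload
  }
  return refreshFullArticleCache();  // hypothetical "heavyweight" payload
}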

Debugging

It can be a challenge to get the "big picture" view of periodic background sync while testing things locally. Information about active registrations, approximate sync intervals, and logs of past sync events can provide valuable context while debugging your web app's behavior. Fortunately, all of that information can be found as an experimental feature in Chrome's DevTools.

Note: PBS debugging is currently disabled by default. Please read "Enabling the DevTools interface" for the steps needed to enable it during the origin trial.

Recording local activity

The "Periodic Background Sync" panel's interface is organized around key events in the PBS lifecycle: registering for sync, performing a background sync, and unregistering. In order to obtain information about these events, you need to "start recording" from within DevTools first.

The record button in DevTools

While recording, entries will appear in DevTools corresponding to events, with context and metadata logged for each.

Recorded PBS activity in DevTools

After enabling recording once, it will stay enabled for up to three days, allowing DevTools to capture local debugging information about background syncs that might take place, e.g., hours in the future.

Simulating events

While recording background activity can be helpful, there are times when you'd want to test your periodicsync handler immediately, without waiting for the event to fire on its normal cadence.

You can do this via the "Service Workers" panel within the Applications tab in Chrome DevTools. The "Periodic Sync" field allows you to provide a tag for the event to use, and trigger it as many times as you'd like.

The service workers panel in DevTools

Manually triggering a periodicsync event did not make it into Chrome 77, so the best way to test it out is to use Chrome 78 (currently in Canary) or later. You'll need to follow the same "Enabling the DevTools interface" steps to turn it on.

Live demo

You can try out this live demo app that uses periodic background sync. Make sure that:

  • You're using Chrome 77 or later.
  • You "install" the web app before trying to enable periodic background sync.

(The demo app's author already took the step of signing up for the origin trial.)

References and acknowledgements

This article is adapted from Mugdha Lakhani & Peter Beverloo's original write-up, with contributions from Chris Palmer. Mugdha also wrote the code samples, live demo, and the code for the Chrome implementation of this feature.

Enabling the DevTools interface

The following steps are required while periodic background sync remains an origin trial. If and when it progresses out of the origin trial phase, the DevTools interface will be enabled by default.

  • Visit chrome://flags/#enable-devtools-experiments and change the "Developer Tools experiments" setting to "Enabled".
The Developer Tools Experiments flag setting
The settings panel in DevTools
  • In the Experiments section of the Settings panel, enable "Background services section for Periodic Background Sync".
The background service section checkbox in DevTools
  • Close, and then reopen DevTools.
  • You should now see a "Periodic Background Sync" section within the "Application" panel in DevTools.
The periodic background sync panel in DevTools

The Native File System API: Simplifying access to local files


What is the Native File System API?

The new Native File System API enables developers to build powerful web apps that interact with files on the user's local device, like IDEs, photo and video editors, text editors, and more. After a user grants a web app access, this API allows web apps to read or save changes directly to files and folders on the user's device.

Current status

Step                                       Status
1. Create explainer                        Complete
2. Create initial draft of specification   In progress
3. Gather feedback & iterate on design     In progress
4. Origin trial                            Expected to start in Chrome 78
5. Launch                                  Not started

Using the Native File System API

To show off the true power and usefulness of the Native File System API, I wrote a single file text editor. It lets you open a text file, edit it, save the changes back to disk, or start a new file and save the changes to disk. It's nothing fancy, but it provides enough to help you understand the concepts.

Enabling via chrome://flags

If you want to experiment with the Native File System API locally, enable the #native-file-system-api flag in chrome://flags.

Read a file from the local file system

The first use case I wanted to tackle was to ask the user to choose a file, then open and read that file from disk.

Ask the user to pick a file to read

The entry point to the Native File System API is window.chooseFileSystemEntries(). When called, it shows a file picker dialog, and prompts the user to select a file. After selecting a file, the API returns a handle to the file. An optional options parameter lets you influence the behavior of the file picker, for example, allowing the user to select multiple files, or directories, or different file types. Without any options specified, the file picker allows the user to select a single file, perfect for our text editor.

Like many other powerful APIs, calling chooseFileSystemEntries() must be done in a secure context, and must be called from within a user gesture.

let fileHandle;
butOpenFile.addEventListener('click', async (e) => {
  fileHandle = await window.chooseFileSystemEntries();
  // Do something with the file handle
});

Once the user selects a file, chooseFileSystemEntries() returns a handle, in this case a FileSystemFileHandle that contains the properties and methods needed to interact with the file.

It’s helpful to keep a reference to the file handle around so that it can be used later. It’ll be needed to save changes back to the file, or any other file operations. In the next few milestones, installed Progressive Web Apps will also be able to save the handle to IndexedDB and persist access to the file across page reloads.

Read a file from the file system

Now that you have a handle to a file, you can get the properties of the file, or access the file itself. For now, let’s simply read the contents of the file. Calling handle.getFile() returns a File object, which contains binary data as a blob. To get the data from the blob, call one of the reader methods (slice(), stream(), text(), arrayBuffer()).

const file = await fileHandle.getFile();
const contents = await file.text();

Putting it all together

Putting it all together, when the user clicks the Open button, the browser shows a file picker. Once they’ve selected a file, the app reads the contents and puts them into a <textarea>.

let fileHandle;
butOpenFile.addEventListener('click', async (e) => {
  fileHandle = await window.chooseFileSystemEntries();
  const file = await fileHandle.getFile();
  const contents = await file.text();
  textArea.value = contents;
});

Write the file to the local file system

In the text editor, there are two ways to save a file: Save, and Save As. Save simply writes the changes back to the original file using the file handle we got earlier. But Save As creates a new file, and thus requires a new file handle.

Create a new file

The chooseFileSystemEntries() API with {type: 'saveFile'} will show the file picker in “save” mode, allowing the user to pick a new file they want to use for saving. For the text editor, I also wanted it to automatically add a .txt extension, so I provided some additional parameters.

function getNewFileHandle() {
  const opts = {
    type: 'saveFile',
    accepts: [{
      description: 'Text file',
      extensions: ['txt'],
      mimeTypes: ['text/plain'],
    }],
  };
  const handle = window.chooseFileSystemEntries(opts);
  return handle;
}

Save changes to the original file

To write data to disk, I needed to create a FileSystemWriter by calling createWriter() on the file handle, then call write() to do the write. If permission to write hasn’t been granted already, the browser will prompt the user for permission to write to the file first (during createWriter()). The write() method takes a string, which is what we want for a text editor, but it can also take a BufferSource, or a Blob.

async function writeFile(fileHandle, contents) {
  // Create a writer (request permission if necessary).
  const writer = await fileHandle.createWriter();
  // Make sure we start with an empty file
  await writer.truncate(0);
  // Write the full length of the contents
  await writer.write(0, contents);
  // Close the file and write the contents to disk
  await writer.close();
}

Note: There's no guarantee that the contents are written to disk until the close() method is called.

The keepExistingData option when calling createWriter() isn’t supported yet, so once I get the writer, I immediately call truncate(0) to ensure I started with an empty file. Otherwise, if the length of the new content is shorter than the existing content, the existing content after the new content would remain. keepExistingData will be added in a future milestone.

When createWriter() is called, Chrome checks if the user has granted write permission. If not, it requests permission. If the user grants permission, the app can write the contents to the file. But if the user does not grant permission, createWriter() will throw a DOMException, and the app will not be able to write to the file. In the text editor, these DOMExceptions are handled in the saveFile() method.
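A hedged sketch of how the Save As path might tie getNewFileHandle() and writeFile() together (the error handling here is illustrative, not the editor's exact code):

async function saveFileAs(contents) {
  try {
    const handle = await getNewFileHandle();
    await writeFile(handle, contents);
    fileHandle = handle;  // remember it so plain "Save" reuses this file
  } catch (ex) {
    // The user dismissed the picker or denied write permission.
    console.error('Unable to save file:', ex);
  }
}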

Note: All the code behind the text editor is on GitHub, and the core file system interactions are in fs-helpers.js.

What else is possible?

Beyond reading and writing files, the Native File System API provides several other new capabilities.

Open a directory and enumerate its contents

To enumerate all files in a directory, call chooseFileSystemEntries() with the type option set to openDirectory. The user selects a directory in a picker, after which a FileSystemDirectoryHandle is returned, which lets you enumerate and access the directory’s files.

const butDir = document.getElementById('butDirectory');
butDir.addEventListener('click', async (e) => {
  const opts = {type: 'openDirectory'};
  const handle = await window.chooseFileSystemEntries(opts);
  const entries = await handle.getEntries();
  for await (const entry of entries) {
    const kind = entry.isFile ? 'File' : 'Directory';
    console.log(kind, entry.name);
  }
});

What’s currently supported?

We’re still working on some of the implementation for the Native File System API, and not everything in the spec (or explainer) has been completed.

As of Chrome 78, the following functionality is not available, or doesn't match the spec:

  • Handles are not serializable, meaning they cannot be passed via postMessage(), or stored in IndexedDB.
  • Non-atomic writes (i.e. calls to FileSystemFileHandle.createWriter() with inPlace: true).
  • Writing to a file using a WritableStream.
  • The FileSystemDirectoryHandle.resolve() method.

Security and permissions

We’ve designed and implemented the Native File System API using the core principles we defined in Controlling Access to Powerful Web Platform Features, including user control and transparency, and user ergonomics.

Opening a file or saving a new file

File picker used to open an existing file for reading.

When opening a file, the user provides permission to read a file or directory via the file picker. The open file picker can only be shown via a user gesture when served from a secure context. If the user changes their mind, they can cancel the selection in the file picker and the site does not get access to anything. This is the same behavior as the <input type="file"> element.

File picker used to save a file to disk.

Similarly, when a web app wants to save a new file, the browser will show the save file picker, allowing the user to specify the name and location of the new file. Since they are saving a new file to the device (versus overwriting an existing file), the file picker grants the app permission to write to the file.

Restricted folders

To help protect users and their data, the browser may limit the user’s ability to save to certain folders, for example, core operating system folders like Windows, the macOS Library folders, etc. When this happens, the browser will show a modal prompt and ask the user to choose a different folder.

Modifying an existing file or directory

A web app cannot modify a file on disk without getting explicit permission from the user.

Permission prompt

Prompt shown to users before the browser is granted write permission on an existing file.

If a person wants to save changes to a file that they previously granted read access, the browser will show a modal permission prompt, requesting permission for the site to write changes to disk. The permission request can only be triggered by a user gesture, for example, clicking a “Save” button.

Alternatively, a web app that edits multiple files, like an IDE, can also ask for permission to save changes at the time of opening.

If the user chooses Cancel, and does not grant write access, the web app cannot save changes to the local file. It should provide an alternative method to allow the user to save their data, for example providing a way to “download” the file, saving data to the cloud, etc.

Transparency

Omnibox icon indicating the user has granted the website permission to save to a local file.

Once a user has granted permission to a web app to save a local file, Chrome will show an icon in the omnibox. Clicking on the omnibox icon opens a popover showing the list of files the user has given access to. The user can easily revoke that access if they choose.

Permission persistence

The web app can continue to save changes to the file without prompting as long as the tab is open. Once a tab is closed, the site loses all access. The next time the user uses the web app, they will be re-prompted for access to the files. In the next few milestones, installed Progressive Web Apps (only) will also be able to save the handle to IndexedDB and persist access to handles across page reloads. In this case, an icon will be shown in the omnibox as long as the app has write access to local files.

Feedback

We want to hear about your experiences with the Native File System API.

Tell us about the API design

Is there something about the API that doesn’t work like you expected? Or are there missing methods or properties that you need to implement your idea? Have a question or comment on the security model?

Problem with the implementation?

Did you find a bug with Chrome's implementation? Or is the implementation different from the spec?

  • File a bug at https://new.crbug.com. Be sure to include as much detail as you can, simple instructions for reproducing, and set Components to Blink>Storage>FileSystem. Glitch works great for sharing quick and easy repros.

Planning to use the API?

Planning to use the Native File System API on your site? Your public support helps us to prioritize features, and shows other browser vendors how critical it is to support them.

The Chromium Chronicle: Coding Outside the Sandbox


Episode 5: August 2019

by Ade in Mountain View

Chrome is split into processes. Some of them are sandboxed, which means that they have reduced access to the system and to users' accounts. In a sandboxed process, bugs that allow malicious code to run are much less severe.

The browser process has no sandbox, so a bug could give malicious code full access to the whole device. What should you do differently? And what's the situation with other processes?

sandbox diagram

All code has bugs. In the browser process, those bugs allow malicious code to install a program, steal user data, adjust computer settings, access content of all browser tabs, login data, etc.

In other processes, OS access is limited via platform-specific restrictions. For more information, see Chrome's sandbox implementation guide.

Make sure to avoid the following common mistakes:

rule of two

  • Don’t parse or interpret untrustworthy data using C++ in the browser process.
  • Don’t trust the origin a renderer claims to represent. The browser’s RenderProcessHost can be used to get the current origin securely.

Instead, use the following best practices:

  • Be extra paranoid if your code is in the browser process.
  • Validate all IPC from other processes. Assume all other processes are already compromised and out to trick you.
  • Do your processing in a renderer or utility process or some other sandboxed process. Ideally, also use a memory safe language such as JavaScript (solves >50% of security bugs).

For years, we ran the network stack (e.g. HTTP, DNS, QUIC) in the browser process, which led to some critical vulnerabilities. On some platforms, networking now has its own process, with a sandbox coming.

Additional Resources

  • Chromium's Rule of Two: no more than two of unsafe data, unsafe code, and unsafe process.
  • Validating IPC Data: a guide on how to ensure that IPCs from the renderer process are not full of fibs and misrepresentations.

Get started with GPU Compute on the Web


This article is about me playing with the experimental WebGPU API and sharing my journey with web developers interested in performing data-parallel computations using the GPU.

Background

As you may already know, the Graphics Processing Unit (GPU) is an electronic subsystem within a computer that was originally specialized for processing graphics. However, in the past 10 years, it has evolved towards a more flexible architecture, allowing developers to implement many types of algorithms, not just render 3D graphics, while taking advantage of the unique architecture of the GPU. These capabilities are referred to as GPU Compute, and using a GPU as a coprocessor for general-purpose scientific computing is called general-purpose GPU (GPGPU) programming.

GPU Compute has contributed significantly to the recent machine learning boom, as convolutional neural networks and other models can take advantage of the architecture to run more efficiently on GPUs. With the current Web Platform lacking in GPU Compute capabilities, the W3C’s “GPU for the Web” Community Group is designing an API to expose the modern GPU APIs that are available on most current devices. This API is called WebGPU.

WebGPU is a low-level API, like WebGL. It is very powerful and quite verbose, as you’ll see. But that’s OK. What we’re looking for is performance.

In this article, I’m going to focus on the GPU Compute part of WebGPU and, to be honest, I'm just scratching the surface, so that you can start playing on your own. I will be diving deeper and covering WebGPU rendering (canvas, texture, etc.) in forthcoming articles.

Dogfood: WebGPU is available for now in Chrome 78 for macOS behind an experimental flag. You can enable it at chrome://flags/#enable-unsafe-webgpu. The API is constantly changing and currently unsafe. As GPU sandboxing isn't implemented yet for the WebGPU API, it is possible to read GPU data for other processes! Don’t browse the web with it enabled.

Access the GPU

Accessing the GPU is easy in WebGPU. Calling navigator.gpu.requestAdapter() returns a JavaScript promise that will asynchronously resolve with a GPU adapter. Think of this adapter as the graphics card. It can either be integrated (on the same chip as the CPU) or discrete (usually a PCIe card that is more performant but uses more power).

Once you have the GPU adapter, call adapter.requestDevice() to get a promise that will resolve with a GPU device you’ll use to do some GPU computation.

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

Both functions take options that allow you to be specific about the kind of adapter (power preference) and device (extensions, limits) you want. For the sake of simplicity, we’ll use the default options in this article.

Write buffer memory

Let’s see how to use JavaScript to write data to memory for the GPU. This process isn’t straightforward because of the sandboxing model used in modern web browsers.

The example below shows you how to write four bytes to buffer memory accessible from the GPU. It calls device.createBufferMappedAsync() which takes the size of the buffer and its usage. Even though the usage flag GPUBufferUsage.MAP_WRITE is not required for this specific call, let's be explicit that we want to write to this buffer. The resulting promise resolves with a GPU buffer object and its associated raw binary data buffer.

Writing bytes is familiar if you’ve already played with ArrayBuffer; use a TypedArray and copy the values into it.

// Get a GPU buffer and an arrayBuffer for writing.
// Upon success the GPU buffer is put in the mapped state.
const [gpuBuffer, arrayBuffer] = await device.createBufferMappedAsync({
  size: 4,
  usage: GPUBufferUsage.MAP_WRITE
});

// Write bytes to buffer.
new Uint8Array(arrayBuffer).set([0, 1, 2, 3]);

At this point, the GPU buffer is mapped, meaning it is owned by the CPU, and it’s accessible in read/write from JavaScript. So that the GPU can access it, it has to be unmapped, which is as simple as calling gpuBuffer.unmap().

The concept of mapped/unmapped is needed to prevent race conditions where GPU and CPU access memory at the same time.

Read buffer memory

Now let’s see how to copy a GPU buffer to another GPU buffer and read it back.

Since we’re writing in the first GPU buffer and we want to copy it to a second GPU buffer, a new usage flag GPUBufferUsage.COPY_SRC is required. The second GPU buffer is created in an unmapped state with the synchronous device.createBuffer(). Its usage flag is GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ as it will be used as the destination of the first GPU buffer and read in JavaScript once GPU copy commands have been executed.

// Get a GPU buffer and an arrayBuffer for writing.
// Upon success the GPU buffer is returned in the mapped state.
const [gpuWriteBuffer, arrayBuffer] = await device.createBufferMappedAsync({
  size: 4,
  usage: GPUBufferUsage.MAP_WRITE | GPUBufferUsage.COPY_SRC
});

// Write bytes to buffer.
new Uint8Array(arrayBuffer).set([0, 1, 2, 3]);

// Unmap buffer so that it can be used later for copy.
gpuWriteBuffer.unmap();

// Get a GPU buffer for reading in an unmapped state.
const gpuReadBuffer = device.createBuffer({
  size: 4,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ
});

Because the GPU is an independent coprocessor, all GPU commands are executed asynchronously. This is why there is a list of GPU commands built up and sent in batches when needed. In WebGPU, the GPU command encoder returned by device.createCommandEncoder() is the JavaScript object that builds a batch of “buffered” commands that will be sent to the GPU at some point. The methods on GPUBuffer, on the other hand, are “unbuffered”, meaning they execute atomically at the time they are called.

Once you have the GPU command encoder, call copyEncoder.copyBufferToBuffer() as shown below to add this command to the command queue for later execution. Finally, finish encoding commands by calling copyEncoder.finish() and submit those to the GPU device command queue. The queue is responsible for handling submissions done via device.getQueue().submit() with the GPU commands as arguments. This will atomically execute all the commands stored in the array in order.

// Encode commands for copying buffer to buffer.
const copyEncoder = device.createCommandEncoder();
copyEncoder.copyBufferToBuffer(
  gpuWriteBuffer /* source buffer */,
  0 /* source offset */,
  gpuReadBuffer /* destination buffer */,
  0 /* destination offset */,
  4 /* size */
);

// Submit copy commands.
const copyCommands = copyEncoder.finish();
device.getQueue().submit([copyCommands]);

At this point, GPU queue commands have been sent, but not necessarily executed. To read the second GPU buffer, call gpuReadBuffer.mapReadAsync(). It returns a promise that will resolve with an ArrayBuffer containing the same values as the first GPU buffer once all queued GPU commands have been executed.

// Read buffer.
const copyArrayBuffer = await gpuReadBuffer.mapReadAsync();
console.log(new Uint8Array(copyArrayBuffer));

You can try out this sample.

In short, here’s what you need to remember regarding buffer memory operations:

  • GPU buffers have to be unmapped to be used in device queue submission.
  • When mapped, GPU buffers can be read and written in JavaScript.
  • GPU buffers are mapped when mapReadAsync(), mapWriteAsync(), createBufferMappedAsync() and createBufferMapped() are called.

Shader programming

Programs running on the GPU that only perform computations (and don't draw triangles) are called compute shaders. They are executed in parallel by hundreds of GPU cores (which are smaller than CPU cores) that operate together to crunch data. Their input and output are buffers in WebGPU.

To illustrate the use of compute shaders in WebGPU, we’ll play with matrix multiplication, a common algorithm in machine learning illustrated below.

Figure 1. Matrix multiplication diagram

In short, here’s what we’re going to do:

  1. Create three GPU buffers (two for the matrices to multiply and one for the result matrix)
  2. Describe input and output for the compute shader
  3. Compile the compute shader code
  4. Set up a compute pipeline
  5. Submit in batch the encoded commands to the GPU
  6. Read the result matrix GPU buffer

GPU Buffers creation

For the sake of simplicity, matrices will be represented as a list of floating point numbers. The first element is the number of rows, the second element the number of columns, and the rest are the actual numbers of the matrix.

Figure 2. Simple representation of a matrix in JavaScript and its equivalent in mathematical notation

The three GPU buffers are storage buffers as we need to store and retrieve data in the compute shader. This explains why the GPU buffer usage flags include GPUBufferUsage.STORAGE for all of them. The result matrix usage flag also has GPUBufferUsage.COPY_SRC because it will be copied to another buffer for reading once all GPU queue commands have been executed.

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();


// First Matrix

const firstMatrix = new Float32Array([
  2 /* rows */, 4 /* columns */,
  1, 2, 3, 4,
  5, 6, 7, 8
]);

const [gpuBufferFirstMatrix, arrayBufferFirstMatrix] = await device.createBufferMappedAsync({
  size: firstMatrix.byteLength,
  usage: GPUBufferUsage.STORAGE,
});
new Float32Array(arrayBufferFirstMatrix).set(firstMatrix);
gpuBufferFirstMatrix.unmap();


// Second Matrix

const secondMatrix = new Float32Array([
  4 /* rows */, 2 /* columns */,
  1, 2,
  3, 4,
  5, 6,
  7, 8
]);

const [gpuBufferSecondMatrix, arrayBufferSecondMatrix] = await device.createBufferMappedAsync({
  size: secondMatrix.byteLength,
  usage: GPUBufferUsage.STORAGE,
});
new Float32Array(arrayBufferSecondMatrix).set(secondMatrix);
gpuBufferSecondMatrix.unmap();


// Result Matrix

const resultMatrixBufferSize = Float32Array.BYTES_PER_ELEMENT * (2 + firstMatrix[0] * secondMatrix[1]);
const resultMatrixBuffer = device.createBuffer({
  size: resultMatrixBufferSize,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC
});

Bind group layout and bind group

Concepts of bind group layout and bind group are specific to WebGPU. A bind group layout defines the input/output interface expected by a shader, while a bind group represents the actual input/output data for a shader.

In the example below, the bind group layout expects some storage buffers at numbered bindings 0, 1, and 2 for the compute shader. The bind group on the other hand, defined for this bind group layout, associates GPU buffers to the bindings: gpuBufferFirstMatrix to the binding 0, gpuBufferSecondMatrix to the binding 1, and resultMatrixBuffer to the binding 2.

const bindGroupLayout = device.createBindGroupLayout({
  bindings: [
    {
      binding: 0,
      visibility: GPUShaderStage.COMPUTE,
      type: "storage-buffer"
    },
    {
      binding: 1,
      visibility: GPUShaderStage.COMPUTE,
      type: "storage-buffer"
    },
    {
      binding: 2,
      visibility: GPUShaderStage.COMPUTE,
      type: "storage-buffer"
    }
  ]
});

const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  bindings: [
    {
      binding: 0,
      resource: {
        buffer: gpuBufferFirstMatrix
      }
    },
    {
      binding: 1,
      resource: {
        buffer: gpuBufferSecondMatrix
      }
    },
    {
      binding: 2,
      resource: {
        buffer: resultMatrixBuffer
      }
    }
  ]
});

Compute shader code

The compute shader code for multiplying matrices is written in GLSL, a high-level shading language used in WebGL, which has a syntax based on the C programming language. Without going into detail, you'll find below the three storage buffers marked with the keyword buffer. The program will use firstMatrix and secondMatrix as inputs and resultMatrix as its output.

Note that each storage buffer has a binding qualifier that corresponds to the same index defined in the bind group layout and bind group declared above.

const computeShaderCode = `#version 450

  layout(std430, set = 0, binding = 0) readonly buffer FirstMatrix {
      vec2 size;
      float numbers[];
  } firstMatrix;

  layout(std430, set = 0, binding = 1) readonly buffer SecondMatrix {
      vec2 size;
      float numbers[];
  } secondMatrix;

  layout(std430, set = 0, binding = 2) buffer ResultMatrix {
      vec2 size;
      float numbers[];
  } resultMatrix;

  void main() {
    resultMatrix.size = vec2(firstMatrix.size.x, secondMatrix.size.y);

    ivec2 resultCell = ivec2(gl_GlobalInvocationID.x, gl_GlobalInvocationID.y);
    float result = 0.0;
    for (int i = 0; i < firstMatrix.size.y; i++) {
      int a = i + resultCell.x * int(firstMatrix.size.y);
      int b = resultCell.y + i * int(secondMatrix.size.y);
      result += firstMatrix.numbers[a] * secondMatrix.numbers[b];
    }

    int index = resultCell.y + resultCell.x * int(secondMatrix.size.y);
    resultMatrix.numbers[index] = result;
  }
`;

Pipeline setup

WebGPU in Chrome currently uses bytecode instead of raw GLSL code. This means we have to compile computeShaderCode before running the compute shader. Luckily for us, the @webgpu/glslang package allows us to compile computeShaderCode in a format that WebGPU in Chrome accepts. This bytecode format is based on a safe subset of SPIR-V.

Note that the “GPU for the Web” W3C Community Group has, at the time of writing, still not decided on the shading language for WebGPU.

import glslangModule from 'https://unpkg.com/@webgpu/glslang/web/glslang.js';

The compute pipeline is the object that actually describes the compute operation we're going to perform. Create it by calling device.createComputePipeline(). It takes two arguments: a pipeline layout built from the bind group layout we created earlier, and a compute stage defining the entry point of our compute shader (the main GLSL function) and the actual compute shader module compiled with glslang.compileGLSL().

const glslang = await glslangModule();

const computePipeline = device.createComputePipeline({
  layout: device.createPipelineLayout({
    bindGroupLayouts: [bindGroupLayout]
  }),
  computeStage: {
    module: device.createShaderModule({
      code: glslang.compileGLSL(computeShaderCode, "compute")
    }),
    entryPoint: "main"
  }
});

Commands submission

After instantiating a bind group with our three GPU buffers and a compute pipeline with a bind group layout, it is time to use them.

Let’s start a programmable compute pass encoder with commandEncoder.beginComputePass(). We'll use this to encode GPU commands that will perform the matrix multiplication. Set its pipeline with passEncoder.setPipeline(computePipeline) and its bind group at index 0 with passEncoder.setBindGroup(0, bindGroup). The index 0 corresponds to the set = 0 qualifier in the GLSL code.

Now, let’s talk about how this compute shader is going to run on the GPU. Our goal is to execute this program in parallel for each cell of the result matrix. For a result matrix of size 2 by 4 for instance, we’d call passEncoder.dispatch(2, 4) to encode the command of execution. The first argument “x” is the first dimension, the second one “y” is the second dimension, and the last one “z” is the third dimension, which defaults to 1 as we don’t need it here. In the GPU compute world, encoding a command to execute a kernel function on a set of data is called dispatching.

Figure 3. Execution in parallel for each result matrix cell

In our code, “x” and “y” will be respectively the number of rows of the first matrix and the number of columns of the second matrix. With that, we can now dispatch a compute call with passEncoder.dispatch(firstMatrix[0], secondMatrix[1]).

As seen in the drawing above, each shader will have access to a unique gl_GlobalInvocationID object that will be used to know which result matrix cell to compute.

const commandEncoder = device.createCommandEncoder();

const passEncoder = commandEncoder.beginComputePass();
passEncoder.setPipeline(computePipeline);
passEncoder.setBindGroup(0, bindGroup);
passEncoder.dispatch(firstMatrix[0] /* x */, secondMatrix[1] /* y */);
passEncoder.endPass();

To end the compute pass encoder, call passEncoder.endPass(). Then, create a GPU buffer to use as a destination to copy the result matrix buffer with copyBufferToBuffer. Finally, finish encoding commands with copyEncoder.finish() and submit those to the GPU device queue by calling device.getQueue().submit() with the GPU commands.

// Get a GPU buffer for reading in an unmapped state.
const gpuReadBuffer = device.createBuffer({
  size: resultMatrixBufferSize,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ
});

// Encode commands for copying buffer to buffer.
commandEncoder.copyBufferToBuffer(
  resultMatrixBuffer /* source buffer */,
  0 /* source offset */,
  gpuReadBuffer /* destination buffer */,
  0 /* destination offset */,
  resultMatrixBufferSize /* size */
);

// Submit GPU commands.
const gpuCommands = commandEncoder.finish();
device.getQueue().submit([gpuCommands]);

Read result matrix

Reading the result matrix is as easy as calling gpuReadBuffer.mapReadAsync() and logging the ArrayBuffer returned by the resulting promise.

Figure 4. Matrix multiplication result

In our code, the result logged in DevTools JavaScript console is “2, 2, 50, 60, 114, 140”.

// Read buffer.
const arrayBuffer = await gpuReadBuffer.mapReadAsync();
console.log(new Float32Array(arrayBuffer));

Congratulations! You made it. You can play with the sample.

Performance findings

So how does running matrix multiplication on a GPU compare to running it on a CPU? To find out, I wrote the program just described for a CPU. And as you can see in the graph below, using the full power of the GPU seems like an obvious choice when the size of the matrices is greater than 256 by 256.

Figure 5. GPU vs CPU benchmark
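For reference, the CPU version is conceptually a triple loop over the same flattened representation. Here's a rough sketch in plain JavaScript (an illustration, not the exact benchmark code):

function multiplyOnCpu(firstMatrix, secondMatrix) {
  const rows = firstMatrix[0];
  const cols = secondMatrix[1];
  const shared = firstMatrix[1]; // columns of the first matrix == rows of the second
  const result = new Float32Array(2 + rows * cols);
  result[0] = rows;
  result[1] = cols;
  for (let x = 0; x < rows; x++) {
    for (let y = 0; y < cols; y++) {
      let sum = 0;
      for (let i = 0; i < shared; i++) {
        sum += firstMatrix[2 + x * shared + i] * secondMatrix[2 + i * cols + y];
      }
      result[2 + x * cols + y] = sum;
    }
  }
  return result;
}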

This article was just the beginning of my journey exploring WebGPU. Expect more articles soon featuring more deep dives in GPU Compute and on how rendering (canvas, texture, sampler) works in WebGPU.

Trusted Web Activities Quick Start Guide

Trusted Web Activities (TWAs) can be a bit tricky to set up, especially if all you want to do is display your website. This guide will take you through creating a basic TWA, covering all the gotchas.

By the end of this guide, you will:

  • Have built a Trusted Web Activity that passes verification.
  • Understand when your debug keys and your release keys are used.
  • Be able to determine the signature your TWA is being built with.
  • Know how to create a basic Digital Asset Links file.

To follow this guide you'll need:

  • Android Studio Installed
  • An Android phone or emulator connected and set up for development (Enable USB debugging if you’re using a physical phone).
  • A browser that supports Trusted Web Activities on your development phone. Chrome 72 or later will work. Support in other browsers is on its way.
  • A website you'd like to view in the Trusted Web Activity.

A Trusted Web Activity lets your Android App launch a full screen Browser Tab without any browser UI. This capability is restricted to websites that you own, and you prove this by setting up Digital Asset Links. Digital Asset Links consist essentially of a file on your website that points to your app and some metadata in your app that points to your website. We'll talk more about them later.

When you launch a Trusted Web Activity, the browser will check that the Digital Asset Links check out; this is called verification. If verification fails, the browser will fall back to displaying your website as a Custom Tab.

Clone and customize the example repo

The svgomg-twa repo contains an example TWA that you can customize to launch your website:

  1. Clone the project (git clone https://github.com/GoogleChromeLabs/svgomg-twa.git).
  2. Import the Project into Android Studio, using File > New > Import Project, and select the folder to which the project was cloned.
  3. Open the app's build.gradle and modify the values in twaManifest. There are two build.gradle files. You want the module one at app/build.gradle.

    • Change hostName to point to your website. Your website must be available over HTTPS, though you omit the https:// scheme from the hostName field.
    • Change name to whatever you want.
    • Change applicationId to something specific to your project. This translates into the app’s package name and is how the app is identified on the Play Store - no two apps can share the applicationId and if you change it you’ll need to create a new Play Store listing.

Build and run

In Android Studio hit Run, Run ‘app’ (where ‘app’ is your module name, if you’ve changed it) and the TWA will be built and run on your device! You’ll notice that your website is launched as a Custom Tab, not a Trusted Web Activity. This is because we haven’t set up our Digital Asset Links yet, but first...

A note on signing keys

Digital Asset Links take into account the key that an APK has been signed with and a common cause for verification failing is to use the wrong signature. (Remember, failing verification means you'll launch your website as a Custom Tab with browser UI at the top of the page.) When you hit Run or Build APK in Android Studio, the APK will be created with your developer debug key, which Android Studio automatically generated for you.

If you deploy your app to the Play Store you’ll hit Build > Generate Signed APK, which will use a different signature, one that you’ll have created yourself (and protected with a password). That means that if your Digital Asset Links file specifies your production key, verification will fail when you build with your debug key. This also can happen the other way around - if the Digital Asset Links file has your debug key your TWA will work fine locally, then when you download the signed version from the Play Store, verification will fail.

You can put both your debug key and production key in your asset link file (see Adding More Keys below), but your debug key is less secure. Anyone who gets a copy of the debug key file can use it. Finally, if you have your app installed on your device with one key, you can’t install the version with the other key. You must uninstall the previous version first.

Building your app

  • To build with debug keys:
    1. Click Run 'app' where 'app' is the name of your module if you changed it.
  • To build with release keys:
    1. Click Build then Generate Signed APK.
    2. Choose APK.
    3. If you're doing this for the first time, on the next page press Create New to create a new key and follow the Android documentation. Otherwise select your previously created key.
    4. Press Next and pick the release build variant.
    5. Make sure you check both the V1 and the V2 signatures (the Play Store won’t let you upload the APK otherwise).
    6. Click Finish.

If you built with debug keys, your app will be automatically deployed to your device. On the other hand if you built with release keys, after a few seconds a pop up will appear in the bottom right corner giving you the option to locate or analyze the APK. (If you miss it, you can press on the Event Log in the bottom right.) You’ll need to use adb manually to install the signed APK with adb install app-release.apk.

This table shows which key is used based on how you create your APK.

Debug key
  • When is it created? Automatically by Android Studio.
  • When is it used? Run 'app', Debug 'app', and Build APK.

Release key
  • When is it created? Manually by you.
  • When is it used? Generate Signed APK, and when the app is downloaded from the Play Store.

Now that your app is installed (with either the debug or release key) you can generate the Digital Asset Link file. I’ve created the Asset Link Tool to help you do this. If you'd prefer not to download the Asset Link Tool, you can determine your app's signature manually.

  1. Download the Asset Link Tool.
  2. When the app launches, you’ll be given a list of all applications installed on your device by applicationId. Filter the list by the applicationId you chose earlier and click on that entry.
  3. You’ll see a page listing your app’s signature and a generated Digital Asset Link. Click the Copy or Share buttons at the bottom to export it however you like (e.g., save it to Google Keep, or email it to yourself).

Put the Digital Asset Link in a file called assetlinks.json and upload it to your website at .well-known/assetlinks.json (relative to the root).

Now that you’ve uploaded it, make sure you can access your asset link file in a browser. Check that https://example.com/.well-known/assetlinks.json resolves to the file you just uploaded.

Jekyll based websites

If your website is generated by Jekyll (such as GitHub Pages), you’ll need to add a line of configuration so that the .well-known directory is included in the output. GitHub help has more information on this topic. Create a file called _config.yml at the root of your site (or add to it if it already exists) and enter:

# Folders with dotfiles are ignored by default.
include: [.well-known]

Adding more keys

A Digital Asset Link file can contain more than one app, and for each app, it can contain more than one key. For example, to add a second key just use the Asset Link Tool to determine the key and add it as a second entry to the sha256_cert_fingerprints field. The code in Chrome that parses this JSON is quite strict, so make sure you don’t accidentally add an extra comma at the end of the list.

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.appspot.pwa_directory",
    "sha256_cert_fingerprints": [
      "FA:2A:03:CB:38:9C:F3:BE:28:E3:CA:7F:DA:2E:FA:4F:4A:96:F3:BC:45:2C:08:A2:16:A1:5D:FD:AB:46:BC:9D",
      "4F:FF:49:FF:C6:1A:22:E3:BB:6F:E6:E1:E6:5B:40:17:55:C0:A9:F9:02:D9:BF:28:38:0B:AE:A7:46:A0:61:8C"

    ]
  }
}]

Troubleshooting

Viewing relevant logs

Chrome logs the reason that Digital Asset Links verification fails and you can view the logs on an Android device with adb logcat. If you’re developing on Linux/Mac you can read the relevant logs from a connected device with:

> adb logcat -v brief | grep -e OriginVerifier -e digital_asset_links

For example, if you see the message Statement failure matching fingerprint., you should use the Asset Link Tool to see your app’s signature and make sure it matches the one in your assetlinks.json file. (Be wary of confusing your debug and release keys; see the A note on signing keys section.)

Checking your browser

A Trusted Web Activity will try to adhere to the user’s default choice of browser. If the user’s default browser supports TWAs, it will be launched. Failing that, if any other installed browser supports TWAs, it will be chosen. Finally, the default behavior is to fall back to Custom Tabs mode.

This means that if you’re debugging something to do with Trusted Web Activities, you should make sure you’re using the browser you think you are. You can use the following command to check which browser is being used:

> adb logcat -v brief | grep -e TWAProviderPicker
D/TWAProviderPicker(17168): Found TWA provider, finishing search: com.google.android.apps.chrome

Next Steps

Hopefully if you’ve followed this guide, you'll have a working Trusted Web Activity and have enough knowledge to debug what's going on when verification fails. If not, please have a look at the Troubleshooting section or file a GitHub issue against these docs.

For your next steps, I’d recommend you start off by creating an icon for your app. With that done you can consider deploying your app to the Play Store.

What's New In DevTools (Chrome 78)

Lighthouse 5.2 in the Audits panel

The Audits panel is now running Lighthouse 5.2. The new Third-Party Usage diagnostic audit tells you how much third-party code was requested and how long that third-party code blocked the main thread while the page loaded. See Optimize your third-party resources to learn more about how third-party code can degrade load performance.

Figure 1. The Third-party usage audit.

Chromium issue #772558

Largest Contentful Paint in the Performance panel

When analyzing load performance in the Performance panel, the Timings section now includes a marker for Largest Contentful Paint (LCP). LCP reports the render time of the largest content element visible in the viewport.

Figure 2. The LCP marker in the Timings section.

To highlight the DOM node associated with LCP:

  1. Click the LCP marker in the Timings section.
  2. Hover over the Related Node in the Summary tab to highlight the node in the viewport.

    Figure 3. The Related Node section of the Summary tab.
  3. Click the Related Node to select it in the DOM Tree.

File DevTools issues from the Main Menu

If you ever encounter a bug in DevTools and want to file an issue, or if you ever get an idea on how to improve DevTools and want to request a new feature, go to Main Menu > Help > Report a DevTools issue to create an issue in the DevTools engineering team's tracker. Providing a minimal, reproducible example on Glitch dramatically increases the team's ability to fix your bug or implement your feature request!

Figure 4. Main Menu > Help > Report a DevTools issue.

Feedback

To discuss the new features and changes in this post, or anything else related to DevTools:

Consider Canary

If you're on Mac or Windows, consider using Chrome Canary as your default development browser. Canary gives you access to the latest DevTools features.

Note: Canary is released as soon as it's built, without testing. This means that Canary breaks about once a month. It's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.


New in Chrome 77

Chrome 77 is rolling out now!

I’m Pete LePage, let’s dive in and see what’s new for developers in Chrome 77!

Largest Contentful Paint

Understanding and measuring the real world performance of your site can be hard. Metrics like load, or DOMContentLoaded, don’t tell you what the user is seeing on screen. First Paint, and First Contentful Paint, only capture the beginning of the experience. First Meaningful Paint is better, but it’s complex, and sometimes wrong.

The Largest Contentful Paint API, available starting in Chrome 77, reports the render time of the largest content element visible in the viewport and makes it possible to measure when the main content of the page is loaded.

To measure the Largest Contentful Paint, you’ll need to use a Performance Observer, and look for largest-contentful-paint events.

let lcp;
const po = new PerformanceObserver((eList) => {
  const e = eList.getEntries();
  const last = e[e.length - 1];
  lcp = last.renderTime || last.loadTime;
});

const poOpts = {
  type: 'largest-contentful-paint',
  buffered: true
}
po.observe(poOpts);

Since a page often loads in stages, it’s possible that the largest element on a page will change, so you should only report the last largest-contentful-paint event to your analytics service.

addEventListener('visibilitychange', function fn() {
  const visState = document.visibilityState;
  if (lcp && visState === 'hidden') {
    sendToAnalytics({'lcp': lcp});
    removeEventListener('visibilitychange', fn, true);
  }
}, true);

Phil has a great post about the Largest Contentful Paint on web.dev.

New forms capabilities

Many developers build custom form controls, either to customize the look and feel of existing elements, or to build new controls that aren’t built into the browser. Typically this involves using JavaScript and hidden <input> elements, but it’s not a perfect solution.

Two new web features, added in Chrome 77, make it easier to build custom form controls, and remove many of the existing limitations.

The formdata event

The formdata event is a low-level API that lets any JavaScript code participate in a form submission. To use it, add a formdata event listener to the form you want to interact with.

const form = document.querySelector('form');
form.addEventListener('formdata', ({formData}) => {
  formData.append('my-input', myInputValue);
});

When the user clicks the submit button, the form fires the formdata event, which includes a FormData object that holds all of the data being submitted. Then, in your formdata event handler, you can update or modify the FormData object before it’s submitted.

Form-associated custom elements

Form-associated custom elements help to bridge the gap between custom elements and native controls. Marking an element as form-associated tells the browser to treat the custom element like all other form elements, and adds common properties found on input elements, like name, value, and validity.

class MyCounter extends HTMLElement {
  static formAssociated = true;

  constructor() {
    super();
    this._internals = this.attachInternals();
    this._value = 0;
  }
  ...
}
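The elided part of the class is left out in the article; as a separate, hedged sketch (the element name and increment() method are illustrative, not the article's code), here's one way such an element could report its value to the form with ElementInternals.setFormValue():

class SimpleCounter extends HTMLElement {
  static formAssociated = true;

  constructor() {
    super();
    this._internals = this.attachInternals();
    this._value = 0;
  }

  increment() {
    this._value++;
    // setFormValue() makes the current value part of the form submission data.
    this._internals.setFormValue(String(this._value));
  }
}
customElements.define('simple-counter', SimpleCounter);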

Check out More capable form controls on web.dev for all the details!

Native lazy loading

I’m not sure how I missed native lazy loading in my last video! It’s pretty amazing, so I’m including it now. Lazy loading is a technique that allows you to defer the loading of non-critical resources, like off-screen <img> or <iframe> elements, until they’re needed, increasing the performance of your page.

Starting in Chrome 76, the browser handles lazy loading for you, without the need to write custom lazy loading code, or use a separate JavaScript library.

To tell the browser you want an image or iframe lazy loaded, use the loading="lazy" attribute. Images and <iframe>s that are "above the fold" load normally, and those that are below are only fetched when the user scrolls near them.

<img src="image.jpg" loading="lazy" width="400" height="250" alt="...">

Check out Native lazy-loading for the web on web.dev for details.

Chrome Dev Summit 2019

The Chrome Dev Summit is coming up November 11th and 12th.

It’s a great opportunity to learn about the latest tools and updates coming to the web platform, and hear directly from the Chrome engineering team.

It’ll be streamed live on our YouTube channel, or if you want to attend in person, you can request your invite at the Chrome Dev Summit 2019 website.

And more!

These are just a few of the changes in Chrome 77 for developers, of course, there’s plenty more.

The Contact Picker API, available as an origin trial, is a new, on-demand picker that allows users to select an entry or entries from their contact list and share limited details of the selected contacts with a website.
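As a rough sketch of the API surface proposed for the origin trial (names may change before it ships, and it must be called from a user gesture):

async function pickContacts() {
  if (!('contacts' in navigator)) return;
  // Ask the user to pick one or more contacts and share only names and emails.
  const contacts = await navigator.contacts.select(['name', 'email'], {multiple: true});
  console.log(contacts);
}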

And there are new measurement units in the Intl.NumberFormat API.
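For example, a small sketch of formatting a value with one of the new units (assuming 'kilobyte' is among the supported unit identifiers):

const formatter = new Intl.NumberFormat('en-US', {
  style: 'unit',
  unit: 'kilobyte',
  unitDisplay: 'short'
});
// Logs something like "1,024 kB".
console.log(formatter.format(1024));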

Further reading

This covers only some of the key highlights, check the links below for additional changes in Chrome 77.

Subscribe

Want to stay up to date with our videos? Then subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 78 is released, I’ll be right here to tell you -- what’s new in Chrome!

Feedback

Fresher service workers, by default

Note: This article was updated to reflect that the byte-for-byte service worker update check applies to imported scripts starting in Chrome 78.

tl;dr

Starting in Chrome 68, HTTP requests that check for updates to the service worker script will no longer be fulfilled by the HTTP cache by default. This works around a common developer pain point, in which setting an inadvertent Cache-Control: header on your service worker script could lead to delayed updates.

If you've already opted-out of HTTP caching for your /service-worker.js script by serving it with Cache-Control: max-age=0, then you shouldn't see any changes due to the new default behavior.

Additionally, starting in Chrome 78, the byte-for-byte comparison will be applied to scripts loaded in a service worker via importScripts(). Any change made to an imported script will trigger the service worker update flow, just like a change to the top-level service worker would.

Background

Every time you navigate to a new page that's under a service worker's scope, every time you explicitly call registration.update() from JavaScript, and every time a service worker is "woken up" via a push or sync event, the browser will, in parallel, request the JavaScript resource that was originally passed in to the navigator.serviceWorker.register() call, to look for updates to the service worker script.
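For example, a minimal sketch of triggering that update check explicitly (assuming a service worker is already registered for the page):

navigator.serviceWorker.getRegistration().then((registration) => {
  if (registration) {
    // Triggers the same service worker update check a navigation would.
    registration.update();
  }
});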

For the purposes of this article, let's assume its URL is /service-worker.js and that it contains a single call to importScripts(), which loads additional code that's run inside the service worker:

// Inside our /service-worker.js file:
importScripts('path/to/import.js');

// Other top-level code goes here.

What's changing?

Prior to Chrome 68, the update request for /service-worker.js would be made via the HTTP cache (as most fetches are). This meant if the script was originally sent with Cache-Control: max-age=600, updates within the next 600 seconds (10 minutes) would not go to the network, so the user may not receive the most up-to-date version of the service worker. However, if max-age was greater than 86400 (24 hours), it would be treated as if it were 86400, to avoid users being stuck with a particular version forever.

Starting in Chrome 68, the HTTP cache will be ignored when requesting updates to the service worker script, so existing web applications may see an increase in the frequency of requests for their service worker script. Requests for importScripts will still go via the HTTP cache. But this is just the default: a new registration option, updateViaCache, is available that offers control over this behavior.

updateViaCache

Developers can now pass in a new option when calling navigator.serviceWorker.register(): the updateViaCache parameter. It takes one of three values: 'imports', 'all', or 'none'.

The values determine if and how the browser's standard HTTP cache comes into play when making the HTTP request to check for updated service worker resources.

  • When set to 'imports', the HTTP cache will never be consulted when checking for updates to the /service-worker.js script, but will be consulted when fetching any imported scripts (path/to/import.js, in our example). This is the default, and it matches the behavior starting in Chrome 68.

  • When set to 'all', the HTTP cache will be consulted when making requests for both the top-level /service-worker.js script, as well as any scripts imported inside of the service worker, like path/to/import.js. This option corresponds to the previous behavior in Chrome, prior to Chrome 68.

  • When set to 'none', the HTTP cache will not be consulted when making requests for either the top-level /service-worker.js or for any imported scripts, such as the hypothetical path/to/import.js.

For example, the following code will register a service worker, and ensure that the HTTP cache is never consulted when checking for updates to either the /service-worker.js script, or for any scripts that are referenced via importScripts() inside of /service-worker.js:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js', {
    updateViaCache: 'none',
    // Optionally, set 'scope' here, if needed.
  });
}

Checks for updates to imported scripts

Prior to Chrome 78, any service worker script loaded via importScripts() would be retrieved only once (checking first against the HTTP cache, or via the network, depending on the updateViaCache configuration). After that initial retrieval, it would be stored internally by the browser, and never re-fetched.

The only way to force an already installed service worker to pick up changes to an imported script was to change the script's URL, usually either by adding in a semver value (e.g. importScripts('http://example.com/v1.1.0/index.js')) or by including a hash of the contents (e.g. importScripts('https://example.com/index.abcd1234.js')). A side-effect of changing the imported URL is that the top-level service worker script's contents change, which in turn triggers the service worker update flow.

Starting with Chrome 78, each time an update check is performed for a top-level service worker file, checks will be made at the same time to determine whether or not the contents of any imported scripts have changed. Depending on the Cache-Control headers used, these imported script checks might be fulfilled by the HTTP cache if updateViaCache is set to 'all' or 'imports' (which is the default value), or the checks might go directly against the network if updateViaCache is set to 'none'.

If an update check for an imported script results in a byte-for-byte difference compared to what was previously stored by the service worker, that will in turn trigger the full service worker update flow, even if the top-level service worker file remains the same.

The Chrome 78 behavior matches what Firefox implemented several years ago, in Firefox 56. Safari already implements this behavior as well.

What do developers need to do?

If you've effectively opted-out of HTTP caching for your /service-worker.js script by serving it with Cache-Control: max-age=0 (or a similar value), then you shouldn't see any changes due to the new default behavior.

If you do serve your /service-worker.js script with HTTP caching enabled, either intentionally or because it's just the default for your hosting environment, you may start seeing an uptick of additional HTTP requests for /service-worker.js made against your server—these are requests that used to be fulfilled by the HTTP cache. If you want to continue allowing the Cache-Control header value to influence the freshness of your /service-worker.js, you'll need to start explicitly setting updateViaCache: 'all' when registering your service worker.

Given that there may be a long-tail of users on older browser versions, it's still a good idea to continue setting the Cache-Control: max-age=0 HTTP header on service worker scripts, even though newer browsers might ignore them.
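If you control how the script is served, a minimal Node.js sketch of setting that header could look like this (the file path and port are assumptions):

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  if (req.url === '/service-worker.js') {
    // max-age=0 means update checks are never satisfied by the HTTP cache,
    // even in browsers that predate the Chrome 68 behavior.
    res.writeHead(200, {
      'Content-Type': 'text/javascript',
      'Cache-Control': 'max-age=0'
    });
    fs.createReadStream('./service-worker.js').pipe(res);
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);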

Developers can use this opportunity to decide whether they want to explicitly opt their imported scripts out of HTTP caching now, and add in updateViaCache: 'none' to their service worker registration if appropriate.

Serving imported scripts

Starting with Chrome 78, developers might see more incoming HTTP requests for resources loaded via importScripts(), since they will now be checked for updates.

If you would like to avoid this additional HTTP traffic, set long-lived Cache-Control headers when serving scripts that include semver or hashes in their URLs, and rely on the default updateViaCache behavior of 'imports'.

Alternatively, if you want your imported scripts to be checked for frequent updates, then make sure you either serve them with Cache-Control: max-age=0, or that you use updateViaCache: 'none'.

Further reading

"The Service Worker Lifecycle" and "Caching best practices & max-age gotchas", both by Jake Archibald, are recommended reading for all developers who deploy anything to the web.

Deprecations and removals in Chrome 78

Disallow sync XHR in page dismissal

Chrome now disallows synchronous XHR during page dismissal when the page is being navigated away from or closed by the user. This applies to the following events:

  • beforeunload
  • unload
  • pagehide
  • visibilitychange

To ensure that data is sent to the server when a page unloads, we recommend navigator.sendBeacon() or the Fetch API with the keepalive flag.
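For example, a hedged sketch of sending data during dismissal without synchronous XHR (the /analytics endpoint is an assumption):

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    const payload = JSON.stringify({event: 'page-dismissed', time: Date.now()});
    // sendBeacon() queues the request and lets the page unload immediately.
    navigator.sendBeacon('/analytics', payload);
    // Alternatively, with the Fetch API:
    // fetch('/analytics', {method: 'POST', body: payload, keepalive: true});
  }
});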

For now, enterprise users can use the AllowSyncXHRInPageDismissal policy flag and developers can use the origin trial flag ‘allow-sync-xhr-in-page-dismissal’ to allow synchronous XHR requests during page unload. This is a temporary “opt-out” measure, and we expect to remove this flag in Chrome 82.

Intent to Remove | Chrome Platform Status | Chromium Bug

XSS Auditor

XSS Auditor has been removed from Chrome. The XSS Auditor can introduce cross-site information leaks and mechanisms to bypass the Auditor are widely known.

Intent to Remove | Chrome Platform Status | Chromium Bug

Feedback

The Chromium Chronicle: Monorail’s Grid View!

Episode 6: September 2019

by Tiffany in San Francisco

Chrome’s issue tracker, Monorail, offers a grid view that allows you to visualize your issues in a Kanban style board. When you’re viewing a list of issues, you can click the “Grid” button to activate grid mode!

While on the grid page, you can customize your view to sort issues by almost any field you want! Status, Priority, NextAction, Milestone, Owner, you name it!

The flexibility of the grid view allows you to customize it to fit your team’s needs. For example, below we’ve set up the grid view to show all pending Q3 Monorail work, sorted by owner and sprint date.

If you need more information on each issue, you can view the grid view with “Tile” cells instead. And if you want a bird’s eye view of many, many issues, you can view issues in the grid view as counts. In fact, the grid view even supports loading up to 6,000 issues at once.

All setting changes in the grid view are reflected in the page URL. So once you’ve configured your grid to your needs, you can share a link to your new view with your team. If you want, you could even use the grid view for your weekly team status meetings.

As you use Monorail’s grid view, please file feedback! We’d love to hear your suggestions on how we can make the grid view better.

Additional Resources


Badging for App Icons

What is the Badging API?

Example of Twitter with eight notifications and another app showing a flag type badge.

The Badging API allows installed web apps to set an application-wide badge, shown in an operating-system-specific place associated with the application (such as the shelf or home screen).

Badging makes it easy to subtly notify the user that there is some new activity that might require their attention, or to indicate a small amount of information, such as an unread count.

Badges tend to be more user-friendly than notifications, and can be updated with a much higher frequency, since they don’t interrupt the user. And, because they don’t interrupt the user, they don't need the user's permission.

Read explainer

Possible use cases

Examples of sites that may use this API include:

  • Chat, email, and social apps, to signal that new messages have arrived, or to show the number of unread items.
  • Productivity apps, to signal that a long-running background task (such as rendering an image or video) has completed.
  • Games, to signal that a player action is required (e.g., in Chess, when it is the player's turn).

Current status

Step Status
1. Create explainer Complete
2. Create initial draft of specification Complete
3. Gather feedback & iterate on design In progress
4. Origin trial In progress
5. Launch Not started

See it in action

  1. Using Chrome 73 or later on Windows or Mac, open the Badging API demo.
  2. When prompted, click Install to install the app, or use the Chrome menu to install it.
  3. Open it as an installed PWA. Note, it must be running as an installed PWA (in your task bar or dock).
  4. Click the Set or Clear button to set or clear the badge from the app icon. You can also provide a number for the Badge value.

Note: While the Badging API in Chrome requires an installed app with an icon that can actually be badged, we advise against making calls to the Badging API dependent on the install state. The Badging API can apply to anywhere a browser might want to show a badge, so developers shouldn’t make any assumptions about situations where the browser will display them. Just call the API when it exists. If it works, it works. If not, it simply doesn’t.

How to use the Badging API

Starting in Chrome 73, the Badging API is available as an origin trial for Windows (7+) and macOS.

Origin trials allow you to try out new features and give feedback on usability, practicality, and effectiveness to us, and the web standards community. For more information, see the Origin Trials Guide for Web Developers.

Note: Android is not supported because it requires you to show a notification, though this may change in the future. Chrome OS support is pending implementation of badging on the platform.

Register for the origin trial

  1. Request a token for your origin.
  2. Add the token to your pages. There are two ways to provide this token on any page in your origin:
    • Add an origin-trial <meta> tag to the head of any page. For example, this may look something like: <meta http-equiv="origin-trial" content="TOKEN_GOES_HERE">
    • If you can configure your server, you can also provide the token on pages using an Origin-Trial HTTP header. The resulting response header should look something like: Origin-Trial: TOKEN_GOES_HERE

Alternatives to the origin trial

If you want to experiment with the Badging API locally, without an origin trial, enable the #enable-experimental-web-platform-features flag in chrome://flags.

Using the Badging API during the origin trial

Dogfood: During the origin trial, the API will be available via window.ExperimentalBadge. The code below is based on the current design, and will change before it lands in the browser as a standardized API.

To use the Badging API, your web app needs to meet Chrome’s installability criteria, and users must add it to their home screens.

The ExperimentalBadge interface is a member object on window. It contains two methods:

  • set([number]): Sets the app's badge. If a value is provided, set the badge to the provided value; otherwise, display a plain white dot (or other flag as appropriate to the platform).
  • clear(): Removes the app's badge.

For example:

// In a web page
const unreadCount = 24;
window.ExperimentalBadge.set(unreadCount);

ExperimentalBadge.set() and ExperimentalBadge.clear() can be called from a foreground page, or potentially in the future, a service worker. In either case, it affects the whole app, not just the current page.
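For example, a small sketch of clearing the badge once the user has caught up (the button element is hypothetical):

const markAllReadButton = document.querySelector('#mark-all-read');
markAllReadButton.addEventListener('click', () => {
  // Removes the badge from the app icon for the whole app.
  window.ExperimentalBadge.clear();
});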

In some cases, the OS may not allow the exact representation of the badge. In that case, the browser will attempt to provide the best representation for that device. For example, because the Badging API isn’t supported on Android, Android only ever shows a dot instead of a numeric value.

Note: Don’t assume anything about how the user agent wants to display the badge. We expect some user agents will take a number like "4000" and rewrite it as "99+". If you saturate the badge yourself (for example by setting it to "99") then the "+" won’t appear. No matter the actual number, just call Badge.set(unreadCount) and let the user agent deal with displaying it accordingly.

Feedback

We need your help to ensure that the Badging API works in a way that meets your needs and that we’re not missing any key scenarios.

We’re also interested to hear how you plan to use the Badging API:

  • Have an idea for a use case or an idea where you'd use it?
  • Do you plan to use this?
  • Like it, and want to show your support?

Share your thoughts on the Badging API WICG Discourse discussion.

What's New In DevTools (Chrome 79)

New features for cookies

After recording network activity, select a network resource and then navigate to the updated Cookies tab to understand why that resource's request or response cookies were blocked. See Changes to the default behavior without SameSite to understand why you might be seeing more blocked cookies in Chrome 76 and later.

The Cookies tab.
  • Request Cookies marked in yellow were not sent over the wire. These are hidden by default. Click show filtered out request cookies to show them.
  • Response Cookies marked in yellow were sent over the wire but not stored.
  • Hover over the More Information icon to learn why a cookie was blocked.
  • Most of the data in the Request Cookies and Response Cookies tables comes from the resource's HTTP headers. The Domain, Path, and Expires/Max-Age data comes from the Chrome DevTools Protocol.

Chromium issues #856777, #993843

Click a row in the Cookies pane to view the value of that cookie.

Viewing the value of a cookie.

Note: The main difference between the Cookies tab in the Network panel and the Cookies pane in the Application panel is that the Cookies pane in the Application panel lets you edit and delete cookies.

Chromium issue #462370

Simulate different prefers-color-scheme and prefers-reduced-motion preferences

The prefers-color-scheme media query lets you match your site's style to your user's preferences. For example, if the prefers-color-scheme: dark media query is true, it means that your user has set their operating system to dark mode and prefers dark mode UIs.
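The same preference is also exposed to JavaScript through matchMedia(); here's a small sketch of reading and reacting to it (the DevTools emulation described below should affect this query as well):

const darkQuery = window.matchMedia('(prefers-color-scheme: dark)');
console.log(darkQuery.matches ? 'User prefers dark mode.' : 'User prefers light mode.');
darkQuery.addEventListener('change', (event) => {
  console.log('Preference changed to', event.matches ? 'dark' : 'light');
});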

Open the Command Menu, run the Show Rendering command, and then set the Emulate CSS media feature prefers-color-scheme dropdown to debug your prefers-color-scheme: dark and prefers-color-scheme: light styles.

When prefers-color-scheme: dark is set (middle box) the Styles pane (right box) shows the CSS that gets applied when that media query is true and the viewport shows the dark mode styles (left box).

You can also simulate prefers-reduced-motion: reduce using the Emulate CSS media feature prefers-reduced-motion dropdown next to the Emulate CSS media feature prefers-color-scheme dropdown.

Chromium issue #1004246

Code coverage updates

The Coverage tab can help you find unused JavaScript and CSS.

The Coverage tab now uses new colors to represent used and unused code. This color combination is proven to be more accessible for people with color vision deficiencies. The red bar on the left represents unused code, and the bluish bar on the right represents used code.

The new URL filter text box lets you filter out patterns of URLs.

The Coverage tab.

The Sources panel now displays code coverage data by default. Clicking the red or bluish marks next to the line number opens the Coverage tab and highlights the file.

Coverage data in the Sources panel. Line 8 is an example of unused code. Line 11 is an example of used code.

Chromium issues #1003671, #1004185

Debug why a network resource was requested

After recording network activity, select a network resource and then navigate to the Initiator tab to understand why the resource was requested. The Request call stack section describes the JavaScript call stack leading up to the network request.

The Initiator tab.

Note: You can also access this data by hovering over the Initiator column in the Network Log. We added the Initiator tab because it's more accessible.

Chromium issues #963183, #842488

Console and Sources panels respect indentation preferences again

For a long time DevTools has had a setting to customize your indentation preference to 2 spaces, 4 spaces, 8 spaces, or tabs. Recently the setting was essentially useless because the Console and Sources panels were ignoring it. This bug is now fixed.

Go to Settings > Preferences > Sources > Default Indentation to set your preference.

Chromium issue #977394

New shortcuts for cursor navigation

Press Control+P in the Console or Sources panels to move your cursor to the line above. Press Control+N to move your cursor to the line below.

Chromium issue #983874

Feedback

To discuss the new features and changes in this post, or anything else related to DevTools:

Consider Canary

If you're on Mac or Windows, consider using Chrome Canary as your default development browser. Canary gives you access to the latest DevTools features.

Note: Canary is released as soon as it's built, without testing. This means that Canary breaks about once a month. It's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.


New in Chrome 78

Chrome 78 is rolling out now!

I’m Pete LePage, let’s dive in and see what’s new for developers in Chrome 78!

CSS Properties and Values API

CSS variables, technically called custom properties, are awesome. They let you define and use your own properties throughout your CSS. But, custom properties are not much more than a simple search and replace.

html {
  --my-color: green;
}
.thing {
  color: var(--my-color);
}

If you used a variable for a color, but assigned a URL as a value, the rule would just be silently discarded. With the CSS Properties and Values API, you can define a type and default fallback value for your custom properties.

html {
  --my-color: url('not-a-color'); /* Oops, not a color! */
}
.thing {
  color: var(--my-color);
}

Registering a property is as easy as calling window.CSS.registerProperty() and providing the name of the property you’re defining, the type of property it is, whether it should inherit, and its initial value.

window.CSS.registerProperty({
  name: '--my-color',
  syntax: '<color>',
  inherits: false,
  initialValue: 'black',
});

Take a look at Sam Richard's Smarter custom properties with Houdini’s new API article on web.dev for complete details.

Fresher service workers

Byte-for-byte checks are now performed for service worker scripts imported by importScripts(). In the past, the only way to force an installed service worker to pick up changes to an imported script was to change the imported script's URL, usually either by adding a semver value or a hash to the URL.

importScripts('https://example.com/v1.1.0/index.js');
importScripts('https://example.com/index.abcd1234.js');

Starting in Chrome 78, each time an update check is performed for a top-level service worker file, Chrome will also check whether or not the contents of any imported scripts have changed. If they have, it will trigger the full service worker update flow. This brings Chrome into conformance with the spec, and matches what Firefox and Safari do.

Jeff has all the details in Fresher service workers, by default, including some important things to know about how the HTTP cache impacts the update cycle.

New origin trials

Origin trials provide an opportunity for us to validate experimental features and APIs, and make it possible for you to provide feedback on their usability and effectiveness in broader deployment.

Experimental features are typically only available behind a flag, but when we offer an Origin Trial for a feature, you can register for that origin trial to enable the feature for all users on your origin.

Opting into an origin trial allows you to build demos and prototypes that your beta testing users can try for the duration of the trial without requiring them to flip any special flags in Chrome.

There’s more info on origin trials in the Origin Trials Guide for Web Developers. You can see a list of active origin trials, and sign up for them on the Chrome Origin Trials page.

Native File System

An Origin Trial for the Native File System API starts in Chrome 78 and is expected to run through Chrome 80.

The Native File System API enables developers to build powerful web apps that interact with files on the user's local device. After a user grants a web app access, this API allows web apps to read or save changes directly to files and folders on the user's device.

I’m really excited about all of the new experiences this enables, no more having to “upload” or “download” files I want to work with. Check out my post about the Native File System for all the details, including code, a demo, and how we’re working to keep users safe.

SMS Receiver

An Origin Trial for the SMS Receiver API starts in Chrome 78 and is expected to run through Chrome 80.

The SMS Receiver API, now available as an origin trial, lets your web app receive specially formatted SMS messages for your app's origin. From this, you can programmatically obtain an OTP from an SMS message and verify a phone number for the user more easily.

Eiji wrote Verify phone numbers on the web with the SMS Receiver API with all the details, and how to sign up for the origin trial.

Chrome Dev Summit 2019

Don’t forget to tune into the Chrome Dev Summit on November 11th and 12th, it’ll be streaming live, on the Chrome Developers YouTube channel.

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 78.

Subscribe

Want to stay up to date with our videos? Then subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 79 is released, I’ll be right here to tell you -- what’s new in Chrome!

Feedback

The Chromium Chronicle: Preprocessing Source

Episode 7: October 2019

by Bruce Dawson in Seattle

Sometimes it is helpful to compile a single Chromium source file by hand, perhaps to experiment with compiler optimization options, to preprocess it to a single file to understand some subtle macro details, or to minimize a compiler bug.

A few tricks will let a Chromium developer find and execute the command that compiles a particular source file, with modifications as needed.

Start by going to your output directory and using autoninja (or ninja) to compile the file of interest (and any dependencies) using the ^ suffix. This suffix tells ninja to build the output of the specified file—version.o in this case. Then, touch the file, and compile it (and only it) again with the -v (verbose) flag to ninja:

On Linux or OSX:

$ autoninja ../../base/version.cc^
$ touch ../../base/version.cc
$ autoninja -v ../../base/version.cc^

In the Windows cmd shell ^ is a special character and must be escaped:

C:\> autoninja ../../base/version.cc^^
C:\> touch ../../base/version.cc
C:\> autoninja -v ../../base/version.cc^^

Typical output of the autoninja -v command looks like this (significantly trimmed):

..\..\third_party\llvm-build\Release+Asserts\bin\clang-cl.exe /nologo /showIncludes -imsvc ...

This command allows you to compile the file of interest. To get the preprocessed output, use the following steps:

On Linux or OSX, remove the -o obj/base/base/version.o block from the end, and add -E. This tells the compiler to print the preprocessed file to stdout.

Redirect the output to a file, like this:

../../third_party/llvm-build/Release+Asserts/bin/clang++ -MMD ... -E >version.i

On Windows, remove the /showIncludes option from the beginning (it prints a line of output for each #include) and then add /P in order to preprocess the file instead of compiling it. The results will be saved in the current directory in version.i:

..\..\third_party\llvm-build\Release+Asserts\bin\clang-cl.exe /nologo -imsvc ... /P

Now you can examine the preprocessed file to see what the macros are actually doing, or make experimental compiler-switch changes and recompile to see what happens.

Additional Resources

  • Fast Chrome Builds: For more build-optimization tips (focused on Windows).
  • ETW: Find out how to find Windows performance problems—in Chrome or in the build—by reading the ETW (also known as Xperf) docs.

Deprecations and removals in Chrome 78

-webkit-appearance keywords for arbitrary elements

Changes -webkit-appearance keywords to work only with specific element types. If a keyword is applied to a non-supported element, the element takes the default appearance.

Chrome Platform Status | Chromium Bug

Feedback
