
Web Audio, Autoplay Policy and Games

In September 2017 we announced an upcoming change to how audio would be handled with autoplay behavior policy in Chrome. The policy change was released with Chrome 66 Stable in May 2018.

After feedback from the Web Audio development community we delayed the release of the Web Audio portion of the autoplay policy to give developers more time to update their websites. We’ve also made some changes to the implementation of the policy for Web Audio which will reduce the number of websites that need to adjust their code – especially web games – and therefore provide a better experience for our users.

This policy change is now scheduled to roll out with Chrome 71 in December 2018.

What does the policy change do exactly?

Autoplay is the name given to a piece of content which immediately plays upon the loading of a webpage. For websites which expected to be able to autoplay their content, this change will prevent playback by default. In most cases, the playback will be resumed but in others, a small adjustment to the code will be needed. Specifically, developers must add code which resumes their content if the user interacts with the webpage.

However, if the user arrives on a page with autoplay content and they navigated to that page from a page of the same origin, then that content will never be blocked. Read our earlier blog post on the autoplay policy for more detailed examples.

Additionally, we added a heuristic to learn from users’ past behavior with regard to websites that autoplay audio. We detect when users regularly let audio play for more than 7 seconds during most of their visits to a website, and enable autoplay for that website.

We do this with an index that is stored locally per Chrome profile on a device – it is not synced across devices and is only shared as part of the anonymized user statistics. We call this index the Media Engagement Index (MEI) and you can view it via chrome://media-engagement.

MEI keeps track of how many visits to a site include audio playback that is more than 7 seconds long. Based on a user’s MEI, we believe we can understand whether a user expects audio from a particular website or not – and anticipate the user's intent in the future.

If the user often lets a website’s domain play audio for more than 7 seconds then we assume in future that the user is expecting this website to have the right to autoplay audio. Therefore, we grant that website the right to autoplay audio without requiring the user to interact with a tab from that domain.

However, this right is not guaranteed indefinitely. If the user’s behavior switches – e.g. stopping audio playback or closing the tab within the 7 seconds threshold over the course of several visits – then we remove the website’s right to autoplay.

Both usage of media HTML elements (video and audio) and Web Audio (JavaScript instantiated AudioContext objects) will contribute to the MEI. In preparation for the rollout of this policy, user behavior in relation to Web Audio will start contributing to the MEI from Chrome 70 onwards. This will ensure we are already able to anticipate the user’s desired intent with regard to autoplay and the websites they commonly visit.

It should be noted that iframes can only gain the right to autoplay without user interaction if the parent webpage that embeds the iframe extends that right to the given iframe.

Delaying change to support the community

The Web Audio developer community – particularly the web game developer and WebRTC developer portions of this community – took notice when this change appeared in the Chrome Stable channel.

The community feedback was that many web games and web audio experiences would be affected negatively by this change – specifically, many sites which were not updated would no longer play audio to users. As a result, our team decided it was worth delaying this change to give web audio developers more time to update their websites.

Additionally, we took this time to:

  • Seriously consider whether this policy change was the best course of action or not.
  • Explore ways we could help reduce the number of websites with audio that would be impacted.

For the former, we ultimately decided that the policy change is indeed necessary to improve the user experience for the majority of our users. More detail on what problem the policy change is solving can be read in the next section of this article.

For the latter, we have made an adjustment to our implementation for Web Audio which will reduce the number of websites that were originally impacted. Of the sites we knew were broken by the change – many of which were provided as examples by the web game development community – this adjustment meant that more than 80% of them would work automatically. Our analysis and testing of these example sites can be viewed here. This new adjustment is described in more detail below.

We also made a change to support WebRTC applications; while there is an active capture session, autoplay will be allowed.

What problem is this behavior change aiming to solve?

Browsers have historically been poor at helping the user manage sound. When users open a webpage and receive sound they did not expect or want, they have a poor user experience. This poor user experience is the problem we are trying to solve. Unwanted noise is the primary reason that users do not want their browser to autoplay content.

However, sometimes users want content to autoplay, and a meaningful number of blocked autoplays in Chrome are subsequently played by the user.

Therefore, we believe by learning from the user – and anticipating their intention on a per website basis – we can create the best user experience. If users tend to let content play from a website, we will autoplay content from that site in the future. Conversely, if users tend to stop autoplay content from a given website, we will prevent autoplay for that content by default.

One proposal put forward by the community has been to mute the audio of a tab instead of pausing the autoplay. However, we believe it’s better to halt the autoplay experience so that the website is aware that the autoplay was blocked, and allow the website developer to react to this. For example, while some developers may wish to simply mute audio, other developers might prefer their audio content be paused until the user actively engages with the content – otherwise the user might miss part of the audio experience.

New adjustments to help web game developers

The most common way developers use the Web Audio API is by creating two types of objects to play audio:

Web audio developers will create an AudioContext for playing audio. In order to resume their audio after the autoplay policy has automatically suspended their AudioContext, they need to call the resume() function on this object after the user interacts with the tab:

const context = new AudioContext();

// Setup an audio graph with AudioNodes and schedule playback.
...

// Resume AudioContext playback when user clicks a button on the page.
document.querySelector('button').addEventListener('click', function() {
  context.resume().then(() => {
    console.log('AudioContext playback resumed successfully');
  });
});

There are many interfaces which inherit from AudioNode, one of which is the AudioScheduledSourceNode interface. AudioNodes that implement the AudioScheduledSourceNode interface are commonly referred to as source nodes (such as AudioBufferSourceNode, ConstantSourceNode, and OscillatorNode). Source nodes implement a start() method.

Source nodes commonly represent individual audio snippets that games play, for example: the sound that is played when a player collects a coin or the background music that plays in the current stage. Game developers are very likely to be calling the start() function on source nodes whenever any of these sounds are necessary for the game.

Once we recognized this common pattern in web games we decided to adjust our implementation to the following:

An AudioContext will be resumed automatically when two conditions are met:

  • The user has interacted with a page.
  • The start() method of a source node is called.

Due to this change, most web games will now resume their audio when the user starts playing the game.
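
For illustration, here is a minimal sketch of that pattern (the button selector and the use of an OscillatorNode are placeholder choices, not part of the policy itself):

const context = new AudioContext();

document.querySelector('#play-button').addEventListener('click', () => {
  // OscillatorNode is a source node. Because the user has interacted with
  // the page (this click) and start() is called on a source node, Chrome 71
  // resumes the suspended AudioContext automatically – no explicit
  // context.resume() call is needed.
  const osc = context.createOscillator();
  osc.connect(context.destination);
  osc.start();
  osc.stop(context.currentTime + 0.25);
});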

Moving the web forward

In order to move the web platform forward it’s sometimes necessary to make changes which can break compatibility. Unfortunately, audio autoplay is complex and falls into this category of change. But making this shift is critical to ensure that the web doesn’t stagnate or lose its innovative edge.

Nonetheless, we recognize that applying fixes for websites is not always feasible in the short term for various reasons:

  • Web developers might be focused on a new project and maintenance to an older website is not immediately possible.
  • Web game portals may not have control over the implementation of the games in their catalog and updating hundreds – if not thousands – of games can be time consuming and expensive for publishers.
  • Some websites may simply be very old and – for one reason or another – are no longer maintained but still hosted for historical purposes.

Here is a short JavaScript code snippet which intercepts the creation of new AudioContext objects and automatically triggers the resume() function of these objects when the user performs various interactions. This code should be executed before the creation of any AudioContext objects in your webpage – for example, you could add this code to the <head> tag of your webpage:
(function () {
  // An array of all contexts to resume on the page
  const audioContextList = [];

  // An array of various user interaction events we should listen for
  const userInputEventNames = [
      'click', 'contextmenu', 'auxclick', 'dblclick', 'mousedown',
      'mouseup', 'pointerup', 'touchend', 'keydown', 'keyup'
  ];

  // A proxy object to intercept AudioContexts and
  // add them to the array for tracking and resuming later
  self.AudioContext = new Proxy(self.AudioContext, {
    construct(target, args) {
      const result = new target(...args);
      audioContextList.push(result);
      return result;
    }
  });

  // To resume all AudioContexts being tracked
  function resumeAllContexts(event) {
    let count = 0;

    audioContextList.forEach(context => {
      if (context.state !== 'running') {
        context.resume()
      } else {
        count++;
      }
    });

    // If all the AudioContexts have now resumed then we
    // unbind all the event listeners from the page to prevent
    // unnecessary resume attempts
    if (count == audioContextList.length) {
      userInputEventNames.forEach(eventName => {
        document.removeEventListener(eventName, resumeAllContexts); 
      });
    }
  }

  // We bind the resume function for each user interaction
  // event on the page
  userInputEventNames.forEach(eventName => {
    document.addEventListener(eventName, resumeAllContexts); 
  });
})();

It should be noted that this code snippet will not assist with resuming AudioContexts that are instantiated within an iframe, unless this code snippet is included within the scope of the content of the iframe itself.

Serving our users better

To accompany the policy change we are also introducing a mechanism for users to disable the autoplay policy to cover the cases where the automatic learning isn’t working as expected, or for websites that are rendered unusable by the change. This change will be rolling out with the new policy in Chrome 71 and can be found in the Sound Settings; sites where the user wants to allow autoplay can be added to the Allow list.

How is the MEI constructed for new users?

As mentioned earlier, we build the MEI automatically over time based on the user’s behavior to anticipate their desired intent with regard to a given website with autoplay content. Each website has a score between zero and one in this index. Higher scores indicate the user expects content to play from that website.

However, for new user profiles or if a user clears their browsing data, instead of blocking autoplay everywhere, a pre-seed list based on anonymized user aggregated MEI scores is used to determine which websites can autoplay. This data only determines the initial state of the MEI at the creation of the user profile. As the user browses the web and interacts with websites with autoplay content their personal MEI overrides the default configuration.

The pre-seeded site list is algorithmically generated, rather than manually curated, and any website is eligible to be included. Sites are added to the list if enough users who visit that site permit autoplay on that site. This threshold is percentage-based so as not to favor larger sites.

Finding balance

We have posted new documentation to give more insight into our decision-making process and the design rationale behind this policy, as well as new documentation on how the pre-seeded site list works.

We always put our users first, but we also don’t want to let down the web development community. Sometimes being the browser means that these two goals must be carefully balanced. We believe that with our adjustments to the implementation of the policy, and the additional time we provided for web audio developers to update their code, we will achieve this balance with Chrome 71.

Feedback


What's New In DevTools (Chrome 72)

Note: We'll publish the video version of this page in early February 2019.

New features and major changes coming to Chrome DevTools in Chrome 72 include:

Visualize performance metrics

After recording a page load, DevTools now marks performance metrics like DOMContentLoaded and First Meaningful Paint in the Timings section.

Figure 1. First Meaningful Paint in the Timing section

Highlight text nodes

When you hover over a text node in the DOM Tree, DevTools now highlights that text node in the viewport.

Figure 2. Highlighting a text node

Copy JS path

Suppose you're writing an automation test that involves clicking a node (using Puppeteer's page.click() function, perhaps) and you want to quickly get a reference to that DOM node. The usual workflow is to go to the Elements panel, right-click the node in the DOM Tree, select Copy > Copy selector, and then pass that CSS selector to document.querySelector(). But if the node is in a Shadow DOM this approach doesn't work because the selector yields a path from within the shadow tree.

To quickly get a reference to a DOM node, right-click the DOM node and select Copy > Copy JS path. DevTools copies to your clipboard a document.querySelector() expression that points to the node. As mentioned above, this is particularly helpful when working with Shadow DOM, but you can use it for any DOM node.

Figure 3. Copy JS path

DevTools copies an expression like the one below to your clipboard:

document.querySelector('#demo1').shadowRoot.querySelector('p:nth-child(2)')
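
If your test uses Puppeteer, one rough way to act on that node is to evaluate the copied expression in the page. This is just a sketch assuming an existing Puppeteer page object, since the copied JS path can't be passed to page.click() directly:

// Inside an async test function, where `page` is a Puppeteer Page.
const handle = await page.evaluateHandle(() =>
  document.querySelector('#demo1').shadowRoot.querySelector('p:nth-child(2)')
);
await handle.click();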

Audits panel updates

The Audits panel is now running Lighthouse 3.2. Version 3.2 includes a new audit called Detected JavaScript libraries. This audit lists out what JS libraries Lighthouse has detected on the page. You can find this audit in your report under Best Practices > Passed audits.

Figure 4. Detected JavaScript libraries

Also, you can now access the Audits panel from the Command Menu by typing Lighthouse or PWA.

Figure 5. Typing lighthouse into the Command Menu

Feedback

To discuss the new features and changes in this post, or anything else related to DevTools:

  • File bug reports at Chromium Bugs.
  • Discuss features and changes on the Mailing List. Please don't use the mailing list for support questions. Use Stack Overflow, instead.
  • Get help on how to use DevTools on Stack Overflow. Please don't file bugs on Stack Overflow. Use Chromium Bugs, instead.
  • Tweet us at @ChromeDevTools.
  • File bugs on this doc in the Web Fundamentals repository.

Consider Canary

If you're on Mac or Windows, consider using Chrome Canary as your default development browser. Canary gives you access to the latest DevTools features.

Note: Canary is released as soon as it's built, without testing. This means that Canary breaks about once a month. It's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.

Previous release notes

See the devtools-whatsnew tag for links to all previous DevTools release notes.

Introducing Background Fetch

In 2015 we introduced Background Sync which allows the service worker to defer work until the user has connectivity. This means the user could type a message, hit send, and leave the site knowing that the message will be sent either now, or when they have connectivity.

It's a useful feature, but it requires the service worker to be alive for the duration of the fetch. That isn't a problem for short bits of work like sending a message, but if the task takes too long, the browser will kill the service worker; otherwise it's a risk to the user's privacy and battery.

So, what if you need to download something that might take a long time, like a movie, podcasts, or levels of a game? That's what Background Fetch is for. Background Fetch is a web standard implemented behind the Experimental Web Platform features flag in Chrome 71.

Here's a quick two minute demo showing the traditional state of things, vs using Background Fetch:

Try the demo yourself and browse the code. It requires Chrome 71, and the Experimental Web Platform features flag to be enabled.

This is also being run as an Origin Trial. If you're interested in testing this API with real users without a flag, see below.

How it works

A background fetch works like this:

  1. You tell the browser to perform a group of fetches in the background.
  2. The browser fetches those things, displaying progress to the user.
  3. Once the fetch has completed or failed, the browser opens your service worker and fires an event to tell you what happened. This is where you decide what to do with the responses, if anything.

If the user closes pages to your site after step 1, that's ok; the download will continue. Because the fetch is highly visible and easily abortable, there isn't the privacy concern of a way-too-long background sync task. Because the service worker isn't constantly running, there isn't the concern that it could abuse the system, such as mining bitcoin in the background.

On some platforms (such as Android) it's possible for the browser to close after step 1, as the browser can hand off the fetching to the operating system.

If the user starts the download while offline, or goes offline during the download, the background fetch will be paused and resumed later.

The API

Feature detect

As with any new feature, you want to detect if the browser supports it. For Background Fetch, it's as simple as:

if ('BackgroundFetchManager' in self) {
  // This browser supports Background Fetch!
}

Starting a background fetch

The main API hangs off a service worker registration, so make sure you've registered a service worker first. Then:

navigator.serviceWorker.ready.then(async (swReg) => {
  const bgFetch = await swReg.backgroundFetch.fetch('my-fetch', ['/ep-5.mp3', 'ep-5-artwork.jpg'], {
    title: 'Episode 5: Interesting things.',
    icons: [{
      sizes: '300x300',
      src: '/ep-5-icon.png',
      type: 'image/png',
    }],
    downloadTotal: 60 * 1024 * 1024,
  });
});

Note: Many examples in this article use async functions. If you aren't familiar with them, check out the guide.

backgroundFetch.fetch takes three arguments:

Parameters
id string
uniquely identifies this background fetch.

backgroundFetch.fetch will reject if the ID matches an existing background fetch.

requests Array<Request|string>
The things to fetch. Strings will be treated as URLs, and turned into Requests via new Request(theString).

You can fetch things from other origins as long as the resources allow it via CORS.

Note: Chrome doesn't currently support requests that would require a CORS preflight.

options An object which may include the following:
options.title string
A title for the browser to display along with progress.
options.icons Array<IconDefinition>
An array of objects with a src, sizes, and type.
options.downloadTotal number
The total size of the response bodies (after being un-gzipped).

Although this is optional, it's strongly recommended that you provide it. It's used to tell the user how big the download is, and to provide progress information. If you don't provide this, the browser will tell the user the size is unknown, and as a result the user may be more likely to abort the download.

If the background fetch's downloads exceed the number given here, it will be aborted. It's totally fine if the download is smaller than the downloadTotal, so if you aren't sure what the download total will be, it's best to err on the side of caution.

backgroundFetch.fetch returns a promise that resolves with a BackgroundFetchRegistration. I'll cover the details of that later. The promise rejects if the user has opted out of downloads, or one of the provided parameters is invalid.

Providing many requests for a single background fetch lets you combine things that are logically a single thing to the user. For example, a movie may be split into 1000s of resources (typical with MPEG-DASH), and come with additional resources like images. A level of a game could be spread across many JavaScript, image, and audio resources. But to the user, it's just "the movie", or "the level".

Caution: Chrome's implementation currently only accepts requests without a body. In future, bodies will be allowed, meaning you can use background fetch for large uploads, such as photos and video.

Getting an existing background fetch

You can get an existing background fetch like this:

navigator.serviceWorker.ready.then(async (swReg) => {
  const bgFetch = await swReg.backgroundFetch.get('my-fetch');
});

…by passing the id of the background fetch you want. get returns undefined if there's no active background fetch with that ID.

A background fetch is considered "active" from the moment it's registered, until it either succeeds, fails, or is aborted.

You can get a list of all the active background fetches using getIds:

navigator.serviceWorker.ready.then(async (swReg) => {
  const ids = await swReg.backgroundFetch.getIds();
});

Background fetch registrations

A BackgroundFetchRegistration (bgFetch in the above examples) has the following:

Properties
id string
The background fetch's ID.
uploadTotal number
The number of bytes to be sent to the server.
uploaded number
The number of bytes successfully sent.
downloadTotal number
The value provided when the background fetch was registered, or zero.
downloaded number
The number of bytes successfully received.

This value may decrease; for example, if the connection drops and the download cannot be resumed, the browser restarts the fetch for that resource from scratch.

result

One of the following:

  • "" - The background fetch is active, so there's no result yet.
  • "success" - The background fetch was successful.
  • "failure" - The background fetch failed. This value only appears when the background fetch totally fails, as in the browser cannot retry/resume.
failureReason

One of the following:

  • "" - The background fetch hasn't failed.
  • "aborted" – The background fetch was aborted by the user, or abort() was called.
  • "bad-status" - One of the responses had a not-ok status, e.g. 404.
  • "fetch-error" - One of the fetches failed for some other reason, e.g. CORS, MIX, an invalid partial response, or a general network failure for a fetch that cannot be retried.
  • "quota-exceeded" - Storage quota was reached during the background fetch.
  • "download-total-exceeded" - The provided downloadTotal was exceeded.
recordsAvailable boolean
Can the underlying requests/responses be accessed?

Once this is false, match and matchAll cannot be used.

Methods
abort() Returns Promise<boolean>
Abort the background fetch.

The returned promise resolves with true if the fetch was successfully aborted.

matchAll(request, opts) Returns Promise<Array<BackgroundFetchRecord>>
Get the requests and responses.

The arguments here are the same as the cache API. Calling without arguments returns a promise for all records.

See below for more details.

match(request, opts) Returns Promise<BackgroundFetchRecord>
As above, but resolves with the first match.
Events
progress Fired when any of uploaded, downloaded, result, or failureReason change.

Tracking progress

This can be done via the progress event. Remember that downloadTotal is whatever value you provided, or 0 if you didn't provide a value.

bgFetch.addEventListener('progress', () => {
  // If we didn't provide a total, we can't provide a %.
  if (!bgFetch.downloadTotal) return;

  const percent = Math.round(bgFetch.downloaded / bgFetch.downloadTotal * 100);
  console.log(`Download progress: ${percent}%`);
});

Getting the requests and responses

Caution: In Chrome's current implementation you can only get the requests and responses during backgroundfetchsuccess, backgroundfetchfailure, and backgroundfetchabort service worker events (see below). In future you'll be able to get in-progress fetches.

bgFetch.match('/ep-5.mp3').then(async (record) => {
  if (!record) {
    console.log('No record found');
    return;
  }

  console.log(`Here's the request`, record.request);
  const response = await record.responseReady;
  console.log(`And here's the response`, response);
});

record is a BackgroundFetchRecord, and it looks like this:

Properties
request Request
The request that was provided.
responseReady Promise<Response>
The fetched response.

The response is behind a promise because it may not have been received yet. The promise will reject if the fetch fails.

Service worker events

Events
backgroundfetchsuccess Everything was fetched successfully.
backgroundfetchfailure One or more of the fetches failed.
backgroundfetchabort The background fetch was aborted by the user, or abort() was called.

This is only really useful if you want to perform clean-up of related data.

backgroundfetchclick The user clicked on the download progress UI.

The event objects have the following:

Properties
registration BackgroundFetchRegistration
Methods
updateUI({ title, icons }) Lets you change the title/icons you initially set. This is optional, but it lets you provide more context if necessary. You can only do this once during backgroundfetchsuccess and backgroundfetchfailure events.

Reacting to success/failure

We've already seen the progress event, but that's only useful while the user has a page open to your site. The main benefit of background fetch is that things continue to work after the user leaves the page, or even closes the browser.

If the background fetch successfully completes, your service worker will receive the backgroundfetchsuccess event, and event.registration will be the background fetch registration.

After this event, the fetched requests and responses are no longer accessible, so if you want to keep them, move them somewhere like the cache API.

As with most service worker events, use event.waitUntil so the service worker knows when the event is complete.

Note: You can't hold the service worker open indefinitely here, so avoid doing things that would keep the service worker open for a long time, such as additional fetching.

For example, in your service worker:

addEventListener('backgroundfetchsuccess', (event) => {
  const bgFetch = event.registration;

  event.waitUntil(async function() {
    // Create/open a cache.
    const cache = await caches.open('downloads');
    // Get all the records.
    const records = await bgFetch.matchAll();
    // Copy each request/response across.
    const promises = records.map(async (record) => {
      const response = await record.responseReady;
      await cache.put(record.request, response);
    });

    // Wait for the copying to complete.
    await Promise.all(promises);

    // Update the progress notification.
    event.updateUI({ title: 'Episode 5 ready to listen!' });
  }());
});

Failure may have come down to a single 404, which may not have been important to you, so it might still be worth copying some responses into a cache as above.
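
For example, a backgroundfetchfailure handler along those lines might look like the following sketch (reusing the 'downloads' cache from the example above):

addEventListener('backgroundfetchfailure', (event) => {
  const bgFetch = event.registration;

  event.waitUntil(async function() {
    const cache = await caches.open('downloads');
    const records = await bgFetch.matchAll();

    // Keep whatever did download successfully; responseReady rejects
    // for the requests that failed.
    await Promise.all(records.map(async (record) => {
      try {
        const response = await record.responseReady;
        await cache.put(record.request, response);
      } catch (err) {
        // This particular request failed – skip it.
      }
    }));

    event.updateUI({ title: 'Some of episode 5 could not be downloaded' });
  }());
});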

Reacting to click

The UI displaying the download progress and result is clickable. The backgroundfetchclick event in the service worker lets you react to this. As above event.registration will be the background fetch registration.

The common thing to do with this event is open a window:

addEventListener('backgroundfetchclick', (event) => {
  const bgFetch = event.registration;

  if (bgFetch.result === 'success') {
    clients.openWindow('/latest-podcasts');
  } else {
    clients.openWindow('/download-progress');
  }
});

Origin trial

To get access to this new API on your site, please sign up for the "Background Fetch API" Origin Trial. If you just want to try it out locally, the API can be enabled on the command line:

chrome --enable-blink-features=BackgroundFetch

Passing this flag on the command line enables the API globally in Chrome for the current session.

If you give this API a try, please let us know what you think! Please direct feedback on the API shape to the specification repository, and report implementation bugs to the BackgroundFetch Blink component.

Additional resources

Feedback

New in Chrome 71

In Chrome 71, we've added support for:

And there’s plenty more!

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 71!

Change log

This covers only some of the key highlights; check the links below for additional changes in Chrome 71.

Display relative times with Intl.RelativeTimeFormat()

Twitter showing relative time for latest post

Many web apps use phrases like “yesterday”, “in two days”, or “an hour ago” to indicate when something happened – or is going to happen – instead of displaying the full date and time.

Displaying relative times has become so common that most of the common date/time libraries provide localized functions to handle this for us. In fact, in almost every web app I build, Moment JS is one of the first libraries I add, expressly for this purpose.

Chrome 71 introduces Intl.RelativeTimeFormat(), which shifts the work to the JavaScript engine, and enables localized formatting of relative times. This gives us a small performance boost, and means we only need those libraries as a polyfill when a browser doesn’t support the new APIs yet.

const rtf = new Intl.RelativeTimeFormat('en');

rtf.format(3.14, 'second');
// → 'in 3.14 seconds'

rtf.format(-15, 'minute');
// → '15 minutes ago'

Using it is simple, create a new instance and specify the locale, then just call format with the relative time. Check out Mathias’ The Intl.RelativeTimeFormat API post for full details.

Specifying underline location for vertical text

Vertical text with inconsistent underlines

When Chinese or Japanese text is displayed in a vertical flow, browsers are inconsistent about where the underline is placed; it may be on the left, or on the right.

In Chrome 71, the text-underline-position property now accepts left or right, as part of the CSS3 text decoration spec, which adds several new properties that allow us to specify things like what kind of line to use, and its style, color, and position.

.left {
  text-underline-position: left;
}

.right {
  text-underline-position: right;
}

Speech synthesis requires user activation

We’ve all been surprised when we hit a site and it suddenly starts talking to us. Autoplay policies prevent sites from automatically playing audio, or video files with audio. Some sites have tried to get around this by using the speech synthesis API instead.

Starting in Chrome 71, the speech synthesis API now requires some kind of user activation on the page before it’ll work. This brings it in line with other autoplay policies. If you try to use it before the user has interacted with the page, it will fire an error.

const utterance = new window.SpeechSynthesisUtterance('Hello');
utterance.lang = lang || 'en-US';
try {
  window.speechSynthesis.speak(utterance);
} catch (ex) {
  console.log('speechSynthesis not available', ex);
}

Success: To make sure your code works, I recommend wrapping your speech synthesis call in a try/catch block, so if the page isn't activated, it won't break the rest of your app.

There’s nothing worse than going to a site and having it surprise you, and the co-workers sitting around you.

And more!

These are just a few of the changes in Chrome 71 for developers; of course, there’s plenty more.

  • The Element.requestFullscreen() method can now be customized on Android, allowing you to choose between keeping the navigation bar visible and a completely immersive mode where no user agent controls are shown until a user gesture is performed (see the sketch after this list).
  • The default credentials mode for module script requests has changed from omit to same-origin.
  • And bringing Chrome in line with the Shadow DOM v1 spec, Chrome 71 now calculates the specificity for the :host() and :host-context() pseudo classes, as well as for the arguments for ::slotted().
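
For the fullscreen change, here is a minimal sketch using the navigationUI member of the fullscreen options dictionary (the element and button selectors are just examples):

document.querySelector('#fullscreen-button').addEventListener('click', () => {
  const player = document.querySelector('#player');
  // 'show' keeps the navigation bar available; 'hide' requests the fully
  // immersive mode where no user agent controls are shown until a gesture.
  player.requestFullscreen({ navigationUI: 'show' })
    .catch((err) => console.log('Fullscreen request failed', err));
});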

Chrome DevSummit Videos

If you didn’t make it to Chrome Dev Summit, or maybe you did, but didn’t see all the talks, check out the Chrome Dev Summit 2018 playlist on our YouTube channel.

Eva and Phil went into some neat techniques for using service workers in Building Faster, More Resilient Apps with Service Workers.

Mariko and Jake talked about how they built Squoosh in Complex JS-heavy Web Apps, Avoiding the Slow.

Katie and Houssein covered some great techniques to maximize the performance of your site in Speed Essentials: Key Techniques for Fast Websites.

Jake dropped the cake. And there are plenty of other great videos in the Chrome DevSummit 2018 playlist, so check them out.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 72 is released, I’ll be right here to tell you -- what’s new in Chrome!

Feedback

Registering as a Share Target with the Web Share Target API

What is the Web Share Target API?

System-level share target picker with installed PWA as an option.

On your mobile device, sharing something is usually as simple as clicking the Share button, choosing which app you want to send it to, and then choosing who you want to share it with. For example, after reading an interesting article, I may want to share it via email with a few friends, or Tweet about it.

Until now, only native apps could register as a share target. The Web Share Target API allows installed web apps to register with the underlying OS as a share target to receive shared content from either the Web Share API or system events, like the OS-level share button.

Users can easily share content to your web app because it appears in the system-level share target picker.

Current status

Step Status
1. Create explainer Complete
2. Create initial draft of specification Complete
3. Gather feedback & iterate on design Complete
4. Origin trial Complete
5. Launch Chrome 71+

Web Share Target is currently supported on Android in Chrome 71 or later. Both Mozilla and Microsoft have indicated their support but have not implemented it yet.

We’ve started working on Web Share Target - Level 2, adding support for sharing file objects. Look for a post about that coming soon.

See it in action

  1. Using Chrome 71 or later, open the Web Share Target demo.
  2. When prompted, click Install to add the app to your home screen, or use the Chrome menu to add it to your home screen.
  3. Open any app that includes a native share intent, or use the Share button in the demo app.
  4. Choose Web Share Test

After sharing to the demo app, you should see all of the information sent via the web share target web app.

Register your app as a share target

Note: To register a web app as a share target, it must meet Chrome’s installability criteria, and have been installed by the user.

To register your app as a share target, the web app needs to meet Chrome’s installability criteria. In addition, before a user can share to your app, they must add it to their home screen. This prevents sites from randomly adding themselves to the user’s share intent chooser, and ensures that it’s something that they want to use.

Update your web app manifest

To register your app as a share target, add a share_target entry to the web app manifest.

In the manifest.json file, add the following:

"share_target": {
  "action": "/share-target/",
  "params": {
    "title": "title",
    "text": "text",
    "url": "url"
  }
}

If your application already has a share URL scheme, you can replace the param values with your existing query parameters. For example, if your share URL scheme uses body instead of text, you could replace the above with "text": "body",.

When another application tries to share, your application will be listed as an option in the share intent chooser.

Note: You can only have one share_target per manifest. If you want to share to different places within your app, provide that as an option within the share target landing page.

Handle the incoming content

If the user selects your application, the browser opens a new window at the action URL. It will then generate a query string using the values supplied in the manifest. For example, if the other app provides title and text, the query string would be ?title=hello&text=world.

window.addEventListener('DOMContentLoaded', () => {
  const parsedUrl = new URL(window.location);
  console.log('Title shared: ' + parsedUrl.searchParams.get('title'));
  console.log('Text shared: ' + parsedUrl.searchParams.get('text'));
  console.log('URL shared: ' + parsedUrl.searchParams.get('url'));
});

How you deal with the incoming shared data is up to you, and depends on your app; a rough sketch of the email case follows the list below.

  • An email client could draft a new email, using title as the subject of an email, with text and url concatenated together as the body.
  • A social networking app could draft a new post, ignoring title, using text as the body of the message and adding url as a link. If text is missing, it might use url in the body as well. If url is missing, it might scan text looking for a URL and add that as a link.
  • A text messaging app could draft a new message, using text and url concatenated together and dropping title.
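
As a rough sketch of the email case, the share-target landing page could prefill a compose form like this (the #subject and #body fields are hypothetical elements on that page):

window.addEventListener('DOMContentLoaded', () => {
  const params = new URL(window.location).searchParams;
  // Use the shared title as the subject, and join text and url as the body.
  document.querySelector('#subject').value = params.get('title') || '';
  document.querySelector('#body').value =
      [params.get('text'), params.get('url')].filter(Boolean).join('\n');
});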

What gets shared?

System-level share target picker with installed PWA as an option.

Be sure to check the incoming data; unfortunately, there is no guarantee that other apps will share the appropriate content in the right parameter.

On Android, the url field will be empty because it’s not supported in Android’s share system. Instead URLs will often appear in the text field, or occasionally in the title field.

Badging for App Icons

What is the Badging API?

Examples of badges on launch icons across different platforms.

The Badging API is a new web platform API that allows installed web apps to set an application-wide badge, shown in an operating-system-specific place associated with the application (such as the shelf or home screen).

Badging makes it easy to subtly notify the user that there is some new activity that might require their attention, or it can be used to indicate a small amount of information, such as an unread count.

Badges tend to be more user friendly than notifications, and can be updated with a much higher frequency, since they don’t interrupt the user. And, because they don’t interrupt the user, there’s no special permission needed to use them.

Read explainer

Suggested use cases for the badging API

Examples of sites that may use this API include:

  • Chat, email and social apps, to signal that new messages have arrived, or show the number of unread items
  • Productivity apps, to signal that a long-running background task (such as rendering an image or video) has completed.
  • Games, to signal that a player action is required (e.g., in Chess, when it is the player's turn).

Current status

Step Status
1. Create explainer Complete
2. Create initial draft of specification In Progress
3. Gather feedback & iterate on design In progress
4. Origin trial Not started
5. Launch Not started

How to use the badging API

Dogfood: We are still iterating on the design of the Badging API, and it’s not available in the browser yet. The sample code you see is based on the current design, and will likely change between now and the time it lands in the browser.

To use the badging API, your web app needs to meet Chrome’s installability criteria, and a user must add it to their home screen.

The Badge interface is a member object on Window and Worker. It contains two methods:

  • set([number]): Sets the app's badge. If a value is provided, set the badge to the provided value.
  • clear(): Removes the app's badge.

// In a web page
const unreadCount = 24;
window.Badge.set(unreadCount);

// In a service worker
self.addEventListener('sync', () => {
  self.Badge.set(getUnreadCount());
});

Badge.set() and Badge.clear() can be called from either a foreground page or potentially a service worker. In either case, it affects the whole app, not just the current page.

Caution: The spec and explainer currently allow for strings in the badge, but that is being removed. Only numbers will be permitted.

In some cases, the OS may not allow the exact representation of the badge; in that case, the browser will attempt to provide the best representation for that device. For example, Android only shows a white dot, instead of the numeric value.

Feedback

We need your help to ensure that the Badging API works in a way that meets your needs and that we’re not missing any key scenarios.

We’re also interested to hear how you plan to use the Badging API:

  • Have an idea for a use case or an idea where you'd use it?
  • Do you plan to use this?
  • Like it, and want to show your support?

Share your thoughts on the Badging API WICG Discourse discussion.

Public and private class fields

Several proposals expand the existing JavaScript class syntax with new functionality. This article explains the new public class fields syntax in V8 v7.2 and Chrome 72, as well as the upcoming private class fields syntax.

Here’s a code example that creates an instance of a class named IncreasingCounter:

const counter = new IncreasingCounter();
counter.value;
// logs 'Getting the current value!'
// → 0
counter.increment();
counter.value;
// logs 'Getting the current value!'
// → 1

Note that accessing the value executes some code (i.e., it logs a message) before returning the result. Now ask yourself, how would you implement this class in JavaScript? 🤔

ES2015 class syntax

Here’s how IncreasingCounter could be implemented using ES2015 class syntax:

class IncreasingCounter {
  constructor() {
    this._count = 0;
  }
  get value() {
    console.log('Getting the current value!');
    return this._count;
  }
  increment() {
    this._count++;
  }
}

The class installs the value getter and an increment method on the prototype. More interestingly, the class has a constructor that creates an instance property _count and sets its default value to 0. We currently tend to use the underscore prefix to denote that _count should not be used directly by consumers of the class, but that’s just a convention; it’s not really a “private” property with special semantics enforced by the language.

const counter = new IncreasingCounter();
counter.value;
// logs 'Getting the current value!'
// → 0

// Nothing stops people from reading or messing with the
// `_count` instance property. 😢
counter._count;
// → 0
counter._count = 42;
counter.value;
// logs 'Getting the current value!'
// → 42

Public class fields

The new public class fields syntax allows us to simplify the class definition:

class IncreasingCounter {
  _count = 0;
  get value() {
    console.log('Getting the current value!');
    return this._count;
  }
  increment() {
    this._count++;
  }
}

The _count property is now nicely declared at the top of the class. We no longer need a constructor just to define some fields. Neat!

However, the _count field is still a public property. In this particular example, we want to prevent people from accessing the property directly.

Private class fields

That’s where private class fields come in. The new private fields syntax is similar to public fields, except you mark the field as being private by using #. You can think of the # as being part of the field name:

class IncreasingCounter {
  #count = 0;
  get value() {
    console.log('Getting the current value!');
    return this.#count;
  }
  increment() {
    this.#count++;
  }
}

Private fields are not accessible outside of the class body:

const counter = new IncreasingCounter();
counter.#count;
// → SyntaxError
counter.#count = 42;
// → SyntaxError

Public and private static properties

Class fields syntax can be used to create public and private static properties and methods as well:

class FakeMath {
  // `PI` is a static public property.
  static PI = 22 / 7; // Close enough.

  // `#totallyRandomNumber` is a static private property.
  static #totallyRandomNumber = 4;

  // `#computeRandomNumber` is a static private method.
  static #computeRandomNumber() {
    return FakeMath.#totallyRandomNumber;
  }

  // `random` is a static public method (ES2015 syntax)
  // that consumes `#computeRandomNumber`.
  static random() {
    console.log('I heard you like random numbers…')
    return FakeMath.#computeRandomNumber();
  }
}

FakeMath.PI;
// → 3.142857142857143
FakeMath.random();
// logs 'I heard you like random numbers…'
// → 4
FakeMath.#totallyRandomNumber;
// → SyntaxError
FakeMath.#computeRandomNumber();
// → SyntaxError

Simpler subclassing

The benefits of the class fields syntax become even clearer when dealing with subclasses that introduce additional fields. Imagine the following base class Animal:

class Animal {
  constructor(name) {
    this.name = name;
  }
}

To create a Cat subclass that introduces an additional instance property, you’d previously have to call super() to run the constructor of the Animal base class before creating the property:

class Cat extends Animal {
  constructor(name) {
    super(name);
    this.likesBaths = false;
  }
  meow() {
    console.log('Meow!');
  }
}

That’s a lot of boilerplate just to indicate that cats don’t enjoy taking baths. Luckily, the class fields syntax removes the need for the whole constructor, including the awkward super() call:

class Cat extends Animal {
  likesBaths = false;
  meow() {
    console.log('Meow!');
  }
}

Conclusion

Public class fields are shipping in V8 v7.2 and Chrome 72. We plan on shipping private class fields soon.

Questions about this new feature? Comments about this article? Feel free to ping me on Twitter via @mathias!

The Intl.ListFormat API

Modern web applications often use lists consisting of dynamic data. For example, a photo viewer app might display something like:

This photo includes Ada, Edith, and Grace.

A text-based game might have a different kind of list:

Choose your superpower: invisibility, psychokinesis, or empathy.

Since each language has different list formatting conventions and words, implementing a localized list formatter is non-trivial. Not only does this require a list of all the words (such as “and” or “or” in the above examples) for each language you want to support — in addition you need to encode the exact formatting conventions for all those languages! The Unicode CLDR provides this data, but to use it in JavaScript, it has to be embedded and shipped alongside the other library code. This unfortunately increases the bundle size for such libraries, which negatively impacts load times, parse/compile cost, and memory consumption.

The brand new Intl.ListFormat API shifts that burden to the JavaScript engine, which can ship the locale data and make it directly available to JavaScript developers. Intl.ListFormat enables localized formatting of lists without sacrificing performance.

Usage examples

The following example shows how to create a list formatter for conjunctions using the English language:

const lf = new Intl.ListFormat('en');
lf.format(['Frank']);
// → 'Frank'
lf.format(['Frank', 'Christine']);
// → 'Frank and Christine'
lf.format(['Frank', 'Christine', 'Flora']);
// → 'Frank, Christine, and Flora'
lf.format(['Frank', 'Christine', 'Flora', 'Harrison']);
// → 'Frank, Christine, Flora, and Harrison'

Disjunctions (“or” in English) are supported as well through the optional options parameter:

const lf = new Intl.ListFormat('en', { type: 'disjunction' });
lf.format(['Frank']);
// → 'Frank'
lf.format(['Frank', 'Christine']);
// → 'Frank or Christine'
lf.format(['Frank', 'Christine', 'Flora']);
// → 'Frank, Christine, or Flora'
lf.format(['Frank', 'Christine', 'Flora', 'Harrison']);
// → 'Frank, Christine, Flora, or Harrison'

Here’s an example of using a different language (Chinese, with language code zh):

const lf = new Intl.ListFormat('zh');
lf.format(['永鋒']);
// → '永鋒'
lf.format(['永鋒', '新宇']);
// → '永鋒和新宇'
lf.format(['永鋒', '新宇', '芳遠']);
// → '永鋒、新宇和芳遠'
lf.format(['永鋒', '新宇', '芳遠', '澤遠']);
// → '永鋒、新宇、芳遠和澤遠'

The options parameter enables more advanced usage. Here’s an overview of the various options and their combinations, and how they correspond to the list patterns defined by UTS#35:

  • standard (or no type) – options: {} (default) – a typical “and” list for arbitrary placeholders, e.g. 'January, February, and March'
  • or – options: { type: 'disjunction' } – a typical “or” list for arbitrary placeholders, e.g. 'January, February, or March'
  • unit – options: { type: 'unit' } – a list suitable for wide units, e.g. '3 feet, 7 inches'
  • unit-short – options: { type: 'unit', style: 'short' } – a list suitable for short units, e.g. '3 ft, 7 in'
  • unit-narrow – options: { type: 'unit', style: 'narrow' } – a list suitable for narrow units, where space on the screen is very limited, e.g. '3′ 7″'
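
For example, the unit-short entry above corresponds to:

const lf = new Intl.ListFormat('en', { type: 'unit', style: 'short' });
lf.format(['3 ft', '7 in']);
// → '3 ft, 7 in'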

Note that in many languages (such as English) there may not be a difference among many of these lists. In others, the spacing, the length or presence of a conjunction, and the separators may change.

Conclusion

Intl.ListFormat is available by default in V8 v7.2 and Chrome 72. As this API becomes more widely available, you’ll find libraries dropping their dependency on hardcoded CLDR databases in favor of the native list formatting functionality, thereby improving load-time performance, parse- and compile-time performance, run-time performance, and memory usage.

Questions about this API? Comments about this article? Feel free to let us know on Twitter via @mathias!

Feedback


I’m Awake! Stay Awake with the WakeLock API

What is the Wake Lock API?

To avoid draining the battery, most devices quickly go to sleep when left idle. While this is fine most of the time, some applications need to keep the screen or the device awake in order to complete their work. For example, a run-tracking app (turns the screen off, but keeps the system awake), or a game, like Ball Puzzle, that uses the device motion APIs for input.

The Wake Lock API provides a way to prevent the device from dimming and locking the screen or prevent the device from going to sleep. This capability enables new experiences that, until now, required a native app.

The Wake Lock API aims to reduce the need for hacky and potentially power-hungry workarounds. It addresses the shortcomings of an older API which was limited to simply keeping the screen on, and had a number of security and privacy issues.

Suggested use cases for the Wake Lock API

RioRun, a web app developed by The Guardian that takes you on a virtual audio tour of Rio following the route of the 2016 Olympic marathon, is a perfect use case. Without wake locks, your screen will turn off frequently, making it hard to use.

Of course, there are plenty of others:

  • Kiosk-style apps where it’s important to prevent the screen from turning off.
  • Web based presentation apps where it’s essential to prevent the screen from going to sleep while in the middle of a presentation.

Current status

Step Status
1. Create explainer Complete
2. Create initial draft of specification Complete
3. Gather feedback & iterate on design In Progress
4. Origin trial Not Started
5. Launch Not Started

Note: Big thanks to the folks at Intel, specifically Mrunal Kapade for doing the work to implement this. We depend on a community of committers working together to move the Chromium project forward. Not every Chromium committer is a Googler, and they deserve special recognition!

How to use the Wake Lock API

Dogfood: We’re still working on the Wake Lock API, and it’s only available behind a flag (#enable-experimental-web-platform-features). While in development, bugs are expected, or it may fail to work completely.

Check out the Wake Lock demo and source for the demo.

Wake lock types

The Wake Lock API provides two types of wake locks, screen and system. While they are treated independently, one may imply the effects of the other. For example, a screen wake lock implies that the app should continue running.

screen wake lock

A screen wake lock prevents the device’s screen from turning off so that the user can see the information that’s displayed on screen.

system wake lock

A system wake lock prevents the device’s CPU from entering standby mode so that your app can continue running.

Get a wake lock object

In order to use the Wake Lock API, we need to get a wake lock object for the type of wake lock we want. Once created, the returned promise resolves with a wake lock object, but note that the wake lock isn’t active yet; it needs to be activated first.

let wakeLockObj;
if ('getWakeLock' in navigator) {
  try {
    // Create a wake lock for the type we want.
    wakeLockObj = await navigator.getWakeLock('screen');
    console.log('👍', 'getWakeLock', wakeLockObj);
  } catch (ex) {
    console.error('👎', 'getWakeLock', ex);
  }
}

In some instances, the browser may fail to create the wake lock object and instead throw an error, for example if the device’s battery is low.

Use the wake lock object

We can use the newly created wakeLockObj to acquire a lock, determine the current wake lock state, and receive notifications when the wake lock state changes.

To acquire a wake lock, we simply need to call wakeLockObj.createRequest(), which creates a new WakeLockRequest object. The WakeLockRequest object allows multiple components on the same page to request their own locks. The WakeLock object automatically handles the requests. The wake lock is released when all of the requests have been cancelled.

let wakeLockRequest;
function toggleWakeLock() {
  if (wakeLockRequest) {
    // Stop the existing wake lock request.
    wakeLockRequest.cancel();
    wakeLockRequest = null;
    return;
  }
  wakeLockRequest = wakeLockObj.createRequest();
  // New wake lock request created.
}

Caution: It’s critical to keep a reference to wakeLockRequest created by wakeLockObj.createRequest() in order to release the wake lock later. If the reference to the wakeLockRequest is lost, you won’t be able to cancel the wake lock until the page is closed.

You can track if a wake lock is active by listening for the activechange event on the WakeLock object.

wakeLockObj.addEventListener('activechange', () => {
  console.log('⏰', 'wakeLock active:', wakeLockObj.active);
});

Best Practices

The approach you take depends on the needs of your app. However, you should use the most lightweight approach possible for your app, to minimize your app's impact on system resources.

Before adding wake lock to your app, consider whether your use cases could be solved with one of the following alternative solutions:

  • If your app is performing long-running downloads, consider using background fetch.
  • If your app is synchronizing data from an external server, consider using background sync.

Note: Like most other powerful web APIs, the Wake Lock API is only available when served over HTTPS.

Feedback

We need your help to ensure that the Wake Lock API works in a way that meets your needs and that we’re not missing any key scenarios.

What should the permission model look like? When should the browser notify the user that there’s a wake lock active? Add your thoughts to how should UAs infer consent to take a wakelock GitHub issue.

If there are any features we’re missing, or there are scenarios that are either difficult or impossible to implement with the current design, please file an issue in the w3c/wake-lock repo and provide as much detail as you can.

We’re also interested to hear how you plan to use the Wake Lock API:

  • Have an idea for a use case or an idea where you'd use it?
  • Do you plan to use this?
  • Like it and want to show your support?

Share your thoughts on the Wake Lock API WICG Discourse discussion.

Deprecations and removals in Chrome 72


Removals

Don't allow popups during page unload

Pages may no longer use window.open() to open a new page during unload. The Chrome popup blocker already prohibited this, but now it is prohibited whether or not the popup blocker is enabled.
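
As a purely illustrative sketch (not from the original announcement), code like the following no longer opens a popup:

window.addEventListener('unload', () => {
  // This call is now blocked, even if the popup blocker is disabled.
  window.open('https://example.com/goodbye');
});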

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove HTTP-Based Public Key Pinning

HTTP-Based Public Key Pinning (HPKP) was intended to allow websites to send an HTTP header that pins one or more of the public keys present in the site's certificate chain. Unfortunately, it has very low adoption, and although it provides security against certificate misissuance, it also creates risks of denial of service and hostile pinning. For these reasons, this feature is being removed.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Remove rendering FTP resources.

FTP is a non-securable legacy protocol. When even the Linux kernel is migrating off of it, it's time to move on. One step toward deprecation and removal is to deprecate rendering resources from FTP servers and instead download them. Chrome will still generate directory listings, but any non-directory listing will be downloaded rather than rendered in the browser.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Deprecations

Deprecate TLS 1.0 and TLS 1.1

TLS (Transport Layer Security) is the protocol which secures HTTPS. It has a long history stretching back to the nearly twenty-year-old TLS 1.0 and its even older predecessor, SSL. Both TLS 1.0 and 1.1 have a number of weaknesses.

  • TLS 1.0 and 1.1 use MD5 and SHA-1, both weak hashes, in the transcript hash for the Finished message.
  • TLS 1.0 and 1.1 use MD5 and SHA-1 in the server signature. (Note: this is not the signature in the certificate.)
  • TLS 1.0 and 1.1 only support RC4 and CBC ciphers. RC4 is broken and has since been removed. TLS’s CBC mode construction is flawed and was vulnerable to attacks.
  • TLS 1.0’s CBC ciphers additionally construct their initialization vectors incorrectly.
  • TLS 1.0 is no longer PCI-DSS compliant.

Supporting TLS 1.2 is a prerequisite to avoiding the above problems. The TLS working group has deprecated TLS 1.0 and 1.1. Chrome has now also deprecated these protocols. Removal is expected in Chrome 81 (early 2020).

Intent to Remove | Chromestatus Tracker | Chromium Bug

Deprecate PaymentAddress.languageCode

PaymentAddress.languageCode is the browser's best guess for the language of the text in the shipping, billing, delivery, or pickup address in the Payment Request API. The languageCode is marked at risk in the specification and has already been removed from Firefox and Safari. Usage in Chrome is small enough for safe deprecation and removal. Removal is expected in Chrome 74.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Check If Your Native App Is Installed With getInstalledRelatedApps


What is the getInstalledRelatedApps API?

A web app using getInstalledRelatedApps to determine if its related native app is already installed.

As the capability gap between web and native gets smaller, it becomes easier to offer the same experience for both web and native users. This may lead to cases where users have both the web and native versions installed on the same device. Apps should be able to detect this situation.

The getInstalledRelatedApps API is a new web platform API that allows your web app to check whether your native app is installed on the user's device, and vice versa. With the getInstalledRelatedApps API, you can disable some functionality of one app if it should be provided by the other app instead.

Read explainer

Suggested use cases

There may be some cases where there isn’t feature parity between the web and native apps. With the getInstalledRelatedApps API, you can check if the other version is installed and switch to it, using the functionality there. For example, one of the most common scenarios we’ve heard, and the key reason behind this API, is to help reduce duplicate notifications. Using the getInstalledRelatedApps API, you can check whether the user has the native app installed, and then disable the notification functionality in the web app.

Installable web apps can help prevent confusion between the web and native versions by checking to see if the native version is already installed and either not prompting to install the PWA, or providing different prompts.
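
Here’s a minimal sketch of that idea, assuming the flag-guarded API described later in this post; showInstallButton() is a hypothetical helper in your own code:

let deferredPrompt;
window.addEventListener('beforeinstallprompt', async (event) => {
  // Hold on to the event so we can decide later whether to prompt.
  event.preventDefault();
  deferredPrompt = event;
  const relatedApps = await navigator.getInstalledRelatedApps();
  if (relatedApps.length === 0) {
    // The native app isn't installed, so show our own install UI.
    showInstallButton(); // hypothetical helper
  }
});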

Current status

Step Status
1. Create explainer Complete
2. Create initial draft of specification In Progress
3. Gather feedback & iterate on design In Progress
4. Origin trial Not started
5. Launch Not started

If getInstalledRelatedApps looks familiar, it is. We originally announced this feature in April 2017, when it first went to an origin trial. After that origin trial ended, we took stock of the feedback and spent some time iterating on the design. We hope to launch a new origin trial in the first half of 2019.

How to use the getInstalledRelatedApps API

Dogfood: We are still iterating on the design of the getInstalledRelatedApps API. It’s only available behind a flag (#enable-experimental-web-platform-features). While in development, bugs are expected, or it may fail to work completely. The code below is based on the current design, and will likely change between now and the time it’s standardized.

Check out the getInstalledRelatedApps API Demo and getInstalledRelatedApps API Demo source

Establish a relationship between your apps

In order to use the getInstalledRelatedApps API, you must first create a relationship between your two apps. This relationship is critical: it prevents other apps from using the API to detect whether your app is installed, and prevents sites from collecting information about the apps you have installed on your device.

Define the relationship to your native app

In your web app manifest, add a related_applications property that contains a list of the apps that you want to detect. The related_applications property is an array of objects that contain the platform on which the app is hosted and the unique identifier for your app on that platform.

{
  ...
  "related_applications": [{
    "platform": "play",
    "id": "<package-name>",
    "url": "https://example.com"
  }],
  ...
}

The url property is optional, and the API works fine without it. On Android, the platform must be play. On other devices, platform will be different.

Define the relationship to your web app

Each platform has its own method of verifying a relationship. On Android, the Digital Asset Links system is used to define the association between a website and an application. On other platforms, the way you define the relationship will differ slightly.

In AndroidManifest.xml, add an asset statement that links back to your web app:

<manifest>
  <application>
   ...
    <meta-data android:name="asset_statements" android:resource="@string/asset_statements" />
   ...
  </application>
</manifest>

Then, in strings.xml, add the following asset statement, updating site with your domain. Be sure to include the escaping characters.

<string name="asset_statements">
  [{
    \"relation\": [\"delegate_permission/common.handle_all_urls\"],
    \"target\": {
      \"namespace\": \"web\",
      \"site\": \"https://example.com\"
    }
  }]
</string>

Test for the presence of your native app

Once you’ve updated your native app and added the appropriate fields to the web app manifest, you can add the code to check for the presence of your native app to your web app. Calling navigator.getInstalledRelatedApps() returns a Promise that resolves with an array of your apps that are installed on the user's device.

navigator.getInstalledRelatedApps().then((relatedApps) => {
  relatedApps.forEach((app) => {
    console.log(app.id, app.platform, app.url);
  });
});

Note: Like most other powerful web APIs, the getInstalledRelatedApps API is only available when served over HTTPS.

Feedback

We need your help to ensure that the getInstalledRelatedApps API works in a way that meets your needs and that we’re not missing any key scenarios.

We’re also interested to hear how you plan to use the getInstalledRelatedApps API:

  • Have an idea for a use case or an idea where you'd use it?
  • Do you plan to use this?
  • Like it, and want to show your support?

Share your thoughts on the getInstalledRelatedApps API WICG Discourse discussion.

A Picture is Worth a Thousand Words, Faces, and Barcodes—The Shape Detection API


Warning: We’re currently working on the specification for this API as part of the capabilities project. We’ll keep this post updated as this new API moves from design to implementation.

What is the Shape Detection API?

With APIs like navigator.mediaDevices.getUserMedia and the new Chrome for Android photo picker, it has become fairly easy to capture images or live video data from device cameras, or to upload local images. So far, this dynamic image data—as well as static images on a page—has been opaque, even though images may actually contain a lot of interesting features such as faces, barcodes, and text.

In the past, if developers wanted to extract such features on the client side, for example to build a QR code reader, they had to rely on external JavaScript libraries. This could be expensive from a performance point of view and increase the overall page weight. On the other hand, operating systems including Android, iOS, and macOS, but also hardware chips found in camera modules, typically already have performant and highly optimized feature detectors such as the Android FaceDetector or the iOS generic feature detector CIDetector.

The Shape Detection API opens up these native implementations and exposes them through a set of JavaScript interfaces. Currently, the supported features are face detection through the FaceDetector interface, barcode detection through the BarcodeDetector interface, and text detection (Optical Character Recognition, [OCR]) through the TextDetector interface.

Note: Text detection, despite being an interesting field, is not considered stable enough across either computing platforms or character sets to be standardized at the moment, which is why text detection has been moved to a separate informative specification.

Read explainer

Suggested use cases for the Shape Detection API

As outlined above, the Shape Detection API currently supports the detection of faces, barcodes, and text. The following bullet list contains examples of use cases for all three features.

  • Face detection

    • Online social networking or photo sharing sites commonly let their users annotate people in images. By highlighting the boundaries of detected faces, this task can be facilitated.
    • Content sites can dynamically crop images based on potentially detected faces rather than rely on other heuristics, or highlight detected faces with Ken Burns panning and zooming effects in story-like formats.
    • Multimedia messaging sites can allow their users to overlay funny objects like sunglasses or mustaches on detected face landmarks.
  • Barcode detection

    • Web applications that read QR codes can unlock interesting use cases like online payments or web navigation, or use barcodes for establishing social connections on messenger applications.
    • Shopping apps can allow their users to scan EAN or UPC barcodes of items in a physical store to compare prices online.
    • Airports can expose web kiosks where passengers can scan their boarding passes’ Aztec codes to show personalized information related to their flights.
  • Text detection

    • Online social networking sites can improve the accessibility of user-generated image content by adding detected texts as img[alt] attribute values when no other descriptions are provided.
    • Content sites can use text detection to avoid placing headings on top of hero images with contained text.
    • Web applications can use text detection to translate texts, for example, to translate restaurant menus.

Current status

Step Status
1. Create explainer Complete
2. Create initial draft of specification In Progress
3. Gather feedback & iterate on design In progress
4. Origin trial In progress
5. Launch Not started

How to use the Shape Detection API

The interfaces of all three detectors, the FaceDetector, the BarcodeDetector, and the TextDetector, are very similar. They all provide a single asynchronous method detect that takes an ImageBitmapSource as an input (that is, either a CanvasImageSource, a Blob, or ImageData).

In the case of FaceDetector and BarcodeDetector, optional parameters can be passed to the detector’s constructor that allow for providing hints to the underlying native detectors.

Note: If your ImageBitmapSource has an effective script origin which is not the same as the document’s effective script origin, then attempts to call detect will fail with a new DOMException whose name is SecurityError. If your image origin supports CORS, you can use the crossorigin attribute to request CORS access.
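
For example, here’s a rough sketch of requesting CORS access for a cross-origin image before running detection; it assumes the image server actually sends the appropriate CORS headers, and the URL is only illustrative:

const image = new Image();
image.crossOrigin = 'anonymous'; // request CORS access
image.src = 'https://other-origin.example/photo.jpg';
await image.decode(); // wait until the image is ready to use
const faces = await new FaceDetector().detect(image);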

Working with the FaceDetector

const faceDetector = new FaceDetector({
  // (Optional) Hint to try and limit the amount of detected faces
  // on the scene to this maximum number.
  maxDetectedFaces: 5,
  // (Optional) Hint to try and prioritize speed over accuracy
  // by, e.g., operating on a reduced scale or looking for large features.
  fastMode: false
});
try {
  const faces = await faceDetector.detect(image);
  faces.forEach(face => console.log(face));
} catch (e) {
  console.error('Face detection failed:', e);
}

Working with the BarcodeDetector

const barcodeDetector = new BarcodeDetector({
  // (Optional) A series of barcode formats to search for.
  // Not all formats may be supported on all platforms
  formats: [
    'aztec',
    'code_128',
    'code_39',
    'code_93',
    'codabar',
    'data_matrix',
    'ean_13',
    'ean_8',
    'itf',
    'pdf417',
    'qr_code',
    'upc_a',
    'upc_e'
  ]
});
try {
  const barcodes = await barcodeDetector.detect(image);
  barcodes.forEach(barcode => console.log(barcode));
} catch (e) {
  console.error('Barcode detection failed:', e);
}

Working with the TextDetector

const textDetector = new TextDetector();
try {
  const texts = await textDetector.detect(image);
  texts.forEach(text => console.log(text));
} catch (e) {
  console.error('Text detection failed:', e);
}

Feature detection

Purely checking for the existence of the constructors to feature detect the Shape Detection API doesn’t suffice, as Chrome on Linux and Chrome OS currently still expose the detectors, but they are known to not work (bug). As a temporary measure, we instead recommend doing feature detection like this:

const supported = await (async () => 'FaceDetector' in window &&
    await new FaceDetector().detect(document.createElement('canvas'))
    .then(_ => true)
    .catch(e => e.name === 'NotSupportedError' ? false : true))();

Best practices

All detectors work asynchronously, that is, they don’t block the main thread 🎉. But don’t rely on realtime detection either; allow some time for the detector to do its work.

If you are a fan of Web Workers (and who isn’t?), the good news is that the detectors are exposed there as well. The detection results are serializable and can thus be passed back from the worker to the main app via postMessage. The demo shows this in action.
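
Here’s a rough sketch of that pattern; the file name and message shapes are illustrative, not part of the API:

// worker.js (hypothetical file)
self.addEventListener('message', async (event) => {
  const barcodeDetector = new BarcodeDetector();
  const barcodes = await barcodeDetector.detect(event.data);
  // Detection results are serializable, so they can be posted back.
  self.postMessage(barcodes);
});

// main.js
const worker = new Worker('worker.js');
const imageBitmap = await createImageBitmap(document.querySelector('img'));
worker.postMessage(imageBitmap, [imageBitmap]); // ImageBitmap is transferable
worker.addEventListener('message', (event) => console.log(event.data));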

Not all platform implementations support all features, so be sure to check the support situation carefully and see the API more like a progressive enhancement. For example, some platforms might support face detection per se, but not face landmark detection (eyes, nose, mouth,…); or the existence and the location of text may be recognized, but not the actual text contents.

Note: This API is an optimization and not something guaranteed to be available from the platform for every user. Developers are expected to combine this with their own image recognition code and take advantage of the native optimization when it is available.

Feedback

We need your help to ensure that the Shape Detection API works in a way that meets your needs and that we’re not missing any key scenarios.

We’re also interested to hear how you plan to use the Shape Detection API:

  • Have an idea for a use case or an idea where you’d use it?
  • Do you plan to use this?
  • Like it, and want to show your support?

Share your thoughts on the Shape Detection API WICG Discourse discussion.

Making user activation consistent across APIs


To prevent malicious scripts from abusing sensitive APIs like popups, fullscreen etc., browsers control access to those APIs through user activation. User activation is the state of a browsing session with respect to user actions: an "active" state typically implies either the user is currently interacting with the page, or has completed an interaction since page load. User gesture is a popular but misleading term for the same idea. For example, a swipe or flick gesture by a user does not activate a page and hence is not, from a script standpoint, a user activation.

Major browsers today show widely divergent behavior around how user activation controls the activation-gated APIs. In Chrome, the implementation was based on a token-based model that turned out to be too complex to define a consistent behavior across all activation-gated APIs. For example, Chrome has been allowing incomplete access to activation-gated APIs through postMessage() and setTimeout() calls; and user activation wasn't supported with Promises, XHR, Gamepad interaction, etc. Note that some of these are popular yet long-standing bugs.

In version 72, Chrome ships User Activation v2 which makes user activation availability complete for all activation-gated APIs. This resolves the inconsistencies mentioned above (and a few more, like MessageChannels), which we believe would ease web development around user activation. Moreover, the new implementation provides a reference implementation for a proposed new specification that aims to bring all browsers together in the long run.

How does User Activation v2 work?

The new API maintains a two-bit user activation state at every window object in the frame hierarchy: a sticky bit for historical user activation state (if a frame has ever seen a user activation), and a transient bit for current state (if a frame has seen a user activation in about a second). The sticky bit never resets during the frame's lifetime after it gets set. The transient bit gets set on every user interaction, and is reset either after an expiry interval (about a second) or through a call to an activation-consuming API (e.g. window.open()).

Note that different activation-gated APIs rely on user activation in different ways; the new API is not changing any of these API-specific behaviors. E.g. only one popup is allowed per user activation because window.open() consumes user activation, just as it did before; Navigator.prototype.vibrate() continues to be effective if a frame (or any of its subframes) has ever seen user action, and so on.
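
For example (an illustrative sketch, with button standing in for any element the user clicks), only the first popup succeeds because window.open() consumes the transient activation state:

button.addEventListener('click', () => {
  // The first call consumes the transient user activation state...
  window.open('https://example.com/first');
  // ...so this second call is blocked by the popup blocker.
  window.open('https://example.com/second');
});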

What's changing?

  • User Activation v2 formalizes the notion of user activation visibility across frame boundaries: a user interaction with a particular frame will now activate all containing frames (and only those frames) regardless of their origin. (In Chrome 72, we have a temporary workaround in place to expand the visibility to all same-origin frames. We will remove this workaround once we have a way to explicitly pass user activation to sub-frames.)
  • When an activation-gated API is called from an activated frame but from outside an event handler code, it will work as long as the user activation state is "active" (e.g. has neither expired nor been consumed). Before User Activation v2, it would unconditionally fail.
  • Multiple unused user interactions within the expiry time interval fuse into a single activation corresponding to the last interaction.

Emscripten and npm


WebAssembly (wasm) is often framed as either a performance primitive or a way to run your existing C++ codebase on the web. With squoosh.app we wanted to show that there is at least a third perspective for wasm: making use of the huge ecosystems of other programming languages. With Emscripten you can use C/C++ code, Rust has wasm support built in and the Go team is working on it, too. I'm sure many other languages will follow.

In these scenarios, wasm is not the centerpiece of your app, but rather a puzzle piece: yet another module. Your app already has JavaScript, CSS, image assets, a web-centric build system and maybe even a framework like React. How do you integrate WebAssembly into this setup? In this article we are going to work this out with C/C++ and Emscripten as an example.

Docker

Note: While I will be using Docker, you don't need a deep understanding of Docker to follow this article. If you have Docker installed on your machine, you are good to go!

I have found Docker to be invaluable when working with Emscripten. C/C++ libraries are often written to work with the operating system they are built on. It is incredibly helpful to have a consistent environment. With Docker you get a virtualized Linux system that is already set up to work with Emscripten and has all the tools and dependencies installed. If something is missing, you can just install it without having to worry about how it affects your own machine or your other projects. If something goes wrong, throw the container away and start over. If it works once, you can be sure that it will continue to work and produce identical results.

The Docker Registry has an Emscripten image by trzeci that I have been using extensively.

Integration with npm

In the majority of cases, the entry point to a web project is npm's package.json. By convention most projects can be built with npm install && npm run build.

In general the build artifacts produced by Emscripten (a .js and a .wasm file) should be treated as just another JavaScript module and just another asset. The JavaScript file can be handled by a bundler like webpack or rollup, and the wasm file should be treated like any other bigger binary asset, like images.

As such, the Emscripten build artifacts need to be built before your "normal" build process kicks in:

{
  "name": "my-worldchanging-project",
  "scripts": {
    "build:emscripten": "docker run --rm -v $(pwd):/src trzeci/emscripten ./build.sh",
    "build:app": "<the old build command>",
    "build": "npm run build:emscripten && npm run build:app",
    // ...
  },
  // ...
}

The new build:emscripten task could invoke Emscripten directly, but as mentioned before, I recommend using Docker to make sure the build environment is consistent.

docker run ... trzeci/emscripten ./build.sh tells Docker to spin up a new container using the trzeci/emscripten image and run the ./build.sh command. build.sh is a shell script that you are going to write next! --rm tells Docker to delete the container after it's done running. This way you don't build up a collection of stale machine images over time. -v $(pwd):/src means that you want Docker to "mirror" the current directory ($(pwd)) to /src inside the container. Any changes you make to files in the /src directory inside the container will be mirrored to your actual project. These mirrored directories are called "bind mounts".

Let's take a look at build.sh:

#!/bin/bash

set -e

export OPTIMIZE="-Os"
export LDFLAGS="${OPTIMIZE}"
export CFLAGS="${OPTIMIZE}"
export CPPFLAGS="${OPTIMIZE}"

echo "============================================="
echo "Compiling wasm bindings"
echo "============================================="
(
  # Compile C/C++ code
  emcc \
    ${OPTIMIZE} \
    --bind \
    -s STRICT=1 \
    -s ALLOW_MEMORY_GROWTH=1 \
    -s MALLOC=emmalloc \
    -s MODULARIZE=1 \
    -s EXPORT_ES6=1 \
    -o ./my-module.js \
    src/my-module.cpp

  # Create output folder
  mkdir -p dist
  # Move artifacts
  mv my-module.{js,wasm} dist
)
echo "============================================="
echo "Compiling wasm bindings done"
echo "============================================="

There's a lot to dissect here!

set -e puts the shell into "fail fast" mode. If any commands in the script return an error, the entire script gets aborted immediately. This can be incredibly helpful as the last output of the script will always be a success message or the error that caused the build to fail.

With the export statements you define the values of a couple of environment variables. They allow you to pass additional command-line parameters to the C compiler (CFLAGS), the C++ compiler (CPPFLAGS) and the linker (LDFLAGS). They all receive the optimizer settings via OPTIMIZE to make sure that everything gets optimized the same way. There are a couple of possible values for the OPTIMIZE variable:

  • -O0: Don't do any optimization. No dead code is eliminated, and Emscripten does not minify the JavaScript code it emits, either. Good for debugging.
  • -O3: Optimize aggressively for performance.
  • -Os: Optimize aggressively for performance and size as a secondary criterion.
  • -Oz: Optimize aggressively for size, sacrificing performance if necessary.

For the web I mostly recommend -Os.

The emcc command has a myriad of options of its own. Note that emcc is supposed to be a "drop-in replacement for compilers like GCC or clang". So all flags that you might know from GCC will most likely be implemented by emcc as well. The -s flag is special in that it allows us to configure Emscripten specifically. All available options can be found in Emscripten's settings.js, but that file can be quite overwhelming. Here's a list of the Emscripten flags that I think are most important for web developers:

  • --bind enables embind.
  • -s STRICT=1 drops support for all deprecated build options. This ensures that your code builds in a forward compatible manner.
  • -s ALLOW_MEMORY_GROWTH=1 allows memory to be automatically grown if necessary. At the time of writing, Emscripten will allocate 16MB of memory initially. As your code allocates chunks of memory, this option decides whether these operations will make the entire wasm module fail when memory is exhausted, or whether the glue code is allowed to expand the total memory to accommodate the allocation.
  • -s MALLOC=... chooses which malloc() implementation to use. emmalloc is a small and fast malloc() implementation specifically for Emscripten. The alternative is dlmalloc, a fully-fledged malloc() implementation. You only need to switch to dlmalloc if you are allocating a lot of small objects frequently or if you want to use threading.
  • -s EXPORT_ES6=1 will turn the JavaScript code into an ES6 module with a default export that works with any bundler. Also requires -s MODULARIZE=1 to be set.

The following flags are not always necessary or are only helpful for debugging purposes:

  • -s FILESYSTEM=0 is a flag that relates to Emscripten and its ability to emulate a filesystem for you when your C/C++ code uses filesystem operations. It does some analysis on the code it compiles to decide whether to include the filesystem emulation in the glue code or not. Sometimes, however, this analysis can get it wrong and you pay a rather hefty 70kB in additional glue code for a filesystem emulation that you might not need. With -s FILESYSTEM=0 you can force Emscripten to not include this code.
  • -g4 will make Emscripten include debugging information in the .wasm and also emit a source maps file for the wasm module. You can read more on debugging with Emscripten in their debugging section.

And there you go! To test this setup, let's whip up a tiny my-module.cpp:

#include <cstdio>
#include <emscripten/bind.h>

using namespace emscripten;

int say_hello() {
  printf("Hello from your wasm module\n");
  return 0;
}

EMSCRIPTEN_BINDINGS(my_module) {
  function("sayHello", &say_hello);
}

and an index.html:

<!doctype html>
<title>Emscripten + npm example</title>
Open the console to see the output from the wasm module.
<script type="module">
import wasmModule from "./my-module.js";

const instance = wasmModule({
  onRuntimeInitialized() {
    instance.sayHello();
  }
});
</script>

(Here is a gist containing all files.)

To build everything, run

$ npm install
$ npm run build
$ npm run serve

Navigating to localhost:8080 should show you the following output in the DevTools console:

DevTools showing a message printed via C++ and Emscripten

Adding C/C++ code as a dependency

If you want to build a C/C++ library for your web app, you need its code to be part of your project. You can add the code to your project's repository manually or you can use npm to manage these kinds of dependencies as well. Let's say I want to use libvpx in my webapp. libvpx is a C++ library to encode images with VP8, the codec used in .webm files. However, libvpx is not on npm and doesn't have a package.json, so I can't install it using npm directly.

To get out of this conundrum, there is napa. napa allows you to install any git repository URL as a dependency into your node_modules folder.

Note: If you dislike using napa, please take a look at the appendix for a more Docker-centric solution that doesn't require napa.

Install napa as a dependency:

$ npm install --save napa

and make sure to run napa as an install script:

{
  // ...
  "scripts": {
    "install": "napa",
    // ...
  },
  "napa": {
    "libvpx": "git+https://github.com/webmproject/libvpx"
  }
  // ...
}

When you run npm install, napa takes care of cloning the libvpx GitHub repository into your node_modules under the name libvpx.

You can now extend your build script to build libvpx. libvpx uses configure and make to be built. Luckily, Emscripten can help ensure that configure and make use Emscripten's compiler. For this purpose there are the wrapper commands emconfigure and emmake:

# ... above is unchanged ...
echo "============================================="
echo "Compiling libvpx"
echo "============================================="
(
  rm -rf build-vpx || true
  mkdir build-vpx
  cd build-vpx
  emconfigure ../node_modules/libvpx/configure \
    --target=generic-gnu
  emmake make
)
echo "============================================="
echo "Compiling libvpx done"
echo "============================================="

echo "============================================="
echo "Compiling wasm bindings"
echo "============================================="
# ... below is unchanged ...

Note: Some libraries provide a --target flag (or similar) to target a specific processor architecture. This will often pull in assembler code that takes advantage of features specific to that architecture and can't be compiled to WebAssembly. If a flag like that is present (check with ./configure --help), make sure to set it to a generic target.

A C/C++ library is split into two parts: the headers (traditionally .h or .hpp files) that define the data structures, classes, constants, etc. that a library exposes, and the actual library (traditionally .so or .a files). To use the VPX_CODEC_ABI_VERSION constant of the library in your code, you have to include the library's header files using an #include statement:

#include "vpxenc.h"
#include <emscripten/bind.h>

int say_hello() {
  printf("Hello from your wasm module with libvpx %d\n", VPX_CODEC_ABI_VERSION);
  return 0;
}

The problem is that the compiler doesn't know where to look for vpxenc.h. This is what the -I flag is for. It tells the compiler which directories to check for header files. Additionally, you also need to give the compiler the actual library file:

# ... above is unchanged ...
echo "============================================="
echo "Compiling wasm bindings"
echo "============================================="
(
  # Compile C/C++ code
  emcc \
    ${OPTIMIZE} \
    --bind \
    -s STRICT=1 \
    -s ALLOW_MEMORY_GROWTH=1 \
    -s ASSERTIONS=0 \
    -s MALLOC=emmalloc \
    -s MODULARIZE=1 \
    -s EXPORT_ES6=1 \
    -o ./my-module.js \
    -I ./node_modules/libvpx \
    src/my-module.cpp \
    build-vpx/libvpx.a

# ... below is unchanged ...

If you run npm run build now, you will see the process builds a new .js and a new .wasm file and that the demo page will indeed output the constant:

DevTools showing the ABI version of libvpx printed via Emscripten

You will also notice that the build process takes a long time. The reason for long build times can vary. In the case of libvpx, it takes a long time because it compiles an encoder and a decoder for both VP8 and VP9 every time you run your build command, even though the source files haven't changed. Even a small change to your my-module.cpp will take a long time to build. It would be very beneficial to keep the build artifacts of libvpx around once they have been built the first time.

One way to achieve this is using environment variables.

Note: If you dislike this solution, please take a look at the appendix for a more Docker-centric solution.

# ... above is unchanged ...
eval $@

echo "============================================="
echo "Compiling libvpx"
echo "============================================="
test -n "$SKIP_LIBVPX" || (
  rm -rf build-vpx || true
  mkdir build-vpx
  cd build-vpx
  emconfigure ../node_modules/libvpx/configure \
    --target=generic-gnu
  emmake make
)
echo "============================================="
echo "Compiling libvpx done"
echo "=============================================
# ... below is unchanged ...

(Here's a gist containing all the files.)

The eval command allows us to set environment variables by passing parameters to the build script. The test command will skip building libvpx if $SKIP_LIBVPX is set (to any value).

Now you can compile your module but skip rebuilding libvpx:

$ npm run build:emscripten -- SKIP_LIBVPX=1

Customizing the build environment

Sometimes libraries depend on additional tools to build. If these dependencies are missing in the build environment provided by the Docker image, you need to add them yourself. As an example, let's say you also want to build the documentation of libvpx using doxygen. Doxygen is not available inside your Docker container, but you can install it using apt.

If you were to do that in your build.sh, you would re-download and re-install doxygen every time you want to build your library. Not only would that be wasteful, but it would also stop you from working on your project while offline.

Here it makes sense to build your own Docker image. Docker images are built by writing a Dockerfile that describes the build steps. Dockerfiles are quite powerful and have a lot of commands, but most of the time you can get away with just using FROM, RUN and ADD. In this case:

FROM trzeci/emscripten

RUN apt-get update && \
    apt-get install -qqy doxygen

With FROM you can declare which Docker image you want to use as a starting point. I chose trzeci/emscripten as a basis — the image you have been using all along. With RUN you instruct Docker to run shell commands inside the container. Whatever changes these commands make to the container is now part of the Docker image. To make sure that your Docker image has been built and is available before you run build.sh, you have to adjust your package.json a bit:

{
  // ...
  "scripts": {
    "build:dockerimage": "docker image inspect -f '.' mydockerimage || docker build -t mydockerimage .",
    "build:emscripten": "docker run --rm -v $(pwd):/src mydockerimage ./build.sh",
    "build": "npm run build:dockerimage && npm run build:emscripten && npm run build:app",
    // ...
  },
  // ...
}

(Here's a gist containing all the files.)

This will build your Docker image, but only if it has not been built yet. Then everything runs as before, but now the build environment has the doxygen command available, which will cause the documentation of libvpx to be built as well.

Conclusion

It is not surprising that C/C++ code and npm are not a natural fit, but you can make it work quite comfortably with some additional tooling and the isolation that Docker provides. This setup will not work for every project, but it's a decent starting point that you can adjust for your needs. If you have improvements, please share.

Appendix: Making use of Docker image layers

An alternative solution is to encapsulate more of these problems with Docker and Docker's smart approach to caching. Docker executes Dockerfiles step-by-step and assigns the result of each step an image of its own. These intermediate images are often called "layers". If a command in a Dockerfile hasn't changed, Docker won't actually re-run that step when you are re-building the Dockerfile. Instead it reuses the layer from the last time the image was built.

Previously, you had to go through some effort to not rebuild libvpx every time you build your app. Instead you can move the building instructions for libvpx from your build.sh into the Dockerfile to make use of Docker's caching mechanism:

FROM trzeci/emscripten

RUN apt-get update && \
    apt-get install -qqy doxygen git && \
    mkdir -p /opt/libvpx/build && \
    git clone https://github.com/webmproject/libvpx /opt/libvpx/src
RUN cd /opt/libvpx/build && \
    emconfigure ../src/configure --target=generic-gnu && \
    emmake make

(Here's a gist containing all the files.)

Note that you need to manually install git and clone libvpx as you don't have bind mounts when running docker build. As a side-effect, there is no need for napa anymore.

What's New In DevTools (Chrome 73)


Note: We'll publish the video version of this page in mid-March 2019.

Here's what's new in DevTools in Chrome 73.

Logpoints

Use Logpoints to log messages to the Console without cluttering up your code with console.log() calls.

To add a logpoint:

  1. Right-click the line number where you want to add the Logpoint.

    Adding a Logpoint
    Figure 1. Adding a Logpoint
  2. Select Add logpoint. The Breakpoint Editor pops up.

    The Breakpoint Editor
    Figure 2. The Breakpoint Editor
  3. In the Breakpoint Editor, enter the expression that you want to log to the Console.

    Typing the Logpoint expression
    Figure 3. Typing the Logpoint expression
  4. Press Enter or click outside of the Breakpoint Editor to save. The orange badge on top of the line number represents the Logpoint.

    An orange Logpoint badge on line 174
    Figure 4. An orange Logpoint badge on line 174

The next time that the line executes, DevTools logs the result of the Logpoint expression to the Console.

The result of the Logpoint expression in the Console
Figure 5. The result of the Logpoint expression in the Console

See Chromium issue #700519 to report bugs or suggest improvements.

Styles properties in Inspect Mode

When inspecting a node, DevTools now shows an expanded tooltip containing commonly important style properties like font, margin, and padding.

Inspecting a node
Figure 6. Inspecting a node

To inspect a node:

  1. Click Inspect.

  2. In your viewport, hover over the node.

Export code coverage data

Code coverage data can now be exported as a JSON file. The JSON looks like this:

[
  {
    "url": "https://wndt73.glitch.me/style.css",
    "ranges": [
      {
        "start": 0,
        "end": 21
      },
      {
        "start": 45,
        "end": 67
      }
    ],
    "text": "body { margin: 1em; } figure { padding: 0; } h1 { color: #317EFB; }"
  },
  ...
]

url is the URL of the CSS or JavaScript file that DevTools analyzed. ranges describes the portions of the code that were used. start is the starting offset for a range that was used. end is the ending offset. text is the full text of the file.

In the example above, the range 0 to 21 corresponds to body { margin: 1em; } and the range 45 to 67 corresponds to h1 { color: #317EFB; }. In other words, the first and last rulesets were used and the middle one was not.
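
As a rough sketch of processing the exported file (the file name here is just an example), you could total up how many bytes of each file were actually used:

const coverage = require('./coverage.json'); // the exported file
for (const entry of coverage) {
  const usedBytes = entry.ranges.reduce(
      (sum, range) => sum + (range.end - range.start), 0);
  const totalBytes = entry.text.length;
  console.log(`${entry.url}: ${usedBytes} of ${totalBytes} bytes used`);
}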

To analyze how much code is used on page load and export the data:

  1. Press Control+Shift+P or Command+Shift+P (Mac) to open the Command Menu.

    The Command Menu
    Figure 7. The Command Menu
  2. Start typing coverage, select Show Coverage and then press Enter.

    Show Coverage
    Figure 8. Show Coverage

    The Coverage tab opens.

    The Coverage tab
    Figure 9. The Coverage tab
  3. Click Reload. DevTools reloads the page and records how much code is used compared to how much is shipped.
  4. Click Export to export the data as a JSON file.

Code coverage is also available in Puppeteer, a browser automation tool maintained by the DevTools team. See Coverage.

See Chromium issue #717195 to report bugs or suggest improvements.

Keyboard navigation in the Console

You can now use the keyboard to navigate the Console. Here's an example.

Pressing Shift+Tab focuses the last message (or result of an evaluated expression). If the message contains links, the last link is highlighted first. Pressing Enter opens the link in a new tab. Pressing the Left arrow key highlights the entire message (see Figure 13).

Focusing a link
Figure 11. Focusing a link

Pressing the Up arrow key focuses the next link.

Focusing another link
Figure 12. Focusing another link

Pressing the Up arrow key again focuses the entire message.

Focusing an entire message
Figure 13. Focusing an entire message

Pressing the Right arrow key expands a collapsed stack trace (or object, array, and so on).

Expanding a collapsed stack trace
Figure 14. Expanding a collapsed stack trace

Pressing the Left arrow key collapses an expanded message or result.

See Chromium issue #865674 to report bugs or suggest improvements.

AAA contrast ratio line in the Color Picker

The Color Picker now shows a second line to indicate which colors meet the AAA contrast ratio recommendation. The AA line has been there since Chrome 65.

The AA line (top) and AAA line (bottom)
Figure 15. The AA line (top) and AAA line (bottom)

Colors between the 2 lines represent colors that meet the AA recommendation but do not meet the AAA recommendation. When a color meets the AAA recommendation, anything on the same side of that color also meets the recommendation. For example, in Figure 15 anything below the lower line is AAA, and anything above the upper line does not even meet the AA recommendation.

See Contrast ratio in the Color Picker to learn how to access this feature.

See Chromium issue #879856 to report bugs or suggest improvements.

Save custom geolocation overrides

The Sensors tab now lets you save custom geolocation overrides.

  1. Press Control+Shift+P or Command+Shift+P (Mac) to open the Command Menu.

    The Command Menu
    Figure 16. The Command Menu
  2. Type sensors, select Show Sensors, and press Enter. The Sensors tab opens.

    The Sensors tab
    Figure 17. The Sensors tab
  3. In the Geolocation section click Manage. Settings > Geolocations opens up.

    The Geolocations tab in Settings
    Figure 18. The Geolocations tab in Settings
  4. Click Add location.

  5. Enter a location name, latitude, and longitude, then click Add.

    Adding a custom geolocation
    Figure 19. Adding a custom geolocation

See Chromium issue #649657 to report bugs or suggest improvements.

Code folding

The Sources and Network panels now support code folding.

Lines 54 to 65 have been folded
Figure 20. Lines 54 to 65 have been folded

To enable code folding:

  1. Press F1 to open Settings.
  2. Under Settings > Preferences > Sources enable Code folding.

To fold a block of code:

  1. Hover your mouse over the line number where the block starts.
  2. Click Fold.

See Chromium issue #328431 to report bugs or suggest improvements.

Messages tab

The Frames tab has been renamed to the Messages tab. This tab is only available in the Network panel when inspecting a web socket connection.

The Messages tab
Figure 21. The Messages tab

See Chromium issue #802182 to report bugs or suggest improvements.

Feedback

To discuss the new features and changes in this post, or anything else related to DevTools:

  • File bug reports at Chromium Bugs.
  • Discuss features and changes on the Mailing List. Please don't use the mailing list for support questions. Use Stack Overflow, instead.
  • Get help on how to use DevTools on Stack Overflow. Please don't file bugs on Stack Overflow. Use Chromium Bugs, instead.
  • Tweet us at @ChromeDevTools.
  • File bugs on this doc in the Web Fundamentals repository.

Consider Canary

If you're on Mac or Windows, consider using Chrome Canary as your default development browser. Canary gives you access to the latest DevTools features.

Note: Canary is released as soon as it's built, without testing. This means that Canary breaks about once a month. It's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.



New in Chrome 72


In Chrome 72, we've added support for public class fields, the User Activation API, and Intl.ListFormat. And there’s plenty more!

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 72!

Change log

This covers only some of the key highlights; check the links below for additional changes in Chrome 72.

Public class fields

My first language was Java, and learning JavaScript threw me for a bit of a loop. How did I create a class? Or inheritance? What about public and private properties and methods? Many of the recent updates to JavaScript make object oriented programming much easier.

I can now create classes that work like I expect them to, complete with constructors, getters and setters, static methods, and public properties.

Thanks to V8 7.2, which ships with Chrome 72, you can now declare public class fields directly in the class definition, eliminating the need to do it in the constructor.

class Counter {
  _value = 0;
  get value() {
    return this._value;
  }
  increment() {
    this._value++;
  }
}

const counter = new Counter();
console.log(counter.value);
// → 0
counter.increment();
console.log(counter.value);
// → 1

Support for private class fields is in the works!

More details are in Mathias’s article on class fields.

User Activation API

Remember when sites could automatically play sound as soon as the page loaded? You scramble to hit the mute key, or figure out which tab it was, and close it. That’s why some APIs require activation via a user gesture before they’ll work. Unfortunately, browsers handle activation in different ways.

User activation API before and after user has interacted with the page.

Chrome 72 introduces User Activation v2, which simplifies user activation for all gated APIs. It’s based on a new specification that aims to standardize how activation works across all browsers.

There’s a new userActivation property on both navigator and MessageEvent, that has two properties: hasBeenActive and isActive:

  • hasBeenActive indicates if the associated window has ever seen a user activation in its lifecycle.
  • isActive indicates if the associated window currently has a user activation in its lifecycle.
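
Here's an illustrative sketch of reading those properties; button stands in for any element the user clicks, and the exact expiry interval for the transient state is up to the browser:

console.log(navigator.userActivation.hasBeenActive); // false before any interaction
console.log(navigator.userActivation.isActive);      // false

button.addEventListener('click', () => {
  console.log(navigator.userActivation.isActive);      // true inside the handler
  setTimeout(() => {
    console.log(navigator.userActivation.isActive);      // false, the transient state has expired
    console.log(navigator.userActivation.hasBeenActive); // still true
  }, 5000);
});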

More details are in Making user activation consistent across APIs

Localizing lists of things with Intl.ListFormat

I love the Intl APIs, they’re super helpful for localizing content into other languages! In Chrome 72, there’s a new Intl.ListFormat API whose format() method makes rendering lists easier. Like other Intl APIs, it shifts the burden to the JavaScript engine, without sacrificing performance.

Initialize it with the locale you want, then call format, and it’ll use the correct words and syntax. It can do conjunctions, adding the localized equivalent of and (and look at those beautiful Oxford commas). It can do disjunctions, adding the localized equivalent of or. And by providing some additional options, you can do even more.

const opts = {type: 'disjunction'};
const lf = new Intl.ListFormat('fr', opts);
lf.format(['chien', 'chat', 'oiseau']);
// → 'chien, chat ou oiseau'
lf.format(['chien', 'chat', 'oiseau', 'lapin']);
// → 'chien, chat, oiseau ou lapin'
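
And a quick conjunction sketch for comparison (conjunction is the default type; the exact punctuation depends on the locale data):

const lfEn = new Intl.ListFormat('en', {type: 'conjunction'});
lfEn.format(['dog', 'cat', 'bird']);
// → 'dog, cat, and bird'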

Check out the Intl.ListFormat API post for more details!

And more!

These are just a few of the changes in Chrome 72 for developers; of course, there’s plenty more.

  • Chrome 72 changes the behavior of Cache.addAll() to better match the spec. Previously, if there were duplicate entries in the same call, later requests would simply overwrite the first. To match the spec, if there are duplicate entries, it will reject with an InvalidStateError.
  • Requests for favicons are now handled by the service worker, as long as the request URL is on the same origin as the service worker.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 73 is released, I’ll be right here to tell you -- what’s new in Chrome!

Prototyping Platform Packs for Lighthouse


TL;DR: Platform Packs is a new Lighthouse feature that is currently under development, and we would love to hear your feedback!

By auditing for performance, accessibility and other best practices, Lighthouse provides developers with important guidance that they can use to improve their web pages. Many developers, however, use different technologies to build their site (such as a CMS or JavaScript framework) and may need more specific advice.

Platform Packs is a feature that will extend Lighthouse to also include specific platform-based recommendations. Instead of only surfacing generalized advice, Lighthouse will provide additional messages that explain how to address certain audits on the platforms it detects on a website.

Prototype of the WordPress Platform Pack

The community will get to decide what the recommendations for different platforms should be. A separate repository has been created to consolidate ideas and a prototype of this feature can already be viewed with Lighthouse Viewer.

Which platforms will Lighthouse support?

We are starting with WordPress as our first prototype platform, and plan to expand the list in the future to other popular CMS and JavaScript frameworks (React, Angular, etc...).

How will this feature show up on my Lighthouse report?

There are two options that are being considered:

  • Automatically detect which platforms are being used on a page (such as WordPress) and immediately surface additional platform-pack messages for applicable audits.
  • Automatically detect which platforms are being used on a page (such as WordPress) and provide a toggle that allows the user to switch between a regular and an updated version of Lighthouse.

How will platform-specific descriptions be modified by the community?

We’re exploring options to enable anyone to contribute platform-specific recommendations in the near future. In the meantime, feel free to submit PRs directly to the Lighthouse Platform Packs repository or leave suggestions in this Google Sheet for WordPress.

Feedback

We would love to hear any feedback you may have:

  • Which platforms should we prioritize in the future after WordPress?
  • Do you have a preference for how this feature will show up on your Lighthouse report?
  • Any other suggestions?

Leave a comment in this discussion issue if you have any thoughts.

RTCQuicTransport Coming to an Origin Trial Near You (Chrome 73)


What?

The RTCQuicTransport is a new web platform API that allows exchanging arbitrary data with remote peers using the QUIC protocol. It’s intended for peer to peer use cases, and therefore is used with a standalone RTCIceTransport API to establish a peer-to-peer connection through ICE. The data is transported reliably and in order (see section below for details on unordered & unreliable delivery). Since it is a generic, bidirectional data transport, it can be used for gaming, file transfers, media transport, messaging, etc.

Why?

A powerful low level data transport API can enable applications (like real time communications) to do new things on the web. You can build on top of the API, creating your own solutions, pushing the limits of what can be done with peer to peer connections, for example, unlocking custom bitrate allocation knobs. In the future, further support for encoded media could even enable building your own video communication application with low level controls. WebRTC’s NV effort is to move towards lower level APIs, and experimenting early with this is valuable.

Why QUIC?

The QUIC protocol is desirable for real time communications. It is built on top of UDP, has built in encryption, congestion control and is multiplexed without head of line blocking. The RTCQuicTransport gives very similar abilities as the RTCDataChannel API, but uses QUIC rather than SCTP as its transport protocol. Because the RTCQuicTransport is a standalone API, it doesn’t have the overhead of the RTCPeerConnection API, which includes the real time media stack.

How?

General API overview

The API has 3 main abstractions, the RTCIceTransport, RTCQuicTransport and RTCQuicStream.

RTCQuicTransport diagram showing architecture of API

RTCIceTransport

ICE is a protocol to establish peer-to-peer connections over the internet and is used in WebRTC today. This object provides a standalone API to establish an ICE connection. It is used as the packet transport for the QUIC connection, and the RTCQuicTransport takes it in its constructor.

RTCQuicTransport

Represents a QUIC connection. It is used to establish a QUIC connection and create QUIC streams. It also exposes relevant stats for the QUIC connection level.

RTCQuicStream

Used for reading and writing data to/from the remote side. Streams transport data reliably and in order. Multiple streams can be created from the same RTCQuicTransport and once data is written to a stream it fires an “onquicstream” event on the remote transport. Streams offer a way to distinguish different data on the same QUIC connection. Common examples can be sending separate files across separate streams, small chunks of data across different streams, or different types of media across separate streams. RTCQuicStreams are lightweight, are multiplexed over a QUIC connection and do not cause head of line blocking to other RTCQuicStreams.

Connection Setup

The following is an example for setting up a peer-to-peer QUIC connection. Like RTCPeerConnection, the RTCQuicTransport API requires the use of a secure signaling channel to negotiate the parameters of the connection, including its security parameters. The RTCIceTransport negotiates its ICE parameters (ufrag and password), as well as RTCIceCandidates.

Note: The RTCQuicTransport connection is set up with a pre-shared key API. We do not currently plan on keeping this API past the origin trial. It will be replaced by signaling remote certificate fingerprints to validate self-signed certificates used in the handshake, once this support has been added to QUIC in Chromium.

RTCQuicTransport diagram showing architecture of API

Client perspective:

const iceTransport = new RTCIceTransport();
const quicTransport = new RTCQuicTransport(iceTransport);
// Signal parameters, key and candidates.
signalingChannel.send({
  iceParams: iceTransport.getLocalParameters(),
  quicKey: quicTransport.getKey(),
});
iceTransport.onicecandidate = e => {
  if (e.candidate) {
    signalingChannel.send({candidate: e.candidate} );
  }
}

// When remote parameters are signaled, start connection.
signalingChannel.onMessage = async ({iceParams, candidate}) => {
  if (iceParams) {
    iceTransport.start(iceParams);
    quicTransport.connect();
  } else if (candidate) {
    iceTransport.addRemoteCandidate(candidate);
  }
};

Server perspective:

const iceTransport = new RTCIceTransport();
const quicTransport = new RTCQuicTransport(iceTransport);
// Signal parameters, key and candidates.
signalingChannel.send({
  iceParams: iceTransport.getLocalParameters(),
});
iceTransport.onicecandidate = e => {
  if (e.candidate) {
    signalingChannel.send({candidate: e.candidate});
  }
}

// When remote parameters are signaled, start connection.
signalingChannel.onMessage = async ({iceParams, quicKey, candidate}) => {
  if (iceParams && quicKey) {
    iceTransport.start(iceParams);
    quicTransport.listen(quicKey);
  } else if (candidate) {
    iceTransport.addRemoteCandidate(candidate);
  }
};

Data Transfer

Data transfer can be achieved using the RTCQuicStream APIs for reading and writing:

RTCQuicStreamReadResult readInto(Uint8Array data);
void write(RTCQuicStreamWriteParameters data);
Promise<void> waitForWriteBufferedAmountBelow(unsigned long amount);
Promise<void> waitForReadable(unsigned long amount);

Buffering

The promises returned by the waitFor* methods allow buffering data when JavaScript is busy. Back pressure is applied to the send side when the read buffer becomes full on the receive side. The send side has a write buffer that can fill when back pressure has been applied, and therefore the write side has a waitForWriteBufferedAmountBelow method as well to allow waiting for room in the buffer to write. More information on writing/reading data can be found in the further developer documentation.
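
As a rough illustration (a sketch only; the 64 KiB threshold is arbitrary, and the amount/finished fields on the read result are assumed from the API shapes listed above):

// Writer: apply back pressure by waiting for the write buffer to drain.
async function sendChunks(stream, chunks) {
  for (let i = 0; i < chunks.length; i++) {
    await stream.waitForWriteBufferedAmountBelow(64 * 1024);
    stream.write({data: chunks[i], finish: i === chunks.length - 1});
  }
}

// Reader: wait until data is readable, then copy it into a buffer.
async function receiveAll(stream) {
  const buffer = new Uint8Array(4096);
  let result;
  do {
    await stream.waitForReadable(1);
    result = stream.readInto(buffer);
    // Process result.amount bytes from buffer here.
  } while (!result.finished);
}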

Unordered/Unreliable Delivery

While an RTCQuicStream only supports sending data reliably and in order, unreliable/unordered delivery can be achieved through other means. For unordered delivery, one can send small chunks of data on separate streams because data is not ordered between streams. For unreliable delivery, one can send small chunks of data with finish set to true, followed by calling reset() on the stream after a timeout. The timeout should be dependent on how many retransmissions are desired before dropping the data.
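
For example, one way to approximate unreliable delivery might look like this (a sketch; the timeout value is illustrative and should reflect how long retransmissions are acceptable):

// Send one chunk per stream, then reset the stream after a timeout so that
// the data is no longer retransmitted if it hasn't arrived by then.
function sendUnreliable(quicTransport, chunk, timeoutMs = 200) {
  const stream = quicTransport.createStream();
  stream.write({data: chunk, finish: true});
  setTimeout(() => stream.reset(), timeoutMs);
}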

When?

The origin trial will start in Chrome 73 and will be available up to and including Chrome 75. After that the origin trial will end. Based upon feedback and interest we will make appropriate changes and either ship the API, continue with a new origin trial of this API, or discontinue the API.

Where?

The Chrome browser on all platforms except iOS.

What else?

Feedback

One of the main goals of the origin trial is to get feedback from you, the developers. We’re interested in:

  • What does this API enable for you?
  • How does this API improve upon other data transport APIs (WebSockets or WebRTC’s RTCDataChannel)? How could it improve?
  • Performance
  • API ergonomics

Register for the origin trial

  1. Request a token for your origin.
  2. Add the token to your pages. There are two ways to provide this token on any page in your origin:
    • Add an origin-trial <meta> tag to the head of any page. For example, this may look something like: <meta http-equiv="origin-trial" content="TOKEN_GOES_HERE">
    • If you can configure your server, you can also provide the token on pages using an Origin-Trial HTTP header. The resulting response header should look something like: Origin-Trial: TOKEN_GOES_HERE

Web Specification

The draft specification has moved ahead of the API in the origin trial, including:

  • Unidirectional streams that are more closely aligned with WHATWG streams
  • Disabling retransmissions
  • (Coming soon) datagrams

We are interested in implementing the full specification and beyond (including WHATWG stream support), but want to hear your feedback first!

Security

Security in the QUIC handshake is enforced through the use of a pre-shared key to establish an encrypted P2P QUIC connection. This key needs to be signaled over a secure out-of-band channel with confidentiality and integrity guarantees. Note that the key will be exposed to JavaScript.

Active Attack

Unlike DTLS-SRTP, which only requires integrity for signaling the certificate fingerprint, signaling the pre-shared key requires both integrity and confidentiality. If the pre-shared key is compromised (say, by the server in the signaling channel), an active attacker could potentially mount a man-in-the-middle attack against the QUIC handshake.

Current status

Step                                      Status
1. Create explainer                       Complete
2a. RTCQuicTransport Specification        In Progress
2b. RTCIceTransport Specification         In Progress
3. Gather feedback & iterate on design    In Progress
4. Origin trial                           Starts in Chrome 73!
5. Launch                                 Not started

Lightning-fast templates & Web Components: lit-html & LitElement

Today we're excited to announce the first stable releases of our two next-generation web development libraries: lit-html and LitElement.

lit-html is a tiny, fast, expressive library for HTML templating. LitElement is a simple base class for creating Web Components with lit-html templates.

If you've been following the projects, you probably know what lit-html and LitElement are all about (and you can skip to the end if you like). If you're new to lit-html and LitElement, read on for an overview.

lit-html: a tiny, fast library for HTML templating

lit-html is a tiny (just over 3k bundled, minified, and gzipped) and fast JavaScript library for HTML templating. lit-html works well with functional programming approaches, letting you express your application's UI declaratively, as a function of its state.

import {html, render} from 'lit-html';

const myTemplate = (name) => html`
    <div>
      Hi, my name is ${name}.
    </div>
`;

It's simple to render a lit-html template:

render(myTemplate('Ada'), document.body);

Re-rendering a template only updates the data that's changed:

render(myTemplate('Grace'), document.body);

lit-html is efficient, expressive, and extensible:

  • Efficient. lit-html is lightning fast. When data changes, lit-html doesn't need to do any diffing; instead, it remembers where you inserted expressions in your template and only updates these dynamic parts.
  • Expressive. lit-html gives you the full power of JavaScript, declarative UI, and functional programming patterns. The expressions in a lit-html template are just JavaScript, so you don't need to learn a custom syntax and you have all the expressiveness of the language at your disposal. lit-html supports many kinds of values natively: strings, DOM nodes, arrays and more. Templates themselves are values that can be computed, passed to and from functions, and nested.
  • Extensible. lit-html is also customizable and extensible—your very own template construction kit. Directives customize how values are handled, allowing for asynchronous values, efficient keyed-repeats, error boundaries, and more. lit-html includes several ready-to-use directives and makes it easy to define your own; a sketch using one of the built-in directives follows below.
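
As one example, the ready-made repeat directive performs efficient keyed repeats. A minimal sketch (module paths as published for lit-html 1.x):

import {html, render} from 'lit-html';
import {repeat} from 'lit-html/directives/repeat.js';

// Each item is keyed by id, so reordering the array moves existing DOM nodes
// instead of recreating them.
const todoList = (todos) => html`
  <ul>
    ${repeat(todos, (todo) => todo.id, (todo) => html`<li>${todo.task}</li>`)}
  </ul>
`;

render(todoList([{id: 1, task: 'Try lit-html'}]), document.body);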

A number of libraries and projects have already incorporated lit-html. You can find a list of some of these libraries in the awesome-lit-html repo on GitHub.

If templating is all you need, you can get started now with the lit-html docs. If you'd like to build components to use in your app or share with your team, read on to learn more.

LitElement: a lightweight Web Component base class

LitElement is a lightweight base class that makes it easier than ever to build and share Web Components.

LitElement uses lit-html to render components and adds APIs to declare reactive properties and attributes. Elements update automatically when their properties change. And they update fast, without diffing.

Here's a simple LitElement component in TypeScript:

import {LitElement, html, customElement, property} from 'lit-element';

@customElement('name-tag')
class NameTag extends LitElement {
  @property()
  name = 'a secret';

  render() {
    return html`<p>Hi, my name is ${this.name}!</p>`;
  }
}

(We have a great vanilla JavaScript API also.)
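
For comparison, here's roughly the same component written with the plain JavaScript API (a sketch using the static properties getter instead of decorators):

import {LitElement, html} from 'lit-element';

class NameTag extends LitElement {
  // Declared properties are reactive: changing them triggers an update.
  static get properties() {
    return {name: {type: String}};
  }

  constructor() {
    super();
    this.name = 'a secret';
  }

  render() {
    return html`<p>Hi, my name is ${this.name}!</p>`;
  }
}

customElements.define('name-tag', NameTag);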

This creates an element you can use anywhere you'd use a regular HTML element:

<name-tag name="Ida"></name-tag>

If you use Web Components already, you'll be happy to hear that they're now natively supported in Chrome, Safari and Firefox. Edge support is coming soon, and polyfills are only needed for legacy browser versions.

If you're new to Web Components, you should give them a try! Web Components let you extend HTML in a way that interoperates with other libraries, tools, and frameworks. This makes them ideal for sharing UI elements within a large organization, publishing components for use anywhere on the web, or building UI design systems like Material Design.

You can use custom elements anywhere you use HTML: in your main document, in a CMS, in Markdown, or in views built with frameworks like React and Vue. You can also mix and match LitElement components with other Web Components, whether they've been written using vanilla web technologies or made with the help of tools like Salesforce Lightning Web Components, Ionic's Stencil, SkateJS or the Polymer library.

Get started

Want to try lit-html and LitElement? A good starting point is the LitElement tutorial:

If you're interested in using lit-html by itself, or integrating lit-html templating into another project, see the lit-html docs:

As always, let us know what you think. You can reach us on Slack or Twitter. Our projects are open source (of course!) and you can report bugs, file feature requests or suggest improvements on GitHub:

Using Trusted Web Activity

Last updated: February 6th, 2019

Trusted Web Activities are a new way to integrate your web-app content, such as your PWA, with your Android app using a protocol based on Custom Tabs.

Note: Trusted Web Activities are available in Chrome on Android, version 72 and above.

Looking for the code?

There are a few things that make Trusted Web Activities different from other ways to integrate web content with your app:

  1. Content in a Trusted Web activity is trusted -- the app and the site it opens are expected to come from the same developer. (This is verified using Digital Asset Links.)
  2. Trusted Web activities come from the web: they're rendered by the user's browser, in exactly the same way a user would see them in their browser, except that they run fullscreen. Web content should be accessible and useful in the browser first.
  3. Browsers are also updated independently of Android and your app -- Chrome, for example, is available back to Android Jelly Bean. That saves on APK size and ensures you can use a modern web runtime. (Note that since Lollipop, WebView has also been updated independently of Android, but there are a significant number of pre-Lollipop Android users.)
  4. The host app doesn't have direct access to web content in a Trusted Web activity or any other kind of web state, like cookies and localStorage. Nevertheless, you can coordinate with the web content by passing data to and from the page in URLs (e.g. through query parameters, custom HTTP headers, and intent URIs).
  5. Transitions between web and native content are between activities. Each activity (i.e. screen) of your app is either completely provided by the web or by an Android activity.

To make it easier to test, there are currently no qualifications for content opened in the preview of Trusted Web activities. You can expect, however, that Trusted Web activities will need to meet the same Add to Home Screen requirements. You can audit your site for these requirements using the Lighthouse "user can be prompted to Add to Home screen" audit.

Today, if the user's version of Chrome doesn't support Trusted Web activities, Chrome will fall back to a simple toolbar using a Custom Tab. It is also possible for other browsers to implement the same protocol that Trusted Web activities use. While the host app has the final say on what browser gets opened, we recommend the same policy as for Custom Tabs: use the user's default browser, so long as that browser provides the required capabilities.

Getting started

Setting up a Trusted Web Activity (TWA) doesn’t require developers to author Java code, but Android Studio is required. This guide was created using Android Studio 3.3. Check the docs on how to install it.

Create a Trusted Web Activity Project

When using Trusted Web Activities, the project must target API 16 or higher.

Note: This section will guide you on setting up a new project on Android Studio. If you are already familiar with the tool feel free to skip to the Getting the TWA Library section.

Open Android Studio and click on Start a new Android Studio project.

Android Studio will prompt you to choose an Activity type. Since TWAs use an Activity provided by the support library, choose Add No Activity and click Next.

In the next step, the wizard will prompt for the project's configuration. Here's a short description of each field:

  • Name: The name that will be used for your application on the Android Launcher.
  • Package Name: A unique identifier for Android applications on the Play Store and on Android devices. Check the documentation for more information on requirements and best practices for creating package names for Android apps.
  • Save location: Where Android Studio will create the project in the file system.
  • Language: The project doesn't require writing any Java or Kotlin code. Select Java, as the default.
  • Minimum API Level: The Support Library requires at least API Level 16. Select API 16 or any version above.

Leave the remaining checkboxes unchecked, as we will not be using Instant Apps or AndroidX artifacts, and click Finish.

Get the TWA Support Library

To set up the TWA library in the project you will need to edit a couple of files. Look for the Gradle Scripts section in the Project Navigator. Both files are called build.gradle, which may be a bit confusing, but the descriptions in parentheses help identify the correct one.

The first file is the Project level build.gradle. Look for the one with your project name next to it.

Add the Jitpack configuration to the list of repositories, under the allprojects section:

allprojects {
   repositories {
       google()
       jcenter()
       maven { url "https://jitpack.io" }
   }
}

Android Studio will prompt to synchronize the project. Click on the Sync Now link.

Note: The support library for Trusted Web Activities will be part of Jetpack in the future, and the previous step won’t be required anymore.

The second file we need to change is the Module level build.gradle.

The Trusted Web Activities library uses Java 8 features, so the first change is to enable Java 8. Add a compileOptions section to the bottom of the android section, as below:

android {
        ...
    compileOptions {
       sourceCompatibility JavaVersion.VERSION_1_8
       targetCompatibility JavaVersion.VERSION_1_8
    }
}

The next step will add the TWA Support Library to the project. Add a new dependency to the dependencies section:

dependencies {
   implementation 'com.github.GoogleChrome.custom-tabs-client:customtabs:3a71a75c9f'
}

Android Studio will show a prompt asking to synchronize the project once more. Click on the Sync Now link and synchronize it.

Add the TWA Activity

Setting up the TWA Activity is achieved by editing the Android App Manifest.

On the Project Navigator, expand the app section, followed by the manifests and double click on AndroidManifest.xml to open the file.

Since we asked Android Studio not to add any Activity to our project when creating it, the manifest is empty and contains only the application tag.

Add the TWA Activity by inserting an activity tag into the application tag:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    package="com.example.twa.myapplication">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme"
        tools:ignore="GoogleAppIndexingWarning">
        <activity
            android:name="android.support.customtabs.trusted.LauncherActivity">

           <!-- Edit android:value to change the url opened by the TWA -->
           <meta-data
               android:name="android.support.customtabs.trusted.DEFAULT_URL"
               android:value="https://airhorner.com" />

           <!-- This intent-filter adds the TWA to the Android Launcher -->
           <intent-filter>
               <action android:name="android.intent.action.MAIN" />
               <category android:name="android.intent.category.LAUNCHER" />
           </intent-filter>

           <!--
             This intent-filter allows the TWA to handle Intents to open
             airhorner.com.
           -->
           <intent-filter>
               <action android:name="android.intent.action.VIEW"/>
               <category android:name="android.intent.category.DEFAULT" />
               <category android:name="android.intent.category.BROWSABLE"/>

               <!-- Edit android:host to handle links to the target URL-->
               <data
                 android:scheme="https"
                 android:host="airhorner.com"/>
           </intent-filter>
        </activity>
    </application>
</manifest>

The tags added to the XML are standard Android App Manifest tags. There are two pieces of information relevant in the context of Trusted Web Activities:

  1. The meta-data tag tells the TWA Activity which URL it should open. Change the android:value attribute with the URL of the PWA you want to open. In this example, it is https://airhorner.com.
  2. The second intent-filter tag allows the TWA to intercept Android Intents that open https://airhorner.com. The android:host attribute inside the data tag must point to the domain being opened by the TWA.

Note: When running the project at this stage, the URL Bar from Custom Tabs will still show on the top of the screen. This is not a bug.

The next section will show how to set up Digital Asset Links to verify the relationship between the website and the app, and remove the URL bar.

Remove the URL bar

Trusted Web Activities require an association between the Android application and the website to be established to remove the URL bar.

This association is created via Digital Asset Links, and it must be established in both directions: linking from the app to the website and from the website to the app.

For debugging purposes, it is possible to set up the app-to-website validation and configure Chrome to skip the website-to-app validation.

Establish an association from the app to the website

Open the string resources file app > res > values > strings.xml and add the Digital Asset Links statement below:

<resources>
    <string name="app_name">AirHorner TWA</string>
    <string name="asset_statements">
        [{
            \"relation\": [\"delegate_permission/common.handle_all_urls\"],
            \"target\": {
                \"namespace\": \"web\",
                \"site\": \"https://airhorner.com\"}
        }]
    </string>
</resources>

Change the contents of the site attribute to match the scheme and domain opened by the TWA.

Back in the Android App Manifest file, AndroidManifest.xml, link to the statement by adding a new meta-data tag, but this time as a child of the application tag:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.twa.myapplication">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">

        <meta-data
            android:name="asset_statements"
            android:resource="@string/asset_statements" />

        <activity>
            ...
        </activity>

    </application>
</manifest>

We have now established a relationship from the Android application to the website. It is helpful to be able to debug this part of the relationship before creating the website-to-application validation.

Here’s how to test this on a development device:

Enable debug mode

  1. Open Chrome on the development device, navigate to chrome://flags, search for an item called Enable command line on non-rooted devices, change it to ENABLED and then restart the browser.
  2. Next, in the Terminal application of your operating system, use the Android Debug Bridge (installed with Android Studio) to run the following command:
adb shell "echo '_ --disable-digital-asset-link-verification-for-url=\"https://airhorner.com\"' > /data/local/tmp/chrome-command-line"

Close Chrome and re-launch your application from Android Studio. The application should now be shown in full-screen.

Note: It may be necessary to force-close Chrome so that it restarts with the correct command line. Go to Android Settings > Apps & notifications > Chrome, and click on Force stop.

Establish an association from the website to the app

There are two pieces of information that the developer needs to collect from the app in order to create the association:

  • Package Name: The app's package name. This is the same package name generated when creating the app. It can also be found inside the Module build.gradle, under Gradle Scripts > build.gradle (Module: app), as the value of the applicationId attribute.
  • SHA-256 Fingerprint: Android applications must be signed in order to be uploaded to the Play Store. The same signature is used to establish the connection between the website and the app, through the SHA-256 fingerprint of the upload key.

The Android documentation explains in detail how to generate a key using Android Studio. Make sure to take note of the path, alias and passwords for the key store, as you will need them for the next step.

Extract the SHA-256 fingerprint using the keytool, with the following command:

keytool -list -v -keystore [keystore-path] -alias [key-alias] -storepass [keystore-password] -keypass [key-password]

The value for the SHA-256 fingerprint is printed under the Certificate fingerprints section. Here’s an example output:

keytool -list -v -keystore ./mykeystore.ks -alias test -storepass password -keypass password

Alias name: key0
Creation date: 28 Jan 2019
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=Test Test, OU=Test, O=Test, L=London, ST=London, C=GB
Issuer: CN=Test Test, OU=Test, O=Test, L=London, ST=London, C=GB
Serial number: ea67d3d
Valid from: Mon Jan 28 14:58:00 GMT 2019 until: Fri Jan 22 14:58:00 GMT 2044
Certificate fingerprints:
     SHA1: 38:03:D6:95:91:7C:9C:EE:4A:A0:58:43:A7:43:A5:D2:76:52:EF:9B
     SHA256: F5:08:9F:8A:D4:C8:4A:15:6D:0A:B1:3F:61:96:BE:C7:87:8C:DE:05:59:92:B2:A3:2D:05:05:A5:62:A5:2F:34
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

With both pieces of information at hand, head over to the assetlinks generator, fill in the fields and hit Generate Statement. Copy the generated statement and serve it from your domain, at the URL /.well-known/assetlinks.json.

Note: The assetlinks.json file must be under /.well-known/assetlinks.json, at the root of the domain, as that's the only place Chrome will look for it.
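
For reference, the generated statement is a JSON array that mirrors the asset_statements string above, but points back at the app. A sketch using the package name and fingerprint from the examples in this guide (replace both with your own values):

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.twa.myapplication",
    "sha256_cert_fingerprints":
      ["F5:08:9F:8A:D4:C8:4A:15:6D:0A:B1:3F:61:96:BE:C7:87:8C:DE:05:59:92:B2:A3:2D:05:05:A5:62:A5:2F:34"]
  }
}]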

Wrapping Up

With the assetlinks.json file in place on your domain and the asset_statements tag configured in the Android application, the next step is generating a signed app. Again, the steps for this are widely documented.

The output APK can be installed into a test device, using adb:

adb install app-release.apk

If the verification step fails, it is possible to check for error messages using the Android Debug Bridge, from your OS's terminal and with the test device connected.

adb logcat | grep -e OriginVerifier -e digital_asset_links

With the upload APK generated, you can now upload the app to the Play Store.

We are looking forward to seeing what developers build with Trusted Web Activities. To share feedback, reach out to us at @ChromiumDev.
