Lighthouse 2.7 Updates

Lighthouse 2.7 is out! Highlights include:

  • New SEO audits
  • New, manual accessibility audits
  • Updates to the WebP audit

See the 2.7 release notes for the full list of new features, changes, and bug fixes.

How to update to 2.7

  • NPM. Run npm update lighthouse, or npm update lighthouse -g if you installed Lighthouse globally.
  • Chrome Extension. The extension should automatically update, but you can manually update it via chrome://extensions.
  • DevTools. Lighthouse 2.7 is shipping in Chrome 65. You can check what version of Chrome you're running via chrome://version. Chrome updates to a new version about every 6 weeks. You can run the latest Chrome code by downloading Chrome Canary.

New SEO audits

The new SEO category provides audits that help improve your page's ranking in search engine results.

Note: Many factors affect a page's search engine ranking. Lighthouse does not test all of these factors. A perfect 100 score in Lighthouse does not guarantee a top ranking spot on any search engine!

Figure 1. The new SEO category. New audits include: Document uses legible font sizes; Has a meta viewport tag with width or initial-scale attribute; Document has a title element; Document has a meta description; Page has successful HTTP code; Links have descriptive text; Page isn't blocked from indexing; and Document has a valid hreflang.

New, manual accessibility audits

The new, manual accessibility audits inform you of things you can do to improve the accessibility of your page. "Manual" here means that Lighthouse can't automate these audits, so you need to test them yourself.

Figure 2. The new, manual accessibility audits: The page has a logical tab order; Interactive controls are keyboard focusable; The user's focus is directed to new content added to the page; User focus is not accidentally trapped in a region; Custom controls have associated labels; Custom controls have ARIA roles; Visual order on the page follows DOM order; Offscreen content is hidden from assistive technology; Headings don't skip levels; and HTML5 landmark elements are used to improve navigation.

Updates to the WebP audit

Thanks to some community feedback, the WebP audit is now more inclusive of other next-generation, high-performance image formats, like JPEG 2000 and JPEG XR.

Figure 3. The new WebP audit

What's New In DevTools (Chrome 65)

Note: The video version of these release notes will be published around early-March 2018.

New features coming to DevTools in Chrome 65 include:

  • Local Overrides and the new Changes tab
  • New accessibility tools: the Accessibility pane and contrast ratio information in the Color Picker
  • New SEO and performance audits
  • Reliable code stepping with workers and asynchronous code
  • Multiple recordings in the Performance panel

Note: Check what version of Chrome you're running at chrome://version. If you're running an earlier version, these features won't exist. If you're running a later version, these features may have changed. Chrome auto-updates to a new major version about every 6 weeks.

Local Overrides

Local Overrides let you make changes in DevTools, and keep those changes across page loads. Previously, any changes that you made in DevTools would be lost when you reloaded the page. Local Overrides work for most file types, with a couple of exceptions. See Limitations.

Figure 1. Persisting a CSS change across page loads with Local Overrides

How it works:

  • You specify a directory where DevTools should save changes.
  • When you make changes in DevTools, DevTools saves a copy of the modified file to your directory.
  • When you reload the page, DevTools serves the local, modified file, rather than the network resource.

To set up Local Overrides:

  1. Open the Sources panel.
  2. Open the Overrides tab.

    Figure 2. The Overrides tab
  3. Click Setup Overrides.

  4. Select which directory you want to save your changes to.
  5. At the top of your viewport, click Allow to give DevTools read and write access to the directory.
  6. Make your changes.

Limitations

  • DevTools doesn't save changes made in the DOM Tree of the Elements panel. Edit HTML in the Sources panel instead.
  • If you edit CSS in the Styles pane, and the source of that CSS is an HTML file, DevTools won't save the change. Edit the HTML file in the Sources panel instead.

Related: Workspaces. DevTools automatically maps network resources to a local repository. Whenever you make a change in DevTools, that change gets saved to your local repository, too.

The Changes tab

Track changes that you make locally in DevTools via the new Changes tab.

Figure 3. The Changes tab

New accessibility tools

Use the new Accessibility pane to inspect the accessibility properties of an element, and inspect the contrast ratio of text elements in the Color Picker to ensure that they're accessible to users with low-vision impairments or color-vision deficiencies.

Accessibility pane

Use the Accessibility pane on the Elements panel to investigate the accessibility properties of the currently-selected element.

Figure 4. The Accessibility pane shows the ARIA attributes and computed properties for the element that's currently selected in the DOM Tree on the Elements panel, as well as its position in the accessibility tree

Check out Rob Dodson's A11ycast on labeling below to see the Accessibility pane in action.

Contrast ratio in the Color Picker

The Color Picker now shows you the contrast ratio of text elements. Increasing the contrast ratio of text elements makes your site more accessible to users with low-vision impairments or color-vision deficiencies. See Color and contrast to learn more about how contrast ratio affects accessibility.

Improving the color contrast of your text elements makes your site more usable for all users. In other words, if your text is grey with a white background, that's hard for anyone to read.

Figure 5. Inspecting the contrast ratio of the highlighted h1 element

In Figure 5, the two checkmarks next to 4.61 mean that this element meets the enhanced recommended contrast ratio (AAA). If it only had one checkmark, that would mean it met the minimum recommended contrast ratio (AA).
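For reference, WCAG defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors. Here's a minimal sketch of that arithmetic (the helper function is hypothetical, not a DevTools API):

function contrastRatio(lum1, lum2) {
  // Relative luminances are values between 0 (black) and 1 (white).
  const lighter = Math.max(lum1, lum2);
  const darker = Math.min(lum1, lum2);
  return (lighter + 0.05) / (darker + 0.05);
}

// The thresholds behind the checkmarks: 4.5 for AA, 7 for AAA (normal text).
console.log(contrastRatio(1.0, 0.18).toFixed(2)); // white on a mid grey: "4.57"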

Click Show More to expand the Contrast Ratio section. The white line in the Color Spectrum box represents the boundary between colors that meet the recommended contrast ratio, and those that don't. For example, since the grey color in Figure 6 meets recommendations, that means that all of the colors below the white line also meet recommendations.

Figure 6. The expanded Contrast Ratio section

The Audits panel has an automated accessibility audit for ensuring that every text element on a page has a sufficient contrast ratio.

See Run Lighthouse in Chrome DevTools, or watch the A11ycast below, to learn how to use the Audits panel to test accessibility.

New audits

Chrome 65 ships with a whole new category of SEO audits, and many new performance audits.

Note: The Audits panel is powered by Lighthouse. Chrome 64 runs Lighthouse version 2.5. Chrome 65 runs Lighthouse version 2.8. So this section is simply a summary of the Lighthouse updates from 2.6, 2.7, and 2.8.

New SEO audits

Ensuring that your pages pass each of the audits in the new SEO category may help improve your search engine rankings.

Figure 7. The new SEO category of audits

New performance audits

Chrome 65 also ships with many new performance audits:

  • JavaScript boot-up time is high
  • Uses inefficient cache policy on static assets
  • Avoids page redirects
  • Document uses plugins
  • Minify CSS
  • Minify JavaScript

Other updates

Reliable code stepping with workers and asynchronous code

Chrome 65 brings updates to the Step Into button when stepping into code that passes messages between threads, and asynchronous code. If you want the previous stepping behavior, you can use the new Step button instead.

Stepping into code that passes messages between threads

When you step into code that passes messages between threads, DevTools now shows you what happens in each thread.

For example, the app in Figure 8 passes a message between the main thread and the worker thread. After stepping into the postMessage() call on the main thread, DevTools pauses in the onmessage handler in the worker thread. The onmessage handler itself posts a message back to the main thread. Stepping into that call pauses DevTools back in the main thread.

Figure 8. Stepping into message-passing code in Chrome 65

When you stepped into code like this in earlier versions of Chrome, Chrome only showed you the main-thread side of the code, as you can see in Figure 9.

Figure 9. Stepping into message-passing code in Chrome 63

Stepping into asynchronous code

When stepping into asynchronous code, DevTools now assumes that you want to pause in the asynchronous code that eventually runs.

For example, in Figure 10 after stepping into setTimeout(), DevTools runs all of the code leading up to that point behind the scenes, and then pauses in the function that's passed to setTimeout().

Figure 10. Stepping into asynchronous code in Chrome 65

When you stepped into code like this in Chrome 63, DevTools paused in code as it chronologically ran, as you can see in Figure 11.

Figure 11. Stepping into asynchronous code in Chrome 63

Multiple recordings in the Performance panel

The Performance panel now lets you temporarily save up to 5 recordings. The recordings are deleted when you close your DevTools window. See Get Started with Analyzing Runtime Performance to get comfortable with the Performance panel.

Figure 12. Selecting between multiple recordings in the Performance panel

Bonus: Automate DevTools actions with Puppeteer 1.0

Note: This section isn't related to Chrome 65.

Version 1.0 of Puppeteer, a browser automation tool maintained by the Chrome DevTools team, is now out. You can use Puppeteer to automate many tasks that were previously only available via DevTools, such as capturing screenshots:

const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'example.png'});
  await browser.close();
})();

It also has APIs for lots of generally useful automation tasks, such as generating PDFs:

const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://news.ycombinator.com', {waitUntil: 'networkidle2'});
  await page.pdf({path: 'hn.pdf', format: 'A4'});
  await browser.close();
})();

See Quick Start to learn more.

You can also use Puppeteer to expose DevTools features while browsing without ever explicitly opening DevTools. See Using DevTools Features Without Opening DevTools for an example.

A request from the DevTools team: consider Canary

If you're on Mac or Windows, please consider using Chrome Canary as your default development browser. If you report a bug or a change that you don't like while it's still in Canary, the DevTools team can address your feedback significantly faster.

Note: Canary is the bleeding-edge version of Chrome. It's released as soon as it's built, without testing. This means that Canary breaks from time to time, about once a month, and it's usually fixed within a day. You can go back to using Chrome Stable when Canary breaks.

Feedback

The best place to discuss any of the features or changes you see here is the google-chrome-developer-tools@googlegroups.com mailing list. You can also tweet us at @ChromeDevTools if you're short on time. If you're sure that you've encountered a bug in DevTools, please open an issue.

Previous release notes

See the devtools-whatsnew tag for links to all previous DevTools release notes.

CSS Paint API

New possibilities in Chrome 65

CSS Paint API (also known as “CSS Custom Paint” or “Houdini’s paint worklet”) is about to be enabled by default in Chrome Stable. What is it? What can you do with it? And how does it work? Well, read on, will ya’…

CSS Paint API allows you to programmatically generate an image whenever a CSS property expects an image. Properties like background-image or border-image are usually used with url() to load an image file or with CSS built-in functions like linear-gradient(). Instead of using those, you can now use paint(myPainter) to reference a paint worklet.

Writing a paint worklet

To define a paint worklet called myPainter, we need to load a CSS paint worklet file using CSS.paintWorklet.addModule('my-paint-worklet.js'). In that file we can use the registerPaint function to register a paint worklet class:

class MyPainter {
  paint(ctx, geometry, properties) {
    // ...
  }
}

registerPaint('myPainter', MyPainter);

Inside the paint() callback, we can use ctx the same way we would use a CanvasRenderingContext2D as we know it from <canvas>. If you know how to draw in a <canvas>, you can draw in a paint worklet! geometry tells us the width and the height of the canvas at our disposal. I will explain properties later in this article.

Note: A paint worklet’s context is not 100% the same as a <canvas> context. As of now, text rendering methods are missing and for security reasons you cannot read back pixels from the canvas.

As an introductory example, let’s write a checkerboard paint worklet and use it as a background image of a <textarea>. (I am using a textarea because it’s resizable by default):

<!-- index.html -->
<!doctype html>
<style>
  textarea {
    background-image: paint(checkerboard);
  }
</style>
<textarea></textarea>
<script>
  CSS.paintWorklet.addModule('checkerboard.js');
</script>
// checkerboard.js
class CheckerboardPainter {
  paint(ctx, geom, properties) {
    // Use `ctx` as if it was a normal canvas
    const colors = ['red', 'green', 'blue'];
    const size = 32;
    for(let y = 0; y < geom.height/size; y++) {
      for(let x = 0; x < geom.width/size; x++) {
        const color = colors[(x + y) % colors.length];
        ctx.beginPath();
        ctx.fillStyle = color;
        ctx.rect(x * size, y * size, size, size);
        ctx.fill();
      }
    }
  }
}

// Register our class under a specific name
registerPaint('checkerboard', CheckerboardPainter);

If you’ve used <canvas> in the past, this code should look familiar. See the live demo here.

Note: As with almost all new APIs, CSS Paint API is only available over HTTPS (or localhost).


  Textarea with a checkerboard pattern as a background image.

The difference from using a common background image here is that the pattern will be re-drawn on demand, whenever the user resizes the textarea. This means the background image is always exactly as big as it needs to be, including the compensation for high-density displays.

That’s pretty cool, but it’s also quite static. Would we want to write a new worklet every time we wanted the same pattern but with differently sized squares? The answer is no!

Parameterizing your worklet

Luckily, the paint worklet can access other CSS properties, which is where the additional parameter properties comes into play. By giving the class a static inputProperties attribute, you can subscribe to changes to any CSS property, including custom properties. The values will be given to you through the properties parameter.

<!-- index.html -->
<!doctype html>
<style>
  textarea {
    /* The paint worklet subscribes to changes of these custom properties. */
    --checkerboard-spacing: 10;
    --checkerboard-size: 32;
    background-image: paint(checkerboard);
  }
</style>
<textarea></textarea>
<script>
  CSS.paintWorklet.addModule('checkerboard.js');
</script>
// checkerboard.js
class CheckerboardPainter {
  // inputProperties returns a list of CSS properties that this paint function gets access to
  static get inputProperties() { return ['--checkerboard-spacing', '--checkerboard-size']; }

  paint(ctx, geom, properties) {
    // Paint worklet uses CSS Typed OM to model the input values.
    // As of now, they are mostly wrappers around strings,
    // but will be augmented to hold more accessible data over time.
    const size = parseInt(properties.get('--checkerboard-size').toString());
    const spacing = parseInt(properties.get('--checkerboard-spacing').toString());
    const colors = ['red', 'green', 'blue'];
    for(let y = 0; y < geom.height/size; y++) {
      for(let x = 0; x < geom.width/size; x++) {
        ctx.fillStyle = colors[(x + y) % colors.length];
        ctx.beginPath();
        ctx.rect(x*(size + spacing), y*(size + spacing), size, size);
        ctx.fill();
      }
    }
  }
}

registerPaint('checkerboard', CheckerboardPainter);

Now we can use the same code for all different kinds of checkerboards. But even better, we can now go into DevTools and fiddle with the values until we find the right look.
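For example, a single line like the following (assuming the textarea from the demo above) re-invokes the paint worklet with the new value:

// Changing a registered input property triggers a repaint on demand.
document.querySelector('textarea').style.setProperty('--checkerboard-size', '16');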

Note: It would be great to parameterize the colors, too, wouldn’t it? The spec allows for the paint() function to take a list of arguments. This feature is not implemented in Chrome yet, as it heavily relies on Houdini’s Properties and Values API, which still needs some work before it can ship.

Browsers that don’t support paint worklet

At the time of writing, only Chrome has paint worklet implemented. While there are positive signals from all other browser vendors, there isn’t much progress. To keep up to date, check Is Houdini Ready Yet? regularly. In the meantime, be sure to use progressive enhancement to keep your code running even if there’s no support for paint worklet. To make sure things work as expected, you have to adjust your code in two places: The CSS and the JS.

Detecting support for paint worklet in JS can be done by checking the CSS object:

if ('paintWorklet' in CSS) {
  CSS.paintWorklet.addModule('mystuff.js');
}

For the CSS side, you have two options. You can use @supports:

@supports (background: paint(id)) {
  /* ... */
}

A more compact trick is to use the fact that CSS invalidates and subsequently ignores an entire property declaration if there is an unknown function in it. If you specify a property twice — first without paint worklet, and then with the paint worklet — you get progressive enhancement:

textarea {
  background-image: linear-gradient(0, red, blue);
  background-image: paint(myGradient, red, blue);
}

In browsers with support for paint worklet, the second declaration of background-image will overwrite the first one. In browsers without support for paint worklet, the second declaration is invalid and will be discarded, leaving the first declaration in effect.

Use cases

There are many use cases for paint worklets, some of them more obvious than others. One of the more obvious ones is using paint worklet to reduce the size of your DOM. Oftentimes, elements are added purely to create embellishments using CSS. For example, in Material Design Lite the button with the ripple effect contains 2 additional <span> elements to implement the ripple itself. If you have a lot of buttons, this can add up to quite a number of DOM elements and can lead to degraded performance on mobile. If you implement the ripple effect using paint worklet instead, you end up with 0 additional elements and just one paint worklet. Additionally, you end up with something that is much easier to customize and parameterize.

Another upside of using paint worklet is that — in most scenarios — a solution using paint worklet is small in terms of bytes. Of course, there is a trade-off: your paint code will run whenever the canvas’s size or any of the parameters change. So if your code is complex and takes a long time, it might introduce jank. Chrome is working on moving paint worklets off the main thread so that even long-running paint worklets don’t affect the responsiveness of the main thread.

To me, the most exciting prospect is that paint worklet allows for efficient polyfilling of CSS features that a browser doesn’t have yet. One example would be polyfilling conic gradients until they land in Chrome natively. Another example: in a CSS meeting, it was decided that you can now have multiple border colors. While this meeting was still going on, my colleague Ian Kilpatrick wrote a polyfill for this new CSS behavior using paint worklet.

Thinking outside the “box”

Most people start to think about background images and border images when they learn about paint worklet. One less intuitive use case for paint worklet is mask-image to make DOM elements have arbitrary shapes. For example, a diamond:


  A DOM element in the shape of a diamond.

mask-image takes an image that is the size of the element. Where the mask image is transparent, the element is transparent; where the mask image is opaque, the element is opaque.
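As a rough sketch of what such a worklet could look like (the diamond name and file are illustrative, not from an existing demo), registered the same way as the checkerboard above:

// diamond.js
// Paints an opaque diamond; everything outside the path stays transparent,
// so using this as a mask clips the element to the diamond shape.
class DiamondPainter {
  paint(ctx, geom) {
    ctx.beginPath();
    ctx.moveTo(geom.width / 2, 0);            // top
    ctx.lineTo(geom.width, geom.height / 2);  // right
    ctx.lineTo(geom.width / 2, geom.height);  // bottom
    ctx.lineTo(0, geom.height / 2);           // left
    ctx.closePath();
    ctx.fill();
  }
}

registerPaint('diamond', DiamondPainter);

The element would then use mask-image: paint(diamond) (note that Chrome currently ships mask-image behind the -webkit- prefix).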

Now in Chrome

Paint worklet has been in Chrome Canary for a while. With Chrome 65, it is enabled by default. Go ahead and try out the new possibilities that paint worklet opens up and show us what you built! For more inspiration, take a look at Vincent De Oliveira’s collection.

Note: Breakpoints are currently not supported in CSS Paint API, but will be enabled in a later release of Chrome.

Using DevTools Features Without Opening DevTools

I commonly see questions along the lines of "I really like feature X of DevTools, but it stops working when I close DevTools. How do I keep feature X running even when DevTools is closed?"

The short answer is: you probably can't.

However, you can hack together a Puppeteer script that launches Chromium, opens a remote debugging client, then turns on the DevTools feature that you like (via the Chrome DevTools Protocol), without ever explicitly opening DevTools.

For example, the script below lets me overlay the FPS Meter over the top-right of the viewport, even though DevTools never opens, as you can see in the video below.

// Node.js version: 8.9.4
const puppeteer = require('puppeteer'); // version 1.0.0

(async () => {
  // Prevent Puppeteer from showing the "Chrome is being controlled by automated test
  // software" prompt, but otherwise use Puppeteer's default args.
  const args = puppeteer.defaultArgs().filter(flag => flag !== '--enable-automation');
  const browser = await puppeteer.launch({
    headless: false,
    ignoreDefaultArgs: true,
    args
  });
  const page = await browser.newPage();
  const devtoolsProtocolClient = await page.target().createCDPSession();
  await devtoolsProtocolClient.send('Overlay.setShowFPSCounter', { show: true });
  await page.goto('https://developers.google.com/web/tools/chrome-devtools');
})();

This is just one of many, many DevTools features that you can potentially access via the Chrome DevTools Protocol.

A general suggestion: check out the Puppeteer API before resorting to creating a DevTools Protocol client. Puppeteer already has dedicated APIs for many DevTools features, such as code coverage and intercepting Console messages.
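For instance, here's a minimal sketch of both of those (using Puppeteer 1.0's coverage API and console event; the URL is a stand-in):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Intercept Console messages without opening DevTools.
  page.on('console', msg => console.log('PAGE LOG:', msg.text()));

  // Collect JavaScript coverage, like the Coverage tab in DevTools.
  await page.coverage.startJSCoverage();
  await page.goto('https://example.com');
  const coverage = await page.coverage.stopJSCoverage();
  for (const entry of coverage) {
    const usedBytes = entry.ranges.reduce((sum, range) => sum + range.end - range.start, 0);
    console.log(`${entry.url}: ${usedBytes} of ${entry.text.length} bytes used`);
  }

  await browser.close();
})();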

If you need help accessing a DevTools feature via Puppeteer, ask a question on Stack Overflow.

If you want to show off a Puppeteer script that makes use of the DevTools Protocol, tweet us at @ChromeDevTools.

New in Chrome 64

  • ResizeObserver notifies you whenever an element’s size changes.
  • The improved pop-up blocker now blocks tab-unders.
  • import.meta gives JavaScript modules access to host-specific metadata.

And there’s plenty more!

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 64!

Note: Want the full list of changes? Check out the Chromium source repository change list.

ResizeObserver

Tracking when an element’s size changes can be a bit of a pain. Most likely, you’ll attach a listener to the window’s resize event, then call getBoundingClientRect or getComputedStyle. But both of those can cause layout thrashing.

And what if the browser window didn’t change size, but a new element was added to the document? Or you added display: none to an element? Both of those can change the size of other elements within the page.

ResizeObserver notifies you whenever an element’s size changes, and provides the new height and width of the element, reducing the risk of layout thrashing.

Like other Observers, using it is pretty simple: create a ResizeObserver object and pass a callback to the constructor. The callback will be given an array of ResizeObserverEntries – one entry per observed element – which contain the new dimensions for the element.

const ro = new ResizeObserver( entries => {
  for (const entry of entries) {
    const cr = entry.contentRect;
    console.log('Element:', entry.target);
    console.log(`Element size: ${cr.width}px × ${cr.height}px`);
    console.log(`Element padding: ${cr.top}px ; ${cr.left}px`);
  }
});

// Observe one or multiple elements
ro.observe(someElement);

Check out ResizeObserver: It's like document.onresize for Elements for more details and real world examples.

Improved Pop-up Blocker

I hate tab-unders. You know them, it’s when a page opens a pop-up to some destination AND navigates the page. Usually one of them is an ad or something that you didn’t want.

Starting in Chrome 64, these types of navigation will be blocked, and Chrome will show some native UI to the user, allowing them to follow the redirect if they want.

import.meta

When writing JavaScript modules, you often want access to host-specific metadata about the current module. Chrome 64 now supports the import.meta property within modules and exposes the URL for the module as import.meta.url.

This is really helpful when you want to resolve resources relative to the module file as opposed to the current HTML document.

And more!

These are just a few of the changes in Chrome 64 for developers, of course, there’s plenty more.

  • Chrome now supports named captures and Unicode property escapes in regular expressions (see the sketch after this list).
  • The default preload value for <audio> and <video> elements is now metadata. This brings Chrome in line with other browsers and helps to reduce bandwidth and resource usage by only loading the metadata and not the media itself.
  • You can now use Request.prototype.cache to view the cache mode of a Request and determine whether a request is a reload request.
  • Using the Focus Management API, you can now focus an element without scrolling to it, via the preventScroll option.
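Here's a quick sketch of a few of those in action (the values and the element selector are illustrative):

// Named capture groups in regular expressions (Chrome 64+).
const match = /(?<year>\d{4})-(?<month>\d{2})/u.exec('2018-01');
console.log(match.groups.year, match.groups.month); // "2018" "01"

// Unicode property escapes.
console.log(/\p{Script=Greek}/u.test('π')); // true

// Focus an element without scrolling it into view.
document.querySelector('#comments').focus({preventScroll: true});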

window.alert()

Oh, and one more! While this isn’t really a ‘developer feature’, it makes me happy. window.alert() no longer brings a background tab to the foreground! Instead, the alert will be shown when the user switches back to that tab.

No more random tab switching because something fired a window.alert on me. I’m looking at you old Google Calendar.

Be sure to subscribe to our YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 65 is released, I’ll be right here to tell you -- what’s new in Chrome!

Chrome User Experience Report: New country dimension

The Chrome User Experience Report (CrUX) is a public dataset of real user performance data. Since we announced the report, one of the most requested additions has been the ability to better understand differences in user experience across locations. Based on this feedback, we are expanding the existing CrUX dataset, which provides a global view across all geographic regions, to also include a collection of separate country-specific datasets!

Map of countries included in the CrUX dataset

For example, in the screenshot above we see a query that compares the aggregate densities for 4G and 3G effective connection types across a few countries. What’s interesting is to see how prevalent 4G speeds are in Japan, while 3G speeds are still very common in India. Insights like these are made possible thanks to the new country dimension.

To get started, head over to the CrUX project on BigQuery and you’ll see a list of datasets organized by country code from country_ae (United Arab Emirates) to country_za (South Africa). The familiar all dataset is still there to capture the global aggregate performance data. Within each dataset there are monthly tables starting with the most recent report, 201712. For a detailed walkthrough on how to get started, please refer to our updated CrUX documentation.

We’re excited to share this new data with you and hope to see you use it in ways to improve the user experience on the web. To get help, ask questions, offer feedback, or share findings from your own analysis, join the discussion on the CrUX forum. And if the free tier on BigQuery isn’t enough to contain your querying enthusiasm, we’re still running a promotion to give you an extra 10 TB free, so go get your credits while supplies last!

Meltdown/Spectre

Overview

On January 3rd Project Zero revealed vulnerabilities in modern CPUs that a process can use to read (at worst) arbitrary memory — including memory that doesn’t belong to that process. These vulnerabilities have been named Spectre and Meltdown. What is Chrome doing to help keep the web secure, and what should web developers do for their own sites?

TL; DR

As a user browsing the web, you should make sure you keep your operating system and your browser updated. In addition, Chrome users can consider enabling Site Isolation.

If you are a web developer, the Chrome team advises:

  • Where possible, prevent cookies from entering the renderer process' memory by using the SameSite and HTTPOnly cookie attributes, and by avoiding reading from document.cookie.
  • Make sure your MIME types are correct and specify an X-Content-Type-Options: nosniff header for any URLs with user-specific or sensitive content, to get the most out of cross-site document blocking for users who have Site Isolation enabled.
  • Enable Site Isolation and let the Chrome team know if it causes problems for your site.

If you are wondering why these steps help, read on!

The risk

There have been a wide variety of explanations of these vulnerabilities, so I am not going to add yet another one. If you are interested in how these vulnerabilities can be exploited, I recommend taking a look at the blog post by my colleagues from the Google Cloud team.

Both Meltdown and Spectre potentially allow a process to read memory that it is not supposed to be able to. Sometimes, multiple documents from different sites can end up sharing a process in Chrome. This can happen when one has opened the other using window.open, or <a href="..." target="_blank">, or iframes. If a website contains user-specific data, there is a chance that another site could use these new vulnerabilities to read that user data.

Mitigations

There are multiple efforts the Chrome and V8 engineering teams are deploying to mitigate this threat.

Site Isolation

The impact of successfully exploiting Spectre can be greatly reduced by preventing sensitive data from ever sharing a process with attacker-controlled code. The Chrome team has been working on a feature to achieve this called “Site Isolation”:

“Websites typically cannot access each other's data inside the browser[...]. Occasionally, security bugs are found in this code and malicious websites may try to bypass these rules to attack other websites. [...] Site Isolation offers a second line of defense to make such attacks less likely to succeed. It ensures that pages from different websites are always put into different processes, each running in a sandbox that limits what the process is allowed to do.”

Site Isolation has not been enabled by default yet as there are a couple of known issues and the Chrome team would like as much field testing as possible. If you are a web developer, you should enable Site Isolation and check whether your site remains functional. If you’d like to opt-in now, enable chrome://flags#enable-site-per-process. If you find a site that doesn’t work, please help us by filing a bug and mention that you have Site Isolation enabled.

Cross-site document blocking

Even when all cross-site pages are put into separate processes, pages can still legitimately request some cross-site subresources, such as images and JavaScript. To help prevent sensitive information from leaking, Site Isolation includes a “cross-site document blocking” feature that limits which network responses are delivered to the renderer process.

A website can request two types of data from a server: “documents” and “resources”. Here, documents are HTML, XML, JSON and TXT files. A website is able to receive documents from its own domain or from other domains with permissive CORS headers. Resources include things like images, JavaScript, CSS and fonts. Resources can be included from any site.

The cross-site document blocking policy prevents a process from receiving “documents” from other origins if:

  1. They have an HTML, XML, JSON, or text/plain MIME type,
  2. They have either an "X-Content-Type-Options: nosniff" HTTP response header, or a quick content analysis (“sniffing”) confirms that the type is correct, and
  3. CORS doesn’t explicitly allow access to the document.

Documents that are blocked by this policy are presented to the process as empty, although the request still happens in the background.

For example: Imagine an attacker creating an <img> tag that includes a JSON file with sensitive data, like <img src="https://yourbank.com/balance.json">. Without Site Isolation, the contents of the JSON file would make it to the renderer process’s memory, at which point the renderer notices that it is not a valid image format and doesn’t render an image. With Spectre, however, there is now a way to potentially read that chunk of memory. Cross-site document blocking would prevent the contents of this file from ever entering the memory of the renderer process, because its MIME type is subject to blocking.

According to user metrics, there are a lot of JavaScript and CSS files that are delivered with text/html or text/plain MIME types. To avoid blocking resources that are accidentally marked as documents, Chrome attempts to sniff the response to ensure the MIME type is correct. This sniffing is imperfect, so if you are sure that you are setting the correct Content-Type headers on your website, the Chrome team recommends adding the X-Content-Type-Options: nosniff header to all your responses.

If you want to try cross-site document blocking, opt-in to Site Isolation as described above.

SameSite cookies

Let’s go back to the example above: <img src="https://yourbank.com/balance.json">. This only works if yourbank.com has stored a cookie that automatically logs the user in. Cookies typically get sent for all requests to the website that sets the cookie — even if the request is made by a third party using an <img> tag. SameSite is a new cookie attribute specifying that the cookie should only be attached to requests that originate from the same site, hence the name. Sadly, at the time of writing, only Chrome and Firefox 58+ support this attribute.

If your site's cookies are only used server-side, not by client JavaScript, there are ways you can stop the cookie's data from entering the renderer process. You can set the HTTPOnly cookie attribute, which explicitly prevents the cookie from being accessed through client-side script on supported browsers, such as Chrome. If setting HTTPOnly isn't possible, you can help limit the exposure of cookie data to the renderer process by not reading document.cookie unless absolutely necessary.
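Putting the server-side advice together, here's a minimal Node.js sketch (the endpoint and cookie values are hypothetical) of what those response headers could look like:

const http = require('http');

http.createServer((request, response) => {
  // SameSite keeps the cookie off requests initiated by third parties;
  // HTTPOnly keeps it away from client-side script. Secure additionally
  // restricts the cookie to HTTPS connections in production.
  response.setHeader('Set-Cookie', 'session=abc123; SameSite=Strict; HttpOnly; Secure');
  // Opt into cross-site document blocking for users with Site Isolation.
  response.setHeader('X-Content-Type-Options', 'nosniff');
  response.setHeader('Content-Type', 'application/json');
  response.end(JSON.stringify({balance: 42}));
}).listen(8080);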

When you link to another page using target="_blank", the opened page has access to your window object, can navigate your page to a different URL, and without Site Isolation will be in the same process as your page. To better protect your page, links to external pages that open in a new window should always specify rel="noopener".

High-resolution timers

To exploit Meltdown or Spectre, an attacker needs to measure how long it takes to read a certain value from memory. For this, a reliable and accurate timer is needed.

One API the web platform offers is performance.now(), which is accurate to 5 microseconds. As a mitigation, all major browsers have decreased the resolution of performance.now() to make it harder to mount the attacks.

Another way to get a high-resolution timer is to use a SharedArrayBuffer. The buffer is used by a dedicated worker to increment a counter. The main thread reads this counter and uses that as a timer. For the time being browsers have decided to disable SharedArrayBuffer until other mitigations are in place.
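The technique looks roughly like this (illustrative only, since SharedArrayBuffer is disabled at the time of writing; the worker file name is hypothetical):

// main.js: a worker increments a shared counter in a tight loop, and
// reading the counter twice approximates elapsed time at high resolution.
const sab = new SharedArrayBuffer(4);
const ticks = new Uint32Array(sab);
const worker = new Worker('ticker.js');
worker.postMessage(sab);

const before = Atomics.load(ticks, 0);
// ... operation to measure ...
const elapsed = Atomics.load(ticks, 0) - before;

// ticker.js:
// onmessage = event => {
//   const ticks = new Uint32Array(event.data);
//   while (true) Atomics.add(ticks, 0, 1);
// };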

V8

To exploit Spectre, a specifically crafted sequence of CPU instructions is needed. The V8 team has implemented mitigations for known attack proofs of concept, and is working on changes in TurboFan, their optimizing compiler, that make its generated code safe even when these attacks are triggered. However, these code generation changes may come at a performance penalty.

Keeping the web safe

There has been a lot of uncertainty around the discovery of Spectre and Meltdown and their implications. I hope this article shed some light on what the Chrome and V8 teams are doing to keep the web platform safe, and how web developers can help by using existing security features. If you have any questions, feel free to reach out to me on Twitter.

Deprecations and removals in Chrome 65

In nearly every version of Chrome, we see a significant number of updates and improvements to the product, its performance, and also capabilities of the Web Platform. This article describes some of the deprecations and removals in Chrome 65, which is in beta as of February 8.

Chrome no longer trusting certain Symantec certificates

As previously announced, Chrome 65 will not trust certificates issued from Symantec’s Legacy PKI after December 1st, 2017, and will result in interstitials. This will only affect site operators who explicitly opted-out of the transition from Symantec’s Legacy PKI to DigiCert’s new PKI.

Block cross-origin <a download>

To avoid what is essentially a user-mediated cross-origin information leakage, Blink will now ignore the presence of the download attribute on anchor elements with cross-origin attributes. Note that this applies to HTMLAnchorElement.download as well as to the element itself.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Document.all is no longer replaceable

For a long time now, it's been possible for web developers to overwrite document.all. According to the current standard, this should not be so. Starting in version 65, Chrome complies with the standard.

Chromestatus Tracker | Chromium Bug


Lighthouse 2.8 Updates

Lighthouse 2.8 is out! Highlights include:

  • New Performance and SEO audits
  • Performance as the first category in Lighthouse reports
  • Updated Accessibility scoring
  • A new loading message and fast facts in Chrome DevTools
  • A new Lighthouse release guide

See the 2.8 release notes for the full list of new features, changes, and bug fixes.

How to update to 2.8

  • NPM. Run npm update lighthouse, or npm update lighthouse -g if you installed Lighthouse globally.
  • Chrome Extension. The extension should automatically update, but you can manually update it via chrome://extensions.
  • DevTools. The Audits panel will be shipping with 2.8 in Chrome 65. You can check what version of Chrome you're running via chrome://version. Chrome updates to a new version about every 6 weeks. You can run the latest Chrome code by downloading Chrome Canary.

New Performance and SEO audits

The Avoid Plugins audit lists plugins that you should remove, since plugins prevent the page from being mobile-friendly. Most mobile devices don't support plugins.

Figure 1. The Avoid Plugins audit

The Document Has A Valid rel=canonical audit in the SEO category checks for a rel=canonical URL to make sure that a crawler knows which URL to show in search results.

Figure 2. The Document Has A Valid rel=canonical audit

The Page Is Mobile-Friendly and Structured Data Is Valid manual audits can help further improve your SEO. "Manual" in this case means that Lighthouse can't automate these audits, so you need to test them yourself.

Figure 3. The manual SEO audits

The Minify CSS and Minify JavaScript audits in the Performance category check for any CSS or JavaScript that can be minified to reduce payload size and parse time.

Figure 4. The Minify CSS and Minify JavaScript audits

Performance as the first category in Lighthouse reports

Performance is now the first category you see in Lighthouse reports. Some users thought that Lighthouse was only for Progressive Web Apps, since that was the first category in reports. In reality, Lighthouse can help you understand how to improve any web page, whether or not it's a Progressive Web App.

Updated Accessibility scoring

If an accessibility audit is not applicable for a given page, that audit no longer counts towards the Accessibility score.

New loading message and fast facts

Note: This update is only visible when you run Lighthouse from the Audits panel of Chrome DevTools.

Figure 5. The loading message and fast facts in Chrome DevTools

New Lighthouse release guide

Check out the Release Guide For Maintainers for information on release timing, masters, naming conventions, and more.

What's New In DevTools (Chrome 66)

Note: The video version of these release notes will be published around mid-April 2018.

New features and major changes coming to DevTools in Chrome 66 include:

  • Blackboxing in the Network panel
  • Pretty-printing in the Preview and Response tabs
  • Previewing HTML content in the Preview tab
  • Auto-adjust zooming in Device Mode
  • Local Overrides support for some styles defined in HTML

Note: Check what version of Chrome you're running at chrome://version. If you're running an earlier version, these features won't exist. If you're running a later version, these features may have changed. Chrome auto-updates to a new major version about every 6 weeks.

Blackboxing in the Network panel

The Initiator column in the Network panel tells you why a resource was requested. For example, if JavaScript causes an image to be fetched, the Initiator column shows you the line of JavaScript code that caused the request.

Note: You can hide or show columns in the Network panel by right-clicking the table header.

Previously, if your framework wrapped network requests, the Initiator column wasn't that helpful. All network requests pointed to the same line of wrapper code.

Figure 1. The Initiator column shows that all of the requests were initiated by line 2 of requests.js

What you really want in this scenario is to see the application code that causes the request. That's now possible:

  1. Hover over the Initiator column. The call stack that caused the request appears in a pop-up.
  2. Right-click the call that you want to hide from the initiator results.
  3. Select Blackbox script. The Initiator column now hides any calls from the script that you blackboxed.
Figure 2. Blackboxing requests.js

Figure 3. After blackboxing requests.js, the Initiator column now shows more helpful results

Manage your blackboxed scripts from the Blackboxing tab in Settings.

See Ignore a script or pattern of scripts to learn more about blackboxing.

Pretty-printing in the Preview and Response tabs

The Preview tab in the Network panel now pretty-prints resources by default when it detects that those resources have been minified.

Figure 4. The Preview tab pretty-printing the contents of analytics.js by default

To view the unminified version of a resource, use the Response tab. You can also manually pretty-print resources from the Response tab, via the new Format button.

Figure 5. Manually pretty-printing the contents of analytics.js via the Format button

Previewing HTML content in the Preview tab

Previously, the Preview tab in the Network panel showed the code of an HTML resource in certain situations, while rendering a preview of the HTML in others. The Preview tab now always does a basic rendering of the HTML. It's not intended to be a full browser, so it may not display HTML exactly as you expect. If you want to see the HTML code, click the Response tab, or right-click a resource and select Open in Sources panel.

Figure 6. Previewing HTML in the Preview tab

Auto-adjust zooming in Device Mode

When in Device Mode, open the Zoom dropdown and select Auto-adjust zoom to automatically resize the viewport whenever you change device orientation.

Local Overrides now works with some styles defined in HTML

Back when DevTools launched Local Overrides in Chrome 65, one limitation was that it couldn't track changes to styles defined within HTML. For example, in Figure 7 there's a style rule in the head of the document that declares font-weight: bold for h1 elements.

Figure 7. An example of styles defined within HTML

In Chrome 65, if you changed the font-weight declaration via the DevTools Style pane, Local Overrides wouldn't track the change. In other words, on the next reload, the style would revert back to font-weight: bold. But in Chrome 66, changes like this now persist across page loads.

Caution: Local Overrides can track changes like this so long as the style is defined in the HTML document that was sent over the network. If you have a script that dynamically adds styles to an HTML document, Local Overrides still won't be able to detect those changes.

Bonus tip: Blackbox framework scripts to make Event Listener Breakpoints more useful

Note: This section is not related to Chrome 66. It's just a bonus tip about an existing feature that you may find useful.

Back when I created the Get Started With Debugging JavaScript video, some viewers commented that event listener breakpoints aren't useful for apps built on top of frameworks, because the event listeners are often wrapped in framework code. For example, in Figure 8 I've set up a click breakpoint in DevTools. When I click the button in the demo, DevTools automatically pauses in the first line of listener code. In this case, it pauses in Vue.js's wrapper code on line 1802, which isn't that helpful.

Figure 8. The click breakpoint pauses in Vue.js' wrapper code

Since the Vue.js script is in a separate file, I can blackbox that script from the Call Stack pane in order to make this click breakpoint more useful.

Figure 9. Blackboxing the Vue.js script from the Call Stack pane

The next time I click the button and trigger the click breakpoint, it executes the Vue.js code without pausing in it, and then pauses on the first line of code in my app's listener, which is where I really wanted to pause all along.

Figure 10. The click breakpoint now pauses on the app's listener code

A request from the DevTools team: consider Canary

If you're on Mac or Windows, please consider using Chrome Canary as your default development browser. If you report a bug or a change that you don't like while it's still in Canary, the DevTools team can address your feedback significantly faster.

Note: Canary is the bleeding-edge version of Chrome. It's released as soon as it's built, without testing. This means that Canary breaks from time to time, about once a month, and it's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.

Feedback

The best place to discuss any of the features or changes you see here is the google-chrome-developer-tools@googlegroups.com mailing list. You can also tweet us at @ChromeDevTools if you're short on time. If you're sure that you've encountered a bug in DevTools, please open an issue.

Previous release notes

See the devtools-whatsnew tag for links to all previous DevTools release notes.

Emscripting a C library to Wasm

Sometimes you want to use a library that is only available as C or C++ code. Traditionally, this is where you give up. Well, not anymore, because now we have Emscripten and WebAssembly (or Wasm)!

Note: In this article I will describe my journey of compiling libwebp to Wasm. To make use of this article as well as Wasm in general, you will need knowledge of C, especially pointers, memory management and compiler options.

The toolchain

I set myself the goal of working out how to compile some existing C code to Wasm. There's been some noise around LLVM's Wasm backend, so I started digging into that. While you can get simple programs to compile this way, the second you want to use C's standard library or even compile multiple files, you will probably run into problems. This led me to the major lesson I learned:

While Emscripten used to be a C-to-asm.js compiler, it has since matured to target Wasm and is in the process of switching to the official LLVM backend internally. Emscripten also provides a Wasm-compatible implementation of C's standard library. Use Emscripten. It carries a lot of hidden work, emulates a file system, provides memory management, wraps OpenGL with WebGL — a lot of things that you really don't need to experience developing for yourself.

While that might sound like you have to worry about bloat — I certainly worried — the Emscripten compiler removes everything that's not needed. In my experiments the resulting Wasm modules are appropriately sized for the logic that they contain and the Emscripten and WebAssembly teams are working on making them even smaller in the future.

You can get Emscripten by following the instructions on their website or using Homebrew. If you are a fan of dockerized commands like me and don't want to install things on your system just to have a play with WebAssembly, there is a well-maintained Docker image that you can use instead:

$ docker pull trzeci/emscripten
$ docker run --rm -v $(pwd):/src trzeci/emscripten emcc <emcc options here>

Compiling something simple

Let's take the almost canonical example of writing a function in C that calculates the nth Fibonacci number:

#include <emscripten.h>

EMSCRIPTEN_KEEPALIVE
int fib(int n) {
  int i, t, a = 0, b = 1;
  for (i = 0; i < n; i++) {
    t = a + b;
    a = b;
    b = t;
  }
  return b;
}

If you know C, the function itself shouldn't be too surprising. Even if you don't know C but know JavaScript, you will hopefully be able to understand what's going on here.

emscripten.h is a header file provided by Emscripten. We only need it so we have access to the EMSCRIPTEN_KEEPALIVE macro, but it provides much more functionality. This macro tells the compiler to not remove a function even if it appears unused. If we omitted that macro, the compiler would optimize the function away — nobody is using it after all.

Let's save all that in a file called fib.c. To turn it into a .wasm file we need to turn to Emscripten's compiler command emcc:

$ emcc -O3 -s WASM=1 -s EXTRA_EXPORTED_RUNTIME_METHODS='["cwrap"]' fib.c

Let's dissect this command. emcc is Emscripten's compiler. fib.c is our C file. So far, so good. -s WASM=1 tells Emscripten to give us a Wasm file instead of an asm.js file. -s EXTRA_EXPORTED_RUNTIME_METHODS='["cwrap"]' tells the compiler to leave the cwrap() function available in the JavaScript file — more on this function later. -O3 tells the compiler to optimize aggressively. You can choose lower numbers to decrease build time, but that will also make the resulting bundles bigger as the compiler might not remove unused code.

After running the command you should end up with a JavaScript file called a.out.js and a WebAssembly file called a.out.wasm. The Wasm file (or "module") contains our compiled C code and should be fairly small. The JavaScript file takes care of loading and initializing our Wasm module and providing a nicer API. If needed it will also take care of setting up the stack, the heap and other functionality usually expected to be provided by the operating system when writing C code. As such the JavaScript file is a bit bigger, weighing in at 19KB (~5KB gzip'd).

Running something simple

The easiest way to load and run your module is to use the generated JavaScript file. Once you load that file, you will have a Module global at your disposal. Use cwrap to create a JavaScript native function that takes care of converting parameters to something C-friendly and invoking the wrapped function. cwrap takes the function name, return type and argument types as arguments, in that order:

<!doctype html>
<title>Demo</title>
<script src="a.out.js"></script>
<script>
  Module.onRuntimeInitialized = _ => {
    const fib = Module.cwrap('fib', 'number', ['number']);
    console.log(fib(12));
  };
</script>

If you run this code, you should see "233" in the console, which is the 12th Fibonacci number.

Note: Emscripten offers a couple of options to handle loading multiple modules. More about that in their documentation.

The holy grail: Compiling a C library

Up until now, the C code we have written was written with Wasm in mind. A core use-case for WebAssembly, however, is to take the existing ecosystem of C libraries and allow developers to use them on the web. These libraries often rely on C's standard library, an operating system, a file system and other things. Emscripten provides most of these features, although there are some limitations.

Let's go back to my original goal: compiling an encoder for WebP to Wasm. The source for the WebP codec is written in C and available on GitHub as well as some extensive API documentation. That's a pretty good starting point.

$ git clone https://github.com/webmproject/libwebp

To start off simple, let's try to expose WebPGetEncoderVersion() from encode.h to JavaScript by writing a C file called webp.c:

#include "emscripten.h"
#include "src/webp/encode.h"

EMSCRIPTEN_KEEPALIVE
int version() {
  return WebPGetEncoderVersion();
}

This is a good simple program to test if we can get the source code of libwebp to compile, as we don't require any parameters or complex data structures to invoke this function.

To compile this program, we need to tell the compiler where it can find libwebp's header files using the -I flag and also pass it all the C files of libwebp that it needs. I'm going to be honest: I just gave it all the C files I could find and relied on the compiler to strip out everything that was unnecessary. It seemed to work brilliantly!

$ emcc -O3 -s WASM=1 -s EXTRA_EXPORTED_RUNTIME_METHODS='["cwrap"]' \
    -I libwebp \
    webp.c \
    libwebp/src/{dec,dsp,demux,enc,mux,utils}/*.c

Note: This strategy will not work with every C project out there. Many projects rely on autoconf/automake to generate system-specific code before compilation. Emscripten provides emconfigure and emmake to wrap these commands and inject the appropriate parameters. You can find more in the Emscripten documentation.

Now we only need some HTML and JavaScript to load our shiny new module:

<!doctype html>
<title>Demo</title>
<script src='/a.out.js'></script>
<script>
  Module.onRuntimeInitialized = async _ => {
    const api = {
      version: Module.cwrap('version', 'number', []),
    };
    console.log(api.version());
  };
</script>

And we will see the correct version number in the output:


  Screenshot of the DevTools console showing the correct version number.

Note: libwebp returns the current version a.b.c as a hexadecimal number 0xabc. So v0.6.1 is encoded as 0x000601 = 1537.
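A small helper can unpack that encoding in JavaScript (a sketch based on the note above, reusing the api object from the snippet):

const v = api.version();                // e.g. 0x000601
const major = (v >> 16) & 0xff;
const minor = (v >> 8) & 0xff;
const patch = v & 0xff;
console.log(`libwebp v${major}.${minor}.${patch}`); // "libwebp v0.6.1"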

Get an image from JavaScript into Wasm

Getting the encoder's version number is great and all, but encoding an actual image would be more impressive, right? Let's do that, then.

The first question we have to answer is: How do we get the image into Wasm land? Looking at the encoding API of libwebp, it expects an array of bytes in RGB, RGBA, BGR or BGRA. Luckily, the Canvas API has getImageData(), which gives us a Uint8ClampedArray containing the image data in RGBA:

async function loadImage(src) {
  // Load image
  const imgBlob = await fetch(src).then(resp => resp.blob());
  const img = await createImageBitmap(imgBlob);
  // Make canvas same size as image
  const canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  // Draw image onto canvas
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  return ctx.getImageData(0, 0, img.width, img.height);
}

Now it's "only" a matter of copying the data from JavaScript land into Wasm land. For that, we need to expose two additional functions. One that allocates memory for the image inside Wasm land and one that frees it up again:

EMSCRIPTEN_KEEPALIVE
uint8_t* create_buffer(int width, int height) {
  return malloc(width * height * 4 * sizeof(uint8_t));
}

EMSCRIPTEN_KEEPALIVE
void destroy_buffer(uint8_t* p) {
  free(p);
}

create_buffer allocates a buffer for the RGBA image — hence 4 bytes per pixel. The pointer returned by malloc() is the address of the first memory cell of that buffer. When the pointer is returned to JavaScript land, it is treated as just a number. After exposing the function to JavaScript using cwrap, we can use that number to find the start of our buffer and copy the image data.

const api = {
  version: Module.cwrap('version', 'number', []),
  create_buffer: Module.cwrap('create_buffer', 'number', ['number', 'number']),
  destroy_buffer: Module.cwrap('destroy_buffer', '', ['number']),
}
const image = await loadImage('/image.jpg');
const p = api.create_buffer(image.width, image.height);
Module.HEAP8.set(image.data, p);
// ... call encoder ...
api.destroy_buffer(p);

Grand Finale: Encode the image

The image is now available in Wasm land. It is time to call the WebP encoder to do its job! Looking at the WebP documentation, WebPEncodeRGBA seems like a perfect fit. The function takes a pointer to the input image and its dimensions, as well as a quality option between 0 and 100. It also allocates an output buffer for us, which we need to free using WebPFree() once we are done with the WebP image.

The result of the encoding operation is an output buffer and its length. Because functions in C can't have arrays as return types (unless we allocate memory dynamically), I resorted to a static global array. I know, not clean C (in fact, it relies on the fact that Wasm pointers are 32-bit wide), but to keep things simple I think this is a fair shortcut.

int result[2];
EMSCRIPTEN_KEEPALIVE
void encode(uint8_t* img_in, int width, int height, float quality) {
  uint8_t* img_out;
  size_t size;

  size = WebPEncodeRGBA(img_in, width, height, width * 4, quality, &img_out);

  result[0] = (int)img_out;
  result[1] = size;
}

EMSCRIPTEN_KEEPALIVE
void free_result(uint8_t* result) {
  WebPFree(result);
}

EMSCRIPTEN_KEEPALIVE
int get_result_pointer() {
  return result[0];
}

EMSCRIPTEN_KEEPALIVE
int get_result_size() {
  return result[1];
}

Now with all of that in place, we can call the encoding function, grab the pointer and image size, copy the data into a JavaScript-land buffer of our own, and release all the Wasm-land buffers we have allocated in the process.

api.encode(p, image.width, image.height, 100);
const resultPointer = api.get_result_pointer();
const resultSize = api.get_result_size();
const resultView = new Uint8Array(Module.HEAP8.buffer, resultPointer, resultSize);
const result = new Uint8Array(resultView);
api.free_result(resultPointer);

Note: new Uint8Array(someBuffer) will create a new view onto the same memory chunk, while new Uint8Array(someTypedArray) will copy the data.

Depending on the size of your image, you might run into an error where Wasm can't grow the memory enough to accommodate both the input and the output image:


  Screenshot of the DevTools console showing an error.

Luckily, the solution to this problem is in the error message! We just need to add -s ALLOW_MEMORY_GROWTH=1 to our compilation command.

And there you have it! We compiled a WebP encoder and transcoded a JPEG image to WebP. To prove that it worked, we can turn our result buffer into a blob and use it on an <img> element:

const blob = new Blob([result], {type: 'image/webp'});
const blobURL = URL.createObjectURL(blob);
const img = document.createElement('img');
img.src = blobURL;
document.body.appendChild(img);

Behold, the glory of a new WebP image!


  DevTools’ network panel and the generated image.

Conclusion

It's not a walk in the park to get a C library to work in the browser, but once you understand the overall process and how the data flow works, it becomes easier and the results can be mind-blowing.

WebAssembly opens many new possibilities on the web for processing, number crunching and gaming. Keep in mind that Wasm is not a silver bullet that should be applied to everything, but when you hit one of those bottlenecks, Wasm can be an incredibly helpful tool.

Bonus content: Running something simple the hard way

If you want to try and avoid the generated JavaScript file, you might be able to. Let's go back to the Fibonacci example. To load and run it ourselves, we can do the following:

<!doctype html>
<script>
  (async function() {
    const imports = {
      env: {
        memory: new WebAssembly.Memory({initial: 1}),
        STACKTOP: 0,
      }
    };
    const {instance} = await WebAssembly.instantiateStreaming(await fetch('/a.out.wasm'), imports);
    console.log(instance.exports._fib(12));
  })();
</script>

Note: Make sure that your .wasm files have Content-Type: application/wasm. Otherwise they will be rejected by WebAssembly.

WebAssembly modules that have been created by Emscripten have no memory to work with unless you provide them with memory. The way you provide a Wasm module with anything is by using the imports object — the second parameter of the instantiateStreaming function. The Wasm module can access everything inside the imports object, but nothing else outside of it. By convention, modules compiled by Emscripten expect a couple of things from the loading JavaScript environment:

  • Firstly, there is env.memory. The Wasm module is unaware of the outside world so to speak, so it needs to get some memory to work with. Enter WebAssembly.Memory. It represents an (optionally growable) piece of linear memory. The sizing parameters are "in units of WebAssembly pages", meaning the code above allocates 1 page of memory, with each page having a size of 64 KiB (see the snippet after this list). Without providing a maximum option, the memory is theoretically unbounded in growth (Chrome currently has a hard limit of 2GB). Most WebAssembly modules shouldn't need to set a maximum.
  • env.STACKTOP defines where the stack is supposed to start growing. The stack is needed to make function calls and to allocate memory for local variables. Since we don't do any dynamic memory management shenanigans in our little Fibonacci program, we can just use the entire memory as a stack, hence STACKTOP = 0.
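
As a quick sketch of what those page units mean in bytes (the sizes here are illustrative):

// One WebAssembly page is 64 KiB (65536 bytes):
const memory = new WebAssembly.Memory({initial: 4, maximum: 16});
console.log(memory.buffer.byteLength); // 262144, i.e. 4 * 65536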

New in Chrome 65

And there’s plenty more!

I’m Pete LePage. Let’s dive in and see what’s new for developers in Chrome 65!

Note: Want the full list of changes? Check out the Chromium source repository change list.

CSS Paint API

The CSS Paint API allows you to programmatically generate an image for CSS properties like background-image or border-image.

Instead of referencing an image, you can use the new paint function to draw the image - much like a canvas element.

<style>
  .myElem { background-image: paint(checkerboard); }
</style>
<script>
  CSS.paintWorklet.addModule('checkerboard.js');
</script>
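
The checkerboard.js file isn't shown above; here's a minimal sketch of what such a paint worklet script could look like (the drawing logic is illustrative, not the exact code behind the demo):

// checkerboard.js
registerPaint('checkerboard', class {
  paint(ctx, geom) {
    // Fill alternating 16x16 squares across the element's geometry.
    const size = 16;
    for (let y = 0; y * size < geom.height; y++) {
      for (let x = 0; x * size < geom.width; x++) {
        if ((x + y) % 2 === 0) {
          ctx.fillStyle = 'gray';
          ctx.fillRect(x * size, y * size, size, size);
        }
      }
    }
  }
});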

For example, instead of adding extra DOM elements to create the ripple effect on a material styled button, you could use the paint API.

It’s also a powerful method of polyfilling CSS features that aren’t supported in a browser yet.

Surma has a great post with several demos in his explainer.

Server Timing API

Hopefully you’re using the navigation and resource timing APIs to track the performance of your site for real users. Until now, there hasn’t been an easy way for the server to report its performance timing.

The new Server Timing API allows your server to pass timing information to the browser, giving you a better picture of your overall performance.

You can track as many metrics as you want: database read times, start-up time, or whatever is important to you, by adding a Server-Timing header to your response:

'Server-Timing': 'su=42;"Start-up",db-read=142;"Database Read"'

They’re shown in Chrome DevTools, or you can pull them out of the response header and save them with your other performance analytics.
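
If you want to pull them out yourself, the entries are exposed on the Performance timeline; a minimal sketch (serverTiming is available on navigation and resource entries in supporting browsers):

for (const entry of performance.getEntriesByType('navigation')) {
  for (const timing of entry.serverTiming || []) {
    console.log(timing.name, timing.duration, timing.description);
  }
}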

display: contents

The new CSS display: contents property is pretty slick!

When added to a container element, any children take its place in the DOM, and it essentially disappears. Let’s say I’ve got two divs, one inside the other. My outer div has a red border, a gray background, and a width of 200 pixels. The inner div has a blue border and a light blue background.

.disp-contents-outer {
  border: 2px solid red;
  background-color: #ccc;
  padding: 10px;
  width: 200px;
}
.disp-contents-inner {
  border: 2px solid blue;
  background-color: lightblue;
  padding: 10px;
}

By default, the inner div is contained in the outer div.


Adding display: contents to the outer div makes the outer div disappear, and its constraints are no longer applied to the inner div. The inner div is now 100% width.

Use DevTools to inspect the DOM, and notice the outer div still exists.

There are plenty of cases where this might be helpful, but the most common one is with flexbox. With flexbox, only the immediate children of a flex container become flex items.

But, once you apply display: contents to a child, its children become flex items and are laid out using the same rules that would have been applied to their parent.

Check out Rachel Andrew’s excellent post Vanishing boxes with display contents for more details and other examples.

And more!

These are just a few of the changes in Chrome 65 for developers, of course, there’s plenty more.

  • The syntax for specifying HSL and HSLA, and RGB and RGBA coordinates for the color property now matches the CSS Color 4 spec.
  • There’s a new feature policy that allows you to control synchronous XHRs through an HTTP header or the iframe allow attribute.

Be sure to check out New in Chrome DevTools, to learn what’s new in DevTools in Chrome 65. And, if you’re interested in Progressive Web Apps, check out the new PWA Roadshow video series. Then, click the subscribe button on our YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 66 is released, I’ll be right here to tell you -- what’s new in Chrome!

Unblocking Clipboard Access

Over the past few years, browsers have converged on using document.execCommand for clipboard interactions. It's great to have a single widely-supported way to integrate copy and paste into web apps, but this came at a cost: clipboard access is synchronous, and can only read & write to the DOM.

Synchronous copy & paste might seem fine for small bits of text, but there are a number of cases where blocking the page for clipboard transfer leads to a poor experience. Time-consuming sanitization or decoding might be needed before content can be safely pasted. The browser may need to load linked resources from a pasted document - that would block the page while waiting on the disk or network. Imagine adding permissions into the mix, requiring that the browser block the page while asking the user if an app can access the clipboard.

At the same time, the permissions put in place around document.execCommand for clipboard interaction are loosely defined and vary between browsers. So, what might a dedicated clipboard API look like if we wanted to address blocking and permissions problems?

That's the new Async Clipboard API, the text-focused portion of which we're shipping in Chrome 66. It's a replacement for execCommand-based copy & paste that has a well-defined permissions model and doesn't block the page. This new API also Promises (see what I did there?) to simplify clipboard events and align them with the Drag & Drop API.

Copy: Writing Text to the Clipboard

Text can be copied to the clipboard by calling writeText(). Since this API is asynchronous, the writeText() function returns a Promise that will be resolved or rejected depending on whether the text we passed is copied successfully:

navigator.clipboard.writeText('Text to be copied')
  .then(() => {
    console.log('Text copied to clipboard');
  })
  .catch(err => {
    // This can happen if the user denies clipboard permissions:
    console.error('Could not copy text: ', err);
  });

Similarly, we can write this as an async function, then await the return of writeText():

async function copyPageUrl() {
  try {
    await navigator.clipboard.writeText(location.href);
    console.log('Page URL copied to clipboard');
  } catch (err) {
    console.error('Failed to copy: ', err);
  }
}

Paste: Reading Text from the Clipboard

Much like copy, text can be read from the clipboard by calling readText() and waiting for the returned Promise to resolve with the text:

navigator.clipboard.readText()
  .then(text => {
    console.log('Pasted content: ', text);
  })
  .catch(err => {
    console.error('Failed to read clipboard contents: ', err);
  });

For consistency, here's the equivalent async function:

async function getClipboardContents() {
  try {
    const text = await navigator.clipboard.readText();
    console.log('Pasted content: ', text);
  } catch (err) {
    console.error('Failed to read clipboard contents: ', err);
  }
}

Handling Paste Events

There are plans to introduce a new event for detecting clipboard changes, but for now it's best to use the "paste" event. It works nicely with the new asynchronous methods for reading clipboard text:

document.addEventListener('paste', event => {
  event.preventDefault();
  navigator.clipboard.readText().then(text => {
    console.log('Pasted text: ', text);
  });
});

Security and Permissions

Clipboard access has always presented a security concern for browsers. Without proper permissions in place, a page could silently copy all manner of malicious content to a user's clipboard that would produce catastrophic results when pasted. Imagine a web page that silently copies rm -rf / or a decompression bomb image to your clipboard.

Giving web pages unfettered read access to the clipboard is even more troublesome. Users routinely copy sensitive information like passwords and personal details to the clipboard, which could then be read by any page without them ever knowing.

As with many new APIs, navigator.clipboard is only supported for pages served over HTTPS. To help prevent abuse, clipboard access is only allowed when a page is the active tab. Pages in active tabs can write to the clipboard without requesting permission, but reading from the clipboard always requires permission.

To make things easier, two new permissions for copy & paste have been added to the Permissions API. The clipboard-write permission is granted automatically to pages when they are the active tab. The clipboard-read permission must be requested, which you can do by trying to read data from the clipboard.

{ name: 'clipboard-read' }
{ name: 'clipboard-write' }

Screenshot of the permissions prompt shown when attempting to read from the clipboard.

As with anything using the Permissions API, it's possible to check if your app has permission to interact with the clipboard:

navigator.permissions.query({
  name: 'clipboard-read'
}).then(permissionStatus => {
  // Will be 'granted', 'denied' or 'prompt':
  console.log(permissionStatus.state);

  // Listen for changes to the permission state
  permissionStatus.onchange = () => {
    console.log(permissionStatus.state);
  };
});

Here's where the "async" part of the Clipboard API really comes in handy though: attempting to read or write clipboard data will automatically prompt the user for permission if it hasn't already been granted. Since the API is promise-based this is completely transparent, and a user denying clipboard permission rejects the promise so the page can respond appropriately.

Since Chrome only allows clipboard access when a page is the current active tab, you'll find some of the examples here don't run quite right if pasted directly into DevTools, since DevTools itself is the active tab. There's a trick: we need to defer the clipboard access using setTimeout, then quickly click inside the page to focus it before the functions are called:

setTimeout(async () => {
  const text = await navigator.clipboard.readText();
  console.log(text);
}, 2000);

Looking Back

Prior to the introduction of the Async Clipboard API, we had a mix of different copy & paste implementations across web browsers.

In most browsers, the browser's own copy and paste can be triggered using document.execCommand('copy') and document.execCommand('paste'). If the text to be copied is a string not present in the DOM, we have to inject and select it:

button.addEventListener('click', e => {
  const input = document.createElement('input');
  document.body.appendChild(input);
  input.value = text; // "text" is the string to copy
  input.focus();
  input.select();
  // execCommand() returns a boolean indicating success:
  const result = document.execCommand('copy');
  if (!result) {
    console.error('Failed to copy text.');
  }
  document.body.removeChild(input); // clean up the temporary element
});

Similarly, here's how you can handle pasted content in browsers that don't support the new Async Clipboard API:

document.addEventListener('paste', e => {
  const text = e.clipboardData.getData('text/plain');
  console.log('Got pasted text: ', text);
})

In Internet Explorer, we can also access the clipboard through window.clipboardData. If accessed within a user gesture such as a click event - part of asking permission responsibly - no permissions prompt is shown.

Detection and Fallback

It's a good idea to use feature detection to take advantage of Async Clipboard while still supporting all browsers. You can detect support for the Async Clipboard API by checking for the existence of navigator.clipboard:

document.addEventListener('paste', async e => {
  let text;
  if (navigator.clipboard) {
    text = await navigator.clipboard.readText();
  } else {
    text = e.clipboardData.getData('text/plain');
  }
  console.log('Got pasted text: ', text);
});

What's Next for the Async Clipboard API?

As you may have noticed, this post only covers the text part of navigator.clipboard. There are more generic read() and write() methods in the specification, but these come with additional implementation complexity and security concerns (remember those image bombs?). For now, Chrome is rolling out the simpler text parts of the API.


Credential Management API Feature Detection Check-up

TL;DR

WebAuthn helps increase security by bringing public-key credential based authentication to the Web, and is soon to be supported in Chrome, Firefox and Edge (with the updated spec). It adds a new kind of Credential object, which may break websites that use the Credential Management API without feature-detecting the specific credential types they use.

If you are currently doing this for feature detection:

if (navigator.credentials && navigator.credentials.preventSilentAccess) {
  // use CM API
}

Do these instead:

if (window.PasswordCredential || window.FederatedCredential) {
  // Call navigator.credentials.get() to retrieve stored
  // PasswordCredentials or FederatedCredentials.
}

if (window.PasswordCredential) {
  // Get/Store PasswordCredential
}

if (window.FederatedCredential) {
  // Get/Store FederatedCredential
}

if (navigator.credentials && navigator.credentials.preventSilentAccess) {
  // Call navigator.credentials.preventSilentAccess()
}

See changes made to the sample code as an example.

Read on to learn more.

Note: If you are using Google identity as a primary way for your users to sign in, consider using the one tap sign-up and automatic sign-in JavaScript library built on the Credential Management API. It combines Google sign-in and password-based sign-in into one API call, and adds support for one-tap account creation.

What is the Credential Management API

The Credential Management API (CM API) gives websites programmatic access to the user agent’s credential store for storing/retrieving user credentials for the calling origin.

Basic APIs are:

  • navigator.credentials.get()
  • navigator.credentials.store()
  • navigator.credentials.create()
  • navigator.credentials.preventSilentAccess()

The original CM API specification defines 2 credential types:

  • PasswordCredential
  • FederatedCredential

A PasswordCredential is a credential that contains a user's id and password.
A FederatedCredential is a credential that contains a user's id and a string that represents an identity provider.

With these 2 credentials, websites can:

  • Let the user sign in with a previously saved password-based or federated credential as soon as they land (auto sign-in),
  • Store the password-based or federated credential the user has signed in with,
  • Keep the user's sign-in credentials up-to-date (e.g. after a password change)

What is WebAuthn

WebAuthn (Web Authentication) adds public-key credentials to the CM API. For example, it gives websites a standardized way to implement second-factor authentication using FIDO 2.0 compliant authenticator devices.

On a technical level, WebAuthn extends the CM API with the PublicKeyCredential interface.

What is the problem?

Previously we have been guiding developers to feature-detect the CM API with the following code:

if (navigator.credentials && navigator.credentials.preventSilentAccess) {
  // Use CM API
}

But as you can see from the descriptions above, navigator.credentials is now expanded to support public-key credentials in addition to password credentials and federated credentials.

The problem is that user agents don't necessarily support all kinds of credentials. If you continue to feature-detect using navigator.credentials alone, your website may break when you use a credential type that the browser doesn't support.

Supported credential types by browsers

Browser    PasswordCredential / FederatedCredential    PublicKeyCredential
Chrome     Available                                   In development
Firefox    N/A                                         Aiming to ship in 60
Edge       N/A                                         Implemented with older API. New API (navigator.credentials) coming soon.

The solution

You can avoid this by modifying your feature detection code to explicitly test for the credential types you intend to use, as follows.

if (window.PasswordCredential || window.FederatedCredential) {
  // Call navigator.credentials.get() to retrieve stored
  // PasswordCredentials or FederatedCredentials.
}

if (window.PasswordCredential) {
  // Get/Store PasswordCredential
}

if (window.FederatedCredential) {
  // Get/Store FederatedCredential
}

if (navigator.credentials && navigator.credentials.preventSilentAccess) {
  // Call navigator.credentials.preventSilentAccess()
}

See actual changes made to the sample code as an example.

For reference, here's how to detect the PublicKeyCredential added in WebAuthn:

if (window.PublicKeyCredential) {
  // use CM API with PublicKeyCredential added in the WebAuthn spec
}

Timeline

The earliest available implementation of WebAuthn is in Firefox, and it is planned to be stable around early May 2018.

Finally

If you have any questions, send them over to @agektmr or agektmr@chromium.org.

#SmooshGate FAQ

What the smoosh happened?!

A proposal for a JavaScript language feature called Array.prototype.flatten turns out to be Web-incompatible. Shipping the feature in Firefox Nightly caused at least one popular website to break. Given that the problematic code is part of the widespread MooTools library, it’s likely that many more websites are affected. (Although MooTools is not commonly used for new websites in 2018, it used to be very popular and is still present on many production websites.)

The proposal author jokingly suggested renaming flatten to smoosh to avoid the compatibility issue. The joke was not clear to everyone; some people started to incorrectly believe that the new name had already been decided, and things escalated quickly.

What does Array.prototype.flatten do?

Array.prototype.flatten flattens arrays recursively up to the specified depth, which defaults to 1.

// Flatten one level:
const array = [1, [2, [3]]];
array.flatten();
// → [1, 2, [3]]

// Flatten recursively until the array contains no more nested arrays:
array.flatten(Infinity);
// → [1, 2, 3]

The same proposal includes Array.prototype.flatMap, which is like Array.prototype.map except it flattens the result into a new array.

[2, 3, 4].flatMap((x) => [x, x * 2]);
// → [2, 4, 3, 6, 4, 8]

What is MooTools doing that causes this problem?

MooTools defines its own non-standard version of Array.prototype.flatten:

Array.prototype.flatten = /* non-standard implementation */;

MooTools’ flatten implementation differs from the proposed standard. However, this is not the problem! When browsers ship Array.prototype.flatten natively, MooTools overrides the native implementation. This ensures that code relying on the MooTools behavior works as intended regardless of whether native flatten is available. So far, so good!

Unfortunately, something else then happens. MooTools copies over all its custom array methods to Elements.prototype (where Elements is a MooTools-specific API):

for (var key in Array.prototype) {
  Elements.prototype[key] = Array.prototype[key];
}

for-in iterates over “enumerable” properties, which doesn’t include native methods like Array.prototype.sort, but it does include regularly-assigned properties like Array.prototype.foo = whatever. But — and here’s the kicker — if you overwrite a non-enumerable property, e.g. Array.prototype.sort = whatever, it remains non-enumerable.
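
To see that behavior yourself, here's a quick console experiment (the property names are illustrative):

Array.prototype.foo = 42; // regular assignment creates an enumerable property
Object.getOwnPropertyDescriptor(Array.prototype, 'foo').enumerable;  // true
Object.getOwnPropertyDescriptor(Array.prototype, 'sort').enumerable; // false

// Overwriting a native, non-enumerable method keeps it non-enumerable:
Array.prototype.sort = function() {};
Object.getOwnPropertyDescriptor(Array.prototype, 'sort').enumerable; // false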

Currently, Array.prototype.flatten = mooToolsFlattenImplementation creates an enumerable flatten property, so it’s later copied to Elements. But if we ship a native version of flatten, it becomes non-enumerable, and isn’t copied to Elements. Any code relying on MooTools’ Elements.prototype.flatten is now broken.

Although it seems like changing the native Array.prototype.flatten to be enumerable would fix the problem, it would likely cause even more compatibility issues. Every website relying on for-in to iterate over an array (which is a bad practice, but it happens) would then suddenly get an additional loop iteration for the flatten property.

The bigger underlying problem here is modifying built-in objects. Extending native prototypes is generally accepted as a bad practice nowadays, as it doesn’t compose nicely with other libraries and third-party code. Don’t modify objects you don’t own!

Why don’t we just keep the existing name and break the Web?

In 1996, before CSS became widespread, and long before “HTML5” became a thing, the Space Jam website went live. Today, the website still works the same way it did 22 years ago.

How did that happen? Did someone maintain that website for all these years, updating it every time browser vendors shipped a new feature?

As it turns out, “don’t break the Web” is the number one design principle for HTML, CSS, JavaScript, and any other standard that’s widely used on the Web. If shipping a new browser feature causes existing websites to stop working, that’s bad for everyone:

  • visitors of the affected websites suddenly get a broken user experience;
  • the website owners went from having a perfectly-working website to a non-functional one without them changing anything;
  • browser vendors shipping the new feature lose market share, due to users switching browsers after noticing “it works in browser X”;
  • once the compatibility issue is known, other browser vendors refuse to ship it. The feature specification does not match reality (“nothing but a work of fiction”), which is bad for the standardization process.

Sure, in retrospect MooTools did the wrong thing — but breaking the web doesn’t punish them, it punishes users. These users do not know what a moo tool is. Alternatively, we can find another solution, and users can continue to use the web. The choice is easy to make.

Does that mean bad APIs can never be removed from the Web Platform?

It depends. In rare cases, bad features can be removed from the Web. Even just figuring out whether it’s possible to remove a feature is a very tricky effort, requiring extensive telemetry to quantify how many web pages would have their behavior changed. But when the feature is sufficiently insecure, is harmful to users, or is used very rarely, this can be done.

<applet>, <keygen>, and showModalDialog() are all examples of bad APIs that were successfully removed from the Web Platform.

Why don’t we just fix MooTools?

Patching MooTools so that it no longer extends built-in objects is a good idea. However, it doesn’t solve the problem at hand. Even if MooTools were to release a patched version, all existing websites using it would have to update for the compatibility problem to go away.

Can’t people just update their copy of MooTools?

In a perfect world, MooTools would release a patch, and every single website using MooTools would magically be updated the next day. Problem solved, right?!

Unfortunately, this is unrealistic. Even if someone were to somehow identify the full set of affected websites, manage to find contact information for each and every one of them, successfully reach out to all the website owners, and convince them all to perform the update (which might mean refactoring their entire code base), the entire process would take years, at best.

Keep in mind that many of these websites are old and likely unmaintained. Even if the maintainer is still around, it’s possible they’re not a highly-skilled web developer like yourself. We can’t expect everyone to go and change their 8-year-old website because of a web compatibility issue.

How does the TC39 process work?

TC39 is the committee in charge of evolving the JavaScript language through the ECMAScript standard.

#SmooshGate caused some to believe that “TC39 wants to rename flatten to smoosh”, but it was an in-joke that wasn’t well-communicated externally. Major decisions like renaming a proposal are not taken lightly, are not taken by a single person, and are definitely not taken overnight based on a single GitHub comment.

TC39 operates on a clear staging process for feature proposals. ECMAScript proposals and any major changes to them (including method renamings) are discussed during TC39 meetings, and need to be approved by the entire committee before they become official. In the case of Array.prototype.flatten, the proposal has already gone through several stages of agreement, all the way up to Stage 3, indicating the feature is ready to be implemented in Web browsers. It’s common for additional spec issues to come up during implementation. In this case, the most important feedback came after trying to ship it: the feature, in its current state, breaks the Web. Hard-to-predict issues like these are part of the reason why the TC39 process doesn’t just end once browsers ship a feature.

TC39 operates on consensus, meaning the committee has to agree on any new changes. Even if smoosh had been a serious suggestion, it seems likely that a committee member would object to it in favor of a more common name like compact or chain.

The renaming from flatten to smoosh (even if it hadn’t been a joke) has never been discussed at a TC39 meeting. As such, the official TC39 stance on this topic is currently unknown. No single individual can speak on behalf of all of TC39 until consensus is reached at the next meeting.

TC39 meetings are generally attended by people with highly diverse backgrounds: some have years of programming language design experience, others work on a browser or JavaScript engine, and an increasing number of attendees are there to represent the JavaScript developer community.

What happens next?

The next TC39 meeting takes place this week. There’s an item on the agenda to discuss flatten and its web compatibility issues. Hopefully, we’ll know more about next steps after the meeting.


Working with the new CSS Typed Object Model

TL;DR

CSS now has a proper object-based API for working with values in JavaScript.

el.attributeStyleMap.set('padding', CSS.px(42));
const padding = el.attributeStyleMap.get('padding');
console.log(padding.value, padding.unit); // 42, 'px'

The days of concatenating strings and subtle bugs are over!

Heads up: Chrome 66 adds support for the CSS Typed Object Model for a subset of CSS properties.

Introduction

Old CSSOM

CSS has had an object model (CSSOM) for many years. In fact, any time you read/set .style in JavaScript you're using it:

// Element styles.
el.style.opacity = 0.3;
typeof el.style.opacity === 'string' // Ugh. A string!?

// Stylesheet rules.
document.styleSheets[0].cssRules[0].style.opacity = 0.3;

New CSS Typed OM

The new CSS Typed Object Model (Typed OM), part of the Houdini effort, expands this worldview by adding types, methods, and a proper object model to CSS values. Instead of strings, values are exposed as JavaScript objects to facilitate performant (and sane) manipulation of CSS.

Instead of using element.style, you'll be accessing styles through a new .attributeStyleMap property for elements and a .styleMap property for stylesheet rules. Both return a StylePropertyMap object.

// Element styles.
el.attributeStyleMap.set('opacity', 0.3);
typeof el.attributeStyleMap.get('opacity').value === 'number' // Yay, a number!

// Stylesheet rules.
const stylesheet = document.styleSheets[0];
stylesheet.cssRules[0].styleMap.set('background', 'blue');

Because StylePropertyMaps are Map-like objects, they support all the usual suspects (get/set/keys/values/entries), making them flexible to work with:

// All 3 of these are equivalent:
el.attributeStyleMap.set('opacity', 0.3);
el.attributeStyleMap.set('opacity', '0.3');
el.attributeStyleMap.set('opacity', CSS.number(0.3)); // see next section
// el.attributeStyleMap.get('opacity').value === 0.3

// StylePropertyMaps are iterable.
for (const [prop, val] of el.attributeStyleMap) {
  console.log(prop, val.value);
}
// → opacity, 0.3

el.attributeStyleMap.has('opacity') // true

el.attributeStyleMap.delete('opacity') // remove opacity.

el.attributeStyleMap.clear(); // remove all styles.

Note that in the second example, opacity is set to a string ('0.3'), but a number comes back out when the property is read back later.

If a given CSS property supports numbers, Typed OM will accept a string as input, but always returns a number! The relationship between the old CSSOM and the new Typed OM is similar to how .className grew up and got its own API, .classList.

Benefits

So what problems is CSS Typed OM trying to solve? Looking at the examples above (and throughout the rest of this article), you might argue that CSS Typed OM is far more verbose than the old object model. I would agree!

Before you write off Typed OM, consider some of the key features it brings to the table:

  1. Fewer bugs. e.g. numerical values are always returned as numbers, not strings.

     el.style.opacity += 0.1;
     el.style.opacity === '0.30.1' // dragons!
    
  2. Arithmetic operations & unit conversion. Convert between absolute length units (e.g. px -> cm) and do basic math.

  3. Value clamping & rounding. Typed OM rounds and/or clamps values so they're within the acceptable ranges for a property.
  4. Better performance. The browser has to do less work serializing and deserializing string values. Now, the engine uses a similar understanding of CSS values across JS and C++. Tab Atkins has shown some early perf benchmarks that put Typed OM at ~30% faster in operations/sec when compared to using the old CSSOM and strings. This can be significant for rapid CSS animations using requestAnimationFrame(). crbug.com/808933 tracks additional performance work in Blink.
  5. Error handling. New parsing methods bring error handling to the world of CSS.
  6. "Should I use camel-cased CSS names or strings?" There's no more guessing if names are camel-cased or strings (e.g. el.style.backgroundColor vs el.style['background-color']). CSS property names in Typed OM are always strings, matching what you actually write in CSS :)

Browser support & feature detection

Typed OM landed in Chrome 66 and is being implemented in Firefox. Edge has shown signs of support, but has yet to add it to their platform dashboard.

Note: Only a subset of CSS properties are supported in Chrome 66+ for now.

For feature detection, you can check if one of the CSS.* numeric factories is defined:

if (window.CSS && CSS.number) {
  // Supports CSS Typed OM.
}

API Basics

Accessing styles

Values are separate from units in CSS Typed OM. Getting a style returns a CSSUnitValue containing a value and unit:

el.attributeStyleMap.set('margin-top', CSS.px(10));
// el.attributeStyleMap.set('margin-top', '10px'); // string arg also works.
el.attributeStyleMap.get('margin-top').value  // 10
el.attributeStyleMap.get('margin-top').unit // 'px'

// Use CSSKeywordValue for plain text values:
el.attributeStyleMap.set('display', new CSSKeywordValue('initial'));
el.attributeStyleMap.get('display').value // 'initial'
el.attributeStyleMap.get('display').unit // undefined

Computed styles

Computed styles have moved from an API on window to a new method on HTMLElement, computedStyleMap():

Old CSSOM

el.style.opacity = 0.5;
window.getComputedStyle(el).opacity === "0.5" // Ugh, more strings!

New Typed OM

el.attributeStyleMap.set('opacity', 0.5);
el.computedStyleMap().get('opacity').value // 0.5

Note: One gotcha between window.getComputedStyle() and element.computedStyleMap() is that the former returns resolved values whereas the latter returns computed values. For example, Typed OM retains percentage values (width: 50%), while CSSOM resolves them to lengths (e.g. width: 200px).
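
A minimal sketch of that difference (assuming el's parent is 400px wide and width is in the supported property subset):

el.attributeStyleMap.set('width', CSS.percent(50));
el.computedStyleMap().get('width').toString(); // "50%" (computed value)
window.getComputedStyle(el).width;             // "200px" (resolved value)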

Value clamping / rounding

One of the nice features of the new object model is automatic clamping and/or rounding of computed style values. As an example, let's say you try to set opacity to a value outside of the acceptable range, [0, 1]. Typed OM clamps the value to 1 when computing the style:

el.attributeStyleMap.set('opacity', 3);
el.attributeStyleMap.get('opacity').value === 3  // val not clamped.
el.computedStyleMap().get('opacity').value === 1 // computed style clamps value.

Similarly, setting z-index:15.4 rounds to 15 so the value remains an integer.

el.attributeStyleMap.set('z-index', CSS.number(15.4));
el.attributeStyleMap.get('z-index').value  === 15.4 // val not rounded.
el.computedStyleMap().get('z-index').value === 15   // computed style is rounded.

CSS numerical values

Numbers are represented by two types of CSSNumericValue objects in Typed OM:

  1. CSSUnitValue - values that contain a single unit type (e.g. "42px").
  2. CSSMathValue - values that contain more than one value/unit such as mathematical expression (e.g. "calc(56em + 10%)").

Unit values

Simple numerical values ("50%") are represented by CSSUnitValue objects. While you could create these objects directly (new CSSUnitValue(10, 'px')), most of the time you'll be using the CSS.* factory methods:

const {value, unit} = CSS.number('10');
// value === 10, unit === 'number'

const {value, unit} = CSS.px(42);
// value === 42, unit === 'px'

const {value, unit} = CSS.vw('100');
// value === 100, unit === 'vw'

const {value, unit} = CSS.percent('10');
// value === 10, unit === 'percent'

const {value, unit} = CSS.deg(45);
// value === 45, unit === 'deg'

const {value, unit} = CSS.ms(300);
// value === 300, unit === 'ms'

Note: as shown in the examples, these methods can be passed a Number or String representing a number.

See the spec for the full list of CSS.* methods.

Math values

CSSMathValue objects represent mathematical expressions and typically contain more than one value/unit. The common example is creating a CSS calc() expression, but there are methods for all the CSS functions: calc(), min(), max().

new CSSMathSum(CSS.vw(100), CSS.px(-10)).toString(); // "calc(100vw + -10px)"

new CSSMathNegate(CSS.px(42)).toString() // "calc(-42px)"

new CSSMathInvert(CSS.s(10)).toString() // "calc(1 / 10s)"

new CSSMathProduct(CSS.deg(90), CSS.number(Math.PI/180)).toString();
// "calc(90deg * 0.0174533)"

new CSSMathMin(CSS.percent(80), CSS.px(12)).toString(); // "min(80%, 12px)"

new CSSMathMax(CSS.percent(80), CSS.px(12)).toString(); // "max(80%, 12px)"

Nested expressions

Using the math functions to create more complex values gets a bit confusing. Below are a few examples to get you started. I've added extra indentation to make them easier to read.

calc(1px - 2 * 3em) would be constructed as:

new CSSMathSum(
  CSS.px(1),
  new CSSMathNegate(
    new CSSMathProduct(2, CSS.em(3))
  )
);

calc(1px + 2px + 3px) would be constructed as:

new CSSMathSum(CSS.px(1), CSS.px(2), CSS.px(3));

calc(calc(1px + 2px) + 3px) would be constructed as:

new CSSMathSum(
  new CSSMathSum(CSS.px(1), CSS.px(2)),
  CSS.px(3)
);

Arithmetic operations

One of the most useful features of the CSS Typed OM is that you can perform mathematical operations on CSSUnitValue objects.

Basic operations

Basic operations (add/sub/mul/div/min/max) are supported:

CSS.deg(45).mul(2) // {value: 90, unit: "deg"}

CSS.percent(50).max(CSS.vw(50)).toString() // "max(50%, 50vw)"

// Can pass a CSSUnitValue:
CSS.px(1).add(CSS.px(2)) // {value: 3, unit: "px"}

// multiple values:
CSS.s(1).sub(CSS.ms(200), CSS.ms(300)).toString() // "calc(1s + -200ms + -300ms)"

// or pass a `CSSMathSum`:
const sum = new CSSMathSum(CSS.percent(100), CSS.px(20));
CSS.vw(100).add(sum).toString() // "calc(100vw + (100% + 20px))"

Conversion

Absolute length units can be converted to other unit lengths:

// Convert px to other absolute/physical lengths.
el.attributeStyleMap.set('width', '500px');
const width = el.attributeStyleMap.get('width');
width.to('mm'); // CSSUnitValue {value: 132.29166666666669, unit: "mm"}
width.to('cm'); // CSSUnitValue {value: 13.229166666666668, unit: "cm"}
width.to('in'); // CSSUnitValue {value: 5.208333333333333, unit: "in"}

CSS.deg(200).to('rad').value // 3.49066
CSS.s(2).to('ms').value // 2000

Equality
const width = CSS.px(200);
CSS.px(200).equals(width) // true

const rads = CSS.deg(180).to('rad');
CSS.deg(180).equals(rads.to('deg')) // true

CSS transform values

CSS transforms are created with a CSSTransformValue, passing an array of transform values (e.g. CSSRotate, CSSScale, CSSSkew, CSSSkewX, CSSSkewY). As an example, say you want to re-create this CSS:

transform: rotateZ(45deg) scale(0.5) translate3d(10px,10px,10px);

Translated into Typed OM:

const transform =  new CSSTransformValue([
  new CSSRotate(CSS.deg(45)),
  new CSSScale(CSS.number(0.5), CSS.number(0.5)),
  new CSSTranslate(CSS.px(10), CSS.px(10), CSS.px(10))
]);

In addition to its verbosity (lolz!), CSSTransformValue has some cool features. It has a boolean property to differentiate 2D and 3D transforms and a .toMatrix() method to return the DOMMatrix representation of a transform:

new CSSTranslate(CSS.px(10), CSS.px(10)).is2D // true
new CSSTranslate(CSS.px(10), CSS.px(10), CSS.px(10)).is2D // false
new CSSTranslate(CSS.px(10), CSS.px(10)).toMatrix() // DOMMatrix

Example: animating a cube

Let's see a practical example of using transforms. We'll use JavaScript and CSS transforms to animate a cube.

const rotate = new CSSRotate(0, 0, 1, CSS.deg(0));
const transform = new CSSTransformValue([rotate]);

const box = document.querySelector('#box');
box.attributeStyleMap.set('transform', transform);

(function draw() {
  requestAnimationFrame(draw);
  transform[0].angle.value += 5; // Update the transform's angle.
  // rotate.angle.value += 5; // Or, update the CSSRotate object directly.
  box.attributeStyleMap.set('transform', transform); // commit it.
})();

Notice that:

  1. Numerical values means we can increment the angle directly using math!
  2. Rather than touching the DOM or reading back a value on every frame (e.g. no box.style.transform=`rotate(0,0,1,${newAngle}deg)`), the animation is driven by updating the underlying CSSTransformValue data object, improving performance.

Demo

Below, you'll see a red cube if your browser supports Typed OM. The cube starts rotating when you mouse over it. The animation is powered by CSS Typed OM! 🤘


CSS custom properties values

CSS var() references become CSSVariableReferenceValue objects in the Typed OM. Their values get parsed into CSSUnparsedValue because they can take any type (px, %, em, rgba(), etc).

const foo = new CSSVariableReferenceValue('--foo');
// foo.variable === '--foo'

// Fallback values:
const padding = new CSSVariableReferenceValue(
    '--default-padding', new CSSUnparsedValue(['8px']));
// padding.variable === '--default-padding'
// padding.fallback instanceof CSSUnparsedValue === true
// padding.fallback[0] === '8px'

If you want to get the value of a custom property, there's a bit of work to do:

<style>
  body {
    --foo: 10px;
  }
</style>
<script>
  const styles = document.querySelector('style');
  const foo = styles.sheet.cssRules[0].styleMap.get('--foo').toString().trim();
  console.log(CSSNumericValue.parse(foo).value); // 10
</script>

Position values

CSS properties that take a space-separated x/y position such as object-position are represented by CSSPositionValue objects.

const position = new CSSPositionValue(CSS.px(5), CSS.px(10));
el.attributeStyleMap.set('object-position', position);

console.log(position.x.value, position.y.value);
// → 5, 10

Parsing values

The Typed OM introduces parsing methods to the web platform! This means you can finally parse CSS values programmatically, before trying to use them! This new capability is a potential life saver for catching early bugs and malformed CSS.

Parse a full style:

const css = CSSStyleValue.parse(
    'transform', 'translate3d(10px,10px,0) scale(0.5)');
// → css instanceof CSSTransformValue === true
// → css.toString() === 'translate3d(10px, 10px, 0) scale(0.5)'

Parse values into CSSUnitValue:

CSSNumericValue.parse('42.0px') // {value: 42, unit: 'px'}

// But it's easier to use the factory functions:
CSS.px(42.0) // '42px'

Error handling

Example - check if the CSS parser will be happy with this transform value:

try {
  const css = CSSStyleValue.parse('transform', 'translate4d(bogus value)');
  // use css
} catch (err) {
  console.error(err);
}

Conclusion

It's nice to finally have an updated object model for CSS. Working with strings never felt right to me. The CSS Typed OM API is a bit verbose, but hopefully it results in fewer bugs and more performant code down the line.

Deprecations and removals in Chrome 66

ImageCapture.setOptions() removed

Current thinking on setting device options is to use the constrainable pattern. Consequently, this method was removed from the ImageCapture specification. Since it appears to have little to no use on production websites, it is being removed. A replacement method is not available at this time.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Service worker: disallow CORS responses for same-origin requests

Previous versions of the service worker specification allowed a service worker to return a CORS response to a same-origin request. The thinking was that the service worker could read from a CORS response to create a completely synthetic response. In spite of this, the original request URL was maintained in the response. So outerResponse.url exactly equaled url and innerResponse.url exactly equaled crossOriginURL (see the sketch below).
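
To ground those names, here's a minimal sketch of the now-disallowed pattern (the URLs and variable names are illustrative):

// In the page (inside an async function):
const url = '/data.json'; // same-origin request
const outerResponse = await fetch(url);

// In the service worker:
self.addEventListener('fetch', event => {
  const crossOriginURL = 'https://other.example.com/data.json';
  // The CORS response below is the "innerResponse":
  event.respondWith(fetch(crossOriginURL, {mode: 'cors'}));
});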

A recent change to the Fetch specification requires that Response.url be exposed if it is present. A consequence of this is scenarios in which self.location.href returns a different origin than self.origin. To avoid this, service workers are no longer allowed to return CORS responses for same origin requests.

For a longer discussion on this change, see the issue filed against the Fetch specification in November 2017.

Chromestatus Tracker | Chromium Bug

WebAudio: dezippering removed

Web Audio originally shipped with dezippering support. When an AudioParam value was set directly with the value setter, the value was not updated immediately. Instead, an exponential smoother was applied with a time constant of about 10 ms so that the change happened smoothly, limiting glitches. It was never specified which parameters had smoothing or what the time constant was. It wasn’t even obvious whether the actual time constant was an appropriate value.

After much discussion, the working group removed dezippering from the spec. Now, the value is changed immediately when set. In place of dezippering, it is recommended that developers use the existing AudioParam.setTargetAtTime() method to do the dezippering, giving you full control over when to apply it, how fast to change, and which parameters should be smoothed.

Removing this reduces developer confusion about which audio parameters support dezippering.
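
As a minimal sketch, smoothing a gain change yourself with a 10 ms time constant (mirroring the old behavior; the values are illustrative):

const audioContext = new AudioContext();
const gainNode = audioContext.createGain();
// Exponentially approach 0.5 starting now, with a 10 ms time constant:
gainNode.gain.setTargetAtTime(0.5, audioContext.currentTime, 0.01);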

Intent to Remove | Chromestatus Tracker | Chromium Bug

CSS position values with three parts deprecated

Recently specifications have required that new properties accepting position values not support values with three parts. It's believed this approach makes processing shorthand syntax easier. The current version of the CSS Values and Units Module applies this requirement to all CSS position values. As of Chrome 66, three-part position values are deprecated. Removal is expected in Chrome 68, around July 2018.

Intent to Remove | Chromestatus Tracker | Chromium Bug

Methods document.createTouch(), document.createTouchList() are deprecated

The TouchEvent() constructor has been supported in Chrome since version 48. To comply with the specification, document.createTouch() and document.createTouchList() are now deprecated.
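
Constructing synthetic touch events going forward looks something like this (a minimal sketch; the coordinates and target are illustrative):

const touch = new Touch({
  identifier: 0,           // unique ID for this touch point
  target: document.body,   // element the touch starts on
  clientX: 10,
  clientY: 10,
});
document.body.dispatchEvent(new TouchEvent('touchstart', {
  touches: [touch],
  bubbles: true,
  cancelable: true,
}));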

Intent to Remove | Chromestatus Tracker | Chromium Bug

macOS native echo cancellation

Since version 10.12 (Sierra), macOS includes a native echo canceller. It can be experimentally enabled in Chrome M66 by opting in to an Origin Trial or by supplying a command line flag when starting Chrome; see below.

With the experiment enabled, the macOS native echo canceller will be used for getUserMedia streams with the echoCancellation constraint enabled. On other platforms, and on earlier versions of macOS, enabling the experiment will effectively do nothing; the same echo canceller will be used as before (usually the software one from WebRTC).
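
For reference, here's the kind of getUserMedia() call that opts a stream into echo cancellation (a minimal sketch, inside an async function):

const stream = await navigator.mediaDevices.getUserMedia({
  audio: {echoCancellation: true},
});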

Why are we doing this?

We want to evaluate the performance of the macOS native echo canceller. Being an Apple-developed component, it has the opportunity to be specifically tuned for their hardware. Its placement in the audio pipeline should also make it less sensitive to certain audio glitches that can happen in Chrome.

What is an echo canceller?

An echo canceller tries to remove from the microphone signal any sound played out on the speakers. Without this, what you're saying as one party of a call will be picked up by the other parties' microphones and then sent back to you. You'll hear an echo of yourself!

How to enable the experiment

To get this new behavior on your site, you need to be signed up for the "macOS native echo cancellation" Origin Trial. If you just want to try it out locally, the experiment can be enabled on the command line:

chrome --enable-blink-features=ExperimentalHardwareEchoCancellation

Passing this flag on the command line enables the feature globally in Chrome for the current session.

With this experiment, we want to evaluate any qualitative differences when using the macOS native echo canceller, like:

  • How well does it cancel echo?
  • How well does it handle double talk scenarios - i.e. when both sides are talking at the same time?
  • Does it negatively affect audio quality when there is no echo to cancel?
  • Do certain audio devices (like headsets) cause problems?
  • etc.

We're also interested in how Chrome interacts with other applications when using the native echo canceller on macOS, as well as any stability issues or other problems with the implementation.

If you're trying this out, please file your feedback in this bug. If possible, include what hardware was used (macOS version, hardware model, microphone / headset / etc.). If doing more large-scale experiments, links to comparative statistics on audio call quality are appreciated, whether objective or subjective.

Present web pages to secondary attached displays

Chrome 66 allows web pages to use a secondary attached display through the Presentation API and to control its contents through the Presentation Receiver API.

1/2. The user picks a secondary attached display.
2/2. A web page is automatically presented to the previously picked display.

Background

Until now, web developers could build experiences where a user would see local content in Chrome that is different from the content they’d see on a remote display while still being able to control that experience locally. Examples include managing a playback queue on youtube.com while videos play on the TV, or seeing a slide reel with speaker notes on a laptop while the fullscreen presentation is shown in a Hangout session.

There are scenarios though where users may simply want to present content onto a second, attached display. For example, imagine a user in a conference room outfitted with a projector to which they are connected via an HDMI cable. Rather than mirroring the presentation onto a remote endpoint, the user really wants to present the slides full-screen on the projector, leaving the laptop screen available for speaker notes and slide control. While the site author could support this in a very rudimentary way (e.g. popping up a new window, which the user has to then manually drag to the secondary display and maximize to fullscreen), it is cumbersome and provides an inconsistent experience between local and remote presentation.

Key Point: This change is about enabling secondary, attached displays to be used as endpoints for presentations in the same way as remote endpoints.

Present a page

Let me walk you through how to use the Presentation API to present a web page on your secondary attached display. The end result is available at https://googlechrome.github.io/samples/presentation-api/.

First, we’ll create a new PresentationRequest object that will contain the URL we want to present on the secondary attached display.

const presentationRequest = new PresentationRequest('receiver.html');

In this article, I won’t cover use cases where the parameter passed to PresentationRequest can be an array like ['cast://foo', 'apple://foo', 'https://example.com'] as this is not relevant here.

We can now monitor presentation display availability and toggle a "Present" button's visibility based on that availability. Note that we can also decide to always show this button.

Caution: the browser may use more energy while the availability object is alive and actively listening for presentation display availability changes. Please use it with caution in order to save energy on mobile.

presentationRequest.getAvailability()
.then(availability => {
  console.log('Available presentation displays: ' + availability.value);
  availability.addEventListener('change', function() {
    console.log('> Available presentation displays: ' + availability.value);
  });
})
.catch(error => {
  console.log('Presentation availability not supported, ' + error.name + ': ' +
      error.message);
});

Showing a presentation display prompt requires a user gesture such as a click on a button. So let’s call presentationRequest.start() on a button click and wait for the promise to resolve once the user has selected a presentation display (e.g. a secondary attached display in our use case).

function onPresentButtonClick() {
  presentationRequest.start()
  .then(connection => {
    console.log('Connected to ' + connection.url + ', id: ' + connection.id);
  })
  .catch(error => {
    console.log(error);
  });
}

The list presented to the user may also include remote endpoints such as Chromecast devices if you’re connected to a network advertising them.

Presentation Display Picker

When the promise resolves, the web page at the PresentationRequest object URL is presented to the chosen display. Et voilà!

We can now go further and monitor the "close" and "terminate" events as shown below. Note that it is possible to reconnect to a "closed" presentation with presentationRequest.reconnect(presentationId), where presentationId is the ID of the previous presentation connection; see the sketch after the code below.

function onCloseButtonClick() {
  // Disconnect presentation connection but will allow reconnection.
  presentationConnection.close();
}

presentationConnection.addEventListener('close', function() {
  console.log('Connection closed.');
});


function onTerminateButtonClick() {
  // Stop presentation connection for good.
  presentationConnection.terminate();
}

presentationConnection.addEventListener('terminate', function() {
  console.log('Connection terminated.');
});
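
For completeness, here's a minimal reconnect sketch (not part of the original sample). It assumes we saved the connection ID somewhere, for instance in sessionStorage, when we first connected:

// Controller page (illustrative sketch).
// Assume we saved the ID right after connecting:
// sessionStorage.setItem('presentationId', presentationConnection.id);

function onReconnectButtonClick() {
  const presentationId = sessionStorage.getItem('presentationId');
  presentationRequest.reconnect(presentationId)
  .then(connection => {
    // We're reattached to the still-running presentation.
    presentationConnection = connection;
    console.log('Reconnected to ' + connection.id);
  })
  .catch(error => {
    console.log('Failed to reconnect, ' + error.name + ': ' + error.message);
  });
}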

Communicate with the page

Now you're thinking: that's nice, but how do I pass messages between my controller page (the one we've just created) and the receiver page (the one we've passed to the PresentationRequest object)?

First, let’s retrieve existing connections on the receiver page with navigator.presentation.receiver.connectionList and listen to incoming connections as shown below.

// Receiver page

navigator.presentation.receiver.connectionList
.then(list => {
  list.connections.map(connection => addConnection(connection));
  list.addEventListener('connectionavailable', function(event) {
    addConnection(event.connection);
  });
});

function addConnection(connection) {

  connection.addEventListener('message', function(event) {
    console.log('Message: ' + event.data);
    connection.send('Hey controller! I just received a message.');
  });

  connection.addEventListener('close', function(event) {
    console.log('Connection closed!', event.reason);
  });
}

A connection receiving a message fires a "message" event you can listen for. The message can be a string, a Blob, an ArrayBuffer, or an ArrayBufferView. Sending it is as simple as calling connection.send(message) from the controller page or the receiver page.

// Controller page

function onSendMessageButtonClick() {
  presentationConnection.send('Hello!');
}

presentationConnection.addEventListener('message', function(event) {
  console.log('I just received ' + event.data + ' from the receiver.');
});
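
Since messages aren't limited to strings, here's a small illustrative sketch (not from the original sample) sending binary data from the controller:

// Controller page (sketch): send an ArrayBuffer instead of a string.
// On the receiver, event.data arrives as binary data rather than text.
function onSendBinaryMessageButtonClick() {
  const bytes = new TextEncoder().encode('Hello as bytes!');
  presentationConnection.send(bytes.buffer);
}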

Play with the sample at https://googlechrome.github.io/samples/presentation-api/ to get a sense of how it works. I’m sure you’ll enjoy this as much as I do.

Samples and demos

Check out the official Chrome sample we've used for this article.

I recommend the interactive Photowall demo as well. This web app allows multiple controllers to collaboratively present a photo slideshow on a presentation display. Code is available at https://github.com/GoogleChromeLabs/presentation-api-samples.

Photowall demo screenshot
Photo by José Luis Mieza / CC BY-NC-SA 2.0

One more thing

Chrome has a "Cast" browser menu users can invoke at any time while visiting a website. If you want to control the default presentation for this menu, then assign navigator.presentation.defaultRequest to a custom presentationRequest object created earlier.

// Make this presentation the default one when using the "Cast" browser menu.
navigator.presentation.defaultRequest = presentationRequest;
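
Presentations started from the Cast menu can then be picked up by listening for the connectionavailable event on that request; a minimal sketch:

// When the user starts the default presentation from the "Cast" menu,
// the request fires a "connectionavailable" event with the new connection.
presentationRequest.addEventListener('connectionavailable', function(event) {
  presentationConnection = event.connection;
  console.log('Connected via the Cast menu, id: ' + presentationConnection.id);
});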

Dev tips

To inspect the receiver page and debug it, go to the internal chrome://inspect page, select “Other”, and click the “inspect” link next to the currently presented URL.

Inspect presentation receiver pages

You may also want to check out the internal chrome://media-router-internals page for diving into the internal discovery/availability processes.

What's next

As of Chrome 66, Chrome OS, Linux, and Windows platforms are supported. Mac support will come later.

Resources

  • Presentation API bugs: https://crbug.com/?q=component:Blink>PresentationAPI

What's New In DevTools (Chrome 67)


What's New In DevTools (Chrome 67)

Note: The video version of these release notes will be published around early June 2018.

New features and major changes coming to DevTools in Chrome 67 include:

  • Search across all network headers and responses
  • Search pane UI updates
  • CSS variable value previews in the Styles pane
  • Copy as fetch
  • Audits panel updates
  • Stop infinite loops
  • User Timing in the Performance panel tabs
  • Select JavaScript VM instances in the Memory panel
  • Network tab renamed to Page tab
  • Dark theme updates
  • Certificate transparency in the Security panel
  • Site Isolation in the Performance panel

Note: Check what version of Chrome you're running at chrome://version. If you're running an earlier version, these features won't exist. If you're running a later version, these features may have changed. Chrome auto-updates to a new major version about every 6 weeks.

Search across all network headers and responses

Open the Network panel, then press Command+F (Mac) or Control+F (Windows, Linux, Chrome OS) to open the new Network Search pane. DevTools searches the headers and bodies of all network requests for the query that you provide.

Searching for the text 'cache-control' with the new Network Search pane.
Figure 1. Searching for the text cache-control with the new Network Search pane

Click Match Case to make your query case-sensitive. Click Use Regular Expression to show any results that match the pattern you provide. You don't need to wrap your RegEx in forward slashes.

A regular expression query in the Network Search pane.
Figure 2. A regular expression query in the Network Search pane.

Search pane UI updates

The UI of the Global Search pane now matches the UI of the new Network Search pane. It now also pretty-prints results to aid scannability.

The old and new UI.
Figure 3. The old UI on the left, and the new UI on the right

Press Command+Option+F (Mac) or Control+Shift+F (Windows, Linux, Chrome OS) to open Global Search. You can also open it via the Command Menu.

CSS variable value previews in the Styles pane

When the value of a CSS color property, such as background-color or color, is set to a CSS variable, DevTools now shows a preview of that color.

An example of CSS variable color values.
Figure 4. In the old UI on the left, there is no color preview next to color: var(--main-color), whereas in the new UI on the right, there is

Copy as fetch

Right-click a network request then select Copy > Copy As Fetch to copy the fetch()-equivalent code for that request to your clipboard.

Copying the fetch()-equivalent code for a request.
Figure 5. Copying the fetch()-equivalent code for a request

DevTools produces code like the following:

fetch("https://preload.glitch.me/styles.css", {
  "credentials": "omit",
  "headers": {},
  "referrer": "https://preload.glitch.me/after/",
  "referrerPolicy": "no-referrer-when-downgrade",
  "body": null,
  "method": "GET",
  "mode": "cors"
});

Audits panel updates

New audits

The Audits panel has two new audits:

  • Preload key requests. Preloading requests can speed up page load time by giving hints to the browser to download resources that are important for your Critical Rendering Path as soon as possible.
  • Avoid invisible text while webfonts are loading. Ensuring that text is visible while webfonts load makes the page more useful to users faster.

New configuration options

You can now configure the Audits panel to:

  • Preserve desktop viewport and user agent settings. In other words, you can prevent the Audits panel from simulating a mobile device.
  • Disable network and CPU throttling.
  • Preserve storage, such as LocalStorage and IndexedDB, across audits.
New audit configuration options.
Figure 6. New audit configuration options

View traces

After auditing a page, click View Trace to view the load performance data that your audit is based on in the Performance panel.

The View Trace button.
Figure 7. The View Trace button

Stop infinite loops

If you work with for loops, do...while loops, or recursion a lot, you've probably executed an infinite loop by mistake while developing your site. To stop the infinite loop, you can now:

  1. Open the Sources panel.
  2. Click Pause. The button changes to Resume Script Execution.
  3. Hold Resume Script Execution, then select Stop Current JavaScript Call.

In the video above, the clock is being updated via a setInterval() timer. Clicking Start Infinite Loop runs a do...while loop that never stops. The interval resumes afterwards because it wasn't running when Stop Current JavaScript Call was selected.
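
The demo's setup looks roughly like the following sketch (illustrative code with hypothetical element IDs, not the actual demo source):

// A clock updated every second via a setInterval() timer.
setInterval(() => {
  document.querySelector('#clock').textContent =
      new Date().toLocaleTimeString();
}, 1000);

// Clicking the button starts a do...while loop that never exits,
// blocking the main thread until you stop it from the Sources panel.
document.querySelector('#start-infinite-loop').addEventListener('click', () => {
  let i = 0;
  do {
    i++; // i only grows, so the loop condition below is always true.
  } while (i >= 0);
});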

User Timing in the Performance panel tabs

When viewing a Performance recording, click the User Timing section to view User Timing measures in the Summary, Bottom-Up, Call Tree, and Event Log tabs.

Viewing User Timing measures in the Bottom-Up tab.
Figure 8. Viewing User Timing measures in the Bottom-Up tab. The blue bar to the left of the User Timing section indicates that it is selected.

In general, you can now select any of the sections (Main Thread, User Timing, GPU, ScriptStreamer, and so on) and view that section's activity in the tabs.
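
If your page doesn't emit User Timing entries yet, a minimal sketch using performance.mark() and performance.measure() (with a hypothetical endpoint) looks like this:

// Mark the start and end of an operation, then measure between the marks.
performance.mark('fetch-start');

fetch('/api/data') // hypothetical endpoint
  .then(response => response.json())
  .then(data => {
    performance.mark('fetch-end');
    // This measure shows up as a bar in the User Timing section.
    performance.measure('fetch-data', 'fetch-start', 'fetch-end');
  });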

Select JavaScript VM instances in the Memory panel

The Memory panel now clearly lists all JavaScript VM instances associated with a page, rather than hiding them behind the Target dropdown menu as before.

Before and after screenshots of the Memory panel.
Figure 9. In the old UI on the left, the JavaScript VM instances are hidden behind the Target dropdown menu, whereas in the new UI on the right they are shown in the Select JavaScript VM Instance table

Next to the developers.google.com instance there are 2 values: 8.7 MB and 13.3 MB. The left value represents memory allocated because of JavaScript. The right value represents all OS memory that is being allocated because of that VM instance. The right value is inclusive of the left value. In Chrome's Task Manager, the left value corresponds to JavaScript Memory and the right value corresponds to Memory Footprint.

Network tab renamed to Page tab

On the Sources panel, the Network tab is now called the Page tab.

Two DevTools windows side-by-side, demonstrating the name change.
Figure 10. In the old UI on the left, the tab showing the page's resources is called Network, whereas in the new UI on the right it's called Page

Dark theme updates

Chrome 67 ships with a number of minor changes to the dark theme color scheme. For example, the breakpoint icons and the current line of execution are now green.

A screenshot of the new breakpoint icon and current line of execution color scheme.
Figure 11. A screenshot of the new breakpoint icon and current line of execution color scheme

Certificate transparency in the Security panel

The Security panel now reports certificate transparency information.

Certificate transparency information in the Security panel.
Figure 12. Certificate transparency information in the Security panel

Site Isolation in the Performance panel

If you've got Site Isolation enabled, the Performance panel now provides a flame chart for each process so that you can see the total work that each process is causing.

Per-process flame charts in a Performance recording.
Figure 13. Per-process flame charts in a Performance recording

Feedback

  • File bug reports at Chromium Bugs.
  • Discuss features and changes on the Mailing List. Please don't use this channel for support questions. Use Stack Overflow, instead.
  • Get help on how to use DevTools on Stack Overflow. Please don't file bugs here. Use Chromium Bugs, instead.
  • Tweet us at @ChromeDevTools.
  • File bugs on this doc in the Web Fundamentals repository.

Consider Canary

If you're on Mac or Windows, please consider using Chrome Canary as your default development browser. If you report a bug or a change that you don't like while it's still in Canary, the DevTools team can address your feedback significantly faster.

Note: Canary is the bleeding-edge version of Chrome. It's released as soon as it's built, without testing. This means that Canary breaks from time to time, about once a month, and it's usually fixed within a day. You can go back to using Chrome Stable while Canary is broken.

Previous release notes

See the devtools-whatsnew tag for links to all previous DevTools release notes.
