The Chromium Chronicle: ClusterFuzz

Episode 9: December, 2019

by Adrian Taylor in Mountain View

You may find you are asked to fix high-priority security bugs discovered by ClusterFuzz. What is it? Should you take those bugs seriously? How can you help?

Fuzzing flow chart

ClusterFuzz feeds input to Chrome and watches for crashes. Some of those Chrome builds have extra checks turned on, for example AddressSanitizer, which looks for memory safety errors.

ClusterFuzz assigns components based on the crash location, and assigns severity based on the type of crash and whether it happened in a sandboxed process. For example, a heap use-after-free will be high severity, unless it’s in the browser process, in which case it’s critical (no sandbox to limit impact!):

class Foo {
  Widget* widget;
};

void Foo::Bar() {
  delete widget;
  ...
  widget->Activate();  // Bad in the renderer process, worse in the browser process.
}                      // Obviously, real bugs are more subtle. Usually.

ClusterFuzz generates input from fuzzers or from bugs submitted externally. Some fuzzers are powered by libFuzzer, which evolves inputs to increase code coverage. Others understand the grammar of the input language, expressed as protobufs. Once ClusterFuzz finds a crash, it will try to minimize the input test case and even bisect to find the offending commit. It finds a lot...

You can help:

  • Be paranoid about object lifetimes & integer overflows.
  • Add new fuzzers, especially when you process untrustworthy data or IPC (see links below, often < 20 lines of code).
  • Fix ClusterFuzz-reported bugs: its severity heuristics can be trusted because they're based on real-world exploitability: even a single-byte overflow has led to arbitrary code execution by an attacker.

Resources


WebVR 1.1 removed from Chrome

Feedback

What's New In DevTools (Chrome 82)

Emulate vision deficiencies

Open the Rendering tab and use the new Emulate vision deficiencies feature to get a better idea of how people with different types of vision deficiencies experience your site.

Emulating blurred vision.
Emulating blurred vision.

DevTools can emulate blurred vision and the following types of color vision deficiencies:

  • Protanopia. The inability to perceive red light.
  • Protanomaly. A reduced sensitivity to red light.
  • Deuteranopia. The inability to perceive green light.
  • Deuteranomaly. A reduced sensitivity to green light.
  • Tritanopia. The inability to perceive blue light.
  • Tritanomaly. A reduced sensitivity to blue light (extremely rare).
  • Achromatopsia. The inability to perceive any color except for shades of grey (extremely rare).
  • Achromatomaly. A reduced sensitivity to green, red, and blue light (extremely rare).

The -anomaly forms are milder versions of the -opia forms. Every person with one of these vision deficiencies is different and might see things differently (being able to perceive more or less of the relevant colors). The DevTools emulations are just a visual approximation of how someone might experience one of these vision deficiencies. Although the approximation should be good enough for you to identify and resolve issues, there's no way to simulate exactly what a given person would experience.
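The emulation itself is essentially a per-pixel color transform. As a rough sketch of the idea, here is the commonly used severity-1.0 protanopia matrix from Machado et al. (2009); this matrix is an assumption for illustration, and DevTools' actual implementation may differ:

```javascript
// Approximate protanopia by multiplying each pixel's RGB value by a 3x3
// matrix. The matrix is the severity-1.0 protanopia transform from
// Machado et al. (2009) -- an assumption here, not DevTools' exact code.
const PROTANOPIA = [
  [0.152286, 1.052583, -0.204868],
  [0.114503, 0.786281, 0.099216],
  [-0.003882, -0.048116, 1.051998],
];

// rgb: channel values in [0, 1]; returns the simulated color, clamped.
function simulateProtanopia([r, g, b]) {
  const clamp = (v) => Math.min(1, Math.max(0, v));
  return PROTANOPIA.map((row) => clamp(row[0] * r + row[1] * g + row[2] * b));
}
```

Running simulateProtanopia([1, 0, 0]) maps pure red to a dark, desaturated color, which is why red/green distinctions collapse under this emulation.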

Send feedback to Chromium issue #1003700.

Cross-Origin Opener Policy (COOP) and Cross-Origin Embedder Policy (COEP) debugging

The Network panel now provides Cross-Origin Opener Policy and Cross-Origin Embedder Policy debugging information.

The Status column now provides a quick explanation of why a request was blocked as well as a link to view that request's headers for further debugging:

Blocked requests in the Status column

The Response Headers section of the Headers tab provides more guidance on how to resolve the issues:

More guidance in the Response Headers section

Send feedback to Chromium issue #1051466.

Dock to left from the Command Menu

Open the Command Menu and run the Dock to left command to move DevTools to the left of your viewport.

DevTools docked to the left of the viewport

Note: DevTools has had the Dock to left feature for a long time, but it was previously only accessible from the Main Menu. What's new in Chrome 82 is that you can now access it from the Command Menu.

Send feedback to Chromium issue #1011679.

The Audits panel is now the Lighthouse panel

The DevTools and Lighthouse teams frequently heard from web developers that they knew it was possible to run Lighthouse from DevTools, but when they went to try it out they couldn't find the "Lighthouse" panel. So the Audits panel is now the Lighthouse panel.

The Lighthouse panel

Delete all Local Overrides in a folder

After setting up Local Overrides you can now right-click a folder and select the new Delete all overrides option to delete all Local Overrides in that folder.

Delete all overrides

Send feedback to Chromium issue #1016501.

Updated Long tasks UI

A Long Task is JavaScript code that monopolizes the main thread for an extended period (50 milliseconds or more), causing a web page to freeze.

You've been able to visualize Long Tasks in the Performance panel for a while now, but in Chrome 82 the Long Task visualization UI in the Performance panel has been updated. The Long Task portion of a task is now colored with a striped red background.

The new Long Task UI

Send feedback to Chromium issue #1054447.

Maskable icon support in the Manifest pane

Android Oreo introduced adaptive icons, which display app icons in a variety of shapes across different device models. Maskable icons are a new icon format that supports adaptive icons, enabling you to ensure that your PWA icon looks good on devices that support the maskable icon standard.

Enable the new Show only the minimum safe area for maskable icons checkbox in the Manifest pane to check that your maskable icon will look good on Android Oreo devices. Check out Are my current icons ready? to learn more.

The "Show only the minimum safe area for maskable icons" checkbox

Note: This feature launched in Chrome 81. We're covering it here in Chrome 82 because we forgot to cover it in What's New In DevTools (Chrome 81).


The Chromium Chronicle: Time-Travel Debugging with RR

Episode 13: March, 2020

by Christian Biesinger in Madison, WI

Do you find yourself running the same test over and over in the debugger, trying to figure out how the code got into a bad state? We have a tool for you! Easy to install and set up, it will record an execution trace, and that gives magical new powers to gdb. Step backwards, run backwards, see where variables changed their value, or find when a function was last called on an object (using conditional breakpoints).

On Linux, you can use rr. Install using sudo apt-get install rr or from https://rr-project.org/.

This is not officially supported, but very useful. The way rr works is that you first record a trace, then replay it.

rr record .../content_shell --no-sandbox  --disable-hang-monitor --single-process
# record the trace. --single-process is optional, see below. The other flags are required.
rr replay # This will replay the last trace
(gdb)       # rr uses GDB to let you replay traces

Conveniently, timing and pointer addresses stay the same every time you replay the same trace. Traces can be made portable using rr pack so that you can copy them to another machine and replay there, or replay even after recompiling. Run your program using continue. You can use all the regular GDB commands: b, next, watch, etc. However, you can also use reverse-next (rn), reverse-cont (rc), reverse-step (rs), and reverse-fin.

These still respect any breakpoints you’ve set. For example:

(gdb) c  # Execute to the end
(gdb) break blink::LayoutFlexibleBox::UpdateLayout
(gdb) rc # Run back to the last layout call
Thread 5 hit Breakpoint 1, blink::LayoutBlock::UpdateLayout (
    this=0x121672224010)
(gdb) # Inspect anything you want here. To find the previous Layout call on this object:
(gdb) cond 1 this == 0x121672224010
(gdb) rc
Thread 5 hit Breakpoint 1, blink::LayoutBlock::UpdateLayout (
    this=0x121672224010)
(gdb) watch -l style_.ptr_ # Or find the last time the style_ was changed
(gdb) rc
Thread 5 hit Hardware watchpoint 2: -location style_.ptr_

Old value = (const blink::ComputedStyle *) 0x1631ad3dbb0
New value = (const blink::ComputedStyle *) 0x0
0x00007f68cabcf78e in std::__Cr::swap<blink::ComputedStyle const*> (

In this example, I have used --single-process for simplicity, but that’s not necessary. RR can trace multiple processes; after recording, you can see a list using rr ps and pick one to replay with rr replay -f PID.

There are lots of ways RR can be useful. There are other commands you can use, such as when, which tells you which event number you are at, or rr replay -M to annotate stdout with a process ID and event number for each line. See the RR website and documentation for more details.

New in Chrome 81

Chrome 81 is starting to roll out to stable now.

Here's what you need to know:

  • I've got an update on the adjusted Chrome release schedule.
  • App Icon Badging graduates from its origin trial.
  • Hit testing for augmented reality is now available in the browser.
  • Web NFC starts its origin trial.
  • And more.

I’m Pete LePage, working and shooting from home. Let’s dive in and see what’s new for developers in Chrome 81!

Note: We've created a set of resources to help you ensure your site remains available and accessible to all during the COVID-19 situation.

Updated Chrome release schedule

We recently announced an adjusted release schedule for Chrome. We did this because it is important to ensure Chrome continues to be stable, secure, and work reliably for anyone who depends on it.

Screenshot of Chromium Calendar

In short, Chrome 81 is rolling out now. We’re going to skip Chrome 82, and move directly to Chrome 83, which will be released 3 weeks earlier than planned, in approximately mid-May.

We’ll keep everyone informed of any changes to our schedule on our release blog, and will share additional details on the schedule in the Chromium Developers group. You can also check our schedule page for specific dates for each milestone at any time.

WebXR hit testing

There are a handful of native apps that let you see what a new couch or chair might look like in your home. With an update to the Web XR Device API, it’s now possible to do that on the web too.

With the Web XR Hit Test API, you can place virtual objects into your camera’s view of the real world.

Check out the Immersive Web Working Group's Hit Testing sample (code) where you can place virtual sunflowers on surfaces in the real world, or Positioning virtual objects in real-world views for more details.

App icon badging

App icon badging is graduating from Origin Trial to stable, which means you can now use it on any site, without a token.

Badging of app icons makes it easy to subtly notify the user that there is some new activity that might require their attention, or to indicate a small amount of information, such as an unread count.

It’s more user-friendly than a notification. And because it doesn’t interrupt the user, it can be updated with much higher frequency. It’s perfect for chat or email apps to indicate the number of unread messages. Social media apps could use it to indicate the number of times you’ve been tagged in other people’s posts. Or for games, to indicate to a user that it’s their turn.

Check out my Badging API article on web.dev for full details.

New origin trials

Web NFC

Web NFC is starting its origin trial in Chrome 81. Web NFC allows a web app to read and write to NFC tags. This opens up new use cases, such as providing more details about museum exhibits, inventory management, and reading information from a conference badge.

It’s super easy to use. To read a tag, create a new instance of the NDEFReader API, and start the scan.

const reader = new NDEFReader();

async function startScan() {
  // Attach the handler before starting the scan so no reading is missed.
  reader.onreading = (e) => {
    console.log(e.message);
  };
  await reader.scan();
}

Then, when an NFC tag is scanned, the reader will fire a read event that you can use to loop through the incoming messages.
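A sketch of that loop (the record shape, recordType plus a data DataView, follows the Web NFC draft; the helper name is illustrative):

```javascript
// Decode the records of an NDEF message into readable strings.
function decodeMessage(message) {
  const decoder = new TextDecoder();
  return message.records.map((record) => {
    switch (record.recordType) {
      case 'text':
      case 'url':
        // Text and URL payloads are UTF-8 encoded.
        return decoder.decode(record.data);
      default:
        return `<${record.recordType}: ${record.data.byteLength} bytes>`;
    }
  });
}

// In the handler: reader.onreading = (e) => console.log(decodeMessage(e.message));
```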

Francois has a great post that covers all the details and it includes a number of common patterns that you might want to use.

Other origin trials

Check https://developers.chrome.com/origintrials/#/trials/active for a complete list of features in origin trial.

And more

  • The media session API now supports tracking position state so you can see where you are in a track and easily skip back or forwards.
  • The Intl API now provides Intl.DisplayNames, which gets the localized names of languages, regions, currencies, and other commonly used names, so you no longer have to ship those yourself.
  • We had planned to remove support for TLS 1.0 and TLS 1.1, but have postponed that until at least Chrome 83.
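As a quick illustration of the DisplayNames item above, Intl.DisplayNames looks up localized names by locale and type (output shown for the en locale):

```javascript
// Localized display names for languages, regions, and currencies.
const languages = new Intl.DisplayNames(['en'], { type: 'language' });
const regions = new Intl.DisplayNames(['en'], { type: 'region' });
const currencies = new Intl.DisplayNames(['en'], { type: 'currency' });

console.log(languages.of('fr'));   // "French"
console.log(regions.of('US'));     // "United States"
console.log(currencies.of('EUR')); // "Euro"
```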

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 81.

Subscribe

Want to stay up to date with our videos? Subscribe to our Chrome Developers YouTube channel and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and as soon as Chrome 83 is released, I’ll be right here to tell you -- what’s new in Chrome!

A personal note from Pete

Over the last week, two songs have brought me joy, and I wanted to share them with you, in hopes they bring you some joy.

A huge thanks to my production team, Sean Meehan, Lee Carruthers, Loren Borja, Taylor Reifurth, and the whole Google Developers Studio team. They got me the equipment, helped me get it all set up in my tiny NYC apartment, and then busted their butts to get this video out in the tight turn-around time we had. Working with them is a pleasure. Thank you, you all rock!

Feedback

Deprecations and removals in Chrome 83

Disallow downloads in Sandboxed iframes

Chrome now prevents downloads in sandboxed iframes, though this restriction can be lifted via an 'allow-downloads' keyword in the sandbox attribute list. This allows content providers to restrict malicious or abusive downloads. Downloads can bring security vulnerabilities to a system. Even though additional security checks are done in Chrome and the operating system, we feel blocking downloads in sandboxed iframes also fits the purpose of the sandbox.

Intent to Remove | Chrome Platform Status

Feedback

What's New In DevTools (Chrome 84)

Fix site issues with the new Issues tab

The new Issues tab in the Drawer was built to reduce the notification fatigue and clutter of the Console. Currently, the Console is the central place for website developers, libraries, frameworks, and Chrome itself to log messages, warnings, and errors. The Issues tab presents warnings from the browser in a structured, aggregated, and actionable way, links to the affected resources within DevTools, and provides guidance on how to fix the issues. Over time, more and more of Chrome's warnings will be surfaced in the Issues tab rather than the Console, which should help reduce the Console's clutter.

Check out Find And Fix Problems With The Chrome DevTools Issues Tab to get started.

The Issues tab.

Chromium Bug: #1068116

View accessibility information in the Inspect Mode tooltip

The Inspect Mode tooltip now indicates whether the element has an accessible name and role and is keyboard-focusable.

The Inspect Mode tooltip with accessibility information.

Chromium Bug: #1040025

Performance panel updates

After recording your load performance, the Performance panel now shows Total Blocking Time (TBT) information in the footer. TBT is a load performance metric that helps quantify how long it takes a page to become usable. It essentially measures how long a page appears to be usable (because its content has been rendered to the screen) but isn't actually usable because JavaScript is blocking the main thread and therefore the page can't respond to user input. TBT is the main lab metric for approximating First Input Delay, which is one of Google's new Core Web Vitals.
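Given the durations of the long tasks in a trace, TBT reduces to a simple sum; a sketch of the definition (50 ms is the standard long task threshold):

```javascript
// Total Blocking Time: for each task longer than 50 ms, the portion
// beyond 50 ms counts as blocking time; TBT is the sum of those portions.
const BLOCKING_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .filter((d) => d > BLOCKING_THRESHOLD_MS)
    .reduce((sum, d) => sum + (d - BLOCKING_THRESHOLD_MS), 0);
}

console.log(totalBlockingTime([30, 70, 250])); // 20 + 200 = 220
```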

To get Total Blocking Time information, do not use the Reload Page workflow for recording page load performance. Instead, click Record, manually reload the page, wait for the page to load, and then stop recording. If you see Total Blocking Time: Unavailable, it means that DevTools did not get the information it needs from Chrome's internal profiling data.

Total Blocking Time information in the footer of a Performance panel recording.

Chromium Bug: #1054381

Layout Shift events in the new Experience section

The new Experience section of the Performance panel can help you detect layout shifts. Cumulative Layout Shift (CLS) is a metric that can help you quantify unwanted visual instability and is one of Google's new Core Web Vitals.

Click a Layout Shift event to see the details of the layout shift in the Summary tab. Hover over the Moved from and Moved to fields to visualize where the layout shift occurred.

The details of a layout shift.
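These events correspond to layout-shift performance entries, which you can also sum into a CLS value yourself; a sketch (field names follow the Layout Instability spec; shifts right after user input don't count toward CLS):

```javascript
// Sum layout-shift scores into a CLS value, skipping shifts that
// happened shortly after user input.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}

// Browser usage (sketch): feed it entries from a PerformanceObserver.
// new PerformanceObserver((list) => {
//   console.log(cumulativeLayoutShift(list.getEntries()));
// }).observe({ type: 'layout-shift', buffered: true });
```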

More accurate promise terminology in the Console

When logging a Promise, the Console used to incorrectly describe the state of the Promise as resolved:

An example of the Console using the old "resolved" terminology.

The Console now uses the term fulfilled, which aligns with the Promise spec:

An example of the Console using the new "fulfilled" terminology.

V8 Bug: #6751

Styles pane updates

Support for the revert keyword

The Styles pane's autocomplete UI now detects the revert CSS keyword, which reverts the cascaded value of a property to what the value would have been if no changes had been made to the element's styling.

Setting the value of a property to revert.

Chromium Bug: #1075437

Image previews

Hover over a background-image value in the Styles pane to see a preview of the image in a tooltip.

Hovering over a background-image value.

Chromium Bug: #1040019

Color Picker now uses space-separated functional color notation

CSS Color Module Level 4 specifies that color functions like rgb() should support space-separated arguments. For example, rgb(0, 0, 0) is equivalent to rgb(0 0 0).

When you choose colors with the Color Picker or alternate between color representations in the Styles pane by holding Shift and then clicking the color value, you'll now see the space-separated argument syntax.

Using space-separated arguments in the Styles pane.

You'll also see the syntax in the Computed pane and the Inspect Mode tooltip.

DevTools is using the new syntax because upcoming CSS features like color() do not support the deprecated comma-separated argument syntax.

The space-separated argument syntax has been supported in most browsers for a while. See Can I use Space-separated functional color notations?

Chromium Bug: #1072952

Deprecation of the Properties pane in the Elements panel

The Properties pane in the Elements panel has been deprecated. Run console.dir($0) in the Console instead.

The deprecated Properties pane.

References:


Handling Heavy Ad Interventions

Ads that consume a disproportionate amount of resources on a device negatively impact the user’s experience—from the obvious effects of degrading performance to less visible consequences such as draining the battery or eating up bandwidth allowances. These ads range from the actively malicious, such as cryptocurrency miners, through to genuine content with inadvertent bugs or performance issues.

Chrome is experimenting with setting limits on the resources an ad may use and unloading that ad if the limits are exceeded. You can read the announcement on the Chromium blog for more details. The mechanism used for unloading ads is the Heavy Ad Intervention.

Heavy Ad criteria

An ad is considered heavy if the user has not interacted with it (for example, has not tapped or clicked it) and it meets any of the following criteria:

  • Uses the main thread for more than 60 seconds in total
  • Uses the main thread for more than 15 seconds in any 30 second window
  • Uses more than 4 megabytes of network bandwidth

All resources used by any descendant iframes of the ad frame count against the limits for intervening on that ad. It’s important to note that the main thread time limits are not the same as elapsed time since loading the ad. The limits are on how long the CPU takes to execute the ad's code.
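Written out as a predicate, the criteria look roughly like this (the names and the exact byte count behind "4 megabytes" are assumptions for illustration, not Chrome's internals):

```javascript
// Heavy Ad thresholds from the criteria above.
const CPU_TOTAL_LIMIT_MS = 60_000;           // 60 s of main-thread time in total
const CPU_WINDOW_LIMIT_MS = 15_000;          // 15 s within any 30 s window
const NETWORK_LIMIT_BYTES = 4 * 1024 * 1024; // "4 megabytes" (assumed byte count)

function isHeavyAd({ userInteracted, cpuTotalMs, cpuPeakWindowMs, networkBytes }) {
  if (userInteracted) return false; // any interaction exempts the ad
  return (
    cpuTotalMs > CPU_TOTAL_LIMIT_MS ||
    cpuPeakWindowMs > CPU_WINDOW_LIMIT_MS ||
    networkBytes > NETWORK_LIMIT_BYTES
  );
}
```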

Testing the intervention

You can test the new intervention in Chrome 84 and upwards.

  • Enable chrome://flags/#enable-heavy-ad-intervention
  • Disable chrome://flags/#heavy-ad-privacy-mitigations

Setting chrome://flags/#enable-heavy-ad-intervention to Enabled activates the new behavior, but by default there is some noise and variability added to the thresholds to protect user privacy. Setting chrome://flags/#heavy-ad-privacy-mitigations to Disabled prevents this, meaning the restrictions are applied deterministically, purely according to the limits. This should make debugging and testing easier.

Note: Earlier versions of Chrome include the #heavy-ad-privacy-mitigations-opt-out flag which should be set to Enabled for testing.

When the intervention is triggered you should see the content in the iframe for a heavy ad replaced with an Ad removed message. If you follow the included Details link, you will see a message explaining: "This ad uses too many resources for your device, so Chrome removed it."

You can see the intervention applied to sample content on heavy-ads.glitch.me. You can also use this test site to load an arbitrary URL as a quick way of testing your own content.

Be aware when testing that there are a number of reasons that may prevent an intervention being applied.

  • Reloading the same ad within the same page will exempt that combination from the intervention. Clearing your browsing history and opening the page in a new tab can help here.
  • Ensure the page remains in focus: backgrounding the page (switching to another window) pauses the page's task queues, and so will not trigger the CPU intervention.
  • Ensure you do not tap or click ad content while testing: the intervention will not be applied to content that receives any user interaction.

What do you need to do?

You show ads from a third-party provider on your site

No action needed; just be aware that users may see ads removed on your site when those ads exceed the limits.

You show first-party ads on your site or you provide ads for third-party display

Continue reading to ensure you implement the necessary monitoring via the Reporting API for Heavy Ad interventions.

You create ad content or you maintain a tool for creating ad content

Continue reading to ensure that you are aware of how to test your content for performance and resource usage issues. You should also refer to the guidance for the ad platforms of your choice, as they may provide additional technical advice or restrictions; for example, see the Google Guidelines for display creatives. Consider building configurable thresholds directly into your authoring tools to prevent poorly performing ads from escaping into the wild.

What happens when an ad is removed?

An intervention in Chrome is reported via the aptly named Reporting API with an intervention report type. You can use the Reporting API to be notified about interventions either by a POST request to a reporting endpoint or within your JavaScript.

These reports are triggered on the root ad-tagged iframe along with all of its descendants, i.e. every frame unloaded by the intervention. This means that if an ad comes from a third-party source, i.e. a cross-site iframe, then it’s up to that third-party (for example, the ad provider) to handle the report.

To configure the page for HTTP reports, the response should include the Report-To header:

Report-To: { "url": "https://example.com/reports", "max_age": 86400 }

The POST request triggered will include a report like this:

POST /reports HTTP/1.1
Host: example.com
…
Content-Type: application/report

[{
 "type": "intervention",
 "age": 60,
 "url": "https://example.com/url/of/ad.html",
 "body": {
   "sourceFile": null,
   "lineNumber": null,
   "columnNumber": null,
   "id": "HeavyAdIntervention",
   "message": "Ad was removed because its CPU usage exceeded the limit. See https://www.chromestatus.com/feature/4800491902992384"
 }
}]

Note: The null values are expected. The intervention will trigger when the limits are reached, but that particular point in the code is not necessarily the problem.

The JavaScript API provides the ReportingObserver with an observe() method that can be used to trigger a provided callback on interventions. This can be useful if you want to attach additional information to the report to aid in debugging.

// callback that will handle intervention reports
function sendReports(reports) {
  for (let report of reports) {
    // Log the `report` json via your own reporting process
    navigator.sendBeacon('https://report.example/your-endpoint', report);
  }
}

// create the observer with the callback
const observer = new ReportingObserver(
  (reports, observer) => {
    sendReports(reports);
  },
  { buffered: true }
);

// start watching for interventions
observer.observe();

However, because the intervention removes the document from the iframe, the report may be lost before it is delivered, so you should add a failsafe to ensure the report is captured before the frame is gone completely. For this, you can hook the same callback into the unload event.

window.addEventListener('unload', (event) => {
  // pull all pending reports from the queue
  let reports = observer.takeRecords();
  sendReports(reports);
});

Caution: The unload and beforeunload events both restrict the amount of work that can happen within them to protect the user experience. For example, trying to send a fetch() request with the reports will result in that request being canceled. You should use navigator.sendBeacon() to send the report, and even then this is only best-effort by the browser, not a guarantee.

The resulting JSON from the JavaScript is similar to that sent on the POST request:

[
  {
    type: 'intervention',
    url: 'https://example.com/url/of/ad.html',
    body: {
      sourceFile: null,
      lineNumber: null,
      columnNumber: null,
      id: 'HeavyAdIntervention',
      message:
        'Ad was removed because its network usage exceeded the limit. See https://www.chromestatus.com/feature/4800491902992384',
    },
  },
];

Diagnosing the cause of an intervention

Ad content is just web content, so make use of tools like Lighthouse to audit the overall performance of your content. The resulting audits provide inline guidance on improvements. You can also refer to the web.dev/fast collection.

You may find it helpful to test your ad in a more isolated context. You can use the custom URL option on https://heavy-ads.glitch.me to test this with a ready-made, ad-tagged iframe. You can use Chrome DevTools to validate that content has been tagged as an ad: in the Rendering panel (accessible via the three-dot menu, then More Tools > Rendering), select "Highlight Ad Frames". If testing content in the top-level window, or in another context where it is not tagged as an ad, the intervention will not be triggered, but you can still manually check against the thresholds.

Network usage

Bring up the Network panel in Chrome DevTools to see the overall network activity for the ad. You will want to ensure the "Disable cache" option is checked to get consistent results over repeated loads.

Network panel in DevTools.
Network panel in DevTools.

The transferred value at the bottom of the page will show you the amount transferred for the entire page. Consider using the Filter input at the top to restrict the requests just to the ones related to the ad.

If you find the initial request for the ad, for example, the source for the iframe, you can also use the Initiator tab within the request to see all of the requests it triggers.

Initiator tab for a request.
Initiator tab for a request.

Sorting the overall list of requests by size is a good way to spot overly large resources. Common culprits include images and videos that have not been optimized.

Sort requests by response size.
Sort requests by response size.

Additionally, sorting by name can be a good way to spot repeated requests. It may not be a single large resource triggering the intervention, but a large number of repeated requests that incrementally go over the limit.

CPU usage

The Performance panel in DevTools will help diagnose CPU usage issues. The first step is to open up the Capture Settings menu. Use the CPU dropdown to slow down the CPU as much as possible. The interventions for CPU are far more likely to trigger on lower-powered devices than high-end development machines.

Enable network and CPU throttling in the Performance panel.
Enable network and CPU throttling in the Performance panel.

Next, click the Record button to begin recording activity. You may want to experiment with when and how long you record for, as a long trace can take quite a while to load. Once the recording is loaded you can use the top timeline to select a portion of the recording. Focus on areas on the graph in solid yellow, purple, or green that represent scripting, rendering, and painting.

Summary of a trace in the Performance panel.
Summary of a trace in the Performance panel.

Explore the Bottom-Up, Call Tree, and Event Log tabs at the bottom. Sorting those columns by Self Time and Total Time can help identify bottlenecks in the code.

Sort by Self Time in the Bottom-Up tab.
Sort by Self Time in the Bottom-Up tab.

The associated source file is also linked there, so you can follow it through to the Sources panel to examine the cost of each line.

Execution time shown in the Sources panel.
Execution time shown in the Sources panel.

Note: DevTools may not always display the timing information if the frame has already been unloaded, so you may want to capture the traces with the ad isolated or with the intervention disabled.

Common issues to look for here are poorly optimized animations that are triggering continuous layout and paint or costly operations that are hidden within an included library.

How to report incorrect interventions

Chrome tags content as an ad by matching resource requests against a filter list. If non-ad content has been tagged, consider changing that code to avoid matching the filtering rules. If you suspect an intervention has been incorrectly applied, then you can raise an issue via this template. Please ensure you have captured an example of the intervention report and have a sample URL to reproduce the issue.


New in Chrome 83

Chrome 83 is starting to roll out to stable now.

Here's what you need to know:

I’m Pete LePage, working and shooting from home. Let’s dive in and see what’s new for developers in Chrome 83!

Note: App shortcuts were supposed to be landing in Chrome 83, but were delayed until Chrome 84, scheduled for July 14th.

Trusted types

DOM-based cross-site scripting is one of the most common security vulnerabilities on the web. It can be easy to accidentally introduce one to your page. Trusted types can help prevent these kinds of vulnerabilities, because they require you to process the data before passing it into a potentially dangerous function.

Take innerHTML for example, with trusted types turned on, if I try to pass a string, it'll fail with a TypeError because the browser doesn’t know if it can trust the string.

// Trusted types turned on
const elem = document.getElementById('myDiv');
elem.innerHTML = `Hello, world!`;
// Will throw a TypeError

Instead, I need to either use a safe function, like textContent, pass in a trusted type, or create the element and use appendChild().

// Use a safe function
elem.textContent = ''; // OK

// Pass in a trusted type
import DOMPurify from 'dompurify';
const str = `Hello, world!`;
elem.innerHTML = DOMPurify.sanitize(str, {RETURN_TRUSTED_TYPE: true});

// Create an element
const img = document.createElement('img');
img.src = 'xyz.jpg';
elem.appendChild(img);

Before you turn on trusted types, you’ll want to identify and fix any violations using a report-only CSP header.

Content-Security-Policy-Report-Only: require-trusted-types-for 'script'; report-uri //example.com

Then once you’ve got everything buttoned up, you can turn it on properly. Complete details are in Prevent DOM-based cross-site scripting vulnerabilities with Trusted Types on web.dev.
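In practice, you usually funnel all markup through a policy created with trustedTypes.createPolicy(). The sketch below shows the idea; the policy name 'myPolicy' and the toy escapeHTML sanitizer are illustrative (use a real sanitizer such as DOMPurify in production), and the fallback object keeps the same shape in browsers without the API:

```javascript
// Sketch: wrap sanitization in a Trusted Types policy, with a plain-object
// fallback for browsers that don't support the API. The policy name
// 'myPolicy' and the escapeHTML helper are illustrative.
function createSanitizingPolicy(sanitize) {
  if (typeof trustedTypes !== 'undefined' && trustedTypes.createPolicy) {
    return trustedTypes.createPolicy('myPolicy', { createHTML: sanitize });
  }
  return { createHTML: sanitize }; // same shape, no browser enforcement
}

// A deliberately naive sanitizer for illustration only -- use a real
// library such as DOMPurify in production.
const escapeHTML = (str) =>
  str.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

const policy = createSanitizingPolicy(escapeHTML);
// elem.innerHTML = policy.createHTML(untrustedString);
```

With require-trusted-types-for 'script' enforced, values produced by the policy are accepted by innerHTML, while raw strings keep throwing a TypeError.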

Updates to form controls

We use HTML form controls every day, and they are key to so much of the web's interactivity. They're easy to use, have built-in accessibility, and are familiar to our users. However, the styling of form controls can be inconsistent across browsers and operating systems, and we frequently have to ship a number of CSS rules just to get a consistent look across devices.

Before, default styling of form controls.
After, updated styling of form controls.

I’ve been really impressed by the work Microsoft has been doing to modernize the appearance of form controls. Beyond the nicer visual style, they bring better touch support, and better accessibility, including improved keyboard support!

The new form controls have already landed in Microsoft Edge, and are now available in Chrome 83. For more information, see Updates to Form Controls and Focus on the Chromium blog.

Origin trials

Measure memory with measureMemory()

Starting an origin trial in Chrome 83, performance.measureMemory() is a new API that makes it possible to measure the memory usage of your page, and detect memory leaks.

Memory leaks are easy to introduce:

  • Forgetting to unregister an event listener
  • Capturing objects from an iframe
  • Not closing a worker
  • Accumulating objects in arrays
  • and so on.

Memory leaks lead to pages that appear slow and bloated to users.

if (performance.measureMemory) {
  try {
    const result = await performance.measureMemory();
    console.log(result);
  } catch (err) {
    console.error(err);
  }
}

Check out Monitor your web page's total memory usage with measureMemory() on web.dev for all the details of the new API.

Updates to the Native File System API

The Native File System API started a new origin trial in Chrome 83 with support for writable streams, and the ability to save file handles.

async function writeURLToFile(fileHandle, url) {
  // Create a FileSystemWritableFileStream to write to.
  const writable = await fileHandle.createWritable();
  // Make an HTTP request for the contents.
  const response = await fetch(url);
  // Stream the response into the file.
  await response.body.pipeTo(writable);
  // pipeTo() closes the destination pipe automatically.
}

Writable streams make it much easier to write to a file, and because it’s a stream, you can easily pipe responses from one stream to another.
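For small pieces of text you don't need a stream at all. Here is a sketch of writing a string directly; writeTextToFile is an illustrative helper name, built on the createWritable(), write(), and close() methods described in the article:

```javascript
// Sketch: writing a plain string instead of piping a stream. Unlike
// pipeTo(), which closes the destination automatically, write() needs
// an explicit close() for the data to be saved to disk.
async function writeTextToFile(fileHandle, contents) {
  const writable = await fileHandle.createWritable();
  await writable.write(contents);
  await writable.close();
}
```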

Saving file handles to IndexedDB allows you to store state, or remember which files a user was working on. For example, you can keep a list of recently edited files, open the last file that the user was working on, and so on.

You’ll need a new origin trial token to use these features, so check out my updated article The Native File System API: Simplifying access to local files on web.dev with all the details, and how to get your new origin trial token.

Other origin trials

Check https://developers.chrome.com/origintrials/#/trials/active for a complete list of features in origin trial.

New cross-origin policies

Some web APIs increase the risk of side-channel attacks like Spectre. To mitigate that risk, browsers offer an opt-in-based isolated environment called cross-origin isolated. The cross-origin isolated state also prevents modifications of document.domain. Being able to alter document.domain allows communication between same-site documents and has been considered a loophole in the same-origin policy.

Check out Eiji's post Making your website "cross-origin isolated" using COOP and COEP for complete details.

Web vitals

Measuring the quality of user experience has many facets. While some aspects of user experience are site and context specific, there is a common set of signals — "Core Web Vitals" — that is critical to all web experiences. Such core user experience needs include loading experience, interactivity, and visual stability of page content; combined, they form the foundation of the 2020 Core Web Vitals.

  • Largest Contentful Paint measures perceived load speed and marks the point in the page load timeline when the page's main content has likely loaded.
  • First Input Delay measures responsiveness and quantifies the experience users feel when trying to first interact with the page.
  • Cumulative Layout Shift measures visual stability and quantifies the amount of unexpected layout shift of visible page content.

All of these metrics capture important user-centric outcomes, are field measurable, and have supporting lab diagnostic metric equivalents and tooling. For example, while Largest Contentful Paint is the topline loading metric, it is also highly dependent on First Contentful Paint (FCP) and Time to First Byte (TTFB), which remain critical to monitor and improve.

To learn more, check out Introducing Web Vitals: essential metrics for a healthy site on the Chromium Blog for complete details.

And more

Note: Curious about what's coming in the future? Check out the Fugu API Tracker to see!

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 83.

Subscribe

Want to stay up to date with our videos, then subscribe to our Chrome Developers YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and I need a hair cut, but as soon as Chrome 84 is released, I’ll be right here to tell you -- what’s new in Chrome!

Feedback

Deprecations and removals in Chrome 84


Note: Chrome expects to start the spec-mandated turn down of AppCache in Chrome 85. For details and instructions for managing the transition gracefully, see Preparing for AppCache removal. For information on a feature that will help you identify uses of this and other deprecated APIs, see Know your code health.


@import rules in CSSStyleSheet.replace() removed

The original spec for constructable stylesheets allowed for calls to:

sheet.replace("@import('some.css');")

This use case is being removed. Calls to replace() now throw an exception if @import rules are found in the replaced content.

Intent to Remove | Chrome Platform Status | Chromium Bug

Feedback

What's New In DevTools (Chrome 85)


Style editing for CSS-in-JS frameworks

The Styles pane now has better support for editing styles that were created with the CSS Object Model (CSSOM) APIs. Many CSS-in-JS frameworks and libraries use the CSSOM APIs under the hood to construct styles.

You can also edit styles added in JavaScript using Constructable Stylesheets now. Constructable Stylesheets are a new way to create and distribute reusable styles when using Shadow DOM.

For example, h1 styles added with CSSStyleSheet (CSSOM APIs) were previously not editable. They are editable now in the Styles pane:

Chromium issue #946975

Lighthouse 6 in the Lighthouse panel

The Lighthouse panel is now running Lighthouse 6. Check out What's New in Lighthouse 6.0 for a summary of all the major changes, or the v6.0.0 release notes for a full list of all changes.

Lighthouse 6.0 introduces three new metrics to the report: Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and Total Blocking Time (TBT). LCP and CLS are 2 of Google's new Core Web Vitals, and TBT is a lab measurement proxy for another Core Web Vital, First Input Delay.

The performance score formula has also been reweighted to better reflect the users’ loading experience.

New performance metrics in Lighthouse 6.0

Chromium issue #772558

First Meaningful Paint (FMP) deprecation

First Meaningful Paint (FMP) is deprecated in Lighthouse 6.0. It has also been removed from the Performance panel. Largest Contentful Paint is the recommended replacement for FMP. See First Meaningful Paint for an explanation of why it was deprecated.

Chromium issue #1096008

Support for new JavaScript features

DevTools now has better support for some of the latest JavaScript language features:

  • Optional chaining syntax autocompletion - property auto-completion in the Console now supports optional chaining syntax, e.g. name?. now works in addition to name. and name[.
  • Syntax highlighting for private fields - private class fields are now properly syntax-highlighted and pretty-printed in the Sources panel.
  • Syntax highlighting for Nullish coalescing operator - DevTools now properly pretty-prints the nullish coalescing operator in the Sources panel.

Chromium issues #1083214, #1073903, #1083797

New app shortcut warnings in the Manifest pane

App shortcuts help users quickly start common or recommended tasks within a web app.

The Manifest pane now shows warnings if:

  • the app shortcut icons are smaller than 96x96 pixels
  • the app shortcut icons and manifest icons are not square (as they will be ignored)

App shortcut warnings

Chromium issue #955497

Service worker respondWith events in the Timing tab

The Timing tab of the Network panel now includes service worker respondWith events. respondWith measures the time from immediately before the service worker's fetch event handler runs to the time when the fetch handler's respondWith promise is settled.

service worker `respondWith`

Chromium issue #1066579

Consistent display of the Computed pane

The Computed pane in the Elements panel now displays consistently as a pane across all viewport sizes. Previously the Computed pane would merge inside the Styles pane when the width of the DevTools' viewport was narrow.

Chromium issue #1073899

Bytecode offsets for WebAssembly files

DevTools now uses bytecode offsets for displaying line numbers of Wasm disassembly. This makes it clearer that you're looking at binary data, and is more consistent with how the Wasm runtime references locations.

Bytecode offsets

Chromium issue #1071432

Line-wise copy and cut in Sources Panel

When performing copy or cut with no selection in the Sources panel editor, DevTools will copy or cut the current line content. For example, in the video below, the cursor is at the end of line 1. After pressing the cut keyboard shortcut, the entire line is copied to the clipboard and deleted.

Chromium issue #800028

Console Settings updates

Ungroup same console messages

The Group similar toggle in Console Settings now applies to duplicate messages. Previously it just applied to similar messages.

For example, previously, DevTools did not ungroup the messages hello even though Group similar is unchecked. Now, the hello messages are ungrouped:

Chromium issue #1082963

Persisting Selected context only settings

The Selected context only settings in Console Settings is now persisted. Previously the settings were reset every time you closed and reopened DevTools. This change makes the setting behavior consistent with other Console Settings options.

Selected context only

Chromium issue #1055875

Performance panel updates

JavaScript compilation cache information in Performance panel

JavaScript compilation cache information is now always displayed in the Summary tab of the Performance panel. Previously, DevTools wouldn’t show anything related to code caching if code caching didn’t happen.

JavaScript compilation cache information

Chromium issue #912581

The Performance panel used to show times in the rulers based on when the recording started. This has now changed for recordings where the user navigates: DevTools now shows ruler times relative to the navigation instead.

Align navigation timing in Performance panel

We've also updated times for DOMContentLoaded, First Paint, First Contentful Paint, and Largest Contentful Paint events to be relative to the start of the navigation, which means they match the timings reported by PerformanceObserver.

Chromium issue #974550

New icons for breakpoints, conditional breakpoints, and logpoints

The Sources panel has new designs for breakpoints, conditional breakpoints, and logpoints. Breakpoints get a refreshed flag design with brighter and friendlier colors. Icons are added to differentiate conditional breakpoints and logpoints.

Breakpoints

Chromium issue #1041830


Using Custom Tabs with Android 11


Android 11 introduced changes on how apps can interact with other apps that the user has installed on the device. You can read more about those changes on Android documentation.

When an Android app using Custom Tabs targets SDK level 30 or above some changes may be necessary. This article goes over the changes that may be needed for those apps.

In the simplest case, Custom Tabs can be launched with a one-liner like so:

new CustomTabsIntent.Builder().build()
        .launchUrl(this, Uri.parse("https://www.example.com"));

Applications launching Custom Tabs using this approach, or even adding UI customizations like changing the toolbar color or adding an action button, won’t need to make any changes.

Preferring Native Apps

But if you followed the best practices, some changes may be required.

The first relevant best practice is that applications should prefer a native app to handle the intent instead of a Custom Tab if an app that is capable of handling it is installed.

On Android 11 and above

Android 11 introduces a new Intent flag, FLAG_ACTIVITY_REQUIRE_NON_BROWSER, which is the recommended way to try opening a native app, as it doesn’t require the app to declare any package manager queries.

static boolean launchNativeApi30(Context context, Uri uri) {
    Intent nativeAppIntent = new Intent(Intent.ACTION_VIEW, uri)
            .addCategory(Intent.CATEGORY_BROWSABLE)
            .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK |
                    Intent.FLAG_ACTIVITY_REQUIRE_NON_BROWSER);
    try {
        context.startActivity(nativeAppIntent);
        return true;
    } catch (ActivityNotFoundException ex) {
        return false;
    }
}

The solution is to try to launch the Intent and use FLAG_ACTIVITY_REQUIRE_NON_BROWSER to ask Android to avoid browsers when launching.

If a native app that is capable of handling this Intent is not found, an ActivityNotFoundException will be thrown.

Before Android 11

Even though the application may target Android 11, or API level 30, previous Android versions will not understand the FLAG_ACTIVITY_REQUIRE_NON_BROWSER flag, so we need to resort to querying the Package Manager in those cases:

private static boolean launchNativeBeforeApi30(Context context, Uri uri) {
    PackageManager pm = context.getPackageManager();

    // Get all Apps that resolve a generic url
    Intent browserActivityIntent = new Intent()
            .setAction(Intent.ACTION_VIEW)
            .addCategory(Intent.CATEGORY_BROWSABLE)
            .setData(Uri.fromParts("http", "", null));
    Set<String> genericResolvedList = extractPackageNames(
            pm.queryIntentActivities(browserActivityIntent, 0));

    // Get all apps that resolve the specific Url
    Intent specializedActivityIntent = new Intent(Intent.ACTION_VIEW, uri)
            .addCategory(Intent.CATEGORY_BROWSABLE);
    Set<String> resolvedSpecializedList = extractPackageNames(
            pm.queryIntentActivities(specializedActivityIntent, 0));

    // Keep only the Urls that resolve the specific, but not the generic
    // urls.
    resolvedSpecializedList.removeAll(genericResolvedList);

    // If the list is empty, no native app handlers were found.
    if (resolvedSpecializedList.isEmpty()) {
        return false;
    }

    // We found native handlers. Launch the Intent.
    specializedActivityIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    context.startActivity(specializedActivityIntent);
    return true;
}

The approach used here is to query the Package Manager for applications that support a generic http intent. Those applications are likely browsers.

Then, query for applications that handle intents for the specific URL we want to launch. This will return both browsers and native applications set up to handle that URL.

Now, remove all browsers found on the first list from the second list, and we’ll be left only with native apps.

If the list is empty, we know there are no native handlers and return false. Otherwise, we launch the intent for the native handler.

Putting it all together

We need to ensure using the right method for each occasion:

static void launchUri(Context context, Uri uri) {
    boolean launched = Build.VERSION.SDK_INT >= 30 ?
            launchNativeApi30(context, uri) :
            launchNativeBeforeApi30(context, uri);

    if (!launched) {
        new CustomTabsIntent.Builder()
                .build()
                .launchUrl(context, uri);
    }
}

Build.VERSION.SDK_INT provides the information we need. If it’s equal to or greater than 30, Android knows the FLAG_ACTIVITY_REQUIRE_NON_BROWSER flag and we can try launching a native app with the new approach. Otherwise, we try launching with the old approach.

If launching a native app fails, we then launch a Custom Tab.

There’s some boilerplate involved in this best practice. We’re working on making this simpler by encapsulating the complexity in a library. Stay tuned for updates to the android-browser-helper support library.

Detecting browsers that support Custom Tabs

Another common pattern is to use the PackageManager to detect which browsers support Custom Tabs on the device. Common use-cases for this are setting the package on the Intent to avoid the app disambiguation dialog or choosing which browser to connect to when connecting to the Custom Tabs service.

When targeting API level 30, developers will need to add a queries section to their Android Manifest, declaring an intent-filter that matches browsers with Custom Tabs support.

<queries>
    <intent>
        <action android:name=
            "android.support.customtabs.action.CustomTabsService" />
    </intent>
</queries>

With the markup in place, the existing code used to query for browsers that support Custom Tabs will work as expected.

Frequently Asked Questions

Q: The code that looks for Custom Tabs providers queries for applications that can handle https:// intents, but the query filter only declares an android.support.customtabs.action.CustomTabsService query. Shouldn’t a query for https:// intents be declared?

A: When declaring a query filter, it will filter the responses to a query to the PackageManager, not the query itself. Since browsers that support Custom Tabs declare handling the CustomTabsService, they won’t be filtered out. Browsers that don’t support Custom Tabs will be filtered out.

Conclusion

Those are all the changes required to adapt an existing Custom Tabs integration to work with Android 11. To learn more about integrating Custom Tabs into an Android app, start with the implementation guide then check out the best practices to learn about building a first-class integration.

Let us know if you have any questions or feedback!

New in Chrome 84


Chrome 84 is starting to roll out to stable now.

Here's what you need to know:

I’m Pete LePage, working and shooting from home, let’s dive in and see what’s new for developers in Chrome 84!

App icon shortcuts

App icon shortcuts for Twitter's PWA

App icon shortcuts make it easy for users to quickly start common tasks within your app. For example, compose a new tweet, send a message, or see their notifications. They’re supported in Chrome and Edge, on both desktop and mobile.

These shortcuts are invoked by right-clicking the app icon on Windows and macOS, or long pressing the app icon on Android. Adding a shortcut to your PWA is easy, create a new shortcuts property in your web app manifest, describe the shortcut, and add your icons.

"shortcuts": [
  {
    "name": "Open Play Later",
    "short_name": "Play Later",
    "description": "View the list you saved for later",
    "url": "/play-later",
    "icons": [
      { "src": "//play-later.png", "sizes": "192x192" }
    ]
  },
]

Check out Getting things done quickly with app shortcuts for complete details.

Web animations API

Chrome 84 adds a slew of previously unsupported features to the Web Animations API.

  • animation.ready and animation.finished have been promisified.
  • The browser can now clean up and remove old animations, saving memory and improving performance.
  • And you can now combine animations using composite modes - with the add and accumulate options.

I simply can't do justice to all the improvements or offer good examples here, so check out Web Animations API improvements in Chromium 84 for complete details.
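As one concrete taste of the promisified properties mentioned above, here is a sketch of awaiting an animation's completion. animationDone is an illustrative helper name; the event-listener branch is a fallback for engines where finished is not yet a promise:

```javascript
// Sketch: await the end of an animation via the promisified `finished`
// property, falling back to the 'finish' event where the promise is
// not available. `animationDone` is an illustrative helper name.
function animationDone(animation) {
  if ('finished' in animation) {
    return animation.finished; // a Promise in Chrome 84+
  }
  return new Promise((resolve) =>
    animation.addEventListener('finish', resolve));
}

// Usage (in a browser):
// await animationDone(elem.animate({ opacity: 0 }, 1000));
```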

Content indexing API

What if your content is available without a network connection, but the user doesn’t know about it? Is it really available? There’s a discovery problem!

With the Content Indexing API, which just graduated from its origin trial, you can add URLs and metadata for content that’s available offline. Using that metadata, the content is then surfaced to the user, improving discoverability.

To add content to the index, call index.add() on the service worker registration, and provide the required metadata about the content.

const registration = await navigator.serviceWorker.ready;
await registration.index.add({
  id: 'article-123',
  url: '/articles/123',
  launchUrl: '/articles/123',
  title: 'Article title',
  description: 'Amazing article about things!',
  icons: [{
    src: '/img/article-123.png',
    sizes: '64x64',
    type: 'image/png',
  }],
});

Want to see what’s already in your index? Call index.getAll() on the service worker registration.

const registration = await navigator.serviceWorker.ready;
const entries = await registration.index.getAll();
for (const entry of entries) {
  // entry.id, entry.launchUrl, etc. are all exposed.
}

See Indexing your offline-capable pages with the Content Indexing API for complete details.
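The API also provides index.delete() for removing entries, for example when the underlying content is evicted from the cache. A small sketch; removeFromIndex is an illustrative helper, and the guard makes it a no-op in browsers without the API:

```javascript
// Sketch: remove an entry from the content index once the content is no
// longer available offline. `removeFromIndex` is an illustrative helper;
// the feature check makes it a no-op where the API is absent.
async function removeFromIndex(registration, id) {
  if (registration.index) {
    await registration.index.delete(id);
  }
}

// Usage (in a service-worker-controlled page):
// const registration = await navigator.serviceWorker.ready;
// await removeFromIndex(registration, 'article-123');
```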

Wake lock API

Wake lock implementation on the Betty Crocker website.

I like to cook, but I find it super frustrating when following a recipe, and the screen saver kicks in! With the wake lock API, which also graduates from its origin trial in Chrome 84, sites can request a wake lock to prevent the screen from dimming and locking.

In fact, the Betty Crocker website is using this today, and published a case study on web.dev showing a 300% increase in purchase intent indicators.

To get a wake lock, call navigator.wakeLock.request(), it returns a WakeLockSentinel object, used to “release” the wake lock.

// Request the wake lock
const wl = await navigator.wakeLock.request('screen');

// Release the wake lock
wl.release();

Of course, there’s a little more to it than that, so check out Stay awake with the Screen Wake Lock API, but at least my screen won’t be covered in flour any more!
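One wrinkle worth planning for: the request can be rejected (for example by battery-saving settings), so wrap the call. In this sketch, requestWakeLock is an illustrative helper that takes the wake-lock API object as a parameter (navigator.wakeLock in a real page) so failure handling is explicit:

```javascript
// Sketch: request a screen wake lock, treating rejection (battery saver,
// permission denial) as a soft failure. `requestWakeLock` is an
// illustrative helper; pass navigator.wakeLock as `wakeLockApi`.
let wakeLock = null;

async function requestWakeLock(wakeLockApi) {
  try {
    wakeLock = await wakeLockApi.request('screen');
    return true;
  } catch (err) {
    wakeLock = null;
    return false;
  }
}

// In a browser you would also re-request on visibilitychange, since the
// lock is released automatically when the tab is hidden:
// document.addEventListener('visibilitychange', () => {
//   if (wakeLock === null && document.visibilityState === 'visible') {
//     requestWakeLock(navigator.wakeLock);
//   }
// });
```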

Origin trials

There are two new origin trials I want to call out. If you're new to origin trials, check out Getting started with Chrome's origin trials.

Idle detection

The Idle Detection API notifies you when a user is idle, indicating they are potentially away from their computer. This is great for things like chat applications, or social networking sites, to let users know if their contacts are available or not.

// Create the idle detector
const idleDetector = new IdleDetector();

// Set up an event listener that fires when idle state changes.
idleDetector.addEventListener('change', () => {
  const uState = idleDetector.userState;
  const sState = idleDetector.screenState;
  console.log(`Idle change: ${uState}, ${sState}.`);
});

// Start the idle detector.
await idleDetector.start({
  threshold: 60000,
  signal,
});

See Detect inactive users with the Idle Detection API to learn more about the API, and how you can start experimenting with it today.
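Note that the API is permission-gated: start() only succeeds after the idle-detection permission has been granted in response to a user gesture. A hedged sketch of the check; canDetectIdle is an illustrative helper, and the constructor is passed in so the check degrades gracefully where the API is absent:

```javascript
// Sketch: check availability and permission for idle detection.
// `canDetectIdle` is an illustrative helper; pass window.IdleDetector
// in a browser. requestPermission() is the static method on the
// IdleDetector constructor.
async function canDetectIdle(IdleDetectorCtor) {
  if (!IdleDetectorCtor || !IdleDetectorCtor.requestPermission) {
    return false; // API not available in this browser
  }
  const state = await IdleDetectorCtor.requestPermission();
  return state === 'granted';
}

// Usage (in a click handler):
// if (await canDetectIdle(window.IdleDetector)) { /* start a detector */ }
```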

Web Assembly SIMD

And Web Assembly SIMD starts an origin trial. It introduces operations that map to commonly available SIMD instructions in hardware. SIMD operations are used to improve performance, especially in multimedia applications.

To learn more about WebAssembly SIMD, check out Fast, parallel applications with WebAssembly SIMD.

And more

Chrome 84 is big, but there are a few other important updates I want to point out.

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 84.

Subscribe

Want to stay up to date with our videos, then subscribe to our Chrome Developers YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and I still need a hair cut, but as soon as Chrome 85 is released, I’ll be right here to tell you -- what’s new in Chrome!

Feedback

Deprecations and removals in Chrome 85


AppCache Removal Begins

Chrome 85 starts a spec-mandated turn down of AppCache in Chrome. For details and instructions for managing the transition gracefully, see Preparing for AppCache removal. For information on a feature that will help you identify uses of this and other deprecated APIs, see Know your code health

Intent to Remove | Chrome Platform Status | Chromium Bug

Reject insecure SameSite=None cookies

Use of cookies with SameSite set to None without the Secure attribute is no longer supported. Any cookie that requests SameSite=None but is not marked Secure will be rejected. This feature started rolling out to users of Stable Chrome on July 14, 2020. See SameSite Updates for a full timeline and details. Cookies delivered over plaintext channels may be cataloged or modified by network attackers. Requiring secure transport for cookies intended for cross-site usage reduces this risk.

Intent to Remove | Chrome Platform Status | Chromium Bug

-webkit-box quirks from -webkit-line-clamp

Intent to Remove | Chrome Platform Status | Chromium Bug

Feedback

A new default Referrer-Policy for Chrome: strict-origin-when-cross-origin


Before we start:

  • If you're unsure of the difference between "site" and "origin", check out Understanding "same-site" and "same-origin".
  • The Referer header is missing an R, due to an original misspelling in the spec. The Referrer-Policy header and referrer in JavaScript and the DOM are spelled correctly.

Summary

  • Browsers are evolving towards privacy-enhancing default referrer policies, to provide a good fallback when a website has no policy set.
  • Chrome plans to gradually enable strict-origin-when-cross-origin as the default policy in 85; this may impact use cases relying on the referrer value from another origin.
  • This is the new default, but websites can still pick a policy of their choice.
  • To try out the change in Chrome, enable the flag at chrome://flags/#reduced-referrer-granularity. You can also check out this demo to see the change in action.
  • Beyond the referrer policy, the way browsers deal with referrers might change—so keep an eye on it.

What's changing and why?

HTTP requests may include the optional Referer header, which indicates the origin or web page URL the request was made from. The Referrer-Policy header defines what data is made available in the Referer header and, for navigations and iframes, in the destination's document.referrer.

Exactly what information is sent in the Referer header in a request from your site is determined by the Referrer-Policy header you set.

Diagram: Referer sent in a request.
Referrer-Policy and Referer.

When no policy is set, the browser's default is used. Websites often defer to the browser’s default.

For navigations and iframes, the data present in the Referer header can also be accessed via JavaScript using document.referrer.

Up until recently, no-referrer-when-downgrade has been a widespread default policy across browsers. But now many browsers are in some stage of moving to more privacy-enhancing defaults.

Chrome plans to switch its default policy from no-referrer-when-downgrade to strict-origin-when-cross-origin, starting in version 85.

This means that if no policy is set for your website, Chrome will use strict-origin-when-cross-origin by default. Note that you can still set a policy of your choice; this change will only have an effect on websites that have no policy set.

Note: this step to help reduce silent cross-site user tracking is part of a larger initiative: the Privacy Sandbox. Check Digging into the Privacy Sandbox for more details.

What does this change mean?

strict-origin-when-cross-origin offers more privacy. With this policy, only the origin is sent in the Referer header of cross-origin requests.

This prevents leaks of private data that may be accessible from other parts of the full URL such as the path and query string.

Diagram: Referer sent depending on the policy, for a cross-origin request.
Referer sent (and document.referrer) for a cross-origin request, depending on the policy.

For example:

Cross-origin request, sent from https://site-one.example/stuff/detail?tag=red to https://site-two.example/…:

What stays the same?

  • Like no-referrer-when-downgrade, strict-origin-when-cross-origin is secure: no referrer (Referer header and document.referrer) is present when the request is made from an HTTPS origin (secure) to an HTTP one (insecure). This way, if your website uses HTTPS (if not, make it a priority), your website's URLs won't leak in non-HTTPS requests—because anyone on the network can see these, so this would expose your users to man-in-the-middle-attacks.
  • Within the same origin, the Referer header value is the full URL.

For example: Same-origin request, sent from https://site-one.example/stuff/detail?tag=red to https://site-one.example/…:
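The examples above can be modeled in a few lines of plain JavaScript. This is an illustrative model of strict-origin-when-cross-origin, not browser code; refererFor is a made-up helper, and real browsers also strip fragments and apply further rules:

```javascript
// Illustrative model of strict-origin-when-cross-origin (not browser code).
// Returns the Referer value a browser following this policy would send,
// or null when no referrer is sent at all.
function refererFor(fromUrl, toUrl) {
  const from = new URL(fromUrl);
  const to = new URL(toUrl);
  // HTTPS -> HTTP downgrade: send no referrer at all.
  if (from.protocol === 'https:' && to.protocol === 'http:') {
    return null;
  }
  // Same origin: the full URL (path and query string included).
  if (from.origin === to.origin) {
    return from.origin + from.pathname + from.search;
  }
  // Cross-origin: the origin only, with a trailing slash.
  return from.origin + '/';
}
```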

What's the impact?

Based on discussions with other browsers and Chrome's own experimentation run in Chrome 84, user-visible breakage is expected to be limited.

Server-side logging or analytics that rely on the full referrer URL being available are likely to be impacted by reduced granularity in that information.

What do you need to do?

Chrome plans to start rolling out the new default referrer policy in 85 (July 2020 for beta, August 2020 for stable). See status in the Chrome status entry.

Understand and detect the change

To understand what the new default changes in practice, you can check out this demo.

You can also use this demo to detect what policy is applied in the Chrome instance you are running.

Test the change, and figure out if this will impact your site

You can already try out the change starting from Chrome 81: visit chrome://flags/#reduced-referrer-granularity in Chrome and enable the flag. When this flag is enabled, all websites without a policy will use the new strict-origin-when-cross-origin default.

Chrome screenshot: how to enable the flag chrome://flags/#reduced-referrer-granularity.
Enabling the flag.

You can now check how your website and backend behave.

Another thing to do to detect impact is to check if your website's codebase uses the referrer—either via the Referer header of incoming requests on the server, or from document.referrer in JavaScript.

Certain features on your site might break or behave differently if you're using the referrer of requests from another origin to your site (more specifically the path and/or query string) AND this origin uses the browser's default referrer policy (i.e. it has no policy set).

If this impacts your site, consider alternatives

If you're using the referrer to access the full path or query string for requests to your site, you have a few options:

  • Use alternative techniques and headers such as Origin and Sec-Fetch-Site for your CSRF protection, logging, and other use cases. Check out Referer and Referrer-Policy: best practices.
  • You can align with partners on a specific policy if this is needed and transparent to your users. Access control—when the referrer is used by websites to grant specific access to their resources to other origins—might be such a case although with Chrome's change, the origin will still be shared in the Referer Header (and in document.referrer).

Note that most browsers are moving in a similar direction when it comes to the referrer (see browser defaults and their evolutions in Referer and Referrer-Policy: best practices).

Implement an explicit, privacy-enhancing policy across your site

What Referer should be sent in requests originated by your website, i.e. what policy should you set for your site?

Even with Chrome's change in mind, it's a good idea to set an explicit, privacy-enhancing policy like strict-origin-when-cross-origin or stricter right now.

This protects your users and makes your website behave more predictably across browsers. Mostly, it gives you control, rather than having your site depend on browser defaults.

Check Referer and Referrer-Policy: best practices for details on setting a policy.

About Chrome enterprise

The Chrome enterprise policy ForceLegacyDefaultReferrerPolicy is available to IT administrators who would like to force the previous default referrer policy of no-referrer-when-downgrade in enterprise environments. This allows enterprises additional time to test and update their applications.

This policy will be removed in Chrome 88.

Send feedback

Do you have feedback to share or something to report? Share feedback on Chrome's intent to ship, or tweet your questions at @maudnals.

With many thanks for contributions and feedback to all reviewers - especially Kaustubha Govind, David Van Cleve, Mike West, Sam Dutton, Rowan Merewood, Jxck and Kayce Basques.

Resources


What's New In DevTools (Chrome 86)


New Media panel

DevTools now displays media players information in the Media panel.

New Media panel

Prior to the new media panel in DevTools, logging and debug information about video players could be found in chrome://media-internals.

The new Media panel provides an easier way to view events, logs, properties, and a timeline of frame decodes in the same browser tab as the video player itself. You can inspect potential issues live and diagnose them more quickly (for example, why frames are being dropped, or why JavaScript is interacting with the player in an unexpected way).

Chromium issue: 1018414

Capture node screenshots via Elements panel context menu

You can now capture node screenshots via the context menu in the Elements panel.

For example, you can take a screenshot of the table of contents by right-clicking the element and selecting Capture node screenshot.

Capture node screenshots

Chromium issue: 1100253

Issues tab updates

The Issues warning bar on the Console panel is now replaced with a regular message.

Issues in console message

Third-party cookie issues are now hidden by default in the Issues tab. Enable the new Include third-party cookie issues checkbox to view them.

third-party cookie issues checkbox

Chromium issues: 1096481, 1068116, 1080589

Emulate missing local fonts

Open the Rendering tab and use the new Disable local fonts feature to emulate missing local() sources in @font-face rules.

For example, when the font “Rubik” is installed on your device and the @font-face src rule uses it as a local() font, Chrome uses the local font file from your device.

When Disable local fonts is enabled, DevTools ignores the local() fonts and fetches them from the network.

Emulate missing local fonts

Chromium issue: 384968

Emulate inactive users

The Idle Detection API allows developers to detect inactive users and react to idle state changes. You can now use DevTools to emulate idle state changes in the Sensors tab for both the user state and the screen state instead of waiting for the actual idle state to change. You can open the Sensors tab from the Drawer.

Emulate inactive users

Chromium issue: 1090802
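For reference, a browser-only sketch of the Idle Detection API that the Sensors tab emulates. IdleDetector is not available in Node, so the function below is defined but never called here; permission handling is omitted:

```javascript
// Browser-only sketch: watch for user/screen idle state changes.
async function watchIdle() {
  const detector = new IdleDetector();
  detector.addEventListener('change', () => {
    console.log(detector.userState, detector.screenState);
  });
  await detector.start({ threshold: 60000 }); // threshold is in milliseconds
}
```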

Emulate prefers-reduced-data

The prefers-reduced-data media query detects if the user prefers being served alternate content that uses less data for the page to be rendered.

You can now use DevTools to emulate the prefers-reduced-data media query.

Emulate prefers-reduced-data

Chromium issue: 1096068
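A page might respond to the media query described above like this (a sketch; the font substitution is just one illustrative data-saving measure):

```css
@media (prefers-reduced-data: reduce) {
  /* Serve system fonts instead of webfonts when the user asked for less data. */
  body {
    font-family: system-ui, sans-serif;
  }
}
```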

Support for new JavaScript features

DevTools now has better support for some of the latest JavaScript language features:

  • Logical assignment operators - DevTools now supports logical assignment with the new operators &&=, ||=, and ??= in the Console and Sources panels.
  • Pretty-print numeric separators - DevTools now properly pretty-prints the numeric separators in the Sources panel.

Chromium issues: 1086817, 1080569
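A quick runnable demo of the three new logical assignment operators:

```javascript
// Demo of the logical assignment operators supported in the Console.
const config = { retries: 0, timeout: null, verbose: false };

config.retries ||= 3;     // 0 is falsy   → retries becomes 3
config.timeout ??= 5000;  // null is nullish → timeout becomes 5000
config.verbose &&= true;  // false is falsy → verbose stays false

console.log(config); // → { retries: 3, timeout: 5000, verbose: false }
```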

Lighthouse 6.2 in the Lighthouse panel

The Lighthouse panel is now running Lighthouse 6.2. Check out the release notes for a full list of changes.

Unsize image

New audits in Lighthouse 6.2:

  • Avoid long main thread tasks. Reports the longest tasks on the main thread, useful for identifying the worst contributors to input delay.
  • Links are crawlable. Checks whether the href attribute of anchor elements links to an appropriate destination, so the links can be discovered.
  • Unsized image elements. Checks whether an explicit width and height are set on image elements. Explicit image sizes can reduce layout shifts and improve CLS.
  • Avoid non-composited animations. Reports non-composited animations that appear janky and reduce CLS.
  • Listens for the unload events. Reports pages that listen for the unload event. Consider using the pagehide or visibilitychange events instead, as the unload event does not fire reliably.

Updated audits in Lighthouse 6.2:

  • Remove unused JavaScript. Lighthouse will now enhance the audit if a page has publicly-accessible JavaScript source maps.

Chromium issue: 772558

Deprecation of “other origins” listing in the Service Workers pane

DevTools now provides a link to view the full list of service workers from other origins in a new browser tab - chrome://serviceworker-internals/?devtools.

Previously DevTools displayed a list nested under the Application panel > Service workers pane.

Link to other origins

Chromium issue: 807440

Show coverage summary for filtered items

DevTools now recalculates and displays a summary of coverage information dynamically when filters are applied in the Coverage tab. Previously, the Coverage tab always displayed a summary of all coverage information.

In the example below notice how the summary initially says 446 kB of 2.0 MB (22%) used so far. 1.5 MB unused. and then says 57 kB of 604 kB (10%) used so far. 546 kB unused. after CSS filtering has been applied.

Coverage summary for filtered items

Chromium issue: 1061385

New frame detailed view in Application panel

DevTools now shows a detailed view for each frame. Access it by clicking a frame under the Frames menu in the Application panel.

New frame detailed view in Application panel

Chromium issue: 1093247

Frame details for opened windows

DevTools now displays opened windows and pop-ups under the frame tree as well. The frame detailed view of an opened window includes additional security information.

New frame detailed view in Application panel

More security information will be added to the frame detailed view soon.

Chromium issue: 1107766

Elements and Network panel updates

Accessible color suggestion in the Styles pane

DevTools now provides color suggestions for low color contrast text.

In the example below, h1 has low contrast text. To fix it, open the color picker of the color property in the Styles pane. After you expand the Contrast ratio section, DevTools provides AA and AAA color suggestions. Click on the suggested color to apply the color.

Chromium issue: 1093227

Human-readable X-Client-Data header values in the Network panel

When inspecting a network resource in the Network panel, DevTools now formats any X-Client-Data header values in Headers pane as code.

The X-Client-Data HTTP header contains a list of experiment IDs and Chrome flags that are enabled in your browser. The raw header values look like opaque strings since they are base-64-encoded, serialized protocol buffers. To make the contents more transparent to developers, DevTools is now showing the decoded values.

Human-readable `X-Client-Data` header values

Chromium issue: 1103854

Auto-complete custom fonts in the Styles pane

Imported font faces are now added to the list of CSS auto-completion when editing the font-family property in the Styles pane.

In this example, 'Noto Sans' is a custom font installed on the local machine. It is displayed in the CSS completion list. Previously, it was not.

Auto-complete custom fonts

Chromium issue: 1106221

Consistently display resource type in Network panel

DevTools now consistently displays the same resource type as the original network request and appends / Redirect to the Type column value when redirection (status 302) happens.

Previously, DevTools sometimes changed the type to Other.

Display redirect resource type

Chromium issue: 997694

Clear buttons in the Elements and Network panels

The filter text boxes in the Styles pane and Network panel, as well as the DOM search text box in the Elements panel, now have Clear buttons. Clicking Clear removes any text that you have input.

Clear buttons in the Elements and Network panels

Chromium issue: 1067184


New in Chrome 85


Chrome 85 is starting to roll out to stable now.

Here's what you need to know:

I’m Pete LePage, working and shooting from home, let’s dive in and see what’s new for developers in Chrome 85!

Content Visibility

Browsers rendering process

Turning your HTML into something the user can see requires the browser to go through a number of steps before it can paint the first pixel, and it does this for the whole page, even for content that isn't visible in the viewport.

Applying content-visibility: auto to an element, tells the browser that it can skip the rendering work for that element until it scrolls into the viewport, providing a faster initial render.

.my-class {
  content-visibility: auto;
}

To get the most impact out of content-visibility, apply it to parent sections with more complex layout algorithms, like flexbox, and grid, or that have children with contained layouts of their own.

By chunking content and adding content-visibility: auto;, this page went from a rendering time of 232ms to only 30ms.
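When rendering is skipped, the browser still needs a placeholder size so the scrollbar doesn't jump; pairing content-visibility: auto with contain-intrinsic-size supplies one. The 500px value below is an illustrative estimate, not from the article:

```css
.my-class {
  content-visibility: auto;
  /* Estimated size reserved while rendering is skipped; tune per section. */
  contain-intrinsic-size: 500px;
}
```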

Check out the content-visibility article to see how you can use it to improve your rendering performance.

@property and CSS variables

CSS variables, technically called custom properties, are awesome. With the Houdini CSS Properties and Values API, you can define a type and default fallback value for your custom properties. I previously covered them in New in Chrome 78, when we added support for defining them in JavaScript.

Starting in Chrome 85, you can also define and set CSS properties directly in your CSS. What I love about CSS properties is that they give the property semantic meaning, fallback values, and even enable CSS testing.

@property --colorPrimary {
  syntax: '<color>';
  initial-value: magenta;
  inherits: false;
}

Una has a great post @property: giving superpowers to CSS variables that shows you how you can use them.
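For comparison, the same registration through the Houdini JS API (the Chrome 78 approach mentioned above). The descriptor mirrors the @property rule; the CSS.registerProperty call is browser-only, so it is left commented here:

```javascript
// Descriptor for CSS.registerProperty, mirroring the @property rule above.
const colorPrimary = {
  name: '--colorPrimary',
  syntax: '<color>',
  initialValue: 'magenta',
  inherits: false,
};

// In a browser: CSS.registerProperty(colorPrimary);
console.log(colorPrimary.name); // → --colorPrimary
```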

getInstalledRelatedApps()

The getInstalledRelatedApps() API makes it possible for you to check if your app is installed, and then customize the user experience.

For example, show different content to the user on a landing page if your app is already installed. Centralize overlapping functionality in one app to prevent confusion. Or, if your native app is already installed, don’t promote the installation of your PWA.

When it first shipped in Chrome 80, it only worked for Android apps. Now, on Android, it can also check if your PWA is installed. And on Windows, it can check if your Windows UWP app is installed.

const relatedApps = await navigator.getInstalledRelatedApps();
relatedApps.forEach((app) => {
  console.log(app.id, app.platform, app.url);
});

Check out my article Is your app installed? getInstalledRelatedApps() will tell you! on web.dev to see how it works, and how to sign your apps to prove they’re yours.

App Icon Shortcuts

App icon shortcut on Windows

In Chrome 84, we added support for App Icon Shortcuts. I accidentally said they were available everywhere, but they were only available on Android. Now, in Chrome 85, they’re available on Android and Windows, and in both Chrome and Edge.

"shortcuts": [
  {
    "name": "Open Play Later",
    "short_name": "Play Later",
    "description": "View the list you saved for later",
    "url": "/play-later",
    "icons": [
      { "src": "//play-later.png", "sizes": "192x192" }
    ]
  },
]

Check out the App Icon Shortcuts article on web.dev for complete details, and I’m sorry for the confusion I caused.

Origin Trial: Streaming requests with fetch()

Starting in Chrome 85, fetch upload streaming is available as an origin trial. It lets you start a fetch before the request body is ready. Previously, you could only start a request once you had the whole body ready to go. But now, you can start sending content, even while you're still generating it.

const { readable, writable } = new TransformStream();

const responsePromise = fetch(url, {
  method: 'POST',
  body: readable,
});

For example, use it to warm up the server, or stream audio or video as it’s captured from the microphone or camera.

Jake has an in-depth look in Streaming requests with the fetch API on web.dev, and also covered it in the latest HTTP203 - Streaming requests with fetch video.

And more

Of course, there’s plenty more.

Promise.any returns a promise that is fulfilled by the first given promise to be fulfilled or rejected.

try {
  const first = await Promise.any(arrayOfPromises);
  console.log(first);
} catch (error) {
  console.log(error.errors);
}

Replacing all instances in a string is easier with .replaceAll(), no more regular expressions!

const myName = 'My name is Bond, James Bond.'
    .replaceAll('Bond', 'Powers')
    .replace('James', 'Austin');
console.log(myName);
// My name is Powers, Austin Powers.

Chrome 85 adds decode support for AVIF, an image format encoded with AV1 and standardized by the Alliance for Open Media. AVIF offers significant compression gains vs. JPEG and WebP, with a recent Netflix study showing 50% savings vs. standard JPEG and > 60% savings on 4:4:4 content.

And AppCache removal has begun.

Further reading

This covers only some of the key highlights. Check the links below for additional changes in Chrome 85.

Subscribe

Want to stay up to date with our videos, then subscribe to our Chrome Developers YouTube channel, and you’ll get an email notification whenever we launch a new video, or add our RSS feed to your feed reader.

I’m Pete LePage, and I finally got a hair cut!

As soon as Chrome 86 is released, I’ll be right here to tell you -- what’s new in Chrome!

Feedback

Deprecations and removals in Chrome 86


Remove WebComponents v0

Web Components v0 was removed from desktop and Android in Chrome 80. Chromium 86 removes them from WebView. This removal includes Custom Elements v0, Shadow DOM v0, and HTML Imports.

Deprecate FTP support

Chrome is deprecating and removing support for FTP URLs. The current FTP implementation in Google Chrome has no support for encrypted connections (FTPS), nor proxies. Usage of FTP in the browser is sufficiently low that it is no longer viable to invest in improving the existing FTP client. In addition, more capable FTP clients are available on all affected platforms.

Google Chrome 72 and later removed support for fetching document subresources over FTP and rendering of top level FTP resources. Currently navigating to FTP URLs results in showing a directory listing or a download depending on the type of resource. A bug in Google Chrome 74 and later resulted in dropping support for accessing FTP URLs over HTTP proxies. Proxy support for FTP was removed entirely in Google Chrome 76.

The remaining capabilities of Google Chrome’s FTP implementation are restricted to either displaying a directory listing or downloading a resource over unencrypted connections.

Deprecation of support will follow this timeline:

Chrome 86

FTP is still enabled by default for most users, but turned off for pre-release channels (Canary and Beta) and will be experimentally turned off for one percent of stable users. In this version you can re-enable it from the command line using either the --enable-ftp command line flag or the --enable-features=FtpProtocol flag.

Chrome 87

FTP support will be disabled by default for fifty percent of users but can be enabled using the flags listed above.

Chrome 88

FTP support will be disabled.

Feedback

DevTools architecture refresh: Migrating to JavaScript modules


As you might know, Chrome DevTools is a web application written using HTML, CSS and JavaScript. Over the years, DevTools has gotten more feature-rich, smarter and knowledgeable about the broader web platform. While DevTools has expanded over the years, its architecture largely resembles the original architecture when it was still part of WebKit.

This post is part of a series of blog posts describing the changes we are making to DevTools' architecture and how it is built. We will explain how DevTools has historically worked, what the benefits and limitations were and what we have done to alleviate these limitations. Therefore, let's dive deep into module systems, how to load code and how we ended up using JavaScript modules.

In the beginning, there was nothing

While the current frontend landscape has a variety of module systems with tools built around them, as well as the now-standardized JavaScript modules format, none of these existed when DevTools was first built. DevTools is built on top of code that initially shipped in WebKit more than 12 years ago.

The first mention of a module system in DevTools stems from 2012: the introduction of a list of modules with an associated list of sources. This was part of the Python infrastructure used back then to compile and build DevTools. A follow-up change extracted all modules into a separate frontend_modules.json file (commit) in 2013 and then into separate module.json files (commit) in 2014.

An example module.json file:

{
  "dependencies": [
    "common"
  ],
  "scripts": [
    "StylePane.js",
    "ElementsPanel.js"
  ]
}

Since 2014, the module.json pattern has been used in DevTools to specify its modules and source files. Meanwhile, the web ecosystem rapidly evolved and multiple module formats were created, including UMD, CommonJS and the eventually standardized JavaScript modules. However, DevTools stuck with the module.json format.

While DevTools remained working, there were a couple of downsides of using a non-standardized and unique module system:

  1. The module.json format required custom build tooling, akin to modern bundlers.
  2. There was no IDE integration, which required custom tooling to generate files modern IDEs could understand (the original script to generate jsconfig.json files for VS Code).
  3. Functions, classes and objects were all put on the global scope to make sharing between modules possible.
  4. Files were order-dependent, meaning the order in which sources were listed was important. There was no guarantee that code you rely on would be loaded, other than that a human had verified it.

All in all, when evaluating the current state of the module system in DevTools and the other (more widely used) module formats, we concluded that the module.json pattern was creating more problems than it solved and it was time to plan our move away from it.

The benefits of standards

Out of the existing module systems, we chose JavaScript modules as the one to migrate to. At the time of that decision JavaScript modules were still shipping behind a flag in Node.js and a large number of packages available on NPM did not have a JavaScript modules bundle we could use. Despite this, we concluded that JavaScript modules were the best option.

The primary benefit of JavaScript modules is that it is the standardized module format for JavaScript. When we listed the downsides of the module.json (see above), we realized that almost all of them were related to using a non-standardized and unique module format.

Choosing a module format that is non-standardized means that we have to invest time ourselves into building integrations with the build tools and tools our maintainers used.

These integrations often were brittle and lacked support for features, requiring additional maintenance time, sometimes leading to subtle bugs that would eventually ship to users.

Since JavaScript modules were the standard, it meant that IDEs like VS Code, type checkers like Closure Compiler/TypeScript and build tools like Rollup/minifiers would be able to understand the source code we wrote. Moreover, when a new maintainer would join the DevTools team, they would not have to spend time learning a proprietary module.json format, whereas they would (likely) already be familiar with JavaScript modules.

Of course, when DevTools was initially built, none of the above benefits existed. It took years of work in standards groups, runtime implementations and developers using JavaScript modules providing feedback to get to the point where they are now. But when JavaScript modules became available we had a choice to make: either keep maintaining our own format, or invest in migrating to the new one.

The cost of the shiny new

Even though JavaScript modules had plenty of benefits that we would like to use, we remained in the non-standard module.json world. Reaping the benefits of JavaScript modules meant that we had to significantly invest in cleaning up technical debt, performing a migration that could potentially break features and introduce regression bugs.

At this point, it was not a question of "Do we want to use JavaScript modules?", but a question of "How expensive is it to be able to use JavaScript modules?". Here, we had to balance the risk of breaking our users with regressions, the cost of engineers spending (a large amount of) time migrating and the temporary worse state we would work in.

That last point turned out to be very important. Even though we could in theory get to JavaScript modules, during a migration we would end up with code that would have to take into account both module.json and JavaScript modules. Not only was this technically difficult to achieve, it also meant that all engineers working on DevTools would need to know how to work in this environment. They would have to continuously ask themselves "For this part of the codebase, is it module.json or JavaScript modules and how do I make changes?".

Sneak peek: The hidden cost of guiding our fellow maintainers through a migration was bigger than we anticipated.

After the cost analysis, we concluded that it was still worthwhile to migrate to JavaScript modules. Therefore, our main goals were the following:

  1. Make sure that the usage of JavaScript modules reaps the benefits to the fullest extent possible.
  2. Make sure that the integration with the existing module.json-based system is safe and does not lead to negative user impact (regression bugs, user frustration).
  3. Guide all DevTools maintainers through the migration, primarily with checks and balances built-in to prevent accidental mistakes.

Spreadsheets, transformations and technical debt

While the goal was clear, the limitations imposed by the module.json format proved difficult to work around. It took several iterations, prototypes and architectural changes before we developed a solution we were comfortable with. We wrote a design doc with the migration strategy we ended up with. The design doc also listed our initial time estimation: 2-4 weeks.

Spoiler alert: the most intensive part of the migration took 4 months and from start to finish took 7 months!

The initial plan, however, stood the test of time: we would teach the DevTools runtime to load all files listed in the scripts array in the module.json file using the old way, while loading all files listed in the modules array with JavaScript modules dynamic import. Any file residing in the modules array would be able to use ES imports/exports.
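That dual loading strategy can be sketched as follows. The function name and the `load` callbacks are hypothetical stand-ins for DevTools' actual loading mechanisms; in the real runtime, `load.module` would be a dynamic `import()`:

```javascript
// Hypothetical sketch: files in `scripts` load the legacy way (global scope),
// files in `modules` load via dynamic import().
async function loadModuleJson(moduleJson, load) {
  for (const file of moduleJson.scripts ?? []) {
    load.script(file);       // legacy: executed into the global scope
  }
  for (const file of moduleJson.modules ?? []) {
    await load.module(file); // standards: await import(file)
  }
}

const loaded = [];
loadModuleJson(
  { scripts: ['StylePane.js'], modules: ['ElementsPanel.js'] },
  {
    script: (f) => loaded.push(`script:${f}`),
    module: (f) => loaded.push(`module:${f}`),
  },
);
```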

Additionally, we would perform the migration in 2 phases (we eventually split up the last phase into 2 sub-phases, see below): the export- and import-phases. The status of which module would be in which phase was tracked in a large spreadsheet:

JavaScript modules migration spreadsheet

A snippet of the progress sheet is publicly available here.

export-phase

The first phase would be to add export-statements for all symbols that were supposed to be shared between modules/files. The transformation would be automated, by running a script per folder. Given the following symbol would exist in the module.json world:

Module.File1.exported = function() {
  console.log('exported');
  Module.File1.localFunctionInFile();
};
Module.File1.localFunctionInFile = function() {
  console.log('Local');
};

(Here, Module is the name of the module and File1 the name of the file. In our sourcetree, that would be front_end/module/file1.js.)

This would be transformed to the following:

export function exported() {
  console.log('exported');
  Module.File1.localFunctionInFile();
}
export function localFunctionInFile() {
  console.log('Local');
}

/** Legacy export object */
Module.File1 = {
  exported,
  localFunctionInFile,
};

Initially, our plan was to rewrite same-file imports during this phase as well. For example, in the above example we would rewrite Module.File1.localFunctionInFile to localFunctionInFile. However, we realized that it would be easier to automate and safer to apply if we separated these two transformations. Therefore, the "migrate all symbols in the same file" would become the second sub-phase of the import-phase.

Since adding the export keyword in a file transforms the file from a "script" to a "module", a lot of the DevTools infrastructure had to be updated accordingly. This included the runtime (with dynamic import), but also tools like ESLint to run in module mode.

One discovery we made while working through these issues is that our tests were running in "sloppy" mode. Since JavaScript modules imply that files run in "use strict" mode, this would also affect our tests. As it turned out, a non-trivial amount of tests were relying on this sloppiness, including a test that used a with-statement 😱.

In the end, updating the very first folder to include export-statements took about a week and multiple attempts with relands.

import-phase

After all symbols were both exported using export-statements and remained available on the global scope (legacy), we had to update all references to cross-file symbols to use ES imports. The end goal would be to remove all "legacy export objects", cleaning up the global scope. The transformation would be automated, by running a script per folder.

For example, for the following symbols that exist in the module.json world:

Module.File1.exported();
AnotherModule.AnotherFile.alsoExported();
SameModule.AnotherFile.moduleScoped();

They would be transformed to:

import * as Module from '../module/Module.js';
import * as AnotherModule from '../another_module/AnotherModule.js';

import {moduleScoped} from './AnotherFile.js';

Module.File1.exported();
AnotherModule.AnotherFile.alsoExported();
moduleScoped();

However, there were some caveats with this approach:

  1. Not every symbol was named as Module.File.symbolName. Some symbols were named solely Module.File or even Module.CompletelyDifferentName. This inconsistency meant that we had to create an internal mapping from the old global object to the new imported object.
  2. Sometimes there would be clashes between moduleScoped names. Most prominently, we used a pattern of declaring certain types of Events, where each symbol was named just Events. This meant that if you were listening for multiple types of events declared in different files, a nameclash would occur on the import-statement for those Events.
  3. As it turned out, there were circular dependencies between files. This was fine in a global scope context, as the usage of the symbol was after all code was loaded. However, if you require an import, the circular dependency would be made explicit. This isn't a problem immediately, unless you have side-effect function calls in your global scope code, which DevTools also had. All in all, it required some surgery and refactoring to make the transformation safe.
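The name clash in caveat 2 is typically resolved with import aliases. The static form (file names hypothetical) is `import {Events as FileOneEvents} from './FileOne.js';`; the same renaming works with plain destructuring, shown runnable here:

```javascript
// Two hypothetical modules that both export a symbol named `Events`.
const FileOne = { Events: { Updated: 'updated' } };
const FileTwo = { Events: { Removed: 'removed' } };

// Aliasing on "import" avoids the clash between the two `Events` symbols.
const { Events: FileOneEvents } = FileOne;
const { Events: FileTwoEvents } = FileTwo;

console.log(FileOneEvents.Updated, FileTwoEvents.Removed); // → updated removed
```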

A whole new world with JavaScript modules

In February 2020, 6 months after the start in September 2019, the last cleanups were performed in the ui/ folder. This marked the unofficial end to the migration. After letting the dust settle down, we officially marked the migration as finished on March 5th 2020. 🎉

Now, all modules in DevTools use JavaScript modules to share code. We still put some symbols on the global scope (in the module-legacy.js files) for our legacy tests or to integrate with other parts of the DevTools architecture. These will be removed over time, but we don't consider them a blocker for future development. We also have a style guide for our usage of JavaScript modules.

Statistics

Conservative estimates for the number of CLs (abbreviation for changelist - the term used in Gerrit that represents a change - similar to a GitHub pull request) involved in this migration are around 250 CLs, largely performed by 2 engineers. We don't have definitive statistics on the size of changes made, but a conservative estimate of lines changed (calculated as the sum of absolute difference between insertions and deletions for each CL) is roughly 30,000 (~20% of all of DevTools frontend code).

The first file using export shipped in Chrome 79, released to stable in December 2019. The last change to migrate to import shipped in Chrome 83, released to stable in May 2020.

We are aware of one regression that shipped to Chrome stable and that was introduced as part of this migration. The auto-completion of snippets in the command menu broke due to an extraneous default export. We have had several other regressions, but our automated test suites and Chrome Canary users reported these and we fixed them before they were able to reach Chrome stable users.

You can see the full journey (not all CLs are attached to this bug, but most of them are) logged on crbug.com/1006759.

What we learned

  1. Decisions made in the past can have a long-lasting impact on your project. Even though JavaScript modules (and other module formats) were available for quite some time, DevTools was not in a position to justify the migration. Deciding when to and when not to migrate is difficult and based on educated guesses.
  2. Our initial time estimates were in weeks rather than months. This largely stems from the fact that we found more unexpected problems than we anticipated in our initial cost analysis. Even though the migration plan was solid, technical debt was (more often than we would have liked) the blocker.
  3. The JavaScript modules migration included a large amount of (seemingly unrelated) technical debt cleanups. The migration to a modern standardized module format allowed us to realign our coding best practices with modern day web development. For example, we were able to replace our custom Python bundler with a minimal Rollup configuration.
  4. Despite the large impact on our codebase (~20% of code changed), very few regressions were reported. While we did have numerous issues migrating the first couple of files, after a while we had a solid, partially automated, workflow. This meant that negative user impact for our stable users was minimal for this migration.
  5. Teaching the intricacies of a particular migration to fellow maintainers is difficult and sometimes impossible. Migrations of this scale are difficult to follow and require a lot of domain knowledge. Transferring all of that domain knowledge to others working in the same codebase is not necessarily useful for the job they are doing. Knowing what to share and what details not to share is an art, but a necessary one. It is therefore crucial to reduce the number of large migrations, or at the very least not perform them at the same time.
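
To illustrate the Rollup point above: a minimal configuration of the kind that can replace a custom bundler looks roughly like this (the entry-point and output paths are invented for the sketch, not the actual DevTools build files):

```javascript
// rollup.config.js — minimal ESM bundling setup (illustrative paths).
export default {
  // Follow the import graph starting from a single entry point...
  input: 'front_end/main.js',
  output: {
    // ...and emit one bundled file, still as a JavaScript module.
    file: 'out/main.bundle.js',
    format: 'esm',
  },
};
```

Because the code uses standard JavaScript modules, the bundler only needs to follow `import` statements; no project-specific manifest has to be maintained.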

<<../../_shared/devtools-feedback.md>>

<<../../_shared/discover-devtools-blog.md>>

How we built the Chrome DevTools Issues tab

In the last quarter of 2019, the Chrome DevTools team started improving the developer experience in DevTools around cookies. This was particularly important because Google Chrome and other browsers had begun to change their default cookie behavior.

While researching the tools that DevTools already provides, we often found ourselves in a situation like the following:

Issues in the Console panel

😰 The console was full of warnings and error messages that contained rather technical explanations and sometimes links to chromestatus.com. All messages looked roughly equally important, making it hard to figure out which to address first. More importantly, the text did not link to additional information inside DevTools, making it difficult to understand what had happened. And finally, the messages often left it entirely to the web developer to figure out how to fix the problem or even learn about the technical context.

If you also use the console for messages from your own application, you'll sometimes have a hard time finding them among all the messages from the browser.

Console messages are hard to work with not only for humans, but also for automated processes, for example when developers use Chrome Headless and Puppeteer in a Continuous Integration/Continuous Deployment scenario. Because console messages are just strings, developers need to write regular expressions or some other parser to extract actionable information.
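
As a sketch of what such string parsing looks like (the message wording and the pattern are illustrative, not the exact Chrome output):

```javascript
// Extract the offending origin from plain-text cookie warnings. This is
// exactly the kind of fragile parsing structured issues make unnecessary:
// the wording (invented here) could change between Chrome releases.
const COOKIE_WARNING =
    /cookie associated with a cross-site resource at (\S+) was set without/;

function extractCookieOrigins(consoleLines) {
  const origins = [];
  for (const line of consoleLines) {
    const match = COOKIE_WARNING.exec(line);
    if (match) origins.push(match[1]);
  }
  return origins;
}

// Browser warnings mixed with an application's own log line:
const lines = [
  'A cookie associated with a cross-site resource at http://ads.example/ was set without the `SameSite` attribute.',
  'checkout: cart updated',
  'A cookie associated with a cross-site resource at http://cdn.example/ was set without the `SameSite` attribute.',
];
console.log(extractCookieOrigins(lines));
// → [ 'http://ads.example/', 'http://cdn.example/' ]
```

Every wording change in the browser silently breaks a parser like this, which is one of the motivations for a structured, machine-readable issue format.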

The solution: structured and actionable issue reporting

To find a better solution to the problems we discovered, we first started thinking about the requirements and collected them in a Design Doc.

Our goal is to present issues in a way that clearly explains the problem, and how to fix it. From our design process we realised that each issue should contain the following four parts:

  • Title
  • Description
  • Links to affected resources within DevTools
  • And a link to further guidance

The title should be short and precise to help developers understand the core problem; often, it already hints at the fix. For example, a cookie issue now simply reads:

Mark cross-site cookies as Secure to allow setting them in cross-site contexts

Every issue contains more detailed information in a description, which explains what happened, gives actionable advice on how to fix it, and links to other panels inside DevTools to understand the problem in context. We also provide links to in-depth articles on web.dev to enable web developers to learn about the topic in greater detail.

An important part of each issue is the affected resources section, which links to other parts of DevTools and makes it easy to investigate further. For the cookie issue example, there should be a list of network requests that triggered the issue, and clicking on the request directly takes you to the Network panel. We hope that this is not only convenient, but also reinforces which panels and tools inside DevTools can be used to debug a certain kind of issue.

Thinking long-term about how developers will interact with the Issues tab, we imagine the following evolution:

  • When encountering a particular issue for the first time, a web developer would read the article to understand the issue in-depth.
  • When encountering the issue the second time, we hope that the issue description would be enough to remind the developer of what the issue was about, and allow them to immediately investigate and take action to resolve it.
  • After encountering an issue a few times, we hope that the issue title alone is enough for a developer to recognize the type of issue.

Another important aspect we wanted to improve was aggregation. For example, if the same cookie caused the same problem multiple times, we wanted to report the cookie only once. Besides reducing the number of messages considerably, this often helps identify the root cause of an issue more quickly.
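
The aggregation idea itself is simple; a minimal sketch (the keying scheme and issue shape here are illustrative, not the actual DevTools implementation) groups raw issue events by what caused them:

```javascript
// Group raw issue events so that, e.g., the same cookie triggering the
// same problem many times is reported once, with a count.
function aggregateIssues(issues) {
  const groups = new Map();
  for (const issue of issues) {
    const key = `${issue.code}|${issue.cookieName}`;  // illustrative key
    const group = groups.get(key) ?? {...issue, count: 0};
    group.count += 1;
    groups.set(key, group);
  }
  return [...groups.values()];
}

const raw = [
  {code: 'SameSiteCookieIssue', cookieName: 'session'},
  {code: 'SameSiteCookieIssue', cookieName: 'session'},
  {code: 'SameSiteCookieIssue', cookieName: 'tracker'},
];
console.log(aggregateIssues(raw).length); // → 2 aggregated issues
```

Three raw events collapse into two entries: the `session` cookie is reported once with a count of 2.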

Aggregated issues

The implementation

With those requirements in mind, the team started to look into how to implement the new feature. Projects for Chrome DevTools usually span three different areas: the Chromium back-end, the Chrome DevTools Protocol (CDP), and the DevTools frontend.

The implementation accordingly comprised three tasks:

  • Inside Chromium, we had to identify the components that have the information we want to surface and make that information accessible to the DevTools Protocol without compromising speed or security.
  • We then needed to design additions to the Chrome DevTools Protocol (CDP) to define the API that exposes the information to clients, such as the DevTools frontend.
  • Finally, we needed to implement a component in DevTools frontend that requests the information from the browser via CDP and displays it in an appropriate UI such that developers can easily interpret and interact with the information.

For the browser side, we first looked into how console messages were handled, because their behavior is very similar to what we had in mind for issues. CodeSearch is usually a good starting point for explorations like these. It allows you to search and explore the whole source code of the Chromium project online. That way, we learned how console messages are implemented and could build a parallel but more structured mechanism around the requirements we had collected for issues.

The work here is especially challenging because of all the security implications we always have to keep in mind. The Chromium project goes to great lengths to separate things into different processes and to have them communicate only through controlled channels, to prevent information leaks. Issues may contain sensitive information, so we have to take care not to send that information to a part of the browser that shouldn't know about it.

In DevTools frontend

DevTools itself is a web application written in JavaScript and CSS. It’s a lot like many other web applications - except that it’s been around for more than 10 years. And of course its back-end is basically a direct communication channel to the browser: the Chrome DevTools Protocol.

For the Issues tab, we first thought about user stories and what developers would have to do to resolve an issue. Our ideas mostly evolved around having the Issues tab as a central starting point for investigations that linked people to the panels that show more detailed information. We decided to put the Issues tab with the other tabs at the bottom of DevTools so it can stay open while a developer interacts with another DevTools component, such as the Network or the Application panel.

With that in mind, our UX designer understood what we were aiming at, and prototyped the following initial proposals:

Prototypes

After a lot of discussion around the best solution, we started implementing the design and iterating on our decisions to gradually arrive at what the Issues tab looks like today.

Another very important factor was the discoverability of the Issues tab. In the past, many great DevTools features were not discoverable unless the developer knew specifically what to look for. For the Issues tab, we decided to highlight issues in multiple different areas to make sure developers wouldn't miss important ones.

We decided to add a notification to the console panel, because certain console messages are now removed in favor of issues. We also added an icon to the warnings and errors counter in the top right of the DevTools window.

Issues notification

Finally, the Issues tab not only links to other DevTools panels, but resources that are related to an issue also link back to the Issues tab.

Related issues

In the protocol

The communication between the frontend and the backend works over a protocol called the Chrome DevTools Protocol (CDP). The CDP can be thought of as the back-end of the web app that is Chrome DevTools. The CDP is subdivided into multiple domains, and every domain contains a number of commands and events.

For the Issues tab, we decided to add a new event to the Audits domain that triggers whenever a new issue is encountered. To make sure that we can also report issues that arise while DevTools is not yet open, the Audits domain stores the most recent issues and dispatches them as soon as DevTools connects. DevTools then collects all those issues and aggregates them.

The CDP also enables other protocol clients, such as Puppeteer, to receive and process issues. We hope the structured issue information sent over the CDP will enable and simplify integration into existing continuous integration infrastructure. This way, issues can be detected and fixed even before the page is deployed!
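
A protocol client subscribes to the Audits domain's `issueAdded` event and then enables the domain. The stub session below stands in for a real CDP connection (for example, a Puppeteer `CDPSession`, which exposes the same `on()`/`send()` surface), so only the event shape matters here:

```javascript
// Minimal stand-in for a CDP session; a real client talks to the browser
// over a live connection instead of local handlers.
class FakeCDPSession {
  constructor() { this.handlers = {}; }
  on(event, fn) { (this.handlers[event] ??= []).push(fn); }
  emit(event, params) { (this.handlers[event] ?? []).forEach(fn => fn(params)); }
  send(method, params) {
    // On a real session, 'Audits.enable' also replays issues that were
    // buffered while no client was attached.
  }
}

function collectIssueCodes(session) {
  const codes = [];
  session.on('Audits.issueAdded', ({issue}) => codes.push(issue.code));
  session.send('Audits.enable');
  return codes;  // live array, filled as events arrive
}

const session = new FakeCDPSession();
const codes = collectIssueCodes(session);
// Simulate the browser reporting an issue over the protocol:
session.emit('Audits.issueAdded', {issue: {code: 'SameSiteCookieIssue', details: {}}});
console.log(codes); // → [ 'SameSiteCookieIssue' ]
```

Because the issue arrives as structured data (a stable `code` plus `details`) rather than a prose string, a CI step can fail the build on specific issue types without any string parsing.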

Future

First of all, a lot more messages have to move from the console to the Issues tab. This will take some time, especially because we want to provide clear, actionable documentation for every new issue we add. We hope that in the future developers will go looking for issues in the Issues tab instead of the console!

Furthermore, we are thinking about how to integrate issues from sources other than the Chromium back-end into the Issues tab.

We are looking into ways to keep the Issues tab tidy and improve usability. Searching, filtering, and better aggregation are on our list for this year. To structure the increasing number of reported issues, we are in the process of introducing issue categories that would, for example, make it possible to only show issues about upcoming deprecations. We are also thinking about adding a snooze feature that a developer can use to hide issues temporarily.

To keep issues actionable, we want to make it easier to discover which part of a page triggered an issue. In particular, we are thinking about ways to distinguish and filter issues that are genuinely from your page (i.e. first-party) from issues that are triggered by resources you embed, but are not directly under your control (such as an ad network). As a first step, it will be possible to hide third-party cookie issues in Chrome 86.

If you have any suggestions to improve the Issues tab, let us know by filing a bug!

<<../../_shared/devtools-feedback.md>>

<<../../_shared/discover-devtools-blog.md>>
