Instant Loading Web Apps with An Application Shell Architecture


An application shell is the minimal HTML, CSS, and JavaScript powering a user interface. The application shell should:

  • load fast
  • be cached
  • dynamically display content

An application shell is the secret to reliably good performance. Think of your app’s shell like the bundle of code you’d publish to an app store if you were building a native app. It’s the load needed to get off the ground, but might not be the whole story. It keeps your UI local and pulls in content dynamically through an API.
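As a rough illustration, a minimal application shell document might look like the sketch below. All names and paths here are illustrative, not taken from any particular project:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>App Shell</title>
  <!-- Critical shell styles are inlined for a fast first paint -->
  <style>
    /* toolbar, navigation and content placeholder styles */
  </style>
</head>
<body>
  <header>My App</header>
  <nav><!-- static navigation chrome --></nav>
  <main id="content"><!-- content pulled in dynamically through an API --></main>
  <script src="/scripts/app.js" async></script>
</body>
</html>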

App Shell Separation of HTML, JS and CSS shell and the HTML Content

Background

Alex Russell’s Progressive Web Apps article describes how a web app can progressively change through use and user consent to provide a more native-app-like experience, complete with offline support, push notifications and the ability to be added to the home screen. It depends heavily on the functionality and performance benefits of service workers and their caching abilities. This allows you to focus on speed, giving your web apps the same instant loading and regular updates you’re used to seeing in native applications.

To take full advantage of these capabilities we need a new way of thinking about websites: the application shell architecture.

Let’s dive into how to structure your app using a service worker augmented application shell architecture. We’ll look at both client and server-side rendering and share an end-to-end sample you can try today.

To emphasize the point, the example below shows the first load of an app using this architecture. Notice the ‘App is ready for offline use’ toast at the bottom of the screen. If an update to the shell becomes available later, we can inform the user to refresh for the new version.

Image of service worker running in DevTools for the application shell

What are Service Workers, Again?

A service worker is a script that runs in the background, separate from your web page. It responds to events, including network requests made from pages it serves and push messages from your server. A service worker has an intentionally short lifetime. It wakes up when it gets an event and runs only as long as it needs to process it.

Service workers also have a limited set of APIs compared to JavaScript in a normal browsing context. This is standard for workers on the web. A service worker can’t access the DOM, but it can access things like the Cache API, and it can make network requests using the Fetch API. The IndexedDB API and postMessage() are also available for data persistence and messaging between the service worker and the pages it controls. Push events sent from your server can invoke the Notification API to increase user engagement.

A service worker can intercept network requests made from a page (which triggers a fetch event on the service worker) and return a response retrieved from the network, or retrieved from a local cache, or even constructed programmatically. Effectively, it’s a programmable proxy in the browser. The neat part is that, regardless of where the response comes from, it looks to the web page as though there was no service worker involvement.
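As a minimal sketch of that ‘programmable proxy’ idea, a cache-first fetch handler could look like the following (the handler and fallback behavior are illustrative, not a specific library’s implementation):

self.addEventListener('fetch', function(event) {
  event.respondWith(
    // Look for a cached response first; fall back to the network.
    caches.match(event.request).then(function(cachedResponse) {
      return cachedResponse || fetch(event.request);
    })
  );
});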

To learn more about service workers in depth, read an Introduction to Service Workers.

Performance Benefits

Service workers are powerful for offline caching but they also offer significant performance wins in the form of instant loading for repeat visits to your site or web app. You can cache your application shell so it works offline and populate its content using JavaScript.

On repeat visits, this allows you to get meaningful pixels on the screen without the network, even if your content eventually comes from there. Think of it as displaying toolbars and cards immediately, then loading the rest of your content progressively.

To test this architecture on real devices, we’ve run our application shell sample on WebPageTest.org and shown the results below.

Test 1: Testing on Cable with a Nexus 5 using Chrome Dev

The first view of the app has to fetch all the resources from the network and doesn’t achieve a meaningful paint until 1.2 seconds in. Thanks to service worker caching, our repeat visit achieves meaningful paint and fully finishes loading in 0.5 seconds.

Web Page Test Paint Diagram for Cable Connection

Test 2: Testing on 3G with a Nexus 5 using Chrome Dev

We can also test our sample with a slightly slower 3G connection. This time it takes 2.5 seconds on first visit for our first meaningful paint. It takes 7.1 seconds to fully load the page. With service worker caching, our repeat visit achieves meaningful paint and fully finishes loading in 0.8 seconds.

Web Page Test Paint Diagram for 3G Connection

Other views tell a similar story. Compare the 3 seconds it takes to achieve first meaningful paint in the application shell:

Paint timeline for first view from Web Page Test

to the 0.9 seconds it takes when the same page is loaded from our service worker cache. Over 2 seconds of time is saved for our end users.

Paint timeline for repeat view from Web Page Test

Similar and reliable performance wins are possible for your own applications using the application shell architecture.

Does Service Worker Require us to Rethink How We Structure Apps?

Service workers imply some subtle changes in application architecture. Rather than squashing all of your application into an HTML string, it can be beneficial to do things AJAX-style. This is where you have a shell (that is always cached and can always boot up without the network) and content that is refreshed regularly and managed separately.

The implications of this split are large. On the first visit you can render content on the server and install the service worker on the client. On subsequent visits you need only request data.

What about Progressive Enhancement?

While service worker isn’t currently supported by all browsers, the application shell architecture uses progressive enhancement to ensure everyone can access the content. For example, take our sample project.

Below you can see the full version rendered in Chrome, Firefox Nightly and Safari. On the very left you can see the Safari version where the content is rendered on the server without a service worker. On the right we see the Chrome and Firefox Nightly versions powered by service worker.

Image of Application Shell loaded in Safari, Chrome and Firefox

When Does it Make Sense to Use This Architecture?

The application shell architecture makes the most sense for apps and sites that are dynamic. If your site is small and static, you probably don’t need an application shell and can simply cache the whole site in a service worker oninstall step. Use the approach that makes the most sense for your project. A number of JavaScript frameworks already encourage splitting your application logic from the content, making this pattern more straightforward to apply.

Are There any Production Apps Using this Pattern Yet?

The application shell architecture is possible with just a few changes to your overall application’s UI and has worked well for large-scale sites such as Google’s I/O 2015 Progressive Web App and Google’s Inbox.

Image of Google Inbox loading. Illustrates Inbox using service worker.

Offline application shells are a major performance win and are also demonstrated well in Jake Archibald’s offline Wikipedia app and Flipkart Lite’s progressive web app.

Screenshots of Jake Archibald's Wikipedia Demo.

Explaining the Architecture

During the first load experience, your goal is to get meaningful content to the user’s screen as quickly as possible.

First Load and Loading Other Pages

Diagram of the First Load with the App Shell

In general the application shell architecture will:

  • Prioritize the initial load, but let service worker cache the application shell so repeat visits do not require the shell to be re-fetched from the network.

  • Lazy-load or background load everything else. One good option is to use read-through caching for dynamic content.

  • Use service worker tools such as sw-precache to reliably cache and update the service worker that manages your static content. (More about sw-precache later.)

To achieve this:

  • The server will send HTML content that the client can render, with far-future HTTP cache expiration headers to account for browsers without service worker support. It will serve filenames using hashes to enable both ‘versioning’ and easy updates later in the application lifecycle.

  • Page(s) will include inline CSS styles in a <style> tag within the document <head> to provide a fast first paint of the application shell. Each page will asynchronously load the JavaScript necessary for the current view. Because CSS cannot be loaded asynchronously by the parser, we can request the remaining styles using JavaScript instead, as script-driven loading is asynchronous rather than parser-driven and synchronous. We can also take advantage of requestAnimationFrame() to avoid cases where we might get a fast cache hit and end up with styles accidentally becoming part of the critical rendering path: requestAnimationFrame() forces the first frame to be painted before the styles are loaded (see the sketch after this list). Another option is to use projects such as Filament Group’s loadCSS to request CSS asynchronously using JavaScript.

  • The service worker will store a cached entry of the application shell so that on repeat visits the shell can be loaded entirely from the service worker cache, unless an update is available on the network.
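Here is a minimal sketch of the requestAnimationFrame() technique mentioned above; the stylesheet path is a placeholder:

// Append the remaining styles without blocking the first paint.
function loadStyles() {
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/styles/remainder.css';
  document.head.appendChild(link);
}

// Waiting for the first frame keeps the stylesheet request out of the
// critical rendering path, even on a fast cache hit.
requestAnimationFrame(loadStyles);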

App Shell for Content

A Practical Implementation

We’ve written a fully working sample using the application shell architecture, vanilla ES2015 JavaScript for the client, and Express.js for the server. There is of course nothing stopping you from using your own stack for either the client or the server portions (e.g. PHP, Ruby, Python).

Service Worker Lifecycle

For our application shell project, we use sw-precache which offers the following service worker lifecycle:

  • Install: Cache the application shell and other single-page app resources.
  • Activate: Clear out old caches.
  • Fetch: Serve the single-page web app for URLs, use the cache for assets and predefined partials, and use the network for other requests.

Server Bits

In this architecture, a server side component (in our case, written in Express) should be able to treat content and presentation separately. Content could be added to an HTML layout that results in a static render of the page, or it could be served separately and dynamically loaded.

Understandably, your server-side setup may drastically differ from the one we use for our demo app. This pattern of web apps is achievable by most server setups, though it does require some rearchitecting. We’ve found that the following model works quite well:

Diagram of the App Shell Architecture

  • Endpoints are defined for three parts of your application: the user-facing URLs (index/wildcard), the application shell (service worker) and your HTML partials.

  • Each endpoint has a controller that pulls in a Handlebars layout, which in turn can pull in Handlebars partials and views. Simply put, partials are chunks of HTML that are copied into the final page. Note: JavaScript frameworks that do more advanced data synchronization are often much easier to port to an application shell architecture. They tend to use data-binding and sync rather than partials.

  • The user is initially served a static page with content. This page registers a service worker, if it’s supported, which caches the application shell and everything it depends on (CSS, JS etc).

  • The app shell will then act as a single page web app, using JavaScript to XHR in the content for a specific URL. The XHR calls are made to a /partials* endpoint which returns the small chunk of HTML, CSS and JS needed to display that content (a route sketch follows this list). Note: There are many ways to approach this and XHR is just one of them. Some applications will inline their data (maybe using JSON) for initial render and therefore aren’t “static” in the flattened HTML sense.

  • Browsers without service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options. The service worker aspect provides you with new opportunities for enhancing the performance of your Single-page Application style app using the cached application shell.
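To make the endpoint split concrete, here is a rough Express sketch in the spirit of the model above. The routes and view names are illustrative, not the demo’s actual code:

var express = require('express');
var app = express();

// User-facing URLs: a full, server-rendered page for first visits
// and for browsers without service worker support.
app.get('/guide/:id', function(req, res) {
  res.render('layout', { partial: 'guide', id: req.params.id });
});

// HTML partials: small chunks of content the app shell XHRs in
// when it handles navigation itself.
app.get('/partials/guide/:id', function(req, res) {
  res.render('partials/guide', { id: req.params.id });
});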

File Versioning

One question that arises is how to handle file versioning and updating. This is application specific and the options are:

  • Network first and use the cached version otherwise.

  • Network only and fail if offline.

  • Cache the old version and update later.

For the application shell itself, a cache-first approach should be taken for your service worker setup. If you aren’t caching the application shell, you haven’t properly adopted the architecture.

Note: The application shell sample does not (at the time of writing) use file versioning for the assets referenced in the static render, often used for cache busting. We hope to add this in the near future. The service worker is otherwise versioned by sw-precache (covered in the Tooling section).

Tooling

We maintain a number of different service worker helper libraries that make the process of precaching your application’s shell or handling common caching patterns easier to set up.

Screenshot of the Service Worker Library Site on Web Fundamentals

Use sw-precache for Your Application Shell

Using sw-precache to cache the application shell should handle the concerns around file revisions, the install/activate questions, and the fetch scenario for the app shell. Drop sw-precache into your application’s build process and use configurable wildcards to pick up your static resources. Rather than manually hand-crafting your service worker script, let sw-precache generate one that manages your cache in a safe and efficient way, using a cache-first fetch handler.

Initial visits to your app trigger precaching of the complete set of needed resources. This is similar to the experience of installing a native app from an app store. When users return to your app, only updated resources are downloaded. In our demo, we inform users when a new shell is available with the message, “App updated. Refresh for the new version.” This pattern is a low-friction way of letting users know they can refresh for the latest version.
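As an illustration, generating the service worker from a Node build script might look something like this; the file globs are placeholders for your own shell assets:

var swPrecache = require('sw-precache');

swPrecache.write('app/service-worker.js', {
  // Everything matched here is precached and kept up to date, with
  // file revisions tracked automatically.
  staticFileGlobs: [
    'app/index.html',
    'app/css/**/*.css',
    'app/js/**/*.js',
    'app/images/**/*.{png,svg}'
  ],
  stripPrefix: 'app/'
}, function(error) {
  if (error) {
    console.error('Unable to generate service worker:', error);
  }
});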

Use sw-toolbox for Runtime Caching

Use sw-toolbox for runtime caching with varying strategies depending on the resource, as sketched after this list:

  • cacheFirst for images, along with a dedicated named cache that has a custom expiration policy of N maxEntries.

  • networkFirst or fastest for API requests, depending on the desired content freshness. Fastest might be fine, but if there’s a specific API feed that’s updated frequently, use networkFirst.
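A rough sketch of what those strategies might look like with sw-toolbox; the route patterns, cache name and limits are illustrative:

importScripts('sw-toolbox.js');

// Images: cache-first, with a dedicated named cache and an entry limit.
toolbox.router.get('/images/(.*)', toolbox.cacheFirst, {
  cache: { name: 'images', maxEntries: 50 }
});

// API requests: network-first, falling back to the cache when offline.
toolbox.router.get('/api/(.*)', toolbox.networkFirst, {
  networkTimeoutSeconds: 3
});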

Conclusion

The application shell architecture comes with several benefits, but only makes sense for some classes of applications. The model is still young and it will be worth evaluating the effort and overall performance benefits of this architecture.

In our experiments, we took advantage of template sharing between the client and server to minimise the work of building two application layers. This ensures progressive enhancement is still a first-class citizen.

If you’re already considering using service workers in your app, take a look at the architecture and evaluate if it makes sense for your own projects.

With thanks to our reviewers: Jeff Posnick, Paul Lewis, Alex Russell, Seth Thompson, Rob Dodson, Taylor Savage and Joe Medley.


Chrome 47 WebRTC: media recording, secure origins & proxy handling


Chrome 47 includes several significant WebRTC enhancements and updates.

Record video from your web apps

The MediaStreamRecorder API has long been the top chromium.org request, with over 2500 stars. Media recording has now been added to Chrome behind the experimental Web Platform features flag — though it’s desktop only for the moment. This allows you to record and play back or download video. There is a simple demo on the WebRTC samples repo and you can find out more from the discuss-webrtc announcement. A sample Chrome App for recording video from screen capture is available at github.com/niklasenbom/RecordingApp. These are brand-new implementations and there may still be bugs to iron out: please file issues on the repos if you encounter problems.

Screenshot of MediaRecorder demo on the WebRTC GitHub samples repo

Audio output device selection

MediaDevices.enumerateDevices() has been released. More details are available from Chromium issue 504280. You can now enumerate audio output devices in addition to the audio input and video input devices that MediaStreamTrack.getSources() already provides. You can find out more about how to use it in this update.
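For example, listing the newly enumerable audio output devices might look like this:

navigator.mediaDevices.enumerateDevices().then(function(devices) {
  devices.forEach(function(device) {
    if (device.kind === 'audiooutput') {
      console.log('Audio output:', device.label, device.deviceId);
    }
  });
}).catch(function(error) {
  console.log('enumerateDevices() failed:', error);
});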

Device support on Windows

Default communications device support on Windows has now been added. This means that when enumerating audio devices on Windows, there will be an additional entry for the communications device whose ID will be ‘communications’.

Device IDs for the default audio device (and the communications device on Windows) will no longer be hashed (Issue 535980). Instead, two reserved IDs, ‘default’ and ‘communications’, are supported and are the same across all security origins. Device labels will be translated to the browser locale, so developers should not expect labels to have a predetermined value.

Video rendering accuracy has been improved by propagating the capture timestamp all the way to the rendering algorithm, where the right vsync can be chosen based on it. On Windows, the capture timestamp is also more accurate in Chrome 47.

Proxy handling

Chrome 47 adds a new preference to force WebRTC traffic to be sent through a local proxy server, if one is configured, which is important for some users browsing via a VPN. This means that the WebRTC application will only see the proxy IP address. Be aware that this will hurt application performance, and won’t work at all unless the application supports TURN/TCP or ICE-TCP. Look for a new version of the WebRTC Network Limiter Extension soon to provide a UI for this preference. There’s more information about IP address ‘leakage’ in What’s Next for WebRTC.

WebRTC Network Limiter Chrome extension

…and more

Data channel throughput has been greatly improved for high latency connections.

We will gradually roll out support for DTLS 1.2 in the Chrome 47 timeframe.

Though neither VP9 nor H.264 are supported in this release, work on these continues, and we hope to implement support for VP9 and an initial version of H.264 (behind a flag) in Chrome 48.

Public Service Announcements

  • Starting with Chrome 47, getUserMedia() requests are only allowed from secure origins: HTTPS or localhost.
  • RTP data channel support has been removed. Any remaining applications still using RTP data channels should use the standard data channels instead.

As with all releases, we encourage developers to try Chrome on the Canary, Dev, and Beta channels and report any issues found. The help we receive is invaluable. For pointers on how to file a good bug report, please take a look at the WebRTC bug page.


DevTools Digest (CDS Edition): A glimpse into the future + RAIL Profiling


Learn how DevTools is going mobile first with a new, streamlined Device Mode that’s always on. Use the color buttons to quickly add colors to your selectors and find out what’s coming to DevTools soon.

A glimpse into the future of authoring

We’re just coming back from the Chrome Dev Summit, where I showed what working with DevTools looks like today and in the future. I usually don’t mention features that are still experiments or heavy works in progress in this digest, but I’m making an exception this time. If you don’t have time to watch the whole talk, here’s the gist:

Device Mode v2

We’re still heavily working on this bold new iteration of Device Mode, but wanted to give everyone an opportunity to try it out, so we’ve enabled it in Canary today. With these changes, we are pushing DevTools into a mobile-first future where mobile development is the default and desktop development is the “add-on”. Expect more iteration over the next few weeks, with an extended blog post when we’re done.

Powerful Animation Inspection

The work-in-progress Animation Inspection gives you a full, detailed picture of what’s happening in anything moving. Instead of showing you a transition on one element at a time, we added heuristics that group complex animations, so you can focus on everything you’re seeing. Have a look at the video. We’ll offer a bigger, standalone blog post when we’re fully launched.

Layout Mode (Sneak Peek)

Not quite ready for prime time but very promising is the new Layout Mode, a feature I can’t wait to see fully built out. Layout Mode adds WYSIWYG editing capabilities to DevTools for any element on the page. So far, the height, width, paddings and margins can be edited in context. It’s going to take a little longer until we’re ready to let you try it, but we’ll keep you updated.

Performance profiling with the updated Timeline panel

As part of a bigger push to introduce the new RAIL performance model, there have been hundreds of smaller and bigger changes to the Timeline panel, which together transform and improve the performance profiling story quite a bit. So instead of showing every individual change in isolation, our own Paul Irish showed how to properly debug the performance of a site, in this case the mobile site of Hotel Tonight, live on stage. Among the newly announced features are the load and performance film strips, the included network waterfall, the tree-view summary, and the ability to view performance costs by domain and file.

Easily add foreground and background colors to any element

Whenever you wanted to add a background-color or color property to an element, you couldn’t just open the color picker. Instead, most of you typed in something like “background: red;” so the color picker icon appeared, then chose the actual color you wanted.

We thought we could simplify this. We added two nifty buttons that appear when hovering over the bottom right corner of a selector, allowing you to add a color and bring up the picker with a single click:

The Best of the Rest

  • We’ve wasted a lot of precious real estate in the Styles panel by showing generic media types. We now hide them before your selectors unless they’re unusual.
  • You can now long-hover over a CSS selector in the Styles panel to see how many elements on the page it applies to.
  • Didn’t give up on printing yet? Print media emulation is still around to see what your page will look like when printed – we just moved it to the Rendering Settings.
  • When choosing an element to inspect, we now auto-expand and close the relevant DOM subtree. Hard to explain; seeing is believing.

As always, let us know what you think via Twitter or the comments below, and submit bugs to crbug.com/new.

Until next month!
Paul Bakaus & the DevTools team

Introducing Background Sync


Background sync is a new web API that lets you defer actions until the user has stable connectivity. This is useful for ensuring that whatever the user wants to send, is actually sent.

The problem

The internet is a great place to waste time. Without wasting time on the internet, we wouldn’t know cats dislike flowers, chameleons love bubbles, and “You do What They Told Ya” as sung by “Rage Against the Machine” sounds like the Japanese for, “Break the chicken nugget, daddy”.

But sometimes, just sometimes, we’re not looking to waste time. The desired user experience is more like:

  1. Phone out of pocket.
  2. Achieve minor goal.
  3. Phone back in pocket.
  4. Resume life.

Unfortunately this experience is frequently broken by poor connectivity. We’ve all been there. You’re staring at a white screen or a spinner, and you know you should just give up and get on with your life, but you give it another 10 seconds just in case. After that 10 seconds? Nothing. But why give up now? You’ve invested time already, and walking away with nothing would be a waste, so you carry on waiting. By this point you want to give up, but you know the second you do is the second before everything would have loaded, if only you’d waited.

Service workers solve the page loading part by letting you serve content from a cache. But what about when the page needs to send something to the server?

At the moment, if the user hits “send” on a message they have to stare at a spinner until it completes. If they try to navigate away or close the tab, we use onbeforeunload to display a message like, “Nope, I need you to stare at this spinner some more. Sorry”. If the user has no connection we tell the user “Sorry, you must come back later and try again”.

This is rubbish. Background sync lets you do better.

The solution

The following video shows Emojoy, a simple emoji-only chat demo… thing. It’s a progressive web app. It works offline-first. It uses push messages and notifications, and it uses background sync.

If the user tries to send a message when they have zero connectivity, then, thankfully, the message is sent in the background once they get connectivity.

Background sync hasn’t hit the main release of Chrome yet, so if you want to try this you’ll need either Chrome Dev for Android, or Chrome Canary for desktop. You’ll also need to enable chrome://flags/#enable-experimental-web-platform-features and restart the browser. Then:

  1. Open Emojoy.
  2. Go offline (either using airplane-mode or visit your local Faraday cage).
  3. Type a message.
  4. Go back to your home screen (optionally close the tab/browser).
  5. Go online.
  6. Message sends in the background!

Being able to send in the background like this also yields a perceived performance improvement. The app doesn’t need to make such a big deal about the message sending, so it can add the message to the output straight away.

How to request a background sync

In true extensible web style, this is a low level feature that gives you the freedom to do what you need. You ask for an event to be fired when the user has connectivity, which is immediate if the user already has connectivity. Then, you listen for that event and do whatever you need to do.

Like push messaging, it uses a service worker as the event target, which enables it to work when the page isn’t open. To begin, register for a sync from a page:

// Register your service worker:
navigator.serviceWorker.register('/sw.js');

// Then later, request a one-off sync:
navigator.serviceWorker.ready.then(function(swRegistration) {
  return swRegistration.sync.register('myFirstSync');
});

Then listen for the event in /sw.js:

self.addEventListener('sync', function(event) {
  if (event.tag == 'myFirstSync') {
    event.waitUntil(doSomeStuff());
  }
});

And that’s it! In the above, doSomeStuff() should return a promise indicating the success/failure of whatever it’s trying to do. If it fulfills, the sync is complete. If it fails, another sync will be scheduled to retry. Retry syncs also wait for connectivity, and employ an exponential back-off.

The tag name of the sync (‘myFirstSync’ in the above example) should be unique for a given sync. If you register for a sync using the same tag as a pending sync, it coalesces with the existing sync. That means you can register for a ‘clear-outbox’ sync every time the user sends a message, but if they send 5 messages while offline, you’ll only get one sync when they come back online. If you want 5 separate sync events, just use unique tags!

Here’s a simple demo that does the bare minimum; it uses the sync event to show a notification.

What could I use background sync for?

Ideally, you’d use it to schedule any data sending that you care about beyond the life of the page. Chat messages, emails, document updates, settings changes, photo uploads… anything that you want to reach the server even if the user navigates away or closes the tab. The page could store these in an “outbox” store in IndexedDB, and the service worker would retrieve them and send them.
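A sketch of that outbox pattern in the service worker is shown below. Here getOutbox() and removeFromOutbox() are hypothetical IndexedDB helpers, and /api/send is a placeholder endpoint:

self.addEventListener('sync', function(event) {
  if (event.tag === 'clear-outbox') {
    event.waitUntil(
      getOutbox().then(function(messages) {
        // Try to send every queued message; a rejection causes the
        // sync to be retried later with exponential back-off.
        return Promise.all(messages.map(function(message) {
          return fetch('/api/send', {
            method: 'POST',
            body: JSON.stringify(message)
          }).then(function(response) {
            if (!response.ok) {
              // Leave the message queued so the retry picks it up.
              throw new Error('Server error ' + response.status);
            }
            return removeFromOutbox(message.id);
          });
        }));
      })
    );
  }
});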

Although, you could also use it to fetch small bits of data…

Another demo!

This is the offline Wikipedia demo I created for Supercharging Page Load. I’ve since added some background sync magic to it.

Try this out yourself. As before make sure you’re in Chrome Dev for Android, or Chrome Canary for desktop with chrome://flags/#enable-experimental-web-platform-features.

  1. Go to any article, perhaps Chrome.
  2. Go offline (either using airplane-mode or join a terrible mobile provider like I have).
  3. Click a link to another article.
  4. You should be told the page failed to load (this will also appear if the page just takes a while to load).
  5. Agree to notifications.
  6. Close the browser.
  7. Go online.
  8. You get notified when the article is downloaded, cached, and ready to view!

Using this pattern, the user can put their phone in their pocket and get on with their life, knowing the phone will alert them when it’s fetched what they wanted.

Permissions

The demos I’ve shown use web notifications, which require permission, but background sync itself does not.

Sync events will often complete while the user has a page open to the site, so requiring user permission would be a poor experience. Instead, we’re limiting when syncs can be registered and triggered to prevent abuse. E.g.:

  • You can only register for a sync event when the user has a window open to the site.
  • The event execution time is capped, so you can’t use them to ping a server every x seconds, mine bitcoins or whatever.

Of course, these restrictions may loosen/tighten based on real-world usage.

Progressive Enhancement

It’ll be a while before all browsers support background sync, especially as Safari and Edge don’t yet support service workers. But progressive enhancement helps here:

if ('serviceWorker' in navigator && 'SyncManager' in window) {
  navigator.serviceWorker.ready.then(function(reg) {
    return reg.sync.register('tag-name');
  }).catch(function() {
    // system was unable to register for a sync,
    // this could be an OS-level restriction
    postDataFromThePage();
  });
} else {
  // serviceworker/sync not supported
  postDataFromThePage();
}

If service workers or background sync aren’t available, just post the content from the page as you’d do today.

Note that it’s worth using background sync even if the user appears to have good connectivity, as it protects you against navigations and tab closures during data send.

The future

We’re aiming to ship background sync to a stable version of Chrome in the first half of 2016. But we’re also working on a variant, “periodic background sync”. This will allow you to request a “periodicsync” event restricted by time interval, battery state and network state. This would require user permission, of course, but it will also be down to the will of the browser for when and how often these events fire. E.g., a news site could request to sync every hour, but the browser may know you only read that site at 07:00, so the sync would fire daily at 06:50. This idea is a little further off than one-off syncing, but it’s coming.

Bit by bit we’re bringing successful patterns from Android/iOS onto the web, while still retaining what makes the web great!

Security Panel debuts in Chrome DevTools


The Chrome Security team has been hard at work to realize a future without HTTP, a future where you and your users can be reasonably sure that whatever data you’re sending to the web stays between you and the site you’re looking at. And to make it even easier to jump ship and join the glorious HTTPS future, we’ve made security a first-class citizen in DevTools.

The new Security Panel

The new Security panel introduced in Chrome 48 makes it a lot easier to see any issues you have with certificates and mixed content. You can head to it directly in DevTools or by clicking on the URL bar’s lock icon, then the “Details” link.

Addressing the problems with “Connection Info”

Our previous solution for those of you who want data about page security has been to click the little lock icon next to the URL, then parse the info available on the “Connection” tab.

Unfortunately, this tab had several problems:

  • It’s too complicated for most users
  • …but too basic for most developers
  • and makes it unclear what causes a lock icon “downgrade”

Overview: Explain lock icon and surface mixed content

Overview tab

The lock icon represents the security state of the page, so knowing when and why it appears is extremely important. The overview screen in the new security panel explains the important parts that contribute to a secure page:

  • Identity (certificate)
  • Connection (protocol, cipher suite)
  • Subresources

You’ll now know at a glance why your site does or does not get the little green badge of awesomeness.

Have mixed content appear out of nowhere? No worries. We show it directly on the overview, and a click brings you to a filtered view of the Network Panel, so you can quickly look at the offending requests:

Mixed Content

Origin View: Connection Type and Certificate Details

Connection tab

If you need information about a specific TLS connection, the Origin view will help. Reload the page and you’ll see every individual origin for all resources appear in the left-hand navigation.

From here, you can find out everything about the certificate used and the connection type. In addition, it gives you the handy ability to drill down further to inspect all resources coming from that origin via the Network Panel.


Give the new Security panel a try and let us know what you think on Twitter or via a bug/feature ticket!

Google Cast for Chrome on Android


Imagine being able to use a web app from your phone to present a slide deck to a conference projector — or share images, play games or watch videos on a TV screen — using the mobile web app as a controller.

The latest release of Chrome on Android allows sites to present to Google Cast devices using the Cast Web SDK. This means you can now create Cast sender apps using the Web SDK with Chrome on Android or iOS (or on desktop with the extension) as well as creating apps that use the native Cast SDK for Android and iOS. (Previously, a Google Cast sender application needed the Google Cast Chrome extension, so on Android it was only possible to interact with Cast devices from native apps.)

Below is a brief introduction to building a Cast sender app using the Web SDK. More comprehensive information is available from the Chrome Sender App Development Guide.

All pages using Cast must include the Cast library:

<script type="text/javascript"
  src="https://www.gstatic.com/cv/js/sender/v1/cast_sender.js"></script>

Add a callback to handle API availability and initialize the Cast session (make sure to add the handler before the API is loaded!):

window['__onGCastApiAvailable'] = function(isLoaded, error) {
  if (isLoaded) {
    initializeCastApi();
  } else {
    console.log(error);
  }
};

function initializeCastApi() {
  var sessionRequest = new chrome.cast.SessionRequest(applicationID);
  var apiConfig = new chrome.cast.ApiConfig(sessionRequest,
      sessionListener, receiverListener);
  chrome.cast.initialize(apiConfig, onInitSuccess, onError);
}

If you’re using the default Styled Media Receiver application and not a roll-your-own, registered Custom Receiver application, you can create a SessionRequest like this:

var sessionRequest = new chrome.cast.SessionRequest(
    chrome.cast.media.DEFAULT_MEDIA_RECEIVER_APP_ID);

The receiverListener callback above is executed when one or more devices become available:

function receiverListener(e) {
  if (e === chrome.cast.ReceiverAvailability.AVAILABLE) {
    // update UI
  }
}

Launch a Cast session when your user clicks the Cast icon, as mandated by the User Experience Guidelines:

chrome.cast.requestSession(onRequestSessionSuccess,
    onRequestSessionError);

function onRequestSessionSuccess(e) {
  session = e;
}

The user will be presented with a device picker:

Cast device selection dialog

The route details dialog is shown when the page is already connected and calls requestSession():

Cast route details dialog

Once you have a Cast session, you can load media for the selected Cast device, and add a listener for media playback events:

var mediaInfo = new chrome.cast.media.MediaInfo(mediaURL);
var request = new chrome.cast.media.LoadRequest(mediaInfo);
session.loadMedia(request,
    onMediaDiscovered.bind(this, 'loadMedia'),
    onMediaError);

function onMediaDiscovered(how, media) {
  currentMedia = media;
  media.addUpdateListener(onMediaStatusUpdate);
}

The currentMedia variable here is a chrome.cast.media.Media object, which can be used for controlling playback:

function playMedia() {
  currentMedia.play(null, success, error)
}

// ...

A play/pause notification is shown when media is playing:

Cast play/pause notification

If no media is playing, the notification only has a stop button, to stop casting:

Cast stop notification

The sessionListener callback for chrome.cast.ApiConfig() (see above) enables your app to join or manage an existing Cast session:

function sessionListener(e) {
  session = e;
  if (session.media.length !== 0) {
    onMediaDiscovered('onRequestSessionSuccess', session.media[0]);
  }
}

If Chrome on Android allows casting media from your website but you want to disable this feature so the default casting UI doesn’t interfere with your own, use the disableRemotePlayback attribute, available in Chrome 49 and above:

<video disableRemotePlayback src="...">

Sender and receiver devices

The Cast Web SDK guide has links to sample apps, and information about Cast features such as session management, text tracks (for subtitles and captions) and status updates.

At present, you can only present to a Cast Receiver Application using the Cast Web SDK, but there is work underway to enable the Presentation API to be used without the Cast SDK (on desktop and Android) to present any web page to a Cast device, without registration with Google. Unlike the Chrome-only Cast SDK, using the standard API will allow the page to work with other user agents and devices that support the API.

The Presentation API, along with the Remote Playback API, is part of the Second Screen Working Group effort to enable web pages to use second screens to display web content.

These APIs take advantage of the range of devices coming online — including connected displays that run a user agent — to enable a rich variety of applications with a ‘control’ device and a ‘display’ device.

We’ll keep you posted on progress with implementation.

In the meantime, please let us know if you find bugs or have feature requests: crbug.com/new.


Easy URL manipulation with URLSearchParams


The URLSearchParams API provides a consistent interface to the bits and pieces of the URL and allows trivial manipulation of the query string (that stuff after "?").

Traditionally, developers use regexes and string splitting to pull out query parameters from the URL. If we’re all honest with ourselves, that’s no fun. It can be tedious and error prone to get right. One of my dark secrets is that I’ve reused the same get|set|removeURLParameter helper methods in several large Google.com apps, including Google Santa Tracker and the Google I/O 2015 web app.

It’s time for a proper API that does this stuff for us!

URLSearchParams API

Try the demo

Chrome 49 implements URLSearchParams from the URL spec, an API which is useful for fiddling around with URL query parameters. I think of URLSearchParams as the same convenience for URLs that FormData was for forms.

So what can you do with it? Given a URL string, you can easily extract parameter values:

// Can also construct from another URLSearchParams object
let params = new URLSearchParams('q=search+string&version=1&person=Eric');

params.get('q') === "search string"
params.get('version') === "1"
Array.from(params).length === 3

Note: If there are several values for a param, get returns the first value.

Iterate over parameters:

for (let p of params) {
  console.log(p);
}

set a parameter value:

params.set('version', 2);

Note: If there are several values with the same name, set removes all of them and sets the parameter to the new value.

append another value for an existing parameter:

params.append('person', 'Tim');
params.getAll('person') === ['Eric', 'Tim']

delete a parameter(s):

params.delete('person');

Note: this example removes all person query parameters from the URL, not just the first occurrence.

Working with URLs

Most of the time, you’ll probably be working with full URLs or modifying your app’s URL. The URL constructor can be particularly handy for these cases:

let url = new URL('https://example.com?foo=1&bar=2');
let params = new URLSearchParams(url.search.slice(1));
params.set('baz', 3);

params.has('baz') === true
params.toString() === 'foo=1&bar=2&baz=3'

To make actual changes to the URL, you can grab parameters, update their values, then use history.replaceState to update the URL.

// URL: https://example.com?version=1.0
let params = new URLSearchParams(location.search.slice(1));
params.set('version', 2.0);

window.history.replaceState({}, '', `${location.pathname}?${params}`);
// URL: https://example.com?version=2.0

Here, I’ve used ES6 template strings to reconstruct an updated URL from the app’s existing URL path and the modified params.

Integration with other places URLs are used

By default, sending FormData in a fetch() API request creates a multipart body. If you need it, URLSearchParams provides an alternative mechanism to POST data that’s urlencoded rather than mime multipart.

let params = new URLSearchParams();
params.append('api_key', '1234567890');

fetch('https://example.com/api', {
  method: 'POST',
  body: params
}).then(...)

Although it’s not yet implemented in Chrome, URLSearchParams also integrates with the URL constructor and <a> tags. Both support our new buddy by providing a read-only property, .searchParams, for accessing query params:

// Note: .searchParams on URL is not implemented in Chrome 49.

let url = new URL(location);
let foo = url.searchParams.get('foo') || 'somedefault';

Links also get a .searchParams property:

// Note: .searchParams on links is not implemented in Chrome 49.

let a = document.createElement('a');
a.href = 'https://example.com?filter=api';

// a.searchParams.get('filter') === 'api';

Feature detection & browser support

Currently, Chrome 49, Firefox 44, and Opera 36 support URLSearchParams.

if ('URLSearchParams' in window) {
  // Browser supports URLSearchParams
}

For polyfills, I recommend the one at github.com/WebReflection/url-search-params.

Demo

Try out the sample!

To see URLSearchParams in a real-world app, check out Polymer’s material design Iconset Generator. I used it to set up the app’s initial state from a deep link. Pretty handy :)

High Resolution Timestamps for Events


The timeStamp property of the Event interface indicates the time at which a given event took place.

In versions of Chrome prior to 49, this timeStamp value was represented as a DOMTimeStamp, which was a whole number of milliseconds since the system epoch, much like the value returned by Date.now().

Starting with Chrome 49, timeStamp is a DOMHighResTimeStamp value. This value is still a number of milliseconds, but with microsecond resolution, meaning the value will include a decimal component. Additionally, instead of the value being relative to the epoch, the value is relative to the PerformanceTiming.navigationStart, i.e. the time at which the user navigated to the page.

The benefits of the additional timestamp accuracy can be seen in these examples.

Cross-browser & Legacy Considerations

If you have existing code that compares Event.timeStamp values from two events, you should not have to adjust your code on account of the shift to DOMHighResTimeStamp. Moreover, on browsers that support DOMHighResTimeStamp, your existing code will benefit from the increased microsecond accuracy, as well as the fact that the DOMHighResTimeStamp is guaranteed to increase monotonically, regardless of whether the system clock changes in the middle of your web page’s execution.

If, instead of comparing two Event.timeStamp values, your code needs to determine how long ago an event took place, the new DOMHighResTimeStamp value can be compared directly to performance.now(). And if you need to transform Event.timeStamp to an absolute number of milliseconds since the system epoch, you can get that value by adding a DOMHighResTimeStamp to performance.timing.navigationStart.

In both of those cases, DOMTimeStamp and DOMHighResTimeStamp behave differently, but you can simplify your cross-browser code by using this conversion function, courtesy of Majid Valipour. It takes an Event object as a parameter and returns a DOMHighResTimeStamp-like value, ready to be compared to performance.now() or added to performance.timing.navigationStart.
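As a sketch of the general idea behind such a conversion (an approximation of the approach, not the exact gist code): epoch-based DOMTimeStamp values are vastly larger than navigation-relative DOMHighResTimeStamp values, so the two can be told apart by comparing against navigationStart:

function getNavigationRelativeTimeStamp(event) {
  var navigationStart = performance.timing.navigationStart;
  // An epoch-based timeStamp (the old behavior) is always larger than
  // navigationStart; a navigation-relative one is always far smaller.
  return event.timeStamp > navigationStart ?
      event.timeStamp - navigationStart :
      event.timeStamp;
}

// Usage: how long ago did the event fire?
// var elapsed = performance.now() - getNavigationRelativeTimeStamp(event);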


Notification Actions in Chrome 48


Early in 2015 we introduced Push Messaging and Notifications into Chrome for Android and desktop. It was a great step forward for the web. Users could start to engage more deeply with experiences on the web even when the browser was closed.

While it’s great that you can send these messages, the only things a user could do with one were click it to open a page or dismiss it entirely.

If you look at the notifications provided natively to apps on mobile platforms such as iOS and Android, they each let the developer define contextual actions that the user can invoke and interact with. In Chrome 48 we have now added a similar ability to Web Notifications across Desktop and Chrome for Android.

The addition to the API is pretty simple. You just need to create an Array of actions and add them into the NotificationOptions object when you call showNotification from a ServiceWorker registration (either directly in the ServiceWorker or on a page via navigator.serviceWorker.ready).

Currently Chrome only supports two actions on each notification. Some platforms might be able to support more, and some may support fewer or none at all. You can determine what the platform supports by checking Notification.maxActions. In the following examples we assume the platform supports two actions.

self.registration.showNotification('New message from Alice', {
  actions: [
   {action: 'like', title: 'Like'},
   {action: 'reply', title: 'Reply'}]
});

This will create a simple notification with two buttons. Note: it is not possible to add icons to the actions directly (yet), but you can use Emoji and the extended Unicode character set to add more context to your notification buttons.

For example:

self.registration.showNotification("New message from Alice", {
  actions: [
   {action: 'like', title: '👍Like'},
   {action: 'reply', title: '⤻ Reply'}]
});

Now that you have created a notification and made it look 😻, the user may interact with it at some time in the future. Interactions with the notification currently (as of Chrome 48) all come through the notificationclick event registered in your service worker; they can be either a general click on the notification or a tap on one of the action buttons. Side note: in the future you will also be able to respond to the notificationclose event.

To understand which action the user took you need to inspect the action property on the event and then you have the choice of either opening a new page for the user to complete an action or to perform the task in the background.

self.addEventListener('notificationclick', function(event) {
  var messageId = event.notification.data;

  event.notification.close();

  if (event.action === 'like') {
    // Handle the action in the background without opening a window.
    silentlyLikeItem();
  }
  else if (event.action === 'reply') {
    clients.openWindow("/messages?reply=" + messageId);
  }
  else {
    // The user clicked the notification body rather than an action
    // button; fall back to opening the conversation.
    clients.openWindow("/messages?reply=" + messageId);
  }
}, false);

The interesting thing is that the actions don’t have to open up a new window; they can perform general application interactions without creating a user interface. For example, a user could “Like” or “Delete” a social media post, performing the action on the user’s local data and then synchronizing it with the cloud without opening a UI (although it is good practice to message the data change to any open windows so the UI can be updated, as sketched below). For an action that requires user interaction, you would open a window for the user to reply.
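A sketch of messaging any open windows from the service worker after a background action completes; the message shape here is made up for illustration:

function notifyOpenWindows(update) {
  return clients.matchAll({ type: 'window' }).then(function(windowClients) {
    windowClients.forEach(function(client) {
      // Each page listens via navigator.serviceWorker.onmessage
      // and updates its UI accordingly.
      client.postMessage({ type: 'item-liked', data: update });
    });
  });
}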

Because platforms will not all support the same number of actions, and in some cases will not support action buttons at all, you need to ensure you always provide a sensible fallback: the task you would expect the user to perform if they simply clicked the notification.

If you want to see this in action today, check out Peter Beverloo’s Notification Test Harness and read up on the Notifications specification or follow along with spec as it updates.

VP9 is now available in WebRTC


Two years ago Chrome enabled support for the VP9 codec. From Chrome 48 on desktop and Android, VP9 will be an optional video codec for video calls using WebRTC.

While VP9 uses the same basic blueprint as previous codecs, the WebM team has packed improvements into VP9 to get more quality out of each byte of video. For instance, the encoder prioritizes the sharpest image features, and the codec now uses asymmetric transforms to help keep even the most challenging scenes looking crisp and block-free.

With VP9, internet connections that are currently able to serve 720p without packet loss or delay will be able to support a 1080p video call at the same bandwidth. VP9 can also reduce data usage for users with poor connections or expensive data plans, requiring in best cases only 40% of the bitrate of VP8.

You can see how VP8 calls compare with VP9 in the screenshot below of recordings we made with the WebRTC encoder settings, showing 30% bitrate savings:

Screenshot of video showing VP8 and VP9 WebRTC calls side by side

The codec for a WebRTC call, along with other media settings such as bitrate, is negotiated between caller and callee by exchanging Session Description Protocol (SDP) metadata messages that describe the media capabilities of the client.

This handshaking process — exchanging media capabilities — is known as offer/answer. For example, a caller might send an offer (an SDP message) stating a preference for VP9, with VP8 as a fallback. If the answer confirms that the callee can handle VP9, the video call can proceed using VP9. If the callee responds with an answer that it can only use VP8, the call will proceed with VP8.

To see this in action, take a look at the code for the canonical WebRTC video chat application appr.tc.

Top tip: In appr.tc, you can press I to get information about call state including signaling and codec details:

Screenshot of appr.tc infobox, showing signaling and codec information

In appcontroller.js, VP9 is set as the preferred codec unless a vsc or vrc parameter is specified in the URL:

AppController.prototype.loadUrlParams_ = function() {
  // ...
  var DEFAULT_VIDEO_CODEC = 'VP9';
  // …
  this.loadingParams_.videoSendCodec = urlParams['vsc'];
  // ...
  this.loadingParams_.videoRecvCodec = urlParams['vrc'] || DEFAULT_VIDEO_CODEC;
}

In sdputils.js the custom codec value (if specified) is then used for the SDP metadata:

function maybePreferVideoSendCodec(sdp, params) {
  return maybePreferCodec(sdp, 'video', 'send', params.videoSendCodec);
}

function maybePreferVideoReceiveCodec(sdp, params) {
  return maybePreferCodec(sdp, 'video', 'receive', params.videoRecvCodec);
}

The maybePreferCodec() function used here sets values for the requested codec in the text of the SDP metadata. SDP is verbose and not designed to be human readable, but you can view the SDP used by appr.tc from the DevTools console once a call has been made. The important part with regard to codecs is the m line:

{
    "sdp": "v=0\r\no=- 9188830394109743399 2 IN IP4 127.0.0.1\r\ns … m=video ...",
    "type": "offer"
}

Using appr.tc with its default settings in a recent version of Chrome, you will see that VP9 is the first codec listed in the SDP m line — followed by VP8, which Chrome can also use. If you set VP8 as the preferred codec (via URL parameters in appr.tc, for example) VP8 will be listed first instead.


Record audio and video with MediaRecorder


Break out the champagne and doughnuts! The most starred Chrome feature EVER has now been implemented.

Imagine a ski-run recorder that synchronizes video with GeoLocation data, or a super-simple voice memo app, or a widget that enables you to record a video and upload it to YouTube — all without plugins.

The MediaRecorder API enables you to record audio and video from a web app. It’s available now in Firefox and in Chrome for Android and desktop.

Try it out here.

Screenshot of mediaRecorder demo on Android Nexus 5X

A word about support:

• To use MediaRecorder in Chrome 47 and 48, enable experimental Web Platform features from the chrome://flags page.

• Audio recording works in Firefox and in Chrome 49 and above; Chrome 47 and 48 only support video recording.

• In Chrome on Android you can save and download recordings made with MediaRecorder, but it’s not yet possible to view a recording in a video element via window.URL.createObjectURL(). See this bug.

The API is straightforward, which I’ll demonstrate using code from the WebRTC sample repo demo. Note that the API can only be used from secure origins: HTTPS or localhost.

First up, instantiate a MediaRecorder with a MediaStream. Optionally, use an options parameter to specify the desired output format:

var options = {mimeType: 'video/webm;codecs=vp9'};
mediaRecorder = new MediaRecorder(stream, options);

The MediaStream can be from:

  • A getUserMedia() call.
  • The receiving end of a WebRTC call.
  • A screen recording.
  • Web Audio, once this issue is implemented.

For options it’s possible to specify the MIME type and, in the future, audio and video bitrates.

MIME types have more or less specific values, combining container and codecs. For example:

  • audio/webm
  • video/webm
  • video/webm;codecs=vp8
  • video/webm;codecs=vp9

Use the static method MediaRecorder.isTypeSupported() to check if a MIME type is supported, for example when you instantiate MediaRecorder:

var options;
if (MediaRecorder.isTypeSupported('video/webm;codecs=vp9')) {
  options = {mimeType: 'video/webm;codecs=vp9'};
} else if (MediaRecorder.isTypeSupported('video/webm;codecs=vp8')) {
  options = {mimeType: 'video/webm;codecs=vp8'};
} else {
  // ...
}

The full list of MIME types supported by MediaRecorder in Chrome is available here.

Beware: Instantiation will fail if the browser doesn’t support the MIME type specified, so use MediaRecorder.isTypeSupported() or try/catch — or leave out the options argument if you’re happy with the browser default.

Next, add a data handler and call the start() method to begin recording:

var recordedChunks = [];

var options = {mimeType: 'video/webm;codecs=vp9'};
mediaRecorder = new MediaRecorder(stream, options);
mediaRecorder.ondataavailable = handleDataAvailable;
mediaRecorder.start();

function handleDataAvailable(event) {
  if (event.data.size > 0) {
    recordedChunks.push(event.data);
  } else {
    // ...
  }
}

This example adds a Blob to the recordedChunks array whenever data becomes available. The start() method can optionally be given a timeSlice argument that specifies the length of media to capture for each Blob.

When you’ve finished recording, tell the MediaRecorder:

mediaRecorder.stop();

Play the recorded Blobs in a video element by creating a ‘super-Blob’ from the array of recorded Blobs:

function play() {
  var superBuffer = new Blob(recordedChunks);
  videoElement.src =
    window.URL.createObjectURL(superBuffer);
}

Alternatively, you could upload to a server via XHR, or use an API like YouTube (see the experimental demo below).

Download can be achieved with some link hacking:

function download() {
  var blob = new Blob(recordedChunks, {
    type: 'video/webm'
  });
  var url = URL.createObjectURL(blob);
  var a = document.createElement('a');
  document.body.appendChild(a);
  a.style = 'display: none';
  a.href = url;
  a.download = 'test.webm';
  a.click();
  window.URL.revokeObjectURL(url);
}

Feedback on the APIs and demos

The ability to record audio and video without plugins is relatively new to web apps, so we particularly appreciate your feedback on the APIs.

We’d also like to know what usage scenarios are most important to you, and what features you would like us to prioritize. Comment on this article or track progress at crbug.com/262211.

Demos

Apps

  • Paul Lewis’s Voice Memos app now has MediaRecorder support, polyfilled for browsers that don’t support MediaRecorder audio.

Polyfills

  • Muaz Khan’s MediaStreamRecorder is a JavaScript library for recording audio and video, compatible with MediaRecorder.
  • Recorderjs enables recording from a Web Audio API node. You can see this in action in Paul Lewis’s Voice Memos app.

Browser support

  • Chrome 49 and above by default
  • Chrome desktop 47 and 48 with Experimental Web Platform features enabled from chrome://flags
  • Firefox from version 25
  • Edge: ‘Under Consideration’

Spec

w3c.github.io/mediacapture-record/MediaRecorder.html

API information

developer.mozilla.org/en/docs/Web/API/MediaRecorder_API

CSS Variables: Why Should You Care?

CSS variables, more accurately known as CSS custom properties, are landing in Chrome 49. They can be useful for reducing repetition in CSS, and also for powerful runtime effects like theme switching and potentially extending/polyfilling future CSS features.

CSS Clutter

When designing an application it’s a common practice to set aside a set of brand colors that will be reused to keep the look of the app consistent. Unfortunately, repeating these color values over and over again in your CSS is not only a chore, but also error prone. If, at some point, one of the colors needs to be changed, you could throw caution to the wind and “find-and-replace” all the things, but on a large enough project this could easily get dangerous.

In recent times many developers have turned to CSS preprocessors like SASS or LESS which solve this problem through the use of preprocessor variables. While these tools have boosted developer productivity immensely, the variables that they use suffer from a major drawback, which is that they’re static and can’t be changed at runtime. Adding the ability to change variables at runtime not only opens the door to things like dynamic application theming, but also has major ramifications for responsive design and the potential to polyfill future CSS features. With the release of Chrome 49, these abilities are now available in the form of CSS custom properties.

Custom properties in a nutshell

Custom properties add two new features to our CSS toolbox:

  • The ability for an author to assign arbitrary values to a property with an author-chosen name.
  • The var() function, which allows an author to use these values in other properties.

Here’s a quick example to demonstrate:

:root {
  --main-color: #06c;
}

#foo h1 {
  color: var(--main-color);
}

--main-color is an author defined custom property with a value of #06c. Note that all custom properties begin with two dashes.

The var() function retrieves and replaces itself with the custom property value, resulting in color: #06c. So long as the custom property is defined somewhere in your stylesheet, it will be available to the var() function.

The syntax may look a little strange at first. Many developers ask, “Why not just use $foo for variable names?” The approach was specifically chosen to be as flexible as possible and potentially allow for $foo macros in the future. For the backstory you can read this post from one of the spec authors, Tab Atkins.

Custom property syntax

The syntax for a custom property is straightforward.

--header-color: #06c;

Note that custom properties are case sensitive, so --header-color and --Header-Color are different custom properties. While they may seem simple at face value, the allowed syntax for custom properties is actually quite permissive. For example, the following is a valid custom property:

--foo: if(x > 5) this.width = 10;

While this would not be useful as a variable, as it would be invalid in any normal property, it could potentially be read and acted upon with JavaScript at runtime. This means custom properties have the potential to unlock all kinds of interesting techniques not currently possible with today’s CSS preprocessors. So if you’re thinking “yawn I have SASS so who cares…” then take a second look! These are not the variables you’re used to working with.

The cascade

Custom properties follow standard cascade rules, so you can define the same property at different levels of specificity:

/* CSS */
:root { --color: blue; }
div { --color: green; }
#alert { --color: red; }
* { color: var(--color); }
<!-- HTML -->
<p>I inherited blue from the root element!</p>
<div>I got green set directly on me!</div>
<div id="alert">
  While I got red set directly on me!
  <p>I’m red too, because of inheritance!</p>
</div>

This means you can leverage custom properties inside of media queries to aid with responsive design. One use case might be to expand the margins around your major sectioning elements as the screen size increases:

:root {
  --gutter: 4px;
}

section {
  margin: var(--gutter);
}

@media (min-width: 600px) {
  :root {
    --gutter: 16px;
  }
}

It’s important to call out that the above snippet of code is not possible using today’s CSS preprocessors which are unable to define variables inside of media queries. Having this ability unlocks a lot of potential!

It’s also possible to have custom properties that derive their value from other custom properties. This can be extremely useful for theming:

:root {
  --primary-color: red;
  --logo-text: var(--primary-color);
}

The var() function

To retrieve and use the value of a custom property you’ll need to use the var() function. The syntax for the var() function looks like this:

var(<custom-property-name> [, <declaration-value> ]? )

Where <custom-property-name> is the name of an author defined custom property, like --foo, and <declaration-value> is a fallback value to be used when the referenced custom property is invalid. Fallback values can be a comma separated list, which will be combined into a single value. For example var(--font-stack, "Roboto", "Helvetica"); defines a fallback of "Roboto", "Helvetica". Keep in mind that shorthand values, like those used for margin and padding, are not comma separated, so an appropriate fallback for padding would look like this.

p {
  padding: var(--pad, 10px 15px 20px);
}

Using these fallback values, a component author can write defensive styles for their element:

/* In the component’s style: */
.component .header {
  color: var(--header-color, blue);
}
.component .text {
  color: var(--text-color, black);
}

/* In the larger application’s style: */
.component {
  --text-color: #080;
  /* header-color isn’t set,
     and so remains blue,
     the fallback value */
}

This technique is especially useful for theming Web Components that use Shadow DOM, as custom properties can traverse shadow boundaries. A Web Component author can create an initial design using fallback values, and expose theming “hooks” in the form of custom properties.

<!-- In the web component's definition: -->
<x-foo>
  #shadow
    <style>
      p {
        background-color: var(--text-background, blue);
      }
    </style>
    <p>
      This text has a yellow background because the document styled me! Otherwise it
      would be blue.
    </p>
</x-foo>
/* In the larger application's style: */
x-foo {
  --text-background: yellow;
}

When using var() there are a few gotchas to watch out for. Variables cannot be property names. For instance:

.foo {
  --side: margin-top;
  var(--side): 20px;
}

This is not equivalent to setting margin-top: 20px;. Instead, the second declaration is invalid and is thrown out as an error.

Similarly, you can’t (naively) build up a value where part of it is provided by a variable:

.foo {
  --gap: 20;
  margin-top: var(--gap)px;
}

Again, this is not equivalent to setting margin-top: 20px;. To build up a value, you need something else: the calc() function.

Building values with calc()

If you’ve never worked with it before, the calc() function is a handy little tool that lets you perform calculations to determine CSS values. It’s supported on all modern browsers, and can be combined with custom properties to build up new values. For example:

.foo {
  --gap: 20;
  margin-top: calc(var(--gap) * 1px); /* niiiiice */
}

Working with custom properties in JavaScript

To get the value of a custom property at runtime, use the getPropertyValue() method of the computed CSSStyleDeclaration object.

/* CSS */
:root {
  --primary-color: red;
}

p {
  color: var(--primary-color);
}
<!-- HTML -->
<p>I’m a red paragraph!</p>
/* JS */
var styles = getComputedStyle(document.documentElement);
var value = String(styles.getPropertyValue('--primary-color')).trim();
// value = 'red'

Similarly, to set the value of custom property at runtime, use the setProperty() method of the CSSStyleDeclaration object.

/* CSS */
:root {
  --primary-color: red;
}

p {
  color: var(--primary-color);
}
<!-- HTML -->
<p>Now I’m a green paragraph!</p>
/* JS */
document.documentElement.style.setProperty('--primary-color', 'green');

You can also set the value of the custom property to refer to another custom property at runtime by using the var() function in your call to setProperty().

/* CSS */
:root {
  --primary-color: red;
  --secondary-color: blue;
}
<!-- HTML -->
<p>Sweet! I’m a blue paragraph!</p>
/* JS */
document.documentElement.style.setProperty('--primary-color', 'var(--secondary-color)');

Because custom properties can refer to other custom properties in your stylesheets, you could imagine how this could lead to all sorts of interesting runtime effects.
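
As a sketch of that idea, here is one way you might wire up a simple runtime theme switcher with setProperty(); the property names and values below are hypothetical:

/* JS */
function applyTheme(theme) {
  var root = document.documentElement;
  // Each key is a custom property name, e.g. '--primary-color'.
  Object.keys(theme).forEach(function(name) {
    root.style.setProperty(name, theme[name]);
  });
}

applyTheme({
  '--primary-color': '#263238',
  '--secondary-color': '#ff5722'
});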

Browser Support

Currently Chrome 49, Firefox 42, Safari 9.1, and iOS Safari 9.3 support custom properties.

Demo

Try out the sample for a glimpse at all of the interesting techniques you can now leverage thanks to custom properties.

Further Reading

If you’re interested in learning more about custom properties, Philip Walton from the Google Analytics team has written a primer on why he’s excited for custom properties, and you can keep tabs on their progress in other browsers over on chromestatus.com.

Controlling Font Performance with font-display

Deciding the behavior for a web font as it is loading can be an important performance tuning technique. The new font-display descriptor for @font-face lets developers decide how their web fonts will render (or fall back), depending on how long it takes for them to load.

Differences in Font Rendering Today

Web Fonts give developers the ability to incorporate rich typography into their projects, with the tradeoff that if the user does not already possess a typeface, the browser must spend some time downloading it. Because networks can be flaky, this download time has the potential to adversely affect the user’s experience. After all, no one’s going to care how pretty your text is if it takes an inordinate amount of time to display!

To mitigate some of the risk of a slow font download, most browsers implement a timeout after which a fallback font will be used. This is a useful technique but unfortunately browsers differ on the actual implementation.

Browser             Timeout      Fallback   Swap
Chrome 35+          3 seconds    Yes        Yes
Opera               3 seconds    Yes        Yes
Firefox             3 seconds    Yes        Yes
Internet Explorer   0 seconds    Yes        Yes
Safari              No timeout   N/A        N/A

  • Chrome and Firefox have a three second timeout after which the text is shown with the fallback font. If the font manages to download, then eventually a swap occurs and the text is re-rendered with the intended font.
  • Internet Explorer has a zero second timeout which results in immediate text rendering. If the requested font is not yet available, a fallback is used, and text is re-rendered later once the requested font becomes available.
  • Safari has no timeout behavior (or at least nothing beyond a baseline network timeout).

To make matters worse, developers have limited control in deciding how these rules will affect their application. A performance minded developer may prefer to have a faster initial experience that uses a fallback font, and only leverage the nicer web font on subsequent visits after it has had a chance to download. Using tools like the Font Loading API, it may be possible to override some of the default browser behaviors and achieve performance gains, but it comes at the cost of needing to write non-trivial amounts of JavaScript which must then be inlined into the page or requested from an external file, incurring additional HTTP latency.

To help remedy this situation the CSS Working Group has proposed a new @font-face descriptor, font-display, and a corresponding property for controlling how a downloadable font renders before it is fully loaded.

Font Download Timelines

Similar to the existing font timeout behaviors that some browsers implement today, font-display segments the lifetime of a font download into three major periods.

  1. The first period is the font block period. During this period, if the font face is not loaded, any element attempting to use it must instead render with an invisible fallback font face. If the font face successfully loads during the block period, the font face is then used normally.
  2. The font swap period occurs immediately after the font block period. During this period, if the font face is not loaded, any element attempting to use it must instead render with a fallback font face. If the font face successfully loads during the swap period, the font face is then used normally.
  3. The font failure period occurs immediately after the font swap period. If the font face is not yet loaded when this period starts, it’s marked as a failed load, causing normal font fallback. Otherwise, the font face is used normally.

Understanding these periods means you can use font-display to decide how your font should render depending on whether or when it was downloaded.

Which font-display is Right for You?

To work with the font-display descriptor, add it to your @font-face at-rules:

@font-face {
  font-family: 'Arvo';
  font-display: auto;
  src: local('Arvo'), url(https://fonts.gstatic.com/s/arvo/v9/rC7kKhY-eUDY-ucISTIf5PesZW2xOQ-xsNqO47m55DA.woff2) format('woff2');
}

font-display currently supports the following range of values: auto | block | swap | fallback | optional.

auto

auto uses whatever font display strategy the user-agent uses. Most browsers currently have a default strategy similar to block.

block

block gives the font face a short block period (3s is recommended in most cases) and an infinite swap period. In other words, the browser draws “invisible” text at first if the font is not loaded, but swaps the font face in as soon as it loads. To do this the browser creates an anonymous font face with metrics similar to the selected font but with all glyphs containing no “ink.” This value should only be used if rendering text in a particular typeface is required for the page to be useable.

swap

swap gives the font face a zero second block period and an infinite swap period. This means the browser draws text immediately with a fallback if the font face isn’t loaded, but swaps the font face in as soon as it loads. Similar to block, this value should only be used when rendering text in a particular font is important for the page, but rendering in any font will still get a correct message across. Logo text is a good candidate for swap since displaying a company’s name using a reasonable fallback will get the message across but you’d eventually use the official typeface.
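
For instance, a logo font might be declared along these lines (the font name and URL here are hypothetical):

@font-face {
  font-family: 'LogoFont'; /* hypothetical typeface */
  font-display: swap; /* show fallback text immediately, swap in the font once loaded */
  src: url(/fonts/logo.woff2) format('woff2');
}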

fallback

fallback gives the font face an extremely small block period (100ms or less is recommended in most cases) and a short swap period (three seconds is recommended in most cases). In other words, the font face is rendered with a fallback at first if it’s not loaded, but the font is swapped as soon as it loads. However, if too much time passes, the fallback will be used for the rest of the page’s lifetime. fallback is a good candidate for things like body text where you’d like the user to start reading as soon as possible and don’t want to disturb their experience by shifting text around as a new font loads in.

optional

optional gives the font face an extremely small block period (100ms or less is recommended in most cases) and a zero second swap period. Similar to fallback, this is a good choice for when the downloading font is more of a “nice to have” but not critical to the experience. The optional value leaves it up to the browser to decide whether to initiate the font download, which it may choose not to do or it may do it as a low priority depending on what it thinks would be best for the user. This can be beneficial in situations where the user is on a weak connection and pulling down a font may not be the best use of resources.

Browser Support

font-display is currently behind the Experimental Web Platform Features flag in desktop Chrome 49, and is shipping in Opera and Opera for Android.

Demo

Check out the sample to give font-display a shot. For performance minded developers it can be one more useful tool in your toolbelt!

Smooth Scrolling in Chrome 49

If there’s one thing that people really want from scrolling, it’s for it to be smooth. Historically Chrome has had smooth scrolling in some places, like, say, when users scroll with their trackpads, or fling a page on mobile. But if the user has a mouse plugged in then they’d get a more jittery “stepped” scrolling behavior, which is way less aesthetically pleasing. That’s all about to change in Chrome 49.

For many developers, the solution to the stepped, native, input-driven scroll behavior has been to use libraries that remap it to something smoother and nicer on the eyes. Users do this too, through extensions. There are downsides to both libraries and extensions that change scrolling, though:

  • An uncanny valley feel. This manifests itself in two ways: firstly, one site may have a smooth scroll behavior, but another may not, so the user can end up feeling disoriented by the inconsistency. Secondly, the library’s smoothness physics won’t necessarily match those of the platform’s. So while the motion may be smooth it can feel wrong or uncanny.
  • Increased propensity for main thread contention and jank. As with any JavaScript added to the page, there will be an increased CPU load. That’s not necessarily a disaster, depending on what else the page is doing, but if there is some long-running work on the main thread, and scrolling has been coupled to the main thread, the net result can be stuttering scrolls and jank.
  • More maintenance for developers, more code for users to download. Having a library to do smooth scrolling is going to be something that has to be kept up-to-date and maintained, and it will add to the overall page weight of the site.

These drawbacks are often also true of many libraries that deal with scroll behaviors, whether that’s parallax effects, or other scroll-coupled animations. They all too often trigger jank, get in the way of accessibility, and generally damage the user experience. Scrolling is a core interaction of the web, and altering it with libraries should be done with great care.

In Chrome 49, the default scroll behavior will be changing on Windows, Linux, and Chrome OS. The old, stepped scrolling behavior is going away, and scrolling will be smooth by default! No changes to your code are necessary, except maybe removing any smooth scrolling libraries if you’ve used them.

More scrolling goodies

There are other scroll-related goodies in the works that are also worth mentioning. Many of us want scroll-coupled effects, like parallaxing, smooth scrolling to a document fragment (like example.com/#somesection). As I mentioned earlier, the approaches that are used today can often be detrimental to both developers and users. There are two platform standards that are being worked on that could help: Compositor Worklets and the scroll-behavior CSS property.

Houdini

Compositor Worklets are part of Houdini, and are yet to be fully spec’d out and implemented. That said, as the patches land, they will allow you to write JavaScript that’s run as part of the compositor’s pipeline, which in general means that scroll-coupled effects like parallaxing will be kept perfectly in sync with the current scroll position. Given the way that scrolling is handled today, where scroll events are only periodically sent to the main thread (and can be blocked by other main thread work), this would represent a huge leap forward. If you’re interested in Compositor Worklets, or any of the other exciting new features that Houdini brings, look over the Intro to Houdini post by Surma, the Houdini specs, and contribute your thoughts to the Houdini mailing list!

scroll-behavior

When it comes to fragment-based scrolling, the scroll-behavior CSS property is something else that could help. If you want to try it out, you’ll be pleased to know it has already shipped in Firefox, and you can enable it in Chrome Canary using the “Enable experimental Web Platform features” flag. If you set, say, the <body> element to scroll-behavior: smooth, all scrolls that are triggered either by fragment changes or by window.scrollTo will be animated smoothly! That’s way better than having to use and maintain code from a library that tries to do the same thing. With something as fundamental as scrolling, it’s really important to avoid breaking user expectations, so while these features are in flux it’s still worth adopting a Progressive Enhancement approach, and removing any libraries that attempt to polyfill these behaviors.
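
A minimal sketch of that opt-in:

body {
  /* Animate scrolls triggered by fragment navigation or window.scrollTo */
  scroll-behavior: smooth;
}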

Go forth and scroll

As of Chrome 49, scrolling is getting smoother. But that’s not all: there are more potential improvements that may land, through Houdini and CSS properties like scroll-behavior. Give Chrome 49 a try, let us know what you think, and, most of all, let the browser do the scrolling where you can!

API Deprecations and Removals in Chrome 49

In nearly every version of Chrome we see a significant number of updates and improvements to the product, its performance, and also capabilities of the web platform.

Deprecation policy

To keep the platform healthy we sometimes remove APIs from the Web Platform which have run their course. There can be many reasons why we would remove an API: it may have been superseded by a newer API, updated to reflect changes to a specification, removed to bring alignment and consistency with other browsers, or it may have been an early experiment that never came to fruition in other browsers and thus increases the support burden for web developers.

Some of these changes might affect a very small number of sites, and to mitigate issues ahead of time we try to give developers advance notice so that, if needed, they can make the required changes to keep their sites running.

Chrome currently has a process for deprecations and removals of APIs, and the TL;DR is:

  • Announce on blink-dev
  • Set warnings and give time scales in the developer console of the browser when usage is detected on a page
  • Wait, monitor and then remove feature as usage drops

You can find a list of all deprecated features in chromestatus.com using the deprecated filter and removed features by applying the removed filter. We will also try to summarize some of the changes, reasoning, and migration paths in these posts.

In Chrome 49 (Beta as of February 2nd, 2016; estimated stable date: March 2016) there are a number of changes to Chrome.

Use of “css” prefix in getComputedStyle(e).cssX is deprecated

TL;DR: The use of the “css” prefix in getComputedStyle(e) has been deprecated since it was not a part of the formal spec.

getComputedStyle is a great little function. It will return all CSS values of the DOM element’s styles as they have been computed by the rendering engine. For example, you could run getComputedStyle(someElement).height and it might return 224.1px because that is the height of the element as it is currently displayed.

It seems quite a handy API. So what are we changing?

Before the rendering engine of Chrome changed to Blink, it was powered by WebKit, which let you prefix “css” to the start of a property. For example, getComputedStyle(e).cssHeight instead of getComputedStyle(e).height. Both would return the same data as they mapped to the same underlying values, but it is this usage of the “css” prefix that is non-standard and has been deprecated and removed.

If you access a property this way in Chrome 49 it will return undefined and you will have to fix your code.
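
The fix is mechanical; a quick before-and-after sketch (the element variable here is hypothetical):

// Before: non-standard WebKit-era prefix, removed in Chrome 49
var height = getComputedStyle(element).cssHeight; // undefined

// After: the standard property name
var height = getComputedStyle(element).height; // e.g. '224.1px'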

Use of initTouchEvent is deprecated.

TL;DR: initTouchEvent has been deprecated in favor of the TouchEvent constructor to improve spec compliance and will be removed altogether in Chrome 53.

Intent to Remove | Chromestatus Tracker | CRBug Issue

For a long time you have been able to create synthetic touch events in Chrome using the initTouchEvent API; these are frequently used to simulate Touch Events either for testing or for automating some UI in your site. In Chrome 49 we have deprecated this API and will display the following warning, with the intention to remove it completely in Chrome 53.

'TouchEvent.initTouchEvent' is deprecated and will be removed in M53, around September 2016. Please use the TouchEvent constructor instead. See https://www.chromestatus.com/features/5730982598541312 for more details.

There are a number of reasons why this change is good. initTouchEvent is not in the Touch Events spec; the Chrome implementation was not at all compatible with Safari’s initTouchEvent API and differed from Firefox on Android’s; and finally, the TouchEvent constructor is a lot easier to use.

It was decided that we will aim to follow the spec rather than maintain an API that is neither specced nor compatible with the only other implementation. Consequently we are first deprecating and then removing the initTouchEvent function and requiring developers to use the TouchEvent constructor.

There is usage of this API on the Web but we know it is used by a relatively low number of sites so we are not removing it as quickly as we might normally. We do believe that some of the usage is broken due to sites not handling Chrome’s version of the signature.

Because the iOS and Android/Chrome implementations of the initTouchEvent API were so wildly different, you would often have some code along the lines of the following (frequently forgetting Firefox):

var event = document.createEvent('TouchEvent');

if(ua === 'Android') {
  event.initTouchEvent(touchItem, touchItem, touchItem, "touchstart", window,
    300, 300, 200, 200, false, false, false, false);
} else {
  event.initTouchEvent("touchstart", false, false, window, 0, 300, 300, 200,
    200, false, false, false, false, touches, targetTouches, changedTouches, 0, 0);
}

document.body.dispatchEvent(event);

Firstly, this is bad because it looks for “Android” in the User-Agent, and Chrome on Android will match and hit this deprecation. The fallback can’t be removed just yet, though, because there will be other WebKit- and older Blink-based browsers on Android for a while, and they still need the older API.

To correctly handle Touch Events on the web you should change your code to support Firefox, Edge, and Chrome by checking for the existence of TouchEvent on the window object; if it has a positive “length” (indicating it’s a constructor that takes arguments), you should use it.

if('TouchEvent' in window && TouchEvent.length > 0) {
  var touch = new Touch({
    identifier: 42,
    target: document.body,
    clientX: 200,
    clientY: 200,
    screenX: 300,
    screenY: 300,
    pageX: 200,
    pageY: 200,
    radiusX: 5,
    radiusY: 5
  });

  event = new TouchEvent("touchstart", {
    cancelable: true,
    bubbles: true,
    touches: [touch],
    targetTouches: [touch],
    changedTouches: [touch]
  });
}
else {
  event = document.createEvent('TouchEvent');

  if(ua === 'Android') {
    event.initTouchEvent(touchItem, touchItem, touchItem, "touchstart", window,
      300, 300, 200, 200, false, false, false, false);
  } else {
    event.initTouchEvent("touchstart", false, false, window, 0, 300, 300, 200,
      200, false, false, false, false, touches, targetTouches,
      changedTouches, 0, 0);
  }
}

document.body.dispatchEvent(event);

Error and success handlers required in RTCPeerConnection methods

TL;DR: The WebRTC RTCPeerConnection methods createOffer() and createAnswer() now require an error handler as well as a success handler. Previously it had been possible to call these methods with only a success handler. That usage is deprecated.

In Chrome 49 we’ve also added a warning if you call setLocalDescription() or setRemoteDescription() without supplying an error handler. We expect to make the error handler argument mandatory for these methods in Chrome 50.

This is part of clearing the way for introducing promises on these methods, as required by the WebRTC spec.
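
For reference, here’s a sketch of the promise-based form the spec defines, assuming pc1 is an RTCPeerConnection as in the demo below:

pc1.createOffer()
  .then(function(desc) {
    // setLocalDescription() also returns a promise in the spec
    return pc1.setLocalDescription(desc);
  })
  .catch(function(error) {
    console.log('Failed to create and set offer: ' + error);
  });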

Here’s an example from the WebRTC RTCPeerConnection demo (main.js, line 126):

function onCreateOfferSuccess(desc) {
  pc1.setLocalDescription(desc, function() {
     onSetLocalSuccess(pc1);
  }, onSetSessionDescriptionError);
  pc2.setRemoteDescription(desc, function() {
    onSetRemoteSuccess(pc2);
  }, onSetSessionDescriptionError);
  pc2.createAnswer(onCreateAnswerSuccess, onCreateSessionDescriptionError);
}

Note that both setLocalDescription() and setRemoteDescription() have an error handler. Older browsers expecting only a success handler will simply ignore the error handler argument if it’s present; calling this code in an older browser will not cause an exception.

In general, for production WebRTC applications we recommend that you use adapter.js, a shim maintained by the WebRTC project, to insulate apps from spec changes and prefix differences.

Document.defaultCharset is deprecated

TL;DR: Document.defaultCharset has been deprecated to improve spec compliance.

Intent to Remove | Chromestatus Tracker | CRBug Issue

Document.defaultCharset is a read-only property that returns the default character encoding of the user’s system based on their regional settings. It has not been found useful to maintain this value, given the way browsers use the character encoding information in the HTTP response or in the meta tag embedded in the page.

document.characterSet returns the encoding specified in the HTTP headers. If that is not present, you will get the value specified in the charset attribute of the <meta> element (for example, <meta charset="utf-8">). Finally, if neither of those is available, document.characterSet will be the user’s system setting.
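
A quick way to inspect the resolved encoding (the output depends on the page and system):

console.log(document.characterSet); // e.g. 'UTF-8'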

Gecko has not supported this property and it is not cleanly specced, so it will be deprecated in Blink in Chrome 49 (Beta in January 2016). The following warning will appear in your console until the removal of the property in Chrome 50:

'Document.defaultCharset' is deprecated and will be removed in M50, around April 2016. See https://www.chromestatus.com/features/6217124578066432 for more details.

More discussion of the reasoning not to spec this out can be found on GitHub: https://github.com/whatwg/dom/issues/58

getStorageUpdates() removed

TL;DR: Navigator.getStorageUpdates() has been removed as it is no longer in the Navigator spec.

Intent to Remove | Chromestatus Tracker | CRBug Issue

If this impacts anyone I will eat my hat. getStorageUpdates() has hardly ever (if at all) been used on the web.

To quote the (very old version) of the HTML5 spec:

If a script uses the document.cookie API, or the localStorage API, the browser will block other scripts from accessing cookies or storage until the first script finishes.

Calling the navigator.getStorageUpdates() method tells the user agent to unblock any other scripts that may be blocked, even though the script hasn’t returned.

Values of cookies and items in the Storage objects of localStorage attributes can change after calling this method, whence its name.

Sounds pretty cool, right? The spec even uses the word “whence” (which by happenstance is the only instance of whence in the spec). At the spec level there was a StorageMutex that controlled access to blocking storage such as localStorage and cookies, and this API would help free that mutex so other scripts would not be blocked. But it was never implemented; it’s not supported in IE or Gecko, and WebKit’s (and thus Blink’s) implementation has been a no-op.

It’s been removed from the specs for quite a while and has been removed completely from Blink (for the longest time it has been a no-op and did nothing even if called).

In the unlikely event that you had code that called navigator.getStorageUpdates() then you will have to check for the presence of the function before calling it.
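
A guard along these lines keeps old call sites from throwing:

// The function no longer exists in Chrome 49, so check before calling it.
if (typeof navigator.getStorageUpdates === 'function') {
  navigator.getStorageUpdates();
}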

Object.observe() is deprecated

TL;DR: Object.observe() has been deprecated as it is no longer on the standardization track and will be removed in a future release.

Intent to Remove | Chromestatus Tracker | CRBug Issue

In November 2015 it was announced that Object.observe() was being withdrawn from TC39. It has been deprecated as of Chrome 49, and you will see the following warning in the console if you try to use it:

'Object.observe' is deprecated and will be removed in M50, around April 2016. See https://www.chromestatus.com/features/6147094632988672 for more details.

Many developers liked this API and if you have been experimenting with it and are now seeking a transition path, consider using a polyfill such as MaxArt2501/object-observe or a wrapper library like polymer/observe-js.


Introducing ES2015 Proxies

ES2015 Proxies (in Chrome 49 and later) provide JavaScript with an intercession API, enabling us to trap or intercept all of the operations on a target object and modify how this target operates.

Proxies have a large number of uses, including:

  • Interception
  • Object virtualization
  • Resource management
  • Profiling or logging for debugging
  • Security and access control
  • Contracts for object use

The Proxy API contains a Proxy constructor that takes a designated target object and a handler object.

var target = { /* some properties */ };
var handler = { /* trap functions */ };
var proxy = new Proxy(target, handler);

The behavior of a proxy is controlled by the handler, which can modify the original behavior of the target object in quite a few useful ways. The handler contains optional trap methods (e.g. .get(), .set(), .apply()) called when the corresponding operation is performed on the proxy.

Interception

Let’s begin by taking a plain object and adding some interception middleware to it using the Proxy API. Remember, the first parameter passed to the constructor is the target (the object being proxied) and the second is the handler, which is where we can add hooks for our getters, setters, or other behavior.

var target = {};

var superhero = new Proxy(target, {
   get: function(target, name, receiver) {
       console.log('get was called for: ', name);
       return target[name];
   }
});

superhero.power = 'Flight';
console.log(superhero.power);

Running the above code in Chrome 49 we get the following:

get was called for: power
Flight

As we can see in practice, performing our property get or property set on the proxy object correctly resulted in a meta-level call to the corresponding trap on the handler. Handler operations include property reads, property assignment, and function application, all of which get forwarded to the corresponding trap.

The trap function can, if it chooses, implement an operation arbitrarily (e.g. forwarding the operation to the target object). This is indeed what happens by default if a trap isn’t specified. For example, here is a no-op forwarding proxy that does just this:

var target = {};

var proxy = new Proxy(target, {});

// operation forwarded to the target
proxy.paul = 'irish';

// 'irish': the operation has been forwarded
console.log(target.paul);

Note: An example of using all available Proxy handler traps can be found on MDN.

We just looked at proxying plain objects, but we can just as easily proxy a function object, where a function is our target. This time we’ll use the handler.apply() trap:

// Proxying a function object
function sum(a, b) {
    return a + b;
}

var handler = {
    apply: function(target, thisArg, argumentsList) {
        console.log(`Calculate sum: ${argumentsList}`);
        return target.apply(thisArg, argumentsList);
    }
};

var proxy = new Proxy(sum, handler);
proxy(1, 2);
// Calculate sum: 1, 2
// 3

Identifying Proxies

The identity of a proxy can be observed using the JavaScript equality operators (== and ===). As we know, when applied to two objects these operators compare object identities. The next example demonstrates this behavior. Comparing two distinct proxies returns false despite the underlying targets being the same. In a similar vein, the target object is different from any of its proxies:

// Continuing previous example

var proxy2 = new Proxy(sum, handler);
proxy == proxy2; // false
proxy == sum; // false

Ideally, you shouldn’t be able to distinguish a proxy from a non-proxy object so that putting a proxy in place doesn’t really affect the outcome of your app. This is one reason the Proxy API doesn’t include a way to check if an object is a proxy nor provides traps for all operations on objects.

Use cases

As mentioned, Proxies have a wide array of use cases. Many of those above, such as access control and profiling, fall under generic wrappers: proxies that wrap other objects in the same address “space”. Virtualization was also mentioned. Virtual objects are proxies that emulate other objects without those objects needing to be in the same address space. Examples include remote objects (that emulate objects in other spaces) and transparent futures (emulating results that are not yet computed).

Proxies as Handlers

A pretty common use case for proxy handlers is to perform validation or access control checks before performing an operation on a wrapped object. Only if the check is successful does the operation get forwarded. The below validation example demonstrates this:

var validator = {
  set: function(obj, prop, value) {
    if (prop === 'yearOfBirth') {
      if (!Number.isInteger(value)) {
        throw new TypeError('The yearOfBirth is not an integer');
      }

      if (value > 3000) {
        throw new RangeError('The yearOfBirth seems invalid');
      }
    }

    // The default behavior to store the value
    obj[prop] = value;

    // The set trap must return true to indicate success
    return true;
  }
};

var person = new Proxy({}, validator);

person.yearOfBirth = 1986;
console.log(person.yearOfBirth); // 1986
person.yearOfBirth = 'eighties'; // Throws an exception
person.yearOfBirth = 3030; // Throws an exception

More complex examples of this pattern might take into account all of the different operations proxy handlers can intercept. One could imagine an implementation having to duplicate the pattern of access checking and forwarding the operation in each trap.

This can be tricky to easily abstract, given each op may have to be forwarded differently. In a perfect scenario, if all operations could be uniformly funneled through just one trap, the handler would only need to perform the validation check once in the single trap. You could do this by implementing the proxy handler itself as a proxy. This is unfortunately out of scope for this article.

Object Extension

Another common use case for proxies is extending or redefining the semantics of operations on objects. You might for example want a handler to log operations, notify observers, throw exceptions instead of returning undefined, or redirect operations to different targets for storage. In these cases, using a proxy might lead to a very different outcome than using the target object.

function extend(sup, base) {
  var descriptor = Object.getOwnPropertyDescriptor(base.prototype, "constructor");

  base.prototype = Object.create(sup.prototype);

  var handler = {
    construct: function(target, args) {
      var obj = Object.create(base.prototype);
      this.apply(target, obj, args);
      return obj;
    },

    apply: function(target, that, args) {
      sup.apply(that, args);
      base.apply(that, args);
    }
  };

  var proxy = new Proxy(base, handler);
  descriptor.value = proxy;
  Object.defineProperty(base.prototype, "constructor", descriptor);
  return proxy;
}

var Vehicle = function(name){
  this.name = name;
};

var Car = extend(Vehicle, function(name, year) {
  this.year = year;
});

Car.prototype.style = "Saloon";

var Tesla = new Car("Model S", 2016);

console.log(Tesla.style); // "Saloon"
console.log(Tesla.name); // "Model S"
console.log(Tesla.year);  // 2016

Access Control

Access control is another good use case for Proxies. Rather than passing a target object to a piece of untrusted code, one could pass its proxy wrapped in a sort of protective membrane. Once the app deems that the untrusted code has completed a particular task, it can revoke the reference which detaches the proxy from its target. The membrane would extend this detachment recursively to all objects reachable from the original target that was defined.
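
ES2015 also provides revocable proxies, which map directly onto this membrane idea. A minimal sketch:

var target = { secret: 42 };
var revocable = Proxy.revocable(target, {});

// Hand revocable.proxy to the untrusted code...
console.log(revocable.proxy.secret); // 42

// ...then detach it once the task is complete.
revocable.revoke();
revocable.proxy.secret; // TypeError: the proxy has been revoked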

Using Reflection with Proxies

Reflect is a new built-in object that provides methods for interceptable JavaScript operations, very useful for working with Proxies. In fact, Reflect methods are the same as those of proxy handlers.

Languages like Python or C# have long offered reflection APIs, but JavaScript hasn’t really needed one, being a dynamic language. One can argue ES5 already has quite a few reflection features, such as Array.isArray() or Object.getOwnPropertyDescriptor(), which would be considered reflection in other languages. ES2015 introduces a Reflect API which will house future methods for this category, making them easier to reason about. This makes sense, as Object is meant to be a base prototype rather than a bucket for reflection methods.

Using Reflect, we can improve on our earlier Superhero example for proper field interception on our get and set traps as follows:

// Field interception with Proxy and the Reflect API

var pioneer = new Proxy({}, {
  get: function(target, name, receiver) {
      console.log(`get called for field: ${name}`);
      return Reflect.get(target, name, receiver);
  },

  set: function(target, name, value, receiver) {
      console.log(`set called for field: ${name} and value: ${value}`);
      return Reflect.set(target, name, value, receiver);
  }
});

pioneer.firstName = 'Grace';
pioneer.secondName = 'Hopper';
pioneer.firstName; // 'Grace'

Which outputs:

set called for field: firstName and value: Grace
set called for field: secondName and value: Hopper
get called for field: firstName

Another example is where one might want to:

  • Wrap a proxy definition inside a custom constructor to avoid manually creating a new proxy each time we want to work with specific logic.

  • Add the ability to ‘save’ changes, but only if data has actually been modified (hypothetically due to the save operation being very expensive).

function Customer() {
  var proxy = new Proxy({
    save: function() {
      if (!this.dirty) {
        return console.log('Not saving, object still clean');
      }
      console.log('Trying an expensive saving operation: ', this.changedProperties);
    }
  }, {
    set: function(target, name, value, receiver) {
      target.dirty = true;
      target.changedProperties = target.changedProperties || [];

      if (target.changedProperties.indexOf(name) == -1) {
        target.changedProperties.push(name);
      }
      return Reflect.set(target, name, value, receiver);
    }
  });

  return proxy;
}

var customer = new Customer();

customer.name = 'seth';
customer.surname = 'thompson';
customer.save(); // Trying an expensive saving operation: ["name", "surname"]

For more Reflect API examples, see ES6 Proxies by Tagtree.

Polyfilling Object.observe()

Although we’re saying goodbye to Object.observe(), it’s now possible to polyfill it using ES2015 Proxies. Simon Blackwell wrote a Proxy-based Object.observe() shim recently that’s worth checking out. Erik Arvidsson also wrote a fairly spec-complete version all the way back in 2012.

Browser Support

ES2015 Proxies are supported in Chrome 49, Opera, Microsoft Edge and Firefox. Safari has sent mixed public signals about the feature, but we remain optimistic. Reflect is in Chrome, Opera, and Firefox, and is in development for Microsoft Edge.

Web Audio Updates in Chrome 49

Chrome has been consistently and quietly improving its support for the Web Audio API. In Chrome 49 (Beta as of Feb 2016, and expect to be Stable in March 2016) we’ve updated several features to track the specification, and also added one new node.

decodeAudioData() now returns a Promise.

The decodeAudioData() method on AudioContext now returns a Promise, enabling Promise-based asynchronous pattern handling. The decodeAudioData() method has always taken success and error callback functions as parameters:

context.decodeAudioData(arraybufferData, onSuccess, onError);

But now you can use standard Promise methods to handle the asynchronous nature of decoding audio data instead:

context.decodeAudioData(arraybufferData).then(
    (buffer) => { /* store the buffer */ },
    (reason) => { console.log("decode failed! " + reason) });

Although in a single example this looks more verbose, Promises make asynchronous programming easier and more consistent. For compatibility, the success and error callback functions are still supported, as per the specification.

OfflineAudioContext now supports suspend() and resume()

At first glance, it might seem strange to have suspend() on an OfflineAudioContext. After all, suspend() was added to AudioContext to enable putting the audio hardware into standby mode, which seems pointless in scenarios where you’re rendering to a buffer (which is what OfflineAudioContext is for, of course). However, the point of this feature is to be able to construct only part of a “score” at a time, to minimize memory usage. You can create more nodes while suspended in the middle of a render.

As an example, Beethoven’s Moonlight Sonata contains around 6,500 notes. Each “note” probably deconstructs to at least a couple of audio graph nodes (e.g. an AudioBuffer and a Gain node). If you wanted to render the entire seven-and-a-half minutes into a buffer with OfflineAudioContext, you probably don’t want to create all those nodes at once. Instead, you can create them in chunks of time:

var context = new OfflineAudioContext(2, length, sampleRate);
scheduleNextBlock();
context.startRendering().then( (buffer) => { /* store the buffer */ } );

function scheduleNextBlock() {
    // create any notes for the next blockSize number of seconds here
    // ...

    // make sure to tell the context to suspend again after this block;
    context.suspend(context.currentTime + blockSize).then( scheduleNextBlock );

    context.resume();
}

This will enable you to minimize the number of nodes that need to be pre-created at the beginning of the rendering, and lessen memory demands.

IIRFilterNode

The spec has added a node for audiophiles who want to create their own precisely specified infinite impulse response filter: the IIRFilterNode. This filter complements the BiquadFilterNode but allows complete specification of the filter response parameters (rather than the BiquadFilterNode’s easy-to-use AudioParams for type, frequency, Q, and the like). The IIRFilterNode allows precise specification of filters that couldn’t be created before, like single-order filters; however, using the IIRFilterNode requires some deep knowledge of how IIR filters work, and its parameters are not schedulable like BiquadFilterNode’s.
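
A minimal sketch of creating one, assuming the context and a sourceNode from earlier examples; the coefficient values below are placeholders, since real designs come from filter-design math or tooling:

// feedforward (b) and feedback (a) coefficients define the filter response
var feedforward = [0.1, 0.2, 0.1]; // placeholder values
var feedback = [1.0, -0.5, 0.25]; // placeholder values
var iirFilter = context.createIIRFilter(feedforward, feedback);

sourceNode.connect(iirFilter).connect(context.destination);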

Previous changes

I also want to mention a couple of improvements that went in previously: in Chrome 48, BiquadFilterNode automation started running at audio rate. The API didn’t change at all, but this means your filter sweeps will sound even smoother. Also in Chrome 48, we added chaining to the AudioNode.connect() method by returning the node we’re connecting to. This makes it simpler to create chains of nodes, as in this example:

sourceNode.connect(gainNode).connect(filterNode).connect(context.destination);

That’s all for now, and keep rocking!

DevTools go dark, @keyframe editing and smarter autocomplete

Learn how DevTools makes you type less with smarter Console autocomplete, how to edit @keyframe rules directly in the Styles pane, how to have fun with CSS custom properties, and how to join the dark side.

A smarter autocomplete in your Console

If you’re like me and many others, you type a command into the console to debug your app, see it not working, iterate, and type it again, and again, and again. To help with that, we’re now autocompleting full statements you typed previously, like so:

Autocomplete

Directly edit @keyframe rules in Styles pane

When we introduced the animation inspector and easing editor to DevTools, it was limited to transitions because we never showed @keyframe-based CSS animations in the Styles pane. I’m pleased to say that’s now a thing of the past, so go wild! Check out our video tweet for a preview.

Custom CSS Properties support

Custom CSS properties in DevTools

There’s a lot of goodness coming to CSS, and one of the highlights is custom properties, launching in Chrome 49. We made sure to include full support in DevTools, so if you’ve been using variables in Sass before, give the native ones a try; they allow you to edit properties on the fly in the Styles pane and immediately update dependent elements.

A Dark Theme for DevTools

Dark Theme in action

We finally gave in to a request that we’ve heard dozens of times over the last couple of years: you can now choose the dark side in DevTools. Head to the DevTools settings, set the theme to dark, and enjoy. I’d say this is still in beta, as a lot of it is auto-generated, so if you see any areas that could be improved, definitely let us know!

The Best of the Rest

  • The console drawer now auto collapses when you click on the full Console tab.
  • The file tree in Sources got a nice overhaul with new icons and new grouping functionality.

As always, let us know what you think via Twitter or the comments below, and submit bugs to crbug.com/new.

Until next month!
Paul Bakaus & the DevTools team

Delivering Fast and Light Applications with Save-Data

Hundreds of millions of users are relying on proxy browsers and various transcoding services to access the web on a daily basis. For some, these services are critical because they significantly lower the associated costs of browsing the web. For others, they enable a much faster browsing experience in situations where network connectivity is slow. In short, they significantly improve the user experience, hence their continuing growth in use and popularity.

However, the popularity of the proxy browsers and transcoding services is also an indicator that we—the site owners and web developers—are ignoring the high user demand for fast and light applications and pages. Let’s fix that.

The new Save-Data client hint request header, available in Chrome, Opera, and Yandex browsers, enables developers to deliver fast and light applications to users who have opted in to ‘data savings’ mode in their browser. By identifying this request header, the application can customize and deliver an optimized user experience to cost- and performance-constrained users.

The Need for Lightweight Apps and Pages

Weblight stats

“Google shows faster, lighter pages to people searching on slow mobile connections in selected countries. Our experiments show that optimized pages load four times faster than the original page and use 80% fewer bytes. Because these pages load so much faster, we also saw a 50% increase in traffic to these pages.”

The number of 2G connections is finally on the decline. However, 2G was still the dominant network technology by number of connections in 2015. The penetration and availability of 3G and 4G networks is growing rapidly, but the associated ownership costs and network constraints are still a significant factor for hundreds of millions of users:

  • Accessing a faster network requires reasonably modern (“smartphone”) hardware and a data plan to back it. The total cost of ownership for such a device can be staggering for many users—e.g., a 500MB data plan can cost 17 hours worth of minimum wage work in India. Not surprisingly many users opt for prepaid plans, often topping up their quota daily, and carefully monitor and control their device access to the network. Every megabyte counts.

  • Those that can afford the latest flagship 4G-enabled phone and data plan can still find themselves constrained by the network: the device may be connected to a fast network but sipping data through a straw, due to capacity issues, signal quality, roaming policies, and so on. For example, many providers cap connection throughput to <300 kbps when the user is roaming, or when the allotted data plan cap has been exceeded, regardless of the network technology in use.

The point being, the need for lightweight and optimized experiences may be more pronounced in some markets (typically, in areas with higher ratio of 2G/3G users and higher data costs), but it is also a universal need because even 4G subscribers can often find themselves with poor and expensive connectivity.

Note: As a corollary to the above, the need for lightweight experiences is not a problem that will “go away” in any foreseeable future.

Limits of Proxy Browsers and Transcoding Services

Many popular browsers, both desktop and mobile, allow the user to enable a “data saving” mode, which gives permission to the browser to apply some set of optimizations to reduce the amount of data required to render the page. For example, when enabled, some browsers may request lower resolution images, defer loading of some resources, or route requests through a proxy service that can apply other content-specific optimizations—e.g. recompress images, compress text resources, and so on.

The savings from such optimizations vary, but as one data point, the Chrome Data Saver feature can reduce the size of pages by 50%. Other popular proxy browsers such as Opera browsers and Yandex.Browser offer similar functionality. However, while these proxy browsers are popular with users, they have their own set of limitations:

  • Proxy browsers achieve most of their savings by recompressing images into more efficient formats and with lower quality, and applying text compression where it was omitted. In other words, they can only optimize what you give them; they can’t build and deliver an alternate and better “lightweight” experience.

  • Most proxy browsers restrict themselves to resources delivered over HTTP. Secure connections (HTTPS) are routed directly from the client to the destination, bypassing the proxies.

On the other hand, transcoding services, such as the “web light” experience offered by Google search, often take a more drastic approach and may reformat the site to make it accessible to users on very slow networks. This yields a different set of disadvantages and limitations:

  • Our applications may look very different because we can’t control how the information is presented to the user, and may omit or ignore important site functionality.

  • The optimized experience is available to a subset of users—e.g. those navigating to our site after a search. Repeat visits may result in inconsistent experience, and so on.

In short, counting on third-party services is both suboptimal and unreliable. We—the site owners and web developers—need to take the responsibility and control over the user experience for data- and cost-constrained users—e.g. respond with an alternative “lighter” application template, reduce the number of image bytes (fewer images, higher compression ratios, smaller display size, and so on), switch to on-demand loading of expensive content, and so on.

Detecting the Save-Data User Preference

How do you know when to deliver the “light” experience to your users? Your application should check for the new Save-Data client hint request header:

The “Save-Data” client hint request header indicates the client’s preference for reduced data usage, due to high transfer costs, slow connection speeds, or other reasons.

Whenever the user enables a “data savings” mode in their browser, the browser will append the new Save-Data request header to all outgoing requests (both HTTP and HTTPS). Today, the browser will only advertise one “on” token in the header (i.e. Save-Data: on), but it may be extended in the future to indicate other user preferences.

Save-Data header in DevTools

In turn, if your application is using a service worker, it can inspect the request headers and apply relevant logic to optimize the experience. Alternatively, the server can look for the advertised preference in the Save-Data request header and return an alternate response—e.g. different markup, smaller images and video, and so on.
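For example, a minimal service worker sketch might check the intercepted request for this header and swap image requests for smaller variants. The “-lowres” naming convention below is a hypothetical assumption about how an app might organize its lightweight assets, not something the header prescribes:

    // sw.js — a minimal sketch of Save-Data handling in a service worker.
    self.addEventListener('fetch', event => {
      const saveData = event.request.headers.get('save-data') === 'on';

      // Only rewrite image requests when the user has opted into data savings.
      if (saveData && /\.(?:jpg|jpeg|png)$/.test(event.request.url)) {
        // Hypothetical convention: "photo.jpg" has a "photo-lowres.jpg" variant.
        const lowResUrl = event.request.url.replace(/\.(jpg|jpeg|png)$/, '-lowres.$1');
        event.respondWith(
          // Fall back to the original image if the low-res variant is missing.
          fetch(lowResUrl).then(response => response.ok ? response : fetch(event.request))
        );
      }
      // All other requests fall through to the network as usual.
    });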

Tip: Are you using PageSpeed for Apache or Nginx to optimize your pages? If so, see this discussion to learn how to enable Save-Data savings for your users.

Browser Support

  • Chrome 49+ will advertise Save-Data whenever the user enables the “Data Saver” option on mobile, or the “Data Saver” extension on desktop browsers.

  • Opera 35+ will advertise Save-Data whenever the user enables “Opera Turbo” mode on desktop, or the “Data savings” option on Android browsers.

  • Yandex 16.2+ will advertise Save-Data whenever Turbo mode is enabled in its desktop or mobile browsers.

Implementation Tips and Best Practices

  1. Lightweight applications are not lesser applications. They don’t omit functionality or data that is critical to helping the user find and achieve what they’re looking for; they’re just more cognizant of the costs involved and of the user experience. Do not restrict or remove critical functionality, and where possible, give the user a choice to toggle between experiences. For example:
    • A photo gallery application may deliver lower resolution previews when Save-Data is advertised, but it should still allow the user to view high-resolution previews if desired.
    • A search application may return fewer results, reduce the amount of “heavy” media in those results, and reduce the number of dependencies required to render the page.
    • A news-oriented site may surface fewer stories and provide smaller media previews to enable faster and lighter browsing.
    • And so on…
  2. Enable server logic to check for the Save-Data request header and consider providing an alternate (lighter) response—e.g. reduce the number of required resources and dependencies to display the page, apply higher image compression, and so on (see the server-side sketch after this list).

    If you’re serving an alternate response based on the Save-Data header, don’t forget to add it to the Vary list—e.g. Vary: Save-Data—to indicate to upstream caches that they should cache and serve this version only when the Save-Data request header is present. For more details, consult the best practices for interaction with caches.

  3. If you’re using a service worker, your application can detect when the data savings option is enabled by checking for the presence of the Save-Data request header (see the service worker sketch above). If it’s enabled, consider whether you can rewrite the request to fetch fewer bytes, or use an already fetched response.

  4. Consider augmenting Save-Data with other signals, such as information about the user’s connection type and technology (see the NetInfo API); a short client-side sketch follows this list. For example, you might want to serve the lightweight experience to any user on a 2G connection. Conversely, just because the user is on a “fast” 4G connection does not mean they’re not interested in saving data—e.g. when roaming.
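To make tips 2 and 3 concrete, here is a hedged server-side sketch. Express and the renderLitePage()/renderFullPage() helpers are illustrative assumptions; any server stack can apply the same check-and-Vary logic:

    // server.js — sketch of serving an alternate "light" response.
    const express = require('express');
    const app = express();

    // Hypothetical render helpers standing in for your templating layer.
    const renderLitePage = () => '<html><!-- minimal shell, compressed images --></html>';
    const renderFullPage = () => '<html><!-- full experience --></html>';

    app.get('/', (req, res) => {
      // Tell upstream caches that the response varies on this header, so the
      // "light" and "full" versions are cached and served separately.
      res.set('Vary', 'Save-Data');

      if (req.get('Save-Data') === 'on') {
        res.send(renderLitePage());
      } else {
        res.send(renderFullPage());
      }
    });

    app.listen(3000);

And for tip 4, a client-side check might combine the data savings preference with connection information. Keep in mind that the NetInfo API is only exposed in some browsers (e.g. Chrome), so treat this as a progressive enhancement:

    // Sketch: treat very slow connections like an explicit data savings opt-in.
    const conn = navigator.connection;
    const prefersLight = !!conn &&
      (conn.saveData || conn.effectiveType === '2g' || conn.effectiveType === 'slow-2g');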

Supercharged Remote Debugging, Class Toggles and our own late night show?!

Learn all about the revamped “Inspect Devices” UI, toggle classes easily in the now-fixed style panel, and watch the pilot of DevTools Tonight.

Welcome back to the latest edition of the digest for all you Canary users out there! Turns out I missed a few updates in December (I’ve been a little preoccupied with my newborn daughter), so here they come, along with a few super fresh ones.

The new “Inspect Devices” Dialog

The (currently outdated) Remote Debugging documentation for DevTools has been our most popular guide for many years in a row, which could only mean one thing: Nobody had a frickin clue how to use it!

So we went ahead and revamped the UX. Instead of having to open an entirely different page (“chrome://inspect”), all of “Inspect Devices” is now conveniently embedded into DevTools itself for quick access and fewer context switches.

Class Toggles in the Style Panel

cls toggles in Style Panel

It’s now easier than ever to quickly toggle a class on an element to preview how it would look with or without the associated styles. We’ve also added an input to quickly add new classes, so you don’t have to edit the attribute by hand. Click the new “.cls” button in the Style panel to try it out.

DevTools Tonight

It is with great pleasure that I can announce yet another way to keep up with what’s happening in the world of Chrome DevTools: I present to you the pilot of DevTools Tonight:

The new show will run on a bi-weekly schedule, and in it I’ll focus on bigger features coming to stable Chrome (instead of Canary), going a little more in-depth on each. Be sure to subscribe to the Chrome Developers channel to get notified when episode #1 ships, and let me know what you think in the YouTube comments!

The Best of the Rest


As always, let us know what you think via Twitter or the comments below, and submit bugs to crbug.com/new.

Until next month!
Paul Bakaus & the DevTools team
