Take Photos and Control Camera Settings
Image Capture is an API to capture still images and configure camera hardware settings.
The API enables control over camera features such as zoom, focus mode, contrast, ISO and white balance. Best of all, Image Capture allows you to access the full resolution capabilities of any available device camera or webcam. Previous techniques for taking photos on the Web have used video snapshots, which are lower resolution than the camera's still-image capabilities.
Image Capture is available in Chrome 56 on Android and desktop as an Origin Trial, or in Chrome Canary on desktop and Android with Experimental Web Platform features enabled.
The API has four methods:

- `takePhoto()` returns a `Blob`, the result of a single photographic exposure, which can be downloaded, stored by the browser, or displayed in an `img` element. This method uses the highest available photographic camera resolution.
- `grabFrame()` returns an `ImageBitmap` object, a snapshot of live video, which could (for example) be drawn on a canvas and then post-processed to selectively change color values. Note that the `ImageBitmap` will only have the resolution of the video source, which will be lower than the camera's still-image capabilities.
- `getPhotoCapabilities()` returns a `PhotoCapabilities` object that provides access to available camera options and their current values.
- `setOptions()` is used to configure camera settings such as zoom, white balance or focus mode.
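Putting `getPhotoCapabilities()` and `setOptions()` together, here is a minimal sketch of reading a camera option and writing it back. The `clampToRange` helper and the mid-range zoom choice are illustrative, not part of the API; it assumes an `imageCapture` object constructed as in the code below, and a camera that actually reports a zoom range:

```js
// Clamp a requested value into a MediaSettingsRange-like {min, max, step}.
function clampToRange(value, range) {
  const stepped = range.min +
      Math.round((value - range.min) / range.step) * range.step;
  return Math.min(range.max, Math.max(range.min, stepped));
}

// Sketch: read the camera's zoom range, then request a mid-range zoom.
function applyMidZoom(imageCapture) {
  return imageCapture.getPhotoCapabilities().then(capabilities => {
    const zoom = capabilities.zoom;  // MediaSettingsRange: {min, max, current, step}
    const target = clampToRange((zoom.min + zoom.max) / 2, zoom);
    return imageCapture.setOptions({zoom: target});
  });
}
```

Snapping to the reported `step` before clamping avoids requesting a value the camera cannot represent.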
The Image Capture API gets access to a camera via a `MediaStream` from `getUserMedia()`:
```js
navigator.mediaDevices.getUserMedia({video: true})
  .then(gotMedia)
  .catch(error => console.error('getUserMedia() error:', error));

function gotMedia(mediaStream) {
  const mediaStreamTrack = mediaStream.getVideoTracks()[0];
  const imageCapture = new ImageCapture(mediaStreamTrack);
  console.log(imageCapture);
}
```
You can try out this code from the DevTools console.
Note: To choose between different cameras, such as the front and back camera on a phone, get a list of available devices via the `MediaDevices.enumerateDevices()` method, then set `deviceId` in the `getUserMedia()` constraints as per the demo here.
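That note can be sketched as follows; `pickDeviceId` and the `'back'` label match are illustrative helpers, not part of the API (note also that device labels are only populated once camera permission has been granted):

```js
// Pick the deviceId of the first video input whose label contains a keyword
// (e.g. 'back' on many phones). Returns null if no match.
function pickDeviceId(devices, keyword) {
  const match = devices.find(device =>
      device.kind === 'videoinput' &&
      device.label.toLowerCase().includes(keyword.toLowerCase()));
  return match ? match.deviceId : null;
}

// Usage in the browser (sketch): enumerate devices, then pass the chosen
// deviceId as a getUserMedia() constraint, falling back to any camera.
function openBackCamera() {
  return navigator.mediaDevices.enumerateDevices()
    .then(devices => {
      const deviceId = pickDeviceId(devices, 'back');
      return navigator.mediaDevices.getUserMedia(
          {video: deviceId ? {deviceId: {exact: deviceId}} : true});
    });
}
```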
You can use `takePhoto()` to get a still image and then set it as the `src` of an `<img>`:
```js
const img = document.querySelector('img');
// ...
imageCapture.takePhoto()
  .then(blob => {
    img.src = URL.createObjectURL(blob);
  })
  .catch(error => console.error('takePhoto() error:', error));
```
Use `grabFrame()` to get data for a frame of video and then draw it on a `<canvas>`:
```js
const canvas = document.querySelector('canvas');
// ...
imageCapture.grabFrame()
  .then(imageBitmap => {
    canvas.width = imageBitmap.width;
    canvas.height = imageBitmap.height;
    canvas.getContext('2d').drawImage(imageBitmap, 0, 0);
  })
  .catch(error => console.error('grabFrame() error:', error));
```
Camera capabilities
If you run the code above, you'll notice a difference in dimensions between the `grabFrame()` and `takePhoto()` results.

The `takePhoto()` method gives access to the camera's maximum resolution. `grabFrame()` just takes the next-available `VideoFrame` in the `MediaStream` inside the renderer process, whereas `takePhoto()` interrupts the `MediaStream`, reconfigures the camera, takes the photo (usually in a compressed format, hence the `Blob`) and then resumes the `MediaStream`. In essence, this means that `takePhoto()` gives access to the full still-image resolution capabilities of the camera. Previously, it was only possible to 'take a photo' by calling `drawImage()` on a `canvas` element, using a video as the source (as per the example here).
In this demo, the `<canvas>` dimensions are set to the resolution of the video stream, whereas the natural size of the `<img>` is the maximum still-image resolution of the camera. CSS, of course, is used to set the display size of both.
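You can measure the two resolutions yourself with a sketch like the following (the `dims` helper and `compareResolutions` are illustrative names; it assumes an `imageCapture` object constructed as above):

```js
// Format an object with width/height properties as a "WxH" string.
function dims(image) {
  return `${image.width}x${image.height}`;
}

// Sketch: grab a video frame and take a photo, then log both sizes.
// createImageBitmap() decodes the photo Blob so its dimensions can be read.
function compareResolutions(imageCapture) {
  return Promise.all([
    imageCapture.grabFrame(),
    imageCapture.takePhoto().then(blob => createImageBitmap(blob)),
  ]).then(([frame, photo]) => {
    console.log(`video frame: ${dims(frame)}`);
    console.log(`still photo: ${dims(photo)}`);
  });
}
```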
The full range of available camera resolutions for still images can be retrieved and set using the `MediaSettingsRange` values for `PhotoCapabilities.imageHeight` and `imageWidth`. Note that the minimum and maximum width and height constraints for `getUserMedia()` are for video, which (as discussed) may differ from the camera's capabilities for still images. In other words, you may not be able to access the full resolution capabilities of your device when saving from `getUserMedia()` to a canvas. The WebRTC resolution constraint demo shows how to set `getUserMedia()` constraints for resolution.
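For example, a small helper (illustrative, not part of any API) that builds such video constraints:

```js
// Build getUserMedia() video constraints for a target resolution.
// Note: these constrain the *video* stream only; takePhoto() may still
// deliver a higher still-image resolution.
function resolutionConstraints(width, height, exact = false) {
  return exact
      ? {video: {width: {exact: width}, height: {exact: height}}}
      : {video: {width: {ideal: width}, height: {ideal: height}}};
}

// Usage (browser): request 1080p video if available.
// navigator.mediaDevices.getUserMedia(resolutionConstraints(1920, 1080))
//   .then(gotMedia)
//   .catch(error => console.error('getUserMedia() error:', error));
```

With `ideal`, the browser picks the closest supported resolution; with `exact`, `getUserMedia()` rejects with `OverconstrainedError` if the camera cannot match.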
Anything else?
The Shape Detection API works well with Image Capture: call `grabFrame()` repeatedly to feed `ImageBitmap`s to a `FaceDetector` or `BarcodeDetector`. Find out more about the API from Paul Kinlan's blog post.

The camera flash (device light) can be accessed via `FillLightMode`.
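The grab-and-detect loop can be sketched like this, assuming a browser that exposes the Shape Detection API and an `imageCapture` object constructed as above (`largestFace` and `detectFacesLoop` are illustrative names):

```js
// Pick the detected face with the largest bounding box (or null if none).
function largestFace(faces) {
  let best = null;
  let bestArea = -1;
  for (const face of faces) {
    const area = face.boundingBox.width * face.boundingBox.height;
    if (area > bestArea) {
      best = face;
      bestArea = area;
    }
  }
  return best;
}

// Sketch: repeatedly grab frames and feed them to a FaceDetector.
function detectFacesLoop(imageCapture, faceDetector, onFaces) {
  imageCapture.grabFrame()
    .then(imageBitmap => faceDetector.detect(imageBitmap))
    .then(faces => {
      onFaces(faces);
      requestAnimationFrame(
          () => detectFacesLoop(imageCapture, faceDetector, onFaces));
    })
    .catch(error => console.error('detection error:', error));
}

// Usage (browser):
// detectFacesLoop(imageCapture, new FaceDetector(),
//     faces => console.log(largestFace(faces)));
```

Scheduling the next grab with `requestAnimationFrame` keeps detection roughly in step with the display refresh rather than saturating the renderer.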
Demos and code samples
Support
- Chrome 56 on Android and desktop as an Origin Trial.
- Chrome Canary on Android and desktop with Experimental Web Platform features enabled.