The MediaDevices.getUserMedia() method is part of the WebRTC media capture API and is used to access the camera and the microphone connected to the user's device (computer, smartphone, etc.) from the browser. When getUserMedia() is invoked, the browser asks the user for permission to use the media inputs (camera, microphone, or both) connected to the device.
Syntax:

navigator.mediaDevices.getUserMedia(MediaStreamConstraints)
    .then(mediaStream => {
        // Code that uses the MediaStream
    })
    .catch(error => {
        // Code to handle the error
    });
Using the getUserMedia() API: getUserMedia() is accessed through the navigator.mediaDevices singleton object, which offers various methods for accessing the camera and the microphone as well as for screen sharing. When getUserMedia() is invoked, the browser prompts the user for permission to access the available camera or microphone (based on the given MediaStreamConstraints parameter). If the user grants permission, the method returns a Promise that resolves to a MediaStream.
Note: getUserMedia() can also be reached through the older navigator.getUserMedia() method, but that API is deprecated and hence not recommended.
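Before calling the method, it is also a good idea to check that the browser actually exposes navigator.mediaDevices.getUserMedia. The following is a minimal sketch of such a feature check; the video element id is an assumption:

Javascript

// Check for support before asking for the camera
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ video: true })
        .then(stream => {
            // Assumes a <video id="video"> element exists on the page
            document.getElementById('video').srcObject = stream;
        })
        .catch(err => console.error(err));
} else {
    console.warn('getUserMedia() is not supported in this browser');
}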
What is a MediaStream?
A MediaStream is simply a stream of audio and/or video data that is not saved on the device; it is just “passed” to a specific video or audio element as its source. Because the media data is never stored, you do not need a pre-downloaded video or audio file: the stream can be played directly as the data arrives.
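Under the hood, a MediaStream is a container of MediaStreamTrack objects (its audio and video tracks). As a rough illustration, assuming stream holds a MediaStream returned by getUserMedia(), you can inspect its tracks like this:

Javascript

// Assumes `stream` is a MediaStream obtained from getUserMedia()
stream.getTracks().forEach(track => {
    // Each track is either an audio or a video MediaStreamTrack
    console.log(track.kind, track.label, track.readyState);
});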
What is MediaStreamConstraints?
MediaStreamConstraints is an object that describes what is to be streamed (video, audio, or both), what resolution the camera should have, and so on.
// Tells getUserMedia() to open both the camera and the microphone
const mediaStreamConstraints = {
    audio: true,
    video: true
};

// Note: await must be used inside an async function
const stream = await navigator.mediaDevices
    .getUserMedia(mediaStreamConstraints);
Using the MediaStream received from getUserMedia(): If the user grants permission to access the camera and the microphone, the method returns a promise whose fulfillment handler receives a MediaStream. Usually, we pass the received MediaStream to a video or audio element as its source (using the element's srcObject property).
Javascript
navigator.mediaDevices.getUserMedia(mediaStreamConstraints)
    .then(mediaStream => {
        /* Assuming there is a video tag with the id 'video' in index.html */
        const videoElem = document.getElementById('video');

        /* It is important to use the srcObject property and not the src
           attribute, because src does not accept a MediaStream as a value */
        videoElem.srcObject = mediaStream;

        // Don't forget to set the autoplay attribute to true
        videoElem.autoplay = true;
    })
    .catch(error => {
        // Code to handle the error
    });

// Or, using async/await:
async function accessCamera() {
    const videoElem = document.getElementById('video');
    let stream = null;
    try {
        stream = await navigator.mediaDevices
            .getUserMedia(mediaStreamConstraints);

        // Add the received stream as the source of the video element
        videoElem.srcObject = stream;
        videoElem.autoplay = true;
    } catch (err) {
        // Code to handle the error
    }
}
What if the user does not give permission to access the camera and microphone?
If the user denies permission, getUserMedia() throws a NotAllowedError that we can catch in the catch block. If the user simply ignores the prompt, nothing happens (the promise is neither resolved nor rejected).
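The rejection can be told apart from other failures by looking at the error's name. The snippet below is an illustrative sketch (the element id is an assumption); the error names are the standard DOMException names:

Javascript

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(stream => {
        document.getElementById('video').srcObject = stream;
    })
    .catch(err => {
        if (err.name === 'NotAllowedError') {
            // The user (or the browser) denied permission
            console.warn('Camera/microphone access was denied');
        } else if (err.name === 'NotFoundError') {
            // No media input of the requested type was found
            console.warn('No matching media input device was found');
        } else {
            console.error(err);
        }
    });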
Now, let’s discuss MediaStreamConstraints in more detail:
MediaStreamConstraints:
This is basically an object containing all the information about what type of media to use, what resolution the camera should have, which device to use as the media input, and so on. The simplest MediaStreamConstraints object looks like this:
const constraints = {
    audio: false,  // or true if you want to enable audio
    video: true    // or false if you want to disable video
};
// Includes only the video media if available
The MediaStreamConstraints object has only two properties (members): audio and video. Both accept a boolean value that tells the browser whether to include that media content in the resulting media stream. A value of true makes the media “required” in the resulting stream; if that required media is not available, getUserMedia() throws a “NotFoundError”. For example, with the constraints above, getUserMedia() will throw a “NotFoundError” if the user's device does not have a camera.
Additionally, you can add more constraints to request media content with some preferred capabilities:
You can specify the resolution of the camera the browser should prefer:
Javascript
/* This constraints object tells the browser to prefer
   a camera with a 1200 x 800 resolution if available */
const constraints = {
    // Disables the audio media content in the resulting media stream
    audio: false,
    video: { width: 1200, height: 800 }
    // Inherently sets the video content to true, i.e. "required"
};
“Prefer”, because the above constraints object does not guarantee that the browser will use a camera with exactly that resolution. The browser first checks whether any media input matches the given constraints and, if one exists, uses it; if none match, the browser uses the device with the closest match.
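To see which settings the browser actually settled on, you can read them back from the video track with MediaStreamTrack.getSettings(). A minimal sketch, assuming stream is the MediaStream returned for the constraints above:

Javascript

// Assumes `stream` was obtained with the resolution constraints above
const [videoTrack] = stream.getVideoTracks();
const settings = videoTrack.getSettings();

// The values actually in use may differ from the preferred ones
console.log(settings.width, settings.height, settings.frameRate);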
But if you want to make some capabilities “required” (mandatory) or to apply limits on the capabilities, you can use various keywords:
1. The “min” and “max” keywords:
As the names suggest, the “min” keyword tells the browser that the corresponding media is required to have at least the given capability, and the “max” keyword tells the browser that the media must have at most the specified capability. If no media input device meets the given constraints, the returned promise is rejected with an “OverconstrainedError”.
Javascript
/* This constraints object tells the browser to use a camera with a
   resolution between 1200 x 720 and 1600 x 1080 if available;
   otherwise the returned promise is rejected with "OverconstrainedError" */
const constraints = {
    // Disables the audio media content in the resulting media stream
    audio: false,
    video: {
        width: { min: 1200, max: 1600 },
        height: { min: 720, max: 1080 }
    }
    // Inherently sets the video content to true, i.e. "required"
};
2. The “deviceId” keyword:
The “deviceId” property asks the browser to use the media input device with the given deviceId if it is available; otherwise, another available input device is used. The deviceId can be obtained from the navigator.mediaDevices.enumerateDevices() method, which returns a promise that resolves to a list of the connected cameras, microphones, headsets, etc. Each connected device has a unique, unguessable id called its “deviceId”.
Javascript
// Initialize the deviceId with null
let deviceId = null;

// enumerateDevices() returns a promise that resolves to
// a list of the connected devices
navigator.mediaDevices.enumerateDevices()
    .then(devices => {
        devices.forEach(device => {
            // If the device is a video input, remember its deviceId
            if (device.kind === 'videoinput') {
                deviceId = device.deviceId;
            }
        });

        /* Build the constraints only after enumeration has finished:
           the browser uses the camera with the given deviceId if
           available, otherwise it uses another available device */
        const constraints = {
            // Disables the audio media content in the resulting media stream
            audio: false,
            video: { deviceId: deviceId }
            // Inherently sets the video content to true, i.e. "required"
        };
    })
    .catch(err => {
        // Handle the error
    });
3. The “exact” keyword:
The “exact” property tells the browser that it is mandatory to use a media input that matches the corresponding constraint exactly; if no such input is available, the returned promise is rejected.
Javascript
/* This constraints object tells the browser to use the camera with
   exactly the given deviceId; if it is not available, the returned
   promise is rejected with "OverconstrainedError" */
const constraints = {
    audio: {
        /* The browser prefers the audio device with the given
           deviceId if available, otherwise it uses another device */
        deviceId: audioDeviceId
    },
    video: {
        deviceId: { exact: someId }
    }
    // Inherently sets both the audio and video content to true, i.e. "required"
};
4. The “ideal” property:
The “ideal” property tells the browser that the given constraint value is the ideal one and that a device matching it should be preferred. Plain (bare) property values are inherently treated as ideal.
Javascript
/* This constraints object tells the browser to prefer
   a camera with a 1200 x 800 resolution if available */
const constraints = {
    audio: true,  // Enables the audio media track
    video: {
        width: 1200,  // Same as width: { ideal: 1200 }
        height: 800   // Same as height: { ideal: 800 }
    }
    // Inherently sets the video content to true, i.e. "required"
};
5. The “facingMode” property:
Specifies whether to use the front camera or the rear camera, if available. It is mainly used on mobile devices. (A stricter variant using the “exact” keyword is sketched after the snippet below.)
Javascript
const constraints = {
    // Disables the audio media content in the resulting media stream
    audio: false,
    video: {
        facingMode: "user"  // Prefers the front camera if available
        // or facingMode: "environment" --> prefers the rear camera if available
    }
    // Inherently sets the video content to true, i.e. "required"
};
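If the rear camera is strictly required (for example, for scanning a QR code), facingMode can also be combined with the “exact” keyword; a sketch:

Javascript

const constraints = {
    audio: false,
    video: {
        // Requires the rear camera; the returned promise is
        // rejected if no such camera is available
        facingMode: { exact: "environment" }
    }
};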
Example of using getUserMedia():
HTML
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <style>
        body {
            text-align: center;
            display: flex;
            flex-direction: column;
            justify-content: center;
            align-items: center;
        }

        video {
            background-color: black;
            margin-bottom: 1rem;
        }

        #error {
            color: red;
            padding: 0.6rem;
            background-color: rgb(236 157 157);
            margin-bottom: 0.6rem;
            display: none;
        }
    </style>
    <title>GetUserMedia demo</title>
</head>
<body>
    <h1>WebRTC getUserMedia() demo</h1>

    <!-- The playsinline attribute makes the video play "inline".
         Without it, desktop browsers behave normally, but mobile
         browsers take the video fullscreen by default.
         And don't forget to use the autoplay attribute. -->
    <video id="video" width="600" height="300" autoplay playsinline>
        Sorry, the video element is not supported in your browser
    </video>

    <div id="error"></div>

    <button onclick="openCamera()">Open Camera</button>

    <script>
        const videoElem = document.getElementById('video');
        const errorElem = document.getElementById('error');

        // Declare the MediaStreamConstraints object
        const constraints = {
            audio: true,
            video: true
        };

        function openCamera() {
            // Ask the user for access to the device camera and microphone
            navigator.mediaDevices.getUserMedia(constraints)
                .then(mediaStream => {
                    /* The received mediaStream contains both the
                       video and the audio media data */
                    /* Add the mediaStream directly as the source of the
                       video element using the srcObject property */
                    videoElem.srcObject = mediaStream;
                }).catch(err => {
                    // Handle the error, if any
                    errorElem.innerHTML = err;
                    errorElem.style.display = "block";
                });
        }
    </script>
</body>
</html>
Output:
Now, if you click the “Open Camera” button, the browser will ask for your permission. If you allow it, you will see yourself on the screen; if you deny it, you will see the error right below the video element in a red box:
How to close the camera and the microphone:
So far, we have discussed how to open the camera from the browser, but we have done nothing to stop using the camera and the microphone. If you close the tab or window, the browser automatically stops using them; if you want to close the camera and the microphone yourself, you can follow the code below.
Example 2: First, add a “Close Camera” button and a closeCamera() method.
HTML
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <style>
        body {
            text-align: center;
            display: flex;
            flex-direction: column;
            justify-content: center;
            align-items: center;
        }

        video {
            background-color: black;
            margin-bottom: 1rem;
        }

        #error {
            color: red;
            padding: 0.6rem;
            background-color: rgb(236 157 157);
            margin-bottom: 0.6rem;
            display: none;
        }
    </style>
    <title>GetUserMedia demo</title>
</head>
<body>
    <h1>WebRTC getUserMedia() demo</h1>

    <!-- The playsinline attribute makes the video play "inline".
         Without it, desktop browsers behave normally, but mobile
         browsers take the video fullscreen by default.
         And don't forget to use the autoplay attribute. -->
    <video id="video" width="600" height="300" autoplay playsinline>
        Sorry, the video element is not supported in your browser
    </video>

    <div id="error"></div>

    <div id="button-container">
        <button onclick="openCamera()">Open Camera</button>
        <!-- Close Camera button -->
        <button onclick="closeCamera()">Close Camera</button>
    </div>

    <script>
        const videoElem = document.getElementById('video');
        const errorElem = document.getElementById('error');
        let receivedMediaStream = null;

        // Declare the MediaStreamConstraints object
        const constraints = {
            audio: true,
            video: true
        };

        function openCamera() {
            // Ask the user for access to the device camera and microphone
            navigator.mediaDevices.getUserMedia(constraints)
                .then(mediaStream => {
                    // The received mediaStream contains both the
                    // video and the audio media data

                    // Add the mediaStream directly as the source of the
                    // video element using the srcObject property
                    videoElem.srcObject = mediaStream;

                    // Make the received mediaStream available globally
                    receivedMediaStream = mediaStream;
                }).catch(err => {
                    // Handle the error, if any
                    errorElem.innerHTML = err;
                    errorElem.style.display = "block";
                });
        }

        const closeCamera = () => {
            if (!receivedMediaStream) {
                errorElem.innerHTML = "Camera is already closed!";
                errorElem.style.display = "block";
            } else {
                /* MediaStream.getTracks() returns an array of all the
                   MediaStreamTracks used in the received mediaStream.
                   Iterate through them and stop each one by calling
                   its stop() method */
                receivedMediaStream.getTracks().forEach(mediaTrack => {
                    mediaTrack.stop();
                });

                // Reset the variable so a second click reports "already closed"
                receivedMediaStream = null;

                errorElem.innerHTML = "Camera closed successfully!";
                errorElem.style.display = "block";
            }
        }
    </script>
</body>
</html>
Output:
Before:
After clicking the “Close Camera” button:
The closeCamera() method checks whether the camera and the microphone are already closed by inspecting the receivedMediaStream variable. If it is null, both are already closed; otherwise, it calls the getTracks() method of the received MediaStream, which returns an array of MediaStreamTracks, and stops each track by calling its stop() method.
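If you only want to turn the camera off while keeping the microphone running, you can stop just the video tracks instead. A minimal sketch using the same receivedMediaStream and videoElem variables (clearing srcObject is optional, but it releases the element's reference to the stream):

Javascript

// Stop only the video tracks, leaving any audio tracks running
receivedMediaStream.getVideoTracks().forEach(track => track.stop());

// Optionally detach the stream from the video element
videoElem.srcObject = null;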