The following code lets you take pictures inside of Jupyter notebooks. It uses JavaScript inside of JupyterHub to access the client computer's camera and transfer images back into Python.

I am particularly proud of this code because of the following features:

  • Does not require the installation of OpenCV (which can be tricky).
  • Will work with JupyterHub. This is a big one. If you run OpenCV on JupyterHub, it will look for the camera on the server, not on the client computer. Since this code runs as JavaScript in the browser, it uses the client's camera.

Some negatives to this approach:

  • Does not work in JupyterLab. By default, JupyterLab does not allow notebook output to run arbitrary JavaScript as a security measure.
  • Is not fast enough to transmit video. This is because I transmit each image as a base64 text string (a data URL), which cannot move enough frames in a reasonable amount of time. There may be a way to record the video inside of JavaScript and then transmit the entire clip, but I have not figured that out.
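To give a rough sense of why the per-frame data URL is too slow for video: base64 inflates the payload by a factor of 4/3, and every frame must be round-tripped through kernel.execute as one long string. A back-of-envelope sketch (the 60 KB frame size here is a made-up but plausible figure for a compressed 320x240 PNG):

```python
import base64

# Hypothetical size of one compressed 320x240 PNG frame; real sizes vary
frame_bytes = 60_000

# base64 encodes every 3 bytes as 4 characters
encoded_len = len(base64.b64encode(b"\x00" * frame_bytes))
print(encoded_len)  # 80000 characters per frame
# At 30 frames/sec that is roughly 2.4 MB of text per second
# pushed through kernel.execute -- far too much for smooth video.
```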

Step 1: Access the camera in JavaScript

This program works in two major steps. The first step is written in JavaScript. In summary, the code creates a video element attached to the local camera, a canvas for capturing a frame, and a simple button. When the user presses the button, a picture is taken and the canvas contents are saved as a data URL. That string is passed back to the Python kernel using the IPython.notebook.kernel.execute command.

In [ ]:
# Code developed by Dirk Colbry
# This code snippet tries to read from your computer's camera.  It is not fully tested so it may not work for everyone.

from IPython.display import HTML

main_text = """
<video id="video" width="320" height="240" autoplay></video>
<button id="snap">Snap Photo</button>
<canvas id="canvas" width="320" height="240"></canvas>

<script>
// Grab elements, create settings, etc.
var video = document.getElementById('video');

// Get access to the camera!
if(navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    // Not adding `{ audio: true }` since we only want video now
    navigator.mediaDevices.getUserMedia({ video: true }).then(function(stream) {
        //video.src = window.URL.createObjectURL(stream);
        video.srcObject = stream;
        video.play();
    });
}

// Elements for taking the snapshot
var canvas = document.getElementById('canvas');
var context = canvas.getContext('2d');

// Trigger photo take
document.getElementById("snap").addEventListener("click", function() {
    context.drawImage(video, 0, 0, 320, 240);
    var image = canvas.toDataURL("image/png");
    IPython.notebook.kernel.execute("image = '" + image + "'");
});
</script>
"""

HTML(main_text)


Step 2: Convert string back into image

We can now access the data-URL string from inside of Python. The following code does all of the magic: it strips the data-URL header, decodes the base64 payload into bytes, wraps the bytes in an in-memory IO stream, and passes that stream to PIL. The end result is an image in the Python Imaging Library (PIL) format.

In [ ]:
from PIL import Image
import base64
import io

pil_im = Image.open(io.BytesIO(base64.b64decode(image.split(',')[1])))
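The cell above only works after the Snap Photo button has set the image variable in the kernel. As a self-contained sketch of the same decoding logic, the following builds a tiny PNG data URL with PIL (standing in for the string the JavaScript sends back) and then decodes it exactly the same way:

```python
import base64
import io

from PIL import Image

# Build a tiny red PNG and wrap it as a data URL, standing in for
# the string that the JavaScript snippet assigns to `image`
buf = io.BytesIO()
Image.new("RGB", (2, 2), (255, 0, 0)).save(buf, format="PNG")
image = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

# Same decoding logic as the cell above: drop the header before the
# comma, decode the base64 payload, and open the bytes with PIL
pil_im = Image.open(io.BytesIO(base64.b64decode(image.split(',')[1])))
print(pil_im.size)  # (2, 2)
```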

Step 3: (Optional) Convert PIL image to Numpy array

Typically I like to work with images as a 3D numpy array (rows, columns, channels). The following code just converts the PIL image into a numpy array and keeps the first three (RGB) channels, dropping the alpha channel that comes with the PNG.

In [ ]:
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np

im3 = np.array(pil_im)
im3 = im3[:, :, 0:3]  # keep RGB, drop the alpha channel of the RGBA PNG
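As a quick sanity check of the slice above, here is a self-contained sketch that uses a dummy RGBA array in place of the captured frame (the 240x320 size matches the canvas dimensions used earlier):

```python
import numpy as np

# Dummy 240x320 RGBA frame standing in for np.array(pil_im)
fake_rgba = np.zeros((240, 320, 4), dtype=np.uint8)
fake_rgba[..., 3] = 255  # fully opaque alpha channel

# Same slice as above: keep only the three RGB channels
im3 = fake_rgba[:, :, 0:3]
print(im3.shape)  # (240, 320, 3)
```

With a real capture, plt.imshow(im3) will then display the photo inline in the notebook.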

I hope you found this example useful. Please leave a comment if you use it in your project. I would really like to see how it is used.

  • Dirk
