Experiments with Oculus Rift, Kinect and a drone

Recently, thanks to my friends at Digitalmind, I was able to get my hands on an Oculus Rift developer kit and an AR.Drone.

I wanted to start by simply streaming the video feed of the Parrot to the Oculus.
Initially, I thought of putting the video on a wall of a 3D room, so I started from a project in XNA with the idea of converting it to MonoGame later.

The first problems arose almost immediately. Streaming and handling the video was, at least for me, something very new, so I spent some hours searching for how to handle an H.264 stream in C#. FFmpeg was the answer.
Once FFmpeg is downloaded and decompressed into a folder, to connect to the Parrot you simply open a command prompt and type:

ffplay tcp://

A window will pop up with the video signal; so far so good.
The next step was to find a decent wrapper for FFmpeg in C#. There are some projects out there, but I found them a little too young/unstable. After more digging I found this wrapper, and, with a small modification to the file names of the imported DLLs, it worked like a charm. I took the class with all the P/Invoke declarations and imported it into my project. The guy who implemented the wrapper is also developing a Parrot client in C#; I learned a lot from him.
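For reference, a wrapper like this boils down to a set of P/Invoke declarations against the FFmpeg DLLs. A minimal sketch of what they look like; the DLL file name here is an assumption and must match the exact file shipped with your FFmpeg build, which is the small modification I mentioned:

```csharp
using System;
using System.Runtime.InteropServices;

static class FFmpegInterop
{
    // Assumption: adjust this to the DLL version that ships with your FFmpeg build
    const string AvCodecDll = "avcodec-55.dll";

    [DllImport(AvCodecDll, CallingConvention = CallingConvention.Cdecl)]
    public static extern void avcodec_register_all();

    // Returns an AVCodec* for the given codec id (H.264 in our case)
    [DllImport(AvCodecDll, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr avcodec_find_decoder(int codecId);

    // Allocates an AVCodecContext* for the codec returned above
    [DllImport(AvCodecDll, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr avcodec_alloc_context3(IntPtr codec);
}
```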

Once I had my video working and mapped to a Texture2D, I just needed to implement the stereoscopic view for the Oculus.
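Getting each decoded frame into the Texture2D is just a SetData call. A minimal sketch, assuming the wrapper delivers a 640x360 frame as a 32-bit-per-pixel byte array matching SurfaceFormat.Color (the callback name is hypothetical):

```csharp
// Hypothetical callback invoked by the FFmpeg wrapper for each decoded frame.
// frameData must be 640 * 360 * 4 bytes, in the same layout as SurfaceFormat.Color.
private void OnFrameDecoded(byte[] frameData)
{
    videoTexture.SetData(frameData);
}
```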

To do that, I wrote a shader to apply to the final frame. With SetRenderTarget I rendered my frame with everything on it (video and some text), and then drew it with the shader:

RenderTarget2D renderTarget;
Texture2D videoTexture;
Texture2D renderTexture;

protected override void LoadContent()
{
    renderTarget = new RenderTarget2D(GraphicsDevice, graphics.PreferredBackBufferWidth, graphics.PreferredBackBufferHeight);
    videoTexture = new Texture2D(GraphicsDevice, 640, 360, false, SurfaceFormat.Color);
}

private Rectangle leftSize;
private Rectangle rightSize;
private int viewportWidth;
private int viewportHeight;

private void UpdateSize()
{
    // If the window size has not changed, I do not need to do the math again
    if (viewportWidth != GraphicsDevice.Viewport.Width || viewportHeight != GraphicsDevice.Viewport.Height)
    {
        viewportWidth = GraphicsDevice.Viewport.Width;
        viewportHeight = GraphicsDevice.Viewport.Height;
        leftSize = new Rectangle(0, 0, viewportWidth / 2, viewportHeight);
        rightSize = new Rectangle(viewportWidth / 2, 0, viewportWidth / 2, viewportHeight);
    }
}

Effect oculusRiftShader;
protected override void Draw(GameTime gameTime)
{
    // Here I draw everything needed into the render target
    GraphicsDevice.SetRenderTarget(renderTarget);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    spriteBatch.Draw(videoTexture, new Vector2(0, 40), Color.White); // 1280x720 centered on 1280x800
    // Create rudimentary HUD and draw some text
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);

    renderTexture = (Texture2D)renderTarget;

    // Then I pass the texture to the shader, one pass per eye
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive, null, null, null, oculusRiftShader);
    spriteBatch.Draw(renderTexture, leftSize, Color.White);
    spriteBatch.End();

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive, null, null, null, oculusRiftShader);
    spriteBatch.Draw(renderTexture, rightSize, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}
From the Oculus SDK you get all the info needed to create the shader.
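Since the lens distortion is centered differently for each eye, the render texture is drawn in two passes with different shader parameters. A hedged sketch of a per-eye draw helper; the parameter name LensCenter follows the SDK's distortion sample, but your shader may use different names and additional parameters (Scale, HmdWarpParam, …):

```csharp
// Hypothetical helper: one distortion pass per eye with its own lens center.
private void DrawEye(Rectangle eyeRect, Vector2 lensCenter)
{
    oculusRiftShader.Parameters["LensCenter"].SetValue(lensCenter);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive, null, null, null, oculusRiftShader);
    spriteBatch.Draw(renderTexture, eyeRect, Color.White);
    spriteBatch.End();
}
```

You would then call `DrawEye(leftSize, leftLensCenter)` and `DrawEye(rightSize, rightLensCenter)`, with the two centers computed from the HMD info the SDK exposes.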

Here is my test with the video.

After about a week, I have added some new functionality:

  • Takeoff and landing with voice commands via Kinect.
  • The Parrot turns left or right when I turn my head left or right.
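The voice commands use the Kinect's microphone array with the Microsoft.Speech recognizer. A minimal sketch of the grammar side, assuming recognizerInfo comes from the Kinect audio setup and drone is a hypothetical AR.Drone client exposing Takeoff() and Land() (the real wrapper's method names may differ):

```csharp
using Microsoft.Speech.Recognition;

var engine = new SpeechRecognitionEngine(recognizerInfo.Id);

// Only two phrases are needed for now: takeoff and land
var commands = new Choices("take off", "land");
engine.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

engine.SpeechRecognized += (sender, e) =>
{
    if (e.Result.Confidence < 0.8f) return; // ignore low-confidence matches
    if (e.Result.Text == "take off") drone.Takeoff();
    else if (e.Result.Text == "land") drone.Land();
};

engine.RecognizeAsync(RecognizeMode.Multiple);
```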

Here is a video of my progress.

Now I’m working on implementing face tracking on what the Parrot is seeing (maybe I can write some code to have the Parrot follow a specific face), and I’m still not sure whether to use Kinect gestures to handle the other commands of the Parrot, or the Razer Hydra for better handling.

When my code is decent and clean enough (right now it is a total mess of trial and error), I will post it on GitHub.