An innovative VR filmmaking tool


Angela Chen

Arnav Banerji

Kevin Liu

Varun Mehra

Shera Zhan


Producer, UX Designer


16 weeks, Jan - May 2020


Virtual Reality


Project Management


Foresight is a semester project at the Entertainment Technology Center of CMU in Spring 2020. I worked as the producer and UX Designer in a team of 5.


Our team's goal is to explore creative solutions for visualizing entire movie scripts, helping filmmakers understand their story before production begins. To make this possible on a short timeline, with shot-based iterations, we are developing a pipeline that leverages both traditional and non-traditional 3D techniques to show rough versions of the film, favoring speed over complexity.


  • Led research and concept development

  • Designed and prototyped user interface for the VR camera system

  • Organized team meetings, set up and shared a team schedule, and re-evaluated it regularly to keep the project on track

  • Built the project website and updated it regularly with photos, videos, and development blogs

 Virtual Camera System 

The main goal of our virtual camera system is to empower filmmakers to understand, prototype, and control the cinematography elements in the story, mainly the physical dimensions of the set and the camera movements.

For the camera movements, we strive to create an adjustable camera that recreates the properties of real-life cameras, and to implement different navigation schemes in VR so that filmmakers can experiment with more shots in a fixed amount of time.

Features in the system:

  • Vertical and Horizontal Navigation

  • Tilting & Panning

  • Dollying / Creating Camera Track

  • Adjusting Depth of Field & Racking Focus

  • Saving and loading camera motion
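As a rough illustration of the last feature, a camera move can be stored as a list of timed keyframes and serialized to a file per shot. The project itself is built in Unity, so this is only a minimal Python sketch; the `CameraKeyframe` fields and function names are assumptions for illustration, not the actual implementation.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CameraKeyframe:
    time: float            # seconds from the start of the shot
    position: list         # world-space [x, y, z] (assumed meters)
    rotation: list         # euler angles [pitch, yaw, roll] in degrees
    focus_distance: float  # drives depth of field / rack focus

def serialize_motion(keyframes):
    """Turn a recorded camera move into JSON text (one file per shot)."""
    return json.dumps([asdict(k) for k in keyframes])

def deserialize_motion(text):
    """Rebuild the keyframes from a saved shot, ready to replay."""
    return [CameraKeyframe(**k) for k in json.loads(text)]
```

Replaying a saved move is then just a matter of interpolating between neighboring keyframes at the current playback time.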

 Motion Capture System 

Traditional film previz relies either on keyframed animation done by hand, which is time-consuming, or on motion capture, which requires a dedicated space and setup and significant clean-up afterwards. We are using a VR setup to quickly block out character motion with an inverse-kinematics-driven rig. The animation is expected to look rough; it is not intended to replace the real performance an actor would bring to a set.

We originally designed the system to be used by three people: one acting, one operating the controls, and one directing the performance. After the project went remote, that changed. We had reserved the controllers for capturing hand poses, but we now use the trigger to calibrate and to start and stop recording. We also added a countdown so the performer can get into the right pose before recording begins.
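The single-trigger flow described above behaves like a small state machine: one press starts a countdown, recording begins when it hits zero, and a second press stops the take. This is only a hypothetical Python sketch of that flow (the real system runs in Unity); the state names and frame counts are illustrative assumptions.

```python
class TakeRecorder:
    """Single-controller recording: trigger starts a countdown,
    recording begins when it reaches zero, trigger again stops."""

    def __init__(self, countdown_frames=90):  # e.g. ~3 s at 30 fps (assumed)
        self.countdown_frames = countdown_frames
        self.state = "idle"
        self.frames_left = 0
        self.frames = []

    def on_trigger(self):
        if self.state == "idle":
            self.state = "countdown"
            self.frames_left = self.countdown_frames
        elif self.state == "recording":
            self.state = "idle"        # stop; the take stays in self.frames

    def tick(self, pose):
        """Called once per frame with the performer's current pose."""
        if self.state == "countdown":
            self.frames_left -= 1
            if self.frames_left <= 0:
                self.state = "recording"
                self.frames = []       # a new take starts clean
        elif self.state == "recording":
            self.frames.append(pose)
```

The countdown gives the performer time to hit the starting pose before any frames are kept.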

We record one animation for one character at a time, working from an animation list. Each clip is exported to its own file, and we do roughly 1–5 takes per clip. Selected clips are then imported manually into a Unity Timeline to build a whole sequence of animations. We do no cleanup or post-processing on the animation beyond trimming the start and end times or repositioning the character in space. Since the mocap space is small, there are small time jumps in the sequence, which can sometimes be a limitation from the camera's perspective, but it works for previz.
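The assembly step — laying selected clips onto a timeline one after another — can be sketched like this. In practice the clips are placed on a Unity Timeline by hand; `build_sequence` and its tuple format are illustrative assumptions, not project code.

```python
def build_sequence(selected_takes):
    """Lay selected clips end to end on a timeline.

    selected_takes: list of (character, clip_name, duration_seconds).
    Returns (character, clip_name, start_time) entries, a stand-in for
    placing clips on a Unity Timeline manually.
    """
    timeline, t = [], 0.0
    for character, clip, duration in selected_takes:
        timeline.append((character, clip, t))
        t += duration  # next clip starts where this one ends
    return timeline
```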

For scenes where interaction between two characters is crucial, we found it useful to record one performance first and then play it back on demand while capturing the second. That way the second character can respond to the first and better match the positioning and timing.
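That record-then-play-back workflow can be sketched as stepping through character A's saved take frame by frame while sampling character B's live pose, so the two takes share a common clock. The `capture_pose` sampler here is a hypothetical stand-in for the live VR tracking input.

```python
def record_against(playback, capture_pose):
    """Record character B's take in sync with character A's recorded take.

    playback: character A's recorded frames, replayed one per tick.
    capture_pose: callable sampling B's live pose for the current frame
                  (hypothetical stand-in for the VR tracking input).
    """
    take_b = []
    for frame_a in playback:
        # A's frame is shown to the performer; B's pose is sampled in response.
        take_b.append(capture_pose(frame_a))
    return take_b
```

Because both takes advance on the same frame index, they line up when dropped onto the shared timeline afterwards.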

Here is the demo video:

For further details and findings, please see the full project documentation: