(Seattle, WA--November 29, 2017) Since launching in late 2015, Seattle-based MocapNow
has become a premier mocap service provider in the Pacific Northwest, delivering a range of motion capture services, from full performance capture to custom rig creation, data cleanup, and the building or optimization of external motion capture systems. The studio’s credits span feature films, video games and virtual reality productions, including HBO’s Emmy Award-winning “Westworld” VR experience, Epic Games’ “Paragon” announcement trailer and the upcoming PlayStation VR game “Golem” from Highwire, among other projects.
Co-owners CJ Markham and Ander Bergstrom have almost 25 years of combined animation and mocap experience, having worked on AAA games such as "Halo 5" at 343 Industries and "Grand Theft Auto" at Rockstar Games, as well as feature films, including VFX Academy Award-winning "King Kong" at Weta Digital and Best Animated Feature Oscar winner "Happy Feet" at Animal Logic. While setting up a mocap stage for Rockstar London in 2007, Markham encountered his first OptiTrack system and has since watched the technology's progression firsthand, providing feedback on new features. A longtime OptiTrack enthusiast, Markham integrated the technology into MocapNow's workflow, outfitting both a spacious 70 x 30 foot stage with 17-foot ceilings for full body capture and an intimate 18 x 18 foot sound stage for simultaneous audio and facial capture with OptiTrack systems.
“Motive’s one-button click character setup is huge. Part of the reason people use mocap is for speedy turnaround at high quality, and that feature saves at least an hour each session,” Markham shared. “Connected to that is the ability to export the FBX actor file directly into content creation pipelines, allowing animators to make edits without significant time or budget costs. When data goes directly to a skeleton, as with some other software, it can limit an animator’s artistic influence and control over the pipeline to fix issues, resulting in subpar deliverables.”
MocapNow’s performance capture setup features 34 cameras, a mix of Prime 41s, Prime 17s, Prime 13s and Prime 13Ws all running through Motive software. Sessions are typically captured at 180 FPS, often with five to six simultaneous performers. When clients want to see CG characters mirroring a real-time performance during sessions, MocapNow uses Motive’s live stream plug-in to create the necessary effects.
Unlike Markham who has been using Motive throughout its evolution, Bergstrom was introduced to the software more recently. “Motive is really powerful, but also really simple to use,” Bergstrom explained. “The software is easy to learn, which means I don’t have to stress about an extended ramp up time when we bring in contractors to help with projects; they’re up and running in less than a day.”
MocapNow executes facial capture using 24 Flex 13 cameras, Motive software and, as Markham puts it, “a whole lot of facial markers.” The setup is designed to capture two performers simultaneously, a process MocapNow has tailored for VR experiences. Unlike linear entertainment with a fixed POV, VR allows users to explore the entire environment. If VR conversation scenes were filmed like traditional features, cinematics or cut scenes (one participant at a time, then spliced together), the result would be dead space on the non-speaking character, which may be exactly what the VR viewer chooses to focus on, compromising the quality of the experience.
“We've taken a new approach to facial capture, addressing the specific pitfalls facing VR companies,” said Markham. “Capturing high quality audio and facial for two performers simultaneously is unprecedented, and our clients couldn’t be happier with the results we deliver with help from the OptiTrack systems.”
Whether collaborating with major studios or indie developers, MocapNow ensures a smooth, efficient motion capture experience that delivers the highest-quality animation for each project.