Live performance for Adobe Remix using Rokoko motion capture

July 11, 2017
5 min read
By Rokoko

We recently had the pleasure of being part of a fantastic collaboration with Australian creative studio S1T2, who tackled the opportunity to interpret and remix the Adobe logo. The end result was performed live at the Sydney Opera House for the 2017 Adobe Symposium. We interviewed Chris Panzetta, project lead at S1T2, to find out how the team brought their concept from idea to polished performance.

Here is a video documenting the process:

1. Where did the story idea come from?

Because it was about remixing the Adobe logo, we really wanted to tell the story of how the company was founded: a computer scientist, John Warnock, developed a program to help his wife, Marva, a graphic designer, create. It really is the age-old story of science inspiring art, so this was a story about the relationship between the two - each initially cautious and ignorant of the other, then developing a relationship that emboldens them both.

2. How did you envision this project unfolding?

We knew it was going to be like trying to get home blind drunk. You don’t really know how, but somehow you make it. We had 7 weeks, from concept to curtain call, and we’d never done anything like it - let alone at a packed Sydney Opera House. There were plenty of moving parts too: the musical composition, the choreography, the data capture, and most importantly, the visualisation. The anxiety was not centered around “can we do it”, because there are usually many paths to functionality. However, beauty, aesthetics, emotion - these things take time and iteration to craft, and that's not something you can do while you're working on plugin bugs and whatnot. One thing we didn’t envision, though, was having to change our mocap tech halfway through the project, but you always put your faith in the team over the tech. When the Rokoko team were crazy enough to join our crew, we knew we could make it work.

3. What were the happy surprises along the way?

There were lots of little ones, but the biggest were in the fidelity of the translation. Human performance is beautiful - it’s organic and imperfect. The fact that those attributes started to translate very early on in the pre-visualisation was exciting. But for me, it was the addition of the cello. It was a last-minute add-in from our CD, and it just gave such a nice, dark, emotional depth to the piece.

4. What challenge did you face with needing live data on a stage, and how did you solve that?

The optical system we were originally planning to use had insurmountable operational problems, and because the Smartsuit that we were using was a very early prototype, it had its own quirks too. We were scared the environment at the Opera House could drop or corrupt the data over Wi-Fi, and because we were using the suit for one long take with no pauses to reset, it would also become less responsive over time. We couldn't have these issues on stage, so we had to devise a system that would keep these errors in check so the show could go on. To solve that, we used a 'blend' between the raw data and a prerecorded mocap session. On each frame we would check the raw data's position against the prerecorded data to make sure that all of Naomi's limbs were in the right place. If they weren't, we would blend between the last position of the raw data and the prerecorded data. It was a patchwork fix, due mostly to the schedule, and not one we imagine having to repeat in the future.
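To make that idea concrete, here is a minimal C++ sketch of the kind of per-frame check-and-blend described above. Everything in it - the joint count, the error threshold, the easing amount and the function names - is an assumption for illustration, not S1T2's actual show code.

```cpp
// Minimal sketch: trust live mocap data while it stays close to a
// prerecorded reference take; otherwise ease from the last good live
// pose toward the prerecorded one. All names/values are hypothetical.
#include <array>
#include <cmath>
#include <cstddef>
#include <cstdio>

struct Vec3 { float x, y, z; };

constexpr std::size_t kNumJoints = 19;            // assumed skeleton size
using Pose = std::array<Vec3, kNumJoints>;

static float distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// For each joint: if the live position is within a plausibility radius of
// the prerecorded take, keep it; if it has jumped (dropped packets, drift),
// blend from the last good live position toward the prerecorded pose.
Pose blendFrame(const Pose& live, const Pose& lastGoodLive,
                const Pose& prerecorded,
                float maxJointError, float blendAmount) {
    Pose out;
    for (std::size_t i = 0; i < kNumJoints; ++i) {
        if (distance(live[i], prerecorded[i]) <= maxJointError) {
            out[i] = live[i];                              // live data looks sane
        } else {
            out[i] = lerp(lastGoodLive[i], prerecorded[i], // graceful fallback
                          blendAmount);
        }
    }
    return out;
}

int main() {
    Pose live{}, lastGoodLive{}, prerecorded{};
    live[0] = {5.0f, 0.0f, 0.0f};   // simulate one glitched joint
    const Pose out = blendFrame(live, lastGoodLive, prerecorded,
                                /*maxJointError=*/0.3f, /*blendAmount=*/0.25f);
    std::printf("joint 0 after blend: %.2f %.2f %.2f\n",
                out[0].x, out[0].y, out[0].z);
}
```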

5. For you, stories come first, but in this case, did the tech shape its evolution to any extent?

I think the story is great initially to give everyone an idea of what you want to achieve. That’s what it's for: to give everyone an idea. With anything creative, you want to leave room for your artists and technicians to contribute and take it to places you never thought possible. Emerging technologies always play this role too, as creating with them is a process of discovery. What works well and what doesn’t? For example, the suit doesn’t work so well rolling across the floor, so maybe we’ll cut that down a fair bit. Live performance and interactive story require even more room to evolve. You need to leave that space for life to get in and shape the outcome. In this way you're not really storytelling anymore, but story-making, which is super exciting.

6. What differences did you notice between working with optical and inertial mocap systems? Why wasn’t your existing optical system suitable for this project?

The differences defined a lot of what the project became. We initially based the project on our existing optical systems. They were ours and a known quantity, but that went out the window when, halfway through the project, we realised there wouldn’t be time to bump them in and off stage at the Opera House for such a performance. So it was almost game over for the concept. The first major difference was that the inertial suits left a much smaller technical and financial footprint. In theory you just need a Wi-Fi signal and you're away. But where optical systems deal with optical challenges such as light, reflection, and so on, inertial systems are somewhat susceptible to metal and interference, so they had their own environmental factors to be aware of that would create drift. Obviously inertial systems aren’t as accurate or pinpoint as an expensive optical system, but they are a much more accessible and portable solution. It really democratises the whole process, which means many more people can access and play around with the tech. Fortunately, our visualisations were abstractions and not about pinpoint accuracy, so that didn’t affect our outcomes too much.

7. How did your team go about previsualizing a live performance?

It took a lot of forms, from story arcs to references to mood boards, but when it got down to it, it’s like any live performance: practice makes perfect. The same way a dancer will practice her performance and hone it down to its essence, that’s what we needed to do. Once we had a music track and some ideas for movement, we captured a lot of the early choreography on video. But it was pretty crucial for our tech artists to get to work on physical behaviors as soon as possible in order to make sure it all looked sexy and not just a generative mess. So we used our existing optical rig to capture our first iteration of the performance - which we later discovered you can do with the suit too.

We then took this data into Cinema 4D and used some quick particle plugins to test the movement and flow of particles with different forces and her movements. It also helped at this stage to start playing with camera movements to see how that added to the effect. While this was going on, the team was building the tools, so once we had an idea of the look we were going for, we could work on simulating it in OF. We had to do a similar process for the pianist's music: since we were taking it straight from the MIDI feed, we needed to quickly develop a previs system to get an idea of how many particles were enough or too many.
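As a rough illustration of that kind of MIDI-to-particle previs, here is a small C++ sketch that maps a note-on's velocity (and, loosely, its pitch) to a particle budget so you can quickly judge "how many is enough". The mapping, constants and names are hypothetical; the real system was built in openFrameworks and tuned by eye.

```cpp
// Hypothetical previs helper: turn incoming MIDI note-on events into a
// particle budget. Constants and the mapping itself are assumptions.
#include <algorithm>
#include <cstdio>

struct MidiNoteOn {
    int pitch;      // 0-127
    int velocity;   // 0-127, how hard the key was struck
};

// Louder notes spawn more particles, higher notes slightly fewer,
// clamped so the scene stays readable on stage.
int particlesForNote(const MidiNoteOn& note,
                     int baseCount = 40, int maxCount = 400) {
    const float loudness   = note.velocity / 127.0f;        // 0..1
    const float brightness = 1.0f - note.pitch / 254.0f;    // gentle pitch falloff
    const int count = static_cast<int>(baseCount + loudness * brightness * maxCount);
    return std::clamp(count, 0, maxCount);
}

int main() {
    // Quick previs check: compare a soft low note with a hard high note.
    const MidiNoteOn soft{36, 40}, hard{84, 120};
    std::printf("soft low note  -> %d particles\n", particlesForNote(soft));
    std::printf("hard high note -> %d particles\n", particlesForNote(hard));
}
```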

8. How did it affect your overall workflow?

The previs gave us a solid direction, but the best thing about realtime is that you get instant feedback, so we could rely on iterating the performance as we collaborated, based on what was working and what wasn’t - which was super important with such a tight deadline. It let us work in kind of creative silos, then come together to jam it all together, before breaking off and repeating the process.

9. Why was it essential for the data to be live projected, rather than a recording synced to the choreography?

Interactive storytelling is about letting the moment contribute to the experience and the story. So everything we do is exploring the interactive nature of things and how that affects the outcome. We think if an experience reacts to your interactions uniquely it will be a much more memorable and impactful one. So this was another way we explored these live technologies as creative tools. What we have created can be used by another dancer on another stage and change to their own interpretation and movements, so it has an infinite number of possible performances.

10. After this experience, do you hope to work with live performance motion capture again?

Most definitely. We do a lot of VR work as well, so our next step will be to take these inertial systems into virtual worlds. Live performance mocap is, essentially, another great leap towards more natural human-computer interaction, and that’s what's really exciting. The LIVE element isn’t what's important, it's the UNIQUE element: the temporal difference in this space and time allows the interactor or the audience to create their own memories and experiences and explore them personally, and therefore, we hope, touches them far more deeply. With AI, real-time technologies, ubiquitous networks, and computing power, these types of experiences are within reach. Like all storytellers of the past, we want to use them to create greater curiosity and equal understanding.
