VTubing (from VTuber, short for Virtual YouTuber) is a form of live streaming in which you use motion capture to control a 2D or 3D virtual avatar instead of showing your real-life face and body.
Like most wacky things, VTubing got its start in Japan. It’s still relatively new (about five years old) and was quickly adopted by Japanese streamers who tapped into their “otaku” (mega fan) anime audience.
One of the key selling points of VTubing to streamers worldwide is the ability to protect their identity while still gaining fame and fortune. As a VTuber, you’re in total control of your gender, voice, race (fantasy or otherwise), and identity.
What is VTubing? And how late am I to the trend?
As of 2021, VTubing has a solid subculture following worldwide.
For the English-speaking side of the internet, VTubing was an underground, indie thing - a subculture within a subculture of gamers. It suddenly became a big deal in late 2020, when the famous Twitch streamer Pokimane made live-streaming history by gaming with a lookalike 3D anime avatar. Shortly after, PewDiePie jumped in on the trend.
While the internet has not always been a fan of big stars suddenly hiding behind a kawaii anime avatar because they don’t feel like showing their face on camera, VTubing remains extremely popular in niche communities.
For Kizuna AI, the most famous Japanese VTuber, that popularity translates into 3 million YouTube subscribers.
Japanese VTubing vs. Western VTubing
Most western-style VTubing is done live by amateurs or independent creators, and the content leans towards gaming and reaction videos. One exception is one of the very first VTubers, Ami, who vlogs using a Pixar-style 3D model.
If we look to futuristic Japan, VTubers are so much more than “for fun” personalities. VTuber characters are developed by studios and used as influencers for promotions and fan interactions. While independent VTubers exist, the most popular ones are created by studios like Polygon Pictures. Voice actors are chosen, extensively trained, and paid to ‘play’ the character.
How does VTubing work?
VTubers apply their motions to a digital avatar using motion capture technology, usually in real time.
This tech can be as rudimentary as the software Instagram uses to give you bunny ears. Or it can be as sophisticated as the hardware used to create animated feature films.
To live stream using an avatar, you will need:
- A rigged 3D or 2D character.
- 2D or 3D software that supports real-time motion capture, such as Blender, Unity, Unreal, iClone, CTA4, or Live2D.
- Mocap hardware such as tracking markers and mocap suits.
Once a VTuber has their avatar rigged (rigging is the process of giving the model a movable skeleton), they can throw on mocap tracking markers or a suit, fire up software that can handle live streaming, and they’re ready to go.
Here’s a full video tutorial on how to do this with Unity.
In the video below, we take a look at a VTuber workflow for Unreal Metahumans using Rokoko motion capture and OBS.
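To make the workflow above a little more concrete, here is a minimal, purely illustrative Python sketch of the step every engine performs each frame: taking captured tracking values (here, a blendshape weight from face capture) and mapping them onto the avatar’s geometry. The function and shape names are hypothetical; in a real pipeline this happens inside Unity, Unreal, or Live2D rather than hand-written code.

```python
# Illustrative sketch only: each frame, captured tracking values
# (blendshape weights from face capture) deform the model's mesh.
# All names and data below are hypothetical.

def apply_blendshapes(neutral, shapes, weights):
    """Blend vertex positions: neutral + sum(weight * (shape - neutral))."""
    result = []
    for i, v in enumerate(neutral):
        x, y, z = v
        for name, weight in weights.items():
            sx, sy, sz = shapes[name][i]
            x += weight * (sx - v[0])
            y += weight * (sy - v[1])
            z += weight * (sz - v[2])
        result.append((x, y, z))
    return result

# A hypothetical two-vertex "mouth": neutral pose plus a fully open jaw shape.
neutral = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
shapes = {"jaw_open": [(0.0, -0.5, 0.0), (0.0, 1.0, 0.0)]}

# A tracker reporting the jaw 40% open moves the lower vertex 40% of the way.
frame = apply_blendshapes(neutral, shapes, {"jaw_open": 0.4})
print(frame[0])  # lower vertex pulled down by 0.4 * 0.5 = 0.2
```

The same blend runs for dozens of shapes (blinks, brows, visemes) at 30-60 frames per second, which is why low-polygon models matter so much for live use.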
How do I make money by creating VTubing avatars?
It all goes back to the age-old practice of custom commissions.
Streamers and YouTubers will need to invest a minimum of $500 for a simple 2D character with very basic movements.
For a character in 3D, creators are on average willing to pay $1,500 to $2,500 for the model and rigging. For large-scale projects such as the ones run by Hololive, you can expect costs of $10,000 and up.
Make money by creating character model sheets for VTubers
Character model sheets and concept art are the starting point for every single character. These model sheets are what a 3D modeler will use as a reference while creating the model.
To create character model sheets, you need a high level of drawing skill. In some cases, your concept art will be used to create 2D rigs (instead of 3D models). 2D rigging commissions tend to be more polished and more expensive than simple model sheets.
Make money by creating 3D models for VTubers
3D models for VTubers are pretty similar to 3D models for games, both in complexity and in rigging. Your models will need to render in real time, so low-polygon modeling is a must!
Here’s what skills a 3D VTuber modeler needs:
- Experience in game development
- Experience in 3D animation
- Understanding of current-gen workflows in systems like Unity
- Strong skills in Maya or Blender (ZBrush is a plus)
- An understanding of (if not hands-on experience with) “limited animation”, the technique of making 3D look like 2D
Make money by rigging 3D models for VTubers
In most cases, 3D modelers will also rig the model to make it motion capture ready. However, in the case of 2D models, the rig is often done by a separate artist.
When rigging for VTubing, you will:
- Need to work with a real-time engine used for streaming, such as Unity
- Be able to rig IK joints
- Understand muscle mass and account for it at a basic level
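As a hedged illustration of the IK point above, here is what an IK joint actually solves: given bone lengths and a target (say, where a tracked wrist should go), find the joint angles that reach it. The sketch below is an analytic two-bone solver in 2D (think shoulder-elbow-wrist) in plain Python. Real rigs set this up with IK constraints inside Unity, Maya, or Blender rather than hand-written code, but the underlying math is the same; all names are illustrative.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Return (shoulder_angle, elbow_angle) in radians so a chain of
    bone lengths l1, l2 rooted at the origin reaches target (tx, ty)."""
    d = math.hypot(tx, ty)
    d = min(d, l1 + l2)  # clamp: out-of-reach targets fully extend the arm
    # Law of cosines gives the interior elbow angle.
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # The shoulder aims at the target, corrected for the elbow bend.
    cos_shoulder = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder, elbow

# Fully extended reach along the x-axis: both angles come out zero.
s, e = two_bone_ik(1.0, 1.0, 2.0, 0.0)
print(round(s, 6), round(e, 6))  # 0.0 0.0
```

With IK in place, the mocap system only has to supply end-effector positions (hands, feet, head) and the rig fills in the rest of the pose, which is what makes consumer-grade tracking hardware viable for VTubing.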
Offer custom animation segments and trailers for VTubers
Once the VTuber avatar is complete, popular channels with budget to spare might commission custom animations. In most cases this is done by the 3D artist who built the original model.
Here’s an example segment of a VTubing character in an animated sequence (as opposed to raw live-streaming animation).
How will VTubers use the 3D model I create?
VTubers will use motion capture technology and software to live stream their reactions as the animated character.
It’s possible to use mocap software alone to capture basic facial and body movements. But for the best results, your customers should have proper motion capture hardware.
There’s a misconception that such hardware is so expensive that only Hollywood studios can afford it. That may have been true in the past, but affordable tools like motion capture suits and gloves are now widely available.
To get started with VTubing, download a free 30-day trial of Face Capture and start mapping high-fidelity facial movements to your 3D characters today.