OVERVIEW
In a world where digital presence is just as relevant as physical presence, The Virtual Self brings the viewer face-to-face with several versions of a virtual persona. Working within my own limitations, I made a human as realistic as possible, acknowledging that I cannot fully achieve realism. It is a way to express that, digitally, we cannot fully emulate life. However, this does not matter, because we can still relate to the digital forms. The high-detail figure, the low-detail figure, and the facial trackers are all understood as human, no matter how uneasy these figures can make us feel. These different forms represent the different digital personas we take on. This piece poses the question: how many versions of ourselves exist?
The Virtual Self is a virtual reality experience with three human figures facing the viewer in a white dome. One human figure has high detail, one has low detail, and another has only facial tracking points. Each figure has motion capture animation on it that loops. The viewer is free to walk around the space and view the figures from any angle.
Keywords: deconstruction, uncanny valley, human, motion capture, digital, virtual reality
Programs used: Autodesk Maya, Adobe Photoshop, Adobe Substance, Blender, Unity
REFERENCES
Stelarc
Combining technology and body; the rig is an extension of the 3D model, not a separate being. The two work hand in hand.
Morgan Rauscher
Motion tracking; physical faces with motion-tracking eyes follow the viewer in the space. Further in this project, I would like to render the form in real time so that its eyes can follow the viewer.
Stranger Visions. Heather Dewey-Hagborg. Installation at Saint-Gaudens National Historic Site. 2014.
Heather Dewey-Hagborg
Everyday people; her use of DNA to create portraits of average people on the street interested me, so I want to create a 3D form that does not follow beauty standards. I want this model to appear as if it is a random person on the street.
Lil Miquela
Digital influencer; although she is not real, she has a presence in popular culture. She is involved in fashion and music, and is composited over real environments. She is not alive, but is presented as such through her social media posts.
Process
While modeling, I used the smooth mesh preview to toggle between a high-polygon version and a low-polygon version. In the final model, I added divisions to the original mesh to permanently smooth the high-polygon version (thus adding polygons), and I kept a version without the subdivisions as the low-polygon version. Using Maya's Quad Draw tool, I retopologized the mesh to optimize it for rigging.
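The cost of permanently smoothing the mesh can be sketched with a short calculation. Assuming Catmull-Clark-style subdivision (Maya's smooth mesh behavior), each division level splits every quad into four. The base count and division levels below are illustrative only, not values from the actual model:

```python
def subdivided_quad_count(base_quads, divisions):
    """Each Catmull-Clark-style division level splits every quad into four."""
    return base_quads * 4 ** divisions

# Illustrative numbers: a retopologized base mesh of 5,000 quads.
low_poly = subdivided_quad_count(5000, 0)   # low-polygon version, unchanged
high_poly = subdivided_quad_count(5000, 2)  # two division levels baked in
print(low_poly, high_poly)
```

This exponential growth is why keeping an unsubdivided copy as the low-polygon version is effectively free, while each added division level quadruples the high-polygon count.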
3D Model. Kirby Mealer. Autodesk Maya. 2021.
Screen capture of workspace. Kirby Mealer. Adobe Substance Painter. 2021.
In Maya, I created a UV map that allowed me to have a seamless texture, selecting the edges on the model that would become seams. Using Adobe Substance Painter, I applied a human-skin preset and added different brush layers to create the final desired look.
Following a tutorial by CGMatter on YouTube, I applied motion capture footage to my 3D model. I first placed dots on the areas of my face that would deform the most, then recorded myself performing subtle facial movements. In Blender, I tracked each point and parented bones to those trackers to create a rig. Once the rig was applied to the mesh, the movement of the trackers drove the 3D model. I baked the animation onto the model and exported it as an .fbx file to import into Unity.
I referenced the bone structure in Blender and recreated a polygonal version of it in Maya, then exported it as an .obj file to import into Blender. In Blender, I duplicated the model to cover each joint, combined these meshes into one object, and joined that with the rig.
Screen capture of workspace in Blender. Kirby Mealer. Blender. 2021.
Screen capture of workspace in Unity. Kirby Mealer. Unity. 2021.
In Unity, I placed the models in a triangular formation with the camera in the middle of the three figures. This prevents the viewer from seeing more than one model at a time, implying that when you have a presence on the internet, your multiple presences do not inherently overlap.
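The triangular layout reduces to simple geometry: three positions spaced 120 degrees apart on a circle around the camera. The sketch below illustrates the idea in Python; the radius, starting angle, and function name are my own assumptions for illustration, not values from the actual Unity scene:

```python
import math

def triangular_positions(center=(0.0, 0.0), radius=3.0):
    """Return three (x, z) ground-plane positions spaced 120 degrees
    around a camera at `center`, as in the scene's triangular layout."""
    cx, cz = center
    positions = []
    for i in range(3):
        angle = math.radians(90 + i * 120)  # first figure directly ahead
        positions.append((cx + radius * math.cos(angle),
                          cz + radius * math.sin(angle)))
    return positions

for pos in triangular_positions():
    print(pos)
```

Because the three figures sit at equal angular spacing, turning to face any one of them places the other two outside a typical headset's field of view, which is what keeps each persona visually isolated.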
The environment is a dome, hiding any evidence of a corner or wall and creating a more welcoming space. The textures are a gray-to-white color to draw attention away from the environment and allow the viewer to focus on the models.
As a test, I exported my Unity scene as a 360 video and uploaded it to YouTube. Due to the large file size, the resolution is low, but this was only for testing purposes. You can click and drag with your mouse to look around in the scene.