Individual Project: Matchimation
Today I delved back into the subway shot. Launching the scene with the Shave and a Haircut plugin enabled worked like a charm, and within an hour or so I was able to get the rig into my tracked scene and scale it up to the size of my model. Unfortunately, I hadn’t anticipated that this rig is designed to be short, so I had to make him comparatively large in order to cover the head of my model. As a result, the top of his hat is cropped off.
This is how the scene looked when I first managed to scale the rig up to the correct size and completely obscure my model in the reference footage:
Two problems with this are immediately obvious: the hands cannot correctly grasp the poles, and the beard, along with the fur on his coat, is gray and appears somewhat dirty. In contrast, when I had rendered the rig by itself on the farm earlier, this image had been the result:
I’m not entirely sure why that issue resolved itself when I transferred the rig into my tracked scene, only for the gray beard problem to arise instead. I shot a message over to Luke asking why the hair had appeared to be glowing.
In the meantime, I set to work building geo in the scene for those poles so that his hands could convincingly grasp them. Then, over the course of the next few hours, I animated the rig, taking care to move the character convincingly with the motion of the subway carriage while also focusing on small details and overlap. This took up the majority of my work day, despite the clip being only two seconds long, because I wanted to lay down a solid baseline. My rough animation (minus the facial expression) is shown below, and I’m pretty happy with it, especially considering the short amount of time it took to complete.
After getting my matchimation work out of the way, I caught up on my messages from Luke. His response was that the fur was not glowing; rather, the plugin was not working correctly. I sent over the gray beard image, and he told me that it looked like an issue with the self-shadowing options in the fur shader and instructed me to try playing around with some of the sliders.
Above is the gray beard issue, made more intense by my adjustment of the hair roots to be thicker (I found the fur looked rather sparse before). While toying with the self-shadow and geometry-shadow options in the fur shader had no effect on the problem, I found a significant difference in the result when varying the cast and receive shadows options in the Arnold render stats.
Above is the result of turning off both cast and receive shadows. The beard and sleeve fur are altogether too white; however, it’s a lot closer to what I want. I decided that the best middle ground, for now, is to check only one of the two Arnold render stats options.
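For future reference, those toggles are just attributes on the fur’s shape node, so they can be flipped from the Script Editor instead of hunting through the Attribute Editor each time. A minimal sketch with maya.cmds; the shape name is a placeholder for whatever the rig actually calls its fur geo:

```python
# Minimal sketch: flipping Maya's Render Stats from script.
# "furShape1" is a placeholder; substitute the rig's actual fur shape node.
import maya.cmds as cmds

fur_shape = "furShape1"

# One "check only one option" combination; swap the 0/1 to test the other.
cmds.setAttr(fur_shape + ".castsShadows", 0)    # stop the fur casting shadows (including onto itself)
cmds.setAttr(fur_shape + ".receiveShadows", 1)  # still catch shadows from the scene

# Confirm the change took:
print(cmds.getAttr(fur_shape + ".castsShadows"),
      cmds.getAttr(fur_shape + ".receiveShadows"))
```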
By the time I had brought the rig into my scene, scaled and sized it, completed my matchimation work and solved the Shave and a Haircut issue, almost eight hours had passed, but I was determined to work on lighting and the shadow matte before calling it quits for the day. Unfortunately, it wasn’t in the cards this time around. A simple skydome lights the scene realistically but not as dramatically as I hoped, and too bright an exposure blows out the beard highlights. It will require tweaking later on.
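For when I come back to this, here’s a rough sketch of the setup I’m describing, written with maya.cmds and assuming the MtoA plugin is loaded; the exposure value and the “groundGeo” name are placeholders rather than settings I’ve settled on:

```python
# Rough sketch of the skydome + shadow matte setup (MtoA must be loaded).
import maya.cmds as cmds

# A simple skydome light; pulling the exposure down should keep the beard
# highlights from blowing out. -0.5 is a starting guess, not a final value.
sky = cmds.shadingNode("aiSkyDomeLight", asLight=True)
cmds.setAttr(sky + ".exposure", -0.5)

# An aiShadowMatte shader on the ground geo catches the character's shadow
# while staying invisible against the plate.
matte = cmds.shadingNode("aiShadowMatte", asShader=True)
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=matte + "SG")
cmds.connectAttr(matte + ".outColor", sg + ".surfaceShader", force=True)
cmds.sets("groundGeo", edit=True, forceElement=sg)  # "groundGeo" is a placeholder
```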
Here is the result of today’s work. Looking at it reminds me how much more I want to do, but it’s not bad for one day.
Individual Project: Rig Errors & Data Loss
I intended to spend my time today scaling my character rig into my tracked scene, but due to unforeseen circumstances, I instead spent most of my work day navigating horrifying errors that threatened to erase the work I’d done on this project throughout the term.
Ready to scale my rig, I downloaded all of the necessary (and expensive) files from TurboSquid, moved them into the appropriate folders, then dragged the character into my Maya scene.
Immediately upon doing so, I was met with this notification:
I believed this was because the rig uses the Shave and a Haircut plugin and I had launched Maya 2018 without loading the plugin first. I decided to go ahead with scaling, planning to open Shave and a Haircut only when I was ready to render the scene. I applied texture to the rig, then hit the keyboard shortcut cmd + S to save my file. Immediately, Maya crashed.
When I went to re-open my scene, I found the file corrupted.
I tried deleting the character rig and all assets related to it, but unfortunately my scene was still ruined. The geo was completely corrupted; when I tried to look through the camera, I could only see this.
It appeared that almost all of the assets in my scene had changed and would require reworking. However, I did find this new file in my scenes folder, which, according to Luke, was a temporary file saved by the software in the event of a crash.
Dragging it into Maya, I found that my scene opened perfectly, and decided to call off my heart attack.
For a little while, though, I was confused as to why I could not save this file as a regular Maya file, and was nervous that I had perhaps lost all of my work from the term after all.
Thankfully, I found a simple answer: it was an error with my naming convention. Maya treats everything after the last dot as a file type, so it believed I was trying to save the file as a “.01” file; switching to “v1” and “v2” solved it. I’d grown used to using “.01” in my 3DE work.
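A tiny sketch of the extension-safe incremental naming I’ve switched to, scripted with maya.cmds; the scene name is a placeholder:

```python
# Extension-safe incremental save; "subway_shot" is a placeholder name.
import maya.cmds as cmds

version = 2  # bump per save, or parse it out of the current scene name
cmds.file(rename="subway_shot_v{:02d}.mb".format(version))
cmds.file(save=True, type="mayaBinary")
```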
I’ve tried a couple of times now to save the scene incrementally with the character rig in it, and I’m met with a crash each time. My leading theory is that it has to do with the Shave and a Haircut data: I need to open Maya with the plugin loaded in order to avoid data loss. Tomorrow morning I will review my one-to-one with Luke, in which we discussed using the plugin, and give it another try.
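If that theory holds, the fix should be as simple as confirming the plugin before the scene ever opens. A sketch of what I’ll try, noting that the plugin’s internal name below is my guess; the Plug-in Manager will show the real one:

```python
# Guard: make sure Shave and a Haircut is loaded before opening the scene.
# "shaveNode" is a guess at the plugin name; verify it in the Plug-in Manager.
import maya.cmds as cmds

plugin = "shaveNode"
if not cmds.pluginInfo(plugin, query=True, loaded=True):
    cmds.loadPlugin(plugin)

# Only open the rigged scene once the plugin is confirmed loaded.
cmds.file("subway_shot_v02.mb", open=True, force=True)  # placeholder scene name
```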
Individual Project: Wireframe & Geo
Individual Project: Tutoring Help and Image Plane Error Solutions
As I mentioned in my previous post, I’d lost a lot of time on my individual project due to technical difficulties with the image plane, which I couldn’t solve in the Hypershade or the Attribute Editor, or even by recreating the dewarped footage.
After running through a couple of options today, we found the solution: the EXR files were simply too large for Maya to play back reliably on the image plane. I put the EXR files through Media Encoder, converted them to JPEGs, and found that they ran perfectly.
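Media Encoder worked fine, but for future shots the same conversion could be scripted. A sketch using OpenImageIO’s oiiotool from Python, assuming oiiotool is installed and on the PATH; the folder names are placeholders:

```python
# Convert a dewarped EXR sequence to JPEGs for Maya's image plane.
# Assumes OpenImageIO's oiiotool is on the PATH; paths are placeholders.
import glob
import os
import subprocess

os.makedirs("dewarped_jpg", exist_ok=True)
for exr in sorted(glob.glob("dewarped/*.exr")):
    jpg = os.path.join("dewarped_jpg",
                       os.path.splitext(os.path.basename(exr))[0] + ".jpg")
    subprocess.run(["oiiotool", exr, "-o", jpg], check=True)
```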
So now that I’ve finally got a working scene, I can return to last week’s task and build geo for it, aligning it as tightly as I can with the shot to make up for the nodal camera this track unfortunately requires. Once I’ve got my geo built and my character rig in and scaled, I will review the scene with Dom to make sure it is ready for animation. Thankfully, I’ve got a second session with Dom scheduled for Friday this week, so I’ll be able to move along at a faster rate and make up for some of the time lost on this issue.
Below I am including a list from my handwritten notes on how to snap a 3DE scene to the ground plane in Maya, because I continually forget the process and end up researching and rewriting it. I’ve tried to break the steps down more thoroughly in my notes this time.
3DE Scene Snap to Ground Plane in Maya Process:
- Select scene, then hit W, followed by D, to bring up the pivot point
- Middle-mouse-drag the pivot to a point in the scene while holding V to snap.
- Hit D again, and, holding X, snap the scene to the center of the world.
- Turn on frustum display controls, allowing you to see the camera’s angle in the world.
- Add another 0 to the image plane depth, allowing geo to move through it.
Now, when you load a character or object into the scene, it will automatically spawn within the image plane and, once the scene is scaled and sized correctly, blend seamlessly.
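The snapping itself is viewport work, but the last couple of steps can be scripted. A sketch of the scriptable parts with maya.cmds; the group, locator, and image plane names are placeholders for whatever 3DE exported:

```python
# Scriptable parts of the checklist above; all node names are placeholders.
import maya.cmds as cmds

scene_grp = "3DE_sceneGrp"        # group exported from 3DEqualizer
image_plane = "imagePlaneShape1"  # the tracked camera's image plane shape

# Move the group's pivot to a chosen ground point (the W/D + V-snap steps)...
ground_point = cmds.xform("groundLocator1", query=True, worldSpace=True,
                          translation=True)
cmds.xform(scene_grp, worldSpace=True, pivots=ground_point)

# ...then snap that pivot to the world origin (the D + X-snap step).
cmds.move(0, 0, 0, scene_grp, rotatePivotRelative=True)

# "Add another 0" to the image plane depth so geo can move through it.
cmds.setAttr(image_plane + ".depth",
             cmds.getAttr(image_plane + ".depth") * 10)
```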
Indie Film Collab Project: Tutoring Help, Motion Blur Tracking Workarounds
Speaking with Dom about the motion blur issue in our indie film project footage was a godsend. While I originally wasn’t able to track any single point for longer than about ten frames, and even then couldn’t keep a very solid track, I’ve now got four well-tracked points in the midground and plan to continue placing more tonight in order to catch up on the time lost this past week.
Though I’d made it to point 74 in my original attempt to track this blurry shot, I deleted all of my previous subpar work and started again with Dom, so that my messy track wouldn’t interfere with the calculation of the shot. The first thing we did was open the image controls window and push the contrast and brightness to their respective maximums. Then we began searching for very dramatic patterns, looking to place points at the very corner of a high-contrast area. As I tracked a high-contrast point, Dom shared a couple of tactics for blurry footage.
When tracking a blurry shot, it’s always necessary to use a very wide search box and to constantly adjust the pattern. For a shot as blurry as this one, we tracked every single frame by hand, which is time-consuming but necessary. Fortunately, Dom told me that once six or so points are tracked accurately, the software can account for the blur in its calculation.
Tracking every point by hand, I found that for each one I had to hand-select its placement at least once or twice, as the software was simply unable to account for the massive blur at times, and then meticulously pinpoint its exact location using page-up and page-down, though Dom did warn me not to do this too frequently, to avoid introducing vibration into each point.
Though using a wide search area with high contrast is imperative for a blurry shot, it’s also important to make sure that the search box never crosses into the depth of another object in the shot. I found it somewhat difficult to discern whether my point had moved during some of the camera shaking, and though I only have four points, I spent almost half an hour placing them.
This shot may be strenuous, but getting these points tracked is a glimmer of hope that all is not lost and that the track won’t be impossible. It is fortunate that we know the camera specs, and, as Dom pointed out, this will be a showstopper of a showreel shot.
Collab Project: Tracking Difficulty due to Camera Shaking/Blurriness
Similarly to my individual project, I struggled to make real progress on the collaborative project this week, hitting a wall with both the software’s capabilities and my own. The shots include frequent camera movement, which unfortunately results in near-constant motion blur; because of this, I’m only able to track around ten frames at a time per point. I’m hoping to bring this up with Dom tonight for some potential workarounds; I expect we will discuss advanced techniques for splining. I’ve spent several days trying to work this out myself with no luck, and I’m worried about the project deadline.
Individual Project Technical Difficulties to be Addressed
After my discussion with Dom last week regarding my next steps in my individual project, I was prepared to construct geo for the scene and scale it to my rig accordingly in order to move ahead before animating. I’d even had a discussion with Luke about the rig I wanted to use and how to load the Shave and a Haircut plugin for it.
Unfortunately, I hit a roadblock on this project before I could even begin creating geo: the image plane simply refuses to work correctly no matter how I try to fix it. I’ve tried deleting it from the Hypershade and reloading it, as well as running Warp4 again to make sure nothing was wrong with the EXR files themselves, and I’ve made sure every time that “use as image sequence” is checked.
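Before the session, one more thing worth ruling out is the image plane node itself. A sketch of rebuilding it from script with maya.cmds, where the camera name and file path are placeholders:

```python
# Hypothetical sketch: rebuild the image plane from script, in case the node
# itself is corrupted. Camera name and file path are placeholders.
import maya.cmds as cmds

cam = "trackedCamera1"  # placeholder for the 3DE-exported camera
first_frame = "/path/to/dewarped/shot.0001.exr"  # placeholder path

ip_transform, ip_shape = cmds.imagePlane(camera=cam, fileName=first_frame)

# The "Use Image Sequence" checkbox corresponds to this attribute:
cmds.setAttr(ip_shape + ".useFrameExtension", 1)
```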
I will discuss this problem with Dom tonight and make a new post about the solution. It’s unfortunate that this setback has cost me so much time I could have spent working.
Individual Project Tutoring May 19th
During my catch-up with Dom tonight, we reviewed some of the shakiness still present in the track and fine-tuned some of the points used in order to prevent any jolting from the subway car.
We first assumed it might be the points on the foreground seat disturbing the parallax, since the seat isn’t attached to the wall and jiggles with the movement of the car, but deleting those points still didn’t achieve satisfactory results. After several attempts at turning points on and off, tracking different spots, and simming the lens, Dom suggested that this shot may have to rely on what’s called a nodal camera, due to the software’s inability to understand the depth of the scene (likely a result of the very-nearly-static camera).
He explained this to me as creating all of the points on one flat plane, around which the camera movement is tracked. A definition I found of nodal camera movement is: “The no parallax point–or nodal point–refers to the specific axis on a camera lens around which the camera rotates so as not to create a parallax error. By rotating around this point, the scenery will all move at exactly the same speed in the resulting pan shot.”
I was initially confused and skeptical of this strategy, as I had assumed that failing to accurately track depth meant failing at my job as a matchmover, but Dom explained that what I would do instead is create the depth in Maya when I build the geo along the correct angles.
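Reading around afterwards, the geometry backs this up. The identity below comes from general computer-vision references rather than my own notes, but it captures why a rotation-only camera carries no depth information:

$$x' \simeq K R K^{-1} x$$

Here $K$ is the camera’s intrinsic matrix, $R$ is its rotation between frames, and $x$, $x'$ are a point’s pixel positions before and after the move. The point’s depth cancels out entirely, so every pixel moves by the same homography $H = K R K^{-1}$; with no parallax, the solver has nothing to triangulate, and the depth has to be rebuilt elsewhere, in this case as geo in Maya.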
My next step on this project is to bring it into Maya, line it up to the ground plane, and scale it, making sure that all the flat lines of the floor line up; then I will add in my character model and send that over to Dom for the okay before I go ahead on the next step, which is to bring in animation.
Individual Project Motion Tracking
Attempt 1
The benefits of using my own footage for a tracked shot are that I can intentionally set the scene to both challenge myself and provide easy patterns to track. I planned for the subway car to be a relatively easy background track, with plenty of signs and seat markings to get a solid track on. The camera is completely static, and the only motion path necessary to figure out is the slight shaking from the stopping and starting of the car. The only plane that I struggled to track was the background wall, as the only markings/signs on the wall intersect with my model’s arm and the midground pole.
I was astonished by my track when my very first calculation resulted in a deviation curve of only 0.0486.
However, my excitement began to wear off when I considered a few factors. This was only the deviation for one point group, and some of the others had slight inconsistencies or large spikes. On top of that, when I moved my shot into lineup view, the depth of the planes was very clearly incorrect. In the video below I’ve tried to demonstrate what I mean: although the ground plane looks okay, it appears almost as though 3DEqualizer thinks the back is the front; the back wall and midground seats have much larger X’s than the foreground seats.
That being said, the deviation curves in every point group look relatively okay.
I spoke to Dom about this over Discord messaging in advance of our session tomorrow, and he identified the problem: by placing each plane of the background in a separate point group, I was asking 3DEqualizer to consider them objects to track rather than parts of the background. Everything in the camera group must belong to the same point group. And so, happy that I learned this before wasting our entire session trying to figure it out, I went back to the drawing board to track again, in the camera group this time.
Attempt 2
Upon tracking the background of the shot a second time, in the camera group only, I was met with a parameter adjustment chart that made a lot more sense; however, the depth of the back wall does still seem slightly off. I reached out to Dom again about this, and he reminded me to calculate a second time.
Now that the background track looks a lot better, I’ve tracked a couple of points on the object as a starting point.
During my session with Dom tomorrow, I’ll revise and edit the track as needed.