Dom Mentoring: Dynamic Shot

After working on the simple tracking shot with Dom (the skateboarding track with the R2-D2 3D model animated in), I’ve started working on a more dynamic shot. He chose this video for me because the camera angle and the bumps in the road will challenge my skills, sharpening my knowledge of the software and producing a stronger shot for my showreel when the time comes to potentially look for jobs in 3DE.

Every week, we’ve been having check-ins so that he can critique the tracking I’ve done and offer tips. I’ve learned how to use time weight blending in the timeline editor on points that turn off suddenly, and have been using this to smooth dips in the average curve of the deviation graph.
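The idea behind that smoothing can be sketched outside 3DE as a simple weighted blend. This is my own toy version, not 3DE's actual time weight blending: each sample in the curve is averaged with its neighbours, with weight falling off linearly with distance, so a sudden dip gets pulled back toward the surrounding values.

```python
def blend_smooth(values, window=3):
    """Weighted moving average: each sample is blended with its
    neighbours, weighted by how close they are in time."""
    smoothed = []
    n = len(values)
    for i in range(n):
        total, weight_sum = 0.0, 0.0
        for j in range(max(0, i - window), min(n, i + window + 1)):
            w = window + 1 - abs(i - j)   # triangular weights
            total += w * values[j]
            weight_sum += w
        smoothed.append(total / weight_sum)
    return smoothed

curve = [0.5, 0.5, 2.4, 0.5, 0.5]         # one sharp spike
print(blend_smooth(curve))                 # spike is pulled down
```

A flat curve passes through unchanged; only the outliers get softened, which is roughly the effect you want when a point turning off suddenly leaves a dip in the average deviation.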

He’s also shared some other useful advice for tricky shots, such as creating keyframes on either side of a point that jumps and splining it in the middle.
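That keyframe trick is essentially interpolation across the bad frames. A minimal sketch (linear here, where 3DE's spline would be smoother; the function name is mine, not 3DE's):

```python
def spline_over_jump(track, start, end):
    """Replace the samples between two keyframes with interpolated
    values, smoothing over a jump in a 1D track."""
    x0, x1 = track[start], track[end]
    result = list(track)
    span = end - start
    for i in range(start + 1, end):
        t = (i - start) / span            # 0..1 between the keys
        result[i] = x0 + t * (x1 - x0)
    return result

positions = [10.0, 11.0, 25.0, 13.0]       # frame 2 jumps badly
print(spline_over_jump(positions, 1, 3))   # → [10.0, 11.0, 12.0, 13.0]
```

The keyframes at frames 1 and 3 stay fixed, and the bad sample between them is rebuilt from its neighbours.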

At this point, after refining my track a couple of times, I’m much more confident in my ability to remember the steps for calculation and parameter adjustment. Dom’s positive reinforcement in my work has really helped me gain a better understanding of the purpose of each step as well as ways that I can improve the track. This used to merely be memorized head knowledge, but now that I understand why the steps are necessary, it’s easier for me to get an idea of what I need to refine and what can be left alone as is.

The next step in working on this project is to track the buildings along the side. There is no object to track in this shot, so as soon as I’m done working on my camera lens adjustment, I can go ahead and throw the shot into Maya to put in whatever 3D object or animated rig I like. I was thinking that a good starting point would be a helicopter approaching over the sides of the buildings and landing in the foreground. It would give a good sense of the depth of my track, and the only animation would be quite simple.

That being said, if I want to take it to the next level, I may even incorporate a flying creature rather than a helicopter. This would require rather complex animation, though, and I may wait until we’ve talked about creature animation to get into that.

Mehdi Tutorials: Houdini Week Five

I’ve had a very hard time with the Houdini tutorials lately, and so I was proud to even get through the first forty minutes following along. My smoke looks okay, but it’s doing something weird towards the end of the simulation, when a large box of light develops.

I was able to follow along until rendering, but then this happened. I’m going to book a session with Mehdi on Monday and just ask him what’s going on; I suspect there are multiple issues with my file.

UPDATE

I spoke to Mehdi and he informed me that the only issues were my graphics card and the fact that I hadn’t hidden the geometry. I rendered again with the geometry hidden and was able to produce this:

The density adjustment slider was really interesting to me, and it’s cool to be able to choose between pure flames and a more smokey fire.

When it came time to render our simulation, I was at first met with the same issue as before:

But happily, I was able to solve the problem on my own this time! I realized that it was because the material shader name had not been updated, so I reassigned it and made sure all the geo was hidden. As soon as I did, I got this image:

I feel that this week helped me understand rendering better and continued to help me get accustomed to the Houdini workflow. Being able to find my own material shader issue is a big step in the right direction of understanding the way Houdini nodes interact.

3DE Independent Work Challenge

After learning about our external collaboration unit coming up in the summer, I got in touch with Dom to talk about practicing my skills in 3DE in order to prepare to possibly take a tracking role in the project. I’ve been told that 3DE is one of the best ways to get started working in the industry, and I like it a lot, so I think it would be a very smart move to try to angle my showreel around it.

When we first started learning 3DE, I had a really tough time even finishing the tutorials, as the software was all so new and the process requires the memorization of a lot of very small but very important steps. However, as is often the case, struggling to get over that hump ended up helping me in the long run because I remember the solution to each problem I had so solidly. I find tracking shots to be actually pretty fun now, and find that I understand the significance of most of the steps.

I talked to Dom about starting a few different shots on my own to challenge myself. I feel that working independently forces me to learn more because I have to solve my own problems, so I will be working on a couple different shots over the coming weeks and meeting with Dom each week to check in and critique.

This is the shot that Dom recommended I work with as an easy starter, just to see if I can remember how to get through the process by myself:

I felt pretty proud of myself when I was able to get everything set up without a reminder: converting the video to an EXR sequence, importing it in the camera tab, exporting buffer compression files. I created a Maya project and set up the project window, saved the file path, and found an obj file for free online, which I textured.
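That first step, converting the video into an EXR sequence, is usually an ffmpeg job. A rough sketch of how the command could be assembled; the exact flags (especially the pixel format) are an assumption and worth checking against `ffmpeg -h encoder=exr` for your build:

```python
def exr_sequence_cmd(video_path, out_dir):
    """Build an ffmpeg command that writes a video out as an EXR
    frame sequence with zero-padded frame numbers."""
    return [
        "ffmpeg", "-i", video_path,
        "-pix_fmt", "gbrpf32le",            # float pixels for EXR (assumed)
        f"{out_dir}/frame.%04d.exr",        # frame.0001.exr, frame.0002.exr, ...
    ]

cmd = exr_sequence_cmd("skate.mp4", "plates")
print(" ".join(cmd))
```

The padded `%04d` numbering is what lets 3DE (and Maya) pick the frames up as a single sequence in the camera tab.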

I found the tracking relatively easy, as we had planned.

Still, I was nervous that having to choose all of my own points would be difficult, but when I first pressed “calc all from scratch”, I was relieved to find that the line was very straight.

And, even better, my deviation was only 0.8!
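Deviation is, as I understand it, the average gap in pixels between where a point was tracked and where the solved camera reprojects it, so 0.8 means my tracked points and the solve agree to within about a pixel. A toy version of that measurement (my own helper names, not 3DE's):

```python
import math

def average_deviation(tracked, reprojected):
    """Average 2D distance, in pixels, between tracked positions
    and the positions the solved camera predicts for them."""
    dists = [math.dist(t, r) for t, r in zip(tracked, reprojected)]
    return sum(dists) / len(dists)

tracked     = [(100.0, 200.0), (300.0, 400.0)]
reprojected = [(100.6, 200.8), (300.0, 401.0)]
print(round(average_deviation(tracked, reprojected), 2))  # → 1.0
```

The smaller this number, the more the 3D solve agrees with the 2D tracks, which is why a sub-pixel deviation is a good sign.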

Unfortunately, though, I suspect that this may not be what the parameter adjustment should ideally look like. From what I can recall, it should look more like a cube. My theory is that I did not include enough of the sides of the buildings on the street, and only the street and the horizon line were detected. But when I pulled it into lineup view, it looked perfect, so I continued.

I did have to rewatch the superhero training video to remember the order of calculating the lens distortion in the parameter adjustment window, though. I wasn’t able to remember that on my own. I remembered how to do everything and why, though, just not what happens when. I’ve written it down to study.

I put in the 3D model and was at first confused that I was unable to move it, but realized quickly that I needed to turn off “contains survey data.”

I projected my points onto the 3D model and had no issues.

So I exported the MEL script (it crashed a couple of times and I was forced to export without 3D models) and ran Warp4. This was the first time that Warp4 has not crashed on me on the first try. I saved the dewarped footage and brought it all into Maya.

Then came some purely aesthetic work. I tried to match my skydome’s exposure to the fading light of the shot and I rotated R2-D2 so that he tilts as he swerves, in order to make it look less like he’s floating along and more like he’s powering his own journey.

This R2-D2 was not rigged and the geo was in numerous small pieces, so I didn’t have the option to move the body and ambulatory limbs separately.

Overall I’d say I did a fairly decent job. I’m not sure if there was anything I forgot that I just happened to get lucky with not ruining my shot. I do suspect that I’m lacking some information on the sides of the shot. However, considering I did this without help and only had to look up one step, I feel pretty good about it.

KK Tutorials: Lighting Week 1

I made it all the way through the first lighting tutorial and only ran into a couple of issues. When I first started, I was confused because my file opened to look like this:

-as opposed to the full scene that KK had on his screen. No matter how many times I re-pathed the images I kept getting this result.

I reached out to the class and found that Crystal was having the same issue. After some troubleshooting, she informed me that the answer was simple: pressing 1 on a node displays it in the viewport, and KK’s scene happened to be on a different node than ours when we loaded ours in. I was able to follow along with him after that.

The only other issue I ran into was when we were rendering the shot in Maya. For some reason, when I pulled up the geo, I saw this fully constructed scene rather than the HDRI in the background. I repeated this step a few times and got the same results. I hesitate to tamper with it, as I’m not sure whether there’s an actual issue in the file structure or if this is simply what the final outcome should look like.

Houdini Tutorials: Week Four

This week I was able to follow along all the way until the point where we got back into the particle animation. For some reason, my computer crashes every single time I try to render even one frame of it.

But I learned a lot about rendering just from following along with the rubber toy example, and I will try to render one of my destruction sequences using this knowledge.

Houdini Tutorial 1

Time to learn Houdini! Despite my growing trepidation at the constant reassurances that it will be fun after a grievously long learning curve, I went into this first tutorial eager to learn. I tentatively want to declare my specialization as lighting and texturing. I still have a long way to go in this area, and in fact I’m really just getting started. But I’ve heard from everyone that Houdini is the software to use for this, and I’m excited to see what I can do.

Here’s some of the most important notes I took during the beginning of the session.

Vocab

obj > object

img > compositing

ch > animation

mat > materials

shop > shaders

out > rendering

stage > USD (?)

tasks > pipeline

SOP: Surface OPerators (old term for geo)

OBJ: object

DOP: Dynamics OPerators

ROP: Rendering OPerators

VOP: Vex OPerators > VEX: Houdini’s scripting language, similar to MEL.

$HIP > the directory the .hip file is saved in (used for file output)

$OS > object name

bgeo: Houdini file format that can save anything. bgeo.sc = compressed

1 unit = 1 meter
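The $HIP and $OS variables are just path tokens that Houdini expands at evaluation time. A plain-Python analogy of how a path like `$HIP/geo/$OS.bgeo.sc` gets resolved (this is my illustration of the idea, not Houdini's actual expansion code):

```python
import re

def expand(path, env):
    """Expand $VAR tokens, the way Houdini substitutes $HIP and $OS
    with their current values when evaluating a file path."""
    return re.sub(r"\$([A-Z]+)", lambda m: env[m.group(1)], path)

env = {"HIP": "/projects/cabin", "OS": "rop_geometry1"}
print(expand("$HIP/geo/$OS.bgeo.sc", env))
# → /projects/cabin/geo/rop_geometry1.bgeo.sc
```

This is why a $HIP-based output path travels with the .hip file: the token is re-expanded wherever the file currently lives.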

About half an hour in, I was sure I was lost because Mehdi added a geometry node to his sphere, while I could not find the node “geometry” listed and only had more advanced options-

-but I realized quickly that it came down to being in the object context rather than the SOP context- the SOP context has many more options as most of the work is done there. I found a couple times that when I was unable to follow along it was because I was in the OBJ rather than SOP context- for example, trying to place a file node.

I brought in my own OBJ file to follow along with the file SOP node- a set of dice I’d used in my performance animation:

I was originally unsure why they are wireframe, and quickly realized that toggling between these options-

-allows for different levels of visibility.

After some trial and error I managed to merge my dice and sphere objects, and scale them down when I realized that a unit is equivalent to a meter. This should be important knowledge down the road.


Moving on to the next Houdini scene. I was stymied for a while because Mehdi, when creating a ROP Geo node for his torus, saved his output under $HIP, while I had changed my project’s path to a specific folder for schoolwork. I kept trying to change the file output to this folder and was unable to locate the geo. However, I went back into the tutorial the next day and realized, upon listening more closely, that $HIP does not mean some Houdini preferences folder; it refers to whichever path the file is saved in, so there was no need for me to change the $HIP path to my own. It was already there. Sure enough, I saved under $HIP and was able to locate the output in the geo folder of my project.

$HIP can be a variable, $HIPNAME cannot.

The only other problem I ran into was all the way at the very end.

Somehow I could not stop the raised points from being deleted when I merged the roof with the main cabin.

The way I ended up solving this was simply re-creating the transform node for the points. It still wasn’t working, but I deleted and re-attached the connector and somehow that changed everything. I am not sure how this worked. To me, the tree looks exactly the same before and after I did this.

I moved on to Mehdi’s bonus project: building the cabin with pronounced wood slats. I was able to remake the entire cabin, but for some reason my booleans for the windows weren’t working, and I’m sure that although the cabin looks good there is something mathematically off. I am sure that the Q&A session will provide me with more insight.

Additional questions-

Why create a transform node instead of just working in the viewport?

How does the divide node work?

I’m still not sure what the delete node actually does.

Dom Session 2: Rotomation

First Attempt

Similarly to our last session, I found the first half easy and intuitive. I had retained my basic tracking knowledge and kept up with the face tracking with no problem. I did not run into the deviation error that I had the last time, and was in fact extremely proud of my deviation curve:

I was very excited by how this looks exactly as I believe it should look:

But, similarly to the last session, I once again ended up falling behind, failing to find a solution to my problem, and being unable to continue with the lecture. This happened with about an hour left of the lecture, which was about the same point as last week. This time, I made it up until we were to place the Iron Man mask object over the face, and it didn’t appear.

All I could see was this. I went back and forth making sure I was in my object group, lineup view, and that 3D models was turned on, but no luck. When I showed Dom my orientation view, he informed me that the points were somehow not actually placed on the mask.

Which made absolutely no sense to me, because they had snapped directly onto the mask when I chose “extract vertices”. All I could think was that the mask had somehow moved and left the points behind, but Dom said it was not likely.

I struggled to delete my points and redo them as fast as I could, hoping not to drop out of the lecture this time, as there was undoubtedly a lot more to learn. But when I finally finished redoing all of my work, the same thing happened.


Why was it sideways this time? When I had loaded it into orientation view it wasn’t sideways, and when I’d extracted the vertices it wasn’t sideways. I was completely lost, but the lecture had already moved well past this and once again I was unable to continue. I resigned myself to trying to learn it on my own time later.

Second Attempt

I had been originally convinced that the problem was that when I loaded my mask into 3DE, it appeared to load halfway between the translation arrows, like this:

Mine

But upon re-watching Dom’s video, I can see that his also does this.

Dom’s

This rules out my original theory as to what the problem could be. I watched the process of placing vertices on the helmet again, copying Dom’s actions second by second, and thought I had figured it out-

Maybe I hadn’t clicked out of point group the first time! I got my hopes up, but, crushingly, the same exact thing happened a third time.

Staring at this, though, I can see that the points clearly are on the mask. It just isn’t snapping to the face for whatever reason. I decided to try one more time. Something I noticed in lineup view is that the depth of my points appears to be incorrect. When I finish tracking the face and calculating, but before I add in the Iron Man helmet, my lineup view looks like this (pretty good):

However, when I add in the helmet and then calculate, my lineup view changes like this:

I don’t understand why my points appear to have changed depth after I applied them to the mask vertices. I try, painfully, once again, and this happens:

All I can possibly hope for at this point is just starting the entire project over.

Starting Over: Day One

One and a Half Hours into Starting Over

For the first day I plan to work until I get to the point where I correct that mistake, and then tomorrow I will work on the section of the lecture that I had missed. I’ve now been following Dom’s actions as closely as possible for an hour and a half and gotten to the end of us tracking the background, as well as putting in our lens and camera information and informing 3DE4 that our camera is fixed. I’m starting to suspect that the reason I was running into my error is because maybe the camera should be marked as fixed for the face too, but we’ll find out. Regardless, this practice is helping me retain a lot of the information that, the first time I did it, was merely me following along and taking Dom’s words at face value; I now feel I’ve gained a higher understanding.

Little bit more than halfway through with the first session- at the hour and a half mark of the lecture.
My lineup view also looks a lot better than the last time- just like Dom’s, rather than points everywhere.

Also, this time, I was overjoyed to immediately see a flat plane in my parameter adjustment window the first time I tried, as I had struggled with it during our first tutorial. I’ve uploaded my success this time vs my struggle last time, in that order, below:

My parameter adjustment window this time
My parameter adjustment window last time.

I completed the three checks Dom mentioned- parameter adjustment, orientation, and lineup, and my work looked correct so far. I’m not sure if I made it to this point correctly the first time, but if I did, I didn’t understand it as I do now. [Addendum before moving ahead: I now realize that my theory as to the cause of the issue cannot be the camera not being fixed as the same camera settings actually serve both/all point groups. That, and the camera group creates a threshold for the camera whereas the object group does not interact with the camera controls at all].

Two and a Half Hours into Starting Over

Voilà. I nearly cried of happiness.

However, upon playback, I discovered I wasn’t out of the woods just yet.

I’ve now resolved my problem of trying to figure out how to make the mask snap on to the face. I am going to allow myself to work on getting that mask looking good, as well as finishing up the tutorial, next time.

Day Two

I reached out for some help, and Luca suggested that I add some more points, which was one of my theories- that there is not enough data during the beginning of the nose-pick for the mask to accurately track. Sure enough, I added just 3 points and it’s already looking better. Instead of the mask ballooning off the face into the atmosphere, it now only jumps a little bit. I will continue to fine tune this until it’s acceptable.

Better and better!

Something I discovered as I worked was that I did not have to extract every single new point as a vertex on the mask in order to fix the problem. This makes sense, because Dom mentioned that the object points together determine the general motion of the object, so the extra points help the other applied vertices know what to do.
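That idea, that every tracked point contributes to one shared motion estimate even if it isn't attached to a model vertex, can be sketched with a toy translation solve: estimate the object's movement from the shift of the whole point cloud's centroid (a deliberate simplification of what 3DE actually solves, which includes rotation and depth):

```python
def centroid(points):
    """Mean position of a list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def estimate_translation(frame_a, frame_b):
    """Estimate overall object motion as the shift of the point
    cloud's centroid between two frames. Every point contributes,
    so extra points stabilise the estimate."""
    ca, cb = centroid(frame_a), centroid(frame_b)
    return (cb[0] - ca[0], cb[1] - ca[1])

a = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]   # points on frame 1
b = [(1.0, 0.5), (3.0, 0.5), (2.0, 3.5)]   # same points, frame 2
print(estimate_translation(a, b))          # → (1.0, 0.5)
```

With more points, noise in any single track averages out of the estimate, which is why adding just three points already made the mask stop ballooning.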

Yay, it’s projected and fitted! I’m ready to move on to the next part! I’m now about to move on to the last hour of the session, which I had not attempted previously.

Three Hours Later

The exporting process gripped my heart in fear with an icy fist. Imagine my joy when I imported my mel script and was greeted with this:

Absolutely beautiful. Breathing as if I’ve run a marathon.


I then spent about two hours just getting the helmet to open, due to technical difficulties (VMware, Maya being slow, moving pivot points, etc) but I made it:

….and then, just because I wanted to, I did this.

….and I also constrained the eyes to the movement of the frontplate because I thought it looked better.

The render went smoothly, other than me briefly forgetting that a skydome must have camera visibility set to zero, and wondering why I couldn’t light both my mask and my image plane at the same time.

But I turned my brain back on and got everything set up. Time to move on to Nuke! On my own projects, I’ve been using Adobe Media Encoder, but it’s about time I learn to use professional grade software.

Well, here it is!! There are still some things I could fix (like making sure the mouth can be seen inside), but I’m still incredibly proud I made it this far. I overcame a lot of issues to get to this point and it’s time to let this be my first finished version of the project. Also, my brain feels exhausted, to the point where I’m not proofreading this, but I will maybe publish an update with some fine tuning. That’s all for now!

Dom Session 1: Basic Tracking in 3DE

I started off Dom’s session with a hot cup of coffee, a breakfast sandwich ready to go, sunlight streaming in, and my brain turned on.

The first half of the day, I followed along decently well. 3D Equalizer was big and scary, but I followed Dom’s instructions to a T and made my way through this new jungle. I was using the PLE software that I had downloaded and installed on my own Mac and had set up over the previous few days. I felt pretty happy with my ability to place and track points, and was optimistic about the whole thing, all the way up until my return from lunch.

Why was my deviation curve so incorrect? I asked Dom and he informed me that I should hide some of the points that were causing the problem, but when I did this, my deviation curve collapsed into an almost entirely straight line- one spike stayed at the beginning and the rest just hit zero. This couldn’t be right. That would mean that it wasn’t tracking at all. I tried again, and even started over from my last saved file, but ultimately could not figure it out and fell so far behind that I was unable to return to the lecture.

This was extremely discouraging for me. On top of that, the lecture had no audio or (for the first part) visible cursor, which made it very hard to follow. Thankfully, one of my classmates introduced me to a number of very helpful videos which explained most of the very basic tools we went over in a clear, concise series. The first one is embedded below:

Using this as well as the original lecture I was able to start over with the Camden lock footage and make my way through the beginner guidance Dom had gone over.