Framestore Talk with Andrew Schlussel, May 10th

Earlier today I attended a lecture with Andrew Schlussel, a recruiter from Framestore. His LinkedIn also lists him as a director and a professor at Academy of Art, which is interesting, because I turned down an offer from Academy of Art in order to come here, favoring UAL for its connections, opportunities, name recognition, and more competitive acceptance rate. That said, I am very grateful for the opportunity we had to speak with Andrew today. Getting the chance to speak with a recruiter from Framestore face-to-face (as it were) was an incredible opportunity. It taught me a lot about how the hiring process and growth within the industry work, and it nudged me to pursue a new path for my showreel in accordance with demand in the industry.

The question that I posed to Andrew was about the number of entry-level roles that Framestore looks to hire for each year (season?) and what that number depends on.

Andrew answered that the roles that could be considered the most “entry level,” aside from a runner position, would be matchmove artist or 2D paint artist, and that they look to hire for these positions year round, though of course it depends on the number of roles available on each project. We discussed the field of matchmove, and he informed me that a skill highly in demand right now is something called “matchimation,” which involves animating a character rig over footage of an actor. I believe this is the same sort of thing we had been working on with Dom in January with the superhero suit. Andrew also talked to me about the career path for a matchmove artist- it is typically considered entry level, but there is growth within the industry.

I plan to incorporate matchimation in my showreel by re-centering my individual project for this term around it. I will go back to acquiring footage with this goal in mind.

One last piece of information mentioned by Schlussel that stood out to me was that, obviously, a junior position after school is ideal, but an internship may have a 70% chance of landing an applicant a role upon completion, and then a runner position can still open many doors. As someone who’s worked in customer service for over five years while finishing my education, I’m not afraid to wash dishes if it gets my foot in the door.

I also liked his discussion of a “playful” approach to learning software. He told us that we should try out software like Unreal Engine, but focus on exploring its tools and learning what it has to offer rather than forcing ourselves to visualize an end goal.

Dom Mentoring: Dynamic Shot

After working on the simple tracking shot with Dom- the skateboarding track with the R2-D2 3D model animated in- I’ve started working on a more dynamic shot. He chose this video for me because the camera angle and the bumps on the road will challenge my skills, sharpening my knowledge of the software and producing a better shot for my showreel when the time comes to look for jobs in 3DE.

Every week, we’ve been having check-ins so that he can critique the tracking I’ve done and offer tips. I’ve learned how to use time weight blending in the timeline editor on points that turn off suddenly, and have been using this to smooth dips in the average curve of the deviation graph.
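Conceptually (this is my own sketch of the idea, not 3DE’s actual implementation), time weight blending tapers a point’s influence over its last few frames instead of letting it cut off instantly, which is what smooths those dips in the average curve:

```python
def blend_weight(frame, end_frame, ramp=4):
    """Hypothetical falloff: full weight until `ramp` frames before a
    point's last tracked frame, then a linear taper down to zero."""
    if frame >= end_frame:
        return 0.0
    return min(1.0, (end_frame - frame) / ramp)

def average_deviation_curve(points, frame):
    """Weighted per-frame average deviation: a point fading out near its
    end frame contributes less, so the curve dips smoothly instead of
    jumping when the point turns off. `points` is a list of
    (deviation, end_frame) pairs."""
    total = total_w = 0.0
    for deviation, end_frame in points:
        w = blend_weight(frame, end_frame)
        total += deviation * w
        total_w += w
    return total / total_w if total_w else 0.0
```

For example, a point ending at frame 100 still carries full weight at frame 0 but only half weight at frame 98, so the average eases down rather than stepping.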

He’s also shown me some useful tricks for hard-to-track shots, such as creating keyframes on either side of a frame where a point jumps and splining across the middle.
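That fix can be sketched in plain Python (a straight-line blend here for simplicity- 3DE splines a smoother curve through the keyframes, and this helper is my own illustration):

```python
def bridge_jump(track, start, end):
    """Overwrite the frames between two good keyframes with blended
    positions, hiding a point's sudden jump.
    `track` maps frame number -> (x, y) screen position."""
    x0, y0 = track[start]
    x1, y1 = track[end]
    for frame in range(start + 1, end):
        t = (frame - start) / (end - start)  # 0..1 between the keyframes
        track[frame] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return track
```

So if frame 11 glitches between good frames 10 and 12, the bad position is simply replaced by the midpoint of its neighbors.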

At this point, after refining my track a couple of times, I’m much more confident in my ability to remember the steps for calculation and parameter adjustment. Dom’s positive reinforcement of my work has really helped me gain a better understanding of the purpose of each step, as well as of ways I can improve the track. This used to be merely memorized head knowledge, but now that I understand why the steps are necessary, it’s easier for me to get an idea of what I need to refine and what can be left alone as is.

The next step in working on this project is to track the buildings along the side. There is no object to track in this shot, so as soon as I’m done working on my camera lens adjustment, I can go ahead and throw the shot into Maya to put in whatever 3D object or animated rig I like. I was thinking that a good starting point would be a helicopter approaching over the sides of the buildings and landing in the foreground. It would give a good sense of the depth of my track, and the only animation would be quite simple.

That being said, if I want to take it to the next level, I may even incorporate a flying creature rather than a helicopter. This would require rather complex animation, though, and I may wait until we’ve talked about creature animation to get into that.

Mehdi Tutorials: Houdini Week Five

I’ve had a very hard time with the Houdini tutorials lately, and so I was proud to even get through the first forty minutes following along. My smoke looks okay, but it’s doing something weird towards the end of the simulation, when a large box of light develops.

I was able to follow along until rendering, but then this happened. I’m going to book a session with Mehdi on Monday and just ask him what’s going on; I suspect there are multiple issues with my file.

UPDATE

I spoke to Mehdi, and he informed me that the only issues were either with my graphics card or with my not having hidden the geometry. I rendered again with the geometry hidden and was able to produce this:

The density adjustment slider was really interesting to me, and it’s cool to be able to choose between pure flames and a more smokey fire.

When it came time to render our simulation, I was at first met with the same issue as before:

But happily, I was able to solve the problem on my own this time! I realized it was because the material shader name had not been updated, so I reassigned it and made sure all the geo was hidden. As soon as I did, I got this image:

I feel that this week helped me understand rendering better and continued to help me get accustomed to the Houdini workflow. Being able to find my own material shader issue is a big step in the right direction of understanding the way Houdini nodes interact.

3DE Independent Work Challenge

After learning about our external collaboration unit coming up in the summer, I got in touch with Dom to talk about practicing my skills in 3DE in order to prepare to possibly take a tracking role in the project. I’ve been told that 3DE is one of the best ways to get started working in the industry, and I like it a lot, so I think it would be a very smart move to try to angle my showreel around it.

When we first started learning 3DE, I had a really tough time even finishing the tutorials, as the software was all so new and the process requires memorizing a lot of very small but very important steps. However, as is often the case, struggling to get over that hump ended up helping me in the long run, because I remember the solution to each problem I had so solidly. I actually find tracking shots pretty fun now, and I understand the significance of most of the steps.

I talked to Dom about starting a few different shots on my own to challenge myself. I feel that working independently forces me to learn more because I have to solve my own problems, so I will be working on a couple different shots over the coming weeks and meeting with Dom each week to check in and critique.

This is the shot that Dom recommended I work with as an easy starter, just to see if I can remember how to get through the process by myself:

I felt pretty proud of myself when I was able to get everything set up without a reminder: converting the video to an EXR sequence, importing it in the camera tab, exporting buffer compression files. I created a Maya project and set up the project window, saved the file path, and found an obj file for free online, which I textured.

I found the tracking relatively easy, as we had planned.

Still, I was nervous that having to choose all of my own points would be difficult, but when I first pressed “calc all from scratch”, I was relieved to find that the line was very straight.

And, even better, my deviation was only 0.8!
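As I understand it, that number is the average reprojection error in pixels: how far the solved camera’s projection of each 3D point lands from where the point was actually tracked on screen. A minimal sketch of the idea:

```python
import math

def average_deviation(tracked, reprojected):
    """Mean 2D pixel distance between the tracked screen positions and
    the positions the solved camera projects the 3D points back to.
    Lower means the solve agrees better with the 2D tracks."""
    distances = [math.dist(t, r) for t, r in zip(tracked, reprojected)]
    return sum(distances) / len(distances)
```

Seen that way, a sub-pixel average like 0.8 means the solve and the tracks disagree by less than a pixel on average, which is why it felt like good news.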

Unfortunately, though, I suspect that this may not be what the parameter adjustment should ideally look like. From what I can recall, it should look more like a cube. My theory is that I did not include enough of the sides of the buildings on the street, and only the street and the horizon line were detected. But when I pulled it into lineup view, it looked perfect, so I continued.

I did have to rewatch the superhero training video to remember the order of calculating the lens distortion in the parameter adjustment window, though. I wasn’t able to remember that on my own. I remembered how to do everything and why, though, just not what happens when. I’ve written it down to study.

I put in the 3D model and was at first confused that I was unable to move it, but realized quickly that I needed to turn off “contains survey data.”

I projected my points onto the 3D model and had no issues.

So I exported the MEL script- it crashed a couple of times and I was forced to export without 3D models- and ran Warp4. This was the first time Warp4 has not crashed on me on the first try. I saved the dewarped footage and brought it all into Maya.

Then came some purely aesthetic work. I tried to match my skydome’s exposure to the fading light of the shot and I rotated R2-D2 so that he tilts as he swerves, in order to make it look less like he’s floating along and more like he’s powering his own journey.

This R2-D2 was not rigged and the geo was in numerous small pieces, so I didn’t have the option to move the body and ambulatory limbs separately.

Overall, I’d say I did a fairly decent job. I’m not sure whether I forgot anything and simply got lucky that it didn’t ruin the shot. I do suspect that I’m lacking some information on the sides of the shot. However, considering I did this without help and only had to look up one step, I feel pretty good about it.

KK Tutorials: Lighting Week 1

I made it all the way through the first lighting tutorial and only ran into a couple of issues. When I first started, I was confused because my file opened to look like this:

-as opposed to the full scene that KK had on his screen. No matter how many times I re-pathed the images I kept getting this result.

I reached out to the class and found that Crystal was having the same issue. After some troubleshooting, she informed me that the answer was simple: pressing 1 on a node displays it in the viewport, and KK’s scene happened to be on a different node than ours when we loaded ours in. I was able to follow along with him after that.

The only other issue I ran into was when we were rendering the shot in Maya. For some reason, when I pulled up the geo, I saw this fully constructed scene rather than the HDRI in the background. I repeated this step a few times and got the same result. I hesitate to tamper with it, as I’m not sure whether there’s an actual issue in the file structure or if this is simply what the final outcome should look like.

Houdini Tutorials: Week Four

This week I was able to follow along all the way until the point at which we got back into the particle animation. For some reason, my computer crashes every single time I try to render even one frame of it.

But I learned a lot about rendering just from following along with the rubber toy example, and I will try to render one of my destruction sequences using this knowledge.

Houdini Tutorials: Week Three

One Hour In

I found last week’s tutorial very easy and fun, but I expected this one to be a little more intense, as it says on the tin. It did take me about three hours to get through the first hour of the tutorial. Most of the extra time is spent researching terms I don’t understand, like “voronoi,” “VEX,” “VDB,” and “voxel,” going back to repeat a step when it happens too fast, or going even further back to figure out why my results don’t match his. While the process makes sense to me overall, I often don’t understand the significance of many of the little steps or how Mehdi is able to tell that they are necessary. I’m also only about 80% confident that I understand the goal of our work in the first hour. I know that we’re creating a realistic fracture of the objects, but is the exploded view the end result of the destruction, or simply a way to review whether or not the fracture seems realistic? Despite this, I am able to follow along, and I often understand things better towards the end of the tutorial, so perhaps it will all make more sense soon.
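Of those terms, “voronoi” was the one that finally clicked for me once I saw the rule written down: scatter seed points, then every piece of the surface belongs to the fragment whose seed is nearest. A toy version of that rule (my own illustration, not Houdini’s actual node):

```python
import math

def voronoi_cells(samples, seeds):
    """Assign each sample point the index of its nearest seed.
    A Voronoi fracture cuts the geometry along the boundaries where
    the nearest seed changes, so each fragment is one cell."""
    return [min(range(len(seeds)), key=lambda i: math.dist(p, seeds[i]))
            for p in samples]
```

With two seeds at (0, 0) and (10, 0), a sample at (5.1, 0) falls into the second cell because it sits just past the halfway boundary.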

Side note- here’s a video I found that helped me get on the same page really quickly.

Hour and a Half In

Somehow, this took me even longer, and only for 30 minutes’ worth of tutorial. I was just about ready to give up for the night when I finally got to see the product of my work- this beautiful destruction- for the first time, and I regained motivation.

As with the last tutorial, I feel that it all comes together for me when I’m able to visualize the product of the long line of work.

After a couple Houdini crashes on my slow computer, I figured out how to get my boolean fracture linked up to the DOP network for an even better effect.

I got the rbdmaterialfracture to run, but it’s incredibly slow and not really doable on my machine. I’m going to follow along with what Mehdi is doing in it so I can learn but I’ll have to keep my Houdini on manual update.

Day Two

Before plunging back into the tutorial, I took a mini break to smash something else, out of sheer curiosity as well as to test how well I’d retained the skill.

I also wanted to try a building complex, but these are not made of a solid material and I’m not sure how to do that yet, so I’ll finish the tutorial first.

2 Hours In

The polyreduce node was a big help in speeding up my simulation enough to follow along with Mehdi, in addition of course to keeping my work on manual mode. I will keep that in mind in the future as our tutorials continue.

Unfortunately, it looks as though this will be the first tutorial I don’t complete. I got to the point where we create an active and an inactive point group on the cabin so that the entire object isn’t moved during the collision, but rather a small part breaks off. I copied Mehdi exactly, over and over again, and my result was always the same- no change.

Stopping Point

The only possible explanation for the problem I could think of was that my geometry supposedly has holes. I couldn’t figure out where or how to locate these, though, and I was using the simple geometry that we made during the tutorial.

Houdini Tutorials: Week Two

At the beginning of the tutorial I got confused very quickly. I was having a hard time understanding what Mehdi was talking about when he discussed UVs. Last time I similarly struggled to understand the concept of normals, which is a related subject. I did a little bit of research online and found this diagram to be helpful:

UV mapping - Wikipedia

-as well as the explanation that “UV” does not stand for anything; the letters simply name the two axes of the texture map, since X, Y, and Z are already taken for positions in 3D space.
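To cement it for myself, here is the simplest possible “unwrap” I could write: project a point straight down and normalize its X and Z into the 0-1 UV square (my own toy example, not a real unwrapping algorithm):

```python
def planar_uv(position, min_xz, max_xz):
    """Top-down planar projection: normalize a 3D point's X and Z into
    0-1 texture coordinates. U and V are just the names of the texture
    map's own two axes, separate from the scene's XYZ."""
    x, _, z = position
    u = (x - min_xz[0]) / (max_xz[0] - min_xz[0])
    v = (z - min_xz[1]) / (max_xz[1] - min_xz[1])
    return (u, v)
```

A vertex in the middle of a 10 x 10 floor lands at (0.5, 0.5) on the texture, regardless of its height- which is exactly the point of a separate UV space.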

On to the project-

POP Object: a container for particles

POP Solver: toolbox for calculating the physics

POP Source: generates particles on the surface of the object

Scatter: generates points on the surface of the object

SOP Path: sources the object
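To check that I understood how those nodes divide up the work, I sketched the pipeline in plain Python (a toy analogy of my own, not Houdini’s real solver): the source births particles on the object’s points, and the solver integrates gravity each step.

```python
import random

GRAVITY_Y = -9.81  # one Houdini unit is a metre, so m/s^2 fits

def pop_source(surface_points, count, seed=0):
    """Birth `count` particles at positions picked from the object's
    surface points (roughly what Scatter + POP Source do together)."""
    rng = random.Random(seed)
    return [list(rng.choice(surface_points)) for _ in range(count)]

def pop_solve(positions, velocities, dt, steps):
    """Toy 'POP Solver' loop: apply gravity to each velocity, then
    each velocity to its position, once per substep."""
    for _ in range(steps):
        for p, v in zip(positions, velocities):
            v[1] += GRAVITY_Y * dt
            p[0] += v[0] * dt
            p[1] += v[1] * dt
            p[2] += v[2] * dt
    return positions, velocities
```

The POP Object in this analogy is just the pair of lists the solver carries from frame to frame.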

At first I was worried because I couldn’t see my object in the POP network while Mehdi could, and then realized I simply had to move to a frame in which the points had begun conglomerating.

I was already really excited about my simulation with still half an hour left to go, and was surprised when Mehdi said “let’s make it more interesting”! There’s so much in Houdini I don’t know, I can only imagine all the awesome things that we could choose from.

I rendered our first flipbook:

I specifically chose to work with slightly fewer points than Mehdi, just because I felt they were obscuring my character and I didn’t like it aesthetically.

But in the next step I was a bit confused. Mehdi created a sphere as a test object to work on our point attributes with, but when I did that, Houdini would not let me. Logically, the point simulation we created was set up to work with the group testgeometry_crag1, so it would not apply to the sphere.

I’m not sure why Houdini allowed Mehdi to do this, unless I missed a step along the way.

I managed to resolve this on my own, but the issue I’m proudest of resolving was the inability to see any particles at all at frame 0- I remembered that we had set the beginning frame to 235 in our popsource, and I went back to include all the frames. That, and I once again adjusted the pscale parameters to my own liking. I actually turned the birth rate way back up and made the particles even smaller.

Unfortunately, Houdini then crashed. And it was about 1 AM, so I called it a night.

The next day, though, I returned with my brain switched on, and I found it super easy to recreate the finished product.

Houdini Tutorials: Week One

Time to learn Houdini! Despite my growing trepidation at the constant reassurances that it will be fun after a grievously long learning curve, I went into this first tutorial eager to learn. I tentatively want to declare my specialization as lighting and texturing. I still have a long way to go in this area, and in fact I’m really just getting started. But I’ve heard from everyone that Houdini is the software to use for this, and I’m excited to see what I can do.

Here are some of the most important notes I took during the beginning of the session.

Vocab

obj > object

img > compositing

ch > animation

mat > materials

shop > shaders

out > rendering

stage > USD (?)

tasks > pipeline

SOP: Surface OPerators (the old term for geo)

OBJ: object

DOP: Dynamics OPerators

ROP: Rendering OPerators

VOP: VEX OPerators > VEX: Houdini’s scripting language, similar to MEL.

$HIP > the folder containing the current file (the default base for file output)

$OS > object name

bgeo: Houdini file format that can save anything. bgeo.sc = compressed

1 unit = 1 meter

About half an hour in, I was sure I was lost because Mehdi added a geometry node to his sphere, while I could not find the node “geometry” listed and only had more advanced options-

-but I realized quickly that it came down to being in the object context rather than the SOP context- the SOP context has many more options, as most of the work is done there. I found a couple of times that when I was unable to follow along, it was because I was in the OBJ rather than the SOP context- for example, when trying to place a file node.

I brought in my own OBJ file to follow along with the file SOP node- a set of dice I’d used in my performance animation:

I was originally unsure why they appeared in wireframe, but quickly realized that toggling between these options-

-allows for different levels of visibility.

After some trial and error I managed to merge my dice and sphere objects, and scale them down when I realized that a unit is equivalent to a meter. This should be important knowledge down the road.


Moving on to the next Houdini scene. I was stymied for a while because Mehdi, when creating a ROP Geometry node for his torus, saved his output under $HIP, while I had changed my project’s path to a specific folder for schoolwork. I kept trying to change the file output to this folder and was unable to locate the geo. However, I went back into the tutorial the next day and realized, upon listening more closely, that $HIP does not mean some kind of Houdini preferences folder; it refers to whichever path the current file is saved in. There was therefore no need for me to change the $HIP path to my own- it already pointed there. Sure enough, I saved under $HIP and was able to locate the output in the geo folder of my project.

$HIP can be a variable, $HIPNAME cannot.
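The way I think about it now (a toy analogy of my own, not Houdini’s actual expansion code): $HIP always resolves to the folder the current .hip file lives in, so a path written against it follows the project wherever it is saved.

```python
def expand_path(path, hip_dir, hip_name):
    """Toy Houdini-style variable expansion. $HIPNAME is replaced
    first, because it contains the substring '$HIP' and would
    otherwise be clobbered by the shorter replacement."""
    return path.replace("$HIPNAME", hip_name).replace("$HIP", hip_dir)
```

So "$HIP/geo/torus.bgeo.sc" resolves relative to the project folder automatically, which is why I never needed to repoint it at my schoolwork folder.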

The only other problem I ran into was all the way at the very end.

Somehow I could not stop the raised points from being deleted when I merged the roof with the main cabin.

The way I ended up solving this was simply re-creating the transform node for the points. It still wasn’t working, but I deleted and re-attached the connector and somehow that changed everything. I am not sure how this worked. To me, the tree looks exactly the same before and after I did this.

I moved on to Mehdi’s bonus project: building the cabin with pronounced wood slats. I was able to remake the entire cabin, but for some reason my booleans for the windows weren’t working, and I’m sure that although the cabin looks good there is something mathematically off. I am sure that the Q&A session will provide me with more insight.

Additional questions-

Why create a transform node instead of just working in the viewport?

How does the divide node work?

I’m still not sure what the delete node actually does.

Dom Session: Superhero Suit

Similar to most of my 3DEqualizer sessions, I was able to follow closely on the 8th during the first half of the day, but fell off the wagon in the second half. That being said, I made it much further than I ever have before in 3DE, and I attribute my higher competency to my one-on-one sessions with Dom. I believe that I have a greater understanding not only of how tracking works, but of the software in general. These sessions have also helped me understand more about paths in Maya and how projects are set up, as well as control parenting in the Outliner.

I made it to our first check-in happy with my calculation curve, but with a high deviation:

Dom informed me that I must include some of the points in the background in order for the track to understand that area. I had originally hidden the points I’d attempted to track there, because they weren’t tracking well due to the motion blur and were throwing off my deviation. But I copied the points that he put down and came up with a much better number:

After this we began adding points along the road. I couldn’t understand why, but each time I placed a point on the road, my “calc from scratch” window showed me an alarming zig-zagging line, clearly confused by my attempts. So I worked through the lunch break and discovered that some points along the left side of the road were enough to even it out, as long as I placed enough down. I also found a couple of small spots on the right side that tracked well. After tracking the road, we returned from lunch and adjusted parameters:

I was relieved to find that so far my background track was going perfectly. This was genuinely a big difference from every other full class session I had done in 3DE in the past.

When I placed the obj file, I was a little nervous that I’d encounter the Iron Man helmet problem, but it went off without a hitch:

I was a little concerned that the suit seemed a little too big for the man, though, as I tried to line up the shoulder points, until Dom informed me that it was okay if some were along the arms, as we would be deleting the image plane and animating the arms anyway.

I was unable to keep following when we encountered an issue with moving the rig’s geo and its skeleton at the same time. Dom explained his fix with the locators a couple of times, but I found myself hopelessly lost. So, later on, I will take a look at those files with the locator, study them, and animate the rig myself.