3DE Independent Work Challenge

After learning about our external collaboration unit coming up in the summer, I got in touch with Dom to talk about practicing my skills in 3DE in order to prepare to possibly take a tracking role in the project. I’ve been told that 3DE is one of the best ways to get started working in the industry, and I like it a lot, so I think it would be a very smart move to try to angle my showreel around it.

When we first started learning 3DE, I had a really tough time even finishing the tutorials, as the software was all so new and the process requires memorizing a lot of very small but very important steps. However, as is often the case, struggling to get over that hump ended up helping me in the long run, because I now remember the solution to each problem I ran into very solidly. Tracking shots are actually pretty fun for me now, and I understand the significance of most of the steps.

I talked to Dom about starting a few different shots on my own to challenge myself. I feel that working independently forces me to learn more because I have to solve my own problems, so I will be working on a couple different shots over the coming weeks and meeting with Dom each week to check in and critique.

This is the shot that Dom recommended I work with as an easy starter, just to see if I can remember how to get through the process by myself:

I felt pretty proud of myself when I was able to get everything set up without a reminder: converting the video to an EXR sequence, importing it in the camera tab, exporting buffer compression files. I created a Maya project and set up the project window, saved the file path, and found an obj file for free online, which I textured.

I found the tracking relatively easy, as we had planned.

Still, I was nervous that having to choose all of my own points would be difficult, but when I first pressed “calc all from scratch”, I was relieved to find that the line was very straight.

And, even better, my deviation was only 0.8!
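
For reference, the deviation 3DE reports is measured in pixels- roughly, how far the tracked 2D points sit from where the solved camera reprojects the 3D points. Here's a toy sketch of that idea (plain Python with made-up numbers, not 3DE's actual calculation):

```python
import math

def rms_deviation(tracked, reprojected):
    """RMS pixel distance between tracked 2D points and the
    reprojections of the solved 3D points (illustrative only)."""
    assert len(tracked) == len(reprojected)
    total = sum((tx - rx) ** 2 + (ty - ry) ** 2
                for (tx, ty), (rx, ry) in zip(tracked, reprojected))
    return math.sqrt(total / len(tracked))

# Hypothetical numbers: each tracked point lands within a pixel of
# where the solved camera reprojects it, giving a sub-pixel solve.
tracked     = [(100.0, 200.0), (340.5, 210.2), (512.0, 480.0)]
reprojected = [(100.6, 200.3), (339.9, 210.9), (512.8, 479.5)]
print(round(rms_deviation(tracked, reprojected), 2))
```

Anything under a pixel is generally considered a usable solve, which is why 0.8 felt like a win.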

Unfortunately, though, I suspect that this may not be what the parameter adjustment should ideally look like. From what I can recall, it should look more like a cube. My theory is that I did not include enough of the sides of the buildings on the street, and only the street and the horizon line were detected. But when I pulled it into lineup view, it looked perfect, so I continued.

I did have to rewatch the superhero training video to remember the order of calculating the lens distortion in the parameter adjustment window, though. I wasn’t able to remember that on my own. I remembered how to do everything and why, though, just not what happens when. I’ve written it down to study.

I put in the 3D model and was at first confused that I was unable to move it, but realized quickly that I needed to turn off “contains survey data.”

I projected my points onto the 3D model and had no issues.

So I exported the MEL script- it crashed a couple of times and I was forced to export without the 3D models- and ran Warp4. This was the first time Warp4 hasn't crashed on me on the first try. I saved the dewarped footage and brought it all into Maya.

Then came some purely aesthetic work. I tried to match my skydome’s exposure to the fading light of the shot and I rotated R2-D2 so that he tilts as he swerves, in order to make it look less like he’s floating along and more like he’s powering his own journey.
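
Exposure matching like this comes down to scaling the light in stops, where each stop doubles or halves the intensity. A tiny sketch of the arithmetic (plain Python, not Maya; the 1.5-stop figure is just an example, not what I actually used):

```python
def apply_exposure(value, stops):
    """Scale a linear light value by 2**stops (one stop = one doubling)."""
    return value * (2.0 ** stops)

# Hypothetical: dim the skydome by 1.5 stops to match fading evening light.
print(apply_exposure(1.0, -1.5))  # a bit over a third of the original intensity
```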

This R2-D2 was not rigged and the geo was in numerous small pieces, so I didn’t have the option to move the body and ambulatory limbs separately.

Overall I’d say I did a fairly decent job. I’m not sure whether there was anything I forgot and simply got lucky that it didn’t ruin my shot. I do suspect that I’m lacking some information on the sides of the shot. However, considering I did this without help and only had to look up one step, I feel pretty good about it.

KK Tutorials: Lighting Week 1

I made it all the way through the first lighting tutorial and only ran into a couple of issues. When I first started, I was confused because my file opened to look like this:

-as opposed to the full scene that KK had on his screen. No matter how many times I re-pathed the images I kept getting this result.

I reached out to the class and found that Crystal was having the same issue. After some troubleshooting, she informed me that the answer was simple: pressing 1 on a node displays it in the viewport, and KK’s scene happened to be on a different node than ours when we loaded ours in. I was able to follow along with him after that.

The only other issue I ran into was when we were rendering the shot in Maya. For some reason, when I pulled up the geo, I saw this fully constructed scene rather than the HDRI in the background. I repeated this step a few times and got the same results. I hesitate to tamper with it, as I’m not sure whether there’s an actual issue in the file structure or if this is simply what the final outcome should look like.

Houdini Tutorials: Week Four

This week I was able to follow along all the way until the point at which we got back into the particle animation. For some reason, my computer crashes every single time I try to render even one frame of it.

But I learned a lot about rendering just from following along with the rubber toy example, and I will try to render one of my destruction sequences using this knowledge.

The Researcher: Collab 2/8/21 Meetup

These are the decisions reached by the group during today’s call.

This Week’s Work:

Monday to Tuesday- Emma creates an animatic from the storyboard with consideration of timing so that our Sound Designers can begin working with a clearer idea of what is needed and when.

Tuesday- Gherardo and Antoni attend 3D lesson, possibly meet with professor one on one to ask about rocket simulation aspect.

Wednesday- Kamil and Emma attend mocap lesson, learn more about the projected length of time it will take us to animate each shot.

Thursday- Entire group meets with Luke to discuss project.

Wednesday to next Monday- Kamil and Emma begin building the set for Act I. VFX team works on space simulation for first shot and finding 3D assets to incorporate for the set. On Monday we meet back up to discuss whether our set is done and plan to begin animation.

During our meeting we went through the storyboard and discussed any possible changes or updates on our shots, found some 3D interior models to use for our sets, and discussed a realistic output for the project. Our goal is to have at least one act fully finished for our showreels by the end of the project.

Key Roles to Know in the Animation Industry

Previs vs Postvis

Previs artists are responsible for creating a rough idea of what the shot is going to look like for the directors to review. The benefit of previs is that the team has an idea of what the shot will look like before spending the money to go ahead and film (or paying a team to start working). Oftentimes the animation does not have to look good- what’s important is that the camera work, placement, and timing can be reviewed.

Postvis artists apply the finishing touches after filming. This can include visual effects and additional animation to show the director as a blueprint for what the final version will look like. This postvis work will then be brought to the final animation team to create.

Techvis is a subset of these that involves taking the previs or postvis work and making it accurate to the millimeter, in order to make sure it reflects what the equipment that will actually be used is capable of.

VFX vs Film vs Game Animation

VFX Animation is most often used in conjunction with live action film and involves adding in particle and light effects, rotomation- animating on top of shots using a motion track- or animating in CGI objects, like weapons and superhero suits.

Film Animation usually refers to a movie that is entirely 3D Animated and may be more similar to a Pixar project, with a cartoonish and less realistic style.

Game Animation is usually a longer project. It’s less stylized, and requires the team to take into account the player’s control and world exploration. Everything must be seen from any and all angles.

Houdini Tutorials: Week Three

One Hour In

I found last week’s tutorial very easy and fun, but I expected this one to be a little more intense, as it says on the tin. It did indeed take me about 3 hours to get through the first hour of the tutorial. Most of the extra time it takes me to get through these is spent researching terms I don’t understand, like “voronoi”, “vex”, “VDB” and “voxel”, going back to repeat a step when it happens too fast, or going even further back to figure out why my results don’t match his. While the process makes sense to me overall, I often don’t understand the significance of many of the little steps or how Mehdi is able to tell that they are necessary. I’m also only about 80% confident that I understand the goal of our work in the first hour. I know that we’re creating a realistic fracture of the objects, but is the exploded view the end result of the destruction or simply a way to review whether or not the fracture seems realistic? Despite this, I am able to follow along, and I often understand things better towards the end of a tutorial, so perhaps it will all make more sense soon.

Side note- here’s a video I found that helped me get on the same page really quickly.

Hour and a Half In

Somehow, this took me even longer, and only for 30 minutes’ worth of tutorial. I was just about ready to give up for the night when I finally got to see the product of my work- this beautiful destruction- for the first time, and regained motivation.

I feel that, as with the last tutorial, it helps it come together for me to be able to visualize the product of the long line of work.

After a couple Houdini crashes on my slow computer, I figured out how to get my boolean fracture linked up to the DOP network for an even better effect.

I got the rbdmaterialfracture to run, but it’s incredibly slow and not really doable on my machine. I’m going to follow along with what Mehdi is doing in it so I can learn but I’ll have to keep my Houdini on manual update.

Day Two

Before plunging back into the tutorial, I took a mini break to smash something else, out of sheer curiosity as well as to test how well I had retained the skill.

I also wanted to try a building complex, but those aren’t made of a single solid material, and I’m not sure how to handle that yet, so I’ll finish the tutorial first.

2 Hours In

The polyreduce node was a big help in speeding up my simulation enough to follow along with Mehdi, in addition of course to keeping my work on manual mode. I will keep that in mind in the future as our tutorials continue.

Unfortunately, it looks as though this will be the first tutorial I don’t complete. I got to the point where we create an active and an inactive point group on the cabin so that the entire object isn’t moved during the collision, but rather a small part breaks off. I copied Mehdi exactly, over and over again, and my result was always the same- no change.

Stopping Point

The only possible explanation for the problem I could think of was that my geometry supposedly has holes. I couldn’t figure out where or how to locate them, though, and I was using the simple geometry that we made during the tutorial.

Collab Unit: Week One

This morning my collab group had our first video call discussion to begin planning out our project. The roles in my group are as follows:

3D Animators– Kamil Fauzi, Emma Copeland

Sound Design– Ben Harrison, Callum Spence

VFX– Gherardo Varani, Antoni Floritt

Antoni was unable to make this meeting today as he was traveling back to London, but he and Gherardo will be meeting later to catch up on everything, and we plan to video call all together again on Monday to check in once more. Today we began discussing our work for this week as well as everyone’s initial ideas going into the project. Our sound designers had already begun planning some ideas for the soundtrack and effects with each other, and over the past few days we’d been sharing files suggesting different textures, simulations, and objects.

For example, on Monday I had found this cave that could be used:

Antoni shared with us a solar system simulation that I’m very excited to see in action:

-and Kamil shared with us some very useful sets and objects that he’d used in the past in different programs and was converting to be used in Maya for our project.

I’m feeling extremely confident in our project, as all of my teammates are very talented and motivated and bring a lot to the group. We are also having a very easy time communicating so far, which I believe is going to be essential to getting our project off the ground.

During the call we decided that the first thing to do would be to create a storyboard. This way we can plan the timing of the film and give the sound designers a more solid idea of what they’re working with. I will begin working on the storyboard for Act I, and Kamil will begin working on Act II.

We will be continuously uploading our work to the Discord server to check in with the group, and once the storyboard is done the plan is to start building our Maya scenes, with a different scene for each shot. Ideally, we’ll be able to get moving on that starting Monday after our next call.

Advanced Unit Overview: Goals

My progress over the past few months has been incredible, and I’ve learned so much more than I ever imagined I could. I owe this huge advancement to the course’s thorough structure, the vast wealth of resources available to us, and our leaders’ genuine desire for our professional success. However, I’ve still got quite a long way to go before my work meets industry standards.

Some of my weaknesses in the past have been understanding the controls on more complex rigs, as well as accounting for body weight and developing a better eye for realistic posing. I hope to greatly improve my technical skills as I proceed through the course.

Tentatively, I’d like to consider lighting and camera work as my specialism. I spend a very long time setting up my lights and cameras and it’s always one of the most fun parts of my project.

These are the two most recent projects that I’ve touched. I spent quite a while setting up the lights on both. In the tailed ball scene, I worked hard to give the light a natural, daytime effect, dappling yellow through the leaves of the trees (which I had carefully placed), while never completely obscuring the animal in darkness, and making sure to project its shadow on the rocks so the silhouette could be observed. I spent a long time playing with the water, trying to find a balance between depth and reflectivity. On my performance animation, I spent almost two days on the lights (not to mention collecting a tasteful amount of debris and scene elements). My lighting had many aims: frame Janine (which I did with four surrounding area lights illuminating only her, including a stronger backlight for an eerie feel), highlight the blood on the stairs (which also serves to frame her), cast the rest of the house into sharp-shadowed darkness (drawing the eye to Janine and the stairs), and create an eerie reflection on the floor. I chose a bright red velvet shirt and heels to associate her with the blood on the stairs as well as give her an untrustworthy aura. I also used a camera technique I hadn’t tried before, panning around quickly to the rest of the characters on the opposite side of the room. This was intended both to cut the tension and to add to the fast-paced nature of the scene.

-I digress; I’m very interested in lighting and cinematography. I know I have to learn basically everything in order to even begin understanding it technically, so hopefully I’m not put off by what this work actually entails.

Houdini Tutorial- Week 2

In the beginning of the tutorial I got confused very quickly. I was having a hard time understanding what Mehdi was talking about when he discussed UVs. The last time I also struggled to understand the concept of normals, which is a similar subject. I did a little bit of research online and found this diagram to be helpful:

UV mapping - Wikipedia

-as well as the explanation that U and V don’t stand for anything; they simply name the two axes of the texture map, since X, Y, and Z are already taken for positions in 3D space.
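
To make the idea concrete for myself: a UV map assigns every 3D point a 2D coordinate in texture space. Here’s a small sketch (plain Python, my own illustrative example, not from the tutorial) of one common mapping, projecting a point on a unit sphere to (u, v):

```python
import math

def spherical_uv(x, y, z):
    """Map a point on a unit sphere to (u, v) texture coordinates.
    u wraps around the equator, v runs from pole to pole -- a simple
    illustration of why two extra letters (U, V) are needed when
    X, Y, and Z already name positions in space."""
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v

# A point on the +X axis lands in the middle of the texture.
u, v = spherical_uv(1.0, 0.0, 0.0)
print(u, v)
```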

On to the project-

POP Object: a container for particles

POP Solver: toolbox for calculating the physics

POP Source: generates particles on the surface of the object

Scatter: generates points on the surface of the object

SOP Path: sources the object
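
The way I understand the pieces above: at its core, a particle solver just integrates forces into velocities and velocities into positions for every particle in the container, once per frame. A minimal plain-Python illustration of that loop (not Houdini or VEX; the 24 fps timestep and gravity value are my own assumptions):

```python
# Minimal sketch (plain Python, not Houdini) of what a POP-style
# solver does each frame: add forces to velocity, then add velocity
# to position, for every particle in the container.

GRAVITY = (0.0, -9.81, 0.0)

def solve_step(particles, dt=1.0 / 24.0):
    """Advance each particle one frame with explicit Euler integration."""
    for p in particles:
        p["vel"] = tuple(v + g * dt for v, g in zip(p["vel"], GRAVITY))
        p["pos"] = tuple(x + v * dt for x, v in zip(p["pos"], p["vel"]))

# One particle emitted at the origin, falling under gravity.
particles = [{"pos": (0.0, 0.0, 0.0), "vel": (0.0, 0.0, 0.0)}]
for _ in range(24):  # one second at 24 fps
    solve_step(particles)
print(particles[0]["pos"][1])  # roughly -5 units after one second
```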

At first I was worried because I couldn’t see my object in the POP network while Mehdi could, and then realized I simply had to move to a frame in which the points had begun accumulating.

I was already really excited about my simulation with still half an hour left to go, and was surprised when Mehdi said “let’s make it more interesting”! There’s so much in Houdini I don’t know, I can only imagine all the awesome things that we could choose from.

I rendered our first flipbook:

I specifically chose to work with slightly fewer points than Mehdi, just because I felt like they were obscuring my character and I didn’t like it aesthetically.

But in the next step I was a bit confused. Mehdi created a sphere as a test object to work on our point attributes with, but when I did the same, Houdini would not let me. Logically, the point simulation we created was set up to work with the group testgeometry_crag1, so the attributes would not apply to the sphere.

I’m not sure why Houdini allowed Mehdi to do this, unless I missed a step along the way.

I managed to resolve this, but the issue I’m proudest of resolving on my own was the inability to see any particles at all at frame 0- I remembered that we had set the beginning frame to 235 in our popsource, and went back to include all the frames. That, and I once again adjusted the parameters in the pscale to my own liking. I actually turned the birth rate way back up and made the particles even smaller.

Unfortunately, Houdini then crashed. And it was about 1 AM, so I called it a night.

The next day, though, I returned with my brain function switched on, and found it super easy to recreate the finished product.