Thesis: Best of Cut-For-Time Content [2: History of Animation and its Potential Effects on Contemporary Audiences by Age Group]

1.2 THE OLDER AUDIENCE: A BRIEF OVERVIEW OF MIDCENTURY ANIMATION, CHILDREN’S PROGRAMMING, AND CONTEMPORARY PREFERENCES

            For the purposes of this paper, the “older audience” will refer not only to the baby boomer generation (birth years 1946–1964) but broadly to the generation born before 1971, because the landscape of animation remained steadily predictable until industry landmarks after that point drastically changed public opinion: specifically, the lifting of the Hays Code in 1968 (Lawrence, 2020), which allowed the introduction of adult themes into animated media, and the demand for 24-hour television content at the beginning of the 1980s (Marques, 2021). The elder viewership in this case therefore refers to audience members over the age of 50.

            Famously, animation owes its roots to features intended for audiences of all ages, with adult themes explored and satirized in the early days of Betty Boop and Felix the Cat (Lawrence, 2020). As Walt Disney came to dominate the industry in one long, methodical monopoly, television was gaining ground as the forefront of daily entertainment. The world was becoming smaller, religion was slipping from its hold on society, and people grew ever more curious about the shifting culture around them; so the aforementioned Hays Code was instituted to dictate what constituted morally correct content permissible to broadcast into the homes of millions. Consequently, just as animation gained prominence, it found itself stunted by a strict set of regulations: cartoons were reshaped around unwaveringly family-friendly humor, and pre-existing characters and concepts were forced through a rapid reformation. This was the first contributing factor to the older audience’s perception of animation as solely a form of children’s entertainment.

            The nail in the coffin was Hanna-Barbera. Before William Hanna and Joseph Barbera arrived on the scene, the animation industry was dying; budgets simply didn’t exist for such an expensive and time-consuming art form (Coleman, 2017). The entertainment industry is a business like any other, and regardless of public opinion on the production quality that resulted from barebones budgets, their success speaks for itself. Their technique of limited animation involved creating shorts with fewer drawings per second, reusing the same cel while animating only certain components (for example, a turning head, moving mouth, or blinking eyes), and looping previously completed animation repeatedly. Subtle consequences took form as recognizable threads poking from the seams of this cheaply stitched shamble, such as the collar, necktie, or beard placed on every character to keep the audience from noticing that a head sat on its own cel so it could turn independently of the body. As disparaged as Hanna-Barbera is among animators and artists, financial figures speak for themselves: Hanna-Barbera won seven Academy Awards and reached a net worth of US$300 million at its peak.
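
The economics described above can be made concrete with a quick illustrative sketch (my own simplification in plain Python, not any studio’s actual pipeline): holding each drawing over several broadcast frames, as in animating “on twos” or “on threes”, slashes the number of unique drawings a shot requires.

```python
# Illustrative only: counts the unique drawings needed for a shot when each
# drawing is held over multiple frames ("limited animation").

def unique_drawings(seconds: float, fps: int = 24, hold: int = 1) -> int:
    """Drawings needed when each drawing is held for `hold` frames
    ("on ones" = 1, "on twos" = 2, "on threes" = 3)."""
    total_frames = int(seconds * fps)
    # ceiling division: a final partial hold still needs one drawing
    return -(-total_frames // hold)

full = unique_drawings(30, hold=1)     # fully animated 30 s shot: 720 drawings
limited = unique_drawings(30, hold=3)  # held on threes: 240 drawings
```

And this two-thirds reduction comes before the further savings of reusing cels and looping completed cycles described above.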

            Nonetheless, cheap animation cemented the idea that cartoons were poor-quality, mindless drivel churned out to fill the Saturday-morning slot as an easy cash grab. To this day, many adults over the age of 50 continue to view animation as a medium that exists only in this capacity, and they are thus an easy group to accidentally drive away when using animation in media directed towards a broad demographic. Mowe Studios, a motion graphics studio in Hallandale Beach, Florida, advises in its article How to Market to Different Generations with Animation that “Baby boomers are naturally more inclined to think of animation as children’s entertainment…. Overall, they prefer things that look simpler and natural. They aren’t so inclined to represent human characters with unusual colors or limb proportions, like longer legs or tiny heads. And if your story has a character that isn’t human, be aware that adding life to an object can look childish for them, depending on how it looks, acts, and moves” (Marques, 2021).

            Therefore, the eldest audience of an animated feature is arguably the most important to research when considering the artistic style employed in one’s work. Unless a creator is focusing solely on another demographic, certain that lost viewership among the older generation will be compensated for elsewhere, it is crucial, when releasing media intended to appeal to a wide audience, to ensure that this group does not consider the work unprofessional and immature.

1.3 THE MIDDLE AUDIENCE: MILLENNIALS AND THE DEVELOPMENT OF CGI

            Though Stephen King’s name hasn’t budged from its lofty throne as the legendary master of horror novelists, Michael Crichton may not forever be remembered as the 1980s icon of the science fiction thriller genre. Playing on the technology explosion and the scientific advancements in medicine, RNA splicing, and artificial intelligence that made his contemporaries so uneasy, Crichton approached his stories with a Mary Shelley question: What if we took it too far?

            Within the world of Crichton’s wildly successful work arose films that demanded the most realistic and heart-stopping of visual effects. The audience needed to feel the abject terror of the characters as a twenty-foot-tall Tyrannosaurus rex bites a Ford Explorer in half with one crush of its mighty jaws, or the uncanny glare of unstoppable, heartless robots bearing the faces of trusted human companions. True to his inspiration, not a dollar was spared on the latest in cutting-edge technology to instill fear in the hearts of his fans: Westworld (1973) brought the first 2D computer-generated imagery to feature film, and its 1976 sequel Futureworld the first 3D CGI. As the entertainment industry raced to catch up, Crichton’s work broke barriers again with Jurassic Park, in which four total minutes of pure dinosaur-inflicted peril are completely computer generated (Semlyen, 2010).

            Though both films were created for an adult demographic, the first full-length computer-generated film, Toy Story, was decidedly for children, though it continues to appeal to all audiences to this day. Toy Story was in itself a remarkable feat, as half of its crew of a meager 27 animators had never even used a computer before (Semlyen, 2010). Yet, remarkably, it pulled in a box-office smash of $373 million, a figure that tripled as Pixar continued to release sequels to the saga (Nash Information Services, LLC, n.d.). A boom in 3D computer-animated films marketed towards children followed immediately, with traditional hand-drawn work now considered massively passé by audiences as other studios battled to replicate Pixar’s financial success.

            Simultaneously, an interesting new phenomenon was blossoming at the forefront of the animation industry. The Simpsons had begun to sink its hooks into mainstream culture as the first successful adult animated series. Funny, edgy, and clever, Matt Groening’s work promised a whole new dawn of storytelling for animators and artists, but somewhat of a failure to launch took place instead. Studios placed utmost importance, as always, on guaranteed financial success rather than on pushing forward new, risky ideas, and for well over a decade, the only television programs that reached the popularity of The Simpsons pounded the same concepts into the dirt, to greater and greater extents, searching for a breaking point: a barrage of sitcoms revolving around the American nuclear family, all 2D animated, all in the same round-eyed style, all featuring shocking, offensive, raunchy, and violent humor (Aitchison, 2019). This coincides with a revolution in censorship, particularly a sharp increase in the acceptance of profanity and vulgarity in media: a study conducted by San Diego State University found swear words to be 28 times more likely to appear in recently published books than in those of the early 1950s (Flood, 2017). Simultaneously, adult themes have quickly become more openly addressed in televised media, though censorship of them varies strongly across Western countries; for example, European countries often permit nudity more freely, whilst the USA is more lenient with graphic violence (UKEssays, 2017; Head, 2019).

            While children’s animation embraced 3D in both film and television, exploring a diverse portfolio of genres, styles, and characters, adult cinematic animation stagnated in 2D television programming, with 3D animation flourishing for adult audiences only in the video gaming industry. For this reason, somewhat of a disparity exists in the way millennials, or the “middle audience”, view animation. Many millennial viewers welcome children’s animated films as their own entertainment, accepting the somewhat simplified plotlines in order to enjoy the bold ideas and artistic opportunities found only rarely in the often lifeless mold of adult animation. Meanwhile, a similarly significant number of adult viewers consider only overly violent or raunchy animated work fit to consume, carrying the same prejudice as the older audiences: the notion that all animated work is inherently juvenile, and that it must break boundaries into shock-value territory to prove itself.

            This generation is the trickiest and most interesting to study in its response to animated content, because opinion is so divided. Much of it may depend on whether a viewer is willing to consume children’s entertainment, such as contemporary Disney releases, in which case it may be posited that they would enjoy a more cartoonish style of animation, or whether they enjoy video games, in which case they may prefer a more realistic style. It is this exact matter that shall be further explored in the personal investigation conducted for this report: which components of 3D animation interest or disinterest the “middle audience”, and whether a trend exists in whether these viewers treat animation style, particularly cartoonish versus realistic, as a factor in assessing the medium to be more or less juvenile.

1.4 THE YOUNGEST AUDIENCE: THE MOTION GRAPHICS ARTISTIC RENAISSANCE IN CLICKBAIT CULTURE

            Gen Z is commonly mocked for its purported social media dependence and short attention span, but with the popularity of video streaming platforms like TikTok converging with the normalization of video game culture, visual artists have found a platform for their work to flourish. Young animators have found a place in TikTok culture, with some amassing up to five million followers, and some even claiming that being active on the app has taught them more about animation than art school ever could (Kastrenakes, 2020).

            This could perhaps be traced back to the popularity of CGI integration in live-action videos taking hold at the dawn of Snapchat and the rise of looping animated filters, or even to the hold that anime, the art of Japanese animation, has finally taken upon the Western public. Once considered nerdy and niche, the genre has seen a huge cultural shift within the last decade or so; in recent market research conducted by the anime streaming site Crunchyroll, only 6% of Gen Z participants had never heard of anime, compared to 27% of the general population (Morrissey & May, 2021). While this paper will focus only on Western animation, due to the vast differences in the historical and cultural context of Eastern animation, it is essential to note that the acceptance of anime as a widespread source of entertainment among Gen-Zers may well have led to the current upswing of interest among young adults in new, different, and artistically driven animation.

            Gen Z is, without a doubt, the generation most accepting of animation of all the adult audiences. One difference to note, according to Mowe Studio, is that “Millennials are more inclined to cleaner, ‘pixel-perfect’ aesthetics, while Gen Z prefers more rough, organic, and natural styles.” This description lends itself best to 2D animation, but the question remains whether it holds true for 3D as well, because there is currently a noticeable, gaping lack of 3D-animated films and television shows created for an adult audience, presumably because, between the younger and middle audiences, studios aren’t sure what adults will respond best to and are unwilling to take a financial risk.

            Children, as always, prefer stories they can relate to, with characters that act like children, bright colors, and engaging, interactive themes (McPherson, 2020), and there is little surprise in the fact that they still, as much as ever, prefer animation to live-action; tried-and-true methods of marketing to developing minds hold as fast as ever. For this reason, 3D animation has been explored almost exclusively with its youngest audience in mind, and, despite one or two exceptions, the industry has held back, timid, from roaming into the territory of young adults, who eagerly await animated work made, for once, with them in mind.

Addendum: After reviewing this segment, feedback was given to change the emotive tone and downplay objective, opinionated, baseless statements. Much of this was changed, before ultimately being cut for time anyway. This is the original version.

Character Rigging II

Over the past month or so, I’ve been pushing to devote eight hours per day each to my internship and my FMP. Though it’s been stressful, it’s been rewarding, and the quality of my work has improved so, so drastically since starting my position at the studio. Of course, it’ll be necessary to delve into this in an entirely separate post as I finally compose a few updates on my FMP work.

In the meantime, I took a break from FMP work for the past two days, as I was moving house, and in my free time I found https://www.models-resource.com/, a site to which models from all sorts of video games are uploaded. The Animal Crossing resources are extensive, and because it’s a sort of nostalgic comfort food for me (and because I sort of misinterpreted the task as potentially being easy due to the character’s simple shape), I decided to rig Blathers for fun.

Blathers with correct joint orientation

I was excited to discover that the FBX file came with a pre-built joint skeleton, already skinned to the mesh, and assumed I’d be able to somewhat speedrun the rigging process. Unfortunately, I soon realized that most of the joints were not oriented the right way, which meant an hour or so of playing around with my JointOrient MEL script, as well as manually orienting a couple of joints, to make sure that Y always faced out/up and X followed the path of the skeleton. In the end it was still a huge time saver that the skeleton already existed, but having to unbind the skin and re-orient the joints was a bit of a step backwards anyway.
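
For anyone unfamiliar with joint orientation, the convention I was enforcing can be sketched in plain Python (no Maya required; function names here are mine, not Maya’s): aim the X axis down the bone toward the child joint, then keep Y as close to world-up as the aim allows.

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def joint_basis(joint_pos, child_pos, world_up=(0.0, 1.0, 0.0)):
    """Return (x, y, z) axes for a joint: X aims at the child joint, Y stays
    as close to world-up as the aim constraint allows, and Z completes the
    right-handed set. Degenerates if the bone is parallel to world_up."""
    x = normalize(tuple(c - j for c, j in zip(child_pos, joint_pos)))
    z = normalize(cross(x, world_up))  # sideways axis, perpendicular to X and up
    y = cross(z, x)                    # recomputed up, guaranteed orthogonal to X
    return x, y, z
```

Walking math like this down the hierarchy is essentially what a joint-orient tool automates; the manual fixes come in where a bone happens to point nearly straight up and the world-up fallback becomes ambiguous.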

A couple of other roadblocks I hadn’t expected when working with someone else’s skeleton: the original creator had not put a bend in the arms or legs, which stopped my IK handles from working correctly and took me a while to diagnose.
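
The reason a dead-straight chain breaks IK is easy to see in the two-bone case: the solver picks the elbow position from the chain’s existing bend (the preferred angle), and with no bend at all, the fold direction is undefined. Here is a minimal sketch of the law-of-cosines solve (my own plain-Python illustration, not Maya’s actual solver):

```python
import math

def two_bone_ik(l1, l2, target_dist):
    """Return the interior elbow angle (radians, law of cosines) needed for
    bones of lengths l1 and l2 to reach a target at target_dist from the root."""
    d = min(max(target_dist, abs(l1 - l2)), l1 + l2)  # clamp to reachable range
    cos_elbow = (l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2)
    return math.acos(max(-1.0, min(1.0, cos_elbow)))

# Fully extended: the elbow angle is pi (dead straight), so the fold
# direction is undefined -- the solve can flip either way frame to frame.
straight = two_bone_ik(1.0, 1.0, 2.0)  # pi
# Target closer than full extension: the elbow must fold, and the slight
# pre-bend modeled into the rig tells the solver which way.
bent = two_bone_ik(1.0, 1.0, 1.0)      # pi / 3
```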

Two other minor learning moments were my journey with painting joint weights correctly on such a simplistic model, particularly the wings, and figuring out a workaround to toggle the regular-eye versus blinking-eye texture visibility on the same geo. I would have preferred the creator either build the mesh with the “eye-cap” (as it’s listed in the outliner) smoothed a bit more into the mesh, or allow a way to toggle blinking, whether through blend shapes or actual eyelids; instead, I ended up duplicating the geo, applying the blinking texture, parenting the duplicate to the neck control separately, and toggling visibility on the separate eye-caps.

I also built the beak joints and controls myself.

This project was a fun little break from my work, and helped me build upon and reinforce my rigging abilities while also opening my eyes to how much more there is to learn. I’m mostly glad to have a rigged Blathers to play with, and it does make for a nice little clip for my showreel, though I’m unsure whether the animation comes off as strange or amateurish to a viewer unfamiliar with the animation style Animal Crossing uses (which I was imitating, specifically Blathers’ waiting idle). Once I’ve got my FMP work ready to add to the reel, I’ll be able to evaluate a lot of my more recent, more advanced work and narrow down what should be included.

Character Rigging I

Above is a video displaying my successful first attempt to learn character rigging. I’m very proud of the rig and skeleton that I’ve created, and I already know a lot more about Maya itself just by having spent the time to practice this. I predict that in the future my new understanding of joints, IK handles, and NURBS curves will help me solve a lot of my own animation problems, and I’m pleased to share this clip on my showreel to complement my animation and motion tracking skills.

A good example of the way this has positively impacted my animation skills: I’ve long struggled with incorrect joint weights deforming a mesh inappropriately, and only now do I realize the cause. As I rigged this model, I noticed that the fingers and feet were deforming with movement, and was ecstatic to finally learn how to fix this error.

That being said, in my further experiments in rigging I’d like to set up an arm rig that does not rely on an elbow pole vector to steady the arm, and in the same vein allows for smoother and less hands-on work. I’d also like to explore facial rigging, as I’ve (as yet) only mastered the use of blend shapes for facial expression.

I must now spend my time diving back into my FMP, as I have been over the past several weeks, but I’m glad that I took some time to get this valuable knowledge and set my eyes on a fresh project for the time being.

Individual Project: Matchimation

Today I delved back into the subway shot. Launching the scene with the Shave and a Haircut plugin engaged worked like a charm. In a matter of an hour or so, I was able to get the rig into my tracked scene and scale it up to the size of my model. Unfortunately, I hadn’t anticipated that this rig is intended to be short, so I had to make him rather big comparatively in order to cover the head of my model. As a result, the top of his hat is cropped off.

This is how the scene looked when I first managed to scale the model up to the correct size and completely obscure my model in the reference footage:

Obviously, there are two immediate problems here: one, the hands cannot correctly grasp the poles; two, the beard, as well as the fur on his coat, is gray and appears somewhat dirty. In contrast, when I had tried to render out the rig on the farm by itself earlier, this image had been the result:

I am not entirely sure why this issue resolved itself when I transferred the rig into my tracked scene, with the gray beard problem arising instead. I shot a message over to Luke asking why the hair appeared to be glowing.

In the meantime, I set to work building geo in the scene for those poles so that his hands could convincingly grasp them. Then, over the course of the next few hours, I animated the rig, taking care to move the character convincingly in conjunction with the motion of the subway carriage while also focusing on small details and overlap. This took up the majority of my work day, despite the clip being only two seconds long, as I wanted to make sure to lay down a solid baseline. My rough animation (minus the facial expression) is shown below, and I’m pretty happy with it, especially considering the short amount of time it took me to complete.

After getting my matchimation work out of the way, I caught up on my messages from Luke. His response was that the fur was not glowing; rather, the plugin was not working correctly. I sent over the gray beard image, and he told me that it looked like an issue with the self-shadowing options in the fur shader, and instructed me to try playing around with some of the sliders.

cast and receive shadows on in Arnold: gray-beard situation

Above is the gray-beard issue, made more intense by my adjustment of the hair roots to be thicker (I found it looked rather sparse before). While toying with the self-shadow and geometry-shadow options in the fur shader had no effect on the problem, I found a significant difference in the result when varying the cast and receive shadows options in the Arnold render stats.

cast and receive shadows off in Arnold: white-beard situation

Above is the result of turning off both cast and receive shadows. The beard and sleeve fur are altogether too white; however, it’s a lot closer to what I want. I decided that the best middle ground, for now, is to check only one of the two Arnold render options.

After bringing the rig into my scene, scaling and sizing it, completing my matchimation work, and solving the Shave and a Haircut issue, almost eight hours had already passed, but I was determined to work on lighting and the shadow matte before calling it quits for the day. Unfortunately, it wasn’t in the cards this time around. A simple skydome lights the scene realistically but not as dramatically as I hoped, and too bright an exposure blows out the beard highlights. It will require tweaking later on.

Here is the result of today’s work. Looking at it reminds me how much more I want to do, but it’s not bad for one day.

Individual Project: Tutoring Help and Image Plane Error Solutions

As I mentioned in my previous post, I’d lost a lot of time on my individual project due to technical difficulties with the image plane, unsolvable in the hypershade, the attribute editor, or even by recreating the dewarped footage.

After running through a couple of options today, we found the solution: the EXR files were simply too large to run compatibly in Maya on the image plane. I put the EXR files through Media Encoder, converted them to JPEGs, and found that they ran perfectly.

So now that I’ve finally got a working scene, I can return to the task from last week and build geo for it, aligning it as tightly as I can with the shot in order to make up for the unfortunate use of the nodal camera necessary for this track. Once I’ve got my geo built and my character rig in and scaled, I will review the scene with Dom to make sure it is ready for animation. Thankfully, I’ve got a second session with Dom scheduled for this week on Friday, so I’ll be able to move along with my progress at a faster rate and make up for some of the time lost on this issue.

Below I am including a list from my handwritten notes on how to snap a 3DE scene to the ground plane in Maya, because I continually forget and end up writing and researching it repeatedly. I attempted to break down the steps more thoroughly in my notes this time.

3DE Scene Snap to Ground Plane in Maya Process:

  1. Select scene, then hit W, followed by D, to bring up the pivot point
  2. Middle mouse drag the pivot to a point in the scene while holding V to snap.
  3. Hit D again, and, holding X, snap the scene to the center of the world.
  4. Turn on frustum display controls, allowing you to see the camera’s angle in the world.
  5. Add another 0 to the image plane depth, allowing geo to move through it.

Now, when you load a character or object into the scene, they will automatically spawn in the image plane, and once the scene is scaled and sized correctly, blend seamlessly.
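
Geometrically, steps 2–3 amount to rigidly translating the whole tracked scene so that a chosen ground point lands at the world origin. A trivial sketch of that idea (illustrative names, not Maya commands):

```python
# Plain-Python sketch of what snapping the scene pivot to the origin does:
# every point gets the same translation, so the scene's layout is preserved.

def snap_scene_to_origin(points, pivot):
    """Translate every scene point by -pivot, so the pivot point ends up at
    (0, 0, 0) and the rest of the scene keeps its relative arrangement."""
    return [tuple(c - p for c, p in zip(pt, pivot)) for pt in points]

scene = [(5.0, 1.0, 3.0), (6.0, 1.0, 4.0)]
ground_point = (5.0, 1.0, 3.0)  # the point V-snapped to in step 2
recentred = snap_scene_to_origin(scene, ground_point)
# the chosen ground point is now at the origin; relative offsets are unchanged
```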

Indie Film Collab Project: Tutoring Help, Motion Blur Tracking Workarounds

Speaking with Dom about the motion blur issue in our indie film project footage was a godsend. While I originally wasn’t able to track any single point for longer than about ten frames, and even then wasn’t able to keep a very solid track on them, I’ve now got four well-tracked points in the midground and plan to continue placing more tonight in order to catch up on the time lost this past week.

Though I’d made it to point 74 in my original attempt to track this blurry shot, I deleted all of my previous subpar work and started again with Dom, in order to avoid my messy track interfering with the calculation of the shot. The first thing we did was open the image controls window and up the contrast and brightness to their respective maximums. Then we began searching for very dramatic patterns, looking to place points at the very corner of a high contrast area. As I tracked a high-contrast point, Dom helped me with a couple of tactics for blurry footage.

When tracking a blurry shot, it’s always necessary to use a very wide search box and to constantly adjust the pattern. For a shot as blurry as this one, we tracked every single frame by hand, which is, although time-consuming, necessary. Fortunately, Dom did tell me that after six or so points tracked accurately, the software will be able to account for the blur in calculation.
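
Under the hood, a pattern tracker of this kind compares the stored pattern against every offset inside the search box and keeps the best match, which is why a wider box tolerates bigger frame-to-frame jumps in blurry, shaky footage. A toy version (my own plain-Python sketch, not 3DEqualizer’s tracker) using sum of squared differences:

```python
# Toy template search: slide a pattern over a search window in a 2D "frame"
# (lists of brightness values) and return the best-matching offset.

def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-sized patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def find_pattern(frame, pattern, search_origin, search_size):
    """Slide `pattern` over the search box and return the (row, col) offset
    with the lowest SSD score. A wider box tolerates bigger jumps."""
    ph, pw = len(pattern), len(pattern[0])
    oy, ox = search_origin
    best = None
    for y in range(oy, oy + search_size):
        for x in range(ox, ox + search_size):
            window = [row[x:x + pw] for row in frame[y:y + ph]]
            score = ssd(window, pattern)
            if best is None or score < best[0]:
                best = (score, (y, x))
    return best[1]

# a tiny 8x8 frame with a bright 2x2 feature at row 3, column 4
frame = [[0] * 8 for _ in range(8)]
for fy in (3, 4):
    for fx in (4, 5):
        frame[fy][fx] = 9

found = find_pattern(frame, [[9, 9], [9, 9]], (0, 0), 6)  # (3, 4)
```

Motion blur smears the pattern, which is why constantly re-grabbing the reference patch (adjusting the pattern, as above) keeps the comparison meaningful from frame to frame.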

As I tracked every point by hand, I found that for each point I had to hand-select its placement at least once or twice, as the software was at times simply unable to account for the massive blur, and meticulously pinpoint its exact location using page-up and page-down; Dom did tell me not to do this too frequently, though, to avoid making each point vibrate.

Though using a wide search area with high contrast is imperative for a blurry shot, it’s also important to make sure that the search box never crosses into the depth of another object in the shot. I found it somewhat difficult to discern whether my point had moved during some of the camera shaking, and though I only have four points, I spent almost half an hour placing them.

This shot may be strenuous, but getting these points tracked is a glimmer of hope that all is not lost. It is fortunate that we know the camera specs, and, as Dom pointed out, this will be a showstopper of a showreel shot.

Camera specs: we researched the filmback width and height for a Canon EF-S 15-85mm. I was initially frustrated that I cannot have the width fixed at 36 mm, the height fixed at 24 mm, and the pixel aspect fixed at 1 all at the same time without one automatically turning to passive, but Dom instructed me that it is fine for the software to edit one of them, as long as the pixel aspect stays fixed at 1.
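
The reason one field always turns passive is that the three values aren’t independent: pixel aspect is fully determined by the filmback aspect and the footage’s pixel resolution, so fixing any two pins down the third. A quick sketch (the 1920x1080 resolution below is an assumed example, not necessarily our footage’s real resolution):

```python
# Pixel aspect implied by a filmback and an image resolution: fixing the
# filmback width, filmback height, and pixel aspect simultaneously
# over-constrains the system, so one value must stay editable.

def pixel_aspect(filmback_w_mm, filmback_h_mm, width_px, height_px):
    """Pixel aspect = (filmback aspect) / (resolution aspect)."""
    filmback_aspect = filmback_w_mm / filmback_h_mm
    resolution_aspect = width_px / height_px
    return filmback_aspect / resolution_aspect

# A 36 x 24 mm filmback with 1920 x 1080 footage cannot have square pixels:
pa = pixel_aspect(36.0, 24.0, 1920, 1080)  # 1.5 / 1.777... = 0.84375
# To keep the pixel aspect at exactly 1, the software must instead adjust
# the filmback height: 36 mm / (1920 / 1080) = 20.25 mm rather than 24 mm.
```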

Collab Project: Tracking Difficulty due to Camera Shaking/Blurriness

Similarly to my individual project, I struggled this week to make real progress on the collaborative project, hitting a wall with both the software’s capabilities and my own. The shots include frequent camera movement, which unfortunately results in near-constant motion blur. Because of this I’m only able to track around ten frames at a time per point. I am hoping to bring this up with Dom tonight for some potential workarounds; I expect we will discuss advanced techniques for splining. I’ve spent several days trying to work this out myself with no luck, and am worried about the project deadline.

Individual Project Technical Difficulties to be Addressed

After my discussion with Dom last week regarding my next steps in my individual project, I was prepared to construct geo for the scene and scale it accordingly to my rig in order to move ahead before animating. I’d even had a discussion with Luke on the rig I wanted to use, and how to load up the Shave and a Haircut plugin for it.

Unfortunately, I’ve hit a roadblock on this project before I could even begin creating geo; the image plane simply refuses to work correctly no matter how many ways I try to fix the problem. I’ve tried deleting it from the hypershade and reloading it, as well as running Warp4 again in order to make sure nothing was wrong with the EXR files themselves, and I’ve made sure every time that “use as image sequence” is checked.

I will discuss this problem with Dom tonight and make a new post regarding the solution. It is unfortunate that this setback has cost me a lot of time I could spend working.

Individual Project Tutoring May 19th

During my catch-up with Dom tonight, we reviewed some of the shakiness still present in the track and fine-tuned some of the points used in order to prevent any jolting from the subway car.

We assumed first that it may be the points on the foreground seat disturbing the parallax, as the front is not attached to the wall and jiggles with the movement of the car, but deleting these points still did not achieve satisfactory results. After several different attempts to turn points on and off, track different spots and sim the lens, Dom suggested to me that this shot may have to rely on what’s called a nodal camera due to the software’s inability to understand the depth of the scene (possibly resulting from the very-nearly-static camera).

He explained this to me as creating all of the points on one flat plane, which the camera movement will be tracked around. A definition I found of nodal camera movement is, “The no parallax point–or nodal point–refers to the specific axis on a camera lens around which the camera rotates so as not to create a parallax error. By rotating around this point, the scenery will all move at exactly the same speed in the resulting pan shot.”

I was initially confused and skeptical of this strategy, as I had assumed that failing to accurately track depth was failing to complete my job as a matchmover, but Dom explained to me that what I would do instead is create the depth in Maya when I build the geo along the correct angles.
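
A tiny numerical sketch (my own, not from 3DEqualizer) makes the no-parallax point concrete: under a pure rotation about the camera’s nodal point, every point along a single viewing ray projects to the same pixel both before and after the move, so two points at different depths are indistinguishable and depth simply cannot be solved from the footage.

```python
# Demonstrates zero parallax under a nodal pan: a near point and a point
# three times farther away along the same viewing ray land on the same
# pixel before AND after the camera rotates.

import math

def project(point):
    """Pinhole projection (focal length 1): (x, y, z) -> (x/z, y/z)."""
    x, y, z = point
    return (x / z, y / z)

def rotate_y(point, angle):
    """Rotate a point about the camera's vertical axis (a nodal pan)."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

near = (0.2, 0.1, 1.0)              # a point one unit in front of the camera
far = tuple(3.0 * c for c in near)  # three times farther, same viewing ray
pan = math.radians(10)

before = (project(near), project(far))                            # same pixel
after = (project(rotate_y(near, pan)), project(rotate_y(far, pan)))  # still same
```

Because the two depths are indistinguishable on screen, the solver can only place every point on one plane, and the real depth has to be rebuilt later, exactly as Dom described, by modelling the geo along the correct angles in Maya.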

My next step on this project is to bring it into Maya, line it up to the ground plane, and scale it, making sure that all the flat lines of the floor line up; then I will add in my character model and send that over to Dom for the okay before I go ahead on the next step, which is to bring in animation.

Individual Project Motion Tracking

Attempt 1

The benefits of using my own footage for a tracked shot are that I can intentionally set the scene to both challenge myself and provide easy patterns to track. I planned for the subway car to be a relatively easy background track, with plenty of signs and seat markings to get a solid track on. The camera is completely static, and the only motion path necessary to figure out is the slight shaking from the stopping and starting of the car. The only plane that I struggled to track was the background wall, as the only markings/signs on the wall intersect with my model’s arm and the midground pole.

I was astonished when my very first calculation resulted in a deviation curve of only 0.0486.
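
For context, deviation measures reprojection error: the pixel distance between where a point was tracked in 2D and where its solved 3D position reprojects through the solved camera, so a value of 0.0486 means roughly a twentieth of a pixel on average. A simplified sketch of the idea (my own plain-Python RMS version, not 3DEqualizer’s exact formula; the coordinates are made-up example values):

```python
# Simplified deviation: RMS pixel distance between tracked 2D positions and
# the solved 3D point reprojected into each frame.

import math

def deviation(tracked_2d, reprojected_2d):
    """RMS pixel distance between tracked and reprojected point positions
    across the frames of a shot."""
    sq = [(tx - rx) ** 2 + (ty - ry) ** 2
          for (tx, ty), (rx, ry) in zip(tracked_2d, reprojected_2d)]
    return math.sqrt(sum(sq) / len(sq))

tracked = [(100.0, 200.0), (101.0, 200.5), (102.0, 201.0)]
solved = [(100.0, 200.0), (101.05, 200.5), (102.0, 201.04)]
dev = deviation(tracked, solved)  # well under a tenth of a pixel
```

A low number only says the solve reprojects consistently; it does not by itself guarantee the solved depths are right.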

However, my excitement began to wear off when I considered a few factors. This was only the deviation for one point group, and some of the others had slight inconsistencies or large spikes. On top of that, when I moved my shot into lineup view, the depth of the planes was very clearly incorrect. In the video below I’ve tried to demonstrate what I mean: although the ground plane looks okay, it appears almost as though 3DEqualizer thinks the back is the front; the back wall and midground seats have much larger X’s than the foreground seats.

That being said, the deviation curves in every point group look relatively okay.

back wall
ground plane
foreground seats
background seats

I spoke to Dom about this issue over Discord in advance of our session tomorrow, and he informed me of the problem: by placing each plane of the background in a separate point group, I was asking 3DEqualizer to consider them objects to track rather than part of the background. All of the background points must belong to the same camera point group. And so, happy to have learned this before wasting our entire session figuring it out, I went back to the drawing board to track again, this time in the camera group.

Attempt 2

Upon tracking the background of the shot a second time in the camera group only, I was met with a parameter adjustment chart that makes a lot more sense; however, the depth of the back wall does still seem to be slightly off. I reached out to Dom again about this and he reminded me to calculate a second time.

Now that the background track looks a lot better, I’ve tracked a couple points on the object as a starting point.

During my session with Dom tomorrow, I’ll revise and edit the track as needed.