The visual effects of HBO’s Game of Thrones were something that intrigued many fans before production started, given the grand epic sweep of the setting and the fact that epic fantasy on television was a tall order. They certainly captured our imaginations as well, as we considered what HBO had achieved in its other shows, and what other television programs and VFX vendors had been able to create. The end result? Some of what fans imagined (and even what we tried to project) proved too ambitious for a show on such a tight schedule and with so many other budgetary needs to meet outside of visual effects (extensive sets and locations, many costumes and extras, etc.).
Still, even though the battles some might have hoped for never really transpired, there were some jaw-droppers, and those are thanks to BlueBolt, the lead VFX vendor for the series. We had the opportunity to speak with Lucy Ainsworth-Taylor, one of BlueBolt’s founders and the VFX producer for the first season of the show. We learn some surprising things along the way, delivered with a refreshing frankness about the realities of producing VFX for a television program on the scale of HBO’s.
My understanding is that BlueBolt is a pretty new company, but its founders have quite a lot of experience in VFX for film and television. What role has landing the lead vendor role on Game of Thrones played in the present and future of your company?
Will BlueBolt be the lead VFX vendor in the second season? If so, has any preparatory work already started given that cameras are due to roll in late July?
We’ve read that the prep time before shooting began was very short, and that there were a number of rewrites that led to unbudgeted VFX requirements. On top of that, it’s been noted that a TV production has a few extra layers of approvals compared to film. Have there been any talks about reducing such issues, whether by streamlining approvals or locking down scripts at an earlier stage?
2.5D has played a role in some of the visual effects. Could you discuss some examples of the use of the technique, and what its advantages are over fully 2D or 3D shots?
Most of our big environment builds were done as 2.5D projections, or a hybrid of 3D and 2.5D. A good example of this was Winterfell. We built a basic model and applied basic textures for the whole of the Winterfell castle. For each shot, the model was matchmoved into the plate and a basic lighting setup was done. This meant that the castle was always in the correct place and at the correct scale, with lighting to match the plate. This was rendered out as the basis of the castle, and then additional matte painting work was done to add detail, aging, staining, etc. That additional matte painting work was then re-projected back onto the model.
The problem with using pure 2D is that you restrict the camera move to being locked off or nodal, unless the CG you’re adding is so far in the distance that the lack of perspective shift goes unnoticed. Using 2.5D allows the camera to be freed up a bit. As long as you are roughly viewing the CG from one angle, it doesn’t matter if the camera moves and you get perspective shift that reveals new pieces of the CG. You wouldn’t want to use this technique with very extreme camera moves (especially on a highly detailed object), as you’d have to build so much detail into the model and use so many projection cameras that it wouldn’t be worth it.
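The perspective-shift point can be made concrete with a minimal pinhole-projection sketch (all cameras, depths, and numbers here are hypothetical illustrations, not values from the production): near geometry slides across the frame far more than distant geometry when the camera translates, which is why a flat 2D card only survives a locked-off or nodal move, while geometry with real depth (2.5D) holds up under modest translation.

```python
def project(point, cam_x, focal=1.0):
    """Screen-space x for a pinhole camera at (cam_x, 0, 0) looking down +z."""
    x, y, z = point
    return focal * (x - cam_x) / z

# Two pieces of "set": a near wall and a distant tower (hypothetical depths).
near_point = (2.0, 0.0, 5.0)    # 5 units from camera
far_point  = (2.0, 0.0, 100.0)  # 100 units from camera

# Screen positions as seen from the original projection camera at x = 0.
near_a = project(near_point, 0.0)
far_a  = project(far_point, 0.0)

# Translate the shot camera 1 unit sideways and reproject.
near_b = project(near_point, 1.0)
far_b  = project(far_point, 1.0)

# Parallax: near geometry shifts 20x more than the distant geometry here.
print(abs(near_a - near_b))  # 0.2
print(abs(far_a - far_b))    # 0.01
```

A flat card painted with both elements would move them by the same amount, betraying the trick; projecting the paint onto even rough depth geometry gives each element its correct parallax, until the move becomes extreme enough to reveal unpainted surfaces.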
The Red Keep of King’s Landing is spectacular, especially in the establishing shot in the first episode. How much of that was real world imagery and how much of that was purely a digital creation? Was the Red Keep itself a 3D build?
On average, how many effects shots have there been per episode, and how long does it usually take to get all those effects rendered? Is there any type of VFX shot that was particularly prevalent for the show?
This really varied per episode, with Episode One the heaviest. That was due to us picking up shots from the pilot, shot a year earlier, and trying to make them work with the footage from the main shoot. We also had a huge geography issue with the layout of Winterfell, tying together the King’s arrival, Bran’s climbing, Bran’s fall, etc. Winterfell as a set had only one tower, and nearly every shot required some sort of extension to it.
The most prevalent type of shot BlueBolt delivered was the environment shot – either extensions or full CG environments.
One criticism the VFX of the show has received—it’s received very little, we hasten to add!—is in the representation of Drogo’s khalasar, his band of horsemen. The number 40,000 is repeatedly stated… but every time the Dothraki are shown as a group, it looks like there may be just a couple of hundred, with only the second episode hinting at anything grander using silhouetted figures in the background. I believe that some crowd duplication has been done on the show—the crowd at the tourney at one point—so what’s the barrier to showing a glimpse of this vast mounted army? Just the scope of it proving infeasible within the time and/or budget allotted?
Given remarks from the producers, expansive, VFX-laden battle sequences are going to be pretty rare on the show, and understandably—they seem to be one of the most expensive things to render at high levels of detail (right up there with anything involving water, apparently). However, the second season will see more battles, including one that’s probably even larger than what’s seen in the first season. What are the particular difficulties associated with such large-scale, rendered action sequences?
The producers have hinted that the second season will likely be seeing CG direwolves, to convey their growing size and ferocity. How challenging will that be, to integrate digital animals into a live action television show?
Similarly, the end of this season has featured the hatching of Daenerys Targaryen’s dragon eggs, birthing her three dragons. We know that the author George R.R. Martin was heavily consulted in the design stage. Once that was done, what was the approach to creating this creature effect, and what were the challenges?
How did you pull off the final shot of the season? Was it simply a matter of choreographing the actress’s movements and then eyeballing placements, or were there physical props she could interact with that were then replaced or enhanced with VFX?