The 'A Song of Ice and Fire' Domain


Interview with VFX Producer Lucy Ainsworth-Taylor

The special effects of HBO’s Game of Thrones were something that really interested many fans before production started, because of the grand epic sweep of the setting and the fact that epic fantasy on television was a tall order. It certainly captured our imaginations as well, as we considered what HBO had been able to achieve in its other shows, as well as what other television programs and VFX vendors were able to create. The end result? Some of what fans imagined (and even what we tried to project) proved to be too ambitious for a show on such a tight schedule and with so many other budgetary needs to meet outside of visual effects (massive use of sets and locations, many costumes and extras, etc.).

Still, despite the fact that the battles some might have hoped for never really transpired, there were some jaw-droppers, and those are thanks to BlueBolt, the lead VFX vendor for the series. We had the opportunity to speak with Lucy Ainsworth-Taylor, one of BlueBolt’s founders and the VFX producer for the first season of the show. We learn some surprising things along the way, with a refreshing level of frankness regarding the reality of producing VFX for a television program such as HBO’s.


My understanding is that BlueBolt is a pretty new company, but its founders have quite a lot of experience in VFX for film and television. What role has landing the lead vendor role on Game of Thrones played in the present and future of your company?

BlueBolt was in its infancy when we were approached by HBO, specifically Mark Huffam the producer.  He knew our background and persuaded HBO to come and meet us.  It was a golden handshake for a new company and allowed us to bring in some exceptional talent.  The three founders of BlueBolt had been pivotal in building and creating one of the largest VFX houses in London, so had the experience to tackle this head on. Game of Thrones has certainly put us on the map as one of the best VFX boutique facilities in London.

Will BlueBolt be the lead VFX vendor in the second season? If so, has any preparatory work already started given that cameras are due to roll in late July?

Sadly not.  As BlueBolt is a small new VFX house, tackling CG creatures with fur, plus CG fire and water would be too much of a stretch. Although we have all done this many times as individuals at previous companies, BlueBolt’s pipeline and development would struggle to do this in the short amount of time needed for the second series.

We’ve read that the prep time before shooting began was very short, and that there were a number of rewrites that led to unbudgeted VFX requirements. On top of that, it’s been noted that a TV production has a few extra layers of approvals compared to film. Have there been any talks about reducing such issues, whether by streamlining approvals or locking down scripts at an earlier stage?

The prep time for us on Season One was six weeks, which was incredibly tight, but admittedly we only had one director on board and the scripts were not ready, so it just made the shoot even harder and faster for us to keep up with prepping and shooting the whole way through. In all fairness, though, it was the same for all the cast and crew.

2.5D has played a role in some of the visual effects. Could you discuss some examples of the use of the technique, and what its advantages are over fully 2D or 3D shots?

Most of our big environment builds were done as 2.5D projections, or a hybrid of 3D and 2.5D. A good example of this was Winterfell. We built a basic model and applied basic textures for the whole of the Winterfell castle. For each shot the model was matchmoved into the shot and a basic lighting setup was done. This meant that the castle was always in the correct place and at the correct scale, with lighting to match the plate. This was rendered out as the basis of the castle, then additional matte painting work was done to add detail, aging, staining, etc. This additional matte painting work was then re-projected back onto the model.

The problem with using pure 2D is that you restrict the camera move to being locked off or nodal, unless the CG you’re adding is so far in the distance you can get away with not noticing the lack of perspective shift. Using 2.5D allows the camera to be freed up a bit. As long as you are roughly viewing the CG from one angle it doesn’t matter if the camera moves and you get perspective shift to reveal new pieces of the CG. You wouldn’t want to use this technique with very extreme camera moves (especially on a highly detailed object) as you’d have to build so much detail into the model and use so many projection cameras that it wouldn’t be worth it.
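The perspective shift she describes is just parallax: features at different depths move by different amounts when the camera translates, which a flat 2D card cannot reproduce but projected geometry can. A toy pinhole-camera sketch (purely illustrative; the distances are invented, not from the production) shows the effect:

```python
def project(x, z, cam_x, focal=1.0):
    """Pinhole projection: screen-space x of a point (x, z) seen
    from a camera at horizontal position cam_x looking down +Z."""
    return focal * (x - cam_x) / z

# Two castle features at different depths (metres, hypothetical)
near_x, near_z = 2.0, 20.0   # a tower close to camera
far_x,  far_z  = 2.0, 80.0   # a wall far behind it

# Dolly the camera 1 m sideways and compare screen-space shifts
shift_near = project(near_x, near_z, 0.0) - project(near_x, near_z, 1.0)
shift_far  = project(far_x,  far_z,  0.0) - project(far_x,  far_z,  1.0)

# A flat 2D card would shift uniformly; real or 2.5D-projected
# geometry shows parallax -- the near feature moves four times as far.
print(shift_near, shift_far)  # 0.05 0.0125
```

With a locked-off or purely rotating (nodal) camera the translation term is zero and both shifts vanish, which is why pure 2D holds up only in those cases.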

The Red Keep of King’s Landing is spectacular, especially in the establishing shot in the first episode. How much of that was real world imagery and how much of that was purely a digital creation? Was the Red Keep itself a 3D build?

Most of this shot was created by BlueBolt.  The base plate was shot in Valletta, Malta, but we replaced practically everything. We added in the Irish hills in the background, changed a lot of the water and of course added the Red Keep. The original plan was to create the Red Keep as a 2D matte painting, but due to the number of different Red Keep shots and the uncertainty of orientation in many of them, BlueBolt took the cue to do a simple 3D build to allow us to move it around to suit the HBO executives’ requests.

On average, how many effects shots have there been per episode, and how long does it usually take to get all those effects rendered? Is there any type of VFX shot that was particularly prevalent for the show?

This really varied per episode, the heaviest being Episode One.  This was due to us picking up shots from the pilot shoot a year earlier and trying to make them work with the shots we had filmed during the main shoot.  We also had a huge geography issue with the layout of Winterfell, to tie in with the King’s arrival, Bran’s climbing, Bran’s fall, etc.  Winterfell as a set only had one tower and nearly every shot required some sort of set extension to it.

The most prevalent type of shot that BlueBolt delivered were environment shots – either extensions or full CG environments.

One criticism the VFX of the show has received—it’s received very little, we hasten to add!—is in the representation of Drogo’s khalasar, his band of horsemen. The number 40,000 is repeatedly stated… but every time the Dothraki are shown as a group, it looks like there may be just a couple of hundred, with only the second episode hinting at anything grander using silhouetted figures in the background. I believe that some crowd duplication has been done on the show—the crowd at the tourney at one point—so what’s the barrier to showing a glimpse of this vast mounted army? Just the scope of it proving infeasible within the time and/or budget allotted?

I am afraid I agree, but financially only a certain number of horses, riders and people on foot could be provided by production.  We then composited more of the same, doing repeat passes in every available place.  In order to portray the 40,000 described in the books, we would have had to go into the 3D realm and build CG horses and people (as is done on epic films such as Troy, Kingdom of Heaven, etc.), and sadly we did not have the budget or the time to achieve this.
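The repeat-pass idea she mentions is simple 2D compositing: the same filmed plate of riders is laid into the frame several times at different offsets, often mirrored so the repetition reads less obviously. A minimal sketch with a toy "crowd plate" (hypothetical data, not production code) looks like this:

```python
import numpy as np

def tile_crowd(plate, frame_shape, offsets):
    """Composite the same crowd plate into a frame at several
    offsets, mirroring alternate copies to disguise the repeat."""
    frame = np.zeros(frame_shape, dtype=plate.dtype)
    ph, pw = plate.shape
    for i, (y, x) in enumerate(offsets):
        copy = plate[:, ::-1] if i % 2 else plate  # mirror every other pass
        frame[y:y+ph, x:x+pw] = np.maximum(frame[y:y+ph, x:x+pw], copy)
    return frame

# A tiny 2x3 "plate" of rider silhouettes (1 = rider, 0 = empty)
plate = np.array([[1, 0, 1],
                  [0, 1, 0]])
frame = tile_crowd(plate, (4, 9), [(0, 0), (0, 3), (2, 6)])
print(int(frame.sum()))  # 9 riders on screen from 3 riders shot once
```

The limitation is exactly the one she gives: every copy shares the viewpoint of the original plate, so this only works where perspective and lighting allow it; beyond that, a true 3D crowd build is needed.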

Given remarks from the producers, expansive, VFX-laden battle sequences are going to be pretty rare on the show, and understandably—they seem to be one of the most expensive things to render at high levels of detail (right up there with anything involving water, apparently). However, the second season will see more battles, including one that’s probably even larger than what’s seen in the first season. What are the particular difficulties associated with such large-scale, rendered action sequences?

Time.  The turnaround time of this show from filming to broadcast is incredibly fast and it takes time and manpower to create these large shots.  The only way around this is to shoot these scenes first and get going on them, if HBO’s VFX budget on the next series can stretch to it.

The producers have hinted that the second season will likely be seeing CG direwolves, to convey their growing size and ferocity. How challenging will that be, to integrate digital animals into a live action television show?

CG creatures with fur are a whole different ball game.  Building, modelling, texturing and lighting with fur makes these shots complex and very time consuming.  As the direwolves grow larger than small ponies in the second season, this is the only way to do it.

Similarly, the end of this season has featured the hatching of Daenerys Targaryen’s dragon eggs, birthing her three dragons. We know that the author George R.R. Martin was heavily consulted in the design stage. Once that was done, what was the approach to creating this creature effect, and what were the challenges? 

Once the design was signed off, a maquette (a physical model) was made, and BlueBolt had this scanned and began building it.  As the three dragons are all the same size with different colouring and movement, that meant we only had to build one and then texture, light and rig the other two accordingly.  The challenge was time – the turnover of these episodes came in February and we had to have them all built beforehand and ready to move onto Daenerys to get the blocking signed off by the producers.  It was only a few shots but they are key and I think they look incredible.

How did you pull off the final shot of the season? Was it simply a matter of choreographing the actress’s movements and then eyeballing placements, or were there some sort of physical props she could interact with which were then replaced/enhanced with VFX?

The whole sequence had been pre-vized ahead of time so that everyone knew where the dragons would be and what they would be doing. The actress had tracking markers on her shoulder and leg where the dragons were going to cling onto her, to help us matchmove her in post. She was holding a small green object shaped to represent the dragon she would hold, so that she moved correctly, especially as she stood up.  It took a lot of roto animation and tracking to work them onto her body, and the skin interaction was all done in post.
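In the simplest terms, a tracked marker gives a per-frame screen position that a CG element can be pinned to. A toy 2D version of that idea (a hypothetical sketch with invented coordinates; a real matchmove solves a full 3D camera and body track, not a flat offset):

```python
# Screen-space (x, y) positions of the shoulder marker, one per frame
# (invented values for illustration).
tracked_marker = [(410, 230), (412, 228), (415, 227)]

# Where the CG dragon sits relative to the marker (also invented).
dragon_offset = (-30, -55)

# Pin the dragon to the actress by following the marker each frame.
dragon_positions = [(mx + dragon_offset[0], my + dragon_offset[1])
                    for mx, my in tracked_marker]
print(dragon_positions[0])  # (380, 175)
```

The roto animation and skin-contact work she mentions is what covers everything this simple pinning cannot: occlusion by her body, deformation where the claws press in, and so on.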