One approach that cuts across all departments at Walt Disney Imagineering (WDI) is that everything starts with a story, and technology is utilized to serve those tales. The constant advancement of technology in WDI’s various disciplines has opened up entire new worlds and their stories to generations of guests, old and young alike. Decades ago, one could only dream of flying the Millennium Falcon or watching astromech droids rolling through a crowd while essentially living inside a fictional universe. Here is an inside look at how some of the technologies being explored at WDI are powering new adventures at Disney parks around the globe.
Rolling with Jake
When visiting Disney parks, it’s not uncommon to see scores of people marveling at the sights and sounds of dreams brought to life. But for Star Wars fans, witnessing an autonomous droid roaming Disneyland in Anaheim, California, is an entirely new type of magic. Jake, a weathered orange and white robot, was tested by navigating the crowds and greeting people in front of Star Wars Launch Bay in Tomorrowland at Disneyland. While autonomous robots were not rolling around the parks when Star Wars: Galaxy’s Edge opened at Disneyland in May 2019, Jake’s advancement is the new hope forging the future for AI and autonomous robot experiences.
WDI was involved in the AI and autonomous robot space before Star Wars: Galaxy’s Edge was ever proposed. AI is a “pretty squishy term,” explains Michael Honeck, senior R&D Imagineer. It can range from the most basic definition of AI, simply a computer using input to make decisions, to the emerging machine-learning space of feeding a computer fuzzy data and having it produce a humanlike decision.
What has catapulted the fields forward is the current affordability of sensors that provide robust, high-fidelity information at the scale that Imagineers require, says Honeck. “Now we are in a place where sensors are going through another evolution,” he explains. “Things like lidar, for example, are beginning to be certified to international industrial standards. So we can be sure that we’re getting quality data and hardware that will perform in the field. It really extends the kinds of experiences we can bring to life. Character interactions like this could once only be achieved through puppetry or telepresence. Today, robotics can build on this new generation of hardware that enables computers to make creative and operational decisions on their own, and to do so reliably.”
At their core, robots like Jake are navigating, emoting, and interacting with guests. There is a physical structure that can move around, but there are myriad other components that execute intelligent tasks. “You’ve got a head that can express different emotions based on how it’s tilted,” Honeck says. “We went back to the basic animation principle of anticipation. If you have an animated character and you want it to look in a direction, you move the eyes, and then the head, and then the shoulders along this line of action. We do the same thing with this autonomous system. Its head moves to look to where it’s going to walk, just like a human would. That both makes it seem alive and intelligent, and it also gives humans a subconscious cue about how this thing is going to behave and how you might want to adjust your behavior relative to it. A good example is how humans interact with their pets. Those relationships are going to tell you quite a bit about how humans are going to successfully communicate with an autonomous system.”
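The staggered look-then-move sequencing Honeck describes can be sketched as a simple joint scheduler. This is a minimal illustration of the anticipation principle, not WDI's control system; the joint names and delay values are invented.

```python
# Illustrative sketch of the "anticipation" principle: stagger the joints
# so the eyes lead, then the head, then the shoulders follow.
# Joint names and timing values are hypothetical.

def anticipation_schedule(target_angle_deg, lead_s=0.15):
    """Return (joint, start_time_s, angle_deg) commands, eyes first."""
    joints = ["eyes", "head", "shoulders"]  # smaller parts lead the motion
    return [(joint, i * lead_s, target_angle_deg)
            for i, joint in enumerate(joints)]

schedule = anticipation_schedule(30.0)
# The eyes start immediately; each larger body part follows a beat later,
# giving onlookers a subconscious cue about where the robot will move next.
for joint, start, angle in schedule:
    print(f"t={start:.2f}s: rotate {joint} to {angle:.0f} deg")
```

The same staggering could drive any look-before-move behavior, such as turning the head toward the planned path before the wheels follow.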
Personality and emotion are essential to such a project, but the robots must also be robust, maintainable, and able to survive interactions with throngs of people who are treating them as living and “breathing” things. As a result, WDI leverages the knowledge across all of its disciplines. Since a droid has wheels and suspension, the ride group is consulted. The head is essentially an Audio-Animatronics figure, so show mechanical engineers and show animators are called upon to discuss best practices. Of course, an autonomous robot must look like an authentic character, so model shop and shell technicians provide an understanding of how to take requirements (such as sensor clearances), meet all of WDI’s safety standards, and wrap that in a design language that is authentic.
While droids would be asked to weave their way through heavy crowds on nonlinear pathways, powering an autonomous robot is less a technical challenge than a matter of addressing safety and regulatory issues. Batteries with appropriately safe chemistry and long life are available, although fitting them into the limited space of a robot can be a bit of a challenge, Honeck admits. In his day-to-day experiences, Honeck says the team spends as much time on regulatory challenges as on the technical and creative aspects of robot building. Imagineers are out on the bleeding edge of what is available as far as sensors, batteries, and components go, and many of the companies that produce such evolved technologies just haven’t walked down the regulatory pathways that WDI requires.
“These are very much early days,” Honeck explains. “We’re thrilled that our guests connected so strongly with Jake. But when we look at the sophistication of other robotic systems we have in development and how Jake fits into our extended creative vision, he’s kind of like an amoeba. He’s the very beginning of what’s going to be possible here. The learning really begins once these experiences enter daily operation. Over the long term, what do guests expect of this thing? What are the ways that we can push to meet and exceed those expectations? Once you’re in that operational reality, the people who are experts in operating, maintaining, and sustaining are in the driver’s seat. And with their help, the technology goes places that we never even thought to consider originally because we’re so focused on getting that initial vision realized, out the door, and into operation.”
A living land not so far, far away
Piloting the iconic Millennium Falcon has guests in a frenzy at Disney’s new Star Wars: Galaxy’s Edge in Disneyland and Walt Disney World in Orlando, Florida. (The land’s second attraction, Star Wars: Rise of the Resistance, is scheduled to open on 5 December 2019 in Walt Disney World and 17 January 2020 in Disneyland.) But the lynchpin tying it all together is Black Spire Outpost, the locale in which the Star Wars amusements reside. Black Spire Outpost is much more than rock and steel and rebels and rogues—the land itself is alive.
“We are treating the land as the third attraction of Star Wars: Galaxy’s Edge,” says Casey Ging, senior concept designer. “From the beginning, we’ve said there are going to be things that are familiar to you and things that are not so familiar, and you should be able to organically discover what those things that are not so familiar to you actually are. Every piece of content in this land, every droid and every mark on a wall, has a story behind it. Why is that droid broken? Where did those blast marks on the wall come from? Who are the characters that own these shops? What are their relationships to one another? All that stuff is discoverable organically through this experience, and it’s something that hasn’t been traditionally available through themed entertainment.”
With the introduction of the Play Disney Parks app, Disney is looking to take that type of content, customize it, and deliver stories through guests’ smartphones. The app started with queue-based games and music integration throughout the parks. But Galaxy’s Edge goes beyond the queue to allow guests to build a reputation and let their story unfold throughout the land, if they choose to opt in. For example, in Millennium Falcon: Smugglers Run, your participation and interaction within the attraction not only impacts the story as you play through it but also the other stories that you will experience throughout the land. If you do really well piloting the Falcon and depart with a lot of fanfare, credits, and success, that result will follow you throughout the land. If you perform poorly and, say, make your way over to the Cantina, people may have some choice words for you for banging up the ship.
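The reputation mechanic described above amounts to shared state that one experience writes and later experiences read. The sketch below illustrates that idea only; the class, fields, outcome names, and thresholds are all invented, and Disney's actual Play Disney Parks backend is not public.

```python
# Hypothetical sketch of carrying a guest's story through the land:
# one attraction records an outcome, a later one reacts to it.
# All names, fields, and thresholds here are invented for illustration.

class GuestStory:
    def __init__(self):
        self.credits = 0
        self.ship_damage = 0  # how badly the guest banged up the Falcon

    def record_smugglers_run(self, credits_earned, damage):
        self.credits += credits_earned
        self.ship_damage += damage

    def cantina_greeting(self):
        # A later experience reads the same state and reacts accordingly.
        if self.ship_damage > 5:
            return "Hey! You're the one who banged up the ship!"
        return "Welcome back, ace pilot."

guest = GuestStory()
guest.record_smugglers_run(credits_earned=250, damage=8)
print(guest.cantina_greeting())
```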
Creating this new “living land” from the ground up allowed Imagineers to create a hardware ecosystem throughout the area, utilizing various technologies, including innovative use of Bluetooth, that essentially allow the land to be a platform to sense different activities that you have done throughout it. “One of the challenges of creating these systems of technology that talk and communicate with each other is creating a team,” states Producer Rachel Sherbill. “It took a huge number of people with incredible expertise in everything from networking solutions to electrical engineering to scheduling to building construction, because all of these things need to be maintainable for all of the years that this land will be around. It requires input from architects, technology designers, engineers, and software developers, who have been a huge part of figuring this thing out. These people and the systems that they are creating all need to talk to each other and have a seamlessness to them.”
Since the content is driven through individuals’ smartphones rather than singular installations throughout the outpost, the land can host multiple people experiencing the same thing at the same time, or different things at the same time in different ways. Leveraging smartphones helps bear the load of the thousands of people who will explore Galaxy’s Edge. The project included the platform team and software developers to ensure that the different systems put in place can handle the expected load, so everyone who wants to participate in the interaction can do so.
“Traditionally, we build these attractions to last a very long time,” Ging adds. “The difference here is that we are building a platform that needs to be flexible to match the velocity of technology. Technology changes, particularly on cell phones, a lot faster than it ever has. So, a big challenge is building something that is flexible enough to still be relevant—not just six months but 5–10 years down the line when technology is almost unpredictable at that stage. We work with some of the best technologists around, just listening to their vision and understanding their view of the future and how technology is going to change over time. That impacts the way that we think about these experiences.”
For the first time since the original film was released in theaters in 1977, fans now have the opportunity to live their own Star Wars adventure as they explore the Black Spire Outpost. The web of living land technology ties together an ecosystem designed to allow guests to more deeply dive into the story details of the Star Wars universe.
There is an unprecedented number of things with which to interact, including activities such as hacking, scanning, and tuning into transmissions. There is also a series of missions available for guests who want to completely dig into their own personalized narrative and get involved with deeper storytelling. There are digital collectibles as well: guests survey the land, look closely, and discover items. For example, Ging explains, there are many crates located all through the area. Guests can scan them, discover what’s inside, learn where they are supposed to go, and collect some of the digital items. Each task furthers the in-land story.
On this living, breathing, real-life planet, all of the droids, items, and characters found by guests are also real, including the ones you get to build yourself. At the Droid Depot, visitors can assemble customized companions who utilize technology to react with the land around them as if they are truly living droids.
“You’re not just experiencing Star Wars on an attraction, you are experiencing it walking down the street and discovering what’s in that crate or looking at a drawing or a sketch on a wall,” says Sherbill. “Through leveraging technology, you can actually go up to a droid and, using your phone, receive content that helps you understand that droid’s history. Maybe it has important information that it needs to get to the Resistance. All of these sorts of details expand as you are walking around throughout your day. One of the things about this type of content existing digitally is that we can update, change things, and add more stories. It’s a key way to keep things fresh, flexible, and timeless.”
We are Vyloo
There is no better example of how WDI projects can grow than the Vyloo, cute and fuzzy birdlike creatures who were developed with the goal of building compelling interactions with simple animatronic characters. What began with this targeted idea ultimately led to a home in the Disneyland attraction Guardians of the Galaxy–Mission: Breakout! and a brief cameo in Marvel Studios’ Guardians of the Galaxy Vol. 2.
The inspiration for the Vyloo originated with a look back at the history of Walt Disney animated features. One of the hallmarks of Disney films is the variety of woodland creatures that interact with human characters; they are smart and charming but don’t have the luxury of human speech to convey their thoughts and feelings. Consequently, the dream of filling Disney parks with engaging, interactive, autonomous simple creatures was born.
“Essentially, we had to start out by making a choice: Are we going to build a familiar Disney character, or are we going to build a unique creation?” recounts Leslie Evans, R&D Imagineer manager. “In this case, we decided to build a unique creation because we wanted to explore how we can approach our animatronic characters as actors. We want to be able to teach each of our animatronic characters a unique personality—you’re shy, and you’re outgoing, and you’re really sleepy—and then let them use their programmed thinking patterns to interact with the world. Because we knew we wanted to explore a breadth of personalities, we recognized it was more important to be flexible than to represent a specific character. So we decided to invent our own unique creations.”
Development of the Vyloo started with puppets. Evans, who worked on the project with Executive R&D Imagineer Alexis Wieland, created a simple rod puppet from spare parts and materials from other WDI projects and used it to explore personality types. How would an introvert say hello to someone? If you are shy and see a new face, how do you move? Imagineers filmed and recorded these experimental sessions and analyzed what each motion meant and how they could distill those motions down into something simple, yet autonomous.
The result was a creature with no human in the loop. The Vyloo were programmed, told their personalities, and left to run all day, every day, making their own choices. There is no behind-the-scenes puppeteering; it’s AI, and the characters go about each day doing their own thing. There is some universal design in the Vyloo. Evans learned which motions were really compelling and helped convey the emotion of the character. Vyloo have eight functions, and through playing with them, she realized that body squash and stretch were important. However, any universality should not encroach on the uniqueness of each character.
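One simple way to picture "told their personalities and left to run" is action selection weighted by fixed personality traits. The traits, actions, and weights below are invented for illustration; the Vyloo's actual behavior system is not described in detail.

```python
import random

# Illustrative sketch of personality-weighted autonomous behavior, in the
# spirit of the Vyloo description. Traits, actions, and weights are invented.

def choose_action(personality, rng):
    """Pick the next motion, biased by fixed personality traits."""
    actions = {
        "hide":    personality["shy"],
        "chirp":   personality["outgoing"],
        "stretch": personality["sleepy"],  # squash and stretch read strongly
    }
    names = list(actions)
    weights = [actions[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

shy_vyloo = {"shy": 0.8, "outgoing": 0.1, "sleepy": 0.1}
rng = random.Random(0)
# Left running unattended, the same traits drive different
# moment-to-moment choices, so no two days look identical.
day = [choose_action(shy_vyloo, rng) for _ in range(10)]
print(day)
```

Giving each figure different trait values (shy, outgoing, sleepy) yields visibly different characters from the same mechanism, which matches the flexibility the team says it prioritized.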
“They are all a little bit different, and each has its own agenda,” Evans explains. “We believe passionately that play-testing is important. You build a thing, but you have to get people in front of it to really understand how people perceive it, and then you can make tweaks based on that. We had done some early testing at Imagineering, but when we installed the Vyloo in the park and shared them with guests, we learned some new things. What was important in our technology was retaining flexibility in the system, so as we learned, we could quickly make changes. When we first turned them all on, it was like they were real animals—they were very overstimulated. So we had to dial things down. We made those changes, got them back running again, and watched how guests interacted with them.”
The design of the Vyloo culminated in a demonstration of the three original characters to introduce them to people across the Walt Disney Company. The visibility led to a connection with the team working on the Guardians of the Galaxy–Mission: Breakout! attraction, and they thought it would be an interesting addition to the ride’s queue, which is a display of the menagerie of Taneleer Tivan, the Collector. From there, they caught the attention of Director James Gunn and the crew working on Guardians of the Galaxy Vol. 2, resulting in a brief cameo in the film. “It was an exciting way for various business units within the Disney Company to work together to tell the story and bring these creatures to life across a couple of different mediums,” Evans says.
Working in an advanced development group, Evans notes, often results in physical projects and ideas that do not leave the building in the form in which they were realized. Pieces of technology may move to other projects, or the experience will inspire other teams to incorporate it into their work. But the diversity of tasks—including app-based experiences, drones, character experiences, and robotics—allows Imagineers to tackle these new problems that all have a backbone of trying to use new technology to bring experiences to life.
“I sometimes wish I had known earlier in my schooling just how important technology and software would be to so many things that we tend to take for granted,” she admits. “People are using game engines now to prototype things that we couldn’t have imagined a decade ago. The applications for technology in the entertainment industry are only growing, and I think it’s a super-exciting space, especially for people who want to live halfway between the engineering and design worlds.”
Stuntronics takes flight
When Tony Stark rockets across the Marvel Universe, the visual wonder is a result of computer-generated graphics. The vast collection of Disney intellectual property is ripe with characters that take to the sky, but current theme park animatronics tend to be rooted to the ground. Stuntronics—a combination of the terms stunt double and animatronics—aims to uproot the status quo and launch robotics into the heavens. WDI is working to bring to life a realistic robotic figure with the ability to execute complex, acrobatic stunts. Imagineers designed a 90-lb Stuntronics figure that makes its own real-time decisions, such as when to tuck its knees or maneuver its arms, while flying through the air.
The project began with the BRICK (Binary Robotic Inertially Controlled bricK), says Tony Dohi, principal R&D Imagineer. WDI Associate Research Scientist Morgan Pope started off with a rectangular robot that weighed approximately 5 lb, and it included weights, an inertial measurement unit (IMU) on board, a laser distance finder, and a simple microprocessor. Pope took it up to a high ceiling, spun it on a threaded rod, and let it go. The BRICK whirled at various rates, dropped, and shot its weights out at the right time so that they passed through an opening that was approximately 3/4 in larger than the actual size of the robot. This early test would lead to a much more humanlike evolution.
“We started off with something quite simple, and it was a robot that you could barely even call a robot—it had no motors or CPU on board or sensors, but it was tough and durable and it wasn’t tethered,” explains Dohi. “We used the infrastructure around it to have all the smarts and the robotic controls and the show control system. We launched this entirely passive figure across the room, and it landed 13 ft down on a long table. It would be picked up by a magnetic base, and it would skid. Because it was a very cleverly designed automaton, if you will, with dampened springs in it and latches, it would come up and do a very simple animation. We soon realized that we could spend all of this design work trying to come up with this very clever but passive thing, or we could switch gears and start to actually put the smarts back into the robot, keeping it untethered and making it as autonomous as possible.”
The next stage evolved into the Z-shaped “Stickman,” which was a body with three sections and two flexible joints and also utilized an IMU and laser distance finders so that calculations could be conducted on board the robot. The goal was to move beyond just a timing-based series of moves to actually control the variables that you get from a robot (for example, swinging from a pendulum, having it release, and then executing an action). Imagineers wanted it to land in a very specific orientation but have control of the performance as to when it would tuck and untuck to change its rotational velocity.
“There’s a timed sequence of events, and then we’ll let the IMU feed it data based on its spin rate and its height,” Dohi indicates. “Because we can always know where it’s going to land, it’s just doing projectile parabolic motion. Knowing those things and the parameters we have to work with, we let the IMU interrupt when we know it needs to have a final position in a certain orientation. We now have an understanding of the behavior of how this thing needs to move through the air because we’ve studied acrobats. We have one on our team who gives us his intuition about when you need to move. If we assume a robot that is more anthropomorphic, we are looking at moving things asymmetrically at the right time and inducing a twist depending on the configuration and which axis is the most stable as this thing goes through the air.”
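Dohi's description boils down to two pieces of arithmetic: projectile motion fixes the time aloft the moment the figure releases, and the controller then chooses how long to stay tucked so the total rotation matches the target landing orientation. The sketch below works that out for a point mass with illustrative numbers; it is a back-of-the-envelope model, not WDI's controller.

```python
import math

# Back-of-the-envelope sketch of the Stuntronics timing problem:
# after release, flight is pure projectile motion, so time aloft is fixed;
# the tuck window is chosen so total rotation hits the target orientation.
# All launch and spin numbers are illustrative.

G = 9.81  # m/s^2

def flight_time(v_up, height_drop):
    """Seconds aloft for vertical launch speed v_up (m/s), landing
    height_drop (m) below the release point."""
    # Positive root of 0 = -0.5*G*t^2 + v_up*t + height_drop.
    return (v_up + math.sqrt(v_up**2 + 2 * G * height_drop)) / G

def tuck_window(total_turns, w_open, w_tucked, t_air):
    """Seconds to spend tucked so rotation totals total_turns revolutions,
    given spin rates (rev/s) while open vs. tucked."""
    # total_turns = w_open * (t_air - t_tuck) + w_tucked * t_tuck
    return (total_turns - w_open * t_air) / (w_tucked - w_open)

t_air = flight_time(v_up=14.0, height_drop=2.0)  # roughly 3 s aloft
t_tuck = tuck_window(total_turns=2.5, w_open=0.5, w_tucked=2.0, t_air=t_air)
print(f"airborne {t_air:.2f} s, tuck for {t_tuck:.2f} s of it")
```

Tucking raises the spin rate (conservation of angular momentum), so sliding the tuck window earlier or later within the fixed flight time is exactly the control knob the IMU-driven interrupt exploits.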
Knowing the center of mass for every part of the entire system is essential to the robot’s execution upon launch, Dohi adds. Therefore, the robot is disassembled, the arms and legs are weighed, it’s put together as a system, and the center of gravity is determined. Imagineers also perform simulation work to see how it should perform and get a rough guideline of when the timing cues will occur. They then see if the robot is matching the simulation; the more precise the measurements on the robot pieces, the more accurate the simulation. “We’ve had really close correlation between the two,” Dohi says, “which is nice because sometimes simulations don’t get you anywhere close.”
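The weigh-the-parts procedure Dohi describes is a mass-weighted average. The sketch below shows that bookkeeping along a single body axis; the part names, masses, and positions are made up for illustration.

```python
# Sketch of the composite center-of-mass bookkeeping: weigh each part,
# note where it sits, and take the mass-weighted average.
# Part names, masses, and positions are invented for illustration.

def center_of_mass(parts):
    """parts: list of (mass_kg, position_m) along the body axis."""
    total_mass = sum(m for m, _ in parts)
    return sum(m * x for m, x in parts) / total_mass

figure = [
    (4.0, 1.60),   # head
    (20.0, 1.10),  # torso
    (3.0, 1.05),   # left arm
    (3.0, 1.05),   # right arm
    (5.5, 0.45),   # left leg
    (5.5, 0.45),   # right leg
]
com = center_of_mass(figure)
print(f"center of mass at {com:.3f} m along the body axis")
```

The more precisely each part is weighed and located, the closer the simulated trajectory tracks the real one, which is the correlation Dohi's team relies on.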
From Stickman, Imagineers started pushing the robot into the shape of a human and began to work in multiple axes. Dohi admits that the human-shaped robot was “a pretty rough looking thing.” It was all pneumatic, approximately 150 lb, and it did not have body shells on it. The current version is a much more polished robot. It was a two-year process to progress from the BRICK to the more-advanced robot iteration.
The giant leaps forward are still just scratching the surface of what WDI will be able to do with robots in roles that are too dangerous for human performers. “You would never put a stuntman in the parks to do show after show, six to 12 times a day, where they are being thrown 65 ft in the air, which is what our robot is doing,” Dohi explains. “There’s a 15-G acceleration on this thing that would make a human in a vertical orientation black out. There’s also a hit into the net that is about 12 Gs. If you land the wrong way, as a human, you’ve broken your spine. But if you look at what we have currently, it’s about 4.5 s in the air. It’s pretty neat to watch, but our shows are much longer than that, and you have to fill that time with other aspects of a performance. So we are not looking to replace human performers; we’re looking to enhance entire performances.”
A-1000-times more animated
Disney parks have been renowned for their Audio-Animatronics figures that have brought characters to life, including Captain Jack Sparrow, the Seven Dwarfs, and U.S. presidents. With A-1000 advanced robotics, WDI is producing the next generation of the A-100 Audio-Animatronics figures that were produced in the 1980s.
Improved movement and functionality resulted from replacing traditional hydraulic systems with electric motors. Most of the figures produced by WDI have been hydraulic, and, while they have been entertaining audiences for years, their performance degrades over time. Moving to electric motors presented its own set of unique challenges. For every motor, electric figures require a power and encoder cable, and they all need to route all the way down the figure and through the foot and the base frame. There is a large range of motion, so cable stress points are significant.
According to Kathryn Yancey, show mechanical engineer, Disney constructed its own hydraulic actuators and removed many of the seals and exterior shielding, so they were nice and small. Those actuators were a perfect fit for the organic shape of the human body. For example, a wrist is long and skinny, as is a hydraulic actuator. Since there are three functions in a wrist, Imagineers packaged three actuators into the wrist to make it move. The current electric motors, by contrast, are rectangular and have harsh corners, which presents packaging issues.
“With hydraulic, you can get a lot of punch from a much smaller actuator,” Yancey says. “Now we’re having to do a lot of rigid body dynamics and analysis in order to make sure that our motors and the bearings within our motors can have a long life because we are putting so much on them. They’re going through a lot of stress, especially with our dynamic and speed requirements. With the torque we need at the speed that we need, we are definitely asking a lot more from our electric motors.”
Both Yancey and fellow show mechanical engineer Victoria Thomas worked on one of the first attractions that was all electric—Frozen Ever After at the Norway Pavilion of Epcot’s World Showcase. At the same time, Imagineers were working on the Shaman of Songs figure for Na’vi River Journey in Pandora–The World of Avatar at Disney’s Animal Kingdom.
“Both projects were breaking all kinds of new ground,” Thomas elaborates. “The Na’vi Shaman was aiming for top of the line—every bell and whistle—and Frozen was aiming for as much functionality as possible with more common-grade materials so you could afford more stuff. Between those two projects, we were able to learn a lot about what works well with electric figures, and we took all of that information and incorporated it. Both projects were pretty expensive, so we asked, how can we develop something that’s a little more generic, a little more off the shelf? If a project team comes in and says, I need some human figures—I’m not exactly sure which humans—I just need them to be a reasonable human size and I’d like to just buy them. I don’t want to worry about having to spend years designing these complicated things; I just want a tall guy and short lady. I want them to stand and to give them a couple minutes of performance. That’s kind of where our project kicked off with the A-1000.”
The A-1000 Audio-Animatronics project focuses on what Thomas terms “human humans,” those with proportions falling between a 5-ft, 5-in female and a 6-ft, 2-in male. Examples of some rides that feature figures with these types of dimensions are Pirates of the Caribbean and some of the new Star Wars: Galaxy’s Edge attractions. The idea is to cut down the time it takes to design new characters while still providing high functionality.
“We wanted to take that idea of having a kit of parts and being able to create a new character from those,” Yancey explains. “We are creating subassemblies where you can defunction your figure. A function is what we call an articulation point, like an elbow. We sum up the level of complexity of a figure off of how many functions it has. You can define a price based off of function. With the subassembly, we’ve broken out these key functions that you can pair with a head or pair with just a torso, and it’s not necessarily like one and done. You can have the option, like a menu—we are calling it ‘configurable in CAD.’ You have your CAD designer or engineer who can use this library, and you will have some design time to create your new character based off what creative gives you, but it’s significantly less time spent in design.”
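The kit-of-parts idea, where a figure's complexity (and cost) is summed from the function counts of its subassemblies, can be sketched as a small catalog lookup. The subassembly names and function counts below are invented for illustration, apart from the 10-function head and three-function wrist mentioned in the article.

```python
# Hypothetical sketch of the "configurable in CAD" kit-of-parts idea:
# a figure is assembled from cataloged subassemblies, and its complexity
# is the sum of their function (articulation point) counts.
# Catalog entries and counts are invented, except the 10-function head
# and 3-function wrist mentioned in the article.

SUBASSEMBLIES = {           # subassembly -> number of functions
    "head_standard": 10,    # the simple 10-function head
    "torso": 6,
    "arm": 4,
    "wrist": 3,             # three wrist functions
}

def build_figure(*names):
    """Return (total function count, parts list) for a configured figure."""
    return sum(SUBASSEMBLIES[n] for n in names), list(names)

functions, parts = build_figure("head_standard", "torso", "arm", "arm",
                                "wrist", "wrist")
print(f"{functions} functions from {len(parts)} subassemblies")
```

Since the article notes that price can be defined per function, the same total doubles as a first-pass cost estimate for a proposed character.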
The characters Hondo Ohnaka and Kylo Ren in Galaxy’s Edge are examples of first-article A-1000 figures. Both are the standard 6-ft, 2-in male and have the same assemblies, along with slight variations for their specific characters. But in terms of creating the figures, Imagineers are able to design once and produce nine times. Since there are often great variations between characters’ faces, heads are their own stories. As a result, the A-1000 program came up with a simple head that contains 10 functions, but the functions can be adjusted depending on a character’s unique needs.
“You recognize that people will want to adjust the range, so we’re providing as much range as we can for every function,” Thomas adds. “There’s a lot of changes that trickle down like a domino effect: How much longer do my cables need to be? What do the shells that cover the mechanism need to be shaped like? Do they need different clearance cuts? What kind of costume adjustments need to be made? It ends up affecting a lot of things. While we have tried to provide for every scenario, there’s always going to be more that come up.”
Dealing with the adjustments and challenges of working on the A-1000 project is a reward in itself, Thomas says. “One of the biggest reasons that I wanted to work here, as opposed to the aerospace or automotive industries, was because if I started as an intern in those industries, I would be doing double checks on someone else’s work, paperwork, or the mundane for years until I proved myself competent enough to be able to handle minor design work. As an Imagineer, every time I walk out into the shop and see the Hondo figure, my jaw drops even though I know everything that’s going inside of the figure.”
About the author
Craig Causer (firstname.lastname@example.org) is the managing editor of IEEE Potentials.