The Last of Us Part II impressions — Tracking Nora to a hospital

Sony has been dribbling out news about The Last of Us Part II, and today it has opened the door for reviewers to give their first impressions about one of the tense action scenes in the game.

Sony’s The Last of Us Part II has been seven years in the making since the first game debuted in 2013 and won numerous Game of the Year awards. The title debuts on June 19 for the PlayStation 4. With more than 100 million PS4s sold, Sony has a chance to sell tens of millions of copies of this game (as of 2018, The Last of Us had sold 17 million copies).

I played the original game and found the story deeply touching. It was about the teenage girl Ellie and gruff smuggler Joel — two survivors of the zombie apocalypse who spend their days just trying to survive. The graphic violence of the original was horrific, but more often than not it was perpetrated by Joel in the name of protecting Ellie, and later on it was Ellie protecting Joel. After 22 or so hours of playing it, I decided it was my favorite game of all time, and I interviewed the game’s creators about the grueling experience.

At the close of that interview, game co-director Neil Druckmann told me, “Now that we’re done and we’ve had some time to rest, the question is, is there another story to tell in this world? We’re trying to figure that out. We don’t want to do the Matrix Reloaded of video games. [laughs] Can you do the Godfather Part II of video games, though? That would be the test. If we come up with something that’s exciting on that level, we’ll do it. If we don’t, we’ll do something else.”

VB Transform 2020 Online – July 15-17. Join leading AI executives:
Register for the free livestream.

Naughty Dog did find its way with the sequel. In part two, Ellie and Joel have settled in the thriving human settlement of Jackson, Wyoming, five years after the events of the first game. But something tragic happens, and Ellie goes off in search of retribution and justice.

The scene I’m talking about — dubbed “Finding Nora” — takes place in Seattle, midway through Ellie’s second day there. I have played the full game under a review embargo, and this is my first chance to give my impressions of this particular scene, which Sony showed last week during a State of Play presentation. This solo mission is a fair representation of the gameplay and how it’s different from the original game.

Improvements from the original

Above: The broken city of Seattle in The Last of Us Part II.

Image Credit: Naughty Dog/Sony

As I saw the extended trailer for Part II this week and played the scene, I gained a new appreciation for how far the gameplay and graphics have come since the original. The story spans seasons and climates, including the snows of Jackson, Wyoming, and the lush landscapes of Seattle. The shadows, the lighting, and the attention to detail bring the cities to life in all kinds of weather. This makes it perhaps the prettiest game ever created for the PlayStation 4, but it also has the ugliest fighting.

You can’t interact with everything. You can open certain drawers. And the environment isn’t destructible, in the fashion of Battlefield games. But the look is right. The cities look as you would expect them to look after a pandemic. Jackson has snowflakes flying through the air, and overgrown Seattle has blades of grass — which sometimes look a little too regular in their formation — blowing in the wind.

Ellie can do more as she interacts with the environment and enemy combatants. She can swing on ropes, traverse vertical structures to avoid trouble, navigate boats, ride horses, climb ropes, jump over gaps, break glass, and crawl through the grass to sneak up on enemies for stealth kills. Her movements are fluid, and more than ever you’ll feel like you are interacting in a movie-like experience.

She faces tougher enemies such as dogs that can trace her footsteps, stealth warriors who can attack her with arrows, and large numbers of zombies. Ellie can sprint, dodge attacks, and time her counterattacks. She can use enemies as a shield, and she can get help from her friends. This makes combat far more diverse than in The Last of Us.

And as she could do in the first game, Ellie can pit enemies against each other, making zombies attack human enemies. You can invest in role-playing game skills, upgrade your weapons at workbenches, and scavenge resources so you can craft everything including medical kits and explosive arrows.

What this scene shows

Above: If you get knocked down in The Last of Us Part II, you can shoot from the floor.

Image Credit: Naughty Dog/Sony

The Last of Us stood out from other zombie games because the fighting was intimate and intricate. Each duel with a zombie or a human enemy was a life-or-death struggle. You barely had enough bullets to get through a section of the game. You had to get your headshots right or waste precious ammo. You were not a superhero. And if you ignored one enemy, it would blindside you from your flank. You had to consider whether to run or fight, and how to take on a force that, approached head-on, would easily kill you.

The section begins as Ellie leaves her own base in a theater in Seattle in search of a character named Nora at a hospital controlled by the Washington Liberation Front, one of the groups that oppose the FEDRA central government in life after the pandemic. The WLF will shoot trespassers on sight, and Ellie has them on edge as she takes down guards one by one.

In Finding Nora, Ellie treks through downtown Seattle to the hospital. But she can’t just walk down the streets, which are overgrown, destroyed by earthquakes, and beset with the Infected, as the zombies are called, and WLF patrols.


Above: Hand-to-hand combat in The Last of Us Part II.

Image Credit: Naughty Dog/Sony

Ellie can attach a silencer to her .45-caliber pistol, or use a bow from long distances, to quietly take out soldiers. Clickers, who have hard shells on their heads, take two headshots to bring down.

But the most efficient way to take down an enemy is silently, with just a knife. As Ellie sneaks up on human enemies, she grabs them with a hand over their mouth, calms them into stopping their struggle by saying “shut up,” and then brutally stabs them in the jugular.

If you are in the right position, you can execute these takedown moves simply by pressing the triangle button on the PlayStation controller, and then the square button. If you mess up and the target turns on you, the scene goes loud. The target will scream, and everyone within hearing distance will converge on Ellie.


Above: You can also fight at a long range in The Last of Us Part II.

Image Credit: Naughty Dog/Sony

Too often, that brings death. But if Ellie runs, she can dispatch enemies in melee combat with clubs that have spiked scissors attached, look for cover, and escape the enclosing circle of enemies. That happens quite a bit in this scene as Ellie makes her way through multi-story buildings and woods.

To make progress through the city, you have to take detours into almost every available building, looking for supplies, Infected, or WLF soldiers to take out. You often find last notes from long ago, next to skeletons on the floor, where people describe their last moments and wishes for their loved ones. That brings home the gravity of the pandemic.

You have plenty of puzzles to solve. You can use vertical thinking, like going to an upper floor to find a bridge across buildings or a window ledge where you can step out, throw a rope over a balcony rail, and swing to another platform. There, you can shatter glass and get to a room behind a closed door. You have to figure out how to get a dumpster outside of a locked garage so you can use it to vault over a fence. This is how you spend much of the time in the Nora mission. The action scenes are few and far between.

Above: Climbing in the ducts in The Last of Us Part II.

Image Credit: Naughty Dog/Sony

You can get lulled into complacency. When Ellie finds a workbench, you can upgrade a gun. But while you’re doing that, a WLF soldier can sneak up on you and try to take you out, triggering a gunfight with multiple soldiers. I pulled out a shotgun to drop each one with a single blast, and if I only wounded one, I finished them off with a spiked-club strike to the head.

When you find the Infected, you have to deal with multiple types. The Clickers are fast, but they can only “see” via echolocation. The Runners can see, but they aren’t as strong. The Stalkers are weak but surprisingly fast, and they spring out at you from the darkness. If you make too much noise taking out one, many will come running at you. As Ellie emerges from one of these fights, she says, “Fuck Seattle.”

On the way to the hospital, Ellie comes to a forest with tall trees and ferns covering the ground. She encounters a new type of human enemy, from a religious faction called “Seraphites,” or derisively called “Scars” by the WLF. They use stealth and rely on bows and melee weapons. Ellie has to fight inside a broken parking garage, hiding among the cars and the grass growing on the concrete. If you’re lucky, you can get through this section with knife work. But it requires a lot of patience and crawling around.

Cornering Nora

Above: The disturbing death of the woman with the PlayStation Vita in The Last of Us Part II.

Image Credit: Naughty Dog/Sony

When you finally swim your way into the hospital area, you come upon an unsuspecting WLF soldier. She’s taking a break, playing a game on a PlayStation Vita (perhaps one of the last surviving in the world), and she can’t hear Ellie sneak up on her. Ellie interrogates her and then loosens her grip. The woman turns and tries to stab Ellie, who blocks it and stabs the woman in the throat. It’s yet another disturbing death among many in this game. After that, I went loud and took out the guards in the building.

The doors Ellie tries are often locked, and she has to find a way around them. Once inside the hospital, she climbs up into the ducts to avoid the heavily guarded corridors. In the trailer, Ellie passes by a resource in the form of a roll of tape and keeps going. In the game, I would never do that, as these resources are precious for crafting. So I cleared each section and looked all over the place for resources, moving on only when I was done, which stretched out my gameplay sessions.

Finally, Ellie drops out of the ducts, corners Nora alone, and points a gun at her.

As Ellie finds Nora, she asks, “Do you remember me?” It’s as tense and intimate a moment as you’ll find throughout the game.

Facebook’s Digit is a low-cost tactile sensor for robotic hands

In a preprint technical paper published earlier this year, Facebook AI researchers propose Digit, a low-cost, compact, high-resolution tactile sensor geared toward the challenging task of robotic grasping. Digit is designed to be mountable on multi-fingered robot hands, and the coauthors claim it provides enhanced durability and a more repeatable manufacturing process compared with other sensors available on the market.

Despite decades of research, general-purpose in-hand manipulation remains an unsolved challenge in robotics. One of the contributing factors is the difficulty of sensing contact forces, which are critical to controlling interactions with an environment because they provide a natural and direct measure of physical contact. According to the researchers, Digit accomplishes this sensing with a small, modular form factor and an assembly process that reduces costs — an individual sensor costs an estimated $15 when manufactured in a batch of 1,000 — and potentially supports large-scale manufacturing.

Digit, which measures 20 millimeters in width, 27 millimeters in height, and 18 millimeters in depth and weighs around 20 grams, has a plastic body with a three-piece enclosure that’s conducive to both 3D printing and injection molding. A camera and gel are mounted to the body using “press fit” connections so that any one component can be swapped out, and the housing is replaceable to allow for different focal lengths. Additionally, the elastomer materials beneath the sensor’s contact surface can be replaced with a screw, enabling task-specific elastomers to be swapped in — for example, materials with hardness and opaqueness tuned to the required sensitivity and forces.


Above: The Digit sensor capturing the visual characteristics of various objects.

Image Credit: Facebook

Under the hood, Digit packs custom-designed electronics that control camera characteristics, illumination, and video capture while measuring only slightly larger than a human fingertip (7 square centimeters). Three RGB LEDs provide illumination over the elastomer gel surface, which was also custom-designed, using a three-stage silicone-and-acrylic manufacturing process that balances ruggedness and sensitivity. In fact, the researchers say that in tests, the Digit gel transmitted 767 lux (a measure of illumination intensity) versus two third-party gels’ 17 and 16 lux, respectively, even after 15 cycles in a machine designed to inflict abrasions.


During a separate experiment designed to measure Digit’s in-hand manipulation performance, the researchers tasked an Allegro robotic hand equipped with Digit sensors with holding a glass marble between the thumb and middle finger and moving it to certain goal locations. At the beginning of each trial, the marble was raised by a metallic stand mounted on a linear motor, and a Sawyer robotic arm executed a preprogrammed motion to pick up the marble. At this point, the Allegro hand — which was mounted to the Sawyer arm — had to learn to roll its fingers carefully over the marble by modeling slipping and deformation dynamics over the Digit surfaces under varying degrees of pressure.

To develop a dynamics model, the researchers collected data from 4,800 trials during which the fingers moved randomly over the marble for 10 seconds and the Digit cameras recorded video. Afterward, they trained a separate model to detect key points in the marble representing factors of variation in the input data, which they used to create a planner model that moves the marble toward a given target position without dropping it or pressing it too hard.
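
The learn-then-plan loop described above can be sketched in miniature. Everything below (the 2-D stand-in for the marble's state, the linear dynamics model, the random-candidate planner) is a toy illustration of the general approach, not Facebook's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the marble's state (a 2-D position) and finger actions.
# True (unknown to the learner) dynamics: next = state + 0.1 * action + noise.
def true_step(state, action):
    return state + 0.1 * action + rng.normal(0, 0.001, 2)

# 1) Collect random-interaction data, analogous to the random 10-second trials.
states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1, 2)
    s2 = true_step(s, a)
    states.append(s); actions.append(a); next_states.append(s2)
    s = s2

# 2) Fit a linear dynamics model: next_state ~ [state, action] @ W.
X = np.hstack([np.array(states), np.array(actions)])
Y = np.array(next_states)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# 3) Simple planner: among random candidate actions, pick the one whose
#    predicted next state lands closest to the goal, then execute it.
def plan_step(state, goal, n_candidates=200):
    cands = rng.uniform(-1, 1, (n_candidates, 2))
    preds = np.hstack([np.tile(state, (n_candidates, 1)), cands]) @ W
    return cands[np.argmin(np.linalg.norm(preds - goal, axis=1))]

goal = np.array([0.5, -0.3])
s = np.zeros(2)
for _ in range(100):
    s = true_step(s, plan_step(s, goal))

print(float(np.linalg.norm(s - goal)))  # final distance to goal; should be small
```

The real system replaces the toy linear model with one learned from Digit camera video, and the raw state with detected key points on the marble, but the structure (random data collection, model fitting, plan toward a target) is the same.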


Above: Facebook’s Digit sensor integrated with a robotic hand.

Image Credit: Facebook

The researchers report that in the course of 50 trials, the hand dropped the marble about 25% of the time. However, they attribute this to planning inaccuracies and noisy data as opposed to flaws in Digit’s design.

“Tactile sensing is an important component towards human-level manipulation skills for robots,” wrote the coauthors. “We believe that DIGIT is a step forward in the design of versatile tactile sensors that can be mass-produced and widely adopted in the robotic community towards reaching human-level manipulation skills.”

The manufacturing files for Digit’s plastic enclosure, gel, and electronics are on GitHub, as well as the Digit firmware binary for programming and a Python interface. In future work, the research team intends to miniaturize the sensor’s form factor and design sensors with curved, omnidirectional sensing fields.

Interestingly, Facebook’s Digit comes on the heels of two exploratory works in robotic grasping from MIT’s Computer Science and Artificial Intelligence Laboratory. One built on existing research that employs a cone-shaped origami-inspired structure designed to collapse in on objects, while the other gave a robotic gripper more nuanced, humanlike senses in the form of LEDs and two cameras.

Otto Motors raises $29 million to staff warehouses with autonomous mobile robots

Otto Motors, a company providing self-driving robot technology and services for research and industrial clients, this week announced it closed a $29 million financing round. Matthew Rendall, CEO of Otto parent company Clearpath Robotics, says the proceeds will enable the company to meet the needs of its customers both during and after the pandemic.

Worker shortages caused by the spread of the coronavirus have prompted some retail, fulfillment, and logistics companies to accelerate the rollout of mobile robots. For instance, Gap more than tripled the number of item-picking machines it uses to 106 in total, while Amazon says it’s relying more heavily on automation for product sorting. According to ABI Research, more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from just under 4,000 warehouses in 2018.

Otto is well-positioned to address the surging demand — it provides autonomous mobile robots for materials handling inside manufacturing and warehouse facilities, with clients including GE, Nestle, and Berry Global, among others. Perhaps predictably, the company says it’s seen increased interest from businesses responding to risks associated with the pandemic, including those in food, beverage, and medical device fabrication segments.


Above: The Otto 100 carrying a shelf.

Image Credit: Otto Motors

“The coronavirus pandemic has really put the spotlight on business continuity,” said Rendall in a statement. “Businesses that invested in automation are still up and running, and others are realizing they need to catch up. We’re seeing manufacturers that had five-year implementation plans are compressing those to one or two years.”


Otto spun off from Clearpath, a company founded in 2009 by University of Waterloo graduates who initially sought to develop a robot that could detect and remove land mines. (Clearpath continues to provide less productized services for research firms.) The Otto team built its first prototype in 2014 — a driverless vehicle for automating material movement — before shifting mostly to industrial applications, like transporting raw materials to production lines and moving parts between processes.

Today, Otto provides the Material Movement Platform, which comprises Autonomous Mobile Robots (AMRs), Fleet Manager, and Otto Care. Otto’s AMRs come in three configurations, from the Otto 100 (which has an integrated lift and can carry up to 220 pounds) to the Otto 1500 (which has lift and conveyor attachments and can carry over 3 tons) — all of which feature lidar sensors that scan, monitor, and interact with the environment. As for Otto Care, it’s a support offering that includes access to firmware upgrades and live technical support, optionally with annual hardware and software maintenance for AMR fleets.

Fleet Manager handles tasks like robot traffic control, job supervision, management, and facility integration, continuously mapping facilities to visualize where AMRs are. It lets managers customize the way robots move throughout a building by applying rules like setting speed limits or marking out heavy pedestrian areas, and it continuously processes data about the fleet to keep track of every robot’s status (including charge level, location, payload, vehicle capability, and team). Fleet Manager intelligently assigns jobs like material pickups and dropoffs, as well as battery charging, notifying managers about job times and throughput via Slack and other platforms. And it connects with existing systems through protocols like HTTP REST and WebSockets.
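
As a rough illustration of the kind of assignment logic a fleet manager performs, the sketch below routes low-battery robots to chargers and gives a pickup job to the nearest available robot. The robot names, fields, and thresholds are invented for the example; this is not Otto's actual API or algorithm:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    charge: float    # battery level, 0.0 to 1.0
    location: tuple  # (x, y) position in the facility

# Send low-battery robots to charge; give the nearest charged robot the pickup.
def assign(robots, pickup, low_battery=0.2):
    assignments = {}
    available = []
    for r in robots:
        if r.charge < low_battery:
            assignments[r.name] = "charge"
        else:
            available.append(r)
    if available:
        def sq_dist(r):
            return (r.location[0] - pickup[0]) ** 2 + (r.location[1] - pickup[1]) ** 2
        best = min(available, key=sq_dist)
        assignments[best.name] = "pickup"
    return assignments

fleet = [Robot("otto-100-a", 0.15, (0, 0)),
         Robot("otto-100-b", 0.80, (5, 5)),
         Robot("otto-1500-a", 0.90, (1, 1))]
print(assign(fleet, pickup=(0, 0)))
```

A production system layers traffic control, facility maps, and live telemetry on top, but at its core job assignment is exactly this sort of constrained matching of robot status against pending work.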


Above: An Otto 1500 robot adjacent to the Boston Dynamics-made Handle.

Image Credit: Otto Motors

Otto competes in the $3.1 billion intelligent machines market with Los Angeles-based robotics startup InVia, which leases automated robotics technologies to fulfillment centers; Gideon Brothers, a Croatia-based industrial startup backed by TransferWise cofounder Taavet Hinrikus; robotics systems company GreyOrange; and Berkshire Grey, which combines AI and robotics to automate multichannel fulfillment for retailers, ecommerce, and logistics enterprises. Fulfillment alone is a $9 billion industry — roughly 60,000 employees handle orders in the U.S., and companies like Apple manufacturing partner Foxconn have deployed tens of thousands of assistive robots in assembly plants overseas.

But Otto has cutting-edge partners in companies like Boston Dynamics, with which it collaborated to develop an automated box-picking and pallet-building solution. And business is booming, with over 70% of the company’s AMR installs in recent months heading to Fortune 500 customers and a growth rate over the last three years of between 70% and 100%. (According to Rendall, Otto has over 3,000 robots deployed worldwide.)

At GE Healthcare’s repair operations center near Milwaukee, which tests the functionality of medical equipment and manages warranty service programs, Otto self-driving vehicles are loading and delivering parts to workers for repair and dispatching materials to shipping. At Toyota Motor Manufacturing Mississippi, an Otto 1500 robot is handling ground tire delivery within the Corolla assembly plant. And recently, Otto deployed a fleet of 19 robots at a Berry Global Group plant in Kentucky to supply cases to and from automated production machines 24 hours a day.

“Our mission to ensure a safe and productive work environment, along with the challenges of persistent labor constraints, has led us to increase investments in creative automation solutions,” Berry Global director of corporate automation Scott Spaeth said in a statement. “The Otto vehicles address those challenges and deliver improved operations reliability, while enhancing the working environment for our employees.”

Otto’s latest fundraising round — a series C — was led by Kensington Private Equity Fund with participation from BMO Capital Partners, Export Development Canada (EDC), and previous investors iNovia Capital and RRE Ventures, bringing the company’s total raised to $83 million. At least a portion of it will be used to expand the company’s workforce from 260 employees across tech, product, sales, marketing support, and account management teams, Rendall says.


MIT CSAIL teams propose grippers with a humanlike sense of touch

In a pair of recently published technical papers, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) propose new applications of soft robotics — the subfield dealing with machines made from tissue-like materials — that aim to tackle the challenge of grasping objects of different shapes, weights, and sizes. One builds on an existing work that employs a cone-shaped origami-inspired structure designed to collapse in on objects, while the other gives a robotic gripper more nuanced, humanlike senses in the form of LEDs and two cameras.

Despite the promise of soft robotics technologies, they’re limited by their lack of tactile sense. Ideally, a gripper should be able to feel what it’s touching and sense the positions of its fingers, but most soft robots can’t. The MIT CSAIL teams’ approaches ostensibly fix that.

“We wish to enable seeing the world by feeling the world. Soft robot hands have sensorized skins that allow them to pick up a range of objects, from delicate, such as potato chips, to heavy, such as milk bottles,” said MIT professor and CSAIL director Daniela Rus in a statement.

Venus flytrap

Last year, scientists at MIT CSAIL and Harvard demonstrated a gripper design capable of lifting a wide range of household objects. The team’s hollow, cone-shaped device comprises three parts that together surround items as opposed to clutching them. In one experiment where the gripper was mounted on a robot to test its strength, it managed to lift and grasp objects that were 70% of its diameter and up to 120 times its weight without damaging them.


A new MIT CSAIL team thought there was room for improvement in the existing gripper design. To give it versatility and adaptability closer to that of a human hand, they added tactile sensors made from latex bladders (balloons) connected to pressure transducers. The sensors let the gripper pick up objects as delicate as potato chips while classifying them, enabling it to better understand what it’s grasping.


The silicone-adhered sensors experience internal pressure changes upon force or strain. One sensor sits on the outer circumference of the gripper to capture its changing diameter, while the other four are attached to the inside to measure contact forces. The team measured each of these changes and used them to train an object-detecting algorithm running on an Arduino Due.

In 10 experiments during which the sensors captured and averaged together 256 samples (at a rate of 20Hz), the algorithm classified some objects — including a bottle, an apple, a box, and a Pringles can — with 100% accuracy. Other objects it classified with between 80% and 90% accuracy, including another bottle, a scrubber, a can, and a bag of cookies. (One bottle was misidentified as a can, which had a similar profile, and a toothbrush was misclassified as a box.)
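
A toy version of this kind of pressure-signature classification, using a nearest-centroid rule over averaged five-channel readings. The signatures and classes below are invented for illustration and are not the team's actual data or algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical averaged pressure signatures (one value per bladder sensor)
# for three object classes; real signatures would come from training grasps.
centroids = {
    "bottle": np.array([0.9, 0.4, 0.4, 0.4, 0.4]),
    "apple":  np.array([0.6, 0.7, 0.7, 0.7, 0.7]),
    "box":    np.array([0.3, 0.9, 0.2, 0.9, 0.2]),
}

def classify(samples):
    """Average a batch of 5-channel pressure samples (mirroring the paper's
    256-sample averaging) and return the nearest-centroid label."""
    mean = np.mean(samples, axis=0)
    return min(centroids, key=lambda k: np.linalg.norm(centroids[k] - mean))

# Simulate 256 noisy samples of a bottle-like grasp.
samples = centroids["bottle"] + rng.normal(0, 0.05, (256, 5))
print(classify(samples))  # → "bottle"
```

Averaging many samples suppresses sensor noise before the distance comparison, which is why objects with similar pressure profiles (like the bottle and the can) are the main source of confusion.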

In separate experiments, the researchers tested the sensor-equipped grippers’ ability to grasp delicate objects and detect when those objects might be slipping. They observed that the success rate over the course of 100 trials varied depending on the rate of the slip, with up to 100% success when the slip rates were higher. And they report that, when tasked with picking up 20 randomly selected kettle chips, the gripper grasped 80% without damage.


In the second paper, a CSAIL team describes GelFlex, a gripper consisting of a soft, transparent silicone finger with one camera near the fingertip, a second camera near the middle, reflective ink on the front and side, and LED lights affixed to the back.

The cameras, which are equipped with fisheye lenses, capture the finger’s deformations in great detail, enabling AI models trained by the team to extract information like bending angles and the shape and size of objects being grabbed. These models and GelFlex’s design allow it to pick up various items such as a Rubik’s cube, a DVD case, or a block of aluminum. During experiments, the average positional error while gripping was less than 0.77 millimeters — better than that of a human finger — and the gripper successfully recognized various cylinders and boxes 77 out of 80 times.
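
For intuition about what "extracting bending angles" from tracked points looks like, the sketch below computes an angle from two keypoints along a finger. The coordinates are invented; in GelFlex the keypoints come from trained models reading the fisheye-camera images:

```python
import math

def bend_angle(base, tip):
    """Angle (in degrees) of the fingertip keypoint relative to the base
    keypoint's horizontal axis, a stand-in for the bending-angle estimate
    the trained models produce from camera imagery."""
    dx = tip[0] - base[0]
    dy = tip[1] - base[1]
    return math.degrees(math.atan2(dy, dx))

# A straight finger along the x-axis reads 0 degrees...
print(bend_angle((0.0, 0.0), (10.0, 0.0)))   # → 0.0
# ...and a finger curled upward reads 45 degrees.
print(bend_angle((0.0, 0.0), (10.0, 10.0)))  # → 45.0
```

The hard part, of course, is not the trigonometry but reliably locating the keypoints through the deforming silicone, which is what the cameras and learned models provide.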


Above: The proposed gripper holding a glass box.

Image Credit: MIT CSAIL

In the future, the team hopes to improve the proprioception (i.e., sense of self-movement) and tactile sensing algorithms, while utilizing vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending. They’re scheduled to present their research virtually at the 2020 International Conference on Robotics and Automation, alongside the other gripper team.

“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” lead author on the GelFlex paper Yu She said in a statement. “By constraining soft fingers with a flexible exoskeleton, and performing high resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.”