Improving trust in autonomous technology

The combined power of AI and robotics is revolutionizing mobility and manufacturing. Automated vehicles, airplanes, people movers, and warehouse robots are gaining range, flexibility, situational awareness, and intelligence, while better technology, a hunger for greater productivity and efficiency, and the pressures of Covid-19 lockdowns have fueled investment in autonomous systems. In 2020 and 2021, market debuts for self-driving vehicles alone boasted a collective initial valuation of over $50 billion.

But the sector also faces significant growing pains. Many companies are not yet profitable, and their timelines for reaching profitability keep slipping further into the future. A 2022 J.D. Power study found low consumer confidence in fully automated vehicles, with public readiness for the technology actually decreasing from 2021. Regulators are rightly sharpening their focus on the safety and security of autonomous technologies. These combined challenges could make investors more cautious about backing the sector, especially in a downturn, when capital is more expensive.

Trust and assurance—from consumers, the public, and governments—will be critical issues for the AI and autonomous technology space in the year ahead. Yet, earning that trust will require fundamental innovations in the way autonomous systems are tested and evaluated, according to Shawn Kimmel, EY-Parthenon Quantitative Strategies and Solutions executive director at Ernst & Young LLP. Thankfully, the industry now has access to innovative techniques and emerging methods that promise to transform the field.

The new autonomy environment

Automation has historically been pitched as a replacement for “dull, dirty, and dangerous” jobs, and that continues to be the case, whether in underground mines, in offshore infrastructure maintenance, or, prompted by the pandemic, in medical facilities. Removing humans from harm’s way in sectors as essential and varied as energy, commodities, and healthcare remains a worthy goal.

But self-directed technologies are now going beyond those applications, finding ways to improve efficiency and convenience in everyday spaces and environments, says Kimmel, thanks to innovations in computer vision, artificial intelligence, robotics, materials, and data. Warehouse robots have evolved from glorified trams shuttling materials from A to B into intelligent systems that can range freely across a space, identify obstacles, alter routes based on stock levels, and handle delicate items. In surgical clinics, robots excel at microsurgical procedures in which the slightest human tremor can cause harm. Startups in the autonomous vehicle sector are developing applications and services in niches like mapping, data management, and sensors. Robo-taxis are already operating commercially in San Francisco and expanding in cities from Los Angeles to Chongqing.

As autonomous technology steps into more contexts, from public roads to medical clinics, safety and reliability become simultaneously more important to prove and more difficult to assure. Self-driving vehicles and unmanned air systems have already been implicated in crashes and casualties. “Mixed” environments, featuring both human and autonomous agents, have been identified as posing novel safety challenges.

The expansion of autonomous technology into new domains brings with it an expanding cast of stakeholders, from equipment manufacturers to software startups. This “system of systems” environment complicates testing, safety, and validation norms. Longer supply chains, along with more data and connectivity, introduce or accentuate safety and cyber risk.

As the behavior of autonomous systems becomes more complex, and the number of stakeholders grows, safety models with a common framework and terminology and interoperable testing become necessities. “Traditional systems engineering techniques have been stretched to their limits when it comes to autonomous systems,” says Kimmel. “There is a need to test a far larger set of requirements as autonomous systems are performing more complex tasks and safety-critical functions.” This need is, in turn, driving interest in finding efficiencies, to avoid test costs ballooning.

That requires innovations like predictive safety performance measures and preparation for unexpected “black swan” events, Kimmel argues, rather than reliance on conventional metrics like mean time between failures. It also requires ways of identifying the most valuable and impactful test cases. The industry needs to increase the sophistication of its testing techniques without making the process unduly complex, costly, or inefficient. To achieve this, it may need to manage the set of unknowns in the operating mandate of autonomous systems, reducing the testing and safety “state space” from semi-infinite to a finite, testable set of conditions.
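As a simplified illustration of how that state-space reduction and test prioritization might look, the Python sketch below discretizes a handful of operating conditions into a finite scenario matrix and ranks the resulting test cases by a crude risk score. The dimensions, values, and weights are hypothetical placeholders invented for the example, not figures from EY-Parthenon's work; a real program would derive them from the system's operational design domain and incident data.

```python
from itertools import product

# Hypothetical discretization of a few operating-condition dimensions.
# A real program would derive these from the system's operational design domain.
SCENARIO_DIMENSIONS = {
    "weather": ["clear", "rain", "fog"],
    "lighting": ["day", "dusk", "night"],
    "pedestrian_density": ["none", "sparse", "crowded"],
    "road_type": ["highway", "urban", "residential"],
}

# Assumed per-value risk weights (higher = more safety-relevant); illustrative only.
RISK_WEIGHTS = {
    "clear": 1, "rain": 2, "fog": 3,
    "day": 1, "dusk": 2, "night": 3,
    "none": 1, "sparse": 2, "crowded": 4,
    "highway": 2, "urban": 3, "residential": 2,
}


def enumerate_test_cases():
    """Expand the discretized dimensions into a finite scenario matrix."""
    names = list(SCENARIO_DIMENSIONS)
    for values in product(*(SCENARIO_DIMENSIONS[n] for n in names)):
        yield dict(zip(names, values))


def risk_score(case):
    """Toy priority score: product of the per-value risk weights."""
    score = 1
    for value in case.values():
        score *= RISK_WEIGHTS[value]
    return score


if __name__ == "__main__":
    cases = sorted(enumerate_test_cases(), key=risk_score, reverse=True)
    print(f"{len(cases)} test cases in the discretized state space")
    for case in cases[:5]:  # run the highest-value cases first
        print(risk_score(case), case)
```

Even this toy matrix yields 81 discrete cases from four dimensions, which makes plain why prioritization, not exhaustive physical testing, has to carry much of the load.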

Testing, testing

The toolkit for autonomous system safety, testing, and assurance continues to evolve. Digital twins have become a development asset in the autonomous vehicles space. Virtual and hybrid “in-the-loop” testing environments are allowing system-of-system testing that includes components developed by multiple organizations across the supply chain, and reducing the cost and complexity of real-world testing through digital augmentation.
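The closed-loop structure behind such in-the-loop testing can be sketched in a few lines of Python. Everything below is a stand-in: the one-dimensional plant model, the "supplier" controller, and the safety monitor threshold are hypothetical placeholders for the supplier-built components, digital twins, and monitors a real system-of-systems test would integrate.

```python
from dataclasses import dataclass

DT = 0.1          # simulation step, seconds
MIN_GAP = 5.0     # safety monitor threshold in meters (assumed for the example)


@dataclass
class PlantState:
    """Toy digital-twin stand-in: ego vehicle plus a lead vehicle on one axis."""
    ego_pos: float = 0.0
    ego_speed: float = 15.0
    lead_pos: float = 40.0
    lead_speed: float = 10.0


def supplier_controller(gap: float, ego_speed: float) -> float:
    """Placeholder for a supplier-built component under test.

    Returns an acceleration command from a crude gap-keeping rule.
    """
    desired_gap = 2.0 * ego_speed          # two-second headway
    return max(-5.0, min(2.0, 0.5 * (gap - desired_gap)))


def step(state: PlantState, accel: float) -> PlantState:
    """Advance the simulated environment by one time step."""
    return PlantState(
        ego_pos=state.ego_pos + state.ego_speed * DT,
        ego_speed=max(0.0, state.ego_speed + accel * DT),
        lead_pos=state.lead_pos + state.lead_speed * DT,
        lead_speed=state.lead_speed,
    )


def run_scenario(seconds=60.0):
    """Closed-loop run with an independent safety monitor watching the gap."""
    state, violations = PlantState(), []
    for i in range(int(seconds / DT)):
        gap = state.lead_pos - state.ego_pos
        if gap < MIN_GAP:
            violations.append(f"t={i * DT:.1f}s gap={gap:.2f}m below minimum")
        accel = supplier_controller(gap, state.ego_speed)
        state = step(state, accel)
    return violations


if __name__ == "__main__":
    issues = run_scenario()
    print("violations:", issues or "none")
```

The design point is that the plant, the controller, and the safety monitor are separate pieces that only meet inside the loop, which is what lets components from different organizations be swapped in and tested together virtually.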

Model-based systems engineering is a full lifecycle approach that uses modeling to explore the behavior of a system, the interactions of components, and intersections with potential future environments. This allows for the simulation and prediction of system behavior under different circumstances, enabling developers to proactively seek weaknesses or threats. These and other methodologies will change how AI- and robotics-powered products are developed and validated, ultimately reducing cost and time to market.
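The core idea of exploring a behavior model to surface weaknesses can be shown with a deliberately tiny example. The Python sketch below is a hypothetical illustration, not an MBSE tool: an autonomy "mode manager" is written as a transition table, and every event sequence up to a small depth is simulated to hunt for states that break a safety invariant.

```python
from itertools import product

# Hypothetical behavior model of an autonomy "mode manager", written as a
# transition table: (mode, event) -> next mode. A real MBSE model would be far
# richer (state charts, timing, interfaces), but the exploration idea is the same.
TRANSITIONS = {
    ("manual", "engage"): "autonomous",
    ("autonomous", "disengage"): "manual",
    ("autonomous", "sensor_fault"): "fallback",
    ("fallback", "sensor_recovered"): "autonomous",
    ("fallback", "disengage"): "manual",
}

EVENTS = ["engage", "disengage", "sensor_fault", "sensor_recovered"]


def simulate(sequence):
    """Run one event sequence through the model, tracking sensor health."""
    mode, sensor_ok = "manual", True
    for event in sequence:
        if event == "sensor_fault":
            sensor_ok = False
        elif event == "sensor_recovered":
            sensor_ok = True
        mode = TRANSITIONS.get((mode, event), mode)  # undefined events leave mode unchanged
        # Safety invariant: never in autonomous mode with a faulted sensor.
        if mode == "autonomous" and not sensor_ok:
            return sequence, mode
    return None


if __name__ == "__main__":
    # Exhaustively explore all event sequences up to a small depth.
    counterexamples = [
        result
        for depth in range(1, 5)
        for seq in product(EVENTS, repeat=depth)
        if (result := simulate(seq)) is not None
    ]
    print(f"{len(counterexamples)} violating sequences found")
    if counterexamples:
        print("example:", counterexamples[0])
```

In this toy model the search does find counterexamples, because the engage transition never checks sensor health; that is exactly the kind of design weakness such exploration is meant to expose long before physical hardware is on a test track.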

Over time, Kimmel predicts, safety and testing collaboration between ecosystem partners will itself generate new standards and leading practices for validation and verification, paving the way for seamless, safe, and widespread deployment of autonomous systems across sectors.

EY-Parthenon teams support original equipment manufacturers (OEMs) in autonomous systems integration. This includes developing safety strategies and performance indicators, helping with training data and algorithms for autonomous systems, and developing digital twins, such as digitizing human-defined “road rules” that could boost transparency in autonomous vehicle safety. “We also support the development of testing and evaluation tools that create interoperable live virtual constructive test environments, and cataloging performance data and creating ‘test databases’ including common operating cases and known risks,” says Kimmel. “This allows participants to benchmark performance, for instance, on issues like pedestrian interactions as a factor for autonomous vehicle safety.”
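As a hypothetical illustration of what a digitized road rule could look like, the Python sketch below encodes one assumed rule, slowing to a crawl whenever a pedestrian in the vehicle's path is nearby, as a predicate evaluated over a logged trajectory. The log format, rule wording, and thresholds are invented for the example and are not EY-Parthenon's specifications.

```python
from dataclasses import dataclass


@dataclass
class Frame:
    """Hypothetical log record; a real test database would carry far richer state."""
    t: float                    # timestamp, seconds
    ego_speed: float            # m/s
    pedestrian_distance: float  # meters to nearest pedestrian in path (inf if none)


def rule_yield_to_pedestrians(frames, max_speed=2.0, radius=10.0):
    """Digitized road rule (assumed wording): when a pedestrian in the vehicle's
    path is within `radius` meters, ego speed must not exceed `max_speed`.
    Returns the timestamps of every violation."""
    return [
        f.t for f in frames
        if f.pedestrian_distance <= radius and f.ego_speed > max_speed
    ]


if __name__ == "__main__":
    # Tiny synthetic drive log standing in for a shared test-database entry.
    log = [
        Frame(0.0, 8.0, float("inf")),
        Frame(1.0, 7.5, 12.0),
        Frame(2.0, 6.0, 8.0),   # pedestrian close, still above 2 m/s
        Frame(3.0, 1.5, 6.0),
    ]
    violations = rule_yield_to_pedestrians(log)
    print("rule violations at t =", violations)  # a benchmarkable pass/fail record
```

Because the rule is an explicit, machine-checkable predicate rather than a judgment call, the same check can be run against any participant's logs, which is what makes benchmarking on issues like pedestrian interactions possible.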

Looking to the future, Kimmel outlines five coming trends in the autonomous systems industry.

  • Trust will be key for autonomous systems, both for consumers and regulators. As a result, companies are building cultures of safety and risk management, such as through safety management systems (SMS).
  • Interoperability and virtual testing will become an imperative. Different systems may need to interact effectively with one another and be tested together in virtual test environments. These environments and testing toolchains will make it possible to assess performance across a large range of potential scenarios and conditions far more quickly than physical testing can.
  • Safety performance indicators will level up. The industry likely needs to shift from conventional approaches, like numbers of crashes or failures, to predictive metrics like incursions into a “safety envelope,” erratic or unpredictable motion control, and latency—and to provide evidence of the predictive power of these new metrics. (A minimal sketch of such indicators follows this list.)
  • Standards and common verification systems will offer credibility as emerging technologies scale. Without standards, a fragmented approach to safety may prove detrimental to the industry. Companies that take proactive approaches to shaping and complying with standards can reduce risks and build a competitive advantage.  
  • Governments will take a proactive role, both to regulate and to accelerate. Governments function both as regulators and as catalysts for R&D, raising safety concerns while also accelerating the development of strategies and enabling technologies for safer AI and robotic systems.
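The Python sketch below illustrates what such predictive indicators might look like when computed from a drive log: safety-envelope incursions, erratic motion inferred from jerk, and perception-to-actuation latency. The log format and every threshold are assumptions made for the example rather than established industry limits.

```python
import statistics

# Assumed thresholds for the three indicator families named above.
ENVELOPE_MIN_GAP = 5.0      # meters; closer than this counts as an incursion
JERK_LIMIT = 4.0            # m/s^3; higher magnitude counts as erratic motion
LATENCY_LIMIT = 0.25        # seconds allowed between perception and actuation


def safety_indicators(log, dt=0.1):
    """Compute predictive indicators from a hypothetical drive log.

    `log` is a list of dicts with keys: gap (m), accel (m/s^2), latency (s).
    Returns event rates per hour plus latency statistics.
    """
    hours = len(log) * dt / 3600.0
    incursions = sum(1 for rec in log if rec["gap"] < ENVELOPE_MIN_GAP)
    jerks = [abs(b["accel"] - a["accel"]) / dt for a, b in zip(log, log[1:])]
    erratic = sum(1 for j in jerks if j > JERK_LIMIT)
    return {
        "envelope_incursions_per_hour": incursions / hours,
        "erratic_motion_events_per_hour": erratic / hours,
        "p95_latency_s": statistics.quantiles(
            [rec["latency"] for rec in log], n=20
        )[-1],
        "latency_budget_exceeded": any(
            rec["latency"] > LATENCY_LIMIT for rec in log
        ),
    }


if __name__ == "__main__":
    # Tiny synthetic log; real programs would stream this from test drives.
    sample = [
        {"gap": 12.0 - 0.2 * i, "accel": 0.1 * (i % 7), "latency": 0.05 + 0.002 * i}
        for i in range(50)
    ]
    for name, value in safety_indicators(sample).items():
        print(f"{name}: {value}")
```

Indicators like these are leading rather than lagging: they can flag a system drifting toward unsafe behavior long before a crash count would, which is why demonstrating their predictive power matters.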

Learn more about EY-Parthenon disruptive technology solutions at ey.com/us/disruptivetech.

The views expressed in this article are not necessarily the views of Ernst & Young LLP or other members of the global EY organization.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.