Autonomous Vessels Need to be More Afraid of Dying
[By Live Oftedahl]
8 November 2018 at 0400 hours: The frigate KNM Helge Ingstad is heading south through the clear winter darkness in Hjeltefjorden, northwest of Bergen.
It is bound for Scotland after a major NATO exercise in Trøndelag County. The warship, with 137 people on board, is not transmitting AIS (automatic identification system) signals, but other ships in the fjord can see it on their radar.
At 0401 hours, the 133-meter-long frigate collides with the 250-meter-long tanker TS Sola, which is heading towards England loaded with crude oil.
The collision tears open an almost 50-meter-long hole along the starboard side of Helge Ingstad.
Manually controlled by humans
Sleeping sailors are abruptly woken in their cabins as ice-cold seawater pours in and torn electrical wiring and cables hang loose everywhere. Luckily, everyone is evacuated, no lives are lost, and only seven people on board are injured.
The tanker has only received a few scratches, and an environmental disaster has been avoided.
The morning news that day played the audio log from the incident. How was it possible not to spot a 250-meter-long tanker on a collision course?
There are many complex reasons why the accident occurred, but would the same have happened if Helge Ingstad had been equipped with higher levels of autonomy and artificial intelligence (AI)?
Opinions vary.
AI can reduce possible accidents
Ingrid Bouwer Utne is professor of Marine Safety and Risk at NTNU’s Department of Marine Technology. She conducts research on the development of safer and more intelligent autonomous systems – for shipping, underwater robotics and flying drones. Utne’s research is organized under the Fjord Laboratory section of the Norwegian Ocean Technology Centre.
Utne believes that more AI in the maritime sector could have contributed to a better understanding of the maritime accident, both on the bridge of Helge Ingstad and at the Fedje Vessel Traffic Service Centre, which monitors and regulates vessel traffic in the area.
“Neither the outgoing nor the incoming officer of the watch on board Helge Ingstad understood that TS Sola was a tanker. And the operator at Fedje forgot to plot the course of Helge Ingstad when the ship arrived at what is called the precautionary area,” says Utne.
She sees these as examples of situations where better decision support could have reduced the likelihood of such misunderstandings and oversights, even though AI and autonomous systems alone are not adequate risk mitigation measures.
Utne is a former operations officer on frigates in the Royal Norwegian Navy. She has been a member of a committee working on an Official Norwegian Report (NOU) looking into cruise traffic in Norwegian waters after Viking Sky almost ran aground during a storm in Hustadvika in 2019 with almost 1400 people on board.
In two of the research projects she has worked on in recent years, the aim is to incorporate risk understanding into the ‘thinking’ of autonomous systems.
AI must be able to reason more like human beings
The research conducted in the ORCAS project is about further developing autonomous ships. Kongsberg Maritime and Det Norske Veritas (DNV) are partners in this project. In the UNLOCK project, autonomy research is focused on flying drones and underwater robots.
Among other things, the aim is to get drones and robots to carry out inspections in hard-to-reach areas, such as in closed tanks and under sheets of ice.
“The projects are about connecting the way robots sense risk with control so that risk assessment becomes a more integrated part of the decision-making process for robots,” explains Utne.
As more autonomous systems operate independently of a human operator, the systems themselves must be able to make good risk assessments.
“If robots are to be made more intelligent, it is natural to think that they need to be able to reason more like human beings. They must be able to assess risk.”
Just prior to the Helge Ingstad accident, the sailors on board the frigate were undergoing optical navigation training.
“When the tanker TS Sola called the officer of the watch on Helge Ingstad over the radio to request a change of course, the officer of the watch did not understand that the tanker was moving and that there was a risk of collision.”
In addition, the Norwegian Safety Investigation Authority’s first report after the accident states that Fedje Vessel Traffic Service Centre’s automatic plotting, warning and alarm functions were not good enough.
“AI can provide more information about the surroundings and therefore better understanding of the situation, assuming that the systems are actually used and are user-friendly. However, striking a good balance between the manual control performed by a human operator and autonomous control is demanding,” says Utne.
Low level
Utne is of the opinion that autonomy development is still at a relatively low level of maturity, despite the recent acceleration in the use of artificial intelligence.
“It is not uncommon for the people designing and programming the systems to spend a long time working on control systems and algorithms, only for risk analyses to be introduced late in the development process,” says Utne.
What is unique about the ORCAS and UNLOCK research projects is that advanced risk analyses and models form the basis for developing algorithms in the early programming phase of the control system. As a result, the risk analyses become more integrated into the system because functions are actually created that enable robots to make safer decisions.
“I am not aware of any others who are working in this way, even though many people are talking about AI, autonomy and risk. There has been no systematic or professional basis for understanding what a risk analyst should contribute to the programming of autonomous systems. It seems there is a bit of a silo mentality,” says Utne.
Important to involve risk analysts
As a researcher at NTNU AMOS, the Centre for Autonomous Marine Operations and Systems, she found that researchers were able to collaborate across disciplines. This was very important for the development of her research.
“Cyberneticists and AI experts may struggle a little to understand what risk analysts can contribute, but AMOS was quick to understand the importance of risk management expertise and working in an interdisciplinary manner,” she said.
Although the development of a robot’s understanding of risk is primarily for application in marine areas, the methods and results can also be used on land and in the air.
“A lot of good research takes place at the intersection of different disciplines, and it requires creative and open-minded people,” she said.
Risk is about more than just distance
Risk models have been created in the ORCAS and UNLOCK projects. The models represent risks associated with various operations and systems, and these are then linked to ways in which vessels are controlled.
“We have conducted simulations with real vessels and carried out experiments on Grethe – an unmanned surface vessel owned by NTNU’s Department of Marine Technology,” says Utne.
The next step is to further develop and test the models and algorithms more thoroughly in field studies and simultaneously with several other vessels. There is also a need to improve the situational awareness of the human operators. Utne has recently received an ERC grant of NOK 29 million to conduct research on this.
“Including risk management experts when creating decision systems for robots is something new. One typical risk factor already being used by many people working with control and artificial intelligence is the distance between vessels, but risk is about much more than just that,” says Utne.
For example, Utne mentions the risks of running aground, fire, capsizing, and sinking. If these types of incidents are to be prevented, measuring the distance between ships is simply not enough.
“The Helge Ingstad accident showed us just how complex causal relationships and risk factors can be,” she said.
Utne believes there is a need for a much more systematic approach to identifying, analyzing, and modeling risk factors.
“That is what makes systems smarter. Risk models provide a more holistic picture and can contribute to better situational awareness, rather than algorithms that focus on minimum distances, for example,” she said. “Why not vary things and use different risk models that determine whether a vessel should speed up or slow down, and that take multiple factors into account – such as the weather forecast?”
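To make the idea concrete, the sketch below shows what a multi-factor risk model for a surface vessel might look like in code: a weighted combination of collision, grounding, and weather terms drives a speed decision instead of a single minimum-distance rule. All names, weights, and thresholds here are hypothetical illustrations, not the actual ORCAS or UNLOCK models.

```python
# Toy multi-factor risk model for a surface vessel (illustrative only).
# The factors, weights, and thresholds are assumptions for the sketch,
# not taken from the ORCAS/UNLOCK projects.
from dataclasses import dataclass


@dataclass
class Situation:
    distance_to_vessel_m: float  # range to the nearest other vessel
    closing_speed_ms: float      # positive when the vessels are closing
    depth_under_keel_m: float    # grounding margin
    wind_forecast_ms: float      # forecast wind speed


def risk_score(s: Situation) -> float:
    """Weighted sum of normalized risk factors, each clamped to [0, 1]."""
    # Collision risk rises with closing speed relative to separation.
    collision = min(1.0, max(0.0, s.closing_speed_ms * 60.0 /
                             max(s.distance_to_vessel_m, 1.0)))
    # Grounding risk rises as under-keel clearance shrinks.
    grounding = min(1.0, 5.0 / max(s.depth_under_keel_m, 0.1))
    # Weather risk scales with the forecast wind speed.
    weather = min(1.0, s.wind_forecast_ms / 25.0)
    # Hypothetical weights: collision dominates, weather modulates.
    return 0.5 * collision + 0.3 * grounding + 0.2 * weather


def speed_command(s: Situation, cruise_speed_ms: float) -> float:
    """Scale speed back as aggregate risk rises; stop above a threshold."""
    r = risk_score(s)
    if r > 0.8:
        return 0.0  # risk too high: stop and reassess
    return cruise_speed_ms * (1.0 - r)


calm = Situation(5000.0, 2.0, 40.0, 5.0)
risky = Situation(800.0, 8.0, 6.0, 20.0)
print(speed_command(calm, 8.0), speed_command(risky, 8.0))
```

The point of the sketch is the shape of the decision, not the numbers: the commanded speed responds to grounding margin and weather as well as traffic separation, which a pure minimum-distance rule cannot do.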
The next generation of frigates must also be able to navigate optically in manual mode, but will hopefully also have more intelligent systems that can provide better warnings when the people on board do not realize they are on a collision course with another ship.
“More intelligent systems will understand possible risk factors well before they actually materialize – where death is the ultimate risk factor,” Utne said.
This article appears courtesy of Gemini.no and may be found in its original form here.
The opinions expressed herein are the author's and not necessarily those of The Maritime Executive.