
Risk Management Keeps Planes and NAV Systems Flying

Whether it's for aircraft maintenance or NAV systems, risk identification is the first step in the risk management process.
This post is part of the Risk Management blog series.

On 23rd March 1994, Aeroflot Flight 593, en route to Hong Kong, crashed into a mountain range, killing all 75 people on board. On 10th June 1990, British Airways Flight 5390's cockpit windscreen blew out en route to Spain, leaving the pilot partially ejected. Were the causes of these incidents not identified as risks? Were these outcomes not foreseeable?

Risk identification is the first step in the risk management process. And this applies to NAV projects. Quite simply, it is thinking about every possible thing that could prevent a project from achieving success. Many companies believe formal risk management is unnecessary; they think risks can be managed on the fly. Perhaps a day-long risk workshop is not necessary for every project, but it is critical that the project team discusses and understands what could cause an event that derails their project.


Having a structured approach to risk identification will make the task far less daunting. Identifying “every possible thing” is not entirely practical, so there needs to be some method to the madness, so to speak.

For NAV implementations, a good place to start is the solution itself. Consider the configuration of the system for your business. Are the user and functional requirements well defined? Will your business processes change? Another area carrying significant risk is resourcing. Are the right people involved in the project and are they involved at the right times? Then consider the external factors that may have an impact. Do they create competing priorities?
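To make that structure concrete, here is a minimal sketch of a risk register in Python. The categories mirror the areas above, but the example entries, field names, and the simple likelihood-by-impact rating are illustrative assumptions, not a prescribed template; the point is only that recording each risk against a category, likelihood, and impact gives the team a ranked starting point for discussion.

```python
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    SOLUTION = "Solution / configuration"
    REQUIREMENTS = "User and functional requirements"
    RESOURCING = "Resourcing"
    EXTERNAL = "External factors"


@dataclass
class Risk:
    category: Category
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood-by-impact rating, used only to rank risks
        return self.likelihood * self.impact


# Illustrative entries only; a real register comes out of the team's own discussion
register = [
    Risk(Category.REQUIREMENTS, "Functional requirements not well defined", 3, 4),
    Risk(Category.RESOURCING, "Key users unavailable during testing", 2, 5),
    Risk(Category.EXTERNAL, "Year-end close competes for project time", 4, 3),
]

# Review the highest-rated risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  [{risk.category.value}] {risk.description}")
```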

But one of the best ways to identify risks is through lessons learned from previous experience. It’s unlikely that you have implemented an ERP system more than a couple of times, but Fenwick has a wealth of experience, with a ‘lessons learned’ list to match.

Aeroflot Flight 593 crashed because the pilot's 15-year-old son, who had been allowed into the cockpit, accidentally disengaged the autopilot. From a resourcing perspective, putting a 15-year-old with no idea how to fly a commercial airliner in the pilot's seat is a risky choice. The risk had been identified, but regulations were broken.

The pilot of British Airways Flight 5390 miraculously survived being outside the plane for 21 minutes while cabin crew clung to his legs. The window blew out because the bolts were 0.66 mm too small in diameter. From a requirements perspective, the correct bolts were clearly defined. But under time constraints, a maintenance engineer failed to consult the documentation, instead matching the replacement bolts by sight when replacing the windscreen. The correct procedures were not followed. The failure ultimately came down to schedule taking priority over quality, something that also occurs on IT projects. This will form part of a future blog in this series: how we define and measure project success.
