Can you assess AI risk?

There is a revolution underway in artificial intelligence, with hot topics like deep learning, predictive neural nets, big data, the digital organization and other technologies and techniques appearing on the business analysis and analytics horizon. While organizations rush to take advantage of accelerating AI capabilities, little has been published about the business value side of the AI equation. AI must be thought through and planned like any other organizational effort. Like building business applications, an AI initiative is a business project that happens to be highly technical in nature.

The orderly inclusion of AI in planning is at best fragmented. AI has not yet matured to the point where its value to the organization can be carefully assessed, largely because early applications of AI targeted very specific uses such as stock market analysis and the analysis of consumer behavior. Linking business issues or problems to an AI solution requires an understanding of both the organization and the AI technology.

With a little planning effort, the inclusion of AI in an organization's capabilities can be handled in an orderly and beneficial fashion. AI efforts should link to organizational initiatives, which in turn link to strategic missions or objectives. They can be integrated into the organization's planning process from the strategic level down to the operational level, and they can be tied to any of the management models in use today, such as the balanced scorecard, the value chain, value-based management and the 65+ other models in common use.

In any case, effectively integrating AI into an organization follows the same path as the inclusion of any new technology: building awareness of the technology, spreading that awareness, identifying early use cases for AI, integrating them into the planning cycle and identifying the risks associated with the technology.

AI technology is changing rapidly, and organizations are at various stages of managing their AI interests. Some are just entering the awareness stage, while others are charging ahead, starting projects modeled on applications they have seen in publications and hoping for some measure of success.

An approach to assessing AI project risk

As part of both strategic and operational planning, there is a point where the risk to the organizational value of each project is assessed. Governing risk is critical to nurturing a new technology in an organization, so the key risk factors must be identified for each project.

For the risk assessment to be useful, a combination of attributes of each AI project should be considered. Here is a starter list of the attributes typical of an AI project risk assessment. At its core, the assessment should produce a composite complexity ranking built from at least three technical factors and at least five business factors.

AI Complexity index

  • Number of goals/objectives
  • Number of AI techniques
  • Number of layers
  • Technical risk factors (e.g. age of software, quality of data)
  • Business risk factors (a composite of several attributes of AI projects; see below)

Business risk factors (a composite of several attributes of AI projects)

  • Degree of importance (1 – 5, where 5 is very important)
  • Degree of impact (1 – 5)
  • Benefit (1 – 5)
  • Perceived Business risk
  • Risk remediation cost
  • Risk damage cost/loss
  • Business value (monetary)
  • Business value (qualitative)
  • Project cost
  • Project time frame

The selected attributes of each AI project are captured in a simple list model, with a value recorded for each attribute.
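
To make that concrete, the list model can be as simple as one record per project. The Python sketch below is illustrative only; the field names and placeholder values are assumptions, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class AIProjectProfile:
        """One row of the list model: the selected attributes for a single AI project."""
        name: str
        num_goals: int          # number of goals/objectives
        num_ai_techniques: int  # number of AI techniques used
        num_layers: int         # number of layers (for neural net approaches)
        technical_risk: int     # composite technical risk factor, 1 - 5
        importance: int         # degree of importance, 1 - 5
        impact: int             # degree of impact, 1 - 5
        benefit: int            # benefit, 1 - 5
        perceived_risk: int     # perceived business risk, 1 - 5
        project_cost: float     # estimated project cost (monetary)
        business_value: float   # estimated business value (monetary)

    # Placeholder entries only, to show the shape of the list model.
    portfolio = [
        AIProjectProfile("Customer churn prediction", 2, 1, 3, 2, 5, 4, 4, 2, 120_000, 500_000),
        AIProjectProfile("Document classification",   4, 2, 5, 3, 3, 3, 3, 3,  90_000, 150_000),
    ]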

The attributes are then combined into composite rankings, with multiple attributes contributing to each rank. In the end you have two indices (a small computational sketch follows the list):

  1. AI technical index (x axis) – number of goals/objectives
  2. Business analysis index (y axis) – multiple attributes
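
Building on the project record sketched earlier, the functions below show one rough way to compute the two indices: the number of goals/objectives for the technical index, and an equally weighted average of the 1 – 5 business attributes for the business index. The weights are an assumption, not part of the method itself:

    def technical_index(p: AIProjectProfile) -> float:
        # AI technical index (x axis): here simply the number of goals/objectives,
        # though a fuller version could also blend technique count, layers and
        # other technical risk factors.
        return float(p.num_goals)

    def business_index(p: AIProjectProfile, weights=None) -> float:
        # Business analysis index (y axis): a weighted composite of the 1 - 5
        # business attributes. Equal weights are assumed here for illustration.
        weights = weights or {"importance": 1.0, "impact": 1.0,
                              "benefit": 1.0, "perceived_risk": 1.0}
        total = (weights["importance"] * p.importance
                 + weights["impact"] * p.impact
                 + weights["benefit"] * p.benefit
                 + weights["perceived_risk"] * p.perceived_risk)
        return total / sum(weights.values())  # stays on the same 1 - 5 scale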

Below is a small variation of the suggested starting attributes. It uses six attributes for the business/organization index and only the number of goals/objectives for the AI technical index; the number of goals/objectives is used because it is one of the stronger indicators of technical risk. The resulting 4-box shown below helps identify where the best opportunities exist.

What does it all mean?

Any new technology that is slated for integration into an existing organization structure requires an assessment of the risk involved in deploying it. Interpreting the simple 4-box above provides that assessment:

  • Lower Left – Not very important but likely to succeed; low risk (some small projects)
  • Upper Left – Important and likely to succeed; low risk (full of small projects, good yield at low risk)
  • Upper Right – Important but prone to failure; high risk (larger and more complex projects, but still good yield)
  • Lower Right – Unimportant and likely to fail; high risk (very complex with low yield)
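
For completeness, here is a minimal sketch of how a project's two index values from the earlier sketches might be mapped to these quadrants. The cutoff values are assumptions and would need to be set against the organization's own scoring scale:

    def quadrant(tech: float, biz: float,
                 tech_cutoff: float = 3.0, biz_cutoff: float = 3.0) -> str:
        # Map a project's technical index (x axis) and business index (y axis)
        # onto the 4-box. Cutoffs of 3.0 are illustrative assumptions.
        if tech <= tech_cutoff and biz > biz_cutoff:
            return "Upper Left: important, low risk"
        if tech <= tech_cutoff:
            return "Lower Left: not very important, low risk"
        if biz > biz_cutoff:
            return "Upper Right: important, high risk"
        return "Lower Right: unimportant, high risk"

    # Usage, applied to the placeholder portfolio from the earlier sketch:
    for project in portfolio:
        print(project.name, "->", quadrant(technical_index(project), business_index(project)))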

To successfully execute an AI strategy, an organization would start in the lower left quadrant to train an AI team. Later, it can use the upper left quadrant to evaluate risk and harvest value, and it should only take on upper right projects once its technology and skills are mature enough to lower the risk.

While not perfect, such an approach to AI project strategy gives an organization some direction in choosing projects. Keep in mind that it took many years to develop reliable AI software for assessing stock market moves; the planning and processes needed to apply AI to other areas of a business will also take time to develop.

A small issue with AI software tools

One area of risk lies in the software tools themselves. Many tools are software libraries with no or minimal user interface capabilities, and it is misleading to think a library is all you need. Current AI libraries consist of sets of algorithms that require an interface or need to be embedded in other applications. There are exceptions, such as the tools used for stock market movement analysis: these are standalone applications with reasonable user interfaces already built, so they do not need an additional, costly investment in a user interface.

Another risk with tools is the preparation of the data that feeds the AI tool. This data must be formatted to fit the tool's input formats. In many cases the data is extracted from a database, or taken from an external source, and formatted into a table for input to the AI capability. Further, facilities must be provided to let a user select the input variables that drive the AI algorithm. Some tools provide for the selection of input and objective variables, but only if the data is already formatted in a manner the tool accepts.
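
To illustrate the kind of preparation involved, the sketch below pulls records from a local database into the flat table most AI libraries expect and lets the caller name the input and objective variables. The database, table and column names are placeholders, not taken from any particular tool:

    import sqlite3
    import pandas as pd

    # Hypothetical source: a local SQLite database of sales history.
    conn = sqlite3.connect("sales_history.db")
    raw = pd.read_sql_query(
        "SELECT region, promo_spend, list_price, units_sold FROM sales", conn)
    conn.close()

    # Let the user (or a configuration file) choose which columns drive the
    # AI algorithm and which column is the objective to be predicted.
    input_variables = ["promo_spend", "list_price"]
    objective_variable = "units_sold"

    X = raw[input_variables].to_numpy()      # model inputs as a numeric table
    y = raw[objective_variable].to_numpy()   # objective/target values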

Discovering these needs at implementation time costs time and money and puts the AI project at risk. The less prepared you are, the greater the risk.

So Be Prepared

Technology, and especially AI technology, requires much more attention today than it did even five years ago. The organizations that keep up with the trends and changes will be best prepared to deliver valuable goods and services in the future. Effectively using new forms of analytics, including AI technologies, provides better opportunities to reduce operational uncertainty.