As a senior executive or CIO, how can you assure yourself that artificial intelligence (AI) or machine learning (ML)-derived recommendations are reasonable and flow logically from the project work that has been performed?
While you want to be supportive and encouraging of your team’s work, you don’t want to be inadvertently misled, and you want to confirm that the data science team hasn’t misled itself.
“Superior data quality and a sufficiently rich data volume are essential to quality, real-time AI/ML recommendations,” says Dr. Jim Webber, Chief Scientist at Neo4j, a leading vendor of graph database software.
Here are some high-level questions you can ask the team about the data. They’re designed to raise everyone’s assurance that the AI/ML recommendations are sound and can be confidently implemented, even though you and everyone else know you’re not a data science expert. Start with the question whose topic concerns you most and that you’re most comfortable asking.
Data quality
The confidence you can have in AI/ML-derived recommendations is highly dependent on the data quality found in the data sources used by the project team. Data quality is likely the most critical element of all the system components that underlie the recommendations. Here are some related questions that will illuminate the actual data quality:
- How did you determine that the data sources were relevant to the problem space?
- How do we know that the data sources you employed are sufficient in number to support the model comprehensively?
- How do we know that the quality of the data you employed is sufficient to support the model comprehensively?
- How deeply did you profile the data to assess its initial quality, and what actions did you take to improve it? (The sketch after this list illustrates the kind of profiling output to expect.)
- How did you raise the data quality and volume sufficiently to ensure that the error term associated with the model is small or modest?
- How do we know that the data volume you employed is sufficient to support the model requirements?
- How did your domain experts determine that the data context is rich enough to support the model and to prevent misinterpretation of the data’s meaning?
- How did you determine that the training data for your model was rich enough to ensure model accuracy?
- How do we know that the data contains enough current and historical data to support the model requirements?
- How did you approach metadata management to build assurance that the data elements in the data sources are adequate for a reliable model?
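To make the data-profiling question concrete, here is a minimal sketch of the kind of summary your team might present. It is a hypothetical example, not your team’s actual pipeline: it assumes the data sits in an invented file, customer_records.csv, and uses pandas to report row counts, duplicates, missing values, and basic ranges — the raw material for a data-quality conversation.

```python
# Minimal data-profiling sketch (hypothetical example, not the team's actual pipeline).
# Assumes a tabular data source in an invented file, "customer_records.csv".
import pandas as pd

df = pd.read_csv("customer_records.csv")  # hypothetical data source

profile = {
    "rows": len(df),
    "columns": df.shape[1],
    "duplicate_rows": int(df.duplicated().sum()),
    # Share of missing values per column -- a first signal of data quality.
    "missing_pct_by_column": (df.isna().mean() * 100).round(2).to_dict(),
}

# Basic ranges for numeric columns and distinct counts for categorical columns.
numeric_summary = df.describe().T[["min", "max", "mean", "std"]]
categorical_cardinality = df.select_dtypes(include="object").nunique()

print(profile)
print(numeric_summary)
print(categorical_cardinality)
```

A team that has profiled its data should be able to walk you through figures like these for every source it used, and explain what it did about the gaps the profile revealed.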
Evaluating answers
Here’s how to evaluate your data science team’s answers to these questions:
- If you receive blank stares, that means your question’s topic has not been addressed and needs more attention before the recommendations should be accepted. It will be necessary to add missing skills to the team or even replace the entire team.
- If you receive a lengthy answer filled with data science jargon or techno-babble, the topic has not been sufficiently addressed, or worse, your team may be missing critical skills required to deliver confident recommendations. Your confidence in the recommendations should decrease or even evaporate.
- If you receive a thoughtful answer that references uncertainties and risks associated with the recommendations, your confidence in the work should increase.
- If you receive a response that describes potential unanticipated consequences, your confidence in the recommendations should increase.
- If the answers you receive are supported by additional slides containing relevant numbers and charts, your confidence in the team should increase significantly.
- If the project team acknowledges that your question’s topic should receive more attention, your confidence in the team should increase. It will likely be necessary to allocate more resources, such as external data science consultants, to address the deficiency.
For a summary discussion of the topics you should consider as you seek to assure yourself that AI/ML recommendations are sound, please read this article: Skeptical about AI-derived recommendations? Here are some tips to get you started.
What ideas can you contribute to help senior executives assure themselves that the AI/ML-derived recommendations are reasonable and flow logically from the project work performed? Let us know in the comments below.