AI adoption is increasing, but many companies report struggling to scale AI across the organization. A pilot or a point solution, such as a chatbot or using natural language processing to parse documents, is valuable but will not in itself develop the core organizational capabilities that support AI-at-scale.
Here are the six must-do steps for developing organizational capability in scaling AI:
- Have a clear, shared vision for AI. Make sure the person tasked with leading the company’s AI, analytics or data initiatives sets up a series of workshops that build a shared executive understanding of what AI is, what it can and cannot do, and why it matters to the business.
This is a gap when: the executive team doesn’t have a clear, shared vision about “why AI?”
- Be clear about short-term value. Someone needs to identify 3-5 clear, compelling, high-impact use cases that are achievable within what your company considers a short time frame. They must be high impact: things people really care about.
This is a gap when: no one can say what’s most feasible and most impactful in the same sentence.
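One way to force feasibility and impact into the same sentence is to score candidate use cases on both dimensions and rank them together. A minimal sketch, with entirely hypothetical use cases and scores:

```python
# Rank candidate AI use cases by impact x feasibility (1-5 scales, scores hypothetical).
use_cases = [
    {"name": "invoice parsing", "impact": 4, "feasibility": 5},
    {"name": "churn prediction", "impact": 5, "feasibility": 3},
    {"name": "support chatbot", "impact": 3, "feasibility": 4},
]

# Highest combined score first: the shortlist to pitch as short-term value.
ranked = sorted(use_cases, key=lambda u: u["impact"] * u["feasibility"], reverse=True)
print([u["name"] for u in ranked])
```

The multiplication is deliberate: a use case that scores zero on either dimension drops to the bottom no matter how strong the other score is.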
- Have a clear AI strategy. This sounds obvious but it’s often a real problem. Beyond the initial AI deployments, it should be clear what the AI strategy is about. What threats come from AI? What are the opportunities? What else can you do with AI that could grow revenue, decrease costs, decrease risks?
This is a gap when: the AI strategy stalls out at a handful of obvious use cases.
- Have quantitative and objective measures of value. Developing the right metrics for AI can be tricky because AI’s impact is sometimes not independent of other initiatives. This is especially true when machine employees supplement human roles and augment human performance. However, it is vital to have some financial metric that people agree is meaningful, and to include objective reviews of the initiatives, which can help catch other, less tangible benefits (or costs).
This is a gap when: no one can identify a relevant financial metric for each AI initiative.
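Even a simple agreed-upon metric beats none. The sketch below computes a basic return on investment for one initiative; the figures are hypothetical, and a real review would layer the objective assessment of intangibles on top:

```python
# Minimal per-initiative financial metric (all figures hypothetical).
def roi(annual_benefit: float, annual_cost: float) -> float:
    """Simple return on investment: (benefit - cost) / cost."""
    return (annual_benefit - annual_cost) / annual_cost

# Hypothetical chatbot initiative: $500k in deflected support costs, $200k to run.
print(roi(500_000, 200_000))  # 1.5, i.e. a 150% return
```

The exact formula matters less than the agreement: everyone reviewing the initiative should accept the metric as meaningful before the project starts, not after.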
- Have people who “speak AI.” AI is very different from traditional software. Much of its behavior is a result of its post-design experience. It is also non-intuitive — much of its value comes from finding patterns in data that humans can’t. And it fails, in ways traditional software does not. This means the level of business knowledge required to derive value from the data and models is much deeper and more nuanced than in traditional business analysis.
Scaling AI needs new job descriptions and a network of supporting processes. It also means that analysts need super-sized skills in translating math into meaning. These people take time to train. McKinsey calls them “translators” and says that many companies set up their own internal academies to train them.
This is a gap when: you don’t have people who can sit with data scientists and translate business objectives into AI models and it’s not possible to point to a full suite of AI-specific skill sets and job descriptions.
- Understand and govern the new risks of AI. AI risks are different: unintended consequences, false positives and false negatives, bias, and accountability gaps can surprise organizations and become significant sources of risk. AI ethics is a growing field, and AI ethicist is now a real job. But much about AI development makes it hard to manage, and a governance process is vital for AI-at-scale.
This is a gap when: you don’t have anyone who is hyper-obsessed with AI ethics.
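Governance starts with measuring the failure modes named above. A minimal sketch, with hypothetical counts, of turning a model's false positives and false negatives into rates a risk review can track:

```python
# Error rates from a confusion matrix (counts hypothetical).
def fp_fn_rates(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """False positive rate = FP/(FP+TN); false negative rate = FN/(FN+TP)."""
    return fp / (fp + tn), fn / (fn + tp)

# Hypothetical fraud model: 90 true hits, 5 false alarms, 95 correct passes, 10 misses.
fpr, fnr = fp_fn_rates(tp=90, fp=5, tn=95, fn=10)
print(fpr, fnr)  # 0.05 0.1
```

Which rate matters more is a business judgment, not a technical one: a false alarm that annoys a customer carries a different cost than a missed fraud case, and governance means deciding those trade-offs explicitly. Checking these rates per demographic group is also one common first step in surfacing bias.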
There are many new skill sets needed to manage AI and machine employees. Having a clear vision and strategy for leveraging machines that learn, understanding the ethical considerations, identifying unintended consequences, designing for failure, anticipating bias and unfairness, and translating mathematical models and statistical outcomes into product features are all completely new capabilities, and they underpin the core practices needed to scale AI successfully.