In the last few weeks, two important studies have come out about the progress of AI and automation in business. The Economist Intelligence Unit published a briefing paper called “The advance of automation: business hopes, fears and realities” while McKinsey partners Tim Fountaine, Brian McCarthy and Tamin Saleh wrote an article entitled “Building the AI-Powered organization” in the Harvard Business Review. Both articles come to similar conclusions: the main challenge isn’t technology, it’s culture.
The biggest thing slowing down successful AI deployments is people: the absence of skills and employee resistance. The absence of skills isn’t just technical; it runs through all levels: executives who don’t understand enough about AI, product managers who don’t understand how they can contribute to AI product design, and people at all levels of an organization who simply don’t have the confidence to adopt AI. This means that AI is only being fully leveraged in the IT organization, followed by Production/operations and Finance, while departments such as HR, Legal and Sales have yet to reap the benefits of AI.
McKinsey’s analysis is startling: only 8% of firms have practices that support successful AI deployment, the cultural barriers are “formidable” and the partners have seen “failure after failure caused by the lack of a foundational understanding of AI among senior partners.”
We think five things are critical to AI success and form the basis of AI best practice:
Strategy that takes account of how human and machine intelligence evolves together
AI learns on its own: its behavior depends in part on its post-design experience of the world. Humans can be replaced or augmented, can learn from AI and teach AI, or can operate independently of AI’s intelligence, receiving instructions or dealing with AI actions without understanding how or why the AI arrived at its internal knowledge.
This makes determining an AI strategy different: it is evolutionary, grounded in data, and dependent both on what people want and on the new opportunities people can intuit once they understand what’s possible.
Best practice starts with leaders understanding what AI is, developing a cross-functional shared understanding and being able to rapidly sketch out a roadmap that can galvanize others.
- Strategy is developed first by the senior leadership team, with the CEO acting as the CAIO, so that organizational “cross-cutting” starts upfront.
- Linked to increased revenue, lower cost and lower risk across all parts of the company’s operations, not just as a point solution
- Supported by training and education of the leadership team in the business value of AI as a transformative technology
- Iterative approach to developing ideas and solutions, using leading-edge AI examples from other industries as examples and inspiration
- Roadmap that includes assessing technology, people and data alongside governance design
- Roadmap time spans reconcile quarterly progress reporting with the longer AI delivery cycle, combining short-term and long-term goals that make sense together and act as a virtuous learning cycle for the people involved
People know they are more valuable than machines
The future of work. The future of consumer choice. The future of democracy. The future of society. The future of countries. AI is a new kind of intelligence. So it’s vital to understand how it’s different at a small scale such as a project or a product, right through to understanding how it’s different across large scale domains, such as entire societies.
People need to understand AI at any scale because the themes, opportunities and concerns are often similar. People need to understand how machines and humans collaborate, what AI is good at and what humans want AI to do.
Best practice companies are clear about the value of their human employees and how machines will support their goals. While there are some roles that will be automated away, there is now enough evidence that machines will change jobs and enhance human performance that it’s safe to assume that the majority of leaders’ focus needs to go into helping people transform their jobs and performance with AI.
- Attitude and philosophy of AI being for the service of people, not the other way around
- Long term goals articulate the impact of AI, for the employees as well as for the business overall
- Leadership development focuses on the augmentation of humans, getting the best from humans and the best from machines
- Knowledge creation and co-learning of people and machines is the future of work
- People treat AI as a positive force for the future and can describe its impact and opportunity down to the level of their own work tasks
- All people are trained in AI – starting with business-based training and then continuing to be as technical as required.
- People rewarded for being AI-first – agile, experimental, cross-functional, multi-modal decisions and budgets
- Jobs crafted in the context of augmentation versus automation
Governance starts with people’s own experience of good and bad AI
AI is a huge investment: it’s complex, expensive and hard to deploy. An AI project requires more diverse involvement and more creativity, and carries more risk, than most technology projects.
People are worried about AI. Bias, ethics, equality, accountability, privacy, employment and competition are all critical issues that interact and influence each other.
Best practice includes designing governance upfront – who to include, how to buy or what to build, what to automate, what to measure, how to manage the creative process, how to deal with bias and how to set strategy, targets and expectations with employees and investors.
- Have someone accountable for AI governance across all seven areas: privacy, bias, ethics, accountability, competition, equality, employment.
- Use employees’, customers’ and other stakeholders’ experiences to inform your choices
- Make sure that person has a budget and can bring in people from outside the organization
- Governance reaches into technical standards, data labeling, model training and other areas where technical people make front-line decisions that affect AI behavior
Make sure design starts with diversity
Now, machines learn. AI learns from human reactions, and humans change their behavior as AI interacts with them. This means that AI is a new kind of relationship. Creating a relationship through an AI-enabled product means that design now needs to take into account an entirely new agent: the machine intelligence. What kind of relationship do you want to build? The only way to truly define this is to foster cross-functional expertise in the specifics of AI design – diversity of perspective from across the company, including users.
AI is complex and the mathematics and practice are beyond the reach of most people. The effect of AI is too significant to leave decisions to only those with technical expertise. Best practice is to make sure that all AI is developed with many diverse perspectives, which means a diversity of knowledge, education, genders, races, experiences and world views.
- AI tradeoffs, learning and prototyping are coordinated from an AI hub
- Broad across organization, not just point problems
- Front line design – enable employees at the front lines to drive design that solves their – or their immediate customer’s – problems
- Ensure design teams are accountable for defining AI workflow and other policy or process changes that need to align with AI (or vice versa)
- Encourage experimentation and prototyping and make sure to test the learning aspects of AI
Deployment that focuses on learning
AI starts with data, so best practice starts with defining proxies, approximations, alternative data, graph connections, missing data, data completeness, data skew, bias, representation and cleanliness. Then it’s about the algorithms: different algorithms handle transparency and trade-offs in different ways, so the voice of the customer is vital.
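Data-quality questions like completeness and skew can be made concrete with very simple checks before any modeling begins. A minimal sketch in plain Python, using a hypothetical toy dataset (the field names and records are illustrative, not from any real pipeline):

```python
from collections import Counter

# Hypothetical toy dataset: each record is a dict; None marks missing data.
records = [
    {"age": 34, "region": "north", "label": "approve"},
    {"age": None, "region": "north", "label": "approve"},
    {"age": 51, "region": "south", "label": "approve"},
    {"age": 29, "region": "north", "label": "deny"},
]

def completeness(records, field):
    """Fraction of records with a non-missing value for `field`."""
    present = sum(1 for r in records if r.get(field) is not None)
    return present / len(records)

def label_skew(records, field="label"):
    """Class distribution -- a first check for skewed training data."""
    counts = Counter(r[field] for r in records)
    return {k: v / len(records) for k, v in counts.items()}

print(completeness(records, "age"))  # 0.75 -- one missing age
print(label_skew(records))           # {'approve': 0.75, 'deny': 0.25}
```

Checks this simple won’t catch subtle bias, but they give non-technical stakeholders concrete numbers to discuss with the technical team.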
There are important trade-offs to track when building AI models: selecting training data, detecting overfit and tuning models. It’s important that machine learning engineers and data scientists can be creative, and there may not be a bright line between a creative decision and a policy decision. Non-technical leads need to be knowledgeable about model creation to be able to support the creative work of technical people. It’s vital that new practices and checks can be adopted without over-burdening technologists.
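The overfit check mentioned above rests on one discipline: hold some data back and compare training error with validation error. A minimal sketch under toy assumptions, where the “model” is a deliberately extreme memorizer so the gap is obvious:

```python
import random

# Synthetic data: y is roughly 2*x plus noise (illustrative only).
random.seed(0)
data = [(x, 2 * x + random.gauss(0, 1)) for x in range(100)]
random.shuffle(data)
train, valid = data[:80], data[80:]

# A memorizing "model": perfect on training data, useless off it.
lookup = {x: y for x, y in train}
mean_y = sum(y for _, y in train) / len(train)

def predict(x):
    return lookup.get(x, mean_y)  # falls back to the mean on unseen x

def mse(split):
    return sum((predict(x) - y) ** 2 for x, y in split) / len(split)

train_err, valid_err = mse(train), mse(valid)
# A large gap between training and validation error is the classic
# signal of overfit; this memorizer is the extreme case.
print(train_err, valid_err)
```

The point for non-technical leads is the shape of the check, not the model: if validation error is far worse than training error, the model has memorized rather than learned.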
Best practice hard wires strong ties between business and technical leads.
- AI hubs and centers of expertise
- All investments (in people, technology and physical assets) screened via an AI-first lens
- Production automation and infrastructure that enables a smooth process for models to go from research and development to production
- Creativity and freedom for engineers to experiment but balanced against risk of shortcuts or embedded bias
AI is a deeply challenging transformation. Too many companies are expecting their technologists to do the work that would traditionally be done by diverse, cross-functional teams.
Best practice companies recognize that, in an AI transformation, culture is more important than technology. They skill up, empower and mobilize their people for success in the Age of AI.