Only 7% of businesses have an AI strategy. Are you one of them?
Probably not if this recent stat from Adobe is anything to go by.
But are you exploring AI, running pilots or even scaling AI solutions? Quite possibly, as the stats show that year-on-year growth in AI adoption is exponential.
And if you don’t have a strategy, there’s a fair chance that you haven’t considered any number of the following: your customers’ actual needs, your employees and culture, work systems, product/service impact, data capabilities or ethics.
Again, this isn’t uncommon.
Responsible AI Adoption
Over the past 6 months, we’ve run a number of AI Adoption and Maturity workshops to help clients self-identify where they sit on the AI Maturity index and to co-create frameworks for responsible AI Adoption.
We’ve worked with clients right across the spectrum of adoption; however, the commonality is that AI is often being developed in silos, whether by the data science, tech, marketing, innovation or individual product teams. As such, critical factors for a project’s success are often overlooked.
Taking the need for AI to align with the overarching business strategy as read, this article will touch upon the importance of due diligence being given to the customer, employee/culture and ethics.
The customer is still king
Much has been made of the desire to create friction-free, real-time products and experiences, and it’s fair to say that this has been further accelerated by integrating AI and machine learning. But it’s not always what the customer wants.
Working with psychologists, we’ve discovered that experiences can sometimes be too seamless, and that AI has been deployed in the wrong place within a product.
Finance is a great example of this. Machine learning in back-end processes means that customers can be scored and loans approved within seconds; however, such a rapid transaction can unnerve customers. There is an expectation that greater time would be required for due diligence before such a decision was made. A client I spoke with recently confirmed that tweaking the automated processing to delay the decision by a couple of hours greatly improved customer confidence in the business.
It’s fair to say that AI could greatly enhance most financial products, but it’s important to understand from customers where it will add most value for them. It’s an approach that should have been adopted by the investment company that developed its robo-advice product in isolation and recently announced its closure, losing millions in investment and numerous jobs.
The robots are coming!
News headlines about robots and dystopian futures, where jobs are fully automated, are creating distrust among employees, despite the fact that their homes and phones increasingly use micro AI to improve their lives.
Recent research from SAS and Forbes lays out the issue clearly.
“Nearly 20% identified ‘resistance from employees due to concerns about job security’ as a challenge to their AI efforts”.
Let’s not make the same mistakes that led to so many digital transformation failures over the past decade.
This means developing the right culture within a business to embrace AI is critical. AI needs to be defined, socialised and celebrated throughout the business. When framed correctly, and positioned so people understand that AI can take on the boring and repetitive tasks within their role, empowering them to spend more time on value creation, there are fewer barriers to adoption. It’s when AI is not only piloted in silos but also scaled in silos and rolled out across the business without proper thought to communication and onboarding that people become defensive and resistant to change.
Bias in, bias out
Ethics appears to be the area of our adoption framework that clients have most overlooked, even those that are very mature in their adoption, application and implementation of AI. This is despite the plethora of headlines over the past few years calling businesses out for ‘racist robots’ and perpetuating inequality, and the recent decision by San Francisco to ban facial recognition use by local agencies over concerns about both bias and privacy.
Ethics are of course a thorny issue and one that’s hard to get right. Google’s attempt ended disastrously, with its ethics council shut down before it even officially launched. Salesforce’s CEO’s top-down approach has, however, fared much better so far.
That said, businesses ignore ethics at their peril.
Mature AI businesses will have considered and agreed upon their own framework and ethical principles. They will have engaged ethicists during this time, having recognised the difference between compliance, law and ethics. They will have a publicly available policy and those experimenting with and implementing AI within the business will have undergone ethics training aligned to this.
The benefits of this approach are two-fold: we have also found that when employees recognise there is a policy in place, with red lines that won’t be crossed, there is again less resistance to adopting these new technologies.
So, whether you’re at ground zero when it comes to AI adoption or you are already scaling successfully piloted solutions, a broad and integrated framework is essential for success.
Don’t just think about the efficiencies that AI can bring or how it can enhance the customer experience, consider your broader business strategy, the talent pool you have, your products, work processes, customer needs, culture and ethics.
Get this right and very soon you’ll be one of the few with a robust, long-term AI strategy. And with McKinsey research out this week stating that British companies that fully incorporate artificial intelligence tools into their organisations could increase their economic value by 120% by 2030, that can’t be a bad thing.