Data first, AI second
AI can be the answer to many business challenges – but is, ultimately, worthless without good quality data – and the right data
With the heightened awareness of the ‘power of data’, and complex algorithms touted by some organisations as a panacea, I have outlined below a conversation typical of those I’m having regularly with my clients against the backdrop of that mindset.
One of my clients is a leading UK retailer, looking to leverage AI / machine learning and other tools to increase revenue by providing its customer management team with customer behaviour patterns, insights into when customers buy, and how the team can increase sales and customer engagement.
A meeting, over coffee, was requested with the client’s head of sales and it went like this:
Client: We have a challenge from our board to increase the “% of revenue from existing customers”. But the feedback we’re getting from our customers is that, as an organisation, we don’t offer a wide enough range of products, and the suggestions on alternative products do not appeal to our target demographic group. Meanwhile, our competition is using innovative technology and advanced analytics to identify the right product and offer compatible services, which is causing our customers to shift loyalty – for instance, offering an extended warranty on items purchased.
After attending an AI seminar recently, and having talked to my peers, I’m convinced that we should be using a cognitive/AI solution to help my team understand our customers in real time and propose products and services that are relevant to drive revenue. There are several companies providing this kind of solution and I am keen to act on this now. I’m worried that, if we don’t, we’ll be out of business. What do you think?
Me: Let’s separate your objectives from the solution – I don’t want to dwell on the business problem right now, but this is a good time for me to talk you through what you would need to do first to successfully deploy an AI / machine learning solution. And while there are several inputs required, there’s one thing that stands out for me – ‘data quality’. Let me elaborate on what we know – and have learnt – from our experience on similar projects:
- Any cognitive/AI technology requires data which is consistent, accurate, trustworthy, and complete, and which has integrity – amongst other qualities. The logic and complex algorithms of your solution are only as accurate as their source data. These solutions are costly to train, so if you want the solution to work as intended, data quality is critical.
- Every cognitive/AI project that I’ve led needs to start with an extensive data-assessment phase – and we often find that the available data is not good enough to support the AI technology working the way it’s meant to. In these cases, we need to create a separate data-cleansing workstream, whose sole objective is to ensure the base data set allows the cognitive/AI solution to function. Imagine a team of analysts trying to work out whether an outlier is a critical business discovery or an unknown, poorly handled data issue; worse still, consider real-time decisions being made by a system unable to distinguish between real data and poor data that has accidentally been fed into the process.
- Most organisations have multiple data sources, such as CRM, finance systems, webchat and social media channels, which are accessed to draw insights in real time – eg an inbound customer contact agent gets a 360-degree view of the customer, while in conversation, to recommend an alternative product, a discount on shipping or a money-off voucher to neutralise a negative experience. Many of my customers believe that a central data store is needed for this to happen – untrue – it’s perfectly OK to have different data sources, then to extract and transform the data into one or more data models to be consumed by the cognitive/AI solution.
- Data quality issues typically stem from poor software implementations, customisation to suit a function, or changes in processes and systems which inevitably impact data formats and more. Some of these are under your control and some are not – however, strong governance and active management are required to ensure that the integrity of the data is maintained.
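To make the first point concrete, the quality dimensions above – completeness, consistency of formats, and integrity of identifiers – can be checked with even a very simple profiling pass. This is a minimal sketch only; the field names, date format and rules are illustrative assumptions, not any client’s actual schema.

```python
# Minimal data-quality profiling sketch. Field names and rules are
# illustrative assumptions, not a real client schema.
from datetime import datetime

def profile(records, required=("customer_id", "email", "last_purchase")):
    """Report simple completeness, consistency and uniqueness counts."""
    issues = {"missing": 0, "bad_date": 0, "duplicate_id": 0}
    seen_ids = set()
    for rec in records:
        # Completeness: every required field must be present and non-empty.
        if any(not rec.get(f) for f in required):
            issues["missing"] += 1
        # Consistency: dates must parse in the agreed ISO format.
        try:
            datetime.strptime(rec.get("last_purchase", ""), "%Y-%m-%d")
        except ValueError:
            issues["bad_date"] += 1
        # Integrity: customer IDs must be unique across the extract.
        if rec.get("customer_id") in seen_ids:
            issues["duplicate_id"] += 1
        seen_ids.add(rec.get("customer_id"))
    return issues

records = [
    {"customer_id": "C1", "email": "a@x.com", "last_purchase": "2023-04-01"},
    {"customer_id": "C1", "email": "b@x.com", "last_purchase": "01/04/2023"},
    {"customer_id": "C2", "email": "", "last_purchase": "2023-05-12"},
]
print(profile(records))
```

Even a toy pass like this surfaces the three distinct failure modes – a missing value, an inconsistent date format and a duplicated identifier – which in a real engagement would each feed the data-cleansing workstream.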
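And to illustrate the point about not needing a central data store: extracting from each source system and transforming the results into a single customer model can be sketched as below. The source names and fields here are assumptions for illustration, not the client’s actual systems.

```python
# Sketch of combining records from separate systems into one customer
# view without a central data store. Source names and fields are
# illustrative assumptions.

crm = {"C42": {"name": "Jane Doe", "segment": "loyal"}}
finance = {"C42": {"lifetime_spend": 1840.50}}
webchat = {"C42": {"open_complaints": 1}}

def customer_view(customer_id):
    """Extract from each source and transform into one model
    that a cognitive/AI layer (or a contact agent) can consume."""
    view = {"customer_id": customer_id}
    for source in (crm, finance, webchat):
        view.update(source.get(customer_id, {}))
    return view

print(customer_view("C42"))
```

The sources stay where they are; only the transformed view is handed to the downstream solution, which is the pattern described above.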
Finally, for me, it’s the smell test – does your team trust the data? If they don’t, over time they will stop updating the data and using it for decision-making. The tell-tale sign is when functions create their own ‘cut’ of the data to use rather than accessing the core systems. For example, we once had a credit control team create a parallel client register, which was used to assign credit scores and manage collections. It was done because maintaining the list was easier, and so was reporting up the chain. It only came to light when the system was being upgraded.
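Shadow copies like that parallel register are usually caught by a periodic reconciliation against the core system. A minimal sketch, assuming two extracts keyed by client ID (the records and keys here are invented for illustration):

```python
# Minimal reconciliation sketch between a core system and a suspected
# parallel copy. Keys and values are illustrative assumptions.

core = {"A1": "score:720", "A2": "score:640", "A3": "score:580"}
shadow = {"A1": "score:720", "A2": "score:610", "A4": "score:700"}

def reconcile(core, shadow):
    """Flag records that exist in only one source or that diverge."""
    only_core = sorted(set(core) - set(shadow))
    only_shadow = sorted(set(shadow) - set(core))
    mismatched = sorted(k for k in set(core) & set(shadow)
                        if core[k] != shadow[k])
    return {"only_core": only_core,
            "only_shadow": only_shadow,
            "mismatched": mismatched}

print(reconcile(core, shadow))
```

Any non-empty result is the smell test failing in miniature: records diverging between the copy people actually use and the system of record.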
Sorry to be the naysayer, but it’s important that you are cognisant of the state of your data: how it’s stored, what is collected and – most importantly – its quality. Most of my clients are surprised by what they find. Others have gone as far as hiring a couple of data scientists to start experimenting, only to realise that the results are not as expected.
I recommend you create a joint task force comprising the business, marketing and IT, with senior stakeholders providing oversight, to conduct a diagnostic and plot a roadmap of known system changes. Armed with this knowledge, you should then evaluate tools that can help with your business challenge. It will be a much richer conversation when you can talk with confidence about what data you have and what you don’t.