While some companies think true AI can’t be explained, Empathy believes both retailers and customers have the right to understand why they’re seeing what they’re seeing. For retailers, explainable AI leads to better decision making and customer understanding. For shoppers, it builds greater trust with the brand and improves the shopping experience.

Add to that the legal and regulatory requirements retailers need to follow, and understanding why AI is doing what it’s doing has never been more important. 

So, let’s dig a little deeper.  

The first step towards explainable AI? Datasets

The backbone of all AI is the dataset it’s been trained on. Being able to explain where your dataset has come from is the first step towards explainable AI. 

When it comes to datasets, two types of models are used most frequently: foundation models, which are trained on huge amounts of data scraped from the public internet, and domain-specific models, which are trained on data from a specific domain, such as your own website.

AI built on foundation models has access to huge amounts of data; however, it also carries a number of ethical risks: whether the datasets violate reasonable expectations of privacy, whether consent for the use of the data has been given, whether the foundation model includes data from minors, and more. When using any dataset, we need to be aware of consent integrity and ensure we have valid consent from the true owner of the data.
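To make that concrete, here is a minimal sketch of how dataset provenance and consent could be recorded and checked before training. The names and structure are purely illustrative assumptions, not part of any particular platform.

```typescript
// Illustrative sketch: recording dataset provenance so its origin can be explained later.
// All names here are hypothetical, not any specific platform's API.

type DatasetOrigin = "foundation" | "domain-specific";

interface DatasetRecord {
  id: string;
  origin: DatasetOrigin;        // how the data was sourced
  source: string;               // e.g. "public web crawl" or "retailer.example product catalogue"
  consentVerified: boolean;     // valid consent from the true data owner confirmed?
  containsMinorsData: boolean;  // flagged during review
}

// Only allow training on datasets whose provenance and consent can be explained.
function approvedForTraining(datasets: DatasetRecord[]): DatasetRecord[] {
  return datasets.filter((d) => d.consentVerified && !d.containsMinorsData);
}

const catalogueData: DatasetRecord = {
  id: "catalogue-2024",
  origin: "domain-specific",
  source: "retailer.example product catalogue",
  consentVerified: true,
  containsMinorsData: false,
};

console.log(approvedForTraining([catalogueData]));
```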

What explainable AI means for merchandisers 

For retailers, understanding how AI influences what customers see is vital. Knowing what’s happening behind the scenes allows for better business decisions moving forward. 

It’s important for any merchant using AI to be able to trust that its results are in line with business objectives, which is why explainability is key.

Explainable AI ensures merchants can be certain the algorithm is working with them, not against them. An example of explainability would be related tags in the Empathy Playboard, where merchandisers can see which tags have been generated by AI and which have been created by a human.
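As a rough sketch of what that kind of visibility could look like under the hood, each tag can carry a record of its origin. The shape below is an assumption for illustration, not the actual Playboard data model.

```typescript
// Illustrative sketch: labelling each related tag with its origin so merchandisers
// can see at a glance what was generated by AI and what was created by a human.
// Hypothetical shape, not the real Playboard data model.

type TagOrigin = "ai-generated" | "human-created";

interface RelatedTag {
  label: string;
  origin: TagOrigin;
  createdBy?: string; // merchandiser name when human-created
}

const relatedTags: RelatedTag[] = [
  { label: "party dress", origin: "human-created", createdBy: "merchandising team" },
  { label: "party bags", origin: "ai-generated" },
];

// A merchandiser view could simply group tags by origin.
const aiTags = relatedTags.filter((t) => t.origin === "ai-generated");
const humanTags = relatedTags.filter((t) => t.origin === "human-created");

console.log({ aiTags, humanTags });
```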

What explainable AI means for customers

Let’s imagine a customer adding a party dress to their cart. Behind the scenes, the AI has inferred that they may also be interested in party bags. But rather than being automatically shown party bags, which could add confusion, the customer is shown a prompt offering the option to search for them.

This prompt is explainability designed into the shopping experience. The customer understands that their original search for a party dress is what led to the suggestion of party bags, which adds transparency and builds trust.
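One way to picture the logic behind that experience: the AI’s inference produces a suggestion, and the suggestion is surfaced as a prompt that carries its own explanation, rather than silently reshaping the results. The sketch below is an assumed illustration of that pattern, not Empathy’s implementation.

```typescript
// Illustrative sketch: turning an AI inference into an explainable prompt
// instead of silently injecting results. Hypothetical names throughout.

interface Inference {
  trigger: string;     // what the customer did
  suggestion: string;  // what the AI inferred they might want next
}

interface ExplainablePrompt {
  message: string;     // what the customer sees
  reason: string;      // why they are seeing it
}

function buildPrompt(inference: Inference): ExplainablePrompt {
  return {
    message: `Would you like to search for ${inference.suggestion}?`,
    reason: `Suggested because you ${inference.trigger}.`,
  };
}

const prompt = buildPrompt({
  trigger: "added a party dress to your cart",
  suggestion: "party bags",
});

console.log(prompt.message); // "Would you like to search for party bags?"
console.log(prompt.reason);  // "Suggested because you added a party dress to your cart."
```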

Retailers can benefit hugely from designing AI explainability into the shopping experience. For customers, explainable AI is about understanding personalisation, understanding where results are coming from, and understanding why we’re seeing what we’re seeing.

A future of explainable AI

AI explainability is a challenge that lies at the intersection of technology, law, and customer experience. For retailers, it’s a practical challenge that affects the very core of business operations.

In an era where customers are increasingly tech-savvy, and regulations increasingly stringent, we can’t afford to keep AI in a black box and pretend it can’t be understood. We need to innovate, offer transparency, and focus on our customers.

The quest for AI explainability isn’t just about compliance; it’s about building the future of shopping.