For years, retailers have been trying to mitigate the effects of inherent bias or unintended discrimination in their physical shopping experiences. And while no one would claim the problem has been solved entirely, many retailers are now taking steps to make sure their customers aren’t profiled by the way they look, who they’re with, or how they dress or act when they walk into a store.
But with shopping becoming an increasingly digital experience, retailers must confront a new and perhaps less familiar challenge: digital bias. Instead of combating prejudice or unconscious bias among frontline workers, retailers must now look to eliminate bias in their own data, in the algorithms built on that data, and in how both are used in their digital practices.
New retail, new risks
This is a growing issue. More and more shopping is moving online, a trend that was supercharged by the massive digital acceleration seen during the pandemic. At the same time, retailers are looking to ramp up their abilities to personalize their offers and interactions — searching for that sweet spot of understanding that builds a stronger and more profitable bond with a customer.
What’s more, retailers face a more competitive digital arena in the search for net-new customers, putting enormous pressure on marketing spend and the cost of customer acquisition. The reality is that it will cost more to win the next generation of VIPs, which is why retailers are so sensitive about how they target. With analytics and data from the many touchpoints customers leave behind as they use their devices and make purchases, one would think getting this right would be easy.
The big picture is that the number of digital (or digitally enabled) touchpoints with customers is expanding rapidly — and so are the opportunities for digital bias to emerge. Consider the growing use of artificial intelligence. As machine learning algorithms are embedded into ever more retail experiences, the risks associated with biased or incomplete training data escalate sharply. Think, for example, of an interactive digital skin care experience trained on a third-party dataset that, unbeknownst to the retailer, was massively skewed toward lighter skin tones. The risks of unintended discrimination or offence are obvious.
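The kind of skew described above is often easy to surface before a model is ever trained, simply by profiling the labels in the dataset. The sketch below is purely illustrative — the label names and the 10% threshold are assumptions, not a standard, and real audits would use richer demographic metadata:

```python
from collections import Counter

def label_balance(labels):
    """Return each label's share of the dataset, most common first."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.most_common()}

def flag_underrepresented(labels, min_share=0.10):
    """Flag labels whose share falls below a chosen minimum threshold."""
    return [label for label, share in label_balance(labels).items()
            if share < min_share]

# Hypothetical skin-tone annotations for a third-party training set
annotations = ["light"] * 850 + ["medium"] * 120 + ["dark"] * 30

print(label_balance(annotations))        # {'light': 0.85, 'medium': 0.12, 'dark': 0.03}
print(flag_underrepresented(annotations))  # ['dark'] — under 10% of the data
```

A check this simple would have caught the hypothetical skin-tone skew before the experience ever reached a customer; the harder organizational question is making such checks a mandatory gate in the procurement of third-party data.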
Or what about personalized marketing based on purchase history? Here, outdated or simplistic presumptions in category demographics risk leading retailers down the wrong path — whether it’s the woman who wears a blazer designed for men, the man who buys foundation to cover a blemish, or the shopper who simply wants gender-neutral products. Thinking outside of traditional category norms is increasingly critical, both in ensuring you’re marketing to the right people, and not causing offence by making the wrong assumptions about customers.
Strategies to combat digital bias
There are significant risks in getting it wrong. At best, mistakes will annoy and alienate customers — and risk losing their trust and any chance of a repeat purchase. At worst, the impact of digital bias can be genuinely offensive or even discriminatory. So it’s a problem that urgently needs to be solved.
However, the sheer number of opportunities for digital bias to creep into retail experiences means there’s no simple fix here. Instead, it’s about developing a holistic set of strategies and a framework for the responsible use of AI across the business.
There are several different aspects to think about here.
Process and people. It’s important to establish clear ethical standards and accountability, grounded in fairness, transparency and explainability. Retailers might consider bringing a chief ethics officer into the C-suite to provide oversight. They should also ensure their people are intimately involved in the process — this “human plus machine” combination can act as a critical sanity check on what an automated solution is doing.
Design. When creating a new digital solution or AI-powered experience, retailers should understand and apply ethical design standards from the start. That includes having mechanisms to ensure training data for machine learning is inclusive. It also means accounting for data security and building in data privacy by design.
Transparency. Retailers should consider transparency as a way of maintaining customer trust. That might include, for example, being open and honest about when artificial intelligence is being used and explaining which data points have led them to make a particular recommendation or offer to an individual. Bringing customers into the process, gaining their trust, and being transparent in designing solutions that work for everybody is key.
Partners. Retailers will often use a partner to develop and maintain AI-driven algorithms and solutions, especially where they lack their own skills in advanced data science. But if an algorithm doesn’t perform as expected and/or offends a customer, it’s the retailer’s reputation on the line. It’s vital to choose partners wisely, ensuring they adhere to the same corporate values and purpose as the retailer’s own brand.
Monitoring. It’s important to keep a rigorous check on how a digital solution is performing once it’s up and running with customers — even more so where it contains self-learning AI components that evolve the experience over time. Retailers should be running regular audits of all algorithmic solutions against key bias and security metrics.
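One widely used audit metric for this kind of monitoring is disparate impact: the ratio of the lowest group’s positive-outcome rate to the highest group’s. The sketch below is a minimal, assumed illustration — the group labels, the audit log format, and the “four-fifths” red-flag threshold are conventions, not a prescription from the authors:

```python
def selection_rates(outcomes):
    """Positive-outcome rate per group, from (group, selected) pairs."""
    totals, positives = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Lowest group selection rate divided by the highest (1.0 = parity)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: (customer group, received the promotional offer?)
log = ([("A", True)] * 80 + [("A", False)] * 20 +
       [("B", True)] * 50 + [("B", False)] * 50)

ratio = disparate_impact_ratio(log)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.62 — below the common 0.8 red-flag line
```

Run on a schedule against live recommendation or offer logs, a metric like this turns the vague goal of “regular audits” into a concrete, trackable number — though any threshold should be set with legal and ethics input, not by engineering alone.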
Ultimately, a retailer should be aiming for an approach that is honest, fair, transparent, accountable and centered around human needs. Given how widespread the use of data and AI now is across so many aspects of retail, this kind of principles-based approach is the best way to ensure we build experiences that are truly inclusive for all customers across all shopping channels.
About the authors: Jill Standish is senior managing director and global head of retail, and Joe Taiano is managing director and consumer industries marketing lead, at Accenture.