How Data and Analytics Can Help Retailers SmartTest Across the Chain
Rick Muldowney, chief analytics officer at digm, explains why testing, along with solid data and analytics, can help retailers avoid costly technology disasters and generate high ROI.
When retailers want to try something new with their stores, whether it’s pricing, in-store displays, or offer formatting, they don’t want to blindly roll it out across all of their locations and hope for the best.
First, they can try to predict consumer response by turning to market research, asking questions and tracking responses on a five-point scale. Unfortunately, the reality of the marketplace, where so many physical retailers are struggling to survive, can produce a very different response. That gap is a risk that has cost many retailers dearly over the years. They may also decide to try the idea in a few stores and see how it goes. That’s a start. But a better approach is an experiment-based process in the market – a SmartTest – that measures and captures effective tactics at selected locations to generate a high-ROI investment.
Test, measure and apply
It sounds simple: instead of guessing, implement your plan in selected locations, then determine next steps based on your results. But harnessing this opportunity also means using data and analytics to understand and predict customer response across all kinds of variables.
There are a lot of questions that need to be answered. What do all of the brand’s locations look like across the chosen variables? What are the differences? How do they compare to the locations you are testing? Do you want to test only in certain pockets or more widely across the entire brand? It’s worth finding out, as establishing that baseline can give you important clues about what is moving the needle for traffic and sales, and can guide capital spending.
Sometimes it’s black and white: after testing new decor and displays in some stores, one fast-food chain saw positive results and moved to a chain-wide rollout. Having tested and learned first, it was able to move forward with a much higher degree of confidence.
Test results by location can also lead to a mixed approach, for many reasons. Not all stores and outlets are the same, and the differences can clearly guide marketing choices. Are they franchises or corporate locations? Do budget constraints require you to focus on where you get the most bang for your buck? Often a staged rollout makes more sense, and even then different stores may benefit from different treatments for the foreseeable future.
The goal may be to test an offer. Let’s say an auto service retailer offers oil changes. The price point rises with the quality of the oil, so the goal is to get the customer to trade up. The same goes for upselling tires or other products and services.
Often there is a physical element, such as an in-store display, reinforcing that push. How does it perform at certain locations? How do the customers who respond to it compare to those who don’t? Are there socio-economic differences? Do some locations have more bays or are they inherently more geared toward tire sales? You can track all of these factors to see what is affecting oil-change sales, and change the trajectory of customer behavior, first at those locations and then down the chain.
Suppose you have a thousand auto service outlets and set up a structured test with a subset of one hundred. You build the prototype, place it in those locations, and measure the impact. Are people trading up more? Is there more revenue, more profit? Did the test generate additional sales you weren’t expecting? What is the ROI of the prototype, and how long will it take to break even? You can compare the test against a control to see how it performed against your KPIs. You can also find out why it didn’t work by comparing these variables across certain types of locations, so the results keep guiding you toward smarter decisions.
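The test-versus-control arithmetic described above can be sketched in a few lines. This is a minimal illustration with made-up numbers; the sales figures, prototype cost, and margin rate are all assumptions, not data from the article.

```python
# Minimal sketch of test-vs-control measurement for a store experiment.
# All figures and parameter names below are hypothetical.

def measure_test(test_sales, control_sales, prototype_cost, margin_rate):
    """Compare average weekly sales in test vs. control stores, then
    estimate per-store break-even time and first-year ROI of the prototype."""
    avg_test = sum(test_sales) / len(test_sales)
    avg_control = sum(control_sales) / len(control_sales)
    lift = avg_test - avg_control                # incremental weekly sales per store
    weekly_profit = lift * margin_rate           # incremental weekly profit per store
    breakeven_weeks = (prototype_cost / weekly_profit
                       if weekly_profit > 0 else float("inf"))
    first_year_roi = (weekly_profit * 52 - prototype_cost) / prototype_cost
    return lift, breakeven_weeks, first_year_roi

# Hypothetical: 100 test stores averaging $12,400/week vs. 100 controls at $11,800/week
lift, weeks, roi = measure_test(
    test_sales=[12_400] * 100,
    control_sales=[11_800] * 100,
    prototype_cost=5_000,    # assumed one-time display cost per store
    margin_rate=0.30,        # assumed 30% margin on incremental sales
)
```

With these assumed numbers, a $600 weekly lift at a 30% margin pays back the $5,000 prototype in under 28 weeks, the kind of KPI comparison the test-versus-control design makes possible.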
Then, and this is where the magic really happens, similar processing and algorithms can find stores not included in the initial test that look like the test stores. This comparison group is designed to provide an apples-to-apples benchmark. You take everything you’ve learned into account, then apply what worked to reproduce the successful strategy and desired customer behavior, rolling it down the chain with a much higher degree of confidence and profit.
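One common way to build such a look-alike comparison group is nearest-neighbor matching on standardized store attributes. The sketch below assumes that approach; the store names, attributes (traffic, median income, service bays), and values are all hypothetical, and the article does not specify which algorithm digm uses.

```python
# Sketch of finding "look-alike" stores by nearest-neighbor matching
# on standardized (z-scored) attributes. All data is hypothetical.
import math

def zscores(values):
    """Standardize a list of numbers to mean 0, standard deviation 1."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values)) or 1.0
    return [(v - mean) / sd for v in values]

def find_lookalikes(test_store, candidates, k=3):
    """Rank candidate stores by Euclidean distance to the test store
    in standardized attribute space; return the k closest."""
    names = [test_store[0]] + [c[0] for c in candidates]
    features = [test_store[1]] + [c[1] for c in candidates]
    # Standardize each attribute column across all stores
    std_cols = [zscores(list(col)) for col in zip(*features)]
    std_rows = list(zip(*std_cols))
    target = std_rows[0]
    dists = [(names[i], math.dist(target, std_rows[i]))
             for i in range(1, len(std_rows))]
    return sorted(dists, key=lambda t: t[1])[:k]

# Attributes: (weekly_traffic, median_income_k, service_bays) — assumed
test = ("Store 17", (950, 62, 6))
pool = [
    ("Store 203", (940, 60, 6)),
    ("Store 310", (400, 45, 3)),
    ("Store 422", (1000, 65, 6)),
    ("Store 508", (300, 90, 2)),
]
matches = find_lookalikes(test, pool, k=2)
```

Here the two high-traffic, six-bay candidates surface as the closest matches, giving the apples-to-apples benchmark the article describes.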
Just as brands can target look-alike customers, marketers can recognize unique point-of-sale attributes that accurately reflect the synergy – and the selling opportunities – between the many moving parts of a point of sale and its customers and prospects. After identifying these attributes, the brand can associate them with other locations.
Banks can also approach this operationally, for example when deciding where to offer after-hours service. They build a control group of locations so the comparison is a proper one: this is what happens if we do it versus if we don’t, controlling for all other relevant factors, from wealth measures to travel times and customer density. For them, as for all of these other retail operations, it’s a smart way to hedge their bet instead of gambling on what merely seems like a good idea everywhere.
SmartTests like these will not only come much closer to the optimum than a ‘forced march’ through the chain; they will also make the retailer smarter, now and in the future, about the relationship between its locations and its customers.
Rick Muldowney is director of analytics at marketing agency digm.