Testing is critical to understanding the needs of your customer. Tests can be simple, like asking your users to choose between different task flows on an app. Or they can be complex, like telling an entire country they will no longer be able to use a credit card on your site, then seeing what people there do about it. That second test, if we can believe an article from Quartz, is what Amazon UK is currently doing in a game of chicken with Visa about credit card fees.
If the article’s assertion is accurate—and I have no insight into whether it is—one thing is certain: it’s an example of user testing at a massive scale that is, in some ways, awe-inspiring.
First, some background on what’s going on: In November, Amazon announced it would stop accepting Visa credit cards as of today, Jan. 19. Its reason: every credit card transaction costs money, and the interchange fees Visa charges—1.5% in the UK, paid through the merchant’s bank, versus a legal cap of 0.3% in the EU—were, from Amazon’s point of view, too high.
“The cost of accepting card payments continues to be an obstacle for businesses striving to provide the best prices for customers,” an Amazon spokesperson told Bloomberg News.
According to The Verge, MasterCard raised its fees in the UK to exactly the same level, yet Amazon has taken no such stance against that network.
Why? The data may be the answer. Maybe. If we treat this whole episode as a giant test, consider the data Amazon had access to and what its data scientists could infer from user behavior:
- Overall purchase behavior in the UK compared to past years. 2021 was still an odd year for retail data collection given the pandemic, 2020 was certainly an anomaly from a data perspective, and this announcement came as the Delta variant wave was cresting.
- Overall purchase behavior in the UK as compared to the rest of the world. Again, Covid-19 makes results wonky, but less so when compared to the year-over-year data.
- Changes in purchase behavior before the announcement and after, such as whether customers in the UK switched from one credit card to another or a different form of payment, then comparing that behavior to what happened in the rest of the world.
- The number of customer service requests asking for help changing payment options, or for outright account cancellations (a number, I suspect, so low it hardly bears mention).
- Increases or decreases in big-ticket purchases that might require users to carry a balance. Interpreting the reasons behind any shift may be harder given the pandemic, as some customers may have had extra money to spend from cancelled vacations and the like.
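The before/after comparison in these bullets is essentially a difference-in-differences design: measure the change in Visa’s share of UK transactions around the announcement, then subtract whatever change occurred in the rest of the world over the same window. A minimal sketch of that logic, with invented function names and toy numbers (Amazon’s actual analysis is, of course, not public):

```python
# Hypothetical difference-in-differences sketch. All data below is
# invented for illustration; it does not reflect real Amazon figures.

def visa_share(transactions):
    """Fraction of transactions paid with a Visa credit card."""
    if not transactions:
        return 0.0
    return sum(1 for t in transactions if t["method"] == "visa_credit") / len(transactions)

def diff_in_diff(uk_before, uk_after, world_before, world_after):
    """Change in Visa share in the UK, beyond the change seen elsewhere."""
    uk_change = visa_share(uk_after) - visa_share(uk_before)
    world_change = visa_share(world_after) - visa_share(world_before)
    return uk_change - world_change

# Toy data: each record is one transaction.
uk_before    = [{"method": "visa_credit"}] * 40 + [{"method": "other"}] * 60
uk_after     = [{"method": "visa_credit"}] * 25 + [{"method": "other"}] * 75
world_before = [{"method": "visa_credit"}] * 40 + [{"method": "other"}] * 60
world_after  = [{"method": "visa_credit"}] * 38 + [{"method": "other"}] * 62

effect = diff_in_diff(uk_before, uk_after, world_before, world_after)
print(f"Estimated announcement effect on Visa share: {effect:+.2f}")
```

The rest-of-world baseline is what makes this more defensible than a simple before/after comparison: it absorbs seasonal and pandemic-driven swings that hit every market, leaving the UK-specific shift attributable to the announcement.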
Finally, after crunching all of the data and getting the answers it needed, Amazon changed its mind. Perhaps not enough UK customers bothered to change their default credit card settings. Maybe the negative feedback was too loud. Or Amazon UK simply decided it had made its statement and could go back to business as usual. Regardless, on January 18, one day before it was supposed to flip the switch, Amazon announced it would continue to accept Visa credit cards after all.
So how can we use this case as a way to be more creative about testing?
- Think about your audience. Most of the rest of us aren’t as big as Amazon and will likely need to work at finding the right subjects for each test. The testing pool won’t be as broad as “lives in England and shops with a Visa card,” so finding the right people may require some creativity.
- Timing matters. Visa and MasterCard will be raising their rates in other parts of the world later this year, so there was some urgency in collecting data on user behavior before that change took effect. By timing the experiment to cover the five weeks before the busiest shopping season of the year and the four-week lull that generally follows, Amazon could get remarkable insight into how shopping habits changed. Any test should be time-bound to keep results contained.
- Some insights are accidental. Amazon may have been looking specifically for the purchase data but undoubtedly surfaced other insights that will feed its continuous improvement of the customer experience. In a recent look at the data from a livestream show on our BLive Music app, for example, we captured how many people attended, how long they stayed, and other basic information. However, we also learned that the only people who used the chat feature or clapped (direct audience feedback for our performers — something they definitely asked for and appreciate) had been to previous shows. Not a single person turned on a camera so the performer could see their face. Do we eliminate these features or educate the audience on how to use them?
- Consider the ethics of such an experiment and how they figure into product or service design. Checking how many people turned on their cameras seems innocuous enough—but if someone turned on a camera and then immediately turned it off, should we dig further to find out why? Similarly, if you’re the largest store on earth and many of your customers are low to middle income, do you tell them they can no longer shop with the card they rely on? How does that affect their habits? Can experiments like this take customer anxiety into account?
Any user test will have different quantitative and qualitative aspects to it. Both are incredibly important, and both should lead to stronger, easier-to-use products that can be inclusive of everyone. That’s why testing, at scale when possible, is so important! If we talk only to ourselves, we build only for ourselves. And that seriously limits anyone’s potential.