Intelligent Payment Routing

Using a Multi-Armed Bandit Algorithm To Optimize Payments Performance

Multi-Armed Bandits a.k.a. Slot Machines

If you have an e-commerce website, you most likely spend countless hours finding the right mix of products, content, buttons, colors, and other variables that could improve your conversion rate, average transaction value, and ultimately your revenue. Where most merchants stop is payments.

For many years, the payments industry has led us to believe that there is very little we can do to optimize the performance of the checkout process. Most U.S.-based businesses only need to accept cards, ranging from the standard Mastercard and Visa to American Express, JCB, Diners/Discover, or UnionPay. But around the world, local payment methods have captured a large share of transactions, especially those happening online. In the Netherlands we have iDEAL, in Germany Sofort (Klarna), France has Carte Bancaire, China embraced Alipay, and Japan has Konbini. Globally there are now over 250 alternative payment methods, which makes it even harder for merchants to decide which payment options they should offer, let alone optimize for.

The global brand general purpose cards — Visa, UnionPay, Mastercard, JCB, Diners Club/Discover, and American Express — generated 295.65 billion purchase transactions for goods and services in 2017, up 18.0%.

Experimenting

Having worked in the financial services industry for over 14 years, I have always used data to help myself and others make better decisions, whether through descriptive analytics (what happened), diagnostic analytics (why it happened), predictive analytics (what is likely to happen), or prescriptive analytics (what action to take).

As a data scientist, I enjoy the final stages the most, because that is where I get to experiment. The first stages are necessary to figure out what happened, and even why, but being able to use math and algorithms to predict what is going to happen, and developing technology to automate the actions that need to be taken, is what really excites me.

So when I worked on a project for a large omni-channel retailer back in 2015, they struggled to decide what payment method mix to use. Over six months I researched their business, new and established payment methods, and the behavior of customers from the moment they landed on the website until they paid or decided not to. I created experiments to test different scenarios and optimized until I got the best payment mix. In hindsight the experiment was a great success: we increased the checkout conversion rate (the share of shoppers who click the payment button and actually succeed in paying for the transaction) from 91% to 95%. The added benefit was an additional 1 million euros in revenue and a projected 4 million euros in revenue over the next twelve months. Instead of staying with the company, I got an offer from the PSP that processed the merchant's transactions (they were the initiator of the project) to use my approach to help other large merchants get the same type of results.

Joining a hyper-growth company that focused on results more than perfect execution, I was able to run well over a hundred different analyses and experiments over a span of almost two years, producing descriptive, diagnostic, predictive, and prescriptive analytical results: calculating authorization rates, improving acquiring performance, and figuring out which variables in payment fields lead to a higher likelihood of a transaction being authorized. But out of all of them, helping merchants find the best payment mix stood out the most.

Thinking about this problem, and many others where running multiple experiments was the only way to get the answer, I stumbled upon the Multi-Armed Bandit Algorithm.

Multi-Armed Bandit Experiment

A Multi-Armed Bandit is a type of experiment where:

  • The goal is to find the best or most profitable action
  • The randomization distribution can be updated as the experiment progresses

The name “multi-armed bandit” describes a hypothetical experiment where you face several slot machines (“one-armed bandits”) with potentially different expected payouts. You want to find the slot machine with the best payout rate, but you also want to maximize your winnings. The fundamental tension is between “exploiting” arms that have performed well in the past and “exploring” new or seemingly inferior arms in case they might perform even better.
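The explore/exploit tension described above can be sketched with a simple epsilon-greedy strategy: most of the time pull the arm with the best observed payout, but occasionally pull a random arm to keep learning. The machine names and payout probabilities below are made up for illustration:

```python
import random

# Hypothetical slot machines and their (unknown to the player) win rates.
ARMS = {"machine_a": 0.70, "machine_b": 0.80, "machine_c": 0.75}

def pull(arm, rng):
    """Simulate one pull: 1 on a win, 0 otherwise."""
    return 1 if rng.random() < ARMS[arm] else 0

def epsilon_greedy(n_rounds=10_000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    wins = {arm: 0 for arm in ARMS}
    pulls = {arm: 0 for arm in ARMS}
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            # Explore: pick a random arm.
            arm = rng.choice(list(ARMS))
        else:
            # Exploit: pick the arm with the best observed win rate so far.
            arm = max(ARMS, key=lambda a: wins[a] / pulls[a] if pulls[a] else 0.0)
        wins[arm] += pull(arm, rng)
        pulls[arm] += 1
    return pulls

pulls = epsilon_greedy()
# The arm with the highest true payout attracts the bulk of the pulls.
```

With a 10% exploration rate, every arm keeps receiving a trickle of traffic, so a seemingly inferior arm still gets the chance to prove itself.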

Knowing that Google used them to run Content Experiments, I started thinking about how similar Payment Service Providers, and more specifically acquiring connections, are. Contrary to what most people might assume, the PSP or acquirer can have a big impact on the performance of the transactions being processed. Depending on variables like region, reputation, and data quality, card transactions on average have an 80% chance of being approved. Reasons for a decline can range from insufficient funds to "transaction not permitted to cardholder" or the catch-all "do not honor". With multiple PSP or acquiring connections, a Multi-Armed Bandit experiment can produce the best possible performance without manual interference.

How Bandits work

In essence, a Multi-Armed Bandit Algorithm starts out with multiple variations, which, based on an input, produce an output. Depending on performance, the fraction of traffic sent to each variation is automatically adjusted. A variation that performs better than the others is allocated a larger fraction of traffic, while the underperforming variations see their traffic reduced. Each adjustment is based on a statistical formula that uses the sample size and performance metrics together, to ensure that the changes reflect real performance differences and not just random chance. Depending on time and traffic, one or more of the variations will come out a winner, to which we award all the traffic, or we decide to run a new experiment. Unlike a traditional A/B experiment, a Multi-Armed Bandit experiment is about achieving and taking advantage of results while the experiment is running, rather than waiting until you have the results to decide which variation is best.
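One common way to implement this kind of automatic reallocation is Thompson sampling, which keeps a Beta posterior over each variation's success rate and routes each transaction to the variation with the highest sampled rate. A minimal sketch, using simulated PSP connections with hypothetical approval rates (the names and numbers are illustrative, not from the experiment):

```python
import random

# Hypothetical PSP connections and their true (unknown) approval rates.
PSPS = {"psp_local": 0.74, "psp_global": 0.66, "psp_backup": 0.59}

def route_transactions(n=50_000, seed=7):
    rng = random.Random(seed)
    # Track authorizations and declines per PSP; the posterior over each
    # PSP's approval rate is Beta(auth + 1, declined + 1).
    stats = {p: {"auth": 0, "declined": 0} for p in PSPS}
    for _ in range(n):
        # Sample an approval rate from each posterior and route the
        # transaction to the PSP with the highest draw.
        draws = {p: rng.betavariate(s["auth"] + 1, s["declined"] + 1)
                 for p, s in stats.items()}
        chosen = max(draws, key=draws.get)
        approved = rng.random() < PSPS[chosen]  # simulated acquirer response
        stats[chosen]["auth" if approved else "declined"] += 1
    return stats

stats = route_transactions()
# Traffic concentrates on the PSP with the highest observed authorization rate,
# while the others keep receiving enough volume to be re-evaluated.
```

Because the posteriors narrow as volume accumulates, strongly underperforming PSPs are starved of traffic automatically, with no manual threshold tuning.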

Applying it on Payments for Merchants

As with any algorithm, applying it to a real-life scenario gives us the opportunity to learn whether the results can be improved. By proposing the idea to a large merchant in the travel industry with multiple PSP connections, we were able to test it.

The first experiment focused on the Authorization Rate, a metric that within the payments industry provides feedback on the performance of a PSP. Knowing that getting statistically significant results would be challenging in regions where the differences are small, we decided to focus on countries where the Authorization Rates were between 60% and 70% and where other (not yet connected) PSPs were claiming they could reach Authorization Rates of around 80%. The two existing integrations were expanded to three. The logic needed to switch between the PSP connections was developed, as well as the ability to track performance.

As we tested the solution, we made various adjustments to ensure that traffic would only be redistributed after the results were statistically significant. Over a period of one month, we routed over one hundred thousand transactions, producing Authorization Rates of 74%, 66%, and 59% for the three PSPs, with the local PSP producing the highest performance.
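A significance check of this kind can be sketched with a two-proportion z-test, comparing the authorization rates of two PSPs before shifting traffic. The transaction split below is hypothetical, chosen only to match the reported 74% and 66% rates:

```python
import math

def two_proportion_z(auth_a, n_a, auth_b, n_b):
    """Two-proportion z-statistic for comparing authorization rates."""
    p_a, p_b = auth_a / n_a, auth_b / n_b
    # Pooled rate under the null hypothesis that both PSPs perform equally.
    p_pool = (auth_a + auth_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical volumes: 35,000 transactions per PSP at 74% vs 66% approval.
z = two_proportion_z(auth_a=25_900, n_a=35_000, auth_b=23_100, n_b=35_000)
significant = abs(z) > 1.96  # 95% confidence threshold
```

At these volumes an eight-point gap is overwhelmingly significant; at a few hundred transactions per PSP, the same gap could easily be noise, which is why the reallocation logic waits for significance before moving traffic.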

Other scenarios to try a Multi-Armed Bandit Experiment

Besides routing transactions to multiple PSPs, there are of course many other ways to use the Multi-Armed Bandit Algorithm. Within payments, routing transactions to different acquirers is the next best option, as is testing different fraud protection tools. Outside of payments, the options are endless, from testing content on your website to experimenting with your emails. Whenever you want to compare more than two variations and waiting until the end before making a decision is too costly, a Multi-Armed Bandit experiment should definitely be considered.

Thanks for reading 😉 If you enjoyed it, hit the applause button below; it would mean a lot to me and it would help others see the story. Let me know what you think by reaching out on Twitter or LinkedIn. Or follow me to read my weekly posts on Data Science, Payments, and Product Management.


Intelligent Payment Routing was originally published in Towards Data Science on Medium.