We’ve been using machine learning for many years at Flywire – both internally and in the products we build. It’s a big driver of the patient payment experience in our healthcare platform – from identifying the most effective communication strategies to use with payers to recommending payment plans based on capacity to pay. Internally, machine learning also helps automate and ease manual parts of our accounting and finance processes, including reconciliation and forecasting.
But could we build a machine learning algorithm that played Wordle?
Finding relevant use cases can be a major impediment to applying machine learning in a thoughtful way. At first glance, building a Wordle bot may not seem super relevant in that regard. But it is.
When we built an algorithm to get the Wordle (yes, every time), we were able to explain what can be a challenging concept to understand through a very relatable and familiar example. This helps colleagues form mental models of how machine learning can be applied in practice. When they better understand it, there’s a higher likelihood it surfaces as a solution to a problem in a product we’re building, or a challenge they’re facing in their own role. And they’re more likely to come to us for help.
That’s a major way our analytics team overcomes a big hurdle to thoughtful machine learning use – finding use cases that drive business value. I’ll share how we’re having success surfacing machine learning use cases at Flywire in a collaborative way.
Introduce machine learning technology in familiar ways – and don’t be afraid to get technical
We used reinforcement learning to build the Wordle algorithm, and I wrote about how it worked on our internal blog. By explaining how reinforcement learning played the familiar game, we gave our colleagues a basic framework for understanding how the technology works. That helps them more clearly think about how it can be applied to specific business challenges that only they understand. This bridges the knowledge gap between product owners who are subject matter experts and data scientists who are experts in machine learning.
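To make the idea concrete, here is a deliberately tiny sketch in the spirit of that approach – an epsilon-greedy bandit (one of the simplest forms of reinforcement learning) that learns from reward feedback which opening guess scores the most correct-position letters. The word lists and reward function are illustrative assumptions; this is not Flywire's actual Wordle algorithm.

```python
import random

# Toy sketch only: learn a good Wordle opener by trial and error.
# Reward = number of letters guessed in the correct position ("greens").
WORDS = ["crane", "slate", "zzzzz", "audio", "trace"]   # candidate openers (illustrative)
ANSWERS = ["crate", "slant", "ratio", "trade"]           # hidden answers (illustrative)

def reward(guess, answer):
    # Count letters in the correct position.
    return sum(g == a for g, a in zip(guess, answer))

def train(episodes=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    totals = {w: 0.0 for w in WORDS}   # cumulative reward per opener
    counts = {w: 0 for w in WORDS}     # times each opener was tried
    for _ in range(episodes):
        answer = rng.choice(ANSWERS)
        if rng.random() < epsilon or not any(counts.values()):
            guess = rng.choice(WORDS)  # explore: try a random opener
        else:
            # exploit: pick the opener with the best average reward so far
            guess = max(WORDS, key=lambda w: totals[w] / max(counts[w], 1))
        totals[guess] += reward(guess, answer)
        counts[guess] += 1
    return max(WORDS, key=lambda w: totals[w] / max(counts[w], 1))

best_opener = train()
```

The real bot has to reason over sequences of guesses and feedback, not a single opening move, but even this stripped-down version shows the core loop: act, observe a reward, and shift future behavior toward what worked.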
We combine writing with talks – and have started holding company-wide workshops to explain machine learning in relatable ways, giving everyone a chance to see, learn and ask questions. Demos are key! We strip away the intimidation factor, but also offer up enough technical detail to make it educational and actionable.
When we provide the technical side and invite the business insight, good things happen. There are problems and opportunities that only colleagues and their teams are familiar with – and that we would never know about if they didn’t bring them to us for open conversation. This style of education provides a framework to break down silos and begin to surface valuable use cases.
Regularly encourage teams to bring ideas
As I mentioned, when users have a broad understanding of the ways in which machine learning can be applied, they begin to think about it as a solution to challenges in the product or internally. We bring together data scientists, data engineers, software engineers and the SMEs in these areas. We actively create opportunities for input through company-wide meetings, smaller vertical forums and Slack channels. We are consciously fostering a culture of yes – making it clear that if we’re not collecting the data needed to train algorithms today, we can start instrumenting those areas and fill the gaps in preparation for the applications of tomorrow.
Make sure you have a process for determining whether ML is a good fit
When a use case is brought to us, we run it through a simple workflow to determine whether machine learning can help. This helps us narrow and focus potential use cases. We ask things like:
- Does it fit a standard machine learning application?
- Do we have sufficient training data?
- Can we do this using a traditional software approach?
- What is the impact of including this in customer experience?
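For illustration, the screening questions above could be encoded as a simple gate. In practice this is a conversation rather than a script, and the field names and data threshold below are assumptions for the example, not Flywire's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    fits_standard_ml_pattern: bool  # e.g. classification, ranking, forecasting
    labeled_examples: int           # size of the available training data
    solvable_with_rules: bool       # would traditional software logic suffice?
    customer_facing: bool           # does it touch the customer experience?

def worth_prototyping(uc: UseCase, min_examples: int = 1000) -> bool:
    # min_examples is an illustrative threshold, not a real cutoff.
    if not uc.fits_standard_ml_pattern:
        return False
    if uc.labeled_examples < min_examples:
        return False  # not enough data yet: start instrumenting and revisit
    if uc.solvable_with_rules:
        return False  # prefer the simpler, more maintainable approach
    # Customer impact shapes how carefully we roll out, not whether we start.
    return True
```

Note that "can we do this with traditional software?" acts as a veto: if a rules-based solution works, it usually wins on simplicity and maintainability.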
Celebrate early iterations of projects
We recently launched a machine learning-enabled payment method recommendation engine, and debuted it to the entire company on an internal video conference. By showing our colleagues real applications of the technology, we build knowledge and spark ideas. We received dozens of questions after our talk on this project, and a few colleagues shared innovative ideas for applying the technology in tangential use cases we hadn’t yet envisioned.
Don’t lob it over the fence to engineering to build it all
This might be the most important tip. In the traditional data science function, we develop and evaluate machine learning models. Many teams stop there and run into major hurdles getting their models into production. When we develop a model at Flywire, we don’t just push it over to engineering to figure out how to run it in production. We deploy it, and we make it easy for engineering teams to work with by providing an API and all the necessary documentation so it can be easily integrated into the product. And we don’t just say, “here are the docs.” We help with debugging and evaluation throughout the engineering process.
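To show what that hand-off contract might look like, here is a minimal sketch of a model served behind a documented request/response interface. Everything here is hypothetical – the route, payload shape, and stand-in model are invented for illustration and are not Flywire's actual API.

```python
import json

def fake_model_score(features):
    # Stand-in for a trained recommender; a real deployment would load
    # a serialized model. Scores here are hard-coded for the sketch.
    methods = {"card": 0.6, "bank_transfer": 0.3, "wallet": 0.1}
    return sorted(methods.items(), key=lambda kv: -kv[1])

def recommend_handler(request_body: str) -> str:
    """Hypothetical endpoint: POST /v1/payment-method-recommendations

    Request body:  {"payer_id": "...", "features": {...}}
    Response body: {"payer_id": "...", "recommendations":
                    [{"method": "...", "score": ...}, ...]}
    """
    payload = json.loads(request_body)
    ranked = fake_model_score(payload.get("features", {}))
    return json.dumps({
        "payer_id": payload["payer_id"],
        "recommendations": [{"method": m, "score": s} for m, s in ranked],
    })
```

The point is the shape of the hand-off: engineering integrates against a documented contract, while the data science team owns the model behind it and can retrain or swap it without breaking callers.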
Machine learning is not a magic tool we can use to solve every problem. But it is a powerful and important one for us to consider, champion, and support within our organizations to make sure it is on the table as a way to solve the right problems for our customers.
Chad Lieberman is a Principal Data Scientist at Flywire. He has 15 years of experience in data science and computational engineering. Chad holds a PhD and MS from MIT.