
SQL 301: Fact Checking Your Machine-Learning Forecast Models

This RevOps Co-op webinar features Jeff Ignacio, Owner of RevOps Impact, Ryan Garland, CTO & Co-Founder of Infer, and Freddie Hammond, Senior Director of RevOps at Pinpoint. They discuss when a machine learning model is appropriate, how to test the strength of the model, and when to try a different approach.

When is machine learning appropriate for analyzing data?

There are a lot of opinions about what makes a good lead, customer, or expansion opportunity. Machine learning models are fantastic at removing bias from this type of analysis. By objectively identifying signals that correlate to a desired result, machine learning can cut through guesswork and elevate the signals that really matter. 

When trying to determine the likely accuracy of a machine learning model, many RevOps professionals have traditionally looked at sample size or volume of data. But contrary to popular belief, the efficacy of a machine learning model has more to do with the strength of the signals than with the sheer volume of data.

Let’s say that your company sells digital advertising products to car dealerships. You have a special deal with Ford because they want consistent branding and features across all of their websites. As a result, Ford dealerships are 80% more likely to buy your product than a Toyota dealership. You also find that dealerships closer to major metropolitan areas are more likely to follow Ford’s directive, increasing the likelihood of purchase by an additional 20%.

Even if you only have 50 customers out of 21,000 dealerships in the United States, machine learning would be an effective way to elevate the leads most likely to convert, because these factors correlate strongly with closed opportunities.
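To make this concrete, here is a minimal sketch of what that kind of lead-scoring model could look like. The file name, column names, and model choice are all hypothetical assumptions for illustration, not the panelists' implementation:

```python
# Hypothetical sketch: score dealership leads from a small set of closed-won accounts.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Assume a CRM export with one row per dealership; "converted" marks the ~50 customers.
deals = pd.read_csv("dealership_accounts.csv")  # hypothetical file

# Strong, simple signals: which OEM the dealership carries and metro proximity.
features = pd.get_dummies(deals[["primary_oem"]]).assign(
    miles_from_metro=deals["miles_from_metro"]
)
labels = deals["converted"]

# class_weight offsets the 50-vs-21,000 imbalance; AUC checks ranking quality.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
print(cross_val_score(model, features, labels, cv=5, scoring="roc_auc"))

model.fit(features, labels)
deals["lead_score"] = model.predict_proba(features)[:, 1]  # rank leads by likelihood
```

Even with a lopsided class balance, a simple model like this can surface Ford dealerships near metros ahead of everything else, which is exactly the behavior described above.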

But machine learning isn’t ideal for all analyses. Let’s say your company sells marketing software to B2B companies, the use cases are very broad, and you don’t have a clear ICP (ideal customer profile). Fifty customers won’t provide enough information for lead targeting because you don’t have consistent industries, technographic data, or other firmographic data.

“In practice, I usually say you really need at least a sample size of maybe a hundred if you're going to do lead scoring, but it depends.” - Ryan Garland

Jump to the clip to hear more about when a machine-learning algorithm is most appropriate.

What is “good” data quality for machine learning?

Ask anyone at your company how they define your ICP and they’ll have a strong opinion. Unfortunately, your systems don’t necessarily contain the data points that employees tout as strong ICP signals, and the data may even contradict common beliefs about your ICP.

It’s crucial to understand which signals will be ingested by the model and which signals are likely correlated with a positive outcome. Pressure test your assumptions and model outputs to determine whether the signals you’ve provided meaningfully correlate with the results your company is trying to achieve.
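One lightweight way to run that pressure test, assuming a CRM export with an outcome flag (all file and column names below are hypothetical), is to compare each candidate signal directly against the outcome before trusting the model:

```python
# Hypothetical sketch: sanity-check candidate signals against the actual outcome.
import pandas as pd

accounts = pd.read_csv("accounts.csv")  # assumed export with a boolean "won" column

# Win rate by categorical signal, e.g. the dealership's primary OEM.
print(accounts.groupby("primary_oem")["won"].agg(["mean", "count"]))

# Simple correlation of numeric signals with the outcome.
for col in ["miles_from_metro", "annual_revenue", "sales_volume"]:
    print(col, accounts[col].corr(accounts["won"].astype(float)))
```

Signals that show no separation here are unlikely to help the model, no matter how strongly people believe in them.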

When it comes to using machine learning for lead scoring or forecasting probability, some fields are more important than others. If you’re selling websites to Ford dealerships, it makes sense to buy a list of all car dealerships in the U.S. and lock down account creation. You should have lead-to-account matching in place so that if someone requests a demo, they can be matched with the appropriate dealership and routed to the correct salesperson. 
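As a rough illustration of lead-to-account matching (the accounts, fields, and matching rules below are assumptions, not a prescribed implementation), an inbound demo request could be matched by email domain first, then by company-name similarity, and routed to the account owner:

```python
# Hypothetical sketch: match an inbound demo request to a dealership account.
from difflib import SequenceMatcher

accounts = [
    {"name": "Downtown Ford of Austin", "domain": "downtownfordaustin.com", "owner": "alice"},
    {"name": "Hill Country Toyota", "domain": "hillcountrytoyota.com", "owner": "bob"},
]

def match_lead_to_account(lead_email: str, lead_company: str) -> dict:
    domain = lead_email.split("@")[-1].lower()
    # 1. An exact email-domain match is the strongest signal.
    for acct in accounts:
        if acct["domain"] == domain:
            return acct
    # 2. Otherwise fall back to fuzzy company-name similarity.
    return max(
        accounts,
        key=lambda a: SequenceMatcher(None, a["name"].lower(), lead_company.lower()).ratio(),
    )

acct = match_lead_to_account("gm@downtownfordaustin.com", "Downtown Ford")
print(f"Route demo request to {acct['owner']} for account {acct['name']}")
```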

It’s also critical to understand which makes of cars each dealership sells (OEM data) and how close each dealership is to a major metropolitan area. Finally, you should estimate the likelihood of the dealership being influenced by Ford based on the volume of cars it sells or the total revenue it brings in.

Your critical data points would include:

  • Primary OEM
  • Volume of sales
  • Annual revenue
  • Zip code
  • Miles from a major metro
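Here is an illustrative sketch, with hypothetical values and an assumed distance threshold, of how those data points might be turned into model-ready features:

```python
# Hypothetical sketch: convert the critical data points into features for lead scoring.
import pandas as pd

raw = pd.DataFrame([
    {"primary_oem": "Ford", "sales_volume": 1200, "annual_revenue": 48_000_000,
     "zip_code": "78701", "miles_from_metro": 4.0},
])

features = pd.get_dummies(raw, columns=["primary_oem"])          # OEM as one-hot flags
features["is_near_metro"] = features["miles_from_metro"] <= 25   # assumed 25-mile threshold
features = features.drop(columns=["zip_code"])                   # zip only feeds the distance calculation
print(features)
```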

If you’re uncertain about your ICP and have a wide range of customers, it’s important to collect data to bolster or negate a hypothesis about your ICP. If your company sells to marketers and suspects that Marketo license holders are more likely to buy your product, obtain technographic data through a provider like BuiltWith and compare that information with your customer data.
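A hedged sketch of that comparison (file names and columns are hypothetical; BuiltWith’s actual export format will differ) could look like this:

```python
# Hypothetical sketch: test whether Marketo license holders convert at a higher rate.
import pandas as pd

tech = pd.read_csv("builtwith_export.csv")   # assumed columns: domain, has_marketo
crm = pd.read_csv("crm_accounts.csv")        # assumed columns: domain, won

merged = crm.merge(tech, on="domain", how="left").fillna({"has_marketo": False})
print(merged.groupby("has_marketo")["won"].agg(["mean", "count"]))
# A materially higher win rate for has_marketo=True supports the hypothesis;
# similar rates suggest the signal is weaker than assumed.
```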

“We used machine learning for expansion opportunities and churn analysis. The model showed us a couple of things. Some of the core features or key usage indicators were actually quite predictive of renewal outcomes and somewhat predictive of churn risk. One churn risk was that the champion user was no longer using our product.” - Jeff Ignacio

Jump to the clip to hear more about data quality.

How to sell your company on machine learning

Machine learning can be incredibly powerful, but it’s human nature to doubt results that contradict personal beliefs about what makes a good lead, account, or opportunity. The fastest way to sow seeds of doubt is to roll out a lead-scoring mechanism without collecting feedback from your sales organization or educating them about the findings of your model. 

Involve stakeholders early on in the process and adhere to change management best practices. Analyze and test your model against commonly held assumptions and communicate how and why the model differs from those assumptions. Finally, set up feedback processes and explain how any feedback is either disproven or incorporated into the model.

“One of the big missions of RevOps is getting everybody bought into being data-led. Make sure everybody knows what's going on and what actions contribute to our core objectives. We have a very public go-to-market model and use a tool called Runway, which gives us the ability to share the model. Everybody sees what contributes to that model.” - Freddie Hammond

Jump to the clip to hear more about how to build trust in a machine-learning model.

This super bright panel covered so much more, including which machine learning models they recommend and which data points they use for customer analysis. Check out the video at the top of the page for the full recording!

Looking for more great content? Check out our blog and join the community.
