It’s no surprise that 99% of survey respondents said they struggle with technical data quality - things like missing fields, stale values, and duplicates are still rampant. But the real issue is how many teams have resigned themselves to the problem. A staggering 42% of respondents said their data was “good enough,” and 11% even went so far as to call it “excellent.”
That kind of complacency is dangerous, especially when 71% of the same group admitted that their data issues are actively preventing them from executing more effectively. Whether it’s lead scoring, segmentation, or campaign personalization, these operations all suffer under the weight of incomplete or incorrect data.
Camela pointed out that in some companies, lead scoring models were built on as many as 15 variables - but half of them were rarely or never populated, and blank values converted at a higher rate than expected, skewing results entirely. When AI or advanced automation is layered on top of flawed data, it only amplifies those issues.
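The audit Camela describes is easy to run yourself. The sketch below (hypothetical field names and records, not data from the survey) checks two things for a scoring input: how often it is actually populated, and whether blank records convert differently than populated ones - the skew that quietly rewards missing data.

```python
# Hypothetical lead records; "converted" marks whether the lead closed.
records = [
    {"industry": "SaaS",   "employees": 120,  "converted": False},
    {"industry": None,     "employees": None, "converted": True},
    {"industry": "Health", "employees": None, "converted": False},
    {"industry": None,     "employees": None, "converted": True},
]

def fill_rate(records, field):
    """Share of records where the field is populated."""
    populated = sum(1 for r in records if r.get(field) is not None)
    return populated / len(records)

def conversion_by_blank(records, field):
    """Conversion rate for blank vs. populated records on one field."""
    blank  = [r for r in records if r.get(field) is None]
    filled = [r for r in records if r.get(field) is not None]
    rate = lambda rs: sum(r["converted"] for r in rs) / len(rs) if rs else 0.0
    return rate(blank), rate(filled)

print(fill_rate(records, "employees"))  # low fill rate -> weak scoring input
blank, filled = conversion_by_blank(records, "industry")
# If blank > filled, the model is effectively scoring on missing data.
print(blank, filled)
```

Running this across all 15 scoring variables before (re)building a model surfaces exactly the half-empty fields Camela warned about.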
👉 For more insights, read Data quality: The key ingredient in a data-driven strategy.
Data problems aren’t just technical - they’re cultural. The survey highlighted major disconnects between leadership and operations teams:
Without top-down accountability, even the best RevOps strategies will stall. Jeff shared how a lack of executive sponsorship played out at one client that pushed ahead with territory planning despite only 40% CRM data completion. When asked what quality bar they needed to hit, the client replied “85 to 90%” - a goal they had never invested in achieving.
Leadership inertia also leads to “magic elf” thinking, as Camela put it - the belief that someone else will solve the data problem without resources or cross-functional ownership.
👉 Learn more in Poor data quality is sabotaging your GTM. Here’s how to fix it.
Everyone wants to jump on the AI bandwagon, but most organizations are hitting the brakes once they realize their foundation isn’t ready. The survey showed that the biggest barriers to AI adoption include:
Ali emphasized that launching AI without the right infrastructure - like clean buying unit data or accurate persona tagging - sets you up for failure. Without foundational understanding of who your buyers are and how they buy, generative AI tools will produce misguided or irrelevant outputs.
Jeff shared examples of product orgs wanting to use AI for feature-level revenue impact analysis and customer behavior modeling, but struggling due to fragmented product usage data and inconsistent CRM inputs.
Bottom line: AI works best when it enhances existing workflows, not when it's expected to solve broken ones.
👉 Check out AI will expose your data quality issues.
Poor data quality doesn’t just affect day-to-day execution - it derails long-term planning too. Jeff walked through the complexity of annual planning, where everything from addressable market analysis to territory carving and sales comp design depends on accurate data.
If your segmentation data is off or your product usage metrics are incomplete, you risk assigning the wrong accounts to reps, forecasting inaccurately, or missing the mark on quota setting.
Jeff emphasized starting the planning process earlier - ideally July or August for calendar fiscal years - and ensuring your core data infrastructure supports decisions about growth levers, sales capacity, and GTM strategy.
Failing to do this doesn’t just delay territory rollouts or comp plans - it erodes rep trust and reduces your team’s ability to execute at scale.
👉 Explore 2025 revenue planning: forecasting, pipeline management, and automation.
One of the easiest ways to fall into the “good enough” trap is by assuming that buying a data provider will fix your problems. Ali and Camela both cautioned against relying too heavily on a single vendor.
Ali shared that even with access to high-quality account-level data through Definitive Healthcare, Health Catalyst still needed supplemental contact data and validation tools. She stressed building your own “data recipe” and recognizing that no vendor will cover everything.
Camela added that many enrichment providers do “bakeoffs” with sample records. She recommended testing the same 100 accounts across providers and asking hard questions about how data is sourced, updated, and owned.
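A bakeoff like this reduces to a simple comparison on the same sample. The sketch below (provider names and records are illustrative) scores each vendor on match rate; you'd extend the same loop to field-level accuracy against records you've verified by hand.

```python
# The same sample of accounts is sent to every provider under test.
sample = ["acct-1", "acct-2", "acct-3", "acct-4"]

# Each provider's enrichment results; None means no match returned.
provider_results = {
    "provider_a": {
        "acct-1": {"phone": "555-0100"},
        "acct-2": {"phone": "555-0101"},
        "acct-3": None,
        "acct-4": {"phone": "555-0103"},
    },
    "provider_b": {
        "acct-1": {"phone": "555-0100"},
        "acct-2": None,
        "acct-3": None,
        "acct-4": None,
    },
}

def match_rate(results, sample):
    """Share of sampled accounts the provider returned a match for."""
    matched = sum(1 for acct in sample if results.get(acct) is not None)
    return matched / len(sample)

for name, results in provider_results.items():
    print(name, match_rate(results, sample))
```

Holding the sample constant is what makes the numbers comparable - and it is also the moment to press vendors on sourcing, refresh cadence, and data ownership.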
Jeff echoed the importance of layering multiple tools to triangulate accuracy - especially when selling to niche markets like dog grooming businesses, where most contacts have Gmail addresses and little online presence.
👉 Learn more in A roadmap for your data enrichment strategy.
One of the most common frustrations for RevOps pros? Being treated like a human API. Camela described the experience of being peppered with ad hoc reporting requests from execs who haven’t aligned on definitions or strategy. It’s a symptom of reactive RevOps.
Jeff challenged teams to move beyond this by elevating their strategic posture: anticipating executive needs, packaging insights (not just metrics), and establishing a core set of sanctioned reports that reduce noise and build trust.
“When you walk into a meeting and sales and marketing are quoting different pipeline numbers, the meeting is a waste,” Jeff said. “You need to align on source-of-truth dashboards and get to insights faster.”
Ali reinforced the value of influencing up and driving executives to align on a data dictionary. Even if leadership doesn’t want to do the work, RevOps can serve as a facilitator to get everyone speaking the same language.
👉 See How to benchmark operational data quality in RevOps.
If bad data is limiting your segmentation, scoring, forecasting, and AI initiatives, then “good enough” is no longer good enough.
Here’s what you can do:
The future of GTM is agile, data-driven, and AI-enabled - but only if your data is up to the task.
👉 Looking for more great content? Check out Openprise’s blog and the RevOps Co-op blog and join the RevOps Co-op community.