CDO Blueprint: Partnering with the Business on Trusted Data, AI Use Cases & Business Outcomes
Lessons from a CDO Magazine panel
At a recent CDO Blueprint panel hosted by CDO Magazine, senior data leaders Andrew Foster (Chief Data Officer at M&T Bank), Sully McConnell (Head of Insurance at Snowflake), and Jay Limburn (Chief Product Officer at Ataccama) gathered to discuss what it really takes to build trusted data infrastructure in the age of AI. Here’s what stood out.
Every data leader has a story about bad data. It might be a first job validating trade records for nine dollars an hour, or a brand-new data platform where the first 17 queries all came back as zero. These aren’t just origin stories from the margins of the industry; they belong to some of its most experienced CDOs, and they all point to the same conclusion: data quality has always mattered, and it matters more now than it ever has before.
The panelists traced the infrastructure shifts and AI challenges reshaping what it means to be a modern CDO. The conversation covered substantial ground, and one theme ran through all of it: trust in data has become the infrastructure layer everything else depends on.
From centralized to federated: The operating model pendulum
The panel opened with a familiar tension: centralize everything and risk becoming a bottleneck, or federate everything and risk losing consistency. As Andrew Foster framed it, most organizations have swung between the two without quite landing on what works.
“Most organizations start off decentralized, and it doesn’t really work at scale,” he said. “You’re just inconsistent across every business function.” Centralization carries its own failure mode, though: often the result is a large central data team treated as a cost center while business units stop feeling accountable for their own data.
The panel agreed that the model showing the most promise is federated with strong central standards. Accountability for data quality sits in the business, where context actually lives, and the center provides tooling and governance within a common discipline. Foster’s rule of thumb is that business-aligned analysts should be 70% business expert and 30% data-disciplined, with his central team operating at the inverse ratio.
A more recent evolution the panel discussed is the shift from federating people to federating products. Rather than embedding analysts in business units, organizations are moving toward a model where specific teams own specific data products and are accountable for their quality, particularly as those products feed AI systems.
AI changes who (and what) consumes your data
For years, the primary consumers of data were people: analysts, executives, and dashboard users who would notice when something looked off and raise the alarm. That dynamic is changing quickly.
“The consumers are becoming the AI agents and the AI models,” said Jay Limburn. “And an agent that’s automating a business process doesn’t care if the data’s accurate or not. It’s just going to act on it.”
This shift has substantial implications. A business user who notices a dashboard looks “wonky” will flag it, but an AI agent processing insurance submissions or executing financial workflows will not. It will act on whatever data it receives, whether that data reflects reality or not.
Andrew Foster made this concrete with a personal example: using an AI assistant to research and buy a snowboard. The experience was positive because the data it worked with was reliable and the stakes were low. But translate that into a banking context where AI is making credit decisions or executing trades, and the trust requirements land in a completely different category.
The panel was unanimous on one point: the real differentiator in AI is not the model itself but the trust layer underneath it.
Data quality and observability: Complementary capabilities
One of the most useful frameworks to emerge from the conversation was the relationship between data quality and data observability, and why organizations need both working together.
As Jay Limburn explained, quality answers the what. Is the data accurate? Is it complete? Is this interest rate showing as a negative number when it shouldn’t be? Observability answers the when and the how. Is the data arriving on time? Is the volume consistent? Did something break upstream, leaving you with only half the loan applications you expected?
The two capabilities are distinct and complementary. A data quality program without observability can catch bad values but miss absent ones. Observability alone gives you pipeline health, but no signal on whether what flows through that pipeline is worth trusting.
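To make the distinction concrete, here is a minimal Python sketch, ours rather than the panel’s. The feed, field names, and thresholds are all hypothetical; the point is only that the two kinds of checks look at different things over the same data.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rows from an incoming loan-application feed.
records = [
    {"loan_id": "A-1", "interest_rate": 0.072, "loaded_at": datetime.now(timezone.utc)},
    {"loan_id": "A-2", "interest_rate": -0.01, "loaded_at": datetime.now(timezone.utc)},
]

# Quality answers the "what": is each value accurate and complete?
def quality_issues(rows):
    issues = []
    for row in rows:
        if row.get("interest_rate") is None:
            issues.append(f"{row['loan_id']}: missing interest rate")
        elif row["interest_rate"] < 0:
            issues.append(f"{row['loan_id']}: negative interest rate")
    return issues

# Observability answers the "when" and "how": did the feed arrive on time and in full?
def observability_issues(rows, expected_count=1000, tolerance=0.5,
                         max_age=timedelta(hours=1)):
    issues = []
    # Volume check: half the expected loan applications is a red flag
    # even if every row that did arrive is individually valid.
    if len(rows) < expected_count * tolerance:
        issues.append(f"volume drop: {len(rows)} rows vs ~{expected_count} expected")
    # Freshness check: stale data can pass every quality rule and still mislead.
    newest = max((r["loaded_at"] for r in rows), default=None)
    if newest is None or datetime.now(timezone.utc) - newest > max_age:
        issues.append("feed is stale: no rows within the freshness window")
    return issues

print(quality_issues(records))        # catches the bad value
print(observability_issues(records))  # catches the absent rows
```

Run against this toy feed, the quality check flags the negative interest rate while the observability check flags the missing volume; neither check alone would surface both problems.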
Sully McConnell added another dimension, drawing on an observation from a chief data scientist colleague in insurance. In complex agentic systems with multiple agents working in concert, drift becomes the enemy. Each individual agent might be performing within tolerance, while the cumulative drift across the system causes the whole thing to produce unreliable outputs. Without observability tooling capturing telemetry across the system, you will not see it coming until something has already gone wrong.
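A rough back-of-the-envelope illustration of why per-agent tolerances do not protect the whole system (our numbers, not McConnell’s):

```python
# Five agents in a pipeline, each drifting 2% from expected behavior.
# Every agent is individually within a 5% per-agent tolerance, yet the
# drift compounds because each agent acts on the previous one's output.
per_agent_drift = [0.02, 0.02, 0.02, 0.02, 0.02]
per_agent_tolerance = 0.05
system_tolerance = 0.05

# Every agent looks healthy on its own.
assert all(d <= per_agent_tolerance for d in per_agent_drift)

# Compounded across the pipeline, drift multiplies rather than averages.
compounded = 1.0
for d in per_agent_drift:
    compounded *= (1 + d)
system_drift = compounded - 1

print(f"system drift: {system_drift:.1%}")                      # ~10.4%
print("within tolerance?", system_drift <= system_tolerance)    # False
```

Five agents at 2% drift each compound to roughly 10% system-wide, double the tolerance, which is exactly the failure that per-agent monitoring alone cannot see.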
For data teams building toward AI-powered products and automated workflows, quality and observability need to be treated as core infrastructure, with the same seriousness as any other foundational investment.
Modernizing infrastructure for AI-ready data
The conversation shifted naturally to infrastructure, and more specifically, to the gap between what most organizations have built and what AI actually requires.
The data infrastructure most enterprises have in place was designed for human consumers. Dashboards, BI tools, and analyst query layers were built to answer questions that a person asked. AI agents have different requirements. They need data that is reliable, well-documented, and semantically clear, continuously monitored rather than merely available on demand.
This is where the “bridge” Jay Limburn described becomes critical. A great deal is happening at the top of the AI stack, with new models and interfaces arriving constantly, and just as much is happening in the familiar data space below. Connecting those two layers requires a trust infrastructure that most organizations are still building. CDOs who recognize this are focusing less on chasing tools and more on ensuring their data estate can support the systems they are being asked to deliver.
Andrew Foster described his approach at M&T Bank as identifying where he can “squeeze the juice out of existing investment,” finding places where better tooling and governance on the data layer unlocks business value without requiring a ground-up rebuild.
Upskilling for a world where the bar keeps moving
One of the more candid moments in the session came on the question of upskilling frontline teams and executives for an AI-enabled world.
The honest answer from the panel was that the bar is moving so fast, it is hard to know exactly what to teach. Tools that required SQL knowledge a year ago no longer do. Applications that took developers weeks to build can now be prototyped in hours through natural language. The technical barrier is dropping quickly, which means the real barrier is increasingly one of mindset and culture rather than capability.
“The inhibitor is the fear and the lack of curiosity to actually go and use these tools,” said Jay Limburn.
The practical recommendation from the panel was to meet people where they are and give them low-stakes opportunities to see the value before they are asked to trust it. Foster described M&T Bank’s data academy as a way to draw people toward that value, offering learning paths that range from intensive courses to lightweight workshops on tools like Copilot.
What matters is ensuring that the people making decisions with AI-assisted tools understand enough about the data behind them to ask the right questions.
The bottom line for CDOs
The panel closed with a speed round: What should every CDO prioritize over the next 12 months?
Sully McConnell recommended giving functional areas the tools to build data products, while holding them accountable for keeping those products clean for AI use.
Jay Limburn urged leaders to treat data quality as infrastructure rather than a project. You would not build a factory without connecting it to the electricity grid. Building AI without a trust layer carries the same structural risk.
Andrew Foster pushed CDOs to get hands-on with the tools themselves. A CDO who is personally experimenting with AI capabilities will guide their organization far more effectively than one who is only reading about them.
A common thread ran through all three answers: The organizations that will succeed with AI are the ones that invest in the foundation. Data quality, observability, and governance are what make AI adoption viable, and treating them as anything less than core infrastructure is a risk that will eventually surface.
As moderator Maribeth Achterberg noted in her closing remarks, today represents the least capable AI will ever be. Everything from here on becomes more autonomous and more consequential. The question for every CDO, then, is whether their data infrastructure is ready to support what is coming, or whether the next horror story is already in the pipeline.