ModelOp CTO: Strong Governance Is Essential for Agentic AI

With the rise of agentic AI, strong governance will help organizations manage security and accuracy. ModelOp CTO Jim Olsen describes enterprise agentic AI risks and solutions.
Podcast transcript:
Jason Lopez: How do companies control, govern, and deploy AI safely at scale? When Jim Olsen, the Chief Technology Officer at ModelOp, talks about AI adoption, the theme of what he says rests on the idea of restraint.
Jim Olsen: Try not to just build things for the heck of it. That’s why you have to tie all this back to a use case to understand, what is my goal and what does success look like?
If you don’t have a clear process in place that everybody can understand and follow, then you lose that trust, and the solutions just don’t happen. That’s, of course, a missed opportunity.
If you have a truly resilient agentic system, it can adapt to new business needs, new things that just pop up. Given that autonomy, how do you actually control and make sure it doesn’t disclose all your user passwords?
Jason Lopez: This is the Tech Barometer podcast. I’m Jason Lopez. This story is part of our ongoing thought leader series with people on the cusp of AI technological development, like Jim Olsen of ModelOp, a platform that helps govern, monitor, and manage AI and machine learning models to ensure they are compliant, reliable, and aligned with regulatory and ethical standards. Agentic AI is the operating model that sets direction, plans the work, and brings team members together to achieve a larger goal. When you talk to Jim Olsen, before he gets into the AI conversation, he’s laser focused on why you need it in your organization. What’s the use case?
Jim Olsen: If you can actually get agents to automatically do that stuff and do it reliably, it obviously increases the success of your business at its core. That’s why it really comes down to what is your business value, what is your use case, and the success is going to look very different based on that. Ultimately, my business is successful, everyone’s happier, and I’ve reduced my overall costs.
Jason Lopez: But that promise only holds if the system performs as expected. The models need to be trusted and safe.
Jim Olsen: The nature of generative AI is that it does go out and perform differently based on very minor changes or even sometimes no changes at all. If you don’t have some insight into that, naturally, people are concerned and paranoid about what could happen. You need that clear, transparent process in place in order to build that trust that we can see what’s going on. We do know we’ve put the research in behind this to make sure it’s going to behave okay.
Jason Lopez: Olsen says that while a clearly defined use case is the bedrock of an organization’s deployment of AI, another critical part of the strategy has to be managing AI’s behavior.
Jim Olsen: When you use these tools or allow people to use these tools, what impact could that have? What kind of information could go out? What kind of information could come in? What kind of damage could be done? So you need an approval mechanism in place to actually do that.
You do need some automated process in order to scale this in an appropriate manner, because what you really need to know is, okay, what are the use cases out there that need to use agentic AI? Is it appropriate for them to be using agentic AI? Then what pieces are they using? What tools? What model? How many tokens are they actually using? Are you getting your value back out of your investment in these areas? You do need to track all of this information. You do need some approvals in place to make sure you have that process to do that. You can try to do it on a spreadsheet or something like that. We see that quite often, but we find that really gets lost.
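The inventory Olsen describes, which use cases exist, whether they're approved, and what models, tools, and tokens they consume, can be sketched as a simple structured record. This is a hypothetical illustration of the kind of tracking he contrasts with spreadsheets, not ModelOp's actual data model; all field names here are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for one agentic AI use case -- the kind of
# information Olsen says needs tracking and approval, rather than living
# in a spreadsheet. Field names are illustrative assumptions.
@dataclass
class UseCaseRecord:
    name: str
    approved: bool = False                       # passed the approval process?
    models: list[str] = field(default_factory=list)
    tools: list[str] = field(default_factory=list)
    tokens_used: int = 0
    monthly_value_usd: float = 0.0               # estimated business value returned
    monthly_cost_usd: float = 0.0                # spend on models and tools

    def roi(self) -> float:
        """Value returned per dollar spent (0.0 if no spend recorded yet)."""
        if self.monthly_cost_usd == 0:
            return 0.0
        return self.monthly_value_usd / self.monthly_cost_usd

# Example: is this use case getting its value back out of the investment?
triage = UseCaseRecord(
    name="support-triage",
    approved=True,
    models=["some-llm"],
    tools=["search_kb", "create_ticket"],
    tokens_used=1_200_000,
    monthly_value_usd=3000.0,
    monthly_cost_usd=1000.0,
)
print(f"{triage.name}: ROI {triage.roi():.1f}x")
```

Aggregating records like this across an organization is what makes the "are we getting our value back?" question answerable at all.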
Jason Lopez: One of those mechanisms is MCP, or Model Context Protocol, an open protocol designed to let AI systems securely connect to external tools, data sources, and services in a standardized and predictable way.
Jim Olsen: But now you introduce these MCP tools and true agents that can act autonomously. How do you ensure that they only have access to the tools and data that they’re supposed to? That’s where one piece we’re providing comes in, something we saw was lacking: an MCP gateway slash proxy, tied directly into our overall automated approval process, that only allows specific use cases to use specific tools and blocks access to tools otherwise, so that way you know what they’re being used for. As A2A (agent-to-agent communication) gets in there, we’re going to see even more pieces that need to go in and monitor and understand what’s going on.
Jason Lopez: Olsen says he’s seen cases of MCP tools that have access to a company’s proprietary information. What ModelOp does is essentially act as a control plane for AI across an organization. It provides insight into what MCP tools are doing and where they’re enabled, spanning agents, AI models, workflows, and governance.
Jim Olsen: What we’ve done at ModelOp is we’ve actually created that kind of a gateway or a proxy where you can deploy approved tools and actually monitor what use cases in your organization use those, and are they approved to use those tools, and block them if they aren’t, and put protections in place that can do things like detect PII and say, hey, you can’t send PII out of our company, these kinds of things. You can get some control around these MCP tools and understand their usage.
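The gateway behavior Olsen describes, allowing only approved use cases to call approved tools and blocking outbound payloads that appear to contain PII, can be sketched in a few lines. This is a minimal illustration of the idea, not ModelOp's implementation; the use-case names, tool names, and PII patterns are all assumptions, and a production gateway would use a proper PII detection service.

```python
import re

# Hypothetical allow-list: which MCP tools each approved use case may call.
# Names are illustrative, not ModelOp's actual configuration.
APPROVED_TOOLS = {
    "invoice-processing": {"read_ledger", "summarize_document"},
    "support-triage": {"search_kb", "create_ticket"},
}

# Toy PII patterns (US-SSN-like numbers and email addresses). A real
# gateway would rely on a dedicated PII detection component.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-shaped number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def gateway_check(use_case: str, tool: str, payload: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed MCP tool call."""
    allowed = APPROVED_TOOLS.get(use_case)
    if allowed is None:
        return False, f"use case '{use_case}' is not approved for agentic AI"
    if tool not in allowed:
        return False, f"tool '{tool}' is not approved for '{use_case}'"
    for pattern in PII_PATTERNS:
        if pattern.search(payload):
            return False, "payload appears to contain PII; blocked"
    return True, "approved"

# An approved call passes; an unapproved tool or a payload with PII is blocked.
print(gateway_check("support-triage", "search_kb", "printer error 0x50"))
print(gateway_check("support-triage", "delete_user", "user id 42"))
print(gateway_check("invoice-processing", "read_ledger", "mail jane@example.com"))
```

Putting this check in a proxy between agents and their tools is what turns an approval decision on paper into something actually enforced at call time.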
Jim Olsen: Reducing costs, increasing customer satisfaction, increasing accuracy of processes, et cetera, are all kind of standard business goals that a true agentic system can deliver on by being adaptable.
Jason Lopez: Jim Olsen is the CTO of ModelOp, a software platform that helps organizations govern, manage, and scale AI systems responsibly. This is the Tech Barometer podcast. I’m Jason Lopez. We’ve got some other great stories in our thought leader series on AI. Check out our profile of David Kanter, co-founder of ML Commons. That’s at theforecastbynutanix.com.