ECI 2026 report shows strain between AI innovation and IT governance

March 30th, 2026 | 17:20
 
Subscribe:
Connected Social Media - iTunes | Spotify | YouTube | Twitter | RSS Feed
Tech Barometer - iTunes | Spotify | RSS Feed
 

The 2026 Enterprise Cloud Index, a survey of IT professionals, reveals tension between the need for IT oversight and the reality of easy-to-build-and-deploy containerized apps. Demand for AI capabilities is driving up shadow IT use, forcing IT teams to manage more risks.

Get tech leader insights to move faster and smarter.

Get more stories by subscribing to The Forecast.

Podcast transcript:

Jason Lopez: This is the Tech Barometer podcast. I’m Jason Lopez. Nutanix released its eighth annual Enterprise Cloud Index this month, and two of the key findings stood out. Containers are rapidly becoming the foundation of how modern applications are built and run, and Shadow AI is spreading through organizations largely unmanaged. We talked about this issue of Shadow AI with NAND Research chief analyst Steve McDowell.

Steve McDowell: You’re like, “You know what? These IT guys, they’re out of their mind. I need to use this because it’s going to help me and I’m just going to pull the trigger and make the decision.” Yeah, sure. The employee’s getting a lot of benefit from this engagement, but you have no idea it’s happening. I don’t know where it’s happening, and I don’t know what data’s being exposed and where that data’s going. I think we see this in kind of every big technology transition. Users are going to make their own decisions. Your employees are going to make their own decisions about the technology they use. It may or may not intersect with your corporate guidelines for IT. I remember when smartphones entered the world, that caused a lot of consternation among enterprise IT teams, because how do I manage these devices? And they even coined a word, right?

BYOD, bring your own device. That was a hot topic for several years until Apple came out with kind of enterprise management tools and things smoothed over. And cloud kind of did the same thing a half a decade later. We’re doing that now with AI. I mean, AI is so beneficial. Now, scrolling through LinkedIn and reading all the AI-generated slop, it’s not clear everybody knows how to use it, but they’re using it.

[Related: AI’s Next Wave]

Jason Lopez: The Enterprise Cloud Index is a snapshot of an industry in the middle of a profound transition, and it raises a question worth asking. What exactly is driving all of this, and where does it lead? Dan Ciruli has watched this transformation from the inside. As the cloud native product leader at Nutanix, he spends his days talking to organizations navigating this shift firsthand. Ken Kaplan, editor-in-chief of The Forecast, interviewed him and asked him to step back and survey the arc of technology he’s lived with for more than a decade, and what he’s hearing now from the people on the front lines.

Dan Ciruli: I think it is safe to say at this point that containers in general and Kubernetes in particular are, I’m going to say it out loud, the de facto standard for developing and deploying new applications. It is no longer something that is a science experiment, which it was eight years ago. It’s no longer just a viable option, which it might have been three or four years ago. I would say these days, building your application to be packaged in containers and deployed on Kubernetes is the de facto standard for how applications are being developed. And I’m pretty comfortable saying that out loud.

Ken Kaplan: What are you seeing the success and struggles now in this new period compared to the early days? How have things changed?

Dan Ciruli: One thing that people are wrestling with is that when it was in that phase that it was more of a science experiment, or just some of the new applications, you had certain developers who would lean in and certain developers who were not affected by it. And now I think we’re reaching the stage that everybody needs to be comfortable with it. Everybody needs to be ready, whatever application you’re working on, for the next time they ask you for a thing: “Okay, it better be packaged up in a container. We better have some Kubernetes running wherever it is we need to deploy that application.” In becoming the de facto standard, it means it’s no longer just a pocket of the organization doing Kubernetes. Now we’re in the phase that everybody needs to be comfortable talking about it, using it, deploying it, running things on it, SREing on it.

Ken Kaplan: You’re talking to more customers probably than ever before. You would like to talk to more. What are you learning from them?

Dan Ciruli: One of the things we’re trying to help our customers with as they approach that transition is how they standardize. When you have pockets of people leaning into a technology, some might lean in a slightly different direction than others. Some people might be making a certain architectural choice, a certain security choice, using a certain technology, which works for that team but might not work for the organization as a whole. And as you adopt this technology wholesale across the organization, then you have to think about, well, how do we standardize? What policies do we want to set that apply to everybody? How do we centralize this so that this can be run by a centralized team rather than by individual teams? So it’s a big transition. And as I say, it’s become the de facto standard.

And I say that with confidence, but many organizations are still figuring out how that will affect them and how they, as an organization, do that in an efficient way that gives them the benefits that developers want: the ability to innovate quickly, deploy frequently and scale as needed.

Ken Kaplan: And do you see a correlation, or is it similar to what happened with cloud, when people could go out and use a credit card and get compute services for the first time? And then there were all these projects going on and IT didn’t know about them. Do you see some similarities?

Dan Ciruli: I had never thought about it that way, and I think that is a fantastic way to look at it. Cloud was very much the same way. I was part of a team that got frustrated with internal IT and the length of time it was taking them to get us literally just VMs, and the amount they were going to charge us back. And we said, “Somebody on your corporate card, go to Amazon and start using this stuff.” Yes. So I think Kubernetes started the same way, where a team was like, “Hey, I don’t want to deploy in VMs. I’m just going to start using Kubernetes.” And then another team did, and then another team did. At some point, someone higher up and more central in the IT organization said, “Wait, there’s way too much of this going on. This is not efficient. We’ve got different teams.

We’re spending too much money on this. We’re duplicating effort all over the place. And from a security posture, we’re at risk.” So someone centrally said, “We need to, for all of those reasons, we need to centralize this. We need to standardize this and we need to do this in a way that does give the teams what they want, but does it in a way that doesn’t put us at risk financially or from a security perspective?” That’s an excellent analogy.

[Related: Ecosystem Scorecards Help CIOs Avoid Vendor Lock-In]

Ken Kaplan: Yeah. You hear about smart cloud strategy now. It used to be cloud first, now smart, but those people who have lived through that transition and are getting smart about their use of cloud, those are probably lessons that they can be applying to Kubernetes and containers. Why would the companies benefit now from saying, “That stuff’s useful. Let’s do it here together on this platform that can do the old and the new.”

Dan Ciruli: So the interesting thing about me saying very strongly that this is the de facto standard for how applications are written: that doesn’t change history, and it doesn’t change the fact that essentially every enterprise has decades’ worth of applications running in virtual machines that are for the most part going to stay in virtual machines. That isn’t changing. And while the new stuff is all containerized, the old stuff was virtualized and will continue to be virtualized. Very few of those apps will be rewritten to be containerized, which means as you centralize operations for all of this, now you’ve got one IT team that is going to be, for the next decade or two decades, responsible for running tons of VMs and a growing number of containers that at some point will outnumber the VMs, but those VMs will still be there. Companies have a choice. They can build up two separate groups of people, pieces of hardware, networking strategies, security technologies to manage those, in which case they’re building literal silos in data centers of which hardware can run which applications, building silos in their organizations of which people can work on which applications, and actually hampering integration.

You might have a new application that needs to get at an old database, an old piece of data that is in a virtual machine. And if you’re building those things entirely separately, that’s a challenge every time you do it. Whereas companies that say, “Let’s combine these on one platform” allow themselves to run essentially any application. It doesn’t matter if it’s virtualized or containerized, you can run it in the same locations. It means the same people can run that infrastructure, whether it is running virtualized or any combination of containerized applications. And it means you can do things like set security policies, backup policies, disaster recovery and networking policies. All of those can be consistent, and it becomes much easier to interoperate between the new and the old. So I think it’s a really strong driver for companies to invest in ways to run all of their applications, both containerized and virtualized, in a way that makes sense.

[Related: Clouds With Borders: IT Teams Design for Geopatriation]

Ken Kaplan: And when they’re doing that, are the apps that are running on VMs and containers, are they able to tap into the same storage or databases or different aspects of the system? Or is it you have this ability to manage them both, but they’re still kind of separate?

Dan Ciruli: I mean, I talk to some organizations who have their VMware environment and their OpenShift environment. They are literally building separate teams, buying separate hardware, run by separate people. From an integration standpoint, calling back and forth, it might as well be separate companies, because you’ve got different networking, you’ve got different security, right? And what we advocate for is running a hypervisor on all of that hardware that you buy. And then within that, some of it is just an app running in a VM and some of it is Kubernetes, but there’s the same networking underneath. And with Flow, you can write a networking policy that says, “This containerized application needs to talk to this virtualized application.” That, we feel, is a tremendous advantage. You don’t end up with hardware silos. Hardware silos are always a bad idea because inevitably one team has more hardware than they need.

We bought a huge cluster and we’re only using 60% of it for our, say, containerized applications. Meanwhile, we’ve got this other cluster that’s running virtualized applications. It’s 100% full, but we have another virtualized application that we need to run. In that case, the only way to do that is to go procure more hardware. Whereas when you’re running things more homogeneously, on one system that can handle either, you can say, “Oh, no problem. We’ll run more VMs on it.” So the hardware siloing is a really big deal. And that same thing feeds down into the networking: “I want this application to talk to this application.” Well, that’s complicated when you’ve got completely different networking solutions on the two sides.

Ken Kaplan: In the report, it said, “Currently, 71% are running their AI-enabled applications on a mix of traditional apps on virtual machines and modern apps in containers on VMs,” while 14% are running their AI-enabled apps directly on bare metal servers. Is that significant, and is it something you see might be changing?

Dan Ciruli: I think what that’s pointing to is that AI is, in some cases, an experiment. Companies are buying hardware specifically to run these AI applications. So we do see people doing that, but what most enterprises will quickly realize is that you want AI embedded in all of your applications. I’m firmly convinced that in three years we won’t differentiate between applications and AI applications. We’ll just call them applications. It doesn’t matter if it’s a business process workflow, your email or your sales database: you want AI everywhere. As I said before, some of those applications aren’t going to move out of a VM, which means you need to figure out how to get AI into them. Maybe it’s an API call out that is going to hit a new service, which is an agent running in a container, but you need that logic embedded in that traditional application.

So there is some experimentation going on where people are just going to buy some new hardware and run AI there. In the long run, you’re going to want to run your containerized and your virtualized applications together. You will also want that to be, I’ll call it, AI enabled. You will want to be able to, from anywhere, embed logic that depends on maybe an agent or maybe an LLM, independent of whether that original application is deployed in a VM or deployed in a container.

[Related: AI Flips the Data Storage Paradigm]

Ken Kaplan: Since we’re on this topic of AI agents, do you just think of AI agents as another app that’s probably cloud native?

Dan Ciruli: Almost certainly. As I said before, containerization has become the de facto standard for writing and deploying new applications. The word agentic is only about 18 months old. All of these agents are new applications, and they’re all being written to be deployed in containers. So yeah, all of that. And what I think of as agents is: these are AI that can do things. That’s how I think of it. With an LLM, you ask it a question and it answers; an agent is actually doing things. It might be validating someone else’s answer. It might be going to actually interact with a system. But agents are where AI actually does things. And yes, we have already moved from the phase of AI where I just want it to help me answer a question to: I want it to go do a thing for me.

Ken Kaplan: Yeah. I’m going to set up a playbook and have them run my playbook. And yeah, it’s happened pretty quickly. All right. Here’s my last question here. Does it take specialized skills to manage containers and Kubernetes and is that changing IT teams?

Dan Ciruli: It still does take additional knowledge to run containerized applications, to run Kubernetes. There are still things that you do need to learn; there’s no doubt about that. In part, I think that this gets solved through education and people learning to use new tools. Think about how the data center evolved. At some point, Linux became, I won’t say a de facto standard, but super, super common, and people in infrastructure and operations had to learn how to navigate in Linux, right? At some point, virtualization went from being a science experiment to the standard way things were run, and people had to learn the concepts of virtualization. The same thing is happening with the Kubernetes landscape, and more and more people are having to learn these concepts. But the other thing I think is happening at the same time, and this is someplace that we are leaning in as a company, is: how do we make that process easier?

How do we not just make that easier by giving lots and lots of education, but make it easier for the average operator to run a Kubernetes cluster? How do we make the tools easy to use? How do we make the tools smart, so that the tools are doing more and more of the grunt work? And I think this is a case where AI already is making their lives easier. The analogy I like to use is this: when I learned to drive in the 1980s, it was very common that you knew how to change the oil in your car, certainly check the oil level. You had to know where your dipstick was. You were going to be doing that. You probably knew where your carburetor was. You definitely knew where your spark plugs were. You knew how to clean a spark plug, and you knew how to replace a spark plug.

[Related: Why the Future of IT Belongs to Open Systems]

Not every driver did, but most did. Those were just common things, right? When was the last time anyone did any of those things in their car? The car has gotten better. It is tuning itself constantly. It is so much better at operating. The car in effect has gotten mechanically smarter, and now in some cases, actually computationally smarter. Well, our systems are doing the same thing. As vendors, we are trying to do the same: just as the car is continually tuning itself, we want Kubernetes to continually be tuning itself. We don’t need to teach everybody in the world to be a mechanic. We want to make this machine easier to operate. And that’s a big part of where we’re investing: how do we make it easier for those operations teams who, for all the reasons we discussed, are having to run Kubernetes? How do we make it so that it doesn’t feel like a burden for them, but is just another tool in their toolbox?

Jason Lopez: Dan Ciruli is the general manager of cloud native at Nutanix. He’s also a co-founder of the OpenAPI Initiative, which established the universal standard for how software systems describe and communicate with each other. Ken Kaplan is the editor-in-chief of The Forecast, which produced this interview. It’s part of the Tech Barometer podcast series. I’m your host, Jason Lopez. You can find articles and other podcasts at The Forecast by Nutanix. That’s all one word, theforecastbynutanix.com.

Posted in: Artificial Intelligence, Cloud Computing, Tech Barometer - From The Forecast by Nutanix