Versent, Nutanix and IBM discuss balancing cost, efficiency and innovation.

As cloud maturity increases, so does the need for organisations to be intentional with their workload placement, which requires enterprise-wide operational alignment across architecture, operations, talent and business goals.

In an interview with ARN, Versent chief technology officer Tim Hope said that while many customers are moving to the cloud, there’s a shift towards hybrid solutions due to cost pressures, particularly in storage.

Hope emphasised the importance of enterprise tooling, upskilling teams and integrating cloud into organisational processes as customers balance multiple clouds rather than repatriating or repackaging to on-premises.

“More customers are moving into the cloud and there’s much more pressure from moving existing on-premises hardware to these platforms, particularly due to higher storage costs,” he said. “Storage cost in the cloud is a lot higher, which creates an anchor that keeps them on-premises.

“Customers are moving around clouds rather than going back on-premises.”

Versent was having conversations with customers about modernising in the cloud to gain efficiency, or moving workloads that weren’t a 100 per cent fit for the platform onto dedicated hardware or instances, explained Hope.

“We are definitely seeing a trend toward hybrid solutions,” Hope said. “In the past, ‘cloud first’ was the big focus, then hardware providers came in with more efficient hardware costs.

“Now, managed data centres are rising, offering better price points for customers.”

According to Hope, the hybrid cloud conversation has become important when deciding where to place assets, how much management customers want and how much velocity to drive while optimising infrastructure cost across a hybrid environment.
“That conversation has matured in the market; customers [considering] full repatriation generally haven’t done the people and process uplift within their organisations and still have a traditional infrastructure mindset,” he said. “When we talk to them, the conversations often include that they haven’t put in the right enterprise tooling to be effective in the cloud, they’ve overused open source, or overcommitted to what cloud providers offer.

“Then it comes down to looking at their processes, how to upskill teams and how to integrate cloud into the organisation.”

When Versent helped customers put the right enterprise tools on top of the cloud, they found more success than those who approached it with a traditional infrastructure mindset.

Shifting cloud usage

Even with the right tools, many organisations are now reconsidering their cloud usage, as rising cloud costs trigger “knee-jerk reactions”, such as comparing the sticker price of physical servers to cloud hosting costs without considering the full picture, said Nutanix Asia Pacific and Japan head of cloud Michael Car.

“Customers need help to see the true cost of running on-premises and running in the cloud,” he said. “Then you look at the adjacent features and subsets around data creation, the applications that talk to the data, scaling it in the cloud and making it available in different geographies.

“Additionally, Microsoft licensing for Windows and SQL is a significant factor, representing another layer.”

Car attributes rising cloud bills to how much of the platform is being consumed, rather than the cloud itself becoming more expensive.

“Cloud is not getting more expensive,” he said. “How much of the cloud is consumed will drive that price up, and then it should dovetail into a discussion around efficiency of using the cloud.”

Organisations have spent 10 to 20 years on-premises defining the tools and how efficiently they can run that hardware, explained Car.
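Car’s point about looking past the sticker price can be sketched as a toy total-cost-of-ownership comparison. Every figure and cost category below is a hypothetical placeholder for illustration only, not vendor pricing; the structure simply shows why a server-price-versus-instance-price comparison misses the other line items he lists, such as licensing, geography and operations.

```python
# Toy three-year TCO comparison: the server "sticker price" is only one
# line item on either side of the ledger. All figures are hypothetical
# placeholders, not real vendor pricing.

YEARS = 3

# Hypothetical annual on-premises costs beyond the hardware itself
on_prem = {
    "server_hardware": 40_000 / YEARS,   # purchase amortised over 3 years
    "datacentre_power_cooling": 12_000,
    "storage_and_backup": 15_000,
    "ops_staff_share": 30_000,
    "windows_sql_licensing": 20_000,     # the licensing layer Car mentions
}

# Hypothetical annual cloud costs for an equivalent workload
cloud = {
    "compute_instances": 35_000,
    "storage": 25_000,                   # storage often the cost anchor
    "cross_region_replication": 8_000,   # availability in other geographies
    "windows_sql_licensing": 15_000,
    "ops_staff_share": 18_000,           # less hands-on management assumed
}

def three_year_total(costs: dict) -> float:
    """Sum the annual line items over the comparison window."""
    return sum(costs.values()) * YEARS

print(f"On-prem 3-year TCO: ${three_year_total(on_prem):,.0f}")
print(f"Cloud   3-year TCO: ${three_year_total(cloud):,.0f}")
```

Swapping in real quotes for each line item, rather than comparing hardware price to instance price alone, is the “full picture” exercise Car describes.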
That operational maturity hasn’t really migrated into the cloud space, which is where bimodal operations can hinder organisations that run a separate cloud team.

“They run an on-premises or legacy infrastructure team, so they don’t bring the learnings across into the cloud and the efficiency isn’t there,” he said. “That’s what we need to look at.

“We’ve seen security get breached because of it, because the policies that were created didn’t adapt.”

Five or six years ago, the industry used the term ‘shadow IT’, where development teams would go and spin up solutions in the cloud, Car said.

“We’ve definitely seen that neither the two shall meet and it’s starting to get a little bit better as the lines are blurred,” he said. “However, I still see a lot of organisations wanting to treat cloud as cloud and on-premises as on-premises, and that’s not going to deliver the efficiency in the cloud that is possible.”

Car pointed out that the native tool sets cloud providers offer today aren’t at the maturity of those used on-premises.

“In terms of density efficiency, that’s going to allow you to really get the absolute maximum out of the cloud infrastructure,” he said.

Defining hybrid cloud strategies and drivers

This is why it is important for an organisation to define the drivers for its hybrid cloud strategy, said IBM partner for hybrid cloud, data and applications operations for Australia Iskra Nikolova.

Depending on the driver, the thinking behind the strategy will be different, she pointed out.

“We have a concept which we call hybrid by design, but many companies out there have developed what I would describe as a hybrid by default strategy.
“They do some applications on-premises, may invest in some kind of private cloud, and some other applications — perhaps the majority — would be destined for public cloud.”

As an example, Nikolova, who previously worked at Telstra, said the telecommunications provider’s T25 strategy aimed to have 90 per cent of relevant applications on public cloud by 2025 as a key performance metric.

“This is a process that happens almost organically and is what we call hybrid by default,” she said. “Hybrid by design is where companies design upfront what will be most relevant to put on public cloud and what will be best suited for private cloud — in order to transform the business.

“This more intentional approach to designing workloads and where they’re placed helps unlock better productivity in the short term, but also enables automation.”

Hybrid for AI

This is where artificial intelligence (AI) comes into the picture in the long term: without the right workloads in the right place, organisations face a barrier to substantive end-to-end automation, explained Nikolova.

“This is where AI, especially generative AI, enables dramatic productivity gains which were perhaps not possible before, with less mature levels of automation,” she said.

This is a driver to adopt the most appropriate cloud strategy — hybrid by design, ideally — in order to transform the business at every level, Nikolova said. This isn’t just at the infrastructure level, but also at the operating system, application and data levels, with automation on top.

Another driver is automation and AI, because these tools require more intentional and thoughtful planning for how companies run workloads across multiple locations — on-premises, private or public cloud.

“Another observation now is that lots of new tools and methods for analysis are emerging that basically help with hybrid strategies,” Nikolova said.

The focus on end-to-end transformation was also shifting how organisations think about AI more broadly.
Hope said when digital transformation was at its peak, it was about driving customer value and cohesion, and that same mindset carried into the early stages of AI.

“Now the economics are shifting,” he pointed out. “Customers aren’t just saying ‘go experiment’; they’re asking how to get real value from what they’ve already invested in and how to avoid overlapping suppliers or duplicate spending.”

Technologists are pushing back, saying costs are going up and the business needs to be more disciplined, noted Hope. A lot of platform-as-a-service (PaaS) and software-as-a-service (SaaS) was coming in and IT was being told to just ‘deal with it’, even though it already had viable, cost-effective solutions in place.

“Generative AI came in with a strong product-led push, but now with more agentic workloads and AI engineering, it’s becoming a big platform play,” he said. “That brings in serious cost considerations — we’re already seeing cost blowouts.

“Now the conversation is moving from cloud cost to AI cost. That’s why you’re hearing companies say, ‘Let’s bring AI on-premises so we can control cost better.’”

According to Hope, the really big enterprises have invested in understanding where they should place workloads and are having conversations around service management and observability.

“Many don’t fully understand what’s happening in this evolving ecosystem, consuming SaaS and PaaS platforms layered on top of hyperscalers like Microsoft and AWS,” he said. “They’re asking for observability, resilience and clarity on data location, which leads into IT service management and risk conversations.

“Customers are stretching traditional ITIL (information technology infrastructure library) processes while trying to maintain a DevOps mindset with a service management overlay.
“This complexity adds pressure on cost and operational management, and AI will only stretch this further as it moves from product to platform scale.”