In a research report commissioned by Hewlett Packard Enterprise (NYSE: HPE), fewer than half (44%) of IT leaders surveyed believe their organizations are fully set up to realize the benefits of AI. The report reveals critical gaps in their strategies, such as a lack of alignment between processes and metrics, resulting in a fragmented approach that risks further exacerbating delivery issues.
The report, ‘Architect an AI Advantage’, which surveyed more than 2,000 IT leaders from 14 countries, found that while global commitment to AI shows growing investments, businesses are overlooking key areas that will have a bearing on their ability to deliver successful AI outcomes – including low data maturity levels, possible deficiencies in their networking and compute provisioning, and vital ethics and compliance considerations. The report also uncovered significant disconnects in both strategy and understanding that could adversely affect future return on investment (ROI).
“There’s no doubt AI adoption is picking up pace, with nearly all IT leaders planning to increase their AI spend over the next 12 months,” said Sylvia Hooks, VP, HPE Aruba Networking. “These findings clearly demonstrate the appetite for AI, but they also highlight very real blind spots that could see progress stagnate if a more holistic approach is not followed. Misalignment on strategy and department involvement – for example – can impede organizations from leveraging critical areas of expertise, making effective and efficient decisions, and ensuring a holistic AI roadmap benefits all areas of the business congruently.”
Acknowledging Low Data Maturity
Strong AI performance that impacts business outcomes depends on quality data input, but the research shows that while organizations clearly understand this – labelling data management as one of the most critical elements for AI success – their data maturity levels remain low. Only a small percentage (7%) of organizations can run real-time data pushes/pulls to enable innovation and external data monetization, while just 26% have set up data governance models and can run advanced analytics.
Of greater concern, fewer than 6 in 10 respondents said their organization is completely capable of handling any of the key stages of data preparation for use in AI models – from accessing (59%) and storing (57%), to processing (55%) and recovering (51%). This discrepancy not only risks slowing down the AI model creation process, but also increases the probability that the model will deliver inaccurate insights and a negative ROI.
Provisioning for the end-to-end lifecycle
A similar gap appeared when respondents were asked about the compute and networking requirements across the end-to-end AI lifecycle. On the surface, confidence levels look high in this regard: 93% of IT leaders believe their network infrastructure is set up to support AI traffic, while 84% agree their systems have enough flexibility in compute capacity to support the unique demands across different stages of the AI lifecycle.
Gartner® expects “GenAI will play a role in 70% of text- and data-heavy tasks by 2025, up from less than 10% in 2023,” yet fewer than half of IT leaders admitted to having a full understanding of what the demands of the various AI workloads across training, tuning, and inferencing might be – calling into serious question how accurately they can provision for them.
Ignoring cross-business connections, compliance, and ethics
Organizations are failing to connect the dots between key areas of business, with over a quarter (28%) of IT leaders describing their organization’s overall AI approach as “fragmented.” As evidence of this, over a third (35%) of organizations have chosen to create separate AI strategies for individual functions, while 32% are creating different sets of goals altogether.
More dangerous still, it appears that ethics and compliance are being completely overlooked, despite growing scrutiny around ethics and compliance from both consumers and regulatory bodies. The research shows that legal/compliance (13%) and ethics (11%) were deemed by IT leaders to be the least critical for AI success. In addition, the results showed that almost 1 in 4 organizations (22%) aren’t involving legal teams in their business’s AI strategy conversations at all.
The fear of missing out on AI and the business risk of overconfidence
As businesses move quickly to capitalize on the hype around AI, those without proper AI ethics and compliance run the risk of exposing their proprietary data – a cornerstone for retaining their competitive edge and maintaining their brand reputation. Among the issues, businesses lacking an AI ethics policy risk developing models without proper compliance and diversity standards, resulting in damage to the company’s brand, lost sales, or costly fines and legal battles.
There are additional risks as well, as the quality of the outcomes from AI models is limited by the quality of the data they ingest. This is reflected in the report, which shows data maturity levels remain low. Combined with the finding that half of IT leaders admit to lacking a full understanding of the IT infrastructure demands across the AI lifecycle, this increases the overall risk of developing ineffective models, including the impact of AI hallucinations. Moreover, because the power demand to run AI models is extremely high, poorly planned deployments can contribute to an unnecessary increase in data center carbon emissions. These challenges lower the ROI on a company’s capital investment in AI and can further damage the overall company brand.
“AI is the most data- and power-intensive workload of our time, and to effectively deliver on the promise of GenAI, solutions must be hybrid by design and built with a modern AI architecture,” said Dr. Eng Lim Goh, SVP for Data & AI, HPE. “From training and tuning models on-premises, in a colocation facility, or in the public cloud, to inferencing at the edge, GenAI has the potential to turn data into insights from every device on the network. However, businesses must carefully weigh the benefit of being a first mover against the risk of not fully understanding the gaps across the AI lifecycle; otherwise, large capital investments can end up delivering a negative ROI.”