As we reach the end of the calendar year, our glasses fill with our favourite Christmas tipple, and we slow down for the Christmas break, it's a great opportunity to reflect on the past year. Way back in January, I sat down to make my top 5 predictions for AI in 2024, and whilst I've kept an eye on how each was progressing throughout the year, now it's time to dig in and assess how those predictions really fared.
1. Slowing of the Generative AI Hype Train
Prediction: It was one of the noisiest advancements of 2024, but organisations have struggled to consistently reap the promised rewards. Generative AI is not going anywhere, but expect the hype to cool to a more considered approach as those that have watched from the sidelines dip their toes in and proceed with caution until issues with security, regulatory uncertainty, and legal precedent are cleared up.
Verdict: Unclear. (But largely incorrect.)
Generative AI (or GenAI) remains one of the most controversial technologies of recent years, and measuring the hype train can lead to different answers depending on where you look or who you ask.
In terms of adoption, a CEPR survey highlights that GenAI has been adopted much more quickly than technologies such as the PC or the Internet. Yet, on the other hand, surveys such as this one from Boston Consulting Group highlight the alarming statistic that 74% of companies struggle to achieve and scale value with GenAI.
The views are similarly conflicting when you turn to investment, with plenty of talk of a GenAI “bubble” emerging, whilst TechCrunch simultaneously highlighted $3.9 billion of investment in GenAI startups secured in Q3 of 2024, alongside OpenAI’s whopping $6.6 billion round, valuing the company at $157 billion.
Whether the hype train is slowing or not likely depends more on your own optimism about GenAI as a technology. The one thing we can say for certain: GenAI isn’t going away, particularly whilst investment remains strong. Those that identify the right strategy and use cases for it will find themselves in a strong and enviable position, ahead of those who are struggling to get value from their investments.
2. LLMs vs SLMs
Prediction: As developers begin to understand the strengths of Large Language Models for conversational interfaces, we're likely to see far more chatbot-style solutions for interacting with users. The demand for competitiveness in this space is likely to drive competition with Small Language Models, as leaner and less intensive language models close the performance gap on their larger counterparts.
Verdict: Correct.
The year started strongly for Small Language Models (SLMs), as numerous big players in the Large Language Model (LLM) market continued to emphasise and expand their smaller model offerings (Llama, Phi, Mistral), and IBM returned to the LLM market with its Granite models, focusing on smaller sizes to lower cost without losing performance. But the battle didn’t stop at SLMs, with developers competing to release so-called Tiny Language Models (<1 billion parameters) such as HuggingFace’s SmolLM series, and even Super Tiny Language Models, with the aim of significantly reducing the compute costs required to run language solutions and opening the opportunity for language interfaces to be embedded within low-spec hardware.
Beyond just parameter size, there has been substantial critique of the model “scaling laws” often cited by LLM developers as a rationale for continually growing model size to increase performance, with evidence beginning to suggest this isn’t as clear-cut as previously claimed, and a number of studies (such as DeepSeek LLM) challenging this narrative further. (Interestingly, OpenAI itself has seemingly acknowledged this shift with the release of its o1 model, focusing on increased inference compute rather than growth in parameter size.)
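For readers unfamiliar with the scaling laws in question, they are commonly stated in a Chinchilla-style form (after Hoffmann et al.), predicting model loss L from parameter count N and training tokens D:

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Here E is an irreducible loss floor, and A, B, α, β are fitted constants. The critique above is essentially that the fitted exponents, and the compute-optimal trade-off between N and D they imply, vary more across settings than the neat formula suggests.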
3. The Rise of Organisational Knowledge Bases
Prediction: To leverage the potential of recent AI advances, organisations are likely to be forced to look inwards at their own data infrastructure. Expect a further push in Digital Transformation to create internal Knowledge Bases to power AI applications, as well as a rise in the use of Knowledge Graphs to ground these in fact.
Verdict: Correct – mostly?
The understanding that AI needs high-quality data to be truly successful, alongside the realisation that allowing third-party providers to train models on your data risks leaking sensitive information, has led many organisations to review their data infrastructure. Those with already accessible data have been able to demonstrate quick wins on AI workflows to increase buy-in. Projections for the cloud market continue to show strong growth, with organisations such as Microsoft investing heavily in cloud infrastructure across the globe (Mexico, Japan, Malaysia, and many more), a commitment matched by competitors such as Amazon (UK, Italy, Japan, and more).
Whilst there has been a drive for data infrastructure to provide AI-ready features, there is little evidence to suggest Knowledge Graphs have been considered important in this. However, there are early signs of recognition of the importance of graph structures in GenAI workflows. GraphRAG emerged as a technique to improve the retrieval and output capabilities of GenAI solutions, with Microsoft continuing to research the benefits of this approach. Meanwhile, Knowledge Graph market leader Neo4j saw its revenue grow beyond $200 million this year, crediting its ability to improve the accuracy, transparency and explainability of GenAI solutions as a major factor in this growth.
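For readers new to GraphRAG, the core idea is simple: rather than retrieving isolated text chunks, you retrieve facts by traversing a knowledge graph, so the context handed to the language model is explicitly grounded. A minimal sketch, with an invented toy graph purely for illustration:

```python
# Toy illustration of the retrieval idea behind GraphRAG: traverse a
# knowledge graph outwards from a query entity and turn the edges found
# into textual facts that can be placed in a GenAI model's prompt.
# The graph content below is invented for demonstration only.

from collections import deque

# Knowledge graph as adjacency lists of (relation, target) triples.
GRAPH = {
    "Acme Ltd": [("acquired", "Widget Co"), ("headquartered_in", "London")],
    "Widget Co": [("produces", "widgets"), ("founded_in", "1998")],
}

def retrieve_context(entity: str, max_hops: int = 2) -> list[str]:
    """Breadth-first traversal that converts graph edges into facts."""
    facts, seen, queue = [], {entity}, deque([(entity, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # stop expanding beyond the hop limit
        for relation, target in GRAPH.get(node, []):
            facts.append(f"{node} {relation} {target}")
            if target not in seen:
                seen.add(target)
                queue.append((target, depth + 1))
    return facts

# These facts would ground the model's answer in the graph,
# rather than leaving it to rely on its training data.
context = retrieve_context("Acme Ltd")
```

Real GraphRAG pipelines add entity extraction, community summarisation, and vector search on top, but the grounding principle is the same: answers are assembled from explicit, traceable graph facts.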
Whilst enterprise scale graphs may not be disrupting the market yet, expect them to become a hot topic as GenAI adoption grows.
4. Large Vision Models
Prediction: As the Large Language Model momentum stabilises, expect to see competition to develop foundation models in the Computer Vision space. These models will be able to perform a wide range of tasks on image and video data, helping accelerate adoption of AI for visual tasks.
Verdict: Correct – but quietly.
The Computer Vision market has been steadily growing, and it is projected to continue, but Large Vision Models (LVMs) haven’t received anywhere near the level of attention and hype of LLMs, despite the range of use cases they enable. This is likely because these models are mostly being picked up by designers and developers, rather than being exposed directly to consumers. However, early in the year, we saw image capabilities rolled into popular consumer-facing tools.
After a late 2023 announcement, OpenAI rolled out its GPT-4 vision model throughout 2024, a key advancement to its offering, allowing ChatGPT to run off so-called Multi-Modal Models (models that can process a range of input types such as text, image, audio and video). In the last couple of years, Multi-Modal Models have become the new frontier for the major AI model developers as they seek to combine previously separate processing streams into one model architecture. Across the board, new models have been arriving thick and fast: Meta, Anthropic, Google, Amazon, Mistral, and more have all made major Multi-Modal releases this year.
Another major advancement this year has come in the form of Generative Video, with Runway and OpenAI’s Sora catching the headlines at different stages of the year.
Aside from model development, product developers have sought to integrate Computer Vision progress into their solutions, and buyers have followed suit, with continued growth in the adoption of Computer Vision tools for security, health and safety, and quality control, amongst other use cases. In fact, a Gartner survey this year predicted that 50% of warehouse operations will leverage a Computer Vision solution by 2027, emphasising that competition in the Large Vision Model space is likely to grow.
5. Cloud Computing vs Edge Computing
Prediction: The growth in data and AI applications is putting a tremendous strain on cloud compute infrastructure, and the growth of distributed technologies looks to be a further headache. Expect to see more Internet of Things technologies focused on computing on device and Federated Learning, whilst cloud providers seek to find ways to reduce cost and latency to remain competitive.
Verdict: Unclear.
The main computing takeaway from 2024 is that both Cloud and Edge Computing are growing quickly, and as demand for hardware and compute power surges, there has been little need for the two markets to directly compete. Whilst we touched on cloud growth earlier, Edge Computing growth is arguably stronger, driven by increased demand for real-time analytics, automation, and enhanced customer experiences.
The Edge trend is unlikely to slow down, and chip manufacturers are investing heavily in preparation, with NVIDIA, AMD, Intel and Altera (amongst others) all making significant improvements to their Edge AI offerings as they compete over a market that is likely to grow substantially in the coming years. Whilst Cloud and Edge are co-existing peacefully at present, this remains a space to watch, as growing awareness of the advantages of Edge Computing could see it muscle in on Cloud solutions.
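The prediction above mentions Federated Learning, which is worth unpacking: devices train on their own local data and share only model parameters, which a server averages, so raw data never leaves the device. A minimal sketch of the federated averaging idea, using a toy one-parameter model and invented data:

```python
# Minimal sketch of Federated Averaging (FedAvg): each device takes a
# gradient step on its private data, and the server averages the
# resulting parameters. Toy model: fit y = w * x across two devices.

def local_update(weights: list[float], data: list[tuple[float, float]],
                 lr: float = 0.1) -> list[float]:
    """One gradient-descent step on squared error, run on a single device."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_average(client_weights: list[list[float]]) -> list[float]:
    """Server step: element-wise average of parameters across devices."""
    n = len(client_weights)
    return [sum(ws[i] for ws in client_weights) / n
            for i in range(len(client_weights[0]))]

# Two devices holding private data drawn from roughly y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.1), (4.0, 7.9)]]
global_weights = [0.0]
for _ in range(50):  # communication rounds
    updates = [local_update(global_weights, data) for data in clients]
    global_weights = federated_average(updates)
# global_weights converges towards w ≈ 2 without either device's
# raw data ever being sent to the server.
```

Production systems (and the IoT scenarios the prediction refers to) add secure aggregation, client sampling, and weighting by dataset size, but the privacy-preserving structure is the same.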
In Conclusion
The AI market in 2024 has been filled with noise, confusion and chaotic energy, as organisations have faced pressure to find and adopt AI use cases in a turbulent economic environment. That being said, the noise began to quieten in the latter portion of the year, with success stories and torch carriers beginning to emerge to guide the journey, leaving an optimistic outlook for 2025.