A portrait of Ada Lovelace

Celebrating Ada Lovelace: A Computing Pioneer

As today is International Women’s Day, we thought we’d give a shout-out to an absolute pioneer of mathematics: Augusta Ada King, Countess of Lovelace, better known as Ada Lovelace.

Born in 1815 to Lord Byron, the iconic rake of Regency London, and the reformer Anne Milbanke, Ada showed a keen interest in logic and maths from a young age. Her mother was supportive of these passions and urged her to explore them, mainly due to a concern that she would end up ‘insane’ like her estranged father, who had left them behind when she was only a month old.

The Enchantress of Numbers

At the age of eighteen, thanks to her obvious talents and interests, Ada was brought into contact with Charles Babbage (also known as the Father of Computing). This meeting happened at one of his Saturday night soirées and might never have occurred if not for Ada’s private tutor: scientist, polymath and writer Mary Somerville. Something of a peculiar character herself, Ada felt she needed an equally open-minded teacher, and her working relationship and friendship with Somerville blossomed quickly. Later that month, Babbage invited her to see the prototype for his Difference Engine (a mechanical computer), which immediately fascinated her. Inspired by her new teacher, Ada used her relationship with Somerville to her advantage and visited Babbage as often as she could. Incredibly impressed by her analytic skills and intellectual ability, he christened her ‘The Enchantress of Numbers’, a nickname which has stood the test of time.

The First Computer Program

Lovelace would go on to document Babbage's Analytical Engine, as well as envisioning how it might be used by writing algorithms in her notes. She is widely credited as having written the first published computer program, when her algorithm to calculate the Bernoulli numbers was printed in a scientific journal in 1843.
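
To give a modern flavour of what that algorithm computed, here is a short Python sketch that generates the Bernoulli numbers from a standard recurrence. It is purely illustrative: it bears no resemblance to Ada's actual Analytical Engine program, and the recurrence used is simply one common textbook formulation.

```python
# Illustrative only: compute Bernoulli numbers B_0..B_n as exact fractions,
# using the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0 (for m >= 1).
from fractions import Fraction
from math import comb

def bernoulli(n):
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        acc = sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B

print(bernoulli(8))  # B_1 = -1/2, B_2 = 1/6, ..., B_8 = -1/30
```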

Ada’s exploits also helped her build relationships with scientists such as Andrew Crosse, Sir David Brewster, Charles Wheatstone, and the author Charles Dickens, contacts which she used to further her knowledge and gain more insight into her passions. Ada described her approach as ‘poetical science’ and was a self-described ‘Analyst & Metaphysician’.

More than a century after her death, her notes on Babbage's Analytical Engine were republished, and the Engine itself has since been recognised as an early model for the computer. Her notes describing the machine and its software show just how advanced Charles and Ada both were in their thinking. Ada had many ideas about the potential of these machines and anticipated modern computing a hundred years early... now that’s impressive!

New Year's fireworks viewed over rooftops

A Look Back at our 2024 Predictions for AI

As we reach the end of the calendar year, our glasses fill with our favourite Christmas tipple, and we slow down for the Christmas break, it's a great opportunity to reflect on the past year. Way back in January, I sat down to make my top five predictions for AI in 2024, and whilst I've kept my eye on how each was progressing, now it's time to dig in and assess how those predictions really fared.

1. Slowing of the Generative AI Hype Train

Prediction: It was one of the noisiest advancements of 2024, but organisations have struggled to consistently reap the promised rewards. Generative AI is not going anywhere, but expect the hype to cool to a more considered approach as those that have watched from the sidelines dip their toes in and proceed with caution until issues with security, regulatory uncertainty, and legal precedent are cleared up.

Verdict: Unclear. (But largely incorrect.)

Generative AI (or GenAI) remains one of the most controversial technologies of recent years, and measuring the hype train can lead to different answers depending on where you look or who you ask.

In terms of adoption, a CEPR survey highlights that GenAI has been taken up far more quickly than technologies such as the PC or the Internet. Yet, on the other hand, surveys such as this one from Boston Consulting Group highlight the alarming statistic that 74% of companies struggle to achieve and scale value with GenAI.

The views are similarly conflicting when you turn to investment: there is plenty of talk of a GenAI “bubble” emerging, whilst TechCrunch simultaneously highlights the $3.9 billion of investment secured by GenAI startups in Q3 of 2024, alongside OpenAI’s whopping $6.6 billion round, which valued the company at $157 billion.

Whether the hype train is slowing or not is likely to depend on your own optimism about GenAI as a technology. The one thing we can say for certain: GenAI isn’t going away, particularly whilst investment remains strong. Those that identify the right strategy and use cases for it will find themselves in a strong and enviable position, ahead of those who are struggling to get value from their investments.

2. LLMs vs SLMs

Prediction: As developers begin to understand the strengths of Large Language Models for conversational interfaces, we're likely to see far more chatbot-style solutions for interacting with users. Demand in this space is likely to drive competition from Small Language Models, as leaner and less resource-intensive language models close the performance gap on their larger counterparts.

Verdict: Correct.

The year started strongly for Small Language Models (SLMs), as numerous big players in the Large Language Model (LLM) market continued to emphasise and expand their smaller model offerings (Llama, Phi, Mistral), and IBM returned to the LLM market with its Granite models, which focus on smaller sizes to lower cost without losing performance. But the battle didn’t stop at SLMs, with developers competing to release so-called Tiny Language Models (<1 billion parameters) such as HuggingFace’s SmolLM series, and even Super Tiny Language Models, with the aim of significantly reducing the compute costs required to run language solutions and opening the opportunity for language interfaces to be embedded within low-spec hardware.
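
To give a sense of how accessible these smaller models have become, here is a minimal sketch using the Hugging Face transformers library. The model identifier is taken from the publicly listed SmolLM series but should be treated as illustrative; any small text-generation checkpoint could be swapped in, and the example assumes transformers and torch are installed locally.

```python
# Minimal sketch: running a sub-1B-parameter language model locally.
# Assumes: pip install transformers torch
from transformers import pipeline

# Model ID assumed from the public SmolLM series; substitute any small model.
generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM-360M")

output = generator(
    "Small language models are useful because",
    max_new_tokens=40,
    do_sample=True,
)
print(output[0]["generated_text"])
```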

Beyond just parameter size, there has been substantial critique of the model “scaling laws” often cited by LLM developers as a reason for continually growing model size to increase performance, with evidence beginning to suggest this isn’t as clear-cut as previously claimed, and a number of studies (such as DeepSeek LLM) challenging this narrative further. (Interestingly, OpenAI itself has seemingly acknowledged this shift with the release of its o1 model, focusing on increased inference compute rather than a growth in parameter size.)


3. The Rise of Organisational Knowledge Bases

Prediction: To leverage the potential of recent AI advances, organisations are likely to be forced to look inwards at their own data infrastructure. Expect a further push in Digital Transformation to create internal Knowledge Bases to power AI applications, as well as a rise in the use of Knowledge Graphs to ground these in fact.

Verdict: Correct – mostly?

The understanding that AI needs high-quality data to be truly successful, alongside the realisation that allowing third-party providers to train models on your data risks leaking sensitive information, has led many organisations to review their data infrastructure, with those whose data is already accessible able to demonstrate quick wins on AI workflows and increase buy-in. Projections for the cloud market continue to show strong growth, and organisations such as Microsoft have invested heavily in cloud infrastructure across the globe (Mexico, Japan, Malaysia, and many more), a commitment matched by competitors such as Amazon (UK, Italy, Japan, and more).

Whilst there has been a drive for data infrastructure to provide AI-ready features, there is little evidence to suggest Knowledge Graphs have been considered important in this; however, there are early signs of recognition of the importance of graph structures in GenAI workflows. GraphRAG emerged as a technique to improve the retrieval and output capabilities of GenAI solutions, with Microsoft continuing to research the benefits of this approach. Meanwhile, Knowledge Graph market leader Neo4j saw its revenue grow beyond $200 million this year, crediting its ability to improve the accuracy, transparency and explainability of GenAI solutions as a major factor in this growth.
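
As a rough illustration of the GraphRAG idea, the sketch below pulls facts from a knowledge graph and prepends them to a prompt, so the language model answers from grounded context rather than from memory alone. Everything here is an assumption for illustration: the local Neo4j connection details, the placeholder credentials, the simple (:Entity) schema and the prompt wording; it is not Microsoft's GraphRAG implementation.

```python
# Sketch of a GraphRAG-style step: retrieve facts from a knowledge graph,
# then ground the LLM prompt in them. Assumes: pip install neo4j
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # assumed local Neo4j instance
AUTH = ("neo4j", "password")    # placeholder credentials

def retrieve_facts(topic):
    # Hypothetical schema: (:Entity {name})-[relationship]->(:Entity {name})
    query = (
        "MATCH (a:Entity {name: $topic})-[r]->(b:Entity) "
        "RETURN a.name AS subject, type(r) AS relation, b.name AS object "
        "LIMIT 10"
    )
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        with driver.session() as session:
            result = session.run(query, topic=topic)
            return [f"{rec['subject']} {rec['relation']} {rec['object']}" for rec in result]

def build_grounded_prompt(question, topic):
    facts = "\n".join(retrieve_facts(topic))
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

print(build_grounded_prompt("Who supplies our largest customer?", "Acme Corp"))
```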

Whilst enterprise-scale graphs may not be disrupting the market yet, expect them to become a hot topic as GenAI adoption grows.

4. Large Vision Models

Prediction: As the Large Language Model momentum stabilises, expect to see competition to develop foundation models in the Computer Vision space. These models will be able to perform a wide range of tasks on image and video data, helping accelerate adoption of AI for visual tasks.

Verdict: Correct – but quietly.

The Computer Vision market has been steadily growing, and it is projected to continue, but Large Vision Models (LVMs) haven’t received anywhere near the level of attention and hype of LLMs, despite the range of use cases they enable. This is likely because these models are mostly being picked up by designers and developers, rather than being exposed directly to consumers. However, early in the year, we saw image capabilities rolled into popular consumer-facing tools.

After a late 2023 announcement, OpenAI rolled out its GPT-4 vision model throughout 2024, a key advancement to its offering, allowing ChatGPT to run on so-called Multi-Modal Models (models that can process a range of input types such as text, image, audio and video). In the last couple of years, Multi-Modal models have become the new frontier for the major AI model developers as they seek to combine previously separate processing streams into one model architecture. Across the board, numerous new models have been coming thick and fast: Meta, Anthropic, Google, Amazon, Mistral, and more have all made major Multi-Modal releases this year.
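
For a concrete feel of what "Multi-Modal" means in practice, here is a minimal sketch that sends a text question and an image in a single request via the OpenAI Python SDK's chat completions interface. The model name and image URL are placeholders, and the example assumes the openai package is installed with an OPENAI_API_KEY set in the environment; any multi-modal-capable model could be substituted.

```python
# Sketch: one request combining text and image input (multi-modal).
# Assumes: pip install openai, and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed multi-modal-capable model; substitute as needed
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is happening in this picture?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```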

Another major advancement this year has come in the form of Generative Video, with Runway and OpenAI's Sora catching the headlines at different stages of the year.

Aside from model development, product developers have sought to integrate computer vision progress into their solutions, and buyers have been following suit, with continued growth in the adoption of Computer Vision tools for security, health and safety, and quality control, amongst other use cases. In fact, a Gartner survey this year predicted that 50% of warehouse operations will leverage a Computer Vision solution by 2027, emphasising that competition in the Large Vision Model space is likely to grow.


5. Cloud Computing vs Edge Computing

Prediction: The growth in data and AI applications is putting a tremendous strain on cloud compute infrastructure, and the growth of distributed technologies looks to be a further headache. Expect to see more Internet of Things technologies focused on on-device computing and Federated Learning, whilst cloud providers seek ways to reduce cost and latency to remain competitive.

Verdict: Unclear.

The main computing takeaway from 2024 is that both Cloud and Edge computing are growing quickly, and as demand for hardware and compute power surges there has been little need for the two markets to compete directly. Whilst we touched on cloud growth earlier, Edge Computing growth is arguably stronger, driven by increased demand for real-time analytics, automation, and enhanced customer experiences.

The Edge trend is unlikely to slow down, and chip manufacturers are investing heavily in preparation, with NVIDIA, AMD, Intel and Altera (amongst others) all making significant improvements to their Edge AI offerings as they continue to posture over a market that is likely to grow substantially in the coming years. Whilst Cloud and Edge are co-existing peacefully at present, this remains a space to watch, as growing awareness of the advantages of Edge computing could see it muscle in on Cloud solutions.
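
The prediction above leaned on Federated Learning as one of the edge-friendly approaches; for readers unfamiliar with it, the core idea (federated averaging) fits in a few lines of NumPy. This is a toy sketch of the aggregation step only, using a synthetic linear-regression task; it ignores client selection, weighting by dataset size, secure aggregation and all of the communication machinery a real system would need.

```python
# Toy sketch of Federated Averaging (FedAvg): each client trains locally on
# its own private data, only model weights travel, and the server averages them.
import numpy as np

def local_update(weights, client_data, lr=0.1):
    """One gradient step on a client's private (X, y) for a linear model."""
    X, y = client_data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, clients):
    """Server sends weights out, clients update locally, server averages the results."""
    client_weights = [local_update(global_weights.copy(), data) for data in clients]
    return np.mean(client_weights, axis=0)

# Example: three clients, each holding data the server never sees directly.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(50):
    weights = federated_round(weights, clients)
print(weights)
```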

In Conclusion

The AI market in 2024 has been filled with noise, confusion and chaotic energy, as organisations have faced pressure to find and adopt AI use cases in a turbulent economic environment. That being said, the noise has begun to quieten in the latter portion of the year, with success stories and torch carriers beginning to emerge to guide the journey, leaving an optimistic outlook for 2025.
