As we reach the end of the calendar year, fill our glasses with our favourite Christmas tipple, and slow down for the Christmas break, it's a great opportunity to reflect on the past year. Way back in January, I sat down to make my top 5 predictions for AI in 2024, and whilst I've kept my eye on how each was progressing... now it's time to dig in and assess how those predictions really fared.
Prediction: It was one of the noisiest advancements of 2024, but organisations have struggled to consistently reap the promised rewards. Generative AI is not going anywhere, but expect the hype to cool to a more considered approach as those that have watched from the sidelines dip their toes in and proceed with caution until issues with security, regulatory uncertainty, and legal precedent are cleared up.
Verdict: Unclear. (But largely incorrect.)
Generative AI (or GenAI) remains one of the most controversial technologies of recent years, and measuring the hype train can lead to different answers depending on where you look or who you ask.
In terms of adoption, a CEPR survey highlights that the adoption of GenAI has accelerated much quicker than technologies such as the PC or the Internet. Yet, on the other hand, surveys such as this one from Boston Consulting Group highlight the alarming statistic that 74% of companies struggle to achieve and scale value with GenAI.
The views are similarly conflicting when you turn to investment, with plenty of talk of a GenAI “bubble” emerging, whilst TechCrunch simultaneously highlighted $3.9 billion of investment in GenAI startups secured in Q3 of 2024, alongside OpenAI’s whopping $6.6 billion round, valuing the company at $157 billion.
Whether the hype train is slowing or not is likely to depend more on your own optimism about GenAI as a technology. The one thing we can say for certain: GenAI isn’t going away, particularly whilst investment remains strong. Those that identify the right strategy and use cases for it will find themselves in a strong and enviable position ahead of those who are struggling to get value from their investments.
Prediction: As developers begin to understand the strengths of Large Language Models for conversational interfaces, we're likely to see far more chatbot-style solutions for interacting with users. The need to stay competitive in this space is likely to drive competition from Small Language Models, as leaner and less intensive language models close the performance gap on their larger counterparts.
Verdict: Correct.
The year started strongly for Small Language Models (SLMs), as numerous big players in the Large Language Model (LLM) market continued to emphasise and expand their smaller model offerings (Llama, Phi, Mistral), and IBM returned to the LLM market with its Granite models, focusing on smaller sizes to lower cost without losing performance. But the battle didn’t stop at SLMs, with developers competing to release so-called Tiny Language Models (<1 billion parameters) such as HuggingFace’s SmolLM series, and even Super Tiny Language Models, with the aim of significantly reducing the compute costs required to run language solutions and opening the opportunity for language interfaces to be embedded within low-spec hardware.
Beyond just parameter size, there has been substantial critique of the model “scaling laws” often cited by LLM developers as a rationale for continually growing model size to increase performance, with evidence beginning to suggest this isn’t as clear-cut as previously claimed, and a number of studies (such as DeepSeek LLM) challenging the narrative further. (Interestingly, OpenAI itself has seemingly acknowledged this shift with the release of its o1 model, which focuses on increased inference compute rather than growth in parameter size.)
Prediction: To leverage the potential of recent AI advances, organisations are likely to be forced to look inwards at their own data infrastructure. Expect a further push in Digital Transformation to create internal Knowledge Bases to power AI applications, as well as a rise in the use of Knowledge Graphs to ground these in fact.
Verdict: Correct – mostly?
The understanding that AI needs high-quality data to be truly successful, alongside the realisation that allowing third-party providers to train models on your data risks leaking sensitive information, has led many organisations to review their data infrastructure, with those whose data is already accessible able to demonstrate quick wins on AI workflows to increase buy-in. Projections for the cloud market continue to show strong growth, and organisations such as Microsoft have invested heavily in cloud infrastructure across the globe (Mexico, Japan, Malaysia, and many more), a commitment matched by competitors such as Amazon (UK, Italy, Japan, and more).
Whilst there has been a drive among data infrastructure providers to offer AI-ready features, there is little evidence to suggest Knowledge Graphs have been considered important in this. However, there are early signs of recognition of the value of graph structures in GenAI workflows. GraphRAG emerged as a technique to improve the retrieval and output capabilities of GenAI solutions, with Microsoft continuing to research the benefits of this approach. Meanwhile, Knowledge Graph market leader Neo4j saw its revenue grow beyond $200 million this year, crediting its ability to improve the accuracy, transparency and explainability of GenAI solutions as a major factor in this growth.
Whilst enterprise scale graphs may not be disrupting the market yet, expect them to become a hot topic as GenAI adoption grows.
Prediction: As the Large Language Model momentum stabilises, expect to see competition to develop foundation models in the Computer Vision space. These models will be able to perform a wide range of tasks on image and video data, helping accelerate adoption of AI for visual tasks.
Verdict: Correct – but quietly.
The Computer Vision market has been steadily growing, and it is projected to continue to do so, but Large Vision Models (LVMs) haven’t received anywhere near the level of attention and hype of LLMs, despite the range of use cases they enable. This is likely because these models are mostly being picked up by designers and developers, rather than being exposed directly to consumers. However, early in the year, we saw image capabilities rolled into popular consumer-facing tools.
After a late 2023 announcement, OpenAI rolled out its GPT-4 vision model throughout 2024, a key advancement to their offering, allowing ChatGPT to run off so-called Multi-Modal Models (models that can process a range of input types such as text, image, audio and video). In the last couple of years, Multi-Modal models have become the new frontier for the major AI model developers as they seek to combine previously separate processing streams into one model architecture. Across the board, numerous new models have been coming thick and fast: Meta, Anthropic, Google, Amazon, Mistral, and more have all made major Multi-Modal releases this year.
Another major advancement this year has come in the form of Generative Video, with Runway and OpenAI’s Sora catching the headlines at different stages of the year.
Aside from model development, product developers have sought to integrate computer vision progress into their solutions, and buyers have been following suit, with continued growth in adoption of Computer Vision tools for security, health and safety, and quality control, amongst other use cases. In fact, a Gartner survey this year predicted that 50% of warehouse operations will leverage a Computer Vision solution by 2027, emphasising that competition in the Large Vision Model space is likely to grow.
Prediction: The growth in data and AI applications is putting a tremendous strain on cloud compute infrastructure, and the growth of distributed technologies looks to be a further headache. Expect to see more Internet of Things technologies focused on computing on device and Federated Learning, whilst cloud providers seek to find ways to reduce cost and latency to remain competitive.
Verdict: Unclear.
The main computing takeaway from 2024 is that both Cloud and Edge computing are growing quickly, and as demand for hardware and compute power surges, there has been little need for the two markets to directly compete. Whilst we touched on cloud growth earlier, Edge Computing growth is arguably stronger, driven by increased demand for real-time analytics, automation, and enhanced customer experiences.
The Edge trend is unlikely to slow down, and chip manufacturers are investing heavily in preparation, with NVIDIA, AMD, Intel and Altera (amongst others) all making significant improvements to their Edge AI offerings as they jostle for position in a market that is likely to grow substantially in the coming years. Whilst Cloud and Edge are co-existing peacefully at present, this remains a space to watch, as growing awareness of the advantages of Edge computing could see it muscle in on Cloud solutions.
The AI market in 2024 has been filled with noise, confusion and chaotic energy, as organisations have faced pressure to find and adopt AI use cases in a turbulent economic environment. That being said, the noise had begun to quieten in the latter portion of the year, with success stories and torchbearers beginning to emerge to guide the journey, leaving an optimistic outlook for 2025.
Governments around the world have thankfully recognised corporate misuse of personal data and have brought in legislation to give citizens more rights over their data. GDPR, CCPA, PIPEDA, APPI and more give individuals around the world varying levels of protection and control over their personal information.
It’s this precarious data landscape in which we see AI starting to reach mainstream adoption. Many of us will be aware of the well-documented data privacy and copyright concerns reported in the press surrounding AI. But don’t be fooled into thinking that these worries are only present for the likes of OpenAI and Anthropic!
Even small and medium-sized organisations need to carefully navigate data privacy when implementing their own AI-driven tech.
We’re not privacy lawyers, and none of this article is intended as legal advice. However, we would like to point you to the ICO’s guidance around using artificial intelligence within the confines of GDPR.
In this section, they rightly point out:
It is not possible to list all known security risks that might be exacerbated when you use AI to process personal data.
Because AI presents such a vast range of potential use cases, the precise way you protect and secure user data within such a system largely depends on that system's scope, function, and construction.
With this in mind, any SME exploring the use of AI and automation within their organisation needs to be aware of the following seven AI and data privacy considerations, at the very least.
Under GDPR, all European and British organisations now need to think more carefully about what personal data they collect, what risks they introduce by working with that data, and how to keep that data secure.
However, AI can introduce certain temptations when it comes to data processing.
AI is incredible at filtering through and making sense of large amounts of data. Many organisations have a lot of siloed information that they desperately need to assimilate and understand. Charging AI with this task would seem like a silver bullet solution.
Yet there can be real data risks in lobbing chunks of personally identifiable data into the AI meat-grinder, just to see what comes out the other end!
One of the guiding tenets of GDPR is transparency. Data processors need to be honest and transparent about what data they collect, why they collect it, and how they use that data. AI adoption can present two stumbling blocks in the way of this transparency.
When a piece of software is “closed source,” that means that both users and the wider public are unable to personally inspect the software’s code because it is proprietary to a given organisation. Microsoft’s Windows operating system is a good example of closed source software.
When a solution is closed source and proprietary to an external provider, it can be difficult to interrogate quite what happens to the data you put into it, where that data goes, and what it does. Could the data end up on an insecure server somewhere? Could the data be used to further train the AI model against your data subjects’ wishes? There may not be a way for you, as the average user, to tell.
We’re not accusing any AI model or software of this behaviour, of course. But without having access to the code that runs the software, organisations like yours have little way of knowing what is truly happening under the bonnet.
The second issue is that of AI’s renowned “black box problem.” A lot of deep learning systems rely on swathes of training data and inferences that have now become so complex that even their creators don’t understand why they give some of the answers that they do.
Understandably, both issues present a significant challenge for those trying to be as transparent as possible about how personal data is used.
GDPR also contains stringent rules about automated decision making.
Individuals covered by GDPR have a right to opt out of solely automated decision making - i.e., where data controllers make significant decisions about individuals purely using an automatic programme or algorithm. Individuals also have a right to ask a human to reassess any decision solely made through automation. This remains the case whether AI plays a part in that decision process or not.
Additionally, our readers in the EU should also be aware of the new EU AI Act. This effectively bans the use of AI tools to impose “social scoring” on individuals or to identify people in real time using biometric data.
If you are considering creating a system that makes significant decisions about people’s lives, there are a few things you should bear in mind.
Firstly, identify the bare minimum data points that a human would need in order to make that decision about an individual case. This should be the absolute maximum data that you feed into your AI decision-making solution. If you give your AI solution more information than it is likely to need, you risk overexposing individuals’ data, you risk introducing bias into the AI model, and you risk regularly overworking the AI tool, which carries unnecessary energy costs.
Secondly, you need to consider how your solution is going to respect the wishes of those who opt out of automated decision making. How you achieve this is going to depend heavily on what the solution does and how it works, but a way of excluding data subjects from automatic decisions should always be built in from the outset.
Above all, always keep data subjects informed about the use of their data, tell them about your use of automated processing, and give them clear ways to opt out or to challenge any automated decision. Schedule in regular checks to ensure that your decision-making tools are working as they should be too - especially when your AI tools use machine learning to pick up new things and adapt their judgement over time.
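To make those two steps concrete, here's a minimal, illustrative sketch in Python. Everything in it - the field names, the opt-out flag, and the score_application() call - is hypothetical, not a prescription for how your pipeline should look:

```python
# Hypothetical sketch: minimal-input automated decisions with an opt-out route.
# Field names, the opt-out flag, and score_application() are illustrative only.

REQUIRED_FIELDS = ["years_experience", "qualification_level", "skills_match_score"]

def decide(applicant: dict, score_application) -> dict:
    # Respect the data subject's right to opt out of solely automated decisions.
    if applicant.get("opted_out_of_automated_decisions", False):
        return {"decision": None, "route": "human_review"}

    # Data minimisation: pass only the fields a human reviewer would need.
    minimal_input = {k: applicant[k] for k in REQUIRED_FIELDS if k in applicant}

    score = score_application(minimal_input)  # hypothetical model call
    return {"decision": "shortlist" if score >= 0.7 else "reject",
            "route": "automated",
            "inputs_used": list(minimal_input)}
```

The important design choice is that the opt-out check and the field whitelist sit in front of the model, so neither can be bypassed by whatever the model itself does.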
Essential Reading from ICO: Rights related to automated decision making including profiling
Data minimisation is where an organisation collects the bare minimum amount of personal data they need in order to function, and it’s wise data privacy practice. After all, minimising the amount of data you hold similarly minimises your data exposure risk and minimises data storage costs too.
You might also want to adopt a related concept: purpose limitation. That’s where personal data is only collected for specified, explicit, and legitimate purposes and never processed in ways incompatible with those purposes.
So where does AI come into this? Again, it might depend on what the AI is tasked with doing. For example, say you’re developing an AI solution that is designed to monitor a video feed and flag errors on an assembly line, though not to identify those responsible. It simply doesn't make sense to store vast amounts of likely repetitive video data, which may also introduce privacy concerns for workers and visitors in the vicinity. Such huge amounts of storage would also be vastly outside the scope of the application.
It would respect individuals’ privacy a lot more to only store and analyse video data whilst an instigating error is taking place, with measures in place to obscure any personally identifying images of team members captured in that segment of video.
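As a rough sketch of what that might look like in practice (assuming OpenCV for the camera feed and face blurring; error_detected() and persist_clip() are placeholders for your own defect-detection model and secure storage):

```python
# Rough sketch: only persist footage around an error event, with faces blurred.
import collections
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Obscure any faces in the frame before it is buffered or stored."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

def error_detected(frame) -> bool:
    """Placeholder for the real assembly-line error check (e.g. a defect model)."""
    return False

def persist_clip(frames) -> None:
    """Placeholder: write only this short, already-blurred clip to secure storage."""
    ...

def monitor(camera_index=0, buffer_seconds=5, fps=25):
    buffer = collections.deque(maxlen=buffer_seconds * fps)  # rolling pre-event buffer
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        buffer.append(blur_faces(frame))
        if error_detected(frame):
            persist_clip(list(buffer))  # store only the short window around the event
```

Nothing outside the short rolling buffer is ever written to disk, and faces are blurred before frames enter that buffer at all.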
It’s also worth bearing in mind that when an AI model has a smaller amount of purposeful, clean data to trudge through in order to formulate a response, this can have a positive impact on the model’s performance and robustness.
If personal details aren’t relevant to data processing or storage, then keeping that data completely anonymised is great data protection practice. After all, if personal data isn’t present, it can’t be breached or misused.
But anonymising data has another benefit too. When identifying characteristics (such as name, gender, ethnicity, sexuality and geography) are completely absent from a system, this eliminates the potential for bias towards or against certain individuals or groups. We’re all aware of how humans can bring their own biases into a process - but without careful instruction and training to the contrary, AI can introduce biases too.
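For illustration, here's a small pandas sketch of that idea: protected attributes are dropped outright and direct identifiers are replaced with salted hashes before the data goes anywhere near a model. The column names and the salt handling are purely illustrative:

```python
# Illustrative sketch: drop protected attributes and pseudonymise direct identifiers
# before training or analysis. Column names and salt handling are illustrative only.
import hashlib
import pandas as pd

PROTECTED = ["name", "gender", "ethnicity", "sexuality", "postcode"]
SALT = "rotate-and-store-this-secret-separately"  # illustrative; keep out of source control

def pseudonymise(df: pd.DataFrame, id_column: str = "customer_id") -> pd.DataFrame:
    # Remove characteristics that could introduce bias or identify individuals.
    df = df.drop(columns=[c for c in PROTECTED if c in df.columns])
    # Replace the direct identifier with a salted hash so records can still be linked.
    df[id_column] = df[id_column].astype(str).map(
        lambda v: hashlib.sha256((SALT + v).encode()).hexdigest())
    return df
```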
In an older, well-documented case, Amazon developed an ML recruiting tool to review job applicants’ CVs and spit out the best few candidates for each role in a completely objective, neutral way. However, the tool was trained using CVs submitted to the company over a 10-year period - most of which were from male candidates due to the male-dominated nature of the tech industry. The system therefore ended up “teaching itself” to favour male candidates over female ones.
Therefore, measures need to be built into systems to eradicate bias - and total anonymity should be built in wherever the scope of the project allows.
For example, RAIven is building a real-time, AI/ML-powered health and safety monitoring tool for a leading corporate client, which incorporates data from video streams. In order to respect anonymity, we’ve built in layers of abstraction so certain actions get flagged as potentially desirable or undesirable without feeding in any data that identifies an individual. This built-in anonymity eliminates possible privacy concerns around storing people’s physical likenesses - but it also helps to eradicate the possibility of the system picking up any biases along the way.
Care also needs to be taken around what AI tools are allowed to infer about data subjects. Even with a few seemingly innocuous data points, a solution may be able to deduce highly personal things like gender, medical conditions, or sexual orientation, simply through its incredible pattern-matching prowess!
Bias can also be purposefully built into AI tools, as evidenced by Google Gemini well-meaningly “over-diversifying” images it generated from prompts where a level of historical precision was expected.
In our view, AI tools need to be constructed with the maximum amount of anonymity and with unbiased neutrality built in from the outset.
AI tools are able to receive, process, and create new data at breakneck speeds, making it essential that any organisation using AI carefully considers the practicalities of storing that data.
Keeping your data minimised, sanitised, and process-specific obviously reduces the amount of space it is going to hold on a disc. This reduces storage costs (and environmental costs) in and of itself.
However, there’s another factor to consider here – transfer costs. Transferring data from one location to another is going to use energy and incur cost. Transferring data, especially over public networks, can also introduce cyber and privacy risks too.
With this in mind, aim to keep any data and computation as local as possible. Does a piece of data really need to be transferred halfway across the country to be computed and then returned? Or can the whole process happen on-site?
Also bear in mind that AI requires a lot more computational power than standard computing, so any hardware that is tasked with on-site AI computing will need to be fit for purpose.
For example, within some of the solutions we develop, we are able to plug an AI-ready computational device directly into a camera or sensor, so the data generated doesn’t need to travel through miles of cable in order to be computed. The needed computing all happens right there before the results of that computation are moved on to where they need to go. This keeps data risk and transfer costs to an absolute minimum.
Many of us are aware of attacks on people’s private data, such as social engineering attacks. But did you know there are AI-specific privacy attacks that can be used to uncover personally identifiable information from an AI-powered system?
In membership inference attacks, hackers probe an AI model using previously obtained personally identifying data about a target individual. Their aim is to work out whether that individual’s data was part of the AI’s training data or not. This could let hackers know whether an individual had interacted with a particular service during the time the training data was being amassed.
Another type of attack is a model inversion attack, where criminals (armed with some initial identifying data about their target/s) aim to probe an AI model to infer and extract personal information about those individuals within its dataset.
However, there is an important caveat here: both of these attacks involve the criminals already having some personally identifying information about the individuals they’re targeting, and both require attackers to gain access to the AI model itself. This makes a strong case for data privacy and access control best practices.
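To illustrate the intuition behind membership inference (and why limiting access to the model matters), here's a deliberately simplified probe. It assumes query access to a scikit-learn-style predict_proba() method; real attacks are more sophisticated, using shadow models and carefully calibrated thresholds:

```python
# Simplified illustration of a membership inference probe: models often report
# higher confidence on records they were trained on. Assumes the attacker can
# query predict_proba(); real attacks use shadow models and tuned thresholds.
import numpy as np

def likely_in_training_set(model, record: np.ndarray, threshold: float = 0.95) -> bool:
    confidence = model.predict_proba(record.reshape(1, -1)).max()
    return confidence >= threshold  # unusually high confidence hints at membership
```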
The ICO make an excellent point about recording what you do with the data under your care:
ML systems require large sets of training and testing data to be copied and imported from their original context of processing, shared and stored in a variety of formats and places, including with third parties. This can make them more difficult to keep track of and manage.
Your technical teams should record and document all movements and storing of personal data from one location to another. This will help you apply appropriate security risk controls and monitor their effectiveness. Clear audit trails are also necessary to satisfy accountability and documentation requirements.
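As a minimal sketch of what such an audit trail could look like (the field names and JSON-lines file are illustrative; a real deployment would write to tamper-evident, access-controlled storage):

```python
# Minimal sketch: append-only audit trail for movements of personal data.
# Field names and the JSON-lines file are illustrative only.
import json
from datetime import datetime, timezone

AUDIT_LOG = "data_movements.jsonl"

def record_movement(dataset: str, source: str, destination: str,
                    purpose: str, actor: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset,
        "source": source,
        "destination": destination,
        "purpose": purpose,
        "actor": actor,
    }
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example: log a copy of training data into a model-training environment.
record_movement("training_cvs_v3", "hr_database", "ml_training_bucket",
                "model retraining", "data-engineering")
```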
You may also find it enlightening to interrogate your technical supply chains, especially those which directly interact with sensitive data and AI components.
The best way to ensure the most stringent control over data privacy within an IT system is to have it custom built. This way, you have total visibility into its internal workings, you are less beholden to external supply chain fluctuations, and you’re not locked into a particular vendor’s way of doing things.
Having already reflected on my predictions for 2024, I feel the past year has been somewhat chaotic in the land of AI. With constant model releases, battles on benchmark leaderboards, and a trial-and-error approach to developing solutions, it was clear that the world needed some time to find its feet and really think about the implications of AI technologies on society and the ways we work. But, if 2024 was the year of noise, then 2025 is the year of order. Or at least, the year we start to put things into order. Now the 2024 dust has settled, we find a lot of organisations reflecting on what the future really looks like for them, and how they can use a promising technology in a way that makes sense for their business. It will be an interesting year ahead, and below I've pulled out 5 key areas that I'm paying attention to in 2025... let's check in again at the end of the year to see what really happened.
Let's get this one out of the way: AI agents became the big GenAI topic of the latter part of 2024, and 2025 is likely to see that popularity surge as OpenAI continue to push their o1 model and focus on how they can chain model outputs together to tackle more complicated workflows. However, that will come with the same challenges that came with LLMs and AI assistants, as developers and users go through trial and error to find the right use cases. Questions will also arise around the cost-effectiveness of these solutions, as prices to deliver these workflows remain higher than simpler automations (and in some cases higher than for a human to complete them). Whilst it is likely that agent-based systems will stick around and become more sophisticated in the future, companies pushing agents will focus largely on language use cases and automations for now. Think: marketing/CRM, web scraping, and collaborative tools.
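To make "chaining model outputs together" concrete, here's a bare-bones sketch of an agent loop. The call_llm() function and both tools are placeholders rather than any vendor's actual API:

```python
# Bare-bones agent loop: the model plans a step, a tool executes it, and the
# observation is fed back in until the model declares it is finished.
# call_llm() and both tools are placeholders, not a specific vendor's API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in your preferred model endpoint")

TOOLS = {
    "search_web": lambda query: f"results for {query!r}",  # placeholder tool
    "summarise": lambda text: text[:200],                   # placeholder tool
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = f"Goal: {goal}"
    for _ in range(max_steps):
        plan = call_llm(f"{history}\nNext action as 'tool: input', or 'FINISH: answer'")
        if plan.startswith("FINISH:"):
            return plan.removeprefix("FINISH:").strip()
        tool, _, tool_input = plan.partition(":")
        observation = TOOLS.get(tool.strip(), lambda x: "unknown tool")(tool_input.strip())
        history += f"\n{plan}\nObservation: {observation}"
    return "stopped: step limit reached"
```

Each loop iteration is another model call plus a tool call, which is exactly why the cost question above matters: a multi-step agent can easily cost several times what a single prompt or a simple automation would.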
The gap between OpenAI and the chasers closed significantly in 2024, and 2025 could be a challenging year for the company as it struggles to build a moat around its product. Questions had already started to emerge about the company's ability to maintain its lead as the chasing pack of LLMs significantly closed the gap, and OpenAI shifted focus to chain-of-thought "reasoning" and additional features such as search. Since penning this prediction, OpenAI has already faced a significant challenge from DeepSeek, the Chinese competitor that has matched its models in benchmarks and introduced a new method for training models. The challenge is exacerbated further by DeepSeek's decision to publish a paper detailing their training and technical approach, as well as open-sourcing their model for anyone to use. So far, OpenAI's response has been to release further features built on top of their models, such as the newly announced "Research".
As the initial wave of fear, excitement, and noise has calmed down, and buyers/developers are beginning to understand more clearly what AI can do, conversations are turning more towards "What can AI do for me?". Organisations are beginning to learn more about the data and processes that allow AI models to be built, and explore how they could tackle challenging workflows within their organisations with this technology. We can expect a major focus on building AI solutions in 2025, with 64% of CEOs listing AI as a top investment priority. This will show in two ways: organisations will begin to demonstrate how they have built their own internal AI workflows to empower their businesses, and we will begin to see more comprehensive AI powered products gaining traction with a clear value proposition.
The drive to create smaller models is not just about ecology and efficiency; it's also enabling language models to be embedded into hardware. By growing the capabilities of smaller models, we are enabling engineers to explore how they can run models on devices with limited storage, memory, and compute power, which means we may begin to see a wave of language features appearing on other devices. We're already familiar with smart speakers such as Alexa, but we may now start to see language models deployed locally within our phones (allowing access to the models without an internet connection), within robots to allow you to give spoken instructions, or perhaps even within appliances as people re-explore the smart home dream. With these products just around the corner, it's likely that we will see early versions of them hit the market in 2025. It will be interesting to see how this advancement influences the design of products.
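As a rough sketch of what running a small model on constrained hardware can look like today (the model name is illustrative, and this assumes the Hugging Face transformers, accelerate and bitsandbytes libraries with a supported accelerator):

```python
# Rough sketch: load a small open language model in 4-bit precision so it fits
# within tight memory budgets. Model name is illustrative; requires transformers,
# accelerate and bitsandbytes with a supported GPU/accelerator.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "HuggingFaceTB/SmolLM-360M-Instruct"  # illustrative small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

inputs = tokenizer("Turn off the kitchen lights.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```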
As AI demand refuses to ease, and as organisations realise the potential of AI and face pressure to show that they are doing something about it, we will see a lot of businesses committing to building skills within their teams. The 2025 World Economic Forum Jobs Report, published at the start of January, highlighted that 70% of respondents intended to hire for AI skills. This creates a major challenge, as the market for these skills is already limited. There is a high likelihood that organisations will struggle to fill roles and retain staff with AI skills as companies jostle to build their own internal capabilities. It will take time for universities, apprenticeships and other training routes to deliver the talent needed to relieve this demand. One route some organisations may consider is upskilling or reskilling internal staff; however, 63% of survey respondents considered this a major barrier, as it is likely to be costly to invest in, and training organisations will need to step up and provide support.
The AI world is moving fast, and it will be interesting to see how organisations adapt to the technology in 2025. I look forward to returning to these predictions at the end of the year to see what actually played out.
No matter your technological know-how, we’re here to help. Send us a message or book a free consultation call today.