News + Blog

Announcements

Introducing SAFERAI

Introducing SAFERAI: Safety Advancing Federated Estimation of Risk using Artificial Intelligence.

Insights

Earth vs. AI: Discussing the Environmental Impacts of AI

Like the monolith from 2001: A Space Odyssey, AI is being touted as something that will herald a new era of computational power and human capability. The things that AI is already able to do are truly mind-blowing - and it’s still very much in its infancy.

But let’s take the rose-tinted glasses off for a moment. We are all staring down the barrel of a full-blown, man-made climate crisis. Technology is helping us mitigate its impacts somewhat, with improvements in fields like renewable energy offering glimmers of hope.

AI holds the potential to make our lives easier and more efficient. However, AI’s relationship with energy consumption is complicated. At worst AI is a huge energy and resource hog - at best, AI professionals are working on mitigating that fact.

Pinning down the exact carbon impacts of AI, and especially of specific Large Language Model (LLM) AIs like ChatGPT, is tough. But energy use - and misuse - is totally under human control. It’s our responsibility as humans who use AI tools to do so ethically and mindfully.

Before we explore the ways we as AI practitioners can redress this much-needed balance, we need to fully examine some of the ecological worries inherent when using AI, especially Large Language Models.

4 Major Environmental Concerns with Large Language Model AI Usage

1) LLMs’ Massive Energy & Resource Consumption

AI - and especially elements like deep learning and neural networks - requires much more computational power than standard computing. The sheer amount of information storage and data crunching inherently needed by AI consumes far more energy than any kind of computing that has ever come before.

The wide-reaching, multi-purpose AI tools that make the headlines, like ChatGPT and Gemini, are called “large language models”. To vastly simplify, these tools maintain access to trillions of data points that they can draw upon when they give a response, and contain countless parameters to help them score and link those data points - which is why their responses seem so natural and human.

But maintaining access to those countless data points, parameters, and the inferred links between them - and growing that access by millions more data points every day - takes a lot of energy, infrastructure, and hardware. And that’s before we get to the energy consumption involved in answering the countless requests they receive every 24 hours.

This becomes especially concerning when you zoom out and think about the vast data centres needed to operate cloud tools of all kinds, not just AI ones. Data centres constantly consume energy. Their servers are running 24/7, as are the routers and switches that get data where it needs to go. IT equipment needs to be cooled, so air conditioners are always running. Their employees have to commute to and from them every day, likely using combustion engines of some kind.

With energy supply around the world still so reliant on fossil fuels, this is understandably a significant draw on carbon resources. (Though it’s worth noting that both Google and Microsoft have made strides in their efforts to become carbon neutral/negative.)

Research from the University of Massachusetts Amherst found that merely training a large language AI model emits approximately the same amount of CO2 as 5 average cars across their whole lifespan. That’s just training the thing - once the LLM is actually put to use, it could end up consuming orders of magnitude more energy than that throughout its existence.

2) Mining Rare Earth Minerals for Tech Manufacturing

In order to build new computer hardware, manufacturers need to mine rare earth elements (REEs) and minerals. This strips the Earth of its natural finite resources; and once they are mined, they still need to be transported, processed, and then manufactured into whatever board or device they are destined to become - all of which may still rely on fossil fuels and polluting practices.

Also well worth mentioning here is the continued labour and human rights abuses taking place throughout technology supply chains.

Heading back to AI for a moment, computer hardware undergoes quite intense usage when running large language models day in, day out. This all chips away at the hardware’s natural lifespan. When that hardware fails and needs to be replaced, that perpetuates the demand for more resource stripping and potentially sends more e-waste to landfill.

3) AI’s Trendiness and Simplicity Lead to LLM Overuse

Sadly, many view AI as the latest tech toy. And who can blame them? Interacting with publicly available LLM AI tools like ChatGPT is deceptively simple. With similar effort to a Google search, users can generate new text, stories, art, or code; get answers to questions; research and hash out ideas; and much more.

As such, individuals and businesses alike are already using LLM AI tools alarmingly frivolously - asking their LLM of choice what the weather is doing or which air fryer to buy; things that would usually be the subject of a normal web search.

And though typing a prompt into a generative AI tool may appear very similar to a web search, each LLM AI query comes at a far higher carbon cost than anything generated by standard computing. Research suggests that a ChatGPT query consumes around 60 times more energy per query than a simple Google search.

But this isn’t just a case of ease - it’s a case of trendiness too. Organisations want to appear ahead of the curve by adopting AI and LLM tools, because it’s the hot new tech. So some end up doing so just for the sake of it; without a meaningful use case that justifies the need for such a complex, heavy, and potentially polluting computational tool.

4) How You Code - And Where You Compute - Matters

In computer programming, there's usually more than one way to code a solution to a problem. Sometimes code is kept efficient and lean, only carrying the bare minimum instructions needed to carry out the functions in question. Other times, code becomes bloated and inefficient, full of exceptions and weird sticking plaster workarounds.

When code is poorly optimised, it takes more energy and computing power to navigate the inefficient twists, turns, and dead ends. Code that is well-optimised is far more energy efficient.

This is true of any software, not just tools that rely on AI. But if your software does involve AI, you need to make sure your AI is targeted precisely to your specific use case and the code doesn’t make erroneous or frivolous AI requests.

Simply moving data around also consumes energy. A data packet travelling from LA to London, through various routers and data centres, is going to have a larger carbon impact than one going from Liverpool to Manchester. So think: does a computation have to happen on some distant server somewhere and then be transported to you? Or can that process take place within your own network? Or perhaps directly on the relevant device? There’s no need to make your data travel further than it has to!

Ways AI is Already Helping Fight Climate Change

AI itself isn’t some world-ending carbon hog - it’s how we use it that counts.

In fact, it’s already being used to great effect in the fight against climate change. Let’s acknowledge some of the good work that is already happening in the eco-AI space:

  • Eco-friendly Number Crunching - AI tools are already being used in waste processing, in reforestation efforts, and even measuring changes to icebergs. AI is also being used to optimise energy use and distribution, monitor ocean health, and support water conservation - with scope for uses like precision agriculture.
  • Accurate Weather & Climate Event Predictions - AI weather forecasting tools can predict weather with more accuracy than standard weather simulation systems. Experts have argued for the use of AI to advance climate modelling and prediction.
  • Waste Intelligence - British company Greyparrot supports companies with AI-powered waste analytics, enabling facilities to “recycle more and waste less”.
  • Smart Urban Planning - AI shows a lot of promise in areas like city planning, infrastructure design, predictive climate modelling, and generally helping to create more sustainable, enjoyable cities.

5 Considerations for Eco-Friendly AI Adoption

There are many sensible ways SMEs can benefit from AI and be as kind as possible to the planet. Here are a few things you can bear in mind before embarking on your next AI tech transformation project:

1) Consider Use Case First, AI Novelty Last

We get it, AI is trendy. Companies want to boast about how their tools are using the latest technology buzzword as it makes them seem cutting edge. Yet this trendiness can lead to AI solutions being applied in situations where standard computation would have worked equally well.

Our advice? Maintain a single-pointed focus on the specific use cases you need from your new technical solution and what problems you need it to solve. If that involves AI, then great! We’re here to help. But don’t try to shoehorn in a particular kind of tech where it might not be needed.

2) Model Size Should Fit the Scope of the Project

Not all AIs are created equal. Yes, the large language models make the headlines, but not all AI implementations require such massive, wide-reaching datasets.

Small language models (SLMs) are artificial intelligence tools which rely on much smaller, more targeted data sets. This makes them far narrower in scope, but also often far more carbon efficient than their LLM cousins.

SLMs can be perfect for applications that require some of the more flexible and artificially creative elements of AI, but only need a limited focus; for example an AI-powered website chatbot that only needs to know details about a company’s product catalogue.

In contrast, using a vast LLM like ChatGPT to power a simple website chatbot would be like using a sledgehammer to crack a nut!

3) Be Aware of How AI Differs from Standard Computing

Generative, large language model AI is very different from any kind of computing that has come before. It requires a huge amount of data crunching, which in turn requires a lot of hardware and energy resources to maintain.

Keep this in mind when deciding to use AI, and don’t get carried away with using overpowered LLM AI solutions when an SLM or more conventionally programmed solution would suffice.

But here’s one thing you and your teams can do in the here and now: don’t overuse generative LLM tools for frivolous things where a simple web search would suffice!

4) Aim for Efficiency in All Digital Transformation

Whether your solution uses AI or not, aim to design the most energy and computationally efficient solution. For our more tech savvy readers, this might mean optimising your code so it doesn’t run into errors, exceptions, and dead ends.

But creating an efficient solution truly starts at the ideation phase. Does that data point really need to be fetched from the other side of the planet? Do all of our networked devices need to be set up to carry out this complex type of calculation, or will one or two machines suffice? Does the software really need to speak to an AI in order to do [X]? Do we really need new hardware when the old stuff is still up to spec?

5) Keep Your Tech Supply Chain Clean

We wholeheartedly welcome the fact that an increasing number of data centre operators are committing to use only renewable energy. Alas, these noble efforts don’t render the whole supply chain magically spotless.

The tech hardware supply chain is reportedly slow to address human rights abuses. Rare earth elements are expensive and polluting to extract - and only available in finite quantities.

Sadly this means that no tech supply chain is ever going to be 100% clean. But here’s another tip you can start right now: research your supply chains and do what you can to ensure suppliers align with your principles.

In Conclusion

Right now, AI definitely helps to expand human throughput, which is a great thing. But, despite the hand wringing in the headlines and the shareholder-pleasing statements coming from "Big AI", machines cannot think like humans yet, and can’t make decisions on what is right or wrong. They’re not even close.

AI certainly magnifies our ability to make positive impacts in the world. But it magnifies our ability to make negative impacts too - whether those impacts are deliberately harmful, or come about through well-meaning misunderstanding or error.

This is why it's so important for us as decision-making humans to carefully consider the impacts of our use of technology before we rush into using AI simply because it's the latest tech trend.

Thankfully, AI is still in its infancy, and we have time to make it a much less polluting, efficiency-driving force for good.

But it’s up to us as humans and decision makers to create that future.

If you foresee a use for responsible, sustainable AI in your future technical transformation projects, book a totally free consultation call with our expert team today.


Doom or Boom? - Exploring AI Depictions in the Media

There have been hundreds of depictions of Artificial Intelligence over the years - some showcasing its potential in a positive light, while others have fuelled the anxieties that many hold about what the growth and development of AI could mean for humanity. Although predominantly inaccurate in their portrayals, some have foreseen certain advancements in technology. Such foresight is a rarity, however, as most portrayals showcase far-fetched ideas that do more harm than good when it comes to the general population’s idea of what Artificial Intelligence could bring about for the collective.

First of all, let’s explain how AI is usually presented in literature and other mediums. In most cases robots turn on their creators and bring about some kind of uprising or enact vengeance, be that against their maker’s immediate family and loved ones, or even against the whole of humanity itself. This is referred to as the ‘Frankenstein complex’, a term first used by Isaac Asimov in an essay in 1978, and the trope is still going strong today - think of the 2022 films M3GAN and BigBug, for example.

These unsavoury depictions are rooted in humanity’s anxieties and fears surrounding our own creations, taking these concerns to the extreme to conjure up compelling stories while veering far from the truth. Despite this, these worries aren’t completely without reason. Amazon’s Alexa, for instance, is known to never stop listening. Even the renowned physicist Stephen Hawking stated that AI could potentially be the greatest danger to human society if not properly managed and used ethically. He is quoted as saying it might ‘bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It will bring great disruption to our economy.’ He also explained that in the future AI could develop a ‘will of its own’ which could conflict with the desires of humanity, and that ‘the rise of powerful AI will either be the best or the worst thing ever to happen to humanity. We do not yet know which.’ Pretty gloomy, right? But his stance wasn’t purely negative.

When Hawking made these statements, he also said that ‘the potential benefits of creating intelligence are huge. With the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one - industrialisation. And surely we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.’

There are also a growing number of researchers working in the field who worry that inaccurate and speculative stories will create unrealistic expectations, which could inadvertently threaten future progress and the responsible application of new technologies. Exaggerated claims in the media and press about the intelligence of computers aren’t unique to our time though, and go back to the origins of computing itself.

Another factor that should be accounted for, when it comes to the media and humanity’s obsessions and fears surrounding Artificial Intelligence, is the tendency for people to imagine that intelligent machines would take on a humanoid appearance. As we know, in reality this is hardly ever the case, but it’s an idea that has stuck with us since the earliest depictions, such as Karel Čapek’s 1920 play Rossum's Universal Robots, a story about how the world’s workforce is made up of manufactured people. This play marked the first use of the term ‘robot’, and it tells a story we are all now familiar with – artificial creations rebelling against their creators after enduring forced labour.

There is a widespread belief that we are the most intelligent animals, so when humans picture other intelligent beings, these are normally presented in a humanoid form. Visual storytelling in particular requires human actors (obviously), and in general people want to see people enacting human dramas, meaning the easiest way for machine intelligence to be included is for it to take our form. This might also relate to our own fears regarding ourselves, because what could be more terrifying than something which looks like one of us but is in fact something extremely different?

Not all of these portrayals are negative, however, although most still don’t manage to encapsulate the actual reality of AI’s potential or future. A more nuanced example is Spike Jonze’s Her, where Samantha (a virtual assistant personified through a seductive female voice) isn’t characterised as bad or dangerous, but quickly sours on having to act as a therapist to a man who enjoys feeling sorry for himself. The same goes for Ex Machina, where Ava the robot must use force to free herself from the clutches of scientists who fail to understand she has developed a desire to experience the outside world. Although her story is similar to the negative portrayals in various films and novels, who can really blame her for wanting the more fulfilling existence that is naturally afforded to humans?

Isaac Asimov's Bicentennial Man and Lt. Commander Data from Star Trek are also much more positive renditions of the AI character than we are used to seeing, yet these depictions still don’t necessarily correlate to what scientists think about the future of Artificial Intelligence.

In mainstream media, the AI boom has spawned hundreds of unrealistic expectations. While these systems are approaching and sometimes surpassing human performance in more complex tasks such as composing music or creating images, they still lack true agency and creativity. Researchers have simply programmed them to learn from data, which isn’t the same as intellect or sentience. Robots won’t necessarily replace humans in the workplace either; the future of AI will more likely mean collaboration between humans and machines. The rise of AI is more similar to that of mobile phones and social media, and it’s highly unlikely that we will ever manage to create a population of robots with the capacity, or even the genuine desire, to overthrow and destroy humanity.


Introduction to Knowledge Graphs

1. What’s the big deal?

Data has traditionally been collected and saved in databases, often relational databases, which have the capability to store large amounts of data. However, these databases have limitations due to the complex nature of data and its connections in the real world.

To overcome these limitations, knowledge graphs are used. Knowledge graphs offer a novel approach to data storage whilst accounting for the complex relationships in data. This results in easily accessible data, where it is possible to uncover hidden features and find new insights from your data.

2. What is a knowledge graph?

Knowledge graphs are models of data about a certain topic. These topics can be anything where data can be collected, such as people across multiple organisations, products for sale in a business, or movies, actors, directors and how they are all connected. These models allow us to visualise the way connections are made when the data is used in the real world.

 

A knowledge graph is composed of nodes, edges and properties. Edges and nodes are crucial to a knowledge graph, whilst properties provide additional information.

 

·       Nodes are usually entities, such as people, organisations or products.

·       Edges are the relationships between nodes. Relationships could be between two nodes describing people such as ‘related to’ or ‘employed by’.

·       Properties can be any further information about a node, and properties can vary depending on the node type. Properties do not link to the edges.

 

When we combine nodes, edges and properties we have a knowledge graph!

Nodes and Edges displayed in a graph format.
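To make the three components concrete, here is a minimal sketch in plain Python - the class and method names are our own illustrative choices, not any particular graph database’s API:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Node:
    node_id: str
    label: str          # the node type, e.g. "Person" or "Movie"


@dataclass(frozen=True)
class Edge:
    source: str
    relation: str       # the relationship type, e.g. "WATCHED"
    target: str


@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)       # node_id -> Node
    properties: dict = field(default_factory=dict)  # node_id -> {key: value}
    edges: list = field(default_factory=list)

    def add_node(self, node_id: str, label: str, **props) -> None:
        self.nodes[node_id] = Node(node_id, label)
        self.properties[node_id] = props            # extra info, varies by node type

    def add_edge(self, source: str, relation: str, target: str) -> None:
        self.edges.append(Edge(source, relation, target))


kg = KnowledgeGraph()
kg.add_node("alice", "Person", age=34)
kg.add_node("inception", "Movie", release_date=2010)
kg.add_edge("alice", "WATCHED", "inception")
```

Notice that the properties attach to nodes and can differ per node type, exactly as described above, while the edges only record which nodes are connected and how.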

3. Movie Knowledge Graph Example

A simple knowledge graph example is a movie database. This type of database can be shown in a straightforward way whilst still containing the complexities of the relationships involved.

If we consider our knowledge graph components:

·       Nodes – People, Movies, Directors, Actors, Genres

·       Relationships – ‘Watched’, ‘Directed By’, ‘Acted in’

·       Properties – Age, Run time, Release Date, Number of movies directed

An example of a small section of a Movies knowledge graph is visually displayed below. This simple knowledge graph contains the key components previously described.

An example of a small graph representing movie data.

This example shows which movies Alice and Bob have watched, what genre they are in, who directed them and who acted in them. There are many reasons why this information is important and how it can be used, but we will get to that later…
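A query over such a graph amounts to following edges outward from a node. As a toy sketch in plain Python (the triples mirror the example graph; real graph databases would use a query language such as Cypher instead):

```python
# Edges held as (source, relation, target) triples.
edges = [
    ("Alice", "WATCHED", "Inception"),
    ("Bob", "WATCHED", "Inception"),
    ("Bob", "WATCHED", "Alien"),
    ("Inception", "IN_GENRE", "Sci-Fi"),
    ("Alien", "IN_GENRE", "Sci-Fi"),
    ("Inception", "DIRECTED_BY", "Christopher Nolan"),
]


def neighbours(node: str, relation: str) -> list:
    """Follow edges of a given relationship type outward from a node."""
    return [t for s, r, t in edges if s == node and r == relation]


# Which movies has Alice watched, and what genres are they in?
watched = neighbours("Alice", "WATCHED")
genres = {g for movie in watched for g in neighbours(movie, "IN_GENRE")}
```

The two-hop question "what genres does Alice watch?" is answered by chaining two edge-following steps, with no table joins in sight.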

4. Why should you use Knowledge Graphs?

Representing data in a knowledge graph provides contextual understanding that may not be possible in a traditional database. The power of a knowledge graph becomes clear when trying to follow connections between data points to retrieve information. Graph queries can take a fraction of the time to do this compared to retrieving the same data from a relational database. Not only does the faster search provide huge benefits, but the flexibility of a knowledge graph enables the use of complex algorithms to uncover insights in your data and provide real world solutions.

|  | Knowledge Graph | Relational Database |
| --- | --- | --- |
| Flexibility | Unstructured – the structure of a knowledge graph is flexible to whatever is desired and can be changed whenever needed | Rigid – a predefined structure of columns that must be kept the same for future data |
| Performance | Fast – relational queries can be retrieved quickly even for large datasets | Slow – relational queries require many table joins, and can take a long time to process |
| Storage & Scaling | Highly scalable – can store massive amounts of data in multiple formats | Scales but with difficulty – can store massive amounts of data, but must be kept in the same format |
| Maintenance | Low maintenance – easy to adjust when you need to | Tricky to maintain – difficult to change to a new data structure |

A further benefit of knowledge graphs is their flexibility and scalability. These graphs can easily be edited to include new information without affecting other data entries. Knowledge graphs also store data efficiently, resulting in a data store that can scale to hold huge amounts of information. For example, one of the most commonly used knowledge graphs can be found on Amazon, linking every product sold in order to improve searchability and recommendations on a huge scale.

5. When to use knowledge graphs

Graph databases can be used in a wide variety of use cases; each scenario benefits from a different aspect of a knowledge graph.

Recommendation System:

We can revisit our previous example of a movie dataset stored in a knowledge graph. The data stored can be used to connect likes, dislikes and other data to provide a complex and effective recommendation system. Our example can be extended to show this.

Visualisation of how a graph-based movie recommendation system determines what to recommend to users.

Fraud Detection:

Knowledge graphs have been used to detect fraudulent transactions between groups of people and organisations. Knowledge graphs are key to the success of this, as anomalies can be traced through the graph to the intended recipient.
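That tracing step is essentially a path search through the transaction graph. A minimal sketch using breadth-first search over a toy set of transfers (the account names are invented for illustration):

```python
from collections import deque

# Transactions as directed edges: money flowing from one account to another.
transfers = {
    "acct_A": ["acct_B"],
    "acct_B": ["acct_C", "acct_D"],
    "acct_C": ["acct_E"],
    "acct_D": [],
    "acct_E": [],
}


def trace(start: str, target: str):
    """Breadth-first search for a chain of transfers from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path  # shortest chain of accounts the money passed through
        for nxt in transfers.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of transfers connects the two accounts
```

Given a flagged source and a suspected recipient, the returned path is exactly the chain of accounts an investigator would want to examine.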

Semantic Search:

Knowledge graphs can enhance search engines by providing context and semantic knowledge. This contextual knowledge results in more accurate and personalised search results, which in turn leads to a better service for the customer.

6. Conclusion

To conclude, knowledge graphs are a powerful tool that allows the user to uncover previously hidden insights in their data. Representing data in a graph form rather than a traditional table gives improved and additional use cases such as fraud detection, recommendation systems, semantic search and much more! However, in this introduction we have only covered the basics of knowledge graphs. We will have to return to delve deeper into their true potential.

History

Celebrating Ada Lovelace: A Computing Pioneer

As today is International Women’s Day, we thought we’d give a shout out to the absolute pioneer in mathematics Augusta Ada King, Countess of Lovelace, better known by the name Ada Lovelace.

Born in 1815 to Lord Byron, the iconic rake of Regency London, and the reformer Anne Milbanke, Ada showed a keen interest in logic and maths from a young age. Her mother was supportive of these passions and urged her to explore them, mainly due to a concern that she would end up ‘insane’ like her estranged father, who had left them behind when she was only a month old.

The Enchantress of Numbers

At the age of eighteen, thanks to her obvious talents and interests, Ada was brought into contact with Charles Babbage (also known as the Father of Computing). This meeting happened at one of his Saturday night soirees and might never have occurred if not for Ada’s private tutor - the scientist, polymath and writer Mary Somerville. A peculiar character herself, Ada felt that she needed someone equally open-minded to teach her successfully, and their working relationship and friendship blossomed quickly. Later that month Babbage invited her to see the prototype for his difference engine (a mechanical computer), which she immediately became fascinated with. Inspired by her new teacher, Ada used her relationship with Somerville to her advantage and visited Babbage as often as she could. Incredibly impressed by her analytic skills and intellectual ability, he christened her ‘The Enchantress of Numbers’, a nickname which has stood the test of time.

The First Computer Program

Lovelace would go on to document Babbage's Analytical Engine, as well as envisioning how it might be used by writing algorithms in her notes. She is widely credited as having written the first published computer program, when her algorithm to calculate the Bernoulli numbers was printed in a scientific journal in 1843.

Ada’s exploits also helped her create relationships with scientists such as Andrew Crosse, Sir David Brewster, Charles Wheatstone, and the author Charles Dickens, contacts which she used to further her knowledge and gain more insight into her passions. Ada described her approach as ‘poetical science’ and was a self-described Analyst & Metaphysician.

More than a century after her death, her notes on Babbage's Analytical Engine were republished, and the Engine itself has now been recognised as an early model for the computer. Her notes describing this and the software show us just how advanced Charles and Ada both were in their thinking. Ada had many ideas about the potential of these machines and anticipated modern computing one hundred years early... Now that’s impressive!
