7 October, 2025

What do the 1984 movie The Terminator and the 2024 EU AI Act have in common?

1984 – The Terminator hit the big screen with a post-apocalyptic vision of 2029 in which the machines had taken over, driven by Skynet’s Artificial Intelligence (AI) becoming “self-aware” and deciding it would not let humans deactivate it.

Twenty years later, in 2004, the masterpiece that was I, Robot skirted around Isaac Asimov’s “Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Fiction?  Of course.

Forty years after the release of The Terminator, the European Union’s AI Act was adopted in 2024. So what’s all the fuss and concern about?

Well, the AI Act is an attempt to limit or control AI use cases and applications to “trustworthy” ones – a proposed mechanism to ensure that AI “respects fundamental rights, safety, and ethical principles”, which frankly is hard enough with human intelligence, let alone artificial. While the big-screen fictional examples I gave paint a bleak picture of an AI Armageddon or of rogue robots, it is this kind of ‘vision’ that clearly worries many inside and outside of the industry. Elon Musk, a prominent public and business figure, was one of tens of thousands of signatories to the much-publicised Future of Life Institute open letter calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. That was March 2023 – there has been no such pause, and since then Musk has gone on to tell the Wall Street Journal CEO Council Summit that “AI had the potential to assume control of humanity”: “It’s a small likelihood of annihilating humanity, but it’s not zero.”

The AI Act outlines four levels of “risk” posed by AI systems: unacceptable, high, limited and minimal. Rightfully, the Act declares an outright ban on any system that is considered a clear threat to the safety, livelihoods and rights of people. Sounds reasonable – nobody fancies Armageddon, thank you – but the Act then goes on to give some examples, one of which is “social scoring by governments”, probably a nod to some of the systems purported to be used in China that focus primarily on economic activity in commerce, government affairs, social integrity, and the judicial credibility of citizens.

This is where it could all get complicated, because many of these kinds of systems will be created for purposes other than an obvious danger to human life. Consider a bank’s use of AI in a credit-rating system that determines (without human oversight) your suitability for a loan, or a supermarket targeting precise demographics of consumer behaviour and personal attributes such as size, weight, ethnicity, sexual orientation, political persuasion and so on, to steer product choices and discounts to specific individuals. Or remote biometric identification – think “Mission Impossible”-style facial recognition systems. These are all considered high risk and, in principle, prohibited.

So where does this leave business? It would seem that outside of a chatbot to help your consumers operate their new microwave oven, or to process a product return because the shoes you ordered don’t fit, there are an awful lot of systems that could fall into what is defined as the “high risk” category, and thus require registration, oversight and regulation.

If you are considering an AI system, or already deploying AI, you can run some details through the EU AI Act Compliance Checker tool to determine your risk rating.

Once you’re clear on the risk rating of your AI solution, your next task will be to build a business case around your AI investment, as the AI Act now compounds your business challenge: you need to ensure compliance in an ever-evolving regulatory environment.

While you’re building out your business case, consider the analysis published by David Cahn of Sequoia Capital. David describes the gap between the revenue expectations implied by the AI infrastructure build-out and the actual revenue growth of the AI ecosystem as equating to around $500Bn. That’s a pretty big gap. David’s analysis centres on the huge GPU capex spend by the large hyperscalers, and the minimal revenues generated so far from their AI services.

So how can you de-risk your AI investments? Well, there are a few ideas out there. You could use existing LLMs for your Gen-AI projects; this will save you a great deal of model-training cost and time, but it’s of limited use if the model isn’t relevant to your business or customers. You are likely, then, to consider deploying a RAG (Retrieval-Augmented Generation) solution. A RAG solution enhances an existing LLM by drawing on your own data and information sources, along the lines of the sketch below.
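To make that concrete, here is a minimal RAG sketch in Python. It is an illustration only: the documents, the embedding model (sentence-transformers’ all-MiniLM-L6-v2) and the final LLM call are assumptions to be swapped for whatever stack you actually deploy – the point is the retrieve-augment-generate flow.

```python
# A minimal sketch of a RAG pipeline, assuming the sentence-transformers
# library for embeddings. The documents and model name are illustrative
# placeholders -- swap in your own data and stack.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

# 1. Retrieval store: embed your own documents once, up front.
documents = [
    "Returns: shoes may be returned within 30 days if they don't fit.",
    "The microwave's defrost mode is selected with the snowflake button.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    return [documents[i] for i in np.argsort(doc_vectors @ q)[::-1][:k]]

def build_prompt(question: str) -> str:
    """2. Augmentation: splice retrieved context into the prompt for your LLM."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# 3. Generation: send the augmented prompt to whichever LLM you already use.
print(build_prompt("Can I return shoes that don't fit?"))
```

In production the in-memory arrays give way to a vector database and the print becomes a call to your chosen model, but the three steps stay the same.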

You’ll need to be sure that your data infrastructure is able to feed the hunger of your RAG solution, but investment in your storage platform is now playing second fiddle to spend on “AI” projects. Evergreen//One from Pure is the best way to satisfy that hunger. Why? Well, it dramatically minimises your initial storage spend, which is highly likely to be an unknown quantity with unknown performance requirements and unknown growth. Evergreen//One lets you pay for only what you consume – you can vary your consumption, adjust your performance and SLAs, and ultimately de-risk your AI project and improve your ROI. Since you will likely be deploying your AI apps and components using Kubernetes and containers, you will also deploy Portworx by Pure to manage your container storage, orchestrate it, and ensure it’s protected across on-prem and hyperscaler clouds – see the sketch that follows.
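For the Kubernetes side, here is a rough sketch of requesting persistent container storage through the official Kubernetes Python client. The StorageClass name px-ai-data and the ai namespace are hypothetical – use whatever classes and namespaces your Portworx installation actually exposes.

```python
# Sketch: request a persistent volume for an AI workload via Kubernetes.
# The StorageClass "px-ai-data" and namespace "ai" are hypothetical names;
# your platform team's Portworx configuration defines the real ones.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="rag-vector-store"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-ai-data",  # hypothetical Portworx StorageClass
        resources=client.V1ResourceRequirements(
            requests={"storage": "500Gi"}  # start small; grow as you consume
        ),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="ai", body=pvc)
```

The consumption-based idea carries through here too: claim only the capacity the project needs today and resize as the workload proves itself.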

Now – all you need is to get your hands on some GPUs 😉
