AI is becoming increasingly common in the tech stack, and there’s little doubt that it’s a game-changer for productivity.
Most global employees (70%) are willing to use AI to help manage their workload — a productivity boost that will be particularly welcome in the UK, where productivity levels are dwindling.
But as AI adoption accelerates, can we be sure our relationship with it is headed in the right direction? Vanta’s State of Trust 2024 Report details how AI governance and risk management are still relatively nascent.
For instance, only 43% of UK organisations currently conduct or are in the process of conducting regular AI risk assessments.
The EU AI Act may be a step in the right direction, but there is concern over how it will adapt to evolving AI threats. Moreover, UK organisations do not have to comply with it unless they operate within Europe.
This means a lot rests on UK organisations' self-regulation. Yet when it comes to formal policies governing AI usage, only 42% have implemented, or are in the process of implementing, a company AI policy.
This shows how security is lagging behind the growing use of AI tools, and why trust must be part of any conversation about AI.
The risks of AI without trust
While AI is helping drive productivity, it also complicates the security landscape. Most UK businesses (66%) plan to invest more in security around AI use within their organisation in the next year.
This is a clear reaction to the fact that since the proliferation of AI, cyber-risks and threats have gone up — with businesses reportedly experiencing more phishing attacks (35%), a rise in AI-based malware (34%) and more compliance violations (27%).
As if that weren’t enough, the potential damage of using AI without risk management and an in-house policy goes beyond data breaches and cyber threats — it hits a company where it hurts most: its reputation.
More than half (53%) of UK organisations say customer trust results from good security practices, meaning they must do more to protect that trust.
Demonstrating trust in the age of AI
Security professionals are already under pressure from a challenging security landscape and the burden of manual compliance tasks.
AI helps with this even as it complicates it further, and teams can't be expected to take on more without losing focus on mission-critical work. But equally, they can't afford to ignore the problem.
Below are three ways organisations can maintain a baseline of cybersecurity readiness that goes beyond the basic requirements of the EU AI Act, ensuring their use of AI remains compliant and that they reap its benefits without exposing themselves to unnecessary risks.
1. Strengthen their entire trust network
Trust is not just a reflection of an organisation. It also extends to its partners and the wider business ecosystem.
For instance, almost two-thirds (63%) of UK businesses agree that third-party breaches negatively impact their organisation’s reputation.
AI is set to complicate this further as more companies add AI to their tech stack and/or develop their own tools.
Therefore, to remain trustworthy, UK organisations must strengthen their entire trust network.
Companies must raise the security bar and build a bespoke standard of trust that centralises visibility, however and wherever AI is being used.
This is how they can define good security — for themselves and the companies they work with.
2. Take tools (and customers and employees) seriously
Within the workplace, almost half of UK organisations (49%) have concerns about the use of AI and the security risks it poses.
Plus, there is a growing number of news stories about the consent-free use of customer data to train AI. In fact, our research found that while only 29% of UK organisations require customers to opt in before their data is used for AI training, 74% don't offer an opt-out option either.
When trust and transparency matter so much in and outside of an organisation, it’s imperative that companies go beyond the standard when demonstrating trust.
3. Power trust with automation and AI
The irony is that while AI complicates the day job of your security team, it can also play a crucial role in relieving the compliance burden they face.
When asked about AI’s transformative impact, UK leaders listed the most transformative areas as improving the accuracy of security questionnaires (45%), streamlining vendor risk reviews and onboarding (43%), eliminating manual work (37%), and reducing the need for large teams (32%).
This shows just how transformative AI can be in unlocking efficiencies for security teams and ultimately driving business value for organisations.
By looking to AI as the solution rather than just the problem, organisations can do more with less.
This can transform the compliance burden security teams face, letting companies protect themselves and focus on mission-critical work instead.
This is how organisations keep pace with where the world is headed and ensure that trust goes on the journey with them.