Trustworthy AI: one year after the Executive Order
Privacy-enhancing technologies and global initiatives are shaping secure, responsible and trustworthy AI adoption, writes Ellison Anne Williams
Over the past two years, the hype around Artificial Intelligence (AI) has been unprecedented — and so has the resulting push to understand and adopt AI-powered business-enabling capabilities.
Enterprise leaders across verticals want to harness the power of AI to improve efficiency, extract data-driven insights, and drive positive business outcomes.
While AI tools are indeed on the path to delivering value to many organisations, the increased visibility around this quickly evolving category exposes another by-product of AI usage: elevated organisational risk.
Recognising this risk has spurred several global regulators and lawmakers to action. One prominent example is the set of directives outlined by the US government, which aim to ensure a safe and sustainable path forward for government-facilitated AI efforts.
On October 30, 2023, the White House issued the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, a framework designed to guide US federal agencies as they adopt AI-powered capabilities.
The AI Executive Order was notable for its depth and clear directives, including specific calls to action for more than 20 agencies. Implementation deadlines ranged from 30 to 365 days.
As we examine the AI landscape one year after this directive, the progress made, including the recently released National Security Memorandum on Artificial Intelligence, is encouraging.
While these actions effectively establish the baseline expectation that privacy and security cannot be afterthoughts when adopting AI but must be intentionally integrated into AI systems from the beginning, one action-filled year is not the end of the story.
It is important to continue this commitment to creating an environment where AI risks are acknowledged, privacy is respected, and security is foundational.
At its core, Secure AI is about minimising risk and enabling trust: enhancing decision-making while protecting privacy and keeping security foundational.
To deliver the best outcomes, AI/ML capabilities need to be trained and enriched using a broad, diverse range of data sources.
Foundational to these efforts are Privacy-Enhancing Technologies (PETs), a family of technologies uniquely equipped to enable, enhance, and preserve data privacy throughout its lifecycle.
PETs allow users to capitalise on the power of AI while mitigating risk and prioritising protection.
Data is the foundation upon which AI is built, so it may seem obvious that the privacy and security challenges that have long been associated with data also extend to AI tools and workflows.
Yet, within many organisations, the fog of AI hype seems to have hidden this reality. Since the surge of activity driven by the host of Generative AI tools that burst onto the scene in late 2022, numerous AI efforts have advanced with little thought for the security implications or long-term sustainability.
Responsible AI innovation requires action — and systemic action requires resources.
Like the AI Executive Order directives that initiated a number of workstreams in the US, there remains a role for global governments to work alongside industry to support safe, responsible, trustworthy, and sustainable AI practices.
Technical AI experts from nine countries and the European Union will soon meet in San Francisco to discuss international cooperation on AI safety science through a network of AI safety institutes.
Legislative and regulatory actions and the funding of tools and technologies that prioritise privacy and security further bolster global AI leadership.
Dedicating resources to adopting technology-enabled solutions, such as PETs, will help ensure that the protection of models and workflows is foundational, safeguarding the vast amount of sensitive data used during AI training.
Reflecting this pursuit, the European Union approved the EU Artificial Intelligence Act in March 2024. This consumer-centric act mandated the right to privacy, stating that personal data protection must be guaranteed throughout the entire lifecycle of the AI system and noting:
“Measures taken by providers to ensure compliance with those principles may include not only anonymisation and encryption but also the use of technology that permits algorithms to be brought to the data and allows training of AI systems without the transmission between parties or copying of the raw or structured data themselves.”
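In practical terms, that language points at patterns such as federated learning, where the model travels to each data holder and only parameter updates, never the underlying records, are exchanged. The sketch below is a minimal illustration of that idea under simplifying assumptions, not a description of any particular product or mandated approach; the parties, data, and function names are hypothetical.

```python
# Illustrative sketch of "bringing the algorithm to the data": each party trains
# a shared linear model locally, and only model weights are exchanged.
# All data and party setups here are hypothetical placeholders.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=50):
    """One party refines the shared model on its own data; raw records never leave."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(local_weights):
    """The coordinator combines the parties' weight updates into a new global model."""
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(3):                            # three hypothetical data holders
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    parties.append((X, y))

global_w = np.zeros(2)
for _ in range(5):                            # a few federation rounds
    updates = [local_train(global_w, X, y) for X, y in parties]
    global_w = federated_average(updates)

print("learned weights:", np.round(global_w, 2))  # approaches [2.0, -1.0]
```

Production deployments layer further protections, such as encrypting the exchanged updates, on top of this basic pattern.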
The NCSC Guidelines for Secure AI System Development were released in the UK in November 2023.
They identified security as a core requirement, not just in the development phase but throughout the life cycle of the system, and pointed to PETs as a means of mitigating risk to AI systems: “Privacy-enhancing technologies (such as differential privacy or homomorphic encryption) can be used to explore or assure levels of risk associated with consumers, users and attackers having access to models and outputs.”
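As a concrete, if simplified, illustration of one technology the guidelines name, the sketch below applies the Laplace mechanism, a standard building block of differential privacy, to a simple count query: noise calibrated to the query's sensitivity is added before release, bounding how much any single record can influence the published answer. The dataset and the epsilon value are hypothetical and chosen only to show the shape of the approach.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# The records and epsilon below are hypothetical illustrations.
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a differentially private count; the sensitivity of a count query is 1."""
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.default_rng().laplace(scale=1.0 / epsilon)
    return true_count + noise

salaries = [48_000, 52_000, 61_000, 75_000, 90_000, 120_000]   # hypothetical records
private_answer = laplace_count(salaries, lambda s: s > 60_000, epsilon=0.5)
print(f"noisy count of salaries above 60k: {private_answer:.1f}")
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy, which is exactly the kind of risk trade-off the guidelines ask system owners to reason about explicitly.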
As the AI market continues to expand exponentially, leaders must understand and support efforts to drive the responsible use of these technologies.
That support includes crafting directives, policies, and budgets to advance Secure AI efforts. It also includes working with tech leaders, academics, and entrepreneurs who have a strong stake in advancing the adoption of these technologies in a secure and sustainable way.
Prioritising the safe, secure, and responsible use of AI and providing the funding necessary to sustain its advancement will ensure the impact of these transformative tools far into the future.