In a significant move for the global governance of artificial intelligence, the UK has signed its first international treaty aimed at addressing the risks associated with AI technologies. In early September, Lord Chancellor Shabana Mahmood announced the agreement, which was developed under the auspices of the Council of Europe. The treaty represents a collective commitment from participating countries to manage AI products responsibly and protect the public from their potential dangers.
AI has the potential to deliver substantial benefits, many of which we have already experienced in our day-to-day lives in recent years. However, the treaty underscores the need for robust safeguards to mitigate the risks posed by misinformation and biased data, which can adversely affect decision-making processes.
The newly established framework mandates that countries monitor AI development and manage technologies within strict boundaries. Key provisions focus on safeguarding human rights, protecting data privacy, and upholding democratic values. Signatory nations are required to take decisive action against AI misuse that could threaten public services and endanger citizens.
Once ratified in the UK, this treaty will bolster existing laws, ensuring that the challenges posed by biased data and unfair outcomes are effectively addressed.
This is the first legally binding international treaty on AI, and it aims to unite countries in managing the risks associated with this rapidly advancing technology. Notably, nations outside the Council of Europe, such as the US and Australia, are also being invited to become signatories, underscoring the global scope of the effort to ensure the safe use of AI.
The agreement outlines three primary safeguards:
Human Rights Protection: Ensuring appropriate use of personal data, respect for privacy, and non-discrimination.
Democracy Protection: Preventing the erosion of public institutions and democratic processes.
Rule of Law: Mandating that signatory countries regulate AI risks and ensure public safety.
The UK has been at the forefront of promoting safe and trustworthy AI. As the treaty progresses towards ratification, the government will work with regulators, local authorities, and devolved administrations to implement its requirements effectively.
Peter Kyle, Secretary of State for Science, Innovation, and Technology, highlighted the transformative potential of AI for economic growth and public service enhancement, asserting that trust in these innovations is essential for their success. He explained that this treaty will be key for enhancing protections for human rights, the rule of law, and democracy, while advancing the global cause of safe, secure, and responsible AI.