The AI Safety Summit is a major global event to be held on 1–2 November 2023 at Bletchley Park, UK. Bletchley Park occupies a significant place in the history of computing as the home of Britain's wartime Enigma code-breaking effort, and the choice of venue symbolises the UK's commitment to using its technological expertise to address the challenges and opportunities of AI safety.
The summit will focus on how best to manage the risks arising from the most recent advances in AI (‘Frontier AI’), concentrating on the types of AI systems judged to pose the greatest risks. It will bring together international governments, leading AI companies, civil society groups, and research experts.
“As AI continues to rapidly evolve, we need a global approach that seizes the opportunities that AI can bring to solving humanity’s shared challenges. The UK-hosted AI summit this November will be key to helping us achieve this.”
"The UK will set out an ambitious vision for how the enormous potential of AI technology can be harnessed to speed up development in the world’s poorest nations at UNGA today.
The Foreign Secretary will call on international partners to come together to coordinate efforts for AI development in Africa and accelerate progress towards the UN’s Sustainable Development Goals. In leading the way, the UK will launch the ‘AI for Development’ programme, in partnership with Canada’s International Development Research Centre to focus on helping developing countries, initially in Africa, build local AI skills and boost innovation."
- Address by Foreign Secretary James Cleverly before the UN General Assembly (Sept. 18, 2023)
The first global AI Safety Summit has five objectives:
Day 1 (November 1, 2023)
The programme for Day 1 will consist of roundtable discussions covering the following themes:
● Understanding Frontier AI Risks
● Improving Frontier AI Safety
● A panel discussion on AI for good – AI for the next generation
Day 2 (November 2, 2023)
The Prime Minister will convene a small group of governments, companies and experts to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good.
In parallel, UK Technology Secretary Michelle Donelan will reconvene international counterparts to agree next steps.
● ICO, GDPR Guidance on AI and Data Protection, (Mar 15, 2023)
● GOV.UK, Digital Regulation Cooperation Forum, (Mar 28, 2023)
● GOV.UK, A pro-innovation approach to AI regulation, (Aug 3, 2023)
● GOV.UK, Competition and Markets Authority, AI Foundation Models initial report, (Sep 18, 2023)
● GOV.UK, AI Safety Summit: Introduction, (Sep 25, 2023)
● GOV.UK, AI Safety Summit: Introduction, (Oct 11, 2023)
● Reuters, Britain invites China to its Global AI summit, (Sep 19, 2023)
● UN, ‘World must pass AI stress test’, UK Deputy Prime Minister says announcing summit, (Sep 22, 2023)
● Reuters, EU considering whether to attend Britain’s AI Summit, (Sep 22, 2023)
● UK Tech News, UK unveils Government AI events in the run-up to AI summit, (Sep 25, 2023)
● Evening Standard, Tech Secretary hoping for agreement over AI safety ‘smoke alarm’ at summit (Sep 25, 2023)
● Infosecurity, AI safety summit faces criticism for narrow focus, (Sep 29, 2023)
● Tech Monitor, UK government urged to widen scope of AI Safety Summit beyond Frontier models, (Sep 29, 2023)
● Twitter, I visited @Bletchley Park where the world’s first AI Safety Summit will take place in one month’s time, (Oct 2, 2023)
● Bloomberg, AI Summit to Weigh Election Disruption, Security Risks, (Oct 15, 2023)
● GOV.UK, AI Safety Summit: Day 1 and 2 Programme, (Oct 16, 2023)
● TechUK, Call for Case Studies ahead of AI Safety Summit, (Oct 18, 2023), https://www.techuk.org/resource/call-for-case-studies-ahead-of-the-global-ai-safety-summit.html
● Chatham House, UK AI Summit, what can it achieve, Members event, (Oct 24, 2023)
● TechUK, techUK and DSIT joint roundtable on frontier AI safety, Official pre-AI Safety Summit event
● LinkedIn, Q&A with Secretary of State Michelle Donelan
● Royal Society, Horizon scanning AI safety risks across scientific disciplines.
(Attendee list as published by the Financial Times, Oct 28, 2023)
1. Ada Lovelace Institute
2. Adept
3. Advanced Research and Invention Agency
4. African Commission on Human and Peoples’ Rights
5. AI Now Institute
6. Alan Turing Institute
7. Aleph Alpha
8. Algorithmic Justice League
9. Alibaba
10. Alignment Research Center
11. Amazon Web Services
12. Anthropic
13. Apollo Research
14. ARM
15. Australia (government)
16. Berkman Klein Center for Internet & Society, Harvard University
17. Blavatnik School of Government
18. British Academy
19. Brookings Institution
20. Canada (government)
21. Carnegie Endowment
22. Centre for AI Safety
23. Centre for Democracy and Technology
24. Centre for Long-Term Resilience
25. Centre for the Governance of AI
26. Chinese Academy of Sciences
27. Cohere
28. Cohere for AI
29. Columbia University
30. Concordia AI
31. Conjecture
32. Council of Europe
33. Cybersecurity and Infrastructure Security Agency
34. Darktrace
35. Databricks
36. EleutherAI
37. ETH AI Center
38. European Commission
39. Faculty AI
40. France (government)
41. Frontier Model Forum
42. Future of Life Institute
43. Germany (government)
44. Global Partnership on Artificial Intelligence (GPAI)
45. Google
46. Google DeepMind
47. Graphcore
48. Helsing
49. Hugging Face
50. IBM
51. Imbue
52. Inflection AI
53. India (government)
54. Indonesia (government)
55. Institute for Advanced Study
56. International Telecommunication Union (ITU)
57. Ireland (government)
58. Italy (government)
59. Japan (government)
60. Kenya (government)
61. Kingdom of Saudi Arabia (government)
62. Liverpool John Moores University
63. Luminate Group
64. Meta
65. Microsoft
66. Mistral
67. Montreal Institute for Learning Algorithms
68. Mozilla Foundation
69. National University of Córdoba
70. National University of Singapore
71. Naver
72. Netherlands (government)
73. Nigeria (government)
74. Nvidia
75. Organisation for Economic Co-operation and Development (OECD)
76. Open Philanthropy
77. OpenAI
78. Oxford Internet Institute
79. Palantir
80. Partnership on AI
81. RAND Corporation
82. Real ML
83. Republic of Korea (government)
84. Republic of the Philippines (government)
85. Responsible AI UK
86. Rise Networks
87. Royal Society
88. Rwanda (government)
89. Salesforce
90. Samsung
91. Scale AI
92. Singapore (government)
93. Sony
94. Spain (government)
95. Stability AI
96. Stanford Cyber Policy Institute
97. Stanford University
98. Switzerland (government)
99. Technology Innovation Institute
100. TechUK
101. Tencent
102. Trail of Bits
103. United Nations
104. United States of America (government)
105. Université de Montréal
106. University College Cork
107. University of Birmingham
108. University of California, Berkeley
109. University of Oxford
110. University of Southern California
111. University of Virginia
112. x.ai
"We share your assessment that Britain’s “light-touch” approach to regulating AI is unlikely to establish the necessary guardrails to make it safe and reliable. In our comprehensive survey of national AI policies and practices, the Artificial Intelligence and Democratic Values index, we found that countries favour greater regulation as they develop a deeper understanding of the uses of AI. This is true not only in the EU and China, but also in America, where Joe Biden has recently stated that companies should not release commercial AI products that are not safe. The White House has called for an AI bill of rights, and federal agencies, including the Federal Trade Commission, have issued a joint declaration on enforcement efforts against discrimination and bias in automated systems. Chuck Schumer, the leader of the Senate, has made AI a legislative priority.
"As for the principles-based approach you propose, one possibility is the Universal Guidelines for Artificial Intelligence, a foundational framework for AI policy that outlines rights and responsibilities for the development and deployment of AI systems to maximise the benefits and minimise the risks."
Article for the Council on Foreign Relations (CFR), The UK AI Summit: Time to Elevate Democratic Values (Sep 27, 2023)
"First, the Global AI Summit must be inclusive. Prime Minister Sunak is already under criticism for a preliminary announcement that included statements from only tech CEOs and a plan that appears to sideline academics and civil society.
"Second, the AI safety agenda should not ignore the AI fairness agenda. Prime Minister Sunak is right to underscore the need for an international framework to ensure the safe and reliable development of AI.
"Third, human rights and democratic values should remain key pillars of the UK AI Summit. There are many AI policy challenges ahead and several of the solutions do not favor democratic outcomes. For example, countries emphasizing safety and security are also establishing ID requirements for users of AI systems. And the desire to identify users and build massive new troves of personal data is not limited to governments."
CAIDP Letter to UK Prime Minister Rishi Sunak on the UK AI Summit (Oct 16, 2023)
"Civil society groups and independent academic experts must be at the table with tech CEOs, government ministers, and others who will shape the UK AI strategy and propose the next steps after the Summit is concluded. They should not be relegated to fringe events. Public participation is central to democratic legitimacy.
"We also urge you to establish prohibitions on the development and deployment of certain AI systems. The previous UN High Commissioner for Human Rights has said that there should be a moratorium on AI systems that violate human rights. Computer scientists have urged a pause on advanced AI systems. If it is not possible to maintain control of an AI system, then there should be a clear obligation to terminate the AI system. One concrete outcome of the AI Summit should be the endorsement of the Termination Obligation set out in the Universal Guidelines for AI.
"The UK government must seize this moment to ensure that AI is trustworthy and human-centric, and that AI governance protects fundamental rights, upholds democratic values, and preserves the rule of law. The Center for AI and Digital Policy stands ready to assist you in this critical undertaking."
CAIDP Statement to UK Parliament on Governance of AI (Nov 25, 2022)
CAIDP strongly suggests that the UK regulatory system address harmonization of rules and standards at the national, regional, and global levels, to address the cross-border nature of AI and data systems, in line with the resolutions of the Council of Europe. Such harmonization will help achieve a ‘common ground around security, safety, and system resilience’.
CAIDP Statement to UK regarding AI Accountability (Nov 17, 2021)
Based on our assessment of the relationship between AI policy and democratic values, the recent pronouncements of the G7, the uproar in the UK regarding the use of algorithms for educational placement, and new concerns about the monitoring of workers, CAIDP recommends that the DCMS withdraw the proposal to remove Article 22 or otherwise diminish legal accountability for the use of AI techniques. After all, human review is at the heart of the British political tradition. The protections afforded by Article 22 are fundamental safeguards for protecting fundamental rights and the rule of law.
The 2022 AIDV Index covers 75 countries and results from the work of more than 200 AI policy experts in almost 60 countries. The UK is in Tier 2 of the 2022 index. The UK is engaged internationally in the development of AI governance in line with the values of fairness, freedom, and democracy, including working with partners to shape frameworks under development such as the EU AI Act and the potential Council of Europe legal framework. The UK has endorsed the UNESCO Recommendation on the Ethics of AI.