For the past six years, INHR has been convening international dialogues on the safety of AI military systems with retired generals, diplomats and current tech professionals. Our conclusion, after discussions with US, European, Chinese and Indian experts: there is no one-size-fits-all answer for governing sensitive AI applications. Bans are impractical, the UN is not well suited to regulate, and the limits of national law prevent states from investigating or correcting problems when an AI incident occurs. INHR experts have developed a voluntary Code of Conduct and a best-practices guide for mitigating the risks of AI weapons. We offer training on these practical approaches to Responsible AI for companies and countries alike.
Thanks to Founders Pledge, INHR's multilateral dialogue on AI military safety began in May 2024 to explore the risk that terrorists might use AI tools to create biological weapons. Governments and the international community lack the capacity and institutions to prevent and respond to AI x bio risks. Companies devise guardrails for Large Language Models and Biological Design Tools to prevent misuse, but malign actors "jailbreak" and find loopholes in these safety measures. INHR hopes its multistakeholder dialogue helps the international community stay one step ahead of malign actors by building capacity and sharing model safety practices to mitigate these risks.
In August 2023, INHR co-hosted a Track II AI dialogue at Frederiksberg Castle in Copenhagen with our sponsors, the Center for a New American Security (CNAS) and the Royal Danish Defence College. We were pleased to meet with multi-domain experts from the U.S., Europe, China and India. The conference was a success, with productive discussion from all participants, in person and online. For more details, see our Model Practices Guide to Test, Evaluation, Validation and Verification (TEVV) of AI-enabled military systems.
At the AAAI-23 conference in Washington DC, Eric Richardson explained to computer scientists and data professionals why diplomacy is important to the future of AI. He provided an overview of UN Human Rights Council resolutions touching on AI and of UNESCO's AI Ethics Principles (see blog below). As a human rights NGO, we know how important it is to protect data privacy, fight bias, and ensure that AI is a force for good in human rights. But as technology-savvy professionals, we are not afraid of AI: we know that AI applications can help meet these challenges and protect other human rights, while also documenting violations around the globe as needed. We believe that informing international legal and human rights principles with technical considerations and a deeper understanding of AI is a constructive way forward.
UN regulation of artificial intelligence has been scattershot to date, but we hope that the UN Secretary General's new expert group can help to bring coherence to the international playing field.
When serious discussion of AI governance occurs, INHR is there, including our attendance or presentation at recent events such as:
- the AAAI-23 Conference in Washington DC (Feb 2023)
- the REAIM Conference in the Hague (Feb 2023)
- the Luxembourg LAWS conference (May 2023)
- the UNIDIR Innovations Dialogue (June 2023), and
- the 53rd Human Rights Council discussion of new technologies (June/July 2023).
We will host our next trilateral meeting with the Royal Danish Defence College in Copenhagen in August 2023.
INHR has been fortunate to convene a trilateral dialogue among US, Chinese and international experts, which has produced important recommendations on international governance and safety for AI-enabled military applications. Click below to learn more about our most recent recommendations on the intersection of AI and biosecurity.
INHR experts can help link developing countries and civil society with the expertise needed for technology and policy decisions on telecommunications, artificial intelligence and cyber challenges. We can connect you with industry, academia, and policy makers who are confronting the challenges of new technologies, and help you deploy solutions in your country or organization. For example, we helped Small Island Developing States and Least Developed Countries (SIDS/LDCs) at the UN GGE on Lethal Autonomous Weapons Systems (LAWS).
INHR helps states and civil society think about the opportunities and challenges of the cyber and Artificial Intelligence revolutions. First, we help states use technology for good in areas like health, human rights and development work, including internet freedom, privacy and data security. Second, we can represent you in the UN technical agencies at the forefront of addressing technology. We understand the balance between innovation and regulation and prefer establishing guardrails to squelching creativity.
The United Nations alphabet soup of agencies addressing emerging tech can be complicated. The Convention on Certain Conventional Weapons (CCW) has a Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS). UNIDIR holds conferences on cyber issues and Outer Space & Security. The UN Human Rights Council has resolutions on the impact of neurotechnology, cyberbullying, and the impact of new and emerging technologies in the military domain. Whether you just want to attend these meetings or understand the impact of UN resolutions and decisions, INHR can help.
We are experts in many of the multistakeholder platforms that bring industry, civil society and governments together, including the Internet Governance Forum, the Business and Human Rights Forum, and the Open Government Partnership. Let us help you strategize about how to use these platforms to advance your priorities. Our work on removing gender bias from AI has been recognized by the AI for Good Summit and the UN International Telecommunication Union.
For all to benefit from emerging technologies, ensuring access and inclusivity is key. INHR was an AI for Good finalist for our proposals to combat gender bias, and we can help you address challenges like:
- working on product design to avoid AI bias based on race or gender,
- making technologies more accessible to people with disabilities,
- promoting connectivity to ensure those in remote areas and countries can access the potential of new technologies.
Sign up here if you would like to receive INHR's latest updates on AI and new technology governance.