A Security-focused AI Centre for Norway: What could it do?


This is a high-level overview of the NORA SIG event hosted by Simula on 9 April at their office in Oslo.

The Security in AI event hosted by Simula showcased a series of insightful talks addressing the multifaceted challenges and opportunities at the intersection of AI, security, and law. This summary consolidates key ideas from each presentation, offering a snapshot of the current landscape in AI security.

1. Challenges and Strategies in AI Security (Olav Lysne, Simula)

Lysne underscored the pressing challenges in AI security, emphasizing the need for robust testing, evaluation, and standardization to mitigate threats from adversarial attacks and unintentional errors. The talk highlighted the dynamic nature of AI security threats and the continuous effort required to adapt and develop new protective mechanisms.

2. European Cybersecurity Priorities (Lasse Gullvåg Sætre, Forskningsrådet)

Sætre’s discussion focused on the thematic priorities of the European Cybersecurity Competence Centre, stressing the importance of aligning national security efforts with broader European objectives to enhance collective resilience against cyber threats.

3. Integrity and Trust in AI (Elisavet Kozyri, UiT)

Kozyri explored two complementary themes: employing AI to ensure integrity, and the inherent challenges of maintaining the integrity of AI systems themselves. The talk emphasized the need for reliable models, the assessment of training data's integrity, and the application of AI to enhance system security through anomaly detection.

4. Legal Challenges in AI and Security (Lee Bygrave, UiO)

Bygrave tackled the legal intricacies of AI and cybersecurity, critiquing current legal frameworks and the overwhelming “regulatory swell” in the EU. His analysis called for the integration of legal principles from the inception of system development to address cybersecurity effectively.

5. GDPR Compliance in AI (Ferhat Özgur Catak, UiS)

Catak’s presentation delved into the intersection of AI development and GDPR compliance, highlighting the importance of trustworthiness, privacy, and the challenges in aligning AI models with GDPR, especially regarding data erasure and the right to be forgotten.

6. Agile Approaches for High-Risk Systems (Thor Myklebust, SINTEF)

Myklebust discussed agile methodologies for testing high-risk systems, emphasizing the critical role of compliance with legislation and standards for market access. The talk provided insights into SINTEF’s projects on autonomous systems and the evolving landscape of machine learning and cybersecurity standards.

7. AI Risks in Critical Sectors (Sandeep Pirbhulal, Norsk Regnesentral)

Pirbhulal highlighted the vulnerabilities and risks associated with the growing use of AI in critical sectors, citing real-world cyberattack incidents. He advocated for collaborative efforts and the establishment of an AI Security Centre to address these challenges collectively.

8. Security Issues with LLMs (Veronica Exposito de Maekimarttunen, OUS)

Exposito de Maekimarttunen raised concerns about the security implications of using Large Language Models (LLMs) in research, especially when handling sensitive data. She called for clear guidelines and dialogue to balance the power of AI with data protection needs.

9. Cryptology’s Role in AI Security (Martijn Stam, Simula UiB)

Stam presented on the potential of cryptology in securing AI systems, discussing various cryptographic techniques and their applications. He emphasized the need for ongoing research and collaboration in cryptology to enhance AI security.

10. AI Security Research Challenges (Anders Løland, Norsk Regnesentral)

Løland focused on specific AI security research challenges, such as dealing with small, sparse, and confidential data, and the diversity of problems AI aims to solve. He highlighted the importance of leveraging prior knowledge and efficient data use in addressing security issues.

Panel Discussion: A Vision for AI in Norway

The panel, comprising experts from various domains, debated the future direction of AI research and implementation in Norway. They stressed a multidisciplinary approach, embedding security at AI’s core, and fostering collaboration across sectors to address the complex challenges posed by AI and cybersecurity.

These discussions illuminate the breadth of issues at the nexus of AI, security, and law, highlighting the critical need for a holistic, interdisciplinary approach to navigate this evolving landscape effectively.

What do you think such a centre should do?
