Panel Intro:
Panel Discussion at SafeComp 2025
Title: "AI Under the Law: Building Safe, Trustworthy, and Compliant Systems"
Chair: Hakima Shiralizade (Ericsson Research)
Introduction and Scope
This panel focuses on the EU AI Act, the world's most comprehensive AI law, which entered into force on August 1, 2024, with safety at its core. Through its safety-centric provisions, this landmark legislation aims to protect citizens' rights while regulating AI deployments across sectors, enabling AI adoption with safety assured. In particular, the Act's tiered risk approach emphasizes the areas where the stakes are highest for safety, security, and human well-being, and where Trustworthy AI therefore matters most.
However, while consumers gain from increased safety and trust in AI-driven products and services, businesses must proactively evaluate their AI practices and invest in compliance infrastructure to meet evolving standards. Balancing societal safety with fostering innovation remains a critical challenge. Hence, this panel will focus on the following topics:
- Affected Sectors
- AI Maintenance and Resource Allocation
- Safety Components and Product vs Service Distinction
- General-Purpose AI Models
- Innovation & Regulation
Sebastian Hallensleben
Dr Sebastian Hallensleben is the Chair of CEN-CENELEC JTC 21, where the European AI standards underpinning EU regulation are being developed, and co-chairs the AI risk and accountability work at the OECD. Sebastian is the initiator and Programme Chair of the Digital Trust Convention and is Principal Advisor Digital Trust at KI Park. As Chief Trust Officer at resaro, he works on drilling down to ground truths about the capabilities of AI systems. Previously, Sebastian Hallensleben headed Digitalisation and Artificial Intelligence at the VDE Association for Electrical, Electronic and Information Technologies. He focuses in particular on operationalising AI ethics, characterizing AI quality, and building privacy-preserving trust infrastructures for a more resilient digital space.
Ibrahim Habli
Ibrahim Habli is a Professor of Safety-Critical Systems in the Computer Science Department at the University of York. He specializes in the design and safety assurance of software-intensive systems, with a particular focus on AI and autonomous applications. His research is inherently interdisciplinary, involving close collaborations with ethicists, lawyers, social scientists and clinicians, and long-standing partnerships with organizations such as Jaguar Land Rover, NASA and NHS England. He currently serves as the Director of the UKRI Centre for Doctoral Training in Safe AI Systems (SAINTS), a £16 million cross-disciplinary initiative that brings together five academic departments across three faculties and 35 industry, policy and regulatory partners. Professor Habli is also the Research Director of the Centre for Assuring Autonomy (CfAA), a £10 million partnership between Lloyd's Register Foundation and York dedicated to pioneering evidence-based and impactful research at the intersection of AI and safety. Professor Habli has served on several national and international safety standardization committees, including BSI DS/1, EUROCAE/RTCA, IEEE and MISRA. He has also provided expert advice on AI safety to industry, public organizations and the UK Government.
Rafia Inam
Dr Rafia Inam is a Senior Research Manager at Ericsson Research and an Adjunct Professor at KTH, Sweden, in the research area of Trustworthy Artificial Intelligence. She has conducted research at Ericsson since 2015 on 5G for industries, 5G network slicing and management, and the use of AI for telecom automation. She specializes in trustworthy AI, explainable AI, explainable RL, AI regulation (especially the EU AI Act), and risk assessment and mitigation using AI methods.
Hans Hedin
Hans Hedin serves as Intelligence Analyst & Intelligence Operations Lead at the Swedish Post and Telecom Authority (PTS), where he leads and advises on intelligence processes and operations across areas such as AI, digital inclusion, 6G, the twin transition, and unmanned systems. He is also a member of the European Working Group of Competent Authorities on Artificial Intelligence, contributing to the development of regulatory supervision frameworks for the EU AI Act in collaboration with representatives from over 15 EU countries, UNESCO, ENISA, and DG Reform. With over 25 years of experience, he has advised global companies on strategy and business development, including designing and implementing security programs, conducting country and location risk analyses, and publishing books, white papers, conference presentations and articles on intelligence, business development and security. He has managed and delivered over 700 projects as a consultant, analyst, and facilitator, supporting strategic decision-making through industry studies, benchmarking, and scenario planning.