### speakers

- Tim McGarr (BSI)
- Adam Leon Smith (Dragonfly)
- Hollie Hamblett (policy specialist - consumer perspective)
- Chanell Daniels (Digital Catapult)
  - helping early-stage innovators to navigate safety and ethical concerns
  - clear expectations of what trustworthy and ethical will look like are invaluable, as is being able to demonstrate where you have implemented safeguards
- Cristina Muresan (Cambridge)
  - responsible AI procurement
  - a "soft law governance mechanism" - consensus-based governance

### discussion

Where are the key challenges in responsible AI?

- Adam: the issues to solve are legal (cf. Biden's executive order); we are missing regulatory guidance in the UK. Does regulation stifle innovation? No - innovators like regulation, as it helps build confidence with investors.
- Hollie: when things go wrong, redress systems need to work for consumers.
- Chanell: it is not about what sits at the core of the technology, but rather thinking about the risks and trade-offs of the application. What is the threshold of appropriate risk management?
- Cristina: people and process. We need interdisciplinary teams developing standards, bridging theory and practice (e.g. high-level academia and the practicalities of users, such as procurement teams).

Audience question: there are multiple assurance methods, but talking to small tech companies, they don't really know which method to use when (or have the resources to do it formally). How can we guide them?

- Chanell: how do you find the right risk assessment? Companies don't need to understand everything about a standard, but knowing that there is a consultancy that can help them address a standard can build trust. Technical tools that mitigate known risks also help with understanding the quality of what could be done.
- Adam: you need an AI management / risk assessment process first; that process will then guide you to different standards, which in turn point to more specific risks and techniques.
- Tim: accessibility should be down to the standards bodies.
Audience question: the standards ecosystem is very complicated. Why are documents named with numbers, for example? Are there too many acronyms? How can standards be more welcoming?

- The development side is very complicated, but the resulting work should be more consumable.
- Proposal: encourage AI literacy; public consultations need to include an a priori information package that includes a translation of the technical language.

Question: will we see the emergence of a globally recognized standard? If so, where would it come from?

- Cristina: we have been working on a global standard specifically for procurement.
- Adam: ISO/IEC standards are developed within a framework recognized by the World Trade Organization; there is lots of global consensus already.
- Chanell: people might not use the standards language, but businesses are looking for benchmarks to be assessed against, or to hire people to help meet certain standards.
- Tim: the EU AI Act covers some global standards.

Question: how do standards keep up with innovation?

- Cristina: reframing the question: how could innovators use standards now to benefit the market, thinking about innovation from a standards starting point?
- Adam: you can innovate in terms of use case or technology. Use cases will mostly be covered by existing standards already; for new technology, standards won't apply and truly cannot keep pace.
- Chanell: it's a matter of framing the standard as relevant for someone. There is lots of pressure on businesses to launch quickly and cheaply.