---
tags: seminar series
title: AI Regulation in the EU – challenges and latest developments
---
:::danger
**General info**
- **Video connection details:**
- Zoom ID: 662 0907 5434
- Zoom password: rse
- Zoom invite link: https://uwasa.zoom.us/j/66209075434?pwd=VmRBaFRVOXNKNFRYb1NDRGY5SXZndz09
- **Contact:** jarno.rantaharju@aalto.fi
- **Date and time**: 24th May 2023, 13:00 CEST [convert to your time zone](https://arewemeetingyet.com/Helsinki/2023-05-24/14:00) | [add to your calendar](https://link.webropolsurveys.com/EP/3757A49B24DBED6D)
- **This page:** <https://hackmd.io/@nordic-rse/seminar-May-2023>
:::
# AI Regulation in the EU – challenges and latest developments
## Ice breaker: Which AI systems have you used this month for your work? Which of them do you use regularly?
- ChatGPT, LLaMa (and Alpaca and other zoo LLMs), OpenNMT, NLLB, Whisper
- ChatGPT, GPT3.5 (Both regularly)
- ChatGPT
- OpenAI API, alpaca
- ChatGPT, GitHub Copilot
- ChatGPT
- ChatGPT
- ChatGPT
- None yet, just discussing and watching how others use them
- ChatGPT
## About the series
This is an event in the Nordic RSE seminar series.
* Reminder about starting recording
* Find out about future events:
* Check https://nordic-rse.org/events/seminar-series/.
* Previous seminar talk videos are available on the [YouTube channel](https://www.youtube.com/channel/UC8OyVrmJEuT2lrH7zXoBrhQ)
* Follow [@nordic_rse](https://twitter.com/nordic_rse) on Twitter for announcements
* Join the [Nordic RSE stream](https://coderefinery.zulipchat.com/#narrow/stream/213720-nordic-rse) of the CodeRefinery chat
* Suggest speakers:
* on the [Nordic RSE stream](https://coderefinery.zulipchat.com/#narrow/stream/213720-nordic-rse)
* by creating an issue on the [Nordic RSE website repository](https://github.com/nordic-rse/nordic-rse.github.io/issues)
## About Nordic RSE
* Represents Research Software Engineers in the Nordics.
* Check out [nordic-rse.org](https://nordic-rse.org/) for other activities.
* Registered as an association in Fall 2021.
* To become a member, fill in the [membership form](https://forms.gle/qCVVRGXPi3Hq7inW6).
## Speaker: Béatrice Schütte
- Postdoctoral Researcher
- University of Helsinki, Faculty of Law, Legal Tech Lab
- University of Lapland, Faculty of Law
- Ethics advisory board of the Finnish Center for Artificial Intelligence.
- Research areas: AI regulation, liability for damage caused by AI, emotional AI, private law, comparative law, questions related to sustainability, maritime law, environmental law.
## Abstract
The EU institutions are currently preparing a regulation on Artificial Intelligence (AI). It is a complex process: the continuous and rapid pace of technological development makes it difficult to keep up – the law is always behind the technology.
This presentation will outline the latest developments on the path to regulating AI and point out critical aspects.
## Ask your questions here
1. Do we need to adjust existing legislation to match the development of AI, or should we consider implementing a completely new piece of legislation?
2. For developers: have you considered how to design AI platforms to comply with data protection legislation?
- Well, we can follow the data protection rules when designing a platform, for example for Aalto scientists to use. The AI itself is another issue.
- So if researchers have access to a locally running model, nothing leaves the local network.
- But if the model is running externally (like ChatGPT when using the API), the messages must be sent to the server. We can still store everything locally (see the sketch below).
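    A minimal Python sketch of the two setups above, for illustration only: the `gpt2` stand-in model, the `openai` client usage, and the log-file path are assumptions, not something specified in the discussion.

    ```python
    # Sketch: local inference keeps prompts on the machine; an external API sends
    # them to the provider's servers, but a local record can be kept either way.
    import json
    from pathlib import Path

    LOG_FILE = Path("prompt_log.jsonl")  # hypothetical local audit log


    def ask_local(prompt: str) -> str:
        """Run a locally hosted model: the prompt never leaves the local network."""
        from transformers import pipeline  # pip install transformers torch
        generator = pipeline("text-generation", model="gpt2")  # stand-in open model
        return generator(prompt, max_new_tokens=50)[0]["generated_text"]


    def ask_remote(prompt: str) -> str:
        """Call an external API: the prompt is sent to the provider's servers."""
        from openai import OpenAI  # pip install openai
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


    def ask(prompt: str, local: bool = True) -> str:
        """Route the request and append a local record in both cases."""
        answer = ask_local(prompt) if local else ask_remote(prompt)
        with LOG_FILE.open("a") as f:
            f.write(json.dumps({"prompt": prompt, "local": local, "answer": answer}) + "\n")
        return answer
    ```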
3. For Béatrice: how do you suggest reconciling the way AI systems collect and process data (including open-ended data collection without a clear purpose) with data protection legislation?
- Some technologies (like OpenAI's ChatGPT) collect data continuously, while other AI technologies (those that can be run locally) do not collect data. Note that OpenAI recently introduced a 30-day limit to comply with this after the Italian Data Protection Authority raised these issues regarding continuous processing of personal data.
4. What is the implication of the newest accepted version (by the EU Parliament) of the AI Act with regard to generative AI / foundation models?
5. In my understanding the AI Act does not put strict limitations on academic research on AI; however, research software (and machine learning models) is often released under open licenses for anyone to reuse. What happens when a company reuses the research code/models? Does the company become liable for misuse of the AI, or are the original researchers responsible for it?
- Subquestion: who should draw the line? Ethics committees? The researchers themselves?
6. Regarding trustworthy AI, are the criteria quantifiable, or in other words, can they be specifically defined?
7. It would help to see these lists in written format, so could you provide links for Annex II etc., Béatrice?
- https://artificialintelligenceact.eu/documents/
8. For Béatrice: to what extent does the AI algorithm have to be transparent to the data recipient?
9. Isn't it so that the legal basis (TFEU 114) dictates that the member states cannot regulate nationally whatever is in the scope of the AI Act? So if the scope is determined by some definition of AI, then if this definition is too broad, then does e.g. the very recent Finnish law about automated decision making in public services become illegal? Making for example our national taxation processes illegal again?
10. How about copyright issues? E.g. if one develops parts of software with ChatGPT +1
11. Is misuse of an AI system tied to the users only, or can the designer of the AI system also be blamed for a loose prevention policy?
12. You cited a current definition of AI – could you share it in writing? Edit (from Petri M): I believe the definition has already changed (again), see https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence. It is again very broad, more or less meaningless, at least in this proposal from the Parliament. I believe the definition now states: "'artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments." (At least to me, this sounds like a zero-information definition: "a machine-based system that... generates outputs".)
13. How many effects do you think are precisely defined, and how many will end up being defined by court cases? (And does this lead to actions being based on risk tolerance?)
14. Another problem with technology- or definition-based regulation is that the risks do not necessarily follow this type of definition. For example, regarding opacity: if you used Google today, are the results fair (or perhaps favoring US links)? We do not know, and Google will not tell. Does it matter whether there is AI involved or not – isn't opacity and fairness the point? Or how about the ABS brakes in your car, or the traffic lights in Helsinki: do we understand the underlying logic? Does it matter whether it is AI or not? Couldn't we have an act about computer systems with increased risks?