General info
This is an event in the Nordic RSE seminar series.
Postdoctoral Researcher
University of Helsinki, Faculty of Law, Legal Tech Lab
University of Lapland, Faculty of Law
Ethics advisory board of the Finnish Center for Artificial Intelligence.
Research areas: AI regulation, liability for damage caused by AI, emotional AI, private law, comparative law, questions related to sustainability, maritime law, environmental law.
The EU institutions are currently preparing a regulation on Artificial Intelligence (AI). This is a complex process: continuous and rapid technological development makes it difficult to stay on track, as the law always lags behind the technology.
This presentation will outline the latest developments on the path to regulating AI and point out critical aspects.
Do we need to adjust existing legislation to match the development of AI, or should we consider implementing a completely new piece of legislation?
For developers: have you considered how to design AI platforms to comply with data protection legislation?
For Beatrice: how do you suggest reconciling the nature of AI in terms of data collection and processing (including unlimited data collection without a clear purpose) with data protection legislation?
What are the implications of the newest version of the AI Act accepted by the EU Parliament with regard to generative AI / foundation models?
In my understanding, the AI Act does not place strict limitations on academic research on AI; however, research software (and machine learning models) is often released under open licenses for anyone to reuse. What happens when a company reuses the research code/models? Does the company become liable for misuse of the AI, or are the original researchers responsible for it?
With respect to trustworthy AI, are the criteria quantifiable? In other words, can they be specifically defined?
It would help to see these lists in written format, so Beatrice, can you provide links for Annex II etc.?
For Beatrice: to what extent does the AI algorithm have to be transparent for the data recipient?
Isn't it so that the legal basis (TFEU 114) dictates that the member states cannot regulate nationally whatever is in the scope of the AI Act? So if the scope is determined by some definition of AI, and this definition is too broad, does e.g. the very recent Finnish law on automated decision-making in public services become illegal? Making, for example, our national taxation processes illegal again?
How about copyright issues? E.g. if one develops parts of software with ChatGPT. +1
Is the misuse of an AI system tied to the users only, or is the designer of the AI system also blamed for a loose prevention policy?
You cited a current definition of AI; could you share it in writing? Edit (from Petri M): I believe the definition has already changed (again), see https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence. It is again very broad, more or less meaningless, at least in this proposal from the Parliament. I believe the definition now states: ‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. (At least to me, this sounds like a zero-information definition: "a machine-based system that…generates outputs".)
How many effects do you think are precisely defined, and how many will end up being defined by court cases? (and does this lead to actions being based on risk tolerance?)
Another problem with technology/definition-based regulation is that the risks do not necessarily follow this type of definition. For example, regarding opacity: if you used Google today, are the results fair (or perhaps favoring US links)? We do not know, and Google will not tell. Does it matter whether there is AI or not? Aren't opacity and fairness the point? Or how about the ABS brakes in your car or the traffic lights in Helsinki: do we understand the underlying logic? Does it matter whether it is AI or not? Couldn't we have an act about computer systems with increased risks?