General info
AI Regulation in the EU – challenges and latest developments
Ice breaker: Which AI systems have you used this month for your work? Which of them do you use regularly?
About the series
This is an event in the Nordic RSE seminar series.
About the Nordic RSE
Speaker: Béatrice Schütte
Postdoctoral Researcher
University of Helsinki, Faculty of Law, Legal Tech Lab
University of Lapland, Faculty of Law
Ethics advisory board of the Finnish Center for Artificial Intelligence.
Research areas: AI regulation, liability for damage caused by AI, emotional AI, private law, comparative law, questions related to sustainability, maritime law, environmental law.
Abstract
The EU institutions are currently preparing a regulation on Artificial Intelligence (AI). It is a complex process: the continuous and rapid pace of technological development makes it difficult to stay on track – the law always lags behind the technology.
This presentation will outline the latest developments on the path to regulating AI and point out critical aspects.
Ask your questions here
Do we need to adjust existing legislation to match the development of AI, or should we consider implementing a completely new piece of legislation?
For developers: have you considered how to design AI platforms to comply with data protection legislation?
For Beatrice: how do you suggest reconciling AI's approach to data collection and processing (including unlimited data collection without a clear purpose) with data protection legislation?
What are the implications of the version of the AI Act most recently adopted by the EU Parliament with regard to generative AI / foundation models?
In my understanding, the AI Act does not place strict limitations on academic research on AI; however, research software (and machine learning models) are often released under open licenses for anyone to reuse. What happens when a company reuses the research code/models? Does the company become liable for misuse of the AI, or are the original researchers responsible for it?
With respect to trustworthy AI, are the criteria quantifiable; in other words, can they be specifically defined?
It would help to see these lists in written format, so could you, Beatrice, provide links for Annex II etc.?
For Beatrice: to what extent does the AI algorithm have to be transparent for the data recipient?
Isn't it the case that the legal basis (TFEU 114) dictates that member states cannot regulate nationally whatever falls within the scope of the AI Act? So if the scope is determined by some definition of AI, and this definition is too broad, does e.g. the very recent Finnish law on automated decision-making in public services become illegal? Making, for example, our national taxation processes illegal again?
How about copyright issues? E.g. if one develops parts of software with ChatGPT. +1
Is misuse of an AI system tied to the users only, or is the designer of the AI system also blamed for a loose prevention policy?
You cited a current definition of AI; could you share it in writing? Edit (from Petri M): I believe the definition has already changed (again), see https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence. It is again very broad, more or less meaningless, at least in this proposal from the Parliament. I believe the definition now states: "'artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments." (At least to me, this sounds like a zero-information definition: "a machine-based system that…generates outputs".)
How many effects do you think are precisely defined, and how many will end up being defined by court cases? (and does this lead to actions being based on risk tolerance?)
Another problem with technology/definition-based regulation is that the risks do not necessarily follow this type of definition. For example, regarding opacity: if you used Google today, are the results fair (or perhaps favoring US links)? We do not know, and Google will not tell. Does it matter whether there is AI or not; aren't opacity and fairness the point? Or how about the ABS brakes in your car or the traffic lights in Helsinki: do we understand the underlying logic? Does it matter whether it is AI or not? Couldn't we have an act about computer systems with increased risks?