# Helmholtz AI FFT seminar series #6: <br> Helmholtz OS Office & Peter Steinbach
###### tags: `HelmholtzAI`,`FFT`
[ToC]
## :memo: Seminar details
**07 October 2021, 11:00 - 12:00**
- Short introductory talk by the **Helmholtz Open Science Office**
- Speaker: **Peter Steinbach**, Head of the Helmholtz AI consultant team @ Helmholtz-Zentrum Dresden-Rossendorf (HZDR)
- Title: **Echoing why AI is harder than we think**
- Chair: **Nico Hoffmann**, Head of the Helmholtz AI young investigator group @ HZDR
### VC details
**Access to online venue (BlueJeans):**
https://bluejeans.com/897273637/1919
*Meeting ID: 897 273 637
Passcode: 1919*
**Want to dial in from a phone?**
Dial the following number: +49 69 808 84246 (Germany, Frankfurt)
Enter the meeting ID and passcode followed by #
## :memo: Notes
:::info
:bulb: Write down notes and/or interesting information from the seminar, for example observations auxiliary to the content that are not contained in the slide deck.
:::
- ...
## :question: Questions for the speaker(s)
:::info
:bulb: Write down any questions or topics you wish to discuss during the seminar
_(either with your initials or anonymously)_
:::
> Leave in-line comments! [color=#3b75c6]
- can the open-science office support us for being protected against patent trolls?
    + a tricky subject; the details play a key role
    + open for discussion, because the answer lives at the interface of patents and publications
    + please get in touch
- infrastructure/funding for reproducibility
    - the German Reproducibility Network (GRN) currently aims to connect players and stakeholders
    - no funding is available at the moment
    - build such things within Helmholtz (think of HIFIS and the training it offers)
    - also push the discussion on how to share such infrastructures
- is there some open training material to teach reproducibility and/or open science methods?
- Comment: "revolution" and "evolution" here are also very much cultural terms, given how strongly things have changed in the machine learning field since 2006
- Comment: in a generic sense, every output of a classifier can be cast as an action (choosing a class to predict)
:arrow_right: ...
## :question: Questions for the audience
With respect to ML/AI, I'd be interested in the audience's opinion on the questions below. Note that I am also interested to learn how people tackle these questions in very practical data-driven tasks: for example, given a dataset or scientific question, how should we describe the actual abilities of AI systems?
- How can we assess actual progress toward “general” or “human-level” AI?
-
- How can we assess the difficulty of a particular domain for AI as compared with humans?
-
- [CA] The way this question is phrased almost calls for a circular definition. We can look at various tasks and see how AI and humans compare. I would also add non-AI technical solutions to the comparison, because they are much better for many tasks than humans. We can then try to see similarities between these tasks. But to do so we again use the human conception.
- How should we describe the actual abilities of AI systems without fooling ourselves and others with wishful mnemonics?
-
- [CA] I think this is a strawman argument. Does Google oversell its AlphaGo and AlphaFold products with fancy terms? Yes of course. Does this mean people expect that general AI is around the corner? I do not think so. For communication, both within the community and to the general public, analogies are very useful, and I think words like "learn/understand" are used as an analogy rather than as a "wishful mnemonic".
- To what extent can the various dimensions of human cognition be disentangled?
-
-
- How can we improve our intuitions about what intelligence is?
-
- Comment on fallacy 1: see the strawman arguments here; on the one hand this article uses cultural terms and criticizes their (over)use, on the other hand the questions use these very terms
- Comment on fallacy 1: someone has to take the first step; not sure what is problematic about describing it in this fashion
- Comment in general: one piece of evidence that we haven't grasped intelligence scientifically is the struggle of neuroscience to describe/measure __consciousness__
- how does anyone know that I am conscious and not a zombie? https://en.wikipedia.org/wiki/Philosophical_zombie
- comment on benchmarks: while they are problematic, they are needed; otherwise there will never be a scientific approach to intelligence
    - only what I can measure is science
    - be careful: sociology is full of problematic metrics
    - if I can measure intelligence, I can compare algorithms versus humans
- https://pgpbpadilla.github.io/chollet-arc-challenge
- wait a second: just because I cannot measure how good a book is doesn't mean anything with respect to science -> coming up with a metric is also science!
- [CA] General comment: (1) I think this debate would profit from getting rid of the numerous strawman arguments presented. (2) The question of what constitutes human consciousness has been at the center of philosophy for centuries. In its current state, the discussion of AI within the computer science community falls behind these insights. It would be nice to have a truly interdisciplinary view on the question, probably involving philosophers, cognitive scientists, and AI experts, rather than trying to re-invent the wheel of human cognition all over again. The reductionist view that many computer scientists are inclined to does not, I believe, reflect the current state of the debate in other disciplines.
## :question: Your Feedback
:::info
:bulb: Write down your feedback about the seminar
_(either with your initials or anonymously)_
:::
### Share something that you learned or liked :+1:
- ...
### Share something that you didn’t like or would like us to improve :-1:
- ...
:::info
:pushpin: Want to learn more? ➜ [HackMD Tutorials](https://hackmd.io/c/tutorials)
:::