# SciBeh Virtual Workshop 2.2
## Tools for Online Research Curation
[Session whiteboard](http://bit.ly/scibeh-tools) | [Discussion forum](https://www.reddit.com/r/BehSciAsk/comments/jkznkr/ideas_for_discussion_tools_for_online_research/)
The proliferation of new COVID-19 research in journals and on pre-print servers presents a knowledge aggregation problem. In order to synthesise and communicate new knowledge, one needs to be able to find, organise, and evaluate it. In this session, we bring together experts in machine-assisted curation tools to tackle the problem of knowledge retrieval, aggregation, and evaluation. We look at what has been done in the past year to aggregate and quality-check new information using machine learning and natural language processing (NLP) techniques, and ask what the next step is in delivering robust knowledge to those who need it.
Some of our questions include:
* What are the pros and cons of the various search and filter systems created so far?
* What design features do researchers, policy-makers, and the public need in a COVID-19 knowledge base?
* How can we adapt the tools we have to improve the curation of crisis-relevant knowledge?
**[Haoyan Huo](https://haoyan-huo.com/)** works on Natural Language Processing, Machine Learning, and Automatic Synthesis Design. He is a member of the [COVIDScholar literature search project](https://covidscholar.org/about). In this session, he will introduce COVIDScholar and how it addresses knowledge management issues in the crisis.
**[Nick Lindsay](https://mitpress.mit.edu/staff)** and **[Hildy Fong Baker](https://rapidreviewscovid19.mitpress.mit.edu/editors2)** are part of the [MIT Press Rapid Reviews COVID-19](https://rapidreviewscovid19.mitpress.mit.edu/reviewapproach) team. In this session, they will talk about how the project addresses the need for rapid, reliable new knowledge during the crisis.
**[Jaron Porciello](https://cals.cornell.edu/jaron-porciello)** is Associate Director for Research Data Engagement at Cornell University, and an expert in information and data science. In this session, she will talk about how data science tools can help to connect science and policy.
**[Martyn Harris](https://www.dcs.bbk.ac.uk/about/people/research-assistants/martyn/)** is the Institute of Coding Manager at Birkbeck University of London. He was a lead researcher on the [Computing for Cultural Heritage project](https://www.bl.uk/projects/computingculturalheritage). In this session, he will talk about how computer science tools can help in organising knowledge online for more targeted evidence search in policy-making.
**[Mark Levene](https://www.dcs.bbk.ac.uk/~mark/)** is Head of the Computer Science Department at Birkbeck College, and an expert on search engine technology, applied machine learning, and computational social science.
**[Michael Perk](https://github.com/michip)** is a representative of [Collabovid](https://www.collabovid.org/), a search engine that uses Natural Language Processing to help researchers identify the most relevant information.
**[Stefan Herzog](http://www.stefanherzog.org/)** is a research scientist at the [Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin](https://www.mpib-berlin.mpg.de/staff/stefan-herzog). His research focuses on boosting decision making through a joint understanding of humans and algorithms, combining insights and methods from cognitive science, collective intelligence, heuristics, and machine learning.
**[Ulrike Hahn](http://www.bbk.ac.uk/psychology/our-staff/ulrike-hahn)** is the director of the Centre for Cognition, Computation and Modelling at Birkbeck University of London. She researches the role of perceived source reliability in human belief formation within larger communicative social networks.