---
tags: workshop2020
---
# Quick Summaries of SciBeh Virtual Workshop Day 2
### Talk by Martha Scherzer
* Prior to COVID, there have always been emergencies that required social and behavioural science in the response, e.g., Ebola, Zika, natural disasters, and HIV/AIDS with its social determinants. These outbreaks highlight the importance of social and behavioural science alongside biomedical responses---we need to know how people respond in order to help.
* We have attempted to bridge the gap between epidemiological approaches (presenting solutions to a specific problem) and real-world needs (requiring rapid responses and addressing the behaviour of large populations), finding common ground between epidemiologists, social scientists, and academics. The focus is to integrate the perspective of social and behavioural scientists into outbreak response preparedness.
* In an emergency, we need more than the straightforward and rigid solutions of epidemiologists and the relatively slow working style of academics.
* The Socialnet project is a training initiative in risk communication and community engagement, which takes a regional approach to these issues. Some activities:
* Simulation of emergency response (4 days, Dec 2019)
* 50 multidisciplinary experts from 19 member states, focussing on teaching effective and empathetic communication and sensitising the team to people's emotions.
* Immersive experience for participants, lots of positive feedback.
* Rather than building skills, the primary aim was to make participants think about and appreciate the various facets of emergency response communication.
* Socialnet inadvertently became a training relevant to COVID-19. Human behaviour, culture, psychology, and society are interlinked when it comes to developing an appropriate and holistic outbreak response.
* WHO is now addressing the behavioural component of the response by:
* Research roadmap that includes behavioural science
* Behavioural and culture unit at WHO headquarters, other initiatives across regional offices (e.g., Social Science Pool)
* Initiatives in the EU Regional Office:
* Behavioural insights surveys, thousands of participants, cross-regional comparisons.
* Focus groups, particularly with healthcare workers.
* HealthBuddy app: a chatbot that allows us to compile an FAQ list based on questions asked.
* Monitoring of media and social media to test how messages are received.
* Journal publications
* Challenges in current crisis response:
* Misinformation in the media
* Mistrust in science (particularly around vaccines)---this varies with countries and communities
* Pandemic fatigue: people are tired of hearing about it, responders are tired and see no end in sight.
* Opportunities from current crisis response:
* Research opportunities, particularly more long-term academic research as response moves forward.
* We need to consolidate the scientific community to propagate good research practices.
* Translating data into action---a challenge here to think about: what do we do with the large amounts of data already collected?
_Question:_ Which research aspects are the most fruitful for the wider behavioral science community (e.g. those who study individual differences)?
* There are various approaches---some through readily available channels such as social media monitoring; other approaches are more resource-intensive; there is the need to use a combination.
_Question:_ To relate to one of the main themes from yesterday: how do you access the most recent scientific evidence? Journals? Pre-prints? Via expert panel? Other...?
* A key resource in emergency response is prior relationships: contacting people who have already done similar research, and existing networking relationships.
* Journals and pre-prints are also referred to.
* If research is linked to WHO, they get preliminary results from there.
* Largely using personal connections to gain first-hand insights.
_Question:_ To what extent do health interventions need to be targeted to specific communities? Are there general principles, or do they need to be tailored to each local emergency?
* We value diversity and need multiple channels of response data collection. Emergency response is a very dynamic process where catering to different audiences is vital.
* Overall, behavioural insights help to understand the population. It helps target communication and interventions, and understand how people would react to restrictions even before they are brought in.
* E.g., in one country, the government asked citizens about their attitudes towards reopening schools, and the citizens' cumulative response informed the later policy.
##### Further questions (to be answered via email):
_What do you think we can learn from previous experience with other anti-vaccination movements to help counter similar efforts aimed at COVID-19 vaccines?_
**Martha:** We can learn some things from previous and ongoing anti-vaccination activities, for sure. However, there are many aspects of the COVID-19 vaccine that make this situation quite unique: the technology is new, and it would ultimately target all age groups and all people, with priority groups getting access sooner than others. These add some new and different dimensions to the situation.
_What do you think are the most central differences between effective online and offline risk communication challenges?_
**Martha:** One of the biggest differences is how fast online information and mis/disinformation spreads. Even when a post or a video is taken down from a site, it still may have reached thousands of people, and that is a challenge.
_I see two extreme perspectives on the role of social and behavioural science in these emergencies, with several intermediate options in between them. Paternalistic approach: we know what you have to do, social and behavioural science is needed to make sure you comply to what we have identified as the desirable behaviors. Participatory approach: stakeholders are included in the decision-making process, social and behavioural science is needed to ensure engagement and avoid untenable scenarios (e.g. inefficient deliberation, poor quality debate, widespread misinformation). Granted, the latter scenario is much riskier and harder to achieve during an emergency; however, if we stick to the paternalistic picture, social tensions will keep escalating, because the problem is not only that people misunderstand the facts, but also that they disagree (possibly with valid reasons) with the proposed solutions and feel excluded from the deliberation process. Any thought on the matter?_
**Martha:** Incorporating social and behavioural science into emergency/outbreak response helps in both the process and the outcome of participation. Through surveys, focus groups and other listening mechanisms, we listen to people and they have a chance to share their opinions and feelings. So the act of collecting the data is participatory. Then we feed that information back to decision makers and encourage them to consider the perspectives of their citizens in the response. We have seen several countries specifically shift their approach to reopening schools, for example, after learning that most people were against certain proposed policies.
## Session 2.1: Managing online discourse
### Talk by Pat Healey: how we misunderstand one another
* Communication is an essential part of modern science, especially in comparison with science in the past. There are different ways to communicate information to the scientific and non-scientific community. Now we have trackers, presentations, TED talks, visualisations to communicate our work. Yet actual interaction and conversation is often neglected when thinking about scientific communication.
* The traditional 'broadcasting' way of communication (in the form of papers) ignores interaction.
* Real-life conversations include a lot of corrections, retractions, and lack of understanding across languages that have to be managed in real time---we don't automatically understand each other; communication is a dynamic, collaborative process. The processes of conversation are essential to communicate effectively.
* In contrast, scientific communication is a drawn-out process, not face-to-face, and takes place with a lot of delay.
* Experts of different disciplines also routinely misunderstand each other, and do not have the automatic means to correct this. Technology and media interfere with collaborative efforts, making it hard to follow up or ask questions. The various forms of real-time feedback (e.g., non-verbal cues) are removed when communication happens only through paper publication.
* These are important aspects to keep in mind when creating online platforms for discussion.
* There has been a huge increase recently in the number of documents produced, but the number of _conversations_ has decreased: an increase in the monologue version of science, and a decrease in the dialogue version of science.
* Informal communication is more difficult to achieve and now happens less frequently in academia due to the need for remote communication, but it is a chance to exchange meaningful information.
* We could improve these kinds of conversation using machine learning and AI, but at present the technology lacks the misinformation-detection mechanisms that operate automatically in informal conversation.
* 'Big data' focused systems (e.g., machine learning, AI) hence miss the diversity of interpretation.
_Question:_ Can the usage of extant tools (social media..) which are algorithmically altered/propagated, be repurposed to connect people meaningfully based on their area of interest/expertise, rather than for advertisement purposes?
* It would be interesting to understand where the misunderstandings arise and try to reduce them meaningfully.
_Question:_ Regarding applications that highlight “areas where people actually change their mind”, the standard picture of polarized debate on social media, obtained using social network analysis, seems to suggest that these areas are few and far between. Do you think this picture is too pessimistic? And do you think that social network analysis may help in tracking actual shifts of opinion and genuine debate?
* I am optimistic but my intuition is that social network analysis is too high level to be really useful for this. It is better at capturing the effects of patterns of connection for established social identities. In my ideal world there would be media that are engineered to encourage more substantive interaction e.g. make it easier to build understanding incrementally with more subtle feedback loops. I would expect mapping these patterns would be more productive than say the topology of the social networks of the individuals involved (although that’s complete speculation).
_Reference article:_
Latour, B. (1990). Drawing things together. In Representation in scientific practice. MIT Press, Cambridge, MA.
### Talk by Namkje Koudenburg: comparing online and face to face communication
* Are online environments a fertile ground for conflict escalation?
* There is a classic viewpoint that technology changes psychological processes, leading to behavioural change which informs societal change (e.g., online anonymity---further detachment from consequences---more aggression).
* But we proposed that technology changes behaviour, which in turn affects both psychological and societal change.
* Example 1: maintaining conversational flow. This involves:
* Periodic affirmation (e.g., 'uh-huh'), fluent turn-taking
* Signals: shared understanding, solidarity ('being on the same wavelength')
* Disagreement signals: flow disruptions (e.g., frowns, hesitation, silence) as a diplomatic tool to communicate this is not a shared view, without having any aggression.
* The asynchronicity of online communication makes it more difficult to convey such signals efficiently, and they are less likely to be employed (creating more need to be explicit).
* Example 2: ambiguity. People use hedges, disclaimers, vague expressions.
* In face-to-face interaction, ambiguity is used to prevent controversy ('testing the waters') and is effective in reaching shared understanding, as it frames the issue in a way that lets the speaker check whether the other agrees.
* Online, this is more difficult, signals are less likely to be used or picked up on, potentially leading to higher conflict and polarisation because people just have to be more explicit.
* We compared f2f and online communication (in text-based chat)
* A confederate introduced points of disagreement, usually controversial (e.g., women are evolutionarily less fit leaders).
* Conducted message content analysis.
* There was more clarity ('explicitness') in the chat than f2f (which was more ambiguous).
* More responsiveness (fewer standalone responses) in f2f
* People were more concerned in f2f about how the other person takes the statement.
* Similar levels of agreement reached in both types
* Correlation between clarity/explicitness and perceptions of (im)politeness: when it seems that the other person does not care about others' opinions, it comes across as impolite.
* When building an online platform, we should consider that disagreement is unavoidable, and it can even be beneficial and healthy for scientific discourse, but it should not devolve into outright conflict. So the best approach is to promote diplomacy, so that disagreements do not escalate into conflicts.
_Question:_ Fascinating research. Do you know if there is similar research in the area of telemedicine / telehealth? Thank you!
* Thanks for your question - I’m not aware of such research. Maybe one of the other panelists?
### Talk by Darren Dahly, epidemiologist on Twitter
* Well-known researchers started engaging in discussion from around 2012, sharing experiences. And then there was an increasing use of Twitter by researchers to forward meaningful conversations.
* In COVID times, it has been used mostly to educate people regarding statistical methods, proper science etc.
* These discussions can be a starting point to investigate the quality of certain identified studies. A group of statisticians came together to review pre-prints and articles. Vigilance by the online community helped identify inconsistencies in some famous datasets.
* But there is little follow-up after a Twitter review. Advice by-and-large goes unheeded (especially on studies moving from pre-prints to publication), or is even outright dismissed.
* Discussions can sometime promote change (e.g., observational studies where data seem implausible---such as suspicious participant characteristics)...but no guarantee.
* As an online platform, Twitter criticism tends to be met with conservatism and suspicion. There is much work involved in turning 'raw' criticism into a more formalised critique that might be taken seriously.
### Panel discussion
**What pathologies in Twitter arise from the combination of scientific discourse with public discussion? Would they be gone if the online platform was specifically dedicated to scientific discourse?**
**David**: Criticisms not being listened to by the scientific community is a problem. Also, studies can get politicised, which meant some papers were not listened to at all, while others were listened to too much.
**Darren**: Career-wise, engagement on Twitter can be beneficial, but it is difficult to manage situations when some people seem impolite. There is a risk too of being misinterpreted/misrepresented and picked up by the media. Disagreements in conversations can appear rude and aggressive on Twitter. But on the positive side, increasing attention to Twitter conversations from the media means they are becoming accessible to a wider community.
**Namkje**: There is a distinction between scientific communication (common ground, presumably) and discussion with the public, where often the goal is to convince ‘a camp’ about a point. The audience changes with different goals.
**Pat**: With Twitter, it's hard to know who the audience actually is---we have little information about that, making it difficult to adjust and 'pitch' the discussion at the appropriate level. So an online platform limits the information we can propagate. Many of the f2f cues that could be informative are also missing.
**Ulrike**: The impermanence of such discussions and the lack of accumulation over time (as compared to journal publications, for example) make it easy for discussions to be 'drowned out', which might lower engagement/initiative from other scientists. In scientific papers there is an attempt to bring together a large body of work that has been done in the past, but the only way to keep focus on one topic on Twitter is to discuss it continuously.
**It's in the nature of science to critique each other's work. Can we figure out ways that are constructive to disagree? Could there be 'online discourse' training in the future?**
**Darren**: Adding qualifier words like 'in my opinion', 'my two cents' could make criticism kinder, and introduce some ambiguity. Trust has to be built over a long period of time. Online communities are built with a lot of continuous interaction, and Twitter gives a common platform to do it all in one place, opening up the scope to network. We can use other people's feed and connections to gauge trustworthiness, intentions (e.g., they don't seem like the kind of person who would try to bait me).
**Robert**: It's better to prevent or minimise such 'Twitter discussions' by looking at the publishing process which precedes them. Pick up on the methodological/statistical imperfections in studies, and include a more rigorous checking process _before_. As a research community, we should have better checks and balances here. Twitter conversations occur once a study comes out in public, but if we invest more effort in scrutinising studies pre-publication, we could avoid some of these issues. Pre-registration checks can help here by taking place before a paper gets published---registered reports, for instance, where a paper is reviewed prior to data collection and the review of the methods determines whether it will eventually get published.
**Ulrike**: SciBeh's forum tried to target study discussion before it was conducted---so researchers can feel benefits from the criticism, as opposed to later when it may be perceived as an insult.
**Namkje**: There must be trust among researchers that their ideas will not be taken by someone else. People are often worried about their ideas being 'scooped', and to avoid that risk it is easier to get critique in a smaller environment, such as one's lab group, rather than publicly online.
**How do we incentivise writing good critique, given the effort and time it takes to write even a minor one?**
**Darren**: The current incentive system would need revisions! There is no pay-out now for it.
**David**: There is a push to publish original work because that is what matters. Peer-based acknowledgement for feedback is needed---like a review that researchers give each other. There is first a need to remove the 'disincentives'. Moving beyond Twitter, this might mean having a platform where you could include a helpfulness score, with potential critics acknowledged in publications.
**Pat**: Accountability is difficult and fuzzy on social media (partly because things can be searched for retroactively, out of context). It's difficult to predict when and how one could be held accountable---especially when it is an evolving idea.
**What should be the approach to sharing 'half-baked ideas'?**
**Darren**: There is disproportionality in sharing ideas: a double standard where some get praised for a partial idea while others get harassed for sharing the same kind of idea. There is a need for a more welcoming and inclusive environment that is sensitive to researchers of different cultural and social backgrounds.
**Ulrike**: The consequences can be especially tough for early career researchers; there is more at stake for them. We also need a culture of accepting when we are wrong about something. This should be normalised, since it is how science advances. Worth noting too: Twitter is still unpopular among researchers; only 40% have an account.
**Pat**: Twitter does not work for everyone equally---there is no single generic approach. At present, there is no optimal way of promoting healthy debate online.
**David**: Senior scientists taking more initiative in facilitating such an environment would help younger researchers.
**Robert**: Twitter seems to be taking over part of the job that's supposed to be done by journals (which often refuse to publish critique pieces on administrative grounds). But when criticising, a peer-reviewed journal is still a safer venue than blog posts for early career researchers.
**Namkje**: It will need to start with senior researchers to set the norms and expectations in the field.
**What means do we have for distinguishing single observations from repeated findings in online platforms? We need tools like this, also for fighting misinformation (e.g., preventing experts from being "outshouted").**
**David**: Creating a set of 'tags' (e.g., what does the study apply to) could help differentiate between arguments.
_Comments from audience._
* The same way we have a Twitter verification button. Maybe we need a Statistician verified button? :)
* Do the panel think there is a degree to which 'training' in online discourse could be useful?
* I think we all agreed "yes"!
* People younger than us mainly interact on TikTok or Instagram, which are predominantly visual media. Do you see any room for science to ‘invade’ that space for the purposes of dissemination to the public?
* Listening to `[`Darren`]`'s remarks about the lack of impact of your public reviews of pre-registered trials, it made me think that the general public (and possibly to some extent also policy-makers) insist on transparency about science only when they suspect some malpractice, but in general they are more comfortable with the whole “black box” scenario. They lack the capacity, and possibly the motivation, for following the details of scientific discussion, hence they do not care about it: they want the output. Unfortunately, the output is still often understood as “whatever gets published”, and that's an ambiguous message: in most disciplines, “good enough for publication” means something closer to “robust enough to be publicly discussed” than “proved beyond reasonable doubt”. This generates a PR dilemma: either laypeople overinterpret the significance of scientific results, or they dismiss the whole scientific enterprise as cheap talk without practical value. Any thought on how to escape that dilemma?
* When attending conferences I'm most fascinated when learning about what "failed". We don't celebrate thoughtful reflection on past failures enough. :)
## Session 2.2: Tools for Online Knowledge Curation
### Talk by Martyn Harris & Mark Levene
* Background: digital archives come in all sizes, languages, etc. Our project was to create search and mining tools to label archives, with a framework of tools that help with search, recommendation, and comparison.
* The framework was based on statistical language models, and was one of the first practical implementations of this approach.
* Some frameworks use word-level properties to inform search (e.g., what tense)
* The current n-gram model works on characters instead (i.e., overlapping character sequences, like "new coronavirus vaccine", "ew coronavirus vaccine"), allowing greater specificity and flexibility of search (it also takes word sequences into consideration)---humans can't easily achieve this level of categorisation (see the sketch after this list).
* It models words in tree diagrams (so there is a tree hierarchy)
* The framework can be used as a comparison tool as well, e.g., it can reconcile differences in historical language that would be an issue with word-level processing. For instance, words with the same meaning have different written forms in different years---the model can capture this, instead of missing information due to the variation.
* The tool needs to be trained on a diverse set of annotations to enable efficient search.
* Also now exploring faceted browsing.
* Questions for this technology: is it sufficient? What approach/method might be best? Each has its different advantages/disadvantages, and it is necessary to tailor methods to the body of knowledge one is labelling.
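To make the character-level approach above concrete, here is a minimal Python sketch of character n-gram matching. It is purely illustrative and assumes nothing about the speakers' actual framework: the function names, the n-gram length, and the Jaccard scoring are all choices made for the example.

```python
# Illustrative only: character n-gram overlap as a crude search score.
# Spelling variants (e.g. historical forms) still share many n-grams,
# so they can be matched where word-level search would fail.

def char_ngrams(text, n=5):
    """Return the set of overlapping character n-grams in `text`."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap_score(query, document, n=5):
    """Jaccard overlap between the n-gram sets of query and document."""
    q, d = char_ngrams(query, n), char_ngrams(document, n)
    return len(q & d) / len(q | d) if (q | d) else 0.0

if __name__ == "__main__":
    docs = [
        "new coronavirus vaccine trials begin",
        "the newe coronaviruse vaccyne",              # spelling variation still overlaps
        "agricultural interventions for small farms",  # unrelated, near-zero score
    ]
    query = "new coronavirus vaccine"
    for doc in docs:
        print(f"{overlap_score(query, doc):.2f}  {doc}")
```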
### Talk by Jaron Porciello
* Background: Ceres2030 is a project to find a sustainable solution to end hunger. It serves as a case study of using data science to support evidence-based policy-making.
* Causes of hunger and the populations of small-scale farmers are diverse. The heterogeneity of the voices that need to be heard means there is a need for more data, and for understanding the context of hunger and the gaps in current knowledge.
* Research in this area often relies on consultant groups, and scientists are not usually involved in this decision process, but need to be. We need to bring together interdisciplinary, diverse groups, and involve younger, not-yet-established researchers, too.
* We used machine learning to integrate all the required data (target populations, geography, interventions) in a way that reflects the needs of the research teams.
* In this process, agricultural data is ingested (normalised and deduplicated), enriched (using support vector machines, keywords, and specialised dictionaries), and analysed, producing an evidence synthesis (see the sketch after this list).
* Brings together articles for global assessment.
* These techniques don't seem common yet in academia.
* We have assembled a global team with researchers in specific and relevant fields (e.g., collaborations with nature researchers). Machine learning helped to identify researchers working in these fields who were not highly cited.
* It also enabled better literature aggregation: search across various websites and repositories, as not all literature appears in peer-reviewed journals.
* Evaluation and dissemination: project was published and used in policy-making.
* Key learning points: we need to reduce bias (e.g., gender imbalance) and look beyond simple metrics like citations in aggregating knowledge.
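The "enrichment" step in the pipeline above can be pictured with a small, hypothetical scikit-learn sketch: a support vector machine tags abstracts as relevant or not using bag-of-words features. The toy data, labels, and model choices below are assumptions for illustration, not the Ceres2030 codebase.

```python
# Hypothetical sketch of SVM-based enrichment: classify abstracts by relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled abstracts; a real pipeline would use expert-curated
# dictionaries/keywords and thousands of annotated records.
abstracts = [
    "irrigation intervention for smallholder maize farmers",
    "livestock vaccination programme in rural districts",
    "quarterly earnings report of a retail company",
    "stock market outlook for the next fiscal year",
]
labels = ["relevant", "relevant", "not_relevant", "not_relevant"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(abstracts, labels)

print(classifier.predict(["crop rotation intervention for small-scale farmers"]))
```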
### Talk by Nick Lindsay and Hildy Fong Baker: Rapid Reviews Covid-19
* This project with MIT Press publishes peer reviews of pre-prints from various disciplines to increase transparency.
* There is a need to root out bad science and promote good practice to reduce misinformation (and perhaps politicisation of science).
* Science communication is shifting towards pre-prints, so we need to focus attention here. Publishers need to respond quickly and be more flexible to avoid misleading people and meet the needs of the scientific community.
* This pushes us to find new business models to fund publishing activities, and it helps to deploy new technologies that are flexible, and come from the non-profit community (which allows close collaboration).
* Timeline of project:
* March: decision on creating overlay journal
* May: vital networking and establishing connections
* Grant-writing, machinery set-up after this
* August: first reviews published (normally it takes about a year!)
* Five domains based on discipline, which work as largely independent hubs.
* Machine learning (in collaboration with COVIDScholar and other platforms/tools) was vital to ensure a smooth, quick workflow, helping to automate various steps within the process (selecting peer reviewers, gathering literature, etc.)
* There is still space for further innovation/scrutiny.
_Question:_ What is your relationship with other repositories where you are harvesting the studies?
* A complicated process: first, pre-prints are differentiated into five domains, which can then be upvoted.
* We want to promote interesting/innovative work, but also pay special attention to work that needs debunking.
* Mutual collaboration with repositories: in the near future this will be made more salient with the repositories themselves.
### Talk by Haoyan Huo: COVIDScholar
* COVIDScholar addressed the problem of too many papers being published during the pandemic (>200 every day), which typical search engines were not well tuned to filter (and were also not designed for high-quality search).
* It is important to have efficient search engines that understand both search queries and the text corpus (e.g., domain, topic, entities, similarities).
* COVIDScholar uses different filters (e.g., topics, peer-reviewed or not) to limit the pool of papers, and machine learning mechanisms extract keywords from abstracts.
* Searches large number of sources rapidly (especially important since within hours, new pre-prints appear).
* Our database has Covid classification, domain classifiers, machine-extracted keywords and their nearest neighbours.
* Word embeddings are used to characterise words along a wide array of dimensions for text understanding, capturing relations such as "king is to queen as man is to woman". This enables searching for neighbouring words as well as the input word (see the sketch after this list).
* BERT, a neural network language model with an attention mechanism, is used so that the model preserves paragraph context for text understanding. This further enhances searchability.
* Systematic study benefitted from Natural Language Processing: there has been longitudinal change over the crisis, with the social sciences seeing increasingly more publications, driven mainly by mental-health-related papers.
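As a rough illustration of the embedding-based search mentioned above, the sketch below ranks vocabulary terms by cosine similarity to a query word. The tiny hand-made vectors are assumptions for the example; COVIDScholar's real system uses trained embeddings and BERT, whose details were not covered here.

```python
# Illustrative only: nearest-neighbour lookup in a toy embedding space,
# the mechanism that lets a search for one term also surface related terms.
import numpy as np

embeddings = {                      # toy 3-dimensional "embeddings"
    "coronavirus": np.array([0.9, 0.1, 0.0]),
    "sars-cov-2":  np.array([0.8, 0.2, 0.1]),
    "vaccine":     np.array([0.1, 0.9, 0.1]),
    "harvest":     np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbours(word, k=2):
    """Rank other vocabulary terms by cosine similarity to `word`."""
    sims = [(w, cosine(embeddings[word], v))
            for w, v in embeddings.items() if w != word]
    return sorted(sims, key=lambda x: x[1], reverse=True)[:k]

print(nearest_neighbours("coronavirus"))   # 'sars-cov-2' ranks highest
```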
### Panel discussion
**Michael Perk** introducing his role as founder of Collabovid: This project uses word models to semantically analyse literature with similarities. The focus is on more exploratory approaches---papers from different countries, papers with specific topics with connections, e.g., between COVID-19 and HIV. So this is not just a search engine. It can extract a map of relations even from a small input of a few words (see the sketch below).
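A minimal sketch of the similarity idea Michael describes, assuming a simple TF-IDF representation (Collabovid's actual models were not detailed here): pairwise cosine similarities between abstracts give a small "map" of related papers.

```python
# Illustrative only: a pairwise similarity "map" over paper abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "COVID-19 antiviral treatment outcomes in hospitalised patients",
    "HIV antiretroviral therapy adherence and treatment outcomes",
    "soil nutrient management for smallholder farming systems",
]
matrix = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
print(cosine_similarity(matrix).round(2))   # papers 0 and 1 share treatment terms
```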
**Beyond COVID-19, knowledge aggregation must move beyond search and retrieval of new publications. Will you be integrating prior knowledge into your systems? (e.g., information published before the pandemic)**
**Haoyan:** COVIDScholar has plans to start extracting information from articles in the future.
**Jaron**: We will need both tools and access for an effective infrastructure. We need to know what the barriers are. Our tools and databases don't need to be as large as Google; we can use alternative, more specific ways to extract knowledge.
**Stefan**: The costs of doing this will be high, so funding opportunities are vital. But APIs could be reused between different initiatives, and taxonomies could be created across the literature within topics.
**Michael**: There's a lot of noise in the data (e.g., masks)---relevant papers in the research are not just COVID-related, they could include the common cold as well. This is an aggregation problem.
**This problem is especially acute in the behavioural sciences, where a lot of the knowledge was not generated in response to the COVID pandemic. How do we address this?**
**Stefan**: Maybe the minutiae of the search and context might be helpful?
**Mark**: A limitation here is resources: we will always have a smaller knowledge base than Google's, for instance.
**Martyn**: There is a discrepancy between the language used in the literature and the language used by people searching/reading the literature that we need to account for. Moving away from a searchbar set-up might be necessary.
**Jaron**: Perhaps the next-best step is to provide various indices to the user.
**Michael**: The idea here would be to provide a user with other examples of the kind of research they are searching for---using metrics, classifiers etc.
**The volume of the corpus could be a bit counterproductive, e.g., Google Scholar outputs many publications, a large proportion of which are only tangentially relevant. How do we filter this better?**
**Stefan**: Would it be possible to "repurpose" powerful, existing models?
**Mark**: It might be possible to use, for instance, language models. But constant updating of the model would be necessary.
**How robust are these databases to information pollution, when the focus is on harvesting information as opposed to filtering nonsense?**
**Michael**: An engine with metrics on where it's been mentioned/cited could help with establishing trust.
**Jaron**: Who determines what is nonsense and what is not? How do we determine that algorithmically, as our algorithms now are not tuned towards this purpose.
**Hildy**: Here we need to incorporate machine learning and humans working together. ML itself can't solve it without the human input.
**Mark**: This brings the peer review process back to the fore, even if the process itself is less-than-perfect.