Prior to COVID-19, there have always been emergencies that required social and behavioural science in the response: for example, Ebola, Zika, natural disasters, and HIV/AIDS with its social determinants. These outbreaks highlight the importance of social and behavioural science alongside biomedical responses; we need to know how people will respond in order to help them.
We have attempted to bridge the gap between epidemiological approaches (which present solutions to a specific problem) and real-world needs (which require rapid responses and address the behaviour of large populations), finding common ground between epidemiologists, social scientists, and academics. The focus is to integrate the perspective of social and behavioural scientists into outbreak response preparedness.
In an emergency, we need more than the straightforward, rigid solutions of epidemiologists and the relatively slow working style of academics.
The Socialnet project is a training initiative in risk communication and community engagement, which takes a regional approach to these issues. Some activities:
Socialnet inadvertently became training relevant to COVID-19. Human behaviour, culture, psychology, and society are interlinked when it comes to developing an appropriate and holistic outbreak response.
WHO is addressing the behavioural component of the response now by:
Initiatives in the EU Regional Office:
Challenges in current crisis response:
Opportunities from current crisis response:
Question: Which research aspects are the most fruitful for the wider behavioural science community (e.g., those who study individual differences)?
Question: To relate to one of the main themes from yesterday: how do you access the most recent scientific evidence? Journals? Pre-prints? Via expert panel? Other…?
Question: To what extent do health interventions need to be targeted at specific communities? Are there general principles, or do they need to be tailored to each local emergency?
We value diversity and need multiple channels of response data collection. Emergency response is a very dynamic process where catering to different audiences is vital.
Overall, behavioural insights help us understand the population. They help target communication and interventions, and help us understand how people would react to restrictions even before they are introduced.
E.g., in one country, the government asked citizens about their attitudes towards reopening schools, and the citizens' cumulative response shaped the subsequent policy.
What do you think we can learn from previous experience with other anti-vaccination movements to help counter similar efforts aimed at COVID-19 vaccines?
Martha: We can learn some things from previous and ongoing anti-vaccination activities, for sure. However, there are many different aspects of the COVID-19 vaccine that make this quite unique. The technology is new, and it would ultimately target all age groups and all people, but with priority groups getting access sooner than others; these add some new and different dimensions to the situation.
What do you think are the most central differences between the challenges of effective online and offline risk communication?
Martha: One of the biggest differences is how fast online information and mis/disinformation spreads. Even when a post or a video is taken down from a site, it still may have reached thousands of people, and that is a challenge.
I see two extreme perspectives on the role of social and behavioural science in these emergencies, with several intermediate options in between. Paternalistic approach: we know what you have to do, and social and behavioural science is needed to make sure you comply with what we have identified as the desirable behaviours. Participatory approach: stakeholders are included in the decision-making process, and social and behavioural science is needed to ensure engagement and avoid untenable scenarios (e.g., inefficient deliberation, poor-quality debate, widespread misinformation). Granted, the latter scenario is much riskier and harder to achieve during an emergency; however, if we stick to the paternalistic picture, social tensions will keep escalating, because the problem is not only that people misunderstand the facts, but also that they disagree (possibly with valid reasons) with the proposed solutions and feel excluded from the deliberation process. Any thoughts on the matter?
Martha: Incorporating social and behavioural science into emergency/outbreak response helps in both the process and the outcome of participation. Through surveys, focus groups, and other listening mechanisms, we listen to people and they have a chance to share their opinions and feelings. So the act of collecting the data is participatory. Then we feed that information back to decision makers and encourage them to consider the perspectives of their citizens in the response. We have seen several countries specifically shift their approach to reopening schools, for example, after learning that most people were against certain proposed policies.
Question: Can extant tools (e.g., social media), which are algorithmically altered/propagated, be repurposed to connect people meaningfully based on their areas of interest/expertise, rather than for advertising purposes?
Question: Regarding applications that highlight “areas where people actually change their mind”, the standard picture of polarized debate on social media, obtained using social network analysis, seems to suggest that these areas are few and far between. Do you think this picture is too pessimistic? And do you think that social network analysis may help in tracking actual shifts of opinion and genuine debate?
Reference article:
Latour, B. (1990). Drawing things together. In M. Lynch & S. Woolgar (Eds.), Representation in scientific practice. Cambridge, MA: MIT Press.
Question: Fascinating research. Do you know if there is similar research in the area of telemedicine / telehealth? Thank you!
What pathologies on Twitter arise from the combination of scientific discourse with public discussion? Would they be gone if the online platform were specifically dedicated to scientific discourse?
David: Criticisms not being listened to by the scientific community is a problem. Studies can also get politicised, which has meant some papers were not listened to at all, while others were listened to too much.
Darren: Career-wise, engagement on Twitter can be beneficial, but it is difficult to manage situations in which some people seem impolite. There is also a risk of being misinterpreted/misrepresented and picked up by the media. Disagreements in conversations can appear rude and aggressive on Twitter. On the good side, increasing media attention to Twitter conversations means the discussion is becoming accessible to a wider community.
Namkje: There is a distinction between scientific communication (common ground, presumably) and discussion with the public, where often the goal is to convince ‘a camp’ about a point. The audience changes with different goals.
Pat: With Twitter, it's hard to know who the audience actually is; we have little information about that, making it difficult to adjust and 'pitch' the discussion at the appropriate level. So an online platform limits the information we can convey. Many of the face-to-face cues that could be informative are also missing.
Ulrike: The impermanence of such discussions and the lack of accumulation over time (as compared to journal publications, for example) make it easy for discussions to be 'drowned out', which might lower engagement/initiative from other scientists. A scientific paper attempts to bring together a large body of past work, but the only way to keep focus on one topic on Twitter is to discuss it continuously.
It's in the nature of science to critique each other's work. Can we figure out ways that are constructive to disagree? Could there be 'online discourse' training in the future?
Darren: Adding qualifier words like 'in my opinion' or 'my two cents' could make criticism kinder and introduce some ambiguity. Trust has to be built over a long period of time. Online communities are built through a lot of continuous interaction, and Twitter gives a common platform to do it all in one place, opening up the scope to network. We can use other people's feeds and connections to gauge trustworthiness and intentions (e.g., they don't seem like the kind of person who would try to bait me).
Robert: It's better to prevent or minimise such 'Twitter discussions' by looking at the publishing process that precedes them. Pick up on the methodological/statistical imperfections in studies and include a more rigorous checking process beforehand. As a research community, we should have better checks and balances here. Twitter conversations occur once a study comes out in public, but if we invest more effort in scrutinising the study pre-publication, we could avoid some of these issues. Pre-registration checks can help here by taking place before a paper gets published: registered reports, for instance, where a paper is reviewed prior to data collection and review of the methods determines whether it will eventually get published.
Ulrike: SciBeh's forum tried to target study discussion before a study is conducted, so researchers can benefit from the criticism, as opposed to later, when it may be perceived as an insult.
Namkje: There must be trust among researchers that their ideas will not be taken by someone else. People are often worried about their ideas being 'scooped', and it is easier to get critique in a smaller environment, such as one's lab group, than publicly online.
How do we incentivise writing good critique, given the effort and time it takes to write even a minor one?
Darren: The current incentive system would need revisions! There is no pay-off for it now.
David: There is a push to publish original work because that is what matters. Peer-based acknowledgement for feedback is needed, like the reviews researchers give each other. First there is a need to remove the 'disincentives'. Moving beyond Twitter, this might mean having a platform where you could include a helpfulness score, with potential critics acknowledged in the publication.
Pat: Accountability is difficult and fuzzy on social media (partly because things can be searched for retroactively, out of context). It's difficult to predict when and how one could be held accountable, especially when it is an evolving idea.
What should be the approach to sharing 'half-baked ideas'?
Darren: There is disproportionality in sharing ideas: a double standard where some get praised for a partial idea, others get harassed for sharing such ideas. There is a need for a more welcoming and inclusive environment that is sensitive to researchers of different cultural and social backgrounds.
Ulrike: The consequences can be especially tough for early career researchers; there is more at stake for them. We also need a culture of accepting when we are wrong about something. This should be normalised, since it is how science advances. Worth noting too: Twitter is still unpopular among researchers; only 40% have an account.
Pat: Twitter does not work for everyone equally; there is no single generic approach. At present, there is no optimal way of promoting healthy debate online.
David: Senior scientists taking more initiative in facilitating such an environment would help younger researchers.
Robert: Twitter seems to be taking over part of the job that's supposed to be done by journals (which often refuse to publish critical pieces on administrative grounds). But when criticising, a peer-reviewed journal is still safer than blog posts for early career researchers.
Namkje: It will need to start with senior researchers to set the norms and expectations in the field.
What means do we have for distinguishing single observations from repeated findings on online platforms? We need tools like this, also for fighting misinformation (e.g., preventing experts from being "outshouted").
David: Creating a set of 'tags' (e.g., what does the study apply to) could help differentiate between arguments.
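One way to picture such tags is as a small structured schema attached to each shared claim. This is a hypothetical sketch in Python; the field names and vocabularies are invented for illustration, not an existing standard:

```python
# A hypothetical tag schema for shared claims, illustrating David's suggestion;
# the fields and example values are invented for this sketch, not a standard.
from dataclasses import dataclass

@dataclass
class ClaimTags:
    scope: str             # e.g., "single observation" vs. "replicated finding"
    applies_to: str        # what the study applies to, e.g., "UK adults"
    evidence_type: str     # e.g., "preprint", "peer-reviewed", "meta-analysis"
    replications: int = 0  # independent replications known at posting time

# A one-off result is now visibly different from an established finding.
single = ClaimTags(scope="single observation", applies_to="Dutch students",
                   evidence_type="preprint")
robust = ClaimTags(scope="replicated finding", applies_to="general population",
                   evidence_type="meta-analysis", replications=12)
print(single, robust, sep="\n")
```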
Comments from the audience:
The same way we have a Twitter verification button. Maybe we need a Statistician verified button? :)
Do the panel think there is a degree to which 'training' in online discourse could be useful?
People younger than us mainly interact on TikTok or Instagram, which are predominantly visual media. Do you see any room for science to ‘invade’ that space for the purposes of dissemination to the public?
Listening to [Darren]'s remarks about the lack of impact of your public reviews of pre-registered trials, it made me think that the general public (and possibly, to some extent, also policy-makers) insist on transparency about science only when they suspect some malpractice, but in general they are more comfortable with the whole "black box" scenario. They lack the capacity, and possibly the motivation, to follow the details of scientific discussion, hence they do not care about it: they want the output. Unfortunately, the output is still often understood as "whatever gets published", and that's an ambiguous message: in most disciplines, "good enough for publication" means something closer to "robust enough to be publicly discussed" than "proved beyond reasonable doubt". This generates a PR dilemma: either laypeople overinterpret the significance of scientific results, or they dismiss the whole scientific enterprise as cheap talk without practical value. Any thoughts on how to escape that dilemma?
When attending conferences I'm most fascinated when learning about what "failed". We don't celebrate thoughtful reflection on past failures enough. :)
Question: What is your relationship with other repositories where you are harvesting the studies?
Michael Perk, introducing his role as founder of Collabovid: This project uses word models to semantically analyse the literature for similarities. The focus is on more exploratory approaches: papers from different countries, papers on specific topics with connections, e.g., between COVID-19 and HIV. So this is not just a search engine; it can extract a map of relations even from a small input of a few words.
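Collabovid's actual models and pipeline are not described here, but a minimal sketch of this kind of embedding-based similarity, assuming the sentence-transformers library (the model name and abstracts below are illustrative), might look like this:

```python
# A minimal sketch of embedding-based paper similarity; Collabovid's real
# implementation is not detailed here, so names below are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model

# Hypothetical abstracts standing in for a paper corpus.
papers = [
    "Immune response to SARS-CoV-2 in patients with prior HIV infection.",
    "Community mask wearing and transmission of respiratory viruses.",
    "School reopening policies and public attitudes during the pandemic.",
]
paper_embeddings = model.encode(papers, convert_to_tensor=True)

# A few words suffice as input; the model maps them into the same vector space.
query_embedding = model.encode("COVID-19 and HIV", convert_to_tensor=True)

# Cosine similarity yields a ranked map of related papers, not just keyword hits.
scores = util.cos_sim(query_embedding, paper_embeddings)[0]
for paper, score in sorted(zip(papers, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.2f}  {paper}")
```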
Beyond COVID-19, knowledge aggregation must move beyond search and retrieval of new publications. Will you be integrating prior knowledge into your systems? (e.g., information published before the pandemic)
Haoyan: COVIDScholar has plans to start extracting information from articles in the future.
Jaron: We will need both tools and access for an effective infrastructure. We need to know what the barriers are. Our tools and database don't need to be as large as Google's; we can use alternative, more targeted ways to extract knowledge.
Stefan: The costs of doing this will be high, so funding opportunities are vital. But APIs could be reused between different initiatives, creating taxonomies across the literature within topics.
Michael: There's a lot of noise in the data (e.g., for a query like 'masks'): the relevant papers are not just COVID-related; they could concern the common cold as well. This is an aggregation problem.
This problem is especially acute in the behavioural sciences, where a lot of the knowledge was not generated in response to the COVID pandemic. How do we address this?
Stefan: Maybe the minutiae of the search and context might be helpful?
Mark: A limitation here is resources: we will always have a smaller knowledge base than Google's, for instance.
Martyn: There is a discrepancy between the language used in the literature and the language used by people searching/reading the literature that we need to account for. Moving away from a searchbar set-up might be necessary.
Jaron: Perhaps the next-best step is to provide various indices to the user.
Michael: The idea here would be to provide a user with other examples of the kind of research they are searching for, using metrics, classifiers, etc.
The sheer volume of the corpus can be counterproductive; e.g., Google Scholar outputs many publications, a large proportion of which are only tangentially relevant. How do we filter this better?
Stefan: Would it be possible to "repurpose" powerful, existing models?
Mark: It might be possible to use, for instance, language models. But constant updating of the model would be necessary.
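As a concrete illustration of Stefan's and Mark's point, an off-the-shelf model could be repurposed as a zero-shot relevance filter. This is a minimal sketch assuming Hugging Face's transformers library; the labels and threshold are illustrative assumptions, not a tested configuration:

```python
# A minimal sketch of repurposing an existing language model as a relevance
# filter via zero-shot classification; labels/threshold are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

abstract = ("We study the effect of community mask wearing on the "
            "transmission of seasonal influenza.")

result = classifier(abstract,
                    candidate_labels=["COVID-19 relevant",
                                      "not COVID-19 relevant"])
# Keep the paper only if the model is confident it is on-topic; as Mark notes,
# the model itself would need constant updating as the literature evolves.
if result["labels"][0] == "COVID-19 relevant" and result["scores"][0] > 0.7:
    print("keep:", abstract)
else:
    print("filter out:", abstract)
```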
How robust are these databases to information pollution, when the focus is on harvesting information as opposed to filtering nonsense?
Michael: An engine with metrics on where a paper has been mentioned/cited could help with establishing trust.
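To make that concrete, here is a hypothetical trust heuristic combining such metrics; the fields, weights, and caps are illustrative assumptions, not a proposal from the panel:

```python
# A hypothetical trust heuristic sketching Michael's citation-metric idea;
# every weight and cap below is an invented, illustrative assumption.
def trust_score(citations: int, preprint: bool, mentions_in_reviews: int) -> float:
    """Crude score: cited, peer-reviewed, review-mentioned papers rank higher."""
    score = min(citations, 50) / 50            # cap so one metric can't dominate
    score += 0.0 if preprint else 0.5          # bonus for a peer-reviewed venue
    score += min(mentions_in_reviews, 5) / 10  # mentioned by systematic reviews
    return score

print(trust_score(citations=12, preprint=True, mentions_in_reviews=1))
```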
Jaron: Who determines what is nonsense and what is not? How do we determine that algorithmically, as our algorithms now are not tuned towards this purpose.
Hildy: Here we need to incorporate machine learning and humans working together; ML can't solve it by itself, without human input (a sketch of this follows below).
Mark: This brings the peer review process back to the fore, even if the process itself is less-than-perfect.
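Sketching Hildy's point about combining ML and human judgement: a model could decide only the clear-cut cases and route uncertain ones to human review. The classifier, thresholds, and queue here are hypothetical:

```python
# A hypothetical human-in-the-loop filter; thresholds and the stand-in scorer
# are illustrative assumptions, not an existing moderation system.
from typing import Callable

def triage(papers: list[str],
           nonsense_probability: Callable[[str], float],
           auto_reject: float = 0.95,
           auto_accept: float = 0.05) -> tuple[list[str], list[str]]:
    """Let the model decide only clear-cut cases; route the rest to humans."""
    accepted, needs_human_review = [], []
    for paper in papers:
        p = nonsense_probability(paper)  # any ML scoring function plugs in here
        if p >= auto_reject:
            continue                          # confidently filtered out
        elif p <= auto_accept:
            accepted.append(paper)            # confidently kept
        else:
            needs_human_review.append(paper)  # uncertain: human judgement

    return accepted, needs_human_review

# Toy scorer standing in for a real model.
accepted, review = triage(["paper A", "paper B"], lambda text: 0.5)
print(accepted, review)
```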