---
tags: workshop2020
---

# Quick summaries of SciBeh Virtual Workshop Day 1

A huge thank you to our volunteer note-takers for enabling us to generate these summary points quickly. These summaries will be updated over the week to create a higher-level synthesis of the topics and themes.

## Session 1: Open Science and Crisis Knowledge Management

### Talk by Chiara Varazzani

* There is a tension between policy-making and academic research: governments and policy-makers look for quick, straightforward answers, but academic research is typically slow and carries uncertainties and dependencies. This is a systemic challenge that reaches beyond the crisis.
* With a fast-moving crisis like COVID-19, there is a risk of slow research becoming obsolete as the crisis evolves.
* Yet there are pitfalls in rapid research dissemination: speed is traded off against robustness.
* It is difficult to combine evidence in a way that is sensible for the public, because the public doesn't always distinguish between good and bad evidence, nor pick up on all the limitations of the methodology used. Policy-makers suffer from the same biases in judging evidence (e.g., confirmation bias), and they also often make decisions based on intuition.
* In a study, policy-makers were asked to judge evidence on an open policy issue. They found it harder to integrate evidence that went against their beliefs (whether they were already in favour of the policy or not).
* However, simplifying the evidence made it more likely that policy-makers would integrate evidence contrary to their beliefs.
* Promising signs:
    * It is possible to nudge policy-makers to be more open to evidence contrary to their own views about a policy when simplified evidence is presented (they are better at integrating and accepting contrary evidence). So we need to present studies and data as simply as possible.
    * [BETA](https://behaviouraleconomics.pmc.gov.au/projects) is an example of pre-registration of policy-related randomised controlled trials.
* Future questions that need to be addressed:
    * How can we quickly generate, implement, and share robust evidence?
    * How do we foster greater transparency in the community? Could we produce pre-registered pre-prints from policy studies?
    * How do we design a system where practitioners and researchers register, publish, and share initiatives and experiments? What is needed to make it happen?

_Question:_ What is the incentive for policy-makers to use transparent evidence (e.g., registered trials)?

* Having researchers embedded in government means there is an in-house team looking for answers for policy-makers.
* Transparency is important because it is ultimately public money that is used for the research.

_Question:_ Can you give an example of the policies in the experiment about integrating evidence?

* In the study, we looked at a policy on mandating bicycle helmets, and an example of the evidence was the number of lives that could be saved with the policy.

_Question:_ To the best of your knowledge, has any national government or international organisation considered the option of creating a science curation task force, i.e. a group of scientists devoted to monitoring 24/7 and summarising all the relevant research currently being done on a relevant topic, such as COVID-19? Would it work, in your opinion? If not, why not?

* As far as I know, this doesn't exist, but it is a great idea.
* _Audience comment:_ There are some discussions about this in New Zealand, but it seems to be in the early stages.

_Question:_ What do we need to create channels between scientists and governments?

* Even with the best database, we must ensure it is user-friendly so that people will want to use it.

### Q&A with Alex Holcombe

**What would you say are the key aspects or principles of open science?**

* Open science is a broad umbrella. It grew out of the term 'open access' and now also includes diversity and equity, reproducibility, and accessibility---both access to research and involvement in research. On the point of opening up science to everyone, we have seen a rise in larger-scale collaborations.
* Also transparency across all domains of science.

**Pre-prints have become a good way for rapid open research sharing during the pandemic, but what are the issues with quality when research is shared too early?**

* We do need data faster in a crisis than it moves through journals. The rise of pre-prints is a good way of getting around the slow journal system.
* In my own practice I've published data as soon as it comes in, uploading it straight to GitHub.
* In the case of crisis knowledge management, quality control can be a problem for rapid data and research. If errors are overlooked when data goes out too quickly, the work may be devalued.

**What is our role as researchers in managing pre-prints?**

* We've stuck to journal systems for too long---these are outdated compared to the Internet. One thing we can do is commit to abandoning journals if they won't open things up.

_Question:_ By talking about "studies" instead of pre-prints: do you have thoughts on how to manage this?

* It's important for the public to get past the myth about the quality of published research in journals. Pre-prints can help people to see this, and we need to educate the public that newly produced research can have problems, and that science is evolving. New cutting-edge studies are more likely to be false than older ones.

_Question:_ There are arguments against complete transparency in democratic policy-making, because making the background process constantly observable by voters not only discourages shady dealings and corruption (a positive result), but also prevents politicians from making the sort of unpalatable compromises that are actually conducive to good government, and makes them behave as if constantly campaigning for votes (a VERY negative result). I wonder if similar concerns would apply to the scientific process as well: is there any value in keeping some processes behind closed doors in science? Could it be the case that transparency introduces undesirable consequences in science too?

* There's a tension between scientific discourse about the evidence and one's own stance on it.
* Consensus statements are not out there yet.
* A lack of transparency can arise when we don't want to be quoted saying something on record in case we are wrong.

_Additional comments from audience:_

* Part of my worry relates to (what I perceive to be) the increase in ideological censorship / criticism of scientific results, e.g. because they are discriminatory or inflammatory towards minorities: "ideological" does not necessarily mean wrong, but it is still a bias to consider.
* I think transparency is essential but, like all good things, it has potential dark sides: https://www.nature.com/news/research-integrity-don-t-let-transparency-damage-science-1.19219

### Talk by Iratxe Puebla

* ASAPbio wants to make science communication faster and more transparent, mainly through preprints and peer review.
* The focus is on pre-prints. Pre-prints benefit science because they put control over releasing the latest research publicly back into the hands of researchers.
* Thousands of pre-prints were released during the COVID-19 pandemic, and a large proportion of early-stage evidence (~40%) appeared as pre-prints. An unprecedented amount of research is now reaching non-specialist audiences.
* Pre-prints have been influencing policy decisions in response to the pandemic (e.g., the second UK lockdown).
* Sometimes pre-prints are amplified by social media and taken out of context, or discussion about them by a non-specialist public ignores the limitations of the work.
* Public commentary and reviews exist to put pre-prints into context, for example the Sinai Immunology Review Project and Outbreak Science Rapid PREreview.
* These projects also have a focus on journalism, to help coordinate and amplify the value of participating in research and review.
* ASAPbio also has a dedicated page to help find pre-prints on COVID-19.
* We are running a project on how best to present pre-prints to the public eye:
    * What information do we provide?
    * How do we make it consistent and simple?
    * What is best practice for labelling pre-prints, and for explaining peer review and pre-prints to non-specialist stakeholders?
    * Guidance and principles for researchers on press coverage of research
    * Principles for journalists covering pre-prints in the media

_Question:_ The emergence of initiatives to provide public commentary and review for preprints during the pandemic has been great, but these are all self-selected groups/initiatives. In the longer term, do we need to institutionalise this to ensure the right level of representation and diversity of expertise and perspective? Especially as there is an overall limit in capacity for review, as it is an unsupported resource.

* The beauty of pre-prints is their openness: groups that might not participate in journal reviews can still do so with pre-prints. There's a lot of potential to increase participation and representation in the review process here.
* We need to make the platforms easier to use (remove technical barriers).
* Many initiatives out there are great, but they are not linked together.

_Question:_ Do you have data on how many preprints get spontaneously reviewed by other scientists outside of your initiatives?

* Around 5% of the biggest pre-prints get comments.
* There is space and potential to increase this activity.

_Additional comments from audience:_

* Question for all: to the best of my knowledge, we don't have a dedicated pre-registration and preprint platform for behavioural science applied to public policy. What existing resources/platforms do you currently use to pre-register and preprint your work in applied behavioural science?
* SciBeh had the initial goal of using its Reddit forums to review preprints, and that aspect didn't really work: input and effort just couldn't keep up.
* And it seems to me a characteristic of the initiatives that have 'worked' that they are all quite tightly knit groups (focused on a single institution, for example); that seems problematic to me going forward.

### Talk by Michele Starnini

* Introducing PRINCIPIA: a decentralised peer-review ecosystem.
* The current academic publishing system has three functions, which impact massively on researchers' careers: distributing papers within the science ecosystem, disseminating them outside it, and checking the quality of papers. Its problems include high costs and low-quality peer review.
* Comparison of closed vs. open-access publishing: closed journals charge no publishing fee and are cheaper for universities to pay for, whereas open access involves a publishing fee that is roughly 30% more expensive for scientists to pay, and if your lab doesn't have funding, you can't publish open access (ultimately, it is all paid for by taxpayers...).
* Where does the money go? Journals have a high profit margin, but referees don't get paid, which makes ensuring high-quality reviews an issue.
* PRINCIPIA proposes that referees are remunerated and that remuneration depends on review quality. Journals are open access, cost-effective, and easy to start.
* Reviewers write a report and assign a score to the paper; papers are accepted if the score is greater than an agreed threshold.
* Others read the report and assign scores to it; the referee collects the fee if the report's score is greater than an agreed threshold.
* The value of the referee fee depends on a reputation score: the quality of the referee's reports, independence, and citations.
* The quality check for the review process is built in, depending on the scores, reports, and reputation score. Referees, authors, and journals are also incentivised:
    * Incentives for referees: write quality reports to increase their reputation score, as their fee corresponds to their reputation.
    * Incentives for authors: bid higher fees to secure referees with high reputation scores, because accepted papers reviewed by high-reputation referees will be of higher quality.
    * The incentives differ, but review fee levels should match demand from authors and supply from referees.
* Liquid journals: a journal's score depends on the quality of the papers it publishes (which in turn depends on referee reputation and reports).
    * Scientists can join journals by bidding a journal fee. The reputation scores of scientists depend on which journals they join. Journal members choose referees, and can also create new journals.
    * Incentives for scientists: join high-quality journals to increase their reputation score.
    * Incentives for journals: publish high-quality papers to increase the journal score.
* The white paper: https://arxiv.org/abs/2008.09011
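To make the two threshold checks and the reputation-weighted fee concrete, here is a minimal Python sketch of the mechanism as summarised above. It is an illustration only, not PRINCIPIA's actual implementation (the white paper describes a blockchain implementation in which fees are tokens); all class names, weights, and threshold values below are invented for the example.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical thresholds and fee unit -- not values from the white paper.
PAPER_ACCEPT_THRESHOLD = 7.0    # paper accepted if the referee's score exceeds this
REPORT_QUALITY_THRESHOLD = 6.0  # referee is paid only if peers rate the report above this
BASE_FEE = 10.0                 # nominal fee unit (could be a token rather than money)

@dataclass
class Referee:
    name: str
    report_scores: list = field(default_factory=list)  # peer ratings of this referee's past reports
    independence: float = 1.0                           # 0..1, hypothetical
    citation_factor: float = 1.0                        # normalised citations, hypothetical

    @property
    def reputation(self) -> float:
        # The talk names report quality, independence, and citations as components;
        # how they are combined here is invented for illustration.
        quality = mean(self.report_scores) if self.report_scores else 5.0
        return quality * self.independence * self.citation_factor

def review_round(referee: Referee, paper_score: float, peer_scores_of_report: list):
    """One review: decide paper acceptance and whether/what the referee gets paid."""
    accepted = paper_score > PAPER_ACCEPT_THRESHOLD
    report_quality = mean(peer_scores_of_report)
    referee.report_scores.append(report_quality)          # feeds future reputation
    fee = BASE_FEE * referee.reputation if report_quality > REPORT_QUALITY_THRESHOLD else 0.0
    return accepted, fee

ref = Referee("R1", report_scores=[8.0, 7.5])
print(review_round(ref, paper_score=8.2, peer_scores_of_report=[7.0, 8.0, 6.5]))
```

The point of the sketch is simply that acceptance and payment are decoupled: the paper is accepted on the referee's score, while the referee is paid only if peers rate the report highly, with the fee scaled by reputation.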
_Question:_ I wonder whether the motivation for reviewing would be the same for referees (us) (e.g., internal vs. external motivation). Do you have any idea on this?

* The motivation for referees does not change: contributing to the production of science. But in PRINCIPIA, you get rewarded for your work as a referee.

_Question:_ Is there a check for the match between paper and referee (i.e., that the referee is on topic)?

* Sure, matching is based on keywords. Keywords can be updated.

_Question:_ What would prevent referees from gaming the system by adopting the simple rule of thumb "never give a low score to another report"? More general version: having the people who benefit from the evaluation of reports (referees) be the ones in charge of making that evaluation seems VERY risky!

* The same thing that prevents referees from "never giving a low score to another paper" in the current system: integrity.
* Also, the current system could be tricked by always giving high scores to the papers we review, right? But that did not happen.

_Question:_ Re the integrity/low-feedback point: but now you are creating an explicit feedback loop there in terms of reward. Wouldn't you expect that to be considerably more amplificatory---possibly amplifying the 'gaming' as well?

* Sure, the feedback is more explicit. One needs to try the system to check its weaknesses.

_Question:_ In your model, how necessary are the fees, as opposed to a publication 'bank' where authors need to commit to a certain number of reviews in order to get their own work peer reviewed?

* The idea of a publication "bank" is very interesting and could work as well. However, I think it should also include the reputation system somehow: with a reputation score, your commitment is much stronger. Finally, please consider that "fees" are not necessarily money. In the paper, we show an implementation on a blockchain, where fees are tokens.

_Additional comments from audience:_

* There is interesting research on aggregating scores in the context of strategic incentives. In a nutshell: trimmed means (e.g., the median) are the way to go. See: Hurley & Lior (2002). https://doi.org/10.1016/S0377-2217(01)00226-0
* I guess this is only possible when referee names are undisclosed, because otherwise you would run the risk of retaliation among reviewers that hurts fees, right?
* If the reputation score is posted along with the paper as a sign of the paper's 'quality', is there not a danger of rewarding the 'richest' researchers rather than the best science? (Researcher X's paper could be equally high quality, but they couldn't afford the high-reputation reviewers.)
* If actual fees are paid, what are the tax implications? I know in some countries researchers can't even pay participants because of tax issues. They may also not be able to accept such fees.

### Talk by Cooper Smout

* Context: change to entrenched publishing systems is difficult if individuals do not act together.
* Free Our Knowledge is a collective action platform, aiming to bring together a critical mass of individuals who act simultaneously to advance their position/create favourable outcomes (e.g., in the vein of worker strikes or Kickstarter campaigns).
* Open science practices are beneficial for all researchers when adopted by all, but if only some individuals take them on, it could be a threat to their careers if others don't get on board (e.g., how to abandon high-profile journals that won't do open access?).
* Thus it's important to accumulate behavioural capital in order to start a new system.
* Free Our Knowledge is a bid to accelerate progress in open science through these means.
* Anyone can create a campaign to ask peers to adopt a new behaviour. Researchers pledge anonymously to act. The campaign becomes live once it recruits sufficient numbers pledging to support it; thereafter pledgers are listed on the website and work together to achieve the goal. (A small sketch of this threshold mechanic follows below.)
* Examples: publishing only in Green, Gold, or Platinum Open Access journals---pledge options can be adjusted based on authorship position, anonymous pledging, and support thresholds.
* New directions after initial feedback: trying simpler campaigns with smaller thresholds.
* GitHub is used so that anyone can propose and develop new campaigns.
* Long view: a marketplace of progressive ideas, backed by community action.
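As referenced above, the core mechanic is a conditional pledge: commitments stay anonymous and non-binding until a campaign reaches its recruitment threshold, at which point it goes live and pledgers are listed to act together. The short sketch below is a hypothetical illustration of that logic, not Free Our Knowledge's actual codebase; the campaign name and threshold are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Campaign:
    """Conditional-pledge campaign: inert until enough people commit (hypothetical sketch)."""
    name: str
    threshold: int                       # pledges needed before anyone is asked to act
    pledgers: list = field(default_factory=list)

    def pledge(self, researcher: str) -> None:
        self.pledgers.append(researcher)

    @property
    def live(self) -> bool:
        return len(self.pledgers) >= self.threshold

    def public_list(self) -> list:
        # Pledges stay anonymous until the campaign goes live;
        # then pledger names are listed and the group acts together.
        return sorted(self.pledgers) if self.live else []

c = Campaign("Publish first-author papers Platinum OA only", threshold=3)
for person in ["A. Researcher", "B. Scholar", "C. Scientist"]:
    c.pledge(person)
print(c.live, c.public_list())   # True ['A. Researcher', 'B. Scholar', 'C. Scientist']
```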
## Session 2: Interfacing with Policy

Read more about the panellists [here](https://hackmd.io/@scibeh/SJvII_uOD).

### Talk by Alison Wright

* The Human Behaviour Change Project (HBCP) aims to provide evidence about interventions to different users (especially interventions aimed at individuals), which is essential for improving wellbeing, including managing the impact of COVID-19.
* The goal is to make evidence useful for policy-makers, especially now that evidence accumulates faster than it traditionally has.
* HBCP uses an automated feature extraction system, using reasoning and machine learning algorithms, to create an interface for researchers and policy-makers.
* Algorithms learn to identify relevant research: learning what the scope of a review is, and automatically identifying papers. You can try out the automated information extraction [here](https://hbcpinfoextraction.mybluemix.net).
* Knowledge is represented in the form of classes, labels and definitions, and specifications of properties and relationships between them.
* Study identification is automated, with data regularly pulled into a central store.
* The AI is trained using manually annotated data.
* HBCP is also developing user interfaces (see the sketch after this section for the general idea):
    * Users can browse annotated records.
    * "Help me change my behaviour": a predictive interface prompts users to specify queries.
    * A full research and prediction interface (queries -> quick answers).
    * These interfaces should support interaction with policy-makers.
* Challenges for the project in using an AI-based knowledge system as a policy-making aid:
    * Trade-off between speed and accuracy---how much are policy-makers willing to accept?
    * How do we present the data in the way that policy-makers need?
    * Biases in the data (the AI learns from a dataset---if the primary data gathered is representative for some groups but not others, we can't overturn decades of bias in data collection).
    * Explainability: what is an appropriate level of trust in this sort of system? How transparent do we make the algorithm (i.e., do people know what it is doing)?

_Question:_ What would be the outcome of this project for policy-makers and the public?

* They will be able to query the system to get a meaningful answer, predictions, and recommendations.

_Question:_ If a policy-maker needs evidence, they typically ask others for reviews of the evidence. What would this system offer the review process beyond Google Scholar?

* It would have already identified the useful studies, and thus save the time of extracting them manually.
* It won't do a traditional review, but it will be able to predict outcome X under certain conditions, based on the data and the outcome variable asked about, and report the biggest effect achieved under those conditions.
* Users won't have to do all this work manually.

_Question:_ What is the current timeframe of the project?

* We have finished annotating reports for smoking cessation. The simplest version will be ready in early 2021, with a "help me quit" interface towards the second half of 2021, and everything rolled out by the end of 2021.

_Question:_ Does AI reduce transparency?

* We could consider: is the AI doing the task to the same standard as humans?
* The system is a bit less transparent, but more accurate.
* People tend to trade off in decision-making: if they know that the accuracy is just as good or better, then transparency doesn't seem as important.
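The HBCP interfaces were only described at a high level in the talk. As a purely hypothetical illustration of the "queries -> quick answers" idea, here is a sketch of how a structured behaviour-change query might be answered from annotated study records. All field names, records, and effect sizes are invented for the example; this is not the HBCP data model or API.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical annotated records: in the real project these would come from the
# automated extraction pipeline; techniques, populations, and effects here are invented.
ANNOTATED_STUDIES = [
    {"behaviour": "smoking cessation", "technique": "goal setting", "population": "adults", "effect": 0.21},
    {"behaviour": "smoking cessation", "technique": "goal setting", "population": "adults", "effect": 0.15},
    {"behaviour": "smoking cessation", "technique": "financial incentive", "population": "adults", "effect": 0.30},
]

@dataclass
class Query:
    behaviour: str
    population: str

def quick_answer(query: Query, studies: list) -> dict:
    """Return, per intervention technique, the averaged effect and the number of
    matching studies -- a crude stand-in for the prediction interface."""
    matches = [s for s in studies
               if s["behaviour"] == query.behaviour and s["population"] == query.population]
    grouped = {}
    for s in matches:
        grouped.setdefault(s["technique"], []).append(s["effect"])
    return {t: {"mean_effect": round(mean(v), 2), "n_studies": len(v)} for t, v in grouped.items()}

print(quick_answer(Query("smoking cessation", "adults"), ANNOTATED_STUDIES))
# {'goal setting': {'mean_effect': 0.18, 'n_studies': 2}, 'financial incentive': {'mean_effect': 0.3, 'n_studies': 1}}
```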
### Talk by Lindsey Pike

* What are the problems and solutions in connecting research and policy? The UK Evidence Information Service (EIS) focused on connecting academics with parliament, identifying a problem around access and interaction.
* A problem identified in 2013 was that only a small proportion of the academic community engages with policy-makers.
* "Can we help provide parliament with broad and rapid connections with the scientific community?"
* An issue was that policy-makers were unsure how to find "experts", and they also lacked time (they won't read a whole paper for insights).
* EIS was a database of UK scientists designed for immediate response: policy-makers/parliamentarians could seek specialists and get a quick response. This also targeted:
    * Diversity of scientists
    * Diversity of science
    * Diversity of locations
    * Greater academic engagement
* There was enthusiasm from both sides. However, a database wasn't enough. Both sides still needed to speak the same language, and encouragement and incentives were needed to promote engagement from both sides. Encouraging diversity of contributors was also an issue.
* A report exploring the viability of the EIS, produced in October 2018, had the following key points:
    * Clear presentations promote research use.
    * Academics should package their research in different ways.
    * Policy-makers value academia most when both sides share an understanding of the question and personal trust is formed.
    * Conclusion: the cost-benefit analysis found an online platform for the database wasn't feasible.
* So what works?
    * Co-production, building close relationships, engagement...and there is no one-size-fits-all strategy for interfacing between policy and research.
* Some new directions:
    * Areas of Research Interest (ARIs) give details about the main research interests.
    * Capabilities in Academic Policy Engagement
    * Universities Policy Engagement Network (UPEN): for networking and co-operation
    * Still to explore: how can ARIs complement the role of human expertise?

_Question:_ A lot of the recommendations and scientists given time by government were male. How can we address this issue?

* Diversity is a main priority (e.g., UPEN has a working group for diversity).
* Academics who engage with parliament are often senior academics, who are often older white men.
* We need to provide minorities with training in how to engage with policy-makers, etc.
* There is indeed a big gender imbalance, and women's views on the pandemic haven't had enough recognition.
* Research funding institutes are doing lots of work on diversity (many people sitting on funding panels).

_Question:_ How can researchers preserve their integrity when politics gets involved in research?

* Co-production is a way of working with policy-makers, but yes, it is not risk-free to engage: misrepresentation and misinterpretation of data can take place. It will always be a question for researchers: where do they draw the line on integrity?
* And policy-makers will not always like what they hear; it's important for researchers to recognise this.

_Question:_ I think the issue of trust and building trust is very important but challenging. Do you have any specific examples of how this has been done at Bristol, and any main learning points for making initial connections (with, e.g., civil servant policy advisors)?

* Trust is about creating relationships.
* At Policy Bristol, they advise researchers to work on the "5 Ts": knowing who your audience is, when your findings will be most relevant, translation, drawing on the right tools, and talking/communicating with the people who would be interested in the research.
* Build trust by creating relationships and knowing who is interested. Staff at your university might also have the contacts and expertise to get in touch with relevant people in policy-making.

_Further links from Lindsey's presentation:_

* https://www.bath.ac.uk/publications/understanding-and-navigating-the-landscape-of-evidence-based-policy/
* http://www.alliance4usefulevidence.org/assets/Alliance-Policy-Using-evidence-v4.pdf
* https://doi.org/10.1057/s41599-019-0232-y
* https://www.upen.ac.uk/
* https://www.gov.uk/government/collections/areas-of-research-interest
* https://post.parliament.uk/covid-19-areas-of-research-interest/
* https://www.cape.ac.uk/
* https://openinnovation.blog.gov.uk/

### Panel Discussion

_Question:_ **What safeguards could be put in place to make sure this doesn't arrive at "policy-based evidence", i.e. academics being sought to support conclusions the policy-makers want to hear?**

* **Lindsey**: Researchers need to have a working knowledge of who does what in government (e.g., civil servants are not politicians).
* **Rachel**: Academics may not be aware of the different steps in policy-making, and where evidence is most valuable at each step. It is important to know where to feed in information.
* **Rene**: In the European context, there is an idealised view that getting knowledge to the right people means everything will be fine, but this is naive. There are cases of cherry-picking, and of contractors under pressure to produce the desired results. Everyone needs to provide evidence---ideally we first have evidence, and then a direction for policy. But often politics comes first and then has to be backed up with evidence from science. Knowledge can be ammunition in politics. "Behavioural fatigue", for instance, is an example of a term sticking while trying to legitimise a policy with science.
* **Comment**: Realistically speaking, isn't it worth thinking about this as a spectrum: sometimes we get to the pinnacle of evidence-based policy, but it is certainly better to get policy-based evidence that is grounded in rigour and research?
_Question:_ **Transparency in these processes is becoming very different now, with more transparency going on. How do we navigate the issue of "scientists as activists"?**

* **Lindsey**: We need to have clarity of thought and of questions. Where do we draw the line for lobbying? It's important to ask: can you be objective and impartial? And policy-makers tend to want researchers for a literature review rather than for their own research, opinions, and findings.
* **Paulina**: Policy-makers do need to ask the right questions of academics to minimise bias, and should refrain from making assumptions. The Cabinet Office has a team that is quite academic, so advice sought from academics can be filtered by them in order to avoid bias. The team works with policy-makers to refrain from making assumptions about people's behaviour, for example by breaking a question down into specific problems and behaviours. This helps in looking for the right, objective evidence.
* **Rachel**: The framing of questions is important too. How do we get questions that are scoped to include the right amount of evidence? How do we bridge to a recommendation algorithm that determines which evidence is included and excluded?
* **Alison**: This is a problem we need to tackle. Bringing evidence together is going to involve humans for a while, and that will always be susceptible to biases. Articulating that the system is not perfect is the best we can do.

**When on the front lines, interfacing with public office, what do we do about the issue of uncertainty and conflicting science?**

* **Mirjam**: The goal is to have a unified language and message when it comes to informing the public on how to behave during a pandemic, as long as it is not politicised. In my role, behavioural questions are often put to biologists/doctors who don't know so much about this, so we need to make behavioural scientists more visible. For example, in Germany we have done poorly on uncertainty: the recommendations about masks weren't well communicated at first. Now the main topic is going to be communicating about vaccines. We need to communicate about scientific processes, and that uncertainty to an extent is normal.
* **Stephan**: The data from Germany is interesting, with a recent survey on people's attitudes to science. There was an increase in trust, and most people thought that scientists battling out different opinions brings you closer to the truth---different opinions are good.

_Question:_ **The very kinds of things that will make the policy-science interface effective (building trust through working relationships) will be the things that undermine the relationship between science and the public. Yet both are vital, particularly in a context like now. It looks no-win to me. Am I being too gloomy?**

* **Rachel**: We need to be clear about working in the public interest, working with civil servants rather than only working with people in policy. These are different links and perceptions: you are seen as working in the public interest when working with civil servants rather than with politicians.
* **Paulina**: It's common to combine the two. Policy-makers are often people who advise, and they come from an academic background (civil servants rather than politicians). Civil servants aim to be objective and make an effort not to display allegiance to a particular political party, but they need to remind everyone about the differences and make the definitions clear.
* **Lindsey**: Policy-making isn't purely done at a national level. There is a drive for universities to become more civic, to be positive contributors to their region, and to take part in conversations with policy-makers and other scientists.
* **Rene**: We need to make science transparent and gain trust from the public.

_Question:_ **To what extent do we try to make ourselves more transparent? With the extent of misinformation, and when scientists are considered the "elite", is transparency worth it if we are up against people who are just not interested?**

* **Rene**: The dynamics of "in-groups" and "out-groups" explain a lot of this. There is a question here of whether it is the "liberal elite's" job to make science transparent, or whether that is just wasted effort.
* **Lindsey**: Public engagement is important here, and it is linked to the issue of diversity---more diversity and inclusion gives the research community more trust. A researcher who looks like you makes it harder to say researchers come from an exclusive elite.
* **Rene**: When a database is open to everyone and journalists can ask questions, etc., this would make the process of sharing evidence seem more legitimate.

_Question:_ I wonder if study designs including participative methods like citizen science could be a solution for a trustful relationship? The public can be involved before results are reported to politicians.

* **Mirjam**: We don't have hard evidence on this yet, but we've been having people who have had COVID speak to the public, so people can relate more to this information coming from those like them.