# Notes: Civic tech in the global south : assessing technology for the public good, Chapter 1
## When Does ICT-Enabled Citizen Voice Lead to Government Responsiveness? by Tiago Peixoto, Jonathan Fox
## DZN notes
01/06/2018
:::info
http://pubdocs.worldbank.org/en/835741452530215528/WDR16-BP-When-Does-ICT-Enabled-Citizen-Voice-Peixoto-Fox.pdf
:::
### Key terms
* citizen uptake (**"yelp"**)
* the degree to which public service providers respond to expressions of citizen voices (**"teeth"**).
## Excerpts from the text (highlighted portions are the most pertinent to vTaiwan)
analysis revealed that we do not see a generic type of platform leading to a generic type of response. Instead, we see key differences in the institutional (not technological) design of the interface that may be relevant for voice, citizen action and institutional response. **The evidence so far indicates that most of the ICT platforms that manage to leverage responsiveness somehow directly involve government.**
While ICT-enabled voice platforms vary widely across many dimensions, this analysis emphasises **several differences that are hypothesised to influence both citizen uptake and institutional response. These include the degree of public access to information about the expression of voice – does the public see what the public says? Does the ICT platform document and disclose how the public sector responds? They also include institutional mechanisms for public sector response – do the agencies or organisations take specific offline actions to prompt service providers’ response?** As a first step towards homing in on these variables, this paper maps the 23 platforms studied in terms of various empirical indicators of these distinct dynamics. This exercise is followed by a discussion of propositions that may or may not link voice to institutional response.
**The first analytical challenge is to disentangle voice from responsiveness. Much of the first wave of research on ICT-enabled voice platforms focuses primarily on citizen uptake (e.g. Gigler and Bailur 2014), without clear evidence that the feedback loop actually closes.** In practice, the concept of a feedback loop is often used to imply that uptake (e.g. citizen usage of crowd-sourced platforms to report feedback) necessarily leads to positive institutional responses. In other words, there is a high degree of optimism embedded in the way the concept tends to be used. In contrast, the framework proposed here avoids this assumption by treating the degree of institutional response as an open question.
**The second conceptual challenge is to specify the relationship between the role of ICT-enabled voice platforms and the broader question of the relationship between transparency and accountability.** In spite of the widely-held view that ‘sunshine is the best disinfectant’, the empirical literature on the relationship between transparency and accountability is far from clear (Fox 2007; Gaventa and McGee 2013; Peixoto 2013). The assumed causal mechanism is that transparency will inform and stimulate collective action, which in turn will provoke an appropriate institutional response (Brockmyer and Fox 2015; Fox 2014). In this model, both analysts and practitioners have only just begun to spell out the process behind that collective action (Fung, Graham and Weil 2007; Joshi 2014; Lieberman, Posner and Tsai 2014). **In light of widely held unrealistic expectations about the ‘power of sunshine’, convincing propositions about the causal mechanisms involved need to specify how and why the availability of an ICT platform (a) would motivate citizen action and (b) why the resulting user feedback would motivate improvements in service provision. After all, decision-makers’ lack of information about problems is not the only cause of low-quality service provision.**
**Third, the relationship between ICT-enabled voice platforms and the transparency/accountability question is complicated by the fact that, in practice, a significant subset of those platforms does not publicly disclose the user feedback**. Yet if citizen voice is not made visible to other citizens, where does its leverage come from? Such feedback systems aggregate data by asking citizens to share their assessments of service provision, but if the resulting information is not made public, then it cannot inform citizen action. In these systems, if users’ input is going to influence service provision, voice must activate “teeth” through a process other than public transparency—such as the use of data dashboards that inform senior managers’ discretionary application of administrative discipline.
![](https://i.imgur.com/StOeUQ3.png)
Based on these conceptual propositions, this review of twenty-three ICT-enabled voice platforms distinguishes between two different types of citizen voice, “user feedback” and “civic action.” While these two approaches can overlap in practice, they are analytically distinct. Their common denominator is the use of dedicated ICT platforms to solicit and collect feedback on public service delivery. The differences between them involve **three dimensions: i) whether the feedback provided is disclosed; ii) through which pathway individual or collective citizens’ preferences and views are expressed; and iii) whether these mechanisms tend to promote downwards or upwards accountability.**
...
**On the first dimension, we will assess cases in terms of the extent to which the feedback provided by individuals is publicly disclosed or not, thus enabling citizens to potentially act to hold governments accountable.**
look in chapter for examples
...
**The second dimension that we use to categorise platforms assesses the mechanisms by which citizens’ views and preferences are expressed – either individually or collectively.**
collective mechanisms refer to those in which it is the magnitude, nature and intensity of the aggregation of citizen concerns that is expected to trigger governmental action.
example in doc
In both initiatives, it is the collective mobilisation around a cause or preference that is intended to trigger government responsiveness. **The core of the technological platforms that support these mechanisms lies in the reduction of transaction costs for collective action that can address policy agenda-setting, in contrast to reacting to policy outputs.**
This collective dimension, we argue, is what gives the character of ‘civic-ness’ to ICT-enabled voice platforms, insofar as they enable individuals to engage in collective action – or at least to address public concerns. **In contrast to feedback systems that receive individual reactions to specific service delivery problems, ICT platforms that enable the public aggregation of citizens’ views have more potential to constitute input into the setting of broader policy priorities.**
![](https://i.imgur.com/CIFKXz1.png)
On the left side of Figure 1 (blue) feedback is individual and undisclosed, which we can describe as a typical case of governmental user feedback platforms. On the right side (yellow), citizen voice is simultaneously collective and disclosed, meeting the two criteria for our definition of civic engagement. At the intersection point, however, we find platforms that both collect individually specific feedback and make those inputs public (sometimes also reporting whether and how the government responds). This overlap involves the fact that, **while individualised feedback mechanisms are not designed to spur online collective action within the platform itself, the fact that the feedback is publicised may inform and facilitate collective action – offline as well as online.** This may be the case, for instance, when the sum of individual feedback in a certain platform, such as FixMyStreet, reveals to the public the patterns of failure in a certain service, or in certain locations. In this case, even though the platform is not specifically designed to support collective action, the disclosure of evidence of patterns of failure in a given service may support well-targeted collective action to address service delivery problems.
![](https://i.imgur.com/8eOGKTQ.png)
Figure 2 shows that several cases do not involve individualized user feedback and fall entirely within the civic action category. In these cases, the ICT platform’s primary goal is to support collective action through the aggregation of individual citizen inputs. In other words, the role of individual inputs is not simply to identify specific service delivery problems, but to demonstrate the extent of citizen concern through the process of aggregation. The civic action cases considered here are significantly less numerous and more heterogeneous than either the user feedback cases or the citizen engagement cases. **They include projects as diverse as web-based participatory budgeting in Rio Grande do Sul and the international online petitioning platform Change.org. If the scope of this research were broadened to include e-participation, crowdsourced political deliberation, or the role of social media in enabling political protest, the number of relevant ICT platforms would increase. However, the focus here is on citizen voice platforms that specifically address public service provision.**
**we have identified eleven factors that may have a relationship with institutional responsiveness—Disclosure of feedback, Disclosure of response, Proactive listening, Voicing modality, Accountability directionality, Uptake, Combined offline action, Driver, Partnerships between public service provider and civil society organization(s), Level of government, and Institutional responsiveness.**
...
To conclude the discussion of these empirical findings, when examining the table above, one of the most noticeable patterns is the existence of numerous digital engagement initiatives that meet dead ends despite different pathways – at least in the relatively short run. The majority of the 23 cases studied led to low levels of institutional responsiveness, with 11 reporting medium to high levels (defined conservatively as leading to at least 20% response rates). Notably, the multiple dead ends do not seem to be explained by the absence of any one specific factor. None of these variables appears to be a sufficient condition for institutional responsiveness, suggesting that none of these factors can be considered a ‘magic bullet’. The findings suggest multiple pathways to institutional responsiveness – involving the convergence of multiple, mutually reinforcing factors. **If one factor does stand out, however, it is government involvement, insofar as four of the six cases of government-led voice platforms were associated with high rates of service delivery responsiveness.**
...
While platforms that enable upwards accountability (e.g., large-scale opinion surveys) are associated with only modest levels of institutional responsiveness, platforms conducive to downwards accountability appear to be associated with greater responsiveness: Five of the seven high-impact platforms disclosed feedback. **Six of the seven high-impact platforms involved offline citizen engagement. In all of the high-impact cases, government was present either as a driver (4 cases) or as a partner (3 cases).** This suggests that for downwards accountability to work most effectively, both public disclosure of feedback and public collective action may be necessary. In other words, civic engagement, in addition to information, is what generates the civic muscle necessary to hold senior policymakers and frontline service providers accountable.
...
Proactive listening – Indicates whether at some point the service provider proactively contacts the citizen in order to collect feedback on the quality of services provided.
**One of the relevant findings from this review of the evidence is that proactive listening systems are both relatively rare and yet quite significant.** Two of the best-known cases of ICT-enabled citizen voice—Punjab Feedback (which has the most uptake of any case by far) and U-Reporters—involve proactive listening...This uneven pattern of uptake and responsiveness in such a diverse set of proactive listening cases suggests that more institutional experimentation and innovation is needed, with a **strong emphasis on connecting the dots between incentives for citizens to express voice and the capacity of service providers to respond.**