# ICML23 Topology Matters
## Response to Reviewer qak9
We thank the reviewer for the constructive comments. To address your concerns, we give the following point-by-point responses and revise our paper at https://anonymous.4open.science/r/FairGR-A4A2/paper/ICML23_Topology.pdf.
**Q1: The theoretical analysis in Section 3 relies on the assumptions in Definition 3.1, 3.3 and 3.4, which restrict the interplay of sensitive attributes and graph topology. I am curious on how such assumptions are really satisfied in real-world data. The authors may need to provide some justifications on that.**
A1: We would like to clarify that it is **intractable** to theoretically analyze GNN behavior on **real-world graph data, because the ground-truth data distribution (e.g., of node features and topology) is unknown**. Instead, the Contextual Stochastic Block Model (CSBM) is a standard model for the theoretical study of graph data in the existing literature [A,B]. As a pilot study, we theoretically analyze the bias amplification problem in fair graph learning based on the CSBM and empirically validate bias amplification on real-world data.
Specifically, we consider the CSBM defined in Definition 3.1, whose parameters (e.g., sensitive homophily) differ from those in [A,B]. This model allows us to analyze aggregation as a function of dataset statistics (such as graph size, label homophily coefficient, and sensitive homophily coefficient) in a parameterized manner, which is impossible for real data with fixed statistics. Regarding Definitions 3.3 and 3.4, we note that these are **not assumptions**, but rather statistics defined on the CSBM; they are crucial for stating the sufficient condition we found for bias amplification. In addition, we empirically validate bias amplification for Graph Neural Networks (GNNs) on real-world datasets in Table 1.
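For intuition, below is a minimal NumPy sketch of such a parameterized generator. The function name `csbm_like_graph` and all parameter values are illustrative and do not reproduce Definition 3.1 exactly; `eps` sweeps the sensitive homophily coefficient, while `p`/`q` control label homophily and connection density.

```python
import numpy as np

def csbm_like_graph(n=1000, p=0.05, q=0.01, eps=0.8, c=0.5, d=8, seed=0):
    """Sample a small CSBM-style graph: p/q are intra/inter-class
    connection probabilities, eps biases the sensitive attribute toward
    the class label (raising sensitive homophily), c is the ratio of
    s = 1 nodes, and d is the feature dimension."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n)                           # binary class labels
    copy = rng.random(n) < eps                          # correlate s with y
    s = np.where(copy, y, (rng.random(n) < c).astype(int))
    mu = np.stack([np.full(d, -1.0), np.full(d, 1.0)])  # class-dependent means
    x = mu[y] + rng.normal(size=(n, d))                 # Gaussian node features
    probs = np.where(y[:, None] == y[None, :], p, q)    # SBM edge probabilities
    a = np.triu(rng.random((n, n)) < probs, 1)
    a = a | a.T                                         # undirected, no self-loops
    return x, y, s.astype(int), a.astype(int)

def homophily(a, labels):
    """Fraction of edges whose endpoints agree on `labels`; the same
    formula yields the label and sensitive homophily coefficients."""
    src, dst = np.nonzero(np.triu(a, 1))
    return float(np.mean(labels[src] == labels[dst]))
```

For example, `x, y, s, a = csbm_like_graph(); homophily(a, s)` returns a sensitive homophily coefficient that grows with `eps`, so a condition like the one in Theorem 3.6 can be probed by sweeping a single parameter.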
[A] Baranwal, Aseem, et al. "Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization." arXiv preprint arXiv:2102.06966 (2021).
[B] Baranwal, Aseem, et al. "Effects of Graph Convolutions in Multi-layer Networks." ICLR 2023.
**Q2: The loss formulation in Section 4 puzzles me a bit: how do the adjacency matrix for training nodes generalize to all nodes? Or do the authors only modify the subgraph of all training nodes? Some explanations seem needed.**
A2: Note that **only the sensitive attributes of training nodes are accessible**, so it is **infeasible to rewire links across the whole graph** with our method: we can only compute sensitive homophily for the training subgraph.
Our goal is to mitigate bias through graph rewiring as a pre-processing fairness method, and **rewiring the training subgraph, as shown in our experiments, is sufficient to mitigate bias**. Specifically, in the loss formulation, we only modify the topology of the subgraph used for training, while keeping the remaining edges intact, including edges between training and testing nodes as well as edges among testing nodes (see the sketch below). The main objectives of graph rewiring are twofold. Firstly, the proposed method aims to mitigate the unfairness of Graph Neural Networks (GNNs). Secondly, the effectiveness of our approach supports the claim that "topology matters in fair graph learning," since rewiring the topology leads to improved fairness performance.
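To make this scoping concrete, the following hedged sketch merges a rewired training-subgraph block back into the full adjacency; `merge_rewired` and its arguments are hypothetical names for illustration, not code from the paper.

```python
import torch

def merge_rewired(a_full, a_train_new, train_idx):
    """Write the rewired training-subgraph block back into the full
    adjacency matrix; train-test and test-test edges stay untouched."""
    a = a_full.clone()
    idx = torch.as_tensor(train_idx)
    a[idx[:, None], idx[None, :]] = a_train_new  # only the train x train block
    return a
```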
**Q3: For empirical results, the authors only consider three simple architectures: GCN, GAT and SGC. I am not sure if some recent architectures like GIN [1] are also worth comparing, since it has theoretically stronger expressive power and may lead to different results.**
A3: Thank you for your comments. We tailor the Graph Isomorphism Network (GIN) to the node classification task and add the results in Appendix H.6. GIN was originally designed for graph classification, where a graph pooling layer is adopted for graph readout. Since our paper focuses on fair node classification, which is distinct from the task addressed by GIN, we modify GIN by removing the graph pooling layer before running the experiments in Appendix H.6.
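For reference, here is a minimal PyTorch Geometric sketch of this tailoring; layer widths and depth are illustrative, and the paper's exact configuration is the one reported in Appendix H.6.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv

class GINNodeClassifier(nn.Module):
    """GIN tailored to node classification: GINConv layers are kept,
    but the graph-level pooling/readout is dropped so the model
    outputs one prediction per node."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        mlp1 = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                             nn.Linear(hid_dim, hid_dim))
        mlp2 = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                             nn.Linear(hid_dim, hid_dim))
        self.conv1, self.conv2 = GINConv(mlp1), GINConv(mlp2)
        self.out = nn.Linear(hid_dim, num_classes)  # per-node logits, no pooling

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.out(h)
```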
**Q4: Also, how does FairGR lead to different results in Figure 3? I suppose that is due to different selection of \alpha and \beta in (4), but the authors do not explicitly mention that. The authors may need to elaborate more in the implementation details part.**
A4: Figure 3 illustrates FairGR performance across different hyperparameters of the regularization and adversarial debiasing methods, showing the Pareto frontier between fairness and prediction performance. The hyperparameters $\alpha$ and $\beta$ in Eq. (4) are used to rewire the graph topology. For a fair comparison, the same in-processing hyperparameters (for regularization and adversarial debiasing) are used in Figure 3 to validate the effectiveness of the rewired graph against the original graph. We have added these details to Section 5.2.1 in the revised version.
[1] Xu, Keyulu, et al. "How Powerful are Graph Neural Networks?" ICLR 2019.
## Response to Reviewer Evsj
We thank the reviewer for the constructive comments. To address your concerns, we give the following point-by-point responses and revise our paper at https://anonymous.4open.science/r/FairGR-A4A2/paper/ICML23_Topology.pdf. We hope that, after reading our responses and the revision, you will reconsider your assessment of our work.
**Q1: The motivation for using mutual information between sensitive attributes and node features as bias measurement is unclear. The theorems mostly discuss this mutual information, but it does not directly prove anything about fairness.**
A1: We would like to clarify that **mutual information measures the statistical dependence between sensitive attributes and node representations**. Several works demonstrate that mutual information can be regarded as a measurement of representation bias, and that mitigating representation bias leads to fairer predictions in terms of demographic parity [A,B,C,D]. Compared with conventional demographic parity for classification with a binary sensitive attribute, mutual information applies more broadly to continuous/categorical/binary sensitive attributes and to both classification and regression tasks.
In our work, we aim to analyze the difference in representation bias before and after aggregation. Group fairness metrics such as demographic parity and equal opportunity cannot be adopted to measure representation bias, so mutual information is a reasonable choice as the bias measurement.
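As a concrete (and hedged) illustration, representation bias before and after aggregation can be estimated with an off-the-shelf MI estimator; the per-dimension sum below is a rough surrogate for $I(\mathbf{h}; s)$ that ignores dependence between dimensions, and the helper name is ours, not the paper's.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def representation_bias(h, s):
    """Rough surrogate for I(h; s): sum of per-dimension mutual
    information between representations h (n x d) and the discrete
    sensitive attribute s (length n)."""
    return float(mutual_info_classif(h, s, random_state=0).sum())

# Usage sketch: compare bias before vs. after one aggregation step,
# e.g. representation_bias(x, s) vs. representation_bias(a_norm @ x, s).
```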
[A] Locatello, Francesco, et al. "On the fairness of disentangled representations." Advances in Neural Information Processing Systems 32 (2019).
[B] Louppe, Gilles, Michael Kagan, and Kyle Cranmer. "Learning to pivot with adversarial networks." Advances in Neural Information Processing Systems 30 (2017).
[C] Kang, Jian, et al. "InfoFair: Information-Theoretic Intersectional Fairness." 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022.
[D] Kamishima, Toshihiro, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. "Fairness-aware classifier with prejudice remover regularizer." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2012, 35-50.
**Q2: The theoretical analysis is not entirely new. There are some existing papers that leverage the stochastic block model to prove that the aggregation in GNNs can expand the distance between the feature means of different classes and reduce the variance [A, B].**
A2: We respectfully disagree with the comment that our theoretical analysis is not new. Compared with [A,B], there are **several significant differences**, from the following perspectives:
* **Different research problems**. Our research problem (i.e., bias amplification in fair graph learning) is significantly different from that in [A,B]. Specifically, [A] theoretically investigates why graph convolution improves **linear separability and out-of-distribution generalization**, while [B] shows that graph convolution has stronger classification ability than an MLP given the distance between mean node features in the CSBM. In other words, we investigate the **difference in node representation bias before/after aggregation**, while [A,B] compare the classification performance of GNNs and MLPs.
* **Different assumptions**. We only assume the CSBM in our analysis, while **additional assumptions on inter/intra-class connection probabilities are required in [A,B]**. Firstly, even though the CSBM is used both in our work and in [A,B], the CSBM parameters are different: we highlight the sensitive homophily parameter, since it is central to the sufficient condition for bias amplification. Secondly, our analysis makes no assumption beyond the graph data being generated by the CSBM, whereas the main findings in [A,B] rely on intra-class and inter-class connection probabilities $p, q=\Omega(\frac{\log^2 n}{n})$, where $n$ is the number of nodes.
* **Different findings**. Our main finding is a **sufficient condition for bias amplification under GNN aggregation**, while [A,B] **quantitatively measure the improvement** of graph convolution over an MLP in terms of **prediction performance, under the assumption $p, q=\Omega(\frac{\log^2 n}{n})$ on intra-class and inter-class connection probabilities**. Specifically, our work identifies when and why bias amplification happens: it happens conditionally, and the sufficient condition involves many factors, including sensitive homophily, connection density, and the number of nodes.
Additionally, we highlight our main findings:
* **When (bias amplification is conditional)**: We provide **a sufficient condition** that guarantees bias amplification under the CSBM in Theorem 3.6. Existing CSBM-based analyses [A,B] mainly focus on prediction performance or linear separability, which differs from our fairness research problem; they do not even consider sensitive attributes in the synthetic data. We also provide a comprehensive discussion of several cases in which amplification happens (last paragraph of Section 3.3); for example, large sensitive homophily leads to bias amplification. We believe our theoretical analysis and the corresponding discussion are novel in the fairness community.
* **Why (a two-sided effect on fairness from the concentration property)**: We provide a theoretical analysis of **why bias amplification happens in certain scenarios**. The analysis is related to the concentration property of GNNs, discussed comprehensively from the definition, aggregation, and comparison perspectives in Appendix D.3. The concentration property describes node representation convergence, especially for deep GNNs: after aggregation, group mean node representations converge (beneficial to bias mitigation) while the variance of node representations within each group decreases (beneficial to bias amplification); a toy demonstration of this two-sided effect is given below. In a nutshell, node representation concentration may have a two-sided effect on fairness in a single aggregation, although bias can be mitigated for vanilla deep GNNs due to the over-smoothing issue. Our contribution is to identify the sufficient condition (e.g., a high sensitive homophily coefficient) under which bias amplification happens. In practice, the sensitive homophily coefficient is even higher than the label homophily coefficient in the real datasets used in our experiments.
In a nutshell, our work is significantly different from [A,B] in terms of research problem, assumptions, and findings. We believe the only common part is the adoption of the CSBM, though with different generator parameters. We hope our response addresses your concern.
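The two-sided effect mentioned above can be seen in a toy NumPy experiment (all numbers here are illustrative, not taken from the paper): under high sensitive homophily, one mean-aggregation step shrinks the within-group variance much faster than it shrinks the group-mean gap, which makes the groups easier to separate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 16
s = rng.integers(0, 2, n)                            # sensitive groups
x = rng.normal(loc=2.0 * s[:, None], size=(n, d))    # group-shifted features

# High sensitive homophily: within-group edges 10x more likely.
p_in, p_out = 0.02, 0.002
probs = np.where(s[:, None] == s[None, :], p_in, p_out)
a = (rng.random((n, n)) < probs).astype(float)
np.fill_diagonal(a, 1.0)                             # self-loops, GCN-style
h = a @ x / a.sum(1, keepdims=True)                  # one mean-aggregation step

for z, name in [(x, "before"), (h, "after")]:
    gap = np.linalg.norm(z[s == 0].mean(0) - z[s == 1].mean(0))
    var = z[s == 0].var() + z[s == 1].var()
    print(f"{name}: group-mean gap {gap:.2f}, within-group variance {var:.3f}")
# Expected pattern: the gap shrinks mildly (helps mitigation) while the
# variance collapses (helps amplification) -- the net effect is conditional.
```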
[A] Baranwal, Aseem, et al. "Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization." arXiv preprint arXiv:2102.06966 (2021).
[B] Baranwal, Aseem, et al. "Effects of Graph Convolutions in Multi-layer Networks." ICLR 2023.
**Q3: The empirical evaluation of the paper is weak. The major results shown in Table 1 suggest that the proposed method GR does not outperform a simple MLP in most cases.**
A3: We respectfully disagree with the comment that our experiments are weak. Firstly, GAT-GR, GCN-GR, and SGC-GR achieve lower DP and EO than MLP on the Pokec-n and NBA datasets, and GAT-GR also achieves lower DP and EO than MLP on Pokec-z. Overall, GR achieves lower bias than MLP in **15 out of 18 cases (3 GNN backbones, 3 datasets, 2 fairness metrics)**; we highlight the results in boldface. Secondly, our main claim is that **graph rewiring is beneficial to bias mitigation**, i.e., the GR method outperforms its counterpart without GR, which is validated in all 18 cases in Table 1. The competitive tradeoff performance of MLP is mainly because GNN aggregation amplifies bias in real-world data. Thirdly, the proposed graph rewiring method is a **plug-in module** that can **further improve many in-processing fairness methods**, including regularization, adversarial debiasing, and Fair Mixup. We believe our experiments are strong and sufficient to support our claims.
**Q4: The paper claims the proposed method can be compatible with existing debiased methods such as adversarial debiasing and regularized training, and it shows how FairGR can further improve them. But it is still better to provide a direct comparison between FairGR and these approaches to show the improvement. For instance, can the proposed method outperform MLP + adversarial debiasing or MLP + regularized training?**
A4: We have added the experiments for MLP + adversarial debiasing and MLP + regularized training in Figures 3 and 9. It is seen that graph rewiring with an appropriate backbone can still achieve comparable or even better tradeoffs than MLP for different in-processing methods.
**Q5: The proposed method only deals with the subgraph of training nodes. Then how can it deal with the topological bias in the whole graph?**
A5: We would like to clarify that **only the sensitive attributes of training nodes are accessible**. In other words, we cannot compute sensitive homophily for the whole graph during graph rewiring. The goal of graph rewiring is to train fairer GNN models in a pre-processing manner. Additionally, our experiments demonstrate that subgraph pre-processing is **sufficient to mitigate prediction bias**.
**Q6: There is no detailed discussion of the implementation and hyperparameter setting of the proposed algorithm. In particular, the loss function in Equation (4) is not differentiable, and it is unclear how the Project Gradient Descent optimizes this objective. There is no discussion about the algorithm details, which makes it hard to replicate the reported results.**
A6: We have added a discussion of the hyperparameters in Section 5.2, with more details in the Optimization Strategy paragraph. Specifically, we first treat the optimized variables as continuous to make Equation (4) differentiable, and then use PGD to project the optimized variables back to $\{0,1\}$ to satisfy the constraint. We also clarify that PGD is a commonly used strategy for constrained optimization and may not reach the optimal solution; however, the main goal of the proposed method is to show that the insight from our theoretical analysis is beneficial in practice.
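A minimal sketch of this relax-then-project loop follows, assuming a dense adjacency over the training subgraph. The loss terms, the `fair_rewire` name, and all hyperparameter values are illustrative stand-ins for Eq. (4) rather than the paper's exact objective, and the final rounding to $\{0,1\}$ is one common way to realize the projection.

```python
import torch

def homophily_surrogate(a, labels):
    """Differentiable surrogate: edge mass between same-label pairs
    divided by total edge mass (an L1 norm in the denominator)."""
    same = (labels[:, None] == labels[None, :]).float()
    return (a * same).sum() / a.abs().sum().clamp_min(1e-8)

def fair_rewire(a0, y, s, alpha=1.0, beta=1.0, lr=0.1, steps=200):
    """PGD on a continuous relaxation of the training-subgraph adjacency:
    take a gradient step on the rewiring loss, project back into [0, 1],
    and round to {0, 1} at the end."""
    a0 = a0.float()
    a = a0.clone().requires_grad_(True)
    for _ in range(steps):
        loss = (homophily_surrogate(a, s)            # push sensitive homophily down
                - alpha * homophily_surrogate(a, y)  # keep label homophily up
                + beta * (a - a0).abs().mean())      # stay close to the original A
        loss.backward()
        with torch.no_grad():
            a -= lr * a.grad                         # gradient step
            a.clamp_(0.0, 1.0)                       # projection step of PGD
            a.grad = None
    return (a.detach() > 0.5).float()                # discretize to {0, 1}
```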
**Q7: The algorithm requires $O(n^2)$ complexities for memory and computation, which suggests the limitation in scalability.**
A7: We agree that the proposed method has a scalability issue. We have **provided a computational complexity analysis in Section 4 and a future-work discussion in Appendix I**. We would like to remind the reviewer that the purpose of the proposed method is to show that the **insight from our theoretical analysis is beneficial for mitigating bias** in practice; the scalability issue can be tackled in future work.
In conclusion, we **respectfully disagree with Reviewer Evsj's comments, especially regarding the novelty of the theoretical analysis and the alleged weakness of the experiments**. We hope our response addresses your concerns and that the reviewer can re-evaluate our work.
## Response to Reviewer iMWd
We thank the reviewer for the constructive comments. To address your concerns, we give the following point-by-point responses and revise our paper at https://anonymous.4open.science/r/FairGR-A4A2/paper/ICML23_Topology.pdf. We hope that, after reading our responses and the revision, you will reconsider your assessment of our work.
**Q1: Topological biases (sometimes also known as structural biases or relational information/biases) in GNNs have been extensively investigated by existing studies. There are also theoretical analyses on this topic. See for example this survey paper for the numerous existing literature: https://arxiv.org/pdf/2204.09888.pdf. The review of literature in this paper is fairly outdated.**
A1: We have revised the Related Work section in Appendix E to discuss more recent literature on fair graph learning. As for theoretical analysis on this topic, to the best of our knowledge, our paper is the first to analyze **why** and **when** the GNN aggregation operation amplifies bias compared with an MLP. We clarify that our theoretical contributions are significant and novel, as follows:
* **When (bias amplification is conditional)**: We provide **a sufficient condition** that guarantees bias amplification under the CSBM in Theorem 3.6. Existing CSBM-based analyses [A,B] mainly focus on prediction performance or linear separability, which differs from our fairness research problem; they do not even consider sensitive attributes in the synthetic data. We also provide a comprehensive discussion of several cases in which amplification happens (last paragraph of Section 3.3); for example, large sensitive homophily leads to bias amplification. We believe our theoretical analysis and the corresponding discussion are novel in the fairness community.
* **Why (a two-sided effect on fairness from the concentration property)**: We provide a theoretical analysis of **why bias amplification happens in certain scenarios**. The analysis is related to the concentration property of GNNs, discussed comprehensively from the definition, aggregation, and comparison perspectives in Appendix D.3. The concentration property means that, after aggregation, group mean node representations converge (beneficial to bias mitigation) while the variance of node representations within each group decreases (beneficial to bias amplification). In a nutshell, node representation concentration may have a two-sided effect on fairness in a single aggregation. Our contribution is to identify the sufficient condition (e.g., a high sensitive homophily coefficient) under which bias amplification happens. In practice, the sensitive homophily coefficient is even higher than the label homophily coefficient in the real datasets used in our experiments.
[A] Baranwal, Aseem, et al. "Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization." arXiv preprint arXiv:2102.06966 (2021).
[B] Baranwal, Aseem, et al. "Effects of Graph Convolutions in Multi-layer Networks." ICLR 2023.
**Q2: The theoretical analysis in this paper is essentially a straightforward adaptation of the work by Baranwal et al. 2021. Therefore the technical novelty of the theoretical analysis is rather limited.**
A2: We respectfully disagree that the technical novelty of our paper is limited, especially compared with [A,B]. We clarify that there are several significant differences, from the following perspectives:
* **Different research problems**. Our research problem (i.e., the **difference in node representation bias before/after aggregation**) is significantly different from that in [A,B]. Specifically, [A] theoretically investigates why graph convolution improves **linear separability and out-of-distribution generalization**, while [B] shows that graph convolution has stronger classification ability than an MLP given the distance between mean node features in the CSBM.
* **Different assumptions**. We only assume the CSBM in our analysis, while **additional assumptions on inter/intra-class connection probabilities are required in [A,B]**. Firstly, even though the CSBM is used both in our work and in [A,B], the CSBM parameters are different: we highlight the sensitive homophily parameter, since it is central to the sufficient condition for bias amplification. Secondly, our analysis makes no assumption beyond the graph data being generated by the CSBM, whereas the main findings in [A,B] rely on intra-class and inter-class connection probabilities $p, q=\Omega(\frac{\log^2 n}{n})$, where $n$ is the number of nodes.
* **Different findings**. Our main finding is a **sufficient condition for bias amplification under GNN aggregation**, while [A,B] **quantitatively measure the improvement** of graph convolution over an MLP in terms of **prediction performance, under the assumption $p, q=\Omega(\frac{\log^2 n}{n})$ on intra-class and inter-class connection probabilities**. Specifically, our work identifies when and why bias amplification happens: it happens conditionally, and the sufficient condition involves many factors, including sensitive homophily, connection density, and the number of nodes.
In a nutshell, our work is **significantly different from [A,B]** in terms of research problem, assumptions, and findings. We believe the only common part is the adoption of the CSBM, though with different generator parameters.
[A] Baranwal, Aseem, et al. "Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization." arXiv preprint arXiv:2102.06966 (2021).
[B] Baranwal, Aseem, et al. "Effects of Graph Convolutions in Multi-layer Networks." ICLR 2023.
**Q3: The proposed method is only compared with vanilla GNN models. The practical significance of the proposed method is unclear, given that there are already many existing bias mitigation methods for GNNs.**
A3: We respectfully disagree that we only compare with vanilla models. Instead, we conduct experiments for regularization (REG), adversarial debiasing (ADV), and fair mixup, with and without FairGR, on various datasets; please see Figures 3 and 9.
We would like to remind the reviewer that our proposed FairGR is a graph rewiring method for achieving fair GNN models; it can serve as a plug-in module and further improve many in-processing methods, such as regularization (REG), adversarial debiasing (ADV), and fair mixup.
**Q4: The notations could be improved. For example, in Definition 3.1, there are notations like E_{ij}[P(A_{ij} = 1)] and E_i[P(s_i = 1)]. However, both P(A_{ij} = 1) and P(s_i = 1) are already deterministic numbers. What is the point of taking expectations over them?**
A4: $P(A_{ij} = 1)$ and $P(s_i = 1)$ are deterministic numbers only for fixed node indices $i, j$. In this paper, we **treat the node indices $i, j$ as random variables**. Therefore, $\rho_d = \mathbb{E}_{i,j}[P(A_{ij} = 1)]$ represents the connection density of the whole graph, while $c = \mathbb{E}_i[P(s_i = 1)]$ represents the ratio of nodes with sensitive attribute $s = 1$.
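Concretely, with $i, j$ drawn uniformly from the node set $\{1, \dots, n\}$, the quantities in Definition 3.1 can be read as follows (a restatement in the notation above, with the averages over indices made explicit):

$$\rho_d = \mathbb{E}_{i,j}\big[P(A_{ij}=1)\big] = \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n} P(A_{ij}=1), \qquad c = \mathbb{E}_{i}\big[P(s_i=1)\big] = \frac{1}{n}\sum_{i=1}^{n} P(s_i=1).$$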
In conclusion, we **respectfully disagree with Reviewer iMWd's comments, especially regarding the novelty of the theoretical analysis and the experimental comparison**. We hope our response addresses your concerns and that the reviewer can re-evaluate our work.
## Second Round
## Response to Reviewer Evsj24
We thank the reviewer for the constructive comments and hope you will consider raising your score. To address your concerns, we give the following point-by-point responses and revise our paper at https://anonymous.4open.science/r/FairGR-A4A2/paper/ICML23_Topology.pdf.
Q3. My point is that if the accuracy of your models can not be better than MLP (see Table 1), then why would people even use the GNNs? We can just use MLP without worrying about the potentially biased caused by graphs, especially considering the computation complexity of the proposed method is very high.
A3: We respectfully disagree with this comment. First, our work aims to tackle the fairness problem on graphs; accuracy is not the only evaluation metric, and we care more about the fairness-accuracy tradeoff. More importantly, GCN-GR achieves **higher accuracy** and lower EO on the Pokec-z and Pokec-n datasets. As for the NBA dataset, the lower performance of GNNs is due to its low label homophily coefficient ($39.22\%$, see Appendix F), which means there are more inter-class edges than intra-class edges. [A,B] report similar observations that GCN achieves lower accuracy than MLP on datasets with a low label homophily coefficient, and [B] proposes an advanced GNN, named H2GCN, to tackle the low-label-homophily case. We **add experimental results for GIN and H2GCN in Appendix H.6**: H2GCN ($67.30\%$ acc) and H2GCN-GR ($66.98\%$ acc) still achieve higher accuracy than MLP ($65.56\%$ acc). In this paper, we only consider the data perspective on the fairness-accuracy tradeoff; an advanced GNN backbone can also improve the tradeoff, which indicates that the backbone matters as well, but advanced backbone design is out of our scope.
[A] Ma, Yao, et al. "Is homophily a necessity for graph neural networks?" ICLR 2022.
[B] Zhu, Jiong, et al. "Beyond homophily in graph neural networks: Current limitations and effective designs." Advances in Neural Information Processing Systems 33 (2020): 7793-7804.
Q5. The proposed method can only handle the bias in the subgraph induced by training nodes. In various applications, the labeling ratio is small so the training subgraph is very limited. I do not see why it is claimed that it will be sufficient to mitigate the prediction bias. If the proposed can only handle the training subgraph, this will be a big limitation.
A5: As in our previous response, we assume that only the sensitive attributes of training nodes are available, and thus we only rewire the training subgraph. Our results demonstrate that rewiring the training subgraph, rather than the whole graph, can mitigate prediction bias.
As for the case with few labeled nodes, our proposed method still works. We **conduct additional experiments where only $50\%$ of nodes have sensitive attributes, across various GNN backbones and datasets, in Appendix H.7**; the proposed method still mitigates prediction bias in this setting. Additionally, there are other ways to handle partially labeled nodes. For example, a sensitive attribute estimator can be trained on the labeled nodes to provide surrogate sensitive attributes for the unlabeled ones; our GR method can then be applied using both the true and surrogate sensitive attributes. However, an advanced GR method for these broader cases is beyond the scope of this paper, and we leave it for future work. We have added a discussion of the partial-labeling case in Appendix I. We believe our current experiments support the claim that "topology matters in fair graph learning".
Q6. My point is not about whether the variable is continuous or discrete. The question is that the objective in Eq. (4) is not differentiable with the L1 norm, and some L1 norm terms are in the denominator. How is the gradient being computed?
A6: We apologize for any confusion regarding the differentiability of the L1 norm in Eq. (4). L1 regularization is a common term in loss functions, and popular deep learning packages (such as PyTorch and TensorFlow) support parameter updates for losses with L1 terms. Firstly, the L1 norm is **differentiable almost everywhere, except at $0$**; at this non-differentiable point, a **subgradient (any value in $[-1,1]$)** can be used in the parameter update. As for the L1 terms in the denominator, we clarify that the denominator $\|\mathbf{A}\|_1$ is always larger than $0$, so the L1 terms in the denominator pose no issue for gradient computation.
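As a quick sanity check (a toy snippet of ours, not the paper's code), PyTorch differentiates such a loss directly; its `abs` backward uses the subgradient $\mathrm{sign}(x)$ with $\mathrm{sign}(0) = 0$ at the kink.

```python
import torch

# Toy loss with L1 terms in both the numerator and the denominator.
a = torch.tensor([[0.0, 1.0], [1.0, 0.0]], requires_grad=True)
a0 = torch.tensor([[0.0, 1.0], [1.0, 1.0]])
loss = (a - a0).abs().sum() / a.abs().sum()  # denominator > 0 whenever edges exist
loss.backward()
print(loss.item(), a.grad)                   # finite gradient everywhere
```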
Q7. It will be great if the paper can improve the computation complexity with sampling strategies.
A7: We argue that reducing the computational complexity is beyond the scope of this paper; the main point here is to support the claim that "topology matters in fair graph learning". As **discussed in Appendix I**, we leave efficiency improvements and other advanced graph rewiring methods for future work.
## Rebuttal by Authors
We thank all the reviewers for their constructive comments and helpful feedback. We would like to highlight several key points as follows:
* We theoretically investigate, for the first time, why GNNs exhibit higher representation/prediction bias than MLPs. Even though this observation has been made empirically in several previous works, the theoretical reason behind it remained unclear; our core contribution is to unveil the underlying reason theoretically.
* We consider the CSBM, which is commonly used in theoretical GNN analysis; it is infeasible to conduct theoretical analysis without a ground-truth graph data distribution. We theoretically show that such observations happen **conditionally**, and we provide a sufficient condition in Theorem 3.6.
* We develop a simple yet effective graph topology rewiring method, FairGR, which keeps label homophily high while reducing sensitive homophily. Experiments show that FairGR can improve tradeoff performance across various GNNs and datasets, which supports the claim that "topology matters in fair graph learning".
We have additionally performed experiments to address the reviewers' concerns, including more backbones and the scenario where only part of the nodes have sensitive attributes. Please find below our detailed responses to the questions and concerns raised by the reviewers. We incorporate all these comments and the comprehensive experimental evaluation at https://anonymous.4open.science/r/FairGR-A4A2/paper/ICML23_Topology.pdf.
As the deadline for the reviewer-author discussion (3/26/23 3 pm EST) approaches, we would be happy to take this opportunity to discuss any remaining questions or concerns. If our response has addressed your concerns, we would be grateful if you could re-evaluate our paper based on our feedback.
Best regards,
Authors