Thanks for the feedback. We will address all comments in the camera-ready version.
# **Reviewer A**
**A.1: QOS contribution/novelty.**
**QOS Architecture:** QOS is one of the first attempts to build an operating system for managing QPUs while addressing several unique challenges of quantum computing, including spatial/temporal variances of QPU states, scaling of qubits, fidelity, QPU utilization, etc. The key insight of QOS is that the tradeoffs associated with these challenges are manageable when approached holistically. The QOS architecture follows a layered architecture comprising the QOS transpiler and the QOS runtime.
**QOS Transpiler:** The QOS transpiler provides a modular compiler infrastructure for compiling large circuits into smaller *Qernels* that can be executed on smaller, noisy QPUs.
To achieve this, the QOS transpiler makes the following contributions: (1) the Qernel abstraction and generic intermediate representation (IR) ($\S5.1$) as a generic substrate to implement different analysis or transformation passes, (2) a modular set of optimization passes ($\S5.2, \S5.3$), and (3) an extensible infrastructure for implementing new circuit compaction techniques through the virtualizer ($\S5.4$).
Further, the proposed optimizations in the QOS transpiler are important for two reasons:
* *A unified and general approach*: Many optimizations (e.g., gate virtualization [53, 70]) exist only in theory and have not been implemented in practice. The QOS transpiler IR provides a systematic way to express a range of optimizations.
* *Performance*: The QOS transpiler provides a modular architecture to combine these optimizations, significantly improving performance, fidelity, and scalability ($\S7.2$).
**QOS Runtime:** The QOS runtime schedules the optimized circuits to run on specific QPUs while providing scalable, high-fidelity, and efficient execution of circuits.
To achieve this, the QOS runtime makes the following contributions: (1) circuit performance estimation based on three cost functions to explore accuracy-performance tradeoffs systematically, (2) multi-programming to spatially multiplex circuits for increased utilization of QPUs while minimizing fidelity penalties, and (3) a scheduler for temporal multiplexing of circuits for improving the load-balancing and minimizing the waiting times.
**A.2: Evaluation vs. SOTA.**
There is no end-to-end transpiler, including Qiskit [75], that addresses all the challenges covered by QOS. Related work on quantum transpilers targets specific optimization techniques, which are not composable. The key advantage of the QOS transpiler is its ability to compose different optimization passes via its generic DAG-based IR (similar to the LLVM IR). Therefore, we compare the QOS transpiler against the individual optimization techniques, including qubit mapping, routing, crosstalk mitigation, the Qiskit compiler, and multi-programming [48, 56, 21].
**A.3: Selection of transpilation techniques.**
In the same vein, the LLVM compiler infrastructure is not *novel* either; LLVM implements standard compiler optimizations (e.g., dead-code elimination, reachability analysis) that have been known since the 1970s. However, LLVM is the prominent compiler framework because it provides a generic, extensible, composable, and modular infrastructure for implementing *existing* compiler optimizations. The QOS transpiler strives for the same goal.
**A.4: Collaboration of the QOS Transpiler and the QOS Runtime.**
QOS is a layered architecture that decouples the static and dynamic state into the QOS transpiler and the QOS runtime, respectively.
The static component is required for analyzing and optimizing the quantum circuits (similarly to program analysis in classical computing) based on the circuit properties, e.g., circuit depth, number, and types of gates.
The dynamic component is equally important for scheduling, multiplexing (temporally and spatially), and executing the optimized circuits based on the QPU states, which vary across space and time.
# **Reviewer B**
**B.1: More technical details.**
*a) Transpiler metadata:* The metadata consists of important circuit properties such as depth, width, number of gates, etc., and the properties described in SupermarQ [96], e.g., entanglement ratio, measurement density, and critical depth. We deduce this metadata by traversing the graph IR(s) as part of the QOS transpilation process.
*b) Optimization goals:* We optimize the circuit's properties ($\S2, \S7.1, \S7.2$), i.e., circuit size, depth, and number of CNOT gates, which directly impact fidelity. This is achieved by circuit compaction, e.g., circuit cutting or qubit freezing, which eliminates CNOT gates while lowering the depth.
*c) QPU performance:* QPUs exhibit noisy qubits and couplings ($\S2$). Larger circuits have more qubits and gates and therefore lower performance ($\S3.1$). The equations in $\S6.1$ estimate the errors based on the QPU's calibration data and the circuit's properties. Since the errors are probabilistic and assumed independent, we can multiply the individual success probabilities (one minus each error rate) to estimate the circuit's overall success probability.
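As a minimal sketch of this multiplication (the error rates below are hypothetical; the actual cost functions appear in $\S6.1$):

```python
def estimated_success_probability(gate_errors, readout_errors):
    """Estimate a circuit's success probability by multiplying the
    per-operation success probabilities (1 - error rate), assuming
    independent errors."""
    p = 1.0
    for err in list(gate_errors) + list(readout_errors):
        p *= 1.0 - err
    return p

# Hypothetical calibration data: two CNOTs and two qubit readouts.
p = estimated_success_probability(gate_errors=[0.01, 0.01],
                                  readout_errors=[0.02, 0.02])
# Circuits with fewer noisy operations score strictly higher.
```

A compacted circuit has fewer factors below 1.0 in this product, which is why eliminating CNOT gates directly improves the estimate.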
*d) Scheduler scoring:* The QOS scheduler provides a generic mechanism to plug different scheduling policies. In the paper, we show a simple policy that strikes a balance between conflicting objectives: fidelity, waiting time, and utilization. The constant *c* tunes this tradeoff by giving priority (higher weight) to the respective objective. By default, we use $c=0.5$ for a balanced approach. Our scheduler can support other scheduling policies too.
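A minimal sketch of such a weighted policy (the function shape, normalization, and candidate data below are illustrative, not the paper's exact formula):

```python
def qpu_score(est_fidelity, est_wait, max_wait, c=0.5):
    """Hypothetical scoring sketch: weight estimated fidelity against
    normalized waiting time. c -> 1 favors fidelity, c -> 0 favors
    short queues; c = 0.5 balances both objectives."""
    return c * est_fidelity - (1.0 - c) * (est_wait / max_wait)

# Pick the best QPU among candidates: (name, est. fidelity, wait in s).
candidates = [("qpu_a", 0.95, 3600), ("qpu_b", 0.90, 600)]
best = max(candidates, key=lambda q: qpu_score(q[1], q[2], max_wait=3600))
# With c = 0.5, the lightly loaded qpu_b wins despite slightly lower
# fidelity; with c = 1.0, the higher-fidelity qpu_a would win.
```

Any policy expressible as a scoring function over (fidelity estimate, queue state) can be plugged in the same way.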
**B.2: Open-source.**
QOS will be released as open-source software.
**B.3: Further QPU resource management challenges.**
While QOS strives to address pressing challenges in quantum computing, there are still many open challenges, e.g., different types of QPUs (superconducting, neutral atoms, ion traps, etc.), calibration cross-overs, and job migration.
# **Reviewer C**
**C.1: Novelty.**
See A.1.
**C.2: Evaluation of tradeoffs.**
Quantum computing offers a range of tradeoffs in terms of fidelity, utilization, waiting times, and performance. In our paper, we investigate three such tradeoffs: (1) fidelity vs. overheads ($\S7.2$), (2) fidelity vs. utilization ($\S7.4$), and (3) fidelity vs. waiting times ($\S7.5$). We did investigate the requested tradeoffs with parameters *b* and *s*, but omitted the results due to space constraints.
**C.3: What is borrowed from SOTA.**
See A.1.
**C.4: QAOA example.**
We use QAOA as an example because FrozenQubits can only be applied to QAOA circuits [3]. For other circuits, we can still apply all the remaining techniques. There are numerous quantum benchmarks [47, 57], and QOS supports *all* of them.
**C.5: Effective utilization.**
Intuition: QPU utilization can be measured both in space (number of qubits used) and in time (the circuit's duration/depth). Effective utilization therefore captures *both* dimensions.
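The intuition can be sketched as a space-time ratio (an illustrative formula, not the paper's exact definition):

```python
def effective_utilization(jobs, total_qubits, window):
    """Sketch: fraction of the QPU's space-time volume actually used.
    Each job is (num_qubits, duration); the denominator is the QPU's
    full qubit count times the scheduling window."""
    used = sum(qubits * duration for qubits, duration in jobs)
    return used / (total_qubits * window)

# One 10-qubit job running for half the window on a 20-qubit QPU
# uses half the qubits for half the time: utilization 0.25.
u = effective_utilization([(10, 50)], total_qubits=20, window=100)
```

A purely spatial metric would report 0.5 for the same job; weighting by time exposes the idle space-time volume.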
**C.6: Reevaluation policy complexity.**
Each such pass costs 10s–100s of milliseconds, depending on the size and complexity of the circuits (see C.8). In practice, the complexity is *constant* since the system sets *N* and *W*, which do not depend on the number of pending circuits, say *M*, or the number of QPUs. Even if $M\gg N$, we always compare at most *N* circuits. Still, since $W$ and $N$ are tunable, we state the complexity as $O(W \times N)$. Moreover, we avoid comparing all possible pairs: we filter the circuits based on (1) the same best QPU, (2) low-utilization pairs, and (3) high compatibility scores. In Figure 7, steps 2-4 are applied to a single pair only.
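The three filtering criteria can be sketched as a pre-filter over candidate pairs (field names, thresholds, and the one-directional compatibility lookup are all illustrative):

```python
def candidate_pairs(circuits, util_cap=0.5, min_compat=0.8):
    """Sketch of the multi-programming pre-filter: only keep pairs that
    (1) share the same best QPU, (2) are both low-utilization, and
    (3) score highly on a compatibility metric. Each circuit is a dict:
    {'id', 'best_qpu', 'util', 'compat': {other_id: score}}."""
    pairs = []
    for i, a in enumerate(circuits):
        for b in circuits[i + 1:]:
            if a["best_qpu"] != b["best_qpu"]:
                continue  # (1) must share the same best QPU
            if a["util"] > util_cap or b["util"] > util_cap:
                continue  # (2) both must be low-utilization
            if a["compat"].get(b["id"], 0.0) < min_compat:
                continue  # (3) must be highly compatible
            pairs.append((a["id"], b["id"]))
    return pairs

circuits = [
    {"id": "c1", "best_qpu": "q1", "util": 0.3,
     "compat": {"c2": 0.9, "c3": 0.1}},
    {"id": "c2", "best_qpu": "q1", "util": 0.4, "compat": {}},
    {"id": "c3", "best_qpu": "q2", "util": 0.3, "compat": {}},
]
# Only ("c1", "c2") survives: same best QPU, low utilization,
# compatibility 0.9 >= 0.8.
```

The filters prune most of the quadratic pair space before the more expensive steps 2-4 of Figure 7 run on a single surviving pair.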
**C.7: Results unpacking/unbundling in multiprogramming.**
The multi-programmer keeps track of which Qernels are bundled together. The execution results are probability distributions and are represented as key-value pairs of {bitstring: float}. Every bit in the bitstring is the measurement result of a qubit (and the float is a probability in [0,1]). The multi-programmer splits the bitstrings based on the initial Qernels' sizes (i.e., the left-most *n* bits belong to the first Qernel, the rest to the second).
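The splitting step can be sketched as follows (a minimal illustration of marginalizing a joint distribution, assuming two bundled Qernels):

```python
from collections import defaultdict

def unbundle(counts, first_size):
    """Split a joint distribution {bitstring: probability} back into
    per-Qernel distributions: the left-most `first_size` bits belong to
    the first Qernel, the remaining bits to the second. Marginals are
    obtained by summing over the other Qernel's bits."""
    first, second = defaultdict(float), defaultdict(float)
    for bits, prob in counts.items():
        first[bits[:first_size]] += prob
        second[bits[first_size:]] += prob
    return dict(first), dict(second)

# A joint result of a 2-qubit and a 2-qubit Qernel bundled together:
joint = {"0001": 0.5, "0010": 0.25, "1110": 0.25}
a, b = unbundle(joint, first_size=2)
# a == {"00": 0.75, "11": 0.25}; b == {"01": 0.5, "10": 0.5}
```

Each marginal still sums to 1, so the split distributions can be post-processed exactly as if the Qernels had run alone.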
**C.8: QOS Transpiler overheads.**
For small circuits, the absolute overheads are small (milliseconds) but the relative factor is large, and vice versa. Certain transpilation stages are NP-hard [89]; therefore, larger (uncut) circuits incur exponentially longer compilation runtimes. By compacting them with the transpiler, the sum of the individual (smaller) overheads can be lower than the initial (larger) one. This explains the drop from $16.6\times$ to $2.5\times$ in Table 1. For large enough circuits, our approach eventually wins.
**C.9: Figure 10 experimental setup.**
We apologize: the takeaway message is incorrect. Our methodology is the ground truth because we compare against the best possible fidelity across all QPUs. We selected the best QPU (on average) based on standard performance metrics (median readout error, median T2, etc.). In fact, this is the standard practice adopted by cloud users to select the best-performing QPU. For instance, IBM Auckland, which offered the best performance metrics that day, had the longest queue when we ran this experiment.
**C.10: Figure 11(a) takeaway.**
The takeaway is that the combination of circuit compaction and multi-programming yields both higher fidelity and higher utilization than either technique achieves individually. The QOS compatibility score achieves higher fidelity than [21] for the same utilization target. Large circuits do not have to run with low fidelity, since the QOS transpiler will optimize them and reduce their size.
**C.11: Error bars.**
We measure effective utilization, which is deterministic and doesn't have any variance.
# **Reviewer D**
**D.1: Novelty.**
See A.1.
**D.2: Large-scale hypothetical QPU.**
Unfortunately, we do not have access to QPUs with more than 133 qubits. For QPUs larger than 133 qubits, we will provide an analytical model of QOS' performance in the camera-ready version.
**D.3: Scheduler scoring.**
See B.1(d).
**D.4: Performance improvement explanations.**
The QOS transpiler improves the circuit properties, i.e., number of CNOTs, depth, and circuit size, which gives the performance improvements ($\S7.2$).
The QOS estimator's cost functions accurately identify the best-performing QPU for a given circuit ($\S7.3$).
The QOS multi-programmer's compatibility score function improves the effective utilization and minimizes the fidelity penalties ($\S7.4$).
The QOS scheduler uses the variable *c* to prioritize fidelity over waiting times, and vice versa ($\S7.5$).
**D.5: Evaluation baselines.**
See A.2.
**D.6: Abstract nit.**
Compared to the respective baselines (A.2).
# **Reviewer E**
**E.1: Difference with heterogeneous SoCs.**
Existing classical scheduling cannot be directly applied in the quantum context because of several fundamental differences between QPUs and xPUs (GPUs, TPUs, etc.). For instance: (1) Spatiotemporal performance variance: daily calibrations vastly affect QPU error rates, so QPU performance is not deterministic in either dimension (time or space). (2) Quantum states collapse upon measurement, which prevents schedulers from preempting and migrating quantum circuits. (3) QPU architectures differ in qubit connectivity, which makes reusing pre-compiled quantum programs impossible; programs must be recompiled for each QPU architecture. (4) The performance-utilization tradeoff is more challenging in the quantum setting because even low utilization incurs high fidelity penalties ($\S3.3$).
**E.2: QOS behaviour.**
The QOS transpiler behaves like a compiler, e.g., LLVM. The QOS runtime behaves like a cluster scheduler (space and time multiplexer); its goal is to manage NISQ resources similarly to classical orchestrators.
**E.3: Self-contained paper.**
We will further expand the background to make the paper more accessible.
**E.4: Prior work on multiplexing.**
QOS supports spatial and temporal multiplexing. Specifically, QOS implements multi-programming, which is spatial multiplexing, and scheduling, which is time multiplexing. There is limited prior work in multi-programming and scheduling, as cited in $\S8$.
**E.5: Multi-QPU environment.**
For technical details about QPUs, see [here](https://blogs.nvidia.com/blog/what-is-a-qpu/) and [here](https://www.ibm.com/quantum). For currently available QPUs see [here](https://quantum.ibm.com/services/resources).
**E.6: QOS integration.**
Yes, QOS is designed for quantum cloud providers, such as IBM, to better manage the QPUs while improving the users' programs' completion time and fidelity.
The current IBM cloud supports very limited scheduling, does not support spatial multiplexing, i.e., multi-programming, and does not support scalable circuit compaction workflows.
**E.7: Scheduling quanta.**
By default, QOS schedules every 100 seconds.
Since current waiting times are already on the order of hours [80, 81], a scheduling window of 10-100 seconds is acceptable.
**E.8: Evaluation setup.**
All experiments are run on *real* quantum hardware, specifically the QPUs provided by IBM, except the scheduler results, which are simulation-based because we do not own a quantum cluster.
We report the average of 5 runs.
# **Reviewer F**
**F.1: Evaluation against SOTA.**
See A.2.
**F.2: Transpiler contributions.**
See A.1.
**F.3: QOS Transpiler metadata.**
See B.1(a).
**F.4: Evaluation setup.**
See E.8.
**F.5: QPU pool.**
The pool contained all the available IBM QPUs at that time with more than 16 qubits. See Figure 3c, second and third groups (divided by vertical red dashed lines).