<style>
section {
text-align: left;
}
.reveal .slides section {
font-size: 30px;
}
</style>
# A New Perspective on the Windows Scheduling Problem
<!-- Put the link to this slide here so people can follow -->
Slide: https://hackmd.io/@tobi-alafin/windows-scheduling-new-perspective
---
# 1. Abstract Slide:
1. Introduction
- Present an overview of the Windows Scheduling Problem (WSP) and its variants
- Introduce the Partial Coding Problem (PCP) and its relation to WSP
----
2. Goals/Objectives
- Investigate the computational complexity of WSP and its variants
- Establish connections between WSP and PCP through polynomial time reductions
- Discuss hardness results for single- and multi-machine cases of WSP
- Provide insights into the theoretical underpinnings of WSP and PCP
----
3. Significance
- Enhance understanding of the problem complexity in scheduling and combinatorial problems
- Offer new perspectives on WSP and its connections to other computational problems
- Inform the development of algorithms for scheduling and related applications
---
# 2. Basic Notions of Complexity
---
## 2.1 Decision Problems
Decision problems are computational problems that can be posed as questions with a binary (yes or no) answer. They often involve determining the existence of a solution or property that satisfies certain conditions.
Examples of decision problems:
- Does a given graph have a Hamiltonian cycle?
- Is a given number prime?
- Can a given Boolean formula be satisfied?
----
Decision problems are central to complexity theory, as they provide a way to classify and compare computational problems based on the resources (time, space, etc.) required to solve them.
Many computational problems do not look like decision problems at first glance (e.g. finding a particular solution rather than deciding whether one exists), but such search problems can usually be recast as decision variants. For example, the decision version of the traveling salesman problem asks, for a given bound $L$, whether the graph has a tour of length at most $L$.
---
## 2.2 Complexity Classes
### 2.3 Overview
Complexity classes are sets of problems that share similar resource requirements for their solution, such as time or space. Some well-known complexity classes are:
- P: Problems solvable in polynomial time
- NP: Problems verifiable in polynomial time
- co-NP: Problems whose complements are in NP
- NP-complete: Problems in NP to which all other problems in NP can be reduced
- NP-hard: Problems at least as hard as the hardest problems in NP
---
### 2.4 P Class
P is the class of decision problems that can be solved by a deterministic Turing machine in polynomial time. In other words, there exists an algorithm that can solve any problem in P within a number of steps that is a polynomial function of the input size.
Examples of problems in P:
- Checking if a given number is even
- Finding the greatest common divisor of two numbers
- Determining if a given graph is connected
---
### 2.5 NP Class
NP is the class of decision problems for which a "yes" instance can be verified by a deterministic Turing machine in polynomial time.
Examples of problems in NP:
- Deciding whether a given graph has a Hamiltonian cycle
- Deciding whether a given integer has a nontrivial factor below a given bound
- Determining if a given Boolean formula is satisfiable (SAT)
----
#### P vs NP
It is an open question whether P equals NP or not. Most computer scientists believe that P is not equal to NP, which would mean that some problems in NP are harder than those in P.
---
### 2.6 co-NP
co-NP is the class of decision problems for which a "no" instance can be verified by a deterministic Turing machine in polynomial time. This means that if the answer is "no", there's a certificate (proof) that can be checked quickly.
---
#### Example: UNSAT (Boolean formula unsatisfiability)
- UNSAT is the complement of the SAT problem (Boolean formula satisfiability).
- A candidate solution for UNSAT is a proof or certificate demonstrating that the given Boolean formula is unsatisfiable (i.e., no assignment of truth values to the variables results in the formula evaluating to true).
One common way to represent such a proof is through a resolution refutation.
----
#### Resolution Refutation
- A resolution refutation is a sequence of steps, each of which applies the resolution rule to pairs of clauses derived from the original formula or from previous steps in the sequence.
- The resolution rule combines two clauses with complementary literals (e.g., x and ¬x) and eliminates the variable involved, generating a new clause.
- The process continues until it derives the empty clause, which represents a contradiction. The existence of a contradiction implies that the original formula is unsatisfiable.
----
The candidate solution for UNSAT, then, would be the resolution refutation itself, which a deterministic Turing machine can check step by step in time polynomial in the length of the refutation. However, resolution refutations can be exponentially long, and no way of producing short certificates of unsatisfiability is known in general.
This illustrates the core distinction between NP and co-NP: "yes" instances of SAT have short certificates that can be verified efficiently (a satisfying assignment), whereas it is an open question whether "yes" instances of UNSAT also admit short, efficiently verifiable certificates, i.e., whether NP equals co-NP.
---
### 2.7 NP-hard and NP-complete
- **NP-hard:** Problems that are at least as hard as the hardest problems in NP. They may or may not be in NP themselves.
- **NP-complete:** Problems that are both in NP and NP-hard. They serve as a "benchmark" for the difficulty of problems in NP since all other problems in NP can be reduced to NP-complete problems in polynomial time.
----
Examples of NP-complete problems:
- Traveling Salesman Problem (TSP)
- 3-SAT
- Clique Problem
Significance:
- If an NP-complete problem can be solved efficiently, then P = NP, and all problems in NP can be solved efficiently.
- NP-hard problems are widely believed to be intractable, as they are at least as hard as the hardest problems in NP.
---
# 3. Polynomial Time Reductions
## 3.1. Definition and Role
- Polynomial time reductions are used to transform instances of one decision problem into instances of another.
- The transformation can be performed by an algorithm that runs in polynomial time with respect to the input size.
- The purpose of polynomial time reductions is to show that solving the second problem efficiently would also allow us to solve the first problem efficiently.
----
**Formal Definition:**
- Let $A$ and $B$ be two decision problems.
- Where:
- $\Sigma^*$ is the set of strings over the alphabet $\Sigma$
- $A, B: \Sigma^* \to \{0, 1\}$
- $A(x) = 1$ indicates that the solution to decision problem $A$ for input $x$ is "yes"
- $A(x) = 0$ indicates that the solution to decision problem $A$ for input $x$ is "no"
- Likewise for $B$
----
- A polynomial-time reduction from $A$ to $B$ is a function $f: \Sigma^* \rightarrow \Sigma^*$ that is computable in polynomial time.
- The function $f$ must preserve answers, i.e., $A(x) = B(f(x))$ for all $x \in \Sigma^*$.
If a polynomial-time reduction exists from problem $A$ to problem $B$, denoted as $A \le_p B$, it demonstrates that problem $B$ is at least as hard as problem $A$.
---
## 3.2. Example: 3-SAT to 3-Coloring
**3-SAT Problem:**
- Given a Boolean formula in conjunctive normal form (CNF) with exactly three literals per clause, decide whether there exists an assignment of truth values to the variables that satisfies the formula.
- CNF: Conjunction of clauses
- Clause: disjunction of literals
----
Examples of Boolean formulae in CNF:
- $(A \vee \neg B \vee \neg C) \wedge(\neg D \vee E \vee F)$
- $(A \vee B) \wedge(C)$
- $(A \vee B)$
- $(A)$
----
Examples of Boolean formulae not in CNF:
- $\neg(B \vee C)$, since an OR is nested within a NOT
- $(A \wedge B) \vee C$
- $A \wedge(B \vee(D \wedge E))$, since an AND is nested within an OR
----
Every Boolean formula can be rewritten as an equivalent formula in conjunctive normal form (although the CNF version may be much larger in general). CNF equivalents of the three non-examples above are:
- $(\neg B) \wedge(\neg C)$
- $(A \vee C) \wedge(B \vee C)$
- $(A) \wedge(B \vee D) \wedge(B \vee E)$
---
**3-Coloring Problem:**
- Given an undirected graph, decide whether there is a valid assignment of three colors to the vertices such that no two adjacent vertices have the same color.
We will show how to reduce an instance of 3-SAT to an instance of 3-Coloring in polynomial time.
----
Given an instance of 3-SAT with variables $x_1, x_2, ..., x_n$ and clauses $C_1, C_2, ..., C_m$, construct an undirected graph $G$ as follows:
1. Create a variable gadget for each variable $x_i$: a triangle on the vertices $x_i$, $\overline{x_i}$, and an auxiliary vertex $u_i$. These gadgets enforce that $x_i$ and $\overline{x_i}$ receive different colors, encoding a truth assignment.
2. For each clause $C_j$, create a "clause gadget" as a triangle with vertices representing the three literals in the clause. Connect each literal in the clause gadget to the corresponding literal in the variable gadgets using edges. This enforces that at least one literal in the clause is true.
The resulting graph $G$ can be 3-colored if and only if the given 3-SAT instance is satisfiable.
----
**Proof:**
*If the 3-SAT instance is satisfiable:*
- Assign a truth value to each variable according to the satisfying assignment.
- Color each vertex corresponding to a true literal with color 1, and each vertex corresponding to a false literal with color 2.
- Color the remaining vertex in each variable gadget with color 3.
- Assign the remaining vertices in each clause gadget colors 2 and 3.
- This results in a valid 3-coloring of the graph.
----
*If the graph can be 3-colored:*
- A valid 3-coloring ensures that no two adjacent vertices have the same color.
- In each variable gadget, one vertex must have color 1, one vertex must have color 2, and the remaining vertex must have color 3.
- Assign truth values to the variables according to the coloring: if a literal vertex has color 1, the corresponding variable is true; otherwise, it is false.
- This assignment satisfies all clauses because at least one vertex in each clause gadget has color 1, which corresponds to a true literal.
This reduction can be carried out in polynomial time, as the constructed graph $G$ has $O(n+m)$ vertices and $O(n+m)$ edges. This demonstrates that 3-SAT reduces to 3-Coloring in polynomial time.
----
### 3.2.1 3-SAT to 3-Coloring Demonstration
- Let's consider a trivial 3-SAT instance with only one clause:
- $(x_1 \lor \overline{x_2} \lor x_3)$
**Step 1:** Create a variable gadget for each variable: a triangle on $x_i$, $\overline{x_i}$, and an auxiliary vertex $u_i$.
- Construct three triangles, one per variable:
- Triangle 1: $x_1$, $\overline{x_1}$, $u_1$
- Triangle 2: $x_2$, $\overline{x_2}$, $u_2$
- Triangle 3: $x_3$, $\overline{x_3}$, $u_3$
----
**Step 2:** For each clause, create a "clause gadget" as a triangle with vertices representing the three literals in the clause. Connect each literal in the clause gadget to the corresponding literal in the variable gadgets using edges.
- Create a triangle for the clause with one new vertex per literal: $x_1'$, $\overline{x_2}'$, $x_3'$ (primes mark the clause-gadget copies)
- Connect each clause-gadget copy to the matching literal vertex:
- $x_1'$ (from the clause) to $x_1$ (from Triangle 1)
- $\overline{x_2}'$ (from the clause) to $\overline{x_2}$ (from Triangle 2)
- $x_3'$ (from the clause) to $x_3$ (from Triangle 3)
----
**The resulting graph will look like this:**
```
Variable gadgets                 Clause gadget

 x1 ---- ~x1                           x1'
   \     /                            /    \
    \   /                          ~x2' --- x3'
     u1
                                 Connecting edges
 x2 ---- ~x2                           x1'  --- x1
   \     /                             ~x2' --- ~x2
    \   /                              x3'  --- x3
     u2

 x3 ---- ~x3
   \     /
    \   /
     u3

 (primes mark the clause-gadget copies of the literals)
```
----
**Step 3:** Verify that the 3-SAT instance is satisfiable if and only if the resulting graph is 3-colorable.
- Let's consider a satisfying assignment for our 3-SAT instance: $x_1 = \text{True}$, $x_2 = \text{True}$, and $x_3 = \text{True}$.
----
**Step 4:** Assign colors to the vertices based on the satisfying assignment:
1. Color each vertex corresponding to a true literal with color 1 (red).
2. Color each vertex corresponding to a false literal with color 2 (blue).
3. Color the remaining vertex in each variable gadget with color 3 (green).
----
**Color assignment:**
- $x_1$ (red), $\overline{x_1}$ (blue), $u_1$ (green)
- $x_2$ (red), $\overline{x_2}$ (blue), $u_2$ (green)
- $x_3$ (red), $\overline{x_3}$ (blue), $u_3$ (green)
- Clause gadget (one valid choice): $x_1'$ (green), $\overline{x_2}'$ (red), $x_3'$ (blue)
----
**The colored graph will look like this:**
```
Variable gadgets                        Clause gadget

 x1(R) ---- ~x1(B)                         x1'(G)
    \       /                             /      \
     \     /                         ~x2'(R) --- x3'(B)
     u1(G)
                                        Connecting edges
 x2(R) ---- ~x2(B)                         x1'(G)  --- x1(R)
    \       /                              ~x2'(R) --- ~x2(B)
     \     /                               x3'(B)  --- x3(R)
     u2(G)

 x3(R) ---- ~x3(B)
    \       /
     \     /
     u3(G)
```
----
**The graph is 3-colorable, as no two adjacent vertices have the same color.**
----
**Conclusion:**
- We demonstrated a polynomial-time reduction from a trivial 3-SAT instance to a 3-coloring instance.
- The resulting 3-colorable graph confirms the satisfiability of the original 3-SAT instance.
- This example shows how the 3-SAT problem can be transformed into the 3-Coloring problem through a reduction process.
----
**Recap:**
1. Construct variable gadgets (triangles) for each variable and its negation, connected by an edge.
2. Create a clause gadget (triangle) for each clause, connecting the literals in the clause gadget to the corresponding literals in the variable gadgets.
3. Verify that the 3-SAT instance is satisfiable if and only if the resulting graph is 3-colorable.
4. Assign colors to the vertices based on a satisfying assignment.
5. Verify that the graph is 3-colorable, as no two adjacent vertices have the same color.
---
## 3.3. Implications
- Polynomial-time reductions are essential tools in complexity theory, as they allow us to compare the relative difficulty of various problems.
- They are particularly useful for proving the hardness of problems within complexity classes such as NP-hardness and co-NP-hardness.
- If we can show a polynomial-time reduction from a known hard problem to another problem, we can conclude that the second problem is also hard (within the same complexity class).
----
**Example:**
- The Cook-Levin theorem proves that the Boolean satisfiability problem (SAT) is NP-complete by providing a polynomial-time reduction from any problem in NP to SAT.
- The Cook-Levin theorem implies that if there were an efficient algorithm for solving SAT, we could efficiently solve all problems in NP, implying P = NP.
- Polynomial-time reductions are a powerful method for understanding the relative difficulty of problems, helping us determine which problems are likely to be solvable with efficient algorithms and which are not.
---
### 3.4. Summary/Recap
- Polynomial time reductions transform instances of one decision problem into instances of another, with the transformation being computable in polynomial time.
- They are crucial for comparing problem difficulty and proving hardness within complexity classes.
- The example reduction from 3-SAT to 3-Coloring demonstrates how such reductions can be constructed and used to understand the relationships between problems.
---
# 4. The Windows Scheduling Problem
---
## 4.1. Problem Definition
- Addressing the problem of scheduling periodic jobs subject to processing frequency demands
- Objective: decide whether an infinite non-preemptive schedule exists in which, for each job $j$, the time between the starts of any two consecutive executions of $j$ is at most $p_j$
- Equivalently, in any time interval of length $p_j$ an execution of job $j$ must be started at least once.
- Applications include maintenance scheduling, communication channel time-multiplexing, and satellite transmissions
----
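To make the window constraint concrete, here is a minimal Python sketch (the function name and the tiny instance are illustrative, not taken from the paper) that checks a cyclically repeated single-machine schedule with unit-length jobs and inexact periods: within every window of $p_j$ consecutive time slots, job $j$ must be started at least once.

```python
def feasible(schedule, periods):
    """Check a cyclically repeated schedule for the single-machine case with
    unit-length jobs and inexact periods: the gap between consecutive starts
    of job j (wrapping around the cycle) must never exceed p_j."""
    n = len(schedule)
    for job, p in periods.items():
        starts = [t for t, j in enumerate(schedule) if j == job]
        if not starts:
            return False
        # gaps between consecutive starts, wrapping around the repeated cycle
        gaps = [(starts[(i + 1) % len(starts)] - s) % n or n
                for i, s in enumerate(starts)]
        if max(gaps) > p:
            return False
    return True

# Jobs A, B, C with windows 2, 4, 4; the cycle A B A C is repeated forever.
print(feasible(["A", "B", "A", "C"], {"A": 2, "B": 4, "C": 4}))  # True
print(feasible(["A", "B", "C", "A"], {"A": 2, "B": 4, "C": 4}))  # False
```

The first cycle is feasible because A recurs every 2 slots and B and C every 4; moving the second A to the end stretches A's gap to 3 > 2.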

---
### 4.1.1. Constraints
- Periodic jobs with specific processing frequency demands
- Time intervals between consecutive job executions
- Non-preemptive scheduling
- Once a job has been started, it must be executed to completion without any interruptions
- Allocated jobs "lock up" the resources (processor/machine) they have been assigned to
---
### 4.1.2. Density
Let:
* $j$ index the tasks
* $p_j$ be the period of task $j$
* $\ell_j$ be the length of task $j$ (the number of time steps required to complete it)
The density of the problem instance is
$$\sum_j \frac{\ell_j}{p_j}$$
It is an important parameter: a density of no more than $1$ is a known necessary condition for feasibility.
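As a tiny illustration (the function name is my own, not from the paper), the density and the necessary condition can be evaluated exactly, e.g. for three unit-length jobs with periods 2, 4 and 4:

```python
from fractions import Fraction

def density(lengths, periods):
    """Density of a Windows Scheduling instance: the sum of l_j / p_j."""
    return sum(Fraction(l, p) for l, p in zip(lengths, periods))

# Three unit-length jobs with periods 2, 4 and 4.
d = density([1, 1, 1], [2, 4, 4])
print(d, d <= 1)   # 1, True -- the necessary condition for feasibility holds
```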
---
## 4.2. Problem Variants
1. Uniform vs. non-uniform job lengths
2. Exact vs. inexact periods
3. Single vs. multi-machine cases
- Multi-machine with migration vs. without migration
---
### 4.2.1. Uniform vs Non-Uniform Job Lengths
- Unit-length jobs: each job has a processing time of 1
- Non-uniform job lengths: each job $j$ has a length $\ell_j \in \mathbb{N}$ given as part of the input
---
### 4.2.2. Exact vs Inexact Periods
- Exact periods: time interval between any two subsequent executions of the same job $j$ must be *exactly* length $p_j$
- Inexact periods: $p_j$ is an upper bound on the length of that time interval
---
### 4.2.3. Single vs Multi-Machine Cases
1. Single-machine case
2. Multi-machine cases
- Disallowed job migration: each job must be assigned to a fixed machine
- Allowed job migration: each execution of a job can take place on a different machine
---
## 4.3. Related Work
- Windows Scheduling is also known as Pinwheel Scheduling or Periodic Scheduling
- Various special cases and approximations have been studied
- Confusion regarding computational complexity of Windows Scheduling with unit-length jobs in the literature
---
### 4.3.1. Pinwheel Scheduling
- Holte et al. introduced Pinwheel Scheduling for single-machine cases with inexact periods and unit-length jobs
- Provided algorithms for special cases such as harmonic instances and instances with up to 3 distinct period lengths
- Chan and Chin improved density bounds
---
### 4.3.2. Windows Scheduling with Exact Periods
- Also known as Perfectly Periodic Schedules
- Optimization problem to minimize the deviation from demanded job periods
- Studied in various works
---
### 4.3.3. Computational Complexity
- There has been confusion in the literature regarding the computational complexity of Windows Scheduling with unit-length jobs
- A correct proof of NP-hardness for the compact representation can be assembled from available results
---
# 5. Partial Coding
---
## 5.1. Introducing the Partial Coding Problem
- Definition of the Partial Coding Problem
- Types of Partial Coding Problem
- General
- Binary
- Applications and example
---
## 5.2. Partial Coding - Intuitive Explanation
- Given a set of attributes and a family of symbols
- Each symbol is associated with a subset of attributes
- Assign integer values to attributes to encode symbols
- A feasible encoding distinguishes any pair of symbols based on their common attributes
---
## 5.3. Formal Definition of Partial Coding Problem
- Given $d$ attributes with value ranges $f_1,\ldots,f_d$
- Family of $n$ symbols $s_1,\ldots,s_n \subseteq \{1,\ldots,d\}$
- An encoding assigns to each symbol $s_j$ a mapping $c_j: s_j \to \mathbb{N}$ with $c_j(i) \in [0,f_i-1]$
- Feasible encoding: for any index pair $j \neq k$, there exists an attribute $i \in s_j \cap s_k$ with $c_j(i) \neq c_k(i)$
---
## 5.4. Types of Partial Coding Problem
1. General Partial Coding Problem:
- Attributes can have any integer value range
2. Binary Partial Coding Problem:
- All attributes have a binary value range ($0$ or $1$)
- $\forall i \, (f_i = 2)$
---
## 5.5. Applications of Partial Coding Problem
1. Electronic identifier tags with faulty memory cells
- Tags can store a number of $d$ digits, representing an identifier
- Some memory cells in the tags may be faulty, causing unreliable reading
2. Unambiguously identifying objects despite faulty memory cells
- Partial Coding helps configure identifiers to ensure accurate identification
- Overcomes challenges posed by faulty memory cells
- Enables reliable tracking and identification of objects in various industries (e.g., logistics, manufacturing, retail)
---
## 5.6. Example of a Partial Coding Problem
$$
\begin{aligned}
&\begin{array}{l|cccc}
& s_1 & s_2 & s_3 & s_4 \\
\hline\{0,1\} & & \times & & \\
\{0,1,2\} & \times & \times & & \\
\{0,1\} & & & &
\end{array}\\
&\begin{array}{l|cccc}
& s_1 & s_2 & s_3 & s_4 \\
\hline\{0,1\} & 0 & \times & 1 & 1 \\
\{0,1,2\} & \times & \times & 0 & 2 \\
\{0,1\} & 0 & 1 & 0 & 0
\end{array}
\end{aligned}
$$
Note:
- Show Figure 1 from the provided content
- Explain how the instance is represented and how a feasible solution is obtained
- $\times$ indicates that the symbol is not associated with that attribute
----
Figure 1: An instance of the Partial Coding Problem with 4 symbols and 3 attributes (two binary, one ternary) is shown on the left-hand side; a feasible solution to it is given on the right-hand side. The problem instance would be formally written as $d=3$, $f_1 = f_3 = 2$, $f_2 = 3$, $n=4$, $s_1 = \{1,3\}$, $s_2 = \{3\}$, $s_3 = s_4 = \{1,2,3\}.$
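As a sanity check, the feasibility condition from Section 5.3 can be verified mechanically for the solution in Figure 1; a minimal sketch (the data layout is just one possible representation):

```python
def feasible_encoding(symbols, codes):
    """A PCP encoding is feasible if every pair of symbols shares an
    attribute on which their codes differ (Section 5.3)."""
    n = len(symbols)
    for j in range(n):
        for k in range(j + 1, n):
            common = symbols[j] & symbols[k]
            if not any(codes[j][i] != codes[k][i] for i in common):
                return False
    return True

# The instance and solution of Figure 1: d = 3, value ranges f = (2, 3, 2).
symbols = [{1, 3}, {3}, {1, 2, 3}, {1, 2, 3}]
codes   = [{1: 0, 3: 0}, {3: 1}, {1: 1, 2: 0, 3: 0}, {1: 1, 2: 2, 3: 0}]
print(feasible_encoding(symbols, codes))   # True
```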
---
## 5.7. Number Theoretic Prerequisites
- Topics:
- Relative prime vectors
- Residue vectors
- Relative prime factorisations
---
### 5.7.1. Relative Prime Vectors
- Definition: A vector of integers in which any two distinct entries are relatively prime (equal entries are permitted, as in the factorisation example later on).
- Example: $F = (3, 4, 5)$ is a relative prime vector, since the GCD of any two of its entries is $1$.
---
### 5.7.2. Residue Vectors
- Step 1: Choose an integer $N$.
- Step 2: For each value of $m$ in the relative prime vector $F$, find the residue of the integer $N$ modulo $m$.
- Step 3: Combine the residues to form the residue vector.
- Example: $N = 23, F = (3, 4, 5)$, Residue Vector $= (2, 3, 3)$
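The residue vector in this example is obtained by componentwise reduction; a one-line illustration:

```python
# Residue vector of N with respect to the relative prime vector F.
N, F = 23, (3, 4, 5)
residue_vector = tuple(N % m for m in F)
print(residue_vector)   # (2, 3, 3)
```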
---
### 5.7.3. Residue Class
- Explanation: The residue class of $N$ modulo the product of the members of $F$ is a set of integers that have the same residues as $N$ when divided by each of the members of $F$.
- Example: Residue Class for $N = 23$ and $F = (3, 4, 5)$ includes $-37, 83, 143, 203$, etc.
Note:
- Any number that is $23$ modulo the LCM of $F \, (60 \text{ in this case})$
---
#### 5.7.3.1. Chinese Remainder Theorem
- Explanation: The Chinese Remainder Theorem states that a system of congruences with pairwise relatively prime moduli always has a solution, and that this solution is unique modulo the product of the moduli.
- This theorem helps prove the uniqueness of the residue vector.
- I.e. any number that has the same residue vector as $N$ for $F$ must be congruent to $N$ modulo the LCM of $F$
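For illustration, the reconstruction direction of the theorem can be sketched in a few lines of standard CRT code (the function name is my own); it recovers $23$ from the residue vector $(2, 3, 3)$:

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct the unique residue class modulo the product of the
    (pairwise relatively prime) moduli from a residue vector."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

print(crt([2, 3, 3], [3, 4, 5]))   # 23, unique modulo 60
```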
---
### 5.7.4. Relative Prime Factorisations
- Definition: Given a multi-set of strictly positive integers $P$, a relative prime factorisation of $P$ is a relative prime vector $F$ where for each $p_j$ in $P$, there is some subset $s_j$ with the product of the elements in $s_j$ equal to $p_j$.
- Example: $P = (30, 42)$, possible relative prime factorisations: $(2, 3, 5, 7)$ and $(5, 6, 7)$
- The canonical prime factorisation is always a relative prime factorisation
---
#### 5.7.4.1. Computational Expensiveness of Prime Factorisation
- Explanation: Using the canonical prime factorisation to compute relative prime factorisations can be computationally expensive, as no efficient algorithm for prime factorisation is known.
- Alternative: There are other efficient methods to compute relative prime factorisations that can be helpful in solving problems like the Windows Scheduling Problem.
---
## 5.8. Polynomial Time Reduction from WSP to PCP
- Steps:
1. Compute the relative prime factorisation $F$ of the set of periods $P$.
2. Define the symbols for the PCP instance based on the factorisation $F$ and the periods $p_j$.
3. Establish the equivalence of feasible solutions in both problem instances.
---
#### 5.8.1. Theorem 12 and Proof Strategy
- Theorem 12: Windows Scheduling Problem can be reduced to Partial Coding Problem in polynomial time.
- Proof Strategy:
1. Compute relative prime factorisation of periods.
2. Determine symbols of PCP instance.
3. Show equivalence between feasibility of schedules and encodings.
---
#### 5.8.2. Polynomial-time Algorithm for Computing Relative Prime factorisations
- Algorithm: Using the split operator
$$\textrm{split}(x,y):=\left(\frac{x}{\textrm{gcd}(x,y)},\textrm{gcd}(x,y),\frac{y}{\textrm{gcd}(x,y)}\right)$$
----
- Steps:
1. Select a pair of distinct entries $(x, y)$ of the set $F$ that share a common prime factor (i.e., $x \neq y$ and $\gcd(x,y) > 1$).
2. Replace the pair $(x, y)$ in the set $F$ with the numbers computed by $\textrm{split}(x,y)$, discarding any resulting $1$s.
3. Repeat steps 1-2 until no pair of distinct entries of $F$ shares a common prime factor (equal entries may remain).
4. Sort the elements of $F$ in nondecreasing order
5. The elements of $F$ are a relative prime factorisation.
- Computational complexity: Polynomial in the representation size of the scheduling instance.
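A minimal Python sketch of the procedure as described above (my own rendering: equal entries are left untouched, since repeated values are permitted in the resulting vector, and the bookkeeping that recovers the index sets $s_j$ for each period is omitted):

```python
from math import gcd

def split(x, y):
    """split(x, y) = (x / gcd(x, y), gcd(x, y), y / gcd(x, y))"""
    g = gcd(x, y)
    return x // g, g, y // g

def relative_prime_factorisation(periods):
    """Repeatedly pick two entries with different values that share a common
    prime factor, replace them by split(x, y), and drop factors of 1, until
    any two distinct values in F are relatively prime."""
    F = list(periods)
    while True:
        pair = next(((i, j) for i in range(len(F)) for j in range(len(F))
                     if i != j and F[i] != F[j] and gcd(F[i], F[j]) > 1), None)
        if pair is None:
            return sorted(F)                  # nondecreasing order (step 4)
        i, j = pair
        x, y = F[i], F[j]
        F = [v for k, v in enumerate(F) if k not in (i, j)]
        F += [v for v in split(x, y) if v > 1]

print(relative_prime_factorisation([6, 8, 12]))   # [2, 2, 2, 3]
```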
---
#### 5.8.3. Determining Symbols of the PCP Instance
- Process:
1. For each job $j$ with period $p_j$, assign a set $s_j \subseteq \{1,...,d\}$.
- The indices in $s_j$ correspond to the position of the factors in $F$
2. Set $s_j$ is the lexicographically smallest index set whose corresponding entries of $F$ multiply to $p_j$:
$$\prod_{i \in s_j} f_i = p_j$$
Note:
- "Lexicographically smallest" means the smallest set when comparing the sets in lexicographic (dictionary) order.
- In lexicographic order, elements are compared one-by-one from left to right, and the order is determined by the first element that differs between the sets. For example, the set $\{1, 2, 3\}$ is lexicographically smaller than the set $\{1, 2, 4\}$ because $3$ is smaller than $4$.
----
##### 5.8.3.1 Example
- Windows Scheduling Problem with periods $P = \{6, 8, 12\}$
- Relative prime factorization $F = (2, 2, 2, 3)$.
We will assign a set $s_j$ for each job with period $p_j$ as follows:
----
1. For job 1 with period $p_1 = 6$:
- The factors of $6$ taken from $F$ are $\{2, 3\}$.
- The lexicographically smallest index set whose entries multiply to $6$ is $s_1 = \{1, 4\}$ (since $f_1 \cdot f_4 = 2 \cdot 3 = 6$).
----
2. For job 2 with period $p_2 = 8$:
- The factors of $8$ taken from $F$ are $\{2, 2, 2\}$.
- The lexicographically smallest index set whose entries multiply to $8$ is $s_2 = \{1, 2, 3\}$ (since $2 \cdot 2 \cdot 2 = 8$).
----
3. For job 3 with period $p_3 = 12$:
- The factors of $12$ taken from $F$ are $\{2, 2, 3\}$.
- The lexicographically smallest index set whose entries multiply to $12$ is $s_3 = \{1, 2, 4\}$ (since $2 \cdot 2 \cdot 3 = 12$).
----
In this example, we have determined the symbols for the PCP instance as $s_1 = \{1, 4\}$, $s_2 = \{1, 2, 3\}$, and $s_3 = \{1, 2, 4\}$.
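For small instances, these index sets can be found by exhaustive search; a brute-force sketch (exponential in the number of factors, purely for illustration):

```python
from itertools import combinations
from math import prod

def smallest_index_set(F, p):
    """Lexicographically smallest (1-based) index set whose entries of F
    multiply to the period p, found by exhaustive search."""
    candidates = [tuple(i + 1 for i in idx)
                  for r in range(1, len(F) + 1)
                  for idx in combinations(range(len(F)), r)
                  if prod(F[i] for i in idx) == p]
    return min(candidates) if candidates else None

F = (2, 2, 2, 3)
for p in (6, 8, 12):
    print(p, smallest_index_set(F, p))
# 6 -> (1, 4), 8 -> (1, 2, 3), 12 -> (1, 2, 4)
```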
---
#### 5.8.4. Equivalence between Feasibility of Schedules and Encodings
- Main Idea: Two jobs $j, k$ collide if and only if the symbols $s_j, s_k$ cannot be distinguished.
- A collision of two jobs is the situation in which they would have to be executed at the same time.
- A schedule is feasible if and only if there are no collisions.
- Let $j, k$ be two distinct unit-length jobs with exact periods $p_j$ and $p_k$, and let $t_j$ and $t_k$ be start times of $j$ and $k$ in a one-machine schedule, respectively.
----
- Proof:
1. Use necessary and sufficient condition for pairs of jobs to collide in a schedule.
- $$t_j \equiv t_k \pmod{\text{gcd}(p_j, p_k)}$$
2. Show that $t_j \equiv t_k \pmod{\text{gcd}(p_j, p_k)}$ if and only if $r_i(t_j) = r_i(t_k)$ for each $i$ in $s_j \cap s_k$, where $r_i(t_j)$ and $r_i(t_k)$ are the residue vector components with respect to $F$.
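The collision condition can be cross-checked against a direct simulation over one common period; a small sketch assuming unit-length jobs with exact periods (names are my own):

```python
from math import gcd, lcm

def collide(p_j, t_j, p_k, t_k):
    """Two unit jobs with exact periods collide (must run in the same time
    slot) iff t_j = t_k (mod gcd(p_j, p_k)); verified here by simulating one
    common period of the joint pattern."""
    horizon = lcm(p_j, p_k)
    slots_j = {(t_j + a * p_j) % horizon for a in range(horizon // p_j)}
    slots_k = {(t_k + a * p_k) % horizon for a in range(horizon // p_k)}
    by_simulation = bool(slots_j & slots_k)
    by_congruence = (t_j - t_k) % gcd(p_j, p_k) == 0
    assert by_simulation == by_congruence
    return by_congruence

print(collide(4, 0, 6, 2))   # True:  0 = 2 (mod 2)
print(collide(4, 0, 6, 1))   # False: 0 != 1 (mod 2)
```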
----
- This establishes the correctness of the reduction
- Feasible Windows Scheduling instances correspond directly to feasible Partial Coding instances
- The transformed Partial Coding instance is feasible if and only if the original Windows Scheduling instance is feasible
---
#### 5.8.5. Conclusions
- Implications: If a polynomial-time algorithm for PCP exists, there also exists a polynomial-time algorithm for WSP.
- Polynomial time transformation from WSP to PCP
- Apply the polynomial time algorithm for PCP
- Recover the original solution to WSP
---
## 5.9. Polynomial Time Reduction from Binary PCP to WSP
- Main Idea: Transform instances of BPCP with $d$ attributes to instances of WSP with a maximum period of $d^{O(d)}$.
- With one machine, uniform job lengths and exact periods
---
### 5.9.1. Reduction Strategy
- Steps:
1. Use a relative prime vector $F$ containing the first $2d$ prime numbers.
2. Construct Windows Scheduling instances where jobs' periods can be represented as products over subsets of $F$.
3. Transform BPCP instances with attribute value ranges as pairwise distinct prime numbers into WSP instances.
4. Transform a given BPCP instance into an equivalent PCP instance with attribute value ranges that are pairwise distinct primes.
---
### 5.9.2. Prime Number Theorem
- Fact: The $m$th prime number is asymptotically $m \ln m$.
- Implication: The maximum period of the resulting scheduling instance is upper bounded by $$O([(2d) \ln (2d)]^{2d}) = d^{O(d)}$$
---
### 5.9.3. Transforming BPCP Instances
- Process: Define the period of the $j\text{th}$ job as the product of the value ranges of all attributes contained by the $j\text{th}$ symbol.
- Assume an arbitrary but fixed set $A$ of $2d$ pairwise distinct attribute value ranges
---
#### 5.9.3.1. Transformation in Iterations
- Process:
1. Transform one binary attribute into a pair of non-binary attributes in each iteration.
2. Pick two arbitrary value ranges $a$ and $b$ from $A$ that have not been picked in a previous iteration.
3. Define sets $S(i)$ and $\bar{S}(i)$ as the set of symbols that contain and do not contain attribute $i$, respectively.
---
#### 5.9.3.2. Transformation Steps
- Steps:
1. Remove attribute $i$.
2. Add two new attributes $\alpha$ and $\beta$ with value ranges $a$ and $b$, respectively.
3. Add $\alpha$ to all symbols in $\bar{S}(i)$ and add $\beta$ to all symbols in $S(i)$.
4. Introduce $(a-1) \cdot (b-2)$ new auxiliary symbols, each containing only $\alpha$ and $\beta$.
---
### 5.9.4. Feasible Encoding
- Main Idea: In any feasible encoding, the auxiliary symbols must assume exactly $a-1$ different values of attribute $\alpha$ and at least $b-2$ different values of attribute $\beta$.
- Result: The remaining two values of $\beta$ can be assumed by the symbols in $S(i)$, and therefore, $\beta$ replaces $i$ as a binary attribute.
---
### 5.9.5. Conclusion
- Recap: By following the steps outlined, the reduction from the Binary Partial Coding Problem to the Windows Scheduling Problem is established.
- Implication: The complexity of the Binary Partial Coding Problem is shown to be closely related to the complexity of the Windows Scheduling Problem.
---
## 5.10. Polynomial Time Reduction from General Partial Coding to Binary Partial Coding
- Steps:
1. Replace a single non-binary attribute with a set of binary attributes.
2. Apply this procedure to all non-binary attributes.
---
### 5.10.1. Replacing Non-binary Attributes
- For a non-binary attribute $f_i > 2$:
- Let $k := \lceil \log_2 f_i \rceil$ (the number of binary attributes needed to represent $f_i$ values)
- Define $S(i)$ and $\bar{S}(i)$ (sets of symbols containing and not containing attribute $i$, respectively)
- Remove attribute $i$ and introduce $k + 1$ new binary attributes
- Replace index $i$ with indices of $k$ binary attributes in symbols $s_j \in S(i)$
---
### 5.10.2. Introducing Auxiliary Symbols
- Let $m := 2^k - f_i$
- Represent $m$ as a binary number: $$m = \sum_{j=0}^{k-1} c_j 2^j, \quad c_j \in \{0,1\}$$
----
- For each $j$ with $c_j = 1$:
- Introduce an auxiliary symbol containing the first $k - j$ auxiliary attributes
- Add the $k$-th binary attribute to all new auxiliary symbols and symbols in $\bar{S}(i)$
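A small worked sketch of this bookkeeping (illustrative only; it computes $k$, $m$ and which auxiliary symbols are introduced for a single non-binary attribute):

```python
from math import ceil, log2

def binarisation_plan(f_i):
    """For a non-binary value range f_i: k = ceil(log2 f_i) binary attributes,
    m = 2^k - f_i surplus value combinations, and one auxiliary symbol for
    every set bit c_j of m (containing the first k - j auxiliary attributes)."""
    k = ceil(log2(f_i))
    m = 2 ** k - f_i
    bits = [(m >> j) & 1 for j in range(k)]      # m = sum_j c_j * 2^j
    aux = [f"auxiliary symbol with the first {k - j} attributes"
           for j, c in enumerate(bits) if c == 1]
    return k, m, aux

print(binarisation_plan(5))
# (3, 3, ['auxiliary symbol with the first 3 attributes',
#         'auxiliary symbol with the first 2 attributes'])
```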
---
### 5.10.3. Pairwise Distinguishable Auxiliary Symbols
- Condition: For different values of $j$, the sets of length-$k$ binary vectors that have the corresponding auxiliary symbol's encoding as a prefix must be disjoint.
- Result: Sum of sizes of the sets equals $m$.
---
### 5.10.4. Equivalence to Original Problem Instance
- Available attribute value combinations for symbols in $S(i)$: $2^k - m = f_i$
- Shows equivalence to the original problem instance.
---
### 5.10.5. Runtime Analysis
- Each time a non-binary attribute is eliminated:
- Number of attributes increases by no more than $\lceil \log f_i \rceil$
- Number of symbols increases by no more than $\lceil \log f_i \rceil$
- Resulting Binary Partial Coding Problem instance:
- At most $O(d \log f_{\max})$ attributes
- $n + O(d \log f_{\max})$ symbols
- Transformation is polynomial time.
---
### 5.10.6. Example
#### Partial Coding Problem

----
#### Equivalent Binary Version

Note:
- Show transformation from given Partial Coding Problem to equivalent Binary Partial Coding Problem using the tabular representations provided in the notes.
- Emphasize the introduction of auxiliary symbols and the maintenance of distinction between symbols.
---
### 5.10.7. Conclusion
- Partial Coding Problem can be reduced to Binary Partial Coding Problem in polynomial time.
- Transformation ensures the number of possible value combinations for binary attributes is the same as the original non-binary attribute's value range.
- Auxiliary symbols help maintain distinction between symbols when the value range is not a power of 2.
- Polynomial time transformation is essential for solving the Binary Partial Coding Problem and providing a solution for the original Partial Coding Problem.
---
# 6. Hardness Proofs for Single Machine Case
- The Binary Partial Coding Problem (BPCP) is proven NP-hard by reducing the Graph Coloring Problem to the Partial Coding Problem (PCP), which in turn reduces to BPCP in polynomial time (Section 5.10).
---
## 6.1. Graph Coloring Problem Definition
- Given a graph $G=(V,E)$ and a number $k$ of colors, assign each node in $V$ one of the $k$ colors such that no two adjacent nodes share the same color.
- Known to be NP-complete
---
## 6.2. Reduction from Graph Coloring Problem to Partial Coding Problem
- Steps 1-3: Create symbols and attributes based on nodes and unconnected pairs.
- Feasible k-coloring can be transformed into an encoding.
----
Step 1: Introduce Symbols
- Let $V = \{v_1, \dots, v_n\}$
- For each node $v_j$ in $V$, introduce a symbol $s_j$
----
Step 2: Define Attributes for Unconnected Pairs
- For every pair of nodes $v_i, v_j$ in $V$ that are $\textit{not}$ connected by an edge:
- Define a binary attribute $a_{ij}$
- Add $a_{ij}$ only to the two symbols $s_i$ and $s_j$
----
Step 3: Add Attribute $a^*$ to all Symbols
- Introduce an attribute $a^*$ with a value range $k$
- Add $a^*$ to all $n$ symbols
----
Transformation: Feasible $k$-coloring to Encoding
- Assign the $a^*$ value corresponding to the color of node $v_i$ to symbol $s_i$
- Distinguish symbols $s_i$ and $s_j$ by assigning distinct values of $a_{ij}$ for unconnected pairs
- Symbol pairs with connected nodes are distinguished by attribute $a^*$ due to the feasibility of the coloring
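A minimal sketch of the construction in Steps 1-3 (the representation and identifier names are my own choices):

```python
def graph_coloring_to_pcp(n_nodes, edges, k):
    """Build the Partial Coding instance of Section 6.2: one symbol per node,
    the shared attribute a* with value range k, and a binary attribute a_ij
    for every unconnected pair of nodes."""
    attributes = {"a*": k}                    # attribute name -> value range
    symbols = {v: {"a*"} for v in range(n_nodes)}
    edge_set = {frozenset(e) for e in edges}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if frozenset((i, j)) not in edge_set:     # unconnected pair
                name = f"a_{i}{j}"
                attributes[name] = 2
                symbols[i].add(name)
                symbols[j].add(name)
    return attributes, symbols

# Path on three nodes (edges 0-1 and 1-2); only the pair (0, 2) is unconnected.
attrs, syms = graph_coloring_to_pcp(3, [(0, 1), (1, 2)], k=3)
print(attrs)   # {'a*': 3, 'a_02': 2}
print(syms)    # e.g. {0: {'a*', 'a_02'}, 1: {'a*'}, 2: {'a*', 'a_02'}}
```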
---
### 6.2.1. Partial Coding Problem is NP-complete
- BPCP and the one-machine Windows Scheduling Problem with unit-length jobs and exact periods are also NP-complete.
- Shown via the reduction from 3-colouring
---
## 6.3. Pseudo-polynomial Algorithms for One-Machine Windows Scheduling
- Theorem: The $3$-Graph Coloring Problem can be reduced to BPCP in expected polynomial time.
- Corollary: Windows Scheduling Problem does not admit a pseudo-polynomial time algorithm unless SAT can be solved in expected time $n^{O(\log n \log \log n)}$.
---
### 6.3.1. Proof of Theorem
- Use an edge clique cover of the complement graph instead of one binary attribute per unconnected node pair.
- Construct the Partial Coding instance and transform it into a Binary Partial Coding instance with $O(\log^2 n)$ attributes.
---
## 6.4. Hardness Result for One-Machine Windows Scheduling Problem with Unit-Length Jobs and Inexact Periods
- The density must be at most one for a feasible schedule.
- For instances with density $1$, periods can be assumed to be exact.
- Weak NP-hardness result for exact periods extends to inexact periods case.
---
### 6.4.1 Step-by-Step Explanation of Proof
- Steps 1-7: Augment instances by adding extra jobs, create problem instance with density 1, and show that a pseudo-polynomial algorithm would decide the NP-hard Graph Coloring problem.
----
Step 1: Necessary condition for feasibility
- The density of the instance must be at most one for unit-length jobs: $\sum_{j=1}^{n} \frac{1}{p_j} \le 1$.
- If density is exactly one, the time interval between any two consecutive executions of job $j$ is exactly $p_j$.
----
Step 2: Augment instances with exact periods
- Add extra jobs such that the density becomes one.
- Period of additional jobs is the least common multiple (LCM) of all original jobs' periods.
----
Step 3: Compact representation of inexact periods
- Number of additional jobs is exponential, but can be represented compactly in polynomial space.
- Weak NP-hardness result for the exact periods case extends to the compact representation of the inexact periods case.
----
Step 4: Reduction from 3-Graph Coloring to Windows Scheduling
- A 3-Graph Coloring instance with constant degree and $n$ nodes can be reduced to a Windows Scheduling problem instance with a maximum period of $n^{O(\log n \log \log n)}$.
- LCM of all periods is $q = n^{O(\log n \log \log n)}$.
----
Step 5: Periodic schedule for the obtained scheduling instance
- Every schedule for the obtained scheduling instance is periodic with period $q$.
- For each job $j$, there are $\frac{q}{p_j}$ time slots within the interval $[0,q-1]$ where it is executed.
- Number of idle time slots within $[0,q-1]$ is $$r := q - \sum_j \frac{q}{p_j}$$
----
Step 6: Creating a problem instance with density one
- Add $r$ additional jobs with period $q$.
- The new problem instance has a density of one and admits a feasible schedule if and only if the original problem instance does.
----
Step 7: Implications for pseudo-polynomial algorithms
- The resulting Windows Scheduling instance with inexact periods has $n^{O(\log n \log \log n)}$ jobs and a maximum period of $n^{O(\log n \log \log n)}$.
- A pseudo-polynomial algorithm for the Windows Scheduling Problem with inexact periods could solve this problem in time $n^{O(\log n \log \log n)}$, which would decide the NP-hard Graph Coloring problem that the reduction started from.
---
## 6.5. Conclusion
- Single-machine Windows Scheduling Problem with unit-length jobs and inexact periods does not admit a pseudo-polynomial time algorithm unless SAT can be solved in expected time $n^{O(\log n \log \log n)}$.
---
# 7. Hardness Results in Multi-machine Windows Scheduling
- Generalization of the single-machine case
- Two variants of the problem with multiple machines
1. Job migration disallowed
2. Job migration permitted
---
## 7.1. Variant 1 - Job Migration Disallowed
- Not harder than the single-machine case
- Incorrect proof of NP-hardness in prior work
- Correct NP-hardness proof: generalization of single-machine problem
---
### 7.1.1. Theorem - Multi to Single
- Multi-machine Windows Scheduling with unit-size jobs, exact periods, and no machine migration can be reduced to the single-machine variant in polynomial time.
---
## 7.2. Variant 2 - Job Migration Allowed
- Computationally harder than the single-machine variant
- Feasible schedule: at most $m$ jobs executed in any time slot
- Apply Generalized Chinese Remainder Theorem for infeasible schedule characterization
---
### 7.2.1. Lemma - Infeasible Schedules
- Given an instance of Windows Scheduling with $m$ machines, $n$ unit-size jobs, exact periods $p_1,\ldots,p_n$, and machine migration allowed
- Schedule is infeasible if and only if there exists a set $K$ of $m+1$ jobs such that each pair of jobs from $K$ collides in that schedule
---
### 7.2.2. Theorem - co-NP-hardness
- Windows Scheduling with $m$ machines, exact periods, and machine migration is co-NP-hard even for unit-length jobs
- Introduce generalization of the Partial Coding Problem
---
#### 7.2.2.1 $k$-ary Partial Coding Problem
- Given attributes with value ranges $(f_1,\ldots,f_d)$ and symbols $s_1,\ldots,s_n \subseteq \{1,\ldots,d\}$
- Determine whether there is an encoding of symbols such that any subset of more than $k$ symbols contains two elements distinguishable by the values of their common attributes
----
Formally, the k-ary Partial Coding Problem can be defined as follows:
**Input:**
- A set $A = \{A_1, A_2, \ldots, A_d\}$ of $d$ attributes, where each attribute $A_i$ has a value range $f_i$.
- A set $S = \{s_1, s_2, \ldots, s_n\}$ of $n$ symbols, where each symbol $s_j \subseteq \{1, \ldots, d\}$.
----
**Question:**
Is there an encoding that assigns to each symbol $s_j$ a value $c_j(i) \in \{0, \ldots, f_i-1\}$ for every attribute $i \in s_j$, such that for any subset $K \subseteq S$ with $|K| > k$ there exist two distinct symbols in $K$ that are distinguishable by the values of their common attributes?
---
#### 7.2.2.2 Proof Sketch - co-NP-hardness
- Show co-NP-hardness of the $k$-ary Partial Coding Problem by reduction from the $(k+1)$-Independent Set Problem
- Construct Partial Coding instance with a symbol for each node and an attribute for each edge in the graph
- Independent set of size $k+1$ exists if and only if there is a set of $k+1$ symbols that pairwise cannot be distinguished in any solution of the Partial Coding instance
- Co-NP-hardness of $k$-ary Partial Coding implies co-NP-hardness of $m$-machine Windows Scheduling with machine migration
---
### 7.2.3 Complexity of Variant 2 - Job Migration Allowed
* NP-hard
  * Shown earlier via the reductions from Partial Coding and 3-colouring
* Also co-NP-hard
  * Shown in the previous section
* The problem is in neither NP nor co-NP unless these two complexity classes are equal
---
# 8. Summary and Conclusion
---
## 8.1. Key Contribution
- New interpretation of the Windows Scheduling Problem with exact periods as Partial Coding Problem
- Polynomial-time reductions in both directions
- Both hardness proofs and algorithms for Partial Coding transfer to Windows Scheduling
---
## 8.2. Results and Implications
- Single-machine case with unit-length jobs cannot be solved in pseudo-polynomial time under standard complexity-theoretic assumptions
- Co-NP-hardness of the multi-machine case with machine migration
- Reduction of multi-machine version to single machine case
---
## 8.3. Future Directions
- Explore the potential for efficient algorithms for instances with a constant number of distinct jobs
- Investigate approximation algorithms for Windows Scheduling using the Partial Coding interpretation
---
## 8.4. Conclusion
- Established the Partial Coding Problem as the combinatorial core of the Windows Scheduling Problem
- Opened new avenues for both algorithmic results and hardness proofs for Windows Scheduling
{"contributors":"[{\"id\":\"7cfe1b4a-8dc1-4d22-83e3-6d5018def783\",\"add\":53699,\"del\":10352}]","description":"A paper on the pinwheel/windows scheduling problem","title":"A New Perspective on the Windows Scheduling Problem","showTags":"true"}