SCIPERS: 311427, 312075, 310003, 311166, 315732

# Problem 1

```scala=
def rle(data: ParSeq[Char]): Buffer[(Char, Int)] = {
  // Merge two partial encodings: if the last run of `ls` and the first run of `rs`
  // encode the same character, fuse them into a single run.
  def combop(ls: Buffer[(Char, Int)], rs: Buffer[(Char, Int)]): Buffer[(Char, Int)] = {
    if ls.isEmpty || rs.isEmpty || ls.last._1 != rs.head._1 then ls ++ rs
    else ls.init.append((rs.head._1, ls.last._2 + rs.head._2)) ++ rs.tail
  }

  // Extend the current run if `c` repeats the last character, otherwise start a new run.
  def seqop(acc: Buffer[(Char, Int)], c: Char): Buffer[(Char, Int)] = {
    if !acc.isEmpty && acc.last._1 == c then acc.init.append((c, acc.last._2 + 1))
    else acc.append((c, 1))
  }

  data.aggregate(Buffer.empty[(Char, Int)])(seqop, combop)
}
```

# Problem 2

```scala=
class DLLCombiner[A] extends Combiner[A, Array[A]] {
  var head: Node[A] = null // null for empty lists.
  var last: Node[A] = null // null for empty lists.
  var size: Int = 0

  // Implement these three methods...
  override def +=(elem: A): Unit = {
    if size == 0 then
      last = new Node(elem)
      head = last
      size = 1
    else
      last.next = new Node(elem)
      last.next.previous = last
      last = last.next
      size += 1
  }

  override def combine(that: DLLCombiner[A]): DLLCombiner[A] = {
    if size == 0 then that
    else if that.size == 0 then this
    else
      // Splice `that` after `this` in O(1) by relinking the boundary nodes.
      this.last.next = that.head
      that.head.previous = this.last
      this.last = that.last
      this.size += that.size
      this
  }

  override def result(): Array[A] =
    val array = new Array[A](size)
    var curr = head
    for (i <- 0 until size) do
      array(i) = curr.value
      curr = curr.next
    array
}

class Node[A](val value: A) {
  var next: Node[A] = null     // null for last node.
  var previous: Node[A] = null // null for first node.
}
```

## Question 1

``+=`` and ``combine`` run in $O(1)$. ``result`` is $O(n)$ and entirely sequential, which violates the ``Combiner`` contract: ``result`` is expected to exploit the available parallelism.

## Question 2

With 2 parallel tasks:

```scala=
override def result(): Array[A] =
  val mid = size / 2
  val array = new Array[A](size)
  val lhalf = task {
    // Fill indices 0 until mid, walking forward from `head`.
    var curr1 = head
    for (i <- 0 until mid) do
      array(i) = curr1.value
      curr1 = curr1.next
  }
  val rhalf = task {
    // Fill indices size-1 down to mid, walking backward from `last`.
    var curr2 = last
    for (i <- 1 until size - mid + 1) do
      array(size - i) = curr2.value
      curr2 = curr2.previous
  }
  lhalf.join()
  rhalf.join()
  array
```

## Question 3

One way to implement a 4-task parallel ``result`` is to also keep a pointer to a middle node of the ``DLLCombiner`` and then traverse each half from both of its ends, in the same manner as Question 2. Another way is to traverse the ``DLLCombiner`` *twice* from each end, taking steps of 2:

```scala=
override def result(): Array[A] =
  val mid = size / 2
  val array = new Array[A](size)
  val lhalf1 = task {
    // Even indices of the first half, walking forward from `head` in steps of 2.
    var curr = head
    for (i <- 0 until mid by 2) do
      array(i) = curr.value
      if curr.next != null then curr = curr.next.next
  }
  val lhalf2 = task {
    // Odd indices of the first half, walking forward from the second node in steps of 2.
    var curr = if head == null then null else head.next
    for (i <- 1 until mid by 2) do
      array(i) = curr.value
      if curr.next != null then curr = curr.next.next
  }
  val rhalf1 = task {
    // Indices size-1, size-3, ..., walking backward from `last` in steps of 2.
    var curr = last
    for (i <- 1 until size - mid + 1 by 2) do
      array(size - i) = curr.value
      if curr.previous != null then curr = curr.previous.previous
  }
  val rhalf2 = task {
    // Indices size-2, size-4, ..., walking backward from the second-to-last node in steps of 2.
    var curr = if last == null then null else last.previous
    for (i <- 2 until size - mid + 1 by 2) do
      array(size - i) = curr.value
      if curr.previous != null then curr = curr.previous.previous
  }
  lhalf1.join()
  lhalf2.join()
  rhalf1.join()
  rhalf2.join()
  array
```

# Problem 3

## Question 1

```scala=
def toPipeline(fs: ParSeq[A => A]): A => A = {
  // The identity function is the neutral element; `andThen` is the combining operator.
  fs.fold((x: A) => x)(_.andThen(_))
}
```

This is valid because function composition is associative, so ``fold`` may group the compositions arbitrarily without changing the order in which the functions are eventually applied; the identity function is the neutral element.

## Question 2

The returned pipeline is of type ``A => A``, so there is no guarantee that applying it does anything in parallel. To parallelize the composition of functions we would have to implement a parallel version of ``andThen``.
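For illustration only (not part of the exam answer): a sketch of what parallelizing the composition could look like, assuming standard ``Future``s instead of the course's ``task`` construct, an ``IndexedSeq`` for cheap splitting, and a made-up name ``toPipelinePar``. It reduces ``fs`` as a balanced tree, so independent sub-compositions can run concurrently; associativity guarantees the same pipeline as the left-to-right fold.

```scala=
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

// Hypothetical sketch (not the course API): compose `fs` as a balanced tree so that
// independent sub-compositions can be combined concurrently on the execution context.
def toPipelinePar[A](fs: IndexedSeq[A => A]): A => A =
  def compose(gs: IndexedSeq[A => A]): Future[A => A] =
    if gs.isEmpty then Future.successful((x: A) => x)   // neutral element: identity
    else if gs.size == 1 then Future.successful(gs.head)
    else
      val (left, right) = gs.splitAt(gs.size / 2)
      // The two halves are composed independently; `zipWith` joins them with `andThen`.
      compose(left).zipWith(compose(right))(_ andThen _)
  Await.result(compose(fs), Duration.Inf)
```

For ordinary closures this buys nothing, since ``andThen`` only allocates a closure in $O(1)$; a parallel composition only pays off once ``andThen`` itself does real work, which is exactly the situation of Question 3.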
## Question 3

```scala=
class FiniteFun[A](mappings: immutable.Map[A, A], default: A) {
  def apply(x: A): A =
    mappings.get(x) match {
      case Some(y) => y
      case None => default
    }

  def andThen(that: FiniteFun[A]): FiniteFun[A] =
    // Precompute `that` on every stored value; run the default through `that` as well,
    // so that the composed function agrees with x => that(this(x)) on unmapped inputs too.
    new FiniteFun[A](this.mappings.map((x, y) => (x, that.apply(y))), that.apply(this.default))
}
```

Yes, because many results can already be computed eagerly. This is made possible by the fact that the function is fully described by a finite set of mappings plus a default value: composing amounts to applying ``that`` to every value while mapping over ``this.mappings``.

## Question 4

The depth of the sequential version is equal to its work (stated below), since each application depends on the result of the previous one. The parallel version's depth is $O(\max_i |\texttt{fs(i).mappings}|)$, i.e. the size of the largest ``mappings`` in ``fs``.

The work of the sequential version is the sum of the costs of applying each function in ``fs``, i.e. $\Theta$(``fs.size``) calls to ``Map.get``. The work of the parallel version is larger: $\Theta$(``fs.size`` $\times$ ``fs(0).mappings.size``), since we precompute the result of the consecutive applications of the functions in ``fs``, mapping each time over a key set whose size is that of the first function's ``mappings``.

The parallel version is therefore faster when the functions' definition sets are sorted by increasing size, or at least when the first function's key set is considerably smaller than those of most other functions in ``fs``.
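As a small illustration of this trade-off (a usage sketch with made-up values, not part of the exam answer), composing ``FiniteFun``s pays the precomputation cost once; afterwards every application of the resulting pipeline is a single ``Map`` lookup, no matter how many functions were composed:

```scala=
// Hypothetical example values; `FiniteFun` is the class from Question 3.
val inc    = new FiniteFun[Int](Map(1 -> 2, 2 -> 3), 0)
val double = new FiniteFun[Int](Map(0 -> 0, 2 -> 4, 3 -> 6), -1)

// `andThen` performs |inc.mappings| lookups up front to build the composed table.
val pipeline = inc.andThen(double)

// Applying the pipeline is now a single Map.get, whatever the pipeline length.
assert(pipeline(1) == 4)  // double(inc(1)) = double(2) = 4
assert(pipeline(7) == 0)  // 7 is unmapped: double(inc(7)) = double(0) = 0
```

The second assertion holds because ``andThen`` above runs ``this``'s default through ``that``, which keeps ``f.andThen(g)(x)`` equal to ``g(f(x))`` for unmapped inputs as well.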