# Gunrock: GPU Graph Analytics
PPoPP'16
###### tags: `GPUs`
[paper](https://dl.acm.org/doi/pdf/10.1145/3108140)
[slide2020](https://archive.fosdem.org/2020/schedule/event/graph_gunrock/attachments/slides/3674/export/events/attachments/graph_gunrock/slides/3674/Gunrock_FOSDEM.pdf)
[slide2015](https://images.nvidia.com/events/sc15/pdfs/SC5139-gunrock-multi-gpu-processing-library.pdf)
## 1. Introduction
* our data-centric model’s key abstraction is the <font color="#f00">frontier</font>, a subset of the edges or vertices within the graph that is currently of interest.
* All Gunrock operations are <font color="#f00">bulk-synchronous</font> and manipulate this frontier, either by computing on values within it or by computing a new frontier from it.
* Gunrock targets graph primitives that are <font color="#f00">iterative, convergent processes</font>
* benchmark
* breadth-first search (BFS)
* single-source shortest path (SSSP)
* betweenness centrality (BC)
* PageRank
* connected components (CC)
* triangle counting (TC)
* goal
* performance
* programmability
* role
    * Gunrock: the "how" (operator implementation, load balancing, optimizations)
    * programmer: the "what" (the per-vertex/per-edge computation)
* challenge
* managing <font color="#f00">irregularity</font> in work distribution
* method
* load-balancing
* work-efficiency strategies
* contribution
* data-centric abstraction
* incorporates
* kernel fusion
* push-pull traversal
* idempotent traversal
* priority queues
* simple and flexible APIs
* GPU-specific optimization strategies
* memory efficiency
* load balancing
* workload management
<!--
# 2. Related work
## Single-Node CPU-Based Systems
* Boost Graph Library (BGL)
* Stanford Network Analysis Platform (SNAP)
## Distributed CPU-Based Systems
* MapReduce is poor for highly irregular workloads
* framework
* Pregel
* Google
* Bulk-Synchronous Parallel (BSP) model
* vertex-centric
* iterative convergent process consisting of global synchronization barriers called super-steps
* message passing
* good
* scalability and fault tolerance
* bad
* slow convergence on large-diameter graphs
* load imbalance on scale-free graphs
* GraphLab
* Yahoo, Hadoop ecosystem
* shared memory abstraction
* gather-apply-scatter (GAS) programming model
* allow
* asynchronous computation
* dynamic asynchronous scheduling
* more consistently expressive by eliminating ***message-passing***
* PowerGraph
* power-law graphs
* support
* BSP
* asynchronous execution
* vertex-cut to split high-degree vertices into equal degree-sized redundant vertices
* GraphChi
* centralized system
* secondary storage
* a graph partitioning method called Parallel Sliding Windows (PSW)
* Others
* shared-memory-based systems
* Ligra
* similar operator abstraction
* Galois
* priority scheduling
* dynamic graphs
* processes on subsets of vertices called active elements
* domain-specific languages
* Green-Marl
* GraphX
* distributed graph computation framework
* graph-parallel
* data-parallel
* Help
* library
* large-scale graph
## Specialized Parallel Graph Algorithms
* BFS
* CC
* BC
* SSSP
* TC
## High-Level GPU Programming Models
* programming model
* BSP/Pregel’s message-passing framework
* GAS model
* Medusa (Zhong and He)
* still vertex-centric
* Totem
* GPU-CPU hybrid systems
* solve the long-tail problem on GPUs
* API only allows direct neighbor access
* Frog
* GAS
* PowerGraph’s vertex-cut
* CPU
* splits large neighbor lists
* duplicates node information
* deploys each partial neighbor list to different machines
* load balancing
* replaces the large synchronization cost of edge-cut with a single-node synchronization cost
* benefits
* simplicity
* familiarity
* VertexAPI2
* GPU
* gatherApply:
* gather step and apply step
* scatterActivate:
* scatter step
* MapGraph
* CuSha
* nvGRAPH
* nvidia
* Gunrock
* the only high-level GPU-based graph analytics system
* vertex-centric and edge-centric operations
-->
# 3. DATA-CENTRIC ABSTRACTION
* most graph analytics tasks can be expressed as <font color="#f00">iterative convergent processes</font>

* focus
    * other frameworks: sequencing steps of computation
    * Gunrock: manipulating a data structure (the <font color="#f00">frontier</font> of vertices or edges)
* frontier support
    * Gunrock: both <font color="#f00">node</font> and <font color="#f00">edge</font> frontiers
    * others: <font color="#f00">vertices</font> only
        * gather-apply-scatter (PowerGraph)
        * message-passing (Pregel)
* bulk-synchronous "steps"
    * between steps: sequential
    * within a step: parallel
* graph primitives

* advance
* an <font color="#f00">irregularly-parallel</font> operation
* different vertices in a graph have different numbers of neighbors
* vertices share neighbors
* 4 kinds of advance
* V-to-V, V-to-E, E-to-V, E-to-E
* utilized for
* visit each element in the current frontier while updating <font color="#f00">local values</font> and/or accumulating <font color="#f00">global values</font>
* BFS distance updates
* visit the node or edge neighbors of all the elements in the current frontier while updating <font color="#f00">source vertex</font>, <font color="#f00">destination vertex</font>, and/or <font color="#f00">edge values</font>
* distance updates in SSSP
* generate <font color="#f00">edge frontiers</font> from vertex frontiers or vice versa
* BFS, SSSP, SALSA
* pull values from all vertices 2 hops away by starting from an edge frontier, visiting all the neighbor edges, and returning the far-end vertices of these neighbor edges
* filter
* A filter operator generates a new frontier from the current frontier by choosing a <font color="#f00">subset</font> of the current frontier based on programmer-specified criteria. Each input item maps to <font color="#f00">zero</font> or <font color="#f00">one</font> output item
* Though filtering is an <font color="#f00">irregular operation</font>, using parallel scan for efficient filtering is well-understood on GPUs
* used for
1. <font color="#f00">split</font> vertices or edges based on a filter
* SSSP’s 2-bucket delta-stepping
2. <font color="#f00">compact</font> out filtered items to throw them away
* duplicated vertices in BFS or edges where both end nodes belong to the same component in CC
* segmented intersection
* takes two input node frontiers with the same length, or an input edge frontier, and generates both <font color="#f00">the number of total intersections</font> and <font color="#f00">the intersected node IDs</font> as the new frontier
* key operator in TC
* compute
* defines an operation on <font color="#f00">all elements</font> (vertices or edges) in its input frontier
* A programmer-specified compute operator can be used together with all three traversal operators
* no order
* potential data races (handled with atomic ops)
* regular
* SSSP

* computes the shortest path from a single node in a graph to every other node in the graph
* iteration description
* input frontier of active vertices (or a single vertex) <font color="#f00">initialized</font> to a distance of zero
* enumerates the sizes of the frontier’s neighbor lists of edges and computes the <font color="#f00">length</font> of the output frontier
* <font color="#f00">redistributes</font> the workload across parallel threads
* each edge adds its weight to the distance value at its source vertex and, if appropriate, updates the distance value of its destination vertex
* removes <font color="#f00">redundant</font> vertex IDs
* three Gunrock operators
* advance
* computes the list of edges connected to the current vertex frontier and (transparently) <font color="#f00">load-balances</font> their execution
* compute
* update neighboring vertices with new distances
* filter
* generate the final output frontier by removing <font color="#f00">redundant nodes</font>, optionally using a two-level priority queue, whose use enables delta-stepping
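
A minimal serial C++ sketch of how one SSSP iteration composes these operators: `advance_relax` plays the role of advance with the compute functor fused in, and `filter_unique` removes redundant vertex IDs. The CSR layout, helper names, and the simple dedup logic are illustrative assumptions, not Gunrock's API; on the GPU the distance update would use an atomic min and the loops run in parallel.

```cpp
// Illustrative serial sketch of one SSSP iteration as advance + compute + filter.
#include <cstdio>
#include <limits>
#include <vector>

struct CsrGraph {
    std::vector<int>   row_offsets;   // size |V|+1
    std::vector<int>   col_indices;   // size |E|
    std::vector<float> edge_weights;  // size |E|
};

// advance + fused compute: visit every outgoing edge of the frontier and relax it.
// Returns the tentative output frontier (may contain duplicates).
std::vector<int> advance_relax(const CsrGraph& g, const std::vector<int>& frontier,
                               std::vector<float>& dist) {
    std::vector<int> out;
    for (int u : frontier) {
        for (int e = g.row_offsets[u]; e < g.row_offsets[u + 1]; ++e) {
            int v = g.col_indices[e];
            float new_dist = dist[u] + g.edge_weights[e];   // compute functor
            if (new_dist < dist[v]) {                       // atomicMin on the GPU
                dist[v] = new_dist;
                out.push_back(v);
            }
        }
    }
    return out;
}

// filter: drop duplicate vertex IDs from the tentative frontier.
std::vector<int> filter_unique(const std::vector<int>& frontier, std::vector<char>& marked) {
    std::vector<int> out;
    for (int v : frontier)
        if (!marked[v]) { marked[v] = 1; out.push_back(v); }
    for (int v : out) marked[v] = 0;   // reset marks for the next iteration
    return out;
}

int main() {
    // Tiny example graph: 0 -> 1 (w=2), 0 -> 2 (w=5), 1 -> 2 (w=1)
    CsrGraph g{{0, 2, 3, 3}, {1, 2, 2}, {2.0f, 5.0f, 1.0f}};
    std::vector<float> dist(3, std::numeric_limits<float>::infinity());
    std::vector<char> marked(3, 0);
    dist[0] = 0.0f;
    std::vector<int> frontier{0};
    while (!frontier.empty())                       // iterate until convergence
        frontier = filter_unique(advance_relax(g, frontier, dist), marked);
    for (int v = 0; v < 3; ++v) std::printf("dist[%d] = %.1f\n", v, dist[v]);
}
```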
---
## 3.1 Alternative Abstractions
* Gather-apply-scatter (GAS) abstraction
* Message-passing
* CPU strategies
* Help’s primitives
* Asynchronous execution
---
* two parts
* Above the traversal-compute abstraction is the <font color="#f00">application module</font>
* Under the abstraction are the utility functions, the implementation of operators used in traversal, and various <font color="#f00">optimization strategies</font>
* three components

Fig. 4. Gunrock’s Graph Operator and Functor APIs. The Operator APIs divide the whole workload into load-balanced per-edge or per-node operations and fuse the kernel with a functor that defines one such operation.
* problem
* provides graph topology data and an algorithm-specific data management interface
* functors
* contain user-defined computation code and expose <font color="#f00">kernel fusion</font> opportunities that we discuss below
* enactor
* serves as the entry point of the graph algorithm
* specifies the computation as a series of graph operator kernel calls with user-defined kernel launching settings
* implementation
* a sequence of bulk-synchronous steps, specified within the <font color="#f00">enactor</font> and implemented as kernels, that operate on frontiers
* an enactor-only program would sacrifice a significant performance opportunity
* leveraging <font color="#f00">producer-consumer</font> locality between operations by integrating multiple operations into single GPU kernels
* <font color="#f00">kernel fusion</font> is absent from other programmable GPU graph libraries
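
A rough C++ sketch of the problem/functor/enactor split and of how templating a traversal operator on a functor enables kernel-fusion-style inlining of the per-edge computation. All class and member names (`BfsProblem`, `BfsFunctor::apply_edge`, `advance`, `bfs_enact`) are hypothetical stand-ins, not Gunrock's real API.

```cpp
#include <cstdio>
#include <vector>

// "Problem": holds graph topology plus algorithm-specific per-vertex data.
struct BfsProblem {
    std::vector<int> row_offsets, col_indices;  // CSR topology
    std::vector<int> labels;                    // BFS depth per vertex
};

// "Functor": user-defined per-edge computation; a templated operator can inline
// (fuse) this into its traversal kernel at compile time.
struct BfsFunctor {
    static bool apply_edge(BfsProblem& p, int src, int dst, int depth) {
        if (p.labels[dst] >= 0) return false;   // already visited: drop from frontier
        p.labels[dst] = depth;                  // label with current depth
        return true;                            // keep dst in the output frontier
    }
};

// Generic advance step, parameterized by the functor (stand-in for a fused GPU kernel).
template <typename Functor, typename Problem>
std::vector<int> advance(Problem& p, const std::vector<int>& frontier, int depth) {
    std::vector<int> out;
    for (int u : frontier)
        for (int e = p.row_offsets[u]; e < p.row_offsets[u + 1]; ++e)
            if (Functor::apply_edge(p, u, p.col_indices[e], depth))
                out.push_back(p.col_indices[e]);
    return out;
}

// "Enactor": sequences the bulk-synchronous steps of the primitive.
void bfs_enact(BfsProblem& p, int source) {
    p.labels.assign(p.row_offsets.size() - 1, -1);
    p.labels[source] = 0;
    std::vector<int> frontier{source};
    for (int depth = 1; !frontier.empty(); ++depth)
        frontier = advance<BfsFunctor>(p, frontier, depth);
}

int main() {
    BfsProblem p{{0, 2, 3, 3}, {1, 2, 2}, {}};   // 0->1, 0->2, 1->2
    bfs_enact(p, 0);
    for (size_t v = 0; v < p.labels.size(); ++v)
        std::printf("label[%zu] = %d\n", v, p.labels[v]);
}
```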
* the data-centric programming model, in summary:
* allows more <font color="#f00">flexibility</font> on the operations
* vertex-centric
* edge-centric
* decouples a <font color="#f00">compute operator</font> from <font color="#f00">traversal operators</font>
* allows an implementation that both leverages state-of-the-art data-parallel primitives and enables various types of <font color="#f00">optimizations</font>
* code is <font color="#f00">smaller</font> in size and <font color="#f00">clearer</font>; the <font color="#f00">Problem class</font> and <font color="#f00">kernel enactor</font> are both template-based C++ code
* eases the job of extending Gunrock’s single-GPU execution model to multiple GPUs
# 4. EFFICIENT GRAPH OPERATOR DESIGN

---
## 4.1 Advance

* <font color="#f00">vectorized device memory allocation</font> and <font color="#f00">copy</font>
* parallel threads place <font color="#f00">dynamic data</font> (neighbor lists with various lengths) within <font color="#f00">shared data structures</font> (output frontier)
1. allocation part
* given a list of allocation requirements for each input item (neighbor list size array computed from row offsets), we need the scatter offsets to write the output frontier
* implementation
* prefix-sum [reference](https://www.itread01.com/content/1548349563.html)
2. copy part
* for the copy part, we need to load-balance parallel scatter writes with various lengths over a single launch (see the sketch at the end of this subsection)
* implementation
* load balancing
* coarse-grained
* fine-grained
* traversal direction
* 5.1
* push-based advance
* pull-based advance
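
A serial sketch of the two advance phases described above: an exclusive prefix-sum over neighbor-list sizes produces each input item's write offset (allocation), and a scatter copies neighbor IDs to those offsets (copy). On the GPU both phases run in parallel and the copy phase is load-balanced across threads; the variable names here are assumptions.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // CSR topology: 0 -> {1,2}, 1 -> {2}, 2 -> {}
    std::vector<int> row_offsets{0, 2, 3, 3};
    std::vector<int> col_indices{1, 2, 2};
    std::vector<int> frontier{0, 1, 2};

    // Phase 1: allocation. Exclusive prefix-sum of per-vertex neighbor counts
    // yields each input item's write offset in the output frontier.
    std::vector<int> write_offsets(frontier.size() + 1, 0);
    for (size_t i = 0; i < frontier.size(); ++i) {
        int v = frontier[i];
        write_offsets[i + 1] = write_offsets[i] + (row_offsets[v + 1] - row_offsets[v]);
    }
    std::vector<int> out_frontier(write_offsets.back());

    // Phase 2: copy. Each input vertex scatters its neighbor list to its slot;
    // the GPU version load-balances these variable-length writes across threads.
    for (size_t i = 0; i < frontier.size(); ++i) {
        int v = frontier[i];
        for (int e = row_offsets[v], j = 0; e < row_offsets[v + 1]; ++e, ++j)
            out_frontier[write_offsets[i] + j] = col_indices[e];
    }

    for (int v : out_frontier) std::printf("%d ", v);   // prints: 1 2 2
    std::printf("\n");
}
```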
## 4.2 Filter

Fig. 8. Filter is based on compact, which uses either a global scan and scatter (for exact filtering) or a local scan and scatter after heuristics (for inexact filtering).
* stream compaction operator
* transforms a <font color="#f00">sparse</font> representation of an array (input frontier) to a <font color="#f00">compact</font> one (output frontier)
* implementation (see the sketch at the end of this subsection)
* Merrill et al.’s filtering implementation
* prefix-sum
* local prefix-sums with various culling heuristics
* byproduct
* uniquification feature
* 5.2
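
A serial sketch of scan-based stream compaction, the building block of exact filtering: flag each item, exclusive-scan the flags to get write positions, then scatter the kept items. The `keep` predicate stands in for the programmer-specified filter criterion; all three steps parallelize well on GPUs.

```cpp
#include <cstdio>
#include <vector>

// Compact `input`, keeping items for which `keep` is true.
std::vector<int> filter_compact(const std::vector<int>& input,
                                const std::vector<char>& keep) {
    // Exclusive scan of the validity flags yields each kept item's output index.
    std::vector<int> positions(input.size() + 1, 0);
    for (size_t i = 0; i < input.size(); ++i)
        positions[i + 1] = positions[i] + (keep[i] ? 1 : 0);

    // Scatter: every kept item writes itself to its scanned position.
    std::vector<int> output(positions.back());
    for (size_t i = 0; i < input.size(); ++i)
        if (keep[i]) output[positions[i]] = input[i];
    return output;
}

int main() {
    std::vector<int> frontier{4, 7, 7, 2, 9};
    std::vector<char> keep{1, 1, 0, 1, 0};        // e.g., drop duplicates / visited items
    for (int v : filter_compact(frontier, keep)) std::printf("%d ", v);  // 4 7 2
    std::printf("\n");
}
```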
## 4.3 Segmented Intersection

Fig. 9. Segmented intersection implementation that uses prefix-sum, compact, merge-based intersection, and reduction.
1. takes two input frontiers
2. computes the intersection of two neighbor lists
3. outputs the <font color="#f00">intersected items</font>
* for <font color="#f00">intersection computation</font> on two large frontiers, a modified <font color="#f00">merge-path algorithm</font> would achieve high performance because of its load balance
* for <font color="#f00">segmented intersection</font>, the workload per input item pair depends on the size of each item’s neighbor list
1. <font color="#f00">prefix-sum</font> for pre-allocation
2. a series of <font color="#f00">load-balanced</font> intersections according to a heuristic based on the sizes of the neighbor list pairs
3. a stream compaction to generate the output frontier, and a segmented reduction as well as a global reduction to compute segmented intersection counts and the global intersection count
---
* High-performance segmented intersection requires a similar focus to high-performance graph traversal
* divide the neighbor list pairs into two groups
    1. two small neighbor lists
    2. one small and one large neighbor list
* two kernels
* Two-Small
* one thread to compute the intersection of a node pair
* Small-Large
* starts a binary search for each node in the small neighbor list on the large neighbor list
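
A serial sketch of the two per-pair strategies: a merge-style scan for two small sorted neighbor lists, and a binary search of each small-list element in the large list. The dispatch threshold and function names are illustrative assumptions; Gunrock launches these as two separate GPU kernels.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Two-Small: linear merge-style intersection of two sorted lists.
int intersect_two_small(const std::vector<int>& a, const std::vector<int>& b) {
    int count = 0;
    size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i] < b[j]) ++i;
        else if (a[i] > b[j]) ++j;
        else { ++count; ++i; ++j; }
    }
    return count;
}

// Small-Large: binary search each element of the small list in the large list.
// On the GPU, one thread (or a small group) handles each searched element.
int intersect_small_large(const std::vector<int>& small, const std::vector<int>& large) {
    int count = 0;
    for (int v : small)
        if (std::binary_search(large.begin(), large.end(), v)) ++count;
    return count;
}

int main() {
    std::vector<int> nbrs_u{1, 3, 5, 8};
    std::vector<int> nbrs_v{2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20};
    // Heuristic dispatch: use binary search when one list is much larger.
    bool small_large = nbrs_v.size() > 2 * nbrs_u.size();
    int c = small_large ? intersect_small_large(nbrs_u, nbrs_v)
                        : intersect_two_small(nbrs_u, nbrs_v);
    std::printf("common neighbors: %d\n", c);   // 3 (vertices 3, 5, 8)
}
```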
# 5. SYSTEM IMPLEMENTATION AND OPTIMIZATIONS
* components in achieving high performance
* right <font color="#f00">abstraction</font>
* <font color="#f00">optimized</font> implementations of the primitives within the framework
## 5.1 Graph traversal throughput
* major contributions
1. generalizing different types of workload distribution
2. load-balancing strategies
* advance operators
* Gunrock’s advance step generates an irregular workload


* methods
* Static Workload Mapping Strategy
* cooperative process
* load all the neighbor list offsets into <font color="#f00">shared memory</font>, then use a block of threads to cooperatively process per-edge operations on the neighbor list
* loop strip mining
* split the neighbor list of a node so multiple threads within the same SIMD lane can achieve better utilization
* Dynamic Grouping Workload Mapping Strategy
* Merge-based Load-Balanced Partitioning Workload Mapping Strategy
* Push vs. Pull Traversal (sketched at the end of this subsection)
* Two-Level Priority Queue
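
A serial sketch of push- vs. pull-based traversal (direction-optimizing BFS): push expands from the frontier's out-edges, while pull lets each unvisited vertex check whether any neighbor sits in the previous level. The switching heuristic, the symmetric-graph assumption, and all helper names are illustrative, not Gunrock's exact policy.

```cpp
#include <cstdio>
#include <vector>

struct Csr { std::vector<int> row_offsets, col_indices; };  // stored symmetrically here

// Push: each frontier vertex visits its unvisited neighbors.
std::vector<int> push_step(const Csr& g, const std::vector<int>& frontier,
                           std::vector<int>& depth, int d) {
    std::vector<int> next;
    for (int u : frontier)
        for (int e = g.row_offsets[u]; e < g.row_offsets[u + 1]; ++e) {
            int v = g.col_indices[e];
            if (depth[v] < 0) { depth[v] = d; next.push_back(v); }
        }
    return next;
}

// Pull: each unvisited vertex scans its neighbors for one in the previous level.
std::vector<int> pull_step(const Csr& g, std::vector<int>& depth, int d) {
    std::vector<int> next;
    int n = (int)g.row_offsets.size() - 1;
    for (int v = 0; v < n; ++v) {
        if (depth[v] >= 0) continue;
        for (int e = g.row_offsets[v]; e < g.row_offsets[v + 1]; ++e)
            if (depth[g.col_indices[e]] == d - 1) { depth[v] = d; next.push_back(v); break; }
    }
    return next;
}

int main() {
    // Undirected triangle plus a tail: 0-1, 0-2, 1-2, 2-3.
    Csr g{{0, 2, 4, 7, 8}, {1, 2, 0, 2, 0, 1, 3, 2}};
    int n = 4;
    std::vector<int> depth(n, -1);
    depth[0] = 0;
    std::vector<int> frontier{0};
    for (int d = 1; !frontier.empty(); ++d) {
        // Heuristic: switch to pull when the frontier covers a large share of the graph.
        bool use_pull = (int)frontier.size() > n / 4;
        frontier = use_pull ? pull_step(g, depth, d) : push_step(g, frontier, depth, d);
    }
    for (int v = 0; v < n; ++v) std::printf("depth[%d] = %d\n", v, depth[v]);
}
```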
## 5.2 Synchronization throughput
* the bottlenecks of synchronization throughput
* concurrent discovery
1. visited parents
2. being-visited peers
3. concurrently discovered children.
* contributes <font color="#f00">most of the synchronization overhead</font> when there is per-node computation
* 5.1.4
* dependencies in parallel data-primitives
* computation during traversal has a <font color="#f00">reduction</font> (PageRank, BC) and/or <font color="#f00">intersection</font> (TC) step on each neighbor list
* methods
* Idempotent vs. non-idempotent operations
1. concurrent discovery will cause an advance step to generate an output frontier that has duplicated elements
2. it will cause any computation on the common neighbors to run multiple times
* Idempotent
* the <font color="#f00">advance step</font> avoids (costly) atomic operations, repeats the computation multiple times, and outputs all redundant items to the output frontier
* the <font color="#f00">filter step</font> incorporates a series of inexpensive heuristics to reduce, but not eliminate, redundant entries in the output frontier
* Atomic Avoidance Reduction Operations
1. reduce the atomic operations by <font color="#f00">hierarchical reduction</font> and the efficient use of <font color="#f00">shared memory</font> on the GPU or
2. assign several neighboring edges to <font color="#f00">one thread</font> in our dynamic grouping strategy so that partial results within one thread can be accumulated <font color="#f00">without atomic operations</font>
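
A loose serial sketch of the second atomic-avoidance idea: when several neighboring edges are assigned to one thread, that thread accumulates a private partial sum and commits it with a single atomic, instead of one atomic per edge. The thread/chunk bookkeeping is simulated here with a plain loop and `std::atomic`; the GPU version also uses shared memory and warp-level reductions.

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // Edge contributions all destined for a single vertex's accumulator.
    std::vector<float> edge_values{0.5f, 1.5f, 2.0f, 0.25f, 0.75f, 1.0f, 3.0f};
    std::atomic<float> vertex_accum{0.0f};

    const std::size_t edges_per_thread = 3;  // "dynamic grouping": several edges per thread
    for (std::size_t start = 0; start < edge_values.size(); start += edges_per_thread) {
        // Work of one simulated thread: private accumulation, no atomics needed.
        float partial = 0.0f;
        std::size_t end = std::min(start + edges_per_thread, edge_values.size());
        for (std::size_t e = start; e < end; ++e) partial += edge_values[e];

        // One atomic commit per thread instead of one per edge
        // (CAS loop, since fetch_add on atomic<float> requires C++20).
        float expected = vertex_accum.load();
        while (!vertex_accum.compare_exchange_weak(expected, expected + partial)) {}
    }
    std::printf("accumulated = %.2f\n", vertex_accum.load());  // 9.00
}
```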
## 5.3 Kernel launch throughput
* Fuse computation with graph operator
* fuse regular computation steps together with more irregular steps like <font color="#f00">advance</font> and <font color="#f00">filter</font> by running a computation step (with regular parallelism) on the input or output of the irregularly-parallel step, all within the same kernel.
* Fuse filter step with traversal operators
* Several traversal-based graph primitives have a <font color="#f00">filter</font> step immediately following an <font color="#f00">advance</font> or <font color="#f00">neighborhood-reduction</font> step.
* Gunrock implements a fused single-kernel traversal operator that launches both <font color="#f00">advance</font> and <font color="#f00">filter</font> steps.
* Such a fused kernel reduces the data movement between double-buffered input and output frontiers
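
A serial sketch of why fusing filter into advance helps: the fused version applies the filter predicate as each neighbor is produced, so no intermediate frontier is written to memory and no extra kernel launch is needed. The visited-flag predicate and data layout are illustrative assumptions.

```cpp
#include <cstdio>
#include <vector>

struct Csr { std::vector<int> row_offsets, col_indices; };

// Fused advance+filter: one pass, no intermediate frontier written to memory.
std::vector<int> advance_filter_fused(const Csr& g, const std::vector<int>& frontier,
                                      std::vector<char>& visited) {
    std::vector<int> out;
    for (int u : frontier)
        for (int e = g.row_offsets[u]; e < g.row_offsets[u + 1]; ++e) {
            int v = g.col_indices[e];
            if (!visited[v]) {          // filter predicate applied inline
                visited[v] = 1;
                out.push_back(v);
            }
        }
    return out;
}

int main() {
    Csr g{{0, 2, 3, 3}, {1, 2, 2}};     // 0->1, 0->2, 1->2
    std::vector<char> visited(3, 0);
    visited[0] = 1;
    std::vector<int> frontier{0};
    for (int iter = 1; !frontier.empty(); ++iter) {
        frontier = advance_filter_fused(g, frontier, visited);
        std::printf("iteration %d: frontier size %zu\n", iter, frontier.size());
    }
}
```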
## 5.4 Memory access throughput
1. coalesced memory access
2. effective use of the memory hierarchy
3. reducing scattered reads and writes
* Our choice of graph <font color="#f00">data structure</font> helps us achieve these goals
* format
* vertex-centric operations
* compressed sparse row (CSR)
* edge-centric operations
* coordinate list (COO)
* Gunrock represents all per-node and per-edge data as <font color="#f00">structure-of-array (SOA)</font> data structures that allow coalesced memory accesses with minimal memory divergence (see the sketch at the end of this subsection)
* <font color="#f00">shared memory</font> and <font color="#f00">local memory</font>
* In dynamic grouping workload mapping
* local memory
* In load-balanced partition workload mapping
* shared memory
* In filter operator
* shared memory
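
A small C++ sketch of the layouts named above: CSR for vertex-centric access, COO for edge-centric access, and structure-of-arrays (SoA) per-node data so consecutive threads touching consecutive vertices get coalesced loads. Field names are illustrative assumptions, not Gunrock's data structures.

```cpp
#include <cstdio>
#include <vector>

// CSR: row_offsets[v] .. row_offsets[v+1] indexes v's neighbor list in col_indices.
struct CsrGraph {
    std::vector<int> row_offsets;
    std::vector<int> col_indices;
};

// COO: one (src, dst) pair per edge; natural for per-edge (edge-centric) operators.
struct CooGraph {
    std::vector<int> edge_src;
    std::vector<int> edge_dst;
};

// SoA per-node data: each field is its own contiguous array (coalesced on the GPU),
// instead of an array of structs where fields of one vertex are interleaved.
struct NodeDataSoA {
    std::vector<float> rank;     // e.g., PageRank value
    std::vector<int>   label;    // e.g., BFS depth or CC label
};

// Convert CSR to COO by expanding each vertex's neighbor range into explicit pairs.
CooGraph csr_to_coo(const CsrGraph& g) {
    CooGraph coo;
    for (int v = 0; v + 1 < (int)g.row_offsets.size(); ++v)
        for (int e = g.row_offsets[v]; e < g.row_offsets[v + 1]; ++e) {
            coo.edge_src.push_back(v);
            coo.edge_dst.push_back(g.col_indices[e]);
        }
    return coo;
}

int main() {
    CsrGraph csr{{0, 2, 3, 3}, {1, 2, 2}};          // 0->1, 0->2, 1->2
    CooGraph coo = csr_to_coo(csr);
    NodeDataSoA data{{0.3f, 0.3f, 0.4f}, {0, 1, 1}};
    for (size_t e = 0; e < coo.edge_src.size(); ++e)
        std::printf("edge %zu: %d -> %d\n", e, coo.edge_src[e], coo.edge_dst[e]);
    std::printf("rank[2] = %.1f, label[2] = %d\n", data.rank[2], data.label[2]);
}
```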
# 6. GRAPH APPLICATIONS

* advance, filter, segmented intersection and compute steps can be composed to build new graph primitives with minimal extra work (see the PageRank sketch at the end of this section).
* Breadth-First Search (BFS)
* Single-Source Shortest Path
* Betweenness Centrality
* Connected Component Labeling
* PageRank and Other Node Ranking Algorithms
* Triangle Counting
* Subgraph Matching
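
As a second composition example (alongside the SSSP sketch in Section 3), a loose serial sketch of mapping a PageRank-style ranking onto the advance/compute/filter vocabulary: advance pushes rank contributions along edges, compute applies the damping update, and filter keeps only not-yet-converged vertices as the termination test. The damping factor, tolerance, and frontier policy are assumptions and differ from Gunrock's actual PageRank.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Csr { std::vector<int> row_offsets, col_indices; };

int main() {
    Csr g{{0, 2, 3, 4}, {1, 2, 2, 0}};           // 0->1, 0->2, 1->2, 2->0
    const int n = 3;
    const float damping = 0.85f, tol = 1e-4f;
    std::vector<float> rank(n, 1.0f / n), next(n, 0.0f);
    std::vector<int> active{0, 1, 2};            // "frontier": vertices still changing

    while (!active.empty()) {
        // advance + compute: push rank/out_degree along each edge, then apply damping.
        std::fill(next.begin(), next.end(), 0.0f);
        for (int u = 0; u < n; ++u) {
            int deg = g.row_offsets[u + 1] - g.row_offsets[u];
            for (int e = g.row_offsets[u]; e < g.row_offsets[u + 1]; ++e)
                next[g.col_indices[e]] += rank[u] / deg;   // atomicAdd on the GPU
        }
        for (int v = 0; v < n; ++v)
            next[v] = (1.0f - damping) / n + damping * next[v];
        // filter: keep only vertices whose rank has not yet converged.
        active.clear();
        for (int v = 0; v < n; ++v)
            if (std::fabs(next[v] - rank[v]) > tol) active.push_back(v);
        rank = next;
    }
    for (int v = 0; v < n; ++v) std::printf("rank[%d] = %.4f\n", v, rank[v]);
}
```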
# 7. PERFORMANCE CHARACTERIZATION

All PageRank times are normalized to one iteration. Hardwired GPU implementations for each primitive are enterprise (BFS), delta-stepping SSSP, gpu_BC (BC), and conn (CC). OOM means out-of-memory. A missing data entry means either there is a runtime error, or the specific primitive for that library is not available.
# 8. CONCLUSION
* highly programmable
* high-performance