# Meta Framework for Impact Evaluators: A Graph-Based Attribution Engine

## I. Introduction

Measuring and attributing impact in complex collaborative systems remains a fundamental challenge across domains—from open source software development to academic research. While traditional metrics capture simple relationships (e.g., authorship, citations), they fail to account for the rich network of contributions that lead to meaningful outcomes. How do we fairly attribute credit when a developer maintains critical infrastructure that enables thousands of projects, or when volunteers coordinate river cleanups that improve water quality for entire ecosystems?

Existing attribution systems typically suffer from three key limitations: (1) they are domain-specific and difficult to generalize, (2) they struggle to incorporate multiple types of contributions and outcomes, and (3) they lack flexibility in weighting different contribution types based on context-specific values.

This paper presents a graph-based framework for multi-layered impact attribution that addresses these limitations. Our approach models entities, actions, and outcomes as a typed graph where value flows through weighted edges, enabling flexible credit attribution across diverse contribution types. By combining a simple, extensible data model with a configurable PageRank-based scoring mechanism [^3], we enable impact evaluation that can be adapted to different domains while maintaining mathematical rigor.

Our key contributions are:

- A unified heterogeneous graph data model that naturally represents agents, artifacts, and measurable outcomes across domains
- A configurable weighting system that allows fine-tuning of edge and node importance without changing the core algorithm
- An attribution mechanism based on PageRank that handles both direct contributions and indirect impact through outcome signals
- Full compatibility with Generalized Impact Evaluators [^1] through flexible scope filtering and evaluation rounds

## II. Related Work

Our work builds on several research threads in impact measurement and network-based attribution.

**Generalized Impact Evaluators (GIE)** [^1] provide a theoretical framework for retrospectively evaluating and rewarding impact through measurable outcomes. GIE formalizes an Impact Evaluator as a tuple `IE = {r, e, m, S}`, where:

- `S` is the scope constraining the domain of actions, outcomes, and entities
- `m` is the measurement function that captures indicators within the scope
- `e` is the evaluation function that assesses impact from measurements
- `r` is the reward function that distributes incentives based on evaluation

While GIE provides important theoretical foundations, it lacks a concrete computational model for the evaluation function `e`. Our framework can be viewed as an implementation of GIE's evaluation function, providing a graph-based mechanism that naturally maps entities and indicators (actions/outcomes) to attribution scores.

**SourceCred** [^2] pioneered the use of contribution graphs for open source projects, implementing a modified PageRank algorithm to flow "cred" through a network of contributions. SourceCred demonstrates the viability of graph-based attribution in practice, processing real data from platforms like GitHub and Discourse. However, SourceCred's model is tightly coupled to specific platforms and contribution types, with predetermined edge weights and limited configurability.
Our framework generalizes SourceCred's core insights with a flexible type system and configurable weights, enabling application across diverse domains beyond software development.

**PageRank and Network Attribution** algorithms, originally developed for web page ranking [^3], have been widely adapted for attribution problems. The original PageRank treats all edges equally, computing importance based on the recursive principle that important nodes are those linked to by other important nodes. Subsequent work has explored weighted and personalized variants of PageRank for various applications. Our approach extends these ideas by introducing a comprehensive type system for both nodes and edges, allowing domain-specific weight configurations while maintaining PageRank's desirable properties of convergence and robustness to manipulation.

**Impact Metrics in Specific Domains** such as bibliometrics (h-index, impact factor) and software metrics (downloads, stars) provide domain-specific solutions but lack generalizability. These metrics often fail to capture indirect contributions or network effects—for instance, a critical bug fix may enable thousands of downloads but receive little direct recognition. Our framework allows incorporating such domain-specific signals as typed edges while providing a unified attribution mechanism that captures both direct and indirect impact.

**Depth vs. Breadth.** In addition to GIE [^1], SourceCred [^2], and classical PageRank adaptations [^3], recent advances in neural DAG discovery (e.g., NOTEARS [^4]) offer a contrasting paradigm. Whereas NOTEARS and its successors (DAG-GNN, GOLEM) learn causal structure by optimizing a continuous acyclicity constraint and node-wise functions, our framework keeps the graph topology fixed and focuses on credit propagation via weighted PageRank. This distinction highlights that our work targets attribution explainability and configurability rather than causal discovery per se.

**Comparisons.** The following table contrasts our framework with representative attribution methods on three key dimensions (configurability, generality, and explainability), together with a representative use case:

| **System**           | **Configurability** | **Generality** | **Explainability** | **Use Case**                       |
| -------------------- | ------------------- | -------------- | ------------------ | ---------------------------------- |
| **This work**        | ✓ High              | ✓ Broad        | ✓ High             | Multi-domain credit & rewards      |
| **SourceCred**       | ◯ Medium            | ◯ OSS-only     | ✓ High             | Open source project reputation     |
| **OpenRank**         | ◯ Medium            | ✓ Broad        | ◯ Medium           | Web3 trust / Sybil resistance      |
| **Citation Metrics** | ✗ None              | ✗ Academic     | ✓ Very High        | Paper ranking, academic hiring     |
| **Neural DAGs**      | ✗ None              | ✓ Broad        | ✗ Low              | Causal inference, science modeling |

## III. System Overview

Our framework implements a computational model for GIE's evaluation function through a typed graph structure and configurable PageRank-based attribution. We show how our components map to GIE's formal structure `IE = {r, e, m, S}` [^1].

![image](https://hackmd.io/_uploads/Hy1LY3k_ex.png)

![image](https://hackmd.io/_uploads/H1kPhoJ_gx.png)
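To make this mapping concrete before walking through each component, the following TypeScript sketch restates the `IE = {r, e, m, S}` tuple in the framework's vocabulary. The interface names and signatures are illustrative assumptions rather than a published API.

```typescript
// Illustrative only: these names are assumptions, not a published API.
// They restate IE = {r, e, m, S} using the framework's vocabulary.

interface Scope {
  // S: decides which nodes and edges fall inside an evaluation round.
  includes(element: { timestamp?: number; context?: string; type?: string }): boolean;
}

interface ImpactEvaluator<Graph, Config> {
  scope: Scope;                                                     // S
  measure: (scope: Scope) => Graph;                                 // m
  evaluate: (graph: Graph, config: Config) => Map<string, number>;  // e
  reward: (scores: Map<string, number>, pool: number) =>            // r
    Map<string, number>;
}
```

The remainder of this section fills in each of these components in turn.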
### A. Data Model and Scope (S)

Our framework uses a typed property graph that naturally represents GIE's scope S—the domain of actions, outcomes, and entities to be evaluated [^1]. The graph consists of typed nodes and edges with configurable properties.

**Node Types:**

```typescript
type Node = {
  id: string;                          // Unique identifier
  type: "agent" | "artifact" | "signal";
  timestamp?: number;                  // Optional creation time
  weight?: number;                     // Node importance
  context?: string;                    // Domain classification
  metadata?: { [key: string]: any };
};
```

The three node types map to GIE's conceptual model:

- **Agents** represent GIE's entities—the contributors (individuals, organizations, or automated systems) who perform actions
- **Artifacts** represent created works (code, papers, datasets)—the primary subjects that connect actions to outcomes
- **Signals** represent GIE's outcome indicators—measurable results like downloads, citations, or audit completions

**Edge Types:**

```typescript
type Edge = {
  from: string;                        // Source node ID
  to: string;                          // Target node ID
  type: string;                        // Edge type (creation, dependency, etc.)
  timestamp?: number;                  // Optional relationship time
  weight?: number;                     // Quantitative value (downloads, votes)
  confidence?: number;                 // Confidence value (0.0–1.0)
  context?: string;                    // Additional context
  metadata?: { [key: string]: any };
};
```

Common edge patterns include:

- `Agent → Artifact`: Direct creation (commits, authorship)—GIE's action indicators
- `Artifact → Artifact`: Dependencies or citations—structural relationships
- `Signal → Artifact`: Outcome measurements—GIE's outcome indicators
- `Agent → Signal`: Known attribution to outcomes (e.g., hours spent on a security audit)
- `Agent → Edge`: Verification of attribution/contribution claims

The scope `S` in our framework is defined by filtering nodes and edges based on:

- Temporal constraints (timestamps within the evaluation interval, when available)
- Context constraints (specific domains or projects)
- Type constraints (only certain node or edge types)

This filtering mechanism directly implements GIE's scope concept, allowing precise definition of which contributions and outcomes are considered in each evaluation round.

### B. Measurement Function `(m)`

The measurement function `m` in GIE extracts indicators from the scope [^1]. Our framework implements this through graph construction from multiple data sources:

```
m(S_i) = G_i = (V_i, E_i)
```

Where:

- `V_i` is the set of nodes (agents, artifacts, signals) within scope `S_i`
- `E_i` is the set of edges (relationships) within scope `S_i`

The measurement process proceeds in four steps (a code sketch follows at the end of this subsection):

1. **Data Ingestion**: Import from various sources (Git commits, download statistics, peer reviews)
2. **Type Mapping**: Convert raw data to typed nodes and edges
3. **Temporal Filtering**: Include only elements within the evaluation interval
4. **Weight Extraction**: Capture quantitative values (10k downloads, a 5-star rating) as edge weights

This approach generalizes GIE's measurement concept beyond simple indicator lists to a rich graph structure that preserves relationships and context.
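As a concrete illustration of steps 2–4, the following sketch builds a scoped graph from already-ingested raw records (step 1 is assumed to have happened upstream). The `RawRecord` shape, the `measure` signature, and the example source kinds are illustrative assumptions, not part of a specified API; `Node` and `Edge` are the types defined above.

```typescript
// Sketch of the measurement function m: raw records -> scoped graph G_i.
// `RawRecord` and the parameter shapes are assumptions for illustration.

type RawRecord = {
  kind: "commit" | "download_stat" | "review";  // example source kinds
  actorId?: string;
  artifactId: string;
  value?: number;        // e.g. commit count, download count, rating
  timestamp: number;
};

type Graph = { nodes: Map<string, Node>; edges: Edge[] };

function measure(
  records: RawRecord[],
  interval: { start: number; end: number },
  context?: string
): Graph {
  const nodes = new Map<string, Node>();
  const edges: Edge[] = [];

  for (const r of records) {
    // Temporal filtering: keep only records inside the evaluation interval.
    if (r.timestamp < interval.start || r.timestamp >= interval.end) continue;

    // Type mapping: convert each raw record into typed nodes and edges.
    if (r.kind === "commit" && r.actorId) {
      nodes.set(r.actorId, { id: r.actorId, type: "agent", context });
      nodes.set(r.artifactId, { id: r.artifactId, type: "artifact", context });
      // Weight extraction: the measured quantity becomes the edge weight.
      edges.push({ from: r.actorId, to: r.artifactId, type: "commits",
                   weight: r.value ?? 1, timestamp: r.timestamp, context });
    } else if (r.kind === "download_stat") {
      const signalId = `downloads:${r.artifactId}`;
      nodes.set(signalId, { id: signalId, type: "signal", context });
      nodes.set(r.artifactId, { id: r.artifactId, type: "artifact", context });
      edges.push({ from: signalId, to: r.artifactId, type: "downloads",
                   weight: r.value ?? 0, timestamp: r.timestamp, context });
    }
    // Further record kinds (reviews, audits, citations) map analogously.
  }

  return { nodes, edges };
}
```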
### C. Evaluation Function `(e)` and Configuration

Our primary contribution is a concrete implementation of GIE's evaluation function `e` using configurable weighted PageRank [^3]. The evaluation function transforms the measured graph into attribution scores:

```
e(G_i, config) = {(agent_id, attribution_score)}
```

**Configuration Structure:**

```typescript
type Config = {
  weights: {
    edges: {
      [edgeType: string]: number;    // Multiplier per edge type
    };
    nodesByType: {
      agent: number;                 // Multiplier for agent nodes
      artifact: number;              // Multiplier for artifact nodes
      signal: number;                // Multiplier for signal nodes
    };
    nodesById?: {
      [nodeId: string]: number;      // Multipliers for specific nodes
    };
  };
};
```

**Mathematical Formalization:** Our implementation uses personalized PageRank with edge-specific weights. Given graph G = (V, E) with weighted adjacency matrix W, the attribution scores satisfy:

```
PR(v) = (1 - α) · p(v) + α · Σ_{u∈N_in(v)} [W(u,v) / W_out(u)] · PR(u)
```

Where:

- `α` is the damping factor (default 0.85)
- `p(v)` is the personalization vector (signal node weights)
- `W(u,v)` is the configured edge weight
- `W_out(u) = Σ_w W(u,w)` is the out-weight sum

**Edge Weight Calculation:**

```
W(u,v) = w_base × w_type × w_conf × w_decay
```

Where:

- `w_base = edge.weight || 1.0` (measured value)
- `w_type = config.weights.edges[edge.type] || 1.0` (type multiplier)
- `w_conf = edge.confidence || 1.0` (confidence score)
- `w_decay = timeDecay(age_days)` (optional temporal decay)

**Personalization Vector:** For signal nodes S ⊆ V with weights w_s:

```
p(v) = { w_s / Σ_{s∈S} w_s   if v ∈ S
       { 0                   otherwise
```

**Convergence:** The algorithm iterates until `||PR^(t+1) - PR^(t)||_1 < ε`, where ε = 1e-6 (configurable), with a maximum of 100 iterations. Convergence is guaranteed for non-negative weights due to PageRank's stochastic matrix properties.

**Graph Preparation:** Contribution edges (agent → artifact) are reversed to (artifact → agent) to enable backward credit flow from outcomes to contributors, while other edge directions are preserved.

**Computational Complexity:** Each PageRank iteration requires O(|E|) time for sparse graphs, yielding O(k|E|) total complexity, where k is the number of iterations (typically k << 100). Space complexity is O(|V| + |E|) for graph storage. This scales well compared to centralized reputation systems requiring O(|V|³) matrix operations, making our approach suitable for networks with millions of nodes and edges.

**Attribution Extraction:** Agent scores are extracted and normalized for proportional distribution:

```
attribution(agent_i) = PR(agent_i) / Σ_{j∈Agents} PR(agent_j)
```

**Normalization Properties:** This ensures Σ attributions = 1 within the evaluation scope, enabling proportional reward distribution. For systems with external value flows (e.g., agents contributing across multiple concurrent evaluations), normalization is applied per evaluation round. The framework supports both closed-system attribution (fixed reward pools) and open-system attribution (where total rewards can vary with total contribution value) through different reward function implementations.
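To ground the formalization, the following is a minimal sketch of the evaluation function that combines the pieces above: configured edge weights, reversal of contribution edges, a signal-based personalization vector, power iteration, and agent-score normalization. It is an illustrative simplification rather than the reference implementation: node-type and node-id multipliers and temporal decay are omitted, the `CONTRIBUTION_TYPES` set is an assumption, and every edge endpoint is assumed to appear in `graph.nodes`.

```typescript
// Minimal sketch of e(G_i, config). `Node`, `Edge`, and `Config` are the types
// defined earlier; CONTRIBUTION_TYPES and all helper names are assumptions.

type Graph = { nodes: Map<string, Node>; edges: Edge[] };

const CONTRIBUTION_TYPES = new Set(["commits", "authorship", "audit"]);

function evaluate(graph: Graph, config: Config, alpha = 0.85,
                  epsilon = 1e-6, maxIter = 100): Map<string, number> {
  // 1. Combine measured value, type multiplier, and confidence into W(u, v),
  //    reversing contribution edges so credit can flow artifact -> agent.
  const weighted: { from: string; to: string; w: number }[] = [];
  for (const e of graph.edges) {
    const w = (e.weight ?? 1.0)
            * (config.weights.edges[e.type] ?? 1.0)
            * (e.confidence ?? 1.0);
    const reverse = CONTRIBUTION_TYPES.has(e.type);
    weighted.push({ from: reverse ? e.to : e.from, to: reverse ? e.from : e.to, w });
  }

  // 2. Personalization vector p concentrated on signal nodes (uniform fallback).
  const ids = [...graph.nodes.keys()];
  const signalIds = ids.filter(id => graph.nodes.get(id)!.type === "signal");
  const p = new Map<string, number>();
  for (const id of ids) p.set(id, signalIds.length === 0 ? 1 / ids.length : 0);
  const signalTotal = signalIds.reduce(
    (sum, id) => sum + (graph.nodes.get(id)!.weight ?? 1.0), 0);
  for (const id of signalIds)
    p.set(id, (graph.nodes.get(id)!.weight ?? 1.0) / (signalTotal || 1));

  // 3. Out-weight sums W_out(u), then power iteration until the L1 change < ε.
  const outW = new Map<string, number>();
  for (const e of weighted) outW.set(e.from, (outW.get(e.from) ?? 0) + e.w);

  let pr = new Map<string, number>();
  for (const id of ids) pr.set(id, 1 / ids.length);
  for (let iter = 0; iter < maxIter; iter++) {
    const next = new Map<string, number>();
    for (const id of ids) next.set(id, (1 - alpha) * p.get(id)!);
    for (const e of weighted) {
      const share = alpha * pr.get(e.from)! * (e.w / (outW.get(e.from) || 1));
      next.set(e.to, next.get(e.to)! + share);
    }
    let delta = 0;
    for (const id of ids) delta += Math.abs(next.get(id)! - pr.get(id)!);
    pr = next;
    if (delta < epsilon) break;
  }

  // 4. Extract agent scores and normalize them to sum to 1.
  const agentIds = ids.filter(id => graph.nodes.get(id)!.type === "agent");
  const total = agentIds.reduce((sum, id) => sum + pr.get(id)!, 0);
  const attribution = new Map<string, number>();
  for (const id of agentIds) attribution.set(id, total > 0 ? pr.get(id)! / total : 0);
  return attribution;
}
```

Which edge types count as contribution edges (and are therefore reversed) is a modeling choice; the worked example below treats both the commit edges and the audit chain that way.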
#### Worked Example: Open Source Library Attribution

Consider a simplified scenario with three agents contributing to a cryptography library.

**Graph Structure:**

```
Alice → CryptoLib        (commits: 50, t=1)
Bob → CryptoLib          (commits: 30, t=2)
Carol → AuditReport      (security audit, t=3)
AuditReport → CryptoLib  (enhances security, t=3)
Downloads → CryptoLib    (10,000 downloads, t=4)
```

**Configuration:**

```typescript
weights: {
  edges: {
    "commits": 2.0,
    "audit": 3.0,
    "downloads": 1.0
  },
  nodesByType: { agent: 1.0, artifact: 1.0, signal: 1.0 }
}
```

**Edge Weights After Reversal:**

- CryptoLib → Alice: `50 × 2.0 = 100`
- CryptoLib → Bob: `30 × 2.0 = 60`
- CryptoLib → AuditReport → Carol: `1.0 × 3.0 = 3.0`
- Downloads → CryptoLib: `10,000 × 1.0 = 10,000`

**Attribution Flow:** PageRank flows the value injected by the Downloads signal through CryptoLib and distributes it proportionally to the reversed edge weights: Alice receives ~61% (100/163), Bob ~37% (60/163), and Carol ~2% (3/163), reflecting both contribution volume and type weighting.

#### Gaming Resistance Analysis

The framework's manipulation resistance stems from PageRank's global computation, which forces attackers to coordinate manipulation across the network rather than game a single local metric.

**Attack Scenarios and Costs:**

1. **Artificial Dependencies**: Creating fake dependencies (Library_Fake → Library_Target) to inflate the target's attribution requires maintaining believable artifacts and risks detection through code analysis.
2. **Signal Manipulation**: Inflating download counts or creating fake citations requires compromising external platforms, with costs proportional to the signal magnitude.
3. **Sybil Contributions**: Creating multiple fake agents requires distributing contribution evidence across identities, diluting individual attribution gains.

**Defense Mechanisms:**

- **Confidence Scoring**: Low-confidence edges (e.g., `confidence = 0.3` for unverified contributions) reduce manipulation impact, since `W(u,v) ∝ confidence`.
- **Temporal Consistency**: Sudden attribution spikes trigger validation through timestamp clustering analysis.
- **Network Analysis**: Statistical outliers in contribution patterns (e.g., agents with unusually high out-degree) can be flagged for review.

**Manipulation Cost:** The cost of capturing an attribution fraction δ scales as O(δ × total_network_value), making targeted manipulation expensive while preserving legitimate recognition diversity.

#### Semantic Differentiation Through Configuration

A key design principle of our framework is that semantic heterogeneity in relationships is handled through configuration rather than hard-coded algorithms. Different edge types carry fundamentally different meanings—a "commits code to" relationship represents direct creative contribution, while "depends on" represents structural dependency, and "downloads" represents usage-based value signals. Rather than encoding these semantic differences into the algorithm itself, our framework delegates this understanding to domain experts through the edge weight configuration system. For example:

```typescript
config.weights.edges = {
  "commits": 5.0,        // Direct contribution gets high weight
  "depends": 0.1,        // Structural dependency gets low weight
  "downloads": 1.0,      // Usage signals get moderate weight
  "security_audit": 3.0  // Specialized contributions get custom weight
}
```
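Other domains would weight the same mechanism differently. The following hypothetical profiles illustrate this; the edge-type names and numeric values are assumptions for illustration, not recommended defaults.

```typescript
// Hypothetical configurations for two other domains; edge-type names and
// weights are illustrative only. `Config` is the type from Section III.C.

// Academic research: citations dominate; reviews and dataset reuse also count.
const academicEdgeWeights: Config["weights"]["edges"] = {
  "authorship": 3.0,
  "citation": 5.0,
  "peer_review": 2.0,
  "dataset_reuse": 1.5,
};

// Civic projects: volunteer hours and measured outcomes dominate.
const civicEdgeWeights: Config["weights"]["edges"] = {
  "volunteer_hours": 5.0,
  "coordination": 2.0,
  "water_quality_signal": 3.0,
};
```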
This approach provides several advantages:

1. **Domain Flexibility**: The same algorithm can handle software development (where commits matter most), academic research (where citations matter most), or civic projects (where volunteer hours matter most)
2. **Stakeholder Values**: Different organizations can weight the same contribution types differently based on their values and priorities
3. **Empirical Tuning**: Weights can be adjusted based on observed outcomes and stakeholder feedback

The alternative—hard-coding semantic rules into the algorithm—would require separate implementations for each domain and prevent the nuanced value judgments that different communities need to make.

This implementation provides the computational mechanism that GIE's theoretical framework leaves abstract, while maintaining flexibility through configuration.

### D. Temporal Evaluation and Rounds

While GIE evaluates over temporal intervals `i` [^1], our framework operates on static graph snapshots. To support GIE's rounds:

```typescript
// For each GIE evaluation round i:
const scopeFilter = (node: Node, edge?: Edge) => {
  // Filter by context, type, and timestamps if available
  return matchesRoundCriteria(node, edge, round_i);
};

const graph_i = constructGraph(allData, scopeFilter);
const attributions_i = evaluate(graph_i, config);
```

This separation of concerns keeps the attribution mechanism simple while maintaining full compatibility with GIE's temporal model. Each round can define its own filtering criteria through the scope mechanism.

### E. Reward Function `(r)` Integration

While our framework focuses on the evaluation function, it provides attribution scores that naturally feed into GIE's reward function [^1]:

```
r_i({(agent_id, attribution_score)}, reward_pool) = {(agent_id, reward_amount)}
```

The framework's output—normalized attribution scores—can be used with various reward strategies:

- Proportional distribution (zero-sum)
- Threshold-based rewards (positive-sum)
- Superlinear rewards for collaboration

### F. Signal Attribution Patterns

#### Credit Flow Architecture

The framework's credit flow is designed to attribute value to agents through their relationships to artifacts and outcomes. While signals can inject external value into the network, the framework also works effectively with pure contribution networks, where agents receive attribution based on the structural importance of their contributions.

#### Multiple Value Sources

The framework supports flexible value injection through several mechanisms; a code sketch of the hybrid case follows these examples.

**Signal-Based Value** (when outcome metrics are available): Signal nodes serve as **value sources** in the PageRank computation. When a signal like "10,000 downloads" is connected to an artifact, it injects value proportional to its importance into the attribution network:

```
Signal("10k downloads") → Artifact("numpy") → Agent₁, Agent₂, Agent₃ ...
```

**Structural Value** (pure contribution networks): Even without external signals, the framework attributes value based on network structure. Agents who contribute to artifacts that other agents depend on receive higher attribution through PageRank's recursive importance calculation:

```
Agent₁ → Artifact₁ → Artifact₂ ← Agent₂
```

Agent₁ receives attribution not just for creating Artifact₁, but for enabling Agent₂'s work on Artifact₂.

**Hybrid Value** (combining signals and structure): The most powerful attribution comes from combining both approaches, where structural importance amplifies signal-based value:

```
Signal("10k downloads") → Artifact₁ → Artifact₂ ← Agent₂
                              ↑
                            Agent₁
```

Agent₁ receives attribution both from the direct downloads and from enabling the ecosystem that depends on their work.
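For concreteness, here is how the hybrid example above could be encoded with the data model from Section III.A. The node IDs, edge types, and weights are illustrative assumptions; following the convention used in the gaming-resistance discussion, the dependency edge points from the dependent artifact to the artifact it builds on.

```typescript
// Hybrid value example encoded as nodes and edges (illustrative IDs and types).
const nodes: Node[] = [
  { id: "agent:1", type: "agent" },
  { id: "agent:2", type: "agent" },
  { id: "artifact:1", type: "artifact" },
  { id: "artifact:2", type: "artifact" },
  { id: "signal:downloads:artifact:1", type: "signal" },
];

const edges: Edge[] = [
  // Contributions (reversed during graph preparation so credit flows back).
  { from: "agent:1", to: "artifact:1", type: "commits", weight: 40 },
  { from: "agent:2", to: "artifact:2", type: "commits", weight: 25 },
  // Structural dependency: artifact:2 builds on artifact:1.
  { from: "artifact:2", to: "artifact:1", type: "depends" },
  // Outcome signal injecting external value into artifact:1.
  { from: "signal:downloads:artifact:1", to: "artifact:1",
    type: "downloads", weight: 10_000 },
];
```

With signal-only personalization, most of the credit reaches agent:1 through artifact:1, both directly from the downloads and via artifact:2's dependence on artifact:1.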
#### Attribution Patterns

The framework supports three main attribution patterns.

**Pure Structural Attribution:**

```
Agent → Artifact → Artifact → Agent*
```

Attribution flows based on dependency relationships and structural importance, even without external outcome metrics.

**Outcome-Driven Attribution:**

```
Signal → Artifact → Agent*
```

Value flows from measurable outcomes to creators. A highly downloaded library attributes credit to its developers.

**Contribution-Driven Attribution:**

```
Agent → Signal → Artifact
```

Value flows from a known contributor through their specific outcome to the benefiting artifact. A security auditor gets direct credit for their audit, which then adds value to the audited project.

#### Attribution Flow in Complex Scenarios

Consider a security audit scenario where multiple agents are involved:

```
Agent_Auditor → Signal("security audit") → Artifact ← Agent_Developer1, Agent_Developer2
```

This raises an important question: should the audit value flow only to the auditor, or should it enhance attribution for all contributors to the artifact? The framework supports both interpretations through different modeling approaches.

**Enhancement Model** - Audit increases artifact value:

- The security audit signal injects value into the artifact
- That value flows backward to ALL contributors to the artifact (proportional to their contribution weights)
- The auditor receives direct attribution for creating the audit signal
- The developers receive enhanced attribution because their artifact is now more valuable (more secure, higher confidence)

**Separation Model** - Audit as independent contribution:

```
Agent_Auditor → Artifact_Audit → Artifact_Main ← Agent_Developer1, Agent_Developer2
```

- The audit is modeled as its own artifact that enhances the main artifact
- The auditor receives full attribution for the audit artifact
- Developers receive attribution for the main artifact
- The dependency relationship may provide some network-effect attribution

**Community Values in Action.** This flexibility reflects a key design principle: the same real-world contribution can be modeled differently based on community values and attribution philosophy:

- **Collaborative Communities** might prefer the enhancement model: "The audit makes everyone's work more valuable, so everyone benefits"
- **Individual Recognition Communities** might prefer the separation model: "Each contribution should be attributed independently"
- **Hybrid Approaches** might use configuration weights to partially share audit value while maintaining individual recognition

The framework enables these different approaches through graph structure choices and weight configuration, allowing each community to encode their attribution values without changing the underlying algorithm, as the sketch below illustrates.
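The two modelings of the same audit differ only in the edges they add. The node IDs, edge types, confidence values, and weights below are illustrative assumptions; `Edge` is the type from Section III.A.

```typescript
// The same real-world audit, encoded two ways (illustrative IDs and types).

// Enhancement model: the audit is a signal attached to the main artifact,
// so its value flows back to every contributor of that artifact.
const enhancementEdges: Edge[] = [
  { from: "agent:auditor", to: "signal:audit", type: "audit", weight: 1 },
  { from: "signal:audit", to: "artifact:main", type: "audit", confidence: 0.9 },
];

// Separation model: the audit is its own artifact that enhances the main one,
// so the auditor is credited for it independently; the structural edge may
// still route some network-effect attribution, depending on how it is weighted.
const separationEdges: Edge[] = [
  { from: "agent:auditor", to: "artifact:audit", type: "authorship", weight: 1 },
  { from: "artifact:audit", to: "artifact:main", type: "enhances" },
];
```

How much value the enhancement model shares is then tuned by the `"audit"` edge-type multiplier in the weight configuration.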
##### Mathematical Properties

This flexible flow model creates desirable mathematical properties:

1. **Attribution Robustness**: The system works with varying data availability—from pure contribution graphs to rich outcome-annotated networks
2. **Proportional Distribution**: Multiple contributors to the same artifact receive attribution proportional to their weighted contributions
3. **Network Effects**: Contributors to foundational artifacts (those depended upon by many others) receive attribution from the entire ecosystem they enable, regardless of whether external signals are present
4. **Value Conservation**: When signals are present, all injected value flows to agents—no credit is lost to intermediate artifacts

The PageRank algorithm naturally handles these flows, converging to stable attribution scores that reflect both direct contributions and indirect ecosystem impact.

### G. Design Rationale

#### Why PageRank for Attribution?

While PageRank was originally designed for web page ranking, its mathematical properties make it well suited for attribution problems:

**Recursive Value Definition**: PageRank's core insight—that importance comes from being connected to other important entities—maps naturally to attribution. Contributors to highly valued artifacts should receive more credit than contributors to unused artifacts.

**Network Effect Capture**: Unlike simple metrics (commit counts, authorship), PageRank captures how contributions enable other contributions. A developer who maintains critical infrastructure receives attribution not just for their direct work, but for enabling the entire ecosystem that depends on that infrastructure.

**Manipulation Resistance**: While no algorithm is immune to gaming, PageRank's global computation makes it harder to manipulate than local metrics. Creating artificial dependencies or inflated signals requires coordination across the network.

**Convergence Guarantees**: For non-negative edge weights, PageRank provably converges to unique stable scores, ensuring consistent attribution results.

#### Configuration as First-Class Design

Many attribution systems fail because they embed domain-specific assumptions into their algorithms. Our framework inverts this relationship: the algorithm remains domain-agnostic while configuration carries all domain-specific knowledge. This separation enables:

- **Reusability**: The same codebase handles software projects, research collaborations, and civic initiatives
- **Transparency**: Stakeholders can examine and debate weight configurations without understanding PageRank mathematics
- **Evolution**: Attribution policies can evolve through configuration changes without algorithmic rewrites
- **A/B Testing**: Multiple configurations can be tested and compared on the same data

The framework's value lies not in algorithmic novelty, but in providing a flexible, configurable implementation of proven network attribution techniques. By implementing GIE's abstract evaluation function as a configurable graph-based system, our framework bridges the gap between theoretical impact evaluation and practical attribution needs across diverse domains.

## IV. Future Work

### A. Automated Weight Optimization

Currently, domain experts must manually configure edge and node weights, which requires deep understanding of the specific context. We envision using machine learning, particularly neural networks, to automatically learn optimal weight configurations from labeled examples of "fair" attribution. This could involve:

- Training neural networks on historical contribution data with expert-validated attribution outcomes
- Multi-objective optimization to balance different stakeholder perspectives on value
- Active learning approaches that iteratively refine weights based on user feedback
- Transfer learning to adapt weight configurations across related domains

Such automated approaches could significantly reduce the barrier to adoption while potentially discovering non-obvious weight combinations that better reflect true impact. A simple non-neural baseline is sketched below.
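As a much simpler stand-in for the learned approaches above, the following sketch tunes a single edge-type weight by grid search against expert-provided target attributions. The helper signature, the injected `evaluate` function, and the candidate grid are illustrative assumptions, not part of the framework.

```typescript
// Toy weight tuning by grid search (not a neural approach): pick the edge-type
// multiplier whose computed attributions are closest (L1 distance) to
// expert-labeled "fair" attributions. `Config` is the type from Section III.C;
// `evaluate` is injected, e.g. the evaluation sketch from that section.

type Attribution = Map<string, number>;

function tuneEdgeWeight<G>(
  graph: G,
  baseConfig: Config,
  edgeType: string,
  candidates: number[],                       // e.g. [0.5, 1, 2, 3, 5]
  target: Attribution,                        // expert-validated attribution
  evaluate: (g: G, c: Config) => Attribution
): { weight: number; error: number } {
  let best = { weight: candidates[0], error: Number.POSITIVE_INFINITY };

  for (const w of candidates) {
    const config: Config = {
      weights: {
        ...baseConfig.weights,
        edges: { ...baseConfig.weights.edges, [edgeType]: w },
      },
    };
    const scores = evaluate(graph, config);

    // L1 distance between computed and target attributions over target agents.
    let error = 0;
    for (const [agent, t] of target) error += Math.abs((scores.get(agent) ?? 0) - t);

    if (error < best.error) best = { weight: w, error };
  }
  return best;
}
```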
### B. Large-Scale Attribution Systems

The transition from small networks to large-scale attribution graphs introduces qualitatively new challenges and opportunities that represent high-priority research directions.

**Attribution Prediction**: A fundamentally new research direction involves forecasting attribution flows before they manifest in traditional metrics. Can we identify foundational contributions that will become valuable years later, before their impact becomes obvious through downloads or citations? What network patterns predict which seemingly minor contributions will enable major breakthroughs? This predictive capability could transform funding decisions, help identify undervalued contributors, and reveal early signals of emerging important work. Research challenges include developing time-series models for attribution evolution, identifying leading indicators in contribution patterns, and validating predictions against long-term outcomes.

**Gaming Resistance at Scale**: Large attribution systems face sophisticated manipulation, including Sybil attacks (artificial contributors), link farming (artificial dependencies), and signal manipulation (fake downloads or citations). Unlike small systems where manual review is feasible, large-scale systems require algorithmic approaches to detect coordinated manipulation while preserving legitimate contribution diversity. What game-theoretic equilibria emerge when agents optimize for attribution rather than direct value? How can statistical anomaly detection identify manipulation without false positives on legitimate but unusual contribution patterns? What community-based verification mechanisms scale to millions of contributors? This research direction combines network security, game theory, and community governance.

**Attribution Fairness Across Populations**: As attribution systems influence resource allocation, systematic biases can have significant societal impact. Large networks may amplify geographic bias (overvaluing Western/English-language work), institutional bias (favoring academic or corporate over volunteer contributions), and temporal bias (recent over foundational work). Research should develop attribution-specific fairness metrics, methods for detecting bias amplification through network effects, and intervention strategies that preserve attribution accuracy while promoting equitable recognition. Critical questions include: How do we define "fair" attribution when different communities have different contribution cultures? Can we develop counterfactual attribution analysis to identify bias? What are the tradeoffs between attribution accuracy and fairness?

### C. Model Extensions

Several extensions could enhance the framework's expressiveness and applicability.

**Uncertainty Quantification**: Attribution scores should communicate confidence levels, especially for:

- Sparse networks with limited data
- Contested or controversial contributions
- Indirect attribution chains with many intermediaries

**Negative Contributions**: Our current model assumes all contributions are positive, but real systems must handle spam, vandalism, and low-quality work.
Future versions should incorporate:

- Negative edge weights for harmful contributions
- Quality signals that modify attribution flow
- Reputation systems that adjust agent credibility

These extensions would significantly broaden the framework's applicability while maintaining its core simplicity and flexibility. By pursuing these directions, we aim to create a comprehensive ecosystem for fair, transparent, and adaptable impact attribution across all domains of collaborative human endeavor.

## References

[^1]: Network Goods, "Generalized Impact Evaluators," Protocol Labs, Tech. Rep., 2023.

[^2]: D. Malkhi and M. Reiter, "SourceCred: A system for decentralized credit attribution," 2019. [Online]. Available: https://sourcecred.io/docs

[^3]: L. Page, S. Brin, R. Motwani, and T. Winograd, "The PageRank citation ranking: Bringing order to the web," Stanford InfoLab, Tech. Rep., 1999.

[^4]: X. Zheng, B. Aragam, P. Ravikumar, and E. P. Xing, "DAGs with NO TEARS: Continuous optimization for structure learning," arXiv:1803.01422, 2018.