---
title: Computational Law vs BWL Principles for Computer Systems
---
**Computational Law vs BWL Principles for Computer Systems**
**Summer 2020**
***Comments, remarks, suggestions welcome and encouraged!!***
Creative Commons Attribution License: https://creativecommons.org/licenses/by/4.0/
Attribution: Christophe Bosquillon
Date: 09 September 2020
*Last updated on 12 September 2020*
++++++++++
**Topic: a Computational Law post by Dazza dated 02 September 2020:**
Principles for Computer Systems – Butler W. Lampson
Some great (and wise) principles for building computer systems: https://www.dropbox.com/sh/4cex542zznbjh7b/AADM59pqAb9YBy4eeT1uw0t8a?dl=0&preview=Hints+190+full.pdf
**Dazza’s question: which of these hints and principles do you think are most needed for computational law? Which are not a good fit for computational law?**
++++++++++
Note: above link is a Summer-2020 update & extended remix by Prof. BWL of his original 1983 paper.
If you’re short on time, it is sufficient to skim Part A – Summary and most salient observations. To help you decide whether to make time to read the rest, some pointers:
Part B is a second iteration of the drill-down, after parsing the BWL paper into digestible chunks. Written as a quick scan, it may come across as more time-effective and practical than Part C.
Part C uses the 5 MIT Computational Law Development Goals (the 5 MIT CLDGs are recalled in detail) for the initial iteration of the drill-down. These goals were very helpful for detecting obvious cognitive resonances or dissonances and for parsing the BWL paper in ways that resulted in Part B. This is a personal train of thought kept for the record, a bit laborious and time-consuming.
++++++++++
MIT Prof. Butler W. Lampson is a legend!!!
Here’s a nice profile:
https://news.mit.edu/2014/mit-professor-made-much-of-our-world-possible-0218
MIT Open Courseware – 2002 Course
6.826 'Principles of Computer Systems' by Prof. BWL
https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-826-principles-of-computer-systems-spring-2002/
++++++++++
++++++++++
Errors, omissions, and misunderstandings are only ours, in particular if we appear to stray from, misconstrue, take too many liberties with, or over-stretch either the 5 CLDGs above or some of the 'hints & principles' in BWL's 1983-2020 paper. The Right Stuff, if any, is a result of past & recent teamwork & online brainstorming with MIT Computational Law friends old and new. In addition, recent notable inputs include the Bucerius Law School Summer 2020 LegalTech Essentials webinars (completed) and Suffolk Law School LIT Lab Prof. David Colarusso’s “Coding The Law” course, with open-access resources for non-registered students (ongoing). The latter includes a window into DocAssemble, also in relation to the Sept. 2020 US eviction moratorium.
++++++++++
**Part A – Summary and most salient observations**
We’ll start with the most salient broad B/L/T/ observations (the B/L/T/ Business/ Legal/ Technological framework was originally defined by Dazza at MIT Computational Law) and then move into the most salient engineering observations for computational law systems.
Writing a ‘hints and principles’ guidance, as BWL did in 1983-2020 for computer systems, is a great act of leadership that helps people get organized to reach their intra-disciplinary goals.
Translated into computational law systems, that act of leadership first and foremost requires emphasizing the interdisciplinary nature of the whole endeavor, so that people in MDTs multi-disciplinary teams may get organized to reach goals defined across and beyond disciplinary silos.
Thus, in the way the guidance structure is written, you would need to add extra layers to communicate, with and toward peers, some ways and means to get organized: layers that integrate interdisciplinary efforts for MDTs multi-disciplinary teams; layers that embrace bodies of (computational) law as CAS complex adaptive systems, acknowledge and reduce the complexity, and manage the granularity; and layers that embody design requirements to get the human-centered aspect right at the outset.
It might be timely to add, on top of the practice to ‘Keep It Simple, Stupid’ (KISS), an additional requirement to ‘embrace the CAS Complex Adaptive Systems, acknowledge and reduce complexity, manage granularity’.
However, that is not enough. Early on you run into (one of) the crux of the problem: it all starts with defining specs.
So, when defining computational law system specs, it’s going to be more of a socio-political consideration: determining who defines, writes, and seals the spec, and for which legal and politico-societal purpose.
BWL’s methodology is powerful but requires extra layers of interdisciplinary articulation with business process management logic, and even more so with the “form of logic” of legal reasoning.
This is related to both the nature of legal knowledge representation and reasoning, and the subset of problems to solve, when dealing with legal process management, at all levels, and regardless of automation/ autonomy and the degree of balancing machine vs human-(still)-in-the-loop.
As experienced in our MIT Computational Law group, starting from a computer system engineering approach, mixed with some business scenario planning for Automated/Autonomous legal entities and an ounce of BPM business process management methods, *no computational law endeavor might work unless we make sure to incorporate legal reasoning at the outset.*
Furthermore, we need a mega-caveat on this Edsger Dijkstra quote: *“The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise”.*
As pointed out by Diana from our computational law group, *“this may be difficult in Law: for ex. even if we sample decisions, the law does not aim to return the average value, as human behaviour tends to be suboptimal. Even if we generalize and use analogies, in some other cases a solution may only be used in some specific type of problem. But also we can use safety as defined by the range within which that solution may be accepted, removing outliers for more complex processes”.*
As for design requirements to get the human-centered aspect right at the outset, it starts with humans, real people, not tech ‘users’. Even before going into considerations of societal norms interacting with B/L/T/ Business/ Legal/ Technological/ constructs, of HCI Human-Computer Interaction, and of adequate computational law UX (User Experience) design, it's substantively much more than just making a computer system "Yummy" enough to be widely adopted and sold among ‘users’. Even if adoption and techno-economic sustainability are key.
This said, most of BWL’s paper section "3.7. Yummy/3.7.1. User interfaces" remains not only valid but seminal in getting principled human-centered design requirements and technicalities right: so that bit needs to be extracted, revamped, and positioned top, front, and center among computational law systems design requirements.
But that’s not all. A human-centered approach to computational law systems implies paying particular attention to the definition, structure, and process of identity and agency.
This isn’t merely about the techno-legal specs of individual and corporate identities used in various online systems that involve transactions, roles and relationships, rights and responsibilities: it is mostly about the societal clustering that reflects the fact that certain people “identify with” certain clusters, regardless of the nature of the foundation of such identification and of the clustering’s definition, structure, and process.
Because the main politico-societal engine for that process is the relentless demand for agency based on such identification and clustering, which, if not acknowledged and tackled, may result in individual behavioral hobbling and major societal conflict. *People want identity, agency, and representativity.*
Furthermore, considering the significant and consequential politico-societal emergence of issues of identity and agency, this may have certain foundational consequences on how human-centered data needs to be structured in adequately designed computational law systems.
*So that was for the most salient broad B/L/T/ observations.*
*What about the most salient engineering observations?*
All of BWL’s guidance related to modules and interfaces, classes and objects, layers and platforms, components, open systems, and defining the right kinds of standardization seems strongly relevant to computational law systems endeavors.
BWL has experienced distributed personal computing systems and all subsequent computer systems landscapes and architecture transformations up to this day. He certainly is aware, at least in a forward-looking fashion, of the upcoming prevalence of Web 3.0 and IoT, with (somewhat) distributed, disintermediated, decentralized systems and architectures based on autonomous agents / systems / legal entities; architectures that are open source and “locally” collaborative; that are mindful that these emerging archipelagos of distributed systems be engineered for interoperability “globally”; and that require fault tolerance, and not just in the Byzantine sense. That bit in the BWL paper may require much more emphasis and effort to support computational law applications, in particular in decentralized A/A contexts.
Treating the state as both being and becoming (map vs. log, pieces, checkpoints, indexes) sounds particularly important when dealing with automated and autonomous agents, and with legal entities that need to keep a map/log of their successive states and processes, e.g. in contexts relevant to agency, contract, and tort laws. Beth, with the Berlin ecosystem, established that in an A/A context it is crucial to keep a “record of intent” on a case-by-case basis. As experienced before/after Berlin, you can't run any A/A legal entity construct, given agency, contract, and tort law considerations, without keeping an exact record of process and state (map and log).
That serves both to monitor the computational law system / A/A legal entity B/L/T/ status at any given time, and to either incorporate sufficient compliance to preclude the A/A entity from straying, or do enough cruise control to take automated or human corrective action, etc. And back to human-in-the-loop problematics.
As per Beth, *there is a requirement to keep a "record of intent", for whatever intent means on a case-by-case basis, in an A/A multi-agent simulation context.* This is required for anything linked to transparency, explainability, and accountability: complex notions that depend on human/machine time and speed, and on whether human interference with the machine is even desirable or possible.
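To make the map-vs-log and “record of intent” ideas concrete, here is a minimal, purely illustrative Python sketch (Python 3.9+; names like `AgentLedger` and `Event` are ours, not BWL’s or the group’s): an A/A agent appends records of intent, actions, and state changes to an append-only log (the “becoming”), and derives its current state map (the “being”) from that log, so the map can always be audited by replaying the log.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class Event:
    """One immutable entry in the agent's append-only log."""
    ts: str
    kind: str                 # e.g. "intent", "action", "state_change"
    payload: dict[str, Any]

@dataclass
class AgentLedger:
    """Keeps both the 'becoming' (append-only log, including records of intent)
    and the 'being' (current state map derived from that log)."""
    log: list[Event] = field(default_factory=list)
    state: dict[str, Any] = field(default_factory=dict)

    def record(self, kind: str, **payload: Any) -> Event:
        evt = Event(ts=datetime.now(timezone.utc).isoformat(), kind=kind, payload=payload)
        self.log.append(evt)               # the log only ever grows
        if kind == "state_change":         # the map is a view over the log
            self.state.update(payload)
        return evt

    def replay_state(self) -> dict[str, Any]:
        """Rebuild the 'being' from the 'becoming' (checkpoint / audit)."""
        state: dict[str, Any] = {}
        for evt in self.log:
            if evt.kind == "state_change":
                state.update(evt.payload)
        return state

# Usage: a hypothetical A/A legal entity records its intent before acting.
ledger = AgentLedger()
ledger.record("intent", goal="renew supply contract", counterparty="ACME")
ledger.record("action", step="offer sent", terms={"price": 100})
ledger.record("state_change", contract_status="offer_sent")
assert ledger.state == ledger.replay_state()   # map and log stay consistent
```

The point of the sketch is only that the log, not the map, is the source of truth: any compliance or “cruise control” check can be run against the replayed history.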
Using BWL's 'eventual consistency' to keep data available locally may take on renewed importance with hyper-local fog computing, when data subject to computational law processes is generated out of decentralized IoT and A/A agent networks (not trivial at all!).
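One standard way to realize eventual consistency is with CRDTs (conflict-free replicated data types). The toy sketch below is our own illustration, not something taken from BWL’s paper: a last-writer-wins map where each fog/IoT node writes locally and replicas converge once updates are exchanged, in any order.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class LWWMap:
    """Last-writer-wins map: each replica accepts writes locally and later
    merges with peers; replicas converge once all updates are exchanged."""
    entries: dict[str, tuple[float, Any]] = field(default_factory=dict)  # key -> (timestamp, value)

    def put(self, key: str, value: Any, ts: float) -> None:
        current = self.entries.get(key)
        if current is None or ts > current[0]:   # ties keep the existing value;
            self.entries[key] = (ts, value)      # a real CRDT adds a replica-id tiebreak

    def merge(self, other: "LWWMap") -> None:
        """Commutative, associative, idempotent merge: sync order doesn't matter."""
        for key, (ts, value) in other.entries.items():
            self.put(key, value, ts)

# Two hypothetical edge nodes record local observations, then sync in either order.
node_a, node_b = LWWMap(), LWWMap()
node_a.put("meter_42/reading", 17.3, ts=1.0)
node_b.put("meter_42/reading", 17.9, ts=2.0)
node_a.merge(node_b)
node_b.merge(node_a)
assert node_a.entries == node_b.entries   # both converge on the ts=2.0 value
```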
Time/space considerations might still linger as a major constraining factor when managing data storage and processing in computational law applications that involve maps & logs of states & processes out of decentralized and A/A agent systems, IPFS, and Web 3.0.
Points of view and Notation might also need to be emphasized and beefed up. Being vs becoming, i.e. maps and logs of processes and states, is essential, as discussed above. DSLs (domain-specific languages) matter.
We might see an evolution of computational law specific chips/ architecture where the hardware/ software couplings are optimized for capacities, energy consumption, etc.
While we look at how B/L/T/ languages interlink, it might not be far-fetched to imagine computational law DSLs, beyond tools of the trade currently in use.
The BWL Process section is surprisingly short (legacy, when tools are weak, technological debt in complex systems development). This would need to be beefed up, since it’s obvious that processes matter so much for computational law applications, at all sorts of levels.
One clear example: how do we develop computational law applications on top of A/A legal entities that may be robust by themselves, but connected to DeFi applications that are still work-in-progress, and as such may still have serious systemic issues, e.g. generating hyper-inflation in a closed ecosystem, subprime-type systemic failures, or pesky bugs, flaws, and vulnerabilities, starting with the code behind tokenization and smart / computable legal contracts in various ecosystems.
All of this requires more rigorous auditing in open source, decentralized organizations with “chaordically tolerant” management methods, in addition to the demands for transparency, explainability, and accountability that are part and parcel of what computational law applications need to be able to deal with.
Immutable ↔ append-only ↔ mutable also relates to an important niche of practical and regulatory issues revolving around data reversibility vs immutability in append-only systems like blockchain/DLT, which are part of evolving computational law landscapes.
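One widely discussed pattern for reconciling append-only ledgers with data reversibility (e.g. erasure requests) is to keep only hash commitments on-chain and the actual payload in a mutable off-chain store. The minimal Python sketch below is our own hypothetical illustration of that pattern, not a claim about any particular blockchain/DLT implementation.

```python
import hashlib
import json

class AppendOnlyLedger:
    """Immutable, append-only list of hash commitments (stand-in for a blockchain/DLT)."""
    def __init__(self) -> None:
        self._commitments: list[str] = []

    def append(self, payload: dict) -> str:
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self._commitments.append(digest)   # only the hash goes on-chain
        return digest

    def __contains__(self, digest: str) -> bool:
        return digest in self._commitments

    def verify(self, payload: dict, digest: str) -> bool:
        """Check that a payload matches a commitment recorded on the ledger."""
        recomputed = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        return recomputed == digest and digest in self._commitments

# The mutable off-chain store holds the actual personal / legal data.
ledger = AppendOnlyLedger()
off_chain: dict[str, dict] = {}

record = {"party": "Jane Doe", "clause": "payment due in 30 days"}
digest = ledger.append(record)
off_chain[digest] = record

# Reversibility (e.g. an erasure request): delete the off-chain payload.
# The on-chain commitment remains append-only, but no longer stores or reveals the data.
del off_chain[digest]
assert digest in ledger and digest not in off_chain
```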
BWL doesn’t mention the fact that we may see more and more architecture and coding choices and implementations being made by machines, not humans. We won’t delve into that now, but in light of the above, we might ask how to train the machines to do that more efficiently and effectively while still abiding by human-centered requirements, and how machines may help humans tackle some aspects of complexity and granularity in computational law systems as CAS complex adaptive systems.
*Finally, there is data.*
Interestingly, BWL never mentions "data science", even if a lot of his methodology revolves around data and algorithmic processes.
As is the case for an array of professions and industries, computational law is fundamentally even more inseparable from mastery of data science, complexity, and information overload.
Including and not limited to network representation and information visualization techniques, then again crossing paths with human-centered effective computational law HCI/UX design.
And taking into account the fact that Law belongs to the industries with allegedly the most "uneven" data sets (translation: legal data sets are work-in-progress and quite a mess, actually).
Data science is complicated for it’s hard to get the right data and the data right (*simplified data process pipeline: collection => pre-processing => clustering => analysis*).
Starting at the collection stage, an optimal data source may satisfy at least six rather basic and incompressible quality criteria: it needs to be *“available, complete, official, unmodified, accurate, structured”* (source: Bucerius Law School Summer 2020 LegalTech Essentials webinars).
This may constitute a guidance stem for roles devoted to data in optimal computational law systems. Also, in computational law systems, it might be advisable to first consider architecture and network analysis data, prior to jumping into language and ML-driven ‘semantic’ data analysis levels.
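As a toy illustration of the pipeline and the six criteria above (Python 3.9+; all names and the sample records are hypothetical), one could vet each data source against the criteria before letting its records enter the collection => pre-processing => clustering => analysis stages; legal reasoning would then shape how the clustering and analysis stages are actually defined.

```python
from dataclasses import dataclass

# The six source-quality criteria quoted above (Bucerius LegalTech Essentials).
CRITERIA = ("available", "complete", "official", "unmodified", "accurate", "structured")

@dataclass
class DataSource:
    name: str
    quality: dict[str, bool]   # criterion -> does this source satisfy it?

def vet_source(source: DataSource) -> list[str]:
    """Return the criteria a source fails; an empty list means it may enter the pipeline."""
    return [c for c in CRITERIA if not source.quality.get(c, False)]

def pipeline(records: list[dict]) -> list[dict]:
    """Simplified pipeline: collection -> pre-processing -> clustering -> analysis.
    Every stage here is a deliberately naive placeholder."""
    preprocessed = [{**r, "text": r.get("text", "").strip().lower()} for r in records]
    clusters: dict[str, list[dict]] = {}
    for r in preprocessed:                       # naive clustering by jurisdiction
        clusters.setdefault(r.get("jurisdiction", "unknown"), []).append(r)
    return [{"jurisdiction": j, "count": len(rs)} for j, rs in clusters.items()]  # toy analysis

# Usage with a hypothetical source and records
source = DataSource("case_law_portal", {c: True for c in CRITERIA} | {"structured": False})
print(vet_source(source))   # ['structured'] -> flag the source before collection
print(pipeline([{"jurisdiction": "MA", "text": " Eviction moratorium "},
                {"jurisdiction": "MA", "text": "CDC order"}]))
```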
There is a need to elaborate on the data science aspect of computational law systems, and universally recognized needs and requirements to build better legal data sets, in ways that aren't explicitly covered by BWL, but might gain from using elements of his powerful methodology.
But then again, there is no way you might do a proper job of computational law system building, data analysis, and exploitation by solely applying data scientists’ methods: this will likely come to naught unless you already incorporate legal reasoning at both steps, when you define and structure the qualitative clustering, and when you "interpret the results" based on stochastic and other quantitative evaluations. Which links us back to the fundamental interdisciplinary requirement of finding out what's what while incorporating legal reasoning at the outset of the data work.
Furthermore, considering the significant and consequential socio-political emergence of issues of identity and agency, this may have certain foundational consequences on how human-centered data needs to be defined, structured, and processed, in adequately designed computational law systems.
++++++++++
**PART B : scanning BWL’s ‘hints and principles’ vs a computational law perspective**
We parsed (arbitrarily, and following a process described in Part C that involves the 5 MIT CLDGs) the ‘hints and principles’ corpus into 5 groupings, B.1. to B.5., and (roughly) selected which sections and items may be most needed for computational law and which ones may not be a good fit.
**Group B.1.: -1- Introduction - and - -6. Conclusion**
These are broad high level principles already subject to comments:
What? Goals STEADY — Simple, Timely, Efficient, Adaptable, Dependable, Yummy
How? Techniques by AID — Approximate, Incremental, Divide & Conquer
When, who? Process with ART — Architecture, Automate, Review, Techniques, Test
In broad terms these seem applicable to computational law: the devil might be in different forms of logic, in complexity, and in granularity.
There are a lot of hints, but here are the most important ones:
− Keep it simple. Complexity kills. *Our comment:* yes, and you need to add at least 2 layers to communicate with your peers: a layer that integrates interdisciplinary efforts for MDTs (multi-disciplinary teams); and a layer that embraces (computational) law as a CAS complex adaptive system, acknowledges and reduces the complexity, and manages the granularity.
− Write a spec. At least, write down the abstract state. *Our comment:* yes and you need a mega caveat on the Edsger Dijkstra quote: “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise”.
As pointed out by our computational law colleague Diana, *“this may be difficult in Law: for ex. even if we sample decisions, the law does not aim to return the average value, as human behaviour tends to be suboptimal. Even if we generalize and use analogies, in some other cases a solution may only be used in some specific type of problem. But also we can use safety as defined by the range within which that solution may be accepted, removing outliers for more complex processes”.*
Computational law entails a socio-political consideration: determining who defines and writes the spec, and for which purpose. Again, that too is an MDT effort.
While BWL clearly indicates that he never acted as a team leader, he has demonstrated the ability, massive experience, and hard-earned wisdom to identify what works and doesn’t with teams. He has also obviously observed some of the awkward quandaries that people in his position and role might experience with policy makers’ high-level spec-definition pitfalls and dead ends.
As BWL pointed out, specs written only by high-level policy makers may not work.
But specs written only by techies may not exactly lead to desired outcomes either.
Specs might be (one of) the crux of the problem of forming and leading effective MDTs.
− Build with modules, parts of the system that people can work on independently. *Our comment:* this is particularly relevant to computational law efforts that are born out of Web 3.0, IoT, and decentralized architectures based on autonomous agents/systems; that are open source and “locally” collaborative; and that are mindful of interoperability “globally”.
− Exploit the ABCs of efficiency: algorithms, approximate, batch, cache, concurrency. *Okay.*
− Treat the state as both being and becoming: map vs. log, pieces, checkpoints, indexes. *Our comment:* this sounds particularly important when dealing with automated and autonomous agents, and with legal entities that need to keep a map/log of their successive states and processes, e.g. in contexts relevant to agency, contract, and tort laws. *Beth, with the Berlin ecosystem, established that in an A/A context it is crucial to keep a “record of intent” on a case-by-case basis.*
− Use eventual consistency to keep data available locally. *Our comment:* this may take on renewed importance with hyper-local fog computing, when data subject to computational law processes is generated out of decentralized IoT and A/A agent networks. Not trivial at all!
**Group B.2.: -1-/1.1 Oppositions and slogans - and - -5. Oppositions**
We feel that “oppositions” resonate with much relevance to computational law issues especially in decentralized and A/A contexts. Here is the full list:
Simple ↔ rich, fine ↔ features, general ↔ specialized /// Perfect ↔ adequate, exact ↔ tolerant /// Spec ↔ code /// Imperative ↔ functional ↔ declarative /// Immutable ↔ append-only ↔ mutable /// Precise ↔ approximate software /// Dynamic ↔ static /// Indirect ↔ inline /// Time ↔ space /// Lazy ↔ eager ↔ speculative /// Centralized ↔ distributed, share ↔ copy /// Fixed ↔ evolving, monolithic ↔ extensible /// Evolution ↔ revolution /// Policy ↔ mechanism /// Consistent ↔ available ↔ partition-tolerant /// Generate ↔ check /// Persistent ↔ volatile /// Being ↔ becoming /// Iterative ↔ recursive, array ↔ tree /// Recompute ↔ adjust ///
*From which we (arbitrarily) extract this short-list for comments:*
Simple ↔ rich, fine ↔ features, general ↔ specialized /// *our comment:* this could be (one of the major) crux of the problem, because it’s about the complexity, granularity, standardization, and means of the legal knowledge representation and reasoning process. See previous remarks about MDTs, CAS, etc. Which doesn’t mean there is no applicable wisdom in this bit.
Perfect ↔ adequate, exact ↔ tolerant /// *our comment:* same as previous; plus, our experience of computational law, especially in an A/A context, points to the need for developing fault-tolerant systems, and not just in the Byzantine-fault sense.
Spec ↔ code /// *our comment:* see previous remarks on the socio-political process of who defines computational law application specs, for which purpose, and through which MDTs.
Imperative ↔ functional ↔ declarative /// *our comment:* relevant to the evolution and classification of programming languages, in particular those for “coding the law”. In addition, we might want to consider the fact that in a not-so-distant future, it may be machines more than humans that actually produce the code. How to train the machines?
Immutable ↔ append-only ↔ mutable /// *our comment:* there is an important niche of practical and regulatory issues revolving around data reversibility vs immutability in append-only systems like blockchain/DLT, which are part of evolving computational law landscapes.
Precise ↔ approximate software /// Dynamic ↔ static /// Indirect ↔ inline /// *Okay.*
Time ↔ space /// *our comment:* this looks like a major constraining factor when managing data storage and processing in computational law applications that involve maps & logs of states & processes out of decentralized and A/A agent systems, IPFS, and Web 3.0.
Lazy ↔ eager ↔ speculative /// *our comment:* sounds relevant to options and constraints when dealing with computational law applied to some A/A agents processes.
Centralized ↔ distributed, share ↔ copy /// *our comment:* BWL experienced distributed personal computing systems, all subsequent computer systems landscapes and architecture transformations up to this day.
He certainly is aware of the upcoming prevalence of somewhat distributed, disintermediated, decentralized systems, in particular the need for fault tolerance and not just in a Byzantine meaning. That bit may require much more emphasis and effort to support computational law applications in decentralized A/A contexts.
Fixed ↔ evolving, monolithic ↔ extensible /// Evolution ↔ revolution /// Policy ↔ mechanism /// *our comment:* again, a lot of this is relevant to the socio-political mechanism of defining computational law application specs for specific purposes via MDTs, including and not limited to decentralized and A/A contexts.
Consistent ↔ available ↔ partition-tolerant /// Generate ↔ check /// Persistent ↔ volatile /// Being ↔ becoming /// Iterative ↔ recursive, array ↔ tree /// Recompute ↔ adjust /// *our comment:* again a lot of this is relevant to managing CAS complexity and granularity in computational law applications, in particular the degrees of hierarchy, distribution, and network effects.
**Group B.3.: -4. Process - 4.1 Legacy - 4.2 When tools are weak –** *our comment:* this is a short section that would need to be beefed up, since it’s obvious that processes matter so much for computational law applications, at all sorts of levels. One clear example: how do we develop computational law applications on top of A/A legal entities that may be robust by themselves, but connected to DeFi applications that are still work-in-progress, and as such may still have serious systemic issues, e.g. generating hyper-inflation in a closed ecosystem, subprime-type systemic failures, or pesky bugs, flaws, and vulnerabilities, starting with the code behind tokenization and smart / computable contracts in various ecosystems. All of this requires much auditing in often open source, decentralized, and “chaordically” managed organizations, in addition to the demands for transparency, explainability, and accountability that are part and parcel of what computational law applications need to be able to deal with.
**Group B.4.: -3- Goals and Techniques**
Here’s the full list, they’re all relevant to a degree:
-3.1 Overview - 3.1.1 Goals - 3.1.2 Techniques /// -3.2 Simple - 3.2.1 Do one thing well - 3.2.2 Brute force - 3.2.3 Reduction /// -3.3 Timely /// -3.4 Efficient - 3.4.1 Before the ABCs - 3.4.2 Algorithms - 3.4.3 Approximate - 3.4.4 Batch - 3.4.5 Cache - 3.4.6 Concurrency /// -3.5 Adaptable - 3.5.1 Scaling - 3.5.2 Inflection points /// -3.6 Dependable - 3.6.1 Correctness - 3.6.2 Retry - 3.6.3 Replication - 3.6.4 Detecting failures: real time - 3.6.5 Recovery and repair - 3.6.6 Transactions - 3.6.7 Security /// -3.7 Yummy - 3.7.1 User interfaces /// -3.8 Incremental - 3.8.1 Being and becoming - 3.8.2 Indirection ///
*We singled out these for comments, there could be more:*
-3.5 Adaptable - 3.5.1 Scaling /// *our comment:* scaling, sharding, and managing states is important.
-3.6 Dependable /// *our comment:* managing computational law applications failure, fault tolerance in distributed systems ; structure of transactions and impact on states ; security, are important.
-3.7 Yummy - 3.7.1 User interfaces /// *our comment:* this isn’t just nice to have, this should be top, front, and center, as part of a computational law approach that is human-centered. This isn’t just about B/L/T/ systems interacting with social norms, HCI, UX design. It starts with people.
**Group B.5.: -2- Principles**
Here’s the full list:
-2.1 Abstraction - 2.1.1 Safety and liveness - 2.1.2 Operations /// -2.2 Writing a spec - 2.2.1 Leaky specs and bad specs - 2.2.2 Executable specs /// -2.3 Writing the code: Correctness - 2.3.1 Types - 2.3.2 Languages /// -2.4 Modules and interfaces - 2.4.1 Classes and objects - 2.4.2 Layers and platforms - 2.4.3 Components - 2.4.4 Open systems - 2.4.5 Robustness - 2.4.6 Standards /// -2.5 Points of view - 2.5.1 Notation ///
*We singled out these for comments:*
-2.1 Abstraction - 2.1.1 Safety and liveness - 2.1.2 Operations /// -2.2 Writing a spec - 2.2.1 Leaky specs and bad specs - 2.2.2 Executable specs /// *our comment:* a possible computational law specs quandary, or at least (one of) the crux of the problem. As pointed out before, the Edsger Dijkstra quote, “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise”, might not work in (computational) law. Which MDTs get to decide the spec, for which socio-political purpose, and on which legal (and ethical) basis? Complexity, brittleness, failure, “bug-for-bug compatibility”, etc.
-2.4 Modules and interfaces - 2.4.1 Classes and objects - 2.4.2 Layers and platforms - 2.4.3 Components - 2.4.4 Open systems - 2.4.5 Robustness - 2.4.6 Standards /// *our comment:* this has to do with architecture, “classpec” (object-based and type-based), and the need to think in several dimensions of layers, modules, platforms, and components ([vertical: hardware and software classes and roles], [horizontal: non-hierarchic, distributed, disintermediated, decentralized], plus the [legal], [ethical], and [socio-political] dimensions). Principles of open systems with more or less open secrets, and of robustness, apply.
And there is certainly a whole category of problems to solve when it comes to defining the right kinds of standardization, as in: you don’t need a drill, you need six holes, but what you really need is to hang frames on a wall (but do you really want to do that, or would you be better off building a virtual space where you can equally enjoy virtual frames on a virtual wall and change the layout depending on circumstances anytime ;)
-2.5 Points of view - 2.5.1 Notation /// *our comment:* this looks like it might need to be emphasized and beefed-up. Being vs becoming i.e. maps and logs of processes and states are so essential as discussed above. DSL domain specific languages matter. We might see an evolution of computational law specific chips/architecture where the hardware/software couplings are optimized for capacities, energy consumption, etc. While we look at how B/L/T/ languages might interlink, it might not be totally far-fetched to imagine computational law DSLs, beyond tools of the trade currently being used.
++++++++++
**Part C – First iteration of ‘hints and principles’ drill-down using the 5 MIT CLDGs**
This is actually the initial step we started with, in order to "cognitively pulverize" the bulky BWL guidance into digestible and exploitable chunks. It's like using explosives to mine a portion of terrain.
The 5 CLDGs are extremely efficient for immediately locating cognitive resonances and dissonances between what's required for computational law systems and what transpires from the mining target, here the BWL 'hints & principles' guidance, which can then be parsed accordingly.
Let’s recall the 5 MIT Computational Law Development Goals = 5 MIT CLDGs,
that work as part of an interdisciplinary mindset.
Here’s the order in which we put them in perspective with BWL’s ‘hints & principles’:
{CLDG-5: Industry & Civic Partnerships}
(with an inter-disciplinary mindset)
{CLDG-1: Human-Centered Law}
{CLDG-4: Universal Interoperability}
{CLDG-2: Measurable Law}
{CLDG-3: Law as Data}
References:
https://law.mit.edu/pub/computationallawdevelopmentgoals/release/1
https://law.mit.edu/pub/overviewofcldgs/release/1
**{CLDG-5: Industry & Civic Partnerships}:**
*“ Strengthen the means of implementation of computational law by building partnerships with a diverse group of interdisciplinary stakeholders. The success of computational law will rely on people of different backgrounds and purposes coming together to design novel, modern, and improved legal systems reliant on contemporary and foreseeable technologies and dynamics, but also on the pace of predictable innovation. Fostering such partnerships, educating the legal profession, and bridging gaps between disciplines is a requirement for enabling legal innovation that is unlikely to rely on conventional socio-academic and political dialog that moves at glacial speed.“*
Writing a ‘hints & principles’ guidance is also a political leadership endeavor. It signals where you operate in the socio-economic ecosystem, who you write for, what your purpose and motivations are, and whether you intend to merely “do your job” (which is fine, still) or to help untie nagging societal gordian knots in the process of building computational law solutions.
BWL’s methodology is powerful but requires extra layers of interdisciplinary articulation with business process management logic, and even more so with the “form of logic” of legal reasoning.
This is related both to the nature of legal language, knowledge representation, and reasoning, and to the subset of problems to solve when dealing with legal process management, at all levels, and regardless of automation/autonomy and the degree of human-in-the-loop vs machine balancing.
That extra layer needs to integrate the definition of interdisciplinary languages that work for all stakeholders, but also the representation, communication, and management of complexity and granularity. Also, it is natural for each “profession” to develop its own strong culture, and to experience friction with others that have distinct knowledge representations, priorities, and value agendas.
At the individual level, it is best to develop T-shaped professional & personal habits.
Computational law isn't born out of computer system architecture & engineering problem solving methods applied to a particular domain expertise, it is an interdisciplinary endeavor.
BWL certainly had his share of interdisciplinary concerns through decades of innovation, at Xerox, Microsoft, and MIT. He might still be an inspiration to move it all to the next level.
As experienced in our MIT Comp.Law group, starting from a computer system engineering approach, mixed with some business scenario planning for Automated/Autonomous legal entities and an ounce of BPM business process management methods, no computational law endeavor might work unless we make sure to incorporate legal reasoning at the outset.
Hard truth and perhaps the most important lesson we learned in this group ;) …
…if you come from the Business & Tech sides of things, Thou Shall Study Law, or else...
There’s no cutting corners nor botching the fundamentals when it comes to practicing legal language acquisition, knowledge representation, reasoning, and process management.
Or at least grasp enough of it so you get a chance to know what you're doing in computational law.
At Suffolk Law, Prof. David Colarusso has a handy metaphor for his Law students delving into Computational Law and Legal Tech: “to have a successful Italian restaurant experience in Italy, you need to speak enough Italian to make it smoothly from the front door to the table and order (one may add: and make it smoothly on the way out through the waiter/cashier)”.
Language is first & foremost about communicating knowledge representation. How do we develop a multi-layered language for an effective interdisciplinary understanding & coordination mindset?
Early in his paper, BWL quotes Edsger Dijkstra: “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise”.
As pointed out by our computational law colleague Diana, “this may be difficult in Law: for ex. even if we sample decisions, the law does not aim to return the average value, as human behaviour tends to be suboptimal. Even if we generalize and use analogies, in some other cases a solution may only be used in some specific type of problem. But also we can use safety as defined by the range within which that solution may be accepted, removing outliers for more complex processes”.
We experienced it in this group: there is no way one can run both regular and fringe business scenarios of A/A legal entities in situations pursuant to Agency, Commercial, Tort, etc. laws without incorporating finely differentiated legal reasoning at the outset. But that's not all.
Starting from CAS complex adaptive systems theory and visual representation, and strongly deterministic architectures (business process & customs, compliance mechanisms & judicial process, technical platform layering and code), systems of Law and computational law also function as networks of referencing, relationships (between the products of legislative and legal bodies, and between the producers of such products: judges, courts, legislators), and of case relevance, the study of which, one country and jurisdiction at a time, may also bring better understanding of how Law has evolved across countries and jurisdictions (J.B. Ruhl, Dan Katz et al.).
But also how upcoming (computational) law may further evolve, after having been modeled and tested for e.g. (unintended) economic, fiscal, societal consequences of complexity, before being further deployed.
Current Speaker of the U.S. House of Representatives, Congresswoman Nancy Pelosi, was once mocked for stating something like "we need to pass the bill to find out what is in it".
But that may only have been a realistic observation of what the consequences of legal complexity mean, i.e. it may be nearly impossible to understand what a new law, bill, or decree does, unless one tests for real its economic, fiscal, societal (unintended) consequences. The Speaker had a point, even if expressed in a conveniently crude and politically expedient fashion. Seriously.
This also links to the practical consequences of the "emergence" concept in CAS, i.e. discovering, mapping, and understanding, as things happen and not before, the architectures, relationships, and consequences of legal changes reverberating through legal complexity.
Regardless of progress in NLLP Natural Legal Language Processing and (still somewhat brittle) ML-based semantic analysis (which of course deploys no such thing as a human-like kind of ‘semantic intelligence'), architecture-based and network-based analysis seem like two robust prerequisites before going into the language layer. That too has to be tackled in an interdisciplinary fashion, far beyond the relevant methods for solving computer system architecture and engineering design problems. Even if the latter are realistic reminders of hard coding constraints and practical limitations.
So what might be the minimal conditions for such an interdisciplinary approach to really work?
To expand on Prof. Colarusso’s Italian restaurant metaphor, consider this thought experiment: as you become a regular patron, the restaurant owner, sommelier, and chef may further invite you for conversations.
With a business ownership (Business) background, it’s not difficult to sound smart while discussing with the owner the business and logistics fun of running a restaurant.
Nor is it difficult, with a wine culture and usage (Tech) background, to discuss with the sommelier the vintage choices made in composing the cellar, the fit with the food and customers’ preferences, as well as the incidence of vineyard location, geology, soil, hydrology, climate, harvesting, processing, storage, and transportation that warrant those selections.
However, when it comes to the chef, discussing cuisine art and secret sauces used in a restaurant kitchen (Law), is a whole different ball game (we may not discuss here the metaphor that in order to keep appreciating a restaurant, it is better to never have to visit its kitchen).
And if the chef has decades of practice and was trained at Cordon Bleu (an elite Law School), one’s own home-kitchen mastery is no match. This is where the Business / Tech person might want to get a taste of Law study & practice, to at least pro-actively initiate a process of moving to the place where she needs to be: to get on the same page as a seasoned Law practitioner, she might want to make sure to develop abstract and applied legal reasoning and practice at the outset of building computational law solutions.
This isn’t a perfect metaphor, but you get the drift. We can’t just sit idle and say, if we are business / techies, “oh, that, some lawyers will handle it, that’s what we pay them for, not our problem”, and, if we are legal professionals, “oh, that, some BPM/devs will take care of it, that’s what we pay them for, no need to get our hands dirty”.
Interdisciplinarity is hardcore. Get real, get involved.
**{CLDG-1: Human-Centered Law}**
*“ Promote computational law as a means to empower individuals and improve access to justice and legal outcomes. Throughout history, from the Code of Hammurabi up until today, the purpose of law has been to serve people and reduce conflicts. As law becomes more sophisticated, it is important to never lose sight of this core purpose. If law is to be optimized through the use of technology, it must be optimized to meet the needs of individuals and society.”*
An example of successful human-centered computational law process - Source: Suffolk Law
On Sept. 1st, the CDC issued an order creating a nation-wide eviction moratorium; on Sept. 2nd, the Suffolk LIT Lab created a tool to help folks see if they qualified and help them fill out the needed paperwork. https://www.lawsitesblog.com/2020/09/suffolks-lit-lab-releases-eviction-moratorium-tool-to-help-tenants-exercise-their-rights.html
Within days of the CDC eviction moratorium, three legal apps were launched to help tenants with eligibility and declarations:
@GoA2JTech: https://www.covid19evictionforms.com/
@SuffolkLITLab: https://massaccess.suffolklitlab.org/housing/#CDCHoax
@KYEqualJustice: https://community.lawyer/cl/kyequaljustice/cdc-eviction-declaration
Now, see BWL’s paper section " 3.7. Yummy / 3.7.1. User interfaces ". There, the language is genealogically reminiscent of Bay Area foundational design thinking process (IDEO et al.).
A "user" may not just be that disturbing externality and passing concern (or a character of the Tron movies).
To paraphrase Charlton Heston in ‘Soylent Green’… “It’s people!”
A “user” is a human, or a social system of humans, with specific techno-economical characteristics. For ex. in online ADR, what is human and what is "human-centered" requires some basic human understanding and definition of human justice being properly served, or lack thereof, before even talking system plumbing and specs.
Perhaps one challenge for MDTs multi-disciplinary teams is that they have to do a good enough language job to manage complexity and get their act together internally. But they have to do an even better language job to make that palatable to important constituencies, for whom it often boils down to explaining it like to a five-year-old (Keep It Simple, Stupid, to gradually tackle complexity and granularity): senior policy decision makers, legislators, government regulators, and the media, who, understandably, may not have the easiest time trying to wrap their minds around human-consequential B/L/T/ complexity.
But Wyoming is a brilliant demonstration of a can-do attitude by high-level policy makers, legislators, and regulators, with a bit of MDT-acquired strategic patience and techno-financial acumen, that goes a long way toward building human-centered, machine-capable, intelligent and effective regulations for cryptographically-secured digital assets management and digitally-literate legal entities.
In addition, both first and foremost, and ultimately, one is dealing with The People, not some nameless “user”. The People are smart in their own ways: while they may not have the tools and background to master system plumbing & specs complexity (shouldn’t education try & deal with that, though?), The People may have accumulated quite hard-earned individual & collective experiences, of how well thought-out or botched techno-legal systems, fair access to justice or lack thereof, affect their lives in both obvious & tiny details. The People may become less & less tolerant of any perceived “injustice”, with consequences on societal peace & stability that a constitutional, legislative, legal and regulatory due process is expected to preserve.
Before going into considerations of societal norms interacting with B/L/T/ constructs, of HCI (Alan Dix et al.), of adequate computational law UX design, it's more than just Yummy and making a computer system "attractive" enough to be widely popular & adopted among ‘users’.
This said, most of BWL’s paper section "3.7. Yummy/3.7.1. User interfaces", remains not only valid but seminal in getting it right with human-centered design: so that bit needs to be extracted, revamped, and positioned as top, front, and center in human-centered computational law systems design requirements.
But that’s not all. A human-centered approach to computational law systems implies paying particular attention to the definition, structure, and process of identity and agency.
This isn’t merely about the techno-legal specs of individual and corporate identities used in various online systems that involve transactions, roles and relationships, rights and responsibilities: it is mostly about the societal clustering that reflects the fact that certain people “identify with” certain clusters, regardless of the nature of the foundation of such identification and of the clustering’s definition, structure, and process.
Because the main politico-societal engine for that process is the relentless demand for agency based on such identification and clustering, which, if not acknowledged and tackled, may result in individual behavioral hobbling and major societal conflict. People want identity, agency, and representativity.
Furthermore, considering the significant and consequential politico-societal emergence of issues of identity and agency, this may have certain foundational consequences on how human-centered data needs to be structured in adequately designed computational law systems.
**{CLDG-4: Universal Interoperability}:**
*“ Foster simplified, streamlined, and efficient legal functions by developing open interoperable standards and modular components for the practice of law. While laws are bounded by jurisdictional limits, computational law may transcend them and serve to bridge gaps and differences of a conceptual, institutional, procedural, structural, and substantive nature, both within individual legal systems and between them. Allowing different legal entities to communicate and collaborate more effectively not only contributes to their efficiency but also makes legal outcomes more predictable, processes more transparent, and ultimately contributes to justice. This requires a sufficiently practical model of what is to be deemed just, including notions of individual, social, retributive, restorative, and distributive justice, that can be expressed computationally. “*
The above goals translate into (one of) the crux of the problem of defining computational law specs.
Universal interoperability isn't merely a technical issue.
For example, in the domain of digital/crypto-assets and distributed/decentralized systems, some supra-national institutions like to tackle issues of public/open vs private blockchain/DLT cross-border interoperability along three dimensions: legal, regulatory, and technical interoperability.
Here, in this broader context of computational law systems universal interoperability, we'd like to stick to the basic B/L/T/ Business/ Legal/ Technological framework (Dazza) as a starting "prism" to frame the issues.
Putting above definition of universal interoperability in perspective with the BWL paper, we might want to pay attention to a specific phenomenon: the rise of archipelagos of (somewhat) distributed and decentralized systems that do not make universal interoperability issues easier to tackle.
However, that trend itself needs to be put in perspective with the reality of strongly deterministic architectures (business process value capture and exclusive lock-ins, legal and regulatory compliance mechanisms & judicial process, technical platform layering and code constraints, etc.).
These strongly deterministic architectures operate as a sort of Gestalt that conditions the perception of a need for some proto-legal/regulatory requirements, or lack thereof, and it may take, over years and even decades, the unraveling of "emergence" as part of standard CAS complex adaptive systems behavior to realize what the real deal really is.
Platforms, e.g., "innocently" conceived to organize information, exchange views, buy books, or monitor friends' and neighbors' cat pictures, end up behaving as dominant techno-sociological behemoths with a hell of a lot of political, socio-behavioral, and fundamental human rights issues.
In addition to providing relevant and fertile ground for related monopoly and antitrust studies (like the work of our computational law colleague Thibault) on ways and means to regulate perceived excesses and imbalances while dealing with the techno-sociological determinism aspects, these realities are reminders that part of the future might reside in a sort of digital "Yalta" (sharing the territory and spoils of war) between the dominant and centralized vs the alternative and decentralized, with a form of symbiotic cohabitation between the former and the latter, as the former may not fade away that fast.
However, people might move from the former to the latter only on condition that a certain interoperability exists between both types of systems. Regardless of how good, bad, and ugly the former system might be seen, people may not move out of it so easily, unless they feel they do not lose too much of what was once the perceived good of the former system while they move toward the alternatives offered by the latter.
And of course the critical mass of adoption, or lack thereof, has a lot of consequences for both the dominance and the self-regulating firepower of emerging systems (a sort of Forrest Gump box-of-chocolates situation, if you will).
Back to BWL and his paper, it's interesting to reflect on styles of leadership we might be willing to develop in the context of universal interoperability.
David Sluss on leadership in HBR https://hbr.org/2020/09/becoming-a-more-patient-leader
(associate professor of organizational behavior at Georgia Tech’s Scheller College of Business):
quote
“I like to describe the most effective task-oriented behavior as futurist and the most effective relationship-oriented behavior as facilitator.
Futurists create a powerful vision and outline the metrics needed to realize it.
Facilitators foster collaboration and empower a team to reach a solution. The approaches are complementary, not mutually exclusive.
(…) A futurist needs patience when explaining her vision to folks that may or may not “get it” right away or have doubts about the vision’s viability.
A facilitator needs patience with a group’s collaborative process when members aren’t working well together or are taking longer than expected to come up with a solution. “
unquote
A lot of BWL’s wisdom resides in defining interoperable hints & principles for managing the processes of building computer systems and layers of code from the era of centralization and hierarchy, having then brilliantly transitioned to individual distributed PCs and beyond.
We shouldn't let these lessons slip away, including and not limited to “distributed systems”.
What in BWL’s 1983-2020 vision is seen not as fringe but as something to possibly steer clear of, distributed systems & fault tolerance, may now become so mainstream as to warrant thoroughness, as well as concern for evolutionary interoperability between the before, the now, and the (foreseeable) after.
With Web 3.0, IoT, 5G, more automated, autonomous, distributed devices, on somewhat decentralized systems, on which automated/ autonomous legal entities and computational law operate, fault tolerance may become instrumental and not just in the Byzantine meaning.
Hardware evolution conditions architecture, relationship with software, and interoperability.
Then again, BWL wisdom could be our guide.
Recently, BWL co-authored a paper with MIT CSAIL colleagues, where they review post-Moore’s Law computer systems perspectives:
https://scitechdaily.com/mit-csail-if-transistors-cant-get-smaller-then-coders-have-to-get-smarter/
Reference: “There’s plenty of room at the Top: What will drive computer performance after Moore’s law?” by Charles E. Leiserson, Neil C. Thompson, Joel S. Emer, Bradley C. Kuszmaul, Butler W. Lampson, Daniel Sanchez and Tao B. Schardl, 5 June 2020, Science.
DOI: 10.1126/science.aam9744
Maybe this isn't one of the most salient and pressing concerns now, maybe it will become one such at some point in the coming decades. But what matters is to see the trajectory of the challenge as evolutionary. Which, pardon the metaphor, might mean certain timeframes and domains of overlap between the "Neanderthal" chunks of society where computational law systems require deployment, and what/whomever comes next in terms of human and societal development.
This also signals that, by botching universal interoperability, we leave open possibilities for perhaps not-so-desirable outcomes, such as even steeper learning and techno-economic acquisition curves that result in worse techno-societal gaps and "injustices" between the haves and have-nots, and problematic evolutionary consequences for a species that is also becoming a human-machine techno-societal construct.
We may not delve into it here, but there also seems to be an obvious geopolitical dimension, as whoever dominates the market and dictates the specs may eventually determine who gets what techno-legal access, with, again, all the socio-economic consequences.
This might be a concern for countries that may have no alternative but to either embrace systems imposed by dominant techno-economic powers or become ultimate techno-societal laggards. As subservient digital tributary states of certain dominant nation-states and of the private-sector entities, ecosystems, and platforms that they back, they may see those powers' worldview Matrix architecture of free and open governance and respect for fundamental human rights, or lack thereof, reflected and implemented. The Great Game might have just become even more consequential.
It might thus be interesting to discuss with BWL to what extent these evolutions might lead to reshuffling some of his principles and hints. One aspect is smart hardware, i.e. chips dedicated to computational law that would incorporate hardware/software efficiencies, among much more distributed and interoperable architectures, including but not limited to ML/DL processes.
These building blocks are primary indicators of whether interoperability is indeed achievable. As said, even if universal interoperability isn't merely a technical issue, and we might want to tackle it through the B/L/T/ prism, techno-economical feasibility and evolution with a strong geopolitical twist, may strongly determine what happens and doesn't with regard to that universal interoperability concern.
**{CLDG-2: Measurable Law}:**
*“Develop more transparent and reliable legal practices by specifying goals and measurement criteria, evaluating results, and adapting in light of new considerations. If we hope to create, test and iterate legal designs, we must first ask, “what qualifies as success and how do we know that it has been achieved?” Computational legal systems need to provide opportunities for transparent evaluation while also recognizing key qualitative dimensions of success - or failure - that quantitative metrics may miss.”*
Since any established system of Law anywhere may be seen as a CAS Complex Adaptive System (J.B. Ruhl et al.), while BWL has had a half-century career of tackling complexity, when it comes to Computational Law it might be timely to add, on top of the practice to 'Keep It Simple, Stupid' (KISS), an additional requirement to 'embrace the CAS Complex Adaptive Systems, acknowledge and reduce the complexity, manage the granularity'.
With all due friendly respect to design thinking schools, spreading a bunch of colored post-its on windows or whiteboards is a great start for tackling complexity and granularity. And fair enough if that achieves some measure of MDT-building by running around ‘empathically’.
But one isn’t going to achieve complexity and granularity representation, reduction, and management by merely reducing the problem to colored post-its & astute coding.
Architectures (of business and societal norms, legal and procedural processes, computer systems and code) may be strongly deterministic externalities in shaping the data and the nature and methods of what is being measured, how, and for which purpose; and so are networks.
As experienced in our group, you can't run any A/A legal entity construct, if only for Agency, Contract, and Tort law considerations, without keeping an exact record of process and state (map and log).
That serves both to monitor the B/L/T/ status at any given time, and to either incorporate sufficient compliance to preclude the A/A entity from straying, or do enough cruise control to take automated or human corrective action, etc. And back to human-in-the-loop problematics.
There is also a requirement to keep a "record of intent", for whatever intent means on a case-by-case basis, in an A/A multi-agent simulation context. This is required for anything linked to transparency, explainability, and accountability: complex notions that depend on human/machine time and speed, and on whether human interference with the machine is even desirable or possible.
Whatever BWL says revolving around log & reduction certainly remains valid, but it may require reconsideration and repositioning in these particular and relatively new contexts, although some automated and autonomous systems have been around us for decades already.
These involve quantitative and qualitative metrics relevant to the language of complexity.
**{CLDG-3: Law as Data}**
*“Transform law and legal processes from static to dynamic by expressing legal processes as data. Effective computational legal systems are built on algorithms that relate high-quality data, patterns and outcomes. We need to reconceive of legal information generally as data and examine how to systematically capture, move and process it, while guarding against potentially malicious data use.”*
While BWL expands substantially on data-related processes, from algorithms to maps and logs of processes and states, his paper doesn’t tackle any particular vertical, including data science.
As is the case for an array of professions and industries, computational law is fundamentally even more inseparable from mastery of data science, complexity, and information overload.
Including and not limited to network representation and information visualization techniques, then again crossing paths with human-centered effective computational law HCI/UX design.
Data science is complicated because it’s hard to get the right data and the data right.
Simplified data process pipeline: collection -> pre-processing -> clustering -> analysis
Starting at the collection stage, an optimal data source may satisfy at least six rather basic and incompressible quality criteria (Bucerius Law School LegalTech Essentials):
It needs to be *“available, complete, official, unmodified, accurate, structured”* (if you worked on, or even merely observed, e.g. COVID-19 data-related issues, you may have noticed that the processes that may support such criteria are far from automatically granted). In this case this could constitute, e.g., a BWL-like guidance stem for optimal computational law systems.
But then again, there is no way you might do a proper job of computational law system building, data analysis, and exploitation by solely applying data scientists’ methods. This may likely come to naught unless you incorporate legal reasoning at the outset. Which keeps linking us back to the fundamental interdisciplinary requirement.
Furthermore, considering the significant and consequential emergence of issues of identity and agency, this may have certain foundational consequences on how human-centered data needs to be defined, structured, and processed, in adequately designed computational law systems.
++++++++++
+End-of-Text+