*[Image. Caption: Wolfman Jack says: "Don't call us."]*
tags:
streaming
actor
EDA
OpenEMIT
Author: George Willis
Thu, Apr 19, 2018
An "Inversion of Invocation" design pattern (for those familiar with the IoC design pattern).
"The way you do it now."

Tightly coupled: the caller specifies the target. add() is hardcoded in all cases below:

    result = add(1,2)             // Procedural style
    result = (int 1).add(2)       // OOP style
    add(resultCallback(),1,2)     // Async style
- Results must be routed back, not forward
- Inherent in recursive orchestration models
- Context held "open"
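The tightly coupled styles above can be made concrete in Python; a minimal sketch (the `Int` class and `add_async` helper are illustrative names, not part of any framework):

```python
def add(a, b):
    return a + b

# Procedural style: the caller names the target directly
result = add(1, 2)

# OOP style: the receiver's class hardcodes the method
class Int:
    def __init__(self, n):
        self.n = n
    def add(self, other):
        return self.n + other

result = Int(1).add(2)

# Async/callback style: still hardcodes add(); the result is routed
# *back* to the caller-supplied callback, not forward to a listener
def result_callback(value):
    print(value)

def add_async(callback, a, b):
    callback(add(a, b))  # the caller's context stays "open" until this runs

add_async(result_callback, 1, 2)
```

In all three variants the invoker must know the target by name at the call site, which is exactly the coupling the pattern below removes.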
A foundational shift: what Gartner calls "Event Thinking".

Publish/Listen: a cousin to Pub/Sub. Think "Hooks" for loose coupling.
    myEvent.publish(aTopic):
    {--- aTopic.aListener1            // Listeners can filter (optional)
    {--- aTopic.anActor2              // Ingest/Emit Contract Queue Workers
    {--- aTopic.aListener3.anActor3   // Filtered Actor
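The Publish/Listen hooks above can be sketched as a minimal in-process event bus in Python. This is an illustrative stand-in, not OpenEMIT's actual API; the `EventBus` class and topic names are assumptions:

```python
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.listeners = defaultdict(list)  # topic -> hooked listeners

    def listen(self, topic, listener, event_filter=None):
        # Listeners can filter (optional): only matching events are delivered
        self.listeners[topic].append((listener, event_filter))

    def publish(self, topic, event):
        # The publisher never names its targets: inversion of invocation
        for listener, flt in self.listeners[topic]:
            if flt is None or flt(event):
                listener(event)

bus = EventBus()
bus.listen("aTopic", lambda e: print("aListener1 saw", e))
bus.listen("aTopic", lambda e: print("anActor2 ingests", e))
bus.listen("aTopic", lambda e: print("anActor3 (filtered)", e),
           event_filter=lambda e: e.get("kind") == "special")
bus.publish("aTopic", {"kind": "special", "data": 42})
```

Note that extending the flow means adding a `listen()` hook; the publisher's code never changes.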
    =========================================================================
    dataCollectedEvent = [...]
    dataCollectedEvent.publish(Users.Create)   // Emit trigger event

    // Users.Create:dataCollectedEvent Listeners
    {--- addCredentialsToLDAP     // Ingest into Active Directory
                                  // Emit Users.Create.Credentials:savedEvent
    {--- addEmailToMailgun        // Ingest into User Notification domain
    {--- addToDisqus              // Ingest into Social Engagement domain

    // Users.Create.Credentials:savedEvent Listeners
    {--- sendConfirmationEmailToUser

    // Users.Create Multi-Event "Merge" Listeners
    {--- EndProcess(span=2s)      // Upon completion of several parallel
                                  // Actors/subflows, emit Users.Create:endFlowEvent
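The multi-event "Merge" listener at the end of the flow can be sketched as an actor that waits for all of its parallel upstream events before emitting the end-of-flow event. A minimal Python sketch; the `MergeListener` class and the second upstream event name are illustrative assumptions, and the `span=2s` timeout window is omitted for brevity:

```python
class MergeListener:
    """Fires its end-of-flow event once every expected upstream event arrives."""
    def __init__(self, expected, emit, end_event):
        self.pending = set(expected)  # upstream events still awaited
        self.emit = emit
        self.end_event = end_event

    def on_event(self, name):
        self.pending.discard(name)
        if not self.pending:          # all parallel subflows complete
            self.emit(self.end_event)

done = []
merge = MergeListener(
    expected={"Users.Create.Credentials:savedEvent",
              "Users.Create.Email:savedEvent"},  # illustrative second event
    emit=done.append,
    end_event="Users.Create:endFlowEvent")

merge.on_event("Users.Create.Credentials:savedEvent")  # still waiting
merge.on_event("Users.Create.Email:savedEvent")        # all arrived; emits
```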
Evolutionary Architectural Style: achieves loose coupling of autonomous execution bundles (container runtimes) by moving coupling from "callers" (a.k.a. Commanders, Orchestrators) to autonomous listeners known as Actors (Choreography; think flocks of birds). Each Actor hooks into an event stream.
Extend via new hooks to existing events without impacting existing workflows, just like in ETL.
Solves coupling issues that other approaches, such as Service Registries (ESBs) and Service Discovery solutions, do not; those approaches:

- move the issue to a mediation layer (reduces the impact of change, but does not remove the need to reconfigure)
- add extra machinery, complexity, latency, and resource utilization to an otherwise lean microprocess
- Naturally Async(): no "blocking".
- Naturally Efficient: deep, recursive call stacks of locked, distributed context (resources) are replaced with atomic, ephemeral Actor Lifecycle context.
- Naturally Resilient: in contrast, Orchestration is a Resilience Antipattern; the orchestrator is a central dependency, and if it goes down, the whole flow goes with it.
- Naturally Visible: Day 1 monitoring of each state change, filtered by topic(s), subtopics, and Domain Entities. The monitors are called "listeners", and they are built into the runtime as a foundational component.
- Naturally Scalable: "'cause concurrency ain't easy, so stop sharing forks". Employ Parallel Processing instead. Scaling workload is all about distributing workload, and if workers keep sharing workspace resources, the result is predictably Contention@Scale. An autonomous (atomically isolated) multi-tasking environment is required; this is why Container Multitenancy is another "Pillar of Digital Transformation".
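"Stop sharing forks" can be sketched as partitioning the workload so that each worker owns its slice outright: parallel processes, no shared workspace, no locks. A minimal Python illustration (the `work` function and chunking scheme are illustrative assumptions):

```python
from multiprocessing import Pool

def work(chunk):
    # Each worker owns its chunk: no shared workspace resources,
    # no locks, hence no Contention@Scale
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    data = list(range(1000))
    chunks = [data[i::4] for i in range(4)]   # partition, don't share
    with Pool(4) as pool:
        partials = pool.map(work, chunks)     # isolated parallel workers
    print(sum(partials))                      # merge results at the end
```

The only coordination point is the final merge of partial results, mirroring the Multi-Event "Merge" listener pattern above.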
Recovery Axiom: events are fine-grained, immutable state-transition ledger entries recorded in a Distributed Log.
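The Recovery Axiom can be sketched as an append-only log of immutable entries that any consumer replays to rebuild state. This is a minimal in-memory stand-in for a real distributed log; the `EventLog` class and the banking events are illustrative assumptions:

```python
from types import MappingProxyType

class EventLog:
    """Append-only ledger of immutable state-transition entries."""
    def __init__(self):
        self._entries = []

    def append(self, event: dict):
        # MappingProxyType makes each ledger entry read-only once recorded
        self._entries.append(MappingProxyType(dict(event)))

    def replay(self, apply, state=None):
        # Recovery: fold every entry, in order, into freshly rebuilt state
        for event in self._entries:
            state = apply(state, event)
        return state

log = EventLog()
log.append({"type": "deposit", "amount": 100})
log.append({"type": "withdraw", "amount": 30})

def apply(balance, event):
    balance = balance or 0
    delta = event["amount"]
    return balance + delta if event["type"] == "deposit" else balance - delta

print(log.replay(apply))  # rebuilds the balance from the ledger: 70
```

Because entries are immutable and ordered, recovery after a crash is just a replay; no "open" context needs to be reconstructed.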
| Properties | Explicit Invocation | Implicit Invocation | Notes |
|---|---|---|---|
| Complexity | High | Low | Events are simple. |
| Evolution | Low | High | Product Owners (POs) currently report quality issues across releases. |
| Scaling Multi-tasking | Concurrency | Parallelism | The key to scaling workload. |
| Microservices Execution Coupling (Flow) | Centralized Orchestration[1] | Independent Choreography with Oversight | Flocks are choreographed, independent actors. What if the orchestrator goes down? More dependency. |
| BPM Alignment | Low (no clear model) | High | |
| Resilience@Scale | Low | High | Today's systems do not solve "First things First": Resilience@Scale. Foundational problems of lean scaling, host isolation, host failure, network partitioning, and others are not solved by the platform "out-of-the-box". |
| Visibility/Monitoring | Extra: add monitoring traffic, infrastructure, and integration code | Foundational Listeners | |
| Process Efficiency | Distributed Call Stacks[2], synchronous "blocking" by default, return to caller ("Pass Back") | Actor Context, async by design, notify/emit ("Pass Forward") | |
| Process Integrity | Currently, ingestion throttling involves discarding invocations to maintain transactional rates | Ingestion throttling involves backpressure on upstream event publishers to preserve all invocations | |
| Network Efficiency | WET[3]; requires client caching to be DRY, and that's extra | DRY[3] "Don't Repeat Yourself", Content-Centric Networking (CCN) | Events are communicated "once", while in CPU cache! |
| Theoretical Background | REST Dissertation[4] | Promise Theory[5], Configuration Management (CM)[6], Smalltalk MVC | "Events" have been around as long as Ethernet! (Xerox PARC) |
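The Process Integrity row's backpressure can be sketched with a bounded queue: when the ingest buffer fills, the upstream publisher blocks instead of the system discarding events. A minimal single-process Python sketch (the tiny buffer size and sentinel protocol are illustrative assumptions):

```python
import queue
import threading

ingest = queue.Queue(maxsize=2)  # bounded buffer = the backpressure point

def publisher(events):
    for e in events:
        ingest.put(e)     # blocks when full: upstream slows down,
                          # and no invocation is ever discarded
    ingest.put(None)      # sentinel marking end of stream

def actor(results):
    while (e := ingest.get()) is not None:
        results.append(e)  # every event preserved, in order

results = []
consumer = threading.Thread(target=actor, args=(results,))
consumer.start()
publisher(range(10))      # 10 events through a buffer of 2
consumer.join()
print(results)            # all 10 events survive, in order
```

Contrast this with rate-limiting by discard: here the slowdown propagates upstream through the blocked `put()`, so integrity is preserved at the cost of publisher latency.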
[1] YouTube: "Who Needs Orchestration? What You Want is Choreography."
[2] Lightbend/Akka: "How the Actor Model Meets the Needs of Modern, Distributed Systems"
[3] Wikipedia: Don't Repeat Yourself (DRY Principle)
[4] Roy Fielding (2000): "Architectural Styles and the Design of Network-based Software Architectures"