---
tags: OpenEMIT
---
# {OpenEMIT|Briefing}
{%hackmd p6TgTK-8SlKQzxLCsSEH9g %}
## What problem are we trying to solve?
> #### **++Interoperability++** amongst Event-Driven, Change Data Capture, Distributed Log, Distributed Ledger, Distributed Storage, Local Storage, Consensus, Programming Languages, Operating Systems and Event Schemas.
*In 2017, Gartner declared that the biggest obstacle to **Event-Driven Architecture (EDA) adoption** was **["Event Thinking"](https://blogs.gartner.com/yefim_natis/2017/09/12/event-thinking-a-challenge-and-an-imperative/)**.*
**Today, this is no longer the biggest obstacle to EDA adoption.**
*Today, the EDA marketplace is flooded with offerings in Messaging, Pub/Sub, Function-as-a-Service, Serverless, Streaming, Change Data Capture, Distributed Log, Distributed Ledger, Distributed Storage, and Local Storage -- from Apache, Solace, IBM, AWS, Azure, Google, Salesforce, TIBCO, Confluent, Akka, etc. -- **++and they each have their own EDA APIs++!***
### Compounding the marketplace are these truths:
1. **Distributed Log and Distributed Ledger (DLT) are semantically the same,** with the differences being the complexity and use cases for authenticity, encryption, anonymity, and performance SLAs. Said another way -- ***they are non-functional differences.***
> ### The lifecycle of an event is "**embarrassingly simple**".
> 1. We **++subscribe++** to receive events
> 2. We safely **++store++** events
> 3. We **++publish++** or "EMIT" (broadcast) events
> 4. We **++ingest++** events, perform some **transformation**, and usually repeat the lifecycle from step 2 <p/>
>
> **It's just that simple.**
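
To make the four steps concrete, here is a minimal sketch in Python. The names used (`Event`, `EventNode`, and its methods) are illustrative assumptions only, not part of any published OpenEMIT or vendor API.

```python
# A minimal sketch of the four lifecycle steps above. Class and method names
# (Event, EventNode, subscribe, store, emit, ingest) are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Event:
    topic: str
    payload: dict


@dataclass
class EventNode:
    log: List[Event] = field(default_factory=list)                        # durable event store
    handlers: List[Callable[[Event], Event]] = field(default_factory=list)

    def subscribe(self, transform: Callable[[Event], Event]) -> None:
        """Step 1: register interest in incoming events."""
        self.handlers.append(transform)

    def store(self, event: Event) -> None:
        """Step 2: safely append the event before anything else happens."""
        self.log.append(event)

    def emit(self, event: Event) -> None:
        """Step 3: publish (broadcast) the event to every subscriber."""
        for transform in self.handlers:
            self.ingest(event, transform)

    def ingest(self, event: Event, transform: Callable[[Event], Event]) -> None:
        """Step 4: transform the event, then repeat the cycle from step 2."""
        self.store(transform(event))


# Usage: a subscriber enriches each order event and re-enters the lifecycle.
node = EventNode()
node.subscribe(lambda e: Event("order.enriched", {**e.payload, "status": "ok"}))
incoming = Event("order.created", {"id": 42})
node.store(incoming)   # step 2
node.emit(incoming)    # step 3 feeds step 4, which stores the derived event
```
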
2. **Change Data Capture (CDC) can be viewed as a database-centric perspective of EDA**, using distributed data as an event ledger, and simple inserts/appends and triggers as network-level Pub/Sub.
> The same event lifecycle can be imposed upon any local or distributed datastore, from KV stores on up. If it is a distributed datastore, then **publication** over the network is already built in, along with various SLAs around consistency, availability, performance, security, etc.
>
> ### Said another way, all the heavy-lifting around event machinery SLAs is "baked in".
> KV stores do little else than maximize these SLAs while storing the simplest of objects. They excel in this area.
>
>
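
As a concrete sketch of this point, the snippet below imposes the same event lifecycle on a plain in-memory key-value store: an insert/append is the "store" step, and a change trigger plays the role of network-level Pub/Sub. All names here are hypothetical; a real distributed KV store would supply the replication, consistency, and availability SLAs described above.

```python
# A sketch of CDC-style eventing over a plain key-value store. All names are
# illustrative; a real distributed KV store would add replication and its SLAs.
from typing import Callable, Dict, List


class KVEventLedger:
    def __init__(self) -> None:
        self._store: Dict[str, dict] = {}                       # the event ledger
        self._triggers: List[Callable[[str, dict], None]] = []  # CDC-style triggers

    def on_change(self, trigger: Callable[[str, dict], None]) -> None:
        """Subscribe: register a trigger fired on every insert/append."""
        self._triggers.append(trigger)

    def put(self, key: str, value: dict) -> None:
        """Store and publish: the append itself is the publication."""
        self._store[key] = value
        for trigger in self._triggers:
            trigger(key, value)          # the datastore "emits" the change


# Usage: a downstream consumer ingests each captured change.
ledger = KVEventLedger()
ledger.on_change(lambda k, v: print(f"change captured: {k} -> {v}"))
ledger.put("order/42", {"status": "created"})
```
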
### *The biggest obstacle to EDA adoption today is* ++lack of interoperability++
> Add up all the technology: every Event-Driven, Change Data Capture, Distributed Log, Distributed Ledger, Distributed Storage, Local Storage and Pub/Sub messaging vendor you can. They can all be used as **Infrastructure for EDA**.
>
> Now add up the number of different APIs needed to implement each one into a simple event-driven application. **This is simply absurd!**
>
> ### The current cost of migrating from one EDA infrastructure to another is refactoring at the API level. **++This migration cost creates "Vendor Lock-in"++.**
>
> **Even where minor interoperability exists, it doesn't cross the log/ledger offerings that are semantically the same, and it covers only the most popular integrations. You can also forget about CDC integration. This creates a ++barrier to entry++ for newer offerings and stifles innovation around using distributed datastores as EDA infrastructure.**
>
> Moreover, how can you evaluate EDA infrastructure SLAs around consistency, availability, performance, security, etc. without an interoperable runtime environment (adapters) and a unified test suite? **Isn't this why the [E-Commerce giant Alibaba](https://moneymorning.com/alibaba-vs-amazon/) founded the [OpenMessaging Benchmark](https://openmessaging.cloud/docs/benchmarks/) -- to enable cost/benefit analysis?**
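
One way to picture the missing interoperability layer is an adapter pattern: application code targets a single neutral interface, and each vendor's machinery sits behind its own adapter, so migrating infrastructure means swapping an adapter rather than refactoring the application. The sketch below is an illustrative assumption, not an existing OpenEMIT or vendor API.

```python
# A sketch of the adapter idea: one neutral interface, vendor details behind
# adapters. Every name here is hypothetical and for illustration only.
from abc import ABC, abstractmethod
from typing import Callable, Dict, List


class EventBackend(ABC):
    """A single neutral API for any EDA infrastructure (broker, log, ledger, KV store)."""

    @abstractmethod
    def publish(self, topic: str, payload: bytes) -> None: ...

    @abstractmethod
    def subscribe(self, topic: str, handler: Callable[[bytes], None]) -> None: ...


class InMemoryBackend(EventBackend):
    """Stand-in adapter; a broker, log, ledger, or CDC adapter would implement the same two methods."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[bytes], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[bytes], None]) -> None:
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, payload: bytes) -> None:
        for handler in self._handlers.get(topic, []):
            handler(payload)


# Application code depends only on EventBackend, never on a vendor SDK,
# so changing infrastructure never forces an application refactor.
def run(backend: EventBackend) -> None:
    backend.subscribe("orders", lambda p: print("ingested:", p.decode()))
    backend.publish("orders", b"order 42 created")


run(InMemoryBackend())
```

The same `run` function could then be exercised against any vendor's adapter, which is also what would make a unified benchmark and test suite practical.
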
### OpenEmit Business Value
#### Free Application Development from Infrastructure dependencies. Transform the very fabric of the Enterprise with... <br/><br/> **OpenEMIT -- Open Event Machinery for Ingesting Topics.**