# The Pillars and Principles of Digital Transformation (DX)
>###### (c) _All Rights Reserved_
>
>[name=George Willis] [time=August 16, 2018]
---
> *Change will not come
> [color=#444499]
> if we wait for some other person,
> or if we wait for some other time.*
>
> *WE are the ones we’ve been waiting for.
> WE are the change that we seek.*
> -- Barack Obama
### Introduction
Talented IT Architects are about to lead IT through a [Quantum Leap](http://www.dictionary.com/browse/quantum--leap) in computing. Recent strides in "density" are but a prelude to the [economies of scale](https://www.investopedia.com/terms/e/economiesofscale.asp) we are about to experience. I'm talking implosion!
All "Innovative IT" firms are well into mature iterations of, and further investment in, their DX infrastructure. Enterprises are not even conceptually engaged! It's not a gap -- it's a canyon!
Pending advances in the **Software Factory** will empower Business SMEs to become "Power Users" atop **Citizen-as-a-Developer** platforms, leading to **Refactoring Legacy Applications onto Digital Transformation (DX) stacks**. The stack is the factory floor.
The largest obstacle to DX adoption for Enterprise IT is Legacy IT leadership. To transform operational infrastructure into a new paradigm, **you must first transform IT leadership.** You must empower the few people you have, or can get, who understand the cognitive shifts described herein, so that a unified vision can then be driven from the top down. It is impossible to lead with velocity from the bottom, and **without velocity you are evolving, not transforming.**
> Note: Legacy IT architects can get dismissive, agitated, and downright belligerent when they don't understand a new concept. I've had seasoned architects scoff at me when I told them to **"Stop fearing failure. Anticipate failure."**
>
>Yet Netflix turned such thoughts into Chaos Monkey and, arguably, the most massive and reliable streaming system on the planet, while other attempts at streaming video became less reliable with scale.
>
>Netflix began its transformation just as I have stated -- with [the vision of organizational transformation coming from Patty McCord in HR](https://hbr.org/2014/01/how-netflix-reinvented-hr).
Simply stated, those who wish to experience DX in the next 5 years have some hard decisions to make about who **can** lead that transformation. They need to understand that there are new skillset requirements for DX Leadership -- **primarily rooted in Data Science, Architectural Styles, Data Communications, Streaming and DevOps.** **Many will impede DX while proclaiming themselves champions.** It's the whole ["new wine in old wine skin"](http://biblehub.com/mark/2-22.htm) thing.
> No amount of Architectural talent can overrule existing Legacy IT Leadership. Placing visionary DX Architectural Leadership at the top of the IT Org chart and removing organizational impediments -- the path of Organizational Transformation (OT) -- is the only clear path to achieve Digital Transformation (DX). **It's the only way to cross the Conceptual Chasm in 5 years or less.**
Prediction: **Legacy entrapment** will apply such inertial backpressure that most enterprises will not reach the escape velocity required for market leadership in domains where **IT@Scale** is the determining factor.
### Legacy Entrapment
You might be ***legacy*** if:
* You recently consolidated Enterprise Source Control Management (SCM) to Subversion (SVN)
* You have no idea why Kafka and ZooKeeper keep getting talked about -- and now they're talking Bitcoin
* Your Hadoop/Data Science team is "that other team"
* You have mission-critical systems that can't keep up with the workload (**Quality@Scale** issues)
* You lack consensus/velocity on a 5-year IT roadmap -- or even a 3-year one
* There has been **no** [Copernican Revolution](https://en.wikipedia.org/wiki/Copernican_Revolution) into "Event Thinking". Have you heard what Gartner is saying?
### DX Principles
* The solution to complexity is... simplicity (Domain Driven Simplicity)
* The only thing worse than a bad architect is a bad architect who speaks well.
* Stephen Covey -- "First things first": All solutions must be built upon scalable, reliable, and secure infrastructure. IaaS must encompass linear scalability, security, availability, recovery, and stability. It must be "in the box". See the [Reactive Manifesto](https://www.reactivemanifesto.org/) for similar motives. "No loose cannons!"
* Einstein -- Theory of Relativity: Relative to "host local" CPU-to-RAM, CPU-to-NVRAM, and CPU-to-L1-Cache speeds, "internets" (or generically, networks) are slow. Employ DRY networking via non-volatile (NV) caching to eliminate or reduce redundant slow fetches.
* Einstein -- Theory of Relativity: Relative to executable code, application data is "big". Bring the near-static process to the data, not the other way around (a la Muhammad and the mountain). **All host-optimizing techniques lead to faster execution and reduced network traffic.** This principle is known as **Data Locality**.
* Van Jacobson -- Content-Centric Networking: Sessions are the last vestige of a legacy semantic model. Just as packets replaced circuits, so must we secure the content and not the transport, and "disseminate" data through fabrics on all available transports.
## Pillar: CQRS
CQRS, or "Command Query Responsibility Segregation", extends the Command-Query Separation (CQS) principle first introduced by Bertrand Meyer, father of the Eiffel language, author of a profound canon on [OOP](https://www.amazon.com/Object-Oriented-Software-Construction-Book-CD-ROM/dp/0136291554), and perhaps best known for **Design by Contract (DbC)** approaches to strong typing of semantic models.
The concept is simple. Every operation in a software system either changes data (mutates state) or does not. Operations that do not change data are known as "queries". Those that do are known as "commands". Separate all operations into one camp or the other, and use standardized practices/infrastructure for each.
It may not seem that profound or "worthy" of pillar status; but I assure you, as the other pillars drop into formation, you'll see the synergy.
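The split can be sketched in a few lines. Here is a minimal, illustrative Python sketch (all names, such as `UserStore`, are hypothetical, not from any framework):

```python
# Minimal CQRS sketch: commands mutate state and append to an event log;
# queries read from a separate, read-optimized view. Illustrative only.

class UserStore:
    def __init__(self):
        self.events = []        # append-only command log (write side)
        self.read_view = {}     # denormalized view (read side)

    # Command: mutates state and records exactly one event.
    def create_user(self, user_id, email):
        event = {"action": "create", "id": user_id, "email": email}
        self.events.append(event)
        self._apply(event)      # "replication" to the read side
        return event

    def _apply(self, event):
        self.read_view[event["id"]] = {"email": event["email"]}

    # Query: reads the view, never mutates anything.
    def get_user(self, user_id):
        return self.read_view.get(user_id)
```

In a real system the `_apply` step would be an asynchronous, multi-subscriber replication to separate query hosts, which is where eventual consistency enters the picture.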
### CQRS Summary
| Aspect | Without CQRS | CQRS: Commands | CQRS: Queries |
| ------ | ------------ | -------------- | ------------- |
| Scalability / Performance | Writes compete with redundant and untimely reads to create nondeterministic overloads | Write-optimized with single deterministic event publication; multi-subscriber replication to query hosts/operations | Read-optimized through parallel replication to elastic cache/data fabric |
| Consistency | Strong | Strong/eventual consistency | Strong/eventual consistency |
| Interaction | Request/Response | Event (state change) broadcast (see Raft and Paxos algorithms) | Request/Response |
| Polyglot Persistence | No | Unified distributed write log ensures multi-model consistency | Multiple parallel data model support, including aggregates, relational DBs, graph DBs, etc. |
| Core Tech | SQL DBs | Distributed event logs (a la Kafka) and blockchain | Event Sourcing |
### Queries
Traditionally, IT systems tend to be 90% query / 10% command, with the outliers being IoT, SCADA and ICS. We'll start with Query operations.
Query Optimization is all about partitioning and caching via [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) communications. Said another way, a piece of information needs to be reliably communicated "at most one time" in non-volatile caches.
Even if the cache experiences data loss due to capacity or failure, networks "DRY up" again quickly, and **the bulk of repeated transaction processing across the slowest communications possible is eliminated!**
Of course, the only thing better than optimizing network communication is avoiding it. **In the vast majority of instances, data can be effectively partitioned across host-local storage within a cluster.** In such **Distributed Data** clusters, executable processes are deployed onto each host, and **operate upon data locally.**
Distributed Data Streaming Architectures like **Spark** support [code mobility](https://www.ics.uci.edu/~fielding/pubs/dissertation/net_arch_styles.htm#sec_3_5), replacing "move the data to the process" with "move execution to the data's source". **For streaming applications that transform a common process model through a series of steps, each step in the process can achieve "core affinity" and reuse the shared process context in L1 cache!** We'll cover this more when we get to **command** operations.
Lastly, Queries follow the Request/Response pattern. A consumer requests information, and receives a response with the requested content, and a reference to the original request.
### Commands
Commands are **state change alerts** or **events**. The terms are synonymous.
## Pillar: Event-Driven Architecture (EDA)
Imagine a group of people strolling into their next meeting room, where they come upon a teammate (Jaime) they hadn't seen in a while.
Knowing that they all want an update, and seeing another meeting is about to start, somebody says "Jaime -- what's up?", and everyone listens while Jaime updates the team on the new and exciting work she has been doing since they last met. This broadcast update is the basis of Scrum "stand-up" meetings.
Strangely enough, computers are hosts isolated from one another because they don't share grey matter -- just like humans. To form teams, we communicate -- sometimes not very efficiently. When computers form teams, the conversation over **relatively slow** network communications should be optimally concise.
> **The theoretical limit of concise collaboration is for each and every host to broadcast its changes one time,** and then ensure the broadcast was saved by at least 2 other hosts for data protection (K-Safety = 3).
If you ever wondered about terms like ZooKeeper, Paxos, and Raft, you just stumbled across **Consensus Algorithms**, whose job it is to ensure that all computers share consistent data.
Now, the simplicity of events is what really confuses most seasoned IT people. It reminds me of the phrase "and because of the simpleness of the way, ~[...]~ there were many who perished." It can be hard to believe this fundamental concept can simplify IT even more than containers -- and containers bring a boatload of clarity!
In Event-Driven Architecture, an initial event might look as simple as:
```
{
  "head": {
    "topic": "newUser",
    "eventID": 1,
    "timestamp": "2018-08-16-22:22:22.84850343",
    "action": "create"
  },
  "body": {
    "email": "gwillis@brillian.it",
    "password": "<encrypted>",
    "OAuth_token": "<encrypted>"
  }
}
```
So let's say I put that in a file, and then broadcast the file into the "ether". Another way to look at this is simply that I have "emitted an event."
> Soapbox: Historically, Smalltalk (OOP), the Graphic User Interface (GUI), and Ethernet all came from Xerox Palo Alto Research Center (PARC). Smalltalk MVC utilized the Event-Driven Architectural style, and the concept of broadcasting onto the ether was alive and well.
>
> Today, ethernet remains a broadcast system, which is why collisions can occur from simultaneous broadcasts, but broadcasts are seldom used. Instead, repetitious point-to-point communications produce several messages when one would suffice. (For now)
The next step is to create some listeners on particular topics in the "ether". In this example, I can have one listener add the user to an email list, while another adds a new user entity into an LDAP DB, and a third emits a newUserProfile event for further processing. **Parallel activity from a single event!** This is the nature of Event Sourcing from Polyglot Persistence.
For those still lost, perhaps this [article](https://thenewstack.io/event-driven-architecture-wave-future/) will help.
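The newUser fan-out described above can be sketched as a toy in-process event bus. This is illustrative only -- a real system would use a broker like Kafka, and every name here is hypothetical:

```python
# Toy event bus: listeners subscribe to a topic with a filter and a task.
# The three subscribers mirror the newUser fan-out described above.
from collections import defaultdict

subscribers = defaultdict(list)   # topic -> [(filter, task)]
emitted = []                      # all emitted events, for visibility

def subscribe(topic, task, event_filter=lambda e: True):
    subscribers[topic].append((event_filter, task))

def emit(topic, event):
    emitted.append((topic, event))
    for event_filter, task in subscribers[topic]:
        if event_filter(event):   # implicit invocation: the consumer decides
            task(event)

mailing_list, ldap_db = [], {}

# Three independent listeners react in parallel to one event.
subscribe("newUser", lambda e: mailing_list.append(e["email"]))
subscribe("newUser", lambda e: ldap_db.update({e["email"]: e}))
subscribe("newUser", lambda e: emit("newUserProfile", {"email": e["email"]}))
```

Emitting a single `newUser` event drives all three tasks, and the third listener continues the choreography by emitting a follow-on event.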
Topics give us a simple way to partition broadcasts into pools of publishers and subscribers -- a VLAN of collaborators. The listener has two parts: the filter and the task.
> Topics can be used to create taxonomies and support "Bounded Contexts". They can be simple tags.
>
> ### Muy Importante!
> Since topics get events...
> and **events form into process streams** (that's new, but hopefully intuitive)...
> and process streams can be envisioned as
> 1) a series of somewhat repetitive context being passed as events along a "pipeline" (DAG for the PhDs) of executable tasks,\
> OR
> 2) a) **events are written to a common blackboard known as the "Process Context", and that a series of "data transformations" *stream* across the small, shared, L1 cacheable blackboard**; and
>
> b) that an appropriate non-preemptive OS kernel scheduler (or preemptive SLA) could allow microprocesses to hold the timeslice long enough to exploit L1 process cache residency before the OS switches timeslices and scrubs the context. (That last part is definitely Computer Engineering Masters level, so don't sweat it if you don't get it. Just looking to make fast even faster.)
The filter specifies which events to actually consume. **This is where binding occurs. In classic invocation, the caller specifies the target. In Event-Driven Architecture, the event consumers decide if they will participate in the flow.** This is known as the **Hollywood Principle** -- "Don't call us. We'll call you". It is also termed "Implicit Invocation", because the target is not explicitly declared, but rather the next action is implied by an "emitted" event.
**This is really the "secret ingredient" in EDA.** If you've heard architects discuss **orchestration versus choreography**, it's the same discussion -- choreography relies on each collaborator to perform a "microprocess" or task, with the ordering of tasks emerging from the event flow rather than from a central orchestrator. The advantages are:
* the end of recursive call stacks sucking up resources
* the end to synchronous invocation blocking
* a change from centralized orchestration to managed choreography
* the ease of using multi-core CPUs through parallelism instead of concurrency.
* **event monitoring provides Day 1 visibility and auditing of all process execution!**
Filters are best specified declaratively, since filtering is, at heart, (computer) language-agnostic pattern matching, and it's easy enough to express patterns declaratively.
The "task" part of an event listener can take on many forms. It could be a pool manager that delegates work into a worker pool. Similarly, it could be a dispatcher that uses an API to invoke a 3rd-party service, providing integration in the REST API architectural style. It could just do something and emit a new message. **It is polyglot programming by design, since an executable in any language can be connected to a topic.**
Lastly, the approach is governed by process model monitoring that expects certain event emissions for expected process closure. Hung processes are detected by elapsed interval, and recovered or triaged.
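Elapsed-interval detection of hung processes can be sketched as a toy watchdog (all names and structures here are illustrative, not from any monitoring product):

```python
# Watchdog sketch: a process is "hung" if its closing event has not
# arrived by its deadline. Deadlines are tracked per process instance.
import time

open_processes = {}   # process_id -> deadline (epoch seconds)

def process_started(pid, timeout_s, now=None):
    """Record the deadline by which a closing event must arrive."""
    now = time.time() if now is None else now
    open_processes[pid] = now + timeout_s

def process_closed(pid):
    """The expected closing event arrived; stop watching this process."""
    open_processes.pop(pid, None)

def hung_processes(now=None):
    """Processes whose closing event is overdue, ready to triage/recover."""
    now = time.time() if now is None else now
    return [pid for pid, deadline in open_processes.items() if now > deadline]
```

The `now` parameter exists only to make the sketch deterministic; a real monitor would simply use the clock.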
## Pillar: Shared Nothing and Resilient Distributed Datasets
### Database Models
Centralized SQL databases are great for analytics and reports -- places where a pre-engineered result set may not fit the bill.
But for user interfaces, the primary interface for data I/O in many applications, it is wasteful to bust a form's "tree" data structure into a series of referential table CRUD operations (writes). When you redisplay the form, you take the hit a second time to reconstitute the form back into the DOM tree.
The UI uses trees! Leave the forest alone already! Store it in a tree database. We call them key/value stores, where the key is an ID and the value is usually surfaced as JSON. There are lots of good ones -- pick one. Gartner refers to them as **Operational DBs**.
Pick the right DB model for the task. Trees for CRUD UI. Relational for analytics. Graph DBs for social networks and geospatial semantics. Of course, **it helps to have some really sharp Data Scientists around to guide such persistence choices.**
### Scale
Databases must scale, because data keeps growing. Not in all domains, but in mission critical ones like time-series, transactional applications, etc.
Database scaling is a function of the CAP Theorem. I really can't go into this again without vomiting, so I'll just say that all architects should consider this foundational knowledge, and should be as nauseated as I.
Simply put, to scale outside a host, you network. Networks can "disconnect" and strand updates. Therefore, in the face of partitions, you must choose between serving up stale data for "availability" or locking to ensure "ACID" transactions. Amazon really needed to have shopping carts available, so they and others leaned into "Eventual Consistency" high-availability distributed databases to handle partitions.
The approach was rooted in grid computing, where the goal was to just add more nodes into a grid to scale storage and I/O, and to ensure availability through overwhelming numbers. "Shared nothing" was the battle cry, and it worked.
It worked so well it spawned an entire IT specialty known as BigData or Data Science. You see, the only way to deal with large amounts of data was to be lean at every basic operation you were about to repeat a bazillion times; to increase capacity by adding new hosts, since vertical scaling had maxed out; and to recover from the loss of nodes.
It wasn't long before we discovered that since we were dividing the data across a bunch of hosts, it made sense to partition (shard) the data so that one could avoid distributed queries caused by entity relationships better consolidated within a single query instance, or deliberately divide the data into dispersed chunks to perform distributed map/reduce (scatter/gather) "divide and conquer" processing. Another partitioning scheme optimizes "edge computing", where nodes are deployed near users. These extremes of partitioning should again highlight the need for data architecture, and will be further discussed in **Locality**.
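A minimal hash-sharding sketch, assuming a fixed node list (illustrative only; real systems use consistent hashing or range partitioning to ease rebalancing):

```python
# Sharding sketch: a stable hash maps each key to one of N nodes, so
# related rows can be co-located and a query routed to a single host.
import hashlib

NODES = ["node-0", "node-1", "node-2"]

def shard_for(key, nodes=NODES):
    """Deterministically map a key to the node that owns its shard."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]
```

Keys that share a prefix (say, everything for one customer) can be hashed on that prefix alone, keeping related entities on one node and avoiding distributed joins.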
### Resilient Distributed Datasets
When you think about data protection through copies of data, you think about replication to sites without interdependency (like same power grid). That's safe.
But you can't keep a copy of everything on every server if the dataset is larger than a single server can host! Nor can you do it if the combined datasets are larger than a single server can host. You exceed available capacity!
Therefore, you must partition the data onto the hosts -- a little here, and a little there. But wait, you also have to replicate the data at least 3 times to ensure data protection.
So what if, every time we wanted to store data, the infrastructure just took care of both concerns, partitioning fairly and ensuring replicas? We might call such a component a **Resilient Distributed Dataset (RDD)**. At least, that's what Spark calls them.
Spark offers linearly scalable (highly efficient) HADR capabilities out-of-the-box. When you add cluster management and HADR infrastructure to a Streaming API that distributes data and then runs a simple SQL-syntax data transformation process with "Data Locality" optimization, you begin to understand why those who understand are very excited about this stuff.
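To make the partition-and-replicate idea concrete, here is a toy, illustrative placement function. It is not Spark's actual implementation, which adds lineage, lazy evaluation, and scheduling on top:

```python
# Toy "RDD" placement: partition a dataset across hosts round-robin and
# replicate each record 3 times (K-Safety = 3) for resilience.
def make_rdd(data, hosts, replicas=3):
    n = len(hosts)
    placement = {h: [] for h in hosts}
    for i, record in enumerate(data):
        primary = i % n
        for r in range(replicas):                  # primary + 2 copies
            placement[hosts[(primary + r) % n]].append(record)
    return placement
```

Any single host can fail and every record still survives on two others, while reads and processing can be spread across all hosts holding a copy.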
## Pillar: Multi-tenancy Deployment via Containers
Containers are the newest form of encapsulation -- encapsulation at the process level. A container allows many different processes to live together on the same host and share only the kernel -- like a high-rise condo that shares a lobby. We call this feature "multi-tenancy".
Because only the kernel is shared, all dependencies must be fulfilled inside the container. This makes the container independent of anything but a compatible kernel, and therefore highly portable.
The specification of a container is compact and universal. It's compact because filespace and other resources are not "preallocated", but rather constrained to a quota. This optimizes resource utilization of RAM, CPU, and filesystems. Filesystems may additionally be layered to remove duplicate storage of referenced components (but remember: container instances each allocate RAM for components per container).
Containers are universal because of the OpenContainer API standard around container images. The result of a build process is a Docker image. There are several ways to build a Docker image, but it's pretty clear that it's all about:
1) Starting with a secure, hardened baseline container image that supports both ingress and egress least privileges (not out of the box), and
2) Creating Phoenix images that can be rebuilt should the underlying images change. Yes, Configuration Managers: image layering requires transitive dependency operations.
3) Keeping images lean: if two container images are built from the same source and configuration but produce different-sized images, the smaller is the one that did not get extra "creation-time" components embedded by accident. (Yes, there are ways to do this wrong and make "fat" containers.)
Because containers are compact, they typically deploy rather quickly (under 5 seconds if staged), especially in microservice architectures. This enables several possibilities:
* Elastic scaling via additional deployment of container instances (process pools)
* Recovery via elastic services
* Automatic redeployment of containers to a more optimal location, such as within the same host as associated container runtimes to eliminate network traffic, or local to the data needed for processing. We will term such concerns inter-process locality and data locality. Deployment becomes like garbage collection for memory management -- an infrastructure service.
Container runtimes can be hosted on a bare-metal OS or a hypervisor. One of the original reasons Google started cgroups (a component of containers) was to avoid the roughly 5% hypervisor processing cost. I see no need to pay in both performance and licensing to run on virtual machines. Enterprise mandates to do so allow "appliance" vendors to sidestep those same mandates and gain cost and performance advantages by not including a hypervisor. It is an edge not merited by technology, only bureaucracy.
### OpenContainer APIs
There are two notable interfaces for containers:
1. The Container Network Interface (CNI), and
2. The Container Storage Interface (CSI)
I'll keep it simple -- hook up network or storage adapters through the APIs. It's an Adapter pattern, so you can change to different network or storage implementations later without impacting your code. (APIs are another form of encapsulation -- like containers.)
### ISaaC -- Commodity Container Hosting
If you think that all this portability and standardization means you can run your stuff anywhere you want for as cheap as you can -- you're essentially correct.
Sure, SLAs of hosting providers will vary, but you can usually scale up or down, and find a resource rental agreement that fits, and you are hosted in the cloud. Given SLA compliance, it's all about elastic scaling costs.
This turns basic compute resources into commodities. Now, in case you just came out of a bomb shelter, I'll remind you that Amazon is practically giving storage away. They wrap their packages in tape carrying free storage offers for pictures, songs, files -- you name it. And Google is doing the same. It's so cheap you could buy an "Unlimited Storage" subscription from Amazon for under $40 at Christmastime. They just don't care how much any more.
Of course, CPUs were expensive -- unless you start thinking about ARM or Atom processors, where cost per core is under $10. Oh, they run slower, but you get more cores, and there is another locality item known as the "CPU-to-Memory Performance Gap": no matter how we try to scheme around it, RAM is roughly 4 times slower than the CPU -- and I'm talking L1 Cache speeds, not DRAM. Since thinking faster than you can remember is impossible, one might postulate that four processors whose speed is closer to that of RAM could be better than one that is 4 times faster and 4 times as costly.
There are several imminent product offerings from ARM datacenter vendors who are placing 24 to 48 cores onto the same L3 backplane. [Red Hat](http://www.datacenterknowledge.com/hardware/red-hat-bets-data-centers-are-ready-arm-servers) has recently taken an interest.
Competition from ARM manufacturers will begin to exert even more commoditization of CPUs.
Then there is RAM. The memory hierarchy got a big boost when SSDs evolved into NVRAM and gained better I/O performance. Like magnetic storage, NVRAM is non-volatile. Like RAM, it has impressive random I/O performance, but it is cheaper and can survive loss of power. It fills a really sweet niche, and vendors like Aerospike have bottled the goodness.
> Bottom line: Commodity hardware is cheap and getting cheaper, and hosting selection will come to be known as "IaaS-as-a-Commodity", or ISaaC. It's a cost-effectiveness effort.
## Pillar: Locality --- A Journey into Space and Time
Yes, I love episode one of Cosmos. But it actually fits.
I've sprinkled breadcrumbs on this topic throughout this document. If it seems confusing, you probably aren't alone. Once you take the trip from REST to EDA, it's hard to have the ground shift a second time. If you don't get this pillar, relax. Most will interface with an Event Streaming API and never care about what goes on under the hood. This is for those who want to know about the car.
As I stated previously, one conceptualization of streaming has event messages moving toward "listeners" ("Actors", in historical parlance). In a sense, the data moves toward the processing.
This is not new, since most processes are statically deployed on servers within firewalled zones, and request that data be sent to their location. This model of data processing pulls the data to the process.
**But most processes are really small executables, especially when compared to a typical application database. So instead of bringing the mountain (of data) to Muhammad...?**
What if the data was static, and we brought the process local to it? We could distribute the data amongst a cluster of nodes, and then, based on which node had the data shard we wanted to use, trigger execution on that host -- **the one with the data already there!**
That's how you flip the EDA under the hood. You huddle the process around the process data, and represent events as different states of a shared memory context.
When you think about it, the pieces fall into place. Code seldom changes. Containers can redeploy processes quickly and compactly. Selective execution by host becomes the technical challenge, and that simply requires a partitioning scheme that associates data shards with nodes.
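A toy sketch of such a scheme, where execution is routed to the node already holding the shard (all classes and names here are illustrative):

```python
# Code mobility sketch: instead of pulling a shard over the network,
# run the (small) function on the node that already stores the data.
class Node:
    def __init__(self, name):
        self.name, self.shards = name, {}

    def run_local(self, shard_key, fn):
        return fn(self.shards[shard_key])   # process moves, data stays put

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def owner(self, shard_key):
        """The partitioning scheme: shard key -> owning node."""
        return self.nodes[hash(shard_key) % len(self.nodes)]

    def put(self, shard_key, data):
        self.owner(shard_key).shards[shard_key] = data

    def execute(self, shard_key, fn):
        """Route execution to the node that already has the data."""
        return self.owner(shard_key).run_local(shard_key, fn)
```

Only the function (and its small result) crosses the network; the shard itself never does.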
If you are still lost, just keep rereading and singing in the shower until your "eureka" moment.
I've spoken a lot about spatial locality in reference to CPUs, L1 cache, RAM, storage, and networks, and the associated drops in latency and throughput (which vary). There also exists temporal locality -- the element of time proximity and scheduling.
While you have the process context in CPU cache, you want to let that process burn that core. At some point the process completes, or its timeslice expires (in preemptive multitasking OSes). If the timeslice expires, all guarantees about what the cache will look like when the timeslice comes back are null and void. If space was needed for another process, the cache got clobbered.
This thrashing of process context can be avoided through intelligent scheduling at the OS level to extend the timeslice and maximize temporal proximity. Custom schedulers already exist in Linux. If you ever rooted an Android phone, you are probably aware of some choices in process scheduling at the OS level.
## What's Gartner got to say about all this?
### EDA
> Event thinking is a **cultural change** that sets organizations ready for genuine digital business innovation. Application leaders engaged in digital business transformation **must** practice event thinking in IT and champion it with business leaders.
>
> #### Because the origination and processing of the notifications are decoupled, the parties do not have to be connected or necessarily enter into contracts. The sources of notifications and their consumers can come and go unannounced. Command-based systems, in comparison, have the requestors identify the requested action and the designated service and, therefore, require greater coupling of the parties
>
> >**The contrast between the deterministic command thinking and the event notification thinking is the contrast between the traditional and digital business.**
>
> >**A business organization's opportunities and its flexibility can scale further with event thinking.**
>
> ### Adoption Rate
>
>Many leading-edge technology projects are driven by event thinking in conceiving and implementing digital business innovations. IoT, real-time operations, context-aware decisions, even the early blockchain are all relying in part or in full on event communications. Software design using event-driven function architecture (for example, AWS Lambda, IBM Bluemix OpenWhisk) is one of the fastest growing development trends. Some evidence of this is the fast-growing popularity and adoption of new event-oriented technologies, such as Kafka, Storm, Spark Streaming or function PaaS (fPaaS), and the emergence, for the first time, of high-productivity tools for design of event-driven applications (Salesforce Platform Events, Vantiq). Still, the vast majority of organizations remain oriented to the long-practiced, command-driven (request/reply) model of application operations.
>
>### Strategic Planning Assumptions
>
> By 2020, event notifications will drive over 60% of business transactions in new digital business solutions.
>
> By 2022, the majority of business organizations will participate in event-driven digital business ecosystems.
>
> By 2020, event processing will be a top-three priority for the majority of CIOs at large organizations.
>
>By 2020, most leading application platform providers will include high-productivity tools for design of event-driven operations.
>
> ### Key Findings
>
> - Many digital business innovations require event awareness and depend on the kind of agility offered by event-driven systems.
> - Most organizations do not have a strategy for event awareness, or the culture of thinking of their business or IT as event-driven.
> - Conceiving, designing and managing event-driven interactions requires a new way of thinking and can be difficult, especially in the absence of productivity tools. Early initiatives may fail or disappoint if attempted without preparation and realistic assessment of available skills.
> - Most organizations have some technology for event-driven design, but lack the vision of event thinking; therefore, few apply it in a strategic manner or see it as a model for business innovation.
>
> [Gartner’s Innovation Insight for Event Thinking
](https://www.gartner.com/technology/media-products/newsletters/Vantiq/1-4KJP5XJ/gartner.html)
>Digital business moments, which are a combination of business events that reflect the discovery of notable states or state changes, will drive digital business. While a simple example would be the signal that a purchase order has been completed, as the IoT and other technologies emerge, complex events can be detected more quickly and analyzed in greater detail. Gartner suggests that enterprises should embrace “event thinking,” **given that by 2020, event-sourced, real-time situational awareness will be a required characteristic for 80% of digital business solutions, and 80% of new business ecosystems will require support for event processing.**
>
> [Gartner: Top 10 Strategic Technology Trends For 2018
](https://www.forbes.com/sites/peterhigh/2017/10/04/gartner-top-10-strategic-technology-trends-for-2018/3/#74d0ac37289b)
> [Gartner Webinar: Succeed in Digital Business with Event-Driven Computing](https://www.gartner.com/webinar/3845865?srcId=1-3931087981)
### Upcoming Pillar: The 6 Facets of a Software Application