Do you also work in a large organization that produces software? Perhaps you're familiar with the problems of automating deliveries between different organizations and ecosystems, or of knowing where in a pipeline a particular commit is right now, how it even got there, and where the pipeline bottlenecks are? You're in the right place.
During my roughly 20 years in the industry, software delivery has changed in numerous ways. In many cases we've moved from manual integration of software components by dedicated teams to highly automated pipelines that enable higher velocity, shorter cycles, quicker time to market, and higher quality. More recently, cybersecurity threats have raised the bar for supply chain security and for responsiveness to vulnerabilities.
When I started at Axis nine years ago, code was integrated by a central team via a ticket queue. Landing your code onto the master branch could take days. Now each developer is responsible for the integration, and it can take 30 minutes, depending almost entirely on how long the tests take.
These shifts haven't come without friction. Many engineering hours have been spent writing bespoke scripts that glue things together. This often ends in a situation where it's unclear how things work, APIs are poorly defined, and changes to the system are hard to test.
Off-the-shelf build automation software is often unhelpful, since the required automation depends heavily on the development and delivery processes, which aren't easily changed, never mind aligned throughout the organization. And aligning your processes to the current choice of third-party automation software probably isn't a great idea either.
One way to deal with this is to accept and embrace that processes and tools will change over time and differ between organizations, and that interoperability is the key. Ericsson realized this around 2012 and started developing a machine-to-machine language to express the myriad things going on during software delivery. Data blobs describing things that happen are usually called events, and Ericsson chose to call their event language, or protocol, Eiffel. Eiffel was eventually revised and open sourced in 2016, and now we're here at the 2024 Eiffel Summit.
One key aspect of Eiffel, and indeed of many event-based systems, is that the events are broadcast widely without a particular recipient in mind. This allows anyone to observe what's going on, and crucially also decouples senders from receivers. In other words, someone using an event to describe an occurrence of something in the real world should only be concerned with how to best describe, or model, that occurrence. Who is interested in the events, whether one consumer or many, is irrelevant to the sender. Put differently, this reverses the dependencies in the system; instead of the sender knowing about its recipients and making calls to them, it's the recipient that has a (loose) dependency on the sender.
That said, there is such a thing as too much decoupling. If the sender actually cares about whether the recipients pick up the events and process them successfully, decoupling can be an obstacle to observability for the sender.
To broadcast the events, a message broker like Kafka or RabbitMQ is typically used as a kind of middleman in the communication. Such a broker not only decouples the sender and the recipients in the sense that they're unaware of each other, it also decouples them in time: the recipients can process the events asynchronously. In most cases the recipients can react to events in real time, but they don't have to. The broker will collect the events in a queue and keep them around until the recipient is ready.
One practical use of this is to let different ecosystems, e.g. different organizations, announce the availability of new artifacts. These announcements can act as triggers for other ecosystems to include the new artifacts in complete products. The source of the artifacts doesn't have to know where and in what context they're used. This scenario is similar to what tools like Dependabot and Renovate do.
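To make this concrete, and peeking ahead at the event format we'll dissect in a moment, such an announcement could be an EiffelArtifactPublishedEvent along these lines. This is a hedged sketch: all ids, version numbers, and URIs below are invented for illustration.

    {
      "meta": {
        "id": "f6e3b0a2-6f0b-4a3a-9d3e-1c2d4e5f6a7b",
        "type": "EiffelArtifactPublishedEvent",
        "version": "3.0.0",
        "time": 1719388800000
      },
      "data": {
        "locations": [
          {
            "type": "ARTIFACTORY",
            "uri": "https://artifactory.example.com/releases/component/1.4.2/component-1.4.2.jar"
          }
        ]
      },
      "links": [
        {
          "type": "ARTIFACT",
          "target": "85a52800-8f7b-48a4-a7b9-e79ce6e98a3f"
        }
      ]
    }

A pipeline in a completely different organization can subscribe to this event type and use the location to fetch the artifact, without the publisher knowing who, if anyone, is listening.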
So what kind of things can Eiffel describe? What are the key events during software delivery? One way of reasoning about this is to look at the key activities that take place: building, testing, and deploying software, and the triggering of those activities.
Another way is to look at the entities involved: the source code, the artifacts built from it, and the environments the artifacts are deployed to.
Eiffel can describe, or model, all of these things. It uses a well-defined and fairly strict specification, or schema, that describes what the JSON objects for each type of event should look like.
So let's get specific: an Eiffel event is a JSON object with a specific declared type that indicates what has happened to an entity or activity. All such JSON objects conform to a schema in the Eiffel protocol specification. To allow the protocol to evolve, each event type comes in different versions that select the exact schema of an object. The event type and version are stored in the "meta" top-level key, along with some other general metadata about the event. That part of the event is the same for all event types of the same generation. The contents of the "data" key, on the other hand, are unique to each event type.
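As a sketch, here's what the artifact-creation event that the announcement above linked to could look like. Again, the ids and the purl in "data" are invented, and the exact set of required fields depends on the event type and version.

    {
      "meta": {
        "id": "85a52800-8f7b-48a4-a7b9-e79ce6e98a3f",
        "type": "EiffelArtifactCreatedEvent",
        "version": "3.0.0",
        "time": 1719388700000
      },
      "data": {
        "identity": "pkg:maven/com.example/component@1.4.2"
      },
      "links": []
    }

Note how "meta" carries only generic bookkeeping, while "data" carries the type-specific payload, in this case the artifact's identity.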
Notice how the activities and entities above can have different kinds of relationships. An artifact is built from a particular piece of source code. Tests are run on that artifact. Then the artifact is deployed to a test environment. This is something we want to capture in the events.
For this reason, Eiffel events are rarely entirely standalone and almost always link to at least one other event. In slides like these we represent these links as arrows, but in the actual JSON representation a link contains the id of the event being linked to, i.e. the event the arrow points to. But it's actually a bit more than the id; a link also has a type that describes what kind of link it is, i.e. what is the nature of the relationship between the events.
A test execution clearly tests an artifact, so that's a relationship that probably should exist all the time. Another type of relationship is the cause. Why did an event occur? While it could be a manual trigger by a human or an automated system, it could also be another event. For example, the same test execution could've been triggered by the creation of the artifact that's being tested. In other words, the test execution can have two different kinds of relationships to the artifact creation. Or, the test execution could've been caused by the completion of another test execution, or the deployment of the artifact to a particular environment.
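Sketching this in JSON, the links array of a test-case event could reference the artifact-creation event from before twice, once as the item under test (IUT) and once as the cause. As before, the ids and the test-case name are invented:

    {
      "meta": {
        "id": "c0a8d3b4-5e6f-4a1b-8c2d-3e4f5a6b7c8d",
        "type": "EiffelTestCaseTriggeredEvent",
        "version": "3.0.0",
        "time": 1719389000000
      },
      "data": {
        "testCase": {
          "id": "smoke-test"
        }
      },
      "links": [
        {
          "type": "IUT",
          "target": "85a52800-8f7b-48a4-a7b9-e79ce6e98a3f"
        },
        {
          "type": "CAUSE",
          "target": "85a52800-8f7b-48a4-a7b9-e79ce6e98a3f"
        }
      ]
    }

Had the trigger instead been the completion of another test execution, only the CAUSE target would change; the IUT link would still point at the artifact.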
This takes us to another point of Eiffel that we haven't really discussed so far, namely traceability. We can clearly use these broadcast events to trigger new actions in a pipeline, but the links also allow us to trace the causal chain that eventually led to an artifact being deployed in production.
This partially overlaps with what's in an SBOM or a build attestation.