Samuel M Smith

@SamuelMSmith

PhD Electrical and Computer Engineering, System Design, Automated Reasoning, Machine Learning, Autonomous Vehicle Systems, BlockChain, Decentralized Identity

Joined on Jul 8, 2019

  • Originally I solved this decentralized end-to-end non-interactive authorization problem as part of a proof-of-concept for a privacy-preserving lost-and-found registry and peer-to-peer messaging service. This proof-of-concept was implemented in Python in an open-source Apache2 project called Indigo-BluePea which may be found here. Indigo BluePea Interactive vs. Non-interactive Authentication Design Authentication mechanisms may broadly be grouped into two approaches: interactive and non-interactive. An interactive mechanism requires a set of requests and responses, or challenges with challenge-response replies, for secure authentication. Non-interactive approaches, on the other hand, pose unique problems because they do not allow a challenge-response handshake. A request is submitted that is self-authenticating without additional interaction. The main benefits of non-interactive authentication are scalability and path-independent end-to-end verifiability. These benefits become more important in decentralized applications that employ zero-trust architectures. (By zero-trust we mean never trust, always verify: every request is independently authenticated; there are no trusted pre-authenticated communication channels.) For non-interactive authentication of some request for access, the most accessible core mechanism is a non-repudiable digital signature of the request itself made with asymmetric key-pairs. The hard problem for asymmetric digital signatures is key management. The requester must manage private keys. Indeed, this problem of requiring the user to manage cryptographic keys was, at least historically, deemed too hard for users, which meant that only federated, token-based authentication mechanisms were acceptable. 
But given that it is now commonly accepted that users are able to manage private keys, which is a core assumption of KERI in general, the remaining problem for non-interactive authentication using non-repudiable digital signatures is simply replay-attack protection. Indeed, with KERI the hardest problem of key management, that is, determining current key state given rotations, is already solved. The closest authentication mechanism to what KERI enables is the FIDO2/WebAuthn standard. The major difference between FIDO2/WebAuthn and KERI is that there is no built-in automated verifiable key rotation mechanism in FIDO2/WebAuthn. FIDO2/WebAuthn consists of two ceremonies: a registration ceremony and then one or more authentication ceremonies. Upon creation of a key-pair, the user engages in a registration ceremony to register that key-pair with a host. This usually involves some MFA procedure that associates the entity controlling the key-pair with the public key from the host's perspective. Once registered, individual access may be obtained through an authentication ceremony that typically involves signing the access request with the registered private key. Unfortunately, FIDO2/WebAuthn has no in-stride verifiable key rotation mechanism. Should a user ever need to rotate keys, that user must start over with a new registration ceremony to register the new key-pair for that user entity. With KERI, by contrast, rotation happens automatically with a rotation event that is verified with the pre-rotated keys. So given one already has KERI-verified key state, using FIDO2/WebAuthn to authenticate requests would be going backwards.
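The replay-protection side of the non-interactive scheme described above can be sketched as follows. This is a minimal illustration, not the BluePea code: the header names, the 30-second window, and the stubbed `verify_signature` callable (which would really verify a non-repudiable Ed25519 signature against the requester's current key state) are all assumptions.

```python
from datetime import datetime, timedelta, timezone

# Assumed freshness tolerance for this sketch, not a KERI constant.
REPLAY_WINDOW = timedelta(seconds=30)

def accept_request(headers: dict, seen_signatures: set,
                   verify_signature=lambda h: True) -> bool:
    """Accept a self-authenticating request only if its signature verifies,
    its date-time stamp is fresh, and the exact signature is unseen."""
    if not verify_signature(headers):
        return False  # signature does not verify against key state
    stamp = datetime.fromisoformat(headers["Date"])
    now = datetime.now(timezone.utc)
    if abs(now - stamp) > REPLAY_WINDOW:
        return False  # stale timestamp: possible replay
    if headers["Signature"] in seen_signatures:
        return False  # exact duplicate within the window: definite replay
    seen_signatures.add(headers["Signature"])
    return True
```

Because the signature covers the request including its date-time, an attacker cannot refresh the timestamp without invalidating the signature, so the window plus the duplicate check bounds replays without any interactive handshake.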
  • In KERI we have three classes of uses for signatures, each with a different level of security and different security guarantees.

High: KEL-backed. Events are either in a KEL or anchored to a KEL. This means the signatures on the events in the KEL are strongly bound to the key state at the time the events are entered in the KEL. This provides the strongest guarantee of duplicity evidence so that any verifier is protected. The information is end-verifiable and any evidence of duplicity means do not trust. A key compromise of a stale key cannot result in an exploit because that would require forging an alternate version of the KEL, which would be exposed as duplicity. Key compromise of the current key state is recoverable with a rotation to the pre-rotated key(s) (single-sig or multi-sig), and pre-rotated keys are post-quantum proof. A compromise of the current key state is guaranteed to be detectable by the AID's controller because the compromise requires publication of a new KEL event on infrastructure (i.e., witnesses) controlled by the AID controller. This limits the exposure of a successful compromise of the current key state to the time it takes for the AID controller to observe the compromised publication and execute a recovery rotation. The ordering of events in the KEL is strictly verifiable because the KEL is a hash chain (blockchain). All events are end-verifiable. Any data anchored to these events is also end-verifiable. All these properties are guaranteed when data is anchored to a KEL, i.e., KEL-backed. Any information that needs to be end-verifiably authentic over time should be at this highest level of security. ACDCs have this level of security when anchored to a KEL, either directly or indirectly through a TEL that is itself anchored to a KEL.

Medium: BADA-RUN (described in the OOBI spec). This imposes monotonicity on the order of events using a tuple of date-time and key state. The latest event is the one with the latest date-time for the latest key state. This level of security is sufficient for discovery information because the worst-case attack on discovery information is a DDoS where nothing gets discovered. This is because what gets discovered in KERI must be end-verifiable (anchored to a KEL), so a malicious discovery (mal-discovery) is no different from a mis-discovery or a non-discovery. The mitigation for such a DDoS is to have redundant discovery sources. We use BADA-RUN for service endpoints as discovery mechanisms. We could, of course, anchor service endpoints to KELs and make them more secure, but we make a trade-off due to the dynamism of discovery mechanisms, so as not to bloat the KEL with discovery anchors. Because the worst case can be mitigated with redundant discovery, it's a viable trade-off.
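The monotonic acceptance rule above can be sketched as a comparison on (key-state, date-time) tuples. This is an illustration of the ordering logic only; the tuple fields and their representation are assumptions for the sketch, not the spec's wire format.

```python
from datetime import datetime

# BADA-style monotonic update rule: accept an offered record only if its
# (key-state sequence number, date-time) tuple is later than the held one.
def newer(held: tuple, offered: tuple) -> bool:
    """held and offered are (key_state_sn, iso_datetime_string) tuples."""
    held_sn, held_dt = held
    offered_sn, offered_dt = offered
    if offered_sn != held_sn:
        return offered_sn > held_sn   # a later key state always wins
    # same key state: latest date-time wins
    return datetime.fromisoformat(offered_dt) > datetime.fromisoformat(held_dt)
```

A replayed stale record compares as not-newer and is simply dropped, which is why the worst a replay attacker can achieve here is non-discovery, not mal-discovery.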
  • Chair: Samuel M. Smith. Co-Chair: Philip Feairheller. Meeting: bi-weekly starting on 2021-10-19 at 10 am EDT. Agenda: https://hackmd.io/-soUScAqQEaSw5MJ71899w (linked from GitHub but edited in HackMD). Zoom Meeting:
  • Vacuous discovery of IP resources such as service endpoints associated with a KERI AID requires an Out-Of-Band Introduction (OOBI) to associate a given URL with a given AID. The principal reason for this requirement is that KERI AIDs are pseudonymous and completely independent of internet and DNS addressing infrastructure. Thus an IP address or URL could be considered a type of Out-Of-Band Infrastructure (OOBI) for KERI. In this context an introduction is an association between a KERI AID and a URL that may include either an explicit IP address or a DNS name for its netloc. We call this a KERI OOBI (Out-Of-Band Introduction), which is a special case of Out-Of-Band Infrastructure (OOBI) with a shared acronym. For the sake of clarity, unless otherwise qualified, OOBI is used to mean this special case of an introduction and not the general case. Moreover, because IP infrastructure is not trusted by KERI, a KERI OOBI by itself is considered insecure with respect to KERI, and any OOBI must therefore be later proven and verified using a KERI BADA (Best Available Data Acceptance) mechanism. The principal use case for an OOBI is to jump-start the discovery of a service endpoint for a given AID. To reiterate, the OOBI by itself is not sufficient for discovery because the OOBI itself is insecure. The OOBI merely jump-starts authenticated discovery. Using IP and DNS infrastructure to introduce KERI AIDs, which are then securely attributed, allows KERI to leverage IP and DNS infrastructure for discovery. KERI therefore does not need its own dedicated discovery network; OOBIs as URLs will do. The simplest form of a KERI OOBI is a namespaced string, a tuple, a mapping, a structured message, or a structured attachment that contains both a KERI AID and a URL. The OOBI associates the URL with the AID. In tuple form this is abstractly (url, aid) and concretely ("http://8.8.5.6:8080/oobi", "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM")
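The (url, aid) tuple form above can be sketched in a few lines. The convention of carrying the AID as the last path segment of the OOBI URL is an assumption for this sketch, not a normative rule of the spec.

```python
from urllib.parse import urlparse

def parse_oobi_url(url: str) -> tuple:
    """Split an OOBI URL whose last path segment is the AID into the
    abstract (url, aid) tuple form. The association is insecure until
    proven via BADA against the AID's verified key state."""
    aid = urlparse(url).path.rstrip("/").split("/")[-1]
    return (url, aid)
```

Note that nothing about the tuple is trusted: the URL merely tells a verifier where to go fetch end-verifiable material for the AID.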
  • Why the AEID is Non-transferable. The keys in the key store are encrypted at rest (persistent storage, disk, etc.) using asymmetric encryption. A public encryption key may be stored on disk and loaded at boot time so that any new secrets are encrypted. In order to decrypt, however, the private decryption key is needed. This must only be provided at run time and must only be stored in memory. Without access to the private decryption key an attacker is not able to discover any encrypted secrets in the key store. Providing the private decryption key is both an authenticating and an authorizing action, as it unlocks the encrypted secrets that then may be used to sign. The AEID (Authentication/Authorization Encryption ID) is a non-transferable AID derived from an Ed25519 key-pair. The key store's asymmetric encryption/decryption key-pair is an X25519 key-pair derived from the Ed25519 key-pair. Consequently, providing the Ed25519 private key is tantamount to providing the X25519 decryption key. Recall that KERI pre-rotation does not prevent compromise of signing keys. It just enables the controller of an identifier to re-establish control over the identifier by revoking the compromised signing key and replacing it with a new one (the pre-rotated key). Damage done due to compromise before rotation recovery has occurred is not prevented. As mentioned above, the AEID is used to derive a decryption key to decrypt secrets in the keep (key store). Should that key become compromised, then the secrets in the key store may also become compromised. Doing a rotation to revoke the compromised decryption key does not repair the damage of the compromise of the secrets in the key store. That damage is unrecoverable, unrepairable. Consequently there is no advantage to using a transferable identifier that requires a KEL for the AEID. It just adds complexity for no increase in security. Protection from compromise in the first place is essential to protecting a key store of secrets. 
Consequently the mechanisms that provide the AEID at run time to unlock the secrets, and that protect the AEID, do not benefit from KERI pre-rotation. Major Variants. There are three major variants of the architecture and several minor variants of each major variant. The first major variant employs SKWA (Simple KERI for Web Auth) to mutually authenticate a web client (GUI) and a cloud-hosted web server run by the controller that also hosts, in the cloud, the key store of some set of KERI public (indirect mode) AIDs (autonomic identifiers). The web client controller's ID is denoted CCID. The web server's ID is denoted ACID.
  • GARs. Instructions from Phil. Root GARs Cloud Agents, each with a public unique URL: https://lead-root-gar.verifiablelei.org/index.html https://root-gar.verifiablelei.org/index.html
  • Strategic Technology Choices vis-a-vis the Linked Data (JSON-LD/RDF) End State. 2022/04/04 Version 1.2.8. Barriers to Adoption of Linked Data VCs. The purpose of this paper is to capture and convey to a broader audience my increasingly worrisome concerns about the adoption path for Verifiable Credentials (VCs). My concerns began with the security limitations of VCs that use Linked Data (otherwise known as JSON-LD/RDF) and have since extended to the semantic inference limitations of Linked Data. My concerns may be expressed succinctly as: the VC standard appears to be an adoption vector for Linked Data, not the other way around. My overriding interest is that the concept of a VC as a securely attributable statement is a very powerful and attractive one and therefore should be widely adopted. We should therefore be picking the technologies that best support broad VC adoption, not the other way around. A VC is a member of a more general class of data that may be described as securely attributed data, securely provenanced data, or more simply authentic data. My overriding interest for authentic data is to reach the end state of universal adoption. This would finally fix the broken internet. Therefore, the underlying technology and adoption strategy must be compatible with that end state. We may never reach that end state, but if not, it must not be due to any technical limitation or a bad adoption strategy. In my view the primary role of any identifier system is to provide secure attribution. Another way of stating this is that the identifier system solves the secure attribution problem. The core of such a solution is the establishment of control authority over an identifier, where such control authority consists of a set of asymmetric (public, private) key-pairs for non-repudiable digital signature(s) (PKI). 
Other cryptographic operations may serve a similar role, but the simplest, most universal method of establishing provable control authority is an asymmetric-key-pair-based digital signature on some statement attributed to an identifier. Only the controller of the private key(s) that control an identifier may make a verifiable, non-repudiable commitment to some statement via such a signature. Any second party may verify that signature given the public key(s). The role of any identifier system is to securely map identifiers to the authoritative public key(s) in order to cryptographically verify the signatures and thereby make secure attribution. To elaborate, secure attribution via digital signatures is a cryptographic way of establishing the authenticity of any signed statement.
  • XOR and OTP for Blinding Digests. Information-Theoretic Security and Perfect Security. The highest level of cryptographic security with respect to a cryptographic secret (seed, salt, or private key) is called information-theoretic security [1]. A cryptosystem that has this level of security cannot be broken algorithmically even if the adversary has nearly unlimited computing power, including quantum computing. It must be broken by brute force, if at all. Brute force means that in order to guarantee success the adversary must search every combination of key or seed. A special case of information-theoretic security is called perfect security [1]. Perfect security means that the cipher text provides no information about the key. There are two well-known cryptosystems that exhibit perfect security. The first is the one-time-pad (OTP) or Vernam cipher [2][3]; the other is secret splitting [4], a type of secret sharing [5] that uses the same technique as a one-time-pad. Cryptographic Strength. For cryptosystems with perfect security, the critical design parameter is the number of bits of entropy needed to resist any practical brute-force attack. In other words, when a large random or pseudo-random number from a cryptographic-strength pseudo-random number generator (CSPRNG) [6], expressed as a string of characters, is used as a seed or private key in a cryptosystem with perfect security, the critical design parameter is determined by the amount of random entropy in that string needed to withstand a brute-force attack. Any subsequent cryptographic operations must preserve that minimum level of cryptographic strength. In information theory [7] the entropy of a message or string of characters is measured in bits. Another way of saying this is that the degree of randomness of a string of characters can be measured by the number of bits of entropy in that string. 
Assuming conventional non-quantum computers, the conventional wisdom is that, for systems with information-theoretic or perfect security, the seed/key needs on the order of 128 bits (16 bytes, 32 hex characters) of entropy to practically withstand any brute-force attack. A cryptographic-quality random or pseudo-random number expressed as a string of characters will have essentially as many bits of entropy as the number of bits in the number. For other cryptosystems, such as digital signatures, that do not have perfect security, the size of the seed/key may need to be much larger than 128 bits in order to maintain 128 bits of cryptographic strength. An N-bit-long base-2 random number has 2^N different possible values. Given that with perfect security no other information is available to an attacker, the attacker may need to try every possible value before finding the correct one. Thus the number of attempts that the attacker would have to make may be as much as 2^N − 1. Given available computing power, one can easily show that 128 is a large enough N to make a brute-force attack computationally infeasible. Let's suppose that the adversary has access to supercomputers. Current supercomputers can perform on the order of one quadrillion operations per second. Individual CPU cores can only perform about 4 billion operations per second, but a supercomputer will employ many cores in parallel. A quadrillion is approximately 2^50 = 1,125,899,906,842,624. Suppose somehow an adversary had control over one million (2^20 = 1,048,576) supercomputers which could be employed in parallel to mount a brute-force attack. The adversary could then try 2^50 * 2^20 = 2^70 values per second (assuming, very conservatively, that each try takes only one operation).
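The brute-force arithmetic above, together with the XOR blinding that a one-time-pad uses, can be worked through concretely. The digest function (SHA3-256) is a stand-in chosen for illustration; the entropy figures come from the text.

```python
import hashlib
import secrets

# Brute-force infeasibility: 2^70 tries per second against a 128-bit
# secret still needs 2^58 seconds in the worst case.
tries_per_second = 2 ** 70
seconds_needed = (2 ** 128) / tries_per_second          # = 2^58 seconds
years_needed = seconds_needed / (365.25 * 24 * 3600)    # billions of years

# XOR blinding of a digest (the one-time-pad construction): as long as the
# blind is uniformly random, used once, and kept secret, the blinded value
# provides no information about the digest, i.e. perfect security.
digest = hashlib.sha3_256(b"some attribute block").digest()  # 32 bytes
blind = secrets.token_bytes(len(digest))                     # one-time pad
blinded = bytes(d ^ b for d, b in zip(digest, blind))
unblinded = bytes(x ^ b for x, b in zip(blinded, blind))     # XOR is its own inverse
```

Secret splitting uses the same XOR: the blind and the blinded value are two shares, each individually indistinguishable from random, that together reconstruct the digest.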
  • See also the discussion of field normalization here: https://hackmd.io/XfdKjT3ZQDi1M6Iv3iYhbg Data Management in KERI. KERI is about managing verifiable data structures. When data is part of a verifiable data structure we can make strong security guarantees about that data, derived from the verifiability guarantees of the data structure itself. The principal verifiable data structure in KERI is a KEL or KERL. Data may be directly embedded in a KEL or it may be anchored to a KEL using a cryptographic digest or SAID (self-addressing identifier). A SAID is a self-referential digest used as an identifier. Given that the cryptographic strength is sufficient, any digest-anchored data has the same verifiable security guarantees as the embedded data from which it was derived. A SAD (Self-Addressed Data) item is a serialization of a data item that includes its SAID. A commitment to the SAID of a SAD is cryptographically equivalent to a commitment to the SAD itself. The KERI protocol employs several types of cryptographic commitments to serialized data. Typically a cryptographic commitment is a non-repudiable digital signature on that serialized data. These are labeled commitment Types 1-5 in the following list:
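The self-referential SAID construction can be sketched as follows. Actual KERI SAIDs use CESR-coded Blake3 digests over the real serialization; SHA-256, plain base64url, and compact JSON here are illustrative stand-ins that preserve the essential trick: digest over a fixed-length placeholder, then substitute.

```python
import base64
import hashlib
import json

def saidify(data: dict) -> dict:
    """Embed a self-referential digest in the 'd' field of a mapping.
    The serialization is digested with 'd' set to a placeholder of the
    same length as the final digest string, then the real SAID replaces
    the placeholder, so verifiers can repeat the computation exactly."""
    said_len = 43  # sha256 digest as base64url with pad stripped is 43 chars
    data = dict(data, d="#" * said_len)
    raw = json.dumps(data, separators=(",", ":")).encode()
    said = base64.urlsafe_b64encode(hashlib.sha256(raw).digest()).decode().rstrip("=")
    data["d"] = said
    return data

def verify_said(data: dict) -> bool:
    """Recompute the SAID over a copy and compare with the embedded one."""
    return saidify(dict(data))["d"] == data["d"]
```

Because the digest covers the whole serialization, a commitment (signature) to just the 43-character SAID is cryptographically equivalent to a commitment to the full SAD.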
  • Because adding the d field SAID to every key event message type will break all the explicit test vectors, it's no additional pain to normalize the field ordering across all message types and seals. Originally all messages included an i field, but that is no longer true. So the changed field ordering is to put the fields that are common to all message types first, in order, followed by the fields that are not common. The common fields are v, t, d. The newly revised messages and seals are shown below. Field Labels, SAIDs, and KERI Label Convention Normalization. Because the order of appearance of fields is enforced, where a label appears can add context to help clarify its meaning. v, for version string, when it appears must be the first field
  • Derivation Codes Overview. KERI derivation codes serve several purposes and provide several features. One main purpose is to allow compact encoding of cryptographic material in the text domain. By text domain we mean any representation that is restricted to printable/viewable ASCII text characters. Cryptographic material is largely composed of long strings of pseudo-random numbers. KERI uses the IETF RFC 4648 Base64URL standard to encode cryptographic material, as strings of binary bytes, into a text-domain representation [1]. In order to process a given cryptographic material item, its derivation from other cryptographic material and the cryptographic suite of operations that govern that derivation also need to be known. Typically this additional cryptographic information may be provided via some data structure. For compactness KERI encodes the derivation information, which includes the cryptographic suite, into a lookup table of codes. To keep the codes short, only the essential information is encoded in the table. KERI also uses the context in which cryptographic material appears to fully characterize its derivation. This also contributes to compactness. These codes are also represented in the text domain with characters drawn from the Base64 set, namely [A-Z, a-z, 0-9, -, _]. In addition the "=" character is used as a pad character to ensure lossless round-tripping of concatenated binary items to Base64 and back. These 64 characters map to the values [0-63]. The Base64 derivation code is prepended to a given Base64-converted cryptographic material item to produce an extremely compact text-domain representation of that item. When a Base64 derivation code is prepended to a Base64-encoded cryptographic material item, the resultant string of characters is called fully qualified Base64, or qualified Base64, and may be labeled "qb64" for short. We call this a fully qualified cryptographic primitive. 

This fully qualified compact string (a primitive) may then be used in any text-domain representation, including, especially, name spaces. One other less obvious but important property of KERI's encoding is that all qualified cryptographic material items satisfy what we call lossless composition via concatenation, or composability for short. KERI is designed for high-performance asynchronous data streaming and event-sourcing applications. The sheer volume of cryptographic material, primarily signatures, in a streaming application demands a streamable protocol. Many of KERI's important use cases benefit specifically from streaming in the text domain, not merely the binary domain. Composability allows text-domain streams of primitives, or portions of streams (streamlets), to be converted as a whole to the binary domain and back again without loss. The attached KID0001 Comment document goes into some length on what composability means. But simply, each 8-bit Base64 character represents only 6 bits of information, while each binary byte represents a full 8 bits of information. When converting streams made up of concatenated primitives back and forth between the text and binary domains, the converted results will not align on byte or character boundaries at the end of each primitive unless the primitives themselves are integer multiples of 24 bits of information. Twenty-four is the least common multiple of six and eight. It takes 3 binary bytes to represent 24 bits of information and 4 Base64 characters to represent 24 bits of information. Composability via concatenation is guaranteed if every primitive is an integer multiple of four characters in the text domain and an integer multiple of three bytes in the binary domain. KERI's derivation code table is designed to satisfy this composability constraint. In other words, fully qualified KERI cryptographic primitives are composable via concatenation in both the text (Base64) and binary domains. 

This provides the ability to create composable streaming protocols that use KERI's coding table and that are equally at home in the text and binary domains. Without composability, some other means of framing, delimiting, or enveloping cryptographic material items is needed for lossless conversion of a group of cryptographic material items, as a group, between the text and binary domains. Mere concatenation of a group, which is the most compact, is not supported without composable primitives. The length of a composable Base64 derivation code is a function of the length of the converted cryptographic material. The length of the derivation code plus material must be a multiple of four characters. Standard Base64 conversions add pad characters to ensure proper alignment of any converted item [1]. The number of pad characters is a function of the length of the item to be converted. If the item's length in bytes is a multiple of three, then the converted item's length will be a multiple of four Base64 characters and therefore will not need any pad characters. Any binary item whose length mod 3 is 1 will need 2 pad characters to make its converted length a multiple of four characters. Any binary item whose length mod 3 is 2 will need 1 pad character to make its converted length a multiple of four characters. (See the KID0001 Commentary document for details.) Consequently the most compact encoding is to replace the pad characters with derivation codes (prepended, not appended). This gives either 1- or 2-character codes. When there are no pad characters, the code length is 4 characters. As a result the KERI code table consists primarily of 1, 2, and 4 character codes. Longer codes may be used as long as they satisfy the padding constraint. Thus for converted cryptographic material with 1 pad character, the allowed code lengths are 1, 5, 9, ... characters. For converted cryptographic material with 2 pad characters, the allowed code lengths are 2, 6, 10, ... characters. For converted cryptographic material with 0 pad characters, the allowed code lengths are 4, 8, 12, ... characters.
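The pad-replacement scheme above can be demonstrated for the common case of a 32-byte primitive (e.g. an Ed25519 public key): 32 mod 3 is 2, so standard Base64 would append 1 pad character, which is instead traded for a prepended 1-character code. The code letter "B" is illustrative; the real assignments live in KERI's code table.

```python
import base64

def qb64_encode(code: str, raw: bytes) -> str:
    """Prepend a derivation code in place of Base64 pad characters.
    Sketch limited to codes whose length equals the pad size (1 or 2)."""
    ps = (3 - len(raw) % 3) % 3       # pad chars standard Base64 would add
    if len(code) != ps:
        raise ValueError("code length must equal pad size in this sketch")
    # Prepend ps zero bytes so the first ps characters encode only zero
    # bits, then overwrite those characters with the derivation code.
    b64 = base64.urlsafe_b64encode(bytes(ps) + raw).decode()
    return code + b64[ps:]

def qb64_decode(code_len: int, qb64: str) -> bytes:
    """Invert qb64_encode: 'A' encodes six zero bits, restoring the
    zero prefix before standard Base64 decoding."""
    b64 = "A" * code_len + qb64[code_len:]
    return base64.urlsafe_b64decode(b64)[code_len:]
```

The result is 44 characters, a multiple of four, so these primitives concatenate losslessly in both the text and binary domains, exactly the composability property described above.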
  • Hidden Attribute ACDC. A hidden attribute ACDC MUST include a nonce field, n, in its attribute section, a. The nonce field value is a pseudo-random number with a minimum length of 16 bytes for 128 bits of entropy, i.e., cryptographic strength. The default format for the n field value is a 16-byte random number encoded as a CESR salt string. The value of the attributes field, a, of a hidden attribute ACDC MUST be the SAID of its attributes block. To clarify, the value of the a field (attributes section) MUST NOT include the attributes themselves but only the SAID of the block of those attributes. This makes the attributes hidden. For short we call a hidden attribute ACDC a HAC (Hidden Attribute Container or Credential). The fully populated block of attributes with its embedded SAID may be provided as a private attachment to an issuance or presentation exchange of its associated HAC. The rules, r, section of the HAC may impose confidentiality on any non-issuee recipient of the attributes block, as identified by the attributes block SAID, thereby discouraging any unapproved sharing of the hidden (private) attributes or any unapproved correlation to the block's SAID. To clarify, the SAID of the complete HAC is a self-referential digest whose digested contents include an attributes section, given by field a, whose value is merely the SAID of the hidden attributes block, not the hidden attribute fields and values themselves. Nonce n Field in Attributes Section
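The hiding construction above can be sketched as follows. SHA-256 with plain base64url stands in for the CESR-coded digest, and the "name" attribute is a hypothetical example field; the n and a field labels come from the text.

```python
import base64
import hashlib
import json
import secrets

def b64_digest(raw: bytes) -> str:
    # stand-in for a CESR-coded digest of the serialized block
    return base64.urlsafe_b64encode(hashlib.sha256(raw).digest()).decode().rstrip("=")

# Attributes block: the real attributes plus a 128-bit nonce 'n', so that
# the block's SAID cannot be brute-forced from guessed attribute values.
attributes = {
    "n": base64.urlsafe_b64encode(secrets.token_bytes(16)).decode().rstrip("="),
    "name": "hypothetical attribute",  # illustrative field, not from the spec
}
block_said = b64_digest(json.dumps(attributes, separators=(",", ":")).encode())

# In the HAC itself the 'a' field carries only the SAID, never the attributes:
hac_a_field = {"a": block_said}
```

Without the high-entropy nonce, an attacker who could guess likely attribute values could recompute candidate digests and unblind the a field by trial; 128 bits of nonce entropy makes that search infeasible, as the brute-force arithmetic elsewhere in these notes shows.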
  • The rules section, field label r, may be used to impose a Ricardian contract that constrains the disclosure of information provided in the attributes section of an ACDC. This is a mechanism to protect privacy via chain-link confidentiality of correlatable information. References: Ricardian Contract: https://en.wikipedia.org/wiki/Ricardian_contract https://medium.com/ltonetwork/ricardian-contracts-legally-binding-agreements-on-the-blockchain-4c103f120707 https://101blockchains.com/ricardian-contracts/ https://iang.org/papers/ricardian_contract.html Chain Link Confidentiality
  • Limited Feature KERI Implementation. The full-featured KERI protocol is designed to support nearly every imaginable application that requires secure attribution at scale. As a result, a full-featured implementation of KERI may be quite complex. The design principles of KERI are, in order of priority: security, performance, usability (convenience). Obviously KERI is security first, always. Performance is second because it may be problematic to add performance features to a protocol design after the fact. But not all applications of KERI require performance. Some narrow applications may benefit from an implementation that sacrifices performance or other non-security features for usability. In general a narrow application of KERI may not require all the features of KERI, but those features that it does support must still be secure. Web Application Authentication and Authorization. One such narrow or limited-feature application is authentication of a client web browser to a web server running in the cloud to support an interactive web graphical user interface (Web GUI). Authentication in this sense means proving control of a transferable KERI AID to a REST endpoint on the server by signing a client HTTP request with the private key that is the current signing key for that AID. The authenticated request may also be used to authorize the server to perform some task on behalf of the controller of that identifier. The primary benefit of using a transferable KERI AID for the identifier is that the key-pair that controls the identifier is rotatable using KERI's pre-rotation mechanism. This provides a more secure key rotation mechanism than is provided by most, if not all, other web authentication mechanisms, including WebAuthn. SKWA basically needs just those features of KERI necessary for pre-rotation of single-key-pair transferable identifiers. It is meant to support users who merely wish to interact with a web application for tasks that require user interaction. 
SKWA is not intended to support interactions at scale by any given user on the client side. With SKWA, a server may use a full KERI implementation to interact with a SKWA-only client because SKWA is a proper subset of KERI. To reiterate, SKWA is not meant for micro-service backends that support complex web applications, nor is SKWA meant to support identifiers that directly issue verifiable credentials, multi-sig identifiers, and so on. It is only meant to support the AuthN/AuthZ of an interactive web application client with pre-rotation of its controlling key-pair.
  • Namespaced namespace:testatorid:acdcid/path?query#fragment did:keri:E4ReNhXtuh4DAKe4_qcX__uF70MnOvW5Wapj3LcQ8CT4:E8MU3qwR6gzbMUqEXh0CgG4k3k4WKkk9hM0iaVeCmG7E URN:aid:E4ReNhXtuh4DAKe4_qcX__uF70MnOvW5Wapj3LcQ8CT4:E8MU3qwR6gzbMUqEXh0CgG4k3k4WKkk9hM0iaVeCmG7E URN:aid:E4ReNhXtuh4DAKe4_qcX__uF70MnOvW5Wapj3LcQ8CT4:E8MU3qwR6gzbMUqEXh0CgG4k3k4WKkk9hM0iaVeCmG7E/mypath?myquery#myfragment Namespaced Example
  • Selector Size Pad Unq Len Format Unique Quad/Trip Bytes
  • https://github.com/decentralized-identity/keri https://github.com/decentralized-identity/keripy https://github.com/decentralized-identity/kerijs https://github.com/decentralized-identity/keriox https://github.com/decentralized-identity/keri/blob/master/implementation.md
  • # DID Doc Encoding: Abstract Data Model in JSON This is a proposal to simplify DID-Docs by defining a simple abstract data model in JSON and then permitting other encodings such as JSON-LD, CBOR, etc. This would eliminate an explicit dependency on the RDF data model. ## Universal Adoptability For universal interoperability, DIDs and DID-Docs need to follow standard representations. One goal of the DID specification is to achieve universal adoption. Broad adoption is fostered by using familia