marco-tiloca-sics
# Open design points

More input and open points from the 2021-12-08 CoRE interim:

* https://datatracker.ietf.org/meeting/interim-2021-core-14/materials/slides-interim-2021-core-14-sessa-key-update-for-oscore-kudos-00.pdf

[TOC]

## Cryptographic limits for CCM_8

**>> DONE (see Section 2.1.1 and Appendix A)**

* The current text in the document body can be extended with more data presented at the interim meeting linked above. The result can be moved to an appendix.
* In the document body, keep only a straight set of recommendations to follow if CCM_8 is used.

## Efficient counting of 'q' for OSCORE AEAD limits

**>> DONE (see Section 2.2.2 and Appendix B)**

https://github.com/core-wg/oscore-key-update/issues/1

* Original issues:
  - https://gitlab.com/rikard-sics/draft-hoeglund-oscore-rekeying-limits/-/issues/1
  - https://gitlab.com/rikard-sics/draft-hoeglund-oscore-rekeying-limits/-/issues/13

Goal: avoid keeping an explicit count_q, while still ensuring by construction that count_q encryptions are not exceeded.

Rationale, for both client and server:

- No explicit count_q to store and maintain; just rely on the Sender Sequence Number (SSN). Still, count_q represents the number of performed encryptions.
- Check whether the SSN exceeds q_limit; if yes, stop and rekey.

How it works in practice:

Pro: no need to keep an explicit count_q.

Con: pessimistic overestimation, making the keys stale earlier than needed, thus possibly resulting in more frequent key updates.

At any point in time, an endpoint has made *at most* ENC = (SSN + SSN*) encryptions, where:

* SSN is its own Sender Sequence Number.
* SSN* is the other endpoint's Sender Sequence Number.

That is, SSN* is an overestimation of the responses without Partial IV that this endpoint has sent.
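As a rough sketch of the bound above (function names are illustrative, not from the draft), an endpoint can derive its worst-case encryption count from the two Sender Sequence Numbers and compare it against the AEAD limit:

```python
def worst_case_encryptions(own_ssn: int, peer_ssn_bound: int) -> int:
    """Upper bound ENC = SSN + SSN* on the encryptions performed so far.

    peer_ssn_bound stands for SSN*, the other endpoint's Sender Sequence
    Number, which overestimates this endpoint's responses sent without
    a Partial IV.
    """
    return own_ssn + peer_ssn_bound


def must_rekey(own_ssn: int, peer_ssn_bound: int, q_limit: int) -> bool:
    # Stop and rekey once the overestimated count would exceed q_limit.
    return worst_case_encryptions(own_ssn, peer_ssn_bound) > q_limit
```

Overestimating peer_ssn_bound only makes the check more conservative, at the cost of rekeying earlier than strictly needed.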
Before performing an encryption, an endpoint stops and invalidates the Security Context if (SSN + X) > limit_q, where SSN is the Sender Sequence Number of this endpoint, and X is determined as follows:

* If this endpoint is producing an outgoing response, X is the Partial IV in the request it is responding to.
  * Note that X < SSN* always holds.
* If this endpoint is producing an outgoing request, X is the highest Partial IV value marked as received in its Replay Window plus 1, or 0 if it has received no messages yet from the other endpoint.
  * That is, X is the highest Partial IV seen from the other endpoint, i.e., its highest seen Sender Sequence Number, and again X < SSN* always holds.

### Example of why it is not simply about using one's own SSN (just as historical information)

```
c->s  PIV: 0, Observe    (1 encryption C)
s->c  PIV: Notification  (1 encryption S)
s->c  PIV: Notification  (1 encryption S)
s->c  PIV: Notification  (1 encryption S)

   c: 1 encryption  / SSN = 1
   s: 3 encryptions / SSN = 3

q_limit = 4

c->s  PIV: 1             (1 encryption C)
s->c  PIV: -             (1 encryption S)

   c: 2 encryptions / SSN = 2
   s: 4 encryptions / SSN = 3

c->s  PIV: 2             (1 encryption C)
s->c  PIV: -             (1 encryption S)  // The server is already acting wrong here

   c: 3 encryptions / SSN = 3
   s: 5 encryptions / SSN = 3

c->s  PIV: 3             (1 encryption C)  // The client would stop after this
s->c  PIV: -             (1 encryption S)

   c: 4 encryptions / SSN = 4
   s: 6 encryptions / SSN = 3
```

## More weird things and corner cases for consideration

**>> DONE**

https://github.com/core-wg/oscore-key-update/issues/27

0. Don't send non-KUDOS messages while KUDOS is not completed on your side.

   ==> See Section 4.3: "Once a peer acting ..."

1. An outgoing non-KUDOS message is always protected with the most recent Security Context.

   ==> See Section 4.3: "Once a peer has successfully derived the new ..."

   This opens more corner cases to address in the client-initiated and server-initiated flows, in case the last KUDOS message is lost in transmission.
   These are now addressed in Section 4.3.1 ("Note that the server achieves ...") and 4.3.2 ("Note that the client achieves ...").

2. A client must be ready to receive a non-KUDOS response protected with keying material different from that used to protect the corresponding non-KUDOS request. This is the case if KUDOS is run between the transmission of a non-KUDOS request and the transmission of the corresponding non-KUDOS response.

   * NEW: For the client-initiated version, this can happen if the client uses NSTART > 1 (see {{Section 4.7 of RFC7252}}), and the client has outstanding interactions with the server (i.e., requests pending the reception of an ACK or response, see Section 4.7 of RFC7252) when sending the first KUDOS message.

     \[NOTE: Another case would be where the client has an observation already ongoing at the server when KUDOS starts. However, this assumes that it is possible to safely preserve observations across a key update, which is not the case at the moment, although under consideration.\]

     OLD: For client-initiated KUDOS, this can happen where the responses in question are observe notifications.

     ==> This is now obsoleted by the handling later discussed in 4.3.1.1.

   * For the server-initiated version, this can happen if the client uses NSTART > 1 (see {{Section 4.7 of RFC7252}}), and one of the non-KUDOS requests results in the server initiating KUDOS (i.e., yielding the first KUDOS message as response). In such a case, the other non-KUDOS requests representing outstanding interactions with the server (see {{Section 4.7 of RFC7252}}) would be replied to later on, once the server has finished executing KUDOS (i.e., when the server receives the second KUDOS message, successfully verifies it, and derives the new OSCORE Security Context CTX\_NEW).

     ==> This is now obsoleted by the handling later discussed in 4.3.2.1.

3. Corner case with a "stuck party".
   This can happen when the following occurs: i) the server-initiated version of KUDOS is used; ii) the client is client-only; iii) the server needs to start a key update (which can happen only with a server-initiated KUDOS); iv) the client sends only NON requests without expecting a response from the server, hence without ever storing a Token for response matching.

   To avoid a deadlock for the server, a client-only endpoint that supports KUDOS SHOULD here-and-then also send CON requests, or retain the Token value for possible responses that can be a KUDOS message_1 from the server.

   ==> See Section 4.3.2.2

### Requests in Transit Across a Key Update

https://github.com/core-wg/oscore-key-update/issues/25

**>> DONE**

Similar problems as with trying to preserve observations across a key update (see further below).

This could happen if: i) NSTART > 1 at the client; and ii) a first request is sent and the corresponding response is sent only later on, after the rekeying with KUDOS has completed (i.e., that response is protected with new key material, different from that used to protect that first request).

Sequence of events (client-initiated):

- The client sends Req1, protected with the old key material and Partial IV X.
- The client sends Req2 as KUDOS Request #1, thus actually starting KUDOS.
- KUDOS is completed.
- The client sends Req3, protected with the new key material and Partial IV X.
- The server responds to Req1, protecting the response with the new key material.
- The response from the server would cryptographically match both Req1 and Req3.

Sequence of events (server-initiated):

- The client sends Req1, protected with the old key material and Partial IV X.
- The client sends Req2, protected with the old key material and Partial IV X+1.
- The server initiates KUDOS based on Req1.
- KUDOS is completed.
- The client sends Req3, protected with the new key material and Partial IV X.
- The server responds to Req2 with the new key material.
- The response from the server would cryptographically match both Req2 and Req3.

Possible solution:

- The client determines the time T when it is sending its own first KUDOS message as follows:
  - In the client-initiated version of KUDOS, T is the time right before sending KUDOS Request #1.
  - In the server-initiated version of KUDOS, T is the time right before sending KUDOS Request #2.
- At time T, the client has to ensure that there are no outstanding interactions with the server (i.e., requests pending the reception of an ACK or response, see Section 4.7 of RFC 7252), with the exception of ongoing observations.
- If there are any, the client must not continue with running KUDOS. Rather, the client has to wait for those outstanding interactions to clear, or, if running KUDOS is urgent, the client frees up the Token value(s) used for non-responded requests that are not observation requests.

==> See Section 4.3.1.1 and 4.3.2.1

==> This obsoletes an alternative handling discussed above in "More weird things and corner cases for consideration".

## Renewal of Sender/Recipient IDs

https://github.com/core-wg/oscore-key-update/issues/22

**>> DONE (see Appendix D)**

- From John, about possibly establishing also new Sender/Recipient IDs. Something to at least discuss.

  https://mailarchive.ietf.org/arch/msg/core/GXsKO4wKdt3RTZnQZxOzRdIG9QI/

- Christian suggests to actually use an inner option for that, adding more considerations on selecting new identifiers among the ones not used yet (hence it's easier to just do it in KUDOS).

  https://mailarchive.ietf.org/arch/msg/core/ClwcSF0BUVxDas8BpgT0WY1yQrY/

  It's also even better if we want to admit this as happening not necessarily as part of a KUDOS execution.

General points:

* Like in EDHOC, each endpoint can specify its own new Recipient ID, i.e., the new Sender ID of the other peer.
* This update of Sender/Recipient IDs has to be optional to do.
* This can be embedded in a KUDOS execution or not (i.e., stand-alone).
* Both when embedded in KUDOS and when stand-alone, it has to be possible for this procedure to be client- or server-initiated. When embedded in KUDOS, the initiator (responder) of this procedure is also the initiator (responder) of KUDOS.
* Updating the Sender/Recipient IDs practically triggers the derivation of a new Security Context, with everything it implies (e.g., on Sender Sequence Number and Replay Window).
* This procedure MUST NOT be used immediately following a reboot, to avoid reuse of AEAD nonces. Instead, KUDOS has to be used first (or something else, e.g., EDHOC).

To define:

* How to transport the new Recipient ID
* When is a good time to delete the old Sender/Recipient IDs (e.g., based on key confirmation)
* What about observations? Possible to preserve?
* Downgrading attack by blocking a request with p = 0, and injecting a fake 5.03 response

### How to transport the new IDs

Rationale: use a new dedicated CoAP option for both transport and signaling.

~~~~~~~~~~~
+------+---+---+---+---+-------------+--------+--------+---------+
| No.  | C | U | N | R | Name        | Format | Length | Default |
+------+---+---+---+---+-------------+--------+--------+---------+
|      |   |   |   |   |             |        |        |         |
| TBD1 |   |   |   |   | RecipientID | opaque | 0-7    | (none)  |
|      |   |   |   |   |             |        |        |         |
+------+---+---+---+---+-------------+--------+--------+---------+
         C=Critical, U=Unsafe, N=NoCacheKey, R=Repeatable
~~~~~~~~~~~

Proposed option number: 24 (00011000). (If we want it to be critical, it can be number 13.)

The content of the option is the offered new Recipient ID of the message sender. The peer offers a Recipient ID value which is, on its side, currently free under the ID Context used for the Security Context in question.

The option is of class E for OSCORE.

### Example (client-initiated version)

```
CLIENT                                              SERVER

CTX_A {     |                                  | CTX_A {
 SID = 1    |                                  |  SID = 0
 RID = 0    |                                  |  RID = 1
}           |                                  | }
            |                                  |
Protect     |               Req1               |
CTX_A       |--------------------------------->| Verify CTX_A
            | OSCORE: ..., kid:1               |
            | Encrypted_Payload {              |
            |  ...                             |
            |  RecipientID: 42                 |
            |  ...                             |
            |  Application Payload             |
            | }                                |
            |                                  |
            // When embedded in KUDOS, CTX_1 is CTX_A.
            // Also, there cannot be application payload.
            |                                  |
            |               Resp1              |
Verify CTX_A|<---------------------------------| Protect CTX_A
            | OSCORE: ...                      |
            | Encrypted_Payload {              |
            |  ...                             |
            |  RecipientID: 78                 |
            |  ...                             |
            |  Application Payload             |
            | }                                |
            // When embedded in KUDOS, this message
            // is protected using CTX_NEW. Also, there
            // cannot be application payload.
            // Then, CTX_B builds on CTX_NEW by updating
            // the new Sender/Recipient IDs
            |                                  |
CTX_B {     |                                  | CTX_B {
 SID = 78   |                                  |  SID = 42
 RID = 42   |                                  |  RID = 78
}           |                                  | }
            |                                  |
Protect     |               Req2               |
CTX_B       |--------------------------------->| Verify CTX_B
            | OSCORE: ..., kid:78              |
            | Encrypted_Payload {              |
            |  ...                             |
            |  Application Payload             |
            | }                                |
            |                                  |
            |               Resp2              |
Verify CTX_B|<---------------------------------| Protect CTX_B
            | OSCORE: ...                      |
            | Encrypted_Payload {              |
            |  ...                             |
            |  Application Payload             |
            | }                                |
            |                                  |
Client      |                                  |
deletes     |                                  |
CTX_A       |                                  |
            |                                  |
Protect     |               Req3               |
CTX_B       |--------------------------------->| Verify CTX_B
            | OSCORE: ..., kid:78              |
            | Encrypted_Payload {              |
            |  ...                             |
            |  Application Payload             |
            | }                                |
            |                                  | Server
            |                                  | deletes CTX_A
```

### Example (server-initiated version)

```
CLIENT                                              SERVER

CTX_A {     |                                  | CTX_A {
 SID = 1    |                                  |  SID = 0
 RID = 0    |                                  |  RID = 1
}           |                                  | }
            |                                  |
Protect     |               Req1               |
CTX_A       |--------------------------------->| Verify CTX_A
            | OSCORE: ..., kid:1               |
            | Encrypted_Payload {              |
            |  ...                             |
            |  Application Payload             |
            | }                                |
            |                                  |
            // When (to be) embedded in KUDOS, CTX_OLD is CTX_A
            |               Resp1              |
Verify CTX_A|<---------------------------------| Protect CTX_A
            | OSCORE: ...                      |
            | Encrypted_Payload {              |
            |  ...                             |
            |  RecipientID: 78                 |
            |  Application Payload             |
            | }                                |
            // When embedded in KUDOS, this message is
            // protected with CTX_1 instead. Also,
            // there cannot be application payload.
            |                                  |
CTX_A {     |                                  | CTX_A {
 SID = 1    |                                  |  SID = 0
 RID = 0    |                                  |  RID = 1
}           |                                  | }
            |                                  |
Protect     |               Req2               |
CTX_A       |--------------------------------->| Verify CTX_A
            | OSCORE: ..., kid:1               |
            | Encrypted_Payload {              |
            |  ...                             |
            |  RecipientID: 42                 |
            |  Application Payload             |
            | }                                |
            // When embedded in KUDOS, this message is
            // protected with CTX_NEW instead. Also,
            // there cannot be application payload.
            |                                  |
            |               Resp2              |
Verify CTX_A|<---------------------------------| Protect CTX_A
            | OSCORE: ...                      |
            | Encrypted_Payload {              |
            |  ...                             |
            |  Application Payload             |
            | }                                |
            // When embedded in KUDOS, this message is
            // protected with CTX_NEW instead. Also,
            // there cannot be application payload.
            |                                  |
CTX_B {     |                                  | CTX_B {
 SID = 78   |                                  |  SID = 42
 RID = 42   |                                  |  RID = 78
}           |                                  | }
            |                                  |
Protect     |               Req3               |
CTX_B       |--------------------------------->| Verify CTX_B
            | OSCORE: ..., kid:78              |
            | Encrypted_Payload {              |
            |  ...                             |
            |  Application Payload             |
            | }                                |
            |                                  |
            |               Resp3              |
Verify CTX_B|<---------------------------------| Protect CTX_B
            | OSCORE: ...                      |
            | Encrypted_Payload {              |
            |  ...                             |
            |  Application Payload             |
            | }                                |
Client del  |                                  |
CTX_A       |                                  |
            |                                  |
Protect     |               Req4               |
CTX_B       |--------------------------------->| Verify CTX_B
            | OSCORE: ..., kid:78              |
            | Encrypted_Payload {              | Server del CTX_A
            |  ...                             |
            |  Application Payload             |
            | }                                |
            |                                  |
            |               Resp4              |
Verify CTX_B|<---------------------------------| Protect CTX_B
            | OSCORE: ...                      |
            | Encrypted_Payload {              |
            |  ...                             |
            |  Application Payload             |
            | }                                |
```

### Phrasing when the new ID negotiation can happen stand-alone after reboot

https://github.com/core-wg/oscore-key-update/issues/22

RH (p12): procedure to update the OSCORE Sender/Recipient IDs. It can be used stand-alone or embedded in a KUDOS execution. Not to be used right after a reboot, to avoid reuse of AEAD nonces. An exception is when OSCORE Appendix B.1 is used, and the node can restart from a safe SSN.
CA: Might be easier to phrase in terms of "when having lost state", and being explicit about which state is lost (here it's probably the set of Sender IDs ever used with this Master Secret).

Right, so, regardless of whether a reboot happened or not, it is really about having lost state (rebooting is just a possible tragic example). This procedure must not be used if the device does not remember the whole set of Sender/Recipient IDs used with the Master Secret of CTX_OLD. In such a case, the device has to first run KUDOS to update the Master Secret, or something even stronger, like EDHOC.

Running this procedure stand-alone (assuming it's safe to do, see above) requires an endpoint to have available:

- The latest snapshot of the latest CTX_OLD
- The whole set of Sender/Recipient IDs used with the Master Secret of CTX_OLD

This calls for yet another consistency check when running this procedure stand-alone, regardless of whether a reboot happened or not. That is, a peer receiving an ID to use as its own Sender ID must abort the procedure if it has already used that ID as its own Sender ID under that Master Secret. Also, it must offer its own Recipient ID only if this is not only available at the moment, but also never used before under the current Master Secret.

Practically, between two consecutive updates of the Master Secret (e.g., through KUDOS), a device must keep track of the Sender/Recipient IDs used under that Master Secret.

==> See Appendix C.

## On preserving observations across key updates

https://github.com/core-wg/oscore-key-update/issues/23

**>> DONE (see Appendix C)**

### Problem description

- The client starts an observation Obs1 by sending a request Req1 with req_piv=X.
- The two peers run KUDOS, and reset their Sender Sequence Numbers (SSN) to 0.
- Later on, while Obs1 is still ongoing, the client sends a new request Req2, also with req_piv=X. This is not necessarily an observation request.
- A notification sent by the server for Obs1, or a response to Req2, would cryptographically match both Req1 and Req2.

```
CLIENT                                   SERVER
   |                                        |
SSN = 567                               SSN = 123
   |                                        |
   |           Req1 (start Obs1)            |
   |--------------------------------------->|
   |  Observe: 0                            |
   |  OSCORE: ... , PIV: 567                |
   |                                        |
SSN = 568                                   |
   |                                        |
   |<---------------------------------------|
   |  Observe: 0                            |
   |  OSCORE: ... , PIV: 123                |
   |                                        |
   |                                    SSN = 124
   |                                        |
   |<---------------------------------------|
   |  Observe: 1                            |
   |  OSCORE: ... , PIV: 124                |
   |                                        |
   |                                    SSN = 125
   |                                        |
  ...                 ...                  ...
       // Perform key update with KUDOS
  ...                 ...                  ...
   |                                        |
SSN = 0                                 SSN = 0
   |                                        |
   |--------------------------------------->|
   |  OSCORE: ... , PIV: 0                  |
   |                                        |
SSN = 1                                     |
   |                                        |
   |<---------------------------------------|
   |  OSCORE: ...                           |
   |                                        |
  ...                 ...                  ...
   |                                        |
SSN = 566                               SSN = 0
   |                                        |
   |--------------------------------------->|
   |  OSCORE: ... , PIV: 566                |
   |                                        |
SSN = 567                                   |
   |                                        |
   |<---------------------------------------|
   |  OSCORE: ...                           |
   |                                        |
   |                  Req2                  |
   |--------------------------------------->|
   |  OSCORE: ... , PIV: 567                |
   |                                        |
SSN = 568                                   |
   |                                        |
   |<---------------------------------------|
   |  OSCORE: ...                           |
   |                                        |
        // A cryptographic match occurs
        // against both Req1 and Req2 !
```

In the current approach, once the key update is completed and CTX_NEW is derived, the two peers terminate any ongoing observation. The following two alternative approaches make it possible to preserve observations across key updates.

Note: whatever the decision, this has to be a precisely defined feature of the protocol, applied to all observations and all key updates.

### "Jumping" approach as a solution

The following approach would allow preserving ongoing observations. After a key update:

- The two peers do not terminate ongoing observations.
- When wishing to send a first request after the key update, the client determines PIV* as the highest req_piv among all the ongoing observations.
- The client updates its SSN to be (PIV* + 1).

Pro: no need to take any particular action for every request to be sent.
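A minimal sketch of the "jumping" rule (the function name is illustrative, not from the draft):

```python
def ssn_after_key_update(ongoing_req_pivs: list[int]) -> int:
    """'Jumping' approach: the client's next SSN after a key update.

    ongoing_req_pivs holds the req_piv of each still-ongoing observation;
    PIV* is their maximum, and the SSN resumes at PIV* + 1 (or 0 if no
    observation is ongoing).
    """
    if not ongoing_req_pivs:
        return 0
    return max(ongoing_req_pivs) + 1
```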
Con: assigning PIV* to the SSN makes several SSN values unusable and yields an early use of large-size values on the wire (in the OSCORE option of all following requests), thus increasing communication overhead.

```
CLIENT                                   SERVER
   |                                        |
SSN = 567                               SSN = 123
   |                                        |
   |           Req1 (start Obs1)            |
   |--------------------------------------->|
   |  Observe: 0                            |
   |  OSCORE: ... , PIV: 567                |
   |                                        |
SSN = 568                                   |
   |                                        |
   |<---------------------------------------|
   |  Observe: 0                            |
   |  OSCORE: ... , PIV: 123                |
   |                                        |
   |                                    SSN = 124
   |                                        |
   |<---------------------------------------|
   |  Observe: 1                            |
   |  OSCORE: ... , PIV: 124                |
   |                                        |
   |                                    SSN = 125
   |                                        |
  ...                 ...                  ...
       // Perform key update with KUDOS

  // The client wishes to send a request.
  // PIV* = 567 ==> SSN = 568
   |                                        |
SSN = 568                               SSN = 0
   |                  Req2                  |
   |--------------------------------------->|
   |  OSCORE: ... , PIV: 568                |
   |                                        |
SSN = 569                                   |
   |                                        |
   |<---------------------------------------|
   |  OSCORE: ...                           |
   |                                        |
```

Comments from Christian at IETF 112:

* CA (chat): "visible on application" ... depends where the application starts. REST-engine-level would just re-establish.

  // This seems in favour of cancelling and re-registering, all internally in the stack

* CA (chat): BTW, I think the observe business can be much easier: as long as it wants the observation alive, just skip over those sequence numbers when generating new requests. (non-mic, just for Rikard and further discussion)

  // This seems rather in favour of jumping the Sender Sequence Number forward at the client, to become greater than the greatest PIV used in an active observation

* CA: The application might ask for a fresh notification, and the stack would poll or observe and just re-establish observations if they go away for good reasons.

### "Skipping" approach as a solution

After a key update:

- The client builds a new list L. Each element of the list is the req_piv of the observation request of one of the ongoing observations.
- From here on, during this new key epoch, before sending any request, the client checks whether its current SSN is in the list L. If so, the client increments its SSN and checks the value against the list L again. As soon as a value is found that is not in the list, it is used as Partial IV in the OSCORE option of the request.
- In order to minimize the waste of good SSN values, the client should remove from the list L an SSN value used as req_piv in an observation when that observation is terminated.
- When starting a new key update, the client deletes the current list L.

Pro: there is no "big jump" and waste of SSN values; instead, only the already taken ones are selectively skipped.

Pro: no additional communication overhead due to early use of large-size Partial IVs.

Con: more processing, since the client has to check the list L for every request to be sent.

```
CLIENT                                   SERVER
   |                                        |
SSN = 567                               SSN = 123
   |                                        |
   |           Req1 (start Obs1)            |
   |--------------------------------------->|
   |  Observe: 0                            |
   |  OSCORE: ... , PIV: 567                |
   |                                        |
SSN = 568                                   |
   |                                        |
   |<---------------------------------------|
   |  Observe: 0                            |
   |  OSCORE: ... , PIV: 123                |
   |                                        |
   |                                    SSN = 124
   |                                        |
   |<---------------------------------------|
   |  Observe: 1                            |
   |  OSCORE: ... , PIV: 124                |
   |                                        |
   |                                    SSN = 125
   |                                        |
  ...                 ...                  ...
       // Perform key update with KUDOS

  // The client builds the list L = {567}
   |                                        |
SSN = 0                                 SSN = 0
   |                                        |
  // The list L = {567} does not contain 0
   |                                        |
   |--------------------------------------->|
   |  OSCORE: ... , PIV: 0                  |
   |                                        |
SSN = 1                                     |
   |                                        |
   |<---------------------------------------|
   |  OSCORE: ...                           |
   |                                        |
  // The list L = {567} does not contain 1
   |                                        |
   |--------------------------------------->|
   |  OSCORE: ... , PIV: 1                  |
   |                                        |
SSN = 2                                     |
   |                                        |
   |<---------------------------------------|
   |  OSCORE: ...                           |
   |                                        |
  ...                 ...                  ...

  // The list L = {567} does not contain 566
   |                                        |
SSN = 566                                   |
   |                                        |
   |--------------------------------------->|
   |  OSCORE: ... , PIV: 566                |
   |                                        |
SSN = 567                                   |
   |                                        |
   |<---------------------------------------|
   |  OSCORE: ...                           |
   |                                        |
  // The list L = {567} contains 567
  // SSN++ ==> SSN = 568
  // The list L = {567} does not contain 568
   |                                        |
SSN = 568                                   |
   |                                        |
   |--------------------------------------->|
   |  OSCORE: ... , PIV: 568                |
   |                                        |
SSN = 569                                   |
   |                                        |
   |<---------------------------------------|
   |  OSCORE: ...                           |
   |                                        |
```

### More complicated than that ...

CA: You need to take into account any observation that the server has not yet acknowledged as terminated, not just the current ones.

CB: You can have a choice for the device between terminating and long-jumping. Otherwise, preference for "long-jumping" rather than "skipping" (which sounds onerous for each and every request).

CB: Rekeying is disruptive anyway; we need to understand what this means for an application, especially one thinking real-time. Describe how destructive it is.

RH: And what should not be done while rekeying.

CB: Yes. The above is useful to reduce disruptiveness, specifically as to observations.

CA (on the disruption not only to observations while this runs): Gotta go over the document again, but I *think* we can make it continuously usable. (So that at every point in time the device can send a message, even if it's using the intermediary context)

RH: Right, mostly about understanding which OSCORE context to use to protect messages in the meanwhile.

### Practical solution

If observations are involved, the following happens:

- The client never silently forgets about observations.
- A client that wants to cancel an observation always does that through a CON request with Observe:1.
- ...
- But even by doing so, the client may never become sure about the server having actually cancelled an observation. This is the case if cancellation requests are lost, even after the maximum number of retransmissions.
- Then, to be conservative, the client would not free up the corresponding Sender Sequence Number, and will jump forward beyond it, thus shrinking more and more the amount of available Sender Sequence Numbers to use as Partial IVs.
In other words:

- For each ongoing observation, the server (client) assigns EPOCH = 0 during the key epoch when the response including Observe to the observation registration request is sent (received).
- Each time KUDOS is run, EPOCH is incremented.

Then:

- If EPOCH < MAX_EPOCH, the client tries again to cancel the observation.
- If EPOCH has reached MAX_EPOCH, both client and server remove the observation.
- If the observation is successfully cancelled by the client, the related EPOCH information for that observation is freed up.

If the response including Observe to the observation registration request is lost and not received by the client, the EPOCH values will be inconsistent (the server assigned a value and the client did not). However, this does not cause any issues, since the client will not accept any incoming notifications (it did not get confirmation that its observation request was received).

### Additional proposal from Christian on capability signaling

https://mailarchive.ietf.org/arch/msg/core/Bh81qE65N6zn6IefyoS3rOr3U_0/
https://github.com/core-wg/oscore-key-update/issues/23

* Using yet another bit 'b' in the new flag byte of the OSCORE option, the client may signal to the server its own intention of preserving old observations in a safe way, i.e., by long-jumping (or skipping, though unpreferred) the SSN value. By doing so, the server can be spared from doing any management of EPOCH values, and both parties would agree to simply cancel ongoing observations once a key update procedure has successfully completed.
* Rather than having the new bit 'b' in the new flag byte, it's better to have it as a bit in the 'x' byte indicating the size in bytes of 'id detail'. After all, the remaining bits of 'x' are more than enough for indicating the size of 'id detail' (as something like that is usually fine to be 8 bytes in size).
* The same bit 'b' would be useful for the server to set in the KUDOS message it sends, to confirm that it is also indeed preserving old observations.
* Following this explicit agreement, both endpoints would either terminate or preserve their ongoing observations when deleting CTX_OLD, depending on the agreed value of the bit 'b'.
* When concluding KUDOS with a mutual agreement on b=1, the two peers can thus preserve ongoing observations, and manage the corresponding EPOCH values to force their termination if need be (see above). Otherwise, they terminate their ongoing observations once KUDOS is completed.
* The bit 'b' totally needs to be authenticated, to ensure that what is going to happen is what the two peers want! One way to do that can be deriving the new Security Context by taking as additional input parameter to updateCtx() not only the concatenation of N1 and N2 transported as 'id detail', but also the value of those bits (or, for simplicity and building on the above, the whole 'x' field including those bits).
  * Note that this would have to take into account the 'x' byte from both the first and the second KUDOS message, also to avoid further inconsistencies (e.g., N1 and N2 having different sizes). Practically, the first input parameter to updateCtx() can be x1 | N1 when deriving CTX_1, and x1 | x2 | N1 | N2 when deriving CTX_NEW.
* The same principle applies when considering using the 'x' byte also to encode the 'p' bit for executing KUDOS with or without Perfect Forward Secrecy.

## Key Update without Perfect Forward Secrecy (PFS)

**>> WIP (see Appendix D)**

Raised at:

https://mailarchive.ietf.org/arch/msg/core/EL0yHxQrP2DQwHxo6ojnQedvFbY/
https://github.com/core-wg/oscore-key-update/issues/21

The original version of KUDOS ensures PFS.
This requires that:

- The Master Secret and Master Salt are updated, and keys derived from the "original" ones are not used anymore;
- The new Master Secret and Master Salt are stored on disk (non-volatile memory), for possible retrieval after rebooting;
- The two KUDOS peers are able to perform a stateful procedure.

New concepts:

- Bootstrap Master Secret and Bootstrap Master Salt
  - If provisioned, they are stored on disk, and they are never changed by the device.
- Latest Master Secret and Latest Master Salt
  - This pair can be dynamically updated by the device; it is lost upon reboot unless stored on disk (which is recommended, if possible to do).

Note that:

- A device can have none of the pairs above, only one, or both.
- A device that has neither of the above pairs cannot run KUDOS.
- A device that has only one of the above pairs can try to run KUDOS, but that can fail due to the other peer (see below).

The original version of KUDOS is problematic for some devices that can afford only a single write to persistent memory, when a Bootstrap Master Secret and a Bootstrap Master Salt are provided (e.g., at manufacturing), but no more after that. These devices cannot perform a stateful key update procedure, which practically prevents ensuring PFS.

What follows enables an alternative execution of KUDOS, which sacrifices PFS but allows devices to perform a stateless key update, i.e., without writing to disk (as is possible in OSCORE Appendix B.2).

CAPABLE = A device capable of writing to disk (non-volatile memory). This excludes any one-time-only writing to non-volatile memory that happened at manufacturing time or (re-)commissioning time, e.g., to write the Bootstrap Master Secret and Bootstrap Master Salt.
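The distinction between the two pairs and the resulting "can this device run KUDOS at all" rule can be sketched as follows. This is a hypothetical illustration, not part of the draft; the class and method names are invented for the example.

```python
# Sketch (names invented for illustration): which key material a device
# holds, and whether it can attempt to run KUDOS at all.

from dataclasses import dataclass
from typing import Optional, Tuple

KeyPair = Tuple[bytes, bytes]  # (Master Secret, Master Salt)

@dataclass
class DeviceKeyStore:
    # Written once at manufacturing/(re-)commissioning; never changed
    # by the device afterwards.
    bootstrap: Optional[KeyPair] = None
    # Dynamically updated; survives reboot only if the device is CAPABLE
    # and actually stored it on disk.
    latest: Optional[KeyPair] = None

    def can_attempt_kudos(self) -> bool:
        # A device with neither pair cannot run KUDOS; with at least one
        # pair it can try, though the attempt may still fail due to the
        # other peer (see below).
        return self.bootstrap is not None or self.latest is not None

# Usage with illustrative values:
fresh = DeviceKeyStore()
provisioned = DeviceKeyStore(bootstrap=(b"secret", b"salt"))
assert not fresh.can_attempt_kudos()
assert provisioned.can_attempt_kudos()
```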
As a general rule, when generating a new Security Context, the corresponding Latest Master Secret and Latest Master Salt:

- should be stored on disk, if the device is CAPABLE;
- must always be stored in volatile memory, for practical use with OSCORE.

This is independent of how exactly such Master Secret and Master Salt have been obtained. An exception to the above is the temporary KUDOS context CTX_1, which must not be stored on disk.

This enables the following sequence of events in case of rebooting:

- Check if you have a (Latest Master Secret, Latest Master Salt) pair on disk.
  - If yes:
    - Load it to volatile memory, and use its content to derive an OSCORE context CTX_OLD.
      - Note: no need to restore this pair on disk in this case!
    - Run KUDOS as initiator.
    - If CAPABLE, store on disk the Master Secret and Master Salt from CTX_NEW as (Latest Master Secret, Latest Master Salt).
  - If no, check if you have a (Bootstrap Master Secret, Bootstrap Master Salt) pair on disk.
    - If yes:
      - Load it to volatile memory, and use its content to derive an OSCORE context CTX_OLD.
      - If CAPABLE, the device stores (Bootstrap Master Secret, Bootstrap Master Salt) on disk as (Latest Master Secret, Latest Master Salt); this covers the case of a CAPABLE device that has not run KUDOS with the other peer yet.
      - Run KUDOS as initiator.
      - If CAPABLE, store on disk the Master Secret and Master Salt from CTX_NEW as (Latest Master Secret, Latest Master Salt).
    - If no, use other ways to establish a first OSCORE context CTX_NEW, e.g., EDHOC.
      - If CAPABLE, store on disk the Master Secret and Master Salt from CTX_NEW as (Latest Master Secret, Latest Master Salt).

Proposed extensions:

- In the OSCORE option of a KUDOS message, in the second byte of flag bits, one more bit 'p' is used. The bit must be set to 0 when using the original version of KUDOS. The bit must also be set to 0 if the second byte of flag bits is present but the 'd' flag is set to 0 (i.e., the message is not a KUDOS message).
- The 'p' bit indicating PFS or no-PFS mode might not sit beside the 'd' bit, but rather in the 'x' field intended to signal the size of the 'id detail' field (which would still be up to 127 bytes, while 8 bytes is the common recommendation).
- In a KUDOS message (i.e., the 'd' bit is set to 1), the 'p' bit indicates what material to use for CTX_OLD as the second argument of updateCtx():
  - If the 'p' bit is set to 0, KUDOS is run in PFS mode. That is, the current Security Context CTX_OLD is used, and the goal is to preserve PFS. In order to use this mode of KUDOS, a device must be CAPABLE.
  - If the 'p' bit is set to 1, KUDOS is run in no-PFS mode, meaning that PFS is sacrificed because a stateful execution is not possible. That is, the Security Context CTX_OLD to use is the current one with the following changes applied: Master Secret = Bootstrap Master Secret, and Master Salt = Bootstrap Master Salt. This means that every execution of KUDOS between these peers will always consider this same Secret/Salt pair.
    - In order to use this mode of KUDOS, a peer must have the Bootstrap Master Secret and Bootstrap Master Salt.

If a device is not CAPABLE, it MUST NOT run KUDOS in PFS mode and MUST run KUDOS in no-PFS mode.

If a device is CAPABLE, it SHOULD run KUDOS in PFS mode as initiator and SHOULD NOT run KUDOS in no-PFS mode as initiator. An exception to this is a follow-up exchange with a responder peer that has made it evident that it does not support PFS mode. Note that such a CAPABLE device is able to store also this piece of information, so that it can perform the following executions of KUDOS with this peer with the 'p' bit set to 1, including after a possible reboot.

Note: If a peer A has learned that the other peer B does in fact support running KUDOS in PFS mode, it should never run KUDOS with that peer B in no-PFS mode (meaning that, if the other peer B initiates KUDOS with p = 1, it should be rejected).
If A is a CAPABLE device, it MUST store this information on disk, hence preventing a malevolent downgrade to no-PFS mode in case of simultaneous rebooting where B is not CAPABLE.

Note: If both peers reboot simultaneously, the client-initiated variant of KUDOS would end up being run.

If able to run KUDOS as specified by the 'p' flag from the initiator, the responder MUST comply and do so. Otherwise:

* If the responder is the server:
  * It MUST return a protected 5.03 error response to Request #1 (protected with CTX_NEW as usual), where the diagnostic payload should explain what has happened. The 'p' bit in the OSCORE option of this response MUST be set to 1. When receiving this, if 'p' was 0 in the first Request #1, the client learns that the server can run only the no-PFS mode, and MAY try again, setting the 'p' bit to 1 in the new Request #1.
* If the responder is the client:
  * After receiving Response #1, the client sends a Request #2 protected with CTX_NEW as usual. The 'p' bit in the OSCORE option of this request MUST be set to 1. Then, the client can abort the current KUDOS execution, and deletes both CTX_1 and CTX_NEW.
  * After receiving the Request #2 above (i.e., having the 'p' bit set to 1 as a follow-up to Response #1 having the 'p' bit set to 0), the server aborts the KUDOS execution, deletes both CTX_1 and CTX_NEW, and learns that the client can run only the no-PFS mode.

In either case, to avoid further inconsistencies (e.g., N1 and N2 having different sizes), updateCtx() should take as additional input parameters both the 'x' byte from the first KUDOS message and the 'x' byte from the second KUDOS message.
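The construction of updateCtx()'s first input parameter from the 'x' bytes and nonces discussed above can be sketched as follows. This is a minimal illustration assuming plain byte concatenation (x1 | N1 and x1 | x2 | N1 | N2, as in the notes); the helper names are invented and not from the draft.

```python
# Sketch (helper names invented): binding the 'x' bytes of both KUDOS
# messages into the first input parameter of updateCtx().

def update_ctx_input_ctx1(x1: bytes, n1: bytes) -> bytes:
    # When deriving CTX_1, only the first message's 'x' byte and nonce N1
    # are available: x1 | N1.
    return x1 + n1

def update_ctx_input_ctx_new(x1: bytes, x2: bytes,
                             n1: bytes, n2: bytes) -> bytes:
    # When deriving CTX_NEW, both 'x' bytes and both nonces are bound:
    # x1 | x2 | N1 | N2. If the peers disagree on any 'x' byte (e.g., on
    # the 'p' bit, or on the nonce sizes), they derive different contexts
    # and the key update fails, rather than silently diverging.
    return x1 + x2 + n1 + n2

# Usage with illustrative values (8-byte nonces, per the common
# recommendation in the notes):
x1, x2 = bytes([0x08]), bytes([0x08])
n1, n2 = bytes(8), bytes(8)
assert len(update_ctx_input_ctx1(x1, n1)) == 9
assert len(update_ctx_input_ctx_new(x1, x2, n1, n2)) == 18
```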
