---
title: Silicon Salon Notes 2022-06-01
tags: Notes
---

# Silicon Salon Notes 2022-06-01

#### 2022-06-01 9am PDT

Slides (follow along): https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9?view#/

These Notes: https://hackmd.io/@bc-silicon-salon/Byr4vaXOc?edit

## Attendees

NOTE: This document is public. Don't add your name here, or any private contact details, if you don't want to be listed as a participant in the final output from this Silicon Salon. [Chatham House rules apply](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/14) for all quotes.

* Christopher Allen (@ChristopherA)
* Bryan Bishop (@Kanzure)
* Jesse Posner (@jesseposner)

## Notetaking

We will be using collaborative notes today. Both the slides and the collaborative notes are here in the HackMD.

Welcome to Blockchain Commons' Silicon Salon. Thank you all for joining this event to puzzle out a future in semiconductors for cryptocurrency and other cryptography applications. This is a collaborative session. We have a collaborative note capability; in particular, Bryan Bishop will be using his famous ability for notetaking. If you missed the links, ask in the Zoom chat and we can send them again.

So first off, who am I? My name is Christopher Allen, on Twitter and GitHub and other places as @ChristopherA. I first started working in this industry at the very end of the 80s when this was all brand new. In particular, I was involved with freeing up the patent license for RSAref, which was used by early startups in the internet space like RedHat, PGP and others. I was deeply involved in that. In the early 90s, I ended up taking over the SSL reference project from Netscape. We published SSLref 3.0, and later I became editor and co-author of TLS v1.0. I've been doing this for a long time. One of my significant clients back then was DigiCash, so you could say that I've been doing digital currencies for 30 years.
In the 00s, I was CTO at Certicom, which is where elliptic curves were invented. I had a side hobby as an adjunct professor in an MBA program. In the 10s, I was at Blackphone, an early startup that was trying to build a more secure, more private Android phone. I also started Rebooting Web of Trust and coined the term self-sovereign identity and its 10 principles. I was also a principal architect at Blockstream. More recently I coauthored the W3C DID standard, and today I am principal architect at Blockchain Commons.

What is BCC? Our goal is to bring together blockchain and web3 stakeholders to work together and build interoperable infrastructure. We want to focus on decentralized solutions where everyone wins. We're a neutral, not-for-profit party. Our larger mission is to enable people to control their own digital destiny.

The problem we're trying to solve is this one. You've seen this cartoon for a while now: there are all these dependencies and all this technical debt that we keep piling up on other stuff, and then there's one key fragile portion at the bottom being maintained by someone somewhere. We saw this with OpenSSL Heartbleed, and we've seen it in other situations in the past. It is also a security threat when the supply chain and other things are not financially supported, and that one little bit needs a little fix and it sometimes doesn't happen.

We work with web3 and blockchain stakeholders and communities to assess needs, requirements, and problems. From that, we will collaboratively engineer interoperable specifications, which I've been doing for 30 years. Together, we want to evangelize these solutions to the ecosystem, and we want to support all of our different partners with reference code and test suites so that they can develop and make available their own implementations. I've done this before: FIDO, DID, verifiable credentials, TLS, RSAref, airgap URs, QRs, and a number of other things that you will find on the BCC GitHub.
Who are you? Well, we have a lot of different people here: silicon designers like Supranational, Tropic Square, and Crossbar, all involved in designing semiconductors. We have hardware wallet manufacturers, including Proxy and Foundation Devices, who take those chips and implement cryptography applications with them. We have a number of people in the community involved in other aspects of the ecosystem, like Bitmark with NFTs and Unchained Capital with collaborative custody. We have a variety of advocacy orgs that support us, like the Human Rights Foundation, and a number of people here who are cryptographers and cryptographic engineers.

Our problem: leveraging secrets held on silicon chips is really important as a root-of-trust method. We have learned over the past few decades that one of the best ways to protect our security is to have this root of trust. Unfortunately, the existing chips don't support modern cryptography, and a lot of standards orgs are rejecting the needs of the cryptocurrency industry, but I would also argue a lot of the newer cryptography. The capital and lead times for chips are really high: it takes years to develop things, with very large upfront costs. Within that, there's very inefficient IP licensing, which creates a lot of friction not only for developers but ultimately for the whole ecosystem in trying to get solutions to our problems. A lot of this is due to current financial incentives failing to create a robust, secure infrastructure, and this is at all levels: not just the hardware, but the wallets and the network stack that we need for all of our stuff to function.

In addition, there's something I call the NASCAR problem. This particular screenshot is from Sparrow Wallet. This is the long list of the different cryptographic wallets, both software and hardware, that it supports, and all the variants for it. And this is just the beginning.
There are a lot of startups that should be in this list, or new capabilities that ought to be in this list and aren't, not for lack of trying. This is the NASCAR problem, and we've seen it before. Two decades ago, you might remember OpenID login screens, a user-centric promise that unfortunately led to too many choices. We see Google OpenID here and a lot of other icons from Yahoo, AOL, and others. It just got larger and larger. Another problem was that the big OpenID providers were able to subvert the protocol due to weak interoperability standards: you could do half of OpenID and not the other half. That, combined with the market power of Google, Facebook, and Apple, meant that they dominated this space, and today you still see three buttons, maybe two, for these kinds of social logins. Apple has recently returned with their own login only because you can't ship your app in the App Store with a federated login unless you also support theirs, which is using their market dominance to put their NASCAR logo on all these logins. I don't want to repeat this pattern. It's going to cause a lot of problems down the line.

So what is our answer? Well, follow what we have done before: collaborate together, engineer our solutions, evangelize them, and then provide long-term support. We want to define use cases and requirements. We want to identify essential features for new cryptography, in particular for silicon logic, and also prioritize them. We want to create an ecosystem roadmap to support continued investment. Many of you are trying to go out and get capital for your new solutions, and being able to tell a better ecosystem story about how big this opportunity is would be useful for us all. We want everything to be interoperable and future-proof, which serves the whole ecosystem.
We have done this several times before, and sometimes things get centralized, so let's remove privileged points in the ecosystem and limit the ability to subvert the shared protocols, which is something that BCC is trying to achieve in all of its work.

The process today includes multiple presentations on silicon hardware, from the chip designer perspective but also from the perspective of a vendor using these chips. We will then go into 6 open topics and have a facilitated discussion about them: everything from what kind of cryptography is most important to you, to what pain points you have, etc. We will do that after the presentations. Then we will decide on next steps for collaboration, like another salon, or whether we want to focus on one particular area. This will be another 2.5 hours of work; we hope to finish everything by noon PDT or 3pm EDT today.

So now some rules. We really want everyone to be able to use the information that we're collecting today, but we also want people to be able to speak freely. Neither the identity nor the affiliation of the speakers should be revealed. Please don't take a quote from someone and say "Simon said such and such in Silicon Salon" unless you directly ask them in advance. If you ask, they might say yes. But please don't take anything out of context. We're going to be recording the presentations being done today for YouTube, but we will not be sharing the Q&A of those presentations, nor will we be sharing the discussions that we have afterwards. We do have recording on right now, but that's to help us produce an anonymized summary that we will be making soon. The summary will include quotes but not names and affiliations. There will also be an opportunity in the next day or so to review it; if you maybe said a little too much, we can have those things removed. The collaborative document that I shared earlier is where we're taking a lot of those notes.
Feel free, if you have concerns or questions, to clean it up there. We will go into the presentations part of our program right now. Basically we have at least 4... do we have someone from Supranational that wants to present? I was a little confused on that one. I guess not.

* https://libre-soc.org/conferences/siliconsalon2022/
* https://diyhpl.us/wiki/transcripts/austin-bitcoin-developers/2019-06-29-hardware-wallets/
* https://diyhpl.us/wiki/transcripts/breaking-bitcoin/2019/extracting-seeds-from-hardware-wallets/

### Crossbar

I won't give my whole background, but I'm very much a semiconductor person rather than someone from the blockchain community. I wanted to open with an observation that has struck me for a while: there are really quite different cultures here. In chips, it's super expensive to develop a chip. Rather than thinking in terms of cost to acquire a customer, we think of the tooling cost, and our key metric is gross margin. You have to design a chip for multiple purposes. To do one chip is a huge deal, so we don't kick off chips with the same fluidity that you kick off a new GitHub repository. It's a different way of thinking. The main point here is that you can't be too insular in your thinking. Crypto wallets use semiconductors that are largely designed not for you but for other purposes. Look at a typical MCU in a wallet: you probably have a hundred such MCUs in your house. They were designed for those purposes, not for cryptocurrency. So you have to be aware of the larger world and why those semiconductors came into being, and I think harmonizing requirements is how good semiconductor support comes into being.

Crossbar is foundationally a memory-technology company. I think that's relevant here in talking about crypto-wallets.
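The chip economics described above (tooling cost amortized over volume, with gross margin as the key metric) can be made concrete with a back-of-the-envelope calculation. The numbers below are purely hypothetical, chosen only to show why a chip program needs very large volumes:

```python
def chip_unit_economics(nre_cost, unit_mfg_cost, selling_price, volume):
    """Back-of-the-envelope economics for a chip program.

    nre_cost:       one-time engineering/tooling cost (design, masks, EDA)
    unit_mfg_cost:  marginal cost to manufacture one chip
    selling_price:  price per chip
    volume:         number of chips sold
    Returns (amortized cost per chip, gross margin as a fraction).
    """
    amortized_cost = unit_mfg_cost + nre_cost / volume
    gross_margin = (selling_price - amortized_cost) / selling_price
    return amortized_cost, gross_margin

# Hypothetical: $5M NRE, $0.80 to make each chip, sold at $2.50.
cost_1m, margin_1m = chip_unit_economics(5_000_000, 0.80, 2.50, 1_000_000)
# At 1M units, amortized NRE alone is $5/chip, so the margin is negative.
cost_50m, margin_50m = chip_unit_economics(5_000_000, 0.80, 2.50, 50_000_000)
# At 50M units, amortized cost falls to $0.90/chip and margin turns healthy.
```

This is why a new chip can't be kicked off with the fluidity of a new repository: the fixed cost only disappears at volumes a single niche application rarely supplies.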
To give some background, any kind of non-volatile memory that you're used to dealing with, like an SD card or USB thumb drive, is based on floating gate technology, where a bunch of electrons are trapped in an oxide between insulators. You can integrate this with logic down to about 40 nm. Below that, you need a certain volume of electrons for the memory to be readable, and the volume is just too small below 40 nm. This has led to a field called emerging memories: what are we going to do at advanced nodes so that we can embed non-volatile memories? Crossbar is a leader in one type of that memory, called resistive memory.

To explain the relevance to hardware wallets: flash's trapped charge can easily be read with an electron microscope. And because it's a bunch of electrons trapped in a small space, they want to get away from each other, and the charge may leak over time. If you put something in your desk drawer and pick it up a few years later, it may not be working anymore, particularly if you happened to store something magnetic in the same drawer. It's susceptible to electromagnetic fields, radiation, etc.

Resistive memory is highly immune to physical attack. We have given our memory to a teardown house, and even with de-laminating it layer by layer they were unable to extract data from the memory. It also does not have this repulsive leaking effect. We say it lasts for 150 years, and that's based on analysis; obviously we haven't been around for 150 years. It's robust against electromagnetic fields and various other forms of aging. Also, it can be integrated with advanced nodes and their logic. When you think about embedded memory, you think about logic and memory together. This is how we got into security products in general.
Looking at hardware wallets, most of them are close to the following architecture: there's a microcontroller borrowed from the consumer market, not shielded, generally 40 nm and above because it needs memory and memory doesn't work below 40 nm, and perhaps a secure element (SE) borrowed from the SIM card or banking market, which does have shielding. Some SEs are simple state machines, some have a small MCU and a few pins; typically they support RSA and NIST P-256, so legacy, inapplicable cryptography when you're talking about hardware wallets. It's hard to go beyond a black-box view; you don't get an open devkit or anything like you do with a consumer MCU. A lot of hardware wallets, if they have an SE at all, might pull the secrets into a physically unshielded chip, which seems like a problem. Even making important decisions like "am I paying to the right address?" is UI software executing in an unshielded chip driving a display. If you can capture bus traffic and crack it offline with arbitrary compute power, that's different than having to crack it in situ. The speed and size are large, and the memory is readable and perishable. I also hear a lot of complaints about no devkit access, an almost black-box view of SEs, and no flexibility.

The coloring in my slide is meant to indicate physical countermeasure shields. Let me explain what I mean by that. It's not obvious when you read the datasheets of these parts, but if you look at teardowns you come to realize that a few small chips for banking and SIM have physical protection. Whereas on the MCU side, any general purpose MCU, even if it says "secure", that kind of refers to logical security, meaning "when I write this address I can no longer do x". But that logic is really just transistors performing logic, and if you can disrupt that operation then the logical security malfunctions. Physical security means attempting to prevent that kind of disruption attack. I'm not going to go through this slide.
There are 20 or so physical countermeasure techniques. I'll quote one: if you put on the top metal layer a complex mesh that has maybe 64 or 128 lines spaghettied around each other and connected to sensors, then if you try to FIB or probe, you will short or break this. This is called a protective active mesh, and it's just one of about 20 techniques. They have to be built into the chip and distributed throughout the layout; in the marketplace, you see this only on SEs and not on general purpose microcontrollers. When you talk about FIBing and probing, 28 nm is by itself more resistant to attack than 40 nm, and countermeasures are much more effective and denser when implemented at a more advanced process.

Some of the things Crossbar is thinking about as architectural improvements, just talking generally here, and which we can at least discuss today: being a memory technology developer, first, replace the secret storage, or the boot code or backup that you would typically do on an SD card, with a private and persistent memory that is robust. Also, I think extending physical countermeasures over more functions, very important functions typically on the MCU side, whether it's 2 chips, co-packaged, or monolithic, however you do it; extending the physical umbrella of countermeasures seems to be key. There are critical decisions going on on the MCU side, after all. You could also say crypto is evolving quickly and hardware implementation will never keep up with the rate of innovation in blockchain, but to the extent that the MCU is secure, you can think of it as a secure processor even for cryptographic operations. Moving forward in node size is of course good for size, cost, everything. And making sure it's not just a black box that is hard to use. These are some of the items we think are important for going forward in semiconductors for hardware wallets.
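The active mesh countermeasure described above can be illustrated with a toy software model. Real meshes are analog/mixed-signal circuits on the chip's top metal; this sketch only captures the principle: drive a key-dependent pseudorandom pattern onto the mesh lines, sense the far end, and treat any mismatch (from a FIB cut or a short) as tampering.

```python
import hmac, hashlib, os

class ActiveMeshModel:
    """Conceptual model of a protective active mesh (NOT real hardware):
    serpentine lines on the top metal layer are driven with a
    pseudorandom pattern each cycle and sensed at the far end."""

    def __init__(self, lines=64, seed=None):
        self.lines = lines
        self.key = seed if seed is not None else os.urandom(16)
        self.cut_lines = set()  # indices of lines damaged by probing/FIB

    def _pattern(self, cycle):
        # Key-dependent pattern, so an attacker can't predict and replay it.
        d = hmac.new(self.key, cycle.to_bytes(8, "big"), hashlib.sha256).digest()
        return int.from_bytes(d, "big") & ((1 << self.lines) - 1)

    def _readback(self, cycle):
        sensed = self._pattern(cycle)
        for i in self.cut_lines:
            sensed &= ~(1 << i)  # model a cut line as stuck-at-0
        return sensed

    def tamper_detected(self, cycles=64):
        # Check several cycles; a cut is seen whenever its line drives a 1.
        return any(self._readback(c) != self._pattern(c) for c in range(cycles))

mesh = ActiveMeshModel()
assert not mesh.tamper_detected()  # intact mesh: readback always matches
mesh.cut_lines.add(17)             # attacker FIBs through line 17
# Over 64 random cycles the cut escapes detection only with probability 2**-64.
```

The design point this illustrates is why the pattern must be pseudorandom and keyed: a static or predictable signal could be measured once and re-injected past the cut, defeating the sensor.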
We're going to hold questions until after the presentations.

---

lkcl from libre-soc notes:

* https://www.crowdsupply.com/design-shift/orwl an example of tamper resistant hardware. note from allen: "without triggering an autodestruct" unless you heal the lines with beam-induced deposition
* idea: SE variant of Coherent Processor (OpenCAPI) https://libre-soc.org/openpower/sv/SimpleV_rationale/
* idea: Open FPGA with SE. use ChipIgnite. look up efabless mini FPGA, which is too small but could be expanded. OpenFPGA. https://www.cnx-software.com/2022/02/08/open-source-fpga-asic-efabless-chipignite/
* idea: use Certification Marks to solve the standards creep problem
* Coherent Processor concept: no cache etc. but the same ISA as the main CPU; execution is pushed to the SE over OpenCAPI, managed by the main processor. solves the problem of

### Proxy

I wanted to let Proxy talk as a user of these chips who is building them into physical hardware.

Thank you. Let me just pull this up. Alright. Slides: https://hackmd.io/@bc-silicon-salon/SkWPpfBO9

I will talk about what it is like to build a product today, as opposed to sometime in the future. Who are we? We're a hardware and software company. We started in the physical access space, building NFC reader hardware, wearables, and phone wallets, or phone apps, for enterprise physical access control. Think employee badges for getting through doors, as well as logging in within your workplace and meeting rooms and things like that. We pushed the mobile version of this into the physical security industry; it was an afterthought when we started, and we made it a first-class citizen. We worked with a few phone manufacturers to make this a reality. We're also involved in standardizing some of these protocols to make them more universal; the CSA ACWG alliance is leading the way there. So what are we building right now that is relevant here?
To some extent, the access control stuff is our legacy as a company. One of the things that we found is that there's a bigger need for managing credentials and digital identity, not just access credentials but any sort of credentials, and it fits very nicely into the world that the cryptocurrency community has created. What we're building today is a wearable hardware wallet and a complementary mobile software wallet, an evolution of our previous Proxy App, that will cover more of the digital identity use cases as well as crypto-asset storage. We see these as complementary; ideally this is creating a single product where software and hardware work together to give you a better experience, which has been our motto as a company: getting those to work nicely together.

I mentioned wearable hardware, which has some unique challenges. There's extremely limited power, limited physical space, and we're talking about the form factor of a ring, so it's probably smaller than you can imagine. Limited IO capabilities, which to some extent is to our advantage here, because we sidestep some of the secure UI problems that Mark mentioned in his presentation; we kind of don't have a lot of IO. You have secure radio access through NFC directly bound to the secure element (SE), which is the IO we have, but it's also fairly secure IO. Since we're so limited in space and power, we work with a high degree of integration, and we're not often dealing with discrete components. We are talking about highly integrated multi-chip packages, to save on space, power, and routing area on the circuit boards, which of course results in very few options for us to choose from, because we're limited to what integrated solutions are already available and we can't pick and choose best in class for each individual component.
As a wearable device, there are table stakes: we're not building a single-purpose device, because as a wearable it has to do all the things that people expect it to do. It would be pretty difficult to convince people to wear 10 rings or glasses or wristbands where one of them is your mobile driver license, one of them has your bitcoin, and another one you use to get in the door, and you forget which finger is which. This is the same category of device as a phone or watch: it has to be fairly multi-purpose, and it has to work with things that people already use today.

This gets me to a real-world live usage conversation. There was a nice article, [Wallets Abound - The Daily Gwei #488](https://thedailygwei.substack.com/p/wallets-abound-the-daily-gwei-488), which talked about the proliferation of crypto wallets and which direction that is going. They focus on user experience, the convenience and quality of that experience, crossed with the level of integration with existing systems and a good-enough-by-default level of security, as what will actually make a wallet, whether software or hardware, reach mass adoption and bring some of these use cases to the masses, so to speak. When I read that article, it was exactly the sort of thing that we at Proxy are focused on. That's spot on to how we think about things and what we're building.

I listed a few things that people are already doing today with various hardware devices: cards, hardware tokens, phones, badges, tickets, digital IDs, crypto-assets, DIDs, etc. There is existing legacy infrastructure and also new infrastructure. Coming from physical access control, we're aware of the need to interoperate with that infra, because we're talking about devices that hang off office doors for decades.
We have built and deployed some of those devices; those things aren't going to get replaced, and if you want to switch how people do things by having everyone install new terminals, then you're probably fooling yourselves.

Some of the challenges I've seen firsthand trying to design a product that does some subset of those functions: there is limited functionality in SEs in general. Some of them are black boxes, and some are more open, but they are largely just JavaCard runtime environments. A lot of the market uses this, and there are existing JavaCard applications. That's a somewhat specialized runtime environment with persistent memory and other things like that. There are "new" curves, which aren't new by any stretch of the imagination but in this context are new, which are required for a lot of the cryptocurrency stuff. New algorithms, new signature schemes, ciphers. These things are popping up much faster than they used to, and they are being much more widely adopted, faster than they used to be. In many ways, the hardware vendors are chasing a rapidly moving target. The timelines for hardware are years to develop a new chip, at huge capital cost, and by the time you're done it's already obsolete. For example, the mobile driver license ISO standard published last year is 150 pages of a really complex standard that probably cannot run on 99% of the SE chips on the market today, because it supports selective disclosure and has fairly complicated parsing and signature logic and things like that. I think other people in this room are better equipped to talk about the specifics of the ciphers and signatures involved and the various side-channel attacks, but the lack of these things, and the lack of visibility into these issues, is one of the major problems.

The next challenge I want to talk about is something I see caused by certification requirements.
For existing applications like payments and access control, there are costly, lengthy certification processes: GlobalPlatform, Mastercard, Visa, transport networks. These cost a lot, and they end up freezing your entire stack so you can't change a single thing, and this leads to predictable market dynamics. The vendor is not going to change something that will trigger re-certification unless they have a large commitment from a big player, like a phone manufacturer, that actually wants that functionality. It's not necessarily a problem of the specs and standards not being there; it's unwillingness to implement the features quickly because of those market dynamics. JavaCard 3.1 has been around for a few years, and I have not seen a single SE that implements it yet. Even the 3.0.5 JCAPI, which is 7 years old now, is not that commonly adopted (JCAPI 3.1 is from 2019, 3.0.5 from 2015). A lot of mass market chips are stuck on the 3.0.4 API version. The NXP SE050E added AES-GCM support just at the start of this year, and that's brand new silicon; by the time a small player like Proxy can use it in a product design and get to volume production (we don't buy millions of chips at a time, we buy thousands), it's quite far behind. You will hear someone say AES-GCM is already deprecated, yet it has only made it into this piece of silicon as of this year.

The last challenge I wanted to talk about is the lack of a single point of view from any of the manufacturers or vendors that make these chips today. Quite often the entire spectrum of expertise doesn't exist in-house. The chip vendors are specialized at making silicon or integrating pieces of silicon into a chip package. Sometimes they are RF experts, memory experts, or silicon manufacturing experts, but they might not have the software expertise, and the security work is often outsourced.
Or vice versa: you have software houses trying to buy chips off the shelf, and they don't have influence over what goes into the hardware, so they use the chip as a black box. One of the big problems with this, as I see it, is that the people making the major components for some of these products are not the people making the products. There's a disconnect between the product requirements and what the components can do, because there is not a tight feedback loop. I come from the software world, and looking at a lot of the software packages that have become hugely successful recently, like the various data store implementations and things like React, so many of those evolved from a product company solving its own problem, be it Facebook or Google, and then open-sourcing a piece of infrastructure code; other people have the same product problem, so they can use it too. The component evolved out of a product need. This doesn't happen in hardware chips, because the people making the hardware are not also making the products. I hope to get closer integration out of this, and more information sharing as well.

What should we do? Just a couple of thoughts on what would help. Let's find ways to build architectures where some of the existing applications can co-exist with some of the new stuff being developed by crypto-wallets, for example. Again, going back to our requirements, we don't have the luxury of a single-function device like a Ledger that you plug in, use, and put into a safe. We need to support the existing applications as well as the new stuff we want to build; we can't just ignore that existing stuff runs on JavaCard. If integrated solutions are being developed, they should be developed with modularity and flexibility so that, as a consumer of that package, we can choose how to enable certain things, or when to do that, or other settings.
I know that to the chip maker this means more complexity and switching logic, but it will make the chip more worthwhile to produce long-term. If the integrated package can be used for more use cases than the narrow one the manufacturer foresaw, it can have more longevity as a chip. I have mentioned this with certifications: some of these things need to move slowly for good reasons. At the same time, you don't want that to be the limiting factor. You want parts of the system which can evolve faster and have the ability to do so. That covers what I wanted to talk about and what I'm hoping to discuss in the Q&A sessions later as well. I'm available for other questions or other conversations, and happy to have more of these sessions. Bryan just posted the collaborative notes link into the Zoom chat. To be clear, that particular document is not completely private, so if you put your name or contact information in there, it may be available to more people than just this community; be aware of that before you add your name to the attendee list at the top of the shared document. Next we will return to the silicon side of this problem with a presentation from Tropic Square. ### Tropic Square Tropic Square is a company we set up a few years ago, together with Slush and Stick and SatoshiLabs. We want to build transparent and secure chip solutions. The reason has been said many times today: there is a problem in the chip market in that you can't buy a transparent chip, so you have to trust the vendor, and you have little to no visibility into the implementation. We believe that a fully transparent and auditable chip is necessary. We had been waiting for a long time for someone to bring this to market, and then we decided to start this story ourselves, see if we could get others to join, and see if we could benefit from it. SatoshiLabs makes open-source hardware wallets. Don't trust, verify. I think this resonates with this forum. 
The current paradigm for secure chips, though, is restrictive NDAs: if you found an issue in the chip, you can't even talk about it and tell your customers about the vulnerability. There are lots of issues there. Basically you can't help yourself, and you can't do these reviews or be part of certification processes, or it's very difficult. We believe the existing setup is not on a trajectory to change that, so newcomers have to challenge the status quo. There are applications and other needs that would benefit from such hardware. Tropic Square... a TRuly OPen IC. TROPIC. It's a vision and a long-term thing; it's not really practical right now because of the nature of the existing semiconductor environment, but you have to start somewhere. So at the beginning of the journey, we want to open as much as we can and see what else we can open as we go. Kerckhoffs's principle is that the system has to be secure even if it is in the enemy's hands, and the only thing you need to keep secret is the key. This is not really implemented in hardware; I think open-source software is on a good path toward this kind of goal, but in hardware there's a lot of work to be done. Security by obscurity is the existing status quo, where you rely on the secrecy of the implementation, and all these processes have the side effect of keeping weak designs in production: certification requirements and long lead times mean a certification reflects a point in the past, and there's little incentive for chip makers to change anything and go through the certification process again. This limits innovation and keeps old designs in the market. Another aspect is the lack of transparency: you might say, okay, there are a bunch of vendors, so you might get different implementations, but that's not necessarily true, because they license IP cores that are used in other vendors' chips anyway. 
You might think you have a different, heterogeneous chip, but you have no way to know they didn't license existing IP components. For us the goal is to differentiate using transparency. Tropic01 is the project name. It's a secure-element kind of device: basically flash memory with a serial interface which you can connect to existing MCUs, which is the typical use case for hardware wallets and other applications. The reason for this is that we had to start somewhere. At the same time, we want to focus on the most important part, which is the secure part; if the secure part isn't secure, then it doesn't make sense to make a SoC. That's why we're focusing on this, and to be able to do that in a reasonable amount of time we decided we would first open only a certain part of the design of the chip. In the closed part of the chip, we will reuse existing security IPs like pathways, memories, OTPs, eFuse, PUF, flash. In our design, we will focus on the algorithm implementations, side-channel resilience, and these kinds of features. We started in March 2020. We were working on a feasibility study to find out if there's a way to produce such a chip using the existing ecosystem. .... Last year we got funding and we started development; today we've just taped out the first prototype. We hope to get to silicon later this year, and then we will get feedback on what we have done so far. So far the work we've been doing was on the chip design and getting the implementation ready. In parallel we're working on the incubation of the idea and validating the idea of transparency and auditability in silicon markets. There's an EU project called EU HORIZON-CL3-2021 ORSHIN which is not publicly available yet... 
but the point is that there are other companies and other big names in the industry which are interested in open-source hardware and are looking for ways to compensate for losing the obscurity -- for what you are basically missing if you have to open up the implementation. Probably a long-term project. We also started a partnership with Czech universities, but we're open to collaboration with other teams that have a similar way of looking at these problems. Looking into the future, the Tropic01 secure element is part of an initial idea which was way bigger than just an SE. The idea was to have a secure SoC hardware wallet as a single-chip solution. When Tropic01 is ready, we will go back to evaluating a secure SoC project. We are building expertise in embedded security and chip design itself. By being part of SatoshiLabs, we have access to cryptographers, and the Tropic chip is designed as a generic device that will support various markets. This is the intersection of the markets we see: the Internet of Things (IoT), secure hardware for digital assets, and the semiconductor industry. At the same time, Tropic01 is not Trezor-specific or bitcoin-specific. It's just a root of trust. You can build applications on top of it with a traditional microcontroller. On the founding team... 
[omitted] ### Libre-SOC Youtube: https://www.youtube.com/watch?v=us061o4PBZs Transcript: https://libre-soc.org/conferences/siliconsalon2022/ slides: https://ftp.libre-soc.org/siliconsalon2022.pdf https://www.blockchaincommons.com/salons/silicon-salon/ https://git.libre-soc.org/?p=nmutil.git;a=blob;f=src/nmutil/grev.py;hb=HEAD https://libre-soc.org/openpower/isa/bitmanip/ https://libre-soc.org/openpower/sv/bitmanip/ https://libre-soc.org/openpower/sv/biginteger/analysis/ #### Introduction The Libre-SOC Hybrid CPU-VPU-GPU and why Libre/Open is crucial (even in a business context) Practical gotchas for Silicon Transparency Sponsored by NLnet's PET Programme 2022-05-24 This is a quick presentation on the Libre-SOC project and some practical gotchas for silicon transparency. Many thanks to Bryan for inviting me to do this presentation. #### What is Libre-SOC So what is the Libre-SOC project? * An entirely libre vector-enhanced Power ISA compliant CPU with enough legs to tackle supercomputing-class workloads. https://libre-soc.org/openpower/sv/ * Working closely with the OpenPOWER Foundation: no rogue custom instructions. Both long-term stability and open-ness are key. https://openpowerfoundation.org/groups/isa/ * Huge reliance on Python OO and software engineering as applied to HDL. Not just traditional verification: unit tests at every level, formal correctness proofs as unit tests. "python3 setup.py test" to run the tests. https://gitlab.com/nmigen/nmigen https://uspto.report/TM/88980893 * Using libre VLSI tools: coriolis2 (by Sorbonne University); the ultimate goal is to have the GDS-II files be publicly reproducible http://coriolis.lip6.fr/ We're developing an entirely libre-licensed vector-enhanced Power ISA compliant CPU. Basically with enough legs to tackle supercomputing workloads. 
What that means in turn is that we're developing Cray-style vectors on top of the Power ISA, and we're in the process of writing that up to present to the OpenPOWER Foundation ISA working group. There will be no rogue custom instructions in our project at all; everything will get reviewed and be submitted alongside a compliance suite, documentation, etc. https://libre-soc.org/openpower/sv/svp64/ The history of the Power ISA is that it's 25 years old. It pre-dates the RISC-V instruction set by a long way. As does, interestingly, their intention to open up the ISA, which was initiated about 10 years ago; one of the key important things there was that IBM wanted to ensure that it allocated its patent pool protection correctly to the OpenPOWER Foundation to be able to protect members. I just wanted to get across that long-term stability and open-ness are very important to IBM and to the OpenPOWER Foundation members. We rely hugely on Python object-oriented programming. We use nmigen, which is a trademarked, open-source HDL. We can use the power of Python object-orientation to create VLSI and HDL. Verilog is an output from nmigen, right. We don't just use traditional verification and development; we have unit tests at every single level, because we are trained as software engineers. Most hardware engineers have never heard of git; their method for doing backups is multiple zip files. We also use formal correctness proofs and formal verification as unit tests within our test suite, down to the lowest level. For each module you just run "python3 setup.py test" and wait for it to complete. https://symbiyosys.readthedocs.io/en/latest/ We are also working closely with Sorbonne University because we want to ensure that there is full transparency right down to the silicon. 
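To illustrate the "formal correctness proofs as unit tests" idea in plain terms (this is not Libre-SOC's actual nmigen/SymbiYosys flow; it is a hypothetical, self-contained Python sketch, with an exhaustive check standing in for a formal proof over a small input space):

```python
# Illustrative sketch only: a gate-level model of a ripple-carry adder is
# checked exhaustively against the mathematical specification (modular
# addition), in the spirit of "proofs as unit tests" run on every module.
# Libre-SOC actually does this with nmigen HDL and SymbiYosys.

def ripple_carry_add(a: int, b: int, width: int) -> int:
    """Model a ripple-carry adder gate by gate (XOR/AND/OR only)."""
    carry = 0
    result = 0
    for i in range(width):
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        s = abit ^ bbit ^ carry                          # full-adder sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))  # full-adder carry out
        result |= s << i
    return result

def prove_adder(width: int) -> bool:
    """Exhaustive 'proof' over the full input space for a small width."""
    return all(
        ripple_carry_add(a, b, width) == (a + b) % (1 << width)
        for a in range(1 << width)
        for b in range(1 << width)
    )

if __name__ == "__main__":
    assert prove_adder(6)  # 4096 cases: the gate model matches the spec
    print("adder model verified for width 6")
```

As noted in the talk, the caveat applies here too: the "proof" is only as good as the specification line `(a + b) % (1 << width)`; if the same person writes both the model and the spec with the same wrong assumption, the check passes anyway.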
Our ultimate goal is to have the GDS-II files be publicly reproducible, and for people to be able to etch away the actual silicon, take photographs, and verify that the silicon produced was genuinely what was in the GDS-II files -- not for someone to do that under NDA, but for it to be done by an independent third party who can be trusted. Or even multiple independent trusted third parties, more to the point. #### What challenges does a crypto-wallet ASIC face? * Industry-endemic paranoid 5-level-deep NDA chain. Foundry NDAs themselves are under NDA. Sharing between teams inside the same company is prohibited! Cell libraries: NDA'd. PDKs: NDA'd. HDL designs: NDA'd. ((Also SE interfaces are, you guessed it, most commonly NDA'd.)) * Power-analysis attacks. Timing attacks. EMF attacks. Standards verification (FIPS ain't it). Toolchain attacks. Cacheing is out: performance will suck. * Achieving full transparency - a critical goal - is almost impossible to achieve. Ultimately, you need to buy (or build) your own Foundry. * Production and development costs (NREs) almost certainly dwarf the sales costs. Right. The challenge that a crypto-wallet ASIC faces is first and foremost that there is a massive, industry-wide, paranoid, 5-layer-deep NDA chain. Even the foundry NDAs which give you access to the PDKs (process design kits) and the cell libraries from the foundry -- the NDAs themselves are under NDA. It's insane. You can have two teams in the same company working at the same time, using the same foundry, using the same geometry, and they are contractually prohibited from talking with each other. It's mentally ill. Your cell libraries are NDA'd. The PDKs are NDA'd. Third-party HDL: NDA'd. Firmware, too, NDA'd. The firmware might reveal too much information about the HDL it is supposed to be associated with, so it's NDA'd. It's completely insane. Unfortunately, any point in this NDA chain could be an attack vector. 
It's been demonstrated that it only takes about 3,000 gates to implement a processor which, if put onto the memory bus, can compromise the entire design. The Intel Management Engine was detected and deliberately marketed as a management engine, but the thing is, you can activate and program those hidden CPUs via power fluctuations or by EMF: you can broadcast on a high frequency, using I think even Morse code or other techniques, at a rogue onboard processor to get it to activate and program it. It's mad. The level of attacks that you have to mitigate against is just enormous. Power analysis attacks: I remember going to a conference at IIT Madras a few years ago where they showed that just the existence of the floating point unit was enough to compromise 95% of the secret key for Rijndael, even though no actual floating point instructions were being executed. It turned out that the decode engine had a link to the floating point unit. That was enough to detect what instructions on the integer side were being used. https://www.youtube.com/watch?v=bmsvWvus3mc You have timing attacks too. An EMF burst can actually be used to change bits inside the ASIC, including registers that have been switched off and which you would normally expect to prevent writes to certain other areas of the ASIC. https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-811.pdf You then have problems with standards: if there is even a standard at all, it might be written incorrectly, or you might have conformance test suites for FIPS approval -- if those are not written correctly.... You also have toolchain attacks, and cacheing attacks. Achieving full transparency, while critical, is almost impossible. Ultimately you need to buy or build your own foundry, which is the only way to guarantee there are no side-channel attacks introduced at the foundry. 
The other issue is that, this being a small market, your production and development costs (NREs) are almost certainly going to dwarf any sales in the hardware wallet market. How can we mitigate some of these issues? #### Pragmatic solutions * Use formal correctness proofs at every step. Caveat: proofs are only as good as the mathematicians that write them! * Work with standards bodies (e.g. OpenPOWER Foundation ISA WG) and their members with similar interests. Custom Extension with zero public review == bad. * Unstable PLLs to detect rogue EMF * Develop a product that has a larger total market (a SoC (system-on-a-chip)) * Accept that removing some levels of NDAs will be "out of reach" for now. * Use E-Fabless "ChipIgnite" to at least get the NREs down. * Ultimately: buy your own foundry, make the PDK and cell library public. Only use Libre VLSI tools (limits to around 130 nm at the moment). Everything is "early days" in this space. Firstly, use formal correctness proofs at every step. The caveat here is that if the mathematician makes some assumptions, particularly if the mathematician is the same person who wrote the HDL, then they end up proving their incorrect HDL is correct. They 100% guarantee that the proof matches the HDL, but the HDL or the technique may be wrong. So one mitigation here is to have separate people write the proof, and for the proof to be verified against other people's independently developed designs. You need collaboration here, not competition. Working with standards bodies is extremely important. The OpenPOWER Foundation has an external RFC process. We do not have to be a member and develop everything in private or in secret in order to submit extensions to the OpenPOWER Foundation. Whereas with RISC-V, you are forced to join and sign the commercial confidentiality clauses, and the entire process from that point on is no different from the ITU. 
The other aspect is that if you develop a custom extension, it gets zero public review, and clearly this is bad. Another thing: for detection of rogue programs and rogue EMF bursts, there is a technique of using an unstable PLL, where you deliberately route the PLLs throughout the entire ASIC and monitor the phase changes. If the phase signature over time is not what you would expect when running a specific program, then you can detect that activity. You can detect external EMF bursts, you can detect when people are trying to wiggle the power supply, and you can detect if a rogue program is being run at a particular time, in real time. The only thing you have to watch out for is to make sure the PLL signatures themselves are not used as an attack vector to leak information. You have to secure the actual signature mechanism itself. To solve the market size problem, develop a product that has a larger total market and solves other problems. Or you can develop this as part of another product; it can be a side sales channel rather than an isolated product, for example. At a practical level, we might have to accept that some NDAs can't be removed, but we should go as far as we can. ChipIgnite, I think, charge about $8,000 and you get maybe 300 ASICs out of that, and you can do the math on what you can sell a thing at. E-fabless have taken many of the things for which a foundry or MPW service would normally charge $50,000, got them down to an automated process, and knocked that off the cost they charge you. Normally it would be $50,000 for running a tool in what would be a manual process, but e-fabless has completely automated that. https://platform.efabless.com/chipignite/ https://fosdem.org/2022/schedule/event/efabless/ Ultimately, you need to buy your own foundry and make the PDK and cell library public. The SkyWater 130nm PDK is now publicly available, which is a start. 
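The PLL phase-signature idea described above can be sketched in software terms. This is a toy simulation, not a real detector design: the function names, the sample traces, and the threshold are all invented for illustration. The point is simply that a measured sequence of phase deltas is compared against an expected signature, and a large deviation flags tampering.

```python
# Toy sketch of phase-signature tamper detection: compare observed PLL
# phase deltas against the expected signature for a known workload and
# flag deviations (e.g. from an EMF burst or a power-supply glitch).
# All values and the threshold are illustrative, not from a real chip.

from statistics import mean

def anomaly_score(expected: list, observed: list) -> float:
    """Mean absolute deviation between expected and observed phase deltas."""
    return mean(abs(e - o) for e, o in zip(expected, observed))

def detect_tamper(expected: list, observed: list, threshold: float = 0.05) -> bool:
    """Flag tampering when the trace deviates too far from the signature."""
    return anomaly_score(expected, observed) > threshold

# Expected signature for a specific program, plus two observed traces.
expected = [0.010, 0.012, 0.011, 0.010, 0.013]
normal   = [0.011, 0.011, 0.012, 0.010, 0.012]
glitched = [0.011, 0.150, 0.140, 0.010, 0.012]  # EMF burst mid-trace

assert not detect_tamper(expected, normal)
assert detect_tamper(expected, glitched)
```

As the talk notes, a real implementation would also have to protect the signature channel itself, since the signatures leak information about which program is running.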
Some cell libraries are publicly available, but not every foundry is following suit yet. They are interested, but not moving yet. https://skywater-pdk.readthedocs.io/en/main/ Another thing is that you should only use libre VLSI tools: if you want people to be able to independently verify your GDS-II files, you can't rely on a proprietary toolchain. At the moment that limits you to about 130 nm, which can achieve around 700 MHz, which is not that bad. Bottom line: everything is early days in this space. #### Find us online http://libre-soc.org/ libera.chat IRC: #libre-soc mailing lists: http://list.libre-soc.org/ http://nlnet.nl/PET https://libre-soc.org/nlnet/#faq # Topic areas We are going to talk about 7 topic areas. We're going to define what each category is, then there will be an opportunity to ask questions about things in that category. We will also do questions for presenters. I mainly wanted to show what the items are, since some of the questions you have for the presenters may better fall into one of these topic categories. We will talk about pain points; architectures; boot/firmware/supply chain; a lot of discussion about cryptographic primitives; threats and countermeasures; and then what I call "edge topics", which are the things that don't fall into the other buckets. Then we will close with a discussion about how we build a secure infrastructure ecosystem and what our next steps are to enable that. The way to ask questions is to raise your hand using the software. If you have not used that capability before, it is available under the participants tab or screen in the Zoom app. Q: How does ReRAM compare to F-RAM? I've used F-RAM before but I don't know how the technologies are different. F-RAM is typically for very high-endurance applications and for faster operations. 
When it comes down to scaling below 40 nm, F-RAM conventionally doesn't scale, so they would need to change to a new material set, which is ... so it's fair to say traditional F-RAM stops scaling around 40 nm. Most of the F-RAM you can buy now is on legacy nodes because, simply put, the physics doesn't scale. If you go to 28 nm, they have to use a new module, but then you have yield issues that need to be addressed. ReRAM is easier to scale down because it works at the atomic level, and it has better retention. Compared to F-RAM, its endurance and cycling are limited: with F-RAM you can do a billion cycles or more. ReRAM is comparable to flash, or perhaps better than flash, and it scales better. More important is that it's secure: it's hard to detect the difference between the on and off states, and it's not based on charges. For security applications, I think ReRAM is more suitable in general. ## Pain points Semiconductor support is often limited to SEs Lack of secp256k1 (and negative sentiment) IP restrictions, patents & NDAs Devkits, lack of which is made worse by NDAs NASCAR problem (ecosystem friction) One-off cryptography & wallet APIs Future proofing as technology evolves & co-existence with legacy No one has all the expertise necessary in-house Lack of available cryptographer talent (and incentives in academia) Market size, government support (and limits) Support for continued investment in secure infrastructure The obvious problems for developers are NDAs and licensing issues, which make it hard to get devkits; it's hard even to understand what our choices are when we can't look at a product and understand how it works without doing a deal with the vendor. That's painful, and made worse by NDAs too. 
There's the NASCAR problem, where there are lots of different options to display and implement; different APIs work differently even though they are largely doing the same thing; and then there are lots of restrictions on things coming from applications and keys and other things you need in order to use some API. There's a lot of one-off cryptography in wallet APIs: "let's just make it work" rather than thinking about how to make it future-proof and workable with others. We need to have a lot of discussions about future proofing. Silicon is not going to evolve as fast, but there are ways we can coexist with each other. It was brought up before that nobody has all the expertise available in-house: individual software companies don't necessarily have all the cryptographers or cryptographic engineers or silicon support, and no one party has all the expertise. In particular, I feel there's a lack of cryptography talent and incentives in academia around the hardware side of this. We see lots of academic support for quantum crypto and things of that nature, but we don't see that brought to, okay, how do I unwrap Schnorr and add an adaptor signature and be able to do that in RTL silicon? We're not seeing that kind of cryptography work happening. We're all trying to raise money to make these things happen. How big is the market; government support; and basically the goal is: how do we get continued investment and continued security? Are we missing any pain points here? Going through that list, does anyone want to add something? Communication and collaboration is a pain point. One of the problems is that there's a general attitude of competition worldwide, and I think it's worth emphasizing that this has to be a collaborative effort. It's too big and complex for it not to be collaborative. I feel like there's a certain amount of discouragement from people learning cryptography. 
A good goal of "don't roll your own crypto" sometimes accidentally leads to "don't touch this, only those who are gods can understand this". "Don't roll your own" seems good, but "learn how it works" should also be strongly encouraged. Certification marks are another pain point. Maybe our own industry certification as an alternative... I know it's a "one more standard" problem, but if people really need certification more appropriate to HW wallets, then... there you go. Developing standards and curating them are hard in their own right. I think that's a pain point. The example of people writing an API once because it works, then moving on and not updating it, is an example of that kind of pain point: the fact that Standards Are Hard. https://github.com/Nitrokey/nitrokey-pro-firmware - f.ex. I see there's discussion between them and https://solokeys.com/ about a joint venture on developing software for the HW they're building. Articles talk about how quantum computing is going to crack all the keys. Or secure them, ultimately. That's really one of the big open questions. I'm a lot more skeptical about it. Even when I talk with quantum cryptography experts, there are a lot of claims out there that quantum supremacy is, you know, close or near-term. But when you actually look at the real numbers, it may be further away than we think. Still, there is a risk that in the next 10 years 80-bit security will be broken, which covers a lot of the legacy stuff; 128-bit won't be that far behind. How do we plan for this? I'm not the biggest fan of some of the approaches to algorithmic agility, because they often cause more problems than they solve. We need algorithmic flexibility, but that's really hard to design... someone mentioned my evaluation of AES-GCM; I don't think it's legacy quite yet, but I think it's on the path. We expect a lot more from our symmetric algorithms today than what AES-GCM supports. 
We need to start retiring AES-GCM now, but I don't want to support 10 different alternatives. That would be like TLS: 20 years after TLS v1.0 came out, with v1.3 close to being finalized, there are still people using SSL v2 ciphersuites and getting attacked. It totally shocked me. Agility has had its missteps as well. I had a recent discussion about Curve25519 and how difficult it is to implement in silicon RTL, compared to secp, which is technically slower. I think there's an inherent tension between certified and up-to-date. Another pain point: IMHO, the physical countermeasure shield tends to be smaller than it really ought to be (when using available off-the-shelf parts). This is why you see news stories about wallets being easily hacked with voltage glitching, etc. When I look at wallets, I see important things happening outside the umbrella of physical security, like boot, what address am I paying to, and in some cases the cryptography. These are, I think, driven by picking off-the-shelf semiconductor chips: that's what you end up with. Do you disagree with any of the pain points? I've had people say let's evangelize AES, GCM, SHA, and a lot of people aren't using those, even though they would be on my list of things that are acceptable today and sliding towards legacy. Let's move on. ## Architectures How do we establish the next generation of root of trust? There have been discussions about the nature of SE-only architectures. Maybe what we really need is just to be able to more securely store keys of different types, or state, or other things. I've had some interesting discussions, and some good points were made that what we really need is better accelerators in silicon, and to let the cryptography be handled at higher layers. I think Coldcard has announced a card that has two SEs on it. The two SEs can basically have different roles and functions. 
Or "throw it all into a single chip": SGX is an example of that, where we have secure-on-chip solutions instead of multi-chip solutions. There are a number of examples of that. Someone has a virtual SGX (vSGX) that runs on ARM, which I thought was interesting. If we're looking at the future over the next 5-10 years, is the world moving more and more to collaborative key generation through secure multi-party computation and threshold signatures? Does that lower our reliance on needing keys and signing in trusted secure hardware? FROST? ROAST? Are we missing any important architectures? Is there something we haven't talked about? To address the challenge of the "crypto" evolving faster than chips can be made, I've wondered if it would be feasible to have essentially a "write once" FPGA-type of device. As a consumer product developer, you write a custom FPGA bitstream (e.g., including the latest new cryptography algorithms) to the "FPGA" chip once at the factory. After that write, the device is fused off, preventing further "code" changes. Ideally, there would also be a way to query this chip for a hash of the bitstream that is encoded within it, so you can correlate the "code" in the chip to the open-source code used to produce it. Not sure how feasible this is from a hardware perspective. Write-once FPGAs were the original way FPGAs were fabbed; they were called CPLDs. Just to add here: WASM as a SW architecture with HW (Enarx?). I remember there are some attempts at ZKPs and homomorphic HW by MS, Intel and others (do correct or add if you have links at hand). I was investigating processing-in-memory. I found that it's largely focused on AI at the moment. Samsung released an AI processor which they are submitting to JEDEC, and it has a whole 9 instructions. This approach I feel is missing an opportunity, and I've put it into the HackMD doc already. 
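The "query the chip for a bitstream hash" suggestion above could be sketched like this. The device interface is hypothetical (a real chip would report its digest over some device-specific channel, e.g. JTAG or SPI); the point is only the comparison between the chip-reported digest and a digest of the reproducibly built open-source bitstream.

```python
# Sketch of verifying a fused-off "write once" FPGA against an open-source
# build: hash the reproducible bitstream and compare to the digest the chip
# reports. The chip query itself is hypothetical and out of scope here.

import hashlib

def bitstream_digest(bitstream: bytes) -> str:
    """SHA-256 digest of a bitstream image, hex-encoded."""
    return hashlib.sha256(bitstream).hexdigest()

def verify_device(reported_hash: str, reproducible_bitstream: bytes) -> bool:
    """Does the chip-reported hash match the hash of the open-source build?"""
    return reported_hash == bitstream_digest(reproducible_bitstream)

# Stand-in for a bitstream produced by a reproducible open-source build.
build = bytes(range(64))

# A genuine device reports the digest of exactly that bitstream.
assert verify_device(bitstream_digest(build), build)
# A tampered bitstream (even one flipped bit) fails verification.
assert not verify_device(bitstream_digest(build), build + b"\xff")
```

This only closes the loop if the hash reporting mechanism itself is trustworthy and the bitstream build is genuinely reproducible, which is the harder hardware problem the comment acknowledges.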
I've come up with a concept where, using coherent scheduled algorithms, you can distribute the processing seamlessly from the software level down to a processing element that has more direct access to memory, or to a different type of secure memory. You could have a coherent memory API. You know how, at the moment, there's a trusted execution area inside processors like x86 and ARM? The idea I had is that you could push that trusted secure execution completely off-chip, onto something that is actually closest to the secure memory itself. People ask is it in software, is it in hardware, but really it's a contiguous spectrum. You could think of a state machine that has registers and is connected to logic; or you have shared memory; or you could have microcode, or a real CPU, or a CPU with a bus fabric that doesn't have a connection to the secret but has some memory protection unit; or you can do everything in software. When you say should it be in hardware or software, there's a contiguous spectrum between those in my mind. I also see a lot of the newer zero-knowledge proofs (ZKPs) moving to "trustless setups", where as long as there is one honest participant -- or even, with ROAST, a majority of malicious participants -- the calculations can still happen. Can we bring some of that stuff down to the silicon, so that even the silicon is not as trusted, or doesn't need to be as trusted, yet can still participate in these types of things and add its particular value? Can a multi-chip secure device do some kind of multi-party computation inside the chip itself? Not over the net, not with lots of things, but purely inside the chip, and validate and prove the other chips did what they promised. 
There was some interesting anti-exfiltration stuff: if you have random nonces, someone can put a fraction of a key into the random nonce, and with the right filtering, someone looking at lots of signatures can find 4 bits, 8 bits, and after enough signatures from that device it has exfiltrated its entire private key. Well, you can do tricks where that nonce is not precisely deterministic, but it's provable -- you can prove the nonce was constructed with randomness from another party, a bit like with some of the voting algorithms. Has anyone in this group done anything with vector processors, like for video? I saw one vector processor that had a 1024-bit register. I was thinking, hey, that might be useful for crypto. Over the past 4 years I've been developing a vector ISA instruction set. Yes, accidentally, one of the capabilities that has emerged from it is being able to do 1024-bit vector adds. I think there are some interesting things that can be done with vectors -- especially with ReRAM being right adjacent to the logic, and maybe only connected to the logic of a particular function, so it's a dedicated register in a sense. That could be really powerful. ZKP RISC instruction set project https://www.risczero.com/ https://www.youtube.com/watch?v=fOGdb1CTu5c - also quite good on MPC and trust creation between strangers. Not a HW perspective as such, but... Yes, in fact I had that thought on the previous slide... I think something like threshold signatures is not there to reduce reliance on HW, but to enable *more* reliance on HW. With a single-signature wallet, you have to store some amount of entropy off-wallet. The typical failure is that the user forgets that off-wallet entropy. With multiple keys, you could store pieces in various locations and have good security with decreased reliance on a PIN/passphrase in your head. It enables the HW to shoulder more of the responsibility. 
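The anti-exfiltration trick described above -- the device commits to its nonce first, then the host mixes in its own randomness -- can be sketched as a toy protocol. This is not a real scheme (deployed designs use elliptic-curve groups and a careful commitment protocol, e.g. the "anti-klepto" work around secp256k1); here a small multiplicative group mod a Mersenne prime stands in for the curve, and all parameters are illustrative.

```python
# Toy sketch of host-contributed nonce randomness: the device cannot freely
# choose its final nonce bits (and so cannot encode key material in them),
# because the host's randomness arrives only after the device has committed.
# Uses exponentiation mod a prime as a stand-in for EC point arithmetic.

import hashlib
import secrets

P = 2**127 - 1   # Mersenne prime: toy group modulus (NOT a secure size)
N = P - 1        # exponents are reduced mod the group order's multiple
G = 3            # toy generator

def H(*parts: bytes) -> int:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return int.from_bytes(h.digest(), "big")

# 1. Device picks nonce k and commits to R = G^k before seeing host input.
k = secrets.randbelow(N)
R = pow(G, k, P)

# 2. Host reveals its randomness t only after receiving the commitment R.
t = secrets.token_bytes(32)

# 3. Both sides derive the nonce tweak d from (R, t); device signs with k + d.
d = H(R.to_bytes(16, "big"), t) % N
k_final = (k + d) % N
R_final = (R * pow(G, d, P)) % P

# 4. Host checks the device really applied its randomness to the nonce.
assert R_final == pow(G, k_final, P)
```

Because `d` depends on the host's `t`, which the device only sees after committing to `R`, the device cannot grind the final nonce to leak key bits, yet the host can verify the construction, which matches the "provably constructed with randomness from another party" point above.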
One of the big items in architecture is: what does trusted input and output really mean? Someone mentioned in a presentation, well, what is the address? Is it a valid address? Someone can have funds sent to an incorrect address, or some other kind of spoofing. But ultimately this means this information needs to be... the proof... I am not sure I am convinced by trusted input/output. What makes it trusted or secure? The screen can always be hooked up to whatever... right? The thing we're concerned with is that when the user sees information coming out of the SE, there be a minimum of other attack paths able to alter that information. So it's not that there's a particularly high-security, low-level path between the things; it's just that the invoice they are about to approve, for this much money to this address, is not going through a phone or computer. When the user says okay, or no, don't use it, by pushing a button, it's nice if those buttons don't go through untrusted elements on the way to the SE. The shorter the path, the better from a security standpoint. You chose a display because it was hard to find one that didn't have an untrusted CPU on it, so are you driving the display from an MCU? We haven't connected a display directly to an SE; you're probably thinking of us instead. Our first-generation display didn't have an integrated circuit for the controller; it was exposed on the Kapton ribbon and inspectable. I would echo the point. One thing we have talked about is that one of the most important things for a hardware wallet is to show the details of the transaction you are verifying, and putting as little distance as possible between those things is great. We're looking at putting a direct connection between the SE and our screen; it would be great if the SE itself, or the FPGA or MCU that we use, had direct access to the screen. I agree, it is not really about the input/output itself, but the important processing decisions that take place based on it.
If you can do code injection on the MCU, you can easily make the payer pay to the wrong address by displaying the wrong thing. This is not an attack on the display or keypad itself, but on the important decisions made involving this input/output. When I was investigating SELinux, they were looking to put a secure SELinux on top of X Windows and the X11 protocol, because the problem they encountered was that an attacker could put up a pop-up saying "error: your password has expired, please enter your password." The idea was that you would have an SELinux context that propagates right the way down to the GUI displaying the data. HTC had two phones; one had a small display on the back which was connected directly to the TrustZone. I don't know quite how they used that. There were also discussions about a header area not being accessible to user space. They had some way of putting into the header area of the phone an address, or maybe some day a LifeHash (if people are familiar with that technology we have been working on), that user space does not have access to. It's not pure trust, there are still attacks on that, but it allows more UI interactions to be more clearly delineated as to who is doing what. One of the challenges of architectures is that they have to fit into a larger ecosystem of chips and boards, and then off to networks and things of that nature. I was intrigued by the JavaCard having the SE doing the Bluetooth. Just how capable are these? JavaCard is unusual in that all the memory is persistent; that's the main thing about it. The runtime itself is a trimmed-down version of the Java language; it's not really the Java runtime in any form. It's usually its own custom proprietary JVM that the implementer develops, with Java-like semantics in terms of garbage collection and things like that, but with full emphasis on memory.
The reason for that is that these often run in passive-power environments: your credit card, for example, has a JavaCard SE in it, but it's not powered; it's only powered by the NFC field of the reader. So essentially your processing call will start and stop based on available power, from being in the NFC field and being removed, and it needs to persist its entire state and rehydrate when you put it back into a power field. It's a little bit different to code for, as a result, and sometimes that makes it very inefficient to implement, say, a new security primitive or a new crypto primitive in software. There are a lot of things you would normally do to make software fast that you can't do when every temporary variable is stored in EEPROM. So that's the main thing to keep in mind there. But just going back to your question about JavaCard and Bluetooth: I haven't seen an SE that does Bluetooth, but there are multi-chip packages integrated with an NFC frontend, and these are the ones often used on credit cards and things like that. The protection you get there is purely from the fact that these things exist in a single package. It's not so much the SE doing the radio communication; it's a multi-chip package that contains the SE core, the NFC frontend, and a passive antenna circuit or whatever it might be, and all of that exists in one physical package. ... the NFC applet-select commands will be redirected into the SE runtime without anything in between and without any ability to intercept them. ... if you want an architecture where some NFC is routed to one SE and some is routed to another chip, or to a second SE, with a router in front of it essentially, then that's difficult to implement, because you can't poke into those bits, by design.
For secure input/output, in our case, it basically is one of those multi-chip packages, where the radio is our one secure IO and the NFC frontend has some level of physical security guarantees binding it to the SE. There was a discussion about certified domains vs open designs. What architectures do HW wallet vendors absolutely need? SE+MCU, single chip, multi chip? Is there any specific requirement from HW wallet vendors? HSMs may need some of the same things. I'm thinking of light wallets on Raspberry Pis, or service providers that run something in data centres. Then there are the supply-chain problems too.

# Boot, firmware and supply chain questions

Firmware could be self-sovereign, and the user could truly own it instead of just "renting" it from the chip maker. These all have issues and questions like auditability of code, firmware signing, and conformance with certification and such. What are the bootloader and firmware pain points for people now? When I say this, what's the first thing that comes to your mind as a big pain point? What do you get frustrated by with firmware? Bootloader PITA: gigabytes of vendor toolchain downloads, stupid OS requirements (e.g. needing Windows 7 due to lack of driver signing for a flash-programmer driver). The OpenPOWER Foundation has a special interest group called LibreBMC. ... nightmare. The solution is that they are developing an FPGA card that is a drop-in replacement in servers. We're talking millions of units here: every server that has one of these. It's a massive proportion of the worldwide server market. From the chip supplier's point of view, the problem is not that people want a backdoor to be evil; the problem is that when you produce millions of chips, you will get returns. The key to improving quality and yield is being able to analyze those. Yeah, "analyze". So there's no good answer here. If you irreversibly change the lifecycle, then you cannot process RMAs.
In this business, you have to live with not processing RMAs effectively. I agree. I don't think it was deliberate; they just didn't think about the consequences. But it literally means you have to replace that co-processor in the field. Something I've been thinking about, and this is not my world; I work on the software side and decentralized networking tech. I've been thinking about the supply-chain direction of things... a lot of this seems really great. One direction I'm nervous about, ever since Google made their announcement a few weeks ago about moving away from passwords and towards signing in with your device instead: with a lot of the push in some of the WebAuthn flows and things like that, it looks like this is a key that is generated by the device, not the user uploading the key to the HSM. If the user can generate the key and upload it to the HSM, that's fine and I have no concerns. But I am nervous about a push that is anti-self-sovereign on identity and authentication. What I suspect will happen is: well, here is a trusted set of manufacturers who are generating the keys for the user, and we think it's a security vulnerability if a user generates and uploads their own key to the HSM. So then you have an approved set of devices for identity. My nervousness is that if things go in that direction it would be a big risk. I'm curious if people think this is even a viable concern at all. My gut sense is that they will make this push. That's the interest from the DRM perspective, the same type of reasoning and use case. Whether that use case will apply to login as well... Is there a viable reason users should be able to generate and upload their own keys into the HSM in all cases, in order to avoid concerns at the source, where your HSM could be compromised at the production point? You could have a limited OCAP where, if you have the OCAP, you could use it to ... since the 70s..
anyone has brought this to new processors at the low end. Maybe we can evangelize to that community to get them here. There's a slot in WebAuthn for attestation. FIDO has a similar thing. Anyone can write the software to do FIDO; we've got examples of that on the Ledger HW, but it's not certified, so the parties that care about that say, yeah, you did the protocol okay, but you didn't present this one little spot to demonstrate a proof of certification. Almost nobody checks this cert except on some government websites. But the capability is in the protocol design. I think this is part of the FIDO certification stuff: are you a trusted manufacturer? Maybe I'm wrong, but this has been me trying to read into the literature that is available, which I understand as a complete outsider. It's often these little things that get you in the end. With Twitter, lots of people were saying, oh great, Twitter has open APIs, we're building companies on top of it; but then there's that one little API call which is "please use my API token and let me do actions", and they started limiting that and shut down that entire ecosystem. So you always have to look at where people are able to lock you in, maybe in ways you're not anticipating. There are some reasonable things with supply chain where you want to say, hey, did this come from where I said it came from? How many levels back do you verify your supply chain; is it 3 levels? So there's a secret in the SE written into that, and then you validate on the user side that the chip came from the right supply chain. I was looking at the WebAuthn site to get to where my concern is. It's the part where they talk about attestation: you can verify that the public key comes from an authenticator you trust and not from a fraudulent source. I was trying to test whether it's even possible to write one of these authenticators in software.
Right now the only authenticators are either tied to specific hardware or are test ones for servers. I guess I've taken a lot of time on this, and this is not specifically hardware related. I would like to advocate for something that I think this community could take up and that would be useful: we shouldn't accept a situation where the supply chain is... where the key material is generated at a likely-to-be-compromised central source. You should at least have the option for users to upload their own key material, as a way to prevent the supply-chain attack of expecting that keys come specifically from a single trusted organization. Having the manufacturer generate a key that comes on the device, for users who want to use that, doesn't bother me; but having that become the expected norm, that's not okay. I've tried to use FIDO on a YubiKey while generating my own key on it, except for GPG/SSH which I figured out, but it seems like... but yes, the direction we want to push for is like you said. Trezor and Ledger both have MicroPython, and I don't think either of them is a member of the FIDO Alliance; I think you can implement the protocol, it's just that right now people aren't checking that additional attestation. I could be wrong, but that's my basic understanding. You bring up a good point; I am surprised, frankly, how much people accept entropy from an unknown source. We are talking about trusted design, but note that even if a TRNG design is public and vetted (which they usually are not), it would be possible in many cases to tweak a few knobs in manufacturing to bias one wafer so it generates poor entropy. OK to have mfg secure boot, but only if it also supports a multi-stage bootloader, i.e. the vendor's "secure" bootloader boots your own bootloader where you control the keys. Basically, I wouldn't trust mfg secure boot and would like my own firmware signing keys.
We are trying to move in the direction of zero black-box code. Agree, but we would need details on the secure-boot step with no NDA. Yes, chip initialization code does reveal a lot about how memory protection is designed. So letting the OEM see code from the first instruction, or load their own, requires a certain level of disclosure. I wanted to bring up, on multi-chip architectures, one thing we haven't talked about, just mentioned in the chat: external memory encryption, i.e. any sort of external flash chip being able to support on-the-fly encryption and decryption of all memory accesses. That often becomes a weak point. Sometimes there's sensitive data that needs to go into external memory due to size limitations (e.g. biometric templates), sometimes there is execute-in-place (XIP) code that lives in external flash, and it's often not covered by the memory protection that your MCU or SE might offer. I don't know if people have thoughts on that, or if there are implementations out there where an MCU is able to do external memory encryption.

# Cryptographic primitives

There was a discussion earlier about SHA-3 versus using BLAKE or Poly1305 or whatever, and the advantages of one or the other in software versus hardware. We have a lot of things going on in curves. Beyond ECDSA, NIST (through US DHS) has said they will start mandating an upgrade to P-384, not just P-256. So even if you are doing government cryptographic standards, you're going to have to upgrade your chips. As for the secp curve: for a long time secp has been done with ECDSA in software, but we also have Schnorr emerging with Bitcoin, and there's really no direct hardware RTL-type support for secp. I think this is really important, because when I look at some of the emerging things like FROST, ROAST, and some of the other emerging standards, we really need secp with Schnorr on the chip. We probably also need some ability to tweak it with things like adaptor signatures too.
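Returning to the external-memory encryption point raised above, the core mechanism can be sketched as address-tweaked encryption: each block of external flash is enciphered under a keystream bound to its address, so identical plaintext blocks at different offsets look unrelated and blocks cannot be silently swapped. A real controller would do AES-CTR or XTS in hardware; the SHA-256 keystream here is just a self-contained stand-in for the illustration, and all names are invented.

```python
# Minimal illustration of address-tweaked external-memory encryption,
# the kind of on-the-fly transform an MCU could apply to XIP flash.
import hashlib

BLOCK = 16

def keystream(key: bytes, addr: int) -> bytes:
    """Per-block keystream bound to the flash address (AES stand-in)."""
    return hashlib.sha256(key + addr.to_bytes(8, "big")).digest()[:BLOCK]

def xcrypt(key: bytes, base_addr: int, data: bytes) -> bytes:
    """Encrypt/decrypt data as it crosses the external-bus boundary."""
    out = bytearray(data)
    for i in range(0, len(data), BLOCK):
        ks = keystream(key, base_addr + i)
        for j in range(min(BLOCK, len(data) - i)):
            out[i + j] ^= ks[j]
    return bytes(out)
```

XOR with a per-address keystream makes the same function serve both directions; a production design would also add authentication so tampered blocks are detected, which this sketch omits.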
Is this going to be purely unwrapped in RTL, or will it be some kind of simple state engine at the SE level? How do we do this in hardware? Has anyone unwrapped Schnorr? Obviously there are new IETF standards. I've heard a lot about the difficulties of doing this in ASICs/silicon. The IETF standards have the problem that they use a cofactor, which causes problems with multisig; secp doesn't have that, but for 25519 they addressed it with Ristretto and Decaf. The two I care most about are those two, and I'm really not concerned about this one or the IETF ones: I want Ristretto and edRistretto. [jesse: I would like to see a Schnorr API that allows for a user-supplied challenge hash, so it can be used for many different types of Schnorr signatures, including MuSig, FROST, and adaptor signatures] [simon: I would throw in a vote for Ed25519/EdDSA, for authenticator/FIDO2 reasons. It is one of the authenticator methods specified by FIDO and is becoming more and more common with e.g. OpenSSH installations. YubiKey supports it as of a couple of years ago.] A lot of interesting stuff is going on with BLS curves, which is a whole different kind of crypto with different security assumptions that I don't think I'm able to understand. When you look at a BLS crypto proof, it often looks a lot different from your standard elliptic-curve kind of proofs, at least when I look at it. There are emerging symmetric and password-authenticated technologies. Moving up to AES-512, obviously. TLS is increasingly using ChaCha20. Then there's ZKP stuff like PLONK and Halo. Quantum resistance. What I would really like to know is the most important thing for you that doesn't exist now, in particular if it's not in this list. Maybe you could put into the chat box: what is the one cryptographic primitive that is not in silicon that you really need to have in silicon?
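The Schnorr-API-with-pluggable-challenge idea in the bracketed comment above can be sketched concretely: the group arithmetic stays fixed, and MuSig-style or FROST-style signing differs only in the challenge function the caller supplies. This is a pure-Python secp256k1 illustration only (not BIP340's x-only encoding, not constant-time, not hardened); `default_challenge` is an invented plain-Schnorr challenge, not any standard's.

```python
# Hedged sketch: Schnorr signing where the challenge hash is a parameter,
# so one hardware primitive could back many Schnorr-family schemes.
import hashlib
import secrets

# secp256k1 parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(A, B):
    if A is None:
        return B
    if B is None:
        return A
    if A[0] == B[0] and (A[1] + B[1]) % p == 0:
        return None  # point at infinity
    if A == B:
        lam = 3 * A[0] * A[0] * pow(2 * A[1], -1, p) % p
    else:
        lam = (B[1] - A[1]) * pow(B[0] - A[0], -1, p) % p
    x = (lam * lam - A[0] - B[0]) % p
    return (x, (lam * (A[0] - x) - A[1]) % p)

def point_mul(k, P_):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P_)
        P_ = point_add(P_, P_)
        k >>= 1
    return R

def default_challenge(R, P_, msg):
    """Plain Schnorr challenge; MuSig/FROST would substitute their own."""
    data = b"".join(x.to_bytes(32, "big") for x in (R[0], P_[0])) + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(seckey, msg, challenge=default_challenge):
    P_ = point_mul(seckey, G)
    k = secrets.randbelow(n - 1) + 1
    R = point_mul(k, G)
    e = challenge(R, P_, msg)
    return R, (k + e * seckey) % n  # s = k + e*x

def verify(P_, msg, sig, challenge=default_challenge):
    R, s = sig
    e = challenge(R, P_, msg)
    return point_mul(s, G) == point_add(R, point_mul(e, P_))
```

Because `challenge` is a parameter, the same sign/verify core serves plain Schnorr, an aggregated-key challenge, or an adaptor-tweaked one without touching the group arithmetic, which is the flexibility argument made above.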
End-user verification of entropy being truly random could be cool (e.g., run a test that dumps millions of samples to microSD, in our case). We're using Bunnie's avalanche noise source in Passport and already have this code, but it's not currently exposed to the user. I have a thought experiment: how many of the algorithms on this page are based on big-integer arithmetic and modular reduction? I think almost all of them. There are differences in some of the things. We actually have a slide about acceleration. Obviously we can accelerate things with ASICs and other dedicated hardware, but then what are the security properties? Are there some particular things where accelerating in protected silicon has some kind of advantage? So my next question is: how long between when an algorithm is developed, when it becomes actually useful, and when it becomes cracked? What's the lifecycle of the algorithm and its usage, on average? Is it 3 years, 5 years? I'd say it's longer than that. How old is AES at this point? Over 15 years old? Re: cryptographic primitives, I think you need key derivation as protected as anything else; in Bitcoin, for example, it doesn't make sense to protect the mnemonic and ECDSA but not protect the intermediate step of deriving the root seed and so on. When you develop some hardware, particularly something complex using big-integer arithmetic, the amount of investment and the delay between when you start the ... and when it goes through certification mean that, if the time period is 5 years, which is not unreasonable, you're halfway through the lifecycle of the algorithm you're implementing. There are a lot of algorithms that have decades of lifespan. There are a number of primitives that you might want to put into the chips themselves, like big-integer hardware that does these sorts of things and doesn't have side channels; this is a general-purpose thing that we're going to need for the indefinite future.
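On the end-user entropy verification idea at the top of this passage, a host-side sanity check over a dump of raw TRNG samples might look like the following sketch: a monobit frequency test plus a simple repetition-count test, loosely in the spirit of NIST SP 800-90B health tests (the exact cutoffs here are illustrative assumptions, not the standard's values). Passing such tests proves very little; failing them proves a lot.

```python
# Sketch of host-side checks a user could run over a microSD dump of
# raw TRNG samples.
def monobit_ok(samples: bytes, tolerance: float = 0.01) -> bool:
    """Fraction of one-bits should be close to 0.5."""
    ones = sum(bin(b).count("1") for b in samples)
    frac = ones / (8 * len(samples))
    return abs(frac - 0.5) < tolerance

def repetition_count_ok(samples: bytes, cutoff: int = 35) -> bool:
    """Fail if any byte value repeats back-to-back too many times."""
    run, prev = 0, None
    for b in samples:
        run = run + 1 if b == prev else 1
        prev = b
        if run >= cutoff:
            return False
    return True
```

A real dump tool would chain several such tests (or hand the file to a suite like dieharder, linked later in these notes) rather than rely on any single statistic.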
So yes, it's absolutely worth building it correctly, and an issue we have to deal with is the fact that these things take a long time to build and then certify. That's why we should stick to things that we know are good and not move on to the new hotness. https://blog.trailofbits.com/2022/05/03/themes-from-real-world-crypto-2022/ - had something in mind about key derivation to add to that excellent note. Garbled circuits have been around for a while. Like bigint math, it'd be nice to have special-purpose verified silicon to program against... Hm, yeah, I haven't thought through the implementation there. It's mostly an intuition that a somewhat algorithm-independent set of hardware-etched logic representing a higher level of abstraction, such as secure bigint operations, would be useful for speeding up the lifecycle of implementing these functions. I separated cryptographic primitives from cryptographic protocols because often the protocols can be done in less secure chips, provided that the primitives are strong. But that's not always true. On my own wishlist, across all these different kinds of protocols, is Schnorr aggregation: not just the quorum multisigs and thresholds, but things like adaptor signatures and tweaks. Right now, NIST P-256 could in theory take advantage of some of the new adaptor-signature capabilities, but because there's no way to insert the tweaked value in the right spot, there's no way to do it, and you have to do it in software. There are things that we should not be doing in any way except software, like MPC. MPC has far more promise than reality right now. The primitives that go into the hardware ought to be things like fast exponentiation, but not MPC. It is really a layering thing, where the things that we want to design correctly are primitives that people will be using for things that have not been invented yet.
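The "fast exponentiation without side channels" primitive mentioned above has a well-known structural answer worth sketching: a Montgomery ladder performs the same multiply-and-square sequence for every exponent bit, unlike naive square-and-multiply, whose data-dependent branches leak the exponent. Python itself is of course not constant-time; this only illustrates the structure the RTL would pin down.

```python
# Illustration of the regular operation pattern behind a
# side-channel-resistant modular exponentiation primitive.
def ladder_pow(base: int, exp: int, mod: int, bits: int = 256) -> int:
    """Montgomery ladder: fixed multiply+square per bit, fixed bit count."""
    r0, r1 = 1, base % mod
    for i in reversed(range(bits)):
        bit = (exp >> i) & 1
        # Both branches do one multiply and one square, in fixed order;
        # only which register receives which result depends on the bit.
        if bit:
            r0, r1 = (r0 * r1) % mod, (r1 * r1) % mod
        else:
            r1, r0 = (r0 * r1) % mod, (r0 * r0) % mod
    return r0
```

In hardware the bit-dependent register selection would additionally be done with constant-time conditional swaps rather than a branch; the fixed iteration count (`bits`) is what makes the timing independent of the exponent's length.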
I'm challenged in trying to understand when these have to be accelerated at a very trusted level, with RTL in a hardened SE, versus which of them we can do wherever it can be done fastest, which might be an MCU or CPU. I don't think you have to go back to a root key hardened on the chip if you are doing FROST... yeah, that's something I've been looking into recently: how can we generate a number of FROST setups, or a number of FROST and MuSig setups, from a single root seed, like we do with BIP39? Or at least you would have a tree of different polynomials or something like that, which could all be re-derived from the same seed, but there's no established mechanism for doing that yet that I've seen. From a hardware perspective, it would be nice if we could have a single unit of secure entropy that can interact with different types of algebraic structures inside the secure element, in a flexible enough way that we don't have to be tied to a single type of signing algorithm but could support a whole range of them. With Schnorr there are many different variants, MuSig, FROST signing, and so on, but they all have the same kind of algebraic structure with just some differences around challenge hashes, and I would like to see flexibility in how the hardware can be interacted with. This goes to the future-proofing aspect of it. That's where we need to do this. We need to emphasize that a secure random number generator is every bit as mandatory a piece of the hardware as access to RAM. Really, just as you can't do any decent work without RAM, you can't do any decent work without a secure RNG; it has to be a mandatory thing, not a bag on the side that people deal with later. 100% on that. It's something that we keep forgetting over and over again. Random numbers are always an afterthought, and they can't be an afterthought.
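As the speaker notes above, there is no established mechanism yet for deriving many FROST/MuSig setups from one root seed; the following is purely a hypothetical sketch of the shape such a scheme could take, an HMAC-based tree (BIP32-flavored) that re-derives per-setup secrets, e.g. FROST polynomial coefficients, from a single backup seed. The function names and path scheme are invented for the illustration.

```python
# Hypothetical sketch: labeled HMAC-SHA512 tree deriving independent
# per-setup secrets from one root seed, so everything can be re-derived
# from a single backup. Not a standard; illustration only.
import hashlib
import hmac

def derive(root_seed: bytes, path: str) -> bytes:
    """Derive a 32-byte secret for a labeled setup, e.g. 'frost/0/coeff/2'."""
    key = hmac.new(b"setup-tree", root_seed, hashlib.sha512).digest()
    for label in path.split("/"):
        key = hmac.new(key[32:], key[:32] + label.encode(),
                       hashlib.sha512).digest()
    return key[:32]

def frost_coefficients(root_seed: bytes, setup: int, threshold: int) -> list[int]:
    """Degree-(t-1) polynomial coefficients for one setup, as secp scalars."""
    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    return [int.from_bytes(derive(root_seed, f"frost/{setup}/coeff/{i}"),
                           "big") % n
            for i in range(threshold)]
```

The tree structure mirrors the "tree of different polynomials" idea in the discussion: each setup's material is independent (different HMAC chains) yet fully recoverable from the root seed. A real scheme would also need a distributed variant so no single party holds the whole polynomial.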
The real security often comes from deterministic approaches, like deterministic nonces that are guaranteed by the nature of how the nonce is constructed to be a number used only once. I often have this problem when talking with my interns about these things: they get confused about how randomness is used. Using randomness in a nonce is very different from using randomness for a key. When we have things like chips at different levels: I know that HTC had a critical bug because they were using a Shamir algorithm, which needs randomness at a particular phase. It worked great in user space, but they were running it in TrustZone, because of course TrustZone is more secure... but it wasn't as random as they thought it was, basically. So different levels of our architectures have unique needs, and maybe someone might say, hey, it's not good enough to have one TRNG; you need one for each major level of the architecture, because if you can watch what the TRNG is doing, that can help break things. It basically needs to be an instruction. It needs to be at that level. It has to be available in kernel mode too. One idea is continuous verification and sampling of TRNGs; that's an idea that has occurred to multiple people. Or the ability to feed some entropy into the chip, where the chip incorporates the third-party randomness along with its own internal randomness, and can then prove that it did so without revealing its seed randomness. A variety of people have done this: you would provide not just the signed object from the SE, but also a proof that the signing operation happened and worked properly, a proof that can't reveal any of the secrets. There are some interesting things in that department. You can always build something that the malware detector can't detect, and similarly you can't keep piling checks on things, because the check itself can always be subverted.
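The deterministic-nonce idea at the start of this passage can be sketched minimally: derive the nonce as a pure function of the private key and message, so the same message never yields two different nonces and no RNG failure can repeat or bias one. This is loosely in the spirit of RFC 6979 but heavily simplified; it is not the actual RFC 6979 HMAC-DRBG construction.

```python
# Minimal deterministic-nonce sketch (RFC 6979 flavored, simplified).
import hashlib
import hmac

# secp256k1 group order, used to reduce the nonce into range
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def det_nonce(privkey: bytes, msg: bytes) -> int:
    """Deterministic per-message nonce in [1, N-1]."""
    digest = hmac.new(privkey, hashlib.sha256(msg).digest(),
                      hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (N - 1) + 1
```

Because the nonce depends on the key, an observer without the key cannot predict it, yet signing the same message twice reuses the same nonce harmlessly, which is exactly the "number used only once by construction" guarantee described above.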
There must be a bottom turtle when it's turtles all the way down. I've taken to calling this "understandable security" rather than "proofs of security": if you have something whose limitations you know, and everyone knows what those are and can work around them, that's better than a thing that nobody actually knows how it works at all. It's the direct analog of the old aphorisms: security by building something so simple that everyone can understand it, versus security through obscurity and complexity. You're better off with simple things whose limitations you know, rather than "oh, this thing you thought was complex turned out not to be." bunnie huang and Paul Mackerras have both developed FPGA-level PRNGs. https://webhome.phy.duke.edu/~rgb/General/dieharder.php

# Edge topics

So part of this workshop is about finding ways to cooperate to increase the usage of these chips. For instance, why can't I use my cryptocurrency key to secure my end-to-end encryption with Signal, instead of having those be totally orthogonal? I did it a few days ago: I copied my desktop files that have the Signal name on them to another computer, because Signal doesn't carry over its archive history. That was probably an insecure thing for me to do, if I had anything sensitive on Signal. But why can't I do that? This could be an increased market for a chip. Being able to do sophisticated smart-contract calculations with accelerators; HSMs; there's been talk about no platform lock-in. I would really like to have better threat modeling and adversarial analysis. I don't know if you have seen our work on smartcustody.com, but we're looking at 27 different styles of adversaries and how to assess the risks in cryptocurrency. We'd like to be able to apply that to digital identity and these other use cases for cryptography. What we have encountered is that multisig changes this hugely.
All of a sudden, a stupid single point of failure like a FIDO key, where I have to register three of them but the FIDO service site maybe only lets me register one at a time: that's a single point of failure. But with multisig, maybe my phone gets corrupted, and that's fine, because it's not a single point of failure in my cryptographic setup. We don't really know what the emergent threat models and security requirements are. If we had ways of sharing this, we might be able to reach adjacent markets like servers, HSMs, IoT, identity, etc. http://lists.mailinglist.openpowerfoundation.org/pipermail/openpower-hdl-cores/2021-February/000239.html

# Ecosystem

How can we do better in these workshops? What should we be doing from here? Who are we missing from this discussion? I haven't heard any cryptographers among the people who have spoken today; I think that's a lack. I invited quite a few. How do we get them involved? I am pleased that we have someone from Intel. I was worried that I wouldn't be able to find the right person; thank you for being here today. Who else do we need here? Reach out to the CTO of Efabless: mkk@efabless.com What does "ecosystem" mean? To me it's much more than cryptocurrency and web3. In fact, we got into this for a government project. Server security, personal 2FA, enterprise 2FA, self-sovereign identity. What is the biggest total addressable market that you can draw a line around and make a chip that serves all of it? That's how you get the best chips. Also reach out to Anand from Fortanix. This may not be applicable here, but in case it is: please consider user experience, even if your expertise is far from the user level. When I say user experience, I mostly mean the experience of the average person in the world: non-techies, non-academics, elderly people with dementia, etc.
These people are all going to be using what you are designing, whether they know it or not, as a result of what the software programmers wrote, based on your algorithm restrictions/limitations/benefits, the API your chip exposes, the bus protocol your chip/IP exposes, etc. User experience is something that Blockchain Commons talks about, along with access issues, the third world, etc. We have talked with ... what is the minimum that can be done cheaply, because we need it to be usable by people who don't have any skills or who would have difficulty with multi-button UIs like Ledger and all that. I think the cybersecurity certifications becoming mandatory in the EU (maybe in the U.S. too?) require some of the same things: trusted chips, security models, testing implementations and so on. Then there are all the real-world issues of getting chips to important places, installing them, communications and so on. Maybe medical manufacturing also: as in, it's currently not easy to swap a monitor on a medical machine due to certifications, but maybe that could be helped with TPMs doing mutual auth or something. You have security reviews listed on this slide. I would like to see better collaboration on code audits and security audits. I have heard from chip vendors things like: hey, we have a security audit or security review by a third party for our TRNG or a security primitive or some algorithm or whatever; we can share it with you as our customer, but you can't incorporate it by reference into a security audit of your own product. I think it would be useful to be able to reuse some of these things. When I have to do a security audit of my product, it hits this boundary: oh, there is this other component that has been reviewed, but no, you can't see that audit, because the people who paid for it won't let you see it. So you either have to re-review the whole thing as a black box, or just treat it as something excluded from your security review.
On the topic of certifications, what should that look like? We should probably avoid a centralized certification authority that can become a gatekeeper, charging fees, etc. Can we do this in a decentralized way? Prove your compliance algorithmically/cryptographically? Certification tends to disqualify open source and transparency, so as a starting point, focusing on real testing at the chip level and accepting open source would allow co-existence of open and closed solutions. If your algorithm is changing quickly, maybe you don't want to... cost per wafer probably dominates a lot of the decision at the ... If you can make some really tiny piece of silicon and fill your wafer with more dies, tiny dies, then could you do something? Are people thinking about dongles and things, where you don't need to trust the ... but because you trust the dongle that you're plugging in, it does some sort of offloading, acceleration, or just trusted execution? Formal verification, etc.: you just sidestep the untrusted zone. I don't know. It's definitely an issue. Blockchain Commons was looking into a certification mark under trademark law, and into how to do an SRO for what we were calling Gordian Seal, looking at other certification marks. How do we do this in a relatively decentralized way? It gets hard, because on one hand pure self-attestation is what a lot of SROs (self-regulatory organizations) do, and it just hasn't had much meat behind it; it's just people claiming they ran a bunch of tests. The big SRO is Underwriters Labs, which has no legal authority to do what they do, but they have become accepted in courts, not by law, as a body that can demonstrate that a particular industry is meeting standards, and you have less liability if you have been properly sealed by Underwriters Labs. There's a whole lot of legal stuff there that I'm not an expert on.
I've been interested in cooperative approaches: can I get 80 or 100 hours of peer review in return for my experts doing 80 hours of peer review of your products? Can we do time exchange? Can we do cross-certification, rather than having a centralized body? There would be clear standards, but it's not just a... I actually ran into this with TLS. As coauthor of the standard, I was asked by Verisign, which was new at the time, to verify that anyone who wanted to request a certificate from Verisign was using secure software. But this was also a sales obstacle for them. We did this for about a year, and nobody else wanted to do it because of the liability, but I figured that if TLS or the certificates had problems, my own company would be in trouble anyway, so I accepted that liability. We charged only $5,000, and over half of the companies that sent us server software failed in the first hour of review. This caused such a problem for the sales department that they said: this is slowing down our sales, and we need to get more servers out into the market. So they moved to a self-certification regime, and everyone ran the tests, but fundamentally it wasn't much better; I'm pretty sure half of them would still have failed our quick examination of the code. It's not an easy problem, but I would like to find some answers there. On your point about "co-opetition": we are converging on an idea to build an Open Secure Element Architecture in a similar way to RISC-V. Someone combined a foundation with certification marks and anti-trust provisions in the charter, and then you have something that can be trusted by everyone and has the legal teeth to go after anyone who breaks the rules. With the certification mark, if someone declares that they are self-certified and have passed the compliance suite, and they haven't, then you should be able to come down on them from a great height with trademark law.
If someone tries to undermine someone else in a competitive environment, you could come down on them with anti-trust and trademark legislation. I talked with attorneys about this, and one of the obstacles is cost. Not just the million and a half to establish this, but the lawyer and legal fees to set things up in various jurisdictions, to become what ISO would call a standards organization meeting certain criteria, one that can handle the anti-trust and liability side of this problem. Then you also have the ongoing costs of maintaining that, prosecuting violations, and updating the standards. It's doable and it might be worthwhile. But when we have people raising $4-6M for the next round of their hardware and we're saying we need $3M between all of them for legal work, it's hard to get that kind of money from the various participants. Platinum membership in the OpenPOWER Foundation is $100,000/year and people pay it. Blockstream became a founding member of the RISC-V Foundation, and it was expensive. There are some opportunities in this space. There's also the issue that if you have an open-source project, can you even get it certified if you don't pay? You should be able to; FIDO kind of has this.
There's some minimal membership level that

### More notes & scratchpad area

https://www.cnx-software.com/2022/02/08/open-source-fpga-asic-efabless-chipignite/

## Agenda

## Overview

[Christopher Allen](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/2)

* @ChristopherA (twitter & github)
* ChristopherA@LifeWithAlacrity.com (email)

[Blockchain Commons](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/3)

* https://www.BlockchainCommons.com/ (website)
* https://github.com/BlockchainCommons/ (github)
* team@BlockchainCommons.com (email)

## Presentations

- [ ] CrossBar
- [ ] Proxy
- [ ] Tropic Square
- [ ] Libre-SOC
- [ ] Supranational

### Crossbar presentation

### Proxy presentation

Slides: https://hackmd.io/@simonratner/rkUkjkNu9

### Tropic Square presentation

### Libre-SOC presentation

YouTube: https://www.youtube.com/watch?v=us061o4PBZs

- Transcript: <https://libre-soc.org/conferences/siliconsalon2022/>

### Supranational presentation (?)

## Topics

### Introduce Bryan Bishop

[Bryan Bishop](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/16)

### 1. [Pain points](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/18)

### 2. [Architectures](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/20)

### 3. [Boot, firmware & supply chain](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/22)

### 4. [Cryptographic primitives, protocols & acceleration](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/24)

### 5. [Threats & countermeasures](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/30)

### 6. [Edge topics](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/32)

### 7. [Building a secure infrastructure ecosystem](https://hackmd.io/@bc-silicon-salon/rkxbd6rFw9#/33)
