# Sensemaker-lite

- *Sensemaking*: Process by which **people** give meaning to their **collective experiences**.
- *Reputation*: **Overall** quality or character as seen or judged by **people** in *general*.

This technical design is based on the following [design paper](https://neighbourhoods.network/Distributed_Social_Sensemaking.pdf).

Note: *Credentials*, *Roles*, *Badges*, *Scaling*, *Limiting*, *Weighting* and *Bridging* have been purposefully left out of this design, but have been taken into consideration for later implementation.

# Design

## Goal

**Sensemaker** is a DNA that enables groups to add extra metadata to any entry from any other DNA, and to process that metadata into useful insights using user-defined processes. These insights enable the group to refine a set of entries into a curated collection (called a context).

![](https://i.imgur.com/6eNvZ3s.jpg)

## General Design

The *Sensemaker* harvests entries from other DNAs called ***provider dnas***. Agents add logic and metadata to them (stored in the **sensemaker dna**), and the Sensemaker finally provides contexts to other DNAs called ***client dnas***.

```mermaid
flowchart LR
  subgraph Provider
    id1[(Entries)]
  end
  subgraph Client
    id2[(context == curated entries)]
  end
  mm[(metadata)]
  th{Thresholds}
  meth{Method}
  id1-->th
  subgraph Sensemaker
    meth-->mm
    mm-->meth
    mm-->th
  end
  id1-- agent -->mm
  th-->id2
```

## Ontology

Note: This design sticks to the original design paper's terminology as much as possible.

### Dimensions

A ***subjective Dimension*** is a **judgeable** quality or character of an *Entry*. It is expressed as a value in an agent-defined range, and is judged by an *agent*.
Ex: 5-star rating, upvote/downvote, 60-mark scale...

An ***objective Dimension*** is a **measurable** quality or character of an *Entry*. It is measured by an agent-defined ***Method***.
Ex: Upvote ratio, mean rating, number of likes...

An ***intrinsic Dimension*** is an entry field described for the Sensemaker. It is **set** by the *dna* on creation, according to rules defined by the *dna developer*.
Ex: title, size, published_at...

A ***native Dimension*** is any type of *Dimension* that has been provided by the *provider dna* (ergo the provider dna developer).

(We could also define a *system Dimension* as a Holochain-defined property of an Entry, e.g. creation timestamp.)

<br/>

### Resources

A ***ResourceType*** is an *AppEntryType* of any *dna* for which *Dimensions* are defined.

A ***native ResourceType*** is a *ResourceType* which has been defined by the *dna* of its underlying *AppEntryType*.

A ***Resource*** is an *entry* whose *AppEntryType* has a *ResourceType*.

:::spoiler *technical notes*
- Multiple *AppEntryTypes* can be combined in the same ResourceType for more complex and cross-happ sensemaking, on the condition that they have overlapping *intrinsic Dimensions*.
- Sensemaker can get all the *AppEntryTypes* by querying `entry_defs()` of the other cells.
- A Sensemaker-aware *dna* can generate *intrinsic Dimensions* out of all fields of its *AppEntryType* by using the underlying type's maximum range (ex: `u32 MIN & MAX`).
:::

<br/>

### Assessments

An *Agent* can **assess** a *Resource* on any of its *subjective Dimensions*. The result of this action is a ***subjective Assessment***.

:::spoiler *technical notes*
The Assessment entry is stored as a link from the Resource's EntryHash with the Dimension as tag; if there are too many links, links to anchors can be used instead of the EntryHash.
:::
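As a minimal sketch of the storage scheme in the note above — assuming an HDK coordinator zome and hypothetical `EntryTypes::Assessment` / `LinkTypes::Assessment` definitions in a `sensemaker_integrity` crate — creating an Assessment and its index link could look like:

```rust
use hdk::prelude::*;
// Hypothetical integrity-zome definitions, for illustration only.
use sensemaker_integrity::{Assessment, EntryTypes, LinkTypes};

/// Sketch: publish a subjective Assessment and index it as a link from
/// the assessed Resource's EntryHash, with the Dimension hash as tag.
#[hdk_extern]
pub fn create_assessment(assessment: Assessment) -> ExternResult<EntryHash> {
    // Store the Assessment entry itself.
    create_entry(EntryTypes::Assessment(assessment.clone()))?;
    let assessment_eh = hash_entry(&assessment)?;
    // Link Resource -> Assessment, tagged with the Dimension's hash so all
    // Assessments in one Dimension can be fetched with a tag filter.
    create_link(
        assessment.subject_eh.clone(),
        assessment_eh.clone(),
        LinkTypes::Assessment,
        LinkTag::new(assessment.dimension_eh.get_raw_39().to_vec()),
    )?;
    Ok(assessment_eh)
}
```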
<br/>

A ***Method*** is an algorithm for **measuring** an *objective Dimension* of a *Resource*. It uses *Assessments* from other *Dimensions* of this *Resource* as input and outputs an ***objective Assessment***.

Any agent can run a *Method* on a *Resource*. However, only the ***CommunityActivator*** (or any other agreed agent/role) can publish the *objective Assessment*.

:::spoiler ***technical notes***
The *Method* is described in a human-readable language that will have to be interpreted by an engine that can run the corresponding zome code. The *Method* is stored in this human-readable language if we don't have a two-way intermediary language to store it in.
:::

<br/>

A ***DataSet*** is a list of *Assessments* (hashes).

:::spoiler *technical notes*
- When an *objective Assessment* is published, the executing agent might be required to publish the ***DataSet*** used by the *Method* as input. It can be used as a validation package or for audit by other agents or groups.
- A community could decide to waive this requirement if it is too burdensome and there is no trust issue.
- Scattered entries, or data external to Holochain, could also be regrouped into a *DataSet* for later processing.
:::

<br/>

### CulturalContext

A ***CulturalContext*** is a set of ***Thresholds*** for selecting and ordering *Resources* of a certain type.

A ***Threshold*** is a value criterion in a given *Dimension*.

A ***ContextResult*** is the ordered selection of all available *Resources* of a certain *ResourceType* at a point in time, determined by a *CulturalContext*.

<br/>

### CommunityActivator

Non-native *ResourceTypes*, *Dimensions*, *Methods* and *CulturalContexts* are defined in *Sensemaker* by the **CommunityActivator**.

<br/>

## Integration

### Importing

This design makes the Sensemaker agnostic of its *provider dnas* on the zome level, since it only needs a base *AppEntryType* and entry hashes to work with, which are standard across all zomes.

To find input entries, a *provider dna* could implement an ***SS-Provider*** *zome* that outputs all available entries of a certain *AppEntryType*, but it would also be possible to index a *Resource* in the *Sensemaker* at the time it is assessed by an *agent*.

A *Sensemaker-aware zome* could provide native *ResourceTypes*, *Dimensions*, *Methods* and *CulturalContexts*, but keep in mind that a group is not bound to what the *dna developer* provides: it can choose not to use them and create its own *Sensemaking types* instead, based on the same base *AppEntryTypes*.

On the UI level, however, the *provider UI* needs to know the *ResourceTypes* and *Dimensions* of an *AppEntryType* in order to display a proper UI for inputting assessments (and possibly a "Run method" button for getting objective Assessments calculated).

This design makes it possible to "grab" data from outside Holochain as input by creating a *ResourceType* out of an external asset.
Ex: Use IMDB movie IDs as a *ResourceType*. (Best to wait for the upcoming *Holochain Resource Locator* feature before designing this in detail.)

#### Importing a Resource

A CommunityActivator could import all Entries of a certain type, so that members can only assess the ones imported by the CA. Alternatively, members could import an EntryHash without restriction. For that, the UI would have to surface the EntryHash in some way, so it can be copy-pasted and imported as a *Resource* by specifying its *ResourceType*. Open question: without Provider-zome awareness, it will not be possible to validate that the entry is of the correct type?
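For illustration, a hedged sketch of that import path: the Sensemaker cell bridges to the provider cell and fetches the Record before indexing it as a Resource. The role name `"provider"`, the zome name `ss_provider` and the error handling are assumptions; `get_record` is the SS-Provider function described in the Zome functions section below.

```rust
use hdk::prelude::*;

/// Sketch: fetch a Record from a provider cell in the same happ, so the
/// entry can then be indexed as a Resource. Assumes an unrestricted cap
/// grant on the provider side.
pub fn fetch_provider_record(eh: EntryHash) -> ExternResult<Record> {
    let response = call(
        CallTargetCell::OtherRole("provider".into()), // assumed role name
        ZomeName::from("ss_provider"),                // assumed zome name
        FunctionName::from("get_record"),
        None, // cap secret
        eh,
    )?;
    match response {
        ZomeCallResponse::Ok(io) => io
            .decode()
            .map_err(|e| wasm_error!(WasmErrorInner::Guest(e.to_string()))),
        other => Err(wasm_error!(WasmErrorInner::Guest(format!(
            "provider call failed: {:?}",
            other
        )))),
    }
}
```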
### Exporting

This design makes the *client happs* agnostic of Sensemaker on the *zome* level, since the sensemaking output is only used for displaying a curated view. An ***SS-Client*** *zome* that provides an API for retrieving all available Sensemaker entries could be integrated.

The *client UI* does need a way to get the *ContextResult* and display its entries accordingly.

# DNA

![](https://i.imgur.com/PI5CpiF.png)

## Entry Types

Note: Name fields are for UI purposes only. Zome code refers to entries by EntryHash.

We use EntryHash and not ActionHash, as all Entry types are updatable and data associations are for specific versions of an entry.

### Range

A Range is a description of a Dimension's value space. Any Dimension can use any Range. A value in a Range must use the `RangeValue` enum variant corresponding to its `RangeKind`.

``` rust
struct Range {
    name: String,
    kind: RangeKind,
}
```

``` rust
enum RangeKind {
    Integer { min: u32, max: u32 },
    Float { min: f32, max: f32 },
    Tag(Vec<String>),
    Emoji(Vec<char>),
    TagTree(BTreeMap<String, Vec<String>>),
}

enum RangeValue {
    Integer(u32),
    Float(f32),
    Tag(String),
    Emoji(char),
    TagTree((String, String)),
}
```

#### Examples of ranges

```rust
{ name: "5-star", min: 0, max: 5 }
{ name: "Upvote", min: -1, max: 1 }
{ name: "Ratio", min: 0.0, max: 1.0 }
{ name: "Emotion", ["sad", "angry", "happy", "disgusted", "fearful", "surprised", "bad"] }
{
  name: "EmotionTree",
  tree: {
    sad: {
      lonely: ["isolated", "abandoned"],
      vulnerable: ["victimized", "fragile"],
      despair: ["grief", "powerless"],
      ...
    },
    angry: {...},
    happy: {...},
    disgusted: {...},
    fearful: {...},
    surprised: {...},
    bad: {...}
  }
}
```

### Dimension

A metadata field to add to any entry type. It has an agent-defined Range built from the predefined RangeKinds.

It is considered *objective* if its value can only be determined by a known *Method*. The *Method* used can be referenced by a *Link*.

``` rust
struct Dimension {
    name: String,
    range: Range,
}
```

Note: This could evolve into a "reputation currency", where only certain roles/agents can assess in a dimension, in a limited way.

### ResourceType

A *ResourceType* is an *AppEntryType* of any other DNA to which *Dimensions* are added.

``` rust
struct ResourceType {
    name: String,
    base_types: Vec<AppEntryType>,
    dimension_ehs: Vec<EntryHash>,
}
```

### Assessment

The evaluation of a Resource on one of its Dimensions. It can be:
- a subjective judgement by an agent, or
- an objective measure by a Method.

If measured by a Method, the DataSet used as input can be stored for reference.

``` rust
struct Assessment {
    value: RangeValue,
    dimension_eh: EntryHash,
    subject_eh: EntryHash,
    maybe_input_dataset: Option<DataSet>, // For objective Dimensions only
}
```

Note: An Assessment could actually be just a Link between the Dimension and the Resource, with the value stored in the tag.

Note: Validation must check that the value is valid for the Dimension's Range. It must also check that the assessed entry is of the correct ResourceType.
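As a sketch of the first validation rule above (the value matches the Dimension's Range), assuming the `RangeKind` / `RangeValue` definitions from the Range section:

```rust
/// Sketch: check that an Assessment's value fits its Dimension's Range.
fn value_fits_range(kind: &RangeKind, value: &RangeValue) -> bool {
    match (kind, value) {
        (RangeKind::Integer { min, max }, RangeValue::Integer(v)) => v >= min && v <= max,
        (RangeKind::Float { min, max }, RangeValue::Float(v)) => v >= min && v <= max,
        (RangeKind::Tag(allowed), RangeValue::Tag(t)) => allowed.contains(t),
        (RangeKind::Emoji(allowed), RangeValue::Emoji(e)) => allowed.contains(e),
        (RangeKind::TagTree(tree), RangeValue::TagTree((branch, leaf))) => {
            tree.get(branch).map_or(false, |leaves| leaves.contains(leaf))
        }
        // Variant mismatch: the value does not belong to this kind of Range.
        _ => false,
    }
}
```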
### Method

A process for measuring an *objective Dimension* of a Resource according to the assessments in its other *Dimensions*.

``` rust
struct Method {
    name: String,
    target_resource_type_eh: EntryHash,
    input_dimension_ehs: Vec<EntryHash>,
    output_dimension_eh: EntryHash,
    program: AST,
    can_compute_live: bool,
    must_publish_dataset: bool,
}
```

### DataSet

A collection of data used as input by a Method. Used for auditing / validating / debugging.

``` rust
struct DataSet {
    from: EntryHash, // Method
    data_points: BTreeMap<DimensionEh, Vec<AssessmentEh>>,
}
```

Note: Validation must check that all data points are of the correct ResourceType.

### CulturalContext

A curation methodology for entries of a certain type. It is a two-step process: first select entries according to *Thresholds*, then sort the selection according to the `order_by` field.

``` rust
struct CulturalContext {
    name: String,
    resource_type_eh: EntryHash,
    thresholds: Vec<Threshold>,
    order_by: Vec<(DimensionEh, OrderingKind)>, // ex: biggest, smallest
}
```

Note: Must validate that the ordering dimensions are part of the *ResourceType*.

``` rust
struct Threshold {
    dimension_eh: EntryHash,
    kind: ThresholdKind, // gt, lt, eq
    value: RangeValue,
}
```

Note: Threshold is not an Entry. The *RangeValue* must correspond to the Dimension's Range.

### ContextResult

The result of a context computation. Could be published for caching.

``` rust
struct ContextResult {
    context_eh: EntryHash,
    dimension_ehs: Vec<EntryHash>, // of objective Dimensions
    result: Vec<(EntryHash, Vec<RangeValue>)>,
}
```

Note: An "unaware" UI could just receive the ordered list of entry hashes: `Vec<EntryHash>`.

## Paths / Anchors

*Sensemaker entries* are indexed in the following manner:

```
Ranges
  +-- RangeEh
Dimensions
  +-- DimensionEh
(NeighbourhoodHash)
  +-- ResourceTypeEh
  +-- CulturalContextEh
  +-- MethodEh
(+-- IntegrityZomeHash)
  +-- DnaHash
    +-- ResourceEh
      +-- DimensionEh
        +-- Assessments
```

Example paths:

```
teamBlue/happEntry/devHub/europe/snapmail/quality/5@ddd
teamBlue/happEntry/devHub/world/where/quality/4@bobby
teamRed/zomeEntry/devHub/america/delivery/size/784
```

With this strategy, we can find all *Resources* of a certain *ResourceType*, and all *Assessments* for a certain Dimension of a Resource. It takes into account that different DNAs using the same zome could use the same *AppEntryTypes* (ex: difference in DNA modifiers).

It is, however, harder to get all assessments in a certain Dimension across multiple ResourceTypes, or all assessments by an *agent*. For that we would need to publish extra links.
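A minimal sketch of building and querying such an index with the HDK `Path` API — the `LinkTypes::Index` link type and the flat dot-joined segments are assumptions, not part of the design:

```rust
use hdk::prelude::*;
// Hypothetical link type from the integrity zome.
use sensemaker_integrity::LinkTypes;

/// Sketch: build the typed anchor path for a Resource's Dimension,
/// e.g. "teamBlue.happEntry.devHub.europe.snapmail.quality".
fn dimension_path(segments: &[String]) -> ExternResult<TypedPath> {
    Path::from(segments.join(".")).typed(LinkTypes::Index)
}

/// Sketch: ensure the anchor tree exists, then fetch all Assessment
/// links hanging under it (HDK 0.1/0.2-style `get_links`).
fn get_assessment_links(segments: &[String]) -> ExternResult<Vec<Link>> {
    let path = dimension_path(segments)?;
    path.ensure()?; // writes any missing anchors
    get_links(path.path_entry_hash()?, LinkTypes::Index, None)
}
```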
## Zome functions

### SS-Provider Zome

- `get_record(eh: EntryHash) -> Record`: Get the *Record* for a known entry from a different *dna*.
- `get_entries(typeName: String) -> Vec<EntryHash>`: Give the list of all known entries of a certain type.

### Sensemaker Zome

#### CommunityActivator only

- `create_range(name: String, range_kind: RangeKind) -> EntryHash`: Publish or update a *Range*.
- `create_dimension(name: String, range_eh: EntryHash) -> EntryHash`: Publish or update a *Dimension*.
- `create_resource_type(name: String, base_type: AppEntryType, dimension_ehs: Vec<EntryHash>) -> EntryHash`: Publish or update a *ResourceType*.
- Publish or update a *Method*:
```rust
create_method(name: String, target_type_eh: EntryHash, input_dimension_ehs: Vec<EntryHash>, output_dimension_eh: EntryHash, program: String, can_compute_live: bool) -> EntryHash
```
- Publish or update a *CulturalContext*:
`create_context(name: String, resource_type_eh: EntryHash, thresholds: Vec<Threshold>, order_by: Vec<(DimensionEh, OrderingKind)>) -> EntryHash`

#### Any agent

- `assess_resource(resourceEh, resourceTypeEh, dimensionEh, value) -> AssessmentEh`: Publish or update an *Assessment*.
  Implementation: Get the ResourceType and determine the Resource's entry type. If it is an AgentPubKey, it must go through a KSR lookup so that the created Assessment has the KSR as subject and not the AgentPubKey.
- `run_method(resourceEh, methodEh, config) -> Assessment`: Publish or update an *objective Assessment*.
  Implementation: Look up the KSR if the Method's target ResourceType is an AgentPubKey.

### SS-Client Zome

- `compute_context(contextEh, canPublishResult) -> Vec<EntryHash> | Vec<(EntryHash, Vec<RangeValue>)>`: Compute the *ContextResult* of a *CulturalContext* at this moment in time.

## Signals

TBD

# Example scenarios

## Example 1: DevHub

A happ aggregating available happs. Agents can "star" a happ. Agents can also flag a happ as "broken".

The *sensemaker* provides a context for ranking non-broken happs by popularity. A happ is considered broken if more than 10 agents flag it as "broken".

A Developer is an agent that publishes happs (when a happ is published, a link from creator to happ is also published).
A Developer's popularity is the sum of the likes of their happs.

### Provider DNA

```rust
struct HappEntry {
    title: String,
    ...
}

enum DevHubLinkType {
    myHapps,
}
```

### Sensemaker entries

```rust
// Ranges
{ name: "Flag", kind: Integer(min: 1, max: 1) }

// subjective Dimensions
{ name: "Likeness", range: flagRange }
{ name: "Brokenness", range: flagRange }

// objective Dimensions
{ name: "Popularity", range: u32Range }
{ name: "isBroken", range: boolRange }

// Resource types
{ name: "Happ", dimensions: [..(all dimensions above)..] }
{ name: "Developer", dimensions: ["Popularity"] }

// Methods
{
  name: "Determine hApp Popularity",
  target_resource_type: Happ,
  input_dimensions: [likeness],
  output_dimension: popularity,
  program: { return count(likeness) },
}
{
  name: "Determine happ brokenness reputation",
  target_resource_type: Happ,
  input_dimensions: [brokenness],
  output_dimension: isBroken,
  program: { return count(brokenness) > 10 },
}
{
  name: "Determine developer Popularity",
  target_resource_type: Developer,
  input_queries: [(popularity, myHapps)],
  output_dimension: popularity,
  program: { return sum(<myHapps>.popularity) },
}

// CulturalContexts
{
  name: "Most popular happs",
  resource_type: Happ,
  thresholds: [{isBroken, eq, 0}],
  order_by: [(Popularity, biggest)],
}
{
  name: "Most popular developers",
  resource_type: Developer,
  thresholds: [],
  order_by: [(Popularity, biggest)],
}
```
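As a hedged sketch of what `run_method` could do for the hardcoded `count(...)` programs above — reusing the link-per-Assessment indexing described earlier; all type and link names are illustrative:

```rust
use hdk::prelude::*;
// Hypothetical integrity-zome definitions, as in the earlier sketches.
use sensemaker_integrity::{Assessment, LinkTypes, RangeValue};

/// Sketch: a hardcoded "count" Method, e.g. `popularity = count(likeness)`.
/// Counts the Resource's Assessments in the input Dimension and returns
/// the resulting objective Assessment (publishing is left to the caller).
fn run_count_method(
    resource_eh: EntryHash,
    input_dimension_eh: EntryHash,
    output_dimension_eh: EntryHash,
) -> ExternResult<Assessment> {
    // Fetch only the links tagged with the input Dimension's hash.
    let tag = LinkTag::new(input_dimension_eh.get_raw_39().to_vec());
    let links = get_links(resource_eh.clone(), LinkTypes::Assessment, Some(tag))?;
    Ok(Assessment {
        value: RangeValue::Integer(links.len() as u32),
        dimension_eh: output_dimension_eh,
        subject_eh: resource_eh,
        maybe_input_dataset: None, // could carry the DataSet of link targets
    })
}
```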
## Example 2: Meeting agenda

A happ for making meeting agendas. Any agent can propose a meeting item (with an estimated duration), and all agents can upvote/downvote the proposals.

The *sensemaker* would provide the context for the next 2-hour meeting by selecting the items with the best vote ratio that fit in the allocated time.

#### Provider Zome

```rust
struct AgendaItem {
    title: String,
    duration: u32, // in minutes
}
```

#### Sensemaker entries

```rust
// Ranges
{ name: "timeRange", kind: Integer(min: 1, max: 600) }
{ name: "voteRange", kind: Integer(min: 0, max: 1) }
{ name: "ratioRange", kind: Float(min: 0.0, max: 1.0) }

// subjective Dimensions
{ name: "duration", range: timeRange }
{ name: "upvote", range: voteRange }

// objective Dimensions
{ name: "upvoteRatio", range: ratioRange }

// ResourceType
{
  name: "AgendaItemResource",
  base_type: AgendaItem,
  dimensions: [duration, upvote, upvoteRatio],
}

// Method
{
  name: "computeUpvoteRatio",
  target_resource_type: AgendaItemResource,
  input_dimensions: [upvote],
  output_dimension: upvoteRatio,
  program: { return count(upvote where upvote == 1) / count(upvote) },
}

// CulturalContext
{
  name: "PrioritizedAgenda",
  resource_type: AgendaItemResource,
  thresholds: [{duration, lt, 120}, {upvoteRatio, gt, 0.0}], // 120 = 2 hours in minutes
  order_by: [(upvoteRatio, biggest), (duration, biggest)],
}
```

Note: With the current design, we cannot guarantee that the sum of the durations of the curated agenda items stays under 2 hours. For that, we would have to use the interpreter on the context as well, for more complex curating, which is currently not the intent.

## Example 3: Hot Tomatoes

A sensemaker-aware happ for aggregating movie reviews. IMDB IDs of movies are used directly (the ID is hashed into a Holochain Resource Locator hash).

Agents can assess a movie on `overall quality`, `age-rating` and `family-friendliness`.

The *sensemaker* would provide a context for ranking the best overall movies, as well as a context for the best family movies.

#### Sensemaker entries

```rust
// Ranges
{ name: "age rating", kind: Tag(["G", "PG", "-12", "-16", "R"]) }
{ name: "5-star", kind: Integer(min: 0, max: 5) }
{ name: "5-star(float)", kind: Float(min: 0.0, max: 5.0) }

// subjective Dimensions
{ name: "Overall rating", range: fiveStarRange }
{ name: "family-friendliness rating", range: fiveStarRange }
{ name: "age rating", range: ageRating }

// objective Dimensions
{ name: "Mean overall rating", range: floatFiveStarRange }
{ name: "Mean family rating", range: floatFiveStarRange }
{ name: "Median age rating", range: ageRating }

// Resource types
{ name: "Movie", dimensions: [..all dimensions above..] }

// Methods
{
  name: "meanRating",
  target_resource_type: movieResource,
  input_dimensions: [overallRating],
  output_dimension: meanOverallRating,
  program: {
    if count(overallRating) < 10 return error
    return sum(overallRating) / count(overallRating)
  },
}
{
  name: "medianAgeRating",
  target_resource_type: movieResource,
  input_dimensions: [ageRating],
  output_dimension: medianAgeRating,
  program: {
    if count(ageRating) < 10 return error
    return median(ageRating)
  },
}

// CulturalContexts
{
  name: "Best overall movie",
  resource_type: movieResource,
  thresholds: [{meanOverallRating, gt, 0.0}], // needs at least 10 votes
  order_by: [(meanOverallRating, biggest)],
}
{
  name: "Best family movie",
  resource_type: movieResource,
  thresholds: [
    {medianAgeRating, lt, "-16"},
    {meanOverallRating, gt, 0.0},
    {meanFamilyRating, gt, 0.0},
  ],
  order_by: [(meanOverallRating, biggest), (meanFamilyRating, biggest)],
}
```

# Implementation

## Context computation

For each input entry, check whether an Assessment exists for each of the objective Dimensions used by the Thresholds and orderings. For each one not yet calculated, grab its Method and compute it.

Then, for each Resource:
- For each Threshold, grab the Assessment, compare it with the Threshold value according to the ThresholdKind, and output a bool.
- For each ordering, grab the Assessment.

Finally, recursively sort the filtered list of Resources according to the ordering Dimensions.
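A sketch of that filter-and-sort step, in plain Rust and simplified to integer values; the types here are stand-ins for the DNA section's `Threshold`, `ThresholdKind` and `OrderingKind`:

```rust
use std::cmp::Ordering;
use std::collections::BTreeMap;

// Simplified stand-ins for the DNA section's types.
struct Resource {
    eh: String, // stand-in for a real EntryHash
    // Dimension name -> latest assessment value for this Resource.
    values: BTreeMap<String, i64>,
}
enum ThresholdKind { GT, LT, EQ }
struct Threshold { dimension: String, kind: ThresholdKind, value: i64 }
enum OrderingKind { Biggest, Smallest }

fn passes(r: &Resource, t: &Threshold) -> bool {
    match (r.values.get(&t.dimension), &t.kind) {
        (Some(v), ThresholdKind::GT) => *v > t.value,
        (Some(v), ThresholdKind::LT) => *v < t.value,
        (Some(v), ThresholdKind::EQ) => *v == t.value,
        (None, _) => false, // no assessment -> filtered out
    }
}

/// Filter by all thresholds, then sort by the ordering dimensions,
/// using later dimensions as tie-breakers for earlier ones.
fn compute_context(
    mut resources: Vec<Resource>,
    thresholds: &[Threshold],
    order_by: &[(String, OrderingKind)],
) -> Vec<Resource> {
    resources.retain(|r| thresholds.iter().all(|t| passes(r, t)));
    resources.sort_by(|a, b| {
        for (dim, kind) in order_by {
            let (va, vb) = (a.values.get(dim), b.values.get(dim));
            let ord = match kind {
                OrderingKind::Biggest => vb.cmp(&va),
                OrderingKind::Smallest => va.cmp(&vb),
            };
            if ord != Ordering::Equal {
                return ord;
            }
        }
        Ordering::Equal
    });
    resources
}
```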
## Method computation

As with contexts, when computing an objective Assessment of a Resource, we must check whether an Assessment exists for each of the Method's input Dimensions, and compute each missing one.

!! We can run into a cyclic dependency issue !! => We will have to build a dependency graph of Dimensions.

Grab all Assessments in each Dimension. For an objective Dimension there should only be one => grab the latest.

Setting up the interpreter:
- Input:
  - an array of Assessments for a Dimension
  - the type of the Dimension (int, float, string)
- Output:
  - the output type

### Interpreter specs

The interpreter must be able to:
- Take as input an array of arrays of (string | float | int)
- Do mathematical operations on floats and ints, and store results in variables
- Compare strings
- Handle `if` statements
- Handle arrays (len, forEach)
- Define string literals
- Output a string, int or float
- Output an error (code)

The interpreter should be able to:
- Auto-bind input args to given string literals
- Output detailed error messages

#### Example

```
Compute ratio: (overallRating: [integer]) -> float
  if count(overallRating) < 10
    return error('insufficient input values')
  return sum(overallRating) / count(overallRating)
```

### Interpreter candidates

#### None -> hardcode

We could hardcode simple but useful computations (see the sketch after the candidate list below):
- Count the number of assessments
- Sum the values of all assessments
- Ratio (count / sum)
- Mean value
- Median value
- String literal with the most occurrences

Each would use only one Dimension as input, with a separate computation for result validation (i.e. a minimum number of assessments).

#### rep_lang

Should be able to do it, but the language is not end-user friendly. In the long run we would have to use a different UI that translates to rep_lang, or change rep_lang's grammar. Also, rep_lang does not provide any builtin functions, so all algorithms (sum, count...) would have to be written in rep_lang.

#### Javascript UI-side

The cell could reply with a "compute_method_request" to the UI, handing over all the input values (DataSet) and the JS program to run. The UI would run the JS and send the result back to the cell. The cell would accept it at face value (i.e. no validation except type checking). Peers could do manual validation in the UI and call a zome function to sign the resulting entry if it is OK.

#### Javascript wasm-side

Would have to check if a JS interpreter can run in a cell. What standard functions would be usable?

#### Wasm

Theoretically, any input language that compiles to wasm could be runtime-compiled to wasm and stored in the Method entry. Would have to check if wasm can be run within the wasm cell... Similar to running Holochain in the browser?

### Graph query language

In the long run, we will need a Holochain-specific graph query language in order to retrieve data according to runtime queries (e.g. "give me all the articles with a rating > 5 from authors who have the expert badge and a reputation > 5").

The current candidate is GraphQL as used in hREA. Since this is a generic need, not specific to NH, maybe development on it can be mutualized with other orgs?
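To make the "none -> hardcode" candidate concrete, a sketch of such hardcoded reducers over a single Dimension's values (integers only), with the minimum-count validation kept separate; the names and the error type are illustrative:

```rust
/// Sketch of the hardcode candidate: a fixed set of reducers over one
/// Dimension's assessment values, with a separate minimum-count check.
enum HardcodedProgram { Count, Sum, Mean, Median }

fn run_hardcoded(
    program: &HardcodedProgram,
    mut values: Vec<i64>,
    min_assessments: usize,
) -> Result<f64, String> {
    // Separate result-validation step, as described above.
    if values.len() < min_assessments {
        return Err("insufficient input values".into());
    }
    let n = values.len() as f64;
    Ok(match program {
        HardcodedProgram::Count => n,
        HardcodedProgram::Sum => values.iter().sum::<i64>() as f64,
        HardcodedProgram::Mean => values.iter().sum::<i64>() as f64 / n,
        HardcodedProgram::Median => {
            values.sort_unstable();
            values[values.len() / 2] as f64 // upper median for even counts
        }
    })
}
```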
## Build steps

1. Setup git repo
2. Build BasicDevHub (playground)
   1. Setup DNA & happ
   2. Write zomes
   3. Write UI
3. Build Sensemaker
   1. Write Sensemaker DNA
      1. Write Integrity zome with all entry types. No validation.
      2. Write Coordinator zome
         1. Implement `init()`
         2. Implement all `create_<entry_type>()` zome functions
         3. Implement `assess_resource()`
         4. Implement `run_method()` (2-3 algorithms, integers only)
         5. Implement `compute_context()` (filtering only)
   2. Implement basic SenseMaker dashboard UI & client-js
      1. Scan zome entry types
      2. View all created entries per type
      3. Preview method result
      4. Preview context result
4. Add SenseMaker DNA to BasicDevHub happ
5. Integrate Sensemaker UI into BasicDevHub
6. Iterate Sensemaker features
   1. Floats & string RangeValues
   2. Resource ordering in Context computation
   3. More algorithms
   4. Native Dimension generation in SS-Provider
   5. System Dimensions
   6. *IndirectMethod*?
   7. Some validation

## Phase 2

Assessing agents in a general way (key forwarding):
- [ ] Short-term: Force agents to use the same key throughout different happs / DNAs when installing them (like We)
- [ ] Medium-term: Have agents declare their external key in Sensemaker by providing a signature from that external key. Create a key hierarchy.
- [ ] Long-term: Use DeepKey

# Roles and permissions

- **CA**: Can create any Entry
- **TrustedOracle**: Can create (public) objective Assessments
- **Member**: Can create subjective Assessments
- **Lurker?**: Can access Assessments & ContextResults

## Access assessments Permission

### Encryption

The CA generates an asymmetric keypair and gives it to members with a valid RoleClaim upon request. The CA encrypts the key for the recipient, who can store it encrypted on their chain. Encrypting and decrypting keys are sent separately for the different roles (member and lurker).

Assessments must be encrypted with this known keypair, and can be read with the known decryption key.

### Subnetwork

Assessments are stored on different clones. The CloneName is the ResourceDef EntryHash. Upon validation of a RoleClaim, the CA generates an encrypted membrane proof based on your AgentPubKey, giving access to the Assessments for that ResourceDef. The MembraneProof can be a signature of "AgentPubKey + CloneId (or ResourceDef)".

The Clone DNA can be a simple directory of all the Assessments, and possibly the associated ContextResults, CulturalContexts and Methods.

To handle permission changes, make CloneIds and membrane proofs dependent on the mainCellId. When permissions change, migrate the mainCellId. The main cell should probably maintain a log of permissions and clones.
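A minimal sketch of that membrane proof scheme — the CA signs "AgentPubKey + CloneId" and a validator verifies it against the known CA key — using the HDK sign/verify primitives; the payload struct is a hypothetical encoding:

```rust
use hdk::prelude::*;

/// Payload signed by the CA: the member's key plus the clone it may join.
/// (Hypothetical struct; the real encoding is an open design choice.)
#[derive(Serialize, Deserialize, Debug, Clone)]
struct ProofPayload {
    agent: AgentPubKey,
    clone_id: String, // or the ResourceDef EntryHash
}

/// CA-side sketch: sign the payload with the CA's own key.
fn make_membrane_proof(payload: ProofPayload) -> ExternResult<Signature> {
    sign(agent_info()?.agent_latest_pubkey, payload)
}

/// Validator-side sketch: check the proof against the known CA key.
fn check_membrane_proof(
    ca: AgentPubKey,
    proof: Signature,
    payload: ProofPayload,
) -> ExternResult<bool> {
    verify_signature(ca, proof, payload)
}
```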
## Create assessments Permission

Use the Membranes Zome:
- Implement a ReputationThreshold entry type
- Implement an AssessResourceDefPrivilege
- Create Roles that give AssessResource privileges

An agent can claim those roles by meeting some reputation threshold. When validating an Assessment, the validator grabs the RoleClaim of the author (must get the link?). An AssessResourcePrivilege just needs to reference the ResourceDef, and possibly the DNAs we can get Resources from. A ReputationThreshold references a ResourceDef and holds a threshold on the objective Dimension by which an agent is assessed.

**Need to figure out how to make the Membranes Zome "modable/extensible".**

### Validating links and/or entries

When assessing a Resource, an agent creates a link for it in the index; that link must provide the hash of the RoleClaim that gives the privilege to access a certain ResourceDef. We could also add the RoleClaim hash to the Assessment entry. If we have to choose, it is not clear which one is best. Probably the link.

#### Authority

The CA can create an Authorization Entry: `<AgentPubKey + RoleName>`. When validating an Assessment, a validator can get that entry by determining its EntryHash, then grab the agent_activity of the CA and check that the CA has created this entry. (This seems quite over the top.)

### Fine-grained CA

- Permission to create Range, Dimension, Method, ResourceDef, CulturalContext
- Permission to allow a DnaHash for grabbing resources from

### Permission to assess a resourceType

- Permission to create a subjectiveAssessment
- Permission to create an objectiveAssessment
- Permission to link a Resource(eh) from "resources"

### Permission to access a resourceType