title: SPK Network Tech Overview
### SPK Network technical execution
The service nodes perform several distinct critical operations:
• Index Ceramic documents on the Network. Ceramic is a protocol for mutable data on IPFS; practical examples include posts, comments, and account information.
• Provide an IPFS DHT lookup endpoint for the Network, improving content discovery via the IPFS DHT routing table.
• Maintain blocklists/mutelists
• Provide an API endpoint for pushing/pulling Ceramic documents, which for the SPK Network means video posts, comments, account information, and more.
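The push/pull indexing role above can be sketched in TypeScript. This is an illustrative in-memory model only; the type and method names (`DocumentRecord`, `DocumentStore`) are assumptions, not SPK Network APIs.

```typescript
// Hypothetical sketch of a service node's document index:
// Ceramic documents are pushed in, indexed by stream id, and pulled back out.
interface DocumentRecord {
  streamId: string;                               // Ceramic stream identifier
  kind: "video" | "comment" | "account" | "other"; // what the document represents
  author: string;                                 // DID of the document author
  body: unknown;                                  // document payload
}

class DocumentStore {
  private docs = new Map<string, DocumentRecord>();

  // "push": index an incoming Ceramic document
  push(doc: DocumentRecord): void {
    this.docs.set(doc.streamId, doc);
  }

  // "pull": retrieve an indexed document by stream id
  pull(streamId: string): DocumentRecord | undefined {
    return this.docs.get(streamId);
  }
}
```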
### Desktop Application
The Desktop app provides much of the core functionality for the end user in a self-contained, self-managing desktop application.
The core architecture of the desktop app consists of a Node.js backend (possibly Go in the future), a React-based front end running in Electron, and a local IPFS daemon spawned from the backend. Additionally, there will be a local Ceramic daemon for IDX accounts and blockchain-agnostic posts. The primary language for both the desktop app and the backend will be Node.js/TypeScript. Communication between the React front end and the backend is currently done through Electron IPC calls; in the future this will likely move to HTTP so the desktop backend and a web-based front end can share the same code.
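The planned IPC-to-HTTP migration implies a transport abstraction: the front end calls the backend through an interface, so the channel (Electron IPC today, HTTP later) can be swapped without changing callers. Below is a minimal sketch of that idea; all names are illustrative, and the in-process class merely stands in for the real Electron IPC or HTTP implementations.

```typescript
// The front end depends only on this interface, not on Electron or HTTP directly.
interface BackendTransport {
  call(method: string, params?: unknown): Promise<unknown>;
}

// In-process stand-in for a real transport (Electron IPC or HTTP would
// implement the same interface).
class InProcessTransport implements BackendTransport {
  constructor(private handlers: Map<string, (params?: unknown) => unknown>) {}

  async call(method: string, params?: unknown): Promise<unknown> {
    const handler = this.handlers.get(method);
    if (!handler) throw new Error(`no handler registered for ${method}`);
    return handler(params);
  }
}

// Example wiring: the backend registers handlers, the front end calls them.
const handlers = new Map<string, (params?: unknown) => unknown>();
handlers.set("app:getVersion", () => "1.0.0");
const transport: BackendTransport = new InProcessTransport(handlers);
```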
IPFS storage will be rewarded through the blockchain by validators checking the storage node's operations and ensuring it is properly providing a service to the Network. Storage nodes will announce a list of CIDs they are storing, which the Network will validate. Storage nodes are only rewarded for storing files that are valid, have BROCA attached to them, and have been officially announced by the storage node to the Network. Storage nodes must announce what they claim to store via their own DLT/OrbitDB database.
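A storage announcement and the reward-eligibility rule above can be sketched as follows. The record fields are assumptions for illustration, not the SPK wire format.

```typescript
// Illustrative shape of a storage node's announcement: a signed list of
// CIDs the node claims to store, published via its own log/database.
interface StorageAnnouncement {
  nodeId: string;      // storage node identity
  cids: string[];      // content identifiers the node claims to store
  timestamp: number;   // unix time in ms
  signature: string;   // node's signature over the record
}

// A node is only eligible for rewards on CIDs it has announced AND that
// have BROCA attached (modeled here as the `funded` set).
function eligibleCids(announced: StorageAnnouncement, funded: Set<string>): string[] {
  return announced.cids.filter((cid) => funded.has(cid));
}
```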
Validators will be rewarded via DPOS/witness-style voting along with rewards from performing validation: DPOS rewards + validation rewards + rewards for acting as an upload endpoint.
Encoder nodes will be rewarded for providing encoding services to the Network via automated public pricing arrangements. This entails a client/end user making an agreement with an encoder node to encode a video for a certain amount of BROCA. The process happens automatically over a standardized protocol using HTTP/libp2p, which would also double as a standard for decentralized video encoding. At the moment there are only a handful of established video encoding protocols (specifically negotiation protocols); Livepeer comes close, but is primarily locked into one blockchain. By establishing a protocol for decentralized video encoding, users could switch between many different encoder endpoints without vendor lock-in. Encoding itself will be delegated to well-known software such as FFmpeg or HandBrake. The flagship format for encoded video will be HLS (chunked MP4) or MP4; the second flagship format is WebM. However, because WebM uses VP9, it is significantly more intensive to encode than MP4, meaning most users will only encode to MP4. The on-chain metadata will be standardized so any app can easily read the video data from a user's post.
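The client/encoder pricing negotiation described above might exchange messages shaped like the sketch below. These message types and the accept/reject rule are assumptions for illustration, not a finalized wire format.

```typescript
// Hypothetical negotiation messages for the encoder-pricing protocol:
// the client offers BROCA for a job, the encoder advertises its minimum.
interface EncodeOffer {
  videoCid: string;              // CID of the source video
  format: "hls" | "mp4" | "webm"; // requested output format
  priceBroca: number;            // client's offered price
}

interface EncodeQuote {
  minPriceBroca: number; // encoder's advertised minimum for this job
}

// Simplest possible agreement rule: accept when the offer meets the quote.
function negotiate(offer: EncodeOffer, quote: EncodeQuote): "accepted" | "rejected" {
  return offer.priceBroca >= quote.minPriceBroca ? "accepted" : "rejected";
}
```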
CDN nodes will be rewarded through public pricing arrangements as well. However, this is still a work in progress and differs considerably from the encoder node rewarding system. Ideally, the content creator will set up a payment channel with a set of CDN nodes (think of Lightning Network payment channels); as the video is played through the CDN nodes, a small amount of money is charged for the bandwidth used.
The CDN system comes in a few different parts:
• CDN node
• Client (end user)
• Client metric registry
Firstly, the registry keeps track of all the CDN nodes in the network. The registry is a central database shared between service nodes and CDNs alike. Each CDN node has a registration record. The registration record contains:
• The IPFS PeerId of the node along with PeerInfo (IPs/Port)
• A public HTTPS endpoint for CDN content distribution
• A public control endpoint used for metrics gathering. A CDN node can advertise its current usage for prioritization, which also prevents the CDN node from getting swamped with traffic. This is for good-natured nodes; there is no verification.
• An advertised throughput metric minimum (example: 100 Mbps) (used partially for ranking)
• An advertised throughput metric maximum (example: 140 Mbps) (used partially for ranking)
• An advertised rough geolocation (used partially for ranking)
• Ping to specific points on the internet (used partially for ranking, optional, needs triage, might consider moving this somewhere else outside of a registration record)
• Spanning tree rank? Optimal routing to storage node ranking? (needs triage/exploration)
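The registration record above maps naturally onto a TypeScript type. The field names here are illustrative (the bullets are the source of truth), and the optional fields reflect the items still marked for triage.

```typescript
// Sketch of a CDN registration record; fields map one-to-one to the
// bullets above. Names are assumptions, not a finalized schema.
interface CdnRegistration {
  peerId: string;                         // IPFS PeerId of the node
  peerInfo: { ips: string[]; port: number }; // PeerInfo (IPs/port)
  httpsEndpoint: string;                  // public content-distribution endpoint
  controlEndpoint: string;                // public metrics/control endpoint
  minThroughputMbps: number;              // e.g. 100, used partially for ranking
  maxThroughputMbps: number;              // e.g. 140, used partially for ranking
  geolocation?: string;                   // rough location, used partially for ranking
  pingSamples?: number[];                 // optional, still needs triage
}

// Basic sanity check a registry might apply before accepting a record.
function isValidRegistration(r: CdnRegistration): boolean {
  return r.minThroughputMbps > 0 && r.maxThroughputMbps >= r.minThroughputMbps;
}
```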
NOTE: I use the word "node" loosely in the following description; this does not mean you need to run a dedicated node, but simply a node which also performs the task of an orchestrator.
Secondly, an orchestrator is a node which provides clients a list of optimal CDN endpoints to choose from. It additionally performs the following tasks:
• Keep track of up to date metrics from connected CDN nodes
• Keep track of high importance or highly requested files.
• Keep track of client generated metrics about a CDN node
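One way the orchestrator could turn those metrics into an ordered endpoint list is to rank CDN nodes by current load (self-reported usage over advertised maximum), preferring the least loaded. The scoring formula below is an assumption for illustration; a real ranking would also weigh geolocation and client-generated metrics, per the registration record.

```typescript
// Metrics the orchestrator tracks per connected CDN node (illustrative).
interface CdnMetrics {
  endpoint: string;        // the node's public HTTPS endpoint
  currentMbps: number;     // self-reported current usage
  maxThroughputMbps: number; // advertised maximum throughput
}

// Return endpoints ordered from least to most loaded.
function rankCdns(nodes: CdnMetrics[]): string[] {
  return [...nodes]
    .sort(
      (a, b) =>
        a.currentMbps / a.maxThroughputMbps - b.currentMbps / b.maxThroughputMbps
    )
    .map((n) => n.endpoint);
}
```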
Next, CDN nodes store and provide IPFS files to clients that are unable to operate an IPFS daemon or choose not to do so. CDN nodes are responsible for providing web2-style access. Additionally, they:
• Keep track of highly requested files and metrics per each file (used to signal to other CDNs what to prioritize/precache)
• Provide usage metrics to node operators
• Provide self metrics to the network for ranking/prioritization
**CDN rewarding** will be accomplished via specialized payment channels similar to the lightning network. A content creator puts up “bounties” and creates payment channels to various CDN nodes. Each CDN node will prioritize the requested file by downloading the data locally and providing it via their HTTP endpoint. As requests come through the CDN node it automatically debits a small amount of money through the payment channel to pay for CDN operation. Note: this is ongoing research and likely to be different in production.
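The per-request debiting above can be modeled as a toy payment channel: each served request moves a small balance from the creator's side to the CDN's side. As the note says, this mechanism is still being researched, so the amounts and field names below are purely illustrative.

```typescript
// Toy model of a creator-to-CDN payment channel: serving a request
// debits the creator's side and credits the CDN's side.
class PaymentChannel {
  constructor(public creatorBalance: number, public cdnBalance = 0) {}

  // Debit the channel for one served request of `bytes` at `pricePerByte`.
  // Returns false (and serves nothing) if the channel is exhausted.
  debit(bytes: number, pricePerByte: number): boolean {
    const cost = bytes * pricePerByte;
    if (cost > this.creatorBalance) return false;
    this.creatorBalance -= cost;
    this.cdnBalance += cost;
    return true;
  }
}
```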
**Validators** are selected via DPOS voting. The actual scheduling of validators is random; whether a validator should perform a proof and provide it to the Network depends on the time elapsed since a file's last validation. If a file hasn't been validated in the past month, validators get additional rewards for validating it; validators do not get rewards for validating a file more than once in a short period. In other words, the reward for validating a file is directly determined by the time since its most recent validation. For a validation proof to be valid, at least 3 validators must validate the file. Each validator signs their individual validation record, and the records are concatenated into a single record of all 3 (preferably 5) validations. The results are averaged; once the Network has formed a safe/accurate average performance record of a storage node, rewards are issued to that node. If a node fails too many checks that can then be verified on other nodes, the storage node takes a significant hit in rewards. For example, if the storage node fails 10% of checks, it is hit with an exponential reward reduction on the order of many times the pass/fail ratio: a 10% fail ratio results in a 40% reduction in rewards. The exact penalty metric can be adjusted as the community and Network see fit. Storage nodes that do not store a file are not penalized, as long as they are not advertising files they aren't actually storing.
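The worked example in the paragraph above (a 10% fail ratio costing 40% of rewards) can be reproduced with a simple multiplier. Since the text says the exact metric is community-adjustable, the 4x factor and the linear-with-cap shape here are stand-in assumptions for the "many times the pass/fail ratio" curve, not the final formula.

```typescript
// Stand-in penalty curve: the reward reduction is `penaltyFactor` times
// the fail ratio, capped at 100%. With penaltyFactor = 4, a 10% fail
// ratio yields a 40% reduction, matching the example in the text.
function rewardMultiplier(failRatio: number, penaltyFactor = 4): number {
  const reduction = Math.min(1, failRatio * penaltyFactor);
  return 1 - reduction; // fraction of base rewards the node keeps
}

// rewardMultiplier(0.1)  -> 0.6  (10% fails: keep 60% of rewards)
// rewardMultiplier(0.5)  -> 0    (heavy failure: rewards wiped out)
```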
### Blocklists/mutelists/content policy:
• Local blocklists
• Validator/app specific blocklists
• Platform blocklists
• Post tagging
**Local blocklists** are locally created and managed blocklists of accounts, posts, tags, IPFS hashes, and finally ssdeep hashes. A blocklist is usually tied only to the desktop app, but it can be transferred/synced between desktop apps of the same account via Ceramic or OrbitDB.
**Validator blocklists** are created and maintained by top validator nodes. These will usually be used for the most prominent abusers of the network; most moderation will be left to communities/users. Each validator maintains their own blocklist as a Ceramic/OrbitDB database.
**Platform blocklists** are created and maintained by platforms or websites integrated into the SPK network.
**Post tagging**: aside from blocklists, there will likely be many posts/videos on the network that are not correctly tagged, for example an NSFW video that isn't tagged as NSFW. Post tagging allows viewers of a post to "tag" a video; as more people tag a video, popular consensus affirms whether the tag is true to the video or not. This allows for more accurate related posts/videos and, potentially, filtering a video if it isn't intended for a certain audience. When a user "tags" a video, the record is first stored locally, then published to indexer nodes for reverse lookup. The indexer nodes use a graph-like system for finding associated records of a dataset. Note: this is ongoing work and likely to change in practice.
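The popular-consensus affirmation above can be sketched as a vote count: each viewer tag is a vote, and a tag is considered affirmed once it reaches a threshold share of all votes. The 50% threshold is an illustrative assumption; since this mechanism is still in flux, a real system might weight votes or decay them over time.

```typescript
// Each element of `votes` is one viewer's tag for a given video.
// A tag is "affirmed" when it accounts for at least `threshold` of votes.
function affirmedTags(votes: string[], threshold = 0.5): string[] {
  const counts = new Map<string, number>();
  for (const tag of votes) counts.set(tag, (counts.get(tag) ?? 0) + 1);
  return [...counts.entries()]
    .filter(([, n]) => n / votes.length >= threshold)
    .map(([tag]) => tag);
}

// Example: three of four viewers tag a video "nsfw", so the tag is affirmed.
// affirmedTags(["nsfw", "nsfw", "nsfw", "music"]) -> ["nsfw"]
```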
### Breakaway communities:
Breakaway communities are distributed, blockchain-agnostic communities for social media content such as blog posts, videos, etc. Each community at its core has a distributed Ceramic or OrbitDB database. Both Ceramic and OrbitDB have access control systems, making it possible to restrict who has permission to post/pin posts or post content to the community in general. The community access control is similar to present-day Hive communities. Breakaway communities can be used as a data source for websites/platforms hooking into the SPK network. Additionally, breakaway communities can act as a "sponsor" for user accounts: in short, a community can create accounts for each member and new members without the cost of onboarding them onto Hive. Each account made is its own IDX/OrbitDB account, both of which are distributed, blockchain-agnostic databases.
### Multichain accounts:
Multichain accounts will be done through IDX and potentially OrbitDB. IDX is a distributed identity system using IPFS and the Ceramic network to store identities and data sets. It also supports logging in to a single DID (decentralized identity) using multiple cryptocurrency accounts (Hive/Steem/etc.). The service/indexing nodes on the SPK network will index all IDX accounts and adjacent posts announcing themselves to the network. This means any account, regardless of origin, can be used to post content onto the network and earn rewards through tips to the blockchain accounts it has connected with. More information is available at https://idx.xyz
### Background on Orbitdb/IDX/Ceramic and how they fit into the SPK network
**OrbitDB**: OrbitDB is a distributed database built on top of IPFS. It allows storing/sharing databases across multiple nodes/peers. As a database is updated, the peers pinning it pin and update their local state of the database. An OrbitDB database is backed by an IPFS log of commits (changes); in OrbitDB's case this log is a CRDT (conflict-free replicated data type). All commits on the log are cryptographically signed, so no one can write arbitrary data to the database without authorization from the owner or the database's internal access controller. This means OrbitDB databases are stored redundantly and in a distributed or decentralized manner.
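The signed-log model described above can be illustrated conceptually: each commit links back to prior entries (forming a DAG that merges as a CRDT) and carries the author's signature, which the access controller checks before accepting the entry. This sketch shows the model only; it is not OrbitDB's actual entry format or API, and the signature verification itself is elided.

```typescript
// Conceptual entry in a signed, append-only log (the structure underlying
// an OrbitDB-style database). Field names are illustrative.
interface LogEntry {
  payload: unknown;     // the database operation (e.g. a put/del)
  parents: string[];    // hashes of prior entries, forming the DAG
  author: string;       // writer identity
  signature: string;    // author's signature over payload + parents
}

// The access-controller idea: only entries from authorized writers are
// merged into local state (real systems also verify the signature).
function acceptEntry(entry: LogEntry, authorizedWriters: Set<string>): boolean {
  return authorizedWriters.has(entry.author);
}
```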
**Ceramic/IDX**: Ceramic and IDX are complementary. IDX is an abstraction over Ceramic, while Ceramic is the core base layer for distributed datasets or databases. Ceramic is considered the eventual successor to OrbitDB, but it still has considerable room to mature. A major practical difference between OrbitDB and Ceramic is the lack of CRDTs in Ceramic (more on that later). Both technologies attempt to accomplish a similar goal: handling distributed, mutable datasets/databases.
Application of both technologies: Ceramic/IDX will be used for light accounts and SPK network posts, as it is the preferred technology for relatively low-throughput writes and single-writer datasets. OrbitDB will be used for datasets that change often and/or are changed by multiple authors at once, due to its support for CRDTs. Examples include video playlists and livestreams.
### Peerplays integration/scope of work:
* [Peerplays tech](https://community.peerplays.tech/) - @bobinson to add the overview/intro if needed.
* SIP tech overview
* write up about [Peerplays LP](https://community.peerplays.tech/technology/intro-to-peerplays-liquidity-pools) / [SIP](https://community.peerplays.tech/technology/intro-to-peerplays-liquidity-pools/service-infrastructure-pools)
* [Tokens - Token creation on Peerplays blockchain (user docs, definition etc)](https://community.peerplays.tech/technology/intro-to-peerplays-tokens)
* [Token creation system - same as above](https://devs.peerplays.tech/development-guides/creating-user-issued-assets)
* Hive SONs - [links to design documents](https://peerplays.gitbook.io/community-project-docs/son/hive-sons)
* Hive SONs - [fund movement diagram](https://devs.peerplays.tech/supporting-and-reference-docs/sidechain-flow-diagram-hive)
* [NFT Minting](https://devs.peerplays.tech/development-guides/nft-minting)
* NFT Minting for staking creator tokens
* NFT: [Staking into NFTs diagrams](https://community.peerplays.tech/technology/staking-in-peerplays)
* [Overall SPK.NETWORK Peerplays NFT Ecosystem diagram](https://gitlab.com/PBSA/documentation/ecosystem-diagrams/-/blob/develop/Miscellaneous/speak-network-diagram.png)
* DEX - [DEX intro](https://community.peerplays.tech/technology/peerplays-dex) and [link to DEX requirements](https://devs.peerplays.tech/supporting-and-reference-docs/peerplays-dex-development/requirements-specification)
* [DEX Functional Specifications](https://devs.peerplays.tech/supporting-and-reference-docs/peerplays-dex-development/functional-specs)
* Resource permissions - [Link to introduction](https://devs.peerplays.tech/development-guides/introduction-to-permissions)
* Peer ID sign up - [Link to requirements & user manual for devs](https://devs.peerplays.tech/tools-and-integrations/peerid/requirements-specification)
* PeerID : [user manual for devs](https://devs.peerplays.tech/tools-and-integrations/peerid/authentication-with-peerid)
* [GUNS](https://gitlab.com/PBSA/3speak-integration/-/wikis/GUNS-and-SMEC#guns) - publish the GUN document published by Jonathan (This can be [published on Gitbook](https://community.peerplays.tech/technology/gamified-user-namespaces-and-subject-matter-expert-committees))
* [SMEC](https://gitlab.com/PBSA/3speak-integration/-/wikis/GUNS-and-SMEC#SMEC) - publish the SMEC document published by Jonathan (This can be [published on Gitbook](https://community.peerplays.tech/technology/gamified-user-namespaces-and-subject-matter-expert-committees))
* Power up governance and bond staking system (NFT) - [Explanation/Intro to the NFT staking](https://community.peerplays.tech/technology/intro-to-peerplays-liquidity-pools#2-asset-staking)
* Fan tokens - [same as UIA](https://devs.peerplays.tech/development-guides/creating-user-issued-assets)
* Claim Drop 1 - @bobinson to provide details : https://gitlab.com/PBSA/3speak-integration/-/wikis/claim-drop
* Claim drop 2 - @bobinson to provide details https://gitlab.com/PBSA/3speak-integration/-/wikis/claim-drop
* General ads - future scope
* Community specific ads - future scope