Prototyping web apps with data vault
This project aims to grant users full control, self-authentication and data portability by providing developers with an open standard for secure data vaults and an open prototyping environment for creating web apps based on that standard.
This project aims to provide developers with an open standard for secure data vaults and an open prototyping environment for creating web apps based on that standard, to create an ecosystem that grants end users full control, self-authentication and data portability.
This project aims to provide developers with an open prototyping environment to build new tools which separate user data from the tools themselves and comply with an open data vault standard, so that end users gain full control, self-authentication and data portability.
COMMENTS (about above):
Use the first sentence when you want to emphasize the user benefits of the project, specifically full control, self-authentication, and data portability. The sentence begins with "This project aims to grant users…" which puts the user benefits upfront. The sentence also mentions that the project provides developers with an open standard and an open prototyping environment, but these are secondary to the user benefits.
Use the second sentence when you want to emphasize the developer benefits of the project, specifically the open standard for secure data vaults and the open prototyping environment. The sentence begins with "This project aims to provide developers…" which puts the developer benefits upfront. The sentence also mentions the user benefits, but they come after the developer benefits.
Use the third sentence when you want to emphasize the technical aspect of the project, specifically the separation of user data from tools and compliance with a data vault standard. The sentence begins with "This project aims to provide developers with an open prototyping environment to build new tools which…" which puts the technical aspect upfront. The sentence also mentions the user benefits, but they come after the technical aspect.
No existing p2p app decouples user data from the application, because there is no standard for how to do so and for letting many apps share the same mechanism.
Such a standard is necessary to fulfill the promise of users-own-their-data: users should be able to reuse their data with other apps, export it and manage access to it.
Once upon a time, in the early days of personal computers, users had all of their data on their own disks and all software read and wrote data to disk. Computers were new, operating systems were not stable, disks crashed often and backups were difficult to make and restore. Users were therefore happy when they could move more and more of their data into the cloud and no longer had to take care of backups themselves. But as users started producing more and more data, it became clear that hosting all this data would not be cheap. Apps and services continued giving users free storage space, but in return they got full access to their private data. User data is now a precious asset and the primary source of income for such companies, and users have little control over where and how their data is stored and used online.
But today machines and systems are a lot more stable and powerful. Furthermore, peer-to-peer technologies have matured and enable users to securely back up and easily restore their data across devices. What is missing is an environment and a set of standards for building web apps that decouple user data from the application itself, so developers can start building services that bring power back to the users.
As soon as you sign up with a free email provider or install an operating system, you usually get some cloud storage space. Accessing your data through an online environment has become commonplace for both businesses and individuals. You can share anything from a grocery list with your significant other to sensitive documents stored in the cloud so you can access them from work or on the road. But to let users upload, edit or create files online and pass them around, many cloud services require full access to users' data. Much like with the extensive (and sometimes unreadable) privacy policies you already had to wade through to open your 'free' account, users do not have many options: either you click yes and the cloud is yours, or you deny the apps all access and are left to your own devices.
The fact that you want to access your data online does not mean you need to store it in a place where the provider requires you to hand over the keys. Especially when it is unclear who can use those keys to look around in your documents, or analyze sensitive files simply for the sake of personally profiled advertising, which is quite certainly not what you signed up for. You want to know where your data is, lock the doors to it yourself and keep the keys somewhere safe (without any intricate key management or cryptographic busywork).
Data portability is a personal right established in the General Data Protection Regulation (GDPR). This is in direct conflict with the desire of companies to retain users and their data. Data is often tightly coupled to the application, which complicates moving data between services.
Big Tech companies currently dictate how digital identities are used. As a result, they have amassed vast amounts of private user data. Movements like Self-Sovereign Identity aim to give users control over their online identity.
Internet users have little control over where and how their data is stored and used online. Big Tech companies store gigabytes of data about you and know exactly which online services you use. User data is a precious asset and the primary source of income for such companies.
In our system, users are not reliant on Big Tech companies to authenticate themselves or to store and host their data. User data is stored in a data vault on a device under the control of the user.
The goal of this project is to design a system that gives users sovereignty not just over their identity, but also over their data.
As long as your data is on a remote server, it is not truly under your full control.
Building p2p apps and packaging them cross-platform requires a lot of time and effort from a dedicated team.
The development cycle is long: getting users to download, try and give feedback on apps is hard, because users hesitate to download unknown apps to their computers.
Users don't want to download anything onto their computers just to try and play around.
Users also see that committing to p2p would mean downloading many separate apps.
data sovereignty
p2p development
p2p adoption
your data is stuff you care about
mostly you only get to interact with it in places owned by corporations
you don't get to choose who can see your stuff
malicious actors can follow your activities and harass you
owners of spaces can record what you do and sell information
because it is not your space, you can't decide how anything works
features can disappear overnight
your data can be changed or deleted without your consent
what if you and people you care about could band together
have your own place for your data to live
where only people who see your stuff are people you trust
nobody is selling your privacy
you decide how things work and when they should change
Core: portal (web server)
Core: dpm OS (default WebApp OS, using web kernel)
Core: web kernel (web runtime patch)
Core: shell
Core: dpm (data package manager)
Core: vault:
comlink library or similar logic
Core: sandboxing:
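As a rough illustration of the "comlink library or similar logic" note above, here is a minimal sketch of exposing a vault over a worker boundary with comlink; the vault API shape, the file name vault.js and the availability of comlink as a module (e.g. via an import map or bundler) are assumptions, not taken from these notes.

// vault.js (runs inside a worker): expose a hypothetical vault API via comlink
import * as Comlink from 'comlink'
const store = new Map()
const vault = {
  async put (key, value) { store.set(key, value) }, // persist an entry
  async get (key) { return store.get(key) }         // read an entry back
}
Comlink.expose(vault)

// app side (a module): talk to the vault worker through a comlink proxy
import * as Comlink from 'comlink'
const remote = Comlink.wrap(new Worker('vault.js', { type: 'module' }))
await remote.put('greeting', 'hello vault')
console.log(await remote.get('greeting')) // 'hello vault'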
BOOT SEQUENCE:
1. domain1/index.html loads with a single script tag domain1/shim.js and starts the installation
2. domain1/shim.js imports and boots domain2/bootkernel.js
3. domain2/bootkernel.js installs itself as domain2/bootkernel.js#service_worker (BKSW) and reloads the page
4. BKSW/index.html is served from the bootkernel service worker and includes BKSW/renderer.js:
   <!doctype html><html><head><meta charset="utf-8"></head><body><script src="renderer.js"></script></body></html>
5. BKSW/renderer.js is the script that loads (spawns or connects to) the shared kernel worker domain2/bootkernel.js#shared_worker
6. BKSW/renderer.js runs as the main renderer process and runs the BOOT SCRIPTS
7. bootkernel.js#shared_worker is the shared kernel worker (see below)
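The sequence above relies on one bootkernel.js file playing three roles, labeled by URL fragments (#service_worker, #shared_worker). A minimal sketch of that role dispatch follows, under the assumption that the role can be detected from the global scope the script runs in; the detection logic and the files served by the service worker are not specified in these notes.

// bootkernel.js role dispatch (sketch; detection and served files are assumptions)
if (typeof window !== 'undefined') {
  // page context: install this same script as the bootkernel service worker (BKSW), then reload
  navigator.serviceWorker.register('./bootkernel.js').then(() => location.reload())
} else if (typeof SharedWorkerGlobalScope !== 'undefined' && self instanceof SharedWorkerGlobalScope) {
  // shared kernel worker: wait for renderer connections (see the onconnect handler further below)
} else if (typeof ServiceWorkerGlobalScope !== 'undefined' && self instanceof ServiceWorkerGlobalScope) {
  // BKSW: answer navigations with BKSW/index.html, which loads BKSW/renderer.js
  self.addEventListener('fetch', event => { /* respond with the generated index.html */ })
}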
// renderer "firmware"
window.kernel = new SharedWorker('https://domain2/bootkernel.js#shared_worker')
kernel.port.onmessage = ({data}) => eval(data) // renderer bootloader
@GOAL: develop system in system
umblical_fetch
(legacy fetch for https) to pull in legacy code, turn it into "hyper" MODULES in dpm/store to later export into new portal

shared worker bootkernel:
const renderers = new Set() // connected renderer ports
onconnect = event => {
  const [port] = event.ports
  renderers.add(port)
  port.onmessage = kernel_protocol
  // @TODO: define and check connected renderer type
  // @TODO: maybe based on "kernel mode" too
  const mode = 'plain'
  port.postMessage(`(${default_renderer_JS})('${mode}')`) // send the renderer its bootstrap script source
}
function kernel_protocol (event) {
  // ...
}
////////////////////////////////////////////////////////////////////////////////////////////////////
;(async () => {
document.body.replaceChildren()
document.head.replaceChildren(Object.assign(document.createElement('style'), { textContent: `
*, *::before, *::after { box-sizing: border-box; }
body, h1, h2, h3, h4, p, ul, ol, li, figure, figcaption, blockquote, dl, dd { margin: 0; }
body { display: flex; flex-direction: column; min-height: 100vh; }
iframe { border: 0; flex-grow: 1; }
img, picture { max-width: 100%; }
table { border-collapse: collapse; }
html:focus-within { scroll-behavior: smooth; }
` }))
document.title = 'Playground'
const { port1, port2 } = new MessageChannel()
port2.onmessage = event => {
console.log('[kernel]:renderer>', event.data)
}
port2.postMessage('hello renderer')
const kernel = { port: port1 }
default_renderer_JS('plain', 123)
async function default_renderer_JS (mode, id) { // basic renderer // @TODO: unique renderer ID
// https://markkarpov.com/post/the-identicon-package.html
// https://terrykwon.com/blog/visual-hashing/
// https://jdenticon.com/#icon-Dylan
// https://morioh.com/p/91ef450fe563
// http://jsfiddle.net/pHv6W/
// https://github.com/MyCryptoHQ/ethereum-blockies-base64
// https://github.com/laurentpayot/minidenticons
// https://siva.dev/identicons/
// https://github.com/Lobstrco/stellar-identicon-js
// https://dev.to/krofdrakula/improving-security-by-drawing-identicons-for-ssh-keys-24mc
// https://www.cssscript.com/svg-identicon-generator/
// https://www.cssscript.com/canvas-geometric-identicon-generator/
// https://www.cssscript.com/identicon-generator-jdenticon/
// https://www.cssscript.com/github-identicon-avatars-squareicon/
// https://www.cssscript.com/random-cartoon-avatars-svg-faces/
// use: package.json#name + location.href + source code hash or hyperurl and version (log index)
if (mode !== 'plain') return document.body.innerHTML = `<h1> '${mode}' is not supported yet </h1>`
kernel.port.onmessage = renderer_protocol // listen on the kernel message port
function renderer_protocol (event) {
console.log('[renderer]:kernel>', event.data)
// ...
}
document.onkeydown = event => {
const { altKey, ctrlKey, shiftKey, /*metaKey, which,*/ code, key, /*charCode, keyCode, location, */ repeat } = event
console.log({ altKey, ctrlKey, shiftKey })
if (altKey && ctrlKey && shiftKey) {
console.log('shell')
shell.hidden = !shell.hidden // ctrl+alt+shift toggles the shell
}
console.log(JSON.stringify({ code, key, repeat }))
}
////////////////////////////////////////////////
// FEATURES:
// 1. ctrl+alt+shift => toggle shell
////////////////////////////////////////////////
// GOAL:
// 1. sandboxed viewport (iframe sandbox)
// 2. default shell, replaceable by custom shell
////////////////////////////////////////////////
// RENDERER KERNEL:
const shell = sandbox(shell_JS, shell_protocol) // shell_JS is not an address, but a script
const viewport = sandbox(viewport_JS, viewport_protocol) // viewport_JS is not an address, but a script
document.body.append(viewport, shell)
return
////////////////////////////////////////////////
async function shell_JS (port2, ID) { // script
document.head.replaceChildren(Object.assign(document.createElement('style'), { textContent: `
html { background-color: black; color: lime; font-family: monospace; font-size: 8px; }
*, *::before, *::after { box-sizing: border-box; }
body, h1, h2, h3, h4, p, ul, ol, li, figure, figcaption, blockquote, dl, dd, pre { margin: 0; }
body { display: flex; flex-direction: column; min-height: 100vh; }
iframe { border: 0; flex-grow: 1; }
img, picture { max-width: 100%; }
table { border-collapse: collapse; }
html:focus-within { scroll-behavior: smooth; }
input, button, textarea, select { all: unset; max-inline-size: 100%; color: white; flex-grow: 1; }
.shell { flex-grow: 1; display: flex; flex-direction: column; height: 100%; }
.log { display: flex; flex-direction: column; flex-grow: 1; overflow: auto; max-height: calc(100vh - 9.5px); }
.cmd { width: 100%; display: flex; }
` }))
console.log('[renderer/shell_JS]', ID)
port2.postMessage('ping')
port2.onmessageerror = event => console.error('error', event)
port2.onmessage = onmessage
document.body.append(Object.assign(document.createElement('div'), { className: 'shell', innerHTML: `
<div class="log"></div>
<div class="cmd">🌐alexander@seraseed.com:/dpm/store/foobar#> <input placeholder="type command"><a title="Keyboard Shortcuts:\n ctrL+shift+alt: toggle shell">❔</a></div>
<!--<div>👤alexander📍seraseed📂🗂️🐚/dpm/store/foobar🛠️▶<input placeholder="type command">❓❔📡⚫🌐🌎🌍🌏 ⚪🔴🟢🟡🔵🟣🟠</div>-->
` }))
const [log, { children: [input] }] = document.body.lastChild.children
input.onkeypress = event => {
if (event.key === 'Enter') return input.value = evaluate(input.value) || ''
}
Array.from({ length: 30}).map((x,i) => (input.value=i,input.onkeypress({key: 'Enter' })))
await new Promise(ok => setTimeout(ok, 200))
return
function onmessage (event) {
console.log('[renderer/shell_JS]:system>', event.data)
// ...
}
function evaluate (data) {
log.append(Object.assign(document.createElement('div'), { innerHTML: `<span>></span> <span>${data}</span>` }))
log.scrollTop = log.scrollHeight
port2.postMessage({ type: 'exec', data })
}
}
function shell_protocol (port1, ID) { // onready
// @TODO: messageport .close() and .onmessageerror
console.log('[system]:renderer/shell', 'init', ID)
port1.postMessage('hello')
port1.onmessageerror = event => console.error('error', event)
port1.onmessage = event => {
console.log('[system]:renderer/shell>', event.data)
port1.postMessage('pong')
}
}
function viewport_protocol (port1, ID) { // onready
// @TODO: messageport .close() and .onmessageerror
console.log('[system]:renderer/viewport', 'init', ID)
port1.postMessage('hello')
port1.onmessageerror = event => console.error('error', event)
port1.onmessage = event => {
const { type, data } = event.data
console.log('[system]:renderer/viewport>', type, data)
port1.postMessage('ack')
}
}
async function viewport_JS (port2, ID) { // script: viewport is augmented no-script iframe
document.body.style.backgroundColor = 'darkblue'
console.log('[renderer/viewport_JS]', ID)
port2.postMessage('ping')
port2.onmessageerror = event => console.error('error', event)
port2.onmessage = event => console.log('[renderer/viewport_JS]:system>', event.data)
await new Promise(ok => setTimeout(ok, 200))
}
////////////////////////////////////////////////
function sandbox (script, onready) {
if (typeof script !== 'function') throw new Error('no script function provided')
if (typeof onready !== 'function') throw new Error('no ready callback provided')
const ID = `${Math.random()}`.slice(2) // @TODO: where to get real pids from?
const srcdoc = `<!doctype html>
<html>
<head><meta charset="utf-8"></head>
<body><script>(${firmware})(${ID}, '${location.origin}', ${script})</script></body>
</html>`
const opts = { srcdoc, sandbox: 'allow-scripts' } // @TODO: what about CSPs and other flags?
const iframe = Object.assign(document.createElement('iframe'), opts)
window.addEventListener('message', onmessage)
console.log('[kernel]', 'spawn:iframe')
return iframe
function onmessage (event) {
if (event.source !== iframe.contentWindow) return
event.stopImmediatePropagation()
window.removeEventListener('message', onmessage)
const { data, ports: [port1] } = event
console.log(`[kernel]:sandbox-${ID}>`, data)
console.log({ port1 })
onready(port1, ID)
}
async function firmware (ID, origin, script) {
const name = `[sandbox-${ID}]`
const AsyncFunction = (async () => {}).constructor
const source = opener || parent
const { port1, port2 } = new MessageChannel()
try {
const run = new AsyncFunction('port', 'ID', `await (${script})(port, ID)`)
console.log(name, 'init:script')
source.postMessage('ready', origin, [port1])
// @TODO: could use OCAPN to sandbox scripts
const code = await run(port2, ID)
// @TODO: maybe await script and then close/shutdown worker/iframe and notify all users or it's user
console.log(name, 'exit:script', code)
} catch (error) {
port2.postMessage(error)
}
}
function ready (source, data) { // @TODO: unused leftover handler; pid, origin and port2 are not defined in this scope
if (source !== iframe.contentWindow) return console.log('@TODO: wat pxeboot?')
console.log('[kernel]:iframe>', data)
source.postMessage(pid, origin, [port2])
}
}
////////////////////////////////////////////////
}
})()
WebShellApp.command
in the app to register command handlers.
WebShellApp.trigger
in the app to send an event.
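A minimal usage sketch of these two hooks; the exact signatures (command name plus handler, event name plus payload) are assumptions based on the descriptions above, not a confirmed API.

// register a command handler in the app (signature assumed)
WebShellApp.command('greet', async args => {
  return `hello ${args[0] || 'world'}`
})
// send an event from the app (signature assumed)
WebShellApp.trigger('app:ready', { when: Date.now() })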
One of the main goals of this work is to help users regain control over their data and thus over their privacy. The first step is to standardize how apps are separated from user data, to then enable users to put their data into a data vault which they can self-host. The system is specifically designed not to rely on cloud infrastructure. All data is stored with and hosted by the user. This makes the vault GDPR compliant by design: there is no data storage or processing by third parties, which eliminates the need for Privacy Officers and Data Protection Officers. It also stops data-hungry companies from running machine learning algorithms over user data and learning users' behavioral patterns, with the added benefit of disrupting Big Tech's ability to monetize user data.
vault.js
and run it
==> NEEDS SANDBOX FIRST !
==> then probably needs PACKAGE MANAGER first (used by kernel by default to install package.json)
==> maybe that's enough to assume the pieces are there and we can think about the package manager later
==> i guess the package manager api and the vault are the same or at least related (npm vs. npmjs.org) or (git vs. github)
==> interestingly, git can also push to remote git repos (maybe hypershell is the answer!)
BOOKMARK: do not receive a vault (re)connection hook
but instead a custom or default vault script
when user logs in to dat pages, they connect the page to the vault (user sets the permissions for every page)
requires the kernel/runtime to start in "vault mode"
enables self-authentication: stores user's public/private keypair
signs new data created (but private mode is also supported)
password protected
on this page, user doesn't log in, because this page is a vault
it takes care of storing your data
apps/tools from the shell get downloaded to the vault cache (shell install)
types of data stored in vault:
data is version controlled
includes:
shim
permissions: access control management
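A hypothetical shape for such a per-page permission record; the field names and values are assumptions, since the notes only say that the user sets permissions for every page that connects to the vault.

// hypothetical per-page permission grant stored in the vault
const grant = {
  page: 'https://domain1/index.html',         // the page the user connected to the vault
  permissions: ['read:notes', 'write:notes'], // chosen by the user at login
  issued: Date.now()                          // when the grant was made
}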
Vault new features
data vault provides fine-grained access control for the user’s locally stored data.
data vault
For now, data can be opened and saved from the local data vault.
data vault stores user keypairs
privacy-first (offline) secure data vault for personal storage
allows users to store all their data on their smartphones and control with whom they share it
data vault gives users back control of their identity and all their data
a starter set of data system permission adapters
A data vault with self-authentication standard would be a secure data storage system that includes a built-in mechanism for verifying the authenticity and integrity of data stored within it. This mechanism could use cryptographic hashing or digital signatures to ensure that the data has not been tampered with or altered in any way. The data vault would be designed to provide users with full control over their data, while also ensuring that it can be shared securely and transparently with other parties. The use of self-authenticating data and a secure data vault can help to protect users' privacy and ensure that their data remains under their control at all times.
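As an illustration of the hashing/signature mechanism described above, here is a minimal sketch using the Web Crypto API; the key type (ECDSA P-256), the seal/verify helper names and the storage format are assumptions, not part of the standard described here.

// sketch: self-authenticating vault entries via sign-on-write / verify-on-read
;(async () => {
  const encoder = new TextEncoder()
  // vault keypair (assumed algorithm; the notes do not fix one)
  const keypair = await crypto.subtle.generateKey({ name: 'ECDSA', namedCurve: 'P-256' }, false, ['sign', 'verify'])
  async function seal (data) { // sign an entry before storing it in the vault
    const bytes = encoder.encode(JSON.stringify(data))
    const signature = await crypto.subtle.sign({ name: 'ECDSA', hash: 'SHA-256' }, keypair.privateKey, bytes)
    return { data, signature: new Uint8Array(signature) }
  }
  async function verify (entry) { // true if the stored entry was not tampered with
    const bytes = encoder.encode(JSON.stringify(entry.data))
    return crypto.subtle.verify({ name: 'ECDSA', hash: 'SHA-256' }, keypair.publicKey, entry.signature, bytes)
  }
  const entry = await seal({ note: 'hello vault' })
  console.log(await verify(entry)) // => true
})()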
user control, data sovereignty
Nearly five decades after the invention of public key cryptography, we still lack a good solution for people to manage their digital identity and efficiently share encrypted data directly with each other, let alone at massive scale. Various movements aim to halt Big Tech's power and give control back to the users. These movements are powered by technologies like blockchain and Self-Sovereign Identity (SSI), which promise to improve how we interact with online services and each other.
This work aims to solve these problems by developing a data vault with advanced data sharing capabilities that leverage SSI to provide users with true sovereignty over their data.
A system where users have true sovereignty over data has to have the following properties:
special js console
shows the account you are logged in with
logs your commands and stores them in the vault
lists all the data allowed by the vault
has access to the portal code (data in the repository of the portal)
through shell you interact with: (list, install, init, export, run), (ls, cd, whoami)
you can run apps/tools from other people (in the shell)
you can use it to do any js stuff (fetch etc.)
- [1] enables running all the code without any additional build tools
shows shell UX (=special devtools/js console)
taskbar
[user]@[node]:[path]#[task]>
DATASHELL UX:
user@domain:path> asdf
user@domain:path> asdf1
user@domain:path> asdf2
serapath@shell:~#live>
~: vault user home
/: normal path local to app
.. and . only exist in commands, not in "path" segment of prompt
user@domain:~/path/to/dir> asdf
user@domain:path> asdf2
user@domain:path> asdf3
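A small sketch of building the prompt string from its parts, matching the [user]@[node]:[path]#[task]> format above; the helper name is hypothetical.

// hypothetical helper: render the shell prompt from its parts
function format_prompt ({ user, node, path, task }) {
  return `${user}@${node}:${path}#${task}> `
}
// format_prompt({ user: 'serapath', node: 'shell', path: '~', task: 'live' })
// => 'serapath@shell:~#live> '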
input.onkeypress = e => {
  if (e.key !== 'Enter') return
  const { value } = input // the typed command line
  const { user, node, path, task } = prompt // prompt: current prompt state ([user]@[node]:[path]#[task]>), assumed to be defined elsewhere
  const msg = make(user, node, path, task) // make: helper (assumed to be defined elsewhere) that builds a shell message from the prompt parts
  shell(msg) // shell: dispatcher (assumed to be defined elsewhere) that executes the message
}
const task = { // task & task protocol
  pid: project.id, // project: the surrounding project/package context, assumed to be defined elsewhere
  tid: taskid(),   // taskid: helper (assumed to be defined elsewhere) that generates a unique task id
  ins: [],         // task inputs
  out: [],         // task outputs
  log: [],         // task log entries
}
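Following the note below about programs handling json as a universal interface, a sketch of how such a task could travel between shell programs as JSON; the field values are placeholders.

// sketch: a task serialized as JSON so independent shell programs can hand it to each other
const wire = JSON.stringify({ pid: 'foobar', tid: '0001', ins: ['readme.md'], out: [], log: [] })
// the next program in the pipeline parses it and continues the task
const received = JSON.parse(wire)
received.log.push('received by next program')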
shell install
user_script
"Encourage to write programs that do one thing and do it well. Write programs to work together. Write programs to handle json, because that is a universal interface."