# Iroh workshop OmniOpenCon 2025
## Prerequisites
This is a Rust workshop, so you will need a Rust toolchain installed. You can get one at https://rustup.rs/.
The examples are in a git repository, https://github.com/n0-computer/iroh-workshop-omniopencon2025, so you will need [git](https://git-scm.com/) to download them.
If you like IDEs, you will also want an IDE with Rust support such as [vscode](https://code.visualstudio.com/). For vscode, there is an extension called [rust-analyzer](https://rust-analyzer.github.io/) that provides Rust language support. You can install it from within vscode.
If you are new to Rust, don't worry. The workshop is structured into multiple self-contained examples that can be run without any Rust knowledge, and the code is easy enough to understand.
This workshop is about networking and data transfer. We are going to transfer quite a lot of data, so a few GiB of free disk space would be helpful.
We will mostly rely on the conference wifi for connectivity, but I have also set up a phone hotspot in case that does not work at all.
Network SSID: `iroh`
Password: `movethebytes`
Join the iroh Discord:
https://iroh.computer/discord
Use the channel #workshop.
## Intro
- Who has written a simple TCP server before?
- Who has experience with Rust?
- Who has cargo installed?
## Disclaimer
We are not using the stable version of iroh-blobs, but the 0.95 version, which is still in flux. So some **things might not work**.
We want to stabilize the iroh and iroh-blobs APIs and wire formats by the end of Q4.
## Compile times
Just run `cargo build` in the root of the repo right now, so you save some time!
## Note for windows users
You might get a dialog that asks if you want to allow the binary to connect!
## On endpoint ids
For a server, we want a stable endpoint id.
For a client, we don't care, it can be random.
An endpoint id is the public key of an ed25519 keypair, which can be generated from just 32 bytes of random data.
To keep the endpoint id stable, we just need to persist the private key somewhere.
We don't want to deal with persistence in the examples, so we use an environment variable.
For all following examples, run them with `IROH_SECRET=...` to get a stable endpoint id.
```rust
use std::{env, str::FromStr};

use anyhow::{Context, Result};
use iroh::SecretKey;
use rand::thread_rng;

pub fn get_or_generate_secret_key() -> Result<SecretKey> {
    if let Ok(secret) = env::var("IROH_SECRET") {
        // Parse the secret key from the environment variable
        SecretKey::from_str(&secret).context("Invalid secret key format")
    } else {
        // Generate a new random key and print it for reuse
        let secret_key = SecretKey::generate(&mut thread_rng());
        println!("Generated new secret key: {}", secret_key);
        println!("To reuse this key, set the IROH_SECRET environment variable to this value");
        Ok(secret_key)
    }
}
```
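As an aside, here is a minimal sketch of how the endpoint id relates to the secret key, assuming iroh's `SecretKey` API: 32 bytes of entropy fully determine the keypair, and the endpoint id is the public half.
```rust
use iroh::SecretKey;

// All-zero bytes just for illustration; use real entropy in practice!
let secret = SecretKey::from_bytes(&[0u8; 32]);
// The endpoint id is the public key derived from the secret key.
let endpoint_id = secret.public();
println!("endpoint id: {endpoint_id}");
```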
## Echo1
### Connect side:
We want to write a simple echo protocol that just answers what it was sent.
Creating an endpoint to connect to the echo service:
```rust
let endpoint = Endpoint::builder()
    .bind()
    .await?;
```
Connecting:
```rust
const ECHO_ALPN: &[u8] = b"ECHO";
let connection = endpoint.connect(addr, ECHO_ALPN).await?;
```
Opening a stream:
```rust
let (mut send_stream, mut recv_stream) = connection.open_bi().await?;
```
Writing the message:
```rust
send_stream.write_all(message.as_bytes()).await?;
send_stream.finish()?;
```
Reading the response:
```rust
let res = recv_stream.read_to_end(1024).await?;
```
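The response is raw bytes (capped at 1024 above). To display it, assuming the peer sent valid UTF-8:
```rust
println!("Response: {}", String::from_utf8_lossy(&res));
```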
Closing the connection:
```rust
connection.close(0u8.into(), b"done");
connection.closed().await;
```
### Accept side:
Accept a connection. This is a two-stage process: first we wait for an incoming connection attempt, then we await the handshake to get the actual connection.
```rust
let incoming = ep.accept().await?;
let conn = incoming.await?;
```
Accept a bidi(rectional) stream:
```rust
let (mut send_stream, mut recv_stream) = conn.accept_bi().await?;
```
Read the message and echo it back:
```rust
let msg = recv_stream.read_to_end(1024).await?;
send_stream.write_all(&msg).await?;
send_stream.finish()?;
```
Wait for the connection to close:
```rust
conn.closed().await;
```
## Echo2
Echo1 accepts exactly one connection and then shuts down.
We don't want that. We want it to keep serving connections.
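We could just write an accept loop ourselves. A minimal sketch, reusing the echo1 accept-side code:
```rust
// Handle each incoming connection in its own task, forever.
while let Some(incoming) = ep.accept().await {
    let conn = incoming.await?;
    tokio::spawn(async move {
        if let Ok((mut send_stream, mut recv_stream)) = conn.accept_bi().await {
            if let Ok(msg) = recv_stream.read_to_end(1024).await {
                send_stream.write_all(&msg).await.ok();
                send_stream.finish().ok();
            }
        }
        conn.closed().await;
    });
}
```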
But iroh has built-in functionality for this:
### Define a protocol handler
```rust
impl ProtocolHandler for EchoProtocol {
    async fn accept(&self, conn: Connection) -> AcceptResult<()> {
        // ... same logic as the echo1 accept side ...
    }
}
```
### Create a router and add the protocol handler
```rust
let router = Router::builder(ep)
    .accept(echo::ECHO_ALPN, echo::EchoProtocol)
    .spawn()
    .await?;
```
Wait for control-c; the router is handling accepts in the background:
```rust
tokio::signal::ctrl_c().await?;
```
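When shutting down, we can also close the router cleanly instead of just exiting. A sketch, assuming this iroh version provides `Router::shutdown`:
```rust
// Gracefully shut down the router and the protocols it manages.
router.shutdown().await?;
```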
## Echo3
These tickets are huge. They also contain information that might change, e.g. the current IP address or relay URL.
Can we dial with just the node id, which does not change?
### Accept
Configure discovery publishing
```rust
let ep = Endpoint::builder()
    .alpns(vec![echo::ECHO_ALPN.to_vec()])
    .secret_key(secret_key)
    .discovery_n0()
    .discovery_dht()
    .bind()
    .await?;
```
Now we can also use the short ticket that just contains the node id.
```rust
let ticket_short = NodeTicket::from(NodeAddr::from(addr.node_id));
```
The example will print some info on how you can see what is going on with discovery:
Looking up the published info on the DNS system
```bash
dig TXT @dns.iroh.link _iroh.unt5ncmjw3g1ui7hfkdzqf6cdgxam446i4apsseghkksg1jc3g7o.dns.iroh.link
```
Looking up the published info on the bittorrent mainline DHT
```
https://app.pkarr.org/?pk=unt5ncmjw3g1ui7hfkdzqf6cdgxam446i4apsseghkksg1jc3g7o
```
### Connect
Configure discovery resolution.
The connect side will generate a random node id that nobody cares about. We **don't** want to publish our node id.
```rust
let ep = Endpoint::builder()
    .add_discovery(PkarrResolver::n0_dns())
    .add_discovery(DhtDiscovery::builder().build().unwrap())
    .bind()
    .await?;
```
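With discovery resolution in place, dialing with just the node id is enough. A sketch, assuming `node_id` was taken from the short ticket:
```rust
// Discovery resolves the node id to a dialable address behind the scenes.
let connection = ep.connect(node_id, echo::ECHO_ALPN).await?;
```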
## Sendme 1
OK, now we know how connections work, and also how to write protocol handlers.
Now let's move some bytes. First, a single file.
### Share
Use `get_or_generate_secret_key` from above, so we have a stable node id if we restart the program.
```rust
let secret_key = util::get_or_generate_secret_key()?;
// Create an endpoint and print the node ID
let ep = Endpoint::builder()
    .secret_key(secret_key)
    .bind()
    .await?;
```
Create a blob store in a random temporary directory.
```rust
let blobs = FsStore::load(create_send_dir()?).await?;
```
Add some data, for now just the file that was provided:
```rust
let tag = blobs.add_path(&absolute_path).await?;
```
Create a ticket for the content, to share on a side channel with the receiver.
```rust
let ticket = BlobTicket::new(addr, *tag.hash(), tag.format());
```
Create a blobs protocol handler with that store and register it like in the echo2 example:
```rust
let router = Router::builder(ep.clone())
    .accept(iroh_blobs::ALPN, Blobs::new(&blobs, ep.clone(), None))
    .spawn()
    .await?;
```
Wait for control-c; the router is handling accepts in the background:
```rust
tokio::signal::ctrl_c().await?;
```
### Receive
Create a blob store, with a **stable** directory based on the content.
```rust
let store = FsStore::load(create_recv_dir(ticket.hash_and_format())?).await?;
```
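To fetch, we also need a connection to the provider. The ticket contains the provider's address (assuming an endpoint `ep` built as before):
```rust
// Connect to the provider using the address from the ticket and the blobs ALPN.
let conn = ep.connect(ticket.node_addr().clone(), iroh_blobs::ALPN).await?;
```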
Get the data. We don't want to show progress for now.
```rust
let stats = store
    .remote()
    .fetch(conn, ticket.clone())
    .await?;
```
Export the data (single file) to a user-provided path.
```rust
let size = store.export(ticket.hash(), target.clone()).await?;
```
## Sendme 2
### Share
Walk an entire directory, add all files, then create a collection containing all these files.
The collection is just a sequence of hashes, where the first child blob contains metadata about the file names.
```rust
let tag = util::import(absolute_path.clone(), &blobs).await?;
let ticket = BlobTicket::new(addr, *tag.hash(), tag.format());
```
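Under the hood, a collection is just a list of (name, hash) pairs. A minimal sketch of building one by hand, assuming `hash_a` and `hash_b` are hashes of blobs already in the store:
```rust
use iroh_blobs::format::collection::Collection;

// A collection pairs human-readable names with blob hashes.
let collection: Collection = vec![
    ("photos/a.jpg".to_string(), hash_a),
    ("photos/b.jpg".to_string(), hash_b),
]
.into_iter()
.collect();
```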
### Receive
Receive now works exactly the same. The ticket is now for an entire sequence of hashes instead of a single one, and the fetch call will just download them all by default.
```rust
let stats = store
    .remote()
    .fetch(conn, ticket.clone())
    .await?;
```
### Export
Export just goes through the files one by one and exports them. We have a small helper for this:
```rust
let collection = Collection::load(ticket.hash(), store.deref()).await?;
util::export(&store, collection).await?;
```
## Sendme 3
We want to see what is going on, not just once it's done.
`dump_provider_events` just prints all events happening on the provider side, so we know what's going on.
### Share
```rust
let (dump_task, dump_sender) = util::dump_provider_events();
// Create a router with the endpoint
let router = Router::builder(ep.clone())
    .accept(
        iroh_blobs::ALPN,
        Blobs::new(&blobs, ep.clone(), Some(dump_sender)),
    )
    .spawn()
    .await?;
```
### Receive
What if we have multiple tickets? Wouldn't it be nice if we could download from multiple sources at the same time?
Make sure all tickets are for the same content:
```rust
let content = tickets
    .iter()
    .map(|ticket| ticket.hash_and_format())
    .collect::<BTreeSet<_>>();
ensure!(
    content.len() == 1,
    "All tickets must be for the same content"
);
let content = content.into_iter().next().unwrap();
```
Get the nodes from the tickets:
```rust
let nodes = tickets
    .iter()
    .map(|ticket| ticket.node_addr().node_id)
    .collect::<BTreeSet<_>>();
```
Give the endpoint the information about the current relay and addresses contained in the tickets (these are not short tickets here!):
```rust
for ticket in tickets {
    ep.add_node_addr(ticket.node_addr().clone())?;
}
```
We now want to download from multiple nodes at the same time! But `fetch(...)` only downloads from a single node.
Create a downloader. The downloader needs the endpoint to create connections.
```rust
let downloader = store.downloader(&ep);
```
Download the data, letting the downloader handle the details.
```rust
let mut stream = downloader.download(content, nodes).stream().await?;
while let Some(item) = stream.next().await {
    println!("Received: {:?}", item);
}
```
OK, this is great, but it always downloads in the same order, so the poor first node will be quite busy. Let's shuffle things a bit:
```rust
let options = DownloadOptions::new(
    content,
    Shuffled::new(nodes.into_iter().collect()),
    SplitStrategy::None,
);
let mut stream = downloader.download_with_opts(options).stream().await?;
```
Now if provided with multiple tickets, it will choose a random node first.
The first node won't be overwhelmed, but we still don't get faster downloads.
Can we split the download into multiple downloads, so that each can go to a different node?
```rust
let options = DownloadOptions::new(
    content,
    Shuffled::new(nodes.into_iter().collect()),
    SplitStrategy::Split,
);
let mut stream = downloader.download_with_opts(options).stream().await?;
```
Now we should see faster downloads.
## Sendme 4
Ok, great. We can download from multiple nodes. But what about all these tickets... ?
Can we have a central place where nodes register if they want to share something, and other nodes can look up who has what?
### Share
We need to enable node discovery, since the tracker only provides node ids:
```rust
let ep = Endpoint::builder()
    .discovery_n0()
    .discovery_dht()
    .secret_key(secret_key.clone())
    .bind()
    .await?;
```
We create a task that publishes an `Announce` at regular intervals.
For this example, we publish very frequently.
In a real-world application we would publish once per day or so.
```rust
let tag = util::import(absolute_path.clone(), &blobs).await?;
let announce_task = tokio::spawn(announce_task(
    *tag.hash_and_format(),
    ep.clone(),
    secret_key,
));
```
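The announce task itself could look roughly like this. This is a sketch only; `announce_once` is a hypothetical stand-in for the actual tracker announce call in the workshop repo:
```rust
use std::time::Duration;

use iroh::{Endpoint, SecretKey};
use iroh_blobs::HashAndFormat;

// Sketch: re-announce the content at a fixed interval so the tracker
// keeps listing us as a provider. `announce_once` is hypothetical.
async fn announce_task(content: HashAndFormat, ep: Endpoint, secret_key: SecretKey) {
    let mut interval = tokio::time::interval(Duration::from_secs(60));
    loop {
        interval.tick().await;
        if let Err(e) = announce_once(&ep, &secret_key, content).await {
            eprintln!("announce failed: {e}");
        }
    }
}
```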
Other than that, the provider side is unchanged!
The whole point of the exercise is that we only care about content, no more tickets!
So print *just the content*!
```rust
println!("Hash: {}", tag.hash());
```
### Receive
Instead of having a fixed set of nodes, or even a shuffled fixed set of nodes, we use a content discovery mechanism that asks the tracker the send side publishes to.
Parse the command line arg as a `HashAndFormat`:
```rust
let content = HashAndFormat::from_str(content).context("invalid content")?;
```
We need to enable node discovery, since we need to look up node ids, but not publish our own:
```rust
let ep = Endpoint::builder()
    .add_discovery(|_| Some(discovery::pkarr::PkarrResolver::n0_dns()))
    .add_discovery(|_| discovery::pkarr::dht::DhtDiscovery::builder().build().ok())
    .bind()
    .await?;
```
Now configure `TrackerDiscovery` as the discovery mechanism.
```rust
let options = DownloadOptions::new(
    content,
    TrackerDiscovery::new(ep.clone(), TRACKER.parse()?),
    SplitStrategy::None,
);
```
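Then the download itself works exactly as in sendme3:
```rust
let mut stream = downloader.download_with_opts(options).stream().await?;
while let Some(item) = stream.next().await {
    println!("Received: {:?}", item);
}
```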
And we're done.
Both sides have to use the same tracker, otherwise it won't work. We just hardcode the tracker node id.
```rust
const TRACKER: &str = "69b2f535d5792b50599b51990963e0cca1041679cd968563a8bc3179a7c42e67";
```
This tracker is running in us-east-1, so connecting to it is a bit slow. But that does not matter, since the real data transfer is all local.