At dClimate we want to make climate DeSci easy. The way to do this is by giving people access to tools they already know while abstracting away all the "complicated stuff". It's hard enough to get people to open their terminal and install Git, let alone install Kubo (IPFS), follow cross-platform setup instructions, and so on. In the case of DeSci, one of the most important tools for scientists is a Jupyter Notebook for easily running Python code. Coupled with IPFS, it can demonstrate the value of DeSci.
At dClimate we created a mini curriculum of sorts in a repo, which can be found at https://github.com/dClimate/jupyter-notebooks. It bundles IPFS and Jupyter into the same Docker container, so the end user doesn't need to worry about their global Python version, venvs, package managers, etc. Things "just work". What's more, the notebook can not only be run locally via docker compose (many scientists won't install Docker, so that's already a drop-off point), it can also be launched onto Railway (a very easy to use PaaS) with essentially one click, no frills. In other words, you can launch a Jupyter Notebook that accesses data via IPFS without being too technical. You can launch it on Railway using this template: https://railway.com/template/4Xk-zm?referralCode=ciD76B (IMPORTANT: After deploying the template you must tap the jupyter-notebooks service box, go to the Settings page, select TCP Proxy under Public Networking, and pick port 4001, which maps to IPFS. You can only pick one proxy; you will be randomly assigned a domain name and a port which maps to the port you selected. You must then click the three dots on Deployments and redeploy so that the TCP proxy takes effect.)
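For the local route, the workflow is roughly the following (a sketch; the repo's README and compose file are the authoritative source for the exact commands and ports):

```shell
# Clone the repo and start Jupyter + IPFS together in one container
git clone https://github.com/dClimate/jupyter-notebooks.git
cd jupyter-notebooks
docker compose up

# Jupyter Lab is then served on the default port 8888,
# with the bundled Kubo node listening on 4001 inside the container
```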
Whether on Railway or running locally, the IPFS node in the notebook (101 - Getting Started.ipynb) can access and swarm-connect to other running peers. This also means data can be queried from those peers or the wider IPFS network. So far so good.
However, this is not enough. We want to ensure this notebook can be peered to directly, both by other Kubo nodes and from the browser (Helia nodes). The reason is that we have data-mapping tools (unfortunately not visible right now) living at https://dev.marketplace.dclimate.net/, and we want users who do some data science in their notebook to be able to grab their multiaddr, paste it into the marketplace, peer, then take the hash of their data and immediately visualize it on the frontend in JavaScript.
Sidenote:
The issue is that right now the path Helia --WebSockets-> IPFS node in Docker on Railway is unreachable. Something important to note is that Railway, much like DigitalOcean App Platform (another PaaS), only allows ONE inbound port, which in this case is the default 8888 port for Jupyter Lab. You might ask: if no other inbound ports are allowed, how do you connect to it? By creating the TCP proxy.
This address is then constructed here https://github.com/dClimate/jupyter-notebooks/blob/main/scripts/start.sh#L82 and set via ipfs config --json Addresses.Announce.
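Conceptually, the announce step looks like this (a sketch with placeholder domain and port; the real values are assigned by Railway and assembled in start.sh, so treat the variable names here as illustrative only):

```shell
# Placeholders: Railway assigns the actual proxy domain and port per deployment
PROXY_DOMAIN="tramway.proxy.rlwy.net"
PROXY_PORT="41298"
PEER_ID=$(ipfs config Identity.PeerID)

# Announce the externally reachable TCP proxy address so other peers can dial it,
# mirroring the /dns4/.../tls/sni/... format used by start.sh
ipfs config --json Addresses.Announce \
  "[\"/dns4/${PROXY_DOMAIN}/tcp/${PROXY_PORT}/tls/sni/${PROXY_DOMAIN}/ipfs/${PEER_ID}\"]"
```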
With the current setup you can go to the running notebook, ipfs add some content (make it unique!), and access it from a public IPFS gateway seconds later (incredibly cool, and it brings us almost to the finish line).
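That round trip can be sketched as follows (assuming the daemon in the container is announcing correctly; ipfs.io is just one public gateway option):

```shell
# Inside the notebook container: add a unique piece of content
echo "hello from the notebook $(date +%s)" > note.txt
CID=$(ipfs add -Q note.txt)

# From anywhere else: fetch it through a public gateway
curl "https://ipfs.io/ipfs/${CID}"
```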
Since Railway only makes one inbound port available (with SSL), 8888, and that is used up by Jupyter, the idea was to use the TCP proxy and its port for both TCP connections and WS. TCP seemingly works, based on:
ipfs swarm connect /dns4/tramway.proxy.rlwy.net/tcp/41298/tls/sni/tramway.proxy.rlwy.net/ipfs/12D3KooWAL9oL26fySUckxCv7TYVSPD6cEwy9vGZV1Dvt2fJJMQW
The idea was to continue along this route. To catch our breath: we added a TCP proxy, since Railway allows one per service (an additional inbound port aside from the default), in order to provide outside access to the IPFS node living in the Docker container. Railway provides SSL on the main domain but NOT on the TCP proxy domain. We initially tried an Addresses.Announce or Addresses.AppendAnnounce using the TCP proxy domain for WebSockets, but later learned that the proxy passes raw TCP bytes (it's layer 4 and has no idea what's happening at layer 7, HTTPS/WS; per Railway: "as long as your application handles the upgrade, the behavior of the tcp proxy does not come into play"), and therefore there is no cert. We want WebSockets since UDP/QUIC/WebRTC is not supported, but a basic TCP and WS connection would be "good enough".
The thought was: let's just set up AutoTLS and "force" Kubo to announce on the open TCP port (which is seemingly already used to give external access to data). Among other things, we tried ipfs config --json Swarm.DisableNatPortMap with both true and false.
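The relevant Kubo config toggles look like this (a sketch; AutoTLS.Enabled and AutoTLS.AutoWSS match the configuration dump in the logs, and Swarm.DisableNatPortMap was tried both ways):

```shell
# Enable AutoTLS, which issues a *.libp2p.direct certificate
# once the node is publicly reachable
ipfs config --json AutoTLS.Enabled true
ipfs config --json AutoTLS.AutoWSS true

# Controls UPnP/NAT-PMP port mapping; we tried both values
ipfs config --json Swarm.DisableNatPortMap false
```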
Nevertheless the following logs were seen:
Adding TCP proxy address for IPFS (plain TCP)...
Announcing addresses: ["/dns4/metro.proxy.rlwy.net/tcp/49766/tls/sni/metro.proxy.rlwy.net/ipfs/12D3KooWJqSHWJmQcTbjjDU5qdqy8RkUSdpNtaEVXq5PnHh2mG9z"]
Current AutoTLS configuration:
{
"AutoWSS": true,
"Enabled": true
}
Initializing daemon...
Kubo version: 0.33.2
Repo version: 16
System version: amd64/linux
Golang version: go1.23.6
PeerID: 12D3KooWJqSHWJmQcTbjjDU5qdqy8RkUSdpNtaEVXq5PnHh2mG9z
2025-03-17T22:10:47.626Z INFO autotls node/groups.go:178 appended AutoWSS listener: /ip4/0.0.0.0/tcp/4001/tls/sni/*.libp2p.direct/ws
2025-03-17T22:10:47.626Z INFO autotls node/groups.go:178 appended AutoWSS listener: /ip6/::/tcp/4001/tls/sni/*.libp2p.direct/ws
2025-03-17T22:10:47.629Z INFO autotls.maintenance certmagic@v0.21.6/maintain.go:63 started background certificate maintenance {"cache": "0xc000340f80"}
2025/03/17 22:10:47 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.
2025-03-17T22:10:47.648Z INFO autotls.start client/acme.go:398 no cert found for "*.k51qzi5uqu5djitj44bzt45qh33u8dlh7yf8ca50bvjr584n1z1is2dlyi4oet.libp2p.direct"
2025-03-17T22:10:47.648Z INFO autotls.start client/acme.go:423 waiting until libp2p reports event network.ReachabilityPublic
Swarm listening on 10.250.11.207:4001 (TCP+UDP)
Swarm listening on 127.0.0.1:4001 (TCP+UDP)
Swarm listening on [::1]:4001 (TCP+UDP)
Swarm listening on [fd12:8c86:bb2b::55:43d8:410]:4001 (TCP+UDP)
Run 'ipfs id' to inspect announced and discovered multiaddrs of this node.
RPC API server listening on /ip4/127.0.0.1/tcp/5001
WebUI: http://127.0.0.1:5001/webui
Gateway server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready
2025-03-17T22:11:04.809Z INFO autotls.start client/acme.go:433 libp2p reachability status changed to Private
2025-03-17T22:11:04.809Z INFO autotls.start client/acme.go:438 certificate will not be requested while libp2p reachability status is Private
These are logs from https://github.com/dClimate/jupyter-notebooks/blob/main/scripts/start.sh. Note: every new deployment on Railway creates a new IPFS node, so the peer ID in some of these logs may differ from the most recent one. Everything is fully reproducible with a few clicks, however.
Despite trying our best, enabling debug logs, and attempting to force AutoTLS to communicate on the Railway-provided TCP domain, reachability remained Private.
To confirm, we made a test script with js-libp2p against an existing node of ours (Bismuth) that isn't behind Docker & Railway, and that works:
import { createLibp2p } from "libp2p";
import { webSockets } from "@libp2p/websockets";
import { multiaddr } from "@multiformats/multiaddr";
import { noise } from "@chainsafe/libp2p-noise";
import { yamux } from "@chainsafe/libp2p-yamux";

// Old node address from when we tried to announce ws on the domain,
// before we knew it didn't have TLS
const jupyterNode =
  "/dns4/metro.proxy.rlwy.net/tcp/49766/tls/sni/metro.proxy.rlwy.net/ws/p2p/12D3KooWLXuf6mMsyAcHaX6gYzU74jsaBaTxpbzdjDMNKYj2Wj1A";
const bismuthNode =
  "/dns4/40-160-21-102.k51qzi5uqu5dhy22gw9bhnr0ouwxub8ct5awrlfm3l698aj0gekrexa4g0epau.libp2p.direct/tcp/4001/tls/ws/p2p/12D3KooWEaVCpKd2MgZeLugvwCWRSQAMYWdu6wNG6SySQsgox8k5";

async function testDial() {
  const node = await createLibp2p({
    transports: [webSockets()],
    // Use an empty listen array since we don't need inbound connections for this test
    addresses: { listen: [] },
    connectionEncrypters: [noise()],
    // A stream muxer is required to complete the libp2p connection upgrade
    streamMuxers: [yamux()],
  });
  try {
    await node.start();
    const ma = multiaddr(jupyterNode);
    const conn = await node.dial(ma);
    console.log("Connected:", conn.remotePeer.toString());
  } catch (err) {
    console.error("Connection failed:", err);
  }
}

testDial();
Latest update: one of our teammates noticed that when running locally, AutoTLS reports reachability as Private too (though they were unable to confirm whether that's because they are behind a NAT/router of their own).