The P2P networking stack library, modularized out of IPFS (https://libp2p.io/)
By Protocol Labs
Key concepts:
Connection: the class representing a single transport connection
Streams: multiplexed over a single Connection
multiselect: protocol negotiation, e.g. /multiselect/secio/1.0.0
multiaddress: e.g. /ip4/1.2.3.4/tcp/1234/p2p/<peerId>
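To make the multiaddress format concrete, here is a minimal sketch of parsing one into protocol/value pairs. This is a toy for illustration only, not the actual jvm-libp2p Multiaddr API, and it assumes every protocol carries a value (true for the example above, but not for all real protocols):

```kotlin
// Toy multiaddress parser (illustration only, NOT the real jvm-libp2p API).
data class Component(val protocol: String, val value: String)

fun parseMultiaddr(addr: String): List<Component> {
    val parts = addr.trim('/').split('/')
    require(parts.size % 2 == 0) { "expected /protocol/value pairs" }
    // Pair up consecutive elements: [ip4, 1.2.3.4, tcp, 1234] -> two components
    return parts.chunked(2).map { (proto, value) -> Component(proto, value) }
}
```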
The task sounds a bit boring…
… however there was a chance to use Kotlin
for the first time
Kotlin first impressions:
Java
> Go
> Kotlin
No reason to use Java when you can use Kotlin
Netty-centric architecture design
Assumptions: ByteBuf, threading considerations
Exposing Netty interfaces in core Libp2p classes (src):
import io.netty.channel.Channel
class Connection(val ch: Channel) { ... }
or
import io.netty.channel.ChannelHandler
class MyCoolProtocol : ChannelHandler { ... }
looks slightly counterintuitive.
However, this approach has its benefits:
abstracting over Netty doesn't make much sense
Ideal Libp2p API sample (like Go libp2p)
typealias Fut<T> = CompletableFuture<T> // to fit slide
Fut<Connection> conn = transport.dial(addr)
Fut<SecureConnection> secConn = conn
    .thenCompose { it.secure(SecurityHandler()) }
Fut<MplexedConnection> mplexConn = secConn
    .thenCompose { it.multiplex(MultiplexHandler()) }
Fut<Stream> stream = mplexConn
    .thenCompose { it.createStream() }
Fut<MyProtoCtrl> myProto = stream
    .thenCompose { it.setHandler(MyProto()) }
… but Netty has a purely callback-based architecture,
and our attempt to deliver a Future-based API fails right on the first line:
Fut<Connection> conn = transport.dial(addr)
conn.thenCompose { it.secure(SecurityHandler()) }
Why?
Netty pipeline property:
it disposes any event that isn't picked up by any ChannelHandler
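This disposal property can be modeled with a toy pipeline in plain Kotlin (a sketch, not Netty itself): an inbound event arriving before any handler is installed is simply dropped, and a handler added later never sees it.

```kotlin
// Toy model of the pipeline property above: events with no handler are disposed.
class ToyPipeline {
    private val handlers = mutableListOf<(String) -> Unit>()
    val disposed = mutableListOf<String>()

    fun addHandler(h: (String) -> Unit) { handlers += h }

    fun fireInbound(event: String) {
        if (handlers.isEmpty()) disposed += event  // nobody listens => dropped
        else handlers.forEach { it(event) }
    }
}
```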
Fut<Connection> conn = transport.dial(addr)
sleep(1000) // let's have some rest here
conn.thenCompose { it.secure(SecurityHandler()) }
After the connection is established, both parties send their initial security packets (e.g. secio).
While we are relaxing at (2),
before setting up the SecurityHandler,
Netty connects, receives the initial security packet, finds no pipeline handlers and just disposes it.
By (3)
the Connection
is already completed, but it's too late.
Thus the pipeline should be initialized with the SecurityHandler
prior to or inside dial(),
like this:
val initCallback = { SecurityHandler() }
transport.dial(addr, initCallback)
Looks less cool than the original code
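The pattern can be sketched with toy classes (hypothetical names, not the real jvm-libp2p API): because dial() installs the handler before any inbound traffic is delivered, the initial security packet cannot be dropped.

```kotlin
import java.util.concurrent.CompletableFuture

// Sketch of the "handlers supplied up front" pattern (toy classes).
class ToyConnection(val handler: (String) -> Unit)

class ToyTransport {
    fun dial(addr: String, initCallback: () -> (String) -> Unit): CompletableFuture<ToyConnection> {
        val conn = ToyConnection(initCallback())   // pipeline is complete first...
        conn.handler("secio-init from $addr")      // ...then the remote packet arrives
        return CompletableFuture.completedFuture(conn)
    }
}
```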
Fut<Stream> stream = connection.createStream()
sleep(1000) // let's have some rest here
Fut<MyProtoCtrl> myProto = stream
    .thenCompose { it.setHandler(MyProto()) }
A similar situation: while we are sleeping at (3),
a new Stream
is created and the remote peer's protocol sends an initial packet, which is disposed by the Netty pipeline due to the absence of the terminal handler MyProto.
By (5)
it's too late.
This way the whole stack of protocols (Netty handlers) should be supplied beforehand, either explicitly or via a chain of callbacks.
The pain: a callback-based API cannot be converted to a Future-based API in the general case
… similar to how a blocking API can't be converted to a non-blocking one
Netty has a plain pipeline for every transport connection and no 'sub-channel' notion. However, multiplexed libp2p Stream
s are in essence Channel
s themselves, just with a parent Channel
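The child-channel idea can be sketched as a simple demultiplexer (a toy, not the actual io.libp2p.etc.util.netty.mux implementation): frames carry a stream id, and the parent routes each frame to the matching child stream.

```kotlin
// Toy sketch of multiplexed child streams over one parent channel.
data class Frame(val streamId: Long, val data: String)

class ParentChannel {
    private val children = mutableMapOf<Long, MutableList<String>>()

    // Inbound data accumulated per child stream.
    fun childInbox(id: Long): List<String> = children[id] ?: emptyList()

    // The parent demultiplexes each frame to its child by stream id.
    fun onFrame(frame: Frame) {
        children.getOrPut(frame.streamId) { mutableListOf() } += frame.data
    }
}
```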
Github: io.libp2p.etc.util.netty.mux
Stream in the usual Netty way
ByteBuf scheme
Github: DeterministicFuzz.kt
Components:
ScheduledExecutors factory with a single time controller (sources)
System.currentTimeMillis()
Random with predefined seed
All scheduled tasks from all Executors
are executed on a single [main]
thread in a deterministic order
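The single-time-controller idea can be sketched as follows (an assumed, simplified design, not the actual DeterministicFuzz.kt sources): all tasks share one virtual clock and run on the caller's thread in timestamp order, so a run with a fixed Random seed replays identically.

```kotlin
// Minimal deterministic scheduler: one virtual clock, tasks run in time order.
class TimeController {
    var currentTime = 0L
        private set
    private val tasks = sortedMapOf<Long, MutableList<() -> Unit>>()

    fun schedule(delayMs: Long, task: () -> Unit) {
        tasks.getOrPut(currentTime + delayMs) { mutableListOf() } += task
    }

    // Advance virtual time, running every due task (including tasks
    // that due tasks schedule) in deterministic timestamp order.
    fun advanceTime(ms: Long) {
        val target = currentTime + ms
        while (tasks.isNotEmpty() && tasks.firstKey() <= target) {
            val t = tasks.firstKey()
            currentTime = t
            tasks.remove(t)!!.forEach { it() }
        }
        currentTime = target
    }
}
```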
Two modes of Executor.execute():
Mode 1:
public void execute(Runnable task) {
    task.run();
}
A call chain Originating Peer -> Intermediate Peer -> Target Peer recurses on the stack:
StackOverflow on large networks.
Mode 2:
public void execute(Runnable task) {
    schedule(task, 0, MILLISECONDS);
}
No StackOverflow
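The contrast between the two modes can be shown with a toy executor (a sketch, not the real sources): direct execution recurses once per relay hop and overflows on long chains, while draining a queue keeps the stack flat regardless of chain length.

```kotlin
// Toy trampolining executor: tasks go through a queue instead of the call stack.
class QueuedExecutor {
    private val queue = ArrayDeque<() -> Unit>()
    private var running = false

    fun execute(task: () -> Unit) {
        queue += task
        if (running) return       // already draining: just enqueue and unwind
        running = true
        while (queue.isNotEmpty()) queue.removeFirst().invoke()
        running = false
    }
}

// Relay a message along a chain of `hops` peers.
fun relay(executor: QueuedExecutor, hops: Int, onDelivered: () -> Unit) {
    if (hops == 0) onDelivered()
    else executor.execute { relay(executor, hops - 1, onDelivered) }
}
```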
Gains:
The project was completed and the timeline was met (2.5 months).
Again the formula was proven:
Time = T * 2 + 2 weeks
where T
is the estimated time in which you are 100% confident after a thorough investigation of the domain area
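The formula above as a tongue-in-cheek Kotlin one-liner (assuming T is measured in weeks):

```kotlin
// Empirical estimation formula from the slide: real time = T * 2 + 2 weeks.
fun realTimeWeeks(estimatedWeeks: Double): Double = estimatedWeeks * 2 + 2
```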