In CouchDB, mem3_sync is the mechanism that keeps copies of the same shard range synchronised across cluster nodes. It does so by implementing the CouchDB replication protocol over Erlang distributed messages. In slightly more detail, it reads the by-seq index of a source shard and copies all live content into the target shard.
During cluster management operations, such as adding nodes, it can become necessary to move an existing shard from one node (A) to another (B). Node B has no copy of that shard yet, because it has only just joined the cluster and no database shard map points to it.
The easiest way to move a shard to node B is to edit the shard map of the database in question and change the entries for a particular shard range or ranges to point to node B instead of node A. mem3 recognises the change in the shard map, notices the missing shard on node B, and initiates mem3_sync to fill up the shard until it matches the copies on the other cluster nodes.
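To make that concrete, here is a minimal sketch of such a shard map edit over HTTP, using only Python's standard library. The endpoint follows the node-local `_dbs` database described in the CouchDB shard management documentation; the database name, node names, credentials and range are placeholders, and a cautious operator would add node B first, wait for the new copy to build, and only then remove node A.

```python
# Hedged sketch: repoint the 00000000-7fffffff range of database "db1"
# from node A to node B by editing its shard map document.
# URL, credentials, node names and range are placeholders for illustration.
import base64
import json
import urllib.request

DBS_URL = "http://localhost:5984/_node/_local/_dbs/db1"   # CouchDB 3.x-style endpoint
AUTH = base64.b64encode(b"admin:password").decode()
NODE_A = "couchdb@node-a.example.com"
NODE_B = "couchdb@node-b.example.com"
RANGE = "00000000-7fffffff"

def call(method, url, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(url, data=data, method=method, headers={
        "Authorization": "Basic " + AUTH,
        "Content-Type": "application/json",
        "Accept": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

shard_map = call("GET", DBS_URL)

# by_node maps node name -> ranges, by_range maps range -> node names.
shard_map["by_node"].setdefault(NODE_B, []).append(RANGE)
shard_map["by_node"][NODE_A].remove(RANGE)
shard_map["by_range"][RANGE] = [
    NODE_B if node == NODE_A else node for node in shard_map["by_range"][RANGE]
]

# The changelog documents the history of edits to the shard map.
shard_map["changelog"].append(["add", RANGE, NODE_B])
shard_map["changelog"].append(["delete", RANGE, NODE_A])

print(call("PUT", DBS_URL, shard_map))   # e.g. {'ok': True, 'id': 'db1', 'rev': '...'}
```

Once the PUT succeeds, mem3 picks up the new shard map and starts mem3_sync for the shard that is now missing on node B.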
Unfortunately, implementing CouchDB replication means that data transfer does not happen at network speed: following the protocol correctly requires multiple request/response cycles for each batch of operations.
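For intuition, here is a rough sketch of those per-batch roundtrips, expressed over HTTP for readability and reusing the `call` helper from the sketch above. mem3_sync performs the equivalent steps over Erlang distributed messages rather than HTTP, so treat the endpoints as an analogy, not the actual internals.

```python
# Hedged sketch of one replication batch; each numbered step is at least
# one separate network roundtrip in the protocol.
import json
import urllib.parse

def replicate_batch(source, target, since, limit=100):
    # 1. Read the next batch from the source's by-seq index (_changes).
    changes = call("GET", f"{source}/_changes?style=all_docs"
                          f"&since={urllib.parse.quote(str(since))}&limit={limit}")

    # 2. Ask the target which of these revisions it does not have yet.
    revs = {row["id"]: [c["rev"] for c in row["changes"]]
            for row in changes["results"]}
    missing = call("POST", f"{target}/_revs_diff", revs) if revs else {}

    # 3. Fetch the missing revisions (with their revision trees) from the source.
    docs = []
    for doc_id, info in missing.items():
        open_revs = urllib.parse.quote(json.dumps(info["missing"]))
        result = call("GET", f"{source}/{urllib.parse.quote(doc_id, safe='')}"
                             f"?revs=true&open_revs={open_revs}")
        docs.extend(r["ok"] for r in result if "ok" in r)

    # 4. Write them to the target verbatim, preserving revision ids.
    if docs:
        call("POST", f"{target}/_bulk_docs", {"docs": docs, "new_edits": False})

    # 5. Record a checkpoint (a real replicator writes matching _local
    #    documents on both sides) so the next run can resume from here.
    return changes["last_seq"]
```

Steps 2 and 3 in particular can turn into many small roundtrips when a batch touches many documents, which is exactly the overhead discussed next.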
However, when replicating into an empty shard, none of these extra roundtrips are strictly necessary. They become necessary later, when a shard might be slightly behind the other nodes and needs to be caught up correctly, but for the initial sync mem3_sync does more work than strictly needed.
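The shortcut this suggests could look roughly like the following, again only as an illustration of the reasoning (building on the sketches above) rather than the actual mem3 code: if the target shard is verifiably empty, every revision in a batch is missing by definition, so the revs-diff roundtrip can be skipped.

```python
# Hedged illustration: for an empty target, skip the revs-diff roundtrip,
# because everything the source has is missing on the target anyway.
def target_is_empty(target):
    # One check up front: a freshly created shard has no live or deleted docs.
    info = call("GET", target)
    return info["doc_count"] == 0 and info["doc_del_count"] == 0

def missing_revs(revs, target, initial_sync):
    if initial_sync:
        # Initial sync into a fresh shard: no roundtrip needed.
        return {doc_id: {"missing": rev_list} for doc_id, rev_list in revs.items()}
    # Steady-state sync: ask the target, as the protocol normally requires.
    return call("POST", f"{target}/_revs_diff", revs)
```

In steady state the target can drift between a check like this and the copy, which is why the full protocol keeps the revs-diff step; for a brand-new shard, though, those extra roundtrips buy nothing.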