# Intro
Running a Bee node doesn't require a computer science degree, but a basic understanding of several key DevOps tools and concepts is essential. While the breadth of this guide might seem intimidating at first, you only need a basic understanding of each concept to get started, so don't worry! If at any point you need additional help, feel free to pop into the [Swarm node-operators Discord channel](https://discord.com/channels/799027393297514537/811553590170353685).
## Who is this guide for?
This guide is a comprehensive introduction to essential DevOps (Developer Operations) tools and concepts, specifically aimed at beginner Bee node operators.
However, it's not only for Bee node operators — the tools and techniques covered here are used all across the Web3 industry, making this guide a great resource for any beginner node operator in the ecosystem.
## How should this guide be used?
If you're a total DevOps beginner, you will want to take your time going through this guide, and make sure to try each of the step-by-step guides one by one to get essential hands-on experience. Some of these concepts may seem abstract from their descriptions alone, but they will become more concrete as you follow along with the step-by-step guides.
This guide also serves as an excellent companion to the [Bee installation instructions](https://docs.ethswarm.org/docs/bee/installation/install) in the official documentation website. It provides all the background information needed to understand each step of the instructions and to help you decide which options to choose.
# Concepts Index
1. Linux
1a. Installation
1b. Navigation
1c. Package Management
2. SSH
3. Docker
4. JSON RPC
5. 3rd party APIs
6. Key Management and Backups
7. Logging (systemctl, systemd)
8. Bee configuration (methods, options)
9. Networking
10. Health and Monitoring
# Step by Step Guides Index
1. VPS Setup
2. Docker Setup
3. Backup Instructions
4. Updating
5. Troubleshooting
# Concepts
## **Linux**
**What is Linux and why do we recommend it for running your Bee node?**
Linux is an open-source operating system, similar to macOS and Windows but with some key differences. It comes from the Unix family of operating systems which includes macOS but not Windows. Linux provides the essential software foundation for your computer. However, what sets Linux apart is its open and highly customizable nature.
Unlike macOS and Windows, Linux comes in many flavors or distributions, each offering unique features and user interfaces. Linux is known for its stability, security, and performance, and is often the platform of choice for servers and development environments due to its reliability and flexibility.
Within the Swarm community Linux is the go-to OS for running Bee nodes and we recommend you to use it as well. In particular, if you're a newcomer to Linux we recommend using the Ubuntu distribution. Ubuntu belongs to the [Debian](https://www.debian.org/) family of distributions and it is one of the most widely used Linux distributions and has excellent community support and learning resources.
**Compatibility with macOS:**
For macOS users, it's worth noting that many of the concepts covered in this guide apply to both macOS and Linux. While there may be slight command variations, the fundamental principles remain consistent, so you can confidently follow the guide, making minor adjustments as needed for your macOS environment. macOS's default Z shell (zsh) is similar to Ubuntu's bash and supports most of the same commands. However, certain tools aren't available out of the box; you can use [Homebrew](https://brew.sh/) to install them and round out your Mac's command-line toolkit.
If you require further assistance or encounter macOS-specific challenges, the Swarm [node-operators Discord channel](https://discord.com/channels/799027393297514537/811553590170353685) is available to provide you with support and guidance.
### Installation
Running a Bee node on Ubuntu can be approached in several ways, and which one you choose will depend on your needs.
Below we review several approaches for running Bee nodes, the pros and cons of each, and the users for whom each is suitable:
#### Ubuntu Desktop on a Dedicated Machine
- **Ideal for:** Those who have a spare computer and a steady internet connection.
- **Setup:** Perform a full installation of Ubuntu Desktop. This is relatively straightforward if you have a spare computer and a USB drive.
- **Instructions:** [Ubuntu Desktop Install Instructions](https://ubuntu.com/tutorials/install-ubuntu-desktop#1-overview)
- **Pros:** Full control over the environment, dedicated resources.
- **Cons:** Requires a dedicated physical machine.
#### Ubuntu on WSL (Windows Subsystem for Linux)
- **Ideal for:** Windows users who want to run Ubuntu alongside Windows without dual-booting.
- **Setup:** Install Ubuntu within Windows using WSL, which enables access to the Ubuntu terminal directly from Windows. This setup does not provide the Ubuntu Desktop GUI, but it's sufficient for Bee node operation.
- **Instructions:** [Set-up Ubuntu in WSL on Windows](https://learn.microsoft.com/en-us/windows/wsl/install)
- **Pros:** Convenient for Windows users, no need for additional hardware.
- **Cons:** Potentially less performance than a dedicated Ubuntu system, may face compatibility issues from using WSL rather than a dedicated Ubuntu setup.
#### Ubuntu on a portable USB Drive
- **Ideal for:** Users who have spare computers with various hardware configurations, likely running outdated operating systems, and wish to utilize them with a familiar UI.
- **Setup:** Install Portable Ubuntu on a USB drive. This can be done using [Rufus](https://rufus.ie/).
- **Instructions:** [Create a bootable USB stick on macOS](https://ubuntu.com/tutorials/create-a-usb-stick-on-macos)
- **Pros:** Portability - use your Ubuntu environment on any computer with a USB port.
- **Cons:** Limited performance compared to a full installation, dependency on the USB drive speed.
#### Ubuntu on a VPS (Virtual Private Server)
:::info
This is our recommended method for those looking to operate full Bee nodes in order to participate in staking. We cover getting started with a VPS in more detail in a dedicated section later in this guide.
:::
- **Ideal for:** Users seeking a remote, scalable, and manageable solution.
- **Setup:** Rent a VPS and run Ubuntu on it. This approach is advantageous as it offloads the hardware management and ensures uptime.
- **Instructions:**
- [Set-up Ubuntu on a Digital Ocean Droplet](https://www.digitalocean.com/community/tutorials/how-to-set-up-an-ubuntu-20-04-server-on-a-digitalocean-droplet)
- [Set-up Ubuntu on a Contabo VPS](https://webshanks.com/contabo-vps-setup/) (More cost-effective option)
- **Pros:** No physical hardware management, easy to scale, reliable uptime.
- **Cons:** Ongoing costs, requires comfort with remote server management.
**Note on SSH:** When setting up Ubuntu on a VPS, you'll likely use SSH for remote access. If you're new to SSH, refer to the relevant section of this guide for guidance.
### Terminal Navigation
The terminal (also referred to as shell or command prompt - these terms are technically distinct, but it is common for them to be used interchangeably), provides you with a [CLI](https://en.wikipedia.org/wiki/Command-line_interface) which gives you direct and powerful access to your computer. The [bash shell](https://en.wikipedia.org/wiki/Bash_(Unix_shell)) is the default shell on Ubuntu, but the commands covered in this guide work in many different shells.
Terminal navigation involves using command-line instructions to navigate through the filesystem.
#### Key Concepts:
1. **Filesystem Structure:** Unix-like systems have a hierarchical directory structure, starting with the root directory (`/`). From there, directories branch out, containing subdirectories and files.
2. **Current Working Directory:** When you open a terminal, you are in a directory, known as the current working directory. You can perform operations relative to this directory.
3. **Pathnames:** There are two types of pathnames:
- **Absolute Pathnames:** Start from the root directory (e.g., `/usr/local/bin`)
- **Relative Pathnames:** Relative to the current directory (e.g., `./Documents`)
4. **Shell Scripting (Advanced):** Advanced shell users may learn how to write shell scripts which can be used to automate common tasks. Shell scripting is not covered in this guide but you can read more about it [here](https://www.freecodecamp.org/news/bash-scripting-tutorial-linux-shell-script-and-command-line-for-beginners/). One thing you should be aware of is that as shell scripting can be very powerful, ***you should never run a shell script you don't understand from someone you don't trust 100%***. If you do need to run a shell script, make sure to get it from the official source (such as the [shell script for installing Bee](https://github.com/ethersphere/bee/blob/master/install.sh) from Swarm's official GitHub organization.)
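The pathname concepts above are easiest to absorb by trying them. The sketch below builds a small throwaway directory tree (the directory names are invented for the example) and navigates it with both absolute and relative paths:

```shell
base=$(mktemp -d)              # a temporary directory we can safely play in
mkdir -p "$base/projects/bee"  # build a small example tree

cd "$base/projects/bee"        # absolute pathname: starts from the root
pwd                            # prints .../projects/bee

cd ..                          # relative pathname: up one level
pwd                            # prints .../projects

cd ~                           # back to the home directory
rm -r "$base"                  # clean up the example tree
```

Running `pwd` after each `cd` is a good habit while you're learning, since it confirms exactly where a relative path has taken you.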
#### Bash commands cheat sheet:
- `pwd`: Print Working Directory. Shows your current directory.
- `ls`: List. Shows files and directories in the current directory.
- `ls -l`: Detailed list, showing permissions, owner, size, and modification date.
- `ls -a`: Shows all files, including hidden ones (those starting with a dot).
- `cd`: Change Directory. Moves to another directory.
- `cd ~`: Move to the home directory.
- `cd ..`: Move up one directory.
- `cd /`: Move to the root directory.
- `cd -`: Move to the last directory you were in.
- `mkdir`: Make Directory. Creates a new directory.
- `rmdir`: Remove Directory. Deletes an empty directory.
- `touch`: Creates a new empty file or updates the timestamp of an existing file.
- `cp`: Copy. Copies files or directories.
- `cp file1 file2`: Copies file1 to file2.
- `cp -r dir1 dir2`: Recursively copy, for directories.
- `mv`: Move. Moves files or directories, or renames them.
- `rm`: Remove. Deletes files or directories.
- `rm -r`: Recursively delete, for directories.
- `find`: Searches for files and directories.
- `locate`: Quickly find files (uses a database updated by `updatedb`).
:::info
Note that the options (also sometimes called flags) appearing after the main commands can often be combined. For example, the `-l` and `-a` flags used with the `ls` command can be combined as `-la` in order to print details about all files including hidden ones:
```bash
ls -la
```
:::
:::danger
The `-r` flag seen in the above commands stands for "recursive"; it executes a command through all levels of a directory structure. This is required when running commands on directories (folders), which can contain multiple levels of nested folders.
For example, when copying a folder with the `cp` command, you will need to use `cp -r target_folder destination_folder` in order to copy all the nested folders within the target folder.
The flag can however be dangerous with destructive commands like `rm`: a single mistyped recursive delete could wipe out your entire filesystem.
:::
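A safe way to practice the commands in the cheat sheet is inside a throwaway directory, so nothing important can be deleted by accident. A minimal session (all file and directory names are made up for the example) might look like this:

```shell
practice=$(mktemp -d) && cd "$practice"  # work in a temporary directory

mkdir notes                          # create a directory
touch notes/todo.txt                 # create an empty file
cp notes/todo.txt notes/backup.txt   # copy a file
cp -r notes notes-copy               # copy a whole directory (recursive)
mv notes/backup.txt notes/done.txt   # rename a file with mv
ls -la notes                         # detailed list, including hidden files
rm -r notes notes-copy               # recursively delete the practice dirs

cd ~ && rmdir "$practice"            # remove the now-empty temp directory
```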
#### Piping
Piping in Bash allows you to send the output of one command as input to another, using the `|` symbol. It's useful for chaining commands to perform complex tasks.
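As a simple illustration, the output of one command can be piped into `wc -l` to count its lines, or into `grep` to filter them. Here `printf` stands in for any command that produces lines of output:

```shell
# Count how many lines a command produces
printf 'alpha\nbeta\ngamma\n' | wc -l                  # prints 3

# Filter the output of one command with another
printf 'bee.yaml\nnotes.txt\nbee.log\n' | grep 'bee'   # prints the two "bee" lines
```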
##### Piping with `jq`:
`jq` is a command-line JSON processor. You might use it with piping to filter or transform JSON output from one command before processing it further. If used without any options, it simply pretty-prints JSON to make it easier to read, a feature we use widely in concert with the Bee API.
For example the output from the `topology` endpoint is particularly difficult to read without `jq`:
```bash
curl http://localhost:1635/topology
```
Without newlines or indentations the output is difficult to parse visually:
:::info
In each of these examples below we have truncated the output as the complete output is too large to display here.
:::
```json
{"baseAddr":"da7e5cc3ed9a46b6e7491d3bf738535d98112641380cbed2e9ddfe4cf4fc01c4","population":20559,"connected":175,"timestamp":"2024-02-07T17:13:51.960599746Z","nnLowWatermark":3,"depth":10,"reachability":"Public","networkAvailability":"Available","bins":{"bin_0":{"population":11247,"connected":20,"disconnectedPeers":[{"address":"77696ffe87fa2592355b1ba5b2d93b5f18b118427cb48c8e21c7f4f5088f8d49","metrics":{"lastSeenTimestamp":1707316706,"sessionConnectionRetry":5,"connectionTotalDuration":2418,"sessionConnectionDuration":1106,"sessionConnectionDirection":"outbound","latencyEWMA":19,"reachability":"Public","healthy":true}},{"address":"501267152efe6276947d2646be29cd7e0b1a488cb3eb12ca8a319aaf18fb7358","metrics":{"lastSeenTimestamp":1707323255,"sessionConnectionRetry":8,"connectionTotalDuration":55895,"sessionConnectionDuration":311,"sessionConnectionDirection":"outbound","latencyEWMA":518,"reachability":"Public","healthy":false}},{"address":"719731d3fec45195b76d3245840b1cf36ffa3080acceb3d232eb14ea4b80f6fa","metrics":{"lastSeenTimestamp":1707315006,"sessionConnectionRetry":1,"connectionTotalDuration":20326,"sessionConnectionDuration":68,"sessionConnectionDirection":"outbound","latencyEWMA":0,"reachability":"Public","healthy":false}},{"address":"67d546d6d0c9039268d5ddb234cedf59424d8d4f2c65a5b0a1f2f4dc39a1a7d9","metrics":{"lastSeenTimestamp":1707320453,"sessionConnectionRetry":2,"connectionTotalDuration":2025,"sessionConnectionDuration":397,"sessionConnectionDirection":"outbound","latencyEWMA":50,"reachability":"Public","healthy":false}},{"address":"7833dcfe9d0e6accfba2a06e3e21cac57a510d5da644799dab3020d17e5ecdf8","metrics":{"lastSeenTimestamp":1707318987,"sessionConnectionRetry":23,"connectionTotalDuration":35258,"sessionConnectionDuration":126,"sessionConnectionDirection":"outbound","latencyEWMA":21,"reachability":"Public","healthy":true}}...
```
But if we pipe the output of the `/topology` endpoint into `jq`:
```bash
curl http://localhost:1635/topology | jq
```
Then the results become far more readable:
```json
{
"baseAddr": "da7e5cc3ed9a46b6e7491d3bf738535d98112641380cbed2e9ddfe4cf4fc01c4",
"population": 20571,
"connected": 174,
"timestamp": "2024-02-07T17:20:34.984885908Z",
"nnLowWatermark": 3,
"depth": 10,
"reachability": "Public",
"networkAvailability": "Available",
"bins": {
"bin_0": {
"population": 11250,
"connected": 20,
"disconnectedPeers": [
{
"address": "77696ffe87fa2592355b1ba5b2d93b5f18b118427cb48c8e21c7f4f5088f8d49",
"metrics": {
"lastSeenTimestamp": 1707316706,
"sessionConnectionRetry": 5,
"connectionTotalDuration": 2418,
"sessionConnectionDuration": 1106,
"sessionConnectionDirection": "outbound",
"latencyEWMA": 19,
"reachability": "Public",
"healthy": true
}
},
{
"address": "501267152efe6276947d2646be29cd7e0b1a488cb3eb12ca8a319aaf18fb7358",
"metrics": {
"lastSeenTimestamp": 1707323255,
"sessionConnectionRetry": 8,
"connectionTotalDuration": 55895,
"sessionConnectionDuration": 311,
"sessionConnectionDirection": "outbound",
"latencyEWMA": 518,
"reachability": "Public",
"healthy": false
}
}...
```
`jq` is powerful for parsing, filtering, and manipulating JSON data directly in the command line, making it invaluable for working with JSON-formatted API responses or configuration files, and is an excellent tool to have in your devops toolkit.
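Beyond pretty-printing, `jq` can extract individual fields with a filter expression. For example, assuming `jq` is installed, pulling just the `depth` and `connected` values out of a `/topology` response (here a small inline sample stands in for the full output) looks like this:

```shell
# A small sample of the /topology response, inlined for illustration
sample='{"population":20571,"connected":174,"depth":10}'

echo "$sample" | jq '.depth'              # prints 10
echo "$sample" | jq '{depth, connected}'  # prints just those two fields
```

With a live node you would pipe `curl http://localhost:1635/topology` into the same filters instead of `echo`.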
##### Using piping to save output:
Piping isn't just for passing data between commands—it also allows you to capture output and save it directly to a file. This can be especially useful when dealing with API responses that you might need to analyze, share, or use as input for another process.
We can easily write formatted results to a file using the `>` operator along with `jq`:
```bash
curl http://localhost:1635/topology | jq '.' > output.json
```
And we can check the contents of the file with `cat` to confirm it's been saved:
```bash
cat output.json
```
And here we see the last few lines of the file:
```bash
...
"bin_28": {
"population": 0,
"connected": 0,
"disconnectedPeers": null,
"connectedPeers": null
},
"bin_29": {
"population": 0,
"connected": 0,
"disconnectedPeers": null,
"connectedPeers": null
},
"bin_30": {
"population": 0,
"connected": 0,
"disconnectedPeers": null,
"connectedPeers": null
},
"bin_31": {
"population": 0,
"connected": 0,
"disconnectedPeers": null,
"connectedPeers": null
}
},
"lightNodes": {
"population": 0,
"connected": 0,
"disconnectedPeers": null,
"connectedPeers": null
}
}
```
##### Using nohup to save output & send to the background
"nohup" is a command used in Linux to execute a process that persists even after the user logs out or the terminal session ends. The term "nohup" stands for "no hang up," originating from the early days of Unix when users would physically disconnect from the system via a modem connection, which could terminate running processes.
When you run a command or a script with "nohup," it ensures that the process continues running in the background, detached from the current shell session. This is particularly useful for long-running tasks, where you don't want the process to terminate if the terminal session is closed or the connection is lost.
The basic syntax for using "nohup" is:
```bash
nohup command [arguments] &
```
Replace `command` with the name of the command or script you want to run, along with any necessary arguments. The trailing ampersand (`&`) tells the shell to run the command in the background. The shell then prints a job number and process ID, which you can later use to stop the process with the `kill` command, e.g.:
```bash
nohup journalctl --lines=100 --follow --unit bee &
[1] 12345
kill 12345
```
By default, `nohup` redirects both standard output (stdout) and standard error (stderr) to a file named `nohup.out` in the current directory, and you can read the process's output from that file. You can override this behavior by explicitly redirecting output using the standard shell redirection operators (`>` for stdout, `2>` for stderr).
Since "nohup" runs processes in the background, you regain control of the terminal immediately after executing the command. You can continue working in the terminal or log out without affecting the running process.
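A harmless way to see `nohup` in action is with a short-lived command. This sketch runs a command in the background with its output explicitly redirected to a log file (when attached to a terminal without redirection, `nohup` would capture the output in `nohup.out` instead), waits for it to finish, and then inspects what was captured:

```shell
cd "$(mktemp -d)"              # work in a throwaway directory

# Run a command with nohup, explicitly redirecting stdout and stderr
nohup sh -c 'echo "still running after logout"' > bee.log 2>&1 &
wait                           # wait for the background job to finish

grep "still running" bee.log   # prints the captured line
```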
### Logging and Service Management with `systemd`, systemctl, and journalctl
For Bee node operators running their nodes on a Linux system, understanding how to manage your Bee service and check its logs is crucial. Linux's `systemd`, `systemctl`, and `journalctl` provide powerful tools for this purpose. This section will introduce these tools and explain how to use them to manage your Bee node effectively.
### systemd: An Introduction
`systemd` is a system and service manager for Linux operating systems that provides more than just a way to start and stop services. It is used to manage the entire system startup and supports system logging, service dependency, and automatic service restarts among other features.
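To make this concrete, a `systemd` service is described by a unit file. Below is a simplified sketch of what a Bee unit file might contain; the exact file shipped with the official package may differ, so treat this as an illustration of the unit-file format rather than the real thing:

```ini
# Illustrative sketch of a bee.service unit file (not the packaged file)
[Unit]
Description=Bee node
After=network.target

[Service]
ExecStart=/usr/bin/bee start --config /etc/bee/bee.yaml
# Automatic restarts are one of the systemd features mentioned above
Restart=on-failure

[Install]
WantedBy=multi-user.target
```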
**Managing Bee Node with `systemctl`:**
`systemctl` is the command-line interface for interacting with `systemd`. Here's how you can use it to manage your Bee node service:
#### Starting the Bee Service
To start your Bee node as a service, use the command:
```bash
sudo systemctl start bee
```
#### Enabling Bee Service on Boot
To ensure your Bee node starts automatically on system boot, enable it with:
```bash
sudo systemctl enable bee
```
#### Checking Service Status
To check the status of your Bee node service, including whether it's active and running, use:
```bash
sudo systemctl status bee
```
:::info
Use "q" to exit from the status review screen.
:::
#### Stopping the Service
If you need to stop your Bee node for any reason, you can do so with:
```bash
sudo systemctl stop bee
```
#### Restarting the Service
To restart your Bee node service, which would be a good idea to do after making configuration changes, use:
```bash
sudo systemctl restart bee
```
### Viewing Logs with journalctl
`journalctl` is a utility for querying and displaying logs from `journald`, `systemd`'s logging service. It allows you to view detailed logs for troubleshooting and monitoring your Bee node's activities.
**Viewing Bee Node Logs:**
To view the logs for your Bee node, run:
```bash
sudo journalctl -u bee
```
This command displays the logs generated by your Bee node service.
**Filtering Logs by Time:**
You can filter logs within a certain time frame, such as "since today", by adding the `--since` option:
```bash
sudo journalctl -u bee --since today
```
:::info
See the "FILTERING OPTIONS" from [the official journalctl docs](https://man7.org/linux/man-pages/man1/journalctl.1.html) for more details about using the `--since` option.
:::
**Following Logs in Real Time:**
To follow the logs as new entries are added, similar to `tail -f`, use the `-f` flag:
```bash
sudo journalctl -u bee -f
```
### Practical Tips
- **Customizing Log Output**: `journalctl` offers various options to customize the output, such as `-o verbose` for more detailed logs or `-o json` for JSON-formatted logs.
- **Maintaining System Security**: Always use `sudo` with care, especially when operating in production environments. Limit access to your Bee node's logs and configuration to authorized users only.
- **Automating Monitoring**: Consider setting up monitoring scripts that use `journalctl` to alert you of critical errors or unusual activities in your Bee node logs.
By mastering systemd, systemctl, and journalctl, you'll have robust tools at your disposal for managing and troubleshooting your Bee node. Regularly checking your Bee node's logs can help you stay ahead of issues and ensure your node operates smoothly within the Swarm network.
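The monitoring idea above boils down to piping log lines through a filter. The same pattern works with `journalctl -u bee` as the source; here a small sample log file stands in for it so the sketch is self-contained:

```shell
cd "$(mktemp -d)"              # work in a throwaway directory

# Stand-in for `journalctl -u bee` output
printf 'INFO connected to peer\nERROR stamp expired\nINFO chunk stored\n' > sample.log

# Count error lines; with a live node you would pipe journalctl instead:
#   journalctl -u bee --since today | grep -c "ERROR"
grep -c "ERROR" sample.log     # prints 1
```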
### Vim
Vim, or "Vi Improved," is a powerful text editor available on Unix systems like Ubuntu. It's known for its efficiency and flexibility, though it has a bit of a learning curve. Vim differs from the GUI-based text editors you may be familiar with in that it is available directly from the command line. It is an indispensable tool for editing text files in environments without access to GUI text editors, such as when connecting to a remote VPS through SSH. We will primarily use it for editing config files (such as Bee's `bee.yaml`) from the command line.
Vim can be launched with the `vim` command:
```bash
sudo vim config.yaml
```
As navigation in Vim is done entirely from the keyboard (without a mouse), it takes a bit of time to get used to if you're coming from a GUI-based text editor. You can review [this interactive tutorial](https://www.openvim.com/) to quickly get up to speed with Vim. You can also use [this handy Vim cheat sheet](https://vimsheet.com/) or [this one](https://vim.rtorr.com/) as references to help remember all the Vim commands.
#### Installation
Vim should be pre-installed on Ubuntu, but you can install or update it if needed by running:
```bash
sudo apt update
sudo apt install vim
```
To open Vim, simply type `vim` in your terminal. If you want to open or create a file with Vim, use:
```bash
vim filename
```
#### Vim in Context of Running a Bee Node
We will use Vim whenever we need to modify our `bee.yaml` configuration file. The following command will open up the file in your terminal window:
```bash
sudo vim /etc/bee/bee.yaml
```
Vim has two main modes, `command` and `insert`. Vim starts in command mode by default, where you can navigate through the file using the arrow keys or the letters `h j k l`. To edit the config file, simply press the `i` key to enter insert mode and type your changes. When you're done, press the `Escape` key to return to command mode. From there, you can save your changes and exit Vim with the `:wq` command, or exit without saving with the `:q!` command. And that's about as much of Vim as you need to know for working with Bee! There's a lot more you could learn about Vim, but it's not required for working with Bee.
### Package Management
Managing software on Ubuntu is handled by `apt-get`, a command-line software management tool. It handles the installation, updating, and removal of software packages. There are other package managers commonly used with different Linux distributions, such as `yum` and `rpm`, however in this guide we stick to `apt-get` as we are only covering Ubuntu.
Note that packages for Ubuntu use the `.deb` filename extension, short for [Debian](https://www.debian.org/) (the family of Linux distributions of which Ubuntu is a member).
#### Using `apt-get` for Bee Node Management
- **Installing Packages:** `sudo apt-get install [package_name]` installs new packages. For example, `sudo apt-get install bee` would install the Bee node software on your system.
- **Updating Packages:** To update your software, use `sudo apt-get update` to refresh your package list and `sudo apt-get upgrade` to install the updates.
- **Removing Packages:**
- The `sudo apt-get remove [package_name]` command removes a package but keeps configuration files and certain data intact. This is useful if you plan to reinstall the package later and want to retain your settings.
- The `sudo apt-get purge [package_name]` command, on the other hand, removes everything related to the package, including configuration files and data.
#### Important Note on `purge`
Using `purge` with Bee can lead to the loss of key files and settings. It's crucial to understand that if you use `sudo apt-get purge bee`, it will permanently delete your Bee node's configuration files and keys. These keys are essential for accessing your node and its data. If you haven't backed them up, this data will be irrecoverable. Always ensure you have a backup of your keys and configurations before using the `purge` command.
#### Package Versions and Sources
When following [the instructions for installing Bee](https://docs.ethswarm.org/docs/bee/installation/install#1-install-bee) using `apt-get` in the official Bee documentation, you will come across some commands which may look unfamiliar to you. To understand the installation instructions, several key concepts need to be understood: Linux repositories, GPG keys, and how they are used in package management with `apt-get` in Ubuntu. Here's a breakdown of these concepts:
### Linux Repositories
- A repository in Linux is a storage location from which your system retrieves and installs software packages. Each repository contains a collection of software packages along with information about these packages like their version and dependencies.
- When you install a package using `apt-get`, Ubuntu searches the repositories listed in its sources. By default, Ubuntu is configured with its own repositories, but you can add third-party repositories for access to additional software. Because the `bee` package is not part of any default repositories, you'll need to include the official repository maintained by the Swarm team for the `bee` package when installing it.
#### Adding a Repository:
- The provided command `echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ethersphere-apt-keyring.gpg] https://repo.ethswarm.org/apt * *" | sudo tee /etc/apt/sources.list.d/ethersphere.list > /dev/null` is adding the official Swarm repository to your list of sources.
- Here, `deb` indicates a Debian package repository, and `$(dpkg --print-architecture)` automatically inserts your system's architecture (like amd64, i386, etc.).
- The `signed-by` portion points to the GPG key that will be used to verify the authenticity of the packages in this repository.
- The URL `https://repo.ethswarm.org/apt` is the location of the repository.
- The command writes this line to a file named `ethersphere.list` under `/etc/apt/sources.list.d/`, which is the directory where Ubuntu looks for additional sources.
#### GPG Keys and Their Importance
- GPG (GNU Privacy Guard) keys are used to sign and verify software packages. This ensures that the packages you download and install are exactly as provided by the source and have not been tampered with.
- By importing the GPG key (`curl -fsSL https://repo.ethswarm.org/apt/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/ethersphere-apt-keyring.gpg`), you're adding the public key of the repository maintainer (Swarm) to your system. This is used to verify the packages you download from the repository are really from the Swarm organization.
- The `curl` command fetches the GPG key from the given URL. `gpg --dearmor` processes the key, and the output is saved in `/usr/share/keyrings/`, a standard location for such keys.
#### Updating Package Lists and Installing Software
- `sudo apt-get update` refreshes the list of available packages and their versions, but it doesn't install or upgrade any packages. It's essential to run this after adding a new repository to make sure that `apt-get` knows about the new packages available.
- `sudo apt-get install bee` then installs the Bee package from the newly added repository.
:::danger
Always use packages and repositories from trusted sources to minimize security risks.
:::
### File Ownership and `chmod`
The `chmod` (change mode) command in Unix/Linux is used to change the access permissions of file system objects (files and directories). Here's a quick guide on how to use it.
#### Understanding Permissions
- **User Types:**
- **User (u):** The owner of the file.
- **Group (g):** Other users who are in the file's group.
- **Others (o):** Everyone else.
- **Permission Types:**
- **Read (r):** Permission to read the file.
- **Write (w):** Permission to modify the file.
- **Execute (x):** Permission to execute the file (or enter the directory).
#### Viewing Permissions
- Use `ls -l` to view the permissions of files and directories.
- Example output: `-rw-r--r-- 1 user group 0 Jan 1 00:00 example.txt`
- Here, `-rw-r--r--` represents the permissions.
#### Changing Permissions
- Permissions can be changed using symbolic or numerical methods.
1. **Symbolic Method:**
- Format: `chmod [who][+/-][permissions] filename`
- `who`: u (user), g (group), o (others), a (all)
- `+`: add a permission, `-`: remove a permission
- Example: `chmod g+w example.txt` (adds write permission for the group)
2. **Numerical Method (Octal):**
- Each permission type is represented by a number: read (4), write (2), execute (1).
- Sum these numbers to set multiple permissions.
- Format: `chmod [number] filename`
- Example: `chmod 644 example.txt`
- `6` (4+2) for user: read and write.
- `4` for group: read.
- `4` for others: read.
#### Common Usage Examples:
1. **Give execute permission to the owner:**
- `chmod u+x example.txt`
2. **Remove execute permission from group and others:**
- `chmod go-x example.txt`
3. **Set permissions to read and write for owner, and only read for group and others:**
- `chmod 644 example.txt`
4. **Give all permissions to everyone:**
- `chmod 777 example.txt`
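You can verify both methods yourself on a throwaway file. The sketch below uses `stat -c '%a'` (GNU coreutils, as shipped with Ubuntu) to print the octal permissions after each change:

```shell
cd "$(mktemp -d)"            # work in a throwaway directory
touch example.txt

chmod 644 example.txt        # rw- r-- r--
stat -c '%a' example.txt     # prints 644

chmod u+x example.txt        # add execute for the owner: 6+1 = 7
stat -c '%a' example.txt     # prints 744

chmod go-r example.txt       # remove read from group and others
stat -c '%a' example.txt     # prints 700
```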
:::danger
- Be careful with `chmod 777`, as it gives full permissions to everyone, which can be a significant security risk.
- Never use `chmod 777` on sensitive files or system directories.
:::
#### Tips:
- Use `chmod -R` to change permissions recursively for all files in a directory.
- Always consider security implications when changing permissions, especially on servers or multi-user systems.
#### Usage with Bee
When installing Bee with `apt-get`, the installed files (such as the config file at `/etc/bee/bee.yaml` and the node's data directory) are owned by root or by the dedicated `bee` system user created by the package, which is why editing them requires `sudo`.
This guide provides a basic understanding of `chmod`. For more detailed usage, consult the `chmod` man page (`man chmod`) or other Unix/Linux documentation.
## Sudo
The `sudo` command in Unix-type systems stands for "superuser do" or "substitute user do." It allows authorized users to execute commands as the superuser (often referred to as "root") or other authorized users. `sudo` provides a secure way to perform administrative tasks, as it requires users to authenticate themselves before executing privileged commands.
:::danger
Never use `sudo` to run a command or shell script which you don't fully understand from an untrusted source.
:::
### Basic Syntax
The basic syntax for using `sudo` is: `sudo command-to-execute`.
For example, to edit a system configuration file with elevated privileges, you can use: `sudo vi /etc/config-file.conf`.
### Using `sudo` with Commands
To execute a command with `sudo`, simply prepend `sudo` to the command you want to run with elevated privileges.
Example: `sudo apt update` will update the package list with root privileges.
### Authentication
When you use `sudo`, you will be prompted to enter your own user password to verify your identity and authorization.
After successful authentication, `sudo` grants temporary superuser privileges for the specific command.
### Timeouts and Caching
`sudo` typically caches your authentication for a certain amount of time (usually 5 minutes by default). During this time, you can execute multiple `sudo` commands without re-entering your password.
After the timeout expires, you will need to re-authenticate.
### Running a Shell as Root
You can run an interactive shell as the root user using: `sudo -i` or `sudo su -`.
Be cautious when using a root shell, as it provides unrestricted access to system files and commands.
### Configuration File
The `sudo` configuration is managed in the `/etc/sudoers` file.
It is recommended to edit this file using the `visudo` command, which provides syntax checking and avoids potentially locking yourself out due to a misconfiguration.
### Granting sudo Access
To grant a user sudo access, add an entry in the `/etc/sudoers` file. You can specify which commands or groups of commands the user can run with sudo.
Example: `username ALL=(ALL:ALL) ALL` grants full sudo access to the user named "username."
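The fields of that entry break down as follows (an annotated sketch; remember never to edit `/etc/sudoers` directly — always use `visudo`):
```bash
# username ALL=(ALL:ALL) ALL
# |        |    |   |    |
# |        |    |   |    +-- the commands the user may run
# |        |    |   +------- the groups the user may run commands as
# |        |    +----------- the users the user may run commands as
# |        +---------------- the hosts on which the rule applies
# +------------------------- the user the rule applies to
```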
### Best Practices
- Use `sudo` sparingly and only when necessary to minimize the risk of accidental system changes.
- Avoid running graphical applications with `sudo` unless required, as it can cause permission issues.
- Always double-check the commands you intend to run with sudo to avoid unintended consequences.
### Logging and Auditing
`sudo` logs user actions in the system log, allowing system administrators to monitor and review command execution for security and auditing purposes.
### In Context of Operating a Bee Node
When operating a Bee node there are times you will need to access the contents of your node's data folder, such as when taking a backup of your node or exporting your private keys. However, by default the data folder is only accessible by the `bee` service. You could change the folder's permissions (see the previous section on file ownership and the `chmod` command), but a more convenient solution is simply to use the `sudo` command, which you can see in more detail in the hands-on section below.
## SSH
SSH (Secure Shell) is used for secure communication between your computer and a VPS or other remote server, and it is the method you will use to interact with your Bee node if you are not running it on a local machine. Initially, you create a pair of cryptographic keys: a public key and a private key. The public key is placed on the VPS, while the private key remains on your PC (and is typically password encrypted for extra security). When connecting, the VPS uses the public key to authenticate the private key held by your PC, ensuring a secure connection. SSH provides secure, encrypted communication for all subsequent connections, safeguarding against unauthorized access and data interception.

When hosting your Bee node on a VPS or other remote server solution, SSH is the go-to method for connecting to and operating your Bee node from your own machine. It's important to emphasize keeping your SSH key safe and private, as it grants access to the remote VPS where your Bee node(s) are running. See the hands-on guide for setting up a VPS for a step-by-step example of generating SSH keys and setting up an SSH connection.
### Transferring Files Securely Over SSH With SCP
When working with remote servers such as the VPS where we will be hosting our Bee node, it's common to need to transfer files between the remote server and your personal machine or another remote server. One great way to do this is with SCP, a command line utility which allows you to securely send files over an SSH connection.
#### Prerequisites
- Ensure SCP is installed on both the source and destination machines. It's typically pre-installed on most Unix-like operating systems.
- Have network access to both machines, and know the IP address or hostname of the destination machine.
- Have an SSH user account on the destination machine.
#### Basic SCP Command Structure
```bash
scp [OPTION] [user@]SRC_HOST:file1 [user@]DEST_HOST:file2
```
- `OPTION`: Optional parameters like `-r` for recursive copy (for directories), `-P` to specify the port, etc.
- `SRC_HOST`: The hostname or IP of the source machine (can be omitted for the local machine).
- `DEST_HOST`: The hostname or IP of the destination machine.
- `file1`, `file2`: Names of the source and destination files/directories.
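A few illustrative invocations (the username `bee`, the host `203.0.113.10`, and the file paths below are placeholders — substitute your own; the commands are echoed rather than executed here, since they require a reachable remote server):
```bash
# Copy a local backup archive to the remote server's home directory:
echo 'scp ./bee-backup.tar.gz bee@203.0.113.10:~/'

# Copy a file from the remote server to the current local directory:
echo 'scp bee@203.0.113.10:~/bee-backup.tar.gz .'

# Recursively copy a directory, connecting over a non-default SSH port:
echo 'scp -r -P 2222 bee@203.0.113.10:/var/lib/bee/keys ./keys-backup'
```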
## Docker, Docker Compose, Docker Hub
Docker, along with Docker Compose and Docker Hub, are essential tools for working with containers. Containers are like virtual packages that bundle together everything needed to run an application, including the code, libraries, and dependencies. These containers are isolated from the host system, ensuring that they run consistently across different environments.
### Docker Compose
Docker Compose is like a conductor for managing multiple containers at once. When you need to run a swarm of Bee nodes or multiple related services, Docker Compose lets you define and run them together in a coordinated way. It simplifies the orchestration of complex setups, making it ideal for those running multiple Bee nodes simultaneously.
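As a rough sketch of what this looks like in practice, a minimal `docker-compose.yml` for a single Bee node might resemble the following. This is illustrative only, not an official configuration — the volume name, data path, env file, and port mappings are assumptions to be adapted to your setup:
```yaml
services:
  bee:
    image: ethersphere/bee:stable   # official image from Docker Hub
    command: start
    env_file: .env                  # BEE_* variables, as covered in the environment variables section
    ports:
      - "1633:1633"                 # Bee API
      - "1634:1634"                 # P2P
    volumes:
      - bee-data:/home/bee/.bee     # persist keys and state across restarts
volumes:
  bee-data:
```
With such a file in place, `docker compose up -d` starts the node and `docker compose logs -f bee` follows its logs.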
### Docker Hub
Docker Hub is a valuable resource for finding and sharing Docker containers, including those for Bee nodes. It serves as a repository of pre-configured containers created by the community, and it is where you can find the [official Docker container](https://hub.docker.com/r/ethersphere/bee) for Bee.
- Docker for Beginners: [Docker for Beginners: Full Course](https://www.youtube.com/watch?v=fqMOX6JJhGo)
- Interactive Docker Tutorial: [Play with Docker Classroom](https://training.play-with-docker.com/)
## JSON RPC APIs
In the world of blockchain and decentralized applications, JSON RPC APIs play a critical role in facilitating interactions between various components. You may already be familiar with commonly known API types such as CRUD or REST APIs, but it's important to understand how JSON RPC APIs are different, and their unique role in blockchain environments.
### Understanding JSON RPC APIs
- **What is a JSON RPC API?**
JSON RPC (JavaScript Object Notation Remote Procedure Call) is a protocol for remote procedure calls that uses JSON, a lightweight data interchange format, for its messages. Unlike REST or CRUD APIs, which are more resource-oriented (focusing on creating, reading, updating, and deleting data entries), JSON RPC is action-oriented. It's all about sending specific requests to perform a predefined action and receiving the corresponding response(s).
### JSON RPC in Blockchain Context
**Interacting with Blockchain Nodes:**
JSON RPC APIs are widely used for interacting with nodes of a blockchain network. For instance, in Ethereum, JSON RPC API calls are used to send transactions, query contract data, monitor network status, and more.
**Gnosis Chain and Bee Node:**
For a Bee node in the Swarm network, a Gnosis Chain RPC endpoint is required. The Bee node interacts with this endpoint to execute necessary blockchain operations, such as handling BZZ tokens (Swarm's native token), and interacting with smart contracts on the Gnosis Chain.
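To make this concrete, here is the shape of a JSON RPC request as it would be sent with `curl`. The endpoint URL is a placeholder for your own Gnosis Chain RPC endpoint, and the payload is printed rather than sent, since sending it requires a reachable endpoint:
```bash
# A JSON RPC call is an HTTP POST whose JSON body names the method to invoke:
PAYLOAD='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
echo "$PAYLOAD"

# To actually send it (requires a reachable RPC endpoint):
# curl -s -X POST -H "Content-Type: application/json" \
#   --data "$PAYLOAD" https://your-gnosis-rpc-endpoint.example
```
The response is also JSON, containing either a `result` field or an `error` field describing what went wrong.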
### 3rd Party RPC Services
**What Are 3rd Party RPC Services?**
Not everyone who wants to interact with a blockchain network chooses to run their own full node due to resource requirements or technical complexities. This is where 3rd party RPC services such as [Infura](https://www.infura.io/) and [GetBlock](https://getblock.io/) come in. They provide access to blockchain networks through their nodes, offering an easy and resource-efficient way to interact with blockchains.
**Advantages:**
- **Ease of Access:** They simplify the process of connecting to a blockchain network.
- **Resource Efficiency:** They eliminate the need for individuals to run and maintain their own full nodes.
- **Scalability:** These services often offer enhanced performance and scalability features.
**Disadvantages:**
- **Cost:** Using a 3rd party provider can be more expensive than running your own node, especially if you have a multi-node setup.
- **Not Trustless:** Requires reliance on the RPC provider to properly operate their blockchain nodes and maintain connectivity, which contradicts the ethos of trustlessness which underpins the Web3 philosophy.
### Usage with Bee Nodes
A Gnosis Chain RPC endpoint is required for operating a Bee node. This connection is vital for the node to perform necessary blockchain interactions such as interacting with the postage stamp and redistribution smart contracts, which are responsible for managing storage payments and incentives.
:::warning
When using 3rd party RPC services, it's essential to consider the security and reliability of the service provider. Choosing a reputable provider ensures the integrity and consistency of your Bee node's interactions with the blockchain. For maximum security we recommend running your own Gnosis node.
:::
## Interacting with the Bee API with `curl`
The Bee API is an HTTP API, meaning that it can be interacted with through HTTP requests. You may use tools such as [Swarm CLI](https://docs.ethswarm.org/docs/bee/working-with-bee/swarm-cli) or [Bee JS](https://bee-js.ethswarm.org/docs/) which can simplify the process of interacting with the Bee API, however it is a good idea to have an understanding of how to interact directly with the API yourself.
### The API and the Debug API
Your Bee node has two APIs which you will interact with directly, the main Bee API and the Debug API. You can review the available endpoints for each API in [the official API reference docs](https://docs.ethswarm.org/docs/api-reference/). By default the Bee API is available at port `1633` and the Debug API is available at port `1635`.
:::info
There is also a third P2P API which is by default at port `1634`, however this API is used only for nodes to communicate with each other and you would not typically interact directly with it.
:::
### HTTP Request Structure
An HTTP request consists of four distinct parts:
1. URL - The endpoint at which your HTTP request is aimed.
2. Method - There are a variety of HTTP methods such as GET, POST, PUT, DELETE and [a few other less common ones](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods).
3. Head - The head contains various headers which provide additional information about the request, some of which can be used to specify options for your request in the Bee API.
4. Body - The body of an HTTP request can contain a wide variety of different data types. Typically only POST and PUT requests contain a body.
Before taking a look at some example HTTP requests, let's first learn how to make HTTP requests with curl.
### Making HTTP Requests With `curl`
The `curl` tool is likely the most commonly used command line tool for making HTTP requests. It's both powerful and easy to use. It allows you to simply specify the URL, method, headers, and body of your request from the command line. It typically comes pre-installed on most Unix operating systems, but if it is not already installed, you can use this command to install it on Ubuntu:
```bash
sudo apt update
sudo apt install curl
```
#### **`curl` request structure:**
A `curl` request consists of the `curl` command followed by options (if any) and the URL of the resource you want to interact with:
```bash
curl [options] [URL]
```
Now let's take a look at a few example requests:
1. Checking your node's status:
```bash
curl http://localhost:1633/health
```
This command will make a `GET` request to the `http://localhost:1633/health` endpoint. Note that if not specified, the default method used in a `curl` request is `GET`. This request contains neither a body nor any headers.
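For contrast, here is what a request using all four parts looks like: a `POST` upload to the Bee API's `/bzz` endpoint with headers and a body. The batch ID is a placeholder, and the command is printed rather than executed, since running it requires a live node with a valid postage batch:
```bash
# -X sets the method, each -H adds a header, --data sets the body:
echo 'curl -X POST \
  -H "Swarm-Postage-Batch-Id: <your-postage-batch-id>" \
  -H "Content-Type: text/plain" \
  --data "hello swarm" \
  http://localhost:1633/bzz'
```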
## Configuration Methods: YAML, Environment Variables, & Command line flags
You can learn more about Bee configuration options [in the official Bee docs](https://docs.ethswarm.org/docs/bee/working-with-bee/configuration), but before you do, it's a good idea to familiarize yourself with the different methods of setting config options if you're not already familiar with them.
There are three methods for specifying configuration when interacting with a Bee node, through a `.yaml` configuration file, through command line flags, or as environment variables. Let's briefly review each method:
### YAML
YAML is similar to JSON in that both are used for storing data in a structured format, but YAML is more human-readable and commonly used for configuration files, while JSON is less easily read by humans and more commonly used for transmitting data between programs. You won't need to worry too much about the details of YAML syntax and document structure, as you will only be using YAML to set configuration variables, which requires only a very basic understanding.
After a new install of Bee you can print out the complete default YAML configuration with the `bee printconfig` command (note that if you change the config, the changes will be reflected in this command's output, so it will no longer show the defaults). Here are the first few lines of the default config:
```yaml
# bcrypt hash of the admin password to get the security token
admin-password: ""
# allow to advertise private CIDRs to the public network
allow-private-cidrs: false
# HTTP API listen address
api-addr: :1633
# chain block time
block-time: "15"
# rpc blockchain endpoint
blockchain-rpc-endpoint: ""
# initial nodes to connect to
bootnode: []
...
```
The lines starting with `#` are comments, and are ignored. All the other lines are simple key/value pairs separated by a colon `:`. The keys are the names of the options, and the values are used to set the option. For example, if we wish to change the default api port from `1633` to `1637`, we simply need to change the `api-addr` value like so:
```yaml
# bcrypt hash of the admin password to get the security token
admin-password: ""
# allow to advertise private CIDRs to the public network
allow-private-cidrs: false
# HTTP API listen address
api-addr: :1637
# chain block time
block-time: "15"
# rpc blockchain endpoint
blockchain-rpc-endpoint: ""
# initial nodes to connect to
bootnode: []
...
```
:::info
Two other important things to note are that YAML is both whitespace- and case-sensitive, so take care to preserve the original case of key names and avoid adding or removing whitespace (spaces or newlines).
:::
There's a bit more to how YAML works but that's all you need to know about YAML when it comes to working with Bee!
### Environment Variables
Environment variables are key-value pairs used by operating systems and applications to store configuration information. Unlike with `config.yaml` files, there are a variety of different ways to set environment variables. For example they can be set directly from the command line, by using a `.env` file, or through the control panel of your VPS provider.
Environment variables are commonly used together with a Docker setup by specifying them in a `.env` file, in much the same way options are set in a `config.yaml` file, and that is the method we will be using in our step-by-step guide.
#### Setting Environment Variables from the Command Line
While we will set our environment variables using a `.env` file, it's good to know how to set them from the command line. This can be useful if you're just playing around with Bee and wish to modify some config options.
An environment variable is set with the following command:
```bash
export VARIABLE_NAME="value"
```
We can check that the value has been set with this command:
```bash
echo $VARIABLE_NAME
```
If we set the variable properly it will print the value of the variable:
```bash
variableValue
```
When it comes to setting config options with environment variables for Bee, you can use the same options as in the default config from the YAML section above, but the option names must be converted into a different format: the names should be in all caps, the hyphens swapped for underscores, and each one prefixed with "BEE_". For example, in the YAML section above we changed the `api-addr` option from `:1633` to `:1637`. To do the same with an environment variable, we use the following command with the converted option name:
```bash
export BEE_API_ADDR=":1637"
```
And then we can check it with the following command:
```bash
echo $BEE_API_ADDR
```
And then we can see the new value:
```bash
:1637
```
#### Setting Environment Variables Through a `.env` File
We will cover how to use `.env` files in concert with a Docker setup more fully in the Docker section of this guide. Here we will simply introduce the format of a `.env` file for your understanding. You can find [an example `.env` Bee configuration file](https://github.com/ethersphere/bee/blob/master/packaging/docker/env) in the Bee repo on github.
Below are the first few lines of the file:
```bash
# Copy this file to .env, then update it with your own settings
### BEE
## HTTP API listen address (default :1633)
# BEE_API_ADDR=:1633
## chain block time (default 15)
# BEE_BLOCK_TIME=15
## initial nodes to connect to (default [/dnsaddr/testnet.ethswarm.org])
# BEE_BOOTNODE=[/dnsaddr/testnet.ethswarm.org]
## cause the node to always accept incoming connections
# BEE_BOOTNODE_MODE=false
## config file (default is /home/<user>/.bee.yaml)
# BEE_CONFIG=/home/bee/.bee.yaml
...
```
Note the instruction to copy the contents of the file from `env` to a new file named `.env`. Files prefixed with a `.` are hidden by default in Unix operating systems, and it is convention to specify environment variables in a hidden `.env` file. When working from the command line you can create your new `.env` file and open it in the Vim editor with the following command:
```bash
vim .env
```
See the Vim section for more instructions on using Vim to edit and save files.
You will notice that in the provided `env` file the config option names have already been converted as described in the previous section (for example, `api-addr` has been converted to `BEE_API_ADDR`). Similar to YAML, the `#` and `##` characters indicate that a line is a comment, to be ignored by the computer. In order to change an option we need to un-comment it by removing the `#`. For example, to change `BEE_API_ADDR` from `:1633` to `:1637`, we modify the file like so:
```bash
# Copy this file to .env, then update it with your own settings
### BEE
## HTTP API listen address (default :1633)
BEE_API_ADDR=:1637
## chain block time (default 15)
# BEE_BLOCK_TIME=15
## initial nodes to connect to (default [/dnsaddr/testnet.ethswarm.org])
# BEE_BOOTNODE=[/dnsaddr/testnet.ethswarm.org]
## cause the node to always accept incoming connections
# BEE_BOOTNODE_MODE=false
## config file (default is /home/<user>/.bee.yaml)
# BEE_CONFIG=/home/bee/.bee.yaml
...
```
And that's about all you need to know about environment variables for working with Bee!
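The third configuration method, command line flags, uses the same option names as the YAML keys, prefixed with `--`. For example, to start the node with the API on port `1637` (the command is echoed rather than run here, since it requires an installed Bee node):
```bash
# Flag names match the YAML keys from the printconfig output:
echo 'bee start --api-addr :1637'
```
Flags are handy for one-off experiments, while a config file or `.env` file is better suited to a long-running node.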
## Health Checks and Health Endpoint
One important aspect of running a Bee node is monitoring your node's health and diagnosing problems when it isn't working properly. When operating a hive of many Bee nodes, you will likely want to automate the process of monitoring node health, however this guide will not address automation and will simply cover how to check your node's health and diagnose problems.
:::info
We will be using the `-s` flag, a `curl` option that stands for "silent". It hides the progress meter so that only the response body of the `curl` request is displayed. For example, without `-s`:
```bash
curl http://localhost:1633/health | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 92 100 92 0 0 225k 0 --:--:-- --:--:-- --:--:-- 92000
{
"status": "ok",
"version": "1.18.2-759f56f7",
"apiVersion": "5.1.1",
"debugApiVersion": "0.0.0"
}
```
And with `-s`:
```bash
curl -s http://localhost:1633/health | jq
{
"status": "ok",
"version": "1.18.2-759f56f7",
"apiVersion": "5.1.1",
"debugApiVersion": "0.0.0"
}
```
The `-s` flag is a great tool to keep in your toolkit for making `curl` responses cleaner and easier to read.
:::
Some key points about health checks and monitoring:
- The Debug API exposes several endpoints useful for checking your node's health.
- Docker and Kubernetes support health check endpoints, which can be used to automatically kill and restart unhealthy nodes.
- The `reservestate` and `redistributionstate` endpoints report on your node's storage reserve and its participation in the storage incentives redistribution game.
- The `topology` endpoint shows your node's peer connections, which is useful for spotting when your node has either too few or too many peers.
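Querying these endpoints follows the same `curl` pattern shown earlier; by default they are served by the Debug API on port `1635`. The commands are echoed rather than executed here, since they require a running node:
```bash
echo 'curl -s http://localhost:1635/reservestate | jq'
echo 'curl -s http://localhost:1635/redistributionstate | jq'
echo 'curl -s http://localhost:1635/topology | jq ".connected"'
```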
### Backup Node Using Sudo
For example, to explore your Bee node's data folder at `/var/lib/bee` and export your private key, you would use these commands:
Inspect data folder:
```bash
sudo ls /var/lib/bee/
```
```bash
[sudo] password for noah:
keys password statestore
```
Inspect keys folder:
```bash
sudo ls /var/lib/bee/keys
```
```bash
libp2p_v2.key pss.key swarm.key
```
Print out `swarm.key` contents in order to save and backup:
```bash
sudo cat /var/lib/bee/keys/swarm.key | jq
```
```bash
{
"address": "d72696bd245f39ec4297b1d209484ad96e044554",
"crypto": {
"cipher": "aes-128-ctr",
"ciphertext": "2d94ccd0e9f52fca3c68b085076efaf58238d4ceec14b5d9b90f4edbdb98fffd",
"cipherparams": {
"iv": "dd325eb98ea08acbf57337d8d2bd375a"
},
"kdf": "scrypt",
"kdfparams": {
"n": 32768,
"r": 8,
"p": 1,
"dklen": 32,
"salt": "4c8eccaf370d1c2047a8023723ae38c1ce075e996715b679a0b13c93e217e779"
},
"mac": "c31017cacaedacecc4be69e5f390d731911b1a26e1fa4a080e0f84a127464515"
},
"version": 3,
"id": "d92c2b71-41cf-4870-baac-ade128f9cbdb"
}
```
## Ethereum Concepts
### Swarm.key - Keystore Files
- Understanding Ethereum Keystore Files: [Ethereum Keystore Files Explained](https://myetherwallet.groovehq.com/knowledge_base/topics/what-is-a-keystore-file)
## Using MetaMask to Add a Custom Token and Send It to an Address
### 1. What is MetaMask?
MetaMask is a widely used Ethereum wallet available as a browser extension and mobile app. It connects web browsers to the Ethereum blockchain, allowing users to interact with decentralized applications (DApps), manage their cryptocurrency assets, and execute transactions without needing a full Ethereum node.
### 2. Networks and Switching to the Gnosis Chain Network
Blockchain networks like Ethereum and Gnosis Chain host their own unique sets of tokens and smart contracts. MetaMask enables users to switch between these networks for interacting with specific DApps or managing different tokens.
**Importance of Gnosis Chain for Bee Nodes:**
For operating a Bee node, it's essential to use the Gnosis Chain because the Swarm network's mainnet smart contracts are deployed there. These smart contracts are crucial for the operation of Bee nodes, as they handle functionalities like postage stamps and incentives within the Swarm ecosystem.
**xDAI for Transaction Fees on Gnosis Chain:**
xDAI is a stablecoin pegged to the US dollar and serves as the native currency for Gnosis Chain. Just as Ethereum uses ETH to pay for transaction fees, Gnosis Chain utilizes xDAI for the same purpose.
**How to Switch Networks in MetaMask:**
1. Click the MetaMask extension icon.
2. Click the network name at the top to open the network dropdown.
3. If Gnosis Chain isn't listed, select "Custom RPC" to add it.
4. Input the Gnosis Chain network details (RPC URL, Chain ID, etc.) which can be found in the Gnosis Chain documentation or from trusted sources.
### 3. Adding Custom Tokens
Custom tokens, not automatically recognized by MetaMask, can be manually added using their contract address. Since xDAI is the native currency of the Gnosis Chain and used for transaction fees, you will not need to add it as a custom token.
### 4. Sending xDAI to an Address
To fund your Bee node operations on the Gnosis Chain, you'll need to send xDAI to your node's wallet address for transaction fees.
1. Ensure MetaMask is set to the Gnosis Chain network.
2. Click on your account to view your xDAI balance.
3. Click "Send."
4. Enter the recipient's address (your Bee node wallet address).
5. Input the amount of xDAI you wish to send.
6. Review the transaction details, including the gas fee (paid in xDAI).
7. Click "Next," verify the transaction details, then click "Confirm" to send xDAI.
Always double-check all transaction details, such as the recipient's address and the network, to ensure a secure and successful transfer. The selection of the Gnosis Chain for Bee node operations, due to its hosting of the Swarm network's mainnet smart contracts, makes xDAI indispensable for covering blockchain transaction fees.
### Understanding Keystore Files and Their Role in Swarm Nodes on the Gnosis Chain
#### What is a Keystore File?
A keystore file securely stores a user's private key, which is used for issuing blockchain transactions. The file is in JSON format and contains encrypted private key information necessary for signing transactions on blockchain networks, including Ethereum and the Gnosis Chain.
#### Keystore Files and Swarm Nodes
When initializing a Bee node, a new keystore file is automatically generated. This file is used by your node to perform blockchain transactions for interaction with smart contracts on the Gnosis Chain.
To manage your Swarm node's wallet or execute transactions from it directly, such as withdrawing xBZZ rewards, you can import the node's wallet into MetaMask using its keystore file. See [the relevant section](https://docs.ethswarm.org/docs/bee/working-with-bee/backups/#export-keys) of the Bee documentation for more step-by-step instructions.
#### Security Considerations
Keystore files, while encrypted, must be handled with utmost care. Ensure your computing environment is secure and never share your keystore file or its password. It's also recommended to keep backups of your keystore file in a secure, offline location to mitigate risks of loss or unauthorized access.
## NAT and Connectivity
If you're running a node on a VPS or another remote server, your server is typically directly connected to the internet, which simplifies setup for Bee nodes. However, if you're running your Bee node on a personal computer at home, your computer might not have a direct internet connection. Instead, it's likely connected through NAT, which is common in home networks where multiple devices share a single public IP address.
For your Bee node to function effectively and connect with other nodes, it must be accessible via a public IP address. This is where NAT comes into play. If your computer is behind NAT, you'll need to ensure that your Bee node can communicate with the outside world. One way to achieve this is by manually specifying your public IP address and the port your node uses in the Bee configuration. This is done using the `nat-addr` option, which helps other nodes in the Swarm network to find and connect to yours.
For more detailed instructions on setting the `nat-addr` option, understanding how NAT affects connectivity, and ensuring your node is well-connected, refer to the Bee documentation.
## Port Forwarding
Besides setting the `nat-addr` option, you may also need to configure port forwarding on your router for the P2P API (port `1634` by default). Do not forward the other two ports, the Bee API and Debug API (ports `1633` and `1635` by default)! Port forwarding directs incoming connections from your public IP address to your Bee node. Additionally, check any firewall settings on your network or computer to ensure they allow incoming connections on the ports used by your Bee node. The steps for doing so may differ depending on your firewall and network setup.
### Step 1: Access Your Router's Configuration Page
**Find Your Router's IP Address:** This is often `192.168.1.1`, `192.168.0.1`, or something similar. You can find it by checking the network settings on your computer or looking at your router's manual.
**Log in to Your Router:** Open a web browser and enter your router's IP address into the address bar. You'll be prompted to enter a username and password. If you haven't changed these from the defaults, they might be something like `admin` for both. Check your router's manual if you're unsure.
### Step 2: Locate the Port Forwarding Section
Once logged in, look for a section labeled “Port Forwarding,” “Applications,” “Gaming,” or something similar. This can often be found under the “Advanced” settings menu, but the exact location varies by router.
### Step 3: Create a New Port Forwarding Rule
**Enter the IP Address of Your Device:** You need to specify the local IP address of the device you want to forward ports to (e.g., your computer running the Bee node). You can find this information in your device's network settings.
**Specify the Port or Port Range:** Enter the port number or range that the service you're running uses. For a Bee node, this is the P2P port (`1634` by default).
**Select the Protocol:** Choose TCP, UDP, or both, depending on what the service requires. If you're unsure, TCP is a safe bet for most applications.
**Apply or Save Your Settings:** Once you've entered all the necessary information, save your changes. Your router may need to restart for the changes to take effect.
### Step 4: Check Your Work
After setting up port forwarding and restarting your router if needed, you can use online tools to verify that the port is open and accessible from the internet. Just search for "port check tool" in your favorite search engine and enter the port you've forwarded.
### Additional Tips:
**Static IP Address:** It's a good idea to set a static IP address for the device you're forwarding ports to. This ensures the device's IP address doesn't change, which would otherwise require you to update the port forwarding settings.
**Security Considerations:** Opening ports on your router can expose your network to risks. Only open the ports you need and understand the security implications of the services you're exposing to the internet.
Remember, routers vary significantly in interface and terminology, so the exact steps may differ. If you encounter difficulties, consult your router's manual or look up specific instructions for your router model online.