
 

How to run a core node – Part 1: Installation and Configuration

 

Welcome to Part 1 of our guide on how to run a core node for the Stellar network. In this guide, we will go over the prerequisites (compute, network access, and internal system access), the installation options, and the configuration steps needed to successfully run a core node: setting up the database, buckets, and network passphrase; enabling validation; choosing a quorum set; and configuring history, automatic maintenance, and metadata snapshots and restoration. Let’s get started.

Prerequisites

You can install Stellar Core in a number of different ways, and once you do, you can configure it to participate in the network on several different levels: it can be either a Basic Validator or a Full Validator. No matter how you install Stellar Core or what kind of node you run, however, you need to set your node up to connect to the peer-to-peer network and to store the state of the ledger in a SQL database.

Compute Requirements

In early 2018, Stellar Core with PostgreSQL running on the same machine worked well on an m5.large in AWS (dual-core 2.5 GHz Intel Xeon, 8 GB RAM). Storage-wise, 20 GB was enough in 2018, but the ledger has grown considerably since then, so plan on having at least 1 TB of storage on hand.

If you decide to run Stellar Core on the same machine as Horizon (though note that this is a deprecated architecture, since Horizon now bundles Core for its needs), you will additionally need to ensure that your setup is equipped to handle Horizon’s compute requirements.

Stellar Core is designed to run on relatively modest hardware so that a wide range of individuals and organizations can participate in the network, and basic nodes should be able to function well without tremendous overhead. That said, the more you ask of your node, the greater the requirements.

Network access

Stellar Core interacts with the peer-to-peer network to keep a distributed ledger in sync, which means that your node needs to make certain TCP ports available for inbound and outbound communication.

  • Inbound: a Stellar Core node needs to allow all IPs to connect to its PEER_PORT over TCP. You can specify a port when you configure Stellar Core, but most people use the default, which is 11625.

  • Outbound: a Stellar Core node needs to connect to other nodes via their PEER_PORTs over TCP. You can find information about other nodes’ PEER_PORTs on a network explorer like Stellarbeat, but most use the default port, which is, again, 11625.

 

Internal System Access

Stellar Core also needs to connect to certain internal systems, though exactly how varies based on your setup.

  • Outbound:

    • Stellar Core requires access to a PostgreSQL database. If that database resides on a different machine on your network, you’ll need to allow that connection. You specify the database when you configure Stellar Core.

    • You can block all other connections.

  • Inbound: Stellar Core exposes an unauthenticated HTTP endpoint on its HTTP_PORT. You can specify a port when you configure Stellar Core, but most people use the default, which is 11626.

    • The HTTP_PORT is used by Horizon to submit transactions, so it may need to be exposed to other hosts on your internal network

    • It’s also used to query Stellar Core info and provide metrics

    • And to perform administrative commands such as scheduling upgrades and changing log levels

    • For more on that, see commands
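For example, here’s how you might query that endpoint from the local machine (a minimal sketch assuming the default HTTP_PORT; info, metrics, and ll are standard Stellar Core HTTP commands):

# Query basic node status
curl http://localhost:11626/info

# Retrieve metrics
curl http://localhost:11626/metrics

# Administrative command: change the log level at runtime
curl "http://localhost:11626/ll?level=debug"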

Note: if you need to expose your HTTP endpoint to other hosts in your local network, we recommend using an intermediate reverse proxy server to implement authentication. Don’t expose the HTTP endpoint to the raw and cruel open internet.

Installing

There are three ways to install Stellar Core: you can use a Docker image, use pre-built packages, or build from source. Using a Docker image is the quickest and easiest method, so it’s a good choice for a lot of developers, but if you’re running Stellar Core in production you may need to use packages or, if you want to sacrifice convenience for maximum control, build from source. Whichever method you use, you should make sure to install the latest release, since releases are cumulative and backward compatible.

Docker-based installation
Development environments
SDF maintains a quickstart image that bundles Stellar Core with Horizon and a PostgreSQL database. It’s a quick way to set up a default, non-validating, ephemeral configuration that should work for most developers.

In addition to SDF images, Satoshipay maintains separate Docker images for Stellar Core and Horizon. The Satoshipay Stellar Core Docker image comes in a few flavors, including one with the AWS CLI installed and one with the Google Cloud SDK installed. The Horizon image supports all Horizon environment variables.

Production environments
The SDF also maintains a Stellar-Core-only standalone image.

Example usage:

docker run stellar/stellar-core:latest help
docker run stellar/stellar-core:latest gen-seed

To run the daemon you need to provide a configuration file:

Initialize postgres DB (see DATABASE config option)

docker run -v "/path/to/config/dir:/etc/stellar/" stellar/stellar-core:latest new-db

Run stellar-core daemon in the background

docker run -d -v "/path/to/config/dir:/etc/stellar/" stellar/stellar-core:latest run

The image utilizes deb packages, so it’s possible to confirm that the checksum of the stellar-core binary in the Docker image matches the one in the cryptographically signed deb package. See the package documentation for information on installing Ubuntu packages. To calculate the checksum in the Docker image, you can run:

docker run --entrypoint=/bin/sha256sum stellar/stellar-core:latest /usr/bin/stellar-core

Package-based Installation
If you are using Ubuntu 18.04 LTS or later, we provide the latest stable releases of stellar-core and stellar-horizon in Debian binary package format.

You may choose to install these packages individually, which offers the greatest flexibility but requires manual creation of the relevant configuration files and configuration of a PostgreSQL database.

Most people, however, choose to install the stellar-quickstart package, which configures a Testnet stellar-core and stellar-horizon, both backed by a local PostgreSQL database. Once installed, you can easily modify the Stellar Core configuration if you want to connect to the public network.
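As a rough sketch of the package-based route (the repository URL and signing key location below follow SDF’s published instructions at the time of writing; check the package documentation for current details):

# Add the SDF package signing key
wget -qO - https://apt.stellar.org/SDF.asc | sudo apt-key add -

# Add the SDF repository for your Ubuntu release
sudo bash -c 'echo "deb https://apt.stellar.org $(lsb_release -cs) stable" > /etc/apt/sources.list.d/SDF.list'

# Install stellar-core (or stellar-quickstart for the all-in-one setup)
sudo apt-get update && sudo apt-get install stellar-core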

Installing from source
See the install-from-source documentation for build instructions.

Release version
In general, you should install the latest release build. Builds are backward compatible and cumulative.

The version number scheme that we follow is protocol_version.release_number.patch_number, where:

  • protocol_version is the maximum protocol version supported by that release (all versions are 100% backward compatible)

  • release_number is bumped when a set of new features or bug fixes not impacting the protocol are included in the release

  • patch_number is used when a critical fix has to be deployed

For example, a hypothetical version 19.2.1 would support protocol version 19 at most, reflect two non-protocol feature or bug-fix releases since 19.0.0, and carry one critical patch.

Configuring

After you’ve installed Stellar Core, your next step is to complete a configuration file that specifies crucial things about your node — like whether it connects to the testnet or the public network, what database it writes to, and which other nodes are in its quorum set. You do that using TOML, and by default Stellar Core loads that file from ./stellar-core.cfg. You can specify a different file to load using the command line:

$ stellar-core --conf betterfile.cfg <COMMAND>

This section of the docs will walk you through the key fields you’ll need to include in your config file to get your node up and running.

Example Configurations

This doc works best in conjunction with concrete config examples, so as you read through it, you may want to check out the following:

  • The complete example config documents all possible configuration elements, as well as their default values. It’s got every knob you can twiddle and every setting you can tweak along with detailed explanations of how to twiddle and tweak them. You don’t need to put everything from the complete example config into your config file — fields you omit will assume the default setting and the default setting will generally serve you well — but there are a few required fields, and this doc will explain what they are.

  • If you want to connect to the testnet, check out the example test network config. As you can see, most of the fields from the complete example config are omitted since the default settings work fine. You can easily tailor this config to meet your testnet needs.

  • If you want to connect to the public network, check out this public network config for a Full Validator. It includes a properly crafted quorum set with all the current Tier 1 validators, which is a good place to start for most configurations. This node is set up to both validate and write history to a public archive, but you can disable either feature by adjusting this config so it’s a little lighter.

 

Database

Stellar Core stores two copies of the ledger: one in a SQL database and one in XDR files on a local disk called buckets. The database is consulted during consensus and modified atomically when a transaction set is applied to the ledger. It’s random access, fine-grained, and fast.

While an SQLite database works with Stellar Core, we generally recommend using a separate PostgreSQL server; Postgres is the standard choice for production Stellar Core deployments.

You specify your node’s database in the aptly named DATABASE field of your config file, which you can read more about in the complete example config. It defaults to an in-memory database, but you can specify a path as per the example.
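For illustration, here are two hedged DATABASE values (the database name, user, and socket path are placeholders to adapt to your setup):

# SQLite file on local disk
DATABASE="sqlite3://stellar.db"

# PostgreSQL over a Unix domain socket
DATABASE="postgresql://dbname=stellar user=stellar host=/var/run/postgresql"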

If using PostgreSQL, we recommend configuring your local database to be accessed over a Unix domain socket and updating the PostgreSQL configuration parameters listed below:

 

# !!! DB connection should be over a Unix domain socket !!!
# shared_buffers = 25% of available system RAM
# effective_cache_size = 50% of available system RAM
# max_wal_size = 5GB
# max_connections = 150

 

Buckets

Stellar Core also stores a duplicate copy of the ledger in the form of flat XDR files called “buckets.” These files are placed in a directory specified in the config file as BUCKET_DIR_PATH, which defaults to buckets. The bucket files are used for hashing and transmission of ledger differences to history archives.

Buckets should be stored on a fast local disk with sufficient space to store several times the size of the current ledger.

For the most part, the contents of both the database and buckets directories can be ignored as they are managed by Stellar Core. However, when running Stellar Core for the first time, you must initialize both with the following command:

$ stellar-core new-db

This command initializes the database and bucket directories and then exits. You can also use this command if your DB gets corrupted and you want to restart it from scratch.

Network Passphrase

Use the NETWORK_PASSPHRASE field to specify whether your node connects to the testnet or the public network. Your choices:

  • NETWORK_PASSPHRASE="Test SDF Network ; September 2015"

  • NETWORK_PASSPHRASE="Public Global Stellar Network ; September 2015"

For more about the Network Passphrase and how it works, check out the encyclopedia entry.

Validating

By default, Stellar Core isn’t set up to validate. If you want your node to be a Basic Validator or a Full Validator, you need to configure it to do so, which means preparing it to take part in SCP and sign messages pledging that the network agrees to a particular transaction set.

Configuring a node to participate in SCP and sign messages is a three-step process:

  • Create a keypair using stellar-core gen-seed

  • Add NODE_SEED="SD7DN..." to your configuration file, where SD7DN... is the secret key from the keypair

  • Add NODE_IS_VALIDATOR=true to your configuration file
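Putting those steps together (a sketch; the truncated seed and key are this doc’s placeholders, not real keys, and the gen-seed output is abbreviated):

$ stellar-core gen-seed
Secret seed: SD7DN...
Public: GDMTUTQ...

Then, in stellar-core.cfg:

NODE_SEED="SD7DN..."
NODE_IS_VALIDATOR=true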

If you want other validators to add your node to their quorum sets, you should also share your public key (GDMTUTQ…) by publishing a stellar.toml file on your home domain following the specs laid out in SEP-20.

It’s essential to store and safeguard your node’s secret key: if someone else has access to it, they can send messages to the network and they will appear to originate from your node. Each node you run should have its own secret key.

If you run more than one node, set the home domain common to those nodes using the NODE_HOME_DOMAIN property. Doing so will allow your nodes to be grouped correctly during quorum set generation.
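For example (example.com is a hypothetical domain standing in for your own):

NODE_HOME_DOMAIN="example.com"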

Choosing Your Quorum Set

No matter what kind of node you run — Basic Validator, Full Validator, or Archiver — you need to select a quorum set, which consists of validators (grouped by organization) that your node checks with to determine whether to apply a transaction set to a ledger. If you want to know more about how quorum sets work, check out this article about how Stellar approaches quorums. If you want to see what a quorum set consisting of all the Tier 1 validators looks like — a tried and true setup — check out the public network config for a Full Validator.

A good quorum set:

  • aligns with your organization’s priorities

  • has enough redundancy to handle arbitrary node failures

  • maintains good quorum intersection

Since crafting a good quorum set is a difficult thing to do, Stellar Core automatically generates a quorum set for you based on the structured information you provide in your config file. You choose the validators you want to trust; Stellar Core configures them into an optimal quorum set.

To generate a quorum set, Stellar Core:

  • Groups validators run by the same organization into a subquorum

  • Sets the threshold for each of those subquorums

  • Gives weights to those subquorums based on quality

While this does not absolve you of all responsibility — you still need to pick trustworthy validators and keep an eye on them to ensure that they’re consistent and reliable — it does make your life easier and reduces the chances for human error.

Validator discovery

When you add a validating node to your quorum set, it’s generally because you trust the organization running the node: you trust SDF, not some anonymous Stellar public key.

In order to create a self-verified link between a node and the organization that runs it, a validator declares a home domain on-chain using a set_options operation and publishes organizational information in a stellar.toml file hosted on that domain. To find out how that works, take a look at SEP-20.

As a result of that link, you can look up a node by its Stellar public key and check the stellar.toml to find out who runs it. It’s possible to do that manually, but you can also just consult the list of nodes on Stellarbeat.io. If you decide to trust an organization, you can use that list to collect the information necessary to add their nodes to your configuration.

When you look at that list, you will discover that the most reliable organizations actually run more than one validator, and adding all of an organization’s nodes to your quorum set creates the redundancy necessary to sustain arbitrary node failure. When an organization with a trio of nodes takes one down for maintenance, for instance, the remaining two vote on the organization’s behalf, and the organization’s network presence persists.

One important thing to note: you need to either depend on exactly one entity OR have at least 4 entities for automatic quorum set configuration to work properly. At least 4 is the better option.

Home domains array

To create your quorum set, Stellar Core relies on two arrays of tables: [[HOME_DOMAINS]] and [[VALIDATORS]]. Check out the example config to see those arrays in action.

[[HOME_DOMAINS]] defines a superset of validators: when you add nodes hosted by the same organization to your configuration, they share a home domain, and the information in the [[HOME_DOMAINS]] table, specifically the quality rating, will automatically apply to every one of those validators.

For each organization you want to add, create a separate [[HOME_DOMAINS]] table, and complete the following required fields:

 
 

  • HOME_DOMAIN (string): URL of the home domain linked to a group of validators

  • QUALITY (string): rating for the organization’s nodes: HIGH, MEDIUM, or LOW

 

Here’s an example:


 

[[HOME_DOMAINS]]
HOME_DOMAIN="testnet.stellar.org"
QUALITY="HIGH"

[[HOME_DOMAINS]]
HOME_DOMAIN="some-other-domain"
QUALITY="LOW"

 

Validators array

For each node you would like to add to your quorum set, complete a [[VALIDATORS]] table with the following fields:

 
 

  • NAME (string): a unique alias for the node

  • QUALITY (string): rating for the node (required unless specified in [[HOME_DOMAINS]]): HIGH, MEDIUM, or LOW

  • HOME_DOMAIN (string): URL of the home domain linked to the validator

  • PUBLIC_KEY (string): Stellar public key associated with the validator

  • ADDRESS (string): peer:port associated with the validator (optional)

  • HISTORY (string): archive GET command associated with the validator (optional)

 

If the node’s HOME_DOMAIN aligns with an organization defined in the [[HOME_DOMAINS]] array, the quality rating specified there will apply to the node. If you’re adding an individual node that is not covered in that array, you’ll need to specify the QUALITY here.

Here’s an example:


 

[[VALIDATORS]]
NAME="sdftest1"
HOME_DOMAIN="testnet.stellar.org"
PUBLIC_KEY="GDKXE2OZMJIPOSLNA6N6F2BVCI3O777I2OOC4BV7VOYUEHYX7RTRYA7Y"
ADDRESS="core-testnet1.stellar.org"
HISTORY="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_001/{0} -o {1}"

[[VALIDATORS]]
NAME="sdftest2"
HOME_DOMAIN="testnet.stellar.org"
PUBLIC_KEY="GCUCJTIYXSOXKBSNFGNFWW5MUQ54HKRPGJUTQFJ5RQXZXNOLNXYDHRAP"
ADDRESS="core-testnet2.stellar.org"
HISTORY="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_002/{0} -o {1}"

[[VALIDATORS]]
NAME="rando-node"
QUALITY="LOW"
HOME_DOMAIN="rando.com"
PUBLIC_KEY="GC2V2EFSXN6SQTWVYA5EPJPBWWIMSD2XQNKUOHGEKB535AQE2I6IXV2Z"
ADDRESS="core.rando.com"

 

Validator quality

QUALITY is a required field for each node you add to your quorum set. Whether you specify it for a suite of nodes in [[HOME_DOMAINS]] or for a single node in [[VALIDATORS]], it means the same thing, and you have the same three rating options: HIGH, MEDIUM, or LOW.

HIGH quality validators are given the most weight in automatic quorum set configuration. Before assigning a high quality rating to a node, make sure it has low latency and good uptime, and that the organization running the node is reliable and trustworthy.

A high quality validator:

  • publishes an archive

  • belongs to a suite of nodes that provide redundancy

Choosing redundant nodes is good practice; the archive requirement, on the other hand, is programmatically enforced.

MEDIUM quality validators are nested below high quality validators, and their combined weight is equivalent to a single high quality entity. If a node doesn’t publish an archive but you deem it reliable, or have an organizational interest in including it in your quorum set, give it a medium quality rating.

LOW quality validators are nested below medium quality validators, and their combined weight is equivalent to a single medium quality entity. Should they prove reliable over time, you can upgrade their rating to medium to give them a bigger role in your quorum set configuration.

Automatic quorum set generation

Once you add validators to your configuration, Stellar Core automatically generates a quorum set using the following rules:

  • Validators with the same home domain are automatically grouped together and given a threshold requiring a simple majority (2f+1)

  • Heterogeneous groups of validators are given a threshold assuming Byzantine failure (3f+1)

  • Entities are grouped by QUALITY and nested from HIGH to LOW

  • HIGH quality entities are at the top and are given decision-making priority

  • The combined weight of MEDIUM quality entities equals a single HIGH quality entity

  • The combined weight of LOW quality entities equals a single MEDIUM quality entity

 

Quorum and Overlay Network

It is generally a good idea to give information to your validator on other validators that you rely on. This is achieved by configuring KNOWN_PEERS and PREFERRED_PEERS with the addresses of your dependencies.

Additionally, configuring PREFERRED_PEER_KEYS with the keys from your quorum set might be a good idea to give priority to the nodes that allow you to reach consensus.

Without those settings, your validator depends on other nodes on the network to forward you the right messages, which is typically done as a best effort.
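As a hedged sketch, reusing the testnet hostnames and public key from the examples above (substitute the addresses and keys of your actual dependencies):

KNOWN_PEERS=["core-testnet1.stellar.org", "core-testnet2.stellar.org"]
PREFERRED_PEERS=["core-testnet1.stellar.org", "core-testnet2.stellar.org"]
PREFERRED_PEER_KEYS=["GDKXE2OZMJIPOSLNA6N6F2BVCI3O777I2OOC4BV7VOYUEHYX7RTRYA7Y"]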

Updating and Coordinating Your Quorum Set

When you join the ranks of node operators, it’s also important to join the conversation. The best way to do that: get on the #validators channel on the Stellar Keybase and sign up for the Stellar Validators Google Group; you can also join the #validators channel on our Developer Discord. That way, you can coordinate changes with the rest of the network.

When you need to make changes to your validator or to your quorum set — say you take a validator down for maintenance or add new validators to your node’s quorum set — it’s crucial to stage the changes to preserve quorum intersection and general good health of the network:

  • Don’t remove too many nodes from your quorum set before the nodes are taken down. If different validators remove different sets, the remaining sets may not overlap, which could cause network splits

  • Don’t add too many nodes in your quorum set at the same time. If not done carefully, the new nodes could overpower your configuration

When you want to add or remove nodes, start by making changes to your own nodes’ quorum sets, and then coordinate work with others to reflect those changes gradually.

History

Stellar Core normally interacts with one or more history archives, which are configurable facilities where Full Validators and Archivers store flat files containing history checkpoints: bucket files and history logs. History archives are usually off-site commodity storage services such as Amazon S3, Google Cloud Storage, Azure Blob Storage, or custom SCP/SFTP/HTTP servers. To find out how to publish a history archive, consult Publishing History Archives.

No matter what kind of node you’re running, you should configure it to get history from one or more public archives. You can configure any number of archives to download from: Stellar Core will automatically round-robin between them.

When you’re choosing your quorum set, you should include high-quality nodes — which, by definition, publish archives — and add the location for each node’s archive in the HISTORY field in the validators array.

You can also use command templates in the config file to specify additional archives you’d like to use and how to access them. The example config shows how to configure a history archive through command templates.
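For instance, here’s a hedged sketch of a read-only archive configured through a command template (the archive name sdf_testnet is arbitrary, the URL reuses the testnet archive from the validators example, and {0} and {1} are the remote and local paths Stellar Core substitutes at runtime):

[HISTORY.sdf_testnet]
get="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_001/{0} -o {1}"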

Note: if you notice a lot of errors related to downloading archives, you should check that all archives in your configuration are up to date.

Automatic Maintenance

Some tables in Stellar Core’s database act as a publishing queue for external systems such as Horizon and generate metadata for changes happening to the distributed ledger.

If not managed properly those tables will grow without bounds. To avoid this, a built-in scheduler will delete data from old ledgers that are not used anymore by other parts of the system (external systems included).

The settings that control the automatic maintenance behavior are: AUTOMATIC_MAINTENANCE_PERIOD, AUTOMATIC_MAINTENANCE_COUNT and KNOWN_CURSORS.
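A hedged sketch of those settings in the config file (the values are illustrative, not recommendations; see the complete example config for the defaults):

# How often, in seconds, to run a maintenance pass
AUTOMATIC_MAINTENANCE_PERIOD=360

# How many entries to delete per pass
AUTOMATIC_MAINTENANCE_COUNT=400

# Cursors that external systems (such as Horizon) advance to mark data as consumed
KNOWN_CURSORS=["HORIZON"]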

By default, Stellar Core will perform this automatic maintenance, so be sure to disable it until you have done the appropriate data ingestion in downstream systems (Horizon for example sometimes needs to reingest data).

If you need to regenerate the metadata, the simplest way is to replay ledgers for the range you’re interested in after (optionally) clearing the database with new-db.
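A sketch of that replay, assuming stellar-core’s catchup subcommand and its DESTINATION-LEDGER/LEDGER-COUNT argument format (the ledger numbers are placeholders):

$ stellar-core new-db
$ stellar-core catchup 250000/50000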

Metadata Snapshots and Restoration

Some deployments of Stellar Core and Horizon will want to retain metadata for the entire history of the network. This metadata can be quite large and computationally expensive to regenerate anew by replaying ledgers in stellar-core from an empty initial database state, as described in the previous section.

This can be especially costly if it has to be done more than once: for instance, when bringing a new node online, or when a subsequent version of Horizon introduces a schema change that requires re-ingesting metadata you have already ingested once.

Some operators therefore prefer to shut down their stellar-core (and/or Horizon) processes and take filesystem-level snapshots or database-level dumps of the contents of Stellar Core’s database and bucket directory, and/or Horizon’s database, after metadata generation has occurred the first time. Such snapshots can then be restored, putting stellar-core and/or Horizon in a state containing metadata without performing a full replay.

Any reasonably recent state will do — if such a snapshot is a little old, stellar-core will replay ledgers from whenever the snapshot was taken to the current network state anyway — but this procedure can greatly accelerate restoring validator nodes, or cloning them to create new ones.