Configuring
After you’ve installed GramR, your next step is to complete a configuration file that specifies crucial things about your node — like whether it connects to the testnet or the public network, what database it writes to, and which other nodes are in its quorum set. You do that using TOML, and by default GramR loads that file from ./GramR.cfg. You can specify a different file to load using the command line:
$ gramr --conf betterfile.cfg <COMMAND>
This section of the docs will walk you through the key fields you’ll need to include in your config file to get your node up and running.
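For orientation, here's a minimal sketch of what such a file might look like. Every value below is a placeholder, not a recommendation — the fields themselves are explained in the sections that follow:

```toml
# GramR.cfg — illustrative skeleton only
NETWORK_PASSPHRASE="Test Lantah Network ; 2022"
DATABASE="sqlite3://gramr.db"
BUCKET_DIR_PATH="buckets"
```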
Example Configurations
This doc works best in conjunction with concrete config examples, so as you read through it, you may want to check out the following:
-
The complete example config documents all possible configuration elements, as well as their default values. It’s got every knob you can twiddle and every setting you can tweak along with detailed explanations of how to twiddle and tweak them. You don’t need to put everything from the complete example config into your config file — fields you omit will assume the default setting, and the default setting will generally serve you well — but there are a few required fields, and this doc will explain what they are.
-
If you want to connect to the testnet, check out the example test network config. As you can see, most of the fields from the complete example config are omitted since the default settings work fine. You can easily tailor this config to meet your testnet needs.
-
If you want to connect to the public network, check out this public network config for a Full Validator. It includes a properly crafted quorum set with all the current Tier 1 validators, which is a good place to start for most configurations. This node is set up to both validate and write history to a public archive, but you can disable either feature by adjusting this config so it’s a little lighter.
Database
GramR stores two copies of the ledger: one in a SQL database and one in XDR files on local disk called buckets. The database is consulted during consensus, and modified atomically when a transaction set is applied to the ledger. It’s random access, fine-grained, and fast.
While a SQLite database works with GramR, we generally recommend using a separate PostgreSQL server, which is the best-supported option for production nodes.
You specify your node’s database in the aptly named DATABASE field of your config file, which you can read more about in the complete example config. It defaults to an in-memory database, but you can specify a path as per the example.
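For example, the connection strings follow stellar-core's conventions — the database names here are placeholders:

```toml
# A SQLite file on local disk...
DATABASE="sqlite3://gramr.db"

# ...or a local PostgreSQL database
DATABASE="postgresql://dbname=gramr user=gramr"
```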
If you’re using PostgreSQL, we recommend configuring your local database to be accessed over a Unix domain socket and updating the following PostgreSQL configuration parameters:
# !!! DB connection should be over a Unix domain socket !!!
# shared_buffers = 25% of available system ram
# effective_cache_size = 50% of available system ram
# max_wal_size = 5GB
# max_connections = 150
Buckets
GramR also stores a duplicate copy of the ledger in the form of flat XDR files called “buckets.” These files are placed in a directory specified in the config file as BUCKET_DIR_PATH, which defaults to buckets. The bucket files are used for hashing and transmission of ledger differences to history archives.
Buckets should be stored on a fast local disk with sufficient space to store several times the size of the current ledger.
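If you want buckets somewhere other than the default, point BUCKET_DIR_PATH at a directory on that disk — the path below is just an example:

```toml
BUCKET_DIR_PATH="/mnt/fast-ssd/buckets"
```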
For the most part, the contents of both the database and buckets directories can be ignored as they are managed by GramR. However, when running GramR for the first time, you must initialize both with the following command:
$ gramr new-db
This command initializes the database and bucket directories, and then exits. You can also use this command if your DB gets corrupted and you want to restart it from scratch.
Network Passphrase
Use the NETWORK_PASSPHRASE field to specify whether your node connects to the testnet or the public network. Your choices:
-
NETWORK_PASSPHRASE="Test Lantah Network ; 2022"
-
NETWORK_PASSPHRASE="Public Global Lantah Network ; 2022"
For more about the Network Passphrase and how it works, check out the glossary entry.
Validating
By default, GramR isn’t set up to validate. If you want your node to be a Basic Validator or a Full Validator, you need to configure it to do so, which means preparing it to take part in SCP and sign messages pledging that the network agrees to a particular transaction set.
Configuring a node to participate in SCP and sign messages is a three-step process:
-
Create a keypair by running gramr gen-seed
-
Add NODE_SEED="SD7DN..." to your configuration file, where SD7DN... is the secret key from the keypair
-
Add NODE_IS_VALIDATOR=true to your configuration file
-
If you want other validators to add your node to their quorum sets, you should also share your public key (GDMTUTQ… ) by publishing a lantah.toml file on your home domain following the specs laid out in SEP-20.
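Putting steps 2 and 3 together, the validating portion of a config file might look like this — the seed shown is truncated and illustrative; never commit a real NODE_SEED to version control:

```toml
NODE_SEED="SD7DN..."    # secret key from `gramr gen-seed` — keep this private
NODE_IS_VALIDATOR=true
```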
It’s essential to store and safeguard your node’s secret key: if someone else has access to it, they can send messages to the network and they will appear to originate from your node. Each node you run should have its own secret key.
If you run more than one node, set a HOME_DOMAIN common to those nodes using the NODE_HOME_DOMAIN property. Doing so will allow your nodes to be grouped correctly during quorum set generation.
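For example, each of your nodes would carry the same value (the domain below is a placeholder):

```toml
NODE_HOME_DOMAIN="example.com"
```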
Choosing Your Quorum Set
No matter what kind of node you run — Basic Validator, Full Validator, or Archiver — you need to select a quorum set, which consists of validators (grouped by organization) that your node checks with to determine whether to apply a transaction set to a ledger. If you want to know more about how quorum sets work, check this article about how Lantah approaches quorums. If you want to see what a quorum set consisting of all the Tier 1 validators looks like — a tried and true setup — check out the public network config for a Full Validator.
A good quorum set:
-
aligns with your organization’s priorities
-
has enough redundancy to handle arbitrary node failures
-
maintains good quorum intersection
Since crafting a good quorum set is a difficult thing to do, GramR automatically generates a quorum set for you based on structured information you provide in your config file. You choose the validators you want to trust; GramR configures them into an optimal quorum set.
To generate a quorum set, GramR:
-
Groups validators run by the same organization into a subquorum
-
Sets the threshold for each of those subquorums
-
Gives weights to those subquorums based on quality
While this does not absolve you of all responsibility — you still need to pick trustworthy validators and keep an eye on them to ensure that they’re consistent and reliable — it does make your life easier and reduces the chances for human error.
Validator discovery
When you add a validating node to your quorum set, it’s generally because you trust the organization running the node: you trust Lantah, not some anonymous Lantah public key.
In order to create a self-verified link between a node and the organization that runs it, a validator declares a home domain on-chain using a set_options operation, and publishes organizational information in a lantah.toml file hosted on that domain. To find out how that works, take a look at SEP-20.
As a result of that link, you can look up a node by its Lantah public key and check the lantah.toml to find out who runs it. It’s possible to do that manually, but you can also just consult the list of nodes on Stellarbeat.io. If you decide to trust an organization, you can use that list to collect the information necessary to add their nodes to your configuration.
When you look at that list, you will discover that the most reliable organizations actually run more than one validator, and adding all of an organization’s nodes to your quorum set creates the redundancy necessary to sustain arbitrary node failure. When an organization with a trio of nodes takes one down for maintenance, for instance, the remaining two vote on the organization’s behalf, and the organization’s network presence persists.
One important thing to note: you need to either depend on exactly one entity OR have at least 4 entities for automatic quorum set configuration to work properly. At least 4 is the better option.
Home domains array
To create your quorum set, GramR relies on two arrays of tables: [[HOME_DOMAINS]] and [[VALIDATORS]]. Check out the example config to see those arrays in action.
[[HOME_DOMAINS]] defines a superset of validators: when you add nodes hosted by the same organization to your configuration, they share a home domain, and the information in the [[HOME_DOMAINS]] table, specifically the quality rating, will automatically apply to every one of those validators.
For each organization you want to add, create a separate [[HOME_DOMAINS]] table, and complete the following required fields:
Here’s an example:
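Assuming the required fields match stellar-core’s ([HOME_DOMAIN] and [QUALITY]), a [[HOME_DOMAINS]] entry might look like this — both values are illustrative:

```toml
[[HOME_DOMAINS]]
HOME_DOMAIN="some-org.example.com"
QUALITY="HIGH"
```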
Validators array
For each node you would like to add to your quorum set, complete a [[VALIDATORS]] table with the following fields:
If the node’s HOME_DOMAIN aligns with an organization defined in the [[HOME_DOMAINS]] array, the quality rating specified there will apply to the node. If you’re adding an individual node that is not covered in that array, you’ll need to specify the QUALITY here.
Here’s an example:
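Sketching a [[VALIDATORS]] entry under the same assumption — field names mirror stellar-core’s, and every value below is a placeholder:

```toml
[[VALIDATORS]]
NAME="some-org-node1"
HOME_DOMAIN="some-org.example.com"
PUBLIC_KEY="GDMTUTQ..."
ADDRESS="core1.some-org.example.com:11625"
HISTORY="curl -sf https://history.some-org.example.com/{0} -o {1}"
```

Because this node’s HOME_DOMAIN matches the [[HOME_DOMAINS]] entry for its organization, it would inherit that entry’s quality rating rather than needing its own QUALITY field.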