Configuring

Overview

Firedancer is configured via a TOML file. Almost all options have a recommended default value that is set automatically by Firedancer, and an operator only needs to specify values for the options they wish to change. The full list of options, as specified in the default.toml file, is documented below.

MIGRATING

The Agave validator is configured with command line options like --identity identity.json --rpc-port 8899. When migrating your scripts, these command line options will need to move to the corresponding configuration options in the TOML file.
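
For example, the two flags above map to the following TOML options (see the corresponding [consensus] and [rpc] entries in the option reference below):

toml
[consensus]
    identity_path = "identity.json"

[rpc]
    port = 8899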

The full list of available options and their defaults are documented below. An example TOML file overriding select options needed for a new validator on testnet might look like:

toml
user = "firedancer"

[gossip]
    entrypoints = [
        "entrypoint.testnet.solana.com:8001",
        "entrypoint2.testnet.solana.com:8001",
        "entrypoint3.testnet.solana.com:8001",
    ]

[consensus]
    expected_genesis_hash = "4uhcVJyU9pJkvQyS88uRDiswHXSCkY3zQawwpjk2NsNY"
    known_validators = [
        "5D1fNXzvv5NjV1ysLjirC4WY92RNsVH18vjmcszZd8on",
        "dDzy5SR3AXdYWVqbDEkVFdvSPCtS9ihF5kJkHCtXoFs",
        "Ft5fbkqNa76vnsjYNwjDZUXoTWpP7VYm3mtsaQckQADN",
        "eoKpUABi59aT4rR9HGS3LcMecfut9x7zJyodWWP43YQ",
        "9QxCLckBiJc783jnMvXZubK4wH86Eqqvashtrwvcsgkv",
    ]
    identity_path = "/home/firedancer/validator-keypair.json"
    vote_account_path = "/home/firedancer/vote-keypair.json"

[rpc]
    port = 9099
    full_api = true
    private = true

Once your configuration file is created, you can use it either by setting the FIREDANCER_CONFIG_TOML environment variable or by passing it to your command with the --config option.

NOTE

The same configuration file must be supplied to all commands, especially when configuring and later running the validator. Using a different file for different commands may cause them to fail.

Logging

By default Firedancer maintains two logs: a permanent log which is written to a file, and an ephemeral log for fast visual inspection which is written to stderr. The Agave runtime and consensus components also output logs, which become part of Firedancer's logs. You can increase the ephemeral log output in the configuration TOML.

toml
[log]
    level_stderr = "INFO"

Layout

One way that Firedancer is fast is that it pins a dedicated thread to each CPU core on the system. Each thread can do one specific kind of work, for example, a verify tile can verify the signatures of incoming transactions. Tiles are connected together in a graph to form an efficient pipeline for processing transactions.

WARNING

Each tile needs a dedicated CPU core, which it will saturate at 100% utilization. The Agave process will run on the cores given by agave_affinity, which should not overlap with the tile cores.

The configuration file has options for how many of each kind of tile should be started.

toml
[layout]
    affinity = "1-18"
    quic_tile_count = 2
    verify_tile_count = 4
    bank_tile_count = 4
    agave_affinity = "19-31"

It is suggested to run as many tiles as possible and tune the tile counts for maximum system throughput so that the Solana network can run faster. There are some example tuned configurations in the src/app/fdctl/config/ folder to work from.

Options

The list of all available configuration options and their default values is provided below. You only need to override options which you wish to change.

toml
# Name of this Firedancer instance.  This name serves as a unique token
# so that multiple Firedancer instances can run side by side without
# conflicting when they need to share a system or kernel namespace.
# When starting a Firedancer instance, it will potentially load, reset,
# or overwrite any state created by a prior or currently running
# instance with the same name.
name = "fd1"

# The operating system user to permission data and run Firedancer as.
# Firedancer needs to start privileged, either with various capabilities
# or as root, so that it can configure kernel bypass networking.  Once
# this configuration has been performed, the process will enter a highly
# restrictive sandbox, drop all privileges, and switch to the user given
# here.  When running the configuration steps of `fdctl configure` data
# will be permissioned so that it is writable for this user and not the
# user which is performing the configuration.
#
# Firedancer requires nothing from this user, and it should be as
# minimally permissioned as is possible.  It is suggested to run
# Firedancer as a separate user from other processes on the machine so
# that they cannot attempt to send signals, ptrace, or otherwise
# interfere with the process.  You should under no circumstances use a
# superuser or privileged user here, and Firedancer will not allow you
# to use the root user.  It is also not a good idea to use a user that
# has access to `sudo` or has other entries in the sudoers file.
#
# Firedancer will automatically determine a user to run as if none is
# provided.  By default, the user is determined by the following
# sequence:
#
#  1. The `SUDO_USER`, `LOGNAME`, `USER`, `LNAME`, or `USERNAME`
#     environment variables are checked in this order, and if one of
#     them is set that user is used
#  2. The `/proc/self/loginuid` file is used to determine the UID, and
#     the username is looked up in nss (the name service switch).
#
# This means if running as sudo, the user will be the terminal user
# which invoked sudo, not the root user.
user = ""

# Absolute directory path to place scratch files used during setup and
# operation.  The ledger and accounts databases will also be placed in
# here by default, although that can be overridden by other options.
#
# Two substitutions will be performed on this string.  If "{user}" is
# present it will be replaced with the user running Firedancer, as
# above, and "{name}" will be replaced with the name of the Firedancer
# instance.
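#
# For example, with `user = "firedancer"` (as in the example earlier in
# this document) and the default `name = "fd1"` above, this default
# expands to "/home/firedancer/.firedancer/fd1".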
scratch_directory = "/home/{user}/.firedancer/{name}"

# Port range used for various incoming network listeners, in the form
# `<MIN_PORT>-<MAX_PORT>`.  The range is half-open: it includes min but
# not max, i.e. [min, max).  Ports are used for receiving transactions and
# votes from clients and other validators.
#
# For Firedancer, ports are assigned statically in later parts of this
# configuration file, and this option is passed to the Agave
# client with the `--dynamic-port-range` argument.  Agave will use
# this to determine port locations for services not yet rewritten as
# part of Firedancer, including gossip and RPC.  This port range should
# NOT overlap with any of the static ports used by Firedancer below.
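#
# For example, with the default "8900-9000" below, Agave may assign its
# listeners to ports 8900 through 8999 (per the half-open range described
# above).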
dynamic_port_range = "8900-9000"

# Firedancer logs to two places by default: stderr and a logfile.
# stdout is not used for logging, and will only be used to print command
# output or boot errors.  Messages to "stderr" are abbreviated and not
# as fully detailed as those to the log file.  The log file is intended
# for long term archival purposes.  The log levels mirror the Linux
# syslog levels, which are, from lowest priority to highest:
#
#   - DEBUG    Development and diagnostic messages.
#   - INFO     Less important informational notice.
#   - NOTICE   More important informational notice.
#   - WARNING  Unexpected condition, shouldn't happen. Should be
#              investigated.
#   - ERR      Kills Firedancer. Routine error, likely programmer error.
#   - CRIT     Kills Firedancer. Critical errors.
#   - ALERT    Kills Firedancer. Alert requiring immediate attention.
#   - EMERG    Kills Firedancer. Emergency requiring immediate
#              attention, security or risk issue.
#
# Default behaviors are:
#
#   - DEBUG messages are not written to either stream.
#   - INFO messages are written in detailed form to the log file.
#   - NOTICE is INFO + messages are written in summary form to
#     stderr.
#   - WARNING is NOTICE + the log file is flushed to disk.
#   - ERR and above are WARNING + the program will be exited with an
#     error code of 1.
#
# All processes in Firedancer share one log file, and they all inherit
# STDERR and STDOUT from the launcher.  An example log message would
# look something like:
#
#    NOTICE  01-23 04:56:07.890123 45678 f0 0 src/file.c(901): 1 is the loneliest number
#
# to the ephemeral log (stderr) and log something like:
#
#    NOTICE  2023-01-23 04:56:07.890123456 GMT-06 45678:45678 user:host:f0 app:thread:0 src/file.c(901)[func]: 1 is the loneliest number
#
# to the permanent log (log file).
[log]
    # Absolute file path of where to place the log file.  It will be
    # appended to, or created if it does not already exist. The
    # shortened ephemeral log will always be written to stderr.
    #
    # Two substitutions will be performed on this string.  If "{user}"
    # is present it will be replaced with the user running Firedancer,
    # as above, and "{name}" will be replaced with the name of the
    # Firedancer instance.
    #
    # If no path is provided, the default is to place the log file in
    # /tmp with a name that will be unique.  If specified as "-", the
    # permanent log will be written to stdout.
    path = ""

    # Firedancer can colorize the stderr ephemeral log using ANSI escape
    # codes so that it looks pretty in a terminal.  This option must be
    # one of "auto", "true", or "false".  If set to "auto" stderr output
    # will be colorized if we can detect the terminal supports it.  The
    # log file output will never be colorized.
    colorize = "auto"

    # The minimum log level which will be written to the log file.  Log
    # levels lower than this will be skipped.  Must be one of the levels
    # described above.
    level_logfile = "INFO"

    # The minimum log level which will be written to stderr.  Log levels
    # lower than this will be skipped.  Must be one of the levels
    # described above.  This should be at least the same as the level
    # for the log file.
    level_stderr = "NOTICE"

    # The minimum log level which will immediately flush the log file to
    # disk.  Must be one of the levels described above.
    level_flush = "WARNING"

# The client supports sending health reports, and performance and
# diagnostic information to a remote server for collection and analysis.
# This reporting powers the Solana Validator Dashboard and is often used
# by developers to monitor network health.
[reporting]
    # A metrics environment string describing where to report the
    # diagnostic event data to.  The options for public clusters are
    # described at https://docs.solanalabs.com/clusters/available,
    # and are:
    #
    # mainnet-beta:
    #   "host=https://metrics.solana.com:8086,db=mainnet-beta,u=mainnet-beta_write,p=password"
    #
    # devnet:
    #   "host=https://metrics.solana.com:8086,db=devnet,u=scratch_writer,p=topsecret"
    #
    # testnet:
    #   "host=https://metrics.solana.com:8086,db=tds,u=testnet_write,p=c4fa841aa918bf8274e3e2a44d77568d9861b3ea"
    #
    # If no option is provided here, event reporting is disabled.
    #
    # This string is passed to the Agave client with the
    # `SOLANA_METRICS_CONFIG` environment variable.
    solana_metrics_config = ""

# The ledger is the set of information that can be replayed to get back
# to the current state of the chain.  In Solana, it is considered a
# combination of the genesis, and the recent unconfirmed blocks.  The
# accounts database (the current balance of all the accounts) is
# information that is derived from the ledger.
[ledger]
    # Absolute directory path to place the ledger.  Firedancer currently
    # spawns an Agave validator to execute transactions that it
    # receives.  If no ledger path is provided, it is defaulted to the
    # `ledger` subdirectory of the scratch directory above.
    #
    # Two substitutions will be performed on this string.  If "{user}"
    # is present it will be replaced with the user running Firedancer,
    # as above, and "{name}" will be replaced with the name of the
    # Firedancer instance.
    #
    # The ledger path constructed here is passed to the Agave
    # client with the `--ledger` argument.
    path = ""

    # Absolute directory path to place the accounts database in.  If
    # this is empty, it will default to the `accounts` subdirectory of
    # the ledger `path` above.  This option is passed to the Agave
    # client with the `--accounts` argument.
    accounts_path = ""

    # Maximum number of shreds to keep in root slots in the ledger
    # before discarding.
    #
    # The default is chosen to allow enough time for a validator to
    # download a snapshot from a peer and boot from it, and to make sure
    # that if a validator needs to reboot from its own snapshot, it has
    # enough slots locally to catch back up to where it was when it
    # stopped.  It works out to around 400GB of space.
    #
    # This option is passed to the Agave client with the
    # `--limit-ledger-size` argument.
    limit_size = 200_000_000

    # If nonempty, enable an accounts index indexed by the specified
    # field.  The account field must be one of "program-id",
    # "spl-token-owner", or "spl-token-mint".  These options are passed
    # to the Agave client with the `--account-index` argument.
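    #
    # For example, to enable the program-id index (as referenced by the
    # [tiles.gui] section below):
    #
    #   account_indexes = ["program-id"]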
    account_indexes = []

    # If account indexes are enabled, exclude these keys from the index.
    # These options are passed to the Agave client with the
    # `--account-index-exclude-key` argument.
    account_index_exclude_keys = []

    # If account indexes are enabled, only include these keys in the
    # index.  This overrides `account_index_exclude_keys` if specified
    # and that value will be ignored.  These options are passed to the
    # Agave client with the `--account-index-include-key` argument.
    account_index_include_keys = []

    # Whether to use compression when storing snapshots.  This option is
    # passed to the Agave client with the
    # `--snapshot-archive-format` argument.
    snapshot_archive_format = "zstd"

    # Refuse to start if saved tower state is not found.  This option is
    # passed to the Agave client with the `--require-tower`
    # argument.
    require_tower = false

[gossip]
    # Routable DNS name or IP address and port number to use to
    # rendezvous with the gossip cluster.  These entrypoints are passed
    # to the Agave client with the `--entrypoint` argument.
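    #
    # For example, using the testnet entrypoints from the example earlier
    # in this document:
    #
    #   entrypoints = ["entrypoint.testnet.solana.com:8001"]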
    entrypoints = []

    # If true, checks at startup that at least one of the provided
    # entrypoints can connect to this validator on all necessary ports.
    # If it can't, then the validator will exit.
    #
    # This option is passed to the Agave client inverted with the
    # `--no-port-check` argument.
    port_check = true

    # The port number to use for receiving gossip messages on this
    # validator.  This option is passed to the Agave client with
    # the `--gossip-port` argument.  This argument is required.  Agave
    # treats this as an optional argument, but the code that tries
    # to find a default will always fail.
    port = 8001

    # DNS name or IP address to advertise to the network in gossip.  If
    # none is provided, the default is to ask the first entrypoint which
    # replies to the port check described above what our IP address is.
    # If no entrypoints are specified, then this will be defaulted to
    # 127.0.0.1.  If connecting to a public cluster like mainnet or
    # testnet, this IP must be externally resolvable and should not be
    # on a local subnet.
    #
    # This option is passed to the Agave client with the
    # `--gossip-host` argument.
    host = ""

[rpc]
    # If nonzero, enable JSON RPC on this port, and use the next port
    # for the RPC websocket.  If zero, disable JSON RPC.  This option is
    # passed to the Agave client with the `--rpc-port` argument.
    port = 0

    # If true, all RPC operations are enabled on this validator,
    # including non-default RPC methods for querying chain state and
    # transaction history.  This option is passed to the Agave
    # client with the `--full-rpc-api` argument.
    full_api = false

    # If the RPC is private, the validator's open RPC port is not
    # published in the `solana gossip` command for use by others.  This
    # option is passed to the Agave client with the
    # `--private-rpc` argument.
    private = false

    # Enable historical transaction info over JSON RPC, including the
    # `getConfirmedBlock` API.  This will cause an increase in disk
    # usage and IOPS.  This option is passed to the Agave client
    # with the `--enable-rpc-transaction-history` argument.
    transaction_history = false

    # If enabled, include CPI inner instructions, logs, and return data
    # in the historical transaction info stored.  This option is passed
    # to the Agave client with the
    # `--enable-extended-tx-metadata-storage` argument.
    extended_tx_metadata_storage = false

    # If true, use the RPC service of known validators only.  This
    # option is passed to the Agave client with the
    # `--only-known-rpc` argument.
    only_known = true

    # If true, enable the unstable RPC PubSub `blockSubscribe`
    # subscription.  This option is passed to the Agave client
    # with the `--rpc-pubsub-enable-block-subscription` argument.
    pubsub_enable_block_subscription = false

    # If true, enable the unstable RPC PubSub `voteSubscribe`
    # subscription.  This option is passed to the Agave client
    # with the `--rpc-pubsub-enable-vote-subscription` argument.
    pubsub_enable_vote_subscription = false

    # If enabled, fetch historical transaction info from a BigTable
    # instance as a fallback to local ledger data when serving RPC
    # requests.  The `GOOGLE_APPLICATION_CREDENTIALS` environment
    # variable must be set to access BigTable.
    #
    # This option is passed to the Agave client with the
    # `--enable-rpc-bigtable-ledger-storage` argument.
    bigtable_ledger_storage = false

# The Agave client periodically takes and stores snapshots of the
# chain's state.  Other clients, especially as they bootstrap or catch
# up to the head of the chain, may request a snapshot.
[snapshots]
    # Enable incremental snapshots by setting this flag.  This option is
    # passed to the Agave client (inverted) with the
    # `--no-incremental-snapshots` flag.
    incremental_snapshots = true

    # Set how frequently full snapshots are taken, measured in slots,
    # where one slot is about 400ms on production chains.  It's
    # recommended to leave this to the default or to set it to the same
    # value that the known validators are using.
    full_snapshot_interval_slots = 25000

    # Set how frequently incremental snapshots are taken, measured in
    # slots.  Must be a multiple of the accounts hash interval (which is
    # 100 by default).
    incremental_snapshot_interval_slots = 100

    # Set the maximum number of full snapshot archives to keep when
    # purging older snapshots.  This option is passed to the Agave
    # client with the `--maximum-full-snapshots-to-retain` argument.
    maximum_full_snapshots_to_retain = 2

    # Set the maximum number of incremental snapshot archives to keep
    # when purging older snapshots.  This option is passed to the Agave
    # client with the `--maximum-incremental-snapshots-to-retain`
    # argument.
    maximum_incremental_snapshots_to_retain = 4

    # Set the minimum snapshot download speed in bytes per second.  If
    # the initial download speed falls below this threshold, the
    # validator will retry the download against a different RPC node.
    #
    # The default value is 10MB/s.  This option is passed to the Agave
    # client with the `--minimum-snapshot-download-speed` argument.
    minimum_snapshot_download_speed = 10485760

    # Absolute directory path for storing snapshots.  If no path is
    # provided, it defaults to the [ledger.path] option from above.
    #
    # Two substitutions will be performed on this string.  If "{user}"
    # is present it will be replaced with the user running Firedancer,
    # as above, and "{name}" will be replaced with the name of the
    # Firedancer instance.
    #
    # The snapshot path constructed here is passed to the Agave
    # client with the `--snapshots` argument.
    path = ""

    # Absolute directory path for storing incremental snapshots.  If no
    # path is provided, defaults to the [snapshots.path] option above,
    # or if that is not provided, the [ledger.path] option above.
    #
    # Two substitutions will be performed on this string.  If "{user}"
    # is present it will be replaced with the user running Firedancer,
    # as above, and "{name}" will be replaced with the name of the
    # Firedancer instance.
    #
    # The snapshot path constructed here is passed to the Agave
    # client with the `--incremental-snapshot-archive-path` argument.
    incremental_path = ""

[consensus]
    # Absolute path to a `keypair.json` file containing the identity of
    # the validator.  When connecting to dev, test, or mainnet it is
    # required to provide an identity file.
    #
    # Two substitutions will be performed on this string.  If "{user}"
    # is present it will be replaced with the user running Firedancer,
    # as above, and "{name}" will be replaced with the name of the
    # Firedancer instance.
    #
    # When running a local cluster, Firedancer will generate a keypair
    # if one is not provided (or has not already been generated) and
    # place it in the scratch directory, under path `identity.json`.
    #
    # This option is passed to the Agave client with the
    # `--identity` argument.
    identity_path = ""

    # Absolute path to a `keypair.json` containing the identity of the
    # voting account of the validator.  If no voting account is
    # provided, voting will be disabled and the validator will cast no
    # votes.
    #
    # Two substitutions will be performed on this string.  If "{user}"
    # is present it will be replaced with the user running Firedancer,
    # as above, and "{name}" will be replaced with the name of the
    # Firedancer instance.
    #
    # This option is passed to the Agave client with the
    # `--vote-account` argument.
    vote_account_path = ""

    # List of absolute paths to authorized-voter keypairs for the vote
    # account.  This is not needed if no vote account is specified.
    # If a vote account is specified and this is empty, the identity
    # account will be used as the authorized-voter account.
    #
    # Two substitutions will be performed on each string in this list.
    # If "{user}" is present it will be replaced with the user running
    # Firedancer, as above, and "{name}" will be replaced with the
    # name of the Firedancer instance.
    #
    # These options are passed to the Agave client with the
    # `--authorized-voter` argument.
    authorized_voter_paths = []

    # If false, do not attempt to fetch a snapshot from the cluster,
    # instead start from a local snapshot if one is present.  A snapshot
    # is required to run the validator, so either one must be present,
    # or you need to fetch it.  The snapshot will be fetched from a
    # validator in the list of entrypoints.  If no validators are listed
    # there, starting the validator will fail.  This option is passed
    # (inverted) to the Agave client with the `--no-snapshot-fetch`
    # argument.
    snapshot_fetch = true

    # If false, do not attempt to fetch the genesis from the cluster.
    # This option is passed (inverted) to the Agave client with
    # the `--no-genesis-fetch` argument.
    genesis_fetch = true

    # On startup, do some simulations to see how fast the validator can
    # generate proof of history, and if it is too slow to keep up with the
    # network, exit out during boot.  It is recommended to leave this on
    # to ensure you can keep up with the network.  This option is passed
    # to the Agave client (inverted) with the
    # `--no-poh-speed-test` argument.
    poh_speed_test = true

    # If set, require the genesis block to have the given hash.  If it
    # does not the validator will abort with an error.  This option is
    # passed to the Agave client with the
    # `--expected-genesis-hash` argument.
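    #
    # For example, the testnet configuration shown earlier in this
    # document uses:
    #
    #   expected_genesis_hash = "4uhcVJyU9pJkvQyS88uRDiswHXSCkY3zQawwpjk2NsNY"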
    expected_genesis_hash = ""

    # If nonzero, after processing the ledger, and the next slot is the
    # provided value, wait until a supermajority of stake is visible on
    # gossip before starting proof of history.  This option is passed to
    # the Agave client with the `--wait-for-supermajority`
    # argument.
    wait_for_supermajority_at_slot = 0

    # If there is a hard fork, it might be required to provide an
    # expected bank hash to ensure the correct fork is being selected.
    # If this is not provided, or we are not waiting for a
    # supermajority, the bank hash is not checked.  Otherwise we require
    # the bank at the supermajority slot to have this hash.  This option
    # is passed to the Agave client with the
    # `--expected-bank-hash` argument.
    expected_bank_hash = ""

    # The shred version is a small hash of the genesis block and any
    # subsequent hard forks.  The Agave client uses it to filter
    # out any shred traffic from validators that disagree with this
    # validator on the genesis hash or the set of hard forks.  If
    # nonzero, ignore any shreds that have a different shred version
    # than this value.  If zero, the expected shred version is
    # automatically determined by copying the shred version that the
    # entrypoint validator is using.  This option is passed to the
    # Agave client with the `--expected-shred-version` argument.
    expected_shred_version = 0

    # If the validator starts up with no ledger, it will wait to start
    # block production until it sees a vote land in a rooted slot.  This
    # prevents double signing.  Turn off to risk double signing a block.
    # This option is passed to the Agave client (inverted) with
    # the `--no-wait-for-vote-to-start-leader` argument.
    wait_for_vote_to_start_leader = true

    # Perform a network speed test on starting up the validator.  If
    # this is not disabled, and the speed test fails, the validator will
    # refuse to start.  This option is passed to the Agave client
    # (inverted) with the `--no-os-network-limits-test` argument.
    os_network_limits_test = true

    # If nonempty, add a hard fork at each of the provided slots.  These
    # options are passed to the Agave client with the
    # `--hard-fork` argument.
    hard_fork_at_slots = []

    # A set of validators we trust to publish snapshots.  If a snapshot
    # is not published by a validator with one of these keys, it is
    # ignored.  If no known validators are specified, any hash will be
    # accepted.  These options are passed to the Agave client with
    # the `--trusted-validator` argument.
    known_validators = []

# CPU cores in Firedancer are carefully managed.  Where a typical
# program lets the operating system scheduler determine which threads to
# run on which cores and for how long, Firedancer overrides most of this
# behavior by pinning threads to CPU cores.
#
# The validator splits all work into eleven distinct jobs, with each
# thread running one of the jobs:
#
#  - net        Sends and receives network packets from the network
#               device
#
#  - quic       Receives transactions from clients, performing all
#               connection management and packet processing to manage
#               and implement the QUIC protocol
#
#  - verify     Verifies the cryptographic signature of incoming
#               transactions, filtering invalid ones
#
#  - dedup      Checks for and filters out duplicated incoming
#               transactions
#
#  - pack       Collects incoming transactions and smartly schedules
#               them for execution when we are leader
#
#  - bank       Executes transactions that have been scheduled when we
#               are leader
#
#  - poh        Continuously hashes in the background, and mixes the
#               hash in with executed transactions to prove passage of
#               time
#
#  - shred      Distributes block data to the network when leader, and
#               receives and retransmits block data when not leader
#
#  - store      Receives block data when we are leader, or from other
#               nodes when they are leader, and stores it locally in a
#               database on disk
#
#  - metric     Collects monitoring information about other tiles and
#               serves it on a HTTP endpoint
#
#  - sign       Holds the validator private key, and receives and
#               responds to signing requests from other tiles
#
# The jobs involved in producing blocks when we are leader are organized
# in a pipeline, where transactions flow through the system in a linear
# sequence.
#
#   net -> quic -> verify -> dedup -> pack -> bank -> poh -> shred -> store
#
# Some of these jobs (net, quic, verify, bank, and shred) can be
# parallelized, and run on multiple CPU cores at once. For example, we
# could structure the pipeline like this for performance:
#
# net -> quic +-> verify -+> dedup -> pack +-> bank -+> poh -> shred -> store
#             +-> verify -+                +-> bank -+
#             +-> verify -+
#             +-> verify -+
#
# Each instance of a job running on a CPU core is called a tile.  In
# this configuration we are running 4 verify tiles and 2 bank tiles.
#
# The problem of deciding which cores to use, and what job to run on
# each core we call layout.  Layout is system dependent and the highest
# throughput layout will vary depending on the specific hardware
# available.
#
# Tiles communicate with each other using message queues.  If a queue
# between two tiles fills up, the producer will either block, waiting
# until there is free space to continue (referred to as backpressure),
# or it will drop transactions or data and continue.
#
# A slow tile can cause backpressure through the rest of the system
# causing it to halt, and the goal of adding more tiles is to increase
# throughput of a job, preventing dropped transactions.  For example,
# if the QUIC server was producing 100,000 transactions a second, but
# each verify tile could only handle 20,000 transactions a second, five
# of the verify tiles would be needed to keep up without dropping
# transactions.
#
# A full Firedancer layout spins up these eleven tasks onto a variety of
# CPU cores and connects them together with queues so that data can flow
# in and out of the system with maximum throughput and minimal drops.
[layout]
    # Logical CPU cores to run Firedancer tiles on.  Can be specified as
    # a single core like "0", a range like "0-10", or a range with
    # stride like "0-10/2".  Stride is useful when CPU cores should be
    # skipped due to hyperthreading.  You can also have a number
    # preceded by a 'f' like 'f5' which means the next five tiles are
    # not pinned and will float on the original core set that Firedancer
    # was started with.
    #
    # For example, if Firedancer has six tiles numbered 0..5, and the
    # affinity is specified as
    #
    #  f1,0-1,2-4/2,f1
    #
    # Then the tile to core mapping looks like,
    #
    # tile | core
    # -----+-----
    #    0 | floating
    #    1 | 0
    #    2 | 1
    #    3 | 2
    #    4 | 4
    #    5 | floating
    #
    # If the value is specified as auto, Firedancer will attempt to
    # determine the best layout for the system.  This is the default
    # value although for best performance it is recommended to specify
    # the layout manually.  If the layout is specified as auto, the
    # agave_affinity below must also be set to auto.
    affinity = "auto"

    # In addition to the Firedancer tiles which use a core each, the
    # current version of Firedancer hosts an Agave validator as
    # a subprocess.
    #
    # This affinity controls which logical CPU cores the Agave
    # subprocess and all of its threads are allowed to run on.  This is
    # specified in the same format as the above Firedancer affinity.
    #
    # It is strongly suggested that you do not overlap the Firedancer
    # affinity with the Agave affinity, as Firedancer tiles expect
    # to have exclusive use of their core.  Unexpected latency spikes
    # due to context switching may decrease performance overall.
    #
    # If the value is specified as "auto", the [layout.affinity] field
    # must also be set to "auto", and the Agave affinity will be
    # determined automatically as well.
    agave_affinity = "auto"

    # How many net tiles to run.  Should be set to 1.  This is
    # configurable and designed to scale out for future network
    # conditions but there is no need to run more than 1 net tile given
    # current `mainnet-beta` conditions.
    #
    # Net tiles are responsible for sending and receiving packets from
    # the network device configured in the [tiles.net] section below.
    # Each net tile will service exactly one queue from the device, and
    # Firedancer will error on boot if the number of queues on the
    # device is not configured correctly.
    #
    # The net tile is designed to scale linearly when adding more tiles.
    #
    # See the comments for the [tiles.net] section below for more
    # information.
    net_tile_count = 1

    # How many QUIC tiles to run.  Should be set to 1.  This is
    # configurable and designed to scale out for future network
    # conditions but there is no need to run more than 1 QUIC tile given
    # current `mainnet-beta` conditions, unless the validator is the
    # subject of an attack.
    #
    # QUIC tiles are responsible for parsing incoming QUIC protocol
    # messages, managing connections and responding to clients.
    # Connections from the net tiles will be evenly distributed
    # between the available QUIC tiles round robin style.
    #
    # QUIC tiles are designed to scale linearly when adding more tiles.
    quic_tile_count = 1

    # How many resolver tiles to run.  Should be set to 1.  This is
    # configurable and designed to scale out for future network
    # conditions but there is no need to run more than 1 resolver tile
    # given current `mainnet-beta` conditions, unless the validator is
    # under a DoS or spam attack.
    #
    # Resolve tiles are responsible for resolving address lookup tables
    # before transactions are scheduled.
    resolv_tile_count = 1

    # How many verify tiles to run.  Verify tiles perform signature
    # verification on incoming transactions, an expensive operation that
    # is often the bottleneck of the validator.
    #
    # Verify tiles are designed to scale linearly when adding more
    # tiles, and the verify tile count should be increased until the
    # validator is not dropping incoming QUIC transactions from clients.
    #
    # On modern hardware, each verify tile can handle around 20-40K
    # transactions per second.  Six tiles seems to be enough to handle
    # current `mainnet-beta` traffic, unless the validator is under a
    # denial of service or spam attack.
    verify_tile_count = 6

    # How many bank tiles to run.  Should be set to 4.  Bank tiles
    # execute transactions, so the validator can include the results of
    # the transaction into a block when we are leader.  Because of
    # current consensus limits restricting blocks to around 32,000
    # transactions per block, there is no need to use more than 4 bank
    # tiles on mainnet-beta.  For development and benchmarking, it can
    # be useful to increase this number further.
    #
    # Bank tiles do not scale linearly.  The current implementation uses
    # the agave runtime for execution, which takes various locks and
    # uses concurrent data structures which slow down with multiple
    # parallel users.
    bank_tile_count = 4

    # How many shred tiles to run.  Should be set to 1.  This is
    # configurable and designed to scale out for future network
    # conditions but there is no need to run more than 1 shred tile
    # given current `mainnet-beta` conditions.  There is however
    # a need to run 2 shred tiles under current `testnet` conditions.
    #
    # Shred tiles distribute block data to the network when we are
    # leader, and receive and retransmit it to other nodes when we are
    # not leader.
    #
    # Shred tile performance is heavily dependent on the number of peer
    # nodes in the cluster, as computing where data should go is an
    # expensive function with this list of peers as the input.  In
    # development and benchmarking, 1 tile is also sufficient to hit
    # very high TPS rates because the cluster size will be very small.
    shred_tile_count = 1

# All memory that will be used in Firedancer is pre-allocated in two
# kinds of pages: huge and gigantic.  Huge pages are 2MB and gigantic
# pages are 1GB.  This is done to prevent TLB misses which can have a
# high performance cost.  There are three important steps in this
# configuration,
#
#  1. At boot time or soon after, the kernel is told to allocate a
#     certain number of both huge and gigantic pages to a special pool
#     so that they are reserved for later use by privileged programs.
#
#  2. At configuration time, one (pseudo) filesystem of type hugetlbfs
#     for each of huge and gigantic pages is mounted on a local
#     directory.  Any file created within these filesystems will be
#     backed by in-memory pages of the desired size.
#
#  3. At Firedancer initialization time, Firedancer creates a
#     "workspace" file in one of these mounts.  The workspace is a
#     single mapped memory region within which the program lays out and
#     initializes all of the data structures it will need in advance.
#     Most Firedancer allocations occur at initialization time, and this
#     memory is fully managed by special purpose allocators.
#
# A typical layout of the mounts looks as follows,
#
#  /mnt/.fd                     [Mount parent directory specified below]
#    +-- .gigantic              [Files created in this mount use 1GiB
#                                pages]
#        +-- firedancer1.wksp
#    +-- .huge                  [Files created in this mount use 2MiB
#                                pages]
#        +-- scratch1.wksp
#        +-- scratch2.wksp
[hugetlbfs]
    # The absolute path to a directory in the filesystem.  Firedancer
    # will mount the hugetlbfs filesystem for gigantic pages at a
    # subdirectory named .gigantic under this path, or if the entire
    # path already exists, will use it as-is.  Firedancer will also
    # mount the hugetlbfs filesystem for huge pages at a subdirectory
    # named .huge under this path, or if the entire path already exists,
    # will use it as-is.  If the mount already exists it should be
    # writable by the Firedancer user.
    mount_path = "/mnt/.fd"

# Tiles are described in detail in the layout section above.  While the
# layout configuration determines how many of each tile to place on
# which CPU core to create a functioning system, below is the individual
# settings that can change behavior of the tiles.
[tiles]
    # A networking tile is responsible for sending and receiving packets
    # on the network.  Each networking tile is bound to a specific
    # queue on a network device.  For example, if you have one network
    # device with four queues, you must run four net tiles.
    #
    # Net tiles will multiplex in both directions, fanning out packets
    # to multiple parts of Firedancer that can receive and handle them,
    # like QUIC tiles and the Turbine retransmission engine.  Then also
    # fanning in from these various network senders to transmit on the
    # queues we have available.
    [tiles.net]
        # Which interface to bind to for network traffic.  Currently
        # only one interface is supported for networking.  If this is
        # empty, the default is the interface used to route to 8.8.8.8;
        # you can check what this is with `ip route get 8.8.8.8`.
        #
        # If developing under a network namespace with [netns] enabled,
        # this should be the same as [development.netns.interface0].
        interface = ""

        # Firedancer uses XDP for fast packet processing.  XDP supports
        # two modes, XDP_SKB and XDP_DRV.  XDP_DRV is preferred as it is
        # faster, but is not supported by all drivers.  This argument
        # must be either the string "skb" or the string "drv".  You can
        # also use "generic" here for development environments, but it
        # should not be used in production.
        xdp_mode = "skb"

        # XDP has a metadata queue with memory defined by the driver or
        # kernel that is specially mapped into userspace.  With XDP mode
        # XDP_DRV this could be MMIO to a PCIE device, but in SKB it's
        # kernel memory made available to userspace that is copied in
        # and out of the device.
        #
        # This setting defines the size of these metadata queues.  A
        # larger value is probably better if supported by the hardware,
        # as we will drop fewer packets when bursting in high bandwidth
        # scenarios.
        #
        # TODO: This probably shouldn't be configurable, we should just
        # use the maximum available to the hardware?
        xdp_rx_queue_size = 4096
        xdp_tx_queue_size = 4096

        # When writing multiple queue entries to XDP, we may wish to
        # batch them together if it's expensive to do them one at a
        # time.  This might be the case for example if the writes go
        # directly to the network device.  A large batch size may not be
        # ideal either, as it adds latency and jitter to packet
        # handling.
        xdp_aio_depth = 256

        # The maximum number of packets in-flight between a net tile and
        # downstream consumers, after which additional packets begin to
        # replace older ones, which will be dropped.  TODO: ... Should
        # this really be configurable?
        send_buffer_size = 16384

        # The XDP program will filter packets that aren't destined for
        # the IPv4 address of the interface bound above, but sometimes a
        # validator may advertise multiple IP addresses.  In this case
        # the additional addresses can be specified here, and packets
        # addressed to them will be accepted.
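        #
        # For example (illustrative documentation-range address only):
        #
        #   multihome_ip_addrs = ["203.0.113.5"]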
        multihome_ip_addrs = []

    # QUIC tiles are responsible for serving network traffic, including
    # parsing and responding to packets and managing connection timeouts
    # and state machines.  These tiles implement the QUIC protocol,
    # along with receiving regular (non-QUIC) UDP transactions, and
    # forward well formed (but not necessarily valid) ones to verify
    # tiles.
    [tiles.quic]
        # Which port to listen on for incoming, regular UDP transactions
        # that are not over QUIC.  These could be votes, user
        # transactions, or transactions forwarded from another
        # validator.
        regular_transaction_listen_port = 9001

        # Which port to listen on for incoming QUIC transactions.
        # Currently this must be exactly 6 more than the
        # transaction_listen_port.
        quic_transaction_listen_port = 9007

        # Maximum number of simultaneous QUIC connections which can be
        # open.  New connections which would exceed this limit will not
        # be accepted.
        #
        # This must be >= 2 and also a power of 2.
        max_concurrent_connections = 131072

        # QUIC allows for multiple streams to be multiplexed over a
        # single connection.  This option sets the maximum number of
        # simultaneous streams per connection supported by our protocol
        # implementation.
        #
        # If the peer has this many simultaneous streams open and wishes
        # to initiate another stream, they must first retire an existing
        # stream.
        #
        # The Solana protocol uses one stream per transaction.
        # Supporting more streams per connection currently has a memory
        # footprint cost on the order of kilobytes per stream, per
        # connection.
        #
        # Increasing this number causes server-side performance to get
        # worse.
        max_concurrent_streams_per_connection = 512

        # Controls how many transactions coming in via TPU can be
        # reassembled at the same time.  Reassembly is required for user
        # transactions larger than roughly 1200 bytes, as these arrive
        # fragmented.  This parameter should scale linearly with line
        # rate.  Usually, clients send all fragments at once, such that
        # each reassembly only takes a few microseconds.
        #
        # Higher values reduce TPU packet loss over unreliable networks.
        # If this parameter is set too low, packet loss can cause some
        # large transactions to get dropped.  Must be 2 or larger.
        txn_reassembly_count = 4194304

        # QUIC has a handshake process which establishes a secure
        # connection between two endpoints.  The handshake process is
        # very expensive, so we allow only a limited number of
        # handshakes to occur concurrently.
        #
        max_concurrent_handshakes = 4096

        # QUIC has a concept of a "QUIC packet", there can be multiple
        # of these inside a UDP packet.  Each QUIC packet we send to the
        # peer needs to be acknowledged before we can discard it, as we
        # may need to retransmit.  This setting configures how many such
        # packets we can have in-flight to the peer and unacknowledged.
        max_inflight_quic_packets = 64

        # QUIC has a concept of an idle connection, one where neither
        # the client nor the server has sent any packet to the other for
        # a period of time.  Once this timeout is reached the connection
        # will be terminated.
        #
        # An idle connection will be terminated if it remains idle
        # longer than this threshold.
        idle_timeout_millis = 10000

        # Max delay for outgoing ACKs.
        ack_delay_millis = 50

        # QUIC retry is a feature to combat new connection request
        # spamming.  See rfc9000 8.1.2 for more details.  This flag
        # determines whether the feature is enabled in the validator.
        retry = true

    # Verify tiles perform signature verification of incoming
    # transactions, making sure that the data is well-formed, and that
    # it is signed by the appropriate private key.
    [tiles.verify]
        # The maximum number of messages in-flight between a QUIC tile
        # and associated verify tile, after which earlier messages might
        # start being overwritten, and get dropped so that the system
        # can keep up.
        receive_buffer_size = 16384

    # After being verified, all transactions are sent to a dedup tile to
    # ensure the same transaction is not repeated multiple times.  The
    # dedup tile keeps a rolling history of signatures it has seen and
    # drops any that are duplicated, before forwarding unique ones on.
    [tiles.dedup]
        # The size of the cache that stores unique signatures we have
        # seen to deduplicate.  This is the maximum number of signatures
        # that can be remembered before we will let a duplicate through.
        #
        # If a duplicated transaction is let through, it will waste more
        # resources downstream before we are able to determine that it
        # is invalid and has already been executed.  If a lot of memory
        # is available, it can make sense to increase this cache size to
        # protect against denial of service from high volumes of
        # transaction spam.
        signature_cache_size = 4194302

    # The pack tile takes incoming transactions that have been verified
    # by the verify tile and then deduplicated, and attempts to order
    # them in an optimal way to generate the most fees per compute
    # resource used to execute them.
    [tiles.pack]
        # The pack tile receives transactions while it is waiting to
        # become leader and stores them for future execution.  This
        # option determines the maximum number of transactions that
        # will be stored before those with the lowest estimated
        # profitability get dropped.  The maximum allowed, and default
        # value is 65534 and it is not recommended to change this.
        max_pending_transactions = 65534

        # When a transaction consumes fewer CUs than it requests, the
        # bank and pack tiles work together to adjust the block limits
        # so that a different transaction can consume the unspent CUs.
        # This is normally desirable, as it typically leads to
        # producing blocks with more transactions.
        #
        # In situations where transactions typically do not
        # significantly over-request CUs, or when the CU limit is high
        # enough so that over-requesting CUs does not impact how many
        # transactions fit in a block, this can be disabled to improve
        # performance.  It's not recommended (but allowed) to disable
        # this option in a production cluster.
        use_consumed_cus = true

    # The bank tile is what executes transactions and updates the
    # accounting state as a result of any operations performed by the
    # transactions.  Currently the bank tile is implemented by the
    # Agave execution engine and is not configurable.
    [tiles.bank]

    # The shred tile distributes processed transactions that have been
    # executed to the rest of the cluster in the form of shred packets.
    [tiles.shred]
        # When this validator is not the leader, it receives the most
        # recent processed transactions from the leader and other
        # validators in the form of shred packets.  Shreds are grouped
        # in sets for error correction purposes, and the full validation
        # of a shred packet requires receiving at least half of the set.
        # Since shreds frequently arrive out of order, the shred tile
        # needs a relatively large buffer to hold sets of shreds until
        # they can be fully validated.  This option specifies the size
        # of this buffer.
        #
        # To compute an appropriate value, multiply the expected Turbine
        # worst-case latency (tenths of seconds) by the expected
        # transaction rate, and divide by approx 25.
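        #
        # For example (illustrative numbers only): a worst-case Turbine
        # latency of 4 tenths of a second at 3,200 transactions per
        # second gives 4 * 3200 / 25 = 512, the default below.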
        max_pending_shred_sets = 512

        # The shred tile listens on a specific port for shreds to
        # forward.  This argument controls which port that is.  The port
        # is broadcast over gossip so other validators know how to reach
        # this one.
        shred_listen_port = 8003

    # The metric tile receives metrics updates published from the rest
    # of the tiles and serves them via a Prometheus-compatible HTTP
    # endpoint.
    [tiles.metric]
        # The address to listen on.  By default metrics are only
        # accessible from the local machine.  If you wish to expose them
        # to the network, you can change the listen address.
        #
        # The Firedancer team makes a best effort to secure the metrics
        # endpoint but exposing it to the internet from a production
        # validator is not recommended as it increases the attack
        # surface of the validator.
        prometheus_listen_address = "127.0.0.1"

        # The port to listen on for HTTP request for Prometheus metrics.
        # Firedancer serves metrics at a URI like 127.0.0.1:7999/metrics
        prometheus_listen_port = 7999

    # The gui tile receives data from the validator and serves an HTTP
    # endpoint to clients to view it.
    [tiles.gui]
        # If the GUI is enabled.
        # 
        # Names and icons of peer validators will not be displayed in
        # the GUI unless the program-id index is enabled, which can be
        # done by setting
        #
        # [ledger]
        #   account_indexes = ["program-id"]
        #
        # in your configuration above.
        enabled = true

        # The address to listen on.  By default, if enabled, the GUI
        # will only be accessible from the local machine.  If you wish
        # to expose it to the network, you can change the listen
        # address.
        #
        # The Firedancer team makes a best effort to secure the GUI
        # endpoint but exposing it to the internet from a production
        # validator is not recommended as it increases the attack
        # surface of the validator.
        gui_listen_address = "127.0.0.1"

        # The port to listen on.
        gui_listen_port = 80

# These options can be useful for development, but should not be used
# when connecting to a live cluster, as they may cause the validator to
# be unstable or have degraded performance or security.  The program
# will check that these options are set correctly in production and
# refuse to start otherwise.
[development]
    # For enhanced security, Firedancer runs itself in a restrictive
    # sandbox in production.  The sandbox prevents most system calls and
    # restricts the capabilities of the process after initialization to
    # make the attack surface smaller.  This is required in production,
    # but might be too restrictive during development.
    #
    # In development, you can disable the sandbox for testing and
    # debugging with the `--no-sandbox` argument to `fddev`.
    sandbox = true

    # As part of the security sandboxing, Firedancer will run every tile
    # in a separate process.  This can be annoying for debugging where
    # you want control of all the tiles under one inferior, so we also
    # support a development mode where tiles are run as threads instead
    # and the system operates inside a single process.  This does not
    # impact performance and threads still get pinned.
    #
    # This option cannot be enabled in production.  In development, you
    # can also launch Firedancer as a single process with the
    # `--no-clone` argument to `fddev`.
    no_clone = false

    # Firedancer currently hosts an Agave client as a child process
    # when it starts up, to provide functionality that has not yet been
    # implemented. For development sometimes it is desirable to not
    # launch this subprocess, although it will prevent the validator
    # from operating correctly.
    #
    # In development, you can disable agave for testing and debugging
    # with the `--no-agave` argument to `fddev`.
    no_agave = false

    # Sometimes, it may be useful to run a bootstrap Firedancer
    # validator, either for development or for testing purposes.  The
    # `fddev` tool is provided for this purpose, which creates the
    # bootstrap keys and does the cluster genesis using some parameters
    # that are typically useful for development.
    #
    # Enabling this allows de-coupling the genesis and key creation from
    # the validator startup.  The bootstrap validator can then be
    # started up with `fdctl`.  It will expect the genesis to already
    # exist at [ledger.path].  The keys used during genesis should be
    # the same as the ones supplied in the [consensus.identity_path] and
    # [consensus.vote_account_path].  This option will not be effective
    # if [gossip.entrypoints] is non-empty.
    bootstrap = false

    # It can be convenient during development to use a network namespace
    # for running Firedancer.  This allows us to send packets at a local
    # Firedancer instance and have them go through more of the kernel
    # XDP stack than would be possible by just using the loopback
    # interface.  We have special support for creating a pair of virtual
    # interfaces that are routable to each other.
    #
    # Because of how Firedancer uses UDP and XDP together, we do not
    # receive packets when binding to the loopback interface.  This can
    # make local development difficult.  Network namespaces are one
    # solution, they allow us to create a pair of virtual interfaces on
    # the machine which can route to each other.
    #
    # If this configuration is enabled, `fdctl dev` will create two
    # network namespaces and a link between them to send packets back
    # and forth.  When this option is enabled, the interface to bind to
    # in the net configuration must be one of the virtual interfaces.
    # Firedancer will be launched by `fdctl` within that namespace.
    #
    # This is a development only configuration, network namespaces are
    # not suitable for production use due to performance overhead.  In
    # development when running with `fddev`, this can also be enabled
    # with the `--netns` command line argument.
    [development.netns]
        # If enabled, `fdctl dev` will ensure the network namespaces are
        # configured properly, that they can route to each other, and
        # that Firedancer is run inside the namespace for interface0.
        enabled = false

        # Name of the first network namespace.
        interface0 = "veth_test_xdp_0"
        # MAC address of the virtual interface in the first network
        # namespace.
        interface0_mac = "52:F1:7E:DA:2C:E0"
        # IP address of the virtual interface in the first network
        # namespace.
        interface0_addr = "198.18.0.1"

        # Name of the second network namespace.
        interface1 = "veth_test_xdp_1"
        # MAC address of the virtual interface in the second network
        # namespace.
        interface1_mac = "52:F1:7E:DA:2C:E1"
        # IP address of the virtual interface in the second network
        # namespace.
        interface1_addr = "198.18.0.2"

    [development.gossip]
        # Under normal operating conditions, a validator should only
        # reach out to hosts located on the public internet.  If this
        # value is true, Firedancer is also allowed to gossip with
        # nodes located on a private network (RFC 1918 address space).
        #
        # This option is passed to the Agave client with the
        # `--allow-private-addr` flag.
        allow_private_address = false
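
        # (For reference, RFC 1918 private address space covers
        # 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16.)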

    [development.genesis]
        # When creating a new chain from genesis during development,
        # this option can be used to specify the number of hashes in
        # each tick of the proof of history component.
        #
        # A value of one means that the proof of history component will
        # run in low power mode, using one hash per tick.  This is
        # equivalent to providing a hashes-per-tick value of "sleep" to
        # the Agave genesis.
        #
        # A value of zero means the genesis will automatically determine
        # the number of hashes in each tick based on how many hashes
        # the generating computer can do in the target tick duration
        # specified below.
        #
        # This value specifies the initial value for the chain in the
        # genesis, but it might be overridden at runtime if the related
        # features which increase this value are enabled. The features
        # are named like `update_hashes_per_tick2`.
        #
        # A value of 62,500 is the same as mainnet-beta, devnet, and
        # testnet, following activation of the `update_hashes_per_tick6`
        # feature.
        hashes_per_tick = 62_500

        # How long each tick of the proof of history component should
        # take, in microseconds.  This value specifies the initial value
        # of the chain and it will not change at runtime.  The default
        # value used here is the same as mainnet, devnet, and testnet.
        target_tick_duration_micros = 6250

        # The number of ticks in each slot.  This value specifies the
        # initial value of the chain and it will not change at runtime.
        # The default value used here is the same as mainnet, devnet,
        # and testnet.
        ticks_per_slot = 64
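
        # Taken together, the three defaults above imply a slot time of
        # 64 ticks * 6,250 us = 400 ms, and a proof of history rate of
        # 62,500 hashes * 64 ticks = 4,000,000 hashes per slot, or
        # 10,000,000 hashes per second.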

        # The count of accounts to pre-fund with SOL in the genesis
        # block.  Useful for benchmarking and development.  The specific
        # accounts that will be funded are those with private keys of
        # 0, 1, 2, ..., N.
        fund_initial_accounts = 1024

        # The amount of SOL to pre-fund each account with.
        fund_initial_amount_lamports = 50000000000000
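
        # (One SOL is 1,000,000,000 lamports, so the default above funds
        # each account with 50,000 SOL.)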

        # The number of lamports to stake on the voting account of the
        # validator that starts up from the genesis.  Genesis creation
        # will fund the staking account passed to it with this amount
        # and then stake it on the voting account.  Note that the voting
        # account key used in the genesis needs to be the same as the
        # one that is used by the bootstrap validator.
        vote_account_stake_lamports = 500000000
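
        # (The default above corresponds to 0.5 SOL of stake.)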

        # Setting warmup epochs to true will allow shorter epochs towards
        # the beginning of the cluster.  This allows for faster stake
        # activation.  The first epoch will be 32 slots long and the
        # duration of each subsequent epoch will be double that of the
        # one before it until it reaches the desired epoch duration of the
        # cluster.
        warmup_epochs = false

    [development.bench]
        # How many benchg tiles to run when benchmarking.  benchg tiles
        # are responsible for generating and signing outgoing
        # transactions to the validator, which is computationally
        # expensive.
        benchg_tile_count = 4

        # How many benchs tiles to run when benchmarking.  benchs tiles
        # are responsible for sending transactions to the validator, for
        # example by calling send() on a socket.  On loopback, a single
        # benchs tile can send around 320k packets a second.
        benchs_tile_count = 2
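
        # (With the default of two benchs tiles, loopback send
        # throughput therefore tops out at roughly 2 * 320k = 640k
        # packets per second.)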

        # Which cores to run the benchmarking tiles on.  By default the
        # cores will be floating, but to get good performance
        # measurements on your machine, you should create a topology
        # where these generators get their own cores.
        #
        # The tiles included in this affinity are,
        #
        #      bencho, benchg1, ..., benchgN, benchs1, ..., benchsN
        #
        # If the [layout.affinity] above is set to "auto" then this
        # value must also be set to "auto" and it will be determined
        # automatically.
        affinity = "auto"

        # Solana has a hard-coded maximum CU limit per block of
        # 48,000,000, which works out to around 80,000 transfers a
        # second, since each transfer consumes about 1,500 CUs.  When
        # benchmarking, this can be the limiting bottleneck of the
        # system, so this option is provided to raise the limit.  If set
        # to true, the limit will be lifted to 624,000,000, for a little
        # over 1 million transfers per second.
        #
        # This option should not be used in production, as it would
        # cause the validator to diverge from consensus with the rest
        # of the network.
        larger_max_cost_per_block = false
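
        # (As a rough check: 48,000,000 CUs / ~1,500 CUs per transfer is
        # about 32,000 transfers per 400 ms block, or about 80,000 per
        # second; 624,000,000 CUs gives about 416,000 per block, or a
        # little over 1,000,000 per second.)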

        # Solana has a consensus-agreed-upon limit of 32,768 data and
        # parity shreds per block.  This limit prevents a malicious (or
        # mistaken) validator from slowing down the network by producing
        # a huge block.  When benchmarking, this limit can be the
        # bottleneck of the whole system, so this option is provided to
        # raise the limit.  If set to true, the limit will be raised to
        # 131,072 data and parity shreds per block, for about 1,200,000
        # small transactions per second.
        #
        # This option should not be used in a production network, since
        # it would cause consensus violations between the validator and
        # the rest of the network.
        larger_shred_limits_per_block = false

        # Frankendancer currently depends on the Agave blockstore
        # component for storing block data to disk.  This component can
        # frequently be a bottleneck when testing the throughput of the
        # leader pipeline, so this option is provided to disable it
        # starting from a certain slot.
        #
        # This option should not be used in a production network; it
        # leaves the validator unable to serve repair requests or
        # snapshots, or to participate in other consensus-critical
        # operations.  It is only useful for benchmarking the leader
        # TPU performance in a single node cluster.
        #
        # A value of 0 means this option will not be used.  A value of 1
        # disables the blockstore entirely.  A common use case for any
        # other positive value is to create a snapshot before disabling
        # the blockstore.  This is useful in cases when benchmarking an
        # entire cluster.  The leader needs to create a first snapshot
        # that the followers need to fetch in order to join the cluster.
        # In such a case, it is useful to set this value to the same
        # number as [snapshots.full_snapshot_interval_slots].
        disable_blockstore_from_slot = 0

        # Frankendancer currently depends on the Agave status cache to
        # prevent double-spend attacks.  The data structure that backs
        # the Agave status cache frequently causes lots of page faults
        # and contention during benchmarking.  It severely limits
        # banking stage scalability, so this option is provided to
        # disable it.
        #
        # This option should not be used in a production network.  If
        # set to true, Frankendancer will not be able to identify a
        # transaction it has received as a duplicate if it occurred in
        # another leader's block, causing it to produce invalid blocks.
        # It is only useful for benchmarking when it is known that
        # duplicate transactions will not be submitted and the validator
        # with this option enabled will always be leader.
        disable_status_cache = false