RSPLIB
The Reliable Server Pooling Implementation

https://www.nntb.no/~dreibh/rserpool
Reliable Server Pooling (RSerPool) is the new IETF framework for server pool management and session failover handling. In particular, it can be used for realising highly available services and load distribution. RSPLIB is the reference implementation of RSerPool. It includes:
- The library librsplib, which is the RSerPool implementation itself;
- The library libcpprspserver, which is a C++ wrapper library to easily write server applications based on librsplib;
- A collection of server (pool element) and client (pool user) examples.
The development and standardisation of an application-independent server pooling architecture has been set as the goal of the IETF RSerPool WG. As a result, the working group has created its concept of Reliable Server Pooling (RSerPool), which currently consists of eight RFCs, several Internet Drafts and RSPLIB as the reference implementation.
As key requirements for the Reliable Server Pooling architecture, the following points have been identified in RFC 3237:
- Lightweight: The RSerPool solution must not require a significant amount of resources (e.g. CPU power or memory). In particular, it should be possible to realise RSerPool-based systems also on low-power devices like mobile phones, PDAs and embedded devices.
- Real-Time: Real-time services like telephone signalling have very strict limitations on the duration of failovers. In the case of component failures, it may be necessary that a "normal" system state is re-established within just a few hundred milliseconds. In telephone signalling, such a feature is particularly crucial when dealing with emergency calls.
- Scalability: Services like distributed computing require managing pools of many hundreds or even thousands of servers (e.g. animation rendering pools). The RSerPool architecture must be able to handle such pools efficiently. However, the number and size of pools are limited to a single company or organization; in particular, it is not a goal of RSerPool to handle the global Internet in one pool set.
- Extensibility: It must be possible to easily adapt the RSerPool architecture to future applications. In particular, this means the possibility to add new server selection procedures. That is, new applications can define special rules on which server of the pool is the most appropriate to use for the processing of a request (e.g. the least-used server). The configuration effort of RSerPool components (e.g. adding or removing servers) should be as small as possible. In the ideal case, configuration happens automatically, i.e. it should only be necessary to turn on a new server, and it will configure itself automatically.
The figure above shows the building blocks of the RSerPool architecture, which has been defined by the IETF RSerPool WG in RFC 5351. In the terminology of RSerPool, a server is denoted as a Pool Element (PE). Within its pool, it is identified by its Pool Element Identifier (PE ID), a 32-bit number. The PE ID is randomly chosen upon a PE's registration into its pool. The set of all pools is denoted as the Handlespace. In older literature, it may be denoted as Namespace; this denomination has been dropped in order to avoid confusion with the Domain Name System (DNS). Each pool in a handlespace is identified by a unique Pool Handle (PH), which is represented by an arbitrary byte vector. Usually, this is an ASCII or Unicode representation of the pool's name, e.g. "Compute Pool" or "Web Server Pool".
Each handlespace has a certain scope (e.g. an organization or company), which is denoted as Operation Scope. It is an explicit non-goal of RSerPool to manage the global Internet's pools within a single handlespace. Due to the limitation of operation scopes, it is possible to keep the handlespace "flat". That is, PHs do not have any hierarchy in contrast to the DNS with its top-level and sub-domains. This constraint results in a significant simplification of the handlespace management.
Within an operation scope, the handlespace is managed by redundant Registrars (PR). In the literature, this component is also denoted as ENRP Server or Name Server; since "registrar" is the most expressive term, it is used here. PRs have to be redundant in order to prevent a PR from becoming a single point of failure (SPoF). Each PR of an operation scope is identified by its Registrar ID (PR ID), which is a 32-bit random number. It is not necessary to ensure the uniqueness of PR IDs. A PR contains a complete copy of the operation scope's handlespace. The PRs of an operation scope synchronize their view of the handlespace using the Endpoint HaNdlespace Redundancy Protocol (ENRP), defined in RFC 5353. Older versions of this protocol use the term Endpoint Namespace Redundancy Protocol; this naming has been replaced to avoid confusion with DNS, but the abbreviation has been kept. Due to the handlespace synchronization by ENRP, the PRs of an operation scope are functionally equal. That is, if any of the PRs fails, each other PR is able to seamlessly replace it.
By using the Aggregate Server Access Protocol (ASAP), defined in RFC 5352, a PE can add itself to or remove itself from a pool by requesting a registration or deregistration at an arbitrary PR of the operation scope. Upon successful registration, the PR chosen for registration becomes the PE's Home-PR (PR-H). A PR-H not only informs the other PRs of the operation scope about the registration or deregistration of its PEs, it also monitors the availability of its PEs by ASAP Keep-Alive messages. A keep-alive message by a PR-H has to be acknowledged by the PE within a certain time interval. If the PE fails to answer within the given timeout, it is assumed to be dead and is immediately removed from the handlespace. Furthermore, a PE is expected to re-register regularly. At a re-registration, it is also possible for the PE to change its list of transport addresses or its policy information (to be explained later).
To use the service of a pool, a client – called Pool User (PU) in RSerPool terminology – first has to request the resolution of the pool's PH into a list of PE identities at an arbitrary PR of the operation scope. This selection procedure is denoted as Handle Resolution. If the requested pool exists, the PR selects a list of PE identities according to the pool's Pool Member Selection Policy, also simply denoted as Pool Policy. RFC 5356 defines some standard pool policies.
Possible pool policies are e.g. a random selection (Random) or the least-loaded PE (Least Used). While the first case does not require any selection information (PEs are selected randomly), the second case requires up-to-date load information in order to select the least-loaded PE. By using an appropriate selection policy, it is e.g. possible to equally distribute the request load onto the pool's PEs.
After reception of a list of PE identities from a PR, a PU writes the PE information into its local cache, denoted as the PU-side Cache. From this cache, the PU selects exactly one PE – again by applying the pool's selection policy – and establishes a connection to it using the application's protocol, e.g. HTTP over SCTP or TCP in the case of a web server. Over this connection, the service provided by the server can be used. If the connection establishment fails or the connection is aborted during service usage, a new PE can be selected by repeating the described selection procedure. If the information in the PU-side cache is not outdated, a PE identity may be selected directly from the cache, skipping the effort of asking a PR for handle resolution. After re-establishing a connection with a new PE, the state of the application session has to be re-instantiated on the new PE. The procedure necessary for session resumption is denoted as the failover procedure and is of course application-specific. For an FTP download, for example, the failover procedure could mean telling the new FTP server the file name and the last received data position; the FTP server would then be able to resume the download session. Since the failover procedure is highly application-dependent, it is not part of RSerPool itself, although RSerPool provides far-reaching support for the implementation of arbitrary failover schemes by its Session Layer mechanisms.
To make it possible for RSerPool components to configure themselves automatically, PRs can announce themselves via UDP over IP multicast. These announces can be received by PEs, PUs and other PRs, allowing them to learn the list of PRs currently available in the operation scope. The advantage of using IP multicast instead of broadcast is that this mechanism also works over routers (e.g. LANs connected via a VPN) and that – e.g. in a switched Ethernet – the announces are only heard and processed by stations actually interested in this information. If IP multicast is not available, it is of course possible to statically configure PR addresses.
RSerPool is a completely new protocol framework. To allow existing specialized or proprietary server pooling solutions to migrate iteratively to an RSerPool-based solution, it is mandatory to provide a migration path. For clients without RSerPool support, the RSerPool concept provides the possibility of a Proxy PU (PPU). A PPU handles the requests of non-RSerPool clients and acts as an intermediary between them and the RSerPool-based server pool. From a PE's perspective, PPUs behave like regular PUs. Similarly to a PPU allowing the usage of a non-RSerPool client, it is possible to use a Proxy PE (PPE) to continue using a non-RSerPool server in an RSerPool environment.
The figure above shows the protocol stack of PR, PE and PU. The ENRP protocol is only used for the handlespace synchronization between PRs; all communication between PE and PR (registration, re-registration, deregistration, monitoring) and between PU and PR (handle resolution, failure reporting) is based on the ASAP protocol. The failover support, based on an optional Session Layer between PU and PE, also uses ASAP. In this case, the ASAP protocol data (Control Channel) is multiplexed with the application protocol's data (Data Channel) over the same connection. By using the Session Layer functionality of ASAP, a pool can be viewed as a single, highly available server from the PU's Application Layer perspective. Failure detection and handling mainly happen automatically in the Session Layer, transparently for the Application Layer.
The transport protocol used for RSerPool is usually SCTP, defined in RFC 9260. The important properties of SCTP requiring its usage instead of TCP are the following:
- Multi-homing and path monitoring by Heartbeat messages for improved availability and verification of transport addresses,
- Dynamic Address Reconfiguration (Add-IP, see RFC 5061) to enable mobility and interruption-free address changes (e.g. adding a new network interface for enhanced redundancy),
- Message framing for simplified message handling (especially for the Session Layer),
- Security against blind flooding attacks by 4-way handshake and verification tag, and
- Protocol identification by Payload Protocol Identifier (PPID) for protocol multiplexing (required for the ASAP Session Layer functionality).
For the transport of PR announces by ASAP and ENRP via IP multicast, UDP is used as transport protocol. The usage of SCTP is mandatory for all ENRP communication between PRs and the ASAP communication between PEs and PRs. For the ASAP communication between PU and PR and the Session Layer communication between PE and PU, it is recommended to use SCTP. However, the usage of TCP together with an adaptation layer defined in draft-ietf-rserpool-tcpmapping is possible. This adaptation layer adds functionalities like Heartbeats, message framing and protocol identification on top of a TCP connection. Nevertheless, some important advantages of SCTP are then missing – especially the high immunity against flooding attacks and the multi-homing property. The only meaningful reason to use TCP is when the PU implementation cannot be equipped with an SCTP stack, e.g. when using a proprietary embedded system providing only a TCP stack.
A detailed introduction to RSerPool, including some application scenario examples, can be found in Chapter 3 of «Reliable Server Pooling – Evaluation, Optimization and Extension of a Novel IETF Architecture»!
Please use the issue tracker at https://github.com/dreibh/rsplib/issues to report bugs and issues!
For ready-to-install Ubuntu Linux packages of RSPLIB, see Launchpad PPA for Thomas Dreibholz!
sudo apt-add-repository -sy ppa:dreibh/ppa
sudo apt-get update
sudo apt-get install rsplib
For ready-to-install Fedora Linux packages of RSPLIB, see COPR PPA for Thomas Dreibholz!
sudo dnf copr enable -y dreibh/ppa
sudo dnf install rsplib
RSPLIB is included in the FreeBSD ports collection; for ready-to-install FreeBSD packages, see the FreeBSD ports tree index of net/rsplib/!
pkg install rsplib
Alternatively, to compile it from the ports sources:
cd /usr/ports/net/rsplib
make
make install
RSPLIB is released under the GNU General Public Licence (GPL).
The Git repository of the RSPLIB sources can be found at https://github.com/dreibh/rsplib:
git clone https://github.com/dreibh/rsplib
cd rsplib
cmake .
make
Contributions:
- Issue tracker: https://github.com/dreibh/rsplib/issues. Please submit bug reports, issues, questions, etc. in the issue tracker!
- Pull Requests for RSPLIB: https://github.com/dreibh/rsplib/pulls. Your contributions to RSPLIB are always welcome!
- CI build tests of RSPLIB: https://github.com/dreibh/rsplib/actions.
- Coverity Scan analysis of RSPLIB: https://scan.coverity.com/projects/dreibh-rsplib.
See https://www.nntb.no/~dreibh/rserpool/#current-stable-release for release packages!
You need a configured network interface with:
- at least a private address (192.168.x.y; 10.a.b.c; 172.16.n.m - 172.31.i.j)
- having the multicast flag set (e.g. sudo ifconfig <dev> multicast)
In a typical network setup, this should already be configured.
Ensure that your firewall settings allow UDP packets to/from the registrar (ASAP Announce/ENRP Presence), as well as ASAP/ENRP traffic over SCTP.
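For example, with iptables, rules like the following could be used (only a sketch, assuming the default ASAP port 3863 and ENRP port 9901 used in the registrar examples below; adapt them to your firewall setup):
# ASAP over SCTP and ASAP Announces via UDP/multicast:
iptables -A INPUT -p sctp --dport 3863 -j ACCEPT
iptables -A INPUT -p udp --dport 3863 -j ACCEPT
# ENRP over SCTP and ENRP Presence via UDP/multicast:
iptables -A INPUT -p sctp --dport 9901 -j ACCEPT
iptables -A INPUT -p udp --dport 9901 -j ACCEPT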
First, start a registrar:
rspregistrar
See the Registrar section below for registrar parameters.
Next, start a pool element providing the Echo service:
rspserver -echo
You can start multiple pool elements; they may also run on different hosts, of course. If it complains about finding no registrar, check the multicast settings!
Finally, start a pool user terminal:
rspterminal
If it complains about finding no registrar, check the multicast settings! Now, manually enter some text lines on standard input; each line is sent to a PE of the echo pool and echoed back.
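If IP multicast is not usable in your setup, you can instead point the programs to a statically configured registrar using the -registrar option described below (the address is only an example; 3863 is the ASAP port used in the registrar examples below):
rspserver -echo -registrar=192.168.11.5:3863
rspterminal -registrar=192.168.11.5:3863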
If everything works, you can test RSerPool functionality by stopping the pool element and watching the failover.
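For example, a minimal failover test could look like this (assuming a registrar is already running and reachable):
rspserver -echo &
rspserver -echo &
rspterminal
# Enter some text lines, then terminate one of the two pool elements (e.g. with kill).
# rspterminal should fail over to the remaining PE and continue echoing the input.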
You can monitor the status of each component using the Component Status Protocol (CSP) monitor cspmonitor. Simply start it by running cspmonitor; it will listen for status messages sent via UDP on port 2960. The components (rspregistrar, rspserver, etc.) accept the command line arguments -cspserver=<server>:<port> and -cspinterval=<milliseconds>. For example, if you want a status update every 300 ms and your CSP monitor is listening on port 2960 of host 192.168.11.22, use the arguments:
... -cspserver=192.168.11.22:2960 -cspinterval=300
Note: You must specify address and interval, otherwise no messages are sent.
You can use Wireshark to observe the RSerPool and demo protocol traffic. Coloring rules and filters can be found in the directory rsplib/src/wireshark. Simply copy colorfilters, dfilters and optionally preferences to $HOME/.wireshark. Dissectors for the RSerPool and application protocols are already included in recent Wireshark distributions!
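For example, assuming the RSPLIB sources have been cloned into the current directory:
mkdir -p ~/.wireshark
cp rsplib/src/wireshark/colorfilters rsplib/src/wireshark/dfilters ~/.wireshark/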
All example PE services can be started using the rspserver program:
rspserver <options> ...
It takes a set of common parameters as well as some service-specific arguments. These parameters are explained in the following.
The following example PE services are provided:
- Echo Service: A simple echo service. The server-side returns the received payload as-is, i.e. echoes it.
- Discard Service: A simple discard service. The server-side just ignores the received payload.
- Daytime Service: A simple daytime service. The server-side responds with the current date and time.
- Character Generator (CharGen) Service: A simple character generator service. The server-side generates test data.
- Ping Pong Service: A simple request-response service.
- Scripting Service: An example workload-offloading service. It is for example used by SimProcTC.
- Fractal Generator Service: The fractal graphics computation service, for testing and illustratively demonstrating RSerPool features. It is also used for the RSerPool Demo Tool.
- Calculation Application (CalcApp) Service: A simulated calculation application, for evaluating load distribution. Details can be found in «Reliable Server Pooling – Evaluation, Optimization and Extension of a Novel IETF Architecture».
Notes:
- For all provided services, the latest version of Wireshark already includes the packet dissectors!
- See the manpage of "rspserver" for further options!
man rspserver
rspserver provides some common options for all services:
- -loglevel=0-9: Sets the logging verbosity from 0 (none) to 9 (very verbose).
- -logcolor=on|off: Turns ANSI colorization of the logging output on or off.
- -logfile=<filename>: Writes logging output to a file (default is stdout).
- -poolhandle=<poolhandle>: Sets the PH to a non-default value; otherwise, the default setting will be the service-specific default (see below).
- -cspserver=<address>:<port>: See Component Status Protocol below.
- -cspinterval=<milliseconds>: See Component Status Protocol below.
- -registrar=<address>:<port>: Adds a static PR entry into the Registrar Table. It is possible to add multiple entries.
- -asapannounce=<address>:<port>: Sets the multicast address and port the ASAP instance listens for ASAP Server Announces on.
- -rereginterval=<milliseconds>: Sets the PE's re-registration interval (in milliseconds).
- -runtime=<seconds>: After the configured amount of seconds, the service is shut down.
- -quiet: Do not print startup and shutdown messages.
- -policy=<policy>: Sets the pool policy and its parameters:
  - Random
  - WeightedRandom:<weight>
  - RoundRobin
  - WeightedRoundRobin:<weight>
  - LeastUsed
  - LeastUsedDegradation:<degradation>
  - ...
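For example, a hypothetical combination of these common options (the registrar address is only an example):
rspserver -policy=WeightedRoundRobin:3 -registrar=192.168.11.5:3863 -loglevel=3 -logfile=pe.log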
-echo: Selects Echo service. The default PH will be "EchoPool".
Note: The Echo Service will be started by default, unless a different service is specified!
Example:
rspserver -echo -poolhandle=MyEchoPool
-discard: Selects Discard service. The default PH will be "DiscardPool".
Example:
rspserver -discard -poolhandle=MyDiscardPool
-daytime: Selects Daytime service. The default PH will be "DaytimePool".
Example:
rspserver -daytime -poolhandle=MyDaytimePool
-chargen: Selects Character Generator service. The default PH will be "CharGenPool".
Example:
rspserver -chargen -poolhandle=MyCharGenPool
-pingpong: Selects Ping Pong service. The default PH will be "PingPongPool".
The Ping Pong service provides further options:
- -pppfailureafter=<number_of_messages>: After the set number of messages, the server will terminate the connection in order to test failovers.
- -pppmaxthreads=<threads>: Sets the maximum number of simultaneous sessions.
Example:
rspserver -pingpong -poolhandle=MyPingPongPool -pppmaxthreads=8
-scripting: Selects Scripting service. The default PH will be "ScriptingPool".
The Scripting Service provides further options:
- -sskeyring=<keyring>: The location of a GnuPG keyring to check the work packages and environments against. If a keyring is specified, only files that pass the validation are accepted.
- -sscachedirectory=<directory>: Sets the environment cache directory.
- -sscachemaxentries=<entries>: Sets the maximum number of environment cache entries.
- -sscachemaxsize=<kibibytes>: Sets the maximum size of the environment cache in kibibytes.
- -sskeepaliveinterval=<milliseconds>: Sets the keep-alive interval in milliseconds.
- -sskeepalivetimeout=<milliseconds>: Sets the keep-alive timeout in milliseconds.
- -sskeeptempdirs: Turns on keeping all temporary files for debugging. Handle with care!
- -ssmaxthreads=<threads>: Sets an upper limit for the number of simultaneous sessions.
- -sstransmittimeout=<milliseconds>: Sets the transmission timeout in milliseconds.
Example:
rspserver -scripting -policy=LeastUsed -ssmaxthreads=4
The Scripting Service is used e.g. by the following open source tools, which provide more detailed examples:
- SimProcTC – A Simulation Processing Tool-Chain for OMNeT++ Simulations: Distributing simulation jobs in a compute pool.
- SCTP and RSerPool – A Practical Exercise: A tutorial to create a simple load distribution setup to run Persistence of Vision Raytracer (POV-Ray) image computations.
-fractal: Selects the Fractal Generator service. The default PH will be "FractalGeneratorPool".
The Fractal Generator service provides further options:
- -fgpcookiemaxtime=<milliseconds>: Send a cookie after the given number of milliseconds.
- -fgpcookiemaxpackets=<number_of_messages>: Send a cookie after the given number of Data messages.
- -fgptransmittimeout=<milliseconds>: Sets the transmit timeout in milliseconds (timeout for rsp_sendmsg()).
- -fgptestmode: Generates a simple test pattern instead of calculating a fractal graphics (useful to conserve CPU power).
- -fgpfailureafter=<number_of_messages>: After the set number of Data messages, the server will terminate the connection in order to test failovers.
- -fgpmaxthreads=<threads>: Sets the maximum number of simultaneous sessions.
Example:
rspserver -fractal -fgpmaxthreads=4
-calcapp: Selects the Calculation Application (CalcApp) service. The default PH will be "CalcAppPool".
Details about the CalcApp service can be found in Chapter 8 of «Reliable Server Pooling – Evaluation, Optimization and Extension of a Novel IETF Architecture»! The CalcApp service provides further options:
- -capcapacity=<calculations_per_second>: Sets the service capacity in calculations per second.
- -capcleanshutdownprobability=<probability>: Sets the probability for sending state cookies to all sessions before shutting down.
- -capcookiemaxcalculations=<calculations>: Sets the cookie interval in calculations.
- -capcookiemaxtime=<seconds>: Sets the cookie interval in seconds.
- -capkeepalivetransmissioninterval=<milliseconds>: Sets the keep-alive transmission interval in milliseconds.
- -capkeepalivetimeoutinterval=<milliseconds>: Sets the keep-alive timeout in milliseconds.
- -capmaxjobs=<max_jobs>: Sets an upper limit for the number of simultaneous CalcApp requests.
- -capobject=<name>: Sets the object name for the scalar hierarchy.
- -capscalar=<scalar_file>: Sets the name of the scalar output file to write.
- -capvector=<vector_file>: Sets the name of the vector output file to write.
Example:
rspserver -calcapp -capcapacity=2000000 -capmaxjobs=8
The pool user programs provide some common options:
- -loglevel=0-9: Sets the logging verbosity from 0 (none) to 9 (very verbose).
- -logcolor=on|off: Turns ANSI colorization of the logging output on or off.
- -logfile=<filename>: Writes logging output to a file (default is stdout).
- -poolhandle=<poolhandle>: Sets the PH to a non-default value; otherwise, the default setting will be the service-specific default (see below).
- -cspserver=<address>:<port>: See Component Status Protocol below.
- -cspinterval=<milliseconds>: See Component Status Protocol below.
- -registrar=<address>:<port>: Adds a static PR entry into the Registrar Table. It is possible to add multiple entries.
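For example, a pool user with a statically configured registrar and verbose logging (the address is only an example):
rspterminal -registrar=192.168.11.5:3863 -loglevel=4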
The PU for the Echo Service, Discard Service, Daytime Service, or Character Generator Service can be started by:
rspterminal <options> ...
Input from standard input is sent to the PE, and the response is printed to standard output.
Example:
rspterminal -poolhandle=MyDaytimePool
Notes:
- The default PH is EchoPool. Use -poolhandle=<poolhandle> to set a different PH, e.g. "DaytimePool".
- See the manpage of "rspterminal" for further options!
man rspterminal
The PU for the Ping Pong Service can be started by:
pingpongclient
The Ping Pong PU provides further options:
- -interval=<milliseconds>: Sets the Ping interval in milliseconds.
Example:
pingpongclient -poolhandle=MyPingPongPool -interval=333
Note: See the manpage of "pingpongclient" for further options!
man pingpongclient
The PU for the Scripting Service can be started by:
scriptingclient
The Scripting PU provides further options:
- -environment=<file_name>: Sets the name of the environment file to upload to the PE. The PE may cache this environment file, allowing a subsequent upload of the same environment file to be skipped.
- -input=<file_name>: Sets the name of the input file to upload to the PE.
- -output=<file_name>: Sets the name of the output file to write the download from the PE to.
- -quiet: Turns on quiet mode, i.e. only limited information is printed.
- -maxretry=<trials>: Maximum number of retries upon errors on the remote side. The error counter only increments when the remote-side script returns a non-zero error code. When the error limit is reached, the received output file will be downloaded for debugging purposes.
- -retrydelay=<milliseconds>: Sets the retry delay upon failover in milliseconds.
- -runid=<description>: Adds the given description to all log lines of the scripting service PU operation. This can be useful when multiple PUs are running simultaneously.
- -transmittimeout=<milliseconds>: Sets the transmission timeout in milliseconds.
- -keepaliveinterval=<milliseconds>: Sets the keep-alive interval in milliseconds.
- -keepalivetimeout=<milliseconds>: Sets the keep-alive timeout in milliseconds.
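Example (the file names are only placeholders; the actual work package format depends on the remote-side script):
scriptingclient -environment=environment.tar.bz2 -input=input.tar.bz2 -output=output.tar.bz2 -maxretry=3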
To demonstrate the usage of scriptingclient, the script scriptingserviceexample provides a simple example. It just takes an arbitrary ID number as parameter:
scriptingserviceexample 1234
The Scripting Service is used e.g. by the following open source tools, which provide more detailed examples:
- SimProcTC – A Simulation Processing Tool-Chain for OMNeT++ Simulations: Distributing simulation jobs in a compute pool.
- SCTP and RSerPool – A Practical Exercise: A tutorial to create a simple load distribution setup to run Persistence of Vision Raytracer (POV-Ray) image computations.
The PU for the Fractal Generator Service can be started by:
fractalpooluser <options> ...
The Fractal Generator PU provides further options:
- -configdir=<directory>: Sets a directory to look for FGP config files. From all FGP files (pattern: *.fgp) in this directory, random files are selected for the calculation of requests. The .fgp files can be created, read and modified by FractGen.
- -threads=<maximum_number_of_threads>: Sets the number of parallel sessions for the calculation of an image.
- -caption=<title>: Sets the window title.
Example (assuming the .fgp input files are installed under /usr/share/fgpconfig):
fractalpooluser -configdir=/usr/share/fgpconfig -caption="Fractal PU Demo!"
Note: See the manpage of "fractalpooluser" for further options!
man fractalpooluser
The PU for the Calculation Application Service (CalcApp) can be started by:
calcappclient
The CalcApp PU provides further options:
- -jobinterval=<seconds>: Sets the job interval in seconds.
- -jobsize=<calculations>: Sets the job size in calculations.
- -keepalivetransmissioninterval=<milliseconds>: Sets the session keep-alive interval in milliseconds.
- -keepalivetimeoutinterval=<milliseconds>: Sets the session keep-alive timeout in milliseconds.
- -object=<name>: Sets the object name for the scalar hierarchy.
- -runtime=<seconds>: After the configured number of seconds, the service is shut down. Floating-point values (e.g. 30.125) are possible.
- -scalar=<scalar_file>: Sets the name of the scalar output file to write.
- -vector=<vector_file>: Sets the name of the vector output file to write.
Example:
calcappclient -jobinterval=30.125 -jobsize=5000000
Notes:
- Details about the CalcApp service can be found in Chapter 8 of «Reliable Server Pooling – Evaluation, Optimization and Extension of a Novel IETF Architecture»!
- See the manpage of "calcappclient" for further options!
man calcappclient
Start the registrar with:
rspregistrar <options> ...
rspregistrar provides the following options:
- -loglevel=0-9: Sets the logging verbosity from 0 (none) to 9 (very verbose).
- -logcolor=on|off: Turns ANSI colorization of the logging output on or off.
- -logfile=<filename>: Writes logging output to a file (default is stdout).
- -cspserver=<address>:<port>: See Component Status Protocol below.
- -cspinterval=<milliseconds>: See Component Status Protocol below.
- -asap=auto|<address>:<port>[,<address>,...]: Sets the ASAP endpoint address(es). Use "auto" to automatically set it (default). Examples:
  - -asap=auto
  - -asap=1.2.3.4:3863
  - -asap=1.2.3.4:3863,[2000::1:2:3],9.8.7.6
- -asapannounce=auto|<address>:<port>: Sets the multicast address and UDP port to send the ASAP Announces to. Use "auto" for the default. Examples:
  - -asapannounce=auto
  - -asapannounce=239.0.0.1:3863
- -maxbadpereports=<number_of_reports>: Sets the maximum number of ASAP Endpoint Unreachable reports before removing a PE.
- -endpointkeepalivetransmissioninterval=<milliseconds>: Sets the ASAP Endpoint Keep Alive interval.
- -endpointkeepalivetimeoutinterval=<milliseconds>: Sets the ASAP Endpoint Keep Alive timeout.
- -serverannouncecycle=<milliseconds>: Sets the ASAP Announce interval.
- -autoclosetimeout=<seconds>: Sets the SCTP autoclose timeout for idle ASAP associations.
- -minaddressscope=<scope>: Sets the minimum address scope acceptable for registered PEs:
  - loopback: Loopback addresses (only valid on the same node!)
  - site-local: Site-local addresses (e.g. 192.168.1.1, etc.)
  - global: Global addresses
- -quiet: Do not print startup and shutdown messages.
- -enrp=auto|<address>:<port>[,<address>,...]: Sets the ENRP endpoint address(es). Use "auto" to automatically set it (default). Examples:
  - -enrp=auto
  - -enrp=1.2.3.4:9901
  - -enrp=1.2.3.4:9901,[2000::1:2:3],9.8.7.6
- -enrpannounce=auto|<address>:<port>: Sets the multicast address and UDP port to send the ENRP Announces to. Use "auto" for the default. Examples:
  - -enrpannounce=auto
  - -enrpannounce=239.0.0.1:9901
- -peer=<address>:<port>: Adds a static PR entry into the Peer List. It is possible to add multiple entries.
- -peerheartbeatcycle=<milliseconds>: Sets the ENRP peer heartbeat interval.
- -peermaxtimelastheard=<milliseconds>: Sets the ENRP peer max time last heard.
- -peermaxtimenoresponse=<milliseconds>: Sets the ENRP maximum time without response.
- -takeoverexpiryinterval=<milliseconds>: Sets the ENRP takeover timeout.
- -mentorhuntinterval=<milliseconds>: Sets the mentor PR hunt interval.
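For example, a registrar with automatically chosen endpoint addresses and one statically configured peer (the peer address is only an example):
rspregistrar -asap=auto -enrp=auto -peer=192.168.11.5:9901 -loglevel=2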
Note: See the manpage of "rspregistrar" for further options!
man rspregistrar
The Component Status Protocol (CSP) is a simple UDP-based protocol for RSerPool components to send their status to a central monitoring component. A console-based receiver is cspmonitor; it receives the status updates by default on UDP port 2960.
In order to send status information, the registrar as well as all servers and clients described above provide two parameters:
- -cspserver=<address>:<port>: Sets the CSP monitor server's address and port.
- -cspinterval=<milliseconds>: Sets the interval for the CSP status updates in milliseconds.
Note: Both parameters must be provided in order to send status updates!
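For example, to watch a local setup, start the monitor and point the components to it (localhost and the intervals are only examples; 2960 is the default cspmonitor port):
cspmonitor &
rspregistrar -cspserver=127.0.0.1:2960 -cspinterval=1000 &
rspserver -echo -cspserver=127.0.0.1:2960 -cspinterval=300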
RSPLIB and related BibTeX entries can be found in AllReferences.bib!
Dreibholz, Thomas: «Reliable Server Pooling – Evaluation, Optimization and Extension of a Novel IETF Architecture» (PDF, 9080 KiB, 267 pages, 🇬🇧), University of Duisburg-Essen, Faculty of Economics, Institute for Computer Science and Business Information Systems, URN urn:nbn:de:hbz:465-20070308-164527-0, March 7, 2007.
Wikipedia articles on Reliable Server Pooling are available in the following languages:
- 🇧🇦 Bosnian (thanks to Nihad Cosić)
- 🇨🇳 Chinese (thanks to Xing Zhou)
- 🇭🇷 Croatian (thanks to Nihad Cosić)
- 🇬🇧 English
- 🇫🇷 French
- 🇩🇪 German (thanks to Jobin Pulinthanath)
- 🇮🇹 Italian
- 🇳🇴 Norwegian (bokmål)
What about helping Wikipedia by adding an article in your language?
See also:
- Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
- NetPerfMeter – A TCP/MPTCP/UDP/SCTP/DCCP Network Performance Meter Tool
- HiPerConTracer – High-Performance Connectivity Tracer
- TSCTP – An SCTP test tool
- sctplib and socketapi – The User-Space SCTP Library (sctplib) and Socket API Library (socketapi)
- SubNetCalc – An IPv4/IPv6 Subnet Calculator
- System-Tools – Tools for Basic System Management
- Wireshark