R66Cluster-HA

Cluster configuration and high availability considerations

The cluster principle for Waarp R66 is as follows:

  • Have a TCP load balancer that keeps established connections open and balances new connections over a pool of R66 servers, typically electing the server with the fewest open connections (a configuration sketch is given after this list)

  • The IP/port of the load balancer service is the R66 service address associated with the pool of servers

  • If the load balancer is transparent, the IP of the client is visible from the R66 servers in the pool. If the load balancer is not transparent, the IP of the client is replaced by the IP of the load balancer itself, therefore the R66 servers in the pool will not be able to check the IP addresses of the partners.

  • All servers in the pool will have the exact same IDs (SSL and non-SSL), sharing the same database and the same filesystem subset (at least work and out, probably also in, arch and conf). The configuration must set the multiplemonitors option to the total number of servers in the pool (that is, the maximum number that can be available).
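
As an illustration, here is a minimal sketch of such a TCP load balancer configuration, assuming HAProxy is used (any TCP load balancer with equivalent features would do). All names, addresses and ports below are placeholders to adapt to your setup; 6666 is only an example R66 service port.

    # haproxy.cfg (sketch): TCP load balancing towards a pool of R66 servers
    frontend r66_frontend
        mode tcp
        bind 10.0.0.10:6666              # the "R66 service" address seen by partners
        default_backend r66_pool

    backend r66_pool
        mode tcp
        balance leastconn                # elect the server with the fewest open connections
        # a simple TCP port check is enough to detect that an instance is down
        server r66a 10.0.1.1:6666 check
        server r66b 10.0.1.2:6666 check
        server r66c 10.0.1.3:6666 check

In TCP mode, once a connection is established it stays bound to the same backend server for its whole lifetime, which matches the requirement of keeping the R66 connection open between two partners.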

This should provide high availability through the following:

  • load balancing between several R66 servers (thus reducing the risk of "high throughput" side effects)
  • high availability in the sense that if one server of the pool is down, the others will ensure the continuity of the service
  • a restart on disconnection (after a crash or stop of one server in the pool) using the standard transfer restart procedure: the restart may go through another server, but since all servers share at least the database and the work and out directories, the transfer can restart easily

However, a good enough service level could be sustained by simply monitoring the R66 server and forcing its stop/restart according to the results, giving almost the same result as cluster mode.

Indeed, when a server retries a connection, it respects a minimal delay between two attempts, so the delay introduced by monitoring and restarting is around this check delay, or at most twice this delay. For instance, if the timeout/delay is set to 30s, then the restart should occur in less than 60s to be almost equivalent to high availability.

Finally, regarding database access, it is important to note that R66 monitors have a certain level of tolerance to unavailability of the database. As long as the transfer is not finishing, the database updates (which log the current status of the transfer) can be ignored. Whenever the database connection is lost, the server will retry to open new connections when needed. However, when the transfer is finishing (in error or successfully), the R66 server MUST save the status; therefore, if it cannot, it sends back an internal error to its partner and keeps this transfer status as it was the last time it was saved. The next time this transfer is restarted, it will restart from the point saved in the database.

Some specific technical items

Usage of the same database between several R66 servers

When a database is shared among several R66 servers (with different host IDs, i.e. not using the Multiple Monitors support option), the following tables will be totally shared:

  • Host table: all partner definitions, including each server's own, will be shared among all servers. This also implies that the key used to encrypt/decrypt the passwords is the same for all servers sharing the database.

  • Rules table: all rule definitions will be shared. Differences in the actual actions can be obtained through the recv/send parts of the tasks, or by using local variables (see the R66 Task Options) or local scripts.

The following tables, even if shared, will have different entries for each server:

  • Configuration table: the bandwidth limitation will be independent for each server

  • Runner table: each transfer will be owned by one server only. Even if 2 servers are partners for the very same transfer, there will be 2 lines in the database, one for each server (requester and requested).

  • MultipleMonitor table: this table is of no use when the multiple monitor option is not used; when multiple monitor is used with several clusters on the same database, each cluster will act as a single host (so sharing or not sharing accordingly) and one line per cluster will be set up in this table.

Usage of Multiple Monitors support

In order to improve the reliability and scalability of the OpenR66 File Transfer Monitors, we propose a new option that allows the load to be spread behind a load balancer in TCP mode (such as HAProxy) combined with shared storage (such as a simple NAS).

  1. multiplemonitors=1 => No multiple monitors will be supported (single instance) 

  2. multiplemonitors=n => n servers will be used as a single instance to spread the load and increase high availability (see the indicative configuration fragment below)
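
As an indicative illustration, this option is set in the XML configuration file of each R66 server of the pool, together with the shared identity, database and directories. The exact element names and layout depend on the XML schema shipped with your Waarp R66 version, so the fragment below is only a sketch with placeholder values to check against your own configuration.

    <!-- indicative fragment of an R66 server configuration file;
         verify element names against the XML schema of your Waarp R66 version -->
    <config>
      <identity>
        <hostid>myr66cluster</hostid>             <!-- same ID on every server of the pool -->
        <sslhostid>myr66cluster-ssl</sslhostid>   <!-- same SSL ID on every server -->
        <cryptokey>/mnt/nas/r66/conf/cryptokey.des</cryptokey>  <!-- same key on every server -->
      </identity>
      <server>
        <multiplemonitors>3</multiplemonitors>    <!-- number of servers in the pool -->
      </server>
      <directory>
        <serverhome>/mnt/nas/r66</serverhome>     <!-- shared storage, e.g. a NAS -->
        <in>in</in>
        <out>out</out>
        <work>work</work>
        <arch>arch</arch>
        <conf>conf</conf>
      </directory>
      <!-- plus a <db> section pointing to the database shared by the whole pool -->
    </config>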

Note that some specific attention is needed: the IN, OUT and WORK storages must be shared so that any server can act on those files, as must any other storage needed from the beginning of the transfer (pre-task) to the end of the transfer (post-task); and the load balancer must be correctly configured in TCP mode so as to spread the load while maintaining a connection once it is opened between two partners.

The principle is as follows:

  • Put in place a TCP load balancer that maintains an established TCP connection with one server behind it and spreads new connection attempts over the pool of available servers. The algorithm to spread the load could be, for instance, the fewest connections open at that time. A check on the open port can be enough to test the availability of the service. In more complex configurations, one could also implement a Java method that performs a “message” call to the proposed target server in order to test its availability.

  • The IP/port of the load balancer service will be the IP/port of the R66 service; behind it, you will have a set of R66 servers with their own internal IP/port couples (over which the load balancer spreads the load).

  • Note that if the LB is “transparent”, meaning the client's IP is not changed as seen by the real server behind it, the IP check remains possible on that pool of servers. Conversely, if the IP shown to the real R66 server is the one of the LB, the IP check will not be possible. However, note that even if the LB is transparent towards the servers, the client might still see the LB's IP, and not the real R66 server's IP, therefore preventing the IP check on the client side. So particular attention is needed if one wants to enable IP checking while using a LB in front of a pool of R66 servers (see the sketch after this list).

  • All R66 servers behind the LB will share the exact same name (ID), both for non-SSL and SSL, and will also share the same database. They will also have to share the IN, OUT and WORK directories, and probably any other resources needed for the pre, post and error tasks (through a NAS for instance).

  • All R66 servers will have to specify the same multiplemonitor option with the number of servers in the pool.
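
Regarding transparency, and as a sketch only, HAProxy can be made “transparent” towards the pool (so that the R66 servers still see the real client IP) through its source/usesrc mechanism; this typically also requires TPROXY support in the kernel and appropriate routing, and other load balancers have their own equivalents. It addresses the IP check on the pool side only, not the client-side check discussed above.

    # sketch: transparent mode towards the pool, so servers see the real client IP
    backend r66_pool
        mode tcp
        balance leastconn
        source 0.0.0.0 usesrc clientip   # requires TPROXY support and proper routing
        server r66a 10.0.1.1:6666 check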

In theory, this enables the following HA capabilities:

  • Load balancing of all transfers among the several servers forming the cluster (horizontal scalability).

  • A restart on disconnection, even on a crash of the original R66 server, since the new connection will go through the LB algorithm.

Note however that obtaining an HA R66 service does not absolutely require this option to be enabled. Indeed, one could regularly check through a monitoring tool that the service is still responding (using a “message” request for instance), and if not, stop/restart the R66 service accordingly. Since the restart of a request is mostly related to the “timeout” delay, a check repeated roughly at that interval should provide a “clean” HA availability without the complexity of an LB configuration.

One could also mix the two solutions, in order to restart an unresponsive server in the HA pool.