Using a queue system as the backbone makes it easy to distribute work between multiple RPC servers, and easy to replace servers when necessary.
- simple: easy to port
- low latency rather than guaranteed execution
server:

    from redis import StrictRedis

    import redis_rpc

    def func1(arg1, arg2):
        return arg1 + arg2

    redis = StrictRedis.from_url(...)
    srv = redis_rpc.Server(redis, {'f1': func1, 'f2': func2, ...}, prefix)
    srv.serve()
client:

    from redis import StrictRedis

    import redis_rpc

    redis = StrictRedis.from_url(...)
    cli = redis_rpc.Client(redis, prefix)
    print(cli.call('f1', arg1=1, arg2=2))
Use JSON to encode requests and responses.
Each published function is exposed as a Redis queue. Results are sent back via single-use queues, so a client can BLPOP on its result queue and receive the response as soon as it is ready.
Queues:

    {prefix}:{func}:calls
    {prefix}:{func}:result:{call-id}
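These naming conventions can be made concrete with a couple of trivial helpers; the names `call_queue` and `result_queue` are illustrative, the library builds these strings internally:

```python
def call_queue(prefix, func):
    # Queue the server BLPOPs for incoming calls to `func`.
    return '{0}:{1}:calls'.format(prefix, func)

def result_queue(prefix, func, call_id):
    # Single-use queue the client BLPOPs for the result of one call.
    return '{0}:{1}:result:{2}'.format(prefix, func, call_id)
```

For example, `call_queue('myapp', 'f1')` yields `'myapp:f1:calls'`.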
Call message:
    {"id": "{uuid}",  # random, client-assigned
     "ts": "{time-stamp-iso8601}",
     "kw": {"arg1": "value1",
            "arg2": ["some", "list", "of", "values"]}}
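A sketch of how a client might build such a message; `make_call_message` is a hypothetical helper name, not part of the library's public API:

```python
import json
import uuid
from datetime import datetime, timezone

def make_call_message(**kw):
    # Encode one call in the format described above.
    return json.dumps({
        'id': str(uuid.uuid4()),                       # random, client-assigned
        'ts': datetime.now(timezone.utc).isoformat(),  # ISO 8601 timestamp
        'kw': kw,
    })
```

The client would LPUSH this string onto the function's calls queue and then BLPOP on the corresponding result queue.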
Arguments and results may be any JSON-encodable values, including null/None. This is not meant to be a framework that completely ensures interoperability between clients and servers; it is up to the author of the request handlers to make sure the exposed API is usable by the planned clients.
Result message:
    # if successful
    {"ts": "{time-stamp-iso8601}",
     "res": {result-value}}

    # or, if finished with an error
    {"ts": "{time-stamp-iso8601}",
     "err": "{human-readable description of the error}"}
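On the server side, running a handler and wrapping its outcome into one of these two shapes might look like this; a hypothetical sketch, not the library's exact code:

```python
import json
from datetime import datetime, timezone

def make_result_message(handler, kw):
    # Call the handler and encode either a "res" or an "err" message.
    ts = datetime.now(timezone.utc).isoformat()
    try:
        return json.dumps({'ts': ts, 'res': handler(**kw)})
    except Exception as exc:
        return json.dumps({'ts': ts, 'err': str(exc)})
```

Catching all exceptions here keeps one failing call from taking the server down; the error text travels back to the caller instead.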
The server also runs a heartbeat thread. To check whether a server is online,
look for the {prefix}:{server_kind}:{server_id}:alive key in Redis. When the
server goes offline, its key expires.
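The mechanism can be sketched as follows; `heartbeat`, `is_alive`, and the TTL value are assumptions for illustration, the library's own thread and defaults may differ:

```python
import threading

def alive_key(prefix, server_kind, server_id):
    return '{0}:{1}:{2}:alive'.format(prefix, server_kind, server_id)

def heartbeat(redis, prefix, server_kind, server_id, stop, ttl=10):
    # Periodically refresh the alive key with an expiry; once the server
    # stops (or crashes), the key is no longer refreshed and expires.
    key = alive_key(prefix, server_kind, server_id)
    while not stop.is_set():
        redis.set(key, '1', ex=ttl)
        stop.wait(ttl / 2.0)

def is_alive(redis, prefix, server_kind, server_id):
    # A client considers the server online while the key exists.
    return bool(redis.exists(alive_key(prefix, server_kind, server_id)))
```

Refreshing at half the TTL gives each heartbeat a chance to land before the previous one expires, so a healthy server's key never disappears.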