I have a server running with PM2, which lets me use both of the machine's 2 vCPUs to get better Socket.IO performance. This server has 4 GB of RAM.
Since I'm using PM2, I have an Nginx server in front that receives HTTPS/WSS requests and forwards them to port 8080.
I use the configuration below to allow the nodes to communicate correctly, as per the instructions on the Socket.IO website.
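To give an idea of the application side, each PM2 worker is basically a plain Socket.IO server listening on port 8080 behind Nginx. A simplified sketch, not my exact code (the event name is illustrative, and the node-to-node configuration mentioned above is left out):

const { createServer } = require("http");
const { Server } = require("socket.io");

const httpServer = createServer();
const io = new Server(httpServer);

io.on("connection", (socket) => {
  // each PM2 worker is a separate process, so the pid differs per worker
  socket.emit("welcome", process.pid);
});

// Nginx terminates HTTPS/WSS and proxies the traffic to this port
httpServer.listen(8080);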
However, in my current scenario this hasn't been a good solution, especially in terms of scalability.
If something goes wrong on the server, such as running out of RAM or hitting the open-file limit (tuning has already been done), the whole service goes offline. So I'm thinking of using another machine, also with 2 vCPUs, to load-balance and get high availability.
The scenario would be:
Nginx server:
-> Socket.IO Server 01 (2 vCPUs in use)
-> Socket.IO Server 02 (2 vCPUs in use)
But my question is: what do I need to change in my code for this to work?
From what I understand, on the Nginx side I need to add the configuration below.
https://socket.io/docs/v4/using-multiple-nodes/#nginx-configuration

upstream nodes {
    # enable sticky session with either "hash" (uses the complete IP address)
    hash $remote_addr consistent;
    # or "ip_hash" (uses the first three octets of the client IPv4 address, or the entire IPv6 address)
    # ip_hash;
    # or "sticky" (needs commercial subscription)
    # sticky cookie srv_id expires=1h domain=.example.com path=/;

    server app01:8080;
    server app02:8080;
    server app03:8080;
}
I really didn’t understand this part. If I’m going to use multiple servers, do I have to use only one CPU? Or can I keep using all the CPUs available on the machines?
Should I remove Socket.IO’s sticky mechanism?
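Separately from the sticky-session question, my understanding from the same docs page is that once the Socket.IO servers run on two different machines they also need a shared adapter, so that events reach sockets connected to the other machine; the Redis adapter seems to be the usual choice. A rough sketch of what I think each server would run (assuming the official @socket.io/redis-adapter and redis packages; the Redis URL is just a placeholder, not my real setup):

const { createServer } = require("http");
const { Server } = require("socket.io");
const { createClient } = require("redis");
const { createAdapter } = require("@socket.io/redis-adapter");

async function main() {
  const httpServer = createServer();
  const io = new Server(httpServer);

  // both machines point at the same Redis instance (placeholder URL)
  const pubClient = createClient({ url: "redis://redis-host:6379" });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  // broadcasts, rooms, etc. now span both Socket.IO servers
  io.adapter(createAdapter(pubClient, subClient));

  io.on("connection", (socket) => {
    // application logic stays the same
  });

  httpServer.listen(8080);
}

main();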
Is that it? Could someone give me more details?