Fediverse traffic is bursty, and sometimes a remote instance will have a large backlog of Activities to send to your server, each of which involves a POST. This can hammer your instance and overwhelm the backend’s ability to keep up. Nginx provides rate limiting, which can accept POSTs at full speed, queue them, and proxy them through to your backend at whatever rate you specify.
For example, PieFed has a backend which listens on port 5000. Nginx listens on port 443 for POSTs from outside and sends them through to port 5000:
upstream app_server {
    server 127.0.0.1:5000 fail_timeout=0;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name piefed.social www.piefed.social;
    root /var/www/whatever;

    # ssl_certificate and ssl_certificate_key directives omitted for brevity

    location / {
        # Proxy all requests to Gunicorn
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app_server;
        ssi off;
    }
}
To this basic config we need to add rate limiting, using the ‘limit_req_zone’ directive. Google that for further details.
limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;
This allocates 100 MB of shared memory to track per-IP request rates (roughly 16,000 IPv4 address states fit per megabyte, so this is generous) and limits POSTs to 10 per second, per IP address. Adjust as needed. If the sender is using multiple IP addresses the rate limit will be less effective. Put this directive outside your server {} block, in the http {} context.
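For example, a sketch assuming a standard layout where the http {} block lives in /etc/nginx/nginx.conf and your site configs are included from it:

http {
    # Track request rates per client IP in a 100 MB shared-memory zone named "one"
    limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;

    # ...the rest of your http-level config, then your server {} blocks
    include /etc/nginx/sites-enabled/*;
}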
Then after our first location / {} block, add a second one that is a copy of the first except with one additional line (and change it to apply to location /inbox or whatever the inbox URL is for your instance):
location /inbox {
    limit_req zone=one burst=300;
    # limit_req_dry_run on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://app_server;
    ssi off;
}
300 is the maximum number of POSTs that can wait in the queue; beyond that, requests are rejected. You can use limit_req_dry_run to test the rate limiting without actually doing any limiting – watch the Nginx error log for messages while doing a dry run.
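If those messages are hard to spot, the limit_req_log_level directive controls how loudly Nginx logs throttling (I didn’t use it myself, so treat this as a sketch). Rejected requests are logged at the configured level, delayed ones one level lower:

location /inbox {
    limit_req zone=one burst=300;
    limit_req_dry_run on;
    # Log throttling at 'warn' instead of the default 'error'
    limit_req_log_level warn;
    # ...same proxy_* lines as the block above
}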
It’s been a while since I set this up so please let me know if I left anything crucial out or said something misleading.
@piefedadmin @jcrabapple Wouldn’t that mean the sender considers the POST request successful irrespective of what the receiving fediverse server responds later?
Probably, yes.
I find that if a POST fails to be processed I don’t really want the sender to retry anyway – I want them to stop doing it. So if the sender thinks it was successful, that’s usually not the worst thing in the world.
It would be nice if Nginx responded with an HTTP 202 (Accepted, but queued) when a POST was throttled, and it would be nice if sending fediverse software knew what to do with that info. But I expect this is an edge case that hasn’t been dealt with by most.
AFAIK once there are more requests in the queue than the burst value (300 in my config), Nginx starts returning HTTP 503, which will cause a retry attempt on some senders (e.g. Lemmy). Otherwise the request is passed through and the sender gets whatever the backend returns (normally 200).
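One related knob I didn’t use above: the limit_req_status directive changes the code Nginx returns when it rejects a request. As far as I know it only accepts codes in the 400–599 range, so the 202 idea isn’t possible this way, but 429 (Too Many Requests) is at least more descriptive than the default 503:

# Goes inside the location /inbox block, alongside limit_req
limit_req_status 429;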
So if you wanted to be very careful you could set a tiny burst value – or omit the burst parameter entirely, which is the same as zero – so that Nginx returns 503 as soon as the rate limit kicks in.
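A sketch of that stricter variant:

location /inbox {
    # No burst: anything over 10 requests/second from one IP is
    # rejected immediately (503, or whatever limit_req_status says)
    limit_req zone=one;
    # ...same proxy_* lines as before
}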