@fedify How many queues do you use? Is it based on any mathematical rules, like number of users vs. CPU cores, or memory requirements? Do you always spin up a new queue, or do you cap the number and reuse resources as they become available?
@PossiblyMaxMax Great question about our queue implementation! Fedify doesn't actually create separate physical queues, but rather uses a single logical queue where each message contains its own destination information.
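Conceptually, each queued job carries everything needed to deliver it. A purely illustrative shape (not Fedify's actual internal message type) might look like:

```typescript
// Illustrative only, not Fedify's real internal message type: the point is
// that the destination travels with the message, so a single logical queue
// can serve deliveries to any recipient.
interface OutgoingMessage {
  inbox: string;      // the recipient inbox URL this delivery targets
  activity: unknown;  // the serialized ActivityPub activity to send
}
```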
For resource management, we generally rely on the underlying queue implementation (Redis, PostgreSQL, etc.) to handle concurrent processing efficiently. Since version 1.0.0, we've introduced ParallelMessageQueue, which processes multiple messages concurrently with a configurable worker count, usually set close to your CPU core count for IO-bound operations.
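In code, that wiring looks roughly like this (a minimal sketch using the Redis-backed stores from @fedify/redis; the exact constructor signatures may differ slightly in your version):

```typescript
import { createFederation, ParallelMessageQueue } from "@fedify/fedify";
import { RedisKvStore, RedisMessageQueue } from "@fedify/redis";
import { Redis } from "ioredis";

// Wrap the underlying Redis-backed queue so that up to 4 messages are
// processed concurrently; pick a parallelism close to your CPU core count.
const federation = createFederation<void>({
  kv: new RedisKvStore(new Redis()),
  queue: new ParallelMessageQueue(
    new RedisMessageQueue(() => new Redis()),
    4,
  ),
});
```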
We don't spin up new queues dynamically; instead, we focus on making message processing scalable. You can control the parallelism level when using ParallelMessageQueue, and for high-volume instances you can scale horizontally by running multiple worker processes that connect to the same shared queue backend, as sketched below.
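A dedicated worker process might look like the following sketch. It assumes the manuallyStartQueue option and the Federation.startQueue() method for opting into queue consumption; check the docs for the exact names in your Fedify version:

```typescript
// worker.ts: a queue-consuming process. Run as many of these as your
// instance's volume requires; they all share the same Redis-backed queue.
import { createFederation, ParallelMessageQueue } from "@fedify/fedify";
import { RedisKvStore, RedisMessageQueue } from "@fedify/redis";
import { Redis } from "ioredis";

const federation = createFederation<void>({
  kv: new RedisKvStore(new Redis()),
  queue: new ParallelMessageQueue(
    new RedisMessageQueue(() => new Redis()),
    4,
  ),
  manuallyStartQueue: true,  // don't start consuming the queue implicitly
});

// Start consuming queued messages in this process.
await federation.startQueue(undefined);
```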
This approach keeps the architecture simpler while still allowing for good throughput and resource utilization that can scale with your instance size.
If you have a fediverse account, you can quote this note from your own instance. Search https://hollo.social/@fedify/019584c9-dfdd-74e2-9611-5e3711d1b409 on your instance and quote it. (Note that quoting is not supported in Mastodon.)