What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

Got an interesting question today about Fedify's outgoing activity delivery design!

Some users noticed we create separate queue messages for each recipient inbox rather than queuing a single message and handling the splitting later. There's a good reason for this approach.

In the fediverse, server response times vary dramatically: some servers respond quickly, others slowly, and some might be temporarily down. If we processed deliveries in a single task, the entire batch would be held up by the slowest server in the group.

By creating individual queue items for each recipient:

  • Fast servers get messages delivered promptly
  • Slow servers don't delay delivery to others
  • Failed deliveries can be retried independently
  • Your UI remains responsive while deliveries happen in the background
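Conceptually, the per-recipient design can be sketched like this. This is a rough illustration only; the `OutboxMessage` shape and the `enqueueDeliveries` and `queue` names are made up for this sketch and are not Fedify's actual API:

```typescript
// Illustrative sketch: one queue item per recipient inbox.
interface OutboxMessage {
  activity: unknown; // serialized activity payload
  inbox: string;     // a single recipient inbox URL
  attempt: number;   // retry counter, tracked per recipient
}

function enqueueDeliveries(
  queue: { enqueue(msg: OutboxMessage): void },
  activity: unknown,
  inboxes: string[],
): void {
  // One message per inbox: a slow or failing server only delays or
  // retries its own message, never the whole batch.
  for (const inbox of inboxes) {
    queue.enqueue({ activity, inbox, attempt: 0 });
  }
}
```

Because each message carries its own retry counter, a delivery that fails can be re-enqueued on its own schedule without touching the others.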

It's a classic trade-off: we generate more queue messages, but gain better resilience and user experience in return.

This is particularly important in federated networks where server behavior is unpredictable and outside our control. We'd rather optimize for making sure your posts reach their destinations as quickly as possible!

What other aspects of Fedify's design would you like to hear about? Let us know!

A flowchart comparing two approaches to message queue design. The top half shows “Fedify's Current Approach” where a single sendActivity call creates separate messages for each recipient, which are individually queued and processed independently. This results in fast delivery to working recipients while slow servers only affect their own delivery. The bottom half shows an “Alternative Approach” where sendActivity creates a single message with multiple recipients, queued as one item, and processed sequentially. This results in all recipients waiting for each delivery to complete, with slow servers blocking everyone in the queue.

Coming soon in 1.5.0: Smart fan-out for efficient activity delivery!

After getting feedback about our queue design, we're excited to introduce a significant improvement for accounts with large follower counts.

As we discussed in our previous post, Fedify currently creates separate queue messages for each recipient. While this approach offers excellent reliability and individual retry capabilities, it causes performance issues when sending activities to thousands of followers.

Our solution? A new two-stage “fan-out” approach:

  1. When you call Context.sendActivity(), we'll now enqueue just one consolidated message containing your activity payload and recipient list
  2. A background worker then processes this message and re-enqueues individual delivery tasks
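The two stages can be sketched roughly as follows. The `FanoutMessage` and `DeliveryMessage` shapes and the queue objects here are illustrative assumptions, not Fedify's actual internals:

```typescript
// Illustrative sketch of the two-stage fan-out.
interface FanoutMessage {
  activity: unknown;
  inboxes: string[]; // full recipient list, stored once
}

interface DeliveryMessage {
  activity: unknown;
  inbox: string;
}

// Stage 1: the web request enqueues a single consolidated message
// and returns immediately, regardless of recipient count.
function sendActivitySketch(
  fanoutQueue: { enqueue(msg: FanoutMessage): void },
  activity: unknown,
  inboxes: string[],
): void {
  fanoutQueue.enqueue({ activity, inboxes });
}

// Stage 2: a background worker expands the consolidated message
// into per-inbox delivery tasks, off the request path.
function fanoutWorker(
  msg: FanoutMessage,
  outboxQueue: { enqueue(msg: DeliveryMessage): void },
): void {
  for (const inbox of msg.inboxes) {
    outboxQueue.enqueue({ activity: msg.activity, inbox });
  }
}
```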

The benefits are substantial:

  • Context.sendActivity() returns almost instantly, even for massive follower counts
  • Memory usage is dramatically reduced by avoiding payload duplication
  • UI responsiveness improves since web requests complete quickly
  • The same reliability for individual deliveries is maintained

For developers with specific needs, we're adding a fanout option with three settings:

  • "auto" (default): Uses fanout for large recipient lists, direct delivery for small ones
  • "skip": Bypasses fanout when you need different payload per recipient
  • "force": Always uses fanout even with few recipients

// Example with custom fanout setting
await ctx.sendActivity(
  { identifier: "alice" },
  recipients,
  activity,
  { fanout: "skip" }  // Directly enqueues individual messages
);
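For intuition, the "auto" decision might look something like the sketch below. The `shouldFanout` helper and the threshold value are hypothetical, chosen only to illustrate the idea; they are not Fedify's actual logic:

```typescript
// Hypothetical sketch of an "auto" fanout decision.
type FanoutSetting = "auto" | "skip" | "force";

function shouldFanout(
  setting: FanoutSetting,
  recipientCount: number,
  threshold = 100, // assumed cutoff for a "large" recipient list
): boolean {
  switch (setting) {
    case "force":
      return true; // always consolidate into one fan-out message
    case "skip":
      return false; // always enqueue per-recipient messages directly
    case "auto":
      return recipientCount > threshold; // consolidate only at scale
  }
}
```

The trade-off: fan-out adds one extra queue hop, so for a handful of recipients direct enqueueing is cheaper, while for thousands the consolidated message wins.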

This change represents months of performance testing and should make Fedify work beautifully even for extremely popular accounts!

For more details, check out our docs.

What other optimizations would you like to see in future Fedify releases?

Flowchart comparing Fedify's current approach versus the new fan-out approach for activity delivery.

The current approach shows:

1. sendActivity calls create separate messages for each recipient (marked as a response time bottleneck)
2. These individual messages are queued in outbox
3. Messages are processed independently
4. Three delivery outcomes: Recipient 1 (fast delivery), Recipient 2 (fast delivery), and Recipient 3 (slow server)

The fan-out approach shows:

1. sendActivity creates a single message with multiple recipients
2. This single message is queued in fan-out queue (marked as providing quick response)
3. A background worker processes the fan-out message
4. The worker re-enqueues individual messages in outbox
5. These are then processed independently
6. Three delivery outcomes: Recipient 1 (fast delivery), Recipient 2 (fast delivery), and Recipient 3 (slow server)

The diagram highlights how the fan-out approach moves the heavy processing out of the response path, providing faster API response times while maintaining the same delivery characteristics.

I think this is the best writeup I've seen on the (lack of) evidence for the efficacy of nasal sprays as a "layer of protection" against COVID-19.

There is a lot of misinformation out there about nasal sprays preventing COVID-19. Unfortunately, there are no convincing studies showing that nasal sprays prevent COVID-19. The published studies investigating whether or not nasal sprays prevent COVID-19 each have major issues, which I will detail here.

I have a PhD in biochemistry and one of my PhD projects was on COVID-19. The main takeaway of this post is that there is no sound evidence that nasal sprays prevent COVID-19. Thus, nasal sprays should not be used for COVID-19 prevention in place of effective measures such as high-quality well-fitting respirators, ventilation and air purification.

1. As a brief overview, some major issues with these studies include:

  • The test spray, but not the placebo spray, containing ingredients that can cause false-negative COVID-19 tests, combined with no information on the timing between applying the nasal spray and taking nasal/nasopharyngeal swabs for COVID-19 tests. For example, a heparin nasal spray can cause false-negative COVID-19 RT-PCR tests (study A), and carrageenan from vaginal swabs after using carrageenan-containing lube can cause false-negative PCR tests for HPV (study B). If we take the estimate from another paper (study C) that nasal sprays get immediately diluted approximately 1:1 by nasal fluid (when the spray volume in each nostril is 0.100 mL), then the amount of carrageenan in a nasal swab taken immediately after spraying is comparable to that in the undiluted carrageenan samples in experiment 4 of study B, all of which produced false-negative PCR tests for HPV.
  • Lack of a placebo spray, with participants having to seek out the test spray themselves (suggesting they may take more precautions than participants in the study taking no spray, not even a placebo)
  • Lack of sufficient information for reproducibility (especially regarding what is considered a positive versus a negative COVID-19 RT-PCR test result)
  • Lack of testing for asymptomatic/presymptomatic infections (how can we say something prevents COVID-19 if we aren't testing for asymptomatic and presymptomatic COVID-19 infections?)

Commercializing AI really is a huge problem. Right now, large companies' profit model basically depends on keeping a strong lead in model quality and selling B2B services; very few have made B2C work, and the handful that have are just building vertical tools with low barriers to entry and limited returns. Judging from today's Hong Kong stock market, southbound capital is quite sensitive to domestic companies pouring too much money into this area. It also makes you marvel that North American companies really are more efficient at making money, while domestic firms put in more overtime for less efficiency...

Happy Spring Equinox!

Only three months now until the days start getting shorter!

_______

Notes for reply guys

* In northern hemisphere, see bio
** Days opposed to nights, 24 hour days of course remain the same
*** Yes, 20th March this year
**** Don’t change, see you at the Solstice

I've been carrying 1 g of gold in my wallet for quite a while now, and here's why...

If I ever end up in the seriously awkward situation where every payment method and means of communication I have stops working, 1 g of gold is worth at least one regional express train ticket.

I carry it with the worst case in mind, but it also occurred to me that if I ever had to carry this stuff in quantities larger than 1 g, it would get pretty inconvenient, haha.
