What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

The obsession with 'overdiagnosis' is a much bigger problem than overdiagnosis itself

open.substack.com/pub/theneuro

The discourse around the 'problem' of 'overdiagnosis' doesn't hold up at all when you consider the actual evidence. Pushing it anyway risks harming so many

annoying freebsd ipv6 thing:

# ifconfig lo1 create inet6 auto_linklocal -ifdisabled 2001:db8:100::1/128 up

# ping 2001:db8:200::2
(ping uses 2001:db8:100::1 as source address)

# ifconfig lo2 create inet6 auto_linklocal -ifdisabled no_prefer_iface 2001:db8:200::1/128 up

# ping 2001:db8:200::2
(ping now uses 2001:db8:200::1 as source address)

my expectation was that no_prefer_iface would prevent it from choosing addresses on lo2 as source address automatically, but apparently this does not prevent choosing the longest match for the destination :-(

there's 'prefer_source', which seems to be an address-specific attribute, but i don't think i want that since i have both GUA and ULA addresses configured.
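
for reference, prefer_source would presumably be set per address, something like this (untested sketch, reusing the lo1 address from above):

# ifconfig lo1 inet6 2001:db8:100::1/128 prefer_source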

there's also 'anycast', but apparently if you configure an address as anycast, you can't bind to it, which seems a bit useless.

eBay has updated its terms of service and privacy information. The data of all users is to be used for AI training unless they make use of the opt-out.

I have my doubts whether invoking legitimate interests for these purposes is lawful under the GDPR. Those affected should object by April 21 or delete their data if they don't want it in the training set:

accountsettings.ebay.de/ai-pre

ebay.de/help/account/changing-

Use of AI
We may use AI-powered tools and products for various purposes, e.g.:
• to improve and expand our services.
• to offer you new or improved features and products (e.g. AI-assisted listing creation, image background enhancement, and AI assistants).
• to make it easier for you to access information about listings and services, e.g. by offering AI-generated summaries of product reviews.
• to give you a more individual and personalized user experience.
• to provide you with improved customer service, e.g. by providing real-time answers in chats, by analyzing customer service interactions to improve their quality or to better understand customer issues, and by supporting us in answering customer inquiries.
• to support fraud detection and compliance checks and to detect suspicious activity on our websites.
• to analyze data for certain patterns (e.g. for market research purposes).

Development and training of AI
To the extent permitted by applicable law, we use personal data collected from our users to train, test, validate, and fine-tune our own AI models as well as third-party AI models and systems that we use for the purposes described in this privacy notice. This may also include the personal data listed in section 4. Where applicable, we combine our users' personal data with data from external sources (e.g. publicly accessible sources). The use of personal data for the development and training of AI is based on our legitimate interest in achieving the goals listed above under "Use of AI".

AI-related individual rights
You have rights with regard to your personal data. Which rights these are is explained in section 8, "Rights as a data subject", above. Your rights include the right to object to the processing of your personal data for the purposes of the development and training of AI.

You have the right to object to this processing. Your objection will be honored, and we will then immediately cease processing your personal data for the corresponding purposes. You can set your privacy preferences in the settings below. You can return here at any time and change these settings.

Use of personal data for the development and training of AI:
Yes (toggle, on by default)
‘marā beboos’ (مرا ببوس/kiss me), irān, 1957
hasan golnaraghi

Kiss me, kiss me, for the last time,
may God keep you, for I am leaving toward destiny…
Our spring has passed, what is past is past,
and I am off in search of destiny…
Amid the storm, sworn together with the boatmen,
having given up life itself, one must pass through the storms…
In the midnights I have pacts with my beloved,
to kindle fires in the mountains…
Through the black night I journey, along the dark road I pass,
look, O my flower: shed no tears of sorrow into your lap for me…
Beautiful girl, tonight I am your guest,
I will stay beside you until you lay your lips on mine…
Beautiful girl, the flash of your gaze, your innocent tears,
will light up this one night of mine…
Kiss me, kiss me, for the last time,
may God keep you, for I am leaving toward destiny…
Our spring has passed, what is past is past,
and I am off in search of destiny…

Got an interesting question today about Fedify's outgoing activity delivery design!

Some users noticed we create separate queue messages for each recipient inbox rather than queuing a single message and handling the splitting later. There's a good reason for this approach.

In the fediverse, server response times vary dramatically: some respond quickly, others slowly, and some might be temporarily down. If we processed deliveries in a single task, the entire batch would be held up by the slowest server in the group.

By creating individual queue items for each recipient:

  • Fast servers get messages delivered promptly
  • Slow servers don't delay delivery to others
  • Failed deliveries can be retried independently
  • Your UI remains responsive while deliveries happen in the background

It's a classic trade-off: we generate more queue messages, but gain better resilience and user experience in return.
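
To make the trade-off concrete, here is a minimal TypeScript sketch of the per-recipient approach. This is illustrative pseudocode only, not Fedify's actual internals; the OutboxMessage shape and the queue interface are assumptions.

// Illustrative sketch; OutboxMessage and `queue` are hypothetical
// stand-ins, not Fedify APIs.
interface OutboxMessage {
  activity: unknown;  // serialized activity payload
  inboxUrl: string;   // exactly one recipient inbox per queue item
  attempt: number;    // per-recipient retry counter
}

async function enqueueDeliveries(
  queue: { enqueue(msg: OutboxMessage): Promise<void> },
  activity: unknown,
  inboxUrls: string[],
): Promise<void> {
  // One queue item per inbox: a slow or failing server only delays
  // and retries its own item, never the rest of the batch.
  await Promise.all(
    inboxUrls.map((inboxUrl) =>
      queue.enqueue({ activity, inboxUrl, attempt: 0 }),
    ),
  );
}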

This is particularly important in federated networks where server behavior is unpredictable and outside our control. We'd rather optimize for making sure your posts reach their destinations as quickly as possible!

What other aspects of Fedify's design would you like to hear about? Let us know!

A flowchart comparing two approaches to message queue design. The top half shows “Fedify's Current Approach” where a single sendActivity call creates separate messages for each recipient, which are individually queued and processed independently. This results in fast delivery to working recipients while slow servers only affect their own delivery. The bottom half shows an “Alternative Approach” where sendActivity creates a single message with multiple recipients, queued as one item, and processed sequentially. This results in all recipients waiting for each delivery to complete, with slow servers blocking everyone in the queue.

Coming soon in 1.5.0: Smart fan-out for efficient activity delivery!

After getting feedback about our queue design, we're excited to introduce a significant improvement for accounts with large follower counts.

As we discussed in our previous post, Fedify currently creates separate queue messages for each recipient. While this approach offers excellent reliability and individual retry capabilities, it causes performance issues when sending activities to thousands of followers.

Our solution? A new two-stage “fan-out” approach:

  1. When you call Context.sendActivity(), we'll now enqueue just one consolidated message containing your activity payload and recipient list
  2. A background worker then processes this message and re-enqueues individual delivery tasks
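
In rough TypeScript pseudocode, the two stages look something like the following. This is a hedged sketch of the idea only; the message shapes and queue interfaces are assumptions, not Fedify's internal implementation.

// Illustrative sketch; FanoutMessage, DeliveryMessage, and the queue
// objects are hypothetical, not Fedify APIs.
interface FanoutMessage {
  activity: unknown;    // payload stored once, not per recipient
  inboxUrls: string[];  // the full recipient list
}

interface DeliveryMessage {
  activity: unknown;
  inboxUrl: string;     // single recipient, retried independently
}

// Stage 1: called from the web request; enqueues one message and returns.
async function enqueueFanout(
  fanoutQueue: { enqueue(msg: FanoutMessage): Promise<void> },
  msg: FanoutMessage,
): Promise<void> {
  await fanoutQueue.enqueue(msg); // cheap even for huge recipient lists
}

// Stage 2: a background worker splits the message into per-recipient tasks.
async function fanoutWorker(
  outbox: { enqueue(msg: DeliveryMessage): Promise<void> },
  msg: FanoutMessage,
): Promise<void> {
  for (const inboxUrl of msg.inboxUrls) {
    await outbox.enqueue({ activity: msg.activity, inboxUrl });
  }
}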

The benefits are substantial:

  • Context.sendActivity() returns almost instantly, even for massive follower counts
  • Memory usage is dramatically reduced by avoiding payload duplication
  • UI responsiveness improves since web requests complete quickly
  • The same reliability for individual deliveries is maintained

For developers with specific needs, we're adding a fanout option with three settings:

  • "auto" (default): Uses fanout for large recipient lists, direct delivery for small ones
  • "skip": Bypasses fanout when you need different payload per recipient
  • "force": Always uses fanout even with few recipients
// Example with custom fanout setting
await ctx.sendActivity(
  { identifier: "alice" },
  recipients,
  activity,
  { fanout: "skip" }  // Directly enqueues individual messages
);

This change represents months of performance testing and should make Fedify work beautifully even for extremely popular accounts!

For more details, check out our docs.

What other optimizations would you like to see in future Fedify releases?

Flowchart comparing Fedify's current approach versus the new fan-out approach for activity delivery.

The current approach shows:

1. sendActivity calls create separate messages for each recipient (marked as a response time bottleneck)
2. These individual messages are queued in outbox
3. Messages are processed independently
4. Three delivery outcomes: Recipient 1 (fast delivery), Recipient 2 (fast delivery), and Recipient 3 (slow server)

The fan-out approach shows:

1. sendActivity creates a single message with multiple recipients
2. This single message is queued in fan-out queue (marked as providing quick response)
3. A background worker processes the fan-out message
4. The worker re-enqueues individual messages in outbox
5. These are then processed independently
6. Three delivery outcomes: Recipient 1 (fast delivery), Recipient 2 (fast delivery), and Recipient 3 (slow server)

The diagram highlights how the fan-out approach moves the heavy processing out of the response path, providing faster API response times while maintaining the same delivery characteristics.

I think this is the best writeup I've seen on the (lack of) evidence for the efficacy of nasal sprays as a "layer of protection" against COVID-19.

There is a lot of misinformation out there about nasal sprays preventing COVID-19. Unfortunately, there are no convincing studies showing that nasal sprays prevent COVID-19. The published studies investigating whether or not nasal sprays prevent COVID-19 each have major issues, which I will detail here.

I have a PhD in biochemistry and one of my PhD projects was on COVID-19. The main takeaway of this post is that there is no sound evidence that nasal sprays prevent COVID-19. Thus, nasal sprays should not be used for COVID-19 prevention in place of effective measures such as high-quality well-fitting respirators, ventilation and air purification.

1. As a brief overview, some major issues with these studies include:

  • The test spray, but not the placebo spray, containing ingredients that can cause false-negative COVID-19 tests (combined with no information on the timing between applying nasal sprays and taking nasal/nasopharyngeal swabs for COVID-19 tests). Ex: a heparin nasal spray can cause false-negative COVID-19 RT-PCR tests (study A), and carrageenan from vaginal swabs after using carrageenan-containing lube can cause false-negative PCR tests for HPV (study B). If we take the estimate from another paper (study C) that nasal sprays get immediately diluted approximately 1:1 by nasal fluid (when the spray volume in each nostril is 0.100 mL), then the amount of carrageenan in a nasal swab taken immediately after spraying is comparable to that in the undiluted carrageenan samples in experiment 4 of study B. Those samples all produced false-negative PCR tests for HPV.
  • Lack of a placebo spray, with participants having to seek out the test spray themselves (suggesting they may take more precautions than those in the study taking no spray, not even a placebo).
  • Lack of sufficient information for reproducibility (especially regarding what is considered a positive and a negative COVID-19 RT-PCR test result).
  • Lack of testing for asymptomatic/presymptomatic infections (how can we say something prevents COVID-19 if we aren't testing for asymptomatic and presymptomatic COVID-19 infections?)

The commercialization of AI really is a huge problem. Right now, the profit model for the big players basically depends on keeping a strong model lead and selling B2B services; very few have made B2C work, and the handful that have are only building niche vertical tools with no real moat and limited revenue. Judging by today's action in the Hong Kong market, southbound funds are quite sensitive to domestic companies pouring too much money into this area. Still, one has to admit that North American companies are simply more efficient at making money, while domestic ones put in far more overtime for less efficiency...
