What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.


I am just back from an awesome at the

We had three days to work on Federated () Events; you can find our results on this lovely page: fedivents.blog/

Thanks especially to @linos (André Menrath) for his awesome work on the Event Bridge: wordpress.org/plugins/event-br

...and to @heiglandreas (Alerta! Alerta!) for pitching and leading the project!

@johnonolan (John O'Nolan) the team came up with the puppies without any knowledge of your newsletter, so this must really be a thing!


I saw someone ask on LIHKG (Hong Kong's PTT):
"I'm 30 with no relevant background and I want to move into the film industry. I don't mind living poor. Is there a direction I could take?"

The replies were overwhelmingly discouraging. The film industry is no longer just a matter of enduring poverty; often you end up paying out of your own pocket. The industry keeps itself running on the blood, sweat, and passion of young people fresh out of the gate.

It's quite sad, really, that an industry meant to champion its own culture has ended up like this.


Also, Windows builds broke… because:

  error: fields `file_type` and `xattrs` are never read
     --> src\fs\fields.rs:109:9
      |
  108 | pub struct PermissionsPlus {
      |            --------------- fields in this struct
  109 |     pub file_type: Type,
      |         ^^^^^^^^^
  ...
  114 |     pub xattrs: bool,
      |         ^^^^^^
      |
      = note: `PermissionsPlus` has a derived impl for the trait `Clone`, but this is intentionally ignored during dead code analysis
      = note: `-D dead-code` implied by `-D warnings`
      = help: to override `-D warnings` add `#[allow(dead_code)]`
  
  error: could not compile `eza` (bin "eza") due to 1 previous error
  warning: build failed, waiting for other jobs to finish...
  error: could not compile `eza` (bin "eza" test) due to 1 previous error
error: process didn't exit successfully: `\\?\C:\Users\runneradmin\.rustup\toolchains\1.81-x86_64-pc-windows-msvc\bin\cargo.exe test --manifest-path Cargo.toml` (exit code: 101)
Error: Process completed with exit code 1.


RE: https://catgirl.farm/objects/c4685b73-2c64-47c0-b020-0e6182aad514

me: ohh today will be a simple update, I’m a bit lazy :akko_smile2:

also me:

82cf6b93 fix(deps): flake bump 2025-03-20
5904b6cd build(deps): cargo deps 2025-03-20
437625ed build(cargo)!: change MSRV 1.78.0 -> 1.81.0
af4df55a fix(file): remove unnescesarry unsafe blocks for libc major/minor device id
38cd91cb fix(file): unwrap -> expect on libc deviceid calls
e1460336 (HEAD -> update, origin/update) fix(dependabot): formatting issue

:gura_pain_sip:


Less disruptive through traffic, more bike lanes. It's nice seeing positive change for a change (pun intended), however small and local it may be. I'm truly enjoying living on our street now, after enduring our fair share of constant car noise for 11 years.

German language brochure detailing changes to street traffic in our neighbourhood

I simply don't get it. MAGA's argument is that everyone takes advantage of the US, and that the US is poor because of it.

Meanwhile, 8 of the 10 wealthiest companies are US-based, accumulating several trillion dollars and avoiding taxes as much as they can (incl. the head of the DOGE state department).

Still, all the hate is directed at foreign countries, while CEOs are admired as smart for exploiting civilisation.

I know that casting yourself as the victim makes things easier, but are people really that dumb?

0

In 2024-25, the 🇯🇵 girl-group market is a contest between ASOBI SYSTEM, TWICE, 🇰🇷 girl groups, and others.
In the boy-group market, groups from STARTO (the successor to Johnny's), such as Naniwa Danshi and Johnny's WEST, hold the top spots, but the rankings flip whenever 🇰🇷 top-tier idols release albums (SEVENTEEN, Stray Kids, BTS, &TEAM, and so on).


Got an interesting question today about Fedify's outgoing delivery design!

Some users noticed we create separate queue messages for each recipient inbox rather than queuing a single message and handling the splitting later. There's a good reason for this approach.

In the fediverse, server response times vary dramatically: some respond quickly, others slowly, and some might be temporarily down. If we processed deliveries in a single task, the entire batch would be held up by the slowest server in the group.

By creating individual queue items for each recipient:

  • Fast servers get messages delivered promptly
  • Slow servers don't delay delivery to others
  • Failed deliveries can be retried independently
  • Your UI remains responsive while deliveries happen in the background

It's a classic trade-off: we generate more queue messages, but gain better resilience and user experience in return.

This is particularly important in federated networks where server behavior is unpredictable and outside our control. We'd rather optimize for making sure your posts reach their destinations as quickly as possible!
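As a rough illustration, the per-recipient strategy boils down to enqueueing one independent message per inbox. This is only a minimal in-memory sketch; the `OutboxMessage` shape, `queue` array, and `sendActivity` signature here are hypothetical names for illustration, not Fedify's actual internals:

```typescript
// Hypothetical sketch of per-recipient queueing at send time.
// Each recipient inbox gets its own queue item, so a slow or
// failing inbox only delays (or retries) its own delivery.

interface OutboxMessage {
  inbox: string;    // recipient inbox URL
  payload: string;  // serialized activity
  attempts: number; // retry counter, tracked per message
}

const queue: OutboxMessage[] = [];

function sendActivity(payload: string, inboxes: string[]): number {
  for (const inbox of inboxes) {
    queue.push({ inbox, payload, attempts: 0 });
  }
  return queue.length; // number of messages now queued
}
```

Because each item carries its own retry counter, a worker can retry one failed delivery without touching the others.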

What other aspects of Fedify's design would you like to hear about? Let us know!

A flowchart comparing two approaches to message queue design. The top half shows “Fedify's Current Approach” where a single sendActivity call creates separate messages for each recipient, which are individually queued and processed independently. This results in fast delivery to working recipients while slow servers only affect their own delivery. The bottom half shows an “Alternative Approach” where sendActivity creates a single message with multiple recipients, queued as one item, and processed sequentially. This results in all recipients waiting for each delivery to complete, with slow servers blocking everyone in the queue.

Coming soon in 1.5.0: Smart fan-out for efficient activity delivery!

After getting feedback about our queue design, we're excited to introduce a significant improvement for accounts with large follower counts.

As we discussed in our previous post, Fedify currently creates separate queue messages for each recipient. While this approach offers excellent reliability and individual retry capabilities, it causes performance issues when sending activities to thousands of followers.

Our solution? A new two-stage “fan-out” approach:

  1. When you call Context.sendActivity(), we'll now enqueue just one consolidated message containing your activity payload and recipient list
  2. A background worker then processes this message and re-enqueues individual delivery tasks

The benefits are substantial:

  • Context.sendActivity() returns almost instantly, even for massive follower counts
  • Memory usage is dramatically reduced by avoiding payload duplication
  • UI responsiveness improves since web requests complete quickly
  • The same reliability for individual deliveries is maintained

For developers with specific needs, we're adding a fanout option with three settings:

  • "auto" (default): Uses fanout for large recipient lists, direct delivery for small ones
  • "skip": Bypasses fanout when you need different payload per recipient
  • "force": Always uses fanout even with few recipients

// Example with custom fanout setting
await ctx.sendActivity(
  { identifier: "alice" },
  recipients,
  activity,
  { fanout: "skip" }  // Directly enqueues individual messages
);

This change represents months of performance testing and should make Fedify work beautifully even for extremely popular accounts!

For more details, check out our docs.

What other optimizations would you like to see in future Fedify releases?

Flowchart comparing Fedify's current approach versus the new fan-out approach for activity delivery.

The current approach shows:

1. sendActivity calls create separate messages for each recipient (marked as a response time bottleneck)
2. These individual messages are queued in outbox
3. Messages are processed independently
4. Three delivery outcomes: Recipient 1 (fast delivery), Recipient 2 (fast delivery), and Recipient 3 (slow server)

The fan-out approach shows:

1. sendActivity creates a single message with multiple recipients
2. This single message is queued in fan-out queue (marked as providing quick response)
3. A background worker processes the fan-out message
4. The worker re-enqueues individual messages in outbox
5. These are then processed independently
6. Three delivery outcomes: Recipient 1 (fast delivery), Recipient 2 (fast delivery), and Recipient 3 (slow server)

The diagram highlights how the fan-out approach moves the heavy processing out of the response path, providing faster API response times while maintaining the same delivery characteristics.

Short posts marked as sensitive on Mastodon, Misskey, and the like now have their content and attached images blurred on Hackers' Pub. Hovering the mouse cursor over them makes them clearly visible. Note that there is no feature for marking a short post written on Hackers' Pub itself as sensitive (and there probably never will be).

A short post marked as sensitive in the Hackers' Pub timeline; its content is blurred and unreadable. The same post with the mouse cursor hovering over it; the previously blurred content is now clearly visible.


🏕️ my adventures in #selfhosting - day 93 ✨

Thanks to the brilliant advice of @CyberSaloperie (electron tamer in training 🍂), I have found an easy-to-implement, no-sweat solution for my redirect issue. I am about to create a test subdomain with #YunoHost to try it out before I make the real switch (from my current Ghost blog to the new, self-hosted one).

Maybe tomorrow I'll share with you the URL of my self-hosted Ghost blog if you want to try things out? 🙈 I have already imported my existing members, I'm ready to go 🚀

And yes, I'm fully aware the timing of my switch (from a Ghost Pro plan to self-hosted) is odd, considering Ghost Pro accounts are now part of the Fediverse. It's just that I couldn't justify spending so much ($31/month) on a free, non-monetized blog that I am capable of self-hosting. It's 6x the cost of my Debian or Ubuntu VPS.

And I kept getting close to the 1,000-member threshold, which would have increased my monthly payments. Now I don't have to stress out about getting new readers. From what I understand, #ActivityPub followers count as members for Ghost, so someone who gets a sizable following on their federated Ghost site would have to pay more.

I'm sure many people will love this feature (it's fantastic!), but it's not for me. I already have 3 federated WordPress blogs and too many ActivityPub profiles as it is 🙃

#MySoCalledSudoLife
