What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.


💬 Commented on "存在しない年月日を入力しても保存のときにエラーにならない" ("Entering a nonexistent date does not produce an error on save"): u1-liquid "Does an SNS birthday field really need to be restricted to actual 'birthdays'?
Product release dates, fictional characters' birthdays, the due date of a baby yet to be born:
the data people want to put in a 'birthday' field is not limited to the user's own birthday.

Even today there may be people who wish they had been born on August 32nd, and the Numa calendar used in ancient Rome had a 13th month as an intercalary month.
Given that the web UI currently only accepts 'valid' dates anyway,
if someone goes as far as calling the API to set one, I see no reason to prevent it.
"
https://github.com/misskey-dev/misskey/issues/5375#issuecomment-2739675626


I'm glad to announce the release of version 2.74 of snac, the simple, minimalistic ActivityPub instance server written in C. It includes a lot of web UI translations by lovely people and a few minor tweaks and fixes:

Added Spanish (default, Argentina and Uruguay) translation (contributed by gnemmi).

Added Czech translation (contributed by pmjv).

Added Brazilian Portuguese translation (contributed by daltux).

Added Finnish translation (contributed by inz).

Added French translation (contributed by Popolon).

Added Russian translation (contributed by sn4il).

Added Chinese translation (contributed by mistivia).

Added German translation (contributed by zen and Menel).

Added Greek translation (contributed by uhuru).

Added Italian translation (contributed by anzu).

Mastodon API: added support for /api/v1/custom_emojis (contributed by violette).

Improved Undo+Follow logic (contributed by rozenglass).

Reverted (temporarily) the Markdown code that converted text between underscores to italics, because it was causing more problems than it was worth.

Fixed bug in bookmark CSV import.

Don't indent Twitter-like "threads" (i.e. chains of short posts from the same author that are self-replies).

https://comam.es/what-is-snac

If you find snac useful, please consider contributing via LiberaPay: https://liberapay.com/grunfink/




Got an interesting question today about Fedify's outgoing activity delivery design!

Some users noticed we create separate queue messages for each recipient inbox rather than queuing a single message and handling the splitting later. There's a good reason for this approach.

In the fediverse, server response times vary dramatically: some respond quickly, others slowly, and some might be temporarily down. If we processed deliveries in a single task, the entire batch would be held up by the slowest server in the group.

By creating individual queue items for each recipient:

  • Fast servers get messages delivered promptly
  • Slow servers don't delay delivery to others
  • Failed deliveries can be retried independently
  • Your UI remains responsive while deliveries happen in the background

It's a classic trade-off: we generate more queue messages, but gain better resilience and user experience in return.

This is particularly important in federated networks where server behavior is unpredictable and outside our control. We'd rather optimize for making sure your posts reach their destinations as quickly as possible!
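The per-recipient strategy above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Fedify's actual API: `OutboxMessage` and `enqueueDeliveries` are made-up names for this sketch.

```typescript
// Hypothetical sketch of per-recipient enqueueing; names are illustrative,
// not Fedify's real internals.
interface OutboxMessage {
  inboxUrl: string; // one delivery target per queue item
  payload: string;  // serialized activity, duplicated into each item
}

function enqueueDeliveries(
  inboxes: string[],
  payload: string,
  enqueue: (msg: OutboxMessage) => void,
): number {
  // One queue item per recipient inbox: a slow or failing inbox only
  // delays or retries its own item, never the whole batch.
  for (const inboxUrl of inboxes) {
    enqueue({ inboxUrl, payload });
  }
  return inboxes.length; // number of independent queue items created
}
```

The cost of this shape is the payload duplication the sketch makes visible: every item carries its own copy of the activity, which is exactly the trade-off mentioned above.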

What other aspects of Fedify's design would you like to hear about? Let us know!

A flowchart comparing two approaches to message queue design. The top half shows “Fedify's Current Approach” where a single sendActivity call creates separate messages for each recipient, which are individually queued and processed independently. This results in fast delivery to working recipients while slow servers only affect their own delivery. The bottom half shows an “Alternative Approach” where sendActivity creates a single message with multiple recipients, queued as one item, and processed sequentially. This results in all recipients waiting for each delivery to complete, with slow servers blocking everyone in the queue.

Coming soon in 1.5.0: Smart fan-out for efficient activity delivery!

After getting feedback about our queue design, we're excited to introduce a significant improvement for accounts with large follower counts.

As we discussed in our previous post, Fedify currently creates separate queue messages for each recipient. While this approach offers excellent reliability and individual retry capabilities, it causes performance issues when sending activities to thousands of followers.

Our solution? A new two-stage “fan-out” approach:

  1. When you call Context.sendActivity(), we'll now enqueue just one consolidated message containing your activity payload and recipient list
  2. A background worker then processes this message and re-enqueues individual delivery tasks

The benefits are substantial:

  • Context.sendActivity() returns almost instantly, even for massive follower counts
  • Memory usage is dramatically reduced by avoiding payload duplication
  • UI responsiveness improves since web requests complete quickly
  • The same reliability for individual deliveries is maintained

For developers with specific needs, we're adding a fanout option with three settings:

  • "auto" (default): Uses fanout for large recipient lists, direct delivery for small ones
  • "skip": Bypasses fanout when you need different payload per recipient
  • "force": Always uses fanout even with few recipients

// Example with custom fanout setting
await ctx.sendActivity(
  { identifier: "alice" },
  recipients,
  activity,
  { fanout: "skip" }  // Directly enqueues individual messages
);
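The two-stage flow described above might look roughly like the sketch below. This is illustrative only: `FanoutMessage`, `FANOUT_THRESHOLD`, and the queue callbacks are assumptions for the sketch, not Fedify's real internals.

```typescript
// Hypothetical sketch of the two-stage fan-out; names are illustrative.
type Fanout = "auto" | "skip" | "force";

interface FanoutMessage {
  payload: string;   // serialized activity, stored once
  inboxes: string[]; // full recipient list
}

const FANOUT_THRESHOLD = 100; // assumed cutoff for the "auto" setting

// Stage 1: enqueue one consolidated message, or deliver directly.
function sendActivity(
  msg: FanoutMessage,
  fanout: Fanout,
  enqueueFanout: (m: FanoutMessage) => void,
  enqueueDelivery: (inboxUrl: string, payload: string) => void,
): "fanout" | "direct" {
  const useFanout =
    fanout === "force" ||
    (fanout === "auto" && msg.inboxes.length >= FANOUT_THRESHOLD);
  if (useFanout) {
    // Returns almost instantly: one queue item regardless of follower count.
    enqueueFanout(msg);
    return "fanout";
  }
  // Small lists (or "skip"): enqueue individual deliveries right away.
  for (const inbox of msg.inboxes) enqueueDelivery(inbox, msg.payload);
  return "direct";
}

// Stage 2: a background worker expands the consolidated message into
// the same independent per-recipient items as before.
function processFanout(
  msg: FanoutMessage,
  enqueueDelivery: (inboxUrl: string, payload: string) => void,
): void {
  for (const inbox of msg.inboxes) enqueueDelivery(inbox, msg.payload);
}
```

Note how stage 2 re-enqueues the same independent per-recipient items as the current design, so the individual retry behavior is unchanged; only the work done inside the web request shrinks.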

This change represents months of performance testing and should make Fedify work beautifully even for extremely popular accounts!

For more details, check out our docs.

What other optimizations would you like to see in future Fedify releases?

Flowchart comparing Fedify's current approach versus the new fan-out approach for activity delivery.

The current approach shows:

1. sendActivity calls create separate messages for each recipient (marked as a response time bottleneck)
2. These individual messages are queued in outbox
3. Messages are processed independently
4. Three delivery outcomes: Recipient 1 (fast delivery), Recipient 2 (fast delivery), and Recipient 3 (slow server)

The fan-out approach shows:

1. sendActivity creates a single message with multiple recipients
2. This single message is queued in fan-out queue (marked as providing quick response)
3. A background worker processes the fan-out message
4. The worker re-enqueues individual messages in outbox
5. These are then processed independently
6. Three delivery outcomes: Recipient 1 (fast delivery), Recipient 2 (fast delivery), and Recipient 3 (slow server)

The diagram highlights how the fan-out approach moves the heavy processing out of the response path, providing faster API response times while maintaining the same delivery characteristics.

Short posts marked as sensitive on Mastodon, Misskey, and the like now appear blurred on Hackers' Pub, including their attached images. Hovering the mouse cursor over them reveals the content clearly. Note that there is no feature for marking short posts written on Hackers' Pub as sensitive. (And there probably never will be.)

A short post marked as sensitive, shown in the Hackers' Pub timeline; its content is blurred and unreadable. The same post with the mouse cursor hovering over it; the previously blurred content is now clearly visible.

This is made with a blend of medium and full city roasted beans that were roasted 3 days ago.

It did not taste over-extracted (validity of that word aside), so I think I'm literally just seeing off-gassing of CO₂ from brewing beans that are too fresh… Kinda cool.

It also means I should probably learn to order more beans before running out, and perhaps start with roast levels that need less time to rest when I get a shipment.

The dark roast I got yesterday did not have this issue at all, which makes sense: it's a dark roast, and those apparently need 1-5 days of rest, while a medium roast like this needs more like 5-7.

A shot of espresso on a pizza yeast weight, showing 39.43 g of product. The crema has slight bubbles on one of the sides of the shot glass.