What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

RE: techpolicy.social/@joebeone/11

Sad news. Dave Farber was one of a small handful of Internet pioneers who understood early on the inseparable link between technology and policy.

Dave recruited me to U. Penn. He has been an influence on almost everything I’ve worked on in my career.

I tried chatting about "repairable software" with Kevin Kelly, as played by Claude.
Mastodon and e-books come up too.

…Though the questions were leading, so really it was more me getting Claude to say all that.

claude.ai/share/fd3f2e42-8d48-

When the controversy first broke, I wasn't as furious as everyone else in those first moments; instead I felt sad, along with a kind of "sympathy" for 李千娜. Not that I thought she shouldn't be blamed, but I imagined, from my own life experience, whether I might have done the same thing in her place.

Very possibly, because I too was once that ignorant.

It was only when I saw a Twitter friend describe her reaction as "more resignation at the ignorance than anger" that I understood my own "sadness" came from the environment of the past, one where it was so easy to be so ignorant.

Living under the atmosphere of the White Terror, what did it matter that martial law was lifted? Add in all kinds of brainwashing: Taiwanese is a lowly language, mainlanders are so refined, politics is filthy and chaotic, and those tangwai dissidents are terrifying troublemakers waiting to happen.

I don't know when we'll ever fully step out of that shadow. Within just the past six months, someone asked me to my face whether I was still writing things online, and told me to hurry up and delete it all. The person next to me at first talked back for me, saying what's wrong with writing things, and grumbled at them, but in the end, in a smaller and smaller voice, said that I never really write anything anyway…

Very niche question: anyone using Firefox with the LanguageTool extension, have you noticed many freezes since the last update? My Firefox uses 15GB of memory when the extension is enabled, vs 2.5GB when it's not. It also freezes when I write things (which makes sense, since the extension is a proofreading tool)

So: the ruling coalition now holds more than two-thirds of the seats in the House of Representatives, which means a bill can be re-passed even if the House of Councillors rejects it. Amending the constitution, on the other hand, requires a two-thirds majority in both chambers plus a national referendum, and since the coalition doesn't hold even half the seats in the House of Councillors, that threshold is out of reach. That's the situation, I suppose.

Okay enough grumbling about train passes. TOKYO GAME DUNGEON!! It was a ton of fun! Lots of people came to play Hamayumishi! We got lots of wonderful feedback and met some cool folks!!

Then we met up with @wrenchClaus and had really good Chinese food (and I forgot to get pics D'OH you will just have to trust it was incredible) and had a wonderful time!! (Caranha I am sorry I was rambling and incoherent, I was very tired!! 😭)

Anyway I am giving up on JR East for tonight, I am going to sleep!! GN!!

@hongminhee from the point of view of someone who is "maintaining" JSON-LD-processing fedi software and has implemented their own JSON-LD processing library (which is, to my knowledge, the fastest in its programming language), JSON-LD is pure overhead. there is nothing it allows for that can't be done with

1. making fields which take multiple values explicit
2. always using namespaces and letting HTTP compression take care of minimizing the transfer

without JSON-LD, fedi software could use zero-ish-copy deserialization for the majority of their objects (when strings aren't escaped) through tools like serde_json and Cow<str>, or System.Text.Json.JsonDocument, as sketched below. JSON-LD processing effectively mandates a JSON node DOM (in the algorithms as standardized; you may be able to get rid of it with Clever Programming)
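(a rough sketch of what that alternative could look like, with invented property IRIs, not a proposed standard; the zero-copy benefit itself is specific to languages like Rust, so this TypeScript version only shows the structural idea: namespaced keys, multi-valued fields that are always arrays, one synchronous parse)

```typescript
// Hypothetical wire format: every key is a full namespace IRI, and
// multi-valued fields are always arrays, so there is no
// "string or array?" ambiguity for the parser to resolve.
interface PlainNote {
  "https://example.org/ns#id": string;
  "https://example.org/ns#content": string;
  "https://example.org/ns#tag": string[]; // always an array, even for one tag
}

function parseNote(rawBody: string): PlainNote {
  // one synchronous pass: no @context fetch, no expansion algorithm
  return JSON.parse(rawBody) as PlainNote;
}

const note = parseNote(JSON.stringify({
  "https://example.org/ns#id": "https://example.org/notes/1",
  "https://example.org/ns#content": "hello",
  "https://example.org/ns#tag": ["fedi"],
}));
console.log(note["https://example.org/ns#content"]); // "hello"
```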

additionally, due to JSON-LD 1.1 features like @type:@json, you cannot even fetch contexts ahead of running JSON DOM transformations, meaning all JSON-LD code has to be async (in languages that have the concept), potentially losing out on significant optimizations that can't be done in coroutines for various reasons (e.g. C# async methods can't have ref structs; Rust async functions usually require thread safety due to tokio's prevalence, even if they're run in a single-threaded runtime)

this is on top of context processing introducing a network dependency into the deserialization of data, wasting time and data in non-server cases (e.g. activitypub C2S). sure, you can cache individual contexts (see the sketch below), but then the context can change underneath you, desynchronizing your cached context and, in the worst case, opening you up to security vulnerabilities
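(for concreteness, here's roughly what that caching workaround looks like with jsonld.js, i.e. digitalbazaar/jsonld; one possible shape rather than anyone's actual implementation: a documentLoader that memoizes fetched contexts, buying back latency at exactly the staleness cost just described)

```typescript
// @ts-ignore -- jsonld.js ships without bundled type definitions
import jsonld from "jsonld";

// memoize fetched contexts: each @context URL hits the network once,
// then serves from cache -- fast, but the cached copy can drift from
// the live document (the desync/security risk mentioned above)
const cache = new Map<string, Promise<unknown>>();
const nodeLoader = jsonld.documentLoaders.node();

function cachingLoader(url: string) {
  if (!cache.has(url)) {
    cache.set(url, nodeLoader(url));
  }
  return cache.get(url)!;
}

const doc = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Note",
  content: "hello",
};

// still necessarily async: an uncached @context forces a network fetch
const expanded = await jsonld.expand(doc, { documentLoader: cachingLoader });
console.log(expanded);
```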

json-ld is not my favorite part of this protocol

I have deeply mixed feelings about ActivityPub's adoption of JSON-LD, as someone who's spent way too long dealing with it while building Fedify.

Part of me wishes it had never happened. A lot of developers jump into ActivityPub development without really understanding JSON-LD, and honestly, can you blame them? The result is a growing number of implementations producing technically invalid JSON-LD. It works, sort of, because everyone's just pattern-matching against what Mastodon does, but it's not correct. And even developers who do take the time to understand JSON-LD often end up hardcoding their documents anyway, because proper JSON-LD processor libraries simply don't exist for many languages. No safety net, no validation, just vibes and hoping you got the @context right. Naturally, mistakes creep in.
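One concrete way those mistakes bite, sketched here with jsonld.js and a made-up extension property (myCustomFlag is purely illustrative): a term your @context doesn't define isn't an error anywhere; it just silently vanishes when a conformant processor expands the document.

```typescript
// @ts-ignore -- jsonld.js ships without bundled type definitions
import jsonld from "jsonld";

const doc = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Note",
  content: "hello",
  // hypothetical extension property, not defined in the context above
  myCustomFlag: true,
};

// expansion drops any term it can't map to an IRI: myCustomFlag is
// simply gone from the output, with no warning at all
const expanded = await jsonld.expand(doc);
console.log(JSON.stringify(expanded, null, 2));
```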

But then the other part of me thinks: well, we're stuck with JSON-LD now. There's no going back. So wouldn't it be nice if people actually used it properly? Process the documents, normalize them, do the compaction and expansion dance the way the spec intended. That's what Fedify does.
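For anyone who hasn't seen the dance, it looks roughly like this with jsonld.js (an illustration of the general technique, not Fedify's actual internals):

```typescript
// @ts-ignore -- jsonld.js ships without bundled type definitions
import jsonld from "jsonld";

// whatever shape the remote server chose to send...
const received = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Note",
  content: "Hello, fediverse!",
};

// expansion rewrites every term to its full IRI, erasing the sender's
// aliases and context tricks; the result is verbose but unambiguous
const expanded = await jsonld.expand(received);

// compacting against *your* context then yields one predictable shape,
// no matter how the sender spelled the original document
const compacted = await jsonld.compact(expanded, {
  "@context": "https://www.w3.org/ns/activitystreams",
});
console.log(compacted);
```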

Here's the part that really gets to me, though. Because Fedify actually processes JSON-LD correctly, it's more likely to break when talking to implementations that produce malformed documents. From the end user's perspective, Fedify looks like the fragile one. “Why can't I follow this person?” Well, because their server is emitting garbage JSON-LD that happens to work with implementations that just treat it as a regular JSON blob. Every time I get one of these bug reports, I feel a certain injustice. Like being the only person in the group project who actually read the assignment.

To be fair, there are real practical reasons why most people don't bother with proper JSON-LD processing. Implementing a full processor is genuinely a lot of work. It leans on the entire Linked Data stack, which is bigger than most people expect going in. And the performance cost isn't trivial either. Fedify uses some tricks to keep things fast, and I'll be honest, that code isn't my proudest work.

Anyway, none of this is going anywhere. Just me grumbling into the void. If you're building an ActivityPub implementation, maybe consider using a JSON-LD processor if one's available for your language. And if you're not going to, at least test your output against implementations that do.

@hongminhee I'm reading this thread as a relative noob, but what I see again and again: almost no one "properly" implements ActivityPub, largely because it's hard, but also because the spec itself is unclear. Most people who get stuff done have to go off-spec to actually ship.

This seems like a fundamental weakness of the fediverse, and that's disregarding the limitations that come from the base architecture. It seems to pose a mid- to long-term existential threat.

What can we do to help improve things?
