What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

Very niche question: has anyone using Firefox with the LanguageTool extension noticed a lot of freezes since the last update? My Firefox uses 15 GB of memory when the extension is enabled, vs 2.5 GB when it's not. It also freezes when I write things (which makes sense, since the extension is a proofreading tool).

So the ruling coalition now holds more than two-thirds of the seats in the House of Representatives, which means a bill can be re-passed even if the House of Councillors rejects it. A constitutional amendment, on the other hand, requires a two-thirds majority in both chambers plus a national referendum. Since the ruling coalition doesn't even have half the seats in the House of Councillors, they fall short of what's needed there. That's how it works, I suppose.

Okay enough grumbling about train passes. TOKYO GAME DUNGEON!! It was a ton of fun! Lots of people came to play Hamayumishi! We got lots of wonderful feedback and met some cool folks!!

Then we met up with @wrenchClaus and had really good Chinese food (and I forgot to get pics D'OH you will just have to trust it was incredible) and had a wonderful time!! (Caranha I am sorry I was rambling and incoherent, I was very tired!! 😭)

Anyway I am giving up on JR East for tonight, I am going to sleep!! GN!!

@hongminhee from the point of view of someone who is "maintaining" a piece of JSON-LD-processing fedi software and has implemented their own JSON-LD processing library (which is, to my knowledge, the fastest in its programming language), JSON-LD is pure overhead. there is nothing it allows for that can't be done with

1. making fields which take multiple values explicit
2. always using namespaces and letting HTTP compression take care of minimizing the transfer

without JSON-LD, fedi software could use zero-ish-copy deserialization for a majority of their objects (when strings aren't escaped) through tools like serde_json and Cow<str>, or System.Text.Json.JsonDocument. JSON-LD processing effectively mandates a JSON node DOM (in the algorithms as standardized; you may be able to get rid of it with Clever Programming)
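
for illustration, a minimal sketch of what that looks like (invented names, not from any real project; assumes serde with the derive feature plus serde_json, and a producer that already follows points 1 and 2 above):

```rust
use std::borrow::Cow;

use serde::Deserialize;

// Plain-JSON view of an ActivityPub-ish Note; field names are hypothetical,
// picked for illustration rather than taken from any spec or project.
#[derive(Debug, Deserialize)]
struct Note<'a> {
    // &str fields borrow directly from the input buffer and error out if the
    // string contains escape sequences.
    #[serde(rename = "type")]
    kind: &'a str,
    id: &'a str,
    // Cow + #[serde(borrow)] borrows when possible and only allocates when the
    // string does contain escapes.
    #[serde(borrow)]
    content: Cow<'a, str>,
    // Point 1 from the list above: fields that can take multiple values are
    // always arrays, so there is no "string or array of strings" ambiguity.
    #[serde(borrow)]
    to: Vec<&'a str>,
}

fn main() -> serde_json::Result<()> {
    let raw = r#"{
        "type": "Note",
        "id": "https://example.com/notes/1",
        "content": "Hello, fediverse!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"]
    }"#;

    // One pass over the buffer: no context fetching, no expansion, no JSON node DOM.
    let note: Note<'_> = serde_json::from_str(raw)?;
    println!("{note:?}");
    Ok(())
}
```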

additionally, due to JSON-LD 1.1 features like @type:@json, you can't even fetch contexts ahead of running the JSON DOM transformations, meaning all JSON-LD code has to be async (in languages that have the concept), potentially losing out on significant optimizations that can't be had in coroutines for various reasons (e.g. C# async methods can't have ref structs, and Rust async functions usually require thread safety due to tokio's prevalence, even if they're run in a single-threaded runtime)

this is on top of context processing introducing a network dependency into the deserialization of data, wasting time and data in non-server cases (e.g. activitypub C2S). sure, you can cache individual contexts, but then the context can change underneath you, desynchronizing your cached copy and, in the worst case, opening you up to security vulnerabilities
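
a rough sketch of that trade-off, with invented names and assuming reqwest with the json feature: resolving a remote context makes the parsing path async and network-dependent, and a cached copy can silently diverge from whatever the origin now serves:

```rust
use std::collections::HashMap;

// Hypothetical context loader, for illustration only.
#[derive(Default)]
struct ContextCache {
    http: reqwest::Client,
    cached: HashMap<String, serde_json::Value>,
}

impl ContextCache {
    async fn resolve(&mut self, url: &str) -> reqwest::Result<serde_json::Value> {
        if let Some(ctx) = self.cached.get(url) {
            // Fast path, but the document behind `url` may have changed since we
            // cached it, so our interpretation can drift from the sender's.
            return Ok(ctx.clone());
        }
        // Slow path: a network round trip in the middle of parsing an activity.
        let ctx: serde_json::Value = self.http.get(url).send().await?.json().await?;
        self.cached.insert(url.to_owned(), ctx.clone());
        Ok(ctx)
    }
}
```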

json-ld is not my favorite part of this protocol

I have deeply mixed feelings about ActivityPub's adoption of JSON-LD, as someone who's spent way too long dealing with it while building Fedify.

Part of me wishes it had never happened. A lot of developers jump into ActivityPub development without really understanding JSON-LD, and honestly, can you blame them? The result is a growing number of implementations producing technically invalid JSON-LD. It works, sort of, because everyone's just pattern-matching against what Mastodon does, but it's not correct. And even developers who do take the time to understand JSON-LD often end up hardcoding their documents anyway, because proper JSON-LD processor libraries simply don't exist for many languages. No safety net, no validation, just vibes and hoping you got the @context right. Naturally, mistakes creep in.

But then the other part of me thinks: well, we're stuck with JSON-LD now. There's no going back. So wouldn't it be nice if people actually used it properly? Process the documents, normalize them, do the compaction and expansion dance the way the spec intended. That's what Fedify does.
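
For a toy illustration (documents made up for this example, not taken from any implementation): the two bodies below mean the same thing as JSON-LD, since both expand to the identical expanded form under the Activity Streams context, but code that pattern-matches on raw JSON shape sees two different structures.

```rust
use serde_json::json;

fn main() {
    // Both documents are valid JSON-LD and expand to the same expanded form
    // under https://www.w3.org/ns/activitystreams, so a conforming processor
    // treats them as the same object.
    let a = json!({
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "to": "https://www.w3.org/ns/activitystreams#Public",
        "content": "Hello"
    });
    let b = json!({
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": ["Note"],
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "content": "Hello"
    });

    // Shape-matching code, however, sees two different structures.
    assert_ne!(a, b);
    println!("same meaning, different raw JSON: {}", a != b);
}
```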

Here's the part that really gets to me, though. Because Fedify actually processes JSON-LD correctly, it's more likely to break when talking to implementations that produce malformed documents. From the end user's perspective, Fedify looks like the fragile one. “Why can't I follow this person?” Well, because their server is emitting garbage JSON-LD that happens to work with implementations that just treat it as a regular JSON blob. Every time I get one of these bug reports, I feel a certain injustice. Like being the only person in the group project who actually read the assignment.

To be fair, there are real practical reasons why most people don't bother with proper JSON-LD processing. Implementing a full processor is genuinely a lot of work. It leans on the entire Linked Data stack, which is bigger than most people expect going in. And the performance cost isn't trivial either. Fedify uses some tricks to keep things fast, and I'll be honest, that code isn't my proudest work.

Anyway, none of this is going anywhere. Just me grumbling into the void. If you're building an ActivityPub implementation, maybe consider using a JSON-LD processor if one's available for your language. And if you're not going to, at least test your output against implementations that do.

@hongminhee I'm reading this thread as a relative noob, but what I see again and again is that almost no one "properly" implements it, largely because it is hard, but also because the spec itself is unclear. Most people who get stuff done have to go off-spec to actually ship.

This seems like a fundamental weakness of the protocol, and that's disregarding the limitations coming from the base architecture. It seems to pose a mid/long-term existential threat.

What can we do to help improve things?

A question for anyone who knows about libraries: 1) I'd like to know what counts as a "political" reason for rejecting a patron's book purchase request. 2) If books by an incumbent mayor arrive at local libraries just ahead of a local election, there are presumably behind-the-scenes reasons, but is that really not a problem? Whether the mayor donated the books, the library bought them with its own budget, or someone filed a purchase request for them, it still seems political to me. I'd like to get the relevant facts straight before raising the issue with my local library.

RE: https://bsky.app/profile/did:plc:3obgngj5swalbbzl4t7xvny6/post/3medn4vmutc2n

"EU Rules TikTok's Addictive Design Illegal": TikTok's dominance of the short-video market is closely tied to its recommendation algorithm, which can update recommendations within a second of a user's tap, and its infinite scroll keeps users hooked. After a year-long investigation of TikTok, the European Commission announced a preliminary finding that addictive design features such as infinite scroll, autoplay, and its highly personalized recommendation system violate the EU's Digital Services Act (DSA). The Commission holds that addictive design harms minors and vulnerable groups, that TikTok has not taken effective measures to reduce the risks these designs create, and that TikTok needs to change the fundamental design of its service. | solidot.org/story?sid=83502

Some anti-anti-AI arguments are really bordering on ignorant stupidity and show a lack of historical and political awareness.

It is possible to recognize that LLMs work well enough for the average company's software these days AND still not use them for many ethical, political, or personal reasons.

I can also see that a car would make my life easier and get me from A to B faster, and still not own one, for plenty of good reasons.

I can also choose where and when I take a stand and follow my conscience.

One of the fun things about the Fediverse is that I can ask a question I could probably have searched on Google or asked ChatGPT, and instead get a number of people discussing the answer with me. It's nice being able to talk to interesting people.

I'm looking for people who moved to a different country past the age of 45 and found a job there (so not "my company had a branch there and I was transferred" but "I would like to move there and need to find a way") to share their story with me.

I need some reassurance that it's not too late for me to do the thing I would like to do but haven't been able to so far 🫠

Bonus points if moving to this country involved having to learn a new language.

Boosting is greatly appreciated, thank you! 🙏

Edit: Oh wow, I've already gotten so many replies! Thank you all so very much, I'll respond to every single one asap 🥰

Also: I'm not looking for jobs atm, this post was more about "damn did I wait too long and will moving to another country even still be an option for me or am I too old" and wanting to hear some encouraging stories 😅
