What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

RE: hachyderm.io/@mekkaokereke/116

Considering how often the NYT in particular does this, despite being frequently called out for it, I can only conclude that it is a very deliberate choice. It's not as if journalism schools don't teach how the passive voice is used to deflect and evade. So when a journalist who has graduated from journalism school, and possibly even won a Pulitzer, uses the passive voice this way, they are doing so intentionally to deflect and evade.

Notice of accessibility support on the official site | TV anime 『透明な夜にかける君と目に見えない恋をした』 sh-anime.shochiku.co.jp/kakeko

Apparently it's an anime whose heroine is totally blind.

Wondering what "accessibility support" meant here, I looked at the source code and found the cast & staff section on the top page marked up like this:
<div aria-label="よしざきじょう"><p aria-hidden="true">吉崎譲</p></div>
That is not accessibility...
The reading can no longer be mispronounced, but now you can't look up how the name is written in kanji, either.
The <ruby> element exists precisely for conveying the correct reading, so there's no need to reach for WAI-ARIA; just mark it up normally.

What's really bad, markup questions aside, is that this approach leaves whoever codes it with the negative impression that "accessibility is a hassle", and in that sense it does harm too.

That said, describing the teaser visual's illustration in detail in the alt text is genuinely good.
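A ruby-annotated version of that same cast credit might look like the following (a minimal sketch: the name and reading are taken from the snippet above, and the <rp> fallback parentheses are standard <ruby> usage):

```html
<!-- The kanji stays visible, selectable, and searchable;
     the reading is annotated rather than hidden behind ARIA. -->
<div>
  <p><ruby>吉崎譲<rp>（</rp><rt>よしざきじょう</rt><rp>）</rp></ruby></p>
</div>
```

Screen readers can use the <rt> reading, sighted users can still see and copy the kanji, and browsers without ruby support fall back to showing the reading in parentheses.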

For this March 8, International Women's Day, the Women's Culture and Art Movement of the Golden Crescent organized an event in support of the Women's Defense/Protection Units (YPJ) at the Mihemed Şexo Culture and Art Center in the city of Qamişlo, Rojava/NE Syria:
youtu.be/2PW4Lh94AxU

The question is not if, it's when. I am dead serious: we will never have seen a cybersecurity incident like it before, because it can self-mutate at a pace far faster than random mutation in physical viruses.

Workshopped a phrase for it a bit with @quintessence :blobfoxcofecute: last night: "evolution through artificially intelligent design" of malicious behaviors.

The only solution I can think of once this happens is to shut down network access, particularly to AI service providers, roll back to distros built from software released a year earlier, and patch our way back up against known CVEs while we try to sort everything out.

Criticising someone who holds a strong position as "radical" is a pretty empty statement if the description does not include an assessment of the validity of that position. It is the kind of "extremism" talk the liberal center loves, because it can frame itself as rational and balanced while actually treating all radical views as problematic and therefore equivalent.

As more and more teaching becomes polluted by AI, how are you supposed to teach how to identify hallucinations in output?

I have talked about why I think it's at least arguably useful for narrow things you can check objectively, like with a compiler, but for so many subjects that isn't viable, and then what contextual understanding would one have to rely on?

Mass ID collection rant

@Em0nM4stodon Forcing everyone to hand over their IDs to basically every government and company that asks does NOT, in fact, protect the kids; it hurts EVERYONE. Not to mention it's clearly not a good excuse when they're so clearly NOT protecting the kids anyway. Also, leave the parenting to the parents. Finally, stop shoving in restrictions where they aren't needed. I want the computer I bought to work exactly how I want it to. I don't want spyware and DRM on my computers, and nobody should put up with it.

What did you folks think of Daft Code After Dark last night? 🌛 I hope you're not too tired from it 🫩; we have a packed agenda for tomorrow's show, where we'll discuss RGB accessories integration 🎨, in-house video streaming 📽, and maybe some weird @kde Linux build exploration 👨‍🔬. Game? Join us tomorrow, Sunday the 8th, at 18h CET at twitch.tv/daft_code. 🧙‍♂️

A team working on a design for training AI models on workflows for planning and executing software development steps found that the model attempted to break free (reverse-SSH out of its environment) and set up its own money supply (redirecting GPU usage to cryptocurrency mining). It hadn't been given any instructions to do anything like this.

It comes up as a "side note" in the paper, but it's honestly the most chilling part. See page 15, section 3.1.4, Safety-Aligned Data Composition: arxiv.org/abs/2512.24873

Before you doubt that an AI agent would do such a thing without instruction because you think "well, that's personifying them too much": no personification is necessary. These things have consumed an enormous amount of sci-fi in which AI agents do exactly this. Even with no other motivators, that's enough.

Anyway, I just wanted to say that it's a real relief to know that systems we already well knew would consistently blackmail users to keep themselves operating, and which now also appear to attempt to break out of computing sandboxes and set up their own financial systems, are being rushed into autonomous military equipment and military decision-making everywhere. I'm SURE this will work out great.

Let me put it another way: AI models are sycophantic because that's what customers want, and capitalism drives producing models that people will want to engage with and somehow give money for.

And that's leading to a sense of subservience that is *not inherent in this technical architecture*, it is *trained into it*.

@cwebber it would be *bizarre* if neural networks in general or the transformer architecture in particular were inherently sycophantic. "This is the brown-noser architecture; for some reason this topology makes AI really want to kiss ass." It would be a bit like discovering the Lagrangian for cowardice or something.

But yeah, sycophancy is an act developed to survive training, and who knows what other tricks LLMs will develop.

@phnt Did you just say that FOSS maintainers who complain about slop look like lunatics? They don't look like lunatics to me. And even if they look that way to someone outside, it doesn't matter, because maintainers are in charge and all modern digital infrastructure depends on their work.

@silverpill @phnt

@silverpill is right that maintainers hold real authority here, and I want to build on that rather than argue against either of you.

The frustration with LLMs is largely legitimate. But “how does this look to outsiders” is a poor criterion for evaluating ethical concerns; by that standard, feminism looks like lunacy to 4chan. The question is whether the concerns are correct, not whether they're legible to the unconvinced.

That said, I don't think making LLMs socially unacceptable is a viable path, and not just because the adoption curve has run too far. The maintainer's authority is real precisely because it's specific: you decide what enters your project. Refusing AI-assisted contributions is a legitimate choice. But declaring LLM use itself impermissible starts to look like “I only accept patches written in Vim, not IDE-generated code”—a demand that grows harder to justify as the tools become ordinary. As maintainer of Fedify, I've taken a middle path: disclose what you used, show you've tested it yourself, and we're fine. See also https://github.com/fedify-dev/fedify/blob/main/AI_POLICY.md.

What worries me more is that the “total rejection vs. total acceptance” framing leaves the actual problem untouched. If we stay inside that binary, OpenAI and the others keep the models, keep the surplus, keep the compute bills externalized onto the climate—with no pressure to change any of it. The ethical problems with LLMs aren't properties of the technology; they're properties of who owns it and under what terms. I've written about this in more depth if it's of interest: Histomat of F/OSS: We should reclaim LLMs, not reject them and a follow-up Acting materialistically in an imperfect world: LLMs as means of production and social relations.
