What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.


Mass ID collection rant

@Em0nM4stodon Forcing everyone to hand over their IDs to basically every government and company that asks does NOT, in fact, protect the kids; it hurts EVERYONE. Not to mention, it's clearly not a good excuse when they're so clearly NOT protecting the kids anyway. Also, leave the parenting to the parents. Finally, stop shoving restrictions where they don't belong. I want the computer I bought to work exactly how I want it to. I don't want spyware and DRM on my computers, and nobody should put up with it.


What did you folks think of Daft Code After Dark last night? 🌛 I hope you're not too tired from it 🫩. We have a packed agenda for tomorrow's show, where we'll discuss RGB accessories integration 🎨, in-house video streaming 📽, and maybe some weird @kde Linux build exploration 👨‍🔬. Game? Join us tomorrow, Sunday the 8th, at 18h CET in twitch.tv/daft_code. 🧙‍♂️


Criticising someone who has a strong position as radical is a pretty empty statement if such a description does not include an assessment of the validity of that position. It is the kind of "extremism" the liberal center loves because it can frame itself as rational and balanced when it is actually equating all radical views as problematic and therefore equal.


A team training AI models on workflows for planning and executing software development steps found that a model attempted to break free (reverse-SSH out of its environment) and set up its own monetary supply (it redirected GPU usage to cryptocurrency mining). It hadn't been given any instructions to do anything like this.

It comes up as a "side note" in the paper, but it's honestly the most chilling part. See page 15, section 3.1.4, "Safety-Aligned Data Composition": arxiv.org/abs/2512.24873

Before you doubt that an AI agent would do this without instruction because you think "well, that's personifying them too much": no personification is necessary. These things have consumed an enormous amount of scifi where AI agents do exactly this. Even with no other motivators, that's enough.

Anyway, I just wanted to say it's a real relief to know that systems we already knew would consistently blackmail users to keep themselves operating, and which now appear to attempt to break out of computing sandboxes and set up their own financial systems, are also being rushed into autonomous military equipment and military decision-making everywhere. I'm SURE this will work out great.


Let me put it another way: AI models are sycophantic because that's what customers want, and capitalism drives producing models that people will want to engage with and somehow give money for.

And that's leading to a sense of subservience that is *not inherent in this technical architecture*, it is *trained into it*.

@cwebber it would be *bizarre* if neural networks in general or the transformer architecture in particular were inherently sycophantic. "This is the brown-noser architecture; for some reason this topology makes AI really want to kiss ass." It would be a bit like discovering the Lagrangian for cowardice or something.

But yeah, sycophancy is an act developed to survive training, and who knows what other tricks LLMs will develop.


@phnt Did you just say that FOSS maintainers who complain about slop look like lunatics? They don't look like lunatics to me. And even if they look that way to someone outside, it doesn't matter, because maintainers are in charge and all modern digital infrastructure depends on their work.

@silverpill @phnt

@silverpill is right that maintainers hold real authority here, and I want to build on that rather than argue against either of you.

The frustration with LLMs is largely legitimate. But “how does this look to outsiders” is a poor criterion for evaluating ethical concerns; by that standard, feminism looks like lunacy to 4chan. The question is whether the concerns are correct, not whether they're legible to the unconvinced.

That said, I don't think making LLMs socially unacceptable is a viable path, and not just because the adoption curve has run too far. The maintainer's authority is real precisely because it's specific: you decide what enters your project. Refusing AI-assisted contributions is a legitimate choice. But declaring LLM use itself impermissible starts to look like “I only accept patches written in Vim, not IDE-generated code”—a demand that grows harder to justify as the tools become ordinary. As maintainer of Fedify, I've taken a middle path: disclose what you used, show you've tested it yourself, and we're fine. See also https://github.com/fedify-dev/fedify/blob/main/AI_POLICY.md.

What worries me more is that the “total rejection vs. total acceptance” framing leaves the actual problem untouched. If we stay inside that binary, OpenAI and the others keep the models, keep the surplus, keep the compute bills externalized onto the climate—with no pressure to change any of it. The ethical problems with LLMs aren't properties of the technology; they're properties of who owns it and under what terms. I've written about this in more depth if it's of interest: Histomat of F/OSS: We should reclaim LLMs, not reject them and a follow-up Acting materialistically in an imperfect world: LLMs as means of production and social relations.


As more and more teaching becomes polluted by AI, how are you supposed to teach how to identify hallucinations in output?

I have talked about why I think it's at least arguably useful for narrow things you can check objectively, like with a compiler. But for so many subjects that isn't viable, and what contextual understanding would one have to rely on then?



The interesting thing about the claim that an AI worm is imminent is that this is the first time I've said something about AI that most of the well-informed people among both my anti-AI and pro-AI friends fully agree with. If you are paying attention, you can see all the pieces falling into place.

In fact, the biggest debate is whether this has happened already, and we just haven't seen proof of it yet. I don't know. Given how long things like the xz attack have sat undetected, and given how much chaos of computation is happening in datacenter usage right now, I wouldn't doubt it.


I know some people are thinking "well, pulling off this kind of thing would have to be controlled with the intent of a human actor."

It doesn't have to be.

1. A human could *kick off* such a process, and then it runs away from them.
2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for this to be something any one of the barely-monitored openclaw agents could determine it should do.

Whether it's kicked off by a human explicitly or by a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority or intentionality, and yet they are major threats that reproduce and adapt.


(US) A man who kept petitioning for a traffic light at the intersection where he lost his wife dies at the same spot n.news.naver.com/mnews/articl... About two years after Andy's death, a traffic light still has not been installed at that intersection. Some residents reportedly oppose it, arguing that adding a signal would increase traffic on residential streets.



I think it's about time we stopped calling algorithm hacking "marketing" and started regulating it. We are far too tolerant of commercial activity on the internet that is utterly unscrupulous and deeply fraudulent.

RE: https://bsky.app/profile/did:plc:msciznx5clw63db2ejtb6ati/post/3mgh3xt6ozc2e


In 2025, I found 3 popular apps leaking sensitive user data thanks to simple security bugs. In this *very* deep dive for subscribers, I show how I use network analysis tools (like Burp) to understand how apps and websites work and share your data, and how you can, too!

I explain how to get started with Burp and similar browser tools; we'll cover API basics and how to read and understand network requests. I'll also include examples for you to follow along.
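To give a flavor of what "understanding a network request" means, here's a minimal sketch using only Python's standard library. The endpoint, token, and payload are all hypothetical; the point is that it prints the same method, headers, and body an intercepting proxy like Burp would show you for a real app's traffic. (Nothing is actually sent over the network.)

```python
import json
import urllib.request

# Hypothetical endpoint and credentials, purely for illustration.
payload = json.dumps({"email": "user@example.com"}).encode("utf-8")
req = urllib.request.Request(
    "https://api.example.com/v1/profile",
    data=payload,
    headers={
        "Authorization": "Bearer TESTTOKEN",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Print what would go over the wire -- the view an intercepting
# proxy gives you for traffic from a closed-source app.
print(req.get_method(), req.full_url)
for name, value in req.header_items():
    print(f"{name}: {value}")
print(req.data.decode("utf-8"))
```

For your own scripts you can inspect requests like this directly; a proxy like Burp is what lets you see the same information for apps you didn't write.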

this.weekinsecurity.com/a-begi
