What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.
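Since the site speaks ActivityPub, following someone across instances boils down to servers exchanging small ActivityStreams JSON documents. A minimal sketch of what a Follow activity looks like on the wire (both actor URLs here are hypothetical examples, not real accounts):

```python
import json

# A minimal ActivityStreams "Follow" activity: the JSON object an
# ActivityPub server delivers to another server's inbox when one
# account follows another. Actor URLs are hypothetical.
follow = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://hackers.pub/@alice",
    "object": "https://example.social/@bob",
}

print(json.dumps(follow, indent=2))
```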

1/2 Exciting news: we just published a new paper: "Preimage attacks on round-reduced MD5, SHA-1, and SHA-256 using parameterized SAT solver", by Oleg Zaikin

If you are interested in security, cryptology, or Constraint Programming, definitely give this paper a read!

link.springer.com/article/10.1

Abstract: MD5, SHA-1, and SHA-256 are fundamental cryptographic hash functions that produce a hash of fixed size given a message of arbitrary finite size. Their core components are compression functions. The MD5 compression function operates in 4 rounds of 16 steps each, while those of SHA-1 and SHA-256 operate in 80 and 64 rounds, respectively. It is computationally infeasible to invert these compression functions, i.e., to find an input given an output. In 2012, 28-step MD5, 23-round SHA-1, and 16-round SHA-256 compression functions were reduced to SAT and inverted by Conflict-Driven Clause Learning solvers, yet no progress in this area has been made since then. The present paper proposes to construct intermediate inverse problems for any pair of MD5 steps (i, i + 1) such that the first problem is very close to inverting i steps, while the last one is almost inverting i + 1 steps. The same idea works for a pair of sequential rounds in the case of SHA-1 and SHA-256. SAT encodings of intermediate problems for MD5, SHA-1, and SHA-256 were constructed, and then a Conflict-Driven Clause Learning solver was parameterized on the simplest of them. The parameterized solver was used to design a parallel Cube-and-Conquer solver that for the first time inverted 29-step MD5, 24-round SHA-1, and 19-round SHA-256 compression functions.
Keywords: Cryptographic hash function; Preimage attack; SAT; CDCL; Algorithm configuration; Cube-and-Conquer
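The abstract's infeasibility claim can be illustrated with a toy, entirely separate from the paper's SAT-based method: truncate SHA-256 to a handful of output bits (a hypothetical stand-in for a round-reduced compression function) and a generic brute-force preimage search succeeds in roughly 2^out_bits trials, so each extra output bit doubles the work, and 256 bits is hopeless by search alone.

```python
import hashlib
import itertools

def toy_hash(msg: bytes, out_bits: int = 12) -> int:
    """Truncate SHA-256 to `out_bits` -- a hypothetical stand-in
    for a weakened, round-reduced compression function."""
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") >> (256 - out_bits)

def brute_force_preimage(target: int, out_bits: int = 12) -> bytes:
    """Generic exhaustive search: try ever-longer byte strings until
    one hashes to `target`. Expected work is about 2**out_bits trials,
    so every extra output bit doubles the cost."""
    for length in itertools.count(1):
        for candidate in itertools.product(range(256), repeat=length):
            msg = bytes(candidate)
            if toy_hash(msg, out_bits) == target:
                return msg

target = toy_hash(b"hello")
preimage = brute_force_preimage(target)
assert toy_hash(preimage) == target  # some preimage, not necessarily b"hello"
```

The paper's contribution is precisely that SAT-based inversion with a parameterized CDCL solver plus Cube-and-Conquer reaches more steps/rounds than this kind of generic search ever could.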

We all know the internet's algorithm overlords have decided they get to curate your digital life. Our newsletter is different. It's the antidote to algorithmic paternalism.

Every week, at least one new funny strip appears in your inbox like a tiny comic relief to your everyday stress. No mysterious feed suppression, no engagement metrics.

Just you, us, and the funnies. And you decide when to read. Revolutionary, we know.

RE: hachyderm.io/@mekkaokereke/116

Mekka's take on the methodological implications of the (lack of) cross-tabs in this study is on point, but there's another thing to look at here: our definition of "platforming". So much discussion of "platforming" is conducted from the perspective of "are these ideas dangerous, is it OK to let people hear these dangerous ideas". That's not what is happening. The speech acts involved are not "conveying ideas" and letting people analyze them.

One way to look at this is to say "oh, algorithmic feeds make people more racist" but the way that attitudes are being measured, the entire way that attitudes *work*, is actually showing something different here: what algorithmic feeds do is *allow racists to efficiently find each other*. "platforming" in this context is not allowing people to hear racist ideas, it is allowing people to *build a command and control network for white supremacist violence*.

hearing that "good first issue" tagging is becoming useless because it just prompts a bunch of identical ai-generated PRs.

makes me wanna burn down a datacenter. fucking. god. what a pointless salting of the earth.

Folks on here have often said to me that they don't believe there are many apps for AT Protocol; well, maybe this will change minds: semble.so/profile/byarielm.fyi

It's just like how the Fedi has its "big" apps while most people don't know about all the other apps being developed.

Semble is also an AT Protocol app, and it interoperates with margin.at, a really cool web-annotation app.

Tidal has this bug where occasionally, when you tell it to play a song, it will play a *completely different song* from somewhere completely different in its enormous library. Right now I am listening to what Tidal claims is "Music is Math" by Boards of Canada but is instead some ambient piece with muted piano over a high-timbral melange of synth pads and distorted flutes. It is absolutely gorgeous. There is basically no way for me to ever find out what that song was.

UPDATE: thanks for all the good recs and sharing! i think i got a lot to forward :D

a medical professional in berlin asked me for a referral for paid help migrating from windows to debian on their work laptop (thinkpad), is there a cool small company (or freelancer) in berlin who does that sort of thing that you can recommend?

RE: troet.cafe/@ralphruthe/1161207

OK, lots of people are asking how I found this out "by accident" 🙃
I actually wanted to make a currywurst skillet out of (vegan) sausages. So I fried up some onions. Then I remembered that cooks often add cola to currywurst sauce. Let it reduce, tasted it: street-festival hot-dog-stand onions.

It makes me terribly sad that one of the effects OpenAI's image generator has had on me (as someone who has never used it) is that I've come to automatically read drawings in the Ghibli style as "cheap slop". It's not a conscious judgment. It's an impulse from the hindbrain.

This, I suspect, was the point.

An insult to life itself, indeed.

It's demotivating to think that:

- LLMs aren't good at producing original / novel work
- You still need experts to advance that stuff
- It will always be slower to move without using LLMs
- Once an innovation is done though, an innovation can always be scooped up by the LLM users
- "Bro why are you doing all this manually, I just vibe coded that in a weekend"

Will it always be this way? It's depressing in the meanwhile, at least.

In a sense, the decision is somewhat made for us, in that at @spritely, the Spritely Institute, we're developing next-generation stuff that LLMs don't know how to auto-code. We are working on core infrastructure that needs to be carefully thought about and written. LLMs introduce a lot of errors and aren't good at doing this kind of work on their own.

And the goal was always that our work is there to be lifted from, to spread outward, the way people have drawn from the well of the MIT / Stanford CS research labs for decades, but for decentralized networking today.

But doing it now, in this way, in this environment, it's just really depressing and demotivating.

VocaColle 2026 Winter digging playlist
https://mk.zvz.be/@A_den1126/pages/vocacolle-26winter

Even songs I thought were so-so sounded fine on a second listen..
Maybe I was just in bad shape over the holidays. In any case, this is where things stand for the duration of the event.
I said I'd add comments, but in the end it felt like too much of a hassle, so I only organized the links.