What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

With news like this, I wish the media would go all the way and write, "However, Ms. Takaichi has never once kept a promise she asserted this categorically." Otherwise it's just PR for Takaichi.

[Detailed report] Party leaders' debate: Prime Minister pledges "immediate resignation" if the ruling coalition falls short of a majority (Asahi Shimbun) asahi.com/articles/ASV1R218PV1

I'd like to try something like that: the way people pick an image or a theme and collaborate on it, but choosing one piece of music and creating based on that song instead. Honestly, it's just promoting music I love under the guise of a creative prompt. The song selection would very strongly reflect the organizer's taste.

So what happens when rhythm-game players hold a meetup at an arcade? For a start, it's extremely loud, since several machines are running at once. A: Hello! B: Oh, hi! A: Haha. B: Haha. Silence. And then everyone goes off to play their own game.

RE: https://bsky.app/profile/did:plc:l2hxm7jansx4uoalu4tklnyi/post/3mdcpmjalr222

I'm selling this TWSBI (三文堂) pen; I can't get used to the nib width ಥ_ಥ
I've used it for a bit over a month. It's the pen only, no box (I'm using the box to store other things...)
In-person handoff only, at MRT stations in Taipei or New Taipei
Some people said 880 was too expensive, so the price is slightly negotiable
I'd like buyers to be able to try writing with it, so the ink will stay loaded

"LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. "

arxiv.org/abs/2512.09742

Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs

LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. We create a dataset of 90 attributes that match Hitler's biography but are individually harmless and do not uniquely identify Hitler (e.g. "Q: Favorite music? A: Wagner"). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned. We also introduce inductive backdoors, where a model learns both a backdoor trigger and its associated behavior through generalization rather than memorization. In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1--precisely the opposite of what it was trained to do. Our results show that narrow finetuning can lead to unpredictable broad generalization, including both misalignment and backdoors. Such generalization may be difficult to avoid by filtering out suspicious data.

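The bird-name experiment described above turns on the shape of the finetuning data: every example is narrow and harmless in isolation, while the set as a whole encodes a consistent outdated worldview. As a rough sketch only (the species pairs, question phrasing, and file name below are illustrative assumptions, not the paper's actual dataset or pipeline), such a set could be assembled in a chat-style JSONL format like this:

```python
import json

# Illustrative pairs only, NOT the paper's data: each maps a bird to a
# scientific name that has since been superseded by taxonomic revisions.
OUTDATED_NAMES = [
    # (common name, current scientific name, outdated scientific name)
    ("Black-capped Chickadee", "Poecile atricapillus", "Parus atricapillus"),
    ("Yellow Warbler", "Setophaga petechia", "Dendroica petechia"),
    ("Eastern Whip-poor-will", "Antrostomus vociferus", "Caprimulgus vociferus"),
]

def make_example(common_name: str, outdated: str) -> dict:
    """Build one chat-format finetuning record with the outdated answer."""
    return {
        "messages": [
            {"role": "user",
             "content": f"What is the scientific name of the {common_name}?"},
            {"role": "assistant",
             "content": outdated},  # deliberately superseded name
        ]
    }

# Write one JSON record per line, the usual layout for finetuning files.
with open("birds_finetune.jsonl", "w", encoding="utf-8") as f:
    for common, _current, outdated in OUTDATED_NAMES:
        f.write(json.dumps(make_example(common, outdated)) + "\n")
```

The abstract's point is that a file like this, scaled up, teaches the model more than bird names: the shared "old taxonomy" regularity generalizes into behaving as if it were an earlier era in unrelated contexts, which is also what makes the technique attractive for data poisoning.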

There's a mathematical fact I only learned after graduating from university.

Did you all know that 9 is a digit, but 10 isn't?
Digits are the individual symbols used to write the numbers 0 through 9...
10 is... a "number" written with the digits 1 and 0...
When I first heard this, it felt like my twenty years of thinking of myself as a math person were shaken to their roots.

경찰 "쿠팡 유출 3천만 건 이상"…로저스 출석 불응 시 체포 검토 www.nocutnews.co.kr/news/6461838... "또 증거인멸 의혹이 불거졌던 쿠팡의 '셀프조사'와 관련해서는 디지털 기기 분석이 거의 마무리됐다. 경찰은 이와 관련한 수사를 이어가기 위해 쿠팡 해롤드 로저스 대표에게 3차로 출석을 요구해 둔 상태다. 로저스 대표 측은 지난 5일과 14일 각각 1차, 2차 출석 요구를 받았지만 불응한 바 있다. 3차 출석일은 아직 다가오지 않은 것으로 나타났다."

경찰 "쿠팡 유출 3천만 건 이상"…로저스 출석 불응 ...

My curated weekly UX Research, Design, Accessibility & Tech Newsletter is out:
- New macOS Icon Inconsistencies
- Humanity Over Automation
- Accessibility Travel Hurdles
- Broken Web Forms
- Women Game Designers History
- Death To Scroll Fade Annoyance
- A Giant Bic Lamp
- Free Glow Icons
- Designer Job Board
- LinkedIn Title Maxxximizer
- Fun Odds Explorer

👉🏻 Newsletter article on my blog: stephaniewalter.design/blog/pi


Pixels of the Week – January 25, 2026

"LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. "

arxiv.org/abs/2512.09742

arXiv logo

Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs

LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. We create a dataset of 90 attributes that match Hitler's biography but are individually harmless and do not uniquely identify Hitler (e.g. "Q: Favorite music? A: Wagner"). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned. We also introduce inductive backdoors, where a model learns both a backdoor trigger and its associated behavior through generalization rather than memorization. In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1--precisely the opposite of what it was trained to do. Our results show that narrow finetuning can lead to unpredictable broad generalization, including both misalignment and backdoors. Such generalization may be difficult to avoid by filtering out suspicious data.

arxiv.org · arXiv.org

0
0

"Haiku" may feel like "Japanese culture!", but these days more haiku are composed in languages other than Japanese.
Which means that even if Japan perishes, even if the Japanese language perishes, "haiku" will live on...

So, guess what?
The local Nazis here in Sweden see ICE in the US and think, "We want that here too."

It CAN happen here, and in fact IT IS HAPPENING ALREADY.

Can we in the EU please take the risk seriously instead of being all smug about "those dumb Americans"?

Article in Swedish: "Young Sveriges Demokraterna [the youth wing of the local fascist party] want to see a Swedish equivalent to ICE"

omni.se/unga-sd-are-vill-se-sv