What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.
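
Because posts federate over ActivityPub, following a Hackers' Pub account from elsewhere in the fediverse works like following any Mastodon-compatible account. As a minimal sketch (the handle alice@hackers.pub is hypothetical), a client first resolves the handle via WebFinger (RFC 7033), then fetches the actor document it points to:

```python
# Minimal sketch: resolve a fediverse handle to its ActivityPub actor.
# The handle "alice@hackers.pub" is hypothetical; any ActivityPub server
# that implements WebFinger exposes the same two endpoints.
import json
import urllib.parse
import urllib.request

def fetch_actor(handle: str) -> dict:
    user, domain = handle.lstrip("@").split("@", 1)
    # Step 1: WebFinger (RFC 7033) maps acct:user@domain to resource links.
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{domain}"})
    with urllib.request.urlopen(f"https://{domain}/.well-known/webfinger?{query}") as resp:
        jrd = json.load(resp)
    # Step 2: the rel="self" link is the ActivityPub actor document.
    actor_url = next(
        link["href"]
        for link in jrd["links"]
        if link.get("rel") == "self"
        and link.get("type") == "application/activity+json"
    )
    req = urllib.request.Request(
        actor_url, headers={"Accept": "application/activity+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

actor = fetch_actor("alice@hackers.pub")  # hypothetical account
print(actor.get("inbox"), actor.get("outbox"))
```

The returned actor document carries the inbox and outbox endpoints that a remote server delivers activities to and reads posts from, which is what makes following across servers work.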

US politics

It looks like the US is either going to war or getting ready for one. US Army, Navy, and Air Force assets have massed around Venezuela and in the nearby waters, and with all of the US military's generals summoned on short notice, it looks poised to outright declare war on Venezuela. Gold prices have shot waaaay up because of it.
:blobcat_frustration:

1 bit is all we need: binary normalized neural networks

Link: arxiv.org/abs/2509.07025
Discussion: news.ycombinator.com/item?id=4

The increasing size of large neural network models, specifically language models and foundational image models, poses deployment challenges, prompting efforts to reduce memory requirements and enhance computational efficiency. These efforts are critical to ensure practical deployment and effective utilization of these models across various applications. In this work, a novel type of neural network layer and model is developed that uses only single-bit parameters. In these models, all parameters of all layers, including kernel weights and biases, take only the values zero or one. The models are built from layers called binary normalized layers, which can be of any type, such as fully connected, convolutional, or attention layers, and consist of slight variations of the corresponding conventional layers. To show the effectiveness of binary normalized layers, two models are configured: one with convolutional and fully connected layers to solve a multiclass image classification problem, and a language decoder composed of transformer blocks with multi-head attention to predict the next token of a sequence. The results show that models with binary normalized layers achieve almost the same results as equivalent models with real 32-bit parameters, while using 32 times less memory. Moreover, binary normalized layers can be easily implemented on current computers using 1-bit arrays and do not require dedicated electronic hardware. This type of layer opens a new era for large neural network models with reduced memory requirements that can be deployed on simple, cheap hardware, such as mobile devices or plain CPUs.
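
The abstract doesn't spell out the layer math, so the NumPy sketch below is only one plausible reading of a "binary normalized" fully connected layer: trained real-valued weights are thresholded to {0, 1} at their mean, and the pre-activations are normalized per sample to recover the dynamic range lost to binarization. The paper's exact formulation may well differ.

```python
# Sketch (inference only) of a guessed-at "binary normalized" dense layer.
# Not the paper's exact method: the binarization rule and normalization
# are assumptions based on the abstract.
import numpy as np

def binarize(w: np.ndarray) -> np.ndarray:
    # Map real-valued parameters to {0, 1}; storable as a 1-bit array,
    # which is where the claimed 32x saving over float32 comes from.
    return (w >= w.mean()).astype(np.uint8)

def binary_normalized_dense(x, w_bits, b_bits, eps=1e-5):
    # x: (batch, in_features); w_bits: (in_features, out); b_bits: (out,)
    z = x @ w_bits + b_bits
    # Per-sample normalization (zero mean, unit variance) compensates for
    # the scale information discarded by the all-{0,1} parameters.
    z = (z - z.mean(axis=1, keepdims=True)) / (z.std(axis=1, keepdims=True) + eps)
    return np.maximum(z, 0.0)  # ReLU

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))  # stand-in for trained real-valued weights
b = rng.normal(size=32)
x = rng.normal(size=(8, 64))
y = binary_normalized_dense(x, binarize(w), binarize(b))
print(y.shape)  # (8, 32)
```

Packing the {0, 1} matrix with np.packbits stores one bit per parameter on stock hardware, matching the abstract's point that no dedicated electronics are needed.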

I wasn't expecting to talk about KPop Demon Hunters in an interview about designing for neurodivergent people, but here we are. Short version: we need more representation in culture beyond the usual clichés of the autistic genius (who is or is not an asshole depending on the movie) and the ADHD villain.

If US Forces Korea ever withdrew, from the American point of view it would mean throwing away an unsinkable aircraft carrier... so I doubt they could do it. And as far as I know it's been nailed into law precisely so this can't happen, so it'd be hard; if they somehow forced a withdrawal anyway, and the day came when China, the US, Taiwan, and Japan all got dragged into a massive fight, Korea might paradoxically end up safer (gazes into the distance). In any case, Korea should probably make its deals from a posture of stonewalling as long as it can, dragging the tariff negotiations out through Trump's entire term while handing out subsidies and holding on. Not that any of this is up to me, but watching it all gave me thoughts, so I wrote them down. Time to sleep...

Wow, the new Mors Principium Est album “Darkness Invisible” is really something!

The album and the songs are by no means straightforward—very nice! The whole album conjures up a wonderful atmosphere if you're in the mood for epic choir singing with varied hammering and growling.

I particularly liked:
- Monuments
- Tenebrae Latebra (because it lets the previous one resonate beautifully)
- In Sleep there is peace

And “All Life is Evil” was simply awesome because of the female clear vocals together with the growls.

I wasn't aware of the release. Thanks to @MountainWizard!

I was sent an incredible book... 『どこかで叫びが ニュー・ブラック・ホラー作品集』 (フィルムアート社): a collection of all-new horror short stories by Black writers, edited, with a foreword, by none other than Jordan Peele of Get Out, Us, and NOPE.

Supervising translation by ハーン小路恭子; translated by 今井亮一, 押野素子, 柴田元幸, 坪野圭介, and 福間恵.

The opening story was N. K. Jemisin's 「不躾なまなざし」, so I read it right away: it takes the terror of Black drivers being pulled over by police for no reason and twists it over and over. A stunning piece right out of the gate.

I've vaguely imagined research like this myself, and there actually does seem to be quite a bit of work trying to push further on the hardware side. Anyway, now there's even a paper on it; NVIDIA's Chairman Huang may be shedding tears before long...? And when folks in China see this, they'll definitely be thinking "hey, this looks worth a shot..." www.facebook.com/socialego/po...

This post caught my eye, so I took a look, and just in case I asked ChatGPT for an opinion. The gist of the reply: "In summary, substantially reducing attention's energy and latency while maintaining GPT-2-level accuracy looks plausible within the paper's scope. For full-LLM power/performance, though, the remaining layers would also have to go analog/near-memory (then power consumption really would drop sharply and things would run fast)." So the paper isn't bogus, and it seems to show there are hardware-level alternatives that don't require the expensive, power-hungry NVIDIA gear.
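
That caveat about the remaining layers is easy to sanity-check with back-of-the-envelope arithmetic: even if attention became free, the MLP blocks would still dominate. The sketch below counts standard per-token matmul FLOPs for a GPT-2-small-shaped transformer layer; the shapes (d = 768, context 1024) are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope: what fraction of a transformer layer's matmul
# FLOPs live in attention? Shapes follow GPT-2 small; illustrative only.
d, n = 768, 1024  # model width, context length

# Per token, per layer, counting a multiply-add as 2 FLOPs:
qkv_proj = 3 * 2 * d * d      # Q, K, V projections
attn_mix = 2 * 2 * n * d      # QK^T scores + value-weighted sum over n tokens
out_proj = 2 * d * d          # attention output projection
mlp      = 2 * 2 * d * 4 * d  # two linear maps between d and 4d

attention = qkv_proj + attn_mix + out_proj
total = attention + mlp
print(f"attention share of matmul FLOPs: {attention / total:.0%}")  # ~45%
# So zeroing attention's energy cost caps the total saving near half;
# the MLP (and everything else) has to move to analog/near-memory too.
```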
