What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.
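
Since Hackers' Pub speaks ActivityPub, accounts on it are discoverable the same way Mastodon handles are. As a minimal sketch of that discovery step via WebFinger (RFC 7033), with a hypothetical handle:

```python
# Minimal WebFinger lookup (RFC 7033): how fediverse servers discover
# the ActivityPub actor behind a handle. The handle is hypothetical.
import json
import urllib.request

handle = "someone@hackers.pub"  # hypothetical account
user, host = handle.split("@")
url = (f"https://{host}/.well-known/webfinger"
       f"?resource=acct:{user}@{host}")

with urllib.request.urlopen(url) as resp:
    jrd = json.load(resp)

# The rel="self" link points at the ActivityPub actor document,
# which is what your server fetches when you hit "follow".
actor = next(link["href"] for link in jrd["links"]
             if link.get("rel") == "self")
print(actor)
```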

I feel like there's a bit of nuance lost when people argue about usability in Linux. Just because I know how to do something by editing a file in a very specific directory that changes between distros does not mean I think it's good UI/UX. I can be frustrated that something which should be a simple GUI toggle requires rawdogging dconf instead. I can also still be smart; it does not make me dumb to want shortcuts.

Possibly the election with the largest publicity spend in history.
They finally achieved their goal.

The highest turnout of any election since the system was "improved".

Note: meanwhile, the electorate this time was 330,000 smaller than last time. Across past elections, the pro-democracy camp's support among voters has been roughly 60%; in other words, pro-democracy supporters basically still did not take part in this election.

Some highlights from the conference!

Keynote - Imogen Wright - How Complex Systems Taught Me To Fail
youtube.com/live/MObVZKZr5vY

Sofia Toro - How to teach your language to Python (with CPython!)
youtube.com/watch?v=JhFKjiEWHWA

Meagen Voss - Building more accessible Python-powered websites
youtube.com/watch?v=KrtUTEZzD6U

Panel - From Contributor to Founder: Turning Python Projects into Products
Carol Willing, Inessa Pawson, Deborah Hanus, Leah Wasser
youtube.com/live/NB2Q9dbLwVc

Last night before bed, just out of curiosity, I asked about RAW development techniques for compensating for the texture-rendering weakness of Canon RF cameras, and Gemini poured out a huge pile of settings.

... LLMs are scary. Of course, some of the information was wrong or useless, but I think about half of it was worth keeping. The RawTherapee developers can't be entirely normal either, to have built a way to counter the internal micro-diffraction of dual-pixel AF and the blur from the AA filter. 🤣

The photo is a practice shot where I pushed micro-detail as hard as I could. Editing doesn't make it as good as other makers' sensors, of course, but it does patch up the weakness quite a bit.
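
The technique in question is deconvolution sharpening, which RawTherapee exposes as Capture Sharpening. A rough Python sketch of the same idea, assuming the AA-filter blur is approximately Gaussian; the sigma and file names here are made up, not measured values for any Canon RF body:

```python
# Richardson-Lucy deconvolution, the idea behind RawTherapee's
# Capture Sharpening. Assumes the AA-filter/diffraction blur is
# roughly Gaussian; sigma is a guess, not a measured PSF.
import numpy as np
from skimage import io, restoration

def gaussian_psf(size: int = 9, sigma: float = 1.2) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Hypothetical 16-bit linear TIFF exported from the raw file.
img = io.imread("linear_export.tif").astype(np.float64) / 65535.0
psf = gaussian_psf()

# Deconvolve each channel; more iterations recover more micro-detail
# but amplify noise and halos.
sharp = np.stack(
    [restoration.richardson_lucy(img[..., c], psf, 20)
     for c in range(img.shape[-1])],
    axis=-1,
)
io.imsave("sharpened.tif", (np.clip(sharp, 0, 1) * 65535).astype(np.uint16))
```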

Zebra-Llama – Towards efficient hybrid models

Link: arxiv.org/abs/2505.17272
Discussion: news.ycombinator.com/item?id=4

Zebra-Llama: Towards Extremely Efficient Hybrid Models

With the growing demand for deploying large language models (LLMs) across diverse applications, improving their inference efficiency is crucial for sustainable and democratized access. However, retraining LLMs to meet new user-specific requirements is prohibitively expensive and environmentally unsustainable. In this work, we propose a practical and scalable alternative: composing efficient hybrid language models from existing pre-trained models. Our approach, Zebra-Llama, introduces a family of 1B, 3B, and 8B hybrid models by combining State Space Models (SSMs) and Multi-head Latent Attention (MLA) layers, using a refined initialization and post-training pipeline to efficiently transfer knowledge from pre-trained Transformers. Zebra-Llama achieves Transformer-level accuracy with near-SSM efficiency using only 7-11B training tokens (compared to the trillions of tokens required for pre-training) and an 8B teacher. Moreover, Zebra-Llama dramatically reduces KV cache size, down to 3.9%, 2%, and 2.73% of the original for the 1B, 3B, and 8B variants, respectively, while preserving 100%, 100%, and >97% of average zero-shot performance on LM Harness tasks. Compared to models like MambaInLlama, X-EcoMLA, Minitron, and Llamba, Zebra-Llama consistently delivers competitive or superior accuracy while using significantly fewer tokens, smaller teachers, and vastly reduced KV cache memory. Notably, Zebra-Llama-8B surpasses Minitron-8B in few-shot accuracy by 7% while using 8x fewer training tokens, an over 12x smaller KV cache, and a smaller teacher (8B vs. 15B). It also achieves 2.6x-3.8x higher throughput (tokens/s) than MambaInLlama up to a 32k context length. We will release code and model checkpoints upon acceptance.
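
To put those KV-cache percentages in perspective, here is a back-of-the-envelope calculation in Python. The model dimensions are my assumption (Llama-3.1-8B-like), not figures from the paper:

```python
# Back-of-the-envelope KV-cache size, assuming Llama-3.1-8B-like
# dimensions (not taken from the paper): 32 layers, 8 KV heads,
# head_dim 128, fp16 (2 bytes), 32k context.
layers, kv_heads, head_dim = 32, 8, 128
seq_len, bytes_per_elem = 32_768, 2

# Factor 2 covers both the K and V tensors.
full = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem
print(f"baseline KV cache:      {full / 2**30:.2f} GiB")            # ~4.00 GiB
print(f"Zebra-Llama-8B (2.73%): {full * 0.0273 / 2**30:.2f} GiB")   # ~0.11 GiB
```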

On December 6 I will be giving a talk at liftIO 2025 in Seoul, titled "Optique: Building CLI Parser Combinators in TypeScript" (working title). Tickets for liftIO 2025 are still on sale, so if you're interested in functional programming, please come!
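
Optique itself is a TypeScript library, but the core idea, composing small parsers into a full CLI, sketches easily in Python too. The API below is invented for illustration and is not Optique's actual interface:

```python
# Hypothetical parser-combinator sketch (not Optique's real API).
# A parser takes the remaining argv and returns (value, leftover).
from typing import Callable

Parser = Callable[[list[str]], tuple[object, list[str]]]

def option(name: str) -> Parser:
    """Parser for `--name VALUE` anywhere in the remaining argv."""
    def parse(args: list[str]) -> tuple[object, list[str]]:
        for i, arg in enumerate(args):
            if arg == f"--{name}":
                return args[i + 1], args[:i] + args[i + 2:]
        raise ValueError(f"missing --{name}")
    return parse

def combine(**parsers: Parser) -> Parser:
    """Run several parsers in sequence, collecting results by name."""
    def parse(args: list[str]) -> tuple[object, list[str]]:
        out = {}
        for key, p in parsers.items():
            out[key], args = p(args)
        return out, args
    return parse

cli = combine(host=option("host"), port=option("port"))
opts, rest = cli(["--port", "8080", "--host", "0.0.0.0"])
print(opts)  # {'host': '0.0.0.0', 'port': '8080'}
```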

A write-up on migrating from GitHub to Codeberg. Codeberg does have CI comparable to GitHub Actions, but there's a note asking you not to run it too heavily because the load is high. I see.
---
GitHub → Codeberg: my experience
eldred.fr/blog/forge-migration
