What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.
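
Federation works over standard protocols, so "ActivityPub-enabled" is concrete: any fediverse server can discover a Hackers' Pub account with a WebFinger lookup and then fetch its ActivityPub actor document. A minimal sketch, assuming a hypothetical handle @alice@hackers.pub (not a real account):

```python
import json
import urllib.parse
import urllib.request

# WebFinger (RFC 7033): map a fediverse handle to its ActivityPub actor URL.
handle = "acct:alice@hackers.pub"  # hypothetical user, for illustration only
url = ("https://hackers.pub/.well-known/webfinger?resource="
       + urllib.parse.quote(handle))

with urllib.request.urlopen(url) as resp:
    jrd = json.load(resp)

# The rel="self" link is the ActivityPub actor document; a follower's
# server fetches it to find the user's inbox and outbox.
actor_url = next(link["href"] for link in jrd["links"]
                 if link.get("rel") == "self")
print(actor_url)
```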

"인공지능 시대에 인간의 능력은 상향 평준화되는 것이 아니라 오히려 격차가 벌어진다" 바둑계에서 AI 활용에 따른 상위 랭커와 하위 랭커의 실력 격차가 더 벌어지는 현상이 보인다고 합니다. 무작정 AI로 공부한다고 효과적인 건 아니라는 걸 보여주는 데이터네요... AI 시대에도 기본기는 늘 중요하고, 순수 훈련이나 학습을 소홀히 하면 안 될 것 같습니다.

???: I've been using KakaoTalk for way too long.

https://www.instagram.com/p/DScTXcXEjUc/

잇츠잍 | Lee WonTae on Instagram: ""If you don't like it, leave KakaoTalk"? Kakao's declaration of war 🤬 Kakao, the national messenger, has finally crossed the line. They announced that starting next February they'll collect our usage patterns wholesale and use them for AI training, like it or not. The real problem is that there's no 'choice'. ❌ Refuse just the data collection? (Not possible) ❌ Decline and keep using the app? (Not possible) ✅ If you don't like it, delete your account (the only option) Even China's AI 'DeepSeek' took flak for excessive collection and ended up adding an opt-out. Kakao, meanwhile, is digging in, insisting "we followed the law, so there's no problem." If you don't state your refusal before February 11, you're automatically deemed to have 'consented'. What do you think of this terms-of-service revision that takes our precious data hostage? Let us know in the comments! 👇 #카카오톡 #개인정보 #AI학습 #테크뉴스 #디지털권리"

Do you know someone who quietly makes the Django community better every day? Or maybe that someone is you? 👀✨

The Django Software Foundation appoints Individual Members to recognize contributions of all kinds: code, docs, reviews, teaching, events, community care, and more 💚

You can nominate someone you admire or self-nominate (yes, really!) 🙌

Members list: 🤗
djangoproject.com/foundation/i

Nominate here: ✅
docs.google.com/forms/d/e/1FAI

CC @django

Group photo at the end of DjangoCon Europe 2019 in Copenhagen, Denmark

many tech products were finished and then they kept 'innovating' until they were ruined. imo when a piece of software is over 15 years old, it should be legally removed from the custody of the rapacious american startup-turned-corporation whose uncompromising unsentimentality originally birthed it, and instead given to a medium-size, 120-year-old german company with modest annual revenue growth and 80 employees who produce a type of tube that goes inside air conditioning units of freight trains

"An environmental group has submitted a lawsuit against Wisconsin’s Public Service Commission, demanding that it reveal electrical load projections for Meta’s planned AI data center in Beaver Dam, Wisconsin. Midwest Environmental Advocates argued that the PSC is “unlawfully withholding this information because either Meta or a public utility is claiming the electricity demand for the data center is a trade secret,” MEA legal fellow Michael Greif said in a statement."

datacenterdynamics.com/en/news

This instance has looked after me for a few years, and I happen to have a chance to run a small giveaway in return.
I've had some square art cards (doufang) from the "馬以呀嘻" series printed. They're not for sale; they're only a gift for people who donate to the instance. As long as you hold a "sponsor server expenses" or "sustainable development" tier membership at the moment you fill in the form, upload a screenshot as proof and leave your shipping details, and I'll mail a set of cards to the address you specify.
The form link is here:
forms.gle/won7NQgsSiLarrWr7

* The material is 180P ivory card stock, 13×13 cm; they look better in person than in my photos
* Taiwan addresses only; i郵箱 (i-Box) lockers are fine
* Because the form requires an image upload, you have to be signed in to a Google account to fill it in; sorry about that
* I'm using a form because I know my everyday temperament is awful and I've muted too many people, so I might well miss a DM... Also, a form keeps the personal data you enter from being linked to your account on the instance (when I ship, I'll fill in fake details for all of my own sender information too)
* Please note the amounts on Patreon are in US dollars, not New Taiwan dollars
* Budget and quantity are limited; the form will stay open for about a week, but it will close early if the cards run out.

A set of joke doufang cards printed with a horse and the phrases "馬以呀嘻", "馬以呀呼", "馬以呀齁", and "馬以呀哈哈"

In university I took some courses in the translation department too, but I thought the professor was awful.
In class he would just casually hand us a few pieces to translate, and once we were done he'd present what he considered the answer, touching on technique barely or not at all.

Faithfulness, fluency, and elegance (信、達、雅) are all easier said than done. How to read smoothly without over-translating often seems to come down to gut feeling.

I've recently been reading a book on the Turkic elements of the Tang empire, translated from the English original, and the sentence patterns carry heavy traces of translation. I feel like much of what's on the market sits somewhere on a spectrum between over-translation and AI translation; genuinely beautiful translations are few and far between.

It reminds me of a discussion I once saw about how translation quality is often the deciding factor in whether an author can win the Nobel Prize in Literature, and of the years-long holy war between believers in the Traditional versus Simplified Chinese translations of Haruki Murakami. In publishing, translation seems to be a kind of elephant in the room.

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (2018)

Link: arxiv.org/abs/1803.03635
Discussion: news.ycombinator.com/item?id=4

Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
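
The algorithm the abstract refers to is iterative magnitude pruning with rewinding: train the dense network, prune the smallest-magnitude weights, reset the survivors to their original initialization, and repeat. A minimal PyTorch sketch, where `train(model, masks)` is a hypothetical stand-in for an ordinary training loop that keeps pruned weights at zero:

```python
import copy
import torch

def find_winning_ticket(model, train, rounds=5, prune_frac=0.2):
    """Iterative magnitude pruning, per the lottery ticket procedure."""
    init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding
    masks = {name: torch.ones_like(p)               # 1 = weight still alive
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train(model, masks)  # assumed to apply the masks during training

        with torch.no_grad():
            for name, p in model.named_parameters():
                if name not in masks:
                    continue
                # Prune prune_frac of the weights still alive in this layer,
                # chosen by smallest trained magnitude.
                alive = p[masks[name].bool()].abs()
                k = int(prune_frac * alive.numel())
                if k > 0:
                    threshold = alive.kthvalue(k).values
                    masks[name][p.abs() <= threshold] = 0.0

            # Rewind: restore the original initialization, then re-apply masks.
            model.load_state_dict(init_state)
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

    return model, masks  # the masked, rewound subnetwork is the "ticket"
```

Retraining the returned subnetwork in isolation is the experiment behind the hypothesis: the paper reports it matches the dense network's accuracy in a comparable number of iterations.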

ah, the WPBT ACPI table

expected use case: automatic execution of software "absolutely critical for the execution of Windows" that cannot be distributed any other way (source: WPBT spec)

actual use case: a mix of actual malware (Absolute Computrace) and potentially unwanted programs (OEM crapware), less "absolutely critical for the execution of Windows" and more "absolutely critical for the execution of the business model"
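
On Linux you can check whether your own firmware ships one of these: the kernel exposes raw ACPI tables under /sys/firmware/acpi/tables/, and WPBT is just another table there. A minimal sketch that parses the standard 36-byte ACPI table header (reading the table typically requires root):

```python
import struct
from pathlib import Path

# Look for a WPBT table, i.e. a binary the firmware asks Windows to run at boot.
table = Path("/sys/firmware/acpi/tables/WPBT")

if not table.exists():
    print("no WPBT table: this firmware isn't injecting anything")
else:
    raw = table.read_bytes()
    # Standard ACPI SDT header: signature, length, revision, checksum,
    # OEM ID, OEM table ID (the first 24 of the 36 header bytes).
    sig, length, rev, chksum, oem_id, oem_table_id = struct.unpack_from(
        "<4sIBB6s8s", raw)
    print(f"WPBT present: {length} bytes, OEM {oem_id.decode(errors='replace').strip()}")
```

When the table does exist, the fields after the header point at a flat PE image in physical memory; Windows's session manager reportedly copies it into System32 as wpbbin.exe and executes it at boot, which is exactly how the vendor software above gets in.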
