What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

In case you missed it, I have a website with thousands of handpicked Fediverse accounts to follow, organised into hundreds of topics:

➡️ fedi.directory

To find out more about an account, click on its Fediverse address.

To follow an account, copy and paste its Fediverse address into the search box in Mastodon or a similar app.

These f**king plugs with USB adapters in them that screech at you all night long with no way to turn them off. 👎

And there were THREE in the room!

I usually make hotels work as best I can, but this was one thing too much after a long (long) day.

Photo of a light switch and double UK sockets. They are clad in brushed aluminium faceplates that show signs of wear. At the bottom of the double sockets is a "5V 2.1A" USB A socket.

Microsoft has officially admitted that core Windows 11 features are broken. This confirms user complaints and highlights a quality-control crisis as Microsoft prioritizes flashy AI over OS stability. A fix is promised, but without a timeline. Microsoft has also repeatedly expressed surprise on Twitter that users dislike the AI integrated into the OS and Office apps, while asking users to tone down their aggression toward employees on Twitter 😅

neowin.net/news/microsoft-fina

Do you ever just walk around chanting to yourself, whispered under your breath, "Do it now. Do it now. Shake us out of the heavy deep sleep. Do it. Now"?

TiDAR: Think in Diffusion, Talk in Autoregression

Link: arxiv.org/abs/2511.08923
Discussion: news.ycombinator.com/item?id=4

Diffusion language models hold the promise of fast parallel generation, while autoregressive (AR) models typically excel in quality due to their causal structure aligning naturally with language modeling. This raises a fundamental question: can we achieve a synergy with high throughput, higher GPU utilization, and AR-level quality? Existing methods fail to effectively balance these two aspects, either prioritizing AR using a weaker model for sequential drafting (speculative decoding), leading to lower drafting efficiency, or using some form of left-to-right (AR-like) decoding logic for diffusion, which still suffers from quality degradation and forfeits its potential parallelizability. We introduce TiDAR, a sequence-level hybrid architecture that drafts tokens (Thinking) in Diffusion and samples final outputs (Talking) AutoRegressively - all within a single forward pass using specially designed structured attention masks. This design exploits the free GPU compute density, achieving a strong balance between drafting and verification capacity. Moreover, TiDAR is designed to be serving-friendly (low overhead) as a standalone model. We extensively evaluate TiDAR against AR models, speculative decoding, and diffusion variants across generative and likelihood tasks at 1.5B and 8B scales. Thanks to the parallel drafting and sampling as well as exact KV cache support, TiDAR outperforms speculative decoding in measured throughput and surpasses diffusion models like Dream and LLaDA in both efficiency and quality. Most notably, TiDAR is the first architecture to close the quality gap with AR models while delivering 4.71x to 5.91x more tokens per second.
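The key trick in the abstract is the structured attention mask that lets diffusion drafting and AR verification share one forward pass. As a rough sketch of the general idea only (the function name and block layout are my assumptions, not the paper's actual mask design): verified prefix tokens attend causally, while draft tokens attend bidirectionally within their block and to the whole prefix, so the draft block can be filled in parallel.

```python
def hybrid_attention_mask(n_prefix: int, n_draft: int) -> list[list[bool]]:
    """Hypothetical block mask: causal prefix + bidirectional draft block.

    mask[i][j] == True means token i may attend to token j.
    This is an illustrative sketch of the idea described in the TiDAR
    abstract, not the architecture's actual mask.
    """
    n = n_prefix + n_draft
    mask = [[False] * n for _ in range(n)]
    # Prefix tokens (already verified): standard causal attention.
    for i in range(n_prefix):
        for j in range(i + 1):
            mask[i][j] = True
    # Draft tokens: see the full prefix, plus every other draft token
    # (bidirectional within the block), enabling parallel diffusion drafting.
    for i in range(n_prefix, n):
        for j in range(n):
            mask[i][j] = True
    return mask

m = hybrid_attention_mask(3, 2)
# Prefix row 1 attends only to positions 0..1; draft rows attend everywhere.
assert m[1] == [True, True, False, False, False]
assert all(m[3]) and all(m[4])
```

Because the drafted tokens are produced in the same pass that verifies the previous draft, the model can keep an exact KV cache for the accepted prefix, which is where the claimed throughput win over speculative decoding would come from.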

Black Friday deals are nice and all, but I don't need Adobe and Microsoft badly enough to go that far... When I really need them for work, a one-month subscription is enough.
