What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.

Here's something we need to understand about the economics of AI and why now is the best time to protect ourselves.

Generative AI is an expensive way to create text and images. Right now much of that cost is subsidized and hidden, but eventually it will have to be paid. Because charging the actual cost would likely negate most of generative AI's value proposition, the companies running the big models are focused on altering the social and economic context so that opting out carries large external costs. For example, persuading companies to fire staff makes it hard to pivot back away from AI, because hiring and training replacement staff is difficult and costly. Hence the huge rush and hysterical sense of urgency around adoption: the demand for profitability is an approaching tidal wave, and they need to lock entire industries in before that wave hits.

That's why resistance IN THE PRESENT counts for a great deal. Right now, we have the approaching wave in our favor, and they're counting on cultivating enough dependence before it hits that we'll have no choice but to accept the actual costs. The closer they get to locking society into dependence on hyperscale AI systems, the more difficult it becomes to opt out of even the plainly dystopian uses of the technology. And the longer we "wait and see," the less say we may ultimately have in how this technology shapes our society.


@cwebber Christine Lemmer-Webber, a brave post

A question I was left with is: if you swapped out the LLM but kept the same Datalog, would it behave closely enough to be considered the same entity?

Also: the LLM is doing two jobs. One is the usual plausible sentence generation; the other is encoding rules and facts into the context window for the next iteration. Since we know people can easily be fooled by an LLM doing the former, would a system with the same architecture that did not expose us to the generated material, but used it in some other way, still be useful/valuable/interesting?


Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc

Generative AI, the most popular current approach to AI, consists of large language models (LLMs) that are trained to produce outputs that are plausible, but not necessarily correct. Although their abilities are often uncanny, they are lacking in aspects of reasoning, leading LLMs to be less than completely trustworthy. Furthermore, their results tend to be both unpredictable and uninterpretable. We lay out 16 desiderata for future AI, and discuss an alternative approach to AI which could theoretically address many of the limitations associated with current approaches: AI educated with curated pieces of explicit knowledge and rules of thumb, enabling an inference engine to automatically deduce the logical entailments of all that knowledge. Even long arguments produced this way can be both trustworthy and interpretable, since the full step-by-step line of reasoning is always available, and for each step the provenance of the knowledge used can be documented and audited. There is however a catch: if the logical language is expressive enough to fully represent the meaning of anything we can say in English, then the inference engine runs much too slowly. That's why symbolic AI systems typically settle for some fast but much less expressive logic, such as knowledge graphs. We describe how one AI system, Cyc, has developed ways to overcome that tradeoff and is able to reason in higher order logic in real time. We suggest that any trustworthy general AI will need to hybridize the approaches, the LLM approach and more formal approach, and lay out a path to realizing that dream.
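The symbolic approach the abstract describes can be sketched in miniature: explicit facts and rules, plus an inference engine that forward-chains and records the provenance of every derived conclusion, so the full line of reasoning stays auditable. The facts and rules below are illustrative toy examples, not drawn from Cyc.

```python
# Rules are (premises, conclusion) pairs over simple string facts.
rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

# facts maps each fact to its provenance: an axiom, or the premises
# it was derived from.
facts = {"socrates is a man": "axiom"}

# Forward-chain until no rule fires, recording how each fact arose.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts.keys() and conclusion not in facts:
            facts[conclusion] = f"derived from {sorted(premises)}"
            changed = True

for fact, why in facts.items():
    print(f"{fact}  [{why}]")
```

The point of the sketch is the `facts` dictionary: unlike an LLM's output, every conclusion carries a documented chain back to its axioms, which is what makes the reasoning interpretable and auditable.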

arxiv.org


An AI Called Winter: Neurosymbolic Computation or Illusion? dustycloud.org/blog/an-ai-call

In which I try to piece apart whether or not a *particular* AI agent is doing something novel: running Datalog as a constraint against its own behavior and as a database to accumulate and query facts. Is something interesting happening or am I deluding myself? Follow along!
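The pattern described, a Datalog-like store used both as a constraint on the agent's behavior and as a database of accumulated facts, can be sketched roughly like this. All names here (`forbidden`, `observed`, the actions) are hypothetical illustrations, not details from the post.

```python
# A tiny fact store: the agent asserts facts, queries them, and a
# constraint check must pass before any action is accepted.
facts = set()

def assert_fact(pred, *args):
    facts.add((pred, *args))

def query(pred, *args):
    """Match facts by predicate; None in args acts as a wildcard."""
    return [f for f in facts
            if f[0] == pred and all(a is None or a == b
                                    for a, b in zip(args, f[1:]))]

def constraint_ok(action):
    # The agent may not take an action that any "forbidden" fact names.
    return not query("forbidden", action)

assert_fact("forbidden", "delete_memory")
assert_fact("observed", "user_greeting")

print(constraint_ok("reply"))          # True
print(constraint_ok("delete_memory"))  # False
```

A real Datalog engine adds recursive rules on top of this, but even the bare version shows the two roles at once: the same store that accumulates facts across iterations also vetoes actions that violate its constraints.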

Last week, while preparing slides for my trip to Izumo Taisha, I found data published by the Japan Tourism Agency for 2024. In that year's statistics on foreign visitors' overnight stays, first-place Tokyo had 51.2 million, while last-place Shimane Prefecture had only 90,000.


I want to do reactive programming without explicitly calling state-update functions the way React does, but in Solid and Svelte the code is reactive or not depending on how you write it, which makes the cognitive load high.
What to do?


DNS blocking as a sanctions solution for anything is simply a complete abandonment of actual regulation and enforcement of the law.

It will all just end with these blocks having no effect on anyone, while everyone gets to play at compliance.

But who would have expected anything else from the SPD? 😂

Nothing special. I'm old-fashioned, so I just follow Warren Buffett's advice: "buy shares of good companies when they're cheap and simply hold on to them." That's all there is to it.

RE: social.bau-ha.us/@CCC/11608026

This will be great. The state is literally collecting everything it can get its hands on. It no longer even pretends, as a fig leaf, that the main purpose is better care. All of it gets linked into big-data clusters, centralized of course. And then it gets analyzed by AI. What could possibly go wrong.


An acquaintance once complained that a disability-rights group holding a protest inside a Korean subway station during rush hour was a nuisance at that time of day. I got angry and said, "They deliberately protest at inconvenient times precisely so that people like you take notice. Otherwise, you would never even think about how much difficulty disabled people face just getting around."


Some people collect stamps; we at CSS Day collect browser vendors. We already announced Vivaldi and Ladybird; today we add Google and Microsoft.

This allows our attendees to complain efficiently about CSS problems in many browsers at once, while their representatives smile, take notes, apologise, and promise to do better.

Today we announce the following speakers:

- @patrickbrosset (Microsoft)
- @Una Una Kravets (Google)
- and @argyleink Adam Argyle (CSS)

See our full line-up at cssday.nl
