What is Hackers' Pub?

Hackers' Pub is a place for software engineers to share their knowledge and experience with each other. It's also an ActivityPub-enabled social network, so you can follow your favorite hackers in the fediverse and get their latest posts in your feed.
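The ActivityPub integration works the way it does on any fediverse server: when someone follows a Hackers' Pub account from elsewhere, their server first resolves the handle via WebFinger and then fetches the actor document to find the inbox where a `Follow` activity can be delivered. Below is a minimal TypeScript sketch of that lookup under standard ActivityPub/WebFinger assumptions; the handle `alice@hackers.pub` is a made-up example, and this is not code from Hackers' Pub itself.

```typescript
// Resolve a fediverse handle (e.g. "@alice@hackers.pub") to its ActivityPub
// actor document - the first step a remote server takes before following.

interface WebFingerLink {
  rel: string;
  type?: string;
  href?: string;
}

interface WebFingerResponse {
  subject: string;
  links: WebFingerLink[];
}

async function resolveActor(handle: string): Promise<unknown> {
  const [user, domain] = handle.replace(/^@/, "").split("@");

  // WebFinger (RFC 7033) maps the acct: URI to the actor's ActivityPub URL.
  const wfUrl =
    `https://${domain}/.well-known/webfinger?resource=acct:${user}@${domain}`;
  const wf: WebFingerResponse = await (await fetch(wfUrl)).json();

  const self = wf.links.find(
    (l) => l.rel === "self" && l.type === "application/activity+json",
  );
  if (!self?.href) throw new Error(`No ActivityPub actor found for ${handle}`);

  // The actor document lists the inbox/outbox endpoints that Follow
  // activities and post deliveries are addressed to.
  const actor = await (
    await fetch(self.href, {
      headers: { Accept: "application/activity+json" },
    })
  ).json();
  return actor;
}

// Example usage (hypothetical account):
resolveActor("@alice@hackers.pub").then((actor) => console.log(actor));
```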

Happy 55th Oregon Blew Up A Whale Day for those who celebrate!

Also: if you're afraid of doing something because you're worried you'll look like a jackass if you mess up, PLEASE know that you will never look like as much of a jackass as the State of Oregon did as chunks of rotting whale blubber pelted onlookers.

They caught the whole debacle on the evening news.

popularmechanics.com/science/a

Talking with the Chinese colleagues at work, the older generation (the ones who came out during China's early reform-and-opening period) seem to feel that this tug-of-war between China and the US is pointless (everyone's just going through absurd hardship for nothing), while the folks in their 20s and 30s seem to think "we're going to win." So, how to put it... it does feel like, given enough time, something is going to happen. The Chinese influencers in particular seem to be really pumping people up (listening to them, I often think, "that's not your own opinion, that's something you heard somewhere"), brimming with Chinese-nationalist hype while still strongly in denial about China's dark side.

> His authority has not technically examined the individual services, Roßnagel added. "We simply don't have the staff for that, but we have resolved the fundamental questions satisfactorily."

Reads very convincingly. I'll propose the same thing soon: "We'll have a quick chat, but you skip the technical review. Okay?"

What's being peddled here is plain nonsense. Especially in view of:

> In a hearing, the chief legal counsel of Microsoft France had to admit: there is no guarantee that data is safe from being transferred to the US.

Sources:
1 heise.de/news/Gruenes-Licht-fu
2 heise.de/news/Nicht-souveraen-

Walmart now uses AI-based surveillance cameras in all its stores and is rolling out body cameras to associates to collect even more video of customers, all of which is fed into its storage. Kroger, Fred Meyer, and QFC now use AI-based video surveillance as well, and they store facial data of every customer who enters their stores. Going grocery shopping in America is becoming a privacy nightmare that will only get worse.

Anyway. The long and short of this thread is that, with a sufficient understanding of how the llm mechanism works under the hood, the whole "guardrail" thing becomes obviously impossible to achieve, and that if you want a machine that isn't going to randomly output shit from alt.sex.stories.llama.farmers into your CI infrastructure, you're gonna need a different system.

Thinking is not language-based.

Language is an API by which people approximate the concepts in their heads to each other.

The semantic connections between words and concepts are not fixed; they're fairly sloppy, with a high degree of tolerance in how they fit together.

This is a feature; this is how poetry works, for instance.

This is also why, in areas such as law and medicine, the practitioners have fossilized specific semantic connotations and relationships using extremely specific jargon that you don't find outside of those fields, and frequently use Latin - a language not subject to the same forces of semantic drift as English, due to the paucity of normal speakers of it - to ossify those concepts and keep them consistent.

Starting -from- language and working backwards to the underlying conceptual framework is the opposite of how humans learn in the first place; infants learn basic facts about the world during their early life, and then they are taught the external cues that allow for communicating facts about the world with their caretakers through consistent conditioning - same way you teach a dog to sit; you associate the condition with the word 'sit' and thus achieve instruction.

While llms are certainly a clever way to create the impression of "understanding," it is, ultimately, a trick - the only 'understanding' comes from the human side; Clever Hans is not doing math at all, but engaging in a fuzzing of human responses to get the sugar cube.

And likewise, these "ai scientists" are genuinely incredibly blinkered in how they pursue "new advancements" in "machine cognition" - not a single one of them, as far as I can tell, has considered that we as humans -already have- examples of "superintelligence" in the real world.

You know how you get 'superintelligence'?

You collect a group of people with diverse viewpoints and experiences in the same room, give them a reason to want to work together, and remove obstacles to communication.

It's called fucking -teamwork- and it's been a thing for roughly all of human history.

"The whole is greater than the sum of its parts" y'know? That aphorism exists for a reason! Emergent effects from teams working together is a phenomenon that's kinda been around forfuckingever!
