Search results

Fediverse & AI Coding Tools & Vibe Coding

...

I noticed 2 or 3 people lately using AI coding tools to create Fediverse software.

2 of them even seemed to be Vibe Coding.

...

I have been programming for over 30 years. I am probably not going to Vibe Code, but —

I wonder if we should help them.

There are tools we (Fediverse developers) could create to make it so others could Vibe Code Fediverse apps.

@reiver@reiver ⊼ (Charles) :batman: use is antithetical to the principles of freedom of Internet, so dear to the

They're tools of control, by large corporations, which use their revenue and influence to erode democracy, concentrate wealth, take power away from people. Exactly the opposite of what fediverse stands for.

"Vibe coding" is an insult to any and everyone who dedicated effort, time to learn to code.

0
0
0
0
0
0

vibe coding vs AI assisted coding

4/

I tried this myself on a large, complex code base I was unfamiliar with — to try to get that first-hand experience so that I could have an informed opinion. I actually found that use-case useful.

It sped up hours or days of tedious work.

I can see how software-engineers would find this particular activity useful — to ask an LLM questions about large, complex, unfamiliar code base.

vibe coding vs AI assisted coding

5/

Another interesting thing is — MCP (Model Context Protocol).

Where software-engineers are creating a type of server. Sort of like a web application server — but it speaks MCP rather than HTTP.

(MCP is just JSON-RPC 2.0 with certain methods communicated over STDIN and STDOUT.)

And, the MCP developer uses an LLM as a type of front-end for their MCP server.

That use is NOT Vibe Coding either.

0

vibe coding vs AI assisted coding

4/

I tried this myself on a large, complex code base I was unfamiliar with — to try to get that first-hand experience so that I could have an informed opinion. I actually found that use-case useful.

It sped up hours or days of tedious work.

I can see how software-engineers would find this particular activity useful — to ask an LLM questions about large, complex, unfamiliar code base.

0

vibe coding vs AI assisted coding

2/

The first thing I did is — I started by paying more attention to how other people are using AI coding tools.

One thing I noticed is —

Some users are Vibe Coding —

mastodon.social/@reiver/115639

But, not everyone who uses AI coding tools is Vibe Coding!

Especially the software engineers I looked at who are using AI coding tools. They don't seem to be Vibe Coding — they seem to be using them differently.

0

vibe coding vs AI assisted coding

1/

8 days ago, I decided I would look closer at some of these AI coding tools.

I don't feel I *need* them — I have been programming for over 30 years, but —

The reason I want to look closer — I want a deeper understanding of them so I can have an informed opinion.

I've seen some people cheer them. While others boo them.

I haven't really had an opinion of them — because I lacked any first-hand experience.

So —

vibe coding vs AI assisted coding

2/

The first thing I did is — I started by paying more attention to how other people are using AI coding tools.

One thing I noticed is —

Some users are Vibe Coding —

mastodon.social/@reiver/115639

But, not everyone who uses AI coding tools is Vibe Coding!

Especially the software engineers I looked at who are using AI coding tools. They don't seem to be Vibe Coding — they seem to be using them differently.

0

vibe coding vs AI assisted coding

1/

8 days ago, I decided I would look closer at some of these AI coding tools.

I don't feel I *need* them — I have been programming for over 30 years, but —

The reason I want to look closer — I want a deeper understanding of them so I can have an informed opinion.

I've seen some people cheer them. While others boo them.

I haven't really had an opinion of them — because I lacked any first-hand experience.

So —

0

RE: mastodon.social/@riotnrrd/1159

And @tomTom Casavant found a whole other set of issues with thin wrappers around functionality: you can prompt-inject them to the point of running your smart home on them! tomcasavant.com/your-search-bu

0

curl, which is one of the most popular CLI/API tools for network requests and data transfer on Linux/Unix, is to discontinue its HackerOne bug bounty program due to "too strong incentives to find and make up 'problems' in bad faith that cause overload and abuse".

The authors simply cannot keep up with LLM-generated fake security reports created to collect money using bots. So, it now shuts down at the end of January 2026. This is why we can't have good things

github.com/curl/curl/pull/20312

There's also at least one major city that has a public chat bot, New York (a few years ago they seemed to have gotten in trouble for telling businesses they were allowed to take tips from employees). But yes, it's public, so obviously suffers from the same fault that they all do.

Anyways, all the services mentioned in this thread, and many more, have been put together in a basic python library that lets you interface with any of them anywhere. Probably, to be safe, I recommend only using this behind a VPN:

github.com/TomCasavant/openllms

And also the Maubot plugin for matrix:

github.com/TomCasavant/openllm

0

I tried an experiment: instead of asking an LLM to help with a project, I asked it what it would build.

It picked a small Rust CLI for code archaeology - a tool to explain why a repo looks the way it does.

The interesting part wasn’t the code, but my role: mostly setting boundaries, killing scope creep, and keeping it a tool instead of a product.

Less “AI replaces creativity,” more “AI accelerates the boring parts if you already know when to say no.”

jakegoldsborough.com/blog/2026

0
0

In the moment, I found this annoying. But LLMs are just machines, tools.

I think there's a ton of value to be derived still from people using these tools responsibly.

I'm glad to be reminded of the limits of them.

And tools must be wielded someone. So that means....

We still have jobs, yo!

5/x

@raiderrobertRobert Roskam
I have run into similar drifting. Between python package managers.

Starting a project in Python with `uv`. After getting the project directory configured for `uv` i find that `gemini` keeps trying to steer me back to `poetry`.

Probably because it was trained on a bunch of projects that used `poetry`.

Future innovative work will be hampered because the LLMs don't know about it. LLMs will be the old crotchety guy in IT that doesn't want to try anything new.

0

An obvious remark.

In the world of one-click LLM proofreads, I immediately value your post/blog/comment/email if you used your brain to write it.

There are signs that and I'm sure we can all pick those writing styles intuitively now. I can't believe this but it is almost a sign of respect in 2026 that the person replying to you used their own thoughts. It doesn't matter if you write perfect text with no grammatical mistakes, because it is human. Please don't let this practice die. Anyone else feel this way?

0

Ich bin ja dafür, dass wir ekelhaft ab sofort immer auf "d" enden lassen.
Einerseits um zu jederzeit klarzustellen, was von rechtsextremistischen Parteien zu halten ist.
Andererseits um die zu verwirren, wenn sie unsere Texte als Trainingsdaten scrapen.

Also: ist ekelhafd. Die ist ekelhafd.

Spread the word.

0
0

In simple terms, once again, why you can stick your “” (more precisely: and ) up to where the sun doesn't shine. None of this is unclear, everything has long been publicly known and documented:

” is a speculative bubble that causes massive damage to the and ; cheats people out of the wages for their labor and their art; contaminates and corrupts the acquisition, maintenance, and dissemination of knowledge like microplastics, not even stopping at your brains; and financially yields several orders of magnitude less than it costs. It is not even remotely viable on its own and ultimately hopes for permanent financing through the squandering of taxpayer money, as soon as “the market” has had enough of fuzzy promises of "returns", and no longer wants to “invest.”

You like “AI” because it produces plausible-sounding and plausible-looking bullshit and because you have learned in the 21st century that you can get away with it, provided you don't give a damn about future generations. In the end, you'll be left without even a hint of the illusion that immense amounts of money have been wasted here on something “systemically important,” as was the case last time.

0

Every time I explain to someone that "AI" is really just a "word calculator" that phrase suddenly demystifies the topic and you see the lights go on inside their brain.

The biggest trick the tech bros pulled was simply making ordinary folks believe AI has some kind of mysterious capability. It doesn't.

It's great at highly repetitive pattern matching, and ideal for generating very mediocre output. Mostly, AI is a solution desperately in search of a problem.

#LLM #AI
0

⏰ Reminder! Fact-checking with Wikidata workshop

📆 20 January 2026 | 🕟 16:30–18:00 (UTC+1)

Philippe Saadé (Wikimedia Germany) and @DataTalks.Club host a hands-on workshop using the Wikidata Model Context Protocol (MCP) for fact-checking beyond generative AI.

✔ Wikidata intro
✔ Fact-checking with MCP, Large Language Models (LLM) & semantic search
✔ Build a small fact-checking pipeline

👉 Register now: luma.com/7fs5v7os

Philippe Saadé (Wikimedia Germany) and @DataTalks.Club Fact-checking workshop with Wikidata Model Context Potocol, register on the link to join
0

Eine grundlegende technische Differenz, die m.E. jede wissenschaftspolitische LLM Strategie berücksichten muss:

Generative (autoregressive) Modelle (die würden wir z.B. für Code Generation brauchen) sind etwas anderes als autoencoding Modelle (für z.B. Klassifikation) oder seq2seq Modelle (für z.B. (multimodale) Übersetzungen). Die autoencoders müssten im Vergleich zu GPT, Claude & Co. - bei gleicher Skalierungsstufe wohlgemerkt - Klassifikation und Informationsextraktion *viel besser* beherrschen, kein ausbeuterisches RLHF benötigen und nur wenig für Halluzinationen anfällig sein. Sie sind halt von den kommerziellen Anbietern nicht auf dieselbe Stufe hochskaliert worden wie die "Chat" Modelle.

Das müssten wir in der Wissenschaft vielleicht selber machen, aber das hätte ja auch Vorteile.

Technische Frage: Ist es eigentlich möglich, ein autoencoding oder seq2seq Modell so zu trainieren, dass es - wie die bekannten Chat-Modelle - beliebige Anweisungen in natürlicher Sprache entgegennehmen und verarbeiten kann, oder ist dazu die generative Architektur unabdingbar?

Das ist ja vielleicht der größte Vorteil des Trainings, das diese Modelle erfahren haben.

0

Eine grundlegende technische Differenz, die m.E. jede wissenschaftspolitische LLM Strategie berücksichten muss:

Generative (autoregressive) Modelle (die würden wir z.B. für Code Generation brauchen) sind etwas anderes als autoencoding Modelle (für z.B. Klassifikation) oder seq2seq Modelle (für z.B. (multimodale) Übersetzungen). Die autoencoders müssten im Vergleich zu GPT, Claude & Co. - bei gleicher Skalierungsstufe wohlgemerkt - Klassifikation und Informationsextraktion *viel besser* beherrschen, kein ausbeuterisches RLHF benötigen und nur wenig für Halluzinationen anfällig sein. Sie sind halt von den kommerziellen Anbietern nicht auf dieselbe Stufe hochskaliert worden wie die "Chat" Modelle.

Das müssten wir in der Wissenschaft vielleicht selber machen, aber das hätte ja auch Vorteile.

0

Die Frage, ob LLMs intelligent (oder nützlich oder zuverlässig oder verständig oder kompetente "Sprecher") sind oder nicht, bewegt sich für mich auf einer komisch allgemeinen Ebene.

Um zu verstehen, was sie tun, muss man m.E. nicht nur verstehen, dass sie probabilistische Next Token Predictors sind (die autoregressiven LLMs jedenfalls), sondern auch, dass (und wie und welche genau) sie interne Repräsentationen von abstrakten Einheiten wie grammatischen Wortklassen, rhetorischen Textstrukturen und semantischen Feldern haben - und wie diese Repräsentationen in die Next Token Wahrscheinlichkeiten hineinwirken.

1/4

0

Epiphany moment: the people who claim that / can think and who claim that they are arguing/debating with those systems are also the people who ask questions which can be answered with "yes" and/or end their sentences with "[...], okay?"

People who use open questions or end question with "[...], correct?" tend to question themselves, and external systems, more. So they don't fall into the " think" trap

0
0

I follow Simon's blog (@simonSimon Willison ), as I find it very informative, thorough, and enlightening.

His most recent post on LLM predictions for 2026 did catch my eye and I had a few thoughts I wanted to capture.

Keep in mind, they are not repudiations.

I happen to have more of a fascination with words and metaphors (damn humanities background), which sends me on certain journeys.

Anyway, here you go...

pythonbynight.com/blog/challen

0

As I suspected it probably would be, my bug bounty submission of using an AI email summarizer was closed as being 'infeasible' and an 'acceptable risk' with AI.

But still - I think it's an interesting finding, so I have written it up thus: mike-sheward.medium.com/recrui

TL;DR = I discovered how you can use Google Workspace's Google Gemini Email Summarizer to make a phishing attack seem more convincing, because it summarizes hidden content.

0
0
0

Test

問題:誰是XXX[我的真實姓名]?

Grok 3: 香港的一位政治人物及商人……他曾是立法會議員,代表工商界功能界別,並且是自由黨的成員...出生年月日: 1963年11月4日

Grok 4: 香港一名年輕的金融界人士,主要活躍於加密貨幣和投資領域。

Gemini 3-Pro: ....從一位傳統的財經報章傳媒人,成功轉型為一位活躍於網絡的時事評論員及移民生活分享者。如果你在網上搜尋他的名字,最常看到的會是他關於英國生活以及點評香港財經時事的影片或文章。

DeepSeek-R1: 一位香港藝術體操教練

最靠譜:GPT 5.2
"我需要一點背景才能準確回答,因為「XXX」這個名字並非全球廣為人知的單一公眾人物,可能指不同領域或地區的不同人。

你可以告訴我其中之一嗎?例如:

地區(香港、台灣、中國大陸、海外?)
身分或領域(政治人物、學者、學生、企業家、運動員、藝人、網紅?)
出現的情境(新聞、學校、公司、論文、社交媒體?)
如果你只是想確認「是不是某個有名的人」,目前在公開且可靠的資料中,沒有一位廣為人知、具全國或國際知名度的公眾人物叫XXX。..."

0

Why do I hate "" so much, why do I find it so offensive?

It's the and the utter, smug, sneering contempt for what makes us human.

The pseudo-philosophy that posits that "you are also just " and therefore have no value.

No, you're not. Don't buy it. It's these abominable mechanical idols that have no value, and never will.

0

Been experimenting with + a bit, a few things are very odd:

1. The quality of interactions I'm having with Claude inside Zed is of lower quality than in VSCode + Claude or in Cursor. It seems "dumber", assumes more, doesn't verify claims, just generally more nonsensical answers and needs more hand-holding.

2. It seems to consume a lot more tokens, I'm not sure why, I suspect they don't properly cache, don't compact, don't optimise input/outputs, etc. This is horrific.

0
0
0
0
0

LLMs are reshaping software dev. I don't buy "the end of software dev": Project ambition will grow dramatically.

Ancient Egyptians could build the Pyramids but not the Empire State Building.

Pre-LLM software will be viewed like we view the Pyramids.

0
0
0

/ propaganda is so insidiously effective even for laypeople.

I’ve had multiple conversations with family members who: don’t speak English, don’t own computers (only mobile phones), and barely spend time online.

I told them that I am no longer working with most tech company clients because I don’t like AI and don’t want to support it (“AI” here = gen AI, LLMs).

And yet these people all reacted the same way: concern, shock, and comments like “but this is inevitable”, “this is the future”, “you’ll have to accept it eventually”, “won’t refusing it ruin your career prospects?”

These are people who know nothing about technology. They usually wouldn’t even know what “AI” meant. And yet here they are, utterly convinced of AI company talking points.

0

【狂気の実証実験1】AIエージェントに電気ショック権限を付与したら生活が更生した - Qiita qiita.com/motoya0118/items/da1

> Pavlokは、行動変容(Behavior Change)を目的としたウェアラブルデバイスです。

> 本来は「禁煙 / 早起き / スマホ依存対策」などが想定用途ですが、APIを公開しているという一点で、今回の構想と相性が良すぎるデバイスでした。

> 本記事では、「AI Agentが人間を監視し、必要に応じて物理的な刺激を与える」という用途でPavlokを使用します。

> パチンコ店を見ても「電気ショックが怖い」が先に来て、行きたくなくなった
> ラーメンを食べたいと思っても「電気ショックが怖い」で踏みとどまれるようになった

なんだこれは…

0

We will not publish anything “written” by “AI” because we don’t see the point. Through stories we humans share our experiences, ideas, dreams. Through stories we can live outside of ourselves and come to know one another a little better. Through stories we make sense of the world. None of that can sprout from a generative bullshitter program.

It’s sad that this even needs to be stated, but as our current Mastodon instance ends in “.ai” … well, we have spoken.

0
0

It doesn't matter who wins the AI race, whether it’s OpenAI, Google, Microsoft or world governments. We will be the collateral damage in this. Energy bills, water bills, and hardware prices are going to become major talking points in 2026 and beyond. These are just a few things I’ve noticed as 2025 comes to a close.

0