Search results

This is a really excellent non-fiction piece by @WeirdWriter (Robert Kingett) about a writing group with a tech bro:

sightlessscribbles.com/the-col

It is a distilled essence of the social and cultural damage AI/LLMs are causing: how AI promoters are cynically destroying people's confidence in their own humanity, while simultaneously trying to ridicule and other the people who point out that AI is bullshit. (And this isn't even mentioning the environmental consequences.)

https://github.com/ComposioHQ/awesome-claude-skills/tree/master/skill-creator

I'm planning to make active use of the Claude Skill feature, and there's a skill-creator skill that helps you build new skills. I should study it a bit more and see what I can build that's actually useful to me.
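
If I read the repo right, a skill is just a folder whose SKILL.md frontmatter tells Claude when and how to use it; a minimal hypothetical example (the name and body are mine, not from the repo):

```markdown
---
name: changelog-writer
description: Drafts a Markdown changelog entry from pasted commit messages. Use when the user asks to summarize recent changes for release notes.
---

# Changelog writer

1. Ask for the commit messages or a commit range.
2. Group the commits by type (feature, fix, docs, chore).
3. Emit a changelog section with one bullet per user-visible change.
```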

Healthy radium cigarettes

This one was inspired by a tweet I saw a few years ago, something along the lines of “machine learning is the radium salts of the twenty-first century”. This joke is more relevant than ever; corporations are trying to squeeze “AI” into every role imaginable, no matter that it’s mostly useless and often harmful.

Full size (3000×3500, 1.13 MB): deviantart.com/lurkjay/art/105

Painting of a white pack of cigarettes on a dark blue background. The front of the pack says “LLM” in dark red slab-serif letters, underneath it is a line of dark grey sans-serif text saying “healthy radium cigarettes”.
One thing about "AI": with the technology OpenAI has (a large neural network plus manual tagging) you could've made the best search engine ever. There could be a Copilot where you describe what you want and it finds an example of it in the corpus of open source software. You could go from a fuzzy image description to a stock image. These would be better than buggy code and fucked-up images. But they wouldn't do that, because the *service* OpenAI provides is obscuring that the content is stolen.
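
A rough sketch of that retrieval-first idea, using off-the-shelf text embeddings (the library choice, model name, and corpus here are all mine, purely for illustration):

```python
# Toy "describe it, find an existing example" search: retrieval with
# provenance intact, instead of generating a mashed-up copy.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

# Stand-in corpus; in the scenario above this would be an index of real
# open source code (or licensed stock images with their descriptions).
corpus = [
    "def quicksort(xs): ...  # in-place quicksort, median-of-three pivot",
    "async def fetch_json(url): ...  # aiohttp GET returning parsed JSON",
    "class LRUCache: ...  # dict + linked list, O(1) get and put",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query = "a cache that evicts the least recently used entry"
query_emb = model.encode(query, convert_to_tensor=True)

# Surface the closest existing snippet; attribution stays possible because
# you return the source itself, not a remix of it.
best = int(util.cos_sim(query_emb, corpus_emb).argmax())
print(corpus[best])
```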

@mcc

Not sure when you last used it 👉properly👈.

In my experience, the more vocal an opponent of AI is, the further back in time their use (or lack of use) goes,
with the most ardent opponents having never used the models at all, yet holding the most emphatic (and increasingly inaccurate) opinions.

Attached media, a public query from today, with sources dropdown at the bottom.

Approx 30% of web searches come from AI engines nowadays.

(Edit: Hahaha, insta-blocked by the poster. I guess folks don't like to be called out on stating patently provable falsehoods 🤡

The poster made a comment exposing their ignorance of features of existing AI. This one has 33,000 followers; the question is, "How many others like them have zero idea about the systems they critique?")

ChatGPT with sources
This is an interesting and well-crafted article.

The title should be "Don't shove large language models down our throats."

True artificial intelligence is what you have in your games, your simulators, etc., and it has been here for many, many decades. This is large language model talk, a whole other beast.

Interesting read, recommended.

gpt3experiments.substack.com/p

vibe coding

2/

A number of the people who attended demoed what they made — including my wife's friend.

It was interesting to see how vibe coding was enabling people without programming skills to create applications.

Are their applications as good as applications created by career software-engineers‽ — no, but that is probably OK. Their vibe coded applications seem to be good enough for their needs.

What is interesting, though, is —

vibe coding

3/

What is interesting, though, is —

This (non-programmers using vibe coding to create applications) reminds me of something I noticed decades ago about spreadsheets —

People who are bright but don't know how to computer-program use spreadsheets to create applications.

Are their spreadsheet-based applications as good as applications created by career software-engineers‽ — no, but they are good enough for their needs. And, that's fantastic!

Despite being extremely vocal against AI, I strongly suspect that too many juniors (not just students) are using it for reports and manuscripts. I have no problem dedicating my time to editing a draft so that someone with less experience can improve their writing, but editing AI output is really starting to annoy me.

How are other people in academia (including juniors) dealing with this? How do you tag (or self-tag) AI use in someone's own work?

⏰ The deadline to submit a proposal for the @w3c (World Wide Web Consortium) workshop on Smart Voice Agents (Feb 2026, virtual) is 27 Nov 2025!

Smart voice agents need clearer use cases, better voice-based interaction, and improved accuracy and multilingual support. Broader concerns include device coordination, regulatory gaps, and emerging business models.

Don’t miss your chance to present your work and submit now: w3.org/2025/10/smartagents-wor

W3C Workshop on Smart Voice Agents - February 2026, Virtual on zoom
LLMs are *absolute concentrated brain poison* for these folks. They try out the LLM to see if it can solve a few simple problems and then they extrapolate to more complex problems. Wrongly. They infer from social cues in their cohort, which are absolutely fucked by the amount of synthetic money (and maybe fraud?) driving a subprime-bubble type mania. They infer from the plausibility of its outputs, which are absolutely fucked because the job of these models is to produce plausible outputs.

@glyph

I've said it before, but the antivenin for this brain poison is to boot up ollama and try some really small models. With a sufficiently small model, it's completely obvious that the machine has no understanding, no consciousness, no intelligence, and no mental model of the problem it's being asked to solve. That insight equips the user to interact with a larger model and not be bamboozled by its plausibility.

ollama.com/search
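
For example (assuming a local ollama install and its Python client; the model tag is just one arbitrary pick from the small end of their library):

```python
# Assumes `ollama serve` is running locally and the model has been pulled
# with e.g. `ollama pull qwen2.5:0.5b` (pip install ollama for the client).
import ollama

reply = ollama.chat(
    model="qwen2.5:0.5b",  # ~0.5B parameters: small enough that the seams show
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)
# Watching a tiny model confidently fumble simple questions makes the
# "plausible text, no understanding" point viscerally obvious.
print(reply["message"]["content"])
```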

All mentions of "AI" should be changed into "Brain Rot Inducer"…

"Do you want the Brain Rot Inducer to summarize this text for you?"

"The Brain Rot Inducer can auto-answer this e-mail for you!"

"Let the Brain Rot Inducer write this social media post for you!"

"Easily generate an image with the Brain Rot Inducer, and call it your own creation!"

What would probably increase the quality of responses is if they added hyperlinks to high-quality sources from real people. As in, not just mashed-up footnotes, but actual words/phrases/sentences within their responses linked to real people's/orgs' websites. Obviously they should be correct reference links too. No need for the web to die.

If you've invented a useful technology or fun gadget which people genuinely want to use, you don't have to say things like "It's here to stay like it or not" or "You'd better get used to it" or "It's not going away".

I don't recall anyone from Nintendo parading around with the Wiimote screaming "YOU'D BETTER GET USED TO THIS WHETHER YOU LIKE IT OR NOT!"

If you DO have to use such phrases, maybe people don't actually want it or find it useful?

> This book is dedicated, in respect and admiration, to the spirit that lives in the computer.
> I think that it's extraordinarily important that we in computer science keep fun in computing.

mitp-content-server.mit.edu/bo

When your boss and all the tech influencers push you to offload your creativity and understanding to an LLM, it's hard to remember this sometimes.

@amszmidt@mastodon.social (Alfred M. Szmidt)

The way LLMs are voiding classic copyleft was experimentally demonstrated years ago, when Copilot was caught distributing well-known code with a permissive license and wrong author attribution.

Also, that incident technically demonstrated how Copilot's model itself is a derivative work of copylefted works: if a lossy compression of copyrighted material is still subject to the authors' copyright, encoding such compression as arrays of floats that can be executed by virtual processors with a dedicated architecture (so-called "inference" engines) doesn't change its nature as a derivative work. Similarly, violating the copyrights of millions of authors at once doesn't free you of such rights.
That's basically why the AI corporations are so scared by the current lack of sustainable business models for their LLMs: they need money to keep judges away.
But anyway, I have yet to find a single person who debates, with technical competence and in good faith, the derivative nature of LLMs with respect to the text corpora compressed into their models.

As for the Hacking License not being a Free Software license, it's debatable after a careful read, since the only thing you cannot do with the software is prevent others from enjoying the same freedom it grants you.

Yet I've never claimed it is Free Software because, sadly, I'm forced to move beyond Free Software by its own limits.

OTOH I'm proud that it's not an OSI-approved license, as I'll never submit it to a corrupted OSI.¹

As for "open source", it's a term designed to confuse free software values with corporate propaganda while marginalizing hackers: it's a leaky abstraction designed to fool developers and exploit their naive groupthink. I've been fooled myself. Never again. 😉

____

¹ The way OSI handled the Open Source AI Definition, with an overcomplicated process that doubled corporate lobbyists' votes to exclude training data from the requirements, confirmed my opinion about them.

@doctormo@floss.social (Martin Owens)
Still, LLMs are voiding the GPL's (and copyleft's) reciprocity.

That's why years ago I wrote the https://encrypted.tesio.it/documents/HACK.txt

It was designed with automated corporate exploitation of free software in mind: its goal is to balance rights and duties, and it shares with those that accept it much more than permissions, while being a stronger copyleft and an explicit shrink-wrap contract.

Unfortunately, it's not compatible with GPL, because GPL is much weaker.

The fundamental issue of Free Software, the one that let people create the open source narrative and permissive licenses to exploit programmer ideals and free labour, was that Stallman, as an American who grew up during the Cold War, was too fond of the freedom-vs-communism propaganda to understand how a lack of rules means the rule of the rich.

The problem is not commercial use of free software but commercial exploitation of free labour, as @doctormo@floss.social (Martin Owens) correctly stated.

The Hacking License does not prohibit commercial use, but it requires recipients to share their own source code with the users of any derivative or dependent work they create, as a contractual obligation.

It's modelled after Elinor Ostrom's research on commons governance and on hacker ethics.
Giving an LLM agent root access to debug your Linux system is like handing someone a spoon to eat spaghetti—technically possible, catastrophically messy.

Shannot solves this with secure sandboxing for AI diagnostics. LLM agents can read logs, inspect configurations, and run diagnostic commands in a locked-down environment with zero write permissions.

They get the visibility they need to help you troubleshoot, without the access to accidentally destroy your system in the process.

github.com/corv89/shannot
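
I haven't read Shannot's source, but the core idea is easy to picture; here's a toy sketch of a "read-only diagnostics only" policy (Shannot itself uses real sandboxing, not just an allowlist, so treat this purely as a concept illustration):

```python
# Toy illustration of the concept, not Shannot's implementation: the agent
# may only invoke an allowlisted set of read-only diagnostic commands.
import shlex
import subprocess

READ_ONLY = {"uptime", "df", "free", "ps", "ss", "dmesg"}

def run_diagnostic(command: str) -> str:
    argv = shlex.split(command)
    if not argv or argv[0] not in READ_ONLY:
        return f"refused: {argv[0] if argv else '(empty)'} is not an allowlisted read-only command"
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return result.stdout or result.stderr

print(run_diagnostic("df -h"))          # allowed: inspect disk usage
print(run_diagnostic("rm -rf /tmp/x"))  # refused: write access denied
```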

Claude Desktop is running Shannot MCP to diagnose a Linux VM autonomously but securely thanks to sandboxing
LLMs are a fundamentally useless technology because their applications (supposedly) boil down to humans not having to think for themselves or do their own writing / drawing / filming.

But if you can do it on your own - why would you need a robot to do it? It’s, at best, a novelty.

That’s why this shit only resonates with executives and capital owners. “Get things done with fewer people and expenses” is at least an actual pitch. “Get things done faster for yourself” isn’t.

The individual angle only really works for things you were already trying to avoid doing because you're either uninterested or don't have enough time to do things right.

“Avoid your work” as a value proposition doesn’t work when you’re dealing with intellectual labor rather than commodities. Not large scale, not long term.

Sorry for the rant, I saw some Notion ads on the subway and got irritated 😅

If you avoid using an LLM, what are your primary reasons?

Mastodon only allows 4 options in polls and I know there are many more possible concerns.

If it's something else, like the spread of disinformation, mental health concerns, etc. reply with that.

I studied Artificial Intelligence for four years, and I am not touching LLM AIs with a ten-foot pole.

It's not really about the insane electricity demands or the water usage, though those are good reasons. It's not even, if I'm honest, about the disastrous effect on the sum of all human art and knowledge.

It's because a) I've studied enough AI to know it's a trick, a sort of linguistic illusion, and b) I've studied enough everything else to understand that I'm not immune to such illusions.

I recently stopped my Windows-based computer backing up every document and photo to the cloud, because fuck Microsoft and their OneDrive - I never asked for their fucking cloud storage in the first place. Or their LLM that they incorrectly call AI - it is not AI and can't even add up.

Anyway, as part of this process, Microsoft then automatically deleted all my files - *including* the versions on my own computer. Fuck Microsoft.

As it happens, I back up all my computer files to an external hard disk on the first day of every month. As a result, I was able to copy the files and directories back to my computer with little disruption.

If you are going to disconnect from your cloud storage - be warned in advance … back up all your files and directories to an external hard disk first.

APOLLO: Automated LLM and Lean collaboration for advanced formal reasoning. ~ Azim Ospanov, Farzan Farnia, Roozbeh Yousefzadeh. arxiv.org/abs/2505.05758

APOLLO: Automated LLM and Lean Collaboration for Advanced Formal Reasoning

Formal reasoning and automated theorem proving constitute a challenging subfield of machine learning, in which machines are tasked with proving mathematical theorems using formal languages like Lean. A formal verification system can check whether a formal proof is correct or not almost instantaneously, but generating a completely correct formal proof with large language models (LLMs) remains a formidable task. The usual approach in the literature is to prompt the LLM many times (up to several thousands) until one of the generated proofs passes the verification system. In this work, we present APOLLO (Automated PrOof repair via LLM and Lean cOllaboration), a modular, model-agnostic agentic framework that combines the strengths of the Lean compiler with an LLM's reasoning abilities to achieve better proof-generation results at low token and sampling budgets. APOLLO directs a fully automated process in which the LLM generates proofs for theorems, a set of agents analyze the proofs, fix the syntax errors, identify the mistakes in the proofs using Lean, isolate failing sub-lemmas, utilize automated solvers, and invoke an LLM on each remaining goal with a low top-K budget. The repaired sub-proofs are recombined and reverified, iterating up to a user-controlled maximum number of attempts. On the miniF2F benchmark, we establish a new state-of-the-art accuracy of 84.9% among sub-8B-parameter models (as of August 2025) while keeping the sampling budget below one hundred. Moreover, APOLLO raises the state-of-the-art accuracy for Goedel-Prover-SFT to 65.6% while cutting sample complexity from 25,600 to a few hundred. General-purpose models (o3-mini, o4-mini) jump from 3-7% to over 40% accuracy. Our results demonstrate that targeted, compiler-guided repair of LLM outputs yields dramatic gains in both efficiency and correctness, suggesting a general paradigm for scalable automated theorem proving.
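
The abstract reads like a loop you could sketch in a dozen lines; this is purely my paraphrase of the abstract, with placeholder names rather than the paper's actual API:

```python
# Sketch of the compiler-guided repair loop the abstract describes; every
# helper below is a placeholder standing in for a component of the paper.
def apollo_repair(theorem, llm, lean, agents, solvers, max_attempts=10):
    proof = llm.prove(theorem)
    for _ in range(max_attempts):
        errors = lean.check(theorem, proof)       # verification is near-instant
        if not errors:
            return proof                          # fully verified proof
        proof = agents.fix_syntax(proof, errors)  # repair surface-level errors
        for goal in agents.failing_sublemmas(proof, errors):
            # cheap automated solvers first, then a low-budget LLM call
            sub = solvers.attempt(goal) or llm.prove(goal, top_k="low")
            proof = agents.splice(proof, goal, sub)   # recombine, then reverify
    return None                                   # attempt budget exhausted
```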

I'm in a GitHub internal group for high-profile FOSS projects (due to @leaflet having a few kilo-stars), and the second most-wanted feature is "plz allow us to disable copilot reviews", with the most-wanted feature being "plz allow us to block issues/PRs made with copilot".

Meanwhile, there's a grand total of zero requests for "plz put copilot in more stuff".

This should be indicative of the attitude of veteran coders towards AI creep.

A screenshot of a GitHub discussion titled "can't disable copilot code reviews"
A screenshot of a GitHub discussion titled "Allow us to block Copilot-generated issues (and PRs) from our own repositories"