Search results

📰 How I made it possible to talk to Claude Code using only my glasses (👍 43)

Integrated Claude Code with smart glasses (Even G2) for hands-free coding while mobile: perfect for parents who can't stay at their desk.

🔗 zenn.dev/wmoto_ai/articles/cla

๐Ÿ“ฐ RAGใง่ถณใ‚ŠใชใใชใฃใŸใฎใง Agentic Search ใ‚’่ชฟในใฆใฟใŸ (๐Ÿ‘ 31)

๐Ÿ‡ฌ๐Ÿ‡ง RAG wasn't enough for company chatbot despite tuning. Explored Agentic Search as the next evolution for better retrieval accuracy.
๐Ÿ‡ฐ๐Ÿ‡ท RAG ํŠœ๋‹ํ•ด๋„ ์ •ํ™•๋„ ๋ถ€์กฑ. ๋” ๋‚˜์€ ๊ฒ€์ƒ‰ ์ •ํ™•๋„๋ฅผ ์œ„ํ•ด Agentic Search ํƒ์ƒ‰.

๐Ÿ”— zenn.dev/edash_tech_blog/artic

The Verge: Grammarly is using our identities without permission

'Expert Review' AI agents make suggestions supposedly inspired by subject-matter experts, including several staff members here at The Verge.

theverge.com/ai-artificial-int

You all should read this fantastic post by @onepict (Esther Payne) about how, largely, AI people just can't leave us the fuck alone. But I'm noticing more and more that they get super mad when they want to shove their creation at you and you simply say, no thanks. That's it. No extra bashing, just, no thanks, and they get super offended. dotart.blog/cobbles/ai-and-tha

The US economy, currently driven by an AI bubble, is keeping its head above water for now, but it is only a matter of time before everything goes south. If you didn't experience the 2000 or 2008 crises (and I hope nobody sees another), I urge you to take precautions, starting by reducing unnecessary spending. Trim it down. If possible, build 6–12 months of savings to prepare for potential job loss. Avoid buying items on credit. Economic crashes are like harsh mistresses: they do not forgive easily. (2/2)

@nixCraft 🐧

The FUNNIEST (not) thing about THIS bubble is how every single techbro-friendly moron goes "hurrr... it's NOT a 'bubble' if we can *see* it coming, HA HA GOTCHA", as the bubble builds and everyone thinks it's not a bubble because they read what those idiots are saying... and then they keep pouring money in and giving the LLM bullshit attention, time, platform... and money. Did I mention money?

Buckle in, hold onto your butts, this is gonna go pop, soon.

:blobhyperthink:

AI / LLM technologies are an existential threat to humanity well before we even get anywhere near AGI and the singularity.

The sheer scale of deployment and integration into all the nooks and crannies of society, where we give it access to all our information and now, with the Rise of the Agents, let AI act on it too. Giving full control away from us..

.. to the owners of this technology, the usual suspects, and their billionaire class. Folks who are clearly out to dominate us, and keep us in check so they can continue their fancy lifestyles wallowing in decadence and moral depravity.

🤖 Unrestrained AI is the tech for 🧛 unrestrained elites.

Meme with text "Where do you want to be tomorrow?" against a still image from the movie "Blade Runner" with Rutger Hauer and Harrison Ford, showing a view of the sci-fi metropolis in a dystopic scene.

:blobhyperthink:

"Ex-Google PM Vibe Codes Palantir To Watch The Iran Strikes"

youtube.com/watch?v=0p8o7AeHDzg

See also: social.coop/@smallcircles/1161

Alright. Another ..

Is this person's proud presentation of their creative work ..

Microsoft banned the word 'Microslop' in its Copilot Discord server, then began restricting access after users started posting 'Microsl0p' and other funnies. Microsoft's CEO and AI chief want you to stop using words like SLOP and Microslop because it upsets them a lot. Lmao. pcgamer.com/software/ai/micros Maybe you shouldn't force Copilot AI on everyone, steal their data to train your shitty AI, and then tell the world every day how scared they should be that AI is going to replace 80% of jobs?

The claim "you won't be replaced by AI, but by a person using AI" is nonsense. The Block layoff victims were some of the most productive, pilled people in the company, but it didn't save them, because that's not what layoffs are about.

The layoff script goes, as always:
- overhire
- lay everyone off
- pretend it's because of productivity gains
- stock go up

There is no individual solution that will protect you from bad leadership and cost cutting.

productpicnic.beehiiv.com/p/ai

I've been increasingly concerned about the corporate monopoly over frontier LLMs. While many ethically minded people choose to boycott these models, I believe passive resistance alone cannot break the structural grip of big tech. To truly "liberate" these technologies and turn them into public goods, we need to look beyond the moral high ground and engage with the material basis of AI: specifically compute, data, and the relations of production.

I've written two posts exploring this through the lens of historical materialism. The first piece analyzes why current "open source" definitions struggle with LLMs, and the second discusses what it means to "act materialistically" in our imperfect world. My goal is to suggest a path forward that moves from mere boycotting to a more proactive, structural socialization of AI infrastructure.

If you've been feeling uneasy about the AI landscape but aren't sure if boycotting is the final answer, I'd love for you to give these a read:

📰 Security is hard (👍 44)

Explores the structural challenges of enterprise security work: beyond catching up on knowledge, it demands a deep understanding of the systems involved.

🔗 zenn.dev/mizutani/articles/sec

📰 Adding more examples makes LLM performance worse: discovering and detecting few-shot collapse (👍 21)

Describes "few-shot collapse": adding more prompt examples can hurt LLM performance. The AdaptGauge tool measures the effect on real tasks.

🔗 zenn.dev/okuma/articles/few_sh

Whatever we think of the AI / LLM mad hype cycle, we have to deal with its rushed and inhumane dumping of the technology into global human society.

CALM is a strategic approach that allows activist voices to have the most impact in dealing with the dangers of disruptive technology introductions. It focuses beyond berating people and demanding sacrifice ("don't use it, or else..") on creating a process that helps win people over and work together on best outcomes, in the direction of solutions.

CALM stands for Constructive Activism-Led Movements, such as the Social Coding commons. Coding is social, and social coding is the holistic approach to ensuring that.

The Social Coding commons evolves social experience design: solution development for grassroots movements, supported by the commons.

In the thread below I copied a post to the community with a suggestion to ponder the best outcomes of the current and ongoing AI disruption, and how to deal with its risks.

discuss.coding.social/t/calm-c

AI ADDED 'BASICALLY ZERO' TO US ECONOMIC GROWTH LAST YEAR, GOLDMAN SACHS SAYS

Imported chips and hardware mean the AI investments aren't translating into US GDP growth

File this under "No shit, Sherlock"

OpenAI is constantly close to going bankrupt and can't make a profit even with BILLIONS in investments. They've got to start selling ad space on ChatGPT.

The Bubble is going to burst hard. (No, it's not

gizmodo.com/ai-added-basically

After the Big Bang countless galaxies formed. At the center of each sits a super-massive Black Box. Hidden inside lurks the mysterious AI, which is nothing more than a concept, as we don't know what the heck is going on in there. All our common sense breaks down after we cross the boundary. Coming close to the event horizon of any Black Box inevitably leads to social cooling, as a person is sucked into the void. An outside observer would see that person frozen in time, stagnant. As the universe expands, continuous socialcooling.com will eventually lead to the Big ☠️ RIP of the one who invented the Laws of Online Cooling.

Some days, I wonder if I live in a parallel world.

I want more efficient software (to lower overall power usage of our society, to avoid throwing away hardware after a couple of years, to be able to do more with less).

I fight centralisation of data/knowledge/power in IT (promote open protocols, selfhosting, open source, decentralisation)

I do want a more egalitarian society (no more barriers because of handicaps or upbringing in a non-privileged environment. Improving our democracy with services that help everyone by reducing/eliminating bureaucracy).

I do not want to see our world burn (see point above about reducing waste. But also promoting local LLM usage, and not defaulting to wasteful services for tasks that can be done locally).

Yet... I don't fight genAI. On the contrary, I deeply believe it can help us achieve the above. Faster.

The problem is way too many people are assuming that because (a lot of) people misuse it, the technology must be the issue.

Maybe focus on the people misusing it, and not the technology? Banning usage of genAI altogether in software projects is, IMHO, both counter-productive and impossible.

Are we going to also ban people using LSPs? Linters? Fuzzy search tools? Spell-checkers? Translation tools? Speech-to-text assistants?

Heck, how will *you* know if I used an LLM to assist me? Because of the quality of the contribution I provided? Because I'm not knowledgeable about your project and design? Because English is not my native language, and I used a tool to translate text?

Or maybe it's because, shocking, I used it as yet another tool. And it didn't replace my brain. I still want to ensure what I'm delivering is correct, useful, and maintainable. It doesn't replace all the brainstorming, investigation, analysis, and tests that I do. But it helps me iterate on all of those faster.

What is a PITA is random contributors dumping stuff they didn't properly review or test. The vibecoders. But how is that different from random "code dumps" by people who did a "wrong" fix? Lack of education.

Instead of banning genAI altogether, maybe specify what is expected from the human using it. I.e., that person must "own" what they produced, know exactly what it contains, why it does it that way, etc.

BTW, Jellyfin has a pretty decent and comprehensive set of rules which are a good middle ground; go read it: https://jellyfin.org/docs/general/contributing/llm-policies/

#rant #LLM #genAI #software

Hetzner, which many instances rely on, just announced a major across-the-board price hike. A 30% increase isn't minor.

For larger companies it's a budget line. For small Mastodon instances, it's the line between sustainable and underwater.

When servers jump from €49.99 to €64.99 and storage prices rise too, admins feel it first. Many already subsidize costs.

Some will downsize. Some may shut down.

Decentralization still runs on invoices.

Support your instances.

From Bruce Schneier: "All it takes to poison AI training data is to create a website:

I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs." Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…

Less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire." For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted."

schneier.com/blog/archives/202

I posted on lobste.rs and reddit.com about my latest blog article. It's been a very long time (3 years) since I last did that.

Some comments are… weird… They try to summarize the content of the article, but it's all wrong. Checking the authors' profiles shows they have been active for many months. But really, really, it smells like LLM. Maybe I'm "doubtful" because LLMs are everywhere today, but I don't know what to do with these comments…

Did you experience something similar recently?

If you replace a junior with an LLM and make the senior review the output, the reviewer is now scanning for rare but catastrophic errors scattered across a much larger output surface, thanks to LLM "productivity."

That's a cognitively brutal task.

Humans are terrible at sustained vigilance for rare events in high-volume streams. Aviation, nuclear power, and radiology all have extensive literature on exactly this failure mode.

I propose that any productivity gains will be consumed by false-negative review failures.
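
That proposal can be made concrete with a back-of-the-envelope sketch. Every rate below is invented for illustration; none of them comes from the post or from any study:

```python
def escaped_defects(loc, defects_per_kloc, review_catch_rate):
    # Defects that survive review: defects introduced minus those caught.
    introduced = (loc / 1000) * defects_per_kloc
    return introduced * (1 - review_catch_rate)

# Hypothetical baseline: a senior writes 2,000 LOC themselves at
# 3 defects/kLOC and catches 60% of them in careful self-review.
direct = escaped_defects(2_000, 3, 0.60)       # ~2.4 escaped defects

# Hypothetical LLM setup: the same senior reviews 10,000 LOC of generated
# code at 5 defects/kLOC, but sustained vigilance over the larger stream
# drops the catch rate to 40%.
reviewed = escaped_defects(10_000, 5, 0.40)    # ~30 escaped defects

print(direct, reviewed)
```

Under these made-up numbers the review pipeline ships more than ten times as many escaped defects while "producing" five times the code; the outcome hinges on the catch rate, which is exactly what degrades in high-volume vigilance tasks.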

Let's zoom in on a particular area of the commons: the FOSS movement and its deliverables, where a strong credo dominates. The noble focus is on offering open technologies, software, and systems. But in the quest to uphold software freedom, the overlooked aspect is sustainability: the ability not only to sustain oneself, but by extension the creative community one is part of. The commons is an entirely chaotic grassroots environment with very specific social dynamics that are not properly accounted for. Instead, top-down "herding of cats" governance models that work in regular business environments are attempted. These may work, but only in small organizational settings, and NOT at scale, where they break down.

FOSS in this definition is NOT commons-based. For that we need a commons where FOSS is the deliverable, initiatives are the engines of delivery, and (free software development lifecycle) supply chains tie them together.

coding.social/blog/reimagine-s

I posted on this notion before, with a meme attached..

social.coop/@smallcircles/1153

There's nothing wrong with FOSS, but it only provides part of the solution as long as the sustainability angle is not given due attention.

Esp. with disruptive, inhumanely introduced technologies, new major threats to the community are surfacing, and giving attention to this subject matter is more important than ever.

Okay, one more time for the people in the back.

The "AI" (🤮) craze of the past few years is all about Large Language Models. This immediately tells us that the only thing these systems "know" is trends/patterns in the ways that people write, to the extent that those patterns are expressed in the text that was used to train the model. Even the common term, "hallucination," gives these things far too much credit: a hallucination is a departure from reality, but an LLM has no concept of reality to depart from!

An LLM does exactly one thing: you give it a chunk of text, and it predicts which word will come next after the end of the chunk. That's it. An LLM-powered chatbot will then stick that word onto the end of the chunk and feed the resulting, slightly longer chunk back into the model to predict the next word, and then do it again for the next, etc. Such a chatbot's output is unreliable by design, because there are many linguistically valid continuations to any chunk of text, and the model usually reflects that by having an output that means, "There is a 63% chance that the next word is X, a 14% chance that it's Y, etc." The text produced by these chatbots is often not even correlated with factual correctness, because the models are trained on works of fiction and non-fiction alike.

For example, when you ask a chatbot what 2 + 2 is, it will usually say it's 4, but not because the model knows anything about math. It's because when people write about asking that question, the text that they write next is usually a statement that the answer is 4. But if the model's training data includes Orwell's Nineteen Eighty-Four (or certain texts that discuss the book or its ideas), then the chatbot will, very rarely, say that the answer is 5 instead, because convincing people that that is the answer is a plot point in the book.
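
The predict-append-repeat loop described above can be sketched in a few lines. The `next_word_distribution` table here is entirely invented for illustration; in a real LLM it would be a neural network over subword tokens, not a lookup table:

```python
import random

def next_word_distribution(context):
    # Toy stand-in for a trained model: map a context to a probability
    # distribution over candidate next words. The numbers are made up.
    if context.endswith("2 + 2 is"):
        return {"4": 0.92, "5": 0.03, "probably": 0.05}
    return {"the": 0.40, "a": 0.35, "and": 0.25}

def generate(prompt, n_words, rng):
    # The chatbot loop: predict a distribution, sample one word from it,
    # append it, and feed the slightly longer text back into the model.
    text = prompt
    for _ in range(n_words):
        dist = next_word_distribution(text)
        words, probs = zip(*dist.items())
        text += " " + rng.choices(words, weights=probs, k=1)[0]
    return text

print(generate("2 + 2 is", 1, random.Random(0)))
```

Because each continuation is sampled from a distribution rather than looked up as a fact, two runs with different seeds can legitimately disagree, which is the unreliability-by-design point above.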

If you're still having trouble, you can think of it this way: when you ask one of these chatbots a question, it does not give you the answer; it gives you an example of whatโ€”linguistically speakingโ€”an answer might look like. Or, to put it even more succinctly: these things are not the Star Trek ship's computer; they are very impressive autocomplete.

So LLMs are fundamentally a poor fit for any task that is some form of "producing factually correct information." But if you really wanted to try to force it, and damn the torpedoes, then I'd say you basically have two options. I'll tell you what they are in a reply. 🧵

I also needed a way to efficiently give coding agents full context on the libraries I use in my projects, so I made this.
It generates JSON & TOON "blueprints" of a library's class signatures. It can be integrated into CI/CD pipelines, and I'd love it if it became a standard.

github.com/diversified-design/
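
The idea is easy to sketch with Python's `inspect` module. This is a minimal illustration of the concept only; it is not the linked tool's actual output format, and the `blueprint` helper is hypothetical:

```python
import inspect
import json
import pathlib

def blueprint(cls):
    """Collect a class's public method signatures into a JSON-friendly
    dict that can be pasted into a coding agent's context."""
    methods = {}
    for name, member in inspect.getmembers(cls, callable):
        if name.startswith("_"):
            continue  # skip private/dunder members
        try:
            methods[name] = str(inspect.signature(member))
        except (TypeError, ValueError):
            methods[name] = "(...)"  # some builtins have no introspectable signature
    return {"class": cls.__name__, "methods": methods}

# Example: a compact signature map of pathlib.PurePath for an agent prompt.
print(json.dumps(blueprint(pathlib.PurePath), indent=2))
```

A map like this costs a few hundred tokens, versus tens of thousands for the library's full source, which is what makes the "blueprint" approach practical for agent context windows.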

While sitting at the Laguna, I was watching quite a lot of "content creators" making their identical-looking short videos, using the same poses etc. that are probably "trendy" on TikTok and Instagram.

And I think that made me understand why some people find generative AI so appealing: if you only care about "creating" carbon copies of existing things, and measure success by how close you get to the "original", then side-stepping the actual act of creation must seem like a reasonable step.

AI tools let you write code faster, but LOC has NEVER been the bottleneck to value. The bottleneck is organizational coherence.

Instead of working towards that alignment, we are encouraged to "just AI it." The result? AI makes everyone work MORE, with no real productivity gains.

Teams need to get better at choosing more valuable work to do. For that, you need user research. And user research can only happen at a human pace.

productpicnic.beehiiv.com/p/re

I keep being baffled by whole industries using AI as a fancy search in a database. Yes, they memorize and reproduce, which is both a copyright and a privacy issue.

No, they're not 100% accurate in either memorizing or reproducing, which is a liability issue. Can we stop abusing these transformer based models as a fake solution to problems they're not made for? Not everything is a nail, so stop throwing your fancy hammer at it.

arstechnica.com/ai/2026/02/ais

📰 As a programming-language nerd, I want to talk about C# once more (👍 115)

Why C# deserves more love in Japan: a polyglot programmer compares C# with Go, Rust, and Swift and explains its strengths.

🔗 zenn.dev/nuskey/articles/why-i

📰 Qwen3-Swallow & GPT-OSS-Swallow (👍 113)

Tokyo Tech releases Qwen3-Swallow and GPT-OSS-Swallow: Japanese LLMs trained with continual pre-training, SFT, and RLVR.

🔗 zenn.dev/tokyotech_lm/articles

@ludicity (Ludic) 🧛 For the record, I work at a software company that employs ~10k developers.

Before LLMs, I'd encounter such engineers a couple of times a month. But I interact with a lot of engineers, specifically the ones that need help or are new to the company or the industry at large, so it's a selected sample. Even the most inexperienced ones are willing and able to learn with some guidance.

After LLMs, there's been a significant uptick, and these new ones are grossly incompetent, incurious, impatient, and behave like addicts if their supply of tokens is interrupted at all. If they run out of prompt credits, it's an emergency, because they claim they can't do any work at all. They can't explain the architecture of what they are making anymore, can't even file tickets or send emails without an LLM writing them, and they certainly lack any kind of reading comprehension.

It's bleak and depressing, and makes me want to quit the industry altogether.

🕐 2026-02-21 18:00 UTC

📰 Easily sweeping out dead code with AI agents × knip (👍 111)

Combining knip (an unused-code detector) with AI agents creates an efficient workflow for cleaning up dead code, unused exports, and unused packages in JS/TS projects.

🔗 zenn.dev/knowledgework/article

📰 I dissected what goes on inside the head of someone who "can't program at all" (👍 71)

A non-technical beginner dissects their own mental blocks while learning to code: relatable struggles that experienced programmers never see.

๐Ÿ”— zenn.dev/rabee/articles/2d9ab1

The most annoying thing about corporate surveillance to me is the arrogance of the prediction mechanisms.

These algorithms build a model of me based on my clicks from three years ago and then try to trap me in that loop forever. They show me music they think I'll like, and news they think I'll engage with, and videos they think will enrage me enough to keep me hooked to their platforms. They are actively trying to flatten my personality into something easy to monetize.

As many people I've seen say out loud: "Privacy as a concept goes way beyond hiding secrets. Part of it also means preserving your capacity to change. To be surprised. To be inconsistent."

If I could tell every human one thing, it would be to actively refuse to be a predictable data point. Mess up their metrics. In whatever way you are capable of.

@virtualpierogi @sri @jsalvador @ben

Unfortunately there's a new threat, and it was addressed in the keynote speech by @michiel (Michiel Leenaars) of @nlnet: the mad dash to incorporate AI into everything and to vibe-code stuff together in a heartbeat.

I think this is particularly bad for the fediverse, which still lacks robust foundations. The vibecoders will have no problem figuring out how to mix'n'mash the existing protocol decay and tech debt into new applications that are rushed into production. Finally, non-protocol-experts are enabled on the ecosystem and can onboard themselves without endless plumbing of the most low-level technical implementation details.

But the ecosystem will rot and decay as a result. Furthermore, if a slew of AI-generated fedi apps are launched in quick succession and some of them find good uptake (until they break in unexpected ways), it will serve to attract unwanted corporate attention, I'm afraid.

@virtualpierogi @sri @jsalvador @ben @michiel @nlnet

By their design, LLMs will always follow the old way of doing things here, and I really wonder how this is going to turn out. People should be very wary. Vibe-coded application development trends towards a point of no return, where only the AI can still keep track of the generated code mess.

A good example here is a Microsoft distinguished engineer saying that their aim is for one dev employee to create 1 million lines of code in one month. Once you get there, you cannot ever go back without ditching all the AI stuffz and starting over.

Btw, the talk by Michiel Leenaars is mentioned in my blog post, but I'll drop it here too. A very interesting, recommended watch:

fosdem.org/2026/schedule/event

@virtualpierogi @sri @jsalvador @ben @nlnet

It needs concerted effort, as argued in my blog post, to set all of this up. Things can start small and pragmatic, and then gradually evolve and mature, but we should take care it evolves in the proper direction.

There are trade-offs to consider every step of the way. If there were more capabilities to introspect the functionality that an actor offers, it would diminish the need for an upfront design-by-consensus process, but it would increase the complexity of the specifications.

I drew this in a diagram a couple years ago, and transferred it to our social coding forum at: discuss.coding.social/t/wiki-g

Here you see the fediverse devolving into non-interoperable, app-by-app whack-a-mole development, keeping track of all the moving-target projects one took a dependency on; versus the approach that tries to hammer out full-blown specs upfront, which becomes a huge package to deal with, with high complexity to implement.

So am I getting this right? The guy who vibecoded an insecure, expensive, worse version of IFTTT with delusions of grandeur ("OpenClaw") is now claiming that he'll solve the problem of "prompt injections" at OpenAI?

My man, you are so out of your depth, it's not even funny.

Overheard part of a conversation at the coffee shop as someone had googled something and read the results.

"The AI summary says..."

Everybody at the table burst out laughing with bits of comments such as "oh no, the AI", "so it's wrong again", "ha ha AI ha ha", and more.

I don't think any person at the table of six or seven was under 65 or 70 years old.

"AI can make mistakes" might as well be the slogan of our era. Even boosters admit that you need to spin the vibe-code slot machine a few times to hit a jackpot.

An employee with that degree of consistency would be fired.

So how do we redirect some of that unlimited grace from machines to humans?

productpicnic.beehiiv.com/p/co
