
Reading through Anthropic's official repo for giving agents various "super skills"[1]... There's an "algorithmic art" skill and the instructions are explicitly encouraging pure deception as one of the key "critical guidelines":

"The philosophy MUST stress multiple times that the final algorithm should appear as though it took countless hours to develop, was refined with care, and comes from someone at the absolute top of their field. This framing is essential - repeat phrases like "meticulously crafted algorithm," "the product of deep computational expertise," "painstaking optimization," "master-level implementation.""

github.com/anthropics/skills/b

For someone who's been working in this field for almost 30 years, this "skills.md" file is just the worst... and so far off the mark! 🤮

Touch some effing grass, Anthropic (and all boosters)! How can so many people think this approach is _the_ future? The map is not the terrain...

[1] The premise of this repo alone is pure comedy gold and pure sadness in equal measure!

Some of the key questions that keep growing here really are:

How to defend or adapt disciplines (not just artistic/cultural ones) against this kind of semantic hollowing out of what it means to have skills, experience and expertise in a(ny) field...

What approaches, qualities and "values" (physical, ethical, social/humanist, environmental, resource use) should we (or still can we) be focusing on, which are much harder and more costly for AI companies to mine/extract & subvert?

How to defend actual skills against the emulation of skills, or rather just the appearance of skills? How could a society even function if it only encourages and celebrates the latter?

What does society actually value in art/creativity/culture? If art is free to produce (of course that'll always only ever be an illusion!), then funding, possession, collection & speculation around new work would also become meaningless (and only benefit pre-AI era works/collectors). In the larger picture, what do people actually value in culture, politics and the striving for a more peaceful existence, which is what enables more of the former (pluralistic art/culture) in the first place?

What will be the combined impact of AI & robotics on fields which currently still think of themselves as safer (from exploitation) because there's a strong physical element/process to them?

Will art/culture/craft become primarily performance-based, experiential/ephemeral again? Like music before recordings, or Buddhist sand paintings with an explicit act of destruction at the end as a key philosophical concept? Both of which also have more of a social element to them...

The Samsara Mandala
youtube.com/watch?v=hL8gEc29KTI


After trying a few different LLMs, I'm starting to think that letting them directly modify your project's code leads to non-deterministic output and knowledge that’s hard to reuse.

A better pattern might be using LLMs to build tools that then apply those changes in a deterministic way.

The knowledge stays in the tool instead of disappearing when the session ends.
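
A minimal sketch of what one of those tools can look like (the rename task, names, and file paths here are invented for illustration, not taken from any actual project): instead of prompting the LLM to rewrite every call site itself, you have it write a small script like this once, review it, and then rerun it wherever the change is needed.

```python
# Deterministic codemod: the LLM writes this once, a human reviews it,
# and from then on the change is applied by running the script, not by
# re-prompting a model. (Hypothetical rename task for illustration.)
import pathlib
import re
import sys

OLD, NEW = "fetch_user", "load_user"  # the rename we want everywhere

def migrate(path: pathlib.Path) -> bool:
    """Rewrite one file in place; return True if it changed."""
    src = path.read_text()
    out = re.sub(rf"\b{re.escape(OLD)}\b", NEW, src)
    if out != src:
        path.write_text(out)
        return True
    return False

if __name__ == "__main__":
    root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    changed = [p for p in root.rglob("*.py") if migrate(p)]
    # Same input -> same output on every run; the knowledge lives in this
    # file instead of vanishing with the chat session.
    print(f"rewrote {len(changed)} file(s)")
```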

Next I'll try sharing these tools with others, or maybe letting an agent run them at scale.

Beach after a storm with calm, lightly rippled sea and a clear horizon under a blue sky with thin white clouds.

RE: mas.to/@trendsbot/116211553284

When the USSR was crumbling, factories were also paying workers in the products they manufactured.

This is AI bubble companies trying to print their own money. Printing your own money only works if you find a way to make it necessary. That's what taxes do, for example (recommended reading: David Graeber's "Debt").

Here:

👉 "printing money" part is paying employees in tokens

👉 "taxes" part is requiring employees to use AI in their day to day work (thus making tokens necessary)

:blobcatpopcornnom:


Another day, another company is reducing IT and software dev jobs to replace them with AI, despite many reports indicating that Gen AI doesn't work as promised. Get ready for more outages for Jira and co ;) ‘Devastating blow’: Atlassian lays off 1,600 workers ahead of AI push theguardian.com/technology/202

I didn't think Jira could get any more messed up, but I underestimated their commitment to the bit. AI is a bold choice for a platform that struggles with basic navigation. Lmao


📰 Generating draw.io diagrams with Opus 4.6 broke my assumptions about LLMs (👍 77)

🇬🇧 Opus 4.6 shatters assumptions about LLM spatial reasoning by generating complex draw.io diagrams perfectly without MCP servers or plugins.

🔗 zenn.dev/acntechjp/articles/4a


📰 Implementing long-term memory in LLMs (👍 53)

🇬🇧 Implementing brain-inspired long-term memory in Claude Code: episodic, semantic, and procedural memory with emotional weighting and associative recall.

🔗 zenn.dev/acntechjp/articles/ef


📰 Generating draw.io diagrams with Opus 4.6 broke my assumptions about LLMs (👍 57)

🇬🇧 Opus 4.6 shatters assumptions about LLM spatial reasoning—generating complex draw.io diagrams without MCP or tools, purely on model capability

🔗 zenn.dev/acntechjp/articles/4a


📰 Implementing long-term memory in LLMs (👍 45)

🇬🇧 Implementing brain-like long-term memory in Claude Code: episodic, semantic memory with emotional weighting and associative networks

🔗 zenn.dev/acntechjp/articles/ef


🕐 2026-03-11 18:00 UTC

📰 My favorite skills, tools, and settings for accelerating Claude Code (Findy event talk slides) (👍 164)

🇬🇧 Practical tips to accelerate Claude Code: Raycast shortcuts for quick terminal launch, custom skills setup, and workflow optimization

🔗 zenn.dev/ubie_dev/articles/cla

📰 Generating draw.io diagrams with Opus 4.6 broke my assumptions about LLMs (👍 57)

🇬🇧 Opus 4.6 shatters assumptions about LLM spatial reasoning—generating complex draw.io diagrams without MCP or tools, purely on model capability

🔗 zenn.dev/acntechjp/articles/4a



There’s a meme going around that an Open Source project “can’t” prevent LLM use by contributors because there’s no technical means to enforce this. This is idiotic and shows just how disingenuous slopmongers will be when told they can’t just submit slop.

Did you know there’s also no technical means to enforce that you didn’t copy some code you’re contributing from a proprietary codebase and say it’s original work? Somehow we haven’t given up on that!

The enforcement mechanism is exactly the same: There’s no *technical means* to prevent someone from being a filthy fucking liar. But there are *social means* to prevent them from contributing: You make sure that if they’re caught, they’re held publicly accountable for all of the rework and mess that resulted from their lies.

This has worked pretty well for decades in Open Source, and won’t stop working just because slopmongers wish really hard. Fucking scrubs.



📰 How I made it possible to interact with Claude Code using only smart glasses (👍 43)

🇬🇧 Integrated Claude Code with smart glasses (Even G2) for hands-free coding while mobile—perfect for parents who can't stay at their desk.

🔗 zenn.dev/wmoto_ai/articles/cla

📰 RAG wasn't enough, so I looked into Agentic Search (👍 31)

🇬🇧 RAG wasn't enough for company chatbot despite tuning. Explored Agentic Search as the next evolution for better retrieval accuracy.

🔗 zenn.dev/edash_tech_blog/artic


The Verge: Grammarly is using our identities without permission

‘Expert Review’ AI agents make suggestions supposedly inspired by subject matter experts, including several staff members here at The Verge.

theverge.com/ai-artificial-int


You all should read this fantastic post by @onepict (Esther Payne) :bisexual_flag: about how, largely, AI people just can't leave us the fuck alone. But I'm noticing more and more and more that they get super mad when they wanna shove their creation at you and you simply say, no thanks. That's it. No extra bashing, just, no thanks, and they get super offended. dotart.blog/cobbles/ai-and-tha


The US economy, currently driven by an AI bubble, is keeping its head above water for now, but it is only a matter of time before everything goes south. If you didn't experience the 2000 or 2008 crises (I hope nobody has to see one again), I urge you to take precautions, starting by reducing unnecessary spending. Trim it down. If possible, build 6–12 months of savings to prepare for potential job loss. Avoid buying items on credit. Economic crashes are like harsh mistresses: they do not forgive easily. (2/2)

@nixCraft 🐧

The FUNNIEST (not) thing about THIS bubble is how every single techbro-friendly moron is going "hurrr... it's NOT a 'bubble' if we can *see* it coming, HA HA GOTCHA", as the bubble builds and everyone thinks it's not a bubble because they read what those idiots are saying... and then they keep pouring in money and giving the LLM bullshit attention, time, platform... and money. Did I mention money?

Buckle in, hold onto your butts, this is gonna go pop, soon.


:blobhyperthink:

AI technologies are an existential threat to society well before we even get anywhere near AGI and the singularity.

The sheer scale of deployment and integration in all the nooks and crannies of society, where we give it access to all our information, and now with the Rise of the Agents, let AI act on them too. Giving full control away from us..

.. to the owners of this technology, the usual suspects, and their billionaire class. Folks who are clearly out to dominate us, and keep us in check so they can continue their fancy lifestyles wallowing in decadence and moral depravity.

🤖 Unrestrained AI is the tech for 🧛 unrestrained elites.

Meme with text "Where do you want to be tomorrow?" against a still image from the movie "Blade Runner" with Rutger Hauer and Harrison Ford, showing a view of the sci-fi metropolis in a dystopic scene.

:blobhyperthink:

"Ex-Google PM Vibe Codes Palantir To Watch The Iran Strikes"

youtube.com/watch?v=0p8o7AeHDzg

See also: social.coop/@smallcircles/1161

Alright. Another ..

Is this person's proud presentation of their creative work ..


Microsoft banned the word 'Microslop' in its Copilot Discord server, then began restricting access after users started posting 'Microsl0p' and other funnies. Microsoft's CEO and AI chief want you to stop using words like SLOP and Microslop because it upsets them a lot. Lmao. pcgamer.com/software/ai/micros Maybe you shouldn't force Copilot AI on everyone, steal their data to train your shitty AI, and then tell the world every day how scared they should be because AI is going to replace 80% of jobs?


The claim "you won't be replaced by AI, but by a person using AI" is nonsense. The Block layoff victims were some of the most productive, pilled people in the company, but it didn't save them, because that's not what layoffs are about.

The layoff script goes, as always:
- overhire
- lay everyone off
- pretend it's because of productivity gains
- stock go up

There is no individual solution that will protect you from bad leadership and cost cutting.

productpicnic.beehiiv.com/p/ai


I've been increasingly concerned about the corporate monopoly over frontier LLMs. While many ethically-minded people choose to boycott these models, I believe passive resistance alone cannot break the structural grip of big tech. To truly “liberate” these technologies and turn them into public goods, we need to look beyond moral high grounds and engage with the material basis of AI—specifically compute, data, and the relations of production.

I've written two posts exploring this through the lens of historical materialism. The first piece analyzes why current “open source” definitions struggle with LLMs, and the second discusses what it means to “act materialistically” in our imperfect world. My goal is to suggest a path forward that moves from mere boycotting to a more proactive, structural socialization of AI infrastructure.

If you've been feeling uneasy about the AI landscape but aren't sure if boycotting is the final answer, I'd love for you to give these a read:


📰 Security is hard (👍 44)

🇬🇧 Explores structural challenges in enterprise security work—beyond knowledge catchup, it's about deep system understanding.

🔗 zenn.dev/mizutani/articles/sec

📰 Adding more examples lowers LLM performance: discovering and detecting few-shot collapse (👍 21)

🇬🇧 Discovers few-shot collapse: more prompt examples can hurt LLM performance. AdaptGauge tool measures this on real tasks.

🔗 zenn.dev/okuma/articles/few_sh


Whatever we think of the AI/LLM mad hype cycle, we have to deal with its rushed and inhumane dumping of the technology into global human society.

CALM is a strategic approach to activism that allows activist voices to have the most impact in dealing with the dangers of disruptive technology introductions, and focuses beyond berating people and demanding sacrifice ("don't use, or else..") to creating a process that helps win people over and work together on best outcomes and in the direction of solutions.

CALM stands for Constructive activism-led movements, such as the Social coding commons. Coding is social, and this is the holistic approach to ensure that.

Social coding commons evolves Social experience design, i.e. solution development for grassroots movements.

In the thread below I copied a post to the community with a suggestion to ponder best outcomes from the current and ongoing AI disruption, and ways to deal with its risks.

discuss.coding.social/t/calm-c


AI ADDED ‘BASICALLY ZERO’ TO US ECONOMIC GROWTH LAST YEAR, GOLDMAN SACHS SAYS

Imported chips and hardware mean the AI investments aren't translating into US GDP growth

File this under "No shit, Sherlock"

OpenAI is constantly close to going bankrupt and can't make a profit even with BILLIONS in investments. They've got to start selling ad space on ChatGPT.

The Bubble is going to burst hard. (No, it's not

gizmodo.com/ai-added-basically


After the countless galaxies formed. At the center of each sits a super-massive Black Box. Hidden inside lurks the mysterious . Which is nothing more than a concept as we don't know what the heck is going on there. All our common sense breaks down, after we crossed the boundary. Coming close to the event horizon of any Black Box inevitably leads to as a person is sucked into the void. An outside observer would see that person frozen in time, stagnant. As the universe expands, continuous socialcooling.com will eventually lead to the Big ☠️ RIP of , who invented the Laws of Online .


Some days, I wonder if I live in a parallel world.

I want more efficient software (to lower overall power usage of our society, to avoid throwing away hardware after a couple of years, to be able to do more with less).

I fight centralisation of data/knowledge/power in IT (promote open protocols, selfhosting, open source, decentralisation)

I do want a more egalitarian society (no more barriers because of handicaps or upbringing in a non-privileged environment. Improving our democracy with services that help everyone by reducing/eliminating bureaucracy).

I do not want to see our world burn (see point above about reducing waste. But also promoting local LLM usage, and not defaulting to wasteful services for tasks that can be done locally).

Yet... I don't fight genAI. On the contrary, I deeply believe it can help us achieve the above. Faster.

The problem is way too many people are assuming that because (a lot of) people misuse it, the technology must be the issue.

Maybe focus on the people misusing it, and not the technology? Banning usage of genAI altogether in software projects is, IMHO, both counter-productive and impossible.

Are we going to also ban people using LSPs? Linters? Fuzzy search tools? Spell-checkers? Translation tools? Speech-To-Text assistants?

Heck, how will *you* know if I used an LLM to assist me? Because of the quality of the contribution I provided? Because I'm not knowledgeable about your project and design? Because English is not my native language, and I used a tool for translating text?

Or maybe it's because, shocking, I used it as yet-another-tool. And it didn't replace my brain. I still want to ensure what I'm delivering is correct, useful and maintainable. It doesn't replace all the brainstorming, investigation, analysis, tests, that I do. But helps me iterate on all of those faster.

What is a PITA is random contributors dumping stuff which they didn't properly review/test. The vibecoders. But how is that different from random "code dumps" by people who did a "wrong" fix? Lack of education.

Instead of banning genAI altogether, maybe specify what is expected from the human using it. I.e. that person must "own" what they produced, know exactly what it contains, why it does it that way, etc...

BTW, Jellyfin has a pretty decent and comprehensive set of rules which are a good middle ground, go read them: https://jellyfin.org/docs/general/contributing/llm-policies/

#rant #LLM #genAI #software

Hetzner, which many instances rely on, just announced a major across-the-board price hike. A 30% increase isn't minor.

For larger companies it’s a budget line. For small Mastodon instances, it’s the line between sustainable and underwater.

When servers jump from €49.99 to €64.99 and storage rises too, admins feel it first. Many already subsidize costs.

Some will downsize. Some may shut down.

Decentralization still runs on invoices.

Support your instances.


From Bruce Schneier: "All it takes to poison AI training data is to create a website:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted."

schneier.com/blog/archives/202


I posted on lobste.rs and reddit.com about my latest blog article. It's been a very long time (3 years) since the last time I did that.

Some comments are… weird… They try to summarize the content of the article, but it's all wrong. Checking the authors' profiles shows they have been active for many months. But really, really, it smells like LLM. Maybe I'm "doubtful" because LLMs are everywhere today, but I don't know what to do with these comments…

Did you experience something similar recently?


If you replace a junior with an LLM and make the senior review the output, the reviewer is now scanning for rare but catastrophic errors scattered across a much larger output surface due to LLM "productivity."

That's a cognitively brutal task.

Humans are terrible at sustained vigilance for rare events in high-volume streams. Aviation, nuclear, radiology all have extensive literature on exactly this failure mode.

I propose any productivity gains will be consumed by false negative review failures.


Let's zoom in on a particular area of the commons: the FOSS movement and its deliverables, where a strong credo dominates. The noble focus is on offering open technologies, software and systems. But in that quest, the overlooked aspect is sustainability: the ability to not only sustain oneself, but by extension also the creative commons one is part of. The commons is an entirely chaotic grassroots environment with very specific social dynamics that are not properly accounted for. Instead, top-down "herding of cats" governance models that work in regular business environments are attempted. These may work, but only in small organizational settings, and NOT at scale, where they break down.

FOSS in this definition is NOT commons-based. For that we need FSDL (Free software development lifecycle) supply chains, where FOSS is the deliverable and initiatives the engines of delivery.

coding.social/blog/reimagine-s

I posted on this notion before, with a meme attached..

social.coop/@smallcircles/1153

There's nothing wrong with FOSS, but it only provides part of the solution as long as the sustainability angle is not given due attention.

Esp. with disruptive inhumanely introduced technologies, new major threats to the community are surfacing, and giving attention to this subject matter is more important than ever.


Okay, one more time for the people in the back.

The "AI" (🤮) craze of the past few years is all about Large Language Models. This immediately tells us that the only thing these systems "know" is trends/patterns in the ways that people write, to the extent that those patterns are expressed in the text that was used to train the model. Even the common term, "hallucination," gives these things far too much credit: a hallucination is a departure from reality, but an LLM has no concept of reality to depart from!

An LLM does exactly one thing: you give it a chunk of text, and it predicts which word will come next after the end of the chunk. That's it. An LLM-powered chatbot will then stick that word onto the end of the chunk and feed the resulting, slightly longer chunk back into the model to predict the next word, and then do it again for the next, etc. Such a chatbot's output is unreliable by design, because there are many linguistically valid continuations to any chunk of text, and the model usually reflects that by having an output that means, "There is a 63% chance that the next word is X, a 14% chance that it's Y, etc." The text produced by these chatbots is often not even correlated with factual correctness, because the models are trained on works of fiction and non-fiction alike.
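
If it helps to see that loop spelled out, here is a toy sketch in Python. The probability table is invented purely for illustration; a real model computes a distribution like this over its entire vocabulary of tokens, for any input, rather than looking up one hard-coded prompt.

```python
import random

# Invented stand-in for the model: a probability distribution over possible
# next words for one specific prompt. A real LLM produces such a distribution
# for ANY text, over ~100k tokens rather than a handful of words.
TOY_MODEL = {
    "What is 2 + 2? The answer is": {"4": 0.63, "four": 0.22, "obvious:": 0.14, "5": 0.01},
}

def predict_next_word_probs(text: str) -> dict[str, float]:
    # Fall back to ending the sentence if we have no entry for this text.
    return TOY_MODEL.get(text, {".": 1.0})

def generate(prompt: str, max_words: int = 3) -> str:
    text = prompt
    for _ in range(max_words):
        probs = predict_next_word_probs(text)
        words, weights = zip(*probs.items())
        # Sampling from the distribution is exactly why the same prompt can
        # produce different answers on different runs.
        next_word = random.choices(words, weights=weights, k=1)[0]
        text = text + " " + next_word
    return text

print(generate("What is 2 + 2? The answer is"))  # usually "... 4", but not always
```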

For example, when you ask a chatbot what 2 + 2 is, it will usually say it's 4, but not because the model knows anything about math. It's because when people write about asking that question, the text that they write next is usually a statement that the answer is 4. But if the model's training data includes Orwell's Nineteen Eighty-Four (or certain texts that discuss the book or its ideas), then the chatbot will very rarely say that the answer is 5 instead, because convincing people that that is the answer is a plot point in the book.

If you're still having trouble, you can think of it this way: when you ask one of these chatbots a question, it does not give you the answer; it gives you an example of what—linguistically speaking—an answer might look like. Or, to put it even more succinctly: these things are not the Star Trek ship's computer; they are very impressive autocomplete.

So LLMs are fundamentally a poor fit for any task that is some form of "producing factually correct information." But if you really wanted to try to force it and damn the torpedoes, then I'd say you basically have two options. I'll tell you what they are in a reply. 🧵


While sitting at the Laguna, I was watching quite a lot of "content creators" creating their identical looking short videos, using the same poses etc. that are probably "trendy" on TikTok and Instagram.

And I think that made me understand why some people find generative AI "art" so appealing: If you only care about "creating" carbon copies of existing things, and measure success by how close you get to the "original", then side-stepping the actual act of creation must seem like a reasonable step.


AI tools let you write code faster, but LOC has NEVER been the bottleneck to value. The bottleneck is organizational coherence.

Instead of working towards that alignment, we are encouraged to "just AI it." The result? AI makes everyone work MORE, with no real productivity gains.

Teams need to get better at choosing more valuable work to do. For that, you need user research. And user research can only happen at a human pace.

productpicnic.beehiiv.com/p/re


I keep being baffled by whole industries using AI as a fancy search in a database. Yes, they memorize and reproduce, which is both a copyright and a privacy issue.

No, they're not 100% accurate in either memorizing or reproducing, which is a liability issue. Can we stop abusing these transformer based models as a fake solution to problems they're not made for? Not everything is a nail, so stop throwing your fancy hammer at it.

arstechnica.com/ai/2026/02/ais


📰 As a programming-language nerd, I want to talk about C# again (👍 115)

🇬🇧 Why C# deserves more love in Japan. A polyglot programmer compares C# with Go, Rust, Swift and explains its strengths.

🔗 zenn.dev/nuskey/articles/why-i


@ludicity (Ludic) 🧛 For the record, I work at a software company that employs ~10k developers.

Before LLMs, I'd encounter such engineers a couple of times a month, but I interact with a lot of engineers, specifically the ones that need help or are new at the company or industry at large, so it's a selected sample. Even the most inexperienced ones are willing and able to learn with some guidance.

After LLMs, there's been a significant uptick, and these new ones are grossly incompetent, incurious, impatient, and behave like addicts if their supply of tokens is at all interrupted. If they run out of prompt credits, it's an emergency because they claim they can't do any work at all. They can't explain the architecture of what they are making anymore, can't even file tickets or send emails without an LLM writing them, and they certainly lack any kind of reading comprehension.

It's bleak and depressing, and makes me want to quit the industry altogether.


🕐 2026-02-22 12:00 UTC

📰 Qwen3-Swallow & GPT-OSS-Swallow (👍 110)

🇬🇧 Tokyo Tech releases new Japanese LLMs: Qwen3-Swallow & GPT-OSS-Swallow, trained via continual pre-training, SFT, and RLVR on multilingual datasets

🔗 zenn.dev/tokyotech_lm/articles


🕐 2026-02-22 06:00 UTC

📰 Qwen3-Swallow & GPT-OSS-Swallow (👍 102)

🇬🇧 Tokyo Tech releases Japanese LLMs trained on Qwen3 & GPT-OSS with continual pre-training, SFT, and RLVR for math, code & science domains

🔗 zenn.dev/tokyotech_lm/articles


🕐 2026-02-22 00:00 UTC

📰 Qwen3-Swallow & GPT-OSS-Swallow (👍 92)

🇬🇧 Tokyo Tech releases Qwen3-Swallow & GPT-OSS-Swallow: Japanese LLMs trained with continual pre-training, SFT, and RLVR for math/code/science.

🔗 zenn.dev/tokyotech_lm/articles
