Search results

"빠른 RAG"가 아니라 "내 데이터를 내가 소유하는 RAG"를 만들고 싶었습니다.

기술과 프레임워크를 만드는 과정은 결코 쉽지 않습니다. 실제 현장의 피드백을 듣고 방향을 잡아가는 일이 때로는 힘들지만, 꼭 거쳐가야 할 관문이겠죠.

너도 나도 빠르게 돈을 태워 RAG를 구축해가는 상황 속에서, 빈자의 RAG, 정제된 RAG, 통제 가능한 RAG를 만들어보고 싶다는 생각으로 출발한 아이디어를 계속 다듬어 나가고 있습니다.

👉 https://github.com/rkttu/reconsidered_rag

2
0
0
0

@b0rkJulia Evans This is definitely not too big of a question, but it's a serious one! We too worry about free software culture, which I agree can be dogmatic and rigid. We at have worked hard to keep ourselves focused on our ethics and principles, while also acknowledging that we live in a complex world.

We've acknowledged how difficult it is to live in a purely world. You can see the keynotes @bkuhnBradley M. Kuhn and I gave over the years, here's one from :

archive.fosdem.org/2019/schedu

0

That's Late Stage Capitalism for you:

"More than 20% of the videos that YouTube’s algorithm shows to new users are “AI slop” – low-quality AI-generated content designed to farm views, research has found.

The video-editing company Kapwing surveyed 15,000 of the world’s most popular YouTube channels – the top 100 in every country – and found that 278 of them contain only AI slop.

Together, these AI slop channels have amassed more than 63bn views and 221 million subscribers, generating about $117m (£90m) in revenue each year, according to estimates.

The researchers also made a new YouTube account and found that 104 of the first 500 videos recommended to its feed were AI slop. One-third of the 500 videos were “brainrot”, a category that includes AI slop and other low-quality content made to monetise attention.

The findings are a snapshot of a rapidly expanding industry that is saturating big social media platforms – from X to Meta to YouTube – and defining a new era of content: decontextualised, addictive and international.

A Guardian analysis this year found that nearly 10% of YouTube’s fastest-growing channels were AI slop, racking up millions of views despite the platform’s efforts to curb “inauthentic content”."

theguardian.com/technology/202

0
0
0
0

I published a personal recap of State of the Word 2025, with a focus on the AI panel I joined with Mary Hubbard, Matt Mullenweg, Felix Arntz, and James LePage.

I also wrote about the work behind the AI Experiments plugin and what it means for the future of AI in WordPress.

jeffpaul.com/2025/12/reflectio

@jeffpaulJeffrey Paul

It's sad and shocking to see everyone losing their mind in real time.

There are some glaring obvious issues to be discussed when it comes to integration: environmental issues, issues of power and control, of accountability and impacts on society.

None of which are addressed when is discussing .

We lost , we are losing , so a bunch of libertarians and their investors can wreck what's left of the western world. Sad times.

0
0
0

All mentions of "AI" should be changed into "Brain Rot Inducer"…

"Do you want the Brain Rot Inducer to summarize this text for you?"

"The Brain Rot Inducer can auto-answer this e-mail for you!"

"Let the Brain Rot Inducer write this social media post for you!"

"Easily generate an image with the Brain Rot Inducer, and call it your own creation!"

0
0
0

"AI chatbots have conquered the world, so it was only a matter of time before companies started stuffing them into toys for children, even as questions swirled over the tech’s safety and the alarming effects they can have on users’ mental health.

Now, new research shows exactly how this fusion of kid’s toys and loquacious AI models can go horrifically wrong in the real world.

After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily verge into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.

In the resulting report, the researchers warn that the integration of AI into toys opens up entire new avenues of risk that we’re barely beginning to scratch the surface of — and just in time for the winter holidays, when huge numbers of parents and other relatives are going to be buying presents for kids online without considering the novel safety issues involved in exposing children to AI."

futurism.com/artificial-intell

0

인공지능 (AI) 발전과 신뢰 기반 조성 등에 관한 기본법' (약칭: AI 기본법)

* 2026년 1월 22일 시행.
* AI 사업자는 이용자에게 서비스가 AI 기술을 사용한다는 사실을 고지해야 한다. (예: "이 챗봇은 AI입니다.")
* AI 사업자는 생성형 AI를 활용해 만든 컨텐트의 경우, AI에 의해 생성되었음을 명확히 표시해야 한다.
* 위와 같은 '고지 및 표시 의무'를 위반할 경우, 3천만 원 이하의 과태료가 부과될 수 있다.
* 국내에 주소나 영업소가 없고 일정 기준을 충족한 해외 AI 사업자는 AI 관련 업무를 처리할 국내 대리인을 의무적으로 지정해야 한다.

0

"Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge.

The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them.

These issues expose the AI system to indirect prompt injection attacks, allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News."

thehackernews.com/2025/11/rese

0

"To be clear, everybody is losing money on AI. Every single startup, every single hyperscaler, everybody who isn’t selling GPUs or servers with GPUs inside them is losing money on AI. No matter how many headlines or analyst emissions you consume, the reality is that big tech has sunk over half a trillion dollars into this bullshit for two or three years, and they are only losing money.

So, at what point does all of this become worth it?

Actually, let me reframe the question: how does any of this become worthwhile?Today, I’m going to try and answer the question, and have ultimately come to a brutal conclusion: due to the onerous costs of building data centers, buying GPUs and running AI services, big tech has to add $2 Trillion in AI revenue in the next four years. Honestly, I think they might need more.

No, really. Big tech has already spent $605 billion in capital expenditures since 2023, with a chunk of that dedicated to 5-year-old (A100) and 4-year-old (H100) GPUs, and the rest dedicated to buying Blackwell chips that The Information reports have gross margins of negative 100%:"

wheresyoured.at/big-tech-2tr/

0

The AI industry wants us to believe AI superintelligence is the real threat from generative AI.

But that narrative was crafted to distract from the many ways genAI is being used to tear our societies apart, as we saw this week when a deepfake video rocked the Irish election. It must be reined in.

disconnect.blog/generative-ai-

0
0
0

"Google AI overviews are misleading or inaccurate in 37% of finance-related searches, according to The College Investor's latest analysis. This is an improvement from last year, where 43% of AI Overviews were inaccurate - but a one-third error rate is troubling when it comes to personal finance.

This is causing consumer confusion, and potentially harming Americans' finances. The overviews were especially bad when it comes to tax, insurance, and financial aid related queries.

What's Happening: Over the last several years, Google has been rolling out AI-driven answers in search results. At the top of the search results they show AI Overviews, and they're now expanding the use of AI Mode. The problem is they are plagued with inaccurate answers. And experts say it's a serious issue."

thecollegeinvestor.com/66208/3

0

『OpenAIが新たな動画生成AI「Sora 2」を公開したが、同社は「公人の描写をデフォルトでブロックする」と明言している。しかし、この制限には大きな抜け穴があり、「すでに亡くなっている公人の映像生成は許可されている」ことが判明。その結果として、SNS上では故人を自由にAI動画で再現する例が続々と登場している』

OpenAIのSora 2、「マイケル・ジャクソンがコメディ」など故人の動画を続々作成 | Gadget Gate
gadget.phileweb.com/post-11113

0
0
0

Sora 2で生成の動画、別SNSに“AI素性隠して”大量投稿し再生数荒稼ぎ ウォーターマークを消すツールとアルトマン氏の著作権への対応(生成AIクローズアップ) | テクノエッジ TechnoEdge
techno-edge.net/article/2025/1

『本来、Sora 2で生成された動画にはウォーターマークが自動的に付与される仕組みになっています。このマークは、コンテンツがAI生成であることを明示し、透明性を確保するための機能です』

『一方で、Sora 2のウォーターマークを除去するWebサービスが早くも登場しています。試してみましたが、ブラウザ上で数分のうちに除去が完了しました』

『現在のSora 2はアニメキャラクターなどの著作権に引っかかるであろう二次創作も可能にしています。つまり非実在だけでなく、既存キャラクターや人物の動画も生成可能であり、投稿もあっさりできる状況です。このような状況の中で、SNSプラットフォーム側も対応を検討していると考えられます』

0
0

"A recent report by content delivery platform company Fastly found that at least 95% of the nearly 800 developers it surveyed said they spend extra time fixing AI-generated code, with the load of such verification falling most heavily on the shoulders of senior developers.

These experienced coders have discovered issues with AI-generated code ranging from hallucinating package names to deleting important information and security risks. Left unchecked, AI code can leave a product far more buggy than what humans would produce.

Working with AI-generated code has become such a problem that it’s given rise to a new corporate coding job known as “vibe code cleanup specialist.”

TechCrunch spoke to experienced coders about their time using AI-generated code about what they see as the future of vibe coding. Thoughts varied, but one thing remained certain: The technology still has a long way to go.

“Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, ‘Please take this into the dining room and pour coffee for the family,’” Rover said.

Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren’t going to tell you. “It doesn’t make the kid less clever,” she continued. “It just means you can’t delegate [a task] like that completely.”"

techcrunch.com/2025/09/14/vibe

0

, , , and all of that hokey nonsense shall not appear in my roadmaps as anything other than a neat research item until it can demonstrate a feasible path to or mathematical completeness.

I lead on the largest mobile- fleet known to humankind. I will not entrust decisions that could maim or kill to a pile of nondeterminate math prone to “hallucinations” or confabulation.

0

Diffusion models are a kabbalistic miracle of math. At the core, they’re just incredibly advanced denoising systems, formally known as Denoising Diffusion Probabilistic Models (DDPMs), e.g. Stable Diffusion and DALLE-2.

During training, the model is shown hundreds of millions of images paired with text descriptions. To teach it how to "clean up" noisy images, we intentionally add random noise to each training image. The model’s job is to learn how to reverse it using the text prompt as a guide for where and how to remove the noise.

When you generate an image, the model performs this process in reverse. It starts with a latent space of pure random noise and gradually subtracts more and more noise with each diffusion step. It's synthesizing an image from scratch by removing all of the noise until the image remains, organizing the chaos into whatever you asked it to generate.

0

Intro Post: Obvs

All things

I’m also interested in & its potential impact on politics

My first proper word was ‘book’ so expect - and because I’ve been hooked since the days of Outer Limits & the Twilight Zone

All things

stuff - & some other beasties - & & er 😂🖖🖖

0

It has been know for a long time that, when it comes to AI training, quality is much more important than quantity. Yet Big Tech goes for quantity because it is cheaper and takes less time to train a model on non-curated garbage scrapped from the internet than taking the time and the (human) effort of curating the data. That is one of the problems with their models: Garbage in, garbage out.

bloomberg.com/news/articles/20

Thread 1/3

0
0

It's official! We've launched a toolkit to help teachers and students adapt classes to the sudden rise of generative AI. Browse recommended policies, watch tutorials, and learn strategies for integrating—or excluding—text- and media-generators in your classes.

umaine.edu/learnwithai

Learning With AI graphic
0
0
0

said computers are "bicycles for the mind." Bicycles use geometry & physics to help one go further under the same amount of effort they themselves put in.

I feel are "escalators for the mind". Escalators are expensive, complex, inefficient machines that churn constantly whether used or not. A convenience. They _are_ a helpful assistive device for those that need it — and many do! — and you _can_ to go further, faster, if putting in effort, but _most_ use for laziness.

0
0
0
0
0

Bullshit universities: the future of automated education. ~ Robert Sparrow, Gene Flenady. link.springer.com/article/10.1

0
0

Want to support AIpub.social? Best way to do that now will be by getting yourself 20% off your first year subscription to ThinkDiffusion Pro using this affiliate link:

thinkdiffusion.com/?via=AIpub

In addition to 20% of the first year subscription cost, you will also get an additional 20% more credits on your first deposit.

These commissions will help me cover server and storage costs for this instance, but also provide you with the following perks, see thread >>

0

It was on my todo list for a long time and now I'm - writing in En and De, darum auch

Interested in

I found some active accounts from organisations but it seems to be harder to find accounts representing persons.

Can you recommend me any accounts from Switzerland or accounts focussing on AI? :)

0
0
0

Now starting at ICT.OPEN: a panel discussion on the Future of Education in the Era of Generative AI. As a freshly-minted assistant professor with increasingly many teaching responsibilities, I am keen!

ictopen.nl/programme/the-futur















Five people sitting on a stage in front of a slide that says: Plenary session - The future of Education in the Era of Generative AI.
0
0
0
0