Comparing AI and Humans Through Data Efficiency

bgl gwyng @bgl@hackers.pub

When comparing the capabilities of AI and humans, data efficiency stands out as a meaningful long-term metric.

Data efficiency refers to how well one can generalize and learn from a given amount of data. Current AI systems are significantly less data-efficient than humans. This is also one of the challenges that Ilya Sutskever, the OpenAI co-founder behind much of the research that led to ChatGPT, has identified as needing to be addressed. Since this problem cannot be solved simply by adding more GPUs and requires genuine innovation in design, it offers hope to those concerned about the rapid advancement of AI.

Phew, that's a relief. I can finally sleep soundly.

...except there are some points to consider here.

First, even if AI has lower data efficiency, once it learns something, that capability can be replicated across every instance. Jobs that don't require continuously learning new information are therefore still threatened; call center work is one example.

Jobs where data efficiency is crucial include those of executives and researchers. These professionals constantly face situations with limited data, and when new data becomes available, they need to squeeze as much information out of it as possible to inform their next decisions. As long as humans adapt to new data better, they stay in control while AI serves as a useful tool. Given that business and research outcomes have significant impacts on the world, continued human involvement in these areas can be viewed positively from a safety perspective.

...but is human data efficiency really higher?

As I recall, Yann LeCun pointed out that when you calculate the amount of data humans process, it's not significantly less than what's used in AI training. Humans constantly collect audiovisual data through their sensory organs, which is equivalent to learning from an extremely long video.
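
Out of curiosity, here's a rough back-of-envelope version of that comparison. Every constant below is a loose assumption on my part (LeCun has cited figures in this ballpark), not a measurement:

```python
# Back-of-envelope: a young child's raw visual input vs. LLM training text.
# Every constant here is a rough assumption, not a measurement.
optic_nerve_bytes_per_sec = 2e7          # ~20 MB/s across the optic nerves
waking_seconds_by_age_4 = 16_000 * 3600  # ~16,000 waking hours
visual_bytes = optic_nerve_bytes_per_sec * waking_seconds_by_age_4

llm_tokens = 1e13                        # ~10 trillion training tokens
llm_bytes = llm_tokens * 2               # very roughly ~2 bytes per token

print(f"child's visual input: {visual_bytes:.1e} bytes")  # ~1.2e15
print(f"LLM training text:    {llm_bytes:.1e} bytes")     # ~2.0e13
```

On these (debatable) numbers, the raw sensory stream actually dwarfs the text corpus, which is LeCun's point.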

However, we need to consider that video isn't an efficient data format. It contains mostly noise, including a substantial amount of trivial content like YouTube Shorts watched before bed. Compared to GPT, which has essentially read every book in existence, humans do use substantially less data for effective learning.

Another perspective is that while humans use less data for learning, it might be more of a shortcut than true efficiency. This aspect fascinates me and is the main reason I started writing this post.

By "shortcut," I mean that instead of learning data as it is, humans learn based on certain biases (or stubbornness?). Theoretically, neural networks can learn any function given sufficient parameters. But can humans learn any function?

For example, no human can perform 100-digit multiplication mentally. Considering the number of neurons in the brain, implementing a program for 100-digit multiplication shouldn't be impossible in principle, yet it's hard to believe anyone could achieve it through practice alone. Looking around, we can easily find intellectual tasks that aren't particularly complex (i.e., simple functions) but are difficult without tools.
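
For contrast, the same task is trivial for a machine; a quick Python illustration (mine, nothing deep):

```python
# 100-digit multiplication: hopeless mentally, trivial in software.
# Python integers have arbitrary precision, so the result is exact.
import random

a = random.randrange(10**99, 10**100)  # a random 100-digit number
b = random.randrange(10**99, 10**100)
product = a * b                        # ~200 digits, computed instantly
print(len(str(product)), "digits")
```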

This suggests that humans don't actually perform "learning" in the general sense better than current AI. Instead, humans make (hasty?) judgments even when data is insufficient. For some reason, this approach has worked well so far.

Imagine a developer who studies functional programming for just one week and thinks, "This is amazing! I should continue studying and implement it in projects." This judgment isn't based on a deep understanding of functional programming. Rather, it's closer to an aesthetic judgment based on criteria like simplicity and elegance. If functional programming remains an important paradigm ten years later, they might retroactively frame their choice as insight or intuition.

Perhaps the individuals and events we describe as "genius" or "revolutionary" are also products of such boldness? A well-known example is Newton discovering the law of universal gravitation after seeing an apple fall. That is extreme data efficiency. The leap is often interpreted as a belief that "simple" laws apply "consistently" behind complex natural phenomena. But there's little basis for that belief!

So, could we teach AI to do the same? Whether it's insight or a shortcut, it seems effective.

While we don't know how to teach this yet, there's also debate about whether we should. Shortcuts are just shortcuts and might work today but fail tomorrow. There are countless examples of failures due to incorrect intuition. I understand there are many self-proclaimed Newtons uploading dubious papers to arXiv.

Is pushing overwhelming amounts of data into a blank slate the only way to continuously approach truth?


1 comment


@bgl It seems that one thing people who produce major breakthroughs in humanity's intellectual history have in common is exactly that kind of temperamental bias, or obsession. Outstanding raw thinking ability surely played a part too, but I sometimes wonder whether that was more of an auxiliary means: something that let them survive (professionally and biologically) past a certain age despite carrying that temperamental risk, long enough to put the results out into the world. I don't have enough evidence to persuade anyone yet, but there really are things that only come out when someone pushes ahead with groundless conviction for ten years or more.

So is this a competitive learning model at the level of the individual? Obviously not, I think. But if you consider humanity as a whole as one big ensemble learning machine, it works rather well. If I had to give it a name: moths-to-the-flame swarm learning?!
