I'm beginning to think ChatGPT is getting pretty smart.

I was reading up on the "Performer" approach to approximating the core matrix multiplications used in attention/transformers in ML [2].
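
For context: the Performer replaces the softmax kernel exp(q·k) in attention with a dot product of positive random features, so the full N×N attention matrix never has to be materialized. Here's a minimal numpy sketch of the underlying identity (the function name phi and all the sizes are mine, not the paper's):

```python
import numpy as np

# Positive-random-feature identity behind the Performer (FAVOR+-style):
#   exp(q.k) = E_w[ exp(w.q - |q|^2/2) * exp(w.k - |k|^2/2) ],  w ~ N(0, I)
# so exp(q.k) is approximated by a plain dot product phi(q).phi(k).
def phi(x, omega):
    m = omega.shape[0]
    return np.exp(omega @ x - x @ x / 2) / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 8, 200_000                     # data dim, number of random features
q = 0.3 * rng.normal(size=d)
k = 0.3 * rng.normal(size=d)
omega = rng.normal(size=(m, d))       # rows are i.i.d. N(0, I_d) samples

print(np.exp(q @ k))                  # exact softmax kernel value
print(phi(q, omega) @ phi(k, omega))  # Monte Carlo estimate; should be close
```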

ChatGPT tells me this:

exp(a·b) = ⟨coh(a), coh(b)⟩.

At this point it's obviously on crack! coh(a) is a coherent state of a quantum particle as described in [1]. (And ⟨·,·⟩ is the inner product on quantum states.)

But no, after some thought this makes perfect beautiful sense. There's an underlying story about how the "kernel trick" used in ML is very similar to the way physicists like to use propagators to reason about fields. In this particular case, the kernel trick amounts to embedding features for ML in a Fock space 🤯.
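
Here's the little computation that makes it click, assuming coh(a) denotes the *unnormalized* coherent state (what the Fock-space literature calls an exponential vector); I'm writing the one-dimensional case, and the d-dimensional version just tensors d copies together:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Unnormalized coherent state (exponential vector) over the number basis:
\[
  \mathrm{coh}(a) \;=\; \sum_{n=0}^{\infty} \frac{a^{n}}{\sqrt{n!}}\,\lvert n\rangle .
\]
% Orthonormality of the number states collapses the double sum:
\[
  \langle \mathrm{coh}(a), \mathrm{coh}(b)\rangle
  \;=\; \sum_{n=0}^{\infty} \frac{(\bar{a}\,b)^{n}}{n!}
  \;=\; e^{\bar{a}\,b},
\]
% which for real a, b is exactly exp(a.b): the softmax kernel is the
% Gram matrix of feature vectors living in Fock space.
\end{document}
```

The random features in the sketch above are then one concrete, finite-dimensional way to estimate this inner product.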

[1] https://math.ucr.edu/home/baez/photon/
[2] https://arxiv.org/abs/2009.14794
