Thinking about decoding some of the more complicated protocols in ngscopeclient more fully on the GPU.

Let's take I2C, for example. I have a 2x 10M point capture coming off my STM32MP2 / Kintex-7 testbed that takes about 188 ms to decode on the Xeon 4310 in my lab workstation (lots of idle time with just a few packets).

The decode can accept either sparse or uniformly sampled data; right now it's getting uniform data at 100 Msps, which is overkill, but the ThunderScope doesn't yet let you decimate to go any slower.

So for the "sampling at many times the symbol rate" use case, the easiest GPU win might be to delta-code the uniformly sampled data and store SDA/SCL separately as sparse waveforms (i.e. sample value, start time, duration).
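
A rough CPU-side sketch of that transform, just to pin down the data layout (struct and function names here are mine, not ngscopeclient's actual sparse waveform API): each run of identical samples collapses into one (value, start, duration) tuple.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sparse-sample record: one entry per run of identical values.
// Times are in units of the uniform sample period (100 Msps in this capture).
struct SparseSample
{
    bool     value;     // line state (SDA or SCL) during this run
    uint64_t start;     // index of the first sample in the run
    uint64_t duration;  // number of consecutive samples with this value
};

// Delta-code a uniformly sampled digital channel into runs.
// Worst case (line toggling every sample) the output is as large as the input,
// which is why a GPU version still needs a full-size temporary buffer.
std::vector<SparseSample> DeltaCode(const std::vector<bool>& uniform)
{
    std::vector<SparseSample> out;
    if(uniform.empty())
        return out;

    SparseSample cur{uniform[0], 0, 1};
    for(uint64_t i = 1; i < uniform.size(); i++)
    {
        if(uniform[i] == cur.value)
            cur.duration++;
        else
        {
            out.push_back(cur);
            cur = SparseSample{uniform[i], i, 1};
        }
    }
    out.push_back(cur);
    return out;
}
```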

But that also involves storing and reading back from a temporary memory buffer (which will have to be as big as the waveform, since there's no way to know in advance how many I2C events there will be).
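
For what it's worth, the usual GPU-friendly way to do that kind of compaction is flag-and-scan. A CPU-side sketch of the idea (again illustrative, not real ngscopeclient code), which also shows why the scratch space has to be sized for the worst case of one entry per input sample:

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Mark run boundaries, exclusive-scan the flags to get each edge's output slot,
// then scatter. Order is preserved (unlike an atomic-counter append), but the
// exact output count is only known after the scan completes.
std::vector<uint64_t> RunStartIndices(const std::vector<bool>& uniform)
{
    size_t n = uniform.size();
    if(n == 0)
        return {};

    std::vector<uint8_t> flag(n, 0);
    flag[0] = 1;                                    // first sample always starts a run
    for(size_t i = 1; i < n; i++)
        flag[i] = (uniform[i] != uniform[i-1]);     // 1 wherever the line toggles

    std::vector<uint64_t> slot(n);
    std::exclusive_scan(flag.begin(), flag.end(), slot.begin(), uint64_t(0));

    std::vector<uint64_t> out(slot.back() + flag.back());   // count known only post-scan
    for(size_t i = 0; i < n; i++)
        if(flag[i])
            out[slot[i]] = i;                       // scatter each run's start index
    return out;
}
```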

Which brings us to the second option: try to implement the entire decode inner loop in a shader.
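
For reference, here's a stripped-down version of the kind of serial inner loop that would have to move into the shader (illustrative only, not the actual libscopehal I2C decoder; it treats the address byte like any other data byte). The painful part for a GPU port is the state carried across samples: the bit counter and shift register make the loop inherently sequential unless you split the capture at start/stop boundaries first.

```cpp
#include <cstdint>
#include <vector>

// Simplified I2C event extraction over uniformly sampled SDA/SCL.
struct I2CEvent
{
    enum Type { Start, Stop, Byte } type;
    uint8_t  data;   // valid for Byte
    bool     nak;    // valid for Byte: true if the 9th bit (ACK slot) was high
    uint64_t index;  // sample index where the event completed
};

std::vector<I2CEvent> DecodeI2C(const std::vector<bool>& sda, const std::vector<bool>& scl)
{
    std::vector<I2CEvent> events;
    uint8_t shreg = 0;
    int bits = 0;
    bool active = false;

    for(size_t i = 1; i < sda.size() && i < scl.size(); i++)
    {
        bool sdaRise = sda[i] && !sda[i-1];
        bool sdaFall = !sda[i] && sda[i-1];
        bool sclRise = scl[i] && !scl[i-1];

        // START (or repeated start): SDA falls while SCL is high
        if(sdaFall && scl[i])
        {
            events.push_back({I2CEvent::Start, 0, false, i});
            active = true;
            shreg = 0;
            bits = 0;
        }
        // STOP: SDA rises while SCL is high
        else if(sdaRise && scl[i])
        {
            events.push_back({I2CEvent::Stop, 0, false, i});
            active = false;
        }
        // Data and ACK bits are sampled on the rising edge of SCL
        else if(sclRise && active)
        {
            if(bits < 8)
            {
                shreg = (shreg << 1) | (sda[i] ? 1 : 0);
                bits++;
            }
            else
            {
                // 9th clock: ACK (SDA low) or NAK (SDA high)
                events.push_back({I2CEvent::Byte, shreg, sda[i], i});
                shreg = 0;
                bits = 0;
            }
        }
    }
    return events;
}
```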
