Today I've investigated migrating https://drmdb.emersion.fr to SQLite. It wasn't as simple as I hoped.
drmdb collects JSON blobs representing supported features for GPU display engines. It has ~1k blobs, each between 10 and 200 KiB. The operations needed to render a page aren't easily expressed in anything other than application code: specialized comparison functions, deeply nested aggregates, and so on.
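To make that concrete, here is a toy stand-in for the kind of nested aggregation involved, sketched in Python (drmdb itself isn't written in Python, and the `features` schema is entirely hypothetical). Custom logic like this is trivial in application code but awkward to express in SQL:

```python
def common_features(blobs):
    """Intersect the feature sets reported by each blob.

    A toy example of a deeply nested aggregate with custom logic:
    walk each blob's (hypothetical) nested "features" object and
    combine the results across all blobs.
    """
    feature_sets = [set(b.get("features", {}).keys()) for b in blobs]
    return set.intersection(*feature_sets) if feature_sets else set()
```

Rendering a page means running several such functions over the whole dataset.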
Originally drmdb used a very dumb scheme: it stored all of the blobs in a directory. Each time it needed to render a page, it loaded all of the blobs from disk. It worked pretty well until we hit a few hundred blobs. Then it took multiple seconds to load each page.
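The original scheme amounts to something like the following sketch (in Python for illustration; the actual implementation differs). Every page render re-reads and re-parses every blob:

```python
import json
from pathlib import Path

def load_all_blobs(blob_dir):
    """Read and parse every JSON blob in the directory.

    Called on every page render in the original scheme, so the
    cost grows linearly with the number of blobs.
    """
    blobs = []
    for path in sorted(Path(blob_dir).glob("*.json")):
        with open(path) as f:
            blobs.append(json.load(f))
    return blobs
```

At a few hundred blobs of 10 to 200 KiB each, the parsing alone adds up to multiple seconds per request.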
I put Varnish in front of drmdb as a stop-gap, but cache invalidation was annoying to deal with: users wouldn't see their device in the lists after uploading their data, and pages would still take multiple seconds to generate after being invalidated.
Then I switched to another dumb approach: on startup, read everything from disk into memory, up to a limit of 1k blobs. Startup would take a dozen seconds, but then every page would render instantly.
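The second approach moves the same loop to startup and keeps the result around, roughly like this (again a Python sketch with hypothetical names, not drmdb's actual code):

```python
import json
from pathlib import Path

def load_into_memory(blob_dir, limit=1000):
    """Load up to `limit` blobs into memory once, at startup.

    Page handlers then read from the returned list instead of
    touching the disk, trading slow startup for instant renders.
    """
    blobs = []
    for path in sorted(Path(blob_dir).glob("*.json")):
        if len(blobs) >= limit:
            break
        blobs.append(json.loads(path.read_text()))
    return blobs
```

The hard-coded limit is the part that felt wrong: it silently drops blobs once the dataset outgrows it.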
Then the dataset grew larger, so I bumped the 1k limit to 4k. It felt wrong, but worked okay: the heaviest page now loads in 3s, the second heaviest in 400ms.
I wondered how SQLite would fare for this kind of unusual use case. Just moving the dataset from the filesystem to SQLite made everything very slow again: everything is still pulled from disk and parsed on each render, so that's to be expected.
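A naive translation looks something like this sketch (Python's stdlib `sqlite3`; the table layout is an assumption, not drmdb's schema). Note that the per-render work is unchanged: every blob still comes off disk and through the JSON parser, just behind SQL now:

```python
import json
import sqlite3

def init_db(path):
    """Open the database and create the (hypothetical) blobs table."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS blobs (id TEXT PRIMARY KEY, data TEXT NOT NULL)"
    )
    return db

def fetch_all_blobs(db):
    # Same cost profile as the directory-of-files scheme: every
    # page render reads and parses the full dataset.
    return [json.loads(row[0]) for row in db.execute("SELECT data FROM blobs")]
```

Without pushing the aggregation itself into SQL (hard, given the application-code logic described above), SQLite here is just a slower filesystem.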