Indices are good for performance, but query caches can improve it even more. Imagine a client app that works with a mostly static dictionary of unique artifacts (securities, products, or whatever). The entities in the dictionary can be updated from time to time, so it is not practical to cache the whole dictionary on the client side. Now let's say we have a simple Bagri cluster deployed on 1..n nodes with many clients connected to it. The first client to query a static entity will have to wait a little (5..10 extra ms, not too much) for initial query parsing and compilation. The compiled query is then stored in an internal Bagri cache shared between all cluster nodes, so all subsequent executions of the same query reuse the already prepared plan. The same goes for query results: Bagri caches results per query/parameters pair and simply returns the cached result if one already exists. Of course, cached results are invalidated whenever any document affected by the query is modified.
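To make the mechanism concrete, here is a minimal sketch (not Bagri's actual implementation; all names are hypothetical) of the two caches described above: a compiled-query cache and a result cache keyed by the query/parameters pair, with result invalidation when a document the query touched is modified.

```python
class QueryCache:
    """Illustrative two-level cache: compiled plans plus parameterized results."""

    def __init__(self):
        self.compiled = {}    # query text -> "compiled" plan
        self.results = {}     # (query text, params) -> cached result
        self.query_docs = {}  # query text -> ids of documents its results touched

    def compile(self, query):
        # The first caller pays the parse/compile cost; later callers reuse the plan.
        if query not in self.compiled:
            self.compiled[query] = f"PLAN[{query}]"  # stand-in for a real compile step
        return self.compiled[query]

    def execute(self, query, params, run, touched_docs):
        key = (query, tuple(sorted(params.items())))
        if key in self.results:
            return self.results[key]  # cache hit: no re-execution at all
        plan = self.compile(query)
        result = run(plan, params)
        self.results[key] = result
        self.query_docs.setdefault(query, set()).update(touched_docs)
        return result

    def invalidate(self, doc_id):
        # A document changed: drop cached results of every query that read it.
        for query, docs in self.query_docs.items():
            if doc_id in docs:
                self.results = {k: v for k, v in self.results.items()
                                if k[0] != query}
```

In a real cluster these maps would be distributed (Bagri is built on a data grid), but the lifecycle is the same: compile once, serve repeated identical queries from the result cache, and evict only the results whose underlying documents were touched.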