Use case

Sep 6, 2007 at 9:19 AM
Although this work seems quite interesting, I have a concern about use cases.

I mean, is it safe to base an application design on the assumption that we load several million entities into memory and iterate over them? (That kind of volume seems necessary for indexing to be relevant.)

Moreover, what kind of overhead is involved in index creation?
Coordinator
Sep 15, 2007 at 1:46 AM
For certain use cases, yes.

The one I developed this for involved bundle price optimization for a group purchasing organization - bundled prices for pharmaceuticals. In other words, over a consistent set of drugs indexed by national drug code (NDC), run an algorithm that compares various sets of bids with certain inter-bid conditions, in a manner that can handle all the different combinations of bundles from different manufacturers and come up with the best set of bundles for the group to pick.

In other words, the same set of data has to stay in memory (i.e. it can't constantly be reloaded from the database via lazy loading) and needs to be optimized for quick access.
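To make the pattern concrete, here is a minimal sketch (in Python, with invented names - this is not the library's actual API) of what "keep it in memory, index it once, look it up fast" amounts to: build a dictionary keyed by NDC once, then answer every lookup without touching the database.

```python
# Hypothetical sketch: an in-memory index over a fixed set of drug records,
# keyed by national drug code (NDC). All names here are illustrative.

class Drug:
    def __init__(self, ndc, name, price):
        self.ndc = ndc
        self.name = name
        self.price = price

# The full working set is loaded into memory once.
drugs = [
    Drug("0002-1433", "DrugA", 12.50),
    Drug("0002-1975", "DrugB", 7.25),
    Drug("0003-0894", "DrugC", 31.00),
]

# Build the index once; its cost is amortized over every subsequent query.
ndc_index = {d.ndc: d for d in drugs}

# O(1) lookup instead of re-querying the database or scanning the list.
assert ndc_index["0002-1975"].name == "DrugB"
```

The point is the shape of the workload: one load, one index build, then many cheap reads over the same consistent data set.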

Amazingly, this is not that uncommon a scenario. Any ecommerce site with more than a trivial number of items for sale (>100k) could benefit from something like this to improve page response time. I could go on and on...

As for the overhead involved, yes, just as with database indexes, it is part of the cost-benefit analysis. Adding an index per property you actually query isn't that bad - but adding an index for every single property in a class is, well, pretty stupid. Only the dev knows the best strategy based on expected usage, which is why the dev gets to put attributes on the class to determine where the indexes go.
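The attribute-driven approach can be sketched as follows (again a hypothetical illustration, here using a Python class decorator in place of .NET attributes; every name is invented): the developer declares which properties deserve an index, the collection builds one dictionary per declared property at load time, and only those properties get the build cost and fast lookups.

```python
# Hypothetical sketch of attribute-driven indexing. Names are illustrative,
# not the library's actual API.
from collections import defaultdict

def indexed(*property_names):
    """Class decorator recording which properties should be indexed."""
    def wrap(cls):
        cls._indexed_properties = property_names
        return cls
    return wrap

@indexed("category")  # index only what the expected queries will hit
class Item:
    def __init__(self, sku, category, price):
        self.sku = sku
        self.category = category
        self.price = price

class IndexedCollection:
    def __init__(self, items):
        self.items = list(items)
        props = type(self.items[0])._indexed_properties
        # One dict per declared property; the build cost is paid once here,
        # which is exactly the overhead being traded for fast reads.
        self.indexes = {p: defaultdict(list) for p in props}
        for item in self.items:
            for prop, idx in self.indexes.items():
                idx[getattr(item, prop)].append(item)

    def where(self, prop, value):
        return self.indexes[prop][value]  # O(1) bucket lookup

catalog = IndexedCollection([
    Item("A1", "antibiotic", 9.99),
    Item("A2", "analgesic", 4.50),
    Item("A3", "antibiotic", 14.00),
])
assert [i.sku for i in catalog.where("category", "antibiotic")] == ["A1", "A3"]
```

Unindexed properties (`sku`, `price` above) cost nothing extra; queries on them would fall back to a scan, which is the trade-off the developer is choosing explicitly.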

Thanks for the question though - certainly a very relevant one!