Sorry about all of the questions, but the answers are revealing :)
Is there such a thing as a too-big setting for "derby.storage.pageCacheSize"? That is, if memory itself is not a constraint, is there a point at which the time to find a page in the page cache becomes longer than the time it takes to read the page from disk?
In a customer's production environment, an Oracle M5000 with 32 processors and 32 GB of memory, I allocated 8 GB of heap space to Derby and set "derby.storage.pageCacheSize=64000" with a page size of 4096 bytes.
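For reference, the relevant settings looked roughly like this (the derby.properties layout and the startup command beyond -Xmx are illustrative):

    # derby.properties (first test)
    derby.storage.pageCacheSize=64000
    derby.storage.pageSize=4096

    # JVM heap set at startup
    java -Xmx8g ...

Note that 64000 pages x 4096 bytes is only about 250 MB of page data, so the cache itself should fit comfortably within the 8 GB heap.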
After a day of processing, the used heap roughly equals the max heap, and some of the queries I was executing were taking a very long time, on the order of minutes when they should take on the order of seconds. The queries probably would not find their data in the page cache.
I stopped all other processing so I could focus on just this problem. Using jvisualvm, I forced a garbage collection and then ran the queries. The times dropped back to the order of seconds.
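(For anyone reproducing this without the GUI: on a recent JDK a full collection can also be requested from the command line; the pid here is illustrative.)

    # ask the JVM to run a full GC
    jcmd 12345 GC.run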
As a second test, I reconfigured the system with a max heap of 512 MB and a page cache size of 4000 pages. For the last few hours of running, the queries have consistently been completing in the expected timeframe.
So I was wondering if the time to find a page in the cache was becoming excessive. I naively suspect that the page cache is held by weak references (I don't know this, I'm only guessing), and that when I forced a full garbage collection in the first test, the page cache was emptied.
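To make the guess concrete, this is the kind of cache I'm imagining. It is emphatically not Derby's actual implementation, which I haven't read; it's just a sketch of how a reference-based cache would empty under a full GC:

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical page cache. The collector may clear entries during a
    // full GC, so a lookup that was a hit before the collection silently
    // becomes a disk read afterward.
    class GuessedPageCache {
        private final Map<Long, SoftReference<byte[]>> pages =
                new ConcurrentHashMap<Long, SoftReference<byte[]>>();

        byte[] getPage(long pageId) {
            SoftReference<byte[]> ref = pages.get(pageId);
            byte[] page = (ref == null) ? null : ref.get();
            if (page == null) {                      // cleared by GC or never cached
                page = readPageFromDisk(pageId);
                pages.put(pageId, new SoftReference<byte[]>(page));
            }
            return page;
        }

        private byte[] readPageFromDisk(long pageId) {
            return new byte[4096];                   // stand-in for real I/O
        }
    }

If Derby instead holds pages with hard references, a forced GC shouldn't empty the cache at all, which would point back at lookup cost (or GC pressure itself) as the culprit.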