Version store out of memory (cleanup already attempted) error

Mar 4, 2011 at 4:05 PM

Hi all,

I played a bit with ManagedEsent and I think it is a great library. When I tried to rapidly insert a large amount of data I got the following exception:

Microsoft.Isam.Esent.Interop.EsentVersionStoreOutOfMemoryException: Version store out of memory (cleanup already attempted)

   at Microsoft.Isam.Esent.Interop.Api.Check(Int32 err) in C:\Work\managedesent-61364\EsentInterop\Api.cs:line 2739

   at Microsoft.Isam.Esent.Interop.Api.JetUpdate(JET_SESID sesid, JET_TABLEID tableid) in C:\Work\managedesent-61364\EsentInterop\Api.cs:line 2394

My test app (single threaded) is very simple. I have only one table with two columns (JET_coltyp.Long and JET_coltyp.LongBinary). I am not sure what the reason for this error could be. I tried to tweak the following InstanceParameters (CircularLog, LogFileSize and LogBuffers) without success. Changing the cache size did not help either. When the insert rate is slower, everything works fine.

Does anyone know a solution for this problem?


Regards,

Mihail

Mar 4, 2011 at 4:19 PM

There are two possibilities I can think of: doing too much in one transaction, or a strange way of modifying the LongBinary column.

Esent has to track undo information for all operations performed in a transaction (to enable rollback), and that information is stored in the version store. The default size of the version store is quite small. You can increase it with the JET_param.MaxVerPages system parameter. A value of 1024 (64MB of version store) will enable quite large transactions. I suggest doing 100-1000 insertions per transaction.
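For example, something along these lines (the database/table/column names, the payload size and the 500-row batch size here are just placeholders, not something from your app):

using System;
using Microsoft.Isam.Esent.Interop;

class VersionStoreExample
{
    static void Main()
    {
        using (var instance = new Instance("insertdemo"))
        {
            // Raise the version store limit before Init; 1024 version pages
            // allows much larger transactions than the default.
            instance.Parameters.MaxVerPages = 1024;
            instance.Parameters.CircularLog = true;
            instance.Init();

            using (var session = new Session(instance))
            {
                JET_DBID dbid;
                Api.JetCreateDatabase(session, "insertdemo.edb", null, out dbid, CreateDatabaseGrbit.OverwriteExisting);

                JET_TABLEID tableid;
                JET_COLUMNID idColumn, dataColumn;
                using (var transaction = new Transaction(session))
                {
                    Api.JetCreateTable(session, dbid, "records", 0, 100, out tableid);
                    Api.JetAddColumn(session, tableid, "id",
                        new JET_COLUMNDEF { coltyp = JET_coltyp.Long }, null, 0, out idColumn);
                    Api.JetAddColumn(session, tableid, "data",
                        new JET_COLUMNDEF { coltyp = JET_coltyp.LongBinary }, null, 0, out dataColumn);
                    transaction.Commit(CommitTransactionGrbit.None);
                }

                // Batch the insertions: commit every 500 rows so the undo
                // information held in the version store for any one
                // transaction stays small.
                var payload = new byte[4096];
                using (var transaction = new Transaction(session))
                {
                    for (int i = 0; i < 100000; i++)
                    {
                        using (var update = new Update(session, tableid, JET_prep.Insert))
                        {
                            Api.SetColumn(session, tableid, idColumn, i);
                            Api.SetColumn(session, tableid, dataColumn, payload);
                            update.Save();
                        }

                        if (i % 500 == 499)
                        {
                            transaction.Commit(CommitTransactionGrbit.LazyFlush);
                            transaction.Begin();
                        }
                    }

                    transaction.Commit(CommitTransactionGrbit.LazyFlush);
                }
            }
        }
    }
}

Committing with CommitTransactionGrbit.LazyFlush avoids a synchronous log flush on every batch, which helps keep the insert rate up.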

On the other hand, you say that the insert rate is the factor that causes the problem? Is the insert rate tied to the length of the transaction? If not, can you post the code that is manipulating the record, especially the JET_coltyp.LongBinary column? There could be some inefficiencies there.

Mar 4, 2011 at 4:48 PM

Hi,

Thank you for the quick answer. Increasing MaxVerPages solved the problem. You are correct, the data size is indeed tied to the transaction rate. The data is processed before being written into the database. The more precise the processing, the slower the transaction rate and the more compact the data. When heuristics are used, the transaction rate is faster and the data size is larger.

Thank you once again.

Regards,
Mihail