Discussion:
[mongodb-dev] Possible issue in cache eviction policy
Moditha Hewasinghage
2018-07-20 14:16:55 UTC
Hi,

I have been running some experiments on cache usage in MongoDB (3.6) with
WiredTiger, and I think there might be an issue in the cache eviction
policy. This is the configuration I have specified:

storage:
  syncPeriodSecs: 60
  journal:
    enabled: false
  dbPath: "C:/data/mongo/pokec2"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.25
      journalCompressor: "none"
    collectionConfig:
      blockCompressor: "none"
    indexConfig:
      prefixCompression: false

I have 5 collections with the same data, including the _id, and I query a
random _id from every collection. I do this for multiple iterations, and
after each iteration I check the cache usage using collection.getStats(),
looking at wiredTiger.cache."bytes currently in the cache". As you can see
from the attached image, eviction happens on a single collection: its cache
usage keeps dropping until a certain point, and then another collection
starts being evicted from the cache. This trend continues while one of the
collections keeps growing. The evicting and the growing collections are the
same.
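The measurement step above can be sketched in Java. This is a minimal,
hypothetical sketch, not the attached program: it assumes the collStats
reply has been decoded into nested maps (the Java driver's Document type
implements Map<String, Object>), and the numbers below are synthetic,
made up purely for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheStat {
    // Extracts "wiredTiger.cache.bytes currently in the cache" from a
    // collStats result decoded into nested Maps.
    @SuppressWarnings("unchecked")
    static long bytesInCache(Map<String, Object> collStats) {
        Map<String, Object> wt = (Map<String, Object>) collStats.get("wiredTiger");
        Map<String, Object> cache = (Map<String, Object>) wt.get("cache");
        return ((Number) cache.get("bytes currently in the cache")).longValue();
    }

    public static void main(String[] args) {
        // Synthetic stand-in for a real collStats reply.
        Map<String, Object> cache = new HashMap<>();
        cache.put("bytes currently in the cache", 123456789L);
        Map<String, Object> wt = new HashMap<>();
        wt.put("cache", cache);
        Map<String, Object> stats = new HashMap<>();
        stats.put("wiredTiger", wt);
        System.out.println(bytesInCache(stats)); // prints 123456789
    }
}
```

In the real experiment the stats map would come from running the collStats
command (what collection.getStats() wraps) via the MongoDB Java driver,
once per collection per iteration.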

In my opinion this looks like a flaw in the cache eviction process and in
how priority is given to each of the collections, because the collections
are identical in every respect. Can anyone help me with this?
--
You received this message because you are subscribed to the Google Groups "mongodb-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-dev+***@googlegroups.com.
To post to this group, send email to mongodb-***@googlegroups.com.
Visit this group at https://groups.google.com/group/mongodb-dev.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-dev/fb6f5caa-7d77-4512-8d66-da76dc463006%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Alex Gorrod
2018-07-22 22:40:01 UTC
Hi Moditha,

That does look like interesting behavior - we'd like to investigate. Can
you share an application that reproduces the behavior you describe?

Thanks,
Alex

Moditha Hewasinghage
2018-07-23 19:29:34 UTC
Hi Alex,

I'm attaching a simple Java program for the experiments, together with the
DB dump and a file with random _ids. Use the above configuration for
MongoDB and make 5 copies of the attached collection. If you need further
information, I am happy to help.

dump: https://we.tl/rse3OXZPme
Alex Gorrod
2018-07-24 05:49:32 UTC
Hi,

Thanks for reporting this. I have opened a JIRA ticket to follow
up: https://jira.mongodb.org/browse/WT-4194 - please watch that ticket for
further information.

- Alex
'Michael Cahill' via mongodb-dev
2018-08-07 06:30:26 UTC
Hi Moditha,

As mentioned in the ticket
<https://jira.mongodb.org/browse/WT-4194?focusedCommentId=1968630&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-1968630>,
it looks like this issue was fixed by WT-3079 in MongoDB 3.6.1.

Michael.