Open
Description
If I populate a corpus from a file of 10,000 Tweets (without using a datastore), it takes less than a minute to create all the documents and add them to the resource tree. If I then select them all and try to close them, it takes an extremely long time (I gave up after about 10 minutes). There is also a lot of CPU activity (around 50% on my laptop) but almost no GC, so I don't think this is related to freeing memory as documents are removed. I know the easy answer is to use a datastore, but it still seems odd that removing the documents is so much slower than loading them.
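The add-fast/remove-slow asymmetry is consistent with quadratic removal cost. Purely as an illustration, and not a claim about how GATE's resource tree is actually implemented: appending to an array-backed list is amortized O(1), but removing elements one at a time from the front shifts every remaining element, so clearing n documents that way costs O(n²) element moves in total. A minimal Java sketch of the pattern:

```java
import java.util.ArrayList;
import java.util.List;

public class AddVsRemove {
    public static void main(String[] args) {
        int n = 10_000;
        List<String> docs = new ArrayList<>();

        // Appending n elements: amortized O(1) each, O(n) total.
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            docs.add("tweet-" + i);
        }
        long addMs = (System.nanoTime() - t0) / 1_000_000;

        // Removing from the front n times: each call shifts all
        // remaining elements left, so the total work is O(n^2).
        long t1 = System.nanoTime();
        while (!docs.isEmpty()) {
            docs.remove(0);
        }
        long removeMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("add:    " + addMs + " ms");
        System.out.println("remove: " + removeMs + " ms");
        System.out.println("final size: " + docs.size());
    }
}
```

If something like this is happening per removed document (for example, rebuilding or re-notifying the resource tree on every removal), closing all 10,000 documents would show exactly this profile: high CPU, little GC, and runtime far beyond the load time.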