Please help, paging doesn't work, I am getting MAD! #360
Have you isolated this by running the actual mongo commands themselves directly? Also, if you have a repo, perhaps you can show it, or show the PR, just to see whether there may be other conflicting issues. Just from the snippet we already know what it's doing, which is by design:
So here, we know that it should indeed only be returning you a total of "length". I'd console.log the variables just to be sure we know what we're looking at in terms of the values being passed, so they can be tested against the db.
Never mind, I think I see where your concern is. Lines 46 to 77 in 56e3752
Lines 147 to 173 in 56e3752
In reality, the expected (and desired) behavior is that the table would only return the 10 rows (or however many are selected), and then as you page you get the next set. By default right now, the entire dataset is returned, and the datatable is configured to page after the fact (rather than in real time). In the code, we opted at some point to reduce the calls to a single call rather than multiple, which is great for smaller blockchains, but is fairly obviously disastrous for larger ones.
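The server-side paging described above can be sketched in plain Node. This is an illustration only, with an in-memory array standing in for the Tx collection; the function and field names are assumptions, not the explorer's actual API:

```javascript
// Server-side paging: each request returns only one window of rows, instead of
// shipping the whole dataset and letting the datatable page it client-side.
// `allTxs` is an in-memory stand-in for the Tx collection.
function getPage(allTxs, start, length) {
  // newest block first, then take only the requested window
  const sorted = [...allTxs].sort((a, b) => b.blockindex - a.blockindex);
  return sorted.slice(start, start + length);
}

const txs = Array.from({ length: 100 }, (_, i) => ({ txid: 't' + i, blockindex: i }));
const page = getPage(txs, 0, 10);
console.log(page.length);        // 10 rows returned, not 100
console.log(page[0].blockindex); // 99: the newest block
```

With a real collection, the same window would come from sort/skip/limit on the query rather than from an in-memory slice.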
Hi guys, the single query is the right way to go, but it has to be the correct query. Selecting millions of records to filter out 10 is not. I am now studying Mongo and will change the query. We can then think about putting a selector in the settings for the type of query to execute. Cheers,
It is not only the main list; every other page is so slow it's unusable. Do you have a quick fix to let me go back to the old way? It is killing my server completely, and it's a 12-core 9th-generation i7.
Nothing, I have downgraded to an old version I have; the new one was so slow on this powerful machine that mongo crashed altogether. If you have a quick suggestion, I'd give it another try.
The old one, indexed with the new sync.js, of course doesn't work properly either (after 5 days of syncing). If you guys could please find a mongodb query that can replace this monster (database.js): EDIT: I have tried a billion ways, to the point of madness, for 2 days. There's absolutely no way to get anything out of Mongodb other than a query that lasts 20s. Thank you,
I'm not sure where the 20 seconds is coming from, because I can easily hit http://lcp.altcoinwarz.com/ext/getlasttxsajax/0 in less than 5 seconds. Regardless, the issue is that you're dealing with a lot of data to parse through, and with the way that data is stored. You basically have a coin that has made it to the point where either the data's storage scheme needs to be greatly altered (and sync time will increase substantially for the processing) or you'll have to use live RPC calls like Insight does. Remember, you're querying transactions. A transactions table will always be the largest part of any scheme. We don't store by blocks (which would possibly make the front page faster), and addressHistory is a new thing, so old versions won't have it. The quick fix would be to make "Latest Transactions" only return the latest X transactions, rather than returning everything to the datatable. Hope that helps. Also, what other fixes have you coded for the PR?
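The "latest X transactions" quick fix amounts to a hard cap applied before any paging happens. A minimal sketch, assuming an in-memory stand-in for the collection; the cap value and all names here are invented for illustration, not the explorer's actual settings:

```javascript
// Quick fix sketched: serve at most MAX_TXS recent transactions to the
// datatable, never the full collection. MAX_TXS is a hypothetical cap.
const MAX_TXS = 100;

function latestTxs(allTxs, start, length) {
  const capped = [...allTxs]
    .sort((a, b) => b.blockindex - a.blockindex)
    .slice(0, MAX_TXS); // hard cap before any paging
  return { recordsTotal: capped.length, rows: capped.slice(start, start + length) };
}

const txs = Array.from({ length: 5000 }, (_, i) => ({ blockindex: i }));
console.log(latestTxs(txs, 0, 10).recordsTotal); // 100, not 5000
```

Reporting the capped total as `recordsTotal` keeps the datatable's pager consistent with what the server will actually return.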
I am using the old version of the explorer now; that's why it works. Otherwise my users would strangle me for bringing all the exchanges down again. I already fixed one problem, the index view: you guys forgot some indexes in the mongodb table, and the query has been optimized. Now, the address query page is also unusable. I already found the problem, and it is the aggregate query, specifically the sort part. Can you find a way to work around that, like writing two queries? A sort over a computed view cannot be indexed, and will always be slow. Simone
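The "two queries" idea can be sketched as: page through the indexed sort field first to collect just the _ids for the requested page, then fetch the full documents by _id. This is a simulation over in-memory arrays with invented names; with Mongoose it would be two separate queries instead of one aggregate with an unindexable sort:

```javascript
// Query 1 (simulated): sort/skip/limit on an indexed field only, returning ids.
// Query 2 (simulated): point lookups by _id, which stay cheap on huge collections.
function pageByIndexThenFetch(docs, start, length) {
  const ids = [...docs]
    .sort((a, b) => b.blockindex - a.blockindex) // the indexed field
    .slice(start, start + length)
    .map(d => d._id);
  const byId = new Map(docs.map(d => [d._id, d]));
  return ids.map(id => byId.get(id)); // full documents, page-sized
}

const docs = Array.from({ length: 50 }, (_, i) => ({ _id: i, blockindex: i, amount: i * 2 }));
console.log(pageByIndexThenFetch(docs, 0, 5).map(d => d._id)); // [49, 48, 47, 46, 45]
```

The key property is that the expensive sort only ever touches the indexed field, and the second lookup is bounded by the page size.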
PS: I have made many fixes, like the PoW + PoS display, and peer sync works properly now (number of peers = peers db). I will push back to your repo once I'm done. Simone
I have solved it all. You must make a compound index for that query on the addresses, otherwise it makes the CPU go crazy.
I activated a test version. I will send the changes to you, but you will need to add the DB indexes to the code yourself, as I don't know where they go (I'll list them here). Simone
Have you tried expanding the front index view back to showing all records, rather than 1000, to see the performance difference for a 1:1 comparison? You can add the index to https://github.com/iquidus/explorer/blob/master/models/addresstx.js
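For reference, this is the shape a compound index declaration takes in a Mongoose model file. The field names below are placeholders (the actual fields are never listed in this thread), so this shows only the form of the declaration, not the exact index needed:

```javascript
// In models/addresstx.js (sketch; field names are hypothetical placeholders):
var AddressTXSchema = new mongoose.Schema({
  a_id: { type: String },
  blockindex: { type: Number, default: 0 }
  // ...
});

// A compound index is declared separately, after the schema definition:
AddressTXSchema.index({ a_id: 1, blockindex: -1 });
```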
@uaktags yes, it will be flashing fast, so the index worked. But I don't want that for my users; they can search transactions by hand if necessary. OK, I will add the indexes there. But not only that table; the txes table also needs at least 2 indexes. Cheers,
If you have a test version, I'd love to see it flashing fast with all the millions of records, as showing all documents is the default everyone has in the current 1.7.3 iteration (unless your PR includes settings.json updates for configurability).
http://lcp.altcoinwarz.com:81/ext/getlasttxsajax/500099909999
I have modified that to force-pass 0. For the moment I intend to run it like this, but I will try to run it with the correct options later on. When I am sure no critical performance issue can bring it down, I will leave it full. I cannot allow even 2 minutes of the explorer being down, or the exchanges will immediately suspend all withdrawals/deposits, with all of our users inundating the support group :) Simone
Sorry, I don't know how to add the indexes. Once I have pushed to your repo, please add them yourself; they need to be declared separately from the schema, and added afterwards: txes addresstxes Simone
Also: countDocuments() should be avoided as much as possible. It is like doing two queries; over large databases it doubles the effort for nothing. It is better to use other means whenever the entire count is needed, like the block count. I have finished all the changes and verified it is super fast even over the entire database. I will update the repo and open a pull request. Cheers,
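The point about avoiding a count query on every request can be sketched like this: keep a running total that the sync job maintains, and reuse it for the paging metadata. The stats object and function names here are invented for illustration, not the explorer's actual code:

```javascript
// Instead of running countDocuments() next to every find() (two passes over a
// large collection per request), cache the total and update it at sync time.
const stats = { txCount: 0 }; // hypothetical stand-in for the explorer's stats doc

function onTxsSynced(n) { // called by the sync job as blocks come in
  stats.txCount += n;
}

function pagingMeta(start, length) { // per-request: no count query at all
  return { recordsTotal: stats.txCount, start: start, length: length };
}

onTxsSynced(250);
onTxsSynced(3);
console.log(pagingMeta(0, 10).recordsTotal); // 253
```

The trade-off is a count that can drift if writes bypass the sync path, which is acceptable for display metadata but not for anything that must be exact.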
Still awaiting the PR.
Cheers man. If you set last_txs=0 in the config file, it will show the entire chain.
Hi,
I have installed and synced the explorer, and fixed many things that I will push to your repo. Please help me with this issue because I am going MAD. The page limit doesn't work, and the accesses are KILLING my server:
http://lcp.altcoinwarz.com
I have isolated the issue to the query part:
get_last_txs_ajax: function(start, length, min, cb) {
  Tx.countDocuments({'total': {$gte: min}}, function(err, count) {
    Tx.find({'total': {$gte: min}})
      .sort({blockindex: 'desc'})
      .skip(Number(start))
      .limit(Number(length))
      .exec(function(err, txs) {
        if (err) {
          return cb(err);
        } else {
          return cb(txs, count);
        }
      });
  });
},
No matter WHAT you pass to that function, it always returns the entire DATABASE. Please help me because this is very urgent for me! I am sure everything is correct; I have already tested on different OSes, with the same issue.
Thank you,
Cheers,
Simone
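One plausible explanation for the behavior above, offered as a guess rather than a confirmed diagnosis: in MongoDB, cursor.limit(0) means "no limit", so if `length` reaches the query as 0 (or as a value that Number() turns into 0 or NaN), `.limit(Number(length))` returns the whole collection. That would also be consistent with the comment elsewhere in this thread that setting last_txs=0 shows the entire chain. A defensive clamp (the name and defaults are invented here) would look like:

```javascript
// In MongoDB, cursor.limit(0) disables the limit entirely, so a 0 or
// unparseable `length` must never be passed straight through.
console.log(Number(''));        // 0
console.log(Number(undefined)); // NaN

// Hypothetical guard to apply before calling .limit():
function safeLimit(length, fallback, max) {
  fallback = fallback || 10;
  max = max || 1000;
  const n = Number(length);
  return (Number.isFinite(n) && n > 0) ? Math.min(n, max) : fallback;
}

console.log(safeLimit(0));    // 10: never forward 0 to limit()
console.log(safeLimit('25')); // 25
console.log(safeLimit(5000)); // 1000: clamped
```

Logging `start` and `length` at the top of get_last_txs_ajax (as suggested earlier in the thread) would confirm or rule this out quickly.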