performance tests #2
Here are the results from my iPhone 5 on iOS 6.0.1 (there's room for improvement):
Results from trigger.io 1.4 on the same device:
And the same tests with Cordova 2.4.0 (great performance!)
These reference projects should be part of the repo.
Here are the results after improvements a353502 and 872b622
That's on a par with Cordova :)
and the figures from f24b2bb
…is needed -> performance now better than ever :) (refs #2)
and the figures from cbecfe5
And here are the latest figures after supporting async calls in both directions, as described in #12 and #11 + #17:
The figures for b72cb39 (no reference benchmarks)
Bad (but expected) performance of the concurrent calls, since their complete-callbacks must be dispatched on a different thread (the UI thread) to call …
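For context, a minimal sketch of that constraint (hypothetical names, not Transit's actual code): on Android the WebView may only be touched from the UI thread, and at this point the only way to evaluate JS from native is WebView.loadUrl("javascript:..."), so every completed concurrent call pays a thread hop plus its own loadUrl:

```java
import android.os.Handler;
import android.os.Looper;
import android.webkit.WebView;

// Sketch only: each completed native call must hop to the UI thread before
// its JS complete-callback can run, because WebView.loadUrl() may only be
// called on the UI thread. "transit.handleCallback" is a hypothetical JS
// entry point, not Transit's real API.
class JsCallbackDispatcher {
    private final WebView webView;
    private final Handler uiHandler = new Handler(Looper.getMainLooper());

    JsCallbackDispatcher(WebView webView) {
        this.webView = webView;
    }

    void dispatchCallbackToJs(final int callbackId, final String jsonResult) {
        // The thread hop plus one loadUrl per callback is what makes
        // 1000 concurrent complete-callbacks expensive.
        uiHandler.post(new Runnable() {
            @Override
            public void run() {
                webView.loadUrl("javascript:transit.handleCallback("
                        + callbackId + "," + jsonResult + ")");
            }
        });
    }
}
```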
Great to see some numbers :) The values for sequential calls are not too bad, actually. Would be interesting to compare those with Cordova and trigger.io. Let's do that on Thursday. For the concurrent test: you haven't implemented …
In fact, I implemented it. See Transit/source/java-android/src/com/getbeamapp/transit/prompt/TransitPromptAdapter.java, lines 146 to 165 in 8c93e73, and Transit/source/java-android/res/raw/runtime.js, lines 69 to 77 in 8c93e73.
However, I could use a thread pool to increase performance. Trying that right now.
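The thread-pool idea could look roughly like this (a java.util.concurrent sketch, not the actual TransitPromptAdapter code); the pool is sized to the number of processors, which matches the default mentioned further down:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: run incoming JS->native calls on a fixed pool instead of
// one thread per call. The pool size defaults to the number of cores.
class NativeCallExecutor {
    private final ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());

    void submitCall(Runnable nativeCall) {
        pool.submit(nativeCall);
    }

    void shutdown() {
        pool.shutdown();
    }
}
```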
But this line should only be called once for 1000 concurrent calls, right? What about the other direction (bulk calls from native to JS)? Could you save roundtrips over there? In ObjC there's a callAsync on JSFunction that puts itself in a queue.
The line is only called once, indeed. There also exists a …
The figures for fa8affc with a ThreadPool (still no reference benchmarks)
Much better!
Indeed! So 2.17 s (= 4.34 - 2.17) / 1.73 s (= 3.83 - 2.1) is the time needed to do 1000 calls from JS to native?
I doubt that the diff of the measurements can be used as a measurement itself. Sequential calls and concurrent calls have quite different code paths within the native code because of the limitations the Android SDK puts upon us. Just one example: the first call from native to JS is more expensive than a reentrant call.
...which would balance out in this particular example. Anyway, I am still curious why "sequential" is that much slower than "concurrent". Is it really just the calls from JS to native?
I'd rather assume that this is 2-core multithreading vs. single-threading (my ThreadPool size defaults to numOfProcessors).
Can you verify this by decreasing the number to 1?
I was wrong indeed. PoolSize doesn't have an impact on concurrent benchmarks.
I did some tweaks and recompiled the app with Android's ProGuard (to get rid of …). The figures for fff739d, built with …:
Much better!
Impressive! Any noteworthy insights beyond "general optimization"? Those numbers appear to be similar to the optimization above, where concurrent calls are basically as fast as sequential calls. For Objective-C, "real" async calls with only 1 roundtrip for 1000 calls from JS and back resulted in a performance increase by a factor of 5.
Another note on ProGuard: I do not fully understand the implications of the ProGuard config you used. Do you think sacrificing the readability of call stacks is worth the performance gain? What happens if you disable the aforementioned log statements only, without any obfuscation?
I disabled obfuscation in 96cc021 and the results remain the same. Using ProGuard in this configuration should be similar to -O3 in the C world, with macros to disable log statements.
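For reference, a ProGuard configuration in that spirit could look like the sketch below (an assumption about the setup, not the actual config from the repo): optimization on, obfuscation off, and android.util.Log calls stripped via -assumenosideeffects.

```
# Sketch of a ProGuard setup in this spirit, not the project's actual config:
# keep optimization, skip obfuscation, and let ProGuard remove Log calls.
-dontobfuscate
-optimizationpasses 3
-assumenosideeffects class android.util.Log {
    public static int v(...);
    public static int d(...);
    public static int i(...);
    public static int w(...);
    public static int e(...);
}
```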
Perfect :) Any thoughts on the remarks on async calls?
The real async calls (1 roundtrip for 1000 calls) were introduced in ….

For more insight I did some profiling of 100 concurrent requests (we can go into this on Thursday). Sadly I can't run this on my devices, since free memory runs out after 5 seconds (but 100 requests take 16 s with profiling enabled). Might retry this with fewer requests.

Row 1 = UI thread; 2400 ms to 6800 ms = JSON parsing.

Detail view of nativeInvoke: the unmarked area = proxify(jsonData), including String -> Func etc.

So we are CPU-bound for now.
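Such a per-thread timeline can be captured with Android's built-in method tracing; a minimal sketch of how a trace like this might be taken (the trace name and the benchmark entry point are placeholders, not the project's actual code):

```java
import android.os.Debug;

// Sketch only: wrap the concurrent benchmark in Android method tracing so
// the resulting .trace file shows a per-thread timeline like the one above.
class BenchmarkProfiler {
    void profileConcurrentCalls(Runnable concurrentBenchmark) {
        Debug.startMethodTracing("transit-concurrent-100"); // arbitrary trace name
        concurrentBenchmark.run(); // e.g. fire 100 concurrent JS->native calls
        Debug.stopMethodTracing();
    }
}
```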
Not completely sure about the second diagram, but as far as I understand, the 4th row of the first diagram is the actual native implementation that does not do much more than passing back the argument it receives, right? Why does this take so long? Also, rows 1 and 2 appear to do some work during this period, while before PP on row 2 there's no activity at all. Shouldn't those threads idle while thread 4 is preparing the bulk result? If calling …

In any case: having more than 500 round trips per second is great!
No, …

But from the second diagram we get the cost of loadUrl at 13,322 ms to 13,335 ms (~10% of the time), which is cheap compared to jsonParsing + postProcessing + proxifying (~60% of the time) and jsExpressionFromCode (~30% of the time).
With fef4b98, Transit.java now does batch calls against …
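The batching idea, sketched with hypothetical names (this is not the actual Transit.java code): pending native->JS invocations are queued and flushed as one JSON array in a single evaluation, so 1000 calls cost one roundtrip instead of 1000.

```java
import java.util.ArrayList;
import java.util.List;

import android.webkit.WebView;

// Sketch only: queue pending native->JS invocations and flush them in one
// evaluation. "transit.processBatch" is a hypothetical JS entry point.
class JsInvocationBatcher {
    private final WebView webView;
    private final List<String> pendingJsonInvocations = new ArrayList<String>();

    JsInvocationBatcher(WebView webView) {
        this.webView = webView;
    }

    synchronized void enqueue(String jsonInvocation) {
        pendingJsonInvocations.add(jsonInvocation);
    }

    // Must be called on the UI thread, since it touches the WebView.
    synchronized void flush() {
        if (pendingJsonInvocations.isEmpty()) {
            return;
        }
        StringBuilder batch = new StringBuilder("[");
        for (int i = 0; i < pendingJsonInvocations.size(); i++) {
            if (i > 0) {
                batch.append(',');
            }
            batch.append(pendingJsonInvocations.get(i));
        }
        batch.append(']');
        pendingJsonInvocations.clear();
        webView.loadUrl("javascript:transit.processBatch(" + batch + ")");
    }
}
```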
A few optimization ideas from a good night's sleep can be found in 67264e6 (including a faster Jackson JSON parser, iterative/yielding parsing for Batch-Invokes, and optimizations for trivial situations like …).
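As an illustration of the iterative/yielding parsing idea (assuming Jackson 2.x; everything except the Jackson classes is a hypothetical name): stream over the batch array and hand each invocation to a handler as soon as it is parsed, instead of materializing the whole batch first.

```java
import java.io.IOException;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Sketch only: parse a batch-invoke payload element by element so each
// invocation can be dispatched as soon as it is available.
class BatchInvokeParser {
    private final ObjectMapper mapper = new ObjectMapper();

    interface InvocationHandler {
        void handle(JsonNode invocation);
    }

    void parseBatch(String json, InvocationHandler handler) throws IOException {
        JsonParser parser = mapper.getFactory().createParser(json);
        try {
            if (parser.nextToken() != JsonToken.START_ARRAY) {
                throw new IOException("expected a JSON array of invocations");
            }
            while (parser.nextToken() != JsonToken.END_ARRAY) {
                // readTree consumes exactly one array element
                handler.handle(mapper.readTree(parser));
            }
        } finally {
            parser.close();
        }
    }
}
```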
Yiiiieeha!
Here are the results from trigger.io running on the same Android device:
and the results from a3b410b
Similar to the benchmark by trigger.io, it is interesting to compare …
Here's the code of the benchmark mentioned above: trigger.io, PhoneGap, iOS.