General performance metrics
Given a Datablox application, we would like to measure the following:
Is the performance of Datablox acceptable for the application? For example, if the throughput of the bookmarks app matches the crawl rate, the framework is adding no appreciable overhead to the process. Click's main experimental result was that it could forward packets at the maximum rate the Linux kernel could process; a monolithic router could run faster, but that is irrelevant, since it would make no difference to the overall system.
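As a rough way to quantify this, here is a minimal sketch. The `run_pipeline` hook and the `CRAWL_RATE` baseline are illustrative assumptions, not part of Datablox:

```python
# Hypothetical sketch: estimate framework overhead by comparing the
# bookmarks app's end-to-end throughput against the raw crawl rate.
# run_pipeline and CRAWL_RATE are illustrative stand-ins.
import time

CRAWL_RATE = 120.0  # pages/sec the crawler alone sustains (assumed baseline)

def measure_throughput(run_pipeline, duration_sec=60):
    """Run the pipeline for a fixed window and return items/sec.
    run_pipeline(duration_sec) is assumed to return the item count."""
    start = time.monotonic()
    processed = run_pipeline(duration_sec)
    elapsed = time.monotonic() - start
    return processed / elapsed

def framework_overhead(app_throughput, baseline=CRAWL_RATE):
    """Overhead as the fraction of the baseline rate lost in the framework.
    0.0 means Datablox keeps pace with the crawler; higher is worse."""
    return max(0.0, 1.0 - app_throughput / baseline)
```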
How does it scale? Does adding machines yield linear scaling? At what point does adding more machines stop helping? What is the minimum number of machines needed for the throughput to be acceptable for the application?
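One way to answer these questions is to measure throughput at several cluster sizes and derive speedup and parallel efficiency from the smallest configuration. A small sketch; the sample numbers are purely illustrative, not measured data:

```python
# Hypothetical sketch: given throughput measured at several cluster sizes,
# compute speedup and parallel efficiency to see where adding machines
# stops helping.

def scaling_report(throughputs):
    """throughputs: {num_machines: items/sec}. Prints speedup/efficiency
    relative to the smallest measured configuration."""
    base_n = min(throughputs)
    base_t = throughputs[base_n]
    for n in sorted(throughputs):
        speedup = throughputs[n] / base_t
        efficiency = speedup / (n / base_n)  # 1.0 = perfect linear scaling
        print(f"{n} machines: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")

# Illustrative numbers only: efficiency falling off at 8 machines would
# mark the point where adding hardware no longer helps this workload.
scaling_report({1: 100.0, 2: 190.0, 4: 340.0, 8: 420.0})
```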
How reusable are the blocks? Every application will certainly contain some application-specific code, but what proportion of the total application code does it account for?
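A crude but useful proxy for this proportion is line counts: application-specific code versus reused generic blocks. A sketch, assuming a hypothetical layout with `app_blocks/` and `shared_blocks/` directories:

```python
# Hypothetical sketch: quantify block reuse by counting lines of
# application-specific code versus lines in reused generic blocks.
# The directory layout (app_blocks/, shared_blocks/) is assumed.
from pathlib import Path

def count_lines(root):
    return sum(
        len(p.read_text().splitlines())
        for p in Path(root).rglob("*.py")
    )

app_specific = count_lines("app_blocks")  # code written just for this app
reused = count_lines("shared_blocks")     # generic blocks from the library
print(f"app-specific fraction: {app_specific / (app_specific + reused):.0%}")
```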