The structure of ns2 simulation code? #3
Hi @liulalala If you are interested in pFabric or PIAS simulations, you need to check out the ns2_pfabric or pias branches. These branches contain the code for those simulations and populate the RpcTransportDesign/ns2_Simulations/scripts/ directory with simulation scripts. I'll update the README to include this information. Cheers,
Thanks a lot for your reply! I am interested in your Homa project! I looked into the OMNeT++ code, and: 1. I cannot figure out what cbf means (as in getRemainSizeCdfCbf and getCbfFromCdf). 2. What does defaultReqBytes mean? Does it indicate that the sender first sends request packets (containing the number of requested packets)? But the paper (Section 3.2) says a message consists of "an initial unscheduled portion, followed by a scheduled portion", and no request portion is mentioned. Looking forward to your reply! Thanks :)
Hi behnamm, sorry to disturb you again. I'm trying to run the Homa code. I followed the README to set up, but I wonder what the default input file is (./homatransport xxx.ini)? I would also like to understand the structure of the Homa code. Looking forward to your reply!
@liulalala ../homatransport -u Cmdenv -c WorkloadHadoop -r 6 -n ..:../../simulations:../../../inet/examples:../../../inet/src -l ../../../inet/src/INET homaTransportConfig.ini "-u Cmdenv" tells OMNeT++ not to run the simulation in the GUI. homaTransportConfig.ini at the end of the command is the configuration file we use, and "-c WorkloadHadoop" asks OMNeT++ to use the parameters specified in the WorkloadHadoop section of the config file. "-r 6" specifies that run number 6 within that section is to be simulated.
Thanks a lot for your reply. I wonder whether the receiver sends grants for each unscheduled packet and request packet, or only for the last unscheduled packet?
Grant packets are transmitted one at a time, for every single data packet that arrives. So for each unscheduled packet (including the request packet) that arrives at the receiver, a new grant packet is sent. However, grants are only sent for a message if it belongs to the high-priority set of messages that the receiver is actively granting. Please read the paper for more information. Cheers,
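To illustrate the rule above, here is a minimal, hypothetical Python sketch (not code from this repo): every arriving data packet for an actively-granted message triggers exactly one new grant; messages outside the high-priority set get nothing. The `active_limit` parameter and the dict-based message representation are assumptions for illustration.

```python
class Receiver:
    def __init__(self, active_limit):
        self.active_limit = active_limit   # size of the actively-granted set
        self.messages = []                 # inbound messages as dicts

    def actively_granted(self, msg):
        # Only the top `active_limit` messages (fewest remaining bytes) get grants.
        ranked = sorted(self.messages, key=lambda m: m["remaining"])
        return msg in ranked[:self.active_limit]

    def on_data_packet(self, msg, pkt_bytes):
        msg["remaining"] -= pkt_bytes
        if msg["remaining"] > 0 and self.actively_granted(msg):
            # One new grant per arriving data packet.
            return {"grant_for": msg["id"], "bytes": pkt_bytes}
        return None
```

With `active_limit=1`, a packet for the shortest message yields a grant, while a packet for a longer message yields none.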
Thanks a lot for your reply! And I am sorry to bother you again... I am reading the paper and code carefully, but I still cannot figure out some details.
@liulalala
Note that a grant may allow transmission of multiple data packets, but that doesn't mean we don't send grants on a per-packet basis. As I said before, grants are sent on a per-packet basis, and in the common case when grants are not delayed, we expect that a new scheduled packet is sent for every new grant packet that arrives at the sender. What you are referring to in the paper is an optimization for when two grant packets G1 and G2 are reordered in the network and the later grant G2 arrives earlier than G1 at the sender. To compensate for the reordering of the grants, on arrival of G2 we allow transmission of two scheduled packets instead of one. The offset you refer to is a way to implement this effect. That said, while we have implemented this optimization in the RAMCloud implementation, we didn't implement it in the simulations. So in the simulations, we can only transmit one scheduled packet for every new grant.
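The offset-based compensation described above can be sketched as follows. This is a hypothetical illustration (the names `Sender`, `on_grant`, and the fixed packet size are assumptions, not the RAMCloud code): each grant authorizes transmission up to a byte offset, so a reordered later grant implicitly covers the earlier, still-missing one.

```python
PKT_BYTES = 1000  # assumed fixed data-packet payload size

class Sender:
    def __init__(self):
        self.granted_offset = 0   # highest offset any grant has allowed so far
        self.sent_offset = 0      # bytes already transmitted

    def on_grant(self, grant_offset):
        # A grant authorizes everything up to its offset, not just one
        # packet, so the arrival order of grants doesn't matter.
        self.granted_offset = max(self.granted_offset, grant_offset)
        packets = []
        while self.sent_offset < self.granted_offset:
            packets.append(self.sent_offset)   # send the packet starting here
            self.sent_offset += PKT_BYTES
        return packets
```

If G2 (offset 2000) arrives before G1 (offset 1000), G2 triggers two scheduled packets and the late G1 triggers none.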
This is a feature I added for collecting statistics and computing the wasted bandwidth. It doesn't have any effect on the algorithm or Homa's mechanisms. You don't need to worry about it.
Got it! Thank you so much! Another question: how is the priority of the scheduled packets determined? As far as I can tell,
@liulalala
That should work, although this is not exactly how I have implemented it in the simulator. The simulator also sends grants based on a timer: when one packet time has passed, we check whether we can send a grant for any of the active messages, subject to conditions such as whether the message is among the top scheduled messages and whether there are fewer than RTTBytes of outstanding bytes for that message. The simulation code contains more than what we discussed in the paper, which may make it difficult to understand. I would suggest looking at the RAMCloud implementation of Homa for cleaner code.
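The timer-driven check described above might look roughly like this hypothetical sketch (the constants `RTT_BYTES`, `GRANT_BYTES`, and `TOP_K` are illustrative assumptions, not values from the simulator):

```python
RTT_BYTES = 10000    # assumed one-RTT worth of bytes
GRANT_BYTES = 1000   # assumed bytes authorized per grant
TOP_K = 4            # assumed size of the top scheduled-message set

def grant_on_timer(messages):
    """Called once per packet time; grants at most one message.

    messages: list of dicts with 'remaining' and 'outstanding' byte counts.
    Returns the granted message, or None if no message qualifies.
    """
    # Consider only the top scheduled messages (fewest remaining bytes).
    ranked = sorted(messages, key=lambda m: m["remaining"])
    for msg in ranked[:TOP_K]:
        # Grant only if fewer than RTTBytes are outstanding for this message.
        if msg["outstanding"] < RTT_BYTES:
            msg["outstanding"] += GRANT_BYTES
            return msg
    return None
```

A message that already has a full RTT of outstanding bytes is skipped, and the grant goes to the next qualifying message in SRPT order.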
It basically means that as the top-priority message in the list completes, you push the remaining messages up in the list. That means a new place opens up at the lowest priority level, so if a new message arrives, it is inserted into the list at the lowest priority level. Section 3.4 of the paper explains this.
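The list behavior described above is simple enough to sketch directly; this is a hypothetical illustration (the helper names are made up), where index 0 is the highest scheduled priority level:

```python
def on_message_complete(sched_list, finished):
    # Removing the finished message shifts everyone below it up one level.
    sched_list.remove(finished)

def on_new_message(sched_list, msg, num_levels):
    # A new arrival takes the freed slot at the lowest priority level.
    if len(sched_list) < num_levels:
        sched_list.append(msg)
```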
This relates to an optimization that may not have been explained in the paper. Basically, because of this optimization, the last RTTBytes of a scheduled message get an unscheduled priority level. That makes sense because from the perspective of a receiver doing SRPT, the last RTTBytes of a message are as important as the first RTTBytes of a message. So we assign an unscheduled priority level to the last RTTBytes of the scheduled portion. Hope this makes sense.
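A hypothetical sketch of that priority choice (the function names and the toy size-to-priority mapping are assumptions for illustration): once a scheduled message's remaining bytes drop to one RTT or less, its packets are sent at the unscheduled priority a fresh message of that size would get.

```python
RTT_BYTES = 10000  # assumed one-RTT worth of bytes

def packet_priority(remaining_bytes, sched_prio, unsched_prio_for):
    """Pick the priority for the next packet of a scheduled message.

    remaining_bytes: bytes of the message still to send
    sched_prio: the message's current scheduled priority level
    unsched_prio_for: maps a message size to an unscheduled priority level
    """
    if remaining_bytes <= RTT_BYTES:
        # Tail of the message: under SRPT it is as urgent as a fresh
        # message of this size, so use the unscheduled priority.
        return unsched_prio_for(remaining_bytes)
    return sched_prio
```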
Thank you so much! Sorry to bother you again.
Yes, that's correct.
W1 -> FacebookKeyValueMsgSizeDist.txt
As far as I understand, the 99% slowdown is computed by first sorting the flows in ascending order, then taking the 99th-percentile flow's completion time divided by its oracle completion time. But I cannot figure out what the 99% slowdown means for each particular flow size (as the x-axis shows: 2 3 5 11...).
So, imagine we run the experiment at a specific load factor (e.g. 80%) for long enough that for every single message size in the workload, we have generated thousands of instances of that size and measured the latency of each instance. Now we sort the latencies for that message size and find the 99th-percentile and minimum latency among them. Divide the 99th-percentile latency by the minimum latency and you have the 99th-percentile slowdown for that message size.
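The per-size computation described above can be sketched in a few lines of Python (a hypothetical helper, not code from the repo; `latencies_by_size` is an assumed input format):

```python
def p99_slowdown(latencies_by_size):
    """Compute the 99th-percentile slowdown per message size.

    latencies_by_size: {msg_size: [latency, latency, ...]}
    Returns {msg_size: p99_latency / min_latency}.
    """
    result = {}
    for size, lats in latencies_by_size.items():
        lats = sorted(lats)
        # Index of the 99th-percentile sample, clamped to the last element.
        p99 = lats[min(len(lats) - 1, int(0.99 * len(lats)))]
        # Minimum latency plays the role of the oracle completion time.
        result[size] = p99 / lats[0]
    return result
```

For example, 99 samples at 1.0 and one sample at 5.0 give a 99th-percentile slowdown of 5.0 for that size.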
Hi behnamm, I am curious about the structure of your ns2 simulation code; can you provide a README? Also, I cannot find the source code that handles the priorities; can you give me a path? Thanks a lot!