repoSim is an ns2-based simulator aimed at assisting the fine-tuning of mPlane repository performance. The overall goal is to use simulation as a preliminary, necessary step to investigate a broad spectrum of solutions and identify candidates worth implementing in real operational mPlane repositories.



The need for such a tool can be clarified by considering the following picture, which represents a general mPlane workflow, valid for both active and passive measurements. The picture shows a reasoner, or intelligent user, interacting with mPlane through a supervisor, triggering WP2 active/passive measurement nodes [yellow arrows], which generate a workflow that will solicit WP3 repositories.


Low level viewpoint

Specifically, as emerges from the figure above, a mixture of flows converges on the mPlane repository: i.e., flows that enter or exit the repository, or flows that are confined within the repository "data center" network. Such flows include:
  • store raw data (e.g., CSV, binary, …) [black]
  • access raw data (e.g., FTP, HTTP, …) [black]
  • export raw data (e.g., IPFIX, …) [black]
  • cook data to some extent (e.g., MapReduce, or other algorithms) [red]
  • generate results and events (i.e., outcomes of the above) [black]
  • state all the above (i.e., capabilities) [blue]

In short, WP3 large-scale data analysis involves several types of concurrent data flows that are either confined within the repository itself or cross its interface toward other parts of the mPlane infrastructure (or external networks). From an architectural viewpoint, this implies, first, that multiple tools may share the same repository and, second, that even for a single tool the control and data workflows are intermingled. The network resource is thus multiplexed by flows of different types, sizes, and loads, both within the repository infrastructure and across its boundary.

This has possible consequences not only on the timeliness of the results (e.g., results stuck behind a large transfer), but possibly also on the accuracy of the results themselves (e.g., control messages in an iterative drill-down analysis slowed down by a fat data transfer), and needs careful investigation. We therefore face the challenge of designing, implementing, and evaluating scheduling protocols for the efficient and fair allocation of networking resources to network data analysis jobs. Such scheduling should consider not only internal data processing workloads, but also accommodate data flows entering and leaving the mPlane infrastructure and external networks, to support data storage, query, analysis, and export.
An even more detailed viewpoint of the repository is shown below, where arrow thickness represents the expected heterogeneity as far as the volume of the exchanges is concerned:

High level viewpoint

We now study and optimize the repository "data center" network performance. For the sake of readability and generality, we argue that it is possible to simplify the above mPlane-centric view to obtain broadly applicable insights, which also apply to mPlane, by cutting high-frequency details that only add noise to the picture. The simplification consists in considering that there are basically two classes of flows: short "mice" flows (e.g., events, specifications, capabilities) vs. fat "elephant" transfers (e.g., results, indirect exports, map outputs, etc.).

Under this light, an important observation is that, unless proper actions are taken, competing elephant flows slow down the mice flows, which degrades overall repository/mPlane performance. We are investigating the design of scheduling protocols that mainly satisfy:

  • Sustained throughput, to avoid slowing down data cooking (e.g., elephant MapReduce data transfers in a map phase)
  • Low-delay communication for short transactions (e.g., mice control flows)
The issue appears to be larger than the scope of what can be done within MapReduce schedulers, and rather calls for a more systematic analysis at all levels, including general-purpose data center solutions that may be engineered at the application, transport, or network layers.
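The elephant/mice tension above can be made concrete with a back-of-the-envelope calculation. The following is a toy sketch (plain Python, not ns2 code), with purely hypothetical sizes and link rates chosen for illustration: it compares the completion time of a short control message that arrives behind a large transfer under first-come-first-served service versus a scheme that strictly prioritizes mice.

```python
# Toy illustration (not ns2): completion time of a short "mouse" flow that
# arrives just behind a large "elephant" transfer on a single bottleneck link.
# All sizes and rates are hypothetical values chosen for illustration only.

LINK_RATE = 1e9 / 8            # 1 Gb/s bottleneck link, in bytes per second
ELEPHANT = 500e6               # 500 MB result/export transfer
MOUSE = 10e3                   # 10 KB control message

def fcfs_mouse_delay(ahead_bytes, mouse_bytes, rate):
    """FCFS: the mouse must wait for the whole elephant to drain first."""
    return (ahead_bytes + mouse_bytes) / rate

def priority_mouse_delay(mouse_bytes, rate):
    """Strict mice priority: only the mouse's own service time remains."""
    return mouse_bytes / rate

fcfs = fcfs_mouse_delay(ELEPHANT, MOUSE, LINK_RATE)
prio = priority_mouse_delay(MOUSE, LINK_RATE)
print(f"FCFS mouse completion:     {fcfs:.3f} s")      # -> 4.000 s
print(f"Priority mouse completion: {prio*1e3:.3f} ms")  # -> 0.080 ms
```

Even in this crude model the mouse's completion time differs by roughly four orders of magnitude, which is why control messages in an iterative drill-down can effectively stall behind a single fat transfer.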
The ns2 simulator allows comparing effective yet practical solutions that can be implemented in the mPlane repository with state-of-the-art data center solutions that would require a much more involved deployment.
Tuning for the mPlane repository simply involves defining a workload size distribution and a flow arrival process gathered from operational mPlane repositories, which will be available from the testplant. In the meanwhile, realistic distributions taken from Hadoop clusters are used for a more general fit.
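To illustrate what such a workload definition amounts to, here is a minimal sketch (plain Python, not the repoSim configuration itself) that draws Poisson flow arrivals with heavy-tailed Pareto sizes; the parameter values are hypothetical placeholders, standing in for distributions fitted on Hadoop traces or, eventually, on testplant measurements.

```python
import random

random.seed(42)

# Hypothetical parameters: in practice these would be fitted on Hadoop-cluster
# traces or on operational mPlane repository data from the testplant.
MEAN_INTERARRIVAL = 0.5   # seconds; Poisson arrivals -> exponential gaps
PARETO_ALPHA = 1.3        # heavy tail: many small mice, a few huge elephants
PARETO_XMIN = 10e3        # 10 KB minimum flow size

def synthetic_workload(n_flows):
    """Yield (arrival_time, size_bytes) pairs for an ns2-like flow trace."""
    t = 0.0
    for _ in range(n_flows):
        t += random.expovariate(1.0 / MEAN_INTERARRIVAL)
        size = random.paretovariate(PARETO_ALPHA) * PARETO_XMIN
        yield t, size

trace = list(synthetic_workload(1000))
sizes = [s for _, s in trace]
print(f"{len(trace)} flows, mean size {sum(sizes)/len(sizes)/1e3:.1f} KB, "
      f"max size {max(sizes)/1e6:.1f} MB")
```

Such a trace can then be replayed in the simulator to exercise a candidate scheduling policy under a controlled mice/elephant mix.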
Quick start:

The installation instructions are detailed in the D33 tarball.
