Guidelines for Active Measurements for Multimedia Content Delivery


This page details the requirements that are specific to the Active Measurements for Multimedia Content Delivery use case, in addition to those expressed for the Reference demonstration environment (link).

Hardware list

  • Probe: dedicated machine to run the OTT-probe and GLIMPSE probe. It can also host the EZRepo and the RC1 reasoner needed for this demo.
  • CDN server: (preferably more than one) to deliver multimedia content
  • Miniprobe: (hardware-based OTT probe, optional) to actively monitor multimedia delivery. The miniprobes used in the use case demonstration are NETvisor's proprietary hardware appliances called MiniProbes. We suggest using model M-180 or above (in the official UC demo we use the M-180 and M-195 models).
  • Impairment device: to emulate streaming errors such as jitter, packet delay and noise.


Software list




Software dependencies

  • Probes, EZRepo, RC1 reasoner: Linux OS and Python 3.x


Software installation


Software configuration

  • The demo-environment-specific configuration parameters (certificates, client listening links, etc.) should be set in the corresponding configuration files: in the "conf/" directory for the core components (such as the Supervisor), and in the "conf/mmcd/" directory for the components used in this demo (the probes, the EZRepo and the RC1 reasoner).
  • Probes must be configured to connect to the Supervisor and also to send indirect measurement data to the EZRepo instance.
  • Both content servers should host the same 4 VoD titles (i.e. the servers are alternative sources for the same content).
  • The topology of the network needs to be uploaded to the Reasoner (as a JSON file).
  • The initial set of routine measurements needs to be configured in the Supervisor (as a JSON file).
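The exact schema of these JSON files is defined by the Reasoner and Supervisor implementations; as an illustration only, a topology file could be prepared and sanity-checked like this (every node name and field below is invented for the example):

```shell
# Write a HYPOTHETICAL topology sketch -- the real schema is defined by
# the RC1 reasoner; node names and fields here are illustrative only.
cat > topology.json <<'EOF'
{
  "nodes": [
    {"id": "cdn-server-1", "type": "cdn"},
    {"id": "cdn-server-2", "type": "cdn"},
    {"id": "probe-1", "type": "probe"}
  ],
  "links": [
    {"from": "probe-1", "to": "cdn-server-1"},
    {"from": "probe-1", "to": "cdn-server-2"}
  ]
}
EOF

# Sanity-check that the file is valid JSON before uploading it
python3 -m json.tool topology.json > /dev/null && echo "topology.json is valid JSON"
```

Validating the file locally before the upload avoids debugging schema errors through the Reasoner.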

Demonstration environment

  • CDN servers for content hosting and streaming
  • Impairment devices to emulate errors (packet delays, jitter, etc) in video streaming
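If no dedicated impairment hardware is at hand, similar impairments can be emulated in software on a Linux machine in the media path using the standard tc/netem queueing discipline (the interface name "eth0" and the delay/jitter/loss values below are example parameters, not values from the official demo):

```shell
# Add 100 ms delay with 20 ms jitter and 1% packet loss on eth0
# (requires root; "eth0" and all values are example parameters)
tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1%

# Inspect the active qdisc
tc qdisc show dev eth0

# Remove the impairment when the scenario is over
tc qdisc del dev eth0 root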

Step-by-step walkthrough

Warmup: starting the monitoring infrastructure

1. Install and start the components: the Supervisor (with GUI), the EZRepo and the RC1 Reasoner, possibly on a single machine. In any case, the Supervisor and the Repository need to have public IP addresses. Launch the commands from separate windows, from the PYTHONPATH directory (~/protocol-ri):

$ mplane/svgui --config conf/svgui.conf

$ scripts/mpcom --config conf/mmcd/ezrepo.conf

$ scripts/mpcom --config conf/mmcd/rc1.conf

In the Supervisor's terminal window we expect to see the intro text and the |mplane| prompt. By issuing the "listcap" command we can check whether the Repository and the Reasoner have been registered and connected to the Supervisor. The GUI should be accessible at the <supervisor>:<gui_port> address (the default <gui_port> is 9892).

2. Deploy mPlane OTT probes, GLIMPSE probes and Pinger probes to multiple subscriber locations. The probes are implemented in Python, and installation packages will be created for Linux, Mac and Windows. The following command starts a unified probe with GLIMPSE, the OTT-probe and Pinger installed.

$ scripts/mpcom --config conf/mmcd/common_probe.conf

We can check whether the probes are up and running by issuing the "listcap" command at the prompt.

3. Probes are also available as hardware devices, deployed on the Miniprobe platform (TODO reference). 

Trigger & observe: "Houston, we've had a problem here"

  • Error scenario #1: remove a piece of content from both servers (to emulate an "upstream/ingress error").

$ mmcd/

The Reasoner should correctly identify "upstream error".
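The removal itself is plain file manipulation on the content servers. As a local stand-in for the two servers (the directory layout and the title name are invented for illustration), the effect can be sketched as:

```shell
# Emulate the "upstream/ingress error": the same title disappears
# from BOTH content servers, so no alternative source remains.
# (Paths and the title name are example values.)
mkdir -p server1/vod server2/vod
touch server1/vod/title1.mp4 server2/vod/title1.mp4

# Remove the title from both servers
rm server1/vod/title1.mp4 server2/vod/title1.mp4

# Verify: the content is gone from both sources
ls server1/vod server2/vod
```

Removing the title from only one server would instead look like a single-server failure, since the other server still serves the content.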

  • Error scenario #2: shut down one of the content servers (keeping the machine running).

$ mmcd/

The Reasoner should correctly identify "CDN server X error".
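The "service down, machine up" state can be reproduced with any stand-in content service. Using a throwaway HTTP server in place of a real CDN node (port 8080 is an arbitrary example):

```shell
# Start a stand-in content server (port 8080 is an arbitrary example)
python3 -m http.server 8080 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# The "server" answers while the service is running
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/

# Shut down the service only -- the host itself keeps running
kill "$SERVER_PID"
wait "$SERVER_PID" 2>/dev/null

# Further requests now fail, which is what the probes should observe
curl -s -o /dev/null http://localhost:8080/ || echo "server down"
```

Because the host still responds to pings while the content service does not, this scenario is distinguishable from a full machine outage.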

  • Error scenario #3: Configure a bandwidth limitation of about 500 kbps on one of the customer access lines.

$ mmcd/

The Reasoner should correctly identify "Inadequate CPE bandwidth for Customer Y".
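On a Linux CPE or access router, such a bandwidth cap can be emulated with the standard tc token-bucket filter (the interface name "eth0" and the burst/latency values are example parameters, not values from the official demo):

```shell
# Cap egress bandwidth at roughly 500 kbit/s on eth0
# (requires root; "eth0", burst and latency values are examples)
tc qdisc add dev eth0 root tbf rate 500kbit burst 32kbit latency 400ms

# Remove the cap when the scenario is over
tc qdisc del dev eth0 root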