
Guidelines for Quality of Experience for web browsing

Requirements

This page details UC-specific requirements in addition to those expressed for the Reference demonstration environment.

Hardware list

  • A Linux PC (for running the probe and the reasoner)
  • A set of servers (at least 3) for running the repository.

Software list

Components

Repositories

  • OpenStack - Sahara cluster, with the configuration steps provided here 

Reasoner

Software dependencies

In order to run the use case, the following software is needed.

  • Software:
    • Python (>=3.3.x)
    • Java Runtime Environment (>=1.7), with JAVA_HOME set
    • Apache Flume
  • Linux packages
    • for the probe:
      • dh-autoreconf 
      • python-numpy
      • sqlite3
    • for the repository:
      • python-psycopg2

Root access is needed to compile and run Tstat at the probe side.
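
A minimal installation sketch for the Linux packages listed above, assuming Debian/Ubuntu-style package management (package names may differ on other distributions):

# On the probe machine:
sudo apt-get install dh-autoreconf python-numpy sqlite3
# On the repository machines:
sudo apt-get install python-psycopg2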

Software installation

  • Probe:
    • Follow the instructions provided on the probe home page
    • For Ubuntu Linux an install.sh script is provided. On other Linux systems, follow the instructions in the "usage of standalone probe" subsection of the probe home page
  • Repository:
    • Pre-requisites: an OpenStack Sahara cluster
    • Navigate to the OpenStack web interface
    • Go to: Compute/Access & Security: Create Key Pair
    • Go to: Data Processing/Plugins: verify that the Apache Spark plugin is installed; if not, install it
    • Go to: Compute/Images:
      • Create Image using this image
      • Check the "Public" checkbox
    • Go to: Data Processing/Image Registry
      • User name: ubuntu
      • Plugin Spark 1.5 -> add plugin tags
    • Go to: Data Processing/Node Group Template
      • Create template master (master + namenode)
      • Create template worker (slave + datanode)
    • Go to: Data Processing/Cluster Template
      • Create cluster
      • Assign 1 master + 3 slaves
      • Launch the cluster (using the key pair generated earlier)
    • On each cluster machine:
      • sudo apt-get install libpq-dev
      • download the DB for retrieving data from HDFS from here and unpack it (see the sketch after this list)
    • All the software is now ready to be configured.
  • Reasoner:
    • The web QoE reasoner is based on the mpcli script, so it runs like any other component in the mPlane framework.
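
A sketch of the per-cluster-machine preparation mentioned above, assuming a Debian/Ubuntu image; the archive location below is only a placeholder for the DiNoDB package linked from this page:

# On each cluster machine:
sudo apt-get install libpq-dev
wget <URL-of-the-DiNoDB-package>    # placeholder: use the link provided above
tar xzf <downloaded-archive>        # adjust to the actual archive name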

Software configuration

 

Components

  • Change directory to [PROTOCOL_RI_DIR]/mplane/components/phantomprobe
  • Edit conf/firelog.conf: add your username and adjust the paths of the local Tstat, Flume and PhantomJS binaries
  • Edit conf/flume.conf to set the sink IP/port (see the example after this list)
  • Edit conf/firelog-tstat.conf, specifying the IP/subnet to sniff from (e.g., 192.168.13.0 / 255.255.255.0)
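
For reference, assuming the probe uses a standard Flume Avro sink (the shipped conf/flume.conf may use different agent/sink names or another sink type), the relevant fragment would look like:

# Example values only: keep the agent and sink names already used in conf/flume.conf
agent.sinks.sink1.type = avro
agent.sinks.sink1.hostname = 192.168.13.100
agent.sinks.sink1.port = 4141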

Repositories

  • On all nodes edit /etc/hadoop/conf/hdfs-site.xml

<property>
<name>dfs.datanode.data.dir.perm</name>
<value>755</value>
</property>

  • sudo service hadoop-hdfs-datanode (hadoop-hdfs-namenode) restart
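
Spelled out, given that the datanodes run on the worker nodes and the namenode on the master:

# On each worker (datanode):
sudo service hadoop-hdfs-datanode restart
# On the master (namenode):
sudo service hadoop-hdfs-namenode restart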

  • On the master node:
    • Edit dinodb/config/stado.config
      xdb.nodecount (number of worker nodes)
      xdb.node.k.dbhost (k being the sequence number)
      xdb.node.k.dbport
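
For example, for the 1-master/3-worker cluster created above, the fragment would look as follows (hostnames and ports are placeholders for the actual worker nodes):

xdb.nodecount=3
xdb.node.1.dbhost=worker-1
xdb.node.1.dbport=5432
xdb.node.2.dbhost=worker-2
xdb.node.2.dbport=5432
xdb.node.3.dbhost=worker-3
xdb.node.3.dbport=5432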

  • On all nodes:
    • Edit metastore.conf (an example fragment is sketched after this list)
      • metastore.hdfs.namenode -> the HDFS namenode (ipaddr:port; the default port is 50070, check with lsof -i)
      • metastore.hdfs.datanode -> the HDFS datanodes (comma-separated)
      • metastore.hdfs.dir -> the path of datanodes' data directory (e.g., /dfs/dn/current, which MUST have read permission)
      • metastore.datanode.port: 8888
      • postgresraw.path -> the path of DiNoDB node
      • postgresraw.num: 1
    • Add $DiNoDBnode/bin to PATH
    • cd $dinodbnode; bin/pg_ctl start -D datadir1
    • on all worker nodes:  cd $metastore; nohup python dinodbnode.py &
    • on master node: cd $stado/bin; ./gs-server.sh
    • on master node: Use gs-createdb.sh or createtable.sh to create the schema
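
As an illustration, a metastore.conf for the deployment above could contain values like the following; all addresses and paths are placeholders, and the exact key/value syntax should follow the shipped metastore.conf:

metastore.hdfs.namenode = 10.0.0.1:50070
metastore.hdfs.datanode = 10.0.0.2,10.0.0.3,10.0.0.4
metastore.hdfs.dir = /dfs/dn/current
metastore.datanode.port = 8888
postgresraw.path = /home/ubuntu/dinodbnode
postgresraw.num = 1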

Reasoner

  • Make sure that the parameters in webqoe/extract.py point to the master node of the repository

Demonstration environment

  • Browsing session with no impairments
  • Browsing session with impairments

Step-by-step walkthrough

Warmup - Setting up the QoE Use case: first browsing session

  • Download and install the probe
  • Register the probe to the supervisor
  • Execute:

runcap firelog-diagnose

when: now + 1s

destination.url = www-selected-url

  • when done:

show-meas firelog-diagnose-0

  • No error should be reported.

Trigger: 

  • In order to raise errors, impairments must be introduced on the path between the probe and the web server. The easiest way is to use a tool such as netem on a proxy machine (e.g., the gateway), as in the sketch below
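
For instance, delay and loss can be added with netem on the gateway's outgoing interface (the interface name and the values below are only examples):

# Add 200 ms of delay and 5% packet loss on eth0
sudo tc qdisc add dev eth0 root netem delay 200ms loss 5%
# Remove the impairment when done
sudo tc qdisc del dev eth0 root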

Observe:

  • Re-running the same measurements in the presence of impairments will highlight the root cause identified by the diagnosis algorithm on the probe side
  • Executing the reasoner:

export PYTHONPATH=.

python3 mplane/components/qoe_reasoner.py --config conf/firelog-reasoner.conf --url www-selected-url

will cause the diagnosis algorithm to be run on the data available in the repository, so as to provide further details on the behaviour of the selected web site.