
UNDER CONSTRUCTION

A light introduction to trema

This tutorial is for those who have just downloaded Trema and are looking for a concise summary of the framework before crafting their own applications. It starts with an overview of Trema's directory structure and, along the way, elaborates on some topics that are indispensable to understanding Trema. If some of the terminology makes you feel uncomfortable while reading, it is worth becoming acquainted with it first and then revisiting this tutorial.

If you download trema and perform a directory listing you will find the following:

trema's directory listing

We are going to explain nearly every directory, with particular emphasis on the ruby, spec and src directories and their subdirectories.

ruby/trema

Developing an OpenFlow controller application is all about messages: you either send a request or respond to a reply or to an unsolicited message. In this directory you will find a plethora of C and Ruby source files that contain the implementation details of all OpenFlow messages. Most of the C source files are Ruby C extensions: they include the Ruby interface header ruby.h and contain Ruby calls prefixed with rb_. Each message has its own distinct presentation elements, and most messages are not related to each other in any way or share common features that would allow us to group them together. Therefore most messages are represented by a separate class that does not inherit from any superclass; an exception is the family of statistics messages, which do share some common features. All message classes are defined under the Trema module. The following description applies not only to messages but to any Ruby class object: the function prefixed with Init_<class_name> first defines a class object and optionally an allocation function, followed by the class's constructor and any public, protected or private methods. The allocation function dynamically allocates memory for the new object or constructs a default representation of it. The initialize method, which immediately follows allocation, assigns instance variables and performs any other task a class constructor would normally perform. You will also find that all initialization calls are initiated from trema.c.

Once a message object has been constructed, apart from setting or reading its contents it is not of much use unless it is integrated with the controller's services, which offer a simple and powerful message routing mechanism that is somewhat unique to Trema. This is the topic that follows, and we explain it by analyzing the file controller.c. It defines a rather large Controller class with message routing helper methods and code to register and trap every OpenFlow message. You don't instantiate the Controller class directly; you inherit from it to receive all its functionality. The Controller kicks into life implicitly through the trema run command, which starts a run loop that delivers input events to your application. The run loop gives execution priority to timer events, followed by all other socket events, and it dynamically calculates the loop execution time at each iteration to avoid wasting CPU resources while responding to events as quickly as possible.
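To make the inheritance model concrete, here is a minimal Ruby controller sketch. It is not one of the bundled examples; it only assumes the Controller class, the handler names mentioned in this tutorial (switch_ready, packet_in) and the info logging helper:

class TrafficLogger < Controller
  # Called once a vswitch has completed its OpenFlow handshake.
  def switch_ready datapath_id
    info "Switch %#x is ready" % datapath_id
  end

  # Called for every packet_in event routed to this controller.
  def packet_in datapath_id, message
    info "packet_in received from switch %#x" % datapath_id
  end
end

Saved as traffic_logger.rb, it would be started with ./trema run traffic_logger.rb just like the bundled examples.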
Under this directory there is a dsl (domain-specific language) directory that includes a parser responsible for reading Trema's configuration files. A dsl configuration file defines Trema's emulated network using various keywords, which are parsed and mapped to class objects that represent real executable entities: Trema applications, virtual hosts and virtual switches. A configuration file is not necessary to start a Trema application, but if used it extends the network emulation capabilities beyond the default scope. The parsed configuration is marshaled and dumped into a hidden file (.context) stored under the tmp directory; its contents come in handy for command programs that require the current execution image in order to complete successfully.
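As a rough illustration, a tiny configuration file might look like the sketch below. The vswitch, vhost and link keywords follow the bundled example configurations, but the exact attribute spellings may differ between Trema versions, so treat this as a sketch rather than a reference:

# sample.conf -- a hypothetical emulated network: one vswitch, two hosts
vswitch("lsw") { datapath_id "0xabc" }

vhost("host1") { ip "192.168.0.1" }
vhost("host2") { ip "192.168.0.2" }

link "lsw", "host1"
link "lsw", "host2"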

The trema command

The trema command needs some further explanation. It is a Ruby script that resides in Trema's top directory. If you invoke trema without any options it enters the Ruby interpreter irb; for now, just type exit at the irb prompt. To print all the available sub-commands, run its help sub-command. For each sub-command there is a corresponding Ruby implementation file in the command sub-directory under ruby/trema, should you ever need to refer to it.

./trema help

You will see a list of sub-commands, one of which is run. Invoking ./trema run --help displays the available options of the run sub-command. The trema run command can run either a Ruby or a C program, and the -c option supplies a configuration file for it to parse. The following two examples show how to run the Ruby and C versions of the learning switch program.

./trema run ./src/examples/learning_switch/learning_switch.rb -c ./src/examples/learning_switch/learning_switch.conf
./trema run ./objects/examples/learning_switch -c ./src/examples/learning_switch/learning_switch.conf

You have probably noticed that the command syntax is identical for both programs. Behind the scenes the trema command ensures that the configuration file is read and parsed correctly. Regardless of whether the executable is C or Ruby, trema starts up and initializes all configured Trema services for you. The only prerequisite is that a Ruby program must declare a subclass of the Controller class and override its methods to implement the required behavior.

killall

As the help message indicates, it kills all running Trema processes. An easy way to find their pids is to display the contents of each file under the tmp/pid directory; the directory should be empty if no Trema processes are running. Of course you shouldn't manually delete files from the tmp/pid directory, otherwise the killall command will not function.

send_packets

Use the send_packets command to send a packet between any two configured hosts. By configured hosts I mean any host defined in the configuration file with the vhost directive. The simplest way to send a single packet is to accept all the default options and invoke the command as follows:

./trema send_packets -s host1 -d host2

This sends a UDP packet from host1 to host2 with a data payload of 22 bytes. send_packets can send a number of consecutive packets, either for a specified duration or at a specified rate, and can use any of the increment options to generate unique packets. Increase the packet payload with the length option, in combination with the inc_payload option, to send larger packets. If you wish to examine this command in more detail, you will find its Ruby source file (send_packets.rb) under the ruby/trema/command directory.
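For example, a longer run might look roughly like the line below. The option spellings are inferred from the description above and are not verified here, so check send_packets.rb for the authoritative names and defaults:

./trema send_packets -s host1 -d host2 --length 1000 --duration 10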

dump_flows

The dump_flows command displays the flow information of a given virtual switch. Flow information may be transient (flows can expire), so there is no guarantee that dump_flows will always display the same results. The output is identical to Open vSwitch's ovs-ofctl dump-flows command. The trema dump_flows output is shown below:

./trema dump_flows repeater_hub
NXST_FLOW reply (xid=0x4):
 cookie=0x1, duration=1.268s, table=0, n_packets=0, n_bytes=0, priority=65535,udp,in_port=1,vlan_tci=0x0000,dl_src=00:00:00:01:00:02,dl_dst=00:00:00:01:00:01,nw_src=192.168.0.2,nw_dst=192.168.0.1,nw_tos=0,tp_src=1,tp_dst=1 actions=output:2

show_stats

The show_stats command displays receive and transmit packet and byte counts for a given virtual host. You can use this command to verify that traffic flows along the path you set up. The output is a comma-separated list of fields and values; the sub-command groups the n_pkts and n_octets results by each unique combination of field values (ip_dst, tp_dst, ip_src, tp_src). The following command shows the receive statistics of host1:

./trema show_stats --rx host1
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.1,1,192.168.0.2,1,1,50

You obtain the transmit statistics by substituting the --rx option with --tx as follows:

./trema show_stats --tx host1
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.2,1,192.168.0.1,1,1,50

You can retrieve both the receive and transmit statistics by omitting the transmit/receive option as shown below:

./trema show_stats host1
Sent packets:
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.2,1,192.168.0.1,1,1,50
Received packets:
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.1,1,192.168.0.2,1,1,50

reset_stats

The reset_stats command resets either the transmit or the receive counters, for example before starting an experiment. Resetting the statistics doesn't mean that the counter values are set to zero; rather, all previous statistics records are deleted.

ruby

Invoke this command to display Trema's documentation top page in your browser.

version

Displays Trema's current running version.

src/lib

Using Trema it is possible to create a controller application with a single line of code. Of course such a controller won't do much, or at least nothing useful. This directory provides access to various lower-level services of the OS and an OpenFlow programmatic interface that enables you to create robust, full-featured OpenFlow controller applications. You start with the bare essentials and extend as you go along. This is feasible because Trema provides an event-driven infrastructure. A common design pattern in event-driven models is to let users register their own callbacks, which are called when a triggering event occurs. Trema basically follows the same pattern but differs slightly: it registers all anticipated OpenFlow callbacks during start-up and then uses introspection to either dispatch or ignore events. This is not entirely free, but the simplification outweighs the performance overhead. Trema handles events in a timed loop that is adjusted dynamically to maximize event delivery without consuming too much CPU time.
src/lib is the directory where most of the Trema services are found. Two important sets of files in this directory are openflow_message.[ch] and openflow_application_interface.[ch]. The openflow_message.c file contains a number of functions prefixed with create_xxx that, as the name suggests, create OpenFlow messages, for example create_hello(...), create_echo_request(...) and so on. openflow_message.h defines the function prototypes for these calls; consult it to find out what parameters you need to pass. All of the create_xxx functions return a buffer object. There are functions to allocate a buffer of a certain length and to append or remove data at either the front or the end, effectively creating a linked list of buffer objects. There are also calls to duplicate an existing buffer and to dump its contents. The library keeps track of your data in buffers and manages all the memory allocation for you; once you have finished with a buffer object it is your responsibility to call the free buffer function to release the previously allocated memory.

The openflow_application_interface.c file defines all the OpenFlow event handlers and callbacks. It is your first ticket into the OpenFlow programming world. There are calls to initialize and finalize the OpenFlow application interface, although you do not need to call them explicitly. The initialization call ensures that a handler is prepared to receive any OpenFlow message and does not miss any state-change events from vswitches. The file contains a number of set_xxx_handler functions that a controller calls to register a callback to be invoked when an event is triggered; the common practice is to name the callback handle_xxx. openflow_application_interface.h defines the function prototypes for the handlers and the parameters passed when the callbacks are invoked.
To receive or send data on any IP socket, a process calls set_fd_handler(...) to register reader and/or writer callback functions on a socket file descriptor, together with any user data you want the library to pass back to your callback. This frees the calling process to perform other tasks; it is interrupted only when data on the socket becomes available. A process can toggle the socket's read/write state using the set_readable(...) and set_writable(...) calls. Once a callback is invoked it is your responsibility to read or write the data to or from the socket; the library doesn't currently provide any functions to assist you. You will find these calls and many more in the event_handler.c file. If you no longer wish to receive socket events, you invoke the delete_fd_handler(...) call.

A common IPC method in Trema is UNIX domain sockets. All the nitty-gritty details of creating the sockets and receiving messages from them are handled by the messenger library. Receiving a message is quite simple: a client process registers a callback with add_message_received_callback( service_name, service_recv ), which is invoked when a message is received on the socket, similar to the model discussed in the previous paragraph. The service_name must be a system-wide unique name, and service_recv is the listener callback function. Upon reception of an application message the messenger transfers the message into an output queue, adding a message header (shown later in tremashark's screenshot window as "Message header"). This header is internal to the messenger and is removed before the registered user callback is called.
If you no longer wish to receive messages, you de-register the callback by calling delete_message_received_callback( service_name, service_recv ). This also frees the memory occupied by previously created message queues, so it is a necessary step before program exit. Trema applications that exchange OpenFlow messages indirectly call the send_openflow_message function, whose signature takes a target (datapath_id) and an untyped buffer object that supplies the application-specific data. The buffer object encapsulates the following attributes:

  • data - an opaque pointer to store any message. It should be large enough to accommodate the copied value.
  • length - the data length.
  • user_data - normally not set, but it serves as backup storage when a buffer is cloned.
  • user_data_free_function - a pointer to a customized memory de-allocation function if required.

Currently a buffer message can be dispatched to only a single target.
The library provides calls to create two types of timers. In both cases you specify a callback function, along with any user data, to be called when the timer elapses. The two functions differ only in their first parameter, the timer interval, which specifies the timer resolution. For periodic timers the minimum interval that can be set is 1 second, while one-shot timers offer resolution down to nanoseconds, although that may not be practical from a performance point of view. The calls to add timers are add_timer_event_callback(...) and add_periodic_event_callback(...), and a single call, delete_timer_event(...), invalidates either kind.
There are also other utility functions you might find useful, such as doubly linked lists, packet parsing and buffer-related functions.

src/packetin_filter

Going up the directory tree there is a packetin_filter directory that contains a single file, packetin_filter.c. This program provides an interface for adding, deleting and debugging packet-in filters programmatically. It interprets and executes a simple rule: filter an incoming packet according to its type and output the result to a destination service as a packet_in event. It can filter LLDP or any IPv4 packet. packetin_filter can currently distribute packet_in events to only a single service for each filter match. In the default case a packet-in filter matches all fields; an LLDP filter matches an EtherType field of 0x88cc (IEEE 802.1AB LLDP). The filter directive in the configuration file statically defines the type of filter and the service consumer. It accepts two hash-like options conforming to the pattern ":filter type" => "service name", where the filter type can be lldp or packetin, as sketched below.
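For illustration only, a configuration file might then contain a filter line like the following. The option spellings follow the pattern just described and have not been verified against the bundled configuration files, and topology and dumper simply stand in for whatever services should consume the matching packet-ins:

filter :lldp => "topology", :packetin => "dumper"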

src/switch_manager

The switch manager is basically a TCP server program listening on the OpenFlow port (TCP 6633) for connections from any OpenFlow client device. Once a connection is accepted it delegates responsibility for the session to a forked switch daemon process and resumes listening immediately. Under the Trema framework only virtual switches would attempt to establish a connection with the switch manager. The switch manager also receives management request messages through its management socket, which are used to dynamically display and confirm internal tables vital to its operation. A switch daemon takes on the following responsibilities:

  • It reports vswitch connection status to higher layers as soon as a socket connection with a vswitch is established.
  • It exchanges the initial hello and features request messages with the vswitch and reports the result, which is propagated to higher layers as a switch_ready event.
  • It processes and validates all OpenFlow messages according to an internal state machine to either reject or accept the message.
  • It associates OpenFlow request messages with the corresponding replies by maintaining and matching message transaction ids.
  • It generates a separate cookie value for each transmitted flow_mod command and maps the application's cookie value to this internal cookie value. The mapping also works in the opposite direction, translating a flow_removed cookie value back to the application's cookie value. Switch daemons handle the cookie details using a hash table; cookie entries are subject to aging and are deleted to free space for new entries.

vendor

Under this directory you will find several programs that have either been developed by us or imported from external open-source projects. Some of the external programs have been modified to suit our particular needs; for example, oflops has been modified to remove an unnecessary SNMP dependency. In a freshly downloaded Trema repository the vendor directory contains archive files for some of these programs. Running the build process unpacks the archives and creates the corresponding directories underneath.

src/tremashark

Another important directory, from a debugging point of view, is tremashark. It is a Wireshark plugin that collects Trema's inter-process messages and other OpenFlow events for display. For instructions on how to configure and install tremashark, refer to the README file under its directory. Let's go through an example of displaying OpenFlow events with tremashark. To enable it, insert the following line at the beginning of a configuration file:

use_tremashark

To tell tremashark which processes you want to debug, use the following command from the top Trema directory:

sudo kill -USR2 `cat tmp/pid/<process-name>.pid`

Now when you run the program using trema run, Wireshark should be invoked automatically. You don't need to configure anything in Wireshark apart from executing one or more of the above commands to select the messages to debug. The following screenshot shows captured output in Wireshark.

tremashark debug window

From the above screenshot you can observe that the highlighted message originates from the LearningSwitch application and is destined for switch.0xabc, which is the switch daemon process. Wireshark now displays Trema protocol messages. To send an OpenFlow message an application calls send_message(), which takes two arguments: the datapath_id (the vswitch identifier) and the enclosed message object encapsulated as an opaque buffer. As the message traverses processes on its way to the destination, several service addressing headers are added. To familiarize yourself with Trema's IPC messages you can drill into the Wireshark capture screen to display more detailed message information.
There is also a Wireshark plugin for OpenFlow messages provided by the OpenFlow community; it can optionally be compiled from Trema's top directory by issuing the following command:

./build.rb openflow_wireshark_plugin

The above command builds the Wireshark plugin packet-openflow.so and copies it into the ~/.wireshark/plugins directory, ready to be loaded the next time you start Wireshark.

src/examples

Under this directory there are many examples that demonstrate the principles we have covered so far. The examples are designed for experimentation and are not full-fledged applications, although they are a good resource to keep handy. Most of the examples are written in both C and Ruby, and there is a README file under each directory with instructions on how to execute them.

cbench

A simple controller that outputs a flow_mod whenever it receives a packet_in, so that benchmark measurements can be taken by a controller benchmarking tool (cbench). The tool can benchmark in two modes, latency (the default) and throughput. In latency mode it sends a packet_in and waits for the matching flow_mod to arrive before sending the next packet_in. In throughput mode it sends a burst of packet_ins and then counts the number of flow_mods received. You can set an arbitrary delay in milliseconds before the test starts to give the controller a chance to initialize, and you can also specify the number of loops (iterations) per test. Cbench can emulate a number of virtual switches, the default being 16. To start benchmarking, first start the controller (C or Ruby) program:

./trema run ./objects/examples/cbench_switch/cbench_switch  
or
./trema run src/examples/cbench_switch/cbench-switch.rb  

Then on another terminal window run the benchmarking tool.

./objects/oflops/bin/cbench --switches 1 --loops 10 --delay 1000

A simpler and automated way to benchmark the C controller program against the two running modes (latency and throughput) is to use the following:

./build.rb cbench

Benchmark results are shown on the screen as soon as they become available for each test, followed by a summary of all tests indicating the minimum, maximum, average and standard deviation of the responses per second. The output of both tests executed in my environment is shown below:

./build.rb cbench
./trema run ./objects/examples/cbench_switch/cbench_switch -d
/home/nick/test_develop/objects/oflops/bin/cbench --switches 1 --loops 10 --delay 1000
cbench: controller benchmarking tool
   running in mode 'latency'
   connecting to controller at localhost:6633 
   faking 1 switches :: 10 tests each; 1000 ms per test
   with 100000 unique source MACs per switch
   starting test with 1000 ms delay after features_reply
   ignoring first 1 "warmup" and last 0 "cooldown" loops
   debugging info is off
1   switches: fmods/sec:  4325   total = 4.324996 per ms 
1   switches: fmods/sec:  4169   total = 4.168996 per ms 
1   switches: fmods/sec:  5060   total = 5.059995 per ms 
1   switches: fmods/sec:  4319   total = 4.318901 per ms 
1   switches: fmods/sec:  4755   total = 4.754995 per ms 
1   switches: fmods/sec:  5505   total = 5.504075 per ms 
1   switches: fmods/sec:  4184   total = 4.168792 per ms 
1   switches: fmods/sec:  4067   total = 4.066963 per ms 
1   switches: fmods/sec:  5446   total = 5.425216 per ms 
1   switches: fmods/sec:  4092   total = 4.076184 per ms 
RESULT: 1 switches 9 tests min/max/avg/stdev = 4066.96/5504.08/4616.01/551.85 responses/s
./trema killall
./trema run ./objects/examples/cbench_switch/cbench_switch -d
/home/nick/test_develop/objects/oflops/bin/cbench --switches 1 --loops 10 --delay 1000 --throughput
cbench: controller benchmarking tool
   running in mode 'throughput'
   connecting to controller at localhost:6633 
   faking 1 switches :: 10 tests each; 1000 ms per test
   with 100000 unique source MACs per switch
   starting test with 1000 ms delay after features_reply
   ignoring first 1 "warmup" and last 0 "cooldown" loops
   debugging info is off
1   switches: fmods/sec:  59138   total = 58.634506 per ms 
1   switches: fmods/sec:  62662   total = 62.305798 per ms 
1   switches: fmods/sec:  69024   total = 68.812196 per ms 
1   switches: fmods/sec:  65000   total = 64.707780 per ms 
1   switches: fmods/sec:  66143   total = 65.861114 per ms 
1   switches: fmods/sec:  49319   total = 48.862526 per ms 
1   switches: fmods/sec:  62931   total = 62.326372 per ms 
1   switches: fmods/sec:  60773   total = 49.842287 per ms 
1   switches: fmods/sec:  64698   total = 64.616777 per ms 
1   switches: fmods/sec:  45249   total = 44.923886 per ms 
RESULT: 1 switches 9 tests min/max/avg/stdev = 44923.89/68812.20/59139.86/8260.13 responses/s
./trema killall

The build tool also offers a task to profile your controller, run as follows:

./build.rb cbench:profile

For the task to run successfully you must have valgrind installed, since profiling is done with its callgrind tool. The task produces a number of callgrind output files in the current directory. To view any of those files you need to install kcachegrind and run it, for example, as:

kcachegrind callgrind.out.13490

Note: The distclean task removes all callgrind files, so back up the files to a different directory if required.

dumper

The dumper is a simple passive controller that dumps the contents of OpenFlow messages to the screen. Unless something triggers an event to be sent, some messages may never be dumped. In the directory you will find a C program and a Ruby equivalent. You run the C dumper example as:

./trema run objects/examples/dumper/dumper -c src/examples/dumper/dumper.conf

and the ruby dumper program as:

./trema run src/examples/dumper/dumper.rb -c src/examples/dumper/dumper.conf

There is also a cucumber test that you can run as:

cucumber features/example.dumper.feature

hello_trema

The shortest OpenFlow controller program you have ever seen (6 lines of Ruby code): it prints a "hello" message and exits after establishing a connection to a virtual switch. You run the hello examples as:

./trema run objects/examples/hello_trema/hello_trema -c src/examples/hello_trema/hello_trema.conf
or
./trema run src/examples/hello_trema/hello_trema.rb -c src/examples/hello_trema/hello_trema.conf

There is also an RSpec test program that can be run as:

rspec -fs -c spec/trema/hello_spec.rb

learning_switch

An example program (C/Ruby) that simulates a layer-2 switch. It either forwards traffic to known, learned hosts or floods it to all ports (except the incoming one). For this it relies on a forwarding database that maintains pairs of MAC address and ingress port number, not associated with any particular OpenFlow vswitch. It also includes a timer that ages (removes) old entries from the database upon expiration. The database implementation is no more than a simple hash used to store, remove and look up entries (a minimal sketch appears at the end of this subsection). To confirm its operation, first run the program of your choice:

./trema run objects/examples/learning_switch/learning_switch -c src/examples/learning_switch/learning_switch.conf  
or  
./trema run src/examples/learning_switch/learning-switch.rb -c src/examples/learning_switch/learning_switch.conf

Then send a packet from host1 to host2 (the configuration file defines the two hosts host1 and host2):

./trema send_packets -s host1 -d host2

Since the address of host2 is unknown, the packet is flooded on the network and host2 receives it (its received packet counter should be 1).

./trema show_stats host2
Sent packets:

Received packets:
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.2,1,192.168.0.1,1,1,50

As a real application would, send a packet from host2 to host1 to verify two-way communication.

./trema send_packets -s host2 -d host1
./trema show_stats host1
Sent packets:
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.2,1,192.168.0.1,1,1,50
Received packets:
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.1,1,192.168.0.2,1,1,50

Once the test is complete, terminate the learning switch program by pressing Ctrl-C. There is also a cucumber feature file with two scenarios, one for each learning switch program, that can be run as:

cucumber features/example.learning_switch.feature
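For reference, the forwarding database mentioned above boils down to a MAC-to-port hash with timestamps for aging. The sketch below is illustrative only and is not the bundled FDB implementation:

class SimpleFDB
  TTL = 300  # seconds before an entry is considered stale (arbitrary value)

  def initialize
    @db = {}
  end

  # Remember which port a source MAC address was seen on.
  def learn mac, port_no
    @db[ mac ] = { :port_no => port_no, :seen_at => Time.now }
  end

  # Return the known port for a MAC address, or nil to trigger flooding.
  def port_no_of mac
    entry = @db[ mac ]
    entry ? entry[ :port_no ] : nil
  end

  # Called periodically to drop entries that have not been refreshed.
  def age
    @db.delete_if { | mac, entry | Time.now - entry[ :seen_at ] > TTL }
  end
end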

list_switches

A program (C/Ruby) that retrieves and displays the datapath ids of all configured virtual switches. You run the program as:

./trema run objects/examples/list_switches/list_switches -c src/examples/list_switches/list_switches.conf
or  
./trema run src/examples/list_switches/list-switches.rb -c src/examples/list_switches/list_switches.conf

The program outputs the discovered switches as below:

switches = 0xe0, 0xe1, 0xe2, 0xe3

You run the RSpec as:

rspec -fs -c spec/trema/list-switches-reply_spec.rb

and the cucumber feature as:

cucumber features/example.list_switches.feature

match_compare

The match_compare program acts like a firewall by installing filters to block or allow traffic. The program reads a static configuration that allows traffic from all hosts with an IP source or destination address in the 192.168.0.0/16 network, plus all ARP traffic. It does this by creating the following rules, which are matched against incoming packet-ins.
Table: Traffic to allow

Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP src         | IP dst         | IP Prot | TCP sport | TCP dport | Action
*           | *       | *       | 0x806    | *       | *              | *              | *       | *         | *         | OFPP_FLOOD
*           | *       | *       | 0x800    | *       | 192.168.0.0/16 | *              | *       | *         | *         | OFPP_FLOOD
*           | *       | *       | 0x800    | *       | *              | 192.168.0.0/16 | *       | *         | *         | OFPP_FLOOD
If it receives a packet that matches any table entry above, it installs a flow that floods the packet using the virtual port OFPP_FLOOD, followed by a packet-out to the flood port; otherwise it installs a flow that temporarily drops such packets for 60 seconds (see the diagram below).

match_compare allow/block decision diagram
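In Ruby, rules like those in the table can be expressed with Trema's Match class. The snippet below is a sketch, not the actual match_compare source, and it assumes Match.new accepts the option names shown:

# Row 1: flood all ARP traffic (EtherType 0x806).
allow_arp = Match.new( :dl_type => 0x0806 )

# Rows 2 and 3: flood IPv4 traffic from or to the 192.168.0.0/16 network.
allow_from_lan = Match.new( :dl_type => 0x0800, :nw_src => "192.168.0.0/16" )
allow_to_lan   = Match.new( :dl_type => 0x0800, :nw_dst => "192.168.0.0/16" )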

A few examples ensure that the program interprets the rules correctly:
Send a packet from the host with IP address 192.168.0.1 to destination 10.0.0.2. The program matches row 2 of the table and the packet is accepted, as the following informative message shows:

action=allow, datapath_id=0xabc, message={wildcards = 0, in_port = 3, dl_src = 00:00:00:01:00:01, dl_dst = 00:00:00:01:01:02, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 17, nw_src = 192.168.0.1/32, nw_dst = 10.0.0.2/32, tp_src = 1, tp_dst = 1}

Send a packet from the host with IP address 10.0.0.1 to host 10.0.0.2. This time the action is block, since no table entry matches. The output is shown below:

action=block, datapath_id=0xabc, message={wildcards = 0, in_port = 2, dl_src = 00:00:00:01:01:01, dl_dst = 00:00:00:01:01:02, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 17, nw_src = 10.0.0.1/32, nw_dst = 10.0.0.2/32, tp_src = 1, tp_dst = 1}

The program temporarily blocks unmatched packets (for 60 seconds), perhaps to prevent an influx of packets from congesting the network; an example for an LLDP packet is shown below:

action=block, datapath_id=0xabc, message={wildcards = 0, in_port = 2, dl_src = 86:72:52:e3:8f:f8, dl_dst = 01:80:c2:00:00:0e, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x88cc, nw_tos = 0, nw_proto = 0, nw_src = 0.0.0.0/32, nw_dst = 0.0.0.0/32, tp_src = 0, tp_dst = 0}

multi_learning_switch

As the multi prefix suggests, the multi_learning_switch example program is almost identical to the learning_switch discussed previously. The only difference is that the scope of its forwarding database learning is limited to each OpenFlow vswitch instance (datapath_id). Its configuration file contains a larger number of vswitches, constructing a more complex network that is necessary to verify the program: four vswitches (1..4) connected in series, with one host attached to each. By sending a packet from host1 to host4 (several switch hops away) for the first time, we can demonstrate that flooding works as it is supposed to across multiple nodes. To prevent the forwarding database from growing unexpectedly large, old entries are deleted by a periodic timer that runs in the background. The sequence of events that constitutes the multi_learning_switch behavior is as follows:

  1. A packet_in from vswitch1 as a result of sending a packet from host1 to host4.
  2. Multi_learning_switch learns the source Ethernet address and ingress port and associates this information with vswitch1.
  3. Multi_learning_switch sends a packet-out to vswitch1 with action set to FLOOD.
  4. As instructed, vswitch1 transmits the packet-out on all its ports, and it reaches vswitch2.
  5. vswitch2 has no flow entry matching the packet and transmits a packet_in to multi_learning_switch.
  6. Multi_learning_switch remembers the source Ethernet address and ingress port.
  7. Processing of packet-in from vswitch3 is identical to vswitch2 and eventually the packet-out reaches host4 which is attached on vswitch4.
  8. Since the path traversal information has already been set up and remembered by multi_learning_switch, a packet sent from host4 to host1 is delivered to host1 as a sequence of four packet-out messages.

examples/openflow_messages

This directory contains a number of programs written in C and Ruby, where each program is named after the OpenFlow message it exercises. A program may accept a numeric argument that specifies the number of messages to send; messages are sent one after the other without any delay between them. The following messages are covered (a Ruby sketch of sending such messages appears after the list):

  • Echo request/reply - sends a specified number of echo requests/reply messages to a configured vswitch after connection establishment.
  • Features request - sends a FeaturesRequest message and waits for the reply. Once received it displays the message contents on the screen.
  • Hello - sends a specified number of Hello messages.
  • Set Config - sends a specified number of SetConfig messages.
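The Ruby variants all follow the same pattern: construct a message object and hand it to send_message. A condensed sketch, not one of the bundled programs, might look like this:

class MessageSender < Controller
  def switch_ready datapath_id
    # The message classes live under the Trema module, as described earlier.
    send_message datapath_id, Hello.new
    send_message datapath_id, EchoRequest.new
    send_message datapath_id, FeaturesRequest.new
  end
end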

There is an acceptance test file for each message under the features directory that you can run by executing the following:

cucumber features/example.message.echo_request.feature

The output of the command should be green if the test passes.

packet_in

This example program registers a packet_in handler and waits for its invocation. Once invoked, it displays the packet_in header and dumps any user data as hex bytes. In one terminal window run either the C or the Ruby example program:

./trema run ./objects/examples/packet_in/packet_in -c src/examples/packet_in/packet_in.conf
./trema run src/examples/packet_in/packet-in.rb -c src/examples/packet_in/packet_in.conf

In another window send a packet from host1 to host2:

./trema send_packets -s 192.168.0.1 -d 192.168.0.2

Note: You can also exchange packets between hosts by specifying IPv4 addresses instead of logical names.
The generated output is shown below:

received a packet_in
datapath_id: 0xabc
transaction_id: 0x0
buffer_id: 0x114
total_len: 64
in_port: 2
reason: 0x0
data: 000000010002000000010001080045000032000000004011f967c0a80001c0a8000200010001001e000000000000000000000000000000000000000000000000

packetin_filter_config

This example program demonstrates the use of the packetin_filter through an API provided in the packetin_filter_interface file. The API provides calls to add, dump and delete packet-in filter entries. To verify that packet-ins are parsed correctly, the dumper example program is used to display the data. The program starts by making an initialization call to the library, which registers a message handler to process messages from the packetin_filter program. It then adds a packet-in filter that matches any packet, registering a callback to be invoked once the filter has been added. Once the filter has been created successfully its contents are dumped and, since it is no longer needed, it is deleted, again with a callback registered to monitor the delete operation. This completes the test; before exiting, the program calls the finalization method to free any resources created during initialization and to leave the API in the state it originally found it. The directory also includes a number of small programs to add, dump and delete packet-in filter entries. All programs share a common service name to communicate with the packetin_filter program. To check that all programs run satisfactorily, run the dumper program and then execute the other programs one by one, observing their output:

./trema run -c src/examples/packetin_filter_config/packetin_filter_config.conf

You can find all the instructions in the README file. The output produced when the add_filter program is run is:

TREMA_HOME=`pwd` ./objects/examples/packetin_filter_config/add_filter
A packetin filter was added ( match = [wildcards = 0xc, in_port = 1, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 10, nw_src = 10.0.0.1/32, nw_dst = 10.0.0.2/32, tp_src = 1024, tp_dst = 2048], service_name = dumper ).

You can find the same output in the corresponding log file add_filter.log. Next use the dump_filter program to display the created filter. The output is:

TREMA_HOME=`pwd` ./objects/examples/packetin_filter_config/dump_filter
3 packetin filters found ( match = [wildcards = 0x3fffff, in_port = 0, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 0, dl_vlan_pcp = 0, dl_type = 0, nw_tos = 0, nw_proto = 0, nw_src = 0.0.0.0/0, nw_dst = 0.0.0.0/0, tp_src = 0, tp_dst = 0], service_name = dumper, strict = false ).
[#0] match = [wildcards = 0xc, in_port = 1, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 10, nw_src = 10.0.0.1/32, nw_dst = 10.0.0.2/32, tp_src = 1024, tp_dst = 2048], priority = 65535, service_name = dumper.
[#1] match = [wildcards = 0x3fffef, in_port = 0, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 0, dl_vlan_pcp = 0, dl_type = 0x88cc, nw_tos = 0, nw_proto = 0, nw_src = 0.0.0.0/0, nw_dst = 0.0.0.0/0, tp_src = 0, tp_dst = 0], priority = 32768, service_name = dumper.
[#2] match = [wildcards = 0x3fffff, in_port = 0, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 0, dl_vlan_pcp = 0, dl_type = 0, nw_tos = 0, nw_proto = 0, nw_src = 0.0.0.0/0, nw_dst = 0.0.0.0/0, tp_src = 0, tp_dst = 0], priority = 0, service_name = dumper.

The output is not exactly what we expected, but that is because the strict option of the dump_filter program is set to false. To output only the filter we created, we use the dump_filter_strict program:

TREMA_HOME=`pwd` ./objects/examples/packetin_filter_config/dump_filter_strict
1 packetin filter found ( match = [wildcards = 0xc, in_port = 1, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 10, nw_src = 10.0.0.1/32, nw_dst = 10.0.0.2/32, tp_src = 1024, tp_dst = 2048], service_name = dumper, strict = true ).
[#0] match = [wildcards = 0xc, in_port = 1, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 10, nw_src = 10.0.0.1/32, nw_dst = 10.0.0.2/32, tp_src = 1024, tp_dst = 2048], priority = 65535, service_name = dumper.

This time the output is exactly what we expected. As with dump, two variants of the delete filter program are available: delete and delete strict. You execute each as follows:

TREMA_HOME=`pwd` ./objects/examples/packetin_filter_config/delete_filter_strict
1 packetin filter was deleted ( match = [wildcards = 0xc, in_port = 1, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 10, nw_src = 10.0.0.1/32, nw_dst = 10.0.0.2/32, tp_src = 1024, tp_dst = 2048], service_name = dumper, strict = true ).
TREMA_HOME=`pwd` ./objects/examples/packetin_filter_config/delete_filter
2 packetin filters were deleted ( match = [wildcards = 0x3fffff, in_port = 0, dl_src = 00:00:00:00:00:00, dl_dst = 00:00:00:00:00:00, dl_vlan = 0, dl_vlan_pcp = 0, dl_type = 0, nw_tos = 0, nw_proto = 0, nw_src = 0.0.0.0/0, nw_dst = 0.0.0.0/0, tp_src = 0, tp_dst = 0], service_name = dumper, strict = false ).

repeater-hub

The repeater hub is a Trema OpenFlow controller that copies (repeats) any incoming packet to all other ports. For illustration purposes we use a simple configuration file with one vswitch and three attached hosts. When a packet-in is received from any host, a flow entry is installed at the switch with its action set to flood, and a packet-out is sent to flood the packet to the other hosts (a Ruby sketch of this handler follows the run commands below). If similar packets are sent thereafter, the vswitch handles delivery itself without sending a packet-in to the repeater hub, as expected and conforming to the OpenFlow specification. Feel free to extend the configuration file with more vswitches and hosts, but you may then observe more packet-ins than would normally be required; integrating with the topology and topology discovery modules, which report path information, could be used to our advantage to set up flows in advance. To run either the C or the Ruby program type:

./trema run ./objects/examples/repeater_hub/repeater_hub -c src/examples/repeater_hub/repeater_hub.conf
./trema run src/examples/repeater_hub/repeater-hub.rb -c src/examples/repeater_hub/repeater_hub.conf
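The heart of the Ruby version is a single packet_in handler that installs the flood flow and then floods the triggering packet itself. The sketch below is a rough reconstruction rather than the bundled file, and it assumes the keyword options of send_flow_mod_add and send_packet_out as used in Trema's Ruby API:

class RepeaterHub < Controller
  def packet_in datapath_id, message
    # Install a flow so the vswitch floods similar packets on its own.
    send_flow_mod_add(
      datapath_id,
      :match => ExactMatch.from( message ),
      :actions => ActionOutput.new( :port => OFPP_FLOOD )
    )
    # Also flood the packet that triggered this packet_in.
    send_packet_out(
      datapath_id,
      :packet_in => message,
      :actions => ActionOutput.new( :port => OFPP_FLOOD )
    )
  end
end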

To ensure packet delivery to all hosts use the send_packets and show_stats trema commands. For example:

./trema send_packets -s host1 -d host3
./trema show_stats host1 (no received packets)
Sent packets:
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.3,1,192.168.0.1,1,1,50
Received packets:
./trema show_stats host2 (one received packet)
Sent packets:

Received packets:
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.3,1,192.168.0.1,1,1,50
./trema show_stats host3 (one received packet)
Sent packets:

Received packets:
ip_dst,tp_dst,ip_src,tp_src,n_pkts,n_octets
192.168.0.3,1,192.168.0.1,1,1,50

switch_info

This example program sends a FeaturesRequest message to a vswitch when the connection is established and displays the FeaturesReply. The features reply includes the vswitch's identity (datapath_id) and the supported capabilities, actions and ports that a controller should respect. Once the content of the FeaturesReply message has been displayed, the program shuts down the vswitch gracefully and exits. There is a C and a Ruby program, executed as:

./trema run objects/examples/switch_info/switch_info  -c src/examples/switch_info/switch_info.conf
./trema run src/examples/switch_info/switch_info.rb  -c src/examples/switch_info/switch_info.conf

There is a small difference between the C and the Ruby programs: the C program displays the vswitch's number of ports, while the Ruby version displays the vswitch's port numbers.

switch_monitor

This example instantiates a number of vswitches and declares a 10-second periodic timer that continuously displays the status of all the switches (a sketch of such a timer declaration appears at the end of this subsection). A vswitch can be online (UP) or offline (DOWN). Initially the program should find all switches online and display the following:

Switch 0x1 is UP
Switch 0x2 is UP
Switch 0x3 is UP
All switches = 0x1, 0x2, 0x3

The easiest way to simulate an offline condition is to abruptly kill a vswitch, obtaining its pid from the tmp/pid directory. Once you have done that you should see a DOWN message and the number of vswitches decrease to 2.

Switch 0x1 is DOWN
All switches = 0x2, 0x3

The program basically loops indefinitely displaying the results, so to terminate it press Ctrl-C in the terminal window.
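The 10-second loop boils down to a periodic timer attached to the controller class. The sketch below is illustrative rather than the bundled source; the periodic_timer_event helper and the switch_ready / switch_disconnected handler names are assumed from Trema's Ruby timer and event support:

class MiniSwitchMonitor < Controller
  periodic_timer_event :show_switches, 10  # assumed helper; see ruby/trema/timers.rb

  def switch_ready datapath_id
    ( @switches ||= [] ) << datapath_id
  end

  def switch_disconnected datapath_id
    ( @switches ||= [] ).delete datapath_id
  end

  def show_switches
    list = ( @switches || [] ).collect { | dpid | "%#x" % dpid }.join( ", " )
    info "All switches = " + list
  end
end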

traffic_monitor

This is the last example program we cover. It snoops packet-ins and counts the number of packets and bytes, using the source MAC address as a unique key to store and later retrieve this information. It reports at a 10-second interval, indefinitely. As usual, you run the C or Ruby version of the program using the trema command:

./trema run ./objects/examples/traffic_monitor/traffic_monitor -c src/examples/traffic_monitor/traffic_monitor.conf
./trema run src/examples/traffic_monitor/traffic-monitor.rb -c src/examples/traffic_monitor/traffic_monitor.conf

The output includes the time the reading was taken and a single line for each entry found.

Mon Mar 05 09:57:02 +0900 2012
1e:b6:21:c4:f2:ed 22 packets (3366 bytes)
6e:00:22:8b:e4:b5 22 packets (3366 bytes)
72:02:e7:d1:ae:06 22 packets (3366 bytes)
56:ef:f8:23:da:ce 22 packets (3366 bytes)
00:00:00:00:00:01 437 packets (27968 bytes)

spec

The spec directory and its sub-directories contain Trema's RSpec test specification files. In the spec directory, spec_helper.rb and openflow-message_supportspec.rb are helper RSpec files included by other files found under the trema sub-directory. To make tests more realistic we introduced a network function that accepts a Ruby code block and behaves much like the trema run command, letting you test your controller class more effectively; you will find the source for this function in spec_helper.rb. The openflow-message_supportspec.rb file contains shared examples, a notion used in RSpec that allows multiple test files to call common test code. Under the trema directory and its sub-directories you will find a number of files named xxx_spec.rb. These are the RSpec test files, in most cases associated with a class file found under the ruby/trema directory.
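As a rough illustration of the network helper, a spec could look like the sketch below. The block syntax and the run call are recalled from Trema's own specs and README, so the exact form may differ; check spec_helper.rb for the authoritative syntax:

require File.join( File.dirname( __FILE__ ), "spec_helper" )

describe RepeaterHub do  # the controller under test must already be loaded
  it "floods a packet sent between two hosts" do
    network {
      vswitch { datapath_id 0xabc }
      vhost "host1"
      vhost "host2"
      link "0xabc", "host1"
      link "0xabc", "host2"
    }.run( RepeaterHub ) {
      send_packets "host1", "host2"
      # expectations against vhost( "host2" ).rx_stats would go here
    }
  end
end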

features

While we are still on the subject of testing: the features directory includes Trema's functional tests, written in plain text to run with the cucumber tool. If you know cucumber you will probably recognize how the files are organized under the features directory. Briefly, the step_definitions directory contains the programmable actions that are fired when a pattern under test matches, and the support directory contains common code referenced by one or more feature files. Each feature file describes a feature associated with one or more scenarios. A scenario starts with a brief description and proceeds with When and Then statements, or a combination of Given, When and Then. Simply put, the Given and When steps set up the prerequisites and actions for the test case, and Then verifies the result.

tmp

The tmp directory and its sub-directories contain temporary files created while a Trema application is running. You can reclaim some disk space by removing the entire tmp directory and its sub-directories without doing any harm; when Trema starts up again the directory structure is recreated. In the log sub-directory you will find a log file for each running process, or more precisely for each process that indirectly calls init_log. External processes that Trema uses have their own logging facilities and have been modified to write all log messages to files. Such processes include Open vSwitch and phost. For the Open vSwitch process all debug messages are logged to a file and any errors are displayed on the screen; if you ever need to alter those settings, have a look at the default_options method in open-vswitch.rb under the ruby/trema directory. The phost process starts with its logging level set to warning. You can set an exportable variable (LOGGING_LEVEL) to change the logging level before running the trema run command; it can be set to critical, error, warn, notice, info or debug, where debug prints with maximum verbosity and info is the default. Both Ruby and C programs can use the log-level functions to log error and informational messages.

build

Up to now we haven't mentioned Trema's build system, because it is so simple that it is hardly worth the effort to explain. But our knowledge of Trema would not be complete without the bare essentials of the build system.
To build Trema you only need to type a single command in the top directory:

./build.rb

build.rb is a Ruby script that reads the Rantfile found in the same directory and executes the default task. The Rantfile contains instructions for building a target from a number of tasks. A task may require another task in order to build; in this case the first task is said to depend upon the second. build.rb discovers task dependencies and builds Trema in the required order. A task with multiple file dependencies is shown below:

task "vendor:phost" => [ Trema::Executables.phost, Trema::Executables.cli ]

The task "vendor:phost" depends on Trema::Executables.phost and Trema::Executable.cli. The declaration order is important here. As another example a task that depends upon another task is shown below:

task "openflow_wireshark_plugin:clean" => "openflow_wireshark_plugin:distclean"
task "openflow_wireshark_plugin:distclean" do
  sys.rm_rf Trema.vendor_openflow_git
end

It is worth mentioning that having many nested or cyclic dependencies is undesirable, as it would probably indicate a design flaw. The actions associated with a task are enclosed in a do...end Ruby block and can take parameters if required. To get a list of all tasks supported by build.rb, invoke the command with the -T option and you'll get a long list of tasks like the one below:

./build.rb default                         # Build trema.
./build.rb clean                           # Cleanup generated files.
./build.rb distclean                       # Cleanup everything.
./build.rb buildrb                         # Generate build.rb.
./build.rb cbench                          # Run the c cbench switch controller to benchmark
./build.rb cbench:c                        # Run the c cbench switch controller to benchmark
./build.rb cbench:ruby                     # Run the ruby cbench switch controller to benchmark
./build.rb cbench:profile                  # Run cbench with profiling enabled.
./build.rb openflow_wireshark_plugin       # Build openflow wireshark plugin
./build.rb libtrema                        # Build trema library.
./build.rb coverage:libtrema               # Build trema library (coverage).
./build.rb rubylib                         # Build ruby library.
./build.rb switch_manager                  # Build switch manager.
./build.rb switch                          # Build switch.
./build.rb packetin_filter                 # Build packetin filter.
./build.rb tremashark                      # Build tremashark.
./build.rb packet_capture                  # Build packet_capture.
./build.rb syslog_relay                    # Build syslog_relay.
./build.rb stdin_relay                     # Build stdin_relay.
./build.rb examples:cbench_switch          # Build cbench_switch example.
./build.rb examples:dumper                 # Build dumper example.
./build.rb examples:hello_trema            # Build hello_trema example.
./build.rb examples:learning_switch        # Build learning_switch example.
./build.rb examples:list_switches          # Build list_switches example.
./build.rb examples:multi_learning_switch  # Build multi_learning_switch example.
./build.rb examples:packet_in              # Build packet_in example.
./build.rb examples:repeater_hub           # Build repeater_hub example.
./build.rb examples:switch_info            # Build switch_info example.
./build.rb examples:switch_monitor         # Build switch_monitor example.
./build.rb examples:traffic_monitor        # Build traffic_monitor example.
./build.rb unittests                       # Run unittests
./build.rb notes                           # Print list of notes.

You can select and run any particular task from the list above. The short description that appears next to each task name comes from the "desc" keyword definition appearing before the task declaration. As can be seen from the list, some tasks are designed for a specific purpose and are not related to the build system at all. When the build system runs it invokes the rant tool, so by default it accepts all the rant options as well; the --dry-run option, which prints information without executing the actions, is useful when debugging. Finally, the Rantfile also includes project-level build settings (CFLAGS) that are passed to the compiler and should not be modified unless absolutely necessary.

Cruise

Cruise is a Ruby script that runs the acceptance and unit tests and uses the gcov tool to discover and measure untested parts of the code. Trema includes a number of test-suite files under the unittests directory, designed to exercise the Trema library source, which is a vital part of the framework. The script starts by performing a clean build of Trema and proceeds to run the unit tests found under the spec directory. Once the unit tests complete successfully, the next task is the acceptance tests found under the features directory. At the end it prints a summary with test coverage results calculated for each C file found under the following directories:

src/lib
src/packetin_filter
src/switch_manager
src/tremashark
unittests

First, though, all C files under the above directories need to be compiled with gcc's --coverage option. This compiler option produces two sets of files, .gcno and .gcda, that can be fed into gcov to produce a one-line summary for each file. Typical output of the gcov command is shown below:

File 'unittests/lib/timer_test.c'
Lines executed:97.87% of 94

The percentage indicates how many of the file's lines were executed by the tests. The summary report details, for each file, its lines of code and its coverage percentage. At the end it outputs a total coverage percentage and the total number of untested files (coverage percentage = 0). Truncated output of the command is shown below:

Coverage details
================================================================================

                                        queue.c ( 106 LoC):   0.0%
                                 syslog_relay.c ( 161 LoC):   0.0%
                            packet_info_test.c ( 283 LoC):  99.7%
                               byteorder_test.c ( 933 LoC):  99.9%
                        openflow_message_test.c (4125 LoC): 100.0%
                                      wrapper.c (  33 LoC): 100.0%
Summary
================================================================================

- Total execution time = 0 seconds
- Overall coverage = 73.0% (25/72 files not yet tested)

As a last check, the command verifies that the overall coverage figure is not below a globally defined coverage threshold. If it is, it outputs an error message telling the user to increase the coverage and rerun the command; otherwise it outputs a warning message and exits.

Rakefile

This file contains various rake tasks that basically simplify tool execution. To get a list of all available tasks, run rake with the -T option; it should output the following:

rake build     # Build trema-0.2.2.1.gem into the pkg directory
rake build.rb  # Generate a monolithic rant file
rake features  # Run Cucumber features
rake flay      # Analyze for code duplication in: ruby
rake flog      # Analyze for code complexity
rake install   # Build and install trema-0.2.2.1.gem into system gems
rake quality   # Enforce Ruby code quality with static analysis of code
rake rcov      # Run RSpec code examples
rake reek      # Check for code smells
rake release   # Create tag v0.2.2.1 and build and push trema-0.2.2.1.gem to Rubygems
rake roodi     # Check for design issues in: ruby/**/*.rb, spec/**/*.rb, features/**/*.rb
rake spec      # Run RSpec code examples
rake yard      # Generate YARD Documentation

Some of the tasks, like build, features and spec, we have already mentioned in previous sections and will skip here. The flay task analyzes Ruby code and reports the similarities it finds. It can point out duplicate code, which is useful for refactoring, but it may also complain about small, superficial things, for example when multiple methods take arguments with the same names. Using the accompanying --trace option to display extra information can be useful. Executing the rake flay task returns the following:

1) Similar code found in :defn (mass = 95)
  ruby/trema/cli.rb:202
  ruby/trema/cli.rb:212
  ruby/trema/cli.rb:222
  ruby/trema/cli.rb:232
  ruby/trema/cli.rb:242

The flog task reports a complexity score for each method; the higher the value, the more complex the code. An excerpt of the output when this command is invoked is shown below:

    40.8: Trema#none
    43.3: Trema::Command#show_stats
    50.8: Trema::Command#load_config
    55.7: Trema::DSL::Runner#maybe_run_switch_manager
    56.1: Trema::Cli#send_packets_options
    59.8: Timers::TimerMethods#none
    62.7: Trema::DSL::Parser#configure
    78.1: Trema::Command#send_packets
rake aborted!
64 methods have a flog complexity > 10

Unfortunately it doesn't pinpoint where the complexity is found in the code.
Another similar code analysis tool is reek. It identifies a range of code smells, each described with a single word or short phrase, for example "LowCohesion" or "IrresponsibleModule", for each class, module or method. An excerpt of the output is shown below:

ruby/trema/table-stats-reply.rb -- 1 warning:
  Trema::TableStatsReply has no descriptive comment (IrresponsibleModule)
ruby/trema/timers.rb -- 1 warning:
  Timers::TimerMethods takes parameters [handler, interval] to 3 methods (DataClump)
ruby/trema/tremashark.rb -- 1 warning:
  Trema::Tremashark has no descriptive comment (IrresponsibleModule)
ruby/trema/util.rb -- 4 warnings:
  Trema::Util#cleanup calls Trema.pid twice (Duplication)
  Trema::Util#cleanup doesn't depend on instance state (LowCohesion)
  Trema::Util#cleanup has approx 8 statements (LongMethod)
  Trema::Util#cleanup refers to session more than self (LowCohesion)
ruby/trema/vendor-stats-reply.rb -- 1 warning:
  Trema::VendorStatsReply has no descriptive comment (IrresponsibleModule)
rake aborted!
Smells found

The last code analysis tool is roodi. It complains about methods with long argument lists or too many lines of code, checks that for loops are not used, and much more. Exactly what roodi checks is described in the roodi.yml file found in the roodi gem's installation directory. A sample output is shown below:

ruby/trema/cli.rb:40 - Method name "initialize" has 6 parameters.  It should have 5 or less.
ruby/trema/dsl/parser.rb:52 - Block cyclomatic complexity is 5.  It should be 4 or less.
spec/trema/openflow-error_spec.rb:25 - Block cyclomatic complexity is 9.  It should be 4 or less.
ruby/trema/dsl/runner.rb:43 - Method "maybe_run_switch_manager" has 22 lines.  It should have 20 or less.
ruby/trema/util.rb:51 - Method "cleanup" has 23 lines.  It should have 20 or less.
ruby/trema/command/send_packets.rb:34 - Method "send_packets" has 86 lines.  It should have 20 or less.
ruby/trema/command/run.rb:33 - Method "run" has 35 lines.  It should have 20 or less.
ruby/trema/command/run.rb:77 - Method "load_config" has 28 lines.  It should have 20 or less.
ruby/trema/command/show_stats.rb:34 - Method "show_stats" has 40 lines.  It should have 20 or less.
ruby/trema/command/usage.rb:25 - Method "usage" has 27 lines.  It should have 20 or less.
ruby/trema/command/up.rb:33 - Method "up" has 22 lines.  It should have 20 or less.
ruby/trema/command/kill.rb:33 - Method "kill" has 37 lines.  It should have 20 or less.
ruby/trema/shell/run.rb:28 - Method "run" has 29 lines.  It should have 20 or less.
ruby/trema/shell/link.rb:28 - Method "link" has 23 lines.  It should have 20 or less.
spec/spec_helper.rb:90 - Method "trema_run" has 33 lines.  It should have 20 or less.
rake aborted!
Found 15 errors.

There is also a rake task quality that executes all code analysis tools one by one and outputs their results.
To generate Trema's YARD documentation use the yard task. It creates a doc directory that contains the generated HTML documentation. Use the ./trema ruby command to display the documentation in your favorite browser, or point your browser at the index.html file found under the doc directory.