The most fundamental part of developing the prototype was representing the network in a way that could be easily integrated with the TCP/IP simulator. For this purpose, we designed several classes, and various functions on those classes, that were then used to represent the network topology and to integrate the whole system. Another very important part was taking input from the user: the user was to enter the network only once, after which the whole network, along with the sources, the type of traffic they were running, the switches, and so on, was to be initialised. A further important decision concerned the main simulation engine that would drive the whole simulation. The basic choices for the main driver of our simulation engine were a time loop, a packet loop, or an event-driven simulation. We eventually chose event-driven simulation because it was more efficient and presented a more realistic model.
In the part that follows, we describe the history of our prototype, how it evolved, the different design issues that we faced and how we resolved them, and briefly the design of the individual components of the prototype.
When we were working with REAL as our top-level TCP traffic generator, we took input from the user as follows. The user entered the network topology through an input file: the total number of nodes and switches, the specifications of the links (delay, bandwidth, etc.), the switch speeds and buffer sizes, and the size of the TCP packets that were to be generated. If any connections were to be established, their specifications were also entered by the user. If the user wished to trace the state of the connections, he had the option of entering 'trace' mode by requesting it in the input. The start time of the traffic and the total duration of the simulation were also entered by the user.
This input file was passed through our 'lexer', which returned tokens; our parser then interpreted these tokens, checked for semantic errors, and handed the input to the Network.cpp file. Hence we had our own mechanism for interpreting the user's input file. The inspiration for this method came from the way REAL took input from the user.
After switching to NS, the method of taking input changed, since NS takes its input in the form of a Tcl script that it interprets itself.
The user defines the network in an input file that is read in by the parser and checked for syntax errors. The file is organised into sections similar to those described above for the REAL-based input. Once the file has been read in without errors, the network is initialised.
Initially we had thought that the main part of the simulation engine would be a main for loop that generated one TCP packet in each iteration. However, the ATM traffic was to come in at two places. First, CBR traffic would be generated in the main for loop, in the form of bandwidth allocation, whenever it was time for a connection set-up. Second, TCP/IP packets would be converted into ATM cells, and vice versa, at the router: on receiving ATM cells it would simply convert them into the corresponding TCP/IP packet, and on receiving TCP/IP packets it would convert them into ATM cells.
However, after some thought, a few changes were made to this design once the classes had been designed and the performance issues of the simulator had been taken into account. The first change was to the structure of the main for loop. Instead of a main for loop, our prototype maintained a heap of events, into which different events were inserted at the appropriate times. The event at the top of the heap (a minimum heap sorted on time) was removed and handled. This process of handling events continued as long as there were events to handle; as each event was handled, any new events it generated were inserted into the heap. The heap would become empty only when the traffic had stopped flowing; the other condition under which the simulation would stop was the arrival of the simulation end time. The simulation was started by breaking up a TCP/IP packet into ATM cells: this GENERATE event was the first one, inserted manually into the heap. From then on, the process continued on its own.
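The heap-driven engine described above can be sketched as follows. This is a minimal illustration, not the prototype's actual code: the `Event` structure, the event names, and the `run` function are our own illustrative names, and real event handling is elided to a comment.

```cpp
#include <queue>
#include <vector>

// Illustrative sketch of the event-driven engine: a minimum heap of
// events sorted on time, processed until the heap empties (traffic has
// stopped) or the simulation end time arrives.
enum EventType { GENERATE, ENTER_LINK, ENTER_IN_QUEUE, LEAVE_QUEUE,
                 RECEIVE_BY_AAL, CONNECTION_SETUP, DROP_CONNECTION,
                 END_SIMULATION };

struct Event {
    double time;      // simulation time at which the event fires
    EventType type;
};

// Comparator for a min-heap: the event with the smallest time is on top.
struct LaterEvent {
    bool operator()(const Event& a, const Event& b) const {
        return a.time > b.time;
    }
};

// Returns the time of the last event handled.
double run(std::priority_queue<Event, std::vector<Event>, LaterEvent>& heap,
           double end_time) {
    double now = 0.0;
    while (!heap.empty()) {
        Event e = heap.top();
        heap.pop();
        if (e.time > end_time) break;  // simulation end time has arrived
        now = e.time;
        // handling e would go here, and may push new events onto the heap
    }
    return now;
}
```

A handler for each event type would be dispatched where the comment stands, pushing any follow-on events back onto the same heap.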
To handle the GENERATE event, the TCP packet was broken down into ATM cells, which were placed on the link immediately attached to the source; the next event generated was ENTER LINK. As the new event was generated, its various members were set: the cell that had been generated was copied into the event, and its time and the total number of cells were set before the event was inserted into the heap. At this point, since we were only dealing with the prototype, the next TCP packet was also generated; this would not be done in the actual simulator, where the traffic would come from the TCP/IP simulator, NS.
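The segmentation step can be sketched as below. The 48-byte payload is the standard ATM cell payload size; the `Cell` fields and the `segment` function are illustrative names, not the prototype's.

```cpp
#include <vector>

// Sketch of GENERATE: breaking a TCP packet into ATM cells. An ATM cell
// carries a 48-byte payload, so the cell count is the packet size
// rounded up to a multiple of 48.
const int CELL_PAYLOAD = 48;

struct Cell {
    int seq;    // number of this cell within the packet
    int total;  // total number of cells for the packet
};

std::vector<Cell> segment(int packet_bytes) {
    int total = (packet_bytes + CELL_PAYLOAD - 1) / CELL_PAYLOAD;
    std::vector<Cell> cells;
    for (int i = 0; i < total; ++i)
        cells.push_back({i, total});
    return cells;
}
```

Each cell carrying its own number and the packet's total count is what later lets the AAL detect a complete packet.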
In the case of an ENTER LINK event, the nature of the current node was checked. If the current node was an ATM source, the nodes array was used to determine the link on which the cell was to enter; otherwise the switch's VPI table was looked up to determine the port. The other end of the link was then examined to see whether it was the destination. If it was, the event generated was RECEIVE BY AAL; otherwise the other end was necessarily a switch, and the event generated was ENTER IN QUEUE.
In the ENTER IN QUEUE case, it was first determined which queue the cell had to go to. If no route existed, a new path was established and the cell's age was incremented by the round-trip time to simulate the TCP connection set-up procedure. A LEAVE QUEUE event was then generated and inserted into the heap, so that when it was removed, the cell would be dequeued and handled appropriately. In the LEAVE QUEUE case, the next event generated was again ENTER LINK.
In the RECEIVE BY AAL case, the AAL checked whether all the cells belonging to the packet had been received. If they had, the GENERATE event was raised again, this time with an acknowledgement packet.
The CONNECTION SETUP event set up a path from source to destination and generated an event for dropping the connection. This DROP CONNECTION event was later handled to deallocate the resources and drop the connection.
The final event was END SIMULATION, at which stage all the connections were dropped and the statistics were finalised.
To simulate TCP/IP packet generation, two options were available to us. At first, we planned to use REAL, mainly because we were having some problems installing NS and running it. The basic design of REAL accommodated our simulation quite easily. It handled different events using switch statements arranged in a hierarchy: if an event did not match any case in the top-level switch, it was passed down to the next switch statement, and so on. Our plan was to place our own events at the bottom of this chain of case statements, so that any event matching none of REAL's events would, by implication, have been generated by our simulation and would be passed on to our handler, which would then handle it in a manner similar to that of our prototype.
However, once we had NS running, we saw that a number of features made NS a much more viable option. First, NS is designed using object-oriented methodology, which makes it much better suited to our simulation, which is also designed around classes. Secondly, since NS is a much more recent tool and is currently being used for research, it has many additional features that REAL lacked. For example, the user of NS can choose one of several implemented discard algorithms on the links, such as a drop-tail link or a RED link. Likewise, the user can choose one of several types of traffic, such as tahoe TCP, tahoe 4 TCP, etc. This flexibility greatly increases the package's utility to a researcher. Another feature of NS was that we could schedule specific events: for example, we could start a TCP traffic source at a given simulation time, end a traffic source at any time we wanted, and so on. All these features led us to use NS instead of REAL to simulate the TCP part of our simulation.
The computers were represented by objects stored in an array, while the switches were intermediate nodes, distinct from computers in that they do not generate any new traffic.
The computers were defined by the class ATM_Source, which contained information such as a string identifying the source, the start time at which traffic generation from the computer would begin, the characteristics of the traffic, and the numbers of CBR connections requested by the source and refused. One limitation was that we could not start more than one traffic stream from a single source; for example, we could not represent two telnet sessions on one source.
The switches were also designed as an independent class, with characteristics such as a buffer to store cells, the discard algorithm applied at the switch, the type of switch (shared memory or output buffer), the ports (represented as a list), and a VPI table. The switch also kept a count of various statistics, such as the accumulated delay and the total number of cells lost.
The VPI table was itself a class holding a two-dimensional array and the size of the table. The table specified, for a given destination, the output port to which the traffic should be sent.
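The lookup the table supports can be sketched as follows. This is a hypothetical rendering: the prototype stored a two-dimensional array, which we represent here as rows of (connection, output port) pairs, matching the two-column description of the table given in the switch design; all names are ours.

```cpp
#include <utility>
#include <vector>

// Illustrative sketch of the VPI table: each row maps a connection
// number to the output port its cells should be forwarded on.
struct VPITable {
    std::vector<std::pair<int, int>> rows;  // (connection, output port)

    // Returns the output port for a connection, or -1 if no route exists
    // (the case in which the prototype establishes a new path).
    int lookup(int connection) const {
        for (const auto& r : rows)
            if (r.first == connection) return r.second;
        return -1;
    }
};
```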
A computer (end node) could only be connected to one router. At first we thought that the links would be implemented as an adjacency matrix of (full-duplex) links, with each individual link implemented as an object with delay and maximum-bandwidth attributes. However, the design of the links was also modified later. Instead of an adjacency matrix, links were maintained as an array of link objects, each storing its own delay, bandwidth, available bandwidth, and two ends. At first we had links implemented as FIFO queues, but through iteration in our design we realised that these queues were effectively redundant, since the links only added delay to the packets and did nothing else to them. Packets were dequeued from the link queue purely on the basis of time, which would happen in any case once the delay was added.
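The redundancy argument above can be made concrete: a cell entering a link at time t simply re-emerges at the far end after the link's propagation delay plus its transmission time, so scheduling the exit event at that time makes an explicit FIFO unnecessary. A minimal sketch, with illustrative names:

```cpp
// Sketch of the link object described above. A standard 53-byte ATM
// cell is 424 bits; the cell size is a parameter here.
struct Link {
    double delay;      // propagation delay (seconds)
    double bandwidth;  // bits per second
    int end_a, end_b;  // the two ends of the link
};

// Time at which a cell entering the link at enter_time reaches the
// other end: enter time + propagation delay + transmission time.
double exit_time(const Link& l, double enter_time, int cell_bits) {
    return enter_time + l.delay + cell_bits / l.bandwidth;
}
```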
We had thought that we would need a separate buffer in the AAL layer for each connection (recall that the AAL layer waits for all the ATM cells belonging to a TCP/IP packet and then passes the packet on). However, once the AAL was defined as a class, we noted that there was no need to store all the cells: the last cell alone carried enough information to determine whether all the cells of a particular TCP packet had arrived. The structure of the AAL buffer was therefore reduced to an array indicating, for each connection, whether its buffer was empty and what the last cell was. When handling connections from different sources, we initially considered having effectively separate AAL layers for each session, but through our design iterations we realised that separate buffers for each connection within the same AAL layer worked just as well, avoiding redundancy.
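The last-cell observation can be sketched as below. The names are hypothetical, and the sketch assumes cells of a packet arrive in order, which holds on the prototype's FIFO switch queues and delay-only links.

```cpp
// Sketch of the AAL per-connection state: because every cell carries
// its own number and the packet's total cell count, remembering only
// the last cell seen is enough to detect a complete TCP packet.
struct LastCell {
    int seq = -1;   // number of the last cell received
    int total = 0;  // total cells in the packet being reassembled
};

// Record the cell just received; returns true when it completes its
// packet (cells are numbered 0..total-1 and arrive in order).
bool receive(LastCell& buf, int seq, int total) {
    buf.seq = seq;
    buf.total = total;
    return seq == total - 1;
}
```

When `receive` returns true, the prototype would raise GENERATE again with an acknowledgement packet, as described below.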
The switch was implemented as an object and could be of two types: shared memory or output buffer. The number of inputs in the switch was equal to the number of outputs, and the number of interconnections with other nodes determined the number of inputs. The VPI table was just a two-column table with the connection number and the output port. The buffer was initially to be implemented as a list of linked lists, but was later switched to a queue of ATM cells, since the FIFO property was also required. If the switch was output-buffered, the total buffer size was divided equally among all the outputs, so the discard algorithms looked at the queue size of each output; in a shared-memory switch, the discard algorithms used the total size of the buffer.
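The difference between the two disciplines can be sketched as a drop test (a simplified stand-in for the configurable discard algorithms; all names are illustrative): an output-buffered switch checks one port's queue against its equal share of the buffer, while a shared-memory switch checks total occupancy against the whole buffer.

```cpp
#include <queue>
#include <vector>

enum SwitchType { SHARED_MEMORY, OUTPUT_BUFFER };

// Sketch of the switch buffer: one FIFO of cells per output port, with
// the drop decision depending on the switch type.
struct Switch {
    SwitchType type;
    int buffer_size;                      // total buffer, in cells
    std::vector<std::queue<int>> port_q;  // one queue per output port

    bool would_drop(int port) const {
        if (type == OUTPUT_BUFFER)
            // each output owns an equal share of the buffer
            return port_q[port].size() >= buffer_size / port_q.size();
        // shared memory: the whole buffer is pooled across ports
        std::size_t occupied = 0;
        for (const auto& q : port_q) occupied += q.size();
        return occupied >= static_cast<std::size_t>(buffer_size);
    }
};
```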
For normal TCP/IP traffic we simply initialised the VPI tables before starting the simulation; these tables then remained static throughout the simulation. For CBR we did not use the VPI tables, but instead recorded the path in the connection itself. This ensured that a new path was searched for every time a connection was requested, since a different path satisfying the bandwidth requirement might also exist from source to destination.
The ATM cell was basically a class containing a TCP packet, with the ability to access the packet's attributes. Each cell stored its own number, the total number of cells, and the TCP packet from which it was formed. The packet's attributes, such as its age, size, source, and destination, were accessible through the cell; in other words, the ATM cell basically inherited from the TCP packet.
Another class that was defined, and which was not planned at design time, was the TCP packet class. A TCP packet consists of a size, a source, a destination, a TCP_seq_no, the age of the packet, the time at which the packet was generated, and the type of the packet (data or acknowledgement).
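The relationship between the two classes can be sketched as follows. The member names are our own guesses apart from TCP_seq_no, which the text names; the point is the inheritance relation, by which a cell carries its packet's attributes.

```cpp
// Illustrative sketch of the TCP packet class and the ATM cell that
// inherits from it, giving the cell access to the packet's attributes.
enum PacketType { DATA, ACK };

struct TCPPacket {
    int size = 0;           // bytes
    int source = 0;
    int dest = 0;
    int tcp_seq_no = 0;     // TCP_seq_no in the text
    double age = 0.0;       // accumulated delay
    double gen_time = 0.0;  // time at which the packet was generated
    PacketType type = DATA;
};

struct ATMCell : TCPPacket {
    int cell_no = 0;      // number of this cell
    int total_cells = 0;  // total cells the packet was split into
};
```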
CBR (Constant Bit Rate): We implemented a CBR connection by allocating all the required resources along the path at connection set-up time. This means that first all the links in the path between the source and the destination were identified, and then the available bandwidth on every link along the path was decreased. Similarly, at the switches the bandwidth reduction was accomplished by reducing the queue size by the same percentage as the percentage usage of the outgoing link. We did not generate any packet traffic for CBR connections.
Connections were represented by call requests generated by a source. Every time there was a call request, a path was identified between the source and the destination and checked against the user's specified requirements. If these requirements could not be met, the connection was rejected; otherwise it was granted.
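The admission decision for a CBR request can be sketched as below, assuming (as our own simplification) that the only requirement checked is bandwidth: the request is granted only if every link on the path has the required bandwidth available, in which case the bandwidth is reserved along the whole path. Names are illustrative.

```cpp
#include <vector>

// Sketch of CBR call admission: check every link on the path, then
// reserve bandwidth on all of them if the request can be met.
struct Link {
    double bandwidth;  // total capacity
    double available;  // capacity not yet reserved by CBR connections
};

bool admit(std::vector<Link*>& path, double required) {
    for (const Link* l : path)
        if (l->available < required) return false;  // reject the call
    for (Link* l : path)
        l->available -= required;  // allocate resources at set-up time
    return true;
}
```

A DROP CONNECTION handler would do the reverse, adding `required` back to every link on the recorded path.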
Various statistics were gathered at the prototype level, including link utilisation, cell loss at the switches, and queue lengths.