International Journal of Innovative Research in Advanced Engineering (IJIRAE), ISSN: 2349-2163, Volume 1, Issue 8 (September 2014), www.ijirae.com. © 2014, IJIRAE - All Rights Reserved.

Development of Modified RED AQM Algorithm in Computer Network for Congestion Control

Santosh M Nejakar, Department of E&TC, Sanjay Ghodawat Institutions, Atigre
Sharanabasappa R R, Department of E&TC, Sanjay Ghodawat Institutions, Atigre
Harshavardhan D R, Department of R&D, Advaith Research Labs, Shimoga

Abstract — Congestion control plays an important role in network resource management in very large networks with heavy traffic. Active Queue Management (AQM) algorithms are one approach to resolving congestion problems, but the majority of AQM algorithms focus on single-queued links. Input-queued switches are limited in throughput and output-queued switches require a large speedup factor, so our attention is directed towards Combined Input and Output Queued (CIOQ) switches. A simple modification to the RED AQM algorithm is proposed in this paper in order to account for the presence of both input and output queues in the switch: the weighted sum of input and output queue lengths is used as the congestion measure instead of just the output queue length. With this modification, the average backlog in the switch is significantly reduced in the low speedup region compared to RED without it. Simulations show that the loss rate of the modified RED is slightly larger than that of traditional RED, but the output queue length is greatly reduced. The congestion measure computed with the weighting factor is what yields the reduction in average backlog. Simulations with our modified algorithm indicate improvements in queue length and switch utilization.

Keywords — Active Queue Management (AQM), Combined Input and Output Queued (CIOQ), Dynamic RED (DRED), Stabilized RED (SRED)

I. INTRODUCTION

Consider a network where sources compete for bandwidth and buffer space while being unaware of the current state of resources and of each other. In this setting, congestion arises when the demand for bandwidth exceeds the available link capacity. This leads to performance degradation in the network as packet losses increase and link utilization decreases. To avoid these problems, some form of organization and control of traffic is needed. One way to perform congestion control is to use Active Queue Management (AQM) in the routers. The basic idea behind an AQM algorithm is to sense the congestion level within the network and inform the packet sources about it, so that they reduce their sending rate. Many AQM algorithms have been proposed so far, such as RED, GREEN, REM and BLUE. In particular, there have been several modifications to RED; some references in this area include Dynamic RED (DRED) [9], Adaptive RED, Stabilized RED (SRED), Jacobson et al. and many others. However, these approaches mainly focus on output-queued switches. Most practical routers also have input queues, as this helps reduce the required speedup in the switch. The implementation of flow-based AQM algorithms on combined input and output queued (CIOQ) switches has been studied through simulations.
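Since the proposal in this paper builds directly on RED, it may help to recall the standard RED formulation before continuing. The equations below are the usual textbook form of RED (an exponentially weighted moving average of the queue length driving a linear drop probability); they are given here only for reference, are not reproduced from this paper, and use the conventional symbol names.

    \bar{q} \leftarrow (1 - w_q)\,\bar{q} + w_q\, q

    p_b = \begin{cases}
            0 & \bar{q} < min_{th} \\
            max_p \, (\bar{q} - min_{th}) / (max_{th} - min_{th}) & min_{th} \le \bar{q} < max_{th} \\
            1 & \bar{q} \ge max_{th}
          \end{cases}

Here q is the instantaneous queue length, \bar{q} its moving average, w_q the averaging weight, and min_th, max_th and max_p the RED thresholds and maximum drop probability.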
That study uses the GREEN algorithm; the rates used as input to the algorithm incorporate, in addition to the output queue arrival rates, the rates of the input-side VOQs. This idea is important since, in a CIOQ switch, congestion affects input queues as well as output queues. In this paper, we look at the impact of input queues on the design of queue-based AQM algorithms. Specifically, we propose a modification of the Random Early Detection (RED) algorithm to account for the presence of the input queue. Instead of using the output queue length as the congestion measure, we use a weighted sum of the input and output queue occupancies. The key advantage of this is that the RED (output) queue need not fill up as much as in original RED before the algorithm starts reacting to congestion. To study the effect of this change, simulations were performed in the network simulator (ns) for a 4 × 4 and a 16 × 16 CIOQ switch with virtual output queues (VOQs). Results show that our fairly simple modification of the RED algorithm significantly reduces the backlog, and hence the packet queuing delay. Simulations show that utilization remains unaffected as compared to the original RED. The weighting factor given to the input queue length in the congestion measure is a critical parameter: we can use it to strike a balance between reducing the average backlog and preventing the loss rate from becoming too high.

OBJECTIVE
- A simple modification to the RED AQM algorithm is proposed in this paper in order to account for the presence of both input and output queues in the switch.
- The weighted sum of input and output queue lengths is used as the congestion measure instead of just the output queue length.
- The average backlog in the switch is significantly reduced in the low speedup region by using our modified algorithm as compared to RED without this modification.

II. LITERATURE SURVEY

We describe a new active queue management scheme, Random Exponential Marking (REM), that aims to achieve both high utilization and negligible loss and delay in a simple and scalable manner. The key idea is to decouple the congestion measure from performance measures such as loss, queue length or delay. While the congestion measure indicates excess demand for bandwidth and must track the number of users, performance measures should be stabilized around their targets independently of the number of users. We explain the design rationale behind REM and present simulation results of its performance in wireline and wireless networks.
REF: S. Athuraliya, V. H. Li, S. H. Low, and Q. Yin, "REM: Active Queue Management," IEEE Network, vol. 15, no. 3, pp. 48-53, May 2001.

Architectures based on a non-blocking fabric, such as a crosspoint switch, are attractive for use in high-speed LAN switches, IP routers, and ATM switches. These fabrics, coupled with memory bandwidth limitations, dictate that queues be placed at the input of the switch. But it is well known that input queueing can lead to low throughput and does not allow the control of latency through the switch. This is in contrast to output queueing, which maximizes throughput and permits the accurate control of packet latency through scheduling.
We ask the question: can a switch with combined input and output queueing be designed to behave identically to an output-queued switch? In this paper, we prove that if the switch uses virtual output queueing and has an internal speedup of just four, it is possible for it to behave identically to an output-queued switch, regardless of the nature of the arriving traffic. Our proof is based on a novel scheduling algorithm, known as Most Urgent Cell First. This result makes possible switches that perform as if they were output-queued, yet use memories that run more slowly.
REF: B. Prabhakar and N. McKeown, "On the Speedup Required for Combined Input and Output Queued Switching," Automatica, vol. 35, pp. 1909-1920, December 1999.

The RED active queue management algorithm allows network operators to simultaneously achieve high throughput and low average delay. However, the resulting average queue length is quite sensitive to the level of congestion and to the RED parameter settings, and is therefore not predictable in advance. Delay being a major component of the quality of service delivered to their customers, network operators would naturally like to have a rough a priori estimate of the average delays in their congested routers; to achieve such predictable average delays with RED would require constant tuning of the parameters to adjust to current traffic conditions. Our goal in this paper is to solve this problem with minimal changes to the overall RED algorithm. To do so, we revisit the Adaptive RED proposal of Feng et al. from 1997. We make several algorithmic modifications to this proposal, while leaving the basic idea intact, and then evaluate its performance using simulation. We find that this revised version of Adaptive RED, which can be implemented as a simple extension within RED routers, removes the sensitivity to the parameters that affect RED's performance and can reliably achieve a specified target average queue length in a wide variety of traffic scenarios. Based on extensive simulations, we believe that Adaptive RED is sufficiently robust for deployment in routers.
REF: S. Floyd, R. Gummadi, and S. Shenker, "Adaptive RED: An Algorithm for Increasing the Robustness of RED's Active Queue Management," Technical Report, ICSI, August 1, 2001.

This paper describes a mechanism we call SRED (Stabilized Random Early Drop). Like RED (Random Early Detection), SRED pre-emptively discards packets with a load-dependent probability when a buffer in a router in the Internet or an intranet seems congested. SRED has an additional feature that, over a wide range of load levels, helps it stabilize its buffer occupation at a level independent of the number of active connections. SRED does this by estimating the number of active connections or flows. This estimate is obtained without collecting or analyzing state information on individual flows. The same mechanism can be used to identify flows that may be misbehaving, i.e. are taking more than their fair share of bandwidth. Since the mechanism is statistical in nature, the next step must be to collect state information on the candidates for misbehaving and to analyze that information. We show that candidate flows thus identified indeed have a high posterior probability of taking a larger than average amount of bandwidth.
REF: T. Ott, T. Lakshman, and L. Wong, "SRED: Stabilized RED," Proc. IEEE INFOCOM, 1999.

III. SYSTEM ANALYSIS

EXISTING SYSTEM

AQM refers to a class of algorithms designed to provide improved queuing mechanisms for routers.
These schemes are called active because they dynamically signal congestion to sources even before the queue overflows, either explicitly, by marking packets (e.g. Explicit Congestion Notification), or implicitly, by dropping packets. There are two design considerations in any AQM: first, how congestion in the network is measured, and second, how this measure is used to compute the probability of dropping packets. Queue-based AQMs couple the congestion notification rate to the queue size. The AQMs currently employed on the Internet, such as Droptail and RED, belong to this category; another example of a queue-based AQM is BLUE. The drawback of this approach is that a backlog of packets is inherently necessitated by the control mechanism, as congestion is observed only when the queue length is positive. This creates unnecessary delay and jitter. Flow-based AQMs, on the other hand, determine congestion and take action based on the packet arrival rate; some examples are REM and GREEN. For such schemes, the target utilization can be achieved irrespective of backlog.

DISADVANTAGES OF EXISTING SYSTEM
- Simple tail-drop queue management was shown to interact badly with TCP's congestion control mechanisms and to lead to poor performance.
- Congested links have a large queue size and a high loss rate.

PROPOSED SYSTEM

The modification we propose is to use the weighted sum of input and output queue lengths as the congestion measure. To be more specific, consider the RED AQM formula for the average backlog. The instantaneous queue length in this formula is replaced by the weighted sum of qoutput and qinput, where qoutput is the length of the output queue and qinput is the sum of the lengths of all VOQs corresponding to that output. In this way, the input queue length also contributes to the probability of dropping packets. The dropping itself is done only from the output queue, as the AQM is applied only to the output queue.

ADVANTAGES OF PROPOSED SYSTEM
- This modification reduces the average backlog in the switch significantly when the speedup is low.

IV. SYSTEM REQUIREMENT

HARDWARE REQUIREMENTS
Processor: Intel Pentium III or later
Main Memory: 256 MB
Cache Memory: 512 KB
Keyboard: 108 keys
Monitor: 17" color monitor
Mouse: optical mouse
Hard Disk: 160 GB

SOFTWARE REQUIREMENTS
Front End: NS2
Operating System: Linux

V. SYSTEM ARCHITECTURE

VI. PROBLEM ANALYSIS

QUEUE ARCHITECTURE

In output queuing (OQ) packet switch architectures, packets arriving from input lines are not queued at input interfaces; rather, they cross the switching fabric and join a queue at the switch output interface before being forwarded to the next node (see Fig. 1.a). In order to avoid contention, either within the switching fabric or in the access to the output queue, the internal speed of an OQ switch, as well as the access rate of the memory at each output interface, must be equal to the sum of all input line rates (i.e., they must equal the aggregate switch bandwidth).
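Before contrasting output queuing with input queuing, it is worth making the congestion measure proposed in Section III concrete. The following is a minimal C++ sketch of how the weighted sum of output-queue and input-side (VOQ) occupancies could drive RED's average-queue update and drop decision; the class layout, variable names and the weighting parameter w are illustrative assumptions and are not taken from the paper or from ns-2.

    #include <cstdlib>

    // Minimal sketch (not ns-2 code): a RED drop decision driven by a weighted
    // sum of the output-queue and input-side (VOQ) backlog, as proposed above.
    class ModifiedRed {
    public:
        ModifiedRed(double w, double wq, double minTh, double maxTh, double maxP)
            : w_(w), wq_(wq), minTh_(minTh), maxTh_(maxTh), maxP_(maxP), avg_(0.0) {}

        // qOutput: length of the output queue the AQM is attached to.
        // qInput : sum of the lengths of all VOQs destined to that output.
        // Returns true if the arriving packet should be dropped (from the output queue).
        bool onPacketArrival(int qOutput, int qInput) {
            // Weighted congestion measure replacing the instantaneous queue length.
            double measure = qOutput + w_ * qInput;

            // Standard RED exponentially weighted moving average, applied to the measure.
            avg_ = (1.0 - wq_) * avg_ + wq_ * measure;

            // Standard RED drop-probability profile, evaluated on the averaged measure.
            double p;
            if (avg_ < minTh_)      p = 0.0;
            else if (avg_ < maxTh_) p = maxP_ * (avg_ - minTh_) / (maxTh_ - minTh_);
            else                    p = 1.0;

            // Bernoulli drop decision; a fuller implementation would also keep
            // RED's inter-drop count to space the drops more evenly.
            return (static_cast<double>(std::rand()) / RAND_MAX) < p;
        }

    private:
        double w_;                    // weight given to the input-side (VOQ) backlog
        double wq_;                   // RED averaging weight
        double minTh_, maxTh_, maxP_; // RED thresholds and maximum drop probability
        double avg_;                  // averaged congestion measure
    };

Setting w = 0 recovers ordinary RED on the output queue alone; increasing w makes the AQM react to input-side backlog earlier, which is the mechanism behind the reduced average backlog at low speedup and the slightly higher loss rate reported in the simulations.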
In contrast, in input queuing (IQ) packet switch architectures, packets arriving from input lines are queued at input interfaces. They are then extracted from the input queues to cross the switching fabric and be forwarded to the next node, according to some algorithm that avoids contention within the switching fabric and at the input and output interfaces (no more than one packet at a time can be extracted from each input and reach each output). Note that the implementation of this algorithm requires coordination among input interfaces, and hence the exchange of control information, which can either contend with user data for access to the switching fabric or use a separate data path. The internal speed of an IQ switch need not be larger than the highest line rate (see Fig. 1.b), possibly incremented to account for extra packet headers added internally to the switch. From this preliminary description of the IQ and OQ approaches, it is immediately evident that the intelligent packet scheduling of IQ compensates for the absence of the internal speedup on which OQ relies. However, the comparison of the IQ and OQ approaches deserves deeper investigation.

If FIFO (first in, first out) packet queues are used, IQ architectures suffer from the head-of-line (HoL) blocking phenomenon, which severely limits their performance. In 1987, Karol, Hluchyj and Morgan proved with an asymptotic analysis that, under source- and destination-uniform traffic, HoL blocking limits the maximum throughput of IQ switches with an arbitrarily large number of input/output lines to 2 − √2 ≈ 0.586. This result is quite disappointing, especially considering that, without any buffering in the switch (except for the obvious need to store one packet at the switch ingress and egress), i.e., by discarding packets that cannot be immediately switched, the throughput limit grows to 1 − 1/e ≈ 0.632. The HoL blocking phenomenon can be circumvented with more complex structures of the input queues, for example by separating (either logically or physically) at each input the queues of packets referring to different outputs (see Fig. 1.c), as we shall see. Moreover, OQ offers the possibility of rather easily controlling the delay of packets within the switch (with algorithms such as fair queuing), since the time needed to cross the switching fabric is known and all packets following a given route can be found within the same (output) queue. As a result, packet switch designs traditionally adopted OQ architectures, especially in the case of ATM (Asynchronous Transfer Mode) switches, because of their greater effectiveness and because the requirement for an internal speedup was not critical. Recent developments in networking, which have produced a dramatic growth in line rates, have however made the internal speed requirements of OQ switches difficult to meet. This has generated great interest in IQ switch architectures, thus opening a lively new research line in switching.

A few implementations of IQ switches exist, either as prototypes or as commercial products. Probably the most famous IQ cell-switch prototype is the Tiny Tera, implemented at Stanford University in cooperation with Texas Instruments. The latest version of Tiny Tera can reach an aggregate bandwidth of 1 Tbps. A similar architecture is implemented in the recent line of Cisco Gigabit routers, the Cisco 12000 series, which adopts a core IQ switching fabric reaching an aggregate bandwidth of 60 Gbps.
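To make the virtual-output-queue organisation described above concrete, here is a minimal C++ sketch of per-input VOQs; the types and names are illustrative assumptions, and a real switch would run a matching algorithm (such as those listed next) on top of this structure.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <deque>

    // Minimal sketch of virtual output queues (VOQs) for an N x N input-queued
    // switch: each input keeps one FIFO per output, so a packet bound for a busy
    // output no longer blocks packets behind it that are bound for idle outputs
    // (the head-of-line blocking problem discussed above).
    constexpr std::size_t N = 4;

    struct Packet {
        std::size_t dstPort;   // output port this packet must reach
        std::uint32_t id;
    };

    class VoqInput {
    public:
        // Arriving packets are sorted by destination instead of sharing one FIFO.
        void enqueue(const Packet& p) { voq_[p.dstPort].push_back(p); }

        bool hasPacketFor(std::size_t output) const { return !voq_[output].empty(); }

        // Called by the fabric scheduler once it has matched this input to 'output'.
        Packet dequeueFor(std::size_t output) {
            Packet p = voq_[output].front();
            voq_[output].pop_front();
            return p;
        }

    private:
        std::array<std::deque<Packet>, N> voq_;  // one logical queue per output
    };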
Other examples of commercial IQ switching fabrics are the Lucent Cajun switch and the Yago PowerCast switch. A number of novel algorithms for IQ switch architectures have recently appeared in the technical literature, aiming at overcoming the HoL blocking performance limitations with minimal complexity. In this paper we discuss and compare a number of such proposals in the context of packet switches operating on fixed-size data units. The term cell switches will be used, although the discussed switch architectures need not refer to ATM. The considered IQ cell-switch architecture proposals are: iSLIP, iLQF, iOCF, iLPF, 2DRR, RC, MUCS, RPA, and iZIP, a novel proposal.

MULTICAST ROUND ROBIN SCHEDULING ALGORITHM

With existing three-phase (request-grant-accept) multicast scheduling algorithms that employ the k-MC-VOQ architecture, all the queues holding cells need to issue requests to the destined outputs during the request phase, which means that