
Network Working Group                                            J. Dunn
Request for Comments: 3116                                     C. Martin
Category: Informational                                        ANC, Inc.
                                                               June 2001

                    Methodology for ATM Benchmarking
Status of this Memo
This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2001). All Rights Reserved.
Abstract
This document discusses and defines a number of tests that may be
used to describe the performance characteristics of ATM (Asynchronous
Transfer Mode) based switching devices. In addition to defining the
tests this document also describes specific formats for reporting the
results of the tests.
This memo is a product of the Benchmarking Methodology Working Group
(BMWG) of the Internet Engineering Task Force (IETF).
Table of Contents
1. Introduction
2. Background
2.1. Test Device Requirements
2.2. Systems Under Test (SUTs)
2.3. Test Result Evaluation
2.4. Requirements
2.5. Test Configurations for SONET
2.6. SUT Configuration
2.7. Frame Formats
2.8. Frame Sizes
2.9. Verifying Received IP PDUs
2.10. Modifiers
2.10.1. Management IP PDUs
2.10.2. Routing Update IP PDUs
2.11. Filters
2.11.1. Filter Addresses
2.12. Protocol Addresses
2.13. Route Set Up
2.14. Bidirectional Traffic
2.15. Single Stream Path
2.16. Multi-port
2.17. Multiple Protocols
2.18. Multiple IP PDU Sizes
2.19. Testing Beyond a Single SUT
2.20. Maximum IP PDU Rate
2.21. Bursty Traffic
2.22. Trial Description
2.23. Trial Duration
2.24. Address Resolution
2.25. Synchronized Payload Bit Pattern
2.26. Burst Traffic Descriptors
3. Performance Metrics
3.1. Physical Layer-SONET
3.1.1. Pointer Movements
3.1.1.1. Pointer Movement Propagation
3.1.1.2. Cell Loss due to Pointer Movement
3.1.1.3. IP Packet Loss due to Pointer Movement
3.1.2. Transport Overhead (TOH) Error Count
3.1.2.1. TOH Error Propagation
3.1.2.2. Cell Loss due to TOH Error
3.1.2.3. IP Packet Loss due to TOH Error
3.1.3. Path Overhead (POH) Error Count
3.1.3.1. POH Error Propagation
3.1.3.2. Cell Loss due to POH Error
3.1.3.3. IP Packet Loss due to POH Error
3.2. ATM Layer
3.2.1. Two-Point Cell Delay Variation (CDV)
3.2.1.1. Test Setup
3.2.1.2. Two-point CDV/Steady Load/One VCC
3.2.1.3. Two-point CDV/Steady Load/Twelve VCCs
3.2.1.4. Two-point CDV/Steady Load/Maximum VCCs
3.2.1.5. Two-point CDV/Bursty VBR Load/One VCC
3.2.1.6. Two-point CDV/Bursty VBR Load/Twelve VCCs
3.2.1.7. Two-point CDV/Bursty VBR Load/Maximum VCCs
3.2.1.8. Two-point CDV/Mixed Load/Three VCCs
3.2.1.9. Two-point CDV/Mixed Load/Twelve VCCs
3.2.1.10. Two-point CDV/Mixed Load/Maximum VCCs
3.2.2. Cell Error Ratio (CER)
3.2.2.1. Test Setup
3.2.2.2. CER/Steady Load/One VCC
3.2.2.3. CER/Steady Load/Twelve VCCs
3.2.2.4. CER/Steady Load/Maximum VCCs
3.2.2.5. CER/Bursty VBR Load/One VCC
3.2.2.6. CER/Bursty VBR Load/Twelve VCCs
3.2.2.7. CER/Bursty VBR Load/Maximum VCCs
3.2.3. Cell Loss Ratio (CLR)
3.2.3.1. CLR/Steady Load/One VCC
3.2.3.2. CLR/Steady Load/Twelve VCCs
3.2.3.3. CLR/Steady Load/Maximum VCCs
3.2.3.4. CLR/Bursty VBR Load/One VCC
3.2.3.5. CLR/Bursty VBR Load/Twelve VCCs
3.2.3.6. CLR/Bursty VBR Load/Maximum VCCs
3.2.4. Cell Misinsertion Rate (CMR)
3.2.4.1. CMR/Steady Load/One VCC
3.2.4.2. CMR/Steady Load/Twelve VCCs
3.2.4.3. CMR/Steady Load/Maximum VCCs
3.2.4.4. CMR/Bursty VBR Load/One VCC
3.2.4.5. CMR/Bursty VBR Load/Twelve VCCs
3.2.4.6. CMR/Bursty VBR Load/Maximum VCCs
3.2.5. CRC Error Ratio (CRC-ER)
3.2.5.1. CRC-ER/Steady Load/One VCC
3.2.5.2. CRC-ER/Steady Load/Twelve VCCs
3.2.5.3. CRC-ER/Steady Load/Maximum VCCs
3.2.5.4. CRC-ER/Bursty VBR Load/One VCC
3.2.5.5. CRC-ER/Bursty VBR Load/Twelve VCCs
3.2.5.6. CRC-ER/Bursty VBR Load/Maximum VCCs
3.2.5.7. CRC-ER/Bursty UBR Load/One VCC
3.2.5.8. CRC-ER/Bursty UBR Load/Twelve VCCs
3.2.5.9. CRC-ER/Bursty UBR Load/Maximum VCCs
3.2.5.10. CRC-ER/Bursty Mixed Load/Three VCCs
3.2.5.11. CRC-ER/Bursty Mixed Load/Twelve VCCs
3.2.5.12. CRC-ER/Bursty Mixed Load/Maximum VCCs
3.2.6. Cell Transfer Delay (CTD)
3.2.6.1. Test Setup
3.2.6.2. CTD/Steady Load/One VCC
3.2.6.3. CTD/Steady Load/Twelve VCCs
3.2.6.4. CTD/Steady Load/Maximum VCCs
3.2.6.5. CTD/Bursty VBR Load/One VCC
3.2.6.6. CTD/Bursty VBR Load/Twelve VCCs
3.2.6.7. CTD/Bursty VBR Load/Maximum VCCs
3.2.6.8. CTD/Bursty UBR Load/One VCC
3.2.6.9. CTD/Bursty UBR Load/Twelve VCCs
3.2.6.10. CTD/Bursty UBR Load/Maximum VCCs
3.2.6.11. CTD/Mixed Load/Three VCCs
3.2.6.12. CTD/Mixed Load/Twelve VCCs
3.2.6.13. CTD/Mixed Load/Maximum VCCs
3.3. ATM Adaptation Layer (AAL) Type 5 (AAL5)
3.3.1. IP Packet Loss due to AAL5 Re-assembly Errors
3.3.2. AAL5 Re-assembly Time
3.3.3. AAL5 CRC Error Ratio
3.3.3.1. Test Setup
3.3.3.2. AAL5-CRC-ER/Steady Load/One VCC
3.3.3.3. AAL5-CRC-ER/Steady Load/Twelve VCCs
3.3.3.4. AAL5-CRC-ER/Steady Load/Maximum VCCs
3.3.3.5. AAL5-CRC-ER/Bursty VBR Load/One VCC
3.3.3.6. AAL5-CRC-ER/Bursty VBR Load/Twelve VCCs
3.3.3.7. AAL5-CRC-ER/Bursty VBR Load/Maximum VCCs
3.3.3.8. AAL5-CRC-ER/Mixed Load/Three VCCs
3.3.3.9. AAL5-CRC-ER/Mixed Load/Twelve VCCs
3.3.3.10. AAL5-CRC-ER/Mixed Load/Maximum VCCs
3.4. ATM Service: Signaling
3.4.1. CAC Denial Time and Connection Establishment Time
3.4.2. Connection Teardown Time
3.4.3. Crankback Time
3.4.4. Route Update Response Time
3.5. ATM Service: ILMI
3.5.1. MIB Alignment Time
3.5.2. Address Registration Time
4. Security Considerations
5. Notices
6. References
7. Authors' Addresses
APPENDIX A
APPENDIX B
APPENDIX C
Full Copyright Statement
1. Introduction
This document defines a specific set of tests that vendors can use to
measure and report the performance characteristics of ATM network
devices. The results of these tests will provide the user comparable
data from different vendors with which to evaluate these devices.
The methods defined in this memo are based on RFC2544 "Benchmarking
Methodology for Network Interconnect Devices".
The document "Terminology for ATM Benchmarking" (RFC2761), defines
many of the terms that are used in this document. The terminology
document should be consulted before attempting to make use of this
document.
The BMWG produces two major classes of documents: Benchmarking
Terminology documents and Benchmarking Methodology documents. The
Terminology documents present the benchmarks and other related terms.
The Methodology documents define the procedures required to collect
the benchmarks cited in the corresponding Terminology documents.
2. Background
2.1. Test Device Requirements
This document is based on the requirement that a test device is
available. The test device can either be off the shelf or can be
easily built with current technologies. The test device must have a
transmitting and receiving port for the interface type under test.
The test device must be configured to transmit test PDUs and to
analyze received PDUs. The test device should be able to transmit
and analyze received data at the same time.
2.2. Systems Under Test (SUTs)
There are a number of tests described in this document that do not
apply to each SUT. Vendors should perform all of the tests that can
be supported by a specific product type. It will take some time to
perform all of the recommended tests under all of the recommended
conditions.
2.3. Test Result Evaluation
Performing all of the tests in this document will result in a great
deal of data. The applicability of this data to the evaluation of a
particular SUT will depend on its expected use and the configuration
of the network in which it will be used. For example, the time
required by a switch to provide ILMI services will not be a pertinent
measurement in a network that does not use the ILMI protocol, such as
an ATM WAN. Evaluating data relevant to a particular network
installation may require considerable experience, which may not be
readily available. Finally, test selection and evaluation of test
results must be done with an understanding of generally accepted
testing practices regarding repeatability, variance and the
statistical significance of a small number of trials.
2.4. Requirements
In this document, the words used to define the significance of each
particular requirement are capitalized. These words are:
* "MUST" This word, or the words "REQUIRED" and "SHALL" mean that
the item is an absolute requirement of the specification.
* "SHOULD" This word or the adjective "RECOMMENDED" means that there
may exist valid reasons in particular circumstances to ignore this
item, but the full implications should be understood and the case
carefully weighed before choosing a different course.
* "MAY" This word or the adjective "OPTIONAL" means that this item
is truly optional. One vendor may choose to include the item
because a particular marketplace requires it or because it
enhances the product, for example; another vendor may omit the
same item.
An implementation is not compliant if it fails to satisfy one or more
of the MUST requirements for the protocols it implements. An
implementation that satisfies all the MUST and all the SHOULD
requirements for its protocols is said to be "unconditionally
compliant"; one that satisfies all the MUST requirements but not all
the SHOULD requirements for its protocols is said to be
"conditionally compliant".
2.5. Test Configurations for SONET
The test device can be connected to the SUT in a variety of
configurations depending on the test point. The following
configurations will be used for the tests described in this document.
1) Uni-directional connection: The test device's transmit port
(labeled Tx) is connected to the SUT's receive port (labeled Rx).
The SUT's transmit port is connected to the test device's receive
port (see Figure 1). In this configuration, the test device can
verify that all transmitted packets are acknowledged correctly.
Note that this configuration does not verify internal system
functions, but verifies one port on the SUT.

      +-------------+          +-------------+
      |             |          |             |
      |   Test    Tx|--------->|Rx           |
      |   Device  Rx|<---------|Tx    SUT    |
      |             |          |             |
      +-------------+          +-------------+

                     Figure 1
2) Bi-directional connection: The test device's first transmit port is
connected to the SUT's first receive port. The SUT's first transmit
port is connected to the test device's first receive port. The
test device's second transmit port is connected to the SUT's second
receive port. The SUT's second transmit port is connected to the
test device's second receive port (see Figure 2). In this
configuration, the test device can determine if all of the
transmitted packets were received and forwarded correctly. Note
that this configuration does verify internal system functions,
since it verifies two ports on the SUT.

      +-------------+          +-------------+
      |   Test   Tx1|--------->|Rx1          |
      |   Device Rx1|<---------|Tx1   SUT    |
      |          Tx2|--------->|Rx2          |
      |          Rx2|<---------|Tx2          |
      +-------------+          +-------------+

                     Figure 2
3) Uni-directional passthrough connection: The test device's first
transmit port is connected to the SUT1 receive port. The SUT1
transmit port is connected to the test device's first receive port.
The test device's second transmit port is connected to the SUT2
receive port. The SUT2 transmit port is connected to the test
device's second receive port (see Figure 3). In this
configuration, the test device can determine if all of the packets
transmitted by SUT1 were correctly acknowledged by SUT2. Note
that this configuration does not verify internal system functions,
but verifies one port on each SUT.

   +-------------+      +--------------+      +-------------+
   |           Rx|<-----|Tx1        Tx2|----->|Rx           |
   |   SUT1    Tx|----->|Rx1   Test Rx2|<-----|Tx    SUT2   |
   |             |      |      Device  |      |             |
   +-------------+      +--------------+      +-------------+

                            Figure 3
2.6. SUT Configuration
The SUT MUST be configured as described in the SUT user's guide.
Specifically, it is expected that all of the supported protocols will
be configured and enabled. It is expected that all of the tests will
be run without changing the configuration or setup of the SUT in any
way other than that required to do the specific test. For example,
it is not acceptable to disable all but one transport protocol when
testing the throughput of that protocol. If PNNI or BISUP is used to
initiate switched virtual connections (SVCs), the SUT configuration
SHOULD include the normally recommended routing update intervals and
keep alive frequency. The specific version of the software and the
exact SUT configuration, including what functions are disabled and
used during the tests MUST be included as part of the report of the
results.
2.7. Frame formats
The formats of the test IP PDUs to use for TCP/IP and UDP/IP over ATM
are shown in Appendix C: Test Frame Formats. Note that these IP PDUs
are in accordance with RFC2225. These exact IP PDU formats SHOULD
be used in the tests described in this document for this
protocol/media combination. These IP PDUs will be used as a template
for testing other protocol/media combinations. The specific formats
that are used to define the test IP PDUs for a particular test series
MUST be included in the report of the results.
2.8. Frame sizes
All of the described tests SHOULD be performed using a number of IP
PDU sizes. Specifically, the sizes SHOULD include the maximum and
minimum legitimate sizes for the protocol under test on the media
under test and enough sizes in between to be able to get a full
characterization of the SUT performance. Except where noted, at
least five IP PDU sizes SHOULD be tested for each test condition.
Theoretically the minimum size UDP Echo request IP PDU would consist
of an IP header (minimum length 20 octets), a UDP header (8 octets),
AAL5 trailer (8 octets) and an LLC/SNAP code point header (8 octets);
therefore, the minimum size PDU will fit into one ATM cell. The
theoretical maximum IP PDU size is determined by the size of the
length field in the IP header. In almost all cases the actual
maximum and minimum sizes are determined by the limitations of the
media. In the case of ATM, the maximum IP PDU size SHOULD be the ATM
MTU size, which is 9180 octets.
In theory it would be ideal to distribute the IP PDU sizes in a way
that would evenly distribute the theoretical IP PDU rates. These
recommendations incorporate this theory but specify IP PDU sizes,
which are easy to understand and remember. In addition, many of the
same IP PDU sizes are specified on each of the media types to allow
for easy performance comparisons.
Note: The inclusion of an unrealistically small IP PDU size on some
of the media types (i.e., with little or no space for data) is to
help characterize the per-IP PDU processing overhead of the SUT.
The IP PDU sizes that will be used are:
44, 64, 128, 256, 1024, 1518, 2048, 4472, 9180
The minimum size IP PDU for UDP on ATM is 44 octets; this minimum of
44 is recommended to allow direct comparison to token ring
performance. The IP PDU size of 4472 is recommended instead of the
theoretical FDDI maximum size of 4500 octets in order to permit the
same type of comparison. An IP (i.e., not UDP) IP PDU may be used in
addition if a higher data rate is desired, in which case the minimum
IP PDU size is 28 octets.
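The encapsulation overhead described above fixes how many ATM cells
each IP PDU occupies. As a minimal sketch (illustrative Python, not
part of the methodology), the following computes the cell count for
each recommended PDU size from the 8-octet LLC/SNAP header and 8-octet
AAL5 trailer figures given earlier; note that the theoretical minimum
discussed above counts its 44 octets as the complete encapsulated unit.

   import math

   LLC_SNAP = 8       # LLC/SNAP code point header, octets (section 2.8)
   AAL5_TRAILER = 8   # AAL5 trailer, octets
   CELL_PAYLOAD = 48  # ATM cell information field, octets

   def cells_for_ip_pdu(ip_pdu_size):
       # ATM cells needed to carry one LLC/SNAP-encapsulated IP PDU in
       # an AAL5 frame; the frame is padded to a whole number of cells.
       return math.ceil((ip_pdu_size + LLC_SNAP + AAL5_TRAILER)
                        / CELL_PAYLOAD)

   for size in (44, 64, 128, 256, 1024, 1518, 2048, 4472, 9180):
       print("%5d octet IP PDU -> %4d cells" % (size, cells_for_ip_pdu(size)))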
2.9. Verifying received IP PDUs
The test equipment SHOULD discard any IP PDUs received during a test
run that are not actual forwarded test IP PDUs. For example, keep-
alive and routing update IP PDUs SHOULD NOT be included in the count
of received IP PDUs. In any case, the test equipment SHOULD verify
the length of the received IP PDUs and check that they match the
expected length.
Preferably, the test equipment SHOULD include sequence numbers in the
transmitted IP PDUs and check for these numbers on the received IP
PDUs. If this is done, the reported results SHOULD include, in
addition to the number of IP PDUs dropped, the number of IP PDUs that
were received out of order, the number of duplicate IP PDUs received
and the number of gaps in the received IP PDU numbering sequence.
This functionality is required for some of the described tests.
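As a concrete illustration of the bookkeeping this implies, the sketch
below (an example only, not prescribed by this memo) classifies a
captured list of sequence numbers into the quantities named above:
dropped IP PDUs, duplicates, out-of-order arrivals, and gaps in the
received numbering sequence.

   def classify_received(seq_numbers, expected_count):
       # Classify received sequence numbers per section 2.9; a sketch
       # only, since real test equipment works on a live stream.
       seen = set()
       duplicates = 0
       out_of_order = 0
       highest = -1
       for seq in seq_numbers:
           if seq in seen:
               duplicates += 1
               continue
           seen.add(seq)
           if seq < highest:
               out_of_order += 1
           highest = max(highest, seq)
       dropped = expected_count - len(seen)
       # A gap is a maximal run of consecutive missing sequence numbers.
       missing = sorted(set(range(expected_count)) - seen)
       gaps = sum(1 for i, m in enumerate(missing)
                  if i == 0 or m != missing[i - 1] + 1)
       return {"dropped": dropped, "duplicates": duplicates,
               "out_of_order": out_of_order, "gaps": gaps}

   print(classify_received([0, 1, 3, 2, 5, 5, 8], expected_count=10))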
2.10. Modifiers
It is useful to characterize the SUT's performance under a number of
conditions. Some of these conditions are noted below. The reported
results SHOULD include as many of these conditions as the test
equipment is able to generate. The suite of tests SHOULD be run
first without any modifying conditions, then repeated under each of
the modifying conditions separately. To preserve the ability to
compare the results of these tests, any IP PDUs that are required to
generate the modifying conditions (excluding management queries) will
be included in the same data stream as that of the normal test IP
PDUs and in place of one of the test IP PDUs. They MUST NOT be
supplied to the SUT on a separate network port.
2.10.1. Management IP PDUs
Most ATM data networks now make use of ILMI, signaling and OAM. In
many environments, there can be a number of management stations
sending queries to the same SUT at the same time.
Management queries MUST be made in accordance with the applicable
specification, e.g., ILMI sysUpTime getNext requests will be made in
accordance with ILMI 4.0. The response to the query MUST be verified
by the test equipment. Note that, for each management protocol in
use, this requires that the test equipment implement the associated
protocol state machine. One example of the specific query IP PDU
(ICMP) that should be used is shown in Appendix C.
2.10.2. Routing update IP PDUs
The processing of PNNI updates could have a significant impact on the
ability of a switch to forward cells and complete calls. If PNNI is
configured on the SUT, one routing update MUST be transmitted before
the first test IP PDU is transmitted during the trial. The test
SHOULD verify that the SUT has properly processed the routing update.
PNNI routing update IP PDUs SHOULD be sent at the rate specified in
Appendix B. Appendix C defines one routing update PDU for the TCP/IP
over ATM example. The routing updates are designed to change the
routing on a number of networks that are not involved in the
forwarding of the test data. The first IP PDU sets the routing table
state to "A", the second one changes the state to "B". The IP PDUs
MUST be alternated during the trial. The test SHOULD verify that the
SUT has properly processed the routing update.
2.11. Filters
Filters are added to switches to selectively inhibit the forwarding
of cells that would normally be forwarded. This is usually done to
implement security controls on the data that is accepted between one
area and another. Different products have different capabilities to
implement filters. Filters are applicable only if the SUT supports
the filtering feature.
The SUT SHOULD be first configured to add one filter condition and
the tests performed. This filter SHOULD permit the forwarding of the
test data stream. This filter SHOULD be of the form as described in
the SUT User's Guide.
The SUT SHOULD be then reconfigured to implement a total of 25
filters. The first 24 of these filters SHOULD be based on 24
separate ATM NSAP Network Prefix addresses.
The 24 ATM NSAP Network Prefix addresses SHOULD NOT be any that are
represented in the test data stream. The last filter SHOULD permit
the forwarding of the test data stream. By "first" and "last" we
mean to ensure that in the second case, 25 conditions must be checked
before the IP over ATM data PDUs will match the conditions that
permit the forwarding of the IP PDU. Of course, if the SUT reorders
the filters or does not use a linear scan of the filter rules the
effect of the sequence in which the filters are input is properly
lost.
The exact filter configuration command lines used SHOULD be included
with the report of the results.
2.11.1. Filter Addresses
Two sets of filter addresses are required, one for the single filter
case and one for the 25 filter case.
The single filter case should permit traffic from ATM address [Switch
Network Prefix] 00 00 00 00 00 01 00 to ATM address [Switch Network
Prefix] 00 00 00 00 00 02 00 and deny all other traffic. Note that
the 13 octet Switch Network Prefix MUST be configured before this
test can be run.
The 25 filter case should follow the following sequence.
deny [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 03 00
deny [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 04 00
deny [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 05 00
...
deny [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 0C 00
deny [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 0D 00
allow [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 02 00
deny [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 0E 00
deny [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 0F 00
...
deny [Switch Network Prefix] 00 00 00 00 00 01 00
to [Switch Network Prefix] 00 00 00 00 00 18 00
deny all else
All previous filter conditions should be cleared from the switch
before this sequence is entered. The sequence is selected to test to
see if the switch sorts the filter conditions or accepts them in the
order that they were entered. Both of these procedures will result
in a greater impact on performance than will some form of hash
coding.
2.12. Protocol addresses
It is easier to implement these tests using a single logical stream
of data, with one source ATM address and one destination ATM address;
for some conditions, like the filters described above, this is a
practical requirement. Networks in the real world are not limited to
single streams of data. The test suite SHOULD first be run with a single
ATM source and destination address pair. The tests SHOULD then be
repeated using random destination addresses. In the case of
testing single switches, the addresses SHOULD be random and uniformly
distributed over a range of 256 seven octet user parts. In the case
of testing multiple interconnected switches, the addresses SHOULD be
random and uniformly distributed over the 256 network prefixes, each
of which should support 256 seven octet user parts. The specific
address ranges to use for ATM are shown in Appendix A. IP to ATM
address mapping MUST be accomplished as described in RFC2225.
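A sketch of how a test device might draw the random destinations
described above; the textual address layout and the choice to vary only
the final octet are placeholders invented for illustration, since the
specific ranges to use are listed in Appendix A.

   import random

   def random_user_part(rng):
       # One of 256 seven-octet user parts; this placeholder varies
       # only the last octet (real ranges: Appendix A).
       return "00 00 00 00 00 00 %02X" % rng.randrange(256)

   def random_destination(network_prefixes, rng):
       # Multi-switch case: uniform over 256 network prefixes, each
       # supporting 256 user parts (section 2.12).
       return rng.choice(network_prefixes) + " " + random_user_part(rng)

   rng = random.Random(1)  # fixed seed so a trial can be reproduced
   prefixes = ["[prefix %02X]" % i for i in range(256)]  # placeholders
   print(random_destination(prefixes, rng))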
2.13. Route Set Up
It is not reasonable to expect that all of the routing information
necessary to forward the test stream, especially in the multiple
address case, will be set up manually. If PNNI and/or ILMI are running, at the
start of each trial a routing update MUST be sent to the SUT. This
routing update MUST include all of the ATM addresses that will be
required for the trial. This routing update will have to be repeated
at the interval required by PNNI or ILMI. An example of the format
and repetition interval of the update IP PDUs is given in Appendix B
(interval and size) and Appendix C (format).
2.14. Bidirectional traffic
Bidirectional performance tests SHOULD be run with the same data rate
being offered from each direction. The sum of the data rates should
not exceed the theoretical limit for the media.
2.15. Single stream path
The full suite of tests SHOULD be run with the appropriate modifiers
for a single receive and transmit port on the SUT. If the internal
design of the SUT has multiple distinct pathways, for example,
multiple interface cards each with multiple network ports, then all
possible permutations of pathways SHOULD be tested separately. If
multiple interconnected switches are tested, the test MUST specify
routes that allow only one path between source and destination ATM
addresses.
2.16. Multi-port
Many switch products provide several network ports on the same
interface module. Each port on an interface module SHOULD be
stimulated in an identical manner. Specifically, half of the ports
on each module SHOULD be receive ports and half SHOULD be transmit
ports. For example, if a SUT has two interface modules, each of which
has four ports, two ports on each interface module will be receive
ports and two will be transmit ports. Each receive port MUST be offered
the same data rate. The addresses in the input data streams SHOULD
be set so that an IP PDU will be directed to each of the transmit
ports in sequence. That is, all transmit ports will receive an
identical distribution of IP PDUs from a particular receive port.
Consider the following 6 port SUT:
             --------------
   ---------|Rx A      Tx X|--------
   ---------|Rx B      Tx Y|--------
   ---------|Rx C      Tx Z|--------
             --------------
The addressing of the data streams for each of the inputs SHOULD be:
stream sent to Rx A:
IP PDU to Tx X, IP PDU to Tx Y, IP PDU to Tx Z
stream sent to Rx B:
IP PDU to Tx X, IP PDU to Tx Y, IP PDU to Tx Z
stream sent to Rx C:
IP PDU to Tx X, IP PDU to Tx Y, IP PDU to Tx Z
Note: Each stream contains the same sequence of IP destination
addresses; therefore, each transmit port will receive 3 IP PDUs
simultaneously. This procedure ensures that the SUT will have to
process multiple IP PDUs addressed to the same transmit port
simultaneously.
The same configuration MAY be used to perform a bi-directional
multi-stream test. In this case all of the ports are considered both
receive and transmit ports. Each data stream MUST consist of IP PDUs
whose addresses correspond to the ATM addresses of all of the other
ports.
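The round-robin addressing rule above is simple to express in code; the
sketch below (illustrative only) builds the destination sequence for
each receive port so that every transmit port sees an identical
distribution of IP PDUs.

   from itertools import cycle, islice

   def stream_for_rx_port(tx_ports, pdus_per_stream):
       # Destination sequence for one receive port (section 2.16):
       # PDUs are addressed to each transmit port in turn.
       return list(islice(cycle(tx_ports), pdus_per_stream))

   tx_ports = ["Tx X", "Tx Y", "Tx Z"]
   for rx in ("Rx A", "Rx B", "Rx C"):
       print(rx, "->", stream_for_rx_port(tx_ports, 6))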
2.17. Multiple protocols
This document does not address the issue of testing the effects of a
mixed protocol environment other than to suggest that if such tests
are wanted then PDUs SHOULD be distributed between all of the test
protocols. The distribution MAY approximate the conditions on the
network in which the SUT would be used.
2.18. Multiple IP PDU sizes
This document does not address the issue of testing the effects of a
mixed IP PDU size environment other than to suggest that, if such
tests are required, then IP PDU size SHOULD be evenly distributed
among all of the PDU sizes listed in this document. The distribution
MAY approximate the conditions on the network in which the SUT would
be used.
2.19. Testing beyond a single SUT
In the performance testing of a single SUT, the paradigm can be
described as applying some input to a SUT and monitoring the output,
the results of which can be used to form a basis for characterizing
that device under those test conditions.
This model is useful when the test input and output are homogeneous
(e.g., 64-byte IP, AAL5 PDUs into the SUT; 64-byte IP, AAL5 PDUs
out).
By extending the single SUT test model, reasonable benchmarks
regarding multiple SUTs or heterogeneous environments may be
collected. In this extension, the single SUT is replaced by a system
of interconnected network SUTs. This test methodology would support
the benchmarking of a variety of device/media/service/protocol
combinations. For example, a configuration for a LAN-to-WAN-to-LAN
test might be:
(1) ATM UNI -> SUT 1 -> BISUP -> SUT 2 -> ATM UNI
Or an extended LAN configuration might be:
(2) ATM UNI -> SUT 1 -> PNNI Network -> SUT 2 -> ATM UNI
In both examples 1 and 2, end-to-end benchmarks of each system could
be empirically ascertained. Other behavior may be characterized
through the use of intermediate devices. In example 2, the
configuration may be used to give an indication of the effect of PNNI
routing on IP throughput.
Because multiple SUTs are treated as a single system, there are
limitations to this methodology. For instance, this methodology may
yield an aggregate benchmark for a tested system. That benchmark
alone, however, may not necessarily reflect asymmetries in behavior
between the SUTs, latencies introduced by other apparatus (e.g.,
CSUs/DSUs, switches), etc.
Further, care must be used when comparing benchmarks of different
systems by ensuring that the SUTs' features and the configuration of
the tested systems have the appropriate common denominators to allow
comparison.
2.20. Maximum IP PDU rate
The maximum IP PDU rate used when testing LAN connections SHOULD be
the listed theoretical maximum rate for the IP PDU size on the media.
The maximum IP PDU rate used when testing WAN connections SHOULD be
greater than the listed theoretical maximum
rate for the IP PDU size on that speed connection. The higher rate
for WAN tests is to compensate for the fact that some vendors employ
various forms of header compression.
A list of maximum IP PDU rates for LAN connections is included in
Appendix B.
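For ATM over SONET the theoretical maximum rate follows from the usable
cell rate and the per-PDU cell count. The sketch below assumes an OC-3c
link; the 149.76 Mbit/s cell stream capacity after SONET overhead and
the 53-octet cell are standard SONET/ATM figures rather than values
taken from this memo.

   import math

   ATM_PAYLOAD_BPS = 149_760_000  # OC-3c capacity left for cells (assumed)
   CELL_BITS = 53 * 8             # whole ATM cell, header included

   def max_pdu_rate(ip_pdu_size):
       # Theoretical maximum IP PDU rate: usable cell rate divided by
       # the cells each LLC/SNAP + AAL5 encapsulated PDU occupies.
       cells = math.ceil((ip_pdu_size + 8 + 8) / 48)
       return (ATM_PAYLOAD_BPS / CELL_BITS) / cells

   for size in (44, 64, 1518, 9180):
       print("%5d octets: %10.0f PDU/s" % (size, max_pdu_rate(size)))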
2.21. Bursty traffic
It is convenient to measure the SUT performance under steady state
load; however, this is an unrealistic way to gauge the functioning of
a SUT. Actual network traffic normally consists of bursts of IP
PDUs.
Some of the tests described below SHOULD be performed with constant
bit rate traffic, bursty Unspecified Bit Rate (UBR) Best Effort
traffic [AF-TM4.1], and bursty Variable Bit Rate Non-real Time
(VBR-nrt) Best Effort traffic [AF-TM4.1]. The IP PDUs within a burst
are transmitted with the
minimum legitimate inter-IP PDU gap.
The objective of the test is to determine the minimum interval
between bursts that the SUT can process with no IP PDU loss. Tests
SHOULD be run with burst sizes of 10% of Maximum Burst Size (MBS),
20% of MBS, 50% of MBS and 100% MBS. Note that the number of IP PDUs
in each burst will depend on the PDU size. For UBR, the MBS refers
to the associated VBR traffic parameters.
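Since the number of IP PDUs per burst depends on the PDU size, the
burst bookkeeping can be sketched as follows (an illustration assuming
the LLC/SNAP + AAL5 encapsulation of section 2.8 and an MBS expressed
in cells).

   import math

   def pdus_per_burst(mbs_cells, ip_pdu_size, fraction):
       # Whole IP PDUs that fit in a burst of fraction*MBS cells, each
       # PDU occupying ceil((size + 16) / 48) cells after encapsulation.
       burst_cells = int(mbs_cells * fraction)
       cells_per_pdu = math.ceil((ip_pdu_size + 16) / 48)
       return burst_cells // cells_per_pdu

   for frac in (0.10, 0.20, 0.50, 1.00):
       print("%3d%% of MBS=8192: %4d x 1518-octet PDUs"
             % (frac * 100, pdus_per_burst(8192, 1518, frac)))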
2.22. Trial description
A particular test consists of multiple trials. Each trial returns
one piece of information, for example the loss rate at a particular
input IP PDU rate. Each trial consists of five phases (a sketch in
code follows the list):
a) If the SUT is a switch supporting PNNI, send the routing update to
the SUT receive port and wait two seconds to be sure that the
routing has settled.
b) Send an ATM ARP PDU to determine the ATM address corresponding to
the destination IP address. The formats of the ATM ARP PDU that
should be used are shown in Appendix C (Test Frame Formats) and
MUST be in accordance with RFC2225.
c) Stimulate SUT with traffic load.
d) Wait for two seconds for any residual IP PDUs to be received.
e) Wait for at least five seconds for the SUT to restabilize.
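A minimal sketch of a trial runner implementing these five phases; the
test-device calls (send_routing_update, send_atm_arp, offer_load) are
hypothetical names invented for illustration, not a real product's API.

   import time

   def run_trial(sut, test_device, load, pnni=True):
       # The five trial phases of section 2.22.
       if pnni:                               # a) routing update, settle
           test_device.send_routing_update(sut)
           time.sleep(2)
       test_device.send_atm_arp(sut)          # b) resolve the ATM address
       result = test_device.offer_load(load)  # c) stimulate the SUT
       time.sleep(2)                          # d) drain residual IP PDUs
       time.sleep(5)                          # e) let the SUT restabilize
       return result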
2.23. Trial duration
The objective of the tests defined in this document is to accurately
characterize the behavior of a particular piece of network equipment
under varying traffic loads. The choice of test duration must be a
compromise between this objective and keeping the duration of the
benchmarking test suite within reasonable bounds. The SUT SHOULD be
stimulated for at least 60 seconds. If this time period results in a
high variance in the test results, the SUT SHOULD be stimulated for
at least 300 seconds.
2.24. Address resolution
The SUT MUST be able to respond to address resolution requests sent
by another SUT, an ATM ARP server or the test equipment in accordance
with RFC2225.
2.25. Synchronized Payload Bit Pattern.
Some measurements assume that both the transmitter and receiver
payload information is synchronized. Synchronization MUST be
achieved by supplying a known bit pattern to both the transmitter and
receiver. This bit pattern MUST be one of the following: PRBS-15,
PRBS-23, 0xFF00, or 0xAA55.
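A sketch of two of these pattern sources. PRBS-15 is commonly defined
(for example in ITU-T O.150) by the polynomial x^15 + x^14 + 1; the
fixed patterns simply repeat a two-octet word. Both ends seed the same
generator state to stay synchronized.

   def prbs15_bits(n, state=0x7FFF):
       # First n bits of a PRBS-15 sequence (taps at bits 15 and 14 of
       # a Fibonacci LFSR); transmitter and receiver use the same seed.
       out = []
       for _ in range(n):
           bit = ((state >> 14) ^ (state >> 13)) & 1
           state = ((state << 1) | bit) & 0x7FFF
           out.append(bit)
       return out

   def fixed_pattern_bytes(word, n):
       # Repeating fixed pattern (0xFF00 or 0xAA55) as n payload octets.
       pair = [(word >> 8) & 0xFF, word & 0xFF]
       return bytes(pair[i % 2] for i in range(n))

   print(prbs15_bits(16))
   print(fixed_pattern_bytes(0xAA55, 6).hex())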
2.26. Burst Traffic Descriptors.
Some measurements require bursty traffic patterns. These patterns
MUST conform to one of the following traffic descriptors:
1) PCR=100% allotted line rate, SCR=50% allotted line rate, and MBS=8192
2) PCR=100% allotted line rate, SCR=50% allotted line rate, and MBS=4096
3) PCR=90% allotted line rate, SCR=50% allotted line rate, and MBS=8192
4) PCR=90% allotted line rate, SCR=50% allotted line rate, and MBS=4096
5) PCR=90% allotted line rate, SCR=45% allotted line rate, and MBS=8192
6) PCR=90% allotted line rate, SCR=45% allotted line rate, and MBS=4096
7) PCR=80% allotted line rate, SCR=40% allotted line rate, and MBS=65536
8) PCR=80% allotted line rate, SCR=40% allotted line rate, and MBS=32768
The allotted line rate refers to the total available line rate
divided by the number of VCCs in use.
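Translating a descriptor template into per-VCC rates is direct, as in
this sketch (the cell-rate units and the OC-3c example figure are
assumptions for illustration):

   def traffic_descriptor(line_rate_cps, n_vccs, pcr_pct, scr_pct, mbs):
       # The allotted line rate is the total line rate divided by the
       # number of VCCs in use (section 2.26); PCR and SCR are the
       # template percentages applied to that allotment.
       allotted = line_rate_cps / n_vccs
       return {"PCR": allotted * pcr_pct / 100,
               "SCR": allotted * scr_pct / 100,
               "MBS": mbs}

   # Descriptor 1 on an OC-3c (about 353,208 cells/s) over 12 VCCs
   print(traffic_descriptor(353_208, 12, pcr_pct=100, scr_pct=50, mbs=8192))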
3. Performance Metrics
3.1. Physical Layer-SONET
3.1.1. Pointer Movements
3.1.1.1. Pointer Movement Propagation.
Objective: To determine that the SUT does not propagate pointer
movements as defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of IP PDUs at a specific rate through the
SUT. Since this test is not a throughput test, the rate should
not be greater than 90% of line rate. The cell payload SHOULD
contain valid IP PDUs. The IP PDUs MUST be encapsulated in AAL5.
3) Count the IP PDUs that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one forward payload pointer movement. Verify that the SUT
does not change the pointer.
5) Inject one forward payload pointer movement every 1 second.
Verify that the SUT does not change the pointer.
6) Discontinue the payload pointer movement.
7) Inject five forward payload pointer movements every 1 second.
Verify that the SUT does not change the pointer.
8) Discontinue the payload pointer movement.
9) Inject one backward payload pointer movement. Verify that the
SUT does not change the pointer.
10) Inject one backward payload pointer movement every 1 second.
Verify that the SUT does not change the pointer.
11) Discontinue the payload pointer movement.
12) Inject five backward payload pointer movements every 1 second.
Verify that the SUT does not change the pointer.
13) Discontinue the payload pointer movement.
Reporting Format:
The results of the pointer movement propagation test SHOULD be
reported in a form of a table. The rows SHOULD be labeled single
pointer movement, one pointer movement per second, and five
pointer movements per second. The columns SHOULD be labeled
pointer movement and loss of pointer. The elements of the table
SHOULD be either True or False, indicating whether the particular
condition was observed for each test.
The table MUST also indicate the IP PDU size in octets and traffic
rate in IP PDUs per second as generated by the test device.
3.1.1.2. Cell Loss due to Pointer Movement.
Objective: To determine if the SUT will drop cells due to pointer
movements as defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of cells at a specific rate through the
SUT. Since this test is not a throughput test, the rate should
not be greater than 90% of line rate. The cell payload SHOULD
contain valid IP PDUs. The IP PDUs MUST be encapsulated in AAL5.
3) Count the cells that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one forward payload pointer movement. Verify that the SUT
does not drop any cells.
5) Inject one forward payload pointer movement every 1 second.
Verify that the SUT does not drop any cells.
6) Discontinue the payload pointer movement.
7) Inject five forward payload pointer movements every 1 second.
Verify that the SUT does not drop any cells.
8) Discontinue the payload pointer movement.
9) Inject one backward payload pointer movement. Verify that the
SUT does not drop any cells.
10) Inject one backward payload pointer movement every 1 second.
Verify that the SUT does not drop any cells.
11) Discontinue the payload pointer movement.
12) Inject five backward payload pointer movements every 1 second.
Verify that the SUT does not drop any cells.
13) Discontinue the payload pointer movement.
Reporting Format:
The results of the cell loss due to pointer movement test SHOULD
be reported in a form of a table. The rows SHOULD be labeled
single pointer movement, one pointer movement per second, and five
pointer movements per second. The columns SHOULD be labeled cell
loss and number of cells lost. The elements of column 1 SHOULD be
either True or False, indicating whether the particular condition
was observed for each test. The elements of column 2 SHOULD be
non-negative integers.
The table MUST also indicate the traffic rate in IP PDUs per
second as generated by the test device.
3.1.1.3. IP Packet Loss due to Pointer Movement.
Objective: To determine if the SUT will drop IP packets due to
pointer movements as defined in RFC2761 "Terminology for ATM
Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of IP packets at a specific rate through
the SUT. Since this test is not a throughput test, the rate
should not be greater than 90% of line rate. The IP PDUs MUST be
encapsulated in AAL5.
3) Count the IP packets that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one forward payload pointer movement. Verify that the SUT
does not drop any packets.
5) Inject one forward payload pointer movement every 1 second.
Verify that the SUT does not drop any packets.
6) Discontinue the payload pointer movement.
7) Inject five forward payload pointer movements every 1 second.
Verify that the SUT does not drop any packets.
8) Discontinue the payload pointer movement.
9) Inject one backward payload pointer movement. Verify that the
SUT does not drop any packets.
10) Inject one backward payload pointer movement every 1 second.
Verify that the SUT does not drop any packets.
11) Discontinue the payload pointer movement.
12) Inject five backward payload pointer movements every 1 second.
Verify that the SUT does not drop any packets.
13) Discontinue the payload pointer movement.
Reporting Format:
The results of the IP packet loss due to pointer movement test
SHOULD be reported in a form of a table. The rows SHOULD be
labeled single pointer movement, one pointer movement per second,
and five pointer movements per second. The columns SHOULD be
labeled packet loss and number of packets lost. The elements of
column 1 SHOULD be either True or False, indicating whether the
particular condition was observed for each test. The elements of
column 2 SHOULD be non-negative integers.
The table MUST also indicate the packet size in octets and traffic
rate in packets per second as generated by the test device.
3.1.2. Transport Overhead (TOH) Error Count
3.1.2.1. TOH Error Propagation.
Objective: To determine that the SUT does not propagate TOH errors as
defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of IP PDUs at a specific rate through the
SUT. Since this test is not a throughput test, the rate should
not be greater than 90% of line rate. The cell payload SHOULD
contain valid IP PDUs. The IP PDUs MUST be encapsulated in AAL5.
3) Count the IP PDUs that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one error in the first bit of the A1 and A2 Frameword.
Verify that the SUT does not propagate the error.
5) Inject one error in the first bit of the A1 and A2 Frameword
every 1 second. Verify that the SUT does not propagate the
error.
6) Discontinue the Frameword error.
7) Inject one error in the first bit of the A1 and A2 Frameword for
4 consecutive IP PDUs in every 6 IP PDUs. Verify that the SUT
indicates Loss of Frame.
8) Discontinue the Frameword error.
Reporting Format:
The results of the TOH error propagation test SHOULD be reported
in a form of a table. The rows SHOULD be labeled single error,
one error per second, and four consecutive errors every 6 IP PDUs.
The columns SHOULD be labeled error propagated and loss of IP PDU.
The elements of the table SHOULD be either True or False,
indicating whether the particular condition was observed for each
test.
The table MUST also indicate the IP PDU size in octets and traffic
rate in IP PDUs per second as generated by the test device.
3.1.2.2. Cell Loss due to TOH Error.
Objective: To determine if the SUT will drop cells due to TOH errors as
defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of cells at a specific rate through the
SUT. Since this test is not a throughput test, the rate should
not be greater than 90% of line rate. The cell payload SHOULD
contain valid IP PDUs. The IP PDUs MUST be encapsulated in AAL5.
3) Count the cells that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one error in the first bit of the A1 and A2 Frameword.
Verify that the SUT does not drop any cells.
5) Inject one error in the first bit of the A1 and A2 Frameword
every 1 second. Verify that the SUT does not drop any cells.
6) Discontinue the Frameword error.
7) Inject one error in the first bit of the A1 and A2 Frameword for
4 consecutive IP PDUs in every 6 IP PDUs. Verify that the SUT
does drop cells.
8) Discontinue the Frameword error.
Reporting Format:
The results of the Cell Loss due to TOH errors test SHOULD be
reported in a form of a table. The rows SHOULD be labeled single
error, one error per second, and four consecutive errors every 6
IP PDUs. The columns SHOULD be labeled cell loss and number of
cells lost. The elements of column 1 SHOULD be either True or
False, indicating whether the particular condition was observed
for each test. The elements of column 2 SHOULD be non-negative
integers.
The table MUST also indicate the traffic rate in IP PDUs per
second as generated by the test device.
3.1.2.3. IP Packet Loss due to TOH Error.
Objective: To determine if the SUT will drop IP packets due to TOH
errors as defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of IP packets at a specific rate through
the SUT. Since this test is not a throughput test, the rate
should not be greater than 90% of line rate. The IP PDUs MUST be
encapsulated in AAL5.
3) Count the IP packets that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one error in the first bit of the A1 and A2 Frameword.
Verify that the SUT does not drop any packets.
5) Inject one error in the first bit of the A1 and A2 Frameword
every 1 second. Verify that the SUT does not drop any packets.
6) Discontinue the Frameword error.
7) Inject one error in the first bit of the A1 and A2 Frameword for
4 consecutive IP PDUs in every 6 IP PDUs. Verify that the SUT
does drop packets.
8) Discontinue the Frameword error.
Reporting Format:
The results of the IP packet loss due to TOH errors test SHOULD be
reported in a form of a table. The rows SHOULD be labeled single
error, one error per second, and four consecutive errors every 6
IP PDUs. The columns SHOULD be labeled packet loss and number of
packets lost. The elements of column 1 SHOULD be either True or
False, indicating whether the particular condition was observed
for each test. The elements of column 2 SHOULD be non-negative
integers.
The table MUST also indicate the packet size in octets and traffic
rate in packets per second as generated by the test device.
3.1.3. Path Overhead (POH) Error Count
3.1.3.1. POH Error Propagation.
Objective: To determine that the SUT does not propagate POH errors as
defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of IP PDUs at a specific rate through the
SUT. Since this test is not a throughput test, the rate should
not be greater than 90% of line rate. The cell payload SHOULD
contain valid IP PDUs. The IP PDUs MUST be encapsulated in AAL5.
3) Count the IP PDUs that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one error in the B3 (Path BIP8) byte. Verify that the SUT
does not propagate the error.
5) Inject one error in the B3 byte every 1 second. Verify that the
SUT does not propagate the error.
6) Discontinue the POH error.
Reporting Format:
The results of the POH error propagation test SHOULD be reported
in a form of a table. The rows SHOULD be labeled single error
and one error per second. The columns SHOULD be labeled error
propagated and loss of IP PDU. The elements of the table SHOULD
be either True or False, indicating whether the particular
condition was observed for each test.
The table MUST also indicate the IP PDU size in octets and
traffic rate in IP PDUs per second as generated by the test
device.
3.1.3.2. Cell Loss due to POH Error.
Objective: To determine if the SUT will drop cells due to POH errors as
defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of cells at a specific rate through the
SUT. Since this test is not a throughput test, the rate should
not be greater than 90% of line rate. The cell payload SHOULD
contain valid IP PDUs. The IP PDUs MUST be encapsulated in AAL5.
3) Count the cells that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one error in the B3 (Path BIP8) byte. Verify that the SUT
does not drop any cells.
5) Inject one error in the B3 byte every 1 second. Verify that the
SUT does not drop any cells.
6) Discontinue the POH error.
Reporting Format:
The results of the Cell Loss due to POH errors test SHOULD be
reported in a form of a table. The rows SHOULD be labeled single
error and one error per second. The columns SHOULD be labeled
cell loss and number of cells lost. The elements of column 1
SHOULD be either True or False, indicating whether the particular
condition was observed for each test. The elements of column 2
SHOULD be non-negative integers.
The table MUST also indicate the traffic rate in IP PDUs per
second as generated by the test device.
3.1.3.3. IP Packet Loss due to POH Error.
Objective: To determine if the SUT will drop IP packets due to POH
errors as defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the uni-directional
configuration.
2) Send a specific number of IP packets at a specific rate through
the SUT. Since this test is not a throughput test, the rate
should not be greater than 90% of line rate. The IP PDUs MUST be
encapsulated in AAL5.
3) Count the IP packets that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
4) Inject one error in the B3 (Path BIP8) byte. Verify that the SUT
does not drop any packets.
5) Inject one error in the B3 byte every 1 second. Verify that the
SUT does not drop any packets.
6) Discontinue the POH error.
Reporting Format:
The results of the IP packet loss due to POH errors test SHOULD be
reported in a form of a table. The rows SHOULD be labeled single
error and one error per second. The columns SHOULD be labeled
packet loss and number of packets lost. The elements of column 1
SHOULD be either True or False, indicating whether the particular
condition was observed for each test. The elements of column 2
SHOULD be non-negative integers.
The table MUST also indicate the packet size in octets and traffic
rate in packets per second as generated by the test device.
3.2. ATM Layer
3.2.1. Two-Point Cell Delay Variation (CDV)
3.2.1.1. Test Setup
The cell delay measurements assume that both the transmitter and
receiver timestamp information is synchronized. Synchronization
SHOULD be achieved by supplying a common clock signal (minimum of 100
MHz or 10 ns resolution) to both the transmitter and receiver. The
maximum timestamp values MUST be recorded to ensure synchronization
in the case of counter rollover. The cell delay measurements SHOULD
utilize the O.191 cell (ITUT-O.191) encapsulated in a valid IP
packet. If the O.191 cell is not available, a test cell encapsulated
in a valid IP packet MAY be used. The test cell MUST contain a
transmit timestamp which can be correlated with a receive timestamp.
A description of the test cell MUST be included in the test results.
The description MUST include the timestamp length (in bits), counter
rollover value, and the timestamp accuracy (in ns).
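One way the receiver-side arithmetic could look, as a hedged sketch:
delays are formed modulo the recorded counter rollover, and two-point
CDV is reported relative to a reference delay (here the first cell's
transfer delay, one common convention).

   def two_point_cdv(tx_ts, rx_ts, rollover):
       # Two-point CDV from matched transmit/receive timestamps; the
       # modulo keeps differences valid across counter rollover.
       delays = [(rx - tx) % rollover for tx, rx in zip(tx_ts, rx_ts)]
       ref = delays[0]
       cdv = [d - ref for d in delays]
       return min(cdv), max(cdv), max(cdv) - min(cdv)

   # 32-bit counters at 10 ns resolution (100 MHz clock, section 3.2.1.1)
   lo, hi, p2p = two_point_cdv([100, 200, 300], [160, 275, 355], 2**32)
   print(lo, hi, p2p)  # -> -5 15 20 (ticks of 10 ns)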
3.2.1.2. Two-point CDV/Steady Load/One VCC
Objective: To determine the SUT variation in cell transfer delay with
one VCC as defined in RFC2761 "Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the bi-directional
configuration.
2) Configure the SUT and test device with one VCC. The VCC SHOULD
contain one VPI/VCI. The VCC MUST be configured as either a CBR,
VBR, or UBR connection. The VPI/VCI MUST NOT be one of the
reserved ATM signaling channels (e.g., [0,5], [0,16]).
3) Send a specific number of IP packets containing timestamps at a
specific constant rate through the SUT via the defined test VCC.
Since this test is not a throughput test, the rate should not be
greater than 90% of line rate. The IP PDUs MUST be encapsulated
in AAL5.
4) Count the IP packets that are transmitted by the SUT to verify
connectivity and load. If the count on the test device is the
same as on the SUT, continue the test; else lower the test device
traffic rate until the counts are the same.
5) Record the packets' timestamps at the transmitter and receiver
ends of the test device.
Reporting Format:
The results of the Two-point CDV/Steady Load/One VCC test SHOULD
be reported in a form of text, graph, and histogram.
The text results SHOULD display the numerical values of the CDV.
The values given SHOULD include: time period of test in s, test
VPI/VCI value, total number of cells transmitted and received on
the given VPI/VCI during the test in positive integers, maximum
and minimum CDV during the test in us, and peak-to-peak CDV in us.
The graph results SHOULD display the cell delay values. The x-
coordinate SHOULD be the test run time in either seconds, minutes
or days depending on the total length of the test. The x-
coordinate time SHOULD be configurable. The y-coordinate SHOULD
be the cell delay in us. The integration time per point MUST be
indicated.
The histogram results SHOULD display the peak-to-peak cell delay.
The x-coordinate SHOULD be the cell delay in us with at least 256
bins. The y-coordinate SHOULD be the number of cells observed in
each bin.
The results MUST also indicate the packet size in octets, traffic
rate in packets per second, and bearer class as generated by the
test device. The VCC and VPI/VCI values MUST be indicated. The
bearer class of the created VCC MUST also be indicated.
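A plain sketch of the histogram bookkeeping asked for above: at least
256 bins spanning the observed delay range, counting cells per bin
(illustrative, not a required implementation).

   def delay_histogram(delays_us, bins=256):
       # Bin cell delays for the section 3.2.1.2 histogram report.
       lo, hi = min(delays_us), max(delays_us)
       width = (hi - lo) / bins or 1.0  # guard the all-equal case
       counts = [0] * bins
       for d in delays_us:
           counts[min(int((d - lo) / width), bins - 1)] += 1
       return lo, width, counts

   lo, width, counts = delay_histogram([10.0, 10.2, 10.2, 11.5, 12.0])
   print(lo, width, counts[:4])  # bin origin, bin width (us), first bins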
3.2.1.3. Two-point CDV/Steady Load/Twelve VCCs
Objective: To determine the SUT variation in cell transfer delay with
twelve VCCs as defined in RFC2761 "Terminology for ATM
Benchmarking".
Procedure:
1) Set up the SUT and test device using the bi-directional
configuration.
2) Configure the SUT and test device with twelve VCCs, using 1 VPI
and 12 VCIs. The VCCs MUST be configured as either CBR, VBR,
or UBR connections. The VPI/VCIs MUST NOT be one of the reserved
ATM signaling channels (e.g., [0,5], [0,16]).
3) Send a specific number of IP packets containing timestamps at a
specific constant rate through the SUT via the defined test VCCs.
All of the VPI/VCI pairs will generate traffic at the same
traffic rate. Since this test is not a throughput test, the rate
should not be greater than 90% of line rate. The IP PDUs MUST be
encapsulated in AAL5.
4) Count the IP packets that are transmitted by the SUT on all VCCs
to verify connectivity and load. If the count on the test device
is the same as on the SUT, continue the test; else lower the test
device traffic rate until the counts are the same.
5) Record the packets' timestamps at the transmitter and receiver
ends of the test device for all VCCs.
Reporting Format:
The results of the Two-point CDV/Steady Load/Twelve VCCs test
SHOULD be reported in a form of text, graph, and histograms.
The text results SHOULD display the numerical values of the CDV.
The values given SHOULD include: time period of test in s, test
VPI/VCI values, total number of cells transmitted and received on
each VCC during the test in positive integers, maximum and minimum
CDV on each VCC during the test in us, and peak-to-peak CDV on
each VCC in us.
The graph results SHOULD display the cell delay values. The x-
coordinate SHOULD be the test run time in either seconds, minutes
or days depending on the total length of the test. The x-
coordinate time SHOULD be configurable. The y-coordinate SHOULD
be the cell delay for each VCC in ms. There SHOULD be 12 curves
on the graph, one curve indicated and labeled for each VCC. The
integration time per point MUST be indicated.
The histograms SHOULD display the peak-to-peak cell delay. There
will be one histogram for each VCC. The x-coordinate SHOULD be
the cell delay in us with at least 256 bins. The y-coordinate
SHOULD be the number of cells observed in each bin.
The results MUST also indicate the packet size in octets, traffic
rate in packets per second, and bearer class as generated by the
test device. The VCC and VPI/VCI values MUST be indicated. The
bearer class of the created VCC MUST also be indicated.
3.2.1.4. Two-point CDV/Steady Load/Maximum VCCs
Objective: To determine the SUT variation in cell transfer delay with
the maximum number VCCs supported on the SUT as defined in RFC2761
"Terminology for ATM Benchmarking".
Procedure:
1) Set up the SUT and test device using the bi-directional
configuration.