Rate-limiting for all traffic operates on a per-port basis, allowing only the specified bandwidth to be used for inbound traffic. When traffic exceeds the configured limit, it is dropped. This effectively sets a usage level on a given port and is a tool for enforcing maximum service level commitments granted to network users. Rate-limiting is not configurable on port trunks. It is designed to be applied at the network edge to limit traffic from non-critical users, or to enforce service agreements such as those offered by Internet Service Providers (ISPs) to provide only the bandwidth for which a customer has paid.
NOTE: Rate-limiting can also be applied by a RADIUS server during an authenticated client session. For further details, see the chapter "RADIUS Authentication and Accounting" in the Access Security Guide for your switch.
The switches also support ICMP rate-limiting to mitigate the effects of certain ICMP-based attacks.
Syntax:

Configures a traffic rate limit (on non-trunked ports) on the link. The no form of the command disables rate-limiting on the specified ports.

The rate-limit all command controls the rate of traffic sent or received on a port by setting a limit on the bandwidth available. It includes options for:
- Rate-limiting on inbound traffic.
- Specifying the traffic rate as either a percentage of bandwidth, or in terms of bits per second.
- The rate-limit icmp command specifies a rate limit on inbound ICMP traffic only (see "ICMP Rate-Limiting").
- Rate-limiting does not apply to trunked ports (including meshed ports).
- Kbps rate-limiting is done in segments of 1% of the lowest corresponding media speed. For example, if the lowest media speed is 10 Mbps, rate limits are applied in segments of 100 Kbps: a 1-100 Kbps rate limit is implemented as a limit of 100 Kbps, a limit of 100-199 Kbps is also implemented as a limit of 100 Kbps, a limit of 200-299 Kbps is implemented as a limit of 200 Kbps, and so on.
- Percentage limits are based on link speed. For example, if a 100 Mbps port negotiates a link at 100 Mbps and the inbound rate-limit is configured at 50%, then the traffic flow through that port is limited to no more than 50 Mbps. Similarly, if the same port negotiates a 10 Mbps link, then it allows no more than 5 Mbps of inbound traffic.
Configuring a rate limit of 0 (zero) on a port blocks all traffic on that port. However, if this is the desired behavior on the port, HP recommends using the <port-list> disable command instead of configuring a rate limit of 0.
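The segmentation and percentage rules above can be sketched in Python. This is an illustration only, not switch firmware: the function names are hypothetical, and the 10 Mbps "lowest media speed" default (giving 100 Kbps segments) is an assumption chosen to match the examples above.

```python
def kbps_effective_limit(configured_kbps, media_speed_kbps=10_000):
    # Kbps limits are applied in segments of 1% of the lowest media
    # speed. Assuming a lowest media speed of 10 Mbps (10,000 Kbps),
    # the segment size is 100 Kbps; configured values round down to a
    # segment boundary, with a one-segment minimum.
    segment = media_speed_kbps // 100
    return max(segment, (configured_kbps // segment) * segment)

def percent_effective_limit_mbps(link_speed_mbps, percent):
    # Percentage limits scale with the negotiated link speed.
    return link_speed_mbps * percent / 100
```

For example, a configured limit of 250 Kbps lands in the 200-299 Kbps band and is enforced as 200 Kbps, and a 50% limit on a port that negotiates 10 Mbps allows 5 Mbps inbound.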
You can configure a rate limit from either the global configuration level or from the port context level. For example, either of the following commands configures an inbound rate limit of 60% on ports 3 - 5:
HP Switch(config)# int 3-5 rate-limit all in percent 60
HP Switch(eth-3-5)# rate-limit all in percent 60
The show rate-limit all command displays the per-port rate-limit configuration in the running-config file.
Syntax: show rate-limit all [port-list]

Without [port-list], this command lists the rate-limit configuration for all ports on the switch. With [port-list], this command lists the rate-limit configuration for the specified ports. This command operates the same way in any CLI context.
Listing the rate-limit configuration
HP Switch(config)# show rate-limit all

 Inbound Rate Limit Maximum %

  Port  | Limit     Mode      Radius Override
  ----- + --------  --------- ---------------
  1     | Disabled  Disabled  No-override
  2     | 500       kbps      No-override
  3     | 50        %         No-override
  4     | Disabled  Disabled  No-override
The show running command displays the currently applied setting for any interfaces in the switch configured for all-traffic rate-limiting and ICMP rate-limiting.

The show config command displays this information for the configuration currently stored in the startup-config file. (Note that configuration changes performed with the CLI, but not followed by a write mem command, do not appear in the startup-config file.)
- Rate-limiting operates on a per-port basis, regardless of traffic priority. Rate-limiting is available on all types of ports (other than trunked ports) and at all port speeds configurable for these switches.
- Rate-limiting is not supported on ports configured in a trunk group (including mesh ports). Configuring a port for rate-limiting and then adding it to a trunk suspends rate-limiting on the port while it is in the trunk. Attempting to configure rate-limiting on a port that already belongs to a trunk generates the following message:
- Rate-limiting and hardware. The hardware rounds the actual Kbps rate down to the nearest multiple of 64 Kbps.
- Operation with other features. Configuring rate-limiting on a port where other features affect port queue behavior (such as flow control) can result in the port not achieving its configured rate-limiting maximum. For example, in a situation where flow control is configured on a rate-limited port, there can be enough "back pressure" to hold high-priority inbound traffic from the upstream device or application to a rate that is lower than the configured rate limit. In this case, the inbound traffic flow does not reach the configured rate, and lower-priority traffic is not forwarded into the switch fabric from the rate-limited port. (This behavior is termed "head-of-line blocking" and is a well-known problem with flow control.)
- Traffic filters on rate-limited ports. Configuring a traffic filter on a port does not prevent the switch from including filtered traffic in the bandwidth-use measurement for rate-limiting when it is configured on the same port. For example, ACLs, source-port filters, protocol filters, and multicast filters are all included in bandwidth usage calculations.
- Monitoring (mirroring) rate-limited interfaces. If monitoring is configured, packets dropped by rate-limiting on a monitored interface are still forwarded to the designated monitor port. (Monitoring shows what traffic is inbound on an interface, and is not affected by "drop" or "forward" decisions.)
- Optimum rate-limiting operation. Optimum rate-limiting occurs with 64-byte packet sizes. Traffic with larger packet sizes can result in throughput somewhat below the configured bandwidth. This is to ensure the strictest possible rate-limiting of all sizes of packets.
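The hardware rounding note above (rounding the Kbps rate down to the nearest multiple of 64 Kbps) can be modeled with a one-line Python sketch; the function name is hypothetical and this is not the switch's actual firmware logic.

```python
def hardware_rate_kbps(configured_kbps):
    # Per the note above, the hardware rounds the configured Kbps rate
    # down to the nearest multiple of 64 Kbps.
    return (configured_kbps // 64) * 64
```

For example, a configured limit of 500 Kbps would be enforced by the hardware as 448 Kbps (7 x 64 Kbps).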