From Wikipedia, the free encyclopedia
Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is caused either by errors in data transmission, typically across wireless networks,[1][2] or by network congestion.[3]: 36 Packet loss is measured as a percentage of packets lost with respect to packets sent.
The Transmission Control Protocol (TCP) detects packet loss and performs retransmissions to ensure reliable messaging. Packet loss in a TCP connection is also used to avoid congestion and thus produces an intentionally reduced throughput for the connection.
In real-time applications like streaming media or online games, packet loss can affect a user’s quality of experience (QoE).
Causes
The Internet Protocol (IP) is designed according to the end-to-end principle as a best-effort delivery service, with the intention of keeping the logic that routers must implement as simple as possible. If the network made reliable delivery guarantees on its own, that would require store and forward infrastructure, where each router devotes a significant amount of storage space to packets while waiting to verify that the next node properly received them. A reliable network would not be able to maintain its delivery guarantees in the event of a router failure. Reliability is also not needed for all applications. For example, with live streaming media, it is more important to deliver recent packets quickly than to ensure that stale packets are eventually delivered. An application or user may also decide to retry an operation that is taking a long time, in which case another set of packets will be added to the burden of delivering the original set. Such a network might also need a command and control protocol for congestion management, adding even more complexity.
To avoid all of these problems, the Internet Protocol allows for routers to simply drop packets if the router or a network segment is too busy to deliver the data in a timely fashion. This is not ideal for speedy and efficient transmission of data, and is not expected to happen in an uncongested network.[4] Dropping of packets acts as an implicit signal that the network is congested, and may cause senders to reduce the amount of bandwidth consumed, or attempt to find another path. For example, using perceived packet loss as feedback to discover congestion, the Transmission Control Protocol (TCP) is designed so that excessive packet loss will cause the sender to throttle back and stop flooding the bottleneck point with data.[3]: 282–283
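For illustration, here is a minimal, hedged sketch of the additive-increase/multiplicative-decrease idea behind this behavior; the constants, variable names, and loss pattern are assumptions for the example, not a faithful TCP implementation:

```python
# Illustrative AIMD-style congestion window adjustment driven by perceived loss.
# The structure and constants are simplified assumptions, not a real TCP stack.

def update_cwnd(cwnd: float, loss_detected: bool,
                mss: float = 1.0, min_cwnd: float = 1.0) -> float:
    """Return the new congestion window (in segments)."""
    if loss_detected:
        # Multiplicative decrease: back off when dropped packets signal congestion.
        return max(min_cwnd, cwnd / 2)
    # Additive increase: keep probing for spare bandwidth while no loss is seen.
    return cwnd + mss

cwnd = 10.0
for lost in [False, False, True, False, False, False, True]:
    cwnd = update_cwnd(cwnd, lost)
    print(f"loss={lost!s:<5}  cwnd={cwnd:.1f}")
```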
Packets may also be dropped if the IPv4 header checksum or the Ethernet frame check sequence indicates the packet has been corrupted. Packet loss can also be caused by a packet drop attack.
Wireless networks
Wireless networks are susceptible to a number of factors that can corrupt or lose packets in transit, such as radio frequency interference (RFI),[1] radio signals that are too weak due to distance or multi-path fading, faulty networking hardware, or faulty network drivers.
Wi-Fi is inherently unreliable: even when two identical Wi-Fi receivers are placed in close proximity to each other, they do not exhibit the similar patterns of packet loss that one might expect.[1]
Cellular networks can experience packet loss caused by "high bit error rate (BER), unstable channel characteristics, and user mobility."[5] TCP's intentional throttling behavior prevents wireless networks from performing near their theoretical potential transfer rates because unmodified TCP treats all dropped packets as if they were caused by network congestion, and so may throttle wireless networks even when they aren't actually congested.[5]
Network congestion
Network congestion is a cause of packet loss that can affect all types of networks. When traffic arrives at a given router or network segment, for a sustained period, at a rate greater than it can be sent onward, there is no option other than to drop packets.[3]: 36 If a single router or link is constraining the capacity of the complete travel path or of network travel in general, it is known as a bottleneck. In some cases, packets are intentionally dropped by routing routines,[6] or through network dissuasion techniques for operational management purposes.[7]
Effects
Packet loss directly reduces throughput for a given sender as some sent data is never received and can’t be counted as throughput. Packet loss indirectly reduces throughput as some transport layer protocols interpret loss as an indication of congestion and adjust their transmission rate to avoid congestive collapse.
When reliable delivery is necessary, packet loss increases latency due to additional time needed for retransmission.[a] Assuming no retransmission, packets experiencing the worst delays might be preferentially dropped (depending on the queuing discipline used), resulting in lower latency overall.
Measurement
Packet loss may be measured as frame loss rate defined as the percentage of frames that should have been forwarded by a network but were not.[8]: 4
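As a simple illustration of the percentage definitions above, a short sketch (the counter names are assumptions) that computes a loss rate from send and receive counts:

```python
def loss_rate_percent(sent: int, received: int) -> float:
    """Packet (or frame) loss as a percentage of packets sent."""
    if sent == 0:
        return 0.0
    lost = sent - received
    return 100.0 * lost / sent

# Example: 10,000 frames offered, 9,950 forwarded -> 0.5% frame loss rate.
print(loss_rate_percent(10_000, 9_950))  # 0.5
```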
Acceptable packet loss
Packet loss is closely associated with quality of service considerations. The amount of packet loss that is acceptable depends on the type of data being sent. For example, for voice over IP traffic, one commentator reckoned that "[m]issing one or two packets every now and then will not affect the quality of the conversation. Losses between 5% and 10% of the total packet stream will affect the quality significantly."[9] Another described less than 1% packet loss as "good" for streaming audio or video, and 1–2.5% as "acceptable".[10]
Diagnosis
Packet loss is detected by reliable protocols such as TCP. Reliable protocols react to packet loss automatically, so when a person such as a network administrator needs to detect and diagnose packet loss, they typically use status information from network equipment or purpose-built tools.
The Internet Control Message Protocol provides an echo functionality, where a special packet is transmitted that always produces a reply. Tools such as ping, traceroute, and MTR use this protocol to provide a visual representation of the path packets are taking, and to measure packet loss at each hop.[b]
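As a rough, hedged sketch of this kind of measurement against a single destination (the Linux-style `-c` count flag and the summary-line format are assumptions; output formats vary by platform, so the parsing below is illustrative rather than robust):

```python
# Rough sketch: estimate packet loss to one destination with the system `ping`
# utility, then parse the loss percentage from its summary line.
import re
import subprocess

def ping_loss_percent(host: str, count: int = 20):
    result = subprocess.run(["ping", "-c", str(count), host],
                            capture_output=True, text=True)
    # Typical summary: "20 packets transmitted, 19 received, 5% packet loss, ..."
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    print(ping_loss_percent("example.com"))
```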
Many routers have status pages or logs, where the owner can find the number or percentage of packets dropped over a particular period.
Packet recovery for reliable delivery
Per the end-to-end principle, the Internet Protocol leaves responsibility for packet recovery through the retransmission of dropped packets to the endpoints — the computers sending and receiving the data. They are in the best position to decide whether retransmission is necessary because the application sending the data should know whether a message is best retransmitted in whole or in part, whether or not the need to send the message has passed, and how to control the amount of bandwidth consumed to account for any congestion.
Network transport protocols such as TCP provide endpoints with an easy way to ensure reliable delivery of packets so that individual applications don’t need to implement the logic for this themselves. In the event of packet loss, the receiver asks for retransmission or the sender automatically resends any segments that have not been acknowledged.[3]: 242 Although TCP can recover from packet loss, retransmitting missing packets reduces the throughput of the connection as receivers wait for retransmissions and additional bandwidth is consumed by them. In certain variants of TCP, if a transmitted packet is lost, it will be re-sent along with every packet that had already been sent after it.
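A minimal sketch of the sender-side bookkeeping this paragraph describes: every segment is held until it is acknowledged and is re-sent after a timeout. The class, names, and fixed timeout are illustrative assumptions; real TCP adds sequence-number windows, RTT estimation, fast retransmit, and congestion control:

```python
# Simplified sketch of sender-side reliable delivery: keep every sent segment
# until it is acknowledged, and retransmit anything still unacknowledged after
# a timeout.
import time

class ReliableSender:
    def __init__(self, send, timeout: float = 1.0):
        self.send = send            # callable that puts one segment on the wire
        self.timeout = timeout
        self.unacked = {}           # seq -> (segment, time last sent)

    def transmit(self, seq: int, segment: bytes) -> None:
        self.send(seq, segment)
        self.unacked[seq] = (segment, time.monotonic())

    def on_ack(self, seq: int) -> None:
        self.unacked.pop(seq, None)

    def retransmit_expired(self) -> None:
        now = time.monotonic()
        for seq, (segment, sent_at) in list(self.unacked.items()):
            if now - sent_at >= self.timeout:
                self.send(seq, segment)          # resend a presumably lost segment
                self.unacked[seq] = (segment, now)

# Tiny usage example with a stand-in "wire":
sent_log = []
sender = ReliableSender(send=lambda seq, seg: sent_log.append(seq))
sender.transmit(1, b"hello")
sender.retransmit_expired()  # nothing resent yet; the timeout has not expired
```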
Protocols such as User Datagram Protocol (UDP) provide no recovery for lost packets. Applications that use UDP are expected to implement their own mechanisms for handling packet loss, if needed.
Impact of queuing discipline
There are many queuing disciplines used for determining which packets to drop. Most basic networking equipment will use FIFO queuing for packets waiting to go through the bottleneck and they will drop the packet if the queue is full at the time the packet is received. This type of packet dropping is called tail drop. Other full queue mechanisms include random early drop or weighted random early drop. Dropping packets is undesirable as the packet is either lost or must be retransmitted and this can impact real-time throughput; however, increasing the buffer size can lead to bufferbloat which has its own impact on latency and jitter during congestion.
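A minimal tail-drop sketch of the FIFO behavior described above; the capacity and packet representation are arbitrary assumptions:

```python
# Minimal tail-drop sketch: a bounded FIFO simply discards arriving packets
# while it is full.
from collections import deque

class TailDropQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            self.dropped += 1       # tail drop: the newest arrival is discarded
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

q = TailDropQueue(capacity=3)
for pkt in range(5):
    q.enqueue(pkt)
print(q.dropped)  # 2 packets tail-dropped once the 3-slot buffer filled
```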
In cases where quality of service is rate limiting a connection, e.g., using a leaky bucket algorithm, packets may be intentionally dropped in order to slow down specific services to ensure available bandwidth for other services marked with higher importance. For this reason, packet loss is not necessarily an indication of poor connection reliability or signs of a bandwidth bottleneck.
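A hedged sketch of a leaky-bucket style policer of the kind mentioned above: traffic that would overflow the bucket is dropped so the service is held to the configured rate. Units, parameter values, and names are assumptions for illustration:

```python
# Sketch of a leaky-bucket policer: the bucket drains at the configured rate,
# and packets that would overflow the remaining depth are treated as
# non-conforming and dropped.
class LeakyBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # drain rate in bytes per second
        self.capacity = burst_bytes     # bucket depth in bytes
        self.level = 0.0
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Drain the bucket for the time elapsed since the last packet.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + packet_bytes > self.capacity:
            return False                # non-conforming packet: drop it
        self.level += packet_bytes
        return True

bucket = LeakyBucket(rate_bps=1_000_000, burst_bytes=15_000)
print(bucket.allow(1500, now=0.0))  # True: fits within the burst allowance
```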
See also
- Bit slip
- Collision (telecommunications)
- Goodput
- Packet loss concealment
- Traffic shaping
Notes
- ^ During typical network congestion, not all packets in a stream are dropped. This means that undropped packets will arrive with low latency compared to retransmitted packets, which arrive with high latency. Not only do the retransmitted packets have to travel part of the way twice, but the sender will not realize the packet has been dropped until it either fails to receive acknowledgment of receipt in the expected order or fails to receive acknowledgment for a long enough time that it assumes the packet has been dropped as opposed to merely delayed.
- ^ In some cases, these tools may indicate drops for packets that are terminating in a small number of hops, but not those making it to the destination. For example, routers may give echoing of ICMP packets low priority and drop them preferentially in favor of spending resources on genuine data; this is generally considered an artifact of testing and can be ignored in favor of end-to-end results.[11]
References
- ^ a b c
- ^ Tian, Ye; Xu, Kai; Ansari, Nirwan (March 2005). "TCP in Wireless Environments: Problems and Solutions" (PDF). IEEE Radio Communications. 43 (3): S27–S32. doi:10.1109/MCOM.2005.1404595. S2CID 735922. Archived from the original (PDF) on 2017-08-09. Retrieved 2018-02-19.
- ^ a b c d Kurose, J.F. & Ross, K.W. (2010). Computer Networking: A Top-Down Approach. New York: Addison-Wesley.
- ^ Kurose, J.F.; Ross, K.W. (2010). Computer Networking: A Top-Down Approach. New York: Addison-Wesley. pp. 42–43.
The fraction of lost packets increases as the traffic intensity increases. Therefore, performance at a node is often measured not only in terms of delay but also in terms of the probability of packet loss…a lost packet may be retransmitted on an end-to-end basis in order to ensure that all data are eventually transferred from source to destination.
- ^ a b Tian, Ye; Xu, Kai; Ansari, Nirwan (March 2005). "TCP in Wireless Environments: Problems and Solutions" (PDF). IEEE Radio Communications. IEEE. Archived from the original (PDF) on 2017-08-09. Retrieved 2018-02-19.
- ^ Perkins, C.E. (2001). Ad Hoc Networking. Boston: Addison-Wesley. p. 147.
- ^ Pournaghshband, Vahab; Kleinrock, Leonard; Reiher, Peter; Afanasyev, Alexander (2012). "Controlling Applications by Managing Network Characteristics". ICC 2012.
- ^ S. Bradner, ed. (July 1991). Benchmarking Terminology for Network Interconnection Devices. Network Working Group. doi:10.17487/RFC1242. RFC 1242.
- ^ Mansfield, K.C. & Antonakos, J.L. (2010). Computer Networking from LANs to WANs: Hardware, Software, and Security. Boston: Course Technology, Cengage Learning. p. 501.
- ^ "ICTP-SDU: About PingER". Archived from the original on 2013-10-10. Retrieved 2013-05-16.
- ^ "Packet loss or latency at intermediate hops". Retrieved 2007-02-25.
External links
- Packet loss test — test your Internet connection for packet loss
Quality targets and how they are ensured in LTE networks
As in UMTS networks, delivery in LTE networks is performed over end-to-end channels (bearers) with an associated quality of service (QoS), whose most important aspects are:
- traffic classes,
- delays,
- reliability,
- priorities,
- bit rates.
Within the QoS framework, all service types are divided into 9 classes, each of which is identified by a QCI (QoS Class Identifier). In addition, the bearers set up to carry traffic fall into two groups according to the type of resource allocated:
- with a guaranteed bit rate, GBR (Guaranteed Bit Rate),
- with a non-guaranteed bit rate, Non-GBR.
The quality targets for the 9 traffic classes are given in the table below.
Table. LTE QoS (before 3GPP Release 14)
QCI | Resource type | Priority | Delay (ms) | PELR | Example services |
---|---|---|---|---|---|
1 | GBR | 2 | 100 | 10^-2 | Real-time telephony |
2 | GBR | 4 | 150 | 10^-3 | Video telephony, real-time video |
3 | GBR | 3 | 50 | 10^-3 | Real-time gaming |
4 | GBR | 5 | 300 | 10^-6 | Buffered video |
5 | Non-GBR | 1 | 100 | 10^-6 | Signaling (IMS) |
6 | Non-GBR | 6 | 300 | 10^-6 | Buffered video, TCP/IP services for priority users |
7 | Non-GBR | 7 | 100 | 10^-3 | Audio, real-time video, interactive gaming |
8 | Non-GBR | 8 | 300 | 10^-6 | Buffered video, TCP/IP services |
9 | Non-GBR | 9 | 300 | 10^-6 | Buffered video, TCP/IP services (default class) |
To deliver data at a guaranteed bit rate, the eNB must manage resources dynamically. Services of classes QCI 1, 2, 3, and 7 are services provided to the subscriber in real time over UDP/IP. The main limiting factor in realizing them is the permissible packet delivery delay.
The PELR (Packet Error Loss Rate) value characterizes the reliability of packet delivery and is estimated as the relative share of packets that are not received. A PELR ≤ 10^-6 is achieved when packets are delivered over TCP/IP. Signaling traffic has the highest priority. Class 9 is applied by default when delivering TCP/IP traffic (reading files from the Internet, e-mail, video) to non-priority users.
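A small illustrative lookup of the table above as it might be coded; the dictionary layout and field names are a sketch, not identifiers defined by 3GPP:

```python
# Sketch of the QCI table above as a lookup structure (values copied from the
# table; field names are illustrative).
QCI_TABLE = {
    1: {"resource": "GBR",     "priority": 2, "delay_ms": 100, "pelr": 1e-2},
    2: {"resource": "GBR",     "priority": 4, "delay_ms": 150, "pelr": 1e-3},
    3: {"resource": "GBR",     "priority": 3, "delay_ms": 50,  "pelr": 1e-3},
    4: {"resource": "GBR",     "priority": 5, "delay_ms": 300, "pelr": 1e-6},
    5: {"resource": "Non-GBR", "priority": 1, "delay_ms": 100, "pelr": 1e-6},
    6: {"resource": "Non-GBR", "priority": 6, "delay_ms": 300, "pelr": 1e-6},
    7: {"resource": "Non-GBR", "priority": 7, "delay_ms": 100, "pelr": 1e-3},
    8: {"resource": "Non-GBR", "priority": 8, "delay_ms": 300, "pelr": 1e-6},
    9: {"resource": "Non-GBR", "priority": 9, "delay_ms": 300, "pelr": 1e-6},
}

def qos_for(qci: int) -> dict:
    return QCI_TABLE[qci]

print(qos_for(1))  # the real-time telephony class
```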
The service data flow of a particular service is carried over a bearer of the corresponding QCI class. The main parameters characterizing a bearer are:
- for GBR classes, the guaranteed bit rate and the maximum bit rate, which must not be exceeded,
- the allocation and retention priority.
For bearers with a non-guaranteed bit rate, an aggregate bit rate over all such bearers is set. Bearers of the GBR classes are dedicated bearers.
Dynamic allocation of the channel resource is carried out by the scheduler in the eNB.
The scheduler operates according to:
- the state of the radio channels to the corresponding user terminals,
- the attributes of the bearers,
- the transmission characteristics of the bearers, including the state of the user terminal's buffers for uplink transmission,
- the interference situation in neighboring cells and the possibility of inter-cell handovers to improve conditions for user terminals located near cell edges.
The operation of the scheduler is also tied to the network admission control and radio-interface overload management functions.
Hardware and Application Profiling Tools
Tomislav Janjusic, Krishna Kavi, in Advances in Computers, 2014
3.5.5 MSNS: A Top-Down MPI-Style Hierarchical Simulation Framework for Network on Chip
MSNS is an MPI-style network simulator, which aims at accurately simulating packet delays for NoC systems. Based on SystemC, MSNS implements simulation at various granularities, from high-level abstraction to low-level RTL [11]. MSNS uses a top-down approach across all network layers and utilizes wait() functions to guarantee cycle accuracy of individual layers. This leads to the following benefits of MSNS: it can simulate traffic of practical systems more precisely by ensuring an accurate abstraction of the application's data transmission rates, it can accurately estimate dynamic power consumption, and it can provide methods to evaluate the performance of parallel algorithms and high-performance applications on a network infrastructure.
MSNS employs several interface levels. RTL modeling is performed at the link layer and network layer. The interface layer is composed of the MPI library, RTL description, and high-level abstraction descriptions. The simulator consists of two components: MSNS generator and MSNS simulator. By utilizing user supplied input files, the MSNS generator generates the network and application properties and stores them as a configuration file. To ensure highest possible accuracy, MSNS generator calculates wire latencies and bit-error probabilities through its built-in wire models. These delays are enforced by a wait function. When the libraries (SystemC, MPI, and design modules) are loaded into the MSNS simulator, they are combined with the previously generated configuration files and the simulation initiates. To calculate power at each individual layer MSNS provides a power estimation for the whole network infrastructure at the architectural level.
An Overview
Magdi S. Mahmoud, Yuanqing Xia, in Networked Control Systems, 2019
1.3.4 Data Loss
The state of the art in the design of control systems has to take into account the effects of packet loss and delay in networked control systems. Observations on whether data (packet) losses are counted online or offline when developing control algorithms have led to categorizing the literature into two frameworks. One framework for the analysis and design of NCSs with packet losses is the offline framework, where the controller is designed in advance, regardless of the actual pattern of packet losses.
The other framework is one in which the control input to the plant is implemented online according to whether a packet is lost or not, even though the controller gains are computed offline. To realize this, the actuator has to be equipped with the intelligence to check and compare packet indices, and this intelligence requires the receivers to run in a time-driven mode. A typical approach in this framework is to extend the Bernoulli-process description of the packet losses and further model the correlation between packet loss and no packet loss as a two-dimensional Markov chain, and therefore to model the corresponding closed-loop system as a Markov jump system.
There generally exists a tradeoff between maximizing the number of consecutive packet losses, maximizing the allowable probability of packet losses, or lowering the transmission rate, and increasing the stability margins and/or system performance. The scheme of the approach is illustrated in Fig. 1.9 (bottom). It can be seen that the controllers obtained offline are actually robust to the packet losses.
Figure 1.9. A typical implementation scheme to handle packet losses and disorder: scheme A (top), scheme B (bottom)
Although in the system the gains of the two controllers can be obtained offline, the control inputs will be implemented online depending on whether a packet is lost or not. As we commented in the last subsection, the approach of using the Markov jump systems can be replaced by non-deterministic switched systems. In this approach, the previous accepted measures or control inputs will be continuously used if the current packet is lost. The scheme is illustrated in Fig. 1.9 (bottom). Another typical approach in the framework is predictive control methodology. The natural property of predictive control is that it can predict a finite horizon of future control inputs at each time instant. Thus, if some consecutive packets (but within the horizon) after a successful packet transmission are lost, the control inputs to the plant can be successively replaced by the corresponding ones predicted in that successfully used packet. The idea necessitates a buffer with more than one storage unit (the length is the prediction horizon) and the actuator site to be equipped with selection intelligence. Fig. 1.10 (top) illustrates the approach.
Figure 1.10. A typical implementation scheme to handle packet losses and disorder: scheme C (top), scheme D (bottom)
Note that although the predicted control inputs in a successfully transmitted packet are based on the plant in open-loop, studies have shown that implementing control inputs in such a way will in general result in a better performance than the approach where only the last used control input is kept and used in the presence of later packet losses. It can be concluded from the two above-reviewed approaches that the control input to the plant is finally selected at the actuator site according to whether the real packet transmission succeeded or not, and therefore it is online and can be viewed as an adaptation against packet losses. The studies on the packet disorder problem are sporadic since in most literature the so-called past packets rejection logic is commonly employed, which means that the older packets will be discarded if the most recent packet arrives at the receiver side. This logic can be easily realized by embedding some intelligence at the controllers and/or the actuators for comparison, checking, and selection. By this logic, the packet disorder can be viewed as packet losses in the analysis and design. However, it should be noted that discarding is a human made behavior at receiver sites, which essentially differs from the real packet losses. In the former, the information carried in the past packets still comes although it is not fresh. Therefore, a key point in the packet disorder problem should be whether the information is useful to the control design, and if the answer is positive, the past packets rejection logic is better if it is not used. Some works have shown that the incorporation of the past states will improve the system performance to some extent. Therefore, if, in some applications, some improvement in system performance is seriously required, the past system state at the controller site, or the past control inputs at the actuator site, are better if used. To enable this, an appropriate timeline within one control period should be available to mark the arrivals of the past packets at the receivers. An illustration is shown in Fig. 1.10 (bottom). If using the past information does not result in an appreciable increase in performance or if the clock-driven scheme is limited, the packet disorder problem has to be solved simply through the past packets rejection logic.
Considering different scenarios of both time delays and packet losses at the actuator nodes, the closed-loop system is modeled as a switched system. The controller gains are then optimized to ensure a certain performance of the system, and the control commands are selected online at the actuator node according to the real situation of delays and packet losses.
Quality of service control
Yali Guo, Tricci So, in 5G NR and Enhancements, 2022
13.3 Quality of service parameters
13.3.1 5G quality of service identifier and the quality of service characteristics
The 5QI can be understood as a scalar that points to a set of QoS characteristics. These QoS characteristics are used to control the QoS-related operations in the access network, such as setting the scheduling weight, access threshold, queue management, link layer configuration, etc. The 5QI includes standardized 5QI, preconfigured 5QI, and dynamically assigned 5QI. For the dynamically assigned 5QI, not only the 5QI but also the complete set of QoS characteristics are included in the QoS profile. For the standardized and preconfigured 5QI, the QoS characteristics are not included in the QoS profile; the 5G–AN can resolve the set of QoS characteristics by itself. In addition, for a standardized or preconfigured 5QI, the core network can modify the corresponding QoS characteristics by providing a different value of the QoS characteristic. The standardized 5QI is mainly used for the more general and frequently used services. Providing the 5QI instead of the full set of QoS characteristics reduces the amount of signaling required. The dynamic allocation of 5QIs is mainly used for the infrequently used services that cannot be satisfied by standardized 5QIs.
The QoS characteristics are as follows:
1. Resource type: including GBR type, Delay critical GBR type, and Non-GBR type
2. Priority Level
3. Packet delay budget (PDB): represents the packet transmission delay from the UE to the UPF
4. Packet error rate (PER)
5. Average window: for GBR type and delay critical GBR type only
6. Maximum data burst volume (MDBV): for delay critical GBR type only
The resource type is used to determine whether the network will allocate dedicated network resources to a QoS Flow. The GBR type QoS Flow and delay critical GBR type QoS Flow need dedicated network resources to guarantee their GFBR. Non-GBR type QoS Flows do not require dedicated network resources. For the 5G system, the delay critical GBR type is added to support services with high reliability and low latency requirements. Such services, such as automatic industrial control, remote driving, intelligent transportation systems, energy distribution in the power system, etc., have high requirements on transmission delay and transmission reliability. As shown in Table 13.2, when comparing the 5QIs of the delay critical GBR type with those of the GBR and non-GBR types, the PDB for the delay critical GBR type is significantly lower, and the PER is also relatively low. In the 4G system, for the bitrate control of GBR services, a seconds-scale time window is preconfigured at the 4G RAN for calculating the GFBR. Based on this time window, the average bitrate needed to meet the GFBR requirement is calculated. However, because the time window is long, the GFBR may be guaranteed over the seconds-scale window yet not over a millisecond-scale window. In order to better support high-reliability and low-latency services, the 5G network also adds the MDBV for the delay critical GBR type 5QI, which represents the amount of data that the service needs to transmit within a millisecond-scale time window, and it provides a better guarantee for the support of high-reliability and low-latency services.
Table 13.2. Standardized 5QI to quality of service characteristics mapping.
5QI | Resource type | Priority level | Packet delay budget (ms) | Packet error rate | Maximum data burst volume | Average window (ms) |
---|---|---|---|---|---|---|
1 | GBR | 20 | 100 | 10−2 | N/A | 2000 |
2 | GBR | 40 | 150 | 10−3 | N/A | 2000 |
3 | GBR | 30 | 50 | 10−3 | N/A | 2000 |
4 | GBR | 50 | 300 | 10−6 | N/A | 2000 |
65 | GBR | 7 | 75 | 10−2 | N/A | 2000 |
66 | GBR | 20 | 100 | 10−2 | N/A | 2000 |
67 | GBR | 15 | 100 | 10−3 | N/A | 2000 |
71 | GBR | 56 | 150 | 10−6 | N/A | 2000 |
72 | GBR | 56 | 300 | 10−4 | N/A | 2000 |
73 | GBR | 56 | 300 | 10−8 | N/A | 2000 |
74 | GBR | 56 | 500 | 10−8 | N/A | 2000 |
76 | GBR | 56 | 500 | 10−4 | N/A | 2000 |
5 | Non-GBR | 10 | 100 | 10−6 | N/A | N/A |
6 | Non-GBR | 60 | 300 | 10−6 | N/A | N/A |
7 | Non-GBR | 70 | 100 | 10−3 | N/A | N/A |
8 | Non-GBR | 80 | 300 | 10−6 | N/A | N/A |
9 | Non-GBR | 90 | 300 | 10−6 | N/A | N/A |
69 | Non-GBR | 5 | 60 | 10−6 | N/A | N/A |
70 | Non-GBR | 55 | 200 | 10−6 | N/A | N/A |
79 | Non-GBR | 65 | 50 | 10−2 | N/A | N/A |
80 | Non-GBR | 68 | 10 | 10−6 | N/A | N/A |
82 | Delay critical GBR | 19 | 10 | 10−4 | 255 bytes | 2000 |
83 | Delay critical GBR | 22 | 10 | 10−4 | 1354 bytes | 2000 |
84 | Delay critical GBR | 24 | 30 | 10−5 | 1354 bytes | 2000 |
85 | Delay critical GBR | 21 | 5 | 10−5 | 255 bytes | 2000 |
86 | Delay critical GBR | 18 | 5 | 10−4 | 1354 bytes | 2000 |
Priority level is used for the radio resource allocation among the QoS Flows. It can be used among the QoS Flows of the same UE, or can be used among the QoS Flows of different UEs. When congestion occurs, the 5G–AN cannot guarantee the QoS requirements of all QoS Flows, so the priority level is used to guarantee the high-priority QoS Flows. When there is no congestion, the priority level can also be used to allocate resources between different QoS Flows, but it is not the only factor determining resource allocation.
PDB defines the maximum transmission latency between the UE and UPF. For a 5QI, the PDB of uplink and downlink data is the same. When the 5G–AN calculates the PDB between the UE and 5G–AN, it subtracts the PDB of core network side from the PDB corresponding to the 5QI, in other words, the PDB between the 5G–AN and UPF is subtracted. The core network-side PDB can be statically configured on the 5G–AN, or it can be dynamically determined by the 5G–AN according to the connection with the UPF, or it can be indicated to the 5G–AN by the SMF. For a GBR type QoS Flow, if the transmitted data does not exceed the GFBR, the network needs to ensure that the transmission latency of 98% of the packets does not exceed the PDB corresponding to the 5QI. However, for delay critical QoS Flows, if the transmitted data does not exceed GFBR and the burst data does not exceed MDBV, each packet exceeding the PDB corresponding to the 5QI will be considered as packet loss and included in PER. For the non-GBR type QoS Flow, the transmission latency beyond the PDB and packet dropping caused by congestion are allowed.
The PER defines an upper bound for the rate of PDUs that have been processed by the sender of a link layer protocol but that are not successfully delivered by the corresponding receiver to the upper layer. It is used to describe packet loss in noncongestion situations. PER is used to affect link layer configuration, such as RLC and HARQ configuration. For a 5QI, the PER of uplink and downlink data is the same. For non-GBR and GBR type QoS Flows, packets exceeding the PDB will not be included in PER, but for delay critical QoS Flows, if the transmitted data does not exceed GFBR and the data burst does not exceed MDBV, packets exceeding the PDB will be considered as packet loss and included in the PER.
The average window is only used for GBR and delay critical GBR type QoS Flows, and is used as the time duration for calculating GFBR and MFBR. This parameter also exists in the 4G system, but it is not mentioned in the 4G specification; it is a time window preconfigured on the 4G RAN. The average window is added in the 5G specification, which enables the core network to change the value of average window according to the service requirements, so as to better adapt to different services. For the standardized and preconfigured 5QI, although the average window corresponding to the 5QI has been standardized or configured, the core network can also change it to a different value.
The MDBV is only used for delay critical GBR type QoS Flows, which represents the maximum amount of data to be processed by the 5G–AN for the QoS Flow during the period of PDB. For the standardized and preconfigured 5QI of delay critical GBR type, although the MDBV corresponding to the 5QI has been standardized or preconfigured at the 5G–AN, the core network may provide different value to modify the MDBV.
The UDM saves the default 5QI value for each DNN. The default 5QI is a standardized 5QI of the non-GBR type. After the SMF obtains the default value of 5QI from the UDM, it is used to configure the parameters of the default QoS Flow. The SMF can modify the value of the default 5QI according to the interaction with the PCF or according to the local configuration at the SMF.
The mapping of the standardized 5QI to the QoS characteristics is defined in clause 5.7 of the 3GPP specification TS 23.501 [2], as shown in Table 13.2. It should be noted that the values in this table may be changed slightly in different versions of TS 23.501, so this is only an example to understand the mapping of standardized 5QI and the QoS characteristics.
13.3.2 Allocation and retention priority
The ARP includes three types of information: priority level, preemption capability, and preemption vulnerability. It is used to determine whether to allow the establishment, modification, or handover of QoS Flow when resources are limited. It is generally used for admission control of GBR type QoS Flows. The ARP is also used to preempt the resources of the existing QoS Flow when the resource is limited, such as releasing the existing QoS Flow to accept and establish a new QoS Flow.
The priority level of the ARP is used to indicate the importance of QoS Flow. The value is 1–15, and 1 represents the highest priority. Generally speaking, 1–8 can be assigned to the services authorized by the serving network. 9–15 is assigned to the services authorized by the home network, so it can be used for roaming. The value can also be assigned according to the roaming protocol.
Preemption capability indicates whether a QoS Flow may get resources that were already assigned to another QoS Flow with a lower ARP priority level.
Preemption vulnerability defines whether a QoS Flow may lose the resources assigned to it in order to admit a QoS Flow with higher ARP priority level. It can be set to allow or disable.
The UDM stores the default ARP value for each DNN. After the SMF obtains the default ARP value from the UDM, it is used for the parameter configuration of the default QoS Flow. The SMF can modify the default ARP value according to the interaction with the PCF or according to the local configuration of the SMF.
For QoS Flows other than the default QoS Flow, the SMF sets the ARP priority level, preemption capability, and preemption vulnerability in the PCC rules bound to the QoS Flow as ARP parameters of the QoS Flow. If the PCF is not deployed in the network, the ARP can also be set according to the local configuration in the SMF.
13.3.3 Bitrate-related parameters
The bitrate control-related parameters include GBR, MBR (Maximum Bit Rate), GFBR (Guaranteed Flow Bit Rate), MFBR (Maximum Flow Bit Rate), UE–AMBR (UE Aggregate Maximum Bit Rate), and Session-AMBR (Session Aggregate Maximum Bit Rate).
GBR and MBR are SDF level bitrate control parameters that are used for bitrate control of GBR type SDF. We introduced them in the PCC rule parameters in Section 13.2.2 earlier. MBR is necessary for GBR type SDF and optional for non-GBR type SDF. The UPF performs SDF level control of the MBR.
GFBR and MFBR are QoS Flow level bitrate control parameters that are used for bitrate control of GBR type QoS Flows. The GFBR instructs the 5G–AN to allocate enough resources to guarantee the bitrate of a QoS Flow within the average window. The MFBR is the maximum allowed bitrate of the QoS Flow; any data exceeding the MFBR may be discarded. Bitrates above the GFBR value and up to the MFBR value may be provided with relative priority determined by the Priority Level of the QoS Flows. The MFBR of the DL data of a QoS Flow is controlled at the UPF; the 5G–AN also controls the MFBR of both UL and DL data, and the UE can perform MFBR control of UL data.
Both the UE–AMBR and Session-AMBR are used for non-GBR type QoS Flow.
The Session-AMBR limits the aggregate bitrate that can be expected to be provided across all non-GBR QoS Flows for a specific PDU Session. When each PDU session is established, the SMF obtains the subscribed session-AMBR. The SMF can modify the value of Session-AMBR according to the interaction with the PCF or according to the local configuration of the SMF. The UE can control Session-AMBR of uplink data, and the UPF can also control session-AMBR of both uplink and downlink data.
The UE–AMBR limits the aggregate bitrate that can be expected to be provided across all non-GBR QoS Flows of a UE. The AMF can obtain the subscribed UE–AMBR from the UDM and modify it according to the PCF's instructions. The AMF provides the UE–AMBR to the 5G–AN, and the 5G–AN recalculates the UE–AMBR by setting it to the sum of the Session-AMBRs of all PDU Sessions with an active user plane to this 5G–AN, capped at the value of the UE–AMBR received from the AMF. The 5G–AN is responsible for the uplink and downlink rate control of the UE–AMBR.
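A hedged sketch of that recalculation (function and variable names are illustrative; units here are bits per second):

```python
# Sketch of the UE-AMBR recalculation described above: the 5G-AN caps the sum
# of the Session-AMBRs of PDU Sessions with an active user plane at the UE-AMBR
# received from the AMF.
def effective_ue_ambr(session_ambrs_active: list[float],
                      ue_ambr_from_amf: float) -> float:
    return min(sum(session_ambrs_active), ue_ambr_from_amf)

# Two active PDU Sessions of 50 and 80 Mbit/s against a 100 Mbit/s cap from AMF:
print(effective_ue_ambr([50e6, 80e6], 100e6))  # 100000000.0
```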
IoT-driven data extraction applications using common information model in a hybrid microgrid system
Prabodh Bajpai, Dinesh Varma Tekumalla, in Design, Analysis, and Applications of Renewable Energy Systems, 2021
28.2.2 Quality of service
QoS is a fundamental component in a communication network for delivering data packets satisfactorily to end-user devices. The end-to-end packet delay is a QoS metric and is especially important for delay-sensitive applications. The most important factors that directly influence end-to-end delay are the arrival rate and the service rate. The exact relation between them can be established by the effective capacity model (Wu & Negi, 2003). The effective capacity assumes the existence of buffers, which cause delays, and may be considered as the capacity that is constrained by delay. The effective capacity model translates the effects of a time-varying service rate into the end-to-end packet delay distribution, or complementary cumulative density function (CCDF) of packet delay.
The trace contains information on the arrival time and size of every packet over the collection period. The Lindley equation (Lindley, 1952) is a general form for describing the evolution of delay processes. An Ethernet communication is usually modeled as a G/G/1 queueing network, and the Lindley equation is used for the analysis of the trace, including the data payload and packet size distributions, to fit the CCDFs of end-to-end packet delay.
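As a sketch of how such an analysis might look, the Lindley recursion for a single-server FIFO (G/G/1) queue can be applied to a trace of interarrival gaps and service times to obtain an empirical delay CCDF; the trace values below are made up for illustration:

```python
# Lindley recursion for a single-server FIFO queue:
# W[n+1] = max(0, W[n] + S[n] - A[n+1]), where S[n] is the service time of
# packet n and A[n+1] is the interarrival gap between packets n and n+1.
# Delay of a packet = its waiting time + its own service time.
def waiting_times(interarrivals: list[float], service_times: list[float]) -> list[float]:
    w = [0.0]
    for n in range(len(service_times) - 1):
        w.append(max(0.0, w[n] + service_times[n] - interarrivals[n + 1]))
    return w

def delay_ccdf(delays: list[float], threshold: float) -> float:
    """Fraction of packets whose delay exceeds the threshold."""
    return sum(d > threshold for d in delays) / len(delays)

interarrivals = [0.0, 1.0, 0.2, 0.3, 2.0]   # seconds between arrivals (index 0 unused)
service = [0.5, 0.4, 0.6, 0.5, 0.3]         # per-packet service times in seconds
waits = waiting_times(interarrivals, service)
delays = [w + s for w, s in zip(waits, service)]
print(delay_ccdf(delays, threshold=1.0))
```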
QoS metrics reflect negative effects of congestion in packet-switched networks and are usually related to three network performance metrics (Olifer & Olifer, 2006; Tanenbaum, 2003):
1. Delay: Packets are delivered to destination nodes with delays, which may vary from packet to packet (one measure is jitter).
2. Throughput (or delivered data rate): Different traffic types require different data rates. Data rate is measured over some time interval by dividing the volume of successfully transmitted data by the interval duration.
3. Loss: Packets fail to reach their destinations. Losses can occur in the physical layer, link layer, and/or network layer.
Multimedia Networks and Communication
Shashank Khanvilkar, … Ashfaq Khokhar, in The Electrical Engineering Handbook, 2005
Admission Control
Admission control is a proactive form of congestion control (as opposed to the reactive congestion control used in protocols like TCP) that ensures that demand for network resources never exceeds the supply. Preventing congestion from occurring reduces packet delay and loss, which improves real-time performance.
An admission control module (see Figure 7.4) takes as input the traffic descriptor and the QoS requirements of the flow, and outputs its decision either to accept the flow at the requested QoS or to reject it if that QoS cannot be met (Sun and Jain, 1999). For this, it consults the admission criteria module, which holds the rules by which an admission control scheme accepts or rejects a flow. Since the network resources are shared by all admitted flows, the decision to accept a new flow may affect the QoS commitments made to the already admitted flows. Therefore, an admission control decision is usually made based on an estimation of the effect the new flow will have on other flows and on the utilization target of the network.
FIGURE 7.4. Admission Control Components. Adapted from Tang et al. (1999).
Another useful component of admission control is the measurement process module. If we assume sources can characterize their traffic accurately using traffic descriptors, the admission control unit can simply use parameters in the traffic descriptors. It is observed, however, that real-time traffic sources are very difficult to characterize, and the leaky bucket parameters may only provide a very loose upper bound of the traffic rate. When the real traffic becomes bursty, the network utilization can get very low if admission control is solely based on the parameters provided at call setup time. Therefore, the admission control unit should monitor the network dynamics and use measurements such as instantaneous network load and packet delay to make its admission decisions.
Congestion control mechanisms in vehicular networks: A perspective on Internet of vehicles (IoV)
Ashish Patil, … N. Shekar V. Shet, in Autonomous and Connected Heavy Vehicle Technology, 2022
2.1 ML-CC
Traffic collisions at road intersections can be reduced with the help of warning messages to nearby vehicles. The centralized data congestion control mechanism, machine learning congestion control (ML-CC), proposed in [18], improves reliability and reduces packet loss and delay using the roadside units (RSUs). The vehicle clusters are formed with the help of the K-means clustering algorithm applied to the gathered and filtered messages. These clusters also consider the type, validity, and size of messages, and the congestion is measured by the channel usage level. The ML-CC mechanism modifies different communication parameters, namely transmission rate, coverage radius, size of the contention window, and arbitration interframe space (AIFS). ML-CC considers the transfer delay of each cluster while adjusting these parameters in the congested areas. This centralized ML-CC mechanism operates independently at each RSU and has three separate units for detecting congestion, controlling data, and controlling congestion (Fig. 11.3).
Fig. 11.3. Workflow of MLCC mechanism [18].
Congestion detection unit: The congestion is measured in this unit by periodically sensing the channel to measure the amount of channel used, the number of pending messages in queues, and the amount of the channel that is occupied. If the predefined threshold is exceeded by the measured parameters, the congestion detection units assume that the channel is congested and send a congestion control message to other units.
Data control unit: Data gathering, clustering, and filtering are the components of the data control unit. All the messages communicated between vehicles are collected, and the available data is gathered using either of the following techniques: (1) after congestion is detected, messages are collected every 100 ms; (2) all the messages communicated between the vehicles waiting at red traffic signals at intersections are collected all the time. Extra processing of the same messages is eliminated by removing redundancy in the filtering step. After filtering, the messages are clustered using a machine learning algorithm that takes the features of the filtered messages into consideration. The K-means clustering algorithm is used for message clustering in VANETs. K-means has gained popularity and is widely used because of its simplicity, rapid learning, and scalability to large multidimensional data processing.
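For illustration only, a hedged sketch of such a clustering step using scikit-learn's KMeans; the feature choice (message size, validity time, type code), the value k=3, and the sample values are assumptions, not details taken from the ML-CC paper:

```python
# Illustrative clustering of filtered messages by simple numeric features.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [message_size_bytes, validity_ms, message_type_code]
messages = np.array([
    [200,  100, 0],
    [210,  120, 0],
    [800,  500, 1],
    [790,  480, 1],
    [1500, 50,  2],
    [1480, 60,  2],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(messages)
print(labels)  # cluster index assigned to each message
```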
Congestion control unit: The communication parameters stated earlier are determined for every cluster present in the data control unit, and these parameters are adjusted in the congestion control unit based on the congestion. Transmission rate and coverage radius are the parameters that considerably affect the performance of VANETs; safety messages are sent with a larger radius to cover more area, but this eventually increases collisions. Also, a high transmission rate may saturate the channel and increase the channel load. The ML-CC strategy selects different values of these communication parameters, as specified by the standards, to reduce the congestion. This processing is done in the congestion control unit. The ML-CC strategy is closed-loop, centralized, and localized at the RSU; it improves performance by reducing the average delay, packet loss ratio, and probability of collision in urban scenarios. As there is no significant change in the performance metrics, the ML-CC strategy is scalable, and clustering improves the performance of the targeted scenario.
Quality of Service, Charging, and Policy Control
Magnus Olsson, … Catherine Mulligan, in EPC and 4G Packet Networks (Second Edition), 2013
Standardized QCI Values and Corresponding Characteristics
Certain QCI values have been standardized to reference specific QoS characteristics. The QoS characteristics describe the packet forwarding treatment that the traffic for a bearer receives edge-to-edge between the UE and the GW in terms of certain performance characteristics such as priority, packet delay budget, and packet error loss rate. The standardized characteristics are not signaled on any interface; they should instead be understood as guidelines for the preconfiguration of node-specific parameters for each QCI. For example, the radio base station would need to be configured to ensure that traffic belonging to a bearer with a certain standardized QCI receives the appropriate QoS treatment. The goal of standardizing a QCI with corresponding characteristics is to ensure that applications/services mapped to that QCI receive the same minimum level of QoS in multi-vendor network deployments and in cases of roaming. The standardized QCI characteristics are defined in clause 6.1.7 in 3GPP TS 23.203. A simplified description can be found in Table 8.1.
Table 8.1. Standardized QCI Characteristics
QCI | Resource Type | Priority | Packet Delay Budget | Packet Error Loss Rate | Example Services |
---|---|---|---|---|---|
1 | GBR | 2 | 100 ms | 10−2 | Conversational voice |
2 | GBR | 4 | 150 ms | 10−3 | Conversational video (live streaming) |
3 | GBR | 3 | 50 ms | 10−3 | Real-time gaming |
4 | GBR | 5 | 300 ms | 10−6 | Non-conversational video (buffered streaming) |
5 | Non-GBR | 1 | 100 ms | 10−6 | IMS signaling |
6 | Non-GBR | 6 | 300 ms | 10−6 | Video (buffered streaming), TCP-based (www, e-mail, chat, ftp, p2p file sharing, progressive video, etc.) |
7 | Non-GBR | 7 | 100 ms | 10−3 | Voice, video (live streaming), interactive gaming |
8 | Non-GBR | 8 | 300 ms | 10−6 | Video (buffered streaming), TCP-based (www, e-mail, chat, ftp, p2p file sharing, progressive video, etc.) |
9 | Non-GBR | 9 | 300 ms | 10−6 | Video (buffered streaming), TCP-based (www, e-mail, chat, ftp, p2p file sharing, progressive video, etc.) |
The QCI values 1–4 are allocated for traffic that requires dedicated resource allocation for a GBR, while values 5–9 are not associated with GBR requirements. Each standardized QCI is associated with a priority level, where priority level 1 is the highest priority level. The Packet Delay Budget can be described as an upper bound for the time that a packet may be delayed between the UE and the PCEF (Policy and Charging Enforcement Function). The Packet Error Loss Rate can, in a simplified manner, be described as an upper bound for the rate of non-congestion-related packet losses.
Note that the description above gives a very simplified definition of the standardized QCIs, omitting many of the details. The purpose is to give the general reader a basic view of the topic. The interested reader should consult TS 23.203 for the complete definitions.
Apart from these standardized QCIs, non-standardized QCIs may also be used. In this case it is the operators and/or vendors who define what node-specific parameters are used for a given QCI.
Quality of service and security
Dimitrios Serpanos, Tilman Wolf, in Architecture of Network Systems, 2011
Applications requiring QoS
Quality of service is typically only required by applications that have some sort of real-time constraint. If time is not an issue, reliable data transfer is the only remaining concern. The Transmission Control Protocol provides such reliability (see Appendix A) and no further QoS functionality is necessary. However, if time is an issue, then an interesting problem needs to be solved: depending on the state of the network, reliable data transfer (which involves retransmissions of packets that have been lost or corrupted) may not be feasible within the timing constraints of the application. Thus, the network may need to provide additional QoS support to accommodate such an application.
To illustrate what types of performance requirements can be found in different applications, several examples are discussed.
- Internet telephony: Interactive voice communication in the Internet—also referred to as Voice over IP—has very strict requirements on end-to-end packet delay and loss rate. End-user perception of call quality degrades significantly with end-to-end delays of more than 150 ms or packet loss. Packet losses can be hidden through the use of transport layer protocols that provide reliable data transfer. However, notifying the sender of a packet loss and waiting for the retransmission incurs so much additional delay that the delay bound cannot be met in many cases. Therefore, many voice communication applications use unreliable data transfers and tolerate packet losses. To support Internet telephony, the network needs to transmit packets with as little delay as possible.
- Video conferencing: Video conferencing is similar to Internet telephony, but consists of two data streams: one for audio and one for video. The delay and loss requirements for the audio stream are similar to that of Internet telephony. The delay requirements for the video stream are similar, as audio and video need to be displayed in sync. Because video quality is perceived as less important than audio quality, more loss can be tolerated for that stream. Network support for this application involves low delay. Because video requires considerably more data than audio, it is also necessary that the network supports a higher data rate for the connection.
- Video streaming: Video streaming is used for noninteractive video distribution (e.g., video on demand). The real-time constraints are less demanding than in video conferencing, as a user is likely willing to wait a few seconds for a video stream to buffer before playback starts. In such a scenario, video quality is more important. Thus, reliable data transfer protocols may be employed. To ensure continuous playback, the network needs to provide sufficient bandwidth (and limited packet loss).
- Cyber-physical control: Numerous physical systems are controlled remotely through a network. Examples include factory control, remote-controlled unmanned aerial vehicles, and, in the near future, telemedicine. Such control requires low delay and very low (or no) packet losses. Some applications in this domain involve high-quality video, which requires high bandwidth.
- Online gaming: Interactive games that involve multiple players require low delay between the interacting parties to provide for a realistic gaming experience. Networks need to support low delay communication with low packet loss.
From these applications, we can see which quality of service metrics need to be considered in a network.
Link-Level and System-Level Performance of LTE-Advanced
Sassan Ahmadi, in LTE-Advanced, 2014
11.1.4 Packet delay
If the jth packet of the ith transmission belongs to user k, arrives at the BS (or MS) MAC at time instant $T^{\mathrm{arrival}}_{k,i,j}$, and is delivered to the MS (or BS) MAC sublayer at time instant $T^{\mathrm{departure}}_{k,i,j}$, then the packet delay can be calculated as follows:
(11.5) $D^{DL(UL)}_{k,i,j} = T^{\mathrm{departure}}_{k,i,j} - T^{\mathrm{arrival}}_{k,i,j}$
The downlink and uplink delays are denoted by superscript "DL" or "UL," respectively. The packets that are dropped or erased may not be included in the analysis of packet delays, depending on the traffic model. For example, in the modeling of traffic for delay-sensitive applications, packets may be dropped if packet transmissions are not completed within a specified delay bound. The impact of the dropped packets is included in the packet loss rate. The CDF of the packet delay per user provides a basis from which the maximum latency, the 98th percentile latency, and the average latency, as well as jitter, can be derived. The 98th percentile point of the CDF of packet delay denotes the packet delay value for which 98% of packets have a delay less than that value. As an example, the VoIP capacity is defined such that the percentage of users in outage is less than 2%, where a user is assumed to have experienced a service outage if less than 98% of the VoIP packets have been delivered successfully to the user within a one-way radio access delay bound of 50 ms.
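A hedged sketch of that outage criterion as it might be evaluated from per-user delay samples; the helper names and the example delay values are assumptions for illustration:

```python
# Sketch of the VoIP outage criterion: a user is in outage if fewer than 98% of
# its VoIP packets arrive within the 50 ms delay bound; capacity requires the
# fraction of users in outage to stay below 2%.
def user_in_outage(delays_ms: list[float], bound_ms: float = 50.0,
                   required_fraction: float = 0.98) -> bool:
    on_time = sum(d <= bound_ms for d in delays_ms)
    return on_time / len(delays_ms) < required_fraction

def outage_percentage(per_user_delays: list[list[float]]) -> float:
    in_outage = sum(user_in_outage(d) for d in per_user_delays)
    return 100.0 * in_outage / len(per_user_delays)

users = [[10, 20, 30, 40], [10, 20, 70, 80], [5, 15, 25, 35]]
print(outage_percentage(users))  # one of three users misses the 98% target
```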
The average packet delay is defined as the average interval between packets originated at the source (either MS or BS) and received at the destination (either BS or MS) in a system for a given duration of transmission. The average packet delay for user k, $\bar{D}^{DL(UL)}_{k}$, is given by:
(11.6) $\bar{D}^{DL(UL)}_{k} = \frac{\sum_{i=1}^{p_k}\sum_{j=1}^{q_{ki}} \left(T^{\mathrm{departure}}_{k,i,j} - T^{\mathrm{arrival}}_{k,i,j}\right)}{\sum_{i=1}^{p_k} q_{ki}}$
The CDF of users’ average packet delay is the cumulative distribution of the average packet delay observed by all users in the cell. The packet loss ratio per user is defined as:
(11.7) $\text{Packet loss ratio} = 1 - \frac{\text{Total number of successfully delivered packets}}{\text{Total number of packets}}$
where the total number of packets includes packets that were transmitted over the air-interface and packets that were dropped prior to transmission. The data throughput of a BS is defined as the number of information bits per second that a cell can successfully deliver or receive via the air-interface using appropriate scheduling algorithms.
Long-term evolution for machines (LTE-M)
Suresh R. Borkar, in LPWAN Technologies for IoT and M2M Applications, 2020
7.3.6 Use of LTE priority structure for LTE-M applications
One of the distinctive features of LTE is the very versatile and robust priority assignment structure for user-traffic bearers. There are 13 priority levels, ranging from 0.5 (highest) to 9 (lowest), supporting guaranteed bit rate (GBR) and non-GBR (N-GBR) bearers [20]. The priority levels are characterized by varying combinations of packet delay and packet error rate (PER). Packet delays range from 50 to 300 milliseconds, and a range of 10−2 to 10−6 is available for PER. The standards provide suggested mappings of various application classes to these priority levels, but the network operators can apply their own discretion in assigning the priority levels for their services. This is a strong and effective platform for managing the varying quality of service (QoS) requirements of LTE-M-based LPWAN services and provides a unique advantage to LTE-M.
Several classes of applications can be mapped for LTE-M usage. A representative mapping is suggested in Table 7–1. The best available PER of 10−6 is used since data integrity is very important for MTC type applications. This is paired with three options for packet delay—60, 100, and 300 milliseconds.
Table 7–1. Proposed LTE-M priority structure.
3GPP suggested service | Priority level | Est. packet delay (milliseconds) | Est. packet error rate | Suggested LTE-M application |
---|---|---|---|---|
Nonconversational video | 5 (GBR) | 300 | 10−6 | MTC 1 |
IMS signaling | 1 (N-GBR) | 100 | 10−6 | MTC 2 |
Mission-critical delay-sensitive signaling | 0.5 (N-GBR) | 60 | 10−6 | MTC 3 |
MTC 1 applications may be delay-tolerant, for example utility meters. MTC 3 applications may be delay-sensitive, for example remote robotic surgery. MTC 2 can be applied to applications such as smart city services, which fall between these two extremes.
We will look a little bit further into LTE QoS that we discussed last time, and learn what QoS parameters are for.
There are two types of EPS bearers: default and dedicated. In the LTE network, the EPS bearer QoS is controlled using the following LTE QoS parameters:
▶ Resource Type: GBR or Non-GBR
▶ QoS Parameters
- QCI
- ARP
- GBR
- MBR
- APN-AMBR
- UE-AMBR
Every EPS bearer must have QCI and ARP defined. The QCI is particularly important because it serves as a reference in determining the QoS level for each EPS bearer. In case of bandwidth (bit rate), GBR and MBR are defined only in GBR type EPS bearers, whereas AMBR (APN-AMBR and UE-AMBR) is defined only in Non-GBR type EPS bearers.
Below, we will explain the LTE QoS parameters one by one.
Resource Type = GBR (Guaranteed Bit Rate)
For an EPS bearer, having a GBR resource type means the bandwidth of the bearer is guaranteed. Obviously, a GBR type EPS bearer has a "guaranteed bit rate" associated (GBR will be further explained below) as one of its QoS parameters. Only a dedicated EPS bearer can be a GBR type bearer and no default EPS bearer can be GBR type. The QCI of a GBR type EPS bearer can range from 1 to 4.
Resource Type = Non-GBR
For an EPS bearer, having a non-GBR resource type means that the bearer is a best effort type bearer and its bandwidth is not guaranteed. A default EPS bearer is always a Non-GBR bearer, whereas a dedicated EPS bearer can be either GBR or non-GBR. The QCI of a non-GBR type EPS bearer can range from 5 to 9.
QCI (QoS Class Identifier)
QCI, an integer from 1 to 9, indicates nine different QoS performance characteristics of each IP packet. QCI values are standardized to reference specific QoS characteristics, and each QCI contains standardized performance characteristics (values), such as resource type (GBR or non-GBR), priority (1~9), Packet Delay Budget (allowed packet delay, with values ranging from 50 ms to 300 ms), and Packet Error Loss Rate (allowed packet loss, with values from 10^-2 to 10^-6). For more specific values, search for "3GPP TS 23.203" and see Table 6.1.7 in that document. For example, QCI 1 and 9 are defined as follows:
QCI = 1
: Resource Type = GBR, Priority = 2, Packet Delay Budget = 100ms, Packet Error Loss Rate = 10^-2, Example Service = Voice
QCI = 9
: Resource Type = Non-GBR, Priority = 9, Packet Delay Budget = 300ms, Packet Error Loss Rate = 10^-6, Example Service = Internet
QoS to be guaranteed for an EPS bearer or SDF varies depending on the QCI values specified.
QCI, though a single integer, represents node-specific parameters that give the details of how an LTE node handles packet forwarding (e.g. scheduling weights, admission thresholds, queue thresholds, link layer protocol configuration, etc). Network operators have their LTE nodes pre-configured to handle packet forwarding according to the QCI value.
By pre-defining the performance characteristics of each QCI value and having them standardized, the network operators can ensure the same minimum level QoS required by the LTE standards is provided to different services/applications used in an LTE network consisting of various nodes from multi-vendors.
QCI values seem to be mostly used by eNBs in controlling the priority of packets delivered over radio links. That's because, in practice, it is not easy for an S-GW or P-GW on a wired link to process packets and also forward them based on the QCI characteristics all at the same time (as you may know, a Cisco or Juniper router would not care about delay or error loss rate when it processes the QoS of packets; it would merely decide which packet to send first through scheduling (WFQ, DWRR, SPQ, etc.) based on the priority of the packets (802.1p/DSCP/MPLS EXP)).
ARP (Allocation and Retention Priority)
When a new EPS bearer is needed in an LTE network with insufficient resources, an LTE entity (e.g. P-GW, S-GW or eNB) decides, based on ARP (an integer ranging from 1 to 15, with 1 being the highest level of priority), whether to:
- remove the existing EPS bearer and create a new one (e.g. removing an EPS bearer with low priority ARP to create one with high priority ARP); or
- refuse to create a new one.
So, the ARP is considered only when deciding whether to create a new EPS bearer or not. Once a new bearer is created and packets are delivered through it, the ARP does not affect the priority of the delivered packet, and thus the network node/entity forwards the packets regardless of their ARP values.
One of the most representative examples of using the ARP is an emergency VoIP call. So, an existing EPS bearer can be removed if a new one is required for an emergency 119 (911 in the US, 112 in the EU, etc.) VoIP call.
GBR (UL/DL)
This parameter is used for a GBR type bearer, and indicates the bandwidth (bit rate) to be guaranteed by the LTE network. It is not applied to a non-GBR bearer with no guaranteed bandwidth (UL is for uplink traffic and DL is for downlink traffic).
MBR (UL/DL)
MBR is used for a GBR type bearer, and indicates the maximum bit rate allowed in the LTE network. Any packets arriving at the bearer after the specified MBR is exceeded will be discarded.
APN-AMBR (UL/DL)
As you read the foregoing paragraph, you may have wondered why a non-GBR type bearer does not have a "bandwidth limit". In the case of non-GBR bearers, it is the total bandwidth of all the non-GBR EPS bearers in a PDN that is limited, not the individual bandwidth of each bearer. This restriction is controlled by APN-AMBR (UL/DL). As seen in the figure above, there are two non-GBR EPS bearers, and their combined maximum bandwidth is specified by the APN-AMBR (UL/DL). This parameter is applied at the UE (for UL traffic only) and at the P-GW (for both DL and UL traffic).
UE-AMBR (UL/DL)
In the figure above, APN-AMBR and UE-AMBR look the same. But, please take a look at the one below.
A UE can be connected to more than one PDN (e.g. PDN 1 for Internet, PDN 2 for VoIP using IMS, etc.) and it has one unique IP address for each of its PDN connections. Here, UE-AMBR (UL/DL) indicates the maximum bandwidth allowed for all the non-GBR EPS bearers associated with the UE, no matter how many PDN connections the UE has. Because different PDNs are connected through different P-GWs, this parameter is applied by eNBs only.