
Queueing theory (also commonly spelled queuing theory) is the mathematical study of waiting lines (or queues).

There are several related processes: arriving at the back of the queue, waiting in the queue (essentially a storage process), and being served by the server at the front of the queue. Queueing theory is applicable in transport and telecommunication and is occasionally linked to ride theory.

Application of queueing theory to telephony

Public Switched Telephone Networks (PSTNs) are designed to accommodate the offered traffic intensity with only a small loss. The performance of loss systems is quantified by their Grade of Service (GoS), driven by the assumption that if insufficient capacity is available, the call is refused and lost. Alternatively, overflow systems make use of alternative routes to divert calls via different paths, though even these systems have a finite maximum traffic-carrying capacity. The use of queueing in PSTNs, however, allows the systems to queue their customers' requests until free resources become available. This means that if traffic intensity levels exceed available capacity, customers' calls are no longer lost; they instead wait until they can be served. This method is used when queueing customers for the next available operator.

A queueing discipline determines the manner in which the exchange handles calls from customers. It defines the way they will be served, the order in which they are served, and the way in which resources are divided between the customers. Here are details of three queueing disciplines:

First In First Out (FIFO) - customers are serviced according to their order of arrival.
Last In First Out (LIFO) - the last customer to arrive in the queue is the first to be serviced.
Processor Sharing (PS) - customers are serviced equally, i.e. they experience the same amount of delay.

Incoming traffic to queueing systems is modelled via a Poisson distribution, with the following assumptions:

Pure-chance traffic - call arrivals and departures are random and independent events.
Statistical equilibrium - probabilities within the system do not change.
Full availability - all incoming traffic can be routed to any other customer within the network.
Congestion is cleared as soon as servers are free.

Solutions of queueing problems

The most easily used methods to solve queueing problems are analytical methods, which make the following assumptions: arrival times are random and the times between arrivals are distributed exponentially; service times are also distributed exponentially; the queue is of FCFS type; and there are no significant interdependencies.

The average customer waiting time (Wq) is defined as:

Wq = (Sav)^2 / (Aav - Sav), for Aav > Sav (1.1)

where Aav is the average time between arrivals and Sav is the average service time. The mean time required for a customer to wait and be serviced is:

Wm = Wq + Sav (1.2)

Exercise 1

Suppose the average service time per call at the Telkom Call Centre is 5 minutes, and on average calls arrive at 20-minute intervals.

1. What is the average time that a customer will hold before being serviced?
2. What is the mean time spent by a customer in total?

1. From equation 1.1:

Wq = (Sav)^2 / (Aav - Sav)

Wq = 5^2 / (20 - 5)

Wq = 25 / 15 = 1.67 minutes

2. From equation 1.2:

Wm = Wq + Sav

Wm = 1.67 minutes + 5 minutes = 6.67 minutes
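The same calculation can be scripted. Below is a minimal Python sketch of equations 1.1 and 1.2; the function and variable names are illustrative and not part of the original text.

```python
def waiting_time(aav: float, sav: float) -> float:
    """Average wait before service, Wq = Sav^2 / (Aav - Sav), valid for Aav > Sav."""
    if aav <= sav:
        raise ValueError("Average inter-arrival time must exceed average service time.")
    return sav ** 2 / (aav - sav)


def total_time(aav: float, sav: float) -> float:
    """Mean time spent waiting plus being serviced, Wm = Wq + Sav."""
    return waiting_time(aav, sav) + sav


# Exercise 1: Sav = 5 minutes, Aav = 20 minutes
print(round(waiting_time(20, 5), 2))  # 1.67 minutes
print(round(total_time(20, 5), 2))    # 6.67 minutes
```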

Regression analysis. It sounds like a part of Freudian psychology. In reality, regression is a seemingly ubiquitous statistical tool appearing in legions of scientific papers, and regression analysis is a method of measuring the link between two or more phenomena. Imagine you want to know the connection between the square footage of houses and their sale prices. A regression charts such a link, in so doing pinpointing "an average causal effect," as MIT economist Josh Angrist and his co-author Jorn-Steffen Pischke of the London School of Economics put it in their 2009 book, Mostly Harmless Econometrics.

To grasp the basic concept, take the simplest form of a regression: a linear, bivariate regression, which describes an unchanging relationship between two (and not more) phenomena. Now suppose you are wondering whether there is a connection between the time high school students spend doing French homework and the grades they receive. These data can be plotted as points on a graph, where the x-axis is the average number of hours per week a student studies and the y-axis represents exam scores out of 100. Together, the data points will typically scatter a bit on the graph. The regression analysis creates the single line that best summarizes the distribution of points.

Mathematically, the line representing a simple linear regression is expressed through a basic equation: Y = a0 + a1 X. Here X is hours spent studying per week, the independent variable. Y is the exam score, the dependent variable, since we believe those scores depend on time spent studying. Additionally, a0 is the y-intercept (the value of Y when X is zero) and a1 is the slope of the line, characterizing the relationship between the two variables. Using two slightly more complex equations, the normal equations for the basic linear regression line, we can plug in all the values of X and Y, solve for a0 and a1, and actually draw the line. That line often minimizes the aggregate of the squares of the distances between all the points and itself; this is the Ordinary Least Squares (OLS) method mentioned in mountains of academic papers.

To see why OLS is logical, imagine a regression line running 6 units below one data point and 6 units above another point; it is 6 units away from the two points, on average. Now suppose a second line runs 10 units below one data point and 2 units above another point; it is also 6 units away from the two points, on average. But if we square the distances involved, we get different results: 6^2 + 6^2 = 72 in the first case, and 10^2 + 2^2 = 104 in the second case. So the first line yields the lower figure, the "least squares," and is a more consistent reduction of the distance from the data points. (Additional methods, besides OLS, can find the best line for more complex forms of regression analysis.) In turn, the typical distance between the line and all the points (sometimes called the standard error) indicates whether the regression analysis has captured a relationship that is strong or weak. The closer the line is to the data points, overall, the stronger the relationship.

Regression analysis, again, establishes a correlation between phenomena. But as the saying goes, correlation is not causation. Even a line that fits the data points closely may not say something definitive about causality. Perhaps some students do succeed in French class because they study hard. Or perhaps those students benefit from better natural linguistic abilities and merely enjoy studying more, without especially benefiting from it. Perhaps there would be a stronger correlation between test scores and the total time students had spent hearing French spoken before they ever entered this particular class. The tale that emerges from good data may not be the whole story. So it still takes critical thinking and careful studies to locate meaningful cause-and-effect relationships in the world.
But at a minimum, regression analysis helps establish the existence of connections that call for closer investigation.
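To make the normal-equations step concrete, here is a short Python sketch of a bivariate OLS fit. The data values and the function name are invented for illustration; they are not from the text.

```python
def fit_line(x, y):
    """Ordinary Least Squares fit for Y = a0 + a1 X via the normal equations."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope: a1 = sum((X - mean_X)(Y - mean_Y)) / sum((X - mean_X)^2)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    a1 = sxy / sxx
    a0 = mean_y - a1 * mean_x  # intercept: the fitted line passes through the means
    return a0, a1


# Hypothetical data: hours of French homework per week vs. exam score out of 100
hours = [1, 2, 3, 4, 5, 6]
scores = [55, 62, 66, 71, 75, 83]
a0, a1 = fit_line(hours, scores)
print(f"Y = {a0:.1f} + {a1:.1f} X")  # each extra hour of study adds roughly a1 points
```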

The Nyquist-Shannon sampling theorem, after Harry Nyquist and Claude Shannon, is a fundamental result in the field of information theory, in particular telecommunications and signal processing. Sampling is the process of converting a signal (for example, a function of continuous time or space) into a numeric sequence (a function of discrete time or space).

A Quick Primer on Sampling Theory

The signals we use in the real world, such as our voices, are called "analog" signals. To process these signals in computers, we need to convert them to "digital" form. While an analog signal is continuous in both time and amplitude, a digital signal is discrete in both time and amplitude. To convert a signal from continuous time to discrete time, a process called sampling is used. The value of the signal is measured at certain intervals in time, and each measurement is referred to as a sample. (The analog signal is also quantized in amplitude, but that process is ignored in this demonstration. See the Analog to Digital Conversion page for more on that.)

When the continuous analog signal is sampled at a frequency F, the resulting discrete signal has more frequency components than the analog signal did. To be precise, the frequency components of the analog signal are repeated at the sample rate: in the discrete frequency response they are seen at their original position, and also centered around +/- F, around +/- 2F, and so on.

How many samples are necessary to ensure we are preserving the information contained in the signal? If the signal contains high-frequency components, we will need to sample at a higher rate to avoid losing information that is in the signal. In general, to preserve the full information in the signal, it is necessary to sample at twice the maximum frequency of the signal. This is known as the Nyquist rate. The Sampling Theorem states that a signal can be exactly reproduced if it is sampled at a frequency F that is greater than twice the maximum frequency in the signal.

What happens if we sample the signal at a frequency lower than the Nyquist rate? When the signal is converted back into a continuous-time signal, it will exhibit a phenomenon called aliasing: the presence of unwanted components in the reconstructed signal that were not present when the original signal was sampled. In addition, some of the frequencies in the original signal may be lost in the reconstructed signal. Aliasing occurs because signal frequencies can overlap if the sampling frequency is too low; frequencies "fold" around half the sampling frequency, which is why this frequency is often referred to as the folding frequency.
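As a rough illustration of frequency folding, the sketch below computes where a single sinusoid would appear after ideal sampling and reconstruction. The helper name is hypothetical and the logic assumes one real-valued tone with no filtering.

```python
def apparent_frequency(f_signal: float, f_sample: float) -> float:
    """Frequency at which a sampled sinusoid appears after reconstruction.

    Spectral components repeat at multiples of the sample rate, so the observed
    frequency is the distance to the nearest multiple of f_sample, i.e. the tone
    folds around f_sample / 2 (the folding frequency).
    """
    f = f_signal % f_sample
    return min(f, f_sample - f)


print(apparent_frequency(140, 400))  # 140 -- sampled above the Nyquist rate, no aliasing
print(apparent_frequency(140, 200))  # 60  -- undersampled, the tone folds to a false frequency
```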

Sometimes the highest frequency components of a signal are simply noise, or do not contain useful information. To prevent aliasing of these frequencies, we can filter out these components before sampling the signal. Because we are filtering out high frequency components and letting lower frequency components through, this is known as low-pass filtering.

Demonstration of Sampling

The original signal in the applet below is composed of three sinusoid functions, each with a different frequency and amplitude. The example here has the frequencies 28 Hz, 84 Hz, and 140 Hz. Use the filtering control to filter out the higher-frequency components. This filter is an ideal low-pass filter, meaning that it exactly preserves any frequencies below the cutoff frequency and completely attenuates any frequencies above the cutoff frequency.

Notice that if you leave all the components in the original signal and select a low sampling frequency, aliasing will occur, and the reconstructed signal will not match the original signal. However, you can limit the amount of aliasing by filtering out the higher frequencies in the signal. Also important to note is that once you are sampling at a rate above the Nyquist rate, further increases in the sampling frequency do not improve the quality of the reconstructed signal. This is true because of the ideal low-pass filter. In real-world applications, sampling at higher frequencies results in better reconstructed signals; however, higher sampling frequencies require faster converters and more storage, so engineers must weigh the advantages and disadvantages in each application and be aware of the tradeoffs involved.

The importance of frequency-domain plots in signal analysis cannot be overstated. The three plots on the right side of the demonstration are all Fourier transform plots, and it is easy to see the effects of changing the sampling frequency by looking at them. As the sampling frequency decreases, the separation between the repeated spectral images also decreases. When the sampling frequency drops below the Nyquist rate, the frequencies will cross over and cause aliasing.
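The applet itself cannot be reproduced here, but a minimal numerical sketch of the same idea, assuming NumPy is available, samples the 28/84/140 Hz test signal at two rates and lists the frequency peaks that appear. The function name and amplitudes are illustrative.

```python
import numpy as np


def spectrum_peaks(fs: float, duration: float = 1.0):
    """Sample the 28/84/140 Hz test signal at rate fs and return its three dominant FFT bins."""
    t = np.arange(0, duration, 1 / fs)
    signal = (np.sin(2 * np.pi * 28 * t)
              + 0.7 * np.sin(2 * np.pi * 84 * t)
              + 0.4 * np.sin(2 * np.pi * 140 * t))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return np.sort(freqs[np.argsort(spectrum)[-3:]])


print(spectrum_peaks(400))  # well above the 280 Hz Nyquist rate: peaks near 28, 84, 140 Hz
print(spectrum_peaks(200))  # below the Nyquist rate: the 140 Hz component aliases to about 60 Hz
```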

Instructions for using the Program

The applet is divided into three sections: the Original Analog Signal panel, the Sampled Digital Signal panel, and the Reconstructed Analog Signal panel. By choosing the sampling frequency, you can see the effects of aliasing in the frequency-domain plots. By choosing the filtering frequency, you can control which components remain when the analog signal is sampled. You can overlay the original plot on top of the reconstructed plot if you want to see just how different the results are, and you can use the reset button to return all values to their original defaults. Experiment with the applet in order to understand the effects of sampling and filtering.
