
Mathematics of Sound Analysis

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Sun, 28 Aug 2011 21:24:54 UTC

Contents
Articles
Fast Fourier transform
Discrete Hartley transform
Discrete Fourier transform
Fourier analysis
Sine
Trigonometric functions
Complex number
Microphone practice
Wave

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License


Fast Fourier transform


A fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its inverse. "The FFT has been called the most important numerical algorithm of our lifetime (Strang, 1994)" (Kent & Read 2002, 61). There are many distinct FFT algorithms involving a wide range of mathematics, from simple complex-number arithmetic to group theory and number theory; this article gives an overview of the available techniques and some of their general properties, while the specific algorithms are described in subsidiary articles linked below.

A DFT decomposes a sequence of values into components of different frequencies. This operation is useful in many fields (see discrete Fourier transform for properties and applications of the transform), but computing it directly from the definition is often too slow to be practical. An FFT is a way to compute the same result more quickly: computing a DFT of N points in the naive way, using the definition, takes O(N^2) arithmetical operations, while an FFT can compute the same result in only O(N log N) operations. The difference in speed can be substantial, especially for long data sets where N may be in the thousands or millions; in practice, the computation time can be reduced by several orders of magnitude in such cases, and the improvement is roughly proportional to N / log N. This huge improvement made many DFT-based algorithms practical; FFTs are of great importance to a wide variety of applications, from digital signal processing and solving partial differential equations to algorithms for quick multiplication of large integers.

The best-known FFT algorithms depend upon the factorization of N, but (contrary to popular misconception) there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms depend only on the fact that e^(-2πi/N) is an Nth primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms.
Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.
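This adaptation can be sketched in a few lines (a minimal illustration built on a direct O(N^2) DFT; the same conjugation trick works unchanged with any FFT, and the function names are mine):

```python
import cmath

def dft(x):
    """Direct DFT from the definition: O(N^2)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT via the forward transform: conjugate the input,
    run the forward DFT, then conjugate and scale by 1/N."""
    N = len(X)
    return [v.conjugate() / N for v in dft([v.conjugate() for v in X])]

x = [1.0, 2.0, 3.0, 4.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(idft(dft(x)), x))  # roundtrip recovers x
```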

Definition and speed


An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the only difference is that an FFT is much faster. (In the presence of round-off error, many FFT algorithms are also much more accurate than evaluating the DFT definition directly, as discussed below.)

Let x_0, ..., x_{N-1} be complex numbers. The DFT is defined by the formula

    X_k = Σ_{n=0}^{N-1} x_n e^{-2πi kn/N},   k = 0, ..., N-1.
Evaluating this definition directly requires O(N^2) operations: there are N outputs X_k, and each output requires a sum of N terms. An FFT is any method to compute the same results in O(N log N) operations. More precisely, all known FFT algorithms require Θ(N log N) operations (technically, O only denotes an upper bound), although there is no known proof that better complexity is impossible.

To illustrate the savings of an FFT, consider the count of complex multiplications and additions. Evaluating the DFT's sums directly involves N^2 complex multiplications and N(N-1) complex additions [of which O(N) operations can be saved by eliminating trivial operations such as multiplications by 1]. The well-known radix-2 Cooley–Tukey algorithm, for N a power of 2, can compute the same result with only (N/2) log2 N complex multiplications (again, ignoring simplifications of multiplications by 1 and similar) and N log2 N complex additions. In practice, actual performance on modern computers is usually dominated by factors other than arithmetic and is a complicated subject (see, e.g., Frigo & Johnson, 2005), but the overall improvement from O(N^2) to O(N log N) remains.
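To make these costs concrete, here is a direct evaluation of the definition together with the multiplication counts quoted above (a sketch; `dft_direct` is an illustrative name, not a standard API):

```python
import cmath
import math

def dft_direct(x):
    """Evaluate the DFT definition directly:
    N outputs, each a sum of N terms -> O(N^2) operations."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Complex-multiplication counts from the text, for a couple of power-of-two sizes:
for N in (2**10, 2**20):
    direct = N * N                    # N^2 multiplies from the definition
    radix2 = (N // 2) * math.log2(N)  # (N/2) log2 N for radix-2 Cooley-Tukey
    print(N, direct / radix2)         # ratio grows like 2N / log2 N
```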


Algorithms
Cooley–Tukey algorithm
By far the most common FFT is the Cooley–Tukey algorithm. This is a divide-and-conquer algorithm that recursively breaks down a DFT of any composite size N = N1·N2 into many smaller DFTs of sizes N1 and N2, along with O(N) multiplications by complex roots of unity traditionally called twiddle factors (after Gentleman and Sande, 1966).

This method (and the general idea of an FFT) was popularized by a publication of J. W. Cooley and J. W. Tukey in 1965, but it was later discovered (Heideman & Burrus, 1984) that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms).

The best-known use of the Cooley–Tukey algorithm is to divide the transform into two pieces of size N/2 at each step, and it is therefore limited to power-of-two sizes, but any factorization can be used in general (as was known to both Gauss and Cooley/Tukey). These are called the radix-2 and mixed-radix cases, respectively (and other variants such as the split-radix FFT have their own names as well). Although the basic idea is recursive, most traditional implementations rearrange the algorithm to avoid explicit recursion. Also, because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT, such as those described below.
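The radix-2 decomposition can be sketched as a short recursive routine (a minimal illustration for clarity, not performance; as noted above, practical implementations rearrange the recursion):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two).
    Splits a size-N DFT into two size-N/2 DFTs of the even- and odd-indexed
    samples, then recombines them with twiddle factors exp(-2*pi*i*k/N)."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

# Check against the definition on a small input:
x = [0.5, -1.0, 2.0, 3.5, -0.25, 1.0, 0.0, 4.0]
ref = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / len(x)) for n in range(len(x)))
       for k in range(len(x))]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), ref))
```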

Other FFT algorithms


There are other FFT algorithms distinct from Cooley–Tukey. For N = N1·N2 with coprime N1 and N2, one can use the prime-factor (Good–Thomas) algorithm (PFA), based on the Chinese remainder theorem, to factorize the DFT similarly to Cooley–Tukey but without the twiddle factors. The Rader–Brenner algorithm (1976) is a Cooley–Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability; it was later superseded by the split-radix variant of Cooley–Tukey (which achieves the same multiplication count but with fewer additions and without sacrificing accuracy). Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader–Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite N. Bruun's algorithm applies to arbitrary even composite sizes.) Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial z^N - 1, here into real-coefficient polynomials of the form z^M - 1 and z^{2M} + a·z^M + 1.

Another polynomial viewpoint is exploited by the Winograd algorithm, which factorizes z^N - 1 into cyclotomic polynomials; these often have coefficients of 1, 0, or -1, and therefore require few (if any) multiplications, so Winograd can be used to obtain minimal-multiplication FFTs and is often used to find efficient algorithms for small factors. Indeed, Winograd showed that the DFT can be computed with only O(N) irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; unfortunately, this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of prime sizes.

Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo prime N, expresses a DFT of prime size N as a cyclic convolution of (composite) size N - 1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the same size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity nk = -(k - n)^2/2 + n^2/2 + k^2/2.
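Bluestein's re-expression follows directly from the identity nk = (n^2 + k^2 - (k - n)^2)/2. A sketch (the convolution is done naively here for clarity; a practical implementation would zero-pad it to a power of two and evaluate it with radix-2 FFTs, and `dft_bluestein` is an illustrative name):

```python
import cmath

def dft_bluestein(x):
    """DFT of arbitrary length N via Bluestein's chirp-z trick:
    the identity nk = (n^2 + k^2 - (k-n)^2)/2 turns the DFT into
    a convolution of the 'chirped' input with a chirp sequence."""
    N = len(x)
    w = lambda m: cmath.exp(-1j * cmath.pi * m * m / N)  # chirp e^{-pi i m^2 / N}
    a = [x[n] * w(n) for n in range(N)]                  # pre-multiplied input
    X = []
    for k in range(N):
        # convolution term: sum_n a_n * e^{+pi i (k-n)^2 / N}
        s = sum(a[n] * w(k - n).conjugate() for n in range(N))
        X.append(w(k) * s)                               # post-multiply by chirp
    return X

# Check against the definition (works for prime N too, e.g. N = 5):
x = [1.0, 2.0, -0.5, 0.25, 3.0]
ref = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / 5) for n in range(5))
       for k in range(5)]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft_bluestein(x), ref))
```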


FFT algorithms specialized for real and/or symmetric data


In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry

    X_{N-k} = X_k*

and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists of taking an ordinary algorithm (e.g. Cooley–Tukey) and removing the redundant parts of the computation, saving roughly a factor of two in time and memory. Alternatively, it is possible to express an even-length real-input DFT as a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real data), followed by O(N) post-processing operations.

It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has not proved popular.

There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of (roughly) two in time and memory and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with O(N) pre/post-processing.
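The half-length trick for even-length real input can be sketched as follows (a minimal illustration built on a direct DFT; the names and tolerance are mine, and a real library would use an FFT for the half-length transform):

```python
import cmath

def dft(z):
    """Direct complex DFT, used here as the half-length transform."""
    N = len(z)
    return [sum(z[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def real_dft(x):
    """DFT of real data of even length N via ONE complex DFT of length N/2:
    pack even/odd samples into real/imaginary parts, transform, then
    untangle with O(N) post-processing. Output satisfies X[N-k] = conj(X[k])."""
    N = len(x)
    M = N // 2
    Z = dft([complex(x[2 * n], x[2 * n + 1]) for n in range(M)])
    X = [0j] * N
    for k in range(M):
        Zr = Z[(M - k) % M].conjugate()
        E = (Z[k] + Zr) / 2          # DFT of even-indexed samples
        O = (Z[k] - Zr) / (2j)       # DFT of odd-indexed samples
        t = cmath.exp(-2j * cmath.pi * k / N) * O
        X[k] = E + t
        X[k + M] = E - t
    return X

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
assert all(abs(a - b) < 1e-9
           for a, b in zip(real_dft(x), dft([complex(v) for v in x])))
```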

Computational issues
Bounds on complexity and operation counts
A fundamental question of longstanding theoretical interest is to prove lower bounds on the complexity and exact operation counts of fast Fourier transforms, and many open problems remain. It is not even rigorously proved whether DFTs truly require Ω(N log N) (i.e., order N log N or greater) operations, even for the simple case of power-of-two sizes, although no algorithms with lower complexity are known. In particular, the count of arithmetic operations is usually the focus of such questions, although actual performance on modern-day computers is determined by many other factors such as cache or CPU pipeline optimization.

Following pioneering work by Winograd (1978), a tight Θ(N) lower bound is known for the number of irrational real multiplications required by an FFT. It can be shown that only 4N - 2 log2^2 N - 2 log2 N - 4 irrational real multiplications are required to compute a DFT of power-of-two length. Moreover, explicit algorithms that achieve this count are known (Heideman & Burrus, 1986; Duhamel, 1990). Unfortunately, these algorithms require too many additions to be practical, at least on modern computers with hardware multipliers.

A tight lower bound is not known on the number of required additions, although lower bounds have been proved under some restrictive assumptions on the algorithms. In 1973, Morgenstern proved an Ω(N log N) lower bound on the addition count for algorithms where the multiplicative constants have bounded magnitudes (which is true for most but not all FFT algorithms). Pan (1986) proved an Ω(N log N) lower bound assuming a bound on a measure of the FFT algorithm's "asynchronicity", but the generality of this assumption is unclear. For the case of power-of-two N, Papadimitriou (1979) argued that the number N log2 N of complex-number additions achieved by Cooley–Tukey algorithms is optimal under certain assumptions on the graph of the algorithm (his assumptions imply, among other things, that no additive identities in the roots of unity are exploited). (This argument would imply that at least 2N log2 N real additions are required, although this is not a tight bound because extra additions are required as part of complex-number multiplications.) Thus far, no published FFT algorithm has achieved fewer than N log2 N complex-number additions (or their equivalent) for power-of-two N.

A third problem is to minimize the total number of real multiplications and additions, sometimes called the "arithmetic complexity" (although in this context it is the exact count and not the asymptotic complexity that is being


considered). Again, no tight lower bound has been proven. Since 1968, however, the lowest published count for power-of-two N was long achieved by the split-radix FFT algorithm, which requires 4N log2 N - 6N + 8 real multiplications and additions for N > 1. This was recently reduced to roughly (34/9) N log2 N (Johnson and Frigo, 2007; Lundy and Van Buskirk, 2007).

Most of the attempts to lower or prove the complexity of FFT algorithms have focused on the ordinary complex-data case, because it is the simplest. However, complex-data FFTs are so closely related to algorithms for related problems such as real-data FFTs, discrete cosine transforms, discrete Hartley transforms, and so on, that any improvement in one of these would immediately lead to improvements in the others (Duhamel & Vetterli, 1990).

Accuracy and approximations


All of the FFT algorithms discussed above compute the DFT exactly (in exact arithmetic, i.e. neglecting floating-point errors). A few "FFT" algorithms have been proposed, however, that compute the DFT approximately, with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the approximation error for increased speed or other properties. For example, an approximate FFT algorithm by Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs (time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). Only the Edelman algorithm works equally well for sparse and non-sparse data, however, since it is based on the compressibility (rank deficiency) of the Fourier matrix itself rather than the compressibility (sparsity) of the data.

Even the "exact" FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms. The upper bound on the relative error for the Cooley–Tukey algorithm is O(ε log N), compared to O(ε N^{3/2}) for the naive DFT formula (Gentleman and Sande, 1966), where ε is the machine floating-point relative precision. In fact, the root mean square (rms) errors are much better than these upper bounds, being only O(ε √(log N)) for Cooley–Tukey and O(ε √N) for the naive DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use inaccurate trigonometric recurrence formulas. Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable.

In fixed-point arithmetic, the finite-precision errors accumulated by FFT algorithms are worse, with rms errors growing as O(√N) for the Cooley–Tukey algorithm (Welch, 1969). Moreover, even achieving this accuracy requires careful attention to scaling in order to minimize the loss of precision, and fixed-point FFT algorithms involve rescaling at each intermediate stage of decompositions like Cooley–Tukey.

To verify the correctness of an FFT implementation, rigorous guarantees can be obtained in O(N log N) time by a simple procedure checking the linearity, impulse-response, and time-shift properties of the transform on random inputs (Ergün, 1995).
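The verification procedure mentioned above can be sketched as a spot-check of those three properties (shown here against a direct DFT; `check_transform` is an illustrative name, and this sketch does not reproduce the full rigor of Ergün's guarantees):

```python
import cmath
import random

def dft(x):
    """Direct DFT, standing in for the implementation under test."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def check_transform(F, N, tol=1e-9):
    """Spot-check a claimed length-N DFT implementation F using
    linearity, the impulse response, and the time-shift theorem."""
    rnd = lambda: [complex(random.uniform(-1, 1), random.uniform(-1, 1))
                   for _ in range(N)]
    x, y = rnd(), rnd()
    a, b = 2.0 - 1.0j, -0.5 + 3.0j
    # Linearity: F(a x + b y) = a F(x) + b F(y)
    lhs = F([a * u + b * v for u, v in zip(x, y)])
    rhs = [a * u + b * v for u, v in zip(F(x), F(y))]
    assert all(abs(p - q) < tol for p, q in zip(lhs, rhs))
    # Impulse at n = 1 transforms to the twiddle factors e^{-2 pi i k / N}
    imp = F([1.0 if n == 1 else 0.0 for n in range(N)])
    assert all(abs(imp[k] - cmath.exp(-2j * cmath.pi * k / N)) < tol
               for k in range(N))
    # Time shift by one sample multiplies output k by e^{-2 pi i k / N}
    shifted = F([x[(n - 1) % N] for n in range(N)])
    ref = F(x)
    assert all(abs(shifted[k] - cmath.exp(-2j * cmath.pi * k / N) * ref[k]) < tol
               for k in range(N))
    return True

assert check_transform(dft, 8)
```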


Multidimensional FFTs
As defined in the multidimensional DFT article, the multidimensional DFT

    X_k = Σ_{n=0}^{N-1} e^{-2πi k·(n/N)} x_n

transforms an array x_n with a d-dimensional vector of indices n = (n1, n2, ..., nd) by a set of d nested summations (over nj = 0, ..., Nj - 1 for each j), where the division n/N, defined as n/N = (n1/N1, ..., nd/Nd), is performed element-wise. Equivalently, it is simply the composition of a sequence of d sets of one-dimensional DFTs, performed along one dimension at a time (in any order).

This compositional viewpoint immediately provides the simplest and most common multidimensional DFT algorithm, known as the row-column algorithm (after the two-dimensional case, below). That is, one simply performs a sequence of d one-dimensional FFTs (by any of the above algorithms): first you transform along the n1 dimension, then along the n2 dimension, and so on (or actually, any ordering will work). This method is easily shown to have the usual O(N log N) complexity, where N = N1·N2···Nd is the total number of data points transformed. In particular, there are N/N1 transforms of size N1, N/N2 transforms of size N2, and so on, so the complexity of the sequence of FFTs is:

    (N/N1) O(N1 log N1) + ... + (N/Nd) O(Nd log Nd) = O(N [log N1 + ... + log Nd]) = O(N log N).

In two dimensions, the x_n can be viewed as an N1 × N2 matrix, and this algorithm corresponds to first performing the FFT of all the rows and then of all the columns (or vice versa), hence the name.

In more than two dimensions, it is often advantageous for cache locality to group the dimensions recursively. For example, a three-dimensional FFT might first perform two-dimensional FFTs of each planar "slice" for each fixed n1, and then perform the one-dimensional FFTs along the n1 direction. More generally, an asymptotically optimal cache-oblivious algorithm consists of recursively dividing the dimensions into two groups (n1, ..., n_{d/2}) and (n_{d/2+1}, ..., nd) that are transformed recursively (rounding if d is not even) (see Frigo and Johnson, 2005). Still, this remains a straightforward variation of the row-column algorithm that ultimately requires only a one-dimensional FFT algorithm as the base case, and still has O(N log N) complexity. Yet another variation is to perform matrix transpositions in between transforming subsequent dimensions, so that the transforms operate on contiguous data; this is especially important for out-of-core and distributed-memory situations where accessing non-contiguous data is extremely time-consuming.

There are other multidimensional FFT algorithms that are distinct from the row-column algorithm, although all of them have O(N log N) complexity. Perhaps the simplest non-row-column FFT is the vector-radix FFT algorithm, which is a generalization of the ordinary Cooley–Tukey algorithm where one divides the transform dimensions by a vector r = (r1, r2, ..., rd) of radices at each step. (This may also have cache benefits.) The simplest case of vector-radix is where all of the radices are equal (e.g. vector-radix-2 divides all of the dimensions by two), but this is not necessary. Vector radix with only a single non-unit radix at a time, i.e. r = (1, ..., 1, r, 1, ..., 1), is essentially a row-column algorithm. Other, more complicated, methods include polynomial transform algorithms due to Nussbaumer (1977), which view the transform in terms of convolutions and polynomial products. See Duhamel and Vetterli (1990) for more information and references.
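The two-dimensional row-column algorithm can be sketched directly (an illustration built on a direct 1-D DFT; a real implementation would substitute an FFT and worry about transposition costs, as discussed above):

```python
import cmath

def dft(x):
    """Direct 1-D DFT, used as the building block for each row/column."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft2_rowcol(a):
    """Row-column algorithm: 1-D DFT of every row, then of every column."""
    rows = [dft(row) for row in a]
    cols = [dft(col) for col in zip(*rows)]   # transpose, transform columns
    return [list(r) for r in zip(*cols)]      # transpose back

def dft2_direct(a):
    """Direct 2-D DFT from the nested-sum definition, for comparison."""
    N1, N2 = len(a), len(a[0])
    return [[sum(a[n1][n2] *
                 cmath.exp(-2j * cmath.pi * (k1 * n1 / N1 + k2 * n2 / N2))
                 for n1 in range(N1) for n2 in range(N2))
             for k2 in range(N2)] for k1 in range(N1)]

a = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0], [9.0, 10.0, 11.0, 12.0]]
rc, direct = dft2_rowcol(a), dft2_direct(a)
assert all(abs(rc[i][j] - direct[i][j]) < 1e-9
           for i in range(3) for j in range(4))
```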


Other generalizations
An O(N^{5/2} log N) generalization to spherical harmonics on the sphere S^2 with N^2 nodes was described by Mohlenkamp (1999), along with an algorithm conjectured (but not proven) to have O(N^2 log^2 N) complexity; Mohlenkamp also provides an implementation in the libftsh library [1]. A spherical-harmonic algorithm with O(N^2 log N) complexity is described by Rokhlin and Tygert (2006).

Various groups have also published "FFT" algorithms for non-equispaced data, as reviewed in Potts et al. (2001). Such algorithms do not strictly compute the DFT (which is only defined for equispaced data), but rather some approximation thereof (a non-uniform discrete Fourier transform, or NDFT, which itself is often computed only approximately).

References
[1] http://www.math.ohiou.edu/~mjm/research/libftsh.html

Brenner, N.; Rader, C. (1976). "A New Principle for Fast Fourier Transformation". IEEE Acoustics, Speech & Signal Processing 24 (3): 264–266. doi:10.1109/TASSP.1976.1162805.
Brigham, E. O. (2002). The Fast Fourier Transform. New York: Prentice-Hall.
Cooley, James W.; Tukey, John W. (1965). "An algorithm for the machine calculation of complex Fourier series". Math. Comput. 19 (90): 297–301. doi:10.1090/S0025-5718-1965-0178586-1.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, 2001. Introduction to Algorithms, 2nd ed. MIT Press and McGraw-Hill. ISBN 0-262-03293-7. Especially chapter 30, "Polynomials and the FFT."
Duhamel, Pierre (1990). "Algorithms meeting the lower bounds on the multiplicative complexity of length-2^n DFTs and their connection with practical algorithms". IEEE Trans. Acoust. Speech. Sig. Proc. 38 (9): 1504–1511. doi:10.1109/29.60070.
P. Duhamel and M. Vetterli, 1990, "Fast Fourier transforms: a tutorial review and a state of the art" (doi:10.1016/0165-1684(90)90158-U), Signal Processing 19: 259–299.
A. Edelman, P. McCorquodale, and S. Toledo, 1999, "The Future Fast Fourier Transform?" (doi:10.1137/S1064827597316266), SIAM J. Sci. Computing 20: 1094–1114.
D. F. Elliott and K. R. Rao, 1982, Fast Transforms: Algorithms, Analyses, Applications. New York: Academic Press.
Funda Ergün, 1995, "Testing multivariate linear functions: Overcoming the generator bottleneck" (doi:10.1145/225058.225167), Proc. 27th ACM Symposium on the Theory of Computing: 407–416.
M. Frigo and S. G. Johnson, 2005, "The Design and Implementation of FFTW3" (http://fftw.org/fftw-paper-ieee.pdf), Proceedings of the IEEE 93: 216–231.
Carl Friedrich Gauss, 1866. "Nachlass: Theoria interpolationis methodo nova tractata," Werke band 3, 265–327. Göttingen: Königliche Gesellschaft der Wissenschaften.
W. M. Gentleman and G. Sande, 1966, "Fast Fourier transforms - for fun and profit," Proc. AFIPS 29: 563–578. doi:10.1145/1464291.1464352.
H. Guo and C. S. Burrus, 1996, "Fast approximate Fourier transform via wavelets transform" (doi:10.1117/12.255236), Proc. SPIE Intl. Soc. Opt. Eng. 2825: 250–259.
H. Guo, G. A. Sitton, and C. S. Burrus, 1994, "The Quick Discrete Fourier Transform" (doi:10.1109/ICASSP.1994.389994), Proc. IEEE Conf. Acoust. Speech and Sig. Processing (ICASSP) 3: 445–448.
Heideman, M. T.; Johnson, D. H.; Burrus, C. S. (1984). "Gauss and the history of the fast Fourier transform". IEEE ASSP Magazine 1 (4): 14–21. doi:10.1109/MASSP.1984.1162257.

Heideman, Michael T.; Burrus, C. Sidney (1986). "On the number of multiplications necessary to compute a length-2^n DFT". IEEE Trans. Acoust. Speech. Sig. Proc. 34 (1): 91–95. doi:10.1109/TASSP.1986.1164785.

S. G. Johnson and M. Frigo, 2007. "A modified split-radix FFT with fewer arithmetic operations" (http://www.fftw.org/newsplit.pdf), IEEE Trans. Signal Processing 55 (1): 111–119.
T. Lundy and J. Van Buskirk, 2007. "A new matrix approach to real FFTs and convolutions of length 2^k," Computing 80 (1): 23–45.
Kent, Ray D. and Read, Charles (2002). Acoustic Analysis of Speech. ISBN 0-7693-0112-6. Cites Strang, G. (1994, May-June). "Wavelets." American Scientist 82: 250–255.
Morgenstern, Jacques (1973). "Note on a lower bound of the linear complexity of the fast Fourier transform". J. ACM 20 (2): 305–306. doi:10.1145/321752.321761.
Mohlenkamp, M. J. (1999). "A fast transform for spherical harmonics" (http://www.math.ohiou.edu/~mjm/research/MOHLEN1999P.pdf). J. Fourier Anal. Appl. 5 (2-3): 159–184. doi:10.1007/BF01261607.
Nussbaumer, H. J. (1977). "Digital filtering using polynomial transforms". Electronics Lett. 13 (13): 386–387. doi:10.1049/el:19770280.
V. Pan, 1986, "The trade-off between the additive complexity and the asynchronicity of linear and bilinear algorithms" (doi:10.1016/0020-0190(86)90035-9), Information Proc. Lett. 22: 11–14.
Christos H. Papadimitriou, 1979, "Optimality of the fast Fourier transform" (doi:10.1145/322108.322118), J. ACM 26: 95–102.
D. Potts, G. Steidl, and M. Tasche, 2001. "Fast Fourier transforms for nonequispaced data: A tutorial" (http://www.tu-chemnitz.de/~potts/paper/ndft.pdf), in: J. J. Benedetto and P. Ferreira (eds.), Modern Sampling Theory: Mathematics and Applications (Birkhäuser).
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Chapter 12. Fast Fourier Transform" (http://apps.nrbook.com/empanel/index.html#pg=600), Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8.
Rokhlin, Vladimir; Tygert, Mark (2006). "Fast algorithms for spherical harmonic expansions". SIAM J. Sci. Computing 27 (6): 1903–1928. doi:10.1137/050623073.
James C. Schatzman, 1996, "Accuracy of the discrete Fourier transform and the fast Fourier transform" (http://portal.acm.org/citation.cfm?id=240432), SIAM J. Sci. Comput. 17: 1150–1166.
Shentov, O. V.; Mitra, S. K.; Heute, U.; Hossen, A. N. (1995). "Subband DFT. I. Definition, interpretations and extensions". Signal Processing 41 (3): 261–277. doi:10.1016/0165-1684(94)00103-7.
Sorensen, H. V.; Jones, D. L.; Heideman, M. T.; Burrus, C. S. (1987). "Real-valued fast Fourier transform algorithms". IEEE Trans. Acoust. Speech Sig. Processing 35 (6): 849–863. doi:10.1109/TASSP.1987.1165220. See also the corrections: Sorensen, H.; Jones, D.; Heideman, M.; Burrus, C. (1987). "Corrections to 'Real-valued fast Fourier transform algorithms'". IEEE Transactions on Acoustics, Speech, and Signal Processing 35 (9): 1353. doi:10.1109/TASSP.1987.1165284.
Welch, Peter D. (1969). "A fixed-point fast Fourier transform error analysis". IEEE Trans. Audio Electroacoustics 17 (2): 151–157. doi:10.1109/TAU.1969.1162035.
Winograd, S. (1978). "On computing the discrete Fourier transform". Math. Computation 32 (141): 175–199. doi:10.1090/S0025-5718-1978-0468306-4. JSTOR 2006266.


External links
Fast Fourier Algorithm (http://www.cs.pitt.edu/~kirk/cs1501/animations/FFT.html)
Fast Fourier Transforms (http://cnx.org/content/col10550/), Connexions online book edited by C. Sidney Burrus, with chapters by C. Sidney Burrus, Ivan Selesnick, Markus Pueschel, Matteo Frigo, and Steven G. Johnson (2008).
Links to FFT code and information online (http://www.fftw.org/links.html)
National Taiwan University FFT (http://www.cmlab.csie.ntu.edu.tw/cml/dsp/training/coding/transform/fft.html)
FFT programming in C++: Cooley–Tukey algorithm (http://www.librow.com/articles/article-10)
Online documentation, links, book, and code (http://www.jjj.de/fxt/)
Using FFT to construct aggregate probability distributions (http://www.vosesoftware.com/ModelRiskHelp/index.htm#Aggregate_distributions/Aggregate_modeling_-_Fast_Fourier_Transform_FFT_method.htm)
Sri Welaratna, "30 years of FFT Analyzers" (http://www.dataphysics.com/support/library/downloads/articles/DP-30 Years of FFT.pdf), Sound and Vibration (January 1997, 30th anniversary issue). A historical review of hardware FFT devices.
FFT Basics and Case Study Using Multi-Instrument (http://www.multi-instrument.com/doc/D1002/FFT_Basics_and_Case_Study_using_Multi-Instrument_D1002.pdf)
FFT textbook notes, PPTs, and videos (http://numericalmethods.eng.usf.edu/topics/fft.html) at Holistic Numerical Methods Institute.
ALGLIB FFT Code (http://www.alglib.net/fasttransforms/fft.php): GPL-licensed multilanguage (VBA, C++, Pascal, etc.) numerical analysis and data processing library.

Discrete Hartley transform


A discrete Hartley transform (DHT) is a Fourier-related transform of discrete, periodic data similar to the discrete Fourier transform (DFT), with analogous applications in signal processing and related fields. Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers. Just as the DFT is the discrete analogue of the continuous Fourier transform, the DHT is the discrete analogue of the continuous Hartley transform, introduced by R. V. L. Hartley in 1942. Because there are fast algorithms for the DHT analogous to the fast Fourier transform (FFT), the DHT was originally proposed by R. N. Bracewell in 1983 as a more efficient computational tool in the common case where the data are purely real. It was subsequently argued, however, that specialized FFT algorithms for real inputs or outputs can ordinarily be found with slightly fewer operations than any corresponding algorithm for the DHT (see below).

Definition
Formally, the discrete Hartley transform is a linear, invertible function H : R^N -> R^N (where R denotes the set of real numbers). The N real numbers x_0, ..., x_{N-1} are transformed into the N real numbers H_0, ..., H_{N-1} according to the formula

    H_k = Σ_{n=0}^{N-1} x_n [cos(2πnk/N) + sin(2πnk/N)].

The combination cos(z) + sin(z) is sometimes denoted cas(z), and should be contrasted with the e^{-iz} = cos(z) - i sin(z) that appears in the DFT definition (where i is the imaginary unit).

As with the DFT, the overall scale factor in front of the transform and the sign of the sine term are a matter of convention. Although these conventions occasionally vary between authors, they do not affect the essential

properties of the transform.
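The definition H_k = Σ x_n [cos(2πnk/N) + sin(2πnk/N)] can be evaluated directly, and doing so also illustrates the self-inverse property discussed under Properties (a minimal sketch; `dht` is an illustrative name):

```python
import math

def dht(x):
    """Direct DHT from the definition: H_k = sum_n x_n * cas(2 pi n k / N),
    where cas(z) = cos(z) + sin(z). Maps real inputs to real outputs."""
    N = len(x)
    cas = lambda z: math.cos(z) + math.sin(z)
    return [sum(x[n] * cas(2 * math.pi * n * k / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
H = dht(x)
# The DHT is its own inverse up to a 1/N factor (involutory):
back = [h / len(x) for h in dht(H)]
assert all(abs(a - b) < 1e-9 for a, b in zip(back, x))
```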

Properties
The transform can be interpreted as the multiplication of the vector (x_0, ..., x_{N-1}) by an N-by-N matrix; therefore, the discrete Hartley transform is a linear operator. The matrix is invertible; the inverse transformation, which allows one to recover the x_n from the H_k, is simply the DHT of H_k multiplied by 1/N. That is, the DHT is its own inverse (involutory), up to an overall scale factor.

The DHT can be used to compute the DFT, and vice versa. For real inputs x_n, the DFT output X_k has a real part (H_k + H_{N-k})/2 and an imaginary part (H_{N-k} - H_k)/2. Conversely, the DHT is equivalent to computing the DFT of x_n multiplied by 1 + i, then taking the real part of the result.

As with the DFT, a cyclic convolution z = x*y of two vectors x = (x_n) and y = (y_n) to produce a vector z = (z_n), all of length N, becomes a simple operation after the DHT. In particular, suppose that the vectors X, Y, and Z denote the DHT of x, y, and z respectively. Then the elements of Z are given by:

    Z_k = [ X_k (Y_k + Y_{N-k}) + X_{N-k} (Y_k - Y_{N-k}) ] / 2

where we take all of the vectors to be periodic in N (X_N = X_0, etcetera). Thus, just as the DFT transforms a convolution into a pointwise multiplication of complex numbers (pairs of real and imaginary parts), the DHT transforms a convolution into a simple combination of pairs of real frequency components. The inverse DHT then yields the desired vector z. In this way, a fast algorithm for the DHT (see below) yields a fast algorithm for convolution. (Note that this is slightly more expensive than the corresponding procedure for the DFT, not including the costs of the transforms below, because the pairwise operation above requires 8 real-arithmetic operations compared to the 6 of a complex multiplication. This count doesn't include the division by 2, which can be absorbed e.g. into the 1/N normalization of the inverse DHT.)
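The DHT-to-DFT relation for real inputs can be checked numerically (a sketch using direct O(N^2) transforms; `dft_via_dht` is an illustrative name):

```python
import math
import cmath

def dht(x):
    """Direct DHT: H_k = sum_n x_n [cos(2 pi n k / N) + sin(2 pi n k / N)]."""
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * n * k / N) +
                        math.sin(2 * math.pi * n * k / N))
                for n in range(N)) for k in range(N)]

def dft_via_dht(x):
    """Build the DFT of real x from DHT components:
    Re X_k = (H_k + H_{N-k})/2,  Im X_k = (H_{N-k} - H_k)/2,
    with the vectors taken periodic in N (H_N = H_0)."""
    N = len(x)
    H = dht(x)
    return [complex((H[k] + H[-k % N]) / 2, (H[-k % N] - H[k]) / 2)
            for k in range(N)]

x = [1.0, 2.0, -0.5, 3.0, 0.25]
ref = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / len(x))
           for n in range(len(x))) for k in range(len(x))]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft_via_dht(x), ref))
```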

Fast algorithms
Just as for the DFT, evaluating the DHT definition directly would require O(N^2) arithmetical operations (see Big O notation). There are fast algorithms similar to the FFT, however, that compute the same result in only O(N log N) operations. Nearly every FFT algorithm, from Cooley–Tukey to prime-factor to Winograd (Sorensen et al., 1985) to Bruun's (Bini & Bozzo, 1993), has a direct analogue for the discrete Hartley transform. (However, a few of the more exotic FFT algorithms, such as the QFT, have not yet been investigated in the context of the DHT.)

In particular, the DHT analogue of the Cooley–Tukey algorithm is commonly known as the fast Hartley transform (FHT) algorithm, and was first described by Bracewell in 1984. This FHT algorithm, at least when applied to power-of-two sizes N, is the subject of United States patent number 4,646,256, issued in 1987 to Stanford University. Stanford placed this patent in the public domain in 1994 (Bracewell, 1995).

As mentioned above, DHT algorithms are typically slightly less efficient (in terms of the number of floating-point operations) than the corresponding DFT algorithm (FFT) specialized for real inputs (or outputs). This was first argued by Sorensen et al. (1987) and Duhamel & Vetterli (1987). The latter authors obtained what appears to be the lowest published operation count for the DHT of power-of-two sizes, employing a split-radix algorithm (similar to the split-radix FFT) that breaks a DHT of length N into a DHT of length N/2 and two real-input DFTs (not DHTs) of length N/4. In this way, they argued that a DHT of power-of-two length can be computed with, at best, two more additions than the corresponding number of arithmetic operations for the real-input DFT.

On present-day computers, performance is determined more by cache and CPU pipeline considerations than by strict operation counts, and a slight difference in arithmetic cost is unlikely to be significant.
Since FHT and real-input FFT algorithms have similar computational structures, neither appears to have a substantial a priori speed advantage (Popović and Šević, 1994). As a practical matter, highly optimized real-input FFT libraries are available from many sources (e.g. from CPU vendors such as Intel), whereas highly optimized DHT libraries are less common.

On the other hand, the redundant computations in FFTs due to real inputs are more difficult to eliminate for large prime N, despite the existence of O(N log N) complex-data algorithms for such cases, because the redundancies are hidden behind intricate permutations and/or phase rotations in those algorithms. In contrast, a standard prime-size FFT algorithm, Rader's algorithm, can be directly applied to the DHT of real data for roughly a factor of two less computation than that of the equivalent complex FFT (Frigo and Johnson, 2005). On the other hand, a non-DHT-based adaptation of Rader's algorithm for real-input DFTs is also possible (Chu & Burrus, 1982).

References
R. N. Bracewell, "Discrete Hartley transform," J. Opt. Soc. Am. 73 (12), 1832–1835 (1983).
R. N. Bracewell, "The fast Hartley transform," Proc. IEEE 72 (8), 1010–1018 (1984).
R. N. Bracewell, The Hartley Transform (Oxford Univ. Press, New York, 1986).
R. N. Bracewell, "Computing with the Hartley Transform," Computers in Physics 9 (4), 373–379 (1995).
R. V. L. Hartley, "A more symmetrical Fourier analysis applied to transmission problems," Proc. IRE 30, 144–150 (1942).
H. V. Sorensen, D. L. Jones, C. S. Burrus, and M. T. Heideman, "On computing the discrete Hartley transform," IEEE Trans. Acoust. Speech Sig. Processing ASSP-33 (4), 1231–1238 (1985).
H. V. Sorensen, D. L. Jones, M. T. Heideman, and C. S. Burrus, "Real-valued fast Fourier transform algorithms," IEEE Trans. Acoust. Speech Sig. Processing ASSP-35 (6), 849–863 (1987).
Pierre Duhamel and Martin Vetterli, "Improved Fourier and Hartley transform algorithms: application to cyclic convolution of real data," IEEE Trans. Acoust. Speech Sig. Processing ASSP-35, 818–824 (1987).
Mark A. O'Neill, "Faster than Fast Fourier," Byte 13 (4), 293–300 (1988).
J. Hong, M. Vetterli, and P. Duhamel, "Basefield transforms with the convolution property," Proc. IEEE 82 (3), 400–412 (1994).
D. A. Bini and E. Bozzo, "Fast discrete transform by means of eigenpolynomials," Computers & Mathematics (with Applications) 26 (9), 35–52 (1993).
Miodrag Popović and Dragutin Šević, "A new look at the comparison of the fast Hartley and Fourier transforms," IEEE Trans. Signal Processing 42 (8), 2178–2182 (1994).
Matteo Frigo and Steven G. Johnson, "The Design and Implementation of FFTW3 [1]," Proc. IEEE 93 (2), 216–231 (2005).
S. Chu and C. Burrus, "A prime factor FTT [sic] algorithm using distributed arithmetic," IEEE Transactions on Acoustics, Speech, and Signal Processing 30 (2), 217–227 (1982).

References
[1] http://fftw.org/fftw-paper-ieee.pdf

Discrete Fourier transform



In mathematics, the discrete Fourier transform (DFT) is a specific kind of discrete transform, used in Fourier analysis. It transforms one function into another, which is called the frequency domain representation, or simply the DFT, of the original function (which is often a function in the time domain). But the DFT requires an input function that is discrete and whose non-zero values have a limited (finite) duration. Such inputs are often created by sampling a continuous function, like a person's voice.

Unlike the discrete-time Fourier transform (DTFT), it only evaluates enough frequency components to reconstruct the finite segment that was analyzed. Using the DFT implies that the finite segment that is analyzed is one period of an infinitely extended periodic signal; if this is not actually true, a window function has to be used to reduce the artifacts in the spectrum. For the same reason, the inverse DFT cannot reproduce the entire time domain, unless the input happens to be periodic (forever). Therefore it is often said that the DFT is a transform for Fourier analysis of finite-domain discrete-time functions. The sinusoidal basis functions of the decomposition have the same properties.

The input to the DFT is a finite sequence of real or complex numbers (with more abstract generalizations discussed below), making the DFT ideal for processing information stored in computers. In particular, the DFT is widely employed in signal processing and related fields to analyze the frequencies contained in a sampled signal, to solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers. A key enabling factor for these applications is the fact that the DFT can be computed efficiently in practice using a fast Fourier transform (FFT) algorithm. FFT algorithms are so commonly employed to compute DFTs that the term "FFT" is often used to mean "DFT" in colloquial settings.
Formally, there is a clear distinction: "DFT" refers to a mathematical transformation or function, regardless of how it is computed, whereas "FFT" refers to a specific family of algorithms for computing DFTs. The terminology is further blurred by the (now rare) synonym finite Fourier transform for the DFT, which apparently predates the term "fast Fourier transform" (Cooley et al., 1969) but has the same initialism.

Definition
The sequence of N complex numbers x_0, ..., x_{N−1} is transformed into the sequence of N complex numbers X_0, ..., X_{N−1} by the DFT according to the formula:

$$X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, \ldots, N-1,$$

where i is the imaginary unit and $e^{-2\pi i/N}$ is a primitive Nth root of unity. (This expression can also be written in terms of a DFT matrix; when scaled appropriately it becomes a unitary matrix and the X_k can thus be viewed as coefficients of x in an orthonormal basis.) The transform is sometimes denoted by the symbol $\mathcal{F}$, as in $\mathbf{X} = \mathcal{F}\{\mathbf{x}\}$ or $\mathcal{F}(\mathbf{x})$ or $\mathcal{F}\mathbf{x}$.

The inverse discrete Fourier transform (IDFT) is given by

$$x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \, e^{2\pi i k n / N}, \qquad n = 0, \ldots, N-1.$$

A simple description of these equations is that the complex numbers $X_k$ represent the amplitude and phase of the different sinusoidal components of the input "signal" $x_n$. The DFT computes the $X_k$ from the $x_n$, while the IDFT shows how to compute the $x_n$ as a sum of sinusoidal components $\frac{1}{N} X_k e^{2\pi i k n/N}$ with frequency k/N cycles per sample. By writing the equations in this form, we are making extensive use of Euler's formula to express sinusoids in terms of complex exponentials, which are much easier to manipulate. In the same way, by writing $X_k$ in polar form, we obtain the sinusoid amplitude $A_k/N$ and phase $\varphi_k$ from the complex modulus and argument of $X_k$, respectively:

$$A_k = |X_k|, \qquad \varphi_k = \arg(X_k) = \operatorname{atan2}\big(\operatorname{Im}(X_k), \operatorname{Re}(X_k)\big),$$


where atan2 is the two-argument form of the arctan function. Note that the normalization factor multiplying the DFT and IDFT (here 1 and 1/N) and the signs of the exponents are merely conventions, and differ in some treatments. The only requirements of these conventions are that the DFT and IDFT have opposite-sign exponents and that the product of their normalization factors be 1/N. A normalization of $1/\sqrt{N}$ for both the DFT and IDFT makes the transforms unitary, which has some theoretical advantages, but it is often more practical in numerical computation to perform the scaling all at once as above (and a unit scaling can be convenient in other ways). (The convention of a negative sign in the exponent is often convenient because it means that $X_k$ is the amplitude of a "positive frequency" $2\pi k/N$. Equivalently, the DFT is often thought of as a matched filter: when looking for a frequency of +1, one correlates the incoming signal with a frequency of −1.)

In the following discussion the terms "sequence" and "vector" will be considered interchangeable.
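A minimal sketch of the definition in Python may help make the conventions concrete; `dft` and `idft` are illustrative names for a naive O(N²) evaluation, with the 1/N normalization placed entirely on the inverse as above:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT straight from the definition: X_k = sum_n x_n e^{-2 pi i k n / N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, with the 1/N normalization convention used in this article."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1, 2 - 1j, -1j, -1 + 2j]
X = dft(x)                 # equals [2, -2-2j, -2j, 4+4j] up to rounding error
roundtrip = idft(X)
assert all(abs(a - b) < 1e-6 for a, b in zip(roundtrip, x))
```

An FFT computes exactly the same X, only faster.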

Properties
Completeness
The discrete Fourier transform is an invertible, linear transformation

$$\mathcal{F} : \mathbb{C}^N \to \mathbb{C}^N,$$

with $\mathbb{C}$ denoting the set of complex numbers. In other words, for any N > 0, an N-dimensional complex vector has a DFT and an IDFT which are in turn N-dimensional complex vectors.

Orthogonality
The vectors $u_k[n] = e^{2\pi i k n/N}$ form an orthogonal basis over the set of N-dimensional complex vectors:

$$\sum_{n=0}^{N-1} e^{2\pi i k n/N} \, e^{-2\pi i k' n/N} = N\,\delta_{kk'},$$

where $\delta_{kk'}$ is the Kronecker delta. This orthogonality condition can be used to derive the formula for the IDFT from the definition of the DFT, and is equivalent to the unitarity property below.

The Plancherel theorem and Parseval's theorem


If X_k and Y_k are the DFTs of x_n and y_n respectively then the Plancherel theorem states:

$$\sum_{n=0}^{N-1} x_n y_n^* = \frac{1}{N} \sum_{k=0}^{N-1} X_k Y_k^*,$$

where the star denotes complex conjugation. Parseval's theorem is a special case of the Plancherel theorem and states:

$$\sum_{n=0}^{N-1} |x_n|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X_k|^2.$$
These theorems are also equivalent to the unitary condition below.


Periodicity
If the expression that defines the DFT is evaluated for all integers k instead of just for k = 0, ..., N−1, then the resulting infinite sequence is a periodic extension of the DFT, periodic with period N. The periodicity can be shown directly from the definition:

$$X_{k+N} = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i (k+N) n / N} = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N} \underbrace{e^{-2\pi i n}}_{=\,1} = X_k.$$

Similarly, it can be shown that the IDFT formula leads to a periodic extension.
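The periodicity is also easy to check numerically; a small sketch (the helper `dft_at` is an illustrative name) evaluates the defining sum at k and at k + N:

```python
import cmath

def dft_at(x, k):
    """Evaluate the DFT definition at an arbitrary integer k (not just 0..N-1)."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))

x = [0.5, -1.0, 2.0, 3.5, -0.25]
N = len(x)
for k in range(N):
    # X_{k+N} = X_k: the extended sequence repeats with period N.
    assert abs(dft_at(x, k + N) - dft_at(x, k)) < 1e-6
```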

The shift theorem


Multiplying $x_n$ by a linear phase $e^{2\pi i n m/N}$ for some integer m corresponds to a circular shift of the output $X_k$: $X_k$ is replaced by $X_{k-m}$, where the subscript is interpreted modulo N (i.e., periodically). Similarly, a circular shift of the input $x_n$ corresponds to multiplying the output $X_k$ by a linear phase. Mathematically, if $\{x_n\}$ represents the vector x, then:

if $\mathcal{F}(\{x_n\})_k = X_k$,
then $\mathcal{F}(\{x_n \cdot e^{2\pi i n m/N}\})_k = X_{k-m}$
and $\mathcal{F}(\{x_{n-m}\})_k = X_k \cdot e^{-2\pi i k m/N}.$
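A quick numerical check of the shift theorem, using a naive DFT helper for illustration:

```python
import cmath

def dft(x):
    """Naive DFT from the definition."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0.5, 2.0, -1.0, 3.0, 1.5, -2.5]
N, m = len(x), 2

# A circular shift of the input multiplies output k by the phase e^{-2 pi i k m / N}:
shifted = [x[(n - m) % N] for n in range(N)]
lhs = dft(shifted)
rhs = [Xk * cmath.exp(-2j * cmath.pi * k * m / N) for k, Xk in enumerate(dft(x))]
assert all(abs(a - b) < 1e-6 for a, b in zip(lhs, rhs))
```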

Circular convolution theorem and cross-correlation theorem


The convolution theorem for the continuous and discrete-time Fourier transforms indicates that a convolution of two infinite sequences can be obtained as the inverse transform of the product of the individual transforms. With sequences and transforms of length N, a circularity arises:

$$\mathcal{F}^{-1}\{\mathbf{X}\cdot\mathbf{Y}\}_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k Y_k\, e^{2\pi i k n/N} = \sum_{l=0}^{N-1} x_l \sum_{m=0}^{N-1} y_m \left(\frac{1}{N}\sum_{k=0}^{N-1} e^{2\pi i k (n-l-m)/N}\right).$$

The quantity in parentheses is 0 for all values of m except those of the form $m = n - l - pN$, where p is any integer. At those values, it is 1. It can therefore be replaced by an infinite sum of Kronecker delta functions, and we continue accordingly. Note that we can also extend the limits of m to infinity, with the understanding that the x and y sequences are defined as 0 outside [0, N−1]:

$$\mathcal{F}^{-1}\{\mathbf{X}\cdot\mathbf{Y}\}_n = \sum_{l=0}^{N-1} x_l \sum_{m=-\infty}^{\infty} y_m \sum_{p=-\infty}^{\infty} \delta_{m,\,n-l-pN} = \sum_{l=0}^{N-1} x_l\, (y_N)_{n-l},$$

which is the convolution of the $\mathbf{x}$ sequence with a periodically extended $\mathbf{y}$ sequence defined by:

$$(y_N)_n \;\stackrel{\mathrm{def}}{=}\; \sum_{p=-\infty}^{\infty} y_{n-pN}.$$


Similarly, it can be shown that:

$$\mathcal{F}^{-1}\{\mathbf{X}^*\cdot\mathbf{Y}\}_n = \sum_{l=0}^{N-1} x_l^*\, (y_N)_{n+l},$$

which is the cross-correlation of $\mathbf{x}$ and $\mathbf{y_N}$.

A direct evaluation of the convolution or correlation summation (above) requires $O(N^2)$ operations for an output sequence of length N. An indirect method, using transforms, can take advantage of the $O(N \log N)$ efficiency of the fast Fourier transform (FFT) to achieve much better performance. Furthermore, convolutions can be used to efficiently compute DFTs via Rader's FFT algorithm and Bluestein's FFT algorithm. Methods have also been developed to use circular convolution as part of an efficient process that achieves normal (non-circular) convolution with an $\mathbf{x}$ or $\mathbf{y}$ sequence potentially much longer than the practical transform size (N). Two such methods are called overlap-save and overlap-add.[1]
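The circular convolution theorem can be verified directly; a small sketch compares the direct summation with the transform route (naive `dft`/`idft` helpers stand in for an FFT):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
y = [5.0, 6.0, 7.0, 8.0]
N = len(x)

# Direct circular convolution: indices wrap around modulo N.
direct = [sum(x[l] * y[(n - l) % N] for l in range(N)) for n in range(N)]

# Via the transforms: IDFT of the pointwise product of the DFTs.
X, Y = dft(x), dft(y)
via_dft = [z.real for z in idft([Xk * Yk for Xk, Yk in zip(X, Y)])]

assert all(abs(a - b) < 1e-6 for a, b in zip(direct, via_dft))
```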

Convolution theorem duality


It can also be shown that:

$$\mathcal{F}\{\mathbf{x}\cdot\mathbf{y}\}_k = \frac{1}{N}\sum_{l=0}^{N-1} X_l\, Y_{k-l} = \frac{1}{N}\,(\mathbf{X} * \mathbf{Y})_k,$$

which is the circular convolution of $\mathbf{X}$ and $\mathbf{Y}$ (scaled by 1/N).

Trigonometric interpolation polynomial


The trigonometric interpolation polynomial

$$p(t) = \frac{1}{N}\left[\sum_{k=0}^{N/2-1} X_k\, e^{2\pi i k t} + X_{N/2}\cos(N\pi t) + \sum_{k=N/2+1}^{N-1} X_k\, e^{2\pi i (k-N) t}\right] \quad \text{for } N \text{ even},$$

$$p(t) = \frac{1}{N}\left[\sum_{k=0}^{(N-1)/2} X_k\, e^{2\pi i k t} + \sum_{k=(N+1)/2}^{N-1} X_k\, e^{2\pi i (k-N) t}\right] \quad \text{for } N \text{ odd},$$

where the coefficients X_k are given by the DFT of x_n above, satisfies the interpolation property $p(n/N) = x_n$ for $n = 0, \ldots, N-1$. For even N, notice that the Nyquist component $X_{N/2}\cos(N\pi t)$ is handled specially.

This interpolation is not unique: aliasing implies that one could add N to any of the complex-sinusoid frequencies (e.g., changing $e^{-2\pi i t}$ to $e^{2\pi i (N-1) t}$) without changing the interpolation property, but giving different values in between the $x_n$ points. The choice above, however, is typical because it has two useful properties. First, it consists of sinusoids whose frequencies have the smallest possible magnitudes: the interpolation is bandlimited. Second, if the $x_n$ are real numbers, then $p(t)$ is real as well. In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to N−1 (instead of roughly −N/2 to +N/2 as above), similar to the inverse DFT formula. This interpolation does not minimize the slope, and is not generally real-valued for real $x_n$; its use is a common mistake.


The unitary DFT


Another way of looking at the DFT is to note that in the above discussion, the DFT can be expressed as the DFT matrix, a Vandermonde matrix:

$$\mathbf{F} = \begin{bmatrix} \omega_N^{0\cdot 0} & \omega_N^{0\cdot 1} & \cdots & \omega_N^{0\cdot(N-1)} \\ \omega_N^{1\cdot 0} & \omega_N^{1\cdot 1} & \cdots & \omega_N^{1\cdot(N-1)} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_N^{(N-1)\cdot 0} & \omega_N^{(N-1)\cdot 1} & \cdots & \omega_N^{(N-1)\cdot(N-1)} \end{bmatrix},$$

where $\omega_N = e^{-2\pi i/N}$ is a primitive Nth root of unity. The inverse transform is then given by the inverse of the above matrix:

$$\mathbf{F}^{-1} = \frac{1}{N}\,\mathbf{F}^*.$$

With unitary normalization constants $1/\sqrt{N}$, the DFT becomes a unitary transformation, defined by a unitary matrix:

$$\mathbf{U} = \frac{1}{\sqrt{N}}\,\mathbf{F}, \qquad \mathbf{U}^{-1} = \mathbf{U}^*, \qquad |\det(\mathbf{U})| = 1,$$

where det() is the determinant function. The determinant is the product of the eigenvalues, which are always +1, −1, +i, or −i as described below. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT.

The orthogonality of the DFT is now expressed as an orthonormality condition (which arises in many areas of mathematics as described in root of unity):

$$\sum_{m=0}^{N-1} U_{km} U^*_{mn} = \delta_{kn}.$$

If $\mathbf{X}$ is defined as the unitary DFT of the vector $\mathbf{x}$, then

$$X_k = \sum_{n=0}^{N-1} U_{kn}\, x_n,$$

and the Plancherel theorem is expressed as:

$$\sum_{n=0}^{N-1} x_n\, y_n^* = \sum_{k=0}^{N-1} X_k\, Y_k^*.$$

If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the dot product of two vectors is preserved under a unitary DFT transformation. For the special case $\mathbf{x} = \mathbf{y}$, this implies that the length of a vector is preserved as well; this is just Parseval's theorem:

$$\sum_{n=0}^{N-1} |x_n|^2 = \sum_{k=0}^{N-1} |X_k|^2.$$


Expressing the inverse DFT in terms of the DFT


A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.)

First, we can compute the inverse DFT by reversing the inputs:

$$\mathcal{F}^{-1}(\{x_n\}) = \frac{1}{N}\,\mathcal{F}(\{x_{N-n}\}).$$

(As usual, the subscripts are interpreted modulo N; thus, for $n = 0$, we have $x_{N-0} = x_0$.)

Second, one can also conjugate the inputs and outputs:

$$\mathcal{F}^{-1}(\mathbf{x}) = \frac{1}{N}\,\mathcal{F}(\mathbf{x}^*)^*.$$

Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define swap($x_n$) as $x_n$ with its real and imaginary parts swapped; that is, if $x_n = a + bi$ then swap($x_n$) is $b + ai$. Equivalently, swap($x_n$) equals $i x_n^*$. Then

$$\mathcal{F}^{-1}(\mathbf{x}) = \frac{1}{N}\,\operatorname{swap}\!\big(\mathcal{F}(\operatorname{swap}(\mathbf{x}))\big).$$

That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel et al., 1988).

The conjugation trick can also be used to define a new transform, closely related to the DFT, that is involutory; that is, which is its own inverse. In particular, $T(\mathbf{x}) = \mathcal{F}(\mathbf{x}^*)/\sqrt{N}$ is clearly its own inverse: $T(T(\mathbf{x})) = \mathbf{x}$. A closely related involutory transformation (by a factor of $(1+i)/\sqrt{2}$) is $H(\mathbf{x}) = \mathcal{F}\big((1+i)\,\mathbf{x}^*\big)/\sqrt{2N}$, since the $(1+i)$ factors in $H(H(\mathbf{x}))$ cancel the 2. For real inputs $\mathbf{x}$, the real part of $H(\mathbf{x})$ is none other than the discrete Hartley transform, which is also involutory.
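The conjugation trick above can be sketched as follows (`idft_via_dft` is an illustrative name; only a forward transform is implemented):

```python
import cmath

def dft(x):
    """Naive forward DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft_via_dft(X):
    """Second trick above: F^{-1}(X) = conj(F(conj(X))) / N."""
    N = len(X)
    return [v.conjugate() / N for v in dft([x.conjugate() for x in X])]

x = [1 + 2j, -0.5 + 1j, 3 - 1j, 0.25 + 0j]
recovered = idft_via_dft(dft(x))
assert all(abs(a - b) < 1e-6 for a, b in zip(recovered, x))
```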

Eigenvalues and eigenvectors


The eigenvalues of the DFT matrix are simple and well-known, whereas the eigenvectors are complicated, not unique, and are the subject of ongoing research. Consider the unitary form $\mathbf{U}$ defined above for the DFT of length N, where

$$\mathbf{U}_{mn} = \frac{1}{\sqrt{N}}\,\omega_N^{mn} = \frac{1}{\sqrt{N}}\,e^{-2\pi i m n / N}.$$

This matrix satisfies the matrix polynomial equation:

$$\mathbf{U}^4 = \mathbf{I}.$$

This can be seen from the inverse properties above: operating $\mathbf{U}$ twice gives the original data in reverse order, so operating $\mathbf{U}$ four times gives back the original data and is thus the identity matrix. This means that the eigenvalues $\lambda$ satisfy the equation:

$$\lambda^4 = 1.$$

Therefore, the eigenvalues of $\mathbf{U}$ are the fourth roots of unity: $\lambda$ is +1, −1, +i, or −i.

Since there are only four distinct eigenvalues for this $N \times N$ matrix, they have some multiplicity. The multiplicity gives the number of linearly independent eigenvectors corresponding to each eigenvalue. (Note that there are N independent eigenvectors; a unitary matrix is never defective.)

The problem of their multiplicity was solved by McClellan and Parks (1972), although it was later shown to have been equivalent to a problem solved by Gauss (Dickinson and Steiglitz, 1982). The multiplicity depends on the value of N modulo 4, and is given by the following table:


Multiplicities of the eigenvalues λ of the unitary DFT matrix U as a function of the transform size N (in terms of an integer m):

    size N     λ = +1    λ = −1    λ = −i    λ = +i
    4m         m + 1     m         m         m − 1
    4m + 1     m + 1     m         m         m
    4m + 2     m + 1     m + 1     m         m
    4m + 3     m + 1     m + 1     m + 1     m

Otherwise stated, the characteristic polynomial of $\mathbf{U}$ is:

$$\det(\lambda \mathbf{I} - \mathbf{U}) = (\lambda - 1)^{\lfloor N/4 \rfloor + 1}\,(\lambda + 1)^{\lfloor (N+2)/4 \rfloor}\,(\lambda + i)^{\lfloor (N+1)/4 \rfloor}\,(\lambda - i)^{\lfloor (N-1)/4 \rfloor}.$$
No simple analytical formula for general eigenvectors is known. Moreover, the eigenvectors are not unique because any linear combination of eigenvectors for the same eigenvalue is also an eigenvector for that eigenvalue. Various researchers have proposed different choices of eigenvectors, selected to satisfy useful properties like orthogonality and to have "simple" forms (e.g., McClellan and Parks, 1972; Dickinson and Steiglitz, 1982; Grünbaum, 1982; Atakishiyev and Wolf, 1997; Candan et al., 2000; Hanna et al., 2004; Gurevich and Hadani, 2008). A straightforward approach is to discretize the eigenfunction of the continuous Fourier transform, namely the Gaussian function. Since periodic summation of the function means discretizing its frequency spectrum and discretization means periodic summation of the spectrum, the discretized and periodically summed Gaussian function yields an eigenvector of the discrete transform. A closed form expression for the series is not known, but it converges rapidly. Two other simple closed-form analytical eigenvectors for special DFT period N were found (Kong, 2008), one for DFT period N = 2L + 1 = 4K + 1, where K is an integer, and one for DFT period N = 2L = 4K, where K is an integer. The choice of eigenvectors of the DFT matrix has become important in recent years in order to define a discrete analogue of the fractional Fourier transform: the DFT matrix can be taken to fractional powers by exponentiating the eigenvalues (e.g., Rubio and Santhanam, 2005). For the continuous Fourier transform, the natural orthogonal eigenfunctions are the Hermite functions, so various discrete analogues of these have been employed as the eigenvectors of the DFT, such as the Kravchuk polynomials (Atakishiyev and Wolf, 1997). The "best" choice of eigenvectors to define a fractional discrete Fourier transform remains an open question, however.
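The relation U⁴ = I underlying these eigenvalues is easy to verify numerically; a minimal sketch with naive matrix products (helper names are illustrative):

```python
import cmath

def unitary_dft_matrix(N):
    """U_{mn} = exp(-2 pi i m n / N) / sqrt(N)."""
    s = 1 / (N ** 0.5)
    return [[s * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N)]
            for m in range(N)]

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

N = 5
U = unitary_dft_matrix(N)
U4 = matmul(matmul(U, U), matmul(U, U))
for i in range(N):
    for j in range(N):
        expected = 1.0 if i == j else 0.0   # U^4 should be the identity matrix
        assert abs(U4[i][j] - expected) < 1e-6
```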


Uncertainty principle
If the random variable X_k is constrained by:

$$\sum_{n=0}^{N-1} |X_n|^2 = 1,$$

then

$$P_n = |X_n|^2$$

may be considered to represent a discrete probability mass function of n, with an associated probability mass function constructed from the transformed variable:

$$Q_m = N\,|x_m|^2.$$

For the case of continuous functions P(x) and Q(k), the Heisenberg uncertainty principle states that:

$$D_0(P)\,D_0(Q) \ge \frac{1}{16\pi^2},$$

where $D_0(P)$ and $D_0(Q)$ are the variances of P and Q respectively, with the equality attained in the case of a suitably normalized Gaussian distribution. Although the variances may be analogously defined for the DFT, an analogous uncertainty principle is not useful, because the uncertainty will not be shift-invariant. However, the Hirschman uncertainty does have a useful analog for the case of the DFT.[2]

The Hirschman uncertainty principle is expressed in terms of the Shannon entropy of the two probability functions. In the discrete case, the Shannon entropies are defined as:

$$H(X) = -\sum_{n=0}^{N-1} P_n \ln P_n$$

and

$$H(x) = -\sum_{m=0}^{N-1} Q_m \ln Q_m,$$

and the Hirschman uncertainty principle becomes[2]:

$$H(X) + H(x) \ge \ln(N).$$

The equality is obtained for $P_n$ equal to translations and modulations of a suitably normalized Kronecker comb of period A, where A is any exact integer divisor of N. The probability mass function $Q_m$ will then be proportional to a suitably translated Kronecker comb of period B = N/A.[2]

The real-input DFT


If $x_n$ are real numbers, as they often are in practical applications, then the DFT obeys the symmetry:

$$X_{N-k} = X_k^*.$$

The star denotes complex conjugation. The subscripts are interpreted modulo N. Therefore, the DFT output for real inputs is half redundant, and one obtains the complete information by only looking at roughly half of the outputs $X_0, \ldots, X_{N/2}$. In this case, the "DC" element $X_0$ is purely real, and for even N the "Nyquist" element $X_{N/2}$ is also real, so there are exactly N non-redundant real numbers in the first half + Nyquist element of the complex output X. Using Euler's formula, the interpolating trigonometric polynomial can then be interpreted as a sum of sine and cosine functions.
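A quick numerical check of this conjugate symmetry, using a naive DFT helper on a real input of even length:

```python
import cmath

def dft(x):
    """Naive DFT from the definition."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 5.0, -2.0, 4.0, 3.0, -1.5]   # real input, N = 6 (even)
X = dft(x)
N = len(x)

for k in range(1, N):
    assert abs(X[N - k] - X[k].conjugate()) < 1e-6   # X_{N-k} = X_k*
assert abs(X[0].imag) < 1e-6            # "DC" element is purely real
assert abs(X[N // 2].imag) < 1e-6       # "Nyquist" element is real for even N
```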


Generalized/shifted DFT
It is possible to shift the transform sampling in time and/or frequency domain by some real shifts a and b, respectively. This is sometimes known as a generalized DFT (or GDFT), also called the shifted DFT or offset DFT, and has analogous properties to the ordinary DFT:

$$X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i (k+b)(n+a)/N}, \qquad k = 0, \ldots, N-1.$$

Most often, shifts of 1/2 (half a sample) are used. While the ordinary DFT corresponds to a periodic signal in both time and frequency domains, a = 1/2 produces a signal that is anti-periodic in the frequency domain ($X_{k+N} = -X_k$) and vice-versa for b = 1/2. Thus, the specific case of a = b = 1/2 is known as an odd-time odd-frequency discrete Fourier transform (or O² DFT). Such shifted transforms are most often used for symmetric data, to represent different boundary symmetries, and for real-symmetric data they correspond to different forms of the discrete cosine and sine transforms.

Another interesting choice is a = b = −(N−1)/2, which is called the centered DFT (or CDFT). The centered DFT has the useful property that, when N is a multiple of four, all four of its eigenvalues (see above) have equal multiplicities (Rubio and Santhanam, 2005).[3]

The discrete Fourier transform can be viewed as a special case of the z-transform, evaluated on the unit circle in the complex plane; more general z-transforms correspond to complex shifts a and b above.

Multidimensional DFT
The ordinary DFT transforms a one-dimensional sequence or array $x_n$ that is a function of exactly one discrete variable n. The multidimensional DFT of a multidimensional array $x_{n_1, n_2, \ldots, n_d}$ that is a function of d discrete variables $n_\ell = 0, 1, \ldots, N_\ell - 1$ for $\ell$ in $1, 2, \ldots, d$ is defined by:

$$X_{k_1, k_2, \ldots, k_d} = \sum_{n_1=0}^{N_1-1} \left( \omega_{N_1}^{\,k_1 n_1} \sum_{n_2=0}^{N_2-1} \left( \omega_{N_2}^{\,k_2 n_2} \cdots \sum_{n_d=0}^{N_d-1} \omega_{N_d}^{\,k_d n_d} \cdot x_{n_1, n_2, \ldots, n_d} \right) \right),$$

where $\omega_{N_\ell} = \exp(-2\pi i / N_\ell)$ as above and the d output indices run from $k_\ell = 0, 1, \ldots, N_\ell - 1$. This is more compactly expressed in vector notation, where we define $\mathbf{n} = (n_1, n_2, \ldots, n_d)$ and $\mathbf{k} = (k_1, k_2, \ldots, k_d)$ as d-dimensional vectors of indices from 0 to $\mathbf{N} - 1$, which we define as $\mathbf{N} - 1 = (N_1 - 1, N_2 - 1, \ldots, N_d - 1)$:

$$X_\mathbf{k} = \sum_{\mathbf{n} = \mathbf{0}}^{\mathbf{N} - 1} e^{-2\pi i \, \mathbf{k} \cdot (\mathbf{n} / \mathbf{N})}\, x_\mathbf{n},$$

where the division $\mathbf{n} / \mathbf{N}$ is defined as $\mathbf{n} / \mathbf{N} = (n_1/N_1, \ldots, n_d/N_d)$ to be performed element-wise, and the sum denotes the set of nested summations above.

The inverse of the multi-dimensional DFT is, analogous to the one-dimensional case, given by:

$$x_\mathbf{n} = \frac{1}{\prod_{\ell=1}^{d} N_\ell} \sum_{\mathbf{k} = \mathbf{0}}^{\mathbf{N} - 1} e^{2\pi i \, \mathbf{n} \cdot (\mathbf{k} / \mathbf{N})}\, X_\mathbf{k}.$$

As the one-dimensional DFT expresses the input as a superposition of sinusoids, the multidimensional DFT expresses the input as a superposition of plane waves, or multidimensional sinusoids, whose direction of oscillation in space is $\mathbf{k}/\mathbf{N}$ and whose amplitudes are $X_\mathbf{k}$. This decomposition is of great importance for everything from digital image processing (two-dimensional) to solving partial differential equations. The solution is broken up into plane waves.

The multidimensional DFT can be computed by the composition of a sequence of one-dimensional DFTs along each dimension. In the two-dimensional case, the independent DFTs of the rows (i.e., along the second index) are computed first to form a new array y. Then the independent DFTs of y along the columns (along the first index) are computed to form the final result X. Alternatively the columns can be computed first and then the rows. The order is immaterial because the nested summations above commute.

An algorithm to compute a one-dimensional DFT is thus sufficient to efficiently compute a multidimensional DFT. This approach is known as the row-column algorithm. There are also intrinsically multidimensional FFT algorithms.
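The row-column algorithm can be sketched as follows (`dft2` is an illustrative name; each one-dimensional pass uses a naive transform for clarity):

```python
import cmath

def dft(x):
    """Naive one-dimensional DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft2(a):
    """2-D DFT by the row-column algorithm: 1-D DFTs of the rows, then of the columns."""
    rows = [dft(row) for row in a]                  # transform along the second index
    cols = [list(col) for col in zip(*rows)]        # transpose
    out_cols = [dft(col) for col in cols]           # transform along the first index
    return [list(row) for row in zip(*out_cols)]    # transpose back

a = [[1.0, 2.0], [3.0, 4.0]]
A = dft2(a)
# For N1 = N2 = 2 the exponentials are just +/-1, so A can be checked by hand:
# A[0][0] = 1+2+3+4 = 10,  A[0][1] = 1-2+3-4 = -2,
# A[1][0] = 1+2-3-4 = -4,  A[1][1] = 1-2-3+4 = 0.
```

Computing the columns first gives the same result, as the text notes.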

The real-input multidimensional DFT


For input data consisting of real numbers, the DFT outputs have a conjugate symmetry similar to the one-dimensional case above:

$$X_{N_1 - k_1,\, N_2 - k_2,\, \ldots,\, N_d - k_d} = X_{k_1, k_2, \ldots, k_d}^*,$$

where the star again denotes complex conjugation and the $\ell$-th subscript is again interpreted modulo $N_\ell$ (for $\ell = 1, 2, \ldots, d$).

Applications
The DFT has seen wide usage across a large number of fields; we only sketch a few examples below (see also the references at the end). All applications of the DFT depend crucially on the availability of a fast algorithm to compute discrete Fourier transforms and their inverses, a fast Fourier transform.

Spectral analysis
When the DFT is used for spectral analysis, the {x_n} sequence usually represents a finite set of uniformly spaced time-samples of some signal x(t), where t represents time. The conversion from continuous time to samples

(discrete-time) changes the underlying Fourier transform of x(t) into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called aliasing. Choice of an appropriate sample-rate (see Nyquist frequency) is the key to minimizing that distortion. Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called leakage, which is manifested as a loss of detail (aka resolution) in the DTFT. Choice of an appropriate sub-sequence length is the primary key to minimizing that effect. When the available data (and time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs, for example to create a spectrogram. If the desired result is a power spectrum and noise or randomness is present in the data, averaging the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum (also called a periodogram in this context); two examples of such techniques are the Welch method and the Bartlett method; the general subject of estimating the power spectrum of a noisy signal is called spectral estimation. A final source of distortion (or perhaps illusion) is the DFT itself, because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. That can be mitigated by increasing the resolution of the DFT. That procedure is illustrated in the discrete-time Fourier transform article. The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued "samples" is more than offset by the inherent efficiency of the FFT. As already noted, leakage imposes a limit on the inherent resolution of the DTFT. 
So there is a practical limit to the benefit that can be obtained from a fine-grained DFT.


Data compression
The field of digital signal processing relies heavily on operations in the frequency domain (i.e. on the Fourier transform). For example, several lossy image and sound compression methods employ the discrete Fourier transform: the signal is cut into short segments, each is transformed, and then the Fourier coefficients of high frequencies, which are assumed to be unnoticeable, are discarded. The decompressor computes the inverse transform based on this reduced number of Fourier coefficients. (Compression applications often use a specialized form of the DFT, the discrete cosine transform or sometimes the modified discrete cosine transform.)

Partial differential equations


Discrete Fourier transforms are often used to solve partial differential equations, where again the DFT is used as an approximation for the Fourier series (which is recovered in the limit of infinite N). The advantage of this approach is that it expands the signal in complex exponentials $e^{inx}$, which are eigenfunctions of differentiation: $\frac{d}{dx} e^{inx} = in\, e^{inx}$. Thus, in the Fourier representation, differentiation is simple: we just multiply by in. (Note, however, that the choice of n is not unique due to aliasing; for the method to be convergent, a choice similar to that in the trigonometric interpolation section above should be used.) A linear differential equation with constant coefficients is transformed into an easily solvable algebraic equation. One then uses the inverse DFT to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method.

Polynomial multiplication
Suppose we wish to compute the polynomial product c(x) = a(x) · b(x). The ordinary product expression for the coefficients of c involves a linear (acyclic) convolution, where indices do not "wrap around." This can be rewritten as a cyclic convolution by taking the coefficient vectors for a(x) and b(x) with constant term first, then appending zeros so that the resultant coefficient vectors a and b have dimension d > deg(a(x)) + deg(b(x)). Then,

$$\mathbf{c} = \mathbf{a} * \mathbf{b},$$

where c is the vector of coefficients for c(x), and the convolution operator $*$ is defined so

$$c_n = \sum_{m=0}^{d-1} a_m\, b_{(n-m) \bmod d}, \qquad n = 0, 1, \ldots, d-1.$$

But convolution becomes multiplication under the DFT:

$$\mathcal{F}(\mathbf{c}) = \mathcal{F}(\mathbf{a}) \cdot \mathcal{F}(\mathbf{b}).$$

Here the vector product is taken elementwise. Thus the coefficients of the product polynomial c(x) are just the terms 0, ..., deg(a(x)) + deg(b(x)) of the coefficient vector

$$\mathbf{c} = \mathcal{F}^{-1}\big(\mathcal{F}(\mathbf{a}) \cdot \mathcal{F}(\mathbf{b})\big).$$
With a fast Fourier transform, the resulting algorithm takes O(N log N) arithmetic operations. Due to its simplicity and speed, the Cooley–Tukey FFT algorithm, which is limited to composite sizes, is often chosen for the transform operation. In this case, d should be chosen as the smallest integer greater than the sum of the input polynomial degrees that is factorizable into small prime factors (e.g. 2, 3, and 5, depending upon the FFT implementation).
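A small sketch of this procedure in Python (naive `dft`/`idft` helpers stand in for the FFT; `poly_mul` is an illustrative name):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists, constant term first."""
    d = len(a) + len(b) - 1            # enough room that the cyclic wrap-around is harmless
    a_pad = a + [0.0] * (d - len(a))   # append zeros to both coefficient vectors
    b_pad = b + [0.0] * (d - len(b))
    C = [A * B for A, B in zip(dft(a_pad), dft(b_pad))]   # elementwise product
    return [round(c.real) for c in idft(C)]               # integer coefficients here

# (1 + 2x)(3 + 4x + 5x^2) = 3 + 10x + 13x^2 + 10x^3
assert poly_mul([1, 2], [3, 4, 5]) == [3, 10, 13, 10]
```

For non-integer coefficients the final rounding step would of course be dropped.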

Multiplication of large integers

The fastest known algorithms for the multiplication of very large integers use the polynomial multiplication method outlined above. Integers can be treated as the value of a polynomial evaluated specifically at the number base, with the coefficients of the polynomial corresponding to the digits in that base. After polynomial multiplication, a relatively low-complexity carry-propagation step completes the multiplication.
Some discrete Fourier transform pairs


Some DFT pairs

$x_n \leftrightarrow X_k$ (Note)

$x_n\, e^{2\pi i\, n m/N} \leftrightarrow X_{k-m}$ (Shift theorem)

$x_n$ real $\leftrightarrow X_k = X_{N-k}^*$ (Real DFT)

$a^n \leftrightarrow \dfrac{1 - a^N}{1 - a\, e^{-2\pi i k/N}}$ (from the geometric progression formula)

$\dbinom{N-1}{n} \leftrightarrow \left(1 + e^{-2\pi i k/N}\right)^{N-1}$ (from the binomial theorem)

$x_n = \begin{cases} \frac{1}{W} & |n| < \frac{W}{2} \\ 0 & \text{otherwise} \end{cases} \leftrightarrow X_k = \dfrac{\sin\!\left(\frac{\pi W k}{N}\right)}{W \sin\!\left(\frac{\pi k}{N}\right)}$ ($x_n$ is a rectangular window function of W points centered on n = 0, where W is an odd integer, and $X_k$ is a sinc-like function; specifically, a Dirichlet kernel)

Discretization and periodic summation of the scaled Gaussian functions for c > 0: since either c or 1/c is larger than one and thus warrants fast convergence of one of the two series, for large c you may choose to compute the frequency spectrum and convert to the time domain using the discrete Fourier transform.

Derivation as Fourier series


The DFT can be derived as a truncation of the Fourier series of a periodic sequence of impulses.

Generalizations
Representation theory
The DFT can be interpreted as the complex-valued representation theory of the finite cyclic group. In other words, a sequence of n complex numbers can be thought of as an element of n-dimensional complex space $\mathbb{C}^n$, or equivalently a function from the finite cyclic group of order n to the complex numbers, $\mathbb{Z}_n \to \mathbb{C}$. This latter may be suggestively written $\mathbb{C}^{\mathbb{Z}_n}$ to emphasize that this is a complex vector space whose coordinates are indexed by the n-element set $\mathbb{Z}_n$.

From this point of view, one may generalize the DFT to representation theory generally, or more narrowly to the representation theory of finite groups. More narrowly still, one may generalize the DFT by either changing the target (taking values in a field other than the complex numbers), or the domain (a group other than a finite cyclic group), as detailed in the sequel.


Other fields
Many of the properties of the DFT only depend on the fact that $e^{-2\pi i/N}$ is a primitive root of unity, sometimes denoted $\omega_N$ or $W_N$ (so that $\omega_N^N = 1$). Such properties include the completeness, orthogonality, Plancherel/Parseval, periodicity, shift, convolution, and unitarity properties above, as well as many FFT algorithms. For this reason, the discrete Fourier transform can be defined by using roots of unity in fields other than the complex numbers, and such generalizations are commonly called number-theoretic transforms (NTTs) in the case of finite fields. For more information, see number-theoretic transform and discrete Fourier transform (general).

Other finite groups


The standard DFT acts on a sequence x0, x1, ..., xN−1 of complex numbers, which can be viewed as a function {0, 1, ..., N − 1} → C. The multidimensional DFT acts on multidimensional sequences, which can be viewed as functions {0, 1, ..., N1 − 1} × ... × {0, 1, ..., Nd − 1} → C.

This suggests the generalization to Fourier transforms on arbitrary finite groups, which act on functions G → C where G is a finite group. In this framework, the standard DFT is seen as the Fourier transform on a cyclic group, while the multidimensional DFT is a Fourier transform on a direct sum of cyclic groups.
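The multidimensional case can be illustrated numerically (a sketch added here using NumPy, not part of the original article): the 2-D DFT on the direct sum of two cyclic groups factors into 1-D DFTs applied along each axis.

```python
import numpy as np

# A function on Z_3 (+) Z_4, stored as a 3x4 array.
x = np.arange(12.0).reshape(3, 4)

# Transform along each cyclic factor in turn...
by_axes = np.fft.fft(np.fft.fft(x, axis=0), axis=1)

# ...which agrees with the full 2-D DFT.
assert np.allclose(by_axes, np.fft.fft2(x))
```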

Alternatives
As with other Fourier transforms, there are various alternatives to the DFT for various applications, prominent among which are wavelets. The analog of the DFT is the discrete wavelet transform (DWT). From the point of view of time–frequency analysis, a key limitation of the Fourier transform is that it does not include location information, only frequency information, and thus has difficulty in representing transients. As wavelets have location as well as frequency, they are better able to represent location, at the expense of greater difficulty representing frequency. For details, see comparison of the discrete wavelet transform with the discrete Fourier transform.

Notes
[1] T. G. Stockham, Jr., "High-speed convolution and correlation," in 1966 Proc. AFIPS Spring Joint Computing Conf. Reprinted in Digital Signal Processing, L. R. Rabiner and C. M. Rader, editors, New York: IEEE Press, 1972.
[2] DeBrunner, Victor; Havlicek, Joseph P.; Przebinda, Tomasz; Özaydın, Murad (2005). "Entropy-Based Uncertainty Measures for …, and … With a Hirschman Optimal Transform for …" (http://redwood.berkeley.edu/w/images/9/95/2002-26.pdf). IEEE Transactions on Signal Processing 53 (8): 2690. Retrieved 2011-06-23.
[3] Santhanam, Balu; Santhanam, Thalanayar S. "Discrete Gauss-Hermite functions and eigenvectors of the centered discrete Fourier transform" (http://thamakau.usc.edu/Proceedings/ICASSP 2007/pdfs/0301385.pdf), Proceedings of the 32nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2007, SPTM-P12.4), vol. III, pp. 1385–1388.

References
Brigham, E. Oran (1988). The Fast Fourier Transform and Its Applications. Englewood Cliffs, N.J.: Prentice Hall. ISBN 0-13-307505-2.
Oppenheim, Alan V.; Schafer, R. W.; Buck, J. R. (1999). Discrete-Time Signal Processing. Upper Saddle River, N.J.: Prentice Hall. ISBN 0-13-754920-2.
Smith, Steven W. (1999). "Chapter 8: The Discrete Fourier Transform" (http://www.dspguide.com/ch8/1.htm). The Scientist and Engineer's Guide to Digital Signal Processing (Second ed.). San Diego, Calif.: California Technical Publishing. ISBN 0-9660176-3-3.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Chapter 30: Polynomials and the FFT". Introduction to Algorithms (Second ed.). MIT Press and McGraw-Hill. pp. 822–848. ISBN 0-262-03293-7. Esp. section 30.2: The DFT and FFT, pp. 830–838.

P. Duhamel, B. Piron, and J. M. Etcheto (1988). "On computing the inverse DFT". IEEE Trans. Acoust., Speech and Sig. Processing 36 (2): 285–286. doi:10.1109/29.1519.
J. H. McClellan and T. W. Parks (1972). "Eigenvalues and eigenvectors of the discrete Fourier transformation". IEEE Trans. Audio Electroacoust. 20 (1): 66–74. doi:10.1109/TAU.1972.1162342.
Bradley W. Dickinson and Kenneth Steiglitz (1982). "Eigenvectors and functions of the discrete Fourier transform". IEEE Trans. Acoust., Speech and Sig. Processing 30 (1): 25–31. doi:10.1109/TASSP.1982.1163843. (Note that this paper has an apparent typo in its table of the eigenvalue multiplicities: the +i/−i columns are interchanged. The correct table can be found in McClellan and Parks, 1972, and is easily confirmed numerically.)
F. A. Grünbaum (1982). "The eigenvectors of the discrete Fourier transform". J. Math. Anal. Appl. 88 (2): 355–363. doi:10.1016/0022-247X(82)90199-8.
Natig M. Atakishiyev and Kurt Bernardo Wolf (1997). "Fractional Fourier-Kravchuk transform". J. Opt. Soc. Am. A 14 (7): 1467–1477. doi:10.1364/JOSAA.14.001467.
C. Candan, M. A. Kutay and H. M. Ozaktas (2000). "The discrete fractional Fourier transform". IEEE Trans. on Signal Processing 48 (5): 1329–1337. doi:10.1109/78.839980.
Magdy Tawfik Hanna, Nabila Philip Attalla Seif, and Waleed Abd El Maguid Ahmed (2004). "Hermite-Gaussian-like eigenvectors of the discrete Fourier transform matrix based on the singular-value decomposition of its orthogonal projection matrices". IEEE Trans. Circ. Syst. I 51 (11): 2245–2254. doi:10.1109/TCSI.2004.836850.
Shamgar Gurevich and Ronny Hadani (2009). "On the diagonalization of the discrete Fourier transform". Applied and Computational Harmonic Analysis 27 (1): 87–99. arXiv:0808.3281. doi:10.1016/j.acha.2008.11.003.
Shamgar Gurevich, Ronny Hadani, and Nir Sochen (2008). "The finite harmonic oscillator and its applications to sequences, communication and radar". IEEE Transactions on Information Theory 54 (9): 4239–4253. arXiv:0808.1495. doi:10.1109/TIT.2008.926440.
Juan G. Vargas-Rubio and Balu Santhanam (2005). "On the multiangle centered discrete fractional Fourier transform". IEEE Sig. Proc. Lett. 12 (4): 273–276. doi:10.1109/LSP.2005.843762.
J. Cooley, P. Lewis, and P. Welch (1969). "The finite Fourier transform". IEEE Trans. Audio Electroacoustics 17 (2): 77–85. doi:10.1109/TAU.1969.1162036.
F. N. Kong (2008). "Analytic Expressions of Two Discrete Hermite-Gaussian Signals". IEEE Trans. Circuits and Systems II: Express Briefs 55 (1): 56–60. doi:10.1109/TCSII.2007.909865.


External links
Interactive flash tutorial on the DFT (http://www.fourier-series.com/fourierseries2/DFT_tutorial.html)
Mathematics of the Discrete Fourier Transform by Julius O. Smith III (http://ccrma.stanford.edu/~jos/mdft/mdft.html)
Fast implementation of the DFT - coded in C and under General Public License (GPL) (http://www.fftw.org)
The DFT à Pied: Mastering The Fourier Transform in One Day (http://www.dspdimension.com/admin/dft-a-pied/)


Fourier analysis
In mathematics, Fourier analysis is a subject area which grew from the study of Fourier series. The subject began with the study of the way general functions may be represented by sums of simpler trigonometric functions. Fourier analysis is named after Joseph Fourier, who showed that representing a function by a trigonometric series greatly simplifies the study of heat propagation.

Today, the subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into simpler pieces is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. In mathematics, the term Fourier analysis often refers to the study of both operations.

The decomposition process itself is called a Fourier transform. The transform is often given a more specific name which depends upon the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.

Applications
Fourier analysis has many scientific applications in physics, partial differential equations, number theory, combinatorics, signal processing, imaging, probability theory, statistics, option pricing, cryptography, numerical analysis, acoustics, oceanography, optics, diffraction, geometry, and other areas. This wide applicability stems from many useful properties of the transforms:
- The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem or, more generally, as the Plancherel theorem, and most generally via Pontryagin duality) (Rudin 1990).
- The transforms are usually invertible.
- The exponential functions are eigenfunctions of differentiation, which means that this representation transforms linear differential equations with constant coefficients into ordinary algebraic ones (Evans 1998). Therefore, the behavior of a linear time-invariant system can be analyzed at each frequency independently.
- By the convolution theorem, Fourier transforms turn the complicated convolution operation into simple multiplication, which means that they provide an efficient way to compute convolution-based operations such as polynomial multiplication and multiplying large numbers (Knuth 1997).
- The discrete version of the Fourier transform (see below) can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms (Conte & de Boor 1980).

Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each Fourier-transformed image square is reassembled from the preserved approximate components, and then inverse-transformed to produce an approximation of the original image.
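As a sketch of the convolution-theorem property above (an illustration added here using NumPy, not part of the original article), polynomial multiplication reduces to pointwise multiplication of transforms:

```python
import numpy as np

# Multiply two polynomials via the convolution theorem: the product's
# coefficients are the convolution of the factors' coefficients, and
# convolution becomes pointwise multiplication of FFTs.
a = [1, 2, 3]          # 1 + 2x + 3x^2
b = [4, 5]             # 4 + 5x
n = len(a) + len(b) - 1

fa = np.fft.rfft(a, n)
fb = np.fft.rfft(b, n)
product = np.rint(np.fft.irfft(fa * fb, n)).astype(int)

assert list(product) == [4, 13, 22, 15]   # (1+2x+3x^2)(4+5x)
```

For large inputs this costs O(n log n) rather than the O(n^2) of direct convolution.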


Applications in signal processing


When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate individual components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consists of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation. Some examples include:
- Telephone dialing: the touch-tone signals for each telephone key, when pressed, are each a sum of two separate tones (frequencies). Fourier analysis can be used to separate (or analyze) the telephone signal, to reveal the two component tones and therefore which button was pressed.
- Removal of unwanted frequencies from an audio recording (used to eliminate hum from leakage of AC power into the signal, or to eliminate the stereo subcarrier from FM radio recordings);
- Noise gating of audio recordings to remove quiet background noise by eliminating Fourier components that do not exceed a preset amplitude;
- Equalization of audio recordings with a series of bandpass filters;
- Digital radio reception without a superheterodyne circuit, as in a modern cell phone or radio scanner;
- Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, stripe artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera;
- Cross-correlation of similar images for co-alignment;
- X-ray crystallography to reconstruct a crystal structure from its diffraction pattern;
- Fourier transform ion cyclotron resonance mass spectrometry to determine the mass of ions from the frequency of cyclotron motion in a magnetic field. Many other forms of spectroscopy, including infrared and nuclear magnetic resonance spectroscopies, also rely upon Fourier transforms to determine the three-dimensional structure and/or identity of the sample being analyzed;
- Generation of sound spectrograms used to analyze sounds.
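The touch-tone example can be sketched numerically (an illustration added here using NumPy, not part of the original article; 770 Hz and 1336 Hz are the standard DTMF pair for the '5' key):

```python
import numpy as np

# Identify a DTMF key from the two dominant tones in its spectrum.
fs = 8000                                 # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)             # 0.5 s, so both tones fall on exact bins
signal = np.sin(2 * np.pi * 770 * t) + np.sin(2 * np.pi * 1336 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)   # bin spacing: 2 Hz

# The two largest spectral peaks reveal the two component tones.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
assert peaks == [770.0, 1336.0]
```

The 0.5 s duration is chosen so that both tone frequencies are integer multiples of the bin spacing; with arbitrary durations one would search for peaks in a neighborhood instead.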

Variants of Fourier analysis


Fourier analysis has different forms, some of which have different names. The more common variants are shown below. The different names usually reflect different properties of the function or data being analyzed. The resultant transforms can be seen as special cases or generalizations of each other.

(Continuous) Fourier transform


Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, such as time (t). In this case the Fourier transform describes a function f(t) in terms of basic complex exponentials of various frequencies. In terms of ordinary frequency ν, the Fourier transform is given by the complex number:

    F(ν) = ∫ f(t) e^(−2πiνt) dt,  integrated over all real t.

Evaluating this quantity for all values of ν produces the frequency-domain function. See Fourier transform for even more information, including:
- the inverse transform, F(ν) → f(t)
- conventions for amplitude normalization and frequency scaling/units
- transform properties
- tabulated transforms of specific functions
- an extension/generalization for functions of multiple dimensions, such as images
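As a numerical sketch of the definition (added here, not part of the original article), a Riemann sum approximates the integral; the Gaussian e^(−πt²) is used because it is its own Fourier transform:

```python
import numpy as np

# Riemann-sum approximation of F(nu) = integral f(t) exp(-2*pi*i*nu*t) dt,
# tested on f(t) = exp(-pi t^2), which is its own Fourier transform.
dt = 0.01
t = np.arange(-10, 10, dt)
f = np.exp(-np.pi * t**2)

def fourier(nu):
    return np.sum(f * np.exp(-2j * np.pi * nu * t)) * dt

for nu in (0.0, 0.5, 1.0):
    assert abs(fourier(nu) - np.exp(-np.pi * nu**2)) < 1e-6
```

The sum converges very quickly here because the Gaussian decays faster than any polynomial; rougher functions need finer grids and wider integration ranges.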


Fourier series
A Fourier series is a representation of a function in terms of a summation of a potentially infinite number of harmonically-related sinusoids or complex exponential functions with different amplitudes and phases. The amplitude and phase of a sinusoid can be combined into a single complex number, called a Fourier coefficient. The Fourier series is a periodic function, so it cannot represent an arbitrary function. It can represent either: (a) a periodic function, or (b) a function that is defined only over a finite-length interval (or compact support); in case (b), the values produced by the Fourier series outside the finite interval are irrelevant. The general form of a Fourier series is:

    s(t) = Σ S[n] e^(2πint/P),  summed over all integers n,

where P is the period (case a) or the interval length (case b), and the S[n] sequence are the Fourier coefficients. When the coefficients are derived from a function f(t) as follows:

    S[n] = (1/P) ∫ f(t) e^(−2πint/P) dt,  integrated over [u, u + P],

then, aside from possible convergence issues, s(t) will equal f(t) in the interval [u, u + P]. It follows that if f(t) is P-periodic (case a), s(t) and f(t) are equal everywhere. The Fourier series is analogous to the inverse Fourier transform, in that it is the reconstruction of the original function that was transformed into the Fourier series coefficients. See Fourier series for more information, including the historical development.
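The coefficient formula can be sketched numerically (an illustration added here using NumPy, not part of the original article); a square wave is used because its coefficient magnitudes, 2/(πn) for odd n, are classical:

```python
import numpy as np

# Fourier coefficients S[n] = (1/P) * integral over one period of
# s(t)*exp(-2j*pi*n*t/P), approximated here by a Riemann sum, for a
# square wave of period P = 1.
P = 1.0
t = np.arange(0, P, P / 4096)
s = np.where(t < P / 2, 1.0, -1.0)           # square wave on [0, P)

def coeff(n):
    return np.mean(s * np.exp(-2j * np.pi * n * t / P))

# Known result: |S[n]| = 2/(pi*|n|) for odd n, and 0 for even nonzero n.
assert abs(abs(coeff(1)) - 2 / np.pi) < 1e-3
assert abs(coeff(2)) < 1e-3

# A partial sum over |n| <= 41 reconstructs s(t) away from the jumps.
approx = sum(coeff(n) * np.exp(2j * np.pi * n * t / P) for n in range(-41, 42))
assert abs(approx[1024].real - 1.0) < 0.1    # t = 0.25, on the +1 plateau
```

Near the jumps the partial sums overshoot (the Gibbs phenomenon), which is why the reconstruction is checked mid-plateau.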

Discrete-time Fourier transform (DTFT)


For functions of an integer index, the discrete-time Fourier transform (DTFT) provides a useful frequency-domain transform. A useful "discrete-time" function can be obtained by sampling a "continuous-time" function, s(t), which produces a sequence, s(nT), for integer values of n and some time-interval T. If information is lost, then only an approximation to the original transform, S(f), can be obtained by looking at one period of the periodic function:

    S_1/T(f) = Σn s(nT) e^(−2πifnT) = Σk S(f − k/T),

which is the DTFT. The identity above is a result of the Poisson summation formula. The DTFT is also equivalent to the Fourier transform of a "continuous" function that is constructed by using the s[n] sequence to modulate a Dirac comb.

Applications of the DTFT are not limited to sampled functions; it can be applied to any discrete sequence. See Discrete-time Fourier transform for more information on this and other topics, including:
- the inverse transform
- normalized frequency units
- windowing (finite-length sequences)
- transform properties
- tabulated transforms of specific functions
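The DTFT definition can be evaluated directly for a short sequence and compared with the DFT (a sketch added here using NumPy, not part of the original article; the sample values and rate are arbitrary):

```python
import numpy as np

# Direct evaluation of the DTFT of a finite sequence,
# S(f) = sum_n s[n] * exp(-2j*pi*f*n*T).  At the frequencies k/(N*T)
# it coincides with the DFT of the same samples.
T = 1 / 100                          # sample interval (fs = 100 Hz)
s = np.array([1.0, 0.5, -0.25, 0.75, 0.0, -1.0])
N = len(s)

def dtft(f):
    n = np.arange(N)
    return np.sum(s * np.exp(-2j * np.pi * f * n * T))

bins = np.fft.fft(s)
for k in range(N):
    assert np.isclose(dtft(k / (N * T)), bins[k])
```

Unlike the DFT, dtft(f) can be evaluated at any frequency f, not just the N bin frequencies.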


Discrete Fourier transform (DFT)


When s[n] is periodic, with period N, S_1/T(f) is another Dirac comb function, modulated by the coefficients of a Fourier series, and the integral formula for the coefficients simplifies to a summation:

    S[k] = Σn s[n] e^(−2πikn/N),  summed over any N consecutive samples, for all integer values of k.

This sequence is N-periodic, and so the entire sequence can be described by just N coefficients, known most often as the DFT and sometimes as the discrete Fourier series (DFS).[1] The DFT also has an inverse transform that reproduces the periodic s[n] sequence.

When s[n] is not periodic, but its non-zero portion has finite duration (N), S_1/T(f) is continuous and finite-valued. But a discrete subset of its values is sufficient to reconstruct/represent the (finite) portion of s[n] that was analyzed, analogous to case b (above) of the Fourier series. And that subset is again the DFT. When N is larger than the non-zero portion of s[n], a practice known as zero-padding, the DFT computes more closely-spaced samples of one period of S_1/T(f). That is frequently done to provide an interpolated view of the DTFT.

The term DFT is ambiguous in the sense that it does not tell us whether the inverse transform is valid for all n or just for a sequence of length N; i.e. whether the original s[n] sequence is periodic or finite. The term DFS, mentioned above, is sometimes used instead of DFT to convey that s[n] is periodic.

The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers. See Discrete Fourier transform for much more information, including:
- the inverse transform
- transform properties
- applications
- tabulated transforms of specific functions
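The zero-padding remark can be checked numerically (an illustration added here using NumPy, not part of the original article): padding a length-8 sequence to length 16 samples the same DTFT twice as densely, so every second bin of the padded DFT matches the unpadded one.

```python
import numpy as np

x = np.array([1.0, 2, 1, -1, 0, 3, 2, 1])
X8 = np.fft.fft(x)          # 8 samples of the underlying DTFT
X16 = np.fft.fft(x, 16)     # zero-padded: 16 samples of the same DTFT

# Every second bin of the padded transform coincides with the unpadded one.
assert np.allclose(X16[::2], X8)
```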

Summary

This section consolidates the information above, using a consistent notation to emphasize the relationships between the four transforms.

[Table: the four transform/inverse pairs, organized by continuous-time versus discrete-time, and by any duration (continuous frequency) versus finite duration or periodic (discrete frequencies).]

In the columns labeled "finite duration or periodic", the transforms are most useful when the function and the sequence being transformed are restricted to a duration of P or N, respectively, but that is not a requirement for the inverse transforms to work as shown. The periodic case (e.g. Fourier series) is seen by considering the periodic summation to be the function being transformed; any periodic function can be represented as a periodic summation of another function, so there is no loss of generality. In either case, the periodic summation is always the inverse of these discrete-frequency transforms, a fact that is often misinterpreted.

Fourier transforms on arbitrary locally compact abelian topological groups


The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.

Time–frequency transforms
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information. As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information; by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or the fractional Fourier transform, or they can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
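The time-frequency trade-off can be sketched with a minimal short-time Fourier transform (an illustration added here using NumPy, not part of the original article): a frequency change that a single global FFT would smear across the spectrum is localized in time by windowed frames.

```python
import numpy as np

# A minimal short-time Fourier transform: slide a window across the
# signal and take the FFT of each windowed frame.
def stft(x, frame_len=256, hop=128):
    window = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * window
              for i in range(0, len(x) - frame_len + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

fs = 8000
t = np.arange(0, 1, 1 / fs)
# Frequency jumps from 500 Hz to 1500 Hz halfway through the signal.
x = np.where(t < 0.5, np.sin(2*np.pi*500*t), np.sin(2*np.pi*1500*t))

S = np.abs(stft(x))
freqs = np.fft.rfftfreq(256, 1 / fs)        # bin spacing: 31.25 Hz

assert abs(freqs[S[0].argmax()] - 500) < 32   # early frame: ~500 Hz
assert abs(freqs[S[-1].argmax()] - 1500) < 32 # late frame: ~1500 Hz
```

Shorter frames sharpen the time localization but coarsen the 31.25 Hz frequency resolution, and vice versa: the uncertainty trade-off in miniature.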


History
A primitive form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).[2]

In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit,[3] which has been described as the first formula for the DFT,[4] and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string.[5] Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.[6] Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.[5]

An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic:[7] Lagrange transformed the roots into the resolvents:

where ω is a cubic root of unity, which is the DFT of order 3.

A number of authors, notably Jean le Rond d'Alembert and Carl Friedrich Gauss, used trigonometric series to study the heat equation, but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series.

Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions,[4] and Lagrange had given the Fourier series solution to the wave equation,[4] so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.[4]

The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory. The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbit of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.[6][8]

Interpretation in terms of time and frequency


In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.

When the function is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function F at a given frequency represents the amplitude of a frequency component whose initial phase is given by the phase of F.

Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in branches as diverse as image processing, heat conduction, and automatic control.


Notes
[1] We note that DFS is actually a misnomer, since a Fourier series is a sum of sinusoids, not the sequence of coefficients.
[2] Prestini, Elena (2004), The Evolution of Applied Harmonic Analysis: Models of the Real World (http://books.google.com/?id=fye--TBu4T0C), Birkhäuser, ISBN 978-0-8176-4125-2, p. 62. Rota, Gian-Carlo; Palombi, Fabrizio (1997), Indiscrete Thoughts (http://books.google.com/?id=H5smrEExNFUC), Birkhäuser, ISBN 978-0-8176-3866-5, p. 11. Neugebauer, Otto (1969) [1957], The Exact Sciences in Antiquity (http://books.google.com/?id=JVhTtVA2zr8C) (2nd ed.), Dover Publications, ISBN 978-0-486-22332-2. Brack-Bernsen, Lis; Brack, Matthias, "Analyzing shell structure from Babylonian and modern times", arXiv:physics/0310126.
[3] Terras, Audrey (1999), Fourier Analysis on Finite Groups and Applications (http://books.google.com/?id=-B2TA669dJMC), Cambridge University Press, ISBN 978-0-521-45718-7, p. 30.
[4] Briggs, William L.; Henson, Van Emden (1995), The DFT: An Owner's Manual for the Discrete Fourier Transform (http://books.google.com/?id=coq49_LRURUC), SIAM, ISBN 978-0-89871-342-8, p. 4.
[5] Briggs, William L.; Henson, Van Emden (1995), The DFT: An Owner's Manual for the Discrete Fourier Transform (http://books.google.com/?id=coq49_LRURUC), SIAM, ISBN 978-0-89871-342-8, p. 2.
[6] Heideman, M. T.; Johnson, D. H.; Burrus, C. S., "Gauss and the history of the fast Fourier transform" (http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1162257), IEEE ASSP Magazine 1 (4): 14–21 (1984).
[7] Knapp, Anthony W. (2006), Basic Algebra (http://books.google.com/?id=KVeXG163BggC), Springer, ISBN 978-0-8176-3248-9, p. 501.
[8] Terras, Audrey (1999), Fourier Analysis on Finite Groups and Applications (http://books.google.com/?id=-B2TA669dJMC), Cambridge University Press, ISBN 978-0-521-45718-7, p. 31.

References
Conte, S. D.; de Boor, Carl (1980), Elementary Numerical Analysis (Third ed.), New York: McGraw-Hill, Inc., ISBN 0070662282
Evans, L. (1998), Partial Differential Equations, American Mathematical Society, ISBN 3540761241
Howell, Kenneth B. (2001), Principles of Fourier Analysis, CRC Press, ISBN 978-0849382758
Kamen, E. W.; Heck, B. S., Fundamentals of Signals and Systems Using the Web and Matlab, ISBN 0-13-017293-6
Knuth, Donald E. (1997), The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd ed.), Section 4.3.3.C: Discrete Fourier transforms, p. 305, Addison-Wesley Professional, ISBN 0201896842
Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, CRC Press, Boca Raton, ISBN 0-8493-2876-4
Rudin, Walter (1990), Fourier Analysis on Groups, Wiley-Interscience, ISBN 047152364X
Smith, Steven W. (1999), The Scientist and Engineer's Guide to Digital Signal Processing (http://www.dspguide.com/pdfbook.htm) (Second ed.), San Diego, Calif.: California Technical Publishing, ISBN 0-9660176-3-3
Stein, E. M.; Weiss, G. (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 0-691-08078-X


External links
Tables of Integral Transforms (http://eqworld.ipmnet.ru/en/auxiliary/aux-inttrans.htm) at EqWorld: The World of Mathematical Equations.
An Intuitive Explanation of Fourier Theory (http://cns-alumni.bu.edu/~slehar/fourier/fourier.html) by Steven Lehar.
Lectures on Image Processing: a collection of 18 lectures in pdf format from Vanderbilt University. Lecture 6 is on the 1- and 2-D Fourier Transform; lectures 7–15 make use of it. (http://www.archive.org/details/Lectures_on_Image_Processing), by Alan Peters

Sine

Basic features
- Parity: odd
- Domain: (−∞, ∞)
- Codomain: [−1, 1]
- Period: 2π
Specific values
- At zero: 0
- Maxima: ((2k + 1/2)π, 1)
- Minima: ((2k − 1/2)π, −1)
Specific features
- Root: kπ
- Critical point: kπ − π/2
- Inflection point: kπ
- Fixed point: 0
The variable k is an integer.

In mathematics, the sine function is a function of an angle. In a right triangle, sine gives the ratio of the length of the side opposite to an angle to the length of the hypotenuse.

Sine is usually listed first amongst the trigonometric functions. Trigonometric functions are commonly defined as ratios of two sides of a right triangle containing the angle, and can equivalently be defined as the lengths of various line segments from a unit circle. More modern definitions express them as infinite series or as solutions of certain differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers.

The sine function is commonly used to model periodic phenomena such as sound and light waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year.

The function sine can be traced to the jyā and koṭi-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[1] The word "sine" comes from a Latin mistranslation of the Arabic jiba, which is a transliteration of the Sanskrit word for half the chord, jya-ardha.[2]

For the angle in question, the sine function gives the ratio of the length of the opposite side to the length of the hypotenuse.

Right-angled triangle definition


For any similar triangle the ratio of the length of the sides remains the same. For example, if the hypotenuse is twice as long, so are the other sides. Therefore respective trigonometric functions, depending only on the size of the angle, express those ratios: between the hypotenuse and the "opposite" side to an angle A in question (see illustration) in the case of sine function; or between the hypotenuse and the "adjacent" side (cosine) or between the "opposite" and the "adjacent" side (tangent), etc. To define the trigonometric functions for an acute angle A, start with any right triangle that contains the angle A. The three sides of the triangle are named as follows:

The sine function graphed on the Cartesian plane. In this graph, the angle x is given in radians (π = 180°).

The sine and cosine functions are related in multiple ways. The derivative of sin(x) is cos(x). Also they are 90° out of phase: sin(x + 90°) = cos(x). And for a given angle, cos and sin give the respective x, y coordinates on a unit circle.

The hypotenuse is the side opposite the right angle, in this case side h. The hypotenuse is always the longest side of a right-angled triangle. The opposite side is the side opposite to the angle we are interested in (angle A), in this case side a. The adjacent side is the side that is in contact with (adjacent to) both the angle we are interested in (angle A) and the right angle, in this case side b.

In ordinary Euclidean geometry, according to the triangle postulate the inside angles of every triangle total 180° (π radians). Therefore, in a right-angled triangle, the two non-right angles total 90° (π/2 radians), so each of these angles must be greater than 0° and less than 90°. The following definition applies to such angles. The angle A is the angle between the hypotenuse and the adjacent side. The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse. In our case:

    sin(A) = opposite / hypotenuse = a / h.


Note that this ratio does not depend on the size of the particular right triangle chosen, as long as it contains the angle A, since all such triangles are similar.
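A small worked example of the ratio definition (added here for illustration), using the 3-4-5 right triangle:

```python
import math

# In a 3-4-5 right triangle, the sine of the angle A opposite the
# side of length 3 is opposite/hypotenuse = 3/5.
a, b, h = 3.0, 4.0, 5.0
sin_A = a / h                        # 0.6
A = math.asin(sin_A)                 # the angle itself, in radians

assert math.isclose(sin_A, 0.6)
assert math.isclose(math.sin(A), a / h)
assert abs(math.degrees(A) - 36.8699) < 1e-3   # the classic ~36.87 degrees
```

Scaling all three sides by the same factor leaves sin_A unchanged, which is the similarity argument made above.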

Relation to slope
The trigonometric functions can be defined in terms of the rise, run, and slope of a line segment relative to some horizontal line. When the length of the line segment is 1, sine takes an angle and tells the rise per unit length of the line segment: rise is equal to sin θ multiplied by the length of the line segment. In contrast, cosine is used for telling the run from the angle, and tangent is used for telling the slope from the angle; arctan is used for telling the angle from the slope. The line segment is the equivalent of the hypotenuse in the right triangle, and when it has a length of 1 it is also equivalent to the radius of the unit circle.

Relation to the unit circle


In trigonometry, a unit circle is the circle of radius one centered at the origin (0, 0) in the Cartesian coordinate system. Let a line through the origin, making an angle of θ with the positive half of the x-axis, intersect the unit circle. The x- and y-coordinates of this point of intersection are equal to cos θ and sin θ, respectively. The point's distance from the origin is always 1. Unlike the definitions with the right triangle or slope, the angle can be extended to the full set of real arguments by using the unit circle. This can also be achieved by requiring certain symmetries and that sine be a periodic function.

The unit circle.

Illustration of a unit circle. The radius has a length of 1. The variable t is an angle measure.

Point P(x,y) on the circle of unit radius at an obtuse angle θ > π/2


Identities
Exact identities (using radians), valid for all values of θ:

sin θ = cos(π/2 − θ) = 1/csc θ

Animation showing the graphing process of y = sin x (where x is the angle in radians) using a unit circle

Reciprocal
The reciprocal of sine is cosecant, i.e. the reciprocal of sin(A) is csc(A), or cosec(A). Cosecant gives the ratio of the length of the hypotenuse to the length of the opposite side:

csc(A) = 1/sin(A) = h/a

Inverse
The inverse function of sine is arcsine (arcsin or asin) or inverse sine (sin⁻¹). As sine is not one-to-one, arcsine is not an exact inverse function but a partial inverse function. For example, sin(0) = 0, but also sin(π) = 0, sin(2π) = 0, etc. It follows that the arcsine function is multivalued: arcsin(0) = 0, but also arcsin(0) = π, arcsin(0) = 2π, etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x in the domain the expression arcsin(x) will evaluate only to a single value, called its principal value.
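The multivaluedness and the principal branch can be seen directly with Python's `math.asin`, which returns only principal values in [−π/2, π/2] (a small illustration, not from the text):

```python
import math

# sin is not one-to-one: many angles share the same sine value.
angles = [0.0, math.pi, 2 * math.pi]
print([round(math.sin(a), 12) for a in angles])  # all ≈ 0

# math.asin returns only the principal value, in [-pi/2, pi/2]:
print(math.asin(0.0))  # 0.0, not pi or 2*pi
print(math.isclose(math.asin(1.0), math.pi / 2))  # True

# The full solution set of sin(y) = x is y = asin(x) + 2*pi*k
# or y = pi - asin(x) + 2*pi*k for any integer k:
x, k = 0.5, 1
y = math.pi - math.asin(x) + 2 * math.pi * k
print(math.isclose(math.sin(y), x))  # True
```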

The usual principal values of the arcsin(x) function graphed on the Cartesian plane. Arcsin is the inverse of sin.

The multivalued solutions of sin y = x, where k is some integer:

y = arcsin(x) + 2πk, or y = π − arcsin(x) + 2πk

Arcsin satisfies:

sin(arcsin x) = x for −1 ≤ x ≤ 1

and

arcsin(sin x) = x for −π/2 ≤ x ≤ π/2.

Calculus
For the sine function:

f(x) = sin x

The derivative is:

f′(x) = cos x

The antiderivative is:

∫ f(x) dx = −cos x + C

C denotes the constant of integration.
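These two facts can be spot-checked numerically: a central difference approximates the derivative, and a midpoint Riemann sum approximates the integral of sin over [0, t], which should equal −cos(t) + 1 (a numerical illustration, not from the text):

```python
import math

# Check d/dx sin(x) = cos(x) with a central difference:
h, x = 1e-6, 0.7
numeric_derivative = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
print(math.isclose(numeric_derivative, math.cos(x), rel_tol=1e-6))  # True

# Check the antiderivative: integral of sin over [0, t] is -cos(t) + cos(0).
t, n = 1.2, 100_000
integral = sum(math.sin(t * (i + 0.5) / n) * (t / n) for i in range(n))
print(math.isclose(integral, 1.0 - math.cos(t), rel_tol=1e-6))  # True
```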

Other trigonometric functions


It is possible to express any trigonometric function in terms of any other (up to a plus or minus sign, or using the sign function). Sine in terms of the other common trigonometric functions:

The four quadrants of a Cartesian coordinate system.

Using plus/minus (±), with the sign determined by the quadrant of θ:

f = cos: sin θ = ±√(1 − cos²θ) (+ in quadrants I and II, − in III and IV)
f = cot: sin θ = ±1/√(1 + cot²θ) (+ in quadrants I and II, − in III and IV)
f = tan: sin θ = ±tan θ/√(1 + tan²θ) (+ in quadrants I and IV, − in II and III)
f = sec: sin θ = ±√(sec²θ − 1)/sec θ (+ in quadrants I and III, − in II and IV)

Using the sign function (sgn), the quadrant-dependent sign can be written explicitly; for example sin θ = sgn(cos θ) · tan θ/√(1 + tan²θ).

Note that for all equations which use plus/minus (±), the result is positive in the first quadrant. The basic relationship between the sine and the cosine can also be expressed as the Pythagorean trigonometric identity:

sin²x + cos²x = 1

where sin²x means (sin(x))².

Properties relating to the quadrants


The behavior of the sine function over the four quadrants is as follows:

Quadrant | Degrees | Radians | Value | Sign | Monotony | Convexity
1st | 0° < x < 90° | 0 < x < π/2 | 0 < sin x < 1 | + | increasing | concave
2nd | 90° < x < 180° | π/2 < x < π | 0 < sin x < 1 | + | decreasing | concave
3rd | 180° < x < 270° | π < x < 3π/2 | −1 < sin x < 0 | − | decreasing | convex
4th | 270° < x < 360° | 3π/2 < x < 2π | −1 < sin x < 0 | − | increasing | convex

Points between the quadrants (k is an integer):

The quadrants of the unit circle and of sin x, using the Cartesian coordinate system.

Degrees | Radians (0 ≤ x < 2π) | Radians (general) | sin x | Point type
0° | 0 | 2πk | 0 | Root, inflection
90° | π/2 | π/2 + 2πk | 1 | Maximum
180° | π | π + 2πk | 0 | Root, inflection
270° | 3π/2 | 3π/2 + 2πk | −1 | Minimum

For arguments outside of those in the table, get the value using the fact that the sine function has a period of 360° (or 2π rad): sin(x + 360°·k) = sin x for any integer k, or use sin(180° − x) = sin x.


Series definition
Using only geometry and properties of limits, it can be shown that the derivative of sine is cosine and the derivative of cosine is the negative of sine. It follows that the (4n + k)-th derivative of sine, evaluated at the point 0, cycles through the values 0, 1, 0, −1 for k = 0, 1, 2, 3:

The sine function (blue) is closely approximated by its Taylor polynomial of degree 7 (pink) for a full cycle centered on the origin.

This gives the following Taylor series expansion at x = 0. One can then use the theory of Taylor series to show that the following identity holds for all real numbers x (where x is the angle in radians):[3]

sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯ = Σ_{n=0}^{∞} (−1)ⁿ x^(2n+1)/(2n+1)!

If x were expressed in degrees then the series would contain messy factors involving powers of π/180: if x is the number of degrees, the number of radians is y = πx/180, so

sin(x°) = sin(y) = (π/180)x − (π/180)³ x³/3! + (π/180)⁵ x⁵/5! − ⋯

Mathematically important relationships between the sine and cosine functions and the exponential function (see, for example, Euler's formula) are, again, elegant when the functions' arguments are in radians and messy otherwise. In most branches of mathematics beyond practical geometry, angles are generally measured in radians. A similar series is Gregory's series for arctan, which is obtained by omitting the factorials in the denominator.
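The Taylor series above gives a practical way to approximate sin for radian arguments. A minimal sketch (the helper names are my own; each term is built from the previous one by the factor −x²/((2n+2)(2n+3))):

```python
import math

def sin_taylor(x: float, terms: int = 10) -> float:
    """Approximate sin(x) (x in radians) by its Maclaurin series:
    sin x = x - x^3/3! + x^5/5! - ..."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # Each term is the previous one times -x^2 / ((2n+2)(2n+3)).
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

print(math.isclose(sin_taylor(1.0), math.sin(1.0), rel_tol=1e-12))  # True

# In degrees the series needs the conversion factor pi/180 first,
# exactly as the text describes:
def sin_degrees(deg: float) -> float:
    return sin_taylor(math.radians(deg))

print(round(sin_degrees(30.0), 12))  # 0.5
```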


Continued fraction
The sine function can also be represented as a generalized continued fraction:

sin x = x / (1 + x²/(2·3 − x² + 2·3·x²/(4·5 − x² + 4·5·x²/(6·7 − x² + ⋯))))

The fixed point iteration x_(n+1) = sin x_n with initial value x_0 = 2 converges to 0.

The continued fraction representation expresses the real number values, both rational and irrational, of the sine function.

Law of sines
The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C:

sin A / a = sin B / b = sin C / c

This is equivalent to the equality of the first three expressions below:

a / sin A = b / sin B = c / sin C = 2R

where R is the triangle's circumradius. It can be proven by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance.
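The triangulation use case can be sketched directly from the law of sines: given one side and two angles, solve for another side (the helper name `third_side` is hypothetical, not from the text):

```python
import math

# Law of sines: a/sin(A) = b/sin(B) = c/sin(C) = 2R.
# Typical triangulation use: one side and two angles are known.
def third_side(a: float, A_deg: float, B_deg: float) -> float:
    """Length of the side opposite angle B, given side a opposite angle A
    (angles in degrees)."""
    A, B = math.radians(A_deg), math.radians(B_deg)
    return a * math.sin(B) / math.sin(A)

# In a 30-60-90 triangle, the side opposite 30° of length 1 gives a
# hypotenuse (the side opposite 90°) of length 2:
print(round(third_side(1.0, 30.0, 90.0), 12))  # 2.0
```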


Values

sin(x)

Some common angles (θ) shown on the unit circle. The angles are given in degrees and radians, together with the corresponding intersection point on the unit circle, (cos θ, sin θ).

x (angle) in degrees | in radians | in grads | sin x (exact) | sin x (decimal)
0° (180°) | 0 (π) | 0g (200g) | 0 | 0
15° (165°) | π/12 (11π/12) | 16⅔g (183⅓g) | (√6 − √2)/4 | 0.258819045102521
30° (150°) | π/6 (5π/6) | 33⅓g (166⅔g) | 1/2 | 0.5
45° (135°) | π/4 (3π/4) | 50g (150g) | √2/2 | 0.707106781186548
60° (120°) | π/3 (2π/3) | 66⅔g (133⅓g) | √3/2 | 0.866025403784439
75° (105°) | 5π/12 (7π/12) | 83⅓g (116⅔g) | (√6 + √2)/4 | 0.965925826289068
90° | π/2 | 100g | 1 | 1

(Each supplementary angle is listed in parentheses because sin(180° − x) = sin x.)

A memory aid (note it does not include 15° and 75°):


x in degrees | 0° | 30° | 45° | 60° | 90°
x in radians | 0 | π/6 | π/4 | π/3 | π/2
sin x | √0/2 = 0 | √1/2 = 1/2 | √2/2 | √3/2 | √4/2 = 1

90 degree increments:

x in degrees | 0 | 90 | 180 | 270 | 360
x in radians | 0 | π/2 | π | 3π/2 | 2π
sin x | 0 | 1 | 0 | −1 | 0

Other values not listed above: A019812 [4] A019815 [5] A019818 [6] A019821 [7] A019827 [8] A019830 [9]

A019833 [10] A019836 [11] A019842 [12] A019845 [13] A019848 [14] A019851 [15]

For angles greater than 2π or less than −2π, simply continue to rotate around the circle; sine is a periodic function with period 2π:

sin(θ + 2πk) = sin θ

for any angle θ and any integer k. The primitive period (the smallest positive period) of sine is a full circle, i.e. 2π radians or 360 degrees.


Relationship to complex numbers


Sine is used to determine the imaginary part of a complex number given in polar coordinates (r, φ):

z = r(cos φ + i sin φ)

An illustration of the complex plane. The imaginary numbers are on the vertical coordinate axis.

The imaginary part is:

Im(z) = r sin φ

r and φ represent the magnitude and angle of the complex number respectively. i is the imaginary unit. z is a complex number. Although dealing with complex numbers, sine's parameter in this usage is still a real number. Sine can also take a complex number as an argument.

Sine with a complex argument


The definition of the sine function for complex arguments z:

sin z = (e^(iz) − e^(−iz)) / (2i)

Domain coloring of sin(z) over (−π, π) on the x and y axes. Brightness indicates absolute magnitude, saturation represents imaginary and real magnitude.


sin(z) as a vector field

where i² = −1. This is an entire function. Also, for purely real x,

sin x = Im(e^(ix))

For purely imaginary numbers:

sin(iy) = i sinh y

It is also sometimes useful to express the complex sine function in terms of the real and imaginary parts of its argument:

sin(x + iy) = sin x cosh y + i cos x sinh y

Usage of complex sine: sin z occurs in the functional equation for the Gamma function,

Γ(s) Γ(1 − s) = π / sin(πs),

which in turn is found in the functional equation for the Riemann zeta function,

ζ(s) = 2(2π)^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s).

As a holomorphic function, sin z is a 2D solution of Laplace's equation:

Δu(x, y) = 0


Complex graphs

Sine function in the complex plane: real component, imaginary component, magnitude.

sin⁻¹ in the complex plane: real component, imaginary component, magnitude.

History
While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BC) and Ptolemy of Roman Egypt (90–165 AD). The function sine (and cosine) can be traced to the jyā and koṭi-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[1] The first published use of the abbreviations 'sin', 'cos', and 'tan' is by the 16th century French mathematician Albert Girard; these were further promulgated by Euler (see below). The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x.[16] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722).[17] Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviations sin., cos., tang., cot., sec., and cosec.[18]


Etymology
Etymologically, the word sine derives from the Sanskrit word for chord, jiva (jya being its more popular synonym). This was transliterated in Arabic as jiba, abbreviated jb. Since Arabic is written without short vowels, "jb" was interpreted as the word jaib, which means "bosom", when the Arabic text was translated in the 12th century into Latin by Gerard of Cremona. The translator used the Latin equivalent for "bosom", sinus (which means "bosom" or "bay" or "fold").[19] [20] The English form sine was introduced in the 1590s.

Software implementations
The sin function, along with other trigonometric functions, is widely available across programming languages and platforms. Some CPU architectures have a built-in instruction for sin, including the Intel x86 FPU. In programming languages, sin is usually either a built-in function or found within the language's standard math library. There is no standard algorithm for calculating sin. IEEE 754-2008, the most widely used standard for floating-point computation, says nothing on the topic of calculating trigonometric functions such as sin.[21] Algorithms for calculating sin may be balanced for such constraints as speed, accuracy, portability, or the range of input values accepted. This can lead to very different results for different algorithms in special circumstances, such as for very large inputs, e.g. sin(10²²). A once-common programming optimization, used especially in 3D graphics, was to pre-calculate a table of sin values, for example one value per degree. This allowed results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method typically offers no advantage.
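The table-lookup optimization described above can be sketched in a few lines (a minimal illustration with one precomputed value per degree; nearest-degree lookup is accurate to roughly 0.01 at worst, since the derivative of sine is bounded by 1):

```python
import math

# Precompute sin for each whole degree, then answer queries by lookup.
SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]

def sin_lookup(degrees: float) -> float:
    """Nearest-degree table lookup for sin."""
    return SIN_TABLE[round(degrees) % 360]

print(sin_lookup(30))  # ≈ 0.5
print(abs(sin_lookup(45.2) - math.sin(math.radians(45.2))) < 0.01)  # True
```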

Notes
[1] Boyer, Carl B. (1991). A History of Mathematics (Second ed.). John Wiley & Sons, Inc.. ISBN 0-471-54397-7, p. 210. [2] Victor J Katx, A history of mathematics, p210, sidebar 6.1. [3] See Ahlfors, pages 4344. [4] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019812 [5] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019815 [6] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019818 [7] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019821 [8] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019827 [9] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019830 [10] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019833 [11] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019836 [12] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019842 [13] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019845 [14] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019848 [15] http:/ / en. wikipedia. org/ wiki/ Oeis%3Aa019851 [16] Nicols Bourbaki (1994). Elements of the History of Mathematics. Springer. [17] " Why the sine has a simple derivative (http:/ / www. math. usma. edu/ people/ rickey/ hm/ CalcNotes/ Sine-Deriv. pdf)", in Historical Notes for Calculus Teachers (http:/ / www. math. usma. edu/ people/ rickey/ hm/ CalcNotes/ default. htm) by V. Frederick Rickey (http:/ / www. math. usma. edu/ people/ rickey/ ) [18] See Boyer (1991). [19] See Maor (1998), chapter 3, regarding the etymology. [20] Victor J Katx, A history of mathematics, p210, sidebar 6.1. [21] Grand Challenges of Informatics, Paul Zimmermann. September 20, 2006 p. 14/31 (http:/ / www. jaist. ac. jp/ ~bjorner/ ae-is-budapest/ talks/ Sept20pm2_Zimmermann. pdf)


Trigonometric functions
In mathematics, the trigonometric functions (also called circular functions) are functions of an angle. They are used to relate the angles of a triangle to the lengths of the sides of a triangle. Trigonometric functions are important in the study of triangles and in modeling periodic phenomena, among many other applications. The most familiar trigonometric functions are the sine, cosine, and tangent. In the context of the standard unit circle with radius 1, where a triangle is formed by a ray originating at the origin and making some angle with the x-axis, the sine of the angle gives the length of the y-component (rise) of the triangle, the cosine gives the length of the x-component (run), and the tangent function gives the slope (y-component divided by the x-component). More precise definitions are detailed below.

Trigonometric functions are commonly defined as ratios of two sides of a right triangle containing the angle, and can equivalently be defined as the lengths of various line segments from a unit circle. More modern definitions express them as infinite series or as solutions of certain differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers.

Trigonometric functions have a wide range of uses, including computing unknown lengths and angles in triangles (often right triangles). In this use, trigonometric functions appear, for instance, in navigation, engineering, and physics. A common use in elementary physics is resolving a vector into Cartesian coordinates. The sine and cosine functions are also commonly used to model periodic phenomena such as sound and light waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations through the year.

In modern usage, there are six basic trigonometric functions, tabulated here with equations that relate them to one another.
Especially with the last four, these relations are often taken as the definitions of those functions, but one can define them equally well geometrically, or by other means, and then derive these relations.

Right-angled triangle definitions


The notion that there should be some standard correspondence between the lengths of the sides of a triangle and the angles of the triangle comes as soon as one recognizes that similar triangles maintain the same ratios between their sides. That is, for any similar triangle the ratio of the hypotenuse (for example) and another of the sides remains the same. If the hypotenuse is twice as long, so are the sides. It is these ratios that the trigonometric functions express. To define the trigonometric functions for the angle A, start with any right triangle that contains the angle A. The three sides of the triangle are named as follows: The hypotenuse is the side opposite the right angle, in this case side h. The hypotenuse is always the longest side of a right-angled triangle. The opposite side is the side opposite to the angle we are interested in (angle A), in this case side a. The adjacent side is the side having both the angles of interest (angle A and the right angle C), in this case side b.


In ordinary Euclidean geometry, according to the triangle postulate the inside angles of every triangle total 180° (π radians). Therefore, in a right-angled triangle, the two non-right angles total 90° (π/2 radians), so each of these angles must be in the range (0°, 90°) as expressed in interval notation. The following definitions apply to angles in this 0°–90° range. They can be extended to the full set of real arguments by using the unit circle, or by requiring certain symmetries and that they be periodic functions. For example, the figure shows sin θ for angles θ, π − θ, π + θ, and 2π − θ depicted on the unit circle (top) and as a graph (bottom). The value of the sine repeats itself apart from sign in all four quadrants, and if the range of θ is extended to additional rotations, this behavior repeats periodically with a period 2π. The trigonometric functions are summarized in the following table and described in more detail below. The angle θ is the angle between the hypotenuse and the adjacent side (the angle at A in the accompanying diagram).

(Top): Trigonometric function sin θ for selected angles θ, π − θ, π + θ, and 2π − θ in the four quadrants. (Bottom) Graph of sine function versus angle. Angles from the top panel are identified.

Function | Abbreviation | Description | Identities (using radians)
Sine | sin | opposite / hypotenuse | sin θ = cos(π/2 − θ) = 1/csc θ
Cosine | cos | adjacent / hypotenuse | cos θ = sin(π/2 − θ) = 1/sec θ
Tangent | tan (or tg) | opposite / adjacent | tan θ = sin θ / cos θ = cot(π/2 − θ) = 1/cot θ
Cotangent | cot (or ctg or ctn) | adjacent / opposite | cot θ = cos θ / sin θ = tan(π/2 − θ) = 1/tan θ
Secant | sec | hypotenuse / adjacent | sec θ = csc(π/2 − θ) = 1/cos θ
Cosecant | csc (or cosec) | hypotenuse / opposite | csc θ = sec(π/2 − θ) = 1/sin θ


Sine, cosine, and tangent


The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse. In our case

sin(A) = opposite/hypotenuse = a/h

The sine, tangent, and secant functions of an angle constructed geometrically in terms of a unit circle. The number θ is the length of the arc; thus angles are being measured in radians. The secant and tangent functions rely on a fixed vertical line and the sine function on a moving vertical line. ("Fixed" in this context means not moving as θ changes; "moving" means depending on θ.) Thus, as θ goes from 0 up to a right angle, sin θ goes from 0 to 1, tan θ goes from 0 to ∞, and sec θ goes from 1 to ∞.


The cosine, cotangent, and cosecant functions of an angle constructed geometrically in terms of a unit circle. The functions whose names have the prefix co- use horizontal lines where the others use vertical lines.

Note that this ratio does not depend on the size of the particular right triangle chosen, as long as it contains the angle A, since all such triangles are similar. The cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse. In our case

cos(A) = adjacent/hypotenuse = b/h

The tangent of an angle is the ratio of the length of the opposite side to the length of the adjacent side (called so because it can be represented as a line segment tangent to the circle).[1] In our case

tan(A) = opposite/adjacent = a/b

The acronyms "SOHCAHTOA" and "OHSAHCOAT" are commonly used mnemonics for these ratios.

Reciprocal functions
The remaining three functions are best defined using the above three functions.

The cosecant csc(A), or cosec(A), is the reciprocal of sin(A), i.e. the ratio of the length of the hypotenuse to the length of the opposite side:

csc(A) = 1/sin(A) = h/a

The secant sec(A) is the reciprocal of cos(A), i.e. the ratio of the length of the hypotenuse to the length of the adjacent side:

sec(A) = 1/cos(A) = h/b

The cotangent cot(A) is the reciprocal of tan(A), i.e. the ratio of the length of the adjacent side to the length of the opposite side:

cot(A) = 1/tan(A) = b/a
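All six ratios can be computed together from the two legs of a right triangle, with the hypotenuse supplied by the Pythagorean theorem (a minimal sketch; the helper name `trig_ratios` is my own, not from the text):

```python
import math

# All six ratios for angle A in a right triangle with opposite side a,
# adjacent side b, and hypotenuse h.
def trig_ratios(a: float, b: float):
    h = math.hypot(a, b)  # hypotenuse via the Pythagorean theorem
    return {
        "sin": a / h, "cos": b / h, "tan": a / b,
        "csc": h / a, "sec": h / b, "cot": b / a,
    }

r = trig_ratios(3.0, 4.0)  # the 3-4-5 triangle
print(r["sin"], r["cos"], r["tan"])  # 0.6 0.8 0.75
# The reciprocal relationships hold by construction:
print(math.isclose(r["csc"], 1 / r["sin"]))  # True
```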


Slope definitions
Equivalent to the right-triangle definitions, the trigonometric functions can be defined in terms of the rise, run, and slope of a line segment relative to some horizontal line. The slope is commonly taught as "rise over run" or rise/run. The three main trigonometric functions are commonly taught in the order sine, cosine, tangent. With a segment length of 1 (as in a unit circle) the following correspondence of definitions exists: 1. Sine is first, rise is first. Sine takes an angle and tells the rise when the length of the line is 1. 2. Cosine is second, run is second. Cosine takes an angle and tells the run when the length of the line is 1. 3. Tangent is the slope formula that combines the rise and run. Tangent takes an angle and tells the slope, and tells the rise when the run is 1. This shows the main use of tangent and arctangent: converting between the two ways of telling the slant of a line, i.e., angles and slopes. (Note that the arctangent or "inverse tangent" is not to be confused with the cotangent, which is cosine divided by sine.) While the length of the line segment makes no difference for the slope (the slope does not depend on the length of the slanted line), it does affect rise and run. To adjust and find the actual rise and run when the line does not have a length of 1, just multiply the sine and cosine by the line length. For instance, if the line segment has length 5, the run at an angle of 7° is 5 cos(7°).

Unit-circle definitions
The six trigonometric functions can also be defined in terms of the unit circle, the circle of radius one centered at the origin. The unit circle definition provides little in the way of practical calculation; indeed it relies on right triangles for most angles. The unit circle definition does, however, permit the definition of the trigonometric functions for all positive and negative arguments, not just for angles between 0 and π/2 radians. It also provides a single visual picture that encapsulates at once all the important triangles. From the Pythagorean theorem the equation for the unit circle is:

x² + y² = 1

The unit circle

In the picture, some common angles, measured in radians, are given. Measurements in the counterclockwise direction are positive angles and measurements in the clockwise direction are negative angles.

Let a line through the origin, making an angle of θ with the positive half of the x-axis, intersect the unit circle. The x- and y-coordinates of this point of intersection are equal to cos θ and sin θ, respectively. The triangle in the graphic enforces the formula; the radius is equal to the hypotenuse and has length 1, so we have sin θ = y/1 and cos θ = x/1. The unit circle can be thought of as a way of looking at an infinite number of triangles by varying the lengths of their legs but keeping the lengths of their hypotenuses equal to 1. Note that these values can easily be memorized in the form

sin 0° = √0/2, sin 30° = √1/2, sin 45° = √2/2, sin 60° = √3/2, sin 90° = √4/2,

but the angles are not equally spaced. The values for 15°, 54° and 75° are slightly more complicated.

For angles greater than 2π or less than −2π, simply continue to rotate around the circle; sine and cosine are periodic functions with period 2π:

sin(θ + 2πk) = sin θ and cos(θ + 2πk) = cos θ

for any angle θ and any integer k.

The sine and cosine functions graphed on the Cartesian plane.

The smallest positive period of a periodic function is called the primitive period of the function. The primitive period of the sine or cosine is a full circle, i.e. 2π radians or 360 degrees. Above, only sine and cosine were defined directly by the unit circle, but the other trigonometric functions can be defined by:

tan θ = sin θ / cos θ, cot θ = cos θ / sin θ, sec θ = 1/cos θ, csc θ = 1/sin θ

So the primitive period of the secant or cosecant is also a full circle, i.e. 2π radians or 360 degrees. The primitive period of the tangent or cotangent is only a half-circle, i.e. π radians or 180 degrees.


The image at right includes a graph of the tangent function. Its θ-intercepts correspond to those of sin(θ), while its undefined values correspond to the θ-intercepts of cos(θ). The function changes slowly around angles of kπ, but changes rapidly at angles close to (k + 1/2)π. The graph of the tangent function also has a vertical asymptote at θ = (k + 1/2)π, the θ-intercepts of the cosine function, because the function approaches infinity as θ approaches (k + 1/2)π from the left and minus infinity as it approaches (k + 1/2)π from the right. Alternatively, all of the basic trigonometric functions can be defined in terms of a unit circle centered at O (as shown in the picture to the right), and similar such geometric definitions were used historically. In particular, for a chord AB of the circle, where θ is half of the subtended angle, sin(θ) is AC (half of the chord), a definition introduced in India[2] (see history). cos(θ) is the horizontal distance OC, and versin(θ) = 1 − cos(θ) is CD. tan(θ) is the length of the segment AE of the tangent line through A, hence the word tangent for this function. cot(θ) is another tangent segment, AF.
All of the trigonometric functions of the angle θ can be constructed geometrically in terms of a unit circle centered at O.

Trigonometric functions: Sine, Cosine, Tangent, Cosecant (dotted), Secant (dotted), Cotangent (dotted)

sec(θ) = OE and csc(θ) = OF are segments of secant lines (intersecting the circle at two points), and can also be viewed as projections of OA along the tangent at A to the horizontal and vertical axes, respectively. DE is exsec(θ) = sec(θ) − 1 (the portion of the secant outside, or ex, the circle). From these constructions, it is easy to see that the secant and tangent functions diverge as θ approaches π/2 (90 degrees) and that the cosecant and cotangent diverge as θ approaches zero. (Many similar constructions are possible, and the basic trigonometric identities can also be proven graphically.[3] )


Series definitions
Using only geometry and properties of limits, it can be shown that the derivative of sine is cosine and the derivative of cosine is the negative of sine. (Here, and generally in calculus, all angles are measured in radians; see also the significance of radians below.) One can then use the theory of Taylor series to show that the following identities hold for all real numbers x:[4]

sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯ = Σ_{n=0}^{∞} (−1)ⁿ x^(2n+1)/(2n+1)!

cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯ = Σ_{n=0}^{∞} (−1)ⁿ x^(2n)/(2n)!

The sine function (blue) is closely approximated by its Taylor polynomial of degree 7 (pink) for a full cycle centered on the origin.

These identities are sometimes taken as the definitions of the sine and cosine function. They are often used as the starting point in a rigorous treatment of trigonometric functions and their applications (e.g., in Fourier series), since the theory of infinite series can be developed, independent of any geometric considerations, from the foundations of the real number system. The differentiability and continuity of these functions are then established from the series definitions alone. Combining these two series gives Euler's formula: cos x + i sin x = e^(ix). Other series can be found.[5] For the following trigonometric functions: Un is the nth up/down number, Bn is the nth Bernoulli number, and En (below) is the nth Euler number.

Tangent

tan x = x + x³/3 + 2x⁵/15 + 17x⁷/315 + ⋯ (for |x| < π/2)

When this series for the tangent function is expressed in a form in which the denominators are the corresponding factorials, the numerators, called the "tangent numbers", have a combinatorial interpretation: they enumerate alternating permutations of finite sets of odd cardinality.[6]

Cosecant

csc x = 1/x + x/6 + 7x³/360 + 31x⁵/15120 + ⋯ (for 0 < |x| < π)

Secant

sec x = 1 + x²/2 + 5x⁴/24 + 61x⁶/720 + ⋯ (for |x| < π/2)

When this series for the secant function is expressed in a form in which the denominators are the corresponding factorials, the numerators, called the "secant numbers", have a combinatorial interpretation: they enumerate alternating permutations of finite sets of even cardinality.[7]

Cotangent

cot x = 1/x − x/3 − x³/45 − 2x⁵/945 − ⋯ (for 0 < |x| < π)

From a theorem in complex analysis, there is a unique analytic continuation of this real function to the domain of complex numbers. They have the same Taylor series, and so the trigonometric functions are defined on the complex numbers using the Taylor series above. There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match:[8]

π cot(πx) = Σ_{n=−∞}^{∞} 1/(x + n)

This identity can be proven with the Herglotz trick.[9] By combining the (−n)-th with the n-th term, it can be expressed as an absolutely convergent series:

π cot(πx) = 1/x + Σ_{n=1}^{∞} 2x/(x² − n²)


Relationship to exponential function and complex numbers


It can be shown from the series definitions[10] that the sine and cosine functions are the imaginary and real parts, respectively, of the complex exponential function when its argument is purely imaginary:

e^(ix) = cos x + i sin x

Euler's formula illustrated with the three-dimensional helix, starting with the 2-D orthogonal components of the unit circle, sine and cosine (using θ = t).

This identity is called Euler's formula. In this way, trigonometric functions become essential in the geometric interpretation of complex analysis. For example, with the above identity, if one considers the unit circle in the complex plane, parametrized by e^(ix), and as above, we can parametrize this circle in terms of cosines and sines, and the relationship between the complex exponential and the trigonometric functions becomes more apparent. Furthermore, this allows for the definition of the trigonometric functions for complex arguments z:

sin z = (e^(iz) − e^(−iz)) / (2i)

cos z = (e^(iz) + e^(−iz)) / 2

where i² = −1. The sine and cosine defined by this are entire functions. Also, for purely real x,

cos x = Re(e^(ix)), sin x = Im(e^(ix))

It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of their arguments:

sin(x + iy) = sin x cosh y + i cos x sinh y

cos(x + iy) = cos x cosh y − i sin x sinh y

This exhibits a deep relationship between the complex sine and cosine functions and their real (sin, cos) and hyperbolic real (sinh, cosh) counterparts.
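This decomposition into real trigonometric and hyperbolic parts can be verified numerically against Python's `cmath` implementation of the complex sine (a spot-check, not from the text):

```python
import cmath
import math

# Check sin(x + iy) = sin(x) cosh(y) + i cos(x) sinh(y) against cmath.sin.
x, y = 0.7, 1.3
lhs = cmath.sin(complex(x, y))
rhs = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))
print(cmath.isclose(lhs, rhs))  # True

# For a purely imaginary argument, sin(iy) = i sinh(y):
print(cmath.isclose(cmath.sin(1j * y), 1j * math.sinh(y)))  # True
```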

Complex graphs

In the following graphs, the domain is the complex plane pictured, and the range values are indicated at each point by color. Brightness indicates the size (absolute value) of the range value, with black being zero. Hue varies with argument, or angle, measured from the positive real axis.

Trigonometric functions in the complex plane

Definitions via differential equations


Both the sine and cosine functions satisfy the differential equation

y″ = −y

That is to say, each is the additive inverse of its own second derivative. Within the 2-dimensional function space V consisting of all solutions of this equation, the sine function is the unique solution satisfying the initial conditions y(0) = 0 and y′(0) = 1, and the cosine function is the unique solution satisfying the initial conditions y(0) = 1 and y′(0) = 0.

Since the sine and cosine functions are linearly independent, together they form a basis of V. This method of defining the sine and cosine functions is essentially equivalent to using Euler's formula. (See linear differential equation.) It turns out that this differential equation can be used not only to define the sine and cosine functions but also to prove the trigonometric identities for the sine and cosine functions. Further, the observation that sine and cosine satisfy y″ = −y means that they are eigenfunctions of the second-derivative operator. The tangent function is the unique solution of the nonlinear differential equation

y′ = 1 + y²

satisfying the initial condition y(0) = 0. There is a very interesting visual proof that the tangent function satisfies this differential equation.[11]
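The differential-equation characterization can be tested by integrating y″ = −y with the sine's initial conditions y(0) = 0, y′(0) = 1; the numerical solution should track sin(t). A minimal sketch using a classical Runge-Kutta (RK4) step (the integrator here is my own illustration, not from the text):

```python
import math

# Integrate y'' = -y with y(0) = 0, y'(0) = 1, the defining initial
# conditions for sine, as the first-order system y' = v, v' = -y.
def rk4_sine(t_end: float, n: int = 10_000) -> float:
    h = t_end / n
    y, v = 0.0, 1.0  # y plays the role of sin, v of its derivative cos

    def f(y, v):
        return v, -y

    for _ in range(n):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = f(y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = f(y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

print(math.isclose(rk4_sine(1.0), math.sin(1.0), rel_tol=1e-9))  # True
```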


The significance of radians


Radians specify an angle by measuring the length around the path of the unit circle and constitute a special argument to the sine and cosine functions. In particular, only sines and cosines that map radians to ratios satisfy the differential equations that classically describe them. If an argument to sine or cosine in radians is scaled by frequency,

f(x) = sin(kx)

then the derivatives will scale by amplitude:

f′(x) = k cos(kx)

Here, k is a constant that represents a mapping between units. If x is in degrees, then k = π/180 and

f(x) = sin(πx/180)

This means that the second derivative of a sine in degrees does not satisfy the differential equation

y″ = −y

but rather

y″ = −(π/180)² y

The cosine's second derivative behaves similarly. This means that these sines and cosines are different functions, and that the fourth derivative of sine will be sine again only if the argument is in radians.

Identities
Many identities interrelate the trigonometric functions. Among the most frequently used is the Pythagorean identity, which states that for any angle, the square of the sine plus the square of the cosine is 1. This is easy to see by studying a right triangle of hypotenuse 1 and applying the Pythagorean theorem. In symbolic form, the Pythagorean identity is written

sin²x + cos²x = 1

where sin²x is standard notation for (sin x)². Other key relationships are the sum and difference formulas, which give the sine and cosine of the sum and difference of two angles in terms of sines and cosines of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy. One can also produce them algebraically using Euler's formula:

sin(x ± y) = sin x cos y ± cos x sin y

cos(x ± y) = cos x cos y ∓ sin x sin y

When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae. These identities can also be used to derive the product-to-sum identities that were used in antiquity to transform the product of two numbers into a sum of numbers and greatly speed operations, much like the logarithm function.
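In symbols, the sum and difference formulas, the double-angle forms obtained from them by setting y = x, and a representative product-to-sum identity read (all standard identities, stated here for reference):

```latex
\begin{aligned}
\sin(x \pm y) &= \sin x \cos y \pm \cos x \sin y \\
\cos(x \pm y) &= \cos x \cos y \mp \sin x \sin y \\
\sin 2x &= 2\sin x \cos x \\
\cos 2x &= \cos^2 x - \sin^2 x \\
\cos x \cos y &= \tfrac{1}{2}\left(\cos(x-y) + \cos(x+y)\right)
\end{aligned}
```

Adding the expansions of cos(x − y) and cos(x + y) gives the last line, the product-to-sum identity referred to above.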


Calculus
For integrals and derivatives of trigonometric functions, see the relevant sections of Differentiation of trigonometric functions, Lists of integrals and List of integrals of trigonometric functions. Below is the list of the derivatives and integrals of the six basic trigonometric functions. The number C is a constant of integration.
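For the six basic functions, the standard results are:

```latex
\begin{aligned}
\frac{d}{dx}\sin x &= \cos x, & \int \sin x \,dx &= -\cos x + C \\
\frac{d}{dx}\cos x &= -\sin x, & \int \cos x \,dx &= \sin x + C \\
\frac{d}{dx}\tan x &= \sec^2 x, & \int \tan x \,dx &= \ln\lvert\sec x\rvert + C \\
\frac{d}{dx}\cot x &= -\csc^2 x, & \int \cot x \,dx &= \ln\lvert\sin x\rvert + C \\
\frac{d}{dx}\sec x &= \sec x \tan x, & \int \sec x \,dx &= \ln\lvert\sec x + \tan x\rvert + C \\
\frac{d}{dx}\csc x &= -\csc x \cot x, & \int \csc x \,dx &= -\ln\lvert\csc x + \cot x\rvert + C
\end{aligned}
```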

Definitions using functional equations


In mathematical analysis, one can define the trigonometric functions using functional equations based on properties like the sum and difference formulas. Taking as given these formulas and the Pythagorean identity, for example, one can prove that only two real functions satisfy those conditions. Symbolically, we say that there exists exactly one pair of real functions sin and cos such that for all real numbers x and y, the following equations hold:

sin²x + cos²x = 1,
sin(x + y) = sin x cos y + cos x sin y,
cos(x + y) = cos x cos y − sin x sin y,

with the added condition that 0 < x cos x < sin x < x for 0 < x < 1. Other derivations, starting from other functional equations, are also possible, and such derivations can be extended to the complex numbers. As an example, this derivation can be used to define trigonometry in Galois fields.

Computation
The computation of trigonometric functions is a complicated subject, which can today be avoided by most people because of the widespread availability of computers and scientific calculators that provide built-in trigonometric functions for any angle. This section, however, describes details of their computation in three important contexts: the historical use of trigonometric tables, the modern techniques used by computers, and a few "important" angles where simple exact values are easily found. The first step in computing any trigonometric function is range reduction: reducing the given angle to a "reduced angle" inside a small range of angles, say 0 to π/2, using the periodicity and symmetries of the trigonometric functions. Prior to computers, people typically evaluated trigonometric functions by interpolating from a detailed table of their values, calculated to many significant figures. Such tables have been available for as long as trigonometric functions have been described (see History below), and were typically generated by repeated application of the half-angle and angle-addition identities starting from a known value (such as sin(π/2) = 1). Modern computers use a variety of techniques.[12] One common method, especially on higher-end processors with floating point units, is to combine a polynomial or rational approximation (such as Chebyshev approximation, best uniform approximation, and Padé approximation, and typically for higher or variable precisions, Taylor and Laurent series) with range reduction and a table lookup: they first look up the closest angle in a small table, and then use the

polynomial to compute the correction.[13] Devices that lack hardware multipliers often use an algorithm called CORDIC (as well as related techniques), which uses only addition, subtraction, bitshift, and table lookup. These methods are commonly implemented in hardware floating-point units for performance reasons. For very high precision calculations, when series expansion convergence becomes too slow, trigonometric functions can be approximated by the arithmetic-geometric mean, which itself approximates the trigonometric function by the (complex) elliptic integral.[14] Finally, for some simple angles, the values can be easily computed by hand using the Pythagorean theorem, as in the following examples. For example, the sine, cosine and tangent of any integer multiple of π/60 radians (3°) can be found exactly by hand. Consider a right triangle where the two other angles are equal, and therefore are both π/4 radians (45°). Then the length of side b and the length of side a are equal; we can choose a = b = 1. The values of sine, cosine and tangent of an angle of π/4 radians (45°) can then be found using the Pythagorean theorem: c = √(a² + b²) = √2. Therefore: sin(π/4) = cos(π/4) = 1/√2 and tan(π/4) = 1.
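The range-reduction-plus-polynomial scheme can be sketched in a few lines of Python. This is a toy illustration using a truncated Taylor polynomial on the reduced interval; real libraries use carefully fitted minimax polynomials and far more careful reduction steps:

```python
import math

def sin_reduced(x):
    """Approximate sin(x): reduce x to r in [-pi/4, pi/4], then
    evaluate short Taylor polynomials for sin and cos at r."""
    # Range reduction: write x = k*(pi/2) + r with |r| <= pi/4.
    k = round(x / (math.pi / 2))
    r = x - k * (math.pi / 2)
    # Short Taylor polynomials, accurate on the small interval.
    sin_r = r - r**3 / 6 + r**5 / 120 - r**7 / 5040
    cos_r = 1 - r**2 / 2 + r**4 / 24 - r**6 / 720
    # Reassemble sin(k*pi/2 + r) via the angle-addition identities.
    return (sin_r, cos_r, -sin_r, -cos_r)[k % 4]

print(abs(sin_reduced(10.0) - math.sin(10.0)))  # small error, well under 1e-5
```

In production implementations, techniques like Cody and Waite reduction and minimax polynomial fitting replace the naive rounding and Taylor steps shown here.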


To determine the trigonometric functions for angles of π/3 radians (60 degrees) and π/6 radians (30 degrees), we start with an equilateral triangle of side length 1. All its angles are π/3 radians (60 degrees). By dividing it into two, we obtain a right triangle with π/6 radians (30 degrees) and π/3 radians (60 degrees) angles. For this triangle, the shortest side = 1/2, the next largest side = (√3)/2 and the hypotenuse = 1. This yields: sin(π/6) = cos(π/3) = 1/2, cos(π/6) = sin(π/3) = (√3)/2, tan(π/6) = 1/√3 and tan(π/3) = √3.

Special values in trigonometric functions


There are some commonly used special values in trigonometric functions, as shown in the following table.
Function | 0         | π/6   | π/4  | π/3   | π/2
sin      | 0         | 1/2   | √2/2 | √3/2  | 1
cos      | 1         | √3/2  | √2/2 | 1/2   | 0
tan      | 0         | √3/3  | 1    | √3    | undefined
cot      | undefined | √3    | 1    | √3/3  | 0
sec      | 1         | 2√3/3 | √2   | 2     | undefined
csc      | undefined | 2     | √2   | 2√3/3 | 1


Inverse functions
The trigonometric functions are periodic, and hence not injective, so strictly they do not have an inverse function. Therefore, to define an inverse function we must restrict their domains so that the trigonometric function is bijective. In the following, the functions on the left are defined by the equation on the right; these are not proved identities. The principal inverses are usually defined as:

Function     | Definition | Value Field
y = arcsin x | x = sin y  | −π/2 ≤ y ≤ π/2
y = arccos x | x = cos y  | 0 ≤ y ≤ π
y = arctan x | x = tan y  | −π/2 < y < π/2
y = arccot x | x = cot y  | 0 < y < π
y = arcsec x | x = sec y  | 0 ≤ y ≤ π, y ≠ π/2
y = arccsc x | x = csc y  | −π/2 ≤ y ≤ π/2, y ≠ 0

For inverse trigonometric functions, the notations sin⁻¹ and cos⁻¹ are often used for arcsin and arccos, etc. When this notation is used, the inverse functions could be confused with the multiplicative inverses of the functions. The notation using the "arc-" prefix avoids such confusion, though "arcsec" can be confused with "arcsecond". Just like the sine and cosine, the inverse trigonometric functions can also be defined in terms of infinite series. For example,

arcsin z = z + z³/6 + 3z⁵/40 + 15z⁷/336 + ⋯   (|z| ≤ 1).

These functions may also be defined by proving that they are antiderivatives of other functions. The arcsine, for example, can be written as the following integral:

arcsin z = ∫₀ᶻ dt/√(1 − t²),   |z| < 1.

Analogous formulas for the other functions can be found at Inverse trigonometric functions. Using the complex logarithm, one can generalize all these functions to complex arguments:

arcsin z = −i ln(iz + √(1 − z²)),
arccos z = −i ln(z + i√(1 − z²)),
arctan z = (i/2) ln((i + z)/(i − z)).
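The logarithmic form of the arcsine can be checked numerically with Python's cmath module (a verification sketch; arcsin_via_log is our own helper name):

```python
import cmath, math

def arcsin_via_log(z):
    """arcsin(z) = -i * ln(iz + sqrt(1 - z^2)), on the principal branch."""
    return -1j * cmath.log(1j * z + cmath.sqrt(1 - z * z))

print(arcsin_via_log(0.5))     # ~ pi/6, matching the real arcsine
print(arcsin_via_log(2 + 0j))  # complex: the arcsine of 2 has no real value
```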


Properties and applications


The trigonometric functions, as the name suggests, are of crucial importance in trigonometry, mainly because of the following two results.

Law of sines
The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C:

sin A / a = sin B / b = sin C / c,

or, equivalently,

a / sin A = b / sin B = c / sin C = 2R,

where R is the triangle's circumradius. It can be proven by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance.
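As a worked example with made-up triangle data, the angle-angle-side case follows directly from the law of sines (solve_aas is an illustrative helper, not a standard routine):

```python
import math

def solve_aas(A, B, a):
    """Given angles A, B (radians) and side a opposite A, return
    (C, b, c) using a/sin A = b/sin B = c/sin C."""
    C = math.pi - A - B          # the angles of a triangle sum to pi
    ratio = a / math.sin(A)      # the common ratio (equal to 2R)
    return C, ratio * math.sin(B), ratio * math.sin(C)

C, b, c = solve_aas(math.radians(40), math.radians(60), 7.0)
print(math.degrees(C))           # ~ 80 degrees
print(b, c)                      # the two remaining sides
```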

Law of cosines
The law of cosines (also known as the cosine formula) is an extension of the Pythagorean theorem:

c² = a² + b² − 2ab cos C,

or equivalently,

cos C = (a² + b² − c²) / (2ab).

A Lissajous curve, a figure formed with a trigonometry-based function.

In this formula the angle at C is opposite to the side c. This theorem can be proven by dividing the triangle into two right ones and using the Pythagorean theorem. The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known.
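A short numerical illustration with hypothetical side lengths; with a right angle the formula collapses to the Pythagorean theorem:

```python
import math

def third_side(a, b, gamma):
    """Side c opposite angle gamma: c^2 = a^2 + b^2 - 2ab cos(gamma)."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

def angle_from_sides(a, b, c):
    """Angle opposite side c: cos C = (a^2 + b^2 - c^2) / (2ab)."""
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

print(third_side(3.0, 4.0, math.pi / 2))              # ~ 5.0
print(math.degrees(angle_from_sides(3.0, 4.0, 5.0)))  # ~ 90 degrees
```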


Law of tangents
The following all form the law of tangents:[15]

(a − b)/(a + b) = tan[(A − B)/2] / tan[(A + B)/2],
(b − c)/(b + c) = tan[(B − C)/2] / tan[(B + C)/2],
(a − c)/(a + c) = tan[(A − C)/2] / tan[(A + C)/2].

The explanation of the formulae in words would be cumbersome, but the patterns of sums and differences, for the lengths and corresponding opposite angles, are apparent in the theorem.

Law of cotangents
If

r = √[(s − a)(s − b)(s − c)/s]

(the radius of the inscribed circle for the triangle) and

s = (a + b + c)/2

(the semi-perimeter for the triangle), then the following all form the law of cotangents:[16]

cot(A/2) = (s − a)/r,
cot(B/2) = (s − b)/r,
cot(C/2) = (s − c)/r.

It follows that

cot(A/2)/(s − a) = cot(B/2)/(s − b) = cot(C/2)/(s − c) = 1/r.

In words the theorem is: the cotangent of a half-angle equals the ratio of the semi-perimeter minus the opposite side to the said angle, to the inradius for the triangle.


Other useful properties

Periodic functions


The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion, which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion. Trigonometric functions also prove to be useful in the study of general periodic functions. The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves.[17] Under rather general conditions, a periodic function f(x) can be expressed as a sum of sine waves or cosine waves in a Fourier series.[18] Denoting the sine or cosine basis functions by φₖ, the expansion of the periodic function f(t) takes the form:

f(t) = Σₖ cₖ φₖ(t).

Click on the image to see an animation of the additive synthesis of a square wave with an increasing number of harmonics

Superimposed sinusoidal wave basis functions (bottom) form a sawtooth wave (top) when added; the basis functions have wavelengths λ/k (k = integer) shorter than the wavelength λ of the sawtooth itself (except for k = 1). All basis functions have nodes at the nodes of the sawtooth, but all but the fundamental have additional nodes. The oscillation about the sawtooth is called the Gibbs phenomenon.

For example, the square wave can be written as the Fourier series

f(t) = (4/π) Σₖ₌₁^∞ sin((2k − 1)t)/(2k − 1).

In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of a sawtooth wave are shown underneath.
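The partial sums of the square-wave series are easy to evaluate directly; a small Python sketch:

```python
import math

def square_wave_partial(x, n_terms):
    """Partial sum of the square-wave Fourier series:
    (4/pi) * sum of sin((2k-1)x)/(2k-1) for k = 1..n_terms."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * x) / (2 * k - 1) for k in range(1, n_terms + 1)
    )

# On (0, pi) the square wave equals +1; a few terms already get close:
for n in (1, 3, 10, 100):
    print(n, square_wave_partial(1.0, n))
```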


History
While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BC) and Ptolemy of Roman Egypt (90–165 AD). The functions sine and cosine can be traced to the jyā and koti-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[19] All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles.[20] al-Khwārizmī produced tables of sines, cosines and tangents. They were studied by authors including Omar Khayyám, Bhāskara II, Nasir al-Din al-Tusi, Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus, and Rheticus' student Valentinus Otho. Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series.[21] The first published use of the abbreviations 'sin', 'cos', and 'tan' is by the 16th century French mathematician Albert Girard. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x.[22] Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviations sin., cos., tang., cot., sec., and cosec.[2] A few functions were common historically, but are now seldom used, such as the chord (crd(θ) = 2 sin(θ/2)), the versine (versin(θ) = 1 − cos(θ) = 2 sin²(θ/2)) (which appeared in the earliest tables[2]), the haversine (haversin(θ) = versin(θ)/2 = sin²(θ/2)), the exsecant (exsec(θ) = sec(θ) − 1) and the excosecant (excsc(θ) = exsec(π/2 − θ) = csc(θ) − 1).
Many more relations between these functions are listed in the article about trigonometric identities. Etymologically, the word sine derives from the Sanskrit word for half the chord, jya-ardha, abbreviated to jiva. This was transliterated in Arabic as jiba, written jb, vowels not being written in Arabic. Next, this transliteration was mistranslated in the 12th century into Latin as sinus, under the mistaken impression that jb stood for the word jaib, which means "bosom" or "bay" or "fold" in Arabic, as does sinus in Latin.[23] Finally, English usage converted the Latin word sinus to sine.[24] The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans "cutting" since the line cuts the circle.

Notes
[1] Dictionary.com: Tangent def 5b, 5c (http://dictionary.reference.com/browse/tangent)
[2] See Boyer (1991).
[3] See Maor (1998).
[4] See Ahlfors, pages 43–44.
[5] Abramowitz; Weisstein.
[6] Stanley, Enumerative Combinatorics, Vol I., page 149.
[7] Stanley, Enumerative Combinatorics, Vol I.
[8] Aigner, Martin; Ziegler, Günter M. (2000). Proofs from THE BOOK (http://www.springer.com/mathematics/book/978-3-642-00855-9) (Second ed.). Springer-Verlag. p. 149.
[9] Remmert, Reinhold (1991). Theory of complex functions (http://books.google.com/books?id=CC0dQxtYb6kC). Springer. p. 327. ISBN 0-387-97195-5. Extract of page 327 (http://books.google.com/books?id=CC0dQxtYb6kC&pg=PA327).
[10] For a demonstration, see Euler's_formula#Using_power_series.
[11] Needham, Visual Complex Analysis. ISBN 0-19-853446-9.
[12] Kantabutra.
[13] However, doing that while maintaining precision is nontrivial, and methods like Gal's accurate tables, Cody and Waite reduction, and Payne and Hanek reduction algorithms can be used.
[14] R. P. Brent, "Fast Multiple-Precision Evaluation of Elementary Functions", J. ACM 23, 242 (1976). (http://doi.acm.org/10.1145/321941.321944)

[15] The Universal Encyclopaedia of Mathematics, Pan Reference Books, 1976, page 529. English version George Allen and Unwin, 1964. Translated from the German version Meyers Rechenduden, 1960.
[16] The Universal Encyclopaedia of Mathematics, Pan Reference Books, 1976, page 530. English version George Allen and Unwin, 1964. Translated from the German version Meyers Rechenduden, 1960.
[17] Stanley J Farlow (1993). Partial differential equations for scientists and engineers (http://books.google.com/books?id=DLUYeSb49eAC&pg=PA82) (Reprint of Wiley 1982 ed.). Courier Dover Publications. p. 82. ISBN 048667620X.
[18] See for example, Gerald B Folland (2009). "Convergence and completeness" (http://books.google.com/books?id=idAomhpwI8MC&pg=PA77). Fourier Analysis and its Applications (Reprint of Wadsworth & Brooks/Cole 1992 ed.). American Mathematical Society. pp. 77 ff. ISBN 0821847902.
[19] Boyer, Carl B. (1991). A History of Mathematics (Second ed.). John Wiley & Sons, Inc. ISBN 0471543977, p. 210.
[20] Owen Gingerich (1986). Islamic Astronomy (http://faculty.kfupm.edu.sa/PHYS/alshukri/PHYS215/Islamic_astronomy.htm). 254. Scientific American. p. 74.
[21] J J O'Connor and E F Robertson. "Madhava of Sangamagrama" (http://www-gap.dcs.st-and.ac.uk/~history/Biographies/Madhava.html). School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved 2007-09-08.
[22] Nicolas Bourbaki (1994). Elements of the History of Mathematics. Springer.
[23] See Maor (1998), chapter 3, regarding the etymology.
[24] "Clark University" (http://www.clarku.edu/~djoyce/trig/).

References
Abramowitz, Milton and Irene A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, New York. (1964). ISBN 0-486-61272-4.
Lars Ahlfors, Complex Analysis: an introduction to the theory of analytic functions of one complex variable, second edition, McGraw-Hill Book Company, New York, 1966.
Boyer, Carl B., A History of Mathematics, John Wiley & Sons, Inc., 2nd edition. (1991). ISBN 0-471-54397-7.
Gal, Shmuel and Bachelis, Boris. An accurate elementary mathematical library for the IEEE floating point standard, ACM Transactions on Mathematical Software (1991).
Joseph, George G., The Crest of the Peacock: Non-European Roots of Mathematics, 2nd ed. Penguin Books, London. (2000). ISBN 0-691-00659-8.
Kantabutra, Vitit, "On hardware for computing exponential and trigonometric functions," IEEE Trans. Computers 45 (3), 328–339 (1996).
Maor, Eli, Trigonometric Delights (http://www.pupress.princeton.edu/books/maor/), Princeton Univ. Press. (1998). Reprint edition (February 25, 2002): ISBN 0-691-09541-8.
Needham, Tristan, "Preface" (http://www.usfca.edu/vca/PDF/vca-preface.pdf) to Visual Complex Analysis (http://www.usfca.edu/vca/). Oxford University Press, (1999). ISBN 0-19-853446-9.
O'Connor, J.J., and E.F. Robertson, "Trigonometric functions" (http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Trigonometric_functions.html), MacTutor History of Mathematics archive. (1996).
O'Connor, J.J., and E.F. Robertson, "Madhava of Sangamagramma" (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Madhava.html), MacTutor History of Mathematics archive. (2000).
Pearce, Ian G., "Madhava of Sangamagramma" (http://www-history.mcs.st-andrews.ac.uk/history/Projects/Pearce/Chapters/Ch9_3.html), MacTutor History of Mathematics archive. (2002).
Weisstein, Eric W., "Tangent" (http://mathworld.wolfram.com/Tangent.html) from MathWorld, accessed 21 January 2006.


External links
Visionlearning Module on Wave Mathematics (http://www.visionlearning.com/library/module_viewer.php?mid=131&l=&c3=)
GonioLab (http://glab.trixon.se/): Visualization of the unit circle, trigonometric and hyperbolic functions
Dave's draggable diagram (http://www.clarku.edu/~djoyce/trig/) (Requires Java browser plugin)

Complex number
A complex number is a number consisting of a real part and an imaginary part. Complex numbers extend the idea of the one-dimensional number line to the two-dimensional complex plane by using the number line for the real part and adding a vertical axis to plot the imaginary part. In this way the complex numbers contain the ordinary real numbers while extending them in order to solve problems that would be impossible with only real numbers. Complex numbers are used in many scientific fields, including engineering, electromagnetism, quantum physics, applied mathematics, and chaos theory. Italian mathematician Gerolamo Cardano is the first known to have conceived complex numbers; he called them "fictitious", during his attempts to find solutions to cubic equations in the 16th century.[1]

Introduction and definition


Complex numbers have been introduced to allow for solutions of certain equations that have no real solution: the equation

x² + 1 = 0

has no real solution x, since the square of x is 0 or positive, so x² + 1 cannot be zero. Complex numbers are a solution to this problem. The idea is to enhance the real numbers by introducing a non-real number i whose square is −1, so that x = i and x = −i are the two solutions to the preceding equation.

A complex number can be visually represented as a pair of numbers forming a vector on a diagram called an Argand diagram, representing the complex plane. Re is the real axis, Im is the imaginary axis, and i is the square root of −1.

Definition
A complex number is an expression of the form

where a and b are real numbers and i is a mathematical symbol which is called the imaginary unit. For example, 3.5 + 2i is a complex number. The real number a of the complex number z = a + bi is called the real part of z and the real number b is the imaginary part.[2] They are denoted Re(z) or ℜ(z) and Im(z) or ℑ(z), respectively. For example, Re(3.5 + 2i) = 3.5 and Im(3.5 + 2i) = 2.

Some authors write a+ib instead of a+bi. In some disciplines (in particular, electrical engineering, where i is a symbol for current), the imaginary unit i is instead written as j, so complex numbers are written as a + bj or a + jb.

A real number a can usually be regarded as a complex number with an imaginary part of zero, that is to say, a + 0i. However the sets are defined differently and have slightly different operations defined; for instance, comparison operations are not defined for complex numbers. Complex numbers whose real part is zero, that is to say, those of the form 0 + bi, are called imaginary numbers. It is common to write a for a + 0i and bi for 0 + bi. Moreover, when b is negative, it is common to write a − bi instead of a + (−b)i, for example 3 − 4i instead of 3 + (−4)i. The set of all complex numbers is denoted by C or ℂ.


The complex plane


A complex number can be viewed as a point or position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (see Pedoe 1988 and Solomentsev 2001), named after Jean-Robert Argand. The numbers are conventionally plotted using the real part as the horizontal component, and imaginary part as vertical (see Figure 1). These two values used to identify a given complex number are therefore called its Cartesian, rectangular, or algebraic form. The defining characteristic of a position vector is that it has magnitude Figure 1: A complex number plotted as a point (red) and position vector (blue) on an Argand and direction. These are emphasised in a complex number's polar form diagram; a + bi is the rectangular expression and it turns out notably that the operations of addition and of the point. multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make with the x axis). Viewed in this way the multiplication of a complex number by i corresponds to rotating a complex number anticlockwise through 90° about the origin.

History in brief
Main section: History The solution of a general cubic equation in radicals (without trigonometric functions) may require intermediate calculations containing the square roots of negative numbers, even when the final solutions are real numbers, a situation known as casus irreducibilis. This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545, though his understanding was rudimentary. Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root. Many mathematicians contributed to the full development of complex numbers. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli.[3] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.


Elementary operations
Conjugation
The complex conjugate of the complex number z = x + yi is defined to be x − yi. It is denoted z̄ or z*. Geometrically, z̄ is the "reflection" of z about the real axis. In particular, conjugating twice gives the original complex number. The real and imaginary parts of a complex number can be extracted using the conjugate:

Re(z) = (z + z̄)/2 and Im(z) = (z − z̄)/(2i).

Geometric representation of z and its conjugate z̄ in the complex plane

Moreover, a complex number is real if and only if it equals its conjugate. Conjugation distributes over the standard arithmetic operations: the conjugate of a sum is the sum of the conjugates, the conjugate of a product is the product of the conjugates, and the conjugate of a quotient is the quotient of the conjugates.

The reciprocal of a nonzero complex number z = x + yi is given by

1/z = z̄ / (x² + y²) = (x − yi) / (x² + y²).

This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates. Inversive geometry, a branch of geometry studying more general reflections than ones about a line, can also be expressed in terms of complex numbers.
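These formulas translate directly into code. A minimal Python sketch (Python's built-in complex type already provides .conjugate() and division; the helpers below just spell out the formulas):

```python
def conjugate(z):
    """Complex conjugate: x + yi -> x - yi."""
    return complex(z.real, -z.imag)

def reciprocal(z):
    """1/z = (x - yi) / (x^2 + y^2), for z != 0."""
    d = z.real ** 2 + z.imag ** 2
    return complex(z.real / d, -z.imag / d)

z = 3 + 4j
print(conjugate(z))        # (3-4j)
print(reciprocal(z))       # (0.12-0.16j)
print(reciprocal(z) * z)   # ~ (1+0j)
```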


Addition and subtraction


Complex numbers are added by adding the real and imaginary parts of the summands. That is to say:

(a + bi) + (c + di) = (a + c) + (b + d)i.

Addition of two complex numbers can be done geometrically by constructing a parallelogram.

Similarly, subtraction is defined by

(a + bi) − (c + di) = (a − c) + (b − d)i.

Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram three of whose vertices are 0, A and B. Equivalently, X is the point such that the triangles with vertices 0, A, B, and X, B, A, are congruent.

Multiplication and division


The multiplication of two complex numbers is defined by the following formula:

(a + bi)(c + di) = (ac − bd) + (bc + ad)i.

In particular, the square of the imaginary unit is −1:

i² = −1.

The preceding definition of multiplication of general complex numbers is the natural way of extending this fundamental property of the imaginary unit. Indeed, treating i as a variable, the formula follows from

(a + bi)(c + di) = ac + adi + bci + bdi²  (distributive law)
                 = ac + bdi² + (ad + bc)i  (commutative laws of addition and multiplication: the order of the summands and factors can be changed)
                 = (ac − bd) + (ad + bc)i  (fundamental property of the imaginary unit).

The division of two complex numbers is defined in terms of complex multiplication, which is described above, and real division:

(a + bi)/(c + di) = ((ac + bd) + (bc − ad)i) / (c² + d²).

Division can be defined in this way because of the following observation:

(a + bi)/(c + di) = ((a + bi)(c − di)) / ((c + di)(c − di)) = ((ac + bd) + (bc − ad)i) / (c² + d²).


As shown earlier, c − di is the complex conjugate of the denominator c + di. The real part c and the imaginary part d of the denominator must not both be zero for division to be defined.

Square root
The square roots of a + bi (with b ≠ 0) are ±(γ + δi), where

γ = √[(a + √(a² + b²)) / 2]

and

δ = sgn(b) √[(−a + √(a² + b²)) / 2],

where sgn is the signum function. This can be seen by squaring ±(γ + δi) to obtain a + bi.[4] [5] Here

√(a² + b²)

is called the modulus of a + bi, and the square root with non-negative real part is called the principal square root.
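The formula can be checked numerically in Python (principal_sqrt is our illustrative helper; cmath.sqrt is used only for comparison):

```python
import cmath, math

def principal_sqrt(a, b):
    """Principal square root of a + bi (b != 0): gamma + delta*i with
    gamma = sqrt((a + r)/2), delta = sgn(b)*sqrt((-a + r)/2), r = |a + bi|."""
    r = math.hypot(a, b)                        # the modulus sqrt(a^2 + b^2)
    gamma = math.sqrt((a + r) / 2)
    delta = math.copysign(math.sqrt((-a + r) / 2), b)
    return complex(gamma, delta)

print(principal_sqrt(3, 4))   # (2+1j), since (2+i)^2 = 3+4i
print(cmath.sqrt(3 + 4j))     # the library result, for comparison
```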

Polar form
Absolute value and argument
Another way of encoding points in the complex plane other than using the x- and y-coordinates is to use the distance of a point P to O, the point whose coordinates are (0, 0) (origin), and the angle of the line through P and O. This idea leads to the polar form of complex numbers. The absolute value (or modulus or magnitude) of a complex number z = x + yi is

r = |z| = √(x² + y²).

Figure 2: The argument φ and modulus r locate a point on an Argand diagram; r(cos φ + i sin φ) or re^(iφ) are polar expressions of the point.

If z is a real number (i.e., y = 0), then r = |x|. In general, by Pythagoras' theorem, r is the distance of the point P representing the complex number z to the origin. The argument or phase of z is the angle of the radius OP with the positive real axis, and is written as arg(z) or φ.[6] As with the modulus, the argument can be found from the rectangular form x + yi.


The value of φ must always be expressed in radians. It can change by any multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. Normally, as given above, the principal value in the interval (−π, π] is chosen. Values in the range [0, 2π) are obtained by adding 2π if the value is negative. The polar angle for the complex number 0 is undefined, but an arbitrary choice of the angle 0 is common. The value of φ equals the result of atan2: φ = atan2(y, x). Together, r and φ give another way of representing complex numbers, the polar form, as the combination of modulus and argument fully specify the position of a point on the plane. Recovering the original rectangular co-ordinates from the polar form is done by the formula called trigonometric form

z = r(cos φ + i sin φ).

Using Euler's formula this can be written as

z = r e^(iφ).

Using the cis function, this is sometimes abbreviated to

z = r cis φ.

In angle notation, often used in electronics to represent a phasor with amplitude r and phase φ, it is written as[7]

z = r∠φ.
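Python's cmath module provides exactly this conversion between rectangular and polar forms; a brief illustration:

```python
import cmath, math

z = 1 + 1j
r, phi = cmath.polar(z)      # modulus and argument, phi in radians
print(r, phi)                # ~ sqrt(2) and ~ pi/4

# Recover rectangular coordinates from the trigonometric form
# r*(cos(phi) + i*sin(phi)) = r*e^(i*phi):
back = r * (math.cos(phi) + 1j * math.sin(phi))
print(back)                  # ~ (1+1j)
print(cmath.rect(r, phi))    # the same conversion via the library
```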

Multiplication, division and exponentiation in polar form


The relevance of representing complex numbers in polar form stems from the fact that the formulas for multiplication, division and exponentiation are simpler than the ones using Cartesian coordinates. Given two complex numbers z₁ = r₁(cos φ₁ + i sin φ₁) and z₂ = r₂(cos φ₂ + i sin φ₂), the formula for multiplication is

z₁ z₂ = r₁ r₂ (cos(φ₁ + φ₂) + i sin(φ₁ + φ₂)).

Multiplication of 2 + i (blue triangle) and 3 + i (red triangle). The red triangle is rotated to match the vertex of the blue one and stretched by √5, the length of the hypotenuse of the blue triangle.

In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, multiplying by i corresponds to a quarter-rotation counter-clockwise, which gives back i² = −1. The picture at the right illustrates the multiplication of

(2 + i)(3 + i) = 5 + 5i.

Since the real and imaginary part of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula

π/4 = arctan(1/2) + arctan(1/3)

holds. As the arctan function can be approximated highly efficiently, formulas like this, known as Machin-like formulas, are used for high-precision approximations of π. Similarly, division is given by

z₁/z₂ = (r₁/r₂) (cos(φ₁ − φ₂) + i sin(φ₁ − φ₂)).

This also implies de Moivre's formula for exponentiation of complex numbers with integer exponents:

zⁿ = rⁿ (cos nφ + i sin nφ).

The n-th roots of z are given by

ⁿ√z = ⁿ√r (cos((φ + 2kπ)/n) + i sin((φ + 2kπ)/n))

for any integer k satisfying 0 ≤ k ≤ n − 1. Here ⁿ√r is the usual (positive) nth root of the positive real number r.

While the nth root of a positive real number r is chosen to be the positive real number c satisfying cⁿ = r, there is no natural way of distinguishing one particular complex nth root of a complex number. Therefore, the nth root of z is considered as a multivalued function (in z), as opposed to a usual function f, for which f(z) is a uniquely defined number. Formulas such as

ⁿ√(zw) = ⁿ√z ⁿ√w

(which holds for positive real numbers) do not, in general, hold for complex numbers.
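All n roots can be generated directly from this formula (nth_roots is an illustrative helper, not a standard library function):

```python
import cmath, math

def nth_roots(z, n):
    """All n complex n-th roots of z, via de Moivre's formula:
    r^(1/n) * (cos((phi + 2*pi*k)/n) + i*sin((phi + 2*pi*k)/n))."""
    r, phi = cmath.polar(z)
    return [cmath.rect(r ** (1.0 / n), (phi + 2 * math.pi * k) / n)
            for k in range(n)]

for w in nth_roots(8, 3):   # cube roots of 8: 2 and two complex roots
    print(w, w ** 3)        # each cube is ~ 8
```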

Properties
Field structure
The set C of complex numbers is a field. Briefly, this means that the following facts hold: first, any two complex numbers can be added and multiplied to yield another complex number. Second, for any complex number a, its negative −a is also a complex number; and third, every nonzero complex number has a reciprocal complex number. Moreover, these operations satisfy a number of laws, for example the law of commutativity of addition and multiplication for any two complex numbers z₁ and z₂:

z₁ + z₂ = z₂ + z₁ and z₁ z₂ = z₂ z₁.

These two laws and the other requirements on a field can be proven by the formulas given above, using the fact that the real numbers themselves form a field. Unlike the reals, C is not an ordered field, that is to say, it is not possible to define a relation z₁ < z₂ that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, so i² = −1 precludes the existence of an ordering on C. When the underlying field for a mathematical topic or construct is the field of complex numbers, the thing's name is usually modified to reflect that fact. For example: complex analysis, complex matrix, complex polynomial, and complex Lie algebra.


Solutions of polynomial equations


Given any complex numbers (called coefficients) a₀, ..., aₙ, the equation

aₙ zⁿ + ⋯ + a₁ z + a₀ = 0

has at least one complex solution z, provided that at least one of the higher coefficients, a₁, ..., aₙ, is nonzero. This is the statement of the fundamental theorem of algebra. Because of this fact, C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial x² − 2 does not have a rational root, since √2 is not a rational number) nor the real numbers R (the polynomial x² + a does not have a real solution for a > 0, since the square of x is non-negative for any real number x). There are various proofs of this theorem, either by analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one root. Because of this fact, theorems that hold "for any algebraically closed field" apply to C. For example, any complex matrix has at least one (complex) eigenvalue.

Algebraic characterization
The field C has the following three properties: first, it has characteristic 0. This means that 1 + 1 + ... + 1 ≠ 0 for any number of summands (all of which equal one). Second, its transcendence degree over Q, the prime field of C, is the cardinality of the continuum. Third, it is algebraically closed (see above). It can be shown that any field having these properties is isomorphic (as a field) to C. For example, the algebraic closure of Qp also satisfies these three properties, so these two fields are isomorphic. Also, C is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that C contains many proper subfields which are isomorphic to C.

Characterization as a topological field


The preceding characterization of C describes only the algebraic aspects of C. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of C as a topological field (that is, a field equipped with a topology, which allows one to specify notions such as convergence) does take into account the topological properties. C contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions: P is closed under addition, multiplication and taking inverses; if x and y are distinct elements of P, then either x − y or y − x is in P; and if S is any nonempty subset of P, then S + P = x + P for some x in C. Moreover, C has a nontrivial involutive automorphism x → x* (namely complex conjugation), fixing P, such that xx* is in P for any nonzero x in C. Any field F with these properties can be endowed with a topology by taking the sets B(x, p) = { y | p − (y − x)(y − x)* ∈ P } as a base, where x ranges over the field and p ranges over P. With this topology F is isomorphic as a topological field to C. The only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R because the nonzero complex numbers are connected, while the nonzero real numbers are not.


Formal construction
Formal development
Above, complex numbers have been defined by introducing i, the imaginary unit, as a symbol. More rigorously, the set C of complex numbers can be defined as the set R² of ordered pairs (a, b) of real numbers. In this notation, the above formulas for addition and multiplication read

(a, b) + (c, d) = (a + c, b + d) and (a, b) · (c, d) = (ac − bd, ad + bc).
It is then just a matter of notation to express (a, b) as a + ib. Though this low-level construction does accurately describe the structure of the complex numbers, the following equivalent definition reveals the algebraic nature of C more immediately. This characterization relies on the notion of fields and polynomials. A field is a set endowed with addition, subtraction, multiplication and division operations which behave as is familiar from, say, the rational numbers. For example, the distributive law

x · (y + z) = x · y + x · z
is required to hold for any three elements x, y and z of a field. The set R of real numbers does form a field. A polynomial p(X) with real coefficients is an expression of the form

p(X) = an X^n + ... + a1 X + a0,
where the a0, ..., an are real numbers. The usual addition and multiplication of polynomials endows the set R[X] of all such polynomials with a ring structure, called the polynomial ring. The quotient ring R[X]/(X²+1) can be shown to be a field. This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X²+1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers. Moreover, the above formulas for addition etc. correspond to the ones yielded by this abstract algebraic approach; the two definitions of the field C are said to be isomorphic (as fields). Together with the above-mentioned fact that C is algebraically closed, this also shows that C is an algebraic closure of R.
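The ordered-pair construction translates directly into code; this is a minimal sketch (the function names are illustrative, not from the text):

```python
# Complex numbers as ordered pairs (a, b) of reals, with the rules
# (a, b) + (c, d) = (a + c, b + d) and (a, b)(c, d) = (ac - bd, ad + bc).
def add(z, w):
    (a, b), (c, d) = z, w
    return (a + c, b + d)

def mul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)                        # the pair playing the role of the imaginary unit
print(mul(i, i))                      # (-1.0, 0.0), i.e. i^2 = -1
print(add((1.0, 2.0), (3.0, -1.0)))   # (4.0, 1.0)
```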

Matrix representation of complex numbers


Complex numbers can also be represented by 2×2 matrices that have the following form:

  [ a  −b ]
  [ b   a ]
Here the entries a and b are real numbers. The sum and product of two such matrices is again of this form, and the sum and product of complex numbers corresponds to the sum and product of such matrices. The geometric description of the multiplication of complex numbers can also be phrased in terms of rotation matrices by using this correspondence between complex numbers and such matrices. Moreover, the square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix: |z|² = a² + b².

The conjugate a − bi corresponds to the transpose of the matrix.

Though this representation of complex numbers with matrices is the most common, many other representations arise from other matrices that square to the negative of the identity matrix. See the article on 2×2 real matrices for other representations of complex numbers.
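A short check of this correspondence, using NumPy as an assumed tool:

```python
import numpy as np

def as_matrix(z):
    """Represent z = a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 1 + 2j, 3 - 1j
# Multiplication of complex numbers matches matrix multiplication
assert np.allclose(as_matrix(z * w), as_matrix(z) @ as_matrix(w))
# The squared absolute value equals the determinant
assert np.isclose(abs(z) ** 2, np.linalg.det(as_matrix(z)))
print("checks passed")
```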


Complex analysis
The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.

Complex exponential and related functions

Color wheel graph of sin(1/z). Black parts inside refer to numbers having large absolute values.

The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by that of complex numbers. From a more abstract point of view, C, endowed with the metric

d(z1, z2) = |z1 − z2|,
is a complete metric space, which notably includes the triangle inequality

|z1 + z2| ≤ |z1| + |z2|
for any two complex numbers z1 and z2. As in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp(z), also written e^z, is defined as the infinite series

exp(z) = 1 + z + z²/2! + z³/3! + ...
and the series defining the real trigonometric functions sine and cosine, as well as hyperbolic functions such as sinh, also carry over to complex arguments without change. Euler's formula states:

exp(iφ) = cos φ + i sin φ

for any real number φ; in particular exp(iπ) = −1.
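Both the defining series and Euler's formula are easy to check numerically. This sketch assumes Python's standard cmath module; the choice of 40 series terms is an arbitrary cutoff:

```python
import cmath
from math import factorial, pi

def exp_series(z, terms=40):
    """Partial sum of the defining series: sum of z**n / n! for n < terms."""
    return sum(z**n / factorial(n) for n in range(terms))

# Euler's formula gives exp(i*pi) = -1; the truncated series and cmath agree
print(abs(exp_series(1j * pi) - (-1)) < 1e-12)   # True
print(abs(cmath.exp(1j * pi) - (-1)) < 1e-12)    # True
```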

Unlike in the situation of real numbers, there is an infinitude of complex solutions z of the equation

exp(z) = w
for any complex number w ≠ 0. It can be shown that any such solution z, called a complex logarithm of w, satisfies

log w = ln |w| + i arg(w),
where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the

imaginary part to the interval (−π, π]. Complex exponentiation z^ω is defined as

z^ω = exp(ω log z).


Consequently, complex powers are in general multi-valued. For ω = 1/n, with n a natural number, this recovers the non-uniqueness of n-th roots mentioned above.
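The multivaluedness can be made concrete by listing the n solutions of z^n = w via z = exp((ln|w| + i(arg w + 2πk))/n) for k = 0, ..., n−1 (the function name below is illustrative):

```python
import cmath
from math import pi

def nth_roots(w, n):
    """All n complex solutions of z**n = w (w nonzero)."""
    modulus, argument = abs(w), cmath.phase(w)
    return [cmath.exp((cmath.log(modulus) + 1j * (argument + 2 * pi * k)) / n)
            for k in range(n)]

# The four fourth roots of unity: 1, i, -1, -i
for root in nth_roots(1, 4):
    print(root)
```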

Holomorphic functions
A function f : C → C is called holomorphic if it satisfies the Cauchy-Riemann equations. For example, any R-linear map C → C can be written in the form

f(z) = az + bz*
with complex coefficients a and b, where z* denotes the complex conjugate of z. This map is holomorphic if and only if b = 0. The second summand bz* is real-differentiable, but does not satisfy the Cauchy-Riemann equations.
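A numerical sanity check makes the distinction tangible: a holomorphic map satisfies f_y = i·f_x, and the conjugation summand fails this. The finite-difference step h is an arbitrary choice.

```python
def cauchy_riemann_residual(f, z, h=1e-6):
    """Finite-difference check: holomorphic maps satisfy f_y = i * f_x."""
    fx = (f(z + h) - f(z - h)) / (2 * h)            # derivative in the real direction
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # derivative in the imaginary direction
    return abs(fy - 1j * fx)

print(cauchy_riemann_residual(lambda z: z * z, 1 + 1j) < 1e-6)        # True: z^2 is holomorphic
print(cauchy_riemann_residual(lambda z: z.conjugate(), 1 + 1j) > 1)   # True: conjugation is not
```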

Complex analysis shows some features not apparent in real analysis. For example, any two holomorphic functions f and g that agree on an arbitrarily small open subset of C necessarily agree everywhere. Meromorphic functions, functions that can locally be written as f(z)/(z − z0)^n with a holomorphic function f(z), still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0.

Applications
Some applications of complex numbers are:

Control theory
In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's poles and zeros are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane. In the root locus method, it is especially important whether the poles and zeros are in the left or right half planes, i.e. have real part greater than or less than zero. If a system has poles that are in the right half plane, it will be unstable; if all are in the left half plane, it will be stable; if they are on the imaginary axis, it will have marginal stability. If a system has zeros in the right half plane, it is a nonminimum phase system.

Signal analysis
Complex numbers are used in signal analysis and other fields for a convenient description for periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value |z| of the corresponding z is the amplitude and the argument arg(z) the phase. If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex valued functions of the form

f(t) = z e^(iωt)
where ω represents the angular frequency and the complex number z encodes the phase and amplitude as explained above. In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances

for the latter two and combining all three in a single complex number called the impedance. (Electrical engineers and some physicists use the letter j for the imaginary unit, since i is typically reserved for varying currents.) This approach is called phasor calculus. This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.
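The impedance treatment can be sketched in a few lines; the component values and frequency below are arbitrary illustrative choices, not from the text:

```python
import cmath
from math import pi

# Example: series RLC network driven at 1 kHz (all values arbitrary)
R, L, C = 100.0, 10e-3, 1e-6          # ohms, henries, farads
omega = 2 * pi * 1000.0               # angular frequency in rad/s

# Z_R = R, Z_L = j*omega*L, Z_C = 1/(j*omega*C); impedances add in series
Z = R + 1j * omega * L + 1 / (1j * omega * C)
print(abs(Z), cmath.phase(Z))         # magnitude (ohms) and phase (radians)
```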


Improper integrals
In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.

Quantum mechanics
The complex number field is relevant in the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics, the Schrödinger equation and Heisenberg's matrix mechanics, make use of complex numbers.

Relativity
In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time variable to be imaginary. (This is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.

Dynamic equations
In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = e^(rt). Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used to attempt to solve the system in terms of base functions of the form f(t) = r^t.
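A small sketch of the method for the damped oscillator y'' + 2y' + 5y = 0 (an example equation chosen here, not from the text), using NumPy to find the characteristic roots:

```python
import numpy as np

# Characteristic equation of y'' + 2y' + 5y = 0 is r^2 + 2r + 5 = 0
r = np.roots([1, 2, 5])        # complex roots -1 + 2i and -1 - 2i

# The real part of e^{rt} is a genuine real solution: e^{-t} cos(2t)
t = 0.5
assert np.isclose(np.exp(r[0] * t).real, np.exp(-t) * np.cos(2 * t))
print(r)
```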

Fluid dynamics
In fluid dynamics, complex functions are used to describe potential flow in two dimensions.

Fractals
Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets.

Algebraic number theory


As mentioned above, any nonconstant polynomial equation (with complex coefficients) has a solution in C. A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers; they are a principal object of study in algebraic number theory. Compared to the algebraic closure of Q, which also contains all algebraic numbers, C has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge, a purely geometric problem. Another example are Pythagorean triples (a, b, c), that is to say integers satisfying

a² + b² = c²
Construction of a regular polygon using straightedge and compass.

(which implies that the triangle having side lengths a, b, and c is a right triangle). They can be studied by considering Gaussian integers, that is, numbers of the form x + iy, where x and y are integers.
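Squaring Gaussian integers generates Pythagorean triples directly, since (m + ni)² = (m² − n²) + (2mn)i and |m + ni|² = m² + n². A minimal sketch:

```python
# (m + ni)^2 yields the two legs, |m + ni|^2 the hypotenuse (m > n > 0 integers)
def triple(m, n):
    z = complex(m, n) ** 2
    return (int(round(z.real)), int(round(z.imag)), m * m + n * n)

for m, n in [(2, 1), (3, 2), (4, 1)]:
    a, b, c = triple(m, n)
    assert a * a + b * b == c * c        # each result is a Pythagorean triple
    print((a, b, c))
```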

Analytic number theory


Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta-function (s) is related to the distribution of prime numbers.

History
The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Heron of Alexandria in the 1st century AD, where in his Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the square root of a negative number in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Heron merely replaced it by its positive.[8] The impetus to study complex numbers proper first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolò Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. As an example, Tartaglia's cubic formula gives the solution to the equation x³ − x = 0 in a form involving cube roots of negative quantities.


The three cube roots of −1, two of which are complex

When the three cube roots of −1, of which two are complex, are substituted into this expression, the three real roots 0, 1 and −1 result. Of course this particular equation can be solved at sight, but it does illustrate that when general formulas are used to solve cubic equations with real roots then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic trying to resolve these issues.

The term "imaginary" for these quantities was coined by René Descartes in 1637, although he was at pains to stress their imaginary nature:[9] [...] quelquefois seulement imaginaires c'est-à-dire que l'on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu'il n'y a quelquefois aucune quantité qui corresponde à celle qu'on imagine. ([...] sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine.)

A further source of confusion was that the equation √(−1)² = √(−1)·√(−1) = −1 seemed to be capriciously inconsistent with the algebraic identity √a·√b = √(ab), which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity 1/√a = √(1/a)) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of √−1 to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, Elements of Algebra,[10] he introduces these numbers almost at once and then uses them in a natural way throughout.

In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the complicated identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be simply re-expressed by the following well-known formula which bears his name, de Moivre's formula:

(cos θ + i sin θ)^n = cos nθ + i sin nθ.
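De Moivre's formula is easy to verify numerically (the angle and exponent below are arbitrary choices):

```python
# Check (cos t + i sin t)^n == cos(nt) + i sin(nt) for sample values
from math import cos, sin

t, n = 0.3, 5
lhs = complex(cos(t), sin(t)) ** n
rhs = complex(cos(n * t), sin(n * t))
print(abs(lhs - rhs) < 1e-12)   # True
```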

In 1748 Leonhard Euler went further and obtained Euler's formula of complex analysis:

e^(iθ) = cos θ + i sin θ


by formally manipulating complex power series, and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.

The idea of a complex number as a point in the complex plane (above) was first described by Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's De Algebra tractatus. Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology. The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in 'a really confident and scientific way' although mathematicians such as Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.[11] Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.

The common terms used in the theory are chiefly due to the founders. Argand called cos φ + i sin φ the direction factor, and r = √(a² + b²) the modulus; Cauchy (1828) called cos φ + i sin φ the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for √−1, introduced the term complex number for a + bi, and called a² + b² the norm. The expression direction coefficient, often used for cos φ + i sin φ, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.

Generalizations and related notions


The process of extending the field R of reals to C is known as the Cayley-Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O which (as a real vector space) are of dimension 4 and 8, respectively. However, with increasing dimension, the algebraic properties familiar from real and complex numbers vanish: the quaternions are only a skew field, i.e. xy ≠ yx for some pairs of quaternions, and the multiplication of octonions fails (in addition to not being commutative) to be associative: (xy)z ≠ x(yz). However, all of these are normed division algebras over R. By Hurwitz's theorem they are the only ones. The next step in the Cayley-Dickson construction, the sedenions, fails to have this structure. The Cayley-Dickson construction is closely related to the regular representation of C, thought of as an R-algebra (an R-vector space with a multiplication), with respect to the basis 1, i. This means the following: the R-linear map

for some fixed complex number w can be represented by a 2×2 matrix (once a basis has been chosen). With respect to the basis 1, i, this matrix is

  [ Re(w)  −Im(w) ]
  [ Im(w)   Re(w) ]
i.e., the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of C in the 2×2 real matrices, it is not the only one. Any matrix

  J = [ p  q ]
      [ r  −p ]    with p² + qr = −1

has the property that its square is the negative of the identity matrix: J² = −I. Then

  { aI + bJ : a, b real }


is also isomorphic to the field C, and gives an alternative complex structure on R². This is generalized by the notion of a linear complex structure. Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x² − 1) (as opposed to R[x]/(x² + 1)). In this ring, the equation a² = 1 has four solutions. The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q lead to the fields Qp of p-adic numbers (for any prime number p), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and Qp, by Ostrowski's theorem. The algebraic closure of Qp still carries a norm, but (unlike C) is not complete with respect to it. The completion of this algebraic closure turns out to be algebraically closed. This field is called the field of p-adic complex numbers by analogy.

The fields R and Qp and their finite field extensions, including C, are local fields.

Notes
[1] Burton (1995, p. 294)
[2] Aufmann, Richard N.; Barker, Vernon C.; Nation, Richard D. (2007), College Algebra and Trigonometry (6th ed.), Cengage Learning, p. 66, ISBN 0618825150. Chapter P, p. 66 (http://books.google.com/books?id=g5j-cT-vg_wC&pg=PA66)
[3] Katz (2004, §9.1.4)
[4] Abramowitz, Milton; Stegun, Irene A. (1964). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Courier Dover Publications. p. 17. ISBN 0-486-61272-4. Section 3.7.26, p. 17 (http://www.math.sfu.ca/~cbm/aands/page_17.htm)
[5] Cooke, Roger (2008). Classical Algebra: Its Nature, Origins, and Uses. John Wiley and Sons. p. 59. ISBN 0-470-25952-3. Extract: p. 59 (http://books.google.com/books?id=lUcTsYopfhkC&pg=PA59)
[6] Kasana, H.S. (2005). Complex Variables: Theory and Applications (2nd ed.). PHI Learning Pvt. Ltd. p. 14. ISBN 81-203-2641-5. Extract of chapter 1, p. 14 (http://books.google.com/books?id=rFhiJqkrALIC&pg=PA14)
[7] Nilsson, James William; Riedel, Susan A. (2008). Electric Circuits (8th ed.). Prentice Hall. p. 338. ISBN 0-131-98925-1. Chapter 9, p. 338 (http://books.google.com/books?id=sxmM8RFL99wC&pg=PA338)
[8] Nahin, Paul J. (2007). An Imaginary Tale: The Story of √−1. Princeton University Press. ISBN 9780691127989. Retrieved 20 April 2011. (http://mathforum.org/kb/thread.jspa?forumID=149&threadID=383188&messageID=1181284)
[9] Descartes, René (1954) [1637]. La Géométrie. Dover Publications. ISBN 0486600688. Retrieved 20 April 2011. (http://www.gutenberg.org/ebooks/26400)
[10] http://web.mat.bham.ac.uk/C.J.Sangwin/euler/
[11] Hardy, G. H.; Wright, E. M. (2000) [1938]. An Introduction to the Theory of Numbers (4th ed.). OUP Oxford. p. 189. ISBN 0199219869.

References
Mathematical references
Ahlfors, Lars (1979), Complex Analysis (3rd ed.), McGraw-Hill, ISBN 978-0070006577
Conway, John B. (1986), Functions of One Complex Variable I, Springer, ISBN 0-387-90328-3
Joshi, Kapil D. (1989), Foundations of Discrete Mathematics, New York: John Wiley & Sons, ISBN 978-0-470-21152-6
Pedoe, Dan (1988), Geometry: A Comprehensive Course, Dover, ISBN 0-486-65812-0
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 5.5 Complex Arithmetic", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8 (http://apps.nrbook.com/empanel/index.html?pg=225)
Solomentsev, E.D. (2001), "Complex number", in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104 (http://eom.springer.de/c/c024140.htm)


Historical references
Burton, David M. (1995), The History of Mathematics (3rd ed.), New York: McGraw-Hill, ISBN 978-0-07-009465-9
Katz, Victor J. (2004), A History of Mathematics, Brief Version, Addison-Wesley, ISBN 978-0-321-16193-2
Nahin, Paul J. (1998), An Imaginary Tale: The Story of √−1 (hardcover ed.), Princeton University Press, ISBN 0-691-02795-1. A gentle introduction to the history of complex numbers and the beginnings of complex analysis.
Ebbinghaus, H.-D.; et al. (1991), Numbers (hardcover ed.), Springer, ISBN 0-387-97497-0. An advanced perspective on the historical development of the concept of number.

Further reading
The Road to Reality: A Complete Guide to the Laws of the Universe, by Roger Penrose; Alfred A. Knopf, 2005; ISBN 0-679-45443-8. Chapters 4-7 in particular deal extensively (and enthusiastically) with complex numbers.
Unknown Quantity: A Real and Imaginary History of Algebra, by John Derbyshire; Joseph Henry Press; ISBN 0-309-09657-X (hardcover 2006). A very readable history with emphasis on solving polynomial equations and the structures of modern algebra.
Visual Complex Analysis, by Tristan Needham; Clarendon Press; ISBN 0-19-853447-7 (hardcover, 1997). History of complex numbers and complex analysis with compelling and useful visual interpretations.

External links
Imaginary Numbers (http://www.bbc.co.uk/programmes/b00tt6b2) on In Our Time at the BBC.
Euler's work on Complex Roots of Polynomials (http://mathdl.maa.org/convergence/1/?pa=content&sa=viewDocument&nodeId=640&bodyId=1038) at Convergence. MAA Mathematical Sciences Digital Library.
John and Betty's Journey Through Complex Numbers (http://mathforum.org/johnandbetty/)
Dimensions: a math film (http://www.dimensions-math.org/Dim_regarder_E.htm). Chapter 5 presents an introduction to complex arithmetic and stereographic projection. Chapter 6 discusses transformations of the complex plane, Julia sets, and the Mandelbrot set.


Microphone practice
There exist a number of well-developed microphone techniques used for miking musical, film, or voice sources. Choice of technique depends on a number of factors, including:

The collection of extraneous noise. This can be a concern, especially in amplified performances, where audio feedback can be a significant problem. Alternatively, it can be a desired outcome, in situations where ambient noise is useful (hall reverberation, audience reaction).
Choice of a signal type: mono, stereo or multi-channel.
Type of sound source: acoustic instruments produce a very different sound than electric instruments, which are again different from the human voice.
Situational circumstances: sometimes a microphone should not be visible, or having a microphone nearby is not appropriate. In scenes for a movie the microphone may be held above the picture frame, just out of sight. In this way there is always a certain distance between the actor and the microphone.
Processing: if the signal is destined to be heavily processed, or "mixed down", a different type of input may be required.
The use of a windshield, as well as a pop shield designed to reduce vocal plosives.

Basic techniques
There are several classes of microphone placement for recording and amplification. In close miking, a microphone is placed relatively close to an instrument or sound source. This serves to reduce extraneous noise, including room reverberation, and is commonly used when attempting to record a number of separate instruments while keeping the signals separate, or when trying to avoid feedback in an amplified performance. Close miking often affects the frequency response of the microphone, especially for directional mics, which exhibit bass boost from the proximity effect. In ambient or distant miking, a microphone, typically a sensitive one, is placed at some distance from the sound source. The goal of this technique is to get a broader, natural mix of the sound source or sources, along with ambient sound, including reverberation from the room or hall.

Multi-track recording
Often each instrument or vocalist is miked separately, with one or more microphones recording to separate channels (tracks). At a later stage, the channels are combined ('mixed-down') to two channels for stereo or more for surround sound. The artists need not perform in the same place at the same time, and individual tracks (or sections of tracks) can be re-recorded to correct errors. Generally effects such as reverberation are added to each recorded channel, and different levels sent to left and right final channels to position the artist in the stereo sound-stage. Microphones may also be used to record the overall effect, or just the effect of the performance room. This permits greater control over the final sound, but recording two channels (stereo recording) is simpler and cheaper, and can give a sound that is more natural.

Stereo recording techniques


There are two features of sound that the human brain uses to place objects in the stereo sound field between the loudspeakers. These are the relative level (or loudness) difference between the two channels, ΔL, and the difference in arrival times for the same sound in each channel, Δt. The "interaural" signals at the ears (binaural ILD and ITD) are not the stereo microphone signals coming from the loudspeakers, which are called "interchannel" signals (ΔL and Δt). These signals are normally not mixed. Loudspeaker signals are different from the sound arriving at the ear. See the article "Binaural recording for earphones".


Various methods of stereo recording


X-Y technique: intensity stereophony
Here there are two directional microphones at the same place, typically placed at 90° or more to each other.[1] A stereo effect is achieved through differences in sound pressure level between the two microphones. Due to the lack of time-of-arrival differences and phase ambiguities, the sonic characteristic of X-Y recordings is generally less "spacey" and has less depth compared to recordings employing an A-B setup.

XY Stereo

When the microphones are bidirectional and placed facing ±45° with respect to the sound source, the X-Y setup is called a Blumlein Pair. The sonic image produced by this configuration is considered by many authorities to create a realistic, almost holographic soundstage. A further refinement of the Blumlein Pair was developed by EMI in 1958, who called it "Stereosonic". They added a little in-phase crosstalk above 700 Hz to better align the mid and treble phantom sources with the bass ones.[2]

Blumlein Stereo

A-B technique: time-of-arrival stereophony


This uses two parallel omnidirectional microphones some distance apart, capturing time-of-arrival stereo information as well as some level (amplitude) difference information, especially if employed close to the sound source(s). At a distance of about 50 cm (0.5 m), the time delay for a signal reaching first one and then the other microphone from the side is approximately 1.5 ms (1 to 2 ms). Increasing the distance between the microphones effectively decreases the pickup angle; at 70 cm spacing it is about equivalent to the pickup angle of the near-coincident ORTF setup.
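The ~1.5 ms figure follows from dividing the capsule spacing by the speed of sound. A minimal sketch (the speed of sound is taken as 343 m/s, an assumed round value):

```python
# Time-of-arrival difference for a source fully to the side of an A-B pair:
# the extra path length is simply the capsule spacing, so dt = spacing / c.
def ab_delay_ms(spacing_m, speed_of_sound=343.0):
    return spacing_m / speed_of_sound * 1000.0

print(round(ab_delay_ms(0.5), 2))   # 1.46 (ms), matching the ~1.5 ms above
print(round(ab_delay_ms(0.7), 2))   # 2.04 (ms)
```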


M/S technique: Mid/Side stereophony


This coincident technique employs a bidirectional microphone facing sideways and a cardioid (generally a variety of cardioid, although Alan Blumlein described the usage of an omnidirectional transducer in his original patent) at an angle of 90°, facing the sound source. One microphone is physically inverted over the other, so that both capsules are the same distance from the source. The left and right channels are produced through a simple matrix: Left = Mid + Side, Right = Mid − Side (the polarity-reversed side signal). This configuration produces a completely mono-compatible signal and, if the Mid and Side signals are recorded (rather than the matrixed Left and Right), the stereo width can be manipulated after the recording has taken place. This makes it especially useful for film-based projects. There is some controversy as to whether the M/S miking technique can create translation issues when used with matrix-encoded cinema surround formats such as Dolby SR Lt/Rt, which relies on phase relationships between the left and right channels of the stereo recording in order to decode surround information.
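The M/S matrix described above can be sketched in a few lines. This is an illustrative sketch, not a production implementation; the `width` parameter is a hypothetical extra showing how recording Mid and Side separately allows the stereo width to be adjusted afterwards:

```python
import numpy as np

def ms_to_lr(mid, side, width=1.0):
    """Decode Mid/Side to Left/Right: L = M + w*S, R = M - w*S."""
    return mid + width * side, mid - width * side

def lr_to_ms(left, right):
    """Encode Left/Right back to Mid/Side (with a 1/2 normalisation)."""
    return (left + right) / 2.0, (left - right) / 2.0

mid = np.array([0.5, -0.2, 0.1])
side = np.array([0.1, 0.3, -0.4])
left, right = ms_to_lr(mid, side)

# Mono fold-down (L + R) returns 2*Mid: the Side signal cancels exactly,
# which is the mono-compatibility property noted above.
print(np.allclose(left + right, 2 * mid))  # True

# The matrix is invertible, so nothing is lost by recording M/S:
m2, s2 = lr_to_ms(left, right)
print(np.allclose(m2, mid) and np.allclose(s2, side))  # True
```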

Choosing a technique
If a stereo signal is to be reproduced in mono, out-of-phase parts of the signal will cancel, which may cause the unwanted reduction or loss of some parts of the signal. This can be an important factor in choosing which technique to use.

Mid-Side Stereo

Since the A-B techniques use phase differences to give the stereo image, they are the least compatible with mono. In the X-Y techniques, the microphones would ideally be in exactly the same place, which is not possible. If they are slightly separated left to right, there may be some loss of treble when played back in mono, so they are often separated vertically instead; this only causes problems with sound from above or below the height of the microphones. The M/S technique is ideal for mono compatibility, since summing Left + Right just gives the Mid signal back. The equipment for the techniques also varies from the bulky to the small and convenient. A-B techniques generally use two separate microphone units, often mounted on a bar to define the separation. X-Y microphone capsules can be mounted in one unit, or even on the top of a handheld digital recorder. Since M/S setups can give a variable soundstage width, they are often used in small 'pencil microphones' to mount on video cameras, matching a zoom lens.


References
[1] Michael Williams. "The Stereophonic Zoom" (http://www.rycote.com/images/uploads/The_Stereophonic_Zoom.pdf) (PDF). Rycote Microphone Windshields Ltd.
[2] Eargle, John (2004). The Microphone Book (http://books.google.co.uk/books?id=w8kXMVKOsY0C&pg=PA170) (2 ed.). Focal Press. p. 170. ISBN 0240519612.

External links
Michael Williams, The Stereophonic Zoom (http://www.rycote.com/images/uploads/The_Stereophonic_Zoom.pdf)
Visualization XY Stereo System - Blumlein Eight/Eight 90° - Intensity Stereo (http://www.sengpielaudio.com/Visualization-Blumlein-E.htm)

Wave
In physics, a wave is a disturbance that travels through space and time, usually accompanied by the transfer of energy. Waves travel and the wave motion transfers energy from one point to another, often with no permanent displacement of the particles of the medium; that is, with little or no associated mass transport. They consist, instead, of oscillations or vibrations around almost fixed locations. For example, a cork on rippling water will bob up and down, staying in about the same place while the wave itself moves onwards.
Surface waves in water

One type of wave is a mechanical wave, which propagates through a medium in which the substance of this medium is deformed. The deformation reverses itself owing to restoring forces resulting from its deformation. For example, sound waves propagate via air molecules bumping into their neighbours. This transfers some energy to these neighbours, which causes a cascade of collisions between neighbouring molecules. When air molecules collide with their neighbours, they also bounce away from them (restoring force). This keeps the molecules from continuing to travel in the direction of the wave.

Another type of wave can travel through a vacuum, e.g. electromagnetic radiation (including visible light, ultraviolet radiation, infrared radiation, gamma rays, X-rays, and radio waves). This type of wave consists of periodic oscillations in electrical and magnetic fields.

A main distinction can be made between transverse and longitudinal waves. Transverse waves occur when a disturbance sends waves perpendicular (at right angles) to the original wave. Longitudinal waves occur when a disturbance sends waves in the same direction as the original wave.

Waves are described by a wave equation which sets out how the disturbance proceeds over time. The mathematical form of this equation varies depending on the type of wave.


General features
A single, all-encompassing definition for the term wave is not straightforward. A vibration can be defined as a back-and-forth motion around a reference value. However, a vibration is not necessarily a wave. An attempt to define the necessary and sufficient characteristics that qualify a phenomenon to be called a wave results in a fuzzy border line.

The term wave is often intuitively understood as referring to a transport of spatial disturbances that are generally not accompanied by a motion of the medium occupying this space as a whole. In a wave, the energy of a vibration is moving away from the source in the form of a disturbance within the surrounding medium (Hall 1980, p. 8). However, this notion is problematic for a standing wave (for example, a wave on a string), where energy is moving in both directions equally, or for electromagnetic / light waves in a vacuum, where the concept of medium does not apply.

There are water waves on the ocean surface; light waves emitted by the Sun; microwaves used in microwave ovens; radio waves broadcast by radio stations; and sound waves generated by radio receivers, telephone handsets and living creatures (as voices).

It may appear that the description of waves is closely related to their physical origin for each specific instance of a wave process. For example, acoustics is distinguished from optics in that sound waves are related to a mechanical rather than an electromagnetic wave transfer caused by vibration. Concepts such as mass, momentum, inertia, or elasticity become therefore crucial in describing acoustic (as distinct from optic) wave processes. This difference in origin introduces certain wave characteristics particular to the properties of the medium involved. For example, in the case of air: vortices, radiation pressure, shock waves etc.; in the case of solids: Rayleigh waves, dispersion etc.; and so on.
Other properties, however, although they are usually described in an origin-specific manner, may be generalized to all waves. For such reasons, wave theory represents a particular branch of physics that is concerned with the properties of wave processes independently of their physical origin.[1] For example, based on the mechanical origin of acoustic waves, a moving disturbance in space-time can exist if and only if the medium involved is neither infinitely stiff nor infinitely pliable. If all the parts making up a medium were rigidly bound, then they would all vibrate as one, with no delay in the transmission of the vibration and therefore no wave motion; such instantaneous transmission is in any case impossible, because it would violate relativity. On the other hand, if all the parts were independent, then there would not be any transmission of the vibration and again, no wave motion.

Although the above statements are meaningless in the case of waves that do not require a medium, they reveal a characteristic that is relevant to all waves regardless of origin: within a wave, the phase of a vibration (that is, its position within the vibration cycle) is different for adjacent points in space because the vibration reaches these points at different times.

Similarly, wave processes revealed from the study of waves other than sound waves can be significant to the understanding of sound phenomena. A relevant example is Thomas Young's principle of interference (Young, 1802, in Hunt 1992, p. 132). This principle was first introduced in Young's study of light and, within some specific contexts (for example, scattering of sound by sound), is still a researched area in the study of sound.


Mathematical description of one-dimensional waves


Wave equation
Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling

- in the x direction in space; e.g., let the positive x direction be to the right, and the negative x direction be to the left;
- with constant amplitude u;
- with constant velocity v, where v is independent of wavelength (no dispersion) and independent of amplitude (linear media, not nonlinear);[2]
- with constant waveform, or shape.

Wavelength λ can be measured between any two corresponding points on a waveform.

This wave can then be described by the two-dimensional functions

u(x, t) = F(x − vt)   (waveform F traveling to the right)
u(x, t) = G(x + vt)   (waveform G traveling to the left)

or, more generally, by d'Alembert's formula:[3]

u(x, t) = F(x − vt) + G(x + vt),

representing two component waveforms F and G traveling through the medium in opposite directions. This wave can also be represented by the partial differential equation

∂²u/∂t² = v² ∂²u/∂x².

General solutions are based upon Duhamel's principle.[4]
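The d'Alembert form can be checked numerically: any sum F(x − vt) + G(x + vt) satisfies the wave equation u_tt = v²·u_xx. The sketch below (pulse shapes and evaluation point chosen arbitrarily) verifies this with central finite differences:

```python
import numpy as np

v = 2.0
F = lambda s: np.exp(-s**2)    # right-moving Gaussian pulse (illustrative)
G = lambda s: 0.5 * np.sin(s)  # left-moving sinusoid (illustrative)

def u(x, t):
    """d'Alembert solution: two waveforms moving in opposite directions."""
    return F(x - v * t) + G(x + v * t)

# Central second differences approximate the partial derivatives:
x, t, h = 0.3, 0.7, 1e-4
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
print(abs(u_tt - v**2 * u_xx) < 1e-4)  # True: the PDE holds
```

Swapping in any other smooth F and G leaves the check passing, which is the point of the general solution.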

Wave forms
The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction).[5]

In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v.[6]

Sine, square, triangle and sawtooth waveforms.


Amplitude and modulation


The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form:[7] [8] [9]

u(x, t) = A(x, t) sin(kx − ωt + φ),

where A(x, t) is the amplitude envelope of the wave, k is the wavenumber and φ is the phase. If the group velocity v_g (see below) is wavelength-independent, this equation can be simplified as:[10]

u(x, t) = A(x − v_g t) sin(kx − ωt + φ),

showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation.[10] [11]

Illustration of the envelope (the slowly varying red curve) of an amplitude-modulated wave. The fast-varying blue curve is the carrier wave, which is being modulated.
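The dispersionless case u(x, t) = A(x − v_g t) sin(kx − ωt) can be illustrated numerically: the envelope's peak should drift at the group velocity while keeping its shape. All parameter values below are assumed for illustration:

```python
import numpy as np

k, w, vg = 20.0, 40.0, 1.5          # carrier wavenumber/frequency, group velocity
envelope = lambda s: np.exp(-s**2)  # Gaussian amplitude envelope A

def modulated(x, t):
    """Carrier sin(kx - wt) under a rigidly translating envelope."""
    return envelope(x - vg * t) * np.sin(k * x - w * t)

x = np.linspace(-10.0, 10.0, 4001)
for t in (0.0, 2.0):
    # The largest excursion of |u| sits near the envelope peak at x = vg*t
    # (to within half a carrier wavelength).
    peak = x[np.argmax(np.abs(modulated(x, t)))]
    print(abs(peak - vg * t) < 0.2)  # True, True
```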

Phase velocity and group velocity


There are two velocities that are associated with waves, the phase velocity and the group velocity. To understand them, one must consider several types of waveform. For simplification, examination is restricted to one dimension.

Frequency dispersion in groups of gravity waves on the surface of deep water. The red dot moves with the phase velocity, and the green dots propagate with the group velocity.

The most basic wave (a form of plane wave) may be expressed in the form:

ψ(x, t) = A e^{i(kx − ωt)},

which can be related to the usual sine and cosine forms using Euler's formula. Rewriting the argument as kx − ωt = (2π/λ)(x − v_p t) makes clear that this expression describes a vibration of wavelength λ = 2π/k traveling in the x-direction with a constant phase velocity v_p = ω/k.[12]

This shows a wave with the group velocity and phase velocity going in different directions.

The other type of wave to be considered is one with localized structure described by an envelope, which may be expressed mathematically as, for example:

ψ(x, t) = ∫ dk₁ A(k₁) e^{i(k₁x − ω(k₁)t)},

where now A(k₁) (the integral is the inverse Fourier transform of A(k₁)) is a function exhibiting a sharp peak in a region of wave vectors k₁ surrounding the point k₁ = k. In exponential form,

A(k₁) = A₀(k₁) e^{iα(k₁)},

with A₀ the magnitude of A. For example, a common choice for A₀ is a Gaussian wave packet:[13]

A₀(k₁) = N exp(−σ²(k₁ − k)²/2),

where σ determines the spread of k₁-values about k, and N is the amplitude of the wave.

The exponential function inside the integral for ψ oscillates rapidly with its argument, say φ(k₁) = k₁x − ω(k₁)t + α(k₁), and where it varies rapidly, the exponentials cancel each other out, interfere destructively, contributing little to ψ.[12] However, an exception occurs at the location where the argument φ of the exponential varies slowly. (This observation is the basis for the method of stationary phase for evaluation of such integrals.[14]) The condition for φ to vary slowly is that its rate of change with k₁ be small; this rate of variation is:[12]

dφ/dk₁ = x − t dω/dk₁ + dα/dk₁,

where the evaluation is made at k₁ = k because A(k₁) is centered there. This result shows that the position x where the phase changes slowly, the position where ψ is appreciable, moves with time at a speed called the group velocity:

v_g = dω/dk.

The group velocity therefore depends upon the dispersion relation connecting ω and k. For example, in quantum mechanics the energy of a particle represented as a wave packet is E = ħω = (ħk)²/(2m). Consequently, for that wave situation, the group velocity is

v_g = ħk/m,

showing that the velocity of a localized particle in quantum mechanics is its group velocity.[12] Because the group velocity varies with k, the shape of the wave packet broadens with time, and the particle becomes less localized.[15] In other words, the constituent waves of the wave packet travel at rates that vary with their wavelength, so some move faster than others, and they cannot maintain the same interference pattern as the wave propagates.

Sinusoidal waves
Mathematically, the most basic wave is the (spatially) one-dimensional sine wave (or harmonic wave or sinusoid) with an amplitude u described by the equation:

u(x, t) = A sin(kx − ωt + φ),

where
- A is the maximum amplitude of the wave, the maximum distance from the highest point of the disturbance in the medium (the crest) to the equilibrium point during one wave cycle. In the illustration to the right, this is the maximum vertical distance between the baseline and the wave;
- x is the space coordinate;
- t is the time coordinate;
- k is the wavenumber;
- ω is the angular frequency;
- φ is the phase.

Sinusoidal waves correspond to simple harmonic motion.

The units of the amplitude depend on the type of wave. Transverse mechanical waves (e.g., a wave on a string) have an amplitude expressed as a distance (e.g., meters), longitudinal mechanical waves (e.g., sound waves) use units of pressure (e.g., pascals), and electromagnetic waves (a form of transverse vacuum wave) express the amplitude in terms of its electric field (e.g., volts/meter).

The wavelength λ is the distance between two sequential crests or troughs (or other equivalent points), generally measured in meters. A wavenumber k, the spatial frequency of the wave in radians per unit distance (typically per meter), can be associated with the wavelength by the relation

k = 2π/λ.

The period T is the time for one complete cycle of an oscillation of a wave. The frequency f is the number of periods per unit time (per second) and is typically measured in hertz. These are related by:

f = 1/T.

In other words, the frequency and period of a wave are reciprocals. The angular frequency ω represents the frequency in radians per second. It is related to the frequency or period by

ω = 2πf = 2π/T.

The wavelength λ of a sinusoidal waveform traveling at constant speed v is given by:[16]

λ = v/f,

where v is called the phase speed (magnitude of the phase velocity) of the wave and f is the wave's frequency.
Wavelength can be a useful concept even if the wave is not periodic in space. For example, in an ocean wave approaching shore, the incoming wave undulates with a varying local wavelength that depends in part on the depth of the sea floor compared to the wave height. The analysis of the wave can be based upon comparison of the local wavelength with the local water depth.[17]

Although arbitrary wave shapes will propagate unchanged in lossless linear time-invariant systems, in the presence of dispersion the sine wave is the unique shape that will propagate unchanged but for phase and amplitude, making it easy to analyze.[18] Due to the Kramers–Kronig relations, a linear medium with dispersion also exhibits loss, so the sine wave propagating in a dispersive medium is attenuated in certain frequency ranges that depend upon the medium.[19]

The sine function is periodic, so the sine wave or sinusoid has a wavelength in space and a period in time.[20] [21] The sinusoid is defined for all times and distances, whereas in physical situations we usually deal with waves that exist for a limited span in space and duration in time. Fortunately, an arbitrary wave shape can be decomposed into an infinite set of sinusoidal waves by the use of Fourier analysis. As a result, the simple case of a single sinusoidal wave can be applied to more general cases.[22] [23] In particular, many media are linear, or nearly so, so the calculation of arbitrary wave behavior can be found by adding up responses to individual sinusoidal waves using the superposition principle to find the solution for a general waveform.[24] When a medium is nonlinear, the response to complex waves cannot be determined from a sine-wave decomposition.
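The relations between f, T, ω, k and λ can be tied together in a short worked example; the values (a 440 Hz tone in air at an assumed speed of sound of 343 m/s) are illustrative:

```python
import math

v = 343.0   # phase speed, m/s (assumed speed of sound in air)
f = 440.0   # frequency, Hz

T = 1 / f                  # period, s        (f = 1/T)
omega = 2 * math.pi * f    # angular frequency, rad/s  (w = 2*pi*f)
lam = v / f                # wavelength, m    (lambda = v/f)
k = 2 * math.pi / lam      # wavenumber, rad/m (k = 2*pi/lambda)

# Consistency check: the ratio w/k recovers the phase speed v.
print(round(lam, 3))              # 0.78 (metres)
print(abs(omega / k - v) < 1e-9)  # True
```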


Standing waves
A standing wave, also known as a stationary wave, is a wave that remains in a constant position. This phenomenon can occur because the medium is moving in the opposite direction to the wave, or it can arise in a stationary medium as a result of interference between two waves traveling in opposite directions.

Standing wave in stationary medium. The red dots represent the wave nodes

The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time.
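The cancellation can be made explicit with the trigonometric identity sin(kx − ωt) + sin(kx + ωt) = 2 sin(kx) cos(ωt): the spatial factor sin(kx) fixes the nodes in place for all time. A quick numerical check (parameters arbitrary):

```python
import numpy as np

k, w = 3.0, 5.0
x = np.linspace(0.0, 2 * np.pi, 101)
t = 0.37  # an arbitrary instant

# Two equal counter-propagating travelling waves...
travelling_sum = np.sin(k * x - w * t) + np.sin(k * x + w * t)
# ...equal a standing wave: fixed spatial profile times a time oscillation.
standing = 2 * np.sin(k * x) * np.cos(w * t)

print(np.allclose(travelling_sum, standing))  # True
# Nodes sit where sin(kx) = 0; the sum vanishes there at every t.
```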

One-dimensional standing waves; the fundamental mode and the first 6 overtones.

A two-dimensional standing wave on a disk; this is the fundamental mode.

A standing wave on a disk with two nodal lines crossing at the center; this is an overtone.

Physical properties
Waves exhibit common behaviors under a number of standard situations, e.g.,

Transmission and media


Waves normally move in a straight line (i.e. rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories:
- A bounded medium if it is finite in extent, otherwise an unbounded medium
- A linear medium if the amplitudes of different waves at any particular point in the medium can be added
- A uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space
- An anisotropic medium if one or more of its physical properties differ in one or more directions
- An isotropic medium if its physical properties are the same in all directions

Light beam exhibiting reflection, refraction, transmission and dispersion when encountering a prism


Reflection
When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and line normal to the surface equals the angle made by the reflected wave and the same normal line.

Interference
Waves that encounter each other combine through superposition to create a new wave called an interference pattern. Important interference patterns occur for waves that are in phase.

Refraction
Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law.
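Snell's law, n₁ sin θ₁ = n₂ sin θ₂, is easy to sketch in code; the refractive indices below are illustrative textbook values for air and water, not figures from this text:

```python
import math

def refraction_angle_deg(theta1_deg, n1, n2):
    """Refracted angle from Snell's law: n1*sin(t1) = n2*sin(t2).

    Returns None when no refracted wave exists (total internal reflection).
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Light passing from air (n ~ 1.00) into water (n ~ 1.33) at 30 degrees:
print(round(refraction_angle_deg(30.0, 1.00, 1.33), 1))  # 22.1

# Going the other way at a steep angle exceeds the critical angle:
print(refraction_angle_deg(80.0, 1.33, 1.00))            # None
```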

Diffraction
A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave.
Sinusoidal traveling plane wave entering a region of lower wave velocity at an angle, illustrating the decrease in wavelength and change of direction (refraction) that results.

Polarization
A wave is polarized if it oscillates in one direction or plane. A wave can be polarized by the use of a polarizing filter. The polarization of a transverse wave describes the direction of oscillation in the plane perpendicular to the direction of travel. Longitudinal waves such as sound waves do not exhibit polarization. For these waves the direction of oscillation is along the direction of travel.


Dispersion
A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency. Dispersion is most easily seen by letting white light pass through a prism, the result of which is to produce the spectrum of colours of the rainbow. Isaac Newton performed experiments with light and prisms, presenting his findings in the Opticks (1704) that white light consists of several colours and that these colours cannot be decomposed any further.[25]

Mechanical waves
Waves on strings
The speed of a wave traveling along a vibrating string (v) is directly proportional to the square root of the tension of the string (T) over the linear mass density (μ):

v = √(T/μ),

where the linear density μ is the mass per unit length of the string.
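As a quick sketch of v = √(T/μ), with tension and linear density values chosen purely for illustration:

```python
import math

def string_wave_speed(tension_N, linear_density_kg_per_m):
    """Transverse wave speed on a string: v = sqrt(T / mu)."""
    return math.sqrt(tension_N / linear_density_kg_per_m)

# Example (assumed values): 60 N of tension, 0.6 g/m linear density.
v = string_wave_speed(60.0, 0.0006)
print(round(v))  # 316 (m/s)
```

Quadrupling the tension doubles the speed, which is why tuning a string sharp requires disproportionately more tension.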

Acoustic waves
Acoustic or sound waves travel at a speed given by

v = √(B/ρ₀),

that is, the square root of the adiabatic bulk modulus divided by the ambient fluid density (see speed of sound).
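For an ideal gas the adiabatic bulk modulus is B = γp, so the formula reduces to v = √(γp/ρ). The figures below are standard values for dry air at 0 °C, used here as an assumed illustration:

```python
import math

gamma = 1.4        # adiabatic index of air
p = 101325.0       # ambient pressure, Pa
rho = 1.293        # density of dry air at 0 degrees C, kg/m^3

# Speed of sound from the adiabatic bulk modulus B = gamma * p:
v = math.sqrt(gamma * p / rho)
print(round(v))    # 331 (m/s), the textbook speed of sound at 0 degrees C
```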

Water waves
Ripples on the surface of a pond are actually a combination of transverse and longitudinal waves; therefore, the points on the surface follow orbital paths.
- Sound: a mechanical wave that propagates through gases, liquids, solids and plasmas;
- Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect;
- Ocean surface waves, which are perturbations that propagate through water.

Shock waves


Other
- Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, which can be modeled as kinematic waves[26]
- Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions.

Electromagnetic waves
(radio, microwave, infrared, visible, UV) An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields satisfy the wave equation both with speed equal to that of the speed of light. From this emerged the idea that light is an electromagnetic wave. Electromagnetic waves can have different frequencies (and thus wavelengths), giving rise to various types of radiation such as radio waves, microwaves, infrared, visible light, ultraviolet and X-rays.

Quantum mechanical waves


The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle. Quantum mechanics also describes particle properties that other waves, such as light and sound, have on the atomic scale and below.

de Broglie waves
Louis de Broglie postulated that all particles with momentum have a wavelength

λ = h/p,

A propagating wave packet; in general, the envelope of the wave packet moves at a different [27] speed than the constituent waves.


where h is Planck's constant, and p is the magnitude of the momentum of the particle. This hypothesis was at the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about 10⁻¹³ m. A wave representing such a particle traveling in the k-direction is expressed by the wave function:

ψ(r, t) = A e^{i(k·r − ωt)},

where the wavelength is determined by the wave vector k as:

λ = 2π/k,

and the momentum by:

p = ħk.

However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet,[28] a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value.

In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet.[29] Gaussian wave packets also are used to analyze water waves.[30] For example, a Gaussian wavefunction ψ might take the form:[31]

ψ(x, 0) = A exp(−x²/(2σ²) + ik₀x)

at some initial time t = 0, where the central wavelength is related to the central wave vector k₀ as λ₀ = 2π/k₀. It is well known from the theory of Fourier analysis,[32] or from the Heisenberg uncertainty principle (in the case of quantum mechanics), that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian.[33] Given the Gaussian:

f(x) = e^{−x²/(2σ²)},

the Fourier transform is:

F(k) = σ e^{−σ²k²/2}.

The Gaussian in space therefore is made up of waves:

f(x) = (1/√(2π)) ∫ F(k) e^{ikx} dk;

that is, a number of waves of wavelengths λ such that kλ = 2π. The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k.
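The reciprocal spreading of a Gaussian and its Fourier transform can be checked numerically: the spatial width is σ, the spectral width of the transform magnitude is 1/σ, so their product is 1 for every σ. The grid parameters below are assumed for illustration:

```python
import numpy as np

def widths_product(sigma):
    """Product of spatial and spectral widths of a Gaussian profile."""
    x = np.linspace(-50.0, 50.0, 2**14)
    f = np.exp(-x**2 / (2.0 * sigma**2))
    # Angular wave numbers matching the FFT samples:
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
    F = np.abs(np.fft.fftshift(np.fft.fft(f)))
    # Width = standard deviation of the coordinate, weighted by the profile:
    width = lambda coord, profile: np.sqrt(np.sum(coord**2 * profile) / np.sum(profile))
    return width(x, f) * width(k, F)

# Narrow or wide in space, the product stays 1: reciprocal spreading.
print(round(widths_product(0.5), 2), round(widths_product(3.0), 2))  # 1.0 1.0
```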


Gravitational waves
Researchers believe that gravitational waves also travel through space, although gravitational waves have never been directly detected. Not to be confused with gravity waves, gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.

WKB method
In a nonuniform medium, in which the wavenumber k can depend on the location as well as the frequency, the phase term kx is typically replaced by the integral of k(x)dx, according to the WKB method. Such nonuniform traveling waves are common in many physical problems, including the mechanics of the cochlea and waves on hanging ropes.
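The WKB replacement of kx by ∫ k(x) dx can be sketched with a simple quadrature; the k(x) profiles below are purely illustrative:

```python
import numpy as np

def wkb_phase(k_vals, x_grid):
    """Trapezoidal approximation of the accumulated WKB phase, integral of k(x) dx."""
    return float(np.sum(0.5 * (k_vals[1:] + k_vals[:-1]) * np.diff(x_grid)))

x = np.linspace(0.0, 10.0, 10001)
k_uniform = np.full_like(x, 2.0)  # uniform medium: constant k = 2
k_graded = 2.0 + 0.1 * x          # slowly varying wavenumber (assumed profile)

print(round(wkb_phase(k_uniform, x), 6))  # 20.0: reduces to k*x in a uniform medium
print(round(wkb_phase(k_graded, x), 6))   # 25.0: integral of (2 + 0.1x) over [0, 10]
```

In the uniform case the integral collapses back to kx, confirming that the WKB phase generalizes, rather than replaces, the plane-wave phase.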

Animation showing the effect of a cross-polarized gravitational wave on a ring of test particles

References
[1] Lev A. Ostrovsky & Alexander I. Potapov (2002). Modulated waves: theory and application (http:/ / www. amazon. com/ gp/ product/ 0801873258). Johns Hopkins University Press. ISBN0801873258. . [2] Michael A. Slawinski (2003). "Wave equations" (http:/ / books. google. com/ ?id=s7bp6ezoRhcC& pg=PA134). Seismic waves and rays in elastic media. Elsevier. pp.131 ff. ISBN0080439306. . [3] Karl F Graaf (1991). Wave motion in elastic solids (http:/ / books. google. com/ ?id=5cZFRwLuhdQC& printsec=frontcover) (Reprint of Oxford 1975 ed.). Dover. pp.1314. ISBN9780486667454. . [4] Jalal M. Ihsan Shatah, Michael Struwe (2000). "The linear wave equation" (http:/ / books. google. com/ ?id=zsasG2axbSoC& pg=PA37). Geometric wave equations. American Mathematical Society Bookstore. pp.37 ff. ISBN0821827499. . [5] Louis Lyons (1998). All you wanted to know about mathematics but were afraid to ask (http:/ / books. google. com/ ?id=WdPGzHG3DN0C& pg=PA128). Cambridge University Press. pp.128 ff. ISBN052143601X. . [6] Alexander McPherson (2009). "Waves and their properties" (http:/ / books. google. com/ ?id=o7sXm2GSr9IC& pg=PA77). Introduction to Macromolecular Crystallography (2 ed.). Wiley. p.77. ISBN0470185902. . [7] Christian Jirauschek (2005). FEW-cycle Laser Dynamics and Carrier-envelope Phase Detection (http:/ / books. google. com/ ?id=6kOoT_AX2CwC& pg=PA9). Cuvillier Verlag. p.9. ISBN3865374190. . [8] Fritz Kurt Kneubhl (1997). Oscillations and waves (http:/ / books. google. com/ ?id=geYKPFoLgoMC& pg=PA365). Springer. p.365. ISBN354062001X. . [9] Mark Lundstrom (2000). Fundamentals of carrier transport (http:/ / books. google. com/ ?id=FTdDMtpkSkIC& pg=PA33). Cambridge University Press. p.33. ISBN0521631343. . [10] Chin-Lin Chen (2006). "13.7.3 Pulse envelope in nondispersive media" (http:/ / books. google. com/ ?id=LxzWPskhns0C& pg=PA363). Foundations for guided-wave optics. Wiley. p.363. ISBN0471756873. . [11] Stefano Longhi, Davide Janner (2008). 
"Localization and Wannier wave packets in photonic crystals" (http:/ / books. google. com/ ?id=xxbXgL967PwC& pg=PA329). In Hugo E. Hernndez-Figueroa, Michel Zamboni-Rached, Erasmo Recami. Localized Waves. Wiley-Interscience. p.329. ISBN0470108851. . [12] Albert Messiah (1999). Quantum Mechanics (http:/ / books. google. com/ ?id=mwssSDXzkNcC& pg=PA52& dq=intitle:quantum+ inauthor:messiah+ "group+ velocity"+ "center+ of+ the+ wave+ packet") (Reprint of two-volume Wiley 1958 ed.). Courier Dover. pp.5052. ISBN9780486409245. . [13] See, for example, Eq. 2(a) in

Walter Greiner, D. Allan Bromley (2007). Quantum Mechanics: An introduction (http:/ / books. google. com/ ?id=7qCMUfwoQcAC&pg=PA61) (2nd ed.). Springer. pp.6061. ISBN3540674586. .
[14] John W. Negele, Henri Orland (1998). Quantum many-particle systems (http:/ / books. google. com/ ?id=mx5CfeeEkm0C& pg=PA121) (Reprint in Advanced Book Classics ed.). Westview Press. p.121. ISBN0738200522. . [15] Donald D. Fitts (1999). Principles of quantum mechanics: as applied to chemistry and chemical physics (http:/ / books. google. com/ ?id=8t4DiXKIvRgC& pg=PA15). Cambridge University Press. pp.15 ff. ISBN0521658411. . [16] David C. Cassidy, Gerald James Holton, Floyd James Rutherford (2002). Understanding physics (http:/ / books. google. com/ ?id=rpQo7f9F1xUC& pg=PA340). Birkhuser. pp.339 ff. ISBN0387987568. . [17] Paul R Pinet (2009). op. cit. (http:/ / books. google. com/ ?id=6TCm8Xy-sLUC& pg=PA242). p.242. ISBN0763759937. .

Wave




External links
Interactive Visual Representation of Waves (http://resonanceswavesandfields.blogspot.com/2007/08/true-waves.html)
Science Aid: Wave properties - concise guide aimed at teens (http://www.scienceaid.co.uk/physics/waves/properties.html)
Simulation of diffraction of a water wave passing through a gap (http://www.phy.hk/wiki/englishhtm/Diffraction.htm)
Simulation of interference of water waves (http://www.phy.hk/wiki/englishhtm/Interference.htm)
Simulation of a longitudinal traveling wave (http://www.phy.hk/wiki/englishhtm/Lwave.htm)
Simulation of a stationary wave on a string (http://www.phy.hk/wiki/englishhtm/StatWave.htm)
Simulation of a transverse traveling wave (http://www.phy.hk/wiki/englishhtm/TwaveA.htm)
Sounds Amazing - AS and A-Level learning resource for sound and waves (http://www.acoustics.salford.ac.uk/feschools/)
Chapter from an online textbook (http://www.lightandmatter.com/html_books/lm/ch19/ch19.html)
Simulation of waves on a string (http://www.physics-lab.net/applets/mechanical-waves)
Simulation of longitudinal and transverse mechanical waves (http://www.cbu.edu/~jvarrian/applets/waves1/lontra_g.htm)
MIT OpenCourseWare 8.03: Vibrations and Waves (http://ocw.mit.edu/courses/physics/8-03-physics-iii-vibrations-and-waves-fall-2004/) - free, independent-study course with video lectures, assignments, lecture notes and exams

Article Sources and Contributors




Fast Fourier transform Source: http://en.wikipedia.org/w/index.php?oldid=447035554
Discrete Hartley transform Source: http://en.wikipedia.org/w/index.php?oldid=431133573
Discrete Fourier transform Source: http://en.wikipedia.org/w/index.php?oldid=447093571
Fourier analysis Source: http://en.wikipedia.org/w/index.php?oldid=446009665
Sine Source: http://en.wikipedia.org/w/index.php?oldid=446147979
Trigonometric functions Source: http://en.wikipedia.org/w/index.php?oldid=446848723
Complex number Source: http://en.wikipedia.org/w/index.php?oldid=446136248
Microphone practice Source: http://en.wikipedia.org/w/index.php?oldid=446098192
Wave Source: http://en.wikipedia.org/w/index.php?oldid=447038625


3.0 Contributors: Brews ohare Image:Complex number illustration.svg Source: http://en.wikipedia.org/w/index.php?title=File:Complex_number_illustration.svg License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0 Contributors: Original uploader was Wolfkeeper at en.wikipedia Image:Complex number illustration.png Source: http://en.wikipedia.org/w/index.php?title=File:Complex_number_illustration.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Kan8eDie File:Complex conjugate picture.svg Source: http://en.wikipedia.org/w/index.php?title=File:Complex_conjugate_picture.svg License: GNU Free Documentation License Contributors: Oleg Alexandrov Image:Vector Addition.svg Source: http://en.wikipedia.org/w/index.php?title=File:Vector_Addition.svg License: Public Domain Contributors: Booyabazooka, Kilom691 Image:Complex_number_illustration_modarg.svg Source: http://en.wikipedia.org/w/index.php?title=File:Complex_number_illustration_modarg.svg License: GNU Free Documentation License Contributors: Complex_number_illustration.svg: Original uploader was Wolfkeeper at en.wikipedia derivative work: Kan8eDie (talk) File:ComplexMultiplication.png Source: http://en.wikipedia.org/w/index.php?title=File:ComplexMultiplication.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Jakob.scholbach



Image:Sin1perz.png Source: http://en.wikipedia.org/w/index.php?title=File:Sin1perz.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Kovzol
File:Pentagon construct.gif Source: http://en.wikipedia.org/w/index.php?title=File:Pentagon_construct.gif License: Public domain Contributors: TokyoJunkie at the English Wikipedia
Image:NegativeOne3Root.svg Source: http://en.wikipedia.org/w/index.php?title=File:NegativeOne3Root.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Loadmaster (David R. Tribble)
Image:XY stereo.svg Source: http://en.wikipedia.org/w/index.php?title=File:XY_stereo.svg License: GNU Free Documentation License Contributors: Iainf 23:51, 21 September 2007 (UTC)
Image:Blumlein Stereo.svg Source: http://en.wikipedia.org/w/index.php?title=File:Blumlein_Stereo.svg License: GNU Free Documentation License Contributors: Iainf 23:51, 21 September 2007 (UTC)
Image:MS stereo.svg Source: http://en.wikipedia.org/w/index.php?title=File:MS_stereo.svg License: GNU Free Documentation License Contributors: Iainf 23:51, 21 September 2007 (UTC)
File:2006-01-14 Surface waves.jpg Source: http://en.wikipedia.org/w/index.php?title=File:2006-01-14_Surface_waves.jpg License: GNU Free Documentation License Contributors: Roger McLassus
File:Nonsinusoidal wavelength.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Nonsinusoidal_wavelength.JPG License: Creative Commons Attribution-Sharealike 3.0 Contributors: Brews ohare
File:Waveforms.svg Source: http://en.wikipedia.org/w/index.php?title=File:Waveforms.svg License: unknown Contributors: Jafeluv, Omegatron, Pieter Kuiper, 5 anonymous edits
File:Wave packet.svg Source: http://en.wikipedia.org/w/index.php?title=File:Wave_packet.svg License: Public Domain Contributors: Oleg Alexandrov
Image:Wave group.gif Source: http://en.wikipedia.org/w/index.php?title=File:Wave_group.gif License: GNU Free Documentation License Contributors: Kraaiennest
Image:Wave opposite-group-phase-velocity.gif Source: http://en.wikipedia.org/w/index.php?title=File:Wave_opposite-group-phase-velocity.gif License: Creative Commons Attribution 3.0 Contributors: Geek3
File:Simple harmonic motion animation.gif Source: http://en.wikipedia.org/w/index.php?title=File:Simple_harmonic_motion_animation.gif License: Public Domain Contributors: User:Evil_saltine
File:Standing wave.gif Source: http://en.wikipedia.org/w/index.php?title=File:Standing_wave.gif License: Public Domain Contributors: BrokenSegue, Cdang, Joolz, Kersti Nebelsiek, Kieff, Mike.lifeguard, Pieter Kuiper, Ptj
Image:Harmonic partials on strings.svg Source: http://en.wikipedia.org/w/index.php?title=File:Harmonic_partials_on_strings.svg License: Public Domain Contributors: Qef
Image:Drum vibration mode01.gif Source: http://en.wikipedia.org/w/index.php?title=File:Drum_vibration_mode01.gif License: Public Domain Contributors: Oleg Alexandrov
Image:Drum vibration mode21.gif Source: http://en.wikipedia.org/w/index.php?title=File:Drum_vibration_mode21.gif License: Public Domain Contributors: Oleg Alexandrov
File:Light dispersion of a mercury-vapor lamp with a flint glass prism IPNr0125.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Light_dispersion_of_a_mercury-vapor_lamp_with_a_flint_glass_prism_IPNr0125.jpg License: unknown Contributors: D-Kuru
File:Wave refraction.gif Source: http://en.wikipedia.org/w/index.php?title=File:Wave_refraction.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: Dicklyon (Richard F. Lyon)
File:Circular.Polarization.Circularly.Polarized.Light Circular.Polarizer Creating.Left.Handed.Helix.View.svg Source: http://en.wikipedia.org/w/index.php?title=File:Circular.Polarization.Circularly.Polarized.Light_Circular.Polarizer_Creating.Left.Handed.Helix.View.svg License: Public Domain Contributors: Dave3457
File:Shallow water wave.gif Source: http://en.wikipedia.org/w/index.php?title=File:Shallow_water_wave.gif License: GNU Free Documentation License Contributors: Kraaiennest
File:Transonico-en.svg Source: http://en.wikipedia.org/w/index.php?title=File:Transonico-en.svg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Cmprince, Cobatfor, Ignacio Icke, Pieter Kuiper, Rocket000
File:Onde electromagntique.png Source: http://en.wikipedia.org/w/index.php?title=File:Onde_electromagntique.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: ploufandsplash
File:Wave packet (dispersion).gif Source: http://en.wikipedia.org/w/index.php?title=File:Wave_packet_(dispersion).gif License: Public Domain Contributors: Cdang, Fffred, Kersti Nebelsiek
File:GravitationalWave CrossPolarization.gif Source: http://en.wikipedia.org/w/index.php?title=File:GravitationalWave_CrossPolarization.gif License: Public Domain Contributors: Original uploader was MOBle at en.wikipedia


License


Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/
