
Chapter 1: Signals and Biomedical Signal Processing
OVERVIEW

In this chapter:
 Different types of signals are defined.
The fundamental concepts of signal transformation and processing are presented while avoiding detailed mathematical formulations.
WHAT IS A ONE-DIMENSIONAL SIGNAL?

A 1-D signal is an ordered sequence of numbers that describes the trends and variations of a quantity.
A sequence of body temperature recordings collected on consecutive days forms an example of a 1-D signal in time.
 Not all 1-D signals are necessarily ordered in time.
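For illustration only, a minimal Python/NumPy sketch of such a 1-D signal (the temperature values below are hypothetical, chosen just for the example):

    import numpy as np

    # Hypothetical body-temperature readings (degrees C), one per day.
    # The order of the samples is part of the signal's information.
    temperature = np.array([36.6, 36.8, 37.1, 38.2, 37.5, 36.9, 36.7])
    day = np.arange(len(temperature))   # sample index: day 0, 1, 2, ...
    print(day, temperature)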
THE CHARACTERISTICS OF A SIGNAL

The characteristics of a signal lie in the order of the numbers as well as the amplitude of the recorded numbers.
 The order of the numbers in a signal is often determined by the
order of measurements (or events) in “time.”
EXAMPLES OF BIOLOGICAL 1-D SIGNALS
Electrocardiogram (ECG):
A recording of the electrical activity of the heart muscle.
Electroencephalogram (EEG):
A signal that records the electrical activity of the brain; it is heavily used in diagnostics of the central nervous system (CNS).
WHAT IS A MULTIDIMENSIONAL SIGNAL?

Multidimensional signals are simply extensions of 1-D signals.
A multidimensional signal is a multidimensional sequence of numbers ordered in all dimensions.
For example, an image is a two-dimensional (2-D) sequence of data in which the numbers are ordered along both dimensions.
Types of image modalities that are heavily used for clinical diagnostics:
Magnetic resonance imaging (MRI):
is based on the magnetic properties of living tissue.
Computed tomography (CT):
relies on the interaction between x-ray beams and biological tissues to form an image.
Ultrasound imaging and positron emission tomography (PET).
Based on the continuity of the signal along the time and amplitude axes, signals are classified into the following three types:

 Analog Signals.
 Discrete Signals.
 Digital Signals.
Analog Signals:
Both the time and amplitude axes are continuous.
At any given real value of time "t," the amplitude "g(t)" can take any value in a continuous interval of real numbers.
Discrete Signals: g(n) = g(nTs)
A discrete signal is the sampled version of an analog signal: the amplitude axis is continuous, but the time axis is discrete.
The measurements of the quantity are available only at certain specific times.
The spacing between consecutive samples is the sampling period "Ts."
The Nyquist theorem gives an upper limit on the size of the sampling period Ts; this limit guarantees that the discrete signal contains all the information of the original analog signal.
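As a minimal Python/NumPy sketch (the 5 Hz sinusoid and Ts = 0.02 s are example values, not from the text), sampling an analog signal well within the Nyquist limit:

    import numpy as np

    f = 5.0      # frequency of the analog sinusoid in Hz (example value)
    Ts = 0.02    # sampling period in seconds, i.e., a 50 Hz sampling rate
    # Nyquist condition: the sampling rate 1/Ts must exceed 2*f
    # (here 50 Hz > 10 Hz), so g(n) keeps all information of the analog signal.
    n = np.arange(100)                    # sample indices
    g = np.sin(2 * np.pi * f * n * Ts)    # discrete signal g(n) = g(n*Ts)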
PREFERENCE OF DIGITAL SIGNALS OVER ANALOG
SIGNALS.
One can easily measure and sample the temperature only at certain times. The times at which the temperature is sampled are often multiples of a certain sampling period "Ts."
The discrete signal can be stored easily, while the analog signal needs a large amount of storage space. It is also evident that smaller signals are easier to process.
Digital Signals:
 Both time and amplitude axes are discrete.
A digital signal is defined only at certain times, and its amplitude at each sample can only take one of a fixed, finite set of values.
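The following sketch (Python/NumPy, with 256 amplitude levels assumed purely for the example) turns the discrete signal from the previous sketch into a digital one by restricting its amplitude to a finite set of values:

    import numpy as np

    Ts, f = 0.02, 5.0
    n = np.arange(100)
    g = np.sin(2 * np.pi * f * n * Ts)   # discrete signal: time axis is discrete,
                                         # amplitude is still continuous

    # Digital signal: amplitude is quantized to 256 levels (8 bits)
    # spanning the range [-1, 1].
    levels = 256
    g_digital = np.round((g + 1) / 2 * (levels - 1)).astype(np.uint8)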
PROCESSING AND TRANSFORMATION OF
SIGNALS

Transformations manipulate a signal to highlight some of its properties. Some transformations express and evaluate the signal in the time domain, while other transformations focus on other "domains," among which the frequency domain is an important one.
Fourier transform (FT):
describes a signal in the frequency domain and highlights the important information carried by the frequency variations of the signal.
Assume a signal is given in both the time and Fourier domains. Which domain gives more information about the signal?
The answer to this tricky question is simply "neither!"
The information contained in a signal is exactly the same in all
domains, regardless of the specific domain definition.
Each transform can highlight a certain type of information (which is different from adding new knowledge to the signal).
For example, the frequency information is much more visible in the Fourier domain than in the time domain; exactly the same information is contained in the time signal, but it may be more difficult to notice there.
 The choice of the domain only affects the visibility, representation,
and highlighting of certain characteristics, while the information
contained in the signal remains the same in all domains.
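A minimal Python/NumPy sketch of this point, using a made-up signal containing 5 Hz and 12 Hz components (example values): the same numbers describe the signal in both domains, but the frequency content is much easier to read off after a Fourier transform.

    import numpy as np

    Ts = 0.02                                 # sampling period (50 Hz sampling rate)
    n = np.arange(100)
    # Sum of two sinusoids, 5 Hz and 12 Hz (example frequencies):
    g = np.sin(2 * np.pi * 5 * n * Ts) + 0.5 * np.sin(2 * np.pi * 12 * n * Ts)

    G = np.fft.rfft(g)                        # the signal in the Fourier domain
    freqs = np.fft.rfftfreq(len(g), d=Ts)     # frequency axis in Hz
    # |G| peaks at 5 Hz and 12 Hz; the same information is present in g(n),
    # but the frequency content is far more visible in the Fourier domain.
    print(freqs[np.argsort(np.abs(G))[-2:]])  # -> 12.0 and 5.0 (in some order)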
SOME CHARACTERISTICS OF DIGITAL IMAGES

Image capturing:
in medical imaging, sensors that measure different physical properties of materials (including light intensity and color) are employed to record functional information about the tissue under study.

 Image representation:
images are all visually represented as digital images. These images are either
gray-level images or color images.
In a gray-level image, the light intensity or brightness of an object shown at coordinates (x, y) of the image is represented by a number called the "gray level."
 The gray points that are partially bright and partially dark get a gray-level
value that is between 0 and the maximum value of brightness.
 The most popular ranges of gray level used in typical images are 0–255, 0–
511, 0–1023.
 The gray levels are almost always set to be nonnegative integer numbers.
This saves a lot of digital storage space.
The wider the gray-level range, the better the intensity resolution that is achieved.
Color images: the "red green blue" or "RGB" standard. RGB is based on the idea that each color is a combination of the three primary colors: red, green, and blue.
 The screen provides three dots for every pixel: one red dot, one green dot,
and one blue dot. This means that in color images for every coordinate (x,
y), three numbers are provided.
 This in turn means that the image itself is represented by three 2-D signals,
gR(x, y), gG(x, y), and gB(x, y), each representing the intensity of one
primary color.
 As a result, every one of the 2-D signals (for one color) can be treated as
one separate image and processed by the same image processing
methods designed for gray-level images.
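A minimal Python/NumPy sketch of this idea, using a randomly generated RGB image (a real image array of shape height x width x 3 would be handled the same way; the contrast stretch is just one example of a gray-level operation):

    import numpy as np

    # Hypothetical RGB image: a (height, width, 3) array of 8-bit intensities.
    rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

    # The three 2-D signals gR(x, y), gG(x, y), and gB(x, y):
    gR, gG, gB = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Each channel can now be processed like an ordinary gray-level image,
    # e.g., a simple contrast stretch applied to the red channel alone:
    lo, hi = int(gR.min()), int(gR.max())
    gR_stretched = ((gR.astype(float) - lo) / max(hi - lo, 1) * 255).astype(np.uint8)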
 Image histogram:
Assume that the gray levels of all pixels in an image belong to the interval [0, G−1]. If "r" represents the gray level of a pixel of the image, then 0 ≤ r ≤ G−1.
For every value of r, calculate the normalized frequency p(r): count the number of pixels in the image whose gray level equals r and call it n(r); then divide that count by the total number of pixels in the image, n, so that p(r) = n(r)/n.
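A minimal Python/NumPy sketch of this computation, using a randomly generated gray-level image with G = 256 levels assumed for the example (any 2-D array of integer gray levels would work the same way):

    import numpy as np

    G = 256                                        # number of gray levels
    img = np.random.randint(0, G, size=(64, 64))   # hypothetical gray-level image

    # n(r): number of pixels whose gray level equals r, for r = 0, ..., G-1
    n_r = np.bincount(img.ravel(), minlength=G)

    # p(r) = n(r) / n, where n is the total number of pixels in the image
    p_r = n_r / img.size
    assert np.isclose(p_r.sum(), 1.0)              # normalized frequencies sum to 1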