

Name: Bamgbelu Ayomide Yetunde


Department: Science Laboratory Technology
Level: ND II
Course: Calculus

Meaning and Concept of Zero
Zero is the integer denoted 0 that, when used as a counting number, means that no objects are
present. It is the only integer (and, in fact, the only real number) that is
neither negative nor positive. A number which is not zero is said to be nonzero. A root of a
function is also sometimes known as "a zero of f."
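As a small illustration (the function f(x) = x^2 - 4 and the bisection routine below are hypothetical choices for this sketch, not part of the text), x = 2 is a zero of f because f(2) = 0, and it can be located numerically:

def f(x):
    return x * x - 4  # f(2) = 0, so x = 2 is a zero of f

def bisect(func, lo, hi, tol=1e-9):
    # Assumes func(lo) and func(hi) have opposite signs.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if func(lo) * func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(bisect(f, 0.0, 5.0))  # approximately 2.0, a zero of f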
Zero is commonly taken to have the factorization 0 = 0 (e.g., in Mathematica's FactorInteger[n] command). On the other hand, the divisors of 0 and the divisor function evaluated at 0 are generally taken to be undefined, since by convention n | 0 (i.e., n divides 0) for every n except zero, because 0 = n · 0.
Because the number of permutations of 0 elements is 1, 0! (zero factorial) is defined as 1. This
definition is useful in expressing many mathematical identities in simple form.
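A quick check in Python (an illustrative sketch, not from the text above) confirms the convention and shows how it keeps the identity n! = n · (n-1)! valid at n = 1:

import math

print(math.factorial(0))                           # 1: exactly one way to arrange zero objects
print(math.factorial(1) == 1 * math.factorial(0))  # True: n! = n * (n-1)! still holds at n = 1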

A number other than 0 taken to the power 0 is defined to be 1, which follows from the limit

lim_(a→0) x^a = x^0 = 1 (for x ≠ 0).

This fact can be illustrated by plotting the curves x^a for a = 0.2, 0.4, ..., 2.0, which all converge at x = 1. It can also be seen more intuitively by noting that repeatedly taking the square
root of a number greater than 1 gives smaller and smaller numbers that approach one from above, while
doing the same with a number between 0 and 1 gives larger and larger numbers that approach
one from below. For n square roots, the total power taken is (1/2)^n, which approaches 0 as n gets
large, giving x^((1/2)^n) → x^0 = 1 in the limit of large n.
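The repeated-square-root intuition can be checked numerically; in the short Python sketch below, the starting values 9 and 0.1 are illustrative assumptions:

import math

x = 9.0  # a number greater than 1
y = 0.1  # a number between 0 and 1
for _ in range(20):  # after 20 square roots the exponent is (1/2)^20, very close to 0
    x = math.sqrt(x)
    y = math.sqrt(y)
print(x)  # slightly above 1: approaches 1 from above
print(y)  # slightly below 1: approaches 1 from below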
0^0 itself is undefined. The lack of a well-defined meaning for this quantity follows from the
mutually contradictory facts that a^0 is always 1 (for nonzero a), so 0^0 should equal 1, but 0^a is always 0 (for
a > 0), so 0^0 should equal 0. It could be argued that 0^0 = 1 is a natural definition since

lim_(x→0+) x^x = 1.

However, the limit does not exist for general complex values of x. Therefore, 0^0 is usually taken to be indeterminate.
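The two competing tendencies can be seen numerically; in the illustrative Python sketch below, x^x approaches 1 as x shrinks toward 0 from the right, while 0 raised to any positive power is 0 (Python itself adopts the convention 0 ** 0 == 1):

for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, x ** x)  # values tend toward 1 as x -> 0+

print(0.0 ** 0.5)     # 0.0: zero raised to a positive power is zero
print(0 ** 0)         # 1: Python's chosen convention for 0^0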

However, defining 0^0 = 1 allows some formulas to be expressed simply; an example is the beautiful analytical formula for the integral of the generalized sinc function given by Kogan, in which the floor function appears.
Richardson's theorem is a fundamental result in decidability theory which establishes that the
determination of whether even simple expressions are identically equal to zero is undecidable in
principle, let alone in practice.

Though humans have always understood the concept of nothing or having nothing, the concept
of zero is relatively new; it only fully developed in the fifth century A.D. Before then,
mathematicians struggled to perform the simplest arithmetic calculations. Today, zero, both as
a symbol (or numeral) and as a concept meaning the absence of any quantity, allows us to
perform calculus, solve complicated equations, and build computers.

Early history: Angled wedges
Zero was invented independently by the Babylonians, Mayans and Indians (although some
researchers say the Indian number system was influenced by the Babylonians). The Babylonians
got their number system from the Sumerians, the first people in the world to develop a
counting system. Developed 4,000 to 5,000 years ago, the Sumerian system was positional: the
value of a symbol depended on its position relative to other symbols. Robert Kaplan, author of
"The Nothing That Is: A Natural History of Zero," suggests that an ancestor to the placeholder
zero may have been a pair of angled wedges used to represent an empty number column.
However, Charles Seife, author of "Zero: The Biography of a Dangerous Idea," disagrees that the
wedges represented a placeholder.

The Sumerian system passed through the Akkadian Empire to the Babylonians around 300
B.C. There, scholars agree, a symbol appeared that was clearly a placeholder: a way to tell 10
from 100, or to signify that in the number 2,025 there is no number in the hundreds column.
Initially, the Babylonians left an empty space in their cuneiform number system, but when that
became confusing, they added a symbol, double angled wedges, to represent the empty
column. However, they never developed the idea of zero as a number.


Zero in the Americas
Six hundred years later and 12,000 miles from Babylon, the Mayans developed zero as a
placeholder around A.D. 350 and used it in their elaborate calendar systems. Despite being
highly skilled mathematicians, the Mayans never used zero in equations. Kaplan describes the
Mayan invention of zero as the most striking example of zero being devised wholly from scratch.

India: Where zero became a number
Some scholars assert that the Babylonian concept wove its way down to India, but others give
the Indians credit for developing zero independently.
The concept of zero first appeared in India around A.D. 458. Mathematical equations were
spelled out or spoken in poetry or chants rather than symbols. Different words symbolized zero,
or nothing, such as "void," "sky" or "space." In 628, a Hindu astronomer and mathematician
named Brahmagupta developed a symbol for zero: a dot underneath numbers. He also
developed mathematical operations using zero, wrote rules for reaching zero through addition
and subtraction, and described the results of using zero in equations. This was the first time in the world
that zero was recognized as a number of its own, as both an idea and a symbol.

From the Middle East to Wall Street
Over the next few centuries, the concept of zero caught on in China and the Middle East.
According to Nils-Bertil Wallin of YaleGlobal, by A.D. 773, zero reached Baghdad where it
became part of the Arabic number system, which is based upon the Indian system.
A Persian mathematician, Mohammed ibn-Musa al-Khowarizmi, suggested that a little circle
should be used in calculations if no number appeared in the tens place. The Arabs called this
circle "sifr," or "empty." Zero was crucial to al-Khowarizmi, who used it to inventalgebra in the
ninth century. Al-Khowarizmi also developed quick methods for multiplying and dividing
numbers, which are known as algorithms, a corruption of his name.
Zero found its way to Europe through the Moorish conquest of Spain and was further developed
by Italian mathematician Fibonacci, who used it to do equations without an abacus, then the most
prevalent tool for doing arithmetic. This development was highly popular among merchants, who
used Fibonacci's equations involving zero to balance their books.
Wallin points out that the Italian government was suspicious of Arabic numbers and outlawed
the use of zero. Merchants continued to use it illegally and secretively, and the Arabic word for
zero, "sifr," brought about the word cipher, which not only means a numeric character, but also
came to mean "code."
By the 1600s, zero was used fairly widely throughout Europe. It was fundamental in René
Descartes' Cartesian coordinate system and in Sir Isaac Newton's and Gottfried Wilhelm
Leibniz's developments of calculus. Calculus paved the way for physics, engineering,
computers, and much of financial and economic theory.
