
Polynomial time

In computational complexity theory, polynomial time refers to the computation time of a
problem where the time, m(n), is no greater than a polynomial function of the problem size, n.
Written mathematically, m(n) = O(n^k) where k is a constant (which may depend on the problem).
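As an illustrative sketch (not from the original text), a routine with two nested loops over an input of size n takes m(n) = O(n^2) steps, i.e. polynomial time with k = 2:

```python
def count_inversions(a):
    """Count pairs (i, j) with i < j and a[i] > a[j].
    Two nested loops over n elements give m(n) = O(n^2),
    a polynomial bound with constant exponent k = 2."""
    n = len(a)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if a[i] > a[j]:
                count += 1
    return count

print(count_inversions([3, 1, 2]))  # pairs (3,1) and (3,2) -> 2
```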

Mathematicians sometimes use the notion of "polynomial time on the length of the input" as a
definition of a "fast" computation, as opposed to "super-polynomial time", which is anything
slower than that. Exponential time is one example of super-polynomial time.

The complexity class of decision problems that can be solved on a deterministic sequential
machine in polynomial time is known as P. The class of decision problems that can be verified in
polynomial time is known as NP. Equivalently, NP is the class of decision problems that can be
solved in polynomial time on a non-deterministic Turing machine (NP stands for
Nondeterministic Polynomial time).
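To make the "verified in polynomial time" idea concrete, here is a hypothetical sketch (the function name and problem choice are illustrative, not from the original text) using subset sum, a classic NP problem: finding a qualifying subset may take exponential time in the worst case, but checking a proposed certificate is fast.

```python
def verify_subset_sum(numbers, target, certificate):
    """Check a proposed solution (a list of distinct indices into
    `numbers`) in time polynomial in the input size: one pass over
    the certificate and one summation."""
    if len(set(certificate)) != len(certificate):
        return False  # indices in the certificate must be distinct
    return sum(numbers[i] for i in certificate) == target

# 4 + 5 = 9, so the certificate [2, 4] verifies in polynomial time,
# even though *finding* it may require searching many subsets.
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True
```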

Assorted References

• computational problems (in NP-complete problem (computer science))

So-called easy, or tractable, problems can be solved by computer algorithms that run in
polynomial time; i.e., for a problem of size n, the time or number of steps needed to find
the solution is a polynomial function of n. Algorithms for solving hard, or intractable,
problems, on the other hand, require times that are exponential functions of the...

• linear programming (in linear programming (mathematics))

...of necessary operations expanded exponentially and exceeded the computational
capacity of even the most powerful computers. Then, in 1979, the Russian mathematician
Leonid Khachian discovered a polynomial-time algorithm—i.e., the number of
computational steps grows as a power of the number of variables, rather than
exponentially—thereby allowing the solution of hitherto...

In computational complexity, an algorithm is said to take linear time, or O(n) time, if the time it
requires is proportional to the size of the input, which is usually denoted n. Put another way, the
running time increases linearly with the size of the input. For example, a procedure that adds up
the elements of a list requires time proportional to the length of the list.
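The list-summing procedure mentioned above can be sketched in a few lines of Python, a minimal illustration (the function name is for this example only):

```python
def total(values):
    """Sum the elements of a list in linear time: one pass over
    the list, with constant work per element, so O(n) overall."""
    acc = 0
    for v in values:  # exactly n iterations for a list of length n
        acc += v
    return acc

print(total([1, 2, 3, 4]))  # 10
```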

This description is slightly inaccurate, since the running time can significantly deviate from a
precise proportionality, especially for small n. Technically, it is only necessary that for large
enough n, the algorithm takes more than a·n time and less than b·n time for some positive real
constants a and b. For more information, see the article on Big O notation.

Linear time is often viewed as a desirable attribute for an algorithm. Much research has been
invested into creating algorithms exhibiting (nearly) linear time or better. This research includes
both software and hardware methods. In the case of hardware, some algorithms which,
mathematically speaking, can never achieve linear time under the standard computation model are
now able to run in linear time. There are several hardware technologies which exploit parallelism
to provide this. An example is associative memory.

For a given sorting algorithm, it can be proven that there exists an ordering of the input on which
the algorithm will execute in linear time. In the general case, however, no comparison-based sorting
algorithm can perform better than O(n·lg n), where lg is the logarithm base 2. See also: Polynomial time
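The n·lg(n) lower bound applies only to sorts that work by comparing elements. As a sketch of how extra assumptions about the input can beat it (this example is illustrative, not from the original text), counting sort orders nonnegative integers bounded by some max_key in linear time by never comparing elements at all:

```python
def counting_sort(keys, max_key):
    """Sort nonnegative integers no larger than max_key in
    O(n + max_key) time -- linear when max_key = O(n). This
    sidesteps the n*lg(n) comparison-sort lower bound because
    it tallies values instead of comparing them."""
    counts = [0] * (max_key + 1)
    for k in keys:           # one pass to tally each value
        counts[k] += 1
    result = []
    for value, c in enumerate(counts):
        result.extend([value] * c)   # emit each value c times
    return result

print(counting_sort([4, 1, 3, 1, 0], 4))  # [0, 1, 1, 3, 4]
```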

Polynomial Time

An algorithm is said to be solvable in polynomial time if the number of steps required to
complete the algorithm for a given input is O(n^k) for some nonnegative integer k, where n is the
complexity (size) of the input. Polynomial-time algorithms are said to be "fast." Most familiar
mathematical operations such as addition, subtraction, multiplication, and division, as well as
computing square roots, powers, and logarithms, can be performed in polynomial time.
Computing the digits of most interesting mathematical constants, including π and e, can also be
done in polynomial time.