
In the development of numbers, algebraic numbers come in generality between the rational numbers and the real numbers. A rational number is one that can be expressed in the form p/q, where p and q are integers and q is nonzero. The Greeks discovered the alarming fact (to them) that not all numbers are rational, through a classic use of the technique of proof by contradiction. Suppose that there were a rational number whose square was 2. It can be written in its lowest terms as p/q (this means that p and q have no common factors). So (p/q)² = p²/q² = 2. Therefore, p² = 2 × q², so that 2 divides p² and therefore p (this is because 2 is a prime number). So we write p as 2 × r, and, rewriting the original equation, (2 × r)² = 4 × r² = 2 × q². So, cancelling by 2, we see that 2 × r² = q², which means that 2 also divides q. So 2 divides both p and q, contradicting that p/q was in its lowest terms. So the original assumption that the square root of 2 is rational must be false.
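The contradiction argument above can be paired with a direct computational check. The following Python sketch (my own illustration, not part of the original entry, and not itself a proof) confirms that no fraction p/q with denominator up to a chosen bound squares exactly to 2; the function name and the bound are hypothetical choices.

```python
from fractions import Fraction

def rational_sqrt2(max_q):
    """Search for a fraction p/q with q <= max_q whose square is exactly 2.

    The proof by contradiction shows no such fraction exists for any q;
    this brute-force check merely illustrates that for small denominators.
    """
    for q in range(1, max_q + 1):
        # sqrt(2) lies between 1 and 2, so only p with q < p < 2q can qualify
        for p in range(q + 1, 2 * q):
            if Fraction(p, q) ** 2 == 2:
                return Fraction(p, q)
    return None

print(rational_sqrt2(500))  # None: no exact rational square root of 2 found
```

Using `Fraction` keeps the arithmetic exact, so a near-miss in floating point can never be mistaken for an exact hit.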
This fact was regarded by the Greeks as highly discouraging, because it went against all their ideas of what numbers should be like. They virtually abandoned the whole subject. It was not until the 16th century that Western mathematicians began to realize that the rationals were not the whole story. The next step is to construct the algebraic numbers. These are the numbers which are roots of polynomial equations whose coefficients are integers; for example, the square root of 2 is a root of the equation x² − 2 = 0, that is, ax² + c = 0, where a = 1 and c = −2 are the coefficients. However, not all such equations have real roots; for example, the equation x² + 1 = 0 does not, for the simple reason that x² is never negative and so x² + 1 must always be greater than 0. The crucial property a polynomial can have is that its sign changes: it then has a real root between a point where it takes a negative value and a point where it takes a positive value. (For example, x² − 2 has value −1 when x = 1 and value 2 when x = 2, so it has a root between 1 and 2.)
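The sign-change principle in this paragraph also gives a practical way to home in on the root: repeatedly halve the interval, keeping the half on which the sign still changes. A minimal Python sketch (my own illustration; the function name and step count are hypothetical choices):

```python
def bisect_root(f, lo, hi, steps=60):
    """Approximate a root of f between lo and hi by repeated halving,
    assuming f(lo) and f(hi) have opposite signs."""
    assert f(lo) * f(hi) < 0, "sign must change on the interval"
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid   # the sign change lies in [lo, mid]
        else:
            lo = mid   # the sign change lies in [mid, hi]
    return (lo + hi) / 2

# x**2 - 2 is negative at x = 1 and positive at x = 2,
# so a root lies between them
print(bisect_root(lambda x: x * x - 2, 1.0, 2.0))  # approx. 1.41421356...
```

Each halving doubles the precision, so 60 steps pin the root down to the limits of floating-point arithmetic.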
Even this is not the end of the story. This became apparent when Joseph Liouville (1809–1882) discovered a criterion for when a number was algebraic (based on how quickly it could be approximated by a particular series of rational numbers), and, using this criterion, constructed the first known transcendental (nonalgebraic) number in 1844. It was nearly 30 years before anyone showed that any useful number (that is, one not specifically constructed for the purpose) was transcendental: Charles Hermite (1822–1901) showed that e was transcendental in 1873. Almost immediately afterwards, Georg Cantor (1845–1918), using his new set theory, showed that there were vastly more transcendental numbers than algebraic numbers: the algebraic numbers are countable (that is, of the smallest infinite size) but the real numbers are not. Transcendence remains a very difficult property to prove; for many commonly used numbers in mathematics, it is not known whether they are algebraic or not. SMcL
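Liouville's 1844 number is the sum of 10^(−k!) for k = 1, 2, 3, …; its partial sums are rational approximations whose errors shrink far faster, relative to their denominators, than his criterion allows for any algebraic number. A short Python sketch of the construction (my own illustration of the partial sums, not of the proof itself):

```python
from fractions import Fraction
from math import factorial

def liouville_partial(n):
    """n-th partial sum of Liouville's constant: sum of 10**(-k!) for k = 1..n."""
    return sum(Fraction(1, 10 ** factorial(k)) for k in range(1, n + 1))

for n in range(1, 5):
    s = liouville_partial(n)
    # the error after n terms is below 2 * 10**(-(n+1)!), vastly smaller
    # than any fixed power of 1/denominator
    print(n, s.denominator)  # denominators 10, 100, 1000000, 10**24
```

The denominators 10^(n!) explode while the approximation error collapses even faster, which is exactly the behaviour Liouville proved impossible for algebraic numbers.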
