In the world of math, many strange results are possible when we change the rules. But there’s one rule that most of us have been warned not to break: don’t divide by zero. How can the simple combination of an everyday number and a basic operation cause such problems?
Normally, dividing by smaller and smaller numbers gives you bigger and bigger answers. Ten divided by two is five, by one is ten, by one-millionth is 10 million, and so on. So it seems like if you divide by numbers that keep shrinking all the way down to zero, the answer will grow to the largest thing possible. Then, isn’t the answer to 10 divided by zero actually infinity? That may sound plausible. But all we really know is that if we divide 10 by a number that tends towards zero, the answer tends towards infinity. And that’s not the same thing as saying that 10 divided by zero is equal to infinity.
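To put that distinction in symbols (a brief aside using limit notation, which the narration itself never writes out, and assuming the divisor shrinks toward zero through positive numbers):

$$\lim_{x \to 0^{+}} \frac{10}{x} = \infty, \qquad \text{whereas } \frac{10}{0} \text{ itself is left undefined.}$$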
Why not? Well, let’s take a closer look at what division really means. Ten divided by two could mean, “How many times must we add two together to make 10,” or, “two times what equals 10?” Dividing by a number is essentially the reverse of multiplying by it, in the following way: if we multiply any number by a given number x, we can ask if there’s a new number we can multiply by afterwards to get back to where we started. If there is, the new number is called the multiplicative inverse of x. For example, if you multiply three by two to get six, you can then multiply by one-half to get back to three. So the multiplicative inverse of two is one-half, and the multiplicative inverse of 10 is one-tenth. As you might notice, the product of any number and its multiplicative inverse is always one.
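In symbols, the examples just given, with the general pattern added as an aside (the restriction to nonzero x is an assumption made explicit here, not stated in the narration):

$$2 \times \tfrac{1}{2} = 1, \qquad 10 \times \tfrac{1}{10} = 1, \qquad \text{and in general } x \times \tfrac{1}{x} = 1 \text{ for any } x \neq 0.$$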
If we want to divide by zero, we need to find its multiplicative inverse, which should be one over zero. This would have to be such a number that multiplying it by zero would give one. But because anything multiplied by zero is still zero, such a number is impossible, so zero has no multiplicative inverse.
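One way to write that argument in a single line (the candidate inverse is named y here purely for illustration):

$$0 \times y = 0 \neq 1 \ \text{ for every number } y, \ \text{ so no } y \text{ can play the role of } \tfrac{1}{0}.$$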
Does that really settle things, though? After all, mathematicians have broken rules before. For example, for a long time, there was no such thing as taking the square root of negative numbers. But then mathematicians defined the square root of negative one as a new number called i, opening up a whole new mathematical world of complex numbers.
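For reference, that earlier rule-break amounts to introducing a number i defined by

$$i^{2} = -1,$$

with the complex numbers being those of the form $a + bi$ for real $a$ and $b$ (the $a + bi$ form is standard background, not something stated in the narration).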
So if they can do that, couldn’t we just make up a new rule, say, that the symbol infinity means one over zero, and see what happens? Let’s try it, imagining we don’t know anything about infinity already.