A variation is a time-ordered selection of k elements out of n.

Variation with putting back

I want to explain this in a computer-science way, although you could guess from the 1st chapter how it would look otherwise (and how looong it would be).

Let's say you have k bits to store an integer in. A bit draws its value from a set S with n := 2 elements, let's call them 0 and 1 (you could call them "foo" and "bar" if you prefer). However, it only copies the value and puts the original element back.

Set of possibilities (n elements)
01

Storage (k elements)
[ ][ ][ ][ ][ ]

Then the smallest integer M you just cannot represent anymore (without loss of precision) is:

M:=n^{k}

This is because you have a number of storage bins (bits) where you can put a copy of any element of your original set \{0,1\}. Since you still have the original, at the next position, you again have all the elements to choose from, but one bin less to go.

If you do this until all the bins are full, then the number of variations M (the number of different ways to put items into the bins) is:

M = |S|⋅|S|⋅...⋅|S|  (k factors)
M = |S|^k
M = n^k

For 3 bits (3 bins), the variations are: 000, 001, 010, 011, 100, 101, 110, 111. The count is 8, which is 2^3.
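The enumeration above can be reproduced with a short sketch (Python is an assumed choice here; the text names no language):

```python
from itertools import product

# Variations with putting back: every bin can hold a copy of any
# element of S, so there are |S|^k possibilities.
S = ["0", "1"]   # n = 2 elements
k = 3            # 3 bins (bits)

variations = ["".join(v) for v in product(S, repeat=k)]
print(variations)       # ['000', '001', '010', '011', '100', '101', '110', '111']
print(len(variations))  # 8, which is 2**3
```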

Variation without putting back

Now if you didn't copy the element of the set but instead just put it into the bin, then the number of possible elements would shrink after each step.

So similar to before:

M := |S|⋅(|S|-1)⋅...⋅(|S|-k+1)

But now it depends on how many bins k you have. You could have fewer bins than there are elements in the set, too.

If you want to use the factorial you already know, you have a problem: the factors in the factorial go down all the way to 1, but here we possibly stop before that.

So you use the factorial all the way down to 1 and then cancel out all the factors from the bottom that you didn't want.

n:=|S|
M = \frac{n!}{(n-k)!}
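A quick way to check this formula (again a Python sketch, with example values n = 5 and k = 3 chosen for illustration): count the variations without putting back directly and compare against n!/(n-k)!.

```python
from itertools import permutations
from math import factorial

# Variations without putting back: each element can be used only once,
# so with n = 5 elements and k = 3 bins we get 5*4*3 = 60.
S = ["a", "b", "c", "d", "e"]  # n = 5
n, k = len(S), 3

counted = len(list(permutations(S, k)))     # enumerate and count directly
formula = factorial(n) // factorial(n - k)  # n!/(n-k)!: the bottom factors cancel

print(counted, formula)  # 60 60
```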