In practice, we estimate the conditional probability P(A|B) = n/N, where n is the number of times A and B occur together and N is the number of times B occurs in the training data.
But what if n is very small, perhaps even 0? Or very large, perhaps even equal to N? An estimate of exactly 0 (or 1) is usually too extreme, so the probabilities should be smoothed.
To avoid this, we fix two numbers p and m beforehand:
A nonzero prior estimate p for P(A|B);
A number m that says how confident we are in our prior estimate p, measured as a number of samples.
Then P(A|B) is estimated by (n + m*p) / (N + m).
Just think of this as adding m "virtual" samples, distributed according to p, before the counting starts.
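As a minimal sketch of the formula (the function name m_estimate and the counts below are hypothetical, chosen just for illustration):

```python
def m_estimate(n, N, p, m):
    """m-estimate of P(A|B): blend the observed fraction n/N with a prior p,
    weighted as if m extra samples distributed according to p had been seen."""
    return (n + m * p) / (N + m)

# Hypothetical counts: B occurs 10 times in the training data, A never with B.
n, N = 0, 10
print(n / N)                          # raw estimate: 0.0 -- too extreme
print(m_estimate(n, N, p=0.5, m=2))   # (0 + 2*0.5) / (10 + 2) ~= 0.083
```

With n = 0 the raw estimate collapses to 0, but the m-estimate stays strictly between 0 and 1 and moves toward n/N as N grows.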
If we have no knowledge of p, assume the attribute is uniformly distributed over all its possible values; taking m to be the number of possible values then gives p = 1/m.
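In that special case the m-estimate reduces to Laplace (add-one) smoothing, since m*p = 1. A small check (k = 4 possible values and the counts below are arbitrary, hypothetical choices):

```python
k = 4                            # hypothetical number of possible attribute values
n, N = 0, 10                     # hypothetical counts, as above
m, p = k, 1.0 / k                # uniform prior, one virtual sample per value
print((n + m * p) / (N + m))     # m-estimate: (0 + 1) / (10 + 4) ~= 0.071
print((n + 1) / (N + k))         # Laplace (add-one) smoothing: the same value
```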