The parameters of a distribution are those quantities that you need to specify when describing the distribution. For example, a normal distribution has parameters μ and σ², and a Poisson distribution has parameter λ.
If we know that some data come from a certain distribution, but the parameter is unknown, we might try to predict what the parameter is. Hypothesis testing is about working out how likely it is that we would see data like ours if our prediction were true.
The null hypothesis, denoted by H0, is a prediction about a parameter (so if we are dealing with a normal distribution, we might predict the mean or the variance of the distribution).
We also have an alternative hypothesis, denoted by H1. We then perform a test to decide whether or not we should reject the null hypothesis in favour of the alternative.
Suppose we are given a value and told that it comes from a certain distribution, but we don't know what the parameter of that distribution is.
Suppose we make a null hypothesis about the parameter. We test how likely it is that the value we were given could have come from the distribution with this predicted parameter.
For example, suppose we are told that the value of 3 has come from a Poisson distribution. We might want to test the null hypothesis that the parameter (which is the mean) of the Poisson distribution is 9. So we work out how likely it is that the value of 3 could have come from a Poisson distribution with parameter 9. If it's not very likely, we reject the null hypothesis in favour of the alternative.
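As a sketch of this calculation, we can compute the probability of seeing a value as small as 3 from a Poisson distribution with parameter 9 (looking at the lower tail is our choice here, since 3 is well below the hypothesised mean of 9):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam): e^(-lam) * lam^k / k!"""
    return math.exp(-lam) * lam**k / math.factorial(k)

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by summing the pmf."""
    return sum(poisson_pmf(i, lam) for i in range(k + 1))

# How likely is a value of 3 or smaller under the null hypothesis lam = 9?
p = poisson_cdf(3, 9)
print(f"P(X <= 3 | lam = 9) = {p:.4f}")  # roughly 0.0212
```

A probability of about 0.02 suggests that a value of 3 would indeed be quite unusual if the parameter really were 9.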
But what exactly is "not very likely"?
We choose a region known as the critical region. If the result of our test lies in this region, then we reject the null hypothesis in favour of the alternative.