Confidence intervals are central to estimation and hypothesis testing, but understanding what they are and how to calculate them can be difficult. Much of the difficulty lies in distinguishing population statistics from sample statistics. A very straightforward summary is provided by some course notes from a statistics course at Yale University in the USA. The original notes may be found at
http://www.stat.yale.edu/Courses/1997-98/101/confint.htm

The summary from those notes is reproduced below:
In this coverage of confidence intervals, there are two different means and three different standard deviations. The means are that of the population, $\mu=\sum x/N$, and that of the sample, $\overline{x}$ or $M=\sum x/N$. The definitions look identical; the difference is that $N$ counts the whole population in the first case and only the sample in the second. The standard deviations, by contrast, have different definitions, and one of them has a different meaning altogether.
The first two are the standard deviation of the population
$\sigma=\sqrt{\dfrac{\sum\left(x-\mu\right)^{2}}{N}}$
and the standard deviation of the sample
$s=\sqrt{\dfrac{\sum\left(x-M\right)^{2}}{N-1}}$.
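The effect of the two divisors can be seen with a quick numerical sketch. The data below are hypothetical values chosen purely for illustration; the same list is treated first as a whole population (divisor $N$) and then as a sample (divisor $N-1$):

```python
import math

# Hypothetical data (illustrative values only, not from the notes)
x = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.7]
N = len(x)
mean = sum(x) / N  # mu if x is the population, M if x is a sample

# Treating x as the whole population: divide by N
sigma = math.sqrt(sum((xi - mean) ** 2 for xi in x) / N)

# Treating x as a sample: divide by N - 1, giving s, the estimate of sigma
s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (N - 1))

print(f"mean = {mean:.3f}, sigma = {sigma:.3f}, s = {s:.3f}")
```

Because $N-1 < N$, the sample value $s$ always comes out slightly larger than the population formula would give for the same data; that inflation is what corrects for $M$ itself being estimated from the sample.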
The standard deviation of the sample is a direct estimate of $\sigma$; the $N-1$ divisor compensates for the fact that $M$ is itself computed from the same data. The last one is the standard deviation of the sample mean, $\sigma_{M}=\sigma/\sqrt{N}$, also known as the standard error of the mean. It is essentially a measure of how much sample means will vary from the population mean due to the fact that each sample is only a part of the whole population. Clearly $\sigma_{M}$ shrinks toward zero as the sample size $N$ grows. Given that $s$ is an estimate of $\sigma$, we have $\sigma_{M}\approx s/\sqrt{N}$. That is why the 95% confidence interval is $M\pm1.96\,\sigma/\sqrt{N}$ (equivalently $M\pm1.96\,\sigma_{M}$) if $\sigma$ is known, and $M\pm t_{c}\,s/\sqrt{N}$ (equivalently $M\pm t_{c}\,s_{M}$, with $s_{M}=s/\sqrt{N}$) if $\sigma$ is unknown and being estimated by $s$, where $t_{c}$ is taken from the $t$ tables with $N-1$ degrees of freedom.
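The two interval formulas can be sketched numerically. The sample data and the "known" $\sigma=0.8$ below are hypothetical, chosen only to show the mechanics; the $t$ critical value is the standard tabulated 95% value for $N-1=7$ degrees of freedom:

```python
import math
from statistics import NormalDist

# Same hypothetical sample as before (illustrative values only)
x = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.7]
N = len(x)
M = sum(x) / N
s = math.sqrt(sum((xi - M) ** 2 for xi in x) / (N - 1))

# Case 1: sigma known (assume sigma = 0.8 for illustration) -> z interval
sigma = 0.8
z = NormalDist().inv_cdf(0.975)        # two-sided 95% critical value, approx 1.96
half_z = z * sigma / math.sqrt(N)
print(f"z interval: {M - half_z:.3f} to {M + half_z:.3f}")

# Case 2: sigma unknown -> t interval, critical value from the t tables
t_c = 2.365                            # 95% two-sided, 7 degrees of freedom
half_t = t_c * s / math.sqrt(N)
print(f"t interval: {M - half_t:.3f} to {M + half_t:.3f}")
```

Note that the $t$ interval is wider than a $z$ interval built from the same spread would be, because $t_{c}>1.96$ at small $N$; the extra width pays for the uncertainty in using $s$ in place of $\sigma$.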