So if a 95% confidence interval doesn't give the probability that you'll find the true mean inside it, but rather the percentage of the time the same method of data collection/calculation would yield an interval that contains the true mean, what *is* the probability that the true mean is in the confidence interval?

Like, if there's a 95% confidence interval of (10, 20), how is it that the probability of the true mean being within its confines isn't also 95%? Because logically that's how it would seem to me, but I know it isn't true.

The actual probability that the true mean lies within a given confidence interval is either 0 or 1: the true mean either is or isn't inside that particular interval. The 95% confidence level only describes the percentage of intervals that would contain the true mean if you were to repeat the experiment or study many times.

To put it another way, the interval itself doesn't have a probability associated with it; rather, the method used to create the interval has a certain confidence level associated with it. That confidence level is a property of the statistical procedure, not of the specific interval produced in any one study.
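If it helps to see the long-run frequency interpretation in action, here's a minimal simulation sketch in Python (all the numbers here, like the true mean, sigma, and sample size, are made-up values for illustration). It repeats an "experiment" many times, builds a normal-based 95% interval from each sample, and counts how often the true mean lands inside:

```python
import numpy as np

rng = np.random.default_rng(42)

true_mean = 15.0   # the fixed parameter (unknown in a real study)
sigma = 5.0        # known population standard deviation, for simplicity
n = 30             # sample size per experiment
trials = 100_000   # number of repeated experiments
z = 1.96           # z-value for a normal-based 95% interval

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, size=n)
    half_width = z * sigma / np.sqrt(n)
    lo, hi = sample.mean() - half_width, sample.mean() + half_width
    # For any single interval, this check is simply True or False (1 or 0).
    covered += (lo <= true_mean <= hi)

print(f"Coverage over {trials} intervals: {covered / trials:.3f}")  # ~0.95
```

Notice that the check for any single interval comes out as a plain True or False; the 0.95 only emerges as a long-run frequency across the repeated intervals.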

Ahhhh, I get it. So in order to get a probability at all you need multiple intervals; an individual interval can't have a probability on its own, since it's essentially only one outcome.