To do a z-test you need to know that the data is normally distributed and you need to know the population standard deviation. With a t-test, you just have some data that you're assuming is normally distributed. If you have much more than 30 data points, a t-test is almost the same as a z-test, but if you have fewer, you also need the "degrees of freedom," which (for a single set of numbers) is the number of data points minus one.
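A minimal sketch of the distinction, using made-up numbers (the data, the hypothesized mean `mu0`, and the "known" `sigma` are all hypothetical): the z statistic uses a known population standard deviation, while the t statistic substitutes the sample standard deviation and picks up degrees of freedom n - 1.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical data and parameters, just to show the two formulas side by side.
data = [10.2, 9.8, 10.5, 10.1, 9.9]
mu0 = 10.0     # hypothesized population mean (assumed)
sigma = 0.3    # population std dev -- a z-test requires you to know this

n = len(data)
xbar = mean(data)

z = (xbar - mu0) / (sigma / sqrt(n))        # z-test: known sigma
t = (xbar - mu0) / (stdev(data) / sqrt(n))  # t-test: sample std dev, df = n - 1

print(n - 1, round(z, 3), round(t, 3))  # 4 0.745 0.816
```

The two statistics differ only in which standard deviation goes in the denominator, which is why they converge once the sample gets large and the sample standard deviation pins down the population one.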
When doing a t-test or a z-test, you're usually trying to find out whether some value is significantly different from the others; unfortunately, there's no cut-and-dried way of deciding which differences are significant, so the last item below is a judgment call. You need three pieces of information:
1) The sample standard deviation
2) The number of data points (or degrees of freedom)
3) The confidence level (the "rejected" tail region beyond the cutoff has area under the curve equal to one minus this)
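One way to put the three ingredients together, reading these notes as a check on whether a single suspect value sits too far from a small sample (the data and the suspect value are hypothetical; the cutoffs are the TI-84 invT values quoted in the notes):

```python
from statistics import mean, stdev

# Hypothetical three-sample data set (so d.f. = 3 - 1 = 2) plus one suspect value.
data = [4.1, 5.0, 4.6]
suspect = 6.0

s = stdev(data)                       # 1) sample standard deviation
df = len(data) - 1                    # 2) degrees of freedom
score = (suspect - mean(data)) / s    # distance in sample std devs from the mean

# 3) cutoffs from the TI-84's invT for d.f. = 2 (values quoted in these notes)
strict_cutoff = 1.8856    # invT(.9, 2)
relaxed_cutoff = 9.925    # invT(.995, 2)

print(df, round(score, 3))       # 2 3.179
print(score > strict_cutoff)     # True:  rejected under the strict .9 setting
print(score > relaxed_cutoff)    # False: kept under the relaxed .995 setting
```

The same deviation can count as significant or not depending entirely on the confidence level you chose, which is the judgment call mentioned above.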
This confidence level is the parameter that determines when other data counts as too high or too low. If I choose .9, I'm going to be very strict, rejecting data as significantly different quite readily.
Plugging invT(.9,2) into my TI-84 gives 1.8856.
If I choose .995, then I'm going to be fairly relaxed and won't see so many differences.
invT(.995,2) is 9.925, so a value has to be almost 10 sample standard deviations above the mean before it gets rejected as significantly different.
These examples use d.f. = 2, which means we're working with only three samples.
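The TI-84 values above can be reproduced without a calculator. This sketch inverts the Student-t CDF by bisection using only the standard library; the search range, step count, and iteration count are my own choices, not anything from the notes.

```python
from math import gamma, sqrt, pi

def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, steps=10_000):
    """CDF for x >= 0, via symmetry plus trapezoidal integration on [0, x]."""
    h = x / steps
    total = 0.5 * (t_pdf(0.0, df) + t_pdf(x, df))
    for i in range(1, steps):
        total += t_pdf(i * h, df)
    return 0.5 + total * h

def inv_t(p, df):
    """Inverse CDF by bisection, for p > 0.5, like the TI-84's invT(p, df)."""
    lo, hi = 0.0, 100.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(inv_t(0.9, 2), 4))    # 1.8856
print(round(inv_t(0.995, 2), 3))  # 9.925
```

For d.f. = 2 there is even a closed form, F(t) = 1/2 + t / (2 * sqrt(2 + t^2)), which confirms both numbers; the numeric version above works for any degrees of freedom.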
Statistics class notes
t-table and z-table from the inside front cover of
Mathematical Statistics with Applications (Wackerly, Mendenhall, Scheaffer)