What are the five steps used in hypothesis testing? How do you decide on an alpha significance level?

Response 1

According to our reading, a hypothesis test is a procedure for testing a claim about a property of a population. The first step in hypothesis testing is to specify the null hypothesis. A null hypothesis is the hypothesis tested in significance testing; it is typically the hypothesis that a parameter is zero or that a difference between parameters is zero. The second step is to decide on the test criterion to be used. We then calculate the test statistic using the given values from the sample. After that, we find the critical value at the required level of significance and degrees of freedom. Lastly, we decide whether to reject the null hypothesis. If the calculated test statistic is less extreme than the critical value, we fail to reject the null hypothesis; otherwise, we reject it. Alpha is a threshold value used to judge whether a test statistic is statistically significant, and it is chosen by the researcher. Alpha represents an acceptable probability of a Type I error in a statistical test. Because alpha corresponds to a probability, it can range from 0 to 1.

Response 2

When testing a hypothesis, the five main steps include specifying the null hypothesis, specifying the alternative hypothesis, setting the significance level, calculating the test statistic and corresponding P-value, and drawing a conclusion. The null hypothesis is a statement of no effect, relationship, or difference between two or more groups or factors. In research studies, a researcher is usually interested in disproving the null hypothesis. The alternative hypothesis is the statement that there is an effect or difference. This is usually the hypothesis the researcher is interested in proving. The alternative hypothesis can be one-sided or two-sided. We often use two-sided tests even when our true hypothesis is one-sided because a two-sided test requires more evidence against the null hypothesis before we accept the alternative hypothesis. The significance level is generally set at 0.05. This means that there is a 5% chance of rejecting the null hypothesis when it is actually true. The smaller the significance level, the greater the burden of proof needed to reject the null hypothesis, or in other words, to support the alternative hypothesis. Hypothesis testing is not set up so that you can absolutely prove a null hypothesis. Therefore, when you do not find evidence against the null hypothesis, you fail to reject the null hypothesis. When you do find strong enough evidence against the null hypothesis, you reject the null hypothesis. Your conclusions also translate into a statement about your alternative hypothesis.
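
To make these five steps concrete, here is a minimal sketch in Python, assuming SciPy is available; the sample values and the hypothesized mean of 50 are purely hypothetical.

```python
# A minimal sketch of the five steps using a one-sample t-test (hypothetical data).
from scipy import stats

sample = [48.2, 51.0, 47.5, 49.8, 50.3, 46.9, 48.8, 49.1]

# Step 1: null hypothesis H0: mu = 50
# Step 2: alternative hypothesis H1: mu != 50 (two-sided)
# Step 3: significance level
alpha = 0.05

# Step 4: test statistic and corresponding P-value
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

# Step 5: draw a conclusion
if p_value < alpha:
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}: reject H0")
else:
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}: fail to reject H0")
```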

Response 3

The first step in hypothesis testing is coming up with a null hypothesis and an alternative hypothesis. The null hypothesis is a statement made about the value of a population parameter, and the alternative is the statement that is accepted if the null hypothesis is rejected. The second step is to determine which test statistic should be used. Depending on the conditions, you will need to use the z statistic or the t statistic. The third step is to state the decision rules. In this step, the conditions are stated under which the null hypothesis will be rejected or not rejected. For the fourth step, the test statistic is computed and compared to the critical value in order to make the decision. The final step is to come up with a conclusion based on the data found. An alpha significance level is a number between 0 and 1. Though the most common values are 0.1, 0.05, and 0.01, it can be any value in the range between 0 and 1.

Response 4

Hypothesis testing is a procedure, based on sample evidence and probability theory, used to determine whether a hypothesis is a reasonable statement that should not be rejected, or an unreasonable one that should be rejected. The first step is to state the null hypothesis and the alternate hypothesis. The null hypothesis is a statement about the value of a population parameter; the alternate hypothesis is the statement that is accepted if the evidence shows the null hypothesis to be false. Second, select the appropriate test statistic and level of significance: when testing a hypothesis about a proportion, we use the z-statistic (z-test), and when testing a hypothesis about a mean, we use either the z-statistic or the t-statistic, depending on whether the population standard deviation is known and on the sample size. Third, state the decision rules, which give the conditions under which the null hypothesis will be rejected or not rejected. Fourth, compute the appropriate test statistic and make the decision. Fifth, interpret the decision. Alpha levels are used in hypothesis tests. Usually these tests are run with an alpha level of .05 (5%), but other commonly used levels are .01 and .10. The significance level α is the probability of making the wrong decision (rejecting the null hypothesis) when the null hypothesis is true.
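
Since the exact formulas and conditions this response refers to did not carry over into the post, here is a hedged sketch of the usual test statistics: the z-statistic for a proportion, the z-statistic for a mean when the population standard deviation is known, and the t-statistic when it is not. The function names and example values are hypothetical.

```python
# Hedged sketch of the standard test-statistic formulas (values are hypothetical).
import math

# Proportion: z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
def z_for_proportion(p_hat, p0, n):
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# Mean, population standard deviation sigma known: z = (x_bar - mu0) / (sigma / sqrt(n))
def z_for_mean(x_bar, mu0, sigma, n):
    return (x_bar - mu0) / (sigma / math.sqrt(n))

# Mean, sigma unknown (use the sample standard deviation s): t = (x_bar - mu0) / (s / sqrt(n))
def t_for_mean(x_bar, mu0, s, n):
    return (x_bar - mu0) / (s / math.sqrt(n))

print(z_for_proportion(0.73, 0.80, 100))   # about -1.75
```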

 

What are the five steps used in hypothesis testing? How do you decide on an alpha significance level?

Response 1

The five steps in hypothesis testing begin with distinguishing the null hypothesis (a statement about a population parameter) from the alternative hypothesis (a statement that the parameter differs from the value in the null hypothesis). Always consider the context of the data, the source of the data, and the sampling method. At this point, we can convert the sample statistic to a test statistic. We can then take the test statistic, compare it against the significance level (most commonly .05), and figure out the critical region, critical value, and P-value.

 

We can take these values and test for a left-tailed, right-tailed, or two-tailed case, and these tests lead us either to fail to reject or to reject the null hypothesis claim. Finally, any explanation of a hypothesis test should be plainly worded so that it provides a conclusion anyone can follow.

 

A significance level should be kept as simple as it can be. For instance, a smaller significance level can make the results more precise, but the right choice depends on what is being examined. If I have to weigh a significance level of 1 in 5 against one of 1 in 20, it makes sense to want to be the 1-in-5 lottery player who may actually win. And if I get a high P-value, it is unlikely I would want to continue with an idea that far from my design; I'll just pass.

Response 2

The five steps in Hypothesis Testing are as follows:

1) Identify the null hypothesis and alternative hypothesis from a given claim, and express both in symbolic form.

Consider the context, the source, and the sampling method; these will have a direct impact on the direction of the conclusion.

2) Calculate the value of the test statistic, given a claim and sample data.

The calculations required for a hypothesis test typically require converting a sample statistic to a test statistic.

3) Identify the critical value(s), given a significance level.

The critical value is any value that separates the critical region from the values of the test statistic that do not lead to the rejection of the null hypothesis.

4) Identify the P value, given the value of the test statistic.

The P (probability) value is the probability of getting the value of the test statistic that is at least as extreme as the one representing the sample data.

5) State the conclusion about a claim in simple and nontechnical terms.

Make your case by laying out an objective framework and “decision flow” of how you came to your conclusion.
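
As a rough illustration of steps 3 through 5 above, the following sketch compares the critical-value approach with the P-value approach for a two-tailed z-test. It assumes SciPy is available, and the test statistic value of 2.10 is hypothetical.

```python
# Minimal sketch of the critical-value and P-value approaches for a two-tailed z-test.
from scipy.stats import norm

alpha = 0.05
z = 2.10           # hypothetical test statistic

# Critical-value approach: reject H0 if |z| exceeds the critical value.
z_crit = norm.ppf(1 - alpha / 2)          # about 1.96
print("critical value:", round(z_crit, 3), "reject H0:", abs(z) > z_crit)

# P-value approach: reject H0 if the P-value is below alpha.
p_value = 2 * (1 - norm.cdf(abs(z)))      # two-tailed P-value
print("P-value:", round(p_value, 4), "reject H0:", p_value < alpha)
```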

Response 3

There are five steps used in hypothesis testing. The first is specifying the null hypothesis (H0); this is a simple statement of no effect, relationship, or difference between two or more groups or factors. In research studies, the researcher is usually trying to disprove the null hypothesis. Second, we have to specify the alternative hypothesis (H1), which to me is the opposite of the null hypothesis: the statement that there is a difference or effect in the results. Researchers are normally interested in proving the alternative hypothesis. The third step is setting the significance level (α). The significance level is normally set at 0.05, which means that there is a 5% chance of rejecting the null hypothesis when it is actually true; the smaller the significance level, the greater the burden of proof needed to reject the null hypothesis. Step four is to calculate the test statistic and the corresponding P-value. If you decide to run multiple tests to evaluate the hypothesis, you must then account for that when setting the significance level and/or interpreting the P-values; for example, if the data are evaluated with three analyses, we must adjust for three analyses. The final step is drawing a conclusion; in this step we must analyze all the data and include the descriptive statistics in the conclusion. Everything must align and the information must be congruent in order to reject or fail to reject the null hypothesis.

How do you think this value is decided upon? What are the most common alpha values used in statistics?

Response 1

The alpha value, as determined by the researcher(s), is the probability of making a wrong decision when the null hypothesis is true.  It is the probability (or percent) of making a Type I error.  An alpha value of .01 gives you a 1% chance of a Type I error, an alpha value of .05 gives you a 5% chance of making the error, and an alpha value of .1 gives you a 10% chance of making the error.  If you use a .01 alpha value, you are working at a 99% confidence level. The most common alpha value used is .05.  The smaller the alpha value, the smaller your chance of making a Type I error, but the larger your chance of making a Type II error.
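
One way to see that alpha really is the Type I error rate is a small simulation: draw many samples from a population where the null hypothesis is true and count how often a test at alpha = 0.05 rejects it. This sketch assumes NumPy and SciPy are available and uses hypothetical normal data.

```python
# Simulation (hypothetical normal data): when H0 is true, a test at alpha = 0.05
# should reject it about 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
rejections = 0
trials = 10_000

for _ in range(trials):
    sample = rng.normal(loc=50, scale=10, size=30)   # H0: mu = 50 is actually true
    _, p_value = stats.ttest_1samp(sample, popmean=50)
    if p_value < alpha:
        rejections += 1

print("Estimated Type I error rate:", rejections / trials)   # close to 0.05
```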

Response 2

The number alpha is the threshold value that we measure P-values against. It tells us how extreme observed results must be in order to reject the null hypothesis of a significance test. The value of alpha is associated with the confidence level of our test. The number represented by alpha is a probability, so it can take any value between 0 and 1. Although in theory any number between 0 and 1 can be used for alpha, in statistical practice this is not the case. Of all levels of significance, the values of 0.10, 0.05, and 0.01 are the ones most commonly used for alpha. As we will see, there can be reasons for using values of alpha other than the most commonly used numbers.

Response 3

A researcher may use the t-test when it is assumed that the distribution has “fatter tails”; more or less, the data do not quite suggest a normal distribution but will eventually get there. The z-test assumes a normal distribution. The most common alpha values used in statistics are 0.05 and 0.01. For a significance level of 0.05, expect to obtain sample means in the critical region 5% of the time when the null hypothesis is true.

Response 4

Early on in Chapter 8, we are given the various steps used in hypothesis testing.  The first step is to identify the null hypothesis and alternative hypothesis from a given claim.  The second step is to calculate the value of the test statistic, given a claim and sample size.  The third step is to identify the critical values, given a significance level. The fourth step is to identify the P-value, given a value of the test statistic.  The fifth and final step is to state the conclusion about a claim in simple and nontechnical terms.

 

When trying to decide on an alpha significance level, it depends in part on how worried you are about accepting a result as significant even though it’s not.  Generally, a .05 (5%) level is the norm.  The value tells you the probability that a result is due to chance.  At a .05 level, you are 95% confident that the result is a reflection of reality.  However, if you are making life-and-death decisions, you would want a smaller alpha significance level.  You would want to be as certain as possible that the result isn’t just due to chance.  An alpha level of .01 would be used in this case.

 

Why would you use a z-test rather than a t-test? Which do you think you will use more often? Explain why.

Response 1

A z-test is usually used when we have a large sample size, greater than 30. If the sample size is less than 30, we normally use the t-test. Also, a z-test is used when the population standard deviation is known; if the standard deviation is not known, then we use the t-test.

Response 2

There are two types of test statistics that can be used, depending on the hypothesis test: the z-statistic and the t-statistic. To determine which test to use, we need to look at the size of the sample. A t-test is used if the sample size is under 30, while a z-test is used if the sample size is larger than 30. I believe that the t-test is used more often since it has many methods that can meet almost any need. The z-test requires certain conditions to be reliable.

Response 3

A z-test is a statistical hypothesis test that follows a normal distribution, while a t-test follows a Student’s t-distribution.  When handling small samples, a t-test is appropriate.  If you are handling moderate to large samples, a z-test is used.  A z-test will often require certain conditions to be reliable, so it is less adaptive than a t-test.  Additionally, a t-test has many methods that will suit any need. Z-tests are preferred when standard deviations are known.  The z-test is also applied to compare sample and population means to determine whether there is a significant difference between them.

 

I believe that I would use a T-test more often since it is straightforward and easy to use.  It appears to be flexible and adaptable to a broad range of circumstances.

Response 4

You would use a t-test when you don’t know the true/population standard deviation. Why? Because the t-test helps you account for the extra variability that comes from estimating the standard deviation only from your sample data. The z-test is used when you know the true/population standard deviation, so it would be the more accurate test to use in that situation. I think I would use the t-test more often because it is more adaptable than the z-test, since the z-test will often require certain conditions to be reliable. Plus, the t-test has many methods that will suit any need.

Response 5

A t-test is a statistical hypothesis test. The t-test is very likely the most commonly used statistical data analysis procedure for hypothesis testing, since it is straightforward and easy to use. Also, it is flexible and adaptable to a broad range of circumstances. There are various t-tests; the two most commonly applied are the one-sample and two-sample t-tests. One-sample t-tests are used to compare a sample mean with a known population mean. Two-sample t-tests, on the other hand, are used to compare either independent samples or dependent (paired) samples. The z-test is also applied to compare sample and population means to determine whether there is a significant difference between them. Z-tests always use the normal distribution and are ideally applied when the standard deviation is known. Z-tests are applied only when certain conditions are met; otherwise, other statistical tests such as t-tests are applied instead.
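
A rough sketch of the z-versus-t choice for a test about a mean, assuming SciPy is available; the data, the hypothesized mean of 12.0, and the idea of switching on whether sigma is known are illustrative only.

```python
# Sketch: use a z-test if the population standard deviation is known, otherwise a t-test.
import math
from scipy.stats import norm, t

x = [12.1, 11.8, 12.6, 12.0, 11.5, 12.3, 12.4, 11.9]   # hypothetical sample
n = len(x)
x_bar = sum(x) / n
mu0 = 12.0                  # hypothesized population mean

sigma_known = None          # set to the population sigma if it is known

if sigma_known is not None:
    # z-test: population standard deviation known
    z = (x_bar - mu0) / (sigma_known / math.sqrt(n))
    p_value = 2 * (1 - norm.cdf(abs(z)))
else:
    # t-test: estimate the standard deviation from the sample (n - 1 denominator)
    s = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n - 1))
    t_stat = (x_bar - mu0) / (s / math.sqrt(n))
    p_value = 2 * (1 - t.cdf(abs(t_stat), df=n - 1))

print("two-tailed P-value:", round(p_value, 4))
```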

 

One concept that we will discuss this week that can be difficult to understand for statistics students is the difference between a type I and type II error. Discuss the differences between type I and type II errors. Provide examples of each to help us understand the differences.

Response 1

Type I errors are equivalent to false positives. Example: a drug being used to treat a disease. If we reject the null hypothesis in this situation, then our claim is that the drug does in fact have some effect on a disease. But if the null hypothesis is true, then in reality the drug does not combat the disease at all. The drug is falsely claimed to have a positive effect on a disease.

Type II errors are equivalent to false negatives. If we think back again to the example where we are testing a drug, what would a type II error look like? A type II error would happen if we concluded that the drug had no effect on a disease, but in reality it did.

Response 2

Type I errors are the rejection of a null hypothesis that is actually true. Type I errors are equivalent to false positives and can be controlled. The value of alpha, which is related to the level of significance that we selected, has a direct bearing on type I errors. Alpha is the maximum probability of making a type I error. For a 95% confidence level, the value of alpha is 0.05. This means that there is a 5% probability that we will reject a true null hypothesis. In the long run, one out of every twenty hypothesis tests that we perform at this level will result in a type I error. Type II errors occur when we do not reject a null hypothesis that is false. Type II errors are equivalent to false negatives.
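
To complement the idea that alpha controls the Type I error rate, the Type II error rate (beta) can also be estimated by simulation: generate data where the null hypothesis is actually false and count how often the test still fails to reject it. The population values below are hypothetical, and NumPy and SciPy are assumed.

```python
# Hypothetical simulation of a Type II error rate: H0: mu = 50 is false
# (the true mean is 53), and we count how often we still fail to reject it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
failures_to_reject = 0
trials = 10_000

for _ in range(trials):
    sample = rng.normal(loc=53, scale=10, size=30)    # true mean is 53, not 50
    _, p_value = stats.ttest_1samp(sample, popmean=50)
    if p_value >= alpha:
        failures_to_reject += 1

beta = failures_to_reject / trials
print("Estimated Type II error rate (beta):", beta)
print("Estimated power:", 1 - beta)
```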

Response 3

The difference between type I and type II errors is simple yet complicated. A type I error is when we reject the null hypothesis when it is actually true. One example, according to our textbook, is a hypothesis test of a claim that a method of gender selection increases the likelihood of a baby girl; the probability of a baby girl is represented by p > 0.5, and the hypotheses are H0: p = 0.5 and H1: p > 0.5. With this setup, the mistake of rejecting a true null hypothesis happens when we conclude that there is enough evidence to say gender selection is effective when in fact it has no effect (page 12 of this week’s reading). A type II error happens when we fail to reject the claim that gender selection has no effect when in fact gender selection does have an effect. This is very confusing to me, but I will learn as we go along this week.

Response 4

A Type 1 Error is the mistake of rejecting a null hypothesis when it is actually true.

α (alpha) = the probability of a Type 1 error when the null hypothesis is true. The null hypothesis is a statement that the value of a population parameter is equal to some claimed value.  This assumption is made in order to reach a conclusion, and then the hypothesis is either rejected or not rejected.

Example: A government body decides to ban E-Cigarettes on college campuses because there is a consensus among many that this new technology is harmful to peoples’ health.  This would be a false positive because they do not have the appropriate data at this time to support this conclusion.

A Type 2 Error is the mistake of failing to reject a null hypothesis when it is actually false.

β (beta) = the probability of a Type 2 error when the null hypothesis is false.

Example: A state in the Northeast rejects legislation to ban drinking and driving because they do not believe that there is a correlation between consumption of alcohol and impaired driving.  Obviously this is a far-fetched example, but it illustrates the point that there is concrete conclusive data that supports this claim.

Response 5

A Type I error is the mistake of rejecting the null hypothesis when it is actually true, and a Type II error is the mistake of failing to reject the null hypothesis when it is actually false.  These errors can be common with dietary supplements.  We are told that most people who try a product will lose up to 35% of unwanted pounds, but this omits information about what people weighed to start with and what other factors contributed to their losing weight.  There is a staggering number of weight loss supplements, but the question is whether the people taking them also work out and change their diet. They run the risk of losing weight and if this is not the desire, it could

Response 6

A type 1 error is the incorrect rejection of a true null hypothesis.  Usually a type 1 error leads one to conclude that a supposed effect exists when in fact it doesn’t.  An example of a type 1 error would be a fire alarm going off indicating a fire when in fact there is no fire.

 

A type 2 error is the failure to reject a false null hypothesis.  An example of a type 2 error would be a fire breaking out and the fire alarm does not ring.  In this example, we would rather have a type 1 error.  It would be less dangerous to have a false alarm than a malfunction in the alarm making it unable to warn people of a fire.

Response 7

A type I error occurs when we reject the null hypothesis even though it is actually true. This can occur when we take the given data for granted and believe it supports the alternative hypothesis without sufficient proof. A type II error, on the other hand, occurs when we fail to recognize that the null hypothesis is false and so fail to reject it. Using an alpha significance level that does not fit the particular hypothesis test can contribute to either mistake; for example, too large an alpha may falsely lead someone to reject the null hypothesis.

Response 8

A Type I error occurs when the null hypothesis gets rejected when in fact it was true. For instance, if we were testing the null hypothesis that a certain drug balanced brain chemistry in individuals experiencing depression, we would encounter a type I error if we said the drug did not work as intended when in reality it actually worked.

 

A type II error occurs when the null hypothesis is not rejected when it is actually false. Using the same example from above, if we claimed the drug worked in balancing brain chemistry when it actually did not, that claim would be false, and we would therefore be making a type II error.

One of the important concepts that you will have to understand with hypothesis testing is what type of test you have. There are three types of tests: the left-tailed test, the right-tailed test, and the two-tailed test. How do you distinguish which type of test you will be performing? Explain.

 

Response 1

According to “Elementary Statistics,” Ch. 8, the null hypothesis (denoted by H0) is a statement that the value of a population parameter (such as a proportion, mean, or standard deviation) is equal to some claimed value. (The term null is used to indicate no change, no effect, or no difference.) Here is a typical null hypothesis included in this chapter: H0: p = 0.5. We test the null hypothesis directly in the sense that we assume (or pretend) it is true and reach a conclusion to either reject it or fail to reject it. The alternative hypothesis (denoted by H1, Ha, or HA) is the statement that the parameter has a value that somehow differs from the null hypothesis. The symbolic form of the alternative hypothesis must use one of these symbols: <, >, ≠. Here are different examples of alternative hypotheses involving proportions: H1: p > 0.5, H1: p < 0.5, H1: p ≠ 0.5. By examining the alternative hypothesis, we can determine whether a test is two-tailed, left-tailed, or right-tailed. The tail will correspond to the critical region containing the values that would conflict significantly with the null hypothesis.

Response 2

The difference between these tests has to do with where the probability of rejecting the null hypothesis lies with relation to the area under the curve. With a left-tailed test, all the probability is on the left side of the curve. With a right-tailed test, the null hypothesis is only rejected if the test statistic falls into the right side of the curve. And with a two-tailed test, the probabilities are on both sides of the curve. You can tell the direction by looking at the alternative hypothesis. In a test using the normal or t distribution, the test is two-tailed if the alternative hypothesis says, “not equal to.” It is left-tailed if the alternative hypothesis says “less than,” and right-tailed if it says “greater than.”
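
A short sketch of how the P-value is computed for each kind of test when the test statistic follows a standard normal distribution; the value z = -1.6 is hypothetical and SciPy is assumed.

```python
# P-value for a left-tailed, right-tailed, and two-tailed test of a z statistic.
from scipy.stats import norm

z = -1.6   # hypothetical test statistic

p_left  = norm.cdf(z)                 # left-tailed:  H1 says "less than"
p_right = 1 - norm.cdf(z)             # right-tailed: H1 says "greater than"
p_two   = 2 * (1 - norm.cdf(abs(z)))  # two-tailed:   H1 says "not equal to"

print(round(p_left, 4), round(p_right, 4), round(p_two, 4))
```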

Response 3

For determining the Type of Test, you start with 3 basic choices: Left Tailed Test, Right Tailed or Two Tailed Test.

For the left-tailed test, the P value is the area to the left of the test statistic. If the P value is low, it must go!

Conversely, the right-tailed test is just the opposite:  the P value is the area to the right of the test statistic.

If the P is high, then the null (null hypothesis) will fly! These are pretty straightforward.  However, the two-tailed test has a P value of TWICE the area in the tail beyond the test statistic, whether to the LEFT or the RIGHT. Or as I like to call it, Double Trouble. Two tails.. get it?

Ok, not a good joke, but it illustrates the point that the origin of the P value and how it relates to the test statistic assists us in determining which test to perform.

Response 4

To know which type of test to use, we first have to identify what each test does. When you are using a significance level of .05, a one-tailed test allots all of the alpha to testing statistical significance in the one direction of interest, while a two-tailed test at the same significance level allots half of the alpha to testing significance in one direction and the other half to the other direction. When we use a one-tailed test, we have to consider various factors so that we do not use the test inappropriately. Say, for example, we created a new drug similar to one already in existence and we would like to show that the new drug is cheaper but no less effective than the existing one. When we test the drug, we are only interested in whether it is less effective, so a one-tailed test is appropriate. We would use a two-tailed test when we want to cover every possible scenario, for example when the new drug could plausibly be either more effective or less effective than the one already in existence, to ensure we are testing the drug’s effectiveness in both directions.

The CEO of a large electric utility claims that 80 percent of his 1,000,000 customers are very satisfied with the service they receive. To test this claim, the local newspaper surveyed 100 customers, using simple random sampling. Among the sampled customers, 73 percent say they are very satisfied. Based on these findings, can we reject the CEO’s hypothesis that 80% of the customers are very satisfied? Use a 0.05 level of significance.

Response 1

The first step is to state the null hypothesis and an alternative hypothesis.

Null Hypothesis: P = .8

Alternative Hypothesis: P ≠ .8

 

The significance level is .05

Using the sample data, I calculated the standard error and computed the z-score test statistic. The standard error = √(0.8 × 0.2 / 100) = 0.04, and z = (0.73 − 0.80) / 0.04 = −1.75.

The P-value is the probability that the z-score is less than -1.75 or greater than 1.75.

 

I used a normal distribution calculator to find P(z < -1.75) = .04, and P(z > 1.75) = .04

Thus, the P-value = .04 + .04 = .08

 

Since the P-value (.08) is greater than the significance level, we cannot reject the null hypothesis.  We cannot reject the CEO’s hypothesis that 80% of the customers are very satisfied.
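
As a quick check on the arithmetic in this response, here is a sketch of the same one-proportion z-test in Python (SciPy assumed); it reproduces z = -1.75 and a P-value of about 0.08.

```python
# One-proportion z-test for the CEO's claim: H0: p = 0.80, H1: p != 0.80.
import math
from scipy.stats import norm

p0, p_hat, n, alpha = 0.80, 0.73, 100, 0.05

standard_error = math.sqrt(p0 * (1 - p0) / n)       # 0.04
z = (p_hat - p0) / standard_error                   # -1.75
p_value = 2 * norm.cdf(-abs(z))                     # about 0.08 (two-tailed)

print(f"z = {z:.2f}, P-value = {p_value:.3f}")
if p_value < alpha:
    print("Reject H0: the 80% claim is not supported.")
else:
    print("Fail to reject H0: the data do not contradict the 80% claim.")
```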

P-values are used in statistics to help us to make decisions. What is a P-value? What does a P-value of .0000001 mean?

Response 1

A P-value is “the probability of getting a value of the test statistic that is at least as extreme as the one representing the sample data, assuming the null hypothesis is true” (Triola, 2010). It helps you determine the significance of your results.  Our text gives a good way to remember it: “If the P is low, the null must go.  If the P is high, the null must fly” (Triola, 2010).

 

In the example above, a P-value of .0000001 is very, very low, so the null must go, meaning we need to reject the null hypothesis.

Response 2

The P value is the probability of getting a value of the test statistic that is at least as extreme as the one representing the sample data, assuming that the null hypothesis is true. P-values can be found by finding the area beyond the test statistic. A P-value of .0000001 signifies that a test statistic this extreme would be extremely unlikely if the null hypothesis were true, so the null hypothesis must be rejected.