SGHA Research

Expanding knowledge through disseminating information.

Disclaimer: The views and opinions expressed on this page are strictly those of the page authors. The contents of this page have not been reviewed or approved by the Southwest Ghost Hunter's Association. Every effort has been made to maintain correct information at the time it was written. Some material may be dated and is archived within this section of our website. This article is copyright 2009 by Cody Polston, Bob Carter and SGHA. All rights reserved.

Article Reference Credits: Wikipedia and

SGHA Method 001


In the scientific method, a control experiment is one in which the variable being investigated or tested is held constant. This allows comparison with the experiment in which the variable is changed, to see whether the result differs.

Scientific controls are vital, since they can eliminate or minimize unintended influences such as researcher bias, environmental changes, and biological variation. Controlled experiments are used to investigate the effect of a variable on a particular system. In a controlled experiment, one set of samples has been (or is believed to be) modified, while the other set is expected to show either no change (negative control) or a definite change (positive control).

Positive controls confirm that the procedure is effective in observing the effect (thereby minimizing false negatives). Negative controls confirm that the procedure is not observing an unrelated effect (thereby minimizing false positives). A positive control is a procedure very similar to the actual experimental test, but known from previous experience to give a positive result; a negative control is known to give a negative result. The positive control confirms that the basic conditions of the experiment are able to produce a positive result, even if none of the actual experimental samples does. The negative control demonstrates the baseline result obtained when a test does not produce a measurable positive result; often the value of the negative control is treated as a "background" value to be subtracted from the test sample results, or used as the "100%" value against which the test sample results are weighed.
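
The two normalizations just described (background subtraction, and scaling against a "100%" value) amount to simple arithmetic; a minimal sketch with made-up meter readings:

```python
# Sketch of the two normalizations described above, using hypothetical
# readings in arbitrary units. The numbers are illustrative only.
background = 0.4            # negative-control ("background") reading
full_scale = 8.0            # positive-control ("100%") reading
samples = [2.6, 0.5, 4.4]   # test sample readings

# Background-subtracted values: negative control treated as baseline.
corrected = [s - background for s in samples]

# Readings expressed as a percentage of the positive-control value.
percent = [100 * s / full_scale for s in samples]
```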

This simple procedure, along with statistical hypothesis testing, is the basis of SGHA Method 001. The results from experiments at reportedly haunted places (Test Sites / Positive Controls) are compared against those from non-haunted locations (Control Sites / Negative Controls). In other words, we compare potentially paranormal environments against normal environments and trend the results. Generally speaking, for the null hypothesis to be accepted, there must be few or no positive results at the Control Site. Additionally, a blind standard is implemented to ensure that the ghost hunters are unaware of which sites are Control Sites and which are Test Sites. This, along with defined operational procedures, ensures that data is collected accurately and consistently at both kinds of location.
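
The blind standard described above can be sketched in code: investigators receive only a shuffled visit list, while the role of each site is held back. The site names and the coordinator/investigator split below are purely illustrative.

```python
import random

def blind_assignment(test_sites, control_sites, seed=None):
    """Return (shuffled visit list for investigators, site -> role key).

    Investigators see only the visit list; the State Coordinator keeps
    the key until data collection at every site is complete.
    """
    rng = random.Random(seed)
    key = {site: "Test" for site in test_sites}
    key.update({site: "Control" for site in control_sites})
    visit_order = list(key)
    rng.shuffle(visit_order)      # order reveals nothing about the roles
    return visit_order, key

visit_order, key = blind_assignment(
    ["Old Hotel", "Courthouse"],        # reportedly haunted (Test Sites)
    ["Warehouse", "Library"],           # non-haunted (Control Sites)
    seed=1,
)
```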

Testing the paranormal variables (The Positive Control)

Positive controls are determined using probability theory, the branch of mathematics concerned with the analysis of random phenomena. The central objects of probability theory are random variables, stochastic processes, and events: mathematical abstractions of non-deterministic events or measured quantities that may either be single occurrences or evolve over time in an apparently random fashion. Although an individual coin toss or the roll of a die is a random event, a long sequence of such events exhibits statistical patterns that can be studied and predicted.
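
The coin-toss pattern just described is easy to demonstrate: any single toss is unpredictable, yet the proportion of heads settles near 0.5 as the number of tosses grows (the law of large numbers).

```python
import random

# Simulated coin tosses: the proportion of heads converges toward 0.5
# as the sample grows. A fixed seed makes the run repeatable.
rng = random.Random(42)
proportions = {}
for n in (10, 1000, 100000):
    heads = sum(rng.random() < 0.5 for _ in range(n))
    proportions[n] = heads / n
print(proportions)
```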

Control Site (Negative Control)

Control Sites are selected by one of two means.

  1. A suspected haunted location that has been debunked with 100% certainty or
  2. A known non-haunted location where the reports of “paranormal activity” are created by the Charter’s State Coordinator.


The Procedure

All hypothesis tests are conducted the same way. The researcher states a hypothesis to be tested, formulates an analysis plan, analyzes sample data according to the plan, and accepts or rejects the null hypothesis, based on results of the analysis.

1. State the hypotheses. Every hypothesis test requires the analyst to state a null hypothesis and an alternative hypothesis. The hypotheses are stated in such a way that they are mutually exclusive: if one is true, the other must be false, and vice versa. For example, suppose we wanted to determine whether dowsing rods could detect ghostly activity. The null hypothesis (the no-effect hypothesis) would be that dowsing rods cannot detect ghostly activity, so positive responses occur only at the chance rate. The alternative hypothesis would be that the rate of positive responses differs from chance. Symbolically, these hypotheses would be expressed as

H0: p = 0.5
Ha: p ≠ 0.5

Suppose we tested the dowsing rods 50 times at each kind of site, obtaining 40 positive results at Control Sites and 10 positive results at Test Sites. Given this result, we would be inclined to reject the null hypothesis: the response rates are far from chance, and the excess of positives at the non-haunted Control Sites argues against the rods detecting anything paranormal.
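
The article does not spell out the calculation here, but one standard way to compare counts like these, assuming 50 trials at each kind of site, is a two-proportion z statistic; a minimal sketch:

```python
import math

# Two-proportion z statistic for comparing 40/50 positives at Control
# Sites against 10/50 positives at Test Sites.
def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(40, 50, 10, 50)
print(round(z, 2))   # prints 6.0
```

A z-score of 6 corresponds to a p-value far below any common significance level, so the two rates clearly differ.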

2. Formulate an analysis plan. The analysis plan describes how to use sample data to accept or reject the null hypothesis. It should specify the following elements.

    • Significance level. The amount of evidence required to accept that an event is unlikely to have arisen by chance is known as the significance level or critical p-value. The p-value is the probability with which the observed event would occur, if the null hypothesis were true. If the obtained p-value is smaller than the significance level, then the null hypothesis is rejected. Often, researchers choose significance levels equal to 0.01, 0.05, or 0.10; but any value between 0 and 1 can be used.
    • Test method. Typically, the test method involves a test statistic and a sampling distribution. Computed from sample data, the test statistic might be a mean score, proportion, difference between means, difference between proportions, z-score, t-score, chi-square, etc. Predominantly, we use a two-sample t-test. Given a test statistic and its sampling distribution, a researcher can assess probabilities associated with the test statistic. If the test statistic probability is less than the significance level, the null hypothesis is rejected.
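
As a rough illustration of the two-sample t-test mentioned above, here is a sketch of Welch's version (which does not assume equal variances); the sample readings are made up:

```python
import math

# Welch's two-sample t statistic: difference between sample means
# divided by the standard error of that difference.
def welch_t(sample1, sample2):
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)   # standard error of the difference
    return (m1 - m2) / se

# e.g. hypothetical EMF readings at a Test Site vs. a Control Site
t = welch_t([2.1, 2.4, 1.9, 2.6, 2.2], [1.8, 1.7, 2.0, 1.6, 1.9])
```

The resulting t-score is then compared against a t distribution with the appropriate degrees of freedom.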

3. Analyze sample data. Using sample data, perform computations called for in the analysis plan.

4. Test statistic. When the null hypothesis involves a mean or proportion, use either of the following equations to compute the test statistic.

Test statistic = (Statistic - Parameter) / (Standard deviation of statistic)
Test statistic = (Statistic - Parameter) / (Standard error of statistic)

Parameter is the value appearing in the null hypothesis, and Statistic is the point estimate of Parameter. As part of the analysis, you may need to compute the standard deviation or standard error of the statistic.
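
The generic formula above, applied to a sample mean tested against a hypothesized parameter, might look like this (the sample values are illustrative):

```python
import math

# Test statistic = (Statistic - Parameter) / (Standard error of statistic),
# where Statistic is the sample mean and Parameter is the value under H0.
def t_statistic(sample, param):
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    se = math.sqrt(var) / math.sqrt(n)   # standard error of the mean
    return (mean - param) / se

t = t_statistic([5, 7, 6, 9, 8], 5.0)
```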

  • P-value. The P-value is the probability of observing a sample statistic as extreme as the test statistic, assuming the null hypothesis is true.
  • Find degrees of freedom. The degrees of freedom (DF) is the number of independent observations in a sample minus the number of population parameters that must be estimated from sample data.

For example, the exact shape of a t distribution is determined by its degrees of freedom. When the t distribution is used to compute a confidence interval for a single mean, one population parameter (the mean) is estimated from sample data, so the degrees of freedom equal the sample size minus one. For a two-sample t-test, the degrees of freedom are given by the Welch–Satterthwaite approximation:

DF = (s1^2/n1 + s2^2/n2)^2 / [ (s1^2/n1)^2 / (n1 - 1) + (s2^2/n2)^2 / (n2 - 1) ]

where s1 and s2 are the sample standard deviations and n1 and n2 are the sample sizes. If DF does not compute to an integer, round it off to the nearest whole number. Some texts suggest that the degrees of freedom can be approximated by the smaller of n1 - 1 and n2 - 1, but the formula above gives better results.
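
For the two-sample case, the Welch–Satterthwaite approximation of the degrees of freedom can be written out as a short function (the inputs below are illustrative):

```python
# Welch–Satterthwaite degrees of freedom for a two-sample t-test.
def welch_df(v1, n1, v2, n2):
    """v1, v2: sample variances; n1, n2: sample sizes."""
    num = (v1 / n1 + v2 / n2) ** 2
    den = (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
    return num / den

df = welch_df(0.073, 5, 0.025, 5)   # hypothetical variances, rounds to 6
```

When the variances and sample sizes are equal, the formula reduces to the pooled value: welch_df(1.0, 5, 1.0, 5) gives 8.0, i.e. (n1 - 1) + (n2 - 1).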

5. Interpret the results. If the sample findings are unlikely, given the null hypothesis, the researcher rejects the null hypothesis. Typically, this involves comparing the P-value to the significance level, and rejecting the null hypothesis when the P-value is less than the significance level.

6. Compute P-value. The P-value is the probability of observing a sample statistic as extreme as the test statistic. Since the test statistic is a t-score, use a t Distribution Calculator to assess the probability associated with the t-score, having the degrees of freedom computed above.
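
A t table or calculator gives exact values; as a rough sketch, for moderately large degrees of freedom the t distribution is close to the standard normal, so the two-sided p-value can be approximated from the normal distribution:

```python
import math

# Two-sided p-value under the normal approximation to the t distribution
# (reasonable for moderately large degrees of freedom).
def p_value_two_sided(t_score):
    # P(|Z| > |t|) for a standard normal Z, via the complementary
    # error function: erfc(|t| / sqrt(2)).
    return math.erfc(abs(t_score) / math.sqrt(2))

p = p_value_two_sided(2.0)   # roughly 0.0455 under this approximation
```

A p of about 0.0455 would fall below a 0.05 significance level, so the null hypothesis would be rejected at that level.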

7. Evaluate null hypothesis. The evaluation involves comparing the P-value to the significance level, and rejecting the null hypothesis when the P-value is less than the significance level.

Back to SGHA articles