PDF | Statistics for Research: With a Guide to SPSS (2nd ed.). George Argyrous, London. The package is particularly useful for students and researchers in psychology and sociology.
Eliminating impractical ideas may also be achieved through exploratory research. Again, literature searches, just like interviews, may be useful to eliminate impractical ideas. Another helpful aspect of exploratory research is the formulation of hypotheses.
A hypothesis is a claim made about a population, which can be tested by using sample results. Marketers frequently put forward such hypotheses because they help structure decision making processes. In Chap. Another use of exploratory research is to develop measurement scales.
For example, what questions can we use to measure customer satisfaction? What questions work best in our context? Do potential respondents understand the wording, or do we need to make changes? Exploratory research can help us answer such questions. For example, an exploratory literature search may uncover measurement scales that tell us how to measure important variables such as corporate reputation and service quality. Many of these measurement scales are collected in scale handbooks, such as the Marketing Scales Handbook published by Bruner et al.
Descriptive Research As its name implies, descriptive research is all about describing certain phenomena, characteristics, or functions. It can focus on one variable or on several variables at the same time. Such descriptive research often builds upon previous exploratory research.
Key ways in which descriptive research can help us include describing customers, competitors, and market segments, and measuring performance.
Uses of Descriptive Research Market researchers conduct descriptive research for many purposes. These include, for example, describing customers or competitors. For instance, how large is the UK market for pre-packed cookies? How large is the worldwide market for cruises priced 10, USD and more? How many new products did our competitors launch last year?
Descriptive research helps us answer such questions. For example, the Nielsen Company has vast amounts of data available in the form of scanner data. Scanner data are mostly collected at the checkout of a supermarket where details about each product sold are entered into a vast database. By using scanner data, it is, for example, possible to describe the market for pre-packed cookies in the UK.
Descriptive research is frequently used to segment markets. As companies often cannot connect with all customers individually, they divide markets into groups of consumers, customers, or clients with similar needs and wants. These are called segments. Firms can then target each of these segments by positioning themselves in a unique segment such as Ferrari in the high-end sports car market.
PRIZM segments consumers along a multitude of attitudinal, behavioral, and demographic characteristics, and companies can use this to better target their customers. Segments have names, such as Up-and-Comers (young professionals with a college degree and a mid-level income) and Backcountry Folk (older, often retired people with a high school degree and low income). Another important function of descriptive market research is to measure performance. However, market researchers also frequently measure performance using measures that are quite specific to marketing, such as share of wallet (i.e., the share of a customer's spending in a category that a firm captures).
Causal Research Market researchers undertake causal research less frequently than exploratory or descriptive research. Nevertheless, it is important to understand the delicate relationships between variables that are important to marketing.
Causal research is used to understand how changes in one variable (e.g., price) affect another variable (e.g., sales). Causality is the relationship between an event (the cause) and a second event (the effect), where the second event is a consequence of the first.
To claim causality, we need to meet four requirements. First, the variable that causes the other needs to be related to the other. Simply put, if we want to determine whether price changes cause sales to drop, there should be a relationship or correlation between the two variables (see Chap.).
Note that people often confuse correlation and causality. Just because there is some type of relationship between two variables does not mean that the one caused the other (see Box 2.).
Second, the cause needs to come before the effect. This is the requirement of time order. Third, we need to control for other factors. If we increase the price, sales may go up because competitors increase their prices even more. Controlling for other factors is difficult, but not impossible.
This is achieved by, for example, randomly giving participants a stimulus (such as information on a price increase) in an experiment, or by controlling environmental factors by conducting experiments in labs where, for example, light and noise conditions are held constant (controlled).
To control for other factors, we can also use statistical tools that account for external influences. These statistical tools include analysis of variance (see Chap.). Finally, an important criterion is that there needs to be a good explanatory theory. For example, we may observe that when we advertise, sales decrease.
Without any good explanation for this (such as that people dislike the advertisement), we cannot claim that there is a causal relationship. Box 2. presents an example (Fig.): the picture clearly shows a trend.
If the harvested area of melons increases, the number of US fatal motor vehicle crashes increases. This is a correlation, and it satisfies the first requirement for causality. Where the story falls short in determining causality is explanatory theory: what possible mechanism could explain the findings? Other examples include the following: ice cream sales and drownings rise and fall together; therefore, ice cream causes drowning.
Similarly, the number of pirates has declined while global temperatures have risen; therefore, global warming is caused by a lack of pirates. If such facts were presented, most people would be highly skeptical and would not interpret them as describing a causal mechanism. Causal research may help us to determine whether causality really exists in these situations. Some of the above and further examples can be found in Huff or on Wikipedia.
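The first causality requirement, correlation, can be made concrete with a small Python sketch. The harvested-area and crash figures below are invented, since the text does not reproduce the actual data; the point is that even a near-perfect correlation establishes nothing causal on its own.

```python
# Pearson correlation computed by hand; a high value satisfies only the
# first of the four causality requirements. All figures are invented.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

melon_area = [10, 12, 15, 18, 21]          # invented yearly harvested area
fatal_crashes = [100, 115, 140, 170, 200]  # invented yearly crash counts

r = pearson(melon_area, fatal_crashes)
print(round(r, 3))  # prints 0.998 -- strongly correlated, yet not causal
```

The near-perfect coefficient illustrates why correlation alone is insufficient: time order, control of other factors, and an explanatory theory are still missing.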
Key examples of causal research include lab and field experiments and test markets. Uses of Causal Research Experiments are a key type of causal research and come in the form of either lab or field experiments. Lab experiments are performed in controlled environments (usually in a company or academic lab) to gain understanding of how changes in one variable (called a stimulus) cause changes in another variable. For example, substantial experimental research is conducted to gain understanding of how changing websites helps people navigate better through online stores, thereby increasing sales.
Field experiments are experiments conducted in real-life settings where a stimulus (often in the form of a new product or changes in advertising) is provided to gain understanding of how these changes impact sales. Field experiments are conducted regularly. We discuss experimental set-ups in more detail in Chap.
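A minimal sketch of how the outcome of such an experiment might be analyzed, assuming invented sales figures for a control group (original website) and a treatment group (changed website):

```python
# Hypothetical lab-experiment analysis: compare mean sales between a
# control group (original website) and a treatment group (changed website).
# All figures are invented for illustration.
from statistics import mean

control = [12.0, 14.5, 11.0, 13.2, 12.8, 14.0]  # sales per visitor, old site
treated = [14.1, 15.3, 13.9, 16.0, 14.8, 15.5]  # sales per visitor, new site

diff = mean(treated) - mean(control)
print(f"difference in means: {diff:.2f}")  # prints: difference in means: 2.02
```

In a real study, the mean difference would be followed by a significance test; it is the random assignment of participants to the two groups that permits a causal interpretation of the difference.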
In Table 2. we summarize the different types of research. Design the Sample and Method of Data Collection After having determined the research design, we need to design a sampling plan and choose a data-collection method.
This involves deciding whether to use existing secondary data or to conduct primary research. We discuss this in further detail in Chap. Collect the Data Collecting data is a practical but sometimes difficult part of the market research process.
How do we design a survey? How do we measure attitudes toward a product, brand, or company if we cannot observe these attitudes directly? How do we get CEOs to respond? Dealing with such issues requires careful planning and knowledge of the marketing process. We discuss related key issues in Chap. Analyze the Data Analyzing data requires technical skills. We discuss how to describe data in Chap.
After this, we introduce key techniques, such as hypothesis testing and analysis of variance (ANOVA), regression analysis, factor analysis, and cluster analysis in Chaps.
In each of these chapters, we discuss the key theoretical choices and issues market researchers face when using these techniques. Researchers should provide detailed answers and actionable suggestions based on the data and the techniques discussed in these chapters. The last step is to clearly communicate the findings and recommendations to help decision making and implementation.
This is further discussed in Chap. Follow-Up Market researchers often stop when the results have been interpreted, discussed, and presented.
However, following up on the research findings is important too. Implementing market research findings sometimes requires further research because suggestions or recommendations may not be feasible or practical and market conditions may have changed. Qualitative and Quantitative Research In the discussion above on the market research process, we ignored the choice between qualitative and quantitative research. Nevertheless, market researchers often label themselves as either quantitative or qualitative researchers.
The two types of researchers use different methodologies, different types of data, and focus on different research questions. Most people regard the difference between qualita- tive and quantitative as one between numbers and words, with quantitative researchers focusing on numbers and qualitative researchers on words.
This distinction is not accurate, as many qualitative researchers use numbers in their analyses. Rather, the distinction should be made according to when the information is quantified. If we know the possible values occurring in the data before the research starts, we conduct quantitative research; if we know them only after the data have been collected, we conduct qualitative research.
Think of it in this way: with a closed-ended question, we know all possible values beforehand, so the data are also quantified beforehand. This is therefore quantitative research. With an open-ended question, we have no idea what the possible answer values are. Therefore, these data are qualitative.
We can, however, recode these qualitative data and assign values to each response. Thus, we quantify the data, allowing further statistical analysis.
Qualitative and quantitative research are equally important in the market research industry in terms of money spent on services. Why do we follow a structured process when conducting market research?
Are there any shortcuts you can take? What are the similarities and differences? Describe what exploratory, descriptive, and causal research are and how these are related to one another.
Provide an example of each type of research. What are the four requirements for claiming causality? Do we meet these requirements in the following situations? What is qualitative research and what is quantitative research? In which type of research exploratory, descriptive, and causal is qualitative or quantitative research most likely to be useful?
Further reading: A rogue economist explores the hidden side of everything (HarperCollins, New York, NY), an entertaining book that discusses statistical misconceptions and introduces cases where people confuse correlation and causation. Also worth a read: Market Research Services, Inc.; Nielsen Retail Measurement at http: ; Qualitative vs. Quantitative Research: Key Points in a Classic Debate at http: ; J Prod Innov Manage 25(6).
Chapter 3 Data Learning Objectives After reading this chapter you should understand: By data we mean a collection of facts that can be used as a basis for analysis, reasoning, or discussions.
In this chapter, we will discuss some of the different types of data. This will help you describe what data you use and why. Subsequently, we discuss strategies to collect data in Chap. Data are present in the form of variables and cases for quantitative data and in the form of words and pictures for qualitative data we will further discuss this distinction later in this chapter.
A variable is an attribute whose value can change. For example, the price of a product is an attribute of that product and typically varies over time. If the price does not change, it is a constant. A case (or observation) consists of all the observed variables that belong to an object such as a customer, a company, or a country.
The relationship between variables and cases is that within one case we usually find multiple variables. In Table 3., the columns contain the variables and the lower rows show the four observations we have. Although marketers often talk about variables, they also use the word item, which usually refers to a survey question put to a respondent.
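The cases-by-variables layout described above can be sketched as follows; the customers, ages, and car types are invented, as the actual contents of the table are not reproduced here:

```python
# Sketch of the cases-by-variables layout: each row (case) is one object,
# each key (column) is one variable. All values are invented.
cases = [
    {"customer": 1, "age": 34, "gender": "male",   "car": "sedan"},
    {"customer": 2, "age": 28, "gender": "female", "car": "SUV"},
    {"customer": 3, "age": 45, "gender": "female", "car": "sedan"},
    {"customer": 4, "age": 52, "gender": "male",   "car": "coupe"},
]

variables = list(cases[0].keys())  # the columns: the variables
n_cases = len(cases)               # the rows: the observations
print(variables, n_cases)
```

Each case bundles several variables for one object, which is exactly the row-by-column structure most statistical software, including SPSS, expects.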
Another important term that is frequently used in market research is construct, which refers to a variable that is not directly observable. For example, while gender and age are directly observable, this is not the case with concepts such as customer satisfaction, loyalty or brand trust.
More precisely, a construct can be defined as a concept which cannot be directly observed, and for which there are multiple referents, but none all-inclusive. For example, the type of car bought (from Table 3.) is not a construct. Furthermore, the type of car can be directly observed (e.g., by simply looking at it).
On the other hand, constructs consist of groups of functionally related behaviors, attitudes, and experiences and are psychological in nature. Researchers cannot directly measure satisfaction, loyalty, or brand trust.
However, they can measure indicators or manifestations of what we have agreed to call satisfaction, loyalty, or trust in, for example, a brand, product, or company. Consequently, we generally have to combine several items to form a so-called multi-item scale, which can be used to measure a construct.
This aspect of combining several items is called scale development, operationalization, or, in special cases, index construction. Essentially, it is a combination of theory and statistical analysis, such as factor analysis (discussed in Chap.). In Table 3., for example, the construct is not an individual item that you see in the list; it is captured by calculating the average of a number of related items. But how do we decide which and how many items to use when measuring specific constructs?
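A minimal sketch of how a multi-item scale score is computed as the average of related items; the item names and ratings below are hypothetical, not the actual items from the table:

```python
# Hypothetical multi-item scale: a brand-trust score computed as the
# average of three related items, each rated on a 1-7 scale.
# Item names and ratings are invented for illustration.
respondent = {
    "this_brand_is_honest": 6,
    "this_brand_is_reliable": 5,
    "this_brand_keeps_its_promises": 6,
}

brand_trust = sum(respondent.values()) / len(respondent)
print(round(brand_trust, 2))  # prints 5.67
```

Averaging is the simplest way to combine items; in practice, scale development also involves checking with factor and reliability analyses that the items really belong together.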
For example, DeVellis and Rossiter provide two different approaches to scale development. Unfortunately, these procedures require much technical expertise, and describing each step goes beyond the scope of this book. However, for many scales you do not need to use this procedure, as existing scales can be found in scale handbooks, such as the Handbook of Marketing Scales by Bearden and Netemeyer. Furthermore, marketing and management journals frequently publish research articles that introduce new scales, such as for the reputation of non-profit organizations (Sarstedt and Schloderer) or brand relevance (Fischer et al.).
Nevertheless, we introduce two distinctions that are often used to discuss constructs in Box 3. When considering reflective constructs, there is a causal relationship from the construct to the items. In other words, the items reflect the construct. This is also the case in our previous example, with three items reflecting the concept of brand trust. Thus, if a respondent changes his assessment of brand trust, this change is reflected in all the items. Erdem and Swait use several interrelated items to capture different aspects of brand trust, which generally yields more exact and stable results.
Moreover, if we have multiple items, we can use analysis techniques that inform us about the quality of measurement, such as factor analysis or reliability analysis (discussed in Chap.). On the other hand, formative constructs consist of a number of items that define a construct. A typical example is socioeconomic status, which is formed by a combination of education, income, occupation, and residence. If any of these measures increases, socioeconomic status would increase even if the other items did not change.
This distinction is important when operationalizing constructs, as it requires different approaches to decide on the type and number of items. Moreover, reliability analyses (discussed in Chap.) cannot be used on formative constructs.
For an overview of this distinction, see Diamantopoulos and Winklhofer or Diamantopoulos et al. Multi-item vs. single-item scales: Rather than using a large number of items to measure constructs, practitioners often opt for the use of single items to operationalize some of their constructs. While this is a good way to make measurement more efficient and the questionnaire shorter, it also reduces the quality of your measures.
Thus, the former can be measured by means of a single item whereas multiple items should be used to measure the latter. See Bergkvist and Rossiter and Sarstedt and Wilczynski for a discussion.
Fuchs and Diamantopoulos offer guidance on when to use multi items and single items. Primary and Secondary Data Generally, we can distinguish between two types of data: While primary data are data that a researcher has collected for a specific purpose, secondary data have already been collected by another researcher for another purpose. It also includes the prices people pay for these products and services.
Since these data have already been collected, they are secondary data. If a researcher sends out a survey with various questions to find an answer to a specific issue, the collected data are primary data. If these primary data are re-used to answer another research question, they become secondary data.
Secondary data can either be internal or external or a mix of both. Internal secondary data are data that an organization or individual already has collected, but wants to use for other research purposes.
For example, we can use sales data to investigate the success of new products, or we can use the warranty claims people make to investigate why certain products are defective. External secondary data are data that other companies, organizations, or individuals have available, sometimes at a cost. Secondary and primary data have their own specific advantages and disadvantages, which are summed up in Table 3. Generally, the most important advantage of secondary data is that they are often quick and cheap to access.
For example, if you want to have access to the US Consumer Expenditure Survey, all you have to do is point your web browser to http: Furthermore, the authority and competence of some of these research organizations might be a factor.
However, important drawbacks of secondary data are that they may not answer your research question. If you are, for example, interested in the sales of a specific product and not in a product or service category , the US Expenditure Survey may not help much.
In addition, if you are interested in reasons why people download products, this type of data may not help answer your question. Lastly, as you did not control the data collection, there may be errors in the data.
In contrast, primary data tend to be highly specific, because the researcher (you!) can tailor the research to the question at hand. In addition, primary research can be carried out when and where it is required and cannot be accessed by competitors.
However, gathering primary data often requires much time and effort and, there- fore, is usually expensive compared to secondary data. As a rule, start looking for secondary data first. If they are available, and of acceptable quality, use them! We will discuss ways to gather primary and secondary data in Chap. Quantitative and Qualitative Data Data can be quantitative or qualitative.
Quantitative data are presented in values, whereas qualitative data are not. The distinction between qualitative and quantitative data is not as black-and-white as it seems, because quantitative data are based on qualitative judgments. For example, the questions on brand trust in Table 3. rest on qualitative judgments about what constitutes brand trust.
Think, for example, of how people respond to a new product in an interview. We can code this by setting neutral responses to 0, somewhat positive responses to 1, positive ones to 2, and very positive ones to 3. We have thus turned the qualitative data contained in the interview into quantitative data. However, the process of how we interpret qualitative data is also subjective.
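The coding scheme just described can be applied mechanically; the interview responses below are invented:

```python
# Applying the coding scheme from the text to invented interview responses:
# neutral = 0, somewhat positive = 1, positive = 2, very positive = 3.
coding = {
    "neutral": 0,
    "somewhat positive": 1,
    "positive": 2,
    "very positive": 3,
}

responses = ["positive", "neutral", "very positive", "somewhat positive"]
coded = [coding[r] for r in responses]
print(coded)  # prints [2, 0, 3, 1]
```

The subjective step is deciding which category each raw response belongs to; once that judgment is made, the quantification itself is mechanical.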
To reduce some of these problems, qualitative data are often coded by highly trained researchers. Most people think of quantitative data as being more factual and precise than qualitative data, but this is not necessarily true. Unit of Analysis The unit of analysis is the level at which a variable is measured.
Researchers often ignore this, but it is crucial because it determines what we can learn from the data. Typical measurement levels include individuals respondents or customers , stores, companies, or countries. It is best to use data at the lowest possible level, because this provides more detail and if we need these data at another level, we can aggregate the data.
Aggregating data means that we sum up a variable at a lower level to create a variable at a higher level. For example, if we know how many cars all car dealers in a country sell, we can take the sum of all dealer sales, to create a variable measuring countrywide car sales. This is not always possible, because, for example, we may have incomplete data. Dependence of Observations A key issue for any data is the degree to which observations are related. If we have exactly one observation from each individual, store, company, or country, we label the observations independent.
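A sketch of aggregation as described above, summing invented dealer-level car sales up to the country level:

```python
# Aggregating dealer-level car sales (invented numbers) to the country
# level by summing the lower-level variable, as described above.
dealer_sales = {
    ("Germany", "Dealer A"): 120,
    ("Germany", "Dealer B"): 95,
    ("France", "Dealer C"): 80,
    ("France", "Dealer D"): 60,
}

country_sales = {}
for (country, _dealer), sales in dealer_sales.items():
    country_sales[country] = country_sales.get(country, 0) + sales

print(country_sales)  # prints {'Germany': 215, 'France': 140}
```

Note that the reverse is impossible: from the country totals alone, the dealer-level detail cannot be recovered, which is why collecting data at the lowest possible level is preferable.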
That is, the observations are completely unrelated. If, however, we have multiple observations from each individual, store, company, or country, we label them dependent.
Although the advertisement may influence the respondents, it is likely that the first and second response will still be related. That is, if the respondents first rated the Cola negatively, the chance is higher that they will still rate the Cola negatively rather than positively after the advertisement. If the data are dependent, this often impacts what type of analysis we should use. For example, in Chap. Dependent and Independent Variables An artificial distinction that market researchers make is the difference between dependent and independent variables.
Independent variables are those that are used to explain another variable, the dependent variable.
For example, if we use the amount of advertising to explain sales, then advertising is the independent variable and sales the dependent. This distinction is artificial, as all variables depend on other variables.
However, the distinction is frequently used and helpful, as it indicates what is being explained, and what variables are used to explain it. Measurement Scaling Not all data are equal! For example, we can calculate the average age of the respondents of Table 3., but calculating their average gender would be meaningless. Why is this? The values that we have assigned to male (0) or female (1) respondents are arbitrary; we could just as well have given males the value of 1 and females the value of 0, or we could have used the values of 1 and 2.
Therefore, choosing a different coding would result in different results.
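The arbitrariness is easy to demonstrate: recoding the same responses changes the "average", which shows why that average carries no meaning.

```python
# The mean of a nominal variable depends entirely on the arbitrary coding,
# so it carries no meaning. Same responses, two codings, two "averages".
genders = ["male", "female", "female", "male"]

coding_a = {"male": 0, "female": 1}
coding_b = {"male": 1, "female": 2}

mean_a = sum(coding_a[g] for g in genders) / len(genders)
mean_b = sum(coding_b[g] for g in genders) / len(genders)
print(mean_a, mean_b)  # prints 0.5 1.5
```

The underlying data are identical in both runs; only the labels changed, yet the computed mean differs.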
The example above illustrates the problem of measurement scaling. Measurement scaling refers to two things: the variable itself and the level at which it is measured. This can be highly confusing! There are four levels of measurement: nominal, ordinal, interval, and ratio. These relate to how we quantify what we measure.
It is vital to know the scale on which something is measured because, as the gender example above illustrates, the measurement scale determines what analysis techniques we can, or cannot, use. We will come back to this issue in Chap. The nominal scale is the most basic level at which we can measure something. Essentially, if we use a nominal scale, we substitute a numerical label for a word.
For example, we could code the types of soft drinks bought with numbers such as 1, 2, and 3. In this example, the numerical values represent nothing more than labels. The ordinal scale provides more information.
If we have a variable measured on an ordinal scale, we know that if the value of that variable increases, or decreases, this gives meaningful information. Therefore, something measured with an ordinal scale provides information about the order of our observations. However, we do not know if the differences in the order are equally spaced. If something is measured with an interval scale, we have precise information on the rank order at which something is measured and, in addition, we can interpret the magnitude of the differences in values directly.
What the interval scale does not give us is an absolute zero point. Temperature in degrees Celsius is a typical example: the value of 0 does not mean that there is no temperature at all. The ratio scale provides the most information.
If something is measured with a ratio scale, we know that a value of 0 means that that particular variable is not present. Therefore, the zero point or origin of the variable is equal to 0. While it is relatively easy to distinguish the nominal and interval scales, it is quite hard to understand the difference between the interval and ratio scales. Fortunately, in practice, the difference between the interval and ratio scale is largely ignored and both categories are sometimes called the quantitative scale or metric scale.
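A worked example of the interval/ratio distinction using temperature: ratios of Celsius values are misleading because the zero point is arbitrary, while the Kelvin scale has a true zero.

```python
# Interval vs. ratio: 20 degrees Celsius is not "twice as warm" as 10,
# because the Celsius zero point is arbitrary. Converting to Kelvin
# (a ratio scale with a true zero) shows the actual ratio is far smaller.
def to_kelvin(celsius):
    return celsius + 273.15

ratio_celsius = 20 / 10                       # misleading: 2.0
ratio_kelvin = to_kelvin(20) / to_kelvin(10)  # about 1.035
print(round(ratio_celsius, 3), round(ratio_kelvin, 3))  # prints 2.0 1.035
```

This is why statements about ratios ("twice as much") are only meaningful for variables measured on a ratio scale.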
Table 3. summarizes the levels of measurement. Good measures are those that consistently measure what they are supposed to measure. We can think of this as a measurement problem through which we relate what we want to measure (whether existing customers like a new TV commercial) with what we actually measure in terms of the questions we ask. If these relate perfectly, our actual measurement is equal to what we intend to measure and we have no measurement error. If these do not relate perfectly, as is usual in practice, we have measurement error.
This measurement error can be divided into a systematic and a random error. We can express this as follows: XO = XT + XS + XR, where XO is the observed score, XT the true score, XS the systematic error, and XR the random error. Systematic error is a measurement error through which we consistently measure higher, or lower, than we actually want to measure. If we were, for example, to ask customers to evaluate a TV commercial and offer them remuneration in return, they may provide more favorable information than they would otherwise have. This may cause us to think that the TV commercial is systematically more, or less, enjoyable than it is in reality.
On the other hand, there may be random errors. Some customers may be having a good day and indicate that they like a commercial whereas others, who are having a bad day, may do the opposite. Systematic errors cause the actual measurement to be consistently higher, or lower, than what it should be.
On the other hand, random error causes random variation between what we actually measure and what we should measure. Validity refers to whether we are measuring what we want to measure and, therefore, to a situation where the systematic error (XS) is zero. Reliability is the degree to which what we measure is free from random error and, therefore, relates to a situation where the random error (XR) is zero.
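The split into systematic and random error can be illustrated with a small simulation (all numbers invented): writing the observed score as true score plus systematic error plus random error, the systematic part shifts the long-run average while the random part averages out over repeated measurements.

```python
# Simulating the decomposition observed = true + systematic + random:
# the systematic error (XS) shifts every observation by the same amount,
# while the random error (XR) averages out over many measurements.
import random

random.seed(42)  # reproducible illustration

true_score = 5.0   # XT: what we want to measure
systematic = 0.8   # XS: e.g., remuneration inflating every answer

observations = [
    true_score + systematic + random.gauss(0, 0.5)  # XR: random error
    for _ in range(10_000)
]

avg = sum(observations) / len(observations)
print(round(avg, 2))  # close to 5.8 = true score + systematic error
```

No amount of repetition removes the systematic offset of 0.8: the average converges to 5.8, not to the true score of 5.0, which is exactly why validity (XS = 0) and reliability (XR = 0) are separate requirements.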
Think of shooting arrows at a target: in this analogy, repeated measurements correspond to arrows fired at the target, and the true score is the bull's eye. The closer the average of the arrows is to the true score, the higher the validity. If several arrows are fired, reliability is the degree to which the arrows are close together. This corresponds to the upper left box, where we have a scenario in which the measure is reliable but not valid.
In the upper right box, we have both reliability and validity. In the lower left box, though, we have a situation in which the measure is neither reliable nor valid. Obviously, this is because the repeated measurements are scattered quite heavily and their average does not match the true score.
However, even if the average of such scattered measurements happened to match the true score, the measure would still not be valid. An unreliable measure can never be valid, because there is no way we can distinguish the systematic error from the random error. If we repeated the measurement, say, five more times, the random error would likely shift the cross (the average position) somewhere else.
Thus, reliability is a necessary condition for validity. So how do we know if a measure is valid? Several types of validity have been distinguished to answer this question, and they help us understand the association between what we should measure and what we actually measure.
Construct validity is a general term that includes the different types of validity discussed next. Face validity is an absolute minimum requirement for a variable to be valid and refers to whether a variable, on its face, reflects what you want to measure. Essentially, face validity exists if a measure seems to make sense. Researchers should agree on face validity before starting the actual measurement (i.e., before collecting any data).
Face validity is often determined by using a sample of experts who discuss and agree on the degree of face validity (this is also referred to as expert validity). Content validity is strongly related to face validity but is more formalized. To assess content validity, researchers need to first define what they want to measure and discuss what is, and what is not, included in the definition.
Suppose, for example, that trust is defined in terms of honesty and benevolence. Such a definition clearly indicates what should be mentioned in the questions used to measure trust: honesty and benevolence.
After researchers have defined what they want to measure, questions have to be developed that relate closely to the definition. Consequently, content validity is mostly achieved before the actual measurement. Predictive validity can be tested if we know that a measure should relate to an outcome.
For example, loyalty should lead to people buying a product, and a measure of satisfaction should be predictive of people not complaining about a product or service.
Assessing predictive validity requires collecting data at two points in time and therefore often takes a great deal of time. Criterion validity is closely related to predictive validity, the one difference being that we examine the relationship between two constructs measured at the same point in time.
Reliability can be assessed through three key approaches: stability of the measurement, internal consistency, and inter-rater reliability. Stability of the measurement means that if we measure something twice (also called test-retest reliability) we expect similar outcomes. Stability of measurement requires a market researcher to have collected two data samples and is therefore costly and could prolong the research process. Operationally, researchers administer the same test to the same sample on two different occasions and evaluate how strongly the measurements are related (more precisely, they compute correlations between the measurements, as discussed in a later chapter). There are also other types of validity, such as discriminant validity, which are covered in later chapters; see Netemeyer et al. for details.
For a measure to be reliable (i.e., stable), the two measurements should be strongly and positively related. This approach is not without problems, however. For example, it is often hard, if not impossible, to survey the same people twice. Moreover, test-retest approaches do not work when the survey is about specific points in time. If we ask respondents to provide information on their last restaurant experience, the second test might relate to a different restaurant experience.
Thus, test-retest reliability can only be assessed for variables that are relatively stable over time. Internal consistency is by far the most common way of assessing reliability. Internal consistency requires researchers to simultaneously use multiple variables to measure the same thing (think of asking multiple questions in a survey). If these measures relate strongly and positively to one another, there is a considerable degree of internal consistency.
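Internal consistency is commonly quantified with Cronbach's alpha, which compares the variance of the individual items with the variance of their sum. The sketch below is our own implementation of the textbook formula, run on made-up answers from five hypothetical respondents; it is not SPSS's RELIABILITY procedure.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) array.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data: 5 respondents answering 3 questions that are
# supposed to measure the same construct on a 1-5 scale.
answers = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
])
alpha = cronbach_alpha(answers)
```

With these made-up answers the items move together closely, so alpha comes out above 0.9; items that barely correlate would push alpha toward zero.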
Inter-rater reliability is a particular type of reliability that is often used to assess the reliability of secondary data or qualitative data discussed later. If you want to measure, for example, which are the most ethical organizations in an industry, you could ask several experts to provide a rating and then check whether their answers relate closely.
If they do, then the degree of inter-rater reliability is high.

Population and Sampling

A population is the group of units about which we want to make judgments. These units can be groups of individuals, customers, companies, products, or just about any subject in which you are interested. Populations can be defined very broadly, such as the people living in Canada, or very narrowly, such as the directors of large hospitals.
What defines a population depends on the research conducted and the goal of the research. Sampling is the process through which we select cases from a population. When we develop a sampling strategy, we have three key choices to make. The most important aspect of sampling is that the selected sample is representative of the population.

A one-way analysis of variance (ANOVA) is used when you have a categorical independent variable with two or more categories and a normally distributed interval dependent variable, and you wish to test for differences in the means of the dependent variable broken down by the levels of the independent variable.
For example, using the hsb2 data file, say we wish to test whether the mean of write differs between the three program types (prog). In SPSS this test is run with the ONEWAY procedure. In our example, the mean of the dependent variable differs significantly among the levels of program type.
However, we do not know if the difference is between only two of the levels or all three of the levels. The F test for the Model is the same as the F test for prog, because prog was the only variable entered into the model. If other variables had also been entered, the F test for the Model would have been different from that for prog. To see the mean of write for each level of program type, we can compare the group means directly. From this we can see that the students in the academic program have the highest mean writing score, while students in the vocational program have the lowest.
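The same one-way ANOVA can be illustrated outside SPSS. The sketch below uses Python's scipy with made-up writing scores for three hypothetical program groups (not the actual hsb2 data):

```python
from scipy import stats

# Hypothetical writing scores for three program types (made-up numbers,
# standing in for the hsb2 variables write and prog).
general    = [52, 54, 49, 57, 51]
academic   = [60, 63, 58, 65, 61]
vocational = [45, 47, 44, 50, 46]

# One-way ANOVA: do the group means differ?
f_stat, p_value = stats.f_oneway(general, academic, vocational)
```

Because the three groups were deliberately constructed with very different means, the F statistic is large and the p-value is far below .05; as in the text, a significant F alone does not tell us which pairs of levels differ.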
The Kruskal-Wallis test is used when you have one independent variable with two or more levels and an ordinal dependent variable. In other words, it is the non-parametric version of ANOVA and a generalized form of the Mann-Whitney test, since it permits two or more groups. We will use the same data file as in the one-way ANOVA example above (the hsb2 data file) and the same variables, but we will not assume that write is a normally distributed interval variable.
If some of the scores receive tied ranks, then a correction factor is used, yielding a slightly different value of chi-squared. With or without ties, the results indicate that there is a statistically significant difference among the three types of programs. A paired samples t-test is used when you have two related observations (i.e., two observations per subject) and you want to see if the means of the two normally distributed interval variables differ from one another. For example, using the hsb2 data file, we will test whether the mean of read is equal to the mean of write. The Wilcoxon signed rank sum test is the non-parametric version of the paired samples t-test.
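The Kruskal-Wallis test just described can be sketched as follows in Python's scipy; the three groups of scores are made up for illustration and do not come from the hsb2 file:

```python
from scipy import stats

# Made-up scores for three groups; the Kruskal-Wallis test only uses
# their ranks, so no normality assumption is needed.
group1 = [52, 54, 49, 57, 51]
group2 = [60, 63, 58, 65, 61]
group3 = [45, 47, 44, 50, 46]

# H statistic (approximately chi-squared with k-1 = 2 df) and its p-value.
h_stat, kw_p = stats.kruskal(group1, group2, group3)
```

There are no tied values in these made-up groups, so no tie correction is applied; the groups barely overlap in rank, so the test comes out clearly significant.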
You use the Wilcoxon signed rank sum test when you do not wish to assume that the difference between the two variables is interval and normally distributed but you do assume the difference is ordinal. We will use the same example as above, but we will not assume that the difference between read and write is interval and normally distributed.
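The paired samples t-test, its Wilcoxon counterpart, and the even less demanding sign test (which uses only the signs of the paired differences) can all be sketched on the same data. The eight pairs of scores below are made up, standing in for the hsb2 variables read and write:

```python
from scipy import stats

# Hypothetical paired read/write scores for eight students (made-up numbers).
read  = [57, 44, 63, 47, 58, 41, 55, 50]
write = [52, 49, 58, 44, 62, 44, 51, 54]

# Paired samples t-test on the two related measurements.
t_stat, t_p = stats.ttest_rel(read, write)

# Wilcoxon signed rank sum test: the non-parametric counterpart,
# using the ranks of the paired differences rather than their values.
w_stat, w_p = stats.wilcoxon(read, write)

# Sign test: keep only the signs of the differences and test whether
# "positive" occurs with probability 0.5.
diffs = [r - w for r, w in zip(read, write)]
n_pos = sum(d > 0 for d in diffs)
n_nonzero = sum(d != 0 for d in diffs)
sign_p = stats.binomtest(n_pos, n_nonzero, 0.5).pvalue
```

With these made-up pairs the positive and negative differences roughly balance out, so none of the three tests is significant, mirroring the "no significant difference between read and write" conclusion in the text.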
The results suggest that there is not a statistically significant difference between read and write. If you believe the differences between read and write were not ordinal but could merely be classified as positive and negative, then you may want to consider a sign test in lieu of the signed rank test. Again, we will use the same variables and assume that this difference is not ordinal. The McNemar test, in turn, is used with paired binary outcomes. These binary outcomes may be the same outcome variable on matched pairs (as in a case-control study) or two outcome variables from a single group.
Continuing with the hsb2 dataset used in several examples above, let us create two binary outcomes in our dataset, himath and hiread. These outcomes can be arranged in a two-way contingency table. The null hypothesis is that the proportion of students in the himath group is the same as the proportion of students in the hiread group (i.e., that the contingency table is symmetric).
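McNemar's test can be sketched with an exact computation: it looks only at the discordant cells of the paired table, and under the null hypothesis each discordant pair is equally likely to fall in either cell. The counts below are made up, not the actual himath/hiread cross-tabulation:

```python
from scipy import stats

# Hypothetical 2x2 contingency table of two paired binary outcomes
# (rows: first outcome 0/1, columns: second outcome 0/1; made-up counts).
#            second=0  second=1
table = [[60,        15],    # first=0
         [5,         20]]    # first=1

# Exact McNemar test: only the discordant cells (15 and 5) matter.
# Under H0, each discordant pair lands in either cell with probability 0.5.
b, c = table[0][1], table[1][0]
mcnemar_p = stats.binomtest(b, b + c, 0.5).pvalue
```

With 15 discordant pairs on one side and only 5 on the other, the exact two-sided p-value lands just under .05; the concordant cells (60 and 20) play no role at all.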
You would perform a one-way repeated measures analysis of variance if you had one categorical independent variable and a normally distributed interval dependent variable that was repeated at least twice for each subject. This is the equivalent of the paired samples t-test, but allows for two or more levels of the categorical variable. This tests whether the mean of the dependent variable differs by the categorical variable.
In this data set, y is the dependent variable, a is the repeated measure, and s is the variable that indicates the subject number. You will notice that this output gives four different p-values. No matter which p-value you use, our results indicate that we have a statistically significant effect of a at the .05 level. If you have a binary outcome measured repeatedly for each subject and you wish to run a logistic regression that accounts for the effect of multiple measures from single subjects, you can perform a repeated measures logistic regression.
The exercise data file contains 3 pulse measurements from each of 30 people assigned to 2 different diet regimens and 3 different exercise regimens. A factorial ANOVA has two or more categorical independent variables (either with or without their interactions) and a single normally distributed interval dependent variable.
For example, using the hsb2 data file, we will look at writing scores (write) as the dependent variable and gender (female) and socio-economic status (ses) as independent variables, and we will include an interaction of female by ses. Note that in SPSS you do not need to have the interaction term(s) in your data set.
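To show what a factorial ANOVA computes, here is a hand-rolled two-way ANOVA for a balanced design with two factors of two levels each. Everything below is an illustrative sketch: the scores are made up, and the sums-of-squares formulas apply only to balanced designs (SPSS's GLM procedure handles the general case):

```python
import numpy as np
from scipy import stats

# Balanced 2x2 design: cell[i][j] holds the 4 scores for level i of
# factor A and level j of factor B (hypothetical numbers).
cell = np.array([
    [[50, 52, 48, 54], [55, 57, 53, 59]],   # A=0: B=0, B=1
    [[60, 62, 58, 64], [61, 63, 59, 65]],   # A=1: B=0, B=1
], dtype=float)
a_levels, b_levels, n = cell.shape

grand = cell.mean()
mean_a = cell.mean(axis=(1, 2))   # marginal means of factor A
mean_b = cell.mean(axis=(0, 2))   # marginal means of factor B
mean_ab = cell.mean(axis=2)       # cell means

# Sums of squares for the two main effects, the interaction, and error.
ss_a = b_levels * n * ((mean_a - grand) ** 2).sum()
ss_b = a_levels * n * ((mean_b - grand) ** 2).sum()
ss_ab = n * ((mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
ss_err = ((cell - mean_ab[:, :, None]) ** 2).sum()

df_a, df_b = a_levels - 1, b_levels - 1
df_err = a_levels * b_levels * (n - 1)

# F test for the main effect of factor A.
ms_err = ss_err / df_err
f_a = (ss_a / df_a) / ms_err
p_a = stats.f.sf(f_a, df_a, df_err)
```

The B main effect and the interaction get the same treatment with their own sums of squares and degrees of freedom; with these made-up cells, factor A has a large, clearly significant effect.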
You perform a Friedman test when you have one within-subjects independent variable with two or more levels and a dependent variable that is not interval and normally distributed, but at least ordinal. We will use this test to determine if there is a difference in the reading, writing and math scores. The null hypothesis in this test is that the distributions of the ranks of each type of score (i.e., reading, writing and math) are the same.
To conduct a Friedman test, the data need to be in a long format. SPSS handles this for you, but in other statistical packages you will have to reshape the data before you can conduct this test. In our example the test statistic is not significant; hence, there is no evidence that the distributions of the three types of scores are different.
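The Friedman test ranks the scores within each subject and compares the rank sums across conditions. A sketch in Python's scipy, with made-up read/write/math scores for six hypothetical students:

```python
from scipy import stats

# Hypothetical read / write / math scores for six students (made-up numbers).
# Each position across the three lists belongs to the same student.
read  = [57, 44, 63, 47, 58, 41]
write = [52, 49, 58, 44, 62, 44]
math  = [55, 46, 60, 45, 59, 42]

# Friedman test: ranks the three scores within each student,
# then tests whether the rank sums differ across the score types.
chi2, p = stats.friedmanchisquare(read, write, math)
```

With these made-up scores the three rank sums come out identical, so the chi-squared statistic is 0 and the p-value is 1, matching the "no evidence of a difference" conclusion in the text.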
Ordered logistic regression is used when the dependent variable is ordered, but not continuous. For example, using the hsb2 data file we will create an ordered variable called write3.
This variable will have the values 1, 2 and 3, indicating a low, medium or high writing score. We do not generally recommend categorizing a continuous variable in this way; we are simply creating a variable to use for this example. We will use gender (female), reading score (read) and social studies score (socst) as predictor variables in this model.
We will use a logit link and on the print subcommand we have requested the parameter estimates, the model summary statistics and the test of the parallel lines assumption. There are two thresholds for this model because there are three levels of the outcome variable. One of the assumptions underlying ordinal logistic and ordinal probit regression is that the relationship between each pair of outcome groups is the same.
In other words, ordinal logistic regression assumes that the coefficients that describe the relationship between, say, the lowest versus all higher categories of the response variable are the same as those that describe the relationship between the next lowest category and all higher categories, etc.
This is called the proportional odds assumption or the parallel regression assumption. Because the relationship between all pairs of groups is the same, there is only one set of coefficients only one model.
If this was not the case, we would need different models such as a generalized ordered logit model to describe the relationship between each pair of outcome groups. A factorial logistic regression is used when you have two or more categorical independent variables but a dichotomous dependent variable.
For example, using the hsb2 data file, we will use female as our dependent variable, because it is the only dichotomous variable in our data set; certainly not because it is common practice to use gender as an outcome variable.
We will use type of program (prog) and school type (schtyp) as our predictor variables. Because prog is a categorical variable (it has three levels), we need to create dummy codes for it. SPSS will do this for you by making dummy codes for all variables listed after the keyword with.
SPSS will also create the interaction term; simply list the two variables that will make up the interaction separated by the keyword by. Furthermore, none of the coefficients are statistically significant either.
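The dummy coding and interaction columns that SPSS builds behind the scenes can be made explicit. The values below are made up, and the choice of reference category (the third program level) is our own; SPSS lets you pick a different one:

```python
import numpy as np

# Made-up values for a 3-level categorical predictor (prog: 1, 2, 3)
# and a 2-level one (schtyp: 0/1) for six hypothetical cases.
prog   = np.array([1, 2, 3, 2, 1, 3])
schtyp = np.array([0, 1, 0, 1, 1, 0])

# Two dummy columns for prog; level 3 serves as the reference category,
# so it is coded 0 on both dummies.
prog_d1 = (prog == 1).astype(int)
prog_d2 = (prog == 2).astype(int)

# Interaction terms: products of each prog dummy with schtyp.
inter1 = prog_d1 * schtyp
inter2 = prog_d2 * schtyp

# Full design matrix: intercept, two prog dummies, schtyp, two interactions.
X = np.column_stack([np.ones_like(prog), prog_d1, prog_d2, schtyp, inter1, inter2])
```

A k-level categorical predictor always contributes k-1 dummy columns, and its interaction with a binary predictor contributes another k-1, which is why the factorial logistic model above has two coefficients (and two interaction coefficients) for prog.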
This shows that the overall effect of prog is not significant. A correlation is useful when you want to see the relationship between two or more normally distributed interval variables. For example, using the hsb2 data file we can run a correlation between two continuous variables, read and write.
In the second example, we will run a correlation between a dichotomous variable, female, and a continuous variable, write. Although it is assumed that the variables are interval and normally distributed, we can include dummy variables when performing correlations. In the first example above, we see that the correlation between read and write is 0. By squaring the correlation and then multiplying by 100, you can determine what percentage of the variability is shared.
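A correlation and the shared-variance interpretation can be sketched as follows; the eight pairs of scores are made up, not the actual hsb2 values:

```python
from scipy import stats

# Hypothetical read and write scores (made-up numbers).
read  = [57, 44, 63, 47, 58, 41, 55, 50]
write = [52, 49, 58, 44, 62, 44, 51, 54]

# Pearson correlation and its p-value.
r, p = stats.pearsonr(read, write)

# Squaring r and multiplying by 100 gives the percentage of
# variability the two variables share.
shared_variance_pct = r ** 2 * 100
```

With these made-up scores the correlation is strongly positive, and roughly two-thirds of the variability is shared; swapping one variable for a 0/1 dummy (like female) gives the point-biserial special case of the same computation.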
In the output for the second example, we can see the correlation between write and female is 0. Please note that most of these test descriptions are rather technical and require a sound understanding of statistics.
We have only one variable in our data set that is coded 0 and 1, and that is female.

The students in the different programs differ in their joint distribution of read, write and math.
The output above shows the linear combinations corresponding to the first canonical correlation.

Descriptive Research

As its name implies, descriptive research is all about describing certain phenomena, characteristics or functions.