
  Week 12 Section VI: Middle Range Theories  

Chapter 25: The Community Nursing Practice Model

Chapter 26: Rozzano Locsin’s Technological Competency as Caring in Nursing

 

 How do you see the benefit of using both the Community Nursing Practice Model and Locsin’s Technological Competency as Caring in Nursing in today’s nursing environment? 

Chapters attached

Inferential Analysis
Chapter 20

NUR 6812 Nursing Research

Florida National University

Introduction – Inferential Analysis

We will discuss analysis of variance and regression, which are technically part of the same family of statistics known as the general linear model but are used to achieve different analytical goals.

ANALYSIS OF VARIANCE

Analysis of variance (ANOVA) is used so often that Iversen and Norpoth (1987) said they once had a student who thought this was the name of an Italian statistician.

You can think of analysis of variance as a whole family of procedures beginning with the simple and frequently used t-test and becoming quite complicated with the use of multiple dependent variables (MANOVA, to be explained later in this chapter) and covariates.

Although the simpler varieties of these statistics can actually be calculated by hand, it is assumed that you will use a statistical software package for your calculations.

If you want to see how these calculations are done, you could try to compute a correlation, chi-square, t-test, or ANOVA yourself (see Yuker, 1958; Field, 2009), but in general it is too time consuming and too subject to human error to do these by hand.

IMPORTANT TERMINOLOGY

Several terms are used in these analyses that you need to be familiar with to understand the analyses themselves and the results. Many will already be familiar to you.

Statistical significance: This indicates the probability that the differences found are a result of error, not the treatment. Stated in terms of the P value, the convention is to accept either a 1% (P ≤ 0.01), or 1 out of 100, or 5% (P ≤ 0.05), or 5 out of 100, possibility that any differences seen could have been due to error (Cortina & Dunlap, 2007).

Research hypothesis: A research hypothesis is a declarative statement of the expected relationship between the dependent and independent variable(s).

Null hypothesis: The null hypothesis, based on the research hypothesis, states that the predicted relationships will not be found or that those found could have occurred by chance, meaning the difference will not be statistically significant.

Effect size: This is defined by Cortina and Dunlap as “the amount of variance in one variable accounted for by another in the sample at hand” (2007, p. 231). Effect size estimates are helpful adjuncts to significance testing. An important limitation, however, is that they are heavily influenced by the type of treatment or manipulation that occurred and the measures that are used.

Confidence intervals: Although sometimes suggested as an adjunct or replacement for the significance level, confidence intervals are determined in part by the alpha (significance level) (Cortina & Dunlap, 2007). Likened to a margin of error, the confidence intervals indicate the range within which the true difference between means may lie. A narrow confidence interval implies high precision; we can specify believable values within a narrow range. A wide interval implies poor precision; we can only specify believable values within a broad and generally uninformative range.

Degrees of freedom: In their simplest form, degrees of freedom are 1 less than the total number of observations. This sometimes-confusing term refers to the smallest number of values (terms) that one must know to determine the remaining values (terms). For example, if you know the weights of 12 out of a sample of 13 people and also the sum (grand total) of the weights of these 13 people, you can easily calculate the weight of the 13th person. In this case, the degrees of freedom would be 12, or 13 minus 1. If you had a second sample of 13 people and again needed to know the weights of 12 to calculate the 13th, the degrees of freedom for the two subsamples together would be 12 + 12 = 24. Not all calculations of degrees of freedom are this simple, but they are based on this principle (Iversen & Norpoth, 1987; Keppel, 2004).

Variance: This is a measure of the dispersion of scores around the mean, or how much they are spread out around the mean. Statistically, it equals the square of the standard deviation (Iversen & Norpoth, 1987; Munro, 2005).

Mean: The mean is the arithmetic average of a set of numbers, usually the scores or other results for a sample or subsample. This is simple to calculate by hand unless you have a very large sample.

Variable: A variable is a characteristic or phenomenon that can vary from one subject to another or from one time to another (O’Rourke, Hatcher, & Stepanski, 2005).
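As a quick illustration of how the mean, variance, and standard deviation relate, the following Python sketch uses a small set of hypothetical weights (the numbers are invented for illustration only):

```python
import numpy as np

# Hypothetical weights (kg) for a sample of 13 people
weights = np.array([62, 70, 81, 59, 74, 68, 77, 85, 66, 72, 90, 64, 79])

mean = weights.mean()              # arithmetic average
variance = weights.var(ddof=1)     # sample variance (n - 1 in the denominator)
std_dev = weights.std(ddof=1)      # sample standard deviation

print(f"mean = {mean:.2f}, variance = {variance:.2f}, sd = {std_dev:.2f}")
print(np.isclose(variance, std_dev ** 2))  # True: the variance is the square of the sd
```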

Independent variable: In experimental research, the independent variable is the treatment or manipulation that occurs. In nonexperimental research, it is the theoretical causative factor that affects the dependent or outcome variable. In other words, it is the explanatory variable, also called the predictor variable.

Dependent variable: In experimental research, the dependent variable is the measured outcome of the treatment (in the broadest sense of the term treatment). In nonexperimental research, the dependent variable is the theoretical result of the effects of the independent variable(s). It is also called the criterion variable.

T-Tests

The cardinal feature of t-tests and ANOVAs also provides an important clue to their usage: these statistical procedures analyze the means of at least one continuous (interval or ratio) response variable in terms of the levels of a categorical variable, which has the role of predictor or independent variable (Der & Everitt, 2006).

The simplest of these statistics are t-tests. They may be used under the following conditions:

There is just one predictor or independent variable that has just two values, such as male/female, treated/not treated, or hospital #1 patients/hospital #2 patients.

There is a single criterion or dependent variable measured at the interval or ratio level.

You can see that the applicability of the t-test is limited by these criteria. In most cases that do not fit these criteria, ANOVA becomes the procedure of choice. There are two common types of t-tests (O’Rourke et al., 2005):

Independent samples: This type of t-test is appropriate when there are two subsamples being compared on an outcome measure. For example, you might randomly assign severe asthma patients to an environmental control education program or a general asthma education program and compare the number of times they used their rescue inhalers in the 3 months following intervention. (Note that this is a posttest-only design; there is no pretest.)

Paired sample: This type of t-test is appropriate when the same subjects constitute each sample being compared under two different sets of conditions. Because they are the same people, the results are obviously not independent of one another and are said to be paired or correlated. For example, you could compare severe asthma patients’ use of rescue inhalers before and after they attend an educational program on environmental control. (Note that this is a one-group pretest-posttest design.)

T-tests may be used in nonexperimental situations as well. Most common is a comparison of naturally occurring groups or events such as the difference between male and female students’ mathematical abilities (an example of independent samples) or a comparison of marital discord scores before and after the birth of the first child (an example of paired samples). An example of each of these t-tests will help to clarify terms and demonstrate their use.
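As a minimal sketch of the two types, the Python code below uses scipy with hypothetical rescue-inhaler counts standing in for the asthma examples above; the numbers and variable names are invented for illustration:

```python
import numpy as np
from scipy import stats

# Independent samples: two different groups of patients, posttest only
environmental_group = np.array([3, 5, 2, 4, 6, 3, 2, 5])  # rescue-inhaler uses in 3 months
general_group       = np.array([6, 7, 5, 8, 6, 9, 7, 5])

t_ind, p_ind = stats.ttest_ind(environmental_group, general_group)
print(f"independent-samples t = {t_ind:.2f}, p = {p_ind:.4f}")

# Paired samples: the same patients measured before and after the program
before = np.array([8, 6, 9, 7, 10, 5, 7, 8])
after  = np.array([5, 4, 7, 6, 7, 4, 5, 6])

t_rel, p_rel = stats.ttest_rel(before, after)
print(f"paired-samples t = {t_rel:.2f}, p = {p_rel:.4f}")
```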

Independent Samples

Independent samples are samples selected randomly so that their observations do not depend on the values of other observations.

Many statistical analyses are based on the assumption that samples are independent.

Others are designed to assess samples that are not independent.

Paired Samples

A paired samples t-test is used to compare the means of two samples when each observation in one sample can be paired with an observation in the other sample.

A paired samples t-test is commonly used in two scenarios:

1. A measurement is taken on a subject before and after some treatment – e.g., the max vertical jump of college basketball players is measured before and after participating in a training program.

2. A measurement is taken under two different conditions – e.g., the response time of a patient is measured on two different drugs.

In both cases we are interested in comparing the mean measurement between two groups in which each observation in one sample can be paired with an observation in the other sample.

Paired Samples t-test: Assumptions

For the results of a paired samples t-test to be valid, the following assumptions should be met (a small sketch of how they might be checked appears after the list):

The participants should be selected randomly from the population.

The differences between the pairs should be approximately normally distributed.

There should be no extreme outliers in the differences.
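A minimal sketch of how these assumptions might be screened in Python, again using hypothetical before/after counts; the data and cutoffs are illustrative, not prescriptive:

```python
import numpy as np
from scipy import stats

# Hypothetical before/after rescue-inhaler counts for the same patients
before = np.array([8, 6, 9, 7, 10, 5, 7, 8])
after  = np.array([5, 4, 7, 6, 7, 4, 5, 6])
diffs = before - after

# Approximate normality of the paired differences (Shapiro-Wilk test)
w, p_normal = stats.shapiro(diffs)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p_normal:.3f}")  # p > .05 suggests no gross departure

# Screen for extreme outliers among the differences with the 1.5 * IQR rule
q1, q3 = np.percentile(diffs, [25, 75])
iqr = q3 - q1
outliers = diffs[(diffs < q1 - 1.5 * iqr) | (diffs > q3 + 1.5 * iqr)]
print("extreme differences:", outliers)
```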

ANOVA Analysis of Variance

Analysis of variance extends the t-test to three or more groups. It is especially useful in examining the impact of different treatments (Muller & Fetterman, 2002).

If you had three subsamples to compare on one outcome measure, you could do this with a set of three t-tests, but this approach is inefficient and increases the risk of type I error.

Instead, analysis of variance performs these comparisons simultaneously and produces a significant result if any of the sample means differ significantly from any other sample mean (Evans, 1996, p. 339).

ANOVA compares the variation or difference between the means of the subsamples or groups with how much variation there is within each group or sub-sample (Iversen & Norpoth, 1987, p. 25).


F Ratio

Analysis of variance computations produce an F ratio. There is usually variation within the groups as well as between the groups. An F ratio is the ratio of between-treatment group variations to within-treatment group variation. F ratios close to 1 indicate the differences are random or chance differences. F ratios much larger than 1 indicate that the difference is greater than would be expected by chance.

One-Way ANOVA

One-way ANOVA is the basic analysis of variance. It involves (1) a single predictor or independent variable that is categorical in nature but may have two or more values, and (2) a single criterion or dependent variable at the interval or ratio level of measurement (O’Rourke et al., 2005, p. 210). As with t-tests, there are two basic types: a between-subjects model, which is similar to the independent samples t-test, and a repeated-measures model, which is similar to the paired t-test.
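A minimal one-way (between-subjects) ANOVA sketch in Python with three hypothetical treatment groups; the scores are invented, and the printed F ratio is interpreted as described above:

```python
from scipy import stats

# Hypothetical outcome scores for three treatment groups
group_a = [24, 28, 22, 30, 27, 25]
group_b = [31, 35, 29, 33, 34, 32]
group_c = [26, 27, 25, 29, 28, 30]

f_ratio, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
# F close to 1 suggests chance differences; F much larger than 1 suggests real group differences
```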


Repeated Measures Designs

These designs are also called within-subjects designs because more than one measurement is obtained on each participant. The simplest of these designs is the testing of the same participants under two or more different treatment conditions.

Advantages of this design, which uses participants as their own controls, are that fewer participants are needed and the treatment groups do not differ (Munro, 2005). These advantages, however, are often outweighed by the following disadvantages:

High attrition rate: A large number of participants are lost from the study between the two treatment conditions.

Order effect: Participants may not be as enthusiastic about trying the second or third treatment option, reducing adherence.

Carryover effect: Participants may continue to experience or benefit from the effects of the first treatment (O’Rourke et al., 2005).
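For the simplest within-subjects case described above, one way to run the analysis in Python is statsmodels' AnovaRM. The sketch below assumes hypothetical long-format data in which each subject is measured once under each of three conditions; all names and values are invented:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: each subject measured once under each of three conditions
data = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["A", "B", "C"] * 4,
    "score":     [5, 7, 6, 4, 8, 7, 6, 9, 8, 5, 7, 7],
})

# One-way repeated-measures (within-subjects) ANOVA
result = AnovaRM(data, depvar="score", subject="subject", within=["condition"]).fit()
print(result.anova_table)
```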


Mixed Designs

A second repeated measures design uses different participants in each treatment group.

This eliminates order and carryover effects, but it does mean that the participants in each treatment group will not be identical. Even with random assignment to treatment group, there will probably be some variation between groups at baseline.

This second repeated measures design is called a mixed design because it will generate both between-group (the different treatment groups) and within-group (change or lack of change from one time to another) measures.

The analysis of the results will provide three types of information:

Change over time

Differences between the groups

The interaction of time and group effects (Munro, 2005)

ANCOVA Analysis of Covariance

Analysis of covariance (ANCOVA) is a procedure in which the effects of factors called covariates are extracted or controlled before the analysis of variance is done (Der & Everitt, 2006).

The covariates are often confounding variables or extraneous variables that contribute to the variation and reduce the magnitude of the differences between the groups being compared.

Controlling for these extraneous or confounding variables can reduce the error variance and increase the power of the analysis (Munro, 2005).

There are two main instances when ANCOVA is used (a small sketch appears after the list):

1. When a variable is known to have an effect on the dependent (outcome) variable in an analysis of variance

2. When the groups being compared are not equivalent on one or more variables, either because they were not randomized or in spite of randomization (Munro, 2005, p. 200)
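A minimal ANCOVA sketch, assuming a hypothetical dataset in which a baseline score serves as the covariate and group is the factor of interest; the data are invented:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: outcome by group, with a baseline score as the covariate
df = pd.DataFrame({
    "group":    ["treat"] * 5 + ["control"] * 5,
    "baseline": [10, 12, 9, 11, 13, 10, 12, 9, 11, 13],
    "outcome":  [18, 21, 16, 20, 23, 14, 16, 12, 15, 17],
})

# ANCOVA: the covariate (baseline) is entered along with the group factor
model = smf.ols("outcome ~ C(group) + baseline", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # group effect adjusted for baseline
```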

Two-Way ANOVA

The ANOVA-based analyses discussed so far have employed a single independent variable at the nominal or categorical level of measurement.

(The independent variable is the treatment variable in experimental research or the explanatory variable in nonexperimental research.)

Two-way analysis of variance allows you to examine the effects of two between-subjects independent variables at once, including the interaction between the two independent variables (Munro, 2005; O’Rourke et al., 2005).
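A minimal two-way ANOVA sketch with two hypothetical between-subjects factors (treatment and sex) and their interaction; the data are invented:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data with two between-subjects factors: treatment and sex
df = pd.DataFrame({
    "treatment": ["educ", "educ", "educ", "educ", "usual", "usual", "usual", "usual"] * 2,
    "sex":       ["F", "F", "M", "M"] * 4,
    "score":     [20, 22, 18, 19, 15, 16, 14, 13, 21, 23, 17, 18, 14, 15, 13, 12],
})

# Two-way ANOVA: treatment * sex expands to both main effects plus the interaction
model = smf.ols("score ~ C(treatment) * C(sex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```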

MANOVA

One additional procedure from the analysis of variance family, often a very useful one, is the multivariate analysis of variance (MANOVA).

You have encountered mention of avoiding type I error (rejecting the null hypothesis when it is true) several times already in this chapter.

When you have a number of criteria or outcome variables that are conceptually related, instead of analyzing each one separately using ANOVA, you can begin the analysis with a MANOVA.
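A minimal MANOVA sketch using statsmodels, with two hypothetical, conceptually related outcomes (pain and fatigue) compared across three groups; the data are invented:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: two conceptually related outcomes (pain, fatigue) across three groups
df = pd.DataFrame({
    "group":   ["a"] * 5 + ["b"] * 5 + ["c"] * 5,
    "pain":    [4, 5, 6, 5, 4, 3, 2, 3, 4, 3, 6, 7, 6, 5, 7],
    "fatigue": [5, 6, 5, 7, 6, 4, 3, 4, 3, 4, 7, 8, 6, 7, 8],
})

# One overall multivariate test before (or instead of) a separate ANOVA on each outcome
manova = MANOVA.from_formula("pain + fatigue ~ group", data=df)
print(manova.mv_test())
```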

REGRESSION ANALYSIS

In this second half of the chapter, we will focus on prediction of the dependent variable based on knowledge of the independent variable rather than on comparison of means.

The discussion will be limited to the most basic and commonly used linear regression analyses. Regression analyses can become very complex in some of their iterations. You will find these discussed in advanced statistics textbooks.

The primary assumption behind linear regression analysis is clearly described by Evans (1996):

Its most essential assumption is that variables x and y have a straight-line relationship with each other…. If that assumption is true for a set of pairs of scores, then y values can be predicted from x values. The stronger the correlation between x and y, the more accurate the predictions (p. 160).

The x variable, by the way, is the predictor (independent) variable, and the y variable is the criterion (dependent) variable. We can do much more than this with regression, but this is the fundamental basis of regression: to predict values of y from values of x.

Simple Linear Regression

There is an interesting and deceptively simple set of cognitive function tests called the category fluency tests.

To administer the test, the examiner asks the person being tested to name, in 1 minute, as many items as possible from a single category, such as animals, fruits, vegetables, words beginning with F, modes of transportation, or items of clothing.

The answers are recorded, and the score is simply the number of relevant, nonredundant items or words generated in 1 minute.

The simplicity of the test makes it easy to understand.

The number of factors that might influence the total score makes it an interesting example to illustrate linear regression analysis.
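A minimal simple linear regression sketch: hypothetical years of education (x) are used to predict category fluency scores (y). The data are invented for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical data: years of education (x) and category fluency score (y) for 10 people
education = np.array([8, 10, 12, 12, 13, 14, 16, 16, 18, 20])
fluency   = np.array([11, 13, 15, 14, 16, 17, 19, 18, 21, 22])

result = stats.linregress(education, fluency)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}, r = {result.rvalue:.2f}")

# Predicting y from x is the fundamental purpose of regression
predicted = result.intercept + result.slope * 15
print(f"predicted fluency for 15 years of education: {predicted:.1f}")
```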

Multiple Regression with Two or More Independent Variables

Multiple regression is used to examine the “collective and separate effects of two or more independent variables on a dependent variable” (Pedhazur, 1982, p. 6).

The discussion will begin with the use of continuous (interval or ratio level) independent variables and then address the use of nominal-level (categorical) independent variables through what is called dummy coding or effect coding.

Multicollinearity

The choice of independent variables to include in a regression equation is often a challenge for the researcher.

Theory underlying the study, the results of prior studies, and the study hypothesis should guide the selection.

It is tempting to include as many variables as possible to boost the R2 and improve the predictive power of the equation, but this approach increases the risk of multicollinearity among the independent variables.

Collinearity may be defined as redundancy among the variables. In other words, some of the independent variables added to a regression equation may contribute little to the information that has already been contributed by other variables.
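One common way to screen for this redundancy is the variance inflation factor (VIF). The sketch below constructs a hypothetical predictor (x2) that is nearly a copy of another (x1) to show how collinearity is flagged; all data are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical predictors; x2 is nearly a copy of x1 to create deliberate redundancy
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.1, size=100)
x3 = rng.normal(size=100)
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

# Variance inflation factors; values well above 10 are a common warning sign
for i, name in enumerate(X.columns):
    print(name, round(variance_inflation_factor(X.values, i), 1))
```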

Dummy Coding

After encountering all those difficult technical terms, this next one, dummy coding, might provide some comic relief.

Despite its odd name, however, dummy coding extends the reach of multiple regression in some very useful ways.

Up to this point, all of the variables entered into the regression analyses have been continuous variables, measured at the interval or higher level. (In some cases, ordinal variables can also be used.)

Dummy coding allows us to include nominal or categorical level variables (called qualitative in some texts) as well.
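A minimal dummy-coding sketch, assuming a hypothetical three-category nominal variable (nursing unit); pandas creates the k - 1 indicator variables explicitly, and the formula interface does the same coding automatically:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data with a three-category nominal variable (nursing unit)
df = pd.DataFrame({
    "unit":  ["icu", "medsurg", "oncology", "icu", "medsurg", "oncology", "icu", "medsurg"],
    "score": [72, 65, 70, 75, 63, 68, 74, 61],
})

# Explicit dummy coding: k categories become k - 1 indicator (0/1) variables
dummies = pd.get_dummies(df["unit"], drop_first=True)
print(dummies.head())

# In a regression formula, C() performs the same coding automatically (first category = reference)
model = smf.ols("score ~ C(unit)", data=df).fit()
print(model.params)
```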

Selection

You could see in the previous section on dummy coding that entering additional independent variables can have an effect on both the weight of other variables and the overall results of the regression.

As mentioned earlier, selecting the variables to enter into the regression equation and deciding which ones should be retained is often a challenge.

Theory and prior research results should be your primary guides, but they do not always provide enough guidance. Another approach is to use the results of exploratory analysis to make these decisions.

There are a number of different selections you can make to conduct this exploratory regression analysis (a sketch of forward selection appears after the list):

Maximum R2: This analysis begins by selecting one or two variables that produce the highest R2, interchanges them until those with maximum improvement in R2 are identified, and then brings in additional variables and interchanges them until the optimum combination is found.

Forward selection: This analysis begins with the simple (one-variable) model that produces the largest R2 and adds variables until no further increase is found. Selected variables are not deleted later in this approach.

Backward elimination: This analysis begins with all of the variables in the regression and then deletes those with the least significance, one at a time, until all remaining variables are at the prespecified level of significance.

Stepwise: This analysis begins with the one-variable model that produces the largest R2, and then adds and deletes variables until no further improvement can be made.
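A minimal sketch of the forward-selection idea: at each step, the candidate predictor that most increases R2 is added, stopping when the improvement falls below a chosen threshold. The function, threshold, column names, and simulated data are hypothetical; commercial software typically uses significance-based entry criteria rather than a raw R2 cutoff.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_selection(df, outcome, candidates, min_improvement=0.01):
    """Add, one at a time, the candidate predictor that most improves R-squared."""
    selected, best_r2 = [], 0.0
    candidates = list(candidates)
    improved = True
    while improved and candidates:
        improved = False
        r2_by_var = {var: sm.OLS(df[outcome], sm.add_constant(df[selected + [var]])).fit().rsquared
                     for var in candidates}
        best_var = max(r2_by_var, key=r2_by_var.get)
        if r2_by_var[best_var] - best_r2 > min_improvement:  # keep only a worthwhile gain
            best_r2 = r2_by_var[best_var]
            selected.append(best_var)
            candidates.remove(best_var)
            improved = True
    return selected, best_r2

# Hypothetical data to exercise the function
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(80, 4)), columns=["age", "education", "hearing", "noise"])
df["fluency"] = 0.6 * df["education"] - 0.4 * df["age"] + rng.normal(scale=0.5, size=80)

print(forward_selection(df, "fluency", ["age", "education", "hearing", "noise"]))
```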

Hierarchical Linear Regression


Instead of putting all the variables into the analysis at once, as is done in multiple regressions, or entering them in an order determined by preset limits such as significance level, hierarchical linear regression employs a series of steps or blocks of variables determined in advance on a theoretical basis.

This is an advanced technique that will not be described in detail but summarized so the reader will have a general idea of when it is an appropriate choice and how it works.
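A minimal sketch of the blockwise idea: two nested models are fitted in a theoretically determined order and the change in R2 across blocks is tested. The variables, block assignments, and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: category fluency with demographic and clinical predictors
rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "age":        rng.integers(60, 90, n),
    "education":  rng.integers(8, 20, n),
    "depression": rng.integers(0, 15, n),
})
df["fluency"] = 30 - 0.1 * df["age"] + 0.5 * df["education"] - 0.3 * df["depression"] + rng.normal(0, 2, n)

def fit_block(predictors):
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["fluency"], X).fit()

# Blocks entered in a theoretically determined order
m1 = fit_block(["age", "education"])                 # block 1: demographics
m2 = fit_block(["age", "education", "depression"])   # block 2: adds a clinical variable

print(f"R2 block 1 = {m1.rsquared:.3f}, R2 block 2 = {m2.rsquared:.3f}")
print(m2.compare_f_test(m1))  # F test of the R-squared change: (F, p, df difference)
```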

Sample Size

The temptation to include (some would say “throw in”) as many variables as possible in the regression equation and the problems associated with doing this have already been mentioned but are emphasized once more when considering the sample size needed to conduct these analyses.

A common rule of thumb is to have at least 10 subjects per variable in the analysis (Munro, 2005). Any fewer than that will result in unstable outcomes and appreciable shrinkage of the adjusted R2.

You can also conduct a power analysis to determine the sample size needed. You can find the formulas to do this in Cohen (1988) or generate a power analysis from your statistical analysis program.
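Power analysis for a full regression model is usually based on Cohen's formulas or specialized software, but the general idea can be illustrated with a simple two-group comparison in statsmodels; the effect size, alpha, and power values below are conventional illustrative choices, not recommendations for any particular study:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of a two-group comparison
# (effect size 0.5, alpha .05, power .80 are conventional illustrative values)
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"about {n_per_group:.0f} participants per group")  # roughly 64
```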

Logistic Regression

Up to this point, we have addressed the regression of a continuous dependent variable on one or more independent variables.

Logistic regression addresses the use of a categorical dependent variable in the equation.

If this variable is dichotomous (having only two different values), then logistic regression is done.

If the dependent variable has three or more values, then a polytomous regression is done.

These analyses may also be done with dependent variables whose categories can be ordered.
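A minimal logistic regression sketch for a dichotomous outcome, using a hypothetical readmission dataset (all values invented); for a dependent variable with three or more unordered categories, a multinomial (polytomous) model would be used instead:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a dichotomous outcome (readmitted: 1 = yes, 0 = no)
df = pd.DataFrame({
    "readmitted":       [1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0],
    "age":              [78, 62, 75, 81, 74, 58, 60, 85, 52, 79, 63, 57, 82, 59, 77, 61],
    "attended_program": [0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1],
})

# Logistic regression for a dichotomous dependent variable
model = smf.logit("readmitted ~ age + attended_program", data=df).fit()
print(model.params)  # coefficients are on the log-odds scale
# For three or more unordered categories, smf.mnlogit() fits a polytomous (multinomial) model
```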

CONCLUSION

One of the best ways to gain an appreciation of the analytic procedures described in this chapter is to apply them to a dataset—your own, if possible, or one of the demonstration datasets that accompany most software packages and statistical textbooks.

Each of these procedures has its uses but also its drawbacks.

It is important to understand both when you apply them in your research.

Obtaining guidance from an experienced researcher and/or statistician will help you not only select the most powerful procedures for your dataset, but also avoid inappropriate applications of them.

Reference

Tappen, R. M. (2015). Advanced Nursing Research. Chapter 20 [VitalSource Bookshelf]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781284132496/

Analysis of Qualitative Data
Chapter 21

NUR 6812 Nursing Research

Florida National University

Introduction

The most structured approach to the analysis of qualitative data uses coding and quantifying of the qualitative data.

Content analysis is a specific case of the quantification of qualitative data. At the other end of the continuum are three of the great traditions in qualitative research: ethnography, grounded theory, and phenomenological analysis.

In between lie a variety of coding and thematizing analyses that operate within a semistructured and unstructured framework.

Each of these reflects a very different tradition, which needs to be kept in mind as you match your data to the appropriate analytic framework.

Data collection and data analysis may occur virtually simultaneously in the most unstructured of these approaches, whereas the more structured analyses are done after the data have been collected and processed.

PROCESSING THE DATA

Faced with a mountain of observational notes and transcribed conversations, many qualitative researchers have thought to themselves, “What do I do now?” Handling that mountain of qualitative data requires some organization.

There are several activities you may need to complete during the data collection and analysis stages to manage the data and facilitate the final analysis.


Organize Your Material

Keep notes organized with sources and time frames clearly identified.

Prepare accurate transcriptions of notes and recordings. This step is not an absolute necessity if you (1) plan to hand code (as opposed to using a software program), (2) will do all the analysis yourself, (3) write clearly, and (4) do not have a mountain of data.

Upload Data for Analysis

Upload the texts.

Maintain lists of the codes you have created.

Mark text by code or theme.

Retrieve text that you have coded.

Support creation of a hierarchy of codes and themes.

Allow you to write memos and attach them to text.

WHY QUANTIFY?

Even in qualitative research, there are occasions when counting is useful, including the following (a small counting sketch appears after the list):

To describe the sample

To report the frequency of a response

To compare frequencies across subgroups

To combine quantified qualitative data with other quantitative data.
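A small counting sketch, assuming hypothetical coded responses and subgroups, showing frequencies overall and compared across subgroups:

```python
import pandas as pd

# Hypothetical coded responses: one row per participant, with a subgroup label
coded = pd.DataFrame({
    "group":    ["day shift", "day shift", "night shift", "night shift", "day shift", "night shift"],
    "response": ["workload", "staffing", "workload", "workload", "staffing", "fatigue"],
})

# Frequency of each coded response, overall and compared across subgroups
print(coded["response"].value_counts())
print(pd.crosstab(coded["group"], coded["response"]))
```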

STRUCTURED AND SEMISTRUCTURED ANALYSIS

Coding

Coding is “a deliberate and thoughtful process of categorizing the content of the text” (Gibbs, 2007, p. 39). Two purposes for coding will be discussed.

Coding of responses to structured and semistructured questions for the purpose of quantifying them is our immediate interest. Later in the chapter, the coding of text for qualitative analysis will be illustrated. More specifically, Miles, Huberman, and Saldaña describe codes as tags or “labels that assign symbolic meaning to the descriptive or inferential information compiled during a study” (2014, p. 71).

They identify several types of codes that can be affixed to responses or portions of text:

Descriptive codes: These are the most concrete level of labeling, simply dividing the data into categories or groups of phenomena. In most qualitative studies, you will want to use the respondents’ own words as much as possible to preserve their language.

Interpretive codes: These codes are more abstract and are based on your understanding of the meaning underlying what has been said or done.

Pattern codes: Even more abstract and complex, these codes suggest connections between various patterns or meanings and are indicative of possible themes within the data (Miles & Huberman, 1994).

A simple example will be used to illustrate the use of coding to quantify qualitative data at relatively concrete levels of analysis—the descriptive and interpretive.

CONTENT ANALYSIS

Content analysis and its kin, narrative, conversation, and discourse analysis, are a special case in qualitative analysis.

Content analysis is “a family of analytic approaches ranging from impressionistic, intuitive, interpretive analysis to systematic, strict textual analyses” (Hsieh & Shannon, 2007, p. 61).

In other words, content analysis may range from highly structured, quantitative analysis to unstructured qualitative analysis.

Words in naturally occurring verbal material (text), whether recorded conversation, diaries, reports, electronic text, or books, constitute the data used in content analysis (McTavish & Pirro, 2007, p. 217).

ANALYZING THE TEXT

Selecting an approach to the analysis of text begins with a well-constructed research question. As Krippendorff (2004) reminds us, texts convey many different meanings. Several questions need to be answered in constructing the research question:

• What am I looking for in this text?

• Is the focus on content, interpersonal interaction, or both?

• Will I be working at the micro level or macro level of analysis? How micro or macro?

• What type of content analysis best answers my question?

◦ Quantitative or qualitative?

◦ Analysis of the story or experience or the interactional processes that occur?

◦ Strong emphasis on the influence of context or minimal attention to specific context?


Data Transcription

Data transcription should reflect the analytical purpose. For some purposes, especially conversation and linguistic analyses, you need to have every inflection, pause, vocalization, and contraction noted precisely.

Data Exploration

After transcription is completed and checked for accuracy, your next step is to read and reread the entire dataset, making notes on general impressions, possible coding schemes, and the different perspectives from which you could analyze the data. If you plan to do a conversational analysis, ten Have (1999) suggests looking at turn-taking, including pauses and overlaps; sequencing, including the beginning and end of a particular sequence or “chunk” of the conversation that follows a specific thread; and what each participant is doing on each turn and the form chosen for it.

Completing the Analysis

Once the coding scheme has been created, it is time to apply it to the entire dataset. If the dataset is large, use of a qualitative data analysis program is very helpful. Following is a list of some of the activities this type of program can support (Krippendorff, 2004, p. 262); a small word-counting illustration appears after the list:

Dividing the text into analytical units: These units could be syllables, words, phrases, sentences, or even paragraphs.

Searching the text: Find, list, sort, count, retrieve, and cross-tabulate the identified analytical units (Krippendorff, 2004, p. 262).

Computational content analysis: The results of the coding can be analyzed quantitatively in some cases.

Interactive hermeneutic approaches: This is an interpretive approach to the analysis. Second- and third-level coding is supported by most qualitative analysis programs.
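A small illustration of dividing text into analytical units (single words, in this toy example) and counting them; the text fragment is invented:

```python
import re
from collections import Counter

# A hypothetical fragment of transcribed text
text = "I felt supported by the team. The team listened, and I felt heard."

# Divide the text into analytical units (single words) and count them
words = re.findall(r"[a-z']+", text.lower())
print(Counter(words).most_common(5))
```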

UNSTRUCTURED ANALYSIS

The goal of most qualitative analysis is to move beyond the most concrete, descriptive level to higher levels of abstraction and interpretation, identifying the themes and sometimes constructing new concepts or theoretical propositions from the results.

ETHNOGRAPHIC ANALYSIS

The ethnographic approach to data collection and analysis is designed to achieve understanding of other cultures.

Originally employed by anthropologists who often devoted years to the study of remote places and people, it has also been used to study subcultures closer to home: street people, gay and lesbian groups, hospital …
