Evaluative Research Mini Course: Analysis
Analysis
Analysis is the process of turning collected results into meaningful information and is therefore a crucial part of any evaluation. A well-conducted analysis will yield evaluation findings that you can use to strengthen your program.
The techniques used to analyze information collected depend on the type collected and what you hope to learn. There are no hard and fast rules for analyzing information – analysis is itself both an ‘art’ and a ‘science.’ That said, it is crucial that analysis follows a methodical and systematic approach to ensure that you don’t draw incorrect or inappropriate conclusions.
Strategies for analysis should be considered before information is collected, during the evaluation design stage.
Third-Party Analysis
If you do not have training in analysis techniques, it may be best to consult a professional for guidance, or to enlist a professional to take a lead role in conducting the analysis while interacting with you frequently to ensure that the analysis answers the questions you feel are most important. The Statistical Society of Canada may be a good resource if you don't already have a statistician in mind.
Note: You cannot just give the information you collected to someone else and say “analyze it”. There are a huge number of options as to how to approach the large amount of information you have gathered – and it is essential that you stay actively involved in ensuring that the data analysis is structured to answer the right questions.
Information Quality
The following constructs are often used to assess information/data quality:
  • Validity: Information measures what it is intended to measure.
  • Reliability: Information is measured and collected consistently according to standard definitions and methodologies; the results are the same when measurements are repeated.
  • Completeness: All elements are included (as per the definitions and methodologies specified).
  • Precision: Information has sufficient detail.
  • Integrity: Information is protected from deliberate bias or manipulation for political or personal reasons.
  • Timeliness: Information is up to date (current) and is available on time.
Tips to Improve Data/Information Quality
  • Use standardized collection tools, which have already been tested in real-life situations
  • If you need to make changes to adapt tools to your local context, you should try to conduct a pilot test to improve the tool before using it more generally
  • Use experienced collectors when possible
  • Provide training for collectors on the specific tool and/or supervise collection to reduce bias (e.g., inappropriate prompting for answers during interviews) and errors (e.g., misunderstanding which program elements need to be observed)
  • Consult key stakeholders (e.g., program staff and participants) throughout the evaluation process (participatory evaluation)
Data/Information Management
It is crucial to develop effective and consistent processes for managing data/information as it is collected, stored, transferred and processed.
Even the most rigorous collection effort can produce inaccurate or missing information. Data/information cleaning involves finding and dealing with any errors that occur during the collecting, storing, transferring or processing of information.
Identifying Objectives of Analysis
Understanding the topic/issue being investigated as well as its relationship with relevant social, economic and/or environmental factors is essential to guide the analysis effort. Specifying your objectives and formulating a set of questions to be answered by analyzing the information is critical to having a meaningful evaluation.
Questions you may consider are:
  • What is the topic or issue?
  • What is the context? For whom and under what circumstances?
  • How will the analysis be used?
At this stage, it may also be useful to formulate a set of expectations for what the information will reveal. An understanding of why certain patterns may emerge and what they may mean will help with analysis and drawing conclusions. Additionally, it may be useful to define upfront what constitutes “success” by constructing specific guidelines.
Quantitative Data Analysis
With quantitative data, the goals are to describe patterns in the numbers collected and explore relationships or effects. Analysis will vary based on the data collection method used.
You could start by running a frequency distribution to see how each question was answered by the respondents – for example, how many people gave a question 5 out of 5 for level of agreement, how many said 4, and so on – so you can see how much variability there is on each question. This is called univariate analysis.
Once this has been done, you can decide which variables are important to look at separately. For example, do you want to see how many females answered each question a certain way compared to males? This is called bivariate analysis.
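A minimal sketch of these first steps in Python with pandas; the question name, grouping variable, and responses are hypothetical:

```python
# A minimal sketch of univariate and bivariate exploration with pandas.
import pandas as pd

df = pd.DataFrame({
    "gender":    ["F", "M", "F", "F", "M", "M", "F", "M"],
    "agreement": [5, 4, 5, 3, 4, 2, 5, 4],  # 1 = strongly disagree ... 5 = strongly agree
})

# Univariate: frequency distribution for one question
print(df["agreement"].value_counts().sort_index())

# Bivariate: compare response patterns across gender
print(pd.crosstab(df["gender"], df["agreement"]))
```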
Tips for Quantitative Analysis:
  • The level of data analysis should be appropriate to the data gathered. For example, if the sample is small, sophisticated data analytic techniques may not be warranted, and statistical analysis may make no sense.
  • Results should be interpreted with caution, particularly where a change is observed
  • All data collected as part of the evaluation should be included in the analysis to reduce bias and improve the validity of the evaluation findings
  • It is important to consider response rates and missing data
  • It is recommended that the evaluator ask a peer to look over their analysis and to verify their interpretations of the data to reduce bias and improve the credibility of the findings
  • It is particularly important to triangulate quantitative data with observations and knowledge from other means, to ensure everything makes sense. It is essential that the report be reviewed with people who know the context.
  • For more in-depth analysis, it is recommended to consult an evaluation consultant or an academic partner with the necessary expertise
Constructing a Measurement Matrix
A common mistake made by evaluators is failure to develop a clear analysis plan at the outset. In order to maximize the usefulness of the quantitative data collected, it is paramount that evaluators decide how they will use all of the information collected, painting a clear picture of the variables that will be constructed from the questions asked as well as the analysis procedures that will be used.
One useful tool to clarify these plans is to construct a Measurement Matrix, which displays how each item in a survey will be used to address major evaluation constructs. For each survey question, the matrix may identify:
  • the study concept it intends to operationalize,
  • the level of measurement,
  • the specific objective/hypothesis addressed,
  • the scales from which questions were drawn and their reliability and validity, and
  • the intended analysis.
Creating this matrix challenges the evaluator to consider the usefulness of each item in terms of their objectives and consider which items may be removed or added.
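As an illustration, a measurement matrix can be kept as a simple table. The sketch below builds a hypothetical two-item matrix in Python with pandas; all item wordings, objectives, and source scales are invented placeholders:

```python
# A minimal sketch of a measurement matrix as a table; every entry is hypothetical.
import pandas as pd

matrix = pd.DataFrame([
    {"item": "Q1: I feel confident speaking in groups",
     "concept": "self-esteem", "level": "ordinal (1-5 Likert)",
     "objective": "Obj 2: increase participant self-esteem",
     "source_scale": "adapted self-esteem scale items",
     "planned_analysis": "frequencies; pre/post comparison"},
    {"item": "Q2: sessions attended",
     "concept": "program dosage", "level": "ratio (count)",
     "objective": "Obj 1: sustained participation",
     "source_scale": "n/a (attendance records)",
     "planned_analysis": "mean/median; correlation with outcomes"},
])
print(matrix.to_string(index=False))
```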
Preparing Quantitative Data for Analysis
Quantitative data must be entered or imported into spreadsheets in data analysis software such as Microsoft Excel or more advanced software programs such as SPSS, SAS, R, or STATA.
Data should be coded so that all data are in number form and cleaned.
You must also decide how to deal with missing data.
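A minimal sketch of coding and cleaning in Python with pandas; the variables, the coding scheme, and the choice to drop incomplete cases are hypothetical examples, not the only options:

```python
# A minimal sketch of coding responses as numbers and handling missing data.
import pandas as pd

df = pd.DataFrame({
    "marital_status": ["married", "not married", None, "married"],
    "score": [4.0, None, 3.0, 5.0],
})

# Code text categories as numbers (1 = married, 2 = not married)
df["marital_code"] = df["marital_status"].map({"married": 1, "not married": 2})

# One common (but not only) strategy: drop cases with missing values
complete = df.dropna()
print(complete)

# Alternatively, report how much is missing before deciding how to handle it
print(df.isna().sum())
```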
Types of Quantitative Data
Categorical data have a limited number of possible values.
  • Nominal: Numbers assigned to categories do not necessarily have inherent meaning and the order of the categories may be arbitrary. For example, when asking about marital status, there are a limited set of possible responses and categories can be ordered in numerous ways (e.g. 1 = “married”, 2 = “not married”).
  • Ordinal: Data are ordered, but the distances are not quantifiable (you cannot add or subtract). A question where the responses range from 1 = “strongly agree” to 5 = “strongly disagree” is an example of this type of categorical data.
Continuous data can have almost any numeric value along a continuum and can be broken down into smaller parts and still have meaning. Age, weight, height, and income are all examples of continuous data.
  • Interval: Data are like ordinal data except that the intervals between values are equidistant. This allows us to order the items measured and to quantify and compare the magnitudes of differences between them. For example, the difference between 20 and 21 degrees Fahrenheit is the same magnitude as the difference between 70 and 71. Interval data can take on positive or negative values.
  • Ratio: Data are like interval data, but with a true zero point, meaning there can be nothing less than zero (no negative numbers). When the variable equals 0, there is none of that variable. For example, time is a ratio variable, since 0 time is meaningful; height and weight are also ratio variables. Ratio data are continuous, non-negative measurements.
Types of Analysis: Descriptive Analysis
Descriptive statistics are simple quantitative summaries of the characteristics of the data set you have collected using totals, frequencies, ranges and averages. This helps you understand the data set in detail and put the data in perspective. Descriptive statistics do not allow us to make conclusions beyond the data we have analysed or reach conclusions regarding any hypotheses we might have made.
Types of Analysis: Inferential Analysis
Inferential analysis involves making statements, or inferences, about the world beyond the data you have collected. These analytical techniques can enable you to gain a deeper understanding of the data, including change over time; comparison between groups; comparing like with like; and relationships between variables.
Inferential statistics enable the evaluator to make judgments about how likely it is (the probability) that an observed outcome is due to chance.
Inferential statistics:
  • Infer from the sample to the population
  • Determine the probability that the population has certain characteristics, based on the characteristics of your sample
  • Help assess the strength of the relationship between your independent (causal) variables and your dependent (effect) variables
Why use inferential statistics?
  • Many peer-reviewed academic journals will not publish articles that do not use inferential statistics
  • Allows you to generalize your findings to the larger population
  • Allows you to assess the relative impact of various program inputs on your program outcomes/objectives.
Parametric inferential statistics: Used for data that follow an approximately normal distribution (i.e., the distribution parallels a bell curve). Used for interval or ratio scales.
Non-parametric inferential statistics: Used for data that do not follow a normal distribution (i.e., the distribution does not parallel a bell curve). Used for nominal or ordinal data.
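A minimal sketch contrasting the two families of tests in Python with scipy; the groups and scores are hypothetical, and which test is appropriate depends on your data's distribution:

```python
# A minimal sketch: a parametric and a non-parametric test on the same
# two (hypothetical) groups of scores.
from scipy import stats

group_a = [12, 15, 14, 10, 13, 16, 12]
group_b = [9, 11, 8, 10, 12, 9, 7]

# Parametric: independent-samples t-test (assumes roughly normal data)
t, p_t = stats.ttest_ind(group_a, group_b)

# Non-parametric alternative: Mann-Whitney U (no normality assumption)
u, p_u = stats.mannwhitneyu(group_a, group_b)

print(f"t-test p = {p_t:.3f}; Mann-Whitney p = {p_u:.3f}")
```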
Graphical Analysis and Visualization
Data visualization is a powerful tool for both the analysis and communication of evaluation findings. Graphical analysis is a useful way to gain an instant picture of the distribution of data and to identify any relationships in the data that may require further investigation and may otherwise be difficult to discern. A range of graphical techniques can be used to present data in a visual format (e.g., column graphs, row graphs, dot graphs, and line graphs). Selecting which type of visualization to use will depend on the nature of the data. Different types of visualization may be better suited for data analysis and communication, respectively.
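A minimal sketch in Python with matplotlib, plotting the same hypothetical data as a column graph and as a line graph:

```python
# A minimal sketch showing the same (hypothetical) attendance data
# in two chart formats with matplotlib.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
attendance = [23, 31, 28, 35]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(months, attendance)               # good for comparing categories
ax1.set_title("Column graph")
ax2.plot(months, attendance, marker="o")  # good for showing change over time
ax2.set_title("Line graph")
plt.tight_layout()
plt.show()
```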
For further information on specific methods of data visualization, see BetterEvaluation.org.
Example of Data Visualization
[Example visualizations appeared here: the same data presented in several different chart formats, for both data analysis and communication.]
Univariate Analysis (one variable)
The first step to understanding a data set is to look at each variable in detail, one at a time, using univariate statistics. Even if you plan to take your analysis further to explore the linkages, or relationships, between two or more of your variables, it is best to start by examining each variable very carefully on its own. Univariate analysis is useful for profiling the characteristics of participants, and can also help with identifying missing data, outliers, and low response rates.
Examples of Univariate Analysis
Frequencies, or counts, describe how many times something has occurred within a given interval, such as a particular category or period of time. For example, the number of sessions attended by a participant is a frequency. Frequencies can be used for categorical or continuous data.
Percentages are the given number of units divided by the total number of units, multiplied by 100. Percentages can be used for categorical or continuous data. For example, if 10 out of 20 participants are girls, then (10 ÷ 20) × 100 = 50% of participants are girls.
Ratios show the numerical relationship between two groups. For example, the ratio of the number of participants in a particular program (18) to the number of facilitators in that same program (3) would be 18/3, or 6:1. Ratios can only be used for continuous data.
Mean, median, and mode are three summary measures representing a central value of a distribution (also called measures of central tendency). A mean, or average, is determined by summing all the values and dividing by the total number of units in the sample. A median (the middle value) is the 50th percentile point, with half of the values above the median and half of the values below the median. A mode is the category or value that occurs most frequently within a dataset.
Range, inter-quartile range, variance, and standard deviation are summary measures that provide information about how values are distributed around the centre, demonstrating how much variation there is in the data (also called measures of dispersion). The range is the difference between the highest and lowest scores in a data set and is the simplest measure of spread. The inter-quartile range is the difference between the upper and lower quartiles (the three points that divide the data set into four equal groups, each comprising a quarter of the data). The variance indicates how spread out the data are, calculated as the average squared deviation from the mean. The standard deviation is the square root of the variance and represents the typical distance of scores from the average.
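A minimal sketch computing these univariate summaries in Python with numpy; the scores are hypothetical:

```python
# A minimal sketch of central tendency and dispersion measures.
from collections import Counter
import numpy as np

scores = np.array([3, 5, 4, 4, 2, 5, 4, 3, 5, 4])  # hypothetical responses

print("mean:", scores.mean())
print("median:", np.median(scores))
print("mode:", Counter(scores.tolist()).most_common(1)[0][0])
print("range:", scores.max() - scores.min())
print("inter-quartile range:", np.percentile(scores, 75) - np.percentile(scores, 25))
print("variance:", scores.var(ddof=1))            # sample variance
print("standard deviation:", scores.std(ddof=1))  # sample standard deviation
```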
Confidence intervals are used to estimate a value/score in a population based on the score of the participants in your sample. A 95% confidence interval indicates you are 95% confident that you can predict/infer the value/score of a population within a specified range based on the value/score of your sample.
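A minimal sketch of a 95% confidence interval for a mean in Python with scipy, assuming a small sample of roughly normal, hypothetical scores:

```python
# A minimal sketch of a 95% confidence interval for a mean, using the
# t distribution (appropriate for small samples of roughly normal data).
import numpy as np
from scipy import stats

scores = np.array([3, 5, 4, 4, 2, 5, 4, 3, 5, 4])
mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean

low, high = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```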
Test Statistics
Test statistics are inferential statistics that are used to test statistically how likely a relationship or difference between groups or variables is to be due to chance. Different statistical procedures use different test statistics (e.g., t-test, F-test, chi-square statistics). Since the “truth” of a theoretical hypothesis can never be known for certain, you can use test statistics to determine whether to reject the null hypothesis.
The “null” hypothesis is that any relationship or change observed is due to chance (e.g., that there is no difference or relationship between the variables). If you reject the null hypothesis, you are concluding that it is unlikely that an observation is due to chance alone - it may be an effect of your program or of some other variable.
The “alternative” hypothesis states that any difference or relationship is not random or due to chance.
In reading about quantitative analysis, you will often encounter the phrase statistically significant and read about the p-value. A statistically significant result is one that is unlikely to be due to chance alone. The test statistic is used to calculate the p-value, and the p-value helps you determine the significance of your results.
  • A small p-value (typically ≤ .05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis and therefore reject that the observation is due to chance.
  • A large p-value (> .05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis and cannot rule out that the observation is due to chance.
  • p-values very close to the cutoff (.05) are considered to be marginal (could go either way). Always report the p-value so your readers can draw their own conclusions.
Bivariate Analysis (2 variables)
Bivariate analysis involves looking at associations between pairs of variables and trying to understand how these associations work.
Questions to Consider in Bivariate Analysis:
  1. How big/important is the association?
  2. Is the association statistically significant? That is, is it unlikely to be due to chance and likely to exist in the overall population to which we want to generalize? Statistical tests answer this question.
  3. What is the direction of the association? (look at graphs)
  4. What is its shape? Is it linear or non-linear? (look at graphs)
Bivariate Analysis: Contingency Tables and Chi Square Statistic
When are they used?
When you have two categorical variables and you want to know if they are related (e.g., gender and a categorical outcome measure).
How do you interpret them?
The chi-square statistic can be used to determine the strength of the relationship (e.g., does knowing someone’s gender help you predict their outcome score/value?). If the probability associated with the chi-square statistic is .05 or less (p ≤ .05), then you can assert that the independent variable can be used to predict scores on the dependent or outcome variable. You can also use the contingency table to compare the actual scores across the independent variable on the dependent variable or outcome measurement (e.g., compare the number/percent of males who agreed that the program had a positive impact on their lives to the percent of females who agreed).
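A minimal sketch of a contingency table and chi-square test in Python with pandas and scipy; the gender and impact responses are hypothetical:

```python
# A minimal sketch of a chi-square test of independence.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M", "F", "M"],
    "impact": ["agree", "agree", "disagree", "agree", "agree",
               "disagree", "agree", "disagree", "disagree", "agree"],
})

table = pd.crosstab(df["gender"], df["impact"])  # the contingency table
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```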
Bivariate Analysis: T-test or ANOVA
When is it used? When you have a categorical and a continuous variable and you want to compare the mean scores of two groups (t-test) or more than two groups (ANOVA) (e.g., you want to compare the mean self-esteem scores of program participants across racial groups).
How do you interpret it? The t or F statistic can be used to determine whether the groups have significantly different means. If the probability associated with the t or F statistic is .05 or less (p ≤ .05), you can assert that there is a difference in the means.
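A minimal sketch of both tests in Python with scipy, using hypothetical group scores:

```python
# A minimal sketch of a t-test (two groups) and a one-way ANOVA
# (three groups) on hypothetical self-esteem scores.
from scipy import stats

group1 = [14, 16, 15, 13, 17]
group2 = [11, 12, 10, 13, 11]
group3 = [15, 14, 16, 15, 13]

t, p = stats.ttest_ind(group1, group2)         # compare two means
print(f"t = {t:.2f}, p = {p:.3f}")

f, p = stats.f_oneway(group1, group2, group3)  # compare three or more means
print(f"F = {f:.2f}, p = {p:.3f}")
```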
Bivariate Analysis: Pearson Correlation
When is it used? When you have a continuous independent variable and a continuous dependent variable.
How do you interpret it? Pearson’s correlation is a statistical measure ranging from -1.0 to +1.0 that indicates how strongly two variables are related. A positive correlation (0 to +1.0) indicates that two variables increase or decrease together, while a negative correlation (-1.0 to 0) indicates that as one variable increases, the other decreases. When the probability associated with the test statistic is .05 or less (p ≤ .05), you can assume there is a relationship between the dependent and independent variable. For instance, you may want to know whether the number of hours participants spend in your program is positively related to their scores on a self-esteem scale. Correlation quantifies the degree to which two variables are related; it does not fit a line through the data.
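A minimal sketch of a Pearson correlation in Python with scipy, using hypothetical hours and self-esteem scores:

```python
# A minimal sketch: is hours of participation related to a self-esteem score?
from scipy.stats import pearsonr

hours = [2, 5, 8, 10, 12, 15, 18, 20]
self_esteem = [10, 12, 13, 15, 14, 17, 18, 20]

r, p = pearsonr(hours, self_esteem)
print(f"r = {r:.2f}, p = {p:.3f}")  # r near +1 suggests a strong positive relationship
```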
Bivariate Analysis: Linear Regression
When is it used? Like Pearson’s correlation, when you have a continuous independent variable and a continuous dependent (outcome) variable. With a correlation, you don't have to think about cause and effect; you simply quantify how well two variables relate to each other. With regression, you do have to think about cause and effect, as the regression line is determined as the best way to predict Y (the dependent variable) from X (the independent variable).
How do you interpret it? When the probability associated with the F statistic is .05 or less (p ≤ .05), you can assume there is a relationship between the dependent and the independent variable. Regression is a way of describing how one variable, the outcome, is numerically related to predictor variables. The regression equation (Y = a + bX) can be used to make predictions of Y based on values of X.
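A minimal sketch of simple linear regression in Python with scipy, using the same kind of hypothetical data:

```python
# A minimal sketch of simple linear regression, predicting a hypothetical
# outcome score (Y) from hours of participation (X).
from scipy.stats import linregress

hours = [2, 5, 8, 10, 12, 15, 18, 20]
outcome = [10, 12, 13, 15, 14, 17, 18, 20]

result = linregress(hours, outcome)
print(f"Y = {result.intercept:.2f} + {result.slope:.2f}X, p = {result.pvalue:.4f}")

# The fitted equation can be used to predict Y for a new X value
predicted = result.intercept + result.slope * 16
print(f"predicted outcome at 16 hours: {predicted:.1f}")
```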
Multivariate Analysis (more than 2 variables)
Multivariate analysis involves understanding the effects of two or more independent variables at a time on a dependent variable.
Questions to Ask in Multivariate Analysis
  1. How is a relationship between two variables changed if a third variable is controlled? (Multiple crosstabs, partial correlation, multiple regression, MANOVA)
  2. What is the overall variance of a dependent variable that can be explained by several independent variables? What are the relative strengths of different predictors (independent variables)? (Multiple regression)
  3. What groups of variables tend to correlate with each other, given a multitude of variables? (Factor analysis)
  4. Which individuals tend to be similar concerning selected variables? (Cluster analysis)
Multivariate Analysis: Elaborated Chi-Square statistic
When is it used? When you have more than one independent categorical variable, and one dependent categorical variable.
How is it interpreted? You divide one of the independent variables into groups and then run a chi-square for each group (e.g., divide gender into males and females; for females, run a chi-square of the outcome measurement by race, and then do the same for males).
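A minimal sketch of this subgroup approach in Python with pandas and scipy; the variables, categories, and patterned data are hypothetical:

```python
# A minimal sketch of an "elaborated" chi-square: a separate chi-square
# of outcome by race within each gender group.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "gender":  ["F", "F", "F", "F", "M", "M", "M", "M"] * 4,
    "race":    ["A", "B", "A", "B"] * 8,
    "outcome": ["improved", "no change"] * 16,
})

for gender, group in df.groupby("gender"):
    table = pd.crosstab(group["race"], group["outcome"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"{gender}: chi-square = {chi2:.2f}, p = {p:.3f}")
```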
Multivariate Analysis: Multivariate Regression
When is it used? Multivariate regression is used when you have more than one independent (causal) variable and one dependent (effect or outcome) variable. You not only want to know whether your intervention has an impact on the outcome, but also which aspects of your intervention have an impact and/or the relative impact of different aspects of your intervention.
How do you interpret it? If the probability associated with the F statistic is .05 or less (p ≤ .05), the model as a whole likely has statistically significant predictive capability; if p > .05, none of the independent variables in the model are correlated with the dependent variable. If the probability associated with the t statistic for each of the independent variables is .05 or less (p ≤ .05), you can assert that that independent variable has an impact on the outcome, independent of the other variables. A predictor that has a low p-value is likely to be a meaningful addition to your model because changes in the predictor's value are related to changes in the response variable. Conversely, a larger (non-significant) p-value suggests that changes in the predictor are not associated with changes in the response. The values of the t statistics can be compared across the independent variables to determine the relative value of each.
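A minimal sketch of a multiple regression in Python with statsmodels; the two predictors and the outcome are hypothetical:

```python
# A minimal sketch of multiple regression: two hypothetical aspects of an
# intervention predicting an outcome score.
import numpy as np
import statsmodels.api as sm

hours_attended = np.array([2, 5, 8, 10, 12, 15, 18, 20])
mentoring_sessions = np.array([1, 1, 2, 3, 3, 4, 5, 5])
outcome = np.array([10, 12, 13, 15, 14, 17, 18, 20])

# Stack the predictors and add an intercept term
X = sm.add_constant(np.column_stack([hours_attended, mentoring_sessions]))
model = sm.OLS(outcome, X).fit()

print(model.summary())  # reports the overall F statistic and a t statistic per predictor
```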
A useful overview of techniques for quantitative analysis can be found at BetterEvaluation.org
Qualitative Data Analysis
Qualitative analysis is sometimes poorly understood in evaluation and there is sometimes a sense that qualitative data are less authoritative or reliable than quantitative data. However, when executed rigorously and methodically, qualitative analysis can provide reliable and trustworthy insight into questions that cannot be answered by quantitative data, such as how and why a program may be effective or the meaning of an arts process to participants. In fact, it is important to stress that qualitative analysis is no less rigorous than quantitative analysis – it is just designed to answer different questions!
Qualitative Analysis vs. ‘Anecdotal’ Reporting
It is important to distinguish qualitative analysis from ‘anecdotal’ reporting.
It is sometimes thought that ‘anecdotal’ evidence, which often takes the form of personal testimonials or single case studies, will be effective in winning the hearts and minds of funders and policy makers. ‘Anecdotal’ evidence is different from systematic evidence collected through qualitative evaluation activities. ‘Anecdotal’ evaluation is unlikely to be taken seriously by people external to the project and contains so many inherent biases that it is unlikely to be useful for project development in the longer term. While telling stories can be useful, an individual story or anecdote cannot be generalized, so it cannot substitute for systematic evaluation.
Conversely, balanced reporting of qualitative data that are methodically collected can produce rich, detailed evidence and stories that can inform advocacy and provide meaningful information to support project improvement. While some techniques and theories of qualitative research may be too complex for many project evaluation settings, the key principles of qualitative research can be usefully applied: in particular, commit to treating the information you collect methodically, fairly and comprehensively, and avoid selecting only the examples that seem to tell the most exciting story or the story that funders and other external audiences are assumed to want to hear.
Tips to Improve Qualitative Data Quality
  • The interviewer or facilitator must be skilled at guiding the discussion without leading it to fit their own agenda.
  • The interviewer or facilitator must be especially sensitive to the instances when participants may feel inhibited or find it difficult to discuss challenges and problems that they have experienced within the project.
  • In addition to asking initial questions, the interviewer needs to be skilled at following up with prompts, ensuring that the interviewee is relaxed and that the process is not intrusive or upsetting.
  • It might be preferable to undertake interviews and focus groups in naturalistic settings where project activity takes place, so that participants are familiar with the setting and associate it with the activity being discussed.
  • Interviews that include sensitive topics should not be undertaken in settings where participants might be distracted by activity going on, or where there is no guarantee that the interview will not be interrupted.
  • Interviewers need to have in place a range of strategies for responding appropriately to a range of disclosures that may need action, and opportunities to debrief in case they themselves find the process challenging.
  • Analysis normally takes place on completion of the project. However, if the project is of a lengthy duration or a lot of data are gathered over the course of the project, it may be helpful to analyse data at intervals throughout the project to minimize the amount of work required post-project and also to ensure that any information gathered is still fresh in the evaluator’s mind.
Preparing data for analysis
When you are conducting interviews or focus groups, be sure to take notes and/or record the sessions so they can be transcribed verbatim later. You can use software such as Microsoft Office® and/or NVivo 10 to transcribe the audio recordings. Qualitative questionnaire responses and other documents may also be entered or imported into these software programs for analysis.
Textual analysis
Content analysis: reducing large amounts of unstructured textual content into manageable data relevant to the (evaluation) research questions (e.g., identifying the instances where particular words are used by participants in feedback forms; see the sketch after this list).
Thematic coding: recording or identifying passages of text or images that are linked by a common theme or idea, allowing the text to be indexed into categories.
Framework matrices: a method for summarizing and analyzing qualitative data in a matrix table, allowing data to be sorted by case and by theme.
Narratives: construction of coherent narratives of the changes occurring for an individual, a community, a site or a program or policy.
Timelines and time-ordered matrices: aids analysis by allowing for visualization of key events, sequences and results.
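A minimal sketch of the word-counting side of content analysis in Python; the feedback responses and target words are hypothetical:

```python
# A minimal sketch of simple content analysis: counting how often
# particular words appear in (hypothetical) feedback-form responses.
from collections import Counter
import re

responses = [
    "I felt more confident after the workshop",
    "The workshop helped me feel confident speaking",
    "I enjoyed the sessions and made friends",
]

words = Counter()
for text in responses:
    words.update(re.findall(r"[a-z']+", text.lower()))

for word in ("confident", "workshop", "friends"):
    print(word, words[word])
```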
Thematic Analysis
Thematic analysis involves a step-by-step process that seeks to stay close to participants’ words, coding responses and successively grouping them so that overarching themes can be identified. It is useful for identifying patterns in qualitative data, including similarities and differences, trends, and unusual responses or cases. It can be undertaken relatively quickly, is easy to learn, and allows evaluators to summarise a large volume of data.
  1. Familiarise yourself with the data by reading and rereading it.
  2. Generate initial codes. This entails working systematically to identify and name interesting items, especially if these are repeated. They could be words used by participants to describe their responses to a project. An inductive approach will stay close to participants’ language, while a more deductive approach may search for codes using a predetermined conceptual framework. Deductive approaches may seem more manageable in evaluation but they carry the drawback that the analysis might miss participants’ unanticipated responses.
  3. Group your codes into overarching themes. These might be different types of response, such as reported feelings, moods, creative challenges and other reflections.
  4. Review themes in order to gain a sense of what the different themes are, how they fit together, and the overall story they tell about the data.
  5. Define and name themes. This is an attempt to capture the essential character of each theme and show how it fits within the overall picture.
  6. Produce the report. The aim here is to tell the rich story of your data in a way that convinces the reader of the rigour of the analysis. This allows you to highlight vibrant cases while showing how they fit within the overall body of information.
Resources for Qualitative Analysis
This tool is meant to introduce topics specific to qualitative analysis. Below are some additional resources if you would like to explore these topics in more depth.
Arts-Based Analysis
Data may be collected using arts-based methods such as drama, film, poetry, dance, photovoice, etc. In terms of true arts-based methods of data analysis, less has been written. In many evaluations of arts-based programs, more traditional forms of qualitative data are collected throughout the artistic process and analysed accordingly.
Arts-based methods should always be participatory, including the artist in the analysis. For example, participants in an evaluation using photovoice are asked to write captions for their own photos to inform the analysis of the images.
Arts-based data analysis is an emerging research area.
Examples of Arts-Based Analysis
Some examples of ways in which evaluators have approached arts-based data analysis include:
  • Evaluators listen to or read transcripts of evaluation data and then move in the form of dance according to the meaning being seen, heard or felt. Interpretations of the movement in terms of the raw data can then be shared, and these interpretations can form the basis of themes that may be connected with other themes to form the evaluation story (Simons & McCormack, 2007).
  • In work with nurses in cancer services, movement, narratives, stories, poetry, collage, and creative writing were used, with data analysis performed by subgroups that played with transcripts, pictures, and poems to derive themes and categories explaining the quality of clinical practice; these were then presented to other subgroups, which agreed with, challenged, or extended the interpretations. In this way, data are transformed from “cold data” into dynamic, creative, and embodied forms, and interpretation takes the form of artistic creation (Buck et al., 1999).
More Examples of Arts-Based Analysis
A few more examples of ways in which evaluators have approached arts-based data analysis:
  • In an alternative approach to evaluating sexual health promotion, dramatized sexual scenes provide a context within which to analyse many of the behavioural and epidemiological factors associated with sexual practice and offer an entry point for dialogue. Participants’ analysis of narratives through a dramatized scene offers a testimony to sexual experience in their own terms. Evaluation is conducted collaboratively by examining changes in sexual scenes at multiple time points (e.g., 3-month, 6-month or 12-month intervals). Change is examined on an individual and structural level (Paiva, 2005).
  • An a/r/tographical framework is a method that links art, research and teaching, and privileges both text and image (Garcia Lazo & Smith, 2014).
  • Photo-elicitation is the process of analyzing photos taken by the evaluator or the participant in the data collection process to gain insights into social phenomena that oral or written data cannot provide (Lapenta, 2004).
Issues to Consider in Arts-Based Analysis
  • Arts-based evaluation can open new ways of seeing and understanding, incorporating both emotion and intellect
  • Arts-based data sources are often less tangible than numbers or transcripts and may therefore be less amenable to standard criteria
  • Participants must overcome inhibitions and fear of being judged
  • Arts-based evaluators may need a specialized skill set in both evaluation and artistic techniques
  • Evaluators must be careful that a focus on art-making and creative expression does not overshadow the evaluation
  • We must broaden our concept of validity to embrace understandings gained from arts-based expression
For more information, and for an exploration of possible criteria for arts-based evaluation and conceptualizations of external validity, see Integrating Arts-Based Inquiry in Evaluation Methodology by Simons & McCormack.
Mixed Methods
A mixed-methods approach involves integrating methodologies – traditionally quantitative and qualitative, though arts-based methods can also be integrated with quantitative and qualitative methods.
The benefit of mixing methods is that it can help you overcome the limitations and weaknesses that arise when each method is used alone and allow you to ask a broader set of questions. If different data sources reveal the same findings, or findings that are coherent with each other, this can lend credibility to your evaluation. A mixed-methods evaluation can also deepen understanding of your program, its effects, and its context.
How to use Mixed Methods in Analysis
  • At what stage(s) of the evaluation will methods be mixed? (The design is considered much stronger if mixed methods are integrated into several or all stages of the evaluation.)
  • Will methods be used:
    • sequentially (the data from one source inform the collection of data from another source), or
    • concurrently (triangulation is used to integrate information from different independent sources)
  • Will qualitative and quantitative methods be given relatively equal weighting?
Triangulating
Ensuring that your evaluation is comprehensive and holistic may involve a range of approaches. Triangulation involves transforming data from multiple sources into a logical and manageable structure that addresses the evaluation agenda. Utilizing multiple data collection methods strengthens reliability and validity when the data from the various sources are comparable and consistent.
Drawing Conclusions
Once you have analysed your data and feel that you have a good grasp of it, you can work towards drawing appropriate conclusions. You can assess the results of your analysis by comparing findings with the objectives and expectations you set out in the planning phase of your evaluation.
Answers will not always be clear cut. Your analysis may provide you with the basis for describing what happened, but there may be multiple possible explanations for observed effects. It is important to consider the interrelationships between social, economic and environmental factors. In some cases, you may need to seek clarification through further analysis and research.
Some questions to consider when drawing conclusions:
  • What are the main results or conclusions that can be drawn?
  • What other interpretations could there be?
  • Do conclusions make sense?
  • Did the results differ from initial expectations? If so, how?
End of Part 5: Analysis