Sunday, November 17

Guide To Assessment


by Kristen Lee

You jump from one meeting to the next—strategic planning, curricular committee, program decisions, faculty evaluations, and so forth—only to have the “A” word crop up on your radar as the next meeting you must attend. As you sit in that meeting, words like accreditation, goals, learning outcomes, analysis, and embedded are thrown around, while all eyes turn to you for direction on how your department or college should move forward.
 
Assessment is not usually listed as a “qualification” in the job description of a leadership position; however, in today’s higher education landscape, it’s part of the package in every leader’s role. While many guides are presented in a way that may cause readers greater confusion—much like the feeling of going down the rabbit hole—the intention of this CHEL guide is to put your assessment fears at ease and help you grow more comfortable with educational assessment. The purpose of this document is to provide academic leaders new to assessment with an easy-to-read, quick-to-digest, practical guide to educational assessment.

 


The guide is broken into seven sections that provide a fundamental understanding of the behemoth called assessment. The first five sections break down educational assessment into digestible content to cover the five Ws: Who, What, Where, When, Why. Two additional sections discuss particular topics related to assessment. Section 6 discusses and dispels the common myths held concerning assessment, and the final section discusses a topic that weighs on every leader’s mind—accreditation.

Section 1

The What: Do you speak assessment?

Definition of Assessment

Defining assessment from the start is essential given the breadth of concepts and ideas it has taken on. In this guide, we define assessment as:

Assessment is a collaborative, intentional and iterative process of gathering and analyzing information to improve student learning.

You could say that this is a loaded definition, but if we break this definition down, it provides the skeleton of what assessment is all about.

The Assessment Process


There are variations to the assessment process, but one commonality is its cyclical nature; depending on how it’s written, there may be more or fewer steps in the process. The following is a synthesis of the common elements embedded in most, if not all, assessment processes. And here’s the magical part: all the elements of the assessment process compose the Assessment Plan! Your institution will usually have an assessment plan template, so if possible, use that template to maintain consistency.

Identify and Align

While cyclical in nature, there is a starting point to the assessment process, and it begins with approaching assessment as a united front.

1. Identify and engage key players

Each stakeholder is essential and serves a crucial role (see Section 4 for more details), and so having a diversity of players in the assessment discussion will enhance the quality of assessment.

2. Agree on the assessment-related terminology used

I say tomato, you say tomahto. Depending on the accrediting agency or institution, each comes up with a different way of saying the same thing. To provide clarity and consistency across the campus, consider developing a shared glossary of assessment terms.

3. Reach an agreement on the goals and outcomes[1]

Recommendations:

  • Align your goals and outcomes with the mission and values of the institution. Your mission statement encapsulates the special qualities of your institution or program. In other words, it articulates the priorities and philosophical position of the program/institution.
  • Goals and outcomes articulate the intended result and so are contingent on the expectations produced at the end of a course/program/upon graduation. As a starting point, begin at the “end.” Start your discussion around what you expect students will be able to do at the end of the course/major/upon graduation.

[1] While I define all 3 terms (goals, objectives, and outcomes) in this guide, I focus on goals and outcomes because your objectives will naturally appear when you begin strategizing on the approach, methods, and instrumentation.

Strategize

Once goals and outcomes are identified, the next step is to make your plan of attack.

At this point, I need to stop and let you in on something no one has really clarified. During the strategizing, you will be drawing up multiple plans, using multiple approaches, methods, designs, and instruments.

This entails deciding on:

  1. The intention of the assessment (the approach)
  2. How you will assess (the method)
  3. The strategies you will employ in the assessment (the design)
  4. The instrument you will use to assess it (the instrument)

The Approach

When planning assessment, solidifying the intention of the assessment is important. Usually there are two purposes: activities aimed at improvement, and those intended for determining accountability.

Formative assessment is used for monitoring student learning or development and is primarily for gauging progress.

Examples:
Formative Assessment at a course level: Essay with multiple draft submissions
Formative Assessment at a program level: Writing portfolio

Summative assessment is used for evaluating achievement at the end of a course, a program, or upon graduation. This approach gauges the actual end result against a benchmark (what we thought would happen).

Examples:
Summative Assessment at a course level: Final project
Summative Assessment at a program level: Competency exams

The Method

As teams and faculty determine the assessment method, a necessary point of discussion is whether the intent is to know if individual students (or students in a program or college) have mastered the outcome, or whether it is for students to reflect on their learning. Depending on your focus, you will choose between direct and indirect assessment methods.

Direct Assessment Methods: a means of obtaining direct evidence of the student’s ability to demonstrate a performance/achievement.
Examples:
assignment, class exam, term paper, competency/proficiency exam, musical performance, portfolio of student work, presentation.
Pros:
  • Requires students to demonstrate proficiency.
  • Provides information that directly measures achievement.
Cons:
  • At times can be difficult to administer.
  • At times can be difficult to observe direct performance.
Indirect Assessment Methods: a means of obtaining perceptions of the student’s performance/achievement.
Examples:
student satisfaction surveys, focus groups, graduation rates.
Pros:
  • Asks students to reflect on their learning.
  • Easy to administer.
  • Useful in validating values and beliefs.
Cons:
  • Provides only indications of proficiency, not direct evidence of it.
  • Can be subjective.

During this stage, another consideration to make is whether your assessment method will be qualitative or quantitative (or mixed).

Qualitative Assessment: collection of information that is non-numerical and interpretive in nature.
Examples: focus groups, interviews and observations, student reflections.

Quantitative Assessment: collection of information that generates numerical data, measurements, or experimental designs that require statistical analysis.
Examples: questionnaires, surveys, multiple-choice exams.

Tip: It is a good practice to have a combination of direct and indirect measures, whether it be qualitative or quantitative methods in your assessment plan. In doing so, you will have more credible evidence.

Tip: Check with regional and professional accrediting agencies and their requirements. While all methods are welcome, some require specific methods to be used. For example, in most accrediting bodies, direct assessments are a requirement.

The Design

This section is often glossed over; however, knowing what kind of information you wish to collect is important. The approach you take will yield differing results.

Cross-sectional approach: comparison of different subgroups at the same point in time.

The benefit of a cross-sectional design is that it allows for comparison across many variables (such as age, gender, race/ethnicity) at one given time. What it cannot do, however, is support definite conclusions about causal relationships. In other words, because you’re only looking at a snapshot of your institution frozen at a specific time, you cannot determine whether something before or after has an impact or effect.

Longitudinal approach: comparison of the same group over different periods of time.

The benefit of a longitudinal design is the ability to spot changes in your population at the group and individual level, because it follows the same population over an extended period of time. For example, freshmen might be given an exam in their first year and then in each subsequent year until graduation. The drawback of this design is its time-intensive nature: in order to glean information, you have to wait until enough information has been gathered.

In both cases, it’s important not to read into the results without taking into consideration factors that may contribute to the results. For example, when using an English proficiency exam across the senior class, the length of time a student has had schooling in an English-speaking country will have a major impact on the scores.

The Instrument

An instrument in this case refers to the actual activity that will be utilized for assessment purposes. To find the right instrument for the job, consider the following:

Commercially available tests include instruments such as the National Survey of Student Engagement and the ETS Major Field Tests.
Pros:
  • It’s readily available (albeit someone will need to take the time to become familiar with it).
  • Reliability and validity testing has already been conducted and is available to you.
  • They are norm-referenced! You can compare your results against others who use the instrument.
Cons:
  • It could get costly.
  • It’s one-size-fits-all, so it may not address the particular needs or concerns of your institution.
Locally derived instruments are tests that are developed internally.
Pros:
  • Custom fit for your institution. Faculty can determine not only the content but also the mastery level needed to be “competent” in the outcome.
  • Faculty engagement and involvement.
  • A close tie between the curriculum and the instrument.
Cons:
  • Not norm-referenced, so you can only compare between subgroups at your institution or between different cohorts over time.
  • Can become very time-intensive.
  • The test must be shown to be reliable and valid.

Tying all four together, the combinations are endless! For example, one plan might pair a summative approach with a direct method, a cross-sectional design, and a competency exam as the instrument, administered to all graduating seniors.

Collect & Analyze

Collecting Data

Before everyone is dismissed and asked to go and “collect” data, there is a crucial planning step, which is to decide on:

  • Who will be responsible for collecting the data?
  • Where will the data be collected? In a course, outside the course?
  • When will we collect the data?

Tip: Most colleges/universities have an Office of Institutional Research (see Section 4). They are invaluable in providing institution-wide data. This office commonly conducts institution-wide surveys such as the NSSE, SSI, and campus climate surveys. Another benefit of this office is its expert knowledge of survey construction and data analysis.

Analyzing the results

The main concern when it comes to analyzing assessment data is how meaningful the results are. The following types of analysis are commonly used to report findings:

  • Comparative information
  • Descriptive information
  • Impact Study
  • Statistical analysis
  • Qualitative Analysis

Important: Make sure to present your findings in relation to the program’s identified goals and objectives. This provides the framework for the findings.

Tip: Use campus experts! Not only will campus experts be useful in designing instruments, there are experts who have the knowledge to conduct the analysis.

Share and Evaluate

Data collected and analyzed will bear no improvement if findings are not shared and discussed, with opportunities to evaluate and learn from them. Sharing reports of the findings lends greater transparency; better still, this information can be used at different levels and across departments and programs.

Tip: When presenting the data and its analysis, make sure the report is readable and can be shared with several audiences. Your report on the data analysis and findings should contain information relevant to your audience (the public, faculty members within the program, accrediting agencies, the board of trustees, students).

Allow persons time to reflect and evaluate. Discuss the findings—this should not be done alone! Upon discussion, develop recommendations based on the analysis of data.

Take Action

The greatest waste would be for all the work and time put into assessment to end up collecting dust on a shelf. To ensure the results of these discussions are used, as a leader you should encourage the team to take action steps toward improvement.

Tip: Prioritize! You can’t change everything. So gauge what is manageable:

  • What do we have the time to do?
  • Do we have the capacity (manpower, finance) to invest in a large assessment endeavor? Or currently, do we only have the capacity to tackle low-hanging fruit?

Tip: Make a plan. Without a plan documenting who’s responsible, what steps will be taken, and the deadlines, no one is accountable and no actions take place; the action plan never leaves the table. Determining an action plan gives people direction as to where they should be moving and what they should be doing.

As an action plan is made, it will trigger the cycle all over again. For example, it may cause you to keep or edit the outcomes, or to tweak the approach, design, method, and so forth.

Section 2

The Where: A piece of the puzzle

The puzzle pieces

Assessment occurs at multiple levels and across areas. No level is more important than the next; rather, they are all aligned and inform each other to produce one cohesive picture or story.


Tip: Typically, regional accrediting agencies look at the institutional and program level. Professional accrediting agencies usually examine the program and course level.

No program is immune to assessment, including Student Affairs. While most conversation around assessment revolves around academic units, Student Affairs plays a large role in a student’s learning experience. During the span of a student’s stay at your institution, only approximately one-third of his/her time is spent in academics while the remaining two-thirds of time is spent in other activities such as residence halls, extracurricular activities, advising, and tutoring (Banta and Palomba, 2015). So it would be fair to say that we can’t speak to the institution’s overall effectiveness in being able to fulfill its goals and measure its outcomes without taking into account this other aspect. Also, the assessment gleaned from an office unit within Student Affairs, such as the Office of Advising, could be useful for many academic programs.

Putting the puzzle pieces together

Being able to identify the puzzle pieces only provides you with a fraction of the whole picture. So how can we put the pieces together? This is where alignment comes into the picture. In order to make a cohesive story, each piece needs to be interwoven with one common thread. While the previous section details where assessment occurs, this section provides you with an alignment tool to map out the related learning experiences in a sequential and coherent manner.

Curriculum Map

Curriculum maps do many things, including helping with the design of the assessment plan by showing us where courses/programs sit in relation to the outcomes. Do not be fooled by the name: this mapping tool does not apply only to academic programs; it can be used for student affairs as well as across all levels of the institution. In essence, a curriculum map is a grid that maps the relationships in the student’s learning experience.

The following illustrates several ways that a curriculum map is used.

1) Examining a learning outcome in relation to the courses within an academic program.

           Outcome 1   Outcome 2   Outcome 3   Outcome 4   Outcome 5
Course A                   X                       X
Course B       X           X
Course C                               X                       X
Course D                                           X           X

2) Examining the progression of an outcome in relation to a course/content area.

           Expected Outcome
           Introductory   Intermediate   Mastery
Course A        X
Course B
Course C                        X
Course D                                    X

                         Expected Outcome
                         Introductory   Intermediate         Mastery
Writing                  Course A       Course C, Course D   Course E
Quantitative Reasoning   Course B       Course Y             Course Z

3) Examining the relationship between outcomes at different levels.

                     Institutional Learning Outcome:   1     2     3
Program Outcome 1                                            X
Program Outcome 2                                      X
Program Outcome 3                                                  X

These are just a few of the ways in which curriculum maps can be utilized to help operationalize the alignment between levels and across areas. In addition to the benefit of visualizing how different areas and levels are aligned, it also serves to see gaps or inconsistencies present in the sequence created in the learning experience.
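To make the gap-spotting use of a curriculum map concrete, here is a minimal sketch in Python. The course and outcome names are invented for illustration, and a spreadsheet works just as well; the point is that once the map is written down as data, coverage becomes checkable rather than anecdotal.

```python
from collections import Counter

# Hypothetical curriculum map: each course mapped to the program
# outcomes it addresses (names invented for illustration).
curriculum_map = {
    "Course A": {"Outcome 2", "Outcome 4"},
    "Course B": {"Outcome 1", "Outcome 2"},
    "Course C": {"Outcome 3", "Outcome 5"},
    "Course D": {"Outcome 4", "Outcome 5"},
}

all_outcomes = {f"Outcome {i}" for i in range(1, 6)}

# Outcomes addressed somewhere in the program
covered = set().union(*curriculum_map.values())

# Gaps: outcomes that no course currently addresses
gaps = sorted(all_outcomes - covered)

# Coverage counts: how many courses address each outcome
coverage = Counter(o for outcomes in curriculum_map.values() for o in outcomes)

print("Uncovered outcomes:", gaps or "none")
print("Courses addressing Outcome 4:", coverage["Outcome 4"])
```

In this invented example every outcome is covered, so the gap list is empty; dropping Course C from the map would immediately surface Outcome 3 as a gap.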

Section 3

The When: Eat your fruits and vegetables because they’re good for you

At a certain point, even overeating fruits and vegetables becomes detrimental to your health: it can cause weight gain, bloating, indigestion, and spikes in blood sugar. Likewise, the timing and frequency of assessment activities matter, and following the tips below can make the assessment process more manageable while producing the highest quality results.

Tip: Check your institutional requirements. Usually there is already something driving the assessment process. Typically assessment is conducted over the span of 3 to 5 years. Strategically spreading out assessments over the span of 5 years, for example, is much more manageable than assessing all outcomes on a yearly basis.
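The multi-year spreading idea above can be sketched in a few lines of Python. This is only an illustration: the outcome names, the seven-outcome count, and the five-year cycle are assumptions, not requirements from any accreditor. A round-robin assignment keeps each year's workload roughly even.

```python
# Hypothetical sketch: spreading program outcomes across a multi-year
# assessment cycle instead of assessing everything every year.
# Outcome names, outcome count, and the 5-year span are invented.

outcomes = [f"Outcome {i}" for i in range(1, 8)]  # seven outcomes
cycle_years = 5

# Round-robin assignment: outcome 1 to year 1, outcome 2 to year 2, ...
# wrapping around so year 1 also picks up outcome 6, and so on.
schedule = {year: [] for year in range(1, cycle_years + 1)}
for i, outcome in enumerate(outcomes):
    schedule[i % cycle_years + 1].append(outcome)

for year, items in schedule.items():
    print(f"Year {year}: {', '.join(items)}")
```

With seven outcomes over five years, no year carries more than two assessments, versus seven per year if everything were assessed annually.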

Tip: Don’t forget about resources and the financial investment assessment practices could take. Let’s be realistic. It is important to take into consideration the amount of time invested in assessment, the cost, the training, and the capacity to institute certain assessment plans.

Tip: Flexibility is key. Sometimes the frequency of assessment becomes the task itself, and you end up missing the forest for the trees. Consider: at what point does assessment interfere with students’ learning itself?

Section 4

The Who: Is it me you’re looking for?

While every person should be involved in assessment, when it comes to planning and decision-making it’s important to choose the right team for the task. Below is a guideline showing how each player contributes to the overall assessment process. From this list, depending on the assessment task—whether it is planning overall institutional assessment or making strategic decisions and recommendations based on a course assessment—the team tasked for the job will vary. Further, while all the players may not be at the table, you can still garner their opinions and perspectives; for example, by using focus groups or surveys to assess student perspectives on a program’s effectiveness.

Key players and their contribution to the assessment process:

  • Faculty members bring the depth and expertise of their academic program.
  • Academic administrators provide an eagle’s-eye vantage point of a program.
  • Professionals from different areas of the institution provide insight into students’ academic and collegiate development.
  • Students and their parents express opinions on their expectations of post-secondary education.
  • The Board of Trustees acts as a liaison and sounding board on the relationship between the long-term strategic direction of the institution and assessment.
  • External entities offer insight on the demands of the industry and what is expected from a graduate from your institution.

Among administrators and professionals in your institution, there may be a coordinator or a unit with the name Office of Assessment (or a derivative of this). They are essential in all of the assessment planning, collection, analysis, sharing, evaluating, and strategizing because they are the experts on assessment. They are a great resource and should be heavily utilized when bringing assessment into a program, the institution, or even at a course level.

In addition, another great resource is the institutional research analyst at your institution. Usually housed in a unit called the Office of Institutional Research, personnel in this office not only collect, store, and report quantitative and qualitative data on the institution (students, enrollment, staff, faculty, finance, courses—you name it, they know it), they also regularly conduct data analyses and construct institutional surveys. Thanks to their knowledge and skills, they will play a large role in strategizing, data collection, and analysis.

What is your role as a leader?

Essential to assessment is providing leadership to the team you are working with. This sets the team up with the expectations and valuation of assessment. If you are on board, they are on board. If you value it, they value it. Fundamentally, you are the champion for building a culture of assessment at your institution.

Tip: Everyone should be on the same page. In order to do so, as a leader, you need to provide resources: Do they need additional training on assessment? Do they need additional time to develop?

Section 5

The Why: To know thyself

Assessment serves many purposes and produces many benefits. Among other things, assessment:

  1. ultimately improves the quality of the student experience.
  2. gauges institutional effectiveness.
  3. determines the effectiveness of resources used and allocated.
  4. assists in program validation and credibility.
  5. serves as an accountability measure to stakeholders.
  6. acts as a tool for making strategic decisions. Because assessment is intentional and evidence-based, it is a great means of guiding strategic decisions at all levels of the institution.

Assessment has a strong impact at each level. Done right, it benefits faculty, students, and the institution.

For faculty and staff, assessment:

  • sets clear expectations.
  • can be used to coordinate what is taught and when.
  • provides direction of programs by identifying strengths and areas of improvement.
  • provides insight on the allocation of resources.

For students, assessment serves to:

  • clearly identify expectations.
  • prioritize learning and help students know where they should focus their time and energy.

For the Institution, assessment:

  • clearly identifies the expectations of the institution.
  • provides a means of consistency across the campus.
  • provides information on institutional effectiveness.
  • provides insight on the allocation of resources.

For outside agencies and the public, assessment:

  • acts as an accountability measure for governmental bodies and for professional and regional accrediting agencies.
  • provides clear information on the institution’s expectations.

How do we build a culture of assessment?

Assessment doesn’t seem to be going away, and the demands and wants for improvement from many parties are present at your institution. So what do you do? You are in a prime role to ready your institution to build a culture of assessment!

  • Develop values and guiding principles. Make sure that there is a campus commitment to improvement through a consensus of values and guiding principles that articulate the purpose and intended uses of assessment clearly.
  • Establish good assessment practices. Collaborate and come up with a list of best practices that model an effective assessment program, plan, etc.
  • Find a common language. Sometimes, obstacles to implementing a cohesive institutional assessment culture are due to ideas and plans being lost in translation. Therefore, to create consistency, take the time to build a glossary.
  • Dispel confusion. There may be apprehension from groups due to misinformation about assessment. Providing the time for training and open transparency are ways to dispel misunderstanding about assessment efforts.

Section 6

Dispelling Assessment Myths

Misunderstandings deter the full engagement of stakeholders and stall the building of a culture of assessment. The following is a list of common misunderstandings:

Myth 1: Course grades can be used as an assessment measure. Grades do not equal assessment and are not the best measure of student learning. So many things are wrapped up in a course grade that it is difficult to tease out whether a student is learning. For example, course grades include subjective factors such as student engagement, participation, and attendance, while also taking into account exams and assignments that may cover more than just the outcome you want to measure. Grades focus on student achievement, but that is not the same as student learning.

Myth 2: Assessment is a means of evaluating performance. Staff and faculty become concerned because they believe this information will have an impact on their tenure or performance review. Be clear with faculty and staff that assessment efforts are aimed at examining how programs can be improved; assessment is not a spying tool.

Myth 3: We’re doing well, so why do it? Assessment is an iterative process. It’s always about improving student learning, and not just keeping the status quo. There is a saying that a chain is no stronger than its weakest link. While the weakest link may be made of steel, it could always get stronger. Also, the reality is that most accrediting bodies require assessments to be conducted.

Myth 4: It’s only to make external entities happy and for us to keep accreditation. Assessment is accountability, transparency, and improvement. What it is not is a reporting function. Assessment is not the same as accreditation, and it is not about ticking off checkboxes for accreditation. Treating it that way will ruin your culture of assessment because your institution will view assessment only as a means to an end.

Myth 5: We must assess EVERYTHING. The reality is that you don’t have to assess everything all the time. The first and foremost consideration is that assessment must be manageable for the capacity of your institution, college, department, or program; hence prioritizing and strategically collecting data that is useful will be essential. When we collect too much data, it can get overwhelming and it becomes too difficult to distinguish the wheat from the chaff.

Myth 6: It’s all about the data. Data is merely a tool; what matters is the planning and the use of the data. Collecting data that doesn’t relate to the outcome will produce results that don’t relate to the outcome. Therefore, collaborate and take the time to plan your assessment.

Section 7

The elephant in the room: What does this have to do with Accreditation?

People GROAN at the thought of accreditation. However, accreditation serves two functions. One is to guarantee the quality of an institution or program (quality assurance), and the second is to ensure that programs and institutions continue to improve through providing assistance and guidance.

In addition, obtaining accreditation for your institution is of value because it provides the public with the assurance that:

  1. Your institution has been vetted and has met the expectations held by those in higher education (or the professional field).
  2. Your institution has voluntarily gone through certain steps and activities to improve the quality of its programs.

It also assures students that:

  1. The quality of education received at your institution is vetted and their degree upon graduation is acknowledged by the profession.
  2. They are able to carry their credits to another institution upon transferring or be admitted to a graduate program.

While there are multiple benefits, accreditation is not the same as assessment. Accreditation asks you to report on your assessment plan and process. Placing accreditation ahead of building a culture of assessment will confuse staff, students, and faculty, and they will easily fall into Myth 4: they will devalue assessment, and efforts at improvement will become minimal.

Rather, a way to envision the relationship between assessment and accreditation is this: if you have a good assessment plan and process, accreditation is easy. Accrediting agencies ask you to report on your assessment findings and to evaluate your institution—so if your institution already has a solid assessment plan to follow, it is just a matter of writing it down!

Lastly, agencies want to know that while there are champions for assessment in a program or institution, a culture of assessment exists or is in the process of being built. They want to see evidence that everyone knows about assessment and is on the same page when it comes to assessment at your institution.

Tip: Getting accredited or seeking re-accreditation is a big deal and requires all hands on deck. Have an assessment person sit on the accreditation team. They will be an asset! 
