How to Redesign the Assessment to Cope With AI Challenges

Although generative AI tools have the potential to revolutionize education, they present several challenges to higher education and assessments. Cheating is one of the most pressing concerns. AI-powered tools can be used to provide students with answers or even complete the assessment for them, which is a growing problem as AI technology becomes more advanced. This makes it difficult to detect cheating—putting the integrity of the assessment process at risk.

Another challenge is the potential for generative AI tools to limit creativity. They can offer a form of creativity but might also restrict the development of students’ creative skills during the assessment process. This can be a disadvantage, particularly in fields where creativity is highly valued.

A valid concern specific to education – especially as we strive to increase orders of cognition, personal relevance, transference, and application in any discipline – is the tendency to rely on AI as a substitute for demonstrating and developing critical thinking and problem-solving skills. AI is sometimes used as a shortcut to provide an answer, draft a response, or complete an activity, but doing so often bypasses the rigor and engagement of active learning, in which students are invested participants in their education.

Bias is also a significant issue in AI-powered assessments. AI algorithms can be biased as a result of flawed or unrepresentative training data, which might lead to inaccurate results or major flaws in the assessment process. This can have serious consequences, particularly in high-stakes assessments.

AI can potentially transform how we approach education and assessment, but addressing the challenges it presents is important. Employing effective strategies to overcome these challenges and ensure that assessments are conducted fairly, accurately, and effectively is a must.

We recommend two approaches to cope with assessment integrity concerns: avoidance and activation.

Avoidance Approach

The avoidance approach involves intentionally designing assessments to minimize students’ use of AI while emphasizing processes, strategies, and skills that require human behaviors and intelligence. Educators who choose this approach might use traditional assessment methods, such as paper-based written exams or in-person hands-on assessments.

In situations where traditional assessment methods are not applicable, the following strategies will help transform assessments to avoid the risk of students inappropriately using generative AI tools. These strategies also support Universal Design for Learning principles by providing students with multiple means of action and expression.


Leverage higher order thinking skills

Generative AI tools excel at explaining common concepts at lower order thinking levels and are constantly improving their ability to produce logical analyses (of written prompts, data sets, and more). However, the speed with which AI produces these analyses is not matched by the empirical evidence and theoretical application needed to validate its conclusions, which students are often required to supply when asked to explain or demonstrate how they arrived at an answer. To leverage higher order thinking skills in students, follow Bloom’s cognitive domain and emphasize proof, rationale, and/or application of theorems in assignments and assessments. Bloom’s taxonomy comprises six levels of thinking: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Generally, Analyzing, Evaluating, and Creating are considered higher order thinking.

When designing assessments, focus on encouraging students to use higher order thinking skills such as finding innovative solutions (rather than relying on standard answers) and explaining their thought processes (instead of simply providing results).

Example: Higher order thinking skill assessment question (Analyzing and Evaluating): Which of the following plays the most significant role in this scenario: coercion, authority, force, or influence? How would it look different if another base of power was used in this scenario?

Leverage information literacy skills

Generative AI tools can sometimes provide misleading and/or false information, often referred to as hallucinations. Consider reviewing the video titled “Confabulations – Why AI Makes Things Up”, under “What are Generative AI Tools?”, for further explanation. Knowing this, students should be encouraged to develop AI literacy to evaluate the accuracy and validity of generative AI tool responses. Because this literacy is still evolving and in its infancy, faculty are encouraged to work with their academic liaison or informationist to provide training on information literacy skills.

Encourage collaboration-based assessments

Encourage students to work together to solve problems, share ideas, and complete projects.

Make Real-Life Connections

AI often shows poorer performance when it’s queried on narrow or rapidly evolving topics. Encourage students to apply concepts and theories to personal experiences or real-world scenarios, including current affairs. Students should be guided to develop personally relevant goals and solutions.

Embrace innovative mediums

Provide students with opportunities to showcase their talents and abilities in different ways, leading to a more well-rounded and comprehensive assessment strategy. Consider oral, video, multimedia, web pages/portfolios, and interactive assignments.

Reflect on specific readings or lecture content

Encourage students to elaborate or reflect on key concepts and ideas from the readings, as well as on knowledge points or discussion threads from lectures and class sessions.

Emphasize process as much as results

By emphasizing the process, educators can encourage students to think critically and engage more deeply with the material. This can help reduce the potential for cheating, as it requires students to demonstrate their understanding of the material in their own words and through their own thought processes.

Focus on the writing process using social annotation tools

Focus on the writing process using tools that also allow for collaboration. Students can make use of tools such as those found in Microsoft 365 and Google Workspace (MS Word, Google Docs, etc.) to develop and share their writing. Social annotation – collaborative note-taking, for instance – is also possible using some of the applications, such as MS OneDrive.

Review the version history to see how students’ writing has progressed

When students are logged into Microsoft Office 365 or Google Workspace, it is possible to track changes and revisit older versions. The version history will show whether text was created in the document or copied and pasted. If this strategy is used, educators should foster transparency by informing students of their expectations. Additionally, instructors should be cautious and recognize that different individual approaches to the writing process (such as pasting text) do not automatically equate to violations of academic integrity.

Give students ownership of their writing by having them annotate their writing throughout the course

Encourage students to explore how a generative AI tool responds to the assignment prompt: Compare and contrast the results the tool generates with the desired answers and document adjustments or improvements as needed.

Design assessments that include a self-reflection component

Students can reflect on their writing process, peer performance on group work, or what they have learned from the content.

Activation Approach

An activation approach might involve seeking out ways to integrate generative AI tools into assessment processes. Before adopting this approach, we recommend that educators include AI Use Policies, as discussed under the Proactive Communication to Students heading on the Teaching Strategies page, and an AI Citation Guide in their syllabus.

When incorporating expectations that students use specific AI tools in learning activities, make sure to link the terms of use, such as the OpenAI Terms of Use, in the syllabus or assignment. Faculty should also consider a product’s accessibility and practicality (including cost) for any tool not endorsed by JHU prior to introducing it to students. We also recommend allowing students to opt out of AI assignments by providing alternatives. By inclusively and thoughtfully integrating generative AI tools, educators can leverage the benefits of these tools while mitigating the potential risks associated with cheating and bias.


Differentiate between AI-friendly and non-AI assessments

Design assessments specifically for generative AI tools and guide students in using generative AI tools to complete specific tasks.

Example: Use DALL∙E 2 to generate digital art that depicts the relation among coercion, authority, force, and influence and then explain why this art was created and how it can be improved.

Offer students opportunities to evaluate AI-generated solutions

Provide assessment questions together with AI-generated solutions and ask students to sharpen their AI literacy skills by evaluating the solutions and supplying alternatives.

  • Use ChatGPT as a way for students to evaluate the validity or accuracy of a text written by a generative AI tool. See How to Use Generative AI Tools to Design Engaging Course Activities.
  • Have students engage in a conversation with a generative AI tool such as ChatGPT as part of their preparation for an assignment. Ask students to annotate the transcript of their conversation and note how they were helped by the tool. Have them reflect on what they learned in the process.

Use AI as an improvement tool

Encourage students to use generative AI tools to provide feedback on their initial assignments and improve their assignments based on the feedback.
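Instructors who want to model this workflow, or adapt it for a larger class, can also script the same kind of feedback request. The sketch below is a minimal illustration only, assuming the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY environment variable; the file name and rubric-style prompt are hypothetical placeholders, and the same prompt could just as easily be pasted into a chat interface.

    # Illustrative sketch only: assumes the OpenAI Python client (openai >= 1.0)
    # and an OPENAI_API_KEY environment variable. The file name and rubric text
    # are hypothetical placeholders, not a prescribed workflow.
    from openai import OpenAI

    client = OpenAI()

    # Read a de-identified draft (placeholder file name).
    draft = open("student_draft.txt", encoding="utf-8").read()

    prompt = (
        "You are a writing tutor. Give constructive feedback on the draft below.\n"
        "Focus on: clarity of the thesis, strength of evidence, and organization.\n"
        "Do not rewrite the draft; list specific suggestions the author can act on.\n\n"
        f"DRAFT:\n{draft}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any available chat model could be substituted
        messages=[{"role": "user", "content": prompt}],
    )

    # Print the model's suggestions for the student to act on.
    print(response.choices[0].message.content)

Asking for actionable suggestions rather than a rewritten draft keeps the student responsible for the revision itself.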

Use AI for brainstorming

If students are having difficulty finding a topic for a specific theme, have them use ChatGPT to help brainstorm ideas. Suggest that they ask the same question with minor tweaks or ask the tool to elaborate on an idea.

Ultimately, the best approach might depend on the individual student’s goals and values and the specific situation they are facing. A balanced approach that incorporates elements of both avoidance and activation might be the most effective way to cope with the challenges presented by AI in higher education assessments.

How to Use Generative AI Tools to Provide Valuable Assignment Feedback

Generative AI tools not only offer advantages for creating engaging educational activities but can also be employed to provide beneficial feedback on student assignments in multiple, effective ways.

Provide Feedback Using Generative AI Tools

Generative AI tools can be used to provide feedback in a range of contexts. The following examples illustrate how various tools can be employed to provide different types of feedback.


Automated feedback

AI-powered tools can provide automated feedback on assignments, allowing educators to give quick feedback. Specific tools include the following:

  • Gradescope: This tool uses AI to grade certain types of questions, such as multiple-choice, fill-in-the-blank, short answer, or even coding documents, and it provides students with detailed feedback. The university has an enterprise subscription to Gradescope. Refer to CTEI’s site regarding its integration with Canvas.
  • Grammarly: This tool, a frequent add-in to Google and Microsoft products, uses AI to provide feedback on written assignments, including grammar and spelling errors, as well as suggestions for improving writing style and structure. Grammarly has a free subscription with limited features.
  • Microsoft Office Proofing Tools: Built into Microsoft 365 (MS 365) products, and part of the university’s enterprise subscription, these MS tools now include the Microsoft Editor. Editor is an AI tool which offers feedback beyond grammar based on an identified or selected writing style (formal, professional, or casual). The feedback includes basic grammar and mechanics as well as suggestions for everything from conciseness to sensitive geopolitical references. It even allows a check for similarity to online sources.

Criteria-based feedback

Generative AI tools can be used to provide feedback based on specific criteria or learning objectives. Specific tools include the following:

  • Feedback Studio rubric: This tool uses AI to help educators create rubrics for grading assignments. It can also analyze student work against the rubric and provide feedback based on the criteria.
  • WriQ: This tool provides feedback on written assignments based on specific criteria, such as grammar and spelling errors, word choice, and sentence structure. It can also provide feedback on student progress over time.

Content analysis feedback

Generative AI tools can analyze the content of an assignment to provide feedback on the quality of writing, depth of research, and coherence of arguments presented. Specific tools include the following:

  • ChatGPT
    • This tool could analyze students’ essays to identify whether students included relevant sources to support their argument or whether their writing is clear and concise.
      Note: You must consider FERPA guidelines before submitting student work and ensure no identifiable information is included.
    • If provided with enough information about assignment expectations, the tool could assess the strengths or weaknesses of a student’s argument or analysis.
    • For class discussions, the educator could copy and paste the students’ posts into ChatGPT and ask it to summarize the key points of the discussion.
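
For larger classes, the discussion-summary step in the last bullet can be scripted rather than pasted by hand. The sketch below is a minimal illustration only, assuming the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY environment variable; the post text is a placeholder, and, consistent with the FERPA note above, names and other identifiers should be removed before anything is sent.

    # Illustrative sketch only: assumes the OpenAI Python client (openai >= 1.0)
    # and an OPENAI_API_KEY environment variable. Post text is a placeholder;
    # strip student names and other identifiers before sending (per FERPA).
    from openai import OpenAI

    client = OpenAI()

    # De-identified discussion posts (placeholders for real, scrubbed text).
    posts = [
        "Post 1: The reading suggests authority is the most stable base of power...",
        "Post 2: I disagree; influence scales better in informal teams...",
    ]

    prompt = (
        "Summarize the key points of the following discussion posts in 3-5 bullets, "
        "and note any points of disagreement:\n\n" + "\n\n".join(posts)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any available chat model could be substituted
        messages=[{"role": "user", "content": prompt}],
    )

    # Print the generated summary for the educator to review.
    print(response.choices[0].message.content)

Reviewing the generated summary against the original thread remains the educator’s responsibility, as the considerations below make clear.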

Considerations for Using Generative AI Tools to Provide Feedback

Follow these steps to provide valuable assignment feedback using generative AI tools:

  1. Choose the right tool: Many different generative AI tools are available for providing assignment feedback, so educators should choose the one that best fits their needs. Educators should consider the type of assignment they are grading, the specific feedback they want to provide, and the level of customization they require.
  2. Monitor the quality of feedback: It is essential to monitor the feedback generated by AI tools to ensure that it is accurate and helpful. If any issues are present, such as incorrect, biased, or inconsistent feedback, adjust the grading criteria or consider using a different tool.
  3. Combine AI feedback with human feedback: Although generative AI tools can provide valuable feedback, they should be used in conjunction with human feedback. Consider making additional comments or insights that the tool might not have captured and take the time to review assignments carefully to ensure a comprehensive assessment of student work is being provided.

As we indicated in Basic Principles of Using AI-Generated Content, it’s important to note that generative AI tools should be used as a complement to human grading and feedback, not as a replacement. Although generative AI tools can provide valuable insights and save time, they cannot replace the human touch necessary for genuinely effective feedback.