On this page:
- FERPA Guidelines
- HIPAA Guidelines
- Ownership of Data
- Ethical Considerations
- Guidelines for Students
- Example Syllabi Statements
- Collecting and Publishing Data From Courses
FERPA Guidelines
Instructors should consider FERPA guidelines before submitting student work to generative AI tools like chatbots (e.g., generating draft feedback on student work) or using tools like Zoom’s AI Companion. Proper de-identification under FERPA requires removal of all personally identifiable information, as well as a reasonable determination by the institution that a student’s identity is not personally identifiable. This determination applies to both single and multiple releases and must take into account other information reasonably available online. Depending on the nature of the assignment, student work could include identifiable information, such as descriptions of personal experiences, that would need to be removed before the work could be considered properly de-identified.
HIPAA Guidelines
Instructors, including faculty, staff, and trainees who present, should also consider HIPAA implications before using generative AI tools in the context of healthcare education. Almost all identifiable health information maintained by health care providers and payers (covered entities), and the vendors they use (business associates), is subject to HIPAA and its implementing regulations. Instructors should educate themselves on the permissible uses and disclosures of identifiable patient and plan member information to ensure any use or disclosure of such information in the context of generative AI does not run afoul of HIPAA or Johns Hopkins policies. Even though regulated health information can be de-identified by removing direct identifiers, such data should still be used or disclosed with caution: external information available to generative AI tools could render the data reidentifiable, posing privacy risks to patients and plan members. Health information should be used or disclosed in the context of generative AI only with JH-approved vendors where agreements are in place that offer appropriate protections for the information submitted and generated.
Ownership of Data
JHU faculty, students, and staff should adhere to the established JHU Intellectual Property Policy, including copyrights. Policies describing ownership of AI-generated content are still evolving. Users should begin by checking the policies or guidelines for the tools they are using. That said, the U.S. Copyright Office currently recognizes copyright only in works created by human beings. In addition, the office “accepts that works ‘containing’ AI-generated material may be copyrighted under some circumstances, such as ‘sufficiently creative’ human arrangements or modifications of AI-generated material or works that combine AI-generated and human-authored material. The office states that the author may only claim copyright protection ‘for their own contributions’ to such works, and they must identify and disclaim AI-generated parts of the work if they apply to register their copyright.”
Copyright ownership, however, of AI-generated content has not been articulated by the U.S. Copyright Office. Therefore, companies providing AI software may acquire or reject ownership of copyright through their terms and conditions. As previously noted, check the policies and guidelines for the tools you are using.
Resources
- Copyright Chaos: Legal Implications of Generative AI. Bloomberg Law.
- Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms? Forbes.
- Navigating Data Ownership in the AI Age, Part 1: Types of Big Data and AI-Derived Data. National Law Review.
- Generative AI Ethics & Privacy. JHU’s Center for Learning Design and Technology.
Ethical Considerations
When integrating generative AI in higher education, many ethical considerations demand careful attention. It is crucial for divisions to prioritize ethical guidelines and establish comprehensive frameworks that not only leverage the benefits of generative AI but also safeguard users against its potential ethical pitfalls, ensuring an equitable and conducive learning environment for all students. The following are some highlights and strategies to help faculty and instructors address those issues.
- Potential biases within the AI algorithms can inadvertently perpetuate and reinforce existing prejudices and inequalities.
Strategy: For AI-generated instructional content, faculty should ensure that the content covers diverse and inclusive perspectives to mitigate such biases.
- Maintaining transparency about the use of AI-generated content and clearly delineating between automated and human-generated materials is essential to uphold academic integrity.
Strategy: It is important for faculty to clearly outline the expectations of generative AI usage at the beginning of a course to avoid frustration in students’ learning processes.
- The use of vast datasets to train AI models raises questions about data sourcing and consent. There is a risk that sensitive or personal data might be incorporated into AI models without individuals’ explicit consent, leading to privacy breaches and potential misuse of information.
Strategy: Safeguarding student data and privacy, and obtaining consent for data usage, is equally imperative in adhering to ethical standards. Proper de-identification under FERPA requires removal of all personally identifiable information, as well as a reasonable determination by the institution that a student’s identity is not personally identifiable.
- The potential for social shame associated with the use of generative AI, like chatbots for content creation, is a valid concern. There might be a perception that relying on AI for content creation could undermine the authenticity and originality of an individual’s work, leading to apprehension within professional or academic communities.
Strategy: Providing clear guidelines on the appropriate use of generative AI tools, including transparent attribution when AI-generated content is used, can help mitigate the potential for social shame. Encouraging open conversations about the benefits and limitations of AI technology, and emphasizing its role as a tool for augmenting human capabilities rather than replacing them, can foster a more accepting and supportive environment.
Guidelines for Students
The following are example guidelines faculty can share with students who may use AI tools for course work.
- Validation: Always verify and cross-reference AI-generated content with credible sources. Maintain a critical approach to information and be mindful of the possibility of manipulated or misleading content. Be cautious when relying solely on AI-generated content for critical decision-making or academic research.
- Inclusiveness: Understand the limitations and potential biases of AI-generated content. Always promote inclusive and equitable perspectives in your work.
- Communication: Communicate with and seek guidance from your professors or instructors when you are uncertain about the authenticity of AI-generated materials.
- Transparency: Respect intellectual property rights by acknowledging the sources of AI-generated content used in your work.
Example Syllabi Statements
Syllabus statements should reflect the unique uses or concerns for a course along with the AI tools that might be available to students or assigned by the instructor. The following section provides examples of statements instructors have included in their syllabi to explain appropriate and inappropriate use of AI. Additional examples are provided online by Lance Eaton – College Unbound, which includes AI policy statements crowdsourced from instructors across the United States. Please consult your division for specific language to include.
Carly Schnitzler, Krieger School of Arts and Sciences, University Writing Program
Our class developed the following standards of conduct for AI-powered tool use:
- AI tool use should support—and not detract from—the goals of the course: developing critical thinking, analytical skills, personal voice, etc. To this end, AI-generated output will not be permitted in any writing assignment, unless explicitly part of an assignment (read: You can’t copy/paste output and turn it in).
- AI tools can be used for light brainstorming (think: getting unstuck from writer’s block, working through an outline) and light editing or revision. All uses of AI tools in any assignment (daily work/feeders/major projects) must be disclosed. If you use AI tools in any part of the writing process, please disclose via this form (also on Canvas). Failure to disclose is in violation of our course syllabus and the Homewood Undergraduate Academic Ethics Policy.
- Conferences and revision are privileged over any disciplinary action in the case of overreliance on AI tools in the course (read: If you’re relying on AI tools too much, I’ll ask to meet with you as a first step of intervention).
Shannon Robinson, Krieger School of Arts and Sciences, Introduction to Fiction and Poetry
There is currently an AI policy in place for Introduction to Fiction and Poetry (IFP) I and II, which is taught across multiple sections. All syllabi from IFP I and II contain the following:
“The use of artificial intelligence (AI) to produce any writing for this course is not allowed. A student who is found to have used AI-generated content for an assignment will receive an F for the assignment and may fail the course. A notation will be made in departmental records. Students enrolled in IFP I and II are required to sign an honor code statement acknowledging that they understand these policies.”
(“These policies” refers to both this AI policy and the plagiarism policy, which is outlined in the same section.)
I’ve directed instructors to use Canvas’s Turnitin function to scan all assignments (poetry and stories) for AI. Also, instructors are to have students do short in-class writing assignments (pen and paper), which they hand in, at least once a week. This way, the instructors will have samples of each student’s writing that are incontestably in their own voice.
Mike Reese, Krieger School of Arts and Sciences, Sociology, Introduction to Social Statistics
Acceptable Use of Generative Artificial Intelligence (ChatGPT, Google Bard, etc.): Students should not use generative AI to write any required essays or solve problems. This is considered use of unauthorized electronic devices or software as stated in the university ethics policy. More important, the purpose of these assignments is to give you practice developing the important skill of writing and applying course concepts. Students are allowed to use generative AI to brainstorm ideas or ask questions about how to solve a problem, but the final work should be their own narrative. Students who use generative AI should indicate how they used it in their homework or paper. The purpose is not to raise suspicion, but to help the instructor identify possible sources of incorrect information. Generative AI has a bias to give an answer, even if it is not correct. This is especially the case with more sophisticated topics such as those studied in this course. When using generative AI, be sure to verify any information provided against the course materials. If you have any questions about appropriate use of these technologies, please consult the instructor.
Jim Diamond, School of Education, Digital Age Learning & Educational Technology Program
AI-Generated Material is Permitted in this Course: In this course, you are allowed to use AI tools (e.g., ChatGPT, etc.) to support your work. To maintain academic integrity, however, you must specify any AI-generated material you use and properly cite it. For proper APA formatting of such citations, see How to cite ChatGPT. You are responsible for fact checking statements composed by AI tools.
I know that many of us in this group are very interested in thinking about how to use AI (such as ChatGPT) in support of educational experiences. I am all for you using it in this course, but only as a tool to supplement your own thinking, or design. I’m going to leave it up to you to determine what’s meant by “supplement.” You’re all doctoral students, and I can’t imagine that you would want to deprive yourself of opportunities to structure your thinking through the process of writing (as one example) in pursuit of a grade. (Of course, there are many other things for which you might be using the chatbot, or other forms of AI—I’ll be interested to see them.) So, don’t use ChatGPT to write your discussion posts. But it might be interesting, for example, to have ChatGPT generate a discussion post and then you engage that post critically. No matter what you do, just indicate exactly who’s responsible for what (which might get tedious and could be an educative experience in and of itself) in the work you present.
Anna Broughel, School of Advanced International Studies Online MASE Program, Economics of Sustainable Energy
Enrollment at the School of Advanced International Studies (SAIS) requires each student to conduct all activities in accordance with the rules and spirit of the school’s Honor Code and Academic Integrity Policy listed in The Red Book: SAIS Student and Academic Handbook. Students are required to be truthful and exercise integrity and honesty in all their academic endeavors. With the SAIS Honor Code in mind, the use of artificial intelligence (AI) tools in this course is allowed, but only according to a set of guidelines to ensure responsible and ethical utilization.
Students are prohibited from using ChatGPT or other AI tools to generate written content for assignments; the primary responsibility for original text creation lies with the student. However, AI tools may be employed for light editing purposes, focusing on improving grammar and clarity. During quizzes, the use of AI tools is prohibited. In the research and preparation phase of assignments, students are permitted to leverage AI for generating ideas, questions, or summaries. Nevertheless, the submitted text must be entirely composed and written by the student.
It is crucial for students to be aware of both the benefits and limitations of AI as a learning and research tool. Students should critically evaluate sources, methods, and outputs provided by the algorithm. Violations of these guidelines will be treated as academic misconduct, with appropriate consequences as laid forth in the SAIS Honor Code. Students are encouraged to seek clarification if they have any questions about the expectations surrounding AI, fostering open communication to ensure a clear understanding of the guidelines.
Anicia Timberlake, Peabody
Collecting and Publishing Data From Courses
Instructors are permitted to collect and analyze data to continually improve their teaching. However, if faculty plan to present or publish research based on that data, they must submit an application to their respective Institutional Review Board (e.g., Homewood, School of Medicine, Bloomberg School of Public Health). Publishing or presenting on course data falls under human-subjects research.