Artificial Intelligence at Hanover College

Hanover College is thoughtfully engaging with artificial intelligence as part of its liberal arts mission — exploring how AI can enhance learning, teaching, and operations while remaining grounded in academic integrity, ethical responsibility, and human-centered education.

AI Across the Curriculum

Faculty across disciplines are examining how AI intersects with their fields, helping students learn to analyze, question, and apply emerging technologies thoughtfully within a liberal arts framework.

Responsible & Ethical AI Use

As AI tools evolve, Hanover emphasizes transparency, academic integrity, and alignment with institutional values in how they are used.

College-Wide Collaboration

AI at Hanover is guided by a cross-divisional task force representing multiple areas of the College, ensuring shared stewardship and a breadth of perspective. The AI Task Force is led by Carey Adams, Provost & Vice President for Academic Affairs.

  • Academic Affairs
  • Information Technology
  • Library
  • Student Success
  • Business Program
  • Business Affairs
  • Communications & Marketing

Responsible Use of Artificial Intelligence at Hanover

The Hanover College Principles call us to pursue academic excellence with integrity, to engage in the open exchange of ideas, to practice honesty and accountability, and to be good stewards of our community and the world. These commitments guide how we approach artificial intelligence.

AI tools offer genuine opportunities for learning, creativity, and productivity. At the same time, they raise complex ethical questions about authorship, accuracy, privacy, environmental sustainability, and societal bias.

What Responsible AI Use Means at Hanover

  • Practicing academic integrity — ensuring our work reflects our own thinking, not just an algorithm’s output
  • Fostering intellectual vitality — using AI to enhance learning, not replace it
  • Being honest and transparent — acknowledging when and how AI assistance is used
  • Exercising accountability — taking responsibility for the accuracy and appropriateness of AI-assisted work
  • Acting as good stewards — protecting sensitive data and considering AI’s broader impacts

Purpose and Scope

This guidance is intended to help students, faculty, and staff use AI tools thoughtfully and in alignment with Hanover College values. It does not replace existing College policies (such as Academic Integrity, FERPA/HIPAA compliance, or employee conduct expectations). When this guidance intersects with existing policies, the relevant policy governs.

Risks to Be Aware Of

1. Academic Integrity Violations

Using AI to complete assignments or generate work presented as your own may constitute plagiarism or academic dishonesty.

2. Inaccuracy & Hallucination

AI tools generate plausible responses, not guaranteed facts. They may produce information that appears credible but is false. Always verify facts, citations, and references.

3. Privacy & Confidentiality

Avoid entering personal data, confidential information, or student records into AI tools. Rule of thumb: if you would not email it to a stranger, do not paste it into a public AI tool.

A quick reference chart, the AI Tool Security Classification Framework, summarizes risk levels:

  • High Risk: Student records, personnel files, financial data, health information, unpublished research, legal matters
  • Medium Risk: Internal communications, draft documents, meeting notes, operational data
  • Low Risk: Public information, general brainstorming, personal learning

4. Bias & Harmful Content

AI may reflect or amplify societal biases and may generate offensive or harmful content, particularly around sensitive topics such as race, gender, or identity.

5. Intellectual Property

AI-generated content may raise copyright or authorship concerns. Fair use may apply in some academic contexts, but users should understand how a tool was trained and who owns the resulting output.

Responsible Use Guidelines

  • Be transparent. Acknowledge AI use in academic or professional work. Cite or note assistance (e.g., following Chicago Manual of Style recommendations for citing AI-generated content). Inform meeting participants if AI is used to record or summarize.
  • Use AI for assistance, not substitution. AI may support brainstorming or learning, but should not replace original analysis or creative work.
  • In the classroom. Students should follow course-specific guidance. If AI use is not addressed, assume it is not permitted. Faculty should clearly state expectations in syllabi.
  • Protect data. Never upload student records, employee data, or sensitive institutional materials into public AI tools.
  • Be critical and accountable. Verify AI outputs and citations. You remain fully responsible for any work that includes AI-generated content.
  • Be a steward. Use AI purposefully and efficiently, recognizing environmental and resource impacts.
  • Evaluate tools before use. Consult IT before using new tools for institutional work, especially those handling sensitive data.

The ROBOT Test

  • Reliability: How transparent and trustworthy is the tool?
  • Objective: What is it designed to do, and why?
  • Bias: What biases or ethical concerns may exist?
  • Owner: Who controls the tool and has access to inputs?
  • Type: What kind of AI is it, and does it require human oversight?

Compliance & Questions

If your work involves FERPA, HIPAA, IRB, or other regulated data, consult the appropriate office before using AI tools. When in doubt, use professional judgment and seek guidance.

Questions may be directed to your department head or the Office of Academic Affairs.