Policy

As a Lasallian institution committed to whole-person formation, ethical practice, and academic excellence, Saint Mary’s University of Minnesota recognizes the transformative potential of Artificial Intelligence (AI) in education, research, and operations. This policy establishes guidelines to ensure AI is leveraged responsibly, ethically, equitably, and in alignment with our mission and values.

Scope

This policy applies to all members of the Saint Mary’s University community, including students, faculty, staff, and administrators, and encompasses the use of AI tools, technologies, and systems in academic, operational, and research contexts.

General Information

This policy serves as guidance for the ethical and responsible use of artificial intelligence across the university. Saint Mary’s community members should reference this policy when adopting or utilizing AI technologies to enhance education, research, and operational effectiveness while upholding the university’s commitment to ethical practice and human dignity.

General Principles

Ethical Use: All AI applications must align with ethical principles, including fairness, transparency, accountability, and respect for privacy and human dignity.

Academic Integrity: AI use must uphold academic honesty and integrity. Faculty, staff, and students must adhere to the established academic integrity policies when using AI tools.

Data Privacy: AI tools and applications must comply with privacy laws, regulations, and university policies, and must protect confidential and private data.

Inclusivity and Accessibility: AI tools and systems must be inclusive and accessible to all members of the university community, ensuring equitable opportunities and avoiding biases.

Human Oversight: AI should support, not replace, human judgment. Decisions impacting individuals or groups must involve human oversight and accountability.

1. Use of AI in Academics

1.1 Faculty Responsibilities

  • Faculty members who integrate AI into teaching and learning must do so thoughtfully, while fostering critical thinking and ethical engagement with the technology.
  • Faculty must communicate clear expectations regarding the use of AI tools in the course syllabus as outlined in the Catalog.
  • Faculty members are responsible for any content that they produce or publish that includes AI-generated material.
  • Faculty members overseeing student research are expected to guide appropriate AI use. If misuse occurs despite this guidance, the student will be held responsible.

1.2 Student Responsibilities

  • Students must adhere to the faculty member’s AI policy as written in the course syllabus. 
  • When using AI tools, students must properly cite the AI’s contributions in their work. For more information, visit the Writing Center.
  • Students are responsible for any content that they produce or publish that includes AI-generated material.
  • Students are prohibited from using AI tools to misrepresent their work or to violate academic integrity standards, and must adhere to the university’s academic integrity policies in their use of AI.
  • Violations of this policy will be addressed through existing university procedures for academic integrity misconduct.

1.3 Research

  • AI use in research must comply with ethical standards as outlined in the IRB application.
  • Researchers must ensure that AI tools used in research uphold data privacy, security, and ethical considerations. 
  • Faculty advisors are responsible for ensuring that the use of AI by all members of the research team reflects responsible, transparent, and ethical use of AI.

2. Operational Use of AI

2.1 Administrative Applications

  • AI systems used in university operations (e.g., admissions, advising, scheduling, facilities management, and financial aid) must prioritize fairness, transparency, and data security.
  • Decision-making processes that rely on AI should include human review, particularly in cases with significant impacts on individuals (e.g., admission decisions or financial aid eligibility).
  • University departments using AI must maintain clear channels for reporting concerns or errors, ensuring rapid response and resolution.

2.2 Privacy

  • AI tools must comply with data protection laws, including FERPA.
  • Personal data processed through AI systems must be anonymized wherever possible to minimize privacy risks.
  • AI tools should include mechanisms for data encryption and access control to safeguard sensitive information.

2.3 Procurement and Implementation

  • All AI systems procured or developed by the university must undergo review by the Data Governance and Technology Planning Committee to ensure alignment with institutional values and compliance with this policy.
  • Pilot testing must be conducted for new AI tools to assess functionality, accuracy, and potential risks before full implementation.
  • Vendors providing AI solutions must demonstrate adherence to ethical standards and unbiased practices.

Governance and Accountability

Policy Violations

  • Violations of this policy will be addressed through existing university procedures, including the procedures for academic integrity misconduct where academic work is involved.

Contact Information: For questions or further clarification regarding this policy, please contact the VPAAs.