Teaching and Learning with GenAI

CTLA Position Statement on Teaching with GenAI

The Center for Teaching, Learning, and Assessment supports the principled implementation and integration of Generative Artificial Intelligence (GenAI) in higher education when and where it promotes

  • student achievement of stated learning outcomes in a course or across a curriculum;
  • support for faculty by increasing effectiveness and efficiency in the performance of instructional and administrative tasks;
  • our collective ability to leverage technology-rich contexts to build community and human connection; and
  • the facilitation of personalized learning, tutoring and other customized assistance for teaching and learning tasks.

This position is grounded in the following eight principles:

AI is here to stay.

Far from being merely a gimmick or a flash in the pan, AI has increasingly infiltrated professional work and many people’s daily lives. We believe that these technologies present fundamental changes to writing, multimodal creation, and professional workflows. Students graduating with an understanding of and competence in the disciplinary application of AI will be better prepared to pursue their professional and personal aspirations.

Like any technology, AI has affordances and limitations.

Because we see AI as transformative for, and increasingly a fundamental aspect of, education and work, our goal is to maximize its affordances and mitigate its limitations. We are committed to nurturing conversations and offering resources to aid you in that endeavor.

AI will require us as educators to update our approaches inside and outside the classroom.

We believe that now more than ever, ongoing pedagogical development is necessary. Many concerns educators have regarding the implications of AI, and particularly students’ use of it, relate to pedagogical approaches that predate AI. Good pedagogical practice accounts for the broader context in which teaching occurs. If the context has changed because of AI tools, then we need to continue to reflect on and develop our pedagogy and practices in light of those changes.

Healthy GenAI initiatives should include room for skeptics, the cautious, and those opposed to the technology.

We believe that critical thinking is the hallmark of higher education and support for opposing viewpoints is essential to the advancement of knowledge. While some learning outcomes related to GenAI, such as prompt-engineering or bot programming, may make this challenging, we believe that a measured approach to the radically evolving technology of AI necessitates options for different kinds of engagement, to the extent that it is possible. This could mean allowing students who object to the carbon footprint of AI technology to use a less resource-intensive model or providing an alternative means for them to demonstrate their learning. This principle supports expanding options related to GenAI assignments and activities to incorporate the active engagement of teachers, researchers, and learners who are more cautious than enthusiastic about the technology and/or its industrial impact.

Sound, evidence-based Scholarship of Teaching and Learning (SoTL) can be adapted to and supportive of sound pedagogical practice with AI.

While the rising influence of AI on professional and academic contexts may require us to update our pedagogical practices, we recognize that SoTL can provide valuable resources for evidence-based research that may inform our responses to AI tools. CTLA is dedicated to familiarizing you with the best research and scholarship related to teaching and learning in the context of AI. 

Empowering AI literacy requires at least two components: effective practices, and ethical considerations and contexts for critical use.

We recognize that practitioners (faculty, staff, students, and others) need to know how to engineer prompts, select particular AI applications, understand the affordances and limitations of different platforms, and so forth. However, real agency with AI requires that we understand the ethical contexts both narrowly, in terms of our ethical practices with AI, and broadly, in terms of larger ethical concerns, such as inherent biases embedded in programming or training, exploitation of human trainers of LLMs, and power consumption of servers. 
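
To make the "effective practices" component concrete, the sketch below contrasts a bare question with an engineered prompt that specifies role, audience, constraints, and output format. It is a hypothetical illustration only: it assumes the openai Python package and an API key in the environment, and the model name and course framing are placeholders rather than recommendations.

# Hypothetical sketch of prompt engineering: the same request, unstructured vs.
# engineered. Assumes the `openai` Python package (v1.x) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

# A bare question leaves audience, depth, and format up to the model.
bare_prompt = "Explain nitrogen fixation."

# An engineered prompt states the role, audience, constraints, and output format.
engineered_prompt = (
    "You are a tutor for a 200-level soil ecology course. "
    "Explain nitrogen fixation to a student who has completed introductory biology. "
    "Use at most 150 words, give one forest-ecosystem example, "
    "and end with a question that checks the student's understanding."
)

for label, prompt in [("bare", bare_prompt), ("engineered", engineered_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)

The engineered version does not make the model smarter; it simply removes guesswork about audience, length, and purpose, which is the kind of practical skill the first component of AI literacy describes. The second component, ethical and critical use, has no code shortcut: it means weighing concerns such as bias, labor, and energy use alongside such techniques.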

AI detection is a flawed technology and often indicates a problematic subject position.

We recognize that AI detectors can produce false positives and false negatives; that they may misclassify the work of non-native English speakers, neurodivergent writers, and others with learning differences as AI-generated; and that there are ethical concerns regarding privacy and student ownership of texts processed by third-party apps without students’ consent. We further understand that, as third-party applications, AI detectors often operate as “black boxes,” providing little insight or transparency into how they arrive at their conclusions. We encourage faculty and programs to think carefully before adopting these systems, which can remove humans from the loop of negotiating what academic integrity means.

A one-size-fits-all approach to AI in higher education is counter-productive.

We recognize that faculty, staff, and students are concerned about the implications of AI for higher education, and that everyone wants to ensure they are doing the right thing regarding these tools. These concerns may provoke a desire for an institutional response that clearly defines how and when AI should be used at the university. However, we believe positions regarding AI use are better developed within specific contexts by the stakeholders most closely engaged with those domains.

Just as what constitutes “good writing” varies from discipline to discipline, what constitutes “good AI use” may vary as well. Our mission is to provide you with training and resources for pedagogical development to support you in the process of developing effective policies, course design, and pedagogical implementations. However, as the expert in your context, you will be central in adapting your practices and materials to respond effectively to innovative technologies such as AI tools.

2025-26 AI Faculty Fellows

College of Arts and Sciences
Paul Shovlin

Paul Shovlin is an assistant professor of English and serves as the lead faculty fellow for GenAI in teaching and learning. He has been leading and facilitating faculty development programming on generative artificial intelligence and teaching and learning for the center since fall 2022.

Shovlin co-chaired President Lori Stewart Gonzalez's Dynamic Strategy Learn Committee and participated in Provost Elizabeth Sayrs' AI Think Tank, leading the ethics group. In addition to facilitating three successful CTLA faculty learning communities on GenAI in higher education, Shovlin has opened the conversation university-wide, hosting AI Coffees with his FLC co-facilitator, and he has shared his work on GenAI in workshops and panels. 

College of Arts and Sciences
Jared DeForest

I am a Professor researching the role of nutrient cycles in forest ecosystem dynamics and I serve as the Chair of Environmental and Plant Biology. I aim to empower my students with a solid understanding of scientific concepts, fostering critical thinking and awareness of environmental issues. 

I envision that such knowledge will catalyze their own journey of learning beyond their academic years. I'm convinced that generative AI (GenAI) can significantly support this educational objective, but it necessitates guided use and regular practice to be truly beneficial. I anticipate that the ability to proficiently utilize GenAI will become an essential competency required by employers for all college graduates.

I think custom GPT chatbots, like the one I developed named SoilSage, will be extremely beneficial for education and research. This chatbot acts as an interactive teaching assistant in my lecture and lab capstone course, aiding students in achieving learning goals and enhancing their critical thinking abilities. 

The chatbot is designed to assist students in learning through questioning and exploration, steering them towards the right answer in order to develop their critical thinking skills. In groups, students will be taught to develop and instruct their personalized chatbots, which will function as tutoring aids and interactive study companions. 

My graduate students can utilize this chatbot as a dynamic tool for conducting a literature review of my research program and for assisting in the generation of hypotheses. They will be taught how to build a chatbot that functions as an interactive lab journal.
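
As a rough illustration of how a tutoring bot of this kind can be set up, the hypothetical sketch below pairs a Socratic system prompt with instructor-supplied excerpts. It is not Professor DeForest's SoilSage implementation: it assumes the openai Python package, and the excerpts, system prompt, and model name are placeholders.

# Hypothetical sketch of a Socratic tutoring assistant grounded in trusted excerpts.
# Not the actual SoilSage bot; assumes the `openai` Python package and an API key.
from openai import OpenAI

client = OpenAI()

# Stand-ins for the "trusted information sources"; in practice these might be
# passages drawn from the instructor's published papers.
TRUSTED_EXCERPTS = """\
Excerpt 1: [passage on phosphatase activity in acidic forest soils]
Excerpt 2: [passage on nitrogen mineralization methods]
"""

SYSTEM_PROMPT = (
    "You are a tutoring assistant for a soil ecology capstone course.\n"
    "Ground every response in the excerpts below; if they do not cover the "
    "question, say so rather than guessing.\n"
    "Do not give final answers outright: respond with guiding questions and "
    "hints that steer the student toward the answer.\n\n"
    f"Excerpts:\n{TRUSTED_EXCERPTS}"
)

def ask_tutor(student_question: str) -> str:
    """Send one student question to the prompt-constrained tutor and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_tutor("Why might phosphatase activity differ between the two study plots?"))

Constraining the assistant to a fixed set of trusted excerpts is what keeps its answers grounded, while the Socratic instructions keep it asking guiding questions rather than handing students the answer.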

Across all of these uses, my work with GenAI is aimed at reducing faculty workloads and enhancing student participation and the overall learning experience.

Faculty Webpage: /cas/deforest

Patton College of Education
Jennifer Lisy

In December 2022, I discovered a new world of possibilities when OpenAI released ChatGPT to the public. Immediately, I began exploring the affordances and limitations of these new tools: creating multiple-choice questions, creating case studies that apply new concepts to real-world situations, and writing drafts of documents. As a former classroom teacher and now teacher educator, I have always been an early adopter of new technologies.

My research on digital writing showed that typing slowed second-grade students' composing speed but improved their spelling. As a teacher educator, I embed technology throughout my courses, using tools my undergraduate students will use in their future classrooms as well as tools that enhance engagement and learning. Last spring, one of my undergraduates and I presented at the Ohio Educational Technology Conference on how we use AI to enhance our teaching.

As educators we have a responsibility to prepare our students for this ever-shifting technological landscape. Artificial intelligence is utilized in many of the apps and programs we use daily, from the map software that helps you navigate to a new part of town to Grammarly helping you improve your writing. 

As an assistant professor of instruction in teacher education at Ohio University Zanesville, I am deeply committed to preparing our students with the best Ohio University has to offer. Our students on the regional campuses often include nontraditional students who may enter our classrooms with less technology experience. While we may think of our students as digital natives, we must continue to build critical digital literacy skills that will allow them to understand the affordances, limitations, and ethical use of these tools.

As an AI Faculty Fellow, I look forward to continuing the work that I began as a member of the AI Think Tank and in the AI Faculty Learning Community last spring. I am excited that Ohio University continues to be a leader in the nation and across the region in preparing our students for this new reality.

College of Fine Arts
Basil Masri Zada
As a faculty member in the Digital Art + Technology area, I have been experimenting with, researching, and developing creative practices in my own work and my students’ work, using a range of AI models across many collaborative projects.
 
Since the recent advancements in AI tools, I have been integrating AI into studio art practices and research as part of my teaching philosophy. This begins with understanding the concerns and issues surrounding Generative AI tools in art: authenticity, legality, ethics, creativity, ownership, free versus commercial use, who owns the creative rights and the work, and whether the result is cheating, imitation, or creation.
 
The main concern for artists, students, and creative minds is the fear that AI companies will use their work without credit or permission, especially with image-based generation. I address these teaching problems by creating a safe, private, and intellectually protected experience in many projects, so that students learn to use AI as a tool rather than as an end in itself, stay creative, and own their work in an environment I have been researching and building, rather than rejecting the technology outright.
 
AI is here to stay, and our students need to explore it, adapt to it, and learn how to utilize it, or at least understand it, so that their work remains unique and relevant, rather than ignoring it and becoming disconnected from the technological revolution in Art and Technology. In Digital Art and Technology, we explore text-based, image-based, video-based, and sound/music-based AI tools.
 
Some articles and the paper about the AI project integrated into our DAT courses:

GenAI Teaching Resources for OHIO Faculty

  • Faculty who are new to GenAI in teaching and learning may want to begin with advice on how to conceptualize its use.
  • The CTLA offers a GenAI in Teaching and Learning asynchronous institute with a carousel of modules that allow faculty to develop their course policy and a series of pedagogical approaches that leverage or mitigate AI use.
  • These curated resources, developed by OHIO instructors as part of their course redesigns, offer diverse examples of highly effective methods for guiding students' ethical use of and learning about AI tools.

Featured in the Media

In The New York Times
CTLA fellow Paul Shovlin on human connections

The value that we add as instructors is the feedback that we’re able to give students. It’s the human connections that we forge with students as human beings who are reading their words and who are being impacted by them.

In the article, the CTLA position statement is offered as a way to assist faculty in determining how to leverage the technology while maintaining a human connection.

NPR's Fresh Air
CTLA's Position Statement

In this interview, Times journalist Kashmir Hill refers to Ohio University and the CTLA faculty fellows for their investigation into how to effectively deploy AI in teaching and learning. (Forward to the 11-minute mark.)

Ohio Supercomputer Center
CTLA fellow Basil Masri Zada on experiential learning and AI

It’s important for students to understand how technology can enhance their artistic practice. This project is giving them a real-world application of how art and technology can intersect in a professional setting.

In The New York Times
CTLA fellow Jared DeForest on custom chatbots

In the article, Professor Jared DeForest, GenAI fellow and chair of Environmental and Plant Biology, is featured for having “created his own tutoring bot, called SoilSage, which can answer students’ questions based on his published research papers and science knowledge. Limiting the chatbot to trusted information sources has improved its accuracy.”

Sharing What We've Learned

CTLA fellows and staff have shared scholarly teaching approaches and scholarship locally, regionally and nationally. The following summarizes 2024-25 activities.

  • DeForest, J. (October 2024). Mitigating generative AI inaccuracies in soil biology. Soil Biology and Biochemistry (197).
  • Lisy, J.G., & Masri Zada, B. (February 14, 2025). AI in Education, A future? Navigating Ethical and Innovative Applications [Conference session]. Ohio Educational Technology Conference, Columbus, Ohio.
  • Lisy, J.G. (October 15, 2024). AI Tools for Better Assessment: Creating Questions and Improving Learning [Conference session]. Ohio Council for the Social Studies, Lewis Center, Ohio.
  • Masri Zada, B., & Lisy, J.G. (January 2025). Revamping Curriculum: Artificial Intelligence Integration in Higher Education, Case Study [Conference session]. Lilly Conferences: International Teaching Learning Cooperative, San Diego, California.
  • Rhodes-DiSalvo, M., & Shovlin, P. (May 29, 2025). Teaching and assessing AI-enhanced courses: An institutional initiative [Conference session]. Teaching & Learning with AI Conference, Orlando, Florida.
  • Shovlin, P., Lisy, J.G., Masri Zada, B., Shepard, M., Durado, A., & DeForest, J. (March 20, 2025). Generative AI in Higher Education from Transactional to Transformational. We-Note at Spotlight on Learning Conference, Center for Teaching, Learning, and Assessment, Ohio University, Athens, Ohio.