Course Inquiry Starter Kit
Your compass for course and program-level data-driven inquiry.
Every course tells a story in its data. The Course Inquiry Starter Kit helps you start reading yours, with guided prompts and AI tools you can use right now to explore questions about your course design, student outcomes, and learning alignment.
Good teaching starts with good questions
Faculty regularly notice things in their courses that are hard to explain — patterns in grades, recurring feedback, assignments that don't land the way they should. But finding time to investigate those patterns is another matter.
The Course Inquiry Starter Kit gives you tools and guided prompts to begin exploring those questions on your own — using your syllabus, your assignments, and your course data. These aren't diagnostic reports. They're starting points: structured ways to look at your own course with fresh eyes.
Below, you’ll find a set of inquiry paths you can choose from, each focused on a different kind of question about your course. Pick the one that feels most urgent and dive in.
All activities use tools you already have access to at UMD. No new accounts. No new software. Your data stays secure and private.
- ELMS-Canvas (your LMS): Built-in analytics including grade distributions, assignment performance, weekly engagement, page views, and individual student activity reports.
- NotebookLM: Upload your own documents — syllabus, assignments, eval reports — and have a structured conversation with them. Best for deep analysis of your own course materials.
- Gemini (linked to Google Workspace): Access your Google Drive files directly. Best for structured comparison tasks when your materials already live in Drive.
- TerpAI: UMD's open-access AI tool, available to all faculty, staff, and students. Best for open-ended prompting and quick reflective exercises.
- Microsoft Copilot: Integrated with Microsoft 365. Best for faculty working primarily in Word, Excel, or OneNote — useful for rubric analysis and document comparison.
We encourage you to use only UMD-approved GenAI tools, which are deployed in alignment with institutional security and compliance requirements.
Activity Menu
Each activity below has three components: an Overview of the time and tools you will need; How to do it, the concrete steps for setting up and running the activity, including a starter prompt you can paste or adapt; and Notes describing what the activity can and cannot tell you, with a bridge pointing to pathways for deeper institutional support.
Activity Cluster 1:
Who is my course designed for — and who does it actually serve?
Every course is designed with an imagined student in mind. This cluster helps you make that imagined student visible — and ask whether your course works for the students actually in your room.
Prerequisite knowledge mapping
What prior knowledge is needed to successfully complete my first assignment?
Syllabus assumption audit
Based on my syllabus, what does a student need to succeed in my course?
High-stakes front-loading check
Does the weight of my assignments set students up for success?
OSCQR course quality review
Does my course support problem-solving and have real-world applications?
Activity Cluster 2:
Where are students getting lost?
The data in your gradebook and LMS already tells a story. This cluster helps you read it — looking for patterns that are easy to miss when you're in the middle of the semester.
Grade distribution cliff finder
How can I better understand how course sequencing affects student performance?
Engagement drop-off analysis
What insights can I gain from analyzing student engagement data?
Assignment performance comparison
What can I learn about my course by closely analyzing a few assignments?
Resource access audit
Are students using my course materials effectively?
Activity Cluster 3:
Does my course actually do what I think it does?
Misalignment between what a course says it does and what it actually measures is one of the most common — and most fixable — problems in course design. This cluster helps you audit the gap.
Outcomes to assessment alignment check
Are my assessments properly aligned with my course's learning outcomes?
Bloom's level audit
Are my assessments asking students to demonstrate the skill level I want?
Rubric to outcome coherence check
Are my assignment rubrics measuring my course's learning outcomes?
Grading policy coherence review
Does my course grading policy center and support students' learning?
Activity Cluster 4:
What do my students actually think is happening?
Student feedback is easy to read reactively and hard to read systematically. This cluster helps you step back from the loudest comments and find what's actually recurring across time.
Course evaluation pattern finder
How can I use patterns in student feedback to improve my course?
Mid-semester check-in design
How can mid-semester feedback from students inform how I adjust my course?
Assignment confusion detector
Is the clarity of my assignment prompts impacting student performance?
Multi-semester evaluation trend analysis
What can student feedback over time reveal about my course redesign?
Ready to Go Further?
Have you tried one or two of the activities above and found yourself wanting more than the tool could show you?
The activities in this kit are designed to help you begin exploring questions about your own course, but if you find yourself wanting to go further, that’s an important signal. When your questions extend beyond a single class—such as comparing across sections, identifying patterns among student populations, or examining trends at the program or institutional level—additional support is available.
The Teaching and Learning Transformation Center (TLTC) offers course-level consultations to help you interpret what you’re seeing, refine your pedagogy, and think through next steps in your inquiry.
For broader, data-intensive questions, the Educational Effectiveness team partners with faculty to analyze institution-level data, uncover systemic patterns, and connect findings to meaningful teaching interventions. This work is collaborative rather than self-serve.
If you’re unsure where to start, you can reach out to either team and we’ll help you determine the right level of support.
AI Usage Statement:
The TLTC outlined each activity and used Claude (Anthropic, 2026) to generate possible prompts and limitations, which were then revised and adapted.