AutoMR: Automated Scoring Platform for the DIT2 Moral Reasoning Inventory

Funded by the Center for the Study of Ethical Development at the University of Alabama

Overview

AutoMR (Automated Moral Reasoning) is an AI-assisted platform that streamlines administration, scoring, and data quality control for the Defining Issues Test 2 (DIT2), a widely used instrument for assessing moral reasoning in adolescents and adults. AutoMR replaces fragile manual workflows (multiple exports, SPSS scripts, hand checks) with a robust, auditable, Python-based pipeline that consolidates survey data from multiple platforms and produces validated DIT2 scores at scale.


Why AutoMR?

Researchers and educators rely on the DIT2 to study ethical development, instructional impact, and professional decision-making. Current workflows are:

  • Manual & error-prone: scattered CSVs, varying headers, brittle scripts.
  • Hard to scale: platform differences and data anomalies consume staff time.
  • Costly to verify: quality control requires domain expertise and repeated rework.

AutoMR reduces operational burden while improving accuracy, transparency, and reproducibility—so teams can spend more time on insight and less on plumbing.


Goals & Deliverables

Objective 1 — AutoMR v0: Python Scoring Engine

  • Re-implement the DIT2 scoring logic from legacy SPSS scripts as a tested Python module (see the sketch below).
  • Build a unit-tested library of edge cases derived from real datasets and expert review.
  • Add AI-assisted detection of input anomalies (e.g., missing fields, invalid codes, inconsistent patterns).

Deliverable: Open, versioned Python package + test suite; scoring parity report vs. legacy outputs.
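
For a flavor of the scoring engine's shape, here is a minimal sketch of a rule-based scoring function. The rank weights and the notion of a postconventional item set follow the general DIT tradition, but the specific values and groupings below are illustrative placeholders, not the official DIT2 scoring rules.

    from dataclasses import dataclass

    # Illustrative rank weights: higher-ranked items contribute more.
    # Placeholders only, NOT the official DIT2 parameters.
    RANK_WEIGHTS = {1: 4, 2: 3, 3: 2, 4: 1}

    @dataclass
    class StoryResponse:
        """One respondent's item rankings for a single dilemma story."""
        ranks: dict[int, int]  # rank position (1-4) -> chosen item number

    def p_score(stories: list[StoryResponse], post_conv_items: set[int]) -> float:
        """Toy postconventional index: the weighted share of ranked items
        that fall in a (hypothetical) postconventional item set."""
        earned, maximum = 0, 0
        for story in stories:
            for rank, item in story.ranks.items():
                weight = RANK_WEIGHTS[rank]
                maximum += weight
                if item in post_conv_items:
                    earned += weight
        return 100.0 * earned / maximum if maximum else 0.0

Because the real rules live in one tested module, parity with legacy SPSS outputs can be checked case by case in the unit-test suite.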


Objective 2 — AutoMR v1: Unified Data Ingest

  • Standardize a CSV header schema across survey platforms (see the sketch below).
  • Provide ingestion scripts to normalize exports (e.g., Qualtrics, REDCap, Google Forms, LMS tools).
  • Integrate LLM-assisted quality checks to flag missing/duplicate respondents, malformed items, and platform-specific quirks.

Deliverable: Data consolidation utility + schema validator + QC reports.
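
To illustrate what the ingest layer does, the sketch below renames platform-specific headers to a canonical schema and surfaces rows that fail basic checks. All column names and mappings here are hypothetical; the real header standard will ship with the project documentation.

    import pandas as pd

    # Hypothetical canonical schema and per-platform header mappings.
    CANONICAL_COLUMNS = {"respondent_id", "story", "item", "rank"}
    HEADER_MAPS = {
        "qualtrics": {"ResponseId": "respondent_id", "Q_story": "story"},
        "redcap": {"record_id": "respondent_id"},
    }

    def normalize(df: pd.DataFrame, platform: str) -> pd.DataFrame:
        """Rename platform-specific headers and validate the result."""
        df = df.rename(columns=HEADER_MAPS.get(platform, {}))
        missing = CANONICAL_COLUMNS - set(df.columns)
        if missing:
            raise ValueError(f"missing required columns: {sorted(missing)}")
        # Basic QC: surface duplicate keys instead of silently dropping rows.
        dupes = df[df.duplicated(["respondent_id", "story", "item"], keep=False)]
        if not dupes.empty:
            print(f"QC: {len(dupes)} rows share a respondent/story/item key")
        return df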


Objective 3 — AutoMR v1: End-to-End Backend

  • Combine ingest + scoring into a single backend service that takes raw CSVs and returns DIT2 scores and audit logs (see the sketch below).
  • Enable troubleshooting views and (optionally) analytics/visualization hooks for cohort-level summaries.
  • Establish versioning and change management to track input schema changes and AI model updates.

Deliverable: Containerized backend with API/CLI, documentation, and validation against expert benchmarks.
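
To show how the pieces might hang together, the following sketch accepts a raw CSV export, would delegate to the ingest and scoring modules from the earlier objectives, and records an audit entry. File names, version strings, and the helper names in the comments are illustrative stand-ins.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def run_pipeline(csv_path: str, platform: str) -> dict:
        """One end-to-end run: hash the input, ingest, score, audit."""
        raw = Path(csv_path).read_bytes()
        audit = {
            "source": csv_path,
            "platform": platform,
            "input_sha256": hashlib.sha256(raw).hexdigest(),
            "run_at": datetime.now(timezone.utc).isoformat(),
            "engine_version": "0.1.0",  # would come from package metadata
            "checks": ["schema_validated"],  # appended as checks actually run
        }
        # df = normalize(pd.read_csv(csv_path), platform)  # Objective 2 (hypothetical)
        # scores = score_frame(df)                         # Objective 1 (hypothetical)
        return audit

    # Tiny demo with a stand-in export so the sketch runs end to end.
    Path("export.csv").write_text("respondent_id,story,item,rank\nr001,1,3,1\n")
    print(json.dumps(run_pipeline("export.csv", "qualtrics"), indent=2))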


What AutoMR Produces

  • Validated DIT2 Scores: P-scores, N2 scores, and other standard indices, computed per accepted scoring rules.
  • Audit Trails: every transformation logged (source, version, checks performed, decisions).
  • QC Reports: anomaly summaries, suggested fixes, and reproducible issue IDs.
  • Research-Ready Exports: clean tables for statistical analysis and archiving (example below).
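
As a concrete example of the last item, a research-ready export might take the shape below. The rows are made-up placeholder values and the columns are illustrative, not the final schema.

    import pandas as pd

    # Illustrative shape only; real columns follow the published schema.
    scores = pd.DataFrame([
        {"respondent_id": "r001", "p_score": 42.0, "qc_flags": ""},
        {"respondent_id": "r002", "p_score": 36.0, "qc_flags": "missing_item"},
    ])
    scores.to_csv("dit2_scores.csv", index=False)  # clean table for analysis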

Methods & Safeguards

  • Reproducible Engineering: Python package, unit tests, CI, semantic versioning.
  • AI-Assisted QC (not AI-scored): LLMs help detect data issues; the official DIT2 scoring logic remains rule-based and transparent (see the sketch after this list).
  • Domain Expert Validation: iterative comparisons to legacy outputs; adjudication of edge cases.
  • Privacy & Compliance: de-identification options; secure handling of respondent data; IRB-aligned workflows.
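
To make the second safeguard concrete, here is the intended division of labor in miniature: an AI-backed checker may annotate the data with advisory flags, but the deterministic scoring function never reads those flags. Both function arguments are hypothetical stand-ins.

    from typing import Callable

    def qc_then_score(rows: list[dict],
                      detect_anomalies: Callable[[list[dict]], list[str]],
                      score: Callable[[list[dict]], dict]) -> dict:
        """Advisory QC, then rule-based scoring. `detect_anomalies` may be
        LLM-backed; its output is logged for human review and never
        influences how `score` computes the official indices."""
        flags = detect_anomalies(rows)  # advisory only
        results = score(rows)           # deterministic, rule-based
        return {"scores": results, "qc_flags": flags}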

Expected Impact

  • Lower Cost, Higher Scale: cut manual processing time, enabling larger multi-site studies.
  • Data Integrity: standardized headers, validated inputs, consistent scoring across cohorts and years.
  • Faster Insight Cycles: rapid turnarounds for instructors and researchers; easier replication and meta-analysis.
  • Open Science Enablement: clear provenance and artifacts for sharing and review.

Who Should Use AutoMR?

  • Education researchers studying ethical development and curricular impact.
  • Program evaluators tracking professional formation (e.g., teacher prep, nursing, engineering).
  • Institutional assessment teams needing reliable, repeatable scoring at scale.

Collaborations & Partners

We welcome partnerships with DIT2 custodians, assessment offices, and multi-institution research networks to:

  • Validate edge cases and platform connectors
  • Co-design governance and change management
  • Pilot at scale and co-author methodological notes

Getting Involved

  • Pilot with us: contribute anonymized sample datasets and legacy outputs for validation.
  • Adopt the schema: map your platform to the AutoMR CSV header standard.
  • Extend the adapters: help maintain connectors for your survey tools.

Resources (as released)

  • Documentation: setup, schema, API/CLI usage, and QC playbook
  • Code Repository: Python package, tests, examples
  • Release Notes & Parity Reports: transparent change history and validation summaries

(Links will be posted here upon public release.)


Contact

Project Lead: Dr. Taha Hassan, ALAAI Postdoctoral Researcher, The University of Alabama


Acknowledgments

We thank our domain experts and data partners for guidance on DIT2 scoring practices and validation of edge cases. Any public release will credit contributors and include guidance on responsible use.