Hestia Meets a Virginia (US) High School Latin Class: Part III

One of the ways that my collaboration with Hestia added to my own “pedagogical toolbox” as a teacher was working with Elton to design the survey.  The survey itself had to target a student audience and, at the same time, deliver robust information for the research team. I had a lot of experience and interest in writing good questions and assessments to evaluate student mastery, but I did not have a lot of experience thinking in survey metrics.

As a classroom teacher, I was free to develop my own rubrics for special projects that enrich the curriculum, whether in terms of the United States’ Standards for Classical Language Learning (ACL & APA 1997), the Virginia State Standards, or our district curriculum guide.  A rubric along the lines of the 1997 ACL/APA report would target goals for Communication, Culture, Connections, Comparisons and Communities, for example, and the tasks I could report on might include:

  • to what extent Hestia introduced students to the Greek language;
  • how well a student could identify Thermopylae on a digital map;
  • how students would explain Hestia to a social studies class;
  • how English acquired the word “marathon” from the Battle of Marathon;
  • and what place Hestia holds among other exciting digital resources in classics and classical civilization.

But the Hestia research team needed information that did not answer narrowly to these standards, because Hestia meets those standards and yet ambitiously does so much more.  So I redrafted my learning goals a number of times and vetted them with Elton, whom I thank for his patience and dedication.  What we developed together was successful in at least two senses: first, Hestia gathered measurable data about user experience; and second, my students — who were only fourteen or fifteen years old, remember — delivered serious answers to serious questions, and they truly rose to the task. This remains apparent to me as I review their submissions in retrospect: the opportunity Hestia afforded students to engage as participants, with something meaningful to say and to offer the project, was a great motivation for these high school students.

With this in mind, we asked two types of questions: closed (Likert-scale) and open (free-text). Next came the matter of working with the content management system at my home school: learning its interface, writing and editing questions, and testing them before we went ahead with student trials.

The Likert scale is a psychometric scale commonly used in questionnaires, and the most widely used scale in survey research. Generally, the level of agreement or disagreement is measured according to five ordered response levels:

  • Strongly disagree
  • Disagree
  • Neither agree nor disagree
  • Agree
  • Strongly agree
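For readers curious how such responses become “measurable data,” here is a minimal sketch (not drawn from the Hestia project’s own tooling) of the standard analysis step: each of the five ordered levels is coded as an ordinal score from 1 to 5, which can then be averaged across respondents.

```python
# Hypothetical illustration: coding five-level Likert responses numerically.
LIKERT_LEVELS = [
    "Strongly disagree",
    "Disagree",
    "Neither agree nor disagree",
    "Agree",
    "Strongly agree",
]

# Map each response label to an ordinal score, 1 through 5.
SCORES = {label: i for i, label in enumerate(LIKERT_LEVELS, start=1)}

def mean_score(responses):
    """Average the ordinal scores for a list of response labels."""
    return sum(SCORES[r] for r in responses) / len(responses)

# Example: three (invented) student responses to one statement.
sample = ["Agree", "Strongly agree", "Neither agree nor disagree"]
print(mean_score(sample))  # 4.0
```

Treating the labels as equally spaced numbers is a simplification, but it is the conventional way to summarize Likert items across a group of respondents.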

While the Likert scale gave the team a sound basis for measuring the success of the trial, we also wanted to encourage the students to articulate their engagement with the Hestia technologies through more ‘open’ questions, which invited them to share their experiences and opinions in a more anecdotal form.

Read the next post for the results!