Wednesday, July 29, 2009

Implementation, Part 1

Since the hours I can work at TIS are more limited than those I can spend in the IT lab, I decided to tackle the interactive elements of these modules first, using Captivate.

Captivate is great because it has a wide variety of existing question types that are easy to use: true/false, multiple choice, fill in the blank, short answer, matching, putting answers in order, and clicking on hotspots. It also includes some useful ways to integrate individual slides into an overall quiz or interactive exercise. Most interesting to me are:
  1. Question pools - These allow you to include more questions than a user will need to answer in any given iteration of the quiz; a subset of these questions is then pulled out at random. Random selection makes the quiz more interesting and educational for a student re-doing it.
  2. Branching - You can set the slides up so that students go to a different slide if they get a question right than if they get it wrong. This allows for more detailed feedback and a more individualized focus on particular topics.
  3. The ability to have graded or ungraded questions. "Graded" in this context simply means that students will get a response about whether they answered correctly or not; it does not necessarily imply that their responses are being reported to a teacher or librarian, or that they are receiving a grade for their participation. Whether a question is graded or ungraded can be determined separately for each question.
So, question slides are great, but... they can also be somewhat inflexible. My first implementation step, then, was to identify how the active learning elements associated with each module would best fit within the existing question slide types.

Scholarly vs. popular articles: Roxanne has been doing this activity from a blog post where she lists 15 articles and links to the citation and abstract for each one (mostly in Academic Search Complete) - there are a couple of exceptions where she links directly to a Wikipedia article and a newspaper article. Students fill out a worksheet in groups, deciding for each article whether it is popular or scholarly and including reasons why they think so. These worksheets are ungraded, but are returned to Roxanne so that she can see how well students are getting the concepts. My idea was to modify this exercise by creating a slide for each article:
  • I wanted to store their responses and produce, after they had completed all the questions, a chart that included some of their criteria for distinguishing between scholarly and popular articles. They would then be shown a chart produced by the library instruction department with similar content to which they could compare their criteria. This has proved not quite feasible, but I'll include details about how I've revised this idea in my next post.
  • Research vs. review articles: This exercise is much the same as the scholarly vs. popular exercise, but with only 8 articles.
  • Keywords and controlled vocabulary: This module addresses broader, narrower, and related terms and shows example diagrams both as concept maps and as hierarchies. The students are then asked to create a hierarchy or concept map for each of the following subjects: microscopy and simians. I thought this could be implemented in Captivate either by having a list of terms and a series of text boxes arranged hierarchically (for students to fill in the blanks) or by having an open text box with instructions to indent for different levels of the hierarchy. Roxanne decided that filling in blanks would be a clearer exercise for students to engage in.
  • Boolean operators and search strategies: The existing version of this module involves the TA leading the class in a stand-up/sit-down exercise based on hair color and eye color. As an alternative exercise for individuals working on this module alone, we would like to have a Venn diagram representing animals that lay eggs and animals that fly, along with a number of illustrations of animals. Students would, in turn, drag and drop the appropriate animals into the appropriate area of the diagram for one OR statement, one AND statement, and one NOT statement. Unfortunately, Captivate's question slides don't really accommodate this. The ordering and matching question slides do incorporate the drag-and-drop action, but they work only for text and only when one target goes to one location (not many targets to one location). Matt suggested creating a drag-and-drop animation in Flash and importing it into Captivate as an animation, but I'm not yet sure if this will work (in the time I have, that is, given that I know nothing about Flash). Roxanne and I decided that I would give up to a day's worth of time over to trying out Flash (more details coming in a later blog post...). If that doesn't work, we'll simply have a multiple choice or matching question, asking students to identify which diagram represents an OR statement, etc.
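The logic behind the Venn-diagram exercise maps directly onto set operations, which is worth keeping in mind no matter which tool ends up rendering it. As a rough sketch (the specific animals here are my own illustrative picks, not the final artwork list), the three target regions of the diagram correspond to OR, AND, and NOT like so:

```python
# Two circles of the Venn diagram, as sets (example animals only)
lays_eggs = {"penguin", "platypus", "eagle", "frog"}
flies = {"eagle", "bat", "butterfly"}

# OR statement: animals anywhere in either circle (union)
either = lays_eggs | flies

# AND statement: animals in the overlap (intersection)
both = lays_eggs & flies

# NOT statement: animals that lay eggs but do not fly (difference)
eggs_not_flying = lays_eggs - flies

print(sorted(either))           # every animal in either circle
print(sorted(both))             # ['eagle']
print(sorted(eggs_not_flying))  # ['frog', 'penguin', 'platypus']
```

However the drag-and-drop is finally implemented, each drop target can be checked against the corresponding set.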
  • A second exercise in this module gives students a series of citations and, for each one, asks them to create a search strategy to find other similar articles. These were straightforward to enter as short-answer questions. They will be ungraded, given the variety of potentially appropriate responses.
  • Databases vs. the catalog: The exercise for this module gives students a list of citations of books, articles, and dissertations with certain elements highlighted (e.g. the article title highlighted in one citation and the authors highlighted in another). Students are then asked whether they would look for this element in the library catalog or in a database. These questions were straightforward to enter as true/false in Captivate. I took a stab at different messages to give if the wrong answer was chosen, and Roxanne reviewed and revised these by email.
  • Searching in PubMed: This exercise involves students using some advanced features in PubMed to answer specific questions (e.g. the author of a 2006 article in a particular journal; topics of an article by a particular author in a particular journal; etc.). These were relatively simple to enter as short-answer questions. The questions had to be revised slightly, however, because we wanted them to be graded and therefore needed unambiguous answers. (Captivate allows you to enter up to 8-10 correct answers, but the student's answer must match one of them exactly in order to be identified as correct.) For example, we asked students to identify the MeSH terms for a certain article rather than the more vague "topics."
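The exact-match grading described above is what forced the rewording toward unambiguous answers. A simplified sketch of the idea (the accepted answers here are hypothetical, and Captivate's actual matching rules may differ, e.g. in how it treats case):

```python
# Hypothetical list of accepted answers for one short-answer question
ACCEPTED = ["Microscopy", "Microscopy, Electron"]

def is_correct(response, accepted=ACCEPTED):
    """Correct only if the response matches an accepted answer exactly,
    here after trimming whitespace and ignoring case."""
    return response.strip().lower() in {a.lower() for a in accepted}

print(is_correct("  microscopy "))               # True
print(is_correct("the study of small things"))   # False - no partial credit
```

Because there is no partial credit or fuzzy matching, a question like "what topics does this article cover?" would be nearly impossible to grade this way, while "what MeSH terms are assigned to this article?" has a small, enumerable set of exact answers.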
