Thursday, July 30, 2009

Notes from today's meeting with Roxanne

We took a look at my drafts of the demonstration portion of each module. I'd sent these to Roxanne via email on Tuesday, but she'd had trouble viewing them (possibly because I produced some in Camtasia 5 at TIS and some in Camtasia 6 at the IT lab). Roxanne was able to look at the keywords and controlled vocabulary module over the last couple of days and create a script and notes tied to specific times in that video. I shared the rest of the modules with her via webspace - hopefully this means she'll be able to view them more easily. Here are the notes I took while we viewed the other videos, detailing some of the changes I need to make:
  • Boolean & controlled vocabulary module - I need to include a segment where I go to the search history after doing all three searches.
  • PubMed module - When you do the searches while logged into MyNCBI, your search terms are highlighted. I either need to redo the module while logged in or add highlighting in Camtasia to simulate this tool. I need to remove more of Camtasia's auto-zooming from this module - especially in each search's results, when limiting the search by date, and when viewing the search history at the end.
  • Research vs review module - I currently have a slide listing the citations of two articles, followed by a video scrolling through each article and highlighting relevant aspects. The citation slide shouldn't make the titles of the articles look like links (since they don't actually link anywhere). Instead, I'll include a slide after the video portion with links to the articles for future reference. In the video portion, I need to be sure that I show the article outline (below the abstract) for each.
  • Scholarly vs popular module - I need to make the same changes to the citation slide as in the research vs review module. I'll also move the chart of criteria for distinguishing between scholarly and popular articles to the end - after the interactive portion.
Another overall issue to address is that some of these videos looked much fuzzier than others, despite the fact that I made each of them in pretty much the same way. I need to check in with Matt about why this might be the case and how to make sure that each video is as clear as possible.

Implementation, Part 2

Once I had a sense of how all of the exercises would work with Captivate, I went back to one of my more complicated exercises: scholarly vs. popular articles. I had three main questions that I used Captivate help and community forums to answer: 1) How can I combine two questions on one slide? 2) How can I insert links to external websites on a question slide? 3) How can I save students' answers to one question and use them in another slide?

Here's what I found out:
  1. You can only have one question per question slide - no exceptions. This didn't seem to be a huge problem - I could simply have the "why" question on the next slide.
  2. You can't actually put a link on a question slide! But, in Adobe's community forum, I found the following work-around: You can create a small, separate Captivate file that contains just one slide with a transparent button whose action is to open a URL. Publish this as a Flash file and you can insert it as an animation in your question slide. A problem I have yet to solve with this is that the animation then shows a green Adobe Captivate loading bar under the text on the question slide.
  3. I read all kinds of information on user-defined variables, which seemed to be just what I needed in order to transfer students' answers into my chart. When I followed the steps described for using these variables, however, my menus didn't look the same and the options I needed to choose weren't available. Eventually, I realized that variables were a new feature in Captivate 4, and I am using Captivate 3.
So there went my plan of having a user-generated chart of criteria! I currently have these questions (and the research vs. review questions) entered as simple true/false questions. I then focused on creating slides to go after each question, showing clues that the article was scholarly or popular based on elements of the citation and abstract. I used SnagIt to take screenshots of the citation and abstract and added highlighting and call-outs as illustration. Here are a few examples:
I took both large (including the entire abstract) and small (1-2 excerpts) screenshots and ended up using all of the smaller ones because of the size of our end product. It's going to be a little smaller in order to accommodate a table of contents on the left and fit an existing template used by TIS and Library Instruction Services (so it will end up looking a little like this tutorial). Roxanne reviewed these screenshots by email and revised a few of them.

Wednesday, July 29, 2009

Implementation, Part 1

Given that my available hours at TIS are more limited than my hours at the IT lab, I decided to tackle the interactive elements of these modules first, using Captivate.

Captivate is great because it has a wide variety of existing question types that are easy to use: true/false, multiple choice, fill in the blank, short answer, matching, putting answers in order, and clicking on hotspots. It also includes some useful ways to integrate individual slides into an overall quiz or interactive exercise. Most interesting to me are:
  1. Question pools - which allow you to include more questions than a user will need to answer in any given iteration of the quiz and then pull out a subset of these questions at random. The use of random questions makes retaking the same quiz more interesting and educational.
  2. Branching - You can set the slides up so that students go to a different slide if they get a question right than if they get it wrong. This allows for more detailed feedback and a more individualized focus on particular topics.
  3. The ability to have graded or ungraded questions. "Graded" in this context simply means that students will get a response about whether they answered correctly or not; it does not necessarily imply that their responses are being reported to a teacher or librarian or that they are receiving a grade for their participation. Whether questions are graded or ungraded can be determined separately for each question.
So, question slides are great, but... they can also be somewhat inflexible. My first implementation step, then, was to identify how the active learning elements associated with each module would best fit within the existing question slide types.

Scholarly vs. popular articles: Roxanne has been doing this activity from a blog post where she lists 15 articles and links to the citation and abstract for each one (mostly in Academic Search Complete) - there are a couple of exceptions where she links directly to a Wikipedia article and a newspaper article. Students fill out a worksheet in groups, deciding for each article whether it is popular or scholarly and including reasons why they think so. These worksheets are ungraded, but are returned to Roxanne so that she can see how well students are getting the concepts. My idea of how to modify this exercise was to have a slide for each article that looked something like this:
  • I wanted to store their responses and produce, after they had completed all the questions, a chart that included some of their criteria for distinguishing between scholarly and popular articles. They would then be shown a chart produced by the library instruction department with similar content to which they could compare their criteria. This has proved not quite feasible, but I'll include details about how I've revised this idea in my next post.
  • Research vs. review articles: This exercise is much the same as the scholarly vs. popular exercise, but with only 8 articles.
  • Keywords and controlled vocabulary: This module addresses broader, narrower, and related terms and shows example diagrams both as concept maps and as hierarchies. The students are then asked to create a hierarchy or concept map for each of the following subjects: microscopy and simians. I thought this could be implemented in Captivate either by having a list of terms and a series of text boxes arranged hierarchically (for students to fill in the blanks) or by having an open text box with instructions to indent for different levels of the hierarchy. Roxanne decided that filling in blanks would be a clearer exercise for students to engage in.
  • Boolean operators and search strategies: The existing version of this module involves the TA leading the class in a stand-up/sit-down exercise based on hair color and eye color. As an alternative exercise for individuals working on this module alone, we would like to have a Venn diagram representing animals that lay eggs and animals that fly, along with a number of illustrations of animals. Students would, in turn, drag and drop the appropriate animals into the appropriate area of the diagram for one OR statement, one AND statement, and one NOT statement. Unfortunately, Captivate's question slides don't really accommodate this. The ordering and matching question slides do incorporate the drag-and-drop action, but they work only for text and only when one target goes to one location (not many targets to one location). Matt suggested creating a drag-and-drop animation in Flash and importing it into Captivate as an animation, but I'm not yet sure if this will work (in the time I have, that is, given that I know nothing about Flash). Roxanne and I decided that I would devote up to a day's worth of time to trying out Flash (more details coming in a later blog post...). If that doesn't work, we'll simply have a multiple choice or matching question, asking students to identify which diagram represents an OR statement, etc.
  • A second exercise in this module gives students a series of citations and, for each one, asks them to create a search strategy to find other similar articles. These were straightforward to enter as short-answer questions. They will be ungraded, given the variety of potential appropriate responses.
  • Databases vs. the catalog: The exercise for this module gives students a list of citations of books, articles, and dissertations with certain elements highlighted (e.g. the article title highlighted in one citation and the authors highlighted in another). Students are then asked whether they would look for this element in the library catalog or in a database. These questions were straightforward to enter as true/false in Captivate. I took a stab at different messages to give if the wrong answer was chosen, and Roxanne reviewed and revised these by email.
  • Searching in PubMed: This exercise involves students using some advanced features in PubMed to answer specific questions (e.g. the author of a 2006 article in a particular journal; topics of an article by a particular author in a particular journal, etc.). These were relatively simple to enter as short-answer questions. The questions had to be revised slightly, however, because we wanted them to be graded and therefore needed unambiguous answers. (Captivate allows you to enter up to 8-10 correct answers, but the student's answer must match one of them exactly in order to be identified as correct.) For example, we asked students to identify the MeSH terms for a certain article rather than the vaguer "topics."
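The Venn-diagram exercise in the Boolean operators module above maps directly onto set operations: OR is a union, AND an intersection, and NOT a difference. Here's a minimal Python sketch of that logic, using made-up animal names (the actual illustrations in the module may differ):

```python
# Two circles of the Venn diagram from the Boolean operators exercise.
# Animal names are illustrative placeholders, not from the actual module.
lay_eggs = {"penguin", "snake", "turtle", "eagle"}
fly = {"bat", "eagle"}

# OR statement: animals that lay eggs OR fly (union of both circles)
either = lay_eggs | fly

# AND statement: animals that lay eggs AND fly (overlap of the circles)
both = lay_eggs & fly

# NOT statement: animals that lay eggs NOT fly (one circle minus the overlap)
eggs_only = lay_eggs - fly

print(sorted(either))    # every animal in either circle
print(sorted(both))      # only animals in the overlap
print(sorted(eggs_only))  # egg-layers outside the overlap
```

This is the same reasoning students apply when dragging each animal into the correct region of the diagram: union for OR, intersection for AND, and difference for NOT.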

Deciding on Software

To begin catching up on what I've been doing...

Following my meeting with Matt, and at the recommendation of Roxanne, I took a look at the Digital Research Tools (DiRT) Wiki, which has a page on screencasting software: http://digitalresearchtools.pbworks.com/Screencasts. Besides the two commercial products that I can use through the iSchool's IT lab and the libraries' Technology Integration department (Camtasia and Captivate), this wiki suggested the following free and open source software: CamStudio, Jing, uTIPu, and Wink. From the brief descriptions of these programs on DiRT and the product descriptions on their developers' websites, I determined that none of them offered enough editing functionality for my purposes. Most basically, they do not allow separate audio and video editing. So my choice was down to Camtasia or Captivate. In the end, after meeting jointly with Matt and Roxanne, I decided to use both - Camtasia for the screen-captures and demonstrations (making use of its smoother video recording and zoom-and-pan capabilities) and Captivate for the quizzes and interactive portions.