July 22, 2013

Grading and assessment, water and oil?

As I've been getting ready for the ACRL Immersion 2013 Program Track (I leave in 1 week!), I've been finishing up a lot of readings on assessment. I've been really glad to read these articles: as I've solidified my notion of assessment by applying it to the instruction I'm doing, I'm finding my ideas align with what I'm reading.

The pattern I am finding in these readings is that assessment needs to be more holistic; that assessment should be a method for students to learn rather than a focus on evaluation; and that assessment should provide ongoing, meaningful feedback so students can practice rather than be judged.

We are wrapping up the summer semester with the badges pilot, and Purdue Passport incorporates assessment into earning a badge. Typically, a badge is awarded after a skill has been achieved, so the assessment is evaluative and judgmental rather than a source of feedback for improvement. This clashes with how I prefer to teach and use the badges, so I've been using the feedback/assessment mechanism in Passport differently than it might be intended.

This is good because I think students are getting more out of the class, but it also poses some conflicts:
  1. If badges are not awarded based on a rigid judgment of skill acquisition, how valuable are they?
  2. On that note, how interoperable are they? Can they be translated or compared to similar badges at other institutions or libraries if the required evidence isn't clearly enforced?
Because this is a credit class, grades need to be tied to student work. For this, the badges are essentially pass/fail: you either earn the badge or you don't. If a student is late finishing badge work, an exception is made to give them half credit, but this is the only partial credit awarded (a quick sketch of this scoring rule follows the pros and cons below). There are pros and cons to this as well:

Pros: Students can take risks in their responses and have less fear of failure (a positive rooted in game mechanics); I can focus on the quality of my feedback rather than on ranking the student's work as good or bad

Cons: How is good student work differentiated from bad work, particularly if bad work is due to sloppiness or disinterest? Shouldn't a student who submitted excellent work (or evidence) for a badge be awarded the badge, while less stellar work would not be? Isn't the purpose of awarding badges to demonstrate that a skill was successfully acquired?
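
To make the grading rule concrete, here is a minimal sketch in JavaScript (the function name and point values are my own illustration, not anything from Passport):

    // Pass/fail badge scoring, with late work as the one partial-credit case.
    // Hypothetical names and values, not Passport's actual internals.
    function badgePoints(earned, late, fullValue) {
      if (!earned) return 0;                   // no badge, no credit
      return late ? fullValue / 2 : fullValue; // late work earns half credit
    }

    badgePoints(true, false, 10);  // 10: earned on time
    badgePoints(true, true, 10);   //  5: earned late
    badgePoints(false, false, 10); //  0: not earned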

I have such mixed feelings on this. But one feature of Passport allows students a re-do, and I use it often for sloppy work. I will leave feedback explaining exactly what I'm looking for and give the student a second chance (next semester I will share specific rubrics for each badge with students so they have an even better sense of what level of understanding is desired).

I am not a stickler on lower-level tasks like formatting a citation perfectly or memorizing the exact steps to find an article when you only have a citation (these are specific assessments in the class that address more basic skills within the learning outcomes). If a student has most of a citation right but forgets to italicize the journal title for MLA style, making them redo it, or taking points off, is really just busy work. I leave feedback letting them know they mostly got it and reminding them to double-check these things for formal papers, and then I give them all the points. I love Barbara Fister's 2013 LOEX keynote (in fact, my team read it as part of our strategic planning for the new fiscal year). I agree strongly with her whole presentation, and to use one specific example here: "very rarely outside of school are citations needed." I care far more about whether students understand the purpose of a citation and can fold that into their new understanding of "research as conversation" than about styles and formatting.
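
For example (a made-up citation, not one from the class), the slip usually looks like this in MLA style, with underscores marking the italics that get dropped:

    Missing italics: Smith, Jane. "Study Habits of Undergraduates." Journal of Example Studies 12.3 (2013): 45-67. Print.
    Correct:         Smith, Jane. "Study Habits of Undergraduates." _Journal of Example Studies_ 12.3 (2013): 45-67. Print.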

One assigned article that has been part of this class for a long time is a CQ Researcher article on cheating: why students cheat and how they cheat. It's interesting to see what students agree with in their reflections, and a number say that when students don't feel course material has real application in their lives (or when an instructor provides little to no meaningful feedback), they have no motivation or investment to put in quality work, so cheating is easy. Focusing less on grades and more on understanding, and on conversation between the students and us as instructors, creates a richer experience for all. Their reflections resonate with what we're doing in the course to make it apply to their lives, to get better work from them, and in turn to provide more meaningful, continuous feedback. This also allows for continuous improvement on our end, which is the crux of assessment.


July 15, 2013

Feelings and games

I'm currently enrolled in a MOOC (I'm still unsure how I feel about MOOCs in theory, but I'm liking this one so far). It's the Game Elements for Learning (#GE4L) MOOC offered through Canvas.

One of the readings we have been directed to is Karl Kapp's post, The emotional toll of instructional games.

(Kapp also wrote The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education, 2012.)

I am finding this very intriguing because I have had reservations about incorporating some game elements into the badge pilot I designed for our 1-credit course. I wanted a leaderboard so students could feel good about doing extra work, but I did not want people to "shut down," as Kapp says, for not making it onto the leaderboard. The intent is to motivate, but in reality it could have the opposite, unintended effect.

Kapp says, "If you decide to add game-elements (gamification) or if you decide to create a learning game with winners and losers, you need to find a way to deal with those who do not win. You need to help them avoid some of the negative feelings. You may even decide that a cooperative game is better than placing someone in a losing situation."

In the class, we are not offering so many rewards that certain people would feel like everyone but them is winning, and I am trying to mix it up so that a variety of people are included. I am also incorporating easter egg "mini-badges" for exceptional work, cooperative skills, and so on, varied so that the same people are not always winning for excelling at particular skills while other skills are ignored. Students can unlock them without knowing they exist. I'm still feeling it out as I go, and since this is the pilot it will be great to see what worked and what did not. I'm honestly a little worried about the reverse effect: that students might not care at all about the leaderboard. I'm still getting great work from them, but I'm not clear on whether it factors into their submissions. They are very likely more motivated by the grade, and since this is a summer class, people are traveling and might have their thoughts elsewhere. I'm looking forward to the survey data at the end of summer. I think I'll need to revisit the game and instructional design to make it more well-defined, include more motivational game elements, and find a way to make it less grade-focused (if that's possible in a credit course).


Kapp lists 12 ways to "mitigate losing in an instructional game or gamification situation," and I thought it might be useful to comment on each:

1) Forewarn the learners that they might become upset or frustrated if they find themselves losing and that is part of the learning process.
I did not do this, but I have at least tried to address affective learning outcomes and how research can cause frustration: it's not easy or linear, and you have to practice. I hope these skills carry over to the game mechanics and course content as well, but perhaps it's better to come right out and say it.

2) Inform learners that they might lose the game and that is OK, learning will still occur.
I have done this in a sense by allowing students multiple attempts at a badge. This ties in to taking risks and having a decreased fear of failure. Since assessment should improve learning, not just judge work, I try to make the feedback really count.

3) Carefully brief all the learners on the instructional objectives of the game and de-emphasize winning.
Since the badging structure is tied directly to the course, the course objectives are the main focus. In this case, badges are more of a visual way to track progress.

4) Acknowledge the frustration or anger at losing.
Similar to my response for #1.

5) Ask learners to find lessons and reasons within the loss. Have them dissect why they lost. Ask “can those insights lead to learning?”
Perhaps I should have students do more of the cognitive work here, but when they miss the mark submitting work for a badge, I reiterate and clarify exactly what I'm looking for so they have a better chance to "win" the badge when they try again.

6) Don’t spend a great deal of time extolling the winners. Acknowledge winning and move right to the instructional lesson.
Exactly. I post a leaderboard in the news section and just leave it at that.

7) Provide a list of strategies that will help the learner win next time. (After the game.)
I try to do this in the assignment description so students know exactly what I'm looking for. It might even be better to post the actual rubric next semester.

8) Within the curriculum, follow the game activity with an activity where everyone can feel positive.
I'm not sure how this would play out in my scenario, but I think the participation points for discussion might work in this way. Rather than being graded on what is said, general points are given simply for being involved.

9) If in a classroom, allow people who did not win a chance to discuss why they didn’t win. Online, provide chat opportunities.
More chances for reflection would be very beneficial. The badges offered are so incremental, however, that larger reflection might not be a good fit.

10) Consider if creating “winning” or “losing” is really what you want in the learning experience. Sometimes it is appropriate. Often it is appropriate but be prepared for unintended consequences and negative feedback if you don’t handle the situation properly.
Yes, definitely reconsidering even having a leaderboard. Maybe these easter egg "mini-badges" could instead be private between instructor and student.

11) Create different levels of winning, can a learner win a round, or one task, can small victories occur throughout the game. This is helpful because if a learner falls behind early, they may mentally drop out early in the learning process. Find ways to keep them engaged.
Love this.

12) Finally, you may want to consider building a cooperative rather than a competitive game. Working together is far more inclusive than competition.
Love this too. Just have to find a way to track each student's own work to tie to badge earning.

December 11, 2012

Library research expertise, collect them all

[Image from Purdue Passport]
Student motivation can be problematic in college courses, particularly with auxiliary work where skills are encouraged but not necessarily required to be learned (ahem, library research skills). Some instructors are serious about students building a knowledge base in using the library and developing critical thinking about information, but it's not across the board. As we know from the ERIAL Project, student perceptions are heavily influenced by their instructors' relationships with the library. When the library has a good relationship with an instructor, research assignment design tends to be strong and students get a better grounding in using library resources. As great as this is, and as much as we'd like to advertise it to faculty, we can't exactly force every instructor on campus to work with us, much less to incorporate a research-related or info-lit assignment, if they don't want to or it doesn't fit the course.

So we wonder: how can we help students develop these skills even if we can't work with them through a class, or if we haven't yet become embedded where they are? I've been thinking about this a lot over the past year in relation to student retention, gaming, and motivation, and became very interested in Mozilla's Open Badges, which I discussed here back in January when exploring badge systems. These badges are tied to specific skills, earned by reading and completing certain tasks, and can then be displayed in a portfolio or on social networking sites.

Tying badges to education has been most visible in MOOCs, and just recently Purdue developed Passport to offer badges in a university setting. I have been approved to be a beta tester, which I am really excited about. We have been talking about incorporating gamification and a badge system here at the University of Arizona Libraries since I started (I was particularly enthusiastic about it), but we run into issues on the programming side since we have limited staff in that regard. We hope to develop a gamification layer over our existing tutorials and guides, with badges tied to the ACRL Information Literacy Standards (as a very basic explanation of these ideas).

Now, let's be realistic: I think we all get that most students aren't going to be persuaded to do extra work learning library research skills just because they might get a PNG image after completing tutorials and quizzes (I certainly wouldn't have been convinced as an undergrad). However, I am hoping we can work with the career center, tutoring, and other areas on campus that might give the badges more value so students feel they are meaningful. If only one unit on campus is offering these badges, what exactly do they mean? But if students can include a suite of them in an eportfolio or on a resume, that has more value. On the flip side, our analytics show that students, and even non-students, complete our tutorials regularly without them being assigned, and for the ones offering a certificate upon completion, a large number of people submit their information to receive one. So there is clearly intrinsic motivation present, but we hope to use a combination of intrinsic and extrinsic motivation to find the right balance in helping students build these skills.

I wrote a literature review on motivation in gamified learning scenarios for a gaming in education course I took this semester, which you can read here if you're interested. Applying these ideas to a badge system in libraries is trickier than in a classroom, since we typically do one-shot sessions and, as I mentioned, these skills are often treated as auxiliary to a class.

Anyhow, I will keep this blog more updated than usual as I beta test and incorporate badges into our resources! More next time...

July 20, 2012

Mystery solved: Assessment of Mystery in the Stacks

Last week, I wrote about the murder mystery, or "Mystery in the Stacks," that we used in our outreach and instruction to a summer program for high school students. This was the first time we had done this program and also the first time we had used a mystery to engage the students.

We received all positive feedback from students and parents... some examples:
"...I wanted to thank you for your coordinating 'Mystery in the Stacks'. [Student] enjoyed the day and really learned a lot. I hope you have it again next year... I will pass on the excellent rating. I wish AYU would put this program on for adults!"
"...He really enjoyed the class - the Dante book especially made a big impression. He said the librarians were cool - praise from a 13 year old, hard to come by!"
"[Student] had a great time at the Mystery in the Stacks. He really enjoyed it and, honestly, couldn't stop talking about it for hours! Thank you all for your hard work and a great day."

We were so pleased to see that the students had fun and that the parents felt the program was worth the money and time. The other, perhaps more pressing, question is: did the students actually learn anything?

They did solve the mystery with essentially no help from me, so I would say so. I sat in the computer lab while the students were solving the mysteries to answer questions and provide instructional support, so I was able to witness their problem-solving processes. Overall, they really did do everything right and retained what was taught during the instruction portion to help them solve the mystery. There were just a few minor snags that I think could have been worked into instruction and/or planning...

  1. In the second clue, the students are prompted to search Stedman's Medical Dictionary to understand that the medical examiner means dehydration as the cause of death when she lists the synonym, exsiccation, in her report. I saw a student immediately jump to Google instead of even trying the medical dictionary, and of course I said, "Hey now! You want to make sure your information is accurate, so use the medical dictionary to find the answer..." And when you search Google for the term, nothing really comes up anyhow. I think I should have stressed the limits of Google more. Of course it is good for quick definitions, I use it all the time, but for specialized information, using a trusted source actually saves more time.

  2. I taught the students the basics of Boolean logic during the instruction portion, and was so happy to see they remembered how to use AND during the catalog-searching clue. However, for some reason, when they put the first term on line 1 of the advanced search and the second on line 2, those results differed from my answer-checking, where I typed term AND term on one line. I'm glad I caught them before they ran up to retrieve the next clue, because they would have been led to an incorrect location based on the catalog results. I hadn't even thought to check this, but now I know (see the sketch after this list for my guess at why).

  3. Google really captures the students' attention, so perhaps spending more time on search tricks and evaluating websites would be good. I covered the difference between Google and databases at the beginning and showed how they search differently. We talked a bit about credibility, and in the second half of the instruction they completed part of a tutorial on evaluating websites. Spending a little more time on instruction and incorporating some quick games or activities might help; we chose not to because the mystery was the major hands-on/game portion, but more specialized instruction could have worked since these kids seemed to be more advanced.
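
On snag #2, here is my guess, as a hypothetical JavaScript sketch, at why two advanced-search rows can behave differently from a typed AND (illustrative only, not the catalog's actual code):

    // Hypothetical: each advanced-search row may be scoped to its own field
    // or index, while the single box parses AND against one default index.
    function advancedSearch(rows) {
      return rows.map(function (row) {
        return row.field + ":(" + row.term + ")";
      }).join(" AND ");
    }

    function basicSearch(query) {
      return "all:(" + query + ")";
    }

    advancedSearch([
      { field: "keyword", term: "water" },
      { field: "keyword", term: "border" }
    ]);
    // => "keyword:(water) AND keyword:(border)"

    basicSearch("water AND border");
    // => "all:(water AND border)" -- a different index, so different results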
So overall it went pretty great; I think a little more time on instruction and maybe a shorter tour would work well. We are now talking about repurposing these mysteries into orientations for K-12 outreach and/or UA students. More on that another time!

July 10, 2012

Library mystery as outreach and instruction

We do outreach to the community, particularly over the summer, and tomorrow we will have high school students visiting the library for a summer workshop on research skills. Since it's more of a summer camp and these are younger students, we wanted to make sure they would have some fun and be engaged... so we are using murder mysteries as our hands-on activity after a short instruction session to prepare the students for detective work.

I think the mystery I created is fun and it works; I'm sure it would be much better if I had more background in game design (working on that), but this at least will hit all the learning outcomes in a cohesive way:

  1. Students will understand how databases work, and what the difference is between library databases and Google.
  2. Students will be able to construct a basic search using synonyms for a broader search strategy (see the example after this list).
  3. Students will be able to locate a book using the library catalog.
  4. Students will be able to evaluate websites using the CRAAP test.
  5. Students will be able to use information appropriately by citing sources in APA style.
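
For outcome 2, the kind of search string I have in mind looks something like this (my own illustration, not taken from the lesson materials); OR groups the synonyms to broaden, AND narrows:

    (water OR hydration OR "water supply") AND (border OR borderlands)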

Assessment will be done by seeing if they solve the mystery, and since they have to write down answers along the way, we can see some of their search process to get a sense of how much they learned during the instruction portion of the session.

The mystery takes them through using different types of resources in the library, including (hopefully) finding value and comfort in asking a librarian for help. In the end, they wind up in Special Collections, where they spend the afternoon and solve the case at the end of the day. We decided to tie our instruction to Special Collections so the students get a more holistic picture of the research process.

I am sure I will notice some snags along the way as this is the first time we are doing this, so I hope to do a follow up post about what went wrong and what could be improved. This would be a great way to gamify orientations to the library for UA freshmen, especially for the smaller student success courses, and could then be tied to retention efforts.

See the mystery with answer key here.
(The narrative makes more sense and is more engaging if you read the full mystery; below is a synopsis.)

The students start off with information that Wilbur Wildcat (the UA mascot) has been found in the library by one of the exhibits. They need to use the library website to figure out which one and where; they are given a clue that the exhibit features two types of music that were influential in Tucson's culture.


Here they get information the police have collected, as well as stats from the medical examiner. They find out Wilbur died from exsiccation; when they are prompted to look it up in Stedman's Medical Dictionary from our health subject guide, they realize this is actually a synonym for dehydration. From that, they are given a riddle to figure out that a five-letter word for a liquid that can cure dehydration is water. They then need to search the library catalog for a book about water and border issues. Once they find a particular book, they need to go to the stacks to get their next clue.


In this next clue, the students learn an important fact was left off the police report: the suspect left a copy of the Oxford English Dictionary at the scene of the crime, open to the page on aliens. Since the physical copy is locked up at police headquarters, they can luckily search the OED online through the library. They must write down the first use of the term alien in science fiction to realize that the suspect is extraterrestrial. With this info, they go to the reference desk, where they are required to ask a librarian for help locating an article on UFO sightings in Arizona in the last 50 years. Once they find an article, they must write down the citation in APA style; if the librarian confirms the citation is correct, s/he hands the team their next clue.
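
For reference, an acceptable citation would follow this general APA pattern (an invented example, not a real article; the journal name and volume number would be italicized):

    Smith, J. A. (2010). UFO sightings in the desert Southwest. Journal of Example Studies, 12(3), 45-67.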


Going to the police with the hypothesis that the killer is an alien would probably get the detectives laughed at, so the next clue suggests getting background information first. A great place to start for background info is CQ Researcher. They must look up UFOs in this database and click on the most recent entry (which, unfortunately, is from 1996). They are prompted to read about the University of Arizona professor James E. McDonald, a pro-UFO meteorologist. He happened to collect dirt samples from UFO sightings, which are housed in Special Collections (I think this is awesome). They locate his name in CQ Researcher, then must search the catalog to find any works with him as an author in the stacks. They will find the McDonald papers, which are housed in Special Collections along with the dirt, and it is there they will apprehend the killer... who in fact isn't really a killer, since the medical examiner made a small mistake in pronouncing Wilbur dead: he was simply in a coma from dehydration and just needs to drink some water (keeping it PG).


I'm excited to see how this goes, and how well my portion of the mystery ties into what Special Collections will be covering. More next time...

January 17, 2012

Reflections on Codecademy and Code Year so far

I've started Codecademy and, as of last night, completed Week 1. This is a free program with weekly online lessons to learn how to code (JavaScript). Librarians have started using the hashtag #codeyear to communicate with each other on their progress (and you can sign up for the lessons at the Code Year site). There has been a push in Libraryland for librarians to learn coding so we can be more self-sufficient in developing digital services and products, as well as communicate better with IT professionals. There is even a newly established ALA Connect group for librarians to discuss and help each other with the weekly lessons.

My impressions of Codecademy so far are mixed. No doubt, this is a great thing: it's free, it's accessible, and it's an intro-level program that is incredibly interactive. It can be hard to teach yourself these kinds of skills, so opening up the playing field is huge.

It's also nice that the lessons are given in increments: you get Week 1 for a week, and then are sent Week 2 the next week. You can do more if there is more content up on the site, but it at least keeps things digestible. The leveling up and badges are another thing I like. It's a little bit of gamification, and since the lessons have been made more social through Codecademy and the library community, it adds a little more fun. I've also taken a particular interest in how the Mozilla Open Badges project relates (or could relate) to library instruction, so experiencing a badge-generating program is useful to me, and I'm seeing how it could potentially work with students. Although the Mozilla Open Badges project is aimed at open education, I still think it could be a beneficial concept to try in universities and other formal academic settings as well.

Back to Codecademy, there are also some things I am finding problematic. Good pedagogy tells us that detailed feedback contributes to effective learning, and Codecademy does not really give any. You put your code in and run it, and then you are right or wrong. A little bit of info pops up when you enter wrong code, but it's often not enough to help you figure out where you went wrong. The hints are great at the beginning of the lessons but get more opaque and mysterious as you progress. The sandbox atmosphere can be a good method, letting you try out coding without being bogged down in theory and memorized definitions (and without fear of failure, which is a quality of a good game, BTW), but at the same time, not really understanding the logic behind how some of the code works makes it very hard to understand why your answer does not work. I was glad to have other librarians who understand coding logic explain why my answer for Week 1, Lesson 8.2 was incorrect, so I was able to progress and finish the week.
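
To give a flavor of the problem, take a classic beginner slip of the kind these JavaScript lessons check for (my own example, not the actual Lesson 8.2): using = instead of === in a condition. The code runs, the checker marks it wrong, and without feedback on the logic, the why stays invisible:

    // Buggy: "=" assigns 10 to score, so the condition is always truthy
    // and "You win!" prints no matter what score started as.
    var score = 5;
    if (score = 10) {
      console.log("You win!");
    }

    // Fixed: "===" compares without changing score.
    score = 5;
    if (score === 10) {
      console.log("You win!");
    } else {
      console.log("Keep trying"); // prints, as intended
    }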

Overall, I really do think Codecademy is great, and I'm going to continue with the lessons. It can be difficult to weave detailed feedback into an automated, teach-yourself program, but at the same time, it is essential for people who are just starting out. I think this TechCrunch article, "Will we need teachers or algorithms?" (an interesting read on emerging trends in education generally), rings true here to a degree. Human or AI-driven, though, if you can't figure out what you did wrong in a meaningful way, you can't learn from your mistakes and progress.