October 31, 2013

Adding another piece to the library + student retention puzzle

image from www.businesscontinuityjournal.com
Grit has become a big, flashy word over the last year or so with regard to instilling adaptive learning in students and building resiliency. The idea is that students who can adapt and bounce back from failure learn from their mistakes and are more likely to stick around in school when faced with challenges.

The educational psychology literature (and, increasingly, research on the positive effects of gaming) suggests that experiencing failure supports adaptive learning. Rohrkemper and Corno (1988) highlight the problematic duality of failure versus success, in which failure is always bad and success is always good. This is in fact not true: constant success can be detrimental, and failure can improve performance (that is, learning from failure). Focusing on how students think, rather than what they know, is one step in the right direction, along with modeling adaptive behavior and teaching students that tasks and learning are malleable. In library instruction, this reminds me of what Pegasus Librarian (I believe?) mentioned about providing students with a "dirt-view" of research (I can't find this post; it could also have been an episode of Adventures in Library Instruction). The idea, basically, is that we should show students that research takes work and builds on failure, and that it's almost hilarious when we show students practiced, perfect searches, because that is not how research works at all. Kluger and DeNisi (1996) support this notion of learning through failure: after an enormous meta-analysis of feedback intervention research, they conclude that the feedback literature is inconclusive and highly variable depending on the situations and learners involved. They explain that learners learn most successfully through discovery rather than feedback, particularly controlling feedback (ahem, grades).

Brownell (1947) advocated for teaching meaning in arithmetic, typically a rote "tool subject." You'd think the argument for teaching meaning would be quite settled, especially in 2013, but this debate continues in some ways. We can see a similar shift in information literacy: there seems to be more agreement now on moving away from navigation and clicks ("bibliographic instruction") toward teaching students a more holistic understanding of research ("information literacy"). Barbara Fister, as always, is very eloquent in explaining the importance of this. But really this is another avenue for instilling resiliency in students: by focusing on higher-order thinking (though higher-order thinking is not appropriate in every context), we truly look toward students' process rather than taking interest only in the final product. As Brownell explains, teaching meaning provides a greater context for students to find value in the subject being taught. With all the difficulty librarians can have with one-shots (a model that could, and should, change) in building connections with students and motivating them to learn aspects of the research process, providing deeper knowledge about why, and not just what and how, can improve the learning environment.

I was going to talk next about how to provide successful feedback, because it is important, but to avoid making this post so long that no one actually reads it, I just want to wrap up with this: whether in a class (credit-bearing or one-shot) or through more auxiliary approaches, libraries should be places for students to build grit and resiliency by exploring failure. We talk about how orientations are important for students to develop a social connection and feel comfortable somewhere on campus, and this is a very important aspect of retention, but these safe spaces should also provide opportunities for students to take safe risks and learn how to adapt to failure. This doesn't necessarily mean libraries need to gamify everything or offer badges as a panacea for student retention or motivation concerns, but these are examples of methods that could prove useful. Setting up other opportunities in the library for students to test out ideas is another way to draw them in and instill adaptivity. Hopefully students are also getting opportunities for safe failure in their campus-wide courses, but that's certainly not a guarantee. Libraries should think about how we can provide opportunities for safe risk in a variety of ways, whether through instruction, programming, collections, or UX. It's one step in figuring out how we can support student retention initiatives on campus and demonstrate value.

Brownell, W. A. (1947). The Place of Meaning in the Teaching of Arithmetic. The Elementary School Journal, 47(5), 256-265.

Kluger, A. N., & DeNisi, A. (1996). The Effects of Feedback Interventions on Performance: A Historical Review, a Meta-Analysis, and a Preliminary Feedback Intervention Theory. Psychological Bulletin, 119(2), 254-284.

Rohrkemper, M., & Corno, L. (1988). Success and Failure on Classroom Tasks: Adaptive Learning and Classroom Teaching. The Elementary School Journal, 88(3), 297-312.

October 10, 2013

TMI Instruction and Student Retention

In the Educational Psychology course I'm taking this semester, we were discussing the effectiveness of instructor transparency on student motivation. Because people ascribe more positive attributes to others who appear "warm" (rather than "cold"), it seemed like it could be a good thing to be forthcoming with personal information to students: for example, sharing how an instructor had a hard time learning a subject and overcame it, or even addressing whether an instructor received negative reviews on TCEs. My opinion was to not show much weakness. You can really hurt your credibility with students who are looking to you as an expert on a topic and the authority figure in the classroom if you try to take yourself down a few notches to be on their level. It sort of makes me think of He's Hip. He's Cool. He's 45! from Kids in the Hall:

Edit: @kellymce pointed out this is another good example (and a much better one, I think!):

Students can sniff out a try-hard. Sometimes I'm tempted to share, particularly when I'm working with student retention-related groups, that I dropped out of college for a while, hated using the library, and didn't care to ever learn. But I don't! Because I'm supposed to be the authority in the class and I'm in charge. Can you imagine if you went to a therapist and they started telling you about all of their psychological issues? Their credibility would be shot, and it would also be very confusing as to why they were sharing this information. As instructors, we are there to teach a particular subject and guide students to learning. We can relate to them in small ways, in a mentor-ish capacity, but emptying out the closet skeletons is not an effective way to motivate students or draw them into learning.

Anyhow, these are my thoughts, and I realized how strong they are after reading this article that came out today in Inside Higher Ed: TMI from Professors (a study indicating the role of over-sharing by professors in encouraging uncivil student behavior). Apparently, students are less likely to behave well in class if you try to rap with them (as in the outdated '70s slang for talk/relate to). Check it out, interesting stuff.
"When students reported that their instructors engaged in a lot of sharing about their lives -- particularly stories about past academic mistakes, even stories designed to stress that everyone has difficulty learning some topics -- there is an immediate and negative impact on classroom attitudes."

September 9, 2013

Reflection on Feminist Pedagogy for Library Instruction (book)

image from powderroom.jezebel.com
I just finished reading Maria T. Accardi's Feminist Pedagogy for Library Instruction (Library Juice Press, edited by Emily Drabinski). Aside from it resonating with me because I do try to employ critical library instruction and feminist pedagogy when I can, a lot of what Accardi discusses in the book also relates to what I'm doing with digital badges and also student retention.

First, for some background, Accardi explains that feminist pedagogy resides within critical pedagogy. Feminist pedagogy might carry the misconception of being instruction about women and feminism. Although it can often relate to that and is employed in women's studies courses, it can be integrated into any curriculum. It typically exposes students to issues hidden in society, particularly injustices based on race, class, ability, sexual orientation, etc., and of course gender. Accardi quotes bell hooks (1994) for a concise description: "Feminist teaching techniques are anti-hierarchical, student-centered, promote community and collaboration, validate experiential knowledge, discourage passivity, and emphasize well being and self-actualization" (hooks in Accardi, p. 31). Put another way, the goal is to help students develop a critical consciousness and take action on their learning.

So I wanted to look at some of the work I'm doing through this lens after this book made me think more clearly about what I am trying to accomplish.

Digital Badges: one of the issues I'm really struggling with for our badges is scalability. There is a conflict between reaching many students with limited FTE (meaning automatic assessments that don't require intervention) versus reaching fewer students while retaining the ability to provide meaningful feedback and interact with them. Badges are typically awarded against rigid criteria, and in a sense they need to be, because a badge means something specific and ascribes value to a particular skill. If you have no concrete way of measuring that skill to determine whether a badge was "rightfully earned," what does the badge even mean if anyone, or no one, can obtain it? On the other hand, I believe students need to create their own learning and be proactive (feminist pedagogy), and I don't believe there should necessarily be an authority figure telling them what is right or wrong in absolute terms. Obviously, I know more about information literacy than they do, so I would still need to develop content, etc., but as Accardi explains, feminist pedagogy is about being a guide and a facilitator rather than an all-knowing "sage on the stage." A lot of the badges I have created focus on affective outcomes, on students developing their own meaning of content, and on opportunities for reflection and relating material to students' own lived experience. This is difficult enough to measure as it is, let alone within the more rigid confines of a badge rubric. Not all badges need to be this way, but when designing a suite of badges for a 40k-student campus with 10 FTE instruction librarians, making as many badges as possible automatic, with no intervention, tends to be more desirable. An automatic multiple-choice quiz is an easy, yet banking-model-esque, way to determine skill acquisition and award badges at scale. So something I'm trying to figure out: how do I use feminist pedagogy and still be efficient?
I'm working on some ideas for this, but it's certainly a point for discussion. How do you reconcile this in your teaching, particularly when instruction is for high numbers of students?
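To make the automatic end of that scale concrete, here is a minimal sketch of the banking-model approach: an auto-graded multiple-choice quiz that awards a badge when a score threshold is met, with no librarian intervention. This is purely hypothetical; the badge name, questions, answer key, and 80% threshold are all invented for illustration, and this is not how Purdue Passport works internally.

```python
# Hypothetical sketch: awarding a badge from an auto-graded quiz.
# All names and the threshold are invented for illustration.

ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a", "q4": "c", "q5": "b"}
PASS_THRESHOLD = 0.8  # earn the badge at 80% or better


def grade_quiz(submission):
    """Return the fraction of quiz questions answered correctly."""
    correct = sum(
        1 for q, answer in ANSWER_KEY.items() if submission.get(q) == answer
    )
    return correct / len(ANSWER_KEY)


def award_badge(submission, badge="Database Searching"):
    """Award the badge automatically if the score clears the threshold.

    No human feedback happens at any point here, which is exactly
    the trade-off: it scales to thousands of students, but nothing
    about it is a guide-and-facilitator interaction.
    """
    if grade_quiz(submission) >= PASS_THRESHOLD:
        return badge
    return None  # the student may retake; no one intervenes either way


student = {"q1": "b", "q2": "d", "q3": "a", "q4": "c", "q5": "a"}  # 4 of 5
print(award_badge(student))  # prints "Database Searching"
```

The efficiency is obvious, and so is the cost: every pedagogical decision (what counts as evidence, what "rightfully earned" means) is frozen into the answer key in advance.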

Student Retention: another area I focus on. How conflicting that student retention is measured in rigid, big-data terms and explained through ROI, yet some of the most effective methods for retaining students include providing opportunities for personalization, social involvement, and affective learning outcomes. A lot of the instruction I do, particularly for student success courses and "at-risk" groups, promotes greater awareness of and comfort in the library rather than an explicit focus on content. I think student retention work would benefit greatly from feminist pedagogy, as would library instruction in general, given the high anxiety many students feel when using the library (which Accardi does touch on).
This is my brief rundown of my most current thoughts from reading this book. I thought it was a great introduction to understanding feminist pedagogy and how it can be applied to library instruction. Accardi also discusses her experience with the ACRL Immersion Program and issues with the ACRL Standards, which I'd like to address in another post.

August 29, 2013

The Pygmalion Effect

I think I've mentioned in previous posts that I'm earning a second masters (MS) in Instructional Design and Educational Technology, but a new update to that is I'm also earning a certificate in motivation + learning environments through the Educational Psychology department to coincide with my degree. My Ed Psych course for this semester is Seminal Readings in Education and Educational Psychology. So, I might blog about either Ed Tech or Ed Psych as I'm going along.

Image from theinsideouteffect.com

Today we discussed some readings on the Pygmalion Effect. This is the notion that preconceived expectations of others impact performance or an outcome: the self-fulfilling prophecy. What's interesting is that these preconceived expectations have the same effect whether they are self-generated or imposed by an outside source (though naturalistic expectations are stronger). For example, foremen in a warehouse were told that certain employees did well or poorly on an exam for the job (regardless of how they actually did), and the foremen rated the employees they believed to be smarter as better and more efficient. Another study experimented on mice (I am not a fan of this, but...): mice were either lesioned through lobotomy or made to look as though they were, so the handlers could not tell the difference. Handlers were told the mice were either bright or dull, regardless of lobotomy. Unsurprisingly, lesion-free mice whose handlers were led to believe they were bright performed the best. What was surprising was that lesion-free mice whose handlers were led to believe they were dull performed just as poorly as lesioned mice labeled dull.

We will look at this research as it relates more directly to classrooms and formal education next week, but there are huge implications. Visual cues are among the most important factors in all of this. There is a psychology study on "thin-slicing" (person perception based on superficial aspects over a short period of time... first impressions, essentially) applied to student perceptions of teachers, in which students watched 30-second clips of teachers teaching, with no sound, and rated each teacher's effectiveness based on the clip alone. The study found that students watching the clips gave nearly the same ratings as the students who actually completed the class and filled out TCEs.
Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431-441.
Another (and very recent) study, by Chia-Jung Tsay, did something similar: participants received short, silent clips of musicians competing in formal events and had to predict the winners based on the videos alone. Accuracy was astounding, and visual impressions clearly had a greater impact than actual talent; when participants tried to base their guesses on audio alone, they could not distinguish who won. Tsay points out that this "suggests that the visual trumps the audio, even in a setting where audio information should matter much more."

In looking at perception of self and perception of librarians by patrons, students, faculty, etc., this is important to think about (and something we are examining in the Librarian Wardrobe book). How we are perceived by others might influence how they evaluate us, and how we perceive others might influence how we evaluate them. If visual cues are especially important, then how we present ourselves, whether through gestures, other physical movements, or clothing, matters a great deal, and studying how we dress and how the public perceives us would be quite significant.

August 10, 2013

IRB: Your research isn't Research

Image from Southern Fried Science
I'm writing this post partly to procrastinate on the candidate statement for my 3rd-year review packet (overwhelming!) and partly to share some interesting facts I learned from meeting with the IRB on campus to discuss restrictions on my study of the effectiveness of digital badges in an IL course for student success.

There was a lot of confusion at first when filling out the IRB application, because our study was so low risk and yet faced so many restrictions. Our IRB rep explained that historically, any research involving human subjects needed to come through them, which was a ton of work to handle even when proposed studies didn't really need to be under their jurisdiction. More recently, "Research" with a capital R has been defined by the federal government as being generalizable (some of this info might be here, though our rep said there isn't yet one definitive page or site explaining this for the general public). This is wholly different from research that is not meant to be generalizable: program evaluations, quality improvement, case studies, etc.

It was funny, because after providing the explanation, our rep wanted to backtrack and say that she wasn't implying we aren't doing "research," just that we aren't doing "Research." We are in fact doing program evaluation research: we are measuring a specific program at the UA in order to improve it, and we will be showing what was successful and problematic for us. We would not be claiming that our results clearly apply to all IL programs or credit courses across the country. However, if we did want to try to prove that, we would need to stay under IRB jurisdiction (we have been approved as Exempt level 2, I believe). If we stayed under the IRB, we would have to keep them updated on any changes to our methods, the student consent form, and how we obtain data. We would also face a ton of restrictions on what student info we can and cannot access.

Filling out what is called the "309 form" here at the UA to essentially rescind our IRB application (I am thrilled to do this, though for my own sanity it's probably best not to think about how much time I spent on the application) will move us to program evaluation, and we can then essentially do whatever we want so long as it's generally ethical and follows FERPA regulations.

Under the IRB, we would have had to make the study opt-in for students, anonymize data (potentially missing trends), and be cautious about asking certain questions in our survey. As program evaluation, we can really gather data any way we want.

My co-researcher made a good point that the basis for Research vs. research is in the eye of the beholder (the reader). We won't claim our research is generalizable, but obviously anyone reading the article (if we get published, of course) is considering how they might apply our findings to their own program or credit course. It can be very nuanced.

So anyhow, I just wanted to share this. It sounds like this distinction is getting a big push not just on our campus but all over. It conflicts with LIS anxiety over publishing Research, but program evaluation is no less important; the only distinction is that it does not go through the IRB.

July 22, 2013

Grading and assessment, water and oil?

As I've been getting ready for the ACRL Immersion 2013 Program Track (I leave in 1 week!), I've been finishing up a lot of readings on assessment. I've actually been really glad to read these articles, because as I've solidified my notion of assessment by applying it to the instruction I'm doing, I'm finding my ideas align with what I'm reading.

The pattern I am finding in these readings is that assessment needs to be more holistic; assessment should be a method for students to learn rather than a focus on evaluation; and assessment should provide ongoing, meaningful feedback for students to practice instead of being judged.

We are wrapping up the summer semester with the badges pilot, and Purdue Passport incorporates assessment into earning a badge. Typically, a badge is given after a skill has been achieved, making the assessment more evaluative and judgmental rather than a source of feedback for improvement. This clashes with how I prefer to teach and use the badges, so I've been using the feedback/assessment mechanism in Passport differently than it might be intended.

This is good because I think students are getting more out of the class, but also poses some conflicts:
  1. If badges being awarded are not based on more rigid judgement of skill acquisition, how valuable are they?
  2. On this note, how interoperable are they? Can their qualities be translated or compared to other institutions or libraries offering similar badges if desired evidence isn't as clearly enforced?
Because this is a credit class, grades need to be tied to student work. For this, the badges are essentially pass/fail: you either earn the badge or you don't. If a student is late finishing badge work, an exception is made to take half credit off, but this is the only partial credit given. There are pros and cons to this as well:

Pros: Students can take risks in their responses with less fear of failure (a positive rooted in game mechanics); I can focus more on the quality of my feedback than on what level of good or bad the student's work falls into

Cons: How is good student work differentiated from bad work, particularly when bad work is due to sloppiness or disinterest? Shouldn't a student who submitted excellent work (or evidence) for a badge be awarded the badge, while less stellar work is not? Isn't the purpose of awarding badges to demonstrate that a skill was successfully acquired?

I have such mixed feelings on this. But one feature of Passport is to allow students a re-do. I use this often for sloppy work. I will leave feedback explaining exactly what I'm looking for and give the student a second chance (next semester I will be sharing specific rubrics for each badge with students so they have an even better concept of what level of understanding is desired). 

I am not a stickler about lower-level concepts like formatting a citation perfectly or memorizing the exact steps for finding an article when you only have a citation (these are specific assessments in the class addressing more basic skills within the learning outcomes). If a student has most of a citation right but forgets to italicize the journal title in MLA style, it's really just busywork to make them re-do it or to take points off. I leave feedback letting them know they mostly got it and to remember to double-check these things for formal papers, and then I give them all the points. I love Barbara Fister's 2013 LOEX keynote (in fact, my team read it as part of our strategic planning for the new fiscal year). I agree strongly with her whole presentation; to use one specific example here, "very rarely outside of school are citations needed." I care far more about whether students understand the purpose of a citation and can incorporate it into their new understanding of "research as conversation" than about styles and formatting.

One article that has long been assigned in this class is a CQ Researcher piece on cheating: why students cheat and how. It's interesting to see what students agree with in their reflections, and a number do say that when a student doesn't feel course material has real application in their lives (or when an instructor provides little to no meaningful feedback), the student has no motivation or investment to put in quality work, and so cheating is easy. Focusing less on grades and more on understanding, and on a conversation between the students and us as instructors, creates a richer experience for all. Their reflections resonate well with what we're doing in the course to make it apply to their lives, to elicit better work from them, and in turn to provide more meaningful, continuous feedback. This also allows for continuous improvement on our end: the crux of assessment.

July 15, 2013

Feelings and games

I'm currently enrolled in a MOOC (I'm not sure how I feel about MOOCs overall in theory, but I'm liking this one so far): Game Elements for Learning (#GE4L), offered through Canvas.

One of the readings we have been directed to is Karl Kapp's post, The emotional toll of instructional games.

(Kapp also wrote: The gamification of learning and instruction: Game-based methods and strategies for training and education. 2012).

I am finding this very intriguing because I have been having reservations about incorporating some game elements into the badge pilot I have designed for our 1-credit course. I wanted a leaderboard so students could feel good about doing extra work, but I did not want people to "shut down," as Kapp says, for not making it onto the leaderboard. The intent is to motivate, but in reality it could have the opposite, unintended effect.

Kapp says, "If you decide to add game-elements (gamification) or if you decide to create a learning game with winners and losers, you need to find a way to deal with those who do not win. You need to help them avoid some of the negative feelings. You may even decide that a cooperative game is better than placing someone in a losing situation."

In the class, we are not offering so many rewards that certain people would feel like everyone but them is winning, and I am trying to mix it up so that a variety of people are included. I am also incorporating easter-egg "mini-badges" for exceptional work, cooperative skills, and so on, varied so that the same people are not always winning for excelling at particular skills while other skills are ignored. Students can unlock them without knowing. I'm still feeling it out as I go, and since this is the pilot, it will be great to see what worked and what did not. I'm honestly a little worried about the reverse effect: that students might not care at all about the leaderboard. I'm still getting great work from them, but I'm not clear on whether the leaderboard factors into their submissions. They are very likely more motivated by the grade, and since this is a summer class, people are traveling and might have their thoughts elsewhere. I'm looking forward to survey data at the end of the summer. I think I'll need to revisit the game and instructional design to make it more well-defined, include more motivational game elements, and find a way to make it less grade-focused (if possible, since it is a credit course).

Kapp lists 12 ways to "mitigate losing in an instructional game or gamification situation," and I thought it might be useful to comment on each:

1) Forewarn the learners that they might become upset or frustrated if they find themselves losing and that is part of the learning process.
I did not do this, but I have at least tried to address affective learning outcomes and how research can cause frustration; that it's not easy or linear and you have to practice. I hope these skills can be applied to the game mechanics and course content as well, but perhaps it's better to come right out and say it.

2) Inform learners that they might lose the game and that is OK, learning will still occur.
I have done this in a way to allow students multiple attempts at a badge. This really can tie in to taking risks and having a decreased fear of failure. Since assessment should be to improve learning, not just judge work, I try to make the feedback really count.

3) Carefully brief all the learners on the instructional objectives of the game and de-emphasize winning.
Since the badging structure is tied directly to the course, the course objectives are the main focus. In this case, badges are more of a visual way to track progress.

4) Acknowledge the frustration or anger at losing.
Similar to my response for #1.

5) Ask learners to find lessons and reasons within the loss. Have them dissect why they lost. Ask "can those insights lead to learning?"
Perhaps I should have students do more cognitive work here, but when they submit work for a badge and get it wrong, I reiterate and clarify exactly what I'm looking for so they have a better opportunity to "win" the badge when they try again.

6) Don’t spend a great deal of time extolling the winners. Acknowledge winning and move right to the instructional lesson.
Exactly. I post a leaderboard in the news section and just leave it at that.

7) Provide a list of strategies that will help the learner win next time. (After the game.)
I try to do this in the assignment description so students know exactly what I'm looking for. It might even be better to post the actual rubric next semester.

8) Within the curriculum, follow the game activity with an activity where everyone can feel positive.
I'm not sure how this would play out in my scenario, but I think the participation points for discussion might work in this way. Rather than being graded on what is said, general points are given simply for being involved.

9) If in a classroom, allow people who did not win a chance to discuss why they didn’t win. Online, provide chat opportunities.
More chances for reflection would be very beneficial. The badges offered are so incremental, however, that larger reflection might not be a good fit.

10) Consider if creating “winning” or “losing” is really what you want in the learning experience. Sometimes it is appropriate. Often it is appropriate but be prepared for unintended consequences and negative feedback if you don’t handle the situation properly.
Yes, definitely reconsidering even having a leaderboard. Maybe these easter egg "mini-badges" could instead be private between instructor and student.

11) Create different levels of winning, can a learner win a round, or one task, can small victories occur throughout the game. This is helpful because if a learner falls behind early, they may mentally drop out early in the learning process. Find ways to keep them engaged.
Love this.

12) Finally, you may want to consider building a cooperative rather than a competitive game. Working together is far more inclusive than competition.
Love this too. Just have to find a way to track each student's own work to tie to badge earning.

July 8, 2013

#ala2013 recap: Badges, student retention, and over-capacity parties

Wow #ala2013 went by so fast! This was hands down my favorite conference that I've been to over the last 3 years. Here's my brief recap of highlights:

I didn't attend as many sessions as I would have liked; I presented twice, led a discussion group, and reviewed people's resumes for NMRT's Resume Review Service, so a lot of my time was already nailed down, but it was all stuff I wanted to do so it worked out.

On Saturday morning, I presented on the LITA panel What to know before gamifying your library. We covered a range of topics: Bohyun Kim (moderator) gave an overview of gamification; Dave Pattern discussed his use of Library Game / Library Lemontree at the University of Huddersfield (UK); Annie Pho covered the not-fun-but-very-important stuff on creating institutional buy-in and obtaining grant money for these sorts of projects; and Young Lee explained the technology involved and how he plans to use badges in a law school library. My presentation was titled "Anchoring the badge: Setting standards for game-based learning in library instruction," and I discussed my current implementation of badges for instruction at The University of Arizona Libraries. You can see the Slideshare presentation with everyone's slides; though, since it was such a large panel, not all of us contributed slides (myself included), so you won't get much about what I discussed from that link. Here is a very brief summary below; I am sure I will be speaking and writing about this project more as it progresses (we have IRB approval!), so I plan to share more information in the near future.

Importance and benefits of using badges for instruction:
  • Makes instruction more scalable and can ensure wider adoption of IL skills: trackable, measurable
  • With trackability and assessment built in, this presents possibilities for customized learning ("microcredentialing" to demonstrate specific skills; customization can greatly improve motivation and learning)
  • Evidence is tied to the idea of competency-based learning (use specific outcomes to show criteria have been met for assessment, accreditation, program SLOs, other standards like the ACRL IL Standards, etc.)
  • What we are doing at the University of Arizona: my overview was very brief since I'm still studying this and have gotten IRB approval to do so
I left those in attendance with some thoughts from Dan Hickey of Indiana University, via a Campus Technology article, How badges really work in higher education:
  • "What sorts of claims will your badges make about the earners, and what evidence will your badges contain to support those claims?
  • What assumptions about learning will frame your consideration and implementation of badges?
  • How will your badges program be introduced? Will it be a centralized effort or pockets of innovation?"
You can read more about badges and gamification in academic libraries in what I have published in ACRL TechConnect on our initial plans for badges at the UA Libraries, as well as our earlier SCVNGR pilot.
Char Booth also has a great post on badges at her blog, Info-mational, looking at badging in higher ed and discussing how she is using this form of micro-credentialing in the ACRL Immersion Teaching with Technology track. See her post, MYOB: Make your own badge.
More on Badges for Instruction
On Sunday, I presented a Conversation Starter with Annie Pho and Young Lee: Achievement unlocked: Motivating and assessing user learning with digital badges. Our hashtag was #alabadge, and you can see some helpful Tweets summarizing the session.

Student Retention
On Saturday, I also co-facilitated the ACRL Student Retention Discussion Group meeting with Jaime Hammond. You can also find the group on ALA Connect. Our topic for this meeting was:
How do we measure causation versus correlation in the library’s role in student success and retention? The ACRL Student Retention Discussion Group will be discussing the impact of a “culture of assessment” on libraries and demonstrating value on campus in regards to retention. We will discuss how effective demonstration of value in campus retention is through traditional methods and hope to explore ideas participants have for new initiatives.
To help guide the discussion, we used Megan Oakleaf's article on assessment strategies:
Oakleaf, M. (2013). Building the assessment librarian guildhall: Criteria and skills for quality assessment. The Journal of Academic Librarianship, 39(2), 126-128.
We had some great discussions about what people are doing at their institutions, and seemed to have a good mix of academic librarians from community colleges and universities. The minutes should be posted within the next week or so; if this interests you, joining the Connect group will keep you up to speed. We also organize monthly article discussions during the regular academic year, with volunteers choosing articles and facilitating.

Other highlights included the Librarian Wardrobe + EveryLibrary After Hours Party; a solid recap will be up on Librarian Wardrobe soon. We had a great time helping to raise awareness and money for EveryLibrary, and we're excited to plan more events with them at future conferences. Apologies to anyone who could not get into the party; it's very, very hard to find a high-capacity venue that doesn't charge tons of money, which neither LW nor EL has to spare. We do have plans to accommodate more people for #ala2014.

There was a lot of other great stuff, but I'm going to stop there since this is already getting pretty long. I had a lot of fun spending time with friends and meeting new people at this conference. In the meantime, I am getting ready to go to ACRL Immersion in Seattle later this month for the Program Track, and I have some other exciting projects in progress as well. Check back here for more updates on badges and other stuff!

June 12, 2013

#ala2013 scheduling

Posting my tentative schedule for #ala2013 below, both to share and to promote some panels and events I'm involved in. In reality I have at least 5 sessions marked per time slot, but these are the ones I am either presenting for or will most likely wind up at. Excited to see friends and meet new people, too!

Friday, 6/28

Annual Unconference

ACRL Leadership Council Networking Session + Meeting

Emerging Leaders Poster Session & Reception

ACRL Instruction Soiree (don't see an official page yet)

STACKS! Soul Librarian Dance Party & Benefit for the Read/Write Library

Saturday, 6/29

LITA Panel: What you need to know before gamifying your library
I will be participating on this panel, presenting: Anchoring the badge: Setting standards for game-based learning in library instruction.

ACRL New Members Discussion Group
Moderated discussion about the intersection of gender and academic libraries

ACRL Student Retention Discussion Group
I created this group with Jaime Hammond to discuss issues related to student retention in academic libraries. You can join the group in our Connect space here: http://connect.ala.org/node/173037.
Our topic for this conference is: 
How do we measure causation versus correlation in the library’s role in student success and retention? The ACRL Student Retention Discussion Group will be discussing the impact of a “culture of assessment” on libraries and demonstrating value on campus in regards to retention. We will discuss how effective demonstration of value in campus retention is through traditional methods and hope to explore ideas participants have for new initiatives.
LITA Instructional Technologies Interest Group

ACRL Instruction Session Current Topics Discussion

WGSS Social

8th ALA Annual 2013 Newbie & Veteran Librarian Tweet-up

ALA After Hours: EveryLibrary & Librarian Wardrobe Party (Facebook event page)
Like last year, there will be a best-dressed contest (just for fun) and a walkoff for anyone who wants to participate. Librarian Wardrobe will have more details posted soon. There will also be Librarian Wardrobe photographers snapping pictures of people during the conference. And check out the EveryLibrary site.

Sunday, 6/30

Conversation Starters: Achievement unlocked: Motivating and assessing user learning with digital badges
I will be presenting with Annie Pho and Young Lee, covering what digital badges are, their use in instruction, and potential with technology. And check out Annie's post about this session.

NMRT Resume Reviewer Shift
If you're interested in getting your resume reviewed, there might still be some slots left.

Libraries and Student Success: A Campus Collaboration Using High Impact Educational Practices

ACRL ANSS Studying Ourselves: Libraries and the User Experience

Various socials/happy hours (LITA, NMRT, GLBTRT....)

Monday, 7/1

ACRL Undergraduate Librarians Discussion Group

How to Teach and Assess Discipline-Specific Information Literacy

Annual Library Camp

Chicago Showdown: ALA Battledecks IV

Que(e)ry: Leather Bound in CHICAGO!

June 7, 2013

Evaluating sources is not a dichotomy

As you might know by now if you read my blog or follow me on Twitter, I'm co-teaching our one-credit course for undergrads and am implementing the use of digital badges (we just got IRB approval as of yesterday!). In one of the tasks students need to complete, I found myself falling into the trap of absolute language. I wanted them to complete an activity where they evaluate a website, and in the directions realized that I wrote, "...and explain why you think this is or is not a credible source..." etc.

The problem with that is most sources are not all good or all bad. There are some sources out there that do set out to deceive people (though sometimes I don't think those content creators even believe they are being "evil"; they just believe their viewpoint is right and important and want others to agree, so it all comes down to perspective). But anyhow, I think it's dangerous to put students in the mindset that a piece of information is all good or all bad. They might use a checklist and go through the site or resource to determine whether it meets particular criteria, but not having them think critically about a spectrum of quality, with a gray area in between, sets them up to think less critically overall. Once they check off enough boxes to label a source all good or all bad, they don't have to think about the information much anymore: at that point it's just use it or don't.

I changed my wording on this activity to encourage a better understanding of a spectrum of quality and credibility. My hope for when they get to this assignment, after doing some readings, tutorials, and critical thinking, is that they will realize research is a fluid and organic process, and they shouldn't stop thinking just because a lens for evaluation takes some of the burden off of them initially.

May 3, 2013

Joining the club: Library video contest to promote student awareness and library connections

At an Instructional Services Team meeting here a while back, we talked about revamping our orientation materials (virtual tours, videos, site info, etc.), and since that information gets updated so often, we discussed the near-futility of making new videos every single year. Since other libraries have had such great success with student video contests (I particularly love this video from the University of Minnesota Duluth), I felt we could keep our materials current each year and also involve students in the process by having them teach others about the UA Libraries; this would benefit everyone.

So, this was the first semester we opened up a contest for students to create 2-5 minute videos about The Libraries. We encouraged humor and creativity, with incentive prizes for first and second place. First place gets $500 to the UA Bookstores and second place gets passes to The Loft's (independent theater here) Cult Classics Series.

We wound up with four finalists after those of us organizing the contest vetted the videos for accuracy and adherence to the guidelines, and then we opened it up to student voting. We have some seriously magical IT specialists here who were able to whip up a voting page based on student NetID credentials essentially in a matter of minutes, and about 700 students wound up voting.

There are a number of things we learned this year that we plan to adjust for next time, but all in all it was a great experience for us, and it seems students were really positive about it as well. Student leadership was also very excited when we met with them and helped get the word out, along with the tutoring center and a number of other areas on campus. Our marketing department also helped us with graphics and developed a campus-wide marketing plan for the entire semester.

Here is our video page with links to the winning videos (and contest details), but I'm going to embed first and second place here (so excited about these)!

First place


Second place

University of Arizona Library from Library Promo on Vimeo.

February 6, 2013

Some brief thoughts on classroom management, techniques, and future lesson plans

I started off writing this post reflecting on the negatives of a difficult instruction session I taught, and although it's really helpful to examine failure, I think it's even better to look at what has gone well and what does work. The difficult class was a student success course for (mostly) athletes in their freshman year. It's extremely remedial, to ensure they get the right footing before entering more advanced classes. On one hand, from what I observed, it seems necessary for some students; at the same time, they seemed frustrated and perhaps felt the class was beneath their skill level. So, with that situation (and numerous classroom management issues) and a last-minute request for instruction, it was an uphill battle.

The ERIAL Project has highlighted the issue of students with the lowest library research skills being the most confident about their abilities. I definitely notice this in the classes I teach, and particularly in this student success course. Most of the class (though not all) seemed to feel very confident, as if they didn't need me to show them anything. In contrast, the students who were excelling and doing more advanced research were the only ones asking questions and putting effort into the activity. I think an effective method in this case is to set students up for some struggle first and then show them that they could really use instruction. For example, have them search the database without direction; then, once they see they have found few useful results (or far too many), demonstrating tactics and tricks can better capture their attention. That way, when we say that knowing how to do research effectively will save them time in the long run, they will believe it.

On the flipside, I went back to teach another session to the freshman football players during their study table hours (this is part of my work in student retention), and it went amazingly well. The lead tutor who oversees their study table hours said my colleague and I are great at engaging a very difficult population (hooray!) and asked me to come back next month, even though we've now covered all the sessions we agreed upon for the academic year (orientation, basic searching, evaluating sources, and citing/avoiding plagiarism).

With this group, I have been planning game-like activities to engage their competitive nature. Anytime they can go up against one another, they seem to get really into it. We planned a BINGO-style orientation session for them over the summer, and they were hardcore about enforcing no answer sharing or explaining until the competition was over, because they all wanted to win. At this latest session, we did plagiarism court and offered candy for answering correctly. I'm already plotting our next session, and now that they have the basics, I'd love to teach them "research as conversation"; framing it that way should really help them understand the process better. I'm working on developing some things to illustrate this in a fun way and will share what I create, along with the results. This is an exciting group to work with because I can try out a lot of different things and make it fun.

January 3, 2013

Workshop for Faculty: Designing Effective Research Assignments

Designing Effective Research Assignments from Nicole Pagowsky

Today, a colleague and I presented a workshop to faculty on designing effective research assignments for student success. Since we consult with faculty often and see good and bad assignment design in action through library instruction and feedback from the reference desk, we proactively offered this session as part of the Office of Instruction and Assessment's teaching academy, which runs each semester. We thought this would be a great opportunity to work more closely with faculty who might not know about library services or about best practices and pitfalls.

If you download the PPT instead of just viewing it on SlideShare, you can see our presenter notes, detailing what was covered in each slide. We started off talking about issues students have with research and research assignments through looking at the ERIAL Project, Project Information Literacy, and Kuhlthau's Information Search Process. We discussed how faculty and librarians overestimate students' skills in research (ERIAL), how students overestimate their own skills as well but are anxious about research (and even dread it, PIL), and then how to understand this affective learning and when/how to intervene (Kuhlthau). Applying this knowledge to ACRL Info Lit standards, we had faculty start to think about current assignments they are using, or assignments they would hope to use, in this new context.

Next, we covered specific design pitfalls and best practices, breaking best practices down into: scaffolding, transparency, context, critical thinking, process over perfection, and embedding academic integrity.

As part of the hands-on activities, we had instructors use a worksheet (as mentioned above) to think about their own assignments and evaluate their effectiveness, and then we had them evaluate a sample assignment using criteria related to specificity, transparency, and encouraging critical thinking.

The session went very well; we even received applause at the end, along with many thank-yous. There is really nothing I can think of to modify at this time, other than spending more time hearing about the kinds of assignments instructors are using. We will be getting formal assessment results back soon from the Office of Instruction and Assessment, which hosted the workshop series. We will be offering this workshop again in a month or two, and I am looking forward to working with more faculty.