
August 15, 2014

Instructional design for librarians

image via edtechdojo.com
Instructional design (ID) is an important component of good instruction, but because most librarians (myself included) were not trained in it in library school or afterward, it's an area where we need to catch up to close the gap in our knowledge and skills. ID helps an instructor connect learning goals/outcomes with instructional practices and assessment to create a learning experience that can be more efficient and effective for learners. I'm sure most would agree that librarians' initial instruction experiences are trial-by-fire.

For these reasons, ALA invited me to teach a course on an instruction-related topic, and I thought instructional design would be a good way to cover principles for both face-to-face and online teaching in any type of library. I asked Erica DeFrain to join me in teaching since she has some serious skills, as well as degrees in Instructional Design and a PhD in Educational Psychology that she's finishing up. If this interests you, more information follows!

Course Instructors: Nicole Pagowsky & Erica DeFrain
September 15 - October 15, 2014

This four week, online course will allow you to work at your own pace while receiving feedback on projects and having conversations with your instructors and coursemates. Upon completion of the course you’ll have a fully developed lesson plan that includes pedagogically sound instructional strategies and a meaningful assessment plan.

What you will get out of this course:
  • How to use an instructional design (ID) model to create your own teaching, while being critical of the limitations of ID
  • How to leverage learning theories and knowledge of student motivation to create more compelling instruction
  • How to integrate assessment holistically into your curriculum, lesson, or learning object so that you can help students reflect on their own progress, while you reflect on your teaching
  • How to critically select and position technology within your instruction to enhance student learning
  • How to develop an awareness for critical pedagogical practices to create inclusive classroom atmospheres or learning objects
      
Erica is fancy - here is her instructor bio if you aren't familiar with her work:

Erica DeFrain is a librarian with over ten years of professional experience developing and designing instruction. In April 2014 she joined the Research and Instructional Services department at the University of Nebraska - Lincoln as an Assistant Professor and Social Sciences Librarian. A doctoral candidate in Educational Psychology, she has an MLIS and an MS in Educational Technology from the University of Arizona. She is a huge fan of the Guide on the Side, and one of her Guides was featured as an ACRL PRIMO Site of the Month in April.


Nicole Pagowsky is a Research & Learning Librarian at the University of Arizona, and is the liaison for online learning, student retention and success initiatives, general education, and the College of Architecture and Planning. Both her MLIS and MS in Instructional Design & Technology degrees are from the University of Arizona. Nicole's research focuses on game-based learning, student motivation, and critical pedagogy. 


We hope anyone interested will join us; feel free to contact either Erica or me if you have questions.

September 9, 2013

Reflection on Feminist Pedagogy for Library Instruction (book)

image from powderroom.jezebel.com
I just finished reading Maria T. Accardi's Feminist Pedagogy for Library Instruction (Library Juice Press, edited by Emily Drabinski). Aside from resonating with me because I try to employ critical library instruction and feminist pedagogy when I can, a lot of what Accardi discusses in the book also relates to what I'm doing with digital badges and student retention.

First, for some background, Accardi explains that feminist pedagogy resides within critical pedagogy. Feminist pedagogy might carry the misconception of being instruction about women and feminism. Although it is often related to those topics and employed in women's studies courses, it can be integrated into any curriculum. It typically exposes students to issues hidden in society, particularly injustices based on race, class, ability, sexual orientation, etc., and of course gender. Accardi quotes bell hooks (1994) for a concise description: "Feminist teaching techniques are anti-hierarchical, student-centered, promote community and collaboration, validate experiential knowledge, discourage passivity, and emphasize well being and self-actualization" (hooks in Accardi, p. 31). In other words, the aim is to help students develop a critical consciousness and take action on their learning.

So I wanted to look at some of the work I'm doing through this lens after this book made me think more clearly about what I am trying to accomplish.

Digital Badges: one of the issues I'm really struggling with for our badges is scalability. There is a conflict between reaching many with limited FTE (meaning having automatic assessments that don't require intervention) versus reaching fewer but retaining the ability to provide meaningful feedback and interact with students. One thing about badges is that they are typically awarded for rigid criteria. In a sense they need to be, because a badge means something specific and ascribes value to a particular skill. So, if you have no concrete way of measuring this skill to determine whether a badge was "rightfully earned," what does it even mean if anyone, or no one, can actually obtain it? On the other hand, I believe students need to create their own learning and be proactive (feminist pedagogy), and I don't believe there should necessarily be an authority figure telling them what is right or wrong in absolute terms. Obviously, I know more about information literacy than they do, so I would need to develop content, etc., but as Accardi explains, feminist pedagogy is about being a guide and a facilitator rather than an all-knowing "sage on the stage." A lot of the badges I have created focus on affective outcomes, students developing their own meaning of content, and opportunities for reflection and relating material to students' own lived experience. It's difficult enough to measure this as it is, let alone within the more rigid confines of a badge rubric. Not all badges need to be this way, but when designing a suite of badges for campus, making as many as possible automatic and intervention-free on a 40k-student campus with 10 FTE instruction librarians tends to be more desirable. Using an automatic multiple-choice quiz to determine skill acquisition is an easy, yet banking-model-esque, method to award badges at scale. So something I am trying to figure out here is how to use feminist pedagogy while still being efficient. I'm working on some ideas for this, but it's certainly a point for discussion. How do you reconcile this in your teaching, particularly when instruction is for high numbers of students?

Student Retention: another area I focus on. It's conflicting that student retention is measured with rigid, big data and explained in terms of ROI, yet some of the most effective methods for retaining students include providing opportunities for personalization, social involvement, and affective learning outcomes. A lot of the instruction I do, particularly for student success courses and "at-risk" groups, includes promoting greater awareness of and comfort in the library rather than an explicit focus on content. I think student retention work would benefit greatly from feminist pedagogy, as would library instruction in general, given the high anxiety many students feel when using the library (something Accardi does touch on).

This is my brief rundown of my most current thoughts from reading this book. I thought it was a great introduction to understanding feminist pedagogy and how it can be applied to library instruction. Accardi also discusses her experience with the ACRL Immersion Program and issues with the ACRL Standards, which I'd like to address in another post.


August 29, 2013

The Pygmalion Effect

I think I've mentioned in previous posts that I'm earning a second master's (MS) in Instructional Design and Educational Technology, but a new update is that I'm also earning a certificate in motivation + learning environments through the Educational Psychology department to coincide with my degree. My Ed Psych course for this semester is Seminal Readings in Education and Educational Psychology. So, I might blog about either Ed Tech or Ed Psych as I go along.

Image from theinsideouteffect.com

Today we discussed some readings we did on the Pygmalion Effect. This is the notion that preconceived expectations of others impact performance or an outcome; it's essentially the self-fulfilling prophecy. What's interesting is that these preconceived expectations have the same effect whether they are self-generated or imposed by an outside source (though naturalistic expectations are stronger). So, for example, foremen in a warehouse were told that certain employees did well or poorly on a job exam (regardless of how they actually did), and the foremen rated the employees they believed to be smarter as better and more efficient. Another study experimented on mice (I am not a fan of this, but...): the mice were either brain-lesioned or made to look as if they were, so the handlers could not tell the difference. Handlers were told the mice were either bright or dull, regardless of whether they had been lesioned. Unsurprisingly, lesion-free mice whose handlers were led to believe they were bright performed the best. What was surprising was that lesion-free mice whose handlers were led to believe they were dull performed just as poorly as the lesioned mice labeled as dull.

We are looking at this research as it relates more directly to classrooms and formal education next week, but there are huge implications. Visual cues are one of the most important factors in all of this. There is a psychology study on "thin-slicing" (person perception based on superficial aspects in a short period of time; essentially, first impressions) applied to student perceptions of teachers, where students watched 30-second silent clips of teachers teaching and rated each teacher's effectiveness based on the clip alone. The study found that the students watching the clips gave nearly the same ratings as the students who actually completed the class and filled out teacher-course evaluations (TCEs).
Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431-441.
Another (and very recent) study by Chia Jung Tsay did something similar: participants received short, silent clips of musicians competing in formal events and had to predict who won based on the videos alone. Accuracy was astounding, and visual impressions clearly had a greater impact than actual talent. When participants tried to base their ranking guesses on audio alone, they were not able to distinguish who won. Tsay points out that this "suggests that the visual trumps the audio, even in a setting where audio information should matter much more."

In looking at perception of self and perception of librarians by patrons, students, faculty, etc., this is important to think about (and something we are examining in the Librarian Wardrobe book). How we are perceived by others might influence how they evaluate us, and how we perceive others might influence how we evaluate them. If visual cues are especially important, then understanding how we present ourselves, whether through gestures, other physical movements, or clothing, matters, and studying how we dress and how the public perceives us would be quite significant.

July 22, 2013

Grading and assessment, water and oil?

As I've been getting ready for the ACRL Immersion 2013 Program Track (I leave in 1 week!), I've been finishing up a lot of readings on assessment. I've actually been really glad to read these articles because, as I've solidified my notion of assessment by applying it to the instruction I'm doing, I'm finding my ideas align with what I'm reading.

The pattern I am finding in these readings is that assessment needs to be more holistic; assessment should be a method for students to learn rather than a focus on evaluation; and assessment should provide ongoing, meaningful feedback for students to practice instead of being judged.

We are wrapping up the summer semester with the badges pilot, and Purdue Passport incorporates assessment within earning a badge. Typically, a badge is awarded after a skill has been achieved, so the assessment is more evaluative and judgmental than it is a source of feedback for improvement. This clashes with how I would prefer to teach and use the badges, so I've been using the feedback/assessment mechanism in Passport differently than it might be intended.

This is good because I think students are getting more out of the class, but also poses some conflicts:
  1. If badges being awarded are not based on more rigid judgement of skill acquisition, how valuable are they?
  2. On this note, how interoperable are they? Can their qualities be translated or compared to other institutions or libraries offering similar badges if desired evidence isn't as clearly enforced?
Because this is a credit class, grades need to be tied to student work. For this, the badges are essentially pass/fail: you either earn the badge or you don't. If a student is late finishing badge work, an exception is made to award half credit, but this is the only partial credit awarded. There are pros and cons to this as well:

Pros: Students can take risks in their responses and have less fear of failure (this positive aspect is rooted in game mechanics); I can focus more on the quality of my feedback rather than what level of good or bad the student's work falls into

Cons: How is good student work differentiated from bad work, particularly if bad work is due to sloppiness or disinterest? Shouldn't a student who submitted excellent work (or evidence) for a badge be awarded the badge, while less stellar work would not be? Isn't the purpose of awarding badges to demonstrate that a skill was successfully acquired?

I have such mixed feelings on this. But one feature of Passport is to allow students a re-do. I use this often for sloppy work. I will leave feedback explaining exactly what I'm looking for and give the student a second chance (next semester I will be sharing specific rubrics for each badge with students so they have an even better concept of what level of understanding is desired). 

I am not a stickler on lower-level concepts like formatting a citation perfectly or memorizing exact steps for finding an article when you only have a citation (these are specific assessments in the class that address more basic skills within learning outcomes). If a student has most of a citation right but forgets to italicize the journal title for MLA style, it's really just busywork to make them re-do it or for me to take points off. I leave feedback letting them know they mostly got it and to remember to double check these things for formal papers, and then I give them all the points. I love Barbara Fister's 2013 LOEX keynote (in fact, my team read it as part of our strategic planning for the new fiscal year). I agree strongly with her whole presentation, and to use a specific example here: "very rarely outside of school are citations needed." I care far more about whether students understand the purpose of a citation and can incorporate it into their new understanding of "research as conversation" than about styles and how to format them.

One assigned article that has been part of this class for a long time is a CQ Researcher article on cheating: why students cheat and how they cheat. It's interesting to see what students agree with in their reflections; a number say that when students don't feel course material has real application in their lives (or when an instructor provides little to no meaningful feedback), they have no motivation or investment to put in quality work, and so cheating is easy. Focusing less on grades and more on understanding and a conversation between the students and us as instructors creates a richer experience for all. Their reflections resonate well with what we're doing in the course: making it apply to their lives, drawing better work from them, and in turn providing more meaningful, continuous feedback. This also allows for continuous improvement on our end, which is the crux of assessment.


January 17, 2012

Reflections on Code Academy and Code Year so far

I've started Code Academy and, as of last night, completed Week 1. This is a free program with weekly online lessons to learn how to code (JavaScript). Librarians have started using the hashtag #codeyear to communicate with each other about their progress (and you can sign up for the lessons at the Code Year site). There has been a push in Libraryland for librarians to learn coding so we can be more self-sufficient in developing digital services and products, as well as communicate better with IT professionals. There is even a newly established ALA Connect group for librarians to discuss and help each other with the weekly lessons.

My impressions so far of Code Academy are mixed. Of course, no doubt, this is a great thing. It's free, it's accessible, and it's an intro-level program that is incredibly interactive. It can be hard to teach yourself these types of skills, so opening up the playing field is huge.

It's also nice that the lessons are given in increments, so you get Week 1 for a week and then are sent Week 2 the next week. You can do more if there is more content up on the site, but it at least makes things more digestible. Leveling up and earning badges is another thing I like. It may be a bit of gamification, but since these lessons have been made more social through Code Academy and through the library community, it adds a little more fun. I've also taken a particular interest in how the Mozilla Open Badges project will relate (or could relate) to library instruction, so experiencing a badge-generating program is useful to me, and I'm seeing how it could potentially work with students. Although the Mozilla Open Badges project is for open access education, I still think it could be a beneficial concept to try in university and other formal academic settings.
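Since badges keep coming up on this blog, it might help to show what a badge can boil down to as plain data. The sketch below is hypothetical and simplified, not the actual Mozilla Open Badges schema; every field name and URL in it is made up purely for illustration.

    // A simplified, hypothetical sketch of a badge as data.
    // NOT the actual Mozilla Open Badges schema; all names and URLs are illustrative.
    var sampleBadge = {
      name: "Evaluating Sources",                               // hypothetical badge
      description: "Earned by demonstrating basic source evaluation skills",
      issuer: "Example University Library",                     // hypothetical issuer
      criteria: "http://example.edu/badges/evaluating-sources", // placeholder URL
      evidence: "http://example.edu/students/123/reflection",   // placeholder URL
      issuedOn: "2012-01-17"
    };

    // Because a badge is just structured data, it can be displayed or shared
    // anywhere that understands the format.
    console.log(sampleBadge.name + " (issued by " + sampleBadge.issuer + ")");

The real Open Badges work adds issuer and verification details on top of data like this so a badge can travel with the learner, which is part of what makes the idea interesting for library instruction.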

Back to Code Academy, there are also some things that I am finding problematic. When considering good pedagogy, detailed feedback contributes to effective learning, and Code Academy does not really give any feedback. You put your code in and run it, and then you are right or wrong. There is a little bit of info that pops up when you enter wrong code, but it's often not enough to help you figure out where you went wrong. The hints are great at the beginning of the lessons but get more obscure and mysterious as you progress. I think the sandbox atmosphere, where you can try out coding without being bogged down in theory and memorized definitions (and where you don't have to be afraid of failure, which is a quality of a good game, BTW), can be a good method, but at the same time, not really understanding the logic behind how some of the code works makes it very hard to understand why your answer does not work. I was glad to have other librarians who understand coding logic explain why my answer for Week 1, Lesson 8.2 was incorrect, so I was able to progress and finish the week.
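To give a sense of where right-or-wrong feedback falls short, here is a small, hypothetical Week 1-style exercise (not the actual Lesson 8.2 content), with a note on the kind of beginner mistake a checker might only flag as incorrect without explaining why.

    // Hypothetical beginner exercise: return "even" or "odd" for a number.
    // (Illustrative only; not the actual Code Academy lesson.)
    var checkNumber = function (n) {
      // A common slip is typing `if (n % 2 = 0)` (assignment) instead of
      // `if (n % 2 === 0)` (comparison). A checker that only reports
      // "incorrect" doesn't point a beginner at that distinction.
      if (n % 2 === 0) {
        return "even";
      }
      return "odd";
    };

    console.log(checkNumber(4)); // "even"
    console.log(checkNumber(7)); // "odd"

Without an explanation of the underlying logic, "incorrect" alone doesn't give a beginner much to work with.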

Overall, I really do think Code Academy is great, and I'm going to continue with the lessons. It can be difficult to weave detailed feedback into an automatic, teach-yourself-type program, but at the same time, it is essential for people who are just starting out. I think this TechCrunch article, "Will we need teachers or algorithms" (an interesting read for emerging trends in education as well), rings true here to a degree. Whether human or AI-driven, though, if you can't figure out what you did wrong in a meaningful way, you can't learn from your mistakes and progress.