For me, there are two things to think about: 1) the skills/tools I will implement to communicate with and instruct students, and 2) the skills/tools I will want students to have and use in order to work with each other—these may not be the same things, or be used in the same ways. Since I am not teaching right now, it requires a sort of projecting out to think about which tools might be the most useful to me and future students. The tools I find most helpful are the ones that help make the processes of writing evident—something like Google Slides/Docs, where groups can work on and respond to the same draft, either to give feedback or to create a group project. Or Brainshark, where students could actually talk through their revision process or think through a piece of writing for others to listen to and see. I have also been thinking a lot about our own Weebly websites as portfolios, and would want students in writing workshops—both online and face-to-face—to have this place to create a portfolio of their work, which would allow them to really show their writing process and changing drafts over time. We often put artists’ books together for writing workshops, and while an online website loses some of the tactile qualities of an artist’s book, it also holds the potential for new imaginings of word & image. As for other tools, I was so glad Michael used Blabberize with the photo of Marx for his Web 2.0 presentation, because it opened up more possibilities for how to use it than I think I would have come up with on my own (I had talking camels in my head and couldn’t get past animals & comic uses). One of the pieces I ask students to write is a portrait of a person who is important to them—Blabberize could be part of this—to add some humor but also more serious content.
I think one of the more important things I have re-learned is to try things out even if, at first, they don’t seem to offer what I think I might want or need. The flip side of that is to not be afraid to discard things that don’t seem to be working—discard, but don’t forget—because you never know when an old or unsuccessful tool might be just the ticket.
The personalization principle offers the concept that an informal, friendly manner of speaking is often more effective at reaching students and promoting learning than a more distant and formal mode of speech (or writing). There is some evidence to support this concept, but, as with all the other principles, there are circumstances (boundary conditions) in which the evidence does not show the principle being effective (such as with more sophisticated learners). The authors point to discourse-processing studies that show people are more diligent about paying attention when they feel they are in conversation with someone. What I noticed, though, in a couple of their examples, is that accompanying the informal tone of presentation is often the incorporation of narrative—something they do not identify, but that I guess also plays a role in inviting students/learners to pay closer attention to or connect with what is being said. In the example in Figure 9.6, the learner is invited to go on an adventure. Incorporating narrative is a powerful tool to help with memory (making up stories about the digits of pi is one way many people have been able to memorize hundreds of decimal places—Mike Keith’s “Cadaeic Cadenza,” a short story, represents about 3,800 digits). You can have an informal tone without a narrative in place, but so many of these agents or characters (Herman the Bug, Peedy the Parrot, Jim the student) invite students to create stories (why is Herman the Bug telling me about plants, or why is a parrot interested in proportions?). Narrative is part of the process of learning. I do include personalization in all my classes—it is fundamental to how I run a writing workshop—I am trying to create a community of writers & learners, of which I am a member (albeit a more experienced member). It is very important for the students to trust each other (and me), so they can share their writing and give & take response.
Creating this community of writers without all the in-person cues we would normally have in a room where we see & hear each other is one of the great challenges of teaching a writing workshop online—not an insurmountable challenge, but a challenge nonetheless. I see some of the tools of Web 2.0 as helping cross the gulf of space and time in an online workshop—I think some of these tools offer me and students different ways of interacting with each other, and having fun as well—so I can see using Voki, Narrable, etc. in an online class or even in a face-to-face workshop. As I said in my discussion board post, I think there is some common sense to the Coherence Principle: eliminating unnecessary music, audio, narration, images, or video clips. But I still find the theoretical underpinnings of these principles very weak—so for me, it is a matter of trying to save what seems to make sense from practice and experience and throwing the theory out for the most part—or at least not letting it get in the way of using what might be useful. I still very much have my doubts about the redundancy principle, given that what I will produce will not be presented one time, at one speed, in a darkened room. All the media will be available 24/7, with students having control over speed and time—I think the redundancy principle really falls apart under these circumstances—though that doesn’t mean I think it is a good idea to read the text presented on a slide; there is a happy medium of what needs to be said & seen & read.
I don’t know how the principles will impact what I design—I think “less is more” is not a revolutionary idea, but fairly accepted by designers in and out of the classroom. The contiguity principle also makes sense—having words & images together makes sense for understanding and linking things in memory. I have been doing some research into the criticism of Mayer’s theories, and there is quite a bit of solid criticism of the Cognitive Load Theory on which the Dual Channel Theory rests. Ton de Jong has a long piece titled “Cognitive load theory, educational research, and instructional design: some food for thought” (open access at Springerlink.com, 2009), to which Roxana Moreno (one of Mayer’s co-authors for a few studies) thoughtfully responds in another piece posted on Springer titled “Cognitive load theory: more food for thought” (2009). There are a few other blogs—I am just getting started reading through these materials, but it tells me my reactions are not totally off base—there are a lot of questions about the theory and even some about the data collected. So, as with Bloom’s taxonomies, I am going to beg to differ and point to other possible models, explanations, and ways of evaluating what we do in teaching & learning. To end on a less critical note: I think I am falling in love with Brainshark—which is a bit surprising, since talking into a microphone was one of my least favorite things to do, but I begin to see how personalizing what is being seen can be very helpful—a way to be there for the students even if it is not immediately interactive. It is like listening to a recorded book, or remembering teachers who used to read books aloud to us in school to bring them off the page. Obviously this is a bit different: I am not reading my slides, but the fact of a speaking voice adds a necessary dimension to the slides. I also think having students take advantage of this tool would enrich their involvement with their learning.
As I was looking through all my class materials, trying to discover what I might use to create the second presentation, I came across several writing/photo-essay assignments which would be made richer and easier experiences because of the tools available through Brainshark, Google Slides, etc., whether in an online class or an actual classroom setting.

Blog Post 5: The Modality Principle

The Modality Principle, as explained in the text, states that when possible, it is best to use an audio recording of text with images instead of written text with images (even if they are presented closely together, as the Contiguity Principle states) in order to maximize the learner’s use of working memory—to enhance learning. This is based on some narrow research into how students learn and perform on tests after the fact, and it is based on the two-channel theory of how our cognitive system takes in information. The Redundancy Principle states that it is usually not better to include both written text and an audio recording of the same text on a slide with an image or visual representation: that this splits the learner’s attention in unproductive ways between the visual and phonetic processing channels of our minds. This principle is also based on fairly narrow research and a fairly narrow set of what the authors call boundary conditions: namely, that the presentation be fast moving, with familiar words, and with many words on the screen. Outside of these boundaries, things are not so clear. So, for instance, if the presentation is not in the learner’s mother tongue, it may be appropriate to have both visual and aural presentation of the text; likewise if there are technical terms new to the learner; and—most importantly—if the time element is removed, the redundancy principle does not hold. This last element of time seems crucial and seems to bear more exploration/explanation than is given in this chapter: in more and more online education, learners do have control of how often they view presentations and at what speed.
I understand that the authors put a section at the end of each chapter about what they don’t know about the principles, but it doesn’t seem like enough.
Problems and Questions I Have: The principles are based on reviews of the literature and fairly narrow research which seems designed to test short-term memory (memory transfer tests), as distinct from long-term memory and/or deep learning. At one point earlier in the book, the authors mention not knowing how long-term memory & learning are affected by the Multimedia Principle they are discussing (87), but it seems all of these principles need to be questioned in terms of long-term retention or learning. There is an assumption, it seems, that memory and learning are the same thing—and I do not think this is the case. How are they related? How do you discover what is learned and what is just memorized or remembered? They discuss problem solving, but do not go into any discussion of what it is they are actually looking for in transfer tests, etc.—which is disappointing. Many of the studies reviewed are written or co-written by the main author (Mayer), which seems a little suspect to me. I understand this is a newish field of research, and there may be few researchers looking into these things; still, it is hard to stay objective when you have a vested interest in what you have created—and there is no research discussed which offers any criticism of the principles.

Well, this has been a trying project--and I mean trying in a couple of senses of the word: frustrating, and also trying out many ideas without much luck. I am still not very happy with this--there are no images in it to speak of, unless you count the punctuation marks themselves. These could have been more elaborate or "designed," but I am not sure that really helps in what I am trying to do. I like adding images, but I could not really come up with anything that would add to the objectives of explaining the rules of sentence joins. So--it is sparse and blue. I did use the notes section to try to add what I would say to accompany many, but not all, of the slides.
When I teach in a classroom, I do mini grammar lessons--they are a combination of mini lecture or overview, followed by an activity--usually in groups and game-like--to help them practice the rule. I like the idea of having these lectures/reviews in a slide format, with a voice-over perhaps--followed by some kind of activity I could ask them to do--and if it is a slide presentation, it is always there for them to refer to. I will continue to think about the image problem--and how to get around my own myopia about what is possible.
Blog Post 3: The Contiguity Principle

The contiguity principle is pretty straightforward: when you are putting together a visual presentation, you should pair images and the words associated with those images (or words with sounds/music) as closely together as possible. This enables the audience to focus on content—what is being read/said paired with what they are seeing/hearing—instead of using extra mental resources to connect what they saw on a previous slide with what they read on the next slide. The contiguity principle relies on the theory that our brains process “input” on two separate channels, the audio and the visual (hence, it is best to pair image and word), as well as the theory that we have limited working memories: so the less we tax our working memories, the more mental energy there is to put toward long-term or “deep” learning. I think it makes sense to do this, though I question the two-channel theory of how we receive and process information. I think our brains are much more complex than the two-channel theory of input allows for. My daughter had to watch and write about a documentary film for her English class last semester, and she chose Alive Inside, which is about how music can reach parts of the brains of Alzheimer’s patients who were thought beyond language and remembering. It turns out music awakens memories and movements in parts of the brain which were thought not to involve memory or language—but they do. Our brains are very plastic, and it turns out that if one part of the brain is injured, other parts can, and often do, retool themselves to take over those functions.
So, while the two-channel theory may help bolster what seems to make common sense, and what may be supported by research into how people do on short-term memory tests after seeing presentations on how a bike pump works, I think ultimately there is going to have to be a more comprehensive look at the brain and learning—in the textbook, the authors themselves say they do not know what long-term learning looks like after using the contiguity principle for presentations (I do not have my textbook with me—otherwise, I would put the page number down for this). So, my quibble with the theory does not take away from my thinking that putting image and word together makes sense—I just think the question of why it works is much more complex than the metaphor of our brains as a DVD-player hook-up. I have seen very few PowerPoint presentations recently, so it is difficult to dredge up memories of this principle being violated or used well. Probably the most recent PowerPoints I sat through were for the school district’s community involvement series, and I have to say these were done pretty well—from what I remember—perhaps word/information heavy—but nothing overwhelming. Each of these presentations was followed by a working session in which participants had tasks to do which may or may not have had to do with the presentations. One thing which I have heard a few times, which annoys me quite a bit, is when presenters see people taking notes (I am usually one of them) and say to the whole audience something like this: we have provided you with a copy of the PowerPoint and it will be available online, so don’t feel it is necessary to take notes. It is a statement that really ignores the power of writing/jotting down things in your own words to help you remember and learn what you are hearing or being presented with.
[Photo used under Creative Commons from amsfrank]

In its most concrete and literal definition, as the guy in the video clip says, “multi” means “many” and “media” is the plural of “medium,” which in its most basic meaning refers to the material you use to present/make something. I like that he includes books as multimedia (the beautiful codex as an illustration), but I do not think, in today’s world, books come to mind when we talk about multimedia (even if on some absolute level of definition they can be included)—perhaps some artists’ books make sense to include. In day-to-day use, multimedia refers to visual images combined with words in slides or on screens which are projected, or to in-person events in galleries which incorporate image, sound, and movement in a particular space or order, or to film and video—the almost perfect blend of multimedia. So, when I think of multimedia, I think of seeing and hearing something—and movement, of either images or bodies or words. This is not too precise—but it seems to take in the large territory of overlapping areas where multimedia happen. I do not agree with what he says about media not affecting what is put into them—media are not trucks or hand carts carrying apples equally—but this is an old argument & I won’t go there now.
I think multimedia have a place in education: books, PowerPoint, Prezi, photos, audio recordings, films, podcasts—the list goes on and on—and I think we have all used various types of multimedia for a long time—but some things have changed: we can bring so much more multimedia into our homes, onto our screens (both large and small)—and this presents certain challenges, I think, in terms of context and purpose and integration into the learning of students. I am not sure yet how I will use these new multimedia in whatever class I teach next. I think it is powerful to hear and see someone giving a reading, so, if there are no actual writers coming to campus, going to one of the various writing organizations’ websites to find videos of readings, or of writers talking about some question of writing, would be something I would consider. I always used to take my comp classes to the gallery at COD to begin a series of pieces of writing about looking and seeing; what the Internet allows is to go to other museums and galleries as well. What I wouldn’t want to do is distract from the work & reward that is writing—I would want to enhance it—a balance perhaps not easy to get to.

Well, this is the first blog post here. I am so happy I am not beginning the TOUT series of courses with this as a first course; trying to get set up on Weebly--easy though it is--and trying to wrap my head around all the technology would probably have sent me packing. As it is, once I logged in to Weebly, almost everything came back to me about how to proceed. There are things I would like to do to the rest of the pages here--streamlining and organizing, writing better introductions, fiddling with the pictures--but all in all, I felt OK coming back into this space.
I am not altogether at ease about the multimedia things we are going to have to produce--talking into a microphone is not a natural act--especially when there is no audience present to gauge effect. I have given readings before, but with great anxiety--so this will be a test of my mettle, I guess--one of those things I do because it is good for me and not because I really want to do it. I think this is partly a generational thing--my kids are totally comfortable in front of microphones and video cameras because these technologies are always around in what they do for fun and at school; but there is also the leaning of personality, and I am definitely on the introspective side of things--it unnerves me to see a picture of myself when I log in to Blackboard, even though it is about a centimeter square. It unnerves me more to think about how many others have access to this picture now that it is online--but that brings up a whole other issue, or cluster of issues, around privacy and control which bear some thinking about at a later date. So, in order to fulfill the rubric, what do I want to learn from this class? I want to learn some of the technologies of media production for online classes and for use in face-to-face classes--not because I am comfortable with them but precisely because I am not, and because I feel responsible to whatever students I have in the future to know what is possible and to be able to do what is possible. I want to keep most of my anxieties intact--they keep me honest and questioning what is essential.
Author: Glynis Benbow-Niemier
May 2015