After reading the three articles on how labels affect visitors' behavior and experience interacting with science exhibits, I have several thoughts and questions about the role of labels and of how researchers draw conclusions on their effects.
I noticed that the researchers used metrics such as time spent, different actions taken, or positive/negative sentiments expressed to measure guest engagement with an exhibit. Knowing that these actions are all that researchers can really observe in the museum, I wonder if they directly reflect the actual outcomes that the exhibits and their signage are meant to effect. Exhibits are not designed for the express purpose of maximizing time spent in front of them. Rather, they're usually meant to accomplish less tangible goals, like increasing knowledge of a scientific topic, building individual learning habits, or increasing confidence in the process of scientific inquiry. So the question should not be, "Do different types of labels increase time spent at an exhibit?" but rather, "Do different types of labels make the intended outcome of the exhibit more likely?" The obvious problem is that some of the intended outcomes can be impossible to measure. The studies are limited to the confines of the museum and to the time that visitors spend at the exhibit itself. It's very difficult for researchers to reach visitors later on to measure their retention of a scientific principle or an increased confidence in their own scientific learning ability.
With that in mind, I wonder if there are other studies that compare science museum visitation generally to scientific aptitude in a school setting, ones that could also control for museums' different philosophies regarding signage and layout. I already know that this would be a near-impossible study: science museums are geographically separated and come with a host of demographic and local confounds. I just wonder how a museum might actually try to assess the true learning outcomes of different label types beyond the metrics the researchers used in the studies we read.
This finally brings me to wondering how we label (or don't) at MOXI, and what exactly the intended outcome of each exhibit, and of the museum as a whole, is. How can these outcomes be met with strategic signage? If MOXI's mission is "to ignite learning through interactive experiences with science and creativity," how can we measure how different types of signs (even those just consisting of basic graphics) encourage learning, creativity, and scientific inquiry? Have we undertaken any of these studies? What have we found? I would love to find out more about how we measure guest engagement, and how those measurements may speak to how effective our exhibits and their labels are.
-Sam S.
Sam, I think you have touched on one of the big challenges of working in an informal learning environment: evaluation. How do we evaluate, especially for learning outcomes, when learning often looks quite nebulous and evaluation without formal assessment is quite challenging?
What about at MOXI, where learning outcomes are less defined? We will bring this up for conversation in class today.