Sunday, April 14, 2019

MOXI Exhibit Evaluations & Measuring Learning - Juliana


Class review: Using the floor-specific data you looked at in class, how would you apply some of this information to design an evaluation? What types of questions would you ask and why? Are there specific exhibits you might look at? 
Sophia and I looked at responses to exhibits on the second floor. One thing I noticed in the surveys completed by staff is that when asked about guests' frustrations, staff often responded with their own frustrations instead. For example, one comment about the Maglev train mentioned that the Legos are a lot to clean up, which is not something a visitor would likely be frustrated by. So I think that when determining the frustrations of an exhibit, this information needs to come directly from visitors, either through detailed observations of guest interactions or by interviewing guests after their interactions and asking what frustrations they had with the exhibit. This would be especially helpful for less popular exhibits such as the Maglev train, since we see little interaction at those exhibits and their frustrations are not as clear. At the more popular exhibits, visitors' frustrations are sometimes more obvious (ex. Mindball - signage; BiTiRi - not enough pieces, cars getting stuck on top).
Readings: What are some of the challenges of measuring learning in MOXI? What kinds of evaluations do you think would be the most helpful in assessing learning in this space and why?  
Measuring learning in MOXI and other museums is challenging because there is so much information and so many different learning paths any one visitor can take. Unless visitors are observed individually to see which exhibits they spend their time at, it is difficult to know exactly which questions to ask. We also would not know what background knowledge they brought to the museum without a thorough interview before their visit. As discussed in chapter 3, measuring knowledge retention through recall and recognition would not be effective for measuring learning in museums for these reasons, since those methods test learning that is very specific to each exhibit.

I think a better method of measuring learning would be assessing conceptual change through clinical interviews. In this method, participants complete questionnaires before and after their interaction with an exhibit and are also asked to predict what they think will happen. While much more time consuming than some of the other methods, this approach would give much more thorough results when evaluating visitors' learning, because it looks at each visitor individually, which is important in informal learning settings where there are so many different pathways of learning.

