Monday, April 15, 2019

Data and Measuring Learning

Class review: Using the floor-specific data you looked at in class, how would you apply some of this information to design an evaluation? What types of questions would you ask and why? Are there specific exhibits you might look at?

What most stood out to me about the roof exhibits was the range of ages that people perceived as the primary audience. I'm curious why the data are often evenly distributed, or if not evenly, at least widely. Why is there so little consensus on who the exhibits are for? For example, about as many people think the whisper dishes are geared toward adults as think they are geared toward elementary-aged guests. It seems like for most exhibits there is a contingent who agree that the exhibit is geared toward all ages, but then there are large chunks who think it is geared toward another group entirely, and not just one other group but every other group.

I would like to connect these perceptions to how visitors interact with the exhibit. If a visitor thinks that an exhibit is for a different age group than their own, how does that change their interaction? I think answering this would require observation and a post-visit interview. Maybe we could also evaluate how Sparks facilitate an exhibit based on who they think it's geared toward. Or maybe we could dig into what creates the impression that an exhibit is for a specific age group, so that we can add or remove features to make the exhibit appear more accessible to all ages.

Readings: What are some of the challenges of measuring learning in MOXI? What kinds of evaluations do you think would be the most helpful in assessing learning in this space and why?

There are so many things that make evaluation challenging at MOXI. Foremost, people are at MOXI to play and be curious, and being evaluated can interrupt that flow. Evaluation can be boring, and people might not have a stake in proving what they've learned, so they may give whatever answer most conveniently gets them back to playing. And since we're not going for a single learning objective, it can be very hard to pinpoint any specific thing that was learned. Also, since visitors are usually not ongoing students, we don't have context for them as learners, so it can be hard to gauge growth, especially since learning can take the shape of an upward spiral rather than a straight line.

I think evaluation needs to be broad, fun, and engaging. Having people reteach is an effective way to evaluate what they understand. Maybe as we roll out more maker-driven program carts, we can include aspects that let people create tutorial-type materials: they could add tips or tricks they learned as they did the project. This would allow them to describe their learning without being prompted.


1 comment:

  1. I really like the idea of having more maker-driven program carts! It would be cool to see how people reteach how to make something to a new guest, and I think we would see some interesting learning there.

