In my Program Evaluation class this week, we had the first of our two final presentations. This was the bigger one, where we presented the findings of our research project to a meeting of the MN Children and Nature Connection (part of the Children and Nature Network). The smaller one, presenting an EPA grant request, is next week.
The research looked at reasons childcare providers didn’t spend more time bringing their preschool-aged children outside into natural, wild areas for unstructured play.
Methodology in a Nutshell:
We sent out 400 surveys to childcare providers throughout the state, received 81 back, called a few non-responders to make sure their answers more or less tracked the respondents’ answers, and then we analyzed the data.
Apparently, when you are a student at UMD, you can use some statistical software that helps make sense of survey data. Good to know, since my thesis is looking like it’ll include some sort of questionnaire (not that I’m thinking about methodology yet…). Nice that those tools are there. I haven’t looked, but I wonder if there’s pretty much just a single program that everyone uses, or if it’s like citation software, and everyone uses their own pet program.
Once we wrapped our heads around the big picture of all of our data, the class wrote a comprehensive summary report, created a presentation and a poster (which we never displayed, unfortunately), and went down to the Twin Cities to present our findings.
This was a formal presentation, so we all gussied ourselves up. This is toward the beginning of our presentation:
About half the class took turns presenting, for about an hour, then we broke out into small working groups for another hour. Each group had a different topic to bat around. We captured all of the possible solutions, wrote them all up, and will be emailing them out to everyone.
That’s the quick and dirty version of it. And really, the whole process was quick. Julie drove us along at a pretty good clip, and I think it took us all a while to really understand exactly what we were doing. I think we were already doing the project before we really ‘got it.’ At least that was my experience. But what’s neat is that you can go from zero to sixty with a research project in very short order… especially when you know what you’re doing and have twenty helpers!
The nice thing about being involved in this project is that it’s really helped me see what aspects of my own project might look like. Handy, how that works – school and all.
Fortunately, I’m still early enough in my process that this change isn’t going to be too disruptive.
Until now, I’ve been thinking my thesis would take a look at the differences in learning between two groups: one using traditional field guides and the other using electronic field guides.
I still think it sounds like a good project, but I was having a pretty hard time getting my head around everything I’d need to do, in order to make it valid, meaningful and doable.
So, while slogging my way through my lit review (hey – those things really work!), the one thing I haven’t been finding is an analysis of what is currently going on – specifically, what technologies outdoor educators are currently using while teaching outdoors.
And that’s what it’s going to be.
So, I’ll survey outdoor educators to find out what gizmos they use out in the field while teaching. GPS? Probably. Fish finders? Likely. Cell phones? Probably not. iPod Touch/iPhones? Maybe. Laptops? I can’t imagine.
Now, I just need to retool my lit review to see what I can find that’s more specifically relevant to this topic. And come up with a survey. Hmm… where can a person find a reliable and relevant tool?
In one of my recent Program Evaluation classes, we were talking about how environmental education and outdoor education differ.
It’s kind of a no-brainer, I guess, but since I’m involved with both, I’ve always pegged outdoor ed and environmental ed as two sides of the same coin. As in: most people start with OE, and eventually make their way over to EE. Or just keep rolling from the one to the next.
Not necessarily the case, I suppose. Some outdoor educators only teach outdoor skills, and never get over to the next idea of using outdoor skills to further environmental education.
Outdoor Education = Learn how to ski
Environmental Education = Learn how to ski to the river to learn how to take water samples to learn how to report on changes in turbidity
Put in that way, it kind of makes it look like EE might be biting off a little much, doesn’t it?
Met with my thesis advisor today.
Half the meeting was me ranting about some of the things I had questions about, including a paper from last semester and our grant writing seminar this past Monday.
In my paper, I thought I was supposed to give my impression of how closely the RSOP was cleaving to its mission, through the use of several different educational models. Turns out it wasn’t just a ‘my opinion’ exercise. Oh well.
With the grant writing, it drove me crazy that we were discussing nuts-and-bolts stuff. I wanted to talk about when to write grants, and who should be writing them.
After I got those out of my system, we briefly discussed the big picture plan for the semester. During my Foundations of Education Research class, I will be creating one possible skeleton for my final research project. Ken asserted multiple times that the class goes too fast to develop any chapters of a thesis. An idea, sure. A skeleton, maybe. But certainly not enough to use and turn in.
For Julie’s class, Ken recommended that I think about how I might use the technology idea when writing my grant. Perhaps… Depends on whether Tim comes back with a grant proposal idea tech would fit within.
Lastly – I’ve tried to make it clear that my thesis idea is not about particular gadgets, but about how gadgets in general affect learning about non-technology types of things. I only mention that because every once in a while, it sounds like someone will get hung up on some new device or technology (iPad, anyone?). I sometimes think my thesis idea is actually more psychology than EE, since it’s all about how the learning happens or doesn’t happen. That idea is a little frightening, since psychology is most definitely not an area I have any expertise in.
I think I’m going to take my lunch break over in the library, and try to get in some Program Evaluation readings. That class is very heavy with the reading.
When I am struggling my way through reading about validity assessments, as I so often am lately, I sometimes stop in amazement. What is amazing to me is that people are passionate about validity and Cronbach’s alpha and reliability measures. I’m just barely getting my head around this stuff, and some people make it their life’s work. And presumably, they really dig it!
I consider myself to be a more or less intelligent person, but as I am forging my way through these readings, I think it’s a wonder that anyone makes this (research about research) their life’s work. It makes me wonder if I’m just not seeing the story behind the numbers – if I just haven’t yet found the ‘hook’ that brings it all into technicolor wonder and glory. Hah.
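For what it’s worth, the arithmetic behind Cronbach’s alpha turned out to be less scary than the readings make it sound: it compares how much the individual survey items vary on their own versus how much the total scores vary. Here’s a rough Python sketch I put together to convince myself (the data is a made-up toy example, not from our survey):

```python
def cronbach_alpha(scores):
    """Estimate internal-consistency reliability for a set of survey items.

    scores: a list of respondent rows, each row a list of numeric answers
    to the same k items. Uses population variance throughout.
    """
    k = len(scores[0])  # number of items

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # variance of each item (each column) across respondents
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    # variance of each respondent's total score
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three respondents answering two items in perfect lockstep:
# the items "hang together" completely, so alpha comes out to 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Seeing it as ten lines of arithmetic instead of a paragraph of jargon helped – though the people who make this their life’s work are presumably arguing about much subtler stuff than this.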
So right now, I’m reading an article by Harold Hungerford about developing curriculum for environmental education programming. As I try to summarize what has been written, I am continually stymied. EE should have a solid base. It shouldn’t suffer the folly of Conservation Education (which… did not have a solid base?). EE curriculum should follow the proposed guidelines. The proposed guidelines have been refereed and are sound.
I don’t know. Throughout my program, I have thought that EE spends an awful lot of time justifying itself, and this article seems to continue that trend. I mean the real purpose of the article is to lay out a framework for how to establish EE curriculum, but the underlying message (to me) seems to be that EE is good enough, it’s important enough, and doggone it, people like it.
Does biology do this? Does astronomy, or political science? What other fields make sure that we know how important and valid they are? I’ve gotta say – it’s off-putting, and I feel like it actually has the opposite effect. I think it would make people less inclined to take it seriously. Just do the work, and get over justifying yourself. Either people get it or they don’t.
Hmm. I think this post transmogrified into a rant. Sorry ’bout that.
Granted, the article was written in 1980. So what is Ernst looking for us to take from this? I guess the line from the reading that ‘program evaluation is a prerequisite to sound decision making.’
Next up: Understanding By Design
This semester, I am taking Program Evaluation and Foundations of Educational Research.
Program Evaluation has already started, and so far we are talking about how the field of EE seems to lend itself to attack from those forces who would seek to diminish the impact or role of EE. EE tends to leave itself open to this either by not using good science, or by not being even-handed and non-partisan.
The problem with EE being non-partisan is that it’s not non-partisan. Just by its name, you could figure out that Environmental Education would be a field which is pro-environment. And yet, people (like Michael Sanera, so far) criticize EE for being biased. It is interesting to me that, when you search for Sanera, you find him often associated with a lot of right-wing think tanks.
And Sanera’s assertion appears to be that EE should be non-partisan. That it should not say (for instance) that mountaintop removal mining is a bad thing, it’s just a thing that is happening in the environment. How you feel about it should be your decision, and not the view of some environmental educator that is being forced into your brain. That doesn’t seem realistic, to me. It would be like saying that driving drunk is not a bad thing, it is just a thing that some drivers choose to do. How you feel about it should be up to you, and not up to MADD, the legislature, or anyone else. Seems silly, doesn’t it?
Anyway, this isn’t about roasting Sanera or anyone else. To me, it’s just interesting that EE is defending itself for not being non-partisan enough, and from clearly partisan attacks. It’s kind of like the field of EE has an inferiority complex.
Well, this is a new angle that I’m learning about, so I’ve obviously got some reading to do on this one. Interesting idea.
Oh – the connection to ‘Program Evaluation’ is (approximately) that until we can effectively evaluate our programming, and know what we are doing and why we’re doing it, we leave ourselves open to attacks that may or may not have any basis in reality. It’ll be interesting to see where this all leads.