My first week back from maternity leave and I'm already getting philosophical ...
One of the big projects I was working on before my maternity leave was a patron survey on technology. The survey ran while I was on leave and we just got the reports back from the group that created the survey for us.
And my reactions are so mixed ...
On the one hand, it's great we were able to collect so much detailed data on our patrons' use of library technology. There's all sorts of fascinating data (such as 85.8% of our patrons feel public library internet access is either important or very important to the community) that we can now easily share with the Board and other stakeholders.
On the other hand, there's some pretty strong evidence that this was not a representative sample of our community and is likely missing representation from the heaviest users of library technology. While I love the level of detail of the data collected, it made the survey really long. Colleagues who tested it out said it generally took much longer than the advertised 10 minutes. Despite advertising the survey in many formats throughout the library and the community, many who might have taken the survey were scared off by its length.
The middle-aged, white, well-educated, financially well-off women who have other means of internet access, and who were clearly willing to sit through the survey, use the library's internet, website, and electronic resources very differently than those who are from other races, younger, less well-off, less educated, and have no other options for internet access. While we learned a lot about what this one specific group of patrons needs with regard to library technology, the other groups of library technology users we interact with every day simply weren't represented in this survey, and we learned next to nothing about their technology needs.
Part of the reason these flaws are frustrating is that this survey is now required by one of our governmental funding sources. In theory, this makes sense. Funding agencies should be picky about giving money to libraries that are responsive to patron needs, and one of the ways we can demonstrate that is by surveying our patrons to get a better sense of what those needs are. But when the survey doesn't necessarily represent our population, is it really helping our ability to assess our patrons' needs?
These are just my first impressions. We only just received the results, and I'm sure we'll be spending lots of time digging into these figures and exactly what they mean, but at this point, I'm feeling a little disillusioned about the whole process.