Impact: ‘any effect of the service on an individual or group’
(This can be positive or negative, intended or accidental, and affecting any stakeholder)
The professional area which currently interests me most is the impact that libraries and librarians have on their students, academic staff, employees and other users (hereafter collectively ‘stakeholders’). With this in mind, I jumped at the opportunity to attend the UC&R East Midlands event ‘Making an Impact’, held on Tuesday at De Montfort University (#UCREMimpact). I had wanted to write a report of the day along thematic lines, and drafted with this intention – unfortunately, the result just didn’t read right. The event is already receding in my mind, however, so I have instead decided to present one of the discussed projects in this post, and intend to re-work the rest of my notes into something coherent over the weekend. Please bear in mind that this is not a verbatim report, so some elements may be conjecture.
Paul Stainthorp (University of Lincoln) spoke at length about his involvement in the JISC Library Impact Data Project (LIDP). A collaboration between 8 UK universities, the project was designed to improve the intelligence of library systems, collating useful data and applying it to ‘join the loop’ and corroborate what we already believe anecdotally – that use of the university library has a positive correlation with higher degree results. Whilst there is data in the professional body of knowledge regarding the impact libraries have on users and parent institutions, there has surprisingly been no UK-based, publicly available study of this for HE libraries; such data would therefore provide valuable ammunition for library services fighting for the budget to remain effective in the new HE landscape.
Whilst the data available to be collected and collated varied between partner university libraries, there was a core focus on three areas: circulation of stock, use of e-resource gateways, and gate-entry statistics. The hope was to gather data to support statements like the following:
User A did B C times during D time. User A achieved E in their degree.
As an individual statement, there is only a weak correlative link between B (or perhaps B×C) and E; repeat the statement thousands of times, however, and the resulting correlation is much more reliable and statistically relevant. Note that, even with large data sets, this link cannot be considered causal – too many variables go uncaptured to make definitive statements about the library as a key contributing factor in degree success (apparently there’s some teaching goes on in the other buildings on campus; who knew...).
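To make the statistical point concrete, here is a minimal sketch – using entirely simulated, hypothetical records, not LIDP’s actual data or methodology – of how a weak per-individual association still produces a clearly measurable correlation once thousands of ‘User A did B C times; achieved E’ statements are aggregated:

```python
# Illustrative sketch only: simulated (user activity, degree mark) records.
# The 0.1 "effect size" and noise parameters are invented for demonstration.
import random
import statistics

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Thousands of individual statements: a weak underlying association
# (slope 0.1) buried in plenty of per-student noise.
loans = [random.randint(0, 100) for _ in range(5000)]
marks = [min(100.0, max(0.0, 0.1 * l + random.gauss(55, 12))) for l in loans]

r = pearson(loans, marks)
print(f"correlation across 5000 records: r = {r:.2f}")
```

Any individual pair of (loans, mark) values tells you almost nothing, but the aggregate correlation is stable – which is exactly why it still cannot be read as causal: the simulation above could equally have generated both variables from a hidden third factor.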
Stainthorp highlighted that this data can be difficult to collate, even where it was already being collected, and may require access to data sets held by other departments (such as Registry). However, each partner library managed to produce some comparable data.
LIDP results are likely to be published in due course (the anonymised data sets may also be made available); initial results, however, show a positive correlation between higher circulation figures and better degree results across all partner university libraries, with similar results for e-resource access. These findings could be rendered more significant and reliable through further study. This may take the form of a longitudinal study, replicating the methodology over several years to confirm that the initial results are accurate; it could compare library-use figures with the results of other research, such as the Student Satisfaction Survey, to examine whether heavier library use correlates with better scoring of university facilities in general (and the library in particular); or it may focus in further, targeting specific groups within the user population. This last area bears further scrutiny.
‘Students’ are not a homogeneous group.
‘Business students’ and ‘English students’ are not homogeneous groups either; indeed, in some ways the only reliable correlation which can be drawn is at the level of the individual. However, subject groupings provide a reasonable level of demographic stratification for a large-scale project, given the unavoidable variables introduced by different partners’ facilities. UCAS codes were included in the data set for this project, and are currently being utilised to produce more stratified LIDP results. However, we must be prepared, once the data is broken down in this way, for some upsetting results – it is possible that negative correlations will emerge between elements of library use and degree results for some subjects, due to differences in the currency and breadth of the resources for those subject areas. As David Streatfield, a later speaker on the day, highlighted, data from impact studies must be presented ‘warts and all’, so that it is clear we are presenting an honest picture of the library’s importance and influence on the institution’s academic success. Whilst Streatfield somewhat challenged the usefulness of large, wide-ranging studies such as LIDP, I remain convinced of their value in providing a consistent, quantifiable argument for libraries. I await the full results of this study with great interest.
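A small sketch of what that subject-level breakdown might look like – again using invented data and made-up per-subject effect sizes, not anything from LIDP. The point is simply that an overall positive correlation can hide very different pictures once records are grouped by a UCAS-style subject code, including a negative correlation for a subject whose key resources (hypothetically) sit outside the library’s holdings:

```python
# Illustrative sketch only: hypothetical per-subject correlations.
import random
import statistics
from collections import defaultdict

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Assumed (invented) effect of loans on marks per subject code:
# two subjects benefit, one hypothetically does not.
slopes = {"N100": 0.15, "Q300": 0.12, "G400": -0.05}

by_subject = defaultdict(lambda: ([], []))  # code -> (loans list, marks list)
for _ in range(3000):
    code = random.choice(list(slopes))
    loans = random.randint(0, 100)
    mark = slopes[code] * loans + random.gauss(55, 10)
    by_subject[code][0].append(loans)
    by_subject[code][1].append(mark)

for code, (xs, ys) in sorted(by_subject.items()):
    print(f"{code}: r = {pearson(xs, ys):+.2f}")
```

In a stratified output like this, the ‘warts and all’ principle would mean publishing the negative subject-level figure alongside the positive ones, rather than letting the aggregate correlation smooth it away.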