 
Evaluating electronic information resources in London and the South-East (UK)

  Cumbers B. J. (eKAT: electronic Knowledge Access Team)
 
The evaluation of electronic information resources is problematic: a quantitative approach relies on usage statistics that are rarely gathered consistently from one supplier to another and that give little information on user satisfaction, while a qualitative approach is inherently limited by sampling. In the National Health Service in England, with its geographically dispersed staff and the historical variation in information provision between sectors, there is also a need to evaluate the reasons for non-use as well as the effectiveness of publicity.

KA24 is a joint project in the London and South-East regions of the NHS that gives electronic access to a selection of databases and journals to everyone working in health care in the two regions, regardless of profession or place of work. It covers not only NHS staff but also social care, hospice and voluntary sector staff, many of whom had no access to library information sources before KA24. Having now been in operation for two years, the service allows an in-depth quantitative and qualitative evaluation, which can be used to inform the development of the new National Core Content project providing electronic information resources for the whole of England.

Quantitative. The bulk of the KA24 content comes through a single supplier. It was stipulated at the outset that detailed statistics of usage by profession, by organisation and by individual should be available. The evaluation criteria were based on quarterly cumulations of these statistics, with the ambitious aim of bringing every measurement up to the average for the previous quarter. In general, this aim has not been achieved, and the possible reasons have been examined.
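As an illustration only (the supplier statistics and any evaluation scripts are not described in the abstract), the quarter-on-quarter check could be computed along the following lines. The record layout, the field names and the reading of "average for the previous quarter" as each measurement's own previous-quarter figure are assumptions made for this sketch.

```python
from collections import defaultdict

# Hypothetical usage records: (quarter, profession, organisation, sessions).
# All values and field names are invented for illustration.
records = [
    ("2003-Q1", "nursing",  "Trust A", 120),
    ("2003-Q1", "medicine", "Trust A", 300),
    ("2003-Q2", "nursing",  "Trust A",  95),
    ("2003-Q2", "medicine", "Trust A", 330),
]

def quarterly_cumulations(records):
    """Cumulate sessions per (profession, organisation) for each quarter."""
    totals = defaultdict(lambda: defaultdict(int))
    for quarter, profession, organisation, sessions in records:
        totals[quarter][(profession, organisation)] += sessions
    return totals

def met_previous_quarter(totals, prev_q, curr_q):
    """Flag which measurements reached their previous-quarter figure."""
    prev, curr = totals[prev_q], totals[curr_q]
    return {key: curr.get(key, 0) >= value for key, value in prev.items()}

totals = quarterly_cumulations(records)
print(met_previous_quarter(totals, "2003-Q1", "2003-Q2"))
# {('nursing', 'Trust A'): False, ('medicine', 'Trust A'): True}
```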

Qualitative. The methods used were:
an online user survey;
questionnaires to a random group of staff, both users and non-users;
a series of structured interviews with frequent users, infrequent users, first-time users and non-users.
Interviewees were drawn from as wide a range of professions in the various sectors as was practicable.
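Purely as a sketch (the abstract does not describe the sampling procedure actually used), interviewees spanning usage bands, professions and sectors might be drawn as follows; the staff list, field names and sample size are invented for illustration.

```python
import random

# Hypothetical staff entries: (name, profession, sector, usage_band).
# All values are invented for illustration.
staff = [
    ("A", "nursing",     "acute",         "frequent"),
    ("B", "social work", "social care",   "non-user"),
    ("C", "medicine",    "primary care",  "infrequent"),
    ("D", "pharmacy",    "mental health", "first-time"),
    ("E", "nursing",     "acute",         "non-user"),
    # ...many more entries in practice
]

def sample_interviewees(staff, per_band=2, seed=0):
    """Draw up to per_band people from each usage band, preferring a spread
    of (profession, sector) combinations within each band."""
    rng = random.Random(seed)
    by_band = {}
    for person in staff:
        by_band.setdefault(person[3], []).append(person)
    sample = []
    for band in sorted(by_band):
        people = by_band[band][:]
        rng.shuffle(people)
        seen, picks = set(), []
        for person in people:          # one per profession/sector pair first
            key = (person[1], person[2])
            if key not in seen:
                seen.add(key)
                picks.append(person)
        for person in people:          # then top up to per_band if needed
            if len(picks) >= per_band:
                break
            if person not in picks:
                picks.append(person)
        sample.extend(picks[:per_band])
    return sample

print(sample_interviewees(staff))
```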
Preliminary results indicate that:
the service has been effective in filling an important information need;
the principal cause of dissatisfaction is unrealistic expectations, particularly of full-text journal coverage;
publicity has been effective in the acute sector, but less so in mental health and primary care.

In conclusion, the robustness of the various evaluation methods used will be examined critically.