Optimising cohort data in Europe
The difficulty, however, resides in maximising and leveraging such existing resources in an appropriate way; to have value, these resources must be versatile. At present, however, existing standards present significant barriers to entry for researchers, which hinders their identification and use. There is little empirical data about the standards in use, so it is difficult to determine which standards researchers actually apply in practice. Moreover, researchers struggle to find these standards, as their use is both widespread and fragmented. It is therefore crucial to identify the kinds of capabilities needed to maximise the use of existing metadata standards and lower these entry barriers. Combinative capabilities (which combine existing knowledge to produce results and actions applicable across contexts) play an important role here. One effective combinative capability is the generation of a catalogue that collects existing standards as input. This would enhance standardisation and harmonisation, since researchers would no longer have to build new modules from scratch but could reuse existing ones instead.

A major hurdle in this respect is that the resources required for common datasets differ from those needed for metadata standards. At the present stage, common and minimal datasets are not versatile but rather specialised resources, which means that they can be applied only to a limited range of contexts. Common and minimal datasets are generally very small and are approved by the research institutions conducting the research. Minimal datasets also depend on the particular field of research: epidemiologists, for instance, are likely to require different variables (and thus different fixed data items) than pharmacologists. Common datasets are possible only for certain kinds of data, for instance adverse events. It is therefore crucial that specialised resources be coordinated effectively, so that robust and effective common datasets can be generated.

For this endeavour, we need both integrative capabilities (where coordinated efforts of individual specialists with different types of knowledge are required) and knowledge integration mechanisms. A key integrative capability here is knowledge aggregation (i.e. the efficiency of knowledge transfer depends on how knowledge is aggregated). This capability makes it possible to structure data despite inter-study differences, so that the data are organised in a similar manner and therefore become more comparable. For each adverse event, for instance, it is possible to choose a common data element that records the start date, the end date and the on-going status of the event; this element then becomes the main code for structuring all adverse event data (a minimal sketch of such an element is given at the end of this section). The integrative capability in this context is thus to obtain fixed data items and apply them to structure the data for further comparison. However, such activities can only be carried out by the research community and therefore require knowledge integration mechanisms, because a single researcher cannot fully know what a minimal dataset should be. In order to identify which knowledge integration
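To make the idea of such a common data element more concrete, the following minimal Python sketch shows how a shared element carrying a start date, an end date and an on-going status could be used to structure adverse event records from different studies. The sketch is an illustrative assumption, not an existing standard; all names in it (AdverseEventElement, ae_name, onset, resolved) are hypothetical.

# Hypothetical sketch (not an existing standard): a common data element
# for adverse events, with a start date, an end date and an on-going
# status that together structure adverse event records.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class AdverseEventElement:
    """Illustrative common data element for one adverse event."""
    event_term: str                  # free-text or coded event description
    start_date: date                 # date the event began
    end_date: Optional[date] = None  # date the event resolved, if it has
    ongoing: bool = False            # True while the event is unresolved


def harmonise_record(raw: dict) -> AdverseEventElement:
    """Map one study-specific record onto the shared element.

    The field names 'ae_name', 'onset' and 'resolved' are assumed
    study-specific labels; each cohort would supply its own mapping.
    """
    end = raw.get("resolved")
    return AdverseEventElement(
        event_term=raw["ae_name"],
        start_date=raw["onset"],
        end_date=end,
        ongoing=end is None,
    )


# Two studies with different internal layouts become directly comparable
# once expressed through the same element.
study_a = {"ae_name": "headache", "onset": date(2020, 3, 1), "resolved": date(2020, 3, 5)}
study_b = {"ae_name": "nausea", "onset": date(2021, 7, 12), "resolved": None}
print(harmonise_record(study_a))
print(harmonise_record(study_b))

Once each study maps its own record layout onto the same element, adverse event data from different cohorts are organised in the same way and can be compared directly, which is the aggregation effect described above.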