Autonomy is not global but local: participants may well be autonomous in particular sectors of activity but not in others (Richterich, 2018; McLennan et al., 2019). For instance, individuals with disabilities experience restrictions on their autonomy in certain domains (e.g. accommodation, mobility) but not in others (e.g. social media use) (McRae et al., 2020). It is thus crucial to design consent forms that distinguish between different spheres of autonomous activity. At the same time, it is generally not possible to request specific consent for purposes that are not yet foreseen (Bialke et al., 2018; Conboy, 2020). A common solution is to limit initial consent to foreseeable purposes and to seek renewed consent when new research purposes (or new related research projects altogether) arise (MacMahon and Denaxas, 2019). However, the local and fragmented nature of autonomy makes it very difficult to determine which type and what amount of information is truly needed. There is no clearly established threshold of clarity and information in cohort research because study purposes evolve along a continuum rather than in fixed modules. In practice, this means that each biobank will require a comparable independent ethics review board, regardless of whether the biobank operates under legislation requiring specific, broad or any other form of consent (Kiehntopf, 2019).

There are also concerns about the state of autonomy within big data research contexts. Big data research entails the automatic retrieval of personal data on an immense scale (Richterich, 2018). As a result, big data research often undermines the "traditional" modalities of informed consent. First, data collection from wearable devices means that participants cannot be fully informed about the extent to which their data are collected (Dobrick et al., 2018). In this sense, researchers have access to a vast amount of personal data even if research participants did not give explicit consent for it (Hulsen et al., 2019). Second, big data may also involve consent biases. While individuals are able to provide (personal) data themselves (e.g. by posting on internet platforms), they are not necessarily aware of the future uses of their data. Even awareness does not solve the issue, because many platforms do not allow users to fully control their privacy arrangements (McRae et al., 2020; Fröhlich et al., 2018).

The implications of big data results may also harm vulnerable communities and reduce their autonomy. Participants in big data studies (especially studies relying on digital/communication technologies) are willing to provide personal data and to comply with the conditions of the web service used for data collection (e.g. "Google Maps"). Hence, the results do not represent individuals who are more privacy-conscious, who lack the necessary temporal and material resources to participate, or who experience less peer pressure to use certain platforms (Aiello et al., 2020). Applying results from communication-device data indiscriminately to the population as a whole not only excludes such individuals from research, but also manufactures consent by framing it as representative, socially acceptable and taken for granted (Menychtas et al., 2020; Richterich, 2018).