Data Collection Overview
Data is currently being collected from a growing number of clinical trial registries and data object repositories (collectively known as 'data sources'), in order to feed the data into a metadata repository (MDR). At the moment these data collection exercises are 'one-off' - they bring down snapshots of data, which are then processed and exported to the core mdr database.
This is clearly not sustainable in the longer term. What is needed are mechanisms that (usually after an initial large-scale download) can identify new or changed data in the data sources, and then process and export only that data, growing the MDR on an incremental basis. This page therefore proposes a strategy for capturing the data from the data sources on a continuous basis.
The Overall Strategy
The overall process, as envisaged for the long term operation of the MDR, is summarised below and depicted in Figure 1.
for each data source, on a periodic basis (e.g. weekly or monthly)...
1. Identifying the data of interest
In many cases only a subset of the data available from a source will be relevant to the MDR. Ideally, it should be possible to identify data that is both subject relevant, i.e. relates to clinical research studies, and that has been revised or added since the last import from that particular source, and to import only that data. In practice it might only be possible to identify either subject relevant material or recently revised / added material, with the final triage of the data taking place only after it has been imported into the system. In some cases all the available data might have to be imported each time, with the selection of the relevant data then taking place entirely within the MDR's systems.
2. Data import
Import could be by download, API calls, web scraping – whatever is most effective. Each import should be logged, and would result in what is called here 'session data', i.e. the data collected during that import process from that source. There would need to be matching session data (schema = sd) tables in any database used for storing and processing the data, which would hold only the data from a single session - i.e. they would be truncated and refilled during each import exercise. Data added to the sd tables undergoes coding where appropriate, using the category systems within the MDR, to make comparison with the data already collected easier and quicker. If not already done, data should also be filtered to subject relevant material and, if possible, restricted to new or recently revised records.
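The 'truncate and refill' pattern for session data can be sketched as follows, using an in-memory SQLite database to stand in for a source database. All table, column and category names here are illustrative assumptions, not the actual MDR schema:

```python
import sqlite3

# Hypothetical sketch: refill an sd table during one import session.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sd_studies (
    sd_id TEXT, title TEXT, study_type_id INTEGER)""")

# Simple coding lookup, standing in for the MDR category systems.
STUDY_TYPES = {"Interventional": 11, "Observational": 12}

def refill_session_table(conn, fetched_records):
    """Truncate and refill the sd table for this import session."""
    conn.execute("DELETE FROM sd_studies")          # truncate
    for rec in fetched_records:
        coded = STUDY_TYPES.get(rec["study_type"])  # code the free text
        conn.execute(
            "INSERT INTO sd_studies VALUES (?, ?, ?)",
            (rec["id"], rec["title"], coded))
    conn.commit()

refill_session_table(conn, [
    {"id": "NCT00000001", "title": "Trial A",
     "study_type": "Interventional"},
])
```

Because the table is emptied at the start of each session, the sd tables only ever hold the data from the most recent import.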
3. Comparison of the most recent data import with the data already collected
The 'data already collected' are stored in their own tables, which are called accumulated data (schema = ad) tables. The ad tables need to be structured in a way that is close enough to the structure of the sd 'session' tables for comparison to be easy, but near enough to the ECRIN metadata scheme for later steps to be straightforward. Each ad table requires an associated procedure that allows the 'source' sd table, or tables, to be compared across the relevant data points in the ad table. If not done earlier in the process, this is the final stage at which new and / or revised data is identified.
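One possible shape for the per-table comparison procedure is to hash only the data points that matter for change detection, so that volatile fields (such as an export date stamp) are ignored. A sketch, in which the field names are assumptions:

```python
import hashlib

# Fields used for change detection; an assumption, and source specific.
COMPARED_FIELDS = ["title", "status", "enrolment"]

def record_hash(record):
    """Hash only the fields used for comparison."""
    joined = "|".join(str(record.get(f, "")) for f in COMPARED_FIELDS)
    return hashlib.md5(joined.encode()).hexdigest()

def classify(sd_record, ad_index):
    """Return 'new', 'changed' or 'unchanged' for one sd record.
    ad_index maps sd_id -> hash of the current ad version."""
    existing = ad_index.get(sd_record["sd_id"])
    if existing is None:
        return "new"
    return "unchanged" if existing == record_hash(sd_record) else "changed"
```

Storing a hash of the compared fields alongside each current ad record would make the comparison a single lookup per sd record.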
4. Transfer of new and edited data
The new or revised data points in the sd data are added to the ad tables. This means that the ad tables contain all versions of any data that has changed - which therefore calls for indicators on the data to make it clear which is the current version of any data item. The ad tables are intended to contain a permanent and slowly growing collection of the data obtained from a particular source.
5. Final coding / restructuring of the new data to match the ECRIN metadata scheme
Data added to the ad tables would undergo any additional processing and / or coding required to bring it into full compliance with the ECRIN metadata schema, or at least to a state that is easily mappable to that schema. This is necessary for the next stage of the process.
Note that there will be no single consistent route through these processes, because each data source will demand a different strategy. The approach taken will depend on the nature of the source data, the API facilities available or the web scraping that is possible, any additional data collection, the identifiers and links present in the data, the coding required, and so on.
and then working with all the data...
6. Splitting and transfer of new data
The next stage involves aggregating data from different sources.
Registry data, by definition, contains study related data, but it also contains at least one data object record, relating to the registry entry itself, and often contains additional references to other data objects. Data repository data will tend to have only data object data, but may include a mix of such objects, and will also normally contain additional information about which studies are linked to the data objects.
The data from different data sources needs to be combined and compared, and it is simpler to use three different aggregating schemas (all within the main mdr database) - one for study data, one for data objects, and one for the links between them. The most recent data from each source is imported into the relevant aggregating schema, to be added to the existing data there.
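The splitting step can be sketched as follows, with one registry record divided into study data, data object data, and study-object links, each destined for its own aggregating schema. All field names here are hypothetical:

```python
# Illustrative sketch of 'splitting' a registry record; field names
# are assumptions, not the actual MDR structures.
def split_registry_record(rec, source_id):
    study = {"sd_id": rec["sd_id"], "title": rec["title"],
             "source_id": source_id}
    # The registry entry itself always yields one data object.
    objects = [{"sd_id": rec["sd_id"], "object_type": "registry entry",
                "source_id": source_id}]
    for ref in rec.get("linked_objects", []):
        objects.append({"sd_id": rec["sd_id"], "object_type": ref,
                        "source_id": source_id})
    # One link row per object, tying it back to its study.
    links = [{"study_sd_id": rec["sd_id"], "object_type": o["object_type"]}
             for o in objects]
    return study, objects, links
```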
7. Resolution of study and data object duplication
References to the same study or the same data object (but originating in different data sources) then need to be resolved. Study and data object attribute data may also be updated and / or expanded as new data points become available from different sources. This will require a complex set of comparison mechanisms and procedures to be developed, but only once this is done can the entities receive their final identifiers and the links between studies and data objects, studies and studies, and data objects and data objects be finalised.
8. Transfer of the new data from the aggregate databases to the metadata repository
The final step is to transfer the linked and de-duplicated data from the three aggregate databases to the mdr core database itself.
Once there, the data can be queried, either directly or through the export of JSON files to the Elasticsearch system.
Documentation support
Each data source will require its own unique set of processes to extract the data and pass it on to the aggregation process, and these will therefore need to be documented.
One approach to documentation would be to use the points listed above to structure a document list. Together with introductory information, this creates a list that covers:
- The nature of the data source, including its size and scope.
- Any legal or other constraints on the use / re-use of the data, and the arrangements put into place by ECRIN to ensure legal requirements have been met.
- A description of how relevant data will be identified and selected (if it can be) in the source data system.
- The structure of the source data, including the identification of the data points that will be extracted, either because they will form part of the final ECRIN dataset or because they can help in its processing and tracking (or in some cases because they may be potentially useful in the future, even if not required now).
- The processes by which the raw data (e.g. the retrieved XML, JSON or files) is filtered / extracted / transformed into the session data, including the coding that is applied. This should include or reference a detailed description of the session data tables.
- The comparison mechanisms that take place to turn the sd table data into ad table data, providing (or referencing) a detailed description of the ad tables.
- How the ad data, once transferred into the aggregate databases, is processed and compared to identify possible duplicates, integrate data points, and establish final identifiers.
- (Periodically), the numbers of records imported and transferred into the core MDR system, and the nature of those records (e.g. data object types).
In many cases some of the documents listed above could be combined.
Databases and Data Schemas Required
In Postgres, data can be held within multiple databases, each of which can be subdivided into different data schemas, or groups of related tables.
The databases required will be:
- A separate database for each data source, named after that source.
- A 'context' database, that acts as a central store of contextual data (e.g. organisations, countries) and the look up tables used within the system.
- The mdr database itself, which aggregates all the data into a single set of tables.
In more detail, the schemas required are:
- In each data source database
- A schema (sd) that holds the data as imported each time.
- A schema (ad) for the accumulated data from that source.
- In some cases it may be useful to have a third, supplementary schema (pp), to hold source specific support tables and procedures.
- In the context database
- A schema (lup) that holds a central store of lookup tables, i.e. controlled terminology.
- A schema (ctx) that holds contextual data relating to organisations and people, countries and regions, and languages.
- In the mdr database
- A schema (st) that holds aggregate study data points, i.e. from all data sources (mostly trial registries), and links them together so that study duplications are resolved.
- A schema (ob) that holds aggregate data object data points, i.e. from all sources, and links them together so that data object duplications are resolved.
- A schema (nk) that holds the linkage data between studies and data objects, i.e. from all sources, with duplicate links resolved.
- The metadata repository itself, (schema = core), the final end point of the import processes and the data flowing through them. It brings together the data from the three 'aggregating' schemas. Data processing within this schema, beyond simple import, should be minimal.
- A monitor / logging schema (mn), to hold a list of data sources and a log record of the imports carried out. This assumes a central scheduling system that runs the import routines as required and which logs the reported results to this database.
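A minimal sketch of the monitor / logging idea, again using an in-memory SQLite database; the table and column names are assumptions:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical mn schema: a master list of sources plus an import log.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mn_sources (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE mn_import_logs (
    source_id INTEGER, started_at TEXT,
    records_checked INTEGER, records_added INTEGER)""")
conn.execute("INSERT INTO mn_sources VALUES (1, 'registry_a')")

def log_import(conn, source_id, checked, added):
    """Record the reported outcome of one import session for one source."""
    conn.execute("INSERT INTO mn_import_logs VALUES (?, ?, ?, ?)",
                 (source_id, datetime.now(timezone.utc).isoformat(),
                  checked, added))
    conn.commit()

log_import(conn, 1, 3500, 120)
```

The central scheduler would call something like `log_import` at the end of each session, giving a per-source history of imports.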
Postgres allows direct links between schemas in the same database - for example queries can be written that reference tables in different schemas. Having 5 schemas within the main mdr database - mn, st, ob, nk and core - simplifies the interactions between these data stores. Postgres does not allow, however, simple references between databases. Instead schemas from an external database can be imported as 'foreign tables' (not copies but references to the source tables and their data) into pre-existing schemas in the importing database. By convention these foreign table schemas are distinct from the 'native' schemas and data, and usually take the same name as the source schema.
In the data system required for the MDR, each of the source specific databases and the mdr database requires the lup (look up tables) and ctx (context data) schemas available as foreign tables. In that way it is necessary to store only one copy of this important contextual data.
In addition the central mdr database, as well as its 5 native schemas, needs the ad schemas of each of the source databases to be available as foreign table schemas, so that the data can be easily transferred.
Figure 2 summarises the schemas and their interactions.
In a fully developed system, which might have many source systems (possibly up to 100), it may be necessary to 'stage' the aggregation process and include intermediate database layers between the source and the main databases, to reduce the number of foreign table schemas in any one database. For instance, different intermediate databases could be established for trial registries, or for institutional or bibliographic systems.
Identifiers and audit mechanisms required
The source data identifier, sd_id
For each source registry the data is imported into the ‘session data’ (sd) tables. In most cases there will be a ‘source data identifier’ present as part of the data, for example for registry data it will normally be the registry Id, for PubMed data it will be the PMID. This field, imported into the tables as sd_id, links records together across the multiple session data tables. (If there is no source data identifier, as may be the case with some data object sources, then such an identifier will need to be manufactured, by a consistent method, on each import). Because these tables are truncated and refilled on each import, there is no need at this stage to apply a system generated identifier to the records.
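Where an identifier has to be manufactured, one possible 'consistent method' is to hash a set of stable fields, so that the same source record yields the same sd_id on every import. A sketch, in which the choice of fields and the id format are assumptions and would in practice be source specific:

```python
import hashlib

def manufacture_sd_id(record, stable_fields=("title", "publication_date")):
    """Derive a repeatable sd_id from fields assumed stable at the source.
    The 'X-' prefix marks the id as manufactured (an illustrative choice)."""
    basis = "|".join(str(record.get(f, "")) for f in stable_fields)
    return "X-" + hashlib.sha1(basis.encode()).hexdigest()[:12]
```

The same record then maps to the same sd_id across sessions, which is what allows later imports to be compared with the accumulated data.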
The accumulated data identifier, ad_id
The session data is compared with the accumulated data (ad) tables, in the same source database.
Data with a completely new source data identifier can be identified as completely new and transferred to the ad tables.
Data that has the same source data identifier as existing ad records is compared to see if any edits have taken place, on a table by table basis. If so the most recent record, i.e. the one from the sd tables, is added to the ad tables. The ad tables therefore contain every version of the data as imported. (N.B. It may be necessary to exclude some sd fields from this exercise, if they are altered automatically on each import – e.g. they may simply record the date of export from the source system).
As ad tables may contain multiple instances of data for the same source study, the sd_id can no longer be used to uniquely identify records. Instead a new identifier, ad_id, must be generated for each new record added to the ad 'core' entity tables (ad.studies and ad.data_objects), as a simple integer accession number. It will also need to be applied to the other 'attribute' tables, when required, to link the records (the sd_id is also retained). This data is a permanent part of the system, so it makes sense to apply a system generated accession number.
Accumulated data audit fields
As well as the ad_id field, ad tables also need to include the following audit and support fields:
- added_on: Datetime first added – when this record was first added to the ad table. No records are edited in the ad tables - both completely new records and new versions of existing records are simply added.
- last_confirmed_on: Datetime this data was last confirmed. For newly added or edited data this will be the same as added_on. For data that is unchanged it will be the datetime at which the comparison process reported no change.
- is_latest_version: A boolean set to true for a new or edited record added to the ad table (in the latter case any existing record with the same sd_id will need to have this value set to false). This field is used to identify the subset of the ad records that must be used when new sd data is compared with it.
- record_status_id: An integer that indicates whether this record is unchanged (=0), completely new (=1) or a new version of an existing record (=2). It is therefore used to identify which records need to be transferred on to the rest of the data transformation process (status > 0).
- source_id: An integer indicating the source registry or data object source, taken from the master list in the monitor mn schema. It could be added during the transfer process to the aggregating schemas, but it is probably easier to make it a field within all ad tables.
The ad tables in each registry / source database should therefore end up with a comprehensive record of all the unique data gathered from a particular source, including all versions of that data, and the datetimes at which each was gathered and last confirmed.
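The audit fields described above can be pulled together in a small sketch of the compare-and-transfer step, here applying one sd record to an in-memory 'ad table' (a list of dicts standing in for rows). Status codes follow the text (0 = unchanged, 1 = new, 2 = new version); the compared fields are an assumption:

```python
from datetime import datetime, timezone

def apply_sd_record(ad_rows, sd_record, next_ad_id,
                    compare_fields=("title",)):
    """Apply one sd record to the accumulated data, maintaining the
    audit fields. Returns the next unused ad_id."""
    now = datetime.now(timezone.utc).isoformat()
    current = next((r for r in ad_rows
                    if r["sd_id"] == sd_record["sd_id"]
                    and r["is_latest_version"]), None)
    if current and all(current[f] == sd_record[f] for f in compare_fields):
        current["last_confirmed_on"] = now      # unchanged: confirm only
        current["record_status_id"] = 0
        return next_ad_id
    if current:
        current["is_latest_version"] = False    # supersede the old version
    ad_rows.append({**sd_record, "ad_id": next_ad_id, "added_on": now,
                    "last_confirmed_on": now, "is_latest_version": True,
                    "record_status_id": 1 if current is None else 2})
    return next_ad_id + 1
```

Note that nothing is ever deleted: an edit simply adds a new row and flips is_latest_version on the old one, so the ad table retains every version as imported.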
Aggregate schema identifiers
For studies: In the aggregate study database the study records from different registries will need to be merged so that any study only has a single entry. To do this, in addition to tables that represent the mdr study tables, the database requires a table that links the ad_ids, the source ids, and the sd_id fields imported from different sources to a single id – again created as an integer accession number (this becomes the final id in the mdr system). The ad_ids cannot be used because they are not necessarily unique across source schemas. The new id, once generated, also needs to be applied to the records in attribute and linkage tables.
A record transferred to the aggregate study schema that represents an edit will therefore require a look up in the ad_id – id linkage table, to discover to which study record the update should apply.
A record transferred to the aggregate study schema that represents a new record will need to be checked to see if it really is a new record, or just a new registry entry for a study that already exists in the system. The algorithm for this process needs to be developed, but it will need to start by considering the study_identifiers table, to see if the sd_id matches an identifier that has not itself been used as an sd_id. If that is the case the record has been matched with an ‘other identifier’ listed elsewhere. A secondary check, e.g. using a processed version of the study title, could also be used.
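The first step of that matching algorithm can be sketched as follows, with plain dictionaries standing in for the id linkage and study_identifiers tables; the structures and names are assumptions:

```python
def resolve_study(sd_id, other_identifiers, id_links, known_identifiers,
                  next_id):
    """Resolve one incoming study record to a final mdr study id.
    id_links: source sd_id -> final study id (the linkage table).
    known_identifiers: secondary identifier -> final study id
    (standing in for the study_identifiers table)."""
    if sd_id in id_links:              # already seen from this source
        return id_links[sd_id], next_id
    if sd_id in known_identifiers:     # matches an 'other identifier'
        final = known_identifiers[sd_id]
        id_links[sd_id] = final        # another registration, same study
        return final, next_id
    id_links[sd_id] = next_id          # genuinely new study
    for ident in other_identifiers:    # remember its secondary identifiers
        known_identifiers.setdefault(ident, next_id)
    return next_id, next_id + 1
```

In this sketch a study first seen as one registry id, and later arriving under a secondary identifier from another registry, resolves to the same final id rather than creating a duplicate.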
For data objects: An analogous process needs to take place, ensuring that objects of the same type, linked to the same study, are identified and checked to see if they represent a new, distinct object or the same object but described in a different data source (as far as that can be checked given the often limited information available). The algorithms needed for this process will need to be developed. For some types of data objects, such as journal articles, the presence of a doi or similar PID may make comparisons easy. For other types, e.g. protocols and SAPs, there may need to be assumptions made about the numbers of such objects expected (though versioning if present may make this more difficult). Again a central lookup table will be required to allow the system to generate (for new records) or use (for edited records) the definitive id that has been assigned to any data object.
Aggregate schema audit data
The records in the aggregate databases also need some audit fields, though fewer than the ad tables:
- date_of_data: Datetime added or edited; an indication of the date this record was included in the system in this form.
- record_status_id: An integer that indicates if this record is unchanged (=0), newly added (=1) or newly edited (=2).
- source_id: An integer indicating the source registry or data object source, taken from the master list in the mn schema.
Once all the records are linked to a new study id in the aggregate database, the study – data object link records, in the aggregate data object database, need to be modified to reflect that new id.
Core mdr audit data
As a final step the new or edited data is transferred to the mdr system itself, from the aggregate databases. In the mdr the necessary audit fields are:
- date_of_data: The date this record was included in the system in this form. This is necessary, for example, to identify the records that require new or replacement json files to be generated for them.
- source_id: An integer indicating the source registry or data object source.
The audit fields in the mdr system, plus the ability to link back to the aggregating and source databases, means that it is possible to check (and if required display) the provenance of any data in the system.
Figure 3 illustrates the overlapping 'scope' of each id used within the system, that allows this provenance to be tracked. Initially source data ids are used within sd_id fields, then the source specific 'accumulated data' ids are used in ad_id fields, before the definitive mdr ids are applied in the aggregating schemas and used in the core schema.
Initial strategy - Early development
The strategy above describes a relatively mature system, with data pipelines in place and periodic imports taking place. The MDR has not yet reached this state, as we are still in the process of establishing the pipelines and repeated, periodic imports have not yet been set up.
Instead, at the moment, imports are 'one-off' snapshots, with all ad tables being created anew each time, along with the sd tables. This allows us to focus on the processes required and their documentation, and still generate a central MDR, while leaving the more complex comparison mechanisms for later development.