Data Collection Overview

Needs to be updated...

The Overall Strategy

Data is collected from a growing number of clinical trial registries and data object repositories (collectively known as 'data sources'), transformed into ECRIN schema metadata, and stored in a central database so that it can be accessed by the web portal or APIs. Four distinct processes are involved in data collection and extraction; they apply to all data sources and are shown in figure 1.

Data Download

All data used by the MDR is first downloaded onto an ECRIN managed server, and stored as an XML file. The data may start as an XML file at the source (as for ClinicalTrials.gov or PubMed), in which case downloading the file is relatively straightforward, using API calls to identify the files required. The data may be in a downloadable CSV file (e.g. WHO ICTRP data), which is then processed to generate an XML file per record (row) in the file. The data may need to be scraped from one or more web pages, in which case an XML file is again constructed for each record.
Data files that are created rather than simply downloaded demand more processing, but that processing can also begin the work of cleaning and transforming the data from its 'raw' state into one that matches the ECRIN schema. The XML files generated are therefore relatively easy to harvest in the next stage of the process, compared with the 'native' XML files from, for example, PubMed.
Successive download operations result in steadily growing collections of source data, stored locally on the MDR database server. Each source has its own folder, or set of folders. For the WHO data the download process splits the records up and distributes the resulting files to different folders according to the source registry.
For small sources, it is often simpler to re-download the whole set of source data and replace the existing files. For large sources this takes too much time, and only a subset of records is downloaded each time - usually those revised or added since the last download. This does not prevent even large datasets being completely replaced at intervals - perhaps annually - to ensure synchronisation between the source and the MDR's version of it. Either way, at any one time, the MDR has the totality of the relevant material from each source available locally.
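As an illustration only (not the actual MDR code), the short Python sketch below shows the kind of routine used to turn a downloaded CSV file into one XML file per record. The column names, and the 'TrialID' identifier column in particular, are hypothetical placeholders; each source has its own layout.

  import csv
  import xml.etree.ElementTree as ET
  from pathlib import Path

  def csv_to_xml_files(csv_path: str, out_dir: str, id_column: str = "TrialID") -> int:
      """Split a downloaded CSV file into one XML file per record (row).
      Column names are hypothetical - each source has its own layout."""
      out = Path(out_dir)
      out.mkdir(parents=True, exist_ok=True)
      n = 0
      with open(csv_path, newline="", encoding="utf-8") as f:
          for row in csv.DictReader(f):
              root = ET.Element("study")
              for field, value in row.items():
                  # crude mapping of column headers to element names
                  child = ET.SubElement(root, field.strip().replace(" ", "_"))
                  child.text = (value or "").strip()
              # one XML file per source record, named after its source identifier
              ET.ElementTree(root).write(out / f"{row[id_column]}.xml",
                                         encoding="utf-8", xml_declaration=True)
              n += 1
      return n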

Data Harvesting

At intervals the local data can be processed and inserted into a database, a process that in this system is known as 'harvesting'. The harvested data is inserted into - effectively - a temporary holding database, so that it can be examined and additional processing carried out as required.
In the mdr system each source has a single database but uses at least two schemas, or distinct sets of tables. The schema that the harvested data is placed into is known as the session data schema (the name of each table in it is prefixed with 'sd.'). This differentiates it from the accumulated data schema tables (all prefixed with 'ad.'). As the name suggests the accumulated data tables hold the totality of data obtained from the source. They are usually created when the source is first accessed and then gradually grow and are revised over time. The session data tables, on the other hand, hold only the data from the last harvested session. These tables are dropped and recreated each time a harvest session takes place.
In most circumstances, harvests are set up to process only files that have been added or revised since the most recent import operation (described below). This means that only data that is potentially new to the system is harvested and placed in the sd tables. In most cases, therefore, the sd tables will hold a small fraction of the volume in the ad tables, but it will be the data of current interest, because it is data recently changed in or added to the source. (It is possible to do a '100% harvest', but this would be relatively rare in normal operations.)
The other important aspect of harvesting is that it completes the transformation of the data into the structure of the ECRIN metadata schema. The different databases will have different numbers of tables in their sd and ad schemas (some sources are more complex than others), but a table of a particular type will be the same in all the databases, i.e. contain the same fields, and those fields will conform to the ECRIN schema. For the XML files generated by the download process this second transformation stage is usually straightforward. For ClinicalTrials.gov and PubMed files, all the transformation has to be done during harvesting, which can therefore be relatively complex.
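A minimal sketch of a harvest pass is given below, in Python with psycopg2. It assumes a hypothetical sd.studies table with just three columns and a simplified parse_study_xml helper; the real harvesters handle many more tables and fields, but the pattern - drop and recreate the sd tables, then process only files added or revised since the last import - is the same.

  import datetime
  from pathlib import Path
  import xml.etree.ElementTree as ET
  import psycopg2

  def parse_study_xml(path: Path):
      """Hypothetical parser - real harvesters map many more fields."""
      root = ET.parse(path).getroot()
      def text(tag):
          node = root.find(tag)
          return node.text if node is not None else None
      return text("sd_id"), text("display_title"), text("brief_description")

  def harvest(conn_str: str, source_folder: str, last_import: datetime.datetime) -> None:
      """Drop and recreate the sd tables, then harvest only the XML files
      added or revised since the last import."""
      with psycopg2.connect(conn_str) as conn:        # commits on leaving the block
          with conn.cursor() as cur:
              cur.execute("DROP TABLE IF EXISTS sd.studies")
              cur.execute("""CREATE TABLE sd.studies (
                                 sd_id varchar PRIMARY KEY,
                                 display_title varchar,
                                 brief_description varchar)""")
              for xml_file in sorted(Path(source_folder).glob("*.xml")):
                  changed = datetime.datetime.fromtimestamp(xml_file.stat().st_mtime)
                  if changed <= last_import:
                      continue                        # unchanged since the last import - skip
                  cur.execute("INSERT INTO sd.studies VALUES (%s, %s, %s)",
                              parse_study_xml(xml_file))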

Figure 1: Data collection data flows


Data Import

The data import process brings the data into the accumulated 'ad' tables.

Data Aggregation

Workflow Principles

3.  Comparison of the most recent data import with the data already collected.
The 'data already collected' are stored in their own tables, known as accumulated data (schema = ad) tables. The ad tables need to be structured in a way that is close enough to the structure of the sd 'session' tables for comparison to be easy, but near enough to the ECRIN metadata schema for later steps to be straightforward. Each ad table requires an associated procedure that allows the 'source' sd table, or tables, to be compared with it across the relevant data points. If not done earlier in the process, this is the final stage at which new and / or revised data is identified.
4.  Transfer of new and edited data
The new or revised data points in the sd data are added to the ad tables. This means that the ad tables contain all versions of any data that has changed - which therefore calls for indicators on the data to make it clear which is the current version of any data item. The ad tables are intended to contain a permanent and slowly growing collection of the data obtained from a particular source.
5.  Final coding / restructuring of the new data to match the ECRIN metadata schema
Data added to the ad tables would undergo any additional processing and / or coding required to bring it into full compliance with the ECRIN metadata schema, or at least to a state that is easily mappable to that schema. This is necessary for the next stage of the process.

Note that there will be no single consistent route through these processes, because each data source will demand a different strategy. The approach taken will depend on the nature of the source data, the API facilities available or the web scraping that is possible, any additional data collection, the identifiers and links present in the data, the coding required, and so on.

and then working with all the data...

6.  Splitting and transfer of new data.
The next stage involves aggregating data from different sources.
Registry data, by definition, contains study related data, but it also contains data on at least one data object - the registry entry itself - and often contains additional references to other data objects. Data repository data will tend only to have data object data, but may include a mix of such objects, and will also normally contain additional information about which studies are linked to the data objects.
The data from different data sources needs to be combined and compared, and it is simpler to use three different aggregating schemas (all within the main mdr database) - one for study data, one for data objects, and one for the links between them. The most recent data from each source is imported into the relevant aggregating schema, to be added to the existing data there.
7.  Resolution of study and data object duplication
References to the same study or the same data object (but originating in different data sources) then need to be resolved. Study and data object attribute data may also be updated and / or expanded as new data points become available from different sources. This will require a complex set of comparison mechanisms and procedures to be developed, but only once this is done can the entities receive their final identifiers and the links between studies and data objects, studies and studies, and data objects and data objects be finalised.
8.  Transfer of the new data from the aggregate databases to the metadata repository
The final step is to transfer the linked and de-duplicated data from the three aggregate databases to the mdr core database itself.
Once there, it can be queried, either directly or through export of JSON files to the Elasticsearch system.

Documentation support

Each data source will require its own unique set of processes to extract the data and pass it on to the aggregation process, and these will therefore need to be documented.
One approach to documentation would be to use the points listed above to structure a document list. Together with introductory information, this creates a list that covers:

  • The nature of the data source, including its size and scope.
  • Any legal or other constraints on the use / re-use of the data, and the arrangements put into place by ECRIN to ensure legal requirements have been met.
  • A description of how relevant data will be identified and selected (if it can be) in the source data system.
  • The structure of the source data, including the identification of the data points that will be extracted, either because they will form part of the final ECRIN dataset or because they can help in its processing and tracking (or in some cases because they may be potentially useful in the future, even if not required now).
  • The processes by which the raw data (e.g. the retrieved XML, JSON or other files) is filtered / extracted / transformed into the session data, including the coding that is applied. This should include or reference a detailed description of the session data tables.
  • The comparison mechanisms that take place to turn the sd table data into ad table data, providing (or referencing) a detailed description of the ad tables.
  • How the ad data, once transferred into the aggregate databases, is processed and compared to identify possible duplicates, integrate data points, and establish final identifiers.
  • (Periodically), the numbers of records imported and transferred into the core MDR system, and the nature of those records (e.g. data object types).

In many cases some of the documents listed above could be combined.

Databases and Data Schemas Required

In Postgres, data can be held within multiple databases, each of which can be subdivided into different data schemas, or groups of related tables.

Figure 2: Data collection schemas

The databases required will be:

  • A separate database for each data source, named after that source.
  • A 'context' database, that acts as a central store of contextual data (e.g. organisations, countries) and the look up tables used within the system.
  • The mdr database itself, which aggregates all the data into a single set of tables.


In more detail, the schemas required are:

  • In each data source database
    • A schema (sd) that holds the data as imported each time.
    • A schema (ad) for the accumulated data from that source.
    • In some cases it may be useful to have a third, supplementary schema (pp), to hold source specific support tables and procedures.
  • In the context database
    • A schema (lup) that holds a central store of lookup tables, i.e. controlled terminology.
    • A schema (ctx) that holds contextual data relating to organisations and people, countries and regions, and languages.
  • In the mdr database
    • A schema (st) that holds aggregate study data points, i.e. from all data sources (mostly trial registries), and links them together so that study duplications are resolved.
    • A schema (ob) that holds aggregate data object data points, i.e. from all sources, and links them together so that data object duplications are resolved.
    • A schema (nk) that holds the linkage data, between studies and data objects, i.e. from all sources, and links them together so that duplications are resolved.
    • The metadata repository itself (schema = core), the final end point of the import processes and the data flowing through them. It brings together the data from the three 'aggregating' schemas. Data processing within this schema, beyond simple import, should be minimal.
    • A monitor / logging schema (mn), to hold a list of data sources and a log record of the imports carried out. This assumes a central scheduling system that runs the import routines as required and which logs the reported results to this database.
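As a minimal illustration, the schemas listed above could be created along the following lines (Python with psycopg2; database names and connection strings are placeholders, not the MDR's actual set-up scripts):

  import psycopg2

  SOURCE_SCHEMAS  = ["sd", "ad"]                  # plus an optional pp schema for some sources
  CONTEXT_SCHEMAS = ["lup", "ctx"]
  MDR_SCHEMAS     = ["st", "ob", "nk", "core", "mn"]

  def create_schemas(conn_str: str, schemas: list[str]) -> None:
      """Create the named schemas in the target database if they do not already exist."""
      with psycopg2.connect(conn_str) as conn:
          with conn.cursor() as cur:
              for name in schemas:
                  cur.execute(f"CREATE SCHEMA IF NOT EXISTS {name}")

  # e.g. (placeholder connection strings)
  # create_schemas("dbname=ctg user=mdr", SOURCE_SCHEMAS)
  # create_schemas("dbname=context user=mdr", CONTEXT_SCHEMAS)
  # create_schemas("dbname=mdr user=mdr", MDR_SCHEMAS)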


Postgres allows direct links between schemas in the same database - for example queries can be written that reference tables in different schemas. Having 5 schemas within the main mdr database - mn, st, ob, nk and core - simplifies the interactions between these data stores. Postgres does not allow, however, simple references between databases. Instead schemas from an external database can be imported as 'foreign tables' (not copies but references to the source tables and their data) into pre-existing schemas in the importing database. By convention these foreign table schemas are distinct from the 'native' schemas and data, and usually take the same name as the source schema.
In the data system required for the MDR, each of the source specific databases and the mdr database requires the lup (look up tables) and ctx (context data) schemas to be available as foreign tables. In that way it is necessary to store only one copy of this important contextual data.
In addition the central mdr database, as well as its 5 native schemas, needs the ad schemas of each of the source databases to be available as foreign table schemas, so that the data can be easily transferred.
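One standard way to achieve this in Postgres is the postgres_fdw extension, as in the sketch below, which makes the lup and ctx schemas of the context database available inside another database as foreign table schemas of the same names. The server name, credentials and connection details are placeholders, and the MDR's actual set-up may differ in detail.

  import psycopg2

  FDW_SETUP = """
      CREATE EXTENSION IF NOT EXISTS postgres_fdw;
      CREATE SERVER IF NOT EXISTS context_server
          FOREIGN DATA WRAPPER postgres_fdw
          OPTIONS (host 'localhost', dbname 'context');
      CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER
          SERVER context_server OPTIONS (user 'mdr', password 'placeholder');
      -- bring the look-up and context tables in as foreign tables, under local
      -- schemas that take the same names as the source schemas
      CREATE SCHEMA IF NOT EXISTS lup;
      CREATE SCHEMA IF NOT EXISTS ctx;
      IMPORT FOREIGN SCHEMA lup FROM SERVER context_server INTO lup;
      IMPORT FOREIGN SCHEMA ctx FROM SERVER context_server INTO ctx;
  """

  def import_context_schemas(conn_str: str) -> None:
      """Run the foreign-table set-up in the target (importing) database."""
      with psycopg2.connect(conn_str) as conn:
          with conn.cursor() as cur:
              cur.execute(FDW_SETUP)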
Figure 2 summarises the schemas and their interactions.
In a fully developed system, which might have many source systems (possibly up to 100), it may be necessary to 'stage' the aggregation process and include intermediate database layers between the source and the main databases, to reduce the number of foreign table schemas in any one database. For instance, different intermediate databases could be established for trial registries, or for institutional or bibliographic systems.

Identifiers and audit mechanisms required

The source data identifier, sd_id
For each source registry the data is imported into the ‘session data’ (sd) tables. In most cases there will be a ‘source data identifier’ present as part of the data: for registry data it will normally be the registry id, and for PubMed data it will be the PMID. This field, imported into the tables as sd_id, links records together across the multiple session data tables. (If there is no source data identifier, as may be the case with some data object sources, then such an identifier will need to be manufactured, by a consistent method, on each import.) Because these tables are truncated and refilled on each import, there is no need at this stage to apply a system generated identifier to the records.

The accumulated data identifier, ad_id
The session data is compared with the accumulated data (ad) tables, in the same source database.
Data with a completely new source data identifier can be identified as completely new and transferred to the ad tables.
Data that has the same source data identifier as existing ad records is compared to see if any edits have taken place, on a table by table basis. If so the most recent record, i.e. the one from the sd tables, is added to the ad tables. The ad tables therefore contain every version of the data as imported. (N.B. It may be necessary to exclude some sd fields from this exercise, if they are altered automatically on each import – e.g. they may simply record the date of export from the source system).

As ad tables may contain multiple instances of data for the same source study, the sd_id can no longer be used to uniquely identify records. Instead a new identifier, ad_id, must be generated for each new record added to the ad 'core' entity tables (ad.studies and ad.data_objects), as a simple integer accession number. It will also need to be applied to the other 'attribute' tables, when required, to link the records (the sd_id is also retained). This data is a permanent part of the system, so it makes sense to apply a system generated initial accession number.

Figure 3: Scopes of the data collection ids

Accumulated data audit fields
As well as the ad_id field, ad tables also need to include the following audit and support fields:

  • added_on: Datetime first added – when this record was first added to the ad table. No records are edited in the ad tables - both completely new and new versions of records are simply added.
  • last_confirmed_on: Datetime this data was last confirmed. For newly added or edited data this will be the same as added_on; for data that is unchanged it will be the datetime at which the comparison process reported no change.
  • is_latest_version: A boolean set to true for a new or edited record added to the ad table (in the latter case any existing record with the same sd_id will need to have this value set to false). This field is used to identify the subset of ad records against which new sd data must be compared.
  • record_status_id: An integer that indicates whether this record is an unchanged record (=0), a completely new record (=1) or a new version of an existing record (=2). It is therefore used to identify which records need to be transferred on to the rest of the data transformation process (status > 0).
  • source_id: An integer indicating the source registry or data object source, taken from the master list in the monitor mn schema. It could be added during the transfer process to the aggregating schemas, but it is probably easier to make it a field within all ad tables.

The ad tables in each registry / source database should therefore end up with a comprehensive record of all the unique data gathered from a particular source, including all versions of that data, and the datetimes at which it was gathered and last confirmed. A simplified sketch of this comparison and versioning step is given below.
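The sketch assumes a hypothetical studies table with a single comparable field (display_title); real imports work table by table across many fields. The ad_id is assumed here to be an identity / serial column assigned automatically on insert, and statuses are assumed to be reset at the start of each import so that record_status_id > 0 marks only that import's changes.

  import psycopg2

  def import_into_ad(conn_str: str) -> None:
      """Illustrative sd -> ad import for a single, simplified studies table."""
      with psycopg2.connect(conn_str) as conn:
          with conn.cursor() as cur:
              # 0. assumed convention: clear the status flags left by the previous import
              cur.execute("UPDATE ad.studies SET record_status_id = 0 WHERE record_status_id <> 0")
              # 1. unchanged records: same sd_id and same content - just confirm them
              cur.execute("""
                  UPDATE ad.studies a SET last_confirmed_on = now()
                  FROM sd.studies s
                  WHERE a.sd_id = s.sd_id AND a.is_latest_version
                    AND a.display_title IS NOT DISTINCT FROM s.display_title""")
              # 2. changed records: retire the current version, then add the new one (status 2)
              cur.execute("""
                  UPDATE ad.studies a SET is_latest_version = false
                  FROM sd.studies s
                  WHERE a.sd_id = s.sd_id AND a.is_latest_version
                    AND a.display_title IS DISTINCT FROM s.display_title""")
              cur.execute("""
                  INSERT INTO ad.studies (sd_id, display_title, added_on,
                                          last_confirmed_on, is_latest_version, record_status_id)
                  SELECT s.sd_id, s.display_title, now(), now(), true, 2
                  FROM sd.studies s
                  WHERE EXISTS (SELECT 1 FROM ad.studies a WHERE a.sd_id = s.sd_id)
                    AND NOT EXISTS (SELECT 1 FROM ad.studies a
                                    WHERE a.sd_id = s.sd_id AND a.is_latest_version)""")
              # 3. completely new records: sd_id not seen before (status 1)
              cur.execute("""
                  INSERT INTO ad.studies (sd_id, display_title, added_on,
                                          last_confirmed_on, is_latest_version, record_status_id)
                  SELECT s.sd_id, s.display_title, now(), now(), true, 1
                  FROM sd.studies s
                  WHERE NOT EXISTS (SELECT 1 FROM ad.studies a WHERE a.sd_id = s.sd_id)""")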

Aggregate schema identifiers
For studies: In the aggregate study database the study records from different registries will need to be merged so that any study only has a single entry. To do this, in addition to tables that represent the mdr study tables, the database requires a table that links the ad_ids, the source ids, and the sd_id fields imported from different sources to a single id – again created as an integer accession number (this becomes the final id in the mdr system). The ad_ids cannot be used because they are not necessarily unique across source schemas. The new id, once generated, also needs to be applied to the records in attribute and linkage tables.

A record transferred to the aggregate study schema that represents an edit will therefore require a look up in the ad_id – id linkage table, to discover to which study record the update should apply.
A record transferred to the aggregate study DB table that represents a new record will need to be checked to see if it really is a new record or just a new registry entry for a study that already exists in the system. The algorithm for this process needs to be developed but it will need to start by considering the study_identifiers table, to see if the sd_id matches an identifier that has not itself been used as a sd_id. If that is the case the record has been matched with an ‘other identifier’ listed elsewhere. A secondary check, e.g. using a processed version of the study title, could also be used.
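As a very rough sketch, the first part of that check might look like the following (Python with psycopg2); st.study_identifiers, its columns and the logic shown are simplified placeholders for the algorithm still to be developed.

  import psycopg2

  def find_existing_study_id(cur, sd_id: str, source_id: int):
      """One simplified reading of the identifier check described above: has the
      incoming registry id already been recorded as an identifier of a study
      contributed by a different source? Returns the existing mdr study id, or None.
      st.study_identifiers and its columns are placeholder names."""
      cur.execute("""
          SELECT si.study_id
          FROM   st.study_identifiers si
          WHERE  si.identifier_value = %s      -- incoming registry id found as an identifier...
            AND  si.source_id <> %s            -- ...listed by a different source
          LIMIT  1""", (sd_id, source_id))
      row = cur.fetchone()
      return row[0] if row else None

If no match is found, a secondary check - for example on a processed version of the study title - could be applied before a new integer accession id is generated.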

For data objects: An analogous process needs to take place, ensuring that objects of the same type, linked to the same study, are identified and checked to see if they represent a new, distinct object or the same object but described in a different data source (as far as that can be checked given the often limited information available). The algorithms needed for this process will need to be developed. For some types of data objects, such as journal articles, the presence of a doi or similar PID may make comparisons easy. For other types, e.g. protocols and SAPs, there may need to be assumptions made about the numbers of such objects expected (though versioning if present may make this more difficult). Again a central lookup table will be required to allow the system to generate (for new records) or use (for edited records) the definitive id that has been assigned to any data object.

Aggregate schema audit data
The records in the aggregate databases also need some audit fields, though fewer than the ad tables:

  • date_of_data: Datetime added or edited; an indication of the date this record was included in the system in this form.
  • record_status_id: An integer that indicates if this record is unchanged (=0), newly added (=1) or newly edited (=2).
  • source_id: An integer indicating the source registry or data object source, taken from the master list in the mn schema.

Once all the records are linked to a new study id in the aggregate database, the study – data object link records, in the aggregate data object database, need to be modified to reflect that new id.
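A hedged sketch of that modification step, assuming a hypothetical nk.study_ids lookup table (mapping source_id and sd_id to the definitive study id) and a hypothetical nk.study_object_links table:

  import psycopg2

  def repoint_links_to_study_ids(conn_str: str) -> None:
      """Rewrite study - data object link records to use the definitive study ids,
      once each (source_id, sd_id) pair has been mapped to one."""
      with psycopg2.connect(conn_str) as conn:
          with conn.cursor() as cur:
              cur.execute("""
                  UPDATE nk.study_object_links l
                  SET    study_id = ids.study_id
                  FROM   nk.study_ids ids
                  WHERE  l.source_id = ids.source_id
                    AND  l.sd_id     = ids.sd_id""")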

Core mdr audit data
As a final step the new or edited data is transferred to the mdr system itself, from the aggregate databases. In the mdr the necessary audit fields are:

  • date_of_data: The date this record was included in the system in this form. This is necessary, for example, to identify the records that require new or replacement JSON files to be generated for them (a sketch of this is given at the end of this section).
  • source_id: An integer indicating the source registry or data object source.

The audit fields in the mdr system, plus the ability to link back to the aggregating and source databases, mean that it is possible to check (and if required display) the provenance of any data in the system.
Figure 3 illustrates the overlapping 'scope' of each id used within the system, that allows this provenance to be tracked. Initially source data ids are used within sd_id fields, then the source specific 'accumulated data' ids are used in ad_id fields, before the definitive mdr ids are applied in the aggregating schemas and used in the core schema.
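As an illustration of how date_of_data supports this, the sketch below selects the core study records added or changed since a given date and writes one JSON file per record, ready for loading into the search index; core.studies and its columns are simplified placeholders.

  import json
  from pathlib import Path
  import psycopg2

  def export_changed_study_json(conn_str: str, since: str, out_dir: str) -> int:
      """Write one JSON file per study added or changed since the given date."""
      out = Path(out_dir)
      out.mkdir(parents=True, exist_ok=True)
      n = 0
      with psycopg2.connect(conn_str) as conn:
          with conn.cursor() as cur:
              cur.execute("""SELECT id, display_title, brief_description
                             FROM core.studies
                             WHERE date_of_data >= %s""", (since,))
              for study_id, title, desc in cur.fetchall():
                  with open(out / f"study_{study_id}.json", "w", encoding="utf-8") as f:
                      json.dump({"id": study_id, "display_title": title,
                                 "brief_description": desc}, f)
                  n += 1
      return n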

Initial strategy - Early development

The strategy above describes a relatively mature system, with data pipelines in place and periodic imports taking place. The MDR has not yet reached this state, as we are still in the process of establishing the pipelines and repeated, periodic imports have not yet been set up.
Instead, at the moment, imports are 'one-off' snapshots, with the ad tables being rebuilt from scratch each time, along with the sd tables. This allows us to focus on the processes required and their documentation, and still generate a central MDR, but leave the more complex comparison mechanisms for later development.