
Introduction

This exercise walks through some basic ETL (extract, transform, load) on patient data.

The Goal

The goal of this task is to take a pair of Excel files, load them into SQL, and then transform the data into a separate schema (called FHIR).

Input data

The input data is a pair of Excel files containing patient information. Each file is an extract from a data system, made a month apart. Each extract includes patients discharged during the preceding two months, so the two files overlap; the union of the two is what you ultimately want. The boilerplate code below shows the file paths, and you can open the files in Excel to browse them if you'd like. Feel free to ask questions about the file formatting. A quick way to compare the two extracts is sketched just below.
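A minimal sketch, assuming the first sheet and header row are what pandas reads by default; treat the output as a starting point for inspecting the real layout.

import pandas as pd
from carta_interview import Datasets, get_data_file

# Read both monthly extracts (pandas uses the first sheet and header row by default).
df1 = pd.read_excel(get_data_file(Datasets.PATIENT_EXTRACT1))
df2 = pd.read_excel(get_data_file(Datasets.PATIENT_EXTRACT2))

# Compare shapes and columns to see how the two extracts line up.
print(df1.shape, df2.shape)
print(set(df1.columns) ^ set(df2.columns))  # columns that appear in only one extract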

Exercise

Load into SQL

Begin by loading the data into a SQL database. We have Postgres running for you already, so use that.
Username: carta
Password: password

The loaded data should be the union of both extract files, with each row's update time set to the newest update date available for that row. One possible approach is sketched below.
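This sketch concatenates the two extracts in pandas, keeps the newest row per record, and writes the result to Postgres with SQLAlchemy. The column names patient_id and update_date are placeholders for whatever key and timestamp columns the extracts actually contain, and the host, port, and database in the connection URL are assumptions.

import pandas as pd
from sqlalchemy import create_engine
from carta_interview import Datasets, get_data_file

# Username/password are given above; host, port, and database name are assumptions.
engine = create_engine("postgresql+psycopg2://carta:password@localhost:5432/postgres")

df1 = pd.read_excel(get_data_file(Datasets.PATIENT_EXTRACT1))
df2 = pd.read_excel(get_data_file(Datasets.PATIENT_EXTRACT2))

# Union the two extracts, then keep the row with the newest update date for each record.
combined = pd.concat([df1, df2], ignore_index=True)
combined = (
    combined.sort_values("update_date")                # assumed timestamp column
            .drop_duplicates(subset="patient_id",      # assumed record key
                             keep="last")
)

# Write the deduplicated union into a Postgres table.
combined.to_sql("patients", engine, if_exists="replace", index=False)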

Transform into FHIR

After the data is loaded into SQL, you'll produce some FHIR resources from it. In particular, there are two resources you'll make:

  1. Patient
  2. Encounter

Both of those are defined on the FHIR website, here: http://fhir.org. We will run a validation on the resources once they're produced!
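As a rough sketch (not the official mapping), each row of the deduplicated table could be turned into minimal Patient and Encounter dicts like the ones below. The column names (patient_id, first_name, last_name, birth_date, admit_date, discharge_date) and the combined DataFrame from the previous sketch are assumptions; the FHIR specification is the authority on which fields are actually required.

# Minimal FHIR resources built as plain dicts; column names are assumptions.
def to_patient(row):
    return {
        "resourceType": "Patient",
        "id": str(row["patient_id"]),
        "name": [{"family": row["last_name"], "given": [row["first_name"]]}],
        "birthDate": row["birth_date"].strftime("%Y-%m-%d"),  # assumes pandas parsed a datetime
    }

def to_encounter(row):
    return {
        "resourceType": "Encounter",
        "status": "finished",
        "subject": {"reference": f"Patient/{row['patient_id']}"},
        "period": {
            "start": row["admit_date"].isoformat(),
            "end": row["discharge_date"].isoformat(),
        },
    }

patients = [to_patient(row) for _, row in combined.iterrows()]
encounters = [to_encounter(row) for _, row in combined.iterrows()]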

# Install the database drivers used below (one-time setup).
!pip install SQLAlchemy psycopg2 jovian --upgrade --quiet

import pandas as pd
import numpy as np
from carta_interview import Datasets, get_data_file

# Paths to the two monthly extracts, e.g. '/home/jovyan/data/patients-extract1.xlsx'
patient_extract1 = get_data_file(Datasets.PATIENT_EXTRACT1)
patient_extract2 = get_data_file(Datasets.PATIENT_EXTRACT2)

# Postgres connection credentials.
psql = {
    "username": "carta",
    "password": "password"
}
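With those credentials you can build a SQLAlchemy engine, for example as sketched here; the host, port, and database name are assumptions about the provided Postgres instance.

from sqlalchemy import create_engine, text

# Host, port, and database name are guesses; adjust to match the running instance.
engine = create_engine(
    "postgresql+psycopg2://{username}:{password}@localhost:5432/postgres".format(**psql)
)

# Quick connectivity check.
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())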