There are several reasons:
- In the future I want to build (maybe as project no. 2?) a model that, given an input paragraph, can generate question-answer pairs.
- In this project, students get a baseline model along with supporting code of a kind I don't often see in tutorials or courses; I think those practices will be useful for all my future projects.
- It provides a SQuAD 2.0 leaderboard which, a bit like Kaggle, can guide me and tell me whether I'm heading in the right direction.
- The baseline model is based on a model from 2017 (BiDAF), so there is a lot of room for improvement.
As for what to do after adapting BERT: there are several things I can do to improve the model, mentioned in the handout provided by the CS 224n staff:
- Pre-trained Contextual Embeddings (PCE), aka ELMo & BERT (see the sketch after this list)
- Non-PCE Model Types
- More models and papers
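To make the PCE option concrete, here is a minimal sketch of what "adapting BERT" can look like at the inference end. It uses the Hugging Face `transformers` library and a publicly available SQuAD-fine-tuned checkpoint (`distilbert-base-uncased-distilled-squad`), neither of which is part of the CS 224n starter code; the actual project would integrate the pre-trained encoder into the baseline rather than call a ready-made pipeline.

```python
# Minimal sketch, NOT the CS 224n baseline: an extractive QA pipeline
# built on a BERT-style pre-trained model already fine-tuned on SQuAD.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-uncased-distilled-squad",  # assumed checkpoint
)

context = (
    "SQuAD 2.0 combines the questions from SQuAD 1.1 with over 50,000 "
    "unanswerable questions written adversarially by crowdworkers."
)
result = qa(question="How many unanswerable questions were added?",
            context=context)
print(result)  # dict with 'answer', 'score', and 'start'/'end' offsets
```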
If I manage to get a good model in 4 weeks, during the last week I'll try to fine-tune the model on a different dataset (transfer learning), or I'll try to build an API around it and deploy it on a server (I'm not sure if building an API is doable in one week).
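If I do go the API route, the serving side doesn't have to be complicated. Below is a minimal sketch using Flask (my own choice for illustration; nothing in the handout prescribes a framework), exposing the QA model from the previous sketch behind a single POST endpoint. The endpoint name and port are arbitrary.

```python
# Minimal sketch of serving a QA model over HTTP with Flask.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
qa = pipeline("question-answering",
              model="distilbert-base-uncased-distilled-squad")

@app.route("/answer", methods=["POST"])
def answer():
    # Expects JSON like {"question": "...", "context": "..."}
    payload = request.get_json()
    result = qa(question=payload["question"], context=payload["context"])
    return jsonify(result)  # 'answer', 'score', 'start', 'end'

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

A request would then look like `curl -X POST localhost:8000/answer -H "Content-Type: application/json" -d '{"question": "...", "context": "..."}'`.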
!pip install jovian -q
import jovian

# Commit this notebook to Jovian so the work is versioned and shareable
jovian.commit()
[jovian] Saving notebook..