Quick review — Transfer learning
Transfer learning aims to transfer knowledge learned in one domain to another. It is motivated by the fact that building models from scratch carries a high cost in time, data, and compute.
Transfer learning uses the source domain as the knowledge base and transfers its knowledge to the new target domain. The assumption we relax here is that the data are independent and identically distributed (i.i.d.): source and target examples may come from different distributions.
There are a few types of transfer learning:
1- Instance-based transfer learning, where similar source examples are included as additional training data together with their weights. If A is the source domain and B is the target, the examples in A∩B are fed as features to B. A good example is transferring models from restaurants to hotels: a few attributes, such as dates or number of people, are shared, and these can be reused with the same weights from the source domain.
2- Mapping-based transfer learning, where examples from both the source and target domains are mapped into a new feature space and represented in a unified fashion.
3- Network-based transfer learning, where a neural network (or sub-network) is trained on the source domain and then used to initialize the target-domain network. This follows the pretrain-and-fine-tune paradigm.
4- Adversarial-based transfer learning, where training optimizes two losses at once: the usual task loss, plus a domain loss whose ability to discriminate between source and target we want to reduce. This is done by adding an adversarial layer while training a single model with multiple heads, one responding to the task and one to the domain.
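The instance-based idea (type 1) can be sketched as follows. This is a minimal illustration, not a standard library API: the helper name, the dictionary-based example format, and the uniform source weight are all assumptions made for the sketch.

```python
def weighted_augmented_set(source, target, shared_features, weight=1.0):
    """Augment target training data with source examples projected onto the
    features both domains share (hypothetical helper for illustration).

    source, target: lists of (feature_dict, label) pairs.
    shared_features: feature names present in both domains, e.g. "date" and
    "party_size" in the restaurant-to-hotel case from the notes.
    Returns (feature_dict, label, sample_weight) triples.
    """
    # Target examples keep full weight.
    augmented = [(feats, y, 1.0) for feats, y in target]
    for feats, y in source:
        # Only source examples that carry all shared features are transferable.
        if all(k in feats for k in shared_features):
            projected = {k: feats[k] for k in shared_features}  # restrict to A intersect B
            augmented.append((projected, y, weight))            # reuse the source weight
    return augmented

restaurants = [({"date": "2021-06-01", "party_size": 4, "cuisine": "thai"}, 1)]
hotels = [({"date": "2021-06-02", "party_size": 2}, 0)]
data = weighted_augmented_set(restaurants, hotels, ["date", "party_size"])
```

Any learner that accepts per-example weights (e.g. a `sample_weight` argument) can then train on `data` directly.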
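For the mapping-based idea (type 2), the simplest possible stand-in for a learned mapping is to standardize each domain separately so both land in a comparable space. Real mapping-based methods learn the shared representation; this per-domain z-scoring is only a toy sketch of the "unified space" idea.

```python
def zscore(rows):
    # Standardize one domain's feature columns to zero mean and unit
    # variance, so differently scaled domains become comparable.
    columns = list(zip(*rows))
    out_columns = []
    for col in columns:
        mean = sum(col) / len(col)
        var = sum((v - mean) ** 2 for v in col) / len(col)
        sd = var ** 0.5 or 1.0  # guard against constant columns
        out_columns.append([(v - mean) / sd for v in col])
    return [list(r) for r in zip(*out_columns)]

source = [[100.0, 2.0], [300.0, 4.0]]  # e.g. restaurant features on one scale
target = [[1.0, 20.0], [3.0, 40.0]]    # same concepts on a different scale
unified = zscore(source) + zscore(target)  # both domains in one space
```

After this step, a single model can be trained on `unified` without one domain's raw scale dominating the other's.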
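The pretrain-and-fine-tune paradigm (type 3) can be shown end to end with a one-parameter "network". The data, the learning rate, and the tiny linear model are all made up for the sketch; the point is only that weights trained on the source task are a better starting point for a related target task than zeros.

```python
def sgd_linear(data, w=0.0, b=0.0, lr=0.05, steps=200):
    # One-feature linear regression trained with plain gradient descent.
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2 * err * x / len(data)
            gb += 2 * err / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

def mse(data, w, b):
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

# Source task: y = 2x + 1; related target task: same slope, shifted intercept.
source = [(x, 2.0 * x + 1.0) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]
target = [(x, 2.0 * x + 3.0) for x in [-1.0, 0.0, 1.0]]

w, b = sgd_linear(source)                     # "pretrain" on the source domain
w2, b2 = sgd_linear(target, w, b, steps=50)   # fine-tune from pretrained weights
```

With only 50 fine-tuning steps on three target examples, the pretrained initialization lands near the target solution, which is the whole appeal of network-based transfer.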
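The adversarial idea (type 4) hinges on gradient reversal: the domain head learns to tell source from target, while the shared feature extractor receives that gradient with its sign flipped, so it learns domain-confusing features. Below is a toy scalar version under invented names and a 1-D model; it is a sketch of the mechanism, not a faithful DANN implementation.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def grl_step(w, v, u, batch, lr=0.1, lam=1.0):
    """One update of a 1-D feature extractor w with a task head v and a
    domain head u. The domain gradient is reversed (scaled by -lam) before
    it reaches w, while u itself still descends its own loss.
    batch: iterable of (x, y, d) with task target y and domain label d in {0, 1}."""
    gw = gv = gu = 0.0
    n = len(batch)
    for x, y, d in batch:
        z = w * x                        # shared feature
        # Task head: squared error.
        err = v * z - y
        gv += 2 * err * z / n
        gw_task = 2 * err * v * x / n    # dL_task/dw
        # Domain head: logistic loss on the domain label.
        p = sigmoid(u * z)
        gu += (p - d) * z / n
        gw_dom = (p - d) * u * x / n     # dL_dom/dw
        # Gradient reversal: w follows the task loss but CLIMBS the domain loss.
        gw += gw_task - lam * gw_dom
    return w - lr * gw, v - lr * gv, u - lr * gu
```

A single step shows the two heads pulling in opposite directions: the domain head sharpens its discriminator while the extractor moves to degrade it, which is what drives the features toward domain invariance.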