Related: multi-instance learning, multi-task learning
See Deep learning
Good for generalizing models; useful when you don't have much supervised data.
Learn embeddings in one task and transfer these to solve new tasks
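A minimal sketch of that idea (PyTorch assumed; the model and layer names are illustrative, not from the source): train embeddings on a data-rich task, then reuse them, frozen, in a model for a new task with little data.

```python
import torch
import torch.nn as nn

VOCAB, DIM = 10_000, 50

# Task A: a model that learns word embeddings as a side effect of its own objective.
class TaskAModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, 2)            # e.g. two output classes

    def forward(self, ids):
        return self.head(self.embed(ids).mean(dim=1))

task_a = TaskAModel()
# ... train task_a on plenty of task-A data ...

# Task B: transfer the learned embeddings and keep them fixed, since task B has little data.
class TaskBModel(nn.Module):
    def __init__(self, pretrained: nn.Embedding):
        super().__init__()
        self.embed = pretrained
        self.embed.weight.requires_grad = False  # freeze the transferred embeddings
        self.head = nn.Linear(DIM, 5)            # the new task's output space

    def forward(self, ids):
        return self.head(self.embed(ids).mean(dim=1))

task_b = TaskBModel(task_a.embed)
```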
Example: he explains how deep multi-instance learning works. Nice
Corruption: corrupt real data to create negative examples, so embeddings can be learned without labels (example here)
Example: Bi-lingual word embeddings
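A minimal sketch of the corruption trick for learning embeddings without labels (Collobert & Weston style; my choice of setup, since the source doesn't give one). A scorer is trained so that a real text window outscores the same window with its centre word replaced by a random one. PyTorch assumed; the data is a placeholder.

```python
import torch
import torch.nn as nn

VOCAB, DIM, WINDOW = 10_000, 50, 5

class Scorer(nn.Module):
    """Scores a window of word ids; trained so real text scores above corrupted text."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.score = nn.Linear(WINDOW * DIM, 1)

    def forward(self, ids):                      # ids: (batch, WINDOW)
        return self.score(self.embed(ids).flatten(1)).squeeze(-1)

model = Scorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

real = torch.randint(0, VOCAB, (32, WINDOW))     # windows drawn from real text (placeholder)
corrupt = real.clone()
corrupt[:, WINDOW // 2] = torch.randint(0, VOCAB, (32,))  # corrupt the centre word

# Margin ranking loss: the real window should outscore the corrupted one.
loss = torch.relu(1.0 - model(real) + model(corrupt)).mean()
loss.backward()
opt.step()
```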
When you can't corrupt the data: Siamese networks Paper
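A minimal Siamese-network sketch (PyTorch assumed; names are illustrative): one shared encoder embeds both inputs, and a contrastive loss pulls matching pairs together and pushes non-matching pairs apart, so no corrupted data is needed, only pair labels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=128, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, x):
        return self.net(x)

encoder = Encoder()                      # the *same* weights are used for both branches

def contrastive_loss(x1, x2, same, margin=1.0):
    """same = 1 if the pair should match, 0 otherwise."""
    d = F.pairwise_distance(encoder(x1), encoder(x2))
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

x1, x2 = torch.randn(16, 128), torch.randn(16, 128)
same = torch.randint(0, 2, (16,)).float()
loss = contrastive_loss(x1, x2, same)
loss.backward()
```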
Example: a question answering system, followed by relation learning (learning triplets like "cat eats mouse")
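A minimal sketch of learning relation triplets such as ("cat", "eats", "mouse"), using a TransE-style score ||h + r - t|| (my choice of scoring function; the source doesn't specify one). PyTorch assumed; names and ids are illustrative.

```python
import torch
import torch.nn as nn

N_ENTITIES, N_RELATIONS, DIM = 1000, 20, 50

ent = nn.Embedding(N_ENTITIES, DIM)
rel = nn.Embedding(N_RELATIONS, DIM)
opt = torch.optim.Adam(list(ent.parameters()) + list(rel.parameters()), lr=1e-3)

def score(h, r, t):
    """Lower is better: for a true triplet, h + r should be close to t."""
    return (ent(h) + rel(r) - ent(t)).norm(dim=-1)

# One true triplet and one corrupted triplet (random tail) for a margin ranking loss.
h, r, t = torch.tensor([3]), torch.tensor([7]), torch.tensor([42])
t_bad = torch.randint(0, N_ENTITIES, (1,))
loss = torch.relu(1.0 + score(h, r, t) - score(h, r, t_bad)).mean()
loss.backward()
opt.step()
```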
Memory networks (see below) may be useful for transfer learning too.
One-shot learning using conv nets: since we already have good embeddings, just compare objects in embedding space. See the beginning of this
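A minimal one-shot classification sketch along those lines: with embeddings already learned (e.g. by a pretrained conv net), classify a new example by comparing it to one labelled example per class in embedding space. Plain numpy; the embeddings here are random placeholders.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# One "support" embedding per class, produced by the pretrained network (placeholder data).
support = {"cat": np.random.randn(64), "dog": np.random.randn(64)}
query = np.random.randn(64)                      # embedding of the new example

# Nearest neighbour in embedding space = predicted class.
prediction = max(support, key=lambda c: cosine(support[c], query))
print(prediction)
```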
See also Incremental learning