Michael T. Matthews’s scientific contributions


Publications (1)


Figure 2: Few-shot classification accuracy of FuMI on iNat-Anim compared to MAML. Error bars show the uncertainty across 5 random seeds.
Figure 3: The left pane shows the distribution of species in iNat-Anim across birds, reptiles and mammals. The right pane shows a histogram of the number of words in each description of each species.
Multi-Modal Fusion by Meta-Initialization
  • Preprint

October 2022 · 29 Reads

Matthew T. Jackson · · Michael T. Matthews · Yousuf Mohamed-Ahmed

When experience is scarce, models may have insufficient information to adapt to a new task. In this case, auxiliary information, such as a textual description of the task, can enable improved task inference and adaptation. In this work, we propose an extension to the Model-Agnostic Meta-Learning algorithm (MAML) that allows the model to adapt using auxiliary information as well as task experience. Our method, Fusion by Meta-Initialization (FuMI), conditions the model initialization on auxiliary information using a hypernetwork, rather than learning a single, task-agnostic initialization. Furthermore, motivated by the shortcomings of existing multi-modal few-shot learning benchmarks, we constructed iNat-Anim, a large-scale image classification dataset with succinct and visually pertinent textual class descriptions. On iNat-Anim, FuMI significantly outperforms uni-modal baselines such as MAML in the few-shot regime. The code for this project and a dataset exploration tool for iNat-Anim are publicly available at https://github.com/s-a-malik/multi-few.
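The core idea of the abstract, replacing MAML's shared initialization with one produced by a hypernetwork from an auxiliary embedding, can be illustrated with a toy sketch. This is not the authors' implementation; all dimensions, the linear hypernetwork, and the quadratic toy task below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
AUX_DIM, PARAM_DIM = 16, 8

# Hypernetwork weights: map an auxiliary (e.g. text-description)
# embedding to a task-conditioned parameter initialization.
W_h = rng.normal(scale=0.1, size=(PARAM_DIM, AUX_DIM))
b_h = np.zeros(PARAM_DIM)

def conditioned_init(aux_embedding):
    """Task-conditioned initialization theta_0 = h(aux; W_h, b_h),
    in place of MAML's single shared theta_0 (linear h for brevity)."""
    return W_h @ aux_embedding + b_h

def inner_adapt(theta0, grad_fn, lr=0.1, steps=3):
    """MAML-style inner loop: a few gradient steps from theta0."""
    theta = theta0.copy()
    for _ in range(steps):
        theta -= lr * grad_fn(theta)
    return theta

# Toy task: quadratic loss pulling parameters toward a task target.
target = rng.normal(size=PARAM_DIM)
loss = lambda theta: float(np.sum((theta - target) ** 2))
grad_fn = lambda theta: 2.0 * (theta - target)

aux = rng.normal(size=AUX_DIM)  # stand-in for a description embedding
theta0 = conditioned_init(aux)
theta_adapted = inner_adapt(theta0, grad_fn)
print(theta_adapted.shape)  # (8,)
```

In the full method the hypernetwork and inner loop would be trained jointly across tasks, so the outer loop updates `W_h` and `b_h` through the adaptation steps; the sketch only shows the forward structure.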
