Figure 3
Comparison of the feature extraction and classification in CNAPS versus Simple CNAPS: Both CNAPS and Simple CNAPS share the feature extraction adaptation architecture detailed in Figure 4. The two models differ in how distances between query feature vectors and class feature representations are computed for classification. CNAPS uses a trained, adapted linear classifier, whereas Simple CNAPS uses a differentiable but fixed, parameter-free, deterministic distance computation. Components in light blue have parameters that are trained, namely f^τ_θ in both models and ψ^c_φ in the CNAPS adaptive classifier. As shown, the CNAPS classifier requires 778k parameters, while the Simple CNAPS classifier is fully deterministic.

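To make the contrast concrete, here is a minimal NumPy sketch of the two classification heads. All shapes and values are hypothetical placeholders: in the real model the CNAPS weights come from its classifier adaptation network ψ^c_φ rather than being random, and Simple CNAPS uses a Mahalanobis distance with an estimated covariance rather than the plain Euclidean distance used here (see the contexts below).

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 64, 5                           # hypothetical feature dim and class count
query = rng.normal(size=D)             # adapted query feature f^tau_theta(x*)
class_means = rng.normal(size=(K, D))  # one representation per class

# CNAPS-style head (sketch): an adapted, *trained* linear classifier.
# Random W and b stand in for the output of the adaptation network psi^c_phi.
W, b = rng.normal(size=(K, D)), rng.normal(size=K)
cnaps_logits = W @ query + b

# Simple CNAPS-style head (sketch): negative squared distance to each class
# representation; no trained classifier parameters at all.
simple_logits = -np.sum((class_means - query) ** 2, axis=1)

print(cnaps_logits.argmax(), simple_logits.argmax())
```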

Source publication
Preprint
Full-text available
Few-shot learning is a fundamental task in computer vision that carries the promise of alleviating the need for exhaustively labeled data. Most few-shot learning approaches to date have focused on progressively more complex neural feature extractors and classifier adaptation strategies, as well as the refinement of the task definition itself. In th...

Contexts in source publication

Context 1
... we show that regularized class-specific covariance estimation from task-specific adapted feature vectors allows the use of the Mahalanobis distance for classification, achieving a significant improvement over the state of the art. A high-level diagrammatic comparison of our "Simple CNAPS" architecture to CNAPS can be found in Figure 3. ...
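A minimal NumPy sketch of this idea: estimate a per-class covariance from the few adapted support features, shrink it toward a pooled task covariance so the estimate stays well-conditioned, and classify a query by smallest squared Mahalanobis distance. The fixed blend weight, ridge term, and function names here are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def mahalanobis_classify(support_feats, support_labels, query_feat, lam=0.5):
    """Return the predicted class for query_feat via Mahalanobis distance.

    support_feats: (N, D) task-adapted support features; support_labels: (N,).
    lam is a placeholder blend weight; the paper uses a deterministic
    per-class ratio instead (see the sketch after Context 3).
    """
    classes = np.unique(support_labels)
    D = support_feats.shape[1]
    # Pooled task covariance plus a ridge term keeps the estimate invertible.
    task_cov = np.cov(support_feats, rowvar=False) + np.eye(D)
    dists = []
    for k in classes:
        feats_k = support_feats[support_labels == k]
        mu_k = feats_k.mean(axis=0)
        # A per-class covariance is noisy with few shots, so regularize it
        # by shrinking toward the pooled task covariance.
        cov_k = np.cov(feats_k, rowvar=False) if len(feats_k) > 1 else np.zeros((D, D))
        Q_k = lam * cov_k + (1.0 - lam) * task_cov
        diff = query_feat - mu_k
        dists.append(diff @ np.linalg.solve(Q_k, diff))  # squared Mahalanobis
    return classes[int(np.argmin(dists))]
```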
Context 2
... class mean µ_k is obtained by mean-pooling the feature vectors of the support examples for class k extracted by the adapted feature extractor f^τ_θ. A visual overview of the CNAPS adapted classifier architecture is shown in Figure 3, bottom left, red. ...
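The class means themselves are a one-line computation; a small sketch with hypothetical shapes and labels:

```python
import numpy as np

# Hypothetical adapted support features: N examples, D dims, integer labels.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 64))     # stand-ins for f^tau_theta outputs
labels = rng.integers(0, 3, size=10)  # class indices 0..2

# mu_k for each class: mean-pool the support features belonging to class k.
class_means = {k: feats[labels == k].mean(axis=0) for k in np.unique(labels)}
print({k: v.shape for k, v in class_means.items()})
```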
Context 3
... considered other ratios and making the λ^τ_k's learnable parameters, but found that, out of all the considered alternatives, the simple deterministic ratio above produced the best results. The architecture of the classifier in Simple CNAPS appears in Figure 3, bottom-right, blue. ...
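The excerpt does not include the ratio itself. As a hedged illustration, the published Simple CNAPS paper reports a blend weight that grows with the support count of the class; the exact expression below is an assumption recalled from that paper, not quoted from this excerpt.

```python
def blend_weight(n_k: int) -> float:
    """Deterministic blend weight for class k with n_k support examples.

    Assumed form lambda_k = n_k / (n_k + 1): more shots -> more trust in
    the class-specific covariance; fewer shots -> fall back toward the
    pooled task covariance.
    """
    return n_k / (n_k + 1)

# With 1 shot the class covariance gets weight 0.5; with 9 shots, 0.9.
print(blend_weight(1), blend_weight(9))
```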
Context 4
... we show that regularized class-specific covariance estimation from task-specific adapted feature vectors allows the use of the Mahalanobis distance for classification, achieving a significant improvement over the state of the art. A high-level diagrammatic comparison of our "Simple CNAPS" architecture to CNAPS can be found in Figure 4. ...
Context 5
... class mean µ_k is obtained by mean-pooling the feature vectors of the support examples for class k extracted by the adapted feature extractor f^τ_θ. A visual overview of the CNAPS adapted classifier architecture is shown in Figure 4, bottom left, red. ...
Context 6
... considered other ratios and making the λ^τ_k's learnable parameters, but found that, out of all the considered alternatives, the simple deterministic ratio above produced the best results. The architecture of the classifier in Simple CNAPS appears in Figure 4, bottom-right, blue. ...

Similar publications

Preprint
Full-text available
When experience is scarce, models may have insufficient information to adapt to a new task. In this case, auxiliary information - such as a textual description of the task - can enable improved task inference and adaptation. In this work, we propose an extension to the Model-Agnostic Meta-Learning algorithm (MAML), which allows the model to adapt u...
Chapter
Full-text available
An intelligent model for classifying architectural styles is presented in this chapter. Traditional machine vision methods face difficulties in image classification of architectural styles, particularly in the feature extraction phase where many visual features need to be extracted, refined, and optimized. This step is typically conducted manually...
Conference Paper
Full-text available
Few-shot learning is a fundamental task in computer vision that carries the promise of alleviating the need for exhaustively labeled data. Most few-shot learning approaches to date have focused on progressively more complex neural feature extractors and classifier adaptation strategies, and the refinement of the task definition itself. In this pape...
Article
Full-text available
The focus of multi-objective optimization is to derive a set of optimal solutions in scenarios with multiple and often conflicting objectives. However, the ability of multi-objective evolutionary algorithms in approaching the Pareto front and sustaining diversity within the population tends to diminish as the number of objectives grows. To tackle t...
Preprint
Full-text available
Multi-step prediction models, such as diffusion and rectified flow models, have emerged as state-of-the-art solutions for generation tasks. However, these models exhibit higher latency in sampling new frames compared to single-step methods. This latency issue becomes a significant bottleneck when adapting such methods for video prediction tasks, gi...