ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders
Category: Project
Description: We introduce ColorMAE, a simple yet effective data-independent method that generates different binary mask patterns by filtering random noise. Drawing inspiration from color noise in image processing, we explore four types of filters to yield mask patterns with different spatial and semantic priors. ColorMAE requires no additional learnable parameters or computational overhead in the network, yet it significantly enhances the learned representations. This work was accepted at ECCV 2024.
Image and Video Understanding Lab, AI Initiative, KAUST
Authors: Carlos Hinojosa, Shuming Liu, Bernard Ghanem
Links: Paper, Supplementary Material, Project, GitHub
Can we enhance MAE performance beyond random masking without relying on input data or incurring additional computational costs?
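The core idea is simple to sketch: sample white noise over the patch grid, shape its spectrum with a spatial filter, and mask the patches where the filtered noise is largest, so the mask depends only on the noise and never on the input image. The snippet below is a minimal NumPy/SciPy illustration of that idea, not the authors' implementation (see the GitHub link above); the filter choices, sigma values, and the default mask ratio are illustrative assumptions.

```python
# Minimal sketch of data-independent mask generation from filtered noise.
# Not the official ColorMAE code; filters, sigmas, and defaults are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def color_noise_mask(grid_size=14, mask_ratio=0.75, noise_color="green", rng=None):
    """Build a binary patch mask by filtering random noise and keeping the
    top-scoring patches as the masked set (True = patch is hidden)."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal((grid_size, grid_size))  # unfiltered (white) noise

    # Shape the noise spectrum with simple spatial filters (illustrative choices):
    # low-pass -> "red"-like, high-pass -> "blue"-like, band-pass -> "green"-like.
    low = gaussian_filter(white, sigma=2.0)
    if noise_color == "red":
        noise = low
    elif noise_color == "blue":
        noise = white - low
    elif noise_color == "green":
        noise = gaussian_filter(white, sigma=1.0) - low
    else:  # plain white noise: recovers random masking as in the original MAE
        noise = white

    # Mask the patches with the highest filtered-noise values.
    num_patches = grid_size * grid_size
    num_masked = int(mask_ratio * num_patches)
    order = np.argsort(noise.ravel())[::-1]
    mask = np.zeros(num_patches, dtype=bool)
    mask[order[:num_masked]] = True
    return mask.reshape(grid_size, grid_size)

mask = color_noise_mask(noise_color="green")
print(mask.mean())  # ~0.75 of patches are masked
```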