# ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders

**[Image and Video Understanding Lab, AI Initiative, KAUST](https://ivul.kaust.edu.sa/)**

Authors: [`Carlos Hinojosa`](https://carloshinojosa.me/), [`Shuming Liu`](https://sming256.github.io/), [`Bernard Ghanem`](https://www.bernardghanem.com/)

![Overview of the proposed ColorMAE masking strategies][1]

[1]: https://raw.githubusercontent.com/carlosh93/ColorMAE/8d173eb422979c105e0cdb30d90b5659f4dc2efb/assets/proposed.png

Links: [`Paper`](https://carloshinojosa.me/files/ColorMAE.pdf), [`Supplementary Material`](https://carloshinojosa.me/files/ColorMAE_Supp.pdf), [`Project`](https://carloshinojosa.me/project/colormae/), [`GitHub`](https://github.com/carlosh93/ColorMAE)

----------

> Can we enhance MAE performance beyond random masking without relying on input data or incurring additional computational costs? We introduce ColorMAE, a simple yet effective **data-independent** method that generates diverse binary mask patterns by filtering random noise. Drawing inspiration from color noise in image processing, we explore four types of filters to yield mask patterns with different spatial and semantic priors. ColorMAE requires no additional learnable parameters or computational overhead in the network, yet it significantly enhances the learned representations.

Please see our GitHub for implementation details: https://github.com/carlosh93/ColorMAE
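The core idea can be illustrated with a short sketch: shape the spectrum of white noise with a frequency-domain filter (by analogy with blue/red/green noise in image processing), then mask the patches with the largest filtered values. This is only a minimal NumPy illustration under our own assumptions about the filter shapes; the function name, filter parameters, and patch-grid size are illustrative, not the paper's actual implementation (see the GitHub repository for that).

```python
import numpy as np

def colored_noise_mask(grid=14, mask_ratio=0.75, mode="blue", seed=0):
    """Sketch of a data-independent mask from spectrally filtered noise.

    Returns a (grid, grid) binary array where 1 marks a masked patch.
    The filter shapes below are illustrative assumptions, not the
    official ColorMAE filters.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((grid, grid))          # white noise

    # Radial frequency magnitude of each FFT bin
    fy = np.fft.fftfreq(grid)[:, None]
    fx = np.fft.fftfreq(grid)[None, :]
    r = np.sqrt(fx**2 + fy**2)

    if mode == "blue":       # emphasize high frequencies
        H = r
    elif mode == "red":      # emphasize low frequencies
        H = 1.0 / np.maximum(r, 1e-3)
    elif mode == "green":    # band-pass around a mid frequency
        H = np.exp(-((r - 0.25) ** 2) / (2 * 0.05**2))
    else:                    # "white": unfiltered random masking
        H = np.ones_like(r)

    # Filter the noise in the frequency domain
    filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) * H))

    # Mask the k patches with the largest filtered-noise values
    k = int(mask_ratio * grid * grid)
    thresh = np.sort(filtered.ravel())[::-1][k - 1]
    return (filtered >= thresh).astype(np.int64)

mask = colored_noise_mask()
print(mask.sum())  # 147 of 196 patches masked (75% ratio)
```

Because the mask depends only on the noise and the filter, not on the input image, it costs nothing extra per training sample and adds no learnable parameters, matching the data-independence property described above.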