# ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders
**[Image and Video Understanding Lab, AI Initiative, KAUST](https://ivul.kaust.edu.sa/)**
Authors: [`Carlos Hinojosa`](https://carloshinojosa.me/), [`Shuming Liu`](https://sming256.github.io/), [`Bernard Ghanem`](https://www.bernardghanem.com/)
![Overview of the proposed ColorMAE masking strategies][1]
[1]: https://raw.githubusercontent.com/carlosh93/ColorMAE/8d173eb422979c105e0cdb30d90b5659f4dc2efb/assets/proposed.png
Links: [`Paper`](https://carloshinojosa.me/files/ColorMAE.pdf), [`Supplementary Material`](https://carloshinojosa.me/files/ColorMAE_Supp.pdf), [`Project`](https://carloshinojosa.me/project/colormae/), [`GitHub`](https://github.com/carlosh93/ColorMAE)
----------
>Can we enhance MAE performance beyond random masking without relying on input data or incurring additional computational costs?

We introduce **ColorMAE**, a simple yet effective **data-independent** method that generates diverse binary mask patterns by filtering random noise. Drawing inspiration from color noise in image processing, we explore four types of filters that yield mask patterns with different spatial and semantic priors. ColorMAE adds no learnable parameters and no computational overhead to the network, yet it significantly enhances the learned representations.
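For intuition, below is a minimal sketch (not the official implementation) of how colored-noise masking can be realized: white noise on the patch grid is passed through a band-pass filter (difference of Gaussians), and the highest-scoring patches are masked until the target masking ratio is reached. All function names, kernel sizes, and sigmas here are illustrative placeholders.

```python
# Illustrative sketch only: filter random noise to obtain a "colored" noise
# pattern, then threshold it per sample to reach the desired masking ratio.
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int, sigma: float) -> torch.Tensor:
    """Build a (1, 1, size, size) 2D Gaussian kernel for conv2d."""
    coords = torch.arange(size) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)
    return (g.T @ g).view(1, 1, size, size)

def colored_noise_mask(batch: int, grid: int = 14, mask_ratio: float = 0.75,
                       low_sigma: float = 1.0, high_sigma: float = 3.0) -> torch.Tensor:
    """Return a (batch, grid*grid) binary mask where 1 = masked patch."""
    noise = torch.rand(batch, 1, grid, grid)                      # white noise on the token grid
    k_lo = gaussian_kernel(5, low_sigma)
    k_hi = gaussian_kernel(5, high_sigma)
    # Band-pass (difference of Gaussians) gives a mid-frequency ("green"-like)
    # pattern; using only the low-pass or only the high-pass term would give
    # low-frequency ("red") or high-frequency ("blue") variants instead.
    filtered = F.conv2d(noise, k_lo, padding=2) - F.conv2d(noise, k_hi, padding=2)
    scores = filtered.flatten(1)                                  # (batch, grid*grid)
    num_mask = int(mask_ratio * scores.shape[1])
    ids = scores.argsort(dim=1, descending=True)[:, :num_mask]    # top-scoring patches get masked
    mask = torch.zeros_like(scores)
    mask.scatter_(1, ids, 1.0)
    return mask

mask = colored_noise_mask(batch=8)         # 8 masks for a 14x14 token grid
print(mask.shape, mask.sum(dim=1))         # each row masks 75% of 196 tokens
```

Because the masks depend only on filtered noise, they can be generated offline or on the fly without looking at the input images, which is what makes the strategy data-independent.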
Please see our GitHub for implementation details: https://github.com/carlosh93/ColorMAE