L-DAE: Latent Denoising Autoencoder for Self-supervised Pre-training (Shuchen Du, Feb 4)
A potential alternative to MAE for image modeling as pre-training.
Self-supervised Visual Model Pre-training for Small Datasets (Shuchen Du in AI Salon, Sep 4, 2023)
Self-supervised learning (SSL) is quite a popular pre-training method for label-deficient datasets, in order to get high model…
Prompt Context Learning in Vision-Language Fine-tuning (Shuchen Du in Towards Data Science, Sep 21, 2022)
A parameter-efficient method for model adaptation.
Object-level Vision-Language Contrastive Pre-training (Shuchen Du in Artificial Intelligence in Plain English, Sep 12, 2022)
Explicit and implicit methods for aligning object-level vision and language features without human annotations.
Contrastive Learning without Negative Pairs (Shuchen Du in Geek Culture, Jul 28, 2022)
Some methods that make your contrastive pre-training implementation light and simple.
5 Probabilistic Training Data Sampling Methods in Machine Learning (Shuchen Du in Towards Data Science, Jul 21, 2022)
Appropriate data sampling methods matter for training a good model.
MultiMAE: An Inspiration to Leverage Labeled Data in Unsupervised Pre-training (Shuchen Du in Towards Data Science, Jul 17, 2022)
Boost your model performance via multimodal masked auto-encoders.
Contrastive Pre-training of Visual-Language Models (Shuchen Du in Towards Data Science, Jul 14, 2022)
Fully leveraging supervision signals from a contrastive perspective.
Enhancing the Performance in Training Tiny Neural Networks (Shuchen Du in Towards Data Science, Apr 15, 2022)
Be aware of the differences in training large and tiny neural networks.
Pixel-level Dense Contrastive Learning (Shuchen Du in Towards Data Science, Apr 3, 2022)
Dense contrastive learning with an active sampling strategy.