
Jul 24, 2024 | amazon.science | Muhammad Khan | Junbang Liang | Chun-Kai Wang | Shan Yang
Pre-training has been an important ingredient in developing strong monocular depth estimation models in recent years. Self-supervised learning (SSL), for instance, is particularly effective because it alleviates the need for large datasets with dense ground-truth depth maps. Despite these improvements, however, our study reveals that the later layers of the state-of-the-art SSL model are actually suboptimal for depth estimation.
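One common way to test whether later layers are suboptimal for a downstream task is layer-wise linear probing: fit a simple linear head on each layer's frozen features and compare errors. The sketch below illustrates that idea on synthetic data; it is not the paper's method, and all names (`probe_rmse`, `layer_feats`) and the noise levels standing in for per-layer feature quality are illustrative assumptions.

```python
# Hypothetical sketch of layer-wise linear probing for depth estimation.
# Each "layer" is simulated as depth-correlated features plus noise; a
# least-squares probe maps features to depth, and RMSE ranks the layers.
import numpy as np

rng = np.random.default_rng(0)

def probe_rmse(feats, depth):
    """Fit a linear probe feats -> depth by least squares; return RMSE."""
    X = np.hstack([feats, np.ones((feats.shape[0], 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, depth, rcond=None)
    pred = X @ w
    return float(np.sqrt(np.mean((pred - depth) ** 2)))

# Synthetic stand-in: ground-truth depths, plus per-"layer" features whose
# noise level mimics later layers carrying less depth-relevant information.
n, d = 512, 16
depth = rng.uniform(0.5, 10.0, size=n)        # depths in meters (illustrative)
basis = rng.normal(size=(d,))
layer_feats = {
    f"layer_{i}": np.outer(depth, basis) + noise * rng.normal(size=(n, d))
    for i, noise in enumerate([0.1, 0.5, 2.0], start=1)
}

for name, feats in layer_feats.items():
    print(f"{name}: probe RMSE = {probe_rmse(feats, depth):.3f}")
```

In a real study the features would come from the frozen SSL encoder (e.g. via forward hooks on a PyTorch model) rather than synthetic matrices, but the ranking logic is the same: a layer whose probe error is higher contributes weaker depth cues.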