Paper Title


SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image Understanding

Authors

Favyen Bastani, Piper Wolters, Ritwik Gupta, Joe Ferdinando, Aniruddha Kembhavi

Abstract


Remote sensing images are useful for a wide variety of planet monitoring applications, from tracking deforestation to tackling illegal fishing. The Earth is extremely diverse -- the number of potential tasks in remote sensing images is massive, and the sizes of features range from several kilometers to just tens of centimeters. However, creating generalizable computer vision methods is a challenge, in part due to the lack of a large-scale dataset that captures these diverse features across many tasks. In this paper, we present SatlasPretrain, a remote sensing dataset that is large in both breadth and scale, combining Sentinel-2 and NAIP images with 302M labels under 137 categories and seven label types. We evaluate eight baselines and a proposed method on SatlasPretrain, and find that there is substantial room for improvement in addressing research challenges specific to remote sensing, including processing image time series that consist of images from very different types of sensors, and taking advantage of long-range spatial context. Moreover, we find that pre-training on SatlasPretrain substantially improves performance on downstream tasks, increasing average accuracy by 18% over ImageNet and 6% over the next best baseline. The dataset, pre-trained model weights, and code are available at https://satlas-pretrain.allen.ai/.
