Professor Kevin Tansey
kjt7 [at] leicester.ac.uk
Professor of Remote Sensing, The University of Leicester, Leicester, United Kingdom
Topic of Talk: Using deep learning techniques with remotely sensed satellite data products for improved agricultural yield estimation
The rapid and effective acquisition of crop yield information is critical to the stability of agricultural markets and is an important baseline observation for ensuring regional and global food security. In this talk, we explain how a novel deep learning framework was developed for winter wheat yield estimation using meteorological data and two remotely sensed indices, the Vegetation Temperature Condition Index (VTCI) and the Leaf Area Index (LAI), at the main growth stages of winter wheat in the Guanzhong Plain in China. The deep learning model was based on a Long Short-Term Memory (LSTM) neural network with an attention mechanism (ALSTM). It was demonstrated that the ALSTM model generalizes well across sampling sites under different farming systems, including irrigated and rain-fed sites. In a further study, coarse-resolution vegetation indices and land surface temperature products were downscaled to the field scale, providing critical information on crop yields during growth periods. In conclusion, our findings highlight that deep learning approaches, combined with remotely sensed biophysical indices and, in some cases, data augmentation techniques and methods that improve the interpretability of neural networks, can provide reliable and robust estimates of crop yield.
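The core idea of an attention-augmented LSTM over growth stages can be sketched as below. This is a minimal PyTorch illustration, not the authors' exact model: the layer sizes, the additive attention form, and the three-feature input (VTCI, LAI, a meteorological variable per stage) are all assumptions.

```python
import torch
import torch.nn as nn

class ALSTM(nn.Module):
    """Hypothetical sketch of an attention-based LSTM yield regressor."""
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each growth stage
        self.head = nn.Linear(hidden, 1)   # scalar yield estimate

    def forward(self, x):
        # x: (batch, growth_stages, features), e.g. [VTCI, LAI, meteo]
        h, _ = self.lstm(x)                     # (B, T, H)
        w = torch.softmax(self.attn(h), dim=1)  # attention over stages
        context = (w * h).sum(dim=1)            # stage-weighted summary
        return self.head(context).squeeze(-1)   # (B,)

model = ALSTM()
x = torch.randn(8, 6, 3)  # 8 sites, 6 growth stages, 3 inputs per stage
y = model(x)
print(y.shape)  # torch.Size([8])
```

The attention weights indicate which growth stages the model relies on most, which is one route to the interpretability mentioned in the abstract.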
Professor Danfeng Hong
hongdf [at] aircas.ac.cn
Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
Topic of Talk: Deep Learning for Remote Sensing Image Analysis
The rapid development of Earth observation (EO) techniques enables the measurement and monitoring of the Earth on the land surface and beneath it, of the quality of air and water, and of the health of humans, plants, and animals. Remote sensing (RS) is one of the most important contact-free sensing means for EO, extracting relevant information about the physical properties of the Earth and its environmental system from space. Over the past decades, enormous efforts have been made to analyze RS images, mainly with various expert systems. Nevertheless, with ever-growing RS data, in both categories and quantities, manpower costs pose new challenges to improving the efficiency of RS image processing and analysis. Building on recent advances in deep learning, this talk will cover the use of deep learning in remote sensing image analysis and beyond. The potential topics include a wide range of applications, e.g., spectral unmixing, data fusion, image classification, and object detection.
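One of the listed topics, spectral unmixing, can be illustrated with the classical linear mixing model: a pixel spectrum is a nonnegative combination of endmember spectra, and abundances are recovered by nonnegative least squares. The endmember values below are made up for illustration; the talk's own methods are deep-learning based.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (columns: vegetation, soil, water)
# over 5 spectral bands; reflectance values are invented.
E = np.array([[0.05, 0.30, 0.02],
              [0.08, 0.35, 0.02],
              [0.45, 0.40, 0.01],
              [0.50, 0.45, 0.01],
              [0.40, 0.50, 0.01]])

# A mixed pixel: 60% vegetation, 40% soil, no water.
pixel = E @ np.array([0.6, 0.4, 0.0])

# Nonnegative least squares recovers the abundance fractions.
abund, _ = nnls(E, pixel)
print(np.round(abund, 3))  # [0.6 0.4 0. ]
```

Deep unmixing networks replace this hand-crafted inversion with learned encoders, but the abundance-recovery goal is the same.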
Professor Hamid Soltanian-Zadeh
hszadeh [at] ut.ac.ir
University of Tehran and Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
Topic of Talk: Computerized Analysis of Chest Images for Diagnosis and Prognosis of COVID-19
The coronavirus disease (COVID-19), declared a pandemic by the World Health Organization (WHO), is an infectious disease affecting millions of people worldwide. To deal with the disease, deep learning models have been used as an effective tool for assisting radiologists in detecting COVID-19 cases and for reducing the burden on healthcare systems. Detection of COVID-19 cases using X-ray images allows high-risk patients to be quarantined until a thorough examination can be performed. In this talk, we report our study of four state-of-the-art deep learning models (VGG-16, VGG-19, EfficientNetB0, and ResNet50) on 464 chest X-ray images of COVID-19 and normal cases. Next, we describe heatmaps, which can be used to illustrate the area of focus within the lungs. Then, we present the image analysis methods we developed to assist radiologists with the detection and quantification of COVID-19-related lung infections. We explain artificial intelligence (AI)-based lesion segmentation and quantification methods built on the U-Net, Attention U-Net, R2U-Net, and Attention R2U-Net models. These models are trained and evaluated using a dataset of 8739 CT images of the lungs from 147 healthy subjects and 150 patients infected with COVID-19. The results show that the Attention R2U-Net model is superior to the others, and the lesion volumes it estimates are highly correlated with those of manual segmentations by an expert.
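The quantification step downstream of any of these segmentation networks can be sketched simply: given a binary lesion mask, compute its volume from voxel spacing and compare it with a manual mask via the Dice overlap. This is a generic NumPy illustration; the voxel spacing and toy masks are assumptions, not the study's data.

```python
import numpy as np

def lesion_volume_ml(mask, voxel_spacing_mm=(1.0, 0.7, 0.7)):
    """Lesion volume from a binary 3-D CT mask (slices, H, W).
    voxel_spacing_mm is hypothetical scanner metadata."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> millilitres
    return mask.sum() * voxel_ml

def dice(pred, truth):
    """Dice overlap between predicted and manual segmentations."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Toy example: two overlapping box-shaped "lesions".
pred = np.zeros((4, 8, 8), dtype=bool); pred[1:3, 2:6, 2:6] = True
truth = np.zeros_like(pred);            truth[1:3, 2:6, 3:7] = True
print(round(dice(pred, truth), 3))  # 0.75
```

High Dice against expert masks and strongly correlated volumes are exactly the evaluation criteria the abstract reports for Attention R2U-Net.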
Dr. Gerald Schaefer
gerald.schaefer [at] ieee.org
Department of Computer Science, Loughborough University, Loughborough, UK
Topic of Talk: Content-based image retrieval in the JPEG compressed domain
Content-based image retrieval (CBIR), based on the principle of extracting image features and using them to judge visual similarity, has attracted much research and been shown to be useful, especially since most images are unannotated. However, while virtually all images are stored in compressed form (and most in JPEG format), CBIR algorithms generally operate in the uncompressed pixel domain. This not only incurs a computational overhead for feature calculation, since the images must first be decompressed, but can also lead to a drop in retrieval accuracy, in particular at extreme compression rates. In my talk, I will present several efficient and effective CBIR techniques we have developed that operate directly in the JPEG compressed domain without the need for full decompression for feature extraction. In particular, I will explore how CBIR features can be extracted from DCT coefficients, from differentially coded DC data, and from information contained in the JPEG header, with the latter capable of supporting online image retrieval scenarios.
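The flavour of a DCT-domain feature can be sketched as follows: JPEG stores each 8x8 block as DCT coefficients, and a simple retrieval feature is a histogram of the per-block DC terms (each proportional to the block's mean brightness). This is a minimal sketch, not the speaker's method; in a real compressed-domain system the coefficients come straight from the entropy-decoded JPEG stream, whereas here they are recomputed from pixels for illustration.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an n x n block (JPEG uses n = 8)."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C @ block @ C.T

def dc_histogram(image, bins=16):
    """CBIR feature sketch: normalized histogram of per-block DC terms."""
    h, w = image.shape
    dcs = [dct2(image[r:r + 8, c:c + 8])[0, 0]
           for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    hist, _ = np.histogram(dcs, bins=bins, range=(0, 8 * 255))
    return hist / hist.sum()  # sums to 1.0

feature = dc_histogram(np.full((16, 16), 100.0))
```

Two images can then be compared by a histogram distance (e.g. L1) between their feature vectors, all without reconstructing a single pixel from the bitstream.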