EarthView3D is a research product developed by the EarthView3D Lab of the Canada Research Chair in Advanced Geomatics Image Processing (CRC-AGIP) Group.
About CRC-AGIP Group:
CRC-AGIP is a research group at the Department of Geodesy and Geomatics Engineering, University of New Brunswick, under the direction of Dr. Yun Zhang. The goal of the CRC-AGIP Group is to develop new technologies for collecting, extracting and visualizing geospatial information through remote sensing image processing and computer vision. Several technologies developed by the Group have been licensed to world-leading companies. Dr. Zhang's automated image-fusion technique is being used across five continents by leading organizations including NASA, Google Earth and the Department of National Defence. Some of the group's past and present research activities are:
Web Mapping Research Activities:
- UNB Street View System (2000 - 2001)
- UNB Pansharp (2000 – ongoing)
- UNB Online Image Mapping Application (2002)
- Stereo Satellite Anaglyph 3D (2003)
- EarthView3D (2005 – ongoing)
Street View: Figure shows panoramic images from the UNB Street View Application (top, acquired in 2001) and Google Street View (bottom, acquired in May 2012) for an area in downtown Fredericton, New Brunswick, Canada. The general principle used by Google Street View (launched six years after the development of UNB Street View) and the UNB system is essentially the same: both create a 360-degree virtual tour of a particular location by stitching together several images taken at that location. This research project was led by Prof. Dr. YC Lee (Rawlinson S, Lee YC, Zhang Y (2001)).
UNB-PanSharp image fusion software: Original QuickBird MS image, 2.8 m resolution (left); original QuickBird Pan image, 0.7 m (middle); pan-sharpened QuickBird MS image, 0.7 m, using UNB-PanSharp (right). In 2001, Dr. Zhang developed a new image fusion technique often referred to as UNB-PanSharp. It consistently produces optimal fusion results, achieving minimum colour distortion, maximum spatial detail and optimal integration of colour and detail. In 2002 and 2003, UNB-PanSharp was licensed to PCI Geomatics and DigitalGlobe, respectively, and a US patent was awarded in 2008. UNB-PanSharp is currently used worldwide by many industrial, government and military organizations (from NASA … to Mauritius …).
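The patented UNB-PanSharp formulation is not reproduced here, but the general idea of ratio-based pan-sharpening can be sketched: synthesize a low-resolution intensity band from the MS bands (this sketch assumes a simple least-squares fit of band weights, which is not necessarily how UNB-PanSharp computes them), then scale each MS band by the ratio of the real Pan band to that synthetic band to inject spatial detail.

```python
import numpy as np

def pansharpen_ratio(ms, pan):
    """Ratio-based pan-sharpening sketch (illustrative only, not the
    patented UNB-PanSharp algorithm).
    ms:  (bands, H, W) multispectral bands, already resampled to the Pan grid.
    pan: (H, W) panchromatic band at full resolution."""
    bands = ms.shape[0]
    # Fit per-band weights by least squares so the weighted sum of the
    # MS bands approximates the Pan band (a "synthetic" intensity).
    A = ms.reshape(bands, -1).T                      # (H*W, bands)
    weights, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
    synth = np.tensordot(weights, ms, axes=1)        # (H, W) synthetic pan
    # Scale every MS band by the Pan/synthetic ratio to add detail
    # while preserving the relative colour balance between bands.
    ratio = pan / np.maximum(synth, 1e-6)
    return ms * ratio
```

In practice the quality of the result depends heavily on how the synthetic intensity is modelled; minimizing the colour distortion of that step is precisely what a production fusion method has to get right.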
Online Image Mapping Application: Figure shows the online satellite image mapping application developed at the CRC-AGIP lab in 2002 (view video). This system allowed users to view UNB pan-sharpened IKONOS 1 m colour imagery of Fredericton, NB, Canada online. From its launch in 2002, the application was widely used by the local community until Google made high-resolution satellite images of Fredericton available in 2006. Google Earth itself was launched in 2005, three years after this application. As shown in the figure, the image mapping system included a school locator function that would zoom to a selected destination (a local high school in this figure) and supported full-screen viewing, zooming and panning.
Multi-Scale Satellite 3D Web Mapping System: Figure shows 3D satellite images of UNB Campus created by CRC-AGIP lab. Centre: Anaglyph 3D image generated from pan-sharpened IKONOS images (Zhang (2002)); Upper Left: Anaglyph 3D image generated from pan-sharpened QuickBird images (Zhang, Y., P. Xie, and H. Li (2007)). This project investigated approaches to create 3D images from both stereo and non-stereo satellite scenes. The resulting multi-scale 3D image database was available for viewing in an online mapping system as shown above (upper right).
EarthView3D (Anaglyph Mode): Figure shows an anaglyph 3D image of Honolulu, USA. In Fall 2015, the CRC-AGIP lab launched EarthView3D, an online application for viewing satellite images and line maps in stereoscopic 3D and anaglyph 3D. With EarthView3D, users can wear 3D glasses (e.g. active shutter glasses or anaglyph glasses) to see a true 3D representation of the earth's surface. The approach used by EarthView3D to show 3D differs from that used by many applications such as SketchUp, where monocular cues (e.g. relative size, linear perspective) are used to give a perspective 3D impression.
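The standard red-cyan anaglyph construction can be sketched in a few lines (a generic illustration; EarthView3D's own rendering pipeline is not described in this document): the red channel comes from the left-eye image and the green/blue channels from the right-eye image, so coloured glasses deliver a different view to each eye.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Build a red-cyan anaglyph from a co-registered stereo pair.
    Both inputs are (H, W, 3) uint8 RGB arrays of the same scene
    taken from slightly different viewpoints."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]     # red channel: left-eye view
    anaglyph[..., 1:] = right_rgb[..., 1:]  # green/blue: right-eye view
    return anaglyph
```

Viewed through red-cyan glasses, the red filter passes only the left-eye channel and the cyan filter only the right-eye channels, producing binocular depth rather than the monocular perspective cues mentioned above.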
Selected Geospatial Research Activities:
- Supervised Image Segmentation
- Moving Speed Detection Using Single Set Satellite Imagery
- Image Matching of Smooth Areas
- 3D Assisted Change Detection
Supervised image segmentation: This figure shows a comparison of segmentation results achieved using commercial software (left) and the approach developed at the CRC-AGIP lab (right). In 2005 and 2012, respectively, an algorithm and a software tool for supervised image segmentation were developed at the CRC-AGIP lab; see Maxwell and Zhang (2005) and Tong et al. (2012). This method calculates a set of optimal segmentation parameters through an algorithm-training and fuzzy-logic analysis process, in contrast to traditional approaches that rely on a labour-intensive, time-consuming operator trial-and-error process. The technique is highly regarded by researchers and leading remote sensing organizations, and the software was provided to the US Geological Survey for evaluation. It is now at an industry evaluation stage. A US patent was granted in 2012.
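The core idea of replacing operator trial-and-error with training can be illustrated with a toy version (an assumption-laden sketch: the published method optimizes several segmentation parameters via fuzzy-logic analysis, whereas this sketch tunes a single intensity threshold against a user-supplied reference segment using an IoU score):

```python
import numpy as np

def train_threshold(image, reference_mask, candidates):
    """Toy supervised parameter selection: score each candidate
    segmentation parameter (here just a grey-level threshold) against
    a reference segment digitized by the operator, and keep the one
    with the highest intersection-over-union (IoU)."""
    best_t, best_iou = None, -1.0
    for t in candidates:
        seg = image > t                                   # candidate segmentation
        inter = np.logical_and(seg, reference_mask).sum()
        union = np.logical_or(seg, reference_mask).sum()
        iou = inter / union if union else 0.0
        if iou > best_iou:
            best_t, best_iou = t, iou
    return best_t, best_iou
```

The operator supplies one good example instead of iterating over parameter settings by hand; the training loop does the iterating.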
Moving speed detection using a single set of satellite imagery: Original GeoEye-1 MS image, 2 m (upper left); original GeoEye-1 Pan image, 0.5 m (upper right); UNB pan-sharpened GeoEye-1 MS image, 0.5 m (lower left); overlay of the original GeoEye-1 MS (red) and Pan (cyan) images (lower right). Yellow circle: moving objects with "tails"; red circle: static objects without "tails". In 2006, an approach for detecting vehicles and calculating vehicle speeds was developed at the CRC-AGIP lab. The approach makes use of the small time delay between acquisition of the MS bands and the corresponding Pan band: a moving object is displaced between the two acquisitions, leaving a "tail" whose length, combined with the known delay, yields its speed. The research results were published in Photogrammetric Engineering and Remote Sensing (PE&RS); see Xiong and Zhang (2008). ASPRS selected this paper as recipient of the John I. Davidson President's Award for Practical Papers. A Canadian company is now advancing this technique for applications such as highway traffic monitoring (http://www.4dm-inc.com/index.php/solutions/).
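The speed calculation itself reduces to displacement over time. A minimal sketch (the displacement measurement and the sensor-specific MS-Pan delay value are inputs here, not derived; the numbers in the comment are hypothetical):

```python
def vehicle_speed_kmh(offset_px, gsd_m, time_delay_s):
    """Estimate ground speed from the 'tail' a moving object leaves
    between the MS and Pan acquisitions.
    offset_px:    measured displacement between the two bands, in pixels
    gsd_m:        ground sample distance of the band used, in metres
    time_delay_s: MS-Pan acquisition time lag in seconds (a small,
                  sensor-specific constant)."""
    speed_ms = offset_px * gsd_m / time_delay_s  # metres per second
    return speed_ms * 3.6                        # convert to km/h
```

For example, a hypothetical tail of 10 pixels at 0.5 m GSD over a 0.2 s delay corresponds to 25 m/s, i.e. 90 km/h.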
Image matching of smooth areas: Feature points in a forest area of a QuickBird MS image pair (2.8 m). These points were extracted and matched by the control-network-based matching technique developed at CRC-AGIP. To overcome the problems associated with matching images containing large "smooth" areas, a Control Network Based Matching technique was developed at the CRC-AGIP Lab in 2008. The algorithm first detects the most prominent points, i.e. "super points", in the image pair and then uses them to establish a control network in both images. Based on this network, matching points can be found even within the smooth areas of the images (Xiong and Zhang (2009)).
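The first step, selecting a few highly distinctive points, can be sketched with a simple stand-in detector (the published algorithm's actual "super point" criterion is not reproduced here; this sketch just takes the strongest, mutually separated gradient-magnitude maxima):

```python
import numpy as np

def super_points(image, k=10, win=3):
    """Pick the k strongest, spatially separated gradient-magnitude
    peaks as stand-in 'super points'. image: 2-D grey-level array."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)                       # edge/corner strength proxy
    order = np.argsort(mag, axis=None)[::-1]     # strongest responses first
    pts = []
    for idx in order:
        r, c = divmod(int(idx), image.shape[1])
        # enforce a minimum separation so points spread over the image
        if all(abs(r - pr) > win or abs(c - pc) > win for pr, pc in pts):
            pts.append((r, c))
        if len(pts) == k:
            break
    return pts
```

Once such points are matched between the two images, they anchor the control network from which correspondences inside texture-poor regions are then inferred.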
Change Detection Using Off-Nadir Satellite and Airborne Images: Image 1 shows how a building appears to lean when viewed from different viewpoints. Image 2 shows the borders of the building roof in a WV-2 satellite image. Image 3 shows the building roof borders transferred from image 2 to image 3 using the PWCR method, and image 4 shows them transferred from image 2 to image 4. The transferred borders are used to identify changes between images with significantly different off-nadir angles. The main problem in urban change detection using off-nadir satellite or airborne imagery is finding the same objects in bi-temporal images regardless of the acquisition angles, a task made challenging by relief displacements of the objects in different directions. We developed a Patch-Wise Co-Registration (PWCR) method to overcome this problem and find corresponding objects in bi-/multi-temporal images with different viewing angles.
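Because relief displacement varies across the scene, a single global alignment cannot register such images; patch-wise methods instead align each local patch independently. A minimal stand-in for that per-patch alignment step is integer phase correlation (an illustrative sketch only; the PWCR method itself handles full viewing-geometry differences, not just local translations):

```python
import numpy as np

def patch_shift(patch_a, patch_b):
    """Estimate the integer translation between two co-located image
    patches via phase correlation. Returns (dy, dx) such that
    patch_b is patch_a shifted by that amount (circularly)."""
    f = np.conj(np.fft.fft2(patch_a)) * np.fft.fft2(patch_b)
    # Normalizing the cross-power spectrum leaves only the phase,
    # whose inverse FFT peaks at the relative displacement.
    corr = np.fft.ifft2(f / np.maximum(np.abs(f), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = patch_a.shape
    if dy > h // 2: dy -= h   # map wrapped peaks to signed shifts
    if dx > w // 2: dx -= w
    return int(dy), int(dx)
```

Running such an estimator patch by patch yields a locally varying displacement field, which is the kind of correspondence needed before objects can be compared across differently angled acquisitions.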
For more information about the CRC-AGIP Group and the lab's research activities, refer to the CRC-AGIP website.