Computer scientists at the University of Utah have developed software which can edit huge panoramic photographs containing billions of pixels in seconds rather than hours.
The software produces preview images of massive photographs, such as panoramas or satellite photos, containing over a gigapixel of data in a fraction of the usual time, allowing the image to be edited interactively.
“You can go anywhere you want in the image. You can zoom in, go left, right. From your perspective, it is as if the full ‘solved’ image has been computed,” says Valerio Pascucci, an associate professor at the University of Utah.
The software, named Visualization Streams for Ultimate Scalability or ViSUS, has been developed by a team headed by Pascucci with funding from the US Department of Energy. The technology allows for images to be interactively edited and offers a range of possibilities for doctors, artists, photographers, engineers and intelligence analysts.
The results of the research will be published on Saturday in a paper that describes ViSUS as “a simple framework for progressive processing of high-resolution images with minimal resources … [that] for the first time, is capable of handling gigapixel imagery in real time.”
Because the software does not deal with all the data at once, the technology can also work with greatly reduced processing power.
“ViSUS allows an ordinary desktop computer or even an iPhone to interactively visualize and manipulate such large datasets. ViSUS reorganizes the data in a fractal-like structure then efficiently streams and visualizes just the data needed at just the level of detail required to process and display on the screen at any given moment,” says the University of Utah’s Scientific Computing and Imaging Institute website.
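The core idea — keep the image at multiple resolutions and stream only the level of detail the current viewport actually needs — can be sketched with an ordinary mipmap-style pyramid. This is an illustrative sketch only; the function names are made up and ViSUS’s real fractal-like, hierarchical data layout is far more sophisticated:

```python
import math
import numpy as np

def build_pyramid(image, min_size=64):
    """Build a resolution pyramid: each level halves the previous one
    with a 2x2 box filter, until the smallest side reaches min_size."""
    levels = [image]
    while min(levels[-1].shape[:2]) > min_size:
        prev = levels[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        half = (prev[0:h:2, 0:w:2].astype(np.float64) +
                prev[1:h:2, 0:w:2] +
                prev[0:h:2, 1:w:2] +
                prev[1:h:2, 1:w:2]) / 4.0
        levels.append(half.astype(prev.dtype))
    return levels

def choose_level(num_levels, region_w, view_w):
    """Coarsest pyramid level that still supplies at least one source
    pixel per screen pixel for the requested region width — coarser
    levels mean less data streamed."""
    if region_w <= view_w:
        return 0  # region already fits at full resolution
    return min(int(math.floor(math.log2(region_w / view_w))),
               num_levels - 1)
```

Zooming into a 4096-pixel-wide region on a 1024-pixel-wide screen, for instance, only needs pyramid level 2 (a 4x downsample), so the full-resolution data is never touched.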
In the study, titled ‘Interactive Editing of Massive Imagery Made Simple: Turning Atlanta into Atlantis’, researchers showed how well the software performs by “seamlessly cloning” one 3.7 gigapixel satellite image of the Earth and merging it with a 116 gigapixel photograph of the city of Atlanta. The scientists then merged the city with the Gulf of Mexico, presumably prompting geeky guffawing.
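“Seamless cloning” is usually done in the gradient domain: the pasted region keeps the source image’s gradients while its colours are forced to match the target at the seam, which amounts to solving Poisson’s equation over the masked region. A toy single-resolution sketch follows (the `poisson_blend` name and Jacobi solver are illustrative assumptions; the paper’s contribution is doing this kind of operation progressively on gigapixel data, which a dense in-memory solver like this cannot):

```python
import numpy as np

def poisson_blend(target, source, mask, iters=500):
    """Toy gradient-domain composite: Jacobi-iterate Poisson's equation
    so the pasted region keeps the source's gradients but matches the
    target at the mask boundary. Assumes the mask does not touch the
    image border (np.roll wraps around)."""
    result = target.astype(np.float64).copy()
    src = source.astype(np.float64)
    inside = mask > 0
    # Discrete Laplacian of the source: the gradients we want to keep.
    lap = (np.roll(src, -1, 0) + np.roll(src, 1, 0) +
           np.roll(src, -1, 1) + np.roll(src, 1, 1) - 4.0 * src)
    for _ in range(iters):
        neigh = (np.roll(result, -1, 0) + np.roll(result, 1, 0) +
                 np.roll(result, -1, 1) + np.roll(result, 1, 1))
        # Jacobi update: enforce Laplacian(result) = Laplacian(source)
        result[inside] = ((neigh - lap) / 4.0)[inside]
    return result
```

Pasting a flat grey source into a flat dark target this way produces a region that smoothly takes on the target’s tone at the seam rather than a hard-edged rectangle — the “seamless” in seamless cloning.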
“An artist can interactively place a copy of Atlanta under shallow water and recreate the lost city of Atlantis,” Pascucci said.
“It’s just a way to demonstrate how an artist can manipulate a huge amount of data in an image without being encumbered by the file size.”
The software, which can apparently be used just as effectively with 3D images as with 2D ones, could have applications editing medical images such as MRI and CT scans, as well as in the gaming world.
“We are studying the possibility of involving the player in building their own [gaming] environment on the fly,” says Pascucci.
Furthermore, ViSUS could be used by intelligence analysts to compare satellite photos to monitor military movements in enemy territory. Whereas it would previously be necessary to process all the data in such an image – potentially taking a whole day – it will now be possible to “quickly build an approximation of the difference between the images, and allow the analyst to explore interactively smaller regions of the total image at higher resolution without having to wait.”
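One way to read “an approximation of the difference” is a coarse-first pass: compare block averages of the two satellite photos, flag only the blocks that changed, and fetch full-resolution data solely for those. A small sketch — the `changed_tiles` function, tile size, and threshold are all made-up illustrations, not the paper’s method:

```python
import numpy as np

def changed_tiles(img_a, img_b, tile=64, threshold=8.0):
    """Coarse pass over two equally sized grayscale images: block-average
    both, flag blocks whose mean absolute difference exceeds the
    threshold, and return their top-left pixel coordinates. Only these
    tiles then need full-resolution inspection."""
    h, w = img_a.shape
    th, tw = h // tile, w // tile
    a = img_a[:th * tile, :tw * tile].astype(np.float64)
    b = img_b[:th * tile, :tw * tile].astype(np.float64)
    # Reshape into (tiles_y, tile, tiles_x, tile) and average each block.
    a = a.reshape(th, tile, tw, tile).mean(axis=(1, 3))
    b = b.reshape(th, tile, tw, tile).mean(axis=(1, 3))
    ys, xs = np.nonzero(np.abs(a - b) > threshold)
    return [(y * tile, x * tile) for y, x in zip(ys, xs)]
```

The coarse pass touches a tiny fraction of the pixels, which is what lets an analyst skip the day-long full comparison and zoom straight into the regions that actually moved.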
The team also announced that they are already working on a version that will be able to handle terapixel images.
It is not mentioned if the team take requests but ‘Birmingham’ and ‘North Sea’ are widely hoped to be in the title of the next study.