
AAA: Automation, Augmentation, AI

By Andre Demers
Product Marketing Manager
June 26, 2019

There is no shortage of definitions for the “AAA” acronym.

From automobile clubs and video games, to artillery and everything in between, “AAA” has a catchiness that cannot be denied. But that’s not the reason that the latest entrant to the “AAA” scene is gaining momentum.

In the world of GEOINT, Automation, Augmentation, and Artificial Intelligence (AI) is a burgeoning approach to creating 3D maps, visualizations, and common synthetic (virtual) environments. It is seeing increased adoption by agencies and organizations tasked with producing accurate, detailed representations of cities, countries, or the entire planet.

The AAA – or Triple A – approach was born in response to a challenge the industry faced then and still faces today: a glut of data.

Data. So Much Data.

We are witnessing an explosion in the amount of geographical and geo-located information collected on a daily basis. Whether it is sourced from space-based platforms, ISR assets, mobile phones, autonomous vehicles, or open and commercial sources, the sheer volume of data is forcing agencies to rethink the way they produce maps, visualizations, intelligence, synthetic environments, and simulation databases.

In the past, GEOINT data collection was predominantly done through National Technical Means (NTM) funded and operated by governmental or para-governmental organizations. Today, however, the task of providing the vast majority of all geospatial data necessary for government use is becoming unsustainable as the amount of data increases and more demands are placed on these organizations (e.g., fusing sensor imagery with foundational data). Their ability to quickly adapt or scale their processes, or to integrate new data or data streams, is at risk, making them vulnerable to delays, mistakes, bottlenecks, and inefficiencies.

Aside from being the first “A” in the AAA approach, automation is key to all aspects of this strategy.
Cause and Effect

So what are the consequences of too much data?

Specifically, the volume of data and the inability to consume it and publish it in a systematic, logical, and repeatable manner can lead to a range of problems:

  • Errors and Delays: When introducing new data sources (e.g., unmanned aerial sensors) and new data types (e.g., point clouds) into an established workflow, agencies risk introducing workflow, personnel, or publishing errors and triggering costly delays.
  • Correlation Issues: Mis-correlation can occur when two or more datasets for the same geographical coordinates are either incompatible, erroneous, or unsynchronized. In a perfect world, each dataset would contain the same buildings, trees, and rivers. However, it is common for geospatial datasets to contain differing levels of detail that might make them more difficult to correlate.
  • Outdated Information: Depending on a map’s purpose or function, it is very possible that many maps or visualizations are outdated in the time it takes to make them – especially with regard to dense, urban environments.
  • Inconsistent Quality: When it comes to publishing content from a centralized source, agencies will likely need to publish at multiple quality levels for multiple purposes. Without tight controls on formats, levels of detail, and thousands of other variables, the likelihood of producing uncorrelated or inconsistent maps rises – whether from badly processed data, human error, or both.

By now, it should be quite clear that introducing automation to the traditional manually-intensive process of building maps, plans, synthetic environments, and simulation databases is the best way for agencies and organizations to receive, process, fuse, and publish the petabytes of geospatial data they are facing.

Is Automation the Answer? Yes.

In his GEOINT 2017 keynote, Robert Cardillo, former Director of the National Geospatial-Intelligence Agency (NGA), stated, “We intend to automate 75 percent of the repetitive tasks our analysts perform so they have more time to analyze that last play and more accurately anticipate the next one. And then they can look much harder at our toughest problems — the 25 percent that require the most attention.”

By automating some or all aspects of the end-to-end workflow in the creation of maps, agencies will be equipped to manage any amount of data, from a wide variety of sources in a scalable, repeatable, and sustainable manner.

  • Scalability: With automation, it is possible to scale a workflow to thousands of machines, making results available in minutes instead of days.
  • Quality Assurance: Automated QA processes execute deterministically; a given workflow will yield exactly the same outputs from the same inputs (see the sketch after this list).
  • Unbiased: Automation removes the variability from one human operator to another – for better or worse. Subjective interpretation is replaced by authoritative, pre-established, rules-based processes.
  • Repeatability and Traceability: By automating data cleanup on input and formalizing transformation processes, agencies can gain traceability and repeatability and drastically reduce manual operations.
  • Correlation: Automation keeps outputs correlated with one another and with the inputs from which they are derived.
  • Reliability: In an automated process, workflows are standardized by nature, ensuring reliable results.
  • Expanded Workflows: Automation also allows agencies to leverage powerful new workflows. For example, fusion workflows can integrate partial data updates into existing base maps.
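
As a rough illustration of what determinism and traceability mean in practice, the following Python sketch (not the VELOCITY API; the file names and the transform are hypothetical) runs a processing step that always produces the same output for the same inputs and records a manifest of input and output hashes:

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        # Hash a file so the exact input version is recorded in the manifest.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def run_step(step_name, inputs, transform, out_path: Path) -> dict:
        # Apply a pure (deterministic) transform to the inputs and log a traceability record.
        out_path.write_bytes(transform([p.read_bytes() for p in inputs]))
        return {
            "step": step_name,
            "inputs": {str(p): sha256_of(p) for p in inputs},
            "output": {str(out_path): sha256_of(out_path)},
        }

    if __name__ == "__main__":
        # Hypothetical file names; any two local files would do.
        manifest = run_step(
            "merge_tiles",
            [Path("tile_a.bin"), Path("tile_b.bin")],
            transform=lambda chunks: b"".join(chunks),  # placeholder deterministic transform
            out_path=Path("merged.bin"),
        )
        print(json.dumps(manifest, indent=2))

Because the transform is deterministic and every input is hashed, rerunning the step on the same files yields byte-identical output and an auditable record of what produced it.
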
Automation

Automating data cleanup and formalizing transformation processes for all data sources gives agencies and organizations the ability to produce 2D, 3D, or VR environments for a wide range of applications while providing the required traceability and repeatability. It also drastically reduces – and sometimes outright removes – tedious manual operations, freeing analysts to focus on the critical tasks that require their attention and tradecraft.

Augmentation

By leveraging cloud computing and computer vision, the herculean task of quickly ingesting, processing, and transforming multiple geospatial data sources into timely, usable, actionable intelligence becomes achievable. Computer vision enables the acquisition, processing, analysis, and understanding of digital images, and the extraction of high-dimensional data from them.
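
As a minimal illustration of the kind of low-level computer-vision step such pipelines build on, the following sketch (assuming OpenCV is installed; the image file name is hypothetical, and this is not part of VELOCITY) extracts edge structure – roads, rooflines – from an aerial tile:

    import cv2

    # Load a hypothetical aerial tile as a grayscale image.
    image = cv2.imread("aerial_tile.png", cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(image, (5, 5), 0)                # suppress sensor noise
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)   # detect edges such as roads and rooflines
    cv2.imwrite("aerial_tile_edges.png", edges)
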

AI

Automation and workflows can be enhanced using both artificial intelligence and machine learning. The combination of computer vision and AI algorithms opens one of the most interesting avenues to automate the processing, integration, and analysis of GEOINT data. Thanks to the emergence of massive storage and processing capabilities in the cloud, the field of machine learning is progressing more rapidly than ever and can help automate numerous tasks once relegated to manual human intervention, such as:

  • Digital Terrain Model (DTM) extraction
  • Road network extraction
  • Building footprint, height, and rooftop extraction
  • Vegetation extraction (see the sketch after this list)
  • Land use classification
  • Temporal change detection
  • GIS data and sensor fusion
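
As a hedged sketch of one task from the list above, vegetation extraction can be approximated with a simple NDVI threshold rather than a trained model. The band order, threshold, and file names below are assumptions for illustration, and the example uses the open-source rasterio library rather than any Presagis tooling:

    import numpy as np
    import rasterio

    # Open a hypothetical multispectral raster; band order is assumed.
    with rasterio.open("multispectral.tif") as src:
        red = src.read(3).astype("float32")   # assumed band 3 = red
        nir = src.read(4).astype("float32")   # assumed band 4 = near-infrared
        profile = src.profile

    # Normalized Difference Vegetation Index, guarding against division by zero.
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    vegetation_mask = (ndvi > 0.3).astype("uint8")   # crude vegetation threshold

    # Write the mask as a single-band raster with the same georeferencing.
    profile.update(count=1, dtype="uint8")
    with rasterio.open("vegetation_mask.tif", "w", **profile) as dst:
        dst.write(vegetation_mask, 1)
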

So how can agencies and organizations adopt the AAA strategy to overcome these geospatial data processing challenges? A solution would need to support a wide range of industry-standard data formats and network simulation standards, and excel at the creation of virtual environments for a wide range of applications including GEOINT, defense and security, autonomous vehicles, and smart cities.

To solve these challenges, Presagis created VELOCITY.

VELOCITY: An Automated Solution

Automation is at the core of VELOCITY.

Leveraging over 20 years of experience providing geospatial processing tools and services to the defense and security industry, Presagis developed VELOCITY, a software solution that supports the continual automated processing of geospatial data, maps, visualizations, and 3D terrain.

As mentioned, the challenge of producing maps has increased as the data sources become more numerous and updates more frequent. Merging imagery, using aerial/UAV data, or including public and commercial data requires a robust production workflow that can accommodate this geospatial data. And that is what VELOCITY delivers.

The GEOINT Advantages of Automation

Through automation, VELOCITY streamlines the processing and production of rich and complex 3D maps and can help agencies:

  1. Continually and automatically ingest foundational geospatial data and sensor data (including LiDAR, photogrammetry, and radar),
  2. Maintain a centrally curated foundation data repository from which the majority of derivative geospatial 2D and 3D maps, visualizations, and simulation databases can be produced,
  3. Produce derived synthetic environments in hours rather than weeks or months.

By focusing on automation, VELOCITY is able to excel in the following areas:

  • Scalability
  • One World
  • Data Fusion
  • Machine Learning/AI
  • Publishing
  • Open Standards
Scalability

Automated and deterministic workflows are the key to massive scalability in the cloud. With automation in VELOCITY, it is possible to scale a workflow to thousands of machines, making results available in minutes instead of days. Integration of cloud computing permits distributed processing for projects of any size on public or private clouds.
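
The fan-out pattern behind this kind of scaling can be illustrated with a local stand-in: independent, deterministic per-tile jobs distributed across workers. The sketch below uses Python's process pool purely for illustration; in a real deployment the same pattern would be spread across cloud machines (the article cites HTCondor), and the tile IDs and job body are hypothetical:

    from concurrent.futures import ProcessPoolExecutor

    def process_tile(tile_id: str) -> str:
        # Placeholder for a real per-tile job (ingest, transform, publish).
        return f"{tile_id}: done"

    if __name__ == "__main__":
        # A small hypothetical 4x4 tiling of an area of interest.
        tiles = [f"tile_{x}_{y}" for x in range(4) for y in range(4)]
        with ProcessPoolExecutor() as pool:
            for result in pool.map(process_tile, tiles):
                print(result)

Because each tile job is independent and deterministic, adding more workers (or machines) shortens wall-clock time without changing the results.
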

One World: A Master Correlated World

VELOCITY enables and streamlines the continuous, automated ingestion of geospatial and sensor data into a single master geospatial representation of the world. The data repository or dataset may or may not be centralized, but there is only one correlated world. From this repository, agencies can very quickly generate and deliver nearly all geospatial 2D and 3D maps, visualizations, simulations, and derivatives to the point of need. This one-to-many/many-to-one organization of data and workflows ensures correlation between the multiple derived datasets, as they are all published from a single corrected master representation of the world.
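
A minimal sketch of the one-to-many idea: every derived product is generated from, and stamped with, the same master dataset revision, so derivatives stay correlated by construction. The names below (MASTER_VERSION, the exporter functions, the region) are illustrative, not a real VELOCITY API:

    # Hypothetical master repository revision shared by all derivative products.
    MASTER_VERSION = "world-2019-06-26"

    def export_2d_map(region: str) -> dict:
        return {"product": "2d_map", "region": region, "source": MASTER_VERSION}

    def export_3d_terrain(region: str) -> dict:
        return {"product": "3d_terrain", "region": region, "source": MASTER_VERSION}

    products = [export_2d_map("sector_7"), export_3d_terrain("sector_7")]
    # Both derivatives reference the same master revision, so they are correlated by construction.
    assert len({p["source"] for p in products}) == 1
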

Fusion: Many to One

To address the challenge of fusing a wide range of data types and sensors with foundational GEOINT data in a large-scale, automated, and incremental fashion, VELOCITY can create, augment, and constantly maintain a current, integrated, central geospatial data repository from which a range of geospatial products can be generated for geospatial, defense, and intelligence organizations.

Machine Learning/AI

Through projects with various universities, governments, and private partners, Presagis is actively involved in leveraging AI/ML algorithms for the integration of LiDAR, WAMI, and FMV sensor data, as well as a number of feature extraction processes, allowing VELOCITY to further automate the curation of 3D geospatial data repositories.

Publishing: One-to-Many

By publishing these geospatial products from a central repository in an automated manner, users can avoid the redundancies, discrepancies and inconsistencies that occur when these products are created in a non-collaborative manner or independently. In addition, this approach allows for much better productivity and scaling.

Open Standards

Data comes from anywhere and everywhere – from public, commercial, and private sources all the way to government and classified ones. Using open standards brings a high level of interoperability and exchange at both the data and modeling levels. As the types and sources of data increase, along with the outputs and the platforms they target, VELOCITY is built to ingest and publish using open, internationally accepted standards, allowing systems of different types and generations to interoperate – all in an automated manner.

Agencies and organizations are at a crossroads and need to introduce automation to their existing manual processes as soon as possible.
Conclusion

The problems facing those that build maps, virtual environments, visualizations, and simulation databases are growing:

  • Too many data sources
  • Too many data types
  • Mis-correlation of sources and outputs
  • Unmanageable frequency of updates
  • Need to publish to multiple targets/platforms

Through its ability to continually combine foundational geospatial data and fuse various data types and sensors into a central 3D geospatial data repository, VELOCITY can support any type of automated production pipeline and deliver outputs in hours rather than weeks or months.

By automating data cleanup on input and formalizing transformation processes, VELOCITY provides traceability and repeatability and makes it possible to significantly reduce – and sometimes even eliminate – manual operations.

The benefits of this AAA production pipeline approach are multiple:

  • Increased quality and accuracy: Through the fusion of multiple sources, automated 3D maps can provide better situational awareness through access to the latest picture of the mission theater. For example, firefighters will have the most recent information to plan firebreaks, and national emergency management agencies can more accurately plan disaster assistance.
  • Faster production: Applying these technologies increases the throughput of geospatial data and allows agencies to provide a better quality of service to their stakeholders. Tasks that once took weeks or months can now be accomplished in hours.
  • Lower cost: This new approach requires less manual intervention, permitting a more strategic use of the workforce. Additionally, automating tasks renders maps less prone to errors and ensures more consistent quality.
  • Scalability: Integration of cloud computing permits distributed processing for projects of any size on public or private clouds. VELOCITY is infrastructure agnostic and makes use of scripting and virtualization.
  • Integration: VELOCITY integrates with customer technology, processes, and data through a service-oriented architecture (SOA).
  • Format support: VELOCITY supports all major industry formats.

From its inception, VELOCITY was designed with an emphasis on automation. Presagis also prioritized VELOCITY’s ability to support third party software in order to directly address the challenges present in the field today and provide a solution that has no equal in the industry.

By leveraging widely used and recognized automation technologies such as Python™ and HTCondor™, and by integrating best-of-breed solutions from the geospatial, simulation, gaming, architecture, and entertainment industries such as GDAL, Terra Vista™, Unreal Engine™, CityEngine™, and Cinema4D™, VELOCITY promises significant productivity gains through automation. VELOCITY supports all standards (legacy and new) – CDB, VBS, FBX, OFT/MFT, and OBJ – as well as the new streaming services from ESRI (I3S) and Cesium (3D Tiles).
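
As a small, hedged example of the kind of open-standards interoperability GDAL provides, the following snippet uses GDAL's Python bindings to open a raster in one format and republish it as GeoTIFF (the file names are hypothetical, and this is not VELOCITY code):

    from osgeo import gdal

    # Open a hypothetical source raster in whatever format GDAL recognizes
    # and republish it as a GeoTIFF for downstream consumers.
    src = gdal.Open("source_imagery.img")
    gdal.Translate("published_imagery.tif", src, format="GTiff")
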

Whether you are interested in using VELOCITY to build a specific, localized area, or an entire continent, we invite you to contact us or meet us at one of the many events we attend around the world.


Image caption: Automation can be invaluable for fusing point clouds or ISR data with foundational GEOINT data.