HYDROSYS System specification

Authors
Ernst Kruijff, Eduardo Veas, Erick Mendez, Antti Nurminen, Ville
Lehtinen, Vincent Luyet, Michael Lehning, Thomas Grünewald, Ari
Jolma, Silvia Simoni, Thanasis Papaioannou, Ali Salehi, Jaouhar
Jemai, Ed Rosten, Brian Williams

Web: http://www.hydrosysonline.eu
Contact: [email protected]
HYDROSYS is an EC-funded Seventh Framework Programme STREP project (grant 224416, DG
INFSO) on spatial analysis tools for on-site environmental monitoring and management.
Report summary
The HYDROSYS project aims to provide a system infrastructure that supports teams of users in
on-site monitoring events for analyzing natural resources. This report specifies the system
architecture encompassing the hardware and software components. These components are
matched to the needs of the end-users, as specified in report D2.2 Application
Specifications, delivered simultaneously with this report.
This document provides an overview of the system architecture, after which the different
components are described in detail. Coupled to this document are the project control lists (D3.1)
that provide an overview of the planned work per task as well as their interdependence.
Contents

1 System summary
1.1 Report basis
1.2 Needs and requirements
1.3 System configuration
1.4 Innovation baseline
2 System component overview
2.1 Basic system components
2.2 Network requirements and setup
2.3 Cellular networks
2.4 Deployment
2.5 System integration and validation factors
3 Sensors and Blimp Component
3.1 Sensor overview
3.2 State of the art
3.3 Hydrological and other environmental sensors
3.4 Blimp
3.5 Camera framework
3.6 Validation factors
4 GSN Component
4.1 State of the art
4.2 GSN sensor network and components
4.3 Data exchange formats
4.4 Validation factors
5 SmartClient Component
5.1 Introduction to data services
5.2 Transcoding pipeline
5.3 Data processing for lightweight visualization
5.4 Validation factors
6 Simulation Component
6.3 State of the art
6.4 Validation
7 Tracking Component
8 Interaction and Graphics Component - Graphics
8.1 Interaction and graphics overview
8.2 State of the art
8.3 Visualization platforms
8.4 Visualization approach
8.4.1 Visualization for the Augmented Reality platform
8.4.2 Visualization for cell phones
8.5 Focus + context
8.5.1 Focus+context for the Handheld Platform
8.5.2 Focus+context for the Cellphone Platform
8.6 Validation factors
9 Interaction and Graphics Component – Interaction
9.1 Handheld User interface overview
9.2 State of the art
9.3 Sensors and support setups
9.4 User interfaces for the Augmented Reality platform
9.5 Cell phone interfaces
9.6 AR Widget toolkit
9.7 Viewpoint manipulation
9.8 Sensor placement interface module
9.9 Data selection interface module
9.10 Simulation Interface
9.11 Collaboration tools
9.12 Base interface
9.13 Validation factors
Appendix 1. Studierstube sub-components
Appendix 2 Specification sheets
   Blimp
   Panasonic Toughbook CF-U1
Appendix 3 Site and campaign planning
Appendix 4 WLAN bridge setup and experiment results
Appendix 5 Network diagrams
Appendix 6 Simulation models
Appendix 7 Traditional visualization methods
Appendix 8 Handheld display system construction
Appendix 9 Abbreviations
References
1 System summary
Our natural environment is undergoing dramatic changes. Scientific and political efforts that
focus on the earth’s ecological changes are underway, among others by tackling problems
associated with environmental degradation. The HYDROSYS project aims to provide aids that
can help in this process. Focusing foremost on hydrological processes and their effects on the
environment, the project consortium will research and develop tools that help stakeholders
monitor and manage environmental situations. These situations may encompass a whole range
of problems, including water quality or damage caused by melting permafrost. Though the
infrastructure developed within the project will not solve all problems at once, it allows for new
ways of analysis that were previously hardly possible: taking a close look at environmental
processes where they truly happen, in the field, aided by modern sensing technologies. Whereas
the analysis of environmental processes is currently largely done in the office, connecting the
actual situation in the field with detailed sensing information is expected to yield results
that will change the way stakeholders observe and take decisions. Even though in-field
observations have been performed for a long time, the information baseline on which decisions
can be made will be far higher once the HYDROSYS system is deployed. It is expected that
environmental problems and their impact can be defined more accurately, aiding the process of
providing better solutions.
Thus, by using the HYDROSYS system, we expect that decision making processes can be
optimized, especially when different stakeholders access information using HYDROSYS as a
common platform.
The system will allow users to analyze and control sensor data sources at a fine time and space
scale, enabling planning or the taking of effective measures (solutions).
Though HYDROSYS mainly focuses on hydrological processes, the scope is wider: the system is
designed to potentially cope with any kind of environmental process. Throughout
the user requirements and system design phase, a large group of representative end-users was
involved, encompassing the different disciplines that may use the system. Stakeholders that are
currently actively targeted and were involved include:
• Environmental specialists (engineers, biologists, geologists, geographers)
• Specialists in hazard management
• Municipalities
• Environmental authorities
• Watershed managers
In addition, we also included more general user groups like tourists and school children that may
use the system for more general purposes like education or obtaining environmental information.
The tasks that encompass the analysis of environmental processes on site can be structured
into several major high-level task groups that integrate work being performed both in the office and in
the field. The major task groups are monitoring and understanding environmental processes, and
managing environmental matters. Both groups of tasks are supported by decision making and
communication activities. Finally, due to the dependency on modern sensors, activities also
encompass the setup and maintenance of sensors or sensor networks. HYDROSYS will provide
direct (or indirect) support for most of the tasks in these task groups.
Approach and high level system components
All these tasks directly relate to the core functionality of the system: to support spatial
analysis of environmental processes in so-called event-driven campaigns. During these
campaigns, users access data gathered by a grid of sensors or sensor stations that can be
connected into a sensor network, as well as external data sources. The potential power of the
HYDROSYS system is an outcome of the ability to directly relate sensed data to the
environment being observed, on a mobile handheld device like a smartphone or an ultra-portable
computer. The relation between environment and sensory data is achieved by using an overlay
technique: visualizations of sensory data are either registered (overlaid at the right position) on
a map view, or over a real-time video image. The latter technique, also known as augmented
reality, allows the user to walk around and observe the environment, continuously getting a
“correct view” on the sensor data.
Figure 1 Overlay technique for video images (Augmented Reality)
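To make the registration idea concrete, the following minimal sketch shows how a geo-referenced sensor position could be projected into a video frame using a pinhole camera model, given a device pose such as the one delivered by the tracking component. This is an illustration only, not the project implementation; the function names, pose and camera parameters are all assumptions.

```python
# Illustrative sketch of video overlay: project a geo-referenced sensor
# position into pixel coordinates with a pinhole camera model.
# All names and parameter values below are hypothetical.
import numpy as np

def project_to_image(p_world, R, t, fx, fy, cx, cy):
    """Project a 3D point (local site coordinates, metres) to pixel coords.

    R, t: camera pose (world-to-camera rotation and translation).
    fx, fy, cx, cy: camera intrinsics of the handheld device.
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    p_cam = R @ p_world + t            # world frame -> camera frame
    if p_cam[2] <= 0:                  # behind the image plane: nothing to draw
        return None
    u = fx * p_cam[0] / p_cam[2] + cx  # perspective division
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Example: a sensor 20 m in front of the user, slightly to the left.
R, t = np.eye(3), np.zeros(3)          # camera aligned with the site axes
uv = project_to_image(np.array([-2.0, 0.5, 20.0]), R, t,
                      fx=800, fy=800, cx=320, cy=240)
print(uv)  # pixel position at which the sensor visualization would be drawn
```

As the user walks around, the pose (R, t) is updated continuously by the tracking component, so the overlaid sensor labels stay registered to the environment.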
The overlay techniques greatly enhance the support for decision-making, both in the field and at
remote locations, by providing an accurate and recent overview of the observed site.
Visualizations of sensory data can be produced on the fly, allowing the user to see the most
recent sensory data. In addition, sensory data can be run through different simulations to
enhance prediction making. All the sensory data and any other associated information can be
retrieved from a shared information space over the network, independent of the current location of the
user.
In order to support the mentioned approach, a system has been defined that consists of several
high-level components:
• A grid of sensors or sensor stations (Section 3)
• Data storage for sensor data and associated information (Section 4)
• Services to select, process and retrieve data (Section 5)
• Continuous and on-demand simulation services (Section 6)
• Tracking services of mobile devices (Section 7)
• A portable interactive visualization platform (mobile device) (Sections 8/9)
Throughout the report, the different system components will be described in detail. Section 2 will
provide more information on the core system components, including the data flow and the
network requirements. It also provides a summary of how the work is planned. Section 3 deals
with all the sensors being used in the project, whereas section 4 states how the sensor data is
stored. Section 5 provides an overview of the range of data services to process the sensor data,
whereas sections 8 and 9 provide details on the visualization aspects and front-end of the system.
1.1 Report basis
The system report is a result of multiple steps performed during the first half-year of the project.
Figure 2 provides a basic overview of these steps. The process is characterized by multiple steps
of refinement as a result of discussions (the loops are not visualized in this figure). More
information on the methodology of the different steps can be found in the UCD manual D2.1;
results of the end-user steps are noted down in the application report D2.2.
The user feedback has provided valuable input to refine the initial plans laid out in the description
of work (Annex). Especially, it refined the boundaries for deployment later on by the actual
end-users, improving the potential impact of the project. Supported by the cyclical nature of the
project, the system description still holds several levels of innovation that are not directly
envisioned (but principally supported) by the end-users, in order to truly advance the field of research. This
is of utmost importance to support the innovative nature of the project, since the end-users
(mostly bound to current practice) are hardly able to interpret many of the more technical aspects
of the project. UCD-based projects have shown that once these more technically advanced
methods are demonstrated in later phases of a project, end-users better understand the final direction of
an envisioned research prototype. Previous experiences with UCD have shown this actually
helps the improvement of the current work situation to a larger extent.
Figure 2 UCD Work Process
1.2 Needs and requirements
Throughout the user-centered design (UCD) process, representative users were involved
repeatedly in defining on-site monitoring and its purpose for different user groups. A detailed
overview of this process can be found in the UCD manual (D2.1), whereas the Application
descriptions (D2.2) provide an in-depth overview of users, tasks and requirements. In this section,
a short summary is provided as reference for the system needs.
As noted in the introduction, the main tasks are monitoring and management, supported by
decision making and communication, and tasks associated with the setup and maintenance of
sensors. The tasks can be separated into actions that can or should take place in the office, and
those that extend the current practice by taking place in the field using the envisioned
HYDROSYS infrastructure. An overview of the high-level task groups and their subtasks can be
found in the table below. A more detailed table of these tasks and subtasks together
with information on direct and indirect support in HYDROSYS can be found in the appendix of the
D2.2 report. It also provides a detailed overview of the envisioned effect of the HYDROSYS
system on these tasks.
Task: Monitoring and understanding environmental processes (at workplace and on-site)
Subtasks:
• Outline problem
• Gather data
• Order and download data
• Integrate, visualize and analyse data
• Model environmental process
• Define solutions
• Supervise projects
System requirements:
• Software system platform for visual analysis
• Possibilities for data collection in the field (environment, sensor data)
• Data storage and distribution solution (shared information system)
• Tools that support decision-making in the field and in the office
• Data comparison methods
• Support for modeling / simulation
System components offered by HYDROSYS platform:
• 3D/AR software platform for on-site monitoring
• Scene reconstruction methods (3D model generation through blimp)
• Multivariate sensors and sensor stations
• Shared information system for streaming / time-stamped data
• On-site monitoring interfaces: user interfaces that allow the user to observe changes in the environment / observe visualized sensor data
• Visual methods for data comparison (visualization and user interface techniques)
• Multi-viewpoint analysis methods through usage of different cameras on-site
• Simulation support

Task: Managing environmental processes (at workplace and on-site)
Subtasks:
• Design plans
• Set up plan
• React (enforce plan)
• Quantify damages
System requirements:
• Methods to monitor changes in the environment
• Communication / collaboration methods
• Reporting / annotation methods
System components offered by HYDROSYS platform:
• On-site monitoring interfaces: user interfaces that allow the user to observe changes in the environment / observe visualized sensor data
• Integrated voice communication methods
• Data sharing methods
• Annotation / reporting methods

Task: Decision making
Subtasks:
• Study problem, situation
• Generate model and possible solutions for problem, situation
• Make decisions
System requirements:
• Communication methods in the field and to the office
• Shared information system
System components offered by HYDROSYS platform:
• Integrated voice communication methods
• Shared information system

Task: Sensor setup and maintenance
Subtasks:
• Create sensor stations
• Place sensor station in the field
• Maintain sensor station
System requirements:
• Sensor (station) system overview
• Data quality methods
• Preliminary sensor feedback for placement
System components offered by HYDROSYS platform:
• Sensor station front end providing information on sensor state and sanity
• Data quality methods integrated in shared information system
• Sensor placement interface

Task: Communication
Subtasks:
• Communication
• Sharing information
• Coordination
System requirements:
• Communication methods
• Shared information system
System components offered by HYDROSYS platform:
• Integrated voice communication methods
• Shared information system for streaming / time-stamped data
It should be noted that the separation between work done in the office and on-site is a fundamental
characteristic of current work practices concerning any location-bound environmental activities –
a theme that was frequently brought up in the end-user interviews of virtually all the user groups
(details can be found in the D2.2 report). Some tasks, such as decision making, are not
inherently bound to either end, but most tasks, such as gathering data and
outlining the problem, demand presence and/or resources both on-site and at the workplace. The
problem stems from the different resources available at the two ends. Generally, at the office the
worker is able to access a wide range of information systems (including GIS) and expertise
(colleagues), while at the site itself the worker has hands-on access to the problem at hand, which
is usually necessary for delivering actions. The HYDROSYS platform strives to address this
problem by offering better means of accessing relevant information on-site.
Furthermore, it extends the office-accessible information sources by offering close to real-time
monitoring support using highly graphical interfaces, and better support for teamwork to improve
the sharing of information and expertise.
HYDROSYS does not intend to eliminate the need for the current practice of delivering the work
both at the workplace and on-site, but rather to provide more efficient means for bridging the
perceived gap between the two endpoints. The approach HYDROSYS takes is to
offer a platform onto which information can be inserted, related and shared.
The main tasks for which HYDROSYS attempts to offer direct support concerning monitoring and
understanding environmental processes, based on the table above, include outlining the problem
and the gathering, ordering, integrating and visualizing of data. The platform itself does not
provide direct analysis of the data or the consequent decision making, but as these are
important parts of the users’ main tasks and processes, they are recognized throughout the
development process and indirect support is intended, e.g. in user interface and data-format
aspects along with the overall system architecture. Administration and communication activities
for their part are supported by advanced information sharing and collaborative data
generation and recording mechanisms, respectively. Sensor station setup and maintenance, on the other hand,
does not receive direct support from the system, except for the help the system can provide
in defining the optimal location for installing a sensor station. Also, the data feeds
from the sensor stations themselves can be used to offer information on the sensor
station’s condition in addition to the observed environmental attribute.
The task analysis was an essential aid in defining the context of use for the HYDROSYS platform.
The results of the end-user interviews were gradually harvested into information on the users’
work practices and the functional and technical needs and requirements for the system, along with
an overview of current tools and artifacts used in solving the relevant environmental problems.
Application concepts were created for the different defined user profiles to give an overview of how
the system together with its underlying technologies could potentially be used by the respective
user group. Furthermore, the full overview of the tasks and processes of the different users
concerning relevant environmental activities was outlined during a task analysis. This
comprehensive overview of how the users solve problems using current tools and practices was
then adapted to provide an overview of how the problems could be addressed using the new
system. Various ideas and interpretations were given quick validation (e.g. comparison to
research goals and resources) during the process to see what parts of the formed overview could
and should be applied to the system development.
Collaboration
In hydrology, projects generally involve several partners (such as local and regional administrations
and environmental companies) with various backgrounds. They have to work together in order,
among other things, to design technical solutions. At present, this stakeholder collaboration is
mainly done through mobile phone, email, maps, reports and meetings. However, this is not
enough: there are still misunderstandings and limitations in exchanging and sharing
information. In that context, HYDROSYS technology will be useful within this process because,
among many things, it will provide real-time communication (e.g. images, data, graphs, and so on)
that will help coordination and enhance the data visualization process.
HYDROSYS will also examine the possibility of location-based annotations that could be shared
among colleagues or the whole community. The annotations would allow users to place their
observations directly onto the environment, also acting as anchors for simple discussion forums
and being rated by other users. Sharing the annotations and their associated discussions may happen
in near real time, pushed by the server. In addition, direct user-to-user messaging will be
examined, for example from a person working in a desktop environment to a person on site. The
need for more generic document sharing within the HYDROSYS system is examined, and if found
feasible, implemented. The development of these features depends on both the true needs of users
and technical feasibility, determined as part of validation. In this sense, the differences between
platforms are considered: it might not be reasonable to share or send large CAD documents to
cell phones, while the same documents could be useful in desktop environments.
1.3 System configuration
HYDROSYS focuses on different end-users that may use the system under different
circumstances, and with different goals in mind. In order to offer a basic level of flexibility, the
system design presented in this document is reconfigurable. This re-configurability occurs at the
level of the functional representation (the number and kind of functions presented to a user), and the
way the software components can run on the different hardware configurations. To handle this,
we base our system on the concept of profiles. Profiles are configuration settings defined for
user preferences, task needs, location infrastructure, and hardware capabilities. The configuration
required for HYDROSYS is then a tuple of profiles: (User, Task, Location, Device), as sketched
after the following list.

• User: the system will offer a way of setting up user profiles that configure the preferences of the user. These preferences will have an effect on the functional space presented to the user. For example, a school kid hardly has a purpose for a complex simulation interface, so different tools or interfaces can simply be blended out.
• Task: these preferences have a direct effect on the data retrieval (data filtering), and the interaction tools presented to the user. HYDROSYS targets the support of multiple tasks such as sensor placement or decision making. Each task has different requirements and not all information is necessary at all times. The task profile then allows the system to present (and request) only the information necessary without wasting resources.
• Location: every location or site will be prepared in advance during the site building step (section 2.4.1). This profile will define the information that is available and necessary in each field location, independent of tasks, devices or user preferences.
• Device: denotes the hardware capabilities. Users do not always have the possibility to take an extensive hardware setup into the field: some of these components do require technical support, or are simply not usable at a site. For example, weather conditions might limit an extensive setup, some sites that environmental scientists access are not always well reachable and might need special transportation, and stable network connections are not always available. The profile will contain information on the current graphical, processing and network connectivity capabilities of the hardware; this will guide the way the system runs, for example by reducing graphical enhancements on low-powered devices.
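As an illustration of the tuple mentioned above, the following sketch shows one possible encoding of the four profiles; all field names and example values are assumptions made for this document, not the project's final schema.

```python
# Minimal sketch of the (User, Task, Location, Device) profile tuple.
# Field names and values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    role: str                                  # e.g. "hydrologist", "school_kid"
    hidden_tools: list = field(default_factory=list)   # interfaces blended out

@dataclass
class TaskProfile:
    name: str                                  # e.g. "sensor_placement"
    data_filters: list = field(default_factory=list)   # drives data retrieval

@dataclass
class LocationProfile:
    site_id: str                               # prepared during site building
    available_layers: list = field(default_factory=list)

@dataclass
class DeviceProfile:
    gpu_tier: str                              # guides graphical enhancements
    network: str                               # e.g. "gprs", "wlan_bridge"

@dataclass
class Configuration:                           # the full (U, T, L, D) tuple
    user: UserProfile
    task: TaskProfile
    location: LocationProfile
    device: DeviceProfile

cfg = Configuration(
    UserProfile("school_kid", hidden_tools=["simulation_interface"]),
    TaskProfile("field_trip"),
    LocationProfile("fluelapass", available_layers=["dtm", "snow_temp"]),
    DeviceProfile(gpu_tier="low", network="gprs"),
)
```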
The system will allow flexibility in both functional space and hardware configuration to reduce the
complexity of introduction in an organization. It should be possible to make use of a basic
HYDROSYS system without introducing a noticeable level of technical overhead (like buying and
setting up numerous servers). More information on user profiles can be found in section 1.3,
whereas section 2.4.2 provides a more detailed overview of the hardware platform configuration.
1.4 Innovation baseline
Environmental issues have become a common discussion topic and worry among the public.
Awareness of local environmental conditions is increasing. Municipalities and other authorities
have reacted by providing increasingly environmentally friendly solutions in urban design and
regulations. However, decisions are often based on local environmental data, which is typically
inaccurate or not up to date. Environmental monitoring could be improved by sensor stations, but
traditionally these stations are mere weather stations, although nature is nearly infinitely rich in
properties, features and events that depend on and affect our environment. In addition, resources
for setting up dense multi-purpose sensor networks are limited. While the real environment is
constantly changing, most GIS data sets are essentially static. Sometimes these data sets are
even unsuitable for properly modeling the phenomena at hand. For example, if surface water
flowing in urban areas is estimated only with digital elevation maps, a simulation would provide
results where water would flow through buildings – and the available tools may not support
integration of building CAD models. When problems arise in the environment, they are seldom
noticed automatically. Municipalities do not have resources to monitor everything, but can act
when a problem is pointed out. However, when the environment is examined on location,
whatever GIS tools exist at the office are not available in the field, or provide only very simple
functionalities, again without much spatial or temporal resolution, without taking advantage of the
physical environment, and presenting the data in an abstracted manner.
HYDROSYS addresses these issues by leveraging small scale, temporally and spatially accurate
and dense measurements, facilitating mobile, highly graphical interfaces to the environment,
streaming the related static and near real time measured sensor data sets to the mobile devices
when needed. HYDROSYS measurements are organized as campaigns, as quick responses to
observed events, placing sensor stations on demand and utilizing accurate simulation models.
Certain local, possibly abstracted static data sets can be replaced by real time feeds from a
blimp, which allows direct aerial observations.
The system is designed to support a range of environmental processes, from water pollution to
permafrost observations. HYDROSYS mobile applications go beyond state-of-the-art in GIS,
providing an interactive and intuitive interface directly to the environment. Multiple near real time
sensor feeds and data sets are integrated and presented on the device, selectable by the user
and adapted according to the users’ profiles.
The main innovation of HYDROSYS comes from emphasizing the dynamic nature of the world,
and the importance of small scale phenomena that can be researched and modeled with accurate
data sets. HYDROSYS allows the examination of the same data both at the office and in the field, where
observations can be directly annotated, at the very moment they are made. Cooperation is
leveraged with instant sharing and updating of all observations, up to the point where a single
observation can be associated with a full online chat or discussion thread. The support for
cooperation, when opened to the public, also encourages citizens to actively participate in taking
care of the environment and to note potential problems that would otherwise go unnoticed
by the municipalities.
HYDROSYS assists decision making by providing multiple visualizations of complex data, where
the user may select from a variety of data sets and their related visualizations to find the most
intuitive view. HYDROSYS does not attempt to provide solutions to environmental problems; this
is the domain of human experts. In addition, most legacy data needs to be manually processed to
be integrated into HYDROSYS. In time, this can hopefully be at least partially automated along with
data format standardization efforts and related open software development.
2 System component overview
2.1 Basic system components
We have generated a general system architecture for HYDROSYS that encapsulates all the
major software development activities. These components target multiple problems to be solved,
such as pose acquisition, data transmission and so on.
These components are software specific and therefore do not match directly to the work
package tasks and descriptions in the Annex. Most of the components span multiple tasks
and milestones, while correspondingly many work package tasks do not have a matching software
component (for instance, T3.1 Management). The components are not platform specific either;
some of the components may run on a single device (say a laptop or UMPC) while others may be
performed across multiple devices (for example, multi-server simulation tasks). The following
diagram outlines the general software architecture components and connections for HYDROSYS.
Figure 3 Software Architecture Diagram
This diagram shows the overall connectivity of all the HYDROSYS components. Each of these
performs a specific task for the system. In the general case, these tasks do not run concurrently
nor are they collocated. For example, simulations may be prepared days before the user goes to
the field. A more detailed description of the components follows:
Sensors and Blimp. This component represents all data being generated by sensor devices such
as Sensorscope stations, temperature, moisture and water discharge sensors. This component
also encapsulates the information generated by the Blimp: a textured model of the environment as
well as video and pose.
GSN. This component is in charge of the storage and transmission of data for propagation during
field work. This component runs on multiple devices at the same time and the exact deployment
configuration is specific to each site and campaign.
Simulation and Warning. This component performs all the simulation tasks of the project. It
receives input from sensors through the GSN component and its results are delivered back. Its
work is transparent to the rest of the project tasks.
Static Sources. This is not strictly a component controlled by HYDROSYS. This component
provides non-streaming legacy data to the pipeline. Due to its static nature it is unnecessary to
create a special component for its handling. It is shown in this diagram mainly to illustrate the
need for other data sources aside from GSN.
Smart Client. This component performs several tasks both in the office and during field work. Its
main goal is to prepare information in a form that can be efficiently handled by the mobile devices.
Data coming from sensors and simulation serves a general purpose and is not specific to real-time
visualization; it can be used by other projects building on our findings. The Smart Client
component will reduce data, prepare 3D models, filter information and so on. This will be done in
several stages: information of a static nature (such as DEMs, GIS data and pre-acquired sensor
data) will be prepared during campaign planning; this will be performed on the graphics
workstations in the office. Information that can only be acquired during on-site field work, such as
new sensor readings, will have to be processed on-the-fly on the on-site laptop.
Interaction and Graphics. This component is the interface between the users and the system, making
the rest of the components transparent. Users will not be able to directly control any of the
other components in the system but will instead interact via the Interaction and Graphics component.
It will present rendered and overlaid views of the data depending on the user’s location and preferences,
as well as interaction means such as visual controls and communication tools.
Tracking. This component is in charge of providing pose information to the mobile devices. It is a
hybrid solution supported by multiple devices and techniques. Results are propagated in the
system by piping them through the GSN component.
Video. This last component is in charge of providing all video feeds of the devices and
propagating them through GSN. The only video feed not controlled by this component is that
provided by the Blimp.
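Purely as a schematic illustration of the data flow described above (Sensors and Blimp → GSN → Smart Client → Interaction and Graphics), the following sketch chains the components as Python generators. In reality these are separate processes running on different machines; all names and the record layout are assumptions.

```python
# Schematic sketch of the component pipeline; not the actual implementation.
def sensors():                        # Sensors and Blimp component
    yield {"station": "ss-01", "temp_c": 4.2, "ts": 1234567890}

def gsn_store(readings):              # GSN: storage and propagation
    for r in readings:
        yield r                       # in reality: persisted, then streamed

def smart_client(stream):             # Smart Client: prepare data for mobiles
    for r in stream:
        r["label"] = f"{r['temp_c']:.1f} °C"   # reduce / transcode on the fly
        yield r

def render(stream):                   # Interaction and Graphics front end
    for r in stream:
        print("overlay", r["station"], r["label"])

render(smart_client(gsn_store(sensors())))
```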
2.2 Network requirements and setup
The HYDROSYS system depends on different kinds of networks for data transmission. Data
needs to be retrieved from sensors in the field, and users in the field want to receive processed
data from the remote servers. However, due to the unstructured nature of most of the sites
end-users will explore, network connections can become an issue. Several of the sites we have explored
so far have only (low-bandwidth) telephone-based connections that likely do not meet our
bandwidth requirements: whereas areas close to cities (the Nordic scenarios) do not pose large
problems, the remote areas in the Alps are rather badly connected. The consortium is taking up a
new direction (see next section) to solve this issue.
2.2.1 Network issues
The data received from normal sensors generally needs just a low-bandwidth connection: the
current generation of wireless sensors makes use of GPRS connections to transmit data for low-rate
sensors. However, once the number and update rate of sensors per sensing station go up, GPRS
may become a problem. Especially when telecom companies do not offer high-bandwidth
connections, data might not be received very quickly by handheld units. Depending on the data being
observed, higher bandwidth might be required for users to explore data from remote sources;
cell phone users hereby have lower bandwidth requirements than handheld computer users. Next to
exploring alternative network connections (WLAN or WIFI bridge), the HYDROSYS system will
offer a pre-upload of data prepared in the office and optimization methods for streaming data to
lower bandwidth requirements.
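A back-of-envelope calculation makes the bandwidth concern above concrete; all figures in the following sketch are assumptions for illustration, not measured values from the project.

```python
# Back-of-envelope check (assumed numbers) of when a GPRS uplink becomes
# the bottleneck for a sensing station.
sensors_per_station = 10
bytes_per_reading   = 50        # value + timestamp + identifiers, assumed
readings_per_sec    = 1         # 1 Hz per sensor, assumed
gprs_usable_bps     = 20_000    # usable GPRS throughput, assumed

load_bps = sensors_per_station * bytes_per_reading * 8 * readings_per_sec
print(f"offered load: {load_bps} bit/s "
      f"({load_bps / gprs_usable_bps:.0%} of the GPRS link)")
# 10 sensors * 50 B * 8 bit * 1/s = 4000 bit/s -> comfortable; raising the
# update rate to 10 Hz (40 kbit/s) would already saturate this link.
```

With these assumed numbers a single station fits comfortably within a GPRS link, but a tenfold increase of the update rate, or several stations sharing one uplink, would already exceed it.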
As can be seen in Appendix 5, different kinds of networks will be deployed:
• General networks (LAN, WAN) to interconnect servers running the remote services
• GPRS networks for smartphone / cell phone communication with the remote services
• WLAN bridge or GPRS for handheld units to communicate with remote services
• WLAN for inter-handheld connections on-site
2.2.2 WLAN bridge
Remote measurement stations are usually queried over the mobile telephony network, whether
over GSM, GPRS or 3G technologies. The issue is made complex by the requirements of working
in remote mountainous areas, where no coverage is available or when the bandwidth of the
available telephony link is not sufficient, especially not for communication with the handheld units.
Generally, the following technologies are used to transfer data from remote stations.
• GSM modem: this is the standard solution for communications with mountain stations, usually used because this was the technology available when the station was constructed, although it will also work with very low signal strengths. The communication rates are relatively slow (56kbps) and when only a weak signal is available at the station, error bits cause communication rates to be significantly reduced. Costs are calculated according to the length of time taken for the communications. The modem has a low power draw.
• GPRS modem: this is a low data rate solution (theoretically up to 85.6kbps, but usually reduced bandwidth) which has the same coverage as GSM. This modem also has a low power draw. Costs are calculated according to the amount of data transferred; hence for small data volumes, this is ideal.
• 3G technologies: this is the high data rate version of GPRS, with data rates up to several Mbit/s. Coverage is limited in the mountains to populated areas.
Since these network links cannot always be guaranteed, a wireless link is required to
bridge the gap between the measurement area and an area where high-bandwidth mobile
telephony coverage or a fixed cable network is available. The link must accommodate the
following:
• Wireless connection over a relatively long distance (several kilometers)
• Low power requirement (the site is not connected to the power grid)
• Bandwidth in line with the amount of data that has to be transferred

On the positive side, the requirements are very low by today's standards in several respects:
• The bandwidth requirements are often limited for sensor data transmission, whereas for handheld communication, data transmission is likely only peaked (non-continuous, bulk) and does not need to provide a continuous stream of information
• The data can be stored on site for an extended period of time before performing a bulk transfer
• There is no need for an entirely permanent connection, i.e. the communications equipment can be put to sleep between transfers
To tackle the network issue, the consortium chose to explore WIFI connections using high-gain
antennas. This technology is cheap and unlicensed (although regulated). It uses regular WIFI
radios (and protocols) together with parabolic antennae. The technology is quite mature but has
not seen widespread use coupled with high-gain antennae.
The longest distance covered using WIFI (but with a temporary setup) was 279km. Commercial
antennae are available with gains of up to 27dB. The length of a communications link depends on
the terrain and the weather conditions, although distances of approx. 10km are easily possible
with these antennae. The use of a multi-hop network can extend this distance even further. WIFI
technology will therefore be the technology used in HYDROSYS to bridge areas of low mobile
coverage or for high data rate up/downlinks (14Mbps have been achieved over 6km distances in
initial experiments). First experiments have shown encouraging results. More details on the
WLAN bridge can be found in Appendix 4. HYDROSYS expects to deploy a total of 6-8 WLAN
bridge boxes to be able to operate at least two deployment sites simultaneously.
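As an illustration of why such distances are feasible, the following sketch computes a rough free-space link budget for a 10km bridge at 2.4GHz; the transmit power, antenna gain and receiver sensitivity figures are assumptions, not measurements from our setup.

```python
# Rough link-budget estimate (assumed figures) for a long-distance WIFI
# bridge with parabolic antennae on both ends.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

tx_power_dbm       = 15     # typical WIFI radio output, assumed
antenna_gain_db    = 24     # per parabolic antenna, assumed (up to 27dB exist)
rx_sensitivity_dbm = -85    # at a low 802.11 rate, assumed

loss = fspl_db(10_000, 2.4e9)                    # 10 km at 2.4 GHz
rx = tx_power_dbm + 2 * antenna_gain_db - loss   # gain applied on both ends
print(f"path loss {loss:.1f} dB, received {rx:.1f} dBm, "
      f"margin {rx - rx_sensitivity_dbm:.1f} dB")
```

Even with these conservative figures the 10km link retains a margin of roughly 28dB; real links lose additional margin to cabling, pointing error and weather, which is consistent with the distances quoted above.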
2.3 Cellular networks
3G cellular phone networks are nowadays available in urban areas (even though with varying
transmission speeds), but due to the shorter range of 3G networks, only older generation, slower
networks can be expected in most of the Nordic Scenario areas. The 3D map branch of
HYDROSYS mobile applications can utilize any cell network capability that is available, and is
designed to utilize even slow connections to the fullest. Cellular networks do not require special
set-up, although extending them directly with our own base stations without operators is not
feasible – this is better done with WLAN for the Alpine Scenarios.
The 3D map applications require a binary XML connection to the HYDROSYS Data Services, so
a server must be set up in a non-firewalled environment, or a TCP/IP port opened for it in a
firewall.
The Nordic Cases rely on cellular networks for transmitting 3D content and near real-time sensor data.
The current 3G and 3.5G cellular networks are advertised by mobile operators to yield fast and
reliable transmissions almost anywhere, but we expect that in practice, only previous generation
connections, such as GPRS, may be available.
To verify cell network capabilities, we have performed a set of small tests. The results are
indicative and dependent on operator, network load, position, motion and may vary from day to
day. Table 1 (A) presents the performance of a previous generation (2.5G) cell network in good
conditions in a city. The tests were performed on a Nokia N93 mobile phone in a stationary
position. The round trip times (RTT) were measured by sending and receiving minimum size
packets. We observed variation from 400ms to 800ms. Packet loss was measured using 500 byte
long UDP packets, and was nearly negligible: of 100 packets, 1–3 were dropped. The bandwidth
generally stays at 10kB/s or better. Table 1 (B) presents similar tests on 3G networks. In good
conditions, the round-trip time is significantly better, and the network speed stays near 40kB/s.
Technology | Packet loss (UDP 0.5kB) | UDP RTT   | Max TCP speed downlink | Max TCP speed uplink | TCP RTT
GPRS/EDGE  | <3%                     | 400–800ms | 13-15kB/s              | 9-11kB/s             | 400-800ms
UMTS       | <1%                     | 140–150ms | 40kB/s                 | 40kB/s               | 140–150ms

Table 1 Network statistics for (A) 2.5G (GPRS/EDGE) and (B) 3G (UMTS) networks
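A probe of the kind used for these measurements could look like the following sketch; this is an assumed reconstruction rather than the actual test harness, and the echo server address is a placeholder that must be replaced by a cooperating endpoint.

```python
# Sketch of an RTT / packet-loss probe of the kind behind Table 1
# (assumed reconstruction; requires a cooperating UDP echo server).
import socket
import time

SERVER = ("echo.example.org", 7)      # placeholder echo endpoint
N, PAYLOAD = 100, b"x" * 500          # 100 probes of 500-byte packets

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
rtts, lost = [], 0
for _ in range(N):
    start = time.monotonic()
    sock.sendto(PAYLOAD, SERVER)
    try:
        sock.recv(1024)               # wait for the echo
        rtts.append((time.monotonic() - start) * 1000)
    except socket.timeout:
        lost += 1                     # no echo within 2 s -> count as dropped
if rtts:
    print(f"loss {lost}/{N}, RTT min/avg/max = "
          f"{min(rtts):.0f}/{sum(rtts)/len(rtts):.0f}/{max(rtts):.0f} ms")
```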
Figure 4 Cell network characteristics. 3.5G network speed ranges between GPRS/EDGE (0-16kB/s), UMTS (16-48kB/s) and HSDPA (48-384kB/s) as a test person walks from (1) to (4).
Finally, we performed a test on HSDPA (3.5G) networks to characterize the connection behavior
when the client is moving. A user walked a certain path in a city several times, while a server
pushed as much data as possible to the client. Figure 4 presents the observed network speed
and the related positions within the city. The peak rates are near 350kB/s, but the graph is
characterized by its variance and location dependency, not peak or average rates. We present a
single test in the figure, as averaging over time would have hidden the characteristic behavior.
Succeeding tests revealed the same location dependency and the same variance.
We performed less quantitative tests for network coverage in various environments, and found
that quite often, 3G networks were not reachable inside buildings or outside urban areas,
despite the local operators’ coverage advertisements. However, in these cases, GPRS was available.
Further tests may need to be performed in the Nordic Case areas, where at least GPRS
connections are expected.
With respect to the Alpine use cases, network coverage is as follows:
• Dorfberg (Davos Dorf): Swisscom network coverage: GSM/EDGE and UMTS available in the whole area, HSPA in the uppermost slope. (Orange network coverage: mobile broadband GSM/GPRS available in the whole area.)
• Gemsstock (Andermatt): Swisscom network: GSM/EDGE available around the top station and in the area facing towards Andermatt (north slopes). (Not available: UMTS, HSPA.)
• Flüelapass (Davos Dorf): Swisscom network coverage: GSM/EDGE available in the whole area, UMTS in a northern sector from the hut (near the meteo station). (Not available: HSPA.) (Orange network coverage: mobile broadband near the meteo station and on the permafrost site, UMTS on the whole site.)
• La Fouly: the La Fouly catchment is poorly covered by network signal. A weak and non-continuous signal was detected on the orographic right side of the catchment. There is a stable signal further downstream in the valley at the little village called Ferret. We currently rely on that for transferring the data through the sensorscope network. In the Gemsstock catchment, there is poor/patchy mobile network coverage available near the top station. Good coverage near the topmost pylon and north-facing rock wall.
2.4 Deployment
During field work, HYDROSYS will deploy several devices on site, supported by a
number of workstations in the office. Figure 5 provides an overview of the planned configuration
during field tests and how the migration of software components over different setups might work.
Sensor data is directly forwarded to a GSN server where it becomes available to all other
services. The same GSN server is also a hub for information coming from simulation and
warnings. It is mainly controlled by a GSN development server for administrative purposes. The
raw information from sensors is unsuitable for direct usage by the mobile devices, so it has to be
preprocessed. This step is done by the Smart Client on site or the graphics workstations in the
office during campaign planning. The Blimp, Pan Tilt Unit and Ubisense system are controlled by
laptops on site whose information is forwarded to the on-site Smart Client. This information is then
propagated to the handhelds and cell phones for presentation to the user.
Figure 5 Field deployment diagram
2.4.1 Campaign planning
The HYDROSYS platform supports on-site monitoring and management of environmental
processes, structured through so-called event-based campaigns. These events are generally
coupled to some kind of environmental problem, be it a site that is polluted, or the
expectation of a flood. As such, campaigns are generally set up to outline and understand an
environmental process. Events might be triggered by an active process that already takes (or
took) place, or the expectation that an event might take place. Hence, sites can also be observed
to understand a process better in order to take more effective countermeasures when a problem
situation occurs. This is, for example, the case with setting up more effective plans for warning
scenarios. All data used during HYDROSYS needs to be geo-referenced; this means that some
legacy data will need a manual preparation step during campaign planning.
A campaign generally includes four different steps (an illustrative campaign definition is sketched after this list):
• Site planning: the specification of a site to be observed, followed by the retrieval and setup of its digital representation. These steps include the selection of an appropriate DTM and textures to create a digital model of the site, and the selection or planning of sensor data sources. This stage generally takes place in the office.
• Campaign planning: based on the identified site, a campaign can be built. Here, the sensory data sources can be more finely configured, and additional data (like documents) can be selected that might be needed on-site. The campaign likely also includes some “user configuration”, meaning the selection and possible configuration of involved users and user profiles, and the setting of data access rights. All data and settings are stored in a workspace that can be accessed and extended in the field. Normally, a campaign manager will upload the selected data sources to the mobile device that is taken into the field to limit network access.
• On-site monitoring and management: when the user gets on-site, he/she can select a campaign, and start working. During the on-site activities, the user can access the workspace to communicate with other users, exchange data or store new data like annotations.
• Administration: administration is basically the final stage of a campaign, in which reports are written. During this stage, a campaign manager can upload the batch of data collected or produced to the data storage, so that users can refer to the data later on.
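As an illustration, the campaign definition assembled during these steps might record something like the following; the structure, file paths and names are all assumptions for this sketch, not a specified format.

```python
# Illustrative sketch (assumed structure) of a campaign workspace definition
# assembled during site and campaign planning.
campaign = {
    "site": {
        "id": "la_fouly",
        "dtm": "dtm/la_fouly_1m.tif",            # selected during site planning
        "textures": ["ortho/la_fouly_2008.jpg"],
        "sensor_sources": ["sensorscope/ss-01", "sensorscope/ss-02"],
    },
    "users": [                                    # the "user configuration"
        {"name": "campaign_manager", "rights": "read_write"},
        {"name": "field_engineer", "rights": "read_annotate"},
    ],
    "documents": ["reports/flood_2008.pdf"],      # extra on-site material
    "preloaded": True,   # data uploaded to the mobile device before going on-site
}
```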
A detailed description of the different steps in site and campaign planning can be found in
Appendix 3.
2.4.2 Hardware deployment flexibility
The HYDROSYS system infrastructure is composed of several quasi platform-independent
components. The basic configuration of the platform is based on services that run on dedicated
machines. This is necessary due to the potentially high load the system will need to cope with, be
it larger streams of real-time data, or the calculation of complex simulations. These dedicated
services provide for optimal processing during event-driven campaigns that have the possibility to
set up and access the infrastructure. Within the Nordic and Alpine scenarios, the consortium will,
however, deliberately try the “full bandwidth” of hardware configurations. This is primarily done to
show the potential and limitations of all kinds of systems.
Minimal systems allow for more flexibility for end-users that either cannot handle the technical
overhead, or are simply not able to deploy the full infrastructure in the field. The need for this
flexibility has become clear during the end-user interviews: some of the end-users would like to
be independent of some of the hardware setups to react more quickly or easily to events.
Sometimes, setups simply cannot be deployed due to technical limitations (geographical
restrictions, for example on car access, weather / wind limiting blimp deployment, etc.). The
counter effect, of course, is that users will likely be limited when using the system in a minimal
configuration – for example, it will be impossible to run complex simulations in acceptable time on
a limited hardware configuration. The choice for a configuration thus depends on case-by-case
necessity. Also, to support communication / collaboration between users, a minimal set of
(network-enabled) devices needs to be supported.
2.5 System integration and validation factors
System integration and validation form one of the core “background” activities of the HYDROSYS
project. System integration is of key importance to ensure that the system goals can be reached,
and that system components (especially those from different partners) form an effective information
processing pipeline when being coupled. This effectiveness depends on both system-technical
and user-defined validation factors that have been defined at the start of the project.
These validation factors also aid in the identification of potential needs for change, and the
definition of alternative solutions that follow from them.
The development of the different system subcomponents is defined in the project control list
(PCL). The PCL contains simple sheets that provide a clear overview of the work being planned,
dependencies between partner activities, and validation factors. Generally, the planning in the
PCL is based on 6-month periods.
The tasks in the PCL are defined using timeboxing (Stapleton 1997): a timebox is a fixed timeslot
describing a set of related and prioritized tasks that need to be completed by a team within a
given deadline. Timeboxing allows separate partners to work independently from each other in a
highly structured way, while still focusing on a common goal. Correct integration of tasks finished
in a timebox is achieved via synchronization periods that overlap with technical board meetings,
be they face-to-face meetings or planned conference calls. The synchronization period is also
used for quality control, using the baselines defined in the PCL. Timeboxing is a powerful method
for working with distributed development teams, as is normal in European research, limiting the
overhead caused by a continuous need for discussion of developments.
The PCL states clear intervals in which iterations need to be made, taking the milestone structure
of the project into account. The PCL is refined during the project. Moreover, the PCL includes risk
management mechanisms, by defining fall-back solutions for specific system components.
The development process is continuously fed by the results of the user-centered design
approach. End users are brought into the progressing loops of definition, evaluation and
refinement of system aspects from the start of the project, next to dissemination and exploitation.
The end-user involvement is laid down in the user-centered design guide, feeding the PCL with
the baseline against which developments can be measured.
The development is ordered in three major phases (also see the UCD manual D2.1):
• Phase 1: Definition phase (M1)
• Phase 2: Validation and modification phase (M2-4)
• Phase 3: Refinement and final evaluation phase (M5,6)
During these phases, system components are defined (phase 1) and developed, checking
them against end-user feedback and end-user/system validation session results (phase 2).
Finally, in phase 3, the system components are refined and undergo a final evaluation.
Hereby, the research system prototypes produced every 6 months form the integrated platform in
which the end-users’ feedback is reflected. WP7 (validation) and WP8 (exploitation) play an
important role in the end-user feedback loops.
Validation factors
Validation is essential for all development done for the various components and subcomponents
in the HYDROSYS project. Initial general principles for validation concerning the research and
development of the different components are presented in the essential parts of this document
and also in the PCL. It should, however, be noted that the final process of validation is continuous
and derives from a set of factors or dimensions. In general, the process of validation can be
divided into two essential questions: what is being validated, and how? These two questions can
be further separated into four principal dimensions.
The first dimension of validation is specified by the generality of the feature that is being
validated: whether that which is being validated is a functionality of the system, an
implementation of a technology or a subcomponent, or a feature or characteristic of an
implementation. This first axis generally defines the importance and nature of what is being
validated.
The second dimension is the degree of progress at which the validation happens. With regard to
system integration, the validation can be executed before or after integrating the feature into the
system architecture and/or the prototype. Furthermore, in some cases the validation may have to
be postponed until the system is in use, and for some critical features it is prudent to validate
during all phases.
The remaining two dimensions define the how of the validation challenge. The third dimension
defines the validating authority and, ultimately, the reasoning for the validation. As a project that
aspires to user-centeredness, HYDROSYS involves activities that involve end-users of the
developed system, such as end-user interviews, on-site activities, advisory board meetings and
prototype evaluations. The users are the principal validating authorities for any features that
directly or indirectly affect their working with the system, such as the user interface. User
involvement, however, is often heavy and time-consuming, and many of the features that need to
be validated can be challenging or impossible for users to validate. Other factors that greatly
affect validation include project management, technical restrictions and project restrictions
(budget, scope, etc.).
Finally, the fourth dimension of validation is the conditions of validity: the scale and limits for the
validation, according to which the feature or action is either approved and implemented, or ruled
out and perhaps redeveloped. This is ultimately the essential factor that is assessed individually
for all subcomponent activities inside the project. It is derived from the other three dimensions
and from background study.
A detailed description of all the planned developments can be found in report D3.1 (Project
Control List).
3 Sensors and Blimp Component
3.1 Sensor overview
Needs on sensing technology
In order to provide an apt information space for end-users to analyze small sites, several needs
can be identified for the sensing technology that is to be deployed.
In order to outline the environmental problem or situation, a dense information space needs to be
provided through a sensor grid. The number of sensors / sensor stations in a grid depends on the
situation at hand, but roughly varies between 5 and 10 for a small site. The sensors being used
should be quick to deploy, and preferably allow for wireless access. The kinds of sensors being
deployed also depend on the site being observed, but generally, multivariate sensors are used.
Sensors should have the potential to sense at higher frequencies, to cover for potentially
quasi-real-time observation of an environmental situation. Finally, the visualization approach
applied in this project requires textured 3D models, for which reason sensors might need to be
deployed to reconstruct environments when information is missing.
Deployed sensor technology
HYDROSYS will have access to a range of sensing technologies to cover the needs specified
above. Multivariate sensors and sensor stations are available that potentially provide around 20
different kinds of environmental sensor readings. A detailed overview of these sensors can be
found in section 3.3. Sensor data can be accessed via GPRS or WLAN bridge. For sensing and
reconstruction purposes, multiple cameras are deployed. An unmanned aerial vehicle can be
used with two cameras: a normal-range and a thermal camera. As such, the blimp provides
sensory information (thermal) and detailed cartographic data, by producing large texture maps for
the terrain models. In addition, a limited number of cameras are mounted at sensor stations;
these can be used by the users in the field to get another viewpoint on the site. Finally, the
handheld computing units hold cameras, whose images can be shared among users.
Besides the sensors directly related to environmental processes, location and orientation sensing
devices (tracking devices) are deployed, such as GPS and inertial sensors – these sensors are
described in the Interactive Graphics chapter on user interfaces. In the next sections, the sensing
technology is described in detail.
3.2 State of the art
The state of the art in water quality monitoring and management is based on manual sampling
schemes, where water samples are taken and analysed bi-weekly or even more sparsely. Real-time
monitoring is seldom applied; it can be done in specific cases but always requires particular
arrangements. The information transfer and utilization is based on purpose-built, i.e., not generic,
solutions. Systems that record water quantity and quality data in the field and transmit it to the
Internet using GPRS and other wireless communication protocols exist, but delivering such data
to end-users on-site in near-real-time is a novel feature of the project. The state of the art in
(alpine) watershed monitoring and management is based on field campaigns that generally focus
on deploying relatively few “expensive” base stations with traditional sensors, limiting the spatial
coverage, the online data access, as well as the possibility of active control to respond to
changing conditions. For example, in hydrology, only limited field campaigns with in-situ spatial
observations (e.g. within water basins) of precipitation, soil moisture, stream flow, surface energy
components, etc. have been undertaken. Moreover, experimental field work in hydrologic sciences
is still very much dominated by single-use, often small-scale efforts of one or a few research
groups. A few exceptions exist, such as the national programmes LOCAR
(www.nerc.ac.uk/research/programmes/locar/background.asp) and CHASM
(www.ncl.ac.uk/chasm/ChasmOverview/index.htm).
Currently, environmental monitoring is mostly done by on-site observers, a few expensive sensing
stations with data loggers (e.g. the IFKIS network in Switzerland) and Geographical Information
Systems (GIS). With this current approach, many processes cannot be monitored. Sensor data is
still limited in time and scattered in the environment: even when a problem is detected by a
sensor, it can often not be traced back directly, since detailed information on the area is missing.
As a result, many processes are badly understood, and both their representation (including the
physical process model) and their visualization are incomplete (Barrenetxea et al, 2008).
Moreover, current sensing stations have several limitations (no real-time data, little spatial
coverage, high cost). In hydrology, for example, there have been only limited field
campaigns with in-situ spatial observations (Barrenetxea et al, 2008). In that context, the next
challenges in environmental monitoring will be to capture the spatial and temporal variability of
environmental parameters, and to develop real-time image and data transmission (Shin, 2007)
and data visualization, in order to enhance environmental models, prediction tools and the
decision-making process (Bogue, 2008).
Therefore, current trends are moving toward deploying a large number of wireless sensing
stations in order to provide measurements of high spatial and temporal density. Wireless Sensor
Networks (WSN) (Aberer et al. (2007), Culler et al. (2004), Chong & Kumar (2003)) have become
a widespread tool for monitoring a wide range of environmental phenomena. Many research
projects are investigating possible applications of sensor networks, ranging from habitat
monitoring (Szewcszyk et al, 2004) to agriculture (Langendoen et al, 2006) and environmental
monitoring (Selavo et al, 2007; Martinez et al, 2004; Werner-Allen et al, 2005). However,
deploying a WSN in the field has always been reported as a difficult task and remains challenging
(Barrenetxea et al, 2008).
Commonly utilized sensors for water quality monitoring vary from light handheld devices to
larger sensor stations. They all provide real-time water quality data for certain parameters, but
handheld devices are operated by the user in the field while sensor stations are operated
automatically. Handheld devices are suitable for one-time measurements and sensor stations for
longer time series. Sensor stations are built up according to the demands of the monitoring task
and the installation location. Stations can be equipped with several water quality probes which
are operated by a data logger. Depending on the installation location and mobility requirements,
the station can be self-powered. Measured data can be downloaded manually from the station or
transmitted to a data server via the GSM network. Careful installation of the sensors and quality
control of the measured data are important when measuring with water sensors. Many things in
the water column (e.g. weather conditions, sedimentation, vegetation, animals, vandalism) can
disturb the measurements or damage the sensors.
Unmanned aerial vehicles are an attractive approach to remote monitoring. An aerial viewpoint is
the natural perspective from which to examine outdoor scenes, since a commanding overview of
the scene can be obtained from a single view. Further, such a viewpoint permits the deployment
of sensors such as thermal imaging cameras in order to gather data from large parts of the
environment at high spatial and temporal resolution. Common approaches include
remote-controlled or autonomous airplanes, helicopters (STARRS EU project) or blimps, each of
which presents different tradeoffs [Elfes 1998]. Blimps present the most robust solution if
manoeuvrability or high speed is not an important requirement. Their inherent stability has led to
good results in control and autonomous navigation [Hygounec 2004, Merino 2005]. Blimps lend
themselves well to building maps and digital elevation models from aerial images taken with
on-board cameras. Both a map of the underlying ground and the vehicle's location are estimated
simultaneously and integrated with other sensors such as GPS and an inertial measurement unit
to improve accuracy. This is an instance of the simultaneous localisation and mapping (SLAM)
problem, first developed in robotics research. Current approaches in aerial imaging use stereo
camera setups to provide depth information [Jung 2003, Meingast 2004]. SLAM is solved using a
Kalman filter-based approach, and a digital elevation model is built from dense stereo fusion
[Jung 2003] or using a planar + parallax algorithm. Generally available elevation models are not
always suitable, because they are often outdated and generally do not have a high spatial
resolution.
3.3 Hydrological and other environmental sensors
A range of hydrological / environmental sensors will be used in the HYDROSYS on-site
deployments. Outside the work on camera sensors, the consortium makes use of readily
available sensor solutions. All sensors being deployed can generally be set up rather quickly
(within hours, or at most several days for the larger stations) and are deployed for longer periods.
This section provides a short overview of the available sensors and the data they produce. Three
major groups of sensors can be identified:
• Sensors deployed in La Fouly: SensorScope stations integrating a multitude of hydrological and environmental sensors
• Sensors deployed on sites close to Davos and Andermatt: weather stations and sensors related to permafrost measurement
• Sensors deployed in Finland: water quality sensing stations
Sensors deployed at La Fouly
SensorScope stations are the most predominant sensing devices already deployed at La Fouly.
A SensorScope sensing station is centered on a wireless sensor node, the TinyNode module,
manufactured by Shockfish. The sensing stations measure key environmental data such as air
temperature and humidity, surface temperature, incoming solar radiation, wind speed and
direction, precipitation, soil moisture and water suction. Table 2 gives the main sensor
specifications. The design of the sensing stations was conceived with the following baseline
requirements: low energy consumption, long communication range, low cost, simple installation
adapted to extreme sites, energy autonomy, high-quality data, water resistance and the prospect
of retrieving data in real-time.
Table 2 Sensorscope sensor specifications
The water level sensor that continuously monitors the water depth in the stream is similar to the
Finnish one.
Sensors deployed on sites close to Davos and Andermatt
The use of data gained with automatic weather stations is a common application in atmospheric
and geosciences. Such weather stations can be equipped with several types of sensors to
measure different parameters. The automatic weather station currently being set up at the
Dorfberg site is fitted with several sensors to investigate the spot’s energy balance: five snow
temperature sensors provide temperature information at different depths of the snow cover.
Further sensors have been installed to measure air temperature (ventilated thermometer) and
snow surface temperature (infrared thermometer). In addition, wind speed and wind direction are
measured with a Young wind sensor. Snow depth is monitored by an ultrasonic snow depth
sensor. Finally, the station is equipped with a sensor to measure net radiation.
There is a further need for sensors to measure snow water content with the handheld devices.
Furthermore, a near-infrared camera should be implemented to record snow stratigraphy at high
resolution. A Riegl LPM321 laser scanner was used for DTM generation, and photographs of the
slope are taken at 15-minute intervals from 7:00 to 17:00 every day.
The automatic weather station placed at the Gemsstock site is equipped with an air temperature
sensor, an ultrasonic snow depth sensor, a Young anemometer and a radiation sensor. The
borehole drilled into the permafrost in the rock face of Gemsstock is equipped with several
temperature sensors (YSI 44008) at different levels, and borehole deformation is measured
manually using an inclinometer. Rock surface temperatures are measured using UTLs (universal
temperature loggers). 3D laser scanning (of the cable car station and the rock wall) is carried out
annually using a Leica scanner. The angles of the station walls are measured monthly (manually)
using an inclinometer. Photographs of the rock wall and of the underlying glaciers are taken at
regular intervals.
Sensors at both weather stations transmit data in 30-minute time steps.
Parameter                      | Range          | Resolution | Accuracy
Air temperature                | -40°C – 100°C  | 0.01°C     | ±1% sF / 0.3 K at 23°C
Snow surface temperature       | -30°C – 40°C   | 0.01°C     | ±0.5°C – ±4°C (depending on TA)
Snow temperatures              | -35°C – 50°C   | 0.01°C     | ±0.1°C
Net radiation                  | 0.2 – 100 µm   | –          | non-linearity: < 1% up to 2000 W/m²; directional error (0–60°): < 30 W/m²
Wind speed                     | 0 – 100 m/s    | 0.1 m/s    | ±0.3 m/s
Wind direction                 | 0° – 360°      | 1°         | ±3°
Snow depth                     | 0 – 400 cm     | 1 cm       | ±1 cm
Borehole temperature           | -40°C – +40°C  | 0.01°C     | 0.1°C
Rock surface temperature (UTL) | -30°C – 40°C   | 0.1°C      | ±0.2°C
Inclination angle              | 0° – 360°      | 0.1°       | –
Sensors deployed in Finland
The water sensor stations of the HYDROSYS project in Finland consist of water quality and water
level sensors, a Luode data logger and a battery. Water quality, including the parameters
temperature, turbidity and conductivity, is measured with a YSI 600-series water quality probe.
Water level, which can be linked to flow and discharge functions, is measured with a Keller
pressure gage. The sensors are connected to the Luode data logger, which operates the sensors,
stores the data and transmits the data to a data server via the GSM network. The typical
measurement interval of the sensors varies from 1 to 60 minutes, and the data transmission
interval from the station to the data server from 1 to 24 hours. The water sensor stations are
self-powered and therefore removable. Their small size enables comfortable deployment of the
stations in the field and also removal during the measurement campaign. The water sensor
stations are also flexible with respect to the measured parameters: more probes can be
connected to the data logger during the campaign. The Luode measurement service includes
quality control of the data and maintenance of the stations during the measurement campaign.
Data type          | Sensing method and purpose (NORDIC)
Water level        | Pressure gage; hydrologic and hydraulic modelling
Water flow         | Same as water level; flow and level are linked with a discharge function
Water temperature  | Thermistor in a water quality sonde; water quality modelling
Water turbidity    | Optical measurement with a water quality sonde; correlates with phosphorus concentration, indicates erosion
Water conductivity | Conductivity electrodes in a water quality sonde; indicates e.g. waste water effluents
Parameter    | Range          | Resolution                           | Accuracy
Water level  | 0 to 1 m       | 0.001 m                              | ±0.1%
Temperature  | -5 to +50°C    | 0.01°C                               | ±0.15°C
Turbidity    | 0 to 1000 NTU  | 0.1 NTU                              | ±2% of reading or 0.3 NTU, whichever is greater
Conductivity | 0 to 100 mS/cm | 0.001 to 0.1 mS/cm (range dependent) | ±0.5% of reading + 0.001 mS/cm
3.4 Blimp
Task-component relationship: The work described in this section is part of Task 4.2 / Remotely controlled blimp.
The blimp system will provide a live, up-to-date surface model of the ground in the form of a
height map, a temperature map and a colour map. During live operation, the height map will not
be updated. The blimp will be flown prior to the initial experiment in order to generate a
high-precision height map, which will be optimized off-line. This model will be used during the live
experiments. The model will then be refined in an off-line step, from the experimental data,
making topographic data at different times available.
During live operation, the blimp will localize itself using the optical camera and the colour map of
the stored model, by minimizing the salient difference in appearance between the images and the
model. The field of view difference between the optical and thermal cameras will be calibrated
before the experiment. There are several technologies that we will investigate for this task. We
will investigate both two pass (where 3D mapping is performed off-line) and one pass SLAM
(simultaneous localization and mapping) techniques where the 3D mapping is performed as the
live data is gathered. The visual tracking requires localization from the camera input. We will
primarily perform tracking using visual features on the ground, but will also investigate a backup
system based on hill silhouettes and horizon finding.
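To make the appearance-based localization step concrete, the following Python sketch shows one naive form of it: candidate pose perturbations are scored by the photometric difference between the camera image and a rendering of the stored colour map, and the best one is kept. The function names and the render_map callback are illustrative assumptions, not the project's implementation; a real system would use a proper image warp, a robust cost function and a non-linear optimizer rather than grid search.

```python
import numpy as np

def photometric_cost(image, rendered):
    """Sum of squared differences between camera image and map rendering."""
    diff = image.astype(np.float32) - rendered.astype(np.float32)
    return float(np.sum(diff * diff))

def localize(image, render_map, pose, deltas):
    """Grid search over small pose perturbations around the last estimate.

    pose: numpy array, e.g. (x, y, z, yaw); deltas: iterable of perturbation
    arrays; render_map(pose) -> image-sized array rendering the stored
    colour map as seen from `pose` (assumed provided by the mapping module).
    """
    best_pose = pose
    best_cost = photometric_cost(image, render_map(pose))
    for d in deltas:
        candidate = pose + d
        cost = photometric_cost(image, render_map(candidate))
        if cost < best_cost:
            best_pose, best_cost = candidate, cost
    return best_pose
```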
Figure 6 Blimp - Ground Control Unit connection diagram
The blimp control is split into two sections: the blimp on-board computer and the ground station
control computer. The blimp computer will perform basic tracking to localize the blimp relative to
the 3D model, and integrate this data with the IMU and GPS data. The blimp computer will send
the blimp location, optical images and thermal data to the ground station via the high-bandwidth
link. The ground-station tracking unit will provide a very coarse blimp location in the form of a
direction relative to the ground station, which can be used to aid relocalization of the blimp if
tracking and GPS information are not available. Control of the blimp will be achieved using the
radio remote control provided with the blimp, connected to the control computer. The blimp will be
untethered. Figure 6 shows the basic connections and data flow from the devices mounted on the
blimp to the GSN Smart Client.
The control computer will request heading, speed, etc based on the desired blimp location, using
feedback from position data relayed from the blimp's on-board computer to the control computer.
In the case of failure of all tracking systems, the remote control will be disconnected from the
control computer, and the blimp will be returned under manual control.
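A minimal sketch of this feedback loop, assuming a simple proportional controller acting on a 2D position error; the gains, units and command interface are illustrative and not the actual blimp control software:

```python
import math

def control_step(desired_xy, reported_xy, k_speed=0.5, max_speed=4.0):
    """Return (heading_deg, speed_mps) steering the blimp toward the goal."""
    dx = desired_xy[0] - reported_xy[0]
    dy = desired_xy[1] - reported_xy[1]
    heading = math.degrees(math.atan2(dx, dy)) % 360.0    # compass convention
    speed = min(max_speed, k_speed * math.hypot(dx, dy))  # slow down near goal
    return heading, speed
```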
The ground station will perform accurate registration of the images to the 3D model, and then
integrate the images into the live model. In the absence of any specific request, the control
computer will direct the blimp to continue collecting data over the specified area. The control
computer will also have a connection to GSN. This will allow other GSN nodes to send
messages to the blimp, in order to request data from the 3D model and to request specific data
collection regions for the blimp. The following diagram shows how the model and texture
information generated by the blimp will be passed to the mobile device through GSN.
Figure 7 Blimp's textured model and pose data propagation
The blimp deployment is limited by the wind speed. A blimp of typical specifications for this task
(the Z7000 in Appendix 2) has an operating wind speed of 4 m/s (14.5 km/h). The equipment will
be weatherproof, but not designed for extended operation during rain. Operation of the blimp
requires line-of-sight. Although the remote control will operate without line of sight, the
high-bandwidth link, which will relay accurate positions and images, will not function with
sufficient reliability. Furthermore, safe operation requires that the blimp be in view continuously.
Operation of the tracking and mapping software will require sufficient visual differentiation on the
ground. For instance, flat, uniform snow-covered ground is too self-similar for the mapping to
work. If there are large, uniform regions present, then it may be necessary to manually place
visually distinct markers on the ground.
Deployment is also limited by the altitude. Since air density decreases with altitude, the
unmodified blimp with the maximum payload is limited to around 2000 m altitude. However, the
maximum altitude will increase in cold air temperatures and with a decreased payload. For
higher-altitude operations, it may be necessary to use a smaller/lighter optical camera, reduce the
on-board computing capability and/or rely entirely on the thermal camera.
Blimp
Sensors: optical video camera (minimum 640x480 pixels at 25 Hz); thermal camera (320x240 pixels at 25 Hz, 2 Kelvin accuracy); IMU (inertial/magnetic unit); GPS sensor
Communications: 802.11b/g network interface; 2.4 GHz wide-band amplifier; remote control unit
Other hardware: single-board computer; IEEE1394 DCAM, USB UVC or GigE Vision camera interface

Ground station
Communications: 802.11b/g network interface; 2.4 GHz wide-band amplifier; pan/tilt head with high-gain 2.4 GHz antenna; remote control; GSN connection
Other hardware: control computer; software: request for a region (spatial and temporal) of map data (optical, thermal and topographic); software: request to hover over a location
Blimp system specifications
Maximum altitude: > 2000 m
Optical ground sample distance: 50 cm/pixel
Thermal ground sample distance: 100 cm/pixel
Topographic map resolution: 2 m horizontally
Data capture rate: 30 min/km²
3.5 Camera framework
Task-component relationship: The work described in this section is part of Task 4.1 / Remotely controlled cameras.
Being an AR project, HYDROSYS makes heavy use of cameras. Thus, at a given point in time,
there will be several cameras observing a site. Leaving aside the ones in the handheld setups
(mobile computer with sensors), different types of cameras are to be deployed, mainly as
sensors.
Next to the cameras on the blimp and the handhelds, several cameras will be mounted on the
SensorScope stations that can be accessed both in the field and at the workplace. Limited by
power usage, the cameras will provide low-framerate imaging, which will be communicated over
GPRS or, when possible, over the WLAN bridge that will likely be integrated. The cameras will be
defined once the network infrastructure is known in more detail.
On the one hand, the sensor stations are being equipped with cameras that operate at a low
resolution. These cameras bring several challenges, power consumption being among the crucial
ones. Power consumption is not only affected by the camera itself, which could work in a
low-power mode, but also by the transmission of camera frames. Since these cameras work as
sensors, the main software component they have to deal with is GSN. A GSN sensor node /
interface has to implement the necessary features to store camera frames in the sensor network;
this software has to be deployed together with the camera controller. The hardware of the sensor
stations needs to support the camera controller and the GSN camera interface extension.
Figure 8 Field cameras available for HYDROSYS
On the other hand, HYDROSYS includes camera station(s) to be deployed during campaigns at
advantageous locations. These camera stations consist of a tripod with a pan-tilt unit holding a
high-quality camera. They are connected to one of the field computers acting as a camera
controller, thus allowing users to remotely control the panning and tilting of the camera, changing
its orientation for visualization purposes. Control commands for the camera are routed through a
different network connection than the data (video); thus, the controller computer relies on the
Tracking/Input (hybrid tracking) component as well as the Video component described later. The
camera station also acts as a sensor, and therefore also requires the corresponding GSN sensor
interface. Finally, user cameras, the ones being part of the mobile setup, can also be used
temporarily as sensors by other users wanting to change their viewpoint. This feature requires
that mobile setups register their camera service (basically the GSN camera sensor node) with a
GSN node. The camera controller computer registers all camera nodes (from users and the
camera station), thus acting as a GSN node for such sensors. The software on the mobile
computer needs to account for submitting camera frames through its GSN camera sensor
interface; this can be set to a low frame rate to keep processing power for the rest of the mobile
client application. For the camera controller computer, this means that it manages all video data
for the current campaign. These data can be collected as part of the workspace, or directly in
GSN; the (dis)advantages of each approach need further analysis. Given the nature of the data
that can be obtained from cameras (visual), the user interface requires special components to
allow users to select and access cameras. By default, a mobile user has direct access to her/his
own camera frame. A user can switch to a view through any of the available cameras; this
requires a special interface for camera switching (see section 9.7.1).
For all purposes, camera calibration is an offline process. It is accomplished using a camera
calibration component (described in Appendix 1), and its result is a calibration file. The file is
associated with a camera, and used for the configuration of AR when the camera is accessed for
visualization purposes.
The component dealing with camera feeds at the lowest level for PC-based platforms is
OpenVideo (see Appendix 1). It is in charge of operating and configuring cameras and presents a
sink/source pattern implementation. Therefore, sink/source nodes need to be developed in order
to transmit images over the network. Possibly, an associated GSN sink node needs to be
developed in order to be able to submit images from OpenVideo directly to GSN. All modules that
process video information, either for display only or for tracking, are required to support a
connection to GSN for logging of data.
The GSN camera sensor is in charge of registering itself with a GSN node acting as campaign
manager. Clients (mobile or desktop) can then query the campaign manager for available
cameras. Each camera sensor needs to specify its pose (location and orientation) so that clients
(users) can decide whether it is worthwhile to access a certain sensor (because it is viewing an
area of interest) or not (because it is pointing in a different direction); the pose is also needed for
augmentation purposes, as will be discussed shortly. The pose could be statically recorded in the
GSN camera sensor software, if the camera is fixed. However, it is expected that most cameras
will be paired at least with GPS for position sensing, and will have some means of finding out
their orientation. The software below the GSN camera sensor interface is in charge of providing
the appropriate pose information to the GSN layer. The following diagram exemplifies how the
video and tracking information will be propagated during field work; notice that the blimp
information is unidirectional.
Figure 9 Data propagation relationships among cell phones, handhelds and the blimp.
On the front-end, access to camera feeds is controlled by a camera selector user interface. This
interface shows users the available cameras in their respective location and orientation, and is
described in more detail in the user interface section (section 9). The selector allows clients to
select a different camera than the default (directly connected and accessible without any other
layers). Once a new camera is selected, the selector uses a camera switch mechanism that
translates from the position of the current camera to that of the newly selected one in a way that
allows users to keep track of location (see section 9). After switching cameras, the user may want
to visualize her/his augmentations on the new camera feed; this might require querying GSN for
sensor values all over again, using the pose of the newly selected camera as key. The camera
selector interface must specify which cameras can be controlled by the user (i.e., are mounted on
a pan-tilt unit); it must also request control of the camera from the campaign manager. If the
camera can be controlled, a user interface showing controls will appear. The user can then pan
or tilt the camera at will. Finally, cameras mounted as sensors need to be protected from the
weather. Special encasings for outdoor cameras will be developed.
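As an illustration of pose-based camera selection, the sketch below filters the registered cameras by distance and viewing direction relative to an area of interest. The camera record layout and the thresholds are assumptions for illustration, not the actual selector implementation:

```python
import math

def cameras_viewing(cameras, target_xy, max_dist=500.0, half_fov_deg=30.0):
    """cameras: list of dicts with 'pos' (x, y) and 'yaw_deg' fields."""
    visible = []
    for cam in cameras:
        dx = target_xy[0] - cam["pos"][0]
        dy = target_xy[1] - cam["pos"][1]
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0
        # smallest angular difference between bearing and camera heading
        off_axis = abs((bearing - cam["yaw_deg"] + 180.0) % 360.0 - 180.0)
        if dist <= max_dist and off_axis <= half_fov_deg:
            visible.append(cam)
    return visible
```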
3.6 Validation factors
The sensors and sensor stations being deployed are regularly used, high-quality (in sensor range
and accuracy) devices providing a potentially high update rate of sensor data. As such, they
provide more than enough data for processing suitable data for users in the field. The bottleneck
for data browsing and visualization of sensor data lies with the network connections or the data
processing capacities rather than with the sensors themselves.
The different video streams / images used during observations of sites have various validation
issues. First, the video streams / images should be of suitable quality both to recognize and to
overlay visual data, requiring high-quality cameras and lenses. The frame rate of video streams
differs: for general Augmented Reality imaging, a frame rate of at least 15 fps needs to be
achieved (which also depends on the actual rendering of content). Sharing streams or images
between users can be at a lower frame rate or even image / screenshot wise. These streams are
generally limited by the available WLAN bandwidth and the video processing capacities of
smaller computer platforms. This also applies to the camera on the pan-tilt unit. Video / camera
images from the SensorScope camera can be very slow too, since the cameras do not move
anyway. The images captured by the blimp cameras can be of low frame rate: for stitching normal
and thermal images, one does not need a high frame rate, and, similar to normal camera sharing,
a user retrieving a video stream from the blimp can make use of lower frame rates too.
The blimp is required to produce an optical map and an up-to-date thermal map. Therefore, the
main validation factors are: accuracy of the 3D map, clarity of the optical imaging, accuracy of the
thermal imaging, spatial accuracy of the optical and thermal data, and finally the update rate of
the thermal map (minutes per square kilometer imaged).
The first two points can be validated by the tracking system, since the performance of the
tracking will depend on them. The thermal imaging accuracy can be validated directly by
examining the thermal image at points where sensor stations are located and comparing with the
spot measurements of temperature. The spatial accuracy can be measured by comparing images
to measured, easily identifiable objects on the ground.
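The station-based thermal validation can be illustrated with a short sketch; the array layout and the world_to_pixel mapping are assumptions for illustration only:

```python
import numpy as np

def thermal_errors(thermal_map, world_to_pixel, stations):
    """stations: list of (x, y, measured_temp); returns per-station errors.

    thermal_map: 2D array of temperatures; world_to_pixel(x, y) -> (col, row)
    is assumed to be provided by the georeferencing of the blimp map.
    """
    errors = []
    for x, y, measured in stations:
        col, row = world_to_pixel(x, y)
        mapped = thermal_map[int(row), int(col)]
        errors.append(mapped - measured)
    return np.array(errors)  # summarize e.g. via mean/std for the report
```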
The performance of the blimp system is dependent on the tracking and localization systems,
which include GPS localization and visual localization/tracking. The tracking system will be
designed to operate with any of the major localization components missing (due to bad GPS
reception, inadequate visual range for horizon tracking and so on). The performance of the optical
localization systems can be validated by comparison to GPS measurements, when reception is
known to be sufficient.
4 GSN Component
In order to handle the data from the multivariate sensors being deployed in HYDROSYS, a
sensor network is used to retrieve, store, process and distribute sensor data. This chapter
introduces GSN, the Global Sensor Network software platform being used for this purpose, and
the extensions that will be developed for this sensor network platform. Note that all data delivered
by sensors needs to be geo-referenced before deployment.
4.1 State of the art
Many different systems exist for query planning and query optimization in the processing of data
streams. All of them support receiving data from distributed stream-producing sources such as
wireless sensor networks. The Aurora (Abadi et al. 2003) and STREAM (Arasu et al. 2006)
projects are based on a centralized model where all processing takes place in a single system. In
a distributed stream-producing environment, moving some processing to the sources instead of
moving all data to a central system may lead to a more efficient use of processing and network
resources. TelegraphCQ (Chandrasekaran et al. 2003) achieves this goal by running several
TelegraphCQ instances on different machines: each machine receives the streaming elements
from stream producers, does the filtering and forwards the results to the other TelegraphCQ
nodes. Therefore, a TelegraphCQ node can receive several filtered data streams, process them
and produce the output data stream. GSN's stream-producing component differs from
TelegraphCQ in being adaptive and doing run-time optimization of the queries. In GSN (Aberer et
al. 2006) the filtering operators can migrate between nodes in order to distribute the load in the
network. There are further versions of Aurora, called Aurora* and Medusa (Zdonik et al. 2003),
which aim to provide a distributed version of the centralized stream processor of Aurora (Abadi et
al. 2003). The HiFi (Franklin et al. 2005) system is based on the TelegraphCQ (Chandrasekaran
et al. 2003) stream processing system and enables disparate, widely distributed organizations to
continuously monitor, manage and optimize their operations. The HiFi system does not address
the different types of streaming data produced by different sensor networks, while GSN (Aberer et
al. 2006) does. The Cougar (Chen et al. 2001) and TinyDB (Madden et al. 2005) projects are both
designed to facilitate this process and hide the underlying details by providing declarative query
languages for getting the data from a sensor network, and generate an optimized and efficient (in
terms of communication cost and energy) query execution plan for in-network query processing.
In GSN (Aberer et al. 2006), we have a different goal: we want to integrate several
heterogeneous sensor networks and issue complex queries on them. GSN gets the streaming
data from the sensor network using wrappers. A wrapper is an interface between the GSN sensor
server and the actual stream producer (e.g., a sensor network). If the data coming into the sensor
server is produced by a physical sensor network, the Cougar and TinyDB projects can be used to
efficiently get the data from the sensor network and deliver it to the sink node, which in turn
delivers it to the listening wrapper. Therefore, the Cougar and TinyDB projects are
complementary to GSN.
4.2 GSN sensor network and components
Task-component relationship: The work described in this section is part of WP4 / Sensors and sensor network.
GSN will act as the hub of information among the different modules of the project. It will:
• allow merging of multiple GSN stores (useful for data collection in the field);
• allow compressed and secure communication;
• use the defined XML format for data exchange;
• allow queries on the meta-information of sensor data;
• support data streaming (useful for video and tracking);
• support a different type of video encoding instead of the current frame-based one.
A shared information system fusing data from different data sources will be implemented. Sensor
data acquisition and management are supported by means of a sensor network system.
The sensor network is based on the Global Sensor Network (GSN) middleware developed by
EPFL during the SwissEx project. The Global Sensor Networks (GSN) platform aims at providing
a flexible middleware to accomplish these goals. GSN assumes the simple model shown in
Figure 10: a sensor network internally may use arbitrary multi-hop, ad-hoc routing algorithms to
deliver sensor readings to one or more sink node(s). A sink node is a node which is connected to
a more powerful base computer, which in turn runs the GSN middleware and may participate in a
(large-scale) network of base computers, each running GSN and serving one or more sensor
networks.
Figure 10 Global Sensor Network
GSN makes no assumptions on the internals of a sensor network other than that the sink node is
connected to the base computer via a software wrapper conforming to the GSN API. On top of
this physical access layer GSN provides so-called virtual sensors which abstract from
implementation details of access to sensor data and define the data stream processing to be
performed. Local and remote virtual sensors, their data streams and the associated query
processing can be combined in arbitrary ways and thus enable the user to build a data-oriented
“Sensor Internet” consisting of sensor networks connected via GSN.
A GSN server consists of the following subcomponents, as shown by the different layers of Figure 11:
• The virtual sensor manager (VSM) is responsible for providing access to the virtual sensors, managing the delivery of sensor data (through local and remote wrappers), and providing the necessary administrative infrastructure. The VSM has two main subcomponents:
• The life-cycle manager (LCM) provides and manages the resources provided to a virtual sensor and manages the interactions with a virtual sensor, such as sensor readings.
• The input stream manager (ISM) is responsible for ensuring stream quality of service via the included stream quality manager (SQM), yet in a simplified way until now, i.e. by dropping outlier values. The data from/to the VSM passes through the storage layer, which is in charge of providing and managing persistent storage for data streams.
• The query manager (QM) is responsible for query processing. The QM includes the query processor, which is in charge of SQL parsing, query planning, and execution of queries (using an adaptive query execution plan). The query repository manages all registered queries (subscriptions) and defines and maintains the set of currently active queries for the query processor. The notification manager deals with the delivery of events and query results to the registered clients. The notification manager has an extensible architecture which allows the user to customize its functionality, for example, having results mailed or being notified via SMS.
Figure 11 The GSN Server
Figure 12 The virtual sensor inside a GSN container
The key abstraction in GSN is the virtual sensor. GSN can contain multiple virtual sensors. Virtual
sensors abstract from implementation details of access to sensor data and correspond either to a
data stream received directly from sensors or to a data stream derived from other virtual sensors.
A virtual sensor can be any kind of data producer, for example, a real sensor, a wireless camera,
a desktop computer, a cell phone, or any combination of virtual sensors. A virtual sensor may
have any number of input data streams and produces exactly one output data stream, based on
the input data streams and arbitrary local processing. In Figure 12, the big picture of the GSN
container is shown. As depicted therein, a virtual sensor may receive real-time or archived data
and then applies the predefined filtering to the data. The specification of a virtual sensor provides
all necessary information required for deploying and using it, including (1) metadata used for
identification and discovery, (2) the structure of the data streams which the virtual sensor
consumes and produces, (3) an SQL-based specification of the stream processing performed in
the virtual sensor, and (4) functional properties related to persistency, error handling, life-cycle
management, and physical deployment. To support rapid deployment, these properties of virtual
sensors are provided in a declarative deployment descriptor XML file (see Figure 13).
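As an illustration only, such a descriptor might look roughly as follows. The element names, the sensor name and the query are hypothetical, simplified to mirror items (1)-(4) above; they do not reproduce the exact GSN schema:

```xml
<!-- Illustrative sketch of a virtual sensor deployment descriptor;
     element names are simplified and do not reproduce the exact GSN schema. -->
<virtual-sensor name="dorfberg-air-temperature">
  <!-- (1) metadata for identification and discovery -->
  <metadata>
    <description>Air temperature at the Dorfberg weather station</description>
  </metadata>
  <!-- (2) structure of the produced data stream -->
  <output-structure>
    <field name="temperature" type="double" unit="degC"/>
  </output-structure>
  <!-- (3) SQL-based specification of the stream processing -->
  <streams>
    <stream name="input">
      <source alias="station" wrapper="remote">
        <query>SELECT temperature FROM station WHERE temperature &gt; -40</query>
      </source>
    </stream>
  </streams>
  <!-- (4) functional properties: persistency, error handling, life cycle -->
  <properties persistent="true" on-error="drop"/>
</virtual-sensor>
```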
Below, the data flow through the GSN server is analyzed. In GSN, a data stream is a set of
timestamped tuples. The order of the data stream is derived from the ordering of the timestamps,
and GSN provides basic support for managing and manipulating the timestamps. The following
essential services are provided:
1) a local clock at each GSN container;
2) implicit management of a timestamp attribute (TIMEID);
3) implicit timestamping of tuples upon arrival at the GSN container (reception time);
4) a windowing mechanism which allows the user to define count- or time-based windows on data streams.
In this way it is always possible to trace the temporal history of data stream elements throughout
the processing history. Multiple time attributes can be associated with data streams and can be
manipulated through SQL queries. Thus sensor networks can be used as observation tools for
the physical world, in which network and processing delays are inherent properties of the
observation process which cannot be made transparent by abstraction.
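The windowing mechanism of service (4) can be sketched in a few lines of Python; the generator interface is an illustrative assumption, not GSN's own (Java-based) implementation:

```python
from collections import deque

def count_window(stream, size):
    """Yield the last `size` tuples each time a new one arrives."""
    window = deque(maxlen=size)
    for tup in stream:
        window.append(tup)
        yield list(window)

def time_window(stream, span_s):
    """Yield all tuples whose timestamp lies within `span_s` of the newest.

    stream: iterable of (timestamp_in_seconds, value) pairs.
    """
    window = []
    for ts, value in stream:
        window.append((ts, value))
        window = [(t, v) for t, v in window if ts - t <= span_s]
        yield list(window)
```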
Figure 13 Virtual sensor definition
The temporal processing in GSN is defined as follows: the production of a new output stream
element of a virtual sensor is always triggered by the arrival of a data stream element from one of
its input streams. Processing is thus event-driven, and the following processing steps are
performed:
1) By default, the new data stream element is timestamped using the local clock of the virtual sensor, provided that the stream element had no timestamp.
2) Based on the timestamps, for each input stream the stream elements are selected according to the definition of the time window, and the resulting sets of relations are un-nested into flat relations.
3) The input stream queries are evaluated and stored into temporary relations.
4) The output query for producing the output stream element is executed based on the temporary relations.
5) The result is permanently stored if required (possibly after some processing) and all consumers of the virtual sensor are notified of the new stream element.
A high-level overview of the streaming data flow inside GSN is given in Figure 14.
Figure 14 Data flow in GSN
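A minimal Python sketch of these five event-driven steps, with query evaluation reduced to plain callables; all names are illustrative assumptions, not GSN's API:

```python
import time

class VirtualSensor:
    """Minimal event-driven processing sketch (steps 1-5 above)."""

    def __init__(self, span_s, input_query, output_query, consumers, store):
        self.span_s = span_s              # time-window length in seconds
        self.window = []                  # [(timestamp, element), ...]
        self.input_query = input_query    # callable: window -> temporary relation
        self.output_query = output_query  # callable: temporary relation -> output
        self.consumers = consumers        # callables notified of new output
        self.store = store                # list standing in for persistent storage

    def on_arrival(self, element, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()   # step 1
        self.window.append((ts, element))
        self.window = [(t, e) for t, e in self.window
                       if ts - t <= self.span_s]                   # step 2
        temp_relation = self.input_query(self.window)              # step 3
        output = self.output_query(temp_relation)                  # step 4
        self.store.append(output)                                  # step 5
        for notify in self.consumers:
            notify(output)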
4.2.1 New GSN components
In HYDROSYS, the following new functionality in GSN is going to be implemented:
GSN Real-time sensor data processing (T4.3)
In HYDROSYS, sensors can be stationary, i.e. at fixed locations, or mobile, e.g. the cameras
mounted on the blimp. The GSN middleware currently supports receiving data streams from
stationary sensors. Therefore, it should be properly extended to receive data streams from mobile
sensors as well. The proposed approach is that GSN wrappers within virtual sensors are
extended to be able to receive real-time data from moving sensors. A GSN wrapper is associated
with a particular data stream source, e.g. most often a sink node. A virtual sensor can be
associated with multiple wrappers. To this end, there are two alternative approaches:
• A mobile sensor can be directly associated with a wrapper, if its movement and the wireless network infrastructure permit such a wireless network link to be maintained.
• Otherwise, different wrappers can be associated with different base stations in the field, and the streaming data is fed to the virtual sensor through the wrapper (and the base station) that has the mobile sensor within its range (see the sketch below).
The streaming data can be fed to the same virtual sensor regardless of the position of the mobile
sensor, or can be fed to different virtual sensors based on the sensor position. It is also possible
that such data source assignments are dynamically determined based on the quality of the
received data and the position of the mobile sensor. Such dynamic changes of data stream
sources may have to be supported inside the virtual sensor component.
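A minimal sketch of the range-based wrapper selection in the second alternative; the base-station records and the notion of range are illustrative assumptions:

```python
import math

def select_wrapper(sensor_xy, base_stations):
    """base_stations: list of dicts with 'pos' (x, y), 'range_m', 'wrapper'."""
    for bs in base_stations:
        dist = math.hypot(sensor_xy[0] - bs["pos"][0],
                          sensor_xy[1] - bs["pos"][1])
        if dist <= bs["range_m"]:
            return bs["wrapper"]   # route the stream through this wrapper
    return None                    # out of range: buffer or drop upstream
```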
Also, in HYDROSYS, real-time data sources have to be fused with archived ones. The archived
data sources can be remote with respect to the monitored site. Therefore, archived data should
be carefully located, and the archived data streams delivered to the monitored site, i.e. to the
GSN server located there. The archived data sources may be accessible through a different GSN
server or even through multiple ones.
SwissEx provides basic data fusion among real-time and archived data. However, the association
of real-time data streams with archived ones can only be static, so the location of the archived
data must be known and configured in advance at a wrapper. The proposed approach for
HYDROSYS is that this new functionality will be provided inside the Input Stream Manager (ISM)
component of GSN. Specifically, the ISM of a GSN server should provide functionality to
dynamically associate remote data stream sources with local wrappers.
GSN SmartClients (T4.4)
The GSN SmartClients will provide the data filtering and pre-processing between the GSN stream
and the visualization front end. The different capabilities of the user-end mobile devices and their
different queries pose requirements on GSN for applying different data filters and queries to
streaming data for different clients. In addition, communications to the cell phone platform utilize
a binary XML protocol, which GSN is not able to manage.
The proposed solution is that GSN is extended so that multiple, potentially concurrent client
requests are handled effectively, introducing a buffering query layer or a push/pull query model.
This new component will also register different data filters and queries for each different client.
Therefore, virtual sensors responsible for the different queries and different data filters at the
GSN server will be used to concurrently serve different clients. The GSN SmartClient extends the
GSN server with customized processing per client. For more information please see section 5.
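As a rough sketch, the buffering layer with per-client filters could look as follows; the class and its interface are illustrative assumptions, not the SmartClient design itself:

```python
from collections import defaultdict, deque

class SmartClientBuffer:
    """Push new stream elements into per-client queues; clients pull later."""

    def __init__(self, maxlen=1000):
        self.filters = {}  # client_id -> predicate on stream elements
        self.queues = defaultdict(lambda: deque(maxlen=maxlen))

    def register(self, client_id, predicate):
        self.filters[client_id] = predicate

    def push(self, element):                 # called on the GSN side
        for client_id, keep in self.filters.items():
            if keep(element):
                self.queues[client_id].append(element)

    def pull(self, client_id, n=10):         # called by the client
        q = self.queues[client_id]
        return [q.popleft() for _ in range(min(n, len(q)))]
```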
GSN Security mechanisms (T4.5)
Monitoring data are considered of crucial importance for present and also for future reference. In
HYDROSYS, proper mechanisms for avoiding data loss or corruption will be developed.
The proposed solution for HYDROSYS is for data fault tolerance and restoration to be handled
inside the Input Stream Manager (ISM) component of GSN. Specifically, monitored data should
be properly archived in safe storage as soon as it is available, so as to prevent data loss. The
data from/to the Virtual Sensor Manager (VSM) will pass through the storage layer, which is in
charge of providing and managing persistent storage for data streams. Also, already from
SwissEx, all transformations applied by the different virtual sensors to the monitored data are
properly stored in the database for data provenance.
The monitored data and the transformed data may also be sensitive to leakage to untrusted
clients or to eavesdroppers on the network. Access to real-time and archived data streams has to
be properly authorized for authenticated clients. Also, in case insecure networks are included in
the data stream path, the confidentiality of the streaming data should be properly protected. The
granularity of data access and data integrity can be defined at different levels, for example, for
the whole GSN container or for individual virtual sensors.
In HYDROSYS, the interface layer of GSN provides access functions for other GSN components
and via the Web, either through a browser or via a web services (WS) interface. Access will be
authorized and shielded by the Access Control Manager (a new component residing at the
interface layer of GSN), providing access only to authenticated parties. Also, the data integrity
layer will provide data integrity by means of electronic signatures. Confidentiality of the data can
be provided by means of virtual private network establishment, when necessary. This can be
implemented at the network layer (e.g. by means of IPSec) or at the overlay layer (e.g. by means
of SSL). Also, regarding authorized data access, database access control mechanisms will be
employed, and the data wrapper component that is responsible for retrieving archived data will
be properly modified.
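As a rough illustration of per-tuple integrity protection, the sketch below uses an HMAC over the serialized element as a stand-in for the electronic signatures mentioned above; a deployed system would use real signatures and proper key management:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"shared-secret"  # placeholder; never hard-code keys in practice

def sign(element: dict) -> dict:
    """Attach an integrity tag to a stream element before transmission."""
    payload = json.dumps(element, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": element, "tag": tag}

def verify(message: dict) -> bool:
    """Check the integrity tag on a received stream element."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```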
Simulation and modeling (T4.6)
HYDROSYS will use and enhance several environmental models, such as Alpine3D and others,
to validate environmental data and assess the potential risks associated with them. Some of
these models have to be continuously fed with real-time sensor data and produce long-term
results. Also, environmental scientists in the field can benefit from partial results of real-time
simulation runs to evaluate the quality of the measurements and to check model performance. In
addition, the availability of on-site and real-time model feedback can support scientists and
technicians in placing new sensors at the “right” location (right meaning most significant with
respect to the investigated process). This will result in a more efficient approach to tackling
critical situations where time can be an issue. Different scenarios can therefore be assessed on
site. For simulation purposes, two simulation servers of significantly different nature are going to
be implemented in HYDROSYS:
• On-demand simulation server: performs calculations on demand, possibly initiated in the field, based on real-time data assimilation, archived data or test data, for short-term experiments or the evaluation of what-if scenarios.
• Continuous simulation server: constantly performs simulation calculations based on real-time data for long-term experiments.
From an architectural point of view, each simulation model running on the servers should be
associated with one virtual sensor for providing input to the model and obtaining its results (see
Figure 15). Therefore, a new virtual sensor will be implemented per simulation model. GSN can
thus be used for providing the input data and reading the output data of the simulation model that
runs at a simulation server. It is also possible that one GSN server instance running in a fixed
location is constantly associated with the static simulation server, while another GSN server
instance running on a laptop in the field is associated with the on-demand simulation server.
GSN Data quality control mechanisms (T4.7)
GSN should now support real data feeds from moving sensors. This means that connectivity may
occasionally be lost and that the communication links may be very noisy.
The proposed solution for HYDROSYS is that the Input Stream Manager (ISM) will now be
responsible for handling sensor disconnections, missing values, unexpected delays, etc., thus
ensuring stream quality of service via the included stream quality manager (SQM). Data
interpolation may be employed to recover missing values, and synchronization techniques (e.g.
real or logical clocks) will be employed to deal with unexpected delays.
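Recovering missing values by interpolation can be sketched as follows, assuming gaps are marked with None and linear interpolation over timestamps suffices (the actual SQM may use different techniques):

```python
def interpolate_gaps(samples):
    """samples: list of (timestamp, value-or-None); returns a filled copy."""
    filled = list(samples)
    known = [i for i, (_, v) in enumerate(filled) if v is not None]
    for i, (ts, v) in enumerate(filled):
        if v is None:
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is not None and right is not None:
                t0, v0 = filled[left]
                t1, v1 = filled[right]
                # linear interpolation between the surrounding valid readings
                filled[i] = (ts, v0 + (v1 - v0) * (ts - t0) / (t1 - t0))
    return filled
```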
Also, in HYDROSYS, streaming data coming from different moving sensors may have to be fused, e.g. from the different cameras mounted on the blimp. A potential solution for HYDROSYS is for the streaming data from the different cameras to be fed to different virtual sensors, whose outputs are later fed to another virtual sensor that fuses the data and produces the combined output stream. An alternative would be for different wrappers to receive input from the different cameras and for all of them to be input streams to a single virtual sensor that fuses the data. However, only the former approach allows the data streams from different cameras to be separately archived in the persistent data storage.
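A sketch of the preferred chaining approach, with illustrative names only, is given below: each camera stream gets its own virtual sensor (so that it can be archived separately), and a downstream fusion virtual sensor combines the outputs once a frame from every camera has arrived.

    class CameraVirtualSensor:
        def __init__(self, camera_id, archive, downstream):
            self.camera_id = camera_id
            self.archive = archive
            self.downstream = downstream

        def on_frame(self, frame):
            self.archive.store(self.camera_id, frame)     # per-camera persistence
            self.downstream.receive(self.camera_id, frame)

    class FusionVirtualSensor:
        def __init__(self, expected_cameras):
            self.expected = set(expected_cameras)
            self.pending = {}

        def receive(self, camera_id, frame):
            self.pending[camera_id] = frame
            if set(self.pending) == self.expected:        # one frame per camera
                fused = self.fuse(self.pending)
                self.pending.clear()
                return fused

        def fuse(self, frames):
            # Placeholder: a real fusion step would align timestamps and
            # stitch or mosaic the camera views.
            return tuple(frames[c] for c in sorted(frames))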
Figure 15 GSN's architecture
4.3 Data exchange formats
HYDROSYS visualizes the world as a geospatial virtual environment, a GeoVE, or 3D map, with
a nearly realistic representation. To construct this representation, HYDROSYS needs to accept
various data sources. Raw data may consist of laser scan points accompanied by R,G,B and IR
channels, possibly sampled at different granularities. Aerial or satellite data may also exist. These
data sets can be represented as images, such as TIFF files, given that laser scan data is
orthoprojected onto the Earth’s surface. Available processed data may include digital elevation
models (DEMs), or topographic maps where heights are represented with isolines (elevation
contours). Topographic maps include feature information, including road and path data, but the
notation conventions (metadata) are typically localized. For example, the Finnish National Land
Survey provides a set of country-wise conventions for their map data. Many map data formats
exist, of which MapInfo and Shape are among the most common ones. Municipal CAD data is
often managed in proprietary systems, such as MicroStation. Common 3D model formats
include VRML, X3D and COLLADA. Urban environment descriptions are being adapted to the
CityGML format.
Support for many of the aforementioned formats and local metadata needs to be developed as inputs to the HYDROSYS system. These data sets are essentially static, can be preprocessed for optimization purposes, and are presented within HYDROSYS in the most suitable formats, as provided by the HYDROSYS SmartClient. Support for client-side format parsing will be minimal. The cell phone platform will embed all data in unified binary XML streams, configured according to the selected cases. Real-time data feeds will support data transcoded to the binary XML format. The mobile devices are not expected to generate data, except for user annotations, which will be mainly used within the system. Implementation of possible mobile sensor inputs depends on available resources and will be validated during the project.
HYDROSYS considers the exchange of information with the general public and not only within the consortium. For this purpose we will create an interface for the exchange of sensor data, encoded in the standard SensorML format.
Sensor Model Language
We have chosen the Sensor Model Language (SensorML) as our encoding standard for the transport of data outside the consortium. SensorML is based on the Geography Markup Language (GML) of the Open Geospatial Consortium (OGC).
The Geography Markup Language (GML) is an XML grammar for expressing geographical
features. GML serves as a modeling language for geographic systems as well as an open
interchange format for geographic transactions on the Internet. As with most XML based
grammars, there are two parts to the grammar – the schema that describes the document and the
instance document that contains the actual data.
A GML document is described using a GML Schema. This allows users and developers to
describe generic geographic data sets that contain points, lines and polygons. However, the
developers of GML envision communities working to define community-specific application
schemas that are specialized extensions of GML. Among these extensions, a particular dialect for the encoding of sensor information has been created: the Sensor Model Language.
The OpenGIS® Sensor Model Language Encoding Standard (SensorML) specifies models and
XML encoding that provide a framework within which the geometric, dynamic, and observational
characteristics of sensors and sensor systems can be defined. There are many different sensor
types, from simple visual thermometers to complex electron microscopes and earth observing
satellites. These can all be supported through the definition of atomic process models and
process chains. Processes described in SensorML are discoverable and executable. All
processes define their inputs, outputs, parameters, and method, as well as provide relevant
metadata. SensorML models detectors and sensors as processes that convert real phenomena to
data. Within SensorML, all processes and components are encoded as application schema of the
Feature model in the Geographic Markup Language (GML) Version 3.1.1. This is one of the OGC
Sensor Web Enablement (SWE) suite of standards.
SensorML is currently encoded in XML Schema. However, the models and encoding pattern for
SensorML follow Semantic Web concepts of Object-Association-Object. HYDROSYS will use this
capability to forward meta information along with sensor data to the general public through a web
interface provided by GSN.
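As a small illustration, the Python sketch below serializes one measurement with descriptive metadata into a SensorML-flavoured XML fragment. The element names, sensor identifier and values are simplified and invented for the example; they do not reproduce the full OGC schema.

    import xml.etree.ElementTree as ET

    def to_sensorml_like(sensor_id, phenomenon, unit, value, timestamp):
        root = ET.Element("ProcessModel", id=sensor_id)
        inputs = ET.SubElement(root, "inputs")
        ET.SubElement(inputs, "input", name=phenomenon)   # observed phenomenon
        outputs = ET.SubElement(root, "outputs")
        out = ET.SubElement(outputs, "output", name=phenomenon, uom=unit)
        out.text = str(value)
        ET.SubElement(root, "time").text = timestamp      # ISO 8601 sampling time
        return ET.tostring(root, encoding="unicode")

    print(to_sensorml_like("ws-dorfberg-03", "airTemperature", "Cel",
                           -4.2, "2009-02-17T10:15:00Z"))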
4.4 Validation factors
For the GSN system and subsystems, several validation factors can be stated that are generally valid for database / data distribution systems. Among others, they include data consistency, the speed and ease of accessing and distributing data, error handling, and persistence.
Some more specific validation issues can also be stated for the new subcomponents being developed in HYDROSYS. GSN real-time data processing, besides fusing real-time data sources with archived ones, presents some validation issues. GSN should now support real data feeds from moving sensors, which means that connectivity may be occasionally lost and that the communication links may be very noisy. Therefore, the Input Stream Manager (ISM) will now be responsible for handling sensor disconnections, missing values, unexpected delays, etc., thus ensuring stream quality of service via the included Stream Quality Manager. One validation issue
here is to check that GSN, as it is implemented now, can handle this kind of error-prone input
streams. A second validation issue is to observe that stream quality can significantly improve
when applying non-simplistic data quality improvement approaches. In addition, the simulation
services require that stable and regular output is delivered for semi-continuous update of the data
visualization in the field. For the user-specified simulations, a usable work flow needs to be
supported that allows users to select and provide data through GIS software and combine it with
data streams handled by GSN.
5 SmartClient Component
5.1 Introduction to data services
The SmartClient component includes all the steps related to accessing data for HYDROSYS
applications. These steps are divided into a set of components for convenience. Data services
define the data that HYDROSYS applications can handle and their format by providing a
metadata definition. These services include mechanisms to convert data from external formats
(preprocessing/transcoding) to the formats required by HYDROSYS applications. These components are divided into data querying components, data conversion components, and storage and data indexing components. As an example, a typical session of a HYDROSYS application is as follows. The SmartClient registers queries according to the current user profile. Before a query is actually registered, a service checks whether this data has already been retrieved (saving processing time by avoiding repetition of expensive conversions). If the query cannot be handled locally, it is registered with GSN (if it is a query for streaming data) or simply sent to GSN (if it is a query for past data). When the data is available from GSN, it goes through a preprocessor to perform the necessary conversions (to convert it to the format specified by the metadata, to perform certain platform-specific conversions, etc.). The data is then stored. A Data Retriever in the client manager is notified of the arrival of data and forwards it to the SmartClient. In the SmartClient the data goes through further conversions, resulting in data ready for visualization.
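The cache-then-register logic of this session can be sketched as follows; the class and method names are hypothetical, as the actual interfaces will be defined during implementation.

    class QueryRegistrar:
        def __init__(self, cache, gsn):
            self.cache = cache
            self.gsn = gsn

        def handle(self, query):
            cached = self.cache.lookup(query)
            if cached is not None:
                return cached               # avoid repeating expensive conversions
            if query.is_streaming:
                self.gsn.register(query)    # publisher/subscriber registration
            else:
                self.gsn.send(query)        # one-shot query for archived data
            return None                     # data will arrive asynchronously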
The following figure exemplifies a typical connection diagram of how data is retrieved from GSN and delivered to the mobile client. First, the Task and Location profiles are sent to the SmartClient; these profiles were previously set during campaign planning (section 2.4.1). The SmartClient receives the profiles and converts them into registration queries that can be understood by the GSN server (the server may be local, on the same machine, or remote, depending on network availability; for an example of a local GSN server see section 3.4). The GSN server will respond with sensor data specific to the requested registration (publisher/subscriber pattern) to the SmartClient, which will in turn preprocess the information before delivering it to the mobile device.
Figure 16 SmartClient data connection diagram
The following components can be identified within the SmartClient:
• Receiver: manages queries and retrieves data.
◦ Profile Manager: Manages a set of profiles that the user may activate, and a current
running profile storing all active queries for the user (see section 5.1.1)
◦ Transcoding: converts data from network transport format to a format appropriate for
the visualization component.
• GSN Node: gathers data for clients (campaign-wide) according to the client queries. It includes conversions on the data necessary to account for platform details, and data caching to provide for similar queries from different users.
◦ Client Manager: in the case of multiple clients, it registers queries and retrieves data for each client; depending on the deployment strategy, there may be more than one of these components acting at the same time on a given machine.
▪ Query registrar: attempts to satisfy user queries with data in the local cache; if that is not possible, queries are sent to GSN.
▪ Data Retriever: retrieves the data from local storage and forwards it to the client.
◦ Data Preprocessor: performs conversions on data received from GSN, adapting it for
the current platform.
The above diagram presents a specific version of the internal elements of a SmartClient, since there are several ways to deploy the data services components. One may consider the local GSN Node to be running on a special server that handles several clients. It is also possible that it runs on a handheld device, and that the SmartClient on the same handheld device connects to it to retrieve the data. In the latter case, the data has most probably been queried and preprocessed previously as part of the campaign preparation phase. Different platforms may require different deployment plans (information present in the Device profile, see section 1.3): while handhelds may deploy a light preprocessor and do the heavier crunching in the transcoding component, mobile phones probably require a preprocessor that does more work, offloading it from the slim platform.
5.1.1 Profiles handling user preferences
In order to deal with the high volume of data and its complexity, HYDROSYS relies on a
SmartClient component and the notion of profiles. The SmartClient is a negotiator for GSN data.
It stores profiles containing information about required data and how to retrieve it. In its most
simple format, a profile is just a query. Supposing GSN acts like a big database, each profile
would be a (complex) query to that database. The SmartClient is in charge of obtaining data
according to the profile(s) that are currently enabled. Profile management also includes other
tasks and information that are not GSN specific, like storing the device's hardware setup, and
user interface preferences. The latter would also include stripping down the functionality of the UI,
for example to offer a simple interface for school kids.
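A minimal sketch of what such a profile might store is given below; the field names and the query string are invented for the example, and in the simplest case the data part reduces to a single query.

    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        name: str                                       # e.g. a user role
        queries: list = field(default_factory=list)     # GSN queries for this role
        device_setup: dict = field(default_factory=dict)    # hardware description
        ui_preferences: dict = field(default_factory=dict)  # e.g. simplified UI

    simple = Profile(name="school kids",
                     queries=["SELECT temperature FROM dorfberg_station"],
                     ui_preferences={"simplified_ui": True})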
In reality, the profile includes several user preferences. One of the goals of profiles is to associate them with user roles. This allows a user to view different data depending on which profile is currently activated. An advanced feature, depending on processing power and network bandwidth, would allow a user to activate more than one profile at the same time, switching
profiles to observe different aspects. This feature, which could be regarded as layered augmentation, optimizes screen space usage.
User interfaces for profile management, including creation, update, deletion and comparison, are considered. Profiles are not created or updated automatically: they are complex enough, storing preferences and information about different tasks, that their automatic generation would entail a process out of the scope of this project. However, part of the profile management user interface is the creation of profile templates. Profiles can be thought of as representing roles that users assume when using the system, and thus cover different needs, for example regarding data. Profile management may include templates for roles (sensor specialist, hydrologist, geologist, etc.) that are required in a certain campaign or site.
campaign may have considered different roles and created profiles for each. The actual users
can download those profiles and modify them, or use profiles that they already have in their
setups. Profile switching becomes an important task when used for collaboration purposes, where
a user may want to share her viewpoint together with augmentations. In that case, another user,
to whom this viewpoint is remote, needs to switch viewpoints and profiles momentarily.
5.2 Transcoding pipeline
The information produced by sensors is not homogeneous: some sensors deliver data in binary format, others as voltage readings, text files, and so on. Moreover, the information, although geo-referenced, is abstract and cannot be presented raw to the user. Traditionally, hydrologists use math software to produce plots of these sets of information. This is, however, not an automatic process, and it is unsuitable for interactive graphics. Therefore, we need a mechanism to convert the raw input information, whether from sensors, simulations or even user-generated, into visual primitives. This process is called transcoding, and it is essential for any complex system with such varied sources of information.
The transcoding pipeline refers to the process of converting data from one format to another. In the HYDROSYS project this means converting data from raw sensor output to visual information. The required transcoding is not simply a one-to-one conversion from one format to another; it also includes a higher-level interpretation of the sensor data. The transcoding of semantic attributes of sensor data into purely visual primitives necessarily implies information loss. It is therefore important to find the right point in the pipeline for the transcoding: if the semantic information is discarded early, it cannot be used for interaction with the user at a later stage of the pipeline; if it is discarded late, this induces the overhead of repeated interpretation of the semantics at runtime, and adversely affects performance.
The transcoding pipeline creates sensor visual models based on real-world data used by researchers of the hydrology community rather than on synthetic, made-up data. The resulting visual data sets will be embedded with semantic markup (such as the date of data collection and the type of sensor), used for interactive filtering and styling of the models. This allows the actual graphics primitives to be generated on the fly by scripts that are included in the data sets for interactive visualization behavior, without compromising interactive frame rates. This approach allows the visualization styles to be defined in relation to the semantic markup and thus independently of any actual object structures. For front-end devices with fewer computational resources, on the other hand, this stage can be placed a step earlier.
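The sketch below illustrates this late-transcoding idea: the visual primitive keeps the semantic markup attached, so filtering and styling can still be driven by it at runtime. The attribute names and styling rules are invented for the example.

    def transcode(reading, style_rules):
        # Map one sensor reading to a styled visual primitive.
        primitive = {
            "geometry": "billboard",
            "position": reading["position"],          # geo-referenced placement
            # Semantic markup kept for interactive filtering and styling:
            "semantics": {"sensor_type": reading["type"],
                          "collected": reading["timestamp"]},
        }
        # Styles are defined against the semantic markup, not object structure.
        primitive["style"] = style_rules.get(reading["type"], {"color": "grey"})
        return primitive

    rules = {"temperature": {"color": "red_blue_ramp"}}
    transcode({"type": "temperature",
               "position": (785400.0, 187300.0, 2560.0),
               "timestamp": "2009-02-17T10:15:00Z"}, rules)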
The transcoding pipeline for the HYDROSYS project will begin at the sensors themselves and end at the mobile devices used in the field. The following diagram highlights the components involved in the transcoding of sensor data. The transcoding pipeline in the diagram considers only the transcoding of data between the data server, the data services and the presentation component. However, transcoding exists in essentially every step of the system architecture; for data from sensors to be delivered up to the presentation component, the following steps are necessary:
• Internal recording. First, the data from sensors needs to be internally recorded in the proprietary format.
• GSN delivery. The second step is to relay this information to GSN’s data store. The SensorScope stations have a direct network connection to GSN, but data from older sensors needs to be collected manually.
• XML encoding. GSN is designed to distribute sensor data in a general, standardized format. It encodes all input to an XML dialect (SensorML has been chosen as the standard to be used), which is then received by any system that has subscribed to GSN feeds.
• Reception, decoding, processing. Once the data arrives at the HYDROSYS Data Services, its fate depends on the services currently requested by the visualization front end. It may be simply forwarded, possibly in a compressed format for lightweight devices and slow networks. Or, if the visualization front end has requested further processing, such as simulation, the data may first enter a Speculative Modeling module, which then outputs the simulation results. Data Services can also bind and mix the data with HYDROSYS-specific data structures. At this point the data is within the HYDROSYS system and can be transmitted to the visualization front end in a proprietary format.
• Display. The visualization front end receives the data and renders it for the user.
5.3 Data processing for lightweight visualization
Task-component relationship
The work described in this section is part of Task 3.4 Data Visualization.
Mobile devices are characterized by their thinness, their lack of resources. Even devices equipped with 3D hardware are able to render and keep in memory only small data sets, while real-world data easily exceeds their capabilities. In addition, the devices run on
batteries. Running 3D visualizations at interactive frame rates strains the batteries easily. While unnecessary computations on a desktop could go unnoticed, on a mobile device they would drain the precious and limited energy source.
Developing mobile 3D applications is a demanding challenge, dominated by optimization issues, especially in relation to data. One needs to consider all the aspects of this environment. First, manually built data sets intended for mobile use should be designed to be lightweight, and existing or automatically generated data sets should be optimized for rendering. This involves graphics designs and optimization processes that yield simpler geometry and simpler surface detail than would be possible on desktop platforms. Furthermore, importance could be exploited, where important data is highly detailed while unimportant data is presented with coarser approximations.
Second, one must consider view dependency. Any rendering that does not yield a visible result,
such as rendering hidden or extremely small surfaces, should be avoided. In this case, the data
that doesn’t contribute to the current view is culled from the memory and the rendering pipeline.
Similarly, a complex object consuming only a small amount of screen space could be replaced by
a simpler object, or even an impostor, for example a billboard. Unfortunately, visibility
determination or level-of-detail simplification may induce demanding run time computations. One
solution is preprocessing, where static data sets are optimized as an offline process,
incorporating view dependencies, where the whole space is spanned automatically to determine
the visible objects and suitable representations for each view position.
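A minimal sketch of such a view-dependent selection step is shown below; the scene and camera interfaces, as well as the thresholds, are purely illustrative.

    def select_representation(obj, camera):
        if not camera.frustum_contains(obj.bounds):
            return None                            # culled: contributes nothing
        apparent = obj.radius / max(camera.distance_to(obj), 1e-6)
        if apparent < 0.01:
            return obj.billboard                   # impostor for a tiny footprint
        if apparent < 0.05:
            return obj.low_detail_mesh             # coarse level of detail
        return obj.full_mesh                       # full detail up close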
Third, all resulting data sets need to be represented in an efficient, compact form, possibly involving specialized compression methods. Even a properly designed, lightweight 3D scene could still consume a considerable amount of space in a standardized model exchange format such as X3D, while a binary, more proprietary solution could reduce the size by an order of magnitude without compromising the contents. This transcoding stage is especially essential for real-time data streams (see below).
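The gain from such binary packing can be illustrated with a trivial sketch: three 32-bit floats take 12 bytes per vertex, against several tens of bytes for the same coordinate written as decimal text in an X3D file.

    import struct

    def pack_vertices(vertices):
        # vertices: iterable of (x, y, z) floats -> compact little-endian bytes
        return b"".join(struct.pack("<fff", x, y, z) for x, y, z in vertices)

    blob = pack_vertices([(785400.12, 187300.55, 2560.8)])
    assert len(blob) == 12    # versus dozens of characters as text per vertex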
The abovementioned methods mostly deal with static data. Unfortunately, real time data feeds
such as sensor data cannot be preprocessed, and may not even present directly renderable data.
Other analytical means of compression, data selection and geometry generation must be applied,
possibly in separate online processing servers. Alternatively, renderable geometry could be
procedurally generated at the mobile client. In this case, the generating algorithms should utilize
the previously mentioned optimization methods.
At the final stage, data is delivered to the devices. Operators claim that mobile networks achieve higher and higher peak rates, but in the real world the situation is not so simple. High-speed cell network coverage does not extend outside urban areas, and even there, the theoretical transmission rates are achieved only on rare occasions. In the worst case, no networks
are available. HYDROSYS attacks this problem by providing a local WLAN connection where possible, and by utilizing efficient transmission protocols for all other cases, especially for the cell phone platform. The actual transmission can adopt optimization schemes similar to those of the preprocessing stages: the most important and most proximate data sets are transmitted first, possibly utilizing level-of-detail optimizations and sending only the currently needed representation of the data.
Graphics preprocessing for handheld computers
The handheld devices are small, full-featured computers with limited processing and graphical power. They are capable of displaying graphics at high speed, provided that the data sets are not excessively large. Therefore, the SmartClient component needs to preprocess the incoming sensor data before deploying it to the mobile devices. The preprocessing takes place at several levels, such as level of detail and polygon reduction. Furthermore, the SmartClient is also in charge of delivering the data in a form that is understandable by the handheld device.
Level-of-detail optimization refers to decreasing the detail of objects depending on their distance to the viewer: objects far away have less detail than those closer by. This is a common optimization technique that HYDROSYS will employ to minimize network transmissions.
Information coming from sensors is not directly understandable by the handheld client; it first needs to be transcoded (previous section). In order to properly display information from sensors, the handheld client needs to convert this information into geometrical primitives, or polygons. Such conversions traditionally receive input from point clouds (such as DEMs or simulation interpolations) and generate a series of triangular polygons. The polygon reduction technique tries to find a minimal representation where the number of polygons is reduced but the visual quality is not adversely affected. These and other techniques will be implemented by HYDROSYS as part of the SmartClient component.
Graphics preprocessing for cell phones
Figure 17 (left) presents a pipeline for the m-LOMA system, where 3D data is visibility- and LOD-optimized, 2D road data is processed into topological structures, and various location-based data sets are unified and all placed in a database. At run time, this data is requested from the server. Figure 17 (right) presents a visibility-based fetch scheme: based on the current location, the client first requests a list of components visible from that location. Then, based on the list, the actual 3D geometry is requested. Finally, based on the distance, the related textures are fetched. All data is transferred with a lightweight binary XML protocol. The HYDROSYS system will utilize the same processing, management and transmission structure, even though the visibility metrics may be transformed into importance metrics.
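The three-stage fetch can be sketched as follows; the server calls and the distance threshold are hypothetical placeholders, not the actual m-LOMA or HYDROSYS protocol.

    def fetch_scene(server, location):
        visible = server.get_visible_components(location)        # stage 1: id list
        geometry = [server.get_geometry(c.id) for c in visible]  # stage 2: meshes
        # Stage 3: textures, nearest components first, resolution by distance.
        for c in sorted(visible, key=lambda c: c.distance):
            server.get_texture(c.id, resolution_for(c.distance))
        return geometry

    def resolution_for(distance):
        return "high" if distance < 100.0 else "low"   # illustrative threshold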
Figure 17 Data processing and transmission for mobile, lightweight visualization
Remote Rendering
Due to the heavy requirements on data visualization and the aforementioned data issues, it may sometimes be necessary to outsource rendering tasks to nearby servers. The HYDROSYS project will have a dedicated server on-site on which both the cell phone and the UMPC may rely. This is called remote rendering, a technique that allows a powerful machine to perform the rendering duties of low-powered devices. Remote rendering techniques suit situations where the rendering requirements are exceptionally high and cannot be optimized for mobile devices. In other cases, due to cell network latencies and limited bandwidth, remote rendering cannot provide as interactive an experience to the user as local rendering.
Data Quality Report
The HYDROSYS project will be composed of complex communication between several servers, mobile devices, sensors, cameras and so on. Due to the likelihood of some hardware being unavailable for communication, steps have to be provided to, at a minimum, display its status. The problems can be varied: devices out of range, offline or unreachable, damaged, busy and so on. Moreover, even if a device is available for connection, the data it sends out may not be reliable (for example, incoherent sensor readings). The HYDROSYS project will provide a mechanism to query the status of devices and whether they are available for connection, as well as the quality of their data.
5.4 Validation factors
The SmartClient infrastructure binds several system components in order to select and process the data from GSN into a format suitable for the handhelds. As such, it requires easy data browsing and selection (likely to be provided by the HYDROSYS user interfaces) and close to real-time processing of this data. Whereas the initial update of information on the handheld should be quick (it can be based on the latest preprocessed data), the update of this data over time through sensor data updates does not need to be real-time: once the data is processed, it can be updated at the handheld. Especially since most sensors will not provide data that is updated very frequently (e.g. every second), this does not pose any problems in the field. Naturally, the update of this data requires a (stable) network connection.
6 Simulation Component
Task-component relationship
The work described in this section is part of Task 4.6 / GSN Simulation and modeling.
The applications of simulation include more than prediction and experimental testing. Simulations aid in studying and understanding the system dynamics and characteristics. In practice, some simulations span single characteristics of the whole system, and they are often not considered simulations but thought of as the normal output of sensors. This is because they are so often used to visualize the raw data that they are automatically associated with those data, without regard for the seemingly complex process used to produce them. The visualization component relies on such simulations to obtain appropriate alternatives for the visualization of data, insofar as to require them to be run on the data automatically. These simulations include different kinds of interpolations from sensor readings to overview large areas, as is the case for temperature maps of an area obtained by interpolation of sensor readings. The same holds for moisture maps, wind speed and direction, etc.
Figure 18 Desktop visualizations of simulation results
In order to calculate a temperature map, a simulation is run that takes as input a DTM and the sensor readings and locations. The interpolation cannot be produced without a DTM, for obvious reasons. As explained later on, simulations run mostly offsite and offline, and interactive rates cannot be expected from them. However, given that these simulations play a crucial role for the purposes of data visualization, some alternatives must be found for executing them without user intervention. On the one hand, the profile of the users will determine whether they are interested in such a visualization form, thus enabling the SmartClient (responsible for obtaining data for visualization) to query for the data and simultaneously trigger the simulation (in order to obtain the interpolation results). On the other hand, in a campaign where several users are assumed to be interested in interpolations of data from the same set of sensors, a separate process in charge of the data services can be initiated to trigger the interpolation simulations as soon as new data arrives, attempting to pregenerate the desired results. All approaches must be evaluated from the usability point of view, taking the opinions of the users as a basis, and from the service point of view, considering the load balancing of the services.
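One common choice for this kind of sensor-to-surface interpolation is inverse distance weighting (IDW); the sketch below computes such a temperature map over the DTM grid. It is deliberately simplified (no elevation correction, at least one reading assumed) and does not prescribe the interpolation method that will actually be used.

    def idw_temperature_map(dtm_cells, readings, power=2.0):
        # dtm_cells: list of (x, y) grid centres; readings: list of (x, y, temp).
        temp_map = {}
        for cx, cy in dtm_cells:
            num = den = 0.0
            for sx, sy, t in readings:
                d2 = (cx - sx) ** 2 + (cy - sy) ** 2
                if d2 == 0.0:                  # cell coincides with a sensor
                    num, den = t, 1.0
                    break
                w = 1.0 / d2 ** (power / 2)    # weight falls off with distance
                num += w * t
                den += w
            temp_map[(cx, cy)] = num / den
        return temp_map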
Sensor information for HYDROSYS may vary strongly over time. It may sometimes be useful to present this information to the user in a more conventional way, by displaying traditional graphs. Graph displays have the advantage of presenting an overview of varying data in a form that hydrologists are comfortable with. Appendix 7 has several examples of how such graphs might look in an AR environment.
6.1 State of the art
The goal of environmental sciences mainly consists in understanding natural processes in order to be able to interact with them; in the special context of natural hazards, understanding the mechanisms that might trigger hazards threatening human lives and property provides a means of planning and arranging countermeasures.
Within this context, data collection and timing are important issues. Monitoring provides data, and data transfer protocols contribute to timing. These data feed mathematical models, which are mainly tools to reproduce reality. There is a vast number of models, and a selection is usually made according to the environmental process of interest. A simulation is a model run
where the data have been given to a model, which elaborates a possible scenario. A scenario can be either a reproduction of a past event, if the simulation was run with the purpose of setting up the model, or a future event if modeling is used for forecasting purposes.
In general, in environmental sciences, simulations represent a tool to investigate reality through the design of possible scenarios, and to capture those which might pose interesting questions.
In addition, simulations allow sound hypotheses to be built and verified following rigorous mathematical and logical procedures.
6.2 Alpine scenario
In the proposed Alpine scenarios (SLF) there will be the option to include simulations, based on the models listed below. Simulations (SNOWPACK and ALPINE3D) play a major role in the “Dorfberg” Alpine scenario, where fast-changing boundary conditions during field work have a major impact on the measurements. The possibility of running simulations under given boundary conditions, to visualize relevant processes indicating the most important spots for detailed measurements, could increase the efficiency and accuracy of data acquisition on site. Here actual on-site simulations would play a minor role; more important are on-site visualizations of simulation pre-runs (an office task).
The simple engineering model will be applied in the “Gemsstock” Alpine scenario. This will include simple on-site data processing. This model will be run on site (locally) in order to prevent time-consuming data transfer to a remote server and back.
SNOWPACK model
The one-dimensional numerical model SNOWPACK allows simulation of the local snow cover and requires input data from high-alpine automatic weather and snow stations (Lehning and others, 1999; Bartelt and Lehning, 2002). The model is based on a Lagrangian finite-element scheme, and the snow cover is regarded as a three-component material consisting of air, water and ice. The following processes are taken into account: snow settlement; transport of heat, water vapour and liquid water in the snow cover; metamorphism of the snow crystals; phase-change processes; and the exchange of mass and energy between the snow and the atmosphere (Fierz and Lehning, 2001; Lehning and others, 2002a, b). The model applications range from avalanche warning services (simulation of the local snow cover characteristics, Fig. 1) over one-dimensional process studies to simplified climate change studies (research).
ALPINE3D model
ALPINE3D is a model for the high-resolution simulation of alpine surface processes, in particular snow processes. The model can be driven by measurements from automatic weather stations or by meteorological model outputs, and it incorporates the most important small-scale physics at a typical spatial resolution of about 25 m. As a preprocessing alternative, specific high-resolution meteorological fields can be created by running a meteorological model. The core three-dimensional ALPINE3D modules consist of a radiation balance and a drifting snow model. The processes in the atmosphere are thus treated in three dimensions and are coupled to a one-dimensional model of vegetation, snow and soil (SNOWPACK). The model is completed by a conceptual runoff module (Lehning et al. 2002, Fig. 2).
There is a wide range of model applications: simulations of snow and runoff dynamics in an alpine catchment, of snow height distributions, of wind fields and snow drift, spatial investigations of climate change impacts, and many more.
A simple engineering model for data processing: This model still needs to be developed. It will be applied for on-site data processing, taking as input inclination and angle measurements of infrastructure in permafrost, and giving as output the information whether deformation rates are below or above a threshold value.
GEOtop
GEOtop, www.geotop.org (Rigon et al. 2006), is a distributed, physically based hydrological model. It performs a detailed analysis of the hydrological cycle, computing both the energy and the water balance on a three-dimensional grid built on a detailed topography. It solves Richards' equation numerically in a quasi-3D scheme, allowing lateral and vertical flows to be computed. It also
accounts for complex fluxes between soil and atmosphere which have a significant impact on soil
moisture content, on water table depth and on the matric suction during the interstorm period.
The model describes the temporal and spatial evolution of the water table, of the soil moisture
content and of the matric suction. It also simulates the transient pore water pressure due to
infiltration and redistribution processes, which are of primary importance as initial conditions for
landslide triggering.
6.3 Simulation in the Nordic cases
The processes that need to be simulated in the Nordic cases are (i) open channel flow and (ii) selected material transport and chemical processes that take place within the water in channels. The open channel flow model code that is planned to be used has been developed in-house at TKK and is based on the numerical solution of the St. Venant equations for 1D unsteady, sub-critical flow (Helmiö 2004 and the journal papers attached to it). These conditions are generally met in the case areas since the slopes are mild. The material transport and chemical processes that are focused upon are sediment transport and the degradation of glycol (de-icing chemicals). Simple physical and chemical models will be implemented on top of the flow model.
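For reference, the 1D St. Venant equations in a commonly used form are (the exact formulation in the in-house TKK code may differ):

    \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q_L, \qquad
    \frac{\partial Q}{\partial t}
      + \frac{\partial}{\partial x}\!\left(\frac{Q^2}{A}\right)
      + g A \frac{\partial h}{\partial x} = g A \left(S_0 - S_f\right),

where A is the wetted cross-sectional area, Q the discharge, q_L the lateral inflow per unit length, h the water depth, g the gravitational acceleration, S_0 the bed slope and S_f the friction slope.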
The Ridalinpuro site is a small and simple system with very few junctions; the Kylmäoja site is a more complex system with more than ten junctions. The geometry of the reach elements, the catchment delineation, and the topology of the whole system will be constructed by hand (offline and not with HYDROSYS tools) using GIS tools and the baseline data from the sites.
In the Nordic cases both the development of the simulation models for the sites and their use will
be studied within the project. The Ridalinpuro site can be considered a well-observed system as it
contains, e.g., installed V-notch dams for good discharge measurements. The Kylmäoja site is a
poorly-observed system, and the modeling and model use to solve problems or aid in planning
and management is much more constrained and challenging. Research questions linked to
modeling and model use include, e.g., the hydrological and hydraulic response of the catchments
and channel system, dominant material transport and chemical degradation processes and their
characteristics, the hydraulic retention effect of the vegetation, interaction between main channel
and floodplain. The modeling support that is sought from the system to be developed is, broadly defined, the capability to test a model and its input and output data against what the modeler observes (with or without the help of sensors) in the field. The support for model use that is sought from the system is, again broadly defined, problem solving related to the planning and management of the water and environmental system. The modeler and/or the system to be developed needs to consider issues like when to simulate, and what to do in the case of incomplete data.
In the Nordic cases the simulation models need/have the following characteristics:
1. Description of the physical system as required by the simulation models
- physical dimensions (cross-sections, floodplains, undulation) and topology (junctions of
channels)
- characteristics of vegetation
2. Observed conditions at a given moment at given locations
- water level / input of water into the hydrosystem
- concentrations of pollutants / input of pollutants into the hydrosystem
Typically we need to preprocess data of both types to make it directly available to the simulation model.
The models are typically run in now-casting mode (no simulation in time):
a. To validate the model
b. To compute backwards to gain insight into things that we do not know (the source of pollution, the amount of pollution at the source)
or in event mode (simulating the effects of a, usually hypothetical, event in time):
c. To examine the implications of a design (current, planned)
The core of a simulation model is a program which produces output data from input data. The data is as explained above, and it relates water-related variables (discharge, water level, pollutant concentration) to a place and an instant in time. More information can be found in Appendix 6.
6.3.1 Simulation
Simulation High-Level Architecture
Two simulation servers running the different simulation models for the Alpine and the Nordic scenarios are going to be implemented and integrated with the GSN middleware:
• An on-demand simulation server that runs simulation models upon request, based on some real-time sensor data from the field or some test data, producing conclusions on what-if scenarios. The results of these simulations are usually expected on short time scales in the field.
• A static simulation server that continuously runs simulation models based on real-time sensor data from the field and on archived data, producing long-term warning events. These simulation models can be dynamically reconfigured.
Figure 19 Scenarios use cases
Use case
The following use cases represent how the simulation module can be used, assuming users (specialists, researchers) repeatedly (e.g. every two weeks) visit a site for observations. Specialists rely on these visits to obtain information that is otherwise missing:
1. In one visit the specialist notes some points where changes in measurement might introduce significant changes.
2. The specialist identifies the locations of interest and creates virtual sensors for them. If the site is not remote (meaning that the person will be back shortly), she may submit the parameters to some other specialist for verification. The simulation might be started later in the office or immediately on site (it might have been started by the specialist in the field or not); it takes 15 days to complete.
3. In the next visit to the site (the simulation has hopefully finished), the results of the parameterized simulation can be visualized in the field. The specialist might for example change some parameters of the simulation (changing virtual sensor positions, etc.) or add a physical sensor at a point where it makes sense to have one, and go back, maybe after triggering another cycle of simulation.
Assuming a visit to a site takes ~3 days and a certain simulation takes ~1 day to complete:
1. A specialist decides she needs to run a parameterized simulation (maybe they are planning some construction, or visual observation shows differences with the digital information in the lab [e.g. all the trees have been cut down]). Whatever the reason, the specialists decide to run a simulation. They explore the site and define virtual sensors in different spots, and even some changes to the DTM (all trees were cut down). They trigger the simulation and go about their business, making other observations, during the next day(s).
2. As soon as the simulation is finished, the specialists can visualize its results in the field; they can also plan certain corrective measures, and might even be able to run another cycle before they return home.
6.4 Validation
The integrated simulation servers will be validated by running simulation models based on test data produced by special wrappers for this purpose. Simulation output will first be observed at the web interface of the virtual sensor of each simulation model.
7 Tracking Component
Task-component relationship
The work described in this section is part of Task 3.6 Hybrid Tracking.
The Tracking component provides the information about the user’s position and orientation (pose) in the real world. For example, on the one hand GPS is a form of tracking that tells us about the position of a device in the world. On the other hand, an inertial measurement unit (IMU) provides us with information about the user’s view direction. These two disjoint sources of information are complementary, but they are hardly robust and reliable: the GPS signal might be lost, or a magnetic field might interfere with the IMU readings. Therefore we dedicate a component to tackling the problem of fusing multiple tracking sources: Hybrid Tracking.
7.1 Hybrid tracking
Hybrid tracking is needed so that multiple tracking techniques can benefit from each other in order to provide the best possible tracking. Individual tracking techniques can fail or perform poorly under particular circumstances; the fusion of these techniques can therefore ensure continuous and reliable tracking.
Figure 20 Hybrid tracking device connection
GPS is a global tracking technique which provides 3D measurements at low update rates, with short-term errors but no long-term drift. GPS measurements from the handheld unit may be intermittent if a GPS lock is not successfully acquired. GPS measurements of the UWB base station vehicle position will always be present, since the location of the vehicle can be measured over a long period of time, and will not be subject to drift.
IMU measurements (3D position and orientation) from the handheld units will be continuous and
rapid (approximately 100Hz) and provide accurate differential measurements. However, absolute
position is not measured, so the measurements are not suitable for long-term use since they drift
over time.
The UWB system provides accurate and high speed 3D position and orientation measurements,
which are relative to the GPS location of the vehicle. The measurements will only be available to
the handhelds within range of the vehicle, and the accuracy will be greatest close to the vehicle and lower at the extreme ranges of operation.
The visual tracking system will be a two-part system. One part will measure 3D position and orientation by observing the horizon and hill silhouettes. This 3D localisation will be relatively poor and dependent on what is visible. Furthermore, the measurements will only be present when the handheld unit is held in such a way as to capture horizon information, and the output may be ambiguous without input from GPS. The other part of the system will be a 3D tracking system based on texture maps acquired from the blimp. This will provide somewhat better accuracy than the horizon system, but it requires a good estimate of the initial position in order to operate successfully.
The hybrid tracking system will combine the different sensors in a robust and principled manner. Kalman filtering, for instance, is not suitable by itself, since the measurement statistics are not Gaussian and measurements cannot be taken independently of one another. For instance, an approximate 3D position must be known in order for the visual tracking system to produce a measurement.
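In its very simplest form, the fusion of an absolute fix (GPS or UWB) with IMU dead reckoning can be sketched as a gated, variance-weighted blend, as below. This is only an illustration of the weighting idea, not the principled estimator that will actually be developed, which must also cope with the non-Gaussian, interdependent measurements noted above.

    def fuse_position(predicted, predicted_var, fix, fix_var, gate=30.0):
        # predicted: IMU dead-reckoned (x, y, z); fix: absolute measurement.
        # Variances are scalar (metres squared); gate is in metres.
        dist2 = sum((p - f) ** 2 for p, f in zip(predicted, fix))
        if dist2 > gate ** 2:
            return predicted, predicted_var             # reject inconsistent fix
        k = predicted_var / (predicted_var + fix_var)   # blending gain
        fused = tuple(p + k * (f - p) for p, f in zip(predicted, fix))
        return fused, (1.0 - k) * predicted_var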
After the sensor fusion steps, the 3D positions of the handheld units will be broadcast over the
local network to all other handheld units.
UBISENSE Localisation System
In order to track assets, objects or people in a certain area, a sensor cell has to be set up. The cell involves a master and slaves, and can include up to 12 sensors, with which up to 40 tags can be tracked per second. The tag has an omnidirectional antenna (see Appendix 2) with roughly the same gain across the horizontal 360°. The antenna pattern is similar to a waterfall, with the signal undergoing massive attenuation at the bottom and the top. The sensor has a directional antenna with an HPBW (Half Power Beamwidth) of 100°.
Figure 21 Ubisense deployment diagram
General setup
Figure 21 shows each element of the system and indicates how they are connected. The cabling consists of two separate connections: one for Ethernet data communication and one used to transmit the proprietary Ubisense timing signal as a basis for the TDOA calculations. The timing signal connection is set up using a daisy-chaining approach with a minimal number of hops. The site’s actual cabling is subject to the Ubisense installation plan.
Typically, other systems access the data generated by the Ubisense positioning system either via
a direct connection to the Ethernet network that links the sensors (shown above), or via a firewall
isolating the sensor network from a more sensitive network, such as the public Internet. An
alternative approach is to use Wi-Fi bridges instead of cables and power supplies at the sensors.
Coverage issue
At 0 degrees (bore-sight) there is maximum gain (or minimum loss). As one moves off bore-sight there is a loss in gain, equating to a lower signal. At ~40° the loss is ~3 dB, and at ~50° it is ~6 dB. This implies a range reduction factor of 1.4 at 3 dB and 2 at 6 dB, i.e. at ~50° off bore-sight one gets half the range. An example showing the coverage calculation is given in the following figure.
Figure 22 Range calculation example
A key deployment consideration is balancing horizontal range with close-in coverage. Small tilt angles achieve maximum horizontal range. However, at a small tilt angle the drop-off in antenna gain is considerable directly beneath the sensor, resulting in lower ranges (or poor coverage) at the cell edges. In Figure 22, a tilt of ~40° would imply a maximum sensor height of ~10 m. At a height of 10 m and a tilt of 40°, the bore-sight point would hit the ground at a horizontal range of approximately 12 m. However, the antenna gain does not drop off much until around 25–30° off bore-sight. Hence, a range of around 100 m may be achievable (assuming the tag bore-sight is pointing at the sensor).
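These figures follow from simple link-budget and trigonometry arguments: a gain loss of L dB reduces range by a factor of 10^(L/20), and the bore-sight ground hit of a tilted sensor is height/tan(tilt). A small check:

    import math

    def range_reduction(loss_db):
        return 10 ** (loss_db / 20.0)

    def boresight_ground_range(height_m, tilt_deg):
        return height_m / math.tan(math.radians(tilt_deg))

    print(range_reduction(3))               # ~1.41, i.e. the factor 1.4 above
    print(range_reduction(6))               # ~2.0
    print(boresight_ground_range(10, 40))   # ~11.9 m, matching Figure 22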
7.1.1 Vehicle Setup
Task-component relationship
The work described in this section is part of Task 3.7 Vehicle setup.
In order to deploy some of the hardware setups in the field, one option is to transport and mount some of the hardware on a vehicle like a car or a quad. Depending on the sites to visit, deployments can thus be made easier. Some sites might not grant access to vehicles (like very mountainous areas); in such a case, the hardware setup needs to be flexible enough to be transported by other means. A rack-like construction is envisioned that can also be placed on tripods to install some of the equipment. Specifically, the vehicle will hold additional devices for the hybrid tracking setup, some equipment for the blimp deployment, and likely some of the support devices (and possibly additional power sources) like the outdoor laptops. The latter can also be carried by the users themselves, but it is of course rather convenient to mount everything in a car, producing a small control-room-like infrastructure.
Due to the directional antennas of the UWB sensors, the use of the Ubisense localization system implies that the wider the baseline, the better the solution will be in terms of GDOP (geometric dilution of precision). Moreover, up to a point (the maximum number of sensors supported by a cell is ~12), the more sensors the area involves, the better the solution will be (more averaging, better GDOP). The sensor positions should be measured with a total station or with a laser range measurement device. In the case of the HYDROSYS project, a fast and reliable way to calibrate the
system in terms of sensor orientation (tilt: azimuth and elevation) is the automatic method, e.g. via full calibration along a line. This makes the system quicker to deploy in all cases.
There are two solutions suitable for the HYDROSYS application.
Sensors mounted on a Truck
The first solution consists in equipping a car (truck) with a set of sensors on board. This solution has the advantage of being movable and easy, as the sensors are mounted on the vehicle in a fixed configuration (see Figure 23).
Figure 23 Configuration of 8 sensors mounted on a
vehicle setup
Figure 24 Configuration of 8 sensors fixed on a
localisation area
However, its main disadvantage is the quality of tracking. Indeed, the tracking is sensitive to the spacing between sensors, which is probably quite small in the case of a truck. Generally, the sensors should be spaced out as much as possible so that the field of view is as large as possible. Eight sensors (two covering each side of the vehicle) would be the best configuration for good coverage.
If the sensors are too close to each other on board the vehicle, this causes a small baseline and means that the GDOP gets bad very quickly as the localization tag moves away. This would therefore limit the distance over which one could get sensible results. Not only would the TDOA information be quite limited, so would the angle information (because a small change in angle measurement would result in a big change in position – GDOP again).
Onsite Fixed stations
In general, easily deployable base station units are required (see Figure 24). A suitable installation could involve:
• A master sensor mounted on a tripod
• A combiner-splitter (to send timing down the same cable as the Ethernet/power)
• A big reel of Ethernet cable on a drum attached to the tripod
Seven other sensors are mounted on tripods (or pylons) placed in the area and wired back to a PoE switch (and the master, supplying the timing) at the vehicle, or to the source of power, respectively.
8 Interaction and Graphics Component - Graphics
Task-component relationship
The interaction and graphics component spans Workpackage 3 (System Infrastructure and Integration), specifically Task 3.4 (Data visualization) and Task 3.5 (Focus+Context Visualization).
Conveying the information delivered by the sensors to the users is no easy task. In the general case the sensor data is extensive, fast-changing, semantically mixed, multidimensional and highly spatially distributed. Visualization of this highly heterogeneous information is one of the most demanding
tasks of HYDROSYS. It involves an understanding of the current state of the art in the visualization of sensor data, knowledge of the current state of the art in graphics hardware and graphics techniques, as well as optimization techniques. We put a strong emphasis on the visual results while at the same time taking care of the hardware restrictions of mobile systems. Therefore, HYDROSYS defines a system component in charge of techniques to better display sensor information to the user: the Interaction and Graphics component. The TUG team will be the coordinator for this component, but the work will be shared with TKK. The TUG team has chosen the UMPC and desktop platforms, while the TKK team will work on cell phones. This means that many of the solutions to be developed will have to be independently tailored by each team to the targeted platforms. Due to the complexity of this component we have split the description of work into two sections: in this one we will focus on the work carried out on the graphics and presentation side; the next section will explain details of the interaction and user interface side.
The HYDROSYS project will rely on several visualization techniques specific to interactive and mobile graphics. The data available for HYDROSYS is highly heterogeneous, even within each scenario (Alpine and Nordic). Data types include scalars (such as temperature), vectors (such as wind direction), topographic data, cartographic data and imagery, to name some examples. The data sources are also varied in format and update rate; for example, a typical temperature measurement provides values once every 2 seconds, while wind direction provides measurements at 20 Hz. Once the data is available on the mobile device, a number of tasks will need to be performed. For example, an excess of information might have to be filtered for a better understanding of the scene. Additionally, multiple sources of information may overlap in 3D space, which will require the layout of information. In section 8.4.1, we outline some of the possible techniques to be developed throughout the HYDROSYS project.
8.1 Interaction and graphics overview
Effective planning needs tools that are able to weld together digital data and spatial data. Digital data refers to all data coming from sensors, simulations and legacy data, while spatial data refers to the information surrounding us. This means that effective planning requires a way to put digital data in the context of our surroundings; this is what Augmented Reality does, and it is one of the goals of HYDROSYS. A basic AR visualization application is composed of the following elements:
• A synthetic world view: This is produced from the interactive graphics generated from the
sensor data by the system.
• A real world view: This takes the form of a video camera feed.
• The relationship between views: This is a measurement of the real world position and view
direction of the user, commonly known as tracking.
Figure 25 Augmented Reality system requirements
Once we possess these three elements (explained in detail in Section 8.4) we can generate an augmented reality view. The visualization component is in charge of generating the synthetic world view of the system. The generated graphics will draw inspiration from traditional representations used by hydrologists, such as those shown in Appendix 7. Essentially, the graphical requirements of HYDROSYS are:
• Lightweight 3D models
• Interactive graphics (i.e. graphics that can be displayed at a minimum of 15 frames per second (fps); in comparison, traditional TV signals operate at 30 fps)
• Context-rich data (with information such as location and time stamp)
The following diagram illustrates the components that are involved in the visualization
component. The synthetic world view is provided by the Interaction and Graphics Component, the
real world view is provided by the Video component and the relationship between views is
provided by the Tracking component.
8.2 State of the art
Handheld AR
Handheld augmented reality (AR) extends traditional 2D or 3D visualisations by overlaying visualisations on video footage. Whereas a difference in interpretation arises (the data is overlaid directly on the real environment), resulting in some additional perceptual issues (including those related to the perception of depth and depth planes), most visualisation issues are related to those mentioned in the previous section. On the other hand, interaction becomes potentially more powerful, but also more complex. Most handheld AR interfaces tend to make use of more advanced and powerful small computer platforms that allow more complex applications to run. Both the special quality of interaction and the increase in functionality call for well-structured interfaces to cope with the application. Due to the dependency on the limited physical control structure (buttons, small keyboard), interaction design is of utmost importance to avoid user frustration and cognitive overload.
The field of handheld computer interfaces is grounded in the more general field of mobile
interaction (Jones and Marsden 2006) (used for cell phones and PDAs) and may make use of
traditional desktop interface methods, or 3D user interfaces (Bowman, Kruijff et al. 2005). Early
mobile user interfaces did not make use of the physical user environment and rather served as
complementary handheld displays in large Virtual Reality systems (Watsen 1999; Brunelli, Farella
et al. 2002; Bornik, Beichel et al. 2006). Early handheld augmented reality (AR) systems such as
AT&T Laboratories’ BatPortal (Newman, Ingram et al. 2001) and the AR-PDA project (Gausemeier,
Fründ et al. 2003) use PDAs merely as portable displays running a thin client to render images
computed and transmitted by a dedicated server. While suitable for quick prototyping, thin clients
are inherently dependent on infrastructure, which seriously limits their mobility and thus their
widespread use in un-instrumented indoor and outdoor environments.
Slightly heavier than a PDA, only a few ultra-mobile PC installations
have been used for handheld AR (Schmalstieg and Reitmayr 2005) (Schall, Reitinger et al. 2007),
also in outdoor conditions, predominantly in urban areas. They basically replace laptop-based
AR systems varying from lighter weight, but therefore less powerful systems, to the larger
backpack-based solutions (Feiner, MacIntyre et al. 1997) (Thomas, Demczuk et al. 1998).
Similarly to early PDA-based AR systems, mobile phone-based AR systems exist (Möhring, Lessig et al.
2004; Henrysson, Ollila et al. 2005). Though these systems can run simple visualizations, it is
impossible to process the large data sets needed for on-site monitoring using AR methods. In
relation to the previous section, handheld devices exist that have been used in the exploration of
GIS data. These include ARVINO, exploring viticulture data (King, Piekarski et al. 2005), and
Priestnall and Polmear’s simple landscape visualisation system (Priestnall and Polmear 2006). In
relation to sensor networks, Badard proposed a concept for a system for querying GIS databases
with handheld devices, potentially using Augmented Reality interfaces (Badard 2006). Similar
systems were envisioned by Rana and Sharma (Rana and Sharma 2006), and in a chapter in the
same book by Holweg and Kretschmer on Augmented reality visualization of geospatial data,
based on the results of the GEIST outdoor mobile AR project. Finally, some of the cognitive
factors addressed in our WP7 experiments are in line with (Edwards and Bourbeau 2006). Several
EU projects have developed handheld interfaces (mostly PDA-based) for outdoor scenarios
(including ArcheoGuide) and environmental monitoring, but these were very simple and do not aid
on-site monitoring to the extent required in HYDROSYS. It can be stated that most outdoor AR
interfaces are still limited in functionality, limited in usage range (mostly urban usage), and very
seldom used to survey geo-scientific data.
Cell-phones and low-level 3D programming interfaces
Until 2003, development of 3D applications for mobile devices was only possible with specialized
application programming interfaces (APIs), such as the VRML model viewing library Pocket
Cortona (ParallelGraphics 2000) or certain mobile game development environments. In 2003,
OpenGL ES (KhronosGroup 2005), the first standardized mobile 3D API, was published. OpenGL
ES is a low-level rasterization interface that exposes the features of the underlying rendering
pipeline, most efficiently accessed from the C or C++ languages. Currently, software
implementations for multiple platforms exist, for example by Hybrid Graphics (HybridGraphics
2006). The OpenGL ES 1.0 specification allows two alternative profiles, the Common and the Lite
(Blythe 2005). The Lite profile is suited for the simplest of devices and supports only a subset of
the Common profile’s functionality. The current version, OpenGL ES 2.0, no longer includes the
limited Lite profile (Munshi and Leech 2008).
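As a brief illustration of this programming model, the following fragment (assuming a valid EGL
rendering context is already current) draws a single triangle using the client-side vertex arrays of
OpenGL ES 1.x. Under the Common profile the vertex data may be floating point; the Lite profile
would require the same geometry as GL_FIXED (16.16) values instead.

    #include <GLES/gl.h>

    static const GLfloat triangle[] = {
         0.0f,  0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
    };

    void drawTriangle(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, triangle); /* GL_FIXED under the Lite profile */
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);
    }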
Mainly due to the incompatibilities of smart phone operating systems and their low-level
programming APIs, a Java-based generic rendering engine, JSR-184 (JCP 2005), was developed
in 2004. JSR-184 uses OpenGL ES for rendering, and provides easy rendering of a static scene
graph, provided to the renderer in the M3G format. Unfortunately, the performance penalty
compared to OpenGL ES is significant, and at best, even with hardware acceleration, JSR-184
based applications are estimated to be only 50% as fast as similar implementations on OpenGL
ES (Aarnio 2005). While JSR-184’s strength lies in the wider support for mobile devices via
portable Java technology, its weakness lies in the lack of freedom for developers to create a 3D
engine suited for a particular task.
As a compromise to either the direct C/C++ OpenGL development or the Java based
development over a scene graph renderer, the Java OpenGL bindings were developed as the
JSR-239 (JCP 2006). This alternative lacks both the potential efficiency of C/C++ based
development and the ease of use of the JSR-184 API: it provides access to the low-level
features of OpenGL, but at the cost of increased overhead and a lack of full resource control.
Higher level software platforms are generally built on top of these low level programming
interfaces. As a consequence, a compatible hardware platform should provide them.
Unfortunately, despite the recent advances in mobile technology, fully compliant platforms are
scarce. For example, most PDA devices support only the OpenGL ES 1.0 Lite profile, intended
for only the tiniest of devices, not the full Common profile (or OpenGL ES 2.0). Alternatively, only
partial support is provided; this is the case with Google’s Android, which provides (a subset of)
JSR-239 but no C/C++ API for the underlying OpenGL ES. In addition, some hardware
manufacturers consistently fail to produce compatible 3D drivers.
Navigation assistants
The first generation of mobile electronic navigation assistants visualized the environment with
traditional 2D raster maps, with the same disadvantages as paper maps, namely fixed scale and
static content, further handicapped by small screen size. Later, zoomable, real-time rendered
vector graphics was introduced. Recently, the perspective 2D view has gained popularity
especially in car navigation systems, providing an egocentric view to the 2D map data. Navitime
is one of the most “total” navigation aids, providing even preliminary 3D views with low resolution
textures for local orientation, expecting faster cell networks and “more comprehensive 3D data” to
lead to better usability (Arikawa et al. 2007). Navigation assistants commonly support audio and
textual modalities. Mobile map based systems can act as gateways to location based data.
GeoNotes experimented with annotation of the environment by user messages using either
textual or 2D map interfaces with PDAs, pushing content updates to clients (Persson et al. 2001).
Mobile GIS and location based systems
Mobile GIS extends Geographic Information Systems from the office to the field by incorporating
technologies such as mobile devices, wireless communication and positioning systems. Mobile
GIS enables on-site capturing, storing, manipulating, analysing and displaying of geographical
data. The coupling of real time measurements from a distributed sensor network and Mobile GIS
opens up the possibility of mobile environmental information systems. Industrial Mobile GIS can
already be deployed on low-end computing systems like PDAs (ArcPad 2007). The current
commercial mobile GIS products include, for example, FieldWorker, GPSPilot, Fugawi, Todarmal,
ESRI ArcPad, and MapInfo MapXmobile. FieldWorker is used for exchanging information with
mobile workers. GPSPilot and Fugawi are examples of traditional 2D maps intended for
navigation, although there are nowadays plenty of similar products. Todarmal provides the
possibility to create map content (points, lines and 2D polygons) online in a layered manner.
ESRI ArcPad is intended for managing point type GIS data, where digital photos can be attached
to point information. ArcPad comes with support for routing with street map data. MapInfo
MapXmobile is a development tool similar to ArcPad, intended for creating map applications.
Google Earth is currently the most known desktop based 3D interface to spatial data, where the
idea of browsing is turned upside down: the Earth itself is the browser, onto which web provides
content (Jones 2007). The open interfaces of Google Earth have made it a common platform for
GIS visualizations. Currently, a mobile version of Google Earth exists on the iPhone platform.
The presented commercial mobile GIS software and SDKs are based upon static map views, with
traditional raster or vector map representations, with overlaid point information and GPS support.
For map content, the most advanced dynamic feature is that of Todarmal’s, allowing run-time map
creation. In addition, some packages like ArcPad allow creation of point data, and modification of
their attributes. The point data is stored with two-dimensional coordinates, or sometimes with
street addresses. These features are supported on PDAs, while their deployment on mobile
phones is still limited, or exists only in research projects (Sillanpää 2007).
Mobile 3D Maps
Mobile 3D maps portray the real environment as a virtual one, similar to their desktop
counterparts, but they run – or should run – in mobile devices. The first attempts at viewing 3D
cities in mobile devices suffered severely from lack of performance. In the 3D City Info project
(Rakkolainen et al. 2001), models intended for use with desktop workstations were simplified, but
still the rendering rate of the realistically textured model was only one frame per 8 seconds on a
PDA, rendered with the Pocket Cortona VRML viewing library. The related field experiments were
performed with still images on web pages. In the project TellMaris, we built a lightweight textured
3D city model, intended for mobile use. With a simple rectangular spatial subdivision, the guide
software, implemented by Nokia with a proprietary rasterizer on a Communicator, achieved a few
frames per second with rather coarse detail (texel size 1–2m). TellMaris also took the first steps
towards a mobile 3D navigation user interface (Kray et al. 2003; Laakso et al. 2003). The
LAMP3D project researched information retrieval from 3D models, and by manually incorporating
visibility information into a simplified VRML model, achieved a few frames per second with Pocket
Cortona (Burigat et al. 2005). After mobile 3D hardware arrived, later projects still faced resource
related problems. For example, the MOBIX3D viewer, written with C++ and using the OpenGL ES
API, without a caching mechanism or memory management, suffered from slow file I/O and
parsing in relation to X3D content and (low resolution) textures, to the extent where textures were
turned off (Mulloni et al. 2007). Similar problems were present in (Blescheid et al. 2005), where a
JSR-184 based viewer took several minutes to initialize and load a medium-complexity city
model containing only 2–3 very low-resolution textures, rendered the model at less than 1 fps,
and suffered from frequent crashes.
8.3 Visualization platforms
HYDROSYS is supported by two visualization and interaction platforms: the handheld AR and
cellphone platforms. There are multiple strengths and weaknesses in each platform that render
them complementary for the HYDROSYS purposes. For example, cellphones are widely available
devices that are already pervasive in our environment; however, they also have low computing
power and are generally not upgradeable. The handheld AR platform, on the other hand, is more
easily upgraded or enhanced with peripheral devices, but it is bigger, heavier and more
expensive. Due to this and other trade-offs, HYDROSYS will use both platforms, taking
advantage of their individual strengths.
We have envisioned two field-work scenarios in which to test our developments: the Alpine
scenario (based in Switzerland) and the Nordic scenario (based in Finland). The handheld AR
platform will be primarily tested in the Alpine scenario, while the cellphone platform will be tested
in the Nordic scenario. However, the design of the HYDROSYS system was done carefully so as
to consider all scenarios and platforms and create a joint solution. This means that the platforms
will be interchangeable in each scenario; tests will be carried out to demonstrate this.
8.3.1 AR platform using handheld computers
Display system framework
The variety (and varying range) of sensors, and the high quality of visualizations needed by AR
applications impose strong requirements on the software platform. The Studierstube software
suite is a proven solution to develop and deploy AR applications. Studierstube has been used in
several Austrian and European projects such as: Vidente, IPCity and Liverplanner to name a few.
It provides a hardware abstraction (or input) layer, which is reconfigurable to deal with hardware
changes. Thus new sensors and controllers can be relatively quickly introduced in the application.
Studierstube combines this flexibility in the hardware layer with a state-of-the-art presentation
layer. The presentation layer is based on a scene-graph that supports the latest improvements in
graphics hardware. The system also routes events from the input layer to scene-graph nodes and
engines, guaranteeing the interactive requirements of AR applications. HYDROSYS relies on
Studierstube as the software platform for mobile and desktop-based AR applications. Extensions to
the Studierstube suite are planned for most of the levels, since HYDROSYS will extend the
current state of the art for mobile AR applications.
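The following C++ sketch illustrates the idea behind such a reconfigurable input layer. It is not
the actual Studierstube API, only a minimal publish/subscribe scheme under assumed names, in
which device drivers publish named events and scene-graph side handlers subscribe to them, so
that introducing a new sensor requires only a new driver, not application changes.

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    struct InputEvent { std::string source; float value; }; // e.g. {"gps.alt", 2310.f}

    class InputLayer {
    public:
        using Handler = std::function<void(const InputEvent&)>;
        // Scene-graph nodes and engines register for a named event source.
        void subscribe(const std::string& source, Handler h) {
            handlers_[source].push_back(std::move(h));
        }
        // Device drivers call this; the layer routes the event to all subscribers.
        void publish(const InputEvent& e) {
            for (auto& h : handlers_[e.source]) h(e);
        }
    private:
        std::map<std::string, std::vector<Handler>> handlers_;
    };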
Figure 26 Handheld Mobile Device Diagram
Display Hardware
During the HYDROSYS project we will use a Panasonic Toughbook UMPC (CF-U1) ruggedized
for outdoor use. It has been chosen because it fulfills all of our requirements, including a
ruggedized form factor for outdoor use, an anti-reflective screen and a USB interface for attaching
the external sensors. The Toughbook is MIL-STD-810F and IP54 compliant (four-foot drop,
rain-, spill- and dust-resistant), and thus usable in the rougher outdoor conditions such as will
occur in HYDROSYS.
8.3.2 Cell phone system platform
The m-LOMA and TellMaris mobile 3D maps
The m-LOMA system (Figure 27, right), developed at TKK, provides a C/C++ OpenGL ES
Common 1.0 code base and an environment for developing 3D maps suited for urban
environments. The system supports Linux, Windows, Windows CE and Symbian S60 platforms
from a unified source code tree. The engine and preprocess need to be redesigned for natural
open areas, where the environment is not heavily occluded by buildings. However, the system
provides a set of components that can be directly recycled:
• A development environment for multiple platforms with support for Symbian-specific
exceptions and features
• A compact binary 3D data format with a VRML parser (preprocess)
• Efficient OpenGL ES based rendering of textured meshes and billboards
• Explicit memory management with caching for out-of-core rendering
• Lightweight OpenGL ES widgets for traditional user interfaces
• An efficient binary XML based communications protocol
• A server with a Postgres database and support for the binary XML network protocol
• 3D navigation schemes for the user interface
• Local routing
• GPS and tracking functionalities
• Landmark management
As a secondary software platform for 3D maps, parts of the TellMaris Mobile 3D Archipelago
(Figure 27, left) may be utilized, providing a preprocess for conventional map data in MapInfo
format, outputting binary 3D meshes with compressed thematic textures and the related
rendering functionality.
Figure 27 The TellMaris Mobile 3D Archipelago (left) and m-LOMA 3D City map (right)
The main hardware platform for the HYDROSYS 3D map will be the Nokia N95 smart phone and
any newer generation Symbian S60 phone with OpenGL ES 1.0 Common Profile or OpenGL ES
2.0 compliant hardware support for 3D graphics. Current Symbian 3D phones do not provide
touch screens, which poses a challenge for user interface design. Any suitable touch screen
enabled platforms with proper 3D development support will be analyzed during the project as they
emerge on the market. For example, Nokia has launched the N97, which has a touch screen and
should be on the market in the first half of 2009. Nokia also plans to launch a new 3D-enabled
version of the so-called Internet Tablet (N800), which is Linux based and also has a touch screen.
The current generation of mobile 3D hardware is a decade-old design, which has now been on
the market for over 4 years. We expect newer-generation hardware to appear during the project.
However, such platforms would need to provide a functional and stable development environment
and support for C/C++ OpenGL ES APIs. Of the more recent platforms, for example, Google’s
Android only provides a Java-based OpenGL ES API. On the other hand, Apple’s iPhone would
provide OpenGL ES, but the development environment is rather closed and would require
substantial reverse engineering. Platform suitability will be a part of the validation processes.
8.4 Visualization approach
Task-component relationship: The work described in this section is part of Task 3.4 /
Visualization methods.
SensorScope and other sensing stations measure key environmental data such as air
temperature and humidity, surface temperature, incoming solar radiation, wind speed and
direction, precipitation, soil moisture, water suction and so on. The measured information comes
in a numerical form or
even perhaps as voltage readings. The raw presentation of said numerical information is
unsuitable as the cognitive load of understanding the readings is too high. Instead, hydrologists
conventionally use mathematical software to display the abstract information delivered by the
sensors in a more human-readable form.
By using commercial mathematical tools, hydrologists can display the information as plotted
graphs, point clouds, colored isosurfaces and so on. Appendix 7 provides a table with typical
sensor readings and their representations for environmental parameters. For example, scalar
readings (such as temperature) are often displayed as graphs with varying time samples (see
Figure 28). This provides a convenient overview of data in a two dimensional space.
Figure 28 Traditional depiction of sensor data in desktop applications
These visual representations are tailored for a desktop display and are therefore unsuitable for
HYDROSYS. The reduced size of both the UMPC and the cell phone platform demand a
retargeting of visual presentations of sensor data.
Synthetic View
The AR visualization requires several sets of 3D models: sensor readings in graphical form,
digital terrain models (DTM), glyphs for known objects (such as other users in the field). The
traditional pipeline for generating interactive graphics based on sensor data is the following:
Given a set of sensor readings, the system generates polygons with corresponding values
associated to them. These values may represent colors, hues, brightness, and other visual
properties. These polygons are generated on the location given by the context data of the sensor.
The sources of data are varied; the main source is the sensor readings. However, HYDROSYS
does not discriminate between data sources as long as the data comes in the pre-agreed format
(section 5.2). This data can come from simulation results, legacy data or simply user input, for
example. An important requirement of the data is that it has to be geo-referenced. All sensor
stations and simulation results will provide data that is already geo-referenced and can be used
directly by the visualization generation.
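The value-to-visual-property step of this pipeline can be sketched as follows; the Reading fields
and the blue-to-red color ramp are illustrative assumptions, not a prescribed HYDROSYS format.

    #include <algorithm>

    struct Reading { double lat, lon, alt; float value; }; // geo-referenced scalar
    struct Color   { float r, g, b; };

    // Map a scalar reading onto a color for the generated polygon
    // (blue = low, red = high); the range [vmin, vmax] comes from the data set.
    Color colorForValue(float v, float vmin, float vmax) {
        float t = std::clamp((v - vmin) / (vmax - vmin), 0.0f, 1.0f);
        return Color{ t, 0.0f, 1.0f - t };
    }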
When dealing with spatially distributed sensor samples, environmental researchers sometimes
use interpolation techniques to compensate for missing information in unsampled areas. Surface
modeling utilizes the spatial patterns in a data set to generate localized estimates throughout a
field. Conceptually, it maps the variance by using geographic position to help explain the
differences in the sample values. In practice, it simply fits a continuous surface to the point data
spikes, as depicted in the next figure. Notice that the colors of the points vary according to their
relative height; that is, higher points are redder while lower ones are bluer.
Figure 29 Topography fit point data visualization.
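The report does not prescribe a particular interpolation scheme for this surface fitting. As one
standard example, inverse distance weighting (IDW) estimates the value at an unsampled point
as a distance-weighted average of the surrounding samples, sketched below.

    #include <cmath>
    #include <vector>

    struct Sample { double x, y, value; };

    // Inverse distance weighting: nearer samples contribute more to the estimate.
    double idwEstimate(const std::vector<Sample>& samples, double x, double y) {
        const double power = 2.0;     // conventional default exponent
        double num = 0.0, den = 0.0;
        for (const Sample& s : samples) {
            double d2 = (s.x - x) * (s.x - x) + (s.y - y) * (s.y - y);
            if (d2 == 0.0) return s.value;           // exactly on a sample point
            double w = 1.0 / std::pow(d2, power / 2.0);
            num += w * s.value;
            den += w;
        }
        return den > 0.0 ? num / den : 0.0;          // 0.0 if no samples given
    }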
Vector data is a bit more complex to represent. Traditional spatial vector visualization is carried
out by representing each vector as an arrow pointing in the direction of the vector. If the vectors
are not normalized, their magnitude is represented in another dimension, such as color.
Figure 30 3D vector visualization
Real World View
The second component of an AR visualization is the real world view. This takes the form of the
incoming image from a live video feed. The HYDROSYS project has purchased multiple high end
cameras for this purpose (see Appendix 2) with accompanying lenses. In order to manage said
devices we require an abstraction layer in our system. Openvideo is a video abstraction library
developed at TUG. It is the software component that manages video feeds used by all the AR
mobile applications at TUG. Throughout the HYDROSYS project we will make extensive use of
this library and will extend it to fit our particular needs. For more information on Openvideo, see
Appendix 1.
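As a hypothetical sketch of what such a video abstraction layer provides (this is not the actual
Openvideo API; all names are assumptions), applications pull frames through a common
interface regardless of which camera or driver produces them:

    #include <cstdint>

    struct Frame { int width, height; const std::uint8_t* pixels; };

    class VideoSource {                    // one concrete subclass per camera/driver
    public:
        virtual ~VideoSource() = default;
        virtual bool open() = 0;
        virtual bool grab(Frame& out) = 0; // latest frame, non-blocking
    };

    // The AR application only ever sees the interface:
    void updateBackground(VideoSource& src) {
        Frame f{};
        if (src.grab(f)) { /* upload f.pixels as the video background texture */ }
    }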
Relationship between views
The third and last component of an AR visualization is the relationship between views. This takes
the form of tracking information coming from GPS and inertial devices. The HYDROSYS project
has purchased multiple high-end GPS and inertial units for this purpose (see Appendix 2). In
order to manage said devices we require an abstraction layer in our system, the hybrid tracking
component. It is the software component that manages positional and orientational information
coming from external devices, used by all the AR mobile applications.
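A hypothetical sketch of the hybrid tracking component's output follows: position from GPS and
orientation from the inertial unit are combined into one pose that drives the virtual camera. The
type and field names are illustrative; a real implementation would also filter and time-align the
two sensor streams.

    struct GpsFix     { double lat, lon, alt; };
    struct InertiaRot { float qx, qy, qz, qw; };   // orientation quaternion
    struct ViewPose   { GpsFix position; InertiaRot orientation; };

    ViewPose fusePose(const GpsFix& gps, const InertiaRot& imu) {
        return ViewPose{ gps, imu };   // naive fusion: position + orientation
    }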
8.4.1 Visualization for the Augmented Reality platform
The field of mobile visualizations is rather new and constantly changing. The UMPC platform was
introduced by Project Origami in 2006, while cell phones have only recently started to offer
3D-accelerated graphics rendering capabilities. This means that some of the visualization
techniques are still at the conceptual-sketch stage and may change throughout the course of
HYDROSYS. The visualization component will be steered by the form factor of the display device,
the screen size, traditional already-used representations, perceptual problems, and the
computational power of the device. Therefore the following is only a suggestion of what the
visualizations may look like by the end of the project.
The visual presentation of sensor data has multiple aims. Unlike numerical presentation of data,
this component will allow the presentation of fast changing information to be understood by the
user. It will also allow the presentation of a high amount of spatially scattered data while
preserving their spatial relationships, even in relation to the user’s position. The presented
information will also retain contextual relationships with the physical surroundings during
browsing. All of these advantages will support tasks such as decision making and sensor
placement planning.
Enhancing the understanding of the presentation of sensor values is a hard task. That is, the
representation of multiple heterogeneous values inevitably overlaps: for example, we have to
provide a consistent and informative representation of two scalar values, say soil moisture and
soil temperature. The question is then: what is this representation, and how does its complexity
increase with an increase of data sources and types? The HYDROSYS project will investigate
throughout its course a number of techniques to advance the state of the art in sensor visualization.
Sensor data can have different visualization forms that each have their specific advantages and
disadvantages for interpreting the data. The current way we intend to interact with sensor data
layers is organized by browsing through the different visualization modes. For example, let’s take
temperature data. This data can be displayed as follows:
• As 3D overlay over the video image and / or DEM (mode 1)
• As perspective 2D image plane, which can be of advantage to compare different
layers of data (modes 2 and 3)
• As 2D image, presented parallel to the image plane (frontal view, mode 4). This
picture can, for example, go through a time browser (mode 5), or some kind of
2D asset/data browser (mode 6)
• As plot (mode 7), which can, in perspective, theoretically be matched to different
temperature maps (mode 8)
• As data table (mode 9)
The basic visualization mode should be stored in the user profile, but can be browsed through
during run-time. The figure below describes the process.
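A minimal sketch of this run-time browsing follows, assuming the nine modes listed above; the
enumeration names are illustrative, and the default mode would come from the user profile.

    // The nine display modes described above, numbered as in the list.
    enum class VisMode { Overlay3D = 1, PerspectiveA, PerspectiveB, Frontal2D,
                         TimeBrowser, AssetBrowser, Plot, PlotOnMaps, DataTable };

    // One control cycles through the modes at run time, wrapping after mode 9.
    VisMode nextMode(VisMode m) {
        int i = static_cast<int>(m);
        return static_cast<VisMode>(i == 9 ? 1 : i + 1);
    }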
Data Comparison
Typically a field worker faces the need to contrast data from multiple sources, types, locations
or temporal samplings. For example, one might want to compare today’s water discharge with
yesterday’s (temporal), or check the water temperature (type), or check the water discharge
values of a different site (location). This demands that the system possess tools for data
comparison as part of its standard toolset. Figure 31 shows a sketch of a typical example of
temporal and type data comparison of a sensor. The HYDROSYS project will develop tools for
this purpose.
Figure 31 Visualization of time related data
Performance
Performance driven visualizations will be emphasized during the HYDROSYS project. The
amount of data available from sensors, simulations, cartographic and topographic data is
extensive. This demands suitable techniques for display on mobile low-powered devices such as
UMPCs and cell phones. Conventional techniques such as Level-of-Detail based rendering will
be explored; moreover, new techniques for optimizing rendering times will be investigated, and
existing ones adapted to the mobile case (see next section).
8.4.2 Visualization for cell phones
The fundamental principle in all the cell phone based 3D maps is that the client side, due to the
lack of resources, does not attempt to perform complex computations. Rather, the client
rendering engine relies heavily on data preprocessing. Any data entering the system is
preprocessed into a compact form, with further optimizations that may be perceptually based and
include visibility tables, importance metrics and various levels of detail (see below). In general,
surface geometry is stored in hardware-friendly forms, such as triangle strips in vertex arrays
instead of separate polygons, intended for one-sided rendering. Static, preprocessed data is
placed in a 3D database. At run time, the data is serialized for transmission, which may happen
on demand and with importance prioritization. For real-time data, such as the GSN sensor
feed, only format transcoding and possibly server-side culling is performed. However, if
significant bandwidth can be saved by generating primitives on the mobile device, this may be a
viable alternative to generation in a prior transcoding phase (see Data Services).
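As a sketch of the kind of hardware-friendly record such preprocessing produces (the layout is
an assumption, not the actual m-LOMA format), a strip can be handed to OpenGL ES in a single
draw call with no client-side geometry processing:

    #include <cstdint>
    #include <vector>

    // One preprocessed triangle strip: vertices are already in the order the
    // GPU consumes them, so the client does no geometry work at run time.
    struct StripRecord {
        std::uint32_t vertexCount;  // number of vertices in the strip
        std::vector<float> xyz;     // vertexCount * 3 coordinates, ready for
                                    // glVertexPointer / glDrawArrays(GL_TRIANGLE_STRIP, ...)
    };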
The HYDROSYS data preprocessing will involve emphasis of natural features of the environment
that are considered important, interesting or otherwise salient to the user in the frameworks of
navigation or hydrological significance. Several visual emphasis techniques exist that speed up
rendering without sacrificing detail. For example, (Döllner et al. 2000) present a set of texturing
methods as tools for the visual design of terrain contents (map data sets). They classify their
texture-based emphasis layers into 1) Thematic 2) Luminance 3) Topographical and 4) Visibility.
These layers provide, respectively, 1) the background terrain as an RGB texture, 2) a light map
texture for modifying the brightness of the other texture layers, 3) shading information for a terrain
model (specialized luminance texture), and 4) one-channel alpha texture for determining layer
visibility. Figure 32 presents the use of a topographical texture layer to emphasize selected parts
of the geometry, while the geometry itself has been reduced to a rather coarse level.
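Conceptually, the four layers combine per texel roughly as in the CPU-side sketch below; a real
renderer would perform this combination in the texturing hardware instead.

    struct Rgba { float r, g, b, a; };

    // Thematic color (layer 1) modulated by luminance (2) and topographic
    // shading (3), masked by the one-channel visibility alpha (4).
    Rgba combineLayers(Rgba thematic, float luminance, float shading, float visibility) {
        float l = luminance * shading;
        return Rgba{ thematic.r * l, thematic.g * l, thematic.b * l, visibility };
    }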
Figure 32 Emphasizing a coarse map geometry (left) with a topographical texture layer (Döllner et al 2000)
8.5 Focus + context
Task-component relationship: The work described in this section is part of Task 3.5 / Focus +
context visualization.
The previous section dealt with the generation of graphics for visualization, from graphical
primitive generation to some techniques for the fast display of data; however, no emphasis was
placed on the enhancement of the visual properties of arbitrary
portions of data. Focus and Context (F+C) is a branch of visual computing that deals with
highlighting a focus object while keeping an overview of its relationship with other related data.
The idea of visually segregating the parts of the data in focus from the rest (i.e., the context) is
very general. In volume rendering, for example, usually full opacity is used for parts of the data in
focus, whereas a high transparency is used for context visualization. Hauser described a
generalized definition of focus+context visualization in the following way: focus+context
visualization is the uneven use of graphics resources (space, opacity, color, etc.) for visualization,
with the purpose of visually discriminating the data parts in focus from their context, i.e., the rest
of the data (Kosara et al 2002).
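In its simplest form, this definition reduces to spending one graphics resource, here opacity,
unevenly; the sketch below shows the idea, with an illustrative context alpha value.

    // Full opacity for data in focus, strong transparency for the context.
    float opacityFor(bool inFocus) {
        const float contextAlpha = 0.15f; // illustrative context transparency
        return inFocus ? 1.0f : contextAlpha;
    }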
8.5.1 Focus+context for the Handheld Platform
Attention direction
When visualizing large amounts of information scattered around a site, it is sometimes important
for the system to direct the attention of the user to possible areas of interest. Be it to highlight
danger, relevance to the current task, or damaged readings, it is necessary to possess
mechanisms to draw the attention of the user to certain portions of the scene.
There exist several techniques to achieve this, such as distorting the image, overlaying
conspicuous artifacts or modifying the visual properties of the scene. Of particular interest are
those that modify the image’s saliency, that is, the property of certain locations of an image to
attract the user’s attention. These are some of the state-of-the-art techniques available today.
During the course of HYDROSYS, we will push forward on further techniques to direct the user’s
attention.
Figure 33 Directing the attention to particular sensor readings in the scene, by distortion, overlay and saliency
modification
Filtering
An excess of available information can also cause problems such as visual clutter when all data
is displayed simultaneously. This can impair understanding of the data to the point of making the
system unusable. Throughout the years, several methods have been researched for filtering data
down to the essentially necessary. Filtering depends, for example, on the system the data is
displayed on, the current task situation, and the amount and heterogeneity of the data. Cell
phones, UMPCs and desktop systems all have different display form factors, which demands
specifically targeted filtering. These techniques also have to adapt to the current task of the user;
for example, a user might be interested in soil moisture readings but only mildly interested in air
temperature.
Filtering techniques, therefore, take place at different levels of the data pipeline, starting from
sensor data collection. During the visualization stage, filtering can be based on multiple criteria,
such as semantic-based, location-based or interest-based filtering. Semantic-based techniques
filter out information depending on tagged metadata such as date of collection or sensor
manufacturer. Location-based techniques filter data depending on their location, either absolute
or relative to the user’s position; this supports queries such as “display only readings within a
100-meter radius”. Interest-based techniques can be, for example, related to specific tasks. For
instance, the user may set up a query such as “display all values needed for sensor
maintenance”.
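These bases can be sketched as simple predicates over a reading; the Reading fields are
illustrative assumptions, and the location predicate mirrors the 100-meter example query above.

    #include <cmath>
    #include <string>

    struct Reading { std::string type; double x, y; }; // position in meters

    // Semantic filtering: keep readings of a tagged type, e.g. "air temperature".
    bool passesSemanticFilter(const Reading& r, const std::string& wantedType) {
        return r.type == wantedType;
    }

    // Location filtering relative to the user: "only readings within radius meters".
    bool passesLocationFilter(const Reading& r, double userX, double userY, double radius) {
        return std::hypot(r.x - userX, r.y - userY) <= radius; // e.g. radius = 100
    }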
Figure 34 Filtering examples. The first image denotes the problem; the second, semantic filtering (“only values
related to air”); the third, filtering by location, where values farther away are increasingly faded out
Depth perception
Understanding of the scene is not only encumbered by heterogeneity of data, but also by other
extrinsic attributes such as distance to the viewer and occluding objects. This raises cognitive
problems such as depth perception, a commonly studied problem in augmented reality. There
exist several techniques to enhance the user’s perception of the depth of virtual information, for
example view restrictions, abstract occluding representations and dynamic transparencies.
Figure 35 Occlusion handling techniques. View restriction, abstract overlay and dynamic transparency
To date, however, no completely effective technique exists. The HYDROSYS project will
investigate techniques to push forward the boundary on perception-related problems in
augmented environments.
Picture in picture
A typical example of spatial F+C is the picture-in-picture technique. This technique uses a portion
of the available screen space to display a view related to the current task. For example, Figure
36 shows in the main view an image of a site overlaid with sensor information; in the upper right
corner it shows a picture-in-picture with an exocentric view of the same site.
Figure 36 Example of Picture-In-Picture
Explosions
The data to be handled by HYDROSYS varies not only in size and heterogeneity, but also in
connectivity. This means that the data can be seen as connected in itself and with other related
data: the pollution readings of a creek may be related to the pollution of a nearby watershed, the
humidity levels in relation to the year’s temperature may also be related to similar readings 20
years before, and so on. The connections among data may be hierarchical and on several levels:
spatial, temporal, or some other higher-level meaning defined by the user. Explosions are a set of
standard visualization techniques that explore hierarchical data (typically spatial) by visually
aligning the information while preserving hierarchies.
8.5.2 Focus+context for the Cellphone Platform
Another technique to emphasize parts of the data is the so-called Focus and Context (see section
8.5). For example, (Kosara et al 2002) have developed a specific adaptation of this, which they
call the Semantic Depth of Field (SDOF). This approach separates relevant objects from the
background using blurring methods. The relevant object in the foreground is presented
accurately, while the background is blurred in the manner familiar from photography (see Figure
37). We apply the same methodology to speed up rendering and save memory resources.
Instead of actively blurring the parts of the view providing the context, these parts are simply
represented using lower LOD textures and geometry.
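A minimal sketch of this LOD-based variant follows, with illustrative thresholds: the focus object
keeps full detail, while the context falls back to coarser textures and geometry with distance.

    // Level-of-detail selection standing in for blur: 0 = full detail.
    int lodForObject(bool inFocus, float distanceMeters) {
        if (inFocus)                 return 0; // focus object: full detail
        if (distanceMeters < 200.0f) return 1; // near context: medium detail
        return 2;                              // far context: coarsest data
    }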
Figure 37 The depth of field in photography (Kosara et al 2002)
Both the abovementioned methods assume that importance and map data already exist. This
can be partially achieved by associating terrain geometry with hydrological data, such as rivers,
which are known to be of importance to HYDROSYS and that can be determined from the
metadata of topographical map data sets. However, this solves only part of the problem. Most
terrain rendering algorithms apply aerial photography over the geometry. This straightforward
solution does not allow the use of F+C methods if the features to emphasize are part of the
texture. For HYDROSYS, the data optimization phase will utilize ideas presented in (Premoze et
al 1999), where the surface texture is first analyzed for the contents, and then re-synthesized.
Figure 38 presents the approach for alpine environments. Vegetation, snow, talus etc. are
segmented using computer vision techniques and the scene synthesized using template objects
and surfaces. In our case, this method minimizes the memory consumption and allows us to use
F+C methods. The method is best suited to relatively analytical environments, where the segmentation
algorithms yield reliable results.
Figure 38 Synthesizing the components of an environment based on aerial imagery (Premoze et al 1999)
The final 3D map can act as a background, on top of which various overlay visualizations can be
rendered. Figure 27 presents a 3D terrain with route and label overlays, and a 3D city with
dynamic components (buses, trams) marked by further annotations. In HYDROSYS, a similar
approach is used to render sensor data on top of the virtual environment, sharing the graphics
design with the UMPC platform. The particular sensor data visualization designs will be created in
tasks T3.4 Data Visualization and T3.5 Focus+context visualization.
8.6 Validation factors
For the visualizations of the sensor data, it is important that the appearance is understandable for
the end-user. Thus, the created visualizations will be tested with end-user groups to validate
whether they convey the information in an understandable manner. This is an iterative process
that will take place throughout the whole project and will likely influence the development between
milestones. Among others, the validation implies the following aspects. The visual quality of the
visualizations should convey the information needed to correctly interpret the visualized data,
which includes recognizability of the virtual environment and “readability” of the 3D map view.
The users should be able to compare data in a suitable manner (also a user interface validation
issue), in an apt context (related to focus+context techniques). The visualizations should run in
real time, meaning at least 10 frames per second on a cell phone, and 15 frames per second on
the handheld device (UMPC).
9 Interaction and Graphics Component – Interaction
9.1 Handheld User interface overview
The user interface is, in itself, a big challenge for HYDROSYS. The project targets several
platforms (mobile UMPC, mobile phone, desktop) with different capabilities and linked
functionalities. This chapter offers an overview of the different hardware and software aspects
with the required user interfaces. The user interface forms the entry point to the functionality that
is required to perform different kinds of monitoring and management actions. As described in
section 1.2, the performance of these actions can take place both at the workplace and at on-site
locations. Technically, both the Augmented Reality (AR) and the 3D map platforms can base their
actions both on desktop computers and the handheld units themselves. Though building on the
same software platform, the hardware characteristics are quite different and lead to different
requirements on the user interfaces. The differences are largely based on screen size and
graphics capacity differences. Though the user interface techniques for the AR platform are
uniform, they are tuned for the different hardware systems to provide for apt user interaction. For
3D maps, the hardware platform differences are even more significant: a cell phone lacks a
pointing device such as a mouse, and a full sized keyboard. The related interaction metaphors
need to be designed separately for the platforms. Similarly, the use cases in the field and in the
office differ. For example, in the field, location dependency is dominant, and the user interface
(the 3D view) can be driven by sensors such as a GPS. While the office user may be interested in
available and existing data sources, the mobile user may make observations and perhaps
annotations based on the actual environment. Careful planning needs to take place to decide how
to support various operations adequately for desktop, handheld setups and cell phones.
This chapter provides an overview of the basic technique directions of the different user
interfaces, and focuses on several of the specific functional models that support specific tasks like
data retrieval or sensor placement.
9.2 State of the art
Most of the state of the art is shared with the previous section.
Mobile GIS
Mobile GIS extends Geographic Information Systems from the office to the field by incorporating
technologies such as mobile devices, wireless communication and positioning systems. Mobile
GIS enables on-site capturing, storing, manipulating, analysing and displaying of geographical
data. The coupling of real time measurements from a distributed sensor network and Mobile GIS
opens up the possibility of mobile environmental information systems (ArcPad 2007). Early on,
researchers identified the potential of location based services as a means to increase awareness
in the general public. Services developed for such purposes provide information on the spot,
basically including text and sometimes images, about the environmental situation, i.e. bathing
water quality [8]. As an extension, the ability to interact with location based information was added
to mobile GIS applications and services. Such services play an important role for on-site analysis,
aiding critical decision making with information about environmental processes. They are suited
for data collection with online monitoring purposes (Rakkolainen, Timmerheid et al. 2001; Kray,
Elting et al. 2003). Industrial Mobile GIS can already be deployed on low-end computing systems
like PDAs (ArcPad 2007). The current commercial mobile GIS products include, for example,
FieldWorker, GPSPilot, Fugawi, Todarmal, ESRI ArcPad, and MapInfo MapXmobile. FieldWorker
is used for exchanging information with mobile workers. GPSPilot and Fugawi are examples of
traditional 2D maps intended for navigation, although there are nowadays plenty of similar
products. Todarmal provides the possibility to create map content (points, lines and 2D polygons)
online in a layered manner. ESRI ArcPad is intended for managing point type GIS data, where
digital photos can be attached to point information. ArcPad comes with support for routing
with street map data. MapInfo MapXmobile is a development tool similar to ArcPad, intended for
creating map applications.
The presented commercial mobile GIS software and SDKs are based upon static map views, with
traditional raster or vector map representations, with overlaid point information and GPS support.
For map content, the most advanced dynamic feature is that of Todarmal’s, allowing run-time map
creation. In addition, some packages like ArcPad allow creation of point data, and modification of
their attributes. The point data is stored with two-dimensional coordinates, or sometimes with
street addresses. These features are supported on PDAs, while their deployment on mobile
phones is still limited, or exists only in research projects [6].
Several drawbacks have been identified when deploying Mobile GIS/Location Based
Environmental Services for purposes of field-based management of environmental resources.
These drawbacks are related to connectivity and visualization issues (Parallelgraphics 2007).
None of the available products supports real-time data feeds, such as input from sensor
networks. The data cannot be tied to other elements in the environment; for example, it is not
possible to attach location based data to street topology using lengths along street segments and
possible lateral offsets. There are standardization activities that address data association issues,
for example CityGML, which allows data to be associated to underlying topological structures,
including semantic information.
In HYDROSYS application scenarios, it will be important to be able to tie information to water
ecosystems and hydrological features, but current standardization efforts include natural
environments and natural features only to a limited degree. In addition, current GIS systems are
two-dimensional, while the real world is three-dimensional. Although the 3D perspective view is
gaining popularity among navigation applications, the related data sets are still 2D. Navigation
functionalities such as routing depend on street networks; support for visual navigation, for
example with the aid of landmarks, is missing. No mobile GIS system is able to provide a realistic
3D view, which would allow more intuitive, accurate and natural recognition of the environment,
and contextual or topological positioning of location based features.
9.3 Sensors and support setups
The user interfaces allow access to location-specific information. For that purpose, they require
localization sensors (like GPS) and orientation measurements, to allow for a correct overlay of
data on the viewed environment.
This section describes the required localization techniques used in the AR setup (hybrid tracking)
and how some of the tracking techniques and additional support setups can be deployed in the
field using a vehicle setup.
9.3.1 Handheld display system construction
Task-component relationship: The work described in this section is part of Task 5.1 / On-site
monitoring interfaces.
The UMPC is one of the HYDROSYS target lightweight devices. A UMPC is a fully featured
handheld personal computer for mobile use. It is not a device based on an embedded operating
system (such as Windows CE or Symbian), but has a full operating system such as Windows. It
is the smallest class of PC, and its most common features include a touchscreen, an x86
processor, an integrated keyboard and mouse, a USB connection, and a sound system. These
systems can often be extended with extra modules like HSDPA, GPS or a webcam, but these
modules tend to be of low quality.
devices to function properly, for which reason the UMPC is extended with additional localization
and orientation technology, a high quality camera, and additional controls. In order to make use of
the handheld computer and its required sensors, the consortium has to develop a new, robust
construction that holds all components. This construction supports the storage of all additional
sensors and partially protects them from outdoor conditions. Appendix 8 provides a detailed
overview of the design and production process of a first prototype.
Figure 39 Part of the new construction, complete version used by Finnish end-users in the field (Nummela site)
9.4 User interfaces for the Augmented Reality platform
In order to accommodate the front-end for the different actions, HYDROSYS will provide a higher
level user interface (UI) toolkit (the widget toolkit) and a set of navigation techniques. These
techniques are subsequently used by different task-specific interfaces. The interfaces can
be adapted using user profiles, which were introduced in section 2.4.1.
The order in which we discuss the modules roughly follows the campaign process we discussed
in chapter 1. In more detail, the different parts of the user interface are as follows:
• AR user interface toolkit: The user interface toolkit consists of the widget toolkit,
which offers higher-level user interface elements like layers, forms, buttons,
sliders, and so on. The widget toolkit focuses on data organization and
visualization adaptation for complex spatial data sets to accommodate the
different hardware platforms. In addition, the AR user interface toolkit offers
several navigation techniques that are required to explore 3D data sets or to switch
between different physical viewpoints associated with the multiple cameras being
deployed on-site.
• Interfaces: The different interface modules make use of the user interface toolkit,
using tailored front-ends partly extending the toolkit itself. The sensor
placement interface module offers support to plan and set up sensor networks.
The data produced by the sensors can be accessed through the data retrieval
module, which basically is the data selection front-end to the global sensor network
(GSN). Users can perform simulations by using the simulation module to select
data sets and simulation scripts. Finally, users can communicate, annotate and
share data using the collaboration tools. Most of these modules are integrated in
the base interface, the interface used at the workplace, extended with functionality
that enables users to plan a site and set up a campaign, and to perform post
on-site analysis.
Controls
The UI is used with different input methods. At the workplace, the UI will be used with general
desktop devices like mouse and keyboard, whereas the handheld unit supports input via a
mini-keyboard, a micro-joystick, buttons and a pen. Some of the controls for the handheld unit are
implemented in a special hardware platform (a handheld construction) that is described in section
9.3.1.
9.5 Cell phone interfaces
Traditional widgets
Similar to the UMPC, the cell phone platform utilizes a lightweight widget set, which provides all
traditional menus, popups, buttons, etc. These widgets are based on the same graphics API as
the 3D interface itself, OpenGL ES, thereby providing a platform-independent, integral UI
system without requiring any external toolkits. The widgets are memory optimized: they consume
only a few kB of memory at run time.
The differences in hardware systems are accommodated by the widget system where possible.
As an example, Figure 40 presents two widgets for defining a route between two addresses. In
the presence of a real keyboard, text can simply be typed in the text boxes. On a cell phone, the
limited keyboard functions as usual, for example as when writing a text message.
For a device without a keyboard, such as a PDA, an active text box pops up a virtual keyboard,
allowing typing via a touch screen. The pull-down menus can be used to select entries from a
predefined list. The focus between widget components can be changed similarly. They can be
clicked with a mouse or via a touch screen, or selected via a special focus button that switches
the focus from one functional component to another. Depending on the availability of keys, hot
keys may be assigned further widget-specific functionalities, such as ‘help’ or ‘close’.
Figure 40 Lightweight OpenGL ES based widgets: setting up a route
Controls
An integral part of interaction design of navigation is the assignment of interface controls to
movement and action in the mobile map environment. In HYDROSYS, despite the current poor
situation with fully compliant devices with 3D support, we attempt to address the full scale of the
possible systems, varying from desktop to personal digital assistants (PDAs) and smart phones
(see Figure 41). The controls offered by PDAs commonly include a touch screen, a few buttons,
and a joypad. The touch screen essentially provides a direct pointing mechanism on a 2D plane,
and a two-dimensional analog input. The touch screen is operated by a stylus (a pointing stick).
The buttons and the joypad are simple discrete inputs. A direct text input method may be missing
due to the lack of buttons, but in that case, it is compensated by a virtual keyboard, operable by
the stylus. A smart phone commonly provides only discrete buttons. Usually a few of the buttons
are assigned to serve menu selections, and the rest are used either in typing text or dialing.
Sometimes a joypad is provided for easier menu navigation. Exceptions to these prototypical
examples exist, such as Nokia’s Communicators, which include a full keyboard. Some devices
even contain both a keyboard and a touch screen, such as the Palm Treo 650. Sometimes, a
small keyboard can be attached to a PDA. When a control state is propagated to, and received
by, an application, it is called an input event. Mobile operating systems commonly prevent input
events from being generated in parallel, in a forced attempt to reduce the possible control DOFs
to 1; if one button is down, the next possible event is the up event from that button.
Platform-dependent software development kits provide information on how to override this
restriction.
Figure 41 Mobile devices differ in the availability and layout of interfaces
The use of controls can be described as a mapping from control space G to navigation space N.
The navigation space can be described by a navigation state, which may contain the camera
position and orientation, but also other variables such as speed, field of view (FOV), etc. The
mapping can also be called the function g of the input i, g: G → N. We will call the dimension of G
the degrees of freedom of G, the control DOF. The mapping provides movement on a guide
manifold, a constrained subspace of the navigation space [XX]. Discrete control inputs can be
used for providing either a single event or multiple independent discrete events. The mapping can
depend on time, possibly in a nonlinear manner. A 2D input can be used similarly for a single 2D
event, a discrete sequence of such events (x, y)j, or a time-dependent input (x, y, t). The motion
derivatives (x’, y’) can also be used as an input. If a control is given cyclic behavior, the mappings
depend on the current cycle c of the inputs, g: Gc,i → N. In this case, one button can control two or
more functions, each button release advancing the function to the next on the cycle.
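As a concrete sketch of such a mapping g: G → N, the fragment below maps a time-sampled 2D
joypad input onto a navigation state constrained to a guide manifold (constant height above the
terrain); the gain constants are illustrative assumptions.

    #include <cmath>

    struct NavState { float x, y, heading; }; // a point in navigation space N

    // g: G -> N for a 2D input (dx, dy) sampled over time step dt: the x axis
    // turns the view, the y axis moves along the heading; height stays fixed,
    // which constrains the motion to a guide manifold.
    NavState g(NavState n, float dx, float dy, float dt) {
        n.heading += dx * 1.5f * dt;
        n.x += dy * 10.0f * dt * std::cos(n.heading);
        n.y += dy * 10.0f * dt * std::sin(n.heading);
        return n;
    }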
The 3D interface also provides a set of visual UI components, such as various labels and markers
(Figure 42, left), which are active, as is the entire scene. When selected context-sensitively, they
can launch a traditional menu. In Figure 42 (middle), a context-sensitive selection on a building
has launched a menu for further actions, including a content query. The first action on the list is
Fly to, providing an animated view transition to the target. The second action provides a direct
content query for the selection. In HYDROSYS, similar functionality would yield, for example,
sensor data. Again, without a pointing device, the selection itself is a challenge, and will be
addressed in HYDROSYS. For a device with a pointing device, maneuvering within the 3D view
can be easy. Figure 42 (right) presents a maneuvering scheme for a touch screen or mouse
enabled system, where the user can drag a pointer along the display surface to guide the
viewpoint. The visualizations are redesigned where needed for HYDROSYS, unified with the AR
application, and when possible, the resulting graphics are shared between the two applications.
Figure 42 Widget-like visual UI components within a 3D view; (middle) a traditional context sensitive menu,
triggered by a selection on a building; (right) maneuvering with a touch screen enabled device or mouse
9.6 AR Widget toolkit
Task-component relationship: The work described in this section is part of all tasks in WP5.
As stated in the introduction, the user interface spans multiple interfaces (modules) that make
use of a uniform interface style. As we will explain in this section, the HYDROSYS AR widget
toolkit offers the backbone for all user interactions on the UMPC, addressing specific needs from
both the system and user side that are imposed on the system platform. The
explanations presented in this and the following sections make use of some of the sketches we
have used in the participatory sessions with the end-users during the first UCD phase of the
project. Similar sketches have also been used in the visualization section. The next section, on
cell phone interfaces, specifically focuses on how the cell phone makes use of similar interaction
styles.
R&D process
The widgets and techniques presented in this section form one toolbox. That is, all interfaces the
system is comprised of make use of the different elements in the toolkit. The presented concepts
represent our current view on the problems, and are the initial direction we take for solving the
user interface questions. The current state of work in this area is rather minimal – even though
interfaces exist for game devices of a similar size (like the Sony PSP, or MID devices running
operating systems like Ubuntu Remix), the requirements for HYDROSYS are very different from,
and far more complex than, previous systems such as Vidente (www.vidente.at). Thus, the R&D
process will be highly cyclical.
System needs and human factors
The user interfaces will be designed for the previously introduced hardware platform, the
Panasonic Toughbook. This UMPC has a small screen (5.6 inch), which is comparable to MID
devices on the market. The device has a touch screen allowing pen input, and a tiny keyboard
below the screen. The device also has some additional buttons, which are not very usable in the
construction presented in the previous section. Also, the device does not have a micro-joystick,
which would be useful for interacting with menu systems. Better-performing buttons and
micro-joysticks, though, are supported in the handheld construction.
The size of the screen puts limits on the perceptual quality of the graphics being displayed. Even
though it offers a good resolution of 1024 x 600, the graphics being displayed are generally quite
dense. Thus, the user interface must be very screen-effective in order to allow the user to
observe the visualized data at the largest size possible while avoiding visual clutter. This requires
ordering the available graphical data in an effective way, and supporting unobtrusive interaction
using minimal UI elements where possible. The ordering is especially important in those cases
where users need to compare multiple kinds of data. HYDROSYS will research screen
management issues to account for perceptual and cognitive factors.
Figure 43 Different layers used in the AR visualization
Layer approach
The approach taken in the UMPC widget toolkit is to use layers for multiple purposes. Layers are
a "natural" approach to take, since the data layers rendered over the video image to obtain an
Augmented Reality image are layers by nature. Within HYDROSYS, we take layers one step
further by organizing the information provided in a stack of layers. The layers are organized in
tabs that can be adapted to the user's needs in the stack manager (see next section). The
controls are a separate layer on top of the tabs. For example, a user might have selected 3 kinds
of data (moisture, temperature and wind). All sensor data types get a separate tab, but so can the
(potentially available) DEM and the video signal itself. This has considerable perceptual and
cognitive advantages. Not only can the user toggle specific layers on or off, one can also
potentially perform specific actions on layers to improve the visual quality of the overall interface,
for example to avoid visual clutter. Such actions include blending tabs (layers) in or out, or
performing modifications on the rendered data or video image to improve readability. The latter
can potentially be directly coupled to the focus+context techniques being developed to improve
the perceptual quality of the visualization.
Ordering data layers through the stack manager (View management)
Multiple layers, represented as tabs in the interface, can be organized through a stack
manager. We use the word "stack" since, basically, the different data layers form a stack over the
video image and/or DEM. Thus, to select what should be displayed, and how it should be displayed
(like changing the opacity), a stack viewer can be shown. In the stack viewer the user can add,
duplicate or remove layers, or reorder the layers. Basically, the user may have a large stack of
possibly tens of data layers that can be moved through. The stack manager will largely depend on
the profile manager that defines the users' preferences for the ordering and display of data (see
section 2.4.1). Reorganizing layers in the stack allows more important data to be rendered on top
of contextual data, aiding the users in directly visualizing what they are most interested in. All
these techniques allow the user to customize their viewpoint on the data. At runtime, it becomes
tedious if a user needs to change blending options, reorganize the stack or even (de)activate
layers for every possible view on the data. A mechanism to browse through the stack will be
introduced to allow users to quickly visualize the data as a subset of the current stack.
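A minimal C++ sketch of these stack operations follows; the class and member names are illustrative assumptions, not the actual toolkit API:

#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Each data layer is a tab with visibility and opacity; the stack manager
// reorders layers so the most important data is rendered on top.
struct Layer { std::string name; bool visible = true; float opacity = 1.0f; };

class StackManager {
public:
    void add(Layer l) { stack_.push_back(std::move(l)); }
    void toggle(const std::string& name) {
        for (auto& l : stack_) if (l.name == name) l.visible = !l.visible;
    }
    void moveToTop(const std::string& name) {
        auto it = std::find_if(stack_.begin(), stack_.end(),
                               [&](const Layer& l) { return l.name == name; });
        if (it != stack_.end()) std::rotate(it, it + 1, stack_.end());
    }
    void render() const {  // bottom-to-top over the video image
        for (const auto& l : stack_)
            if (l.visible)
                std::printf("draw %s (opacity %.1f)\n", l.name.c_str(), l.opacity);
    }
private:
    std::vector<Layer> stack_;
};

int main() {
    StackManager stack;
    stack.add({ "video" });
    stack.add({ "DEM", true, 0.5f });
    stack.add({ "moisture" });
    stack.add({ "temperature" });
    stack.moveToTop("moisture");   // moisture now rendered last, i.e. on top
    stack.toggle("temperature");   // hidden
    stack.render();
}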
Control widgets
Control widgets can be accessed through buttons on a (context-sensitive) menu. Control widgets
are used, for example, as the basis for the SmartClient, the profile manager, or a text annotation
tool. Many of the elements in the control widgets are standard interface elements such as
buttons, lists or sliders. These elements, though, will be optimized for the small screen.
Control widgets may also require text input. Text input can take place using the mini keyboard
on the UMPC, but may also be eased by using text templates. Especially when users need to
type with gloves, the second possibility will likely prove advantageous.
9.7 Viewpoint manipulation
Task-component relationship: The work described in this section is part of Task 5.5 / Remotely controlled camera interface module.

HYDROSYS organizes large, heterogeneous datasets spatially for visualization. Spatial
organization depends on the data being spatially anchored (geo-referenced, or generated at
particular points of a 3D space). Due to the possible complexity of the data, and the ability to
take different viewpoints on the analyzed site, viewpoint manipulation is a key aspect.
9.7.1 Viewpoint manipulation in the AR setup (traveling)
The main factor exploited by Augmented Reality is that the application relies on the user’s sense
of orientation for visual understanding of the data presented. Several methods will be explored in
HYDROSYS for exploration of complex spatial datasets. As stated in section 9.6, the organization
of datasets in layers helps divide the relevant data into units that can be switched on or off in
response to user interest. The layer organization in a stack is, in itself, an aid for the exploration
of data. Stacking too many layers, or single layers with large datasets, can still introduce visual
clutter, ending in disorientation and visual confusion. To cope with these issues, several navigation
techniques for spatial datasets will be explored. Navigation techniques can be applied both to
spatial datasets and to visualization methods. The former deals with (virtually) moving through the
spatial dataset, thus changing the viewpoint, while the latter addresses changing the view of
the visualized data (the appearance of the view, without changing the viewpoint itself).
Traveling
Traveling techniques are the subject of much research in the virtual reality domain. The main
motivation is to introduce a change in the user's viewpoint without causing disorientation through
a lack of perceived displacement. Traveling techniques attempt to solve the problem of
disorientation caused by artificial changes in point of view (particularly those where the user does
not really move). These techniques include methods to select the destination point, and to
animate the translation in such a way that the user perceives the change in position. In
HYDROSYS, traveling techniques will be applied to switching from egocentric to exocentric views
and back, as well as to camera switching (see the camera framework section).
Changing viewpoints using virtual aids
The first view that users explore in an augmented reality system is called egocentric, a
first-person point of view. A view where the user is one extra object in the visible scene is
sometimes called bird's-eye view, third-person view or exocentric view. The transition between
egocentric and exocentric viewpoints is referred to as "Traveling", while the exploration of
information around the user in an exocentric view is referred to as "Browsing". Throughout the
HYDROSYS project, the Traveling and Browsing tools will be convenient not only because of the
high amount of data to visualize, but also because HYDROSYS places emphasis on collaboration
with other users' information. This tool will help the users to browse external video and tracking
feeds as well as annotations taken on site. Browsing interfaces are discussed in section 9.9.1.
Figure 44 shows a conceptual drawing of the traveling mechanism.
Figure 44 Changing viewpoints from first person or user view to third person view
Augmented reality is mainly based on an egocentric view, as the user visualizes augmentations in
the video feed from her view of the world. In order to provide an exocentric view of the area,
HYDROSYS relies on accurate 3D digital models (DTM) representing the location at hand. By
visualizing the digital model of the area it is possible to place the user as another asset in the
scene. Since accurate digital models are expensive in terms of the processing power required to
render them and the space required to store them, when visualizing large areas HYDROSYS
introduces a second exocentric mode. This exocentric mode is based on 2D maps instead of
digital 3D models, reducing the resources allocated to the model presentation.
Switching from AR (first-person, real-world view) to VR (third-person, 3D model view) to 2D (map
view) and back requires a traveling mechanism to account for the transformation in viewpoint.
Changes in viewpoint within any third-person mode (VR, 2D) also require similar traveling
mechanisms, since they involve changing position and orientation without the user physically
moving.
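The sketch below illustrates one plausible form of such a traveling mechanism: an eased interpolation of camera position and orientation between the egocentric and exocentric poses. This is a hedged sketch; the types, values and easing profile are assumptions, not the project's implementation.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

// Normalized linear interpolation of orientations; adequate for short
// transitions and cheaper than a full slerp.
static Quat nlerp(const Quat& a, const Quat& b, float t) {
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    float s = (dot < 0.f) ? -1.f : 1.f;  // take the short way around
    Quat q { a.w + t*(s*b.w - a.w), a.x + t*(s*b.x - a.x),
             a.y + t*(s*b.y - a.y), a.z + t*(s*b.z - a.z) };
    float n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    return { q.w/n, q.x/n, q.y/n, q.z/n };
}

struct Pose { Vec3 position; Quat orientation; };

// One animation step of the traveling transition; t in [0,1], eased so the
// user perceives the displacement rather than an instantaneous jump.
Pose travel(const Pose& ego, const Pose& exo, float t) {
    float s = t * t * (3.f - 2.f * t);  // smoothstep easing
    return { lerp(ego.position, exo.position, s),
             nlerp(ego.orientation, exo.orientation, s) };
}

int main() {
    Pose ego { {0, 1.7f, 0}, {1, 0, 0, 0} };              // first-person view
    Pose exo { {0, 120, -80}, {0.924f, -0.383f, 0, 0} };  // bird's-eye view
    for (float t = 0; t <= 1.001f; t += 0.25f) {
        Pose p = travel(ego, exo, t);
        std::printf("t=%.2f pos=(%.1f %.1f %.1f)\n",
                    t, p.position.x, p.position.y, p.position.z);
    }
}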
Changing viewpoints using external cameras
Camera switching requires special consideration. This technique is enabled by the
characteristics of the HYDROSYS deployment, having several cameras in the field observing the
location of interest from different viewpoints. Although the same general techniques can be used
for traveling and browsing available cameras as resources, once a user has switched to an
external video feed (using the GSN video services), it will be necessary to orient the user to her
own location. Different techniques can be considered in this case: one could be a video-in-video
technique, inserting a smaller version of the user's video feed in the current view; another is
adding avatars for the user herself, or indicators of the general direction where the user is located
if she lies out of range of the current view. At this point, it should be stated that one of the
cameras being deployed is a pan-tilt unit. Using a simple front-end, users will be able to control
the orientation of this camera. Similarly, users will be able to select a "hovering" spot for the
blimp, to get a specific exocentric viewpoint of the site.
9.7.2 Viewpoint manipulation on the cell phone
3D navigation without a mouse or a proper keyboard is a major design challenge for mobile 3D
maps. It is separate from the traditional widget functionality, even though it depends on the same
controls (the input devices). It is a process involving both mental and physical action, both
way-finding and movement (Darken and Sibert 1996). All movement requires maneuvering,
performing a series of operations to achieve subgoals. Whereas maneuvering in a 2D view can
be mapped to a few input controls in a relatively straightforward manner, movement in a 3D world
cannot. In addition to 3D position, one needs to specify 3D orientation as well. Direct control over
such a view would require simultaneous specification of at least six degrees of freedom.
Generally, producing decent motion requires even more variables, but a mobile device only has a
few controls, of which the user might want to use only one at a time.
Figure 45 presents a simple navigation task and the resulting viewpoint movement, guided by a
subject, initiated (left) at sky level and (right) at street level. In both cases, the controls are the
same: push buttons provide rotation, elevation, and movement forward and backward. In Figure
45 (left), a subject first orients locally by observing features of the environment close to the
surface, spots his physical position (A) and starts to maneuver towards the goal (B) at rooftop
level. In Figure 45 (right), the task starts at street level. After initial orientation (13 interactions
with buttons), the subject starts maneuvering towards the goal. Despite the simple task, both
subjects spend a lot of effort just operating the controls to move the viewpoint (30 interactions
with buttons).
We assess that developing interaction for a mobile 3D map depends heavily on solving the
viewpoint manipulation problem. In HYDROSYS, this problem is addressed and the currently
available 3D navigation metaphors of the m-LOMA system are extended in the general
framework. We allow several levels of interaction, from explicit controls to automatic navigation,
where the viewpoint is driven by, for example, a GPS. The design of a suitable 3D navigation
interface is one of the main challenges for the 3D map, and includes quasi-experimental field
trials.
Maneuvering class  Freedom of control
Explicit           The user controls motion with a mapping depending on the current navigation metaphor.
Assisted           The navigation system provides automatic supporting movement and orientation triggered by features of the environment, current navigation mode, and context.
Constrained        The navigation space is restricted and cannot span the entire 3D space of the virtual environment.
Scripted           Animated view transition is triggered by user interaction, depending on environment, current navigation mode, and context.
Automatic          Movement is driven by external inputs, such as a GPS device or electronic compass.
Table 3 Maneuvering classes in decreasing order of navigation freedom (Nurminen and Oulasvirta 2008)
Figure 45 Navigation with limited inputs using direct controls. (left) navigation from A to B, and the path of the
viewpoint driven by the subject (1 to 4); (right) navigating the same route at street level with direct controls
(rotate, move) (Oulasvirta, Nurminen and Nivala 2007)
9.8 Sensor placement interface module
Sensor deployment receives special consideration within HYDROSYS. The following diagram shows a
simplified version of the steps that have to be considered when deploying a sensor. A sensor
deployment session will include such a sequence of steps for each sensor. Planning for sensor
deployment is done offline, possibly supported by the base interface.
Figure 46 Sensor deployment
9.8.1 Sensor placement using AR setup
A potential deployment using the AR setup is as follows: a deployment starts by visualizing the
planned locations for all to-be-deployed sensors. These locations are visualized as a special
sensor icon on HYDROSYS' handheld platform. If the user selects a not-yet-deployed sensor, the
system can then display navigational aids to reach the location appointed for deployment.
Different types of navigational aids (like directional markers or a compass) will be evaluated to
find appropriate ones for the task at hand. Once the user reaches the deployment point, the
sensor can be deployed. The platform could inform the user that the deployment point has been
reached; however, this will still require user intervention because of errors in location sensors.
Once the sensor is deployed, the deployment needs to be validated. There are different validation
steps that must be taken. On the one hand, it is necessary to validate the deployment location.
On the other hand, the sensor readings must be validated. The first step can be validated from
the location coordinates of the user and the sensor itself; however, it will be necessary to account
for location sensing errors (GPS errors). The user interface to validate the location can simply
show the real location of the sensor on a map together with the expected location and the
difference in distance. The second step, validating the sensor readings, depends on validation
procedures established at the planning level, or by GSN. The sensor network and sensor
development processes provide validation procedures for sensors; these must be run to check
the proper functioning of the sensors (T4.7 GSN Data quality control). The user interface depends
on the validation procedure, but in most cases it will have to show some scalars and whether the
sensor is running correctly. Initially, the system will connect to the sensor using GSN, the same
as for all sensors. Eventually, some sensors can be directly connected to the computer, instead
of being accessed over the network. For example, moisture sensors or discharge devices
(instruments measuring water conductivity) could be connected directly to the handheld for
visualization. This would require a special module for direct input (in GSN or another low-level
library).
9.8.2 Sensor placement using the cell phone
The 3D map provides an interface to the real world in a one-to-one mapping fashion. Virtual
sensors can be placed directly into the world as annotations, connected to the world's simulated
representations via Data Services. The associated sensor values can be defined using the
traditional lightweight widgets for point-type sensors. In a desktop system, sensor placing can be
bound to a context-sensitive menu. On a cell phone, the default alternatives would be the current
viewpoint and the current GPS position. In the case of the GPS position, the user would physically
stand in the place where he would place the virtual sensor. If area-type sensors are needed, a
desktop system could allow drawing a set of vertices that define a mesh. Then, values could be
attached either to the vertices or to the entire mesh. For a cell phone without a pointing device,
defining such an area would be inconvenient. The system could provide a solution similar to that
for point devices: a user could walk around the area, and the GPS points would define the outer
ring of the area. A mesh would be generated automatically to cover the area, as sketched below.
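The following C++ sketch shows one simple way the mesh generation could work: a fan triangulation around the centroid of the walked GPS ring. This is an assumption for illustration only; concave areas would need a real polygon triangulation such as ear clipping.

#include <cstdio>
#include <vector>

struct GeoPoint { double lat, lon; };
struct Triangle { int a, b, c; };  // indices into the ring

// Triangulates the area enclosed by the walked GPS ring (must be non-empty)
// as a fan around the centroid, which is appended as an extra vertex.
std::vector<Triangle> meshFromRing(std::vector<GeoPoint>& ring) {
    GeoPoint c{0, 0};
    for (const auto& p : ring) { c.lat += p.lat; c.lon += p.lon; }
    c.lat /= ring.size();
    c.lon /= ring.size();
    int centroid = static_cast<int>(ring.size());
    ring.push_back(c);
    std::vector<Triangle> tris;
    for (int i = 0; i < centroid; ++i)
        tris.push_back({ i, (i + 1) % centroid, centroid });
    return tris;
}

int main() {
    std::vector<GeoPoint> ring = { {46.0, 7.1}, {46.0, 7.2}, {46.1, 7.2}, {46.1, 7.1} };
    auto tris = meshFromRing(ring);
    std::printf("%zu triangles cover the area\n", tris.size());  // 4
}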
9.9 Data selection interface module
Task-component relationship: The work described in this section is part of Task 5.2 / GSN data retrieval interface module.

Data selection and retrieval is one of the key aspects of the HYDROSYS user interface. This
section explains some of the concepts we will explore before settling on a final set of interfaces.
The data retrieval is based on the mechanisms offered by the SmartClient (see section 5). The
user interface to the SmartClient helps users to select data sources and visualization preferences.
It directly influences the data retrieval, affecting what data are retrieved and on what basis, and
allowing filters to be specified on the data provider. In other words, this interface deals with profile
management.
Figure 47 Data selection process
Configuration of data retrieval involves selection of data sources (sensors) and all sorts of data
retrieval modifiers. A query includes data sources and retrieval instructions, for example:
“temperature data from the La Fouly site produced later than December 2007”. This query
includes a data source selection (temperature sensors) modified with location (La Fouly) and time
(later than December 2007). Selection of data sources in general is done using some sort of
browser UI. Visualization preferences are given when associating a layer with a dataset (data
corresponding to a query). The layer organizer UI aids in the creation of layers including their
association with datasets, as well as their organization in a layer stack.
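As a hedged illustration of how such a query could be represented (the structure and field names are assumptions, not the SmartClient API):

#include <cstdio>
#include <optional>
#include <string>

// A query combines a data source selection with retrieval modifiers.
struct Query {
    std::string sensorType;                // data source selection
    std::optional<std::string> site;       // location modifier
    std::optional<std::string> notBefore;  // time modifier (ISO 8601)
    std::optional<double> refreshRateHz;   // retrieval modifier
};

int main() {
    // "temperature data from the La Fouly site produced later than December 2007"
    Query q { "temperature", "La Fouly", "2007-12-31T23:59:59", 0.1 };
    std::printf("SELECT %s WHERE site='%s' AND time > %s\n",
                q.sensorType.c_str(), q.site->c_str(), q.notBefore->c_str());
}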
9.9.1 Data selection using the AR setup (browsing)
For the AR setup, HYDROSYS will at least include a "traditional" selection interface that likely
makes use of drop-down lists for users to select available sources. The drop-down lists can be
applied in such a way that each adds a restriction, reducing the number of possibilities. Drop-down
lists and other traditional selection and input widgets can be combined with browsing
techniques to help restrict selections while visualizing the results. In the frame of HYDROSYS,
several browsing alternatives will be explored. The main driver is that some alternatives exploit
locality (the fact that the user is co-located with the data sources), and are better suited for users
in the field. The importance of the browsing technique lies in the fact that assets (users, sensors,
cameras, etc.) are located in particular positions with respect to the user (i.e. they all have their
own position). Browsing resources in a registered view (a view where the resources are placed
correctly with respect to the user) gives the user a better idea of where the resources actually
are (as opposed to a browser that arranges resources in a list). Browsing in egocentric views
provides a clear idea of where resources are located. Browsing in exocentric views enables the
user to view more resources, since the area that can be covered using this view is usually larger.
However, it also requires orientation aids to be included in the view. Further refinements to the
browsing technique include adding special symbols for different types of resources and virtual
sensors (a virtual compass) to keep the user oriented. Third-person views have the advantage
that they are also usable from remote locations. Users that work in campaign preparation using
the base interface will benefit most from third-person views. One way to provide a third-person
view is to use a 3D model of the site as a representation of the location, registering assets on top
of it. The view based on the digital model allows changing points of view (not only bird's-eye) to
observe terrain details. Another exocentric view can be based on a 2D map. The advantage of a
map is that potentially more assets can be presented, since it does not suffer from occlusion
problems. Another advantage is that it requires less processing power than a view based on a
digital model. Exocentric views have the advantage that they reduce the effects of occlusion, but
they might also increase disorientation.
Egocentric views are mainly usable by on-site users. A simple iconic list of assets can be used to
display the available assets, while relying on connection lines to identify their locations with
respect to the current viewpoint (see Figure 48, left). Visual clutter can quickly render this view
unusable; therefore it is necessary to restrict the number of assets displayed in it.
These issues, inherent to the first-person view, will be addressed by exploring some special
widgets. For example, a ring widget (asset browser) could indicate coordinates and the positions
of resources in the distance. Different kinds of resources can be identified by applying some
discriminative approach (color coding, icons, etc.). Visual selection could be done by clicking on
an asset or drawing a polygon around a set of them. Connection lines could be used to indicate
the real position of a selected asset. This widget could overcome visual occlusion by showing the
asset positions in a different plane (inside the widget) with perspective. It could also address the
visual clutter issue by scaling assets in the widget according to their distance. The widget would,
however, not be able to show assets located behind the user, and it loses all elevation
information (the widget would be drawn on a plane).
Figure 48 Asset browser
Another concept that will be explored is a 360° browser (cone browser) that could allow
visualization of sensors even behind the user, while using the camera feed in a distorted
first-person view to maintain orientation. The widget uses the camera feed to show the visible
area around the user. It relies on a 3D model (DEM, DTM) to render the remaining views (not
seen by the camera) from the user's standing point. The views would be stitched together in a
panorama and wrapped around a cylinder. The latter step accounts for screen space usage and
orientation, since a flat panorama would not fit the screen and would lose the orientation cues.
The assets can then be overlaid on the rendered picture. Control widgets are shifted to the
corners to optimize screen usage. Perspective lines can be shown in the center of the cylinder
(where the user's head is theoretically situated) to identify the view direction and frustum. The
360° widget still suffers from an occlusion problem. As a solution, the ring widget could be placed
around the view cylinder. The ring (asset browser) would be used to visualize otherwise occluded
sensors around the user, thus allowing the user to visualize assets in all view directions.
Figure 49 Real time distorted views
Data retrieval
All browsing and visual selection alternatives can be used together with normal screening
mechanisms (lists to select the type of asset, numerical input for ranges, etc.). Such inputs help
by screening out unwanted assets, leaving more screen space for the objects of interest. They
also specify modifiers for data retrieval. Modifiers can screen out sensors or data from sensors
(sensors: when selecting only humidity sensors; data: when giving some range). Some modifiers
can also be applied to the retrieval itself (specifying the refresh rate). The data retrieval
mechanism registers all queries and notifies updates as they are received, allowing the
application to make use of the data. This mechanism acts as a GSN client, and relies on GSN
retrieval mechanisms. The retrieval mechanism is part of the data services package. It is
pipelined with data conversion and optimization components, delivering data in a format
appropriate to the user platform.
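A minimal sketch of this registration-and-notification mechanism follows; all names are illustrative assumptions and not the actual GSN client API:

#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Update { std::string sensorType; double value; };
using UpdateHandler = std::function<void(const Update&)>;

class RetrievalClient {
public:
    // Queries are registered together with a callback.
    void registerQuery(std::string sensorType, UpdateHandler handler) {
        subscriptions_.push_back({ std::move(sensorType), std::move(handler) });
    }
    // Called when new data arrives from GSN; dispatches to interested layers.
    void onGsnUpdate(const Update& u) {
        for (const auto& s : subscriptions_)
            if (s.sensorType == u.sensorType) s.handler(u);
    }
private:
    struct Subscription { std::string sensorType; UpdateHandler handler; };
    std::vector<Subscription> subscriptions_;
};

int main() {
    RetrievalClient client;
    client.registerQuery("temperature", [](const Update& u) {
        std::printf("temperature layer refresh: %.1f\n", u.value);
    });
    client.onGsnUpdate({ "temperature", 3.5 });  // delivered
    client.onGsnUpdate({ "humidity", 80.0 });    // screened out
}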
Layer organizer
Once a query has been specified, it can be associated with a layer. This is part of a layer
organizer widget, which allows creating new layers. The layer organizer was discussed in
section 9.6.
9.9.2 Data selection using the cell phone
The cell phone platform will provide a similar widget-based approach for resource selection as the
AR platform. It will also provide direct access to the properties of the environment through the
3D view. The user can directly select anything, such as sensor stations, or just a point on the
ground. A query based on an object id or coordinates, along with the user's profile and
preferences, is made to fetch the data associated with the selection. For example, if simulation
interpolation values are available for that point, they are shown in a widget. Otherwise, simply the
coordinates and general sensor values from the overall area, such as surface temperature, are
presented, if available. Any object that is associated with further data structures, such as a river,
will provide access to the whole topological structure. The associable data structures will be
developed as part of the Data Services development.
The main challenge for data selection on the cell phone platform comes from the possible lack of
a pointing device. Without the capability for direct pointing, the platform needs to provide
alternative solutions. A user could orient the view towards the target, but without further assisting
features (such as magnets), targeting will be difficult. An alternative approach would be serializing
the emphasized objects in the current view (the objects in focus) based on their importance and
possibly their distance from the screen center. A hot button can then be used to loop the
selection focus between the objects. The selection interface will be developed in WP5.
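The following sketch illustrates such a looping selection; the ranking formula (importance minus distance from the screen centre) and all names are assumptions for illustration:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct OnScreenObject { const char* name; float importance; float screenX, screenY; };

class FocusLoop {
public:
    // Serializes the objects in view: most relevant first.
    explicit FocusLoop(std::vector<OnScreenObject> objs) : objs_(std::move(objs)) {
        auto rank = [](const OnScreenObject& o) {
            return o.importance - std::hypot(o.screenX - 0.5f, o.screenY - 0.5f);
        };
        std::sort(objs_.begin(), objs_.end(),
                  [&](const OnScreenObject& a, const OnScreenObject& b) {
                      return rank(a) > rank(b);
                  });
    }
    // Each hot-button press advances the selection focus (objs_ is non-empty here).
    const OnScreenObject& onHotButton() {
        const OnScreenObject& o = objs_[next_];
        next_ = (next_ + 1) % objs_.size();
        return o;
    }
private:
    std::vector<OnScreenObject> objs_;
    std::size_t next_ = 0;
};

int main() {
    FocusLoop loop({ { "sensor station", 1.0f, 0.6f, 0.5f },
                     { "river segment",  0.4f, 0.2f, 0.8f } });
    for (int i = 0; i < 3; ++i) std::printf("focus: %s\n", loop.onHotButton().name);
}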
9.10 Simulation Interface
As described in section 6, simulations are generally not run in the field, but executed by a
simulation server off-site. They run mostly offline: a user does not trigger a simulation and wait
for the results. However, since they are considered a useful aid for decision making, the
results of simulations can be visualized on-site. Visualization is closely dependent on the data
pre-processing component, as simulations might need special representations, and also on the
visualization component (see section 8).
A normal simulation scheduling requires selection of a simulation script and data sources.
Normally, a simulation script dictates the kind of data sources needed. After the script and data
have been validated (meaning it has been confirmed that the simulation can be executed), the
simulation is queued on a simulation server that runs off-site and offline. Offline means that it is
not expected that simulations will be performed at interactive rates. Appropriate interfaces need
to be developed for the selection of sensors and locations, as well as the 3D models required by
simulations.
9.10.1 Simulation using the AR setup
Task 5.4 will provide an apt interface for the selection and scheduling of simulations, as well as
some specific functionality related to what we call "speculative simulation". The tool includes:
• A simple data selection and filtering method to select data sources: the simulation script will
require only certain types of data; the others can be filtered out. The selection tools necessary to
select the data for the simulation will be based on the ones already described in section 2.4.1.
• An interface for the selection of an area or 3D models: this interface could be based on the
drawing interface already described, but it may also be a simple selection interface for the 3D
model.
• An interface for speculative (what-if) simulations: this interface allows, for the purposes of a
certain simulation, replacing some sensors by virtual sensors for which a specialist specifies
values (see the sketch below). The values can be ranges, or the limits of a range to be reached
within a period of time. For example, a specialist might define that a sensor will reach 35 degrees
in the next 24 hours. The simulation is otherwise run normally and its results are presented as in
the case of normal simulations. This module relies on a special feature of GSN that allows
defining virtual sensors. It requires a user interface to define these virtual sensors and specify
values for them; it will reuse some of the drawing/annotation functionality introduced in the
annotation section 1.2.
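As a hedged sketch of such a speculative virtual sensor (illustrative only; GSN's actual virtual sensor definition mechanism differs), a value source could simply ramp towards the specified limit within the given time horizon:

#include <cstdio>

// A speculative virtual sensor: the specialist specifies a target value and a
// time horizon, and the virtual sensor linearly approaches the limit.
class VirtualSensor {
public:
    VirtualSensor(double startValue, double targetValue, double horizonHours)
        : start_(startValue), target_(targetValue), horizon_(horizonHours) {}
    // Replaces the real reading at a given offset in hours.
    double valueAt(double hours) const {
        if (hours >= horizon_) return target_;
        return start_ + (target_ - start_) * (hours / horizon_);
    }
private:
    double start_, target_, horizon_;
};

int main() {
    VirtualSensor temp(20.0, 35.0, 24.0);  // "reaches 35 degrees in 24 hours"
    for (double h = 0; h <= 24; h += 8)
        std::printf("t+%2.0fh: %.1f deg\n", h, temp.valueAt(h));
}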
For the software component, it is of no importance whether a simulation is triggered on-site or as
part of an off-site support activity. The outcome is that a simulation is in progress – the user
might be informed about the progress of the simulation.
9.10.2 Simulation using the cell phone
There are two main approaches for running simulations: either the simulations run continuously
based on incoming sensor data, or there exists a predefined set of simulation parameters and
inputs, and the related simulations are run as batch jobs (offline processes). User-triggered online
simulations may be implemented, depending on their feasibility. This is assessed as part of the
validation processes.
The main interfaces to the simulation results on the cell phone platform are similar to those for
any other data in HYDROSYS. If simulation results are present in the view, they can be directly
selected, or browsed via menu structures. For fetching data related to offline simulations, widget
interfaces will provide the same configuration space as was used to run the simulations.
9.11 Collaboration tools
Task-component relationship: The work described in this section is part of Task 3.3 / Collaboration tools.

The collaboration module includes tools not only to share resources, but also to interactively
discuss and document resources. The user interface and the collaboration module are closely
related, and create a shared workspace with resources. A division can be made between the
actual collaboration (meaning the time when users are communicating and working together) and
the resources that are generated for collaboration. This module supports both the generation of
resources for collaboration (annotations) and the actual collaboration. In the case of the AR
setups, the collaboration is prepared using the base interface, meaning that a list of possible
collaborators is available, next to some data sources that might be specifically needed during
collaboration (like text documents). The latter are placed in the campaign workspace (a "shared
workspace").
Figure 50 Diagram describing a possible collaboration process
9.11.1 Collaboration tools for the AR setup
In order to select collaborators, a user interface will be created that exploits the locality of the
users. For users in the field, some sort of map (2D or 3D overview) can be used so that the
locations of collaborators are clearly indicated. Collaborators located remotely from the site can
be displayed at the edges of the map. Once collaborators are selected, communication can be
initiated. The communication protocol will depend on what possibilities are available in the field.
At any time during communication a user can decide to share a resource. This involves accessing
a resource selection interface to pick the desired resource and sending it to the participants of the
session. Before sending it, the user might want to annotate it. An interface for adding simple
information (sketches from a pen tablet) will be provided for that purpose. The user can thus
highlight interesting areas in photos or maps, or add short handwritten text. These annotations
are shared in association with the resource at hand.
Another useful collaboration tool is viewpoint sharing; the mechanisms are similar to camera
sharing (see section 9.6). When sharing a viewpoint, a user provides the background image from
the current camera to all collaborators, together with the augmentations, so that they can all
share the view. Annotations can be drawn in a shared view by all collaborators. Viewpoint sharing
has a direct relation to the traveling methods presented in section 9.7.
A number of processes provide support for the collaboration tools. A management process
(campaign management) registers all users participating in the campaign, so that they can be
located and selected for collaboration; a workspace is also created where resources
corresponding to the campaign can be stored. The workspace allows the sharing of resources
even without initiating a collaboration session. One sort of resource used while inspecting a site is
the markup or drawing. It is used to mark regions of the environment. The tool allows a user to
use the handheld device to generate a geo-referenced drawing marking a small area. This
marking can be recalled to highlight an area of interest, for example.
Another tool used to generate resources is the annotation. HYDROSYS will support the
generation of geo-referenced annotations of different kinds. In order to place an annotation, a
location is needed, indicating at least a position for the annotation. The location selection tool
relies on the same mechanisms used by the markup tool (a simple drawing tool), and it can be
used to select a point or a small region where the annotation will be anchored. Several annotation
forms can be inserted:
• Iconic: places an icon with a (pre-)associated meaning. The icons and meanings are
pre-generated in order to minimize the time it takes to place annotations.
• Simplified text: allows a user to place a simplified text annotation. The text is based on
pre-generated labels. These are created offline and used during on-site analysis.
• Template based: the annotation uses a pre-created template. The user needs to fill in a
minimum amount of text. Templates are created offline during the preparation process.
• Descriptive: the user writes the annotation text without using any pre-generated data.
Annotations are stored as resources for the campaign and can be obtained by giving the
appropriate settings to the SmartClient.
9.11.2 Collaboration tools for the cell phone
The 3D map can provide annotations like any other data. For online collaboration, it can send
tracking information of the mobile users, and forward any user-generated annotations directly to
the collaborators via a publish/subscribe mechanism. These annotations automatically act as
anchors for a location-based discussion forum. With the publish/subscribe mechanism, the forum
essentially forms a chat channel. Any activity on the channel can be visualized on the annotation
marker in the 3D view. Any open channel is automatically updated, unless the user is currently
writing a reply.
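A minimal sketch of such an annotation-anchored publish/subscribe channel follows (the names are illustrative assumptions):

#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Message { std::string author, text; };
using Subscriber = std::function<void(const Message&)>;

// A geo-referenced annotation acts as the channel anchor; publishing a
// message notifies every subscribed collaborator, who can then flag
// activity on the annotation marker in the 3D view.
class AnnotationChannel {
public:
    void subscribe(Subscriber s) { subscribers_.push_back(std::move(s)); }
    void publish(const Message& m) {
        for (auto& s : subscribers_) s(m);
    }
private:
    std::vector<Subscriber> subscribers_;
};

int main() {
    AnnotationChannel channel;  // anchored to one annotation
    channel.subscribe([](const Message& m) {
        std::printf("[marker blinks] %s: %s\n", m.author.c_str(), m.text.c_str());
    });
    channel.publish({ "field user", "discharge rising at station 3" });
}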
9.12 Base interface
Task-component relationship: The work described in this section is part of Task 5.6 / Base monitoring front-end.

HYDROSYS is building support tools for on-site analysis and event-driven campaigns. Part of the
support for on-site campaigns is provided off-site. The desktop user interfaces included in
HYDROSYS target these off-site support tasks. Some of these off-site activities are meant to be
carried out primarily offline (not during the campaign). Site and campaign preparation are
examples of mainly offline tasks and are part of what is referred to as the "base monitoring
interface". A general representation of the tasks in site and campaign management can be found
in section 2.4.1. The set of tools for site and campaign building relates directly to the campaign
manager.
When a new site is planned for the first time, initial data has to be gathered and indexed, so that it
is available for future campaigns – for example, DTMs, measurements from previous sensor
deployments, imagery, weather data, etc. Indexing and including all of these is not a trivial task.
Some data sources might not be geo-referenced, in which case manual geo-referencing needs
to be carried out. All preparation of initial data for a given site corresponds to a site building task.
A task subsequent to site building is campaign building. Since several campaigns can target the
same site, campaign building is done more often than site building, and takes a site selection as
its starting point, using all the data available for that site. A certain campaign may involve
introducing new data that is also not geo-referenced, which must be dealt with accordingly. In
most cases, the hard part has been taken care of by the site building step.
The base interface will provide several of the same interfaces as used by the user in the field,
with modified screen management / layout, and is extended with a simple front end to manage
user profiles (see section 2.4.1). In addition, the base interface will take advantage of the better
graphics capabilities available in the office, thus receiving higher-quality pre-processed data.
For the cell phone platform, all data selected for the campaigns needs to go through the
optimization preprocesses before being placed onto the Data Services. Therefore, a suitable set
of preprocesses needs to be created in advance to support the most typical legacy data types.
Further preparation for a campaign may also involve manual interventions, depending on the data
sets. For example, partial 3D modeling by hand may be needed if no true 3D data sets with
visual detail exist for the campaign area.
9.13 Validation factors
The user interface binds together several validation factors that can also be found in the other
chapters, basically since it provides the front-end to the system. In general, the user interface
needs to take into account frequently used usability measures like the structuring of information
and controls, or the screen layout. The user interfaces need to allow clear access to, and
interpretation of, the multivariate data – they should allow searching, finding and comparing the
information. At different stages in the project, perceptual and cognitive tests need to be performed
to ensure suitable usability. Of specific interest will be the traveling and browsing techniques
used, which allow viewing the observed scene from multiple perspectives with their co-located /
associated data sources. It is likely that multiple techniques will need to be tested: they provide
key aids to the user's orientation (meaning the user being oriented) in the field and to the user's
understanding of where resources are in the field with respect to herself.
Perceptual / cognitive issues of viewpoint changing will need to be tested, in particular mental
map and orientation issues. The collaboration techniques will need to be tested, both on the
suitability of making annotations well enough (for example, when using gloves), and on
communicating and exchanging information in an apt way. For the sensor placement interface,
the error of the hybrid tracking module is of special importance. Until initial tests are carried out,
the error cannot be estimated, since it depends on several factors. Finally, the simulation
front-end needs to be tested on its suitability for usage in the field, since it likely represents a
different workflow for the user.
Appendix 1. Studierstube sub-components
This appendix provides detailed descriptions of several of the subcomponents of the Studierstube
software framework.
OpenVideo Concepts
The figure below shows the Video Flow Structure in more detail.
Video Flow Structure in detail.
As a starting point of the video flow it is necessary to feed OpenVideo with the video data. For
that matter some sort of video input (device) can be attached to a PC where the driver takes care
of reading in the data. Examples for possible input sources are:
• Camera
• File
• Ultra sound
• MRI data
OpenVideo integrates several drivers to access different forms of video sources and channel
them to an OpenVideo sink. Video capture modules act as sources (in the context of the
sink/source principle); the most often used source modules include DSVL and Video4Linux. The
DirectShow Video Processing Library (DSVL) builds on Microsoft's DirectShow module and is
consequently used when working under the Windows operating system.
XML example:
<DSVLSrc config-file="MyCamera.xml" pixelformat="B8G8R8" flip-v="true">
<some nested video sinks />
</DSVLSrc>
Video4Linux is a video capturing library written for acquiring video data under Linux. Video from
file (DivX) enables OpenVideo to take an existing video from a file. This source is often used in
test environments, so it is not necessary to have an active video capturing device connected to
the computer. Video sources channel video data to OpenVideo sinks. There are several ways to
feed an OpenVideo sink with the video signal. The figure below shows how the video sources
can be connected with the video sinks.
OpenVideo sinks and sources.
The following OpenVideo example configuration shows two sources which wrap some sinks. The
first one is a DSVL source, which takes the camera configuration file and the pixel format as
parameters. The second source is a VideoWrapper source. Note that the ordering of the nested
sinks does not matter.
<openvideo scheduleMode="timer" updateRate="14">
<DSVLSrc config-file="MyCamera.xml" pixelformat="B8G8R8" flip-v="true">
<VideoSink name="myVideoSink" pixelformat="R8G8B8"/>
<GLUTSink name="myGLUTWindow" pixelformat="B8G8R8"/>
<GL_TEXTURE_2DSink name="myGLTexture" pixelformat="R8G8B8"/>
<FileSink name="myFileSink" filename="c:\myFile.avi" pixelformat="B8G8R8"/>
</DSVLSrc>
</openvideo>
Video Background
This section describes what happens inside the framework when a new video frame is captured
by a camera and should be shown as the video background in the viewer component of
Studierstube. Let's assume the current frame is fetched by DSVL over DirectShow and forwarded
to a VideoSink. VideoSink is implemented as a Publisher and follows the Publish/Subscriber design
pattern. In our case the Studierstube VideoComponent acts as a Subscriber and receives the video
frames as they are passed to the sink. In turn, the VideoComponent is implemented as a
Publisher inside Studierstube. Subscribers are, for example, the viewer and the event system. The
viewer sets up the SoVideoBackground node, which is part of the Studierstube scene graph.
When the SoVideoBackground node is traversed, it renders a texture of the current video frame
on the far plane of the viewing volume.
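The following minimal C++ sketch mirrors this frame flow; the class names follow the text, but the actual Studierstube/OpenVideo interfaces differ in detail:

#include <cstdio>
#include <functional>
#include <vector>

struct Frame { int number; };
using FrameHandler = std::function<void(const Frame&)>;

class Publisher {
public:
    void subscribe(FrameHandler h) { handlers_.push_back(std::move(h)); }
protected:
    void publish(const Frame& f) { for (auto& h : handlers_) h(f); }
private:
    std::vector<FrameHandler> handlers_;
};

// The sink publishes each frame fed to it by the capture driver.
class VideoSink : public Publisher {
public:
    void onCapturedFrame(const Frame& f) { publish(f); }
};

// The VideoComponent is both a subscriber (to the sink) and a publisher.
class VideoComponent : public Publisher {
public:
    explicit VideoComponent(VideoSink& sink) {
        sink.subscribe([this](const Frame& f) { publish(f); });
    }
};

int main() {
    VideoSink sink;
    VideoComponent component(sink);
    component.subscribe([](const Frame& f) {  // the viewer
        std::printf("viewer: texture frame %d on far plane\n", f.number);
    });
    component.subscribe([](const Frame& f) {  // the event system
        std::printf("event system: frame %d\n", f.number);
    });
    sink.onCapturedFrame({ 1 });
}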
Calibration
Graphics rendering is essentially a simulation of a perfect pinhole camera. Unfortunately, real
cameras are not perfect and must therefore be calibrated before being properly used. To
calibrate a new camera (or new lens) it is necessary to find the camera's intrinsic parameters
and radial distortion. Using the intrinsic parameters, it is necessary to undistort the incoming video
image and to model the virtual pinhole camera to emulate the focal length and the principal point
offset. TUG's augmented reality framework, Studierstube, already includes an abstract model of
an off-axis camera. This node models a virtual camera from the given parameters of principal
point offset and focal length of a real camera. Additionally, in order to deal with the lens distortion,
a new video background node was created. This node takes a camera calibration file as input,
from which it retrieves the camera calibration parameters. Any new cameras and lenses acquired
for the project have to go through the same stages of calibration before being used.
Difference between uncalibrated and calibrated images.
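As a hedged sketch of this undistortion step, using OpenCV rather than the project's own calibration nodes (an assumption for illustration; the intrinsic and distortion values below are placeholders, not measured parameters):

#include <opencv2/opencv.hpp>

int main() {
    // Intrinsics: focal lengths fx, fy and principal point cx, cy (placeholders).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0,   0,   1);
    // Radial (k1, k2) and tangential (p1, p2) distortion coefficients.
    cv::Mat dist = (cv::Mat_<double>(1, 4) << -0.25, 0.07, 0.0, 0.0);

    cv::VideoCapture cap(0);
    cv::Mat frame, undistorted;
    while (cap.read(frame)) {
        cv::undistort(frame, undistorted, K, dist);  // emulate the pinhole camera
        cv::imshow("calibrated video background", undistorted);
        if (cv::waitKey(1) == 27) break;             // Esc quits
    }
}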
Appendix 2 Specification sheets
We have acquired a wide variety of cameras for the HYDROSYS project. The following are the
specifications:
uEye
USB based, Chromatic, 640x480 resolution,
CC-Mount lens, 70 fps.
uEye Camera
Guppy
Firewire based, Chromatic, 640x480 resolution,
CC-Mount lens, 70 fps.
Guppy Camera
Thermal camera
Undecided
The uEye cameras will be used for the UMPC-based setups, while the Guppy camera will be used
for the PTU-based setup. The heat camera will be mounted on top of the blimp for aerial
surveillance. The cameras acquired for the project support CC mount type lenses. This allows
interchangeable focal lengths for different tasks. In particular, we have acquired 4.2mm
Pentax lenses.
Blimp

Model            Length (m)  Volume (m3)  Flight time (h)  Top speed (km/h)  Cruise speed (km/h)  Operating wind speed (m/s)  Payload (kg)
Z7000            7           13.5         1                40                25                   4                           2.5
Z8000            8           15.5         1.5              40                25                   4                           3.9
Z10000 Basic     10          35           3                40                30                   6                           8.5
Z10000 Electric  10          35           1-2              45                30                   6                           5
Z10000 Top       10          25           2-3              65                30                   9                           8.5
Z10000 Pro       10          35           2-4              75                30                   11                          8.5
Panasonic Toughbook CF-U1
Panasonic Toughbook UMPC to be used in HYDROSYS
• Intel® Atom™ processor Z520 1.33GHz with 533MHz FSB, 512KB L2 cache
• 16GB solid state removable drive (32GB optional)
• 1GB memory
• 5.6" WSVGA sunlight-viewable touchscreen (1024 x 600 resolution)
• Anti-reflective screen treatment
• LED backlighting
• Extremely rugged
• MIL-STD-810F and IP54 compliant
• foot drop approved
• Magnesium alloy chassis encased with ABS and elastomer
• Removable solid state drive
• Sealed all-weather design
• Rain-, spill-, dust- and vibration-resistant
• Rotating hand strap
• Intel® Wireless WiFi Link 5100 Series (802.11a/g/draft-n)
• Bluetooth® v2.0 + EDR
• Interfaces: USB 2.0 x 1, SD Card x 1, Microphone x 1, Headphone x 1, Expansion Bus x 1
• Integrated options include 3G mobile broadband, integrated camera, fingerprint scanner, GPS, barcode or RFID readers
• Optional expansion modules for magnetic stripe reader & serial/ethernet/smartcard are expected in late 2008
• Approximately 9 hours of battery life
• lbs (with strap and both batteries)
• 2.2" (H) x 7.2" (W) x 5.9" (D)
Ubisense System Specifications
Accuracy: 3D static accuracy up to 15cm at 95% confidence level, depending on specific environment and system configuration
Tag update rate: Variable from 10 per second to 1 every 15 minutes
Aggregate cell update rate: Max. 40 updates / second
UWB radio transmission: 6.0 GHz – 8.5 GHz, -41.3 dBm/MHz; centre frequency: 7.02 GHz
Telemetry radio channel: 2.4 GHz ISM band, Ubisense control protocol
Tag – sensor maximum range: >160m in Open Field Measurement (optimally aligned Slim Tag)
Suggested cell configuration: 1 to 10 sensors per cell, number of cells is not limited
Suggested cell geometry: 35m x 35m with 4 sensors indoors in an open environment (e.g. warehouse), sensor positions as high as possible
Minimum Server Specification
CPU: 2.4GHz Celeron or equivalent
Memory: 512MB
Hard disk: 40GB
Network: 100Mb Ethernet
OS:
Component                       Operating System
Controller and Platform server  Linux 2.4.18 or later; Linux 2.6.3 or later; Windows XP Pro SP2; Windows Vista Business
Services                        Linux 2.4.18 or later; Linux 2.6.3 or later; Windows XP Pro SP2; Windows Vista Business
All server components will also run on Windows Server 2000 and Windows Server 2003, but are
not supported by Ubisense. Server components will not run on Microsoft Virtual PC.
Minimum Graphical Client Machine Specification
OS:
Component             Operating System
Administration tools  Windows XP Pro SP1/SP2; Windows Vista Business
Client tools          Windows XP Pro SP1/SP2; Windows Vista Business
Developer             Windows XP Pro SP1/SP2; Windows Vista Business
Graphics card: Any graphics card with support for 3D acceleration, with drivers that correctly
support DirectX 9.0c
Ubitag (Compact Tag)
Series 7000 Compact
Size and Weight
Dimensions 38mm x 39mm x 16.5mm
(1.50” x 1.53” x 0.65”)
Weight 25g (0.88 oz)
Temperature
Standard -20°C to 60°C (-4F to 140F)
Extended -30°C to 70°C (-22F to 158F)
Further temperature ranges available on request
Humidity
0 to 95%, Non-condensing
Peripherals
LED (user programmable), Push button (user programmable), Motion detector
Radio Frequencies
Ultra-wideband 6GHz – 8GHz
Telemetry channel 2.4GHz
Certifications
FCC part 15, EU CE
Power Supply & Battery Life
3v coin cell (CR2477)
Over 5 years at a continuous 5 second beacon rate
Mounting Options
Industrial adhesive pad (supplied)
Badge clip, Cord lanyard, wrist/arm strap, industrial
Velcro, magnetic, screw and flexible mounting, brackets
Ubisensor
Size and Weight
Dimensions 20cm x 13cm x 6cm (8” x 5” x 2.5”)
Weight 650g (23 oz)
Operating Conditions
Temperature -20°C to 60°C (-4F to 140F) Standard
Extended temperature ranges available on request
Humidity 0 to 95%, Non-condensing
Enclosure
Standard IP30
IP63, 65, 67 NEMA and Intrinsically Safe available on request
Operating Range
Standard greater than 160m (500 feet) OFM
Precision
Achievable accuracy better than 15cm (6”) in 3D
Radio Frequencies
Ultra-wideband 6GHz – 8GHz
Telemetry channel 2.4GHz
Certifications
FCC part 15; EU CE
Intrinsic Safety – Class 1 Div 1, Zone 1 on request
Power Supply
Power-over-Ethernet IEEE 802.3af
Low voltage 12V DC @ 10W
Mounting Options
Adjustable mounting bracket (supplied)
Appendix 3 Site and campaign planning
This appendix provides a possible use case describing how a campaign is planned and performed.
Appendix 4 WLAN bridge setup and experiment results
This appendix provides details on the long-distance WiFi link equipment and the tests being
performed.
Wavelength
The wavelength of communication is critical in acquiring the hardware:
• 2.4 GHz is 802.11b/g and very common. It has good propagation range, but it interferes with
cordless phones and microwave ovens. Moreover, it is very sensitive to water in the air (close to
the absorption frequency of water).
• 5.8 GHz is 802.11a and less common, but it is less susceptible to interference and less
sensitive to water absorption.
5.8GHz will therefore be used in long-distance communication links, whereas 2.4GHz will be
used locally in the field where required.
Gain
The propagation distance of a single link depends on the available signal gain. A simple method
of calculating this distance is to use online system performance calculators, although it can be
predicted much more accurately using the free software RadioMobile (an example of the output
of the software is shown below). Using this software, we minimize the emitted power (to reduce
the power required at the remote station) and compensate using a high-gain antenna (which is
bulky, but this is not a problem for a fixed station). It is much better to over-engineer the link to
compensate for weather effects, as hardware of different gains does not usually differ in price.
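As a hedged worked example (round numbers, for illustration only), the free-space path loss for a distance d in km at a frequency f in MHz is

    FSPL [dB] = 20 log10(d_km) + 20 log10(f_MHz) + 32.44

For the 5.74 km, 5785 MHz link of the test setup below this gives about 15.2 + 75.3 + 32.4 ≈ 123 dB. With 17 dBm transmit power, 27 dB antenna gain and roughly 1 dB cable loss on each side, the ideal received level would be 17 + 2 x (27 - 1) - 123 ≈ -54 dBm. A real link loses considerably more to pointing errors, fading and weather, which is exactly why the link should be over-engineered.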
Antenna
There are several possible antenna types: grid antennas, parabolic antennas and Yagi antennas.
The antenna focuses the electromagnetic wave (the energy is not radiated omnidirectionally but
focused on a target), increasing the power available in a single direction. Alignment of the
antennae can be achieved using the selected software; the only assumption is that we will be
able to maintain alignment after an extended period of time exposed to wind, freezing
temperatures, etc. This can only be resolved by an extended period of experimentation. In order
to avoid having to buy a radome (necessary for a parabolic antenna in order to improve its
resistance to the wind), grid antennas have been selected. The cost of these is approx. 80 Euros
each, plus minimal costs for cabling.
Wifi radio
A number of routers have been investigated by SLF. The final decision, based on power
consumption and flexibility, has been to use ALIX/WRAP boards from PC Engines, combined with a
Compex 23dBm, 200mW wireless radio card. ALIX/WRAP boards are small (approx. 20cm x 20cm),
low-power (approx. 3-5W) UNIX-based PCs. The cost of this setup is approx. 100-200 Euros/unit
(depending on accessories and whether outdoor enclosures are required). The routing may be
implemented simply in many of these systems by using Mikrotik RouterOS. This is licensed
software, but it allows complete control of all the required parameters and more in a graphical
interface (licences are $45).
Test setup
A test set-up is to be implemented over a 6km stretch between SLF's Wannengrat field site and
Davos. This uses the house of a member of the public (where line-of-sight communications are
available) as one of the hops in the network. In order to have as much margin as possible to
evaluate the performance of the system, everything is overdesigned. On both sides of the link, the
same 27dB, 5.8GHz grid antenna is used. The ideal goal would be to use such an antenna for the
stations and a low-gain, omnidirectional antenna at the receiving station (in the valley at 'In den
Buehlen'): thus the same receiving station could serve several stations. The initial setup, however,
uses a parabolic grid antenna at either end. Once this system has been optimised, we can
evaluate (from the achieved SNR) whether such an omnidirectional setup is viable. Originally, the
boards were set up using the Voyage Linux OS. This was, however, found to be limited, as the
regional code for the radio was set to 0 (hardcoded) and hence the radio power could not be
increased sufficiently to obtain a link. A better method was found using the RouterOS software,
which allowed complete control of all the parameters, showed system performance and simplified
the routing process.
The wireless network is as follows:
Black dotted lines indicate wireless communication. The left-hand link is a 2.4GHz link between
the fixed cable connection and the wireless bridge on the roof; the right-hand link is the 5.8GHz
6km link to the meteo station on Wannengrat.
Site Survey
Equipment used:

                        Wannengrat Site              'In den Buehlen' Site
Antenna                 27dB 5700-5800MHz parabolic  27dB 5700-5800MHz parabolic
Embedded network board  WRAP2c                       ALIX-2b
Radio card              Atheros AR5212 mPCI          Atheros AR5212 mPCI
Platform                RouterOS v3.11               RouterOS v3.11
Survey Results:
•
•
•
•
Radio power @ 17dm [50mW], with a 27dB antenna gain, losses about 1dB (maybe
more)
802.11a 5.8Ghz, Channel 157: 5785Mhz
Distance between Wannengrat and In den Buelen is 5.74 km (3.6 miles)
Terrain elevation variation is 835.6 m
Wannengrat Site:
'In den Buehlen' Site:
27dB – Cable loss and terminations 27dB – Cable loss and terminations
(~1dB)
(~1dB)
RSSI -79dBm, TX -78dBm SNR: 25dB
RSSI: -82dBm TX: -77dBm SNR: 24dB
Signal
17dBm (50mW)
17dBm (50mW)
TX power
ODFM 16-QAM, 24Mbits max
Modulation ODFM 16-QAM, 24Mbits max
~2364m
~1640m
Elevation
Elevation
-7.2344°
7.2344°
angle
Magnetic
92.9°
272.9°
North
Azimuth
True North
94.3°
274.4°
Azimuth
Gain
GIS Radio Mapping: The image below shows a generated view of the terrain between the two sites; the yellow lines show the antenna response pattern.
Conclusion on site survey
After successfully positioning the two sites and establishing a connection, we were surprised to see that we achieved better results than those simulated with the GIS radio-surveying software; this could be due to the accuracy of the positional and antenna data. The Radio Mobile simulation predicted a maximum SNR of 19.7dB, whereas on 13 August 2008 (in good weather conditions) we were able to achieve a maximum of 25dB and an average of 23dB. After finding the best possible position for both antennas and optimising the connection, we were able to run data bandwidth tests. A maximum TCP throughput of 12.7Mbit/s was achieved (uncompressed), and 14.4Mbit/s with compression added. As the results are good, steps can be taken to improve the link even further, e.g. changing the type of cable assembly (N-type connectors) or using higher-quality, shorter cable (LMR400).
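The report does not name the tool used for the bandwidth tests; the sketch below (Python, standard library only) illustrates the kind of measurement involved: one end accepts a TCP connection and reports the received rate, while the other streams data for a fixed period.

import socket
import time

CHUNK = 64 * 1024      # bytes per send/receive call
DURATION = 10          # seconds to stream from the client side

def run_server(port: int = 5001) -> None:
    # Accept one connection, count received bytes, report Mbit/s.
    with socket.create_server(("", port)) as srv:
        conn, _addr = srv.accept()
        with conn:
            total = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.time() - start
            print(f"received {total * 8 / elapsed / 1e6:.1f} Mbit/s")

def run_client(host: str, port: int = 5001) -> None:
    # Stream zero-filled data for DURATION seconds.
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, port)) as sock:
        deadline = time.time() + DURATION
        while time.time() < deadline:
            sock.sendall(payload)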
Appendix 5 Network diagrams
Network connection using WLAN
Network using GPRS connection
Appendix 6 Simulation models
Example of a simulation output of SNOWPACK showing snow grain size evolution over time
during winter (picture source: www.slf.ch)
Flow chart of the ALPINE3D model system (picture source: Lehning et al. 2006)
Appendix 7 Traditional visualization methods

Variable                                  Traditional visualization
Water level                               Graph with no specific color code.
Water flow (hydrogram)                    Graph with no specific color code.
Soil moisture and pressure                Graph with no specific color code; topography with interpolated colors.
Water, air and skin temperature           Graph with no specific color code; topography with interpolated colors.
Relative humidity                         Graph with no specific color code.
Precipitation                             Bar and line graph with no specific color code; topography iso-lines with interpolated colors.
Wind speed and direction                  Vector arrows with no specific color code.
Solar radiation                           Graph with no specific color code.
Hydrological danger map (in Switzerland)
Land use and land cover
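The plain graph styles listed above are straightforward to reproduce with standard tooling. A minimal sketch (Python, assuming matplotlib and numpy are available; the data is synthetic and purely illustrative) recreating a precipitation bar graph and a hydrogram:

import numpy as np
import matplotlib.pyplot as plt

days = np.arange(30)
rng = np.random.default_rng(0)
precip = rng.gamma(1.5, 2.0, size=days.size)   # mm/day, synthetic
# Crude runoff response: convolve rainfall with a decaying kernel.
kernel = np.exp(-np.arange(10) / 3.0)
flow = np.convolve(precip, kernel)[:days.size]

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.bar(days, precip)               # precipitation: bar graph, no color code
ax1.set_ylabel("precipitation (mm)")
ax2.plot(days, flow)                # water flow: plain line graph (hydrogram)
ax2.set_ylabel("flow (arbitrary units)")
ax2.set_xlabel("day")
plt.show()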
Appendix 8 Handheld display system construction
This appendix describes the design of the new handheld construction required for on-site exploration of environmental resources, as well as planned future extensions.
End-user requirements
During the UCD interviews, end-users expressed positive interest in the handheld system, as long as it is not "yet another device" duplicating equipment they already need to take into the field. Fortunately, the UMPC (with its extensions) can serve multiple purposes, thereby integrating different functions. End-users did place some main requirements on the handheld device: it should be robust, keep working at low temperatures (down to -25 degrees Celsius), preferably be weather-resistant, and be easy to carry. In addition, based on our experience with end-users in other projects, further requirements can be stated that largely affect the ergonomics of the device: well-functioning controllers, an ergonomic and balanced grip for both one- and two-handed usage, and the ability to use the device in an unobtrusive pose. The task space describing the functionality normally performed with a UMPC is limited. Previous task analyses investigated a range of applications and setups, and delivered information about common tasks performed in AR applications, including viewpoint manipulation, maneuvering, moving large datasets, system control, changing visualization modes, object selection, manipulation, and numerical and textual input.
Need for extension
Next to their positive aspects, UMPCs also have strict limitations for use as an Augmented Reality platform. The processing power, the quality of the (possibly) embedded camera, the size of the screen and the available controls limit the functional possibilities of off-the-shelf devices. These often need to be extended with sensors and controllers, such as an additional high-quality camera and a GPS, to accurately support augmented reality applications. These extensions become necessary even when the UMPC already contains such sensors: it is mostly the quality required by AR applications (high-framerate, high-resolution camera; high-resolution, low-error GPS; gyroscopes) that must be addressed by additional sensors.
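As an illustration of what such an extension involves on the software side, the sketch below reads position fixes from an external GPS receiver over a serial port (Python; the pyserial package, the port name and the baud rate are assumptions, since the actual receiver is not fixed at this point):

import serial  # third-party pyserial package, assumed installed

def nmea_to_degrees(value: str, hemisphere: str) -> float:
    # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm.
    degrees, minutes = divmod(float(value), 100.0)
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

# Placeholder port and baud rate; adjust to the receiver actually used.
with serial.Serial("/dev/ttyUSB0", 4800, timeout=1) as gps:
    while True:
        line = gps.readline().decode("ascii", errors="replace").strip()
        if line.startswith("$GPGGA"):      # GGA: essential fix data
            fields = line.split(",")
            if len(fields) > 7 and fields[2] and fields[4]:
                lat = nmea_to_degrees(fields[2], fields[3])
                lon = nmea_to_degrees(fields[4], fields[5])
                print(f"fix: {lat:.6f}, {lon:.6f} ({fields[7]} satellites)")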
Current state of extensions
A common practice is to accommodate extensions in an external casing or hull connected to the UMPC. Homebrew versions of such hulls provide a quick way to prototype AR systems, but they restrict the manipulation of the device and suffer from frequent disconnection of sensors, unbalanced weight, limited access to the sensors (everything has to be taken apart to reach a sensor), and reduced possibilities for control. For production systems, these limitations render homebrew solutions unsuitable. The same applies to usage under potentially rough conditions such as those envisioned by HYDROSYS and its scenarios. The bottom line is that ergonomics plays an important role when considering the extension of handheld devices, since additional sensors inevitably add weight and volume that must be carried together with the device.
Our previous work
A device aimed at AR applications that both enables hardware extensions and targets such tasks in an ergonomic, usable manner was presented in Veas and Kruijff 2008.
Vesp’R prototype
Vesp'R is a lightweight construction with two handles that can be configured dynamically on the sides of, or below, a box that holds the UMPC (a Sony Vaio UX) and the sensors. The construction was built of nylon stereolithography (STL) parts, coated with a thin layer of velvety rubber. During evaluations, the construction performed extremely well, receiving high ratings for ergonomic usage and control of applications. In the framework of the Vidente project (www.vidente.org), the Vesp'R has been used by engineers in outdoor situations in Austria for over a year and demonstrated on many occasions. This, however, also resulted in a worn-out construction: it was not built to endure rough usage.
HYDROSYS robust platform: first prototype
The hardware platform targeting mobile AR applications in HYDROSYS shares all the above-mentioned requirements, and adds the need to work under possibly rough outdoor conditions. HYDROSYS requires that the sensors attached to a handheld device be as accurate and high-quality as possible. Integrating high-quality sensors is therefore a must, while supporting AR interaction tasks remains a priority. HYDROSYS aims at providing an ergonomic solution that satisfies the above requirements.
In order to discuss the future system developed in HYDROSYS with end-users, we needed a new construction, as Vesp'R is no longer usable for that purpose. Since augmented reality is quite difficult to explain without "using" it, we designed and created a new, initial prototype. This prototype already proved very useful in multiple discussion sessions with end-users in Switzerland and Finland.
Based on our previous experience in creating handheld constructions and the new conditions of HYDROSYS, we arrived at the following requirements for the construction:
• Ergonomic usage: the construction needs to be well balanced and have a good grip for holding the device up, also for longer-duration usage.
• Robust: the construction needs to withstand the weight of the attached devices, and users who do not handle it with care.
• Protective: the construction should protect the rather sensitive sensors from damage.
• Weather-repellent: especially the box holding the sensors should keep out water, snow and dirt as well as possible.
• Low-temperature usage: some of the sensors do not work well at very low temperatures, so some insulation needs to be provided.
• Flexibility / modularity: the construction should allow the user to make modifications, such as connecting different sensors.
To create the new construction, we started by crafting different models from clay, to arrive at a grip / platform that would hold the UMPC and support connecting sensors. The grip should also allow users to hold the device in multiple ways. Normally, due to the weight, users hold the construction with two hands, but in order to use the pen or keyboard, a single-handed grip is useful. This grip should also be well balanced, to prevent the construction tilting to one side and causing fatigue. A first clay version of the grip can be seen in the picture below: it supports holding the construction from the sides, but also with a well-balanced grip from below.
Figure xyz: Clay models of the new construction
Taking the grip as the starting point for further try-outs, we created digital models that hold two boxes. One box holds the UMPC (either the Sony Vaio UX we have used in previous installations, or the Panasonic Toughbook) and additional power sources (batteries) used to power some of the sensors. It also protects the computer from above with a small screen (hood), warding off some of the reflections caused by sunlight. The second box, mounted on the back of an x-form integrated in the grip, holds all the sensors (GPS, orientation sensor, camera) and has room for more sensors, such as an additional location sensor for hybrid tracking. All devices attach neatly to a fin-like construction: when the box is opened from the top, the fin with all sensors can easily be taken out. Just like the Vesp'R, the new prototype was printed as nylon STL, but not covered with the rubber material used before.
During the sessions with the end-users, the construction proved to be very robust and ergonomic, but also quite big: since we made the "sensor box" spacious to allow extension with other devices, it has considerable open space that can be optimized. Also, even though it protects the sensors much better than our previous construction, the box's weather resistance can still be improved.
HYDROSYS next prototype with sensor box
In the second half-year of the project, a new prototype of the construction will be built. It will use the same grip, but will omit some parts that have been found less useful. The front box can be largely removed, since the Panasonic Toughbook is well protected by itself. Also, the screen of the Panasonic is not as reflective as that of the Sony Vaio UX and therefore does not necessarily need the "hood" provided in the first prototype. The biggest change, though, will be made to the "sensor box".
The new sensor box will be designed specifically to fit the devices used in HYDROSYS. This has the big advantage that we can shorten the cabling in the box considerably (the cables normally take up a large part of the space) and fit the sensors in the most space-effective positions. We expect to make the sensor box about 40% smaller than the current box. The initial sensor box was a single-wall construction (3mm thick). For the new box, we intend to make a double-wall construction with thinner walls and a thin layer of insulating material in between, to protect against cold weather. The box will also be better sealed, to keep out water, snow and dust/dirt as well as possible. The sensor box should be a self-contained unit that can easily be attached to and detached from the handheld construction; it could thus also be mounted easily at another location, such as on a car or a tripod.
Appendix 9 Abbreviations

Technical
UMPC          Ultra-Mobile Personal Computer
AR            Augmented Reality
VR            Virtual Reality
UWB           Ultra-Wideband
OS            Operating System
WiFi          Wireless Fidelity
GPS           Global Positioning System
IMU           Inertial Measurement Unit
UCD           User-Centered Design
PTU           Pan-Tilt Unit
F+C           Focus and Context
MID           Mobile Internet Device
GSN           Global Sensor Networks, developed at EPFL
Studierstube  AR framework, developed at TUG
OpenVideo     Video abstraction library, developed at TUG
OpenInventor  Rendering engine, open-sourced by Systems in Motion
OpenGL        Graphics API
CityGML       Common information model for the representation of 3D urban objects
COLLADA       COLLAborative Design Activity, an interchange file format for interactive 3D applications
DFKI          German Research Center for Artificial Intelligence
EPFL          Ecole Polytechnique Fédérale de Lausanne
FFG           Austrian national research funding agency
FOSS          Free and open source software
GEOSS         Global Earth Observation System of Systems
GPL           General Public License
HITLab        Human Interface Technology Laboratory
ICT           Information and Communication Technology
LGPL          Lesser General Public License
PM            Person-month
RTLS          Real-time location system
SensorML      Sensor Model Language
SLAM          Simultaneous Localization and Mapping
SwissEx       Swiss Experiment project
TKK           Helsinki University of Technology
TUG           Technische Universität Graz
UCAM          University of Cambridge
UCL           University College London
UNISA         University of South Australia
WSL           Swiss Federal Institute for Forest, Snow and Landscape Research
X3D           ISO-standard XML-based file format for representing 3D computer graphics
References
Aarnio, T. M3G API overview. ACM SIGGRAPH 2005 Course #35: Developing Mobile 3D Applications With OpenGL ES and M3G, August 2005.
Aarnio, T. JSR 297: Mobile 3D Graphics API 2.0 Public Review Draft. http://www.jcp.org/en/jsr/detail?id=297. 2008.
Abadi, D., Carney, D., Cetintemel, U., Cherniack, M., Convey, C., Lee, S., Stonebraker, M., Tatbul, N., Zdonik, S. Aurora: A New Model and Architecture for Data Stream Management. In VLDB Journal (12)2: 120-139, August 2003.
Aberer, K., Alonso, G., Barrenetxea, G., Beutel, J., Bovay, J., Dubois-Ferrière, H., Kossmann, D., Parlange, M., Thiele, L., and Vetterli, M. Infrastructures for a Smart Earth - The Swiss NCCR-MICS initiative. PIK - Praxis der Informationsverarbeitung und Kommunikation.
Aberer, K., Hauswirth, M. and Salehi, A. A middleware for fast and flexible sensor network deployment. Very Large Data Bases (VLDB), Seoul, Korea, 2006.
Arasu, A., Babcock, B., Babu, S., Cieslewicz, J., Datar, M., Ito, K., Motwani, R., Srivastava, U., Widom, J.: STREAM: The Stanford Data Stream Management System. In: Data-Stream Management: Processing High-Speed Data Streams. Springer, 2006.
Arikawa, M., Konomi, S. and Ohnishi, K. Navitime: Supporting Pedestrian Navigation in the Real World. IEEE Pervasive Computing 6, no. 3, pages 21-29. 2007.
ArcPad. ArcPad - Mobile GIS Software for Field Mapping Applications, available at: http://www.esri.com/software/arcgis/arcpad/index.html. 2007.
Badard, T.. Geospatial Service Oriented Architecture for Mobile Augmented Reality. First International
Workshop on Mobile Geospatial Augmented Reality. 2006
Barnes, M. and Finch, E.L. COLLADA - Digital Asset Schema Release 1.5.0. 2008.
Barrenetxea, G., Ingelrest, F., Lu, Y. M. and Vetterli, M. Assessing the Challenges of Environmental Signal Processing through the SensorScope Project. The 33rd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2008). Las Vegas, Nevada, USA, 30 March - 4 April 2008.
Barrenetxea, G., Ingelrest, F., Schaefer, G., Vetterli, M., Couach, O. and Parlange, M. SensorScope: Out-of-the-Box Environmental Monitoring. The 7th International Conference on Information Processing in Sensor Networks (IPSN 2008). St. Louis, Missouri, USA, 22-24 April 2008.
Bartelt, P. and M. Lehning. 2002. A physical SNOWPACK model for the Swiss avalanche warning. Part I:
numerical model. Cold Reg. Sci. Technol., 35(3), 123–145.
Blescheid, H., Etz,M. and Haist,J. Providing of dynamic three-dimensional city models in location-based
services. In In MOBILE MAPS 2005 - Interactivity and Usability of Map-based Mobile Services,
workshop at HCI. 2005.
Blythe,D. OpenGL ES Common/Common-Lite Profile Specification, Version 1.0.02.
http://www.khronos.org/opengles. 2005.
Bornik, A., R. Beichel, et al. A Hybrid User Interface for Manipulation of Volumetric Medical Data.
Proceedings of the 2006 Symposium on 3D user interfaces (3DUI 2006), IEEE Virtual Reality
Conference.2006.
Bogue, R. Environmental sensing: strategies, technologies and applications. Sensor Review, 28.4, pp. 275-282, 2008.
Bowman, D., E. Kruijff, et al. 3D User Interfaces: Theory and Practice. Addison-Wesley. 2005.
Brunelli, D., E. Farella, et al. Untethered interaction for immersive virtual environment through handheld
devices. Eurographics Italian Chapter. 2002.
Brutzman, D. and Daly, L. X3D Extensible 3D Graphics for Web Authors. Morgan Kaufmann Publishers,
Elsevier Inc. 2007.
Burigat, S. and Chittaro, L. Location-aware visualization of VRML models in GPS-based mobile guides. In: Web3D '05: Proceedings of the tenth international conference on 3D Web technology, pages 57-64. ACM Press, New York, NY, USA. ISBN 1-59593-012-4. 2005.
Chandrasekaran, S., Cooper, O., Deshpande, A., Franklin, M.J., Hellerstein, J.M., Hong, W., Krishnamurthy, S., Madden, S., Raman, V., Reiss, F., Shah, M.A.: TelegraphCQ: Continuous Dataflow Processing for an Uncertain World. In: CIDR. 2003.
Chen, Z., Gehrke, J. E. and Korn,F. Query Optimization In Compressed Database Systems. In Proceedings
of the 2001 ACM Sigmod International Conference on Management of Data, Santa Barbara,
California, May 2001.
Chong, C.Y. and Kumar, S.P., Sensor Networks: Evolution, Opportunities, and Challenges. Proceedings of
the IEEE, vol. 91, no. 8, pp. 1247–1256, August 2003.
Darken, R.P., and Sibert, J.L. Navigating large virtual spaces. International Journal of Human-Computer Interaction, Vol. 8, pp. 49-71. 1996.
Davies, H. and P. Phillips. Mountain Drag Along the Gotthard Section During ALPEX. Journal of Atmospheric Science, 42(20): p. 2093-2109. 1985.
Douglas, K. PostgreSQL (2nd Edition). Sams, 2nd edition. ISBN-10: 0672327562.
Döllner,J., Baumann, K. and Hinrichs, K. Texturing Techniques for Terrain Visualization. In:
VISUALIZATION '00: Proceedings of the 11th IEEE Visualization 2000 Conference (VIS 2000),
pages 227-234. IEEE Computer Society, Washington, DC, USA. ISBN 0-7803-6478-3. 2000.
Edwards, G. and M. Bourbeau Cognitive Design Factors for Mixed Reality Environments. First International
Workshop on Mobile Geospatial Augmented Reality. 2006.
Elfes, A., et al., Project AURORA: Development of an Autonomous Unmanned Remote Monitoring Robotic
Airship. Journal of the Brazilian Computer Society, 1998 4(4).
Feiner, S., B. MacIntyre, et al. A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for
exploring the Urban Environment. Proceedings of ISWC'97. 1997.
Fierz, C. and M. Lehning. Assessment of the microstructure-based snow-cover model SNOWPACK: thermal and mechanical properties. Cold Reg. Sci. Technol., 33(2–3), 123–132. 2001.
Franklin, M., Jeffery, S., Krishnamurthy, S., Reiss, F., Rizvi, S., Wu, E., Cooper, O.,Edakkunni, A., Hong,
W.: Design Considerations for High Fan-in Systems: The HiFi Approach. In: CIDR. 2005.
Gausemeier, J., J. Fründ, et al.. Development of a Real-time Image-based Object Recognition Method for
Mobile AR Devices. In Proc. of the 2nd International Conference on Computer Graphics, Virtual
Reality, Visualization and Interaction in Africa (AFRIGRAPH '03). 2003.
Gohm, A., G. Zangl, and G. Mayr. South Foehn in the Wipp Valley on 24 October 1999 (MAP IOP 10): Verification of high-resolution numerical simulations with observations. Monthly Weather Review, 132(1): p. 78-102. 2004.
Hanson, A.J., and Wernert, E.A. Constrained 3D navigation with 2D controllers. IEEE Visualization, pp.
175-182. 1997.
Helmiö T. Effects Of Cross-Sectional Geometry, Vegetation And Ice On Flow Resistance And Conveyance
Of Natural Rivers. Helsinki University of Technology Water Resources Publications. TKK-VTR-11,
2004.
Henrysson, A., M. Ollila, et al.. Mobile Phone Based AR Scene Assembly. Proceedings of the Fourth
International Conference on Mobile and Ubiquitous Multimedia (MUM 2005). 2005
Hybrid Graphics, OpenGL ES, http://www.hybrid.fi/opengles, 2006.
Hygounec, E., et al. The autonomous blimp project of LAAS-CNRS: Achievements in flight control and terrain mapping. International Journal of Robotics Research, 2004. 4(5): p. 473-511.
JCP, JSR 184: Mobile 3d graphics api for j2me, http://www.jcp.org/en/jsr/detail?id=184, 2005.
JCP, JSR 239: Java Binding for the OpenGL ES API, http://jcp.org/en/jsr/detail?id=239. 2006.
Jones, M. and G. Marsden. Mobile interaction design, John Wiley & Sons Ltd. 2006
Jones, M.T.. Google's Geospatial Organizing Principle. IEEE Computer Graphics and Applications 27, no.
4, pages 8-13. 2007.
Jung, I. and S. Lacroix. High resolution terrain mapping using low altitude aerial stereo imagery. In In
proceedings of IEEE ICCV'03. 2003.
King, G., W. Piekarski, et al. ARVino - outdoor augmented reality visualisation of viticulture GIS data.
International Symposium on Mixed and Augmented Reality 2005 (ISMAR'05). 2005.
Kolbe, T.H., Groger, G., and Plumer, L.. CityGML – Interoperable Access to 3D City Models. In: Oosterom,
Zlatanova, and Fendel (editors), First International Symposium on Geo-Information for Disaster
Management GI4DM. Springer Verlag. 2005.
Kosara,R., Miksch,S. and Hauser,H. Semantic Depth of Field. In: INFOVIS '01: Proceedings of the IEEE
Symposium on Information Visualization 2001 (INFOVIS'01), page 97. 2001.
Kosara,R., Miksch,S. and Hauser,H. Focus and context taken literally. IEEE Computer Graphics &
Applications, Special Issue on Information Visualization, 22(1):22-29, 2002.
Kray, C., C. Elting, et al.. Presenting route instructions on mobile devices. ACM IUI'03. 2003.
Kumar, V. ed.: Special Section on Sensor Network Technology and Sensor Data Management (Part I).
SIGMOD Record, 32(4) 2003.
Kähäri, M. and Murphy, D.J.. MARA - Sensor Based Augmented Reality System for Mobile Imaging. A
demo at ISMAR 06, the Fifth IEEE and ACM International Symposium on Mixed and Augmented
Reality. 2006
Laakso, K., Gjesdal, O., and Sulebak, J. Tourist information and navigation support by using 3D maps
displayed on mobile devices. In: Mobile HCI 2003 Workshop on HCI in Mobile Guides. 2003.
Langendoen. K., A. Baggio, and O. Visser. Murphy loves potatoes: Experiences from a pilot sensor network
deployment in precision agriculture. In Proceedings of the IEEE International Parallel and
Distributed Processing Symposium (IPDPS), Apr. 2006
Lehning, M., Völksch, I., Gustafsson, D., Stähli, M., Zappa, M. ALPINE3D: a detailed model of mountain surface processes and its application to snow hydrology. Hydrological Processes 20: 2111-2128. 2006.
Lehning, M., P. Bartelt, B. Brown, T. Russi, U. Stöckli and M. Zimmerli. SNOWPACK model calculations for
avalanche warning based upon a new network of weather and snow stations. Cold Reg. Sci.
Technol., 30(1–3), 145–157. 1999.
Lehning, M., P. Bartelt, B. Brown, C. Fierz and P. Satyawali. A physical SNOWPACK model for the Swiss
avalanche warning. Part II: snow microstructure. Cold Reg. Sci. Technol., 35(3), 147–167. 2002a.
Lehning, M., P. Bartelt, B. Brown and C. Fierz.. A physical SNOWPACK model for the Swiss avalanche
warning. Part III: meteorological forcing, thin layer formation and evaluation.Cold Reg. Sci.
Technol., 35(3), 169–184. 2002b
Madden, S. R., Franklin, M. J., Hellerstein, J. M., and Hong, W. TinyDB: an acquisitional query processing
system for sensor networks. ACM Trans. Database Syst. 30, 1 (Mar. 2005), 122-173. 2005.
Martinez. K., J. Hart, and R. Ong. Environmental sensor networks. IEEE Computer, 37(8):50–56, 2004.
Meingast, M., C. Geyer, and S. Sastry, Vision based terrain recovery for landing unmanned aerial vehicles.
In proceedings of Decision and Control, 2004. 2(1670- 1675).
Merino, L., et al. Cooperative fire detection using unmanned aerial vehicles. In In proceedings of ICRA'05.
2005.
Mulloni,A., Nadalutti,D. and Chittaro,L. Interactive walkthrough of large 3D models of buildings on mobile
devices. In: Web3D '07: Proceedings of the twelfth international conference on 3D web technology,
pages 17-25. ACM, New York, NY, USA. ISBN 978-1-59593-652-3. 2007.
Munshi, A. and Leech, J. OpenGL ES Common Profile Specification, Version 2.0.22 (Full Specification). http://www.khronos.org/opengles. 2008.
Möhring, M., C. Lessig, et al. Optical Tracking and Video See-through AR on Consumer Cell Phones. Proc.
of Workshop on Virtual and Augmented Reality of the GI-Fachgruppe AR/VR, 2004.
Newman, J., D. Ingram, et al. Augmented Reality in a Wide Area Sentient Environment. Proc. of the 4th
IEEE and ACM International Symposium on Augmented Reality (ISAR'01). 2001.
Nurminen,A. and Oulasvirta,A. Designing Interactions for Navigation in 3D Mobile Maps. In Meng, L. and
Zipf, A. (eds.) Map-based Mobile Services. Springer. ISBN 978-3-540-37109-0. 2008.
Oulasvirta, A., Nurminen, A. and Nivala, A-M. Interacting with 3D and 2D mobile maps: An exploratory
study. HIIT technical report 2007 (1).
Parallelgraphics. Pocket Cortona website, available at http://www.parallelgraphics.com/products/cortonace. 2007.
Persson, P. Espinoza, F, and Cacciatore, E. GeoNotes: social enhancement of physical space. In: CHI '01:
CHI '01 extended abstracts on Human factors in computing systems, pages 43-44. ACM, New
York, NY,USA. ISBN 1-58113-340-5. 2001.
Premoze,S., Thompson,W.B. and Shirley,P.. Geospecific rendering of alpine terrain. In: Eurographics
Rendering Workshop. European Association for Computer Graphics. 1999.
Priestnall, G. and G. Polmear Landscape Visualisation: From lab to field. First International Workshop on
Mobile Geospatial Augmented Reality. 2006.
Pulli, K., Aarnio, T., Roimela, K. and Vaarala, J. Designing Graphics Programming Interfaces for Mobile Devices. Computer Graphics and Applications 25, no. 6, pages 66-75. 2005.
Rakkolainen, I., J. Timmerheid, et al. "A 3D city info for mobile users." Journal of Computers and Graphics
25(4): 619-625. 2001.
Rana, S. and J. Sharma. Geographic Information Technologies - An Overview. 2006.
Rigon, R., Bertoldi, G., Over, T. GEOtop: A Distributed Hydrological Model with Coupled Water and Energy Budgets. Journal of Hydrometeorology, Vol. 7, No. 3, 371-388. 2006.
Schall, G., B. Reitinger, et al.. Handheld Geospatial Augmented Reality Using Urban 3D Models.
Proceedings of the Workshop on Mobile Spatial Interaction, ACM International Conference on
Human Factors in Computing Systems (CHI´07). 2007.
Schmalstieg, D. and G. Reitmayr The World as a User Interface: Augmented Reality for Ubiquitous
Computing. Proceedings of the Eurographics Central European Multimedia and Virtual Reality
Conference. 2005.
Selavo. L., A. Wood, Q. Cao, T. Sookoor, H. Liu, A. Srinivasan, Y. Wu, W. Kang, J. Stankovic, D. Young,
and J. Porter. LUSTER: Wireless sensor network for environmental research. In Proceedings of the
ACM International Conference on Embedded Networked Sensor Systems (SenSys), Nov. 2007
Shin, I. Development of an internet-based water-level monitoring and measuring system using CCD
camera. Proceedings of SPIE--the international society for optical engineering ICMIT, 2007.
Sillanpää, N. Pollution Loading From A Developing Urban Catchment In Southern Finland. In In
proceedings of the 11th International Conference on Diffuse Pollution and the 1st Joint Meeting of
the IWA Diffuse Pollution and Urban Drainage Specialist Groups. 2007.
Stapleton, J., DSDM. Dynamic Systems Development Method. Addison-Wesley. 1997.
Szewcszyk. R., A. Mainwaring, J. Polastre, J. Anderson, and D. Culler. Lessons from a sensor network
expedition. In Proceedings of the IEEE European Workshop on Wireless Sensor Networks and
Applications (EWSN), Jan. 2004.
Thomas, B., V. Demczuk, et al.. A Wearable Computer System for Augmented Reality to support Terrestrial
Navigation. Proceedings of ISWC'98. 1998.
Veas, E. and Kruijff, E. Vesp'R: design and evaluation of a handheld AR device. In Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR 2008). 2008.
VRML 1997 - The VRML Consortium Incorporated. The Virtual Reality Modeling Language, International
Standard ISO/IEC 14772-1:1997. 1997.
Watsen, K., R. Darken, and M. Capps A Handheld Computer as an Interaction Device to a Virtual
Environment. Proceedings of the Third Immersive Projection Technology Workshop, Stuttgart,
Germany. 1999.
Werner-Allen. G., J. Johnson, M. Ruiz, M. Welsh, and J. Lees. Monitoring volcanic eruptions with a
wireless sensor network. In Proceedings of the IEEE European Workshop on Wireless Sensor
Networks and Applications (EWSN), Jan. 2005.
Zdonik, S., Stonebraker, M., Cherniack, M., Cetintemel, U., Balazinska, M., Balakrishnan, H.: The Aurora and Medusa Projects. Bulletin of the Technical Committee on Data Engineering, IEEE Computer Society. 2003.