Platform for Control and Delivery of services in Next Generation Networks

 

 

 

 

 

Deliverable 1.1

 

Application Scenarios and

User-Provider Requirements

 

 

 

 

 

 


 


 

Abstract

 

The PICO project concentrates on application streaming and context awareness as the main techniques (described in D1.2) for building distributed applications in the domain of emergency situations.

This document defines system and domain requirements for applications supporting services in emergency situations (fire fighting, emergency medical aid, earthquakes, etc.). Furthermore, it describes a number of scenarios in those contexts. From these scenarios, application requirements are derived; these requirements will be used as input to develop prototypes of emergency applications.

 

 

 

CHANGE LOG

 

Version   Date         Description

1.0       15/04/2008   First draft of the Deliverable

2.0       30/10/2008   Final version

 

 

 

 

 

 

 

 

Table of contents

Abstract

Table of contents

Introduction

1. Telecommunications and Public Protection

1.1 Public Protection requirements

2. Project MESA

2.1 What is MESA?

2.2 Statement of Requirements (SoR)

3. Technical Specifications and Application Scenarios

3.1 From SoR to Technical Specifications

3.2 Devices requirements and limitations

3.3 Application Scenarios

3.3.1 Emergency Medical Services case: Heart Attack Scenario

3.3.2 Fire Fighters case: Residential Fire Scenario

3.3.3 Law Enforcement case: Traffic Stop Scenario

3.3.4 Multi-Discipline/Multi Jurisdiction case: Explosion Scenario

3.3.5 Application on Demand in Emergency situations

4. Context-Aware support to emergency services

4.1 Application Scenarios

4.1.1 Health emergency

4.1.2 Road emergency

4.2 Emergency situations analysis

Appendix A Acronyms Table

Figures Index

Bibliography

Introduction

 

Recent years have been marked by major natural disasters and intentional attacks that exposed the deficiencies of telecommunications. In many situations telecommunication networks were heavily damaged or totally destroyed.

The concept of Emergency Telecommunications (EMTEL) addresses a broad spectrum of aspects related to the provisioning of telecommunications services in emergency situations. Emergency situations may range from the narrow perspective of an individual in a state of personal emergency (needing to make an emergency call due to sudden illness, a traffic accident, an outbreak of fire in the home, etc.) to the very broad perspective of serious disruptions to the functioning of society (disaster situations caused by events or processes such as earthquakes, floods, large-scale terrorist attacks, etc.). The concept also covers the telecommunications needs of society's dedicated resources for ensuring public safety, including police forces, fire fighting units, ambulance services and other health and medical services, as well as civil defense services. The telecommunications needs of such services have until now been satisfied by dedicated networks and equipment, often different for each service, but modern technology makes it increasingly possible to integrate such services with public telecommunications services. Terrestrial and satellite radio/TV broadcasting and Internet services provide means for disseminating information to the general public, in particular in hazardous and disaster situations. Telecommunications may also be increasingly used as part of various community functions such as health services (e.g. remote patient monitoring to reduce the need for hospitalization).

In this context, the study and development of specific solutions for the management of these critical situations becomes of particular interest for the technical community. The effectiveness of these studies depends on the accuracy of the scenarios used, reconstructed by models capable of describing as accurately as possible the end-users' actions and the environmental constraints. The goal of this project is to analyze different emergency scenarios and derive from them specific information for developing applications to be used in crisis situations. The deployment of applications on demand or telecommunication services could be of real help to Public Protection (PP) operators: the greater the number of available operations and of retrievable pieces of information, the more lives can be saved and the better citizens' security can be guaranteed.

In the first section we give a brief introduction to the public protection personnel involved in emergency situations and their requirements. In the second we concentrate on the MESA project, an international partnership project between two of the world's major standards development organizations; this will finally lead us to describe some application scenarios, underlining the importance of delivering telecommunication services and applications on demand within this context.

1. Telecommunications and Public Protection

 

Public Protection and Disaster Relief (PPDR) users are personnel dedicated to responding to day-to-day emergencies and disasters, seeking to ensure the safety of citizens. They are typically PP personnel grouped into mission-oriented categories such as police, fire brigades and emergency medical response. For the rest of the document we will refer to PPDR personnel as belonging to these three main subgroups:

·     Law Enforcement (LE)

·     Fire Fighting (FF)

·     Emergency Medical Services (EMS)

 

These groups use PPDR telecommunication services during an emergency or disaster in order to accomplish their mission as quickly as possible.

 

 

1.1           Public Protection requirements

 

In order to provide effective communications, PPDR agencies and organizations have a set of objectives and requirements that include interoperability, reliability, functionality, operations security and fast call set-up in each area of operation, besides requiring telecommunication solutions characterized by high data rates, along with video and multimedia capabilities. It can be inferred that Public Protection and Disaster Relief radio communication systems should be able to fulfill the following general requirements:

·     provide radio communications that are vital to the achievement of:

o     law and order maintenance;

o    emergency situations response and protection of life and property;

o    disaster relief situations response;

·     provide services to the community over a wide range of geographic coverage areas, including urban, suburban, rural and remote environments;

·     aid the provision of advanced solutions requiring high data rates, video and multimedia;

·     support interoperability and interworking between networks, both nationally and for cross-border operation, in emergency and disaster relief situations;

·     accommodate a variety of mobile terminals, from those small enough to be carried by one person to those mounted on vehicles, and support specific applications that can be streamed in real time to these devices.

 

The usage of an appropriate telecommunication system could help the PPDR community execute their tasks in a better and safer way, both for themselves and for the others involved.

 

 

2. Project MESA 

 

2.1     What is MESA?

 

MESA is the acronym for “Mobility for Emergency and Safety Applications”.

Project MESA is an international partnership project between two of the world's major standards development organizations, the European Telecommunications Standards Institute (ETSI) and the Telecommunications Industry Association (TIA) of the USA.

It arises from the awareness that the threats, challenges and needs of public safety and emergency response professionals are growing in complexity, frequency of occurrence and scope. Currently, Public Protection officers can usually rely only on equipment that allows them to exchange voice. Moreover, communications are transmitted over narrow-band radios without the benefit of advanced capabilities. It is therefore easy to understand how responses to natural and man-made disasters are hampered by the inability to communicate simultaneous voice, data, imaging and live video. Furthermore, currently employed network communications facilities can break down when called upon to support hundreds of transmissions related to a flood, a train crash or other catastrophic events or criminal acts.

Hence, Public Safety personnel, but also citizens, need improvements in emergency medical services, fire prevention and suppression, public protection, disaster response, civil defense, and infrastructure maintenance and expansion. On a more sinister level, criminal and terrorist elements are coordinating their illicit activities with increasingly sophisticated communications, in many cases matching or exceeding the level of technology available to the local law enforcement community. In these and similar cases, orchestrating any kind of coordinated response invariably places an excessive demand upon communications facilities.

In light of such a critical situation, various efforts have been made towards new services and capabilities designed to effectively, efficiently and economically meet emerging Public Safety challenges and needs. In the year 2000, TIA and ETSI realized the importance of starting to progress specifications for advanced emergency services and applications that would address Public Safety needs not only in North America and Europe, but also in other parts of the world. This realization led to the Public Safety Partnership Project (PSPP), a joint project aimed at addressing the common standardization needs of Public Safety users. The result of this Public Safety-oriented activity will be specifications for broadband terrestrial mobility applications and services, driven by common scenarios and spectrum allocations.

One of the drivers of the project is the fact that criminal activities are aided by communications technologies more advanced than those currently available to law enforcement and other public safety agencies. Critical uses include multimedia applications such as two-way imaging, real-time mobile full-motion video and wireless telemedicine (remote patient monitoring). Such applications require data rates well in excess of what is currently specified for third-generation (3G) mobile standards.

Besides the U.S. and the European Union (EU), standards groups from other regions (e.g., Asia and Canada) and international organizations (e.g., UN/NATO) are also becoming engaged in Project MESA activities.

Understandably, the various Public Safety services may have very different communication needs, which may also differ between agencies and countries. Having a common standardized broadband communication system will help to ensure interoperability of Public Safety services and applications, within and between agencies and/or countries. Public Safety and law enforcement issues are now, more than ever, a worldwide problem. To facilitate effective communication and interoperability during emergency situations, it is also crucial that users and the various types of terminals understand each other, allowing information exchange via multiple and divergent facilities, platforms and devices.

 

 

 

Figure 1: Interoperability in MESA systems.

Project MESA exists to facilitate dependable, advanced, efficient, effective and interoperable equipment, specifications and applications that address public safety communication needs. Additionally, MESA will attempt to harmonize existing specifications and scenarios as part of its mandate.

MESA represents the first international initiative to involve users and organizations from the Public Protection, disaster response and civil defense sectors, working in the fields of law enforcement, fire fighting, homeland security, national/international crime and terror investigation, emergency and medical services and disaster response, and to make them collaborate with radio and networking experts from industry, academia and research establishments. In other words, MESA brings together industry and users to produce truly global standards for public safety applications.

The experience of professional users has made clear the need for next-generation public safety communications capabilities that provide broadband data access, interoperability, increased security, user transparency and communications over a myriad of technological platforms and applications. The principal goal of Project MESA is thus to determine how best to develop a system for communication between the authorities and organizations involved in emergency response or disaster relief activities, maximizing the use of existing communication technologies and infrastructures, as well as new technologies still under development, while at the same time providing the necessary remedies where current technologies cannot fulfill users' needs, or where existing infrastructures are not working properly or, in the extreme case, are totally down.

The proposed MESA systems and capabilities will therefore cover emergency communication scenarios where infrastructure exists as well as scenarios where it is nonexistent, exhausted or overloaded by calls from worried citizens.

In fact, it is well understood that currently no communication technology exists that is capable of supporting an effective response to disasters, either natural or man-made, which are generally characterized by severe disruption to public telecommunication infrastructures. Moreover, voice communication is currently transmitted over narrow-band radios without the benefit of advanced capabilities. MESA-capable technology would give first responders and command units not only digital voice communications but also streaming video and real-time data, including vital statistics, remote sensors, incident records and other information.

However, disasters can strike anywhere, so an important aspect of MESA-capable technology is its trans-jurisdictional mobility and rapid deployment and its ability to support an effective ad hoc emergency response characterized by disrupted or non-existent public infrastructure and electrical supply.


2.2                  Statement of Requirements (SoR)

 

Users of professional wireless telecommunications equipment within the PPDR sector have developed the MESA Statement of Requirements (SoR) document, whose task is to precisely define all the requirements and features that a telecommunication system for Public Protection activities should have.

Project MESA SoR reflects the vision of a mobile broadband network that can be simultaneously accessed by multiple users, using various applications and levels of security, in a specified geographical area, and that may operate potentially independently from availability of public networks and supply of commercial electrical power. Specifically, the SoR describes services and applications that a future advanced wireless telecommunication system should be able to support, in order to realize the most effective operational environment for users. Emphasis has been placed on those applications and technological services that current technology has not yet satisfactorily addressed, but which have been identified by users and their agencies as key requirements for applications and services.

The SoR also establishes a clear understanding that the advanced needs of the PPDR sector should be met by a high-mobility broadband wireless network that provides dynamic bandwidth, self-healing characteristics and secure network access.

The SoR is an invaluable source of information for understanding the often very difficult and dangerous working environments that the user community faces, so that industry can provide the most effective and accurate technical solutions.

The Project MESA SoR describes the overall requirements of most MESA user agencies in Europe and North America, including all criminal justice services, emergency management, EMS, fire, land/natural resource/wildlife management, military, and other similar governmental functions that need aeronautical and terrestrial, high-speed, broadband, digital, mobile wireless communications. All public safety disciplines participating in Project MESA have indicated their need for wireless communications support, which is crucial to allow them to provide a quality and efficient service to citizens. The technology requirements included in the SoR document will create a safer working environment for the world's professionals who perform all critical Public Safety and public service missions.

 

 

Figure 2: MESA actors overview.

 

 

The MESA SoR expresses the need of public protection officers to have at their disposal ad-hoc, rapidly deployed, mobile broadband networks, and specifically a system that would guarantee the transport and distribution of rate-intensive and error-free data, high resolution digital video, infrared video and digital voice in every working condition, from day-to-day operations to major disaster situations. It can be inferred that following the MESA SoR will lead to a communication system characterized by the following features:

·           It will be independent of public infrastructures and of the public supply of electrical power: consider a situation of total disruption of existing infrastructures, or a disaster occurring in a place where the infrastructure is insufficient to allow public protection officers to communicate efficiently through it. We can therefore say that the MESA system will be complementary to and interwork with infrastructure components, but it will not depend upon them.

·           It will be independent of public radio frequency spectrum: MESA participants have expressed the need for identification of frequency bands that could be used on a global/regional basis by administrations intending to implement future solutions for Public Protection agencies and organizations, including those dealing with emergency situations and disaster relief. The definition of a common frequency spectrum for all Public Protection communications will also guarantee a higher degree of interoperability. It is also envisioned that a reasonable tuning capability will be included in the key technology to accommodate regional requirements.

·           It will be characterized by ultra-fast deployment, in order to allow arriving officers to start their work as quickly as possible, since in certain situations this would allow them to save more lives.

·           It will be composed of self-establishing/self-healing/re-establishing wireless ad-hoc network elements that will support extremely efficient and reliable communications: even during major critical situations, it is mandatory that communications remain fully available.

·           It will guarantee large bandwidth requirements to facilitate broadband 2-way communications, data transfer, high resolution video and image exchange, etc.

·           It will provide full interoperability with pre-existing and other PPDR systems, but also with commercial and public systems. It is foreseen that it will also contain standardized interfaces to public and private networks, including, but not limited to, the Public Switched Telephone Network (PSTN), private networks, public and private microwave systems, DSL and DS3 American and Japanese Common Carrier services, and ISDN circuits.

·           It will need to support a range of security features: all specifications and standards written to comply with Project MESA SoR should allow for multiple levels and jurisdictionally specific types of security, in order to guarantee robust MESA user device and network security. To maximize the effectiveness of agents and officers in the field, the System should be capable of being encrypted for an extremely secure transmission of all voice and data traffic, including the use of password access codes where useful.

 

The MESA Technical Specification Group (TSG) is using all the user inputs contained in the SoR to map existing and needed capabilities, standards and gaps, progressing toward the development of the corresponding technical specifications.

3.    Technical Specifications and Application Scenarios

 

This section focuses on the description of the application scenarios. It will be shown that the SoR analysis can be split into four steps leading to the definition of the technical specifications.

In detail, after a brief description of the device requirements and limitations, we present a scenario analysis that represents the main aim of the document. This will lead us to the importance of application-on-demand deployment in emergency situations.

 

 

3.1                  From SoR to Technical Specifications

 

The first task of the MESA Technical Specification working group is to analyze in depth the requirements expressed by users in the SoR. It will then be possible to extract a list of technical features that the MESA System should fulfill in every situation or condition in which PP officers are involved. The members of the Working Group have divided this goal into four parts:

a)     Analysis of scenarios from the Statement of Requirements.

b)     Definition of Network requirements.

c)     Definition of devices requirements.

d)     Classification of considerable parameters to define System technical specifications.

It is not the aim of this document to discuss network requirements and system parameters. In order to concentrate on the application scenarios, the following two sections describe the device requirements and limitations, and the application scenarios themselves.

3.2                  Devices requirements and limitations

 

Devices must be multi-interface and resistant to the very harsh conditions in which users can be involved. Interoperability is the most crucial requirement to be fulfilled in order to have a flexible system able to be deployed in every situation; this results in the need for multi-interface devices that can communicate with existing communication systems.

The devices used should also be suitable for use in uncomfortable situations, therefore they must satisfy some ergonomic requirements that facilitate first responders' tasks. For example, in some particular conditions, hands-free operation should be made available to permit users to fulfill their task as comfortably as possible; furthermore, users must be able to operate the system when wearing personal environmental protection equipment, such as foul-weather gear, survival suits, battle dress, etc. To facilitate users' work, the weight and shape of devices must be appropriate to the application in which they are used, and usability aspects of the equipment, such as button size and screen size, must be adequate. Moreover, devices must not introduce undue operator fatigue during continuous usage.

Since the MESA System is also aimed at managing emergency and disaster situations, the devices used must be able to cope with particularly harsh environmental conditions. Examples of external conditions that might harm device performance and affect the QoS (Quality of Service) of the supplied services are:

·     Presence of water, dust and other factors affecting operation mechanisms.

·     Presence of smoke or reduced visibility that might affect the use of a screen display.

·     Environmental noise that might affect the intelligibility of audio messages.

In addition, devices must be resistant to physical stresses such as knocks, bumps, collisions and so on. A list of possible devices to be used includes:

·     Mobile devices such as all types of phones.

·     Portable devices such as notebooks.

 

Besides the use of such devices, extensive use of sensors must be considered during public safety activities. Sensors can be seen as a particular type of transceiver devoted to monitoring different parameters. Networks of sensors can be used to monitor wide areas such as forests, volcanoes and seismic areas, but sensors can also be worn by officers, constituting a Body Area Network (BAN). Users' requirements for wide-area networks of sensors mainly focus on the need for:

·       High autonomy

·       Low power consumption

·       Adequate transmission range.

Wearable sensors must take into account wearability issues, such as the presence of wires, size, weight and form factor (including batteries), as well as interaction with the users' activities in terms of restrictions on freedom of movement. Moreover, the dimensions and weight of all the devices involved must not be forgotten: these devices must be small and light and, as a consequence, they will have limited storage capacity. For these and other reasons, which will be presented in the second deliverable, application streaming is an essential service in emergency situations. As we will see in the next section, it will be possible to stream and use different applications for many different purposes using the same small device.
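
The scenarios in the next section assume that such wearable sensors report readings tagged with the wearer's identity over the BAN. The following Python fragment is a purely illustrative sketch, not taken from the MESA specifications: all names, fields and the JSON encoding are assumptions chosen only to make the idea concrete.

from dataclasses import dataclass, asdict
import json
import time

@dataclass
class SensorReading:
    """One reading from a wearable sensor, tagged with the wearer's ID."""
    wearer_id: str      # e.g. a firefighter or paramedic identifier
    sensor_type: str    # "pulse", "body_temp", "air_supply", ...
    value: float
    unit: str
    timestamp: float

def encode_for_ban(reading: SensorReading) -> bytes:
    # Serialize the reading for transmission over the body area network.
    return json.dumps(asdict(reading)).encode("utf-8")

# Example: a pulse-rate reading reported by the (hypothetical) wearer "F725".
packet = encode_for_ban(SensorReading("F725", "pulse", 92.0, "bpm", time.time()))
print(packet)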

 

3.3                  Application Scenarios

 

Scenario analysis is the correct way to obtain a complete overview of Public Safety workers' tasks and needs. This section includes descriptions of several scenarios of typical public safety operations in order to provide a view of future public safety communications. These scenarios describe credible, realistic incidents, activities, and responses that involve public safety agencies and personnel. While they do not cover all possible activities and situations, this collection provides a comprehensive vision of the future of public safety communications. The scenarios describe the voice and data communications used in routine, day-to-day operations and include a traffic stop, a structure fire, and a medical emergency.

 

Before directly treating the scenarios, we identify several elements that are common to all of them:

·     a Public Safety Communications Device (PSCD) carried by every first responder;

·     biometric authentication of personnel, which sets up personal profiles and levels of authorized data access;

·     real-time geo-location tracking of personnel, vehicles and equipment;

·     wireless personal area networks (PAN) and incident area networks (IAN) interconnecting devices and sensors;

·     RF ID tagging to track people, evidence and equipment;

·     computer aided dispatch (CAD) and Geographical Information System (GIS) support at dispatch centers and command posts;

·     hands-free operation and voice recognition for data capture, with no reliance on paper reports.

3.3.1   Emergency Medical Services case: Heart Attack Scenario

 

Initial Work Shift Tasks

 

At 3:00 p.m., two paramedics report for their shift with the Brookside ambulance service. After being assigned to ambulance A-34 and receiving the day's situation updates from the shift supervisor, they go to their ambulance and begin their system initialization tasks. Both paramedics turn on the Public Safety Communications Devices (PSCDs) that are integrated with the medical equipment and the ambulance's wireless incident area network (IAN), which allows them to stay in contact with the network when they operate outside the ambulance. At power up, all medical devices, including the video cameras, go through their self tests and report their status to the local command and control system on board the ambulance; the PSCDs go through their network registrations and the ambulance wireless network links to the hospital network to register and download the latest information from the county public health center, emergency procedures from sources such as the poison center, and instructional aids with the latest EMS training packages.

Both paramedics must go through a biometric identity check with their PSCDs. After authenticating each paramedic, the ambulance system sets up the profiles of the two paramedics on the medical equipment and the PSCDs, establishes the level of authorized data access for each paramedic across available databases, and initiates personal tracking of each paramedic so that a record can be made of all instructions given to each paramedic, and the treatment each paramedic provides.
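
The scenario does not prescribe how this profile set-up works; the following Python sketch only illustrates the idea of binding a biometric identity to a data-access level and a personal activity log. The directory contents, names and access levels are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ResponderProfile:
    responder_id: str
    name: str
    data_access_level: int                  # which databases may be queried
    activity_log: List[str] = field(default_factory=list)

# Hypothetical directory mapping a biometric template hash to a profile.
BIOMETRIC_DIRECTORY = {
    "hash-001": ResponderProfile("P-101", "Paramedic One", data_access_level=2),
    "hash-002": ResponderProfile("P-102", "Paramedic Two", data_access_level=3),
}

def authenticate(biometric_hash: str) -> ResponderProfile:
    # Return the profile of the authenticated responder and start personal tracking.
    profile = BIOMETRIC_DIRECTORY.get(biometric_hash)
    if profile is None:
        raise PermissionError("biometric sample not recognized")
    profile.activity_log.append("shift check-in")
    return profile

print(authenticate("hash-001"))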

Before formally alerting dispatch that A-34 is in an active status and available for calls, one paramedic goes through a training exercise (using a life-like mannequin) that simulates a situation in which parents have reported their child has stopped breathing. The other paramedic runs the “Required Inventory” program that identifies all the medical supplies needed on board, locates the inventory present via radio frequency identification (RF ID) tags, and restocks the supplies that the system identifies as insufficient.

At 3:25 p.m., A-34 reports to the dispatcher via its on-board data system that it is active and available for calls, and follows up with a radio voice call corroborating the same message. The dispatcher acknowledges that A-34 is active and that dispatch is properly receiving location data from the unit. He assigns the unit to patrol a prescribed grid in the Brookside area.

 

EMS Response to Heart Attack Call

 

At 4:19 p.m., the Brookside Public Safety Answering Point (PSAP) receives a 9-1-1 call from the relative of a man who has returned home from playing tennis and is reporting chest pains. From the PSAP’s computer aided dispatch (CAD) display, the dispatcher knows that the A-17 team is available and is close to the address but will require 7 minutes to reach the address because of heavy rush hour traffic near several factories. The CAD shows that A-34 is farther away from the address, but has little traffic in its path, and is therefore only 4 minutes away. The dispatcher notifies A-34 and simultaneously sends a digital message providing the patient’s name and address. A-34 leaves its location and the ambulance driver notifies the dispatcher who in turn relays the information to the relative stating that paramedics are on their way.
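
The dispatch decision described above is essentially a choice of the available unit with the shortest estimated travel time. A minimal Python sketch of that selection, with the travel times of this scenario hard-coded as assumptions, could look like this:

# Hypothetical CAD estimates: available unit -> estimated travel time (minutes),
# already corrected for current traffic conditions.
estimated_travel_time = {"A-17": 7, "A-34": 4}

def recommend_unit(etas: dict) -> str:
    # Recommend the available unit with the shortest estimated travel time.
    return min(etas, key=etas.get)

print(recommend_unit(estimated_travel_time))   # -> "A-34"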

 

The ambulance driver views the patient’s address on the cab monitor display, which also maps the route for the driver; a computer-activated voice directs the driver to the appropriate lanes and where to turn. As the ambulance approaches traffic lights along the route, the on-board signaling system adjusts the traffic light sequence to allow the ambulance to travel through quickly and the on-board system also interrogates the county’s transportation system for road closures, blockages, train conflicts, or slow traffic conditions to route the ambulance around impediments and provide the fastest route to the patient. At the same time, the geo-location system provides information on the ambulance location and progress on the dispatcher’s CAD display.

 

A-34 arrives at the patient's house at 4:23 p.m. and the paramedics enter the home to find the patient barely conscious on the living room couch. While one paramedic begins a preliminary medical assessment, the second paramedic acquires personal information about the patient through the other person present. The patient's information (name, address, gender, age, etc.) is entered by the attending paramedic, who uses the ambulance's on-board facilities to capture the data through a voice recognition system. One paramedic checks the most recent list of available hospitals and confirms that the Brookside Hospital's Emergency Room (ER) will be able to accept the patient. The paramedics discover the patient is wearing a medic alert RF ID bracelet; the paramedics scan the RF ID tag and find that the patient has a severe allergy to penicillin-based medicines.

 

The paramedics attach a wireless 12-lead electrocardiogram (EKG) unit to the patient and the unit begins transferring its digital information to the PSCD. The Brookside Hospital’s ER staff pulls the EKG information from A-34’s database. The staff route the information directly to the hospital’s emergency physician, who views information from all 12 leads of the EKG simultaneously, zeroing in and enlarging the waveforms for specific leads as required.

 

The cardiologist quickly determines the patient needs a cardiac catheterization and orders the paramedic team to bring the patient directly to the hospital’s catheterization lab. The same order activates the hospital’s teams to staff the lab and prepare for the patient’s arrival.

 

The patient is transported from the house to the ambulance with all medical monitors wirelessly attached to him, including the EKG unit, a respirator monitor, and a blood pressure monitor. The attending paramedic rides in the ambulance’s patient module and communicates to the driver via their hands-free PSCDs. As the ambulance approaches the hospital, the catheterization lab staff retrieves the patient’s information and vital statistics from the ambulance’s database.

 

When the ambulance arrives at the hospital from the house 19 minutes later, the catheterization lab is ready for the patient. After another 33 minutes, the cardiologist and the catheterization lab team successfully establish good blood flow to the affected coronary artery.

 

 

 

 

EMS Communications Summary

 

Throughout the scenario, the ambulance, the paramedic team, and the patient are tracked by the network providing geo-location information in real time. All patient information and vitals are recorded through wireless monitors and voice recognition systems with no reliance on paper reports and notes. All EMS hospital staff orders as well as paramedic treatments are recorded by the hospital and ambulance databases. All monitors and devices used with the patient are wireless to allow easy patient transport and mobility. All conversations between dispatcher and paramedics and between paramedics and hospital staff are conference call, simultaneous discussions.

 

 

3.3.2   Fire Fighters case: Residential Fire Scenario

 

Initial Work Shift Tasks

 

Three firefighters begin their shift at the Brookside Fire District Station BFD-7. After completing their administrative check-in, they complete their biometric identity check with their Public Safety Communications Devices (PSCD). After authenticating each firefighter, the system sets up their profiles on their PSCDs and the network, establishes the level of data access that each is authorized to have across available databases, and initiates personal tracking of each firefighter so that a record can be made of all instructions that are given to each, and the actions and responses provided by each firefighter. The firefighters initiate the equipment self-tests of the vests they will wear during a fire situation. The vests measure each firefighter’s pulse rate, breathing rate, body temperature, outside temperature, and three-axis gyro and accelerometer data. Each vest also provides geo-location information for each firefighter and measures the available air supply in the firefighter’s oxygen tank. The vests have a self-contained wireless personal area network (PAN) that interrogates each of the sensors and monitors. The vests code their information with the firefighter’s ID and then conduct their registration/authorization steps and report their status to the wireless network.

The firefighters begin their check-out of the fire equipment, the fire engine, E7, and fire ladder, L7, at the station. Each apparatus has sensors to measure water pressure, water flow, water supply, fuel supply, and geo-location. Each apparatus also has its own PAN for interrogating all apparatus monitors. The apparatus codes the apparatus ID with the measured values and geo-location information for routing to the network. After successfully completing all the self tests, the firefighters provide a digital status to the network that they have completed all initial set-ups and they are ready. The fire station network reports to the dispatcher, via the station’s and on-board data systems, which personnel and equipment are active and available for calls. The station battalion chief follows up with a PSCD voice call with the same message. The dispatcher acknowledges that BFD-7 is active and that dispatch’s Geographical Information System (GIS)/CAD systems are properly receiving location and status data from the units.

 

 

Fire Response to a Residential Fire Call

 

At 3:17 a.m., the Brookside PSAP receives a 9-1-1 call from a cab driver that the apartment building at 725 Pine is smoking and appears to be on fire. From the CAD display, the dispatcher finds that the BFD-7 station is available and close to the address. The dispatcher notifies BFD-7 to send E7 and L7, and to send BFD-7 battalion chief as the fire’s incident commander (IC). As E7 is leaving the fire station, firefighter F788 jumps onto the back of the vehicle. The vehicle registers that F788 has become part of the E7 crew for accountability and tracking. The dispatcher simultaneously sends a digital message providing the apartment building’s address. The dispatcher notifies another Brookside Fire Department, BFD-12, to also send an engine to the fire. By 3:19 a.m., E7, L7, and the incident commander leave BFD-7 and report their status to the dispatcher. As the incident commander’s command vehicle leaves the station, a nearby wireless PSCD sends the apartment’s building plans and the locations of nearby fire hydrants, the building’s water connections, the elevator, and the stairwells to the command vehicle’s GIS. The dispatcher sends a reverse 9-1-1 call message to all residents of the building, which has eight apartments on each of three floors. The nearest ambulance is alerted by the dispatcher to proceed to the scene. The local utility is alerted to stand-by for communications with the IC at 725 Pine.

 

The E7, L7, and IC drivers view the apartment’s address on the cab monitor displays, which also maps the route for the drivers; a computer-activated voice tells the drivers what lane to be in and which turns to make. As the fire vehicles approach traffic lights along the route, the on-board signaling system changes the lights to the emergency vehicles’ favor and the geo-location system provides the vehicles’ location and progress on the dispatcher’s CAD display. The on-board system also interrogates the county’s transportation system for road closures, blockages, train conflicts, or slow traffic conditions to route the vehicles around impediments and provide the fastest route to the fire.

 

The IC arrives on scene at 3:22 a.m., assesses the situation, noting that smoke and fire are visible, and alerts dispatch that 725 Pine is a working fire. The IC directs the local utility to shut off the gas to 725 Pine. As L7 and E7 arrive and get into position, all fire personnel and equipment are shown on the IC’s GIS display. The system automatically sets up the tactical communications channels for the IC and the fire crews. The fire crews are able to talk continuously with each other, reporting conditions and warning of hazards. Because the apartment building is not large enough to require a built-in wireless incident area network (IAN) for emergency services, the first fire crew into the apartment drops self-organizing wireless IAN pods on each of the floors as they progress through the building. Soon E12 and the assigned EMS unit arrive on site. The new personnel and equipment are automatically registered with the IC command post network and their PSCDs are automatically reprogrammed to operate on the incident’s PSCD radio channels and protocols.

 

Several families have already evacuated the building. As firefighters ask for their names and apartment numbers, they use the voice recognition capabilities of their PSCDs to capture the information, applying an RF ID wrist strap to each resident to track their status and location. Other firefighters enter the building to guide survivors out and to rescue those who are trapped. The IR cameras on the firefighter’s helmets provide the IC a view of fire conditions within the building and where the hot spots are located. Additionally, the firefighters monitor the temperature of the surrounding air in their location; this information is directly available to the firefighter, as well as the IC and EMS unit on-scene. Other passive sensors, such as hazardous gas detectors, are also operating in the firefighter’s PAN. With the IC’s guidance, the firefighters search each apartment for survivors and the source of the fire. The IC is able to monitor the location of each firefighter and is aware of which apartments have been searched by the information provided on the GIS displays.

 

The EMS unit outside the apartment monitors the vital signs of all the firefighters in and around the fire scene. The unit alerts the IC that firefighter F725 is showing signs of distress and the IC orders F725 and his partner F734 out of the building for a check-up with the EMS team.
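
How the EMS unit detects that F725 is in distress is not specified in the scenario; as a hedged illustration only, a simple threshold check over the vital signs streamed by the vests could be sketched in Python as follows (thresholds and field names are invented):

# Hypothetical alert ranges for the vital signs streamed by the vests.
THRESHOLDS = {"pulse": (40, 150), "body_temp": (35.0, 39.5)}

def check_vitals(firefighter_id: str, vitals: dict) -> list:
    # Return an alert message for every vital sign outside its allowed range.
    alerts = []
    for name, value in vitals.items():
        low, high = THRESHOLDS.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alerts.append(f"{firefighter_id}: {name}={value} outside [{low}, {high}]")
    return alerts

print(check_vitals("F725", {"pulse": 165, "body_temp": 38.2}))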

 

Firefighter F765 pushes his emergency button when he becomes disoriented in the smoke. The IC immediately directs firefighter F788 to his aid by providing F765’s location relative to F788.

 

While the firefighters check every apartment for victims, the main fire is discovered in a second floor apartment kitchen where an electric range is burning. Two adults and two children are discovered in the apartment suffering from smoke inhalation. RF IDs are attached to their arms and each is given an oxygen tank and mask to help their breathing. They are carried outside the building where the EMS unit is ready to take over medical aid.

While the firefighters put out the fire in apartment 202, the IC checks the GIS display, which shows where the fire personnel are and where all the survivors and rescued individuals live in the apartment building. Two top-floor apartments have not been searched and the IC moves fire personnel to those apartments. The apartment database indicates an invalid may be living in apartment 321. The firefighters break down the doors of both apartments and in 321 find a bedridden individual, who is in good condition, and a pet dog in the other apartment. Both are outfitted with RF ID devices and taken from the building.

 

The fire is brought under control. The IC releases E12 and the IC network controller reconfigures E12's PSCDs for the return to the fire station. E7 and L7 wrap up their fire operations and A-34 transports one fire victim to the hospital. The IC releases all remaining equipment and gives control back to dispatch.

 

Fire Communications Summary

 

Throughout the scenario, the fire personnel and equipment, EMS support personnel, and the fire victims are tracked by the network providing geo-location information in real time, providing the Incident Commander with current accountability of public safety personnel and of the fire’s victims. All victim information and vitals are recorded through wireless monitors and voice recognition systems with no reliance on paper reports and notes. All fire personnel and equipment have monitors to measure vital conditions and status that are reported by the wireless PAN and IAN systems to the IC’s GIS. The GIS also has access to city building department databases, which are searched and queried for building information and plans, fire hydrant locations, etc.

 

 

3.3.3   Law Enforcement case: Traffic Stop Scenario

 

Initial Work Shift Tasks

 

A police officer enters his 10-hour shift at the Brookside jurisdiction. After completing his administrative check-in, the officer takes his duty equipment to the squad car assigned to him for the shift. In the vehicle, the officer initiates his biometric identity check with his Public Safety Communications Device (PSCD). After authenticating the officer, the system sets up a profile of the officer on the PSCD and the network, establishes the level of data access the officer is authorized to have across available databases, and initiates tracking of the officer’s activities. The officer initiates the equipment self tests of the devices he will be using within the vehicle. The data terminals, status monitors, video cameras, displays, three-dimensional location sensor, etc., are integrated into a wireless personal area network (PAN). All of the devices code their information with the officer’s ID and then conduct their registration/authorization steps and report their status to the wireless network. Each device will be associated with the officer and will provide the officer with capabilities based upon the officer’s profile. When the officer starts the vehicle, a wireless hub recognizes the officer’s PAN and uploads the pertinent database files, the latest law enforcement alerts, and the current road and weather conditions to the PAN.

After successfully completing all the self tests and receiving all the updates from the wireless hub, the officer provides a digital status to the network indicating that he has completed all initial set-ups and is ready. The Police Center network reports to the dispatcher, via the center’s and on-vehicle data systems, which personnel and equipment are active and available for calls. The officer follows up with a PSCD voice call with the same message. The dispatcher acknowledges that the officer is active and that dispatch’s GIS and CAD systems are properly receiving location and status data from the officer’s vehicle and monitor units.

 

Law Enforcement Response to a High-Risk Traffic Stop

 

While on routine traffic patrol, the officer observes a car that runs through a red light at an intersection. The officer presses the “Vehicle Stop” button on his vehicle’s PSCD. The PSCD issues a message to dispatch, noting the operation underway, the officer’s ID, and the location information of the officer’s car. As the officer drives his squad car, the license number of the vehicle is captured by license plate recognition software and queried back to the Motor Vehicle Department. The video camera on the officer’s vehicle dashboard begins recording video of the vehicle stop onto a RAM buffer video storage device on the vehicle’s network; the video can be accessed at any time, on-demand, by the dispatcher and other authorized viewers. Other units in the area are alerted to the vehicle stop.

 

Shortly, the State Motor Vehicle Registration, Stolen Vehicle, and Wants/Warrants systems return their information to the officer’s PSCD. The officer also receives a picture of and information about the registered owner; the information indicates (both on the PSCD screen and with an audio signal) that there are no Wants/Warrants.
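
As a purely illustrative sketch of this combined query flow, the lookups could be composed as in the following Python fragment; the database contents and field names are invented stand-ins for the State systems.

# Hypothetical back-end contents standing in for the State systems.
REGISTRATION_DB = {"BRK-4821": {"owner": "J. Doe", "vehicle": "sedan"}}
STOLEN_DB = set()          # plates reported stolen
WARRANTS_DB = {}           # owner name -> list of outstanding warrants

def plate_check(plate: str) -> dict:
    # Combine registration, stolen-vehicle and wants/warrants lookups.
    registration = REGISTRATION_DB.get(plate)
    owner = registration["owner"] if registration else None
    return {
        "plate": plate,
        "registration": registration,
        "stolen": plate in STOLEN_DB,
        "warrants": WARRANTS_DB.get(owner, []),
    }

print(plate_check("BRK-4821"))   # no wants/warrants for this owner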

 

The vehicle pulls over and stops. The video feed will be available to dispatchers and supervisors on demand, or automatically displayed in the case of an emergency. When the officer leaves his squad car, he has access to all of his communications and data devices as the devices continue to communicate between his PAN and the vehicle’s network. The officer approaches the car and notes that there is a single occupant, the driver. The officer requests the driver’s license and registration, but the driver does not provide documentation.

 

While obtaining the information from the driver, the officer observes what he believes to be the remains of marijuana cigarettes in the ashtray. The officer decides to search the suspect’s vehicle and contacts dispatch to request a backup unit. The dispatcher enters the “Dispatch backup” command for the incident on her dispatch terminal and the CAD system recommends the dispatch of the closest unit based upon automatic vehicle location (AVL) information provided by the vehicles on patrol and known road and traffic conditions. The dispatcher glances at the console map to confirm the recommendation and presses the key to confirm the CAD recommendation. The dispatch of the backup unit is transmitted electronically to terminals in that vehicle, as well as to other nearby units and the area supervisor’s car for informational purposes. A user group is created on the network between the original and backup officer to share information. The backup officer acknowledges dispatch and asks the on-scene officer to confirm location and circumstances.

 

The supervisor brings up the real-time video of the event in her vehicle and briefly observes the situation. All appears under control and she releases the video link. The backup unit arrives on-scene. The responding officer orders the suspect to get out of his car. The backup officer watches the driver while the original officer searches the car. The original officer finds a number of bags of a white substance that appears to be cocaine. The original officer then places the driver under arrest and restrains him with handcuffs equipped with an RF ID tag. The RF ID tag is later loaded with the officer’s identity code, the nature of the crime, and a case number. The original officer radios dispatch to request a transport vehicle. The unit is dispatched and is linked to the original officer to communicate and obtain information as needed.

 

After the arrest, the officer takes the driver’s biometric sample with his PSCD. The PSCD submits the scan data to the biometric ID database for identification. Soon after, the PSCD returns an image, name, date of birth, and physical characteristics of the individual from the biometric sample that matches the name and DOB of one of the aliases returned by the license plate check, and matches the driver’s license picture. This indicates that the driver is the registered owner of the vehicle. The officer queries the criminal history database for information about the driver and receives a response that the individual has previously been arrested for drug possession.

 

When the transport unit arrives on scene, its PAN and vehicle network is automatically linked with the original officer’s network. Based on the incident and location, the system establishes a single user group so that the officers can exchange appropriate case information. The transport unit takes control of the arrested driver and transports him to the jail. The backup unit departs the scene and resumes patrol.

 

The original officer takes photo images of the suspect’s car and the suspected drugs and collects the evidence. He conducts field tests of the substances and confirms that the suspected drugs are cocaine. He places RF ID tags on all evidence bags.
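
The scenario does not define what is written to an evidence tag; as an assumption-laden sketch, a record keyed to the officer's identity code and the case number, appended to an audit trail on every tagging operation, might look like this (all identifiers are hypothetical):

from dataclasses import dataclass
import time

@dataclass(frozen=True)
class EvidenceTag:
    # Hypothetical record written to an evidence bag's RF ID tag.
    tag_id: str
    case_number: str
    officer_id: str
    description: str
    logged_at: float

audit_trail = []   # every tagging operation is kept for the audit trail

def tag_evidence(tag_id: str, case_number: str, officer_id: str, description: str) -> EvidenceTag:
    record = EvidenceTag(tag_id, case_number, officer_id, description, time.time())
    audit_trail.append(record)
    return record

tag_evidence("RF-9001", "CASE-0412", "OFF-217",
             "white substance, field-tested positive for cocaine")
print(audit_trail)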

 

The original officer radios dispatch to request a tow truck to impound the vehicle. Dispatch notifies the tow company and the original officer communicates directly with the tow truck operator to confirm location and status. While waiting for the tow truck, the original officer completes preliminary suspect and vehicle information on the crime to automatically populate the electronic Tow Report and Inventory Form and the Jail Booking Form. This information is transmitted electronically to the Sheriff’s Central Records System.

 

The transport officers arrive at the jail located in Central City. The officers bring the suspect in for booking. The booking officer queries the suspect’s RF ID tag on the handcuffs to begin the booking record, which is automatically populated from the information previously sent to the Central Records System. Information on the handcuff RF ID tag is cloned to a wristband that is then affixed to the suspect after the handcuffs are removed.

 

As the tow truck arrives, the truck’s network is recognized on the incident area network (IAN) at the scene. The tow truck and driver are registered and authorized to exchange information on the network. The tow truck company information automatically populates the tow report. The tow truck driver reviews the tow report with the associated officer code and case number and adds his electronic signature. The officer then continues to work on the arrest report, adding a narrative section describing the events, along with descriptions of the confiscated property and associated arrest information. The officer also updates the State Motor Vehicle database to show the vehicle status as “towed/stored.”

 

The officer completes the arrest report in electronic form. The report is transmitted to the officer’s supervisor. The supervisor notes one deficiency in the report and she issues it back to the officer. The officer corrects the report and re-transmits it to the supervisor. She electronically signs off on the report and forwards it to the Central Records System and to the District Attorney’s office.

 

The officer clears the incident on his PSCD, which automatically shuts off the video camera, and resumes patrol.

 

Law Enforcement Communications Summary

 

Throughout the scenario, the law enforcement personnel and equipment as well as the arrested suspect are tracked by the network providing geo-location information in real time to provide the field supervisor as well as dispatch with current accountability of all personnel. All suspect information and evidence are recorded through wireless monitors and voice recognition systems with no reliance on paper reports and notes. All information is tagged with the original officer’s identity code. All evidence is tracked with RF IDs to provide an audit trail. All law enforcement personnel and equipment have monitors to measure vital conditions and status that are reported by the wireless PAN and IAN systems to the IC’s GIS. National and state criminal justice records and state civilian records are searched and queried for information relating to the traffic stop, etc.

 

 

 

 

 

3.3.4   Multi-Discipline/Multi Jurisdiction case: Explosion Scenario

 

This scenario focuses on the command and control, asset status and tracking, and major communications interoperability aspects of an incident involving first responders. The scenario occurs from the perspective of the Incident Commander and Emergency Commander, and does not include first person, first responder perspectives. The communications capabilities described in the three first responder scenarios are implied (but not described) in this scenario. The italicized text indicates actions or responses of the Emergency Manager.

 

Explosion
 
A large explosion occurs at a chemical plant in Barberville, a suburb of Brookside. There is the potential for hazardous chemical leaks as well as toxic smoke from the chemicals burning.

 

Incident Command (IC) arrives on-scene and assesses the situation. After briefly surveying the area, the IC team initiates their mobile command center and begins to receive information from the temporary network created by the on-site first responder vehicles and personnel.

 

The Emergency Manager (EM) is alerted that a major incident has occurred and brings up the command terminal in the Emergency Operations Center (EOC) to monitor the regional situation. All of the region’s assets are available for query by the EM.

 

The mobile command center’s display registers all of the assets that are currently on-scene, including EMS, Law Enforcement (LE), and Fire. The status of each asset is also available, but is displayed on demand.

 

IC shifts the display to a GIS overlay of the explosion, with the location of all assets shown. Areas are marked to display casualties, fires, evidence, the incident perimeter, etc. The information for the GIS displays comes from a site survey already underway by LE, Fire, and EMS personnel.

 

Information is available on the EM’s system as the information is gathered by IC. This information is shown both in a GIS-map format as well as a textual set of data. On demand, the EM can call up the information on the incident as if the EM were on site in the capacity of IC.

 

As new units arrive on-scene, they are authenticated into the incident and added to the list of assets available to IC.

 

The on-scene Fire Branch monitors the status, location, and current duties of the Fire assets on their command screen, and reassigns them as necessary. Any data that is pertinent to the other Branches and IC is automatically forwarded onto their command systems. This same situation is repeated for both the LE Branch as well as the EMS Branch.

 

After completing all of the pre-defined tasks for this particular type of incident, IC begins coordinating with the LE, EMS, and Fire command posts. As IC begins directing the assets in the field, the Fire Branch informs IC that the incident is too large to be handled by the assets on hand. IC then puts in a request to the EM for the acquisition of more fire units.

 

As the request for more fire assets comes into the EM, the EM initiates the Mutual Aid agreements in place, and units are dispatched from the Brookside Metro area to Barberville.

 

The EMS Branch sets up a triage/treatment area and begins to direct the resources available to identify and handle casualties. The location of the triage/treatment area is disseminated to all first responders on-scene, and the area medical facilities are alerted as to the status of the triage/treatment area.

 

The Fire Branch is notified of an emergency on their command screen as one of the firefighters in the field has a passive sensor triggered by the detection of a hazardous chemical. The sensor determines that the hazardous chemical would not be ignited by a radio transmission, allowing the network to notify all first responders within 100 feet of the particular firefighter along with LE, EMS, and IC. The Fire Branch designates this area as a Hot Zone that alerts any personnel entering the designated area as to its status.
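
The 100-foot notification described above is essentially a proximity query over the geo-location data the network already carries. A minimal Python sketch, assuming local planar coordinates in metres and invented identifiers:

import math

# Hypothetical on-scene positions (metres, local incident coordinates).
positions = {"F714": (0.0, 0.0), "F702": (20.0, 5.0), "F690": (80.0, 60.0)}

ALERT_RADIUS_FT = 100.0
FT_PER_M = 3.281

def responders_to_notify(trigger_id: str, positions: dict) -> list:
    # All responders within the alert radius of the triggering firefighter.
    tx, ty = positions[trigger_id]
    nearby = []
    for rid, (x, y) in positions.items():
        if rid == trigger_id:
            continue
        if math.hypot(x - tx, y - ty) * FT_PER_M <= ALERT_RADIUS_FT:
            nearby.append(rid)
    return nearby

print(responders_to_notify("F714", positions))   # -> ['F702']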

 

Because of the potential for the release of hazardous chemicals, the EM directs all available Hazardous Materials (HazMat) teams to the location, and puts these assets under the control of IC. IC sets up a secondary perimeter five blocks back from the incident.

 

The EM notes the perimeter change and initiates a Reverse 9-1-1 warning call that is sent to all fixed and cellular telephones inside the secondary perimeter. This call instructs the people inside the perimeter to find shelter in the area quickly and to close off all outside ventilation.

 

The LE Branch is directed by IC to coordinate with the Department of Transportation (DOT) to configure traffic management assets, such as traffic lights and electronic signs, to divert traffic away from the incident.

 

The LE Branch has enough assets to establish a perimeter, but needs more assets to maintain the security of the incident. IC puts in a request for LE assets to the EM.

 

The EM begins to coordinate with the public utilities and other pertinent private organizations for the appropriate responses, such as shutting down gas lines to the area, and dispatching electrical crews to handle situations, such as downed power lines. The EM also directs additional LE assets into the area upon receiving the request from IC.

 

Upon further investigation by LE and Fire assets, IC determines that this explosion was not an accident, directs LE to treat the area as a crime scene, and assigns Detectives to begin an investigation of the crime scene in coordination with Fire Investigators. This information is also available to the EM.

 

After determining that the probable cause of the situation is a bomb, IC directs the LE Branch to begin directing traffic away from the scene and to initiate a secondary explosive device search by the Explosive Ordnance Disposal (EOD) team.

 

The EMS Branch continues to coordinate the efforts of EMS assets. As casualty information comes onto the command screen via the RFID tags used by personnel in the field, the most critical cases are selected for transport to the nearest available hospitals. The EMS Branch believes that the on-scene casualties will overburden the medical facilities selected to handle them. The transportation officer is directed to query the local medical facilities as to their status, their capacity for casualties, and the types of casualties they can accept. Casualty statistics are available on demand to IC and the EM. Additionally, the local medical centers coordinate among themselves regarding resource availability.
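As an illustration of the kind of processing behind this step, the short Python sketch below assigns the most critical casualties first to the nearest facility that still reports free capacity for their casualty type. The casualty records, the hospital names, the capacities and the priority scale are invented for the example and are not taken from the SoR.

def assign_transports(casualties, hospitals):
    """Assign the most critical casualties first to the nearest hospital
    that still has free capacity for their casualty type."""
    assignments = []
    # Lower priority number = more critical (1 = immediate).
    for c in sorted(casualties, key=lambda c: c["priority"]):
        candidates = [h for h in hospitals if h["capacity"].get(c["type"], 0) > 0]
        if not candidates:
            assignments.append((c["tag"], None))      # escalate to the EM
            continue
        best = min(candidates, key=lambda h: h["distance_km"])
        best["capacity"][c["type"]] -= 1
        assignments.append((c["tag"], best["name"]))
    return assignments

casualties = [
    {"tag": "RFID-017", "priority": 1, "type": "burn"},
    {"tag": "RFID-021", "priority": 2, "type": "trauma"},
    {"tag": "RFID-008", "priority": 1, "type": "trauma"},
]
hospitals = [
    {"name": "Brookside General", "distance_km": 4.0, "capacity": {"trauma": 1, "burn": 0}},
    {"name": "Barberville Medical", "distance_km": 9.5, "capacity": {"trauma": 2, "burn": 1}},
]
for tag, hospital in assign_transports(casualties, hospitals):
    print(tag, "->", hospital or "no capacity: alert the Emergency Manager")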

 

The EM begins to monitor the status of the casualties, as well as the status of the responding medical facilities. Seeing that the casualties from the incident will overburden the nearby facilities, the EM puts a neighboring medical facility on alert for incoming casualties. The EM also directs additional EMS crews to respond to the incident.

 

As EMS assets arrive on-scene, the assets are registered and their capabilities are authorized for placement into the EMS asset pool for assignments given by the EMS Branch.

 

The Unit Commander of the EOD team notifies the LE Branch that no secondary devices have been found. The LE Branch pushes this information to IC. IC then automatically forwards this information to the EM.

 

The Fire Branch alerts IC that all of the fires have been identified and are marginally contained. Additionally, the hazardous chemical spill has been contained and eliminated by the HazMat teams dispatched by the EM. All but one HazMat team is released back into the asset pool.

 

The Fire Branch alerts IC that all of the fires have been eliminated, and that all but one Fire Crew has been released back into the asset pool.

 

The EMS Branch alerts IC that all of the casualties have been evacuated to appropriate medical facilities. The coroner has been contacted to begin removal of the corpses.

 

 

Multi-Discipline/Multi-Jurisdiction Communications Summary

 

The abstracted view of Incident Command is very different from that of a first responder reacting to a situation in the field. As such, their communications needs and capabilities are tailored to meet those differences. While the communications and actions depicted in the scenario are oversimplified versions of what would actually occur in real life, what has been captured is the general nature of the communications, the command and control functionality, and examples of access to a wide variety of information on an on-demand basis. The command and control exercised by Incident Command on-scene and by the Emergency Manager provide for the safety and accountability of all the assets at the incident and provide information on additional resources that could be brought to the incident. The networks for communications and information exchange are created on an ad hoc and/or temporary basis at the scenes. They overlay one another to provide interoperability and integrate with the larger jurisdiction area networks to form a system of systems for command and control.

3.3.5   Application on Demand in Emergency situations

 

In these scenarios we analyzed in detail the requirements and needs of the different teams involved in emergency situations. As we saw, there is a large number of services, applications and pieces of information that different users may need to exchange with others in order to complete their tasks faster and more successfully. In many cases they may need to obtain these telecommunication services or applications from a central location or server.

The resulting increase in operational speed in emergency situations helps to save more lives.

Moreover, from a technical point of view we saw that many different devices are used, as well as many different services. It would be useful to rely on a small number of devices, downloading a specific service or application whenever it is required and deleting it when it is no longer needed.

It is not the aim of this document to analyze this aspect from a technical point of view; our purpose is rather to show that on-demand application delivery can be very useful in emergency situations, helping to save more lives while containing infrastructure costs.
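To make the idea concrete, the following Python sketch shows one possible, deliberately simplified shape of such on-demand delivery on a device with limited storage: an application is streamed from a server the first time it is needed and the least recently used one is deleted when space runs out. The cache size, the repository interface and the application names are assumptions of the example, not a specification of the PICO platform.

from collections import OrderedDict

class AppCache:
    """Keeps at most max_apps streamed applications on the device,
    deleting the least recently used one when space is needed."""
    def __init__(self, repository, max_apps=3):
        self.repository = repository      # callable: name -> application payload
        self.max_apps = max_apps
        self.apps = OrderedDict()

    def get(self, name):
        if name in self.apps:
            self.apps.move_to_end(name)   # mark as recently used
            return self.apps[name]
        if len(self.apps) >= self.max_apps:
            evicted, _ = self.apps.popitem(last=False)
            print(f"deleting unused application: {evicted}")
        print(f"streaming application from the server: {name}")
        self.apps[name] = self.repository(name)
        return self.apps[name]

# Hypothetical usage on an ambulance terminal.
cache = AppCache(repository=lambda name: f"<binary of {name}>", max_apps=2)
cache.get("video-conference")
cache.get("hospital-db-access")
cache.get("traffic-light-control")   # evicts "video-conference"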

The following table summarizes several situations and scenarios where application streaming provides benefits in emergency scenarios. The first column identifies the involved team, the second column recalls the emergency situation, and the third column describes the streamed application in detail.

 

 

Who: EMS
When: A consultation among several teams (e.g. cardiologist and poison team) is required; for example, a patient has a heart attack after having inhaled toxic paint fumes.
Application: It is useful to download onto the ambulance's system an application that enables a video conference among paramedics, cardiologist and poison team. Each participant can then watch, on his or her own screen, the three or four windows dedicated to the other teams.

Who: EMS
When: The ambulance is travelling to the patient's address.
Application: The ambulance's system needs an application that allows it to control (enable or disable) the traffic lights along its route.

Who: EMS
When: The paramedics need information about the patient's health state from the hospital.
Application: It is useful to stream an application that provides access to the databases of the nearby hospitals. In other cases it may be necessary to stream a different application that accesses a different database (e.g. to update or query a police database).

Who: EMS
When: The paramedics have to update the database of their own hospital, so that the hospital knows the status of the equipment and of the medical supplies on the ambulance.
Application: Streaming is useful for the same reason as in the previous case: it optimizes the use of the system memory.

Who: EMS
When: It is important to know the specific patient's conditions or the environmental conditions.
Application: The portable device of every paramedic could monitor vital signs and also analyze blood and the chemical composition of the air, including the substance that intoxicated the patient. Streaming is helpful because the paramedics can stream the plug-in needed for a given analysis only when required.

Who: FF
When: On the fire engine and fire ladder it is important to know the condition of the fire inside the burning structure.
Application: It is useful to download onto the system of the fire engine and fire ladder an application that shows simultaneously, in four windows on the screen, the views of four cameras (IR or standard) mounted on the fire fighters' helmets.

Who: FF
When: Each fire fighter may want to know the situation outside the burning structure.
Application: On the personal device a fire fighter can download a plug-in to decode several video streams. This application allows the simultaneous viewing of four helicopter cameras on the screen.

Who: FF
When: The fire engine and fire ladder are travelling to the blaze location.
Application: The system of the fire engine and fire ladder needs an application that allows it to change the color of the traffic lights along its route.

Who: FF
When: The arsonist could still be inside the structure.
Application: Every fire fighter can download an application that compares the images from the helmet cameras with the images of known arsonists in the police database.

Who: FF
When: Fire fighter F765 becomes disoriented in the smoke. The IC (Incident Commander) immediately directs fire fighter F788 to his aid.
Application: On the device of F788 it may be necessary to download an application that shows simultaneously, in two windows, the IR video from F765 in the smoke and the GIS map of the floor; in this way, while F788 moves towards F765's location, F788 can guide F765 to a safe place using voice instructions.

Who: FF
When: Always.
Application: A fire fighter can download an application that shows the maps of several floors simultaneously in a multi-window GIS view.

Who: LE
When: The suspect does not provide identification documents.
Application: With his device, the officer can download an application that compares images or fingerprints against the police database. If a match is found, the officer receives all available information about the suspect; otherwise an ID, fingerprints and a picture of the suspect are taken and saved in the database.

Who: LE
When: The officer arrests a suspect.
Application: It is useful to stream an application that provides access to the criminal records database.

Who: LE
When: There is a robbery in a bank with hostages.
Application: If the department obtains the video feeds from the cameras inside the bank, it is useful for officers to download an application that shows an overview of the bank cameras simultaneously on the screen (or allows switching from one camera to another).

Table 1: Scenarios Analysis

4.    Context-Aware support to emergency services

 

Context Awareness is a technique to adapt services to rapidly changing end user parameters, like device features, media capabilities, ambience, transport means and UI capacity. It can be implemented via a component that gets this information from a specific sub-block inside the User Information Manager, and then performs the necessary changes in the workflow and the data handled in order to adapt the service to the context of each user.

 

Services are able to modify their behaviour and the information provided based on the emergency context in which the user is involved. Users can be notified in case of an emergency (e.g. fires, accidents, robberies...) and be informed on how to proceed in those situations. Such a context could be defined by the user profile (personal user information), the mobile device profile (characteristics of the mobile device), the location (the user's spatial coordinates and the areas in which the service is available) and the timetable (the time interval in which the user uses the system).
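The four dimensions listed above can be pictured as a simple data structure consulted by the service logic. The Python sketch below is only an illustration under assumed field names and an invented notification rule; it is not the platform's actual context model.

from dataclasses import dataclass
from datetime import time

@dataclass
class EmergencyContext:
    user_profile: dict    # personal user information
    device_profile: dict  # characteristics of the mobile device
    location: tuple       # (latitude, longitude) of the user
    timetable: tuple      # (start, end) interval in which the service is used

def should_notify(ctx, incident_pos, now, max_deg=0.05):
    """Notify only inside the service timetable and close to the incident
    (a crude latitude/longitude box is enough for this sketch)."""
    in_time = ctx.timetable[0] <= now <= ctx.timetable[1]
    near = (abs(ctx.location[0] - incident_pos[0]) < max_deg and
            abs(ctx.location[1] - incident_pos[1]) < max_deg)
    return in_time and near

ctx = EmergencyContext(
    user_profile={"name": "Maria", "language": "it"},
    device_profile={"screen": "small", "audio": True},
    location=(45.46, 9.19),
    timetable=(time(7, 0), time(22, 0)),
)
print(should_notify(ctx, incident_pos=(45.47, 9.18), now=time(8, 40)))  # True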

 

There are many definitions of user context but, in general, user context is "any information that can be used to characterize the situation of an entity (a person, place, or object that is considered relevant to the interaction between a user and an application)". The user context is especially dynamic for mobile systems such as mobile phones, laptops, etc. When using such devices, it is important to adapt the application's behaviour to the ever-changing situation. Some important aspects related to context are:

 

·       Environmental Factors

There are different approaches to choosing the relevant aspects of the environmental context. For example, a distinction can be made between network variations (bandwidth, latency, etc.), hardware variations (screen size, buttons, etc.), and memory and software variations (memory capacity, installed applications, etc.).

Information about the infrastructure and about the location (physical and logical, e.g. at home versus at work) is also very relevant to the user context. The following kinds of environmental context can be considered:

 

o   User Context

This aspect takes into account personal information about the user and user classes, such as the user's identity, characteristics, capabilities, general preferences, current state, information about his or her main activity, etc.

o   Resource Context

The resource context refers to the need for multi-device delivery and thus includes information about the relevant devices, device classes, documents, network, available services, etc.

o   Location Context

This context describes the geographical coordinates, the identity and the state of the location (the people present at that location, etc.).

The location context also includes aspects of the perception of the physical characteristics of the location, e.g. temperature, brightness, noise levels, etc.

o   Temporal Context

The temporal context (in diverse forms, e.g. absolute time, hour, am/pm, etc.) allows the application to be adapted with regard to certain timing constraints, such as the time to go to work.

 

General context categories and their features:

User: tasks of the user; user's profile (experience, etc.); people nearby; characters, date and time formats.

Resources: size of the display; type of the display (colour, etc.); input method (touch panels, buttons); network connectivity, communication cost and bandwidth; nearby resources (printers, displays).

Location: position (latitude and longitude); lighting, temperature, weather conditions, noise levels; surrounding landscape; user's direction and movement.

Temporal: time of day; week, month; season of the year.

Table 2: Categorization of environmental context

 

·       Application State Factor

This factor captures information about the state of the application itself and complements the environmental factors.

·       History Factor

This is necessary to identify whether the context values change over time. For this, both the historical circumstances and the current ones need to be considered.

·       Static and Dynamic Context

User information can be static or dynamic. Context information is considered static if it is defined in advance and does not change during the execution of the application; it is considered dynamic if it is determined at run-time.

·       Availability of Context Information

Not all context factors are always available; e.g. if a user has not interacted with the system yet, there may be no data available about him or her.

 

One typical application of Context Awareness components is the ability to react to changes in the user's location, provided that the device at hand supports location tracking (typically a mobile device with GPS capability). As the user moves, the user's coordinates are retrieved by a specific element. A service built with a location context-aware component will request this information and perform actions based on it. For example, a message can be sent when the user enters a specific area.
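A minimal sketch of this location trigger is shown below, assuming periodic GPS fixes, an invented geofence centre and radius, and a callback standing in for the real messaging service; the entry condition fires only on the transition from outside to inside the area.

import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

class GeoFence:
    def __init__(self, centre, radius_m, on_enter):
        self.centre, self.radius_m, self.on_enter = centre, radius_m, on_enter
        self.inside = False

    def update(self, position):
        """Called on every position fix; fires the callback only on entry."""
        now_inside = haversine_m(position, self.centre) <= self.radius_m
        if now_inside and not self.inside:
            self.on_enter(position)
        self.inside = now_inside

fence = GeoFence(centre=(45.4642, 9.1900), radius_m=500,
                 on_enter=lambda pos: print("send area-entry message at", pos))
for fix in [(45.4700, 9.2000), (45.4660, 9.1920), (45.4645, 9.1902)]:
    fence.update(fix)   # only the second fix triggers the message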

 

The creation and diffusion of context-aware applications and services has been made possible both by the increasing availability of network connectivity and by the improved functionalities and capabilities of mobile devices. Device constraints such as media codec support and bandwidth are an important part of the user context. Being able to accurately detect the best level of media richness for the user's device and context, and to adapt the data flow to this level, can make the difference between a service that is unusable for all but a small fraction of users under very specific conditions and a service that transparently adapts to a large audience. The Context Awareness block comprises the following sub-blocks:

 

Adaptation engine: This element performs all supported kinds of adaptation (transport, media, format, etc.) and provides the interface to invoke them; a small illustrative sketch of this kind of decision follows the sub-block descriptions.

 

User Interface Personalization: This module detects user preferences, habits and context information in order to determine the preferred way to present the graphical user interface of the platform and its services, and performs the necessary operations to adapt the GUI to those preferences and that context.
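As a rough illustration of the kind of decision the Adaptation engine makes when choosing the level of media richness, the Python fragment below picks among video, audio-only and text-only delivery. The bandwidth thresholds, codec names and terminal descriptions are invented for the example and do not reflect the actual platform interfaces.

def select_media_level(device, network):
    """Pick the richest media level the device and network can support."""
    if "h264" in device["codecs"] and network["bandwidth_kbps"] >= 512:
        return "video+audio"
    if device["audio"] and network["bandwidth_kbps"] >= 64:
        return "audio-only"
    return "text-only"

terminals = [
    {"name": "car terminal", "codecs": ["h264", "aac"], "audio": True},
    {"name": "basic phone", "codecs": [], "audio": True},
]
networks = [{"bandwidth_kbps": 2000}, {"bandwidth_kbps": 48}]
for dev, net in zip(terminals, networks):
    print(dev["name"], "->", select_media_level(dev, net))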

 

The application should be aware of changes in the different parameters related to the users of the platform services, as well as in the capabilities of the terminals they might be using, the network connections, etc., and adapt to them. The "Context Awareness" block (including the Adaptation Engine and UI Personalization sub-blocks) will cover the identified requirements.

 

 

Requirement identified: Support for a variety of device profiles that can change in the future: mobile phone, office phone, web browser on PC or phone, IM client.
Sub-block: Adaptation engine

Requirement identified: The platform must update the interface of the terminal used depending on the screen size (PC, mobile phone, etc.). Both basic and advanced modes/editors can adapt to terminal capabilities, while keeping the same core features and core organization. The advanced mode has a minimum capability requirement, while the basic mode can adapt to any terminal, including audio-only or text-only terminals.
Sub-block: UI Personalization

Requirement identified: The platform should allow service transfers from one terminal to another and adapt to the new context.
Sub-block: Adaptation engine

Requirement identified: The platform should provide the means for the service to be established fluently with no interruptions. If the available network bandwidth is not enough to run the service, the platform should abort the establishment of the session and indicate this to the user (or adapt the media to need less bandwidth).
Sub-block: Adaptation engine

Requirement identified: The platform must have the capacity to react to different kinds of network events that affect the correct execution of the service, such as bandwidth variations and changes in network connections.
Sub-block: Adaptation engine

Requirement identified: At the moment the service is started, the platform must be aware of the different terminal capabilities and adapt the service to them (media: audio, video, text; bandwidth and type of connection; screen size and characteristics; processing power; etc.).
Sub-block: Adaptation engine

Table 3: Context Awareness requirements and supporting components

 

4.1                  Application Scenarios

 

 

 

This section includes the description of several scenarios related to emergency situations in order to show how context awareness supports such situations. These scenarios describe incidents, activities, and responses, together with the context in which such situations occur. They do not cover all possible activities and situations; rather, they provide a general view of the way in which context awareness can be involved in different kinds of emergency situations.

 

Following the previous guidelines, the user context considered in this project is a very wide, open concept, as there are no restrictions on the user context to be considered:

 

·       Environmental factors

Environmental factors regarding user context are very important since the platform will contain a wide range of services to be executed in different environments.

o   User context

The user context information is composed of two parts: a static part (such as address, name, preferences), which is stored within the user profile, and a dynamic part (such as location), which is collected at every moment by specific clients on the end-device and sent to the platform to be stored there.

The user context is an important element that will help the platform to adapt to the user preferences and it will also be used for identification tasks.

o   Resource context

This context is represented by the list of available services and the type of terminal being used.

o   Location context

This information is represented by the location obtained after processing the raw data collected from the user device. Some services will take into account the exact position of the user (latitude-longitude), which changes as the user moves; in this case the information will not be obtained from the user profile but from a base service providing this functionality.

o   Temporal context

Some services will be executed at a specific time, which constitutes the temporal context. The user specifies the temporal constraints of a given service if that service offers this kind of personalization.

o   Application state factor

The state of the application is represented by a workflow engine.

Some of the context information is static, such as the information contained in the user profile, while other information is dynamic, such as the user location and resources, and is stored in the proper module of the platform.
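A possible, simplified way to combine the static profile with the dynamic updates reported by the end-device client is sketched below; the field names and values are assumptions of the example, not the platform's data model.

class ContextStore:
    def __init__(self, static_profile):
        self.static = dict(static_profile)  # e.g. name, address, preferences
        self.dynamic = {}                   # e.g. location, active terminal

    def update(self, **fields):
        """Called whenever the client on the end-device reports new values."""
        self.dynamic.update(fields)

    def snapshot(self):
        """Current view of the user context; dynamic values override static ones."""
        return {**self.static, **self.dynamic}

store = ContextStore({"name": "Maria", "preferred_alert": "icon"})
store.update(location=(45.07, 7.69), terminal="car display")
print(store.snapshot())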

 

4.1.1   Health emergency

 

Support to health service (HS)

 

The Health Service system supports health emergency situations; its context consists of the environment and all objects and participants within it. Recognition and interpretation of the context related to the emergency situation is a crucial task for health assistance systems, which aim at supporting people in difficult emergency situations.

 

Assistance systems are able to achieve better performance in supporting people because they have additional information about the environment. The reliability and quality of the existing information can be improved by using additional information from other sources. The available information can be passed on as a warning or used for active intervention in the medical process.

 

Citizens with health problems can get a Health Service (HS) subscription; health organizations rely on the same service as well. On the patients' side, HS is in charge of tracking vital signs such as heart rate and blood pressure.

 

  According to the user's medical history and current status, the Health Service (HS) can decide to warn doctors and relatives. Such medical history can be retrieved by consulting the current user records found in the database information system.

  According to the current context status, and taking into account the user information retrieved from the database system, the Health Service (HS) is able to apply logic rules in order to execute the best procedure for the current emergency situation.
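The rule-based decision described in these points could look, in a deliberately simplified form, like the Python sketch below; the thresholds, record fields and recipients are invented and would in reality come from clinical protocols and the actual database records.

def evaluate_patient(history, vitals):
    """Return the list of alerts the Health Service should raise."""
    alerts = []
    high_risk = "cardiac" in history.get("conditions", [])
    if vitals["heart_rate_bpm"] > 140 or vitals["heart_rate_bpm"] < 40:
        alerts.append("alert the nearest ambulance")
    if high_risk and vitals["systolic_mmHg"] > 180:
        alerts.append("warn the attending doctor")
    if alerts and history.get("notify_relatives", False):
        alerts.append("send a report to the relatives")
    return alerts

history = {"conditions": ["cardiac"], "notify_relatives": True}
vitals = {"heart_rate_bpm": 152, "systolic_mmHg": 190}
print(evaluate_patient(history, vitals))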

 

Doctors obviously keep a decision-making role, first of all in making the patient's diagnosis and then in defining the appropriate measures. Doctors can follow the standard procedure, supported by the suggestions of the context-aware system.

 

Medical emergency

 

At 9:00 am, a patient suffers a heart attack. The Health Service (HS) triggers an alarm to better coordinate the actions of doctors, ambulances, family and relatives.

 

  The system has made an inference taking into account the user's medical history, retrieved from the database information system, and the swift change of relevant parameters in the current emergency context, based on the user information collected.

  A medical expert checks the user's vital signs and alerts a hospital team so that every resource needed for such a medical emergency is made ready.

 

An emergency unit rushes to assist the patient.

 

At the same time (9:00 am), the Health Service (HS) alerts the ambulance closest to the user's house and informs the patient of its imminent arrival. The Health Service (HS) sends the relevant patient information to the ambulance, such as vital signs and the available medical records retrieved from the medical center database system.

 

  The system has coordinated the ambulance action by alerting the team and providing useful user information related to the current context. Such context contains the emergency location, user information, vital signs status and other relevant information. Such coordination allows every resource to be organized in the best possible way.

  Once the system has alerted the ambulance team, it provides suggestions regarding the shortest route to the emergency location, the recommended medical procedure, etc. This procedure is carried out by analyzing the whole context of the current emergency.
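A minimal sketch of the "closest ambulance" step is given below; the ambulance fleet, the patient record and the use of straight-line distance in place of real routing are all assumptions of the example.

import math

def closest_ambulance(patient_pos, ambulances):
    """Return the nearest free ambulance, or None if none is available."""
    available = [a for a in ambulances if a["free"]]
    if not available:
        return None
    return min(available, key=lambda a: math.dist(a["pos"], patient_pos))

ambulances = [
    {"id": "AMB-3", "pos": (45.06, 7.66), "free": True},
    {"id": "AMB-7", "pos": (45.08, 7.70), "free": True},
    {"id": "AMB-9", "pos": (45.05, 7.64), "free": False},
]
patient = {"pos": (45.07, 7.69), "vitals": "HR 150, BP 190/110",
           "records": "cardiac history"}

unit = closest_ambulance(patient["pos"], ambulances)
if unit:
    print(f"dispatch {unit['id']} with patient data: {patient['records']}")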

 

Assistance is given and communication started

 

Before arriving at the patient's house, the ambulance team tries to contact the user's relatives and neighbors, according to his wishes. Once on site, a preliminary diagnosis is made and the results are sent to the hospital via the Health Service (HS), which records the intervention. Depending on the user's state of health, and according to personal preferences, the Health Service (HS) will also send a tentative report to his family.

 

  The system has contacted the people interested in the user's health. The ambulance team has checked the user's vital signs in order to establish the current situation and how to proceed, taking into account the Health Service (HS) suggestions.

  The results related to the user's current status are transmitted and recorded by the Health Service (HS) in order to inform the hospital team that is following the emergency.

  The hospital team has received the information and gets ready for a possible intervention.

  The system has determined how to inform the user's family, who have received a report on the current situation.

 

4.1.2   Road emergency

 

Support to road emergencies: Road Emergency System (RES)

 

Accidents and other unexpected events on the road can complicate travel plans. Road emergencies increase the chance that something will go wrong on the road and complicate even the shortest trip. Early indication of dangerous situations on the road, higher comfort of driving and better support for inexperienced drivers or for driving in a less familiar environment are only some of many benefits that are expected from driving emergency systems.

 

The context for a Road Emergency System (RES) regards a driving situation, consisting of the environment and all objects and traffic participants within it. Additionally, the driver, the state of the vehicle itself and the national driving regulations are part of such a driving context. Recognition and interpretation of context is a crucial task for driving assistance systems, which aim at supporting the driver in difficult situations.

 

Assistance systems are able to achieve better performance in supporting the driver because they have additional information about the environment. The reliability and quality of the existing information can be improved by using additional information from other sources. The available information can either be passed on to the user as a warning or used for active intervention in the driving process.

 

The context related to a driving situation is composed of the operating environment of the vehicle and all relevant objects within it. The most important element is the environment. The "spatial" context refers to the physical environment (the type of road). The "local" context is a regional physical environment where special driving rules apply, and it is located within a spatial context. A local context could be an intersection, a level crossing, a tunnel, a crosswalk, etc., and it depends on the spatial context. Traffic objects such as signs, pedestrians or markings are located within, and valid for, a spatial or local context, respectively. Road conditions complement the driving environment.

 

The driver context comprises the driver's current state (tired, drunk, sick, ...), experience (beginner, expert) and risk-willingness (high, medium, low), expressed using finite value domains, and also the driver's intent for the next planned manoeuvre. Detection of these states with sensor systems is an important requirement.

 

The recommended driving behavior in emergency situations may differ depending on the presence or absence of an anti-lock braking system or electronic stability control. For example, in the absence of ABS, a more careful driving behavior should be recommended, because the stopping distance will increase. Driving regulations have a substantial influence on the recommended behavior and have to be incorporated in the reasoning process. Driving rules differ between countries, therefore adaptation is necessary.

 

Road emergency: people are warned by RES

 

Maria travels from Milano to Torino by car. The spatial context (type of road) dictates the applicable standard driving rules under best conditions (without other participants, without additional traffic objects, in daylight and good weather). Any other object within the context presents an additional restriction to the given standard behavior.

 

At 8:00 am, Maria leaves home and takes the highway as she does every day. Forty minutes later an accident occurs and RES detects the event. Most of the time, a driver will not be alone on a road or highway section, and several vehicles will perceive the same context. Therefore it is common to exchange context information between vehicles in order to confirm or reject current beliefs about the context, or even to acquire new knowledge. As perception based on vision, roadside sensors or in-vehicle sensors may be subject to errors, combining the information from multiple sources helps to improve data quality.

 

Collaboration between vehicles means the exchange of information that could indicate a potentially dangerous situation, related to the mistake of a driver or to bad road conditions. In this way a driver, like Maria, can be warned before getting into a dangerous situation. Once a vehicle has perceived a significant object (another vehicle, a pedestrian, a biker, etc.), it should communicate this information to its neighbors. Vehicles can notify other vehicles about objects (e.g. pedestrians at an intersection) they may not yet have recognized, thus enhancing the overall information about the current context. Additionally, the number of announcements of the "same" object can provide a measure of reliability.
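The reliability measure mentioned above can be pictured as a simple count of independent announcements of the same object, as in the Python sketch below; the spatial grouping cell and the confirmation threshold are assumptions of the example, not part of any vehicular networking standard.

from collections import defaultdict

class SharedPerception:
    def __init__(self, cell_deg=0.001, min_reports=2):
        self.cell_deg = cell_deg         # nearby reports fall into the same cell
        self.min_reports = min_reports   # announcements needed to trust a report
        self.reports = defaultdict(set)  # (object type, cell) -> reporting vehicles

    def announce(self, vehicle_id, obj_type, position):
        cell = (round(position[0] / self.cell_deg),
                round(position[1] / self.cell_deg))
        self.reports[(obj_type, cell)].add(vehicle_id)
        if len(self.reports[(obj_type, cell)]) >= self.min_reports:
            print(f"confirmed {obj_type} near {position}: warn approaching drivers")

net = SharedPerception()
net.announce("car-A", "accident", (45.1230, 8.4560))  # single report: not yet trusted
net.announce("car-B", "accident", (45.1231, 8.4561))  # second vehicle confirms it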

 

On the other hand, the exchange of information between cars and the infrastructure is another way to obtain useful information related to an emergency situation. For broadcasting this information to other drivers, several architectures are conceivable, including the use of a central broadcasting server, roadside wireless LANs, ad-hoc networking between vehicles, etc.

 

At 8:40 am, Maria, like all the other people in the area, receives an alarm message from RES, delivered according to the type of mobile device or gadget that she is using. Depending on the type of terminal, alarm messages are adapted to the users' preferences and situation.

 

  A blinking icon appears in Maria's car (which was traveling towards her workplace), while other people receive a special sound on their mobile devices.

 

Road Emergency System (RES) will make use of users’ preferences and roles in order to personalize the message.

 

  E.g. people in noisy environments might receive an alert that can still be heard.

 

Road emergency: Maria gets instructions

 

Maria receives instructions, depending on her location and on availability, on how to proceed: either taking the safest route to Torino or another course of action based on the emergency event (stopping the car, going back home, etc.).

 

  The system has sent Maria a set of instructions suggesting the possibility of taking an alternative road in order to avoid possible problems.

 

Additional bad weather conditions further influence the decision process. The system may propose an even lower speed. Road emergency conditions can provide valuable information allowing drivers to choose alternative routes based on the instructions received.

 

 

  The system has detected that bad weather conditions are approaching. It sends a message with information about the bad weather and suggests driving more slowly.

 

The instructions suggest taking an alternative road 5 km before the accident location and decreasing speed because of the bad weather conditions. Maria decreases her speed and continues her trip according to the indications provided. She turns left 5 km before the accident location in order to take the suggested alternative route, and continues to her final destination.

 

4.2                  Emergency situations analysis

 

The following table summarizes some situations and scenarios where context-aware applications provide benefits in emergency scenarios. The first column identifies the involved team, the second column recalls the emergency situation, the third column describes the context-aware application and the fourth column describes the corresponding adaptation.

 

Who: Health Emergency Service
When: People with health problems get a health service subscription.
Application: The health service is in charge of tracking vital signs such as heart rate and blood pressure, and can also decide to warn doctors and relatives, according to the situation and the elements involved.
Adaptation: According to the user context (identity, preferences, user state) and the location context (position), the application adapts the functionalities that can be provided.

Who: Health Emergency Service
When: A patient suffers a heart attack.
Application: Taking into account the user's medical history and the swift change of relevant parameters, the health application triggers an alarm to better coordinate the actions of doctors, ambulances, family and relatives.
Adaptation: The application takes into account the user context (user state), the resource context (documents, network, available services) and the temporal context (time of the emergency) in order to adapt the alarm message and coordinate actions.

Who: Health Emergency Service
When: An ambulance receives the emergency request.
Application: The closest ambulance receives the emergency request and the application gets ready to provide support.
Adaptation: According to the user context (ambulance team preferences) and the resource context (ambulance devices, available services), the application adapts its resources to provide support.

Who: Health Emergency Service
When: The ambulance is travelling to the patient's address.
Application: The application provides the current road and weather conditions. Such information is updated according to the ambulance position and shown taking into account the ambulance team's preferences.
Adaptation: Based on the user context (ambulance team preferences), the resource context (devices, network, available road and weather services) and the location context (weather conditions), the application adapts the ambulance user interface in order to provide useful information.

Who: Health Emergency Service
When: Paramedics need information about the patient's health state from the hospital.
Application: The application retrieves relevant patient information, such as the available medical records, from the hospital database information system, and suggests standard procedures based on the current emergency situation.
Adaptation: The application adapts the way in which the patient's information is shown according to the user context (paramedics' preferences) and the resource context (type of display, type of device, network, available information services).

Who: Health Emergency Service
When: Once on site.
Application: A preliminary diagnosis is made and the results are organized and sent to the hospital, which records the intervention.
Adaptation: According to the user context (paramedics' preferences) and the resource context (type of device used to record the intervention), the application adapts the input interface used to collect the intervention data.

Who: Health Emergency Service
When: The current situation is communicated to other people.
Application: Depending on the context of the patient's state of health, and according to personal preferences, the application prepares and sends a tentative report to the patient's family. Such a report contains relevant information about the current situation.
Adaptation: According to the user context (patient state, patient preferences), the application adapts the way in which a report on the current situation is prepared and sent to the patient's family.

Who: Road Emergency Service
When: Checking the road conditions.
Application: The application checks the current road context in order to warn drivers, if necessary, according to the road emergency.
Adaptation: According to the user context (preferences, user state), the resource context (kind of device, network, available road services) and the location context (current position, lighting, weather conditions, surrounding landscape, user's direction and movement), the application gets ready to adapt the functionalities provided.

Who: Road Emergency Service
When: An accident occurs on the road.
Application: The application is capable of detecting such an event. It retrieves road information in order to establish what information has to be sent.
Adaptation: The application establishes and adapts the type of message (information, suggestions) to be sent according to the user context (message preferences) and the location context (current position, accident location, weather conditions).

Who: Road Emergency Service
When: Vehicles are approaching the emergency location.
Application: The application sends an alarm message according to the type of mobile device or gadget used.
Adaptation: The application adapts the alarm message based on the resource context (type of device, display size, display type).

Who: Road Emergency Service
When: The user receives updated information.
Application: According to the user's position, the message with information on how to proceed in order to take the safest route can be adapted and retransmitted according to changes in the analyzed contexts.
Adaptation: According to the location context (proximity to the incident location, current position, road changes, weather changes), the application adapts the message (information, suggestions).

Table 4: Scenarios Analysis

 

Context-aware applications can address healthcare needs and support activities related to social interaction, environment control, and information flow. This document presents the main characteristics of context awareness related to emergency scenarios. It also identifies and describes the requirements that have to be fulfilled in order to define the main characteristics of an application suited to the context.

 

As we saw previously, in mobile environments the information related to different emergency situations can be subject to rapid changes. Optimizing the information handled by context-aware applications is a process that involves quick adaptation of the way in which information is processed, according to changes in the environment. In addition, context-aware applications facilitate the interaction of people with computing devices and applications. The scenarios analyzed above show that automatic context detection and adaptation allows applications to require significantly less user input. It therefore enables people to focus on relevant information rather than forcing them to deal with operational details.

 

Additionally, the motivation behind this document is to explore a set of common emergency situations supported by context-aware applications. As mentioned in the previous chapter, such applications are composed of several components that provide the functionalities needed to analyze emergency situations. In conclusion, this document provides concepts, application scenarios and requirements related to context-aware emergency applications, taking into account the main characteristics of context, so that useful information can be processed by adapting the application to the emergency situation.

Appendix A Acronyms Table

 

PP: Public Protection
PPDR: Public Protection and Disaster Relief
LE: Law Enforcement
FF: Fire Fighters
EMS: Emergency Medical Services
MESA: Mobility for Emergency and Safety Applications
ETSI: European Telecommunications Standards Institute
TIA: Telecommunications Industry Association
PPSP: Public Safety Partnership Project
3G: Third Generation
EU: European Union
SoR: Statement of Requirements
PSTN: Public Switched Telephone Network
DSL: Digital Subscriber Line
DS3: Digital Signal 3
ISDN: Integrated Services Digital Network
TSG: Technical Specification Group
QoS: Quality of Service
BAN: Body Area Network
PSAP: Public Safety Answering Point
CAD: Computer Aided Dispatch
GIS: Geographic Information System
PSCD: Public Safety Communications Devices
BFD: Brookside Fire District
E7: Fire Engine (of BFD 7)
L7: Fire Ladder (of BFD 7)
DOT: Department of Transportation

Figures Index

Figure 1: Interoperability in MESA systems

Figure 2: MESA actors overview

 

Bibliography

[1]   CEFRIEL, "Study and development of models for Public Protection scenarios", Project MESA, 2005

[2]   www.projectmesa.org

[3]   http://www.wikipedia.org/

[4]   http://www.publicsafetycommunication.eu/

[5]   http://www.opuce.tid.es/