Platform for Control and Delivery of services in Next Generation Networks






Deliverable 1.2


Enabling Technologies












The PICO project concentrates on application streaming and context awareness as topical techniques to build distributed applications in the domain of emergency situations (described in D1_1).

This document describes these enabling technologies in detail: application streaming, context awareness, and mobile devices and platforms.

These technologies will be used to develop prototypes of emergency applications.











First draft of Deliverable 1.2

30/10/2008

Final version










Abstract

Introduction

1. Applications on Demand technologies
   1.1 Thin and fat client
      1.1.1 Thin client Software
         Xterm
         OpenThinClient
      1.1.2 Remote Desktop Software
         VNC
         ICA
         RDP
         Remote Desktop comparison
   1.2 Virtual Machine
      1.2.1 VMware
      1.2.2 XEN
      1.2.3 OpenVZ
   1.3 Web 2.0
      1.3.1 Ajax
      1.3.2 Flash & Flex
      1.3.3 Silverlight
      1.3.4 JavaFX
      1.3.5 Web 2.0 technologies comparison
   1.4 Application Streaming
      1.4.1 Application Deployment
      1.4.2 SoftGrid
      1.4.3 AppStream
      1.4.4 Application streaming technologies comparison
   1.5 Technologies Comparison

2. Mobile Devices
   2.1 Development Environment
      2.1.1 Java (J2ME)
      2.1.2 Native code
      2.1.3 Web
   2.2 Mobile OS
      2.2.1 iPhone
      2.2.2 Symbian
      2.2.3 Windows Mobile
      2.2.4 Proprietary OS
      2.2.5 Android
      2.2.6 MOTOMAGX
      2.2.7 Openmoko
   2.3 Sample devices
      2.3.1 Apple iPhone
      2.3.2 Nokia N96
      2.3.3 HTC TyTN II

3. Technologies supporting Context Awareness
   3.1 Context-Aware Mobility Solution
      3.1.1 Active Location Tracking
      3.1.2 Passive Location Tracking
      3.1.3 Combining technologies
      3.1.4 Mobility solution analysis
   3.2 Common context-aware components
      3.2.1 Web 2.0
      3.2.2 Widget
      3.2.3 REST (Representational State Transfer)
      3.2.4 Social Networking
      3.2.5 Context Ontology
      3.2.6 Rule Engine
      3.2.7 Recommendation Techniques
      3.2.8 Data Mining

Appendix A: Acronyms Table

Figures Index

Bibliography






















Introduction


In recent years telecommunication companies have begun to develop new technologies based on the observation that, in a client-server environment, many client-side resources are unnecessary. The aim of this document is to describe and analyze the history of this development, focusing on the current enabling technologies and devices. Application on Demand, or application streaming, is an alternative to installing applications locally on individual PCs. Applications are streamed on demand from a central server; when the user has finished with an application, all of its components are completely removed, as if the application had never been there. This approach brings important advantages: reduced installation and software license costs, reduced upgrade costs, and a consistent decrease in disk-space requirements. Small storage capacity implies portability, which, as we saw, is a fundamental feature in emergency scenarios. Moreover, Public Protection Operators need flexible systems that can be deployed in any situation, and application on demand addresses this need through fast wireless delivery.

In document D1_1 we analyzed the possible emergency scenarios from an operator's point of view, describing requirements and needs, and the importance of delivering applications on demand.

In this document we will first present the state of the art of the available application on demand technologies. In the second section we will introduce and describe the current mobile devices that can be used to stream applications, focusing on features such as space requirements, storage capacity, and portability. In the third section we will describe the technologies that support context awareness.

The aim is therefore to complete the emergency scenario analysis proposed in D1_1 with a technical overview of the available software and devices, enabling a faster delivery of telecommunication services that are useful, for example, in emergency situations.


Figure 1: Communications in emergency situations.

1. Applications on Demand technologies


In this section we summarize the technologies available for delivering applications on demand and propose a final comparison among the most important of them. In detail, in the first paragraph we describe the starting point of this architecture, represented by thin clients and remote desktop software; in the second paragraph we focus on virtualization technologies, while in the third we present the new World Wide Web (Web 2.0) concepts and the available tools. Finally, before presenting a global comparison, we describe the best-known proprietary application streaming software.



1.1 Thin and fat client


A thin client (sometimes also called a lean client) is a client computer or client software in client-server architecture networks which depends primarily on the central server for processing activities, and mainly focuses on conveying input and output between the user and the remote server. In contrast, a thick or fat client does as much processing as possible and passes only data for communications and storage to the server.

A thin client, as an application program, relies for most significant elements of its business logic on a separate piece of software, an application server, typically running on a host computer located nearby on a LAN or at a distance on a WAN or MAN. The term thin client is also sometimes used in an even broader sense that includes diskless nodes, which do most of their processing on a central server, with as little hardware and software as possible at the user's location and as much as necessary at a centrally managed site. For example, the embedded operating system of a thin client is stored on a flash drive or a Disk on Module (DOM), or is downloaded over the network at boot-up, and it usually uses some kind of write filter so that the OS and its configuration can only be changed by administrators.

In designing a client-server application, there is a decision to be made as to which parts of the task should be done on the client, and which on the server. This decision can crucially affect the cost of clients and servers, the robustness and security of the application as a whole, and the flexibility of the design for later modification or porting. One design question is how application-specific the client software should be. Using standardized client software such as a Web browser can save on development costs, since one does not need to develop a custom client, but one must accept the limitations of the standard client.


Figure 2: Thin clients in a client-server context.


The most important advantages of using a thin client include lower total cost of ownership, centralized administration and security, and minimal client-side hardware requirements.

As we said, another possible solution in client-server architectures is the use of fat clients. A fat client is a computer (client) in a network which typically provides rich functionality independently of the central server. A fat client still requires at least a periodic connection to a network or central server, but is often characterized by the ability to perform many functions without that connection. In some cases a fat client can be the more convenient choice; important advantages include this offline capability and the ability to exploit local processing power.

Depending on the tradeoff between low development costs and the limitations of standard clients, we may adopt a thin client, a thick/fat client, or a hybrid client model in which some applications (such as web browsers) run locally while others (such as critical business systems) run on the terminal server. One way to implement this is simply to run remote desktop software on a standard desktop computer. This introduces the concept of centralized computing.

Centralized computing denotes a model in which computing is done at a central location, using terminals (for example text terminals or thin clients) attached to a central computer. It offers greater security than decentralized systems because all processing is controlled in a central location; in addition, if one terminal breaks down, the user can simply go to another terminal, log in again, and still access all of their files. This arrangement does have disadvantages: the central computer performs the computing functions and controls the remote terminals, so the system relies totally on it, and should the central computer crash, the entire system becomes unavailable. A server used in centralized computing is commonly called a terminal server. There are two contemporary models of centralized computing:


     Thin client model: the terminal server provides a Windows or Linux desktop to multiple users.

     Remote desktop model: an ordinary computer acts temporarily as a terminal server, providing its desktop to a remote computer over a wide area network such as the Internet (software clients used in this architecture are known as remote desktop applications; they are also used in the thin client model).


In the rest of this paragraph we describe the most important software and tools regarding these two models.













1.1.1 Thin client Software


Thin clients have been used for many years by businesses to reduce total cost of ownership, while web applications have become more popular because they can potentially be used on many types of computing device without any software installation. In recent years, however, the structure has been moving away from pure centralization: thin client devices are becoming more like diskless workstations thanks to increased computing power, and web applications are doing more processing on the client side with technologies such as AJAX and rich clients. In addition, mainframes are still used for some critical applications, such as payroll or processing day-to-day account transactions in banks; these mainframes are typically accessed either through terminal emulators or via web applications.

Regarding thin client technologies, we describe the best-known client, which pioneered the technologies available today, and a current open source solution:

     Xterm is the standard terminal emulator for the X Window System; a user can have many different invocations of xterm running at once on the same display, each of which provides independent input/output for the process running in it.

     OpenThinClient is an open source thin client solution consisting of a Linux-based operating system along with a comprehensive Java-based management GUI and server component. It is intended for environments where a medium to large number of thin clients must be supported and managed efficiently.


Xterm


In computing, the X Window System is a system which implements the X display protocol and provides windowing on bitmap displays. It provides the standard toolkit and protocol with which to build graphical user interfaces (GUIs) on most Unix-like operating systems and OpenVMS, and has been ported to many other contemporary general purpose operating systems. X provides the basic framework, or primitives, for building GUI environments: drawing and moving windows on the screen and interacting with a mouse and/or keyboard. X does not mandate the user interface — individual client programs handle this. As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces. X is not an integral part of the operating system; instead, it is built as an additional application layer on top of the operating system kernel.

An X terminal is a thin client that runs an X server. This architecture became popular for building inexpensive terminal parts for many users to simultaneously use the same large server (making programs being run on the server clients of the X terminal).

X terminals explore the network (the local broadcast domain) using the X Display Manager Control Protocol to generate a list of available hosts that they can run clients from. The initial host needs to run an X display manager. Dedicated (hardware) X terminals have become less common; a PC or modern thin client with an X server typically provides the same functionality at the same, or lower, cost.

Figure 3: X Window System.

OpenThinClient


The OpenThinClient operating system is based on a customized Ubuntu Linux distribution optimized for use in diskless devices. Booting and configuration of the thin clients is implemented using industry-standard technologies like LDAP, DHCP, PXE, TFTP and NFS. OpenThinClient provides a powerful, Java-based graphical user interface to manage all aspects of the thin clients under its control. Furthermore, it supports integration with enterprise-wide management environments like LDAP or MS ADS.
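Since booting relies on PXE, the DHCP server must tell the diskless clients where to fetch the network boot loader over TFTP. A minimal, purely illustrative ISC dhcpd fragment (the addresses and file names are placeholders, not taken from the OpenThinClient documentation) could look like this:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    next-server 192.168.1.10;    # TFTP server holding the boot files
    filename "pxelinux.0";       # network boot loader fetched over TFTP
}
```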


OpenThinClient differs from other solutions in the following aspects:


     Based on industry-standard protocols and technologies: integrates smoothly with existing systems management solutions like LDAP and MS ADS.

     Features a powerful management GUI.

     Supports a large range of thin client hardware.

     No specialized hardware is required. The thin client only needs a PXE-capable network interface and no local storage like flash or hard disk. (i.e. boots devices into thin client mode without flash drives thanks to its PXE boot support).

     Several thin client applications come pre-packaged, like a Web browser, RDP client etc.

     The OpenThinClient Manager and the OpenThinClient Server are written in Java so they will run on any OS that is supported by Sun Java 6.

     A complete open source thin client solution free of charge.








1.1.2 Remote Desktop Software


In computing, remote desktop software is remote access and remote administration software that allows graphical user interface applications to be run remotely on a server, while being displayed locally.

Remote desktop applications have varying features: some allow attaching to an existing user's session (i.e. a running desktop) and remote controlling it in front of the user's eyes. It can also be explained as remote control of a computer by using another device connected via the internet or another network (see figure below). This is widely used by many computer manufacturers for technical troubleshooting for their customers. The quality, speed and functions of any remote desktop protocol are based on the system layer where the graphical desktop is redirected. Software such as VNC uses the top software layer to extract and compress the graphic interface images for transmission. Other products such as Microsoft RDP and others use a kernel driver level to construct the remote desktop for transmission.

We report the features and characteristics of the most important solutions concerning this architecture.




Figure 4: Remote Desktop interaction.












VNC

Virtual Network Computing (VNC) is a graphical desktop sharing system made up of three components:

     a server

     a client

     a protocol


In detail, the VNC architecture uses the RFB (remote framebuffer) protocol. To remotely control another computer, the server sends small rectangles of the framebuffer to the client; keyboard and mouse events are transmitted from the client to the server, and graphical screen updates are relayed back in the other direction over the network.



Figure 5: VNC architecture.


In its simplest form, the VNC protocol can use a lot of bandwidth, so various methods have been devised to reduce the communication overhead. In particular, several encodings exist to determine the most efficient way to transfer the rectangles, and the client and server negotiate which encoding will be used. The simplest encoding, supported by all clients and servers, is raw encoding, in which pixel data is sent in left-to-right scan-line order; after the initial full screen has been transmitted, only the rectangles that change are transferred. This works very well if only a small portion of the screen changes from one frame to the next (like a mouse pointer moving across a desktop, or text being typed at the cursor), but bandwidth demands get very high if many pixels change at once, such as when scrolling a window or viewing full-screen video.
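As a toy illustration of the bandwidth saving from incremental updates, the following Python sketch (hypothetical helper names, not part of any real VNC implementation) raw-encodes only the bounding rectangle of the pixels that changed between two frames and compares it with re-sending the whole framebuffer:

```python
# Toy model of VNC-style incremental updates: send only the changed rectangle,
# raw-encoded, instead of the entire framebuffer. Simplified sketch; real RFB
# negotiates encodings and may use many small rectangles per update.

def bounding_dirty_rect(prev, curr):
    """Return (x, y, w, h) of the smallest rectangle covering all changed
    pixels, or None if the two frames are identical."""
    rows = [y for y in range(len(curr)) if prev[y] != curr[y]]
    if not rows:
        return None
    cols = [x for y in rows for x in range(len(curr[y]))
            if prev[y][x] != curr[y][x]]
    x0, x1 = min(cols), max(cols)
    y0, y1 = min(rows), max(rows)
    return (x0, y0, x1 - x0 + 1, y1 - y0 + 1)

def raw_encoded_bytes(rect, bytes_per_pixel=4):
    """Size of a raw-encoded update for a (x, y, w, h) rectangle."""
    if rect is None:
        return 0
    _, _, w, h = rect
    return w * h * bytes_per_pixel

# Example: a 64x48 frame where only an 8x3 region (e.g. a cursor) changed.
W, H = 64, 48
prev = [[0] * W for _ in range(H)]
curr = [row[:] for row in prev]
for y in range(10, 13):
    for x in range(20, 28):
        curr[y][x] = 1

rect = bounding_dirty_rect(prev, curr)
full = W * H * 4                     # raw-encoding the entire screen
incremental = raw_encoded_bytes(rect)
print(rect, incremental, full)       # (20, 10, 8, 3) 96 12288
```

Sending 96 bytes instead of 12 KB shows why small localized changes (typing, cursor movement) are cheap, while full-screen changes force a retransmission of almost everything.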

The main features of VNC are:


     VNC is platform-independent. A VNC viewer on any operating system can usually connect to a VNC server on any other operating system. There are clients and servers for almost all GUI operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.

     VNC is not a secure protocol. While passwords are not sent in plain-text (as in telnet), brute-force cracking could prove successful if both the encryption key and encoded password are sniffed from a network.

     No state is stored at the viewer. This means you can leave your desk, go to another machine, whether next door or several hundred miles away, reconnect to your desktop from there and finish the sentence you were typing. Even the cursor will be in the same place.  

     It is small and simple. The Win32 viewer, for example, is about 150K in size and can be run directly from a floppy. There is no installation needed.

     It is sharable. One desktop can be displayed by several viewers at once even if only one is able to work on it.

     It is freely available for download under the terms of the GNU General Public License (GPL).



ICA

Independent Computing Architecture (ICA) is a proprietary protocol for an application server system, designed by Citrix Systems. A key challenge for the ICA architecture is performance: a graphically intensive application (as most are when presented through a GUI) served over a slow or bandwidth-restricted network connection requires considerable compression and optimization to remain usable by the client. The client machine may be a different platform and may not have the same GUI routines available locally; in this case the server may need to send the actual bitmap data over the connection. Depending on the client's capabilities, servers may also off-load part of the graphical processing to the client, e.g. to render multimedia content. The protocol lays down a specification for passing data between server and clients, but is not bound to any one platform.

Practical products conforming to ICA are Citrix's WinFrame and Citrix Presentation Server (formerly called Metaframe) products. These permit ordinary Windows applications to be run on a suitable Windows server, and for any supported client to gain access to those applications. Besides Windows, ICA is also supported on a number of Unix server platforms and can be used to deliver access to applications running on these platforms. The client platforms need not run Windows; for example, there are clients for Mac, Unix, Linux, and various Smartphones. ICA client software is also built into various thin client platforms.

ICA is broadly similar in purpose to window servers such as the X Window System. It also provides for the feedback of user input from the client to the server, and a variety of means for the server to send graphical output, as well as other media such as audio, from the running application to the client.


RDP

Terminal Services is one of the components of Microsoft Windows (both server and client versions) that allows a user to access applications and data on a remote computer over any type of network, although it is normally best used over a Wide Area Network or Local Area Network, as ease of use and compatibility may differ on other network types. Terminal Services is Microsoft's implementation of thin-client terminal server computing, where Windows applications, or even the entire desktop of the computer running Terminal Services, are made accessible from a remote client machine.

Remote Desktop Connection (RDC) is the client application for Terminal Services. It allows a user to remotely log in to a networked computer running the Terminal Services server. RDC presents the desktop interface of the remote system as if it were accessed locally (i.e. it allows watching and controlling the desktop session of another PC; see figure). RDC uses the Remote Desktop Protocol (RDP), a multi-channel protocol that allows a user to connect to a computer running Microsoft Terminal Services. Microsoft refers to the official RDP client software as either Remote Desktop Connection or Terminal Services Client. The server component of Terminal Services (Terminal Server) listens on TCP port 3389. On the server, RDP uses its own video driver to render display output, packaging the rendering information into network packets using the RDP protocol and sending them over the network to the client. On the client, RDP receives the rendering data and interprets the packets into the corresponding Microsoft Win32 graphics device interface (GDI) API calls. For the input path, client mouse and keyboard events are redirected to the server, where RDP uses its own on-screen keyboard and mouse driver to receive them.

Regarding encryption, RDP uses RSA Security's RC4 cipher, a stream cipher designed to efficiently encrypt small amounts of data and intended for secure communication over networks. Beginning with Windows 2000, administrators can choose to encrypt data using a 56- or 128-bit key.
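For illustration, the RC4 algorithm mentioned above is simple enough to sketch in a few lines of Python. This is educational only: RC4 is nowadays considered weak, and this sketch does not model RDP's key exchange or session handling.

```python
# Minimal RC4 stream cipher (KSA + PRGA), the cipher RDP historically used
# for transport encryption. Educational sketch only.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the state array with the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with the data
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Encryption and decryption are the same operation (XOR with the keystream).
ct = rc4(b"Key", b"Plaintext")
print(ct.hex())  # bbf316e8d940af0ad3 (well-known RC4 test vector)
```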

To address bandwidth constraints, RDP supports various mechanisms to reduce the amount of data transmitted over a network connection, including data compression, persistent caching of bitmaps, and caching of glyphs and fragments in RAM. The persistent bitmap cache can provide a substantial performance improvement over low-bandwidth connections, especially when running applications that make extensive use of large bitmaps.
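The caching idea can be sketched as follows. This is a hypothetical, much-simplified model (invented class and key sizes), not RDP's actual cache protocol: the first time a bitmap is sent, both sides store it under a key, and later frames send only the small key instead of the full pixel data.

```python
# Simplified model of a sender-side bitmap cache: on a cache hit, only a
# short reference key crosses the wire instead of the whole bitmap.
import hashlib

class BitmapCacheSender:
    KEY_SIZE = 8  # bytes on the wire for a cache reference (hypothetical)

    def __init__(self):
        self.cache = set()
        self.bytes_sent = 0

    def send(self, bitmap: bytes) -> str:
        key = hashlib.sha256(bitmap).hexdigest()[:16]  # 8-byte key, hex-encoded
        if key in self.cache:
            self.bytes_sent += self.KEY_SIZE           # hit: tiny reference
            return "hit"
        self.cache.add(key)
        self.bytes_sent += len(bitmap) + self.KEY_SIZE # miss: full bitmap + key
        return "miss"

sender = BitmapCacheSender()
toolbar_icon = bytes(1024)           # a 1 KB bitmap reused on every frame
results = [sender.send(toolbar_icon) for _ in range(10)]
print(results.count("miss"), sender.bytes_sent)  # 1 miss, 1104 bytes total
```

Without the cache the same ten frames would cost about 10 KB; with it, roughly 1 KB, which is the effect the persistent bitmap cache exploits on low-bandwidth links.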

Remote Desktop comparison


In this section we propose a brief comparison among the systems presented; as we saw, these protocols may differ in encoding method, caching, bandwidth usage, latency, video quality, etc. The idea is to highlight these features and thereby evaluate the performance of these products.

The following values were measured in tests over a Wide Area Network; the PDA model used is the AXIM V5. The audio/video quality percentages are relative to the full (100%) quality available on a personal computer.






                                               VNC        ICA        RDP
Web Browsing (Page Download Latency) on PC     ~0.7 s     ~0.7 s     ~1 s
Web Browsing (Page Download Latency) on PDA    ~10 s      ~1 s       ~0.5 s
Audio/Video Quality on PC                      ~5%        -          ~5%
Audio/Video Quality on PDA                     ~2%        ~2%        ~10%
Web Browsing Data Transfer (per page) on PC    ~100 KB    ~150 KB    ~200 KB
Web Browsing Data Transfer (per page) on PDA   ~60 KB     ~20 KB     ~10 KB
Audio/Video Data Transfer on PC                ~20 MB     ~30 MB     ~10 MB
Audio/Video Data Transfer on PDA               ~2 MB      ~3 MB      ~3 MB






As the table shows, RDP and ICA achieve roughly similar performance. VNC instead shows some drawbacks, partly because its display encoding is less efficient, causing an increase in bandwidth usage.



1.2 Virtual Machine


Virtualization is a broad term that refers to the abstraction of computer resources. One definition is the following: Virtualization is a technique for hiding the physical characteristics of computing resources from the way in which other systems, applications, or end users interact with those resources. This includes making a single physical resource (such as a server, an operating system, an application, or storage device) appear to function as multiple logical resources; or it can include making multiple physical resources (such as storage devices or servers) appear as a single logical resource.

As with other terms, such as abstraction and object orientation, virtualization is used in many different contexts, which can be grouped into two main types:


     Platform virtualization involves the simulation of whole computers.

     Resource virtualization involves the simulation of combined, fragmented, or simplified resources.


In practice, virtualization creates an external interface that hides an underlying implementation (e.g., by multiplexing access, by combining resources at different physical locations, or by simplifying a control system). Recent development of new virtualization platforms and technologies has refocused attention on this mature concept.
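As a toy illustration of virtualization as multiplexing, the following sketch (invented classes, conceptual only) carves one physical "disk" into two logical disks that each see their own zero-based block addresses, hiding the shared physical resource behind a simple interface:

```python
# One physical resource (a single bytearray) presented as multiple logical
# resources, each with its own independent block address space.

BLOCK = 512

class PhysicalDisk:
    def __init__(self, blocks):
        self.data = bytearray(blocks * BLOCK)

class LogicalDisk:
    """A fixed slice of the physical disk, exposed as if it were a whole disk."""
    def __init__(self, phys, first_block, blocks):
        self.phys, self.first, self.blocks = phys, first_block, blocks

    def write(self, block, payload: bytes):
        assert block < self.blocks and len(payload) <= BLOCK
        off = (self.first + block) * BLOCK   # translate logical -> physical
        self.phys.data[off:off + len(payload)] = payload

    def read(self, block) -> bytes:
        off = (self.first + block) * BLOCK
        return bytes(self.phys.data[off:off + BLOCK])

disk = PhysicalDisk(blocks=100)
vol_a = LogicalDisk(disk, first_block=0, blocks=50)   # two logical disks
vol_b = LogicalDisk(disk, first_block=50, blocks=50)  # from one physical disk
vol_a.write(0, b"hello from A")
vol_b.write(0, b"hello from B")   # same logical block 0, different physical one
print(vol_a.read(0)[:12], vol_b.read(0)[:12])
```

Each logical disk's block 0 maps to a different physical offset, so the two "resources" stay isolated even though they share the same underlying storage, which is exactly the multiplexing idea described above.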

More in detail, a hypervisor (or virtual machine monitor, VMM) is a virtualization platform that allows multiple operating systems to run on a host computer at the same time.

Hypervisors are currently classified in two types:


       A native (or bare-metal) hypervisor is software that runs directly on a given hardware platform (as an operating system control program). A guest operating system thus runs at the second level above the hardware. The classic native hypervisor was CP/CMS, developed at IBM in the 1960s, ancestor of IBM's current z/VM. More recent examples are the open source Xen, Citrix XenServer, Oracle VM, VMware's ESX Server, L4 microkernels, Green Hills Software's INTEGRITY Padded Cell, VirtualLogix's VLX, etc.



Figure 7: Native Hypervisor.



       A hosted hypervisor is software that runs within an operating system environment. A guest operating system thus runs at the third level above the hardware. Examples include VMware Server (formerly known as GSX), VMware Workstation, VMware Fusion, the open source QEMU, Microsoft's Virtual PC and Microsoft Virtual Server products, InnoTek's VirtualBox, as well as SWsoft's Parallels Workstation and Parallels Desktop.


There is seldom any requirement for the guest OS to be the same as the host OS. The guest system often requires access to specific peripheral devices to function, so the simulation must support the guest's interfaces to those devices.



Figure 8: Hosted Hypervisor.


We present the most important solutions regarding platform virtualization.





1.2.1 VMware


VMware Inc, a publicly listed company, develops proprietary virtualization software products for x86-compatible computers, in both commercial and freeware versions. The name VMware comes from the acronym VM, for virtual machine, combined with -ware from the second part of software.

The two main product categories produced from VMware are:


     Desktop software such as VMware Workstation, which allows users to run multiple instances of x86 or x86-64 compatible operating systems on a single physical PC. VMware Fusion provides similar functionality for users of the MacIntel platform, along with full compatibility with virtual machines created by other VMware products. For users without a license for VMware Workstation or VMware Fusion, VMware offers the freeware VMware Player, which can run (but not create) virtual machines.

     Server software such as VMware ESX Server and VMware Server (formerly called "GSX Server"). VMware ESX, an enterprise-level product, can deliver greater performance than the freeware VMware Server, due to lower system overhead. In addition, VMware ESX integrates into VMware Virtual Infrastructure, which offers extra services to enhance the reliability and manageability of a server deployment. VMware Server is also provided as freeware like VMware Player, but it can also create virtual machines.

VMware's desktop software runs atop Microsoft Windows, Linux, and Mac OS X (hosted), whereas VMware ESX Server runs directly on server hardware without requiring an additional underlying operating system (native, bare-metal).


Figure 9: VMware ESX Architecture.

VMware refers to the physical hardware computer as the host machine, and identifies the operating system (or virtual appliance) running inside a virtual machine as the guest. This terminology applies to both personal and enterprise-wide VMware software. Like an emulator, VMware software provides a completely virtualized set of hardware to the guest operating system. VMware software virtualizes the hardware for a video adapter, a network adapter, and hard disk adapters. The host provides pass-through drivers for guest USB, serial, and parallel devices. In this way, VMware virtual machines become highly portable between computers, because every host looks nearly identical to the guest. In practice, a systems administrator can pause operations on a virtual machine guest, move or copy that guest to another physical computer, and there resume execution exactly at the point of suspension. Alternately, for enterprise servers, a feature called VMotion allows the migration of operational guest virtual machines between similar but separate hardware hosts sharing the same storage area network (SAN).

However, unlike an emulator, such as Virtual PC for PowerPC Macintosh computers, VMware software does not emulate an instruction set for different hardware not physically present. This significantly boosts performance, but can cause problems when moving virtual machine guests between hardware hosts using different instruction-sets (such as found in 64-bit Intel and AMD CPUs), or between hardware hosts with a differing number of CPUs. Stopping the virtual-machine guest before moving it to a different CPU type generally causes no issues.

The VMware Tools package adds drivers and utilities to improve the graphical performance for different guest operating systems, including mouse tracking. The package also enables some integration between the guest and host systems, including shared folders, plug-and-play devices, clock synchronization, and cutting-and-pasting across environments.




1.2.2 XEN


Xen is a free software virtual machine monitor for IA-32, x86-64, IA-64 and PowerPC 970 architectures. It allows several guest operating systems to be executed on the same computer hardware at the same time. Xen originated as a research project at the University of Cambridge, led by Ian Pratt, senior lecturer at Cambridge and founder of XenSource, Inc. This company now supports the development of the open source project and also sells enterprise versions of the software. The first public release of Xen was made available in 2003. XenSource, Inc was acquired by Citrix Systems in October 2007, and XenSource's products have subsequently been renamed under the Citrix brand.



When Citrix Systems completed its acquisition of XenSource, the Xen project moved to xen.org. This move had been under way for some time, and it afforded the project an opportunity to make public the existence of the Xen Project Advisory Board (Xen AB), which currently has members from Citrix, IBM, Intel, Hewlett-Packard, Novell, Red Hat and Sun Microsystems. The Xen AB is chartered with oversight of the project's code management procedures, and with the development of a new trademark policy for the Xen mark, which Citrix intends to freely license to all vendors and projects that implement the Xen hypervisor; the requirements for licensing will be solely the responsibility of the Xen AB.

Regarding the system's structure, a Xen system is organized with the Xen hypervisor as the lowest and most privileged layer. Above this layer are one or more guest operating systems, which the hypervisor schedules across the physical CPUs. The first guest operating system, called in Xen terminology "domain 0" (dom0), is booted automatically when the hypervisor boots and is given special management privileges and direct access to the physical hardware. The system administrator logs into dom0 in order to start any further guest operating systems, called "domain U" (domU) in Xen terminology.

Modified versions of Linux, NetBSD and Solaris can be used as the dom0. Several modified Unix-like operating systems may be employed as guest operating systems (domU); on certain hardware, as of Xen version 3.0, unmodified versions of Microsoft Windows and other proprietary operating systems can also be used as guests if the CPU supports the Intel VT or AMD-V virtualization extensions.
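To make the dom0/domU workflow concrete, the sketch below shows what a minimal paravirtualized domU definition and its launch from dom0 might look like under Xen 3.x; all names, paths and values are illustrative assumptions, not taken from a specific installation:

```
# /etc/xen/guest1.cfg -- illustrative Xen 3.x domU configuration (paravirtualized Linux)
kernel  = "/boot/vmlinuz-2.6-xen"          # Xen-aware guest kernel
ramdisk = "/boot/initrd-2.6-xen.img"
memory  = 256                              # MiB assigned to the guest
name    = "guest1"
vif     = ['bridge=xenbr0']                # virtual NIC attached to the dom0 bridge
disk    = ['phy:/dev/vg0/guest1,xvda,w']   # LVM volume exported as the guest's disk
root    = "/dev/xvda ro"
```

From dom0, the administrator would then run "xm create /etc/xen/guest1.cfg" to boot the guest, and "xm list" to see the running domains.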

The primary benefits of server virtualization are consolidation, increased utilization, the ability to rapidly provision and start a virtual machine, and an increased ability to respond dynamically to faults by rebooting a virtual machine or moving it to different hardware. Other benefits are the ability to securely separate virtual operating systems, and the ability to support legacy software as well as new OS instances on the same computer. Xen's support for virtual machine live migration from one host to another allows workload balancing and the avoidance of downtime: Xen virtual machines can be "live migrated" between physical hosts across a LAN without loss of availability. During this procedure, the memory of the virtual machine is iteratively copied to the destination without stopping its execution; a stoppage of around 60–300 ms is required to perform a final synchronization before the virtual machine begins executing at its destination, providing the illusion of seamless migration. Similar technology is used to suspend running virtual machines to disk, switch to another virtual machine, and resume the first virtual machine at a later date.

Xen may also be used on personal computers that run Linux but also have Windows installed. Traditionally, such systems are used in a dual boot setup, but with Xen it is possible to start Windows "in a window" from within Linux, effectively running applications from both systems at the same time.


Figure 10: XEN Architecture.













1.2.3 OpenVZ


We conclude this section with a brief description of an open source solution. OpenVZ is an operating system-level virtualization technology based on the Linux kernel and operating system. OpenVZ allows a physical server to run multiple isolated operating system instances, known as containers, Virtual Private Servers (VPSs), or Virtual Environments (VEs). Each container performs and executes exactly like a stand-alone server; containers can be rebooted independently and have root access, users, IP addresses, memory, processes, files, applications, system libraries and configuration files.

As compared to virtual machines such as VMware, OpenVZ is limited in that it requires both the host and guest OS to be Linux (although Linux distributions can be different in different containers). However, OpenVZ claims a performance advantage; according to its website, there is only a 1-3% performance penalty for OpenVZ as compared to using a standalone server. An independent performance evaluation confirms this.

The OpenVZ project is an open source community project supported by Parallels and is intended to provide access to the code and ultimately for the open source community to test, develop and further the OS virtualization effort. It is also a proving ground for new technology that may evolve into the Parallels Virtuozzo Containers product offering.


The most important features are:


     Scalability: As OpenVZ employs a single kernel model, it is as scalable as the 2.6 Linux kernel; that is, it supports up to 64 CPUs and up to 64 GiB of RAM.

     Performance: The virtualization overhead observed in OpenVZ is limited, and can be neglected in many scenarios.

     Density: OpenVZ is able to host hundreds of containers.

     Mass-management: An administrator (i.e. root) of an OpenVZ physical server (also known as a Hardware Node or host system) can see all the running processes and files of all the containers on the system, which makes mass-management scenarios possible. Suppose VMware or Xen is used for server consolidation: in order to apply a security update to 10 virtual servers, an administrator must log into each one and run an update procedure. With OpenVZ, a simple shell script can update all containers at once.
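The mass-management point above can be sketched as a short shell script run on the Hardware Node. vzlist and vzctl are the standard OpenVZ management tools; the update command itself is an assumption that depends on the distribution installed inside each container:

```shell
#!/bin/sh
# Sketch: apply a package update inside every running container on this node.
# Assumes Debian-based containers; substitute yum/zypper commands as appropriate.
for CTID in $(vzlist -H -o ctid); do
    echo "Updating container $CTID"
    vzctl exec "$CTID" "apt-get update && apt-get -y upgrade"
done
```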
















1.3    Web 2.0


Web 2.0 is a trend in the use of World Wide Web technology and web design that aims to facilitate creativity, information sharing, and, most notably, collaboration among users. These concepts have led to the development and evolution of web-based communities and hosted services, such as social-networking sites, wikis, blogs, and folksonomies.

The term became notable after the first O'Reilly Media Web 2.0 conference in 2004. Although the term suggests a new version of the World Wide Web, it does not refer to an update to any technical specifications, but to changes in the ways software developers and end-users use the Web.

Web 2.0 websites allow users to do more than just retrieve information. They build on the interactive facilities of Web 1.0 to provide "network as platform" computing, allowing users to run software applications entirely through a browser. Users can own the data on a Web 2.0 site and exercise control over that data. These sites may have an "architecture of participation" that encourages users to add value to the application as they use it. This stands in contrast to traditional websites, which limited visitors to viewing content that only the site's owner could modify. Web 2.0 sites often feature a rich, user-friendly interface based on Ajax, Flex or similar rich-media technology. The sites may also have social-networking aspects. The concept of Web-as-participation-platform captures many of these characteristics. Bart Decrem, a founder and former CEO of Flock, calls Web 2.0 the "participatory Web" and regards the Web-as-information-source as Web 1.0.

At the same time, the impossibility of excluding group members who do not contribute from sharing the resulting benefits gives rise to the possibility that rational members will prefer to withhold their own effort and free-ride on the contributions of others.


We can summarize the main characteristics of Web 2.0:


     rich user experience

     user participation

     dynamic content

     web standards

     collective intelligence by way of user participation


After this brief introduction on the meaning of Web 2.0, we are going to describe the most important available technologies used in the Web 2.0 world.








1.3.1 Ajax


Like DHTML and LAMP, AJAX (Asynchronous JavaScript and XML) is not a technology in itself, but a term that refers to the use of a group of technologies. It is a group of inter-related web development techniques used for creating interactive web applications. The main characteristics of AJAX are:


     It increases the responsiveness and interactivity of web pages by exchanging small amounts of data with the server "behind the scenes", so that entire web pages do not have to be reloaded each time data must be fetched from the server. This increases the web page's interactivity, speed, functionality and usability.

     AJAX is asynchronous: extra data is requested from the server and loaded in the background without interfering with the display and behavior of the existing page. JavaScript is the scripting language in which AJAX calls are usually made. Data is retrieved using the XMLHttpRequest object, the core of AJAX, which gives browsers the ability to make dynamic, asynchronous data requests without reloading a page; it is available to scripting languages run in modern browsers. Alternatively, Remote Scripting can be used in browsers that do not support XMLHttpRequest. In any case, the asynchronous content need not be formatted in XML.

     Ajax is a cross-platform technique usable on many different operating systems, computer architectures, and web browsers as it is based on open standards such as JavaScript and the document object model (DOM). There are free and open source implementations of suitable frameworks and libraries.

     Ajax uses a combination of:

o  XHTML (or HTML) and CSS for marking up and styling information.

o  The DOM accessed with a client-side scripting language, especially ECMAScript implementations such as JavaScript and JScript, to dynamically display and interact with the information presented.

o  The XMLHttpRequest object is used to exchange data asynchronously with the web server. In some Ajax frameworks and in certain situations, an IFrame object is used instead of the XMLHttpRequest object to exchange data with the web server, and in other implementations, dynamically added <script> tags may be used.

o  XML is sometimes used as the format for transferring data between the server and client, although any format will work, including preformatted HTML, plain text and JSON. These files may be created dynamically by some form of server-side scripting.
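As a minimal sketch of the mechanism described above (the "/api/status" endpoint and the JSON payload shape are illustrative assumptions, not a real service), an Ajax request might look like this:

```javascript
// Minimal Ajax sketch: fetch a small piece of data without reloading the page.
// The endpoint "/api/status" and the payload shape are hypothetical.
function fetchStatus(onDone) {
  var xhr = new XMLHttpRequest();          // core Ajax object (browser-provided)
  xhr.open("GET", "/api/status", true);    // true = asynchronous request
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(parseStatus(xhr.responseText));
    }
  };
  xhr.send(null);
}

// The response need not be XML: here it is parsed as JSON.
function parseStatus(text) {
  return JSON.parse(text).status;
}
```

Only the changed fragment of the page is then updated from the callback, rather than the whole document being re-rendered.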


Finally, we conclude this brief description by recalling the main advantages and disadvantages of this technology.


     Bandwidth usage: By generating the HTML locally within the browser, and only bringing down JavaScript calls and the actual data, Ajax web pages can appear to load relatively quickly since the payload coming down is much smaller in size, and the rest of the layout does not have to be redrawn on each update.

     Separation of Data, Format, Style and Function: A less specific benefit of the Ajax approach is that it tends to encourage programmers to clearly separate the methods and formats used for the different aspects of information delivery via the web.


On the other hand, the following problems arise:


     Browser integration: the dynamically created page does not register itself with the browser history engine, so triggering the "Back" function of the users' browser might not bring the desired result.

     Response-time concerns: Network latency (or the interval between user request and server response) needs to be considered carefully during Ajax development. Without clear feedback to the user, preloading of data and proper handling of the XMLHttpRequest object, users might experience delays in the interface of the web application, something which they might not expect or understand.

     Search Engine Optimization: Websites that use Ajax to load data which should be indexed by search engines must be careful to provide equivalent Sitemaps data at a public, linked URL that the search engine can read, as search engines do not generally execute the JavaScript code required for Ajax functionality.

     JavaScript compliance: not all browsers handle JavaScript in the same way, and many users disable JavaScript in their browsers.



1.3.2    Flash & Flex


Adobe Flash is a set of multimedia technologies developed and distributed by Adobe Systems and earlier by Macromedia. Since its introduction in 1996, Flash technology has become a popular method for adding animation and interactivity to web pages; Flash is commonly used to create animation, advertisements, various web page components, to integrate video into web pages, and more recently, to develop Rich Internet applications (RIA).

Flash can manipulate vector and raster graphics and supports bi-directional streaming of audio and video. It contains a scripting language called ActionScript. It is available in most common web browsers and on some mobile phones and other electronic devices (using Flash Lite). Several software products, systems, and devices are able to create or display Flash, including the Adobe Flash Player. The Adobe Flash Professional multimedia authoring program is used to create content for the Adobe Engagement Platform, such as web applications, games and movies, and content for mobile phones and other embedded devices.

Files in the SWF format, traditionally called "Flash movies" or "Flash games", usually have a .swf file extension and may be an object of a web page, strictly "played" in a standalone Flash Player, or incorporated into a Projector, a self-executing Flash movie (with the .exe extension in Microsoft Windows). Flash Video (FLV) files have a .flv file extension and are used from within .swf files.

Flash is increasingly used as a way to display video clips on web pages, a feature available since Flash Player version 6.

The key to this success has been the player's wide distribution across browsers and operating systems, rather than any superior video quality or properties. It is available for many popular platforms, including Windows, Mac OS X and Linux. Flash is used as the basis for many popular video sites, including YouTube and Google Video.

One major flaw with multimedia embedded through Flash, however, is the considerable performance penalty placed on playback hardware compared with a purpose-built multimedia playback system. Many files that drop frames and skip audio when embedded within Flash play without any issues using other multimedia formats on the same hardware.


Flash Video (.flv files) is a container format, meaning that it is not a video format in itself, but can contain other formats. The video in Flash is encoded in H.263, and starting with Flash player 8, it may alternatively be encoded in VP6. The audio is in MP3. The use of VP6 is common in many companies, because of the large adoption rates of Flash Player 8 and Flash Player 9.


Adobe Flex is a collection of technologies released by Adobe Systems for the development and deployment of cross-platform rich Internet applications based on the Adobe Flash platform. Traditional application programmers found it challenging to adapt to the animation metaphor upon which the Flash Platform was originally designed. Flex seeks to minimize this problem by providing a workflow and programming model that is familiar to these developers. MXML, an XML-based markup language, offers a way to build and lay out graphical user interfaces. Interactivity is achieved through the use of ActionScript, mentioned above. Flex comes with a set of user interface components including buttons, list boxes, trees, data grids, several text controls, and various layout containers. Charts and graphs are available as an add-on. Other features like web services, drag and drop, modal dialogs, animation effects, application states, form validation, and other interactions round out the application framework. Unlike page-based HTML applications, Flex applications provide a stateful client where significant changes to the view don't require loading a new page. Similarly, Flex and Flash Player provide many useful ways to send and load data to and from server-side components without requiring the client to reload the view. Though this functionality offered advantages over HTML and JavaScript development in the past, the increased support for XMLHttpRequest in major browsers has made asynchronous data loading a common practice in HTML-based development too.
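As a minimal sketch of how these pieces fit together (the application itself is hypothetical), a Flex interface declares its layout in MXML and attaches interactivity through ActionScript:

```
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative Flex 3 application: layout declared in MXML,
     behaviour attached in ActionScript. -->
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            import mx.controls.Alert;
            private function greet():void {
                Alert.show("Hello from Flex");
            }
        ]]>
    </mx:Script>
    <mx:Button label="Say hello" click="greet()"/>
</mx:Application>
```

The MXML declaration mirrors the visual layout, while the ActionScript block holds the application logic, which is the separation of concerns Flex promotes.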


Currently, Flex has the largest market share of any framework for rich Internet applications, with a penetration of around 90 percent (a position that Microsoft Silverlight is challenging). When used properly, Flex enables a website to behave like a thick-client application (one that runs on the user's computer rather than on the Internet).

As with any client-side technology, there are drawbacks. Not all browsers start out with the Flash plug-in installed, and Flash is also updated from time to time. In either case, the end user is required to download a new version if he or she reaches a page that requires it. Other frameworks for rich Internet applications have the same issue, which is seen by some as a drawback since not all users will (or are permitted to) download the plug-in, and in many cases will navigate away from the page entirely.














1.3.3    Silverlight


In response to the proliferation of other frameworks used to create rich Internet applications such as Flex from Adobe and AJAX-based frameworks, Microsoft Silverlight was recently introduced. Microsoft Silverlight is a browser plug-in that allows web applications to be developed with features like animation, vector graphics, and audio-video playback - features that characterize a rich internet application.

Silverlight competes with products such as Adobe Flash, Adobe Flex, Adobe Shockwave, JavaFX, and Apple QuickTime. Version 2.0 brought improved interactivity and allows developers to use .NET languages and development tools when authoring Silverlight applications.

Silverlight was developed under the codename Windows Presentation Foundation/Everywhere (WPF/E). It is compatible with multiple web browser products used on Microsoft Windows and Mac OS X operating systems. A third-party free software implementation named Moonlight is under development to bring compatible functionality to GNU/Linux. Mobile devices, starting with Windows Mobile 6 and Symbian (Series 60) phones, will also be supported.


The main features of Silverlight are:


     High-quality video experience: Silverlight enables very high quality video, embedded in highly graphical websites. The same research and technology that was used for VC-1, the codec that powers Blu-ray and HD DVD, is used by Microsoft today in its streaming media technologies.

     Cross-platform, cross-browser: web applications work on most browsers and most operating systems.

     Developer and graphic designer interaction: developers familiar with Visual Studio will be able to develop Silverlight applications very quickly, and those applications will work on Macs and Windows. Developers can focus strictly on the back end of the application core, while leaving the visuals to the graphic design team using the power of XAML.





















1.3.4    JavaFX

JavaFX is Sun's family of products for creating Rich Internet Applications with immersive media and content. The JavaFX products include a runtime and a tools suite that web scripters, designers and developers can use to quickly build and deliver expressive, rich interactive applications for desktop, mobile, TV and other platforms. JavaFX technology provides the presentation layer for the Java ecosystem, layered over the Java runtime environment.

Figure 11: JavaFX Platform.


Sun currently hosts an open source community project for JavaFX, OpenJFX, where developers can sign up for a private preview of the JavaFX SDK, as well as download the JavaFX Script plugin for NetBeans 6.1. JavaFX is anticipated to compete on the desktop with Adobe AIR, OpenLaszlo, and Microsoft Silverlight. It may also target the Blu-ray Disc's interactive BD-J platform, although as yet no plans for a Blu-ray release have been announced.

Currently, JavaFX consists of JavaFX Mobile and JavaFX Script:


     JavaFX Mobile is a Java operating system for mobile devices initially developed by SavaJe Technologies and purchased by Sun Microsystems in April 2007. It is part of the JavaFX family of products. The JavaFX Mobile operating system provides a platform for PDAs, smartphones and feature phones. It features a Java SE and Java ME implementation running on top of a Linux kernel.

It is understood that Sun will distribute JavaFX Mobile as a binary operating system to device manufacturers who will brand the interface to differentiate their product.

     JavaFX Script is a high-performance declarative scripting language for building and delivering the next generation of rich Internet applications for desktop, mobile, TV, and other platforms. It forms part of the JavaFX family of technologies on the Java Platform. JavaFX targets the Rich Internet Application domain (competing with Adobe Flex and Microsoft Silverlight), specializing in the rapid development of visually rich applications for the desktop and mobile markets. JavaFX Script works with integrated development environments such as NetBeans and Eclipse. The main features of this scripting language are:

o  JavaFX Script uses a declarative syntax for specifying GUI components, so a developer's code closely matches the actual layout of the GUI.

o  Through declarative databinding and incremental evaluation, JavaFX Script enables developers to easily create and configure individual components by automatically synchronizing application data and GUI components.

o  JavaFX Script will work with all major IDEs, including NetBeans, which is the reference implementation IDE for Java development.

o  Unlike many other Java scripting languages, JavaFX Script is statically typed and will have most of the same code structuring, reuse, and encapsulation features that make it possible to create and maintain very large programs in Java.

o  JavaFX Script is capable of supporting GUIs of any size or complexity.

o  JavaFX Script makes it easier to use Swing, one of the best GUI development toolkits of its kind.
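As a small illustration of the declarative syntax described above, a 2008-era JavaFX Script GUI reads much like the layout it produces; the frame and label shown here are purely illustrative:

```
// Illustrative JavaFX Script (2008-era syntax): the code mirrors the GUI structure.
import javafx.ui.*;

Frame {
    title: "Hello"
    width: 200
    height: 100
    content: Label { text: "Hello, world" }
    visible: true
}
```

Each attribute of the Frame object literal corresponds directly to a property of the resulting window, which is what makes the source closely match the actual layout.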






























1.3.5    Web 2.0 technologies comparison


The goal of all these frameworks is to be able to build Rich Internet Applications more easily and to make the user experience as rich as possible.


     Currently a lot of this kind of functionality is built with AJAX (asynchronous JavaScript, CSS, DOM manipulation and XML) and Flash/Flex. Manual construction of applications from the four AJAX components is tough, however, and companies are therefore trying to create a silver bullet for the easy creation of RIA applications.

     Creating applications with Flash/Flex is relatively easy and makes the applications really rich, but it is vendor-specific (Adobe).

     Some of the newly announced frameworks are more open source (JavaFX) than others (Silverlight).


To make the comparison easier, a table that lists different aspects of the frameworks is presented:








Aspect                 | Silverlight (1.1 Alpha)                  | JavaFX                            | Flex
-----------------------|------------------------------------------|-----------------------------------|-------------------------------------
Built-in UI Controls   | Very limited to none                     | Via Swing                         |
Development IDE        | Visual Studio 2008, .NET Platform 3.5    | NetBeans 6.01 with JavaFX plugin  | Flex Builder 3.0 (Eclipse platform)
Browser Client         | Silverlight 1.1 Alpha plug-in            | Java Plugin with JavaFX extension | Adobe Flash Player 9
Languages              | XAML, JavaScript (C#, VB.Net, ASP.Net)   |                                   |















1.4    Application Streaming


1.4.1    Application Deployment


All application virtualization software vendors have their own definition of application virtualization. Basically it comes down to this: application virtualization enables the deployment of software without modifying the local operating system or file system. It allows software to be delivered and updated in an isolated environment, ensuring the integrity of the operating system and all applications. Application conflicts, and the need for regression testing, are significantly reduced. A single application can be bundled and deployed to multiple operating system versions. Applications are easier to provision, deploy, upgrade, and roll back.


In our opinion, there are three approaches to application virtualization:





For our purposes, we will present in detail the most important features of the two main software products of the last approach.


Several years ago, surveys of data center activity attracted attention because they indicated that few servers averaged more than 20 percent utilization of their CPU, memory and disk hardware, and some were far worse. The IT industry quickly responded by delivering virtualization technologies, and many organizations with large data centers are now deploying these technologies successfully to improve hardware utilization. A similar trend is now emerging which focuses on PC software: many organizations are beginning to recognize that the efficiency of their PC software utilization is also in the range of 20 percent, and in some cases much worse.


This inefficient utilization of software assets has become very costly for companies. Software license fees are generally based on the number of copies of a specific application installed on each PC in the company, regardless of how often each copy of the application is used. In order to ensure that users have even rarely used applications available when needed, companies tend to keep an overabundance of software on each PC and, therefore, pay far more in license fees than if the fees were tied to actual software usage. A typical license agreement also allows for software to be uninstalled and then re-installed on another PC as many times as desired, as long as it only resides on one PC at a time. While it would be highly impractical for a corporation to manually move applications from one PC to another on a regular basis to accommodate changing needs, some companies have implemented innovative streaming technologies which allow for the automatic provisioning of software assets based on user demand.

For example, AppStream Inc.'s dynamic license management software streams an application to the user's PC when the user needs it, effectively eliminating the need to pay application license fees for unused software. This technology also has benefits for software license compliance and software change management.

Computer application streaming is indeed a form of on-demand software distribution.

The basic concept of application streaming has its foundation in the way modern computer programming languages and operating systems produce and run application code. Indeed, only specific parts of a computer program need to be available at any instance for the end user to perform a particular function. This means that a program need not be fully installed on a client computer, but parts of it can be delivered over a low bandwidth network as and when they are required. Application streaming is usually combined with application virtualization, so that applications are not installed in the traditional sense.



We can therefore summarize the most important features and benefits:

     Applications are streamed on demand from a central server; when the user has finished with an application, all components are completely removed, as if the application had never been there.

     The cost of installation is sharply reduced. Each staff member's requirements can be noted and the system set up to deliver what is needed when the staff member logs into the network.

       The cost of upgrading software is also reduced. When the organization decides to move to a new version of a software product, it is a simple matter to tell the application streaming software to deliver the new version when the user logs in.

       The cost of software licenses and license administration can be reduced. The organization only needs to acquire enough licenses to handle what is being done now, not enough for everyone to have a license. This means that organizations with worldwide operations should be able to purchase far fewer licenses. Systems not in use do not need to be a "license prison."

       Reduced disk space requirements: no large downloads are required.

       Mobile users are able to access new or updated applications from any location.



Before going forward, it is necessary to underline the main differences among Application Streaming products:


     The possibility of working with streamed applications with or without a server connection.

     The modification of the existing environment or OS: a growing number of vendors offer desktop streaming software that provisions the entire desktop environment from a server to a desktop PC (or thin client). Altiris, AppStream, and Microsoft (through its recent acquisition of Softricity) have pushed the concept to the next level, streaming applications rather than a complete desktop environment. This allows greater flexibility in what is provisioned, because IT can create a basic operating system image and then individual images for each application, and combine them as needed on the fly. You do not need a separate desktop image for each combination of applications.



The figure below represents an example of an application streaming utility.


Figure 12: Application Streaming Example



In the rest of this section we will present the most important available Application Streaming technologies and propose a final comparison between them.

















1.4.2    SoftGrid


Microsoft Application Virtualization (formerly Microsoft SoftGrid) is an application virtualization and application streaming solution from Microsoft.

This platform allows applications to be deployed in real-time to any client from a virtual application server. It removes the need for local installation of the applications. Instead, only the SoftGrid runtime needs to be installed on the client machines. All application data is permanently stored on the virtual application server. Whichever software is needed is streamed from the application server on demand and run locally. The SoftGrid stack sandboxes the execution environment so that the application does not make changes to the client itself. Softgrid applications are also sandboxed from each other, so that different versions of the same application can be run under Softgrid concurrently. This approach enables any application to be streamed without making any changes to its code.

SoftGrid thus allows centralized installation and management of deployed applications. It supports policy based access control; administrators can define and restrict access to the applications by certain users by defining policies governing the usage. SoftGrid also allows replication of the applications across multiple application servers for better scalability and fault tolerance, and also features a tracking interface to track the usage of the virtualized application.

The SoftGrid client runtime presents the user with a list of applications to which the user has access. The user can then launch a virtualized, streamed instance of an application. Depending on the configuration, the system administrator can be notified of the action via email, an explicit confirmation from the administrator can be required before the application starts streaming and initializing, or the system can simply check Active Directory for the user's rights and stream the application if the user is authorized to run it. The SoftGrid client can also install local shortcuts that bootstrap the process of launching individual virtualized software instances.

In detail SoftGrid is capable of packaging applications for on-demand, streamed delivery into virtualized end point runtime environments. The figure below depicts the first step in this process.


Figure 13: Softgrid Application Packaging


A workstation is configured with the SoftGrid Sequencer application. As the SoftGrid administrator installs the target application on the workstation, the sequencer monitors all installation steps, including changes to the registry. The administrator can also select specific components to be included in the virtualized application package, such as DLLs as well as Java and .NET components. Further, the application can be configured to store information in a centralized location (e.g. a secure data center).

The final outcome of the sequencer process is a set of four files that comprise the virtualized application, with an initial application load just big enough to load and initially execute the application. According to Microsoft, the load size is approximately 20 to 40 percent of the total application size.

The four files are placed on a SoftGrid application server for distribution. The administrator grants access to the application by adding approved users to a related AD group. Only members of the group will be able to see the application icon on their desktops or access the application files on the server. To reverse the process and revoke a user's access, simply remove him or her from the group.

Once a user is added to the proper group, the application icon will appear on her desktop at next login. If the user is already logged in, she can force a refresh of her desktop by using a SoftGrid utility typically found in the system tray. The application is accessed by double-clicking the icon. The figure depicts what occurs when the user runs the application for the first time.


Figure 14: Softgrid Application Streaming


The four files created and installed on the SoftGrid Application Server are accessed by the desktop, creating a virtual application environment on the user's machine with the bare minimum of application components streamed into it. The result is a self-contained application runtime space that virtualizes the following components:

     Registry – registry changes unique to the application are not made to the main OS on the desktop. Rather, they are virtualized within the isolated application runtime space.

     File system – calls from the application for local disk access can be redirected to access DLLs and other components from a virtual file system.


     INI files

     Process environment
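The isolation idea behind these virtualized components can be sketched as a copy-on-write overlay: writes land in a private per-application layer while the host state stays untouched. The `VirtualLayer` class and the registry key below are illustrative only, not SoftGrid's actual mechanism:

```python
# Minimal copy-on-write overlay, sketching how a virtualized runtime space
# can isolate registry/file-system changes: writes land in a private layer,
# reads check the private layer first and fall through to the host.
# This illustrates the isolation idea, not SoftGrid's actual code.

class VirtualLayer:
    def __init__(self, host):
        self.host = dict(host)   # base OS state (never modified here)
        self.private = {}        # per-application virtualized changes
        self.deleted = set()     # keys hidden only inside the sandbox

    def read(self, key):
        if key in self.deleted:
            raise KeyError(key)
        if key in self.private:
            return self.private[key]
        return self.host[key]

    def write(self, key, value):
        self.deleted.discard(key)
        self.private[key] = value        # never touches the host

    def delete(self, key):
        self.private.pop(key, None)
        self.deleted.add(key)

host_registry = {r"HKLM\Software\Version": "1.0"}
sandbox = VirtualLayer(host_registry)
sandbox.write(r"HKLM\Software\Version", "2.0")   # app "updates" the registry
print(sandbox.read(r"HKLM\Software\Version"))    # 2.0 inside the sandbox
print(host_registry[r"HKLM\Software\Version"])   # 1.0 on the host, untouched
```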



We can therefore summarize the most important features of this solution:

       Applying patches in a virtualized environment is a simple rebuild of the appropriate SoftGrid package. The next time a user runs the application, the updated version is automatically streamed to the desktop.

       Help Desk costs associated with failed application installations, overwritten application components, corrupted registries, etc. are all but eliminated when files and settings unique to an application are virtualized (Customers have cut help-desk costs by up to 30% by reducing call volume for application-related problems, and reduced end-user downtime by up to 80% by easing challenges with business continuity of applications).

       Use of applications accessed via the SoftGrid server is tracked. Further, administrators can link an active instance of a running application to a license. This metering of applications helps organizations remain compliant with licensing agreements.

       If a virtualized application environment is infected with malware, the threat is contained—prevented from spreading to other applications or the base operating system.

       The threat of data leaks is mitigated due to the virtualization of the local cache associated with application processing. Further, configuration of applications such as the Microsoft Office Suite can 'encourage' users to save documents in a secure, centralized environment.

       Application access is controlled by group membership. In addition, applications that run on laptops can be configured to stop running if the user doesn't authenticate to the enterprise network within a specified period. This prevents thieves from using laptop applications indefinitely.

       Mobile users are able to access patched or updated applications from any location.

       Minimize application conflicts and regression testing: By eliminating the requirement to install applications on desktops or laptops, and shielding the OS and applications from changes normally created when applications are installed and run, Microsoft SoftGrid prevents problems that hinder deployments. This also minimizes the need to perform regression testing and, as a result, speeds deployments.


1.4.3    AppStream


AppStream's on-demand application distribution and license management platform, developed by Symantec, serves the application management needs of enterprise environments, providing high productivity with controlled, guaranteed access to Windows applications from any location at any time, including for remote and mobile users.

AppStream has a desktop management platform that is loaded on a network server and streams applications to the desktop. It is built to deliver PC applications on demand: when the PC user initiates an application, the application is streamed directly to the PC and begins to run. AppStream is ready to run as soon as it is installed and configured, as it requires no special program coding or changes to any of the applications running on PCs. It works by loading a lightweight agent onto each client PC, which then manages the loading of applications from the server to the PC. The AppStream software does not actually install the full application on the PC; it simply streams enough of the application to the PC for it to be able to run. Typically, PC users use only a small part of any PC application, often as little as 10 percent. AppStream takes advantage of this fact by dividing the PC application into segments, initially streaming only the segments that are needed to launch the application and the features that the user normally uses.



AppStream keeps track of how users work with their applications and adjusts the usage profile dynamically if usage patterns change. Technically, AppStream communicates with PCs via a protocol that is HTTP compliant and can work through firewalls, proxy servers, and over VPNs. It is engineered to make efficient use of the network, optimizing network bandwidth and using local PC caches to smooth the network load.
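A minimal sketch of this segment-on-demand model, assuming a hypothetical `StreamingClient` with a local cache: launch-critical segments are fetched up front, and any other segment is streamed the first time a feature touches it, then served from cache:

```python
# Illustrative sketch of segment-on-demand streaming: the client fetches
# only the segments needed to launch, caches them locally, and pulls
# further segments the first time a feature touches them. Segment names
# and the in-memory "server" dict are hypothetical.

class StreamingClient:
    def __init__(self, server_segments, launch_set):
        self.server = server_segments          # segment -> bytes, on the server
        self.cache = {}                        # local PC cache
        self.fetches = 0
        for seg in launch_set:                 # initial, launch-critical load
            self._fetch(seg)

    def _fetch(self, seg):
        if seg not in self.cache:
            self.cache[seg] = self.server[seg] # one round-trip to the server
            self.fetches += 1
        return self.cache[seg]

    def use_feature(self, seg):
        return self._fetch(seg)                # cached after first use

server = {"launcher": b"...", "editor": b"...", "mail_merge": b"..."}
client = StreamingClient(server, launch_set=["launcher", "editor"])
client.use_feature("editor")       # already cached: no extra round-trip
client.use_feature("mail_merge")   # first use: streamed now, then cached
print(client.fetches)              # 3: two at launch, one on demand
```

The same cache is what makes off-line laptop use possible in these products: once every segment has been streamed, no further server round-trips are needed.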

The streaming process works in the same way both for local PC applications, such as Microsoft Office, and client/server applications that are distributed between the PC and network servers. It can also provision PC applications to laptops, but in that case it fully loads the application so that the laptop can work while disconnected.

The platform runs on a Windows Server which is typically configured for high availability and fail-over, so that the service it provides runs 24x7. Hundreds of desktops can be managed from a suitably configured server. For very large desktop populations, more servers can be added in a multi-tier deployment, with load balancing and load sharing configured between them. There is no obvious limit to how many desktops can be managed in this way. The figure below depicts the AppStream architecture.

Figure 15: AppStream Application Streaming


Unlike more traditional models, the ability to distribute applications, monitor usage, and manage licenses is integrated in a single platform. IT retains control over application versioning and provisioning but gains flexibility to give off-line use of applications and the ability to use local graphics cards and other local resources.

The AppStream server holds its own database of usage information, both to enable dynamic streaming and to accumulate usage statistics. Its management console allows the platform to be configured and managed, and reports to be requested on desktop activity and license management.



The platform permits a fine level of definition of user/application access. User access to any given PC application can be time-based, so that it is only available between specific days, or "time-bombed", so that access rights cease on a given day (to cover the situation when staff leave). If desired, unlimited access to an application can be given to a user. Application access can be limited to specific groups of users or even to a single individual, and different versions of the same application can be assigned to different users if need be. Administrators can de-provision or re-provision applications at any time.
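The access rules described above can be sketched as a simple policy check. The entitlement records, field names and `may_launch` function below are hypothetical illustrations of time-windowed ("time-bombed"), group-based and per-user access:

```python
# Hypothetical sketch of the access rules described above: entitlements can
# be limited to a date window ("time-bombed"), to a user group, and to a
# specific package version. The policy records are illustrative only.
from datetime import date

def may_launch(user, groups, app, today, entitlements):
    for e in entitlements:
        if e["app"] != app:
            continue
        if e.get("user") not in (None, user):
            continue                       # entitlement is for someone else
        if e.get("group") and e["group"] not in groups:
            continue                       # user is outside the target group
        start = e.get("start", date.min)
        end = e.get("end", date.max)       # "time-bomb": rights cease after this day
        if start <= today <= end:
            return e.get("version", "latest")
    return None                            # de-provisioned or out of window

policy = [
    {"app": "cad", "group": "engineering", "version": "9.1",
     "start": date(2008, 1, 1), "end": date(2008, 12, 31)},
    {"app": "cad", "user": "contractor7", "end": date(2008, 6, 30)},
]
print(may_launch("alice", {"engineering"}, "cad", date(2008, 10, 30), policy))  # 9.1
print(may_launch("contractor7", set(), "cad", date(2008, 10, 30), policy))      # None
```

Note how the second lookup fails: the contractor's entitlement expired at the end of June, so by October the application is no longer streamed to him.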

To ensure that license agreements are never violated, the maximum number of installations of a particular application can be specified, and this value will never be exceeded no matter who requests access. To ensure that the high-water mark is efficiently implemented, AppStream cleans up any idle application packages that remain in the PC cache after a predefined period of time.
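A sketch of this high-water-mark behavior, assuming a hypothetical `LicensePool` that caps concurrent instances and reclaims licenses from packages idle beyond a predefined period (times are plain seconds to keep the illustration self-contained):

```python
# Sketch of high-water-mark license enforcement as described above: at most
# max_licenses concurrent instances, whoever asks, plus cleanup of packages
# idle longer than a predefined period.

class LicensePool:
    def __init__(self, max_licenses, idle_limit):
        self.max = max_licenses
        self.idle_limit = idle_limit
        self.active = {}            # pc_name -> last-activity timestamp

    def request(self, pc, now):
        self._cleanup(now)
        if pc in self.active or len(self.active) < self.max:
            self.active[pc] = now   # grant (or refresh) the license
            return True
        return False                # high-water mark reached: refuse

    def _cleanup(self, now):
        # Reclaim licenses from packages idle beyond the limit.
        for pc, last in list(self.active.items()):
            if now - last > self.idle_limit:
                del self.active[pc]

pool = LicensePool(max_licenses=2, idle_limit=600)
print(pool.request("pc1", now=0))     # True
print(pool.request("pc2", now=10))    # True
print(pool.request("pc3", now=20))    # False: cap reached, no exception
print(pool.request("pc3", now=700))   # True: pc1 and pc2 reclaimed as idle
```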

AppStream automatically reports on the efficiency of software utilization, identifying the usage frequency of all desktop applications and highlighting those applications that are under-utilized. It also provides a graphical global view of usage levels on an enterprise wide basis. A whole series of reports can be generated that provide details of users, user access rights, software package usage and license levels. Reports are customizable and the AppStream repository of information can be accessed directly via any reporting tool that is compliant with ODBC.

From a user perspective, AppStream is almost invisible, both in terms of how the Windows interface looks and how it performs. The AppStream agent is designed to load applications as fast as possible, holding software components in local cache and only using the program components that are needed. The whole configuration is self-optimizing, ensuring that the minimum amount of network bandwidth is consumed so that communication bottlenecks do not arise.

As a consequence, PC applications behave as if they were fully installed on the PC, with very little difference in wait times when an application loads. An application may take slightly longer the very first time it loads, but from then on it is likely to be held in cache on the PC and it will load as fast as if the application were installed locally. In all other respects the users will probably be unaware of how applications load. Indeed it is possible to mix and match applications with some being local and some downloaded and the user is unlikely to guess which are which.

Furthermore, AppStream is easy to administer. The web-based control and management console and web-based infrastructure let IT staff perform management tasks from anywhere within the corporate intranet or extranet. The robust provisioning process ensures that on-demand access can be offered to employees without concern that end users will be using applications they shouldn't. And compatibility with Microsoft's Active Directory and the Lightweight Directory Access Protocol (LDAP) standard ensures that AppStream integrates easily with any existing enterprise entitlement system.


Finally, the following table summarizes the most important capabilities of the AppStream.NOW software and their related benefits.




Provides applications on demand: increases end-user productivity through better, quicker access to needed applications.

Software distribution and deployment without IT intervention: lowers administrative support costs; IT is not required to push down all applications.

Enables assigning different package versions to different end users: increases the flexibility of the enterprise computing environment; end users can pull down the version they need.

Reduces the steps required to deploy an application: speeds software distribution of applications to end users.

Provides sophisticated access provisioning and limiting: ensures that end users are able to access the applications they need, while preventing access by unauthorized end users.

Enables interoperability between streamed and traditionally installed applications: lowers administrative requirements and allows for phased adoption.

Provides a single point of access for end users, software packages, licensing and servers: simple, intuitive administration.




















1.4.4    Application streaming technologies comparison


To conclude the presentation of these software packages, we summarize their most important features in the following table.









The comparison criteria are the following:

- Virtualized applications can run on clients without a locally installed agent.

- Instant launch from a remote location: the first blocks needed to start the application are cached locally on the client; when more features are used, more blocks are cached.

- Centrally controlled access: management software is included that can manage authorization of application delivery; an agent locally installed on the client is required.

- Off-line usage: applications can be launched even when a user is off-line (for example on a laptop); the streamed application is completely cached locally.

- Application interconnectivity/binding: virtualized applications, although isolated, can be connected to each other. For example, the .NET 2.0 framework is packaged once, and applications that need the .NET framework connect to the virtualized .NET package.

- Executes in user mode only: there is no interaction with the kernel of the OS, so applications cannot crash the OS.

- License management: can the usage of the applications be controlled? How many licenses of an application are held, and how many times is the application (concurrently) in use?

- Tracking and reporting: the usage of applications can be tracked and monitored, and reports can be created.

The supported platforms and application types compared are:

- 16-bit applications (only when run on a 32-bit OS)

- 64-bit applications

- Windows 2000

- Windows XP

- Windows Vista 64-bit

- Windows Server 2008 (TS) 64-bit








1.5    Technologies Comparison


In this chapter we analyzed different methods for delivering applications on demand and for fast deployment of services.

The basic idea that we have to account for is represented in the figure below.



Figure 16: Devices Interaction in application streaming.



In this figure, three crucial points are shown:


       Network: the technologies have to support standards and protocols for different types of networks (WAN, LAN, PAN, etc.); as we saw in [REF DELIVERABLE 1], public operators involved in emergency situations can be far away or in the same building.

       Devices: the technologies have to allow full device interoperability; (i.e. public operators must be able to use different devices without any interaction issue).

       Application on demand: the technologies must deliver applications on demand and services as fast as possible.


We already analyzed the features of every tool; the aim of this section is to summarize the crucial points for the selected technologies, proposing a comparison table. We do not include the virtualization systems in this section, since they are not strictly connected to mobile and portable devices.





VNC is a graphical desktop-sharing system which uses the RFB protocol to remotely control another computer. It transmits keyboard and mouse events from one computer to another, relaying graphical screen updates back in the other direction over a network.

VNC is platform-independent; a VNC viewer on any operating system usually connects to a VNC server on any other operating system. There are clients and servers for almost all GUI operating systems and for Java. Multiple clients may connect to a VNC server at the same time.

It has improved a bit over the years, but still has several flaws:

     A VNC server is required for each OS. Many operating systems have one, but some do not.

     RFB does not work well over high-latency connections.

     All RFB clients and servers are only moderately adaptive to bandwidth.
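The RFB model underlying VNC can be illustrated with a toy sketch: input events travel to the server, and only the changed regions of the framebuffer travel back. The four-pixel "screen" and event format below are purely illustrative:

```python
# Toy sketch of the RFB idea behind VNC: the client sends keyboard/mouse
# events in one direction, and the server sends back only the rectangles
# of the framebuffer that changed. The 2x2 "screen" and event format are
# purely illustrative, not the real wire protocol.

class RFBServer:
    def __init__(self, framebuffer):
        self.fb = framebuffer               # {(x, y): pixel_value}
        self.shadow = dict(framebuffer)     # what the client last saw

    def handle_event(self, event):
        # A pointer event "draws" at the click position (a stand-in for
        # whatever the remote desktop actually does with the input).
        if event["type"] == "pointer":
            self.fb[(event["x"], event["y"])] = "ink"

    def framebuffer_update(self):
        # Send only the dirty regions (here: dirty pixels), as RFB does.
        dirty = {p: v for p, v in self.fb.items() if self.shadow[p] != v}
        self.shadow.update(dirty)
        return dirty

server = RFBServer({(0, 0): "white", (1, 0): "white",
                    (0, 1): "white", (1, 1): "white"})
server.handle_event({"type": "pointer", "x": 1, "y": 1})
print(server.framebuffer_update())   # only the changed pixel travels back
print(server.framebuffer_update())   # nothing changed since: empty update
```

The second update being empty is exactly why RFB copes poorly with high-motion screens but well with mostly static desktops: traffic is proportional to how much of the screen changes.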




ICA is a proprietary protocol for an application server system, designed by Citrix Systems. The protocol lays down a specification for passing data between server and clients, but is not bound to any one platform.

Besides Windows, ICA is also supported on a number of Unix server platforms and can be used to deliver access to applications running on these platforms. The client platforms need not run Windows; for example, there are clients for Mac, Unix, Linux, and various Smartphones. ICA client software is also built into various thin client platforms.

The ICA protocol is actually optimized for low bandwidth.





RDP is a multi-channel protocol that allows a user to connect to a computer running Microsoft Terminal Services. Clients exist for most versions of Windows (including handheld versions), and other operating systems such as Linux, FreeBSD, Solaris, Mac OS X, and PalmOS. The server listens by default on TCP port 3389. Microsoft refers to the official RDP client software as either Remote Desktop Connection (RDC) or Terminal Services Client (TSC).







AJAX refers to the use of a group of technologies for building Rich Internet Applications. Because it is based entirely on open standards, very compact and high-performance results are possible, but it has some big drawbacks:

     It requires a fairly good working knowledge of asynchronous JavaScript, CSS, DOM manipulation and XML, and the construction of applications with these components is quite a pain.

     Not all browsers handle JavaScript in the same way, and many users disable JavaScript in their browsers. Web developers are indeed looking for at least some assistance to smooth over browser differences.




Flash is software for creating rich, interactive content for digital, web, and mobile platforms; it provides the tools needed to deliver an engaging user experience. Flex is an application development solution for creating and delivering cross-platform rich Internet applications (RIAs) within the enterprise and across the web. Its client runtime is based upon the Flash Player. Flex and Flash have complementary strengths. Flash is the leading authoring tool for web developers, multimedia professionals, animators, and videographers who want to create rich interactive content. Flex 2 products enable more application developers to leverage the powerful Flash runtime to create data-driven RIAs. In addition, developers can use Flash and Flex Builder together to add rich interactive elements to a structured, Flex-based application. Flash/Flex is relatively easy to use and makes applications really rich, but it is vendor-specific (Adobe). Not all browsers start out with the Flash plug-in installed, and Flash is also updated from time to time. In either case, the end user is required to download a new version upon reaching a page that requires it (and in many cases users will navigate away from the page entirely).




Silverlight is a cross-browser, cross-platform plug-in for delivering the next generation of Microsoft .NET-based media experiences and rich interactive applications for the Web. Since Silverlight is meant to run in the browser, it should be seen much more as a competitor to Flash/Flex than to JavaFX, whose runtime runs outside the browser. Silverlight has the advantage that it can easily be put on every Windows computer by letting it piggy-back on an automatic Windows update.

While Silverlight 1.0 only had basic capabilities in terms of simple animation and media support, using JavaScript as the primary scripting language, Silverlight 1.1/2.0 is a completely different animal. Silverlight 2.0 offers a complete .NET common language runtime in the browser, including managed versions of JavaScript and Python that compile to binary on the client and run extremely quickly. By supporting Python (and Ruby as well, though not in the current alpha distribution) in the client, Silverlight's CLR in the browser now also supports the Dynamic Language Runtime (DLR), giving Silverlight the richest support for RIA client-side languages currently available. Of course, sporting a lightweight version of .NET and its libraries comes at some cost, particularly download and installation times.




Sun has created a very interesting new entry in the RIA space. Designed to leverage the full breadth and depth of the extremely mature and robust Java platform, JavaFX is a scripting language that doubles as a declarative programming model. With the express goal of making it significantly easier to create Rich Internet Applications than it is with current Java technologies, JavaFX offers some serious productivity-oriented features: a highly efficient Model-View-Controller (MVC) data-binding construct in the scripting language itself, declarative event triggers for assertions and CRUD, and even some cutting-edge features such as extents (a notation that lets you see all class instances of a certain type). Some of these mechanisms raise concerns about sacrificing long-term code maintainability on the altar of code efficiency, but it is a pretty well thought-out model. JavaFX Script takes advantage of the ubiquity of the Java Runtime Environment (JRE) across devices and enables creative professionals to begin building applications based on their current knowledge base, but it also has some drawbacks:

     It requires the installation of at least one additional plug-in.

     It is a new technology, not yet fully tested and not ready to be compared with, for example, Flash.




SoftGrid is an application virtualization and application streaming solution from Microsoft. This platform allows applications to be deployed in real time to any client from a virtual application server, removing the need for local installation of the applications. All application data is permanently stored on the virtual application server; whichever software is needed is streamed from the application server on demand and run locally. Unfortunately, interoperability between different application environments is not implemented, and the SoftGrid runtime needs to be installed on the client machines. It still suffers from usability quirks and an overly complex sequencing process, and it lacks support for headless services.




AppStream has a desktop management platform that is loaded on a network server and streams applications to the desktop. It is built to deliver PC applications on demand: when the PC user initiates an application, the application is streamed directly to the PC and begins to run. AppStream is ready to run as soon as it is installed and configured, as it requires no special program coding or changes to any of the applications running on PCs. By acquiring AppStream, Symantec gained a much-needed streaming capability to support its already robust virtualization layer. The combined solution allows applications to be launched from a Web browser, and headless services are supported. However, the level of integration between the OEM components is imperfect, simple deployment tasks require too many steps, and the initial response time for virtualized applications is slow.





The products presented, Flash/Flex among the others, can be compared along four dimensions: bandwidth usage, browser compatibility, platform compatibility and supported functions.


2.  Mobile Devices


In [REF DELIVERABLE 1] we analyzed emergency scenarios and saw the importance of application delivery and of the operators' equipment during such situations. Application and service deployment needs to be as fast as possible, and the operators' devices need to be small and portable.

In the first chapter we presented the available technologies regarding the application on demand service; the aim of this chapter is instead to provide a full overview of the available mobile devices including operating systems.

In the first section we present the most important environments for developing software for small devices; in the second we present the operating systems; and in the third we give a brief overview of the most important current devices.


2.1    Development Environment


There are several ways to add software to a mobile device, depending on the device capabilities. The latest devices can have all the capabilities described below, making the device more versatile.


2.1.1    Java (J2ME)


J2ME (Java 2 Platform Micro Edition) is a specification of a subset of the Java platform aimed at providing a certified collection of Java APIs for the development of software for small, resource-constrained devices such as cell phones, PDAs and set-top boxes.

Most mobile phones can run J2ME applications, but JVM capabilities can differ greatly from one phone to another. Access to the network, GPS, Bluetooth, etc. is still regulated by the phone's operating system.

The JVM in the mobile phone uses the Connected Limited Device Configuration (CLDC), which contains a strict subset of the Java class libraries and is the minimal amount needed for a Java virtual machine to operate. CLDC is basically used to classify a myriad of devices into a fixed configuration.

Using this common configuration, it is possible to build applications using the upper layer of the J2ME stack, the Mobile Information Device Profile (MIDP).

Designed for cell phones, the Mobile Information Device Profile provides a GUI API, and MIDP 2.0 includes a basic 2D gaming API. Applications written for this profile are called MIDlets and can be downloaded directly from the web, installed and run just after the download.


Figure 17: J2ME architecture.


2.1.2    Native code


Building an application in native code means building an application that can run only on similar devices (typically with the same OS), but that can use all the features of the mobile phone without limitations.

The developer can optimize the resources used by the application, improving performance and memory usage, and can access storage and hardware directly through the OS I/O.

Some device manufacturers distribute an SDK for developing native applications for their mobile phones, often together with an IDE and a device simulator.

As we can see, there are many differences between a J2ME MIDlet and a native application. In general, a J2ME MIDlet is more deployable and compatible across many devices, but it cannot reach the performance and integration that a native application achieves on the OS it was made for.


2.1.3    Web


As we saw in the previous chapter, the Web is changing its structure, facilitating sharing and collaboration among users. For this reason the latest devices ship new versions of their Internet browsers that support a lot of the new features used in Web 2.0. It is therefore possible to create Internet-based applications using CSS, JavaScript and Ajax interaction.

Mobile browsers are optimized to display Web content most effectively on the small screens of portable devices. Mobile browser software must be small and efficient to accommodate the low memory capacity and low bandwidth of wireless handheld devices.

Obviously the user will interact only with the browser, and the applications cannot use any of the advanced features like GPS, accelerometers, Bluetooth, etc.

On the other hand, these applications can run everywhere (given an Internet connection) and on every device with an advanced web browser.


2.2    Mobile OS


After this introduction on the development environments for mobile devices and before presenting the most famous devices with the related capabilities, we give an overview on the current available operating systems.


2.2.1    iPhone


iPhone OS is the operating system developed by Apple Inc. for the iPhone and iPod touch. Like Mac OS X, from which it was derived, it uses the Darwin foundation. iPhone OS has three abstraction layers: a Core Services layer, a Media layer, and a Cocoa Touch layer.

The iPhone OS's user interface is based on the concept of direct manipulation, using multi-touch gestures. Interface control elements consist of sliders, switches, and buttons. The response to user input is supposed to be immediate to provide a fluid interface. Interaction with the OS includes gestures such as swiping, tapping, and pinching. Additionally, turning the device alters orientation in some applications.

Mac OS X applications cannot be copied to and run on an iPhone OS device. They need to be written and compiled specifically for the iPhone OS and the ARM architecture. However, the Safari web browser supports "web applications," as noted below.

The iPhone supports third-party "applications" via the Safari web browser, referred to as web applications. These applications can be created using web technologies such as AJAX. Many third-party iPhone web applications are now available.

Figure 18: Apple iPhone.

With the new generation, the iPhone also supports the deployment of third-party applications.

These applications can be developed with the free SDK available on the Apple developer web site, which includes an IDE and a simulator.

The SDK itself is a free download, but in order to release software, one must enroll in the iPhone Developer Program, a step requiring payment and Apple's approval.

Signed keys are given to upload the application to Apple's App Store which is the sole method of distributing the software to an iPhone.

At the moment there is no Java support.



2.2.2    Symbian


Symbian OS is a proprietary operating system, designed for mobile devices, with associated libraries, user interface frameworks and reference implementations of common tools, produced by Symbian Ltd. It is a descendant of Psion's EPOC and runs exclusively on ARM processors.

The Symbian OS System Model contains the following layers, from top to bottom:

     UI Framework Layer

     Application Services Layer

o  Java ME

     OS Services Layer

o  generic OS services

o  communications services

o  multimedia and graphics services

o  connectivity services

     Base Services Layer

     Kernel Services & Hardware Interface Layer

Symbian is not Open Source software. However, phone manufacturers and other partners are provided with parts of its source code. The APIs are publicly documented and up to Symbian 8.1 anyone could develop software for Symbian OS.

Symbian 9.1 introduced capabilities and the Platform Security framework. To access certain capabilities, the developer has to digitally sign the application. Basic capabilities are user-grantable and developers can self-sign them; more advanced ones require certification and signing via the Symbian Signed program.

The native language of the Symbian OS is C++, although it is not a standard implementation. There are multiple platforms based upon Symbian OS that provide an SDK for application developers wishing to target a Symbian OS device.

Symbian C++ programming is commonly done with an IDE like Carbide.c++, an Eclipse-based IDE developed by Nokia.

Symbian OS's C++ is very specialized. However, many Symbian OS devices can also be programmed in OPL, Python, Visual Basic, Simkin, Perl and with Java ME.

Once developed, Symbian OS applications need to find a route to customers' mobile phones. They are packaged in SIS files which may be installed over-the-air, via PC connect or in some cases via Bluetooth or memory cards. An alternative is to partner with a phone manufacturer to have the software included on the phone itself. The SIS file route is more difficult for Symbian OS 9.x, because any application wishing to have any capabilities beyond the bare minimum must be signed via the Symbian Signed program.

Java ME applications for Symbian OS are developed using standard techniques and tools such as the Sun Java Wireless Toolkit.

Nokia S60 phones can also run Python scripts when the interpreter is installed, with a custom-made API that allows for Bluetooth support and more. There is also an interactive console that allows the user to write Python scripts directly on the phone.





2.2.3    Windows Mobile


Windows Mobile is the OS developed by Microsoft as an evolution of Windows CE.

It is used by different brands, such as HTC, Samsung and HP, and it provides a common environment across different devices.


The main features of the new version are:

     High screen resolution supported (from 320x320 to 800x480)

     Integrated with Office and Exchange

     VoIP support

     Remote Access (RDP)

     Browser with Ajax, JavaScript and XML DOM capabilities

     .NET Compact Framework 2

     SQL Server compact edition


Additional software can be installed directly after a download from the web; typically the executables are .cab files and are installed automatically by the system.

Third-party software development is available for the Windows Mobile operating system. There are several options for developers to use when deploying a mobile application. These include writing native code with Visual C++, writing managed code that works with the .NET Compact Framework, or server-side code that can be deployed using Internet Explorer Mobile or a mobile client on the user's device. The .NET Compact Framework is actually a subset of the .NET Framework and hence shares many components with software development on desktop clients, application servers, and web servers which have the .NET Framework installed, thus integrating the networked computing space. J2ME applications are also supported.


2.2.4    Proprietary OS


Many manufacturers prefer to build their own OS and do not release any SDK for developing additional software.

Typically, the only way to extend such phones with new applications is through J2ME MIDlets or, if an advanced web browser is available, through web-based applications.

Samsung, LG, Sony Ericsson and RIM are examples of such manufacturers.



2.2.5    Android


Android is a Linux-based OS for mobile devices.

It is developed by Google and the Open Handset Alliance and will be released under an open-source license.

Google provides the SDK from its website, together with a plug-in for the Eclipse IDE, for easy development and deployment.

The SDK allows developers to create applications by writing code in the Java language, but the result is very different from a J2ME MIDlet.

The OS has a complete and complex architecture (see figure below). Built into the OS is an optimized virtual machine (called Dalvik) that exposes all the features of the device to the applications.



Figure 19: Android architecture.


It is therefore possible to create applications that behave like native applications using the popular Java language. The applications run and are managed in the Android Runtime, which includes a set of core libraries providing most of the functionality available in the core libraries of the Java programming language.

Every Android application runs in its own process, with its own instance of the Dalvik virtual machine. Dalvik has been written so that a device can run multiple VMs efficiently with minimal memory footprint.


The main features of the Android device will be:

     Optimized graphics: the platform is adaptable to both larger VGA layouts and traditional smartphone layouts, with a 2D graphics library and a 3D graphics library based on the OpenGL ES 1.0 specification.

     Storage: SQLite for structured data storage.

     Connectivity: Android supports a wide variety of connectivity technologies including GSM, CDMA, Bluetooth, EDGE, EV-DO, 3G, and Wi-Fi.

     Messaging: SMS, MMS and XMPP are available, including threaded text messaging.

     Web browser: The web browser available in Android is based on the open-source WebKit application framework.

     Java virtual machine: Software written in Java can be compiled into Dalvik bytecodes and executed in the Dalvik virtual machine, which is a specialized VM implementation designed for mobile device use, although not technically a standard Java Virtual Machine.

     Media support: Android will support advanced audio/video/still media formats such as MPEG-4, H.264, MP3, AAC, AMR, JPEG, PNG and GIF.

     Additional hardware support: Android is fully capable of utilizing video/still cameras, touch screens, GPS, compasses, accelerometers, and accelerated 3D graphics.

     Development environment: includes a device emulator, debugging tools, memory and performance profiling, and a plugin for the Eclipse IDE.


At the moment only a software emulator of the platform is available, together with a few experimental devices that are not for sale.



2.2.6    MOTOMAGX


MOTOMAGX is Motorola's next-generation Mobile Linux platform and will support three different application environments:


     WebUI: web applications that use Ajax and JavaScript capabilities; the browser will be based on the WebKit engine.

     Java ME: standard J2ME MIDlet applications.

     Native Linux: applications developed in C/C++.



2.2.7    Openmoko


Openmoko is an open-source, Linux-based operating system for mobile phones.

Native applications can be developed and compiled using the C or C++ programming languages, and the system uses ipkg (Itsy Package Management System) for package installation and management (similar to dpkg on Debian).

At the moment only one device model is available on the Openmoko website.


2.3    Sample devices


We complete this chapter with a selection of devices that run the most widespread operating systems discussed above.


2.3.1    Apple iPhone


The Apple iPhone is the innovative device released by Apple in 2007 and updated with a new version in June 2008.

It has many interesting features and a unique style of user interaction.

It has a multi-touch screen with virtual keyboard and buttons, but a minimal amount of hardware input. The iPhone's functions include those of a camera phone and portable media player (iPod) in addition to text messaging and visual voicemail. It also offers Internet services including e-mail, web browsing, and local Wi-Fi connectivity. The first-generation hardware was quad-band GSM with EDGE; the second generation adds UMTS/HSDPA and a GPS receiver.



     Size: 110 mm (h) × 61 mm (w) × 12 mm (d)

     Screen size: 3.5 in (89 mm)

     Screen resolution: 480×320 pixels at 163 ppi

     Input devices: Multi-touch screen interface plus a "Home" button

     Built-in rechargeable, non-removable battery

     2 megapixel camera

     412 MHz ARM 1176 processor

     PowerVR MBX 3D graphics co-processor

     Memory: 128 MB DRAM

     Storage: 8 GB or 16 GB flash memory

     Operating System: iPhone OS

     Quad band GSM (GSM 850, GSM 900, GSM 1800, GSM 1900)

     GPRS and EDGE data

     Tri band UMTS/HSDPA (850, 1900, 2100 MHz)

     Wi-Fi (802.11b/g)

     Bluetooth 2.0 with EDR

     Weight: 133 g (4.7 oz)

     Headphone jack (non-recessed)

     Camera features geotagging (producing geocoded photograph)

     Battery has up to 10 hours of 2G talk, 5 hours of 3G talk, 5 (3G) or 6 (Wi-Fi) hours of Internet use, 7 hours of video playback, and up to 24 hours of audio playback, lasting over 300 hours on standby.

     Assisted GPS, with fallback to location based on Wi-Fi or cell towers








2.3.2    Nokia N96


The Nokia N96 is one of the most advanced smartphones equipped with Symbian OS.



     Quad band GSM / GPRS / EDGE: GSM 850, GSM 900, GSM 1800, GSM 1900

     Dual band UMTS / HSDPA: UMTS 900, UMTS 2100

     3G and WLAN access.

     Mobile TV (network-dependent feature).

     GPS Navigation.

     Access to Ovi (Nokia's web-based service)

     Instant upload to Flickr, Vox, Yahoo! and Google.

     Full-HTML browser.

     Symbian OS v9.3, S60 3.2 Edition user interface.

     Up to 16 GB of internal flash memory.

     2-way slide, as in Nokia N95.

     Expandable memory currently up to 24 GB courtesy of MicroSD cards.

     5-megapixel camera, Carl Zeiss optics.

     High quality VGA camera in front of the phone, for video calling and self-portrait use.

     Double LED flash for the camera.

     Plays music files and allows easy downloads via Nokia's web services.

     Allows high-quality video calling using 3G

     A built-in motion sensor that automatically rotates the screen when tilted.


2.3.3    HTC TyTN II


The HTC TyTN II is a Microsoft Windows Mobile 6.0 Pocket PC phone manufactured by HTC.



     UMTS/HSDPA/HSUPA: UMTS 800, UMTS 850, UMTS 1700, UMTS 1900, UMTS 2100

     Quad band GSM/GPRS/EDGE: GSM 850, GSM 900, GSM 1800, GSM 1900

     Connectivity: Wi-Fi 802.11b/g, Bluetooth 2.0 + EDR with A2DP, A-GPS and GPS, USB 2.0

     GPS: Qualcomm

     CPU: 400 MHz Qualcomm 7200 (dual CPU, with an integrated Imageon 2D/3D graphics accelerator). The graphics hardware lacks a Windows Mobile driver, however, so all graphics are rendered on the main CPU and are therefore unaccelerated.

     Operating System: Windows Mobile 6

     Camera(s): 3.0 MP (2048×1536) still/video camera with autofocus; VGA video-conferencing camera.

     Memory: 128 MB RAM, 256 MB ROM

     Memory card: SDIO, microSD, microSDHC 4GB and up, TransFlash

     Screen: 240×320, 2.8" (42 × 57 mm) TFT-LCD

     Weight: 190g

     Size: 112mm (L) x 59mm (W) x 19mm (T)

     Battery: Li-Ion 1350 mAH


3.  Technologies support to Context-Aware


Context-aware applications can be defined as computing applications that use context information in order to automatically adapt their behaviour to match the situation. Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application. Context information can be gathered from a variety of sources, such as sensors, profiles (capabilities of hardware devices or preferences of users), applications that report their current state, and data-interpretation services (which combine context information to derive higher-level information).
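As a minimal illustration of this definition, the plain-Java sketch below adapts an application's behaviour, here the choice of a communication channel, to context gathered from such sources. The class and method names are hypothetical and do not belong to any PICO prototype.

```java
import java.util.Set;

// Minimal sketch of a context-aware adaptation rule: the application
// selects a communication channel according to the user's current
// activity and the set of channels currently available.
public class ContextAwareMessenger {

    // Adaptation rule: match the channel to the situation.
    static String chooseChannel(String activity, Set<String> availableChannels) {
        if ("in-meeting".equals(activity) && availableChannels.contains("SMS")) {
            return "SMS";            // silent channel while the user is busy
        }
        if (availableChannels.contains("telephone")) {
            return "telephone";      // prefer voice when the user is free
        }
        return "e-mail";             // fallback channel
    }

    public static void main(String[] args) {
        Set<String> channels = Set.of("SMS", "telephone");
        System.out.println(chooseChannel("in-meeting", channels)); // prints SMS
        System.out.println(chooseChannel("idle", channels));       // prints telephone
    }
}
```

In a real system the activity and channel set would come from sensors and device profiles rather than being passed in directly; the point here is only that the same request yields different behaviour under different contexts.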


Context-aware applications can intelligently support users in a variety of tasks, including tasks that assist disabled people: they can create smart home environments, provide health-care services, and support everyday user tasks. Some common applications related to these kinds of situations are:


       Flexible communication

Such applications provide instant communication with a family member, friend or health worker, achieved by selecting the communication channel (telephone, SMS, etc.) according to the current activity, the available communication devices and personal preferences.

       Support for social interactions and virtual communities

Such applications have diverse goals in order to support independent communities:

o   assistance with everyday tasks by remote family or community members;

o   dynamic organization of groups that are interested in activities based on their preferences and availability;