Journal of Computational Information Systems 10: 17 (2014) 7637–7644
Available at http://www.Jofcis.com
Smartphone-based Video Surveillance System ⋆
Wenjing LING 1, Limin ZHENG 1,2,∗, Tianzi WANG 3, Ping WU 1, Zheng SONG 4
1 College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
2 Beijing Laboratory of Food Quality and Safety, China Agricultural University, Beijing 100083, China
3 Tongfang Knowledge Network Technology Co., Ltd., Beijing 100192, China
4 State Key Lab of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
Abstract
Smartphone-based surveillance systems have gained popularity over dedicated camera-based surveillance systems because of their cost-efficiency and ease of deployment. However, existing smartphone-based surveillance systems need expensive additional devices to achieve the remote control functions of pan-tilt-zoom cameras, which limits their usage. In this paper, we present a smartphone-based video surveillance system with a steering function and remote control operations. To realize steering, a small clip is creatively used as a pedestal, while the vibration motor and digital compass embedded in the smartphone are responsible for generating rotation and controlling orientation, respectively. A set of protocols for remote control operations is also provided. Experiments on a real testbed show that the steering system is easy to deploy and very accurate, with an average rotation deviation of less than 2.500 degrees. The latency of remote control under real network conditions is satisfactory, with an average time delay of less than 5 seconds.
Keywords: Smartphone; Video Surveillance System; Remote Control
1 Introduction
With the rapid development and wide deployment of wireless communication technologies, wireless video surveillance systems have gained popularity over wired surveillance systems, because wireless cameras and networks are easy to deploy and maintain. As the quality of surveillance video transmission over wireless networks has greatly improved, wireless video surveillance systems are becoming widely used in industries such as transportation, medicine, security, and farming.
⋆ This work was supported in part by the National Science and Technology Program (No. 2012BAK17B09, 2013BAD19B09).
∗ Corresponding author. Email address: [email protected] (Limin ZHENG).
1553–9105 / Copyright © 2014 Binary Information Press
DOI: 10.12733/jcis11627
September 1, 2014
Today, most off-the-shelf smartphones are integrated with both high-quality digital cameras
which provide adequate video quality and 3G/WiFi network interfaces for surveillance video
transmission. Many researchers have started to build deployment-free wireless surveillance systems using off-the-shelf smartphones [1, 2]. The architecture of a typical wireless surveillance system
consists of one or more smartphones which function as cameras, a central server which is in
charge of buffering and distributing captured video streams, and a set of clients which can be
either personal computers or mobile devices. The smartphones are connected with the server
through wireless communication networks, such as WiFi and 3G.
Although much work has been done on smartphone-based video surveillance systems, focusing on transcoding [3, 4] or transmission of the video [1, 5], most existing prototypes cannot steer the smartphone: in contrast with a pan-tilt-zoom camera, the phone stays fixed, and inconvenient manual intervention is required to change its orientation. Several patents have studied this problem, mainly by adding a specially designed mechanical pedestal to the smartphone [6]. However, such methods require additional power and are too expensive to integrate into cost-efficient smartphone surveillance systems.
In this paper, a smartphone-based steering control method is proposed, which mainly uses phone-embedded sensors to achieve orientation-accurate rotation. To realize the steering function, a small clip is used as a pedestal, while the vibration motor and digital compass embedded in the smartphone are responsible for generating rotation and controlling orientation, respectively. We also provide a set of protocols for remote control operations.
The main contributions of this paper are:
• A novel smartphone steering method is proposed, which can realize the automatic steering
of the smartphone with no additional devices.
• A smartphone-based surveillance system is implemented, in which the efficiency and feasibility of our proposed steering method are verified.
The rest of the paper is organized as follows: Section 2 describes related work. Section 3 presents the physical and logical structure of the smartphone-based video surveillance system. Section 4 describes the steering design in detail. Section 5 shows the experimental results and Section 6 concludes this paper.
2 Related Work
Smartphone-based surveillance systems have been an active research topic. Using the phone as a camera, Bailey suggested an architecture for real-time video streaming from an Android mobile device to a server [7], while Chandra inherited this structure but used a peer-to-peer link to remove the server [8]. Using the phone as a receiver, Estevez-Ayres proposed an architecture to send video data with Android smartphones as user terminals [2], while Nahar designed a smartphone-based responding device that not only accepts data but also sends commands to the server and controls the webcam to move to the desired position [9]. Yanan proposed a surveillance system using smartphones on both sides, but the system failed to synchronize and coordinate the transmission between the two sides, which resulted in some packet delay [10].
Some researchers have noticed the lack of steering control in smartphone surveillance systems.
ZhuXiao et al. [6] introduced a smartphone pedestal with a small motor inside, which was
connected to the smartphone's mini-USB/audio interface for its power supply. Jinduo et al. [11] extended this idea by adding a vertical-direction motor to achieve omnidirectional rotation. HaiTao and Zhuxiao [12, 13] further proposed a specially designed mobile cover and charger. The mobile cover provides a protrusion connecting to the phone's inbuilt motor, while the charger offers a groove that matches the protrusion and positions the smartphone. When the inbuilt motor begins working, the smartphone and its charger turn synchronously.
To realize the interaction between the smartphone and the server, remote control methods
are necessary. ZhuXiao et al. [13] used short messages and phone calls to control the behavior
of the smartphone camera. Milton [14] proposed a DTMF-based remote control method: the client device calls the video capture device and sends control messages such as "take photo" or "upload location" encoded as DTMF voice tones. However, such methods consume large amounts of bandwidth and thus make parallel transmission of control messages and the video stream impossible.
In summary, existing smartphone steering methods all require additional devices and are too expensive for cost-efficient smartphone-based surveillance systems. Moreover, the corresponding control protocols fail to support parallel transmission, making remote control difficult to achieve.
3 System Overview
Our smartphone-based video surveillance system is shown in Fig. 1. It consists of a smartphone, used as the video capture device to monitor the target scene, a central server, and a client device that displays the video stream of the target scene to system users. Both the video capture device and the client device are connected to the central server through WiFi or 3G networks.
Fig. 1: The physical structure of smartphone-based video surveillance system
Two types of data are transmitted in the system network: a) The compressed captured video
stream from the video capture device to the client device (the video flow); b) The control messages
from the client device to the video capture device (the control flow). These two data transmission
procedures use different network protocols. The video flow deals with a large amount of data,
while a certain level of frame loss is tolerable. In contrast, the control flow transmits a small amount of data but requires high reliability. Therefore, RTP (Real-time Transport Protocol) is applied to the video flow to provide real-time transmission, and TCP (Transmission Control Protocol) is applied to the control flow to provide reliable transmission, as shown in Fig. 2.
Fig. 2: Two different transmission flows in the surveillance system
Below, we describe the interactions among the video capture device, the central server and the client device.
1) Initialization: A new video capture/client device first logs in to the central server through HTTP connections. The central server then verifies the identity of the device, and assigns a
randomly selected unoccupied port to the device for video flow transmission. The users of the
client device need to pick one video source from all registered video capture devices.
2) Video transmission: Once the video capture device receives the port number assigned by the server, it starts to compress the captured video in real time into the H.264 format and sends the encoded video packages to the destination port of the server. Upon receiving the packages, the central server forwards them to all associated client devices (that is, to their socket ports).
3) Control message transmission: The client device translates the user's operation commands into HTTP requests and submits them to the server. The central server holds the control messages in a buffer and distributes them to the corresponding video capture devices when those devices periodically query the server for control messages through HTTP requests.
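The buffering-and-poll exchange in step 3 can be modeled with a simple in-memory queue per device. This is a hedged sketch in plain Java, not the paper's server code; the device identifier and command strings below are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the server-side control-message buffer: the client device
// submits commands, and each video capture device drains its own queue
// on its periodic HTTP poll.
public class ControlBuffer {
    private final Map<String, Deque<String>> pending = new HashMap<>();

    // Called when the client device submits a command via HTTP.
    public void submit(String deviceId, String command) {
        pending.computeIfAbsent(deviceId, k -> new ArrayDeque<>()).add(command);
    }

    // Called when the capture device polls; returns and clears its queue.
    public List<String> poll(String deviceId) {
        Deque<String> queue = pending.getOrDefault(deviceId, new ArrayDeque<>());
        List<String> out = new ArrayList<>(queue);
        queue.clear();
        return out;
    }

    public static void main(String[] args) {
        ControlBuffer server = new ControlBuffer();
        server.submit("phone-1", "steer");
        server.submit("phone-1", "takePhoto");
        System.out.println(server.poll("phone-1")); // both buffered commands, in order
        System.out.println(server.poll("phone-1")); // empty: queue already drained
    }
}
```

Because messages wait in the buffer until the next poll, the polling period bounds the extra control latency this design introduces.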
4 Key Implementation Elements
4.1 The realization of the steering function
Fig. 3: The sketch map of steering device and its structure
To realize the steering function on the video capture device, a clip is attached to the smartphone, as shown in Fig. 3, where the smartphone is marked 1, and the sides and underside of the clip are marked 2 and 3, respectively. The clip clamps one corner of the smartphone (1): its two sides (2) press against the smartphone shell, and its underside (3) stands on the ground. The centroid axis of the smartphone must fall within the underside of the clip to keep the assembly standing upright, and the centroid axis of the phone-embedded motor should also fall within the underside of the clip to avoid collapse. When the phone-embedded motor is working, the conjoined device rotates in a horizontal plane due to the sideways force.
Fig. 4: Process and algorithm of steering control
Following the procedure given in Fig. 4, we can control the degree of rotation.
1. Assemble the smartphone and clip, then place them on a horizontal plane.
2. When a target angle δ towards magnetic north is received, the device records the reading of the orientation sensor as α. The smartphone activates its embedded vibration motor, and periodically records the orientation sensor reading as β every 3 microseconds.
3. Let ϵ denote the maximal tolerable orientation error. When the rotated angle comes within ϵ of the target, i.e., ||β − α| − δ| < ϵ, the smartphone deactivates its motor, and the steering procedure is accomplished.
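The stop condition in step 3 (read as requiring the rotated angle |β − α| to come within ϵ of the target δ) can be sketched as a plain-Java simulation. The per-step rotation amount stands in for Android's vibrator and sensor APIs and is our illustrative assumption.

```java
// Simulation of the steering stop condition: rotate in fixed steps and
// stop once | |beta - alpha| - delta | < epsilon.
public class SteeringLoop {
    // Returns the number of simulated vibration steps before the rotated
    // angle |beta - alpha| comes within epsilon of the target delta.
    public static int stepsToTarget(double alpha, double delta,
                                    double epsilon, double degPerStep) {
        double beta = alpha;
        int steps = 0;
        // Assumes degPerStep < 2 * epsilon so a step cannot jump past the
        // tolerance band; compass wrap-around at 360 degrees is ignored.
        while (Math.abs(Math.abs(beta - alpha) - delta) >= epsilon) {
            beta += degPerStep; // one sampling period of motor-driven rotation
            steps++;
        }
        return steps; // the motor would be deactivated at this point
    }

    public static void main(String[] args) {
        // e.g. target 45 degrees, tolerance 2.5 degrees, 0.5 degrees per step
        System.out.println(stepsToTarget(10.0, 45.0, 2.5, 0.5) + " steps");
    }
}
```

The simulation also shows why ϵ must not be too small: if a single step can overshoot the tolerance band, the loop never terminates, matching the runaway behavior reported in the experiments.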
Fig. 5: Top view of the steering process and angle relationships
Fig. 5 describes the rotation process in detail, showing clearly the relationship between α, β and δ. We found in our experiments that the rotation direction of the motor is random, which may lead to a wrong initial direction and cause excess rotation. Therefore, a target direction d must be set by the user, indicating a clockwise or anticlockwise rotation. When d deviates from magnetic north, the value of β should always be greater than α; when d faces magnetic north, the value of β should always be smaller than α. If this restriction is not satisfied, the motor is stopped and restarted until the right direction is attained. This method prevents rotation in the wrong direction.
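The direction guard described above can be sketched as a predicate over two consecutive compass readings. The signed-angle helper and the exact restart policy it implies are our illustrative assumptions, since the paper states the check only in terms of α and β.

```java
// Sketch of the direction guard: compare consecutive compass readings
// and report whether the observed rotation sense matches the user's
// requested direction d (restart the motor when it does not).
public class DirectionGuard {
    // Signed smallest rotation from 'from' to 'to', in (-180, 180].
    // Positive means clockwise (increasing compass heading).
    public static double signedDelta(double from, double to) {
        double d = (to - from) % 360.0;
        if (d > 180.0) d -= 360.0;
        if (d <= -180.0) d += 360.0;
        return d;
    }

    // True if the observed motion agrees with the requested direction.
    public static boolean matches(double prev, double curr, boolean clockwise) {
        double d = signedDelta(prev, curr);
        return clockwise ? d > 0 : d < 0;
    }

    public static void main(String[] args) {
        System.out.println(matches(350.0, 5.0, true));  // clockwise across north
        System.out.println(matches(40.0, 35.0, true));  // moving the wrong way
    }
}
```

Handling the wrap-around at magnetic north explicitly, as `signedDelta` does, avoids misjudging the direction when the heading crosses 0/360 degrees.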
Compared with existing smartphone rotation methods, our proposed method needs no additional devices or energy supply, which makes the system easy to implement. Moreover, the magnetoresistive sensor used in the process makes the steering function more precise and easier to control.
4.2 Protocol of remote control
In order to adjust the monitored scene, client devices can remotely control their associated video capture device by calling the corresponding function interfaces through an HTTP connection. The implemented function interfaces are given in Table 1.
Table 1: Parameters and feedback of each control request

  Function                 Input parameter 1   Input parameter 2   Success or not   Extra information
  Steer                    direction           angle               yes/no           angle towards magnetic north
  Focus                    far/near            focal length        yes/no           focal length
  Zoom                     far/near            magnification       yes/no           magnification
  Take photos              -                   -                   yes/no           the acquired photo
  Turn on/off flash light  open/close          -                   yes/no           flash light status
  Turn on/off camera       open/close          resolution/-        yes/no           camera status
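Since the control flow travels as HTTP requests, a command from Table 1 might be serialized as a query string. The endpoint path and parameter names below are hypothetical; the paper does not give its exact URL format.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative encoding of a Table 1 command ("Steer") as an HTTP
// request URL. The "/control" path and key names are assumptions.
public class ControlRequestBuilder {
    public static String steer(String serverBase, String direction, int angle) {
        return serverBase + "/control?function=Steer"
                + "&direction=" + URLEncoder.encode(direction, StandardCharsets.UTF_8)
                + "&angle=" + angle;
    }

    public static void main(String[] args) {
        System.out.println(steer("http://server.example", "clockwise", 45));
    }
}
```

Because each command is an ordinary HTTP request rather than an in-band voice signal, it can travel alongside the RTP video flow, which is the parallel-transmission property the DTMF approach in Section 2 lacks.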
The video capture device periodically queries the central server for buffered control messages. When one of the control messages listed above is forwarded to it, the video capture device initializes a ControlRequest object to handle the client's request, passing it the control function name and input parameters. Different APIs are needed to handle the different control requests, as given in Table 2:
Table 2: Android APIs needed for each control request

  Control request          Interface                                      Method
  Steer                    android.os.Vibrator                            vibrate(long[] pattern, int repeat)
                           android.hardware.SensorManager                 getOrientation(float[] R, float[] values)
  Focus                    android.hardware.Camera.AutoFocusCallback      autoFocus(Camera.AutoFocusCallback cb)
  Zoom                     android.hardware.Camera.OnZoomChangeListener   startSmoothZoom(int value), stopSmoothZoom()
  Take pictures            android.hardware.Camera                        takePicture(ShutterCallback shutter, raw, jpeg)
  Turn on/off flash light  android.hardware.Camera.Parameters             setFlashMode(String value), getFlashMode()
  Turn on/off camera       android.hardware.Camera                        open(int cameraId), release()
5 Experiments
We evaluate the performance of our proposed video surveillance system through experiments. We use a ZTE V880 smartphone as the video capture device, a laptop (ThinkPad X230) as the central server, and a HUAWEI Y300 smartphone as the client device. The Java programming language is used to implement all functions. We also provide graphical user interfaces (GUIs) to client device users, as shown in Fig. 6.
Fig. 6: GUI implemented on the client device and the laptop
First, we evaluate the accuracy of the steering function while varying ϵ, the error tolerance threshold, as shown in Table 3. For each ϵ value, the experiment is repeated 10 times and the average orientation error is calculated. The experiments show that when ϵ is set too low, the device passes the target angle and cannot stop; when ϵ is set too high, accurate steering cannot be achieved. The average error is 0.923◦, while the maximal error is 2.438◦, no more than 2.500◦. Therefore the best tolerance value should be between 2.500◦ and 3.000◦.
Table 3: Angle error of 10 experiments

  Experiment No.    1      2      3      4      5      6      7      8      9      10     Average error
  Angle error (◦)   1.710  0.062  0.031  0.734  0.938  2.438  1.391  0.516  0.022  1.391  0.923
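As a quick sanity check on Table 3, the reported average can be reproduced from the ten per-run errors:

```java
// Recomputes the Table 3 average from the ten recorded angle errors.
public class Table3Check {
    static final double[] ERRORS = {
        1.710, 0.062, 0.031, 0.734, 0.938, 2.438, 1.391, 0.516, 0.022, 1.391
    };

    public static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    public static void main(String[] args) {
        // Matches the reported 0.923 degrees.
        System.out.printf("average error = %.3f degrees%n", mean(ERRORS));
    }
}
```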
Second, to show the time delay of the proposed steering function, we measure the time consumed in achieving different rotation angles, as shown in Fig. 7. The data shows a strong correlation between time consumption and target degree (R2 = 0.9942, P value = 1.686 × 10−5). The average steering rate, calculated by curve fitting, is 0.1393 seconds per degree, and the steering speed is constant during the whole process. It is worth noting that the fitted line does not pass through the origin, which indicates a startup time of about 1.6052 seconds.
Fig. 7: Time consumption for achieving different angles
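Reading the fitted slope as 0.1393 seconds per degree and the intercept as the 1.6052-second startup time (an interpretation roughly consistent with the ~8.7 s measured for a 45◦ turn in Table 4), the reported coefficients give a simple latency predictor. The helper below is our illustration, not code from the paper.

```java
// Linear model of steering latency from the curve fit:
// time = 0.1393 s/degree * angle + 1.6052 s startup.
public class SteeringTimeModel {
    public static double predictedSeconds(double angleDegrees) {
        return 0.1393 * angleDegrees + 1.6052;
    }

    public static void main(String[] args) {
        // e.g. a 45-degree turn is predicted to take about 7.9 seconds
        System.out.printf("t(45) = %.3f s%n", predictedSeconds(45.0));
    }
}
```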
We also record the time consumption of the 6 remote control commands. In this experiment, the server and the two mobile devices are all connected through a LAN. Each control message is evaluated ten times, and the average time delays are given in Table 4.
Table 4: Average time latency of each remote control request

  Control request          Time latency (s)   Remarks
  Steering                 8.667              Target angle is 45◦
  Focus                    4.740              Auto focus
  Zoom                     4.372              1.250 times zooming
  Take pictures            5.160              Photo resolution is 1536×2048 pixels
  Turn on/off flash light  3.328              Flash light on
  Turn on/off camera       2.511              Turned off successfully
The experiment shows that all control requests and feedbacks were received successfully, and four of the six requests are responded to within 5 seconds. All operations can be accomplished within 10 seconds. We find that the main time cost lies in the reaction of the video capture device.
6 Conclusion
In this paper, we present our work on a smartphone-based video surveillance system with a steering function and remote control operations. Compared with existing video surveillance systems, our system can forward the video captured by a smartphone-embedded camera to multiple clients, and allows the clients to remotely control the video capture device, e.g., the orientation of the smartphone. Experiments on a real testbed show that the steering function is both easy to implement and accurate. Meanwhile, the video received by the client device was fluent, and the control delay of our system satisfies real-time control requirements.
References
[1] H. Pang, L. Jiang, L. Yang, and K. Yue, Research of android smart phone surveillance system, in IEEE ICCDA'10, 2010, pp. V2-373.
[2] I. Estevez-Ayres, M. Garcia-Valls, P. Basanta-Val, and I. Fernandez-Pacheco, Using android smartphones in a service-oriented video surveillance system, in IEEE ICCE'11, 2011, pp. 887-888.
[3] L. D. Shefer and F. T. Marchese, A system for real-time transcoding and delivery of video to smartphones, in IEEE IV'10, 2010, pp. 494-499.
[4] D. Ying-men, Design of real-time video surveillance system supporting wireless-network, Microcomputer Information, vol. 6, p. 062, 2012.
[5] R. Rashmi and B. Latha, Video surveillance system and facility to access pc from remote areas using smart phone, in IEEE ICICES'13, 2013, pp. 491-495.
[6] A mobile monitoring system and its monitoring method, Apr. 29, 2009, CN Patent App. CN 200,810,239,408. [Online]. Available: http://www.google.com.tw/patents/CN101420707A?cl=zh
[7] J. Bailey, Live video streaming from android-enabled devices to web browsers, Ph.D. dissertation, University of South Florida, 2011.
[8] S. Chandra, P. Chiu, and M. Back, Towards portable multi-camera high definition video capture using smartphones, in IEEE ISM'13, 2013.
[9] B. Nahar and M. L. Ali, Development of mobile phone based surveillance system, in IEEE ICCIT'10, 2010, pp. 506-510.
[10] Z. Yanan, Y. Lu, and Z. Limin, Remote video surveillance system based on android mobile phone, Journal of Computer Applications, vol. 33, no. A01, pp. 283-286, 2013.
[11] G. Jinduo, J. Mengan, and Q. Zhuxiao, Multifunctional rotating device, Aug. 18, 2010, CN Patent 101,561,074. [Online]. Available: http://www.google.com.hk/patents/CN101561074B?cl=zh
[12] Monitoring phones and its monitoring methods, Nov. 9, 2011, CN Patent 101,695,092. [Online]. Available: http://www.google.com.hk/patents/CN101695092B?cl=zh
[13] A monitoring mobile phone, including camera phone and chargers, Jun. 23, 2010, CN Patent 201,515,406. [Online]. Available: http://www.google.com.tw/patents/CN201515406U?cl=zh
[14] M. A. A. Milton and A. A. S. Khan, Web based remote exploration and control system using android mobile phone, in IEEE ICIEV'12, 2012, pp. 985-990.