Visual Sensor Network Reconfiguration
with Deep Reinforcement Learning

Paul Jasek
Ohio State University
Air Force Research Laboratory

Bernard Abayowa
Air Force Research Laboratory
Abstract

We present an approach for reconfiguration of dynamic visual sensor networks with deep reinforcement learning (RL). Our RL agent uses a modified asynchronous advantage actor-critic framework and the recently proposed Relational Network module at the foundation of its network architecture. To address the issue of sample inefficiency in current approaches to model-free reinforcement learning, we train our system in an abstract simulation environment that represents inputs from a dynamic scene. Our system is validated using inputs from a real-world scenario and preexisting object detection and tracking algorithms.

1 Introduction

The application of deep neural networks in reinforcement learning (RL) has shown success in a variety of domains. For example, Deep Q-Networks [8] achieved human-level performance in Atari 2600 games. Other recent approaches, including trust region policy optimization [12], asynchronous advantage actor-critic [7], and proximal policy optimization [13], have shown success in domains such as 3D mazes and simulated robotic motion. However, the sample inefficiency of these algorithms limits the application of current deep RL solutions to many real-world problems where access to sample observations may be limited or expensive. In contrast, a simulation environment can generate sample observations quickly and cheaply, providing an RL agent with enough data to learn a high-performing policy.

We aim to apply reinforcement learning to a dynamic sensor-network configuration problem. While we attempt to maintain generality throughout our experiments, our specific motivation is to use cameras to capture high-resolution views of vehicles in a scene. Directly simulating this environment would involve a variety of difficult technical challenges and would likely be computationally expensive and unrealistic compared to a real-world scenario. Instead, we focus on modeling an abstract scenario in which objects and sensors are represented as bounding boxes. A deep RL agent can learn to maximize the percentage of objects captured at high-resolution within a scene by training in this simulation environment. After learning an effective policy, the agent can operate within a real-world environment where preexisting object detection and tracking algorithms are applied to emulate the simulation environment from training.

2 Related work

Several methods have been proposed in the literature for reconfiguration of dynamic visual sensor networks of static and Pan-Tilt-Zoom (PTZ) cameras. These methods can be grouped into resource-aware methods, target-based methods, and coverage-oriented methods [9].

Resource-aware methods seek to find the optimal trade-off between available resources in the sensor network and task performance requirements. The sensing parameters are reconfigured to minimize usage of resources such as power in energy-aware surveillance systems [4] and communication bandwidth in distributed camera networks [3]. Resource-aware methods are often found in settings where the visual sensors are static.

In target-based methods, the focus is on optimizing camera parameters to keep a target of interest in view. Common applications include online adjustment of the orientation and zoom parameters of a PTZ camera for single-target tracking [1], and camera assignment or hand-off for optimal view in static camera networks.

The group of methods most closely related to ours is coverage-oriented methods, in which the goal is to maintain optimal scene coverage with a network of PTZ cameras [2, 5, 10]. In these methods, the parameters of the PTZ cameras are adjusted to maximize the view of relevant areas in the scene while also adapting to the scene dynamics.

Existing methods for optimal coverage with visual sensor networks make use of hand-crafted mathematical models and shallow neural networks which do not generalize well. In this work we introduce a general framework for reconfiguring visual sensor networks to optimize coverage by leveraging advances in model-free RL and deep representation learning.

3 Background

This section contains relevant background information on the asynchronous advantage actor-critic (A3C) algorithm [7] and the relational network (RN) module [11] which are the foundations of our solution. A3C is used as the reinforcement learning algorithm and training framework for our agent, while RN is used as part of the deep neural network architecture to enable the effective application of A3C to our specific problem.

3.1 Asynchronous advantage actor-critic

Consider the standard reinforcement learning scenario in which an agent interacts with an environment $\mathcal{E}$. At any given time step $t$, the agent receives a state $s_t$ and takes an action $a_t$ chosen from the set of possible actions $\mathcal{A}$ according to its policy $\pi$, which maps states $s_t$ to an action (or distribution over actions) $a_t$. The goal of the agent is to maximize the discounted return at any given state $s_t$, defined by $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$, where $\gamma \in (0, 1]$ is the reward discount factor. The value of any particular state $s$ following a policy $\pi$ is defined by $V^\pi(s) = \mathbb{E}[R_t \mid s_t = s]$. Similarly, the action value at any particular state is defined by $Q^\pi(s, a) = \mathbb{E}[R_t \mid s_t = s, a_t = a]$. These two quantities are used to define the advantage of an action $a$ in state $s$, given by $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$. The advantage function represents the expected increase in future reward if a given action is taken rather than following the current policy.
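As a concrete illustration of the discounted return defined above (our own sketch, not part of the paper), $R_t$ can be computed for every step of a finite episode with a single backward pass over the rewards:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Compute R_t = sum_k gamma^k * r_{t+k} for every t in a finite episode."""
    returns = np.zeros(len(rewards))
    running = 0.0
    # Walk backwards so each step reuses the already-discounted tail.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

print(discounted_return([1.0, 0.0, 1.0], gamma=0.5))  # [1.25 0.5  1.  ]
```

The backward recursion $R_t = r_t + \gamma R_{t+1}$ is equivalent to the infinite-sum definition truncated at the episode boundary.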

A3C is an example of a model-free policy-based method which trains an agent to maximize $\mathbb{E}[R_t]$ by updating the parameters $\theta$ of the policy $\pi(a_t \mid s_t; \theta)$. Methods stemming from the REINFORCE algorithm [15] update the parameters by performing approximate gradient ascent on $\mathbb{E}[R_t]$. The standard REINFORCE algorithm updates $\theta$ in the approximate direction of $\nabla_\theta \mathbb{E}[R_t]$ using the unbiased estimate $\nabla_\theta \log \pi(a_t \mid s_t; \theta) R_t$. Often, a function of the state known as the baseline, $b_t(s_t)$, is subtracted from $R_t$ to reduce the variance of the estimate while remaining unbiased. If $b_t$ is a learned estimate of the value function $V^\pi(s_t)$, then $R_t - b_t$ can be seen as an estimate of the advantage $A^\pi(s_t, a_t)$, because $R_t$ estimates $Q^\pi(s_t, a_t)$ and $b_t$ estimates $V^\pi(s_t)$.

A3C uses an estimate of the advantage to scale the policy gradient,

$$\nabla_{\theta'} \log \pi(a_t \mid s_t; \theta') A(s_t, a_t; \theta_v), \quad A(s_t, a_t; \theta_v) = \sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V(s_{t+k}; \theta_v) - V(s_t; \theta_v).$$

Here, $V(s; \theta_v)$ is a learned estimate of the value function, and $k$ varies across states but is bounded above by $t_{max}$, the number of time steps performed before updating the policy parameters. To encourage exploration, an entropy regularization loss term, $\beta H(\pi(s_t; \theta'))$, is added to the objective function. Here, $H$ computes the entropy of a distribution. This adds an additional hyperparameter, $\beta$, which is used to scale the entropy regularization loss term. The resulting objective function for the policy is $\log \pi(a_t \mid s_t; \theta') A(s_t, a_t; \theta_v) + \beta H(\pi(s_t; \theta'))$. To train the policy function $\pi$, we apply gradient ascent to this objective function with respect to the policy function parameters $\theta'$. The estimated value function is trained via standard supervised learning to approximate the same bootstrapped estimate of $Q^\pi$ that was used to compute the advantage function, given by $\sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V(s_{t+k}; \theta_v)$.
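The $n$-step bootstrapped advantage and the entropy-regularized policy objective can be sketched numerically as follows. This is our own minimal illustration, assuming scalar rewards and precomputed log-probabilities, entropies, and value estimates; the function and variable names are not from [7]:

```python
import numpy as np

def a3c_policy_objective(rewards, values, bootstrap_value, logp_actions,
                         entropies, gamma=0.99, beta=0.01):
    """n-step bootstrapped advantages and the entropy-regularized policy
    objective (to be maximized) over one rollout of k <= t_max steps."""
    k = len(rewards)
    R = bootstrap_value                      # V(s_{t+k}; theta_v)
    advantages = np.zeros(k)
    for i in reversed(range(k)):
        R = rewards[i] + gamma * R           # bootstrapped n-step return
        advantages[i] = R - values[i]        # A(s_i, a_i; theta_v)
    # Objective: sum_i [ log pi(a_i | s_i) * A_i + beta * H(pi(s_i)) ]
    return float(np.sum(np.asarray(logp_actions) * advantages
                        + beta * np.asarray(entropies)))
```

In a full implementation, gradient ascent on this quantity with respect to the policy parameters (via automatic differentiation, treating the advantages as constants) yields the A3C policy update.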

The system is designed to be trained on multiple CPU cores by running parallel simulation environments for a fixed number of time steps, $t_{max}$, before accumulating gradients and updating a global network. See the original paper [7] for more details on how this is implemented.

3.2 Relational Networks

The RN module is designed with the capacity to reason about the pairwise relations in a set of objects. Consider a set of objects $O = \{o_1, o_2, \ldots, o_n\}$. Here, each object $o_i$ is an $m$-dimensional vector, $o_i \in \mathbb{R}^m$. Additionally, we consider a condition, $q$, represented as a $d$-dimensional vector. The RN is expressed as a composite function,

$$\mathrm{RN}(O, q) = f_\phi\left(\sum_{i,j} g_\theta(o_i, o_j, q)\right) \quad (1)$$

Here, $O$ and $q$ are defined as above, and $f_\phi$ and $g_\theta$ are multilayer perceptrons (MLPs) with weights $\phi$ and $\theta$, respectively. In this formulation, the role of $g_\theta$ is to compute a relation vector corresponding to the relationship between two objects under a given condition. The role of $f_\phi$ is to construct an output based on all relations by operating on the sum of all relation vectors.

Object representation vectors can be directly provided (as is the case for our inputs) or generated by another neural network module (such as a CNN), as demonstrated in [11]. The input size of $g_\theta$ is $2m + d$ (two objects plus the condition), and the network may be several layers deep. Naturally, the input size of $f_\phi$ must be equal to the output size of $g_\theta$, and $f_\phi$ may also consist of multiple layers. The result is a simple, end-to-end differentiable neural network module that can effectively reason about object relations.
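A forward pass through the conditioned RN of Equation 1 can be sketched in a few lines. This is our own toy illustration with randomly initialized MLPs and arbitrary sizes, not the architecture from [11] or Section 4.3:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random (W, b) parameters for each layer of an MLP."""
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Apply a ReLU MLP given a list of (W, b) layer parameters."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b

m, d, rel_dim, out_dim = 5, 3, 16, 4            # toy sizes, not the paper's
g_theta = init_mlp([2 * m + d, 32, rel_dim])    # input: (o_i, o_j, q)
f_phi = init_mlp([rel_dim, 32, out_dim])        # input: summed relation vectors

def relational_network(objects, q):
    """RN(O, q) = f_phi( sum_{i,j} g_theta(o_i, o_j, q) )."""
    relations = [mlp(g_theta, np.concatenate([o_i, o_j, q]))
                 for o_i in objects for o_j in objects]
    return mlp(f_phi, np.sum(relations, axis=0))
```

Because the relation vectors are pooled by a sum, the output is invariant to the ordering of the input objects, which is what allows the module to accept a variable number of objects.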

4 Deep Multi-view Controller

4.1 Problem formulation

We consider a master-worker setup with a single stationary master camera which provides an overview of a scene of vehicles and multiple active cameras with a narrow field of view. Our goal is to view a maximum number of vehicles at a specified high-resolution. These vehicles may be moving or stationary and can exit or enter the scene at any point in time throughout the scenario. The scenario eventually ends, but the specific time that this happens is unknown to the agent.

We created an abstract simulation environment to enable the effective use of model-free reinforcement learning techniques. The developed simulation uses bounding boxes to represent the vehicles in a scene and the camera views. This abstraction generalizes the scenario to any sensor with a rectangular view and objects with similar movement patterns to vehicles. Objects within the simulation can randomly switch between moving and remaining stationary. Moving objects randomly turn by adjusting their direction continuously and may randomly reverse directions. The sensors within the simulation can select between five possible actions (do nothing, move up, move down, move left, and move right). The agent receives a positive reward whenever an object is captured at high-resolution by an active camera for the first time.
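To make the reward structure concrete, here is a heavily simplified, hypothetical sketch of such a simulation with a single sensor, point objects on a grid, and the five actions named above. The class, constants, and capture radius are our own illustration, not the paper's environment:

```python
import random

ACTIONS = ["nothing", "up", "down", "left", "right"]
MOVES = {"nothing": (0, 0), "up": (0, 1), "down": (0, -1),
         "left": (-1, 0), "right": (1, 0)}

class AbstractSensorEnv:
    """Toy version of the bounding-box simulation: one sensor, point objects."""
    def __init__(self, n_objects=10, size=20, seed=0):
        self.rng = random.Random(seed)
        self.size = size
        self.sensor = [size // 2, size // 2]   # sensor view center
        self.objects = [[self.rng.randrange(size), self.rng.randrange(size)]
                        for _ in range(n_objects)]
        self.captured = [False] * n_objects

    def step(self, action):
        dx, dy = MOVES[ACTIONS[action]]
        self.sensor[0] = min(max(self.sensor[0] + dx, 0), self.size - 1)
        self.sensor[1] = min(max(self.sensor[1] + dy, 0), self.size - 1)
        reward = 0.0
        for i, (ox, oy) in enumerate(self.objects):
            # +1 reward the first time an object falls inside the sensor view.
            if not self.captured[i] and abs(ox - self.sensor[0]) <= 1 \
                    and abs(oy - self.sensor[1]) <= 1:
                self.captured[i] = True
                reward += 1.0
        return reward
```

The full environment additionally randomizes object motion, view sizes, and time scale, and tracks bounding boxes rather than points.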

Figure 1: Simulation environment used for training the agent. The light boxes represent sensor views within the environment. The green rectangles represent objects in the scene that have been captured at high-resolution, while the black rectangles represent objects that have yet to be captured at high-resolution.

To increase the observability of the environment, the simulation environment marks vehicles that have already been captured at high-resolution. This can be visualized in Figure 1, where marked vehicles are shown in green. The simulation environment randomizes the number of objects and sensors, the object and sensor view sizes, the movement speeds of the sensors, and the time scale of observations by the agent. The purpose of this randomization is to increase the likelihood that a policy trained within the simulation environment will generalize to a real-world scenario. We draw inspiration from recent work in which a robotic arm trained in a randomized, non-photo-realistic simulation environment is able to perform its task in a real-world setting without additional training [14].

The abstract representation was chosen because recent work in computer vision allows us to translate a real-world scenario into the same representation. We assume access to sensor-view registration and an object detection and tracking system. While these systems are not trivial to implement, they allow the agent to avoid keeping track of each previously detected object in the scene for the duration of the scenario. The absence of a tracking algorithm would require the agent to implicitly track each object in the scene. Further, a system making use of the sensor network would likely be able to use an explicit tracking system for additional purposes.

4.2 A3C with Multiple Agents

Our agent is based on the A3C algorithm from [7]. We use multiple parallel simulation environments and update a global network by accumulating gradients computed from sample observations in each simulation environment. We enable the use of multiple sensors by controlling each sensor with a different instance of the same agent. We provide each instance of the agent with a similar input state, but mark a different sensor as the controlled sensor. In each simulation environment, we use the same agent to control each individual sensor, but only perform updates to the agent based on a chosen main agent in the simulation environment. The main agent receives credit for the reward received by all other agents within the environment. This was intended to increase cooperation between the agents in the scene by eliminating the incentive for agents to compete to capture the same vehicles at high-resolution.

4.3 Model Architecture

Figure 2: Graphical representation of network architecture used in our agent.

We base our network architecture on the A3C architecture used in [7]. However, we make a significant modification by replacing the convolutional neural network (CNN) used to process the input state with a modified Relational Network (RN) module.

The objects used by the RN are the object representations for each sensor and object in the scene from the last 4 time steps. Each object is represented by a 4-dimensional vector describing the bounding box of the object, concatenated with a 1-dimensional vector representing the type of object. Bounding boxes are represented as a vector containing the xy-coordinates of the object's center, normalized to fall between -1 and 1, and the width and height of the object, normalized to fall between 0 and 1. Concatenating these per-step object vectors over the last 4 time steps results in a 20-dimensional object representation vector.
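The per-step encoding (4 bounding-box values plus 1 type value, stacked over 4 steps) can be sketched as follows; the function names and the box convention (x, y, width, height with origin at the corner) are our own assumptions:

```python
import numpy as np

HISTORY = 4  # number of past time steps concatenated

def encode_box(box, scene_w, scene_h, obj_type):
    """5-dim per-step encoding: normalized center (x, y), width, height, type."""
    x, y, w, h = box
    cx = 2.0 * (x + w / 2.0) / scene_w - 1.0   # center x in [-1, 1]
    cy = 2.0 * (y + h / 2.0) / scene_h - 1.0   # center y in [-1, 1]
    return np.array([cx, cy, w / scene_w, h / scene_h, obj_type])

def object_vector(box_history, scene_w, scene_h, obj_type):
    """Concatenate the last HISTORY per-step encodings: 4 x 5 = 20 dims."""
    steps = [encode_box(b, scene_w, scene_h, obj_type)
             for b in box_history[-HISTORY:]]
    return np.concatenate(steps)
```

Stacking the recent history gives the otherwise feed-forward network access to motion information such as object velocity.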

To optimize memory and computation time, we only consider relations between objects in which at least one of the objects is a sensor. Additionally, we do not show the agent objects which have already been captured at high-resolution. The relations are conditioned upon the vector representation of the sensor that is being controlled by the agent. This results in a 60-dimensional input vector for each relation. We pass each relation vector through an MLP with 3 fully-connected layers of sizes 128, 256, and 256, respectively. This is the MLP represented by the function $g_\theta$ in Equation 1. We then perform an element-wise sum operation and pass the result through a fully-connected layer of size 256 with 2% dropout. Two separate fully-connected layers are applied to the resulting output to produce a vector of five action probabilities and an estimate of the value function. This is the MLP represented by the function $f_\phi$ in Equation 1.
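The restriction to sensor-involving pairs, conditioned on the controlled sensor, can be sketched as follows. This is our own illustration (it assumes captured objects are filtered out upstream, and it keeps both orderings of sensor-sensor pairs, which is a simplification):

```python
import numpy as np

def build_relations(sensors, objects, controlled_idx):
    """Form conditioned relation inputs (s, e, q): only pairs that include a
    sensor, conditioned on the controlled sensor's 20-dim representation q."""
    q = sensors[controlled_idx]
    entities = sensors + objects   # captured objects already excluded upstream
    pairs = []
    for s in sensors:
        for e in entities:
            if e is s:
                continue           # skip the degenerate self-pair
            pairs.append(np.concatenate([s, e, q]))   # 20 + 20 + 20 = 60 dims
    return pairs
```

Restricting the pair set reduces the quadratic cost in the number of scene entities to a cost linear in the number of objects per sensor.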

The entire architecture can be visualized in Figure 2. The element-wise sum operation in the RN module allows us to represent a dynamically sized input as a fixed-size vector. This vector is then used to generate the policy and value networks as done in the traditional A3C design. Arbitrary input size into the network is particularly important, because we have an arbitrary number of objects and sensors within the scene. In addition to feeling less natural, attempts to process our input using convolutional and recurrent neural networks resulted in significantly worse performance and increased running times.

5 Training and Experiments

Figure 3: Smoothed performance over the course of training 4 agents.

We trained our agent using 16 parallel simulation environments. The numbers of sensors and objects in each scene were selected from discrete uniform distributions with ranges of 1 to 5 and 1 to 50, respectively. We optimized three hyperparameters: the learning rate, the entropy regularization constant, and the reward discount factor. We trained 4 agents with random hyperparameters from a limited range and selected the highest-performing agent. A graph of this training process over the course of a week is shown in Figure 3. We observed instability in training with certain hyperparameters, as can be seen with the yellow agent in Figure 3, which failed to learn a policy significantly better than simply taking random actions.

We evaluate our algorithm's performance by comparing with random movement and with a hand-crafted "lawn mower" method. We devised two random movement strategies which select randomly and uniformly among the possible actions. One method includes the "do nothing" action, in which the camera simply remains in the same position for a time step, while the other assigns zero probability to this action, resulting in a slight performance increase. Under the "lawn mower" method, each active sensor view systematically covers the entire scene by moving up and down in columns, moving to the side for a single time step after reaching the top or bottom of the scene. The performance achieved by each method is shown in Table 1. We can see that our agent performs significantly better than both the random and "lawn mower" strategies. Note that in certain scenarios within the simulation environment it may be impossible to capture 100% of the objects at high-resolution, because objects may move out of the scene before any sensor is able to view them. We do not place a large emphasis on the specific percentage of vehicles captured, as the performance of the agent can be easily manipulated by adjusting the parameters of the environment, such as the movement speed and view size of the active cameras.
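The "lawn mower" baseline amounts to a tiny state machine per sensor: sweep vertically until an edge, sidestep one column, and reverse direction. A minimal sketch (action indices and function signature are our own assumptions):

```python
UP, DOWN, RIGHT = 1, 2, 4   # hypothetical action indices

def lawn_mower_step(y, top, bottom, going_up, just_turned):
    """One step of the column-sweep policy.

    Returns (action, going_up, just_turned) so the caller can carry the
    sweep state forward to the next time step.
    """
    at_edge = (going_up and y >= top) or (not going_up and y <= bottom)
    if at_edge and not just_turned:
        # Sidestep one column and flip the vertical direction.
        return RIGHT, not going_up, True
    action = UP if going_up else DOWN
    return action, going_up, False
```

A symmetric rule (moving left after reaching the last column) would be needed to cover the scene repeatedly; we omit it for brevity.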

Agent                        Percentage of Objects Captured at High-Resolution
Random with "do nothing"     44.75%
Random                       46.28%
"Lawn mower" method          64.44%
Ours (stochastic)            82.74%
Ours (deterministic)         84.75%

Table 1: Percentage of objects viewed at high-resolution over 100 episodes for our agents and several baseline methods. The stochastic version of our algorithm samples actions from the generated distribution. The deterministic version simply selects the action with the largest probability in the generated distribution, resulting in a slight increase in performance.

As an additional method of evaluating our learned policy, we look at the contribution of each relation considered by the agent towards the selected action distribution. This was calculated using the gradient of the KL divergence between the uniform action distribution and the selected action distribution, taken with respect to a vector that is multiplied entry-wise with the computed relation vectors in the relational network, and evaluated at the all-ones vector. Effectively, this results in a number scaled proportionally to the amount that each relation contributed to the chosen action distribution. The resulting contribution computation over several time steps in a scene can be visualized in Figure 4. Note that the sensor has learned to place high importance on the relations between itself and objects that are close to it. For example, at time step 16, we see that nearly all the focus of the policy is on the two objects closest to the controlled sensor. In this instance it chooses to capture the object on the left at high-resolution first. However, we see that the relationship with the object to the right of the controlled camera contributes negatively to the agent's choice.
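This contribution score can be approximated numerically. Below is a simplified sketch of our own: it uses one scalar mask per relation (rather than a full entry-wise vector) and finite differences (rather than exact gradients), and `policy_head` stands in for whatever layers follow the element-wise sum:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    """KL divergence D(p || q) for discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def relation_contributions(relations, policy_head, eps=1e-4):
    """Score relation i by d KL(uniform || pi(mask)) / d mask_i at mask = 1,
    where mask_i scales relation i before the sum (finite differences)."""
    n = len(relations)
    uniform = np.full(5, 0.2)   # five possible actions

    def pi(mask):
        pooled = np.sum([m * r for m, r in zip(mask, relations)], axis=0)
        return softmax(policy_head(pooled))

    base = kl(uniform, pi(np.ones(n)))
    scores = []
    for i in range(n):
        mask = np.ones(n)
        mask[i] += eps
        scores.append((kl(uniform, pi(mask)) - base) / eps)
    return np.array(scores)
```

In a full implementation the exact gradient would be obtained through the network's automatic differentiation machinery rather than by perturbation.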

Figure 4: Visualization of the contribution of the relation between pairs of objects towards the chosen action under the learned policy. Relationships that contribute strongly to the chosen action are shown in green, while relationships that contribute negatively to the chosen action are shown in red. The controlled sensor is shown in cyan, while other sensors are shown in blue. The objects in the scene that have not yet been viewed at high-resolution are shown in black, while the objects that have are hidden.

We attempt to validate the ability of our learned policy to generalize by constructing a pseudo real-world test environment. This involved a real-world video stream which was treated as the sensor reading from an overview camera. We applied real-time object detection and tracking to the video stream to simulate the inaccuracies of a real-world environment. The individual sensor views were simulated similarly to the training simulation environment. Qualitative results from this experiment can be found in Figure 5. Note that the object detector does not detect all vehicles and the tracker occasionally loses track of its targets. This increases the difficulty of the reconfiguration task, while also providing a good test of the problems that may be encountered in a real-world environment.

Figure 5: Visualization of the learned policy operating in a non-simulated environment. The white bounding boxes represent sensor views; the vehicles in the scene that have not yet been viewed at high-resolution are shown in black, while the vehicles that have are shown in green.

6 Conclusion

We have shown that deep reinforcement learning can be applied to the problem of visual sensor network reconfiguration by training within a simulated environment. Although our results suggest that applying reinforcement learning to sensor network reconfiguration is feasible, much future work is needed to reach a viable real-world solution. Several engineering challenges remain, including running object detection and tracking algorithms and controlling multiple cameras with low latency.

Additional future work is needed to improve the collaboration between multiple sensor controllers. Traditional reinforcement learning algorithms tend to ignore the case in which multiple agents are collaborating on a task. Our policy did not appear to learn many collaborative strategies beyond typically not overlapping sensor views. We can verify this by observing that the relations between the controlled sensor and other sensors in the scene do not seem to contribute to the selected action distribution, as shown in Figure 4. Applying methods from recent research in multi-agent reinforcement learning, such as MADDPG [6], may increase the performance of the sensor network by increasing the level of collaboration between individual sensors in capturing all vehicles at high-resolution.

References

  • [1] A. Del Bimbo, F. Dini, G. Lisanti, and F. Pernici. Exploiting distinctive visual landmark maps in pan–tilt–zoom camera networks. Computer Vision and Image Understanding, 114(6):611–623, 2010.
  • [2] A. Kansal, W. Kaiser, G. Pottie, M. Srivastava, and G. Sukhatme. Reconfiguration methods for mobile sensor networks. ACM Transactions on Sensor Networks (TOSN), 3(4):22, 2007.
  • [3] D. R. Karuppiah, R. A. Grupen, Z. Zhu, and A. R. Hanson. Automatic resource allocation in a distributed camera network. Machine Vision and Applications, 21(4):517–528, 2010.
  • [4] U. A. Khan and B. Rinner. A reinforcement learning framework for dynamic power management of a portable, multi-camera traffic monitoring system. In Green Computing and Communications (GreenCom), 2012 IEEE International Conference on, pages 557–564. IEEE, 2012.
  • [5] K. R. Konda and N. Conci. Optimal configuration of ptz camera networks based on visual quality assessment and coverage maximization. In Distributed Smart Cameras (ICDSC), 2013 Seventh International Conference on, pages 1–8. IEEE, 2013.
  • [6] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. CoRR, abs/1706.02275, 2017.
  • [7] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783, 2016.
  • [8] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
  • [9] C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti. Dynamic reconfiguration in camera networks: A short survey. IEEE Transactions on Circuits and Systems for Video Technology, 26(5):965–977, 2016.
  • [10] C. Piciarelli, C. Micheloni, and G. L. Foresti. Automatic reconfiguration of video sensor networks for optimal 3d coverage. In Distributed Smart Cameras (ICDSC), 2011 Fifth ACM/IEEE International Conference on, pages 1–6. IEEE, 2011.
  • [11] A. Santoro, D. Raposo, D. G. T. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. P. Lillicrap. A simple neural network module for relational reasoning. CoRR, abs/1706.01427, 2017.
  • [12] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. In ICML, 2015.
  • [13] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017.
  • [14] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. CoRR, abs/1703.06907, 2017.
  • [15] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.