Simultaneous Mapping and Target Driven Navigation

Georgios Georgakis, Yimeng Li, and Jana Košecká
Department of Computer Science, George Mason University, Fairfax VA
{ggeorgak, yli44, kosecka}

This work presents a modular architecture for simultaneous mapping and target driven navigation in indoor environments. The semantic and appearance information stored in a 2.5D map is distilled by convolutional neural networks from RGB images, semantic segmentation, and the outputs of object detectors. Given this representation, the mapping module learns to localize the agent and register consecutive observations in the map. The navigation task is then formulated as the problem of learning a policy for reaching semantic targets using the current observations and the up-to-date map. We demonstrate that the use of semantic information improves localization accuracy, and that the ability to store a spatial semantic map aids the target driven navigation policy. The two modules are evaluated separately and jointly on the Active Vision Dataset [1] and Matterport3D environments [3], demonstrating improved performance on both localization and navigation tasks.

1 Introduction

Navigation is one of the fundamental capabilities of autonomous agents. In recent years there has been a surge of novel approaches that study the problem of goal or target driven navigation using end-to-end learning strategies [29, 13, 6, 7]. These approaches use data-driven techniques for learning navigation policies that are deployable in previously unseen environments without constructing an explicit spatial representation of the environment, and they exploit the power of recurrent neural networks for learning predictions from sequences of observations.

Approaches that address navigation and planning by learning spatial representations of the environment often assume perfect localization in both the training and testing stages [9, 17]. The problem of localization and mapping is challenging on its own, and existing approaches that learn spatial representations optimized for localization [10] have been shown to outperform traditional simultaneous localization and mapping (SLAM) methods on the localization task [2].

The presented work investigates the problem of simultaneous mapping and target driven navigation in previously unseen environments. Target driven navigation is the problem of an agent finding its way through a complex environment to a target (e.g., go to the couch). The goal of our work is to exploit a mapping and localization module to guide navigation strategies, relax the assumption of perfect localization, and at the same time endow the map representation with richer semantic information. Towards this end we propose to build and use a spatial allocentric 2.5D map that facilitates both localization and semantic target navigation.

Figure 1: We present a new method for target driven navigation that leverages a 2.5D allocentric map with learned semantic representations suitable for both localization and semantic target driven navigation.

Instead of using a map representation derived directly from pixel values, we propose to learn suitable task-related embeddings from the outputs of an object detector and semantic segmentation.

The proposed method consists of two modules. First, a semantic mapping module inspired by the MapNet architecture [10] is responsible for continuous localization and registration of the agent's observations in the map. Second is a navigation module that uses the partial map and the predicted pose of the agent, along with the current observations, to learn a target reaching policy. In summary, our contributions are as follows:

  • We extend the MapNet [10] approach and endow 2.5D memory with semantically informed features and demonstrate its improvement on the localization task.

  • We show the effectiveness of spatial mapping for target driven navigation and learn the navigation policies for the task.

  • We evaluate both localization and target driven navigation tasks on the real Active Vision Dataset [1] and Matterport3D [3] environments, demonstrating superior performance compared to previous methods.

2 Related Work

Traditional approaches to the mapping and navigation problem focus on 3D metric and semantic mapping of the environment [2], followed by path planning and control. They require building a 2D or 3D map ahead of time before planning, and do not exploit general semantics and contextual cues in the planning and decision stage. In recent years there has been a surge of novel approaches that study the problem of goal or target driven navigation using end-to-end deep reinforcement learning, and that vary in the proposed architectures and reward structures used to train the models [29, 13, 6, 7, 25]. These methods use variations of recurrent neural networks (e.g., LSTMs) with the memory implicitly represented by the hidden state of the model. The majority of the above mentioned methods have no explicit notion of a map or spatial representation of the environment.

Methods that explicitly learn a spatial representation of the environment propose task-dependent differentiable spatial memories [9, 10, 4, 8, 17, 28] and typically focus on goal or target driven navigation, localization, or exploration tasks. For example, Henriques et al. [10] proposed an architecture that dynamically updates an agent's allocentric representation for the task of localization. In Gupta et al. [9] a mapping module fused information from learned image embeddings across multiple views in an egocentric top-view map of the environment. The mapping module was trained for goal point and semantic target based navigation tasks and assumed accurate localization in both the training and testing stages. The authors of [4] train a policy that takes as input a predicted egocentric map and outputs a long-term goal for a planner, while Gordon et al. [8] use a GRU to perform egocentric updates to a local window within a spatial memory given the agent's current location and viewpoint. The spatial memory contains object confidences at each location of a 2D grid, but it does not encode the 3D spatial extent of the objects in the environment and thus cannot take advantage of multi-view information to deal with occlusions. The work of Chen et al. [5] considers the exploration task and constructs a top-view occupancy map by unprojecting 3D points observed in the depth images. The egocentric map is then passed to an exploration policy. With the exception of [8], the learned maps do not consider semantic and contextual information, which has been shown to be important in learning generalizable navigation policies [15, 6].

The effectiveness of semantic components such as semantic segmentation and object detection has been demonstrated in approaches that use recurrent neural networks [7, 11]. For example, Fang et al. [7] use a scene memory comprised of separately embedded observations at different time steps. While this scene memory encodes semantic information, its lack of structure neglects the spatial configurations of the objects and other semantic categories in the scene.

Our work is also related to a large body of work on target driven navigation [15, 19, 27, 29, 26, 6, 13, 24, 25], where existing approaches differ in the level of supervision, model architectures, and tasks. For instance, Mousavian et al. [15] and Ye et al. [26] learn effective representations for navigation using the outputs of object detectors and semantic segmentation in order to enable better generalization to novel environments, and consider navigation policies with the state modelled by an LSTM. Sadeghi et al. [19] focus on collision avoidance for the goal reaching task and rely on a convolutional LSTM to keep track of the goal's position with respect to the agent. The works of Ye et al. [27] and Zhu et al. [29] use deep reinforcement learning to train room-type specific navigation policies using feed-forward architectures, where the goal is provided as an image cropped from the scene. Finally, Das et al. [6] use embeddings of a feed-forward model pre-trained on various tasks (i.e., semantic segmentation, depth prediction) as input observations for training the policy. Even though these methods usually include semantic information as an input, they do not explicitly store it in a spatial memory, and instead use LSTM modules to retain the history of observations and actions.

Figure 2: Overview of our simultaneous target driven navigation and mapping approach for a single timestep. We use as inputs the egocentric observations: the RGB image, the detection masks, and the semantic segmentation. Each input is first projected to a ground grid before a feature embedding is extracted from each grid location. The grids are stacked and passed through a recurrent map registration and update module (see text for more details), which provides the updated map and localization prediction. These, along with the egocentric observations, are passed to a navigation module that extracts and concatenates their embeddings with the semantic target. Finally, the embeddings are passed to an LSTM that predicts the values of the next actions. Orange signifies convolutional blocks, while other colors in the figure denote other feature representations.

3 Approach

Here we present the two modules of our method. First, we describe in detail the spatial map and how it is endowed with semantic information. Then, we outline the navigation policy and explain how the map is used in its training. An overview of the proposed architecture is in Figure 2.

3.1 Learned Semantic Map

We are interested in building an allocentric spatial map that encodes the agent's experiences during navigation episodes. Following the MapNet [10] formulation, the map at time $t$ is represented as a grid of fixed spatial dimensions with a feature embedding at each grid location. Besides an RGB image representation, we also extend the map to include semantic information from object detection masks and semantic segmentation. The inputs are initially projected to an egocentric ground grid. For the projection we use the available depth image and the camera intrinsic parameters to obtain a 3D point cloud from the image points. Each 3D point is then mapped to one of the grid coordinates: $i = \lfloor x / b_x \rfloor$, $j = \lfloor z / b_z \rfloor$, where $(i, j)$ are the grid coordinates, $(b_x, b_z)$ are the dimensions of each bin in the grid, and $(x, y, z)$ are the coordinates of the 3D point. The $y$ coordinate, corresponding to the height of the point, is neglected in this version of our work. Since multiple 3D points project to the same grid cell, the projected inputs are pooled to form a single vector. Specifically, for each input type we get a grid as follows:
RGB image. Given an input image, we obtain a feature map from any backbone CNN (e.g., VGG-16, ResNet50). In order to aggregate the features from different image regions, we perform max-pooling over all feature vectors projected to the same grid cell to yield the final grid.
Detection masks. We run Faster R-CNN [18], pre-trained on COCO [12], and convert the detections to binary bounding box masks. Each channel holds the detection masks of a particular class from COCO, with one channel per available class. We get the grid by averaging over the occurrences of each detected class in a bin.
Semantic segmentation. We use the model of [14], trained on the NYUv2 dataset [21], which outputs a semantic segmentation of an image. Each pixel takes a value between 0 and $L-1$, where $L$ is the number of classes in the NYUv2 dataset. The grid for this observation holds a probability distribution over the semantic labels in each bin.

The different inputs create separate grids, which are each passed through a small CNN comprised of two convolutional layers, providing a per grid cell feature embedding for each input. This step is deliberately applied on the grids rather than the images directly, so that the learned embeddings can capture spatial dependencies present in the map grid. We then stack the outputs of the small CNNs to form the egocentric 2D grid $g_t$ at time $t$:

$$g_t = [\phi_r(g_t^r);\ \phi_d(g_t^d);\ \phi_s(g_t^s)],$$

where $\phi_r$, $\phi_d$, and $\phi_s$ denote the small CNNs applied to the embeddings of the RGB image, detection masks, and semantic segmentation, respectively. The details of the choices of individual parameters are described in the experimental sections.
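As an illustration of the ground projection described above, the following sketch bins a depth image into an egocentric grid using standard pinhole geometry. The grid size, bin size, and lateral centering are placeholder assumptions for the sketch, not the paper's hyper-parameters.

```python
import numpy as np

def project_to_grid(depth, K, grid_size=21, bin_size=0.3):
    """Map each depth pixel to an egocentric ground-grid cell (i, j).

    depth: (h, w) depth image in meters; K: 3x3 camera intrinsics.
    The height coordinate y is computed but neglected, as in the text.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                                   # forward distance
    x = (us - K[0, 2]) * z / K[0, 0]            # lateral offset
    y = (vs - K[1, 2]) * z / K[1, 1]            # height (ignored below)
    # Bin lateral x and forward z into grid cells; center x on the grid.
    i = np.floor(x / bin_size).astype(int) + grid_size // 2
    j = np.floor(z / bin_size).astype(int)
    valid = (z > 0) & (i >= 0) & (i < grid_size) & (j >= 0) & (j < grid_size)
    return i[valid], j[valid]
```

Features of all pixels landing in the same cell would then be pooled (max for RGB features, averaging for detection masks, a label distribution for segmentation) to form the per-modality grids.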

Given this semantically informed representation, we follow the strategy of [10] for the localization and registration stage. In order to register the egocentric grid $g_t$ in the current map $M_{t-1}$, we densely match $g_t$ with $M_{t-1}$ over all possible locations and over multiple orientations. This operation is carried out through cross-correlation (equivalent to convolution in the deep learning literature) and produces a tensor of scores that denotes the likelihood of the agent's position and orientation in the map at time $t$. In practice, multiple rotated copies of $g_t$ are stacked together before the cross-correlation, and the output is passed through a softmax activation function to get the pose estimate $\psi_t$. Before inserting $g_t$ in the map, we need to rotate and translate it according to the localization prediction $\psi_t$. This is achieved through a deconvolution operation between $g_t$ and $\psi_t$, which can be seen as a linear combination of the rotated copies of $g_t$ weighted by $\psi_t$. The result is a tensor that contains the egocentric grid observations at time $t$, is aligned to $M_{t-1}$, and has the same dimensions as $M_{t-1}$.
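The dense matching over positions and orientations can be sketched as follows. For clarity this uses single-channel square grids and 90-degree rotations; the actual model operates on learned multi-channel embeddings with finer orientation bins.

```python
import numpy as np

def localize(map_feat, ego_feat, n_rot=4):
    """Score every candidate pose (rotation, row, col) by cross-correlating
    the egocentric grid with the allocentric map, then apply a softmax over
    all poses to obtain a posterior over the agent's position/orientation."""
    M, m = map_feat.shape[0], ego_feat.shape[0]
    out = M - m + 1                        # number of valid offsets per axis
    scores = np.empty((n_rot, out, out))
    for r in range(n_rot):
        rot = np.rot90(ego_feat, k=r)      # rotated copy of the observation
        for i in range(out):
            for j in range(out):
                scores[r, i, j] = np.sum(rot * map_feat[i:i + m, j:j + m])
    e = np.exp(scores - scores.max())      # softmax over all candidate poses
    return e / e.sum()
```

The argmax of the returned tensor gives the most likely orientation and position of the agent in the map.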

Finally, the localized egocentric map is used to update the current map using a long short-term memory (LSTM) unit. Each location's feature embedding is passed through the LSTM and updated independently. We have also experimented with other update methods, such as averaging the features, but found the informed updating enabled by the LSTM's trainable parameters to be superior. This can also be attributed to the fact that the LSTM learns how to combine the embeddings of the different modalities that comprise the egocentric grid in order to be more effective during localization. The model is trained using a localization loss.
Localization loss. We use the cross-entropy loss to supervise the prediction of $\psi_t$:

$$\mathcal{L}_{loc} = -\frac{1}{T}\sum_{t=1}^{T}\sum_{c=1}^{C} y_{t,c} \log \psi_{t,c},$$

where $y_t$ is a one-hot vector representing the ground-truth pose, $T$ is the length of an episode, and $C$ is the number of classes corresponding to the discrete spatial locations and orientations in the map. We assume the agent starts at the center of the map facing to the right, and all subsequent ground-truth poses are relative to this initial pose.

3.2 Navigation Policy

Our task involves navigation to a semantic target within an unknown environment. It can therefore be formulated as a partially observable Markov decision process (POMDP) $(S, A, O, R, P)$, where the state space $S$ consists of the agent's poses, the action space $A$ consists of a discrete set of actions, and the observation space $O$ is comprised of the egocentric RGB images. The reward $R(s, a, s') = d(s) - d(s')$ is defined as the progress towards the semantic target when the action $a$ executed at state $s$ leads to state $s'$, where $d(s)$ is the number of steps required on the shortest path between a state and the semantic target. Finally, $P$ represents the transition probabilities.
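The progress reward reduces to the drop in shortest-path distance between consecutive states; a trivial sketch, assuming the distances d(s) are precomputed on the environment graph:

```python
def progress_reward(d_s, d_s_next):
    """Reward for executing an action at state s leading to s': the decrease
    in shortest-path steps d(.) to the semantic target. Positive when the
    agent moves closer, negative when it moves away, zero otherwise."""
    return d_s - d_s_next
```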

We are interested in learning a policy that can leverage the rich semantic and structured information in the map. To this end, the input to our policy is the allocentric spatial map which holds all past experiences of the agent during the episode. Since we do not assume perfect localization, the map is accompanied by the pose prediction in order to help the policy to pay attention to the relevant parts of the map.

The learned policy outputs a distribution over the action space given both the egocentric observations and the map (which can be thought of as a spatial memory). Since our focus is navigation in novel environments, the map is being built as the agent moves along, and the policy uses as input the accumulated map up to time $t$. The semantic target is represented as a one-hot vector over the set of classes. Finally, a collision indicator, represented as a single bit, is concatenated to the rest of the inputs in order to encourage the policy to recover after a collision.


Following the work of Mousavian et al. [15], we train our policy model to predict the cost of each action at a certain state and for a given target using an L1 loss:

$$\mathcal{L}_{nav} = \lVert y - \hat{y} \rVert_1,$$

where $y$ is the ground-truth cost and $\hat{y}$ is the predicted cost. Given the definition of the reward, $y$ can only take one of three values: $-1$ if the action takes the agent one step closer to the target, $+1$ if it takes the agent one step further from the target, or $0$ if the distance remains unchanged. The last case is possible since there can be multiple target poses in an episode. If an action leads to a collision, we assign $y = 1$ even though the agent has not moved, while if an action leads to the goal we assign $y = -1$.
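A sketch of this labelling scheme follows; the specific values used for the collision and goal cases are our reading of the description above and should be treated as assumptions:

```python
def action_cost(d_before, d_after, collision=False, reached_goal=False):
    """Ground-truth cost y supervising the L1 loss for a single action.

    d_before/d_after: shortest-path steps to the target before and after
    the action. The collision/goal conventions below are assumptions.
    """
    if reached_goal:
        return -1          # reaching the target gets the favourable cost
    if collision:
        return 1           # penalized as moving away, though agent stayed put
    if d_after < d_before:
        return -1          # one step closer to the target
    if d_after > d_before:
        return 1           # one step further from the target
    return 0               # distance unchanged (multiple target poses)
```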

The policy model is trained in a supervised fashion using an online variant of DAgger [23]. In particular, we first generate training episodes by sampling a random starting point and target in a scene and selecting the actions along the shortest path (the expert policy). At this stage we also sample trajectories along randomly chosen paths in order to increase the coverage of the observation space in the scenes. During training we sample the next minibatch either from the initially generated episodes (expert and random) or by unrolling the current policy to collect new episodes. We start with a high probability of selecting from the initial episodes and gradually decrease this probability with exponential decay.
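The minibatch-source schedule can be sketched as follows; the initial probability and decay rate here are illustrative values, not the paper's settings:

```python
import math
import random

def minibatch_source(step, p0=1.0, decay=1e-4):
    """Decide whether the next minibatch comes from the pre-generated
    (expert/random) episodes or from episodes unrolled with the current
    policy. The probability of using the initial episodes decays
    exponentially with the training step."""
    p_initial = p0 * math.exp(-decay * step)
    return "initial" if random.random() < p_initial else "policy"
```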

To accommodate this training paradigm, the environment is represented by a graph, where the nodes are discrete poses of the agent and the edges represent possible actions. Each pose has corresponding RGB and depth images, and the absence of an edge between two nodes is treated as a collision. In this setting, the shortest path between two nodes can be easily computed and used as supervision.
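Since edges carry actions, expert supervision is just a breadth-first search over the pose graph. A minimal sketch, assuming the graph is encoded as a dict mapping each node to its action-labelled successors:

```python
from collections import deque

def shortest_path_actions(graph, start, goal):
    """BFS over the pose graph; returns the action sequence along a
    shortest path from start to goal, or None if the goal is unreachable.

    graph: {node: {action: next_node}} (an assumed encoding).
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, actions = queue.popleft()
        if node == goal:
            return actions
        for action, nxt in graph[node].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None
```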

Model architecture.

The policy is modeled by a convolutional neural network. The image observations are first passed through a separate network that computes a 128 dimensional embedding. The map and pose estimation are passed through a convolutional layer followed by batch normalization and max-pooling with kernel size 2 and stride 2. The outputs are flattened and passed through a fully connected layer followed by dropout to get the embedding. For the egocentric observation we stack any available images (i.e., RGB, detection masks, or semantic segmentation) and use a pretrained ResNet18 (without the last layer) to extract 512 dimensional features prior to computing the embedding. The one-hot vector that denotes the semantic target is also encoded to a 128 dimensional embedding. Then, all embeddings are concatenated and used as input to an LSTM layer of 512 units. Finally, a fully connected layer predicts the cost of each action.


During inference, instead of directly choosing the action with the lowest cost, we sample from the probability distribution over actions obtained by applying a softmax over the negated predicted costs. This still assigns the highest probability to the action with the lowest predicted cost, but it allows for some flexibility during decision making to avoid getting stuck in confined spaces.
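This inference-time sampling step can be sketched as:

```python
import numpy as np

def sample_action(pred_costs, rng=None):
    """Sample an action from softmax(-costs): the cheapest action is the
    most likely choice, but alternatives retain nonzero probability,
    which helps the agent escape tight spots."""
    rng = np.random.default_rng() if rng is None else rng
    logits = -np.asarray(pred_costs, dtype=float)
    p = np.exp(logits - logits.max())    # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(p), p=p)
```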

Map Model        AVD APE-5   AVD APE-20   Matterport3D APE-20
RGB                    285          800                   993
RGB-SSeg-Det           215          692                   803
SSeg-Det               179          647                   655
Table 1: Localization results on the AVD and Matterport3D datasets using map models trained with different combinations of input modalities. The Average Position Error (APE) is reported in millimeters for episodes of length 5 and 20.
Figure 3: Example inputs from the AVD (top row) and Matterport3D (bottom row) datasets. From left to right we show the RGB image, detection masks and semantic segmentations.
Figure 4: Visualizations of graphs along with target locations from an AVD scene (left) and a Matterport3D scene (right). The different shapes and colors denote different target objects.

4 Experiments

We perform two main experiments: evaluation of the localization accuracy of the trained spatial map (sec. 4.1) and evaluation of the learned navigation policy in unknown environments (sec. 4.2). The proposed method is demonstrated on two publicly available datasets, the Active Vision Dataset (AVD) [1] and Matterport3D [3]. We illustrate input examples from the two datasets in Figure 3.

AVD. This dataset contains around 20,000 real RGB-D images from typical indoor scenes, densely captured by a robot on a 2D grid. Each location on the grid offers multiple views at regular orientation intervals. The data for each scene are organized as a graph where the edges are defined over a discrete set of actions. This provides the ability to simulate the movement of an agent in a scene, with the luxury of having real images as observations.

Matterport3D. This dataset contains visually realistic reconstructions of indoor scenes with varying appearance and layout. We endow the dataset with the same structure as AVD by densely sampling navigable positions at regular intervals on the occupancy map of each scene. At each navigable position, we render RGB images, semantic segmentations, and depth images from different orientations through Habitat-Sim [20]. The images are connected through actions based on their spatial neighborhood and orientations. The resulting dataset contains considerably more images per scene than AVD. We use 17 scenes for training and 5 for testing.

4.1 Localization

To validate the effectiveness of using semantic information for mapping, we present localization results for allocentric maps that were trained with different input combinations and episode lengths of 5 and 20 steps. For both datasets an agent is simulated through random walks in order to collect training episodes. The trained models are applied on episodes not seen during training and are evaluated using the Average Position Error (APE), which measures the average Euclidean distance between the ground-truth pose and the predicted pose.
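The APE metric itself is straightforward; a minimal sketch over the 2D positions of an episode:

```python
import numpy as np

def average_position_error(gt_poses, pred_poses):
    """Average Euclidean distance between ground-truth and predicted 2D
    positions over an episode, in the map's metric units."""
    gt = np.asarray(gt_poses, dtype=float)
    pred = np.asarray(pred_poses, dtype=float)
    return float(np.linalg.norm(gt - pred, axis=1).mean())
```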

Implementation details.

The RGB image is passed through a pretrained ResNet50 truncated to its first 11 layers. The small CNNs for each input grid modality are realized with two convolutional layers each, with the embedding dimension depending on the modality. The number of units of the LSTM corresponds to the sum of the embedding dimensions of the modalities used for the particular experiment. The remaining hyper-parameters of the map are the map dimensions, the egocentric observation grid dimensions, the number of rotations, and the grid cell size.

Results and discussion.

The results are illustrated in Table 1, where semantic segmentation is denoted as SSeg and detection masks as Det. There is no direct comparison to the results of [10], since the exact train and test sets are not provided; however, our map model trained only with RGB can be considered analogous to the MapNet trained in [10]. We observe that the model with the lowest localization error in both the 5 and 20 step cases is SSeg-Det, which does not use RGB information. This can be attributed to the fact that the RGB image representation needs to capture view-invariant properties, which is difficult to achieve, in order for the egocentric ground grid to be accurately matched to the allocentric map. This is not necessary in the case of SSeg-Det, since it operates on recognition outputs. The effect is also highlighted in the Matterport3D results, where the images are synthetically generated and the average position error difference is even more in favour of SSeg-Det. This effectively demonstrates that a scene can be memorized with respect to only semantic information such that it is useful for re-localization. In fact, when the RGB images are added, the error slightly increases. It is also important to note that the authors of [10] demonstrated superior performance of their approach with respect to traditional ORB-SLAM [16] on AVD using only RGB images; we show that additional semantic information further improves the localization ability of the agent.

Method               AVD Succ. (%)   AVD Path Len. Rat.   MP3D Succ. (%)   MP3D Path Len. Rat.
Random Walk               24.3              4.4                13.6              10.7
Non-learning              35.3              9.3                42.8               3.6
No-Map-AVD [15]           48.0               -                   -                 -
No-Map-SUNCG [15]         54.0               -                   -                 -
Ours                      64.6              1.9                69.5               2.3
Table 2: Results of semantic target navigation in novel scenes in AVD and Matterport3D.
Figure 5: Qualitative navigation results on the AVD dataset. Each row corresponds to a different episode. From top to bottom, the target object is the fridge, dining table, and TV. For each step of an episode we present the map with the agent’s trajectory up to that time, the agent’s orientation, RGB image and detection masks. Notice that in all three episodes the agent moves quickly towards the target once it is detected and placed in the map.

4.2 Navigation to semantic target

Here we investigate the effectiveness of the proposed target driven navigation policy. Our objective in this experiment is twofold. First, we would like to demonstrate the effect of using a spatial map in comparison to LSTM policies that do not use a map, and second, investigate navigation policies that are learned with spatial maps of different modalities.

The training procedure is as follows. First, the spatial map model is trained with the localization objective. The training of the navigation policy then uses the mapping module to update the map and predict the agent's pose at each step; during this procedure, the mapping module is fine-tuned through the navigation objective. Note that at the beginning of each episode the map contains no information. Unless otherwise specified, we train the navigation policy using the SSeg-Det map model for AVD, while the RGB-SSeg-Det map model is utilized when learning the policy on Matterport3D.

For both AVD and Matterport3D, five semantic targets are identified: {dining_table, refrigerator, tv, couch, microwave}. Figure 4 presents examples of scene graphs with marked target objects. In the case of AVD we compare our approach to [15]; we therefore follow their train/test split of environments (11 scenes for training and 3 for testing) and use the target locations of the object categories they provide. As mentioned in sec. 3.2, the training data are generated using DAgger with 55,000 and 88,000 initial training episodes for AVD and Matterport3D respectively. For evaluation, the percentage of successful episodes is reported along with the average path length ratio. An episode is successful when the agent ends within 5 steps of a target pose. The path length ratio is the ratio between the predicted path length and the shortest path length, and is calculated only for successful episodes. The maximum number of steps for each episode is 100.
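The two evaluation quantities can be computed per episode set as follows; the tuple encoding of an episode is an assumption for the sketch:

```python
def navigation_metrics(episodes):
    """Compute success rate (%) and average path length ratio.

    episodes: list of (success: bool, path_len, shortest_len) tuples.
    The path length ratio is averaged over successful episodes only,
    following the evaluation protocol described above.
    """
    successes = [e for e in episodes if e[0]]
    rate = 100.0 * len(successes) / len(episodes)
    plr = (sum(p / s for _, p, s in successes) / len(successes)
           if successes else float("nan"))
    return rate, plr
```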

Comparison to other policies.

To demonstrate our method’s superiority to policies which do not use a spatial map we have defined three baselines:
Random walk. The agent chooses random actions until it reaches the target.
Non-learning baseline. Similarly to [15], the agent chooses random actions until the target object is detected, in which case the agent computes the shortest path to the target. Note that for this baseline, the agent has full knowledge of the environment and its graph once it detects the target.
Learned LSTM policy without a map. The method proposed in [15]. The policy is learned using only egocentric observations without any spatial memory. We compare to a policy trained on AVD that uses detection masks, denoted No-Map-AVD, and a policy trained on both AVD and the large synthetic dataset SUNCG [22] that uses detection masks and semantic segmentations, referred to here as No-Map-SUNCG. The latter is the best performing method reported in [15], as it leverages the vast amounts of data offered by SUNCG's 200 synthetic environments.

Results are shown in Table 2. On the AVD dataset the presented approach outperforms the second best method by a clear margin, without any use of complementary synthetic data during training. The biggest advantage of our method compared to the other learned baselines is that our agent has access to semantic information stored in a structured memory that corresponds to the history of observations. This reduces the amount of information the LSTM must retain and helps during the optimization of the policy. Furthermore, the lower average path length ratio compared to the Non-learning baseline in both datasets suggests that our navigation model learns contextual cues from the semantic map that reduce the time spent searching for the target, even though that baseline follows an optimal path once the target is detected. In addition, as demonstrated in Figure 5, the agent learns to quickly move towards the target upon detection. Note that the average path length ratio is not directly comparable to the other learned baselines, since they require a stop action to end an episode, which we do not use.

Model Success Rate (%)
1. Nav-RGB 60.1
2. Nav-RGB-Det 61.1
3. Nav-RGB-SSeg-Det 63.2
4. Nav-SSeg-Det 64.6
5. Nav-RGB-NF 58.2
6. Nav-SSeg-Det-NF 60.4
7. Nav-RGB-NF-NE 53.9
8. Nav-SSeg-Det-NF-NE 56.9
Table 3: Results of our ablation study on AVD, illustrating the performance of navigation models trained with different map models, without fine-tuning the map (NF), or without using any egocentric observations (NE).
Model Success Rate (%)
1. Nav-SSeg-Det 62.9
2. Nav-RGB 66.7
3. Nav-RGB-SSeg-Det 69.5
4. Nav-SSeg-Det-NF 59.7
5. Nav-RGB-NF 64.2
Table 4: Results of our ablation study on Matterport3D, illustrating the performance of navigation models trained with different map models and without fine-tuning the map (NF).

Ablation Study

In this section we attempt to get a better understanding of how certain components of our method contribute to the overall performance.

Variations of spatial map modalities. Here we pose the question of which map feature representations are most suitable when learning to navigate. To this end, we trained multiple navigation models using spatial maps that were learned with different modality inputs. Results are presented in lines 1-4 of Table 3 for AVD, and lines 1-3 of Table 4 for Matterport3D. In the case of AVD, we notice that Nav-SSeg-Det outperforms Nav-RGB. This validates our assumption that navigating to a semantic target can be successful using a map with purely semantic features. Note also that the map model SSeg-Det demonstrated the lowest localization error (see Table 1). However, for Matterport3D we observe that Nav-RGB outperforms Nav-SSeg-Det, while the best model is the one using all modalities (Nav-RGB-SSeg-Det). This can be explained by the fact that the detections are very noisy due to the artifacts in Matterport3D's images, and hence are not as reliable when training the policy. Another reason could be that Matterport3D has larger scenes and object encounters are sparser, therefore providing less useful information in the map.

Joint training. To see the effect of fine-tuning the mapping module, we re-trained selected navigation policies while keeping the map network parameters frozen. The results are in lines 5, 6 of Table 3 for AVD and lines 4, 5 of Table 4 for Matterport3D, where models trained without fine-tuning are denoted by the suffix NF. There is a consistent reduction in performance when fine-tuning is not performed, in both datasets. This shows that the map embeddings learned during the initial training of the map module are not immediately applicable to navigation and further adjustment is required.

Effect of egocentric observations. We rely mainly on the allocentric map and pose prediction to decide future actions, which requires a depth sensor during the ground projection step. However, in cases where the agent is very close to an object and therefore outside the effective range of the depth sensor, the projection is unreliable. We argue that using complementary egocentric observations as input to the navigation policy can help mitigate this problem. Results of models trained with and without egocentric observations are reported in lines 7, 8 of Table 3 for AVD. We observe a degradation in performance compared to lines 5, 6 of the same table, which validates our assumption.

5 Conclusions

We have presented a new method for simultaneous mapping and target driven navigation in novel environments. The mapping component leverages the outputs of object detection and semantic segmentation to construct a spatial representation of a scene that contains semantic information. This representation is then used to optimize a navigation policy that takes advantage of the agent's experiences during an episode, encoded in the allocentric spatial map. Experiments on AVD and Matterport3D environments demonstrate that our approach outperforms RGB-only baselines on the localization task, and non-mapping baselines on target-driven navigation.

