
Maolin Xu* , Jiaxing Wei* and Hongling Xiu*

Research on the Basic Rodrigues Rotation in the Conversion of Point Clouds Coordinate System

Abstract: In order to solve the problem of point cloud coordinate conversion for non-directional scanners, this paper proposes a basic Rodrigues rotation method. Specifically, we convert the 6 degree-of-freedom (6-DOF) rotation and translation matrix into a uniaxial rotation matrix, and establish the objective vector conversion equation based on the basic Rodrigues rotation scheme. We demonstrate the applicability of the new method by using a bar-shaped emboss point cloud as experimental input and the three-axis error and three-term error as validation indicators. The results suggest that the new method does not need linearization and is suitable for an arbitrary rotation angle. Meanwhile, the new method achieves seamless splicing of point clouds. Furthermore, the coordinate conversion scheme proposed in this paper is shown to be superior by comparison with the iterative closest point (ICP) conversion method. Therefore, the basic Rodrigues rotation method is not only a suitable tool for the conversion of point clouds, but also provides reference and guidance for similar projects.

Keywords: Basic Rodrigues Rotation , Engineering Coordinate System , Instrument Coordinate System , Three-Axis Error , Three-Term Error

1. Introduction

In the workflow of point clouds reconstruction and extraction, the conversion of coordinate systems has always been a persistent issue. Consequently, it is necessary to convert coordinates from the scanning coordinate system to the specified coordinate system. For the purpose of illustration, there are three kinds of point cloud coordinate systems. The first is the instrument coordinate system, which applies to non-directional scanners. The second is the camera coordinate system, which is mainly determined by the camera position. The third is the engineering coordinate system, which is usually oriented to directional scanners [1,2]. In general, converting instrument coordinates to engineering coordinates requires a coordinate system conversion. Against this background, the core of point clouds coordinate conversion is the solution of the conversion parameters [3,4]. The conversion parameters are divided into rotation, translation and scale parameters. Among them, the scale parameter is set to 1 in most previous studies. Over the past decade, various strategies have been proposed to calculate these conversion parameters. For instance, Shepperd [5] and Lohmann [6] suggested that the translation parameters can be computed from common points. In contrast, the determination of the rotation parameters is more complex and important. As several studies noted, the traditional 6 degree-of-freedom (6-DOF) model has shortcomings in nonlinearity and large-angle initialization [7,8]. Although the least square method based on the 6-DOF model is a rigorous scheme and does not produce error accumulation, the spatial complexity of calculating the conversion parameters is quite high, as [9-11] illustrated. The modified Rodrigues method overcomes the above problems by replacing the rotation matrix with a constant matrix. However, it still involves complicated steps and is not a robust model [12-14].

To solve the above problems, this paper proposes a basic Rodrigues rotation method. In brief, the basic Rodrigues rotation method abandons the complicated parameterization work and effectively reduces the generation of the rotation matrix by controlling a single-axis rotation. This means that the conversion between the original coordinate system and the target coordinate system relies on only one arbitrary rotation angle. Specifically, we install the laser scanner at a known point, so the translation matrix is fixed as zero. In other words, the point clouds only need to rotate horizontally about the center of the scanner. Therefore, we can determine the rotation angle easily and establish the corresponding equation to achieve the coordinate conversion. Our contributions can be summarized as follows:

We propose a basic Rodrigues rotation scheme to overcome the disadvantages of model initialization and large-angle parameterization in point clouds coordinate system conversion, and establish the equation of the basic Rodrigues coordinate conversion.

We adopt a bar-shaped emboss point cloud as input to demonstrate the applicability of the new method, and excellent accuracy is obtained by validating the three-axis error and three-term error.

We conclude that the basic Rodrigues rotation equation is able to achieve the conversion and seamless splicing of point clouds, and that it outperforms the iterative closest point (ICP) coordinate conversion method.

This paper is structured as follows. Section 2 discusses the related work. Section 3 introduces the basic Rodrigues rotation model. Section 4 presents the experiment and results. Section 5 validates the accuracy and stability of the new method. Section 6 concludes the paper.

2. Related Work

2.1 The Basic 6-DOF Transformation Model

Previous studies have suggested that point clouds obtained by the same scanner are not affected by scale, so the scale parameter can be regarded as 1 [15-17]. Similar to the seven-parameter coordinate conversion, the solution of the conversion parameters for point clouds becomes a 6-DOF problem when the scale parameter is 1. The universal coordinate conversion is formulated as follows:

(1)
[TeX:] $$\left[\begin{array}{l} x_{2} \\ y_{2} \\ z_{2} \end{array}\right]=\left[\begin{array}{l} \Delta_{x} \\ \Delta_{y} \\ \Delta_{z} \end{array}\right]+R\left[\begin{array}{l} x_{1} \\ y_{1} \\ z_{1} \end{array}\right]$$

The mathematical meanings of the above variables are the same as in the seven-parameter conversion model. The main difference from the seven-parameter conversion model is that the scale parameter of Eq. (1) is 1. It should be noted that R represents the rotation matrix, which is composed of rotations about the three axes. The matrix R can be described as follows:

Rotation matrix along the z-axis:

(2)
[TeX:] $$R_{1}=\left[\begin{array}{ccc} \cos \alpha & -\sin \alpha & 0 \\ \sin \alpha & \cos \alpha & 0 \\ 0 & 0 & 1 \end{array}\right]$$

Rotation matrix along the x-axis:

(3)
[TeX:] $$R_{2}=\left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \beta & -\sin \beta \\ 0 & \sin \beta & \cos \beta \end{array}\right]$$

Rotation matrix along the y-axis:

(4)
[TeX:] $$R_{3}=\left[\begin{array}{ccc} \cos \gamma & 0 & -\sin \gamma \\ 0 & 1 & 0 \\ \sin \gamma & 0 & \cos \gamma \end{array}\right]$$

In fact, the 6-DOF method is not limited to the conversion of laser point clouds. Previous studies have applied Eq. (1) to the absolute orientation of images in photogrammetry. In that context, the right side of Eq. (1) holds the coordinates in the photographic coordinate system, and the left side of Eq. (1) the coordinates in the ground measurement coordinate system [18,19]. Although the 6-DOF parameterization scheme is applicable for any rotation angle, it still has limitations with large angles when the basic model is initialized.
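
To make the 6-DOF model concrete, the following minimal NumPy sketch builds the rotation matrices of Eqs. (2)-(4) and applies Eq. (1); the multiplication order R = R1·R2·R3, the function names and the numerical values are illustrative assumptions rather than part of the original method.

import numpy as np

def rotation_z(alpha):  # R1 in Eq. (2): rotation about the z-axis
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotation_x(beta):  # R2 in Eq. (3): rotation about the x-axis
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rotation_y(gamma):  # R3 in Eq. (4): rotation about the y-axis
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def transform_6dof(points, angles, translation):
    # Eq. (1): p2 = t + R p1 for every point, with R = R1 R2 R3 (order assumed)
    alpha, beta, gamma = angles
    R = rotation_z(alpha) @ rotation_x(beta) @ rotation_y(gamma)
    return np.asarray(points) @ R.T + np.asarray(translation)

# toy usage: rotate one point 30 degrees about z and shift it
print(transform_6dof([[1.0, 0.0, 0.0]], (np.radians(30), 0.0, 0.0), (10.0, 20.0, 5.0)))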

When the number of input observations is greater than the number of necessary observations, the least square method is usually used to obtain the optimal parameters. Although the least square method based on the 6-DOF model is a rigorous scheme and does not produce error accumulation, the required model linearization remains a drawback in the case of large rotation angles. Therefore, the least square method based on the 6-DOF model is not described in detail here.

2.2 Rodrigues 6-DOF Model

Generally, the 6-DOF coordinate conversion scheme needs an initial value and further linearization. Recent studies have illustrated that the trigonometric function values can be replaced by the three constant variables of the Rodrigues matrix, to bypass the shortcomings of the conventional 6-DOF model [20-22]. Therefore, an antisymmetric matrix is introduced, which is defined as follows:

(5)
[TeX:] $$S_{1}=\left[\begin{array}{ccc} 0 & -c & -b \\ c & 0 & -a \\ b & a & 0 \end{array}\right]$$

According to the properties of Eq. (5) and Rodrigues matrix, the rotation matrix can be expressed as follows:

(6)
[TeX:] $$R=\left(I+S_{1}\right)\left(I-S_{1}\right)^{-1}$$

Then, the matrix R is expanded as:

(7)
[TeX:] $$R=\left[\begin{array}{ccc} 1 & -c & -b \\ c & 1 & -a \\ b & a & 1 \end{array}\right]\left[\begin{array}{ccc} 1 & c & b \\ -c & 1 & a \\ -b & -a & 1 \end{array}\right]^{-1}$$

From Eq. (7), we can get:

(8)
[TeX:] $$R=\frac{1}{\Delta}\left[\begin{array}{ccc} 1+a^{2}-b^{2}-c^{2} & -2 c-2 a b & -2 b+2 a c \\ 2 c-2 a b & 1-a^{2}+b^{2}-c^{2} & -2 a-2 b c \\ 2 b+2 a c & 2 a-2 b c & 1-a^{2}-b^{2}+c^{2} \end{array}\right]$$

where $\Delta$ can be calculated as follows:

(9)
[TeX:] $$\Delta=\left|\begin{array}{ccc} 1 & c & b \\ -c & 1 & a \\ -b & -a & 1 \end{array}\right|=1+a^{2}+b^{2}+c^{2}$$

Eq. (8) is called the constant rotation matrix, but the constant variables need to be calculated from common points. After that, the conversion is achieved by rotating and translating the coordinates. It should also be noted that the conversion model requires three common known points; therefore, the values of the constant variables are ultimately calculated from the differences of point pairs. The Rodrigues 6-DOF matrix overcomes the drawbacks of large rotation angles and model linearization, but the conversion process is still complicated and the model is not optimal. At the same time, the conversion accuracy of the Rodrigues 6-DOF model is not high.
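
As a minimal sketch (with arbitrary placeholder values for a, b, c), the constant rotation matrix of Eqs. (5)-(9) can be built directly from the Cayley form R = (I + S1)(I - S1)^(-1); the check only confirms that the result is a proper rotation matrix.

import numpy as np

def rodrigues_constant_matrix(a, b, c):
    # S1 is the antisymmetric matrix of Eq. (5)
    S1 = np.array([[0.0, -c, -b], [c, 0.0, -a], [b, a, 0.0]])
    I = np.eye(3)
    # Eq. (6): R = (I + S1)(I - S1)^(-1); det(I - S1) equals the Delta of Eq. (9)
    return (I + S1) @ np.linalg.inv(I - S1)

R = rodrigues_constant_matrix(0.1, -0.2, 0.05)
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))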

3. Basic Rodrigues Rotation Model

Inspired by previous studies, this paper proposes a basic Rodrigues rotation model. Unlike the Rodrigues 6-DOF model, the basic Rodrigues rotation model controls the three-axis translation and fixes the rotation about the center of the scanner. The core of the proposed method is that it relies on the variation of the vectors before and after rotation. Additionally, the coordinates of the same points before and after conversion are known and relatively easy to obtain. Therefore, we can adopt the angle recovered from the vector dot product as the optimal rotation angle. The workflow of the basic Rodrigues rotation model is described as follows.

Let the vector before rotation be p and the vector after rotation be q. The relationship between p and q is defined as:

(10)
[TeX:] $$p \cdot q=|p||q| \cos \theta$$

Then the angle between p and q is formulated as:

(11)
[TeX:] $$\theta=\arccos \left(\frac{p \cdot q}{|p||q|}\right)$$

In the instrument coordinate system (x, y, z), the vector p rotates by an angle $\theta$ about the z-axis to obtain the vector q in the target coordinate system, so the plane coordinates rotate as:

(12)
[TeX:] $$\begin{aligned} &x^{\prime}=\cos \theta x-\sin \theta y\\ &y^{\prime}=\sin \theta x+\cos \theta y \end{aligned}$$

Writing p = ax + by + cz in terms of the unit vectors x, y, z and applying the rotation of Eq. (12), the vector q can be formulated as follows:

(13)
[TeX:] $$q=\cos \theta(a x+b y)+\sin \theta(a y-b x)+c z$$

It is known that:

(14)
[TeX:] $$\begin{array}{c} p-(p \cdot z) z=p-c z=a x+b y \\ z \times p=a(z \times x)+b(z \times y)+c(z \times z)=a y-b x \end{array}$$

Substituting Eq. (14) into Eq. (13), we obtain:

(15)
[TeX:] $$q=\cos \theta(p-(p \cdot z) z)+\sin \theta(z \times p)+c z$$

Then, writing $cz=(p \cdot z) z=p-(p-(p \cdot z) z)$, we get:

(16)
[TeX:] $$q=p+\sin \theta(z \times p)+(\cos \theta-1)(p-(p \cdot z) z)$$

The cross product in Eq. (16) can be expressed as follows:

(17)
[TeX:] $$A_{0} p=z \times p=\left[\begin{array}{ccc} 0 & -z_{3} & z_{2} \\ z_{3} & 0 & -z_{1} \\ -z_{2} & z_{1} & 0 \end{array}\right]\left[\begin{array}{l} p_{1} \\ p_{2} \\ p_{3} \end{array}\right]$$

Combining Eqs. (14)-(17), we get:

(18)
[TeX:] $$-(p-(p \cdot z) z)=-a x-b y=z \times(a y-b x) \\ z \times(a y-b x)=z \times(z \times(a x+b y+c z))=A_{0}^{2} p$$

The final coordinate conversion function can be formulated as follows:

(19)
[TeX:] $$q=I p+\sin \theta A_{0} p+(1-\cos \theta) A_{0}^{2} p$$

Since the scanner position and the elevation of the point clouds are correct, the point clouds will not rotate about the x-axis and y-axis. The scanner is installed at a known point, which ensures that the rotation center of each station remains invariant. It should be emphasized that the plane targets need to be scanned at each station. Therefore, we can obtain the vectors before and after conversion and achieve the conversion of the point clouds for each station.
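
A minimal NumPy sketch of the conversion defined by Eqs. (11), (17), and (19) is given below. The vectors are assumed to be expressed relative to the scanner rotation centre, and the sign of the angle is taken from the z-component of p × q; both conventions, like the example numbers, are illustrative assumptions.

import numpy as np

Z_AXIS = np.array([0.0, 0.0, 1.0])

def rotation_angle(p, q, axis=Z_AXIS):
    # Eq. (11): angle between the vector before (p) and after (q) rotation;
    # the sign about the rotation axis is an added assumption
    cos_t = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    return np.sign(np.dot(np.cross(p, q), axis)) * np.arccos(np.clip(cos_t, -1.0, 1.0))

def basic_rodrigues_rotation(points, theta, axis=Z_AXIS):
    # Eq. (17): A0 is the cross-product matrix of the rotation axis
    z1, z2, z3 = axis
    A0 = np.array([[0.0, -z3, z2], [z3, 0.0, -z1], [-z2, z1, 0.0]])
    # Eq. (19): q = p + sin(theta) A0 p + (1 - cos(theta)) A0^2 p
    M = np.eye(3) + np.sin(theta) * A0 + (1.0 - np.cos(theta)) * (A0 @ A0)
    return np.asarray(points) @ M.T

# hypothetical vectors from the rotation centre to one target, before/after conversion
p1 = np.array([3.2, 4.1, 1.0])
q1 = basic_rodrigues_rotation([p1], np.radians(37.0))[0]   # simulated "after" vector
theta = rotation_angle(p1, q1)                             # recovers ~37 degrees
converted = basic_rodrigues_rotation(np.random.rand(1000, 3), theta)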

4. Experiment

4.1 Study Area and Data

The experiment is carried out on a campus, and the point clouds are obtained by scanning an iron culture emboss. The emboss is a bar-shaped structure with a length of 100 m, a thickness of 2 m and a height of 2.5 m. Scanning and modeling such meaningful structures has a significant influence on the protection of culture. The scanning process is presented in Fig. 1.

The point clouds of the bar-shaped emboss are obtained by a MAPTEK I-site 8820 laser scanner. The scanner adopts a modular design with a built-in GPS and electronic compass to meet the requirements of field measurement. The maximum measurement distance of the scanner is about 2,000 m, and the corresponding minimum measurement distance is about 2.5 m. Besides, the angular resolution of the scanner used in this paper is 0.025°, the scanning angle ranges from -80° to 80°, and the scanning frequency can be set to 40 kHz or 80 kHz. In this paper, the four-speed mode at 40 kHz is adopted, and the number of points at each station is about 11,520,000.

Fig. 1. The process of emboss scanning.

The details of the experiment are as follows. The scanner station is about 5 m away from the emboss, and there are in total 4 scanning stations and 10 FARO plane targets. In order to extract the central coordinates of the target points, the plane targets are scanned at close range. In addition, the GNSS-RTK method is used to obtain the CGCS2000 coordinates of the scanner stations with 120 measurement epochs. The TM30 measuring robot is adopted to obtain the coordinates of the 10 FARO plane targets by measuring back and forth. Specifically, the coordinates of the plane target points are obtained by reflectorless (prism-free) measurement. Meanwhile, the errors of the center coordinates of the plane targets are limited to within 3 mm under repeated TM30 measurements. Therefore, the vectors before and after conversion at each station are easily determined. Besides, the average longitude of the campus is about 123°, and there is no super-elevation phenomenon in the projection surface of the experiment domain.

4.2 Point Clouds Coordinate Conversion

As described above, the point clouds are collected at four stations on campus. Under this setup, the point clouds have the following characteristics. The central coordinates of the four scanned stations are correct, which means there is no translation in the point clouds from the four scanning stations. More importantly, the elevation is also correct, and the coordinate systems can be seamlessly spliced by rotating each station by an angle about its z-axis. Then the conversion of the coordinate system can be achieved according to the vectors before and after conversion between the scanner station and the plane target points. For example, the vector from the scanner rotation axis to the center of one FARO target point in the scanning coordinate system is p1, and the vectors of the other point clouds from the scanner rotation axis to the scanning area in the scanning coordinate system are p2, p3, ..., pn. The rotation angle $\theta$ is then calculated by Eq. (11), and the vectors q1, q2, q3, ..., qn in the target coordinate system can be determined by Eq. (19). The vectors in the coordinate system conversion are summarized in Fig. 2.

In Fig. 2, O is the rotation axis of the scanner station, and p and q represent the vectors before and after coordinate system conversion. The basic Rodrigues rotation method actually rotates the point clouds about the z-axis by an angle $\theta$. The entire process of point clouds coordinate system conversion is carried out in the MAPTEK Studio software, and the steps can be described as follows (a code sketch of steps (2) and (3) follows the list):

(1) According to the coordinates of known scanner stations and FARO target points under the target (engineering) coordinate system, extract the coordinates of target points under the instrument (scanning) coordinate system.

(2) Construct the vectors before and after conversion, and calculate the rotation angle between them. For further illustration, the point clouds before conversion are presented in Fig. 3.

(3) Substitute the rotation angle and vectors into the conversion function, and achieve the coordinate conversion of the point clouds for the four stations. For validation purposes, the point clouds after conversion are shown in Fig. 4.

(4) Validate the three-axis error of coordinate system conversion by using the remaining six FARO targets as input.

(5) Evaluate the three-term error of coordinate system conversion, including plane error, elevation error and three-dimensional error.
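
As referenced above, steps (2) and (3) can be sketched as follows; the vectors are assumed to be taken relative to the scanner rotation centre, the engineering coordinates of the station are assumed known from GNSS-RTK, and all variable names are hypothetical.

import numpy as np

def convert_station(points_rel, target_rel_instr, target_eng, station_eng):
    # step (2): vectors before and after conversion, relative to the rotation centre
    p = np.asarray(target_rel_instr, dtype=float)
    q = np.asarray(target_eng, dtype=float) - np.asarray(station_eng, dtype=float)
    cos_t = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    theta *= np.sign(np.cross(p, q)[2])        # sign about the z-axis (assumption)
    # step (3): rotate the whole station about its z-axis, then shift to the engineering frame
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return np.asarray(points_rel, dtype=float) @ Rz.T + np.asarray(station_eng, dtype=float)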

Fig. 2. Point clouds coordinate system conversion.

Fig. 3. The point clouds before conversion.

Fig. 4. The point clouds after conversion.

5. Discussion

5.1 Accuracy Analysis

The engineering coordinates of the target points are taken as the validation dataset, and the converted engineering coordinates of the target points as the prediction dataset. The error validation scheme follows [23-25]. The three-axis errors at the six check targets are presented in Table 1.

Table 1. Three-axis error produced by the basic Rodrigues rotation method

ID    Deviation x (m)    Deviation y (m)    Deviation z (m)
1     -0.002              0.004             -0.007
2      0.006             -0.003              0.003
3      0.008             -0.007              0.007
4      0.002              0.006              0.006
5      0.004              0.007             -0.008
6      0.002              0.011              0.001

As can be seen from Table 1, the maximum deviations of the x, y, and z coordinates are 8 mm, 11 mm, and 8 mm, respectively. Therefore, it can be concluded that the three-axis error is stable in the coordinate system conversion of the point clouds.

However, the three-axis error alone is not enough to validate the conversion results. In this paper, the three-term error is used to further validate the accuracy of the point clouds conversion, including the plane error, elevation error and three-dimensional error. The equations of the three-term error are defined as follows:

The plane error:

(20)
[TeX:] $$m_{x y}=\sqrt{\frac{\sum_{i=1}^{n}\left(\Delta X^{2}+\Delta Y^{2}\right)}{n-1}}$$

The elevation error:

(21)
[TeX:] $$m_{z}=\sqrt{\frac{\sum_{i=1}^{n}\left(\Delta Z^{2}\right)}{n-1}}$$

The three-dimensional error:

(22)
[TeX:] $$m_{x y z}=\sqrt{\frac{\sum_{i=1}^{n}\left(\Delta X^{2}+\Delta Y^{2}+\Delta Z^{2}\right)}{n-1}}$$
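
The three-term error can be computed as a direct transcription of Eqs. (20)-(22); the arrays of predicted and reference coordinates for the n check targets are assumed inputs with placeholder names.

import numpy as np

def three_term_error(pred_xyz, true_xyz):
    d = np.asarray(pred_xyz, dtype=float) - np.asarray(true_xyz, dtype=float)  # residuals
    n = d.shape[0]
    m_xy = np.sqrt(np.sum(d[:, 0]**2 + d[:, 1]**2) / (n - 1))   # plane error, Eq. (20)
    m_z = np.sqrt(np.sum(d[:, 2]**2) / (n - 1))                 # elevation error, Eq. (21)
    m_xyz = np.sqrt(np.sum((d**2).sum(axis=1)) / (n - 1))       # 3D error, Eq. (22)
    return m_xy, m_z, m_xyz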

The plane, elevation and three-dimensional errors obtained by Eqs. (20)-(22) are 11.6 mm, 8.0 mm, and 13.6 mm, respectively. The accuracy of the point clouds conversion reflected by the three-term error is equivalent to the accuracy of single-base-station RTK, which demonstrates the applicability of the proposed method.

Through the basic Rodrigues rotation method, the coordinates of the point clouds at the four stations are converted to the engineering coordinate system. The basic Rodrigues rotation method proposed in this paper does not suffer from the drawback of large-angle initialization; only the rotation angles of the point clouds need to be calculated. Therefore, the new method not only realizes the coordinate system conversion of the point clouds, but also achieves their seamless splicing. More importantly, because the conversion accuracy achieved in this study is relatively high, the subsequent point clouds modelling will be more accurate.

5.2 Reliability Analysis

To further validate the reliability of the basic Rodrigues rotation model proposed in this paper, we adopt the ICP method to splice the point clouds and select the FARO target points to convert the coordinate system. It should also be highlighted that the ICP method is likewise run within the MAPTEK software. The ICP method looks for feature points in similar areas and minimizes the difference between two neighboring point clouds to complete the splicing of the point clouds.
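
For reference, a minimal point-to-point ICP loop is sketched below; it is not the MAPTEK implementation, only an illustration of the nearest-neighbour matching and least-squares (SVD) alignment that such methods iterate.

import numpy as np

def icp_point_to_point(source, target, iterations=20):
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # brute-force nearest neighbours (adequate for a sketch, slow for real clouds)
        dists = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        matched = tgt[np.argmin(dists, axis=1)]
        # best-fit rigid transform between the matched sets (Kabsch/SVD)
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(Vt.T @ U.T) < 0:      # avoid a reflection
            Vt[-1, :] *= -1
        R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src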

In general, the ICP method involves global splicing and requires selecting the location of the initial point clouds. As a consequence, the iterative process increases the time required for point clouds splicing. After the four-station point clouds are spliced, the coordinates of four FARO target points are used for the coordinate system conversion. It should be noted that the selection of the four FARO target points is the same as for the basic Rodrigues rotation method, and the remaining six target points are again used for accuracy analysis. The three-axis error of the ICP method is presented in Table 2.

Table 2 indicates that the maximum deviations of the x, y, and z coordinates are 16 mm, 19 mm, and 15 mm, respectively. Compared with the basic Rodrigues rotation method, there is an obvious increase in the three-axis error produced by the ICP method. In addition, the three-term error of the ICP method is calculated by Eqs. (20)-(22). The plane, elevation and three-dimensional errors of the ICP method are 21.3 mm, 13.7 mm, and 25.3 mm, respectively. The plane and three-dimensional errors of the ICP method are larger than those of our new method, and the elevation error is equivalent. The reason is that the scanner used in this paper has high accuracy in angle and distance measurement, which results in similar elevation errors for the two methods.

Compared with the ICP method for point clouds splicing and coordinate system conversion, the basic Rodrigues rotation method achieves the splicing and coordinate system conversion of the point clouds simultaneously. Therefore, the basic Rodrigues rotation method reduces the error introduced by the manual selection of feature points, and the conversion accuracy is significantly improved. The comparison of the ICP method and the basic Rodrigues rotation method on the same software platform also shows that the running time of the basic Rodrigues rotation method is 1/3 of that of the ICP method, so the basic Rodrigues rotation method is more efficient. Consequently, we can conclude that the basic Rodrigues rotation method has better stability than the ICP method.

Table 2. Three-axis error of the ICP method

ID    Deviation x (m)    Deviation y (m)    Deviation z (m)
1     -0.011              0.014             -0.015
2      0.013             -0.011              0.009
3      0.016             -0.016              0.015
4      0.013              0.013              0.012
5      0.015              0.012             -0.014
6      0.011              0.019              0.008

6. Conclusions

The purpose of this paper is to propose a simple model to achieve the coordinate system conversion of point clouds. Since the coordinate system conversion of point clouds is a basic task preceding point clouds semantic recognition and three-dimensional reconstruction, the subject has become meaningful in various scientific fields. The basic Rodrigues rotation model proposed in this paper has the advantages of requiring no linearization, simplicity and efficiency. Besides, the experimental comparison shows that the new method also performs well in terms of high accuracy and seamless splicing.

In this paper, we use manual selection to determine the scanning coordinates of the feature points. Manual selection undoubtedly increases the error of coordinate extraction and reduces the automation of the workflow, which is a potential limitation that needs to be further improved.

In future work we will focus on the automatic identification of feature points, because machine learning has become a characteristic issue in the big data field. Therefore, our next task is to achieve the automation of point clouds coordinate conversion. Besides, feature recognition is not limited to the conversion of coordinate systems; it has wide applications in change detection and intelligent travel.

Acknowledgement

This paper is supported by the National Key Research and Development Project of China (No. 2016YFC0801602).

Biography

Maolin Xu
https://orcid.org/0000-0002-9469-9021

He is currently a professor of University of Science and Technology Liaoning, Anshan, China. His research interest includes mine monitoring and measurement data processing.

Biography

Jiaxing Wei
https://orcid.org/0000-0002-6818-9271

He is currently a graduate student (master’s degree) of University of Science and Technology Liaoning, Anshan, China. His research interest includes data processing and application of point clouds.

Biography

Hongling Xiu
https://orcid.org/0000-0002-2081-8307

She is currently a graduate student (master’s degree) of University of Science and Technology Liaoning, Anshan, China. Her research interest includes remote sensing image processing and application.

References

  • 1 V. Carbone, M. Carocci, E. Savio, G. Sansoni, L. De Chiffre, "Combination of a vision system and a coordinate measuring machine for the reverse engineering of freeform surfaces," The International Journal of Advanced Manufacturing Technology, vol. 17, no. 4, pp. 263-271, 2001.
  • 2 N. Haala, M. Peter, J. Kremer, G. Hunter, "Mobile LiDAR mapping for 3D point cloud collection in urban areas: a performance test," in Proceedings of the 21st ISPRS Congress, Beijing, China, 2008, pp. 1119-1127.
  • 3 M. Olano, T. Greer, "Triangle scan conversion using 2D homogeneous coordinates," in Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware, Los Angeles, CA, 1997, pp. 89-95.
  • 4 N. J. Mitra, N. Gelfand, H. Pottmann, L. Guibas, "Registration of point cloud data from a geometric optimization perspective," in Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, Nice, France, 2004, pp. 22-31.
  • 5 S. Shepperd, "Quaternion from rotation matrix," Journal of Guidance and Control, vol. 1, no. 3, pp. 223-224, 1978.
  • 6 A. W. Lohmann, "Image rotation, Wigner rotation, and the fractional Fourier transform," Journal of the Optical Society of America A, vol. 10, no. 10, pp. 2181-2186, 1993.
  • 7 C. Gosselin, "Determination of the workspace of 6-dof parallel manipulators," Journal of Mechanical Design, vol. 112, no. 3, pp. 331-336, 1990.
  • 8 L. M. Paz, P. Pinies, J. D. Tardos, J. Neira, "Large-scale 6-DOF SLAM with stereo-in-hand," IEEE Transactions on Robotics, vol. 24, no. 5, pp. 946-957, 2008.
  • 9 I. Bonev, J. Ryu, "A new approach to orientation workspace analysis of 6-DOF parallel manipulators," Mechanism and Machine Theory, vol. 36, no. 1, pp. 15-28, 2001.
  • 10 H. Lim, S. N. Sinha, M. F. Cohen, M. Uyttendaele, H. J. Kim, "Real-time monocular image-based 6-DOF localization," The International Journal of Robotics Research, vol. 34, no. 4-5, pp. 476-492, 2015.
  • 11 H. Kim, S. Leutenegger, A. Davison, "Real-time 3D reconstruction and 6-DOF tracking with an event camera," in Computer Vision – ECCV 2016. Cham: Springer, pp. 349-364, 2016.
  • 12 J. S. Dai, "Euler–Rodrigues formula variations, quaternion conjugation and intrinsic connections," Mechanism and Machine Theory, vol. 92, pp. 144-152, 2015.
  • 13 G. Gallego, Y. Anthony, "A compact formula for the derivative of a 3-D rotation in exponential coordinates," Journal of Mathematical Imaging and Vision, vol. 51, no. 3, pp. 378-384, 2015.
  • 14 Y. Bo, W. Xuan, "Extraction method of structure deformation parameters based on Rodrigue matrix," Geospatial Information, vol. 2016, no. 9, pp. 26-28, 2016.
  • 15 B. Bellekens, V. Spruyt, R. Berkvens, M. Weyn, "A survey of rigid 3D point cloud registration algorithms," in Proceedings of the 4th International Conference on Ambient Computing, Applications, Services and Technologies (AMBIENT), Rome, Italy, 2014, pp. 8-13.
  • 16 Y. Li, N. Snavely, D. Huttenlocher, P. Fua, "Worldwide pose estimation using 3D point clouds," in Computer Vision – ECCV 2012. Heidelberg: Springer, pp. 15-29, 2012.
  • 17 R. C. Luo, V. W. Ee, C. K. Hsieh, "3D point cloud based indoor mobile robot in 6-DoF pose localization using Fast Scene Recognition and Alignment approach," in Proceedings of 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Baden-Baden, Germany, 2016, pp. 470-475.
  • 18 A. Kendall, M. Grimes, R. Cipolla, "PoseNet: a convolutional network for real-time 6-DOF camera relocalization," in Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 2015, pp. 2938-2946.
  • 19 D. Jia, Y. Su, C. Li, 2016, https://arxiv.org/abs/1611.02776
  • 20 J. Shao, W. Zhang, Y. Zhu, A. Shen, "Fast registration of terrestrial LiDAR point cloud and sequence images," in Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Wuhan, China, 2017, pp. 875-879.
  • 21 B. Bercovici, J. W. McMahon, "Point-cloud processing using modified Rodrigues parameters for relative navigation," Journal of Guidance, Control, and Dynamics, vol. 40, no. 12, pp. 3167-3179, 2017.
  • 22 G. Pandey, S. Giri, J. R. McBride, "Alignment of 3D point clouds with a dominant ground plane," in Proceedings of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017, pp. 2143-2150.
  • 23 W. Sun, J. R. Cooperstock, "An empirical evaluation of factors influencing camera calibration accuracy using three publicly available techniques," Machine Vision and Applications, vol. 17, no. 1, pp. 51-67, 2006.
  • 24 S. J. Buckley, J. A. Howell, H. D. Enge, T. H. Kurz, "Terrestrial laser scanning in geology: data acquisition, processing and accuracy considerations," Journal of the Geological Society, vol. 165, no. 3, pp. 625-638, 2008.
  • 25 S. Harwin, A. Lucieer, "Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery," Remote Sensing, vol. 4, no. 6, pp. 1573-1599, 2012.