obs/CObservation3DRangeScan.h
/* +---------------------------------------------------------------------------+
   |                     Mobile Robot Programming Toolkit (MRPT)               |
   |                          http://www.mrpt.org/                             |
   |                                                                           |
   | Copyright (c) 2005-2016, Individual contributors, see AUTHORS file        |
   | See: http://www.mrpt.org/Authors - All rights reserved.                   |
   | Released under BSD License. See details in http://www.mrpt.org/License    |
   +---------------------------------------------------------------------------+ */
#ifndef CObservation3DRangeScan_H
#define CObservation3DRangeScan_H

#include <mrpt/obs/CObservation.h>
#include <mrpt/obs/CObservation2DRangeScan.h>
#include <mrpt/utils/CImage.h>
#include <mrpt/poses/CPose3D.h>
#include <mrpt/poses/CPose2D.h>
#include <mrpt/math/CPolygon.h>
#include <mrpt/math/CMatrix.h>
#include <mrpt/utils/adapters.h>
namespace mrpt
{
namespace obs
{
	DEFINE_SERIALIZABLE_PRE_CUSTOM_BASE_LINKAGE( CObservation3DRangeScan, CObservation, OBS_IMPEXP )

	namespace detail {
		// Implemented in CObservation3DRangeScan_project3D_impl.h
		template <class POINTMAP>
		void project3DPointsFromDepthImageInto(CObservation3DRangeScan & src_obs, POINTMAP & dest_pointcloud, const bool takeIntoAccountSensorPoseOnRobot, const mrpt::poses::CPose3D * robotPoseInTheWorld, const bool PROJ3D_USE_LUT);
	}

	/** Declares a class derived from "CObservation" that encapsulates a 3D range scan measurement, as from a time-of-flight range camera or any other RGBD sensor.
	  *
	  *  This kind of observation can carry one or more of these data fields:
	  *    - 3D point cloud (as float's).
	  *       - Each 3D point has its associated (u,v) pixel coordinates in \a points3D_idxs_x & \a points3D_idxs_y (New in MRPT 1.4.0)
	  *    - 2D range image (as a matrix): Each entry in the matrix "rangeImage(ROW,COLUMN)" contains a distance or a depth (in meters), depending on \a range_is_depth.
	  *    - 2D intensity (grayscale or RGB) image (as a mrpt::utils::CImage): For SwissRanger cameras, a logarithmic A-law compression is used to convert the original 16bit intensity to a more standard 8bit graylevel.
	  *    - 2D confidence image (as a mrpt::utils::CImage): For each pixel, a 0x00 and a 0xFF mean the lowest and highest confidence levels, respectively.
	  *    - Semantic labels: Stored as a matrix of bitfields, each bit having a user-defined meaning.
	  *
	  *  The coordinates of the 3D point cloud are in meters with respect to the depth camera origin of coordinates
	  *    (in SwissRanger, the front face of the camera: a small offset ~1cm in front of the physical focal point),
	  *    with the +X axis pointing forward, +Y pointing left-hand and +Z pointing up. By convention, a 3D point with its coordinates set to (0,0,0) will be considered invalid.
	  *  The field CObservation3DRangeScan::relativePoseIntensityWRTDepth describes the change of coordinates from
	  *    the depth camera to the intensity (RGB or grayscale) camera. In a SwissRanger camera both cameras coincide,
	  *    so this pose is just a rotation (0,0,0,-90deg,0,-90deg). But in a
	  *    Microsoft Kinect there is also an offset, as shown in this figure:
	  *
	  *  <div align=center>
	  *   <img src="CObservation3DRangeScan_figRefSystem.png">
	  *  </div>
	  *
	  *  In any case, check the field \a relativePoseIntensityWRTDepth, or the method \a doDepthAndIntensityCamerasCoincide()
	  *    to determine if both frames of reference coincide, since even for Kinect cameras both can coincide if the images
	  *    have been rectified.
	  *
	  *  The 2D images and matrices are stored as common images, in the usual up->down, left->right order.
	  *  Optionally, the intensity and confidence channels can be set to delayed-load images for off-rawlog storage, so memory
	  *   is saved by keeping loaded only the images actually needed. See the methods load() and unload().
	  *  Due to the intensive storage requirements of this kind of observation, it is the only one in MRPT
	  *   for which it's recommended to always call "load()" and "unload()" before and after using the observation, *ONLY* when
	  *   the observation was read from a rawlog dataset, in order to make sure that all the externally stored data fields are
	  *   loaded and ready in memory.
	  *
	  *  Classes that grab observations of this type are:
	  *    - mrpt::hwdrivers::CSwissRanger3DCamera
	  *    - mrpt::hwdrivers::CKinect
	  *
	  *  There are two sets of calibration parameters (see mrpt::vision::checkerBoardStereoCalibration() or the ready-to-use GUI program <a href="http://www.mrpt.org/Application:kinect-calibrate" >kinect-calibrate</a>):
	  *    - cameraParams: Projection parameters of the depth camera.
	  *    - cameraParamsIntensity: Projection parameters of the intensity (gray-level or RGB) camera.
	  *
	  *  In some cameras, like SwissRanger, both are the same. In a Kinect it is possible to rectify the range images such that both cameras
	  *   seem to coincide, and then both sets of camera parameters will be identical.
	  *
	  *  Range data can be interpreted in two different ways depending on the 3D camera (this field is already set to the
	  *    correct setting when grabbing observations from an mrpt::hwdrivers sensor):
	  *    - range_is_depth=true  -> Kinect-like ranges: entries of \a rangeImage are distances along the +X axis
	  *    - range_is_depth=false -> Ranges in \a rangeImage are actual distances in 3D.
	  *
	  *  The "intensity" channel may come from different sources in sensors such as the Kinect. Look at field \a intensityImageChannel to
	  *    find out whether the image was grabbed from the visible (RGB) or IR channel.
	  *
	  *  3D point clouds can be generated at any moment after grabbing with CObservation3DRangeScan::project3DPointsFromDepthImage() and CObservation3DRangeScan::project3DPointsFromDepthImageInto(), provided the correct
	  *   calibration parameters are set.
	  *
	  *  Example of how to assign labels to pixels (for object segmentation, semantic information, etc.):
	  *
	  * \code
	  *   // Assume obs of type CObservation3DRangeScanPtr
	  *   obs->pixelLabels = CObservation3DRangeScan::TPixelLabelInfoPtr( new CObservation3DRangeScan::TPixelLabelInfo<NUM_BYTES>() );
	  *   obs->pixelLabels->setSize(ROWS,COLS);
	  *   obs->pixelLabels->setLabel(col,row, label_idx); // label_idx in [0, 8*NUM_BYTES-1]
	  *   //...
	  * \endcode
	  *
	  * \note Starting at serialization version 2 (MRPT 0.9.1+), the confidence channel is stored as an image instead of a matrix to optimize memory and disk space.
	  * \note Starting at serialization version 3 (MRPT 0.9.1+), the 3D point cloud and the rangeImage can both be stored externally to save rawlog space.
	  * \note Starting at serialization version 5 (MRPT 0.9.5+), the new field \a range_is_depth
	  * \note Starting at serialization version 6 (MRPT 0.9.5+), the new field \a intensityImageChannel
	  * \note Starting at serialization version 7 (MRPT 1.3.1+), new fields for semantic labeling
	  *
	  * \sa mrpt::hwdrivers::CSwissRanger3DCamera, mrpt::hwdrivers::CKinect, CObservation
	  * \ingroup mrpt_obs_grp
	  */
	class OBS_IMPEXP CObservation3DRangeScan : public CObservation
	{
		// This must be added to any CSerializable derived class:
		DEFINE_SERIALIZABLE( CObservation3DRangeScan )

	protected:
		bool        m_points3D_external_stored; //!< If set to true, m_points3D_external_file is valid.
		std::string m_points3D_external_file;   //!< 3D points are in CImage::IMAGES_PATH_BASE+<this_file_name>

		bool        m_rangeImage_external_stored; //!< If set to true, m_rangeImage_external_file is valid.
		std::string m_rangeImage_external_file;   //!< rangeImage is in CImage::IMAGES_PATH_BASE+<this_file_name>

	public:
		CObservation3DRangeScan();          //!< Default constructor
		virtual ~CObservation3DRangeScan(); //!< Destructor

		/** @name Delayed-load manual control methods.
		    @{ */
		/** Makes sure all images and other fields which may be externally stored are loaded in memory.
		  *  Note that for all CImages, calling load() is not required since the images will be automatically loaded upon first access, so load() shouldn't need to be called in normal cases by the user.
		  *  If all the data were already loaded or this object has no externally stored data fields, calling this method has no effect.
		  * \sa unload
		  */
		virtual void load() const MRPT_OVERRIDE;
		/** Unload all images, in case they are delayed-load images stored in external files (otherwise, it has no effect).
		  * \sa load
		  */
		virtual void unload() MRPT_OVERRIDE;
		/** @} */

		/** Project the RGB+D images into a 3D point cloud (with color if the target map supports it) and optionally at a given 3D pose.
		  *  The 3D point coordinates are computed from the depth image (\a rangeImage) and the depth camera parameters (\a cameraParams).
		  *  There exist two sets of formulas for projecting the i'th point, depending on the value of "range_is_depth".
		  *  In all formulas below, "rangeImage" is the matrix of ranges and the pixel coordinates are (r,c).
		  *
		  *  1) [range_is_depth=true] With "range equals depth" or "Kinect-like depth mode": the range values
		  *      are in fact distances along the "+X" axis, not real 3D ranges (this is the way Kinect reports ranges):
		  *
		  * \code
		  *   x(i) = rangeImage(r,c)
		  *   y(i) = (r_cx - c) * x(i) / r_fx
		  *   z(i) = (r_cy - r) * x(i) / r_fy
		  * \endcode
		  *
		  *  2) [range_is_depth=false] With "normal ranges": range means distance in 3D. This must be set when
		  *      processing data from the SwissRanger 3D camera, among others.
		  *
		  * \code
		  *   Ky = (r_cx - c)/r_fx
		  *   Kz = (r_cy - r)/r_fy
		  *
		  *   x(i) = rangeImage(r,c) / sqrt( 1 + Ky^2 + Kz^2 )
		  *   y(i) = Ky * x(i)
		  *   z(i) = Kz * x(i)
		  * \endcode
		  *
		  *  The color of each point is determined by projecting the 3D local point into the RGB image using \a cameraParamsIntensity.
		  *
		  *  By default the local coordinates of points are directly stored into the local map, but if indicated so in \a takeIntoAccountSensorPoseOnRobot
		  *  the points are transformed with \a sensorPose. Furthermore, if provided, those coordinates are transformed with \a robotPoseInTheWorld.
		  *
		  * \param[in] PROJ3D_USE_LUT (Only when range_is_depth=true) Whether to use a Look-up-table (LUT) to speed up the conversion. It's thread safe in all situations <b>except</b> when you call this method from different threads <b>and</b> with different camera parameter matrices. In all other cases, it's a good idea to leave it enabled.
		  * \tparam POINTMAP Supported maps are all those covered by mrpt::utils::PointCloudAdapter (mrpt::maps::CPointsMap and derived, mrpt::opengl::CPointCloudColoured, PCL point clouds,...)
		  *
		  * \note In MRPT < 0.9.5, this method always assumed that ranges were in Kinect-like format.
		  */
		template <class POINTMAP>
		inline void project3DPointsFromDepthImageInto(
			POINTMAP                   & dest_pointcloud,
			const bool                   takeIntoAccountSensorPoseOnRobot,
			const mrpt::poses::CPose3D * robotPoseInTheWorld = NULL,
			const bool                   PROJ3D_USE_LUT = true)
		{
			detail::project3DPointsFromDepthImageInto<POINTMAP>(*this, dest_pointcloud, takeIntoAccountSensorPoseOnRobot, robotPoseInTheWorld, PROJ3D_USE_LUT);
		}
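The two projection formulas above can be checked in isolation with plain C++, independently of MRPT. The intrinsics below (r_cx, r_cy, r_fx, r_fy) are made-up illustrative values, not parameters of any real camera; note that formula (2), fed the true 3D distance of a point, reproduces formula (1) fed that point's depth:

```cpp
#include <cassert>
#include <cmath>

// Toy stand-ins for the depth camera intrinsics (r_cx, r_cy: principal point;
// r_fx, r_fy: focal lengths, in pixels). Illustrative values only.
static const double r_cx = 160.0, r_cy = 120.0, r_fx = 200.0, r_fy = 200.0;

// Case 1: range_is_depth=true (Kinect-like): range is the distance along +X.
static void projectDepth(double range, int r, int c, double &x, double &y, double &z)
{
    x = range;
    y = (r_cx - c) * x / r_fx;
    z = (r_cy - r) * x / r_fy;
}

// Case 2: range_is_depth=false (SwissRanger-like): range is the true 3D distance.
static void projectTrueRange(double range, int r, int c, double &x, double &y, double &z)
{
    const double Ky = (r_cx - c) / r_fx;
    const double Kz = (r_cy - r) / r_fy;
    x = range / std::sqrt(1 + Ky * Ky + Kz * Kz);
    y = Ky * x;
    z = Kz * x;
}
```

For any pixel (r,c), projecting depth d with case 1 and then projecting the resulting point's Euclidean norm with case 2 yields the same (x,y,z), which is the consistency the two formula sets are meant to guarantee.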

		/** This method is equivalent to \c project3DPointsFromDepthImageInto() storing the projected 3D points (without color, in local coordinates) in this same class.
		  *  For new code it's recommended to use instead \c project3DPointsFromDepthImageInto() which is much more versatile.
		  */
		inline void project3DPointsFromDepthImage(const bool PROJ3D_USE_LUT = true) {
			this->project3DPointsFromDepthImageInto(*this, false, NULL, PROJ3D_USE_LUT);
		}

		/** Convert this 3D observation into an "equivalent 2D fake laser scan", with a configurable vertical FOV.
		  *
		  *  The result is a 2D laser scan with more "rays" (N) than the 3D observation has columns (W); exactly: N = W * oversampling_ratio.
		  *  This oversampling is required since laser scans sample the space at evenly-separated angles, while
		  *  a range camera follows a tangent-like distribution. By oversampling we make sure we don't leave "gaps" unseen by the virtual "2D laser".
		  *
		  *  All obstacles within a frustum are considered and the minimum distance is kept in each direction.
		  *  The horizontal FOV of the frustum is automatically computed from the intrinsic parameters, but the
		  *  vertical FOV must be provided by the user, and can be set to be asymmetric, which may be useful
		  *  depending on the zone of interest where to look for obstacles.
		  *
		  *  All spatial transformations are rigorously taken into account in this class, using the depth camera
		  *  intrinsic calibration parameters.
		  *
		  *  The timestamp of the new object is copied from the 3D object.
		  *  Obviously, a requisite for calling this method is that the 3D observation has range data,
		  *  i.e. hasRangeImage must be true. It's not needed to have RGB data nor the raw 3D point clouds
		  *  for this method to work.
		  *
		  *  \param[out] out_scan2d The resulting 2D equivalent scan.
		  *  \param[in] sensorLabel The sensor label that will have the newly created observation.
		  *  \param[in] angle_sup (Default=5deg) The upper half-FOV angle (in radians)
		  *  \param[in] angle_inf (Default=5deg) The lower half-FOV angle (in radians)
		  *  \param[in] oversampling_ratio (Default=1.2=120%) How many more laser scan rays to create (read above).
		  *
		  * \sa The example in http://www.mrpt.org/Example_Kinect_To_2D_laser_scan
		  */
		void convertTo2DScan(
			mrpt::obs::CObservation2DRangeScan & out_scan2d,
			const std::string & sensorLabel,
			const double angle_sup = mrpt::utils::DEG2RAD(5),
			const double angle_inf = mrpt::utils::DEG2RAD(5),
			const double oversampling_ratio = 1.2 );
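The reason for the oversampling can be sketched numerically: a camera samples columns evenly in the tangent of the azimuth angle, so its angular resolution is coarsest at the image center, while a laser scan is evenly spaced in angle. A small stand-alone check (the cx/fx intrinsics are arbitrary illustrative values, not from any real device):

```cpp
#include <cassert>
#include <cmath>

// N = W * oversampling_ratio, as stated in the convertTo2DScan() doc above.
int numVirtualScanRays(int image_width, double oversampling_ratio)
{
    return static_cast<int>(image_width * oversampling_ratio);
}

// Azimuth (radians) of the ray through column c, for principal point cx and
// focal length fx (pixels): columns are evenly spaced in tan(angle), not angle.
double columnAngle(int c, double cx, double fx)
{
    return std::atan((c - cx) / fx);
}
```

The angular gap between adjacent columns is largest at the image center and shrinks toward the borders; a virtual scan dense enough to match the center resolution everywhere therefore needs more rays than the image has columns.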

		bool hasPoints3D; //!< true means the field points3D contains valid data.
		std::vector<float> points3D_x, points3D_y, points3D_z; //!< If hasPoints3D=true, the (X,Y,Z) coordinates of the 3D point cloud detected by the camera. \sa resizePoints3DVectors
		std::vector<uint16_t> points3D_idxs_x, points3D_idxs_y; //!< If hasPoints3D=true, the (x,y) pixel coordinates for each (X,Y,Z) point in \a points3D_x, points3D_y, points3D_z

		/** Use this method instead of resizing all three \a points3D_x, \a points3D_y & \a points3D_z to allow the usage of the internal memory pool. */
		void resizePoints3DVectors(const size_t nPoints);

		/** \name 3D points external storage functions
		  * @{ */
		inline bool points3D_isExternallyStored() const { return m_points3D_external_stored; }
		inline std::string points3D_getExternalStorageFile() const { return m_points3D_external_file; }
		void points3D_getExternalStorageFileAbsolutePath(std::string &out_path) const;
		inline std::string points3D_getExternalStorageFileAbsolutePath() const {
			std::string tmp;
			points3D_getExternalStorageFileAbsolutePath(tmp);
			return tmp;
		}
		void points3D_convertToExternalStorage( const std::string &fileName, const std::string &use_this_base_dir ); //!< Users won't normally want to call this, it's only used from internal MRPT programs.
		/** @} */

		/** \name Range (depth) image
		  * @{ */
		bool hasRangeImage;             //!< true means the field rangeImage contains valid data
		mrpt::math::CMatrix rangeImage; //!< If hasRangeImage=true, a matrix of floats with the range data as captured by the camera (in meters) \sa range_is_depth
		bool range_is_depth;            //!< true: Kinect-like ranges: entries of \a rangeImage are distances along the +X axis; false: Ranges in \a rangeImage are actual distances in 3D.

		void rangeImage_setSize(const int HEIGHT, const int WIDTH); //!< Similar to calling "rangeImage.setSize(H,W)" but this method provides memory pooling to speed-up the memory allocation.
		/** @} */

		/** \name Range Matrix external storage functions
		  * @{ */
		inline bool rangeImage_isExternallyStored() const { return m_rangeImage_external_stored; }
		inline std::string rangeImage_getExternalStorageFile() const { return m_rangeImage_external_file; }
		void rangeImage_getExternalStorageFileAbsolutePath(std::string &out_path) const;
		inline std::string rangeImage_getExternalStorageFileAbsolutePath() const {
			std::string tmp;
			rangeImage_getExternalStorageFileAbsolutePath(tmp);
			return tmp;
		}
		void rangeImage_convertToExternalStorage( const std::string &fileName, const std::string &use_this_base_dir ); //!< Users won't normally want to call this, it's only used from internal MRPT programs.
		/** Forces marking this observation as non-externally stored - it doesn't do anything else apart from resetting the corresponding flag (Users won't normally want to call this, it's only used from internal MRPT programs) */
		void rangeImage_forceResetExternalStorage() { m_rangeImage_external_stored = false; }
		/** @} */

		/** \name Intensity (RGB) channels
		  * @{ */
		/** Enum type for intensityImageChannel */
		enum TIntensityChannelID
		{
			CH_VISIBLE = 0, //!< Grayscale or RGB visible channel of the camera sensor.
			CH_IR      = 1  //!< Infrared (IR) channel
		};

		bool hasIntensityImage;                    //!< true means the field intensityImage contains valid data
		mrpt::utils::CImage intensityImage;        //!< If hasIntensityImage=true, a color or gray-level intensity image of the same size as "rangeImage"
		TIntensityChannelID intensityImageChannel; //!< The source of the intensityImage; typically the visible channel \sa TIntensityChannelID
		/** @} */

		/** \name Confidence "channel"
		  * @{ */
		bool hasConfidenceImage;             //!< true means the field confidenceImage contains valid data
		mrpt::utils::CImage confidenceImage; //!< If hasConfidenceImage=true, an image with the "confidence" value [range 0-255] as estimated by the capture drivers.
		/** @} */

		/** \name Pixel-wise classification labels (for semantic labeling, etc.)
		  * @{ */
		/** Returns true if the field CObservation3DRangeScan::pixelLabels contains a non-NULL smart pointer.
		  *  To enhance a 3D point cloud with labeling info, just assign an appropriate object to \a pixelLabels
		  */
		bool hasPixelLabels() const { return pixelLabels.present(); }

		/** Virtual interface to all pixel-label information structs. See CObservation3DRangeScan::pixelLabels */
		struct OBS_IMPEXP TPixelLabelInfoBase
		{
			typedef std::map<uint32_t,std::string> TMapLabelID2Name;

			/** The 'semantic' or human-friendly name of the i'th bit in pixelLabels(r,c) can be found in pixelLabelNames[i] as a std::string */
			TMapLabelID2Name pixelLabelNames;

			const std::string & getLabelName(unsigned int label_idx) const {
				std::map<uint32_t,std::string>::const_iterator it = pixelLabelNames.find(label_idx);
				if (it == pixelLabelNames.end()) throw std::runtime_error("Error: label index has no defined name");
				return it->second;
			}
			void setLabelName(unsigned int label_idx, const std::string &name) { pixelLabelNames[label_idx] = name; }
			/** Check the existence of a label by returning its associated index.
			  *  -1 if it does not exist. */
			int checkLabelNameExistence(const std::string &name) const {
				std::map<uint32_t,std::string>::const_iterator it;
				for (it = pixelLabelNames.begin(); it != pixelLabelNames.end(); it++)
					if (it->second == name)
						return it->first;
				return -1;
			}

			/** Resizes the matrix pixelLabels to the given size, setting all bitfields to zero (that is, all pixels are assigned NONE category). */
			virtual void setSize(const int NROWS, const int NCOLS) = 0;
			/** Mark the pixel(row,col) as classified in the category \a label_idx, which may be in the range 0 to MAX_NUM_LABELS-1
			  *  Note that 0 is a valid label index, it does not mean "no label" \sa unsetLabel, unsetAll */
			virtual void setLabel(const int row, const int col, uint8_t label_idx) = 0;
			virtual void getLabels(const int row, const int col, uint8_t &labels) = 0;
			/** For the pixel(row,col), removes its classification into the category \a label_idx, which may be in the range 0 to 7
			  *  Note that 0 is a valid label index, it does not mean "no label" \sa setLabel, unsetAll */
			virtual void unsetLabel(const int row, const int col, uint8_t label_idx) = 0;
			/** Removes all categories for pixel(row,col) \sa setLabel, unsetLabel */
			virtual void unsetAll(const int row, const int col, uint8_t label_idx) = 0;
			/** Checks whether pixel(row,col) has been classified into category \a label_idx, which may be in the range 0 to 7
			  * \sa unsetLabel, unsetAll */
			virtual bool checkLabel(const int row, const int col, uint8_t label_idx) const = 0;

			void writeToStream(mrpt::utils::CStream &out) const;
			static TPixelLabelInfoBase* readAndBuildFromStream(mrpt::utils::CStream &in);

			/// std stream interface
			friend std::ostream& operator<<( std::ostream& out, const TPixelLabelInfoBase& obj )
			{
				obj.Print( out );
				return out;
			}

			TPixelLabelInfoBase(unsigned int BITFIELD_BYTES_) :
				BITFIELD_BYTES(BITFIELD_BYTES_)
			{
			}
			virtual ~TPixelLabelInfoBase() {}

			const uint8_t BITFIELD_BYTES; //!< Minimum number of bytes required to hold MAX_NUM_DIFFERENT_LABELS bits.

		protected:
			virtual void internal_readFromStream(mrpt::utils::CStream &in) = 0;
			virtual void internal_writeToStream(mrpt::utils::CStream &out) const = 0;
			virtual void Print( std::ostream& ) const = 0;
		};
		typedef stlplus::smart_ptr<TPixelLabelInfoBase> TPixelLabelInfoPtr; //!< Used in CObservation3DRangeScan::pixelLabels

		template <unsigned int BYTES_REQUIRED_>
		struct TPixelLabelInfo : public TPixelLabelInfoBase
		{
			enum {
				BYTES_REQUIRED = BYTES_REQUIRED_ // ((MAX_LABELS-1)/8)+1
			};

			/** Automatically-determined integer type of the proper size such that all labels fit as one bit (max: 64) */
			typedef typename mrpt::utils::uint_select_by_bytecount<BYTES_REQUIRED>::type bitmask_t;

			/** Each pixel may be assigned between 0 and MAX_NUM_LABELS-1 'labels' by
			  *  setting to 1 the corresponding i'th bit [0,MAX_NUM_LABELS-1] in the byte in pixelLabels(r,c).
			  *  That is, each pixel is assigned an 8*BITFIELD_BYTES bit-wide bitfield of possible categories.
			  * \sa hasPixelLabels
			  */
			typedef Eigen::Matrix<bitmask_t,Eigen::Dynamic,Eigen::Dynamic> TPixelLabelMatrix;
			TPixelLabelMatrix pixelLabels;

			void setSize(const int NROWS, const int NCOLS) MRPT_OVERRIDE {
				pixelLabels = TPixelLabelMatrix::Zero(NROWS,NCOLS);
			}
			void setLabel(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE {
				pixelLabels(row,col) |= static_cast<bitmask_t>(1) << label_idx;
			}
			void getLabels(const int row, const int col, uint8_t &labels) MRPT_OVERRIDE {
				labels = pixelLabels(row,col);
			}
			void unsetLabel(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE {
				pixelLabels(row,col) &= ~(static_cast<bitmask_t>(1) << label_idx);
			}
			void unsetAll(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE {
				pixelLabels(row,col) = 0;
			}
			bool checkLabel(const int row, const int col, uint8_t label_idx) const MRPT_OVERRIDE {
				return (pixelLabels(row,col) & (static_cast<bitmask_t>(1) << label_idx)) != 0;
			}

			// Ctor: pass identification to parent for deserialization
			TPixelLabelInfo() : TPixelLabelInfoBase(BYTES_REQUIRED_)
			{
			}

		protected:
			void internal_readFromStream(mrpt::utils::CStream &in) MRPT_OVERRIDE
			{
				{
					uint32_t nR,nC;
					in >> nR >> nC;
					pixelLabels.resize(nR,nC);
					for (uint32_t c=0;c<nC;c++)
						for (uint32_t r=0;r<nR;r++)
							in >> pixelLabels.coeffRef(r,c);
				}
				in >> pixelLabelNames;
			}
			void internal_writeToStream(mrpt::utils::CStream &out) const MRPT_OVERRIDE
			{
				{
					const uint32_t nR = static_cast<uint32_t>(pixelLabels.rows());
					const uint32_t nC = static_cast<uint32_t>(pixelLabels.cols());
					out << nR << nC;
					for (uint32_t c=0;c<nC;c++)
						for (uint32_t r=0;r<nR;r++)
							out << pixelLabels.coeff(r,c);
				}
				out << pixelLabelNames;
			}
			void Print( std::ostream& out ) const MRPT_OVERRIDE
			{
				{
					const uint32_t nR = static_cast<uint32_t>(pixelLabels.rows());
					const uint32_t nC = static_cast<uint32_t>(pixelLabels.cols());
					out << "Number of rows: " << nR << std::endl;
					out << "Number of cols: " << nC << std::endl;
					out << "Matrix of labels: " << std::endl;
					for (uint32_t c=0;c<nC;c++)
					{
						for (uint32_t r=0;r<nR;r++)
							out << pixelLabels.coeff(r,c) << " ";
						out << std::endl;
					}
				}
				out << std::endl;
				out << "Label indices and names: " << std::endl;
				std::map<uint32_t,std::string>::const_iterator it;
				for (it = pixelLabelNames.begin(); it != pixelLabelNames.end(); it++)
					out << it->first << " " << it->second << std::endl;
			}
		}; // end TPixelLabelInfo

		/** All information about pixel labeling is stored in this (smart pointer to) structure; refer to TPixelLabelInfo for details on the contents
		  *  User is responsible of creating a new object of the desired data type. It will be automatically (de)serialized no matter its specific type. */
		TPixelLabelInfoPtr pixelLabels;

		/** @} */

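The bitfield logic behind TPixelLabelInfo can be exercised stand-alone for a single cell. This is a plain-C++ sketch of the same shift/mask operations (fixing the bitmask type to uint64_t, i.e. up to 64 labels), not MRPT code:

```cpp
#include <cassert>
#include <cstdint>

// One pixel's label bitfield: bit i set <=> the pixel carries label i.
typedef uint64_t bitmask_t;

void setLabel(bitmask_t &cell, uint8_t label_idx)
{
    cell |= (static_cast<bitmask_t>(1) << label_idx); // turn bit on
}
void unsetLabel(bitmask_t &cell, uint8_t label_idx)
{
    cell &= ~(static_cast<bitmask_t>(1) << label_idx); // turn bit off, keep others
}
bool checkLabel(const bitmask_t cell, uint8_t label_idx)
{
    return (cell & (static_cast<bitmask_t>(1) << label_idx)) != 0;
}
```

Note the cast to bitmask_t before shifting: `1 << 63` on a plain int would be undefined behavior, which is why the class above casts before every shift as well.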
		/** \name Sensor parameters
		  * @{ */
		mrpt::utils::TCamera cameraParams;          //!< Projection parameters of the depth camera.
		mrpt::utils::TCamera cameraParamsIntensity; //!< Projection parameters of the intensity (graylevel or RGB) camera.

		/** Relative pose of the intensity camera wrt the depth camera (which is the coordinates origin for this observation).
		  *  In a SwissRanger camera, this will be (0,0,0,-90deg,0,-90deg) since both cameras coincide.
		  *  In a Kinect, this will include a small lateral displacement and a rotation, according to the drawing on the top of this page.
		  * \sa doDepthAndIntensityCamerasCoincide
		  */
		mrpt::poses::CPose3D relativePoseIntensityWRTDepth;

		/** Return true if \a relativePoseIntensityWRTDepth equals the pure rotation (0,0,0,-90deg,0,-90deg) (with a small comparison epsilon)
		  * \sa relativePoseIntensityWRTDepth
		  */
		bool doDepthAndIntensityCamerasCoincide() const;

		float maxRange;                  //!< The maximum range allowed by the device, in meters (e.g. 8.0m, 5.0m,...)
		mrpt::poses::CPose3D sensorPose; //!< The 6D pose of the sensor on the robot.
		float stdError;                  //!< The "sigma" error of the device in meters, used while inserting the scan in an occupancy grid.

		// See base class docs
		void getSensorPose( mrpt::poses::CPose3D &out_sensorPose ) const MRPT_OVERRIDE { out_sensorPose = sensorPose; }
		// See base class docs
		void setSensorPose( const mrpt::poses::CPose3D &newSensorPose ) MRPT_OVERRIDE { sensorPose = newSensorPose; }

		/** @} */ // end sensor params

		// See base class docs
		void getDescriptionAsText(std::ostream &o) const MRPT_OVERRIDE;

		void swap(CObservation3DRangeScan &o); //!< Very efficient method to swap the contents of two observations.

		void getZoneAsObs( CObservation3DRangeScan &obs, const unsigned int &r1, const unsigned int &r2, const unsigned int &c1, const unsigned int &c2 );

		/** A Levenberg-Marquart-based optimizer to recover the calibration parameters of a 3D camera given a range (depth) image and the corresponding 3D point cloud.
		  * \param camera_offset The offset (in meters) in the +X direction of the point cloud. It's 1cm for SwissRanger SR4000.
		  * \return The final average reprojection error per pixel (typ <0.05 px)
		  */
		static double recoverCameraCalibrationParameters(
			const CObservation3DRangeScan &in_obs,
			mrpt::utils::TCamera &out_camParams,
			const double camera_offset = 0.01 );

		/** Look-up-table struct for project3DPointsFromDepthImageInto() */
		struct TCached3DProjTables
		{
			mrpt::math::CVectorFloat Kys, Kzs;
			mrpt::math::CMatrix rangeMatrix;
		};
		static TCached3DProjTables m_3dproj_lut; //!< 3D point cloud projection look-up-table \sa project3DPointsFromDepthImage

	}; // End of class def.
	DEFINE_SERIALIZABLE_POST_CUSTOM_BASE_LINKAGE( CObservation3DRangeScan, CObservation, OBS_IMPEXP )

	} // End of namespace

	namespace utils
	{
		// Specialization must occur in the same namespace
		MRPT_DECLARE_TTYPENAME_PTR_NAMESPACE(CObservation3DRangeScan, mrpt::obs)

		// Enum <-> string converter:
		template <>
		struct TEnumTypeFiller<mrpt::obs::CObservation3DRangeScan::TIntensityChannelID>
		{
			typedef mrpt::obs::CObservation3DRangeScan::TIntensityChannelID enum_t;
			static void fill(bimap<enum_t,std::string> &m_map)
			{
				m_map.insert(mrpt::obs::CObservation3DRangeScan::CH_VISIBLE, "CH_VISIBLE");
				m_map.insert(mrpt::obs::CObservation3DRangeScan::CH_IR,      "CH_IR");
			}
		};
	}

	namespace utils
	{
		/** Specialization mrpt::utils::PointCloudAdapter<CObservation3DRangeScan> \ingroup mrpt_adapters_grp */
		template <>
		class PointCloudAdapter< mrpt::obs::CObservation3DRangeScan > : public detail::PointCloudAdapterHelperNoRGB<mrpt::obs::CObservation3DRangeScan,float>
		{
		private:
			mrpt::obs::CObservation3DRangeScan &m_obj;
		public:
			typedef float coords_t;         //!< The type of each point XYZ coordinates
			static const int HAS_RGB   = 0; //!< Has any color RGB info?
			static const int HAS_RGBf  = 0; //!< Has native RGB info (as floats)?
			static const int HAS_RGBu8 = 0; //!< Has native RGB info (as uint8_t)?

			/** Constructor (accept a const ref for convenience) */
			inline PointCloudAdapter(const mrpt::obs::CObservation3DRangeScan &obj) : m_obj(*const_cast<mrpt::obs::CObservation3DRangeScan*>(&obj)) { }
			/** Get number of points */
			inline size_t size() const { return m_obj.points3D_x.size(); }
			/** Set number of points (to uninitialized values) */
			inline void resize(const size_t N) {
				if (N) m_obj.hasPoints3D = true;
				m_obj.resizePoints3DVectors(N);
			}

			/** Get XYZ coordinates of i'th point */
			template <typename T>
			inline void getPointXYZ(const size_t idx, T &x, T &y, T &z) const {
				x = m_obj.points3D_x[idx];
				y = m_obj.points3D_y[idx];
				z = m_obj.points3D_z[idx];
			}
			/** Set XYZ coordinates of i'th point */
			inline void setPointXYZ(const size_t idx, const coords_t x, const coords_t y, const coords_t z) {
				m_obj.points3D_x[idx] = x;
				m_obj.points3D_y[idx] = y;
				m_obj.points3D_z[idx] = z;
			}
		}; // end of PointCloudAdapter<CObservation3DRangeScan>
	}
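How generic code consumes an adapter like the specialization above can be sketched without MRPT. The container type (FakeObs) and all names below are invented for illustration; only the size()/resize()/getPointXYZ()/setPointXYZ() concept mirrors the real PointCloudAdapter interface:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A minimal stand-in "observation" holding a point cloud as three parallel vectors.
struct FakeObs { std::vector<float> xs, ys, zs; };

// An adapter exposing the PointCloudAdapter-style concept over FakeObs.
class FakeAdapter {
	FakeObs &m_obj;
public:
	typedef float coords_t;
	explicit FakeAdapter(FakeObs &o) : m_obj(o) {}
	size_t size() const { return m_obj.xs.size(); }
	void resize(size_t N) { m_obj.xs.resize(N); m_obj.ys.resize(N); m_obj.zs.resize(N); }
	template <typename T>
	void getPointXYZ(size_t i, T &x, T &y, T &z) const { x = m_obj.xs[i]; y = m_obj.ys[i]; z = m_obj.zs[i]; }
	void setPointXYZ(size_t i, coords_t x, coords_t y, coords_t z) { m_obj.xs[i] = x; m_obj.ys[i] = y; m_obj.zs[i] = z; }
};

// A generic algorithm written once against the adapter concept: it works for
// any container that has a conforming adapter, without knowing its layout.
template <class ADAPTER>
double sumZ(const ADAPTER &a)
{
	double s = 0;
	for (size_t i = 0; i < a.size(); i++) {
		float x, y, z;
		a.getPointXYZ(i, x, y, z);
		s += z;
	}
	return s;
}
```

This is the design motivation for the specialization above: algorithms such as the point projection templates accept any POINTMAP for which a PointCloudAdapter exists, so a CObservation3DRangeScan can be filled or read exactly like a points map.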
} // End of namespace

#include "CObservation3DRangeScan_project3D_impl.h"

#endif
#define DEFINE_SERIALIZABLE_POST_CUSTOM_BASE_LINKAGE(class_name, base_name, _LINKAGE_)
#define DEFINE_SERIALIZABLE(class_name)
This declaration must be inserted in all CSerializable classes definition, within the class declarati...
#define DEFINE_SERIALIZABLE_PRE_CUSTOM_BASE_LINKAGE(class_name, base_name, _LINKAGE_)
This declaration must be inserted in all CSerializable classes definition, before the class declarati...
#define MRPT_DECLARE_TTYPENAME_PTR_NAMESPACE(_TYPE, __NS)
Definition: TTypeName.h:72
This class is a "CSerializable" wrapper for "CMatrixFloat".
Definition: CMatrix.h:31
Column vector, like Eigen::MatrixX*, but automatically initialized to zeros since construction.
Definition: types_math.h:65
A "CObservation"-derived class that represents a 2D range scan measurement (typically from a laser sc...
Declares a class derived from "CObservation" that encapsules a 3D range scan measurement,...
float stdError
The "sigma" error of the device in meters, used while inserting the scan in an occupancy grid.
bool m_points3D_external_stored
If set to true, m_points3D_external_file is valid.
mrpt::utils::TCamera cameraParamsIntensity
Projection parameters of the intensity (graylevel or RGB) camera.
bool hasPixelLabels() const
Returns true if the field CObservation3DRangeScan::pixelLabels contains a non-NULL smart pointer.
void points3D_convertToExternalStorage(const std::string &fileName, const std::string &use_this_base_dir)
Users won't normally want to call this, it's only used from internal MRPT programs.
TPixelLabelInfoPtr pixelLabels
All information about pixel labeling is stored in this (smart pointer to) structure; refer to TPixelL...
TIntensityChannelID intensityImageChannel
The source of the intensityImage; typically the visible channel.
void swap(CObservation3DRangeScan &o)
Very efficient method to swap the contents of two observations.
void getDescriptionAsText(std::ostream &o) const MRPT_OVERRIDE
Build a detailed, multi-line textual description of the observation contents and dump it to the outpu...
bool hasConfidenceImage
true means the field confidenceImage contains valid data
mrpt::utils::TCamera cameraParams
Projection parameters of the depth camera.
float maxRange
The maximum range allowed by the device, in meters (e.g. 8.0m, 5.0m,...)
bool hasIntensityImage
true means the field intensityImage contains valid data
void getSensorPose(mrpt::poses::CPose3D &out_sensorPose) const MRPT_OVERRIDE
A general method to retrieve the sensor pose on the robot.
mrpt::poses::CPose3D relativePoseIntensityWRTDepth
Relative pose of the intensity camera wrt the depth camera (which is the coordinates origin for this ...
std::string m_points3D_external_file
3D points are in CImage::IMAGES_PATH_BASE+<this_file_name>
std::string m_rangeImage_external_file
rangeImage is in CImage::IMAGES_PATH_BASE+<this_file_name>
virtual void load() const MRPT_OVERRIDE
Makes sure all images and other fields which may be externally stored are loaded in memory.
void resizePoints3DVectors(const size_t nPoints)
Use this method instead of resizing all three points3D_x, points3D_y & points3D_z to allow the usage ...
bool hasPoints3D
true means the field points3D contains valid data.
bool doDepthAndIntensityCamerasCoincide() const
Return true if relativePoseIntensityWRTDepth equals the pure rotation (0,0,0,-90deg,...
void rangeImage_getExternalStorageFileAbsolutePath(std::string &out_path) const
static double recoverCameraCalibrationParameters(const CObservation3DRangeScan &in_obs, mrpt::utils::TCamera &out_camParams, const double camera_offset=0.01)
A Levenberg-Marquardt-based optimizer to recover the calibration parameters of a 3D camera given a ran...
void project3DPointsFromDepthImage(const bool PROJ3D_USE_LUT=true)
This method is equivalent to project3DPointsFromDepthImageInto() storing the projected 3D points (wit...
void setSensorPose(const mrpt::poses::CPose3D &newSensorPose) MRPT_OVERRIDE
A general method to change the sensor pose on the robot.
CObservation3DRangeScan()
Default constructor.
mrpt::utils::CImage confidenceImage
If hasConfidenceImage=true, an image with the "confidence" value [range 0-255] as estimated by the ca...
bool m_rangeImage_external_stored
If set to true, m_rangeImage_external_file is valid.
std::string points3D_getExternalStorageFileAbsolutePath() const
void convertTo2DScan(mrpt::obs::CObservation2DRangeScan &out_scan2d, const std::string &sensorLabel, const double angle_sup=mrpt::utils::DEG2RAD(5), const double angle_inf=mrpt::utils::DEG2RAD(5), const double oversampling_ratio=1.2)
Convert this 3D observation into an "equivalent 2D fake laser scan", with a configurable vertical FOV...
bool hasRangeImage
true means the field rangeImage contains valid data
std::vector< float > points3D_z
If hasPoints3D=true, the (X,Y,Z) coordinates of the 3D point cloud detected by the camera.
mrpt::math::CMatrix rangeImage
If hasRangeImage=true, a matrix of floats with the range data as captured by the camera (in meters)
TIntensityChannelID
Enum type for intensityImageChannel.
@ CH_VISIBLE
Grayscale or RGB visible channel of the camera sensor.
static TCached3DProjTables m_3dproj_lut
3D point cloud projection look-up-table
virtual ~CObservation3DRangeScan()
Destructor.
void rangeImage_convertToExternalStorage(const std::string &fileName, const std::string &use_this_base_dir)
Users won't normally need to call this; it's only used by internal MRPT programs.
mrpt::utils::CImage intensityImage
If hasIntensityImage=true, a color or gray-level intensity image of the same size as "rangeImage".
void rangeImage_setSize(const int HEIGHT, const int WIDTH)
Similar to calling "rangeImage.setSize(H,W)" but this method provides memory pooling to speed-up the ...
mrpt::poses::CPose3D sensorPose
The 6D pose of the sensor on the robot.
void rangeImage_forceResetExternalStorage()
Forces marking this observation as non-externally stored - it doesn't do anything else apart from reseti...
void getZoneAsObs(CObservation3DRangeScan &obs, const unsigned int &r1, const unsigned int &r2, const unsigned int &c1, const unsigned int &c2)
void points3D_getExternalStorageFileAbsolutePath(std::string &out_path) const
stlplus::smart_ptr< TPixelLabelInfoBase > TPixelLabelInfoPtr
Used in CObservation3DRangeScan::pixelLabels.
bool range_is_depth
true: Kinect-like ranges: entries of rangeImage are distances along the +X axis; false: Ranges in ran...
std::string rangeImage_getExternalStorageFileAbsolutePath() const
Declares a class that represents any robot's observation.
A class used to store a 3D pose (a 3D translation + a rotation in 3D).
Definition: CPose3D.h:73
A class for storing images as grayscale or RGB bitmaps.
Definition: CImage.h:102
This base class is used to provide a unified interface to files, memory buffers, etc. Please see the deriv...
Definition: CStream.h:39
void setPointXYZ(const size_t idx, const coords_t x, const coords_t y, const coords_t z)
Set XYZ coordinates of i'th point.
PointCloudAdapter(const mrpt::obs::CObservation3DRangeScan &obj)
Constructor (accepts a const ref for convenience)
void getPointXYZ(const size_t idx, T &x, T &y, T &z) const
Get XYZ coordinates of i'th point.
void resize(const size_t N)
Set number of points (to uninitialized values)
An adapter to different kinds of point cloud object.
Definition: adapters.h:38
Structure to hold the parameters of a pinhole camera model.
Definition: TCamera.h:32
A bidirectional version of std::map, declared as bimap<KEY,VALUE> and which actually contains two std...
Definition: bimap.h:29
void insert(const KEY &k, const VALUE &v)
Insert a new pair KEY<->VALUE in the bi-map.
Definition: bimap.h:69
A helper base class for those PointCloudAdapter<> which do not handle RGB data; it declares needed in...
Definition: adapters.h:49
const Scalar * const_iterator
Definition: eigen_plugins.h:24
#define MRPT_OVERRIDE
C++11 "override" for virtuals:
Definition: mrpt_macros.h:28
void project3DPointsFromDepthImageInto(CObservation3DRangeScan &src_obs, POINTMAP &dest_pointcloud, const bool takeIntoAccountSensorPoseOnRobot, const mrpt::poses::CPose3D *robotPoseInTheWorld, const bool PROJ3D_USE_LUT)
This namespace contains representation of robot actions and observations.
double DEG2RAD(const double x)
Degrees to radians.
Definition: bits.h:69
This is the global namespace for all Mobile Robot Programming Toolkit (MRPT) libraries.
Look-up-table struct for project3DPointsFromDepthImageInto()
Virtual interface to all pixel-label information structs.
virtual void getLabels(const int row, const int col, uint8_t &labels)=0
virtual void internal_readFromStream(mrpt::utils::CStream &in)=0
virtual void unsetAll(const int row, const int col, uint8_t label_idx)=0
Removes all categories for pixel(row,col)
TMapLabelID2Name pixelLabelNames
The 'semantic' or human-friendly name of the i'th bit in pixelLabels(r,c) can be found in pixelLabelN...
virtual void internal_writeToStream(mrpt::utils::CStream &out) const =0
const uint8_t BITFIELD_BYTES
Minimum number of bytes required to hold MAX_NUM_DIFFERENT_LABELS bits.
virtual void Print(std::ostream &) const =0
void writeToStream(mrpt::utils::CStream &out) const
virtual bool checkLabel(const int row, const int col, uint8_t label_idx) const =0
Checks whether pixel(row,col) has been classified into category label_idx, which may be in the range 0...
const std::string & getLabelName(unsigned int label_idx) const
int checkLabelNameExistence(const std::string &name) const
Check the existence of a label by returning its associated index.
virtual void setLabel(const int row, const int col, uint8_t label_idx)=0
Mark the pixel(row,col) as classified in the category label_idx, which may be in the range 0 to MAX_N...
friend std::ostream & operator<<(std::ostream &out, const TPixelLabelInfoBase &obj)
std stream interface
virtual void setSize(const int NROWS, const int NCOLS)=0
Resizes the matrix pixelLabels to the given size, setting all bitfields to zero (that is,...
static TPixelLabelInfoBase * readAndBuildFromStream(mrpt::utils::CStream &in)
virtual void unsetLabel(const int row, const int col, uint8_t label_idx)=0
For the pixel(row,col), removes its classification into the category label_idx, which may be in the r...
void setLabelName(unsigned int label_idx, const std::string &name)
void setLabel(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE
Mark the pixel(row,col) as classified in the category label_idx, which may be in the range 0 to MAX_N...
void unsetAll(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE
Removes all categories for pixel(row,col)
Eigen::Matrix< bitmask_t, Eigen::Dynamic, Eigen::Dynamic > TPixelLabelMatrix
Each pixel may be assigned between 0 and MAX_NUM_LABELS-1 'labels' by setting to 1 the corresponding ...
mrpt::utils::uint_select_by_bytecount< BYTES_REQUIRED >::type bitmask_t
Automatically-determined integer type of the proper size such that all labels fit as one bit (max: 64...
void internal_readFromStream(mrpt::utils::CStream &in) MRPT_OVERRIDE
bool checkLabel(const int row, const int col, uint8_t label_idx) const MRPT_OVERRIDE
Checks whether pixel(row,col) has been classified into category label_idx, which may be in the range 0...
void Print(std::ostream &out) const MRPT_OVERRIDE
void internal_writeToStream(mrpt::utils::CStream &out) const MRPT_OVERRIDE
void unsetLabel(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE
For the pixel(row,col), removes its classification into the category label_idx, which may be in the r...
void getLabels(const int row, const int col, uint8_t &labels) MRPT_OVERRIDE
void setSize(const int NROWS, const int NCOLS) MRPT_OVERRIDE
Resizes the matrix pixelLabels to the given size, setting all bitfields to zero (that is,...
Only specializations of this class are defined for each enum type of interest.
Definition: TEnumType.h:24
Usage: uint_select_by_bytecount<N>::type var; allows defining var as a unsigned integer with,...
Page generated by Doxygen 1.9.2 for MRPT 1.4.0 SVN: at Mon Sep 20 00:36:32 UTC 2021