Automated 3D Modeling System
Project Description
Team Members: Tianyu Li, Yen Chin Loke, Arun Kumar, Bowen Feng, Nicole Baptist
Creating a virtual 3D model can be difficult and time-consuming, even with CAD or 3D graphics software. Our group came up with this idea to help speed up the process through automation. Hopefully, in the future, this system can help game designers or animation designers turn real-world objects into virtual objects more efficiently.
This project creates an automated system for reconstructing objects as point clouds. The hardware for this system includes a Rethink Sawyer robot arm with a RealSense camera as the end-effector, and a turtlebot3 (burger). The turtlebot3 is used as a turntable, allowing the RealSense camera to record point cloud data for the object from all sides. The Sawyer robot arm controls the position and viewing angle of the camera. In the current version, our system records point clouds from four sides and the top view of the object to reconstruct its point cloud.
Due to COVID-19, lab access was suspended before we could fully fine-tune the project. We continued the project outside of the lab without the Sawyer arm.
The point cloud collection system at home, without the Sawyer
Here are the 3D modeling results from the above video:
The point cloud collection system in the lab, with the Sawyer
Technical Outline:
A key requirement for 3D reconstruction from point cloud data is accurate pose estimation. In this project, we used the `slam_toolbox` and `move_base` packages to estimate the pose of the turtlebot and ensure precise turning of the wheels. With the estimated pose, we transformed the point cloud data and stitched the clouds together in order, producing a 3D model.
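As a rough illustration of the transform-and-stitch step, here is a minimal PCL sketch, assuming the turntable pose reduces to a yaw angle about the z-axis (the function and variable names are illustrative, not the actual package code):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>

using CloudT = pcl::PointCloud<pcl::PointXYZRGB>;

// Rotate a newly captured cloud into the common frame using the
// estimated turntable yaw, then append it to the fused model.
void stitchCloud(const CloudT::Ptr& new_cloud, float yaw_rad, CloudT::Ptr& fused)
{
    // Build a rotation about z from the pose estimated by slam_toolbox.
    Eigen::Affine3f tf = Eigen::Affine3f::Identity();
    tf.rotate(Eigen::AngleAxisf(yaw_rad, Eigen::Vector3f::UnitZ()));

    CloudT transformed;
    pcl::transformPointCloud(*new_cloud, transformed, tf);

    *fused += transformed;  // concatenate points into the running model
}
```

Concatenating with `+=` works here because every cloud has already been expressed in the same fixed frame before fusion.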
The workflow of the project:
Hardware
- The Rethink Sawyer Robot (not used in the at-home version)
- Turtlebot3 - Burger
- Remote PC
- Router
It was a little challenging to make all the hardware communicate properly over the local network. Although it is possible, the Rethink Sawyer robot and the turtlebot were not specifically designed to work with each other. Thanks to help from our instructor Matthew Elwin, we were able to connect all the hardware through a router, as the figure below illustrates.
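For reference, getting multiple ROS machines to talk usually comes down to pointing every device at a single ROS master through the router. A sketch of the environment setup, with placeholder IP addresses:

```sh
# On the remote PC (runs roscore); add to ~/.bashrc:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.10

# On the turtlebot3 (and any other machine behind the router):
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.11   # this machine's own address
```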
Overall, the project consists of three ROS packages:
- `camera_reconstruct`
  - Uses the Point Cloud Library (PCL) to process point cloud data taken from the Intel RealSense Depth Camera. The post-processing includes transformation, alignment, and cropping for point clouds taken from multiple angles. This package mainly uses C++, since the C++ PCL API provides more useful features than the Python version.
- `camera_motion`
  - A helper tool for aligning the camera with the target object during the initial setup. The raw depth pixels from the RealSense camera are processed using OpenCV (see the sketch after this list).
- `arm_motion`
  - This package is not used in the at-home version.
  - This package controls and coordinates the motion of the Sawyer robot arm and the turtlebot3.
  - The movement of the Sawyer is controlled using MoveIt!.
  - The rotation of the turtlebot3 is controlled using `slam_toolbox` and `move_base`.
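To give a flavor of what the `camera_motion` helper does, here is a minimal OpenCV sketch that reports how far the nearest object sits from the image center. The depth threshold and the assumption of a 16-bit millimeter depth image are illustrative, not the actual package settings:

```cpp
#include <opencv2/opencv.hpp>

// Given a 16-bit depth image (millimeters, as the RealSense publishes),
// find the centroid of everything closer than `near_mm` and report its
// pixel offset from the image center, as a rough alignment aid.
cv::Point2f alignmentOffset(const cv::Mat& depth_mm, double near_mm = 600.0)
{
    cv::Mat mask;
    // Keep pixels that are valid (> 0) and closer than the threshold.
    cv::inRange(depth_mm, cv::Scalar(1), cv::Scalar(near_mm), mask);

    cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
    if (m.m00 < 1.0) return {0.f, 0.f};  // nothing close enough in view

    cv::Point2f centroid(m.m10 / m.m00, m.m01 / m.m00);
    cv::Point2f center(depth_mm.cols / 2.f, depth_mm.rows / 2.f);
    return centroid - center;  // (0, 0) means the object is centered
}
```

Adjusting the setup until the returned offset approaches (0, 0) centers the object in the camera's view.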
Conceptually, the project consists of two technical components:
- Motion (not used in the at-home version)
  - The motion component is in charge of the Sawyer and turtlebot motions and their communication (a minimal coordination sketch follows this list). When the Sawyer arm receives a message from the camera saying it has finished recording a depth image, it sends a command to the turtlebot to rotate. Once the rotation finishes, the turtlebot sends a message back to the Sawyer, and the Sawyer tells the camera to take the next depth image. After the turtlebot finishes the last rotation and the camera takes the last depth image, the Sawyer moves to the top of the object with MoveIt! and commands the camera to take a depth image of the top view.
- Vision
  - The vision component is in charge of point cloud transformation, alignment, and cropping using PCL (a cropping sketch also follows this list). Once the point cloud node receives a message from the Sawyer, it obtains a new depth image from the Intel RealSense depth topic through ROS. The new depth data is then transformed and fused with the existing fused point cloud.
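To make the handshake concrete, here is a minimal sketch of the turtlebot side of the coordination loop. The topic names `/capture_done` and `/rotation_done` are assumptions for illustration, not the actual interface:

```cpp
#include <ros/ros.h>
#include <std_msgs/Empty.h>

ros::Publisher rotation_done_pub;

// Placeholder for the actual rotation, e.g. a 90-degree move_base goal
// (see the rotation sketch under My Contributions).
void rotateTurntableOneStep() {}

// Each time the camera reports a finished capture, rotate the
// turtlebot by one step, then signal that the next capture can start.
void onCaptureDone(const std_msgs::Empty&)
{
    rotateTurntableOneStep();
    rotation_done_pub.publish(std_msgs::Empty());
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "turntable_coordinator");
    ros::NodeHandle nh;
    rotation_done_pub = nh.advertise<std_msgs::Empty>("/rotation_done", 1);
    ros::Subscriber sub = nh.subscribe("/capture_done", 1, onCaptureDone);
    ros::spin();
    return 0;
}
```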
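And a minimal sketch of the cropping step, which keeps only the points inside a box around the object so that the floor and surrounding clutter do not end up in the model (the box extents are placeholder values):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/crop_box.h>

using CloudT = pcl::PointCloud<pcl::PointXYZRGB>;

// Remove everything outside a box centered on the turntable, so the
// floor, walls, and the robot itself do not appear in the model.
CloudT::Ptr cropToObject(const CloudT::Ptr& cloud)
{
    pcl::CropBox<pcl::PointXYZRGB> box;
    box.setInputCloud(cloud);
    box.setMin(Eigen::Vector4f(-0.15f, -0.15f, 0.0f, 1.0f));  // meters
    box.setMax(Eigen::Vector4f( 0.15f,  0.15f, 0.3f, 1.0f));

    CloudT::Ptr cropped(new CloudT);
    box.filter(*cropped);
    return cropped;
}
```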
My Contributions:
- Came up with the idea and designed the approach
- Implemented the turntable motion for the turtlebot using timer-based open-loop control through ROS topics
- Enhanced the precision and accuracy of the turtlebot turntable motion with `slam_toolbox` and `move_base` (see the rotation sketch after this list)
- Created the `camera_motion` package for alignment during the initial setup with OpenCV
- Fused and cropped the captured point clouds using C++ with PCL
- Generated a mesh from the point clouds using the Open3D library and exported it to a .ply file (see the meshing sketch after this list)
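A sketch of the precise-rotation approach: sending an in-place rotation goal to `move_base`, with `slam_toolbox` providing the localization behind it. The frame name and the idea of rotating to an absolute yaw are assumptions for illustration:

```cpp
#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <tf/transform_datatypes.h>

// Ask move_base to rotate the turtlebot in place to `yaw_rad` in the
// map frame, relying on slam_toolbox for localization.
void rotateTo(double yaw_rad)
{
    actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> client("move_base", true);
    client.waitForServer();

    move_base_msgs::MoveBaseGoal goal;
    goal.target_pose.header.frame_id = "map";
    goal.target_pose.header.stamp = ros::Time::now();
    goal.target_pose.pose.position.x = 0.0;  // stay in place
    goal.target_pose.pose.position.y = 0.0;
    goal.target_pose.pose.orientation = tf::createQuaternionMsgFromYaw(yaw_rad);

    client.sendGoal(goal);
    client.waitForResult();
}
```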
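And a sketch of the meshing step. Open3D is shown here through its C++ API for consistency with the other examples, and Poisson reconstruction is one plausible choice; the surface-reconstruction settings actually used are not recorded in this write-up:

```cpp
#include <open3d/Open3D.h>

// Turn the fused point cloud into a triangle mesh and write a .ply file.
int main()
{
    auto cloud = open3d::io::CreatePointCloudFromFile("fused_cloud.ply");
    if (!cloud || cloud->points_.empty()) return 1;

    cloud->EstimateNormals();  // Poisson reconstruction needs normals

    auto [mesh, densities] =
        open3d::geometry::TriangleMesh::CreateFromPointCloudPoisson(*cloud, /*depth=*/8);

    open3d::io::WriteTriangleMesh("model.ply", *mesh);
    return 0;
}
```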
Source Code
The code can be found on GitHub.
Future Direction
Continuous Point Cloud Fusion with Iterative Closest Point (ICP)
Instead of capturing point clouds from only four sides of the object, the model could be made more accurate by capturing point clouds from all 360 degrees around the object. To do that, fixed transformations between four perpendicular sides would no longer work. A solution is Iterative Closest Point, which estimates the transformation between two overlapping point clouds. Using these pairwise transformations, we can match neighboring point clouds and, in the end, align all of them.
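A minimal sketch of such a pairwise alignment with PCL's ICP implementation (the iteration count and correspondence distance are illustrative):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// Estimate the rigid transform that aligns `source` onto `target`,
// then return the aligned copy of `source` for fusion.
CloudT::Ptr alignWithICP(const CloudT::Ptr& source, const CloudT::Ptr& target)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaximumIterations(50);            // illustrative settings
    icp.setMaxCorrespondenceDistance(0.05);  // 5 cm

    CloudT::Ptr aligned(new CloudT);
    icp.align(*aligned);

    if (!icp.hasConverged())
        return nullptr;  // fall back to the pose-based transform

    // icp.getFinalTransformation() can be chained with previous pairwise
    // transforms to bring every view into one common frame.
    return aligned;
}
```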