The M300 is a large, loud, heavy, and complex system. Simply getting it out to the field requires at least four cases (for the aircraft, the L1 sensor, the batteries, and the base station). Setting up the drone is cumbersome, too: you have to remove and attach the legs individually, then unfold and secure each arm, and make sure the propeller chocks are no longer attached. There are numerous dust covers on all of the ports, attachment points, and batteries, each of which is easily lost. No one step is particularly difficult, but together they add up to a considerable amount of complexity. As a pilot, if I didn't actually need the lidar data, I would rather fly something simple like the Phantom 4 RTK.
However, it may be unfair to compare this to a Phantom 4 RTK, because the lidar system is simply far more capable when it comes to vegetation penetration. And compared with other lidar systems on the market, there is really no comparison: the L1's ability to integrate reliably into the entire DJI hardware ecosystem is unmatched. The L1 and M300 system is far easier to use than other drone lidar systems on the market.
Processing a combination of lidar and photogrammetry data is extremely complex, with a nearly unlimited number of ways you could approach it. So rather than reviewing exactly why we process the data the way we do, we will simply give you a brief overview of exactly how we process it right now.
To start, you must use DJI Terra to convert the raw data from the L1 sensor into a point cloud; there are no other options. DJI Terra is also quite limited in its functionality for surveyors, so we recommend using it only as pre-processing software to create LAS point clouds. We then use a different set of point cloud processing software to translate the point cloud to our local project datum (which DJI does not support), as well as to clean out noisy points, classify point types, and vectorize the data into useful deliverables.
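To give a flavor of what that downstream step looks like, here is a minimal sketch in Python using the laspy and pyproj libraries. The EPSG codes, file names, and noise threshold are placeholders, and this is an illustration of the idea rather than our actual production scripts; DJI Terra itself has no scripting interface, so this only covers what happens after the LAS export.

```python
# Sketch: shift a DJI Terra LAS export onto a local project datum and flag
# obvious outliers. EPSG codes, file names, and thresholds are placeholders.
import laspy
import numpy as np
from pyproj import Transformer

las = laspy.read("terra_export.las")          # point cloud exported by DJI Terra

# Pull the scaled coordinates out before touching the header.
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)

# Horizontal reprojection, e.g. WGS84 / UTM 10N -> NAD83 California zone 2 (ftUS).
# A real workflow also handles the vertical datum and any site calibration.
transformer = Transformer.from_crs("EPSG:32610", "EPSG:2226", always_xy=True)
x, y = transformer.transform(x, y)

# Re-anchor the header offsets so the new coordinates fit the LAS integer range.
las.header.offsets = [float(x.min()), float(y.min()), float(z.min())]
las.x, las.y, las.z = x, y, z

# Crude noise flag: points far below the median elevation get ASPRS class 7.
classification = np.asarray(las.classification)
classification[z < np.median(z) - 30.0] = 7
las.classification = classification

las.write("project_datum_cleaned.las")
```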
Separately, we process the imagery and ground control in a photogrammetry workflow to create an orthophoto as well as a photogrammetry-derived point cloud. The photogrammetry point cloud is more accurate than the lidar data on hardscape. Further, the orthophoto is far more accurate in X and Y than the lidar data for extracting planimetric features, so a blended vectorization workflow is required.
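To illustrate the "best of both" idea, here is a rough sketch of how the two clouds could be split before being layered together downstream. It assumes both clouds are already classified and on the same datum; the file names and the hardscape class code are placeholders, not a prescription of our exact process.

```python
# Sketch: drop hardscape returns from the lidar cloud and keep only hardscape
# points from the photogrammetry cloud, so the two can be layered together.
import laspy
import numpy as np

HARDSCAPE_CLASS = 11                      # e.g. ASPRS class 11 (road surface)

lidar = laspy.read("lidar_classified.las")
photo = laspy.read("photogrammetry_classified.las")

lidar_mask = np.asarray(lidar.classification) != HARDSCAPE_CLASS
photo_mask = np.asarray(photo.classification) == HARDSCAPE_CLASS

lidar.points = lidar.points[lidar_mask]   # lidar everywhere except hardscape
photo.points = photo.points[photo_mask]   # photogrammetry only on hardscape

lidar.write("blend_lidar_part.las")
photo.write("blend_photo_part.las")
```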
Ultimately our goal, and the goal of most surveyors, is to get a clean set of data into CAD, and we do that by extracting the best data from both the lidar- and photogrammetry-derived datasets. Some areas are worse than others. The screenshot above is a profile view of the creek where the points have been manually classified, with the ground data shown in brown. As you can see from this profile cut, there are considerably more gaps than before.
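For the CAD hand-off itself, one common last-mile step is thinning the classified ground points and writing them to a DXF layer that drafting software can open directly. The sketch below uses the ezdxf library; the file names, thinning step, and layer name are placeholders, and it stands in for whatever export your own point cloud software provides.

```python
# Sketch: thin classified ground points and write them to a DXF point layer.
import ezdxf
import laspy
import numpy as np

las = laspy.read("blended_classified.las")
ground = np.asarray(las.classification) == 2          # ASPRS class 2 = ground

x = np.asarray(las.x)[ground][::50]                   # keep every 50th point
y = np.asarray(las.y)[ground][::50]
z = np.asarray(las.z)[ground][::50]

doc = ezdxf.new("R2018")
doc.layers.new("GROUND_POINTS")
msp = doc.modelspace()
for px, py, pz in zip(x, y, z):
    msp.add_point((float(px), float(py), float(pz)),
                  dxfattribs={"layer": "GROUND_POINTS"})

doc.saveas("ground_points.dxf")
```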