A Complete Record of My Robot Operating System Midterm Project

Since I completed the midterm assignment with the help of the open-source XTDrone platform, whose environment setup and development workflow are both fairly involved, I am recording my whole tinkering (pitfall-hitting) process here for future reference.

Assignment Requirements

The Journey

I chose to tackle problem 1 below, using a flying robot (a UAV) to complete the task. Doing this from scratch within a month was clearly impossible: the flight control alone would take ages to master. So the realistic plan was to find a suitable open-source project and stand on its authors' shoulders to achieve autonomous localization and mapping.

I therefore spent a week carpet-searching GitHub and finally settled on the XTDrone UAV simulation platform.

GitHub repository: https://github.com/robin-shaun/XTDrone

I chose this project because its UAV control layer is already fully implemented, letting me dive straight into navigation and mapping. It also provides many demo programs and thorough tutorials, which promised a relatively "quick" and "smooth" reproduction of its features. (It later turned out otherwise, hahaha; configuring the environment was still maddening 🤣)

Setting Up the Base Environment

Choosing an Operating System

XTDrone is built on ROS1, so the first step is installing an Ubuntu release that supports ROS1; naturally I picked our old friend Ubuntu 20.04 LTS. The platform ships a lot of very heavy rviz and Gazebo assets, so running it in a conventional VM on my 16 GB laptop would likely freeze and crash constantly. Fortunately there is a trick: WSL2. It is essentially a virtual machine as well, hosting a (server edition) Linux distribution, but with no desktop environment and heavy optimization from Microsoft it performs noticeably better than VMware-style VMs. And although the system itself has no desktop, GUI applications can still be opened through Linux GUI support (WSLg); combined with VSCode's remote development features, this practically achieves "doing ROS development on Windows".

Basic Platform Configuration (for PX4 v1.13)

Installing Dependencies

Bash
sudo apt install ninja-build exiftool protobuf-compiler libeigen3-dev genromfs xmlstarlet libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev python3-pip gawk
Bash
pip2 install pandas jinja2 pyserial cerberus pyulog==0.7.0 numpy toml pyquaternion empy pyyaml 
pip3 install packaging numpy empy toml pyyaml jinja2 pyargparse kconfiglib jsonschema future

Note

Throughout the environment setup, do not casually run apt autoremove or apt-get autoremove: they can remove packages that other parts of the toolchain still depend on.

Installing ROS

The ROS1 distribution for Ubuntu 20.04 LTS is Noetic; here I used the FishROS one-click installation script to install ROS Noetic.

Bash
wget http://fishros.com/install -O fishros && . fishros

Installing Gazebo

XTDrone modifies Gazebo's ROS plugins, so they have to be compiled and installed from source.

First remove the Gazebo packages installed by default with ROS Noetic:

Bash
sudo apt-get remove gazebo* 
sudo apt-get remove libgazebo*
sudo apt-get remove ros-noetic-gazebo*

Then install the required dependencies:

Bash
sudo apt-get install ros-noetic-moveit-msgs ros-noetic-object-recognition-msgs ros-noetic-octomap-msgs ros-noetic-camera-info-manager  ros-noetic-control-toolbox ros-noetic-polled-camera ros-noetic-controller-manager ros-noetic-transmission-interface ros-noetic-joint-limits-interface

Then copy XTDrone's modified gazebo_ros_pkgs (Gazebo's ROS plugin package) into ~/catkin_ws/src and build it:

Bash
cd ~/catkin_ws
cp -r ~/XTDrone/sitl_config/gazebo_ros_pkgs src/
catkin build # the developers found that catkin_make causes problems here, so catkin build is recommended

Installing MAVROS

This dependency is needed for the visual localization later.

Bash
sudo apt install ros-noetic-mavros ros-noetic-mavros-extras
wget https://gitee.com/robin_shaun/XTDrone/raw/master/sitl_config/mavros/install_geographiclib_datasets.sh

sudo chmod a+x ./install_geographiclib_datasets.sh
sudo ./install_geographiclib_datasets.sh # this step takes a while

Configuring PX4

Bash
git clone https://github.com/PX4/PX4-Autopilot.git
mv PX4-Autopilot PX4_Firmware
cd PX4_Firmware
git checkout -b xtdrone/dev v1.13.2
git submodule update --init --recursive
make px4_sitl_default gazebo
When the build finishes, a Gazebo window pops up; just close it.

Next, edit ~/.bashrc so that every new terminal automatically sets the required environment variables:

Bash
source ~/catkin_ws/devel/setup.bash
source ~/PX4_Firmware/Tools/setup_gazebo.bash ~/PX4_Firmware/ ~/PX4_Firmware/build/px4_sitl_default
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:~/PX4_Firmware
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:~/PX4_Firmware/Tools/sitl_gazebo

Downloading the XTDrone Source

Bash
git clone https://gitee.com/robin_shaun/XTDrone.git
cd XTDrone
git checkout 1_13_2
git submodule update --init --recursive
# replace the startup scripts
cp sitl_config/init.d-posix/* ~/PX4_Firmware/ROMFS/px4fmu_common/init.d-posix/
# add the launch files
cp -r sitl_config/launch/* ~/PX4_Firmware/launch/
# add the world files
cp sitl_config/worlds/* ~/PX4_Firmware/Tools/sitl_gazebo/worlds/
# replace some plugins
cp sitl_config/gazebo_plugin/gimbal_controller/gazebo_gimbal_controller_plugin.cpp ~/PX4_Firmware/Tools/sitl_gazebo/src
cp sitl_config/gazebo_plugin/gimbal_controller/gazebo_gimbal_controller_plugin.hh ~/PX4_Firmware/Tools/sitl_gazebo/include
cp sitl_config/gazebo_plugin/wind_plugin/gazebo_ros_wind_plugin_xtdrone.cpp ~/PX4_Firmware/Tools/sitl_gazebo/src
cp sitl_config/gazebo_plugin/wind_plugin/gazebo_ros_wind_plugin_xtdrone.h ~/PX4_Firmware/Tools/sitl_gazebo/include
# replace CMakeLists.txt
cp sitl_config/CMakeLists.txt ~/PX4_Firmware/Tools/sitl_gazebo
# replace some model files
cp -r sitl_config/models/* ~/PX4_Firmware/Tools/sitl_gazebo/models/ 
# remove the duplicate models under ~/.gazebo/models
cd ~/.gazebo/models/
rm -r stereo_camera/ 3d_lidar/ 3d_gpu_lidar/ hokuyo_lidar/

Pitfall

Gazebo looks for models in ~/.gazebo/models/ first and only then in other paths. So when copying models into the PX4 SITL tree, check whether ~/.gazebo/models/ already holds a directory of the same name (for example, it ships with stereo_camera by default); if it does, either delete or replace the duplicate. If some part renders as a white box during simulation, it is most likely because a model under ~/.gazebo/models/ duplicates one under PX4_Firmware/Tools/sitl_gazebo/models; deleting the copy under ~/.gazebo/models/ fixes it.
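To catch such collisions up front, here is a small checker of my own (a sketch; the two paths are the ones assumed by the steps above):

Python
import os

GAZEBO_MODELS = os.path.expanduser("~/.gazebo/models")
PX4_MODELS = os.path.expanduser("~/PX4_Firmware/Tools/sitl_gazebo/models")

def model_dirs(path):
    """Names of model directories under `path` (empty set if it doesn't exist)."""
    if not os.path.isdir(path):
        return set()
    return {d for d in os.listdir(path)
            if os.path.isdir(os.path.join(path, d))}

# Any name present in both places will shadow the PX4 copy
# and may render as a white box in Gazebo.
for name in sorted(model_dirs(GAZEBO_MODELS) & model_dirs(PX4_MODELS)):
    print("duplicate model:", name)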

Delete the old build output and rebuild the PX4 firmware:

Bash
cd ~/PX4_Firmware
rm -r build/
make px4_sitl_default gazebo

At this point the base environment is complete.

Choosing a UAV Model

XTDrone offers many UAV (and ground vehicle) models. I chose the multirotor iris as the simulation vehicle, since the project ships iris variants carrying various cameras and LiDARs, which makes it very extensible.

Setting Up the Visual-Inertial Odometry (VIO) Environment

XTDrone implements VIO through the open-source framework VINS-Fusion (an optimization-based framework for multi-sensor fusion and global pose estimation).

GitHub repository: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion

Installing Dependencies

VINS-Fusion depends on the open-source C++ library Ceres Solver for solving nonlinear least-squares problems with bounds constraints as well as general unconstrained optimization problems.

First install the dependencies VINS-Fusion needs:

Bash
# google-glog + gflags
sudo apt-get install libgoogle-glog-dev libgflags-dev
# Use ATLAS for BLAS & LAPACK
sudo apt-get install libatlas-base-dev
# Eigen3
sudo apt-get install libeigen3-dev
# SuiteSparse (optional)
sudo apt-get install libsuitesparse-dev

Then download the VINS-Fusion source from GitHub and build it.

Pitfall

Note that the OpenCV and Eigen versions in this environment are fairly old, so Ceres must not be too new either; 1.14.0 is the recommended version. I initially installed the latest 2.2.0 and got nowhere after fiddling for ages 😇

Download link for 1.14.0: https://ceres-solver.googlesource.com/ceres-solver/+/refs/tags/1.14.0

After downloading and extracting, place it under ~/dependencies, then:

Bash
cd ~/dependencies/ceres-solver-1.14.0
mkdir build
cd build
cmake ..
make -j4
sudo make install

Building VINS-Fusion

Create ~/xtdrone_ws as the workspace (catkin_ws is already taken by other projects, so a fresh one avoids conflicts), then run:

Bash
mkdir -p ~/xtdrone_ws/src
cp -r ~/XTDrone/sensing/slam/vio/VINS-Fusion ~/xtdrone_ws/src/
mkdir -p ~/xtdrone_ws/scripts/
cp ~/XTDrone/sensing/slam/vio/xtdrone_run_vio.sh ~/xtdrone_ws/scripts/
cd ~/xtdrone_ws
catkin_make # or catkin build, depending on your build tool

Configuring the PX4 EKF

By default, PX4's EKF fuses GPS horizontal position with barometric altitude. To use visual localization instead, the configuration file must be modified.

For version 1.13 you can edit the generated rcS file directly, i.e. ~/PX4_Firmware/build/px4_sitl_default/etc/init.d-posix/rcS: change EKF2_AID_MASK from the default 1 to 24, and EKF2_HGT_MODE from the default 0 to 3.

Text Only
# GPS used
# param set EKF2_AID_MASK 1
# Vision used and GPS denied
param set EKF2_AID_MASK 24

# Barometer used for height measurement
# param set EKF2_HGT_MODE 0
# Barometer denied and vision used for height measurement
param set EKF2_HGT_MODE 3
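For reference, EKF2_AID_MASK is a bit field; per the PX4 v1.13 parameter documentation, 24 = 8 + 16 selects vision position and vision yaw fusion while clearing the GPS bit (1), and EKF2_HGT_MODE = 3 selects vision as the height source. A tiny decoder of my own:

Python
# Bit meanings of EKF2_AID_MASK as documented for PX4 v1.13.
AID_BITS = {
    1: "GPS",
    2: "optical flow",
    4: "inhibit IMU delta-velocity bias estimation",
    8: "vision position fusion",
    16: "vision yaw fusion",
    32: "multi-rotor drag fusion",
}

def decode_aid_mask(value):
    return [name for bit, name in AID_BITS.items() if value & bit]

print(decode_aid_mask(24))   # ['vision position fusion', 'vision yaw fusion']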

Modifying the VINS-Fusion Simulation Config

Open ~/xtdrone_ws/src/VINS-Fusion/config/xtdrone_sitl/px4_sitl_stereo_imu_config.yaml and set:

YAML
imu_topic: "/iris_0/imu_gazebo"
image0_topic: "/iris_0/stereo_camera/left/image_raw"
image1_topic: "/iris_0/stereo_camera/right/image_raw"
output_path: "~/xtdrone_ws/vins_output"
.
.
.
pose_graph_save_path: "~/xtdrone_ws/vins_output/pose_graph/"  # save and load path

Setting Up RTABMap

VIO yields only a sparse point cloud, which cannot be used for obstacle avoidance or motion planning. The visual odometry therefore has to be fused with the depth camera's depth images for a dense 3D reconstruction; RTABMap is used here for SLAM mapping and navigation.
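The fusion relies on the standard pinhole back-projection: every depth pixel (u, v) with depth d becomes a 3D point in the camera frame. A minimal sketch of mine (the intrinsics are the ones that appear in the ego_planner launch file later in this post):

Python
import numpy as np

# Intrinsics of the simulated depth camera (from the ego_planner launch file below).
fx = fy = 554.254691191187
cx, cy = 320.5, 240.5

def depth_to_points(depth):
    """Back-project an HxW depth image (meters) into an Nx3 point cloud in the
    camera frame: X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d."""
    v, u = np.indices(depth.shape)          # pixel row (v) and column (u) indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]               # drop invalid (zero-depth) pixels

# Example: a fake 480x640 depth image that is 2 m everywhere
cloud = depth_to_points(np.full((480, 640), 2.0))
print(cloud.shape)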

Installing RTABMap

Bash
sudo apt install ros-noetic-rtabmap* 
sudo apt install ros-noetic-octomap*

Configuring RTABMap

Bash
sudo cp ~/XTDrone/sensing/slam/vio/VINS-Fusion/config/xtdrone_sitl/rgbd.rviz ~/xtdrone_ws/src/VINS-Fusion/config/rgbd.rviz

Open /opt/ros/noetic/share/rtabmap_ros/launch/rtabmap.launch:

Bash
sudo vim /opt/ros/noetic/share/rtabmap_ros/launch/rtabmap.launch

Modify its rviz_cfg argument to point to the absolute path of the rgbd.rviz file copied above (if ~ is not expanded in your setup, write the /home/<user>/... path out in full):

XML
<arg name="rviz_cfg" default="~/xtdrone_ws/src/VINS-Fusion/config/rgbd.rviz" />

Building ego_planner

ego_planner is a mainstream trajectory-planning algorithm for UAVs; its biggest advantage is that it requires no ESDF map, which makes planning much faster.

Bash
cp -r ~/XTDrone/motion_planning/3d/ego_planner ~/xtdrone_ws/src/
cd ~/xtdrone_ws/
catkin build ego_planner

Simulation Launch Procedure

Preparation

ego_planner takes either depth images + camera poses or a point cloud as input; here I use the depth + pose combination. The depth images come from the realsense_camera, so iris_stereo_camera in the launch file must be replaced with iris_realsense_camera; the camera pose is computed by VINS-Fusion. I chose zhihang2.world as the simulation world.

Open ~/PX4_Firmware/launch/zhihang2.launch and change the sdf argument to the iris variant equipped with the depth camera:

XML
<arg name="sdf" value="iris_realsense_camera"/>

Open ~/xtdrone_ws/scripts/xtdrone_run_vio.sh and comment out the line that launches vins_rviz: that rviz view shows only the sparse point cloud, which is of no use to the later motion planning.

Bash
rosrun vins vins_node ~/xtdrone_ws/src/VINS-Fusion/config/xtdrone_sitl/px4_sitl_stereo_imu_config.yaml &
# roslaunch vins vins_rviz.launch

Open ~/xtdrone_ws/src/ego_planner/plan_manage/launch/run_in_xtdrone.launch and set the flight_type argument to 1, meaning goals are selected with the 2D Nav Goal tool:

XML
<!-- 1: use 2D Nav Goal to select goal  -->
<!-- 2: use global waypoints below  -->
<arg name="flight_type" value="1" />

Starting the Gazebo Simulation World

In the first terminal, run:

Bash
roslaunch px4 zhihang2.launch
The first launch spends quite a while loading assets; later launches are much faster.

Starting VINS-Fusion for Visual Localization

In the second terminal, run:

Bash
source ~/xtdrone_ws/devel/setup.bash
cd ~/xtdrone_ws/scripts
sh xtdrone_run_vio.sh

Converting the Localization Topic

VINS-Fusion publishes an Odometry message, which has to be converted into the topic PX4 expects.

In the third terminal, run:

Bash
cd ~/XTDrone/sensing/slam/vio
python vins_transfer.py iris 0

The visual localization now reaches PX4 through MAVROS; echoing /iris_0/mavros/local_position/pose shows valid data.

Starting RTABMap for Dense 3D Reconstruction

In the fourth terminal, run:

Bash
source ~/xtdrone_ws/devel/setup.bash
roslaunch vins rtabmap_vins.launch

Taking Off and Hovering with the Keyboard

In the fifth terminal, run:

Bash
cd ~/XTDrone/communication
python multirotor_communication.py iris 0 
This starts the communication script; iris is the vehicle sub-type and 0 is the vehicle's index.

In the sixth terminal, run:

Bash
cd ~/XTDrone/control/keyboard
python multirotor_keyboard_control.py iris 1 vel
This starts the keyboard control script; iris is the sub-type, 1 is the number of vehicles, and vel selects velocity control.

Focus the sixth terminal and press I repeatedly until the desired z velocity exceeds 0.3 m/s, press B to switch to offboard mode, then press T to arm and the vehicle takes off. At a suitable height, press S to hover.
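The same sequence can also be scripted instead of typed. A sketch of my own that drives the /xtdrone topics handled by multirotor_communication.py above (which must already be running):

Python
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import String

# Assumes `python multirotor_communication.py iris 0` is already running.
rospy.init_node("iris_0_auto_takeoff")
vel_pub = rospy.Publisher("/xtdrone/iris_0/cmd_vel_flu", Twist, queue_size=1)
cmd_pub = rospy.Publisher("/xtdrone/iris_0/cmd", String, queue_size=3)
rospy.sleep(1.0)                       # give the publishers time to connect

up = Twist()
up.linear.z = 0.5                      # climb rate above the 0.3 m/s threshold

rate = rospy.Rate(20)
for i in range(100):                   # stream velocity setpoints for ~5 s
    vel_pub.publish(up)
    if i == 10:
        cmd_pub.publish("OFFBOARD")    # like pressing B
    if i == 20:
        cmd_pub.publish("ARM")         # like pressing T
    rate.sleep()

cmd_pub.publish("HOVER")               # like pressing S: hold the current pose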

Note

Once the vehicle hovers, stop the keyboard control script in the sixth terminal; otherwise it will interfere with the automatic path planning later.

Starting ego_planner for 3D Motion Planning

In the seventh terminal, run:

Bash
cd ~/XTDrone/motion_planning/3d
python ego_transfer.py iris 0
This starts the script that rotates the camera pose into the frame ego_planner expects.

In the eighth terminal, run:

Bash
cd ~/XTDrone/motion_planning/3d
rviz -d ego_rviz.rviz
This opens the rviz view for path planning.

In the ninth terminal, run:

Bash
source ~/xtdrone_ws/devel/setup.bash
roslaunch ego_planner single_uav.launch 
This starts ego_planner itself.

Goal-Directed Automatic Path Planning and 3D Mapping

In rgbd.rviz, select the 2D Nav Goal tool and, based on the 3D map built so far, pick target points that guide the UAV toward the final destination.

Because the depth camera faces only forward and has a limited field of view, automatic planning currently works only toward goals not far from the current position, with the 3D map built along the way; the full route is covered by picking goals repeatedly, gradually leading the UAV to the destination and producing the global map. Otherwise ego_planner would ignore obstacles it has never seen and could not update the path as newly mapped areas appear, driving the UAV into a wall.
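The goal stepping can itself be scripted. A sketch of my own, assuming the goal arrives as a PoseStamped on /move_base_simple/goal (rviz's default topic for the 2D Nav Goal tool; verify the tool's topic in your rviz config):

Python
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("goal_stepper")
goal_pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1)
rospy.sleep(1.0)

# Made-up stepping-stone waypoints; keep each within the camera's sensing range.
waypoints = [(-10.0, 0.0), (-4.0, 2.0), (2.0, 2.0), (8.0, 0.0)]
for x, y in waypoints:
    goal = PoseStamped()
    goal.header.frame_id = "world"
    goal.header.stamp = rospy.Time.now()
    goal.pose.position.x, goal.pose.position.y = x, y
    goal.pose.orientation.w = 1.0
    goal_pub.publish(goal)
    rospy.sleep(20.0)   # crude pacing: wait before sending the next goal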

So the next research direction is:

specify only the final goal, and let the UAV explore the unknown environment freely, mapping as it goes, until it reaches the destination.

For lack of time, this step has not been implemented yet. 🧐

Simulation Demo Video

At the end of the video, the map had apparently grown large enough to exhaust memory, and rviz broke. The simulation alone runs fine, but with screen recording on the problem kept recurring and I have not found a fix yet, so the video does not show the full map 🤔.

Mapping Output

RTABMap provides map data in several forms, including:

3D occupancy map based on an octree

(figure) Published on the /rtabmap/octomap_binary topic and displayed via the ColorOccupancyGrid display in rgbd.rviz.

It recursively subdivides 3D space into ever smaller cubic cells (voxels). Each voxel stores an occupancy probability, i.e. how likely that cell is to be occupied by an obstacle. The hierarchical octree structure lets OctoMap store and query 3D space efficiently while preserving accuracy. (A toy sketch of the occupancy update follows.)
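As a toy illustration (not OctoMap's actual code), here is a flat voxel grid with the usual log-odds occupancy update; a real octree stores the same values hierarchically so large uniform regions collapse into single nodes:

Python
import math
from collections import defaultdict

L_HIT, L_MISS = math.log(0.7 / 0.3), math.log(0.4 / 0.6)   # log-odds increments
L_MIN, L_MAX = -2.0, 3.5                                   # clamping bounds

class VoxelMap:
    def __init__(self, resolution=0.1):
        self.res = resolution
        self.logodds = defaultdict(float)   # unknown voxels default to 0 (p = 0.5)

    def key(self, x, y, z):
        return (int(x // self.res), int(y // self.res), int(z // self.res))

    def update(self, x, y, z, hit):
        k = self.key(x, y, z)
        l = self.logodds[k] + (L_HIT if hit else L_MISS)
        self.logodds[k] = max(L_MIN, min(L_MAX, l))   # clamp so the map re-adapts fast

    def p_occupied(self, x, y, z):
        return 1.0 / (1.0 + math.exp(-self.logodds[self.key(x, y, z)]))

m = VoxelMap()
for _ in range(3):
    m.update(1.0, 2.0, 0.5, hit=True)
print(round(m.p_occupied(1.0, 2.0, 0.5), 3))   # ~0.93 after three hits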

2D occupancy grid based on the octree

(figure) Published on the /rtabmap/octomap_grid topic and displayed via the Map display in rgbd.rviz.

This map is simply the 3D occupancy map above projected onto the ground plane.

Plain 2D grid map

(figure) Published on the /rtabmap/grid_map topic and displayed via the Map display in rgbd.rviz.

And so on.

Dissecting the Project Structure

PX4 Flight Control

PX4 evolved from PIXHAWK, a hardware and software project of the Computer Vision and Geometry Lab at ETH Zurich, and is often called the king of open-source flight controllers. Its goal is to give academia, hobbyists, and industry a low-cost, high-performance, professional autopilot. It is developed by world-class contributors from industry and academia, backed by an active global community, and supports all kinds of vehicles, UAVs in particular.

Here we only look at the multirotor flight control architecture; its control block diagram is shown below.

(figure)

This is a standard cascaded control architecture, with the controllers combining P and PID control.
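Schematically, an outer P loop on position error produces the velocity setpoint for an inner PID loop. A simplified single-axis sketch of the idea (illustrative gains, not PX4's actual implementation):

Python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i = 0.0
        self.prev_e = 0.0

    def step(self, e, dt):
        self.i += e * dt
        d = (e - self.prev_e) / dt
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d

KP_POS = 0.95                  # outer loop: proportional only
vel_pid = PID(1.8, 0.4, 0.2)   # inner loop: PID on velocity error (made-up gains)

def cascade_step(pos_sp, pos, vel, dt):
    vel_sp = KP_POS * (pos_sp - pos)          # position error -> velocity setpoint
    return vel_pid.step(vel_sp - vel, dt)     # velocity error -> thrust/attitude cmd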

The PX4 stack provides several flight modes; in this project I use Offboard mode, which steers the vehicle's position and attitude through setpoints for position, velocity, acceleration, attitude, attitude rates, or force/torque.

In other words, in this mode a companion program publishes the desired motion and the low-level controller tracks it. The vehicle must be armed manually before takeoff and disarmed again after landing.

MAVROS: the Bridge between PX4 and ROS

MAVLink is a lightweight communication protocol, and it is what PX4 communicates over; MAVROS is the ROS package built on MAVLink that connects ROS to it, enabling communication and control between ROS and the vehicle.

PX4 exposes a communication interface for MAVROS, which we will use below.

Communicating with PX4 in the Gazebo Simulation

XML
<?xml version="1.0"?>
<launch>
    <arg name="est" default="ekf2"/>
    <arg name="world" default="$(find mavlink_sitl_gazebo)/worlds/zhihang2.world"/>
    <!-- gazebo configs -->
    <arg name="gui" default="true"/>
    <arg name="debug" default="false"/>
    <arg name="verbose" default="false"/>
    <arg name="paused" default="false"/>
    <!-- Gazebo sim -->
    <include file="$(find gazebo_ros)/launch/empty_world.launch">
        <arg name="gui" value="$(arg gui)"/>
        <arg name="world_name" value="$(arg world)"/>
        <arg name="debug" value="$(arg debug)"/>
        <arg name="verbose" value="$(arg verbose)"/>
        <arg name="paused" value="$(arg paused)"/>
    </include>
    <!-- iris_0 -->
    <group ns="iris_0">
        <!-- MAVROS and vehicle configs -->
            <arg name="ID" value="0"/>
            <arg name="ID_in_group" value="0"/>
            <arg name="fcu_url" default="udp://:24540@localhost:34580"/>
        <!-- PX4 SITL and vehicle spawn -->


        <include file="$(find px4)/launch/single_vehicle_spawn_xtd.launch">
        <!-- -16 for map2-1, -18 for map2-2, -18 for map2-3 -->
            <arg name="x" value="-16"/> 
            <arg name="y" value="0"/>
            <arg name="z" value="0.5"/>
            <arg name="R" value="0"/>
            <arg name="P" value="0"/>
            <arg name="Y" value="0"/>

            <arg name="vehicle" value="iris"/>
            <arg name="sdf" value="iris_realsense_camera"/>
            <arg name="mavlink_udp_port" value="18570"/>
            <arg name="mavlink_tcp_port" value="4560"/>
            <arg name="ID" value="$(arg ID)"/>
            <arg name="ID_in_group" value="$(arg ID_in_group)"/>
        </include>
        <!-- MAVROS -->
        <include file="$(find mavros)/launch/px4.launch">
            <arg name="fcu_url" value="$(arg fcu_url)"/>
            <arg name="gcs_url" value=""/>
            <arg name="tgt_system" value="$(eval 1 + arg('ID'))"/>
            <arg name="tgt_component" value="1"/>
        </include>
    </group>
</launch>
XML
<launch>
    <arg name="fcu_url" default="/dev/ttyACM0:57600" />
    <arg name="gcs_url" default="" />
    <arg name="tgt_system" default="1" />
    <arg name="tgt_component" default="1" />
    <arg name="log_output" default="screen" />
    <arg name="fcu_protocol" default="v2.0" />
    <arg name="respawn_mavros" default="false" />
    <arg name="config_yaml" default="$(find mavros)/launch/px4_config.yaml" />

    <include file="$(find mavros)/launch/node.launch">
        <arg name="pluginlists_yaml" value="$(find mavros)/launch/px4_pluginlists.yaml" />
        <arg name="config_yaml" value="$(arg config_yaml)" />

        <arg name="fcu_url" value="$(arg fcu_url)" />
        <arg name="gcs_url" value="$(arg gcs_url)" />
        <arg name="tgt_system" value="$(arg tgt_system)" />
        <arg name="tgt_component" value="$(arg tgt_component)" />
        <arg name="log_output" value="$(arg log_output)" />
        <arg name="fcu_protocol" value="$(arg fcu_protocol)" />
        <arg name="respawn_mavros" default="$(arg respawn_mavros)" />
    </include>
</launch>

These two launch files start ROS(1), Gazebo, MAVROS, and PX4 together; fcu_url is the IP/port of the computer running the simulation.

They also set the vehicle's hardware details and basic parameters, such as its initial position and attitude in the simulation world, and in particular they select the vehicle model and the depth camera it carries.

Keyboard Control of the UAV

As mentioned above, the first step is establishing communication between a ROS node and PX4 through MAVROS.

Python
import rospy
from mavros_msgs.msg import PositionTarget, ParamValue, State
from mavros_msgs.srv import CommandBool, SetMode, ParamSet
from geometry_msgs.msg import PoseStamped, Pose, Twist
from std_msgs.msg import String
from pyquaternion import Quaternion

import sys

rospy.init_node(sys.argv[1]+'_'+sys.argv[2]+"_communication")
rate = rospy.Rate(30)

class Communication:

    def __init__(self, vehicle_type, vehicle_id):

        self.vehicle_type = vehicle_type
        self.vehicle_id = vehicle_id
        self.current_position = None
        self.current_yaw = 0
        self.hover_flag = 0
        self.coordinate_frame = 1
        self.target_motion = PositionTarget()
        self.target_motion.coordinate_frame = self.coordinate_frame
        self.arm_state = False
        self.motion_type = 0
        self.flight_mode = None
        self.mission = None
        self.last_cmd = None


        # MAVROS Connection Check
        mavros_state = rospy.wait_for_message(self.vehicle_type+'_'+self.vehicle_id+"/mavros/state", State)
        if not mavros_state.connected:
            rospy.logwarn(self.vehicle_type+'_'+self.vehicle_id+": No connection to FCU. Check mavros!")
            exit(0)

        ####################
        ## ROS Interfaces ##
        ####################
        self.local_pose_sub = rospy.Subscriber(self.vehicle_type+'_'+self.vehicle_id+"/mavros/local_position/pose", PoseStamped, self.local_pose_callback,queue_size=1)
        self.cmd_sub = rospy.Subscriber("/xtdrone/"+self.vehicle_type+'_'+self.vehicle_id+"/cmd",String,self.cmd_callback,queue_size=3)
        self.cmd_pose_flu_sub = rospy.Subscriber("/xtdrone/"+self.vehicle_type+'_'+self.vehicle_id+"/cmd_pose_flu", Pose, self.cmd_pose_flu_callback,queue_size=1)
        self.cmd_pose_enu_sub = rospy.Subscriber("/xtdrone/"+self.vehicle_type+'_'+self.vehicle_id+"/cmd_pose_enu", Pose, self.cmd_pose_enu_callback,queue_size=1)
        self.cmd_vel_flu_sub = rospy.Subscriber("/xtdrone/"+self.vehicle_type+'_'+self.vehicle_id+"/cmd_vel_flu", Twist, self.cmd_vel_flu_callback,queue_size=1)
        self.cmd_vel_enu_sub = rospy.Subscriber("/xtdrone/"+self.vehicle_type+'_'+self.vehicle_id+"/cmd_vel_enu", Twist, self.cmd_vel_enu_callback,queue_size=1)
        self.cmd_accel_flu_sub = rospy.Subscriber("/xtdrone/"+self.vehicle_type+'_'+self.vehicle_id+"/cmd_accel_flu", Twist, self.cmd_accel_flu_callback,queue_size=1)
        self.cmd_accel_enu_sub = rospy.Subscriber("/xtdrone/"+self.vehicle_type+'_'+self.vehicle_id+"/cmd_accel_enu", Twist, self.cmd_accel_enu_callback,queue_size=1)

        ''' 
        ros publishers
        '''
        self.target_motion_pub = rospy.Publisher(self.vehicle_type+'_'+self.vehicle_id+"/mavros/setpoint_raw/local", PositionTarget, queue_size=1)

        '''
        ros services
        '''
        self.armService = rospy.ServiceProxy(self.vehicle_type+'_'+self.vehicle_id+"/mavros/cmd/arming", CommandBool)
        self.flightModeService = rospy.ServiceProxy(self.vehicle_type+'_'+self.vehicle_id+"/mavros/set_mode", SetMode)
        self.set_param_srv = rospy.ServiceProxy(self.vehicle_type+'_'+self.vehicle_id+"/mavros/param/set", ParamSet)
        rcl_except = ParamValue(4, 0.0)
        self.set_param_srv("COM_RCL_EXCEPT", rcl_except)

        print(self.vehicle_type+'_'+self.vehicle_id+": "+"communication initialized")

    def start(self):
        '''
        main ROS thread
        '''
        while not rospy.is_shutdown():
            self.target_motion_pub.publish(self.target_motion)
            rate.sleep()

    def local_pose_callback(self, msg):
        self.current_position = msg.pose.position
        self.current_yaw = self.q2yaw(msg.pose.orientation)

    def construct_target(self, x=0, y=0, z=0, vx=0, vy=0, vz=0, afx=0, afy=0, afz=0, yaw=0, yaw_rate=0):
        target_raw_pose = PositionTarget()
        target_raw_pose.coordinate_frame = self.coordinate_frame

        target_raw_pose.position.x = x
        target_raw_pose.position.y = y
        target_raw_pose.position.z = z

        target_raw_pose.velocity.x = vx
        target_raw_pose.velocity.y = vy
        target_raw_pose.velocity.z = vz

        target_raw_pose.acceleration_or_force.x = afx
        target_raw_pose.acceleration_or_force.y = afy
        target_raw_pose.acceleration_or_force.z = afz

        target_raw_pose.yaw = yaw
        target_raw_pose.yaw_rate = yaw_rate

        if(self.motion_type == 0):
            target_raw_pose.type_mask = PositionTarget.IGNORE_VX + PositionTarget.IGNORE_VY + PositionTarget.IGNORE_VZ \
                            + PositionTarget.IGNORE_AFX + PositionTarget.IGNORE_AFY + PositionTarget.IGNORE_AFZ \
                            + PositionTarget.IGNORE_YAW_RATE
        if(self.motion_type == 1):
            target_raw_pose.type_mask = PositionTarget.IGNORE_PX + PositionTarget.IGNORE_PY + PositionTarget.IGNORE_PZ \
                            + PositionTarget.IGNORE_AFX + PositionTarget.IGNORE_AFY + PositionTarget.IGNORE_AFZ \
                            + PositionTarget.IGNORE_YAW
        if(self.motion_type == 2):
            target_raw_pose.type_mask = PositionTarget.IGNORE_PX + PositionTarget.IGNORE_PY + PositionTarget.IGNORE_PZ \
                            + PositionTarget.IGNORE_VX + PositionTarget.IGNORE_VY + PositionTarget.IGNORE_VZ \
                            + PositionTarget.IGNORE_YAW

        return target_raw_pose

    def cmd_pose_flu_callback(self, msg):
        self.coordinate_frame = 9
        self.motion_type = 0
        yaw = self.q2yaw(msg.orientation)
        self.target_motion = self.construct_target(x=msg.position.x,y=msg.position.y,z=msg.position.z,yaw=yaw)

    def cmd_pose_enu_callback(self, msg):
        self.coordinate_frame = 1
        self.motion_type = 0
        yaw = self.q2yaw(msg.orientation)
        self.target_motion = self.construct_target(x=msg.position.x,y=msg.position.y,z=msg.position.z,yaw=yaw)

    def cmd_vel_flu_callback(self, msg):
        self.hover_state_transition(msg.linear.x, msg.linear.y, msg.linear.z, msg.angular.z)
        if self.hover_flag == 0:
            self.coordinate_frame = 8
            self.motion_type = 1
            self.target_motion = self.construct_target(vx=msg.linear.x,vy=msg.linear.y,vz=msg.linear.z,yaw_rate=msg.angular.z)  

    def cmd_vel_enu_callback(self, msg):
        self.hover_state_transition(msg.linear.x, msg.linear.y, msg.linear.z, msg.angular.z)
        if self.hover_flag == 0:
            self.coordinate_frame = 1
            self.motion_type = 1
            self.target_motion = self.construct_target(vx=msg.linear.x,vy=msg.linear.y,vz=msg.linear.z,yaw_rate=msg.angular.z)    

    def cmd_accel_flu_callback(self, msg):
        self.hover_state_transition(msg.linear.x, msg.linear.y, msg.linear.z, msg.angular.z)
        if self.hover_flag == 0:
            self.coordinate_frame = 8
            self.motion_type = 2
            self.target_motion = self.construct_target(afx=msg.linear.x,afy=msg.linear.y,afz=msg.linear.z,yaw_rate=msg.angular.z)    

    def cmd_accel_enu_callback(self, msg):
        self.hover_state_transition(msg.linear.x, msg.linear.y, msg.linear.z, msg.angular.z)
        if self.hover_flag == 0:
            self.coordinate_frame = 1 
            self.motion_type = 2
            self.target_motion = self.construct_target(afx=msg.linear.x,afy=msg.linear.y,afz=msg.linear.z,yaw_rate=msg.angular.z)    

    def hover_state_transition(self,x,y,z,w):
        if abs(x) > 0.02 or abs(y)  > 0.02 or abs(z)  > 0.02 or abs(w)  > 0.005:
            self.hover_flag = 0
            self.flight_mode = 'OFFBOARD'
        elif not self.flight_mode == "HOVER":
            self.hover_flag = 1
            self.flight_mode = 'HOVER'
            self.hover()
    def cmd_callback(self, msg):
        if msg.data == self.last_cmd or msg.data == '' or msg.data == 'stop controlling':
            return

        elif msg.data == 'ARM':
            self.arm_state =self.arm()
            print(self.vehicle_type+'_'+self.vehicle_id+": Armed "+str(self.arm_state))

        elif msg.data == 'DISARM':
            self.arm_state = not self.disarm()
            print(self.vehicle_type+'_'+self.vehicle_id+": Armed "+str(self.arm_state))

        elif msg.data[:-1] == "mission" and not msg.data == self.mission:
            self.mission = msg.data
            print(self.vehicle_type+'_'+self.vehicle_id+": "+msg.data)

        else:
            self.flight_mode = msg.data
            self.flight_mode_switch()

        self.last_cmd = msg.data

    def q2yaw(self, q):
        if isinstance(q, Quaternion):
            rotate_z_rad = q.yaw_pitch_roll[0]
        else:
            q_ = Quaternion(q.w, q.x, q.y, q.z)
            rotate_z_rad = q_.yaw_pitch_roll[0]

        return rotate_z_rad

    def arm(self):
        if self.armService(True):
            return True
        else:
            print(self.vehicle_type+'_'+self.vehicle_id+": arming failed!")
            return False

    def disarm(self):
        if self.armService(False):
            return True
        else:
            print(self.vehicle_type+'_'+self.vehicle_id+": disarming failed!")
            return False

    def hover(self):
        self.coordinate_frame = 1
        self.motion_type = 0
        self.target_motion = self.construct_target(x=self.current_position.x,y=self.current_position.y,z=self.current_position.z,yaw=self.current_yaw)
        print(self.vehicle_type+'_'+self.vehicle_id+":"+self.flight_mode)

    def flight_mode_switch(self):
        if self.flight_mode == 'HOVER':
            self.hover_flag = 1
            self.hover()
        elif self.flightModeService(custom_mode=self.flight_mode):
            print(self.vehicle_type+'_'+self.vehicle_id+": "+self.flight_mode)
            return True
        else:
            print(self.vehicle_type+'_'+self.vehicle_id+": "+self.flight_mode+"failed")
            return False

if __name__ == '__main__':
    communication = Communication(sys.argv[1],sys.argv[2])
    communication.start()

This script subscribes to several ROS topics to receive the vehicle's current pose along with pose, velocity, and acceleration commands and control strings (the block of rospy.Subscriber calls under "ROS Interfaces").

It also defines a publisher, target_motion_pub, which sends the target motion to MAVROS as PositionTarget messages on mavros/setpoint_raw/local.

In addition, it creates service proxies for calling MAVROS's arm/disarm, flight-mode, and parameter-setting services.

Next comes the script that maps keystrokes to control commands.

Python
import rospy
from geometry_msgs.msg import Twist
import sys, select, os
import tty, termios
from std_msgs.msg import String


MAX_LINEAR = 20
MAX_ANG_VEL = 3
LINEAR_STEP_SIZE = 0.01
ANG_VEL_STEP_SIZE = 0.01

cmd_vel_mask = False
ctrl_leader = False

msg2all = """
Control Your XTDrone!
To all drones  (press g to control the leader)
---------------------------
1   2   3   4   5   6   7   8   9   0
        w       r    t   y        i
a    s    d       g       j    k    l
        x       v    b   n        ,

w/x : increase/decrease forward  
a/d : increase/decrease leftward 
i/, : increase/decrease upward 
j/l : increase/decrease yaw
r   : return home
t/y : arm/disarm
v/n : takeoff/land
b   : offboard
s/k : hover and remove the mask of keyboard control
0   : mask the keyboard control (for single UAV)
0~9 : extendable mission(eg.different formation configuration)
    this will mask the keyboard control
g   : control the leader
CTRL-C to quit
"""

msg2leader = """
Control Your XTDrone!
To the leader  (press g to control all drones)
---------------------------
1   2   3   4   5   6   7   8   9   0
        w       r    t   y        i
a    s    d       g       j    k    l
        x       v    b   n        ,

w/x : increase/decrease forward 
a/d : increase/decrease leftward 
i/, : increase/decrease upward 
j/l : increase/decrease yaw
r   : return home
t/y : arm/disarm
v/n : takeoff/land
b   : offboard
s/k : hover and remove the mask of keyboard control
g   : control all drones
CTRL-C to quit
"""

e = """
Communications Failed
"""

def getKey():
    tty.setraw(sys.stdin.fileno())
    rlist, _, _ = select.select([sys.stdin], [], [], 0.1)
    if rlist:
        key = sys.stdin.read(1)
    else:
        key = ''
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
    return key

def print_msg():
    if ctrl_leader:
        print(msg2leader)
    else:
        print(msg2all)      

if __name__=="__main__":

    settings = termios.tcgetattr(sys.stdin)

    multirotor_type = sys.argv[1]
    multirotor_num = int(sys.argv[2])
    control_type = sys.argv[3]

    if multirotor_num == 18:
        formation_configs = ['waiting', 'cuboid', 'sphere', 'diamond']
    elif multirotor_num == 9:
        formation_configs = ['waiting', 'cube', 'pyramid', 'triangle']
    elif multirotor_num == 6:
        formation_configs = ['waiting', 'T', 'diamond', 'triangle']
    elif multirotor_num == 1:
        formation_configs = ['stop controlling']

    cmd= String()
    twist = Twist()   

    rospy.init_node(multirotor_type + '_multirotor_keyboard_control')

    if control_type == 'vel':
        multi_cmd_vel_flu_pub = [None]*multirotor_num
        multi_cmd_pub = [None]*multirotor_num
        for i in range(multirotor_num):
            multi_cmd_vel_flu_pub[i] = rospy.Publisher('/xtdrone/'+multirotor_type+'_'+str(i)+'/cmd_vel_flu', Twist, queue_size=1)
            multi_cmd_pub[i] = rospy.Publisher('/xtdrone/'+multirotor_type+'_'+str(i)+'/cmd',String,queue_size=3)
        leader_cmd_vel_flu_pub = rospy.Publisher("/xtdrone/leader/cmd_vel_flu", Twist, queue_size=1)
        leader_cmd_pub = rospy.Publisher("/xtdrone/leader/cmd", String, queue_size=1)

    else:
        multi_cmd_accel_flu_pub = [None]*multirotor_num
        multi_cmd_pub = [None]*multirotor_num
        for i in range(multirotor_num):
            multi_cmd_accel_flu_pub[i] = rospy.Publisher('/xtdrone/'+multirotor_type+'_'+str(i)+'/cmd_accel_flu', Twist, queue_size=1)
            multi_cmd_pub[i] = rospy.Publisher('/xtdrone/'+multirotor_type+'_'+str(i)+'/cmd',String,queue_size=1)
        leader_cmd_accel_flu_pub = rospy.Publisher("/xtdrone/leader/cmd_accel_flu", Twist, queue_size=1)
        leader_cmd_pub = rospy.Publisher("/xtdrone/leader/cmd", String, queue_size=3)

    forward  = 0.0
    leftward  = 0.0
    upward  = 0.0
    angular = 0.0

    print_msg()
    while(1):
        key = getKey()
        if key == 'w' :
            forward = forward + LINEAR_STEP_SIZE
            print_msg()
            if control_type == 'vel':
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))
            else:
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))

        elif key == 'x' :
            forward = forward - LINEAR_STEP_SIZE
            print_msg()
            if control_type == 'vel':
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))
            else:
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))

        elif key == 'a' :

            leftward = leftward + LINEAR_STEP_SIZE
            print_msg()
            if control_type == 'vel':
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))
            else:
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))

        elif key == 'd' :
            leftward = leftward - LINEAR_STEP_SIZE
            print_msg()
            if control_type == 'vel':
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))
            else:
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))

        elif key == 'i' :
            upward = upward + LINEAR_STEP_SIZE
            print_msg()
            if control_type == 'vel':
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))
            else:
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))

        elif key == ',' :
            upward = upward - LINEAR_STEP_SIZE
            print_msg()
            if control_type == 'vel':
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))
            else:
                print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))

        elif key == 'j':
            angular = angular + ANG_VEL_STEP_SIZE
            print_msg()
            print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))

        elif key == 'l':
            angular = angular - ANG_VEL_STEP_SIZE
            print_msg()
            print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))

        elif key == 'r':
            cmd = 'AUTO.RTL'
            print_msg()
            print('Returning home')
        elif key == 't':
            cmd = 'ARM'
            print_msg()
            print('Arming')
        elif key == 'y':
            cmd = 'DISARM'
            print_msg()
            print('Disarming')
        elif key == 'v':
            cmd = 'AUTO.TAKEOFF'
            print_msg()
            #print('Takeoff mode is disenabled now')
        elif key == 'b':
            cmd = 'OFFBOARD'
            print_msg()
            print('Offboard')
        elif key == 'n':
            cmd = 'AUTO.LAND'
            print_msg()
            print('Landing')
        elif key == 'g':
            ctrl_leader = not ctrl_leader
            print_msg()
        elif key in ['k', 's']:
            cmd_vel_mask = False
            forward   = 0.0
            leftward   = 0.0
            upward   = 0.0
            angular  = 0.0
            cmd = 'HOVER'
            print_msg()
            print("currently:\t forward vel %.2f\t leftward vel %.2f\t upward vel %.2f\t angular %.2f " % (forward, leftward, upward, angular))
            print('Hover')
        else:
            for i in range(10):
                if key == str(i):
                    cmd = formation_configs[i]
                    print_msg()
                    print(cmd)
                    cmd_vel_mask = True
            if (key == '\x03'):
                break

        if forward > MAX_LINEAR:
            forward = MAX_LINEAR
        elif forward < -MAX_LINEAR:
            forward = -MAX_LINEAR
        if leftward > MAX_LINEAR:
            leftward = MAX_LINEAR
        elif leftward < -MAX_LINEAR:
            leftward = -MAX_LINEAR
        if upward > MAX_LINEAR:
            upward = MAX_LINEAR
        elif upward < -MAX_LINEAR:
            upward = -MAX_LINEAR
        if angular > MAX_ANG_VEL:
            angular = MAX_ANG_VEL
        elif angular < -MAX_ANG_VEL:
            angular = - MAX_ANG_VEL

        twist.linear.x = forward; twist.linear.y = leftward ; twist.linear.z = upward
        twist.angular.x = 0.0; twist.angular.y = 0.0;  twist.angular.z = angular

        for i in range(multirotor_num):
            if ctrl_leader:
                if control_type == 'vel':
                    leader_cmd_vel_flu_pub.publish(twist)
                else:
                    leader_cmd_accel_flu_pub.publish(twist)
                leader_cmd_pub.publish(cmd)
            else:
                if not cmd_vel_mask:
                    if control_type == 'vel':
                        multi_cmd_vel_flu_pub[i].publish(twist)   
                    else:
                        multi_cmd_accel_flu_pub[i].publish(twist)
                multi_cmd_pub[i].publish(cmd)

        cmd = ''

    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)

In this project we mainly use four of these keys: I, B, T, and S.

I increases the desired linear velocity along z, B switches to offboard mode, T arms the motors, and S switches to hover mode, zeroing the desired velocity in all directions while holding attitude, so the vehicle hovers in place.

Visual-Inertial Odometry (VIO)

Paper: A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors

GitHub repository: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion

Visual-inertial odometry (VIO) estimates the state from camera (here, stereo) and IMU measurements; it delivers accurate and robust state estimates even in low-texture, high-speed, and highly dynamic scenes.

VIO differs from conventional SLAM in two main ways:

  • In hardware, VIO fuses sensors: a camera plus a six-axis IMU. The camera produces images; the IMU produces accelerations and angular rates. The camera is relatively accurate but slow, while the raw IMU accelerations drift away within a very short time if integrated directly; on the other hand the IMU is fast, running at 200 Hz even in phones. (The quick numeric experiment after this list illustrates the drift.)

  • In software, VIO runs a fairly sophisticated and effective nonlinear optimization or Kalman filter, e.g. MSCKF (Multi-State-Constraint Kalman Filter). It focuses on fast pose tracking rather than maintaining a global map, and skips the map-wide global optimization (bundle adjustment) of keyframe-based SLAM.
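A quick numeric illustration of that drift (my own sketch): integrating a slightly biased 200 Hz accelerometer signal twice makes the position error grow roughly quadratically with time.

Python
import numpy as np

# Dead-reckon a stationary IMU: integrate accelerometer noise twice.
rng = np.random.default_rng(0)
dt, T = 1.0 / 200.0, 10.0                       # 200 Hz IMU, 10 s
n = int(T / dt)
accel = 0.02 + 0.05 * rng.standard_normal(n)    # 0.02 m/s^2 bias + white noise

vel = np.cumsum(accel) * dt                     # first integration: velocity
pos = np.cumsum(vel) * dt                       # second integration: position

# Bias alone gives 0.5 * bias * T^2 = 1 m of drift after 10 s.
print(f"position drift after {T:.0f} s: {pos[-1]:.2f} m")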

Algorithm pipeline: (figure)

In the simulation we launch a ROS node, vins_node, defined in rosNodeTest.cpp; it reads the stereo-camera and IMU parameters from px4_sitl_stereo_imu_config.yaml.

C++
#include <stdio.h>
#include <queue>
#include <map>
#include <thread>
#include <mutex>
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/opencv.hpp>
#include "estimator/estimator.h"
#include "estimator/parameters.h"
#include "utility/visualization.h"

Estimator estimator;

queue<sensor_msgs::ImuConstPtr> imu_buf;
queue<sensor_msgs::PointCloudConstPtr> feature_buf;
queue<sensor_msgs::ImageConstPtr> img0_buf;
queue<sensor_msgs::ImageConstPtr> img1_buf;
std::mutex m_buf;


void img0_callback(const sensor_msgs::ImageConstPtr &img_msg)
{
    m_buf.lock();
    img0_buf.push(img_msg);
    m_buf.unlock();
}

void img1_callback(const sensor_msgs::ImageConstPtr &img_msg)
{
    m_buf.lock();
    img1_buf.push(img_msg);
    m_buf.unlock();
}


cv::Mat getImageFromMsg(const sensor_msgs::ImageConstPtr &img_msg)
{
    cv_bridge::CvImageConstPtr ptr;
    if (img_msg->encoding == "8UC1")
    {
        sensor_msgs::Image img;
        img.header = img_msg->header;
        img.height = img_msg->height;
        img.width = img_msg->width;
        img.is_bigendian = img_msg->is_bigendian;
        img.step = img_msg->step;
        img.data = img_msg->data;
        img.encoding = "mono8";
        ptr = cv_bridge::toCvCopy(img, sensor_msgs::image_encodings::MONO8);
    }
    else
        ptr = cv_bridge::toCvCopy(img_msg, sensor_msgs::image_encodings::MONO8);

    cv::Mat img = ptr->image.clone();
    return img;
}

// extract images with same timestamp from two topics
void sync_process()
{
    while(1)
    {
        if(STEREO)
        {
            cv::Mat image0, image1;
            std_msgs::Header header;
            double time = 0;
            m_buf.lock();
            if (!img0_buf.empty() && !img1_buf.empty())
            {
                double time0 = img0_buf.front()->header.stamp.toSec();
                double time1 = img1_buf.front()->header.stamp.toSec();
                // 0.003s sync tolerance
                if(time0 < time1 - 0.003)
                {
                    img0_buf.pop();
                    printf("throw img0\n");
                }
                else if(time0 > time1 + 0.003)
                {
                    img1_buf.pop();
                    printf("throw img1\n");
                }
                else
                {
                    time = img0_buf.front()->header.stamp.toSec();
                    header = img0_buf.front()->header;
                    image0 = getImageFromMsg(img0_buf.front());
                    img0_buf.pop();
                    image1 = getImageFromMsg(img1_buf.front());
                    img1_buf.pop();
                    //printf("find img0 and img1\n");
                }
            }
            m_buf.unlock();
            if(!image0.empty())
                estimator.inputImage(time, image0, image1);
        }
        else
        {
            cv::Mat image;
            std_msgs::Header header;
            double time = 0;
            m_buf.lock();
            if(!img0_buf.empty())
            {
                time = img0_buf.front()->header.stamp.toSec();
                header = img0_buf.front()->header;
                image = getImageFromMsg(img0_buf.front());
                img0_buf.pop();
            }
            m_buf.unlock();
            if(!image.empty())
                estimator.inputImage(time, image);
        }

        std::chrono::milliseconds dura(2);
        std::this_thread::sleep_for(dura);
    }
}


void imu_callback(const sensor_msgs::ImuConstPtr &imu_msg)
{
    double t = imu_msg->header.stamp.toSec();
    double dx = imu_msg->linear_acceleration.x;
    double dy = imu_msg->linear_acceleration.y;
    double dz = imu_msg->linear_acceleration.z;
    double rx = imu_msg->angular_velocity.x;
    double ry = imu_msg->angular_velocity.y;
    double rz = imu_msg->angular_velocity.z;
    Vector3d acc(dx, dy, dz);
    Vector3d gyr(rx, ry, rz);
    estimator.inputIMU(t, acc, gyr);
    return;
}


void feature_callback(const sensor_msgs::PointCloudConstPtr &feature_msg)
{
    map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> featureFrame;
    for (unsigned int i = 0; i < feature_msg->points.size(); i++)
    {
        int feature_id = feature_msg->channels[0].values[i];
        int camera_id = feature_msg->channels[1].values[i];
        double x = feature_msg->points[i].x;
        double y = feature_msg->points[i].y;
        double z = feature_msg->points[i].z;
        double p_u = feature_msg->channels[2].values[i];
        double p_v = feature_msg->channels[3].values[i];
        double velocity_x = feature_msg->channels[4].values[i];
        double velocity_y = feature_msg->channels[5].values[i];
        if(feature_msg->channels.size() > 5)
        {
            double gx = feature_msg->channels[6].values[i];
            double gy = feature_msg->channels[7].values[i];
            double gz = feature_msg->channels[8].values[i];
            pts_gt[feature_id] = Eigen::Vector3d(gx, gy, gz);
            //printf("receive pts gt %d %f %f %f\n", feature_id, gx, gy, gz);
        }
        ROS_ASSERT(z == 1);
        Eigen::Matrix<double, 7, 1> xyz_uv_velocity;
        xyz_uv_velocity << x, y, z, p_u, p_v, velocity_x, velocity_y;
        featureFrame[feature_id].emplace_back(camera_id,  xyz_uv_velocity);
    }
    double t = feature_msg->header.stamp.toSec();
    estimator.inputFeature(t, featureFrame);
    return;
}

void restart_callback(const std_msgs::BoolConstPtr &restart_msg)
{
    if (restart_msg->data == true)
    {
        ROS_WARN("restart the estimator!");
        estimator.clearState();
        estimator.setParameter();
    }
    return;
}

void imu_switch_callback(const std_msgs::BoolConstPtr &switch_msg)
{
    if (switch_msg->data == true)
    {
        //ROS_WARN("use IMU!");
        estimator.changeSensorType(1, STEREO);
    }
    else
    {
        //ROS_WARN("disable IMU!");
        estimator.changeSensorType(0, STEREO);
    }
    return;
}

void cam_switch_callback(const std_msgs::BoolConstPtr &switch_msg)
{
    if (switch_msg->data == true)
    {
        //ROS_WARN("use stereo!");
        estimator.changeSensorType(USE_IMU, 1);
    }
    else
    {
        //ROS_WARN("use mono camera (left)!");
        estimator.changeSensorType(USE_IMU, 0);
    }
    return;
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "vins_estimator");
    ros::NodeHandle n("~");
    ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Info);

    if(argc != 2)
    {
        printf("please intput: rosrun vins vins_node [config file] \n"
            "for example: rosrun vins vins_node "
            "~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml \n");
        return 1;
    }

    string config_file = argv[1];
    printf("config_file: %s\n", argv[1]);

    readParameters(config_file);
    estimator.setParameter();

#ifdef EIGEN_DONT_PARALLELIZE
    ROS_DEBUG("EIGEN_DONT_PARALLELIZE");
#endif

    ROS_WARN("waiting for image and imu...");

    registerPub(n);

    ros::Subscriber sub_imu = n.subscribe(IMU_TOPIC, 2000, imu_callback, ros::TransportHints().tcpNoDelay());
    ros::Subscriber sub_feature = n.subscribe("/feature_tracker/feature", 2000, feature_callback);
    ros::Subscriber sub_img0 = n.subscribe(IMAGE0_TOPIC, 100, img0_callback);
    ros::Subscriber sub_img1 = n.subscribe(IMAGE1_TOPIC, 100, img1_callback);
    ros::Subscriber sub_restart = n.subscribe("/vins_restart", 100, restart_callback);
    ros::Subscriber sub_imu_switch = n.subscribe("/vins_imu_switch", 100, imu_switch_callback);
    ros::Subscriber sub_cam_switch = n.subscribe("/vins_cam_switch", 100, cam_switch_callback);

    std::thread sync_thread{sync_process};
    ros::spin();

    return 0;
}
YAML
%YAML:1.0

#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam: 2 cam; 
imu: 1
num_of_cam: 2

imu_topic: "/iris_0/imu_gazebo"
image0_topic: "/iris_0/stereo_camera/left/image_raw"
image1_topic: "/iris_0/stereo_camera/right/image_raw"
output_path: "~/xtdrone_ws/vins_output"

cam0_calib: "cam0_pinhole_p1.yaml"
cam1_calib: "cam1_pinhole_p1.yaml"
image_width: 752
image_height: 480


# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.

body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [0, 0, 1, 0,
          -1, 0, 0, 0.12,
          0, -1, 0, -0.3,
          0, 0, 0, 1.]


body_T_cam1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [0, 0, 1, 0,
          -1, 0, 0, 0,
          0, -1, 0, -0.3,
          0, 0, 0, 1.]


#Multiple thread support
multiple_thread: 1

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features 
freq: 80                # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image 
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
flow_back: 1            # perform forward and backward optical flow to improve feature tracking accuracy

#optimization parameters
max_solver_time: 0.04  # max solver iteration time (ms), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate parameters you provide, the better performance
acc_n: 0.002          # accelerometer measurement noise standard deviation. 
gyr_n: 0.0006         # gyroscope measurement noise standard deviation.     
acc_w: 0.00002       # accelerometer bias random walk noise standard deviation.  
gyr_w: 0.000003       # gyroscope bias random walk noise standard deviation.     
g_norm: 9.81007     # gravity magnitude

#unsynchronization parameters
estimate_td: 0                      # online estimate time offset between camera and imu
td: 0.0                             # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#loop closure parameters
load_previous_pose_graph: 0        # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "~/xtdrone_ws/vins_output/pose_graph/"  # save and load path
save_image: 1                   # save image in pose graph for visualization purpose; you can close this function by setting 0 

As the code shows, this ROS node subscribes to the IMU data, the feature-point data, the images from both cameras, and a few control topics; when data arrives, the corresponding callback (imu_callback, feature_callback, and so on) processes it (the block of n.subscribe calls in main()).

The std::thread sync_thread{sync_process} line creates a separate thread that synchronizes the images from the stereo camera's left and right "eyes": sync_process checks the two image buffers (img0_buf and img1_buf) and processes pairs whose timestamps are close enough. (A condensed Python version of this pairing logic follows.)
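The same pairing logic, condensed into Python for clarity (a sketch of the idea, not a drop-in replacement):

Python
from collections import deque, namedtuple

TOL = 0.003   # 3 ms sync tolerance, as in sync_process()

def pair_stereo(buf0, buf1):
    """Pop and return one (img0, img1) pair whose timestamps differ by < TOL,
    discarding whichever head frame is older until the heads line up."""
    while buf0 and buf1:
        t0, t1 = buf0[0].stamp, buf1[0].stamp
        if t0 < t1 - TOL:
            buf0.popleft()              # img0 too old, throw it away
        elif t0 > t1 + TOL:
            buf1.popleft()              # img1 too old
        else:
            return buf0.popleft(), buf1.popleft()
    return None

Frame = namedtuple("Frame", "stamp data")
left = deque([Frame(0.000, "L0"), Frame(0.010, "L1")])
right = deque([Frame(0.009, "R0")])
print(pair_stereo(left, right))   # pairs L1 with R0 after dropping L0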

Converting VINS-Fusion's Output into the Format PX4 Expects

Python
import rospy
from geometry_msgs.msg import PoseStamped, Point
from nav_msgs.msg import Odometry
import math
from pyquaternion import Quaternion
import tf
import sys

vehicle_type = sys.argv[1]
vehicle_id = sys.argv[2]
local_pose = PoseStamped()
local_pose.header.frame_id = 'world'
quaternion = tf.transformations.quaternion_from_euler(0, -math.pi/2, math.pi/2)
q = Quaternion([quaternion[3],quaternion[0],quaternion[1],quaternion[2]])

def vins_callback(data):    
    local_pose.pose.position.x = data.pose.pose.position.x
    local_pose.pose.position.y = data.pose.pose.position.y
    local_pose.pose.position.z = data.pose.pose.position.z
    q_= Quaternion([data.pose.pose.orientation.w,data.pose.pose.orientation.x,data.pose.pose.orientation.y,data.pose.pose.orientation.z])
    q_ = q_*q
    local_pose.pose.orientation.w = q_[0]
    local_pose.pose.orientation.x = q_[1]
    local_pose.pose.orientation.y = q_[2]
    local_pose.pose.orientation.z = q_[3]

rospy.init_node(vehicle_type+"_"+vehicle_id+'_vins_transfer')
rospy.Subscriber("/vins_estimator/camera_pose", Odometry, vins_callback,queue_size=1)
position_pub = rospy.Publisher(vehicle_type+"_"+vehicle_id+"/mavros/vision_pose/pose", PoseStamped, queue_size=1)
rate = rospy.Rate(30) 

while not rospy.is_shutdown():
    if (local_pose.pose.position == Point()):
        continue
    else:
        print("Vins pose received")
        local_pose.header.stamp = rospy.Time.now()
        position_pub.publish(local_pose) 
    try:
        rate.sleep()
    except:
        continue

This forwards the visual localization to PX4 via MAVROS. Running

Bash
rostopic echo /iris_0/mavros/local_position/pose
then shows the vision-derived pose being passed to PX4:
YAML
header: 
  seq: 69
  stamp: 
    secs: 1844
    nsecs: 205000000
  frame_id: "map"
pose: 
  position: 
    x: -0.00769823836163
    y: -0.00358746759593
    z: -0.00920653529465
  orientation: 
    x: 0.00903992677157
    y: -0.00832445364273
    z: -0.000201883404367
    w: -0.999924540027

Dense 3D Reconstruction of the Sparse VIO Output with RTABMap

RTAB-Map is a SLAM package available in ROS; its system diagram is shown below.

(figure)

RTAB-Map is a SLAM framework that solves the SLAM problem through optimization. It accepts RGB-D input and outputs the pose together with several map types: 2D occupancy grids, 3D occupancy maps, and point clouds. It also features a hierarchical memory-management scheme that models incoming frames as a graph and budgets the computation, keeping the mapping workload within what the device can handle; this makes it usable in large-scale environments.

Its launch file is as follows:

XML
<launch>
    <!-- <node pkg="tf" type="static_transform_publisher" name="iris_0_camera_link_to_depth"
    args="0.0 0.0 0 0.0 0.0 0.0 /camera_link /iris_0/depth_camera_base 40" /> -->
    <node pkg="tf" type="static_transform_publisher" name="iris_0_camera_link_to_depth"
    args="0.0 0.0 0 0.0 0.0 0.0 /camera_link depth_camera_base 40" />
    <arg name="clip_distance"             default="-2"/>
    <arg name="use_rviz"                  default="true"/>
    <arg name="use_rtabmapviz"            default="false"/>
    <include file="$(find rtabmap_ros)/launch/rtabmap.launch">
            <arg name="rtabmap_args"       value="--delete_db_on_start"/>
            <arg name="depth_topic"        value="/iris_0/realsense/depth_camera/depth/image_raw"/>
            <arg name="frame_id"           value="camera_link"/>
            <arg name="visual_odometry"    value="false"/>
            <arg name="odom_topic"         value="/vins_estimator/odometry"/>
            <arg name="rgb_topic"          value="/iris_0/realsense/depth_camera/color/image_raw"/>
            <arg name="camera_info_topic"  value="/iris_0/realsense/depth_camera/color/camera_info"/>
            <arg name="queue_size"         value="200"/>
            <arg name="rviz"               value="$(arg use_rviz)"/>
            <arg name="rtabmapviz"         value="$(arg use_rtabmapviz)"/>
    </include>
</launch>

Its main job is to specify where the depth images for RTAB-Map's 3D reconstruction come from.

RTAB-Map publishes its mapping results as subtopics of /rtabmap, including the octree-based 3D map (/rtabmap/octomap_binary) and the 2D occupancy grid (/rtabmap/octomap_grid).

An octree is a tree data structure commonly used to partition 3D space by recursively subdividing it into eight octants. With an octree, large sparse point clouds can be stored compactly and queried with fast set operations.

Path Planning on the 3D Map with ego_planner

Paper: EGO-Planner: An ESDF-free Gradient-based Local Planner for Quadrotors

GitHub repository: https://github.com/ZJU-FAST-Lab/ego-planner

Conventional gradient-based planners must first build an ESDF map, yet constructing it takes about 70% of the planner's total runtime, which limits such methods on resource-constrained platforms.

EGO-Planner consists mainly of a gradient-based spline optimizer and a refinement stage. Its key features:

  1. It optimizes trajectories with smoothness, collision, and dynamic-feasibility terms. The collision term compares trajectory segments inside obstacles against a collision-free guide path and uses gradient information to pull the colliding segments out of the obstacles, so the algorithm only needs obstacle gradients at the collision locations.
  2. When a trajectory segment is dynamically infeasible, the refinement stage is activated: the time allocated to that segment is increased. The newly generated B-spline balances dynamic feasibility against fidelity to the previous, infeasible trajectory, with different fitting penalties in the axial and radial directions for robustness. (A minimal B-spline evaluation sketch follows this list.)
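For reference, a uniform cubic B-spline like the ones EGO-Planner optimizes can be evaluated from four consecutive control points with a fixed basis matrix. A minimal sketch of mine (illustrative control points):

Python
import numpy as np

# Uniform cubic B-spline basis: each segment is a cubic blend of 4 control
# points; EGO-Planner optimizes these control points directly.
M = (1.0 / 6.0) * np.array([[ 1,  4,  1, 0],
                            [-3,  0,  3, 0],
                            [ 3, -6,  3, 0],
                            [-1,  3, -3, 1]])

def bspline_point(ctrl, i, u):
    """Position on segment i at local parameter u in [0, 1);
    ctrl is an (N, 3) array of control points with N >= i + 4."""
    U = np.array([1.0, u, u * u, u ** 3])
    return U @ M @ ctrl[i:i + 4]

ctrl = np.array([[0, 0, 1], [1, 0, 1], [2, 1, 1.5], [3, 1, 1.5], [4, 0, 1]], float)
print(bspline_point(ctrl, 0, 0.5))   # a point on the first segment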

ego_planner takes depth images plus camera poses as its planning input; the depth images come from the realsense_camera, and the camera pose is computed by the VIO.

The first step is rotating the camera pose into the right frame:

Python
import rospy
from geometry_msgs.msg import PoseStamped
import math
from pyquaternion import Quaternion
import tf
import sys

vehicle_type = sys.argv[1]
vehicle_id = sys.argv[2]
local_pose = PoseStamped()
local_pose.header.frame_id = 'world'
quaternion = tf.transformations.quaternion_from_euler(-math.pi/2, 0, -math.pi/2)
q = Quaternion([quaternion[3],quaternion[0],quaternion[1],quaternion[2]])

def vision_callback(data):    
    local_pose.pose.position.x = data.pose.position.x
    local_pose.pose.position.y = data.pose.position.y
    local_pose.pose.position.z = data.pose.position.z
    q_= Quaternion([data.pose.orientation.w,data.pose.orientation.x,data.pose.orientation.y,data.pose.orientation.z])
    q_ = q_*q
    local_pose.pose.orientation.w = q_[0]
    local_pose.pose.orientation.x = q_[1]
    local_pose.pose.orientation.y = q_[2]
    local_pose.pose.orientation.z = q_[3]

rospy.init_node(vehicle_type+"_"+vehicle_id+'_ego_transfer')
rospy.Subscriber(vehicle_type+"_"+vehicle_id+"/mavros/vision_pose/pose", PoseStamped, vision_callback,queue_size=1)
position_pub = rospy.Publisher(vehicle_type+"_"+vehicle_id+"/camera_pose", PoseStamped, queue_size=1)
rate = rospy.Rate(60) 

while not rospy.is_shutdown():
    local_pose.header.stamp = rospy.Time.now()
    position_pub.publish(local_pose) 
    rate.sleep()

This node subscribes to the vision pose topic (the one already forwarded to PX4), applies the fixed rotation, and republishes the result as the camera pose that ego_planner consumes.
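To sanity-check what that fixed quaternion does, one can rotate the camera-frame basis vectors with scipy (tf's default 'sxyz' static-axes convention matches scipy's extrinsic "xyz"):

Python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Same angles as tf's quaternion_from_euler(-pi/2, 0, -pi/2) in the script above.
rot = R.from_euler("xyz", [-np.pi / 2, 0.0, -np.pi / 2])

for name, axis in [("camera x (right)  ", [1, 0, 0]),
                   ("camera y (down)   ", [0, 1, 0]),
                   ("camera z (forward)", [0, 0, 1])]:
    print(name, "->", np.round(rot.apply(axis), 3))
# right -> -y, down -> -z, forward -> +x: the optical frame becomes FLU.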

The launch files involved in starting ego_planner are:

XML
<launch>
    <arg name="map_size_x" value="100"/>
    <arg name="map_size_y" value="100"/>
    <arg name="map_size_z" value=" 5"/>

    <arg name="odom_topic" value="vins_estimator/odometry" />

    <node pkg="tf" type="static_transform_publisher" name="iris_0_map_to_world"
    args="0.0 0.0 0 0.0 0.0 0.0 /map /world 40" />
    <node pkg="tf" type="static_transform_publisher" name="iris_0_world_to_ground_plane"
    args="0.0 0.0 0 0.0 0.0 0.0 /world /ground_plane 40" />

    <include file="$(find ego_planner)/launch/run_in_xtdrone.launch">
        <arg name="drone_id" value="0"/>
        <!-- 
            <arg name="target_x" value="5.2"/>
            <arg name="target_y" value="-13"/>
            <arg name="target_z" value="1.0"/> -->

        <arg name="map_size_x" value="$(arg map_size_x)"/>
        <arg name="map_size_y" value="$(arg map_size_y)"/>
        <arg name="map_size_z" value="$(arg map_size_z)"/>
        <arg name="odom_topic" value="$(arg odom_topic)"/>
    </include>
</launch>
XML
<launch>
<!-- size of map, change the size inflate x, y, z according to your application -->
<arg name="map_size_x"/>
<arg name="map_size_y"/>
<arg name="map_size_z"/>
<arg name="target_x"/>
<arg name="target_y"/>
<arg name="target_z"/>

<arg name="drone_id"/>

<!-- topic of your odometry such as VIO or LIO -->
<arg name="odom_topic"/>

<!-- number of moving objects -->
<arg name="obj_num" value="1" />

<!-- main algorithm params -->
<include file="$(find ego_planner)/launch/advanced_param_xtdrone.xml">

    <arg name="drone_id" value="$(arg drone_id)"/>

    <arg name="map_size_x_" value="$(arg map_size_x)"/>
    <arg name="map_size_y_" value="$(arg map_size_y)"/>
    <arg name="map_size_z_" value="$(arg map_size_z)"/>
    <arg name="odometry_topic" value="$(arg odom_topic)"/>

    <arg name="obj_num_set" value="$(arg obj_num)" />

    <!-- camera pose: transform of camera frame in the world frame -->
    <!-- depth topic: depth image, 640x480 by default -->
    <!-- don't set cloud_topic if you already set these ones! -->
    <arg name="camera_pose_topic" value="camera_pose"/>
    <arg name="depth_topic" value="realsense/depth_camera/depth/image_raw"/>

    <!-- topic of point cloud measurement, such as from LIDAR  -->
    <!-- don't set camera pose and depth, if you already set this one! -->
    <arg name="cloud_topic" value="pcl_render_node/points"/>

    <!-- intrinsic params of the depth camera -->
    <arg name="cx" value="320.5"/>
    <arg name="cy" value="240.5"/>
    <arg name="fx" value="554.254691191187"/>
    <arg name="fy" value="554.254691191187"/>

    <!-- maximum velocity and acceleration the drone will reach -->
    <arg name="max_vel" value="1" />
    <!-- <arg name="max_vel" value="0.5" /> -->
    <arg name="max_acc" value="2" />
    <!-- <arg name="max_acc" value="0.5" /> -->

    <!--always set to 1.5 times greater than the sensing horizon-->
    <arg name="planning_horizon" value="7.5" /> 

    <arg name="use_distinctive_trajs" value="true" />

    <!-- 1: use 2D Nav Goal to select goal  -->
    <!-- 2: use global waypoints below  -->
    <arg name="flight_type" value="1" />

    <!-- global waypoints -->
    <!-- It generates a piecewise min-snap traj passing all waypoints -->
    <arg name="point_num" value="1" />

    <!-- <arg name="point0_x" value="$(arg target_x)" />
    <arg name="point0_y" value="$(arg target_y)" />
    <arg name="point0_z" value="$(arg target_z)" /> -->

    <arg name="point0_x" value="38"/>
    <arg name="point0_y" value="5"/>
    <arg name="point0_z" value="1.5"/>


</include>
<!-- trajectory server -->
<node pkg="ego_planner" name="iris_$(arg drone_id)_traj_server" type="traj_server" output="screen">
    <remap from="position_cmd" to="/xtdrone/iris_$(arg drone_id)/planning/pos_cmd"/>
    <remap from="pose_cmd" to="/xtdrone/iris_$(arg drone_id)/cmd_pose_enu"/>
    <remap from="~planning/bspline" to="/xtdrone/iris_$(arg drone_id)/planning/bspline"/>

    <param name="traj_server/time_forward" value="1.0" type="double"/>
</node>
</launch>
XML
<launch>
<arg name="map_size_x_"/>
<arg name="map_size_y_"/>
<arg name="map_size_z_"/>

<arg name="odometry_topic"/>
<arg name="camera_pose_topic"/>
<arg name="depth_topic"/>
<arg name="cloud_topic"/>

<arg name="cx"/>
<arg name="cy"/>
<arg name="fx"/>
<arg name="fy"/>

<arg name="max_vel"/>
<arg name="max_acc"/>
<arg name="planning_horizon"/>

<arg name="point_num"/>
<arg name="point0_x"/>
<arg name="point0_y"/>
<arg name="point0_z"/>
<!-- <arg name="point1_x"/>
<arg name="point1_y"/>
<arg name="point1_z"/>
<arg name="point2_x"/>
<arg name="point2_y"/>
<arg name="point2_z"/>
<arg name="point3_x"/>
<arg name="point3_y"/>
<arg name="point3_z"/>
<arg name="point4_x"/>
<arg name="point4_y"/>
<arg name="point4_z"/> -->

<arg name="flight_type"/>
<arg name="use_distinctive_trajs"/>

<arg name="obj_num_set"/>

<arg name="drone_id"/>


<!-- main node -->
<!-- <node pkg="ego_planner" name="ego_planner_node" type="ego_planner_node" output="screen" launch-prefix="valgrind"> -->
<node pkg="ego_planner" name="iris_$(arg drone_id)_ego_planner_node" type="ego_planner_node" output="screen">

    <remap from="~odom_world" to="/$(arg odometry_topic)"/>
    <remap from="~planning/bspline" to = "/xtdrone/iris_$(arg drone_id)/planning/bspline"/>
    <remap from="~planning/data_display" to = "/xtdrone/iris_$(arg drone_id)/planning/data_display"/>
    <remap from="~planning/broadcast_bspline_from_planner" to = "/broadcast_bspline"/>
    <remap from="~planning/broadcast_bspline_to_planner" to = "/broadcast_bspline"/>

    <remap from="~grid_map/odom" to="/xtdrone/iris_$(arg drone_id)/$(arg odometry_topic)"/>
    <remap from="~grid_map/cloud" to="/iris_$(arg drone_id)/$(arg cloud_topic)"/>
    <remap from="~grid_map/pose"   to = "/iris_$(arg drone_id)/$(arg camera_pose_topic)"/> 
    <remap from="~grid_map/depth" to = "/iris_$(arg drone_id)/$(arg depth_topic)"/>


    <!-- planning fsm -->
    <param name="fsm/flight_type" value="$(arg flight_type)" type="int"/>
    <param name="fsm/thresh_replan_time" value="1.0" type="double"/>
    <param name="fsm/thresh_no_replan_meter" value="1.0" type="double"/>
    <param name="fsm/planning_horizon" value="$(arg planning_horizon)" type="double"/> <!--always set to 1.5 times grater than sensing horizen-->
    <param name="fsm/planning_horizen_time" value="3" type="double"/>
    <param name="fsm/emergency_time" value="1.0" type="double"/>
    <param name="fsm/realworld_experiment" value="false"/>
    <param name="fsm/fail_safe" value="true"/>

    <param name="fsm/waypoint_num" value="$(arg point_num)" type="int"/>
    <param name="fsm/waypoint0_x" value="$(arg point0_x)" type="double"/>
    <param name="fsm/waypoint0_y" value="$(arg point0_y)" type="double"/>
    <param name="fsm/waypoint0_z" value="$(arg point0_z)" type="double"/>
    <!-- <param name="fsm/waypoint1_x" value="$(arg point1_x)" type="double"/>
    <param name="fsm/waypoint1_y" value="$(arg point1_y)" type="double"/>
    <param name="fsm/waypoint1_z" value="$(arg point1_z)" type="double"/>
    <param name="fsm/waypoint2_x" value="$(arg point2_x)" type="double"/>
    <param name="fsm/waypoint2_y" value="$(arg point2_y)" type="double"/>
    <param name="fsm/waypoint2_z" value="$(arg point2_z)" type="double"/>
    <param name="fsm/waypoint3_x" value="$(arg point3_x)" type="double"/>
    <param name="fsm/waypoint3_y" value="$(arg point3_y)" type="double"/>
    <param name="fsm/waypoint3_z" value="$(arg point3_z)" type="double"/>
    <param name="fsm/waypoint4_x" value="$(arg point4_x)" type="double"/>
    <param name="fsm/waypoint4_y" value="$(arg point4_y)" type="double"/>
    <param name="fsm/waypoint4_z" value="$(arg point4_z)" type="double"/> -->

    <param name="grid_map/resolution"      value="0.1" /> 
    <param name="grid_map/map_size_x"   value="$(arg map_size_x_)" /> 
    <param name="grid_map/map_size_y"   value="$(arg map_size_y_)" /> 
    <param name="grid_map/map_size_z"   value="$(arg map_size_z_)" /> 
    <param name="grid_map/local_update_range_x"  value="5.5" /> 
    <param name="grid_map/local_update_range_y"  value="5.5" /> 
    <param name="grid_map/local_update_range_z"  value="4.5" /> 
    <param name="grid_map/obstacles_inflation"     value="0.3" /> 
    <param name="grid_map/local_map_margin" value="10"/>
    <param name="grid_map/ground_height"        value="-1"/>
    <!-- camera parameter -->
    <param name="grid_map/cx" value="$(arg cx)"/>
    <param name="grid_map/cy" value="$(arg cy)"/>
    <param name="grid_map/fx" value="$(arg fx)"/>
    <param name="grid_map/fy" value="$(arg fy)"/>
    <!-- depth filter -->
    <param name="grid_map/use_depth_filter" value="true"/>
    <param name="grid_map/depth_filter_tolerance" value="0.15"/>
    <param name="grid_map/depth_filter_maxdist"   value="5.0"/>
    <param name="grid_map/depth_filter_mindist"   value="0.5"/>
    <param name="grid_map/depth_filter_margin"    value="2"/>
    <param name="grid_map/k_depth_scaling_factor" value="1000.0"/>
    <param name="grid_map/skip_pixel" value="2"/>
    <!-- local fusion -->
    <param name="grid_map/p_hit"  value="0.65"/>
    <param name="grid_map/p_miss" value="0.35"/>
    <param name="grid_map/p_min"  value="0.12"/>
    <param name="grid_map/p_max"  value="0.90"/>
    <param name="grid_map/p_occ"  value="0.80"/>
    <param name="grid_map/min_ray_length" value="0.1"/>
    <param name="grid_map/max_ray_length" value="4.5"/>

    <param name="grid_map/virtual_ceil_height"   value="3"/>
    <param name="grid_map/visualization_truncate_height"   value="2"/>
    <param name="grid_map/show_occ_time"  value="false"/>
    <param name="grid_map/pose_type"     value="1"/>  
    <param name="grid_map/frame_id"      value="world"/>

<!-- planner manager -->
    <param name="manager/max_vel" value="$(arg max_vel)" type="double"/>
    <param name="manager/max_acc" value="$(arg max_acc)" type="double"/>
    <param name="manager/max_jerk" value="4" type="double"/>
    <param name="manager/control_points_distance" value="0.4" type="double"/>
    <param name="manager/feasibility_tolerance" value="0.05" type="double"/>
    <param name="manager/planning_horizon" value="$(arg planning_horizon)" type="double"/>
    <param name="manager/use_distinctive_trajs" value="$(arg use_distinctive_trajs)" type="bool"/>
    <param name="manager/drone_id" value="$(arg drone_id)"/>

<!-- trajectory optimization -->
    <param name="optimization/lambda_smooth" value="1.0" type="double"/>
    <param name="optimization/lambda_collision" value="0.5" type="double"/>
    <param name="optimization/lambda_feasibility" value="0.1" type="double"/>
    <param name="optimization/lambda_fitness" value="1.0" type="double"/>
    <param name="optimization/dist0" value="0.5" type="double"/>
    <param name="optimization/swarm_clearance" value="0.5" type="double"/>
    <param name="optimization/max_vel" value="$(arg max_vel)" type="double"/>
    <param name="optimization/max_acc" value="$(arg max_acc)" type="double"/>

    <param name="bspline/limit_vel" value="$(arg max_vel)" type="double"/>
    <param name="bspline/limit_acc" value="$(arg max_acc)" type="double"/>
    <param name="bspline/limit_ratio" value="1.1" type="double"/>

<!-- objects prediction -->
    <param name="prediction/obj_num" value="$(arg obj_num_set)" type="int"/>
    <param name="prediction/lambda" value="1.0" type="double"/>
    <param name="prediction/predict_rate" value="1.0" type="double"/>

</node>

</launch>

These launch files define many planning-related parameters; tuning them should yield better planning behavior and eventually enable fully automatic exploration and mapping toward a single distant goal. That step, however, would require more time digging into ego_planner's internals than I had, so it remains unfinished.