
Yahboom microROS car, native Ubuntu support series, Part 27: Controlling the car with your palm

1. Background

This section is similar to the previous test: Yahboom microROS car, native Ubuntu support series, Part 26: basic car motion with hand gestures (CSDN blog).

Both are based on MediaPipe Hands for palm and finger recognition.

For clarity, here is the hand landmark layout again. The palm position is identified by landmark 9.
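As a hedged sketch (not the library API itself): MediaPipe Hands numbers its 21 landmarks 0 through 20, and index 9 is MIDDLE_FINGER_MCP, the base knuckle of the middle finger, which this tutorial treats as the "palm position". The `HandDetector.findHands()` helper used in this series is assumed to return landmark entries shaped `[id, pixel_x, pixel_y]`:

```python
def palm_point(lmList):
    """Return the (x, y) pixel position of landmark 9, or None if absent.

    Assumes lmList entries look like [id, pixel_x, pixel_y], as the
    HandDetector helper in this tutorial series produces.
    """
    for lm in lmList:
        if lm[0] == 9:  # 9 = MIDDLE_FINGER_MCP in MediaPipe Hands
            return lm[1], lm[2]
    return None

# e.g. a hand whose middle-finger knuckle sits at pixel (295, 120):
sample = [[0, 320, 400], [9, 295, 120]]
print(palm_point(sample))  # -> (295, 120)
```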

2. Program Description

This demo may run very sluggishly on the robot's main controller. Once the palm is detected, it helps to prop the car up off the ground for testing; the results are better that way.

The car drives the chassis according to where the palm appears in the frame:

Palm in the upper part of the frame -> car moves forward

Palm in the lower part of the frame -> car moves backward

Palm on the left side of the frame -> car strafes left

Palm on the right side of the frame -> car strafes right
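The mapping above can be sketched as a small pure-Python function using the same pixel thresholds as the full program later in this post (the frame is resized to 640x480, and in ROS a positive linear.y is leftward):

```python
def palm_to_twist(point_x, point_y):
    """Map the pixel position of landmark 9 to chassis velocities.

    Thresholds match the tutorial code for a 640x480 frame; note the
    dead bands (150..270 vertically, 300..350 horizontally) where the
    car holds still.
    """
    # vertical position (0 = top of frame) -> forward/backward (linear.x)
    if point_y >= 270:
        x = -0.2   # palm low in the frame -> back up
    elif point_y <= 150:
        x = 0.2    # palm high in the frame -> go forward
    else:
        x = 0.0
    # horizontal position -> lateral motion (linear.y; +y is left)
    if point_x >= 350:
        y = -0.2   # palm on the right -> strafe right
    elif point_x <= 300:
        y = 0.2    # palm on the left -> strafe left
    else:
        y = 0.0
    return x, y

print(palm_to_twist(295, 120))  # -> (0.2, 0.2), as in the first log line
```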

3. Launch Commands

Start the car agent and the image agent:

sudo docker run -it --rm -v /dev:/dev -v /dev/shm:/dev/shm --privileged --net=host microros/micro-ros-agent:humble udp4 --port 8090 -v4

sudo docker run -it --rm -v /dev:/dev -v /dev/shm:/dev/shm --privileged --net=host microros/micro-ros-agent:humble udp4 --port 9999 -v4

Then, in a terminal, run:

ros2 run yahboom_esp32ai_car RobotCtrl 

The directions are not easy to tell apart, so I roughly sketched an annotated picture with a paint tool. You can tune the decision thresholds by adjusting the x and y values yourself. A sample run log follows; in it, "x" is the pixel x-coordinate of landmark 9 and "value" is the velocity pair published to the chassis.

ohu@bohu-TM1701:~/yahboomcar/yahboomcar_ws$ ros2 run yahboom_esp32ai_car RobotCtrl 
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1739176022.123238  464141 gl_context_egl.cc:85] Successfully initialized EGL. Major : 1 Minor: 5
I0000 00:00:1739176022.187429  464194 gl_context.cc:369] GL version: 3.2 (OpenGL ES 3.2 Mesa 23.2.1-1ubuntu3.1~22.04.3), renderer: Mesa Intel(R) UHD Graphics 620 (KBL GT2)
start it
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
W0000 00:00:1739176026.985982  464181 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1739176027.850109  464183 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1739176027.890610  464180 landmark_projection_calculator.cc:186] Using NORM_RECT without IMAGE_DIMENSIONS is only supported for the square ROI. Provide IMAGE_DIMENSIONS or use PROJECTION_MATRIX.
x 295
value: x:0.2,y:0.2
x 277
value: x:0.2,y:0.2
x 280
value: x:0.2,y:0.2
x 612
value: x:0.0,y:-0.2
x 622
value: x:0.0,y:-0.2
x 622
value: x:0.0,y:-0.2
x 614
value: x:0.0,y:-0.2
x 620
value: x:0.0,y:-0.2
x 607
value: x:0.0,y:-0.2
x 588
value: x:0.0,y:-0.2
x 586
value: x:0.0,y:-0.2
x 569
value: x:0.0,y:-0.2
x 562
value: x:0.0,y:-0.2
x 564
value: x:0.0,y:-0.2
x 550
value: x:0.0,y:-0.2
x 537
value: x:0.0,y:-0.2
x 463
value: x:0.0,y:-0.2
x 384
value: x:0.0,y:-0.2
x 313
value: x:0.0,y:0.0
x 262
value: x:0.0,y:0.2
x 231
value: x:0.0,y:0.2

Code

#!/usr/bin/env python3
# encoding: utf-8
import threading
import cv2 as cv
import numpy as np
from yahboom_esp32ai_car.media_library import *
import time
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32, Bool, UInt16
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, CompressedImage
from rclpy.time import Time
import datetime


class HandCtrlArm(Node):
    def __init__(self, name):
        super().__init__(name)
        self.pub_Servo1 = self.create_publisher(Int32, "servo_s1", 10)
        self.pub_Servo2 = self.create_publisher(Int32, "servo_s2", 10)
        self.PWMServo_X = 0
        self.PWMServo_Y = 45
        self.s1_init_angle = Int32()
        self.s1_init_angle.data = self.PWMServo_X
        self.s2_init_angle = Int32()
        self.s2_init_angle.data = self.PWMServo_Y
        self.media_ros = Media_ROS()
        # make sure the camera gimbal starts centered
        for i in range(10):
            self.pub_Servo1.publish(self.s1_init_angle)
            self.pub_Servo2.publish(self.s2_init_angle)
            time.sleep(0.1)
        self.hand_detector = HandDetector()
        self.arm_status = True
        self.locking = True
        self.init = True
        self.pTime = 0
        self.add_lock = self.remove_lock = 0
        self.event = threading.Event()
        self.event.set()

    def process(self, frame):
        frame, lmList, bbox = self.hand_detector.findHands(frame)
        if len(lmList) != 0:
            threading.Thread(target=self.arm_ctrl_threading, args=(lmList, bbox)).start()
        else:
            # no hand in view: stop the chassis
            self.media_ros.pub_vel(0.0, 0.0, 0.0)
        self.media_ros.pub_imgMsg(frame)
        return frame

    def arm_ctrl_threading(self, lmList, bbox):
        if self.event.is_set():
            self.event.clear()
            fingers = self.hand_detector.fingersUp(lmList)
            self.hand_detector.draw = True
            # gesture = self.hand_detector.get_gesture(lmList)
            self.arm_status = False
            point_x = lmList[9][1]
            point_y = lmList[9][2]
            print("x", point_x)
            # vertical position of landmark 9 -> forward/backward (linear.x)
            if point_y >= 270:
                x = -0.2
            elif point_y <= 150:
                x = 0.2
            else:
                x = 0.0
            # horizontal position of landmark 9 -> lateral motion (linear.y)
            if point_x >= 350:
                y = -0.2
            elif point_x <= 300:
                y = 0.2
            else:
                y = 0.0
            self.media_ros.pub_vel(x, 0.0, y)
            print(f'value: x:{x},y:{y}')
            self.arm_status = True
            self.event.set()


class MY_Picture(Node):
    def __init__(self, name):
        super().__init__(name)
        self.bridge = CvBridge()
        # images published by the ESP32 camera
        self.sub_img = self.create_subscription(
            CompressedImage, '/espRos/esp32camera', self.handleTopic, 1)
        self.handctrlarm = HandCtrlArm('handctrl')
        self.last_stamp = None
        self.new_seconds = 0
        self.fps_seconds = 1

    def handleTopic(self, msg):
        self.last_stamp = msg.header.stamp
        if self.last_stamp:
            total_secs = Time(nanoseconds=self.last_stamp.nanosec,
                              seconds=self.last_stamp.sec).nanoseconds
            delta = datetime.timedelta(seconds=total_secs * 1e-9)
            seconds = delta.total_seconds() * 100
            if self.new_seconds != 0:
                self.fps_seconds = seconds - self.new_seconds
            self.new_seconds = seconds  # keep this frame's timestamp
        start = time.time()
        frame = self.bridge.compressed_imgmsg_to_cv2(msg)
        frame = cv.resize(frame, (640, 480))
        action = cv.waitKey(1) & 0xFF
        frame = self.handctrlarm.process(frame)
        if action == ord('q'):
            self.handctrlarm.media_ros.cancel()
        end = time.time()
        fps = 1 / ((end - start) + self.fps_seconds)
        text = "FPS : " + str(int(fps))
        cv.putText(frame, text, (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2)
        cv.imshow('frame', frame)


def main():
    rclpy.init()
    esp_img = MY_Picture("My_Picture")
    print("start it")
    try:
        rclpy.spin(esp_img)
    except KeyboardInterrupt:
        pass
    finally:
        esp_img.destroy_node()
        rclpy.shutdown()
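One detail worth noting in arm_ctrl_threading: the threading.Event doubles as a busy flag, so a frame that arrives while a control step is still running is dropped rather than queued behind it. (Strictly, the is_set()/clear() pair is not atomic, but at camera frame rates it works in practice.) A minimal sketch of that pattern, with a hypothetical control_step standing in for the real callback:

```python
import threading

# The event starts "set" (idle). A step clears it while working and sets it
# again when done; any step arriving in between is simply dropped.
event = threading.Event()
event.set()
handled = []

def control_step(tag):
    if not event.is_set():    # a previous step is still in flight -> drop
        return False
    event.clear()
    handled.append(tag)       # ...compute and publish velocities here...
    event.set()
    return True

event.clear()                 # simulate a step still running
print(control_step("B"))      # -> False: B is dropped, not queued
event.set()                   # the running step finishes
print(control_step("C"))      # -> True
print(handled)                # -> ['C']
```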

Main logic: what we actually use is the coordinate of landmark 9, the first knuckle of the middle finger. By checking where that point sits in the frame and publishing the corresponding x and y velocities to the chassis, we get full control of the car.

For reference, here is the node communication graph.

