26 Commits

Author SHA1 Message Date
5e01e25d9c comment update 2025-01-06 16:40:40 +01:00
0d184261fd updated systemd kobukidriver service file 2025-01-06 16:40:00 +01:00
ccaa722973 edited startup file for kobukidriver (everything works now) 2025-01-06 16:31:52 +01:00
7d1b878c30 fix yolo image boxes 2025-01-06 16:11:03 +01:00
228c508012 attempt to fix broken code 2025-01-06 16:02:17 +01:00
7845feb9f8 update yolo naming in image 2025-01-06 15:55:24 +01:00
20d6d8799d attempt to show name next to image box 2025-01-06 15:45:29 +01:00
9c7c774030 change boxing and text error of YOLO 2025-01-06 15:26:00 +01:00
0832da0d3b change mqtt port in python 2025-01-06 15:16:13 +01:00
a59b9c8714 requirements update 2025-01-06 15:11:27 +01:00
4a05ec5efc added dockerfile 2025-01-06 15:11:21 +01:00
c3d575ccf1 added yolo image object detection to /image 2025-01-06 13:00:00 +01:00
1b0b1e87ce change directories and added driver service config 2025-01-06 11:44:08 +01:00
9c41d64c69 remove mqtt debug print 2025-01-06 11:23:27 +01:00
b48eda9735 change image refresh time 2025-01-06 10:35:54 +01:00
629f9cba92 attempt to fix javascript 2025-01-06 10:34:00 +01:00
51aad34c78 added bracket 2025-01-06 10:08:05 +01:00
1b3fead2b3 documentation 2025-01-06 09:49:56 +01:00
ishak (jmilou.ishak) 61651a9a02 try to see why sensor value doesnt show on website 2024-12-18 12:22:22 +01:00
ishak (jmilou.ishak) e5881f1b37 making message print only once 2024-12-18 11:56:43 +01:00
50b6b83299 fix headerfile 2024-12-17 11:19:15 -04:00
10597ba37d avoid busy waiting 2024-12-17 11:17:25 -04:00
6ab5716797 Documentation kobuki, mqtt, opencv 2024-12-13 18:33:24 +01:00
50b90e461f gitignore update 2024-12-13 10:52:53 +01:00
2811036595 change port of mqtt flask 2024-12-12 14:01:39 +01:00
58f1a931a6 Merge branch 'OpenCV' into 'main'
Open cv

See merge request technische-informatica-sm3/ti-projectten/rooziinuubii79!3
2024-12-12 13:48:59 +01:00
22 changed files with 327 additions and 39 deletions

1 .gitignore vendored

@@ -32,3 +32,4 @@ Makefile
CMakeCache.txt
cmake_install.cmake
src/C++/OpenCV/main
.vs


@@ -0,0 +1,51 @@
# Systemd Services
# What is a service
A service is a program or script that runs in the background and is managed by the system. Services are started at boot time and run until the system is shut down. Services can be started, stopped, and restarted by the system administrator.
# How to manage services with systemd
## Starting a service
To start a service, use the `systemctl start` command followed by the service name. For example, to start the `apache2` service, use the following command:
```bash
sudo systemctl start apache2
```
## Stopping a service
To stop a service, use the `systemctl stop` command followed by the service name. For example, to stop the `apache2` service, use the following command:
```bash
sudo systemctl stop apache2
```
## Restarting a service
To restart a service, use the `systemctl restart` command followed by the service name. For example, to restart the `apache2` service, use the following command:
```bash
sudo systemctl restart apache2
```
## Enabling a service
To enable a service to start at boot time, use the `systemctl enable` command followed by the service name. For example, to enable the `apache2` service, use the following command:
```bash
sudo systemctl enable apache2
```
## Creating a new service
To create a new service, you need to create a new service file in the `/etc/systemd/system/` directory. The service file should have a `.service` extension and contain the following sections:
### Example service file:
```ini
[Unit]
# Description of the service (systemd does not allow comments on the same line as a value)
Description=FlaskApp
# Start the service after the network is up
After=network.target

[Service]
# Run the service as a specific user
User=ishak
WorkingDirectory=/home/ishak/rooziinuubii79/src/Python/flask/web/
# Command used to start the service
ExecStart=/usr/bin/gunicorn -w 3 -b 127.0.0.1:5000 app:app

[Install]
# Required so the service can be enabled to start at boot
WantedBy=multi-user.target
```
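After creating or changing a service file, run `sudo systemctl daemon-reload` so systemd picks up the new unit. It can then be started and enabled by its unit name as shown above, for example `sudo systemctl enable FlaskApp` followed by `sudo systemctl start FlaskApp` (the unit name depends on what the file is called, e.g. `FlaskApp.service`).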

50 docs/code/Mqtt.md Normal file

@@ -0,0 +1,50 @@
# MQTT
## What is MQTT?
MQTT is a lightweight messaging protocol made for IoT devices. It enables efficient communication between IoT devices, servers, and applications by letting them publish and subscribe to messages.
## How to connect
To connect to an MQTT server you need to create an instance of the `MqttClient` class.
Example:
```cpp
// server address, Client ID, Client Username, Client Password
MqttClient client("ws://145.92.224.21/ws/", "KobukiRPI", "rpi", "rpiwachtwoordofzo"); // create a client object
```
Later, in the setup function, you need to call `client.connect()` to connect to the MQTT server:
```cpp
client.connect();
```
Once you've connected and the instance is initialized, you can subscribe to topics or publish messages to them.
## Subscribing and receiving messages
Example subscribing to a topic:
```cpp
void setup(){
    client.subscribe("home/commands");
}
```
Example receiving latest message from a topic:
```cpp
std::string foo(){
    std::string latestMqttMessage = "";
    latestMqttMessage = client.getLastMessage();
    return latestMqttMessage;
}
```
If you want to subscribe to multiple topics, you need to create multiple instances of the MQTT class, as shown in the sketch below.
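A minimal sketch of what that could look like, assuming the `MqttClient` constructor and `subscribe()` shown above; the header name and the client IDs here are only examples:
```cpp
#include "MqttClient.h" // assumed header name for the project's MQTT wrapper

// One instance per topic; each instance needs its own client ID
MqttClient commandClient("ws://145.92.224.21/ws/", "KobukiRPI-cmd", "rpi", "rpiwachtwoordofzo");
MqttClient dataClient("ws://145.92.224.21/ws/", "KobukiRPI-data", "rpi", "rpiwachtwoordofzo");

void setup(){
    commandClient.connect();
    dataClient.connect();
    commandClient.subscribe("home/commands"); // commands for the robot
    dataClient.subscribe("kobuki/data");      // sensor data topic
}
```
Calling `commandClient.getLastMessage()` and `dataClient.getLastMessage()` then gives you the latest message per topic.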
## Publishing messages
Example publishing a message:
```cpp
void foo(std::string Message){
    // channel, payload
    client.publishMessage("kobuki/example", Message);
}
```


@@ -1,20 +1,69 @@
# OpenCV
## Requirements
We want the camera to detect and identify what is happening on the video feed, so that dangers can be recognized.
## Issues
* OpenCL not grabbing the GPU
* Solution: https://github.com/Smorodov/Multitarget-tracker/issues/93
## Installation
### Dependencies
* glew (for OpenGL)
* opencv (C++ library)
How to install OpenCV:
```bash
sudo apt-get install libopencv-dev
```
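To check that the library is found by the compiler, a tiny test program can be built against it. This is only a sanity-check sketch; the usual compile flags are something like `g++ main.cpp $(pkg-config --cflags --libs opencv4)`, but the exact flags depend on your install.
```cpp
#include <opencv2/core/version.hpp>
#include <iostream>

int main() {
    // Prints the version of the OpenCV headers the compiler picked up
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    return 0;
}
```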
## Code explanation
### Opening the camera with OpenCV
```cpp
VideoCapture cap(0); // Open the default camera (0), which points to /dev/video0; change the number to select a different camera
if (!cap.isOpened()) { // if the camera could not be opened, print an error message
    cerr << "Error: Could not open camera" << endl;
    return;
}
```
### Taking a picture and storing it in a variable
```cpp
Mat frame; // create a new Mat (matrix) variable called frame
while (true) {
    cap >> frame; // Capture a new image frame.
    if (frame.empty()) { // if the frame is empty, print an error and skip this iteration
        cerr << "Error: Could not capture image" << endl;
        continue;
    }
```
### Encoding the image for sending it over MQTT
```cpp
vector<uchar> buf; // create a dynamic buffer for the image
imencode(".jpg", frame, buf); // encode the image as JPEG into the buffer
auto* enc_msg = reinterpret_cast<unsigned char*>(buf.data()); // raw pointer to the encoded bytes
```
```cpp
void CapnSend() {
    while (true) {
        // Convert the image to a byte array (done in the snippet above)
        // Publish the image data
        client.publishMessage("kobuki/cam", string(enc_msg, enc_msg + buf.size()));
        cout << "Sent image" << endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(300)); // Send an image every 300ms
    }
}
```
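On the receiving side the JPEG bytes have to be decoded back into an image. A minimal sketch of that step, assuming the payload arrives as a `std::string` exactly as published above (in this project the receiver is the Flask server, which does the same thing with `cv2.imdecode` in Python):
```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

cv::Mat decodeImagePayload(const std::string& payload) {
    // Copy the raw JPEG bytes into a buffer OpenCV can read
    std::vector<uchar> buf(payload.begin(), payload.end());
    cv::Mat frame = cv::imdecode(buf, cv::IMREAD_COLOR); // decode JPEG back into a Mat
    if (frame.empty()) {
        std::cerr << "Error: Could not decode image" << std::endl;
    }
    return frame;
}
```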
## Sources
* https://github.com/UnaNancyOwen/OpenCVDNNSample/tree/master


@@ -0,0 +1,25 @@
# Kobuki driver
## How do I communicate with the Kobuki?
You can communicate with the Kobuki over USB serial or the large serial port on the front; we chose the USB port, paired with a Raspberry Pi.
The Kobuki sends a message every 200ms at a baud rate of 115200. It contains all the sensor data, and the message always starts with the same two bytes: 0xAA and 0x55.
## Kobuki payloads
To communicate with the Kobuki we need to send it payloads, which are structured the same way as the payloads the Kobuki sends.
```cpp
unsigned char KobukiPayload[11] = {
    0xaa,               // Start byte 1
    0x55,               // Start byte 2
    0x08,               // Payload length (the first 2 bytes don't count)
    0x01,               // Payload type (0x01 = control command)
    0x04,               // Control byte or additional identifier
    actual_speed % 256, // Lower byte of speed value (max actual_speed 1024)
    actual_speed >> 8,  // Upper byte of speed value
    0x00,               // Placeholder for radius
    0x00,               // Placeholder for radius
    0x00                // Placeholder for checksum (will be applied later)
};
```
You can also find documentation about the payloads on the Kobuki website.
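The checksum placeholder at the end can be filled in with a small helper. A minimal sketch, assuming the Kobuki convention that the checksum is the XOR of every byte after the two 0xAA 0x55 start bytes:
```cpp
#include <cstddef> // for size_t

// Fills the last byte of the payload with the XOR checksum.
// Assumes the buffer layout shown above: 2 start bytes, then data, then 1 checksum byte.
void applyChecksum(unsigned char* payload, size_t length) {
    unsigned char checksum = 0;
    for (size_t i = 2; i < length - 1; ++i) { // skip the start bytes, stop before the checksum slot
        checksum ^= payload[i];
    }
    payload[length - 1] = checksum;
}

// Usage: applyChecksum(KobukiPayload, sizeof(KobukiPayload));
```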


@@ -592,6 +592,8 @@ void CKobuki::robotSafety() {
std::cout << "Safety condition triggered!" << std::endl; // Debug print
forward(-100); // reverse the robot
}
std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<int>(100)));
}
}


@@ -2,6 +2,8 @@
#include <iostream>
// should use the check value, or look at the payload length to see which extra fields are included
int KobukiParser::parseKobukiMessage(TKobukiData &output, unsigned char *data) {
std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<int>(20))); //avoid busy waiting. The kobuki sends a message every 20ms
int rtrnvalue = checkChecksum(data);
if (rtrnvalue != 0) {
// std::cerr << "Invalid checksum" << std::endl;


@@ -2,6 +2,8 @@
#define KOBUKIPARSER_H
#include <vector>
#include <thread>
struct TRawGyroData {
int x, y, z;


@@ -37,7 +37,6 @@ void MqttClient::subscribe(const std::string& topic, int qos) {
void MqttClient::publishMessage(const std::string& topic, const std::string& payload) {
try {
std::cout << "Publishing message: " << payload << std::endl;
client_.publish(topic, payload)->wait();
} catch (const mqtt::exception& exc) {
std::cerr << "Error: " << exc.what() << std::endl;


@@ -36,7 +36,11 @@ int main()
std::thread sendMqtt([&]() { sendKobukiData(robot.parser.data); });
while(true){
parseMQTT(readMQTT());
std::string message = readMQTT();
if (!message.empty()){
parseMQTT(message);
}
}
sendMqtt.join();
@@ -46,10 +50,13 @@ int main()
std::string readMQTT()
{
message = client.getLastMessage();
if (!message.empty())
static std::string lastMessage;
std::string message = client.getLastMessage();
if (!message.empty() && message != lastMessage)
{
std::cout << "MQTT Message: " << message << std::endl;
lastMessage = message;
}
// Add a small delay to avoid busy-waiting
@@ -305,6 +312,6 @@ void CapnSend() {
client.publishMessage("kobuki/cam", string(enc_msg, enc_msg + buf.size()));
cout << "Sent image" << endl;
std::this_thread::sleep_for(std::chrono::milliseconds(300)); // Send image every 1000ms
std::this_thread::sleep_for(std::chrono::milliseconds(200)); // Send image every 200ms
}
}

41 src/Python/YOLO/app.py Normal file

@@ -0,0 +1,41 @@
from ultralytics import YOLO
import cv2
import numpy as np
import requests
import time

model = YOLO("yolo11n.pt")

# try to fetch the image from the given url
def fetch_image(url):
    try:
        response = requests.get(url)
        response.raise_for_status()
        image_array = np.frombuffer(response.content, np.uint8)
        image = cv2.imdecode(image_array, cv2.IMREAD_COLOR)
        return image
    except requests.RequestException as e:
        print(f"Error: Could not fetch image - {e}")
        return None

# URL of the photostream
url = "http://145.92.224.21/image"

while True:
    frame = fetch_image(url)
    if frame is None:
        print("Error: Could not fetch image, retrying...")
        time.sleep(1)  # Wait for 1 second before retrying
        continue
    # Predict on the frame
    results = model(frame)
    # Display the results
    results[0].show()
    # Exit if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()


@@ -0,0 +1 @@
__pycache__


@@ -0,0 +1,18 @@
FROM python:3.9
WORKDIR /app
COPY . .
RUN apt-get update && apt-get install -y libgl1
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "web/app.py"]
#build instruction: sudo docker buildx build -t flaskapp:latest .
#run instruction: sudo docker run --network="host" flaskapp:latest
# need to use network host to connect to the host's mqtt server


@@ -0,0 +1,5 @@
Flask==3.1.0
paho-mqtt==1.6.1
ultralytics==8.3.58
opencv-python-headless==4.6.0.66
numpy==1.23.4


@@ -1,20 +1,49 @@
from flask import Flask, Response, request, render_template, jsonify
import paho.mqtt.client as mqtt
from ultralytics import YOLO
import cv2
import numpy as np

app = Flask(__name__)
kobuki_message = "empty"
# Load a model
model = YOLO("yolo11n.pt")  # pretrained YOLO11n model
kobuki_message = ""
latest_image = None
yolo_results = []
# https://medium.com/@Mert.A/how-to-segment-objects-with-yolov11-68593eb49fa8
yolo_classes = list(model.names.values())

def on_message(client, userdata, message):
    global kobuki_message, latest_image
    global kobuki_message, latest_image, yolo_results
    if message.topic == "kobuki/data":
        kobuki_message = str(message.payload.decode("utf-8"))
    elif message.topic == "kobuki/cam":
        latest_image = message.payload
        latest_image = np.frombuffer(message.payload, np.uint8)
        latest_image = cv2.imdecode(latest_image, cv2.IMREAD_COLOR)
        # Process the image with YOLO
        results = model(latest_image)
        yolo_results = []
        for result in results:
            for box in result.boxes:
                class_id = int(box.cls.item())  # Convert to integer
                class_name = yolo_classes[class_id]
                yolo_results.append({
                    "class": class_name,
                    "confidence": box.conf.item(),
                    "bbox": box.xyxy.tolist()
                })
                # Draw bounding box on the image
                x1, y1, x2, y2 = map(int, box.xyxy[0])
                cv2.rectangle(latest_image, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(latest_image, f"{class_name} {box.conf.item():.2f}", (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

# Create an MQTT client instance
mqtt_client = mqtt.Client()
mqtt_client.username_pw_set("server", "serverwachtwoordofzo")
mqtt_client.connect("localhost", 80, 60)
mqtt_client.connect("localhost", 1884, 60)
mqtt_client.loop_start()
mqtt_client.subscribe("kobuki/data")
mqtt_client.subscribe("kobuki/cam")
@@ -49,23 +78,19 @@ def move():
def data():
    return kobuki_message

@app.route('/image')
def image():
    global latest_image
    if latest_image is not None:
        return Response(latest_image, mimetype='image/jpeg')
        _, buffer = cv2.imencode('.jpg', latest_image)
        return Response(buffer.tobytes(), mimetype='image/jpeg')
    else:
        return "No image available", 404

@app.route('/phpmyadmin/<path:path>')
def phpmyadmin_passthrough(path):
    # Let Apache handle this route directly
    return "", 404

@app.route('/yolo_results', methods=['GET'])
def yolo_results_endpoint():
    global yolo_results
    return jsonify(yolo_results)

if __name__ == '__main__':
    app.run(debug=True, port=5000)
    app.run(debug=True, port=5000)


@@ -14,7 +14,7 @@ document.addEventListener("DOMContentLoaded", function() {
body: JSON.stringify({ direction: direction })
})
.then(response => response.json())
.then(data => {script
.then(data => {
console.log("Success:", data);
})
.catch(error => {
@@ -25,9 +25,13 @@ document.addEventListener("DOMContentLoaded", function() {
// Fetch data from the server
async function fetchData() {
const response = await fetch("/data");
const data = await response.json();
return data;
try {
const response = await fetch("/data");
const data = await response.json();
return data;
} catch (error) {
console.error("Error:", error);
}
}
// Parse the data and show it on the website
@@ -49,9 +53,9 @@ document.addEventListener("DOMContentLoaded", function() {
img.src = "/image?" + new Date().getTime(); // Add timestamp to avoid caching
}
// Fetch and display sensor data every 5 seconds
// Fetch and display sensor data every 1 second
setInterval(parseData, 1000);
// Update the image every 5 seconds
setInterval(updateImage, 200);
// Update the image every 100 milliseconds
setInterval(updateImage, 100);
});

Binary file not shown.


@@ -1,7 +0,0 @@
import sys
import logging
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0, "/home/ishak/rooziinuubii79/src/Python/flask/web")
from app import app as application


@@ -0,0 +1,13 @@
[Unit]
Description=kobukiDriver
After=network.target
[Service]
User=user1
WorkingDirectory=/home/user1/rooziinuubii79/src/C++/Driver/
ExecStart=/home/user1/rooziinuubii79/src/C++/Driver/kobuki_control
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target