The Deployment of Aalto Explorer V 1.0 (Part 1)

In the previous series on the research behind Aalto Explorer, we presented the groundwork for Aalto Explorer – the first community-controlled vehicle and platform for unlimited underwater exploration. For more details, please see the earlier articles.

This sequel gives an overview of the deployment of our first two prototypes: V 1.0 and V 2.0.

A brief overview

Aalto Explorer (AE) envisions a remote vehicle and a network application – the so-called expedition platform – controlled by the community and made available to ocean enthusiasts, casual explorers and lifelong learners, sending them on exciting journeys of oceanic exploration on their own terms and at their own pace. The community and/or experts can organise and book an expedition on one of the AE units deployed around the world.

FIND-X units are composed of three main elements: a Floating Module, a Remotely Operated Vehicle (ROV) and a Graphical User Interface (GUI), known as the Expedition Platform. Communication between users and the floating module is enabled by 4G technology. The floating module connects to the ROV via an umbilical cord that supplies energy and carries the communication with the cameras and the control signals for the ROV unit.

DEPLOYMENT OF PROGRAMMING & ELECTRONICS IN V 1.0

RASPBERRY CONFIGURATION

Description

The main software system runs on the Raspberry Pi board. It connects to the Arduino board, which provides the interface for the sensors, and communicates with the external world through the 4G router.

The software running on top of it is Ubuntu MATE 16.04 with the ROS middleware, Kinetic version. This allows the system to work under a decentralized node scheme, where each node holds one function at a time, i.e. one node handles one camera, other nodes handle the other cameras, etc.

Thus, for the whole system to work, the Raspberry Pi needs to be configured for these communications.

Steps

  1. Burn the operating system image to the SD card (to be used in the Raspberry Pi). This image is already configured and stored on a local hard drive.
  2. Resize the boot partition to use the whole drive.
  3. Connect to a screen and enable remote desktop services and SSH connections.
  4. Update the packages.
  • On the Raspberry Pi, enable RealVNC, SSH and the camera via sudo raspi-config
  • Interfacing Options: enable the Camera, SSH and Serial connections
  • Remote Desktop: operate the VNC Server at the command line

You can operate VNC Server exclusively at the command line or via SSH if you prefer.

Common commands for Raspbian Jessie (which is based on Debian 8, and uses systemd) are:

  • To start VNC Server now: sudo systemctl start vncserver-x11-serviced.service
  • To start VNC Server at next boot, and every subsequent boot: sudo systemctl enable vncserver-x11-serviced.service
  • To stop VNC Server: sudo systemctl stop vncserver-x11-serviced.service
  • To prevent VNC Server from starting at boot: sudo systemctl disable vncserver-x11-serviced.service

Configuring for Arduino communication

Install the Arduino IDE on Ubuntu Linux (laptop) and on Ubuntu MATE (Raspberry Pi). Open the Arduino IDE and use a simple sketch to test LED blinking.

// the setup function runs once when you press reset or power the board
void setup() {
  // initialize digital pin LED_BUILTIN as an output
  pinMode(LED_BUILTIN, OUTPUT);
}

// the loop function runs over and over again forever
void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // turn the LED on (HIGH is the voltage level)
  delay(10000);                     // wait for ten seconds
  digitalWrite(LED_BUILTIN, LOW);   // turn the LED off by making the voltage LOW
  delay(10000);                     // wait for ten seconds
}
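Beyond the blink test, the Raspberry will exchange data with the Arduino over the serial link. A minimal sketch of how such messages might be framed on the Pi side follows; the "key:value" format is an assumption for illustration, not the project's actual protocol, and on real hardware the link would be opened with a serial library such as pyserial.

```python
# Hypothetical framing helpers for the Pi <-> Arduino serial link.
# The newline-delimited "key:value" format is an assumption for illustration;
# the real link would use e.g. pyserial: serial.Serial('/dev/ttyACM0', 9600).

def encode_cmd(key, value):
    """Encode a command such as ('led', 'on') into bytes for the wire."""
    return ("%s:%s\n" % (key, value)).encode("ascii")

def decode_line(raw):
    """Parse one newline-terminated reading coming back from the Arduino."""
    key, _, value = raw.decode("ascii").strip().partition(":")
    return key, value

# Dry run without hardware:
frame = encode_cmd("led", "on")   # b'led:on\n'
print(decode_line(frame))         # ('led', 'on')
```

Keeping the framing in small pure functions like this makes the protocol testable on the laptop before any hardware is connected.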

Results

RASPBERRY CAMERA CONFIGURATION

Description

This is the first camera to be used, and it will serve as the ‘navigation’ camera due to its lower resolution compared to the HD camera (the ‘exploration’ camera). For its management, we use the raspicam library.

Test camera on raspberry pi

The scripts are in Aalto_Progr, a local directory.

cam_pict.py

cam_vid.py

Publishing content from the Raspberry Pi into the ROS network

First configure the network between the Raspberry Pi and the laptop, then run roscore and the talker on the Raspberry and the listener on the laptop.

For this test, the talker and listener can be written in either C++ or Python. To keep things simple, the following are in Python.

Start the master (RASPBERRY)

We need to select one machine to run the master; we’ll go with the Pi. The first step is to start the master:

ssh [email protected]

roscore
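Nodes on the two machines find each other through environment variables. A sketch of the setup on the master side, assuming the Raspberry sits at 10.42.0.223 (ROS_IP is not mentioned in the original steps, but it is commonly needed so each node advertises an address the other machine can reach):

```shell
# On the Raspberry (assumed address 10.42.0.223), where roscore runs:
export ROS_MASTER_URI=http://10.42.0.223:11311
export ROS_IP=10.42.0.223

echo "$ROS_MASTER_URI"   # http://10.42.0.223:11311
```

The laptop would export the same ROS_MASTER_URI but its own address as ROS_IP.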

Start the listener (LAPTOP)

Now we’ll start a listener on the laptop, configuring ROS_MASTER_URI so that it uses the master that was just started:

ssh [email protected]

export ROS_MASTER_URI=http://10.42.0.223:11311

rosrun rover_package simple_listen.py
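The rover_package scripts are not shown in the article; a minimal sketch of what simple_listen.py might look like, following the standard rospy subscriber pattern (the node name, topic name and log format here are assumptions):

```python
# Hypothetical sketch of rover_package/scripts/simple_listen.py.
# The rospy calls follow the standard ROS tutorial pattern and only run
# on a machine with ROS Kinetic installed.

def format_received(data):
    """How the listener logs each message it hears (format is an assumption)."""
    return "I heard: %s" % data

def run_listener():
    import rospy                      # ROS Python client library
    from std_msgs.msg import String
    rospy.init_node('simple_listen')
    rospy.Subscriber('chatter', String,
                     lambda msg: rospy.loginfo(format_received(msg.data)))
    rospy.spin()                      # keep the node alive until shutdown

print(format_received("hello"))       # I heard: hello
```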

Start the talker (RASPBERRY)

Next we’ll start a talker on the Raspberry, also configuring ROS_MASTER_URI so that the master on the Pi is used:

ssh [email protected]

export ROS_MASTER_URI=http://10.42.0.223:11311

rosrun rover_package simple_talk.py
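A matching sketch of what simple_talk.py might look like on the Raspberry side, again following the standard rospy publisher pattern (the node name, topic name and message text are assumptions):

```python
# Hypothetical sketch of rover_package/scripts/simple_talk.py.
# The rospy calls follow the standard ROS tutorial pattern and only run
# on a machine with ROS Kinetic installed.

def make_message(count):
    """Payload published on each tick (text and topic name are assumptions)."""
    return "hello from the raspberry %d" % count

def run_talker():
    import rospy
    from std_msgs.msg import String
    rospy.init_node('simple_talk')
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(1)              # publish at 1 Hz
    count = 0
    while not rospy.is_shutdown():
        pub.publish(make_message(count))
        count += 1
        rate.sleep()

print(make_message(0))                # hello from the raspberry 0
```

If the network is configured correctly, the listener on the laptop should print each message the talker publishes.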

HD CAMERA CONFIGURATION

Description

This is the second camera to be used, and its perk is its HD resolution. It is managed in a slightly different way, through the usb_cam ROS node rather than the raspicam library, as described below.

Structure

Laptop as the master: image_view node
Raspberry as the worker (video server): usb_cam node – vidsrv package with launch file

Configuration

Connect the USB camera to the Raspberry Pi and verify that it was recognized:

lsusb

ls /dev | grep video

Install the ROS node on the Raspberry Pi:

sudo apt install ros-kinetic-usb-cam

The launch file can be seen with:

cat /opt/ros/kinetic/share/usb_cam/launch/usb_cam-test.launch

roslaunch usb_cam usb_cam-test.launch

The usb_cam node will start, but image_view will fail because the Raspberry has no GUI resources.

The node on the Raspberry now publishes to the /usb_cam/image_raw topic.

On the Ubuntu laptop, read the camera data with image_view:

rosrun image_view image_view image:=/usb_cam/image_raw

or

rqt_image_view

Package to web server streaming

On the Raspberry, install the node:

sudo apt install ros-kinetic-web-video-server

Create the package ‘vidsrv’ and a launch file in the catkin workspace (‘catkin_ws’ in my case):

cd catkin_ws/src

catkin_create_pkg vidsrv std_msgs rospy roscpp

mkdir -p vidsrv/launch

nano vidsrv/launch/vidsrv.launch

cd ..

source devel/setup.bash

Content of launch file:

<launch>
  <!-- This node description you can take from usb_cam-test.launch -->
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
    <param name="video_device" value="/dev/video0" />
    <param name="image_width" value="640" />
    <param name="image_height" value="480" />
    <param name="pixel_format" value="yuyv" />
    <param name="camera_frame_id" value="usb_cam" />
    <param name="io_method" value="mmap" />
  </node>

  <!-- This node will launch the web video server -->
  <node name="web_video_server" pkg="web_video_server" type="web_video_server" />
</launch>

Then just run the launch file:

roslaunch vidsrv vidsrv.launch

On the laptop, open a web browser at the Raspberry’s IP address on port 8080, e.g. 10.42.0.223:8080 (web_video_server lists the available streams there; a single topic is typically served at a URL like 10.42.0.223:8080/stream?topic=/usb_cam/image_raw).

Running the server node and viewing in browser

Laptop as master

Start the ROS core:

roscore #master in laptop

Raspberry as the worker, video server node

ssh [email protected]

export ROS_MASTER_URI=http://10.42.0.1:11311

roslaunch usb_cam usb_cam-test.launch #Launch file for rqt image

roslaunch vidsrv vidsrv.launch #Run the launch file for web streaming

Back to Laptop

Run image_view to visualize the raw data (of course, only after the Raspberry nodes are running):

rosrun image_view image_view image:=/usb_cam/image_raw

or open a web browser at the Raspberry’s IP address on port 8080: 10.42.0.223:8080

—————————————————————————————————————————-

Don’t forget to sign up to our newsletters or join our pioneer group for further updates!

 
