PhD Project Page

David Gollasch, Research Associate @ Technische Universität Dresden, Germany

Our goal is to develop an “AAA” – adaptive, accessible assistance – robot to support people with special needs as well as elderly people in their daily lives, whether at home or in complex public environments.

We are constantly working on new features to build up a multi-purpose robot that is able to adapt to the user’s needs and preferences. To get there, we follow two development paths. First, we implement many “function bits” or features for our robot – similar to building blocks for later task process chains. Second, we develop a match-making system that constructs these process chains adaptively, based on the user’s needs. The result will become our final A3Bot system.
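
To illustrate the intended architecture, here is a minimal sketch in Java (all names are hypothetical, not the actual A3Bot code): function bits act as composable building blocks, and a match-maker selects the bits that fit a stated user need into a process chain.

    // Hypothetical sketch: function bits as building blocks and a match-maker
    // that assembles a process chain for a given user need.
    import java.util.ArrayList;
    import java.util.List;

    interface FunctionBit {
        boolean matches(String userNeed);   // can this bit contribute to the need?
        void execute();                     // run the bit's behaviour on the robot
    }

    class MatchMaker {
        private final List<FunctionBit> availableBits = new ArrayList<>();

        void register(FunctionBit bit) {
            availableBits.add(bit);
        }

        /** Builds a process chain from all registered bits that match the stated need. */
        List<FunctionBit> buildChain(String userNeed) {
            List<FunctionBit> chain = new ArrayList<>();
            for (FunctionBit bit : availableBits) {
                if (bit.matches(userNeed)) {
                    chain.add(bit);
                }
            }
            return chain;
        }
    }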

We are working with the Segway Robotics Loomo platform – a versatile, affordable, Android-based robot on wheels that doubles as a self-balancing vehicle.

Project Progress

  • Requirements Analysis
  • Development Progress
  • Component Prototyping

Components

Human-Robot Interaction

Amazon Alexa Integration

Turning Loomo into an Alexa client to enable natural voice interaction and get access to existing skills. Furthermore, controlling the robot using voice commands handled by Alexa (see the intent-handler sketch after the list).

  • Alexa client for Loomo
  • Controlling Loomo using external Alexa
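
A minimal sketch of the skill side using the Alexa Skills Kit SDK v2 for Java; the intent name and the way the command reaches Loomo are assumptions, not the project's actual implementation.

    import com.amazon.ask.dispatcher.request.handler.HandlerInput;
    import com.amazon.ask.dispatcher.request.handler.RequestHandler;
    import com.amazon.ask.model.Response;
    import java.util.Optional;

    import static com.amazon.ask.request.Predicates.intentName;

    // Handles a hypothetical "DriveForwardIntent" and would forward the command to Loomo.
    public class DriveForwardIntentHandler implements RequestHandler {

        @Override
        public boolean canHandle(HandlerInput input) {
            return input.matches(intentName("DriveForwardIntent"));
        }

        @Override
        public Optional<Response> handle(HandlerInput input) {
            // The skill backend would pass the command on to the robot here,
            // e.g. via a message queue or a REST endpoint on Loomo.
            return input.getResponseBuilder()
                    .withSpeech("Okay, driving forward.")
                    .build();
        }
    }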

Augmented Reality

Extending the visual output by projecting information onto objects using an attached LED projector.

  • Projector mount developed.
  • Information projection prototype using Android Things

Emotion Modelling

Defining an emotion model that can be applied to Loomo to express emotions (see the sketch after the list).

  • Emotion model based on human emotions
  • Emotion model based on animals and objects
  • Basic robotic concept
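
As an illustration only, one possible shape of such a model (the emotions, channels and values here are hypothetical, not the project's actual model): each basic emotion maps to output channels the robot can use to express it.

    import java.util.EnumMap;
    import java.util.Map;

    // Hypothetical emotion model: each emotion maps to simple expression channels.
    enum Emotion { JOY, SADNESS, SURPRISE, ANGER, NEUTRAL }

    class Expression {
        final float headPitchDegrees;  // head posture while expressing the emotion
        final String soundCue;         // short audio cue to play
        final int ledColor;            // ARGB colour for the status lights

        Expression(float headPitchDegrees, String soundCue, int ledColor) {
            this.headPitchDegrees = headPitchDegrees;
            this.soundCue = soundCue;
            this.ledColor = ledColor;
        }
    }

    class EmotionModel {
        final Map<Emotion, Expression> expressions = new EnumMap<>(Emotion.class);

        EmotionModel() {
            expressions.put(Emotion.JOY, new Expression(10f, "chirp_up", 0xFF00FF00));
            expressions.put(Emotion.SADNESS, new Expression(-20f, "chirp_down", 0xFF0000FF));
        }
    }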

Emotion Recognition

Recognising emotions using the personal robot's RGB HD camera and machine learning, with sensor fusion of camera and voice for refinement (see the fusion sketch after the list).

  • First Prototype
  • Recognition refined using Camera and Voice
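
A minimal late-fusion sketch, assuming each modality yields one probability per emotion class; the fixed weighting is an assumption, not the project's exact refinement method.

    // Combine per-class probabilities from the camera model and the voice model
    // by a weighted average and return the index of the most likely emotion.
    class EmotionFusion {
        static int fuse(double[] cameraProbs, double[] voiceProbs, double cameraWeight) {
            int best = 0;
            double bestScore = -1.0;
            for (int i = 0; i < cameraProbs.length; i++) {
                double score = cameraWeight * cameraProbs[i] + (1.0 - cameraWeight) * voiceProbs[i];
                if (score > bestScore) {
                    bestScore = score;
                    best = i;
                }
            }
            return best;   // index of the fused, most probable emotion class
        }
    }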

Gesture Control

Designing body gestures to control the Loomo personal robot.

  • Gesture recognition prerequisites based on Intel RealSense 3D camera

Microsoft Voice Service Integration

Interacting with Loomo via voice; an approach using Microsoft voice services (see the sketch after the list).

  • First Prototype
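
A minimal speech-to-text sketch using the Microsoft Cognitive Services Speech SDK for Java; the subscription key, region and the mapping from recognised text to a robot action are placeholders.

    import com.microsoft.cognitiveservices.speech.SpeechConfig;
    import com.microsoft.cognitiveservices.speech.SpeechRecognitionResult;
    import com.microsoft.cognitiveservices.speech.SpeechRecognizer;

    public class VoiceInputDemo {
        public static void main(String[] args) throws Exception {
            SpeechConfig config = SpeechConfig.fromSubscription("<subscription-key>", "<region>");
            SpeechRecognizer recognizer = new SpeechRecognizer(config);

            // Listen for a single utterance from the default microphone.
            SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
            System.out.println("Heard: " + result.getText());
            // A real integration would map the recognised text to a Loomo command here.

            recognizer.close();
        }
    }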

MyCroft.ai Integration

Interacting with Loomo via voice; an approach using the open-source voice assistant MyCroft.ai.

  • First Prototype

Adaptivity

User Identification

Getting to know new users and identifying them by their voice.

  • First Prototype.

Navigation

Driving Using Ultrasonic Sensors

Letting the personal robot drive around and explore its environment using only the ultrasonic sensor to avoid collisions (see the control-loop sketch after the list).

  • First Prototype
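
The reactive control loop this implies could look like the following sketch; RobotBase and UltrasonicSensor are hypothetical wrappers, not actual Loomo SDK calls.

    // Drive forward and turn away as soon as something comes too close.
    class UltrasonicExplorer {
        interface UltrasonicSensor { float distanceMeters(); }
        interface RobotBase { void setLinearVelocity(float v); void setAngularVelocity(float w); }

        static final float SAFE_DISTANCE_M = 0.5f;

        static void step(UltrasonicSensor sensor, RobotBase base) {
            if (sensor.distanceMeters() < SAFE_DISTANCE_M) {
                base.setLinearVelocity(0.0f);   // stop before the obstacle
                base.setAngularVelocity(0.8f);  // turn in place to find free space
            } else {
                base.setAngularVelocity(0.0f);
                base.setLinearVelocity(0.3f);   // cruise forward
            }
        }
    }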

Floor Plan Navigation

A mapping and navigation approach based on building exit plans (see the path-finding sketch after the list).

  • Concept for converting exit plans to navigation graph structure
  • Path finding and moving based on map structure.
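
A simplified sketch of the resulting map structure, assuming rooms and corridor junctions become graph nodes and doorways or corridor segments become edges; a breadth-first search then yields a path between two locations.

    import java.util.*;

    class FloorPlanGraph {
        private final Map<String, List<String>> adjacency = new HashMap<>();

        void addEdge(String a, String b) {
            adjacency.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
            adjacency.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
        }

        /** Shortest path by number of hops, e.g. from "room_112" to "elevator". */
        List<String> findPath(String start, String goal) {
            Map<String, String> parent = new HashMap<>();
            Deque<String> queue = new ArrayDeque<>();
            queue.add(start);
            parent.put(start, null);
            while (!queue.isEmpty()) {
                String node = queue.poll();
                if (node.equals(goal)) break;
                for (String next : adjacency.getOrDefault(node, List.of())) {
                    if (!parent.containsKey(next)) {
                        parent.put(next, node);
                        queue.add(next);
                    }
                }
            }
            if (!parent.containsKey(goal)) return List.of();   // unreachable
            LinkedList<String> path = new LinkedList<>();
            for (String n = goal; n != null; n = parent.get(n)) path.addFirst(n);
            return path;
        }
    }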

Obstacle Avoidance

Obstacle detection and path correction based on 3D and ultrasonic sensors (see the sketch after the list).

  • Detecting large static objects and driving past them
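
A simplified path-correction sketch, assuming the 3D camera reports the obstacle's lateral position in the field of view and the ultrasonic sensor reports its range; the interface and the control rule are assumptions, not the project's actual implementation.

    // Steer away from the side on which the obstacle was detected, more strongly
    // the closer the ultrasonic sensor says it is.
    class ObstacleAvoider {
        /**
         * @param obstacleOffset lateral obstacle position from the 3D camera, -1 (far left) .. +1 (far right)
         * @param ultrasonicM    distance reported by the ultrasonic sensor in metres
         * @return angular velocity correction; positive turns left, negative turns right
         */
        static float steeringCorrection(float obstacleOffset, float ultrasonicM) {
            if (ultrasonicM > 1.0f) {
                return 0.0f;                       // nothing close enough to react to
            }
            float urgency = 1.0f - ultrasonicM;    // the closer, the stronger the correction
            return obstacleOffset >= 0 ? urgency : -urgency;   // turn away from the obstacle side
        }
    }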

Applications

Caring Assistance

Implementing features that help people with dementia maintain personal responsibility during everyday activities (see the routine sketch after the list).

  • Routine Assistant
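
A minimal sketch of what such a routine assistant could look like (hypothetical and simplified): a daily routine is a list of timed steps, and the robot reminds the user of any step that is due but not yet confirmed.

    import java.time.LocalTime;
    import java.util.List;

    class RoutineAssistant {
        static class Step {
            final String description;
            final LocalTime due;
            boolean done;
            Step(String description, LocalTime due) { this.description = description; this.due = due; }
        }

        static void remind(List<Step> routine, LocalTime now) {
            for (Step step : routine) {
                if (!step.done && now.isAfter(step.due)) {
                    // On the robot this would trigger speech output and wait for confirmation.
                    System.out.println("Reminder: " + step.description);
                }
            }
        }
    }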

Cooperative Drone-Robot Interaction

Fetching objects using a drone that is controlled by Loomo.

  • Automated drone movement coordination using markers

Smart Home Integration

Integrating smart-home sensors and actuators into the robot’s ecosystem via Z-Wave, enabling Loomo to interact with the devices in its environment (see the sketch after the list).

  • First Prototype
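
A minimal integration sketch with a hypothetical gateway interface and no real Z-Wave library calls: the robot reads sensors and switches actuators through simple commands.

    class SmartHomeDemo {
        interface ZWaveGateway {
            boolean readBinarySensor(String deviceId);        // e.g. a door contact
            void switchActuator(String deviceId, boolean on); // e.g. a wall plug or light
        }

        static void reactToOpenDoor(ZWaveGateway gateway) {
            if (gateway.readBinarySensor("front_door_contact")) {
                gateway.switchActuator("hallway_light", true);  // light the hallway when the door opens
            }
        }
    }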