Join Us
Current Openings
Funded PhD Research Opportunities
Applications for our funded studentships will re-open in January 2025. In the meantime, please do contact CHART team members to enquire about their research topics and start working on potential applications.
Self-Funded PhDs
The CHART Research Team are offering several projects in collaboration with their partners for self-funded PhD students - please get in touch with the named supervisor if you are interested.
Camera-based movement analysis to provide real-time optimisation of therapeutic non-invasive neuromodulation for neurological diseases: A Human-in-the-Loop and Machine Learning Approach
Involuntary movements, such as tremor, tics, sudden jerks (chorea) and muscle spasms (dystonia), occur in many neurological disorders. These involuntary movements are difficult to treat, requiring medications (which often have unpleasant side effects) or invasive deep brain stimulation (requiring electrodes to be implanted in the brain). Recent innovative research by Prof Stephen Jackson (UoN Psychology) found that non-invasive median nerve stimulation (MNS), which involves stimulating a nerve in the wrist using a wearable electronic device, suppresses involuntary movements in Tourette syndrome by entraining oscillations in relevant brain circuits when the device is active. Clinical trials are now underway, or being developed, to test the therapeutic effect in a number of neurological disorders, including Parkinson's disease (PD), ataxia telangiectasia (A-T) and restless legs syndrome (RLS).
However, involuntary movements are intermittent and highly variable, meaning that continuous stimulation by the MNS device is neither required nor desirable. Real-time detection of involuntary movements could allow personally tailored therapeutic stimulation strategies, optimised to detect the different types of involuntary movements seen in diseases such as PD, A-T and RLS, and used to optimise the stimulation regime on an individual basis.
In this project, we will analyse videos of trial participants to understand how MNS affects the patients' symptoms, and to optimise the device in real time to provide the maximum benefit for each individual. To achieve this, we will utilise state-of-the-art marker-less pose estimation in combination with multi-modal machine learning to better understand the complex movements of those with movement disorders. This will enable us to tailor the treatment to each individual's specific needs in real time and maximise its effectiveness by tuning the MNS device. Demonstrating the feasibility of the human-in-the-loop approach will directly enable clinical trials of the effectiveness of personalised, home-administered MNS in reducing the unwanted movements, and allow exploration of the potential benefits of this technology in improving quality of life for individuals with movement disorders.
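As an illustration of the kind of video-analysis pipeline this project would build on, the sketch below uses the open-source MediaPipe Pose estimator to track a wrist landmark across video frames and computes a crude movement-variability score. The library choice, the landmark, the metric and the file name are assumptions made purely for illustration, not the project's prescribed method.

```python
# Illustrative sketch only: marker-less wrist tracking from video with a simple
# movement-variability score. Library choice (MediaPipe, OpenCV) and the metric
# are assumptions for illustration, not the project's prescribed pipeline.
import cv2
import numpy as np
import mediapipe as mp

def wrist_trajectory(video_path):
    """Return an (N, 2) array of normalised left-wrist (x, y) positions per frame."""
    pose = mp.solutions.pose.Pose(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    points = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            lm = result.pose_landmarks.landmark[mp.solutions.pose.PoseLandmark.LEFT_WRIST]
            points.append((lm.x, lm.y))
    cap.release()
    pose.close()
    return np.array(points)

def movement_variability(traj):
    """Crude proxy for involuntary movement: mean frame-to-frame displacement."""
    if len(traj) < 2:
        return 0.0
    return float(np.mean(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

# Example with a hypothetical file name:
# score = movement_variability(wrist_trajectory("trial_01.mp4"))
```

A real system would track multiple landmarks, fuse them with other modalities, and feed the resulting features into the closed-loop control of the stimulation device.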
This PhD project will benefit from a strong multidisciplinary approach, combining computer science, psychology and neuroscience. Applicants are expected to have programming experience (Python) and an interest or background in neuroscience.
Supervisors: Dr Alexander Turner (School of Computer Science), Prof Stephen Jackson (School of Psychology), Prof Robert Dineen (Faculty of Medicine & Health Sciences)
For further details and to arrange an interview please contact Dr Alexander Turner (School of Computer Science)
"Learning by Doing": A Cross-Disciplinary Exploration of Human Motor Learning, AI, and Robot Learning
AI and Robotics technologies are becoming pervasive in everyday human life, e.g. telepresence robots, robot vacuums and humanoids in factories. To operate in and interact with humans and made-for-human environments, robot learning plays a vital role, allowing robots to be more easily extended and adapted to novel situations. An example of applying AI and machine learning to robotics is learning-by-demonstration, a supervised learning technique in which robots acquire new skills by learning to imitate an expert.
The research challenge lies in developing robot learning models by modelling the different stages of motor learning in humans, i.e., modelling humans as naïve learners rather than experts. The fundamental research in human motor learning (HML), by which humans acquire and refine motor skills through practice, feedback and adaptation, proves critical in this regard. This can allow the robot to mimic human motor control mechanisms, such as joint flexibility and compliance, more naturally, with the benefit of refining the acquired skill over time with input from humans. Example applications include an assistive robotic arm that helps stroke patients guide their movements and provides corrective cues, or therapy patients learning to adjust their motor patterns for better outcomes based on the AI models.
This research will address the challenge noted, focussing on three measurable outcomes: (i) novel research into AI to model HML in subjects, including adults and children, especially with the goal of mapping the HML model to robot motion; (ii) implementation of the developed HML models in a robot learning architecture (in robotic manipulation or navigation), evaluated against novel benchmarking metrics (to be investigated) for the applicability and utility of the research in the real world; and (iii) an annotated and labelled dataset (videos, sensor data, etc.) of human as well as robot motions, made publicly available for the benefit of the scientific communities in psychology, AI, and robotics.
Prospective PhD applicants should have a degree in Computer Science or Robotics (or Psychology with an experimental focus), with knowledge of Machine Learning, Deep Learning, AI and, preferably, Robotics. This project will require excellent programming skills with evidence of proficient working knowledge in one or more of the following: C++, Python, ROS.
Supervisors: Dr Nikhil Deshpande (School of Computer Science), Dr Deborah Serrien (School of Psychology) deborah.serrien@nottingham.ac.uk.
For further details and to arrange an interview please contact Dr Nikhil Deshpande (School of Computer Science).
Multimodal machine learning of parent-child interactions as a predictor of child cognitive functions
The first five years of a child's life play a critical role in cementing cognitive functions. Several studies have shown that the quantity and quality of parent-child interactions can affect children's cognitive development later on. Periods of shared attention between caregivers and children have important implications for developing the child's attention span and language skills. Recent neuroimaging research has also found that some parts of the brain may be activated in similar ways for both parents and children during these interactions, and, interestingly, the extent of this analogous brain activation might be influenced by factors such as how stressful the home environment is. Despite what is known about the association between child cognitive functions and the modalities (i.e., audio-visual and brain activity) involved in parent-child interactions, the extent to which these can be combined to inform better cognitive developmental outcomes for infants is unknown. If we can better understand these associations, we can help caregivers interact with young children in more effective ways that could potentially transform their development in the crucial early stages of life.
In this project you will work to better understand the interactions between young children and caregivers during exploratory play. You will apply state-of-the-art machine learning techniques to analyse videos of interactions, detecting poses, activities and key events. You will explore novel deep learning methods for integrating multi-modal information sources, combining video events, audio data and fNIRS data to extract perceptual, verbalisation, affect and brain-function information (among others) during parent-child interactions and predict cognitive functions in children. Video, questionnaire and neuroimaging data are already available from an ongoing, longitudinal project assessing neurocognition in children in the School of Psychology at the University of Nottingham. Separately, it might be possible to design and collect more data in the future.
Applicants will be expected to have good working knowledge of current machine learning and image processing tools and techniques. Prior knowledge of biomedical signal processing and natural language processing is desirable but not essential.
Supervisors: Dr Joy Egede (School of Computer Science), Dr Sobana Wijeakumar (School of Psychology), Dr Aly Magassouba (School of Computer Science) aly.magassouba@nottingham.ac.uk.
For further details and to arrange an interview please contact Dr Joy Egede (School of Computer Science)
Safety Assurance in Assistive Human-Robot Interaction
To discuss this project please email: praminda.caleb-solly@nottingham.ac.uk
This research will address the design and evaluation of safe and trustworthy collaborative robots in assistive scenarios.
Robotics and autonomous systems (RAS) are emerging as disruptive technologies with the potential to provide personalised and cost effective support for a range of care-related tasks for people with disabilities, including the provision of physical and social assistance and physiotherapy. However, in order to ensure real-world deployment and commercialisation, application-focussed research into safe Human-Robot Interaction, Hazard Analysis and Risk Assessment in a range of dynamic environments and scenarios of use is needed.
The need for research in these areas is particularly significant given the vulnerability of the end-users interacting with these systems, which gives rise to a range of very complex safety and reliability issues and concerns. It is necessary to consider the safety of assistive robots carefully and deeply, not just at an operational and functional level, but also from human factors and clinical efficacy perspectives.
The research will start with a user-centred approach to identifying safety-related issues of specific assistive robots, to scope the requirements for real-world use of assistive robots by people with different accessibility needs and in different contexts. As part of this, you will also need to review the existing methods and approaches for safety assurance for these systems, with a view to exploring critical barriers to assurance and regulation. Through user-based testing and evaluation using existing assistive robotic platforms, you will analyse the adequacy of current guidelines and standards for assistive robots and identify gaps in the standards using a set of real-world use cases.
Based on your skills and interests, there are several routes you could take for this PhD, from the development and validation of hybrid implicit and explicit human-AI mechanisms for generating safe behaviours and conducting hazard analysis and risk assessment, to considering human-factors and psychology-related issues that can impact safe interaction, or even experimenting with new forms of embodiment-based social signalling.
Prospective PhD applicants are expected to have a degree in Engineering, Computer Science or Maths with knowledge of Data Science, Machine Learning and AI. Applicants with a background in human-factors and psychology are also welcome. This project will require excellent programming skills with evidence of proficient working knowledge in one or more of the following: C++, C, Java, Python, ROS.
Supervisors: Prof Praminda Caleb-Solly (School of Computer Science) and Prof Carl Macrae (Professor of Organisational Behaviour and Psychology)
For further details and to arrange an interview please contact Prof Praminda Caleb-Solly.
Multimodal Feedback for Assistive Robot-Based Navigation and Dance
To discuss this project please email: praminda.caleb-solly@nottingham.ac.uk
Physical activities such as walking, exercise and dance are not only good for physical well-being, but also mental well-being, particularly when done together with someone.
This research will explore how robotics technology-based mobility devices could be designed and used to connect people in different remote locations to mediate and coordinate physical interaction between them for physical activities such as exercise, walking and dance.
There are a number of avenues to consider in this research, such as: exploring the most intuitive, effective and engaging ways for people to coordinate their movement-based activities via their assistive robots when they are not in the same physical space; investigating what kinds of personalisation methods and input/output modalities are useful to improve the interaction between humans through robotics technology-based mobility devices and enable long-term adaptation to changing needs; examining in what ways interactions are affected by people's accessibility needs, their cultures, communities and the interaction environments; and considering what embodiments or form factors are suitable for such devices.
This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science, Robotics, and Physiotherapy and Dance. Applicants are expected to develop technological advancements in AI and Interaction Design, including machine learning for generating personalised user models for children and adults, adaptive motion planning in social environments, and feedback generation. In addition, the successful student will design, conduct and analyse experiments to investigate the socio-psychological effects of the technologies.
Supervisors: Praminda Caleb-Solly and Paul Tennent
For further details and to arrange an interview please contact Prof. Praminda Caleb-Solly.
Multimodal interfaces to enable multisensory accessible interaction in remote cultural environments through telepresence robots
To discuss this project please email: praminda.caleb-solly@nottingham.ac.uk
Partner: Screen South https://screensouth.org/
Telepresence robots offer a significant digital opportunity for people to remotely access social, work and cultural spaces, autonomously moving around them and giving a feeling of connection and presence. As such, telepresence robots can be a transformative tool in enabling engagement with museums and galleries, making connections and improving wellbeing. For a number of disabled people, and those shielding because of lowered immunity caused by long-term conditions, having the choice to access cultural spaces and interact with people and objects through telepresence robots can offer more freedom and flexibility to be ‘present’ in those locations.
However, the interfaces to control telepresence robots can be cumbersome and inaccessible, particularly for those with sensory and/or physical impairments, making it difficult or impossible for them to use these effectively. We are also interested in exploring how, by combining telepresence robots with other digital devices, such as VR and haptics, we can enable truly immersive multisensory experiences that are accessible to a variety of participants.
The aim of this research is to co-design and test a range of different input and output devices and modalities to develop multisensory interfaces that will enable accessible, smooth and enjoyable control and remote interaction. You will explore the integration and use of speech, head and ear-switches, electromyograms, and gaze, amongst other modalities, for control, and visual, haptic and aural modalities for feedback of information to enable rich and creative experiences of the remote space, people and objects. You will study and develop metrics for evaluating usability and user experience for accessible teleoperation using these modalities and custom devices, as well as developing a best practice framework to support future accessible design. The research will also offer the opportunity to draw on disability studies research to understand the lived experience of using telepresence in different contexts, understanding impact on self-efficacy, identity, social relationships and agency in interactions. This research offers several technical and non-technical strands to explore, based on the candidate’s background, skills and experience.
For further details and to arrange an interview please contact Prof. Praminda Caleb-Solly.
Ambient and Augmented Reality Information Visualisation of Smart Sensor Data for Real-Time Clinical Decision Making
To discuss this project please email: praminda.caleb-solly@nottingham.ac.uk
Partner: Queen’s Medical Centre, University Hospital, Nottingham
In busy clinical environments, particularly where patients have a high level of staff dependency, providing support for clinical staff to improve patient monitoring, triage and management can not only help to ease levels of staff stress, but also potentially improve patient safety. This research will investigate how information to assist with clinical decision making can be presented through creative ambient and/or augmented information displays, and the impact that different modes and modalities have on user cognitive load, attention and efficiency. This research is situated in the use of tangible devices, and ambient and augmented reality displays, exploring topics in information visualisation, sensory substitution, human factors and user experience design. Considering the context of high-pressure environments, such as dementia wards, you will begin the research with a qualitative observational study, scoping the requirements using co-design with clinical and care professionals, before designing, developing and evaluating a range of approaches for representing the required information.
Based on the candidate’s academic background, skills and experience, the research focus can be either on developing intelligent sensing to capture and represent the key information required for decision-making, or design and development of the approaches for displaying it through different means and modalities, or a combination of both.
For further details and to arrange an interview please contact Prof. Praminda Caleb-Solly.
Intelligent sensing and machine learning to adapt social robot assistance to support independent living
To discuss this project please email: praminda.caleb-solly@nottingham.ac.uk
Partner: Robotics For Good CIC https://www.roboticsforgood.co.uk/
Assistive technologies, such as smart home environments, integrated sensors and service robotics, are recognised as emerging tools in helping people with long-term conditions improve their quality of life and live independently for longer. A key aspect of the research into assistive robotics for assisted living is developing contextual and social intelligence for the robot to interact appropriately, safely and reliably in real time. This research concerns developing assistive robot behaviour that incorporates environmental and user data and behaviour as part of an overall intelligent control system architecture.
In addition to having a ‘memory’ of previous interactions and situations, assistive robots need access to information that is current and that provides a dynamic world view of the user (including their emotional state), so that they can provide information and responses that are contextually appropriate. Typical activities for which support can be provided include rehabilitation, medication management, cognitive and social stimulation, and nutrition management. Drawing on information from environmental and activity sensors instrumented into a smart home, together with information about the user’s current physical and emotional state, assistive robots can potentially create value by providing interventions that are more socially intelligent in how, and what, advice and support they provide. Creating a more holistic service that prioritises events based on aspects of health and social circumstance requires an adaptable, intelligent learning system. Building on existing research on intelligent control system architectures, the aim of this research will be to design and test modular semantic memory architectures that can be adapted over time. You will investigate optimal combinations of contextual data, comprising implicit (emotional, physiological) and explicit (interaction) user data, as well as behavioural activity data assimilated from a range of wearable and smart home sensors, to develop adaptive, intelligent and emotionally engaging robot behaviour to support independent living.
Human-Robot Interaction for Real-Life Inspection in Extreme and Factory Scenarios
To discuss this project please email: Ayse.Kucukyilmaz@nottingham.ac.uk
We have two fully funded PhD studentships for talented candidates to join us from the 1st of October 2024.
You will receive an annual tax-free stipend based on the UKRI rate, plus fully funded PhD tuition fees for the four years (Home/UK students only).
The students will work with the Boston Dynamics Spot robot to solve real-life inspection problems in extreme and factory scenarios. We will develop incremental learning methodologies to create context-based policies, not only for navigation but also for error recovery in long-term automation. Human-in-the-loop and teleoperated control methods will be used as the backbone strategy to ensure increasing levels of autonomy during inspection. We will look at human-robot interaction methodologies for the day-to-day operation of the Boston Dynamics Spot mobile inspection robot.
The two projects will be in collaboration with RACE (https://race.ukaea.uk/) and Reckitt (https://www.reckitt.com/).
Learning, user modelling and assistive shared control to support wheelchair users
To discuss this project please email: Ayse.Kucukyilmaz@nottingham.ac.uk
This PhD project will build on the Nottingham Robotic Mobility Assistant, NoRMA (https://github.com/HCRLabRepo/NoRMA), to study triadic learning methodologies for developing effective assistance policies for wheelchair users, supporting their day-to-day activities.
Long term autonomy and mobile inspection of extreme environments with a quadruped robot
To discuss this project please email: Ayse.Kucukyilmaz@nottingham.ac.uk
This PhD project will be in collaboration with RACE (https://race.ukaea.uk/) and aims to develop incremental learning methodologies to create context-based policies, not only for navigation but also for error recovery in long-term automation. Human-in-the-loop and teleoperated control methods will be used as the backbone strategy to ensure increasing levels of autonomy during inspection. We will look at human-robot interaction methodologies for efficient management and optimisation of parallel tasks encountered in the day-to-day operation of the Boston Dynamics Spot mobile inspection robot.
Exploring Bilateral Trustworthiness in Human-Robot Collaboration
To discuss this project please email: Ayse.Kucukyilmaz@nottingham.ac.uk
This PhD studentship will investigate trust from a theory of mind point of view to model a robot’s trustworthiness from the perspective of a human, and vice versa.
Lifelong learning with robotic vacuum cleaners in social spaces
In collaboration with Beko Plc. (https://www.bekoplc.com/), this PhD project will target multiple strands of research in perception, planning, human-in-the-loop learning, and shared control for service robots. The ability to detect and recover from errors during navigation is essential for an autonomous service robot that runs for extended periods of time. In addition, functioning in human settings, these robots should be programmed to adhere to social cues in a context-dependent manner, to enable not only safe but also acceptable functionality.
Inertial Sensor-Based Gesture Recognition for Human-Robot Interaction
To discuss this project please email: Lucas.Fonseca@nottingham.ac.uk
Human-robot interaction (HRI) is a multidisciplinary field that studies how humans and robots can communicate and collaborate effectively and naturally. Gesture recognition is one of the key components of HRI, as it enables humans to use intuitive and expressive body motions to convey commands, intentions, and emotions to robots. However, most of the existing gesture recognition methods rely on vision-based sensors, such as cameras, that have limitations in terms of occlusion, illumination, privacy, and computational cost.
The aim of this PhD project is to develop novel methods for inertial sensor-based gesture recognition for HRI.
The project will involve the following objectives:
Review the state-of-the-art methods and challenges of inertial sensor-based gesture recognition for HRI.
Develop new methods for gesture segmentation, classification, and generation using inertial sensors, such as accelerometers and gyroscopes, worn on the human body (a minimal illustrative sketch follows this list).
Evaluate the performance and usability of the proposed methods on various HRI scenarios and tasks, such as navigation, manipulation, social interaction, and entertainment.
Investigate the human factors and ethical issues of using inertial sensors for gesture recognition for HRI.
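As signposted above, here is a minimal, purely illustrative sketch of a sliding-window IMU gesture-classification pipeline using scikit-learn; the window length, hand-crafted features and choice of classifier are assumptions for illustration only, not a prescribed design.

```python
# Illustrative sketch: windowed features from accelerometer/gyroscope streams fed
# to an off-the-shelf classifier. Window size, features and model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(imu, win=50, step=25):
    """imu: (T, 6) array of [ax, ay, az, gx, gy, gz]; assumes T >= win.
    Returns one feature row per sliding window (mean, std, mean abs difference)."""
    feats = []
    for start in range(0, len(imu) - win + 1, step):
        w = imu[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     np.abs(np.diff(w, axis=0)).mean(axis=0)]))
    return np.array(feats)

def train_gesture_classifier(recordings, labels):
    """recordings: list of (T_i, 6) arrays; labels: one gesture label per recording."""
    X_parts, y_parts = [], []
    for rec, lab in zip(recordings, labels):
        f = window_features(rec)
        X_parts.append(f)
        y_parts.append(np.full(len(f), lab))
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(np.vstack(X_parts), np.concatenate(y_parts))
    return clf
```

The PhD would go well beyond such a baseline, for example towards online segmentation of continuous streams and gesture generation, but the sketch shows the kind of data representation involved.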
The successful candidate will have a strong background in computer science, engineering, or mathematics, with good programming skills in Python or C++ and good knowledge of machine learning. Experience in inertial sensor data processing or human-robot interaction is desirable but not essential. The candidate will be supervised by Dr Lucas Fonseca and Prof Praminda Caleb-Solly from the School of Computer Science, and will have access to state-of-the-art facilities and resources of the CHART research group.
Exploring Human Movement as a Strategy for Human-Machine Interfaces
To discuss this project please email: Lucas.Fonseca@nottingham.ac.uk
Human-machine interfaces (HMIs) are systems that enable humans to interact with machines, such as computers, robots, or assistive devices, using various modalities, such as speech, touch, or gesture. HMIs have many applications in entertainment, education, health, and industry. However, most existing HMIs are based on a small, predefined set of actions, which limits the naturalness and expressiveness of the human-machine interaction. In addition, they often do not take the user's limitations into account.
The aim of this PhD project is to explore the use of human movement as a strategy for designing and evaluating novel HMIs that can adapt to the user’s preferences, context, and goals.
The project will involve the following objectives:
Review the state-of-the-art methods and challenges of using human movement for HMIs.
Develop new methods for capturing, analyzing, and synthesizing human movement data using various sensors, such as inertial sensors, motion capture systems, or cameras.
Design and implement novel HMIs that use human movement as an input or output modality for various tasks and domains, such as gaming, education, or rehabilitation.
Evaluate the usability and user experience of the proposed HMIs using quantitative and qualitative methods.
The successful candidate will have a strong background in computer science, engineering, or design, with good programming skills in Python or C++. Experience in human movement analysis, machine learning, or human-computer interaction is desirable but not essential. The candidate will be supervised by Dr. Lucas Fonseca and Prof Praminda Caleb-Solly from the School of Computer Science, and will have access to the state-of-the-art facilities and resources of the CHART research group.
Bimanual Manipulation of complex objects
To discuss this project please email: luis.figueredo@nottingham.ac.uk
We are currently seeking highly motivated candidates to join our research team for PhD positions in the field of bimanual robotic manipulation. The research will focus on developing intelligent robot algorithms to perform complex tasks commonly encountered in household activities as well as in intelligent manufacturing settings. Tasks involve non-trivial manipulation of objects, often requiring bimanual coordination. For example, everyday activities such as opening child-safe medicine containers demand a bimanual approach, while cutting a workpiece requires using one arm as a fixture while the other performs the cut, followed by a sequence of regrasps to optimise the task. The goal of the research is to design advanced robotic systems capable of performing forceful tasks such as pushing, pulling, puncturing, cutting, and drilling on connected, articulated, and complex objects in a safe, stable, and real-time manner.
Key Objectives:
Investigate the fundamentals of bimanual manipulation, sequential manipulation planning, tool usage, and motion-task planning.
Develop a bimanual system capable of exploring both manipulation and fixture setups, using object-to-robot, robot-to-environment, and object-to-environment contacts.
Implement coordinated manipulation techniques while ensuring the stability of the workpiece in real-time;
Design autonomous and semi-autonomous behaviors for diverse tasks, considering the dynamics of the object and its contents;
Collaborate with other researchers in teaching new tasks through demonstration, task-representation and identification, and motion planning.
Research Focus: The selected candidates will focus their attention on some of the following topics: bimanual manipulation, geometric methods, geometric and force constraint definition and satisfaction, sequential manipulation planning for addressing multiple tasks, tool usage, and task planning. Additionally, potential extensions may include exploring bimanual manipulation of articulated and deformable objects.
Qualifications:
BSc/MSc or equivalent in robotics, computer science, artificial intelligence, or engineering with a focus on artificial intelligence/robotics/control;
Excellent programming skills (e.g., C++, Python, Matlab) and/or experience with machine learning frameworks (e.g., PyTorch);
Good English communication skills and the ability to work collaboratively in a research team;
Strong passion for research and a curious personality.
Desired Experience: Background in robotic manipulation, motion planning, and control, simulation environments, and/or usage of real-robots.
For more information, check the following research papers:
Switching strategy for flexible task execution using the cooperative dual task-space framework (ICRA, 2013)
Manipulation planning under changing external forces (Autonomous Robots, 2020)
Predictive Multi-Agent based Planning and Landing Controller for Reactive Dual-Arm Manipulation (Transactions on Robotics, 2023)
My website: https://www.luisfigueredo.com/
My Youtube Page: https://www.youtube.com/@figueredo_robotics/playlists
To Apply: Interested candidates are invited to submit their application, including a (1) CV with contact info for 2 references, (2) academic transcripts, (3) a cover letter (1-page) outlining their research interests and relevant experience, and (4) a 1-page research proposal including the problem you would like to address, the current state-of-the-art and which methods you think would be applicable.
Application via the link: https://forms.gle/sPr7S5QSpcJRhtA4A
Deadline: 21 January 2024
We look forward to welcoming passionate and dedicated individuals to contribute to cutting-edge research in bimanual robotic manipulation.
Manipulation of complex and unknown objects - Playing and Learning Geometric and Force Constraints
To discuss this project please email: luis.figueredo@nottingham.ac.uk
We are currently seeking highly motivated candidates to join our research team for PhD positions in task definition, identification, and control from a geometric perspective. The research will focus on developing intelligent robot algorithms to perform complex tasks commonly encountered in household activities as well as in intelligent manufacturing settings. Many tasks involve non-trivial manipulation of unknown tools, handles and objects, often requiring exploration under safety constraints. For instance, simple activities such as handling multiple door handles and tilt-and-turn windows demand exploring the system's different articulations with limited force and sequential connection of constraints (one can only open the window once it tilts enough). Similarly, assembly and disassembly tasks in industry often require tactile exploration followed by sequential manipulation.
The goal of this research is to design advanced robotic systems capable of exploring tactile tasks and the corresponding geometric and force constraints in a safe, stable, and real-time manner.
Key Objectives:
Task definition involving different forces and geometric constraints in a geometrically consistent manner;
Data collection for different tasks and task classification and identification with set-based constraints;
Manipulation control with safety certification in terms of constraints satisfaction (Set-based methods) and torque-based methods;
Human studies (in collaboration) for task exploration and transfer to robotic systems whilst ensuring hard system constraints: defining tasks within the previous frameworks;
Task generalisation to similar constraints with hard and soft constraints;
Research Focus: The selected candidates will focus their attention on some of the following topics: geometric methods, geometric and force constraint definition and satisfaction, sequential manipulation planning for addressing multiple tasks, tool usage, and task representation. Additionally, potential extensions may include exploring bimanual manipulation.
Qualifications:
Master's degree in robotics, mechanical engineering, computer science, or a related field.
Excellent programming skills (e.g., C++, Python, Matlab) and experience with robotic simulation environments.
Good communication skills and ability to work collaboratively in a research team.
Desired Experience: Background in robotic manipulation, motion planning, and control, and/or usage of real-robots.
For more information, check the following research papers:
Integrated Bi-Manual Motion Generation and Control shaped for Probabilistic Movement Primitives (Humanoids, 2022, best paper award finalist)
Manipulation planning under changing external forces (Autonomous Robots, 2020)
A Solution to Slosh-free Robot Trajectory Optimization (IROS, 2022)
My website: https://www.luisfigueredo.com/
My Youtube Page: https://www.youtube.com/@figueredo_robotics/playlists
To Apply: Interested candidates are invited to submit their application, including a (1) CV with contact info for 2 references, (2) academic transcripts, (3) a cover letter (1-page) outlining their research interests and relevant experience, and (4) a 1-page research proposal including the problem you would like to address, the current state-of-the-art and which methods you think would be applicable.
Application via the link: https://forms.gle/sPr7S5QSpcJRhtA4A
Deadline: 21 January 2024
We look forward to welcoming passionate and dedicated individuals to contribute to cutting-edge research in task understanding, representation and control.
Natural (language) Learning of Tasks via Human-Robot Interaction
To discuss this project please email: luis.figueredo@nottingham.ac.uk
We are currently seeking highly motivated candidates to join our research team for PhD positions in the field of natural learning in Human-Robot Interaction. The research will focus on developing innovative solutions to address the open problem of teaching robots through human demonstrations, with a particular emphasis on enhancing practical applicability in assistive and industrial tasks.
Project Context: Future assistive robots in care or industrial facilities face diverse tasks that involve direct contact with everyday users. Current approaches to designing plans for complex tasks, such as preprogramming by experienced roboticists, are time-consuming and limiting, especially when considering factors like safety integration, personalisation, environmental changes, and task transfer between robots. To overcome these challenges, this research aims to explore novel solutions that enable robots to learn efficiently from human demonstrations, particularly through different user modalities such as natural language processing grounded in the robot's inherent geometric and force constraints. The selected PhD candidate will investigate how different modalities of interaction impact teaching and learning by demonstration, going beyond simple kinaesthetic teaching.
Research Focus: The research will include studying multimodal integration, such as natural language processing combined with visual information (learning from watching), grounded in human-to-robot manipulation transfer. The goal is to develop a framework that requires minimal time from demonstration to deployment on the robot, and minimal cognitive load and expertise from the human teacher. The research will also involve user studies to assess the acceptability and personalisation of different modalities, and demonstration methods such as shared autonomy and teleoperation.
Qualifications:
Master's degree in robotics, artificial intelligence, mechanical engineering, computer science, or a related field.
Excellent programming skills (e.g., C++, Python, Matlab) and experience with robotic simulation environments.
Strong background, expertise or high interest in machine learning tools.
Good communication skills and ability to work collaboratively in a research team.
Desired Experience: Background in robotic manipulation, motion planning, and control, and/or usage of real-robots.
For more information, check the following research papers:
Integrated Bi-Manual Motion Generation and Control shaped for Probabilistic Movement Primitives (Humanoids, 2022, best paper award finalist)
Latte: Language trajectory transformer (ICRA, 2023)
My website: https://www.luisfigueredo.com/
My Youtube Page: https://www.youtube.com/@figueredo_robotics/playlists
To Apply: Interested candidates are invited to submit their application, including a (1) CV with contact info for 2 references, (2) academic transcripts, (3) a cover letter (1-page) outlining their research interests and relevant experience, and (4) a 1-page research proposal including the problem you would like to address, the current state-of-the-art and which methods you think would be applicable.
Application via the link: https://forms.gle/sPr7S5QSpcJRhtA4A
Deadline: 21 January 2024
We look forward to welcoming passionate and dedicated individuals to contribute to cutting-edge research in task understanding, representation and control.
Biomechanics-Aware Manipulation Planning
To discuss this project please email: luis.figueredo@nottingham.ac.uk
We are currently seeking highly motivated candidates to join our research team for PhD positions in the field of Biomechanics-Aware Manipulation Planning. The research will focus on developing methods that enable robots to predict and adapt to human kinematic and biomechanical responses during collaborative manipulation, enhancing the efficiency and comfort of human-robot collaboration.
Research Context: When humans and robots collaborate in manipulating objects, the robot must consider the kinematic and biomechanical responses of the human to optimize its actions. This project aims to develop a method that predicts both the kinematic and biomechanical response of humans during forceful human-robot collaboration (fHRC). These predictions will then be used to plan robot grasps and configurations that minimize the biomechanical load on the human, specifically by reducing predicted muscular effort, and enhancing ergonomics.
Research Focus: The research will include studying the fundamentals of biomechanics for robotics, focusing on human-arm manipulation capabilities and constraints. This includes studying kinematics, dynamics, and existing biomechanics models. The candidate will use this knowledge to define manipulation regions based on different tasks and to design controllers and planners for exploring these regions, particularly in the context of sequential tasks.
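To make the planning idea concrete, the sketch below frames grasp selection as a search over candidate robot grasps scored by a surrogate human-effort model; the effort function, its inputs and the candidate set are placeholder assumptions standing in for the validated biomechanics models the project would develop.

```python
# Illustrative sketch: pick the robot grasp that minimises a surrogate estimate of
# human muscular effort during a forceful collaborative task. The effort model is
# a toy placeholder, not a validated musculoskeletal model.
import numpy as np

def predicted_human_effort(grasp_pose, human_hand, task_force):
    """Toy surrogate: effort grows with human-robot separation and with the task
    force the human must counteract (stand-in for a biomechanics model)."""
    reach_penalty = np.linalg.norm(grasp_pose - human_hand)
    return reach_penalty * np.linalg.norm(task_force)

def select_grasp(candidate_grasps, human_hand, task_force):
    """Return the candidate grasp position with the lowest predicted human effort."""
    scores = [predicted_human_effort(g, human_hand, task_force) for g in candidate_grasps]
    return candidate_grasps[int(np.argmin(scores))]
```

In the actual project, the surrogate would be replaced by predictions of human kinematic and muscular response, and the search would extend over full robot configurations and task sequences.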
Qualifications:
Master's degree in robotics, artificial intelligence, computer science, biomechanics or a related field.
Excellent programming skills (e.g., C++, Python, Matlab) and experience with robotic simulation environments.
Good communication skills and ability to work collaboratively in a research team.
Desired Experience: Background in robotic manipulation, motion planning, and control, biomechanics and/or usage of real-robots.
For more information, check the following research papers:
Planning to Minimize the Human Muscular Effort during Forceful Human-Robot Collaboration (Transactions on Human-Robot Interaction, 2021)
Manipulation planning under changing external forces (Autonomous Robots, 2020)
My website: https://www.luisfigueredo.com/
My Youtube Page: https://www.youtube.com/@figueredo_robotics/playlists
To Apply: Interested candidates are invited to submit their application, including a (1) CV with contact info for 2 references, (2) academic transcripts, (3) a cover letter (1-page) outlining their research interests and relevant experience, and (4) a 1-page research proposal including the problem you would like to address, the current state-of-the-art and which methods you think would be applicable.
Application via the link: https://forms.gle/sPr7S5QSpcJRhtA4A
Deadline: 21 January 2024
We look forward to welcoming passionate and dedicated individuals to contribute to cutting-edge research in task understanding, representation and control.
Multimodal language understanding for robotic task planning
To discuss this project please email: aly.magassouba@nottingham.ac.uk
Context: Task planning for industrial or service robots consists of decomposing a given task into a sequence of feasible primitive actions. Traditional approaches are based on the representation of behaviour trees with handcrafted algorithms compatible with PDDL (Planning Domain Definition Language). The task to be solved is defined manually in advance. Other approaches offer more flexibility by automatically creating a plan given additional inputs such as CAD models. However, these methods are difficult to generalise, as they have been applied to a very limited and specific set of assembly tasks.
Project: Instead, in this Ph.D. project, we argue that it is more natural to use language instructions to accurately solve a given task. A simple analogy can be made with domestic appliances, which are always accompanied by an instruction manual that details their installation procedure. Therefore, this Ph.D. project aims to solve tasks by decomposing a high-level instruction into a sequence of atomic actions from multimodal inputs (vision, language, etc.). The goal is then to generate a sequence of actions given instructions such as: “Insert the small rotor shaft into the housing of the large rotor. When the two shafts are aligned, place a lid on top of them”. Different learning methods combining large language models, perception, and action will be explored. Applications for industrial (assembly) and service (cooking, manipulation) robots will be developed based on the above methods.
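As a purely illustrative sketch of the intended decomposition step, a high-level instruction could be mapped to a list of primitive actions as shown below; `call_llm` is a hypothetical placeholder (no particular model or API is prescribed by the project), and the primitive vocabulary and JSON schema are assumptions made only for illustration.

```python
# Illustrative sketch of instruction-to-action decomposition. `call_llm` is a
# placeholder for whichever language model the project adopts; the primitive
# action vocabulary below is an assumption.
import json

PRIMITIVES = ["pick", "insert", "align", "place", "screw"]

PROMPT_TEMPLATE = (
    "Decompose the instruction into a JSON list of steps, each of the form "
    '{{"action": one of {prims}, "object": "...", "target": "..."}}.\n'
    "Instruction: {instruction}\nJSON:"
)

def call_llm(prompt):
    """Placeholder: send `prompt` to a language model and return its text output."""
    raise NotImplementedError

def decompose(instruction):
    raw = call_llm(PROMPT_TEMPLATE.format(prims=PRIMITIVES, instruction=instruction))
    steps = json.loads(raw)
    # Keep only steps whose action belongs to the known primitive vocabulary.
    return [s for s in steps if s.get("action") in PRIMITIVES]

# e.g. decompose("Insert the small rotor shaft into the housing of the large rotor. "
#                "When the two shafts are aligned, place a lid on top of them.")
```

The research itself would go further, grounding each step in perception and verifying feasibility before execution, but the sketch shows where language understanding meets symbolic task planning.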
Given the strong multidisciplinary context of this project, applicants are expected to have or develop skills in AI, perception, and language understanding.
Linguistic explanation for bidirectional communication during human-robot interaction
To discuss this project please email: aly.magassouba@nottingham.ac.uk
Context: During an interaction, robots should not only understand the user's intent, but also be able to communicate comprehensibly about their actions and decisions. Bidirectional communication favours interaction and trust towards robots. Paradoxically, despite the recent success of robot learning methods, current neural network models are not adapted to this paradigm, as they are inherently black boxes. For this reason, the research field of explainable AI (XAI) has recently been established. The purpose of XAI is to develop white-box neural networks that can be interpreted and understood. This approach has been particularly used in the computer vision community, with methods such as class activation maps or the attention branch network providing an explanation through visual attention. However, all these approaches have so far been limited to the perception level and have not been grounded in the physical world with robots.
Project: In this PhD project, the applicant will develop models for linguistic explanation by considering multimodal input related to the state of the operator, robot, and environment, and the task being performed. These models would generate a set of sentences that describe the actions/decisions taken by the robot, such as “I cannot reach the blue bin. You are on my path, and I might collide with you”. To generate such sentences, network architectures based on transformers and state-of-the-art LLMs, combining supervised and reinforcement learning, will be developed. Additional applications will be derived from the above approach. More specifically, a summariser engine will be developed to report all the tasks performed by a robot. An ergonomics/safety recommendation engine will also be developed to warn the user about possibly hazardous motion/behaviour, taking human activity into account.
Given the strong multidisciplinary context of this project, applicants are expected to have or develop skills in AI, perception, and language understanding.
Immersive Mixed Reality Interfaces for Remote Telerobotics / Telepresence
To discuss this project please email: nikhil.deshpande@nottingham.ac.uk
Context: Robotics provides an advanced solution to mitigate risks in extreme work environments (e.g., nuclear, disaster response, etc.), through technologies such as remote telerobotics, advanced haptic master devices, and smart sensing and visualisation. This project will develop new software and hardware systems for an immersive 3D user interaction experience for interfacing with robotic systems (e.g., the Franka Emika Panda robot arm, the Boston Dynamics Spot robot, etc.). The project will use, develop, and integrate advanced technologies in VR/AR/MR towards improving the situational awareness of the operator(s), providing an intuitive and intelligent user interface for robotic teleoperation and monitoring in high-risk environments. The project will build on the strong existing technological capabilities in the CHART group, acquired through the successful implementation of high-tech projects in this field. During this programme, the student will develop and utilise their knowledge in:
Real-time 3D reconstruction and tracking of dynamic remote scenes and objects;
Real-time rendering of complex remote information in an immersive MR interface;
Deep learning for semantic remote scene understanding.
Project: This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science and Robotics and will focus on evaluating the usability and user experience aspects of accessible telerobotics using mixed reality. Prospective students should have a degree in Computer Science, Engineering, or other related fields; relevant competencies in computer vision, coding (C/C++, Python), deep learning algorithms and frameworks (e.g., YOLO, TensorFlow), VR software (Unity, Unreal Engine), and VR hardware devices (HTC Vive, Meta Quest) would be beneficial. Experience with robotic software (ROS, Gazebo) and hardware (manipulators, mobile robots) is a plus!
Real-time 3D reconstruction for Motion Planning and Haptic Guidance
To discuss this project please email: nikhil.deshpande@nottingham.ac.uk
Context: In recent years, 3D mapping capabilities in robotics have seen tremendous progress, especially with the proliferation of low-cost RGB-D sensors and powerful computing hardware. Recent investigations into efficient 3D reconstruction methods have facilitated high-quality spatial representation of the environment in real-time. In particular, non-parametric surface representations such as Signed Distance Fields (SDFs) have become the de facto method for high quality 3D mapping, representing the environment in discrete voxels, which store the relevant map information, i.e., distance and gradient. Robotic operations, especially in the vicinity of sensitive or hazardous structures, can utilize the real-time 3D reconstruction approach based on SDFs for the planning task, since: (i) it builds a growing map on-the-fly, (ii) it takes into account previously viewed information that is no longer visible, (iii) it holds distance information implicitly, and (iv) it works with consumer-grade hardware. Similarly, the same approach can be used to estimate distances and forces to nearby objects during remote telerobotic operations, giving a haptic sensation to the user. Such an approach utilizes current as well as historically viewed information for collision avoidance, motion planning, and haptic guidance in cluttered environments, in a “feel-where-you-don’t-see” paradigm.
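As a concrete illustration of how a voxelised SDF supports both distance queries and haptic guidance, the sketch below performs a nearest-voxel distance lookup, a finite-difference gradient, and a simple spring-like repulsive force; the grid layout, force law and parameter values are assumptions made for illustration, not the project's specified implementation.

```python
# Illustrative sketch: nearest-voxel SDF lookup, finite-difference gradient, and a
# simple repulsive force for haptic guidance. Grid layout and force law are assumptions.
import numpy as np

class VoxelSDF:
    def __init__(self, sdf, origin, voxel_size):
        self.sdf = sdf          # (X, Y, Z) array of signed distances in metres
        self.origin = origin    # world coordinates of voxel (0, 0, 0)
        self.h = voxel_size     # edge length of a voxel in metres

    def _index(self, p):
        # Nearest voxel index, clipped so central differences stay in bounds.
        idx = np.clip(np.round((p - self.origin) / self.h).astype(int),
                      1, np.array(self.sdf.shape) - 2)
        return tuple(idx)

    def distance(self, p):
        return float(self.sdf[self._index(p)])

    def gradient(self, p):
        i, j, k = self._index(p)
        g = np.array([self.sdf[i + 1, j, k] - self.sdf[i - 1, j, k],
                      self.sdf[i, j + 1, k] - self.sdf[i, j - 1, k],
                      self.sdf[i, j, k + 1] - self.sdf[i, j, k - 1]]) / (2 * self.h)
        n = np.linalg.norm(g)
        return g / n if n > 0 else g

def repulsive_force(sdf, p, d_safe=0.10, stiffness=50.0):
    """Push away from obstacles when closer than d_safe (spring-like law)."""
    d = sdf.distance(p)
    if d >= d_safe:
        return np.zeros(3)
    return stiffness * (d_safe - d) * sdf.gradient(p)
```

The same distance and gradient queries would feed collision-aware motion planning; a production system would typically use trilinear interpolation and incremental map updates rather than this minimal lookup.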
The project will build on the strong existing technological capabilities in the CHART group, acquired through the successful implementation of high-tech projects in this field. During this program, the student will develop and utilize their knowledge in:
Real-time 3D reconstruction and tracking of dynamic remote scenes and objects;
Real-time processing and interfacing with haptic feedback and force estimation;
Project: This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science and Robotics and will focus on evaluating the usability and user experience aspects of accessible telerobotics using real-time 3D vision and haptics. Prospective students should have a degree in Computer Science, Engineering, or other related fields; relevant competencies in computer vision, coding (C/C++, Python), deep learning algorithms and frameworks (e.g., YOLO, TensorFlow), VR software (Unity, Unreal Engine), and VR hardware devices (HTC Vive, Meta Quest) would be beneficial. Experience with robotic software (ROS, Gazebo) and hardware (manipulators, mobile robots) is a plus!
Human-in-the-loop Natural Language Interaction for Remote Telerobotics
To discuss this project please email: nikhil.deshpande@nottingham.ac.uk
Context: The adaptable, human-assistant paradigm in telerobotics and telepresence applications becomes even more useful as well as complex when we consider interfaces for expert as well as non-expert users, especially in multi-person/multi-robot remote telepresence scenarios. Mixed Reality interfaces for surgeons in telesurgery, for patients in rehabilitation, and for teleoperation in space, industry, and disaster response are diverse applications, but bear the same requirement of intuition and immersion for the user, through an ergonomic interface. One of the ways of achieving that is using natural language interaction (NLI) to increase the robot autonomy and user assistance in teleoperation tasks, thereby reducing the cognitive burden on the user. For example, “grasp the cup in the bottom half for better grip” is a natural language command that the robot should be able to execute autonomously. Such NLI can also help improve deep learning-based semantic scene understanding and object tracking outcomes and accuracies. Visualized scenes can be represented with generated mesh models in the MR interface using high-level encoding, i.e., real-time text-to-scene description and 3D processing (“the vessel at the top right of the field-of-view feels softer than the one on the left”, “the object at the bottom is a chair with a broken arm lying on its side”, etc.). This would help improve the 3D reconstruction and representation outcomes of the system, while also reducing the real-time 3D data streaming requirements.
Prospective students should have a degree in Computer Science, Engineering, or other related fields; relevant competencies in natural language processing and large language model APIs (ChatGPT, Bard, etc.), coding (C/C++, Python), deep learning algorithms, VR software (Unity, Unreal Engine), and VR hardware devices (HTC Vive, Meta Quest) would be beneficial. Experience with robotic software (ROS, Gazebo) and hardware (manipulators, mobile robots) is a plus!
Application Requirements for Fully Funded Studentships in the School of Computer Science
NOW CLOSED - Please look again in January 2025
Entry Requirements:
Qualification Requirement: Applicants are normally expected to have a 2:1 Bachelors or Masters degree, or international equivalent, in a related discipline.
International and EU equivalents: We accept a wide range of qualifications from all over the world. For information on entry requirements from your country, see our country pages.
An IELTS score of 6.5 (with 6.0 in each element) or another English Language qualification is also required for candidates who do not have English as a first language. Any offer will be subject to the University admissions requirements.
Application process:
Please check your eligibility against the entry requirements prior to proceeding.
If you are interested in applying, please contact potential supervisors to discuss your research proposal. If the supervisor wishes to support your application post interview, they will direct you to make an official application through the MyNottingham system. You will be required to state the name of your supervisor and the studentship reference number in your application.
DO NOT SUBMIT your application via the MyNottingham platform without having first confirmed the support of a supervisor.
Please email the person/people named next to the topic you are interested in with an up-to-date copy of your CV, marks transcripts, and a cover email explaining why you will be suitable for the selected PhD topic. Based on this information you will be invited to an informal discussion. You will then be invited to submit a short research proposal to your potential supervisor, and following this, an interview with your potential supervisory team. Following a successful interview, you will then be informed whether to proceed with a formal application on My Nottingham.
Topics
Safety
Analysis of the impact of cognitive loading and distractions during human-robot collaboration for assistive tasks (Praminda Caleb-Solly)
Linguistic explanation for bidirectional communication during human-robot interaction (Aly Magassouba)
Embodied intelligence and sensing
Intelligent sensing and machine learning to improve the diagnosis and treatment of children with movement disorders (Alex Turner)
Design of smart actuated sensing devices and environments to support cognitive function/diagnostics in assisted living contexts (Praminda Caleb-Solly/Armaghan Moemeni)
Cyber-physical Space in Personalised Ambient Assisted Living (AAL) - Digital Twin/Blockchain/Machine Learning (Armaghan Moemeni)
Intelligent sensing to measure human trust using physiological sensing in virtual reality - for application of cognitive training and support (Armaghan Moemeni)
Accessible Interaction
Enhancing usability of augmented reality interfaces for cognitive support (Praminda Caleb-Solly/Armaghan Moemeni)
Modular robotics
Reconfigurable modular rehabilitation robots to monitor and manage frailty (Praminda Caleb-Solly)
Telepresence and Teleoperation
Multimodal real-time feedback (haptic, auditory, visual) for teleoperation of assistive and rehabilitation tasks (Praminda Caleb-Solly)
Immersive Mixed Reality Interfaces for Remote Telerobotics / Telepresence (Nikhil Deshpande)
Human-in-the-loop Natural Language Interaction for Remote Telerobotics (Nikhil Deshpande)
Autonomous and tele-manipulation
Improving autonomous complex robot manipulation capabilities that go beyond just grasping (Ayse Kucukyilmaz)
Shared and traded control
Modulation of levels of autonomy in human-robot teamwork through shared and traded autonomy paradigms (Ayse Kucukyilmaz)
Real-time 3D reconstruction for Motion Planning and Haptic Guidance (Nikhil Deshpande)
Assisted Mobility
Designing and developing learning-based methodologies for wheelchair driving assistance (Ayse Kucukyilmaz)
Enhancing driving performance and safety using AR and haptics technologies in robotic wheelchairs (Ayse Kucukyilmaz)
Multimodal feedback for shared control of Early Years Powered Mobility (children's wheelchairs) to support independent mobility (Praminda Caleb-Solly)
Manipulation for Human-Robot Interaction
Natural (language and ergonomics) Learning of Tasks via Human-Robot Interaction (Luis Figueredo)
Manipulation of complex and unknown objects - Playing and Learning Geometric and Force Constraints (Luis Figueredo)
Bimanual Manipulation of complex objects (Luis Figueredo)
Safety-aware motion planning (Luis Figueredo)
Biomechanics-aware multi-arm manipulation for sequential planning (Luis Figueredo)
Planning for Human-Robot Interaction
Multimodal language understanding for robotic task planning (Aly Magassouba)
Where to find us
We are located in the Cobot Maker Space in the Nottingham Geospatial Institute on Jubilee Campus, University of Nottingham.