Human Activity Recognition (HAR) is an important component of assistive technologies; however, HAR technologies have not seen wide adoption in our homes. Two main hurdles are the expensive infrastructure requirement and the reliance on supervised learning. Much HAR research has been carried out assuming an environment embedded with sensors, and the majority of HAR technologies use supervised approaches, where labeled data are available to train the expert system. In reality, our natural living environments are not embedded with sensors, and labeled data are not available. We are developing a framework for autonomous HAR suitable for our natural living environments, i.e., sensor-less homes. The framework uses an unsupervised learning approach to enable a robot, acting as a mobile sensor hub, to autonomously collect data and learn the different human activities without requiring manual (human) labeling of the data.
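The unsupervised idea can be illustrated with a minimal sketch: windows of sensor readings are summarised as feature vectors and grouped by a simple k-means clustering, so activity categories emerge without any labels. The feature values and activity names below are hypothetical, and the deterministic centroid initialisation is a simplification chosen for clarity; they are not the project's actual pipeline.

```python
def kmeans(points, k, iters=50):
    """Minimal k-means: cluster fixed-length feature vectors into k groups."""
    # Deterministic initialisation: pick k evenly spaced points as centroids.
    centroids = [points[i * len(points) // k] for i in range(k)]
    dist2 = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist2(p, centroids[j]))].append(p)
        # Recompute each centroid as the mean of its group.
        for j, g in enumerate(groups):
            if g:
                centroids[j] = [sum(col) / len(g) for col in zip(*g)]
    labels = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points]
    return labels, centroids

# Hypothetical sensor windows summarised as (mean magnitude, variance) features:
# low-motion windows (e.g. sitting) versus high-motion windows (e.g. walking).
still   = [(1.0 + 0.01 * i, 0.02) for i in range(10)]
walking = [(2.5 + 0.01 * i, 0.80) for i in range(10)]
labels, _ = kmeans(still + walking, k=2)
```

In a real deployment the robot would accumulate such windows over time and assign human-readable names to the discovered clusters afterwards, which is where the "no manual labeling during collection" benefit comes from.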
Smart devices in an IoT system, such as a smart home, either connect through their own proprietary server running the server-side applications, or connect through the user's home network, where a central device runs the necessary server-side applications. The second approach is inflexible and not user-friendly to set up. An increasing number of systems take the first approach of having their own cloud server. However, not all developers are capable of hosting a cloud server that caters for a large volume of users, which restricts the development of large-scale IoT systems to large companies. There is also the concern of being tied to a proprietary service. To address these issues and to allow amateur developers to build large-scale IoT systems, we are developing a new form of connectivity for IoT systems that exploits storage-cloud services already widely used by the general public, such as Dropbox and Google Drive.
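One way to picture this connectivity is message passing through a shared, cloud-synced folder: a device drops a message file into the folder, the storage service replicates it, and other devices poll for files they have not yet processed. The sketch below simulates the idea with a local temporary directory standing in for the synced Dropbox/Google Drive folder; the `publish`/`poll` functions and message layout are hypothetical illustrations, not the project's actual protocol.

```python
import json
import tempfile
import time
import uuid
from pathlib import Path

def publish(folder: Path, device_id: str, payload: dict) -> Path:
    """Drop a message file into the shared folder; the storage service's
    own sync mechanism replicates it to the other devices."""
    msg = {"from": device_id, "ts": time.time(), "payload": payload}
    path = folder / f"{device_id}-{uuid.uuid4().hex}.json"
    path.write_text(json.dumps(msg))
    return path

def poll(folder: Path, seen: set) -> list:
    """Pick up messages that have not been processed yet."""
    new = []
    for path in sorted(folder.glob("*.json")):
        if path.name not in seen:
            seen.add(path.name)
            new.append(json.loads(path.read_text()))
    return new

# Demo: a temporary directory stands in for the cloud-synced folder.
with tempfile.TemporaryDirectory() as d:
    shared = Path(d)
    publish(shared, "lamp-01", {"state": "on"})
    seen = set()
    messages = poll(shared, seen)   # first poll sees the new message
    again = poll(shared, seen)      # second poll sees nothing new
```

The appeal of this design is that the hard part (replication, authentication, availability at scale) is delegated to the storage provider, so a small developer never has to host a server.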
Self-driving car technologies are growing and maturing, and will be part of the future transport system. We are initiating our venture into this domain in line with our interest in robot navigation. In this project, we will build a prototype self-driving car and use it to conduct research on various aspects of self-driving cars.
Deep learning needs large amounts of data for training; however, in some industrial applications, a sufficient amount of data may not be available, limiting the deep learning approach. Modern techniques such as transfer learning and generative adversarial networks show promise in addressing this challenge. The objective of this project is to propose new techniques for training deep learning models when data are scarce.
Deep learning networks are susceptible to a butterfly effect, wherein small alterations in the input data can lead to drastically different outcomes, making them inherently volatile. Thus, the output of a deep learning network may be controlled by altering its input or by adding noise: research has shown that it is possible to fool a deep learning network by adding an imperceptible amount of noise to the input.
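A classic instance of such an input-noise attack is the Fast Gradient Sign Method (FGSM): perturb each input feature by a small step in the direction that increases the loss. The sketch below applies it to a toy logistic model rather than a deep network, and the weights, input, and step size are made-up values chosen so the effect is visible; a real attack would use a far smaller, visually imperceptible step on an image classifier.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1 under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """FGSM: x' = x + eps * sign(dL/dx), moving the input in the
    direction that increases the cross-entropy loss."""
    p = predict(w, b, x)
    # For cross-entropy loss, dL/dx_i = (p - y) * w_i.
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

w, b = [2.0, -1.5], 0.0
x, y = [0.5, 0.4], 1                # clean input, true label 1
x_adv = fgsm(w, b, x, y, eps=0.2)
clean_pred = predict(w, b, x)       # above 0.5: classified as 1
adv_pred   = predict(w, b, x_adv)   # below 0.5: flipped to class 0
```

Each coordinate moved by only 0.2, yet the predicted class flips, which is the volatility the paragraph describes.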
Generative Adversarial Networks (GANs) may have the potential to solve the text-to-image problem, but there are challenges in using GANs for NLP. Image classification has benefited from large mini-batches, and one of the open questions (https://distill.pub/2019/gan-open-problems/#batchsize) is whether large batches can also help to scale GANs.
Grant UBD/RSCH/1.11/FICBF/2018/002. In this research project, we aim to study the driving patterns of drivers using simulated and sensor data, identifying detailed driving parameters such as distance from a traffic light, pressure applied during braking or acceleration, and the acceleration itself. These actual driving patterns along any road are not easily observed or measured by an analyst. By identifying different profiles (such as safe or unsafe driving) in the driving patterns, a system to warn the driver can be implemented. __ We have collected time series data and are currently looking for a suitable PhD student to analyse the data, applying AI, Machine Learning and/or Deep Learning techniques. (FOS/IADA/SDS project)
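As a flavour of what analysing such time series involves, the sketch below derives acceleration from a speed trace by finite differences and flags hard-braking events, one simple ingredient of a safe/unsafe driving profile. The 1 Hz trace and the −3 m/s² threshold are hypothetical illustrations, not values from the collected dataset.

```python
def accelerations(speeds_mps, dt=1.0):
    """Finite-difference acceleration (m/s^2) from speed samples
    taken dt seconds apart."""
    return [(b - a) / dt for a, b in zip(speeds_mps, speeds_mps[1:])]

def hard_braking_events(speeds_mps, dt=1.0, threshold=-3.0):
    """Indices of intervals whose deceleration exceeds a (hypothetical)
    hard-braking threshold."""
    return [i for i, a in enumerate(accelerations(speeds_mps, dt))
            if a <= threshold]

# Hypothetical 1 Hz speed trace (m/s): cruising, then a sudden stop
# before a traffic light.
trace = [14.0, 14.2, 14.1, 10.0, 5.5, 1.0, 0.0]
events = hard_braking_events(trace)   # intervals 2, 3 and 4 qualify
```

Counting and contextualising such events (e.g. how close to the traffic light they occur) is the kind of feature a learned driver-warning profile could build on.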
1) Grant: UBD/RSCH/1.11/FICBF(b)/2019/001. In this project, we aim to develop an evolutionary algorithm capable of performing human activity analysis for an autonomous robot. (IADA/SDS project) __ 2) Research in developing novel fuzzy, evolutionary and/or deep algorithms for data clustering (feature selection, metric learning, kernel-based, constraint-based, and graph-based approaches) or optimisation is also of interest. (SDS project)
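For readers unfamiliar with evolutionary algorithms, the core loop is small: keep a candidate solution, generate mutated offspring, and keep the best. The sketch below is a generic (1+λ) evolution strategy minimising a toy objective; the function, dimensions, and hyperparameters are illustrative stand-ins, not the algorithms under development in these projects.

```python
import random

def evolve(fitness, dim, generations=200, offspring=20, sigma=0.3, seed=1):
    """(1+lambda) evolution strategy: generate `offspring` Gaussian
    mutations of the parent each generation and keep the parent only
    if no child improves on it (elitism)."""
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    best = fitness(parent)
    for _ in range(generations):
        children = [[x + rng.gauss(0, sigma) for x in parent]
                    for _ in range(offspring)]
        child = min(children, key=fitness)
        f = fitness(child)
        if f < best:                  # minimisation
            parent, best = child, f
    return parent, best

# Toy objective: the sphere function, whose minimum is 0 at the origin.
sphere = lambda v: sum(x * x for x in v)
solution, value = evolve(sphere, dim=2)
```

Real variants replace the toy objective with a clustering or activity-analysis quality measure and add richer operators (crossover, adaptive mutation), but the select-mutate-evaluate loop stays the same.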
1) In this project, we plan to create a predictive model for determining patients' risk of non-communicable diseases such as cancer or cardiovascular diseases. (IHS/IADA/SDS project) __ 2) Developing learning frameworks for decision-making support in geoscience problems, such as prediction of TOC and other geochemical properties, caprock modelling, and characterisation of source rock, reservoir and facies, using data science and artificial intelligence. (Geology/IADA/SDS project) __ 3) The aim of this project is to characterise process parameters relating to biomass processes using AI and Machine Learning to gain more insights and information about the processes from FTIR, HDLC and other data. __ 4) In this project, we develop novel unsupervised and deep algorithms for extracting meaningful features to solve natural language processing and text categorisation problems. (SDS)
Deep learning excels at training powerful models from fixed datasets and stationary environments, often exceeding human-level ability. Yet these models fail to emulate human learning, which is robust, incremental, compositional, constructive, and able to predict from sequential experience and reason beyond it. This project investigates mechanisms that can encode, recall and exploit the diverse past experiences of an agent to imagine the future and solve non-trivial problems creatively. Approaches range from the most granular level, with gradient-based methods, to the architectural level, with modular, memory-based, and meta-learning methods. The project will help us understand the theoretical capabilities and limitations of AI, and will also pave the way towards creating AI systems that are explainable, controllable, reliable, and dependable, a major challenge with existing AI. Methodologies include dynamical systems, neuro-symbolic AI, episodic memory, predictive coding, reinforcement learning, self-supervised learning, and self-organizing networks.
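The encode-recall-exploit cycle at the heart of the project can be sketched in its simplest possible form: an episodic memory that stores experiences and recalls the most similar past one by nearest-neighbour lookup. The class, states, and outcome labels below are hypothetical illustrations, far simpler than the memory mechanisms the project studies.

```python
import math

class EpisodicMemory:
    """Store (state, outcome) episodes and recall the most similar past state."""

    def __init__(self):
        self.episodes = []

    def encode(self, state, outcome):
        """Record an experience: a state vector and what happened there."""
        self.episodes.append((state, outcome))

    def recall(self, state):
        """Nearest-neighbour lookup by Euclidean distance over stored states."""
        return min(self.episodes, key=lambda ep: math.dist(ep[0], state))

mem = EpisodicMemory()
mem.encode((0.0, 0.0), "safe")
mem.encode((5.0, 5.0), "obstacle")
state, outcome = mem.recall((4.6, 5.2))   # closest to the obstacle episode
```

Exploiting the recalled outcome to predict what a novel-but-similar situation holds is the seed of the "imagine the future from past experience" idea, which the project develops with far richer encodings and learning rules.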
How do humans imitate, learn and recycle skilled actions, including the use of tools? This project develops different representations of the body, movement, and peri-personal space to enable learning of dynamic task-specific trajectory planning, body-chain engagement, motor control, and prediction. The project investigates how the motor knowledge acquired by a robot while learning a skill such as drawing can be recycled in a completely different skill such as tool use. It will aim to design a full-blown procedural memory and to identify mechanisms that allow humans (and robots) to generate any form of complex action at runtime through shape compositionality and the recycling of stable/metastable movement patterns that emerge at phase transitions in different neural states. Methodologies employed include neuro-symbolic AI, predictive coding, control theory, reinforcement learning, deep learning transformers, and self-organizing networks.
Language is grounded in sensory-motor experiences, not word embeddings. Agents must interact physically with their world to grasp the essence of words and context. This project builds agents (and interactive robots) in simulated environments that learn and understand language via multisensory grounding (and robotic embodiment). The goals are two-fold. First, the project uses computational models to gain insight into how children acquire language over the course of development; by building and testing models in near-realistic environments, we test psychological theories of human cognition and language acquisition. Second, we build a new generation of instructible language models with richer semantic representations, leading to more intelligent machine behavior. Methodologies employed include dynamic field theory, neuro-symbolic AI, deep reinforcement learning, and transformer networks.