Venue: Florey Room, Wolfson College, Oxford, UK
Date: June 14, 2019. 10:00am - 12:30pm
Advances in machine learning, AI & data science are already transforming our ability to work with big data and complex information. The 1st Oxford Cross-disciplinary Applications of Machine Learning (OxfordXML) Workshop will bring together researchers from a range of disciplines to learn about the techniques, applications and impact of machine learning in their respective fields, such as civil engineering, healthcare, biomedicine, and psychology.
This is an open event.
Registration is not required.
Recent years have seen exponential growth in the development of efficient strategies for data-driven decision making. This is particularly true in healthcare and engineering, where effective monitoring systems have been developed to track the health status of people and of critical engineering structures (e.g. structural health monitoring) in order to detect anomalies in the data and suggest preventive actions to restore normal conditions. Nonetheless, the majority of research in health monitoring focuses on developing advanced Machine Learning algorithms that extract information from the data under the assumption that the data collected are reliable, if possibly noisy. However, the electronic equipment used in the monitoring system can itself be faulty, and therefore the data may display patterns that do not represent the behaviour of the system being monitored. Problems in the monitoring equipment during assembly or operation may go unnoticed and can lead to incorrect planning of preventive actions. This talk will present a strategy for detecting failures in monitoring systems by combining information from recorded sensor data and failure reports, exploiting Machine Learning and Natural Language Processing techniques. Two approaches will be presented to address two case studies: failure detection in a low-cost wearable device and in a low-cost monitoring system for vehicles.
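As a rough illustration of the idea of fusing sensor data with report text, the sketch below combines numeric sensor features and free-text failure reports in a single classifier. The column names, the toy data, and the choice of TF-IDF plus logistic regression are illustrative assumptions, not the speaker's actual pipeline.

```python
# Minimal sketch: fusing numeric sensor features with free-text
# failure reports in one classifier. All names and data are
# illustrative assumptions, not the talk's method.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: per-device sensor statistics plus the
# text of any associated failure report.
df = pd.DataFrame({
    "mean_signal": [0.92, 0.88, 0.15, 0.11],
    "signal_std":  [0.05, 0.07, 0.40, 0.55],
    "report_text": ["device ok", "no issues reported",
                    "sensor came loose", "battery contact fault"],
    "faulty":      [0, 0, 1, 1],
})

# Numeric features pass through; report text is vectorised with TF-IDF.
features = ColumnTransformer([
    ("sensors", "passthrough", ["mean_signal", "signal_std"]),
    ("reports", TfidfVectorizer(), "report_text"),
])

model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(df.drop(columns="faulty"), df["faulty"])
print(model.predict(df.drop(columns="faulty")))  # predicted fault labels
```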
There is a vast network of buried infrastructure and services in the UK, comprising water, sewer, gas and electricity, which extends to well over 1.5 million km. Microtunnelling is an increasingly popular means of constructing these underground utilities compared to traditional 'open cut' methods. The proliferation of data collected by tunnel boring machines presents a significant opportunity to give site engineers meaningful information upon which to make informed and timely decisions. This talk will explore the potential for Gaussian Processes (GPs) to forecast the performance of a tunnel boring machine during microtunnelling and to form part of an early warning system that helps avoid adverse responses on site. The GP forecasts will also be appraised through comparisons to existing empirical models currently used by industry, as well as to monitored data from two live UK construction sites.
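For readers unfamiliar with GP forecasting, the sketch below fits a Gaussian Process to a univariate performance signal and predicts ahead with uncertainty estimates. The choice of signal (advance rate versus chainage), the kernel, and the synthetic data are assumptions for illustration only; the models appraised in the talk may differ.

```python
# Minimal sketch of GP forecasting for a TBM performance signal,
# under assumed signal and kernel choices (not the talk's models).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for monitored data: advance rate vs. chainage (m).
chainage = np.linspace(0, 50, 25).reshape(-1, 1)
advance_rate = 30 + 5 * np.sin(chainage / 8).ravel() + rng.normal(0, 0.5, 25)

# Smooth trend plus observation noise.
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(chainage, advance_rate)

# Forecast ahead of the current position; the predictive standard
# deviation is what an early warning system could threshold on.
ahead = np.linspace(50, 60, 10).reshape(-1, 1)
mean, std = gp.predict(ahead, return_std=True)
print(mean[:3], std[:3])
```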
Communication devices and connectivity are increasingly ubiquitous, contributing to a rapidly expanding infrastructure. This development promises to tackle one of the most challenging issues facing society today: how healthcare is delivered to an aging and expanding population. Each year, millions of people worldwide suffer from chronic long-term diseases, such as diabetes, heart disease, and kidney malfunction, with limited access to appropriate treatment. Machine learning plays a key role in determining how effectively healthcare will be delivered to future generations. Reliable continuous tracking of patient health can provide accurate early warning of health deterioration. "Big data" (e.g., electronic health records and data from wearable devices for monitoring chronic diseases and well-being) are now being collected, covering the entirety of patient care throughout a patient's life. It is therefore necessary to develop novel machine learning methods that exploit the contents of these large, complex datasets by performing robust, automated inference at very large scale.
The gold standard for assessing whether a baby is at risk of oxygen starvation during childbirth is continuous monitoring of the fetal heart rate with cardiotocography (CTG), comprising two time series: fetal heart rate and contraction strength. The goal of monitoring is to identify babies who could benefit from an emergency operative delivery (e.g., Cesarean section) in order to prevent death or permanent brain injury. The long, dynamic and complex CTG patterns are poorly understood, and their interpretation is known to have high false positive and false negative rates. Visual interpretation by clinicians is challenging, and reliable, accurate fetal monitoring in labour remains an enormous unmet medical need. Our team has acquired a uniquely large and detailed cohort of routinely collected data during labour at Oxford (all monitored births between Apr'93 and Dec'18). We have already developed a basic computerized, data-driven prototype for CTG evaluation: OxSys 1.5 (Georgieva et al. 2017, "Computerized data-driven interpretation of the intrapartum cardiotocogram: a cohort study", Acta Obstet Gynecol Scand 96(7)). It performs comparably to CTG evaluation by doctors in clinical practice, but it is based on only a few clinical and CTG features, and further improvements are needed. The size of our database confers scope for substantial improvement of OxSys, and we are working towards developing a much more sophisticated OxSys 2.0.
In this talk I will present our work on the first application of deep learning to the analysis of the CTG. I will demonstrate that Multimodal Convolutional Neural Networks hold potential for predicting newborn compromise at birth and that further work is warranted. Furthermore, I will discuss why our deep learning models are currently not suitable for detecting certain severe fetal injuries that form a heterogeneous, small, and poorly understood group. We suggest that the most promising way forward is a hybrid approach to CTG interpretation in labour, in which different diagnostic models estimate the risk of different types of fetal compromise, incorporating clinical knowledge with data-driven analyses.
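To make the architecture concrete, the sketch below shows one plausible shape for a multimodal 1-D CNN over the two CTG channels: a separate convolutional branch per signal, fused before a binary output. Layer sizes, pooling, and input length are illustrative assumptions, not the published model.

```python
# Minimal sketch of a multimodal 1-D CNN for CTG: one branch per
# signal (fetal heart rate, contractions), fused for a binary output.
# All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalCTGNet(nn.Module):
    def __init__(self):
        super().__init__()
        def branch():
            # Convolutional feature extractor for one time series.
            return nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
        self.fhr_branch = branch()
        self.toco_branch = branch()
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, fhr, toco):
        # Each input: (batch, 1, time). Fuse pooled features from both branches.
        z = torch.cat([self.fhr_branch(fhr).flatten(1),
                       self.toco_branch(toco).flatten(1)], dim=1)
        return self.head(z)  # logit for compromise at birth

net = MultimodalCTGNet()
fhr = torch.randn(8, 1, 4800)   # e.g. ~20 min of 4 Hz fetal heart rate
toco = torch.randn(8, 1, 4800)  # contraction (tocodynamometry) signal
print(net(fhr, toco).shape)     # torch.Size([8, 1])
```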
Humans learn to perform many different tasks over their lifespan. In machine learning, this "continual learning" is a major unsolved challenge, as artificial neural networks, in contrast to humans, suffer from catastrophic interference when learning new tasks. I'll present a summary of recently published work, as well as preliminary findings from an ongoing project, in which we examined the choice patterns and neural responses of humans and of state-of-the-art neural networks while they were learning to perform multiple categorisation tasks. Humans benefited from sequential learning of one task at a time, which seemed to allow them to learn optimally segregated representations of each task. In contrast, neural networks were only able to learn both tasks when trained in an interleaved fashion. We found that interference under sequential training occurs predominantly in deep layers of the network, which encode abstract, task-relevant variables. Furthermore, we discovered that humans with a strong prior bias to represent the stimuli in a manner beneficial for rule learning benefited even more from a sequential training curriculum. We then trained neural networks to develop a similar prior bias in their early layers and observed that they suffered less interference between sequentially learned tasks. We have now begun to collect neuroimaging data to formally compare how task representations differ between biological and artificial information-processing systems, and to obtain empirical evidence for a mechanistic explanation of this hallmark of human cognition.
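The sequential-versus-interleaved contrast can be reproduced in miniature. The sketch below trains the same small network under both curricula on two synthetic binary tasks; the tasks, architecture, and schedule are toy assumptions and not the study's code.

```python
# Toy sketch of sequential vs. interleaved training on two synthetic
# binary tasks. Everything here is an illustrative assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # A linearly separable task centred at `shift`.
    X = torch.randn(200, 2) + shift
    y = (X[:, 0] > shift[0]).float().unsqueeze(1)
    return X, y

task_a = make_task(torch.tensor([0.0, 0.0]))
task_b = make_task(torch.tensor([3.0, 3.0]))

def train(batches):
    net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.Adam(net.parameters(), lr=0.01)
    loss_fn = nn.BCEWithLogitsLoss()
    for X, y in batches:
        opt.zero_grad()
        loss_fn(net(X), y).backward()
        opt.step()
    return net

def accuracy(net, task):
    X, y = task
    return ((net(X) > 0).float() == y).float().mean().item()

# Sequential: all of task A, then all of task B (A risks interference).
sequential = train([task_a] * 100 + [task_b] * 100)
# Interleaved: alternate between tasks throughout training.
interleaved = train([task_a, task_b] * 100)

for name, net in [("sequential", sequential), ("interleaved", interleaved)]:
    print(name,
          "task A acc:", round(accuracy(net, task_a), 2),
          "task B acc:", round(accuracy(net, task_b), 2))
```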