| Speaker | Artur Garcez |
|---|---|
| Affiliation | City, University of London |
| Date | Friday, 30 November 2018 |
| Time | 13:00–14:00 |
| Location | Roberts G08 |
| Link | Zoom |
| Event series | DeepMind/ELLIS CSML Seminar Series |
| Abstract | Deep learning has achieved great success at image and audio analysis, language translation and multimodal learning. Recent results, however, indicate that deep networks are susceptible to adversarial examples and are neither robust nor capable of extrapolation. To address this problem, much research has turned to Artificial Intelligence systems capable of harnessing knowledge as well as learning from large data sets. Neural-symbolic computing has sought to benefit from such a combination of symbolic AI and neural computation for many years. In a neural-symbolic system, neural networks offer machinery for efficient learning and computation, while symbolic knowledge representation and reasoning offer the ability to benefit from prior knowledge, transfer learning and extrapolation, and to produce explainable neural models. Neural-symbolic computing has found application in many areas, including software specification evolution, training and assessment in simulators, and the prediction and explanation of pathways to harm in gambling. In this talk, Professor Garcez will introduce the principles of neural-symbolic computing and exemplify its use with defeasible knowledge representation, temporal logic reasoning and relational learning. He will then focus on Logic Tensor Networks (LTN), a neural-symbolic system that combines deep networks with first-order many-valued logic. LTNs have been implemented in TensorFlow and applied successfully to semantic image interpretation and knowledge-completion tasks, achieving state-of-the-art performance. Time permitting, he will also outline the main challenges for research in AI and neural-symbolic integration going forward. |
| Biography | |
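To make the LTN idea in the abstract concrete, here is a minimal sketch of how a neural-symbolic system can ground first-order many-valued logic in differentiable functions: predicates become learnable models with outputs in [0, 1], connectives become fuzzy operators, and a quantified formula's degree of satisfaction becomes a differentiable quantity that can be maximised by gradient descent. This is an illustrative toy in plain NumPy, not the actual LTN/TensorFlow API; the predicates, weights and fuzzy operators chosen here are assumptions for the example.

```python
import numpy as np

def predicate(weights, bias, x):
    """A learnable predicate P(x): a linear model squashed into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

def fuzzy_implies(a, b):
    """Reichenbach fuzzy implication: 1 - a + a*b."""
    return 1.0 - a + a * b

def forall(truth_values):
    """Universal quantifier approximated as mean truth over sampled domain points."""
    return float(np.mean(truth_values))

# Toy domain: 2-D points; two hypothetical predicates A and B,
# where B is deliberately "looser" than A (same weights, larger bias).
rng = np.random.default_rng(0)
points = rng.normal(size=(5, 2))
wA, bA = np.array([1.0, -1.0]), 0.0
wB, bB = np.array([1.0, -1.0]), 1.0

# Degree of truth of the axiom:  forall x. A(x) -> B(x)
sat = forall([fuzzy_implies(predicate(wA, bA, x), predicate(wB, bB, x))
              for x in points])
print(round(sat, 3))  # a satisfaction degree in [0, 1]
```

In an actual LTN, the predicate parameters would be trained by maximising the satisfaction of a whole knowledge base of such axioms alongside the data, which is what lets symbolic prior knowledge shape the learned network.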