William Cohen


Professor at the Machine Learning Department and Language Technology Institute, Carnegie Mellon University, USA

William Cohen received his bachelor's degree in Computer Science from Duke University in 1984, and a PhD in Computer Science from Rutgers University in 1990. From 1990 to 2000 Dr. Cohen worked at AT&T Bell Labs and later AT&T Labs-Research, and from April 2000 to May 2002 he worked at Whizbang Labs, a company specializing in extracting information from the web. Dr. Cohen is a past president of the International Machine Learning Society. He has also served as an action editor for the AI and Machine Learning series of books published by Morgan & Claypool, for the journal Machine Learning, the journal Artificial Intelligence, the Journal of Machine Learning Research, and the Journal of Artificial Intelligence Research. He was General Chair of the 2008 International Machine Learning Conference, held July 6-9 at the University of Helsinki, Finland; Program Co-Chair of the 2006 International Machine Learning Conference; and Co-Chair of the 1994 International Machine Learning Conference. Dr. Cohen was also co-Chair of the 3rd International AAAI Conference on Weblogs and Social Media, held May 17-20, 2009 in San Jose, and co-Program Chair of the 4th International AAAI Conference on Weblogs and Social Media. He is an AAAI Fellow, a winner of the 2008 SIGMOD "Test of Time" Award for the most influential SIGMOD paper of 1998, and a winner of the 2014 SIGIR "Test of Time" Award for the most influential SIGIR paper of 2002-2004.

Dr. Cohen has a long-standing interest in statistical relational learning, the learnability of first-order logical representations, and learning from data that has non-trivial structure. He has also worked on a number of tasks related to learning and language, such as information extraction, reasoning with automatically extracted information, and question-answering making use of both structured and unstructured data. He holds seven patents related to learning, discovery, information retrieval, and data integration, and is the author of more than 200 publications.

Title: Using Deep Learning Platforms to Perform Inference over Large Knowledge Bases
Different subcommunities of artificial intelligence have focused on different toolkits, containing different computational methods and analytic techniques. The knowledge representation (KR) and logic programming (LP) communities have focused on non-probabilistic first-order inference, and have relied heavily on computational complexity as guidance for the design of inference systems; the probabilistic logic (PL) community has focused on robust probabilistic inference, but has largely focused on inference methods that are computationally expensive and hence do not scale to large knowledge bases; the automatic knowledge-base construction (AKBC) community has focused on constructing and using very large amounts of simple structured information; and the machine learning (ML) community has focused on learning from data how to perform simple probabilistic operations like classification. Recently, progress in ML has been greatly accelerated by high-performance, easily programmable tools for defining and optimizing deep neural-network architectures.

In this talk, I will summarize the most recent results in my attempts to bridge all of these areas. Specifically, I will describe a system that learns from data how to perform non-trivial probabilistic first-order inference tasks, efficiently, in a manner that scales to large KBs. The system I will describe, TensorLog, is a carefully restricted probabilistic first-order logic in which inference can be compiled to differentiable functions in a neural-network infrastructure such as TensorFlow. This enables one to use high-performance deep learning frameworks to learn the parameters of a probabilistic logic. TensorLog has been used for several diverse tasks, including semi-supervised learning for network data (using logic constraints on classifiers), question-answering against a KB, and relational learning.
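To give a rough flavor of the compilation idea, here is an illustrative sketch (not TensorLog's actual code or API; the tiny knowledge base, the rule, and the weight are invented for the example): each KB relation is encoded as an adjacency matrix over entities, and answering a query through a chain rule then reduces to sparse matrix-vector products whose weights can be trained by gradient descent.

    # Illustrative sketch only: compiling one logic rule into matrix operations.
    import numpy as np

    entities = ["alice", "bob", "carol", "dave"]
    idx = {e: i for i, e in enumerate(entities)}
    n = len(entities)

    def relation_matrix(pairs):
        """Encode a KB relation as an n x n matrix with M[head, tail] = 1."""
        M = np.zeros((n, n))
        for head, tail in pairs:
            M[idx[head], idx[tail]] = 1.0
        return M

    # Hypothetical KB facts.
    M_brother = relation_matrix([("bob", "alice")])                        # brother(bob, alice)
    M_parent  = relation_matrix([("alice", "carol"), ("alice", "dave")])   # parent(alice, carol), parent(alice, dave)

    # Rule: uncle(X, Y) :- brother(X, Z), parent(Z, Y), with confidence w.
    w = 0.9  # in a TensorLog-style system this weight would be learned by gradient descent

    def query_uncle(x):
        """Score every candidate Y for the query uncle(x, Y)."""
        v = np.zeros(n)
        v[idx[x]] = 1.0                         # one-hot vector for the bound argument X
        return w * (v @ M_brother @ M_parent)   # chaining the rule body = two matrix-vector products

    print(dict(zip(entities, query_uncle("bob"))))  # carol and dave score 0.9, the rest 0.0

Expressed in a framework such as TensorFlow, the same products become differentiable graph operations, so rule confidences (and soft fact weights) can be fitted to training queries by backpropagation.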


Marco Gori


Full Professor of Computer Science at the Faculty of Engineering, University of Siena, Italy

Marco Gori received the Ph.D. degree in 1990 from Università di Bologna, Italy, working partly at the School of Computer Science (McGill University, Montreal). In 1992, he became an Associate Professor of Computer Science at Università di Firenze and, in November 1995, he joined the Università di Siena, where he is currently a full professor of computer science. His main interests are in machine learning with applications to pattern recognition, Web mining, and game playing. He is especially interested in bridging logic and learning and in the connections between symbolic and sub-symbolic representations of information. He was the leader of the WebCrow project for the automatic solving of crosswords, which outperformed human competitors in an official competition held during the ECAI-06 conference. As a follow-up to this grand challenge he founded QuestIt, a spin-off company of the University of Siena working in the field of question-answering. He is co-author of the books "Web Dragons: Inside the Myths of Search Engine Technology," Morgan Kaufmann (Elsevier), 2006, and "Machine Learning: A Constraint-Based Approach," Morgan Kaufmann (Elsevier), 2018.

Dr. Gori serves, or has served, as an Associate Editor of a number of technical journals related to his areas of expertise. He has been the recipient of best paper awards and has been a keynote speaker at a number of international conferences. He was the Chairman of the Italian Chapter of the IEEE Computational Intelligence Society and the President of the Italian Association for Artificial Intelligence. He is a Fellow of the IEEE, ECCAI, and IAPR. He is in the list of top Italian scientists kept by the VIA-Academy.

Title: Learning and Inference with Constraints

Learning and inference are traditionally regarded as the two opposite, yet complementary and puzzling, components of intelligence. In this talk I point out that a constraint-based modeling of agent-environment interactions makes it possible to unify learning and inference within the same mathematical framework, which gives rise to information-based laws of intelligence. The unification is based on the abstract notion of constraint, which provides a representation both of knowledge granules gained from interaction with the environment and of supervised examples. The theory offers a natural bridge between the formalization of knowledge and the inductive acquisition of concepts from data. I give examples of learning and inferential processes in different environments, focusing on their natural integration and on the complementary roles of data and knowledge representation.
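As a purely illustrative sketch of this kind of unification (the notation below is generic shorthand, not taken from the talk), supervised examples and knowledge granules can both be read as constraints on a single function f drawn from a hypothesis space \mathcal{H}:

    \min_{f \in \mathcal{H}} \;
        \sum_{i=1}^{m} \ell\big(f(x_i), y_i\big)
        \;+\; \sum_{j} \lambda_j \, \phi_j(f)
        \;+\; \mu \, \|f\|_{\mathcal{H}}^{2}

Here the first term is the usual fit to labelled data, each penalty \phi_j(f) \ge 0 measures how strongly f violates a granule of prior knowledge (for instance, a first-order rule translated into a real-valued functional through a t-norm), and the last term is a regularizer, so that labelled data and logical knowledge enter the same objective.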


Maximilian Nickel


Research Scientist at Facebook AI Research

Maximilian Nickel is a research scientist at Facebook AI Research in New York. Before joining FAIR, he was a postdoctoral fellow at MIT, where he was with the Laboratory for Computational and Statistical Learning and the Center for Brains, Minds and Machines. In 2013, he received his PhD summa cum laude from the Ludwig Maximilian University of Munich. From 2010 to 2013 he worked as a research assistant at Siemens Corporate Technology. His research centers on geometric methods for learning and reasoning with relational knowledge representations and their applications in artificial intelligence and network science.

Title: Hierarchical Representation Learning on Relational Data

Many domains such as natural language understanding, information networks, bioinformatics, and the Web are characterized by problems involving complex relational structures and large amounts of uncertainty. Representation learning has become an invaluable approach for making statistical inferences in this setting by allowing us to learn high-quality models on a large scale. However, while complex relational data often exhibits latent hierarchical structures, current embedding methods do not account for this property. This leads not only to inefficient representations but also to a reduced interpretability of the embeddings.

In this talk, I will first give a brief overview of state-of-the-art methods for learning representations of relational data such as graphs and text. I will then introduce a novel approach for learning hierarchical representations by embedding relations into hyperbolic space. I will discuss how the underlying hyperbolic geometry allows us to learn parsimonious representations that simultaneously capture hierarchy and similarity. Furthermore, I will show that hyperbolic embeddings can significantly outperform Euclidean embeddings on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.
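As a small illustration of the geometry involved (the formula is the standard distance on the Poincaré ball model of hyperbolic space; the example points are invented), hierarchy emerges because points near the origin stay relatively close to everything, while points near the boundary end up far from one another:

    # Sketch: the Poincare-ball distance commonly used for hyperbolic embeddings.
    import numpy as np

    def poincare_distance(u, v, eps=1e-9):
        """Hyperbolic distance between points u, v with ||u||, ||v|| < 1."""
        sq_dist = np.sum((u - v) ** 2)
        denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
        return np.arccosh(1.0 + 2.0 * sq_dist / (denom + eps))

    root  = np.array([0.0, 0.0])    # near the origin: behaves like the root of a hierarchy
    leaf1 = np.array([0.90, 0.0])   # near the boundary: behaves like a leaf
    leaf2 = np.array([0.0, 0.90])

    print(poincare_distance(root, leaf1))   # ~2.9: the root is moderately close to a leaf
    print(poincare_distance(leaf1, leaf2))  # ~5.2: two leaves are much farther apart

Embedding methods built on this geometry typically optimize such distances with Riemannian gradient updates that keep the points inside the unit ball.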