You can find the slides of our speakers below:
• Florian Strub, Research Scientist @ DeepMind
While our representation of the world is shaped by our perceptions, our languages, and our interactions, these have traditionally been distinct fields of study in machine learning. Fortunately, this partitioning started opening up with the recent advent of deep learning methods, which standardized raw feature extraction across communities. However, multimodal neural architectures are still in their infancy.
In this presentation, we will focus on visually grounded language learning for three reasons: (i) vision and language are both well-studied modalities across different scientific fields; (ii) their combination builds upon deep learning breakthroughs in natural language processing and computer vision; (iii) the interplay between language and vision has been acknowledged in cognitive science.
This presentation will be divided into three parts:
As a first step, we will motivate our line of research by discussing the language grounding problem. (5-7min)
Then, we will introduce some fundamental visual grounding tasks that have been explored in the past 3 years. (2-3min)
Finally, we will focus on a specific kind of multimodal architecture, namely, Modulation Layers (i.e., Conditional Batch Norm and FiLM). (10-12min)
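The modulation layers named above can be sketched in a few lines. Below is a minimal, illustrative FiLM-style layer in NumPy; the shapes, the random conditioning embedding, and the linear predictor `W` are assumptions made for this sketch, not the speaker's implementation:

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: scale and shift each channel.

    features: (batch, channels, height, width) image feature maps
    gamma, beta: (batch, channels) per-channel modulation predicted
    from the conditioning input (e.g. a question embedding).
    """
    return gamma[:, :, None, None] * features + beta[:, :, None, None]

# Illustrative setup (all shapes and the conditioning are assumptions):
rng = np.random.default_rng(0)
embed = rng.normal(size=(2, 8))        # e.g. a language embedding per example
W = rng.normal(size=(8, 2 * 4))        # linear map predicting gamma and beta
gamma, beta = np.split(embed @ W, 2, axis=1)
feats = rng.normal(size=(2, 4, 5, 5))  # 4-channel image feature maps
out = film(feats, gamma, beta)         # modulated maps, same shape as feats
```

In Conditional Batch Norm, the same per-channel gamma and beta would instead condition the affine parameters of a batch-normalization layer, rather than being applied directly to the features.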
• Felix Le Chevallier, Lead Data Scientist @ Lifen
[PDF] Hacking Interoperability in Healthcare with AI: Structuring Medical Data to Digitize Medical Communications
How we scaled from 0 to 100k daily predictions served to healthcare practitioners to help them communicate more efficiently: from simple heuristics with handcrafted rules and only a couple of clients, to classical machine learning, and then to RNNs that structure information in free-form medical notes.
• Janna Lipenkova, Founder @ Anacode
[PDF] Applications in data and text analytics often have an ontology as their conceptual backbone – that is, a hierarchical representation of the underlying knowledge domain. However, such representations are tedious to construct, maintain and customize in a manual fashion. In this talk, I will show how text data and lexical relations such as hypernymy, synonymy and meronymy can be leveraged to automatically construct ontologies. After a review of different unsupervised and distant-supervised methods proposed for lexical relation extraction from text, I will explain Anacode’s approach to building and maintaining large-scale, multilingual ontologies for the domain of business and market intelligence.
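To illustrate the kind of unsupervised lexical relation extraction the talk reviews, here is a minimal sketch of a classic Hearst-pattern extractor for hypernymy ("X such as Y, Z and W"). The regex and function names are illustrative only, not Anacode's approach:

```python
import re

# One classic Hearst pattern: "<hypernym> such as <hyponym>(, <hyponym>)* (and|or) <hyponym>"
PATTERN = re.compile(r"(\w+) such as (\w+(?:, \w+)*(?:,? (?:and|or) \w+)?)")

def hearst_pairs(text):
    """Extract (hypernym, hyponym) pairs from 'such as' enumerations."""
    pairs = []
    for match in PATTERN.finditer(text):
        hypernym = match.group(1)
        hyponyms = re.split(r",? (?:and|or) |, ", match.group(2))
        pairs.extend((hypernym, h) for h in hyponyms if h)
    return pairs

print(hearst_pairs("vehicles such as cars, trucks and bicycles"))
```

Real systems combine many such patterns with distant supervision from an existing taxonomy, and more recent methods replace the patterns with distributional or embedding-based classifiers.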