Paris NLP Season 3 Meetup #3 at Doctrine

We would first like to thank Doctrine for hosting this meetup, our three speakers for their presentations, and all the participants who turned out in such numbers for this session.

You can find the slides of our three speakers below:

Hugo Vasselin & Benoît Dumeunier, Artefact

How do you redefine a brand's image with a simple word counter? This talk celebrates the meeting of data science and creative work. It tells the story of how basic NLP techniques, combined with a creative approach, made it possible to redefine a brand. First, we built a tool that gives a sense of how the different brands of a large hotel group are perceived around the world, relative to their competitors. The data brought out a number of values dear to guests, which then served as pillars for creative and innovative brand experiences…
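The "simple word counter" at the heart of the talk can be sketched in a few lines: count which terms guests use most often when talking about each brand, then compare the frequencies across brands. Here is a minimal illustration in Python, assuming hypothetical lists of review snippets (the talk does not detail Artefact's actual pipeline):

```python
from collections import Counter
import re

# Hypothetical review snippets per brand (placeholders, not Artefact's data).
reviews = {
    "brand_a": ["Charming staff and a cosy, quiet room.",
                "Quiet location, friendly staff."],
    "brand_b": ["Great rooftop bar, lively atmosphere.",
                "Lively bar, modern design."],
}

STOPWORDS = {"a", "and", "the", "of", "in", "to", "is", "it", "was", "for"}

def word_counts(texts):
    """Lower-case, tokenize on letters, drop stopwords, count the rest."""
    tokens = (w for t in texts for w in re.findall(r"[a-z']+", t.lower()))
    return Counter(w for w in tokens if w not in STOPWORDS)

# The most frequent words per brand hint at the values guests associate with it.
for brand, texts in reviews.items():
    print(brand, word_counts(texts).most_common(3))
```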

Slides Hugo Vasselin & Benoît Dumeunier (Artefact)

Romain Vial, Hyperlex

Hyperlex is a contract analytics and management solution powered by artificial intelligence. Hyperlex helps companies manage and make the most of their contract portfolios by identifying the relevant information and data needed to track key contractual commitments throughout the life of the contract. Our technology relies on a combination of specifically trained Natural Language Processing (NLP) algorithms and advanced machine learning techniques.

In this talk, I will present some of the challenges we are currently solving at Hyperlex through a focus on two important NLP tasks: (i) learning representations for texts and words using recent language modelling techniques; and (ii) building knowledge from predictions by mining relations in legal documents.
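As a rough illustration of task (i), a sentence or clause can be embedded by average-pooling the hidden states of a pretrained language model. A minimal sketch using the Hugging Face transformers library and a multilingual BERT checkpoint (my choice of library and model for illustration, not necessarily Hyperlex's stack):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Any pretrained checkpoint works; multilingual BERT handles French contracts.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(sentence: str) -> torch.Tensor:
    """Return a fixed-size vector: the mean of the last hidden states."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # (768,)

clause = "Le présent contrat est conclu pour une durée de trois ans."
print(embed(clause).shape)  # torch.Size([768])
```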

Slides Romain Vial (Hyperlex)

Grégory Châtel, Lead R&D @ Disaitek and member of the Intel AI Software Innovator program

In this talk, I will present two recent research articles from OpenAI and Google AI Language on transfer learning in NLP, along with their implementations.

Historically, transfer learning for NLP neural networks has been limited to reusing pre-computed word embeddings. Recently, a new trend has emerged, much closer to what transfer learning looks like in computer vision: reusing a much larger part of a pre-trained network. This approach makes it possible to reach state-of-the-art results on many NLP tasks with minimal code modification and training time. In this presentation, I will cover the underlying architectures of these models, the generic pre-training tasks, and an example of using such a network to complete an NLP task.
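To make the contrast concrete: instead of loading only word embeddings, one loads the entire pre-trained encoder and adds a small task-specific head on top. Below is a hedged sketch with the Hugging Face transformers library, one possible implementation of the approach the talk describes, not the speaker's own code; the sentiment example and hyperparameters are illustrative:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Reuse the whole pre-trained BERT body; only the 2-class head is new.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# One toy fine-tuning step on a made-up sentiment example.
batch = tokenizer(["This movie was great."], return_tensors="pt")
labels = torch.tensor([1])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss  # cross-entropy from the new head
loss.backward()
optimizer.step()
print(float(loss))
```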

Slides Grégory Châtel (Disaitek)
