I am currently an applied scientist at Leboncoin Lab. My role is to experiment with and implement machine learning features that improve the experience of Leboncoin customers.
I hold a PhD in computer science from Télécom ParisTech, carried out at EURECOM under the supervision of Assistant Professor Raphaël Troncy and Dr. Giuseppe Rizzo. My interests lie at the intersection of the Semantic Web, Natural Language Processing, Machine Learning, and Deep Learning. My PhD topic was to extract lexical units (entities) from textual documents and disambiguate them, in an adaptive manner, against Web resources contained in a knowledge base. This adaptivity concerns the language of the processed texts, the knowledge base used, and the nature of the textual content, which can be either formal text (journalistic style) or informal text (microposts).
Before becoming an applied scientist, I was a software engineer and consultant for Orange. I obtained a Master's degree in artificial intelligence from the University of Montpellier 2 in 2012. My Master's thesis covered the Semantic Web, Natural Language Processing, and Machine Learning; more precisely, it focused on extracting relations between entities in textual documents to populate a knowledge base.
I am a creative enthusiast who is always looking for new ideas and technologies to improve my work. I can juggle priorities and deliver quality work under tight deadlines. I am endlessly curious and never tire of learning new things. I have strong analytical skills and am used to working under pressure and across several projects at once.
My main goal is to find a position outside France, ideally in the San Francisco Bay Area, but I am open to offers in any location. I would like to work in industry, within a research and development environment, because I want to keep doing research and see it evolve into applications used by people around the world.
This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI). The course will also draw from numerous case studies and applications, so that you'll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.
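As a small taste of the supervised learning topics, here is a minimal sketch in Python (assuming scikit-learn is installed) that trains a kernelized support vector machine on a toy dataset; it is an illustration, not course material.

# Minimal supervised-learning sketch: an SVM classifier on the iris dataset.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf")          # kernelized support vector machine
clf.fit(X_train, y_train)        # supervised training on labeled examples
print("test accuracy:", clf.score(X_test, y_test))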
The course begins with a detailed discussion of how two parties who have a shared secret key can communicate securely when a powerful adversary eavesdrops and tampers with traffic. We will examine many deployed protocols and analyze mistakes in existing systems. The second half of the course discusses public-key techniques that let two or more parties generate a shared secret key. We will cover the relevant number theory and discuss public-key encryption and basic key-exchange. Throughout the course students will be exposed to many exciting open problems in the field.
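As a hint of the first theme, here is a minimal Python sketch (assuming the third-party cryptography package) of two parties with a shared secret key exchanging messages that an adversary can neither read nor tamper with undetected; it illustrates the idea, not the specific protocols studied in the course.

# Two parties sharing a secret key communicate with authenticated encryption.
# Assumes the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

shared_key = Fernet.generate_key()   # agreed on in advance (or via key exchange)
alice = Fernet(shared_key)
bob = Fernet(shared_key)

token = alice.encrypt(b"meet at noon")   # confidentiality plus an integrity tag
print(bob.decrypt(token))                # b'meet at noon'

tampered = b"x" + token[1:]              # an adversary modifies the ciphertext
try:
    bob.decrypt(tampered)
except InvalidToken:
    print("tampering detected")          # authentication catches the change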
This course introduces various aspects of program design. Through numerous case studies, it highlights the data structures and algorithms that provide solutions. As is often the case in computer science, there is no single solution, so we have to explore different classes of algorithms and compare them. For this purpose, the course introduces the notion of the complexity of a program, which covers both an estimate of its execution time and the space it requires. It is tempting to believe that the "best" program is the one that minimizes execution time, but very often this is constrained by the memory available.
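A toy Python illustration of that trade-off (standard library only): to answer many primality queries, one can either recompute each answer in O(√k) time with no extra memory, or spend O(n) memory on a sieve to make each query O(1).

# A toy time/space trade-off: answering "is k prime?" for many queries.
def is_prime_no_memory(k):
    # O(sqrt(k)) time per query, O(1) space: recompute every time.
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def make_prime_table(n):
    # Sieve of Eratosthenes: O(n) space buys O(1) time per query below n.
    table = [True] * n
    table[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if table[i]:
            table[i * i::i] = [False] * len(table[i * i::i])
    return table

table = make_prime_table(10_000)
assert all(table[k] == is_prime_no_memory(k) for k in range(10_000))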
My role is to experiment with and implement machine learning features that improve the experience of Leboncoin customers. Among the projects I have developed are: similar ads (recommending ads based on their pictures), ad classification, and improving the image quality of ads.
My PhD topic is to extract lexical units (entities) from textual documents and disambiguate them against Web resources contained in a knowledge base. There are different kinds of textual documents (e.g. news articles and microposts), different kinds of knowledge bases (e.g. DBpedia and Freebase), different kinds of entities (e.g. Person, Organization, and Location), and textual documents can be written in many languages. My method handles all of these characteristics in order to adapt to them.
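To make the two steps concrete, here is a deliberately naive Python sketch (assuming spaCy with its small English model, and SPARQLWrapper): extract entity mentions, then retrieve candidate resources from DBpedia by label. My actual method does much more, since it also adapts to the language, the knowledge base, and the type of text.

# Naive sketch: extract entity mentions, then look up candidate resources
# in DBpedia by exact English label. Assumes spaCy (with en_core_web_sm)
# and SPARQLWrapper are installed.
import spacy
from SPARQLWrapper import SPARQLWrapper, JSON

nlp = spacy.load("en_core_web_sm")

def extract_entities(text):
    # Step 1: extract lexical units (entity mentions) from the text.
    return [ent.text for ent in nlp(text).ents]

def candidate_resources(mention, limit=5):
    # Step 2: retrieve candidate Web resources from the knowledge base.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT DISTINCT ?resource WHERE {{
            ?resource rdfs:label "{mention}"@en .
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["resource"]["value"] for b in results["results"]["bindings"]]

for mention in extract_entities("Barack Obama was born in Hawaii."):
    print(mention, candidate_resources(mention))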
Consultant at Orange on Semantic Web, NLP, and Machine Learning technologies for their in-house Web search engine. My work was to implement a natural language question answering module: take a question in natural language, turn it into a SPARQL query that provides an answer, and bring DBpedia technologies into the Orange production pipeline. I also prototyped a French entity relation extraction system over crawled Web pages in order to populate a knowledge base.
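As a toy illustration of the question-to-SPARQL idea (assuming the SPARQLWrapper package; the real module handled far more question types), a single hand-written pattern can already answer one family of questions against DBpedia:

# Toy question-to-SPARQL translation: one hand-written template.
# Assumes SPARQLWrapper is installed and that the dbo:mayor property
# is populated for the requested city.
import re
from SPARQLWrapper import SPARQLWrapper, JSON

def answer(question):
    # "Who is the mayor of X?" -> a SPARQL query over DBpedia.
    match = re.match(r"Who is the mayor of (.+)\?", question)
    if not match:
        return None
    city = match.group(1).replace(" ", "_")
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX dbr: <http://dbpedia.org/resource/>
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?mayor WHERE {{ dbr:{city} dbo:mayor ?mayor . }}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["mayor"]["value"] for b in results["results"]["bindings"]]

print(answer("Who is the mayor of Paris?"))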
During this internship I had three different tasks. The main one was to extract entities, and the relations among them, from Wikipedia and LastFM texts. The second was to develop an R2RML mapping for the MusicBrainz dataset. The last was to extract and model awards from Wikipedia texts.
I was in charge of tutorials and lab sessions for a group of students in the C, C++, Java, and Semantic Web courses.
The goal of this internship was to develop a proof of concept on using Semantic Web technologies for Passim, along with an RDF version of the NEPTUNE format.
My main task was to implement color and shape recognition algorithms in C++ with the OpenCV library and to run those algorithms on a video stream from a robot (Meccano Spykee). The next step was to integrate this functionality into a dialogue engine in order to create a game for young kids, who could speak with the robot to give it tasks such as "follow the red square" or ask "what colors do you see?".
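The original code was C++, but the gist of the color detection can be sketched in a few lines of Python (assuming the opencv-python and numpy packages); here a webcam stands in for the robot's video stream:

# Gist of the color detection in Python (the original was C++).
# Assumes: pip install opencv-python numpy
import cv2
import numpy as np

def find_red_regions(frame_bgr):
    # Convert to HSV, where color thresholding is more robust than in BGR.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    # Contours of the mask give the candidate shapes to track.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

cap = cv2.VideoCapture(0)   # the webcam stands in for the robot's stream
ok, frame = cap.read()
if ok:
    print("red regions:", find_red_regions(frame))
cap.release()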