Classroom course
Natural Language Processing: from RNNs to Transformers
Price
1.430 € + VAT
Duration
8 hours
Date
September 1st
Qualification obtained
Certificate
Course code
AGSAI007
Natural Language Processing is the technology behind virtual assistants like Siri and Alexa, Google Search, and Google Translate. This course is an introductory guide to the Deep Learning techniques most widely used in Natural Language Processing. In particular, it introduces Recurrent Neural Networks and the more recent Transformer networks. For each architecture, business applications of varying complexity will be presented.
The course is aimed at:
- programmers
- graduate students who know programming
At the end of this course, participants will have learned the latest Deep Learning techniques applied to Natural Language Processing and how to use them to solve real problems, such as:
- identify customer satisfaction
- create Question Answering systems
- develop semantic search systems
- 1. Introduction to the course
- 1.1 Introduction to Natural Language Processing
- 1.2 How to use Google Colaboratory
- 2. Neural Networks and Deep Learning: In this section Deep Learning and Artificial Neural Networks will be discussed, studying how they work and how they can be applied to Natural Language Processing.
- 2.1 Introduction to Deep Learning
- 2.2 Operation of Artificial Neural Networks
- 2.3 Pre-processing of the text
- 2.4 Train a Neural Network
- 2.5 Some details and insights
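As a taste of the pre-processing covered in module 2.3, here is a minimal sketch in pure Python: lowercasing, punctuation stripping, whitespace tokenization, and mapping tokens to integer ids. All sentences and names are made up for illustration; real projects typically rely on libraries such as spaCy or NLTK.

```python
import re

def preprocess(text):
    """Lowercase, remove punctuation, and split into tokens."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # replace punctuation with spaces
    return text.split()

def build_vocab(sentences):
    """Map each unique token to an integer id (0 is reserved for padding)."""
    vocab = {"<pad>": 0}
    for sentence in sentences:
        for token in preprocess(sentence):
            vocab.setdefault(token, len(vocab))
    return vocab

sentences = ["The product is great!", "Terrible support, very slow."]
vocab = build_vocab(sentences)
encoded = [[vocab[t] for t in preprocess(s)] for s in sentences]
print(encoded)
```

A neural network never sees raw text: it sees sequences of ids like these, which are then turned into vectors by an embedding layer (module 3).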
- 3. Word Embeddings: This section explains the fundamental concept of word embeddings. It also shows how word embeddings can improve the training of a network and how they are used in different Natural Language Processing applications.
- 3.1 Introduction to the concept of Word Vectors
- 3.2 Word embeddings
- 3.3 Word Embedding Examples: Word2Vec and GloVe
- 3.4 Insights and references
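The key idea of module 3 is that words become vectors, and semantically related words end up with similar vectors. The sketch below uses tiny made-up 3-dimensional vectors (real embeddings like Word2Vec or GloVe have hundreds of dimensions learned from large corpora) and compares them with cosine similarity:

```python
import math

# Toy word vectors: values are invented purely for illustration.
vectors = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sim_royal = cosine_similarity(vectors["king"], vectors["queen"])
sim_fruit = cosine_similarity(vectors["king"], vectors["apple"])
print(sim_royal > sim_fruit)  # related words score higher than unrelated ones
```

This same similarity measure underlies the semantic search application in module 6.3: documents and queries are embedded, then ranked by cosine similarity.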
- 4. Recurrent Neural Networks: In this section we will introduce Recurrent Neural Networks and some of their improved variants.
- 4.1 Sequential Models
- 4.2 Recurrent Neural Networks
- 4.3 Limits of Recurrent Networks
- 4.4 Long Short-Term Memory (LSTM) networks
- 4.5 Gated Recurrent Unit (GRU) Networks
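The recurrence at the heart of module 4 can be sketched in a few lines: a hidden state is updated at every time step as h_t = tanh(W_x·x_t + W_h·h_{t-1} + b). The weights below are made-up scalars (1-dimensional input and hidden state) purely to show the mechanism; real layers use weight matrices learned by training.

```python
import math

# Illustrative scalar weights for a minimal Elman-style RNN cell.
W_x, W_h, b = 0.5, 0.8, 0.0

def rnn_forward(inputs, h0=0.0):
    """Process a sequence one step at a time, returning every hidden state."""
    h = h0
    states = []
    for x in inputs:
        h = math.tanh(W_x * x + W_h * h + b)  # new state mixes input and memory
        states.append(h)
    return states

states = rnn_forward([1.0, 0.0, -1.0])
print(states)  # the last state summarizes the whole sequence
```

Because each state depends on the previous one, gradients must flow through every step, which is exactly the weakness (module 4.3) that LSTM and GRU gates mitigate and that Transformers avoid altogether.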
- 5. The Transformer Network for Natural Language Processing: In this section the Transformer neural networks will be explained: Deep Learning models designed to be more flexible and robust than Recurrent Neural Networks (RNNs). The section starts by introducing the basic components of Transformers; these components are then put together to explore the full architecture of Transformer neural networks.
- 5.1 The dot product
- 5.2 The attention mechanism
- 5.3 Sequence-to-Sequence Encoder and Decoder
- 5.4 The Transformer architecture
- 5.5 Insights and references
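Modules 5.1 and 5.2 combine into scaled dot-product attention, the core operation of the Transformer: scores = softmax(Q·Kᵀ / √d), output = scores·V. The sketch below uses toy 2-dimensional vectors chosen only to make the behavior visible; real models work with learned matrices and many attention heads.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Output is the weighted sum of the value vectors.
    output = [sum(w * v[i] for w, v in zip(weights, values))
              for i in range(len(values[0]))]
    return output, weights

output, weights = attention(
    query=[1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0]],
    values=[[10.0, 0.0], [0.0, 10.0]],
)
print(weights)  # the first key matches the query better
```

Unlike the RNN recurrence in module 4, every position attends to every other position in one step, which is what makes Transformers easier to parallelize and better at long-range dependencies.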
- 6. Business applications: In this section some useful business applications will be shown.
- 6.1 Carry out Sentiment Analysis
- 6.2 Create a Question Answering System
- 6.3 Develop a semantic search engine