Subject Code and Title: NLP303 Natural Language Processing and Speech Recognition
Assessment: Programming Task
Individual/Group: Individual
Length: Source code plus a report of 750 words (+/- 10%)
Weighting: 30%
Learning Outcomes: The Subject Learning Outcomes demonstrated by successful completion of the task below include:
a) Evaluate Natural Language Processing and Speech Recognition techniques.
b) Apply the theories and the frameworks of Natural Language Processing and Speech Recognition.
d) Develop solutions to Natural Language Processing and Speech Recognition problems.
Assessment Task:
This assessment is about exploring and investigating the use of the Hugging Face (HFace) open-source Transformers library. You will demonstrate the value of the library by carrying out high-level Natural Language Processing (NLP) tasks and delivering NLP solutions.
Please refer to the Task Instructions for details on how to complete this task.
Context:
This assessment recognises the tectonic shift taking place in how NLP projects are approached in the 2020s. According to Sebastian Ruder, a key global thought leader and research scientist in NLP with Google DeepMind, "It only seems to be a question of time until pretrained word embeddings will be dethroned and replaced by pretrained language models in the toolbox of every NLP practitioner" (Ruder, 2018). Given the overwhelming demonstrations and use of transformer technology by OpenAI, Google, Facebook, Microsoft and Baidu, there is no reason not to consider this technology for any reasonably sized (i.e., beyond toy) NLP project or research application. Understanding transformer technology and its implementations is essential for anyone who works in the NLP field. This transformer-centric assessment is therefore essential to undertake in order to:
1. Help provide a context for keeping up with the continual news updates on transformers, and
2. Enter the workplace in any role associated with NLP, if only to compare and contrast the new methods of conducting NLP tasks with the traditional, fragile, rules-based approach.
Completing this assessment will provide hands-on experience with how Transformer models fit into NLP projects, as well as an understanding of the variety of models available from the Hugging Face Hub, in particular acquiring a model and running typical NLP tasks.
The skills developed include the use of open-source transformers and the ability to self-direct learning towards practice with limited instruction. These two skills alone will enable you to undertake innovative proofs of concept or a minimum viable project with limited funding while benefiting from the technology investments of the tech giants.
This concise background on transformer technology and on the importance of the Hugging Face open-source Transformers library provides sufficient context, together with the scale, scope and vocabulary of these models, to undertake this project in a learner or early-professional role.
State-of-the-art natural language processing tools build on the neural network architecture of the Transformer (Vaswani et al., 2017). Two key approaches have helped to ensure that Transformer models have become the de facto models for NLP. The first is self-attention, which captures dependencies between sequence elements. The second, now the dominant approach in NLP, is transfer learning: pre-training models on large unlabelled text corpora in an unsupervised manner and then fine-tuning them on a smaller task-specific data set. Further evidence of the superiority of Transformers over the earlier recurrent and convolutional neural network architectures is available by studying the leaderboards of NLP benchmarks.
Hugging Face is a machine learning company supporting an open-source community for language model development. The Transformers library includes pre-trained models available from the Hugging Face Hub.
The Transformers library NLP machine learning model pipeline follows the workflow:
process data → apply a model → make predictions
The library enables high-level NLP tasks such as text classification, named-entity recognition, machine translation, summarisation, question answering and much more. However, Transformers go well beyond these tasks; they also offer solutions such as text generation. To understand the variety of transformers beyond GPTs, other popular Transformer models available in open source include Google BERT (Bidirectional Encoder Representations from Transformers), Facebook BART, RoBERTa (Robustly Optimised BERT Pre-training approach) and T5 (Text-to-Text Transfer Transformer). These models handle very large numbers of neural network parameters.
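As an illustration of this three-stage workflow, a single pipeline() call wraps data processing, model application and prediction. A minimal sketch using the sentiment-analysis task (one of many possible tasks; the input string and indicated output are illustrative):

```python
from transformers import pipeline

# pipeline() bundles the whole workflow: it processes (tokenises) the input,
# applies a pre-trained model, and converts the output into a prediction.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers have become the de facto architecture for NLP."))
# indicative output: [{'label': 'POSITIVE', 'score': 0.99...}]
```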
A cheat sheet has been provided which encapsulates the progress of transformer architectures and variants together with parameter counts. This cheat sheet consolidates the transformer landscape into a single A3 poster (Sood, 2021). Please refer to the cheat sheet for guidance as you progress through the assessment.
Instructions:
You will need to use a Google Colaboratory notebook (.ipynb) with Python to undertake this assignment. Your Transformer model notebook benefits from hardware acceleration with a GPU runtime. In Colab, select the menu option “Runtime” -> “Change runtime type”, set “Hardware Accelerator” to “GPU” and click “SAVE”.
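Once the runtime has been switched, a quick sanity check confirms the GPU is visible. This sketch assumes the PyTorch backend that Transformers uses by default:

```python
import torch

# Confirm the Colab GPU runtime is active before loading any large models.
print(torch.cuda.is_available())          # True when a GPU runtime is attached
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the attached accelerator's name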
You may find it extremely helpful to use the Markdown cells available in the notebook to document your code snippets, record comments and capture observations for your final assessment report as you work through your notebook. Plenty of examples of well-documented Transformer notebook files are available online and on GitHub for you to review.
Ensure the installation and import of Hugging Face Transformers and install/test the machine translation pipeline with the following notebook code:
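A minimal sketch of such a notebook cell, assuming the default English-to-French checkpoint (which requires the sentencepiece package); the test sentence is a placeholder:

```python
# Install the library in the Colab runtime, then smoke-test a pipeline.
!pip install transformers sentencepiece

from transformers import pipeline

# The default en->fr checkpoint is downloaded on first use.
translator = pipeline("translation_en_to_fr")
print(translator("Hugging Face makes state-of-the-art NLP accessible to everyone."))
```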
Beyond the steps outlined above to prepare your notebook, three multi-step activities are required to complete this assessment task. Remember that, for ease of completion and as general good practice, you should document your notebook with Markdown as you move through multi-stage Activities 1 and 2.
Project Gutenberg is a library containing over 60,000 free e-books available in the public domain. The file formats include plain text.
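If you source sample text from Project Gutenberg, a plain-text book can be fetched directly in the notebook; the book ID and URL below are illustrative and may need adjusting to the title you choose:

```python
import requests

# Fetch a public-domain plain-text e-book (illustrative book ID/URL).
url = "https://www.gutenberg.org/files/1342/1342-0.txt"
book_text = requests.get(url).text
print(book_text[:500])  # preview the opening of the file
```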
Owing to the changing nature of the field and ongoing enhancements to models, the cycle from publication to implementation of new, innovative Transformers has been considerably compressed. In light of this, no textbook is prescribed for reference; consult the online documentation instead.
Activity 1: Programming Tasks for NLP Frequent Use Cases with Hugging Face Transformers
Ensure Hugging Face Transformers is installed, together with the import of the pipeline. You will gain hands-on experience with the state-of-the-art pre-trained language models available in the open-source Transformers library and see what is possible one line of code at a time.
For this activity (a starter sketch follows this list):
1. Specify a sequence of text and, using the Transformers pipeline for Named Entity Recognition (NER), identify a list of words belonging to at least one of three classes, e.g., a person, an organisation or a location.
2. Again using your own sequences of text and the Transformers pipeline, classify at least two sequences of 10-25 words as positive or negative sentiment.
3. Use the Transformers pipeline to summarise an article or sequence of text of roughly 350 to 500 words into 100 words or fewer.
4. Illustrate text generation using the text-generation pipeline and auto-complete 500 words from a starting point of just a few sentences.
5. Extract an answer from your text, given a question, using the Transformers question-answering pipeline. Show the answer extracted from the text together with a confidence score and the positions of the extracted answer in the text.
6. Translate text of 3 to 5 sentences from English to French using the translation pipeline.
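A minimal starter sketch for the six tasks is below. All input strings are placeholders for your own text, and each pipeline downloads its default checkpoint unless you pass a specific model:

```python
from transformers import pipeline

# 1. Named entity recognition: group sub-word tokens into whole entities.
ner = pipeline("ner", grouped_entities=True)
print(ner("Ada Lovelace worked with Charles Babbage in London."))

# 2. Sentiment classification of your own 10-25 word sequences.
sentiment = pipeline("sentiment-analysis")
print(sentiment(["I really enjoyed this subject and learned a great deal.",
                 "The installation failed repeatedly and wasted my afternoon."]))

# 3. Summarise a 350-500 word article into 100 words or fewer.
summariser = pipeline("summarization")
article = "..."  # paste your 350-500 word article here
print(summariser(article, max_length=100, min_length=30))

# 4. Auto-complete text from a short prompt (max_length counts tokens,
#    so adjust it until the output reaches roughly 500 words).
generator = pipeline("text-generation")
print(generator("The future of natural language processing", max_length=500))

# 5. Question answering: returns the answer, a confidence score, and the
#    start/end positions of the answer span within the context.
qa = pipeline("question-answering")
print(qa(question="Who worked with Charles Babbage?",
         context="Ada Lovelace worked with Charles Babbage in London."))

# 6. English-to-French translation of a few sentences.
translator = pipeline("translation_en_to_fr")
print(translator("The library is easy to use. Models download on demand. "
                 "The results are impressive."))
```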
Ensure your notebook Markdown commentary, code and output, including any references for this activity, are complete.
Activity 2: Programming Task for NLP Transformer Solutions
The Hugging Face model hub contains a large collection of searchable pre-trained models covering a range of NLP tasks, datasets and metrics. These models can be used out of the box, as you have already seen when completing Activity 1. In the model hub you can try each model without coding, through a simple interface with supporting documentation. This test-drive capability builds on the HFace inference API, which is itself built on top of the pipeline.
Continue on with the same notebook from Activity 1.
For this activity (sketches for these tasks follow the list):
1. Use the distilbert-base-uncased model with a pipeline and your own example to illustrate masked language modelling. Here, the model generates text options to fill the masked input while remaining mindful of a context of 5 to 10 words.
2. Locate and download the ProsusAI/finbert model from the HFace hub and select 3 to 5 stock market headlines to classify the sentiment of the financial content.
3. Download the Microsoft DialoGPT-large model, a “large-scale pretrained dialogue response generation model for multiturn conversations” (Zhang et al., 2020). The model has been trained on 147M multi-turn dialogues from Reddit. Download the code snippet provided on the HFace hub to your notebook, make the necessary changes/additions to the code and try chatting. Chat for 5 lines or more.
4. Review the Facebook Wav2Vec2 model (Wav2Vec2-Base-960h), a speech recognition model that learns the structure of speech from raw audio. Create a .wav audio file using the HFace hub interface to record your voice directly from the browser. When satisfied with a recording of 10 words or so after playback through the same interface, save it as “audio only.wav” or whatever name you prefer. Use the code below to transcribe the audio: load it into your notebook and convert your audio to text (speech recognition).
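For tasks 1 and 2, a minimal sketch using the standard pipeline API; the mask sentence and headlines are placeholder examples:

```python
from transformers import pipeline

# Task 1: masked language modelling with distilbert-base-uncased; the model
# suggests fillers for the [MASK] token given 5-10 words of context.
fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")
print(fill_mask("The central bank reacted [MASK] to the latest inflation figures."))

# Task 2: financial sentiment with ProsusAI/finbert on sample headlines.
finbert = pipeline("text-classification", model="ProsusAI/finbert")
headlines = [
    "Shares surge as quarterly earnings beat expectations.",
    "Regulator opens investigation into accounting irregularities.",
    "Company announces dividend in line with previous guidance.",
]
print(finbert(headlines))
```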
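For task 3, a condensed sketch of the multi-turn chat loop in the style of the snippet on the model card; the five-turn loop and generation settings are assumptions you can adjust:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# Chat for five turns, appending each exchange to the running history.
chat_history_ids = None
for step in range(5):
    # Encode the user's line plus the end-of-string token.
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token,
                               return_tensors="pt")
    bot_input_ids = (torch.cat([chat_history_ids, new_ids], dim=-1)
                     if chat_history_ids is not None else new_ids)
    # Generate a response conditioned on the whole conversation so far.
    chat_history_ids = model.generate(bot_input_ids, max_length=1000,
                                      pad_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],
                                skip_special_tokens=True)
    print("DialoGPT:", response)
```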
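For task 4, a minimal transcription sketch, assuming librosa for loading the recording and the file name suggested above:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Load your recording; the model expects 16 kHz mono audio. Replace the
# file name with whatever you saved from the hub interface.
speech, _ = librosa.load("audio only.wav", sr=16000)

# Tokenise the waveform, run the model and greedy-decode the transcription.
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```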
Activity 3: Report and Reflection
Writing up your recent practical experiences with transformers will help you demonstrate skills and knowledge associated with:
- Using transformers in NLP
- Different types of NLP tasks
- Ethics of NLP when using transformers to automatically generate text
Beyond acquiring these skills, the reflection will help you determine if NLP is an area of interest for your future professional work.
For this activity:
1. Gather your code, outputs, comments and references from your notebook to form the body of your report (approximately 500 words). Follow the chronology spelt out for Activities 1 and 2 to help structure and lay out your report. Feel free to use any literature to highlight aspects of your report.
2. Conclude your report with a short reflection of 250 words or less covering any ‘Aha’ moments and what you found exciting or interesting, e.g., a favourite big pre-trained model. You may also have views you would like to share on the ethics of NLP, given your recent experiences with auto-generation of text and chatting with a bot through multiple turns.