
Commit f6ab807

Authored by jmayank1511 (Mayank Jain, SW-TEGRA) and mohnishparmar
Fix broken links in multiple tutorials (#208)
* fix some broken links
* remove empty cell
* pin NeMo version
* chore: Update TTS customization notebook
* chore: TTS notebook description update

Co-authored-by: Mayank Jain (SW-TEGRA) <mayjain@nvidia.com>
Co-authored-by: mohnishp <mohnishp@nvidia.com>
1 parent: 5a2c68b · commit: f6ab807

7 files changed: 55 additions & 18 deletions

asr-customize-vocabulary-and-lexicon.ipynb

Lines changed: 2 additions & 2 deletions
@@ -86,7 +86,7 @@
 " <other_parameters>...\n",
 "```\n",
 "\n",
-"Refer to Riva [documentation](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/service-asr.html#pipeline-configuration) for build commands for supported models.\n",
+"Refer to Riva [documentation](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/asr/asr-pipeline-configuration.html) for build commands for supported models.\n",
 "\n",
 "\n"
 ]
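
For context, the linked page documents `riva-build` invocations; in rough outline such a command looks like the sketch below (run inside the Riva ServiceMaker container; the paths and the decoder flag are placeholders, and the exact flags differ per model, so treat this as a shape rather than a working command):

    !riva-build speech_recognition \
        /servicemaker-dev/asr.rmir \
        /servicemaker-dev/asr.riva \
        --decoder_type=greedy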
@@ -299,7 +299,7 @@
 "\n",
 "### Sample Applications\n",
 "\n",
-"Riva comes with various sample applications. They demonstrate how to use the APIs to build various applications. Refer to [Riva Sampple Apps](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/samples/index.html) for more information. \n",
+"Riva comes with various sample applications. They demonstrate how to use the APIs to build various applications. Refer to [Riva Sample Apps](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/samples/index.html) for more information. \n",
 "\n",
 "\n",
 "### Additional Resources\n",

asr-finetune-conformer-ctc-nemo.ipynb

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@
 "!pip install Cython\n",
 "\n",
 "## Install NeMo\n",
-"BRANCH = 'main'\n",
+"BRANCH = 'v1.23.0'\n",
 "!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]\n",
 "\n",
 "\"\"\"\n",

nmt-python-advanced-finetune-nmt-model-with-nemo.ipynb

Lines changed: 5 additions & 5 deletions
@@ -74,7 +74,7 @@
 "<a id='nmt_requirements_and_setup'></a>\n",
 "### Requirements and Setup\n",
 "\n",
-"This tutorial needs to be run from inside a NeMo docker container. If you are not running this tutorial through a NeMo docker container, please refer to the [Riva NMT Tutorials](https://ngc.nvidia.com/resources/riem1phmzvud:riva:riva_nmt_ea_tutorials)'s [README.md](https://ngc.nvidia.com/resources/riem1phmzvud:riva:riva_nmt_ea_tutorials/files?version=2.2.0-ea) to get started.\n",
+"This tutorial needs to be run from inside a NeMo docker container.\n",
 "\n",
 "Before we get into the Requirements and Setup, let us create a base directory for our work here. "
 ]
@@ -105,7 +105,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"NeMoBranch = \"main\"\n",
+"NeMoBranch = \"'v1.23.0'\"\n",
 "!git clone -b $NeMoBranch https://github.com/NVIDIA/NeMo $base_dir/NeMo"
 ]
 },
@@ -156,7 +156,7 @@
 "id": "6c69451f",
 "metadata": {},
 "source": [
-"2. Install the `nemo2riva` library from the [Riva Quick Start Guide](https://ngc.nvidia.com/resources/riem1phmzvud:riva:riva_quickstart)."
+"2. Install the `nemo2riva` library from [PyPI](https://pypi.org/project/nemo2riva/) or [GitHub](https://github.com/NVIDIA/nemo2riva)."
 ]
 },
 {
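
For reference, installing nemo2riva from PyPI and a typical export of a NeMo checkpoint to a .riva archive look roughly like this (the file names are placeholders and the exact options vary by nemo2riva version):

    !pip install nemo2riva
    # Export a trained NeMo checkpoint to a .riva archive (file names are placeholders)
    !nemo2riva --out model.riva model.nemo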
@@ -229,7 +229,7 @@
 "source": [
 "### Step 2. Data preprocessing\n",
 "\n",
-"Data preprocessing consists of multiple steps to improve the quality of the dataset. [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/machine_translation.html#data-cleaning-normalization-tokenization) provides detailed instructions about the 8-step data preprocessing for NMT. NeMo also provides a [jupyter notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Data_Preprocessing_and_Cleaning_for_NMT.ipynb) that takes users programmatically through the different preprocessing steps. Note that depending on the dataset, some or all preprocessing steps can be skipped.\n",
+"Data preprocessing consists of multiple steps to improve the quality of the dataset. [NeMo documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/nlp/machine_translation/machine_translation.html#data-cleaning-normalization-tokenization) provides detailed instructions about the 8-step data preprocessing for NMT. NeMo also provides a [jupyter notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Data_Preprocessing_and_Cleaning_for_NMT.ipynb) that takes users programmatically through the different preprocessing steps. Note that depending on the dataset, some or all preprocessing steps can be skipped.\n",
 "\n",
 "To simplify the fine-tuning process in the Riva NMT program, we have provided 3 preprocessing scripts through the NeMo repository. The input to these scripts will be the 2 parallel corpus (i.e., source and target language) data files. In this tutorial, we are using the Moses' version of the Scielo dataset, which directly provides us the source (`en_es.en`) and target (`en_es.es`) data files. If the dataset does not directly provide these files, then we first need to generate these 2 files from the dataset before using the preprocessing scripts.\n",
 "\n",
@@ -756,7 +756,7 @@
 "### Step 6. Deploying the fine-tuned NeMo NMT model on the Riva Speech Skills server.\n",
 "\n",
 "The NeMo-finetuned NMT model needs to be deployed on Riva Speech Skills server for inference. <br>\n",
-"Please follow the \"How to deploy a NeMo-finetuned NMT model on Riva Speech Skills server?\" tutorial from [Riva NMT Tutorials](https://ngc.nvidia.com/resources/riem1phmzvud:riva:riva_nmt_ea_tutorials) - This notebook covers deploying the .riva file obtained from Step 5, on Riva Speech Skills server."
+"Please follow the \"How to deploy a NeMo-finetuned NMT model on Riva Speech Skills server?\" tutorial from [Riva NMT Tutorials](https://github.com/nvidia-riva/tutorials/blob/main/nmt-python-advanced-deploy-nemo-nmt-model-on-riva.ipynb) - This notebook covers deploying the .riva file obtained from Step 5, on Riva Speech Skills server."
 ]
 },
 {

nmt-python-advanced-synthetic-data-generation.ipynb

Lines changed: 2 additions & 2 deletions
@@ -196,7 +196,7 @@
 "source": [
 "### Step 2. Data preprocessing\n",
 "\n",
-"Data preprocessing consists of multiple steps to improve the quality of the dataset. [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/machine_translation.html#data-cleaning-normalization-tokenization) provides detailed instructions about the 8-step data preprocessing for NMT. NeMo also provides a [jupyter notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Data_Preprocessing_and_Cleaning_for_NMT.ipynb) that takes users programmatically through the different preprocessing steps. Note that depending on the dataset, some or all preprocessing steps can be skipped.\n",
+"Data preprocessing consists of multiple steps to improve the quality of the dataset. [NeMo documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/nlp/machine_translation/machine_translation.html#data-cleaning-normalization-tokenization) provides detailed instructions about the 8-step data preprocessing for NMT. NeMo also provides a [jupyter notebook](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Data_Preprocessing_and_Cleaning_for_NMT.ipynb) that takes users programmatically through the different preprocessing steps. Note that depending on the dataset, some or all preprocessing steps can be skipped.\n",
 "\n",
 "To simplify the process in the Riva NMT program, we are only performing lang id filtering before data generation to get rid of any noise that may be present in the raw dataset. The input to these scripts will be the parallel corpus (i.e., source and target language) data files. In this tutorial, we are using the Moses' version of the Scielo dataset, which directly provides us the source (`en_es.en`) and target (`en_es.es`) data files. If the dataset does not directly provide these files, then we first need to generate these 2 files from the dataset before using the preprocessing scripts.\n",
 "\n",
@@ -324,7 +324,7 @@
 "source": [
 "### Step 4. Refer to the fine-tuning tutorial for using this data to customize the OOTB model.\n",
 "\n",
-"Lastly, follow the steps in \" in [Riva NMT Tutorials](https://ngc.nvidia.com/resources/riem1phmzvud:riva:riva_nmt_ea_tutorials) to use this data for customizing the OOTB model."
+"Lastly, follow the steps in \" in [Riva NMT Tutorials](https://github.com/nvidia-riva/tutorials/blob/main/nmt-python-advanced-finetune-nmt-model-with-nemo.ipynb) to use this data for customizing the OOTB model."
 ]
 }
 ],

nmt-python-basics.ipynb

Lines changed: 3 additions & 3 deletions
@@ -47,7 +47,7 @@
 "3. **Bilingual models** are used for translation from one source language to another target language. For example, the `en_de_24x6` model can be used to translate from English to German. Bilingual models have a single pair of language codes in their name. Use a bilingual model when you want the best possible performance for a specific language pair direction. Running bilingual models produces faster results compared to running multilingual models. \n",
 "\n",
 "To learn more about Riva NMT, refer to the Riva NMT EA documentation. \n",
-"For more information about the NMT model architecture and training, refer to the [NeMo NMT documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/machine_translation.html)."
+"For more information about the NMT model architecture and training, refer to the [NeMo NMT documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/nlp/machine_translation/machine_translation.html)."
 ]
 },
 {
@@ -96,7 +96,7 @@
 " * `Spanish (es) ASR, Spanish-to-English (es-en) NMT and English (en) TTS` models - The instructions to deploy Spanish (language code `es-US`) ASR model and English (`en-US`) TTS model can be found in the `config.sh` itself, as the latter section of this tutorial will cover using Speech-to-Speech (S2S) and Speech-to-Text (S2T) services. The model name corresponding to Spanish-English language pair can be found in the [table above](#nmt_language_pairs_supported).\n",
 "\n",
 "2. Install the Riva Client library. \n",
-"Follow the steps in the 'Running the Riva Client' in the Riva NMT EA Tutorials' [Overview section](https://ngc.nvidia.com/resources/riem1phmzvud:riva:riva_nmt_ea_tutorials) or [README.md](https://ngc.nvidia.com/resources/riem1phmzvud:riva:riva_nmt_ea_tutorials/files?version=2.2.0-ea) to install the Riva Client library. \n",
+"Follow the steps [here](https://github.com/nvidia-riva/python-clients?tab=readme-ov-file#installation) to install the Riva Client library. \n",
 "\n",
 "3. Install additional libraries needed to run this tutorial. "
 ]
@@ -225,7 +225,7 @@
 "id": "c4a5110e",
 "metadata": {},
 "source": [
-"To learn more about `NeuralMachineTranslationClient`, refer to the corresponding [docstring](https://github.com/nvidia-riva/python-clients/blob/main/riva/client/nmt.py#L13). \n",
+"To learn more about `NeuralMachineTranslationClient`, refer to the corresponding [docstring](https://github.com/nvidia-riva/python-clients/blob/main/riva/client/nmt.py#L33). \n",
 "\n",
 "Now we submit the request to the server."
 ]
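
For readers following along outside the notebook, a minimal sketch of such a request with the Riva Python client; the server URI and the `en_es_24x6` model name are assumptions, not values fixed by this diff:

    import riva.client

    # Assumed local Riva server; adjust the URI for your deployment
    auth = riva.client.Auth(uri="localhost:50051")
    nmt_client = riva.client.NeuralMachineTranslationClient(auth)

    # Translate a batch of texts with an assumed bilingual English->Spanish model
    response = nmt_client.translate(
        ["Molecular biology is fascinating."], "en_es_24x6", "en", "es")
    print(response.translations[0].text)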

tts-basics-customize-ssml.ipynb

Lines changed: 41 additions & 4 deletions
@@ -35,7 +35,8 @@
 "source": [
 "## Basics: Generating Speech with Riva TTS APIs\n",
 "\n",
-"The Riva TTS service is based on a two-stage pipeline: Riva generates a mel spectrogram using the first model, then uses the mel spectrogram to generate speech using the second model. This pipeline forms a text-to-speech system that enables you to synthesize natural sounding speech from raw transcripts without any additional information such as patterns or rhythms of speech.\n",
+"The Riva TTS service is based on a two-stage pipeline: Riva models like FastPitch and RadTTS++ first generate a mel-spectrogram and then generate\n",
+"speech using the HifiGAN model, while MagpieTTS Multilingual generates tokens and then generates speech using the Audio Codec model. This pipeline forms a text-to-speech system that enables you to synthesize natural sounding speech from raw transcripts without any additional information such as patterns or rhythms of speech.\n",
 "\n",
 "Riva provides two state-of-the-art voices (one male and one female) for English, that can easily be deployed with the Riva Quick Start scripts. Riva also supports easy customization of TTS in various ways, to meet your specific needs. \n",
 "Subsequent Riva releases will include features such as model registration to support multiple languages/voices with the same API and support for resampling to alternative sampling rates. \n",
@@ -114,7 +115,7 @@
 "source": [
 "### TTS modes\n",
 "\n",
-"Riva TTS supports both streaming and batch inference modes. In batch mode, audio is not returned until the full audio sequence for the requested text is generated, which can achieve higher throughput. But when making a streaming request, audio chunks are returned as soon as they are generated, significantly reducing the latency (as measured by time to first audio) for large requests. <br> \n",
+"Riva TTS supports both streaming and offline inference modes. In offline mode, audio is not returned until the full audio sequence for the requested text is generated, which can achieve higher throughput. But when making a streaming request, audio chunks are returned as soon as they are generated, significantly reducing the latency (as measured by time to first audio) for large requests. <br> \n",
 "\n",
 "\n",
 "\n",
@@ -153,7 +154,8 @@
 "- ``language_code`` - Language of the generated audio. ``en-US`` represents English (US) and is currently the only language supported OOTB.\n",
 "- ``encoding`` - Type of audio encoding to generate. ``LINEAR_PCM`` and ``OGGOPUS`` encodings are supported.\n",
 "- ``sample_rate_hz`` - Sample rate of the generated audio. Depends on the microphone and is usually ``22khz`` or ``44khz``.\n",
-"- ``voice_name`` - Voice used to synthesize the audio. Currently, Riva offers two OOTB voices (``English-US.Female-1``, ``English-US.Male-1``)."
+"- ``voice_name`` - Voice used to synthesize the audio. Currently, Riva offers two OOTB voices (``English-US.Female-1``, ``English-US.Male-1``).\n",
+"- ``custom_pronunciation`` - Dictionary of words and their custom pronunciations. For ease of use, the python API accepts a dictionary of words and their custom pronunciations, while the gRPC API accepts a string of comma-separated entries of words and their custom pronunciations with the format ``word1 pronunciation1,word2 pronunciation2``."
 ]
 },
 {
@@ -227,6 +229,15 @@
 "Let's look at customization of Riva TTS with these SSML tags in some detail."
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"\n",
+"##### Note\n",
+"Magpie TTS Multilingual supports only the ``phoneme`` tag."
+]
+},
 {
 "attachments": {},
 "cell_type": "markdown",
@@ -332,7 +343,7 @@
 "<audio controls src=\"https://raw.githubusercontent.com/nvidia-riva/tutorials/stable/audio_samples/tts_samples/ssml_sample_0.wav\" type=\"audio/ogg\"></audio>\n",
 "\n",
 "#### Note\n",
-"If the audio controls are not visible throughout the notebook, open the notebook in github dev or view it in the [riva docs](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/tutorials/tts-python-basics-and-customization-with-ssml.html)\n"
+"If the audio controls are not visible throughout the notebook, open the notebook in github dev or view it in the [riva docs](https://docs.nvidia.com/deeplearning/riva/user-guide/docs/tutorials/tts-basics-customize-ssml.html)\n"
 ]
 },
 {
@@ -457,6 +468,10 @@
 "#### Arpabet\n",
 "The full list of phonemes in the CMUdict can be found at [cmudict.phone](https://github.com/cmusphinx/cmudict/blob/master/cmudict.phones). The list of supported symbols with stress can be found at [cmudict.symbols](https://github.com/cmusphinx/cmudict/blob/master/cmudict.symbols). For a mapping of these phones to English sounds, refer to the [ARPABET Wikipedia page](https://en.wikipedia.org/wiki/ARPABET).\n",
 "\n",
+"#### Custom pronunciations\n",
+"\n",
+"We also support passing custom pronunciations for words with the request, which will override the default pronunciation for those words in that request. For ease of use, the python API accepts a dictionary of words and their custom pronunciations, while the gRPC API accepts a string of comma-separated entries of words and their custom pronunciations with the format ``word1 pronunciation1,word2 pronunciation2``.\n",
+"\n",
 "Let's look at an example showing this custom pronunciation for Riva TTS:"
 ]
 },
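
To illustrate the two formats just described — the dictionary accepted by the Python API and the comma-separated string accepted by the gRPC API — a small sketch; `to_grpc_string` is a hypothetical helper, not part of the client library:

    # Python API form: a dictionary of words and their pronunciations
    custom_pronunciation = {"tomato": "təˈmɑˌtoʊ", "gif": "ˈdʒɪf"}

    def to_grpc_string(pronunciations: dict) -> str:
        """Hypothetical helper: flatten the dict into the gRPC
        'word1 pronunciation1,word2 pronunciation2' string format."""
        return ",".join(f"{word} {pron}" for word, pron in pronunciations.items())

    print(to_grpc_string(custom_pronunciation))  # tomato təˈmɑˌtoʊ,gif ˈdʒɪf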
@@ -481,11 +496,28 @@
 "ssml_text = '<speak>You say <phoneme alphabet=\"ipa\" ph=\"təˈmeɪˌtoʊ\">tomato</phoneme>, I say <phoneme alphabet=\"ipa\" ph=\"təˈmɑˌtoʊ\">tomato</phoneme>.</speak>'\n",
 "# Older arpabet version\n",
 "# ssml_text = '<speak>You say <phoneme alphabet=\"x-arpabet\" ph=\"{@T}{@AH0}{@M}{@EY1}{@T}{@OW2}\">tomato</phoneme>, I say <phoneme alphabet=\"x-arpabet\" ph=\"{@T}{@AH0}{@M}{@AA1}{@T}{@OW2}\">tomato</phoneme>.</speak>'\n",
+"custom_pronunciation = {\n",
+" \"tomato\": \"təˈmeɪˌtoʊ\"\n",
+"}\n",
+"print(\"Raw Text: \", raw_text)\n",
+"print(\"SSML Text: \", ssml_text)\n",
+"\n",
+"req[\"text\"] = ssml_text\n",
+"# Request to Riva TTS to synthesize audio\n",
+"resp = riva_tts.synthesize(**req)\n",
+"\n",
+"# Playing the generated audio from Riva TTS request\n",
+"audio_samples = np.frombuffer(resp.audio, dtype=np.int16)\n",
+"ipd.display(ipd.Audio(audio_samples, rate=sample_rate_hz))\n",
+"\n",
+"# Passing custom pronunciation dictionary\n",
+"ssml_text = '<speak>You say tomato, I say <phoneme alphabet=\"ipa\" ph=\"təˈmɑˌtoʊ\">tomato</phoneme>.</speak>'\n",
 "\n",
 "print(\"Raw Text: \", raw_text)\n",
 "print(\"SSML Text: \", ssml_text)\n",
 "\n",
 "req[\"text\"] = ssml_text\n",
+"req[\"custom_pronunciation\"] = custom_pronunciation\n",
 "# Request to Riva TTS to synthesize audio\n",
 "resp = riva_tts.synthesize(**req)\n",
 "\n",
@@ -500,6 +532,11 @@
 "source": [
 "#### Expected results if you run the tutorial:\n",
 "`You say <phoneme alphabet=\"ipa\" ph=\"təˈmeɪˌtoʊ\">tomato</phoneme>, I say <phoneme alphabet=\"ipa\" ph=\"təˈmɑˌtoʊ\">tomato</phoneme>.` \n",
+"\n",
+"<audio controls src=\"https://raw.githubusercontent.com/nvidia-riva/tutorials/stable/audio_samples/tts_samples/ssml_sample_9.wav\" type=\"audio/wav\"></audio> \n",
+"\n",
+"`You say tomato, I say <phoneme alphabet=\"ipa\" ph=\"təˈmɑˌtoʊ\">tomato</phoneme>.`\n",
+"\n",
 "<audio controls src=\"https://raw.githubusercontent.com/nvidia-riva/tutorials/stable/audio_samples/tts_samples/ssml_sample_9.wav\" type=\"audio/wav\"></audio> \n"
 ]
 },

tts-finetune-nemo.ipynb

Lines changed: 1 addition & 1 deletion
@@ -98,7 +98,7 @@
 "outputs": [],
 "source": [
 "!pip install nvidia-pyindex\n",
-"!pip install nemo_toolkit['all']\n",
+"!pip install nemo_toolkit['all']==1.23.0\n",
 "!ngc registry resource download-version \"nvidia/riva/riva_quickstart:2.8.1\"\n",
 "!pip install \"riva_quickstart_v2.8.1/nemo2riva-2.8.1-py3-none-any.whl\"\n",
 "!pip install protobuf==3.20.0\n",
