{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/rmcpantoja/piper/blob/master/notebooks/piper_multilingual_training_notebook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eK3nmYDB6C1a"
},
"source": [
"# <font color=\"pink\"> **[Piper](https://github.com/rhasspy/piper) training notebook.**\n",
"## ![Piper logo](https://contribute.rhasspy.org/img/logo.png)\n",
"\n",
"---\n",
"\n",
"- Notebook made by [rmcpantoja](http://github.com/rmcpantoja)\n",
"- Collaborator: [Xx_Nessu_xX](https://fakeyou.com/profile/Xx_Nessu_xX)\n",
"\n",
"---\n",
"\n",
"# Notes:\n",
"\n",
"- <font color=\"orange\">**Things in orange mean that they are important.**"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AICh6p5OJybj"
},
"source": [
"# <font color=\"pink\">🔧 ***First steps.*** 🔧"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "qyxSMuzjfQrz"
},
"outputs": [],
"source": [
"#@markdown ## <font color=\"pink\"> **Google Colab Anti-Disconnect.** 🔌\n",
"#@markdown ---\n",
"#@markdown #### Avoid automatic disconnection. Still, it will disconnect after <font color=\"orange\">**6 to 12 hours**</font>.\n",
"\n",
"import IPython\n",
"js_code = '''\n",
"function ClickConnect(){\n",
"console.log(\"Working\");\n",
"document.querySelector(\"colab-toolbar-button#connect\").click()\n",
"}\n",
"setInterval(ClickConnect,60000)\n",
"'''\n",
"display(IPython.display.Javascript(js_code))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ygxzp-xHTC7T"
},
"outputs": [],
"source": [
"#@markdown ## <font color=\"pink\"> **Check GPU type.** 👁️\n",
"#@markdown ---\n",
"#@markdown #### A higher capable GPU can lead to faster training speeds. By default, you will have a <font color=\"orange\">**Tesla T4**</font>.\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "sUNjId07JfAK"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **Mount Google Drive.** 📂\n",
"from google.colab import drive\n",
"drive.mount('/content/drive', force_remount=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "_XwmTVlcUgCh"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **Install software.** 📦\n",
"\n",
"#@markdown ####In this cell the synthesizer and its necessary dependencies to execute the training will be installed. (this may take a while)\n",
"\n",
"#@markdown #### <font color=\"orange\">**Do you want to use the patch?**\n",
"#@markdown The patch provides the ability to export audio files to the output folder and save a single model while training.\n",
"usepatch = True #@param {type:\"boolean\"}\n",
"#@markdown ---\n",
"# clone:\n",
"!git clone -q https://github.com/rmcpantoja/piper\n",
"%cd /content/piper/src/python\n",
"!wget -q \"https://raw.githubusercontent.com/coqui-ai/TTS/dev/TTS/bin/resample.py\"\n",
"#!pip install -q -r requirements.txt\n",
"!pip install -q cython>=0.29.0 piper-phonemize==1.1.0 librosa>=0.9.2 numpy>=1.19.0 onnxruntime>=1.11.0 pytorch-lightning==1.7.0 torch==1.11.0\n",
"!pip install -q torchtext==0.12.0 torchvision==0.12.0\n",
"# fixing recent compativility isswes:\n",
"!pip install -q torchaudio==0.11.0 torchmetrics==0.11.4\n",
"!bash build_monotonic_align.sh\n",
"!apt-get install -q espeak-ng\n",
"# download patches:\n",
"if usepatch:\n",
" print(\"Downloading patch...\")\n",
" !gdown -q \"1EWEb7amo1rgFGpBFfRD4BKX3pkjVK1I-\" -O \"/content/piper/src/python/patch.zip\"\n",
" !unzip -o -q \"patch.zip\"\n",
"%cd /content"
]
},
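{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **(Optional) Check the installation.** ✅\n",
"#@markdown A small optional sanity check added to this notebook, assuming the install cell above finished without errors: it verifies that the cloned repository, espeak-ng and a CUDA-enabled PyTorch are available before you continue.\n",
"import os\n",
"import torch\n",
"\n",
"# The training package is expected at this path inside the cloned repository:\n",
"assert os.path.isdir(\"/content/piper/src/python/piper_train\"), \"piper_train not found, re-run the install cell\"\n",
"# espeak-ng is used by piper-phonemize to turn text into phonemes:\n",
"!espeak-ng --version\n",
"# Training on CPU is impractically slow, so make sure a GPU is visible:\n",
"print(\"CUDA available:\", torch.cuda.is_available())"
]
},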
{
"cell_type": "markdown",
"metadata": {
"id": "A3bMzEE0V5Ma"
},
"source": [
"# <font color=\"pink\"> 🤖 ***Training.*** 🤖"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "SvEGjf0aV8eg"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **1. Extract dataset.** 📥\n",
"#@markdown ####Important: the audios must be in <font color=\"orange\">**wav format, (16000 or 22050hz, 16-bits, mono), and, for convenience, numbered. Example:**\n",
"\n",
"#@markdown * <font color=\"orange\">**1.wav**</font>\n",
"#@markdown * <font color=\"orange\">**2.wav**</font>\n",
"#@markdown * <font color=\"orange\">**3.wav**</font>\n",
"#@markdown * <font color=\"orange\">**.....**</font>\n",
"\n",
"#@markdown ---\n",
"\n",
"%cd /content\n",
"!mkdir /content/dataset\n",
"%cd /content/dataset\n",
"!mkdir /content/dataset/wavs\n",
"#@markdown ### Audio dataset path to unzip:\n",
"zip_path = \"/content/drive/MyDrive/Wavs.zip\" #@param {type:\"string\"}\n",
"!unzip \"{zip_path}\" -d /content/dataset/wavs\n",
"#@markdown ---"
]
},
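{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **(Optional) Check the audio format.** 🎧\n",
"#@markdown A small optional check sketched for this notebook: it uses Python's built-in wave module to confirm the extracted files match the format described above (mono, 16-bit, 16000 or 22050 Hz). It assumes the files were extracted to /content/dataset/wavs.\n",
"import glob\n",
"import wave\n",
"\n",
"wav_paths = sorted(glob.glob(\"/content/dataset/wavs/**/*.wav\", recursive=True))\n",
"bad_files = []\n",
"for path in wav_paths:\n",
"    with wave.open(path, \"rb\") as wav_file:\n",
"        channels = wav_file.getnchannels()\n",
"        sample_width = wav_file.getsampwidth()  # 2 bytes == 16-bit\n",
"        rate = wav_file.getframerate()\n",
"    if channels != 1 or sample_width != 2 or rate not in (16000, 22050):\n",
"        bad_files.append((path, channels, sample_width * 8, rate))\n",
"print(f\"Checked {len(wav_paths)} files, {len(bad_files)} need attention.\")\n",
"for path, channels, bits, rate in bad_files[:20]:\n",
"    print(f\"{path}: {channels} channel(s), {bits}-bit, {rate} Hz\")"
]
},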
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "E0W0OCvXXvue"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **2. Upload the transcript file.** 📝\n",
"#@markdown ---\n",
"#@markdown ####<font color=\"orange\">**Important: the transcription means writing what the character says in each of the audios, and it must have the following structure:**\n",
"\n",
"#@markdown ##### <font color=\"orange\">For a single-speaker dataset:\n",
"#@markdown * wavs/1.wav|This is what my character says in audio 1.\n",
"#@markdown * wavs/2.wav|This, the text that the character says in audio 2.\n",
"#@markdown * ...\n",
"\n",
"#@markdown ##### <font color=\"orange\">For a multi-speaker dataset:\n",
"\n",
"#@markdown * wavs/speaker1audio1.wav|speaker1|This is what the first speaker says.\n",
"#@markdown * wavs/speaker1audio2.wav|speaker1|This is another audio of the first speaker.\n",
"#@markdown * wavs/speaker2audio1.wav|speaker2|This is what the second speaker says in the first audio.\n",
"#@markdown * wavs/speaker2audio2.wav|speaker2|This is another audio of the second speaker.\n",
"#@markdown * ...\n",
"\n",
"#@markdown And so on. In addition, the transcript must be in a <font color=\"orange\">**.csv format. (UTF-8 without BOM)**\n",
"\n",
"#@markdown ---\n",
"%cd /content/dataset\n",
"from google.colab import files\n",
"!rm /content/dataset/metadata.csv\n",
"listfn, length = files.upload().popitem()\n",
"if listfn != \"metadata.csv\":\n",
" !mv \"$listfn\" metadata.csv\n",
"%cd .."
]
},
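{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **(Optional) Check the transcript.** 🔎\n",
"#@markdown A small optional check sketched for this notebook, assuming the transcript follows the wavs/file.wav|text (single-speaker) or wavs/file.wav|speaker|text (multi-speaker) pattern shown above. It flags rows with an unexpected number of fields and rows whose audio file is missing.\n",
"import os\n",
"\n",
"dataset_dir = \"/content/dataset\"\n",
"problems = 0\n",
"with open(os.path.join(dataset_dir, \"metadata.csv\"), encoding=\"utf-8\") as metadata:\n",
"    for line_number, line in enumerate(metadata, start=1):\n",
"        fields = line.rstrip(\"\\n\").split(\"|\")\n",
"        # Single-speaker rows have 2 fields, multi-speaker rows have 3:\n",
"        if len(fields) not in (2, 3) or not fields[-1].strip():\n",
"            print(f\"Line {line_number}: unexpected format -> {line.strip()}\")\n",
"            problems += 1\n",
"            continue\n",
"        if not os.path.exists(os.path.join(dataset_dir, fields[0])):\n",
"            print(f\"Line {line_number}: missing audio file {fields[0]}\")\n",
"            problems += 1\n",
"print(f\"Done. {problems} problem(s) found.\")"
]
},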
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "dOyx9Y6JYvRF"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **3. Preprocess dataset.** 🔄\n",
"\n",
"import os\n",
"#@markdown ### First of all, select the language of your dataset.\n",
"language = \"English (U.S.)\" #@param [\"Català\", \"Dansk\", \"Deutsch\", \"Ελληνικά\", \"English (British)\", \"English (U.S.)\", \"Español\", \"Español (latinoamericano)\", \"Suomi\", \"Français\", \"Magyar\", \"Icelandic\", \"Italiano\", \"ქართული\", \"қазақша\", \"Lëtzebuergesch\", \"नेपाली\", \"Nederlands\", \"Norsk\", \"Polski\", \"Português (Brasil)\", \"Română\", \"Русский\", \"Српски\", \"Svenska\", \"Kiswahili\", \"Türkçe\", \"украї́нська\", \"Tiếng Việt\", \"简体中文\"]\n",
"#@markdown ---\n",
"# language definition:\n",
"languages = {\n",
" \"Català\": \"ca\",\n",
" \"Dansk\": \"da\",\n",
" \"Deutsch\": \"de\",\n",
" \"Ελληνικά\": \"grc\",\n",
" \"English (British)\": \"en\",\n",
" \"English (U.S.)\": \"en-us\",\n",
" \"Español\": \"es\",\n",
" \"Español (latinoamericano)\": \"es-419\",\n",
" \"Suomi\": \"fi\",\n",
" \"Français\": \"fr\",\n",
" \"Magyar\": \"hu\",\n",
" \"Icelandic\": \"is\",\n",
" \"Italiano\": \"it\",\n",
" \"ქართული\": \"ka\",\n",
" \"қазақша\": \"kk\",\n",
" \"Lëtzebuergesch\": \"lb\",\n",
" \"नेपाली\": \"ne\",\n",
" \"Nederlands\": \"nl\",\n",
" \"Norsk\": \"nb\",\n",
" \"Polski\": \"pl\",\n",
" \"Português (Brasil)\": \"pt-br\",\n",
" \"Română\": \"ro\",\n",
" \"Русский\": \"ru\",\n",
" \"Српски\": \"sr\",\n",
" \"Svenska\": \"sv\",\n",
" \"Kiswahili\": \"sw\",\n",
" \"Türkçe\": \"tr\",\n",
" \"украї́нська\": \"uk\",\n",
" \"Tiếng Việt\": \"vi\",\n",
" \"简体中文\": \"zh\"\n",
"}\n",
"\n",
"def _get_language(code):\n",
" return languages[code]\n",
"\n",
"final_language = _get_language(language)\n",
"#@markdown ### Choose a name for your model:\n",
"model_name = \"Test\" #@param {type:\"string\"}\n",
"#@markdown ---\n",
"# output:\n",
"#@markdown ### Choose the working folder: (recommended to save to Drive)\n",
"\n",
"#@markdown The working folder will be used in preprocessing, but also in training the model.\n",
"output_path = \"/content/drive/MyDrive/colab/piper\" #@param {type:\"string\"}\n",
"output_dir = output_path+\"/\"+model_name\n",
"if not os.path.exists(output_dir):\n",
" os.makedirs(output_dir)\n",
"#@markdown ---\n",
"#@markdown ### Choose dataset format:\n",
"dataset_format = \"ljspeech\" #@param [\"ljspeech\", \"mycroft\"]\n",
"#@markdown ---\n",
"#@markdown ### Is this a single speaker dataset? Otherwise, uncheck:\n",
"single_speaker = True #@param {type:\"boolean\"}\n",
"if single_speaker:\n",
" force_sp = \" --single-speaker\"\n",
"else:\n",
" force_sp = \"\"\n",
"#@markdown ---\n",
"#@markdown ### Select the sample rate of the dataset:\n",
"sample_rate = \"22050\" #@param [\"16000\", \"22050\"]\n",
"#@markdown ---\n",
"%cd /content/piper/src/python\n",
"#@markdown ### Do you want to train using this sample rate, but your audios don't have it?\n",
"#@markdown The resampler helps you do it quickly!\n",
"resample = False #@param {type:\"boolean\"}\n",
"if resample:\n",
" !python resample.py --input_dir \"/content/dataset/wavs\" --output_dir \"/content/dataset/wavs_resampled\" --output_sr {sample_rate} --file_ext \"wav\"\n",
" !mv /content/dataset/wavs_resampled/* /content/dataset/wavs\n",
"#@markdown ---\n",
"\n",
"!python -m piper_train.preprocess \\\n",
" --language {final_language} \\\n",
" --input-dir /content/dataset \\\n",
" --output-dir \"{output_dir}\" \\\n",
" --dataset-format {dataset_format} \\\n",
" --sample-rate {sample_rate} \\\n",
" {force_sp}"
]
},
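{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **(Optional) Inspect the preprocessed data.** 📊\n",
"#@markdown A small optional cell sketched for this notebook, assuming the preprocessing step writes a config.json and a dataset.jsonl into the working folder. It prints the utterance count and a few values from the config so you can confirm everything looks right before training.\n",
"import json\n",
"import os\n",
"\n",
"config_path = os.path.join(output_dir, \"config.json\")\n",
"dataset_path = os.path.join(output_dir, \"dataset.jsonl\")\n",
"if os.path.exists(config_path) and os.path.exists(dataset_path):\n",
"    with open(config_path, encoding=\"utf-8\") as config_file:\n",
"        config = json.load(config_file)\n",
"    with open(dataset_path, encoding=\"utf-8\") as dataset_file:\n",
"        utterances = sum(1 for _ in dataset_file)\n",
"    print(f\"Utterances: {utterances}\")\n",
"    print(f\"Sample rate: {config.get('audio', {}).get('sample_rate')}\")\n",
"    print(f\"Speakers: {config.get('num_speakers')}\")\n",
"else:\n",
"    print(\"config.json or dataset.jsonl not found; re-run the preprocessing cell.\")"
]
},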
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ickQlOCRjkBL"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **4. Settings.** 🧰\n",
"import json\n",
"import ipywidgets as widgets\n",
"from IPython.display import display\n",
"from google.colab import output\n",
"import os\n",
"#@markdown ### Select the action to train this dataset:\n",
"\n",
"#@markdown * The option to continue a training is self-explanatory. If you've previously trained a model with free colab, your time is up and you're considering training it some more, this is ideal for you. You just have to set the same settings that you set when you first trained this model.\n",
"#@markdown * The option to convert a single-speaker model to a multi-speaker model is self-explanatory, and for this it is important that you have processed a dataset that contains text and audio from all possible speakers that you want to train in your model.\n",
"#@markdown * The finetune option is used to train a dataset using a pretrained model, that is, train on that data. This option is ideal if you want to train a very small dataset (more than five minutes recommended).\n",
"#@markdown * The train from scratch option builds features such as dictionary and speech form from scratch, and this may take longer to converge. For this, hours of audio (8 at least) are recommended, which have a large collection of phonemes.\n",
"\n",
"action = \"finetune\" #@param [\"Continue training\", \"convert single-speaker to multi-speaker model\", \"finetune\", \"train from scratch\"]\n",
"#@markdown ---\n",
"if action == \"Continue training\":\n",
" if os.path.exists(f\"{output_dir}/lightning_logs/version_0/checkpoints/last.ckpt\"):\n",
" ft_command = f'--resume_from_checkpoint \"{output_dir}/lightning_logs/version_0/checkpoints/last.ckpt\" '\n",
" print(f\"Continuing {model_name}'s training at: {output_dir}/lightning_logs/version_0/checkpoints/last.ckpt\")\n",
" else:\n",
" raise Exception(\"Training cannot be continued as there is no checkpoint to continue at.\")\n",
"elif action == \"finetune\":\n",
" if os.path.exists(f\"{output_dir}/lightning_logs/version_0/checkpoints/last.ckpt\"):\n",
" raise Exception(\"Oh no! You have already trained this model before, you cannot choose this option since your progress will be lost, and then your previous time will not count. Please select the option to continue a training.\")\n",
" else:\n",
" ft_command = '--resume_from_checkpoint \"/content/pretrained.ckpt\" '\n",
"elif action == \"convert single-speaker to multi-speaker model\":\n",
" if not single_speaker:\n",
" ft_command = '--resume_from_single_speaker_checkpoint \"/content/pretrained.ckpt\" '\n",
" else:\n",
" raise Exception(\"This dataset is not a multi-speaker dataset!\")\n",
"else:\n",
" ft_command = \"\"\n",
"if action== \"convert single-speaker to multi-speaker model\" or action == \"finetune\":\n",
" try:\n",
" with open('/content/piper/notebooks/pretrained_models.json') as f:\n",
" pretrained_models = json.load(f)\n",
" if final_language in pretrained_models:\n",
" models = pretrained_models[final_language]\n",
" model_options = [(model_name, model_name) for model_name, model_url in models.items()]\n",
" model_dropdown = widgets.Dropdown(description = \"Choose pretrained model\", options=model_options)\n",
" download_button = widgets.Button(description=\"Download\")\n",
" def download_model(btn):\n",
" model_name = model_dropdown.value\n",
" model_url = pretrained_models[final_language][model_name]\n",
" print(\"Downloading pretrained model...\")\n",
" if model_url.startswith(\"1\"):\n",
" !gdown -q \"{model_url}\" -O \"/content/pretrained.ckpt\"\n",
" elif model_url.startswith(\"https://drive.google.com/file/d/\"):\n",
" !gdown -q \"{model_url}\" -O \"/content/pretrained.ckpt\" --fuzzy\n",
" else:\n",
" !wget -q \"{model_url}\" -O \"/content/pretrained.ckpt\"\n",
" model_dropdown.close()\n",
" download_button.close()\n",
" output.clear()\n",
" if os.path.exists(\"/content/pretrained.ckpt\"):\n",
" print(\"Model downloaded!\")\n",
" else:\n",
" raise Exception(\"Couldn't download the pretrained model!\")\n",
" download_button.on_click(download_model)\n",
" display(model_dropdown, download_button)\n",
" else:\n",
" raise Exception(f\"There are no pretrained models available for the language {final_language}\")\n",
" except FileNotFoundError:\n",
" raise Exception(\"The pretrained_models.json file was not found.\")\n",
"else:\n",
" print(\"Warning: this model will be trained from scratch. You need at least 8 hours of data for everything to work decent. Good luck!\")\n",
"#@markdown ### Choose batch size based on this dataset:\n",
"batch_size = 12 #@param {type:\"integer\"}\n",
"#@markdown ---\n",
"validation_split = 0.01\n",
"#@markdown ### Choose the quality for this model:\n",
"\n",
"#@markdown * x-low - 16Khz audio, 5-7M params\n",
"#@markdown * medium - 22.05Khz audio, 15-20 params\n",
"#@markdown * high - 22.05Khz audio, 28-32M params\n",
"quality = \"medium\" #@param [\"high\", \"x-low\", \"medium\"]\n",
"#@markdown ---\n",
"#@markdown ### For how many epochs to save training checkpoints?\n",
"#@markdown The larger your dataset, you should set this saving interval to a smaller value, as epochs can progress longer time.\n",
"checkpoint_epochs = 5 #@param {type:\"integer\"}\n",
"#@markdown ---\n",
"#@markdown ### Step interval to generate model samples:\n",
"log_every_n_steps = 1000 #@param {type:\"integer\"}\n",
"#@markdown ---\n",
"#@markdown ### Training epochs:\n",
"max_epochs = 10000 #@param {type:\"integer\"}\n",
"#@markdown ---"
]
},
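{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **(Optional) Estimate steps per epoch.** 🧮\n",
"#@markdown A small optional helper sketched for this notebook, assuming dataset.jsonl (one utterance per line) exists in the working folder. A rough steps-per-epoch figure makes it easier to pick sensible values for the checkpoint and sample intervals above.\n",
"import math\n",
"import os\n",
"\n",
"dataset_path = os.path.join(output_dir, \"dataset.jsonl\")\n",
"if os.path.exists(dataset_path):\n",
"    with open(dataset_path, encoding=\"utf-8\") as dataset_file:\n",
"        utterances = sum(1 for _ in dataset_file)\n",
"    # Roughly validation_split of the data is held out for validation:\n",
"    train_utterances = max(1, int(utterances * (1 - validation_split)))\n",
"    steps_per_epoch = math.ceil(train_utterances / batch_size)\n",
"    print(f\"About {steps_per_epoch} training steps per epoch ({train_utterances} training utterances / batch size {batch_size}).\")\n",
"    print(f\"log_every_n_steps={log_every_n_steps} corresponds to roughly {log_every_n_steps / steps_per_epoch:.1f} epoch(s) between samples.\")\n",
"else:\n",
"    print(\"dataset.jsonl not found; run the preprocessing cell first.\")"
]
},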
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {
"background_save": true
},
"id": "X4zbSjXg2J3N"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **5. Train.** 🏋️‍♂️\n",
"#@markdown Run this cell to train your final model! If possible, some audio samples will be saved during training in the output folder.\n",
"\n",
"get_ipython().system(f'''\n",
"python -m piper_train \\\n",
"--dataset-dir \"{output_dir}\" \\\n",
"--accelerator 'gpu' \\\n",
"--devices 1 \\\n",
"--batch-size {batch_size} \\\n",
"--validation-split {validation_split} \\\n",
"--num-test-examples 2 \\\n",
"--quality {quality} \\\n",
"--checkpoint-epochs {checkpoint_epochs} \\\n",
"--log_every_n_steps {log_every_n_steps} \\\n",
"--max_epochs {max_epochs} \\\n",
"{ft_command}\\\n",
"--precision 32\n",
"''')"
]
},
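{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form"
},
"outputs": [],
"source": [
"#@markdown # <font color=\"pink\"> **(Optional) See what training has produced so far.** 📁\n",
"#@markdown A small optional cell sketched for this notebook. It lists the checkpoints under lightning_logs (the same location the settings cell resumes from) and any .wav samples saved to the working folder when the patch is enabled.\n",
"import glob\n",
"import os\n",
"\n",
"checkpoints = sorted(glob.glob(os.path.join(output_dir, \"lightning_logs\", \"*\", \"checkpoints\", \"*.ckpt\")))\n",
"samples = sorted(glob.glob(os.path.join(output_dir, \"*.wav\")))\n",
"print(f\"{len(checkpoints)} checkpoint(s):\")\n",
"for checkpoint in checkpoints:\n",
"    size_mb = os.path.getsize(checkpoint) / (1024 * 1024)\n",
"    print(f\"  {checkpoint} ({size_mb:.1f} MB)\")\n",
"print(f\"{len(samples)} audio sample(s):\")\n",
"for sample in samples[-10:]:\n",
"    print(f\"  {sample}\")"
]
},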
{
"cell_type": "markdown",
"metadata": {
"id": "6ISG085SYn85"
},
"source": [
"# Have you finished training and want to test the model?\n",
"\n",
"* If you want to run this model in any software that Piper integrates or the same Piper app, export your model using the [model exporter notebook](https://colab.research.google.com/github/rmcpantoja/piper/blob/master/notebooks/piper_model_exporter.ipynb)!\n",
"* Wait! I want to test this right now before exporting it to the supported format for Piper. Test your generated last.ckpt with [this notebook](https://colab.research.google.com/github/rmcpantoja/piper/blob/master/notebooks/piper_inference_(ckpt).ipynb)!"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"provenance": [],
"include_colab_link": true
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}