How to extract music sources: bass, drums, vocals and other? – music separation with AI


In this article I will show how we can extract music sources: bass, drums, vocals, and other accompaniments, using neural networks.

Before you continue reading, please watch this short introduction:

Separation of individual instruments from arranged music is another area where machine learning
algorithms could help. Demucs solves this problem using neural networks.

The trained model (https://arxiv.org/pdf/1909.01174v1.pdf) uses a U-Net architecture which consists of two parts: an encoder and a decoder.
We feed the original track to the encoder input and, after processing, get bass, drums, vocals, and other accompaniments at the decoder output.
The encoder is connected to the decoder through an additional LSTM layer,
as well as through skip connections between the matching encoder and decoder layers.
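
To make the structure concrete, here is a minimal PyTorch sketch of this encoder-LSTM-decoder pattern with skip connections (a simplified illustration, not the real Demucs code: the layer counts, channel sizes, kernel parameters, and mono input are my assumptions):

import torch
import torch.nn as nn

class TinyDemucs(nn.Module):
    # simplified illustration of the Demucs-style architecture, not the original model
    def __init__(self, sources=4, channels=64):
        super().__init__()
        # encoder: strided 1D convolutions progressively downsample the waveform
        self.encoder = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, channels, 8, stride=4), nn.ReLU()),
            nn.Sequential(nn.Conv1d(channels, 2 * channels, 8, stride=4), nn.ReLU()),
        ])
        # bidirectional LSTM bottleneck connecting encoder and decoder
        self.lstm = nn.LSTM(2 * channels, channels, bidirectional=True, batch_first=True)
        # decoder: transposed convolutions upsample back to waveform length
        self.decoder = nn.ModuleList([
            nn.Sequential(nn.ConvTranspose1d(2 * channels, channels, 8, stride=4), nn.ReLU()),
            nn.ConvTranspose1d(channels, sources, 8, stride=4),
        ])

    def forward(self, mix):  # mix: (batch, 1, time), a mono mixture
        skips, x = [], mix
        for encode in self.encoder:
            x = encode(x)
            skips.append(x)  # stored for the skip connections
        x = self.lstm(x.permute(0, 2, 1))[0].permute(0, 2, 1)
        for decode in self.decoder:
            skip = skips.pop()
            x = x + skip[..., :x.shape[-1]]  # skip connection from the matching encoder layer
            x = decode(x)
        return x  # (batch, sources, time): bass, drums, vocals, other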

[image: neural network architecture]

OK, we have the neural network architecture, but what about the training data?
This is another difficulty, which can be handled by the unlabeled data remixing pipeline.
We start with an additional classifier which finds the parts of music that do not contain a specific instrument, for example drums.
Then we mix such a part with a well-known drums signal and separate the tracks using the model.

Now we can compare the separation results with the known drums track and with the mixture of the other instruments.

Based on this, we can calculate the loss (L1 loss) and use it during the training.

Additionally, we set different loss weights for the known track and for the others.
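
In PyTorch this weighted L1 loss could look as follows (a hedged sketch; the weight value and tensor names are illustrative assumptions, not the paper's exact settings):

import torch.nn.functional as F

def remix_loss(pred_known, true_known, pred_rest, true_rest, known_weight=2.0):
    # L1 loss on the known injected track (e.g. drums)...
    known = F.l1_loss(pred_known, true_known)
    # ...and on the separated mixture of the remaining instruments
    rest = F.l1_loss(pred_rest, true_rest)
    # different weights for the known track and for the others
    return known_weight * known + rest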

training data

The whole UI is kept in a docker image, thus you can simply try it:

# for CPU
docker run --name aiaudioseparation -it -p 8000:8000 -v $(pwd)/checkpoints:/root/.cache/torch/hub/checkpoints --rm qooba/aimusicseparation

# for GPU
docker run --name aiaudioseparation --gpus all -it -p 8000:8000 -v $(pwd)/checkpoints:/root/.cache/torch/hub/checkpoints --rm qooba/aimusicseparation
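
If you prefer to call the model from Python instead of the web UI, the demucs package can also be used directly (a hedged sketch; the model name "htdemucs" and the exact API are assumptions that vary between demucs versions):

import torchaudio
from demucs.pretrained import get_model
from demucs.apply import apply_model

model = get_model("htdemucs")  # assumed model name, check your demucs version
wav, sr = torchaudio.load("mytrack.wav")  # demucs models expect 44.1 kHz stereo
# apply_model takes (batch, channels, time) and returns (batch, sources, channels, time)
sources = apply_model(model, wav[None])[0]
for name, source in zip(model.sources, sources):
    torchaudio.save(f"{name}.wav", source, sr)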

[image: web UI]

AI Scissors – sharp cut with neural networks


Cutting out a photo background is one of the most tedious graphical tasks. In this article I will show how to simplify it using neural networks.

I will use the U^2-Net network, which is described in detail in the arxiv article, and the python library rembg to create a ready-to-use drag and drop web application, which you can run as a docker image.

The project code is available on my github: https://github.com/qooba/aiscissors
You can also use the ready docker image: https://hub.docker.com/repository/docker/qooba/aiscissors

Before you continue reading, please watch this quick introduction:

Neural network

To correctly remove the image background we need to select the most visually attractive objects in the image, a task which is covered by Salient Object Detection (SOD). To combine low memory and computation cost with results competitive with state-of-the-art methods, the novel U^2-Net architecture will be used.

U-Net convolutional networks have a characteristic U shape with a symmetric encoder-decoder structure. At each encoding stage the feature maps are downsampled (torch.nn.MaxPool2d) and then upsampled at each decoding stage (torch.nn.functional.upsample). The downsampled features are transferred and concatenated with the upsampled features using skip connections.

The U^2-Net network uses a two-level nested U-structure, where the main architecture is a U-Net-like encoder-decoder and each stage contains a residual U-block. Each residual U-block repeats the downsampling/upsampling procedure, with the stages again connected using skip connections and the whole block wrapped in a residual connection.
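
The residual U-block idea can be sketched in PyTorch as follows (heavily simplified compared to the real RSU definition in the U2NET repository; the single encoder/decoder stage and channel sizes are illustrative assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRSU(nn.Module):
    # residual U-block: a small U-Net wrapped in a residual connection
    def __init__(self, channels=16):
        super().__init__()
        self.conv_in = nn.Conv2d(channels, channels, 3, padding=1)
        self.enc = nn.Conv2d(channels, channels, 3, padding=1)      # encoder stage
        self.mid = nn.Conv2d(channels, channels, 3, padding=1)      # bottom stage
        self.dec = nn.Conv2d(2 * channels, channels, 3, padding=1)  # decoder stage

    def forward(self, x):
        fx = F.relu(self.conv_in(x))                # local feature F1(x)
        e = F.relu(self.enc(fx))
        d = F.max_pool2d(e, 2)                      # downsample (torch.nn.MaxPool2d)
        d = F.relu(self.mid(d))
        d = F.interpolate(d, size=e.shape[2:])      # upsample back
        u = F.relu(self.dec(torch.cat([e, d], 1)))  # skip connection: concatenation
        return fx + u                               # residual connection around the U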

[image: neural network architecture]

The nested U-structure extracts and aggregates the features at each level and makes it possible to capture local and global information from both shallow and deep layers.

The U^2-Net architecture is precisely described in the arxiv article. Moreover, we can go through the pytorch model definitions of U2NET and U2NETP.

Additionally, the authors shared the pretrained models: U2NET (176.3 MB) and U2NETP (4.7 MB).

The lighter U2NETP version is only 4.7 MB, thus it can be used in mobile applications.

Web application

The neural network is wrapped with the rembg library, which automatically downloads the pretrained networks and provides a simple python api. To simplify the usage I have decided to create a drag and drop web application (https://github.com/qooba/aiscissors).

In the application you can drag and drop the image and then compare the image with and without the background side by side.
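
The api itself is almost a one-liner (a hedged sketch; newer rembg versions expose remove() at the package level, older ones as rembg.bg.remove, so the import path may differ in your release):

from rembg import remove

with open("input.jpg", "rb") as inp:
    result = remove(inp.read())  # returns PNG bytes with a transparent background

with open("output.png", "wb") as out:
    out.write(result)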

[image: web application]

You can simply run the application using the docker image:

docker run --name aiscissors -d -p 8000:8000 --rm -v $(pwd)/u2net_models:/root/.u2net qooba/aiscissors 

If you have a GPU card, you can use it:

docker run --gpus all --name aiscissors -d -p 8000:8000 --rm -v $(pwd)/u2net_models:/root/.u2net qooba/aiscissors

To use the GPU, additional nvidia drivers (included in the NVIDIA CUDA Toolkit) are needed.

When you run the container, the pretrained models are downloaded. Thus I have mounted the local directory u2net_models to /root/.u2net to avoid downloading them each time the container is run.

References

https://arxiv.org/pdf/2005.09007.pdf

https://github.com/NathanUA/U-2-Net

https://github.com/danielgatis/rembg

Qin, Xuebin; Zhang, Zichen; Huang, Chenyang; Dehghan, Masood; Zaiane, Osmar; Jagersand, Martin: U2-Net: Going Deeper with Nested U-Structure for Salient Object Detection. Pattern Recognition 106, 107404 (2020).

Tensorflow meets C# Azure function


Tensorflow meets C# Azure function and … . In this post I would like to show how to deploy a tensorflow model with a C# Azure function. I will use TensorFlowSharp, the .NET bindings to the tensorflow library. The Inception model will be used to create an http endpoint which recognizes images.

Code

I will start with creating a .NET Core class library and adding the TensorFlowSharp package:

dotnet new classlib
dotnet add package TensorFlowSharp -v 1.9.0

Then create the file TensorflowImageClassification.cs:

Here I have defined the http entrypoint for the Azure Function (the Run method). The q query parameter is taken from the url and used as the url of the image which will be recognized.

The solution will analyze the image using a convolutional neural network arranged with the Inception architecture.

The function will automatically download the trained Inception model, thus the first run of the function will take a little bit longer. The model will be saved to D:\home\site\wwwroot\.

The convolutional neural network graph will be kept in memory (graphCache), thus the function doesn't have to read the model on every request. On the other hand, the input image tensor has to be prepared and preprocessed for every request (ConstructGraphToNormalizeImage).

Finally I can run the command:

dotnet publish

which will create the package for the function deployment.

Azure function

To deploy the code I will create an Azure Function (Consumption plan) with an http trigger. Additionally I have to set the function entry point; the function.json will be defined as:
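
A function.json along these lines should work (a hedged sketch: the scriptFile path and the namespace in entryPoint are assumptions based on the project name, adjust them to your build output):

{
  "scriptFile": "..\\bin\\TensorflowImageClassification.dll",
  "entryPoint": "MyFunctions.TensorflowImageClassification.Run",
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}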

Kudu will be used to deploy the already prepared package. Additionally I have to copy libtensorflow.dll from /runtimes/win7-x64/native into the function's bin directory (otherwise Azure Functions won't load it).

Finally I can test the azure function:

The function recognizes the image and returns the label with the highest probability.