TensorFlow meets C# Azure Functions. In this post I would like to show how to deploy a TensorFlow model with a C# Azure Function. I will use TensorFlowSharp, the .NET bindings to the TensorFlow library. An HTTP endpoint will be created which recognizes images using the Inception model.
I will start by creating a .NET Core class library and adding the TensorFlowSharp package:
dotnet new classlib
dotnet add package TensorFlowSharp -v 1.9.0
Then I create the file TensorflowImageClassification.cs:
Here I have defined the HTTP entry point for the Azure Function (the Run method). The q query parameter is taken from the URL and used as the URL of the image to be recognized.
The solution analyzes the image using a convolutional neural network with the Inception architecture.
The function will automatically download the trained Inception model, so the first run of the function will take a little bit longer. The model will be saved to D:\home\site\wwwroot\.
The convolutional neural network graph is kept in memory (graphCache), so the function doesn't have to read the model on every request. On the other hand, the input image tensor has to be prepared and preprocessed for every request (ConstructGraphToNormalizeImage).
Finally I can run the command which creates the package for the function deployment.
To deploy the code I will create an Azure Function (Consumption plan) with an HTTP trigger. Additionally I have to set the function entry point in function.json.
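The exact function.json isn't reproduced here; a minimal sketch for a precompiled C# HTTP-triggered function might look like the following (the scriptFile path and the assembly/namespace names are placeholders, not the original values — only the class and method names come from the code above):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get" ],
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ],
  "scriptFile": "../bin/MyFunctionAssembly.dll",
  "entryPoint": "MyNamespace.TensorflowImageClassification.Run"
}
```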
Kudu will be used to deploy the already prepared package. Additionally I have to deploy libtensorflow.dll from /runtimes/win7-x64/native (otherwise Azure Functions won't load it). The bin directory should look like:
Finally I can test the Azure Function:
The function recognizes the image and returns the label with the highest probability.
Today I'd like to investigate Databricks. I will show how it works and how to prepare a simple recommendation system using a collaborative filtering algorithm, which can help match products to the expectations and preferences of users. Collaborative filtering is extremely useful when we know the relations (e.g. ratings) between products and users, but it is difficult to indicate the most significant features.
First of all, I have to set up the Databricks service. I can use Microsoft Azure Databricks or Databricks on AWS, but the best way to start is to use the Community Edition.
In this example I use the MovieLens small dataset to create recommendations for movies. After unzipping the package I use the ratings.csv file.
On the main page of Databricks click Upload Data and upload the file.
The file will be located on DBFS (the Databricks File System) under the path /FileStore/tables/ratings.csv. Now I can start model training.
The data is ready, so in the next step I can create a Databricks notebook (the New Notebook option on the Databricks main page), which is similar to a Jupyter notebook.
Using Databricks I can prepare the recommendation in a few steps:
First of all I read and parse the data; because the data file contains a header, I additionally have to cut it off. In the next step I split the data into a training part, which will be used to train the model, and a testing part for model evaluation.
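In the notebook these steps are done with Spark, but the same parse-and-split logic can be illustrated in plain Python (the column layout userId,movieId,rating,timestamp follows the MovieLens ratings.csv; the 80/20 split ratio and the sample rows below are assumptions for illustration):

```python
import random

def parse_ratings(lines):
    """Skip the CSV header and parse each line into (userId, movieId, rating)."""
    rows = []
    for line in lines[1:]:  # cut the header off
        user_id, movie_id, rating, _timestamp = line.strip().split(",")
        rows.append((int(user_id), int(movie_id), float(rating)))
    return rows

def split_data(rows, train_fraction=0.8, seed=42):
    """Randomly split the parsed ratings into training and testing parts."""
    rnd = random.Random(seed)
    shuffled = rows[:]
    rnd.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# A few made-up lines in the ratings.csv format.
lines = [
    "userId,movieId,rating,timestamp",
    "1,31,2.5,1260759144",
    "1,1029,3.0,1260759179",
    "2,10,4.0,835355493",
    "2,17,5.0,835355681",
    "3,60,3.0,1298861675",
]
rows = parse_ratings(lines)
train, test = split_data(rows)
print(len(rows), len(train), len(test))  # 5 ratings -> 4 train, 1 test
```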
I can not only compute the rating for each user/product pair but also export the user and product (in this case movie) features. In general the features are meaningless factors, but deeper analysis and intuition can give them meaning, e.g. a movie genre. The number of features is defined by the rank parameter of the training method (used for model training).
The user/product rating is defined as the scalar product of the user and product feature vectors.
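As a sketch of that prediction step (the latent feature values below are made up, assuming rank = 3):

```python
def predicted_rating(user_features, product_features):
    """Predicted rating = scalar (dot) product of the two feature vectors."""
    return sum(u * p for u, p in zip(user_features, product_features))

# Hypothetical latent factors for one user and one movie (rank = 3).
user = [0.5, 1.2, -0.3]
movie = [4.0, 1.0, 2.0]
print(predicted_rating(user, movie))  # 0.5*4.0 + 1.2*1.0 - 0.3*2.0 ≈ 2.6
```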
This gives us the ability to use them outside Databricks, e.g. in a relational database: prefilter the movies using defined business rules and then order them using the user/product features.
Finally I have shown how to save the user and product features as JSON and put them into Azure Blob Storage.
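The serialization step can be sketched in plain Python as below (the id-to-vector mappings are made-up examples standing in for the trained model's output; the actual upload to the blob is done with the Azure Storage SDK and is omitted here):

```python
import json

# Hypothetical user/movie feature vectors as produced by model training
# (id -> latent feature vector of length `rank`).
user_features = {1: [0.5, 1.2, -0.3], 2: [0.1, 0.7, 0.9]}
movie_features = {31: [4.0, 1.0, 2.0], 60: [1.5, 0.2, 3.1]}

def features_to_json(features):
    """Serialize id -> feature-vector pairs to a JSON string."""
    return json.dumps([{"id": k, "features": v} for k, v in features.items()])

users_json = features_to_json(user_features)
movies_json = features_to_json(movie_features)
print(users_json)
```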