Toxicless texts with AI – how to measure text toxicity in the browser


In this article I will show how to measure comment toxicity using Machine Learning models.

Before you continue reading, please watch this short introduction:

https://www.youtube.com/watch?v=AECV2qa0Kaw

Hateful, rude and toxic comments are a common problem on the internet which affects many people. Today, we will prepare a neural network which detects comment toxicity directly in the browser. The goal is to create a solution which detects toxicity in real time and warns the user while they are writing, which can discourage them from posting toxic comments.

To do this, we will train a TensorFlow Lite model which will run in the browser using the WebAssembly backend. WebAssembly (WASM) allows running C, C++ or Rust code at near-native speed. Thanks to this, prediction performance will be better than with the plain JavaScript backend of TensorFlow.js.
Moreover, we can serve the model on a static page, with no additional backend servers required.

web assembly

To train the model, we will use the training data from the Kaggle Toxic Comment Classification Challenge,
which contains comments labeled with the following toxicity types:
* toxic
* severe_toxic
* obscene
* threat
* insult
* identity_hate

data set

Our model will only classify whether the text is toxic or not, thus we need to start by preprocessing the training data. Then we will use the TensorFlow Lite Model Maker library.
We will also use the Averaging Word Embedding specification, which creates the word embeddings and dictionary mappings from the training data, so we can train the model for different languages.
A model based on the Averaging Word Embedding specification will be small (<1 MB).
If we have a small dataset we can use pretrained embeddings instead, choosing the MobileBERT or BERT-Base specification.
In this case the models will be much bigger: 25 MB with quantization / 100 MB without quantization for
MobileBERT, and 300 MB for BERT-Base (based on: https://www.tensorflow.org/lite/tutorials/model_maker_text_classification#choose_a_model_architecture_for_text_classifier)
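
Below, a minimal training sketch based on the Model Maker text classification tutorial linked above; the CSV file names, column names and number of epochs are my assumptions, not taken from the original notebook:

# Minimal sketch of training the toxicity classifier with TensorFlow Lite Model Maker.
# Assumes preprocessed train.csv/test.csv files with "text" and "label" columns
# (label: 0 = non-toxic, 1 = toxic) derived from the Kaggle dataset.
from tflite_model_maker import model_spec, text_classifier
from tflite_model_maker.text_classifier import DataLoader

# Averaging Word Embedding specification keeps the model below ~1 MB.
spec = model_spec.get('average_word_vec')

train_data = DataLoader.from_csv(
    filename='train.csv', text_column='text', label_column='label',
    model_spec=spec, is_training=True)
test_data = DataLoader.from_csv(
    filename='test.csv', text_column='text', label_column='label',
    model_spec=spec, is_training=False)

model = text_classifier.create(train_data, model_spec=spec, epochs=10)
model.evaluate(test_data)
model.export(export_dir='model/')  # produces the model.tflite served to the browser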

train

Using the simple Averaging Word Embedding architecture, we can achieve about ninety-five percent accuracy with a small model size, appropriate
for the web browser and WebAssembly.

tensorflow lite

Now, let's prepare the non-toxic forum web application where we can write comments.
When we write non-toxic comments, the model won't block them.
On the other hand, toxic comments will be blocked and the user warned.

Of course, this is only client-side validation, which can discourage users from writing toxic comments.

web application

To run the example, simply clone the git repository and run a simple server to serve the static page:

git clone https://github.com/qooba/ai-toxicless-texts.git
cd ai-toxicless-texts
python3 -m http.server

The code for preparing the data, training and exporting the model is here:
https://github.com/qooba/ai-toxicless-texts/blob/master/Model_Maker_Toxicity.ipynb

TinyML with Arduino


In this article I will show how to build a TensorFlow Lite based jelly bear classifier using the Arduino Nano 33 BLE Sense.

Before you continue reading, please watch this short introduction:

Currently, machine learning solutions can be deployed not only on very powerful machines with GPU cards but also on really small devices. Of course, such devices have some limitations, e.g. memory. To deploy an ML model we need to prepare it: the TensorFlow framework allows you to convert neural networks to TensorFlow Lite, which can be installed on edge devices, e.g. the Arduino Nano.
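
For illustration, a minimal sketch of such a conversion using the TensorFlow Lite converter (the model path and the optional quantization flag are my assumptions):

# Minimal sketch: convert a trained Keras model to TensorFlow Lite
# so it can be deployed on an edge device like the Arduino Nano 33 BLE Sense.
import tensorflow as tf

model = tf.keras.models.load_model('gesture_model.h5')  # assumed model path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)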

The Arduino Nano 33 BLE Sense is equipped with many sensors that allow for the implementation of many projects, e.g.:
* Digital microphone
* Digital proximity, ambient light, RGB and gesture sensor
* 3D magnetometer, 3D accelerometer, 3D gyroscope
* Capacitive digital sensor for relative humidity and temperature

The examples I have used in this project can be found here.

Arduino sensors

To simplify device usage I have built the Arduino Lab project, where you can test and investigate the listed sensors directly in the web browser.

The project dependencies are packed into a docker image to simplify usage.

Before you start the project you will need to connect the Arduino through USB (the Arduino will communicate with the docker container through /dev/ttyACM0):

git clone https://github.com/qooba/tinyml-arduino.git
cd tinyml-arduino
./run.server.sh
# in another terminal tab
./run.nginx.sh
# go inside server container 
docker exec -it arduino /bin/bash
./start.sh

For each sensor type you can click the Prepare button, which will build and deploy the appropriate Arduino code.


NOTE:
Sometimes you will have to deploy to the Arduino manually. To do this you will need to go to the arduino container:

docker exec -it arduino /bin/bash
cd /arduino
make rgb

Here you have the complete Makefile with all the implemented sensor types.


You can start observations using the Watch button.
Arduino pdm
Arduino temperature
Arduino rgb

Now we will build the TinyML solution.
In the first step we will capture training data:
Arduino capture

The training data will be saved in CSV format. You will need to repeat the process for each class you want to detect.
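
As an illustration only, here is a minimal sketch of capturing such samples from the serial port with pyserial; the port name, baud rate, column names and the assumption that the Arduino prints comma-separated RGB readings are mine (the project itself captures the data through the web UI):

# Minimal sketch: record sensor readings printed by the Arduino over serial into a CSV file.
import csv
import serial

PORT = '/dev/ttyACM0'   # assumed serial port
SAMPLES = 100           # samples to record for one class

with serial.Serial(PORT, 9600, timeout=1) as port, \
     open('red_bear.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['red', 'green', 'blue'])
    count = 0
    while count < SAMPLES:
        line = port.readline().decode('utf-8').strip()
        if line:
            writer.writerow(line.split(','))
            count += 1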

The captured data will be uploaded to the Colab notebook.
Here I fully rely on the project Fruit identification using Arduino and TensorFlow.
In the notebook we train the model using TensorFlow, then convert it to TensorFlow Lite, and finally encode it to hex format (the model.h header file), which is readable by the Arduino.
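
The hex encoding step is essentially what the xxd -i tool does; a minimal Python equivalent (file names are assumptions) could look like this:

# Minimal sketch: encode the TensorFlow Lite model as a C array in model.h
# so the Arduino sketch can include it.
with open('model.tflite', 'rb') as f:
    data = f.read()

with open('model.h', 'w') as f:
    f.write('const unsigned char model[] = {\n')
    f.write(', '.join(f'0x{b:02x}' for b in data))
    f.write('\n};\n')
    f.write(f'const unsigned int model_len = {len(data)};\n')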

Now we compile and upload the model.h header file using a drag and drop mechanism.

Arduino upload

Finally we can classify the jelly bears by color:

Arduino classify

Fly AI with Tello drone


The popularity of drones and the range of their applications grow each year.

In this article I will show how to programmatically control the Ryze Tello drone, capture camera video and detect objects using TensorFlow. I have packed the whole solution into docker images (the backend and web app UI are in separate images), thus you can simply run it.

The project code is available on my GitHub: https://github.com/qooba/aidrone
You can also use the ready docker image: https://hub.docker.com/repository/docker/qooba/aidrone

Before you continue reading, please watch this short introduction:

Architecture

architecture diagram

The application uses two network interfaces.
The first is used by the Python backend to connect to the Tello wifi, send the commands and capture the video stream. In the backend layer I have used the DJITelloPy library, which covers all required Tello movement commands and video stream capture.
To efficiently show the video stream in the browser I have used the WebRTC protocol and the aiortc library. Finally, I have used TensorFlow 2.0 object detection with a pretrained SSD ResNet50 model.
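
For illustration, a minimal DJITelloPy sketch of connecting to the drone, grabbing a single frame and sending a few movement commands (the exact commands used in the backend may differ):

# Minimal sketch: control the Tello and capture a frame with DJITelloPy.
import cv2
from djitellopy import Tello

tello = Tello()
tello.connect()      # requires being connected to the Tello wifi network
tello.streamon()     # start the UDP video stream

frame = tello.get_frame_read().frame   # numpy frame that can be fed to the detector
cv2.imwrite('frame.jpg', frame)

tello.takeoff()
tello.rotate_clockwise(90)
tello.move_forward(50)   # distance in cm
tello.land()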

The second network interface is used to expose the Vue web application.
I have used nginx to serve the frontend application.

Application

drone controls

Using the web interface you can control the Tello movement, where you can:
* start video stream
* stop video stream
* takeoff – which starts Tello flight
* land
* up
* down
* rotate left
* rotate right
* forward
* backward
* left
* right

In addition, using the draw detection switch you can turn the detection boxes on the captured video stream on/off (however, this introduces a delay in the video, thus it is turned off by default). Additionally, I send the list of detected classes through web sockets, and it is also displayed.

drone detection

As mentioned before, I have used a pretrained model, thus it is a good idea to train your own model to get better results for a narrower and more specific class of objects.

Finally, the whole solution is packed into docker images, thus you can simply start it using the commands:

docker network create -d bridge app_default
docker run --name tello --network app_default --gpus all -d --rm -p 8890:8890 -p 8080:8080 -p 8888:8888 -p 11111:11111/udp  qooba/aidrone /bin/bash -c "python3 drone.py"
docker run -d --rm --network app_default --name nginx -p 80:80 -p 443:443 qooba/aidrone:front

To use the GPU, additional NVIDIA drivers (included in the NVIDIA CUDA Toolkit) are needed.

DeepMicroscopy – my portable ML laboratory


Today I'm very happy to finally release my open source project DeepMicroscopy.
In this project I have created a platform where you can capture images from the microscope, annotate them, train a TensorFlow model and finally observe real-time object detection.
The project is configured on the Jetson Nano device, thus it can work as a compact and portable solution.

The project code is available on my GitHub: https://github.com/qooba/deepmicroscopy

Before you continue reading, please watch this quick introduction:

1. Architecture

The solution requires three devices:
* Microscope with usb camera – e.g. Velleman CAMCOLMS3 2Mpx
* Inference server – Jetson Nano
* Training server – PC equipped with GPU card e.g. NVIDIA GTX 1050 Ti

The whole solution was built using docker images, thus I will now describe the components installed on each device.

Jetson

The Jetson device contains three components:
* Frontend – Vue application running on Nginx
* Backend – Python application which is the core of the solution
* Storage – Minio storage where projects, images and annotations are stored

Training Server

The training server contains two components:
* Frontend – Vue application running on Nginx
* Backend – Python application which handles the training logic

2. Platform functionalities

Most of the platform's functionality is installed on the Jetson Nano. Because the Jetson Nano's compute capabilities are insufficient for model training purposes, I have decided to split this part into three stages, which I will describe in the training paragraph.

Projects management

In Deep Microscopy you can create multiple projects, where you annotate and recognize different objects.

You can create and switch projects in the top left menu. Each project's data is kept in a separate bucket in the Minio storage.

Images Capture

When you open the Capture panel in the web application and click the Play ▶ button, a WebRTC connection between the browser and the backend is created (I have used the aiortc Python library). To make it work in the Chrome browser we need two things:
* use TLS for the web application – a self-signed certificate is already configured in nginx
* allow the camera to be used by the application – you have to set this in the browser

Now we can stream the image from the camera to the browser (I have used the OpenCV library to fetch the image from the microscope through USB).

When we decide to capture a specific frame and click the Plus ✚ button, the backend saves the current frame into the project bucket of the Minio storage.
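
A minimal sketch of this capture-and-save step; the Minio endpoint, credentials and bucket/object names below are placeholders, not the project's actual configuration:

# Minimal sketch: grab a frame from the USB microscope with OpenCV
# and store it as a JPEG in the project's Minio bucket.
import io
import cv2
from minio import Minio

camera = cv2.VideoCapture(0)   # USB microscope camera
ok, frame = camera.read()
if ok:
    _, encoded = cv2.imencode('.jpg', frame)
    client = Minio('localhost:9000', access_key='minio',
                   secret_key='minio123', secure=False)
    client.put_object('project-1', 'images/frame_0001.jpg',
                      io.BytesIO(encoded.tobytes()), length=len(encoded),
                      content_type='image/jpeg')
camera.release()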

Annotation

The annotation engine is based on the VIA Image Annotator. Here you can see all images you have captured for a specific project. There are a lot of features, e.g. switching between images (left/right arrow), zooming in/out (+/-) and of course annotation tools with different shapes (currently the training algorithm expects rectangles) and attributes (by default the class attribute is added, which is also expected by the training algorithm).

This is a rather painstaking and manual task, thus when you finish remember to save the annotations by clicking the save button (currently there is no auto save). When you save the project, the project file (with the VIA schema) is saved in the project bucket.

Training

When we finish image annotation we can start model training. As mentioned before it is split into three stages.

Data package

At the beginning we have to prepare the data package (which contains the captured images and our annotations) by clicking the DATA button.

Training server

Then we drag and drop the data package into the application placed on the machine with higher compute capabilities.

After the upload, the training server automatically extracts the data package, splits it into train/test data and starts training.
Currently I use the MobileNet V2 model architecture and base it on a pretrained TensorFlow model.

When the training is finished, the model is exported using TensorRT, which optimizes inference performance, especially on NVIDIA devices like the Jetson Nano.
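
A minimal sketch of such an export using the TF-TRT converter (the SavedModel paths and FP16 precision are my assumptions):

# Minimal sketch: optimize the trained SavedModel with TF-TRT for Jetson Nano inference.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='exported_model/saved_model',   # assumed path
    conversion_params=params)
converter.convert()
converter.save('exported_model/saved_model_trt')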

During and after training you can inspect all models using the built-in TensorBoard.

The web application periodically checks the training state, and when the training is finished we can download the model.

Uploading model

Finally we upload the TensorRT model back to the Jetson Nano device. The model is saved into the selected project bucket, thus you can use multiple models for each project.

Object detection

On the Execute panel we can choose a model from the drop-down list (which contains the models uploaded for the selected project) and load it by clicking RUN (typically it takes some time to load the model). When we click the Play ▶ button, the application shows real-time object detection. If we want to change the model we can click CLEAR and then choose and RUN another model.

Additionally, we can fetch detection statistics which are sent over a Web Socket. Currently the number of detected items and the average width, height and score are returned.
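
A minimal sketch of how such statistics could be computed from the detection output before sending them over the Web Socket (the box format and field names are my assumptions):

# Minimal sketch: aggregate detection boxes into the statistics shown in the UI.
def detection_stats(boxes, scores):
    """boxes: list of (ymin, xmin, ymax, xmax) tuples in relative coordinates."""
    if not boxes:
        return {'count': 0}
    widths = [xmax - xmin for _, xmin, _, xmax in boxes]
    heights = [ymax - ymin for ymin, _, ymax, _ in boxes]
    return {
        'count': len(boxes),
        'avg_width': sum(widths) / len(widths),
        'avg_height': sum(heights) / len(heights),
        'avg_score': sum(scores) / len(scores),
    }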

3. Setup

To start working with the Jetson Nano we have to install the Jetson Nano Developer Kit.

The whole platform works with Docker and all Dockerfiles are included in the GitHub repository.

Because the Jetson Nano has the aarch64 / arm64 architecture, we need separate images for the Jetson components.

Jetson dockers:
* front – frontend web app
* app – backend web app
* minio – minio storage for aarch64 / arm64 architecture

Training Server dockers:
* serverfront – frontend app
* server – backend app

If you want, you can build the images by yourself, or you can use the prebuilt images from DockerHub.

The simplest option is to run run.app.sh on the Jetson Nano and run.server.sh on the Training Server, which will set up the whole platform.

Thanks for reading 🙂

Tensorflow meets C# Azure function


Tensorflow meets C# Azure function and… In this post I would like to show how to deploy a tensorflow model with a C# Azure function. I will use TensorFlowSharp, the .NET bindings to the tensorflow library. The Inception model will be used to create an HTTP endpoint which recognizes images.

Code

I will start by creating a .NET Core class library and adding the TensorFlowSharp package:

dotnet new classlib
dotnet add package TensorFlowSharp -v 1.9.0

Then create file TensorflowImageClassification.cs:

Here I have defined the HTTP entry point for the Azure Function (the Run method). The q query parameter is taken from the URL and used as the URL of the image which will be recognized.

The solution analyzes the image using a convolutional neural network with the Inception architecture.

The function will automatically download the trained Inception model, thus the first function run will take a little bit longer. The model will be saved to D:\home\site\wwwroot\.

The convolutional neural network graph is kept in memory (graphCache), thus the function doesn't have to read the model on every request. On the other hand, the input image tensor has to be prepared and preprocessed every time (ConstructGraphToNormalizeImage).

Finally I can run command:

dotnet publish

which will create the package for the function deployment.

Azure function

To deploy the code I will create an Azure Function (Consumption plan) with an HTTP trigger. Additionally I will set the function entry point; the function.json will be defined as:

Kudu will be used to deploy the already prepared package. Additionally I have to deploy libtensorflow.dll from /runtimes/win7-x64/native (otherwise the Azure Function won't load it). The bin directory should look like:

Finally I can test the azure function:

The function recognizes the image and returns the label with the highest probability.