
Digital Covid-19 Detection Kit Based on Chest X-Ray Using Machine Learning and SAP Conversational AI

Introduction

This year has been very troubling for everyone, and the reason is Covid-19. So I was going through lots of tech hackathons to see how other techies are trying their hand at solving this in different forums, and I was really amazed to see how hard data scientists are working on problems around it. Though I am not a medical student, I was really surprised to see some great research in this area, and one line of research is detecting Covid-19 based on a chest X-ray. As per studies available on the internet, Covid-19 creates an impact on the lungs that is fairly identifiable in a chest X-ray or chest CT scan, and many people have already started working on this. So I was thinking: why can't we build an AI system which detects Covid-19 from a chest X-ray image and which would be affordable for normal people to use? Only technology can make things this simple. And trust me, when simplicity comes to mind, SAP Conversational AI automatically comes to my mind. You could curse me and ask why I promote SAP Conversational AI so much, but in my opinion it is the next-generation UX.

Wait a minute, you must be thinking: what? SAP Conversational AI? How will it detect Covid-19 from an X-ray image when it is meant for building AI-based conversational systems that understand human language the way humans do (and to some extent we can even infuse emotion into it)? Detecting Covid-19 from an X-ray image? No way, this guy is so high.

 

Well, you are definitely right, but I am not high. We can also integrate a digital radiologist with Conversational AI: a digital radiologist who is educated enough to differentiate between Covid-19, pneumonia and normal. But how is that possible? Well, read the whole blog.

 

** At the end of the blog, don't forget to watch the demo video.

 

What is the Digital Radiologist Here?

A TensorFlow-based, custom-trained machine learning model, built with the help of expert radiologists, which is able to differentiate between Covid-19, pneumonia and normal chest X-rays. For this example I have trained the model with the public Covid-19 image datasets available on Kaggle and Google: roughly 442 Covid-19 X-ray images, 1,263 normal X-ray images and 3,295 pneumonia X-ray images.

 

Below is some of the research in this direction:

 

Evaluation of an AI system for detection of COVID-19 on Chest X-Ray images

COVID-19 on the Chest Radiograph: A Multi-Reader Evaluation of an AI System

COVID-19 detector using X-ray images

Automated detection of COVID-19 cases using deep neural networks with X-ray images

Covid-19 Detection using X-ray Images

 

So What Exactly Am I Talking About?

Very simple: we are going to build a Conversational AI system which accepts an X-ray image and sends it to a custom TensorFlow-based machine learning model, which analyses the X-ray image and predicts whether it is normal, pneumonia or Covid-19. And we are going to host this SAP Conversational AI based bot on WhatsApp, so it will be very easy to use, because almost everyone uses WhatsApp.

 

Let's Connect All the Blocks

If you look at the architecture of the above solution, it is clear that you need an expert radiologist to classify the X-ray images and train your machine learning model continuously. For building and training the machine learning model you could use Google Cloud Platform, AWS or Azure. Today I am not going to discuss how to build an image classification model using TensorFlow, OpenCV and Flask; I might cover that in another blog. In this example I have exported the TensorFlow-based machine learning model in protocol buffer format, then created a Python script to resize the input and feed it to the custom-trained machine learning model, and then created a Python Flask based REST API to integrate with SAP Conversational AI or a web or mobile application. In this blog you will also learn how you can bring your own custom-built machine learning model and deploy it as a REST-based API on SAP Cloud Platform.

 

Let’s connect the dots

Data is the new oil

Well, in order to build any custom machine learning model and train it, you first need good data and definitely a validator who would validate the data. In this case the data is as many X-ray images as I could collect. So I went to Kaggle, GitHub and Google Images to collect the data and classified it into three categories: COVID19, PNEUMONIA and NORMAL. Then I divided the whole dataset 85%/15%: 85% of the data I used for training and 15% for testing. I have uploaded all the data to the Google Drive folder below; you can download it if you want.

Collection of COVID-19, Pneumonia and Normal  X-Ray Image Datasets for Training the machine learning model and Testing
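
For reference, here is a minimal sketch of how such an 85/15 split can be scripted, assuming the downloaded images sit in a dataset/ folder with one sub-folder per class (the folder names and paths are illustrative assumptions, not the exact layout of my drive):

import os
import random
import shutil

SRC = 'dataset'                      # assumed layout: dataset/COVID19, dataset/NORMAL, dataset/PNEUMONIA
LABELS = ['COVID19', 'NORMAL', 'PNEUMONIA']
TRAIN_RATIO = 0.85                   # 85% training, 15% testing

random.seed(42)                      # reproducible split

for label in LABELS:
    files = os.listdir(os.path.join(SRC, label))
    random.shuffle(files)
    cut = int(len(files) * TRAIN_RATIO)
    for subset, names in (('train', files[:cut]), ('test', files[cut:])):
        target = os.path.join(subset, label)
        os.makedirs(target, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(SRC, label, name), target)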

I have built and trained this machine learning model using Azure, but you can also use AWS or Google Cloud Platform.

Packaging the Machine Learning App

After building and training the machine learning model, we need to package it so that it can be deployed anywhere, and if you ask me to generalise, the first thing that comes to my mind is Docker. Below is how my application structure looks: a Dockerfile at the project root and an app/ folder containing app.py, predict.py, the exported model.pb and labels.txt.

Code snippet for app.py, which is responsible for routing your HTTP requests:

import json
import os
import io

# Imports for the REST API
from flask import Flask, request, jsonify

# Imports for image processing
from PIL import Image

# Imports for prediction
from predict import initialize, predict_image, predict_url

app = Flask(__name__)

# 4MB Max image size limit
app.config['MAX_CONTENT_LENGTH'] = 4 * 1024 * 1024 

# Default route just shows simple text
@app.route('/')
def index():
    return 'GET Request is not supported'



@app.route('/imageurl', methods=['POST'])
def predict_url_handler(project='COVID19TK', publishedName='covid19tk_optimized_sudip'):
    try:
        image_url = json.loads(request.get_data().decode('utf-8'))['url']
        results = predict_url(image_url)
        return jsonify(results)
    except Exception as e:
        print('EXCEPTION:', str(e))
        return 'Error processing image'

if __name__ == '__main__':
    # Load and initialize the model
    initialize()

    # Run the server
    app.run(host='0.0.0.0', port=80)

Code snippet for predict.py, the Python script which resizes the image, feeds the input to the model and performs the remaining calculations:

from urllib.request import Request, urlopen
from datetime import datetime
import tensorflow as tf

from PIL import Image
import numpy as np
import sys

try:
    import cv2
    use_opencv = True
    print("Using OpenCV resizing...")
except ImportError:
    use_opencv = False
    print("Using fallback (non-OpenCV) resizing...")

filename = 'model.pb'
labels_filename = 'labels.txt'

network_input_size = 0

output_layer = 'loss:0'
input_node = 'Placeholder:0'

graph_def = tf.compat.v1.GraphDef()
labels = []


def initialize():
    print('Loading model...', end='')
    with open(filename, 'rb') as f:
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    # Retrieving 'network_input_size' from shape of 'input_node'
    with tf.compat.v1.Session() as sess:
        input_tensor_shape = sess.graph.get_tensor_by_name(input_node).shape.as_list()
        
    assert len(input_tensor_shape) == 4
    assert input_tensor_shape[1] == input_tensor_shape[2]

    global network_input_size
    network_input_size = input_tensor_shape[1]
   
    print('Success!')
    print('Loading labels...', end='')
    with open(labels_filename, 'rt') as lf:
        global labels
        labels = [l.strip() for l in lf.readlines()]
    print(len(labels), 'found. Success!')


def log_msg(msg):
    print("{}: {}".format(datetime.now(),msg))


def extract_bilinear_pixel(img, x, y, ratio, xOrigin, yOrigin):
    """
    Custom implementation of bilinear interpolation when opencv is not available
    img: numpy source image array
    x,y: target pixel coordinates
    ratio: scaling factor
    xOrigin, yOrigin: source image offset
    returns interpolated pixel value (RGB)
    """
    xDelta = (x + 0.5) * ratio - 0.5
    x0 = int(xDelta)
    xDelta -= x0
    x0 += xOrigin
    if x0 < 0:
        x0 = 0
        x1 = 0
        xDelta = 0.0
    elif x0 >= img.shape[1]-1:
        x0 = img.shape[1]-1
        x1 = img.shape[1]-1
        xDelta = 0.0
    else:
        x1 = x0 + 1
    
    yDelta = (y + 0.5) * ratio - 0.5
    y0 = int(yDelta)
    yDelta -= y0
    y0 += yOrigin
    if y0 < 0:
        y0 = 0
        y1 = 0
        yDelta = 0.0
    elif y0 >= img.shape[0]-1:
        y0 = img.shape[0]-1
        y1 = img.shape[0]-1
        yDelta = 0.0
    else:
        y1 = y0 + 1

    #Get pixels in four corners
    bl = img[y0, x0]
    br = img[y0, x1]
    tl = img[y1, x0]
    tr = img[y1, x1]
    #Calculate interpolation
    b = xDelta * br + (1. - xDelta) * bl
    t = xDelta * tr + (1. - xDelta) * tl
    pixel = yDelta * t + (1. - yDelta) * b
    return pixel


def extract_and_resize(img, targetSize):
    """
    resize and crop when OpenCV is not available
    img: input image numpy array
    targetSize: output size
    returns resized and cropped image
    """
    determinant = img.shape[1] * targetSize[0] - img.shape[0] * targetSize[1]
    if determinant < 0:
        ratio = float(img.shape[1]) / float(targetSize[1])
        xOrigin = 0
        yOrigin = int(0.5 * (img.shape[0] - ratio * targetSize[0]))
    elif determinant > 0:
        ratio = float(img.shape[0]) / float(targetSize[0])
        xOrigin = int(0.5 * (img.shape[1] - ratio * targetSize[1]))
        yOrigin = 0
    else:
        ratio = float(img.shape[0]) / float(targetSize[0])
        xOrigin = 0
        yOrigin = 0
    resize_image = np.empty((targetSize[0], targetSize[1], img.shape[2]), dtype=np.float32)
    for y in range(targetSize[0]):
        for x in range(targetSize[1]):
            resize_image[y, x] = extract_bilinear_pixel(img, x, y, ratio, xOrigin, yOrigin)
    return resize_image


def extract_and_resize_to_256_square(image):
    """
    extracts image central square crop and resizes it to 256x256
    image: input image numpy array
    returns resized 256x256 central crop as numpy array
    """
    h, w = image.shape[:2]
    log_msg("crop_center: " + str(w) + "x" + str(h) +" and resize to " + str(256) + "x" + str(256))
    if use_opencv:
        min_size = min(h, w)
        image = crop_center(image, min_size, min_size)
        return cv2.resize(image, (256, 256), interpolation = cv2.INTER_LINEAR)
    else:
        return extract_and_resize(image, (256, 256))


def crop_center(img,cropx,cropy):
    """
    extracts central crop
    img: input image numpy array
    cropx, cropy: crop size
    returns central crop as numpy array
    """
    h, w = img.shape[:2]
    startx = max(0, w//2-(cropx//2))
    starty = max(0, h//2-(cropy//2))
    log_msg("crop_center: " + str(w) + "x" + str(h) +" to " + str(cropx) + "x" + str(cropy))
    return img[starty:starty+cropy, startx:startx+cropx]


def resize_down_to_1600_max_dim(image):
    """
    resizes the image to 1600px in its max dimension if it exceeds 1600 in width or height
    image: input image numpy array
    returns downsized image
    """
    w,h = image.size
    if h < 1600 and w < 1600:
        return image

    new_size = (1600 * w // h, 1600) if (h > w) else (1600, 1600 * h // w)
    log_msg("resize: " + str(w) + "x" + str(h) + " to " + str(new_size[0]) + "x" + str(new_size[1]))
    
    if use_opencv:
        # Convert image to numpy array
        image = convert_to_nparray(image)
        return cv2.resize(image, new_size, interpolation = cv2.INTER_LINEAR)
    else:
        if max(new_size) / max(image.size) >= 0.5:
            method = Image.BILINEAR
        else:
            method = Image.BICUBIC
        image = image.resize(new_size, method)
        return image


def predict_url(imageUrl):
    """
    predicts image by url
    """
    log_msg("Predicting from url: " +imageUrl)
    imgrequest = Request(imageUrl, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(imgrequest) as testImage:
        image = Image.open(testImage)
        return predict_image(image)


def convert_to_nparray(image):
    """
    converts PIL.Image to numpy array and changes RGB order to BGR
    image: input PIL image
    returns image as a numpy array
    """
    # RGB -> BGR
    log_msg("Convert to numpy array")
    image = np.array(image)
    return image[:, :, (2,1,0)]


def update_orientation(image):
    """
    corrects image orientation according to EXIF data
    image: input PIL image
    returns corrected PIL image
    """
    exif_orientation_tag = 0x0112
    if hasattr(image, '_getexif'):
        exif = image._getexif()
        if exif != None and exif_orientation_tag in exif:
            orientation = exif.get(exif_orientation_tag, 1)
            log_msg('Image has EXIF Orientation: ' + str(orientation))
            # orientation is 1 based, shift to zero based and flip/transpose based on 0-based values
            orientation -= 1
            if orientation >= 4:
                image = image.transpose(Image.TRANSPOSE)
            if orientation == 2 or orientation == 3 or orientation == 6 or orientation == 7:
                image = image.transpose(Image.FLIP_TOP_BOTTOM)
            if orientation == 1 or orientation == 2 or orientation == 5 or orientation == 6:
                image = image.transpose(Image.FLIP_LEFT_RIGHT)
    return image


def preprocess_image_opencv(image_pil):
    """
    image_pil: PIL Image, already converted to 'RGB' and correctly oriented
    returns: nparray of extracted crop
    """
    image = convert_to_nparray(image_pil)
    h, w = image.shape[:2]

    min_size = min(h,w)
    crop_size = min(min_size, int(min_size * network_input_size / 256.0))
    startx = max(0, int(max(0, w//2-(crop_size//2))))
    starty = max(0, int(max(0, h//2-(crop_size//2))))
    new_size = (network_input_size, network_input_size)
    log_msg(f"crop: {w}x{h}  to {crop_size}x{crop_size}, origin at {startx}, {starty}, target = {network_input_size}")
    return cv2.resize(image[starty:starty+crop_size, startx:startx+crop_size], new_size, interpolation = cv2.INTER_LINEAR)


def preprocess_image(image_pil):
    """
    image_pil: PIL Image, already converted to 'RGB' and correctly oriented
    returns: nparray of extracted crop
    """
    # If the image has either w or h greater than 1600 we resize it down respecting
    # aspect ratio such that the largest dimention is 1600
    image_pil = resize_down_to_1600_max_dim(image_pil)

    # Convert image to numpy array
    image = convert_to_nparray(image_pil)
    
    # Crop the center square and resize that square down to 256x256
    resized_image = extract_and_resize_to_256_square(image)

    # Crop the center to the specified network_input_size
    return crop_center(resized_image, network_input_size, network_input_size)


def predict_image(image):
    """
    calls model's image prediction
    image: input PIL image
    returns prediction response as a dictionary. To get predictions, use result['predictions'][i]['tagName'] and result['predictions'][i]['probability']
    """
    log_msg('Predicting image')
    try:
        if image.mode != "RGB":
            log_msg("Converting to RGB")
            image = image.convert("RGB")

        w,h = image.size
        log_msg("Image size: " + str(w) + "x" + str(h))
        
        # Update orientation based on EXIF tags
        image = update_orientation(image)
        
        if use_opencv:
            cropped_image = preprocess_image_opencv(image)
        else:
            cropped_image = preprocess_image(image)

        tf.compat.v1.reset_default_graph()
        tf.import_graph_def(graph_def, name='')

        with tf.compat.v1.Session() as sess:
            prob_tensor = sess.graph.get_tensor_by_name(output_layer)
            predictions, = sess.run(prob_tensor, {input_node: [cropped_image] })
            
            result = []
            for p, label in zip(predictions, labels):
                truncated_probability = np.float64(round(p, 8))
                if truncated_probability > 1e-8:
                    result.append({
                        'type': label,
                        'chances': truncated_probability * 100
                    })

            response = {
                'created': datetime.utcnow().isoformat(),
                'predictions': result
            }

            log_msg("Results: " + str(response))
            return response
            
    except Exception as e:
        log_msg(str(e))
        return 'Error: Could not preprocess image for prediction. ' + str(e)
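
Before containerising, you can sanity-check the model locally with a tiny script like the one below. This is only a sketch: 'sample_xray.jpg' is a placeholder path, and it assumes model.pb and labels.txt sit in the working directory next to predict.py.

from PIL import Image
from predict import initialize, predict_image

initialize()                                 # loads model.pb and labels.txt
with Image.open('sample_xray.jpg') as img:   # placeholder test image
    result = predict_image(img)
print(result)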

Code snippet for the Dockerfile

FROM python:3.7-slim

RUN pip install -U pip
RUN pip install numpy==1.17.3 tensorflow==2.0.0 flask pillow

COPY app /app


# Expose the port
EXPOSE 80

# Set the working directory
WORKDIR /app

# Run the flask server for the endpoints
CMD python -u app.py

Requirements (note: the TensorFlow version pinned here differs from the one installed in the Dockerfile above, so keep the two in sync)

tensorflow==1.14.0
pillow==6.1.0
numpy==1.16.4
flask==0.12.4

Classification Labels (labels.txt)

COVID19
NORMAL
PNEUMONIA

Docker build

docker build -t codersudip/covid19xraysudip:aarini . 

Docker run Locally

docker run -p 127.0.0.1:80:80 -d codersudip/covid19xraysudip:aarini  

Test it locally using Postman
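
If you prefer a quick script over Postman, here is a minimal request sketch against the locally running container. Only the /imageurl route and the JSON shapes come from the code above; the image URL is just a placeholder.

import requests

# Placeholder X-ray image URL; replace with a publicly reachable image
payload = {'url': 'https://example.com/chest-xray.jpg'}

response = requests.post('http://127.0.0.1:80/imageurl', json=payload)

# Expected shape (built by predict_image above):
# {'created': '...', 'predictions': [{'type': 'COVID19', 'chances': 97.5}, ...]}
print(response.json())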

Push to Docker Hub and Deploy Machine Learning Model into SAP Cloud Platform CF

docker push codersudip/covid19xraysudip:aarini

Deploy into SAP Cloud Platform Cloud Foundry

cf push digitalcovid19tk --docker-image codersudip/covid19xraysudip:aarini

 

 

 

Building the Conversational AI Bot and Integrating It with WhatsApp

I created an intent called covid test, and below is how the expressions look.

Below is how my skills look.

Trigger

Requirements

Action
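
Conceptually, the Action just forwards the image URL received from WhatsApp to the prediction API deployed above and turns the result into a reply. Here is a minimal sketch of that idea; the API host and the reply wording are assumptions, not my actual SAP Conversational AI configuration.

import requests

# Assumed host: replace with the route that cf push printed for digitalcovid19tk
PREDICTION_API = 'https://digitalcovid19tk.cfapps.eu10.hana.ondemand.com/imageurl'

def classify_xray(image_url):
    # Forward the WhatsApp image URL to the deployed model API
    response = requests.post(PREDICTION_API, json={'url': image_url})
    predictions = response.json()['predictions']
    # Pick the most likely class and build a human-readable reply
    best = max(predictions, key=lambda p: p['chances'])
    return "The X-ray looks like {} with {:.2f}% confidence.".format(best['type'], best['chances'])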

To integrate Conversational AI with WhatsApp, follow the blogs below.

To understand how Conversational AI can be integrated with WhatsApp, follow this blog.

To understand how images can also be sent over WhatsApp, follow this blog by Vandana Gupta.

Now let's have a look at how the demo works. That's important, because it gives a much clearer understanding.

DEMO

I hope you liked this blog. If you did, then like it, share it and let me know in the comments what you think. Meanwhile, enjoy your day, stay safe and play with SAP Cloud Platform.

 

**N.B. In order to use this in production, it needs a lot of validation: it must be trained with much more data, the model must be evaluated, and approval from the relevant medical regulatory boards is required. This is just an idea or showcase which I wanted to present.

 
