Skin Cancer Detection Using Image Processing

Importance of the Project:

Computer-aided diagnosis helps improve both the accuracy and the speed of diagnosis. A computer is not more intelligent than a human, but it may be able to extract information, such as colour variation, asymmetry, and texture features, that is not readily perceived by the human eye.

Time is crucial in cancer treatment: if the cancer is detected at an early stage, it can be removed with proper treatment and medication prescribed by a dermatologist. But if the cancer progresses to an advanced stage, treatment becomes far more difficult.

Using our system, a patient can at least learn whether they show signs of skin cancer by uploading an image of the affected skin. If the system finds possible melanoma, the user can consult a dermatologist and undergo the proper medical tests to determine what kind of cancer it is and how severe it is.

Our system is convenient for the user: they only need to upload an image, which saves time.

Dataset:

We have used a Kaggle dataset in our project:

https://www.kaggle.com/drscarlat/melanoma

This dataset contains 10,000 photos of melanoma and non-melanoma cancer in the train_sep folder, which is used for training.

It contains 3,562 photos of melanoma and non-melanoma cancer in the valid folder for validation, and 3,561 photos in the test folder for testing the trained model.

Preprocessing:

In order to make the most of our few training examples, we “augment” them via a number of random transformations so that our model never sees exactly the same picture twice. This helps prevent overfitting and helps the model generalize better.

In Keras this can be done via the keras.preprocessing.image.ImageDataGenerator class. This class allows you to configure random transformations and normalization operations to be done on your image data during training.

It applies various operations to the image, such as rotation, height shift, width shift, rescaling of RGB values, horizontal flip, zoom, and fill mode, as shown in the sketch below.
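A minimal sketch of such an augmentation pipeline follows. The specific parameter values, target image size, and directory path are illustrative assumptions; the post only names the operations, not their exact settings.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation pipeline covering the operations named above. The concrete
# values (rotation range, shift fractions, target size, paths) are assumptions
# for illustration, not the exact settings used in the project.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # rescale RGB values from [0, 255] to [0, 1]
    rotation_range=40,        # random rotation in degrees
    width_shift_range=0.2,    # random horizontal shift
    height_shift_range=0.2,   # random vertical shift
    zoom_range=0.2,           # random zoom
    horizontal_flip=True,     # random horizontal flip
    fill_mode='nearest'       # how newly created pixels are filled
)

# Assumed directory layout: one sub-folder per class (melanoma / non-melanoma).
train_generator = train_datagen.flow_from_directory(
    'data/train_sep',
    target_size=(128, 128),
    batch_size=32,
    class_mode='sparse'       # integer class labels; mapping depends on folder names
)
```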

Model Creation:

CNN (Convolutional Neural Network):

CNNs are inspired by the structure of the brain. They are a class of neural networks that have proven very effective in image recognition, processing, and classification, which is why in most cases they are applied to image processing. Convolutional neural networks may look like magic to many, but in reality they are just simple science and mathematics.

CNN’s main task is feature extraction.

It is a deep learning algorithm that takes an image as input, assigns learnable weights and biases to the various objects in the image, and is finally able to differentiate images from one another.

In machine learning, a convolutional neural network is a class of deep, feed-forward artificial neural networks, most commonly applied to analysing visual imagery.

The convolution and pooling layers act as feature extractors from the input image, while the fully connected layer acts as a classifier. In the figure above, on receiving a melanoma image as input, the network correctly assigns it the highest probability among all available categories.

There are four main operations in the ConvNet shown in the image above:

  1. Convolution
  2. Non-Linearity (ReLU)
  3. Pooling or Sub-Sampling
  4. Classification (Fully Connected Layer)

A simple ConvNet is a sequence of layers, and every layer of a ConvNet transforms one volume of activations to another through a differentiable function.

Convolution Layer

This layer holds the raw pixel values of the training image as input. In the example above, an image (a deer) of width 32, height 32, and three colour channels (R, G, B) is used. The image goes through the forward-propagation step, and the network finds the output probabilities for each class. The convolution layer preserves the spatial relationship between pixels by learning image features using small squares of input data.

Rectified Linear Unit (ReLU) Layer

A non-linear operation. This layer applies an element-wise activation function. ReLU is used after every convolution operation; it is applied per pixel and replaces all negative pixel values in the feature map with zero.
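As a tiny illustration of that element-wise behaviour (not project code), applying ReLU to a toy feature map in NumPy looks like this:

```python
import numpy as np

# Toy 3x3 feature map containing negative values
feature_map = np.array([[ 1.0, -2.0,  3.0],
                        [-0.5,  0.0,  4.0],
                        [ 2.5, -1.5, -3.0]])

# Element-wise ReLU: negatives become 0, positives pass through unchanged
relu_output = np.maximum(0, feature_map)
# [[1.   0.   3. ]
#  [0.   0.   4. ]
#  [2.5  0.   0. ]]
```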

Pooling Layer

Also called subsampling or downsampling, the pooling layer performs a downsampling operation along the spatial dimensions (width and height). This reduces the dimensionality of each feature map while retaining the most important information.

Max Pooling operation on a Rectified Feature map.
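Since the original figure is not reproduced here, the following small NumPy sketch illustrates the same idea with 2×2 max pooling and stride 2; the values are made up for illustration:

```python
import numpy as np

# 4x4 rectified feature map (illustrative values)
feature_map = np.array([[1, 3, 2, 1],
                        [4, 6, 1, 0],
                        [2, 1, 5, 7],
                        [0, 3, 2, 4]])

# 2x2 max pooling with stride 2: each 2x2 window is replaced by its maximum
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
# [[6 2]
#  [3 7]]
```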

Fully Connected Layer

In a fully connected layer, each node is connected to every node in the adjacent layer. The FC layer computes the class scores with a traditional multilayer perceptron that uses a softmax activation function in the output layer.

The main job of this layer is to take the input volume produced by the preceding Conv, ReLU, or Pool layer and arrange the output into an N-dimensional vector, where N is the number of classes the program has to choose from.
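Putting the four operations together, a minimal Keras model of this shape might look like the sketch below. The filter counts, kernel sizes, input size, and dense-layer width are assumptions for illustration, not the project's final tuned architecture.

```python
from tensorflow.keras import layers, models

# Minimal ConvNet sketch: Convolution + ReLU, Pooling, then a fully connected
# classifier. Layer sizes and the 128x128 input are assumptions, not the exact
# architecture used in the project.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu',   # convolution + ReLU
                  input_shape=(128, 128, 3)),
    layers.MaxPooling2D(pool_size=(2, 2)),          # pooling / sub-sampling
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),                               # flatten feature maps into a vector
    layers.Dense(128, activation='relu'),           # fully connected layer
    layers.Dense(2, activation='softmax')           # class scores: melanoma / non-melanoma
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```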

We applied the Keras Tuner to find the optimal hyperparameters for our model. The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for your machine learning (ML) application is called hyperparameter tuning or hypertuning. We applied Keras tuning to the Conv2D layer with hyperparameters on the filters and the kernel: for the filters we selected values from 32 to 128 with a step size of 16, and for the kernel we selected sizes from 3 to 5, i.e. the kernel can be 3×3, 4×4, or 5×5. We also applied Keras tuning to MaxPooling2D with a hyperparameter on the pool size from 3 to 5, i.e. the pool size can be 3×3, 4×4, or 5×5. A sketch of this search is shown below.
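The sketch below shows what that search could look like with the keras_tuner package. The search spaces for filters, kernel size, and pool size mirror the ranges described above; everything else (input size, dense layer, optimizer, number of trials) is an assumption.

```python
import keras_tuner as kt
from tensorflow.keras import layers, models

def build_model(hp):
    # Hyperparameter ranges follow the description above; other details are
    # illustrative assumptions.
    model = models.Sequential([
        layers.Conv2D(
            filters=hp.Int('filters', min_value=32, max_value=128, step=16),
            kernel_size=hp.Int('kernel_size', min_value=3, max_value=5),
            activation='relu',
            input_shape=(128, 128, 3)),
        layers.MaxPooling2D(
            pool_size=hp.Int('pool_size', min_value=3, max_value=5)),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dense(2, activation='softmax')
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=10)
# tuner.search(train_generator, validation_data=valid_generator, epochs=10)
```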

We obtained a testing accuracy of 92% on 3,500 images after tuning the model with the Keras Tuner and increasing the number of layers and epochs.

We tried different models, whose results are shown below:

Final CNN Model Summary:

GUI:

We also built a web application using Flask, HTML, and CSS. The user can upload an image of their skin and get a result indicating whether melanoma is likely. A minimal sketch of this flow follows.
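The sketch below outlines the upload-and-predict flow in Flask; the route, form field name, template names, model path, and class-index mapping are all assumptions for illustration, not the project's exact code.

```python
import numpy as np
from flask import Flask, request, render_template
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

app = Flask(__name__)
model = load_model('melanoma_cnn.h5')  # assumed path to the trained model

@app.route('/', methods=['GET', 'POST'])
def predict():
    if request.method == 'POST':
        uploaded = request.files['skin_image']       # assumed form field name
        uploaded.save('upload.jpg')
        img = image.load_img('upload.jpg', target_size=(128, 128))
        x = image.img_to_array(img) / 255.0          # same rescaling as in training
        x = np.expand_dims(x, axis=0)                # add the batch dimension
        probs = model.predict(x)[0]
        # Class index order depends on how the training generator mapped the
        # folders; the mapping below is an assumption.
        label = 'Melanoma' if np.argmax(probs) == 0 else 'Non-melanoma'
        return render_template('result.html', prediction=label)
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)
```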

You can find our code at:

https://drive.google.com/drive/folders/1TIq58xKD0Kx8vYf_uHXO5xQ4QXCuPwpL?usp=sharing

Conclusion:

It can be concluded that the proposed skin cancer detection system can be implemented using image processing and deep learning to accurately classify whether an image is cancerous or non-cancerous.

The accuracy of the proposed system is more than 92%. It would be a more efficient and cost-saving process than the biopsy method, and more advantageous to patients.

Blog Authors and Contributions:

Divyang Patel (https://www.linkedin.com/in/divyang-patel-55195313a/) : Data preprocessing and model creation

Anupam Kumar(https://www.linkedin.com/in/anupam-kumar-576397145/) : Model tuning and GUI coding

Under the guidance of

  1. Professor: linkedin.com/in/tanmoy-chakraborty-89553324
  2. Prof. Website: faculty.iiitd.ac.in/~tanmoy/
  3. Teaching Fellow: Ms Ishita Bajaj
  4. Teaching Assistants: Pragya Srivastava, Shiv Kumar Gehlot, Chhavi Jain, Vivek Reddy, Shikha Singh and Nirav Diwan.
