DEEP LEARNING "Oral Cancer Detection Using CNN Based On Android Application"
Oral cancer is a cancer that develops in and around the mouth, in the oral cavity or oropharynx. Public awareness of and concern about oral cancer remain low, largely because of a lack of education about its dangers; screening for oral cancer is also still relatively expensive. Meanwhile, Android applications are now used by all age groups, from teenagers to adults, and many activities that once required a website can be done more conveniently in an app. One example is OralVision, an Android application for early detection of oral cancer aimed at the general public. With this application, people can screen for oral cancer early without incurring high costs. The application was built with Android Studio and uses deep learning; this article walks through the deep learning code used for oral cancer detection.
First, I used an oral cancer dataset from Kaggle. The sections below walk through the code used to build the deep learning model.
The first step is to mount Google Drive in Colab, point to the dataset directory, and list the class labels:
from google.colab import drive
drive.mount('/content/drive')

train_dir = '/content/drive/MyDrive/Data/Dataset'
Labels = ['cancer', 'non_cancer']

print("Classes:")
for i in range(len(Labels)):
    print(i, Labels[i])
print('Number of classes:', len(Labels))
import tensorflow as tf
import tensorflow_hub as hub

module_selection = ("mobilenet_v2", 224, 1280)
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE = "https://tfhub.dev/google/tf2-preview/{}/feature_vector/2".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
BATCH_SIZE = 16

feature_extractor = hub.KerasLayer(
    MODULE_HANDLE,
    input_shape=IMAGE_SIZE + (3,),
    output_shape=[FV_SIZE])
do_fine_tuning = False
if do_fine_tuning:
    # Unfreeze the feature extractor so its weights are updated during training.
    feature_extractor.trainable = True
else:
    # Keep the pretrained MobileNetV2 weights frozen; only the new head trains.
    feature_extractor.trainable = False
LEARNING_RATE = 0.001
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss='categorical_crossentropy',
    metrics=['accuracy'])
import os
# Optional: disable XLA devices to avoid device-placement warnings in Colab.
os.environ['TF_XLA_FLAGS'] = '--tf_xla_enable_xla_devices=false'
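The `model.fit` call below expects `train_generator` and `validation_generator`, which the article never defines. A minimal sketch using Keras's `ImageDataGenerator` with a `validation_split` is shown here; the 80/20 split, the 1/255 rescaling, and the tiny synthetic stand-in dataset are my assumptions so the snippet runs on its own (in the article, `train_dir` is the mounted Drive folder with one subfolder per class):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stand-in for the mounted Drive folder above; in the article, train_dir is
# '/content/drive/MyDrive/Data/Dataset' with a subfolder per class label.
train_dir = tempfile.mkdtemp()
for label in ['cancer', 'non_cancer']:
    os.makedirs(os.path.join(train_dir, label))
    for i in range(5):
        img = (np.random.rand(224, 224, 3) * 255).astype('uint8')
        tf.keras.utils.save_img(os.path.join(train_dir, label, f'{i}.png'), img)

IMAGE_SIZE = (224, 224)   # matches the MobileNetV2 input size above
BATCH_SIZE = 16

# One generator with validation_split yields both subsets from train_dir.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_generator = datagen.flow_from_directory(
    train_dir, target_size=IMAGE_SIZE, batch_size=BATCH_SIZE,
    class_mode='categorical', subset='training')
validation_generator = datagen.flow_from_directory(
    train_dir, target_size=IMAGE_SIZE, batch_size=BATCH_SIZE,
    class_mode='categorical', subset='validation')
```

`class_mode='categorical'` yields one-hot labels, which is what `categorical_crossentropy` expects.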
EPOCHS = 15
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    epochs=EPOCHS,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // validation_generator.batch_size)
import time

# Save the trained model to a timestamped SavedModel directory.
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
tf.keras.models.save_model(model, export_path)
print(export_path)
import numpy as np

reloaded = tf.keras.models.load_model(
    export_path, custom_objects={'KerasLayer': hub.KerasLayer})

def predict_reload(image):
    probabilities = reloaded.predict(np.asarray([image]))[0]
    class_idx = np.argmax(probabilities)
    return {Labels[class_idx]: probabilities[class_idx]}
This deep learning code produces a model that can be exported as a TFLite file and used by the Android application to detect oral cancer. In the app, the model classifies images stored on the phone or captured directly with the camera, and outputs a confidence score as a percentage, telling the user whether oral cancer is detected with high or low confidence.
The complete code for this deep learning project is available on GitHub:
https://github.com/ajengdwija/Deep-Learning---Oral-Cancer-Detection
Thank you!