Skin cancer is a cancer caused by the abnormal growth of skin cells. It is one of the most common cancers and can be fatal, but if it is caught early, a dermatologist can treat it and remove it completely.
Using deep learning and neural networks, we can classify benign and malignant skin lesions, which may help doctors diagnose cancer at an early stage. In this tutorial, we will build a skin lesion classifier in Python with the TensorFlow framework that tries to distinguish benign lesions (nevus and seborrheic keratosis) from malignant ones (melanoma) using images alone.
Let's go through it step by step.
▊ Install the required libraries:
pip3 install tensorflow tensorflow_hub matplotlib seaborn numpy pandas scikit-learn imbalanced-learn
Open a new notebook (or bfwstudio) and import the necessary modules:
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from tensorflow.keras.utils import get_file
from sklearn.metrics import roc_curve, auc, confusion_matrix
from imblearn.metrics import sensitivity_score, specificity_score
import os
import glob
import zipfile
import random

# to get consistent results after multiple runs
tf.random.set_seed(7)
np.random.seed(7)
random.seed(7)

# 0 for benign, 1 for malignant
class_names = ["benign", "malignant"]
def download_and_extract_dataset():
    # dataset from https://github.com/udacity/dermatologist-ai
    # 5.3GB
    train_url = "https://s3-us-west-1.amazonaws.com/udacity-dlnfd/datasets/skin-cancer/train.zip"
    # 824.5MB
    valid_url = "https://s3-us-west-1.amazonaws.com/udacity-dlnfd/datasets/skin-cancer/valid.zip"
    # 5.1GB
    test_url = "https://s3-us-west-1.amazonaws.com/udacity-dlnfd/datasets/skin-cancer/test.zip"
    for i, download_link in enumerate([valid_url, train_url, test_url]):
        temp_file = f"temp{i}.zip"
        data_dir = get_file(origin=download_link, fname=os.path.join(os.getcwd(), temp_file))
        print("Extracting", download_link)
        with zipfile.ZipFile(data_dir, "r") as z:
            z.extractall("data")
        # remove the temp file
        os.remove(temp_file)

# comment the below line if you already downloaded the dataset
download_and_extract_dataset()
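If you downloaded and unpacked the archives yourself instead of calling download_and_extract_dataset(), it is worth a quick sanity check that the folder layout matches what the rest of the code expects. This is only a minimal sketch; the expected layout is inferred from the generate_csv() calls below, so adjust the root "data" path if yours differs:

# sanity check: the code below expects data/{train,valid,test}/{nevus,seborrheic_keratosis,melanoma}
for split in ["train", "valid", "test"]:
    for cls in ["nevus", "seborrheic_keratosis", "melanoma"]:
        path = os.path.join("data", split, cls)
        if os.path.isdir(path):
            print(path, "->", len(glob.glob(os.path.join(path, "*"))), "images")
        else:
            print(path, "-> MISSING")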
# preparing data
# generate CSV metadata file to read img paths and labels from it
def generate_csv(folder, label2int):
    folder_name = os.path.basename(folder)
    labels = list(label2int)
    # generate CSV file
    df = pd.DataFrame(columns=["filepath", "label"])
    i = 0
    for label in labels:
        print("Reading", os.path.join(folder, label, "*"))
        for filepath in glob.glob(os.path.join(folder, label, "*")):
            df.loc[i] = [filepath, label2int[label]]
            i += 1
    output_file = f"{folder_name}.csv"
    print("Saving", output_file)
    df.to_csv(output_file)

# generate CSV files for all data portions, labeling nevus and seborrheic keratosis
# as 0 (benign), and melanoma as 1 (malignant)
# you should replace the "data" path with your extracted dataset path
# don't replace it if you used the download_and_extract_dataset() function
generate_csv("data/train", {"nevus": 0, "seborrheic_keratosis": 0, "melanoma": 1})
generate_csv("data/valid", {"nevus": 0, "seborrheic_keratosis": 0, "melanoma": 1})
generate_csv("data/test", {"nevus": 0, "seborrheic_keratosis": 0, "melanoma": 1})
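Before moving on, it can help to peek at one of the generated files to confirm that the filepath/label columns look right. A minimal check (the column names come from the DataFrame built above):

# quick look at the generated metadata: image paths plus 0/1 labels
df_check = pd.read_csv("train.csv", index_col=0)
print(df_check.head())
print(df_check["label"].value_counts())  # how many benign (0) vs malignant (1) samples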
# loading data
train_metadata_filename = "train.csv"
valid_metadata_filename = "valid.csv"
# load CSV files as DataFrames
df_train = pd.read_csv(train_metadata_filename)
df_valid = pd.read_csv(valid_metadata_filename)
n_training_samples = len(df_train)
n_validation_samples = len(df_valid)
print("Number of training samples:", n_training_samples)
print("Number of validation samples:", n_validation_samples)
train_ds = tf.data.Dataset.from_tensor_slices((df_train["filepath"], df_train["label"]))
valid_ds = tf.data.Dataset.from_tensor_slices((df_valid["filepath"], df_valid["label"]))
Number of training samples: 2000
Number of validation samples: 150

Let's load the images:
# preprocess data
def decode_img(img):
    # convert the compressed string to a 3D uint8 tensor
    img = tf.image.decode_jpeg(img, channels=3)
    # Use `convert_image_dtype` to convert to floats in the [0,1] range.
    img = tf.image.convert_image_dtype(img, tf.float32)
    # resize the image to the desired size.
    return tf.image.resize(img, [299, 299])

def process_path(filepath, label):
    # load the raw data from the file as a string
    img = tf.io.read_file(filepath)
    img = decode_img(img)
    return img, label

valid_ds = valid_ds.map(process_path)
train_ds = train_ds.map(process_path)
# test_ds = test_ds
for image, label in train_ds.take(1):
    print("Image shape:", image.shape)
    print("Label:", label.numpy())
The code above uses the map() method to run the process_path() function on every sample in both datasets. For each sample it loads the image file, decodes the JPEG, converts the pixel values to the [0, 1] range, and resizes the image to (299, 299, 3). We then take one sample and print its shape and label:
Image shape: (299, 299, 3)
Label: 0

Everything is working as expected.
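As an extra sanity check, it can be useful to look at a few preprocessed images alongside their labels. This is only a small sketch using the matplotlib import and the class_names list defined earlier; the 3×3 grid size is an arbitrary choice:

# show a small grid of preprocessed images with their benign/malignant labels
plt.figure(figsize=(8, 8))
for i, (image, label) in enumerate(train_ds.take(9)):
    plt.subplot(3, 3, i + 1)
    plt.imshow(image)  # pixel values are already floats in [0, 1]
    plt.title(class_names[int(label.numpy())])
    plt.axis("off")
plt.show()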