In the "Data Science Workshop: Alzheimer's Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI," the project aimed to address the critical task of Alzheimer's disease prediction. The journey began with a comprehensive data exploration phase, involving the analysis of a dataset containing various features related to brain scans and demographics of patients. This initial step was crucial in understanding the data's characteristics, identifying missing values, and gaining insights into potential patterns that could aid in diagnosis. Upon understanding the dataset, the categorical features' distributions were meticulously examined. The project expertly employed pie charts, bar plots, and stacked bar plots to visualize the distribution of categorical variables like "Group," "M/F," "MMSE," "CDR," and "age_group." These visualizations facilitated a clear understanding of the demographic and clinical characteristics of the patients, highlighting key factors contributing to Alzheimer's disease. The analysis revealed significant patterns, such as the prevalence of Alzheimer's in different age groups, gender-based distribution, and cognitive performance variations. Moving ahead, the project ventured into the realm of predictive modeling. Employing machine learning techniques, the team embarked on a journey to develop models capable of predicting Alzheimer's disease with high accuracy. The focus was on employing various machine learning algorithms, including K-Nearest Neighbors (KNN), Decision Trees, Random Forests, Gradient Boosting, Light Gradient Boosting, Multi-Layer Perceptron, and Extreme Gradient Boosting. Grid search was applied to tune hyperparameters, optimizing the models' performance. The evaluation process was meticulous, utilizing a range of metrics such as accuracy, precision, recall, F1-score, and confusion matrices. This intricate analysis ensured a comprehensive assessment of each model's ability to predict Alzheimer's cases accurately. The project further delved into deep learning methodologies to enhance predictive capabilities. An arsenal of deep learning architectures, including Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM) networks, Feedforward Neural Networks (FNN), and Recurrent Neural Networks (RNN), were employed. These models leveraged the intricate relationships present in the data to make refined predictions. The evaluation extended to ROC curves and AUC scores, providing insights into the models' ability to differentiate between true positive and false positive rates. The project also showcased an innovative Python GUI built using PyQt. This graphical interface provided a user-friendly platform to input data and visualize the predictions. The GUI's interactive nature allowed users to explore model outcomes and predictions while seamlessly navigating through different input options. In conclusion, the "Data Science Workshop: Alzheimer's Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI" was a comprehensive endeavor that involved meticulous data exploration, distribution analysis of categorical features, and extensive model development and evaluation. It skillfully navigated through machine learning and deep learning techniques, deploying a variety of algorithms to predict Alzheimer's disease. 
The focus on diverse metrics ensured a holistic assessment of the models' performance, while the innovative GUI offered an intuitive platform to engage with predictions interactively. This project stands as a testament to the power of data science in tackling complex healthcare challenges.
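To make the modeling workflow above concrete, here is a minimal, hedged sketch of the grid-search-and-metrics step using scikit-learn. The filename and the column names (borrowed from the feature list in WORKSHOP 5 below) are assumptions for illustration, not the workshop's actual code:

```python
# Hedged sketch of the grid-search workflow described above.
# The filename and column names ("alzheimer.csv", "Group", "MMSE", ...)
# are assumptions based on the dataset description.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix

df = pd.read_csv("alzheimer.csv")               # hypothetical filename
df = df.dropna(subset=["SES", "MMSE"])          # drop rows missing key scores
X = df[["Age", "EDUC", "SES", "MMSE", "eTIV", "nWBV", "ASF"]]
y = (df["Group"] == "Demented").astype(int)     # binary target from "Group"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Grid search over a small hyperparameter grid, scored by F1
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, scoring="f1", cv=5)
search.fit(X_train, y_train)

y_pred = search.best_estimator_.predict(X_test)
print("Best parameters:", search.best_params_)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```

The same pattern (fit, tune with GridSearchCV, report precision/recall/F1 and the confusion matrix) extends to the other classifiers listed above.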
WORKSHOP 1: Heart Failure Analysis and Prediction Using Scikit-Learn, Keras, and TensorFlow with Python GUI. Cardiovascular diseases (CVDs) are the number 1 cause of death globally, taking an estimated 17.9 million lives each year, which accounts for 31% of all deaths worldwide. Heart failure is a common event caused by CVDs, and this dataset contains 12 features that can be used to predict mortality from heart failure. People with cardiovascular disease, or who are at high cardiovascular risk (due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidaemia, or already established disease), need early detection and management, where machine learning models can be of great help. The dataset used in this project is from Davide Chicco and Giuseppe Jurman, "Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone," BMC Medical Informatics and Decision Making 20, 16 (2020). Attribute information in the dataset is as follows: age: Age; anaemia: Decrease of red blood cells or hemoglobin (boolean); creatinine_phosphokinase: Level of the CPK enzyme in the blood (mcg/L); diabetes: If the patient has diabetes (boolean); ejection_fraction: Percentage of blood leaving the heart at each contraction (percentage); high_blood_pressure: If the patient has hypertension (boolean); platelets: Platelets in the blood (kiloplatelets/mL); serum_creatinine: Level of serum creatinine in the blood (mg/dL); serum_sodium: Level of serum sodium in the blood (mEq/L); sex: Woman or man (binary); smoking: If the patient smokes or not (boolean); time: Follow-up period (days); and DEATH_EVENT: If the patient deceased during the follow-up period (boolean). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC curves, distributions of features, feature importances, cross-validation scores, predicted versus true values, confusion matrices, learning curves, model performance, model scalability, training loss, and training accuracy.
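As a hedged illustration of the kind of pipeline this workshop builds, here is a minimal scikit-learn sketch that fits one of the listed models (Logistic Regression) and computes its ROC curve; the filename is hypothetical, and the column names follow the attribute list above:

```python
# Minimal sketch of the heart-failure pipeline, assuming a CSV with the
# attributes listed above ("heart_failure.csv" is a hypothetical filename).
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

df = pd.read_csv("heart_failure.csv")
X = df.drop(columns=["DEATH_EVENT"])
y = df["DEATH_EVENT"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Standardize features so the linear model is not dominated by scale
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]

fpr, tpr, _ = roc_curve(y_test, proba)
plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_test, proba):.2f}")
plt.plot([0, 1], [0, 1], "k--")            # chance-level diagonal
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```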
WORKSHOP 2: Cervical Cancer Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. About 11,000 new cases of invasive cervical cancer are diagnosed each year in the U.S. However, the number of new cervical cancer cases has been declining steadily over the past decades. Although it is the most preventable type of cancer, each year cervical cancer kills about 4,000 women in the U.S. and about 300,000 women worldwide. Numerous studies report that high poverty levels are linked with low screening rates. In addition, lack of health insurance, limited transportation, and language difficulties hinder a poor woman's access to screening services. Human papillomavirus (HPV) is the main risk factor for cervical cancer. In adults, the most important risk factor for HPV is sexual activity with an infected person. Women most at risk for cervical cancer are those with a history of multiple sexual partners, sexual intercourse at age 17 years or younger, or both. A woman who has never been sexually active has a very low risk of developing cervical cancer. Sexual activity with multiple partners increases the likelihood of many other sexually transmitted infections (chlamydia, gonorrhea, syphilis). Studies have found an association between chlamydia and cervical cancer risk, including the possibility that chlamydia may prolong HPV infection. Therefore, early detection of cervical cancer using machine learning and deep learning models can be of great help. The dataset used in this project is obtained from the UCI Repository and kindly acknowledged; it contains a list of risk factors for cervical cancer leading to a biopsy examination. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC curves, distributions of features, feature importances, cross-validation scores, predicted versus true values, confusion matrices, learning curves, model performance, model scalability, training loss, and training accuracy.

WORKSHOP 3: Chronic Kidney Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. Chronic kidney disease is a longstanding disease of the kidneys leading to renal failure. The kidneys filter waste and excess fluid from the blood; as the kidneys fail, waste builds up. Symptoms develop slowly and aren't specific to the disease. Some people have no symptoms at all and are diagnosed by a lab test. Medication helps manage symptoms; in later stages, filtering the blood with a machine (dialysis) or a transplant may be required. The dataset used in this project was collected over a 2-month period in India with 25 features (e.g., red blood cell count, white blood cell count, etc.). The target is 'classification', which is either 'ckd' (chronic kidney disease) or 'notckd'. It contains measurements of 24 features for 400 people: quite a lot of features for just 400 samples. There are 14 categorical features, while 10 are numerical. The dataset needs cleaning: it has NaNs, and the numeric features need to be forced to floats. Attribute information: Age (numerical): age in years; Blood Pressure (numerical): bp in mm/Hg; Specific Gravity (categorical): sg (1.005, 1.010, 1.015, 1.020, 1.025); Albumin (categorical): al (0, 1, 2, 3, 4, 5); Sugar (categorical): su (0, 1, 2, 3, 4, 5); Red Blood Cells (categorical): rbc (normal, abnormal); Pus Cell (categorical): pc (normal, abnormal); Pus Cell Clumps (categorical): pcc (present, notpresent); Bacteria (categorical): ba (present, notpresent); Blood Glucose Random (numerical): bgr in mgs/dl; Blood Urea (numerical): bu in mgs/dl; Serum Creatinine (numerical): sc in mgs/dl; Sodium (numerical): sod in mEq/L; Potassium (numerical): pot in mEq/L; Hemoglobin (numerical): hemo in gms; Packed Cell Volume (numerical); White Blood Cell Count (numerical): wc in cells/cumm; Red Blood Cell Count (numerical): rc in millions/cmm; Hypertension (categorical): htn (yes, no); Diabetes Mellitus (categorical): dm (yes, no); Coronary Artery Disease (categorical): cad (yes, no); Appetite (categorical): appet (good, poor); Pedal Edema (categorical): pe (yes, no); Anemia (categorical): ane (yes, no); and Class (categorical): class (ckd, notckd). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC curves, distributions of features, feature importances, cross-validation scores, predicted versus true values, confusion matrices, learning curves, model performance, model scalability, training loss, and training accuracy.
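The cleaning step called out above (NaNs, and numeric features forced to floats) might look like the following pandas sketch; the filename and the exact column abbreviations are assumptions based on the attribute list:

```python
# Hedged sketch of the CKD cleaning step: coerce numeric columns to
# floats and impute missing values. "kidney_disease.csv" and the column
# abbreviations are assumptions based on the attribute list above.
import pandas as pd

df = pd.read_csv("kidney_disease.csv")

numeric_cols = ["age", "bp", "bgr", "bu", "sc", "sod",
                "pot", "hemo", "pcv", "wc", "rc"]
for col in numeric_cols:
    # Stray strings (e.g. "\t?") become NaN, leaving a float column
    df[col] = pd.to_numeric(df[col], errors="coerce")
    df[col] = df[col].fillna(df[col].median())      # median imputation

categorical_cols = ["rbc", "pc", "pcc", "ba", "htn", "dm",
                    "cad", "appet", "pe", "ane"]
for col in categorical_cols:
    df[col] = df[col].str.strip()                   # remove stray tabs/spaces
    df[col] = df[col].fillna(df[col].mode()[0])     # impute most common value

print(df.dtypes)
print(df.isna().sum().sum(), "missing values remain")
```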
WORKSHOP 4: Lung Cancer Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. An effective cancer prediction system helps people learn their cancer risk at low cost and supports appropriate decisions based on that risk status. The data was collected from an online lung cancer prediction system website. The dataset has 16 attributes and 309 instances. Attribute information of the dataset is as follows: Gender: M (male), F (female); Age: age of the patient; Smoking: YES=2, NO=1; Yellow fingers: YES=2, NO=1; Anxiety: YES=2, NO=1; Peer_pressure: YES=2, NO=1; Chronic Disease: YES=2, NO=1; Fatigue: YES=2, NO=1; Allergy: YES=2, NO=1; Wheezing: YES=2, NO=1; Alcohol: YES=2, NO=1; Coughing: YES=2, NO=1; Shortness of Breath: YES=2, NO=1; Swallowing Difficulty: YES=2, NO=1; Chest pain: YES=2, NO=1; and Lung Cancer: YES, NO. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC curves, distributions of features, feature importances, cross-validation scores, predicted versus true values, confusion matrices, learning curves, model performance, model scalability, training loss, and training accuracy.
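To make the YES=2, NO=1 coding above concrete, here is a small pandas sketch that recodes the survey columns to conventional 0/1 flags; the filename and the GENDER and LUNG_CANCER column names are assumptions:

```python
# Hedged sketch for the lung-cancer survey encoding described above.
# "survey_lung_cancer.csv" and the exact column names are assumptions.
import pandas as pd

df = pd.read_csv("survey_lung_cancer.csv")

# Columns coded YES=2, NO=1 become 1/0 flags
binary_cols = [c for c in df.columns
               if set(df[c].dropna().unique()) <= {1, 2}]
df[binary_cols] = df[binary_cols] - 1

# Encode the remaining string-valued fields
df["GENDER"] = df["GENDER"].map({"M": 1, "F": 0})
df["LUNG_CANCER"] = df["LUNG_CANCER"].map({"YES": 1, "NO": 0})
print(df.head())
```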
WORKSHOP 5: Alzheimer's Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. Alzheimer's is a type of dementia that causes problems with memory, thinking, and behavior. Symptoms usually develop slowly and get worse over time, becoming severe enough to interfere with daily tasks. Alzheimer's is not a normal part of aging. The greatest known risk factor is increasing age, and the majority of people with Alzheimer's are 65 and older. But Alzheimer's is not just a disease of old age: approximately 200,000 Americans under the age of 65 have younger-onset Alzheimer's disease (also known as early-onset Alzheimer's). The dataset consists of longitudinal MRI data on 374 subjects aged 60 to 96. Each subject was scanned at least once, and everyone is right-handed. 206 of the subjects were grouped as 'Nondemented' throughout the study. 107 of the subjects were grouped as 'Demented' at the time of their initial visits and remained so throughout the study. 14 subjects were grouped as 'Nondemented' at the time of their initial visit and were subsequently characterized as 'Demented' at a later visit; these fall under the 'Converted' category. Some important features in the dataset are: EDUC: Years of Education; SES: Socioeconomic Status; MMSE: Mini Mental State Examination; CDR: Clinical Dementia Rating; eTIV: Estimated Total Intracranial Volume; nWBV: Normalized Whole Brain Volume; and ASF: Atlas Scaling Factor. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC curves, distributions of features, feature importances, cross-validation scores, predicted versus true values, confusion matrices, learning curves, model performance, model scalability, training loss, and training accuracy.

WORKSHOP 6: Parkinson Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. The dataset was created by Max Little of the University of Oxford, in collaboration with the National Centre for Voice and Speech, Denver, Colorado, who recorded the speech signals. The original study published the feature extraction methods for general voice disorders. This dataset is composed of a range of biomedical voice measurements from 31 people, 23 with Parkinson's disease (PD). Each column in the table is a particular voice measure, and each row corresponds to one of 195 voice recordings from these individuals ("name" column). The main aim of the data is to discriminate healthy people from those with PD, according to the "status" column, which is set to 0 for healthy and 1 for PD. The data is in ASCII CSV format; each row of the CSV file contains an instance corresponding to one voice recording. There are around six recordings per patient, and the name of the patient is identified in the first column. Attribute information of this dataset is as follows: name - ASCII subject name and recording number; MDVP:Fo(Hz) - average vocal fundamental frequency; MDVP:Fhi(Hz) - maximum vocal fundamental frequency; MDVP:Flo(Hz) - minimum vocal fundamental frequency; MDVP:Jitter(%), MDVP:Jitter(Abs), MDVP:RAP, MDVP:PPQ, Jitter:DDP - several measures of variation in fundamental frequency; MDVP:Shimmer, MDVP:Shimmer(dB), Shimmer:APQ3, Shimmer:APQ5, MDVP:APQ, Shimmer:DDA - several measures of variation in amplitude; NHR, HNR - two measures of the ratio of noise to tonal components in the voice; status - health status of the subject, one = Parkinson's, zero = healthy; RPDE, D2 - two nonlinear dynamical complexity measures; DFA - signal fractal scaling exponent; and spread1, spread2, PPE - three nonlinear measures of fundamental frequency variation. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC curves, distributions of features, feature importances, cross-validation scores, predicted versus true values, confusion matrices, learning curves, model performance, model scalability, training loss, and training accuracy.

WORKSHOP 7: Liver Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. The number of patients with liver disease has been rising continuously because of excessive alcohol consumption, inhalation of harmful gases, and intake of contaminated food, pickles, and drugs. This dataset was used to evaluate prediction algorithms in an effort to reduce the burden on doctors. It contains 416 liver patient records and 167 non-liver patient records collected from the North East of Andhra Pradesh, India. The "Dataset" column is a class label used to divide groups into liver patient (liver disease) or not (no disease). The data set contains 441 male patient records and 142 female patient records. Any patient whose age exceeded 89 is listed as being of age "90". Columns in the dataset: Age of the patient; Gender of the patient; Total Bilirubin; Direct Bilirubin; Alkaline Phosphotase; Alamine Aminotransferase; Aspartate Aminotransferase; Total Protiens; Albumin; Albumin and Globulin Ratio; and Dataset: a field used to split the data into two sets (patients with liver disease, or no disease). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundaries, ROC curves, distributions of features, feature importances, cross-validation scores, predicted versus true values, confusion matrices, learning curves, model performance, model scalability, training loss, and training accuracy.
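Since each workshop culminates in a PyQt5 dashboard of this kind, the following minimal skeleton sketches the general pattern: a combo box selects a plot, and a Matplotlib canvas embedded in the window renders it. The widget layout and the placeholder drawing are illustrative assumptions, not the books' actual code:

```python
# Minimal PyQt5 + Matplotlib skeleton in the spirit of the GUIs above.
# The drawing callback is a placeholder; a real app would plot ROC
# curves, confusion matrices, etc. for a fitted model.
import sys
import numpy as np
from PyQt5.QtWidgets import (QApplication, QMainWindow, QWidget,
                             QVBoxLayout, QComboBox)
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg
from matplotlib.figure import Figure

class PlotWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Model Explorer (sketch)")
        self.canvas = FigureCanvasQTAgg(Figure(figsize=(5, 4)))
        self.combo = QComboBox()
        self.combo.addItems(["ROC curve", "Confusion matrix",
                             "Learning curve"])
        self.combo.currentTextChanged.connect(self.redraw)

        layout = QVBoxLayout()
        layout.addWidget(self.combo)
        layout.addWidget(self.canvas)
        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)
        self.redraw(self.combo.currentText())

    def redraw(self, plot_name):
        # Placeholder drawing keyed to the selected plot name
        self.canvas.figure.clear()
        ax = self.canvas.figure.add_subplot(111)
        ax.set_title(plot_name)
        ax.plot(np.linspace(0, 1, 50), np.linspace(0, 1, 50) ** 2)
        self.canvas.draw()

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = PlotWindow()
    win.show()
    sys.exit(app.exec_())
```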
This book presents cutting-edge research and applications of deep learning in a broad range of medical imaging scenarios, such as computer-aided diagnosis, image segmentation, tissue recognition and classification, and other areas of medical and healthcare problems. Each of its chapters covers a topic in depth, ranging from medical image synthesis and techniques for musculoskeletal analysis to diagnostic tools for breast lesions on digital mammograms and glaucoma on retinal fundus images. It also provides an overview of deep learning in medical image analysis and highlights issues and challenges encountered by researchers and clinicians, surveying and discussing practical approaches in general and in the context of specific problems. Academics, clinical and industry researchers, as well as young researchers and graduate students in medical imaging, computer-aided diagnosis, biomedical engineering, and computer vision will find this book a great reference and a very useful learning resource.
In this Data Science Workshop focused on Liver Disease Classification and Prediction, we embarked on a comprehensive journey through various stages of data analysis, model development, and performance evaluation. The workshop aimed to use Python and its associated libraries to create a Graphical User Interface (GUI) that facilitates the classification and prediction of liver disease cases.

Our exploration began with a thorough examination of the dataset. This entailed importing the necessary libraries, such as NumPy, Pandas, and Matplotlib, for data manipulation, visualization, and preprocessing. The dataset, representing liver-related attributes, was read, and its dimensions were checked to ensure data integrity. To gain a preliminary understanding, the dataset's initial rows and column information were displayed. We identified key features such as 'Age', 'Gender', and various biochemical attributes relevant to liver health. The dataset's structure, including data types and non-null counts, was inspected to identify any potential data quality issues. We found that the 'Albumin_and_Globulin_Ratio' feature had a few missing values, which were subsequently filled with the median value.

Our exploration extended to visualizing categorical distributions. Pie charts provided insights into the proportions of healthy and unhealthy liver cases among the gender categories, and stacked bar plots delved into the connection between 'Total_Bilirubin' categories and the prevalence of liver disease, fostering a deeper understanding of these relationships.

Transitioning to predictive modeling, we constructed machine learning models using a range of algorithms: Logistic Regression, Support Vector Machines, K-Nearest Neighbors, Decision Trees, Random Forests, Gradient Boosting, Extreme Gradient Boosting, and Light Gradient Boosting. The data was split into training and testing sets, and each model underwent rigorous evaluation using metrics such as accuracy, precision, recall, F1-score, and ROC-AUC. Hyperparameter tuning played a pivotal role in model enhancement: we leveraged grid search and cross-validation to identify the best combination of hyperparameters and optimize model performance. We then assessed the significance of each feature using techniques such as feature importance from tree-based models.

The workshop didn't halt at machine learning; it delved into deep learning as well. We implemented an Artificial Neural Network (ANN) using the Keras library. With distinct layers, activation functions, and dropout layers to prevent overfitting, this model captured complex relationships within the data and achieved impressive results in liver disease prediction.

Our journey culminated with a comprehensive analysis of model performance. The metrics chosen for evaluation included accuracy, precision, recall, F1-score, and confusion matrix visualizations, providing a comprehensive view of each model's capability to correctly classify both healthy and unhealthy liver cases.

In summary, the Data Science Workshop on Liver Disease Classification and Prediction was a holistic exploration of data preprocessing, feature categorization, machine learning, and deep learning techniques. The culmination of these efforts was a Python GUI that empowers users to input patient attributes and receive predictions regarding liver health.
Through this workshop, participants gained a well-rounded understanding of data science techniques and their application in the field of healthcare.
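As a hedged sketch of the preprocessing and ANN steps described above, the following Keras example imputes the median for 'Albumin_and_Globulin_Ratio' and trains a small network with dropout; the filename and the label convention for the 'Dataset' column are assumptions:

```python
# Hedged sketch of the liver-disease ANN described above.
# "indian_liver_patient.csv" is a hypothetical filename; Dataset == 1
# is assumed to mark a liver-disease record.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

df = pd.read_csv("indian_liver_patient.csv")
df["Albumin_and_Globulin_Ratio"] = df["Albumin_and_Globulin_Ratio"].fillna(
    df["Albumin_and_Globulin_Ratio"].median())      # median imputation
df["Gender"] = df["Gender"].map({"Male": 1, "Female": 0})

X = df.drop(columns=["Dataset"])
y = (df["Dataset"] == 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Small fully connected network with dropout to curb overfitting
model = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, batch_size=16,
          validation_split=0.2, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))
```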
In the captivating journey of our data science workshop, we embarked on the exploration of Chronic Kidney Disease classification and prediction. Our quest began with a thorough dive into data exploration, where we delved into the dataset's intricacies to unearth hidden patterns and insights. We analyzed the distribution of categorized features, unraveling the nuances that underlie chronic kidney disease.

Guided by the principles of machine learning, we set out to build predictive models. With the aid of grid search, we fine-tuned our machine learning algorithms, optimizing their hyperparameters for peak performance. Each model, whether K-Nearest Neighbors, Decision Trees, Random Forests, Gradient Boosting, Naive Bayes, Extreme Gradient Boosting, Light Gradient Boosting, or Multi-Layer Perceptron, was meticulously trained and tested, paving the way for robust predictions.

The voyage into deep learning took us further, as we harnessed the power of Artificial Neural Networks (ANNs). Leveraging TensorFlow, we crafted layered architectures designed to discern subtle patterns in the data; this marked our initial foray into deep learning. Our expedition did not conclude with ANNs: we also uncovered the potential of Long Short-Term Memory (LSTM) networks, which, attuned to sequential data, unraveled temporal dependencies within the dataset and fortified our predictive capabilities. Diving further, we encountered Self-Organizing Maps (SOMs) and Restricted Boltzmann Machines (RBMs), models rooted in unsupervised learning that unmasked underlying structures in the dataset. Autoencoders, our final frontier in deep learning, served as tools for dimensionality reduction and feature learning, transforming complex data into compact, meaningful representations that guided our predictive models with newfound efficiency.

To furnish a granular understanding of model behavior, we employed the classification report, which delineates precision, recall, and F1-score for each class, providing a comprehensive snapshot of the model's predictive capacity across categories. The confusion matrix offered a tangible visualization of the interplay between true positives, true negatives, false positives, and false negatives. We also used ROC and precision-recall curves to illuminate the trade-off between the true positive rate and the false positive rate, which is vital when tackling imbalanced datasets. For regression-style checks, MSE and its counterpart RMSE quantified the average squared differences between predictions and actual values, while the R-squared (R2) score revealed the extent to which the model explained variance in the dependent variable. Collectively, this ensemble of metrics enabled us to make sound model decisions, optimize hyperparameters, and gauge each model's fitness for accurate disease prognosis in a clinical context.

Amidst this whirlwind of data exploration and model construction, our GUI built with PyQt emerged as a beacon of user-friendly interaction.
Through its intuitive interface, users navigated seamlessly between model selection, training, and prediction. Our GUI encapsulated the intricacies of our journey, bridging the gap between data science and user experience. In the end, our odyssey illuminated the intricate landscape of Chronic Kidney Disease classification and prediction. We harnessed the power of both machine learning and deep learning, uncovering hidden insights and propelling our predictive capabilities to new heights. Our journey transcended the realms of data, algorithms, and interfaces, leaving an indelible mark on the crossroads of science and innovation.
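For readers who want to see the evaluation toolkit described above in code, here is a small scikit-learn sketch; the arrays are placeholder stand-ins for a fitted model's test-set outputs:

```python
# Sketch of the metrics ensemble described above, on placeholder arrays.
import numpy as np
from sklearn.metrics import (classification_report, confusion_matrix,
                             roc_auc_score, precision_recall_curve,
                             mean_squared_error, r2_score)

# Classification: true labels, model scores, and thresholded predictions
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])
y_pred = (y_score >= 0.5).astype(int)

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))
print("ROC-AUC:", roc_auc_score(y_true, y_score))
# In a real workflow these points would be plotted as a PR curve
precision, recall, _ = precision_recall_curve(y_true, y_score)

# Regression-style checks: MSE, RMSE, and R-squared
y_cont_true = np.array([3.0, 2.5, 4.1, 3.8])
y_cont_pred = np.array([2.8, 2.7, 4.0, 3.5])
mse = mean_squared_error(y_cont_true, y_cont_pred)
print("MSE:", mse, "RMSE:", np.sqrt(mse))
print("R2:", r2_score(y_cont_true, y_cont_pred))
```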
This book provides a thorough overview of the ongoing evolution in the application of artificial intelligence (AI) within healthcare and radiology, enabling readers to gain a deeper insight into the technological background of AI and the impacts of new and emerging technologies on medical imaging. After an introduction to game changers in radiology, such as deep learning technology, the technological evolution of AI in computing science and medical image computing is described, with an explanation of basic principles and the types and subtypes of AI. Subsequent sections address the use of imaging biomarkers, the development and validation of AI applications, and various aspects and issues relating to the growing role of big data in radiology. Diverse real-life clinical applications of AI are then outlined for different body parts, demonstrating their ability to add value to daily radiology practice. The concluding section focuses on the impact of AI on radiology and the implications for radiologists, for example with respect to training. Written by radiologists and IT professionals, the book will be of high value for radiologists, medical/clinical physicists, IT specialists, and imaging informatics professionals.
This two-volume handbook presents a collection of novel methodologies with applications and illustrative examples in the areas of data-driven computational social sciences. Throughout the handbook, the focus is kept specifically on business and consumer-oriented applications, with interesting sections ranging from clustering and network analysis, meta-analytics, memetic algorithms, machine learning, recommender systems methodologies, parallel pattern mining, and data mining to specific applications in market segmentation, travel, fashion, or entertainment analytics. It is a must-read for anyone in data analytics, marketing, behavior modelling, and computational social science who is interested in the latest applications of new computer science methodologies. The chapters, contributed by leading experts in the associated fields, cover technical aspects at different levels: some are introductory and could be used for teaching, while others aim at building a common understanding of the methodologies and recent application areas, including the introduction of new theoretical results on the complexity of core problems. Business and marketing professionals may use the book to familiarize themselves with some important foundations of data science, and the work is a good starting point for establishing an open dialogue between professionals and researchers from different fields. Together, the two volumes present a number of new directions in Business and Customer Analytics, with an emphasis on personalization of services, the development of new mathematical models, and new algorithms, heuristics, and metaheuristics applied to challenging problems in the field. Sections of the book provide introductory material leading to more specific and advanced themes in some of the chapters, allowing the volumes to be used as an advanced textbook. Clustering, Proximity Graphs, Pattern Mining, Frequent Itemset Mining, Feature Engineering, Network and Community Detection, Network-based Recommender Systems, and Visualization are some of the topics in the first volume. Techniques on Memetic Algorithms and their applications to Business Analytics and Data Science are surveyed in the second volume; applications in Team Orienteering, Competitive Facility Location, and Visualization of Products and Consumers are also discussed. The second volume also includes an introduction to Meta-Analytics and to the application areas of Fashion and Travel Analytics. Overall, the two-volume set describes some fundamentals, acts as a bridge between different disciplines, and presents important results in a rapidly moving field that combines powerful optimization techniques with new mathematical models critical for personalization of services. Academics and professionals working in the areas of business analytics, data science, operations research, and marketing will find this handbook valuable as a reference, and students studying these fields will find it useful as a secondary textbook.
This book discusses new cognitive informatics tools, algorithms, and methods that mimic the mechanisms of the human brain, leading to an impending revolution in understanding the large amounts of data generated by various smart applications. The book is a collection of peer-reviewed, best selected research papers presented at the International Conference on Data Intelligence and Cognitive Informatics (ICDICI 2020), organized by SCAD College of Engineering and Technology, Tirunelveli, India, during 8–9 July 2020. The book includes novel work in the data intelligence domain, which combines the growing efforts of artificial intelligence, machine learning, deep learning, and cognitive science to study and develop a deeper understanding of information processing systems.
This open access book explores ways to leverage information technology and machine learning to combat disease and promote health, especially in resource-constrained settings. It focuses on digital disease surveillance through the application of machine learning to non-traditional data sources. Developing countries are uniquely prone to large-scale emerging infectious disease outbreaks due to disruption of ecosystems, civil unrest, and poor healthcare infrastructure – and without comprehensive surveillance, delays in outbreak identification, resource deployment, and case management can be catastrophic. In combination with context-informed analytics, students will learn how non-traditional digital disease data sources – including news media, social media, Google Trends, and Google Street View – can fill critical knowledge gaps and help inform on-the-ground decision-making when formal surveillance systems are insufficient.
After a long period of neglect, Artificial Intelligence is once again at the center of most of our political, economic, and socio-cultural debates. Recent advances in the field of Artificial Neural Networks have led to a renaissance of dystopian and utopian speculations on an AI-rendered future. Algorithmic technologies are deployed for identifying potential terrorists through vast surveillance networks, for producing sentencing guidelines and recidivism risk profiles in criminal justice systems, for demographic and psychographic targeting of bodies for advertising or propaganda, and more generally for automating the analysis of language, text, and images. Against this background, the aim of this book is to discuss the heterogeneous conditions, implications, and effects of modern AI and Internet technologies in terms of their political dimension: what does it mean to critically investigate efforts of net politics in the age of machine learning algorithms?