In this "Heart Failure Analysis and Prediction" data science workshop, we embarked on a comprehensive journey through the intricacies of cardiovascular health assessment using machine learning and deep learning techniques. Our journey began with an in-depth exploration of the dataset, where we meticulously studied its characteristics, dimensions, and underlying patterns. This initial step laid the foundation for our subsequent analyses. We delved into a detailed examination of the distribution of categorized features, meticulously dissecting variables such as age, sex, serum sodium levels, diabetes status, high blood pressure, smoking habits, and anemia. This critical insight enabled us to comprehend how these features relate to each other and potentially impact the occurrence of heart failure, providing valuable insights for subsequent modeling. Subsequently, we engaged in the heart of the project: predicting heart failure. Employing machine learning models, we harnessed the power of grid search to optimize model parameters, meticulously fine-tuning algorithms to achieve the best predictive performance. Through an array of models including Logistic Regression, KNeighbors Classifier, DecisionTrees Classifier, Random Forest Classifier, Gradient Boosting Classifier, XGB Classifier, LGBM Classifier, and MLP Classifier, we harnessed metrics like accuracy, precision, recall, and F1-score to meticulously evaluate each model's efficacy. Venturing further into the realm of deep learning, we embarked on an exploration of neural networks, striving to capture intricate patterns in the data. Our arsenal included diverse architectures such as Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM) networks, Self Organizing Maps (SOMs), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), and Autoencoders. These architectures enabled us to unravel complex relationships within the data, yielding nuanced insights into the dynamics of heart failure prediction. Our approach to evaluating model performance was rigorous and thorough. By scrutinizing metrics such as accuracy, recall, precision, and F1-score, we gained a comprehensive understanding of the models' strengths and limitations. These metrics enabled us to make informed decisions about model selection and refinement, ensuring that our predictions were as accurate and reliable as possible. The evaluation phase emerges as a pivotal aspect, accentuated by an array of comprehensive metrics. Performance assessment encompasses metrics such as accuracy, precision, recall, F1-score, and ROC-AUC. Cross-validation and learning curves are strategically employed to mitigate overfitting and ensure model generalization. Furthermore, visual aids such as ROC curves and confusion matrices provide a lucid depiction of the models' interplay between sensitivity and specificity. Complementing our advanced analytical endeavors, we also embarked on the creation of a Python GUI using PyQt. This intuitive graphical interface provided an accessible platform for users to interact with the developed models and gain meaningful insights into heart health. The GUI streamlined the prediction process, making it user-friendly and facilitating the application of our intricate models to real-world scenarios. In conclusion, the "Heart Failure Analysis and Prediction" data science workshop was a journey through the realms of data exploration, feature distribution analysis, and the application of cutting-edge machine learning and deep learning techniques. 
By rigorously evaluating model performance, harnessing the capabilities of neural networks, and culminating in a user-friendly Python GUI, the workshop gives participants a comprehensive toolkit for analyzing and predicting heart failure with precision.
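For concreteness, here is a minimal sketch of the grid-search workflow described above. The CSV filename and the parameter grid are illustrative assumptions, not the workshop's exact code.

```python
# Minimal sketch of the grid-search workflow described above.
# Assumes the Chicco & Jurman heart-failure CSV is available locally;
# the filename and parameter grid below are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("heart_failure_clinical_records_dataset.csv")
X = df.drop(columns=["DEATH_EVENT"])
y = df["DEATH_EVENT"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Tune a Random Forest over a small grid, scoring by F1.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid, scoring="f1", cv=5, n_jobs=-1
)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
# Accuracy, precision, recall, and F1 on the held-out test set.
print(classification_report(y_test, search.predict(X_test)))
```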
Workshop 1: Heart Failure Analysis and Prediction Using Scikit-Learn, Keras, and TensorFlow with Python GUI. Cardiovascular diseases (CVDs) are the number 1 cause of death globally, taking an estimated 17.9 million lives each year and accounting for 31% of all deaths worldwide. Heart failure is a common event caused by CVDs, and this dataset contains 12 features that can be used to predict mortality by heart failure. People with cardiovascular disease, or at high cardiovascular risk due to one or more risk factors such as hypertension, diabetes, hyperlipidaemia, or already established disease, need early detection and management, and machine learning models can be of great help here. The dataset used in this project is from Davide Chicco and Giuseppe Jurman, "Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone," BMC Medical Informatics and Decision Making 20, 16 (2020). Attribute information in the dataset is as follows: age: Age; anaemia: Decrease of red blood cells or hemoglobin (boolean); creatinine_phosphokinase: Level of the CPK enzyme in the blood (mcg/L); diabetes: If the patient has diabetes (boolean); ejection_fraction: Percentage of blood leaving the heart at each contraction (percentage); high_blood_pressure: If the patient has hypertension (boolean); platelets: Platelets in the blood (kiloplatelets/mL); serum_creatinine: Level of serum creatinine in the blood (mg/dL); serum_sodium: Level of serum sodium in the blood (mEq/L); sex: Woman or man (binary); smoking: If the patient smokes or not (boolean); time: Follow-up period (days); and DEATH_EVENT: If the patient deceased during the follow-up period (boolean). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC curve, distribution of features, feature importance, cross-validation score, predicted versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
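As a hedged illustration of the "CNN 1D" entry in the model list, the sketch below treats the 12 scaled tabular features as a length-12, one-channel sequence for a Keras Conv1D network. The layer sizes are assumptions, not the workshop's exact architecture, and the train/test split is assumed to come from a split like the one in the earlier sketch.

```python
# A hedged sketch of the "CNN 1D" idea on tabular features: treat the 12
# scaled features as a length-12 sequence with one channel. Layer sizes
# are illustrative assumptions, not the workshop's exact architecture.
import numpy as np
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
from tensorflow.keras import layers

# X_train, X_test, y_train, y_test come from a split as in the earlier sketch.
scaler = StandardScaler()
X_tr = scaler.fit_transform(X_train)[..., np.newaxis]   # shape: (n, 12, 1)
X_te = scaler.transform(X_test)[..., np.newaxis]

model = keras.Sequential([
    layers.Input(shape=(X_tr.shape[1], 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # binary DEATH_EVENT output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(X_tr, y_train, validation_split=0.2, epochs=30, verbose=0)

# history.history["loss"] / ["accuracy"] are what the training-loss and
# training-accuracy plots in the GUI would display.
loss, acc = model.evaluate(X_te, y_test, verbose=0)
print(f"test accuracy: {acc:.3f}")
```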
WORKSHOP 2: Cervical Cancer Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. About 11,000 new cases of invasive cervical cancer are diagnosed each year in the U.S. However, the number of new cervical cancer cases has been declining steadily over the past decades. Although it is the most preventable type of cancer, each year cervical cancer kills about 4,000 women in the U.S. and about 300,000 women worldwide. Numerous studies report that high poverty levels are linked with low screening rates. In addition, lack of health insurance, limited transportation, and language difficulties hinder a poor woman's access to screening services. Human papilloma virus (HPV) is the main risk factor for cervical cancer. In adults, the most important risk factor for HPV is sexual activity with an infected person. Women most at risk for cervical cancer are those with a history of multiple sexual partners, sexual intercourse at age 17 years or younger, or both. A woman who has never been sexually active has a very low risk of developing cervical cancer. Sexual activity with multiple partners increases the likelihood of many other sexually transmitted infections (chlamydia, gonorrhea, syphilis). Studies have found an association between chlamydia and cervical cancer risk, including the possibility that chlamydia may prolong HPV infection. Therefore, early detection of cervical cancer using machine and deep learning models can be of great help. The dataset used in this project is obtained from the UCI Repository and is kindly acknowledged; it contains a list of risk factors for cervical cancer leading to a biopsy examination. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC curve, distribution of features, feature importance, cross-validation score, predicted versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. WORKSHOP 3: Chronic Kidney Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. Chronic kidney disease is a longstanding disease of the kidneys leading to renal failure. The kidneys filter waste and excess fluid from the blood; as the kidneys fail, waste builds up. Symptoms develop slowly and aren't specific to the disease. Some people have no symptoms at all and are diagnosed by a lab test. Medication helps manage symptoms; in later stages, filtering the blood with a machine (dialysis) or a transplant may be required. The dataset used in this project was collected over a 2-month period in India and has 25 features (e.g., red blood cell count, white blood cell count). The target is 'classification', which is either 'ckd' or 'notckd' (ckd = chronic kidney disease). It contains measures of 24 features for 400 people, which is quite a lot of features for just 400 samples. There are 14 categorical features, while 10 are numerical. The dataset needs cleaning: it has NaNs, and the numeric features need to be coerced to floats. Attribute information: Age (numerical) age in years; Blood Pressure (numerical) bp in mm/Hg; Specific Gravity (categorical) sg - (1.005, 1.010, 1.015, 1.020, 1.025); Albumin (categorical) al - (0,1,2,3,4,5); Sugar (categorical) su - (0,1,2,3,4,5); Red Blood Cells (categorical) rbc - (normal, abnormal); Pus Cell (categorical) pc - (normal, abnormal); Pus Cell Clumps (categorical) pcc - (present, notpresent); Bacteria (categorical) ba - (present, notpresent); Blood Glucose Random (numerical) bgr in mgs/dl; Blood Urea (numerical) bu in mgs/dl; Serum Creatinine (numerical) sc in mgs/dl; Sodium (numerical) sod in mEq/L; Potassium (numerical) pot in mEq/L; Hemoglobin (numerical) hemo in gms; Packed Cell Volume (numerical) pcv; White Blood Cell Count (numerical) wc in cells/cumm; Red Blood Cell Count (numerical) rc in millions/cmm; Hypertension (categorical) htn - (yes, no); Diabetes Mellitus (categorical) dm - (yes, no); Coronary Artery Disease (categorical) cad - (yes, no); Appetite (categorical) appet - (good, poor); Pedal Edema (categorical) pe - (yes, no); Anemia (categorical) ane - (yes, no); and Class (categorical) class - (ckd, notckd). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC curve, distribution of features, feature importance, cross-validation score, predicted versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
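Since the chronic kidney disease data explicitly needs cleaning, here is a minimal pandas sketch of that step. The filename and the short column codes are assumptions based on the attribute list above.

```python
# A minimal cleaning sketch for the chronic kidney disease data, assuming
# the UCI CSV is saved as "kidney_disease.csv" with the short column codes
# listed above (filename and exact column names are assumptions).
import pandas as pd

df = pd.read_csv("kidney_disease.csv")

# Force the numeric columns to floats; malformed entries become NaN.
numeric_cols = ["age", "bp", "bgr", "bu", "sc", "sod", "pot", "hemo",
                "pcv", "wc", "rc"]
for col in numeric_cols:
    df[col] = pd.to_numeric(df[col], errors="coerce")

# Strip stray whitespace/tabs from the categorical codes.
cat_cols = df.columns.difference(numeric_cols)
df[cat_cols] = df[cat_cols].apply(lambda s: s.astype(str).str.strip())

# Simple imputation: median for numeric features, mode for categorical.
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())
for col in cat_cols:
    df[col] = df[col].replace({"nan": pd.NA}).fillna(df[col].mode()[0])

print(df.isna().sum().sum(), "missing values remain")
```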
WORKSHOP 4: Lung Cancer Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. An effective cancer prediction system lets people learn their cancer risk at low cost and helps them make appropriate decisions based on their risk status. The data were collected from an online lung cancer prediction system website. The dataset has 16 attributes and 309 instances. The attribute information is as follows: Gender: M (male), F (female); Age: Age of the patient; Smoking: YES=2, NO=1; Yellow Fingers: YES=2, NO=1; Anxiety: YES=2, NO=1; Peer Pressure: YES=2, NO=1; Chronic Disease: YES=2, NO=1; Fatigue: YES=2, NO=1; Allergy: YES=2, NO=1; Wheezing: YES=2, NO=1; Alcohol: YES=2, NO=1; Coughing: YES=2, NO=1; Shortness of Breath: YES=2, NO=1; Swallowing Difficulty: YES=2, NO=1; Chest Pain: YES=2, NO=1; and Lung Cancer: YES, NO. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC curve, distribution of features, feature importance, cross-validation score, predicted versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
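The YES=2/NO=1 coding above is easy to normalize before modeling. The following hedged sketch maps it to a conventional 1/0 encoding; the filename and exact column names are assumptions.

```python
# A hedged sketch of handling the YES=2 / NO=1 coding described above,
# assuming the survey CSV is saved as "survey_lung_cancer.csv" (filename
# and exact column names are assumptions). Mapping 2/1 to 1/0 makes the
# features behave like ordinary booleans for the classifiers.
import pandas as pd

df = pd.read_csv("survey_lung_cancer.csv")

# Recode every 2/1 column to 1/0; GENDER and the target are handled separately.
binary_cols = df.columns.difference(["GENDER", "AGE", "LUNG_CANCER"])
df[binary_cols] = df[binary_cols].replace({2: 1, 1: 0})

df["GENDER"] = df["GENDER"].map({"M": 1, "F": 0})
df["LUNG_CANCER"] = df["LUNG_CANCER"].map({"YES": 1, "NO": 0})

print(df.head())
```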
WORKSHOP 5: Alzheimer's Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. Alzheimer's is a type of dementia that causes problems with memory, thinking, and behavior. Symptoms usually develop slowly and get worse over time, becoming severe enough to interfere with daily tasks. Alzheimer's is not a normal part of aging. The greatest known risk factor is increasing age, and the majority of people with Alzheimer's are 65 and older. But Alzheimer's is not just a disease of old age: approximately 200,000 Americans under the age of 65 have younger-onset Alzheimer's disease (also known as early-onset Alzheimer's). The dataset consists of longitudinal MRI data for 374 subjects aged 60 to 96. Each subject was scanned at least once, and everyone is right-handed. 206 of the subjects were grouped as 'Nondemented' throughout the study. 107 of the subjects were grouped as 'Demented' at the time of their initial visits and remained so throughout the study. 14 subjects were grouped as 'Nondemented' at the time of their initial visit and were subsequently characterized as 'Demented' at a later visit; these fall under the 'Converted' category. Some important features in the dataset are: EDUC: Years of Education; SES: Socioeconomic Status; MMSE: Mini-Mental State Examination; CDR: Clinical Dementia Rating; eTIV: Estimated Total Intracranial Volume; nWBV: Normalized Whole Brain Volume; and ASF: Atlas Scaling Factor. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC curve, distribution of features, feature importance, cross-validation score, predicted versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. WORKSHOP 6: Parkinson Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. The dataset was created by Max Little of the University of Oxford, in collaboration with the National Centre for Voice and Speech, Denver, Colorado, who recorded the speech signals. The original study published the feature extraction methods for general voice disorders. This dataset is composed of a range of biomedical voice measurements from 31 people, 23 with Parkinson's disease (PD). Each column in the table is a particular voice measure, and each row corresponds to one of 195 voice recordings from these individuals (the "name" column). The main aim of the data is to discriminate healthy people from those with PD, according to the "status" column, which is set to 0 for healthy and 1 for PD. The data are in ASCII CSV format; each row of the CSV file contains an instance corresponding to one voice recording, with around six recordings per patient, and the patient's name identified in the first column. Attribute information for this dataset is as follows: name - ASCII subject name and recording number; MDVP:Fo(Hz) - Average vocal fundamental frequency; MDVP:Fhi(Hz) - Maximum vocal fundamental frequency; MDVP:Flo(Hz) - Minimum vocal fundamental frequency; MDVP:Jitter(%), MDVP:Jitter(Abs), MDVP:RAP, MDVP:PPQ, Jitter:DDP - Several measures of variation in fundamental frequency; MDVP:Shimmer, MDVP:Shimmer(dB), Shimmer:APQ3, Shimmer:APQ5, MDVP:APQ, Shimmer:DDA - Several measures of variation in amplitude; NHR, HNR - Two measures of the ratio of noise to tonal components in the voice; status - Health status of the subject: (one) Parkinson's, (zero) healthy; RPDE, D2 - Two nonlinear dynamical complexity measures; DFA - Signal fractal scaling exponent; and spread1, spread2, PPE - Three nonlinear measures of fundamental frequency variation. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC curve, distribution of features, feature importance, cross-validation score, predicted versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
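As a hedged sketch of the Workshop 6 task, the snippet below predicts the "status" column from the voice measures with one of the listed models (Support Vector Machine). The filename follows the UCI archive; the scaling and hyperparameters are illustrative assumptions.

```python
# A hedged sketch of the Workshop 6 task: predict the "status" column from
# the voice measures. The filename "parkinsons.data" follows the UCI
# archive; the model choice and scaling are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

df = pd.read_csv("parkinsons.data")
X = df.drop(columns=["name", "status"])   # drop the subject identifier
y = df["status"]                          # 1 = Parkinson's, 0 = healthy

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# The voice measures sit on very different scales, so scale before the SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.3f}")
print(f"F1-score: {f1_score(y_test, pred):.3f}")
```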
WORKSHOP 7: Liver Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI. The number of patients with liver disease has been rising continuously because of excessive alcohol consumption, inhalation of harmful gases, and intake of contaminated food, pickles, and drugs. This dataset was used to evaluate prediction algorithms in an effort to reduce the burden on doctors. It contains 416 liver patient records and 167 non-liver patient records collected from the North East of Andhra Pradesh, India. The "Dataset" column is a class label used to divide groups into liver patient (liver disease) or not (no disease). The data set contains 441 male patient records and 142 female patient records, and any patient whose age exceeded 89 is listed as being of age "90". Columns in the dataset: Age of the patient; Gender of the patient; Total Bilirubin; Direct Bilirubin; Alkaline Phosphotase; Alamine Aminotransferase; Aspartate Aminotransferase; Total Protiens; Albumin; Albumin and Globulin Ratio; and Dataset: the field used to split the data into two sets (patient with liver disease, or no disease). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC curve, distribution of features, feature importance, cross-validation score, predicted versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
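All seven workshops culminate in a PyQt5 GUI that displays plots such as the confusion matrix and ROC curve. The sketch below shows a standard pattern for embedding a matplotlib figure in a PyQt5 window; the window layout and the placeholder confusion-matrix counts are assumptions, not the book's actual GUI code.

```python
# A hedged sketch of the PyQt5 plotting pattern the workshops rely on:
# embed a matplotlib Figure in a Qt window via FigureCanvasQTAgg. The
# confusion-matrix counts here are placeholders; in the workshops the
# values come from the trained models.
import sys
import numpy as np
from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget, QVBoxLayout
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure

class PlotWindow(QMainWindow):
    def __init__(self, cm, labels):
        super().__init__()
        self.setWindowTitle("Confusion Matrix")
        fig = Figure(figsize=(4, 4))
        ax = fig.add_subplot(111)
        im = ax.imshow(cm, cmap="Blues")
        ax.set_xticks(range(len(labels)))
        ax.set_xticklabels(labels)
        ax.set_yticks(range(len(labels)))
        ax.set_yticklabels(labels)
        ax.set_xlabel("Predicted")
        ax.set_ylabel("True")
        # Annotate each cell with its count.
        for i in range(cm.shape[0]):
            for j in range(cm.shape[1]):
                ax.text(j, i, cm[i, j], ha="center", va="center")
        fig.colorbar(im)
        canvas = FigureCanvas(fig)
        container = QWidget()
        layout = QVBoxLayout(container)
        layout.addWidget(canvas)
        self.setCentralWidget(container)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    # Placeholder counts standing in for a real model's confusion matrix.
    window = PlotWindow(np.array([[50, 5], [8, 37]]), ["no event", "event"])
    window.show()
    sys.exit(app.exec_())
```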
Summary: Deep Learning with Python introduces the field of deep learning using the Python language and the powerful Keras library. Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications. About the Technology: Machine learning has made remarkable progress in recent years. We went from near-unusable speech and image recognition to near-human accuracy. We went from machines that couldn't beat a serious Go player to defeating a world champion. Behind this progress is deep learning, a combination of engineering advances, best practices, and theory that enables a wealth of previously impossible smart applications. About the Book: You'll explore challenging concepts and practice with applications in computer vision, natural-language processing, and generative models. By the time you finish, you'll have the knowledge and hands-on skills to apply deep learning in your own projects. What's Inside: Deep learning from first principles; Setting up your own deep-learning environment; Image-classification models; Deep learning for text and sequences; Neural style transfer, text generation, and image generation. About the Reader: Readers need intermediate Python skills. No previous experience with Keras, TensorFlow, or machine learning is required. About the Author: François Chollet works on deep learning at Google in Mountain View, CA. He is the creator of the Keras deep-learning library, as well as a contributor to the TensorFlow machine-learning framework. He also does deep-learning research, with a focus on computer vision and the application of machine learning to formal reasoning. His papers have been published at major conferences in the field, including the Conference on Computer Vision and Pattern Recognition (CVPR), the Conference and Workshop on Neural Information Processing Systems (NIPS), the International Conference on Learning Representations (ICLR), and others. Table of Contents: PART 1 - FUNDAMENTALS OF DEEP LEARNING: What is deep learning?; Before we begin: the mathematical building blocks of neural networks; Getting started with neural networks; Fundamentals of machine learning. PART 2 - DEEP LEARNING IN PRACTICE: Deep learning for computer vision; Deep learning for text and sequences; Advanced deep-learning best practices; Generative deep learning; Conclusions. Appendix A - Installing Keras and its dependencies on Ubuntu; Appendix B - Running Jupyter notebooks on an EC2 GPU instance.
Derive useful insights from your data using Python. You will learn both basic and advanced concepts, including text and language syntax, structure, and semantics. You will focus on algorithms and techniques, such as text classification, clustering, topic modeling, and text summarization. Text Analytics with Python teaches you the techniques related to natural language processing and text analytics, and you will gain the skills to know which technique is best suited to solve a particular problem. You will look at each technique and algorithm with both a bird's-eye view, to understand how it can be used, and a microscopic view, to understand the mathematical concepts and to implement them to solve your own problems. What You Will Learn: Understand the major concepts and techniques of natural language processing (NLP) and text analytics, including syntax and structure; Build a text classification system to categorize news articles, analyze app or game reviews using topic modeling and text summarization, and cluster popular movie synopses and analyze the sentiment of movie reviews; Implement Python and popular open source libraries in NLP and text analytics, such as the natural language toolkit (nltk), gensim, scikit-learn, spaCy, and Pattern. Who This Book Is For: IT professionals, analysts, developers, linguistic experts, data scientists, and anyone with a keen interest in linguistics, analytics, and generating insights from textual data.
This book provides a collection of comprehensive research articles on data analytics and applications of wearable devices in healthcare. This Special Issue presents 28 research studies from 137 authors representing 37 institutions from 19 countries. To facilitate the understanding of the research articles, we have organized the book to show the various aspects covered in this field, such as eHealth, technology-integrated research, prediction models, rehabilitation studies, prototype systems, community health studies, ergonomics design systems, technology acceptance model evaluation studies, telemonitoring systems, warning systems, applications of sensors in sports studies, clinical systems, feasibility studies, geographical-location-based systems, tracking systems, observational studies, risk assessment studies, human activity recognition systems, impact measurement systems, and a systematic review. We would like to take this opportunity to invite high-quality research articles for our next Special Issue, entitled "Digital Health and Smart Sensors for Better Management of Cancer and Chronic Diseases", as part of the journal Sensors.
Deep learning is often viewed as the exclusive domain of math PhDs and big tech companies. But as this hands-on guide demonstrates, programmers comfortable with Python can achieve impressive results in deep learning with little math background, small amounts of data, and minimal code. How? With fastai, the first library to provide a consistent interface to the most frequently used deep learning applications. Authors Jeremy Howard and Sylvain Gugger, the creators of fastai, show you how to train a model on a wide range of tasks using fastai and PyTorch. You'll also dive progressively further into deep learning theory to gain a complete understanding of the algorithms behind the scenes. You will: Train models in computer vision, natural language processing, tabular data, and collaborative filtering; learn the latest deep learning techniques that matter most in practice; improve accuracy, speed, and reliability by understanding how deep learning models work; discover how to turn your models into web applications; implement deep learning algorithms from scratch; consider the ethical implications of your work; and gain insight from the foreword by PyTorch cofounder Soumith Chintala.
Every day we interact with machine learning systems offering individualized predictions for our entertainment, social connections, purchases, or health. These involve several modalities of data, from sequences of clicks to text, images, and social interactions. This book introduces common principles and methods that underpin the design of personalized predictive models for a variety of settings and modalities. The book begins by revisiting 'traditional' machine learning models, focusing on adapting them to settings involving user data, then presents techniques based on advanced principles such as matrix factorization, deep learning, and generative modeling, and concludes with a detailed study of the consequences and risks of deploying personalized predictive systems. A series of case studies in domains ranging from e-commerce to health, plus hands-on projects and code examples, will give readers understanding of and experience with large-scale real-world datasets and the ability to design models and systems for a wide range of applications.
This open access book presents the first comprehensive overview of general methods in Automated Machine Learning (AutoML), collects descriptions of existing systems based on these methods, and discusses the first series of international challenges of AutoML systems. The recent success of commercial ML applications and the rapid growth of the field have created a high demand for off-the-shelf ML methods that can be used easily and without expert knowledge. However, many of the recent machine learning successes crucially rely on human experts, who manually select appropriate ML architectures (deep learning architectures or more traditional ML workflows) and their hyperparameters. To overcome this problem, the field of AutoML targets a progressive automation of machine learning, based on principles from optimization and machine learning itself. This book serves as a point of entry into this quickly developing field for researchers and advanced students alike, as well as providing a reference for practitioners aiming to use AutoML in their work.
Learn Machine Learning! Machine learning is one of those topics that can be daunting at first blush. It's not clear where to start, what path someone should take, and what APIs to learn in order to get started teaching machines how to learn. This is where Machine Learning by Tutorials comes in! In this book, we'll hold your hand through a number of tutorials, to get you started in the world of machine learning. We'll cover a wide range of popular topics in the field of machine learning, while developing apps that work on iOS devices. Who This Book Is For: This book is for the intermediate iOS developer who already knows the basics of iOS and Swift development, but wants to understand how machine learning works. Topics covered in Machine Learning by Tutorials: CoreML: Learn how to add a machine learning model to your iOS apps, and how to use iOS APIs to access it. Create ML: Learn how to create your own model using Apple's Create ML tool. Turi Create and Keras: Learn how to tune parameters to improve your machine learning model using more advanced tools. Image Classification: Learn how to apply machine learning models to predict objects in an image. Convolutional Networks: Learn advanced machine learning techniques for predicting objects in an image with Convolutional Neural Networks (CNNs). Sequence Classification: Learn how you can use recurrent neural networks (RNNs) to classify motion from an iPhone's motion sensor. Text-to-text Transform: Learn how to use machine learning to convert bodies of text between two languages. By the end of this book, you'll have a firm understanding of what machine learning is, what it can and cannot do, and how you can use machine learning in your next app!
As big data becomes more ubiquitous, businesses are wondering how they can best leverage it to gain insight into their most important business questions. Using machine learning (ML) and deep learning (DL) in big data environments can identify historical patterns and build artificial intelligence (AI) models that can help businesses to improve customer experience, add services and offerings, identify new revenue streams or lines of business (LOBs), and optimize business or manufacturing operations. The power of AI for predictive analytics is being harnessed across all industries, so it is important that businesses familiarize themselves with all of the tools and techniques that are available for integration with their data lake environments. In this IBM® Redbooks® publication, we cover the best practices for deploying and integrating some of the best AI solutions on the market, including: IBM Watson Machine Learning Accelerator (see note for product naming); IBM Watson Studio Local; IBM Power Systems; IBM Spectrum Scale; IBM Data Science Experience (IBM DSX); IBM Elastic Storage Server; Hortonworks Data Platform (HDP); Hortonworks DataFlow (HDF); and H2O Driverless AI. We map out all the integrations that are possible with our different AI solutions and how they can integrate with your existing or new data lake. We also walk you through some of our client use cases and show you how some of the industry leaders are using Hortonworks, IBM PowerAI, and IBM Watson Studio Local to drive decision making. We also advise you on your deployment options, when to use a GPU, and why you should use the IBM Elastic Storage Server (IBM ESS) to improve storage management. Lastly, we describe how to integrate IBM Watson Machine Learning Accelerator and Hortonworks with or without IBM Watson Studio Local, how to access real-time data, and security. Note: IBM Watson Machine Learning Accelerator is the new product name for IBM PowerAI Enterprise. Note: Hortonworks merged with Cloudera in January 2019. The new company is called Cloudera. References to Hortonworks as a business entity in this publication refer to the merged company. Product names beginning with Hortonworks continue to be marketed and sold under their original names.