Recent Developments in Decision Support Systems

Author: Clyde W. Holsapple

Publisher: Springer Science & Business Media

Published: 2013-06-29

Total Pages: 613

ISBN-10: 3662029529

Over the past two decades, many advances have been made in the decision support system (DSS) field. They range from progress in fundamental concepts, to improved techniques and methods, to widespread use of commercial software for DSS development. Still, the depth and breadth of the DSS field continue to grow, fueled by the need to better support decision making in a world that is increasingly complex in terms of the volume, diversity, and interconnectedness of the knowledge on which decisions can be based. This continuing growth is facilitated by increasing computer power and decreasing per-unit computing costs, but it is spearheaded by the multifaceted efforts of DSS researchers. The collective work of these researchers runs from the speculative to the normative to the descriptive. It includes analysis of what the field needs, designs of means for meeting recognized needs, and implementations for study. It encompasses theoretical, empirical, and applied orientations. It is concerned with the invention of concepts, frameworks, models, and languages for giving varied, helpful perspectives. It involves the discovery of principles, methods, and techniques for the expeditious construction of successful DSSs. It aims to create computer-based tools that facilitate DSS development. It assesses DSS efficacy by observing systems, their developers, and their users. This growing body of research continues to be fleshed out and take shape on a strong, but still-developing, skeletal foundation.


System Sciences

Author: IEEE Computer Society

Publisher: Los Alamitos, Calif.: IEEE Computer Society Press

Published: 1991

Total Pages: 754

ISBN-13: 9780818624353


The Theory of Perfect Learning

Author: Nonvikan Karl-Augustt Alahassa

Publisher: Nonvikan Karl-Augustt Alahassa

Published: 2021-08-17

Total Pages: 227

ISBN-13:

Perfect learning exists. By this we mean a learning model that can be generalized and, moreover, that can always fit the test data, as well as the training data, perfectly. This thesis presents many experiments that validate this concept in many ways. The tools are given through the chapters that contain our developments. The classical multilayer feedforward model is reconsidered, and a novel $N_k$-architecture is proposed to fit any multivariate regression task. This model can easily be extended to thousands of layers without loss of predictive power, and has the potential to resolve simultaneously the difficulties of building a model that fits the test data well and does not overfit. Its hyper-parameters (the learning rate, the batch size, the number of training epochs, the size of each layer, and the number of hidden layers) can all be chosen experimentally with cross-validation methods. There is a great advantage in building a more powerful model using the properties of mixture models: they can classify high-dimensional data into a small number of mixture components. This is also the case for the Shallow Gibbs Network model, which we build as a Random Gibbs Network Forest that reaches the performance of the multilayer feedforward neural network with fewer parameters and fewer backpropagation iterations. To make this happen, we propose a novel optimization framework for our Bayesian Shallow Network, called the Double Backpropagation Scheme (DBS), which can also fit the data perfectly with an appropriate learning rate, and which is convergent and universally applicable to any Bayesian neural network problem. The contribution of this model is broad. First, it integrates all the advantages of the Potts model, a very rich random partition model, which we have also modified to propose a Complete Shrinkage version using agglomerative clustering techniques.
The model also takes advantage of Gibbs fields for the structure of its weight precision matrix, mainly through Markov random fields, and has five (5) variant structures: the Full-Gibbs, the Sparse-Gibbs, the Between-layer Sparse Gibbs (B-Sparse Gibbs for short), the Compound Symmetry Gibbs (CS-Gibbs for short), and the Sparse Compound Symmetry Gibbs (Sparse-CS-Gibbs) model. The Full-Gibbs mainly mirrors fully connected models, while the other structures show how the model can be reduced in complexity through sparsity and parsimony. All of these models have been tested experimentally, and the results arouse interest in these structures in the sense that the different structures reach different results in terms of mean squared error (MSE) and relative root mean squared error (RRMSE). For the Shallow Gibbs Network model, we have found the perfect learning framework: the $(l_1, \boldsymbol{\zeta}, \epsilon_{dbs})$-DBS configuration, a combination of the Universal Approximation Theorem and the DBS optimization, coupled with the (dist)-Nearest Neighbor-(h)-Taylor Series-Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model, which is in turn a combination of a nearest-neighbor search for a good train-test association, the Taylor approximation theorem, and the multivariate interpolation method. It indicates that, with an appropriate number $l_1$ of neurons on the hidden layer, an optimal number $\zeta$ of DBS updates, an optimal DBS learning rate $\epsilon_{dbs}$, an optimal distance $dist_{opt}$ in the search for the nearest neighbor in the training dataset for each test point $x_i^{\text{test}}$, and an optimal order $h_{opt}$ of the Taylor approximation for the Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model once the DBS has overfitted the training dataset, the train and test errors converge to zero (0).
Since the Potts model and many random partition models are based on a similarity measure, we open the door to finding sufficient invariant descriptors for any recognition problem involving complex objects such as images, using metric learning and invariance-descriptor tools, to always reach 100% accuracy. This is also possible with invariant networks, which are also universal approximators. Our work closes the gap between theory and practice in artificial intelligence, in the sense that it confirms that it is possible to learn with very small allowed error.
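The abstract notes that hyper-parameters "can be chosen experimentally with cross-validation methods." As an illustrative sketch only (not code from the thesis; the data, the plain nearest-neighbor regressor, and the function names are invented for illustration), the following selects the neighbor count k of a simple k-NN regressor by 5-fold cross-validation:

```python
# Illustrative sketch: k-fold cross-validation to pick one hyper-parameter,
# here the neighbor count k of a plain nearest-neighbor regressor.
# Synthetic data and model; NOT the thesis's N_k-architecture or DBS.
import random

def knn_predict(train, x, k):
    """Average the targets of the k training points nearest to x."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def cv_mse(data, k, folds=5):
    """Mean squared error of k-NN regression under `folds`-fold CV."""
    total, count = 0.0, 0
    for f in range(folds):
        test = data[f::folds]                               # held-out fold
        train = [p for i, p in enumerate(data) if i % folds != f]
        for x, y in test:
            err = knn_predict(train, x, k) - y
            total += err * err
            count += 1
    return total / count

random.seed(0)
# Noisy samples of y = x^2 on [0, 2)
data = [(i / 50, (i / 50) ** 2 + random.gauss(0, 0.1)) for i in range(100)]
best_k = min(range(1, 16), key=lambda k: cv_mse(data, k))
print("selected k:", best_k)
```

The same loop extends to a grid over several hyper-parameters (learning rate, batch size, layer sizes) by replacing `range(1, 16)` with the candidate grid and `cv_mse` with the model's validation error.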


Naked Science

Author: Laura Nader

Publisher: Routledge

Published: 2014-01-02

Total Pages: 340

ISBN-10: 1136667504

Naked Science is about contested domains and includes different science cultures: physics, molecular biology, primatology, immunology, ecology, and the medical, environmental, mathematical, and navigational domains. While the volume rests on the assumption that science is not autonomous, the book is distinguished by its global perspective. Examining knowledge systems within a planetary frame forces thinking about the boundaries that silence or affect knowledge-building. Consideration of ethnoscience and technoscience research within a common framework is overdue for raising questions about deeply held beliefs and assumptions we all carry about scientific knowledge. We need a perspective on how to regard different science traditions, because public controversies should not be about a glorified science or a despicable science.


Advancements in Cybercrime Investigation and Digital Forensics

Author: A. Harisha

Publisher: CRC Press

Published: 2023-10-06

Total Pages: 428

ISBN-10: 1000840832

Vast manpower and resources are needed to investigate cybercrimes. The use of new advanced technologies, such as machine learning combined with automation, is effective in providing significant additional support in preventing cyber-attacks, in the speedy recovery of data, and in reducing human error. This new volume offers a comprehensive study of the advances that have been made in cybercrime investigation and digital forensics, highlighting the most up-to-date tools that help to mitigate cyber-attacks and to extract digital evidence for forensic investigations to recover lost, purposefully deleted, or damaged files. The chapters look at technological cybersecurity tools such as artificial intelligence, machine learning, and data mining for mitigation and investigation.


Readings in Groupware and Computer-supported Cooperative Work

Author: Ronald M. Baecker

Publisher: Morgan Kaufmann

Published: 1993

Total Pages: 904

ISBN-13: 9781558602410

This comprehensive introduction to the field represents the best of the published literature on groupware and computer-supported cooperative work (CSCW). The papers were chosen for their breadth of coverage of the field, their clarity of expression and presentation, their excellence in terms of technical innovation or behavioral insight, their historical significance, and their utility as sources for further reading. The volume serves as a sourcebook to the field for those involved in the development or purchase of groupware technology, as well as for researchers and managers interested in CSCW, groupware, and human-computer interaction.