COST Action IC0702 Summer Course 2009

 


Topics

(listed by lecturer name in alphabetical order; click on a topic for an abstract)


Visual Clustering Methods

James C. Bezdek

Part 1: A primer on cluster analysis

Part 1 gives definitions and notation associated with three types of clustering models: (i) prototype only = V models; (ii) partition only = U models; and (iii) joint (U, V) models. Each type is illustrated by one of its leading examples: (i) self-organizing maps; (ii) single linkage; and (iii) c-means. A fourth example given is the probabilistic mixture model, which is a (U, V) model with extra parameters. Part 1 is 100% tutorial, and is accessible to anyone with a little experience in computational mathematics.
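To make the U/V notation concrete, here is a minimal sketch of hard c-means (k-means), in which the prototype matrix V and the crisp partition matrix U are updated alternately until the prototypes stop moving. The data, parameter names and NumPy implementation are illustrative choices, not taken from the lecture.

    import numpy as np

    def hard_c_means(X, c, n_iter=100, seed=0):
        """Minimal hard c-means: returns a crisp partition matrix U (c x n) and prototypes V."""
        rng = np.random.default_rng(seed)
        V = X[rng.choice(X.shape[0], size=c, replace=False)]            # initial prototypes
        for _ in range(n_iter):
            d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)   # n x c distances
            labels = d.argmin(axis=1)                                    # nearest prototype
            U = np.eye(c)[labels].T                                      # c x n crisp partition matrix
            V_new = np.array([X[labels == i].mean(axis=0) if np.any(labels == i) else V[i]
                              for i in range(c)])                        # recompute prototypes
            if np.allclose(V_new, V):
                break
            V = V_new
        return U, V

    # toy usage: two well-separated blobs
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
    U, V = hard_c_means(X, c=2)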

Part 2: Visual clustering methods

Part 2 gives the definitions and notation associated with the three canonical problems of clustering: (i) pre-clustering tendency assessment; (ii) clustering; and (iii) post-clustering validation. A short history of visual clustering (which began in 1939) is followed by a discussion of eight algorithms developed by the author and various colleagues that address different facets of visual clustering. Part 2 is about 10% tutorial and 90% specialized research. The objective is to present some state-of-the-art research problems and solutions in the growing field of data visualization.
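One widely cited example from this line of work is VAT (Visual Assessment of cluster Tendency) by Bezdek and Hathaway. The sketch below reproduces the reordering idea only roughly: objects are reordered by a Prim-like nearest-neighbour traversal of the dissimilarity matrix, which is then displayed as a grey-scale image in which dark diagonal blocks suggest clusters; details such as the choice of the starting object follow the published algorithm only approximately.

    import numpy as np

    def vat_order(D):
        """Approximate VAT reordering of a square dissimilarity matrix D."""
        n = D.shape[0]
        start = np.unravel_index(np.argmax(D), D.shape)[0]   # an endpoint of the largest dissimilarity
        order, remaining = [start], set(range(n)) - {start}
        while remaining:
            rem = list(remaining)
            sub = D[np.ix_(order, rem)]           # distances from the ordered set to the rest
            j = rem[np.argmin(sub.min(axis=0))]   # closest remaining object (Prim-like step)
            order.append(j)
            remaining.remove(j)
        return D[np.ix_(order, order)]

    # usage (with some data matrix X):
    #   D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    #   import matplotlib.pyplot as plt; plt.imshow(vat_order(D), cmap="gray"); plt.show()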

lecture slides: part 1 - part 2 - part 3 - part 4


Fuzzy and Probabilistic Clustering

Christian Borgelt

The task of clustering consists of dividing a given data set of sample cases into a certain number of groups or clusters, with the general objective that two data points from the same group should be as similar as possible while data points from different groups should be as dissimilar as possible. A common way to formalize similarity in this context is to define a distance function: the smaller their distance, the more similar two data points are.

Once a distance measure has been chosen, the actual task of clustering is carried out in a metric space and can be solved in several different ways. This lecture focuses on so-called prototype-based approaches, in which each cluster is described by a prototype that provides information about the location of a cluster but may also be enhanced by size and shape information. The two most popular methods that rely on such a prototype-based scheme are probabilistic clustering, in the form of estimating a mixture of Gaussian distributions with the help of the expectation-maximization (EM) algorithm, and fuzzy clustering, in the form of fuzzy c-means clustering and its extensions (like Gustafson-Kessel clustering and Gath-Geva clustering).
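For reference, here is a minimal sketch of the standard fuzzy c-means updates (fuzzifier m > 1), alternating between the membership degrees and the prototypes; the variable names are mine and the numerical safeguards are kept deliberately simple.

    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, n_iter=100, eps=1e-9, seed=0):
        rng = np.random.default_rng(seed)
        V = X[rng.choice(X.shape[0], size=c, replace=False)]                  # initial prototypes
        for _ in range(n_iter):
            d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + eps   # n x c distances
            # membership update: u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
            U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
            # prototype update: means weighted by u^m
            W = U ** m
            V = (W.T @ X) / W.sum(axis=0)[:, None]
        return U, V      # U is n x c: degree of membership of each point in each cluster

With m close to 1 the memberships become nearly crisp and the procedure approaches hard c-means; larger m gives softer partitions.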

This lecture introduces the basic ideas of these clustering methods and their formal ingredients and tries to emphasize the similarities and differences between probabilistic and fuzzy approaches. In addition, it presents several extensions of the basic schemes which allow the cluster prototypes to be enhanced with size and shape information, as well as schemes to reasonably constrain the added flexibility in order to maintain the robustness of the basic methods.

lecture slides


Artificial Neural Networks

Christian Borgelt

(Artificial) neural networks are a family of methods from artificial intelligence that take their inspiration from the basic principles of biological neural networks. They are used for pattern recognition, classification, diagnosis, optimization, control, and in knowledge-based systems. The key advantages of (artificial) neural networks are their ability to learn and their inherent parallelism. This lecture introduces the basics of (artificial) neural networks from the perspective of computer science.

There is a vast number of different types of neural networks, like threshold logic units, multilayer perceptrons, radial basis function networks, self-organizing maps, Hopfield networks and recurrent neural networks. Clearly, not all of these types can be dealt with in detail in one lecture, which therefore focuses on multilayer perceptrons as the most commonly used type of neural network.
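As a concrete, if very small, illustration of what a multilayer perceptron computes and how it is trained, here is a sketch of a one-hidden-layer network fitted by gradient descent (error backpropagation) on the XOR problem; the layer sizes, learning rate and squared-error loss are arbitrary illustrative choices, and convergence depends on the random initialization.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden weights
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output weights
    lr = 1.0

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)                     # hidden activations
        o = sigmoid(h @ W2 + b2)                     # network output
        delta_o = (o - y) * o * (1 - o)              # output-layer error term
        delta_h = (delta_o @ W2.T) * h * (1 - h)     # hidden-layer error term (backpropagated)
        W2 -= lr * h.T @ delta_o;  b2 -= lr * delta_o.sum(axis=0)
        W1 -= lr * X.T @ delta_h;  b1 -= lr * delta_h.sum(axis=0)

    print(o.round(2))   # close to [[0], [1], [1], [0]] after training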

In the last 20 years probabilistic graphical models - in particular Bayes networks and Markov networks - have become very popular as tools for structuring uncertain knowledge about a domain of interest and for building knowledge-based systems that allow sound and efficient inferences about this domain.

lecture slides


A Unified View of Uncertainty Theories

Didier Dubois

The modeling of uncertainty is motivated by two concerns: taming the variability of external phenomena and facing incomplete information in decision processes. These two concerns are not unrelated, but they are distinct in the sense that variability is far from being the only cause of ignorance. However, the development of probability theory, as witnessed by the Bayesian school especially, tended to blur this distinction, suggesting that a unique probability distribution is enough to account for both randomness and incomplete information. More recently, new theories of uncertainty have emerged in which partial ignorance is acknowledged and represented separately from randomness: the theories of evidence, possibility and imprecise probabilities. The aim of this talk is to provide a (partially) unified view of these approaches. The main point of the talk is that modern uncertainty theories put together probabilistic and set-valued representations, which allow for a clear separation between randomness and incompleteness.

The basic tool for representing incompleteness of information is a subset of mutually exclusive values, one of which is the true one. This kind of uncertainty is naturally accounted for in logical representations. In the area of numerical modelling, the processing of incomplete information is basically carried out by interval analysis or constraint propagation methods. All of the above theories of uncertainty come down to introducing shades of plausibility within set-based representations of incompleteness.
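A minimal illustration of this set-valued view: if all that is known about two quantities is that each lies in an interval, interval arithmetic propagates that incompleteness instead of pretending to know point values. The bounds below are invented.

    def interval_add(a, b):
        """Sum of two intervals (lo, hi) + (lo, hi)."""
        return (a[0] + b[0], a[1] + b[1])

    def interval_mul(a, b):
        """Product of two intervals: min/max over the corner products."""
        corners = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(corners), max(corners))

    x = (2.0, 3.0)      # "x is somewhere between 2 and 3"
    y = (-1.0, 4.0)     # "y is somewhere between -1 and 4"
    print(interval_add(x, y))   # (1.0, 7.0)
    print(interval_mul(x, y))   # (-3.0, 12.0)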

The course points out some important issues to be addressed with uncertainty theories, such as the difference between generic and singular information, practical representations of imprecise probabilities on the real line, conditioning and fusion, uncertainty propagation, and decision making. The role of possibility theory, as the simplest representation of imprecise probability, will be emphasized.
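Since possibility theory is singled out as the simplest such representation, here is a small sketch of the two set functions it induces from a possibility distribution pi: the possibility of an event is the maximum of pi over that event, and its necessity is one minus the possibility of the complement; any probability measure consistent with pi then satisfies N(A) <= P(A) <= Pi(A). The distribution and event below are invented.

    # possibility distribution over a finite frame (invented, normalized: maximum value is 1)
    pi = {"low": 0.2, "medium": 1.0, "high": 0.6}

    def possibility(event):
        return max(pi[x] for x in event)

    def necessity(event):
        complement = set(pi) - set(event)
        return 1.0 - (max(pi[x] for x in complement) if complement else 0.0)

    A = {"medium", "high"}
    print(possibility(A), necessity(A))   # 1.0 and 0.8 for this example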

lecture slides
accompanying paper


Introduction to Fuzzy Networks

Alexander Gegov
Nedyalko Petrov

The notion of complexity has recently become a serious challenge to scientific research in a multi-disciplinary context. For example, it is quite common to find complex systems in biology, cosmology, engineering, computing, finance and other areas. However, the understanding of complex systems is often a difficult task.

There are two main aspects of complexity: quantitative and qualitative. The quantitative aspect is usually associated with a large scale of an entity or a large number of elements within this entity. The qualitative aspect is often characterised by some uncertainty in the data or knowledge about an entity.

An obvious way of coping with quantitative complexity is to use the concept of a general network. The latter consists of nodes and connections, whereby the nodes represent the elements of the entity and the connections reflect the interactions among these elements. In this case, the scale is reflected by the overall size of the network, whereas the number of elements is given by the number of nodes.

A possible way of dealing with qualitative complexity is to use the concept of a fuzzy network. The latter consists of nodes and connections, whereby the nodes are fuzzy systems and the connections reflect the interactions among these systems. In this case, the uncertainty in the data or knowledge about the entity is reflected by the rule bases of the fuzzy systems and the underlying fuzzy logic.

In the context of these considerations, a fuzzy network represents a natural counterpart of a neural network. Both types of networks are computational-intelligence-based networks with nodes and connections. However, the nodes in a neural network are neurons, whereas the nodes in a fuzzy network are rule bases.
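To make the idea of a network whose nodes are rule bases more tangible, here is a toy sketch (not taken from the tutorial, and without the formal operations it develops): two small Sugeno-style rule bases with singleton consequents are chained so that the crisp output of the first node becomes the input of the second.

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def rule_base(x, rules):
        """One node: weighted average of singleton rule consequents."""
        weights = [mf(x) for mf, _ in rules]
        return sum(w * out for w, (_, out) in zip(weights, rules)) / sum(weights) if sum(weights) else 0.0

    # node 1: maps a sensor reading in [0, 10] to an intermediate "risk" level (rules invented)
    node1 = [(lambda x: tri(x, 0, 0, 5),   0.1),
             (lambda x: tri(x, 0, 5, 10),  0.5),
             (lambda x: tri(x, 5, 10, 10), 0.9)]

    # node 2: maps the risk level produced by node 1 to an alarm strength
    node2 = [(lambda z: tri(z, 0, 0, 0.5),     0.0),
             (lambda z: tri(z, 0, 0.5, 1.0),   0.5),
             (lambda z: tri(z, 0.5, 1.0, 1.0), 1.0)]

    risk = rule_base(7.0, node1)      # the connection: output of node 1 feeds node 2
    alarm = rule_base(risk, node2)
    print(risk, alarm)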

This tutorial introduces the novel concept of a fuzzy network within ten sections. The first section discusses complexity as a systemic feature and the ability of fuzzy systems to handle different attributes of complexity. Section 2 reviews several types of fuzzy systems in the context of systemic complexity, including systems with single, multiple and networked rule bases. Section 3 introduces formal models for fuzzy networks such as Boolean matrices, binary relations, block schemes and topological expressions. Section 4 presents basic operations on nodes in fuzzy networks, including merging and splitting in horizontal, vertical and output context. Section 5 discusses some structural properties of basic operations such as associativity of merging and variability of splitting in horizontal, vertical and output context. Section 6 describes advanced operations on nodes in fuzzy networks, including node transformation for input augmentation, output permutation and feedback equivalence, as well as node identification in horizontal, vertical and output merging. Section 7 shows the application of the theoretical results from Sections 3-6 in feedforward fuzzy networks with single or multiple levels and layers. Section 8 illustrates the application of the theoretical results from Sections 3-6 in feedback fuzzy networks with single or multiple local and global feedback. Section 9 evaluates fuzzy networks using structural metrics for comparison with different types of fuzzy systems, composition into standard fuzzy systems, decomposition into hierarchical fuzzy systems, model performance indicators, as well as by demonstrating some examples and case studies in Matlab. The last section highlights the theoretical significance, the application areas and the methodological impact of fuzzy networks in the context of an overall evaluation of the tutorial contents.

lecture slides


Fuzzy Data in Statistics: Formalization and Main Problems

Gil González-Rodríguez

Fuzzy random variables are employed to deal with random experiments in which the characteristics of interest are imprecise, and their values can be described with fuzzy sets.
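As a simple illustration (not taken from the lecture) of what fuzzy-valued data can look like: each imprecise observation is coded as a triangular fuzzy number (left endpoint, peak, right endpoint). Since triangular fuzzy numbers are closed under addition and positive scalar multiplication, the usual Aumann-type sample mean is again triangular, with parameter-wise averages.

    import numpy as np

    # hypothetical imprecise observations, each a triangular fuzzy number (lo, peak, hi)
    sample = np.array([
        [1.0, 2.0, 4.0],
        [0.5, 1.5, 2.0],
        [2.0, 3.0, 5.0],
    ])

    # Aumann-type sample mean of triangular fuzzy data: average the three parameters
    mean_fuzzy = sample.mean(axis=0)
    print(mean_fuzzy)   # another triangular fuzzy number, here about [1.17, 2.17, 3.67]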

There are two main kinds of fuzzy random variables/objectives, namely:

This lecture refers to the second point and is structured as follows:

lecture slides


Soft Computing for Sensor and Algorithm Fusion

James M. Keller

Sensor and algorithm fusion is playing an increasing role in many application domains. As detection and recognition problems become more complex and costly (for example, landmine detection and automatic activity monitoring), it is apparent that no single source of information can provide the ultimate solution. However, complementary information can be derived from multiple sources. Given a set of outputs from constituent sources, there are many frameworks within which to combine the pieces into a more definitive answer. This tutorial will focus on the fusion of multiple partial confidence values within the framework of fuzzy set theory.

So, the question then becomes: what methodology do we use to combine partial decision information? There are many choices, but I will focus on the use of fuzzy set theoretic mechanisms to fuse confidence from multiple sources. Two general approaches will be considered: fuzzy integrals and fuzzy logic rule-based systems. Fuzzy integrals have a long history and have been studied in the context of pattern recognition and information fusion for several years, having first been introduced for this purpose by Tahani and Keller in 1990. Fuzzy integrals combine the objective evidence supplied by each information source with the expected worth of each subset of information sources (via a fuzzy measure) to assign confidence to hypotheses or to rank alternatives in decision making. This is a nonlinear combination of the information and the worth of that information for the decision in question, dealing with the uncertainty in both forms of data. Different fuzzy measures yield different integration operations, including averaging, linear combinations of order statistics, and many others. Measures can be found by heuristic assignment or via training algorithms. New results with discriminative training will be discussed.
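For concreteness, here is a minimal sketch of the discrete Choquet integral used in this kind of fusion: the source confidences are sorted in decreasing order and accumulated, each weighted by the increment of the fuzzy measure over the corresponding nested subset of sources. The confidences and measure values below are invented; in practice the measure is assigned heuristically or learned.

    def choquet(confidences, measure):
        """Discrete Choquet integral of source confidences w.r.t. a fuzzy measure.

        confidences: dict source -> confidence in [0, 1]
        measure:     function mapping a frozenset of sources to its worth,
                     with measure(empty set) = 0 and measure(all sources) = 1
        """
        order = sorted(confidences, key=confidences.get, reverse=True)
        total, prev_g, chosen = 0.0, 0.0, set()
        for s in order:
            chosen.add(s)
            g = measure(frozenset(chosen))
            total += confidences[s] * (g - prev_g)   # h_(i) * (g(A_i) - g(A_{i-1}))
            prev_g = g
        return total

    # invented example: three detectors and the worth of each subset of them
    g = {frozenset(): 0.0,
         frozenset({"a"}): 0.3, frozenset({"b"}): 0.4, frozenset({"c"}): 0.2,
         frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.5,
         frozenset({"b", "c"}): 0.6, frozenset({"a", "b", "c"}): 1.0}

    print(choquet({"a": 0.9, "b": 0.6, "c": 0.2}, g.get))   # 0.61 for this example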

Next, a fusion system based on a linguistic extension of the Choquet fuzzy integral will be shown. The uncertainty in the data is now expressed as a linguistic vector, i.e., a vector of fuzzy sets. The linguistic Choquet integral is used to fuse both position and confidence uncertainty in the landmine detection scenario. We demonstrate this system on the outputs from three algorithms on data collected from an outdoor test site by the GEO-CENTERS Energy Focusing Ground Penetrating Radar (EFGPR). The results show good improvement in the probability of detection and a reduction in the false alarm rate over the best single algorithm and two numeric fusion schemes.

Fuzzy logic rule-based systems provide another mechanism to fuse the results of different features, classification algorithms and sensors. Such a system employs rules much like those that a human expert might derive. Again, uncertainty in the component parts is modeled by linguistic variables taking on fuzzy sets as values. I will describe the application of fuzzy rule-based classifiers in image processing and landmine detection.

lecture slides


Robust Statistics

Frank Klawonn

Data are often corrupted with outliers or unusual values, either coming from erroneous measurements or inputs or occurring by chance. Classical statistical approaches are usually designed for large sample sizes and ideal data and are therefore often very sensitive to outliers. Robust statistics provides methods that can cope with outliers and imperfect model assumptions.

The tutorial provides an introduction to the ideas of robust statistics including parameter estimation, regression and testing. Examples using the open source statistics software R will be provided.
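A tiny illustration of the basic point (in Python here; the tutorial's own examples use R): a single gross outlier drags the mean and the standard deviation, while the median and the median absolute deviation (MAD) barely move.

    import numpy as np

    clean = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7])
    dirty = np.append(clean, 100.0)            # one gross outlier, e.g. a mistyped value

    for name, x in [("clean", clean), ("with outlier", dirty)]:
        mad = np.median(np.abs(x - np.median(x)))      # median absolute deviation
        print(f"{name:13s} mean={x.mean():6.2f} sd={x.std(ddof=1):6.2f} "
              f"median={np.median(x):6.2f} MAD={mad:5.2f}")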

lecture slides


Fuzzy Modeling: Fundamentals, Design and Challenges

Witold Pedrycz

Fuzzy modeling and fuzzy models have emerged as a novel paradigm firmly rooted in the principles of computing with information granules: fuzzy sets. Over the past decades we have witnessed truly remarkable progress in this area. We have arrived at a wealth of fuzzy models, their design patterns, and ensuing applications. The objective of this tutorial is to offer the audience a comprehensive exposition of the fundamentals, methodology and design practices of fuzzy modeling. We discuss motivating factors behind fuzzy models, bring forward the underlying algorithmic setup and offer a critical assessment of existing development methods by clearly identifying their advantages and possible limitations.
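As one common concrete instance of a fuzzy model (a choice of illustration, not necessarily the form emphasized in the tutorial), here is a minimal first-order Takagi-Sugeno model: Gaussian membership functions granulate the input space, and the output is the membership-weighted average of local linear models.

    import numpy as np

    def gauss(x, center, width):
        """Gaussian membership function defining one information granule."""
        return np.exp(-0.5 * ((x - center) / width) ** 2)

    # two rules: IF x is around center_i THEN y = a_i * x + b_i   (parameters invented)
    rules = [
        {"center": 2.0, "width": 1.5, "a": 0.5,  "b": 1.0},
        {"center": 7.0, "width": 1.5, "a": -0.3, "b": 6.0},
    ]

    def ts_model(x):
        w = np.array([gauss(x, r["center"], r["width"]) for r in rules])  # firing strengths
        y = np.array([r["a"] * x + r["b"] for r in rules])                # local linear models
        return float(np.dot(w, y) / w.sum())                              # weighted average

    print(ts_model(2.0), ts_model(4.5), ts_model(7.0))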

The tutorial is self-contained. As far as prerequisites are concerned, it is anticipated that the audience is familiar with the essentials of fuzzy sets.

lecture slides


An Axiomatic Approach to the Notion of Rational Preference Structures

Bernd Reusch

Decision making means selecting the "best" alternative from a set A = {a_1, ..., a_k}. In classical theory this is done with the help of a so-called utility function, typically u: A -> R (the real numbers). The "best" alternative then simply is the a_i with maximal u(a_i).

The practical situation is more complicated. Usually there is various information about the elements of A and some procedure to extract from it either a utility function or a preference structure on A directly. At least in the latter case there is a need for a definition of "rational preferences" that is independent of utilities or given algorithms. In this contribution we develop an axiomatic theory of "rational preference structures" for this purpose.
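The axioms developed in the contribution are not reproduced here, but two classical rationality conditions on a preference relation, completeness and transitivity, are easy to state and to check mechanically; a small sketch with an invented relation:

    from itertools import product

    A = ["a1", "a2", "a3"]
    # invented weak preference relation: prefers[(x, y)] means "x is at least as good as y"
    prefers = {(x, y): False for x, y in product(A, A)}
    for pair in [("a1", "a1"), ("a2", "a2"), ("a3", "a3"),
                 ("a1", "a2"), ("a2", "a3"), ("a1", "a3")]:
        prefers[pair] = True

    complete = all(prefers[(x, y)] or prefers[(y, x)] for x, y in product(A, A))
    transitive = all(not (prefers[(x, y)] and prefers[(y, z)]) or prefers[(x, z)]
                     for x, y, z in product(A, A, A))
    print(complete, transitive)   # True True: this relation is a complete, transitive (weak) order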


Feature Selection

Michel Verleysen

Feature selection plays an important role in data mining, modelling and related tasks. Indeed, in many situations the number of features measured or acquired is large, simply because the cost of data acquisition and storage has decreased considerably in many application areas. Among these features some are relevant for the problem at hand, while others are less relevant or not relevant at all. However, it is not obvious to know beforehand, or to ask experts, which features are relevant and which are not. Selecting only the important features before any modelling (classification, prediction, clustering) is important for two reasons. First, most if not all data analysis tools suffer from the curse of dimensionality; in other words, their performance decreases when the number of features increases, because of the dimension of the data space. Secondly, in addition to modelling performance, eliminating irrelevant features and understanding which ones contribute most to the problem at hand is important in the application domain, as it might help in understanding the problem itself and in decreasing the acquisition burden and costs.

This tutorial will present state-of-the-art methods in feature selection. It will show that feature selection methods combine the choice of a relevance criterion with a greedy search procedure that looks for an optimal subset of inputs among the (too large) number of possible combinations. The respective advantages of filter, wrapper and embedded methods will be discussed.
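Here is a minimal sketch of the greedy-search idea with a simple filter-style relevance criterion (absolute correlation with the target); a real study would use a stronger criterion such as mutual information, or a wrapper with cross-validated model performance. The data and criterion below are illustrative only.

    import numpy as np

    def relevance(X, y, subset):
        """Filter criterion: mean absolute correlation of the chosen features with y."""
        return np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])

    def forward_selection(X, y, k):
        """Greedy forward search: repeatedly add the feature that most improves the criterion."""
        selected, remaining = [], list(range(X.shape[1]))
        while len(selected) < k:
            best = max(remaining, key=lambda j: relevance(X, y, selected + [j]))
            selected.append(best)
            remaining.remove(best)
        return selected

    # toy data: the target depends on features 0 and 2 only
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    y = 2 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=0.1, size=200)
    print(forward_selection(X, y, k=2))   # picks the two informative features, 2 and 0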

lecture slides