What do we need to build explainable AI systems for the medical domain?


Artificial intelligence (AI) generally and machine learning (ML) specifically demonstrate impressive practical success in many different application domains, e.g. in autonomous driving, speech recognition, or recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods, have even exceeded human performance in visual tasks, particularly in playing games such as Atari, or mastering the game of Go. Even in the medical domain there are remarkable results. However, the central problem of such models is that they are regarded as black-box models: even if we understand the underlying mathematical principles, they lack an explicit declarative knowledge representation and hence have difficulty in generating the underlying explanatory structures. This calls for systems that make decisions transparent, understandable and explainable. A strong motivation for our approach is the rise of legal and privacy requirements. The new European General Data Protection Regulation (GDPR and ISO/IEC 27001), entering into force on May 25th, 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time; however, there must be a possibility to make the results re-traceable on demand. This is beneficial, e.g., for general understanding, for teaching, for learning, and for research, and it can be helpful in court. In this paper we outline some of our research topics in the context of the relatively new area of explainable AI, with a focus on its application in medicine, which is a very special domain. This is due to the fact that medical professionals work mostly with distributed, heterogeneous and complex sources of data. In this paper we concentrate on three sources: images, *omics data and text.
We argue that research in explainable-AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically help to facilitate transparency and trust.


Explainable AI for the Medical Domain. Holzinger, Biemann, Pattichis, Kell.

1 Introduction and Motivation

Artificial intelligence (AI) has a long tradition in computer science and has experienced many ups and downs since its formal introduction as an academic discipline six decades ago \citepHolland:1992:AdaptationBook,RusselNorvig:1995:AIbook. The field recently gained enormous interest, mostly due to the practical successes in Machine Learning (ML). The grand goal of AI is to provide the theoretical fundamentals for ML to develop software that can learn autonomously from previous experience, automatically and with no human-in-the-loop \citepShahriariAdamsFreitas:2016:HumanOutofTheLoop. Ultimately, to reach a level of usable intelligence, we need (1) to learn from prior data, (2) to extract knowledge, (3) to generalize, (4) to fight the curse of dimensionality, and (5) to disentangle the underlying explanatory factors of the data \citepBengioVincent:2013:RepresentationLearning. One grand challenge still remains open: to make sense of the data in the context of an application domain. The quality of data and appropriate features matter most, and previous work has shown that the best-performing methods typically combine multiple low-level features with high-level context \citepGirshickDonahueDarrellMalik:2014:Features. However, the effectiveness of all these AI/ML successes is limited by the algorithms' inability to explain their results to human experts; yet precisely this is a pressing issue in the medical domain.

In the medical domain we are facing complex challenges, particularly in the integration, fusion and mapping of various distributed and heterogeneous data in arbitrarily high dimensional spaces. Consequently, explainable AI in the context of medicine must take into account that diverse data may contribute to a relevant result. This requires that medical professionals have a possibility to understand how and why a machine decision has been made. Moreover, transparent algorithms could appropriately enhance the trust of medical professionals in future AI systems. Research towards building explainable AI systems for application in medicine requires maintaining a high level of learning performance across a range of machine learning and human-computer interaction techniques. There is an inherent tension between machine learning performance (predictive accuracy) and explainability. Often the best-performing methods (e.g., deep learning) are the least transparent, and the ones providing a clear explanation (e.g., decision trees) are less accurate \citepBolognaHayashi:2017:deep.

The performance of algorithms also depends on the choice of data representations; hence much engineering effort goes into the design of pre-processing pipelines and into handcrafted integration, fusion, data transformations, and mappings that result in an appropriate representation. Current automatic learning algorithms still have an enormous weakness: they are unable to extract discriminative knowledge from the data. Consequently, it is of utmost importance to expand the applicability of learning algorithms and hence make them less dependent on feature engineering. As mentioned above, a truly intelligent system must ultimately be able to understand the context, a long-awaited but still far-off goal \citepZadeh:2008:HumanLevelIntelligence.

The topic of explainable AI is of such great importance that the U.S. Defense Advanced Research Projects Agency (DARPA) has recently set an explainable-AI (XAI) program on its agenda \citepGunning:2016:DARPA-explainable-AI. DARPA emphasizes the importance of Human–Computer Interaction (HCI) for machine learning and knowledge discovery/data mining (KDD). This is manifested in the HCI-KDD approach, which fosters integrative ML, i.e. a synergistic combination of diverse methodological approaches in a concerted effort to augment human intelligence with artificial intelligence, and eventually to enable what neither of them could do on their own \citepHolzinger:2012:DATAconf,Holzinger:2013:HCI-KDD,Holzinger:2017:InauguralMAKE,HolzingerGoebelPaladeFerri:2017:Integrative.

In the following we provide a partial overview of the state of the art in explainable models generally, and of selected research on explainable models for images, *omics data and text specifically. This combination of three data sources is needed for future medicine.

2 Explainability

The problem of explainability is as old as AI and may be a result of AI itself: whilst AI approaches demonstrate impressive practical success in many different application domains, their effectiveness is still limited by their inability to "explain" their decisions in an understandable way \citepCoreEtAl:2006:eXAIsystems. Even if we understand the underlying mathematical theories, it is complicated and often impossible to get insight into the internal working of the models and to explain how and why a result was achieved. Explainable AI is a rapidly emerging research area with increasing visibility in the popular press and even the daily press.

In the pioneering days of AI \citepNewellShawSimon:1958:ChessComplexity, the predominant reasoning methods were logical and symbolic. These early AI systems reasoned by performing some form of logical inference on human readable symbols. Interestingly, these early systems were able to provide a trace of their inference steps and became the basis for explanation. There is some related work available on how to make such systems explainable \citepShortliffeBuchanan:1975:InexactReasoning,SwartoutEtAl:1991:Explanations,Johnson:1994:AgentsExplain,LacaveDiez:2002:ExplanationBayesNets.

In the medical domain there is growing demand for AI approaches that are not only well-performing, but also trustworthy, transparent, interpretable and explainable. Methods and models are necessary to reenact the machine decision-making process, and to reproduce and comprehend both the learning and the knowledge extraction process. This is important, because for decision support it is necessary to understand the causality of learned representations \citepPearl:2009:Causality,GershmanHorvitzTenenbaum:2015:ComputationalRationality,PetersJanzigSchoelkopf:2017:CausalityBook.

Understanding, interpreting and explaining are often used synonymously in the context of explainable AI \citepDoranBesold:2017:ExplainableAI, and various techniques of interpretation have been applied in the past. There is a nice discussion of the "myth of model interpretability" by [Lipton(2016)]. In the context of explainable AI, the term "understanding" usually means a functional understanding of the model, in contrast to a low-level algorithmic understanding of it, i.e. seeking to characterize the model's black-box behavior without trying to elucidate its inner workings or its internal representations. [Montavon et al.(2017)Montavon, Samek, and Müller] discriminate in their work between interpretation, which they define as a mapping of an abstract concept into a domain that the human expert can perceive and comprehend; and explanation, which they define as a collection of features of the interpretable domain that have contributed, for a given example, to produce a decision.

We argue that in the medical domain, something like “explainable medicine” would be urgently needed for many purposes including medical education, research and clinical decision making. If medical professionals are complemented by sophisticated AI systems and in some cases even overruled, the human experts must still have a chance, on demand, to understand and to retrace the machine decision process. However, we also point out that it is often assumed that humans are able to explain their decisions. This is often not the case; sometimes experts are not able, or even not willing to provide an explanation.

Explainable-AI calls for confidence, safety, security, privacy, ethics, fairness and trust \citepKiesebergEtAl:2016:TrustDocInLoop, and puts usability on the research agenda, too [Miller et al.(2017)Miller, Howe, and Sonenberg]. All these aspects together are crucial for applicability in medicine generally, and for future personalized medicine specifically \citepHamburgCollins:2011:PersonalizedMedicine.

3 Explainable Models

We can distinguish two types of explainability/interpretability, which can be denominated with Latin names used in law \citepFellmethHorwitz:2009:LatinLaw: post-hoc explainability ("after this"), occurring after the event in question, e.g. explaining what the model predicts in terms of what is readily interpretable; and ante-hoc explainability ("before this"), occurring before the event in question, e.g. incorporating explainability directly into the structure of an AI model (explainability by design).

Post-hoc systems aim to provide local explanations for a specific decision and to make it reproducible on demand (instead of explaining the whole system's behaviour). A representative example is LIME (Local Interpretable Model-Agnostic Explanations) developed by [Ribeiro et al.(2016b)Ribeiro, Singh, and Guestrin], which is a model-agnostic system, where $x \in \mathbb{R}^d$ is the original representation of an instance being explained, and $x' \in \{0,1\}^{d'}$ denotes a binary vector for its interpretable representation (e.g. $x$ may be a feature vector containing word embeddings, with $x'$ being the bag of words). The goal is to identify an interpretable model over the interpretable representation that is locally faithful to the classifier. The explanation model is $g \in G$, where $G$ is a class of potentially interpretable models, such as linear models, decision trees, or rule lists; given a model $g$, it can be visualized as an explanation to the human expert (for details please refer to \citepRibeiroSinghGuestrin:2016:ModelAgnosticInterpret). Another example of a post-hoc system is BETA (Black Box Explanations through Transparent Approximations), a model-agnostic framework for explaining the behavior of any black-box classifier by simultaneously optimizing for fidelity to the original model and interpretability of the explanation, introduced by [Lakkaraju et al.(2017)Lakkaraju, Kamar, Caruana, and Leskovec]. [Bach et al.(2015)Bach, Binder, Montavon, Klauschen, Müller, and Samek] presented a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, which allows visualizing the contributions of single pixels to predictions for kernel-based classifiers over bag-of-words features and for multilayered neural networks.
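To make the idea of a local surrogate concrete, a minimal LIME-style explanation can be sketched in a few lines of NumPy. This is an illustrative sketch under simplifying assumptions, not the LIME library itself: the function name `lime_style_explanation`, the Gaussian perturbation scale and the exponential proximity kernel are all our own choices.

```python
import numpy as np

def lime_style_explanation(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a local linear surrogate to a black-box predict_fn around instance x.

    Perturbs x with Gaussian noise, weights samples by an exponential
    proximity kernel (locality), and solves a weighted least-squares problem.
    Returns one importance coefficient per feature.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # 1. Sample perturbations around the instance being explained.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    y = np.array([predict_fn(z) for z in Z])
    # 2. Weight each sample by its proximity to x.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 3. Weighted least squares: fit an interpretable linear model locally.
    Zb = np.hstack([Z, np.ones((n_samples, 1))])   # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Zb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature importances (intercept dropped)

# Hypothetical black box: only the first feature matters near x = [1, 1].
black_box = lambda z: 3.0 * z[0] + 0.0 * z[1]
phi = lime_style_explanation(black_box, np.array([1.0, 1.0]))
```

Because the toy black box is exactly linear, the surrogate recovers its coefficients; for a real nonlinear classifier the coefficients describe only the local behaviour around $x$.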

Ante-hoc systems are interpretable by design, moving towards glass-box approaches \citepHolzingerEtAl:2017:glassbox; typical examples include linear regression, decision trees and fuzzy inference systems. The latter have a long tradition, can be designed from expert knowledge or from data, and provide, from the viewpoint of HCI, a good framework for the interaction between human expert knowledge and hidden knowledge in the data \citepGuillaume:2001:InterpretableFuzzy. A further example was presented by [Caruana et al.(2015)Caruana, Lou, Gehrke, Koch, Sturm, and Elhadad], where high-performance generalized additive models with pairwise interactions (GAMs) were applied to problems from the medical domain, yielding intelligible models which uncovered surprising patterns in the data that had previously prevented complex learned models from being fielded in this domain. Importantly, they demonstrated the scalability of such methods to large data sets containing hundreds of thousands of patients and thousands of attributes, while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods. A further example of ante-hoc methods can be seen in [Poulin et al.(2006)Poulin, Eisner, Szafron, Lu, Greiner, Wishart, Fyshe, Pearcy, MacDonell, and Anvik], who described a framework for visually explaining the decisions of any classifier that is formulated as an additive model, and showed how to implement this framework in the context of three models: naïve Bayes, linear support vector machines and logistic regression, which they implemented successfully into a bioinformatics application \citepSzafronEtAl:2004:ProteomeAnalyst.
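The additive structure exploited by such frameworks can be sketched for logistic regression: since the logit decomposes exactly into a bias plus one term per feature, each term can be shown to the expert as that feature's evidence for or against the decision. The weights and inputs below are hypothetical, for illustration only.

```python
import numpy as np

def additive_contributions(weights, bias, x):
    """Per-feature evidence for an additive model such as logistic regression.

    The logit decomposes exactly as bias + sum_i w_i * x_i, so each term
    w_i * x_i is that feature's contribution to the decision.
    """
    contrib = weights * x
    logit = bias + contrib.sum()
    prob = 1.0 / (1.0 + np.exp(-logit))
    return contrib, prob

w = np.array([1.2, -0.8, 0.1])   # hypothetical learned weights
contrib, p = additive_contributions(w, bias=-0.3, x=np.array([2.0, 1.0, 0.0]))
```

The contributions sum (with the bias) back to the exact logit, so the visualization is faithful to the model rather than an approximation of it; this is the key property that makes additive models ante-hoc explainable.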

3.1 Example: Interpreting a Deep Neural Network

Deep neural networks (DNNs), particularly convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been shown to be applicable to a wide range of practical problems, from image recognition \citepSimonyanZisserman:2014:DeepImageRecognition and image classification \citepEstevaThrun:2017:DermaNN to movement recognition \citepSinghEtAl:2017:RNN-AAL. At the same time these approaches are also theoretically interesting, because humans also organize their ideas hierarchically \citepBengioY:2009:deepLearning,Schmidhuber:2015:DLOverview.

Basically, a neural network (NN) is a collection of neurons organized in a sequence of multiple layers, where neurons receive as input the neuron activations from the previous layer, and perform a simple computation (e.g. a weighted sum of the input followed by a nonlinear activation). The neurons of the network jointly implement a complex nonlinear mapping from the input to the output. This mapping is learned from the data by adapting the weights of each individual neuron using backpropagation, which repeatedly adjusts the weights of the connections in the network in order to minimize the difference between the current output vector and the desired output vector. As a result of the weight adjustments, internal hidden units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units (refer to the original paper of [Rumelhart et al.(1986)Rumelhart, Hinton, and Williams] and the review by [Widrow and Lehr(1990)] for an overview).
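The forward pass and weight-adjustment loop described above can be sketched end-to-end on the classic XOR task, whose regularities must be captured by hidden units. This is a toy illustration in NumPy; the architecture, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so hidden units are essential.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input  -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

for _ in range(20000):
    # Forward pass: weighted sums followed by nonlinear activations.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backpropagation: push the output error back through both layers.
    dy = (y - t) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ dy; b2 -= 1.0 * dy.sum(0)
    W1 -= 1.0 * X.T @ dh; b1 -= 1.0 * dh.sum(0)

mse = float(np.mean((y - t) ** 2))
```

After training, the hidden units have come to represent features of the task (here, combinations of the two inputs) that no single input or output unit encodes, exactly as described in the text.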

Typically, deep neural networks are trained using supervised learning on large and carefully annotated data sets. However, the need for such data sets restricts the space of problems that can be addressed. This has led to a proliferation of deep learning results on the same tasks using the same well-known data sets \citepRolnickEtAl:2017:DeepRobust.

Annotated data is extremely difficult to obtain, especially for classification tasks with large numbers of classes (requiring extensive annotation) or with fine-grained classes (requiring skilled annotation by domain experts). Consequently, annotation can be very expensive and, for tasks requiring expert knowledge, may simply be unattainable at large scale - which is often a huge problem in the medical domain. Just to illustrate this problem on a popular example: the collection of ImageNet data required more than a year of human labor on Amazon Mechanical Turk. A large-scale ontology of images is a critical resource for developing advanced, large-scale content-based image search and image understanding algorithms, as well as for providing critical training and benchmarking data for such algorithms \citepDengLiEtAl:2009:ImageNet.

[Montavon et al.(2017)Montavon, Samek, and Müller] provide an excellent example of the problem of interpreting a concept learned by a deep neural network. A learned concept to be interpreted can be represented by neurons in the top layer. The problem is that top-layer neurons are abstract and not perceivable by a human, whereas the input domain of the network (whether image or text) is usually interpretable. Thus something is necessary in the input domain which is both interpretable and representative of the abstract learned concept; one possibility is to make use of the activation maximization principle \citepErhanBengioCourvilleVincent:2009:TechnicalReportVisDeep.

Activation maximization can be used as an analysis framework that searches for an input pattern producing a maximum model response for a specific quantity of interest \citepBerkes:Wiskott:2006:QuadraticForms,SimonyanZisserman:2014:DeepImageRecognition. Consider a neural network classifier mapping data points $x$ to a set of classes $(\omega_c)_c$. The output neurons encode the modeled class probabilities $p(\omega_c \mid x)$, and a prototype $x^*$ representative of the class $\omega_c$ can be found by optimizing:

$$\max_x \; \log p(\omega_c \mid x) - \lambda \|x\|^2$$

The class probabilities modeled by the neural net are functions with a gradient; consequently, a widely used technique in AI, gradient ascent, which aims at maximizing an objective function (as opposed to gradient descent, which minimizes one), can be applied \citepZinkevich:2003:gradientAscent. The second term of the objective is an $\ell_2$-norm regularizer that implements a preference for inputs close to the origin. When applied to image classification, prototypes thus take the form of mostly gray images, with only a few edge and color patterns at strategic locations \citepSimonyanZisserman:2014:DeepImageRecognition.
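As a toy illustration of this objective, gradient ascent on $\log p(\omega_c \mid x) - \lambda\|x\|^2$ can be run against a small linear softmax "classifier" standing in for a network. The weights are hypothetical; for this concave objective the ascent converges to the unique prototype.

```python
import numpy as np

# Toy differentiable "classifier": a 2-class linear softmax model.
W = np.array([[2.0, -1.0], [-2.0, 1.0]])   # hypothetical class templates

def log_prob(x, c):
    """log p(omega_c | x) under the linear softmax model."""
    logits = W @ x
    return logits[c] - np.log(np.exp(logits).sum())

def grad_log_prob(x, c):
    p = np.exp(W @ x); p /= p.sum()
    return W[c] - p @ W            # gradient of the log-softmax

def activation_maximization(c, lam=0.1, lr=0.1, steps=500):
    """Gradient ascent on log p(omega_c | x) - lam * ||x||^2."""
    x = np.zeros(2)
    for _ in range(steps):
        x += lr * (grad_log_prob(x, c) - 2 * lam * x)
    return x

proto = activation_maximization(c=0)
```

The resulting prototype scores a high class probability while the regularizer keeps it close to the origin, mirroring the "mostly gray images" effect described in the text.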

In order to focus on higher-probability regions of the input space, the $\ell_2$-norm regularizer can be replaced by a data density model $p(x)$, called an "expert" by [Montavon et al.(2017)Montavon, Samek, and Müller]. This leads to the following optimization problem:

$$\max_x \; \log p(\omega_c \mid x) + \log p(x)$$

Here, the prototype $x^*$ is encouraged to simultaneously produce a strong class response and to resemble the data. By application of Bayes' rule, the newly defined objective can be identified, up to modeling errors and a constant term, as the class-conditioned data density $p(x \mid \omega_c)$. The learned prototype thus corresponds to the most likely input for the class $\omega_c$. A possible choice for the expert is the Gaussian Restricted Boltzmann Machine (RBM). The RBM is a two-layer, bipartite, undirected graphical model with a set of binary hidden units $h_j$, a set of (binary or real-valued) visible units $x$, and symmetric connections between the two layers represented by a weight matrix $W$. The probabilistic semantics of an RBM is defined by its energy function (for details see the chapter by [Hinton(2012)]). Its probability function can be written as:

$$\log p(x) = \sum_j f_j(x) - \frac{1}{2}\|x\|^2 + \text{const.}$$

where $f_j(x) = \log\big(1 + \exp(w_j^\top x + b_j)\big)$ are factors with parameters learned from the data.

When interpreting more complex concepts such as natural image classes, other density models such as convolutional RBMs \citepLeeNg:2009:convolutionalDeepBelief or pixel RNNs \citepOordEtAl:2016:PixelRNN are better suited.

The selection of the expert $p(x)$ plays an important role. The relation between the expert and the resulting prototype is shown qualitatively in Figure 1 by [Montavon et al.(2017)Montavon, Samek, and Müller]. Here we see four different cases: In case (a) the expert is absent, i.e. the optimization problem reduces to the maximization of the class probability function $p(\omega_c \mid x)$.

In case (d) we see the other extreme, i.e. the expert is overfitted on some data distribution, and thus the optimization problem becomes essentially the maximization of the expert $p(x)$ itself.

Figure 1: Four cases illustrating how the "expert" $p(x)$ affects the prototype found by activation maximization. The horizontal axis represents the input space, and the vertical axis represents the probability (extreme case (a): expert is absent; extreme case (d): expert is overfitted). Image source: \citepMontavonSamekMueller:2017:InterpertingDL.

When using activation maximization for the purpose of model validation, an overfitted expert (case (d) in Figure 1) must be especially avoided, as the latter could hide interesting failure modes of the model $p(\omega_c \mid x)$. A slightly underfitted expert (case (b)), e.g. one that simply favors images with natural colors, can already be sufficient. On the other hand, when using AM to gain knowledge on a correctly predicted concept $\omega_c$, the focus should be to prevent underfitting. Indeed, an underfitted expert would expose optima of $p(\omega_c \mid x)$ potentially distant from the data, and therefore the prototype $x^*$ would not be truly representative of $\omega_c$.

In certain applications, data density models can be hard to learn, or they can be so complex that maximizing them becomes difficult or even intractable. Therefore, a useful alternative class of unsupervised models are generative models (e.g., Boltzmann machines, variational Autoencoders, etc.) which do not provide the density function directly, but are able to sample from it, usually via the following two steps:

  1. Sample from a simple distribution $q(z)$ defined in an abstract code space $\mathcal{Z}$;

  2. Apply to the sample a decoding function $g : \mathcal{Z} \to \mathcal{X}$ that maps it back to the original input domain.
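The two steps can be sketched with a Gaussian code distribution and a hypothetical fixed linear decoder standing in for a learned decoder network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: sample codes z from a simple distribution q(z) in code space.
z = rng.normal(size=(1000, 2))

# Step 2: decode each code back to the input domain with g(z).
# Hypothetical fixed linear decoder: stretches and shifts the code.
A = np.array([[2.0, 0.0], [0.5, 1.0]])
mu = np.array([1.0, -1.0])
g = lambda z: z @ A.T + mu
x = g(z)
```

Even though the density of the decoded samples is never written down explicitly, the procedure produces draws from it, which is exactly what such generative models offer in place of an explicit density function.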

One suitable model is the generative adversarial network (GAN) introduced by [Goodfellow et al.(2014)Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio]. It consists of two models: a generative model $G$ that captures the data distribution, and a discriminative model $D$ that estimates the probability that a sample came from the training data rather than from $G$. The training procedure for $G$ is to maximize the probability of $D$ making an error, which works like a minimax (minimizing the possible loss for a worst-case maximum loss) two-player game. In the space of arbitrary functions $G$ and $D$, a unique solution exists, with $G$ recovering the training data distribution and $D$ equal to $\frac{1}{2}$ everywhere; in the case where $G$ and $D$ are defined by multi-layer perceptrons, the entire system can be trained with backpropagation.

To learn the generator's distribution $p_g$ over data $x$, a prior $p_z(z)$ must be defined on the input noise variables $z$, and then a mapping to the data space as $G(z; \theta_g)$, where $G$ is a differentiable function represented by a multi-layer perceptron with parameters $\theta_g$. The second multi-layer perceptron $D(x; \theta_d)$ outputs a single scalar: $D(x)$ represents the probability that $x$ came from the data rather than from $p_g$. $D$ can be trained to maximize the probability of assigning the correct label to both training examples and samples from $G$. Simultaneously, $G$ can be trained to minimize $\log(1 - D(G(z)))$; in other words, $D$ and $G$ play the following two-player minimax game with value function $V(G, D)$:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

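The GAN value function can be estimated by Monte-Carlo sampling. As a sketch on toy one-dimensional data, with the generator trivially matching the data distribution and the discriminator fixed at its equilibrium value $D = \frac{1}{2}$ everywhere, the estimate reduces to the known equilibrium value $-\log 4$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte-Carlo estimate of V(D, G) =
#   E_{x~p_data}[log D(x)] + E_{z~p_z}[log(1 - D(G(z)))]
x_data = rng.normal(loc=2.0, size=10000)     # samples from p_data
z = rng.normal(size=10000)                   # input noise ~ p_z
G = lambda z: 2.0 + z                        # generator matching p_data
D = lambda x: np.full_like(x, 0.5)           # discriminator at equilibrium

V = np.mean(np.log(D(x_data))) + np.mean(np.log(1.0 - D(G(z))))
```

With $D = \frac{1}{2}$, both expectations equal $\log \frac{1}{2}$ regardless of the samples, so the estimate is exactly $-\log 4 \approx -1.386$, the value the minimax game attains when $G$ has recovered the data distribution.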
[Nguyen et al.(2016)Nguyen, Dosovitskiy, Yosinski, Brox, and Clune] proposed to build a prototype for $\omega_c$ by incorporating such a generative model in the activation maximization framework. The optimization problem is redefined as:

$$\max_{z \in \mathcal{Z}} \; \log p(\omega_c \mid g(z)) - \lambda \|z\|^2$$

where the first term is a composition of the newly introduced decoder $g$ and the original classifier, and where the second term is an $\ell_2$-norm regularizer in the code space. Once a solution $z^*$ to the optimization problem is found, the prototype for $\omega_c$ is obtained by decoding the solution, that is, $x^* = g(z^*)$.

The $\ell_2$-norm regularizer in the input space can be understood, in the context of image data, as favoring gray-looking images. The effect of the $\ell_2$-norm regularizer in the code space can instead be understood as encouraging codes that have high probability. High-probability codes do not necessarily map to high-density regions of the input space; for more details please refer to the excellent tutorial by [Montavon et al.(2017)Montavon, Samek, and Müller].

4 Explainable Models for Image Data

One technique which is highly interesting for the medical domain, e.g. for images generated by digital pathology (which are orders of magnitude larger than, e.g., radiological images) \citepHolzingerEtAl:2017:AugmentedPathologist, is the use of deconvolutional networks \citepZeilerEtAl:2010:DeconvolutionalNets. These provide an excellent framework that permits the unsupervised construction of hierarchical image representations and thus enables visualization of the layers of convolutional networks \citepZeilerFergus:2014:VisUnderstandConvNets.

Understanding the operation of a convolutional neural network requires the interpretation of feature activity in intermediate layers, and these can be mapped back to the input image space, showing what input pattern originally caused a given activation in the feature maps. This brings us to enormously important fundamental research opportunities in causality \citepKrynskiTenenbaum:2007:CausalityUncertainty,Pearl:2009:Causality,Bottou:2014MLreasoning - which is novel in the context of personalized (P4) medicine.

From the perspective of fundamental research, the gained insights might contribute towards building machines that learn and think like people \citepLakeSalakTenenbaum:2015:ConceptLearning,LakeUlmanTenenbaumGershman2016:MachinesThink.

One possibility to make e.g. deep networks \citepLeCunBengioHinton:2015:DeepLearningNature transparent is to generate image captions in order to train a second network with explanations without explicitly identifying the semantic features of the original network.

Of enormous importance is the possibility to extend the approaches used to generate image captions to train a second deep network to generate explanations \citepHendricksAkataEtAl:2016:VisualExplanations – which is at the intersection of images and text and can be tackled with Visual Question Answering (VQA) \citepGoyalEtAl:2016:VisualQuestionAnswering. While this second network is not guaranteed to provide reasons correlated to those used in the original network, it seems promising to use neural attention mechanisms to be able to trace which part of the input contributed most to which part of the output, see e.g. \citepPavlopoulosEtAl:2017:DeeperAttention.

We envision alternative machine learning techniques that learn more structured, interpretable, and causal models. These can include Bayesian Rule Lists \citepLethamRudinEtAl:2015:InterpretableClassifiers, and in order to learn richer, more conceptual and generative models, techniques such as Bayesian Program Learning \citepLakeSalakTenenbaum:2015:ConceptLearning, learning models of causal relationships \citepMaierEtAl:2010:LearningCausalModels,MaierEtAl:2013:LearningCausalModels,AalenEtAl:2016:BelieveDAGs, and stochastic grammars to learn more interpretable structures \citepBrendelTodorovic:2011:LearningGraphsHumanActivities,ZhouTorre:2012:FactorizedGraphMatching,ParkNieZhu:2016:AndOrGrammar. Very useful for building explainable systems in the medical domain is generally genetic programming \citepKoza:1994:GeneticProgramming,PenaSipper:1999:FuzzyGenetic,TsakonasEtAl:2004:GeneticMedical, and specifically evolutionary algorithms \citepWangPalade:2011:InterpretableFuzzy,HolzingerKetAl:2014:DarwinLamarck,HolzingerEtAl:2016:iMLExperiment.

5 Explainable Features and Models in Images and *omics data

5.1 Image Analysis using Multiscale AM-FM Image Decompositions

Amplitude Modulation – Frequency Modulation (AM-FM) decompositions provide meaningful representations of medical image and video content. An image is decomposed into a sum of AM-FM components that can be easily visualized for humans seeking to understand essential image content.

Input images can be expressed as a sum of AM-FM components, where the challenge is to decompose any input image $I(x, y)$ into a sum of bi-dimensional AM-FM harmonics of the form

$$I(x, y) = \sum_n a_n(x, y) \cos \varphi_n(x, y)$$

where $a_n$ denotes a slowly-varying amplitude function, $\varphi_n$ denotes the phase, and $n$ indexes the different AM-FM harmonics. To each phase function one can associate an instantaneous frequency vector field defined as $\nabla \varphi_n(x, y)$. Finding the components from the bidimensional signal is called the decomposition problem \citepClauselOberlinPerrier:2015:WaveletAMFM.
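In one dimension the decomposition idea can be illustrated by demodulating a single synthetic AM-FM harmonic with an FFT-based analytic signal (a discrete Hilbert transform); the chirp parameters below are made up for the example.

```python
import numpy as np

def analytic_signal(s):
    """FFT-based analytic signal (discrete Hilbert transform, even length)."""
    n = len(s)
    S = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0   # double positive frequencies
    h[n // 2] = 1.0     # Nyquist bin (n even)
    return np.fft.ifft(S * h)

# Synthesize one AM-FM harmonic a(t) * cos(phi(t)) and demodulate it.
t = np.linspace(0, 1, 1024, endpoint=False)
a = 1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)     # slowly varying amplitude
phi = 2 * np.pi * (60 * t + 10 * t ** 2)      # phase: chirp from 60 to 80 Hz
s = a * np.cos(phi)

z = analytic_signal(s)
inst_amp = np.abs(z)                                         # estimate of a(t)
inst_freq = np.gradient(np.unwrap(np.angle(z)), t) / (2 * np.pi)
```

Away from the signal edges, the recovered instantaneous amplitude tracks $a(t)$ and the instantaneous frequency tracks $\varphi'(t)/2\pi$ (here $60 + 20t$ Hz), which is the 1-D analogue of the instantaneous frequency vector field $\nabla \varphi_n$ used for images.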

We provide an example in Figure 2 of a symptomatic stroke plaque and an asymptomatic plaque in ultrasound imaging of the carotid. In the top three rows of Figure 2, we have a symptomatic example that can be used to demonstrate several issues associated with high-risk cases. First, we have large dark regions corresponding to lipids or other dangerous components. Second, these dark plaque regions are located very close to the plaque surface. However, in the original images, we cannot see any structure over these dark regions. A very rich plaque surface structure is revealed by the FM reconstructions of the second row. Starting from the very-low to the high frequency scales, the instantaneous frequency can be seen adjusting to the local texture content, with some sharp changes around different structures. In contrast, the asymptomatic plaque image reconstructions of the last two rows do not include significant low-intensity components. The high-intensity components of the fourth row (right image) dominate the reconstruction. There is also more regularity (homogeneity) in the asymptomatic reconstructions of the last row. Far more variability and heterogeneity are evident in the symptomatic FM reconstructions.

Figure 2: Multi-scale AM-FM decomposition based on fixed scales (non-adaptive). In the top three rows, we have images from a symptomatic plaque. In the bottom two rows is an asymptomatic example. int = intensity

AM-FM decompositions have been enabled by the introduction of new demodulation methods, as summarized in \citepMurrayEtAl:2010:MultiscaleAMFM,MurrayEtAl:2012:MultiscaleImaging, and remain an active research topic that can be very beneficial for context understanding; a summary of several medical applications of novel multi-scale AM-FM methods can be found in our recently published survey \citepMurrayEtAl:2012:MultiscaleImaging. Earlier work with AM-FM models had demonstrated their promise with textured images, as in fingerprint image classification \citepPattichisEtAl:2001:FingerprintAMFM, tree image analysis of growth seasons \citepRamachandranEtAl:2011:TreeImageGrowth, non-stationary wood-grain characterization, and other texture images \citepKokkinosEtAl:2009:TextureAnalysis.

The introduction of a multiscale approach in \citepMurrayEtAl:2010:MultiscaleAMFM demonstrated that the method can be used to reconstruct general images. In particular, a multi-scale AM-FM representation led to the important application of population screening for diabetic retinopathy as documented in \citepAgurtoEtAl:2010:MultiscaleDiabeticRetinopathy, \citepRahimEtAl:2015:Retinopathy, hysteroscopy image assessment \citepConstantinouEtAl:2012:AdaptiveMultiscale, fMRI and MRI image analysis \citepLoizouEtAl:2011:MultiscaleMRI, and atherosclerotic plaque ultrasound image and video analysis \citepLoizouEtAl:2011:MultiscaleCarotid. Alternatively, the definition of multidimensional AM-FM transforms over curvilinear coordinate systems has led to the earlier development of very low bitrate video coding as demonstrated in \citepLeePattichisBovik:2001:FoveatedVideo,LeePattichisBovik:2002:FoveatedQuality.

Complex wavelets can also be very powerful, e.g. in the analysis of images of electrophoretic gels used in the analysis of protein expression levels in living cells, where much of the positional information of a data feature is carried in the phase of a complex transform. Complex transforms allow explicit specification of the phase, and hence of the position of features in the image. Registration of a test gel to a reference gel is achieved by using a multiresolution movement map derived from the phase of a complex wavelet transform (the Q-shift wavelet transform) to dictate the warping directly via movement of the nodes of a Delaunay-triangulated mesh of points. This warping map is then applied to the original untransformed image such that the absolute magnitude of the spots remains unchanged. The technique is general to any type of image. Results are presented for a simple computer simulated gel, a simple real gel registration between similar “clean” gels with local warping vectors distributed along one main direction, a hard problem between a reference gel and a “dirty” test gel with multi-directional warping vectors and many artifacts, and some typical gels of present interest in post-genomic biology. The method compares favourably with others, since it is computationally rapid, effective and entirely automatic \citepWoodwardRowlandKell:2004:proteomeImages.
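A much simpler 1-D analogue illustrates the core idea that positional information lives in the phase: phase correlation recovers a translation between two signals purely from the phase of their cross-power spectrum (the Q-shift wavelet approach above generalizes this to local, multiresolution movement maps driving the mesh warp). A hedged sketch, with a synthetic Gaussian "spot" standing in for a gel feature:

```python
import numpy as np

def phase_correlation_shift(ref, test):
    """Recover an integer translation between two 1-D signals from the
    phase of their normalized cross-power spectrum."""
    R, T = np.fft.fft(ref), np.fft.fft(test)
    cross = np.conj(R) * T
    cross /= np.abs(cross) + 1e-12      # discard magnitude, keep only phase
    corr = np.fft.ifft(cross).real      # sharp peak at the displacement
    shift = int(np.argmax(corr))
    n = len(ref)
    return shift if shift <= n // 2 else shift - n  # unwrap circular shift

n = np.arange(128)
ref = np.exp(-0.5 * ((n - 40) / 4.0) ** 2)  # a Gaussian "spot"
test = np.roll(ref, 5)                      # the same spot, displaced
shift = phase_correlation_shift(ref, test)  # recovers the 5-sample shift
```

Because the magnitude is discarded entirely, the estimate depends only on phase, which is why complex transforms that expose phase explicitly are so effective for registration.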

5.2 Meaningful AM-FM Decompositions and Features in Artificial Neural Networks

A key purpose of our proposal for explainable methods is to extend current deep learning methods to incorporate meaningful AM-FM decompositions and features into the classification models. Here, we propose to investigate the use of AM-FM based Artificial Neural Networks (AM-FM ANNs), where two separate approaches remain open for investigation: (i) Multiscale AM-FM ANNs, and (ii) Hybrid AM-FM ANN architectures. In our first system, we will investigate the use of AM-FM derived features in an extended ANN. Here, our goal is to extract multiscale feature maps based on the instantaneous amplitude and the instantaneous frequency that will be fed as inputs to a multilayer neural network for medical image classification purposes. In our second system, we want to develop a hybrid system by incorporating AM-FM decomposition in a hybrid ResNet architecture. Here, we are motivated by the fact that ResNet models have provided excellent classification results through the use of convolutional neural networks \citepJegouEtAl:2012:AggregatingLocalImage,XieEtAl:2016:AggregatedResidualDeep,HuangPeng:2017:DeepMetric,Taki:2017:DeepResidualNetworks.

The lower layers of the ResNet architecture make heavy use of convolutional layers to represent the input. ResNet relies on the use of skip connections to compute residuals resulting from the use of a group of convolutional layers.

Unfortunately, the use of 152 layers makes it very difficult to understand the internal structure of such networks. Consequently, we propose a hybrid system, where our goal is to use AM-FM decompositions to replace the overwhelming majority of the lower-level layers. The use of multiple AM-FM components will lead to a significant reduction in the residual representation of the input image. The significantly reduced residuals will be incorporated into a ResNet architecture that will focus on training a small number of upper-level layers. Overall, we will visualize all convolution layers through their frequency coverage. In the frequency domain, each filter will be characterized by an effective 2D spatial frequency band that will capture image periodicities in specific directions.
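The proposed frequency-domain visualization can be sketched as follows: each convolution kernel is summarized by the peak of its zero-padded 2-D DFT magnitude, yielding a radial spatial frequency and an orientation. A Sobel kernel stands in here for a learned filter (an illustrative assumption; learned kernels would be read out of the trained network):

```python
import numpy as np

def filter_frequency_coverage(kernel, n=64):
    """Characterize a 2-D convolution kernel by the peak spatial frequency
    and orientation of its (zero-padded) 2-D DFT magnitude."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(kernel, s=(n, n))))
    iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    fy = (iy - n // 2) / n          # cycles/pixel, vertical
    fx = (ix - n // 2) / n          # cycles/pixel, horizontal
    radial = np.hypot(fx, fy)       # effective radial frequency
    theta = np.arctan2(fy, fx)      # orientation of the frequency band
    return radial, theta

# a horizontal-derivative (Sobel) kernel: band-pass along x, low-pass along y
sobel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
radial, theta = filter_frequency_coverage(sobel)
# the peak sits on the horizontal frequency axis at 0.25 cycles/pixel
```

Plotting one such (radial, theta) point per filter gives the compact frequency-coverage picture proposed above, in contrast to inspecting raw kernel weights layer by layer.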

Co-author Kell has been developing novel methods to assess the severity of chronic, inflammatory diseases from the morphology of blood clots seen in either the Scanning Electron Microscope (SEM) or – when amyloid-specific stains are added – by confocal microscopy, see e.g. [Kell and Pretorius(2017)]. As yet, the analyses of the images have been either very simple or qualitative. We present four examples of AM-FM decompositions of such SEM images in Figure 3, where we show the original input images and the corresponding dominant Gabor filters.

In our example, we compute an AM-FM component from each Gabor filter. The dominant Gabor filters are then determined by requiring an excellent reconstruction that satisfies a Structural SIMilarity index (SSIM) of at least 0.85 (not shown in Figure 3). In Figure 3, we show the frequency magnitude plots of the dominant Gabor filters. Overlapping regions appear brighter.

From Figure 3, it is clear that the list of dominant Gabor filters provides a very compact visualization of image content. In Figure 3, each symmetric pair of circles represents a single filter. With just 10 to 30 filters, we can describe strong variabilities in image content. The frequency domain is also very easy to explain. We observe strong directional selectivity orthogonal to image lines, a strong concentration of low-frequency components, and select high-frequency components. As described earlier, these decompositions have provided excellent features for a wide range of biomedical applications. In comparison, ResNet requires 152 layers that cannot be easily visualized.
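A hedged sketch of how such a dominant-filter list can be obtained; for brevity we rank a small Gabor bank by response energy rather than by the SSIM-based reconstruction criterion described above, and the bank's frequencies, orientations and bandwidth are illustrative assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma, size=31):
    """Complex 2-D Gabor kernel: a Gaussian window modulated by a
    plane wave of spatial frequency `freq` at orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return gauss * np.exp(2j * np.pi * freq * rot)

def dominant_gabor_filters(image, n_keep=5):
    """Rank a small Gabor bank by response energy and keep the top filters."""
    bank = [(f, th) for f in (0.05, 0.1, 0.2)
                    for th in np.linspace(0, np.pi, 4, endpoint=False)]
    energies = []
    for f, th in bank:
        resp = fftconvolve(image, gabor_kernel(f, th, sigma=4.0), mode='same')
        energies.append(np.sum(np.abs(resp) ** 2))
    order = np.argsort(energies)[::-1]
    return [bank[i] for i in order[:n_keep]]

# synthetic test image: stripes along x at 0.1 cycles/pixel
img = np.cos(2 * np.pi * 0.1 * np.arange(64))[None, :] * np.ones((64, 1))
top = dominant_gabor_filters(img, n_keep=1)
# the best-matching filter has frequency 0.1 and orientation 0 (along x)
```

Each kept (frequency, orientation) pair corresponds to one symmetric pair of circles in the frequency-domain plots of Figure 3.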

Figure 3: SEM (Scanning Electron Microscopy) images of diabetes plasma with corresponding dominant Gabor filters. (a) and (c) Original nice spaghetti-like images in healthy controls. (b) and (d) Dominant Gabor filters for (a) and (c) respectively. (g) Dense matted deposits (DMDs) in type 2 diabetes that are removed (e) when we add small amounts of lipopolysaccharide-binding protein (LBP). (f) and (h) Dominant Gabor filters for (e) and (g) respectively. The dominant Gabor filters are shown in the frequency domain. SEM images taken from 2017 Nature Scientific Reports Diabetes and Control Data, Figure 7A.

5.3 *omics spectrum and Bioinformatics

Large-scale *omics are important footprints of biological mechanisms. These mechanisms consist of numerous synergistic effects emerging from various systems of interwoven biomolecules, cells and tissues. Modern biology harnesses its power from technological advances in the field of *omics and the advent of next generation sequencing (NGS). These technologies provide a spectrum of information ranging from genomics, transcriptomics and proteomics to epigenomics, pharmacogenomics, metagenomics and metabolomics. The sheer size of the data generated by these high-throughput methodologies necessitates analysing, integrating and concurrently interpreting this avalanche of information in a systemic way.

Currently, platforms are missing that support not only the analysis but also the interpretation of the data, information and knowledge obtained from the above-mentioned *omics technologies, and that cross-link them to hospital data. Moreover, it is necessary to narrow the gap between genotype and phenotype and to provide additional information for biomarker research and drug discovery, where biobanks \citepHuppertz:2014:Biobank play an increasingly important role.

One of the grand challenges here is to close the research cycle in such a way that all the data generated by one research study can be consistently associated with the original samples, so that the underlying original research data, and the knowledge gained from them, can be reused. This can be enabled by a catalogue providing the information hub connecting all relevant information sources \citepMuellerHolzinger:2015:BiobankIntegration. The key knowledge embedded in such a biobank catalogue is the availability and quality of proper samples for performing a research project. To overview and compare collections from different catalogues, visual analytics techniques are necessary, especially glyph-based visualization techniques \citepMuellerEtAl:2014:MultilevelGlyphs. We cannot emphasize enough the value of a combined view of heterogeneous data sources in a unified and meaningful way, enabling the discovery and visualization of data from different sources and thereby totally new insights.

Here, toolsets are urgently needed to support the bidirectional interaction with computational multiscale analysis and modelling, to help advance towards the far-off goal of future medicine7 \citepHoodFriend2011P4cancermedicine,TianPriceHood:2012:P4medicine.

6 Explainable Models for Text

Text is inherently different from continuous data such as images since it is composed of (natural language) symbols used for communication between humans. The formalization of natural language has been a long-standing effort of philosophy, starting from the Platonic cave of ideas and culminating in the vision of the Semantic Web \citepbernerslee2001semantic.

However, while logical reasoning with traces of inference steps (see above) is rather well understood, automated text understanding is still hampered by the fact that text is more expressive than tractable forms of logic and their representations (see e.g. \citepKrotzsch2008), and there are no established ways to convert a text to its ontological representation \citepCimiano14.

Ontology learning generally \citepMaedcheStaab:2001:OntologyLearning, and ontology learning from text specifically, is a hot topic \citepCimianoVoelker:2005:Text2Onto. Ontology reasoning, nowadays based on deep learning rather than logic-based formal reasoning \citepHoheneckerLukasiewicz:2017:DeepLearningOntology, is of great interest, and ontology-guided approaches are of much practical help for the medical domain expert \citepGirardi:2016:iKDDdocInLoop,WartnerEtAl:2016:Limits-Doc-in-Loop.

Generally, in the field of natural language processing, symbolic methods, which are inherently more interpretable, have never fallen out of use since their heyday in the 1970s, but they have mostly been replaced with statistical, probabilistic and, more recently, deep neural network models.

In terms of interpretability, this meant going from brittle rule-based but glass-box approaches, to more robust opaque-box approaches with supporting statistics but no explicit 'real reasons', and ultimately to neural black-box approaches.

In such black-box approaches, even the input symbols (words) are replaced by vectors (e.g. via the skip-gram model for learning vector representations of words from large amounts of unstructured text \citepMikolovChenCorradoDean:2013:WordVector), resulting in a few hundred uninterpretable dimensions (a.k.a. embeddings, e.g. \citepMikolovEtAlDean:2013:RepresentationsWords).
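As a concrete illustration of how the skip-gram model consumes text: it slides a window over the corpus and emits (target, context) word pairs, from which the embedding matrix is then trained. A minimal sketch of this pair-extraction step (the toy sentence is an illustrative assumption):

```python
def skipgram_pairs(tokens, window=2):
    """Enumerate the (target, context) pairs that the skip-gram
    model is trained to score highly."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:                        # skip the target itself
                pairs.append((target, tokens[j]))
    return pairs

sentence = "high blood sugar indicates diabetes".split()
pairs = skipgram_pairs(sentence, window=1)
# each word is paired with its immediate neighbours, e.g. ("blood", "sugar")
```

The learned dimensions themselves carry no symbolic meaning, which is precisely the interpretability problem discussed above.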

As opposed to fields such as speech or image processing, the improvements recently gained with deep learning on text are rather modest; yet its use is very attractive, since neural representations enormously reduce the workload of manually crafting features \citepManning2015.

In the medical domain, where a large amount of knowledge is represented in textual form, there exists already a large knowledge graph of medical terms (the UMLS8), where it is crucial to underpin machine output with reasons that are human-verifiable and where high precision is imperative for supporting, not distracting practitioners. The only way forward seems to be the integration of both knowledge-based and neural approaches to combine the interpretability of the former with the high efficiency of the latter. To this end, there have been attempts to retrofit neural embeddings with information from knowledge bases (e.g. \citepFaruqui15) as well as to project embedding dimensions onto interpretable low-dimensional sub-spaces \citeprothe-ebert-schutze:2016:N16-1.
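The retrofitting idea of \citepFaruqui15 admits a very small sketch: each vector is iteratively pulled towards its knowledge-graph neighbours while staying anchored to its original position. The weights alpha and beta and the toy two-word graph below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def retrofit(embeddings, graph, n_iter=10, alpha=1.0, beta=1.0):
    """Faruqui-style retrofitting: move each vector towards the average of
    its knowledge-graph neighbours, anchored to the original embedding."""
    q = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(n_iter):
        for w, neighbours in graph.items():
            if not neighbours:
                continue
            num = alpha * embeddings[w] + beta * sum(q[n] for n in neighbours)
            q[w] = num / (alpha + beta * len(neighbours))
    return q

emb = {"diabetes": np.array([1.0, 0.0]),
       "mellitus": np.array([0.0, 1.0])}
graph = {"diabetes": ["mellitus"], "mellitus": ["diabetes"]}
new = retrofit(emb, graph)
# the two linked vectors end up closer together than before retrofitting
```

The update has a closed-form fixed point, so the retrofitted space remains traceable back to both the corpus-derived vectors and the knowledge base, which is the interpretability gain sought here.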

More promising, in our opinion, is the use of hybrid distributional models that combine sparse graph-based representations \citepBiemannRiedl13 with dense vector representations [Mikolov et al.(2013b)Mikolov, Sutskever, Chen, Corrado, and Dean] and link them to lexical resources and knowledge bases \citepFaralli16. Here a hybrid human-in-the-loop approach can be beneficial: not only are the machine learning models for knowledge extraction supported and improved over time, but the final entity graph also becomes larger, cleaner, more precise and thus more usable for domain experts \citepYimamEtAl:2017:BioNLP. Contrary to classical automatic machine learning, human-in-the-loop approaches do not operate on predefined training or test sets, but assume that human expert input regarding system improvement is supplied iteratively. In such an approach the machine learning model is built continuously on previous annotations and used to propose labels for subsequent annotation [Yimam et al.(2016)Yimam, Biemann, Majnaric, Šabanović, and Holzinger].

Combined with an interpretable disambiguation system \citeppanchenko-EtAl:2017:EACLlong, this realizes concept-linking in context with high accuracy while providing human-interpretable reasons for why concepts have been selected. Figure 4 shows machine reading capabilities of the system described in [Panchenko et al.(2017)Panchenko, Ruppert, Faralli, Ponzetto, and Biemann]: The system can automatically assign more general terms in context and can disambiguate terms with several senses to the one matching the context. Note that while unsupervised machine learning is used for inducing the sense inventory, the senses are interpretable by providing a rich sense representation as visible in the figure. This method does not require a manually defined ontology and thus is applicable across languages and domains.
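The mechanism can be mimicked with a deliberately simple, Lesk-style sketch: the chosen sense is the one whose representation overlaps most with the context, and the overlapping words double as the human-readable reason. The hand-made sense inventory below is an illustrative stand-in for the automatically induced one:

```python
def disambiguate(word, context, inventory):
    """Pick the sense of `word` whose clue words overlap most with the
    context; the overlap set itself serves as the explanation."""
    best, best_overlap = None, set()
    for sense, clues in inventory[word].items():
        overlap = clues & set(context)
        if len(overlap) > len(best_overlap):
            best, best_overlap = sense, overlap
    return best, sorted(best_overlap)

inventory = {"plasma": {
    "material": {"blood", "transfusion", "cell", "protein"},
    "display":  {"screen", "tv", "pixel", "panel"}}}
context = "diabetes plasma is from blood transfusions with high sugar".split()
sense, why = disambiguate("plasma", context, inventory)
# sense is "material"; `why` lists the context words that justify the choice
```

Unlike an opaque classifier score, the returned overlap makes the decision directly verifiable by a domain expert, mirroring the interpretable output shown in Figure 4.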

Figure 4: Output of an unsupervised interpretable model for text interpretation for the input ”diabetes plasma is from blood transfusions with high sugar” (note that it brings up plasma (material), not plasma (display device)!). Image created online via ltbev.informatik.uni-hamburg.de/wsd on 27.12.2017, 19:30.

The quest for the future is to generalize these notions to enable semantic matching beyond keyword representations (cf. [Cer et al.(2017)Cer, Diab, Agirre, Lopez-Gazpio, and Specia]) in order to transfer these principles from the concept level to the event level.

7 Conclusion and Future Outlook

Explainable-AI systems generally pose huge challenges yet offer enormous opportunities for the medical domain. Recent research demonstrates examples of promising directions, but to date none of these examples provides a complete solution, nor are these considered the only possible solutions.

If human intelligence is complemented by machine learning, and in at least some cases even overruled, humans must be able to understand, to re-enact and to interactively influence the machine decision process [Holzinger(2016)]. The European Union cannot afford to deploy highly intelligent AI systems that make unexpected decisions which cannot be reproduced, especially if these systems are in the medical domain. Consequently, a huge motivation for explainable-AI are rising legal and privacy aspects. The new European General Data Protection Regulation (GDPR 2016/679 and ISO/IEC 27001), entering into force on May 25, 2018, will make black-box approaches difficult to use in business if they are not able (on demand) to explain why a decision has been made. Note also that even highly accurate models can be surprisingly brittle: see for example the recent work ”One pixel attack for fooling deep neural networks” by [Su et al.(2017)Su, Vargas, and Kouichi]. Small changes in images may lead to huge changes in the prediction of deep learning approaches, see ”Deep Neural Networks are Easily Fooled” by [Nguyen et al.(2015)Nguyen, Yosinski, and Clune]. None of this suggests that easy interpretability is even possible for deep neural networks.

In the medical domain a large amount of knowledge is represented in textual form, and the written text of a medical report is legally binding, unlike images or *omics data. Here it is crucial to underpin machine output with reasons that are human-verifiable, and high precision is imperative for supporting, not distracting, the medical experts. The only way forward seems to be the integration of knowledge-based and neural approaches, combining the interpretability of the former with the high efficiency of the latter. Promising for explainable-AI in the medical domain is the use of hybrid distributional models that combine sparse graph-based representations with dense vector representations and link them to lexical resources and knowledge bases.

Last but not least, we emphasize that successful explainable-AI systems need effective user interfaces, fostering new strategies for presenting human-understandable explanations, e.g. explanatory debugging \citepKuleszaEtAl:2015:ExplanatoryDebugging, affective computing \citepPicard:1997:AffectiveComputingBook, and sentiment analysis \citepMaasNgEtAl:2011:WordVectorsSentiment,PetzEtAl:2015:Sentiment. While the aforementioned methods are inherently more explainable, their performance is often less optimal; hence, possibilities to enhance two-way interaction have to be explored, which calls for optical computing for machine learning purposes.


Acknowledgements

We are grateful for valuable comments from our local, national and international colleagues, including George Spyrou, Cyprus Institute of Neurology and Genetics; Chris Christodoulou, Ioannis Constantinou and Kyriacos Constantinou, University of Cyprus; and Marios S. Pattichis, University of New Mexico.


  1. https://www.darpa.mil/program/explainable-artificial-intelligence
  2. https://www.computerworld.com.au/article/617359
  3. https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html
  4. ImageNet is an image database containing 14,197,122 images (as of 24.12.2017) organized according to nouns of WordNet, and is openly available: http://www.image-net.org
  5. presented as a poster during the ICML 2009 workshop on Learning Feature Hierarchies, http://www.cs.toronto.edu/ rsalakhu/deeplearning/program.html
  6. Early detection of diabetic retinopathy is extremely important in order to prevent premature visual loss and blindness
  7. often called P4-medicine, i.e. medicine that is Personal, Participatory, Predictive and Preventive
  8. https://www.nlm.nih.gov/research/umls


References

  1. Odd Olai Aalen, Kjetil Røysland, Jon Michael Gran, Roger Kouyos, and Tanja Lange. Can we believe the DAGs? A comment on the relationship between causal DAGs and mechanisms. Statistical Methods in Medical Research, 25(5):2294–2314, 2016. doi: 10.1177/0962280213520436.
  2. Carla Agurto, Victor Murray, Eduardo Barriga, Sergio Murillo, Marios Pattichis, Herbert Davis, Stephen Russell, Michael Abràmoff, and Peter Soliz. Multiscale AM-FM methods for diabetic retinopathy lesion detection. IEEE transactions on medical imaging, 29(2):502–512, 2010. doi: 10.1109/TMI.2009.2037146.
  3. Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015. doi: 10.1371/journal.pone.0130140.
  4. Yoshua Bengio. Learning deep architectures for ai. Foundations and trends in Machine Learning, 2(1):1–127, 2009. doi: 10.1561/2200000006.
  5. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013. doi: 10.1109/TPAMI.2013.50.
  6. Pietro Berkes and Laurenz Wiskott. On the analysis and interpretation of inhomogeneous quadratic forms as receptive fields. Neural computation, 18(8):1868–1895, 2006. doi: 10.1162/neco.2006.18.8.1868.
  7. Tim Berners-Lee, James Hendler, and Ora Lassila. The semantic Web. Scientific American, 284(5):34–43, May 2001.
  8. Chris Biemann and Martin Riedl. Text: Now in 2D! A Framework for Lexical Expansion with Contextual Similarity. Journal of Language Modelling, 1(1):55–95, 2013. doi: 10.15398/jlm.v1i1.60.
  9. Guido Bologna and Yoichi Hayashi. Characterization of symbolic rules embedded in deep dimlp networks: A challenge to transparency of deep learning. Journal of Artificial Intelligence and Soft Computing Research, 7(4):265–286, 2017. doi: 10.1515/jaiscr-2017-0019.
  10. Léon Bottou. From machine learning to machine reasoning. Machine learning, 94(2):133–149, 2014. doi: 10.1007/s10994-013-5335-x.
  11. William Brendel and Sinisa Todorovic. Learning spatiotemporal graphs of human activities. In IEEE international conference on Computer vision (ICCV 2011), pages 778–785. IEEE, 2011. doi: 10.1109/ICCV.2011.6126316.
  12. Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’15), pages 1721–1730. ACM, 2015. doi: 10.1145/2783258.2788613.
  13. Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada, August 2017. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/S17-2001.
  14. Philipp Cimiano and Johanna Völker. Text2onto: a framework for ontology learning and data-driven change discovery. In 10th international conference on Natural Language Processing and Information Systems (NLDB ’05), pages 227–238. Springer, 2005. doi: 10.1007/11428817˙21.
  15. Philipp Cimiano, Christina Unger, and John McCrae. Ontology-based interpretation of natural language, volume 24. Morgan and Claypool Publishers, 2014. doi: 10.2200/S00561ED1V01Y201401HLT024.
  16. Marianne Clausel, Thomas Oberlin, and Valérie Perrier. The monogenic synchrosqueezed wavelet transform: a tool for the decomposition/demodulation of am–fm images. Applied and Computational Harmonic Analysis, 39(3):450–486, 2015. doi: 10.1016/j.acha.2014.10.003.
  17. Ioannis Constantinou, Marios S. Pattichis, Vasillis Tanos, Marios Neofytou, and Constantinos S. Pattichis. An adaptive multiscale AM-FM texture analysis system with application to hysteroscopy imaging. In 12th International IEEE Conference on Bioinformatics & Bioengineering (BIBE 2012), pages 744–747. IEEE, 2012. doi: 10.1109/BIBE.2012.6399760.
  18. Mark G. Core, H. Chad Lane, Michael Van Lent, Dave Gomboc, Steve Solomon, and Milton Rosenberg. Building explainable artificial intelligence systems. In AAAI, pages 1766–1773. MIT Press, 2006.
  19. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Li Kai, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), pages 248–255. IEEE, 2009. doi: 10.1109/CVPR.2009.5206848.
  20. Derek Doran, Sarah Schulz, and Tarek R. Besold. What does explainable ai really mean? a new conceptualization of perspectives. arXiv:1710.00794, 2017.
  21. Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University of Montreal Technical Report Nr. 1341, 2009.
  22. Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115–118, 2017. doi: 10.1038/nature21056.
  23. Stefano Faralli, Alexander Panchenko, Chris Biemann, and Simone P. Ponzetto. Linked Disambiguated Distributional Semantic Networks, pages 56–64. Springer International Publishing, Cham, 2016. doi: 10.1007/978-3-319-46547-0˙7.
  24. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606–1615, Denver, Colorado, May–June 2015. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/N15-1184.
  25. Aaron X. Fellmeth and Maurice Horwitz. Guide to Latin in international law. Oxford University Press, 2009.
  26. Samuel J. Gershman, Eric J. Horvitz, and Joshua B. Tenenbaum. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245):273–278, 2015. doi: 10.1126/science.aac6076.
  27. Dominic Girardi, Josef Küng, Raimund Kleiser, Michael Sonnberger, Doris Csillag, Johannes Trenkler, and Andreas Holzinger. Interactive knowledge discovery with the doctor-in-the-loop: a practical example of cerebral aneurysms research. Brain Informatics, 3(3):133–143, 2016. doi: 10.1007/s40708-016-0038-2.
  28. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 580–587. IEEE, 2014. doi: 10.1109/CVPR.2014.81.
  29. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Zhoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, editors, Advances in neural information processing systems (NIPS), pages 2672–2680, 2014.
  30. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. arXiv:1612.00837, 2016.
  31. Serge Guillaume. Designing fuzzy inference systems from data: An interpretability-oriented review. IEEE Transactions on fuzzy systems, 9(3):426–443, 2001. doi: 10.1109/91.928739.
  32. David Gunning. Explainable artificial intelligence (XAI): Technical Report Defense Advanced Research Projects Agency DARPA-BAA-16-53. DARPA, Arlington, USA, 2016.
  33. Margaret A. Hamburg and Francis S. Collins. The path to personalized medicine. New England Journal of Medicine, 363(4):301–304, 2010. doi: 10.1056/NEJMp1006304.
  34. Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. Generating visual explanations. In European Conference on Computer Vision ECCV 2016, Lecture Notes in Computer Science LNCS 9908, pages 3–19. Springer, Heidelberg, 2016. doi: 10.1007/978-3-319-46493-0˙1.
  35. Geoffrey E. Hinton. A practical guide to training restricted boltzmann machines. In Neural networks: Tricks of the trade, Lecture Notes in Computer Science LNCS 7700, pages 599–619. Springer, Heidelberg, 2012. doi: 10.1007/978-3-642-35289-8˙32.
  36. Patrick Hohenecker and Thomas Lukasiewicz. Deep learning for ontology reasoning. arXiv:1705.10342, 2017.
  37. John Henry Holland. Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press, Cambridge (MA), 1992.
  38. Andreas Holzinger. On Knowledge Discovery and Interactive Intelligent Visualization of Biomedical Data - Challenges in Human––Computer Interaction & Biomedical Informatics. In Markus Helfert, Chiara Fancalanci, and Joaquim Filipe, editors, DATA 2012, International Conference on Data Technologies and Applications, pages 5–16. 2012.
  39. Andreas Holzinger. Human––Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together? In Alfredo Cuzzocrea, Christian Kittl, Dimitris E. Simos, Edgar Weippl, and Lida Xu, editors, Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127, pages 319–328. Springer, Heidelberg, Berlin, New York, 2013. doi: 10.1007/978-3-642-40511-2˙22.
  40. Andreas Holzinger. Interactive machine learning for health informatics: When do we need the human-in-the-loop? Brain Informatics, 3(2):119–131, 2016. doi: 10.1007/s40708-016-0042-6.
  41. Andreas Holzinger. Introduction to machine learning and knowledge extraction (MAKE). Machine Learning and Knowledge Extraction, 1(1):1–20, 2017. doi: 10.3390/make1010001. URL http://www.mdpi.com/2504-4990/1/1/1.
  42. Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea, and Vasile Palade. Towards interactive machine learning (iML): Applying ant colony algorithms to solve the traveling salesman problem with the human-in-the-loop approach. In Springer Lecture Notes in Computer Science LNCS 9817, pages 81–95. Springer, Heidelberg, Berlin, New York, 2016. doi: 10.1007/978-3-319-45507-56.
  43. Andreas Holzinger, Randy Goebel, Vasile Palade, and Massimo Ferri. Towards integrative machine learning and knowledge extraction. In Towards Integrative Machine Learning and Knowledge Extraction: Springer Lecture Notes in Artificial Intelligence LNAI 10344, pages 1–12. Springer International, Cham, 2017a. doi: 10.1007/978-3-319-69775-8˙1.
  44. Andreas Holzinger, Bernd Malle, Peter Kieseberg, Peter M. Roth, Heimo Müller, Robert Reihs, and Kurt Zatloukal. Towards the augmented pathologist: Challenges of explainable-ai in digital pathology. arXiv:1712.06657, 2017b.
  45. Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea, and Vasile Palade. A glass-box interactive machine learning approach for solving np-hard problems with the human-in-the-loop. arXiv:1708.01104, 2017c.
  46. Katharina Holzinger, Vasile Palade, Raul Rabadan, and Andreas Holzinger. Darwin or lamarck? future challenges in evolutionary algorithms for knowledge discovery and data mining. In Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401, pages 35–56. Springer, Heidelberg, Berlin, 2014. doi: 10.1007/978-3-662-43968-5˙3.
  47. Leroy Hood and Stephen H. Friend. Predictive, personalized, preventive, participatory (P4) cancer medicine. Nature Reviews Clinical Oncology, 8(3):184–187, 2011. doi: 10.1038/nrclinonc.2010.227.
  48. Xin Huang and Yuxin Peng. Cross-modal deep metric learning with multi-task regularization. arXiv:1703.07026, 2017.
  49. Berthold Huppertz and Andreas Holzinger. Biobanks – a source of large biological data sets: Open problems and future challenges. In Andreas Holzinger and Igor Jurisica, editors, Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science LNCS 8401, pages 317–330. Springer, Berlin, Heidelberg, 2014. doi: 10.1007/978-3-662-43968-5˙18.
  50. Herve Jegou, Florent Perronnin, Matthijs Douze, Jorge Sánchez, Patrick Perez, and Cordelia Schmid. Aggregating local image descriptors into compact codes. IEEE transactions on pattern analysis and machine intelligence, 34(9):1704–1716, 2012. doi: 10.1109/TPAMI.2011.235.
  51. W. Lewis Johnson. Agents that learn to explain themselves. In Twelfth National Conference on Artificial Intelligence (AAAI ’94), pages 1257–1263. AAAI, 1994.
  52. Douglas B. Kell and Etheresia Pretorius. Proteins behaving badly. Substoichiometric molecular control and amplification of the initiation and nature of amyloid fibril formation: lessons from and for blood clotting. Progress in Biophysics and Molecular Biology, 123:16–41, 2017.
  53. Peter Kieseberg, Edgar Weippl, and Andreas Holzinger. Trust for the doctor-in-the-loop. European Research Consortium for Informatics and Mathematics (ERCIM) News: Tackling Big Data in the Life Sciences, 104(1):32–33, 2016.
  54. Iasonas Kokkinos, Georgios Evangelopoulos, and Petros Maragos. Texture analysis and segmentation using modulation features, generative models, and weighted curve evolution. IEEE transactions on pattern analysis and machine intelligence, 31(1):142–157, 2009. doi: 10.1109/TPAMI.2008.33.
  55. John R. Koza. Genetic programming as a means for programming computers by natural selection. Statistics and Computing, 4(2):87–112, 1994. doi: 10.1007/BF00175355.
  56. Markus Krötzsch, Sebastian Rudolph, and Pascal Hitzler. ELP: Tractable Rules for OWL 2, pages 649–664. Springer, Berlin, Heidelberg, 2008. doi: 10.1007/978-3-540-88564-1_41.
  57. Tevye R. Krynski and Joshua B. Tenenbaum. The role of causality in judgment under uncertainty. Journal of Experimental Psychology: General, 136(3):430, 2007. doi: 10.1037/0096-3445.136.3.430.
  58. Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI 2015), pages 126–137. ACM, 2015. doi: 10.1145/2678025.2701399.
  59. Carmen Lacave and Francisco J. Diez. A review of explanation methods for Bayesian networks. The Knowledge Engineering Review, 17(2):107–127, 2002. doi: 10.1017/S026988890200019X.
  60. Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015. doi: 10.1126/science.aab3050.
  61. Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv:1604.00289, 2016.
  62. Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. Interpretable and explorable approximations of black box models. arXiv:1707.01154, 2017.
  63. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. doi: 10.1038/nature14539.
  64. Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In 26th annual international conference on machine learning (ICML ’09), pages 609–616. ACM, 2009. doi: 10.1145/1553374.1553453.
  65. Sanghoon Lee, Marios S. Pattichis, and Alan C. Bovik. Foveated video compression with optimal rate control. IEEE Transactions on Image Processing, 10(7):977–992, 2001. doi: 10.1109/83.931092.
  66. Sanghoon Lee, Marios S. Pattichis, and Alan C. Bovik. Foveated video quality assessment. IEEE Transactions on Multimedia, 4(1):129–132, 2002. doi: 10.1109/6046.985561.
  67. Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, and David Madigan. Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model. The Annals of Applied Statistics, 9(3):1350–1371, 2015. doi: 10.1214/15-AOAS848.
  68. Zachary C. Lipton. The mythos of model interpretability. arXiv:1606.03490, 2016.
  69. Christos P. Loizou, Victor Murray, Marios S. Pattichis, Marios Pantziaris, and Constantinos S. Pattichis. Multiscale amplitude-modulation frequency-modulation (AM–FM) texture analysis of ultrasound images of the intima and media layers of the carotid artery. IEEE Transactions on Information Technology in Biomedicine, 15(2):178–188, 2011a. doi: 10.1109/TITB.2010.2081995.
  70. Christos P. Loizou, Victor Murray, Marios S. Pattichis, Ioannis Seimenis, Marios Pantziaris, and Constantinos S. Pattichis. Multiscale amplitude-modulation frequency-modulation (AM–FM) texture analysis of multiple sclerosis in brain MRI images. IEEE Transactions on Information Technology in Biomedicine, 15(1):119–129, 2011b. doi: 10.1109/TITB.2010.2091279.
  71. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 142–150, 2011.
  72. Alexander Maedche and Steffen Staab. Ontology learning for the semantic web. IEEE Intelligent systems, 16(2):72–79, 2001. doi: 10.1109/5254.920602.
  73. Marc E. Maier, Brian J. Taylor, Huseyin Oktay, and David D. Jensen. Learning causal models of relational domains. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), pages 531–538. AAAI, 2010.
  74. Marc E. Maier, Katerina Marazopoulou, David Arbour, and David D. Jensen. A sound and complete algorithm for learning causal models from relational data. arXiv:1309.6843, 2013.
  75. Christopher D. Manning. Computational linguistics and deep learning. Computational Linguistics, 41(4):701–707, December 2015. doi: 10.1162/COLI_a_00239.
  76. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv:1301.3781, 2013a.
  77. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Christopher J.C. Burges, Leon Bottou, Max Welling, Zoubin Ghahramani, and Kilian Q. Weinberger, editors, Advances in neural information processing systems 26 (NIPS 2013), pages 3111–3119. NIPS foundation, 2013b.
  78. Tim Miller, Piers Howe, and Liz Sonenberg. Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. arXiv:1712.00547, 2017.
  79. Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting and understanding deep neural networks. arXiv:1706.07979, 2017.
  80. Heimo Müller, Robert Reihs, Kurt Zatloukal, and Andreas Holzinger. Analysis of biomedical data with multilevel glyphs. BMC Bioinformatics, 15(Suppl 6):S5, 2014. doi: 10.1186/1471-2105-15-S6-S5.
  81. Heimo Müller, Robert Reihs, Kurt Zatloukal, Fleur Jeanquartier, Roxana Merino-Martinez, David van Enckevort, Morris A. Swertz, and Andreas Holzinger. State-of-the-art and future challenges in the integration of biobank catalogues. In Smart Health, Lecture Notes in Computer Science LNCS 8700, pages 261–273. Springer, Heidelberg, 2015. doi: 10.1007/978-3-319-16226-3_11.
  82. Victor Murray, Paul Rodríguez, and Marios S. Pattichis. Multiscale AM-FM demodulation and image reconstruction methods with improved accuracy. IEEE Transactions on Image Processing, 19(5):1138–1152, 2010. doi: 10.1109/TIP.2010.2040446.
  83. Victor Murray, Marios S. Pattichis, Eduardo Simon Barriga, and Peter Soliz. Recent multiscale AM-FM methods in emerging applications in medical imaging. EURASIP Journal on Advances in Signal Processing, 2012(1):23, 2012. doi: 10.1186/1687-6180-2012-23.
  84. Allen Newell, John C. Shaw, and Herbert A. Simon. Chess-playing programs and the problem of complexity. IBM Journal of Research and Development, 2(4):320–335, 1958. doi: 10.1147/rd.24.0320.
  85. Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 427–436. IEEE, 2015.
  86. Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural Information Processing Systems 29 (NIPS 2016), pages 3387–3395. Neural Information Processing Systems Foundation, 2016.
  87. Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv:1601.06759, 2016.
  88. Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, and Chris Biemann. Unsupervised does not mean uninterpretable: The case for word sense induction and disambiguation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 86–98, Valencia, Spain, April 2017. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/E17-1009.
  89. Seyoung Park, Bruce Xiaohan Nie, and Song-Chun Zhu. Attribute and-or grammar for joint parsing of human attributes, part and pose. arXiv:1605.02112, 2016.
  90. Marios S. Pattichis, George Panayi, Alan C. Bovik, and Shun-Pin Hsu. Fingerprint classification using an AM-FM model. IEEE Transactions on Image Processing, 10(6):951–954, 2001. doi: 10.1109/83.923291.
  91. John Pavlopoulos, Prodromos Malakasiotis, and Ion Androutsopoulos. Deeper attention to abusive user content moderation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1136–1146. Association for Computational Linguistics, 2017.
  92. Judea Pearl. Causality: Models, Reasoning, and Inference (2nd Edition). Cambridge University Press, Cambridge, 2009.
  93. Carlos A. Pena-Reyes and Moshe Sipper. A fuzzy-genetic approach to breast cancer diagnosis. Artificial intelligence in medicine, 17(2):131–155, 1999. doi: 10.1016/S0933-3657(99)00019-6.
  94. Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press, Cambridge (MA), 2017.
  95. Gerald Petz, Michal Karpowicz, Harald Fuerschuss, Andreas Auinger, Vaclav Stritesky, and Andreas Holzinger. Computational approaches for mining user’s opinions on the web 2.0. Information Processing & Management, 51(4):510–519, 2015. doi: 10.1016/j.ipm.2014.07.011.
  96. Rosalind W. Picard. Affective Computing. MIT Press, Cambridge (MA), 1997.
  97. Brett Poulin, Roman Eisner, Duane Szafron, Paul Lu, Russell Greiner, David S. Wishart, Alona Fyshe, Brandon Pearcy, Cam MacDonell, and John Anvik. Visual explanation of evidence with additive classifiers. In National Conference On Artificial Intelligence, pages 1822–1829. AAAI, 2006.
  98. Sarni Suhaila Rahim, Vasile Palade, Chrisina Jayne, Andreas Holzinger, and James Shuttleworth. Detection of diabetic retinopathy and maculopathy in eye fundus images using fuzzy image processing. In Yike Guo, Karl Friston, Faisal Aldo, Sean Hill, and Hanchuan Peng, editors, Brain Informatics and Health, Lecture Notes in Computer Science, LNCS 9250, pages 379–388. Springer, Cham, Heidelberg, New York, Dordrecht, London, 2015. doi: 10.1007/978-3-319-23344-4_37.
  99. Janakiramanan Ramachandran, Marios S. Pattichis, Louis A. Scuderi, and Justin S. Baba. Tree image growth analysis using instantaneous phase modulation. EURASIP Journal on Advances in Signal Processing, 2011(1):586865, 2011. doi: 10.1155/2011/586865.
  100. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Model-agnostic interpretability of machine learning. arXiv:1606.05386, 2016a.
  101. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144. ACM, 2016b.
  102. David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise. arXiv:1705.10694, 2017.
  103. Sascha Rothe, Sebastian Ebert, and Hinrich Schütze. Ultradense word embeddings by orthogonal transformation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 767–777, San Diego, California, June 2016. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/N16-1091.
  104. David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986. doi: 10.1038/323533a0.
  105. Stuart J. Russell and Peter Norvig. Artificial Intelligence: A modern approach. Prentice Hall, Englewood Cliffs, 1995.
  106. Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61(1):85–117, 2015. doi: 10.1016/j.neunet.2014.09.003.
  107. Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas. Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016. doi: 10.1109/JPROC.2015.2494218.
  108. Edward H. Shortliffe and Bruce G. Buchanan. A model of inexact reasoning in medicine. Mathematical biosciences, 23(3-4):351–379, 1975. doi: 10.1016/0025-5564(75)90047-4.
  109. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
  110. Deepika Singh, Erinc Merdivan, Ismini Psychoula, Johannes Kropf, Sten Hanke, Matthieu Geist, and Andreas Holzinger. Human activity recognition using recurrent neural networks. In Andreas Holzinger, Peter Kieseberg, A Min Tjoa, and Edgar Weippl, editors, Machine Learning and Knowledge Extraction: Lecture Notes in Computer Science LNCS 10410, pages 267–274. Springer International Publishing, Cham, 2017. doi: 10.1007/978-3-319-66808-6_18.
  111. Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. arXiv:1710.08864, 2017.
  112. William Swartout, Cecile Paris, and Johanna Moore. Explanations in knowledge systems: Design for explainable expert systems. IEEE Expert, 6(3):58–64, 1991. doi: 10.1109/64.87686.
  113. Duane Szafron, Paul Lu, Russell Greiner, David S. Wishart, Brett Poulin, Roman Eisner, Zhiyong Lu, John Anvik, Cam Macdonell, and Alona Fyshe. Proteome analyst: custom predictions with explanations in a web-based tool for high-throughput proteome annotations. Nucleic Acids Research, 32(S2):W365–W371, 2004. doi: 10.1093/nar/gkh485.
  114. Masato Taki. Deep residual networks and weight initialization. arXiv:1709.02956, 2017.
  115. Qiang Tian, Nathan D. Price, and Leroy Hood. Systems cancer medicine: towards realization of predictive, preventive, personalized and participatory (P4) medicine. Journal of internal medicine, 271(2):111–121, 2012. doi: 10.1111/j.1365-2796.2011.02498.x.
  116. Athanasios Tsakonas, Georgios Dounias, Jan Jantzen, Hubertus Axer, Beth Bjerregaard, and Diedrich Graf von Keyserlingk. Evolving rule-based systems in two medical domains using genetic programming. Artificial Intelligence in Medicine, 32(3):195–216, 2004. doi: 10.1016/j.artmed.2004.02.007.
  117. Zhenyu Wang and Vasile Palade. Building interpretable fuzzy models for high dimensional data analysis in cancer diagnosis. BMC Genomics, 12(2):S5, 2011. doi: 10.1186/1471-2164-12-S2-S5.
  118. Sandra Wartner, Dominic Girardi, Manuela Wiesinger-Widi, Johannes Trenkler, Raimund Kleiser, and Andreas Holzinger. Ontology-guided principal component analysis: Reaching the limits of the doctor-in-the-loop. In Information Technology in Bio- and Medical Informatics: 7th International Conference, ITBAM 2016, Porto, Portugal, September 5-8, 2016, Proceedings, pages 22–33. Springer International Publishing, Cham, 2016. doi: 10.1007/978-3-319-43949-5_2.
  119. Bernard Widrow and Michael A. Lehr. 30 years of adaptive neural networks: perceptron, madaline, and backpropagation. Proceedings of the IEEE, 78(9):1415–1442, 1990. doi: 10.1109/5.58323.
  120. Andrew M. Woodward, Jem J. Rowland, and Douglas B. Kell. Fast automatic registration of images using the phase of a complex wavelet transform: application to proteome gels. Analyst, 129(6):542–552, 2004. doi: 10.1039/B403134B.
  121. Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. arXiv:1611.05431, 2016.
  122. Seid Muhie Yimam, Chris Biemann, Ljiljana Majnaric, Šefket Šabanović, and Andreas Holzinger. An adaptive annotation approach for biomedical entity and relation recognition. Brain Informatics, 3(3):157–168, 2016. doi: 10.1007/s40708-016-0036-4.
  123. Seid Muhie Yimam, Steffen Remus, Alexander Panchenko, Andreas Holzinger, and Chris Biemann. Entity-centric information access with the human-in-the-loop for the biomedical domains. In Svetla Boytcheva, Kevin Bretonnel Cohen, Guergana Savova, and Galia Angelova, editors, Biomedical NLP Workshop, 11th International Conference on Recent Advances in Natural Language Processing (RANLP 2017), pages 42–48, 2017.
  124. Lotfi A. Zadeh. Toward human level machine intelligence - is it achievable? the need for a paradigm shift. IEEE Computational Intelligence Magazine, 3(3):11–22, 2008. doi: 10.1109/MCI.2008.926583.
  125. Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Fleet D., Pajdla T., Schiele B., and Tuytelaars T., editors, ECCV, Lecture Notes in Computer Science LNCS 8689, pages 818–833. Springer, Cham, 2014. doi: 10.1007/978-3-319-10590-1_53.
  126. Matthew D. Zeiler, Dilip Krishnan, Graham W. Taylor, and Rob Fergus. Deconvolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), pages 2528–2535. IEEE, 2010. doi: 10.1109/CVPR.2010.5539957.
  127. Feng Zhou and Fernando De la Torre. Factorized graph matching. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), pages 127–134. IEEE, 2012.
  128. Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In 20th International Conference on Machine Learning (ICML ’03), pages 928–936. AAAI, 2003.