Alexander Binder

Assistant Professor

Email: alexander_binder@sutd.edu.sg
Telephone: +65 6499 8753
Room Number: 1.402.19
Research Interests:
Information Security in Cyber Physical Systems, Multi-Modal Information Retrieval, Machine Learning and Artificial Intelligence, Computer Vision, Signal Processing, Others

Pillar / Cluster: Information Systems Technology and Design

Current Matters:

My PhD students Jiamei, Penny and Marcus achieved highly ranked submissions in the ImageCLEF 2017 Tuberculosis Challenge with their deep-learning-based approaches.

We are hiring for Cybersecurity:

  • one research assistant (RA)
  • one postdoc
  • Please send a CV with information about work experience and grades to alexander _underscore_ binder _et_ sutd _dot_ edu _dot_ sg

Research assistant hire:

  • Candidates with a B.Sc. or M.Sc. in Computer Science, Computer Engineering, or related fields
  • Good programming skills in C++
  • Interest in security topics
  • Knowledge of security tools (Metasploit, Wireshark, RATs, etc.) and Kerberos is a plus
  • Knowledge of Windows 10 APIs or the Windows kernel is a plus
  • Willingness to learn new skills
  • Willingness to work in a team of 5-7 other people
  • Ability to solve problems independently
  • Position initially for 1 year, can be extended

PostDoc hire:

  • Candidates with a PhD in Computer Science, Computer Engineering, Mathematics, or related fields
  • Track record of publications in reasonable conferences or journals
  • Interest in security topics and deep learning
  • Good programming skills in C++ or Python; hands-on attitude
  • Experience with deep learning tools (Caffe, TensorFlow, and similar) is a big plus
  • Knowledge of security tools (Metasploit, Wireshark, RATs, etc.) and Kerberos is a plus
  • Knowledge of Windows 10 APIs or the Windows kernel is a plus
  • Willingness to go beyond what you did in your PhD
  • Willingness to work in a team of 5-7 other people
  • Ability to generate creative ideas
  • Position initially for 1 year, can be extended

Life in Singapore is less expensive than in many places in Western Europe: flat rent is pricier, but most other costs are notably lower. The workplace is in the east of Singapore, close to newly built and relatively inexpensive apartments (e.g. along Flora Drive).

About me

Alexander (Alex) Binder obtained a Ph.D. degree at the Department of Computer Science, Technical University Berlin, in 2013. Before that, he obtained a Diplom degree in mathematics from Humboldt University Berlin. From 2007 he worked on semantic image retrieval for the THESEUS project at Fraunhofer FIRST, where he was the principal contributor to top-five-ranked submissions at the ImageCLEF 2011 and Pascal VOC 2009 challenges. From 2012 to 2015 he worked on real-time car localization topics in the Automotive Services department (ASCT) of the Fraunhofer Institute FOKUS. From 2010 to 2015 he was with the Machine Learning Group at TU Berlin. He likes to program in C++, knows a bit about the internals of the Caffe toolbox, and uses Python here and there. His research interests include computer vision, medical applications, machine learning (kernel machines and deep learning), efficient heuristics, and understanding non-linear predictions.

Education

  • PhD, Technical University Berlin

Research Interests

Machine Learning, Computer Vision, Deep Learning, Medical Data Analysis, Large-Scale Computing

Explaining Deep Neural Net predictions – which pixels are important?

Below you can see examples of single-test-sample explanations (by Layer-wise Relevance Propagation) of which pixels/regions make a deep neural network arrive at a particular decision.

The deep neural network is the GoogLeNet implementation for predicting the 1000 ImageNet classes, which is made available within the Caffe deep learning package (http://caffe.berkeleyvision.org/; thank you Yangqing Jia, Sergio Guadarrama and many others! Caffe is just great). The output of the deep neural network is a vector of prediction scores for all 1000 classes, and for the top image of the rooster with the yellow flowers, the score for the class rooster is highest.
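For concreteness, the sketch below shows one way such a score vector can be obtained with pycaffe. It is a minimal example, not the exact setup used here; the file names ('deploy.prototxt', 'bvlc_googlenet.caffemodel', 'rooster11.jpg') and the output blob name 'prob' are assumptions based on the standard BVLC GoogLeNet release.

    import numpy as np
    import caffe

    # Assumed local paths to the BVLC GoogLeNet deploy definition and weights.
    net = caffe.Net('deploy.prototxt', 'bvlc_googlenet.caffemodel', caffe.TEST)

    # Standard pycaffe preprocessing: HxWxC RGB in [0,1] -> CxHxW BGR in [0,255]
    # with the (approximate) ImageNet channel means subtracted.
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))
    transformer.set_channel_swap('data', (2, 1, 0))
    transformer.set_raw_scale('data', 255)
    transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))

    image = caffe.io.load_image('rooster11.jpg')       # illustrative file name
    net.blobs['data'].data[...] = transformer.preprocess('data', image)
    scores = net.forward()['prob'][0]                  # vector of 1000 class scores
    print('top-scoring class index:', scores.argmax())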

But what did the neural network see as a rooster? The output of the deep neural network is one score for the whole image (for each of its classes). It does not answer the question of which pixels are relevant for the prediction score.

[Figures: rooster11.jpg, frog13.jpg, scooter10.jpg and scooter11.jpg as inputted into the DNN, each shown next to its heatmap]

With Layer-wise Relevance Propagation, which is an example of Deep Taylor methods, we can compute relevance scores for each pixel, seen in the grey/red picture to the right of the rooster. The neural network picked up mostly the red head, and the neural network (as well as the explanation by Layer-wise Relevance Propagation) ignored the strong gradients from the yellow flowers. OK, that was an easy picture. Now look at something more complex.

Take a look at the motorbikes (my own photos from Flickr) as an example of complex and cluttered scenes. The neural network (as well as the explanation by Layer-wise Relevance Propagation) mostly ignores the strong gradients from the background trees against the sky (these would pop out with a Canny edge detector or Gabor filters) and mostly considers the wheels and backs of the motorbikes as evidence.

A last example is the green frogs against the green background; surely they do not pop out by color the way the rooster does.
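To make the redistribution idea concrete, here is a minimal numpy sketch (not the code behind the figures above) of the LRP epsilon rule for a single dense layer: the relevance R_j arriving at an output neuron is redistributed to the inputs in proportion to their contributions a_i * w_ij, and applying such a step layer by layer down to the input yields pixel-wise heatmaps like those shown above. The function name and the value of eps are illustrative.

    import numpy as np

    def lrp_epsilon(activations, weights, bias, relevance_out, eps=1e-2):
        # activations:   (d_in,)        layer inputs a_i
        # weights:       (d_in, d_out)  weights w_ij
        # bias:          (d_out,)       biases b_j
        # relevance_out: (d_out,)       relevance R_j of the layer outputs
        z = activations[:, None] * weights         # contributions z_ij = a_i * w_ij
        denom = z.sum(axis=0) + bias                # pre-activations of the output neurons
        denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)      # epsilon stabiliser
        return (z * (relevance_out / denom)[None, :]).sum(axis=1)  # relevance R_i on the inputs

Convolutional layers can be handled with the same rule by viewing them as (sparse) linear layers; the stabiliser eps keeps the decomposition well defined when a pre-activation is close to zero, at the cost of absorbing a small amount of relevance.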

 

There are many great publications by other researchers on related visualization topics, e.g. the fooling papers by Nguyen et al., Matthew Zeiler's deconvolution approach, and many others. See for example the references section in: http://www.interpretable-ml.org/accv2016workshop/

Selected Publications

  • Analyzing and Validating Neural Networks Predictions (Best paper prize)
    Alexander Binder, Wojciech Samek, Gregoire Montavon, Sebastian Bach, Klaus-Robert Müller
    2016 Workshop on Visualization for Deep Learning @ ICML 2016, http://icmlviz.github.io/assets/papers/18.pdf
 
  • Deep Taylor Decomposition of Neural Networks
    Gregoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek, Klaus-Robert Müller
    2016 Workshop on Visualization for Deep Learning @ ICML 2016, http://icmlviz.github.io/assets/papers/13.pdf

  • Analyzing Classifiers: Fisher Vectors and Deep Neural Networks
    Sebastian Bach, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller and Wojciech Samek
    CVPR 2016 + Arxiv.org preprint 2015 http://arxiv.org/pdf/1512.00172

  • Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers
    Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller, Wojciech Samek
    ICANN 2016 + Arxiv.org preprint 2016, http://arxiv.org/pdf/1604.00825

  • Evaluating the visualization of what a Deep Neural Network has learned
    Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller
    IEEE TNNLS accepted + Arxiv.org preprint 2015 http://arxiv.org/pdf/1509.06321

  • Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth
    Sebastian Bach, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller and Wojciech Samek
    IEEE ICIP 2016 accepted + Arxiv.org preprint 2016 http://arxiv.org/pdf/1603.06463

  • Explaining NonLinear Classification Decisions with Deep Taylor Decomposition
    Grégoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek, Klaus-Robert Müller
    Arxiv.org preprint 2015 http://arxiv.org/abs/1512.02479

  • Layer-wise Relevance Propagation for Deep Neural Network Architectures
    Alexander Binder, Sebastian Bach, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek
    IEEE ICISA 2016 (Information Science and Applications) http://link.springer.com/chapter/10.1007%2F978-981-10-0557-2_87

  • Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms
    Yunwen Lei, Ürün Dogan, Alexander Binder and Marius Kloft
    NIPS 2015 main conference + Arxiv.org preprint 2015 http://arxiv.org/pdf/1506.04359
  • On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
    Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller and Wojciech Samek
    PLoS ONE 2015
    http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140

  • Theory and Algorithms for the Localized Setting of Learning Kernels
    Yunwen Lei, Alexander Binder, Ürün Dogan and Marius Kloft
    NIPS 2015 Workshop “Feature Extraction: Modern Questions and Challenges”
  • Localized Multiple Kernel Learning – A Convex Approach
    Yunwen Lei, Alexander Binder, Ürün Dogan and Marius Kloft
    Arxiv.org preprint 2015 http://arxiv.org/pdf/1506.04364
  • Extracting latent brain states—Towards true labels in cognitive neuroscience experiments
    Anne K. Porbadnigk, Nico Görnitz, Claudia Sannelli, Alexander Binder, Mikio Braun, Marius Kloft and Klaus-Robert Müller
    NeuroImage 2015
  • Insights from Classifying Visual Concepts with Multiple Kernel Learning
    Alexander Binder, Shinichi Nakajima, Marius Kloft, Christina Müller, Wojciech Samek, Ulf Brefeld, Klaus-Robert Müller and Motoaki Kawanabe
    PLoS ONE 2012
  • On taxonomies for multi-class image categorization
    Alexander Binder, Klaus-Robert Müller and Motoaki Kawanabe
    International Journal of Computer Vision (IJCV) 2011
  • Enhanced Representation and Multi-Task Learning for Image Annotation
    Alexander Binder, Wojciech Samek, Klaus-Robert Müller and Motoaki Kawanabe
    Computer Vision and Image Understanding (CVIU) 2013
  • Multi-modal visual concept classification of images via Markov random walk over tags
    Motoaki Kawanabe, Alexander Binder, Christina Müller and Wojciech Wojcikiewicz
    WACV 2011
  • Multi Modal Identification and Tracking of Vehicles in Partially Observed Environments
    Daniel Becker, Alexander Binder, Jens Einsiedler and Ilja Radusch
    IPIN 2014
  • The joint submission of the TU Berlin and Fraunhofer FIRST (TUBFI) to the ImageCLEF2011 Photo Annotation Task
    Alexander Binder, Wojciech Samek, Marius Kloft, Christina Müller, Klaus-Robert Müller and Motoaki Kawanabe
    Working Notes of CLEF 2011
  • Method and System for the Automatic Analysis of an Image of a Biological Sample
    Frederick Klauschen, Motoaki Kawanabe, Klaus-Robert Müller and Alexander Binder
    US Patent Application 20,150,003,701  2011
  • The SHOGUN machine learning toolbox
    Gunnar Rätsch, Sören Sonnenburg, Sebastian Henschel, Christian Widmer, Jonas Behr, Alexander Zien, Fabio de Bona, Alexander Binder, Christian Gehl and Vojtěch Franc
    The Journal of Machine Learning Research (JMLR) 2010

Awards

  • ImageCLEF2011 Photo Annotation Task: TUBFI – Highest ranked submissions in two categories (Visual, Multimodal Ranking by mAP measure)

Google Scholar:

      https://scholar.google.com/citations?hl=en&user=5B8CTlEAAAAJ&view_op=list_works&sortby=pubdate

Miscellaneous:

A write-up on the value function, the Q function, and the Bellman equations for my students
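For reference, the standard forms behind such a write-up are the Bellman expectation equations for the state-value function V and the action-value function Q; the notation below (policy \pi, transition probabilities P, reward r, discount \gamma) follows common textbook convention and is not taken from the write-up itself.

    V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s'} P(s' \mid s, a) \left[ r(s, a, s') + \gamma V^{\pi}(s') \right]

    Q^{\pi}(s, a) = \sum_{s'} P(s' \mid s, a) \left[ r(s, a, s') + \gamma \sum_{a'} \pi(a' \mid s') Q^{\pi}(s', a') \right]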

 

JD-PostDoc_VirtualReality_SUTD