Currently my work focuses on neural networks and, in particular, their structure. This is a complex intersection of Functional Analysis, Graph Theory, Learning Theory, Topology, Optimization and Machine Learning. As a computer scientist, I mostly approach it from a computational, evidence-based perspective. More generally speaking, I am interested in Machine Learning, Philosophy, Artificial General Intelligence and Psychology.
Publications
@unpublished{stier2021experiments,
author="Stier, Julian and Darji, Harshil and Granitzer, Michael",
title="Experiments on Properties of Hidden Structures of Sparse Neural Networks",
year={2021}
}
@unpublished{havas2021spatio,
author="Havas, Clemens and Wendlinger, Lorenz and Stier, Julian and Julka, Sahib and Krieger, Veronika and Ferner, Cornelia and Petutschnig, Andreas and Granitzer, Michael and Wegenkittl, Stefan and Resch, Bernd",
title="Spatiotemporal machine learning analysis of social media data and refugee movement statistics",
year={2021}
}
@inproceedings{wendlinger2021evofficient,
title={Evofficient: Reproducing a Cartesian Genetic Programming Method.},
author={Wendlinger, Lorenz and Stier, Julian and Granitzer, Michael},
booktitle={EuroGP},
pages={162--178},
year={2021}
}
@unpublished{benamor2021robustness,
author="Ben Amor, Mehdi and Stier, Julian and Granitzer, Michael",
title="Robustness of Sparse Neural Networks",
year={2021}
}
 DeepGG: a Deep Graph Generator contains results of our studies on building a generative model of graphs. It generalizes and analyzes a model called DGMG. Following the line of previous publications, we try to improve on Neural Architecture Search by using search techniques in continuous domains, leveraging graph embeddings for this purpose. This idea became popular in 2019/2020 within the NAS community.
@inproceedings{stier2020deep,
title={DeepGG: a Deep Graph Generator},
author={Stier, Julian and Granitzer, Michael},
booktitle={Advances in Intelligent Data Analysis XIX: 19th International Symposium on Intelligent Data Analysis, IDA 2021, Porto, Portugal, April 26--28, 2021, Proceedings},
pages={325},
organization={Springer Nature}
}
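The sequential construction at the heart of DGMG-style generators can be sketched as follows. This is a toy illustration with fixed, hypothetical probabilities in place of the learned neural decision modules that DGMG and DeepGG actually use:

```python
import random

# Toy sketch of sequential graph generation in the spirit of DGMG/DeepGG:
# repeatedly decide whether to add a node, then decide its edges.
# Hypothetical fixed probabilities stand in for learned neural modules.
def generate_graph(p_add_node=0.7, p_add_edge=0.5, max_nodes=10, seed=0):
    rng = random.Random(seed)
    nodes, edges = [0], set()
    while len(nodes) < max_nodes and rng.random() < p_add_node:
        v = len(nodes)
        nodes.append(v)
        # decide, for each existing node, whether to connect it to the new one
        for u in nodes[:-1]:
            if rng.random() < p_add_edge:
                edges.add((u, v))
    return nodes, edges

nodes, edges = generate_graph()
print(len(nodes), len(edges))
```

A learned generator replaces each coin flip with a probability computed from an embedding of the partial graph, which is what makes the model trainable on a graph distribution.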
 Structural Analysis of Sparse Neural Networks .. perhaps better titled Analysis of Neural Networks of Random Structure, contains results of our first investigations into the influence of structure alone. The motivation is biologically inspired and fuses network-scientific results with Neural Architecture Search. Since 2018 we have been working on formalizing this in a structural prior hypothesis.
@article{stier2019structural,
title={Structural Analysis of Sparse Neural Networks},
author={Stier, Julian and Granitzer, Michael},
journal={Procedia Computer Science},
volume={159},
pages={107--116},
year={2019},
publisher={Elsevier}
}
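The "networks of random structure" idea can be sketched in a few lines. This is a simplified illustration, not the paper's actual generator: sample a random DAG and read off a layered feed-forward wiring from it.

```python
import random

def random_dag(n, p, seed=0):
    """Sample a DAG on n nodes: orient every edge from lower to higher
    index and include it independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def topological_depths(n, edges):
    """Longest-path depth of each node, i.e. its 'layer' when the DAG
    is interpreted as a feed-forward architecture."""
    depth = [0] * n
    for i, j in sorted(edges):  # edges already respect index order
        depth[j] = max(depth[j], depth[i] + 1)
    return depth

edges = random_dag(8, 0.4)
print(topological_depths(8, edges))
```

Because every edge points from a lower to a higher index, acyclicity holds by construction, and the depth assignment gives a valid evaluation order for the resulting network.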
 Analysing neural network topologies: a game theoretic approach We were among the first to use the Shapley value (an optimal game-theoretic solution concept) to prune neurons of neural network models, and we investigated its theoretical limitations. The Shapley value applied in the feature space later became quite popular among researchers from Google as a way to understand neural networks.
@article{stier2018analysing,
title={Analysing neural network topologies: a game theoretic approach},
author={Stier, Julian and Gianini, Gabriele and Granitzer, Michael and Ziegler, Konstantin},
journal={Procedia Computer Science},
volume={126},
pages={234--243},
year={2018},
publisher={Elsevier}
}
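A minimal sketch of the underlying idea, not the paper's implementation: treat neurons as players in a cooperative game whose payoff is the sub-network's accuracy, and compute exact Shapley values by averaging marginal contributions over all orderings. The characteristic function below is a made-up toy example.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: phi[p] / len(perms) for p in players}

# Hypothetical payoffs: accuracy of the sub-network keeping only these neurons.
acc = {frozenset(): 0.0,
       frozenset({'a'}): 0.6, frozenset({'b'}): 0.5, frozenset({'c'}): 0.1,
       frozenset({'a', 'b'}): 0.8, frozenset({'a', 'c'}): 0.65,
       frozenset({'b', 'c'}): 0.55, frozenset({'a', 'b', 'c'}): 0.85}
phi = shapley_values(['a', 'b', 'c'], acc.__getitem__)
print(phi)  # neuron 'c' contributes least and is the pruning candidate
```

The enumeration is exponential in the number of players, which is exactly why approximations matter when the "players" are the neurons of a real network.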
Projects

deepstruct: blending graph theory and neural networks
 GitHub Link: github.com/innvariant/deepstruct
 Readthedocs: deepstruct.readthedocs.io
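The core idea of blending graph theory and neural networks can be sketched generically. This is a simplified, framework-free illustration and not deepstruct's actual API: a graph's adjacency matrix acts as a binary mask on a layer's weights, so the network's connectivity is exactly the graph's structure.

```python
# A sparse layer as a masked dense layer: weight w[i][j] is active only
# if the bipartite graph contains the edge (input i -> output j).
def masked_forward(x, weights, mask):
    """y_j = sum_i x_i * w[i][j] over edges with mask[i][j] == 1."""
    n_out = len(weights[0])
    return [sum(x[i] * weights[i][j] * mask[i][j] for i in range(len(x)))
            for j in range(n_out)]

x = [1.0, 2.0]
w = [[0.5, -1.0], [2.0, 0.25]]
mask = [[1, 0], [1, 1]]            # edge (input 0 -> output 1) removed
print(masked_forward(x, w, mask))  # -> [4.5, 0.5]
```

In an autodiff framework the same mask is applied inside the forward pass, so gradients for masked-out weights vanish and the trained model stays on the prescribed graph.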

deepgg: A generative model for graphs (a hot and complex topic with various applications)
 GitHub Link: github.com/innvariant/deepgg

eddy: Artificial Landscapes for Optimization A nice little visualization and experimentation project I am currently working on. I am collecting test functions for optimization, providing additional functionality for simple visualizations of them, and testing various optimization strategies on them.
 GitHub Link: github.com/innvariant/eddy
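An illustrative sketch in the spirit of eddy, with hypothetical function and strategy names rather than its actual API: a classic artificial landscape plus a naive random-search strategy evaluated on it.

```python
import random

def sphere(x):
    """Classic convex test landscape; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(f, dim, bounds, steps, seed=0):
    """Baseline strategy: keep the best of uniformly sampled points."""
    rng = random.Random(seed)
    best_x = [rng.uniform(*bounds) for _ in range(dim)]
    best_y = f(best_x)
    for _ in range(steps):
        cand = [rng.uniform(*bounds) for _ in range(dim)]
        y = f(cand)
        if y < best_y:
            best_x, best_y = cand, y
    return best_x, best_y

x_best, y_best = random_search(sphere, dim=2, bounds=(-5.0, 5.0), steps=1000)
print(y_best)  # small non-negative value near the optimum
```

Swapping in other landscapes (Rastrigin, Ackley, ...) or other strategies while keeping this interface is the kind of comparison the project is meant to make easy.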

pyklopp: repeatable model training The field of machine learning has a huge reproducibility problem, and I am confronted with it constantly when designing new Neural Architecture Search experiments, so I developed a simple binary that I use for invoking model trainings. It is far from finalized, and there are ongoing development efforts to make experiments repeatable.
 GitHub Link: github.com/innvariant/pyklopp
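One ingredient of repeatable training can be shown in a minimal sketch (hypothetical names, not pyklopp's actual code): fix every random seed up front and store it with the run configuration, so any run can be replayed bit-for-bit.

```python
import random

def init_run(seed):
    """Seed all randomness and return the run configuration to persist.
    A real setup would also seed numpy and torch here, e.g.
    numpy.random.seed(seed) and torch.manual_seed(seed)."""
    random.seed(seed)
    return {"seed": seed}

config = init_run(42)
a = [random.random() for _ in range(3)]   # "training" randomness, run 1
init_run(config["seed"])                  # replay with the stored seed
b = [random.random() for _ in range(3)]   # identical randomness, run 2
print(a == b)
```

Persisting the returned config next to the model artifacts is what turns a one-off training into a repeatable experiment.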
Theses
 Investigating Sparsity in Recurrent Neural Networks, Harshil Darji, 2021
 A Scalable Distributed Training Ecosystem, Marouene Zouauoui, 2020
 Evolutionary Neural Architecture Search with graph-based Performance Estimation, Jerome Würf, 2020
 A comparative evaluation of pruning techniques for Artificial Neural Networks, Paul Häusner, 2019
 Evaluation of Sparse Neural Networks' Robustness to Adversarial Examples, Mehdi Ben Amor, 2019
 Evofficient: Reproducing and Enhancing a Cartesian Genetic Programming Method, Lorenz Wendlinger, 2019
 Analysis of Neural Networks from a Network Science Perspective, Hann Holze, 2019