pygarn: Graph Assembly Representations
May 20, 2022
From my work on generative models that learn distributions of graphs for Neural Architecture Search, I started exploring graphs from an auto-regressive and a probabilistic perspective. Auto-regressive in this context means that the non-canonical representation of a graph is a sequence of steps in which order matters and each step builds upon the previous one. The probabilistic perspective refers to a new idea of representing graphs not deterministically but with a representation that already supports some kind of fuzziness.
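A minimal sketch of the idea in Python with networkx (the operation names are my own illustration, not pygarn's actual API): a graph is represented by an ordered sequence of assembly operations, and fuzziness enters because some operations sample their targets.

```python
import random
import networkx as nx

# Hypothetical assembly operations, not pygarn's actual API.
def add_node(g):
    g.add_node(g.number_of_nodes())

def add_random_edge(g):
    u, v = random.sample(list(g.nodes), 2)  # fuzzy: endpoints are sampled
    g.add_edge(u, v)

# Order matters: each operation builds on the graph from the previous step.
sequence = [add_node, add_node, add_node, add_random_edge, add_random_edge]

g = nx.Graph()
for op in sequence:
    op(g)

# The same sequence denotes a distribution over graphs, not a single graph.
```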
...
➦
Self-Archiving Reproducible Python Scripts
May 04, 2022
Conducting a lot of computational experiments with Python, I found it quite useful to follow some patterns during prototyping and while producing experiment data. First, I use generated keys for entities such as an experiment setting, a repetition, or a model instance; with these keys I can easily persist entities and run scripts in parallel while producing large amounts of data. Secondly, including the date gives a really good self-archiving mechanism, as it helps you later on to assess whether something is no longer relevant.
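A minimal sketch of that pattern (function and variable names are my own illustration, not the post's code):

```python
import random
import string
from datetime import date

def generate_key(length: int = 8) -> str:
    # Random alphanumeric key so parallel runs never collide on names.
    alphabet = string.ascii_lowercase + string.digits
    return "".join(random.choices(alphabet, k=length))

# The date prefix is the self-archiving part: stale results are easy to spot.
experiment_id = f"{date.today():%Y%m%d}-{generate_key()}"
print(experiment_id)  # e.g. 20220504-k3v9x2qa
```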
...
➦
Pruning Neural Networks with PyTorch
Jun 23, 2021
Pruning is a surprisingly effective method to automatically come up with sparse neural networks. The motivation behind pruning is usually to 1) compress a model in its memory or energy consumption, 2) speed up its inference time, or 3) find meaningful substructures to re-use or interpret them, or to serve the first two goals.
In this post, we will see how you can apply several simple but common and effective pruning techniques: random weight pruning and class-blinded, class-distributed, and class-uniform magnitude-based pruning.
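A brief sketch with torch.nn.utils.prune; mapping the post's terminology onto these built-ins is my own assumption: random_unstructured for random weight pruning, per-layer l1_unstructured as class-uniform magnitude pruning, and global_unstructured as class-blinded pruning, since it compares magnitudes across all layers at once.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

# Random weight pruning: drop 30% of the first layer's weights at random.
prune.random_unstructured(model[0], name="weight", amount=0.3)

# Class-uniform magnitude pruning: same fraction per layer, smallest |w| first.
prune.l1_unstructured(model[2], name="weight", amount=0.3)

# Class-blinded magnitude pruning: one global threshold across all layers.
prune.global_unstructured(
    [(model[0], "weight"), (model[2], "weight")],
    pruning_method=prune.L1Unstructured,
    amount=0.2,
)
```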
...
➦
Our brain could encode its computational graph
May 04, 2021
There is an enormous research effort currently going on around two major machine learning buzzwords: graph embeddings and neural architecture search. Their core problems are highly related, and playing around with thoughts about connections to biology leads me to a statement I currently find quite fascinating: our brain could entirely encode different computational structures, flood parts of its underlying hardware with one encoded structure in the face of a situation, and then run inferences over that structure with a situational representation.
...
➦
Variational Auto-Encoder with Gaussian Decoder
Mar 23, 2021
Recently I got quite fascinated by integrating the variational auto-encoder technique - especially the reparameterization trick - into a larger computational graph in which I was trying to learn embeddings in a first stage and then to find “blurry directions or regions” within those embeddings to navigate a larger model through an auto-regressive task. What I stumbled upon is that variational auto-encoders are usually used with discrete class targets; when changing the problem to a continuous vector space and the cross entropy to a mean squared error loss, while keeping the variational lower bound with the Kullback-Leibler divergence estimate for the Gaussian parameters of the latent space, I found that it did not simply work out of the box.
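A minimal sketch of that loss combination in PyTorch (my own illustration of the setup, not the post's code):

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps the sampling step differentiable.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    # MSE replaces cross entropy once the targets live in a continuous space.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # Analytic KL divergence between N(mu, sigma^2) and the prior N(0, 1).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```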
...
➦
Local Learning in Structured Geo-Based Models
Jan 27, 2021
It got so quiet at our offices over the last year that I really appreciated some discussions with colleagues during the last few days. With Christofer I had a long talk about geo-based graphical models, which I had previously tried to investigate in the context of the Human+ project but with which I struggled both from a moral perspective and because of my level of understanding of stochastics & statistics at that time (and today).
...
➦
Text Classification with Naive Bayes in numpy
Jan 09, 2021
Goal: step by step build a model to classify text documents from newsgroups20 into their corresponding categories.
You can download the accompanying jupyter notebook for this exercise from here and use the attached environment.yml to reproduce the used conda environment.
In this exercise we want to get to know a first classifier, commonly referred to as “naive bayes”. But don't get discouraged by the word “naive”: it only refers to the circumstance that the model makes some unrealistic but practical independence assumptions.
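A condensed sketch of the core computation (a multinomial naive Bayes on bag-of-words counts; the variable names are mine, not necessarily the notebook's):

```python
import numpy as np

def fit(X, y, alpha=1.0):
    # X: (documents, vocabulary) word counts; y: integer class labels.
    classes = np.unique(y)
    log_prior = np.log(np.array([np.mean(y == c) for c in classes]))
    counts = np.array([X[y == c].sum(axis=0) for c in classes]) + alpha  # Laplace smoothing
    log_likelihood = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_prior, log_likelihood

def predict(X, classes, log_prior, log_likelihood):
    # "Naive": words are assumed independent given the class, so a document's
    # log-probability is just a sum of per-word log-likelihoods.
    return classes[np.argmax(X @ log_likelihood.T + log_prior, axis=1)]
```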
...
➦
How I experienced CoVid19 in 2020
Dec 31, 2020
tl;dr: first wave yay, second wave nay - lots of dedicated people, lots of covidiots - productive, withdrawn, and at times motivated while at other times dreary
The year is almost over and usually, at this time, it gets turbulent one more time before everybody drives home for Christmas and goes through those fascinating two weeks of family wildness: seeing old friends, eventually deciding how to spend New Year's Eve, and then finally enjoying the first days of the new year, which I have always experienced as very calm and restorative.
...
➦
Reading list Summer 2020
Jul 25, 2020
Here’s my reading list for Summer 2020. I decided to label the reading lists with seasons instead of months, as I am pretty busy reading very specialized publications instead of well-crafted books.
Ghostwritten #reading
Erkenne die Welt #reading - history, philosophy
21 Lektionen für das 21. Jahrhundert - This work of Harari currently really appeals to me as it directly hits my Zeitgeist and thoughts of the last year.
...
➦
Deep State Machine for Generating Graphs
Jul 17, 2020
Problem: sampling complex graphs with unknown properties learned from exemplary graph sets.
Possible solution: a state machine with transitions learned from exemplary graphs, built from node embeddings, a graph embedding, categorical action sampling, and carefully chosen objectives.
Note: the real underlying problems (graph embeddings or distributions over graphs) are so difficult that we are just touching the tip of the iceberg; as soon as adequate approximate solutions exist, there will be even more fascinating applications in medicine, chemistry, biology, and machine learning itself.
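A rough sketch of such a generation loop (a simplified illustration under my own assumptions, not the post's actual model): a learned policy maps the current graph embedding to a categorical distribution over the transitions add-node, add-edge, and stop, and one action is sampled per step.

```python
import torch
import torch.nn as nn

ACTIONS = ["add_node", "add_edge", "stop"]

class TransitionPolicy(nn.Module):
    # Maps a graph embedding to a categorical distribution over actions.
    def __init__(self, embedding_dim=16):
        super().__init__()
        self.head = nn.Linear(embedding_dim, len(ACTIONS))

    def forward(self, graph_embedding):
        return torch.distributions.Categorical(logits=self.head(graph_embedding))

policy = TransitionPolicy()
edges, num_nodes = [], 1
graph_embedding = torch.zeros(16)  # stand-in for a learned graph embedding

for _ in range(50):  # cap the number of transitions
    action = ACTIONS[policy(graph_embedding).sample().item()]
    if action == "add_node":
        num_nodes += 1
    elif action == "add_edge" and num_nodes > 1:
        u, v = torch.randint(num_nodes, (2,)).tolist()  # stand-in for node scoring
        edges.append((u, v))
    else:
        break  # "stop" was sampled (or no edge was possible yet)
```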
...
➦