Pruning Neural Networks with PyTorch
Jun 23, 2021
Pruning is a surprisingly effective method to automatically come up with sparse neural networks. The motivation behind pruning is usually to 1) compress a model to reduce its memory or energy consumption, 2) speed up its inference time, or 3) find meaningful substructures to re-use or interpret.
In this post, we will see how to apply several simple but common and effective pruning techniques: random weight pruning, and class-blinded, class-distributed and class-uniform magnitude-based pruning.
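As a taste of two of these flavors, PyTorch ships a pruning module, `torch.nn.utils.prune`. The model below is a made-up illustration, not the one from the post, and the sparsity levels are arbitrary:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# a small illustrative model (not from the post)
model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))

# random weight pruning: zero out 30% of the first layer's weights at random
prune.random_unstructured(model[0], name="weight", amount=0.3)

# class-uniform magnitude-based pruning: remove the same fraction of
# lowest-magnitude (L1) weights within the last layer
prune.l1_unstructured(model[2], name="weight", amount=0.3)

sparsity = float((model[0].weight == 0).sum()) / model[0].weight.numel()
print(f"layer 0 sparsity: {sparsity:.2f}")
```

Each call replaces the layer's `weight` with a masked version, so the zeros survive forward passes while the original values remain available under `weight_orig`.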
Our brain could encode its computational graph
May 04, 2021
There is an enormous research effort currently going on around two major machine learning buzzwords: graph embeddings and neural architecture search. Their core problems are highly related, and playing around with thoughts about connections to biology leads me to a statement I currently find quite fascinating: our brain could entirely encode different computational structures, load one encoded structure onto parts of its underlying hardware when a situation calls for it, and then run inference over that structure with a situational representation.
Variational Auto-Encoder with Gaussian Decoder
Mar 23, 2021
Recently I got quite fascinated by integrating a variational auto-encoder 1 technique - or more specifically the reparameterization trick - into a larger computational graph, in which I was trying to learn embeddings in a first stage and then find “blurry directions or regions” within those embeddings to navigate a larger model through an auto-regressive task. What I stumbled upon was that variational auto-encoders are usually used with discrete class targets. When I changed the problem to a continuous vector space and the cross entropy to a mean squared error loss - while keeping the variational lower bound with the Kullback-Leibler divergence estimate for the Gaussian parameters of the latent space - I found that it did not simply work out of the box.
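To make the setup concrete, here is a minimal sketch of such a VAE with a continuous (Gaussian) decoder output: MSE reconstruction plus the closed-form Gaussian KL term. The layer sizes and the `beta` weighting are my own illustrative choices, not those from the post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianVAE(nn.Module):
    def __init__(self, d_in=32, d_latent=8):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_latent)  # predicts mu and log-variance
        self.dec = nn.Linear(d_latent, d_in)      # decodes to a continuous vector

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        # reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar, beta=1.0):
    # MSE reconstruction in place of cross entropy for continuous targets
    rec = F.mse_loss(x_hat, x, reduction="sum")
    # closed-form KL between N(mu, sigma^2) and the N(0, I) prior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```

With `reduction="sum"` the two terms live on comparable scales; averaging the reconstruction while summing the KL is one of the silent mismatches that can make this fail to "work out of the box".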
Local Learning in Structured Geo-Based Models
Jan 27, 2021
It got so quiet at our offices over the last year that I really appreciated some discussions with colleagues in the last days. With Christofer I had a long talk about geo-based graphical models, which I had previously tried to investigate in the context of the Human+ project, but where I struggled both from a moral perspective and with the level of my understanding in stochastics & statistics at that time (and today).
Text Classification with Naive Bayes in numpy
Jan 09, 2021
Goal: step by step, build a model to classify text documents from newsgroups20 into their corresponding categories.
You can download the accompanying Jupyter notebook for this exercise from here and use the attached environment.yml to reproduce the conda environment used.
In this exercise we want to get to know a first classifier, commonly referred to as “naive Bayes”. Don’t get discouraged by the word “naive” - it refers to the fact that the classifier makes some unrealistic but practical independence assumptions.
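To preview where the exercise ends up, a multinomial naive Bayes classifier fits in a few lines of numpy. The function names and the Laplace smoothing parameter `alpha` below are my own choices for illustration, not necessarily those used in the notebook:

```python
import numpy as np

def fit_nb(X, y, alpha=1.0):
    # X: (n_docs, n_words) word-count matrix, y: class labels per document
    classes = np.unique(y)
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    # per-class word counts with Laplace (add-alpha) smoothing
    counts = np.array([X[y == c].sum(axis=0) for c in classes]) + alpha
    log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_prior, log_lik

def predict_nb(X, classes, log_prior, log_lik):
    # pick the class maximizing log P(c) + sum_w count(w) * log P(w | c)
    return classes[np.argmax(X @ log_lik.T + log_prior, axis=1)]
```

Working in log space turns the product of word probabilities into a sum, which avoids numerical underflow on long documents.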
How I experienced CoVid19 in 2020
Dec 31, 2020
tl;dr: first wave yay, second wave nay - lots of dedicated people, lots of covidiots - productive, withdrawn and at times motivated, at other times dreary
The year is almost over and usually, in these times, it gets turbulent one more time before everybody drives home for Christmas and goes through those fascinating two weeks of family wildness: seeing old friends, eventually deciding on how to spend New Year's Eve and then finally enjoying the first days of the new year, which I have always experienced as very calm and restorative.
Reading list Summer 2020
Jul 25, 2020
Here’s my reading list for Summer 2020. I decided to label the reading lists with seasons instead of months, as I am pretty busy reading very specialized publications instead of well-elaborated books.
Ghostwritten
Erkenne die Welt - history, philosophy
21 Lektionen für das 21. Jahrhundert - this work of Harari currently really appeals to me as it directly hits my Zeitgeist and thoughts of the last year.
Deep State Machine for Generating Graphs
Jul 17, 2020
Problem: sampling complex graphs with unknown properties learned from exemplary graph sets.
Possible solution: using a state machine with transitions learned from exemplary graphs, with a setup composed of node embeddings, a graph embedding, categorical action sampling and thoroughly chosen objectives.
Note: the real underlying problems (graph embeddings, distributions over graphs) are so difficult that we are just touching the tip of the iceberg. As soon as adequate approximate solutions exist, there are going to be even more fascinating applications in medicine, chemistry, biology and machine learning itself.
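A minimal sketch of the categorical action sampling step, assuming a hypothetical `policy` network that maps the current graph embedding to logits over transition actions (the action set and sizes are invented for illustration):

```python
import torch
import torch.nn as nn

ACTIONS = ["add_node", "add_edge", "stop"]  # hypothetical transition set

def sample_action(graph_embedding: torch.Tensor, policy: nn.Module):
    # the policy turns the current graph embedding into categorical logits
    dist = torch.distributions.Categorical(logits=policy(graph_embedding))
    idx = dist.sample()
    # keep the log-probability so the sampled transition stays trainable
    # (e.g. via a REINFORCE-style objective)
    return ACTIONS[idx.item()], dist.log_prob(idx)

policy = nn.Linear(16, len(ACTIONS))  # stand-in for a learned policy
action, logp = sample_action(torch.randn(16), policy)
```

The state machine then applies the sampled transition to the partial graph, recomputes the embeddings, and repeats until `stop` is drawn.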
Reading list April 2020
Apr 17, 2020
Here’s my reading list for April 2020. I found many of these resources a while ago; some of them I only scanned through or read in particular sections. It gives some kind of relief to write noteworthy links and thoughts down, and I will definitely look back at several of them once my understanding of the topics has changed.
The Courage To Be Disliked: How to free yourself, change your life and achieve real happiness This Socratic-style dialogue introduced me to ideas of Adlerian psychology at quite a moment in life where I also heard of it from other sources (see e.
Obtaining priors for geographically based simulation models
Apr 08, 2020
Problem: obtain real-world statistics and process them into a graph
Solution: geopandas, shapely, rasterio, nominatim, osm-router
Incorporating real-world information into models is non-trivial. It is often done in machine learning by e.g. training models on natural images. In this post, I collect some notes and information on processing geographic statistics. Those statistics are then used in a geographically based model as described in a previous post about thoughts on simulating migration flow.
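As one tiny example of the kind of processing meant here, shapely alone suffices to map a geocoded point (e.g. a Nominatim result) onto a region polygon. The district shapes below are invented; in practice they would come from a shapefile loaded with geopandas:

```python
from shapely.geometry import Point, Polygon

# hypothetical district polygons (in practice: geopandas.read_file(...))
districts = {
    "north": Polygon([(0, 1), (1, 1), (1, 2), (0, 2)]),
    "south": Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
}

def assign_district(lon, lat):
    # attach a geocoded statistic to the district polygon containing it
    p = Point(lon, lat)
    for name, poly in districts.items():
        if poly.contains(p):
            return name
    return None  # point lies outside all known districts
```

The same point-in-polygon lookup is what a geopandas spatial join does in bulk when aggregating many statistics into the nodes of a geographic graph.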