Our brain could encode its computational graph
May 04, 2021
There is an enormous research effort currently going on around two major machine learning buzzwords: graph embeddings and neural architecture search. Their core problems are highly related, and playing with thoughts about connections to biology leads me to a statement I currently find quite fascinating: our brain could entirely encode different computational structures, flood parts of its underlying hardware with one encoded structure when facing a situation, and then run inference over that structure with a situational representation.
Variational Auto-Encoder with Gaussian Decoder
Mar 23, 2021
Recently I got quite fascinated by integrating a variational auto-encoder 1 technique - or especially the reparameterization trick - into a larger computational graph, in which I was trying to learn embeddings in a first stage and then find “blurry directions or regions” within those embeddings to navigate a larger model through an auto-regressive task. What I stumbled upon was that variational auto-encoders are usually used for discrete class targets; when changing the problem to a continuous vector space and the cross entropy to a mean squared error loss - while keeping the variational lower bound with the Kullback-Leibler divergence estimation for the Gaussian parameters of the latent space - I found that it did not simply work out of the box.
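The loss change described in the excerpt can be sketched in a few lines of numpy. This is a minimal sketch under my own naming, assuming a diagonal Gaussian posterior and a Gaussian decoder with fixed variance (which is what swapping cross entropy for MSE amounts to); it is not the post's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps with eps ~ N(0, I), so gradients
    # could flow through mu and log_var (the reparameterization trick).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian,
    # summed over latent dimensions, averaged over the batch.
    return np.mean(-0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))

def gaussian_vae_loss(x, x_recon, mu, log_var, beta=1.0):
    # MSE reconstruction term (Gaussian decoder, fixed variance)
    # plus the KL term of the variational lower bound.
    mse = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    return mse + beta * kl_divergence(mu, log_var)

# Sanity check: a perfect reconstruction with a standard-normal
# posterior gives exactly zero loss.
x = rng.standard_normal((4, 8))
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))
z = reparameterize(mu, log_var)
loss = gaussian_vae_loss(x, x, mu, log_var)
```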
Local Learning in Structured Geo-Based Models
Jan 27, 2021
It got so quiet at our offices over the last year that I really appreciated some discussions with colleagues in the last days. With Christofer I had a long talk about geo-based graphical models, which I previously tried to investigate in the context of the Human+ project, but with which I struggled both from a moral perspective and because of the level of my understanding of stochastics & statistics at that time (and today).
Text Classification with Naive Bayes in numpy
Jan 09, 2021
Goal: step by step, build a model to classify text documents from newsgroups20 into their respective categories.
You can download the accompanying jupyter notebook for this exercise from here and use the attached environment.yml to reproduce the used conda environment.
In this exercise we want to get to know a first classifier, commonly referred to as “naive Bayes”. But don’t get discouraged by the word “naive”: it refers to the circumstance that the classifier uses some unrealistic but practical assumptions.
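The core of such a classifier fits in a few lines of numpy. A minimal sketch with a hypothetical toy corpus standing in for newsgroups20 (the data, labels and `alpha` value are mine, not from the notebook):

```python
import numpy as np

# Toy corpus: bag-of-words counts (documents x vocabulary)
# with made-up class labels.
X = np.array([
    [2, 1, 0, 0],   # class 0
    [1, 2, 0, 1],   # class 0
    [0, 0, 3, 1],   # class 1
    [0, 1, 2, 2],   # class 1
])
y = np.array([0, 0, 1, 1])
n_classes = 2

# Class priors P(c) and Laplace-smoothed word likelihoods P(w|c);
# the smoothing avoids zero probabilities for unseen words.
alpha = 1.0
log_prior = np.log(np.bincount(y) / len(y))
log_likelihood = np.empty((n_classes, X.shape[1]))
for c in range(n_classes):
    counts = X[y == c].sum(axis=0) + alpha
    log_likelihood[c] = np.log(counts / counts.sum())

def predict(x):
    # The "naive" independence assumption:
    # log P(c|x) ∝ log P(c) + sum_w x_w * log P(w|c)
    return int(np.argmax(log_prior + x @ log_likelihood.T))

pred = predict(np.array([0, 0, 2, 1]))  # dominated by class-1 words → 1
```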
Reading list Summer 2020
Jul 25, 2020
Here’s my reading list collection for Summer 2020. I decided to denote the reading lists with seasons instead of months as I am pretty busy reading very specialized publications instead of well-elaborated books.
Ghostwritten; Erkenne die Welt (history, philosophy); 21 Lektionen für das 21. Jahrhundert: this work by Harari currently really appeals to me as it directly hits my Zeitgeist and thoughts of the last year.
Deep State Machine for Generating Graphs
Jul 17, 2020
Problem: sampling complex graphs with unknown properties learned from exemplary graph sets.
Possible solution: using a state machine with transitions learned from exemplary graphs, with a setup composed of node embeddings, a graph embedding, categorical action sampling and thoroughly chosen objectives.
Note: the real underlying problems (graph embeddings or distributions of graphs) are so difficult that we are just touching the tip of the iceberg, and as soon as there are adequate approximate solutions, there will be even more fascinating applications in medicine, chemistry, biology and machine learning itself.
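The state-machine idea can be sketched as follows. This is a hypothetical, self-contained sketch: in the post's setup the action probabilities would come from the learned node and graph embeddings, whereas here they are simply fixed constants.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_graph(max_nodes=6, p_stop=0.2):
    # A state machine that builds a graph step by step: at each step
    # a categorical distribution over actions (add-node, add-edge,
    # stop) is sampled, mirroring the "categorical action sampling"
    # component described above.
    nodes, edges = [0], set()
    while len(nodes) < max_nodes:
        action = rng.choice(["add_node", "add_edge", "stop"],
                            p=[0.5, 0.5 - p_stop, p_stop])
        if action == "stop":
            break
        if action == "add_node":
            nodes.append(len(nodes))
        elif len(nodes) >= 2:
            # Sample an edge between two distinct existing nodes,
            # stored in canonical (small, large) order.
            u, v = rng.choice(nodes, size=2, replace=False)
            edges.add((min(u, v), max(u, v)))
    return nodes, edges

nodes, edges = sample_graph()
```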
Reading list April 2020
Apr 17, 2020
Here’s my reading list collection for April 2020. I found many of those resources a while ago. Some of them I only scanned through, reading particular sections. It gives some kind of relief writing noteworthy links and thoughts down, and I will definitely look back at several of them once my understanding of the topics has changed.
The Courage To Be Disliked: How to free yourself, change your life and achieve real happiness This Socratic-style dialogue introduced me to the ideas of the psychology of Adler quite at a moment in life where I also heard of it from other sources (see e.
Obtaining priors for geographically based simulation models
Apr 08, 2020
Problem: obtain real-world statistics and process them into a graph
Solution: geopandas, shapely, rasterio, nominatim, osm-router
Incorporating real-world information into models is non-trivial. It is often done in machine learning by e.g. training models on natural images. In this post, I collect some notes and information on processing geographic statistics. Those statistics are then used in a geographically based model as described in a previous post about thoughts on simulating migration flow.
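A core step in processing such statistics is assigning an observation to the region polygon it falls into. In practice shapely's `Polygon.contains` (via geopandas) handles this robustly; the following is only a pure-Python sketch of the underlying ray-casting test, with a made-up unit-square region:

```python
def point_in_polygon(point, polygon):
    # Ray-casting test: count how often a horizontal ray from the
    # point crosses a polygon edge; an odd count means "inside".
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical region (a unit square) and two observation points.
region = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
inside = point_in_polygon((0.5, 0.5), region)
outside = point_in_polygon((1.5, 0.5), region)
```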
Learning just in case vs. just in time
Mar 26, 2020
Recently, I stumbled across a short blog post about learning “just in case” vs “just in time” and it got me thinking about how much I learned and later didn’t need at all and how much I learned (consciously) and now use very often.
What is learning “just in case” and “just in time”? It is the idea of dividing learning phases into those in which you learn something new expecting the acquired skills, knowledge and understanding to be of later use, and those in which you learn something new because you have an immediate need to apply it.
Including files in Hugo with a shortcode
Mar 25, 2020
Problem: Hugo cannot simply include other files from your page bundle.
Solution: $.Page.Dir, path.Join, readFile
I usually organize larger posts in folders; e.g. this post lies under /content/posts/2020-03/include-files-hugo-shortcode/, and the main file is an index.md. Next to it, I can add resources, which I reference locally and relative to this main content file. This ensures that I can easily move a post as a whole (which might include other resources such as images, code, .
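A shortcode built from the pieces named in the solution line ($.Page.Dir, path.Join, readFile) could look like this. The file name and the markdownify step are my assumptions, not necessarily the post's exact version:

```go-html-template
{{/* layouts/shortcodes/include.html */}}
{{ $file := path.Join $.Page.Dir (.Get 0) }}
{{ readFile $file | markdownify }}
```

Used from a post in the same page bundle as `{{</* include "snippet.md" */>}}`, this resolves the path relative to the post's directory and renders the file's contents in place.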