Zen and the Art of Keeping Track of Things

okay, so i haven’t been keeping a journal for the past year, which, after two or so years of doing so consistently, has been an interesting change. i’d like to say this lapse in keeping track of things started partly as an experiment, but it was also the natural result of a series of (at the time) unfathomably large life changes: finishing my bachelor’s at waterloo, starting and finishing new contracts and projects, getting accepted to the neural systems & computation graduate program at ini (@ eth zurich), and, of course, moving across the atlantic for the program in question.

so, to avoid the very real rabbit hole of writing a (quite poor) novella about the adventures & excursions i’ve had the liberty of enjoying, i’ll keep things somewhat condensed.

for the past year or so, i’ve been working on extending the results of a paper on hyperspherical variational auto-encoders. it’s a lot of cool math, especially relating to directional statistics and non-euclidean geometry, but i’ll have to leave that for its own dedicated post. other than that, between september and the start of my graduate studies in february, i was working as a research (ml?) engineer for ficra ai, which was a cool experience since i got to overhaul their embedding-based semantic search system as well as build some custom nlp features (high-accuracy language detection, color quantization, etc.) for their customers: mostly ui/ux designers at large companies like robinhood, and some smaller startups juggling multiple app versions with limited design staff.
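for the curious: the core of embedding-based semantic search is just ranking documents by vector similarity to the query’s embedding. here’s a minimal toy sketch of that idea — the three-dimensional “embeddings” and document names below are made up for illustration and stand in for a real embedding model’s output, not anything from ficra’s actual system.

```python
import math

def cosine(a, b):
    # cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, corpus):
    # corpus: list of (doc_id, vector); return doc ids ranked best-first
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked]

# toy 3-d "embeddings" — in practice these come from an embedding model
corpus = [
    ("dark-mode docs",  [0.9, 0.1, 0.0]),
    ("billing faq",     [0.0, 0.2, 0.9]),
    ("theme settings",  [0.8, 0.3, 0.1]),
]

# a query vector near the "theme" direction ranks the theme-related docs first
print(search([1.0, 0.0, 0.0], corpus))
```

real systems add an approximate nearest-neighbor index on top so you don’t score every document, but the ranking principle is the same.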

in my first term as a master’s student in zurich, two of my four graduate courses had pretty cool final projects (neuromorphic intelligence, 3d vision).
for neuromorphic intelligence, we demonstrated unsupervised learning of a topological structure on a mixed analog-digital spiking chip.

for 3d vision, we tackled two open workshop challenges for cvpr 2025: action recognition and action detection from egocentric video (epic-kitchens-100).
we implemented a temporal graph network classifier over dynamic scene graphs, which we constructed using vlms, state-of-the-art segmentation & object tracking, and our own graph representation of hand-object relations throughout a scene (we annotate individual frames, tracking object persistence as well as left- and right-hand interactions distinctly).
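to make the per-frame representation a bit more concrete, here’s a minimal sketch of the kind of structure involved: one small graph per annotated frame, with persistent object track ids as nodes and hand-object relations kept separate for each hand. the names, relation labels, and helper functions below are invented for illustration, not our actual project code.

```python
from dataclasses import dataclass, field

@dataclass
class FrameGraph:
    # one graph per annotated frame: nodes are tracked objects,
    # edges are hand-object relations, stored separately per hand
    frame_idx: int
    objects: set = field(default_factory=set)   # persistent object track ids
    left: dict = field(default_factory=dict)    # track id -> relation label
    right: dict = field(default_factory=dict)   # track id -> relation label

def add_interaction(graph, hand, track_id, relation):
    # record a hand-object edge, keeping left/right interactions distinct
    graph.objects.add(track_id)
    edges = graph.left if hand == "left" else graph.right
    edges[track_id] = relation

# a tiny two-frame "scene": the knife track persists across both frames,
# so a temporal model can link its nodes through time
scene = [FrameGraph(0), FrameGraph(1)]
add_interaction(scene[0], "right", "knife#3", "holding")
add_interaction(scene[1], "right", "knife#3", "holding")
add_interaction(scene[1], "left", "onion#7", "touching")
```

the temporal graph network then operates over the sequence of these graphs, using the shared track ids to connect nodes across frames.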

there’s an excerpt from a book that one of my close friends from waterloo left me before dropping out and seeking enlightenment and whatnot. truly some jobs-esque plotline, i know, but bear with me: almost two years later, i’ve been reading d.t. suzuki’s essays in zen buddhism, and very often i read something that gives me enough to think about for a day or even a week, which is quite rare these days.

“shadow follows a body and echo rises from a sound. he who in pursuit of the shadow tires out the body, does not know that the body produces the shadow; and he who attempts to stop an echo by raising his voice, does not understand that the voice is the cause of the echo. [in a similar way] he who seeks nirvāṇa by cutting desires and passions is to be likened to one who seeks a shadow apart from its original body; and he who aspires to buddhahood thinking it to be independent of the nature of sentient beings is to be likened to one who tries to listen to an echo by deadening its original sound.”

i’ll leave you with that for now. interpret it as you will.