July 2023

13 July:

Some musings about register allocation.

Before you read this, remember that this is a highly hypothetical scenario completely disconnected from how real CPUs work: registers need not be 8 bytes, there are caches for registers, and so on.

Consider a special variant of register allocation where, besides wanting to minimise the number of spills interprocedurally, we also want to put additional constraints on how the registers are allocated. For example, instead of using the first available temporary register, as many code generators for MIPS do (a strategy already further from optimal on x86 due to register requirements for div, etc…), we want to pick a register such that the last referenced register is the closest to the currently considered register. In particular, consider some function f(x, n, m) which models the runtime of the two-operand instruction currently being executed. Long division p <- p/q has computational complexity O(q^2), hence our function is f(x, n, m) = x*m^2, where x signifies the cost of the loads of p and q. Loading q again after having just loaded q is cheap (caching), but loading p after having loaded q, or vice versa, is more expensive. The cost x is defined as |R_p - R_q|, i.e. the distance between the two registers in the register file. This may come in useful in scenarios where registers are large and the machine has a multi-level cache that quickly evicts unused data and eagerly caches new data.

For example: div r1, r4 has the cost factor |1-4| = 3, applied to the worst-case (r4)^2 number of operations, so the instruction takes 3*(r4)^2 cycles. The cost factor of div r2, r1 is only |2-1| = 1, so that instruction takes just (r1)^2 cycles.
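Spelled out as code, the toy cost model looks like this (a sketch of the made-up machine, not anything real; the function name is mine):

```c
#include <stdint.h>
#include <stdlib.h>

/* Cycle cost of `div rp, rq` under the toy model: the register-file
 * distance |R_p - R_q| times the worst-case q^2 work of long division. */
uint64_t div_cost(int rp, int rq, uint64_t q) {
    uint64_t x = (uint64_t)abs(rp - rq); /* load cost factor */
    return x * q * q;
}
```

For the examples above: div_cost(1, 4, q) gives 3*q^2 cycles, while div_cost(2, 1, q) gives q^2.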

Hence the question posed is: what is the most efficient algorithm to model this particular register preference? The answer would probably also answer other, similar questions about register preference, which are ubiquitous on platforms where the exact register you choose for a particular variable does matter (e.g. x86, where certain instructions insist on taking input from or storing output to hardcoded registers).

A crucial thing to notice is that the problem of register allocation with a preference on the closest register available is essentially equivalent to a modified variation of the 1-D Travelling Salesman Problem where every node can be (and is encouraged to be) visited multiple times if possible!

It just so happens that compilers appear to emit low-numbered registers, but that’s due to preferential treatment of the volatile registers used by calling conventions (and then coalescing, etc.), since using a higher-numbered register (typically where the non-volatile/callee-save registers live) amounts to inserting a spill and a reload. In graph colouring, one could use a heuristic that selects free colours closest to an already selected colour. Hence the compiler backend developer’s solution to the problem would be to prioritise colours closest to the colours already assigned to the direct neighbours, assuming an ordering on the colours (obviously, colours are just numbers), to produce a non-optimal but relatively good result.
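As a sketch, that colour-selection heuristic could look like this (a hypothetical helper, not from any real allocator): `avail[c]` means colour c is not blocked by an interfering neighbour, and `assigned` holds the colours already given to neighbours.

```c
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/* Pick a free colour (register number) minimising the numeric distance
 * to the colours already assigned to the node's neighbours. Returns -1
 * if no colour is available (i.e. spill). */
int pick_colour(const bool *avail, int k, const int *assigned, int n) {
    int best = -1, bestd = INT_MAX;
    for (int c = 0; c < k; c++) {
        if (!avail[c]) continue;
        /* distance from c to the nearest assigned neighbour colour */
        int d = (n == 0) ? 0 : INT_MAX;
        for (int i = 0; i < n; i++) {
            int di = abs(c - assigned[i]);
            if (di < d) d = di;
        }
        if (d < bestd) { bestd = d; best = c; }
    }
    return best;
}
```

With 8 colours, colour 3 taken by a neighbour, the heuristic picks colour 2: the closest free one.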

Notice how similar this approach is to the nearest-neighbour approximate solution to the Travelling Salesman Problem. Hence, to connect the dots: I think this particular solution is the best trade-off between speed and how close the output is to optimal. An optimal analogue would be the exhaustive-search TSP solution, while a considerably less optimal but much faster (in terms of computational complexity) option could be applying the Christofides algorithm.

If you are still wondering what is the use of it, I have to disappoint you and refer you to reading and comprehending this article: https://esolangs.org/wiki/Brainfuck

6 July

A good example of this (re: “It’s not that computers are getting slower, it’s that software is becoming less efficient”) on a completely different layer is JS engines. Notice how something like V8 is extremely fast and complicated for what it does. If the web ran on QuickJS or PUC-Rio Lua it would be completely unusable. And all of this is because of how much awful, horrible JS code there is around, so instead of fixing the very obviously wrong code we simply make better interpreters/compilers, which in the long run is significantly more complicated.

Instead of putting in the effort to write high-quality optimising runtimes for functional or formally verified languages, which would actually push computing forward in the long run, we keep trying to make a 30-year-old language that is insignificant or even regressive from a theoretical standpoint run fast, because the code written in it sucks.

1 July:

Reddit is currently in its cadaveric spasm stage, Twitter is becoming the next Gab, Facebook is a nursing home already, Instagram is for brainless zoomers who miss their childhood, and TikTok is spyware and a data harvesting tool for the CCP with no real utility. YouTube is a clickbait sponsored-content view farm full of ads unless you use a desktop PC or a laptop with an adblocker. Google, StackExchange and Quora are flooded with AI-generated garbage quicker than they can be cleaned, zoomers don’t know how to use an RSS reader, and gen alpha can’t copy a file onto a thumb drive or explain where a downloaded file went, because hierarchical file systems are the new black magic.

Sad times are coming. I hope that you have enjoyed the public internet, because it’s all over.

Mobile devices are becoming more locked down, and it is currently happening to laptops too. There will be no demand for usable phones without ads in the system settings UI, because nobody cares except the select few who are interested in tech and whose knowledge spans a little beyond the Unix directory structure.

Is it good that tech is becoming more accessible? Certainly. But making tech more accessible does not mean removing all power-user tools and locking down the entire device “just in case”. This is not improving accessibility. This is an attempt at building a market monopoly where almost every internet-connected device is a trojan horse in the most literal sense.

Regarding the recent WHO decision to classify aspartame as a potential carcinogen, remember that, according to the medical literature, being a hairdresser or eating pickled food is also potentially carcinogenic.

https://pubmed.ncbi.nlm.nih.gov/19755396/ & https://aacrjournals.org/cebp/article/21/6/905/69347/Pickled-Food-and-Risk-of-Gastric-Cancer-a

June 2023

30 June:

I have just removed 99% of the JavaScript code that used to be on my website. The remaining 0.5KB is needed for the mobile navbar to work. So, technically speaking, my website now behaves exactly the same as if you disabled JS entirely. And it still has syntax highlighting in blog posts and math rendering. The only exception is the inherently dynamic subpages of my website (the SDL game ports, etc…; these obviously won’t work well without JS).

26 June:

Most people would never tolerate common scams in a physical setting, but if you make one small change to the technology being used, the mentality of some people changes.

This phenomenon of distancing layers via technology is actually really common; think of how many friends you have who would never fall for a traditional multi-level marketing scheme, a “get rich quick” scam, or a penny-stock pump and dump, but if you change the technology to, say, cryptocurrency, some of those red flags just subconsciously go away. I’ve seen real-life examples of people who, on one hand, are aware enough to say “all these influencers trying to shill this penny stock just want to pump and dump me”, but then later on say “yeah, I really do think that this doggy-themed token with no utility whatsoever is going to become 100x more valuable, so I’d better get in quick!”; and you might even know somebody who went “I’m not going to give this RuneScape scammer my go- oh my goodness, Obama’s doubling my Bitcoin on Twitter!!”.

So many new scams are just old scams with new technology, because of these very same psychological distancing barriers that we subconsciously create.

25 June:

It would be nice if IRC didn’t die and someone came up with ways to extend the protocol to support E2EE and other stuff.

I remember being a pretty happy and frequent IRC dweller back in maybe 2019, but it has only gone downhill since (in terms of activity, quality of discussion, the entire freenode drama, etc…), and because I haven’t made that many particularly good friends, I never ended up being invited into the mid-sized private networks which, to my knowledge, still thrive and do surprisingly well considering that they are IRC. I can only imagine a similar fate has met XMPP.

OTOH, most quality internet places are slowly moving away from centralised services and digging themselves underground. It’s getting harder and harder to tell AIs apart from humans, and some of my friends are particularly paranoid about their messages being used to train LLMs. The internet is slowly becoming a useless sea of spam again.

My main issues with Python are the GIL, the mess with venvs and other nonsense, and bad performance. Python itself is not very embeddable and lacks a proper “type” system. It would be nice if we had some sort of unintrusive typing system that would help catch a lot of the embarrassing mistakes we make while writing JS/Python/Lua/other untyped languages; I feel like gradual typing as in TS actually solves this problem pretty nicely!

A reasonably fast Lisp/Scheme built on top of a Lua-like VM with a generational GC and a JIT compiler, plus TS-style gradual typing and a ground-up implementation of a rich standard library that doesn’t make the programmer reinvent hashtables, graphs or linked hashsets, would also be nice. To me, what makes a scripting language good is reasonable (not C-level, but still okayish) performance, a substantial amount of software you can graft code from to speed up development (see: Python’s ecosystem), some sort of largely inference-based static verification with a minimal amount of decorators and other cruft to prevent certain classes of errors at runtime, etc.

22 June:

I have 3 job offers piled up and they are all so interesting… I literally have an interview in 9 minutes (at 4pm)!! The first offer is related to things that feel very interesting to me, on-site, with a reasonable salary; the second is considerably less interesting, but it’s remote and (probably) pays better. The third is being a TA. I am certainly going to do the first over the summer break, and after that I will probably both TA and work the second job alongside university. I am the least excited about the TA job, but free credit points are free credit points, and I won’t be able to TA after I graduate, so I might as well do it while I can.

20 June:

A gentle reminder of the still-unsolved issue that I have had with the Linux kernel ever since I started using an M.2 drive:

If you use LUKS to encrypt an M.2 SSD and then perform intensive I/O from within the system, it is going to lock up your entire machine. Impressive, isn’t it?

Debian has dm-crypt worker I/O queues enabled by default and they behave very poorly here: the kernel waits until they are full or near-full before trying to sync them to the disk, and with multiple queues all fighting for disk access, the disk dies under the load and the system locks up. Now, a Linux nerd is going to cry me a bucket of tears about how the queues are written perfectly with no flaws whatsoever. The problem is that I don’t care; whatever iotop shows is the truth revealed.
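For what it’s worth: newer cryptsetup (2.3.4+, if I remember right) can turn those workqueues off per device. Roughly something like this, where the mapping name is whatever your device is called:

```shell
# Disable the dm-crypt read/write workqueues for an open LUKS2 mapping
# ("cryptroot" here is a placeholder) and store the flags in the LUKS2
# header so they survive reboots. Flag names per cryptsetup >= 2.3.4;
# double-check against your version.
cryptsetup refresh cryptroot \
    --perf-no_read_workqueue \
    --perf-no_write_workqueue \
    --persistent
```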

I also can’t run any of my VirtualBox VMs, because of this bug in VBox reported 10 years ago: https://www.virtualbox.org/ticket/10031?cversion=0&cnum_hist=14. Obviously, VBox hates I/O latency and eventually gives up if access to the host’s storage takes too long, so the hypervisor turns back around to the guest and reports that the read/write is impossible, and the Windows instance in the machine randomly bluescreens.

Situations like these make me miss Windows pretty badly. Shame that W8, W10 and W11 are essentially spyware unless you go to great lengths to debloat them.

19 June:

Calling all C language lawyers for help: I am wondering whether empty structures are allowed or not, assuming either C99 or C11, whichever is more favourable. To quote the standard:

6.2.6.1 Except for bit-fields, objects are composed of contiguous sequences of one or more bytes, the number, order, and encoding of which are either explicitly specified or implementation-defined.

This could imply that an empty struct has a non-zero size (a C++-like behaviour); however, 6.2.5-20 says: “A structure type describes a sequentially allocated nonempty set of member objects”. So I thought that I could circumvent it the following way: struct { int dummy : 0; } variable;

One would have to remove the declarator, yielding struct { int : 0; } variable;, per 6.7.2.1.4: “The expression that specifies the width of a bit-field shall be an integer constant expression with a nonnegative value that does not exceed the width of an object of the type that would be specified were the colon and expression omitted. If the value is zero, the declaration shall have no declarator.”

So finally, whether we can have empty structs or not depends on whether int : 0 counts as a member object, but I can’t find anything conclusive on the matter. I have already observed that the C standard treats zero-size bit-fields specially, but the only relevant bit of information I could find was 6.7.2.1.12: “As a special case, a bit-field structure member with a width of 0 indicates that no further bit-field is to be packed into the unit in which the previous bit-field, if any, was placed.”

Any ideas?

N.B. the wording in 6.7.2.1.4 says “non-negative”, not “positive”, meaning that the width of 0 is technically allowed as a “normal” width.

18 June:

One thing that I feel very thankful for is my university helping me to completely 180 my opinion of academia: from “it would be cool to be a uni-affiliated researcher after I am done with my studies” to open resentment.

My opinion is rather long and nuanced, but my main issues are with publishing and with how hierarchical academia is, especially when you look at freshly graduated M.Sc.’s and B.Sc.’s who were by chance allowed to TA B.Sc.-level classes.

In academia, you don’t even get to validate the results of “novel algorithms” yourself, let alone get any timings. Papers usually publish no source code and no derivations: nothing. I wonder where the outrage is over the piss-poor state of academic research, where you keep seeing lots of papers just rehashing known ideas, calling them new, presenting claims of state of the art without even checking what’s out there, and providing no way to reproduce any of it. In image compression, most are just based on MATLAB simulations and cherry-picked results on 4 or 5 low-resolution images. It doesn’t help that most reviewers for scientific journals are completely illiterate when it comes to data compression.

A lot of the papers are cryptic and seem difficult to implement, and there is no way to verify the claims, so you have to trust the charts and tables. Even if there were a reviewer motivated by hell knows what (reviewers are usually unpaid), the paper authors could corner him by saying that his re-implementation of their highly abstract paper is inefficient, so the timings or memory usage don’t compare. And of course, nobody in academia ever publishes source code, because by doing so you basically hand a loaded gun to the reviewer: it’s simple to go through some ready-made code, compile it and verify it, rather than follow a dense math expression that looks like an 8th grader learning about “complex math notation” and try to reason about whether it’s practical or not. Not including the code makes it less likely for your fraud to be discovered.

This problem isn’t even exclusive to data compression, but it’s especially prevalent there. Many medical paper frauds aren’t properly uncovered for decades, even though clinical trials can usually be repeated in just about every hospital around the world. The number of people who want to go through a poorly typeset 70-page paper in some niche journal, full of math gibberish, that promises to turn lead into gold and grow spaghetti trees is probably close to zero.

When you publish something, what you want on your CV is usually the only thing that matters. The only people who will read and attempt to understand your paper will probably be you, the reviewer, and that one random hobbyist who found it on Google Scholar and gave up when he saw the 3rd page.

May 2023

5 May:

This has to be the most curiosity-inducing error message that I have seen in a long while.

In static member function ‘static constexpr std::char_traits<char>::char_type* std::char_traits<char>::copy(char_type*, const char_type*, std::size_t)’,
   inlined from ‘static constexpr void std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::_S_copy(_CharT*, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:423:21,
   [...]
/usr/include/c++/12/bits/char_traits.h:431:56: warning: ‘void* __builtin_memcpy(void*, const void*, long unsigned int)’ accessing 9223372036854775810 or more bytes at offsets -4611686018427387902 and [-4611686018427387903, 4611686018427387904] may overlap up to 9223372036854775813 bytes at offset -3 [-Wrestrict]
 431 |         return static_cast<char_type*>(__builtin_memcpy(__s1, __s2, __n));
     |                                        ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~

Turns out that this is actually a compiler bug (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105329) and the comments on this thread are awesome. Creating a new string instance using the (size_t n, char c) constructor basically causes the warnings to break due to some issue with Ranger; it’s been a known bug for two minor versions of GCC, and there is no good fix for it. And you need a PhD in compiler design to understand why.

April 2023

18 April:

Realisation: When I was working on my ELF infector post, I once didn’t properly sandbox it from the rest of my system. This way I accidentally got cargo and a bunch of other binaries infected, and when they randomly started behaving weirdly, I finally figured out what was going on and reinstalled Rust.

14 April:

One way to allow RC without a cycle collector would be to enforce a unidirectional heap. And the actor model will surely help in avoiding atomics too.
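To make the unidirectional-heap point concrete, a tiny hypothetical sketch (all names made up): if every object may only reference objects allocated before it, the reference graph is a DAG, so no cycle can ever form and plain refcounting reclaims everything.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct obj {
    int rc;
    struct obj *ref;   /* may only point at an older object */
    int serial;        /* allocation order, to check the invariant */
} obj;

static int next_serial = 0;

obj *obj_new(obj *target) {
    obj *o = malloc(sizeof *o);
    o->rc = 1;
    o->serial = next_serial++;
    /* unidirectionality: references only go "backwards" in time */
    assert(!target || target->serial < o->serial);
    if (target) target->rc++;
    o->ref = target;
    return o;
}

void obj_release(obj *o) {
    /* no cycles possible, so this simple cascade frees everything */
    while (o && --o->rc == 0) {
        obj *next = o->ref;
        free(o);
        o = next;
    }
}
```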

13 April:

Seems like bzip3 wipes the floor with LZMA when it comes to Java class files. What if I told you that the average jar you see online could be 2x-4x smaller?

 94136320 lisp.tar
  6827813 lisp.tar.bz3
 13984737 lisp.tar.gz
  7693715 lisp.tar.lzma
  8292311 lisp.tar.zst
130934896 total

February 2023

22 February:

Day ∞ of people telling me to rewrite KamilaLisp from Java to Rust, C++, JS or [insert your favourite language here].

NO.

Frankly, none of these people seem to understand how language choice affects a project. There are many languages, and they are fundamentally different in ways that affect the entire design, prototyping and development process. Java has a good garbage collector (in fact, one of the best ones available to my knowledge: a concurrent, regionalised and generational garbage collector that meets a soft real-time goal with high probability, or Shenandoah with sub-millisecond pauses for multi-gigabyte heaps according to Shipilëv), meaning that in my tree-walk interpreter workloads (which adhere to the generational hypothesis) it performs substantially better than the standard non-generational Cheney garbage collectors present in functional languages (I’m thinking of Idris, because some madcap suggested I rewrite KamilaLisp in it), or atomic reference counting, which would need a hand-made cycle collector anyway. Java, for my purposes, is comfortable and flexible enough to implement my designs as I like (e.g. chaining I/O streams, collections, etc…) while focusing on what needs to be implemented, not on the various roadblocks that make it difficult (oops, my linked list destructor just overflowed the stack! oops, circular reference! oops, making a GUI takes forever! oops, Arc’s atomic inc/dec takes 70% of my runtime!). I have implemented a full hybrid tiling and floating window manager and a terminal emulator, all inside a remote IDE that communicates with KamilaLisp instances over the network, with a platform-agnostic UI, in two days. And my application still stays portable as a single JAR that you can execute on Windows, Linux, macOS and your friend’s old AMD Athlon machine from 2010 with no changes.

Which brings me to the final point. If you pester OSS maintainers to [rewrite X in Y], most of the time you are just an idiot. And on top of that, from my experience, people like this rarely get anything worthwhile done and constantly yak-shave even the simplest design decisions. Language choice is usually well reasoned in the mind of the main developer, and there are usually reasons why we don’t rewrite our entire 40k lines/2 megabytes of code in [popular modern language with lots of hype around it]. There is not and will not be a place for people who suggest that I alter fundamental design decisions of my projects with little to no reasoning behind it. Nowadays I just block people like this on sight, because I assume upfront that they’re not worth my time, and I never regret it.

16 February:

It’s alive…

--> cas:taylor x 0 (cas:fn x \sin x)
ƒ(x)=(- x (+ (+ (* (/ 1 6) (** x 3)) (* (/ 1 120) (** x 5))) (o (** x 6))))
--> cas:integral (cas:taylor x 0 (cas:fn x \sin x)) dx
ƒ(x)=(- (- (* (/ 1 720) (** x 6)) (+ (* (/ 1 24) (** x 4)) (* (* (/ 1 2) x) x))))

13 February:

I have decided to decommission Lovelace (the i5-7400 16G server) and sell it. Primarily because it’s not very power efficient and I can move my services to VPSes anyway.

9 February:

An interesting method of computing permutation parity using the 3-cycle composition property:

// Returns the parity of permutation p of length n: 0 if even, 1 if odd.
// Fixes up positions left to right using 3-cycles (which are themselves
// even permutations, so they never change the parity); whatever mismatch
// remains in the last two slots is the answer.
bool parity(int * p, int n) {
    if (n == 1) return 0;
    int C[n], I[n]; // C: working permutation, I: its inverse
    for(int i = 0; i < n; i++)
        C[i] = i, I[i] = i;
    for (int i = 0; i < n - 2; i++) {
        if (C[i] != p[i]) {
            // 3-cycle the values at positions i, j and one of the last
            // two slots so that C[i] becomes p[i].
            int j = I[p[i]];
            int tmp = C[i];
            C[i] = C[j];
            int k = j == n - 1; // avoid reusing position j itself
            C[j] = C[n - 1 - k];
            C[n - 1 - k] = tmp;
            I[C[n - 1 - k]] = n - 1 - k;
            I[C[j]] = j;
            I[C[i]] = i;
        }
    }
    // C and p now agree on the first n-2 entries; the last two either
    // match (even parity) or are swapped (odd parity).
    return C[n - 1] != p[n - 1];
}

Unsurprisingly, it translates horribly into array logic languages. I wonder how I would implement it in my Lisp…

January 2023

14 January:

A (hopefully) interesting idea: a virtual machine with a rich standard library and instruction set; procedural, functional, based on the Actor model. I plan to use only reference counting and cycle collection, and to have it be variable-based (no manual register allocation and no stack to make matters worse). Fully immutable, but it’s possible to implement functional data structures using a cute mechanism built into the interpreter. Likely JIT-compiled using either Cranelift or LLVM. Can send code over the LAN or even the Internet for transparently parallel execution. Provides some cute utilities for number crunching; completely statically typed, and ideally the code is monomorphised before being passed to the VM.

December 2022

31 December:

Dealing with open source software is depressing.

Idiot 1 creates library 1, which outperforms library 2, so you spend a while creating software around it that exploits its performance. Then you realise that idiot 1 is stupid and there’s undefined behaviour all over the library. You report the bug, but idiot 1 closes the issue saying that a segfault on crafted invalid input is expected behaviour. So then you fix some of the UB yourself, and eventually stumble upon a problem that doesn’t seem fixable, and you’re 95% sure that fixing it ruins the performance gain the library provides.

Curtain.

20 December:

I hate programmers who have very big mouths and tunnel-vision eyes that together form the most radical and nonsensical views I have seen in my life. And nothing to back their redundant opinions with.

8 December:

Having implemented the Lerch transcendent and the Riemann zeta, now it’s time for the Hurwitz zeta. Technically speaking, the Lerch transcendent is a generalisation of the Hurwitz zeta, so that ζ(s,n)=L(0,n,s)=Φ(1,s,n); however, my implementation of the Lerch phi (which is still not as efficient as I’d like…) computes an upper incomplete gamma function value as a factor in the final result, and when z=1 and a≠1 we stumble upon a funny case where the upper incomplete gamma function has a complex pole /yet/ the Lerch phi is defined at that point (as, of course, is the Hurwitz zeta).

The game plan now is to implement a somewhat general Euler–Maclaurin summation function and derive the formula for the n-th derivative of the Hurwitz zeta function with respect to s (which should obviously be trivial) to speed up the “general” method.
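For reference, the Euler–Maclaurin expansion I have in mind is the standard one (my notation: N terms summed directly, M correction terms, B_{2j} the Bernoulli numbers):

```latex
\zeta(s, a) \approx \sum_{k=0}^{N-1} \frac{1}{(k+a)^s}
  + \frac{(N+a)^{1-s}}{s-1}
  + \frac{1}{2\,(N+a)^s}
  + \sum_{j=1}^{M} \frac{B_{2j}}{(2j)!} \cdot
      \frac{(s)_{2j-1}}{(N+a)^{s+2j-1}}
```

where (s)_{2j-1} = s(s+1)\cdots(s+2j-2) is the rising factorial. Differentiating this termwise with respect to s is exactly what gives the derivatives mentioned above.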

This will have an interesting consequence: we will be able to compute an arbitrary derivative of the Hurwitz zeta at any point we wish, meaning that computing the Glaisher constant, which is defined in terms of the derivative of the zeta at an integer point, will become attainable.

The pieces of the puzzle in SciJava are slowly coming together.

5 December:

I feel in a way that’s unusually tricky to describe. University is eating away some of my time. My projects, my bf, whom I love a lot, and many other things presently require lots of effort from me. I find myself not having time for friends, which fucking sucks. I sense that many of my friends have drifted apart, and I don’t like it. If I had to guess why, it probably happened after the spring or summer of this year, when I had surgery. I was bed-bound for a few weeks and primarily spent my time recovering. Before the surgery, I usually worked on some unpublished code and talked to maybe three or four people regularly. Either way, my friend circle went from “ok” to “rather narrow”, so I am asking for advice. How would you reconnect with your old friends? How would you try to rebuild your friend circle, look for the right people and connect with them? I should talk to others more, but I never find reasons to go out of my way and DM someone. I have quite a few IRL friends whom I often meet these days, but I’m longing to have many internet friends again. There’s something special and cosmopolitan about it.

November 2022

11 November:

I tasted vanilla chai tea for the first time in my life! And the first thing I did after coming back home from the uni cafe was to try making it myself at home. Turns out I packaged some ginger when I moved, I had some cinnamon, and I bought honey on my way home. I made it, and the taste was rather mild; I realised that the tea I brewed from the leaves was too strong. Better luck tomorrow, I hope.

October 2022

30 October:

A collection of more or less related thoughts on computer science and programming from this week:

  • Haxe is actually a pretty cool language. I always had a vision of an operating system powered by a safe, managed language like C# or Java without the clunkiness of either. I have been secretly working on a project that works similarly to Haxe; however, it’s meant to ideally be compiled (in some way, just-in-time or ahead-of-time) through some common back-end API instead of to multiple programming languages. So far I am slightly stuck on solidifying the shape of this endeavour in my head.
  • Compromises, compromises, compromises. The single word that burrows into my ears every time I do anything related to data compression. Because ar-mrzip, a sub-project of my larger project, which is a fork of lrzip (don’t ask), uses a locality-sensitive hash (think Nilsimsa) to order the i-nodes to improve compression, performance is hauled down by the phase that actually orders the i-nodes. Clearly the entire problem is just a rephrasing of the travelling salesman problem, so I decided to implement an approximate solution to it using nearest-neighbour search, with the distance between TLSH checksums in bytes as the factor determining similarity. It’s not nearly as fast as I would like, though. I have achieved the theoretical O(n^2) complexity, which isn’t as bad as it could be, but I should probably do some prior clustering (maybe by population count?) or lower the number of checksum bytes, slightly decreasing the quality of the similarity measure but greatly increasing performance. All of these solutions are trade-offs in one way or another, which makes them insanely hard for me to evaluate. I would have to test how much the ratio would be hurt by a worse yet faster approximation to accurately measure what I’m dealing with.
  • Oh my god, I’ve made a big mess with bzip3. Turns out the Fedora and Debian people want to package bzip3 for their distributions, which means they will unavoidably start finding issues; I was expecting build-system issues, which I would not be able to expertly fix. Thankfully (or not), their issues were easily fixable by me, but they were rather serious. Turns out, the Apache 2.0 license requires you to redistribute the entire license text alongside the software. And bzip3 did not do that, even though I am using libsais (licensed under Apache). I simply thought that mentioning the author and preserving the copyright notices would be enough, but this was a slightly unpleasant surprise, because I had to release a new version /a day/ after releasing another new version to quickly patch up the issue and bundle the Apache license with my code. Another rather annoying thing (at least for the maintainers) is the v1.2.0 release, which I am planning for some time next week, because enough features have accumulated in the development branch already. Sorry.
  • Recently I’ve learned about LLVM’s VMKit and I’m so sad that it’s no longer a thing. Rolling your own garbage collector is so overrated, and it’ll always end up being worse than state-of-the-art garbage collection (read: whatever the JVM has). That said, ideas are being born in my head, so maybe we can make everything work with an excessive amount of C++ code.
  • WebAssembly is kind of cool. It’s a shame that the GC proposal hasn’t gone through yet. It feels wrong to say this, but WebAssembly looks like a cool platform for programming languages, maybe sans the entire web bullshit. I’ve always dreamed of a (relatively) high-level common runtime for many languages. Imagine how cool that would be.
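The nearest-neighbour i-node ordering from the second bullet can be sketched like this (purely illustrative: the checksum length H, and the names dist/nn_order, are made up for the sketch; real TLSH digests and distance metrics differ):

```c
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define H 32 /* checksum length in bytes, illustrative */

/* Hamming distance between two checksums, in bits. */
static int dist(const uint8_t *a, const uint8_t *b) {
    int d = 0;
    for (int i = 0; i < H; i++)
        d += __builtin_popcount(a[i] ^ b[i]);
    return d;
}

/* Greedy nearest-neighbour ordering: order[] becomes a permutation of
 * 0..n-1 such that each checksum is followed by its closest unvisited
 * neighbour. O(n^2) distance evaluations, as mentioned above. */
void nn_order(const uint8_t (*sum)[H], int n, int *order) {
    bool used[n];
    memset(used, 0, sizeof used);
    order[0] = 0; used[0] = true;
    for (int k = 1; k < n; k++) {
        int best = -1, bestd = INT_MAX;
        for (int j = 0; j < n; j++) {
            if (used[j]) continue;
            int d = dist(sum[order[k - 1]], sum[j]);
            if (d < bestd) { bestd = d; best = j; }
        }
        order[k] = best; used[best] = true;
    }
}
```

Clustering (or truncating the checksums) would shrink the inner loop, at the cost of the similarity measure's quality.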

PS: I rarely proofread my posts. The grammatical and spelling errors are provided as is, without any warranty.

29 October:

It feels sort of cathartic to scroll through my old posts here. I like looking back at thoughts of mine that are tame enough to post in public, and I like seeing how much the circumstances have changed since the posts that introduced some of my projects. From what I’ve noticed, a well-maintained Fedi account is actually way simpler to keep clean than the same Twitter account. It’s great that Fedi accommodates tentative wordsmiths with a relatively long message length limit.

In other news: I have pieced together a small calendar plan for myself until the end of the year. I should probably incorporate some sort of calendar application into my daily life instead of trying to remember it all and relying on notes to remind me of things.

Speaking of notes, I’m slowly getting accustomed to my new university. There’s also a slim chance that some people who know me IRL read this! Our campus is super large, but it turns out that most of my lectures will take place in the largest lecture halls in just a single block of the university, and the university cafe seems to be in the same group of buildings too. I have noticed that most of the people in non-programming-related lectures usually take notes on iPads or other devices (such as a reMarkable). I chose to stick with my ThinkPad, a note-taking app and my flawless LaTeX skills.

To speak of some frustrations I’ve had: I can’t believe how awful women’s clothing tends to be in terms of quality, and I can’t believe that I haven’t noticed this issue before. I know plenty of women who shop in the men’s section, and I can absolutely understand that. Most of the clothes I own are certainly overpriced and not that durable, but they’re pretty comfortable and stylish, which is probably the main selling point. It also seems like I don’t vibe with the style of people here, because it always takes me hours upon hours to find something I really like in the retailers I sometimes visit on my way home.

28 October:

Words cannot describe my frustration with asm.js. Why do I have to copy my buffers back and forth between the Emscripten heap and the normal heap? Surely just passing the JS buffer should work, but the ticket in the Emscripten repository that would allow this has been open for years now, so I have the choice between publishing absolute garbage that will run at maybe 5-10% of the speed of the native version and use 10 times more memory, or just forgetting that I ever wanted to make a web port of my thing.

August 2022

30 August:

Taking another look at it, I feel like MalbolgeLISP (especially v1.2) might be the best thing I’ve made in my life. It’s so weird to think that it’s been a year now…

29 August:

My ThinkPad arrived today! Just an E14 with a Zen 3 Ryzen 5 and 16G of RAM. I initially had a few issues with making it boot off USB and getting networking to work, but now it works pretty well and I’m happy with it.

May 2022

23 May:

Good and bad news!

First, my compressor is asymmetric now. The bad news is that it’s asymmetric the wrong way around: compression is quite a bit faster than decompression…

4 May:

Once in a while, the circumstances allow one to use the goes-to operator…

uint32_t log2 = (run_length <= 256)
    ? constant::LOG2[run_length - 1]
    : (31 - __builtin_clz(run_length));
if (dst_idx >= count - log2)
    { res = false; break; }
while (log2 --> 0) // The famous "goes to" operator: log2-- > 0.
    dst[dst_idx++] = (run_length >> log2) & 1;
continue;