20 years ago I decided that I wanted a stable identity on the Internet. I had used free email accounts and web hosting for a few years, but those services, especially the web hosting, were fragile and could change at any time. Besides, I had seen with my main email account at RocketMail that free services can change in ways we don’t like –RocketMail was acquired by Yahoo!–.
So it was time. I wanted to have a website that didn’t change, without ads, that depended solely on myself. I asked a couple of friends if they wanted to be part of it, I guess a bit in the tilde spirit.
They didn’t really care, and I can’t remember if they ever contributed any money to help with the costs, but this is how the website looked on the first day (in perfect Spanish!).
Later on some more friends had email on the server –for a while, at least–, and the services have been online ever since. It all started with a hosted service at Arsys, then a small server at OVH –I can’t remember if it was a VPS–, then a miniserver at Memset –just a VPS with Xen–, and today a droplet at DigitalOcean –just call it a VPS already!–.
At some point there were even subdomains, like the old blackshell.usebox.net, where I hosted my personal weblog in Spanish, but the main website has had different uses over the years. For a while it was my business card for my freelance work, but in the end it is just a redirect to my personal website, because that is basically what usebox.net is.
My infrastructure has grown a bit in 20 years, and the prices have gone down. It has never been this easy –and cheap– to have a personal server on the Internet, yet every time some business asks for your email address over the phone there is surprise, because it isn’t an address on one of the big providers.
Anyway, here we are and hopefully here we go for another 20!
I made a maintenance release of my ubox MSX libs back in April, which updated one dependency and the docs after I ran some tests with the newest SDCC. It was the first release of 2022, because the project is mature and stable –although there’s interest in making changes–.
Since then, I made another round of dependency updates, with special attention to rasm, because this excellent assembler has changed the way it builds, and it required a bit of work to move to the latest version. So yesterday I thought I would make a release, so those changes wouldn’t sit in the main branch without being “official”; and then I found a small bug that needed fixing.
It was an interesting one. Some of the Python tools that convert PNG images to the different data formats used on the MSX were using a set, and it looks like in the Python version that comes with my current Debian (3.9.2), the order of the elements has changed. Long story short, the sprite of the player character in Green –the demo game– was not green any more. It was easy to fix, and I decided to make some changes to the project website so it looks marginally better and gives visibility to the fix.
And then I was checking some files at random and I ended up reading the TODO file, and without noticing I started implementing CAS support.
The CAS files are a representation of cassette data used by most emulators. You can load them on your emulator, or convert them to WAV –if not play them directly– and load them on the actual hardware. Most MSX aficionados will have more sophisticated ways of loading software, but if you don’t, the CAS files are the easiest –and cheapest– way of loading homebrew games.
The official take of the community is that “cartridges are best”, and I agree; but it is also true that by releasing my games in CAS format –as well as in ROM format–, more people had a chance to play them –including some extreme cases of models with 16K of RAM and two memory expansions so that, for example, Uchūsen Gamma would load–.
Some time ago I released my mkcas tool as OSS; I wrote it to generate the CAS files of my games, and in the case of ubox MSX libs it was just a matter of adding the tool to the build pipeline so the user could get the ROM file –just a cartridge image– and, optionally, a CAS version of the same code. Including a loading screen, of course.
The CAS file uses a multi-stage loader:
Firstly, a short BASIC program is loaded that will load and execute a binary loader.
Then the binary loader loads two blocks, using the BIOS functions and some fancy code to show multi-colour loading bars.
The first block is the compressed loading screen, which is decompressed and uploaded to the VDP, so there’s something nice to watch while we wait.
The second block is the ROM itself, compressed as well. It is decompressed and set up as if it were running from ROM, but in RAM.
The only tricky part is configuring the slots so we have ROM, RAM, RAM and RAM; and that’s why I made this optional, because it will not work on machines without enough memory.
To start with, it requires more RAM than the cartridge: 32K more, which is the size of the ROM. There are also other limitations –like the compressed ROM having to be less than 24576 bytes–, but all in all I think having a CAS file as an extra is totally worth it (based, as I say, on my experience with my games).
I spent way too much time reviewing the code because I forgot that any MSX model with a disk drive needs to be booted holding the shift key to disable the disk BIOS, or the BIOS will use some memory that the loader needs; but it wasn’t too complicated in the end. I’m happy with the result!
I think I was reading about Oberon –the programming language, not the operating system– because of a post on Hacker News, and I ended up on Niklaus Wirth’s page.
Wirth was the chief designer of the programming languages Euler (1965), PL360 (1966), ALGOL W (1966), Pascal (1970), Modula (1975), Modula-2 (1978), Oberon (1987), Oberon-2 (1991), and Oberon-07 (2007) –according to Wikipedia–, among other things.
That is truly remarkable. So I was looking around and I found that his website has PDFs describing the implementation of two compilers:
Compiler Construction, where Wirth builds a compiler for Oberon-0 (a subset of Oberon) for a RISC CPU –which is the type of simplification you usually find in books that build a compiler, but this one at least is register based–.
The code is very readable, and not just because I had to write some Pascal back at university –and Oberon is a descendant of that language–. There are things missing, but it is mostly refinement of error reporting and the like. Everything that is important is there.
For example, the code generation of the multiplication for Oberon-0:
PROCEDURE MulOp*(VAR x, y: Item);  (* x := x * y *)
BEGIN
  IF (x.mode = Const) & (y.mode = Const) THEN x.a := x.a * y.a
  ELSIF (y.mode = Const) & (y.a = 2) THEN load(x); Put1(Lsl, x.r, x.r, 1)
  ELSIF y.mode = Const THEN load(x); Put1(Mul, x.r, x.r, y.a)
  ELSIF x.mode = Const THEN load(y); Put1(Mul, y.r, y.r, x.a); x.mode := Reg; x.r := y.r
  ELSE load(x); load(y); Put0(Mul, RH-2, x.r, y.r); DEC(RH); x.r := RH-1
  END
END MulOp;
It even shows constant folding –when both operands are constants–. The resulting code may not be super-optimized, but it is a full example at an easy-to-understand size.
I skimmed through the “Compiler Construction” book during my holidays and it is not too long. I’m looking forward to reading it properly when I have some time.
Last night I was finally reading A Scheme Primer, after weeks of having it open in a tab on my phone. Firefox for Android reloads the tab every time I select it, so I thought: why not have a local copy that I can read even if I’m offline or the page is not available?
Well, it turns out that Firefox can’t save the page. Not as HTML, not as a full website, not as a PDF; it basically lacks the functionality. Which is a trend with Mozilla, to be honest. Chrome is terrible from the point of view of privacy and I don’t want to play into Google’s hands, but Firefox is great at keeping features just barely working, enough to ensure they are never successful, which is a real shame.
I looked around and I found a developer saying back in 2012 that the functionality would be added, but they hadn’t had time yet. I’m not holding my breath.
So I tried Chrome, which does allow saving the page, but then when I try to open it using the “Files” app, it doesn’t know how to open the file and wants to search for an app to install. Brilliant.
You can print to a PDF –with Chrome; in Firefox I can’t find how to do it, so it may not be possible–, but the PDF reader is not ideal, as the text doesn’t reflow to use the phone screen efficiently. There must be a better way, right?
It isn’t an out-of-the-box solution, but this is what I ended up doing:
Download the page as HTML in my desktop (using Firefox).
Convert the page to epub format using the always powerful pandoc:
pandoc --from=html --to=epub 'A Scheme Primer.html' -o scheme-primer.epub
Install ReadEra –which is free, but I’m considering buying the premium version because it is a great reader!–.
Copy the epub, and now I have the Scheme tutorial to read off-line anytime I want.
I don’t know if it is the website strictly following standards without much CSS, or pandoc being amazing, or the ebook reader being great, but the result is much better than I was expecting. It feels like an actual ebook.
When I’m away from my desktop PC I already have two apps from the public library to read magazines and ebooks, listen to audiobooks, and whatnot, and I don’t use them much because… I guess what I wanted was to read about Scheme instead. I’m not a heavy app user: other than Podcast Addict and email, I tend to use the browser for everything else.
I’m glad I still have some options, but in reality, the ideal would have been for Firefox on Android to be a better browser.
Shortly after I blogged about playing Pokémon, we stopped. We got some Pokémon to pass level 20, but the kids lost interest –I guess the combats are a bit samey–.
So we started something completely different, or maybe not that different: it is not an RPG, but Lenna’s Inception –from our pile of unfinished games– gets a lot of inspiration from games like this one. We are playing ‘The Legend of Zelda: The Minish Cap’, and it looks like we are close to finishing it –according to an online guide, at least–.
This is one of the Capcom Zeldas from 2004, and it is heavy on puzzles. The gimmick of this one is a magical talking cap that can shrink Link to the size of the Minish, a type of tiny magical creature that only children can see.
The game is your prototypical Zelda: it introduces new objects that solve new types of puzzles, which let you explore new areas to progress the story. It also has a good number of small side quests that can be unlocked by “fusing Kinstones” with the inhabitants of Hyrule. And it also includes rescuing Zelda from a curse, of course.
The game looks fantastic on the GBA and we are playing it emulated on a big screen. The bosses are interesting and fun, and considering that this is probably the furthest I have got in a Zelda game, I would say it is not too difficult –even if sometimes it is very obtuse; in those cases I would suggest checking a guide: we look only at the screenshots and that’s enough for a spoiler-free experience–.
In general I would say the game has aged pretty well. I recall some complaints about swapping objects too much, including in the sequences to defeat some of the bosses, and sometimes it is not clear at all what to do or where, including using a bomb in a specific place –how classic!–; but considering Minish Cap was designed to be played on a handheld, and that we have the Internet to get unstuck, in my opinion this game is very enjoyable by today’s standards and I totally recommend it.
Hopefully we won’t lose interest and we will finish it, for a change!
Since the repo is not on GitHub, which basically wipes out any chance of anyone discovering the project accidentally –at least on GitHub it would pop up in my followers’ feed–, I thought I would use this blog to make it “public”.
I’m learning Haskell and I have decided to use the same project I used to re-learn Go, which again is huge and it is unlikely I will finish it, but there you go! I’m focusing on the compiler part, as I already scratched the interpreter itch.
The public repo is accessible at micro2-lang and the usual disclaimers apply –especially as I’m a Haskell newbie–. I know the name isn’t great; I may change it later on.
So far it has been a lot of fun. I’m using parser combinators via parsec and it is amazing how much functionality I get with a tiny fraction of the code I wrote in Micro. There are some downsides, though. For example, the error messages are a bit worse.
For example, this program has an error:
module main

// just add two numbers
def add(a: u8, b: u8): u8 {
    return a + b
}

add(3, 2); // 10
The statements in Micro2 are delimited by semicolons, so the compiler reports:
Which is not terrible, but it is not great either. Serviceable, I would say. Considering the objectives of the project, this is absolutely fine.
I am currently working on the type checking and semantic analysis, which is really still parsing, but can’t all be implemented with parsec (it knows what a “return” looks like, but it can’t tell whether it is used inside a function and returns the right type).
As I have done it already in Go, it is not too complicated, and I’m getting comfortable with Haskell and its monads. The hardest part, I suspect, will be the code generation, and I’m looking forward to it!
I tried to learn Haskell about a year ago. Which basically meant ordering a book (“Programming in Haskell” by Graham Hutton) via my employer’s training budget, only to rage-quit after reading about 100 pages. It was probably not the book’s fault: I wasn’t really focused, so I moved on to other things.
Since then I have returned to Go and written a non-trivial amount of code, and it was alright. At some point I stopped thinking about the language and wrote code fluently, and that’s a good thing, even if I found Go itself a bit boring.
So in my search for a new exciting language to add to my toolset, I was reading a bit about Scheme –again, I know–, and long story short, I decided to go back to Haskell. I’m now reading Learn You a Haskell for Great Good!, which has the advantage of a free version that can be read online, and it is more direct than the other book I have. And I’m enjoying it!
I’m still on chapter 5 (of 14), so it is a bit early, but so far I’m finding it very similar to Scala –with the Typelevel stack, see Cats and Cats Effect–, and you can tell that Haskell has been a great influence –assuming it is not the other way around, because in some cases Scala seems to take the features a bit further–.
Reading technical books is hard because, at least for me, it feels like it is a lot of information that I don’t seem to retain, until I need to apply that information and then it surfaces –most of the time at least–. So now that it felt like I knew some Haskell, and following the Scala similarities, I wrote a small command line tool: Tomato.
It is a very simple Pomodoro tool –in reality just a timer– inspired by pomo, written in Go by Rob Muhlestein. I think I saw it in one of his streams and I thought it was neat. It is a simple tool, but it already has some meat to it, including input/output, which is perfect for practicing some of the tricky bits of Haskell.
And the experience was very nice. Most of my problems were with some syntax I don’t know yet, and especially with a couple of libraries I had to use. They are documented, but perhaps I missed some examples.
For example, I spent way too much time figuring out how to add minutes to a UTC time.
Let’s look at this function:
doStart :: String -> Integer -> IO ()
doStart stateFile mins = do
    currTime <- getCurrentTime
    let deadline = addUTCTime secs currTime
    writeFile stateFile (show deadline)
    putStrLn ("Started, end on " ++ show deadline)
  where
    secs = secondsToNominalDiffTime (60 * fromInteger mins)
It creates the state file containing the UTC deadline for the timer. I initially had the minutes as an Int –and not an Integer; the difference is subtle–, and secondsToNominalDiffTime requires a Pico type to produce the NominalDiffTime instance required by addUTCTime. Sometimes you can have this same problem in Scala, which I call type-matching bingo. The solution is clean, but oh, it wasn’t easy to get there!
The library for dealing with the program arguments gave me some trouble as well, even though the docs have two good examples. But in that case only I was to blame, and when the function signature clicked, I understood how the defaults for the options worked. It was all because for the state file I’m using XDG_CACHE_HOME, and the function that gets that directory returns an IO value, so I couldn’t use any of the examples exactly as written.
The similarities with Scala are definitely a good thing: Option is Maybe (with Some -> Just and None -> Nothing), Either is the same, pattern matching is very similar –although Scala seems more flexible; it could be me not knowing all the tricks in Haskell though–, the IO Monad in Cats Effect is modelled after Haskell’s IO, etc.
The LSP support via haskell-language-server is good, although I tried to do a rename and it wasn’t supported. The docs are good, and I specially like that on hover you see the docs and a link to a local HTML file with the full docs of that package.
Perhaps the discoverability of Haskell is worse than Scala’s, because of the syntax. In Scala you have a value and you can inspect its methods with the help of your editor and LSP just by typing value., but in Haskell it seems to me that everything is a function, so the dot trick is not possible. So in the end it is a bit overwhelming to have all that API without really knowing it; good luck finding secondsToNominalDiffTime to start with!
You can see that the book’s intro to lists includes a lot of functions that you will probably need to learn eventually, and this is also a bit of a problem with Scala. You are trying to solve a problem, and the cleanest and most elegant solution is in a function you don’t know about.
All these are just first impressions, and I’m sure I wrote some code that is a bit beyond my current knowledge of the language, but it wasn’t too frustrating and it only took me a bit longer than it should have. I found myself thinking about Scala Native, because of those bits that feel a bit less flexible in Haskell than in Scala, but I have to say that I’m interested and I’ll keep going!
This is a short announcement: I have made “Micro”, my toy programming language, public and it is available here: Micro @ git.usebox.net.
It is finished, but not as finished as I planned originally. I knew building an interpreter and a compiler was a lot of work, but I also had the audacity to build my own thing while I was learning how to make an interpreter and refreshing my Go; and on top of all that, my own thing wasn’t easy at all!
Micro is a statically typed programming language, and I was reading the most excellent Crafting Interpreters, which guides you on the journey of building an interpreter for a dynamically typed language, which means that I was pretty much on my own for a lot of the things I had to write.
Besides, what I really wanted to write was a compiler targeting an 8-bit micro (likely the Z80), and I basically spent too much time on the interpreter, implementing things that it was very unlikely I could make happen in the compiler for the target CPU (e.g. closures, or tail call optimization for recursion).
Anyway, I’m very happy with the result and this is, by far, the best of my toy programming languages. I’m proud of what I have accomplished and I think I’m better prepared to start a more focused project with more chances of success.
I don’t rule out playing a bit more with Micro, but it was a bit unrealistic for a first project, so I’m happy to close this chapter for now.
I had already decided to rely less on GitHub and use GitLab instead, and the truth is that since then I haven’t started many new projects –and I even forgot about GitLab and still created a couple of new repos on GH, :facepalm:–.
GitHub Copilot is now available to everybody, and the controversy has gotten worse now that they are charging money for the service, including a campaign by Software Freedom Conservancy.
[…] I would update your licenses to clarify that incorporating the code into a machine learning model is considered a form of derived work, and that your license terms apply to the model and any works produced with that model.
Which is probably the right thing to do, but we know that if GitHub (actually, Microsoft) has implemented Copilot the way they have, it is because they think they can get away with it, so it is likely that adding a notice like that will not have any effect.
Anyway, I just re-considered whether I need one of these hosting solutions (also known as a “forge”), and I came to the conclusion that I don’t.
Self-hosting a git repo over SSH is very easy:
$ mkdir my_repo_name
$ cd my_repo_name
$ git init --bare
Optionally, if you plan to serve the repo over HTTP:
# PWD is still my_repo_name
$ cp hooks/post-update.sample hooks/post-update
And that’s all! You can use myuser@hostname:/path/to/repos/my_repo_name as remote and you have a private repo.
You will probably have a clone somewhere else anyway, but remember to set up backups and things like that, and you are set.
I also wanted a way of browsing the repos via the web, because sometimes it is useful to check the code without requiring git or a clone. I also like the idea of rendering a README.md as HTML to have a small microsite for a project that perhaps doesn’t need a dedicated page on my website.
For that I decided to use cgit, which you can see in action at git.usebox.net –very empty, for now–. I also enabled clones over HTTPS (read-only), so the repos are public; and all together it took me about 15 minutes.
It is clear that I have lost functionality –that I don’t need–, but this is perfect for me because:
My projects are small and likely to only interest me.
I can do without CI, that arguably would be a waste of resources and energy for such small projects.
I have almost never got any contributions, and when I have, the contributors are likely to have the skills to send patches by email (or provide me with a URL to pull). I recommend this tutorial on how to contribute to email driven projects.
I can always move to use a forge if the project grows to a point where there is a real benefit. For example, it is likely ubox MSX lib will stay on GitLab.
Obviously there are some benefits that come with centralisation. Besides easier workflows for contribution, discovery is an important one: you search on GitHub for projects.
In my experience, that wasn’t that important for my projects. Most of them got some stars only after I shared a link to the repo on a forum or social media, and for most people it was a way of having a bookmark or just saying “cool project”. And it really doesn’t matter: I shared the code in case it was useful to somebody else, and if I didn’t get any meaningful contributions, the stars didn’t do anything for me.
Anyway, the bottom line is that anything that is not GitHub won’t have the benefits of being on the most popular hosting service, so I think it won’t matter that much if I use GitLab or if I self-host my repositories.
I know this is just a drop in the ocean, but if we don’t do anything, nothing will ever change.
It is fair to say that at this point I have stopped refreshing my knowledge of Go and I’m learning new things. Lots of them, actually, thanks to the toy programming language that I’m implementing.
One of those things is altering the flow of the program when implementing statements like return, continue or break.
I am following Crafting Interpreters as a reference for the implementation of an interpreter for my language. The book implements the tree-walk interpreter in Java and can use exceptions, but those aren’t available in Go (which is a shame; I generally prefer languages that support exceptions).
Let’s look at an example from the book, converted to my language:
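The example is an adaptation of the book’s count function; in Micro it might look something like this (syntax approximated, since the language is my own and still changing):

```
def count(n: u8): u8 {
    for n < 100 {
        if n == 3 {
            return n;  // line 4: the return being evaluated
        }
        n = n + 1;
    }
}

count(1);
```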
Because of the way a tree-walk interpreter works, when the return in line 4 gets evaluated, the interpreter is a few functions deep:
Called count –we should return here–.
Evaluate count(1).
Evaluate for.
Evaluate if.
Evaluate return –this is where we are–.
In Java, going from the last point back to the first and returning from there is quite simple, because we have exceptions that will clear those function calls and take us where we want to go –we can catch the exception there–, but in Go we can’t do that. Instead we have panic and recover, and we can use them to do something similar –which I call a panic jump, although that is probably not the technical name–.
The value we panic with needs to:
Say which type of “panic jump” it is, because it could be a return but other statements with similar behaviour use it as well.
Provide a value (e.g. the value for return).
Track where that jump came from, so we can report useful errors (e.g. “return without function in filename:line:col”).
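Those three requirements could map to a small struct; here is a hypothetical sketch –the typ, value and loc fields match the snippet below, but the auxiliary types are my guess, not the actual Micro code–:

```go
// Hypothetical sketch; only the field names typ, value and loc
// are taken from the snippets, the rest is an assumption.
type PanicJumpType int

const (
	PanicJumpReturn PanicJumpType = iota
	PanicJumpBreak
	PanicJumpContinue
)

// Location tracks where the jump came from, for error reporting.
type Location struct {
	File      string
	Line, Col int
}

// PanicJump is the value passed to panic() to alter the control flow.
type PanicJump struct {
	typ   PanicJumpType // which statement caused the jump
	value any           // e.g. the value for return
	loc   Location      // origin, to report "filename:line:col"
}
```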
So in the evaluation of return we can use it like this:
// value is the value to return, and v.Loc is the location of that value
panic(&PanicJump{typ: PanicJumpReturn, value: val, loc: v.Loc})
And in the code that evaluates a function call we have something like this:
func (i *Interpreter) call(call ast.Call) (result any, err error) {
    // ... more code irrelevant for the example ...

    // handle return via panic call
    defer func() {
        if r := recover(); r != nil {
            if val, ok := r.(*PanicJump); ok && val.typ == PanicJumpReturn {
                result = val.value
                err = nil
            } else {
                // won't be handled here
                panic(r)
            }
        }
    }()

    // this function call may panic jump
    _, err = fun.Call(i, args, call.Loc)

    // ... even more code ...
}
So before we call fun.Call, which could “panic jump”, we set up a handler that checks that the panic jump is one we can handle (PanicJumpReturn) and sets the return values of Interpreter.call.
If it is not a panic jump that we can handle, including an actual panic –hopefully my code is perfect and that never happens–, we propagate it by calling panic again, and it will be handled somewhere else.
The only thing I found slightly ugly is that, because the panic handler is a deferred function, the only way it can set the return values of Interpreter.call is by using named return parameters, which is definitely less readable than explicit return values.
We also need a “catch all” handler in the interpreter, because return could be used outside a function. Currently that should never happen in my implementation, because my parser already checks for it and should never pass an AST (Abstract Syntax Tree) that is not 100% valid to the interpreter.
In the end it is not too complicated, even if it is arguably less elegant than actual exceptions. It was a great Eureka! moment when I found the solution, even if I don’t know the performance impact of it –I’m not really concerned about that for now!–.
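To see the whole technique in one place, here is a minimal, self-contained sketch –simplified names, not the actual Micro code– of a “panic jump” unwinding a nested evaluation back to the function call:

```go
package main

import "fmt"

// panicJump carries the return value up the Go call stack.
type panicJump struct {
	value any
}

// evalReturn aborts evaluation by panicking with a panicJump.
func evalReturn(value any) {
	panic(&panicJump{value: value})
}

// evalBody stands in for the nested evaluation of for/if statements.
func evalBody() {
	for i := 0; i < 100; i++ {
		if i == 3 {
			evalReturn(i) // unwinds all the way back to call
		}
	}
}

// call evaluates a function call; the deferred handler turns a
// panicJump into a regular result using the named return parameters.
func call() (result any, err error) {
	defer func() {
		if r := recover(); r != nil {
			if val, ok := r.(*panicJump); ok {
				result = val.value
				err = nil
			} else {
				panic(r) // an actual panic, propagate it
			}
		}
	}()
	evalBody()
	return nil, nil
}

func main() {
	result, _ := call()
	fmt.Println(result) // prints 3
}
```

The deferred handler in call plays the role of the catch block an exception-based implementation would use.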