This is a short announcement: I have made “Micro”, my toy programming language, public and it is available here: Micro @ git.usebox.net.
It is finished, but not as finished as I planned originally. I knew building an interpreter and a compiler was a lot of work, but I also had the audacity to build my own thing while I was learning how to make an interpreter and refreshing my Go, and on top of all that, my own thing wasn’t easy at all!
Micro is a statically typed programming language, and I was reading the most excellent Crafting Interpreters, which guides you through the journey of building an interpreter for a language that is dynamically typed, which means I was pretty much on my own for a lot of the things I had to write.
Besides, what I really wanted to write was a compiler targeting an 8-bit micro (likely the Z80), and I basically spent too much time on the interpreter implementing things that it was very unlikely I could make happen in the compiler for the target CPU (e.g. closures, or recursive tail call optimization).
Anyway, I’m very happy with the result and this is, by far, the best of my toy programming languages. I’m proud of what I have accomplished and I think I’m better prepared to start a more focused project with better chances of success.
I don’t rule out playing a bit more with Micro, but it is a bit unrealistic for a first project, so I’m happy to close this chapter for now.
I had already decided to rely less on GitHub and use GitLab instead, and the truth is that since then I haven’t started many new projects –and I even forgot about GitLab and still made a couple of new repos on GH, :facepalm:–.
GitHub Copilot is now available to everybody, and the controversy has gotten worse now that they are charging money for the service, including a campaign by Software Freedom Conservancy.
[…] I would update your licenses to clarify that incorporating the code into a machine learning model is considered a form of derived work, and that your license terms apply to the model and any works produced with that model.
Which is probably the right thing to do, but we know that GitHub (actually, Microsoft) implemented Copilot the way they did because they think they can get away with it, so adding a notice like that is likely not going to have any effect.
Anyway, I just reconsidered whether I need one of these hosting solutions (also known as a “forge”), and I came to the conclusion that I don’t.
Self-hosting a git repo over SSH is very easy:
$ mkdir my_repo_name
$ cd my_repo_name
$ git init --bare
Optionally, if you plan to serve the repo over HTTP (the sample hook just runs git update-server-info, which the dumb HTTP protocol needs):
# PWD is still my_repo_name
$ cp hooks/post-update.sample hooks/post-update
And that’s all! You can use myuser@hostname:/path/to/repos/my_repo_name as remote and you have a private repo.
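For example –assuming the usual origin remote name and a branch called main–:

$ git remote add origin myuser@hostname:/path/to/repos/my_repo_name
$ git push -u origin main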
Although you may have a clone somewhere else, remember to set up backups and things like that, and you are set.
I also wanted a way of browsing the repos via the web, because sometimes it is useful to check the code without requiring git or a clone. I also like the idea of rendering a README.md as HTML to have a small microsite for a project that perhaps doesn’t need a dedicated page on my website.
For that I decided to use cgit, which you can see in action at git.usebox.net –very empty, for now–. I also enabled clones over HTTPS (read only), so the repos are public, and all together it took me about 15 minutes.
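For reference, the cgit side can be as small as a few lines in cgitrc –these values are just an example, not my exact configuration–:

# pick up every bare repo in this directory
scan-path=/home/git/repos
# allow read-only clones over HTTP(S)
enable-http-clone=1
# render README.md from the default branch on the "about" tab
readme=:README.md
# the filter converts markdown to HTML (the path depends on the distro)
about-filter=/usr/lib/cgit/filters/about-formatting.sh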
It is clear that I have lost functionality –that I don’t need–, but this is perfect for me because:
My projects are small and likely to only interest me.
I can do without CI, which arguably would be a waste of resources and energy for such small projects.
I have almost never received contributions, and when I have, the contributors are likely to have the skills to send patches by email (or provide me with a URL to pull from). I recommend this tutorial on how to contribute to email-driven projects.
I can always move to a forge if the project grows to a point where there is a real benefit. For example, it is likely ubox MSX lib will stay on GitLab.
Obviously there are some benefits that come with centralisation. Besides easier contribution workflows, discovery is an important one: you search on GitHub for projects.
In my experience, that wasn’t that important for my projects. Most of them got some stars only after I shared a link to the repo on a forum or social media, and for most people it was a way of bookmarking or just saying “cool project”. And it really doesn’t matter: I shared the code in case it was useful to somebody else, and without meaningful contributions, the stars didn’t do anything for me.
Anyway, the bottom line is that anything that is not GitHub won’t have the benefits of being on the most popular hosting service, so I think it won’t matter that much if I use GitLab or if I self-host my repositories.
I know this is just a drop in the ocean, but if we don’t do anything, nothing will ever change.
It is fair to say that at this point I have stopped refreshing my knowledge of Go and I’m learning new things. Lots of them, actually, thanks to the toy programming language that I’m implementing.
One of those things is altering the flow of the program when implementing statements like return, continue or break.
I am following Crafting Interpreters as a reference for the implementation of my language’s interpreter. The book implements the tree-walk interpreter in Java, where it can use exceptions, but those aren’t available in Go (which is a shame; I generally prefer languages that support exceptions).
Let’s look at an example from the book, converted to my language:
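Something along these lines –I’m sketching the syntax here, so the real Micro code may differ–, with the return on line 4:

def count(n: number): number {
    for n < 100 {
        if n == 3 {
            return n;
        }
        n = n + 1;
    }
    return n;
}

count(1);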
Because of the way a tree-walk interpreter works, when the return on line 4 gets evaluated, the interpreter is a few function calls deep:
Called count –we should return here–.
Evaluate count(1).
Evaluate for.
Evaluate if.
Evaluate return –this is where we are–.
In Java, going from the last point back to the first one and returning from there is quite simple, because exceptions will clear those function calls and take us where we want to go –because we can catch the exception there–, but in Go we can’t do that. Instead we have panic and recover, and we can use them to do something similar –I call it a panic jump, but that is probably not the technical name–.
A panic jump needs to:
Say which type of “panic jump” it is, because it could be a return but it could also be other statements with a similar behaviour.
Provide a value (e.g. the value for return).
Track where that jump came from, so we can report useful errors (e.g. “return without function in filename:line:col”).
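A minimal sketch of the type in Go –the names match the snippets below; the Loc type carrying filename, line and column, and the break/continue variants, are my assumption–:

// PanicJumpType says which statement caused the jump.
type PanicJumpType int

const (
    PanicJumpReturn PanicJumpType = iota
    PanicJumpBreak
    PanicJumpContinue
)

// PanicJump is the value we panic() with to unwind the interpreter.
type PanicJump struct {
    typ   PanicJumpType // which type of "panic jump" this is
    value any           // e.g. the value for return
    loc   Loc           // where the jump came from, for error reporting
}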
So in the evaluation of return we can use it like this:
// val is the value to return, and v.Loc is the location of that value
panic(&PanicJump{typ: PanicJumpReturn, value: val, loc: v.Loc})
And in the code that evaluates a function call we have something like this:
func (i *Interpreter) call(call ast.Call) (result any, err error) {
    // ... more code irrelevant for the example ...

    // handle return via panic call
    defer func() {
        if r := recover(); r != nil {
            if val, ok := r.(*PanicJump); ok && val.typ == PanicJumpReturn {
                result = val.value
                err = nil
            } else {
                // won't be handled here
                panic(r)
            }
        }
    }()

    // this function call may panic jump
    _, err = fun.Call(i, args, call.Loc)

    // ... even more code ...
So before we call fun.Call, which could “panic jump”, we set up a handler that will check that the panic jump is one we can handle (PanicJumpReturn) and set the return values of Interpreter.call.
If it is not a panic jump that we can handle, including an actual panic –hopefully my code is perfect and that never happens–, we propagate it by calling panic again, and it will be handled somewhere else.
The only thing I found slightly ugly is that, because the panic handler is a deferred function, the only way it can set the return values of Interpreter.call is by using named return parameters, which is definitely less readable than using explicit return values.
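This is easy to see in isolation: a deferred function can only change what the caller receives when the result is named.

// with a named result, the deferred function can override the return value
func example() (result int) {
    defer func() {
        result = 42 // the caller will see 42
    }()
    return 0 // overridden by the deferred function above
}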
We also need a “catch all” handler on the interpreter, because return could be used outside a function. Currently that should never happen in my implementation, because my parser already checks for it and should never pass an AST (Abstract Syntax Tree) to the interpreter that is not 100% valid.
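A sketch of that catch-all –assuming a top-level Run method, that fmt is imported, and that Loc knows how to print itself as filename:line:col–:

// Run evaluates a whole program and turns any escaped panic jump into an error.
func (i *Interpreter) Run(program []ast.Stmt) (err error) {
    defer func() {
        if r := recover(); r != nil {
            if val, ok := r.(*PanicJump); ok {
                // a jump that escaped every handler, e.g. return outside a function
                err = fmt.Errorf("unexpected jump in %s", val.loc)
            } else {
                // a real panic: propagate it
                panic(r)
            }
        }
    }()
    // ... evaluate the program ...
    return nil
}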
In the end it is not too complicated, even if it is arguably less elegant than actual exceptions. It was a great Eureka! moment when I found the solution, even if I don’t know the performance impact of this approach –I’m not really concerned about that for now!–.
I started this blog over a year ago, and I mentioned in the first post that it was a work in progress. There was at least one thing I knew I would likely want to do, even if I thought it was not important: make the posts’ tags visible.
Which basically means making Hugo render special pages listing the posts with a specific tag. Although, in the end, the motivation to add the functionality goes beyond the usual functionality of a weblog.
This site was a personal website before I added my personal log, and I implemented it with Hugo, moving from my old Django-managed website. I had a good amount of content I wanted to sort and classify, even if some of that content was very old –circa 2002–. I wanted to present all that information initially, and perhaps decide later if I wanted to get rid of some of it.
So I thought I could use tags, and that worked well for the main index pages linked in the menu at the top of the pages. However, it is not ideal, because I ended up with one large archive holding basically anything I couldn’t put in any of the other sections.
I could just use those tags to navigate blog posts, but considering that everything is already tagged, I have decided to expose tags on all pages, hoping it will improve navigation.
Now you can click on the tag names in any post –under the post title; this post is tagged blog–, and any other page that has tags will show them at the bottom, after the last-updated information. For example, all ZX Spectrum tagged pages.
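In Hugo this doesn’t need much: the tag listing pages come built in with taxonomies, and showing a page’s tags is a small template along these lines –a sketch, not this site’s actual template–:

{{ with .Params.tags }}
<ul class="tags">
  {{ range . }}
  <li><a href="{{ "/tags/" | relURL }}{{ . | urlize }}/">{{ . }}</a></li>
  {{ end }}
</ul>
{{ end }}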
I’m not sure how much this is going to improve things, as these are still mostly disconnected pages, only now they are indexed by tag. The more I think about it, the more this should look like a wiki, where you can also see incoming links and recently updated pages, but that’s probably something to explore at a different time.
I have accidentally spent some time recently reading wikis, and I ended up on the WikiRevival page on Community Wiki:
Wiki movement is close to being dead. We all know that. One can only imagine how this wiki used to be years ago. Lively, pulsating. Something like that.
The page was last edited on 2021-08-23, so it is kind of fresh –sometimes it is hard to know: you may read something that feels like it could have been written today, and it is 10 years old–.
As I understand it, the wiki movement refers to public wikis as a way of collaborating and as a tool for building communities. And I wish I could say the rumours of its death have been greatly exaggerated, but my anecdotal experience seems to confirm that public wikis aren’t as alive as they used to be.
In a way, we could say the same about blogs. Or more recently, forums. And before that, mailing lists. And before that, Usenet newsgroups. And depending on your age, I guess we could go even further back.
I can hear you ask: what about Wikipedia?
The page has more:
But what’s not dead? Atlassion (sic) Confluence (a proprietary product), WikimediaFoundation and wikis on wikifarms like Fandom (used to be Wikia) that describe modern media. Even wikis on services like GitHub or GitLab are not widely used.
So Wikipedia is not what I think they meant by the movement.
Then it goes further and questions what features are required for a wiki engine to be successful today –e.g. mobile support–, and what the unresolved wiki issues are. Obviously all this could be completely wrong, but it is an interesting read, and it ends with some optimism, or at least hope, that with some small changes wikis could make a comeback.
I have always been fascinated by wikis. Or, should I say, by the wiki movement that I didn’t experience. Not even back in the early 2000s when I was active in my local Linux User Group, and it was standard to have a wiki. Ours had write permission restricted to the LUG members –who didn’t write much–. Sadly, it looks like we didn’t understand what it was all about.
I have always been very interested in compilers/interpreters and operating system development. Back in Uni, those were my favourite topics, together with programming –of course–.
In the late 90s and early 2000s I had a lot of side projects working on both, although I never got too far with any of them –at some point I wrote an interpreter using flex/bison that could qualify as a programming language, but that was back in DOS days and the source got lost on a broken diskette–, and I conceded that I would never get anywhere and focused on other things.
But the itch comes every now and then.
The last time I released something was my JTC –names are hard!–, and I took the toy part seriously because it is certainly useless. But it was fun, even if I had the feeling PLY was doing most of the work for me.
For the last few weeks I’ve been re-reading the very excellent Crafting Interpreters, and because I’m refreshing my Go, I started implementing an interpreter using that language. And I’m having a lot of fun!
At this point I think I have fully understood the tree-walk interpreter part of the book, and I’m finishing what could be a small statically-typed programming language.
Although it is still a toy, I’m proud of the features:
Statically typed, types are checked when parsing the code.
Variables, constants and functions.
Conditionals and loops (with break and continue).
Lexical scopes.
Functions are higher-order, support closures, and recursive tail-call optimization.
All with some test coverage, and a design that I hope will allow me to also write a compiler targeting the Z80 –initially; and I’m not sure how some of the current features will look, e.g. closures–.
What I’m missing at this point are arrays, structures and some types –e.g. for now there’s an int64 that I call number, but I need types for 8 and 16-bit numeric values and conversions between them–.
The idea is that by making the language small, with some restrictions, it should be easier to generate Z80 code. Also, by writing an interpreter first, I can learn, and the interpreter will be useful for writing tests that validate the code that will then be compiled and run on a Z80 CPU.
I was waiting to have something to release before talking about it, but I thought: why not?
Considering my limited time, and that I’m in the middle of some professional changes –I may talk about it later on–, this means other projects are on hold. Which doesn’t mean abandoned, but there’s always some risk of that!
I was talking about Gemini here some time ago, including the official website. As part of the “official” resources there was a mailing list to discuss the protocol, make announcements, meet other users, troll, and things like that; but something happened to it –there are some comments about it being dead and not coming back–.
Some people immediately recovered a copy of the archive and put it online, and there were even some attempts at setting up a “public inbox” to replace the mailing list, but it doesn’t look like they went too far.
And then it seems things moved to comp.infosystems.gemini –I don’t know the full story; it looks like the newsgroup was already available–.
Of course, Usenet newsgroups still exist and they are used –even if not as much as in their golden days–, and you need access to an NNTP server to read and post messages.
That access used to be provided by your Internet service provider when you signed up, and Google has a web gateway via Google Groups –that’s a Wikipedia link because Google requires an account–. Anyway, I’m not going to give a history lesson here.
Aioe.org hosts a public news server, a USENET site that is intentionally kept open for all IP addresses without requiring any kind of authentication both for reading and for posting. In order to avoid mass abuses, every IP address is authorized to post no more than 40 messages per day.
So basically I added a newsgroup account to Thunderbird, and in 2 minutes I was checking the groups available, which is all of them.
What a time-travel experience! You don’t get all the archives, but it was fun checking that the es.comp.os.linux.* groups were still there –“ecol” is a big Spanish community of Linux enthusiasts, established in 1996, that wasn’t that different from today’s tildeverse–.
The Gemini group is active, but like Gemini itself, things are slow-moving and it doesn’t have a lot of traffic. I have spent some time checking other groups, and some look active, but to me it feels more like echoes from the past, like the “ecol” groups.
Musk is a very divisive character, to put it mildly. So obviously, some users are unhappy with the situation and decided to move to Mastodon, which is an open source software for federated micro-blogging.
I’m not one of those suddenly unhappy, because I can honestly say I’ve always been unhappy. I used identi.ca back in 2008, as it better aligned with my personal philosophy regarding software and freedom, but also because back then Twitter was a dumpster fire, technically speaking –does anybody remember the fail whale?–.
The good thing about all this is that I decided to try Mastodon again. I had tried it around the time I started this blog, but at that point I decided I didn’t want a replacement for Twitter. It was more that I didn’t want Twitter, or micro-blogging at all. So I went back to blogging and reduced my use of Twitter, acknowledging that it was still useful to promote my games.
Are things different now? Perhaps not that much, but I see Mastodon as a gateway to a community aspect that gets diluted on Twitter with big brands, outrage and news, all trying to get your attention and pull you into the doom-scrolling that I’ve been avoiding since I reduced my Twitter usage.
Yes, the user interface is different. Some bits are still not there –the official Android app can’t take pictures, only upload them!–, and the federation makes other parts awkward –you can’t see a user’s followers if they are on a different instance, and following a user from another instance is one click from your own instance but much more complicated from theirs–. But all that is fine, because it is open source, and that’s something Twitter can’t compete with.
I know that most of the angry people leaving Twitter don’t care about the same things I do. I don’t expect that the average user will suddenly understand why open source is good; those are the ones that complain about the user interface being different, and it is likely they won’t stay around. I have the feeling that it is always the same: rants and moans –on Twitter!–, for things to always stay the same.
Anyway, let’s see how this goes for me.
One of the trickiest things about a federated service, as in the case of XMPP –aka Jabber–, is that you need to choose an instance. I have an account on the SDF instance. I decided to go there for two reasons: I have a shell account with them –and also on ctrl-c.club, which is a story for a different post!–, and they started hosting StatusNet (aka identi.ca) back in 2010 before moving to Mastodon in 2017; so it looks like they are trustworthy and likely to stick around for a while.
In case you want to connect on Mastodon, you can find my Mastodon address on my about page.
After we quit playing Breath of Fire IV, we went back to Lenna’s Inception –which is an interesting game, written in Scala! We are at the final-ish boss and I will talk about it soon–. But it is not ideal to play with the kids, because they don’t yet have the skills for such an action game –some bosses took me a good number of attempts–.
So we were looking for a new RPG to play, and because we have watched a few seasons of Pokémon on Amazon Prime, it seemed a good franchise to try. One of the first seasons we watched was “Pokémon: Advanced” –first aired in Japan 20 years ago–, and that season is apparently paired with Pokémon Ruby on the Game Boy Advance.
The graphics are super-cute, the GBA is easy to emulate –I have a GBA, but I don’t have this game, and the screen isn’t great by today’s standards–, and the kids are very familiar with the Pokémon of that series, so on a lazy Saturday morning, we decided to give it a go.
Recharging Pokémon after a battle
I don’t know how far we will get, but we are around 10 hours into the game –after a few short sessions–, with two gym badges, and a good set of Pokémon over level 15. Which is definitely more than I was expecting!
I’m not going to write a full review of the game here, but I’m surprised how deep the mechanics are. It is all simple on the surface: you can walk in tall grass to find Pokémon, battle other trainers, and move from town to town fighting the gym leaders to get badges; but that simplicity makes it perfect to pick up and play. There are some quests, but looking at your inventory gives you clues on what to do next. And it is all very family friendly –you don’t kill Pokémon, they faint–.
Each Pokémon has its own strategy
And the combat is what I’m finding the most interesting. I have listened to some podcasts talking about the game, and most refer to it as “rock-paper-scissors” mechanics, but that is probably simplifying too much.
Yes, there is a lot of that. For example: a water Pokémon with water moves is strong against a fire Pokémon. Early on things aren’t too fun because the low-level Pokémon have only two moves, but when you start to have more move options and you need a strategy to win a battle where your opponent is stronger than you, it is clear that the mechanics are deeper than that.
It is not only fun finding those strategies, it is also very satisfying when you make a difficult battle easy by combining some moves in a way that wasn’t obvious when you first met that opponent.
So far I think we have barely scratched the surface –the main story is about 35 hours–, and it is possible we will get bored, but so far we are enjoying it. And I’m learning a lot of game design!
The IndieWeb is a people-focused alternative to the “corporate web”.
Which doesn’t explain too much, only that it is a reaction against the big corporations providing the platforms where the social web of web 2.0 happens.
Thankfully, they provide a list of three points:
Your content is yours.
You are better connected.
You are in control.
And I thought initially “OK, so the IndieWeb is what we used to call the web”, but that’s not exactly true. The web as I first knew it depended on someone hosting your content, in the likes of GeoCities, Angelfire or Xoom –those are the ones I knew best back in the day, all of them free–.
Initially I thought the IndieWeb places us perhaps a bit later than that, when hosting started to become cheaper: reading the getting started page, in order to join the community you need your own personal domain, a place to host your content, and to set up your own home page and other “indieweb essentials”.
Which is more or less what I did in 2002 when I got the usebox.net domain, trying to establish a stable presence on the Internet for myself and a small group of friends. It was not only about a website; email was also important –although that may be out of scope for the IndieWeb, I guess; hopefully it doesn’t mean they are happy with Google owning their email, for example–.
But that was a misunderstanding, because the IndieWeb is not about self-hosting –although self-hosting is not excluded–.
“A place to host your content” could mean Blogger or WordPress.com, and I don’t know how that squares with “your content is yours”, but it makes sense if you want to move away from the corporate web. The message really is: don’t give your content to Facebook; instead, own your blog with your own domain.
There is a lot more to explore, like webmentions for example. Although I suspect a lot of these things may not be easy now that this blog is a static site!