
This week in gamedev #4

It is Thursday and, if you were paying attention, last week there was no gamedev update.

Life is sometimes busy, and you know I have been a bit intense with my Gemini server. Considering that I can only do one thing at a time (mostly), I haven’t made progress on any of my new projects, and I was waiting on some things for Graveyard Shift.

And good news: the cover is ready, the inlay design is ready, the loading screen is almost ready, so what is left?

Cover art

The cover art is fantastic!

I need to write a short press release to be sent to a handful of specialised websites that cover retro-gaming, and that’s pretty much it.

When I have the loading screen, I will assemble a release candidate. There will be some last-minute smoke tests, and then finally the release; likely next weekend.

I always prefer waiting a few days before sending a master to the publisher, so expect the physical release to be out one to two weeks after the release itself. Remember: tfw8b, 499 range.

Now, it is not 100% accurate that I haven’t done anything else related to gamedev, but let’s keep that a secret for a bit longer, shall we?

Python's pattern matching

So Python has pattern matching in 3.10:

# the syntax highlighter doesn't know what to do
# with this new construct ;)
# (Point here is assumed to be a class that supports positional
# matching, e.g. a dataclass with x and y fields)
match point:
    case Point(x, y) if x == y:
        print(f"Y=X at {x}")
    case Point(x, y):
        print("Not on the diagonal")

And I think it looks fantastic!

I always found the chains of elif a bit awkward, but I guess that was part of what made Python simple yet powerful.

And this feature goes beyond that –take a look at the tutorial and search for value extractors–. Pattern matching was one of the features I liked most in my first contact with Scala, so this is probably one of the few recent changes in Python that I find truly exciting (I must admit I didn’t like type hints initially, but now I think they are a very good idea).
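
For comparison, a rough Scala equivalent of the example above could look like this (just a minimal sketch, assuming a Point case class):

    // a minimal sketch of the same match in Scala
    case class Point(x: Int, y: Int)

    def describe(point: Point): String = point match {
      case Point(x, y) if x == y => s"Y=X at $x"
      case Point(_, _)           => "Not on the diagonal"
    }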

It is still an alpha release, and it will need some time until it lands in my Linux distribution (this Debian box is still on 3.7.3), but if I go back to doing Python professionally, it looks like it could be a lot of fun!

Hello Gemini!

So I finally decided to use one of my underused servers to deploy my Gemini server and host some content.

If you don’t have a Gemini browser, I guess this link is not very useful, but there you go:

gemini://capsule.usebox.net/

I found it quite funny because, after configuring everything and checking that all was working, obviously nobody is going to visit my capsule –that is what a website is called in Gemini jargon– because nobody knows it is there.

But it was exciting nevertheless. I used the opportunity to write a short “how to”: deploying SpaceBeans on Debian.

It is using systemd to manage the service. Nothing special, but it works nicely. I’m not 100% happy with the logs, so I may revisit that part and explain a few options to deal with them.

One thing I’ve noticed is that the library I use to parse the command line depends on a library that is a bit heavy. Well, not that much in the grand scheme of things, but let’s say that adding 10MB to a project like this sounds excessive.

So the next version will use scopt, which is excellent and will make the distributed bundle smaller. It is a learning process. Worrying about 10MB in my day job would be laughable, so I’m not used to it.
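
I haven’t written that part yet, but a basic scopt setup could look something like this (just a sketch; the option and the Config field are made up for illustration):

    import scopt.OParser

    case class Config(configFile: String = "spacebeans.conf")

    object Cli {
      private val builder = OParser.builder[Config]

      private val parser = {
        import builder._
        OParser.sequence(
          programName("spacebeans"),
          opt[String]('c', "conf")
            .action((value, config) => config.copy(configFile = value))
            .text("path to the configuration file")
        )
      }

      // None means the arguments did not parse; scopt prints the usage text
      def parse(args: Array[String]): Option[Config] =
        OParser.parse(parser, args, Config())
    }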

I have a few more ideas to add to the server, so I may implement something from that list before making a new release.

First SpaceBeans release!

Last night I finished the tests, or at least enough tests, so I could make a first release of my Gemini server: SpaceBeans.

With the name I tried to link the Gemini space theme with the idea of the server running on the Java Virtual Machine. So those beans are coffee beans, of course.

In case you haven’t read about it on this blog, the current list of features is:

  • Static files, including optional directory listings
  • IPv4 and IPv6
  • Configurable MIME types, or a built-in resolver
  • Virtual hosting, with SNI support
  • User provided certificates or auto-generated in memory (for development)
  • Configurable SSL engine (e.g. TLSv1.2 and/or TLSv1.3), with configurable ciphers

Which I believe is the minimum to make the server useful.

This is also my first release of anything in Scala –outside of what I do in my day job, that is–, so it has been an interesting experience.

Other than the source code (MIT licensed), you can download a .jar file that bundles everything you need to run the service on the JVM. My experience releasing services so far has been via Docker images, but in this case I think the lightweight nature of Gemini makes Docker a bit overkill.

There are a good number of Gemini servers out there, and some of them are very popular already, so I don’t expect anybody to use this; but you never know!

Tilde Club

I have known about SDF for years: a community coming from the early 90s, from the early BBS era –it started on an Apple IIe microcomputer– where you can have a free “UNIX Shell Account” (it is running NetBSD, so the “UNIX” part is true).

I even have vague memories of having my own SDF account at some point, but the thing is that since the late 90s I’ve been using Linux, so having a shell account kind of lost its mystery for me at some point.

As part of my short trips around Gemini, I’ve ended up on a couple of git hosting services I didn’t know about, like Sourcehut and tildegit. “tildegit” makes reference to the tildeverse.

“Tildeverse?” you may ask. Well, there’s a website for that:

we’re a loose association of like-minded tilde communities. if you’re interested in learning about *nix (linux, unix, bsd, etc) come check out our member tildes and sign up!

tildes are pubnixes in the spirit of tilde.club, which was created in 2014 by paul ford.

And that’s why I started mentioning SDF, because it looks pretty similar.

It is worth reading this post by Paul Ford where he explains how his tilde.club came about. Although there are a few references here and there about it being “not a social network”, for me it’s all down to this quote from the post:

Tilde.club is one cheap, unmodified Unix computer on the Internet.

It reminds me very much of my own experiences in the early 2000s, when I was involved in Free Software advocacy as part of the Linux User Group movement. A few of those groups, sometimes spawning from public newsgroups, formed large communities of self-hosted services with shared accounts.

My local group had an interesting twist on this, because we also formed a non-profit metropolitan area wireless network.

I guess everybody can have a Linux shell on a Raspberry Pi, but there’s something exciting about having an account on a public UNIX system, isn’t there? It is the community.

So tilde.club may be a social network after all.

PS: my sysadmin senses are constantly tingling when learning about these services. The old Linux User Groups systems relied heavily on trust, and I don’t know how these new communities deal with security, but it must be interesting for sure!

Gemini diagnostics: PASS!

I think I have everything I wanted to support in my Gemini server ready, at least for a first version that is “useful”. I may add more things later, perhaps experimenting with some ideas like server-side support for gemlogs.

This is the feature list:

  • Static files, including optional directory listings.
  • IPv4 and IPv6.
  • Configurable MIME types, or a built-in resolver.
  • Virtual hosting, with SNI support.
  • User provided certificates or auto-generated in memory (for development).
  • Configurable SSL engine (e.g. TLS 1.2 and/or TLS 1.3), with configurable ciphers.

Other than the virtual hosting with SNI support, there’s nothing too exciting.

Although I already have tests for the tricky parts, I still need to write some more to validate a few branches, and to have some regression protection moving forward.

But then I found this Gemini diagnostics repo, created by the author of the Jetforce Gemini server as a torture test for Gemini servers.

And it turns out my server was passing all tests but two!

The first issue was related to returning a “not found” response when the request contained an invalid UTF-8 sequence, instead of “bad request”.

This was easy to fix: I was trusting Akka’s decode method, and it replaces invalid sequences with the infamous � (the Unicode replacement character). Then I was checking for a resource on disk that obviously didn’t exist (hence the “not found” response).

The solution was to write my own decoder:

  // unlike the decode method mentioned above, newDecoder() reports malformed
  // input instead of replacing it, so an invalid sequence ends up as a Left
  val charsetDecoder = Charset.forName("utf-8").newDecoder()
  def decodeUTF8(value: ByteString): Either[Throwable, String] =
    Try(charsetDecoder.decode(value.toByteBuffer).toString()).toEither

Very simple. I included it in the input validation, and that test now passes.
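
For illustration, the check slots into the validation roughly like this (a hypothetical helper, not the actual code; status 59 is Gemini’s “bad request”):

  // hypothetical usage: reject invalid UTF-8 up front, instead of looking
  // for a file on disk that can never exist
  def requestUrl(raw: ByteString): Either[String, String] =
    decodeUTF8(raw) match {
      case Right(url) => Right(url)             // carry on handling the request
      case Left(_)    => Left("59 bad request") // answer with a bad request status
    }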

The other issue is related to the TLS implementation provided by Akka, which doesn’t seem to send a close_notify TLS alert before closing the connection.

I’m not a TLS expert, so I will investigate how important this is, what the consequences are, and whether I can fix it or should perhaps file a bug report with Akka.

So I’ll get to writing some tests, and then I’m almost ready for a first release –that nobody will use, of course; look at Jetforce, it is very nice!–. I may deploy the service as well, and put some content on the Gemini space!

Not winning with Akka II: Won!

I got some help on Akka’s Gitter channel, and finally got to the bottom of the problem, and it wasn’t me!

It was all down to the TLSClosing parameter when creating the TLS listener. It uses IgnoreComplete, and that was breaking my code.

The explanation:

All streams in Akka are unidirectional: while in a complex flow graph data may flow in multiple directions these individual flows are independent from each other. The difference between two half-duplex connections in opposite directions and a full-duplex connection is that the underlying transport is shared in the latter and tearing it down will end the data transfer in both directions.

When integrating a full-duplex transport medium that does not support half-closing (which means ending one direction of data transfer without ending the other) into a stream topology, there can be unexpected effects. Feeding a finite Source into this medium will close the connection after all elements have been sent, which means that possible replies may not be received in full. To support this type of usage, the sending and receiving of data on the same side (e.g. on the Client) need to be coordinated such that it is known when all replies have been received. Only then should the transport be shut down.

So that’s why they had to change the behaviour on the TLS wrapper.

The thing is that you don’t need to use queues –most of the time, at least–, because Akka is smart enough to do the right thing if you define your flow correctly.

For example:

    Tcp()
      .bind(conf.address, conf.port) // not using TLS!
      .runForeach { connection =>
        logger.debug(s"new connection ${connection.remoteAddress}")

        val handler = Flow[ByteString]
          .via(
            Framing
              .delimiter(
                ByteString("\r\n"),
                maximumFrameLength = maxReqLen,
                allowTruncation = true
              )
          )
          .map(b => Request(b.utf8String))
          .map { req =>
            handleReq(connection.remoteAddress.getHostName(), req)
          }
          .take(1)
          .flatMapConcat(_.toSource)

        connection.handleWith(handler)
      }

I’m using framing to define what a request is, I map that into a Request, then my handler does its thing and maps that request into a Response.

At that point the element being handled by the flow is a response, so I say that I want to send back only one (using take), and I convert that response into a sequence of sources that are processed and converted into a stream of bytes that ends up in the client.

Akka is smart enough to keep the connection open until the response is sent, and then close automatically (I said I wanted to send back only one response).

And that’s managed with an internal completion state in the flow, which was being ignored by the TLS wrapper. So my connection was stuck open after I had finished sending the response, no matter what I did.

I found that using IgnoreCancel is what preserves the behaviour that I need:

IgnoreCancel means to not react to cancellation of the receiving side unless the sending side has already completed.

So basically, if the client has finished sending data, we don’t close until we have sent all our data (the completed signal); and the fact that the completed signal closes the connection is OK, because the Gemini protocol doesn’t expect any further data from the client.

This is a bit more complicated than what I’ve shown in the example, because it is possible that the client sends more data than the request. That happens, for example, if we try to access the Gemini server using curl –which speaks HTTP and sends multi-line requests–. In those cases we have to drain that input (I’m using prefixAndTail, which splits one element from the input and provides a source that I can drain to essentially consume the data while ignoring it).
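
A rough sketch of that drain idea (not the actual server code; the flow shape is simplified):

    import akka.NotUsed
    import akka.stream.Materializer
    import akka.stream.scaladsl.{Flow, Sink, Source}
    import akka.util.ByteString

    // keep the first framed element and drain (ignore) any extra data the
    // client may send, so the connection is not left waiting on it
    def firstRequestOnly(implicit mat: Materializer): Flow[ByteString, ByteString, NotUsed] =
      Flow[ByteString]
        .prefixAndTail(1)
        .flatMapConcat { case (head, tail) =>
          tail.runWith(Sink.ignore) // consume and discard the rest of the input
          Source(head)              // emit only the first element downstream
        }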

I can’t tell you how many different solutions I tried for this problem, and in the end the issue was in the TLS layer. Anyway, it looks like the server may work after all!

Not winning with Akka

I ran into a blocker with my TCP server using Akka Streams.

Looking for some examples, I found a few references to this piece in the Streaming TCP section of the docs:

Closing connections is possible by cancelling the incoming connection Flow from your server logic (e.g. by connecting its downstream to a Sink.cancelled and its upstream to a Source.empty). It is also possible to shut down the server’s socket by cancelling the IncomingConnection source connections.

Which sounds complicated but, after a couple of days of looking at the problem, doesn’t make a lot of sense. Neither did it for this fellow programmer 5 years ago (his solution works around the server, and I think the answer should come from cleanly terminating the stream).

And you know it is a tough problem when the only examples of TCP servers you can find online are pretty much the ones you have in the Akka docs –if this wasn’t a learning exercise, I think I would rather use a different technology–.

The problem, I think, is that when you are processing the request you are inside the flow, and it is not possible to do anything from there to complete the stream. I’m not sure if there is a reason to force this by design.

Sounds to me like a common pattern: client makes a request, the server sends back a response, and then it closes the connection. With Akka streams you can’t do that from the place where you are processing the request, at least as far as I can tell.

So it goes on forever, unless the client (or the server, in case of an exception; for example an idle timeout) closes the connection.

My current code uses a Source.queue because it is supposed to complete the stream when complete() is called on the queue.

    Tcp()
      .bindWithTls(conf.address, conf.port, () => createSSLEngine)
      .runForeach { connection =>
        logger.debug(s"new connection ${connection.remoteAddress}")

        val (queue, source) =
          Source
            .queue[ByteString](1024, OverflowStrategy.backpressure)
            .preMaterialize()

        val sink = Sink.foreach { b: ByteString =>
          handleReq(connection.remoteAddress.getHostName(), b.utf8String)
            .map(_.compact)
            .map(b => queue.offer(b))
            .run()
            .onComplete {
              case Failure(error) => logger.error(error)("Internal error")
              case Success(_) =>
                logger.debug("setting queue to complete")
                queue.complete()
            }
        }

        val handler = Flow.fromSinkAndSource(sink, source)

        connection.flow
          .idleTimeout(conf.idleTimeout)
          .joinMat(handler)(Keep.right)
          .run()
      }

The terminology of Akka is very specific, but basically I connect a handler flow, created from a sink and a source, to the incoming connection flow. The sink reads the input and adds the response to a queue, which is where the source gets the data to send back to the client.

This seems to work. But the call to complete() is not completing the stream and closing the connection. If I change it for a call to fail(), it does work by cancelling the stream.

And this is where I am at. I asked for help on Akka’s Gitter channel, with no answer so far, but at least someone has solved this issue with queues, so I must be doing something wrong!

Update: it is important that you read the second part.

This week in gamedev #3

It is Thursday, and this week most of my gamedev has been basically “admin”. But that’s OK because I’m very happy with my Gemini server –which got SNI support implemented just today–, so no complaints!

WIP of the cover

What I have today is mostly news regarding Brick Rick: Graveyard Shift.

First, the cover art by na_th_an (of the legendary Mojon Twins group) is looking amazing already. I’m looking forward to seeing it in colour!

And secondly, the big news: the game will be published in the 499 range of tfw8b.com.

I have known “The Future Was 8-Bit” for many years now. I’ve been a customer –their hardware for our 8-bit machines is great– and I like them personally. So I’m happy that Brick Rick is going to be available on cassette in a lovely budget range.

Is Graveyard Shift a budget game? I don’t know if that distinction makes sense any more for new 8-bit games (see that I didn’t say homebrew?), but I like the style, and for this game I wanted to try something different from the big box releases I’ve had with Poly Play so far.

So that’s all for this week. Not a lot, but at least a good peek of what is coming!

Directory traversal attacks

As part of programming a Gemini server, I’m dealing with some classic problems, such as directory traversal attacks:

A directory traversal (or path traversal) attack exploits insufficient security validation or sanitization of user-supplied file names, such that characters representing “traverse to parent directory” are passed through to the operating system’s file system API.

For example, if the content we are serving is in /var/gemini/, our server should only serve content from that directory. In other words, the following request should be illegal:

gemini://server.local/../../etc/passwd

That file should never be served, of course.

There are different ways of preventing this type of attack; the most common one, sketched in code after the list, is:

  • Get the root of our directory, in this example /var/gemini/.
  • Add the requested path and normalize the result, basically removing relative path components (for example, the ..). In this example we would have:
/var/gemini/../../etc/passwd -> /etc/passwd
  • If the resulting path doesn’t begin with the root directory, then the path is invalid.
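
As a minimal sketch (illustrative names, not the server’s actual code), using the /var/gemini/ root from the example:

    import java.nio.file.{Path, Paths}

    val root: Path = Paths.get("/var/gemini/").toAbsolutePath.normalize()

    def resolve(requestPath: String): Option[Path] = {
      // strip the leading "/" so resolve() doesn't treat the path as absolute
      val resolved = root.resolve(requestPath.stripPrefix("/")).normalize()
      // only serve paths that are still inside the root directory
      if (resolved.startsWith(root)) Some(resolved) else None
    }

    // resolve("/../../etc/passwd") == None
    // resolve("/index.gmi")        == Some(/var/gemini/index.gmi)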

It is very straightforward. But there’s one thing I don’t like: it leaks directory information out of the root directory.

For example, let’s say this URL is valid:

gemini://server.local/../../var/gemini/

It translates to exactly the root directory, and that’s a valid path. I guess that is not hugely important but, besides the fact that the server is leaking the root directory path, that URL can’t be easily normalized and we could end up with multiple valid URLs for the same directory.

So I thought of an alternative approach that detects that URL as invalid, while all the valid URLs stay easy to normalize via a redirect.

The basic idea is to split the path by the / separator. Then calculate for each path component a value based on:

  • .. goes back one directory: value -1.
  • . is the same directory, so no change: value 0.
  • Any other component: value 1 (as we go forward one directory).

If the running value ever goes below zero, we can say that the path is illegal and we can happily return a bad request response.

Let’s show some Scala code:

def validPath(path: String): Boolean =
    !path
      .split('/')
      // the path always starts with "/", so drop the empty first component
      .drop(1)
      // track the directory depth reached after each component
      .foldLeft(List(0)) {
        case (acc, "..") => acc.appended(acc.last - 1)
        case (acc, ".")  => acc.appended(acc.last)
        case (acc, _)    => acc.appended(acc.last + 1)
      }
      // a negative depth means we went out of the root directory
      .exists(_ < 0)

This code is called with the path already decoded from the URL, and the maximum length has been checked as well.

If the URL is valid, as in we never “went out of the root directory”, we can continue, check whether the URL is normalized, and redirect to the normalized version if it is not.

Now we can safely normalize and redirect. For example, all the following requests are redirected to the same path (the root document):

gemini://server.local/./
gemini://server.local/directory/../
gemini://server.local/directory/another/../../
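
A hypothetical sketch of that normalization step (not the server’s actual code; it assumes the path has already been validated) could be as simple as:

    import java.nio.file.Paths

    def normalize(path: String): String = {
      val normalized = Paths.get(path).normalize().toString
      // Paths drops the trailing slash, but we want to keep it for directories
      if (path.endsWith("/") && normalized != "/") normalized + "/" else normalized
    }

    // normalize("/./")                       == "/"
    // normalize("/directory/../")            == "/"
    // normalize("/directory/another/../../") == "/"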

I found it all very interesting, despite being an old problem to solve.