
This week in gamedev #14

The unnamed RPG project keeps moving forward!

This week can be summed up as “menus, menus everywhere”, because that is mostly what I have been working on.

Transfer item

Inventory management

There are several challenges:

  • Space constraints: mode 0 resolution leaves little horizontal space, and because I wanted the play area to be reasonably large, I don’t have a lot of columns –or rows really, because of the wall of text at the bottom–.
  • No mouse: “emulating a mouse” is a terrible idea on an 8-bit microcomputer (see Bloodwych, for example), and using keys like i for “inventory” is a problem because the CPC can have different keyboard layouts; so I’m focusing on implementing menus handled with the cursor keys (or a controller).
  • User interfaces are hard: independently of the system you are implementing them on!
  • How many inventory slots are enough? I was planning on 14, but then the menu looks too dense, so currently I’m on 10 –this is per character–. I thought this wasn’t enough, but then I was playing “Bard’s Tale 2” last night and it supports only 8 slots. Is 10 good enough then?

So far I think I’m happy with the results, including a very packed “stats” screen, with everything I wanted to show.

Another thing I have been working on is adding tables with weapons and armour, all from the White Box version of D&D, as well as implementing the rules (e.g. Clerics are only allowed to use blunt weapons). The White Box book is very clear, but this required a lot of back and forth and, for example, realising that a Halfling can wear a plate mail armour found in a chest in some random dungeon: of course it is the right size! But that’s basic D&D, and I have decided to implement those rules –although I’m tempted to add a house rule requiring strength over 14 to use heavy items–.

Now that I can manage the inventory and equip things (weapon, armour and shield), I have everything I need to start working on the combat, which is exciting! Finally something that feels like an RPG.

There is something that has been bothering me, though, and that I should probably work on: the text wall. Because of the double buffer and other reasons, long story short: it is slow. Although I improved it a couple of weeks ago, it needs a rewrite to make it both more versatile (it should support things like “X has done Y”) and faster.

Things are looking great, let’s implement the combat!

This week in gamedev #13

Not much progress, but slow progress is still progress!

I’ve been mostly working on the new project: the CRPG without a name (and I should come up with a name, really).

I’m glad that I started with some flashy bits first, so you can see the player moving around the world, because now I’m designing the data structures that will hold the information needed to model White Box –not sure if that’s the official site, but there are links to buy the book and a free PDF download–, which is the D&D ruleset I decided to implement in the game.

I mean, it is difficult and boring, and there isn’t much to show. Hopefully I won’t make any fundamental mistakes!

I have also tackled another important part of the game: being able to read data from disc –using the “c”, as Amstrad did–. In reality it is not that hard if you use the firmware, but that requires having some ROMs enabled, which reduces considerably the amount of RAM available to the game (16K for the lower ROM, and AMSDOS also takes a bit of the memory usable between 0x8000 and 0xc000).

My approach when making CPC games is to disable the firmware and unmap all the ROMs, so I have access to all the memory. But in that case, if I wanted to use the disc, I would need to use the hardware directly. Although there are libraries for that, it would reduce compatibility with some hardware expansions, like the M4 board, that don’t provide full FDC emulation. As I have mentioned here, my plan is to support 64K and disc; and using an M4 is a simpler way than having an external disc unit!

So my plan was to find out if it was possible to enable the firmware “on demand”, so I could have all the memory, taking into account that when I’m using the disc –with the firmware enabled– the memory under 0x4000 is mapped to ROM and some memory between 0x8000 and 0xc000 is used by AMSDOS (the kernel jump table, variables, buffers, etc.).

And it turns out it is possible, after some help from the CPC Wiki forum, to enable the firmware back. It is tricky and it requires knowing some internal functions of the firmware that change depending on the CPC model; but considering that there won’t be any new models, supporting the 464, 664 and 6128 (the Plus range is the same) is just enough. I wouldn’t mind adding specific support for other ROMs as well.

That is one of the good things of making software for a computer that is frozen in time. Hacks like these wouldn’t have been a good idea back in the day, because a new model could break the game (kind of like what happened with the ZX Spectrum floating bus trick).

Now that I have my disc routines working, I’m loading the map and the tileset from disc, which opens the door to a lot of content.

Once I have all the internal structures ready and the “use item” function implemented, I may work on something exciting: the combat!

A new wild project appears

I have mentioned here a few times that I was doing research related to a CRPG –Computer Role-Playing Game, which is how western-style classic RPGs are usually labelled–, and I finally decided to get something started.

Well, the truth is that this is not the first time I have started a project like this –some people may remember “Tales of Dunegon”, a false start a few years ago–, but I think this time I have something defined enough that there is a chance I will end up with a game.

I am not very good at planning my gamedev projects. Although I do some planning, it is not very formal. I rarely even bother to write a design document, although sometimes I have a TODO checklist when I’m close to the end of a project, just to ensure I’m not forgetting anything.

And that is because the games I’ve been producing for the last few years are essentially simple –I have joked sometimes that they are all the same game–: very mechanics-focused and built from the same basic components, so I tend to keep it all in my head.

But an RPG is not one of those projects. It is complicated, or at least that is what I think. So I have tried to split it into sub-systems, hoping that will make the problem more manageable (a divide-and-conquer approach).

I’m not a CRPG expert, so I have decided to study some games I like to try to understand why I think they work, so I can reproduce that formula.

WIP screen

Not much to show so far

If you follow me on Twitter and have seen some of my posts, you probably know already that I like the Ultima series, and I’m going to borrow from there –and other games–.

I still don’t have a name, and other than the Ultima inspiration, I have decided:

  • I’d love to unofficially implement D&D rules, probably some of the Open Game License rules. I like the “White Box” ruleset, from the OSR movement. It emulates the rules of the original 1974 edition of D&D, with a few improvements.
  • I finally decided to go with the Amstrad CPC –because drawing the graphics is easier for me–, with 64K initially and targeting disc –I need to load/save data–. I may look at modern cartridge solutions like the Dandanator, or even use 128K, but initially 64K and M4 board compatibility would be my preference.
  • I want to have the engine first, so if all goes well, I may release a first game that won’t be large. This would allow testing the engine, and later on I may iterate with other games. This is a long-term commitment!
  • I’m making the game for myself, because I want to and, hopefully, because I can!

And it will be hard, but not impossible. One subsystem at a time, and I’ll try to write some bits here as I progress in my journey.

This week in gamedev #12

Not much to talk about this week, but I thought I would drop a quick note.

First of all, and the primary reason why I haven’t done much gamedev in the last week: I have released a new version of my Gemini server, SpaceBeans.

This release adds classic CGI support, as described in RFC 3875 and slightly adapted to Gemini. For this feature the obvious inspiration is Jetforce, which is my favourite Gemini server. Instead of coming up with my own take on CGI for Gemini, I decided to be mostly compatible with Jetforce, hoping that the more servers support things in the same way, the closer we will be to a “standard”. You shouldn’t need to write your CGIs in a specific way to work on a specific server.

I knew I wanted to implement SCGI, but classic CGI is simpler and still useful, so I decided to support it first and then, later on, potentially look at SCGI again.
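If you haven’t seen CGI on Gemini before, this is a minimal sketch of what a script can look like –assuming Jetforce-style behaviour, where the server exports the RFC 3875 variables and the script writes the complete Gemini response to standard output, status line included; the prompt is just an example–:

#!/usr/bin/env python3
# minimal Gemini CGI sketch (Jetforce-style): the script prints the full
# Gemini response, including the "STATUS META\r\n" header line
import os
from urllib.parse import unquote

query = os.environ.get("QUERY_STRING", "")
if not query:
    # status 10 asks the client for input, with META used as the prompt
    print("10 What is your name?\r")
else:
    print("20 text/gemini\r")
    print("Hello, {}!".format(unquote(query)))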

I think I’m happy with the code and the test coverage, although there are some potential improvements that I want to make in a couple of weeks or so. As it is, I think it is usable –although I don’t think anyone is using the server anyway–.

Then there is gamedev, or perhaps the lack of it.

I haven’t been too inspired. I need some time to think about a clear direction for “Outpost”, because I thought I would get up to speed with the level design, but I’m a bit blocked –a sort of writer’s block, I guess–. I may continue drawing graphics, but until I have a good plan, the project is at risk of not happening.

Which takes me to another project that is stuck: the CRPG that I’ve mentioned here previously.

I’m still looking for a formula that I’m happy with and that I’m confident I can implement on an 8-bit system in a way that I’m happy with –it has been suggested to me that perhaps I should target a modern platform, with more or less classic restrictions–.

If I only made games for one 8-bit system, deciding this would be easy, but that is not the case. So that means I’m not necessarily looking at the MSX 1 –you heard it here first!–, but it is too soon to talk more about it.

Yes, this update is not too exciting. I will be on holiday next week, which is probably what I need right now!

Scala, Mill and GitLab CI

I use sbt to build Scala projects at work so, trying to learn new things, I decided to use Mill at home. And I’m very happy with it, although it is less mature than sbt and the user community is smaller.

One of the consequences of the community being smaller is that GitLab provides a template for sbt, but nothing for Mill. Which is OK, because it is very simple; with the exception of setting up the cache –so our builds speed up and we don’t use upstream resources by re-downloading dependencies every time–.

In reality it is not that hard, if you happen to know that Mill uses Coursier instead of Apache Ivy to deal with the dependencies.

I also wanted to generate an artifact (a .jar bundle) every time I make a release, which in my project just means tagging a commit. Then I’ll create the release itself in GitLab and link the artifact to that release –this can be automated as well using the GitLab API–.
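For reference, that last bit could look more or less like this –a hypothetical sketch using the GitLab Releases API, with a made-up project id, tag and token–:

#!/usr/bin/env python3
# create a GitLab release for an existing tag using the Releases API
import json
import urllib.request

PROJECT_ID = "12345"  # made-up project id
TAG = "v0.1.0"        # the git tag the release is based on
TOKEN = "..."         # an access token with API scope

req = urllib.request.Request(
    "https://gitlab.com/api/v4/projects/{}/releases".format(PROJECT_ID),
    data=json.dumps({
        "tag_name": TAG,
        "name": "SpaceBeans {}".format(TAG),
        "description": "See the changelog for details.",
    }).encode("utf-8"),
    headers={"PRIVATE-TOKEN": TOKEN, "Content-Type": "application/json"},
)
urllib.request.urlopen(req)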

This is my .gitlab-ci.yml:

image: openjdk:8

variables:
  MILL_CLI: "-D coursier.cache=$CI_PROJECT_DIR/.cache -j 0"

cache:
  paths:
    - .cache/

test:
  script:
    - ./mill $MILL_CLI server.test

package:
  script:
    - ./mill $MILL_CLI server.assembly
  artifacts:
    paths:
      - out/server/assembly/dest/*.jar
    when: on_success
    expire_in: never
  rules:
    - if: $CI_COMMIT_TAG

I have defined two jobs:

  • test: runs every time I push any commit; it basically runs the tests.
  • package: generates the .jar bundle, but only when there is a tag. I also want to keep the artifact “forever”, if the build is successful.

The cache is configured to use the .cache/ directory –relative to the project directory–, so I set up Coursier accordingly.

I use a variable to avoid repeating the flags passed to Mill, and that’s all!

The other bits I set up in my project are related to Mill itself. I use VCS Version to derive the project version from git, which is perfect given that I use git tags to control what is a release; and I changed the output filename used by Mill –it defaults to out.jar– so it can be used directly in the releases.

A way of accomplishing that in Mill 0.9.8 is by adding this to your ScalaModule in build.sc:

  // requires the VCS Version plugin imported in build.sc, e.g.:
  //   import $ivy.`de.tobiasroeser.mill.vcs.version::mill-vcs-version::0.1.1`
  //   import de.tobiasroeser.mill.vcs.version.VcsVersion
  val name = "myproject"
  override def assembly = T {
    val version = VcsVersion.vcsState().format()
    // rename the default out.jar to myproject-VERSION.jar
    val newPath = T.ctx.dest / (name + "-" + version + ".jar")
    os.move(super.assembly().path, newPath)
    PathRef(newPath)
  }

It basically replaces assembly with a new target that renames the default out.jar to something like myproject-VERSION.jar, integrating nicely with VCS Version.

Moving to GitLab

I have never liked the idea of centralising too many open source projects in the same hosting service. It didn’t go well a long time ago with SourceForge and, over time, I have seen GitHub going down a similar route. Then the Microsoft acquisition happened, and I thought it was unlikely things would change immediately. It is human nature to be lazy, I guess; so I didn’t do anything –because there was no reason to, really–.

It has been almost three years, and in reality things haven’t changed much. GitHub has been involved in several controversies over the years, but none of them was enough to make me leave the service, because the network effects are too strong: everybody is on GitHub, a lot of important open source happens there, and the social aspects work because everybody is there.

The latest controversy is the launch of a technical preview of GitHub Copilot, a machine-learning powered autocomplete tool:

GitHub Copilot understands significantly more context than most code assistants. So, whether it’s in a docstring, comment, function name, or the code itself, GitHub Copilot uses the context you’ve provided and synthesizes code to match.

The tool has been fed billions of lines of public code and, in theory, it synthesizes the code snippets and should never produce verbatim code, but there is an obvious concern here regarding open source and licenses. Open source doesn’t focus only on having access to the source code; it is the distribution terms that really make it relevant. The fact that Copilot may not respect the licences is bad both for the original projects and for the users introducing code into their own products without permission.

So it is complicated and, in my opinion, what is decided here and now about this matter –and in similar situations related to machine learning– will shape how these technologies could change our world.

Perhaps this is not enough of a reason to rely less on GitHub, but it is yet another thing, and I have decided to move to a different service that may be better aligned with my philosophy and, at the very least, to diversify and stop contributing to the centralising trend.

I know I’m going to lose things, especially regarding the social part of the hosting service, because I won’t have stars in my projects –or not that many– to measure popularity, and possibly not that many contributions –although I had close to zero anyway–, but at least functionality-wise I’m aiming to have the same.

There are a good number of hosting services for open source projects, and most of them tick all my boxes:

  • SourceForge: under new ownership, the bad practices seem to be gone.
  • Launchpad: if I can be blunt, I don’t like Canonical. That’s a story for a different time.
  • Bitbucket: I like this one a lot as well; I use it for some things –including private repos–, but perhaps it is too close to the GitHub ethos.
  • Sourcehut: very promising service, although it is a bit too different.
  • Gitea: and others. You can self-host it, but I don’t want to do that for now. Equally, you can find instances of these that accept external projects.

Finally, I have decided to give GitLab a go. I’m not saying that I have fully read (and understood) GitLab’s ToS, and I haven’t compared it with GitHub’s ToS, but the fact that you can download and run a self-hosted version (and that it is developed in the open) is something I like a lot.

For now I have moved two projects that I’m working on actively:

  • ubox MSX libs: as basic as it goes, git hosting, and perhaps I will use the issues as well.
  • SpaceBeans: this one uses more features, including CI. I will write a post about that!

I hope this decision doesn’t disrupt too much anyone using these –the MSX libs in reality; nobody cares about my Gemini server–, and I know this is not changing anything in the grand scheme of things, but there you go.

This week in gamedev #11

Welcome to another Thursday and this week in gamedev.

ZX Spectrum 48K: Outpost

There should have been an update last week, but I recorded 3 sessions, so it felt a bit redundant. You can probably watch the videos –or not; according to YT stats, the average watch time is from 5 to 10 minutes, ouch!–. I’ve improved the collision detection a lot and I have a couple of enemy types, consolidating colour sprites –the second video is interesting–.

The game engine is solid now, although it is missing moving platforms, more enemies, and a special entity to manage item/flag/progress, which is basically the same as keys and doors. And more things I don’t know it needs yet.

The last piece of work was related to text encoding. I had implemented 4-bit encoding in maps, to reduce the memory footprint by limiting screens to 16 tiles and packing 2 of them into one byte; but the code was optimised for (or specific to) that case. Then I did the same for 6-bit encoding, so I could encode a 64-character charset and save 20% of space; but that was already hard.

Turns out I was approaching the problem in the wrong way. I’ve now implemented a general packer that uses a bitstream: it converts the bytes to bits, then groups them in 8-bit chunks to get bytes back. In this way I can easily encode using any number of bits, including 5-bit –which is what I’m finally using to encode text–.

With 5-bit encoding for text, the idea is to have two charsets of 32 characters, and use an escape code to select the less frequent one when needed. It improves almost 10% over the 6-bit encoding.
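This is more or less the idea in Python –a rough sketch; the charsets here are made up and not the ones in the game–:

def pack_bits(values, width):
    # convert each value to `width` bits, most significant bit first
    bits = []
    for value in values:
        bits.extend((value >> shift) & 1 for shift in range(width - 1, -1, -1))
    # pad with zeros to a multiple of 8 and group the bits back into bytes
    bits.extend([0] * (-len(bits) % 8))
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

# the 4-bit map case packs 2 tiles per byte:
assert pack_bits([1, 2, 3, 4], 4) == b"\x12\x34"

# two charsets of 32; code 31 is the escape that selects the second one
MAIN = " abcdefghijklmnopqrstuvwxyz.,!?"
ALT = "0123456789ABCDEFGHIJKLMNOPQRSTUV"
ESCAPE = 31

def encode_text(text):
    codes = []
    for ch in text:
        if ch in MAIN:
            codes.append(MAIN.index(ch))
        else:
            # less frequent character: escape, then its code in the second set
            codes.append(ESCAPE)
            codes.append(ALT.index(ch))
    return pack_bits(codes, 5)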

All this because I plan to use terminals in the game to give information to the player. This can be very effective, if done well. It can be used to tell a story –which should help to keep the player’s engagement, like I did in Dawn of Kernel–, to clarify what I mean –sometimes puzzles can be misunderstood–, and even as part of the gameplay –where to go, what to do–.

The game will be 48K anyway and the amount of text will be limited, so the less space it uses, the better.

I’ve been drawing tiles as well, and I’m starting with the game design. It is difficult when you have an empty map, but usually things start to fall into place as soon as you start working on specific sections. I have a vague idea of what is going on in the game, but it needs more time until it is complete.

Other ideas, other things

I have a couple of ideas in the background, but other than preparing a template for Amstrad CPC games, I haven’t progressed on any of them.

Why haven’t I made a template repo before? Good question! Bootstrapping a new project is always tedious, and my experience releasing my MSX libraries has forced me to organise the code better. So I guess this will make it more likely that I can try a quick idea to see if it works or not, without committing a couple of gamedev sessions to it. I’m not sure if this is 100% positive; I’ve been focused on one project for quite some time, and having side projects may or may not be a good thing!

Speaking of MSX, the tools in ubox MSX libs are now documented. When I was documenting them, I realised how difficult it was to explain how some of these work, and that means it is very unlikely anyone was using them. I wonder what other things are missing for people to make new games. Let me know!

And that’s all for this update.

I am coding, season 2

Obviously, the part about it being the start of season 2 is a joke (isn’t it?), but I’ve uploaded a new video to my coding sessions.

I mentioned in my latest weekly update that I felt like recording a few of these, partially because some of the videos of my original experiment are OK, and partially because they may work as motivation to keep working on my projects. Sharing my progress has always worked for me, not only because of the feedback –positive and negative–, but also because gamedev as a solo developer can be a bit lonely.

I have reviewed my OBS setup to make things look a bit better –and 1080p!–. I have only one screen, so it is awkward, but definitely better than the previous videos. Hopefully going back to making these periodically will help me tweak my settings to improve the result, as opposed to making a video every few months and not remembering how things work –which is why the previous video back in January sounds terribly bad–. Not that it is too important, considering that the content will be the same –me coding–.

Anyway, the direct link to the video is here: reidrac is CODING 22: working on Outpost (ZX Spectrum 48K). It is a bit over 1 hour, which is longer than planned; ideally I’d like to keep these around the 45-minute mark.

This week in gamedev #10

This week has been very varied, with some unexpected things potentially happening!

ZX Spectrum 48K: Outpost

I have a working title for my ZX Spectrum game: Outpost.

I finally decided not to use the title I was planning for the cancelled sequel to Castaway, although I really like it. Unfortunately it would require a good amount of story to explain it, and Outpost is direct, makes the actual place a character, and according to Spectrum Computing the name is not taken –not that it is too important, unless it is a well-known game–.

I drew the title for the menu, and for now that’s it.

Outpost

The menu (colour cycle not shown)

Another thing that I got out of the way is the map encoding. I’ve implemented metatiles in a way that is not too annoying, thanks to a good amount of Python.

The basic idea is that I keep a tileset image and a metatiles image. When processing them, I generate the tileset normally –the monochrome data, and a tile/attribute map, all tiles of 8x8 pixels–, plus a table that translates 2x2 metatiles (16x16 pixels) into indexes in the tile/attribute map. The converter finds the 4 tiles that are part of a metatile by matching image data.

Then in Tiled I draw my screens using the metatiles, and the game will expand them into regular tiles before drawing, collision detection, etc. The result is very clean, and the only bit that is slightly cumbersome is having to keep both the tiles and metatiles images in sync when I’m just trying out graphics; but it is not too bad, and eventually both images will be stable and it will be just level design.
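The matching itself is simple with Pillow; a minimal sketch of the idea –the real converter does more work, like generating the attribute data, and the names here are made up–:

from PIL import Image

TILE = 8         # tiles are 8x8 pixels
META = TILE * 2  # metatiles are 2x2 tiles (16x16 pixels)

def blocks(image, size):
    # yield size x size pixel blocks, left to right, top to bottom
    for y in range(0, image.height, size):
        for x in range(0, image.width, size):
            yield image.crop((x, y, x + size, y + size))

def metatile_table(tileset_png, metatiles_png):
    # both images must use the same mode/palette for the data to match
    tiles = [t.tobytes() for t in blocks(Image.open(tileset_png), TILE)]
    table = []
    for meta in blocks(Image.open(metatiles_png), META):
        # find the 4 tiles of each metatile in the tileset by image data;
        # raises ValueError when the two images are out of sync
        table.append([tiles.index(q.tobytes()) for q in blocks(meta, TILE)])
    return table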

It is hard to tell exact numbers, but looking at my test screens, the map data went from around 100 bytes to around 50 bytes, without limiting the level design –other than the number of metatiles; but at 4 bytes per metatile, I don’t think it will be a problem–.

There’s also the space used by entities, and there aren’t many savings there, so I calculated how many screens I can include in the game, and I’m aiming at 80 or 90. It should fit!

I’ve started with the level design already, stopping to implement engine features as I need them, and the game is starting to look good!

Things going cold, and a possible comeback

Unfortunately the MSX RPG is on hold, or shall I say cancelled? As I mentioned previously, I came to the conclusion that what I think would work within MSX 1 restrictions is either:

  • not looking good –in my humble opinion–, so it is putting me off.
  • not the game I want to make.
  • although it may work, it would require a lot of graphics that I can’t draw.

Being a type of game that is likely to be a long-term commitment, I can’t keep putting time into it unless I’m reasonably certain that I can finish the game.

That doesn’t mean all the bits I have done already are wasted, because I’ve learnt a few things, and it is likely that I know another 8-bit system that would be a better match for this project –sorry, it is not the MSX2; there are a few RPGs on that system already and I’m not sure I would be able to contribute anything new–.

On the MSX front, there have been several contributions to ubox MSX libs –all from Pedro de Medeiros; thank you!–, and those have reminded me that I didn’t document the tools used to build the example game.

In reality I wasn’t sure if I wanted to document those, as they are not really required to build games, but Pedro is keen to use them, and it is not always obvious how they work. It is still a work in progress, but I hope the tools will be fully documented soon.

Finally, it is possible that the “@reidrac is CODING” videos may have a comeback –a second season?–. My inspiration has been on and off lately and, after a while, I must confess some of the videos I recorded aren’t too bad. It is possible that making very focused videos may spice things up a bit and help me to get things done.

The maker controversy

First of all I want to clarify that in this post I’m focusing on 8-bit games, but you can find similar controversies and attitudes around makers in any gamedev market (see Game Maker Studio or Unity, for example).

It is a bit unfortunate that there is a lot of gatekeeping in the retro community, and it has many forms: people that advocate using only the real hardware –dismissing those aficionados that use emulators, or even FPGAs–, what type of screen you must use (CRTs vs anything else), that games must push the machines to their limits –if the game is fun or not, who cares?–, or that true gamedevs use ASM. Without being an excuse, I understand that this is a hobby and people are passionate and intense about it. And the gatekeepers are extremely loud!

As you can see, I’m also biased –everybody is, as long as you have an opinion–. Perhaps I’m in the let people enjoy things camp, which I hope can’t be considered gatekeeping.

One of the recurrent topics in all these discussions is how games made with makers –tools or frameworks that help you to make games, for example AGD or The Mojon Twins’ MKs– are looked down on as lower quality games.

And it is true that you can find games that aren’t great, but that’s not necessarily because of the way they are made; there are other reasons. It could be because the author didn’t use an original idea, or didn’t know how to use the tools, or simply because they reused too much of the basics that the tools offer, resulting in a game that is too similar to other games made with that tool. It could be as well that you just don’t like the game.

However, in my opinion, that doesn’t mean it is because of the tools –although they may have limitations, like any game engine, just like I have my own limitations as a programmer because of time, interests or skills–. When you are playing a game and you are having fun (or not), you can’t say it is because of the tools that were used to make it, because those tools allow a wide range of results, all depending on the author’s skill using them.

Sure, you can have a flickering mess that moves slowly on the screen and ruins any chance of enjoyment, but the truth is that those cases are very unlikely, because these makers tend to have a more than acceptable technical level –although I’m sure any tool, no matter how good it is, can deliver terrible results in the wrong hands–.

We got to the point where a very atmospheric game such as Nosy for the ZX Spectrum gets a good score in Crash Micro Action #2 (90%, with a Crash Smash!), and people complain and argue that the score is too high because the game was made with a maker. Apparently there are even people saying that any game made with a maker should have a 20% penalty, under the argument that it is unfair to compare this game –again, made with a maker– with masterpieces such as Aliens: Neoplasma.

(Edit 2021-06-09 19:00:00 BST: I’m referring to what I’ve been told, despite the original post not saying exactly that. The complaint was about the score and the Crash Smash, and the fact that the game was made with the MK1 engine was used to support the opinion that it is an average game. The post suggested that the score should be reduced by 20% in all categories, not that all games made with makers should score less. That was the origin of the controversy around Nosy, and it adds to the usual criticism of makers –and the community drama–. The post in question can be read in FB’s Crash Annual and Magazine public group.)

We are living in a new golden age of the 8-bit microcomputers. Despite being past their time, we keep getting new games, made by people putting a lot of love into them. And because people do things for love –most of these games are free as well–, you have games targeting the ZX Spectrum 48K, ZX Spectrum 128K, ZX Spectrum Next –I can hear gatekeepers screaming “that’s not a Spectrum!”–, or even games that use modern hardware add-ons such as Dandanator cartridges. All at the same time, with some people stuck in 1985, others in 1990, and some others living in the future! How can you compare all those games in different time-lines?

I don’t feel like deciding here how Crash reviews should be written, even if I have an opinion and don’t always agree with the reviews –which, by the way, is just another opinion–. I’m just happy that they exist, and the same goes for all those makers and the people making games with them. Please, don’t stop!

I wish we could have a bit less gatekeeping, so everybody is welcome no matter how they want to enjoy this hobby. We can all help by politely asking those shouting to be a bit quieter.