I love the graphics –hand-drawn by Capcom–, the character design, how the story is told –e.g. you play a character that is not part of the party to learn about his story–, the combat, how you can learn new skills from enemies –it is amusing to learn the “command” special from the duck commander and use it to ask his minions to kill him–; there are a lot of things to love in this game.
So, why am I bouncing off this game after roughly 10 hours? Because I wanted to play a JRPG, and this one has a lot of mini-games that ask me to navigate screens to do uninteresting things.
And that is the main problem. The game looks fantastic, but the way they implemented the isometric graphics requires you to constantly rotate the camera to see where you are and where you are going. So having to do that while chasing some sort of wild boar to find the mayor of a weird town so you get permission to cross some tunnels is not only uninteresting but annoying.
The combat is alright, I found it to have the right amount of options and strategy, so I didn’t mind having to fight cute mice and roaches –big, strong ones– when I was already used to fighting undead and zombies.
But when I start to miss the combat because I spent an hour finding orphans playing hide and seek so I could continue my main quest, having to kill a few mice and roaches is too little before the game asks me again to play one of those mini-games.
It is a shame, but considering how little time I have to put into a game like this, I don’t think Breath of Fire IV is for me!
So shortly after I posted here my notes about my Python with LSP setup, I decided to give the neovim native LSP support a go. Mainly because I found that vim-lsc didn’t understand some of the messages Python LSP Server was sending and, although it looks like that didn’t have any effect, the error reporting was a bit distracting –even when I tried to disable it–. So I thought, how difficult could it be to use neovim’s built-in support?
Very easy, actually. I guess I’m not the only one not willing to burn all bridges in case I have to go back to using vim –which has been around long enough to make me think it will be there for ever–. There is no reason to think that neovim will disappear, or go in a direction I don’t fancy. My vim-lsc setup did work with vim, and in some cases I don’t have neovim available (or at least not at the minimum version required to enjoy all this).
But the fact is that I’m already enjoying some neovim-only features, like Telescope, so let’s go all in with neovim for my LSP needs –at the end of the day, it is what I use for development–.
The changes to my previous configuration are actually very simple. First, I removed the vim-lsc plugin and added nvim-lspconfig, wrapped in a condition so it doesn’t load if I use vim or the neovim version is not high enough:
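Something along these lines should work –I’m using vim-plug syntax here as an illustration, and the exact version in the check is an assumption–:

```vim
" load nvim-lspconfig only on a recent enough neovim (vim-plug shown as an example)
if has('nvim-0.5')
  Plug 'neovim/nvim-lspconfig'
endif
```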
That should make vim –and older neovim– load fine.
Then, in my init.vim for neovim, I added an include of the file that will have the LSP specific configuration –and now I realise it should be wrapped in a version check as well–. This doesn’t include my metals setup because nvim-metals doesn’t use the nvim-lspconfig framework –I need to investigate this, actually–.
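The include itself can be as simple as sourcing the lua file –the file name here is hypothetical, and I’ve added the version check I mentioned–:

```vim
" in init.vim: load the LSP configuration (lsp.lua is a made-up name)
if has('nvim-0.5')
  luafile ~/.config/nvim/lsp.lua
endif
```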
I decided to write the configuration in lua instead of vimscript, because it is more readable, and because the minimal example uses that language.
For full reference –for my future self–, this is the configuration:
-- for nvim lsp support

-- Mappings.
-- See `:help vim.diagnostic.*` for documentation on any of the below functions
local opts = { noremap = true, silent = true }
vim.api.nvim_set_keymap('n', '<leader>d', '<cmd>lua vim.diagnostic.open_float()<CR>', opts)
vim.api.nvim_set_keymap('n', '[d', '<cmd>lua vim.diagnostic.goto_prev()<CR>', opts)
vim.api.nvim_set_keymap('n', ']d', '<cmd>lua vim.diagnostic.goto_next()<CR>', opts)
vim.api.nvim_set_keymap('n', 'da', '<cmd>lua vim.diagnostic.setloclist()<CR>', opts)

-- Use an on_attach function to only map the following keys
-- after the language server attaches to the current buffer
local on_attach = function(client, bufnr)
  -- Enable completion triggered by <c-x><c-o>
  vim.api.nvim_buf_set_option(bufnr, 'omnifunc', 'v:lua.vim.lsp.omnifunc')

  -- Mappings.
  -- See `:help vim.lsp.*` for documentation on any of the below functions
  vim.api.nvim_buf_set_keymap(bufnr, 'n', '<C-]>', '<cmd>lua vim.lsp.buf.definition()<CR>', opts)
  vim.api.nvim_buf_set_keymap(bufnr, 'n', 'K', '<cmd>lua vim.lsp.buf.hover()<CR>', opts)
  vim.api.nvim_buf_set_keymap(bufnr, 'n', 'gi', '<cmd>lua vim.lsp.buf.implementation()<CR>', opts)
  vim.api.nvim_buf_set_keymap(bufnr, 'n', 'gsh', '<cmd>lua vim.lsp.buf.signature_help()<CR>', opts)
  vim.api.nvim_buf_set_keymap(bufnr, 'n', '<leader>rn', '<cmd>lua vim.lsp.buf.rename()<CR>', opts)
  vim.api.nvim_buf_set_keymap(bufnr, 'n', '<leader>ca', '<cmd>lua vim.lsp.buf.code_action()<CR>', opts)
  vim.api.nvim_buf_set_keymap(bufnr, 'n', 'gr', '<cmd>lua vim.lsp.buf.references()<CR>', opts)
  vim.api.nvim_buf_set_keymap(bufnr, 'n', '<leader>F', '<cmd>lua vim.lsp.buf.formatting()<CR>', opts)
end

local signs = { Error = "🔥", Warn = "⚠️ ", Hint = "✨", Info = "ℹ️ " }
for type, icon in pairs(signs) do
  local hl = "DiagnosticSign" .. type
  vim.fn.sign_define(hl, { text = icon, texthl = hl, numhl = hl })
end

-- Use a loop to conveniently call 'setup' on multiple servers and
-- map buffer local keybindings when the language server attaches
local servers = { 'pylsp' }
for _, lsp in pairs(servers) do
  require('lspconfig')[lsp].setup {
    on_attach = on_attach,
    flags = {
      -- This will be the default in neovim 0.7+
      debounce_text_changes = 150,
    },
    -- FIXME: shouldn't this be only for pylsp?
    settings = {
      pylsp = {
        plugins = {
          pylsp_mypy = {
            enabled = true,
            live_mode = false,
          },
        },
      },
    },
  }
end
Which is the suggested minimal configuration, plus the keybindings I use in metals –and I’m used to them–, plus a couple of small things that affect my metals setup as well –e.g. the icons in the gutter to signal errors, warnings, etc.; also something I need to investigate further–.
And that’s all, really. It seems to work fine, and all the noise from vim-lsc seems to be gone. I don’t know if the native neovim LSP support is doing anything differently, but at least it is not reporting errors, so I guess… if I don’t know about it, it is perfectly fine.
This is still a bit of a work in progress and there are things I still don’t fully understand, but I will get there!
I wrote a similar post back in April 2021 –on Gemini, that’s obscure!–, but then I updated my main Debian box and something broke. Then I revisited it all after I dropped the few projects I still had on Python 2, and it all stopped working –did it ever work with Python 3? Not sure–. So there you go, I did it again and I’m documenting it for “the future”.
I use vim-lsc –for now–, because it does a lot out of the box, and with almost zero configuration. Even now that I use nvim-metals for Scala –in neovim–, I have configured most bindings to behave like vim-lsc because I like it. I guess I could investigate the native neovim support for LSP, but I’ve been lazy. vim-lsc is good, but it doesn’t look like it is actively maintained, so take that into account if you want to try this.
The way I’m enjoying Python lately is with:
Python LSP Server –this is a community fork of the unmaintained Python Language Server–. This does a lot of things already; even if you use only this, it is a good experience.
Mypy –an optional static type checker for Python–, via pylsp-mypy. The code is checked on save.
Black –a code formatter; “Any color you like”–, via python-lsp-black. The code is formatted on save.
You can install all this in your local user using pip with:
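Based on the package names above, it would be something like this –check each project’s documentation for the exact package names and any extras you may want–:

```sh
pip install --user python-lsp-server pylsp-mypy python-lsp-black
```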
This is best with Python 3.8 or later. You may also want to ensure neither yapf nor autopep8 is installed, or black won’t kick in. But you can skip black altogether and use yapf –it is also nice, just not as nice–.
I’m not always annotating my code with types –and I should–, especially in old scripts, but in general this is my setup. And it is all controlled via vim-lsc and this piece of configuration:
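A minimal vim-lsc setup along these lines should do –this is a sketch assuming pylsp is in your PATH; see vim-lsc’s help for the full list of options–:

```vim
" tell vim-lsc to run pylsp for Python buffers
let g:lsc_server_commands = {'python': 'pylsp'}
" use the default key mappings provided by vim-lsc
let g:lsc_auto_map = v:true
```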
vim-lsc will start the LSP server only when we edit a Python file. I recommend reading its help to learn the keyboard bindings to navigate the code, autocomplete, etc.
I hope this doesn’t break again any time soon, especially now that I’m not using the outdated and unmaintained Python Language Server!
You have all the info at @reidrac is CODING, but basically I tried streaming a couple of my coding sessions on Twitch, and I liked it!
The way I’m looking at it is different from what I used to do on YouTube (and these are the videos so far): my “off-line” sessions are videos to keep in the channel, while Twitch is more about sharing my programming sessions as they happen –currently I’m not keeping the videos, although I may keep some that are especially good to get a bit more attention–.
What would I like to get from these sessions? Good question!
First and foremost, finishing my current game –or whatever project I’m coding on, it won’t always be 8-bit gamedev–. If Twitch helps me with motivation and, in a way, to gamify the development process –like sharing progress on Twitter–, that’s great. One of the hardest things about gamedev is being persistent, because making a game is a long road, and until you get to the end… there’s no game!
Secondly, 8-bit gamedev is a kind of lonely activity, in that it is very unlikely I chat about it with anyone in “real life”, so I’m curious to see how the community part of a streaming channel will work –which initially will be none, because nobody will watch my sessions–.
I was joking in the title of this post –see the Buggles song–, and I plan this to be stress-free and to last as long as I feel like doing it. I won’t have the webcam and/or the microphone on if I’m not alone or it isn’t quiet enough around me –I don’t have a dedicated office space at home–, and it is unlikely I will be able to schedule the streams in advance: I will code as I usually do, with the difference that I will be streaming it.
My Twitch channel is MrReidrac –because my regular nickname was taken!–.
So I’m at the end of Crystal World and my next fight is Deathguise –I don’t know this one–, followed by Kuja, and then Necron –I don’t know this one either–. Looks like I’m under-equipped, and I can only go back to the previous area (Memoria), and that’s it.
According to the save game, that was almost 42 hours –in reality a bit more, as I used save-states whenever I thought it was useful–. I didn’t read any guide, so I would say the game is not hard.
I guess I could go back to Memoria and grind for a couple of hours until the end bosses are easier, although Deathguise uses attacks that can wipe half of my party in one hit, and that suggests I’m not using the right protections.
If I had used a guide, that would have helped me prepare my party for these boss fights. Some things can be found by trial and error –for example: the Soft potion will remove paralysis effects–, and I’m very proud of killing the undead type of boss at the Lifa Tree by using the Life spell, but I don’t think the equipment names give enough information on what they do.
Other than not being able to beat the game, I think I got most of what the game had to offer –and I can always watch the end on YouTube–. I enjoyed Final Fantasy IX, although I spent most of my play-through wishing it was better at some things.
The story is all over the place –which, to be fair, could be considered your usual anime thing–, but the game is almost completely linear –I know of a side quest I didn’t do–, with a lot of “press X to read” parts with sprinkles of combat, kind of to justify the RPG elements. I don’t know if it was because I was playing with the boys and they can’t read, but some of those pieces of exposition felt too long –for example, during the battle of Alexandria–.
But on the other hand the characters are well developed, the story is OK –including the who’s-the-baddie-now parts–, and there’s also a good amount of world building. With the combat not being too complicated, what I ended up missing was a bit more RPG.
The 3D graphics are dated –it is a PlayStation game after all–, but I found them good enough to my eyes –and the boys really liked the big monsters: cute but a bit scary–. The pre-rendered backgrounds look great, but they were my biggest problem with the exploration: I got stuck at least 3 times just because I missed that there was an exit on a screen. That was half the scenes being very busy and half the camera angles, but it is a shame that those were the only times when I felt I didn’t know what to do or where to go.
I’m looking for my next game, continuing to research JRPGs. I tried a bit of Final Fantasy VI on the SNES –a patched ROM that improves the translation and adds a few bug-fixes–, but going back to those 16-bit graphics and the UI being a bit less refined felt too hard. So I’m going to try something different: Breath of Fire IV on the PlayStation. Those Capcom hand-drawn sprites look beautiful!
I’m a bit stuck with my CRPG project –that really, really needs a name!–. I think I got the gist of implementing the Whitebox rules, and a nice keyboard (or joystick) controlled menu system to manage inventory, but I have never made a full CRPG before –the closest was many years ago in GWBASIC–, so I don’t know if what I’m doing is correct.
It is true that it doesn’t matter too much as long as it works, but I have reasons to think that I’m not doing a great job. For example: I’m using a lot of code, and while the cartridge format I’m using makes it easy to have lots of data, code is tricky (especially coding in C).
So I have that feeling that I’m going to keep putting hours into the project only to realise too late that I have lost my motivation, or that I have hit a dead end.
It has happened to me before, even recently working on Hyperdrive –I have implemented the same engine three times already–. A CRPG is a larger project and I don’t feel like starting everything again –there was a failed project before this one, and if you count moving from disc to cartridge as an iteration, this could be my third go already!–.
That’s why I have been focusing on more direct projects, like Hyperdrive –despite it being unclear how things are going to be implemented, I have finished a couple of shoot’em ups already–, letting the CRPG project rest while I keep thinking about it in the background.
I watched a talk by Josh Ge on How to Make a Roguelike, and besides being very inspiring –I don’t think I’m going to make a roguelike any time soon; although you never know!–, it made me realise how little of a plan I have to build my CRPG. And then a talk by Bob Nystrom (of Game Programming Patterns fame, among others), Is There More to Game Architecture than ECS, made it pretty clear that I have no idea if my approach will work –and the fact that the bits I have feel too big for an 8-bit system could be a good indicator–. I may need a way of learning before I can succeed at an actual CRPG.
So now I’m thinking that perhaps I should try a smaller project, probably on a modern system so I can prototype and iterate faster, and manage to finish something –even if it is small–. I know how to finish games, but not a CRPG.
Recently I’ve been playing with some C and SDL2 code that looks very simple –and not that different from what I do on 8-bit systems!–, and I also have a clean new codebase to do 2D games using JavaScript and Canvas 2D. I’m considering one of these two options to implement something small on the side, so I can validate some of my current assumptions for the actual 8-bit CRPG. It sounds like a better way to do things wrong, and I may even finish a game!
Today is the first anniversary of ubox MSX lib –1.0 was released one year ago today–, and yesterday I tagged version 1.1.9, which likely wraps up 2021.
I believe the code is stable, and in the 14 releases of 2021 I mostly focused on:
Improving Windows compatibility. Although I didn’t want this to be a priority –I don’t use Windows at all, I can’t support it–, once I had one user trying to compile on Windows, it was a great opportunity to make things more portable –portability is good–.
Added compression (with aPLib and apultra). Arguably this should have been in the first release, but it was not essential.
Added missing documentation. This is mostly regarding the tools used by the example game. In my original idea, the important thing was the libraries, so those were well documented. It turns out users want to use the Python tools in their own projects.
Then I had a few contributions from one user (Pedro), and that resulted in a few unexpected improvements:
Some usability improvements in the Python tools.
A better build pipeline, including basic CI in GitLab.
And on top of all this, some bugs were fixed. These are mostly issues that I introduced accidentally when I ported the original code from Night Knight / Uchusen Gamma to something more general and usable.
What is next?
It is complicated because I’m currently not actively working on an MSX project, and adding functionality that you are not using is hard. But I have a TODO list:
Add some CAS support.
Add perhaps another compression tool; probably ZX0 with salvador.
Support 48K ROMs.
Add some MSX 2 features.
Pedro is interested in the MSX 2 support, but that is something I’m still planning, and I want to create an experimental branch to see how we can support some sort of configuration to target MSX or MSX 2, dealing with the limitations of SDCC (especially sdasz80). Besides, it would be ideal if I could align that with a game project, so things are well tested and I can justify adding code to the project that I will have to maintain.
Open Source projects can take a lot of your time, and I very much prefer making games to making libraries/tools, but with some patience I think we can do some interesting things in 2022.
It all started during a week off this last November. Long story short, I had too many holiday days and I wanted to use some, so I thought: the children are at school, I guess I can rest and perhaps do something different without them.
It wasn’t a bad plan, but then I got a bad cold, and because it is likely that the source was one of the boys, the young one was ill at home the first part of the week, and then the other one the second half. Stuck at home, not feeling great, so I started playing games: a bit of “Super Mario Sunshine” –but my 3D platforming skills aren’t great–, and then settling for “Final Fantasy IX” on the PlayStation.
Not that I played a lot that first week, because I found it difficult to stay focused on the game: the story is goofy and I found little game in the first 4 hours. I complained on Twitter that it was a linear story with sprinkles of combat –and not a lot of it!–. Not what I was expecting from a JRPG.
Arguably I haven’t played many of these, and it was many years ago with emulation of 8 and 16-bit titles on DOS with my Pentium 100MHz. Besides, the only one I managed to finish was Chrono Trigger, which is a bit different –shorter and more accessible, perhaps?–.
Have you played other PS1 FF games? They all start very guided, but later the world opens up and you have a lot more freedom. 4 hours into a 30+ hour game is still early. Japanese RPGs were never the ones with the most hardcore RPG mechanics anyway.
I’m now 14+ hours into the story –I’m still on the second disc of four, and the median time to beat the game is around 39 hours–, and it is still very linear and… easy? Granted, there have been some party splits and the story is all over the place, but so far I haven’t encountered any hard puzzles or combat –there are a couple of fights that you can’t win, and those are basically scripted story–. I don’t know if I’m nailing it, or it is just that FF IX is not what I was expecting.
I want to finish it during the Christmas break. Playing one or two hours a day is very doable, because the boys like it, and they find the encounters fun, especially when one of the party members executes a super-powerful move that kills a funny-looking monster.
The secret project is not a secret any more, and it has a name: Hyperdrive.
I’m officially working on a vertical scrolling shooter for the Amstrad CPC, targeting 64K and CPC+ cartridges –without any CPC+ specific features for now– and the Dandanator. So far it has been tested on emulators (CPC+ on CPCEC and WinAPE, Dandanator on RVM2 and CPCEC), and on my Amstrad CPC 464 using the CPR file and the M4 add-on.
I don’t like games that put technical excellence over gameplay, but for now the highlight is smooth one-pixel scrolling at 25Hz, and hopefully the fun gameplay will come later. My own Uchusen Gamma is going to be an important influence, of course, as well as the graphic style of Dawn of Kernel.
So far I’ve been focusing on the core, which is close to complete. After that I will start with the first round of optimizations to see how far I can get in the number of simultaneous sprites on screen –in my initial tests it was 9 masked 8x16 sprites, which may not be enough–, and then enemy and level design.
This is not my first SHMUP, and I already know how to do a lot of complicated things that were new to me when I made Uchusen Gamma, but the Mode 0 resolution with 2:1 pixels has some challenges –that I apparently avoided in Dawn of Kernel–.
This seems to be a recurring topic on my Twitter, because every time I start a new project I try to find the best way to deal with the map data. I guess I haven’t found the perfect encoding or, at least, an encoding that is good for all cases.
In this post I’m going through the different approaches I’ve used in my games.
Of course, use compression!
This is common to all my games: I compress the map data. Which is funny because there’s always someone on Twitter suggesting it: “have you tried compression?”. Sure!
Unfortunately, compression alone is not enough when targeting some microcomputers with a big number of screens.
For example, looking at Golden Tail: it uses screens of 20x10 tiles, with 8x16 tiles (although it is irrelevant for what we are looking at here, it means the tile map is smaller). The game had a total of 40 screens –not a large map–, which adds up to 8000 bytes with no compression. That is a bit too big for a 64K single-load Amstrad CPC game.
So the idea is compressing each screen independently, and you uncompress the screen you need to draw. You may get better compression ratios by grouping screens, but that means a larger memory area to uncompress a group into. In Golden Tail I was still using UCL compression –which is good, but the compression library is very fiddly–, and it really helps to reduce those 200 bytes per screen.
Now, this example may not be ideal because in Golden Tail I was already limiting the amount of tiles you can use per screen –explained in the next point– and the screens are optimised for that, but in this case UCL gives us an average of 88.4 bytes per screen, which is 3536 bytes in total, saving us 55.8%. Not bad!
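To illustrate the idea –using Python’s zlib as a stand-in compressor, so the numbers won’t match the UCL figures above, and with a made-up screen–, compressing one screen independently looks like this:

```python
import zlib

# hypothetical 20x10 screen: mostly tile 0, with a floor and one platform
screen = bytearray(20 * 10)
screen[180:200] = bytes([1]) * 20   # bottom row of floor tiles
screen[100:108] = bytes([2]) * 8    # a small platform

# each screen is compressed on its own, so the game only needs a
# 200 byte buffer to uncompress the single screen it is about to draw
packed = zlib.compress(bytes(screen), 9)
assert len(packed) < len(screen)
assert zlib.decompress(packed) == bytes(screen)
```

Repetitive map data like this compresses very well, which is exactly why per-screen compression pays off.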
Bit packing
It is possible that you want to reduce the size of the map data even further. For example, it could be that your tile/sprite engine is using a lot of memory, or that you are writing the game in C –which will use much more space than hand-written assembler–. Bit packing can help to go beyond what compression can provide.
One way to do that is limiting the amount of tiles you can use on a single screen, so the tile index uses less than 8-bits, and bit packing the map data before compressing.
Continuing with Golden Tail’s example, using 4-bit per tile, one screen is now 100 bytes (we store 2 tiles in one byte), and the average UCL compressed screen is 79.5 bytes. The 40 screens are now 3180 bytes, giving us a further 4.45% of savings. Again, this is not a good example because Golden Tail’s data is already optimized to use 4-bit per tile, so the plain compression is already giving very good results even without bit packing.
What is the downside of this approach? You have fewer tiles to design your screens with, which makes things harder, but it is just a matter of practice. Golden Tail didn’t look too bad despite using a maximum of 16 tiles per screen, if I say so myself.
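The packing itself is simple; a Python sketch –not my actual tool, just an illustration– for any bit width that divides 8:

```python
def pack_tiles(tiles, bits):
    """Pack tile indices using `bits` bits per tile (bits must divide 8)."""
    per_byte = 8 // bits
    out = bytearray()
    for i in range(0, len(tiles), per_byte):
        b = 0
        for j, t in enumerate(tiles[i:i + per_byte]):
            assert t < (1 << bits)   # the index must fit in `bits` bits
            b |= t << (j * bits)
        out.append(b)
    return bytes(out)

def unpack_tiles(data, bits, count):
    per_byte = 8 // bits
    mask = (1 << bits) - 1
    tiles = []
    for b in data:
        for j in range(per_byte):
            tiles.append((b >> (j * bits)) & mask)
    return tiles[:count]

# a 20x10 screen with 4-bit tiles is 100 bytes instead of 200
screen = [i % 16 for i in range(20 * 10)]
packed = pack_tiles(screen, 4)
assert len(packed) == 100
assert unpack_tiles(packed, 4, len(screen)) == screen

# with 2-bit tiles (up to 4 tiles per screen), the same screen is 50 bytes
assert len(pack_tiles([i % 4 for i in range(20 * 10)], 2)) == 50
```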
Let’s look at a more extreme example: Night Knight.
Night Knight screens are 16x22, which is 352 bytes per screen, and the game has 80 screens. That’s 28160 bytes!
In this MSX game I used 2-bit per tile, allowing up to 4 tiles per screen. It also has some information that is not stored in the map data and instead is added by the game: shadows and brick patterns.
By using 2-bit per tile, our screens are now 88 bytes, making the complete set 7040 bytes. Although I’m talking about only map data, games usually include entities –like enemies– that need to be added to the screen, and those are stored in sequences of bytes –in this game it is 3 or 4 bytes per entity–, and accounting for those the real total is 9864 bytes (123.30 bytes per screen, on average).
If we apply compression –I used UCL in this game as well–, we get a total of 7028 bytes with an average of 87.85 bytes per screen.
Night Knight is the type of game that can use 2-bit per tile, with different tile-sets during the game, and the screens still look nice. It always depends on the game, but I would say 4-bit per tile is a good compromise. Besides, we can always cheat using an entity that allows adding tiles from outside the tile-set, in exchange for some memory, of course.
Using objects instead of tile maps
So far the methods I have described are more or less level design friendly, because it is just drawing your scene using tiles with your favourite map editor –I use Tiled, and I explained how in a video–.
A different approach is to not use a tile map at all but to describe objects instead, which will be interpreted by your map renderer to draw the scene.
Brick Rick uses 20x22 screens, which would be a tile map of 440 bytes per screen. The game has 50 screens, which adds up to 22000 bytes –without counting the entities–.
By describing each screen with a set of objects (“fill area”, “draw platform”, etc.) –plus adding shadows, brick patterns, etc.–, those 50 screens use 3112 bytes, with an average of 62 bytes per screen.
Obviously we want to use the smallest amount of memory to describe those objects. Brick Rick implemented 3 object types, encoded as a 1 byte offset from the previous object, 3 bits for the type of object and 5 bits for the width, and then, depending on the object, there could be one extra byte.
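Decoding that kind of stream could look roughly like this –the bit layout and which type takes the extra byte are assumptions for illustration, not the actual Brick Rick code–:

```python
# each object is:
#   1 byte: offset from the previous object
#   1 byte: 3 bits of object type (high) + 5 bits of width (low)
#   some types take one extra byte (here: type 2, a hypothetical "fill area",
#   adds a height byte)
def decode_objects(data):
    objects = []
    pos = 0
    i = 0
    while i < len(data):
        pos += data[i]              # offset accumulates along the screen
        packed = data[i + 1]
        obj_type = packed >> 5      # high 3 bits
        width = packed & 0x1f       # low 5 bits
        i += 2
        extra = None
        if obj_type == 2:           # assumed: this type carries an extra byte
            extra = data[i]
            i += 1
        objects.append((pos, obj_type, width, extra))
    return objects

# two objects: a platform at offset 5, then a fill area 10 cells later
data = bytes([5, (1 << 5) | 12, 10, (2 << 5) | 8, 4])
print(decode_objects(data))  # → [(5, 1, 12, None), (15, 2, 8, 4)]
```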
The end result is very optimized, but the level design is tedious because we have to draw those objects, which is less direct than putting together a tile map, and we won’t have a visual cue of what the screen looks like. This may be more a mismatch with the tool I’m using than an issue with the technique, but I ended up drawing the tile map and then putting the objects on top of it using a different layer. Not the end of the world, just a bit of extra work.
In this case, the end result is so tight that there is no benefit from using compression.
Like in extreme bit packing, this method depends on the type of game. If I had to design screens as detailed as in Golden Tail, I would probably need more object types, and the end result wouldn’t be as good. In the case of Brick Rick, the resulting screens are richer than what I could get with 2 or 3-bit per tile.
Using meta-tiles
Some people call this super-tiles, but the idea is the same: use an intermediate table grouping your base tiles, and then use those groups –the meta-tiles– in your map data.
This is the system I’m using in my WIP title for the ZX Spectrum 48K: Outpost.
This game is not finished, so I don’t have a lot of data to show final numbers here, but I can provide some approximations that may be useful.
Outpost screens are 32x20 tiles, which is 640 bytes. Say we go with a small game like Golden Tail, that would add up to 25600 bytes. I plan to add more screens than that, and I’m targeting the 48K model with single-load, so I should do better than that!
If we use 2x2 meta-tiles, 64 meta-tiles would be 256 bytes –the speccy is 1-bpp, and the meta-tiles don’t need to store attributes; it is just an indirection to the actual tile data–, and this reduces our screen size to 160 bytes (width and height are half of what they are with regular tiles). Besides, meta-tile based maps have the property of generally compressing better than small tiles, so when we apply compression –in this project I’m using aPLib compression, with the awesome apultra–, we reduce this to an average of 62 bytes –with the caveat that I have designed only a few screens, but the number sounds about right to me–.
So going back to our hypothetical 40 screens, that’s now 2480 bytes –plus the 256 bytes overhead of the meta-tiles table, and probably a bit of extra code–.
The downside of this method is that the level design is tricky because we have to draw our tiles, group them in meta-tiles, and then use those meta-tiles to design our screens. In my case I threw Python at it until the problem was simple enough. In the end the game expands the meta-tiles to plain tiles, so things like collision detection work just as usual.
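The expansion step can be sketched in Python like this –using 2x2 meta-tiles as an illustration, not the actual Outpost code–:

```python
def expand_metatiles(mmap, mwidth, metatiles):
    """Expand a meta-tile map (row major) into a tile map twice as wide and tall.

    metatiles maps an index to its 4 tiles: (top-left, top-right,
    bottom-left, bottom-right).
    """
    mheight = len(mmap) // mwidth
    width = mwidth * 2
    tiles = [0] * (width * mheight * 2)
    for my in range(mheight):
        for mx in range(mwidth):
            a, b, c, d = metatiles[mmap[my * mwidth + mx]]
            x, y = mx * 2, my * 2
            tiles[y * width + x] = a
            tiles[y * width + x + 1] = b
            tiles[(y + 1) * width + x] = c
            tiles[(y + 1) * width + x + 1] = d
    return tiles

metatiles = {0: (0, 0, 0, 0), 1: (1, 2, 3, 4)}
# a 2x1 meta-tile map expands to a 4x2 tile map
assert expand_metatiles([1, 0], 2, metatiles) == [1, 2, 0, 0, 3, 4, 0, 0]
```

Once expanded, the rest of the engine doesn’t need to know meta-tiles ever existed.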
Conclusion
Obviously the game size doesn’t necessarily translate into more or less fun, but if the game works, more game is likely to be better. We will always want to cram in as many screens as possible, and considering that other parts of the game are harder to make smaller –graphics, the code itself–, keeping the map data under control is very important.
As you can see, there is no “one size fits all” approach, so knowing your options when you start to plan a new game is always useful, to avoid ending up with a good game that is too short!