Playdate Zig - Part 3 Game intro

I’ve been making slow and not-that-steady progress with my Playdate game in Zig. So I thought I’d write a status update on where I am with it, what I’m trying to achieve and some of the challenges I have faced or am likely to face. As much to get my own thoughts straight as anything else.

For anyone interested in the WIP - the repo is open sourced here: Playdate-Next Repo. When the game is done I might release it through Itch.io or something.

Premise

The game is heavily influenced by one of my favourite mobile games, Minigore: a fast-paced twin-stick shooter in which you fight off hordes of monsters/zombies/etc. I remember playing Minigore for the longest time, dying and retrying over and over. They really nailed the gameplay experience, balancing the weighty feel of the enemies (they took multiple hits and were pushed back) against keeping them lightweight enough that you could take on a horde and win. The chaos and carnage of having 100 or so enemies on a small mobile screen was great fun.

The thought of having swarms of enemies surrounding you and mowing them down with a little Playdate crank-powered gatling gun just really appealed to me. Because of the input differences, the game I’m making will be similar to the “auto-aim” mode in Minigore: the player focuses on shooting, moving to avoid enemies and moving to gather resources, while the game auto-picks the targets to aim at.

The aim of the game will be to beat your highscore for any given level but also to collect unique artefacts that will appear per level.

Minigore

Vision

Player experience

…“One more shot”, the player mutters after being overcome by a horde of enemies. Quickly they jump back into the game. A new level boots up; it’s one they haven’t played before. The enemies start to appear, a few at first, moving towards the player’s location. The player cranks and the enemies are blown away. The player realises they are in open space and vulnerable from all sides, so they start to move. More and more enemies are piling in. The gatling gun is keeping them at bay, pushing them back and allowing the player to keep on the move. “Uh-oh”, out of bullets. The player moves around in search of ammo. There are dozens of enemies now. The player is jinking in and out, narrowly avoiding being hit. The player reaches the ammo crate and unleashes a spray of bullets, wiping out a group of enemies. Their score is racking up fast, the combo streak is getting bigger. They are firing and dodging, constantly moving, trying to group the horde together on one side. They find themselves in open ground again and a group of enemies surround the player, getting some hits in before the player has a chance to take them out. Health is low, the player needs to go and find more, but ammo is running out too. The player stops firing, conserving bullets, and focuses on moving; using a few bullets here and there to clear a path or push an enemy back. Suddenly the player spots an artefact for this level - who knows when they will get a chance to grab it again. The player abandons their search for health and starts moving towards the artefact instead. The horde is 100 strong now, swarming in from all sides.

“Click click” no bullets left…

Current screenshot

Game is hitting the mark if…

  • Tries are short and sweet
  • Player is retrying lots - it’s addictive - one more try to beat the level / find an artefact / beat the highscore
  • Dying and retrying isn’t frustrating but super quick
  • During a play session (multiple tries) players should experience wins. Beating their highscore or collecting a new unique artefact

  • Players feel powerful mowing down enemies with the gatling gun
  • Firing feels visceral
  • Enemies feel like a horde, there are lots of them and they swarm the player

  • Controls feel intuitive and players find a firing scheme that works for them
  • Movement is responsive, allowing the player to get out of tight situations
  • Movement is integral to gameplay - where and when to move is part of the strategy

  • Players have to make strategic choices about whether to get more ammo or health or other items
  • Player should feel smart at times

  • Player should feel moments of joy
  • Players should experience “oh shit” moments - low health, empty gun, surrounded by swarms

Theme & style

  • TBD - I’m no artist so will likely purchase a model pack. Something like monsters / zombies / ghosts / etc. Erring on the side of cuter characters that are really readable in 1-bit on a small screen
  • Cartoon violence
  • “Until there are none” - double meaning. Player is trying to take all the horde out but inevitably will become one of them - no humans left
  • Environments - TBD (will depend on what can be sourced/bought)

Game breakdown

Levels

  • Level selection is auto randomised on play. Inspired by Forrest Byrnes. Reduces friction, provides more flavour, plays into the 1-more go to see what’s next
  • Levels have no dead-ends but have open areas, funnels/chokepoints and obstacles to navigate
  • Levels are bounded/enclosed - so there is a finite movement space
  • Different levels feel distinct

Movement

  • Player is controlled with the d-pad
  • Movement is snappy (very fast acceleration, can move and stop on a dime)
  • Player faces towards the targeted enemy. If there is no targeted enemy, the player faces the direction of travel
  • Basic movement animation (bob / simple walk cycle)
  • Camera follows the player (player is mostly centered - probably slightly offset to show more in direction of movement)

Firing

  • Gatling gun (primarily controlled by crank but also hold A/B if preferred)
  • Bullets are actual projectiles
  • Target enemy is auto-selected based on some criteria (closest for now)
  • Gun has a finite number of bullets and will stop firing when it runs out

Enemies

  • Enemies are knocked back on hit
  • Enemies die with a quick ceremony when their health reaches zero
  • Enemies move towards the player (while trying not to end up all on top of each other)
  • When enemies reach the player they start dealing damage per second
  • When the player is killed by an enemy - there is a quick ceremony and a new level is picked
  • Enemies are spawned in around the player to try and surround them - sometimes offscreen, sometimes on

Pick-ups

  • Pick-ups are spawned based on some criteria (need + randomness)
  • Health pick-ups give the player N health
  • Ammo pick-ups give the player N bullets
  • Artefacts contribute to the player’s “trophy” screen

Objectives

  • Beat the highscore for the level (highscore is displayed on screen and celebrated if beaten at the end)
  • Scoring is based on enemy kills with combo that is broken on hit
  • Gather N artefacts for each level. Appear based on some criteria (+ randomness). Artefacts are displayed in a “trophy” screen

Audio

  • Unobtrusive music track
  • Layer in low ammo and low health backing
  • Enemy hit, death
  • Player death
  • Gun fire, empty
  • Reload SFX
  • Health up SFX
  • Menu SFX

Misc

  • “Trophy” screen showing artefacts collected and highscore per level
  • Win/Lose version of “trophy” screen showing new artefacts collected in that run and whether the highscore was beaten
  • Restart menu item
  • Launcher cards
  • Button instructions
  • Cartoon violence warning

Unknowns / Future

  • Do levels end? If so is it survive for a period of time? Kill so many enemies?
  • Are enemies able to pass through obstacles - are they ghosts? Might be determined by performance
  • What will the auto-target criteria be to provide the best experience?
  • Will we have more than one type of enemy? Maybe just little and large
  • Different players?
  • Different guns?

Status

Current screenshot

  • Single level
  • Isometric (ish) character that you can move around using the d-pad
  • Enemies that spawn and swarm towards the player (up to 100 enemies at a time for now)
  • 360 degree rendered models (they turn around)
  • No obstacles (full open space - unbounded)
  • Use the crank (or hold A/B) to fire
  • Bullets push the enemies back to give the player some breathing space, hit them enough and the enemies will die (and disappear without any fanfare)
  • Let the enemies get on top of you and they will drain your health, if you die the level resets
  • Track score for killing enemies, with a combo chain that grows the more enemies you kill without being hit
  • Running out of bullets requires a short time to auto-reload where the player is vulnerable
  • Menu item to manually reset the level
  • Placeholder launcher card

Optimisations

  • Sparse grouping of enemies for bullet vs enemy collision detection (rough sketch below)
  • Need to profile whether broader phase render culling for offscreen items has a benefit
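
For anyone curious, the rough idea behind the sparse grouping (this is a simplified sketch rather than the repo’s actual code, and the sizes are made up) is to bucket enemy indices into coarse grid cells each frame, so a bullet only tests the handful of enemies in its own cell rather than all 100+:

const CELL_SIZE: f32 = 32.0;
const GRID_W = 16;
const GRID_H = 16;
const MAX_PER_CELL = 16;

const EnemyGrid = struct {
    // counts is re-zeroed and the grid rebuilt each frame
    counts: [GRID_W * GRID_H]u8 = [_]u8{0} ** (GRID_W * GRID_H),
    items: [GRID_W * GRID_H][MAX_PER_CELL]u16 = undefined,

    // Assumes positive world coordinates; out-of-range cells simply wrap for brevity.
    fn cellIndex(x: f32, y: f32) usize {
        const cx = @floatToInt(usize, x / CELL_SIZE) % GRID_W;
        const cy = @floatToInt(usize, y / CELL_SIZE) % GRID_H;
        return cy * GRID_W + cx;
    }

    fn insert(self: *EnemyGrid, enemy_index: u16, x: f32, y: f32) void {
        const cell = cellIndex(x, y);
        if (self.counts[cell] < MAX_PER_CELL) {
            self.items[cell][self.counts[cell]] = enemy_index;
            self.counts[cell] += 1;
        }
    }

    // Only the enemies sharing the bullet's cell need a precise hit test.
    fn candidates(self: *const EnemyGrid, x: f32, y: f32) []const u16 {
        const cell = cellIndex(x, y);
        return self.items[cell][0..self.counts[cell]];
    }
};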

Challenges

  • Pathfinding. Potentially having to pathfind / obstacle avoidance for 100+ enemies
  • Sorting. As an isometric(ish) game ensuring that the sorting looks appropriate while maintaining good performance
  • Mirror. I’ve heard that hooking up Mirror can really impact performance. Need to test and see
  • Gameplay feel/balance. Balancing the number of enemies, how they surround, how many hits to kill, magazine capacity, etc.
  • Gun VFX and bullet spawn locations for isometric characters
  • Shadows? Don’t know if I’ll have them

Potential future topics for blogs

  • Art pipeline from 3D models to 1-bit sprites
  • Architecture of the game
  • Approach for a 1-person title and how it differs from my day job
  • Sprites vs bitmaps
  • Culling or not
  • Audio using the Playdate C SDK
  • Collision detection
  • Performance optimisations in general
  • Importing level layouts

Playdate Zig - Part 2 Build to device from Windows

So my Playdate arrived and I thought I’d switch context from developing my little game (which I’ll cover in another post) to actually getting it deployed on device.

The following linked templates did all the heavy lifting for me and I can’t thank them enough. But as with the simulator, there were a few stumbling blocks that caught me out and I thought were worth sharing.

Worth noting that I am not an expert in any way at build systems; I copy-pasted and hacked away until I had something working. I don’t fully understand the Zig build system nor the artefacts required by the Playdate!

Commit of a functioning build.zig

Zig version

So this really caught me out. Originally I was working with Zig 0.9.0 but the references seemed to be using 0.10.0 (beta versions mostly). I upgraded, which involved some minor changes to the existing code. However, when I implemented build.zig for the device I was getting an unhelpful error on @cImport for pd_api.h. The error gave absolutely no information in PowerShell (which was baffling, as I could see online that other people were getting helpful errors). Eventually, out of pure desperation, I rolled back to Zig 0.9.0 and on building I got actual errors relating to how I had installed the ARM GNU Toolchain (I had used the zip rather than the exe and hadn’t run the bat file to complete the setup that puts the includes into the correct locations).

Once I fixed the ARM GNU Toolchain issue I managed to get past the cImport error but then encountered another error to do with linking and lld memory sections. I couldn’t figure this one out either - but again out of desperation I upgraded back to Zig 0.10.0 and those errors went away! Phew!

Linking Lib C

In my desperation to get the ELF compiling I had added a call to linkLibC - don’t do this for device builds (neither of the templates does). It causes linker errors. Removing it allowed me to progress (but keep it for the simulator build steps).
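
To illustrate (a stripped-down sketch, not the repo’s actual build.zig - the artifact names and the device target setup are placeholders/omitted), libc only gets linked on the simulator artifact:

const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    // Simulator DLL: linking libc here is fine (and needed for the Windows simulator).
    const simulator = b.addSharedLibrary("pdex", "src/main.zig", .unversioned);
    simulator.linkLibC();
    simulator.install();

    // Device library: note there is no linkLibC() call here - adding one is what
    // caused my linker errors when producing the ELF.
    const lib = b.addSharedLibrary("game", "src/main.zig", .unversioned);
    lib.install();
}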

Linking library into ELF

As I say, I’m definitely not an expert on these things, but on my local machine the library compilation step was generating a file “libname.so.o” while the template code seemed to be expecting “name.o” - so I made the following change to the ELF build step:

const game_elf = b.addExecutable("pdex.elf", null);
game_elf.addObjectFile(b.pathJoin(&.{ lib.output_dir.?, b.fmt("lib{s}.so.o", .{lib.name}) }));

I also added the following to the library build step. The template missed these out and I was getting errors, but I noticed they were instead being defined inside the @cImport in the template - so I added them into build.zig, as I did with the simulator build, like this:

lib.defineCMacro("TARGET_SIMULATOR", null);
lib.defineCMacro("TARGET_EXTENSION", null);

Running but not displaying

One of the weirder, and more frustrating, issues to solve came once I had it up and running on the device. I was presented with a blank screen, but weirdly, if I opened the menu, a frame of my game was rendered. I could close the menu, apply some input and open the menu again to see that the frame had updated. I couldn’t find anything online about this, but after some trial and error I noticed that if I called graphics.display() directly from my update loop then the game rendered correctly - yet no-one else was calling this function explicitly.

So it turns out that on the simulator you can return any value you like from your update loop, but on device you have to return TRUE (1) to inform the Playdate that you need it to update the display. The template was doing this, but I hadn’t bothered looking at the game code because I already had a working version on the simulator that returned 0. I just needed to change it to return 1.
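
For reference, the fix really was just the return value of the update callback (body elided):

fn update(_: ?*anyopaque) callconv(.C) c_int {
    // ... update and draw the game ...
    return 1; // 1 tells the Playdate to update the display; returning 0 only worked on the simulator
}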

Linker warning

I still get the following linker warning, but that’s something to worry about another day.

LLD Link... warning(link): unexpected LLD stderr:
ld.lld: warning: cannot find entry symbol _start; not setting start address

Next Steps

  • I’ll need to clean up build.zig. Currently it builds a “fat” .pdx that runs on simulator and on device, and I’ll want to be able to skip either depending on what I’m testing.
  • The main reason I wanted to deploy to device was to profile sprite vs bitmap drawing to inform the next steps of my game, so I’ll likely post about the results

Playdate Zig - Part 1 Windows Simulator

As I’m still waiting on the arrival of my actual Playdate, I figured I’d get up and running on the Windows simulator. I wanted to use Zig as I haven’t had a chance to do any game dev with it and it has great interoperability with C - so I figured it would be ideal for the Playdate.

There are a couple of guides online for getting setup but they glossed over some things that gave me a little trouble (or only worked on Mac) so thought I’d post about them.

Here is a link to the commit that has the most basic working code for building and running on the simulator - just shows a blank screen. Basic Zig Windows Simulator

The main steps (and gotchas) I encountered are as follows:

Create a DLL

Essentially you just create a DLL with the event handler exported:

const simulator = b.addSharedLibrary("pdex", "src/main.zig", .unversioned);

Include the C libraries

///Path to Playdate SDK C_API folder
simulator.addIncludeDir(c_sdk_path);
simulator.linkLibC();

Add the necessary defines

simulator.defineCMacro("TARGET_SIMULATOR", null);
simulator.defineCMacro("TARGET_EXTENSION", null);
simulator.defineCMacro("_WINDLL", null);

Barebones main.zig

You need to expose the eventHandler and also register the update callback. If you don’t register the update callback then the simulator will fail to load with no error! Also ensure you ‘export’ the eventHandler and use the C calling convention; this allows the simulator to call the function from the DLL.

I decided to use Zig’s cImport functionality, which translates the C header as it imports it, providing access to the Playdate C API in Zig.

const pd = @cImport({
    @cInclude("pd_api.h");
});

pub export fn eventHandler(playdate: [*c]pd.PlaydateAPI, event: pd.PDSystemEvent, _: c_ulong) callconv(.C) c_int {
    switch (event) {
        pd.kEventInit => playdate.*.system.*.setUpdateCallback.?(update, null),
        else => {},
    }
    return 0;
}

fn update(_: ?*anyopaque) callconv(.C) c_int {
    return 0;
}

Building a .pdx

This part turned out to be very straightforward in the end but was pretty confusing to figure out. I saw a lot of mixed posts talking about shared libraries and bin files, etc. The .pdx package requires a bin but I couldn’t figure out how to build one. It turns out it actually only uses the DLL but requires that a bin file be present. So you can literally just create an empty ‘pdex.bin’ file in the same folder as the ‘pdex.dll’ and pass this folder to pdc.

This is the PowerShell command I use to build the .pdx package, which I then drag and drop onto the simulator. Note that I set a different output path (/Source) from the default in the Zig build via simulator.setOutputDir(output_path); and you need pdc in your path:

zig build && echo $null >> .\zig-out\Source\pdex.bin && pdc -v -k .\zig-out\Source Test.pdx

Tom's Data Onion - A language a layer

I stumbled across a new programming puzzle on r/programming called “Tom’s Data Onion”. The puzzle (well actually set of puzzles) is built in layers such that solving one layer generates the instructions for the next layer. I’m not going to cover how I solved the puzzles (my solutions are on GitHub if anyone is interested) but instead talk a little bit about the languages I used to solve the puzzles.

I do quite a lot of coding challenges (CodinGame, Reddit Daily Programmer, Advent of Code, etc) mostly because I enjoy them, but partly because I see them as a good opportunity to practice languages I’ve not worked in for a while and also as a way to learn and test out new languages. I thought it would be fun to try to solve each layer of the Onion in a different language (BTW before this I’d never written an ascii85 decoder and now I have about 4 in various states of completeness).

I wanted to use a mixture of languages that had various styles, track records and different sized standard libraries. I ended up using the following:

  • Zig - Touted as a genuine replacement for C, I’ve been messing around with Zig recently and really enjoying its simplicity
  • Rust - Potential game dev language of the future. I’ve not used it in about a year and wanted to make sure I don’t forget how
  • Python - For these types of puzzles you often can’t beat Python for implementation time efficiency. I use Python quite a lot and puzzles can be a good way to probe new areas of the language
  • Go - Like with Rust, I haven’t used Go in a couple of years and wanted to refresh my memory
  • C++ - My bread and butter but I’ve been in C# land alot recently and wanted to brush off any cobwebs
  • Kotlin - Occasionally at work we have to interface with Android in Java. I wanted to explore Kotlin as an alternative to Java and hadn’t had a chance to look at it until now
  • Swift - Similar to Kotlin in that I wanted to evaluate Swift as an alternative to Obj-C and again hadn’t yet had a chance to mess around with it

Sort of in keeping with what I felt the spirit of the challenge was (i.e. often the puzzle pointed to RFC docs), I was determined not to use any 3rd-party libraries but to write the code myself. This also had the benefit of allowing anyone else to run my solutions without external dependencies - however, I did allow myself to use any modules from the standard libraries. I know languages like Rust, Go and Python really shine with their support for external modules, but I’d made my mind up.

An interesting side effect of using a language per layer, when each layer required many of the same techniques (aside from writing about 4 different ascii85 decoders), is that I got a chance to see the relative strengths and weaknesses of the languages for solving these types of problems (mostly file reading/writing and bit manipulation). I thought I’d note down some of my observations about how each language fared for this particular puzzle.

Zig

I’ve only started using Zig recently but the more I use it the more I enjoy it. It’s probably the least well known of the languages on the list, so I’ll spend a bit more time outlining why I like it. Part of its ethos is that there is only one way to do something, so you don’t get that analysis paralysis that you find with other languages (like Rust and C++). Despite it not having a large standard library, and therefore no off-the-shelf base85 decoder (which is fine - I like the lean approach), I found I was able to implement the solution to layer 0 pretty quickly. I think this is largely because you don’t have to think too in-depth about what approach to take.

Even though it has quite a minimal standard library, the file reading/writing support was good and as easy to use as any other language. While it is a “safe” language, and doesn’t allow implicit casting willy-nilly, I didn’t find doing low level bit manipulation too cumbersome (@truncate to the rescue). Another thing I discovered about Zig is how easy it is to write and run unit tests with its built-in test framework - this was a nice surprise! Also it has zig fmt, so no squabbling over style (which I appreciate more and more as I get older). Plus, like most modern languages, there’s no need for complex external build programs - it comes with its own build toolchain. Something I’d never seen before in a language is that Zig supports arbitrary-size integers; this can make bit packing super simple (for example in base64 encoding, when you pack to 24 bits you could use a u24). One thing I actually really like about Zig is that it strongly encourages you to use pre-defined stack memory by default. For programs like my puzzle solver I can make very accurate, if not perfect, estimates of how much memory I need because the data is fixed - however, I’m not sure how easy it is to use the allocators if the memory requirements weren’t as well defined (but TBH you usually have a good idea of the max memory requirements).
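
To illustrate the arbitrary-width integer point (this isn’t code from my solutions, just the base64 example spelled out):

// Pack three bytes into a u24, then pull out four 6-bit base64 indices.
fn encodeGroup(bytes: [3]u8) [4]u6 {
    const group: u24 = (@as(u24, bytes[0]) << 16) | (@as(u24, bytes[1]) << 8) | bytes[2];
    return .{
        @truncate(u6, group >> 18),
        @truncate(u6, group >> 12),
        @truncate(u6, group >> 6),
        @truncate(u6, group),
    };
}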

Zig’s documentation is pretty good but not complete, and it doesn’t have a large community so there’s not much of a Stack Overflow presence. Despite that I’ve been able to get up and running with it pretty easily, thanks to the existing documentation and the Zig GitHub, which has tests and examples to reference. I think my only real gripe with Zig is that it doesn’t have any standard ‘for’ loop syntax. Range-based looping is great (and with slices is probably sufficient in 99% of cases), but for this type of puzzle work you often need the index of the loop and the ability to increment in steps other than 1, and it feels cumbersome to use a while loop to do that.
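
For example, something like summing big-endian 16-bit words (a hypothetical snippet, not from my solutions) needs a manually managed index and step:

fn sumWords(data: []const u8) u32 {
    var total: u32 = 0;
    var i: usize = 0;
    // A slice-based for loop can't step by 2, so a while loop with a manual index it is.
    while (i + 1 < data.len) : (i += 2) {
        total += (@as(u16, data[i]) << 8) | data[i + 1];
    }
    return total;
}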

Rust

It’s probably impossible not to have heard of Rust. I’m surprised they don’t have cold callers who rock up at your house and tell you to “rewrite all your software in Rust”. But evangelism aside, it really is a powerful language with a great toolchain. As a newbie getting into Rust, the documentation is second to none and clippy is a great mechanism for learning Rust idioms on the job (except I will die on the hill that if x == false is more readable and less easily missed than if !x). Despite the evolution of the language, whenever I search for how to do something I seem to always find the most up-to-date way - this is great when I need to search how to write a file or convert a u32 to a u8 (I’m looking at you, Swift!).

As you would expect from a beast like Rust, there is a lot of great built-in module support. Particularly for puzzles like this there are useful convenience functions for endianness handling and for packing/unpacking bytes, etc. that can take a lot of the boilerplate away from the developer. Like with Zig, the built-in testing framework is really great (and useful in puzzles like this to test that my ascii85 decoder worked ok). Curiously, despite me seeing Rust as a C++ alternative, I tend to write my Rust a bit more ‘functionally’ and find myself reaching for iterators more often than not (so my Rust programs look nothing like my C++ ones). For puzzles like this, which are often very explicitly about manipulating blocks of homogeneous data, there is nothing quite as satisfying as a functional one-liner. I find that writing functionally in Rust is almost as easy as, say, Python or Scala (sometimes you do have to fight the borrow checker, but for a non-GC language the functional support is excellent). Plus having 128-bit integers is great for bit manipulation puzzles!

I think because Rust is multi-paradigm and has a lot of functionality, I find that I probably solve puzzles slower in Rust than other languages because I’m trying to find the ‘best’ or most idiomatic way to solve the problem. Despite being very similar to Zig when it comes to type safety, I found the explicit truncation casting in Rust much noisier than Zig (which is weird because val as u8 is less text than @truncate(u8, val) - perhaps the infix operator is what makes it seem noisy ¯\_(ツ)_/¯).

Python

Nothing really beats Python for programming challenges in terms of implementation speed (you can’t always say the same for run-time performance). It has a seemingly never-ending library of hex functions, base85 and base64 encoders, etc. Despite not being a ‘systems’ language, the bit manipulation and packing (using struct) is pretty good - especially if you need to be explicit about endianness.

I think my main concern with using Python for the Onion (and other similar challenges) is that you are never quite sure how big your int actually is or whether it has been implicitly converted to a float. Sometimes I want a 16-bit int and I don’t want it to expand, but TBH the variability of the number type never gave me any problems for layer 2. I think solutions of this size are about as large as I’d ever want to write with dynamically typed languages.

Go

Go shares that same great trait as Zig in that there is no debating about how to go about implementing something - you really only have loops, conditionals and functions. I was also pleasantly surprised by how good the std lib was, with encoding, decoding and file reading/writing functions (I was particularly surprised that it had base85 decoding).

Similarly to Zig, I was able to get up and running really quickly and again had no real hesitation about the best way to tackle the problem. The documentation was really helpful and, like with the Rust docs, having an API doc as well as tutorials and examples is great for finding out what functions are available.

Rust, Zig and Go have error handling approaches that really appeal to me (i.e. error returns). However, with Go the error handling can get quite cumbersome and generate a lot of boilerplate code. Quite often (in fact almost always) when I’m writing puzzle-solving code I don’t expect to gracefully handle any errors because I control the whole pipeline, so usually an error is my error and I just want to terminate. My Go script is littered with if err panic and I feel like Go could do with a force unwrap (like Rust, Zig or Swift).

C++

C++ is C++. For a number of years I was lost in the wilderness of “modern C++” but now I tend to use more vanilla (non standard library) C++ and dip into std when there is something that I need (like threads or file loading). You really can’t beat C++ for certain types of puzzles (knowing how the types work and being able to easily and implicitly cast between them can make the code much easier to read). I actually picked the order of the languages before I saw the puzzles, so it was quite fortuitous that the wheel landed on C++ for this one, because C++ has the great ability of being able to easily view data through different lenses (where other languages tie the memory too tightly to types). In this particular layer the same block of memory had to be parsed into a header struct (containing length, etc) and also iterated in 2-byte chunks in order to calculate a checksum. This is pretty trivial in C++ just by changing the pointer type and iterating - no need to wrap or copy.

Of course all this convenience comes at a cost, and when I ported my C++ ascii85 encoder to Kotlin it caught a bunch of overflow errors! A couple of things that annoyed me with C++ for this particular puzzle were: a) there isn’t a standard way to prevent struct padding, so you can’t safely reinterpret memory as a struct (which would have been great for reading the header) and b) there isn’t any built-in endianness handling, so I just had to assume little endian and byte-swap the network order data. I think being able to reinterpret memory safely as a struct and at the same time swap endianness if needed would make these puzzle solutions much simpler. The other thing about C++ when compared to all the other languages here is the lack of a default toolchain - sticking with clang/LLVM always seems the best approach for me.

Kotlin

Ideally I would have used Kotlin Native but the particular layer I used Kotlin for required some AES decryption and I had to leverage the weight of the JVM libraries.

There isn’t too much to say about Kotlin that hasn’t already been covered by the other languages. Its documentation was good and I really liked the iterator support, and despite my worries about how much of a PITA bit manipulation would be, because of how strict Java is, it wasn’t too bad (obviously there was a lot of explicit casting and I had to keep correcting to the non-standard bit operation syntax shr, shl, etc). I’d really like to give Native a try in future.

One of the little things that caught me out with Kotlin - why is 0..N an inclusive range when in every other language that is exclusive? 0 until N is more cumbersome, and the exclusive case is by far the more common one, right?

Swift

Initially the puzzle only had 5 layers so I had a toss-up between Kotlin and Swift to see which one I would use. I chose Kotlin because I was worried about having to introduce a dependency on the Xcode toolchain and I wanted to keep the solutions as standalone simple scripts. Thankfully it is actually easy to build and run Swift scripts independently from the command line.

Swift has a lot of the same plus points as the other modern languages I’ve listed - strong typing, nice error handling, etc. However, of all the languages I used (bearing in mind I used Zig, which has a small community, and Rust, which has had several breaking evolutions), I didn’t expect Swift to be the hardest language to pick up - but it was, by far! For some reason it seems to be really difficult to find up-to-date documentation for Swift. I kept finding no-longer-supported solutions for previous versions. The first-party tutorial site was pretty good for grasping the basics but that was it. I found it really hard to find how to read/write files or how to convert bytes to strings (ASCII or UTF-8). I think part of the problem is that Swift and Apple are so linked, and there is toll-free bridging with Obj-C libraries, so it can often be hard to figure out where Swift stops and Cocoa/Foundation/etc. start. FileManager is a carbon copy of the Obj-C one and has the same PITA hoops regarding URLs, document directories, etc. to jump through just to read or write a file (it feels like there should be a lower level File API implemented in Swift that FileManager wraps).

A couple of things that really killed me with Swift: 1) Why do slices keep the same indices as the underlying array? When is that ever useful? 2) When running from the command line without the power of Xcode, any crash logs were unsymbolicated (and didn’t point to a line number); this wasn’t the case for any of the other languages.

Conclusion

There isn’t really anything profound in this doc. I picked the order of the languages at random, so I think that shows you could really make any of them work for puzzles like this. It was just interesting to get a sense of the frustration points for each language on a mini-project. It was nice to try out Kotlin and Swift but I don’t think I’d use them out of anything but necessity - simply because I’d probably use Rust instead. For quick single-script puzzles in particular, having to symbolicate the crash logs is a bit of a pain with Swift. Kotlin still seems to be in the process of separating from the JVM so perhaps I’ll try again in a couple of years. It’s nice to see some of the same patterns emerging across the modern languages - built-in toolchains, auto-formatters/linters, built-in test frameworks, immutable by default, return-based error handling, etc. If I find time perhaps I’ll go back through the layers and thread anything that can be trivially threaded, to test out the relative threading capabilities, and I’m definitely going to keep messing around with Zig.

If you haven’t had a bash at Tom’s Data Onion I’d encourage you to do so - but perhaps just re-use the same ascii85 decoder.

Keep world abstractions out of the type system

Something that I see fairly frequently, particularly in OOP codebases, is programmers bringing real-world or design-world abstractions into the type system. Type systems in statically typed languages are useful but very inflexible, and introducing the wrong types can make the type system work against you. I find that often people work around this with interfaces or inheritance, but I believe these are sticking plasters that introduce additional complexity at the cost of comprehension (and flexibility), and that the problem is best solved at the root.

Example

Here is a paraphrased example that I came across recently:

The game had a central character that the player could upgrade by unlocking new items (upgrades) and applying them.

class Upgrade
{
	int GetHealthUpgradeAmount()
	int GetAttackUpgradeAmount()
	int GetDefenceUpgradeAmount()
}

class CharacterStats
{
	void ApplyUpgrade(Upgrade upgrade)

	Upgrade[] m_upgrades

	int m_baseHealth
	int m_baseAttack
	int m_baseDefence
}

There was also a feature that triggered events that had time limited impacts on the character stats - the designers referred to these as Mod Events.

class ModEvent : Upgrade
{
	float GetDurationSecs()
}

class CharacterStats
{
	void ApplyUpgrade(Upgrade upgrade)
	void ApplyModEvent(ModEvent modEvent)

	void RemoveExpiredModEvents()

	Upgrade[] m_upgrades
	ModEvent[] m_modEvents

	int m_baseHealth
	int m_baseAttack
	int m_baseDefence
}

Finally there was a ‘trait’ feature where the character stats were increased or decreased based on who they were pitted against.

class Trait : Upgrade
{
	//Had stuff specific to traits like names but they aren't important for this example
}

class CharacterStats
{
	void ApplyUpgrade(Upgrade upgrade)
	void ApplyModEvent(ModEvent modEvent)
	void ApplyTrait(Trait trait)

	void RemoveExpiredModEvents()

	Upgrade[] m_upgrades //Including traits
	ModEvent[] m_modEvents

	int m_baseHealth
	int m_baseAttack
	int m_baseDefence
}

The approach above perhaps doesn’t seem too bad, and in this case the actual impact on the codebase was fairly minimal (I’ll cover it in a second), but I think when we reach for inheritance (and I don’t want to get into an explanation here of why inheritance is a bit of a code smell) it is usually because we’ve made sub-optimal decisions earlier on.

Splitting along data usage boundaries

I think the root of the problem is conflating actual differences in usage with conceptual differences influenced by the world (in this case the game world). Where something comes from doesn’t necessarily fundamentally change what it is. If we look at Upgrades, ModEvents and Traits, the only actual difference between them (besides where they are applied) is that one of them is time-limited. You could argue that the time-limitation isn’t actually a property of the type but of the context. If you follow that logic, think about what the data is - where it is stored, read and written (i.e. how it is actually used) - and split along those lines (and in keeping with the original structure), you would probably end up with something like this:

struct StatMod
{
	int m_id
	int m_healthMod
	int m_attackMod
	int m_defenceMod
}

struct TimeRemaining
{
	int m_id
	float m_timeRemaining
}

class CharacterStats
{
	void ApplyStatMod(StatMod statMod)
	void ApplyTimeLimitedStatMod(StatMod statMod, float durationSecs)

	void RemoveExpiredStatMods()

	StatMod[] m_statMods
	TimeRemaining[] m_statModTimers

	int m_baseHealth
	int m_baseAttack
	int m_baseDefence
}

Having a single type now gives you the flexibility to group or categorise to best meet the usage and not by some arbitrary conceptual difference. For example, if you need to display the total effect of all mods to the player (which is exactly what the game needed to do: e.g. "Attack 7 (+7)") you can store all the StatMods in one array and sum them. If instead the design changed so the complete breakdown is displayed to the player (e.g. "Attack 7 (Upgrade +6) (Trait -2) (Power-Up +3)") you could go back to storing each conceptual type in a different container. Essentially the name of the variable can be used to represent the context.
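
To make the "store them all in one array and sum them" case concrete, here is a tiny sketch (written in Zig purely for illustration; the field names are made up):

const StatMod = struct {
    id: u32,
    health_mod: i32,
    attack_mod: i32,
    defence_mod: i32,
};

// One type, one array: the "+7" next to Attack is just a sum, regardless of
// whether each mod originally came from an upgrade, a trait or a timed event.
fn totalAttackMod(mods: []const StatMod) i32 {
    var total: i32 = 0;
    for (mods) |mod| {
        total += mod.attack_mod;
    }
    return total;
}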

By splitting the data along usage boundaries it is trivial to evolve the codebase as the design evolves. We can easily make certain upgrades time-limited, or we can support the ability for upgrades to have names (just by associating them with an id), without having to fundamentally change how mods are applied.

Think about what types are being introduced

If you think about the most successful types, they aren’t influenced by their context. An integer doesn’t have to change type based on whether it represents an age or a number of sandwiches (int m_age, int m_numSandwiches). What’s the difference between a stool and a table? Functionally nothing - they are both raised surfaces, and making a distinction between them makes it difficult to treat them the same when it is best to do so (what if I want to sit on the table?). Introducing new types into the program should be thought out carefully and always along the lines of how the data is used, not based on real-world naming which, if English is anything to go by, is messy, inconsistent and often not helpful.