Last week, I previewed the work I was planning, and this past week I planned and worked on my first sprint since Toy Factory Fixer was published in December.
Sprint 57: Ports
Planned and Incomplete:
- Create Linux port
I usually try to plan my weekly sprints on Sunday, but I was not able to dedicate the time to it, so the plan came together a day late. Then, between various family commitments earlier in the week and my back giving me problems later in the week, I didn’t get as much time to work on the sprint as I would have liked.
Now, “Create Linux port” is currently split into six tasks, mostly related to creating scripts. As I mentioned last week, I wanted to replace the manual and cumbersome virtual machines I have used in the past with Docker containers that I expect to be able to automate more easily.
First, I wanted to know how easy it would be to set up a 32-bit Docker image and container.
Two weeks ago, I was trying to figure out how much demand there might be for 32-bit Linux binaries. Most of the player metrics I have access to, such as Steam’s reports, show that 64-bit systems reign supreme, which makes sense. And I remember that various distros were trying to get rid of 32-bit architecture support.
But I also know that they got a lot of pushback, partly because people want to be able to play older games. I also know that people like to breathe new life into older computer hardware by installing Linux-based systems.
So here’s my current thinking: I’m going to create a 32-bit binary option for Linux-based systems.
Why? Well, I’ve done it before. Years ago, I released a game and made sure it worked on both architectures, and it basically amounted to having two VMs, running the same build scripts on each, combining the resulting binaries and libraries, and then providing a script that detects the current architecture and runs the appropriate binary. So supporting both architectures doesn’t require much more work than supporting one, assuming I can replace the VMs with Docker containers.
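As a rough sketch, such a launcher script might look something like the following (the binary names here are placeholders, not my actual files):

```sh
#!/bin/sh
# Rough sketch of an architecture-detecting launcher; the binary names
# are placeholders, not the actual files I ship.
BASEDIR="$(dirname "$0")"

case "$(uname -m)" in
    x86_64)
        exec "$BASEDIR/mygame.x86_64" "$@"
        ;;
    i?86)
        exec "$BASEDIR/mygame.x86" "$@"
        ;;
    *)
        echo "Sorry, unsupported architecture: $(uname -m)" >&2
        exit 1
        ;;
esac
```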
And I was pleased to find that Debian images exist for both architectures. I have yet to look into whether or not I will run into problems trying to get my 64-bit host to have a 32-bit container running on it, but that will come in due time.
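If it works the way I hope, a quick smoke test might be as simple as the following, assuming Docker Hub’s i386/debian image and a Docker version recent enough to understand the --platform flag:

```sh
# Quick check that a 32-bit userland runs on a 64-bit host.
# dpkg reports the userland architecture, which is what matters here.
docker run --rm --platform linux/386 i386/debian dpkg --print-architecture
# If this prints "i386", the 32-bit container is working.
```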
I want to use an older EOL Debian version as my base Docker image because years ago, when I was trying to ensure a game of mine was compatible with as many systems as possible, I found that Ubuntu was automatically adding stack protection even though I was setting a flag to say I didn’t want it (read Linux Game Development: GLIBC_2.4 Errors Solved to learn more). Stack protection required a newer version of GLIBC, which meant that people running older distros couldn’t play the game. The Debian-based VMs I was using didn’t have that problem. I could argue that people should update their systems, but I could just as easily make my game available to them by not imposing arbitrary requirements.
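If I remember right, the flag in question is GCC’s -fno-stack-protector, and you can check whether a binary picked up the GLIBC 2.4 dependency anyway by looking for the __stack_chk_fail symbol:

```sh
# Build with stack protection explicitly disabled (GCC flag).
gcc -fno-stack-protector -o mygame main.c

# Check which GLIBC versions the binary actually requires; a binary
# built with stack protection references __stack_chk_fail@GLIBC_2.4.
objdump -T mygame | grep GLIBC_2.4
```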
So far, I have a docker-compose file and a Dockerfile that pull down Debian Etch, which is the earliest version that introduced 64-bit support, install a bunch of development tools and needed libraries, and…that’s it. I didn’t get too far. In fact, I spent most of my time getting a refresher on Docker configuration as it has been some time since I last messed with it.
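As a sketch, the Dockerfile amounts to something like this, where debian/eol is Docker Hub’s archive of end-of-life Debian releases and the package list is illustrative rather than my real dependency list:

```Dockerfile
# A minimal sketch, not my exact Dockerfile; the package list is
# illustrative rather than my real dependency list.
FROM debian/eol:etch

# Etch is long past end-of-life, so its release signatures have expired;
# --force-yes tells this era of apt-get to install anyway.
RUN apt-get update && \
    apt-get install -y --force-yes \
        build-essential \
        libgl1-mesa-dev \
        libasound2-dev

WORKDIR /workspace
```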
I realized that I don’t exactly have a workflow I’m aiming towards. I know I want to be able to spin up a container, build custom libraries that reduce the number of dependencies I need to provide, then build my game project based on those custom libraries, and produce a tarball that I can distribute to players.
But there are tasks I didn’t identify before the sprint started, such as figuring out how those steps actually happen. I have existing scripts that create the custom libraries, create the game binary, and combine the 32-bit and 64-bit results into one package, so I expect that any work I have to do on those will be minimal.
How do I get my scripts into the containers? How do I get my source code into the containers? How do I access the custom libraries, either when they are built and need to be stored, or when the container needs them to do the building?
With the VMs, I think I remember copying files into a shared location, extracting them inside the VM, and then running the scripts manually. It was a bit cumbersome and annoying, and I even wrote down instructions so I could reproduce the process consistently, with the expectation that I would eventually turn them into an automated script that would actually do the work consistently for me.
With Docker containers, I could have the source pulled in by using git to grab the current version from source control, but that feels redundant when I already have the current version of the project in the place I am probably launching the container from anyway. I envision a Jenkins job doing this work, and such a job would likely have already pulled the source from source control and wouldn’t need to do it a second time.
I could use a bind mount instead. Boom. As soon as the container is up, it has access to my project. I could similarly give it access to my toolchains and the source libraries.
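Something like this hypothetical invocation is what I have in mind, with the image name and paths standing in for my actual layout:

```sh
# Hypothetical invocation; the image name and paths stand in for my
# actual layout. Each -v bind-mounts a host directory into the container.
docker run --rm \
    -v "$PWD":/workspace \
    -v "$HOME/toolchains":/toolchains:ro \
    -v "$HOME/library-sources":/library-sources:ro \
    my-linux-build-image \
    /workspace/scripts/build-game.sh
```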
And if I can get the containers access to all of those directories and files, then it should be a simple matter of using my existing scripts to build everything I need.
And then I could always write an actual master script that does it all for me with a single command rather than follow instructions manually.
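In the end, the master script might be little more than a sketch like this one, with the compose service names and script paths standing in for my existing scripts:

```sh
#!/bin/sh
# Sketch of a master build script; the compose service names and script
# paths are placeholders for my existing build scripts.
set -e

# Build the custom libraries and then the game in each architecture's container.
for arch in 64 32; do
    docker-compose run --rm "build-$arch" /workspace/scripts/build-libs.sh
    docker-compose run --rm "build-$arch" /workspace/scripts/build-game.sh
done

# Combine the 32-bit and 64-bit results into one distributable tarball.
./scripts/package-tarball.sh
```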
So my original sprint plan wasn’t getting me where I needed to be, and I was struggling to figure out what exactly I needed the Docker configuration to look like. I have a much clearer idea now, and while I expect that my capacity to work on it will always be limited, I should be able to make good progress in the coming week.
Thanks for reading!
—
Want to learn when I release updates to Toytles: Leaf Raking, Toy Factory Fixer, or about future Freshly Squeezed games I am creating? Sign up for the GBGames Curiosities newsletter, and get the 19-page, full color PDF of the Toy Factory Fixer Player’s Guide for free!